hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8fc56b79ae9b52445ccd630cff6f04a731c8c363 | 2,706 | py | Python | src/mathenjeu/cli/openssl.py | sdpython/mathenjeu | 97fc9140ef89ac9c3c6ba46803121fd5d23eb8d1 | [
"MIT"
] | 1 | 2019-10-12T00:48:35.000Z | 2019-10-12T00:48:35.000Z | src/mathenjeu/cli/openssl.py | sdpython/mathenjeu | 97fc9140ef89ac9c3c6ba46803121fd5d23eb8d1 | [
"MIT"
] | 8 | 2019-01-13T11:52:55.000Z | 2020-11-19T01:27:28.000Z | src/mathenjeu/cli/openssl.py | sdpython/mathenjeu | 97fc9140ef89ac9c3c6ba46803121fd5d23eb8d1 | [
"MIT"
] | null | null | null | """
@file
@brief Creates a self-signed certificate to serve an app locally over HTTPS.
"""
from OpenSSL import crypto
def create_self_signed_cert(keyfile="key.pem", certfile="cert.pem",
country='FR', state='Paris', location='Paris',
organization='mathenjeu', cn='mathenjeu',
organizational_unit_name=None,
email=None, size=4096, days=365, algo="sha256",
fLOG=print):
"""
    Creates a self-signed certificate.
:param keyfile: key file
:param certfile: certificate file
:param country: country
:param state: state
:param location: location
:param cn: common name
:param organization: organization
:param organizational_unit_name: organizational unit name (can be empty)
:param email: email (can be empty)
:param size: key size
:param days: days it is valid
:param algo: algorithm
:param fLOG: logging function
See also `How to generate a certificate using pyOpenSSL to make it secure connection?
<https://stackoverflow.com/questions/44055029/how-to-generate-a-certificate-using-pyopenssl-to-make-it-secure-connection>`_,
`How to serve HTTP/2 using Python
<https://medium.com/python-pandemonium/how-to-serve-http-2-using-python-5e5bbd1e7ff1>`_.
.. cmdref::
:title: Creates a signed certificate
:cmd: -m mathenjeu create_self_signed_cert --help
The command line creates a certificate used later by
a service such as :epkg:`hypercorn` or :epkg:`waitress`.
Example::
python -m mathenjeu create_self_signed_cert --keyfile=key.pem --certfile=cert.pem
"""
k = crypto.PKey()
k.generate_key(crypto.TYPE_RSA, size)
cert = crypto.X509()
cert.get_subject().C = country
cert.get_subject().ST = state
cert.get_subject().L = location
cert.get_subject().O = organization
if organizational_unit_name:
cert.get_subject().OU = organizational_unit_name
cert.get_subject().CN = cn
if email:
cert.get_subject().emailAddress = email
cert.set_serial_number(1000)
cert.gmtime_adj_notBefore(0)
    cert.gmtime_adj_notAfter(days * 24 * 60 * 60)  # valid for exactly 'days' days
    cert.set_issuer(cert.get_subject())
    cert.set_pubkey(k)
    cert.sign(k, algo)  # honour the requested digest algorithm
with open(certfile, 'wb') as f:
if fLOG:
fLOG("[create_self_signed_cert] create '{0}'".format(certfile))
f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
with open(keyfile, 'wb') as f:
if fLOG:
fLOG("[create_self_signed_cert] create '{0}'".format(keyfile))
f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, k))
| 35.142857 | 128 | 0.648189 | 347 | 2,706 | 4.919308 | 0.386167 | 0.032806 | 0.065612 | 0.058582 | 0.282367 | 0.282367 | 0.216755 | 0.186292 | 0.186292 | 0.186292 | 0 | 0.021898 | 0.240577 | 2,706 | 76 | 129 | 35.605263 | 0.808759 | 0.422395 | 0 | 0.060606 | 0 | 0 | 0.094875 | 0.034626 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.030303 | 0 | 0.060606 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fc8cb9153d75f29f52cbd961e16ca92d1a97e91 | 1,099 | py | Python | .github/rename_to_lowercase.py | openmicroanalysis/calczaf | f95de343dae908c1b3a531b9ed24fa1f5daf3e6d | [
"MIT"
] | 1 | 2021-06-29T17:56:14.000Z | 2021-06-29T17:56:14.000Z | .github/rename_to_lowercase.py | openmicroanalysis/calczaf | f95de343dae908c1b3a531b9ed24fa1f5daf3e6d | [
"MIT"
] | 3 | 2015-12-02T22:29:35.000Z | 2019-02-21T03:30:56.000Z | .github/rename_to_lowercase.py | openmicroanalysis/calczaf | f95de343dae908c1b3a531b9ed24fa1f5daf3e6d | [
"MIT"
] | null | null | null | import os
import shutil
import argparse
def rename(source_path, recursive):
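    """Rename source_path to lowercase; with recursive=True, everything beneath it is renamed first."""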
    if recursive and os.path.isdir(source_path):
        # walk bottom-up so children are renamed before their parent directory,
        # keeping the paths yielded by os.walk valid throughout
        for dir_path, dir_names, filenames in os.walk(source_path, topdown=False):
            for name in filenames + dir_names:
                rename(os.path.join(dir_path, name), False)
dirname, basename = os.path.split(source_path)
destination_path = os.path.join(dirname, basename.lower())
if not os.path.exists(destination_path):
shutil.move(source_path, destination_path)
print('Moved {0} to {1}'.format(source_path, destination_path))
def main():
description = 'Rename files and directories to lowercase'
parser = argparse.ArgumentParser(description=description)
parser.add_argument('paths', nargs='+', help='Path to files or directories')
parser.add_argument('-r', '--recursive', action='store_true',
help='Recursively search directory')
args = parser.parse_args()
recursive = args.recursive
for path in args.paths:
rename(path, recursive)
if __name__ == '__main__':
main()
| 31.4 | 80 | 0.678799 | 138 | 1,099 | 5.217391 | 0.405797 | 0.083333 | 0.0875 | 0.104167 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002302 | 0.209281 | 1,099 | 34 | 81 | 32.323529 | 0.826237 | 0 | 0 | 0 | 0 | 0 | 0.136488 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.2 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fcb61eebe8ee15539f00118a131def866c5523d | 2,099 | py | Python | drive.py | eshimelis/RCC | 536ec0cc8373fb7d5cd5ca59166d7e55bf6a8643 | [
"MIT"
] | null | null | null | drive.py | eshimelis/RCC | 536ec0cc8373fb7d5cd5ca59166d7e55bf6a8643 | [
"MIT"
] | null | null | null | drive.py | eshimelis/RCC | 536ec0cc8373fb7d5cd5ca59166d7e55bf6a8643 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
import numpy as np
import rospy
from rospy.numpy_msg import numpy_msg
from sensor_msgs.msg import LaserScan
from ackermann_msgs.msg import AckermannDriveStamped
def follow_wall(scan):
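    """Map a LaserScan message to a (speed, steering_angle) drive command."""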
####### STUDENT CODE START #######
# use the scan data to appropriately modify 'speed' and 'steering_angle'
speed = 1
steering_angle = 1
####### STUDENT CODE END #######
return (speed, steering_angle)
########################### Ignore Code Below ###########################
class WallFollower:
# import ROS parameters from the "params.yaml" file.
# access these variables in class functions with self:
# i.e. self.CONSTANT
SCAN_TOPIC = rospy.get_param("wall_follower/scan_topic")
DRIVE_TOPIC = rospy.get_param("wall_follower/drive_topic")
SIDE = rospy.get_param("wall_follower/side")
VELOCITY = rospy.get_param("wall_follower/velocity")
DESIRED_DISTANCE = rospy.get_param("wall_follower/desired_distance")
def __init__(self):
# setup laser scan subscriber
self.sub_scan = rospy.Subscriber(self.SCAN_TOPIC,
LaserScan,
callback=self.scan_callback)
# setup drive publisher
self.pub_drive = rospy.Publisher(self.DRIVE_TOPIC,
AckermannDriveStamped,
queue_size=1)
def scan_callback(self, scan_msg):
"""Lidar callback function"""
# get list of range measurements
scan_data = scan_msg.ranges
# call student's code for speed and angle, given scan
drive_command = follow_wall(scan_data)
print(drive_command)
# create, populate and publish drive command
drive_msg = AckermannDriveStamped()
drive_msg.drive.speed = drive_command[0]
drive_msg.drive.steering_angle = drive_command[1]
self.pub_drive.publish(drive_msg)
if __name__ == "__main__":
rospy.init_node('wall_follower')
wall_follower = WallFollower()
rospy.spin() | 30.42029 | 76 | 0.626965 | 241 | 2,099 | 5.207469 | 0.39834 | 0.066932 | 0.051793 | 0.067729 | 0.10757 | 0.047809 | 0 | 0 | 0 | 0 | 0 | 0.003881 | 0.263459 | 2,099 | 69 | 77 | 30.42029 | 0.807891 | 0.226298 | 0 | 0 | 0 | 0 | 0.091623 | 0.066099 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.147059 | 0 | 0.441176 | 0.029412 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fd1772e98d57dfc163eac7dcaf96158ad02de68 | 8,117 | py | Python | rgw/v2/tests/multisite/test_dynamic_bucket_resharding.py | viduship/ceph-qe-scripts | 886619fa6600c24cbf989d65868951b9c3decd72 | [
"MIT"
] | null | null | null | rgw/v2/tests/multisite/test_dynamic_bucket_resharding.py | viduship/ceph-qe-scripts | 886619fa6600c24cbf989d65868951b9c3decd72 | [
"MIT"
] | null | null | null | rgw/v2/tests/multisite/test_dynamic_bucket_resharding.py | viduship/ceph-qe-scripts | 886619fa6600c24cbf989d65868951b9c3decd72 | [
"MIT"
] | null | null | null | import os
import sys
sys.path.append(os.path.abspath(os.path.join(__file__, "../../../..")))
import argparse
import json
import time
import traceback
import v2.lib.resource_op as s3lib
import v2.utils.log as log
import v2.utils.utils as utils
import yaml
from v2.lib.exceptions import TestExecError
from v2.lib.read_io_info import ReadIOInfo
from v2.lib.resource_op import Config
from v2.lib.rgw_config_opts import CephConfOp, ConfigOpts
from v2.lib.s3.auth import Auth
from v2.lib.s3.write_io_info import BasicIOInfoStructure, IOInfoInitialize
from v2.tests.multisite import resuables
from v2.utils.test_desc import AddTestInfo
from v2.utils.utils import HttpResponseParser, RGWService
TEST_DATA_PATH = None
def create_bucket_with_versioning(rgw_conn, user_info, bucket_name):
# create buckets
bucket = resuables.create_bucket(bucket_name, rgw_conn, user_info)
bucket_versioning = s3lib.resource_op(
{"obj": rgw_conn, "resource": "BucketVersioning", "args": [bucket.name]}
)
# checking the versioning status
version_status = s3lib.resource_op(
{"obj": bucket_versioning, "resource": "status", "args": None}
)
if version_status is None:
log.info("bucket versioning still not enabled")
# enabling bucket versioning
version_enable_status = s3lib.resource_op(
{"obj": bucket_versioning, "resource": "enable", "args": None}
)
response = HttpResponseParser(version_enable_status)
if response.status_code == 200:
log.info("version enabled")
else:
raise TestExecError("version enable failed")
return bucket
def upload_objects(user_info, bucket, config):
log.info("s3 objects to create: %s" % config.objects_count)
for oc in range(config.objects_count):
s3_object_name = utils.gen_s3_object_name(bucket.name, oc)
resuables.upload_object(
s3_object_name, bucket, TEST_DATA_PATH, config, user_info
)
def test_exec(config):
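    """Runs the RGW dynamic/offline bucket resharding test end to end."""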
test_info = AddTestInfo("RGW Dynamic Resharding test")
io_info_initialize = IOInfoInitialize()
basic_io_structure = BasicIOInfoStructure()
io_info_initialize.initialize(basic_io_structure.initial())
ceph_conf = CephConfOp()
rgw_service = RGWService()
try:
test_info.started_info()
log.info("starting IO")
config.max_objects_per_shard = 10
config.no_of_shards = 10
config.user_count = 1
user_info = s3lib.create_users(config.user_count)
user_info = user_info[0]
auth = Auth(user_info)
rgw_conn = auth.do_auth()
config.bucket_count = 1
log.info("no of buckets to create: %s" % config.bucket_count)
bucket_name = utils.gen_bucket_name_from_userid(user_info["user_id"], rand_no=1)
bucket = create_bucket_with_versioning(rgw_conn, user_info, bucket_name)
upload_objects(user_info, bucket, config)
log.info("sharding configuration will be added now.")
if config.sharding_type == "online":
log.info("sharding type is online")
# for online,
# the number of shards should be greater than [ (no of objects)/(max objects per shard) ]
# example: objects = 500 ; max object per shard = 10
# then no of shards should be at least 50 or more
time.sleep(15)
log.info("making changes to ceph.conf")
ceph_conf.set_to_ceph_conf(
"global",
ConfigOpts.rgw_max_objs_per_shard,
config.max_objects_per_shard,
)
ceph_conf.set_to_ceph_conf(
"global", ConfigOpts.rgw_dynamic_resharding, True
)
num_shards_expected = config.objects_count / config.max_objects_per_shard
log.info("num_shards_expected: %s" % num_shards_expected)
log.info("trying to restart services ")
srv_restarted = rgw_service.restart()
time.sleep(30)
if srv_restarted is False:
raise TestExecError("RGW service restart failed")
else:
log.info("RGW service restarted")
if config.sharding_type == "offline":
log.info("sharding type is offline")
# for offline.
# the number of shards will be the value set in the command.
time.sleep(15)
log.info("in offline sharding")
cmd_exec = utils.exec_shell_cmd(
"radosgw-admin bucket reshard --bucket=%s --num-shards=%s"
% (bucket.name, config.no_of_shards)
)
if cmd_exec is False:
raise TestExecError("offline resharding command execution failed")
# upload_objects(user_info, bucket, config)
log.info("s3 objects to create: %s" % config.objects_count)
for oc in range(config.objects_count):
s3_object_name = utils.gen_s3_object_name(
bucket.name, config.objects_count + oc
)
resuables.upload_object(
s3_object_name, bucket, TEST_DATA_PATH, config, user_info
)
time.sleep(300)
log.info("verification starts")
op = utils.exec_shell_cmd("radosgw-admin metadata get bucket:%s" % bucket.name)
json_doc = json.loads(op)
bucket_id = json_doc["data"]["bucket"]["bucket_id"]
op2 = utils.exec_shell_cmd(
"radosgw-admin metadata get bucket.instance:%s:%s"
% (bucket.name, bucket_id)
)
        json_doc2 = json.loads(op2)
num_shards_created = json_doc2["data"]["bucket_info"]["num_shards"]
log.info("no_of_shards_created: %s" % num_shards_created)
log.info("no_of_shards_expected: %s" % num_shards_expected)
if config.sharding_type == "offline":
if num_shards_expected != num_shards_created:
raise TestExecError("expected number of shards not created")
log.info("Expected number of shards created")
if config.sharding_type == "online":
log.info(
"for online, "
"number of shards created should be greater than or equal to number of expected shards"
)
if int(num_shards_created) >= int(num_shards_expected):
log.info("Expected number of shards created")
else:
raise TestExecError("Expected number of shards not created")
read_io = ReadIOInfo()
read_io.yaml_fname = "io_info.yaml"
read_io.verify_io()
test_info.success_status("test passed")
sys.exit(0)
except Exception as e:
log.info(e)
log.info(traceback.format_exc())
test_info.failed_status("test failed")
sys.exit(1)
except TestExecError as e:
log.info(e)
log.info(traceback.format_exc())
test_info.failed_status("test failed")
sys.exit(1)
if __name__ == "__main__":
project_dir = os.path.abspath(os.path.join(__file__, "../../.."))
test_data_dir = "test_data"
TEST_DATA_PATH = os.path.join(project_dir, test_data_dir)
log.info("TEST_DATA_PATH: %s" % TEST_DATA_PATH)
if not os.path.exists(TEST_DATA_PATH):
log.info("test data dir not exists, creating.. ")
os.makedirs(TEST_DATA_PATH)
parser = argparse.ArgumentParser(description="RGW S3 Automation")
parser.add_argument("-c", dest="config", help="RGW Test yaml configuration")
args = parser.parse_args()
yaml_file = args.config
config = Config()
with open(yaml_file, "r") as f:
        doc = yaml.safe_load(f)  # safe_load avoids executing arbitrary YAML tags
config.objects_count = doc["config"]["objects_count"]
config.objects_size_range = {
"min": doc["config"]["objects_size_range"]["min"],
"max": doc["config"]["objects_size_range"]["max"],
}
config.sharding_type = doc["config"]["sharding_type"]
log.info(
"objects_count: %s\n"
"objects_size_range: %s\n"
"sharding_type: %s\n"
% (config.objects_count, config.objects_size_range, config.sharding_type)
)
test_exec(config)
| 40.585 | 104 | 0.643957 | 1,035 | 8,117 | 4.803865 | 0.200966 | 0.038013 | 0.032582 | 0.014481 | 0.357804 | 0.292237 | 0.273532 | 0.218825 | 0.17136 | 0.136766 | 0 | 0.01025 | 0.254774 | 8,117 | 199 | 105 | 40.788945 | 0.811704 | 0.047801 | 0 | 0.174157 | 0 | 0 | 0.183338 | 0.008292 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016854 | false | 0.005618 | 0.106742 | 0 | 0.129213 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fd22f3b02ea78de31bf1097cb5218ccd7460292 | 5,410 | py | Python | jsondictionary.py | gbarreiro/diccionariojson | 5918ed4b39b418592f16d2baa724c4c1f631c43b | [
"MIT"
] | null | null | null | jsondictionary.py | gbarreiro/diccionariojson | 5918ed4b39b418592f16d2baa724c4c1f631c43b | [
"MIT"
] | null | null | null | jsondictionary.py | gbarreiro/diccionariojson | 5918ed4b39b418592f16d2baa724c4c1f631c43b | [
"MIT"
] | 1 | 2020-12-29T14:11:07.000Z | 2020-12-29T14:11:07.000Z | import os
import sys
import json
from json.decoder import JSONDecodeError
# Program which manages a simple database (a Python dictionary stored in a file)
class Database():
""" Clase que modela la BD sobre la que trabaja el programa. Incluye los métodos necesarios para cargarla y actualizarla en el disco"""
def __init__(self,nombre_fichero):
"""Inicializa el objeto BD y carga en memoria los datos"""
self.nombre_fichero = nombre_fichero
self.diccionario = {}
self.__cargar_archivo()
def __comprobar_archivo(self):
"""Comprueba si existe el archivo. En caso afirmativo, devuelve True. En caso negativo, lo crea y devuelve False."""
if(not(os.path.isfile(nombre_archivo))):
with open(nombre_archivo,'w') as archivo:
archivo.close
return False
else:
return True # sí que existe el archivo
def __cargar_archivo(self):
"""Carga la BD en la memoria RAM"""
if(self.__comprobar_archivo()):
try:
archivo = open(nombre_archivo,'r')
self.diccionario.update(json.load(archivo))
archivo.close
except JSONDecodeError:
print('Error reading the JSON file: wrong format')
sys.exit(1) # termina la ejecución del programa con error
except Exception:
print('Error reading the database')
sys.exit(1) # termina la ejecución del programa con error
"""Métodos públicos de la clase: CRUD"""
def actualizar_archivo(self):
"""Actualiza el archivo de texto en el que se almacena la BD"""
archivo = open(nombre_archivo,'w')
archivo.write(json.dumps(self.diccionario))
archivo.close
    def crear_entrada(self,clave, valor):
        """Adds an entry to the dictionary with the given key and value."""
        if(clave in self.diccionario):
            print("There is already an entry with the key " + clave)
        else:
            # No entry with that key yet
            self.diccionario[clave] = valor
            print("Entry successfully created")
            self.actualizar_archivo()
    def ver_entradas(self):
        """Prints the dictionary entries on screen."""
        print("Number of entries: " + str(len(self.diccionario)) + '\n')
        for clave,valor in self.diccionario.items():
            print(('\t %s --> %s') %(clave,valor))
    def eliminar_entrada(self,clave):
        """Removes the entry with the given key from the dictionary."""
        if(clave in self.diccionario):
            del self.diccionario[clave]
            print(('Entry with key "%s" successfully deleted' %(clave,)))
            self.actualizar_archivo()
        else:
            # No entry exists with that key
            print("No entry in the database with key " + clave)
    def modificar_entrada(self,clave):
        """Modifies an entry already present in the dictionary."""
        if(clave in self.diccionario):
            nuevo_valor = input("Insert a new value for " + clave + ": ")
            self.diccionario[clave] = nuevo_valor
            print('Entry updated')
            self.actualizar_archivo()
        else:
            # No entry exists with that key
            print("No entry in the database with key " + clave)
# Loads the database from the text file
nombre_archivo = 'file.json' # default name
if(len(sys.argv)==2):
    nombre_archivo = sys.argv[1] # file name given in the program's arguments
base_datos = Database(nombre_archivo)
def mostrar_menu():
"""Muestra un menú para que el usuario pueda interactuar con la aplicación"""
print("Database: " + nombre_archivo)
while(True):
print("\nChoose an option: ")
print("1) Read entries in the DB")
print("2) Create a new entry in the DB")
print("3) Delete entry from the DB")
print("4) Modify entry from the DB")
print("0) Exit")
seleccion = input("Option: ")
if(not(seleccion.isdigit())):
print("Wrong option. Try again.")
else:
# Comprueba la opción elegida
if(int(seleccion)==1):
# Ver entradas en la BD
print('\n')
base_datos.ver_entradas()
elif(int(seleccion)==2):
# Crear nueva entrada en la BD
clave = input('\nType the key: ')
valor = input('Type the value: ')
base_datos.crear_entrada(clave, valor)
elif(int(seleccion)==3):
# Eliminar entrada de la BD
print('\n')
clave = input("Key of the entry you want to delete: ")
base_datos.eliminar_entrada(clave)
elif(int(seleccion)==4):
# Modificar entrada de la BD
print('\n')
clave = input("Key of the entry you want to modify: ")
base_datos.modificar_entrada(clave)
elif(int(seleccion)==0):
# Salir del programa
sys.exit(0)
else:
# Selección no válida
print("Wrong selection. Try again.")
# Muestra el menú de usuario
mostrar_menu()
| 40.373134 | 139 | 0.58207 | 646 | 5,410 | 4.80031 | 0.304954 | 0.053209 | 0.021928 | 0.012577 | 0.176717 | 0.123186 | 0.123186 | 0.123186 | 0.123186 | 0.123186 | 0 | 0.004105 | 0.324584 | 5,410 | 133 | 140 | 40.676692 | 0.844554 | 0.244177 | 0 | 0.231579 | 0 | 0 | 0.161055 | 0 | 0 | 0 | 0 | 0.007519 | 0 | 1 | 0.094737 | false | 0 | 0.042105 | 0 | 0.168421 | 0.231579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fd4993d9f2830ffd5acae8b1b2c37958dbdbbf5 | 11,205 | py | Python | create_tf_record.py | goruck/edge-tpu-train | 21608ddf19b736be2639342363ff331ca8b272f3 | [
"MIT"
] | 10 | 2020-07-28T21:00:05.000Z | 2022-02-24T13:35:46.000Z | create_tf_record.py | maxpark/edge-tpu-train | 21608ddf19b736be2639342363ff331ca8b272f3 | [
"MIT"
] | 10 | 2020-07-17T02:19:54.000Z | 2022-03-12T00:41:22.000Z | create_tf_record.py | maxpark/edge-tpu-train | 21608ddf19b736be2639342363ff331ca8b272f3 | [
"MIT"
] | 9 | 2020-10-28T01:17:58.000Z | 2022-03-31T07:34:17.000Z | """
Convert dataset to TFRecord for TF object detection training.
Example usage:
python3 create_tf_record.py \
--root_dir ./ \
--image_dir images \
--annotation_dir annotations \
--output_dir tf-record \
--dataset_name radar-ml
Only datset_name is required.
Based on:
https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pascal_tf_record.py
Copyright (c) 2019~2020 Lindo St. Angel
"""
import hashlib
import io
import logging
import os
import random
import re
import contextlib2
import numpy as np
import PIL.Image
import tensorflow as tf
import argparse
from lxml import etree
from object_detection.dataset_tools import tf_record_creation_util
from object_detection.utils import dataset_util
from object_detection.utils import label_map_util
logger = logging.getLogger(__name__)
NUM_TFRECORD_SHARDS = 1
TRAIN_VAL_SPLIT = 0.8
TFRECORD_TRAIN_NAME = 'train'
TFRECORD_VAL_NAME = 'val'
ALT_NAME_MAP = {
'lindo': 'person',
'nikki': 'person',
'eva': 'person',
'nico': 'person',
'unknown': 'person',
'polly': 'dog',
'rebel': 'cat',
'jack': 'cat'
}
def dict_to_tf_example(data,
label_map_dict,
image_subdirectory,
use_alt_names=False,
ignore_difficult_instances=False):
"""Convert XML derived dict to tf.Example proto.
Notice that this function normalizes the bounding box coordinates provided
by the raw data.
Args:
data: dict holding PASCAL XML fields for a single image (obtained by
running dataset_util.recursive_parse_xml_to_dict)
label_map_dict: A map from string label names to integers ids.
image_subdirectory: String specifying subdirectory within the
Pascal dataset directory holding the actual image data.
ignore_difficult_instances: Whether to skip difficult instances in the
dataset (default: False).
use_alt_names: Use class names that may be different than labels in images.
A translation map must be provided (default: False).
Returns:
example: The converted tf.Example.
Raises:
ValueError: if the image pointed to by data['filename'] is not a valid JPEG
"""
img_path = os.path.join(image_subdirectory, data['filename'])
with tf.io.gfile.GFile(img_path, 'rb') as fid:
encoded_jpg = fid.read()
encoded_jpg_io = io.BytesIO(encoded_jpg)
image = PIL.Image.open(encoded_jpg_io)
if image.format != 'JPEG':
raise ValueError('Image format must be JPEG.')
key = hashlib.sha256(encoded_jpg).hexdigest()
width = int(data['size']['width'])
height = int(data['size']['height'])
xmins = []
ymins = []
xmaxs = []
ymaxs = []
classes = []
classes_text = []
truncated = []
poses = []
difficult_obj = []
if 'object' in data:
for obj in data['object']:
difficult = bool(int(obj['difficult']))
if ignore_difficult_instances and difficult:
continue
difficult_obj.append(int(difficult))
xmin = float(obj['bndbox']['xmin'])
xmax = float(obj['bndbox']['xmax'])
ymin = float(obj['bndbox']['ymin'])
ymax = float(obj['bndbox']['ymax'])
xmins.append(xmin / width)
ymins.append(ymin / height)
xmaxs.append(xmax / width)
ymaxs.append(ymax / height)
#class_name = get_class_name_from_filename(data['filename'])
if use_alt_names:
class_name = ALT_NAME_MAP.get(obj['name'], obj['name'])
else:
class_name = obj['name']
print(class_name, label_map_dict[class_name])
classes_text.append(class_name.encode('utf8'))
classes.append(label_map_dict[class_name])
truncated.append(int(obj['truncated']))
poses.append(obj['pose'].encode('utf8'))
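    # assemble the feature map in the layout the TF Object Detection API expects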
feature_dict = {
'image/height': dataset_util.int64_feature(height),
'image/width': dataset_util.int64_feature(width),
'image/filename': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/source_id': dataset_util.bytes_feature(
data['filename'].encode('utf8')),
'image/key/sha256': dataset_util.bytes_feature(key.encode('utf8')),
'image/encoded': dataset_util.bytes_feature(encoded_jpg),
'image/format': dataset_util.bytes_feature('jpeg'.encode('utf8')),
'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
'image/object/class/label': dataset_util.int64_list_feature(classes),
'image/object/difficult': dataset_util.int64_list_feature(difficult_obj),
'image/object/truncated': dataset_util.int64_list_feature(truncated),
'image/object/view': dataset_util.bytes_list_feature(poses),
}
return tf.train.Example(features=tf.train.Features(feature=feature_dict))
def create_tf_record(output_filename,
num_shards,
label_map_dict,
annotations_dir,
image_dir,
examples,
use_alt_names):
"""Creates a TFRecord file from examples.
Args:
output_filename: Path to where output file is saved.
num_shards: Number of shards for output file.
label_map_dict: The label map dictionary.
annotations_dir: Directory where annotation files are stored.
image_dir: Directory where image files are stored.
examples: Examples to parse and save to tf record.
use_alt_names: use alternative class name mapping.
"""
with contextlib2.ExitStack() as tf_record_close_stack:
output_tfrecords = tf_record_creation_util.open_sharded_output_tfrecords(
tf_record_close_stack, output_filename, num_shards)
for idx, example in enumerate(examples):
if idx % 10 == 0:
logger.info('On image %d of %d', idx, len(examples))
xml_path = os.path.join(annotations_dir, 'xmls', example + '.xml')
if not os.path.exists(xml_path):
logger.warning('Could not find %s, ignoring example.', xml_path)
continue
with tf.io.gfile.GFile(xml_path, 'r') as fid:
xml_str = fid.read()
xml = etree.fromstring(xml_str)
data = dataset_util.recursive_parse_xml_to_dict(xml)['annotation']
try:
tf_example = dict_to_tf_example(
data=data,
label_map_dict=label_map_dict,
image_subdirectory=image_dir,
use_alt_names=use_alt_names)
if tf_example:
shard_idx = idx % num_shards
output_tfrecords[shard_idx].write(tf_example.SerializeToString())
except ValueError:
logger.warning('Invalid example: %s, ignoring.', xml_path)
def gen_trainval_list(images_path):
"""Creates a list of image names without file extensions.
The list items will not match the ordering of the images on disk.
Args:
images_path: Path to where images are located.
"""
    def make(file):
        # str.endswith accepts a tuple of suffixes
        if file.endswith(('.jpg', '.jpeg')):
            return os.path.basename(file).split('.')[0]
    # drop the None values make() returns for non-image files
    return [name for name in (make(f) for f in os.listdir(images_path)) if name is not None]
def main(args):
logger.info('Reading dataset info.')
image_dir = os.path.join(args.root_dir, args.image_dir,
args.dataset_name)
logger.info(f'Image directory: {image_dir}')
annotations_dir = os.path.join(args.root_dir, args.annotation_dir,
args.dataset_name)
logger.info(f'Annotation directory: {annotations_dir}')
label_map = os.path.join(args.root_dir, args.annotation_dir,
args.dataset_name, args.label_map_name)
logger.info(f'Label map: {label_map}')
use_alt_names = args.use_alt_names
logger.info(f'use alt names: {use_alt_names}')
# Split data into training and validation sets.
random.seed(42)
examples_list = gen_trainval_list(image_dir)
random.shuffle(examples_list)
num_examples = len(examples_list)
num_train = int(TRAIN_VAL_SPLIT * num_examples)
train_examples = examples_list[:num_train]
val_examples = examples_list[num_train:]
logger.info('Found %d training and %d validation examples.',
len(train_examples), len(val_examples))
train_output_path = os.path.join(args.root_dir, args.output_dir,
args.dataset_name, TFRECORD_TRAIN_NAME)
val_output_path = os.path.join(args.root_dir, args.output_dir,
args.dataset_name, TFRECORD_VAL_NAME)
label_map_dict = label_map_util.get_label_map_dict(label_map)
# Create training TFRecord.
logger.info('Creating training TFRecord.')
create_tf_record(
train_output_path,
NUM_TFRECORD_SHARDS,
label_map_dict,
annotations_dir,
image_dir,
train_examples,
use_alt_names)
logger.info(f'Created training TFRecord: {train_output_path}')
# Create validation TFRecord.
logger.info('Creating validation TFRecord.')
create_tf_record(
val_output_path,
NUM_TFRECORD_SHARDS,
label_map_dict,
annotations_dir,
image_dir,
val_examples,
use_alt_names)
logger.info(f'Created validation TFRecord: {val_output_path}')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--root_dir', type=str,
help='Root directory.',
default='./')
parser.add_argument('--output_dir', type=str,
help='TFRecord directory.',
default='tf-record')
parser.add_argument('--annotation_dir', type=str,
help='Annotation directory.',
default='annotations')
parser.add_argument('--label_map_name', type=str,
help='Label map name.',
default='label_map.pbtxt')
parser.add_argument('--image_dir', type=str,
help='Image directory.',
default='images')
parser.add_argument('--dataset_name', type=str,
help='Name of dataset',
required=True)
parser.add_argument('--use_alt_names', action='store_true',
help='Use alternative class names. Must match label_map_name.pbtxt')
parser.set_defaults(use_alt_names=False)
args = parser.parse_args()
logging.basicConfig(
format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
level=logging.DEBUG)
main(args) | 36.858553 | 119 | 0.635341 | 1,372 | 11,205 | 4.94242 | 0.216472 | 0.028314 | 0.024333 | 0.016959 | 0.209704 | 0.125203 | 0.105442 | 0.089515 | 0.068132 | 0.053384 | 0 | 0.005445 | 0.262383 | 11,205 | 304 | 120 | 36.858553 | 0.815003 | 0.187506 | 0 | 0.104762 | 0 | 0 | 0.1537 | 0.020009 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0 | 0.071429 | 0 | 0.109524 | 0.004762 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fd609f33e73003a14511ee46598d44faf2d22bd | 2,791 | py | Python | Cogs/core.py | Lustidrike/Economy-Bot | f5b989b9086d9c0834555126c275d12381dd108f | [
"MIT"
] | 2 | 2019-09-24T21:44:00.000Z | 2019-09-24T21:44:17.000Z | Cogs/core.py | Lustidrike/Economy-Bot | f5b989b9086d9c0834555126c275d12381dd108f | [
"MIT"
] | 1 | 2021-03-31T14:35:11.000Z | 2021-11-08T17:48:47.000Z | Cogs/core.py | Lustidrike/Economy-Bot | f5b989b9086d9c0834555126c275d12381dd108f | [
"MIT"
] | 1 | 2020-08-16T16:59:57.000Z | 2020-08-16T16:59:57.000Z | import logging
import discord
import datetime
import json
from operator import itemgetter
from discord.ext import commands
from os import linesep
from .base_cog import BaseCog
from conf import config
log = logging.getLogger(__name__)
class Core(BaseCog):
"""A minimal cog for testing."""
def __init__(self, bot):
BaseCog.__init__(self, bot)
self.bot = bot
with open(config.cogs_data_path + '/user_shortcuts.json', 'r') as shortcuts_file:
self.shortcuts = json.load(shortcuts_file)
@commands.command()
async def info(self, context):
"""General information on the bot instance."""
BaseCog.check_main_server(self, context)
BaseCog.check_bot_channel(self, context)
BaseCog.check_forbidden_characters(self, context)
await self.bot.post_message(self.bot.bot_channel, '```' + self.bot.info_text + '```')
@commands.command(pass_context=True)
async def time(self, context):
"""Displays current local time and date for the bot."""
BaseCog.check_forbidden_characters(self, context)
await self.bot.post_message(context.message.channel, 'Current time is ' + datetime.datetime.now().strftime("%Y-%m-%d %H:%M") + ' (' + config.timezone + ').')
@commands.command()
async def shortcuts(self, context):
"""Displays registered shortcuts for user nicknames."""
BaseCog.check_main_server(self, context)
BaseCog.check_bot_channel(self, context)
BaseCog.check_forbidden_characters(self, context)
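        # column width = the longest shortcut, so the nickname column lines up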
indent = max(len(shortcut) for shortcut, name in self.shortcuts.items())
sorted_shortcuts = sorted(self.shortcuts.items(), key=itemgetter(0), reverse=False)
result = '```Shortcut Nickname' + linesep + linesep
for shortcut, name in sorted_shortcuts:
result += shortcut.ljust(indent) + ' ' + name + linesep
result += '```'
await self.bot.post_message(self.bot.bot_channel, result)
@commands.command()
async def addshortcut(self, context, shortcut, user):
"""[ADMINS ONLY] Creates a new shortcut for a specified username."""
BaseCog.check_main_server(self, context)
BaseCog.check_bot_channel(self, context)
BaseCog.check_admin(self, context)
BaseCog.check_forbidden_characters(self, context)
self.shortcuts[shortcut] = user
with open(config.cogs_data_path + '/user_shortcuts.json', 'w') as shortcuts_file:
json.dump(self.shortcuts, shortcuts_file)
await self.bot.post_message(self.bot.bot_channel, context.message.author.name + ' has created a new shortcut \"' + shortcut + '\".')
def setup(bot):
"""Core cog load."""
bot.add_cog(Core(bot))
log.info("Core cog loaded")
| 32.453488 | 165 | 0.672161 | 344 | 2,791 | 5.290698 | 0.313953 | 0.090659 | 0.069231 | 0.088462 | 0.33956 | 0.33956 | 0.33956 | 0.33956 | 0.31044 | 0.20989 | 0 | 0.000453 | 0.209244 | 2,791 | 85 | 166 | 32.835294 | 0.824196 | 0.01469 | 0 | 0.254902 | 0 | 0 | 0.062823 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0.019608 | 0.176471 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fda72af2a11ecb22d4d1d7e69668167ef276bd0 | 1,008 | py | Python | tests/test_rendering/test_delta.py | ryu57/pyHalo | 61b9ab49d76f3552f5680b2e457fbd3e49b9cc89 | [
"MIT"
] | 7 | 2020-12-09T23:58:34.000Z | 2022-03-13T12:18:32.000Z | tests/test_rendering/test_delta.py | ryu57/pyHalo | 61b9ab49d76f3552f5680b2e457fbd3e49b9cc89 | [
"MIT"
] | 8 | 2020-10-12T21:30:22.000Z | 2022-01-25T16:04:54.000Z | tests/test_rendering/test_delta.py | ryu57/pyHalo | 61b9ab49d76f3552f5680b2e457fbd3e49b9cc89 | [
"MIT"
] | 6 | 2021-06-07T16:30:41.000Z | 2022-01-12T16:58:04.000Z | import numpy.testing as npt
import pytest
import numpy as np
from pyHalo.Rendering.MassFunctions.delta import DeltaFunction
class TestBackgroundDensityDelta(object):
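    """Verifies DeltaFunction draws rho * volume / mass halos, each with the fixed delta-function mass."""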
def setup(self):
self.mass = 0.01
self.volume = 10
self.rho = 10
self.mfunc = DeltaFunction(self.mass, self.volume, self.rho, False)
self.mfunc_poisson = DeltaFunction(self.mass, self.volume, self.rho, True)
self.mfunc_empty = DeltaFunction(100000 * self.volume * self.rho, self.volume, self.rho, False)
def test_density_delta(self):
n_expected = self.rho * self.volume / self.mass
m = self.mfunc.draw()
n_drawn = len(m)
npt.assert_equal(n_drawn, n_expected)
for mi in m:
npt.assert_equal(mi, self.mass)
m = self.mfunc_poisson.draw()
for mi in m:
npt.assert_equal(mi, self.mass)
m = self.mfunc_empty.draw()
npt.assert_equal(len(m), 0.)
if __name__ == '__main__':
pytest.main()
| 28 | 103 | 0.638889 | 138 | 1,008 | 4.507246 | 0.34058 | 0.07717 | 0.11254 | 0.109325 | 0.368167 | 0.257235 | 0.257235 | 0.135048 | 0.135048 | 0.135048 | 0 | 0.018617 | 0.253968 | 1,008 | 35 | 104 | 28.8 | 0.808511 | 0 | 0 | 0.153846 | 0 | 0 | 0.007937 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fdacbeae6f9e86eed514fcf51543bad698647f9 | 4,716 | py | Python | LayeredGraphAPI/VisualizeGraph.py | whytestalyon/LayeredGraph | f7126d50b5d362c6e9302244de268c34ca9c31b0 | [
"Apache-2.0"
] | null | null | null | LayeredGraphAPI/VisualizeGraph.py | whytestalyon/LayeredGraph | f7126d50b5d362c6e9302244de268c34ca9c31b0 | [
"Apache-2.0"
] | 1 | 2020-03-06T16:00:11.000Z | 2020-03-06T16:00:11.000Z | LayeredGraphAPI/VisualizeGraph.py | whytestalyon/LayeredGraph | f7126d50b5d362c6e9302244de268c34ca9c31b0 | [
"Apache-2.0"
] | 2 | 2019-01-15T01:32:56.000Z | 2019-11-12T15:58:43.000Z | #!/usr/bin/env python
'''
VisualizeGraph.py
This file contains helpful sub-routines for generating images from a Random Walk run.
'''
import math
import os
import struct
from LayeredGraph import LayeredGraph
def saveGraphImage(mg, outFN, rankings=None, minWeight=0.0, drawEdgeWeights=False, nodeTypes=None):
'''
This function generates a dot file for graphviz to visualize the graph
@param mg - the LayeredGraph to generate an image from
@param outFN - the location to save the output (.dot is expected)
@param rankings - the full rankings of all nodes in the graph (default: None, do not color the graph and visualize the whole graph)
@param minWeight - the minimum weight from the ranking required to show up in the image (default: 0.0)
@param drawEdgeWeights - if True, weight values will be included on the edges (default: False)
'''
#if we have ranks, create a dictionary of the weights for lookup later
if rankings != None:
rDict = {}
for w, t, v in rankings:
rDict[(t, v)] = w
#open the file for writing
fp = open(outFN, 'w+')
fp.write('digraph food {\n')
n = mg.nodes
if nodeTypes == None:
nodeTypes = sorted(n.keys())
#iterate through all nodes in the graph
#for k in sorted(n.keys()):
for k in nodeTypes:
for v in sorted(n[k]):
vw = v.replace(':', '_')
if rankings == None:
#if there are no rankings, then always write the node
fp.write(k+'_'+vw+';\n')
else:
#we have rankings, so only write the node if it has sufficient weight
r = rDict[(k, v)]
if r < minWeight:
continue
#all weights are in the range [0, 1], so scale that up to RGB 255 scale
fc = int(math.floor(r*255))
rgb = (255, 255-fc, 255-fc)
fcHash = '#'+bytes.hex(struct.pack('BBB',*rgb))
#write the node and include the weight
                fp.write('{}_{} [label="{}_{} ({:.4f})" style=filled fillcolor="{}"];\n'.format(k, vw, k, vw, r, fcHash))
#now go through the nodes again looking for edges
#for k2 in sorted(n.keys()):
for k2 in nodeTypes:
for v2 in sorted(n[k2]):
#make sure this node has enough weight to show up
if rankings != None:
r2 = rDict[(k2, v2)]
if r2 < minWeight:
continue
#if an edges exists, it has a weight > 0
w = mg.getEdge(k, v, k2, v2)
if w > 0.0:
wn = mg.getEdge(k, v, k2, v2, True)
vw2 = v2.replace(':', '_')
if drawEdgeWeights:
#include the raw weight and the normalized weight
#TODO: option for one or both?
fp.write('{}_{} -> {}_{} [label="{}({:.2f})"];\n'.format(k, vw, k2, vw2, w, wn))
else:
#only include the edge itself
fp.write('{}_{} -> {}_{};\n'.format(k, vw, k2, vw2))
fp.write('}\n')
fp.close()
def visualize_RWR(dotPrefix, imagePrefix, mg, startProbs, restartProb, bg=None, cycleLimit=1000, minWeight=0.0):
'''
Run RWR and generate a dot file for each iteration. Requires graphviz to be installed to run "dot".
@param dotPrefix - dot files will be saved to <dotPrefix>.<iteration>.dot
@param imagePrefix - image files will be saved to <imagePrefix>.<iteration>.png
@param mg - an instance of LayeredGraph
@param startProbs - same as LayeredGraph.RWR_rank(..)
@param restartProb - same as LayeredGraph.RWR_rank(..)
@param bg - same as LayeredGraph.RWR_rank(..)
@param cycleLimit - same as LayeredGraph.RWR_rank(..)
@param minWeight - the minimum weight on a node to visualize it (default: 0.0)
'''
#first, generate the iterator
rankTypes = set(mg.nodes.keys())
rwr_iter = mg.RWR_iter(startProbs, restartProb, rankTypes, bg, cycleLimit)
for x, rankings in enumerate(rwr_iter):
dotFN = '.'.join([dotPrefix, str(x), 'dot'])
pngFN = '.'.join([imagePrefix, str(x), 'png'])
#create the dot file, then run dot to generate the image file
saveGraphImage(mg, dotFN, rankings, minWeight=minWeight, drawEdgeWeights=False)
os.system('dot -Tpng -o '+pngFN+' '+dotFN)
| 43.666667 | 135 | 0.550254 | 596 | 4,716 | 4.325503 | 0.330537 | 0.016292 | 0.013964 | 0.032583 | 0.131497 | 0.069822 | 0 | 0 | 0 | 0 | 0 | 0.016731 | 0.340967 | 4,716 | 108 | 136 | 43.666667 | 0.812741 | 0.417727 | 0 | 0.117647 | 0 | 0 | 0.064773 | 0.008712 | 0 | 0 | 0 | 0.009259 | 0 | 1 | 0.039216 | false | 0 | 0.078431 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fdadfd6b7318f2fb0f818d5d171f52fa0f3f300 | 6,881 | py | Python | MyBot/hlt.py | ranb2002/halite | 85bce75c10ab89c563e9e5cc34e8a221fdc74f42 | [
"MIT"
] | null | null | null | MyBot/hlt.py | ranb2002/halite | 85bce75c10ab89c563e9e5cc34e8a221fdc74f42 | [
"MIT"
] | null | null | null | MyBot/hlt.py | ranb2002/halite | 85bce75c10ab89c563e9e5cc34e8a221fdc74f42 | [
"MIT"
] | null | null | null | import sys
import math
import heapq
from collections import namedtuple
from itertools import chain, zip_longest
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
NORTH, EAST, SOUTH, WEST, STILL = range(5)
DIRECTIONS = [NORTH, EAST, SOUTH, WEST]
ALL_DIRECTIONS = [NORTH, EAST, SOUTH, WEST, STILL]
class PriorityQueue:
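    """Min-heap priority queue (lowest priority value first); used by the A* search below."""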
def __init__(self):
self.elements = []
def empty(self):
return len(self.elements) == 0
def put(self, item, priority):
heapq.heappush(self.elements, (priority, item))
def get(self):
return heapq.heappop(self.elements)[1]
def opposite_cardinal(direction):
"Returns the opposing cardinal direction."
return (direction + 2) % 4 if direction != STILL else STILL
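# Square is an immutable record for one map tile; Move pairs a square with the
# direction its strength should travel this turn.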
Square = namedtuple('Square', 'x y owner strength production')
Move = namedtuple('Move', 'square direction')
class GameMap:
def __init__(self, size_string, production_string, map_string=None):
self.width, self.height = tuple(map(int, size_string.split()))
self.production = tuple(tuple(map(int, substring)) for substring in grouper(production_string.split(), self.width))
self.contents = None
self.get_frame(map_string)
self.starting_player_count = len(set(square.owner for square in self)) - 1
def get_frame(self, map_string=None):
"Updates the map information from the latest frame provided by the Halite game environment."
if map_string is None:
map_string = get_string()
split_string = map_string.split()
owners = list()
while len(owners) < self.width * self.height:
counter = int(split_string.pop(0))
owner = int(split_string.pop(0))
owners.extend([owner] * counter)
assert len(owners) == self.width * self.height
assert len(split_string) == self.width * self.height
self.contents = [[Square(x, y, owner, strength, production)
for x, (owner, strength, production)
in enumerate(zip(owner_row, strength_row, production_row))]
for y, (owner_row, strength_row, production_row)
in enumerate(zip(grouper(owners, self.width),
grouper(map(int, split_string), self.width),
self.production))]
def __iter__(self):
"Allows direct iteration over all squares in the GameMap instance."
return chain.from_iterable(self.contents)
def neighbors(self, square, n=1, include_self=False):
"Iterable over the n-distance neighbors of a given square. For single-step neighbors, the enumeration index provides the direction associated with the neighbor."
assert isinstance(include_self, bool)
assert isinstance(n, int) and n > 0
if n == 1:
combos = ((0, -1), (1, 0), (0, 1), (-1, 0), (0, 0)) # NORTH, EAST, SOUTH, WEST, STILL ... matches indices provided by enumerate(game_map.neighbors(square))
else:
combos = ((dx, dy) for dy in range(-n, n+1) for dx in range(-n, n+1) if abs(dx) + abs(dy) <= n)
return (self.contents[(square.y + dy) % self.height][(square.x + dx) % self.width] for dx, dy in combos if include_self or dx or dy)
def get_target(self, square, direction):
"Returns a single, one-step neighbor in a given direction."
dx, dy = ((0, -1), (1, 0), (0, 1), (-1, 0), (0, 0))[direction]
return self.contents[(square.y + dy) % self.height][(square.x + dx) % self.width]
def get_distance(self, sq1, sq2):
"Returns Manhattan distance between two squares."
dx = min(abs(sq1.x - sq2.x), sq1.x + self.width - sq2.x, sq2.x + self.width - sq1.x)
dy = min(abs(sq1.y - sq2.y), sq1.y + self.height - sq2.y, sq2.y + self.height - sq1.y)
return dx + dy
def get_direction_toward(self, sq1, sq2):
best_cost = math.inf
best_direction = None
for direction in ALL_DIRECTIONS:
cur_cost = self.get_distance(self.get_target(sq1, direction), sq2)
if cur_cost < best_cost:
best_direction = direction
best_cost = cur_cost
return best_direction
def get_direction_toward_with_A_star(self, sq1, sq2):
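        """Returns the first-step direction from sq1 along an A* shortest path to sq2 on the wrap-around map."""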
frontier = PriorityQueue()
frontier.put(sq1, 0)
came_from = {}
cost_so_far = {}
came_from[sq1] = sq1
cost_so_far[sq1] = 0
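        # A* main loop: repeatedly expand the cheapest frontier square until sq2 is reached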
while not frontier.empty():
current = frontier.get()
if current == sq2:
break
for next in self.neighbors(current):
                new_cost = cost_so_far[current] + 1 # the value 1 is the cost of stepping from one square to the next
if next not in cost_so_far or new_cost < cost_so_far[next]:
cost_so_far[next] = new_cost
priority = new_cost + self.get_distance(sq2, next)
frontier.put(next, priority)
came_from[next] = current
next_tile = sq2
while next_tile in came_from.keys() and came_from[next_tile] != sq1:
next_tile = came_from[next_tile]
deltaX = sq1.x - next_tile.x
deltaY = sq1.y - next_tile.y
# ((0, -1), (1, 0), (0, 1), (-1, 0), (0, 0)) # NORTH, EAST, SOUTH, WEST, STILL
if (deltaX == 1 or deltaX == -(self.width - 1)) and deltaY == 0:
return WEST
elif (deltaX == -1 or deltaX == self.width-1) and deltaY == 0:
return EAST
elif deltaX == 0 and (deltaY == -1 or deltaY == self.height-1):
return SOUTH
elif deltaX == 0 and (deltaY == 1 or deltaY == -(self.height - 1)):
return NORTH
else:
return STILL
#################################################################
# Functions for communicating with the Halite game environment #
#################################################################
def send_string(s):
sys.stdout.write(s)
sys.stdout.write('\n')
sys.stdout.flush()
def get_string():
return sys.stdin.readline().rstrip('\n')
def get_init():
playerID = int(get_string())
m = GameMap(get_string(), get_string())
return playerID, m
def send_init(name):
send_string(name)
def translate_cardinal(direction):
"Translate direction constants used by this Python-based bot framework to that used by the official Halite game environment."
return (direction + 1) % 5
def send_frame(moves):
send_string(' '.join(str(move.square.x) + ' ' + str(move.square.y) + ' ' + str(translate_cardinal(move.direction)) for move in moves))
| 38.657303 | 170 | 0.596425 | 907 | 6,881 | 4.403528 | 0.227122 | 0.029294 | 0.019529 | 0.006009 | 0.190786 | 0.135704 | 0.090135 | 0.090135 | 0.090135 | 0.090135 | 0 | 0.018349 | 0.271327 | 6,881 | 177 | 171 | 38.875706 | 0.778221 | 0.143438 | 0 | 0.015152 | 0 | 0.015152 | 0.108104 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 1 | 0.151515 | false | 0 | 0.037879 | 0.022727 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fdcf04045655bc17942f9bca87ae63747446edb | 2,389 | py | Python | server/attender-mobile/api/facebook_login.py | denbedilov/SIStore | da8e6f38170959efe756bafe2f83adcf1fbb14a4 | [
"MIT"
] | null | null | null | server/attender-mobile/api/facebook_login.py | denbedilov/SIStore | da8e6f38170959efe756bafe2f83adcf1fbb14a4 | [
"MIT"
] | null | null | null | server/attender-mobile/api/facebook_login.py | denbedilov/SIStore | da8e6f38170959efe756bafe2f83adcf1fbb14a4 | [
"MIT"
] | null | null | null | __author__ = 'itamar'
import sys
from engine.facebook_logic import fb_logic
import logging
import webapp2
from engine.DAL import DAL
sys.path.insert(0, 'lib') #we need this line in order to make libraries imported from lib folder work properly
import requests
FACEBOOK_APP_ID = "683953828381840"
class APILoginHandler(webapp2.RequestHandler):
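    """Login endpoint: validates a Facebook id/token pair and stores the user's details."""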
def get(self):
received = False
_id = self.request.get("id").encode('ascii', 'ignore')
token = self.request.get("token").encode('ascii', 'ignore')
if _id == "" or token == "":
received = False
else:
fb = fb_logic()
if fb.test_id(_id) is False:
received = 2
else:
                fb = fb_logic()
                # validate once and reuse the result instead of calling Facebook twice
                user = fb.validate_fb_login(_id, access_token=token)
                if user is not False:
                    mydb = DAL()
                    logging.info(user)
try:
email = user["email"].encode('ascii', 'ignore')
except:
email = None
received = mydb.set_user_details(fb_id=int(_id), name=user['first_name'].encode('ascii', 'ignore'),
last_name=user['last_name'].encode('ascii', 'ignore'),
email=email)
logging.info("received is "+ str(received))
else:
received = -1
logging.info(received)
self.post(received)
def post(self, received):
if received is False:
self.response.set_status(400)
self.response.write("ERROR: Missing parameters")
return
elif received == -1:
self.response.set_status(401)
self.response.write("Session Aouth Failed")
elif received == 2:
self.response.set_status(402)
self.response.write("Invalid ID")
else:
self.response.set_status(200)
self.response.write(received)
return
def get_results(request_url, params):
request = requests.get(request_url, params=params, verify=True)
data = request.json()
return data, request.status_code
login = webapp2.WSGIApplication([
('/login', APILoginHandler)
], debug=True) | 32.283784 | 119 | 0.551277 | 259 | 2,389 | 4.942085 | 0.378378 | 0.075 | 0.066406 | 0.065625 | 0.079688 | 0.079688 | 0.054688 | 0.054688 | 0 | 0 | 0 | 0.02235 | 0.344496 | 2,389 | 74 | 120 | 32.283784 | 0.795019 | 0.034743 | 0 | 0.166667 | 0 | 0 | 0.079358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.216667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fddaa46f409381998e02d2d70d806b3585116f9 | 2,996 | py | Python | hashing/LinkedList.py | iamlmn/PyDS | 11b00629a91e8231eea7f8feb7c3c6065fdb1ce5 | [
"MIT"
] | null | null | null | hashing/LinkedList.py | iamlmn/PyDS | 11b00629a91e8231eea7f8feb7c3c6065fdb1ce5 | [
"MIT"
] | null | null | null | hashing/LinkedList.py | iamlmn/PyDS | 11b00629a91e8231eea7f8feb7c3c6065fdb1ce5 | [
"MIT"
] | null | null | null | class _Node:
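    """Lightweight singly linked node holding an element and a reference to the next node."""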
__slots__ = '_element', '_next'
def __init__(self, element, next):
self._element = element
self._next = next
class LinkedList:
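    """Singly linked list keeping head and tail references for O(1) appends."""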
def __init__(self):
self._head = None
self._tail = None
self._size = 0
def __len__(self):
return self._size
def isempty(self):
return self._size == 0
def addlast(self, e):
newest = _Node(e, None)
if self.isempty():
self._head = newest
else:
self._tail._next = newest
self._tail = newest
self._size += 1
def addfirst(self, e):
newest = _Node(e, None)
if self.isempty():
self._head = newest
self._tail = newest
else:
newest._next = self._head
self._head = newest
self._size += 1
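# NOTE: `position` in addany/removeany below is 1-based; both assume
# 2 <= position <= len(self) and perform no bounds checking.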
def addany(self, e, position):
newest = _Node(e, None)
p = self._head
i = 1
while i < position-1:
p = p._next
i = i + 1
newest._next = p._next
p._next = newest
self._size += 1
def removefirst(self):
if self.isempty():
print('List is empty')
return
e = self._head._element
self._head = self._head._next
self._size -= 1
if self.isempty():
self._tail = None
return e
def removelast(self):
if self.isempty():
print('List is empty')
return
if len(self) == 1:  # a single node is both head and tail
return self.removefirst()
p = self._head
i = 1
while i < len(self) - 1:
p = p._next
i = i + 1
self._tail = p
p = p._next
e = p._element
self._tail._next = None
self._size -= 1
return e
def removeany(self, position):
p = self._head
i = 1
while i < position - 1:
p = p._next
i = i + 1
e = p._next._element
p._next = p._next._next
self._size -= 1
return e
def display(self):
p = self._head
while p:
print(p._element,end='-->')
p = p._next
print()
def search(self,key):
p = self._head
index = 0
while p:
if p._element == key:
return index
p = p._next
index = index + 1
return -1
def insertsorted(self,e):
newest = _Node(e, None)
if self.isempty():
self._head = newest
else:
p = self._head
q = self._head
while p and p._element < e:
q = p
p = p._next
if p == self._head:
newest._next = self._head
self._head = newest
else:
newest._next = q._next
q._next = newest
self._size += 1
| 24.357724 | 42 | 0.437583 | 334 | 2,996 | 3.658683 | 0.131737 | 0.124386 | 0.051555 | 0.0491 | 0.40671 | 0.348609 | 0.317512 | 0.243044 | 0.243044 | 0.179214 | 0 | 0.013224 | 0.46996 | 2,996 | 122 | 43 | 24.557377 | 0.756297 | 0 | 0 | 0.574074 | 0 | 0 | 0.014619 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12037 | false | 0 | 0 | 0.018519 | 0.231481 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fdf3ea0d8d65163228f397b06ee77dbf23617ec | 806 | py | Python | signalbot/functions/trivia.py | pblaas/signalbot | 11d9b5fdcc63ddbc99ea817b0fcdb6a6d3cc42da | [
"MIT"
] | null | null | null | signalbot/functions/trivia.py | pblaas/signalbot | 11d9b5fdcc63ddbc99ea817b0fcdb6a6d3cc42da | [
"MIT"
] | null | null | null | signalbot/functions/trivia.py | pblaas/signalbot | 11d9b5fdcc63ddbc99ea817b0fcdb6a6d3cc42da | [
"MIT"
] | null | null | null | """Trivia module."""
import urllib3
import json
import random
import html
class Trivia:
"""Defining base class for inheritance."""
@staticmethod
def trivia():
"""Get random questions from opentdb trivia API."""
http = urllib3.PoolManager()
req_return = http.request('GET', 'https://opentdb.com/api.php?amount=1')
trivia_data = json.loads(req_return.data.decode('utf-8'))
all_answers = trivia_data['results'][0]['incorrect_answers']
all_answers.insert(0, trivia_data['results'][0]['correct_answer'])
random.shuffle(all_answers)
shuffled_string = ",".join(all_answers)
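# NOTE: only the question is unescaped below; the shuffled answers may
# still contain HTML entities such as &amp; or &quot;.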
return f"""Trivia:
{html.unescape(trivia_data['results'][0]['question'])}
Options: {shuffled_string}
"""
| 29.851852 | 80 | 0.635236 | 94 | 806 | 5.297872 | 0.542553 | 0.080321 | 0.10241 | 0.108434 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012678 | 0.217122 | 806 | 26 | 81 | 31 | 0.776545 | 0.120347 | 0 | 0 | 0 | 0 | 0.294372 | 0.077922 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.210526 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fe525e1bcdcf2d80e15cfd4df2b184377d84c8c | 4,365 | py | Python | dashboard/dashboard/services/swarming_service_test.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | null | null | null | dashboard/dashboard/services/swarming_service_test.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | null | null | null | dashboard/dashboard/services/swarming_service_test.py | ravitejavalluri/catapult | 246a39a82c2213d913a96fff020a263838dc76e6 | [
"BSD-3-Clause"
] | 1 | 2020-07-24T05:13:01.000Z | 2020-07-24T05:13:01.000Z | # Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
import json
import mock
from dashboard.services import swarming_service
class _SwarmingTest(unittest.TestCase):
def setUp(self):
patcher = mock.patch('dashboard.common.utils.ServiceAccountHttp')
self.__http = mock.MagicMock()
service_account_http = patcher.start()
service_account_http.return_value = self.__http
self.addCleanup(patcher.stop)
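# Each test below stubs the HTTP layer via this mock, then checks both
# the decoded response body and the exact URL/verb the service issued.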
def _Set200ReturnValue(self):
self.__SetRequestReturnValue({'status': '200'}, {'content': {}})
def _Set500ReturnValue(self):
self.__SetRequestReturnValue({'status': '500'}, {'errors': {}})
def _Assert200Response(self, content):
self.assertEqual(content, {'content': {}})
def _AssertRequestMade(self, path, *args, **kwargs):
self.__http.request.assert_called_once_with(
swarming_service.API_BASE_URL + path, *args, **kwargs)
def __SetRequestReturnValue(self, response, content):
self.__http.request.return_value = (response, json.dumps(content))
class BotTest(_SwarmingTest):
def testGet(self):
self._Set200ReturnValue()
response = swarming_service.Bot('bot_id').Get()
self._Assert200Response(response)
self._AssertRequestMade('bot/bot_id/get', 'GET')
def testTasks(self):
self._Set200ReturnValue()
response = swarming_service.Bot('bot_id').Tasks()
self._Assert200Response(response)
self._AssertRequestMade('bot/bot_id/tasks', 'GET')
class BotsTest(_SwarmingTest):
def testList(self):
self._Set200ReturnValue()
response = swarming_service.Bots().List(
'CkMSPWoQ', {'pool': 'Chrome-perf', 'a': 'b'}, False, 1, True)
self._Assert200Response(response)
path = ('bots/list?cursor=CkMSPWoQ&dimensions=a%3Ab&'
'dimensions=pool%3AChrome-perf&is_dead=false&'
'limit=1&quarantined=true')
self._AssertRequestMade(path, 'GET')
class TaskTest(_SwarmingTest):
def testCancel(self):
self._Set200ReturnValue()
response = swarming_service.Task('task_id').Cancel()
self._Assert200Response(response)
self._AssertRequestMade('task/task_id/cancel', 'POST')
def testRequest(self):
self._Set200ReturnValue()
response = swarming_service.Task('task_id').Request()
self._Assert200Response(response)
self._AssertRequestMade('task/task_id/request', 'GET')
def testResult(self):
self._Set200ReturnValue()
response = swarming_service.Task('task_id').Result()
self._Assert200Response(response)
self._AssertRequestMade('task/task_id/result', 'GET')
def testResultWithPerformanceStats(self):
self._Set200ReturnValue()
response = swarming_service.Task('task_id').Result(True)
self._Assert200Response(response)
self._AssertRequestMade(
'task/task_id/result?include_performance_stats=true', 'GET')
def testStdout(self):
self._Set200ReturnValue()
response = swarming_service.Task('task_id').Stdout()
self._Assert200Response(response)
self._AssertRequestMade('task/task_id/stdout', 'GET')
class TasksTest(_SwarmingTest):
def testNew(self):
body = {
'name': 'name',
'user': 'user',
'priority': '100',
'expiration_secs': '600',
'properties': {
'inputs_ref': {
'isolated': 'isolated_hash',
},
'extra_args': ['--output-format=json'],
'dimensions': [
{'key': 'id', 'value': 'bot_id'},
{'key': 'pool', 'value': 'Chrome-perf'},
],
'execution_timeout_secs': '3600',
'io_timeout_secs': '3600',
},
'tags': [
'id:bot_id',
'pool:Chrome-perf',
],
}
self._Set200ReturnValue()
response = swarming_service.Tasks().New(body)
self._Assert200Response(response)
self._AssertRequestMade('tasks/new', 'POST',
body=json.dumps(body),
headers={'Content-Type': 'application/json'})
class FailureTest(_SwarmingTest):
def testBotGet(self):
self._Set500ReturnValue()
with self.assertRaises(swarming_service.SwarmingError):
swarming_service.Bot('bot_id').Get()
self._AssertRequestMade('bot/bot_id/get', 'GET')
| 30.524476 | 73 | 0.666208 | 451 | 4,365 | 6.21286 | 0.332594 | 0.069593 | 0.035689 | 0.118844 | 0.37616 | 0.342612 | 0.325482 | 0.3005 | 0.194861 | 0.045682 | 0 | 0.026781 | 0.195876 | 4,365 | 142 | 74 | 30.739437 | 0.77151 | 0.03551 | 0 | 0.207547 | 0 | 0 | 0.177603 | 0.053257 | 0 | 0 | 0 | 0 | 0.226415 | 1 | 0.150943 | false | 0 | 0.037736 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fe5d7b26c0ea9f708ffcc671befa4d94503df4f | 3,199 | py | Python | ssh_tarpit/utils.py | Snawoot/ssh-tarp | 8c10f1d94497de246b37cf00cb19a5359cccc1e5 | [
"Unlicense"
] | 24 | 2019-08-23T23:45:42.000Z | 2022-03-01T04:21:19.000Z | ssh_tarpit/utils.py | Snawoot/ssh-tarp | 8c10f1d94497de246b37cf00cb19a5359cccc1e5 | [
"Unlicense"
] | 8 | 2021-03-08T15:10:06.000Z | 2021-06-16T11:04:54.000Z | ssh_tarpit/utils.py | Snawoot/ssh-tarp | 8c10f1d94497de246b37cf00cb19a5359cccc1e5 | [
"Unlicense"
] | 5 | 2021-03-13T06:49:10.000Z | 2022-02-28T09:25:22.000Z | import asyncio
import logging
import logging.handlers
import os
import queue
class Heartbeat:
def __init__(self, interval=.5):
self._interval = interval
self._beat = None
async def heartbeat(self):
while True:
await asyncio.sleep(self._interval)
async def __aenter__(self):
return await self.start()
async def start(self):
if self._beat is None:
self._beat = asyncio.ensure_future(self.heartbeat())
return self
async def __aexit__(self, exc_type, exc_value, traceback):
return await self.stop()
async def stop(self):
self._beat.cancel()
while not self._beat.done():
try:
await self._beat
except asyncio.CancelledError:
pass
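# Typical usage (a sketch):
#     async with Heartbeat():
#         await server.serve_forever()
# The periodic wakeups keep the event loop from sleeping indefinitely,
# e.g. so signal handling stays responsive on platforms where a bare
# await would otherwise block it.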
def singleton(class_):
instances = {}
def getinstance(*args, **kwargs):
if class_ not in instances:
instances[class_] = class_(*args, **kwargs)
return instances[class_]
return getinstance
@singleton
class RotateHandlers:
def __init__(self):
self._callbacks = []
def add_callback(self, cb):
self._callbacks.append(cb)
def fire(self):
for cb in self._callbacks:
cb()
class OverflowingQueue(queue.Queue):
def put(self, item, block=True, timeout=None):
try:
return queue.Queue.put(self, item, block, timeout)
except queue.Full:
# Log sink hang
pass
return None
def put_nowait(self, item):
return self.put(item, False)
class AsyncLoggingHandler:
def __init__(self, handler, maxsize=1024):
_queue = OverflowingQueue(maxsize)
self._listener = logging.handlers.QueueListener(_queue, handler)
self._async_handler = logging.handlers.QueueHandler(_queue)
def __enter__(self):
self._listener.start()
return self._async_handler
def __exit__(self, exc_type, exc_value, traceback):
self._listener.stop()
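# Typical usage (a sketch, reusing this module's own helpers):
#     with AsyncLoggingHandler(raw_log_handler(logging.INFO)) as handler:
#         logger = setup_logger("main", logging.INFO, handler)
# Records pass through the bounded queue to a background listener
# thread, so a slow log sink cannot stall the caller.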
def raw_log_handler(verbosity, logfile=None):
if logfile:
if is_nt():
handler = logging.FileHandler(logfile)
else:
handler = logging.handlers.WatchedFileHandler(logfile)
def rotate_cb():
try:
handler.reopenIfNeeded()
except Exception:  # never let a failed reopen kill logging
pass
RotateHandlers().add_callback(rotate_cb)
else:
handler = logging.StreamHandler()
handler.setLevel(verbosity)
handler.setFormatter(logging.Formatter('%(asctime)s '
'%(levelname)-8s '
'%(name)s: %(message)s',
'%Y-%m-%d %H:%M:%S'))
return handler
def setup_logger(name, verbosity, handler):
logger = logging.getLogger(name)
logger.setLevel(verbosity)
logger.addHandler(handler)
return logger
def enable_uvloop():
try:
import uvloop
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
except ImportError:
return False
else:
return True
def is_nt():
return os.name == 'nt'
| 25.388889 | 72 | 0.589872 | 336 | 3,199 | 5.39881 | 0.324405 | 0.026461 | 0.018192 | 0.015436 | 0.030871 | 0.030871 | 0 | 0 | 0 | 0 | 0 | 0.002745 | 0.316661 | 3,199 | 125 | 73 | 25.592 | 0.827081 | 0.004064 | 0 | 0.103093 | 0 | 0 | 0.021357 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.164948 | false | 0.030928 | 0.072165 | 0.020619 | 0.42268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fe650bd344f12877d6483ceb013ce5a8f97a5b9 | 41,012 | py | Python | luna/gateware/interface/serdes_phy/xc7_gtp.py | shrine-maiden-heavy-industries/luna | 6e737ea004d64c0b81de13e68657fecb45f93c1b | [
"BSD-3-Clause"
] | null | null | null | luna/gateware/interface/serdes_phy/xc7_gtp.py | shrine-maiden-heavy-industries/luna | 6e737ea004d64c0b81de13e68657fecb45f93c1b | [
"BSD-3-Clause"
] | null | null | null | luna/gateware/interface/serdes_phy/xc7_gtp.py | shrine-maiden-heavy-industries/luna | 6e737ea004d64c0b81de13e68657fecb45f93c1b | [
"BSD-3-Clause"
] | null | null | null | #
# This file is part of LUNA.
#
# Copyright (c) 2020 Great Scott Gadgets <info@greatscottgadgets.com>
# Copyright (c) 2020 Florent Kermarrec <florent@enjoy-digital.fr>
#
# Code based in part on ``litex`` and ``liteiclink``.
# SPDX-License-Identifier: BSD-3-Clause
""" Soft PIPE backend for the Xilinx 7 Series GTP transceivers. """
from amaranth import *
from amaranth.lib.cdc import FFSynchronizer
from .xc7 import DRPInterface, DRPArbiter, DRPFieldController
from .xc7 import GTResetDeferrer, GTPRXPMAResetWorkaround, GTOOBClockDivider
from .lfps import LFPSSquareWaveGenerator, LFPSSquareWaveDetector
from ..pipe import PIPEInterface
Open = Signal
class GTPQuadPLL(Elaboratable):
def __init__(self, refclk, refclk_freq, linerate, channel=0):
assert channel in [0, 1]
self.channel = channel
self._refclk = refclk
self._refclk_freq = refclk_freq
self._linerate = linerate
self.config = self.compute_config(refclk_freq, linerate)
#
# I/O ports
#
self.clk = Signal()
self.refclk = Signal()
self.reset = Signal()
self.lock = Signal()
self.drp = DRPInterface()
def elaborate(self, platform):
gtpe2_params = dict(
# Common Block Attributes
p_BIAS_CFG = 0x0000000000050001,
p_COMMON_CFG = 0x00000000,
# PLL Attributes
p_PLL_CLKOUT_CFG = 0x00,
p_PLLx_CFG = 0x01F03DC,
p_PLLx_DMON_CFG = 0b0,
p_PLLx_FBDIV = self.config["n2"],
p_PLLx_FBDIV_45 = self.config["n1"],
p_PLLx_INIT_CFG = 0x00001E,
p_PLLx_LOCK_CFG = 0x1E8,
p_PLLx_REFCLK_DIV = self.config["m"],
# Common Block - Dynamic Reconfiguration Port
i_DRPCLK = ClockSignal("ss"),
i_DRPADDR = self.drp.addr,
i_DRPDI = self.drp.di,
o_DRPDO = self.drp.do,
i_DRPWE = self.drp.we,
i_DRPEN = self.drp.en,
o_DRPRDY = self.drp.rdy,
# Common Block - Clocking Ports
i_GTREFCLK0 = self._refclk,
o_PLLxOUTCLK = self.clk,
o_PLLxOUTREFCLK = self.refclk,
# Common Block - PLL Ports
o_PLLxLOCK = self.lock,
i_PLLxLOCKEN = 1,
i_PLLxPD = 0,
i_PLLxREFCLKSEL = 0b001,
i_PLLxRESET = self.reset,
i_PLLyPD = 1,
# QPLL Ports
i_BGBYPASSB = 1,
i_BGMONITORENB = 1,
i_BGPDB = 1,
i_BGRCALOVRD = 0b11111,
i_RCALENB = 1,
)
if self.channel == 0:
pll_x, pll_y = "PLL0", "PLL1"
else:
pll_x, pll_y = "PLL1", "PLL0"
return Instance("GTPE2_COMMON", **{
name.replace("PLLx", pll_x).replace("PLLy", pll_y): value
for name, value in gtpe2_params.items()
})
@staticmethod
def compute_config(refclk_freq, linerate):
for n1 in 4, 5:
for n2 in 1, 2, 3, 4, 5:
for m in 1, 2:
vco_freq = refclk_freq*(n1*n2)/m
if 1.6e9 <= vco_freq <= 3.3e9:
for d in 1, 2, 4, 8, 16:
current_linerate = vco_freq*2/d
if current_linerate == linerate:
return {"n1": n1, "n2": n2, "m": m, "d": d,
"vco_freq": vco_freq,
"clkin": refclk_freq,
"linerate": linerate}
msg = "No config found for {:3.2f} MHz refclk / {:3.2f} Gbps linerate."
raise ValueError(msg.format(refclk_freq/1e6, linerate/1e9))
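# A hypothetical sanity check of the search above (values assumed, not
# taken from any specific board): with a 100 MHz reference clock and a
# 5 Gbps target,
#     GTPQuadPLL.compute_config(100e6, 5e9)
# yields n1=5, n2=5, m=1, d=1, since 100 MHz * (5*5) / 1 = 2.5 GHz is a
# legal VCO frequency and 2.5 GHz * 2 / 1 = 5 Gbps.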
def __repr__(self):
config = self.config
r = """
GTPQuadPLL
==========
overview:
---------
+--------------------------------------------------+
| |
| +---------------------------+ +-----+ |
| +-----+ | Phase Frequency Detector | | | |
CLKIN +----> /M +--> Charge Pump +-> VCO +---> CLKOUT
| +-----+ | Loop Filter | | | |
| +---------------------------+ +--+--+ |
| ^ | |
| | +-------+ +-------+ | |
| +----+ /N2 <----+ /N1 <----+ |
| +-------+ +-------+ |
+--------------------------------------------------+
+-------+
CLKOUT +-> 2/D +-> LINERATE
+-------+
config:
-------
CLKIN = {clkin}MHz
CLKOUT = CLKIN x (N1 x N2) / M = {clkin}MHz x ({n1} x {n2}) / {m}
= {vco_freq}GHz
LINERATE = CLKOUT x 2 / D = {vco_freq}GHz x 2 / {d}
= {linerate}GHz
""".format(clkin = config["clkin"]/1e6,
n1 = config["n1"],
n2 = config["n2"],
m = config["m"],
vco_freq = config["vco_freq"]/1e9,
d = config["d"],
linerate = config["linerate"]/1e9)
return r
class GTPChannel(Elaboratable):
def __init__(self, qpll, tx_pads, rx_pads, ss_clock_frequency):
self._qpll = qpll
self._tx_pads = tx_pads
self._rx_pads = rx_pads
self._ss_clock_frequency = ss_clock_frequency
# For now, always operate at 2x gearing, and using the corresponding width for
# the internal data path.
self._io_words = 2
self._data_width = self._io_words * 10
#
# I/O ports.
#
# Dynamic reconfiguration port
self.drp = DRPInterface()
# Interface clock
self.pclk = Signal()
# Reset sequencing
self.reset = Signal()
self.tx_ready = Signal()
self.rx_ready = Signal()
# Core Rx and Tx lines
self.tx_data = Signal(self._io_words * 8)
self.tx_datak = Signal(self._io_words)
self.rx_data = Signal(self._io_words * 8)
self.rx_datak = Signal(self._io_words)
# TX controls
self.tx_polarity = Signal()
self.tx_elec_idle = Signal()
self.tx_gpio_en = Signal()
self.tx_gpio = Signal()
# RX controls
self.rx_polarity = Signal()
self.rx_eq_training = Signal()
self.rx_termination = Signal()
# RX status
self.rx_valid = Signal()
self.rx_status = Signal(3)
self.rx_elec_idle = Signal()
def elaborate(self, platform):
m = Module()
# Aliases.
qpll = self._qpll
io_words = self._io_words
data_width = self._data_width
#
# Clocking.
#
# Ensure we have a valid PLL/CDR configuration.
assert qpll.config["linerate"] < 6.6e9
# From [UG482: Table 4-14]: CDR Recommended Settings for Protocols with SSC
rxcdr_cfgs = {
1: 0x0_0000_87FE_2060_2448_1010,
2: 0x0_0000_47FE_2060_2450_1010,
4: 0x0_0000_47FE_1060_2450_1010,
}
# Generate the PIPE interface clock from the transmit word clock, and use it to drive both
# the Tx and the Rx FIFOs, to bring both halves of the data bus to the same clock domain.
# The recovered Rx clock will not match the generated Tx clock; use the recovered word
# clock to drive the CTC FIFO in the transceiver, which will compensate for the difference.
txoutclk = Signal()
m.submodules += Instance("BUFG",
i_I=txoutclk,
o_O=self.pclk
)
platform.add_clock_constraint(self.pclk, 250e6)
# Transceiver uses a 25 MHz clock internally, which needs to be derived from
# the reference clock.
for clk25_div in range(1, 33):
if qpll._refclk_freq / clk25_div <= 25e6:
break
# Out of band sequence detector uses an auxiliary clock whose frequency is derived
# from the properties of the sequences.
m.submodules.oob_clkdiv = oob_clkdiv = GTOOBClockDivider(self._ss_clock_frequency)
#
# Initialization.
#
# Per [AR43482], GTP transceivers must not be reset immediately after configuration.
m.submodules.defer_rst = defer_rst = GTResetDeferrer(self._ss_clock_frequency)
m.d.comb += [
defer_rst.tx_i.eq(~qpll.lock | self.reset),
defer_rst.rx_i.eq(~qpll.lock | self.reset),
]
# Per [UG482], GTP receiver reset must follow a specific sequence.
m.submodules.rx_pma_rst = rx_pma_rst = GTPRXPMAResetWorkaround(self._ss_clock_frequency)
m.d.comb += [
rx_pma_rst.i.eq(defer_rst.rx_o)
]
tx_rst_done = Signal()
rx_rst_done = Signal()
m.d.comb += [
self.tx_ready.eq(defer_rst.done & tx_rst_done),
self.rx_ready.eq(defer_rst.done & rx_rst_done),
]
#
# Dynamic reconfiguration.
#
rx_termination = Signal()
m.submodules += FFSynchronizer(self.rx_termination, rx_termination, o_domain="ss")
m.submodules.rx_term = rx_term = DRPFieldController(
addr=0x0011, bits=slice(4, 6), reset=0b10) # RX_CM_SEL
m.d.comb += [
rx_term.value.eq(Mux(rx_termination,
0b11, # Programmable
0b10)), # Floating
]
m.submodules.drp_arbiter = drp_arbiter = DRPArbiter()
drp_arbiter.add_interface(rx_pma_rst.drp)
drp_arbiter.add_interface(rx_term.drp)
drp_arbiter.add_interface(self.drp)
#
# Core SerDes instantiation.
#
m.submodules.gtp = Instance("GTPE2_CHANNEL",
# Simulation-Only Attributes
p_SIM_RECEIVER_DETECT_PASS = "TRUE",
p_SIM_TX_EIDLE_DRIVE_LEVEL = "X",
p_SIM_RESET_SPEEDUP = "FALSE",
p_SIM_VERSION = "2.0",
# RX 8B/10B Decoder Attributes
p_RX_DISPERR_SEQ_MATCH = "FALSE",
p_DEC_MCOMMA_DETECT = "TRUE",
p_DEC_PCOMMA_DETECT = "TRUE",
p_DEC_VALID_COMMA_ONLY = "TRUE",
p_UCODEER_CLR = 0b0,
# RX Byte and Word Alignment Attributes
p_ALIGN_COMMA_DOUBLE = "FALSE",
p_ALIGN_COMMA_ENABLE = 0b1111_111111,
p_ALIGN_COMMA_WORD = 1,
p_ALIGN_MCOMMA_DET = "TRUE",
p_ALIGN_MCOMMA_VALUE = 0b0101_111100, # K28.5 RD- 10b code
p_ALIGN_PCOMMA_DET = "TRUE",
p_ALIGN_PCOMMA_VALUE = 0b1010_000011, # K28.5 RD+ 10b code
p_SHOW_REALIGN_COMMA = "TRUE",
p_RXSLIDE_AUTO_WAIT = 7,
p_RXSLIDE_MODE = "OFF",
p_RX_SIG_VALID_DLY = 10,
# RX Clock Correction Attributes
p_CBCC_DATA_SOURCE_SEL = "DECODED",
p_CLK_CORRECT_USE = "TRUE",
p_CLK_COR_KEEP_IDLE = "FALSE",
p_CLK_COR_MAX_LAT = 14,
p_CLK_COR_MIN_LAT = 11,
p_CLK_COR_PRECEDENCE = "TRUE",
p_CLK_COR_REPEAT_WAIT = 0,
p_CLK_COR_SEQ_LEN = 2,
p_CLK_COR_SEQ_1_ENABLE = 0b1111,
p_CLK_COR_SEQ_1_1 = 0b01_001_11100, # K28.1 1+8b code
p_CLK_COR_SEQ_1_2 = 0b01_001_11100, # K28.1 1+8b code
p_CLK_COR_SEQ_1_3 = 0b0000000000,
p_CLK_COR_SEQ_1_4 = 0b0000000000,
p_CLK_COR_SEQ_2_ENABLE = 0b1111,
p_CLK_COR_SEQ_2_USE = "FALSE",
p_CLK_COR_SEQ_2_1 = 0b0000000000,
p_CLK_COR_SEQ_2_2 = 0b0000000000,
p_CLK_COR_SEQ_2_3 = 0b0000000000,
p_CLK_COR_SEQ_2_4 = 0b0000000000,
# RX Channel Bonding Attributes
p_CHAN_BOND_KEEP_ALIGN = "FALSE",
p_CHAN_BOND_MAX_SKEW = 1,
p_CHAN_BOND_SEQ_LEN = 1,
p_CHAN_BOND_SEQ_1_1 = 0b0000000000,
p_CHAN_BOND_SEQ_1_2 = 0b0000000000,
p_CHAN_BOND_SEQ_1_3 = 0b0000000000,
p_CHAN_BOND_SEQ_1_4 = 0b0000000000,
p_CHAN_BOND_SEQ_1_ENABLE = 0b1111,
p_CHAN_BOND_SEQ_2_1 = 0b0000000000,
p_CHAN_BOND_SEQ_2_2 = 0b0000000000,
p_CHAN_BOND_SEQ_2_3 = 0b0000000000,
p_CHAN_BOND_SEQ_2_4 = 0b0000000000,
p_CHAN_BOND_SEQ_2_ENABLE = 0b1111,
p_CHAN_BOND_SEQ_2_USE = "FALSE",
p_FTS_DESKEW_SEQ_ENABLE = 0b1111,
p_FTS_LANE_DESKEW_CFG = 0b1111,
p_FTS_LANE_DESKEW_EN = "FALSE",
# RX Margin Analysis Attributes
p_ES_CONTROL = 0b000000,
p_ES_ERRDET_EN = "FALSE",
p_ES_EYE_SCAN_EN = "TRUE",
p_ES_HORZ_OFFSET = 0x000,
p_ES_PMA_CFG = 0b0000000000,
p_ES_PRESCALE = 0b00000,
p_ES_QUALIFIER = 0x00000000000000000000,
p_ES_QUAL_MASK = 0x00000000000000000000,
p_ES_SDATA_MASK = 0x00000000000000000000,
p_ES_VERT_OFFSET = 0b000000000,
# FPGA RX Interface Attributes
p_RX_DATA_WIDTH = data_width,
# PMA Attributes
p_OUTREFCLK_SEL_INV = 0b11,
p_PMA_RSV = 0x00000333,
p_PMA_RSV2 = 0x00002040,
p_PMA_RSV3 = 0b00,
p_PMA_RSV4 = 0b0000,
p_RX_BIAS_CFG = 0b0000111100110011,
p_DMONITOR_CFG = 0x000A00,
p_RX_CM_SEL = 0b10,
p_RX_CM_TRIM = 0b1010,
p_RX_DEBUG_CFG = 0b00000000000000,
p_RX_OS_CFG = 0b0000010000000,
p_TERM_RCAL_CFG = 0b100001000010000,
p_TERM_RCAL_OVRD = 0b000,
p_TST_RSV = 0x00000000,
p_RX_CLK25_DIV = clk25_div,
p_TX_CLK25_DIV = clk25_div,
# PCI Express Attributes
p_PCS_PCIE_EN = "FALSE",
# PCS Attributes
p_PCS_RSVD_ATTR = 0x0000_0000_0100, # OOB power up
# RX Buffer Attributes
p_RXBUF_ADDR_MODE = "FULL",
p_RXBUF_EIDLE_HI_CNT = 0b1000,
p_RXBUF_EIDLE_LO_CNT = 0b0000,
p_RXBUF_EN = "TRUE",
p_RX_BUFFER_CFG = 0b000000,
p_RXBUF_RESET_ON_CB_CHANGE = "TRUE",
p_RXBUF_RESET_ON_COMMAALIGN = "FALSE",
p_RXBUF_RESET_ON_EIDLE = "FALSE",
p_RXBUF_RESET_ON_RATE_CHANGE = "TRUE",
p_RXBUFRESET_TIME = 0b00001,
p_RXBUF_THRESH_OVFLW = 61,
p_RXBUF_THRESH_OVRD = "FALSE",
p_RXBUF_THRESH_UNDFLW = 4,
p_RXDLY_CFG = 0x001F,
p_RXDLY_LCFG = 0x030,
p_RXDLY_TAP_CFG = 0x0000,
p_RXPH_CFG = 0xC00002,
p_RXPHDLY_CFG = 0x084020,
p_RXPH_MONITOR_SEL = 0b00000,
p_RX_XCLK_SEL = "RXREC",
p_RX_DDI_SEL = 0b000000,
p_RX_DEFER_RESET_BUF_EN = "TRUE",
# CDR Attributes
p_RXCDR_CFG = rxcdr_cfgs[qpll.config["d"]],
p_RXCDR_FR_RESET_ON_EIDLE = 0b0,
p_RXCDR_HOLD_DURING_EIDLE = 0b0,
p_RXCDR_PH_RESET_ON_EIDLE = 0b0,
p_RXCDR_LOCK_CFG = 0b001001,
# RX Initialization and Reset Attributes
p_RXCDRFREQRESET_TIME = 0b00001,
p_RXCDRPHRESET_TIME = 0b00001,
p_RXISCANRESET_TIME = 0b00001,
p_RXPCSRESET_TIME = 0b00001,
p_RXPMARESET_TIME = 0b00011,
# RX OOB Signaling Attributes
p_RXOOB_CFG = 0b0000110,
# RX Gearbox Attributes
p_RXGEARBOX_EN = "FALSE",
p_GEARBOX_MODE = 0b000,
# PRBS Detection Attribute
p_RXPRBS_ERR_LOOPBACK = 0b0,
# Power-Down Attributes
p_PD_TRANS_TIME_FROM_P2 = 0x03c,
p_PD_TRANS_TIME_NONE_P2 = 0x3c,
p_PD_TRANS_TIME_TO_P2 = 0x64,
# RX OOB Signaling Attributes
p_SAS_MAX_COM = 64,
p_SAS_MIN_COM = 36,
p_SATA_BURST_SEQ_LEN = 0b0101,
p_SATA_BURST_VAL = 0b100,
p_SATA_EIDLE_VAL = 0b100,
p_SATA_MAX_BURST = 8,
p_SATA_MAX_INIT = 21,
p_SATA_MAX_WAKE = 7,
p_SATA_MIN_BURST = 4,
p_SATA_MIN_INIT = 12,
p_SATA_MIN_WAKE = 4,
# RX Fabric Clock Output Control Attributes
p_TRANS_TIME_RATE = 0x0E,
# TX Buffer Attributes
p_TXBUF_EN = "TRUE",
p_TXBUF_RESET_ON_RATE_CHANGE = "TRUE",
p_TXDLY_CFG = 0x001F,
p_TXDLY_LCFG = 0x030,
p_TXDLY_TAP_CFG = 0x0000,
p_TXPH_CFG = 0x0780,
p_TXPHDLY_CFG = 0x084020,
p_TXPH_MONITOR_SEL = 0b00000,
p_TX_XCLK_SEL = "TXOUT",
# FPGA TX Interface Attributes
p_TX_DATA_WIDTH = data_width,
# TX Configurable Driver Attributes
p_TX_DEEMPH0 = 0b000000,
p_TX_DEEMPH1 = 0b000000,
p_TX_DRIVE_MODE = "DIRECT",
p_TX_EIDLE_ASSERT_DELAY = 0b110,
p_TX_EIDLE_DEASSERT_DELAY = 0b100,
p_TX_LOOPBACK_DRIVE_HIZ = "FALSE",
p_TX_MAINCURSOR_SEL = 0b0,
p_TX_MARGIN_FULL_0 = 0b1001110,
p_TX_MARGIN_FULL_1 = 0b1001001,
p_TX_MARGIN_FULL_2 = 0b1000101,
p_TX_MARGIN_FULL_3 = 0b1000010,
p_TX_MARGIN_FULL_4 = 0b1000000,
p_TX_MARGIN_LOW_0 = 0b1000110,
p_TX_MARGIN_LOW_1 = 0b1000100,
p_TX_MARGIN_LOW_2 = 0b1000010,
p_TX_MARGIN_LOW_3 = 0b1000000,
p_TX_MARGIN_LOW_4 = 0b1000000,
p_TX_PREDRIVER_MODE = 0b0,
p_PMA_RSV5 = 0b0,
# TX Gearbox Attributes
p_TXGEARBOX_EN = "FALSE",
# TX Initialization and Reset Attributes
p_TXPCSRESET_TIME = 0b00001,
p_TXPMARESET_TIME = 0b00001,
# TX Receiver Detection Attributes
p_TX_RXDETECT_CFG = 0x1832,
p_TX_RXDETECT_REF = 0b100,
# JTAG Attributes
p_ACJTAG_DEBUG_MODE = 0b0,
p_ACJTAG_MODE = 0b0,
p_ACJTAG_RESET = 0b0,
# CDR Attributes
p_CFOK_CFG = 0x49000040E80,
p_CFOK_CFG2 = 0b0100000,
p_CFOK_CFG3 = 0b0100000,
p_CFOK_CFG4 = 0b0,
p_CFOK_CFG5 = 0x0,
p_CFOK_CFG6 = 0b0000,
p_RXOSCALRESET_TIME = 0b00011,
p_RXOSCALRESET_TIMEOUT = 0b00000,
# PMA Attributes
p_CLK_COMMON_SWING = 0b0,
p_RX_CLKMUX_EN = 0b1,
p_TX_CLKMUX_EN = 0b1,
p_ES_CLK_PHASE_SEL = 0b0,
p_USE_PCS_CLK_PHASE_SEL = 0b0,
p_PMA_RSV6 = 0b0,
p_PMA_RSV7 = 0b0,
# RX Fabric Clock Output Control Attributes
p_RXOUT_DIV = qpll.config["d"],
# TX Fabric Clock Output Control Attributes
p_TXOUT_DIV = qpll.config["d"],
# RX Phase Interpolator Attributes
p_RXPI_CFG0 = 0b000,
p_RXPI_CFG1 = 0b1,
p_RXPI_CFG2 = 0b1,
# RX Equalizer Attributes
p_ADAPT_CFG0 = 0x00000,
p_RXLPMRESET_TIME = 0b0001111,
p_RXLPM_BIAS_STARTUP_DISABLE = 0b0,
p_RXLPM_CFG = 0b0110,
p_RXLPM_CFG1 = 0b0,
p_RXLPM_CM_CFG = 0b0,
p_RXLPM_GC_CFG = 0b111100010,
p_RXLPM_GC_CFG2 = 0b001,
p_RXLPM_HF_CFG = 0b00001111110000,
p_RXLPM_HF_CFG2 = 0b01010,
p_RXLPM_HF_CFG3 = 0b0000,
p_RXLPM_HOLD_DURING_EIDLE = 0b0,
p_RXLPM_INCM_CFG = 0b1,
p_RXLPM_IPCM_CFG = 0b0,
p_RXLPM_LF_CFG = 0b000000001111110000,
p_RXLPM_LF_CFG2 = 0b01010,
p_RXLPM_OSINT_CFG = 0b100,
# TX Phase Interpolator PPM Controller Attributes
p_TXPI_CFG0 = 0b00,
p_TXPI_CFG1 = 0b00,
p_TXPI_CFG2 = 0b00,
p_TXPI_CFG3 = 0b0,
p_TXPI_CFG4 = 0b0,
p_TXPI_CFG5 = 0b000,
p_TXPI_GREY_SEL = 0b0,
p_TXPI_INVSTROBE_SEL = 0b0,
p_TXPI_PPMCLK_SEL = "TXUSRCLK2",
p_TXPI_PPM_CFG = 0x00,
p_TXPI_SYNFREQ_PPM = 0b001,
# LOOPBACK Attributes
p_LOOPBACK_CFG = 0b0,
p_PMA_LOOPBACK_CFG = 0b0,
# RX OOB Signalling Attributes
p_RXOOB_CLK_CFG = "FABRIC",
# TX OOB Signalling Attributes
p_SATA_PLL_CFG = "VCO_3000MHZ",
p_TXOOB_CFG = 0b0,
# RX Buffer Attributes
p_RXSYNC_MULTILANE = 0b0,
p_RXSYNC_OVRD = 0b0,
p_RXSYNC_SKIP_DA = 0b0,
# TX Buffer Attributes
p_TXSYNC_MULTILANE = 0b0,
p_TXSYNC_OVRD = 0b0,
p_TXSYNC_SKIP_DA = 0b0,
# CPLL Ports
i_GTRSVD = 0b0000000000000000,
i_PCSRSVDIN = 0b0000000000000000,
i_TSTIN = 0b11111111111111111111,
# Channel - DRP Ports
i_DRPCLK = ClockSignal("ss"),
i_DRPADDR = drp_arbiter.shared.addr,
i_DRPDI = drp_arbiter.shared.di,
o_DRPDO = drp_arbiter.shared.do,
i_DRPWE = drp_arbiter.shared.we,
i_DRPEN = drp_arbiter.shared.en,
o_DRPRDY = drp_arbiter.shared.rdy,
# Transceiver Reset Mode Operation
i_GTRESETSEL = 0,
i_RESETOVRD = 0,
# Clocking Ports
i_PLL0CLK = qpll.clk if qpll.channel == 0 else 0,
i_PLL0REFCLK = qpll.refclk if qpll.channel == 0 else 0,
i_PLL1CLK = qpll.clk if qpll.channel == 1 else 0,
i_PLL1REFCLK = qpll.refclk if qpll.channel == 1 else 0,
i_RXSYSCLKSEL = 0b00 if qpll.channel == 0 else 0b11,
i_TXSYSCLKSEL = 0b00 if qpll.channel == 0 else 0b11,
# Loopback Ports
i_LOOPBACK = 0b000,
# PMA Reserved Ports
i_PMARSVDIN3 = 0b0,
i_PMARSVDIN4 = 0b0,
# Power-Down Ports
i_RXPD = 0,
i_TXPD = 0b00,
# RX Initialization and Reset Ports
i_EYESCANRESET = 0,
i_GTRXRESET = rx_pma_rst.o,
i_RXLPMRESET = 0,
i_RXOOBRESET = 0,
i_RXPCSRESET = 0,
i_RXPMARESET = 0,
o_RXPMARESETDONE = rx_pma_rst.rxpmaresetdone,
o_RXRESETDONE = rx_rst_done,
i_RXUSERRDY = 1,
# Receive Ports
i_CLKRSVD0 = 0,
i_CLKRSVD1 = 0,
i_DMONFIFORESET = 0,
i_DMONITORCLK = 0,
i_SIGVALIDCLK = oob_clkdiv.o,
# Receive Ports - CDR Ports
i_RXCDRFREQRESET = 0,
i_RXCDRHOLD = 0,
o_RXCDRLOCK = Open(),
i_RXCDROVRDEN = 0,
i_RXCDRRESET = 0,
i_RXCDRRESETRSV = 0,
i_RXOSCALRESET = 0,
i_RXOSINTCFG = 0b0010,
o_RXOSINTDONE = Open(),
i_RXOSINTHOLD = 0,
i_RXOSINTOVRDEN = 0,
i_RXOSINTPD = 0,
o_RXOSINTSTARTED = Open(),
i_RXOSINTSTROBE = 0,
o_RXOSINTSTROBESTARTED = Open(),
i_RXOSINTTESTOVRDEN = 0,
# Receive Ports - Clock Correction Ports
o_RXCLKCORCNT = Open(2),
# Receive Ports - FPGA RX Interface Datapath Configuration
i_RX8B10BEN = 1,
# Receive Ports - FPGA RX Interface Ports
o_RXDATA = self.rx_data,
i_RXUSRCLK = self.pclk,
i_RXUSRCLK2 = self.pclk,
# Receive Ports - Pattern Checker Ports
o_RXPRBSERR = Open(),
i_RXPRBSSEL = 0b000,
i_RXPRBSCNTRESET = 0,
# Receive Ports - PCI Express Ports
o_PHYSTATUS = Open(),
i_RXRATE = 0,
o_RXSTATUS = self.rx_status,
o_RXVALID = self.rx_valid,
# Receive Ports - RX 8B/10B Decoder Ports
o_RXCHARISCOMMA = Open(4),
o_RXCHARISK = self.rx_datak,
o_RXDISPERR = Open(4),
o_RXNOTINTABLE = Open(4),
i_SETERRSTATUS = 0,
# Receive Ports - RX AFE Ports
i_GTPRXN = self._rx_pads.n,
i_GTPRXP = self._rx_pads.p,
i_PMARSVDIN2 = 0b0,
o_PMARSVDOUT0 = Open(),
o_PMARSVDOUT1 = Open(),
# Receive Ports - RX Buffer Bypass Ports
i_RXBUFRESET = 0,
o_RXBUFSTATUS = Open(3),
i_RXDDIEN = 0,
i_RXDLYBYPASS = 1,
i_RXDLYEN = 0,
i_RXDLYOVRDEN = 0,
i_RXDLYSRESET = 0,
o_RXDLYSRESETDONE = Open(),
i_RXPHALIGN = 0,
o_RXPHALIGNDONE = Open(),
i_RXPHALIGNEN = 0,
i_RXPHDLYPD = 0,
i_RXPHDLYRESET = 0,
o_RXPHMONITOR = Open(5),
i_RXPHOVRDEN = 0,
o_RXPHSLIPMONITOR = Open(5),
i_RXSYNCALLIN = 0,
o_RXSYNCDONE = Open(),
i_RXSYNCIN = 0,
i_RXSYNCMODE = 0,
o_RXSYNCOUT = Open(),
# Receive Ports - RX Byte and Word Alignment Ports
o_RXBYTEISALIGNED = Open(),
o_RXBYTEREALIGN = Open(),
o_RXCOMMADET = Open(),
i_RXCOMMADETEN = 1,
i_RXMCOMMAALIGNEN = 1,
i_RXPCOMMAALIGNEN = 1,
i_RXSLIDE = 0,
# Receive Ports - RX Channel Bonding Ports
o_RXCHANBONDSEQ = Open(),
o_RXCHANISALIGNED = Open(),
o_RXCHANREALIGN = Open(),
i_RXCHBONDEN = 0,
i_RXCHBONDI = 0b0000,
i_RXCHBONDLEVEL = 0b000,
i_RXCHBONDMASTER = 0,
o_RXCHBONDO = Open(4),
i_RXCHBONDSLAVE = 0,
# Receive Ports - RX Decision Feedback Equalizer
o_DMONITOROUT = Open(15),
i_RXADAPTSELTEST = 0,
i_RXDFEXYDEN = 0,
i_RXOSINTEN = 0b1,
i_RXOSINTID0 = 0,
i_RXOSINTNTRLEN = 0,
o_RXOSINTSTROBEDONE = Open(),
# Receive Ports - RX Equalizer Ports
i_RXLPMHFHOLD = ~self.rx_eq_training,
i_RXLPMHFOVRDEN = 0,
i_RXLPMLFHOLD = ~self.rx_eq_training,
i_RXLPMLFOVRDEN = 0,
i_RXLPMOSINTNTRLEN = 0,
i_RXOSHOLD = ~self.rx_eq_training,
i_RXOSOVRDEN = 0,
# Receive Ports - RX Fabric Clock Output Control Ports
o_RXRATEDONE = Open(),
i_RXRATEMODE = 0b0,
# Receive Ports - RX Fabric Output Control Ports
o_RXOUTCLK = Open(),
o_RXOUTCLKFABRIC = Open(),
o_RXOUTCLKPCS = Open(),
i_RXOUTCLKSEL = 0b010,
# Receive Ports - RX Gearbox Ports
o_RXDATAVALID = Open(2),
o_RXHEADER = Open(3),
o_RXHEADERVALID = Open(),
o_RXSTARTOFSEQ = Open(2),
i_RXGEARBOXSLIP = 0,
# Receive Ports - RX Margin Analysis Ports
o_EYESCANDATAERROR = Open(),
i_EYESCANMODE = 0,
i_EYESCANTRIGGER = 0,
# Receive Ports - RX OOB Signaling Ports
o_RXCOMSASDET = Open(),
o_RXCOMWAKEDET = Open(),
o_RXCOMINITDET = Open(),
o_RXELECIDLE = self.rx_elec_idle,
i_RXELECIDLEMODE = 0b00,
# Receive Ports - RX Polarity Control Ports
i_RXPOLARITY = self.rx_polarity,
# TX Initialization and Reset Ports
i_CFGRESET = 0,
i_GTTXRESET = defer_rst.tx_o,
i_TXPCSRESET = 0,
i_TXPMARESET = 0,
o_TXPMARESETDONE = Open(),
o_TXRESETDONE = tx_rst_done,
i_TXUSERRDY = 1,
o_PCSRSVDOUT = Open(),
# Transmit Ports - Configurable Driver Ports
o_GTPTXN = self._tx_pads.n,
o_GTPTXP = self._tx_pads.p,
i_TXBUFDIFFCTRL = 0b100,
i_TXDEEMPH = 0,
i_TXDIFFCTRL = 0b1000,
i_TXDIFFPD = 0,
i_TXINHIBIT = self.tx_gpio_en,
i_TXMAINCURSOR = 0b0000000,
i_TXPISOPD = 0,
i_TXPOSTCURSOR = 0b00000,
i_TXPOSTCURSORINV = 0,
i_TXPRECURSOR = 0b00000,
i_TXPRECURSORINV = 0,
i_PMARSVDIN0 = 0b0,
i_PMARSVDIN1 = 0b0,
# Transmit Ports - FPGA TX Interface Datapath Configuration
i_TX8B10BEN = 1,
# Transmit Ports - FPGA TX Interface Ports
i_TXUSRCLK = self.pclk,
i_TXUSRCLK2 = self.pclk,
# Transmit Ports - PCI Express Ports
i_TXELECIDLE = ~self.tx_gpio_en & self.tx_elec_idle,
i_TXMARGIN = 0,
i_TXRATE = 0b000,
i_TXSWING = 0,
# Transmit Ports - Pattern Generator Ports
i_TXPRBSSEL = 0b000,
i_TXPRBSFORCEERR = 0,
# Transmit Ports - TX 8B/10B Encoder Ports
i_TX8B10BBYPASS = 0b0000,
i_TXCHARDISPMODE = 0b0000,
i_TXCHARDISPVAL = 0b0000,
i_TXCHARISK = self.tx_datak,
# Transmit Ports - TX Data Path Interface
i_TXDATA = self.tx_data,
# Transmit Ports - TX Buffer Bypass Ports
i_TXDLYBYPASS = 1,
i_TXDLYEN = 0,
i_TXDLYHOLD = 0,
i_TXDLYOVRDEN = 0,
i_TXDLYSRESET = 0,
o_TXDLYSRESETDONE = Open(),
i_TXDLYUPDOWN = 0,
i_TXPHALIGN = 0,
o_TXPHALIGNDONE = Open(),
i_TXPHALIGNEN = 0,
i_TXPHDLYPD = 0,
i_TXPHDLYRESET = 0,
i_TXPHDLYTSTCLK = 0,
i_TXPHINIT = 0,
o_TXPHINITDONE = Open(),
i_TXPHOVRDEN = 0,
# Transmit Ports - TX Buffer Ports
o_TXBUFSTATUS = Open(2),
# Transmit Ports - TX Buffer and Phase Alignment Ports
i_TXSYNCALLIN = 0,
o_TXSYNCDONE = Open(),
i_TXSYNCIN = 0,
i_TXSYNCMODE = 0,
o_TXSYNCOUT = Open(),
# Transmit Ports - TX Fabric Clock Output Control Ports
o_TXOUTCLK = txoutclk,
o_TXOUTCLKFABRIC = Open(),
o_TXOUTCLKPCS = Open(),
i_TXOUTCLKSEL = 0b010,
i_TXRATEMODE = 0,
o_TXRATEDONE = Open(),
# Transmit Ports - TX Gearbox Ports
o_TXGEARBOXREADY = Open(),
i_TXHEADER = 0b000,
i_TXSEQUENCE = 0b0000000,
i_TXSTARTSEQ = 0,
# Transmit Ports - TX OOB Signalling Ports
o_TXCOMFINISH = Open(),
i_TXCOMINIT = 0,
i_TXCOMSAS = 0,
i_TXCOMWAKE = 0,
i_TXPDELECIDLEMODE = 0,
# Transmit Ports - TX Phase Interpolator PPM Controller Ports
i_TXPIPPMEN = 0,
i_TXPIPPMOVRDEN = 0,
i_TXPIPPMPD = 0,
i_TXPIPPMSEL = 1,
i_TXPIPPMSTEPSIZE = 0,
# Transmit Ports - TX Polarity Control Ports
i_TXPOLARITY = self.tx_polarity ^ (self.tx_gpio_en & self.tx_gpio),
# Transmit Ports - TX Receiver Detection Ports
i_TXDETECTRX = 0,
)
return m
class XC7GTPSerDesPIPE(PIPEInterface, Elaboratable):
""" Wrapper around the core GTP SerDes that adapts it to the PIPE interface.
The implementation-dependent behavior of the standard PIPE signals is described below:
width :
Interface width. Always 2 symbols.
clk :
Reference clock for the PHY receiver and transmitter. Could be routed through fabric,
or connected to the output of an ``IBUFDS_GTE2`` block.
pclk :
Clock for the PHY interface. Frequency is always 250 MHz.
phy_mode :
PHY operating mode. Only SuperSpeed USB mode is supported.
elas_buf_mode :
Elastic buffer mode. Only nominal half-full mode is supported.
rate :
Link signaling rate. Only 5 GT/s is supported.
power_down :
Power management mode. Only P0 is supported.
tx_deemph :
Transmitter de-emphasis level. Only TBD is supported.
tx_margin :
Transmitter voltage levels. Only TBD is supported.
tx_swing :
Transmitter voltage swing level. Only full swing is supported.
tx_detrx_lpbk :
tx_elec_idle :
Transmit control signals. Loopback and receiver detection are not implemented.
tx_compliance :
tx_ones_zeroes :
These inputs are not implemented.
power_present :
This output is not implemented. External logic may drive it if necessary.
"""
def __init__(self, *, tx_pads, rx_pads, refclk_frequency, ss_clock_frequency):
super().__init__(width=2)
self._tx_pads = tx_pads
self._rx_pads = rx_pads
self._refclk_frequency = refclk_frequency
self._ss_clock_frequency = ss_clock_frequency
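# A minimal instantiation sketch; the pad requests and clock
# frequencies below are placeholders, not values from any real design:
#     m.submodules.phy = XC7GTPSerDesPIPE(
#         tx_pads=platform.request("tx"), rx_pads=platform.request("rx"),
#         refclk_frequency=100e6, ss_clock_frequency=50e6)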
def elaborate(self, platform):
m = Module()
#
# PLL and SerDes instantiation.
#
m.submodules.qpll = qpll = GTPQuadPLL(
refclk = self.clk,
refclk_freq = self._refclk_frequency,
linerate = 5e9
)
m.submodules.serdes = serdes = GTPChannel(
qpll = qpll,
tx_pads = self._tx_pads,
rx_pads = self._rx_pads,
ss_clock_frequency = self._ss_clock_frequency
)
# Our soft PHY includes some logic that needs to run synchronously to the PIPE clock; create
# a local clock domain to drive it.
m.domains.pipe = ClockDomain(local=True, async_reset=True)
m.d.comb += [
ClockSignal("pipe") .eq(serdes.pclk),
]
#
# LFPS generation.
#
m.submodules.lfps_generator = lfps_generator = LFPSSquareWaveGenerator(25e6, 250e6)
m.d.comb += [
serdes.tx_gpio_en .eq(lfps_generator.tx_gpio_en),
serdes.tx_gpio .eq(lfps_generator.tx_gpio),
]
#
# PIPE interface signaling.
#
m.d.comb += [
qpll.reset .eq(self.reset),
serdes.reset .eq(self.reset),
self.pclk .eq(serdes.pclk),
serdes.tx_elec_idle .eq(self.tx_elec_idle),
serdes.rx_polarity .eq(self.rx_polarity),
serdes.rx_eq_training .eq(self.rx_eq_training),
serdes.rx_termination .eq(self.rx_termination),
lfps_generator.generate .eq(self.tx_detrx_lpbk & self.tx_elec_idle),
self.phy_status .eq(~serdes.tx_ready),
self.rx_valid .eq(serdes.rx_valid),
self.rx_status .eq(serdes.rx_status),
self.rx_elec_idle .eq(serdes.rx_elec_idle),
serdes.tx_data .eq(self.tx_data),
serdes.tx_datak .eq(self.tx_datak),
self.rx_data .eq(serdes.rx_data),
self.rx_datak .eq(serdes.rx_datak),
]
return m
| 39.510597 | 100 | 0.467961 | 4,028 | 41,012 | 4.427507 | 0.229643 | 0.006617 | 0.006673 | 0.006729 | 0.139957 | 0.077997 | 0.031849 | 0.00785 | 0.00785 | 0.00785 | 0 | 0.080093 | 0.463279 | 41,012 | 1,037 | 101 | 39.548698 | 0.730102 | 0.142958 | 0 | 0.042796 | 0 | 0.001427 | 0.043981 | 0.004648 | 0 | 0 | 0.008119 | 0 | 0.005706 | 1 | 0.011412 | false | 0.007133 | 0.008559 | 0 | 0.031384 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fe7efedfa8f56385459dfe830c3948c64e75c65 | 2,202 | py | Python | src/image_scripts/paper/PS_0p0V_a.py | flaviu-gostin/xrd_analysis_workflow | 47699b88d3e603ea1cc80079d59bd084a68d9bdb | [
"MIT"
] | 2 | 2020-09-11T19:49:30.000Z | 2021-11-17T09:23:49.000Z | src/image_scripts/paper/PS_0p0V_a.py | flaviu-gostin/xrd_analysis_workflow | 47699b88d3e603ea1cc80079d59bd084a68d9bdb | [
"MIT"
] | 1 | 2020-11-21T19:51:10.000Z | 2020-11-21T19:51:10.000Z | src/image_scripts/paper/PS_0p0V_a.py | flaviu-gostin/xrd_analysis_workflow | 47699b88d3e603ea1cc80079d59bd084a68d9bdb | [
"MIT"
] | null | null | null | """Create a powder diffraction figure"""
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
import sys
sys.path.append('..')
from plot_diffraction_patterns import powder_diffr_fig
measured_patterns_dir = "../../../results/intermediate/integrated_1D/PS_0p0V_a"
reference_peaks_dir = "../../../results/intermediate/peaks_references"
figure_fn = "../../../results/final/PS_0p0V_a.svg"
references = ['Pd3.97', 'Pd3.91', 'Pd', 'CuCl']
position_subplot_measured = 4
#layers = {'Corrosion\nproducts': (0, 149),
# 'Metallic\nglass': (149, 167)}
fig, axs = powder_diffr_fig(measured_patterns_dir=measured_patterns_dir,
position_subplot_measured=position_subplot_measured,
reference_peaks_dir=reference_peaks_dir,
offset_patterns=100,
label_every_nth_pattern=10,  # no labels wanted
references=references,
twotheta_range=[27, 36],
linewidth=0.3,
#layers=layers,
height_ratio_measured_to_reference=7)
ax_measured = axs[position_subplot_measured -1]
ax_measured.set(ylim=[-13500, 6500])
ax_measured.annotate('Corrosion\nproducts', xy=(1, 0.6),
xycoords='axes fraction',
xytext=(13, 0), textcoords='offset points', va='top',
color='black')
ax_measured.annotate('Metallic\nglass', xy=(1, 0.15),
xycoords='axes fraction',
xytext=(20, 0), textcoords='offset points', va='top',
color='magenta')
for i in range(3):
ax_i = axs[i]
ax_i.set(ylim=[0, 45])
ax_Pd = axs[2]
ax_Pd.annotate(r'$a = 3.89 \AA$', xy=(1, 0.5), xycoords='axes fraction',
xytext=(10, 0), textcoords='offset points', va='top',
color='blue')
ax_CuCl = axs[-1]
ax_CuCl.set(ylim=[0, 21])
#toplabel = ax_measured.annotate('$y = 4.671 mm', xy=)
axs[-1].xaxis.set_major_locator(MultipleLocator(2))
axs[-1].xaxis.set_minor_locator(MultipleLocator(0.5))
fig.savefig(figure_fn)
#plt.grid()
#plt.show()
| 35.516129 | 80 | 0.600363 | 269 | 2,202 | 4.698885 | 0.431227 | 0.039557 | 0.072785 | 0.061709 | 0.130538 | 0.130538 | 0.078323 | 0 | 0 | 0 | 0 | 0.051471 | 0.258856 | 2,202 | 61 | 81 | 36.098361 | 0.723039 | 0.099909 | 0 | 0.04878 | 0 | 0 | 0.15533 | 0.068528 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.097561 | 0 | 0.097561 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8fe97bb9e8bdef14a32c685ef3d9f30bd722998a | 416 | py | Python | clusim/__init__.py | yy/clusim | 5d3081d74e08d3e42e99f27834e1f9408af222a1 | [
"MIT"
] | 2 | 2019-01-13T19:21:09.000Z | 2022-03-04T17:05:42.000Z | clusim/__init__.py | yy/clusim | 5d3081d74e08d3e42e99f27834e1f9408af222a1 | [
"MIT"
] | null | null | null | clusim/__init__.py | yy/clusim | 5d3081d74e08d3e42e99f27834e1f9408af222a1 | [
"MIT"
] | null | null | null | __package__ = 'clusim'
__title__ = 'CluSim: A python package for clustering similarity'
__description__ = 'This package implements a series of methods to compare \
disjoint, overlapping, and hierarchical clusterings.'
__copyright__ = '2017, Gates, A.J., Ahn YY'
__author__ = """\n""".join([
'Alexander J Gates <ajgates42@gmail.com>',
'YY Ahn <yyahn@iu.edu>'
])
__version__ = '0.3'
__release__ = '0.3'
| 27.733333 | 75 | 0.701923 | 52 | 416 | 5.076923 | 0.788462 | 0.015152 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028736 | 0.163462 | 416 | 14 | 76 | 29.714286 | 0.729885 | 0 | 0 | 0 | 0 | 0 | 0.358173 | 0.050481 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8feab8882dff671c56c9fbfeba6ac87921f9199e | 8,132 | py | Python | online_main.py | samar-khanna/cs224w-project | 50b255f87fe395cb8b638ec599825af2d1fc172b | [
"MIT"
] | 2 | 2022-03-06T17:14:12.000Z | 2022-03-11T13:37:59.000Z | online_main.py | samar-khanna/cs224w-project | 50b255f87fe395cb8b638ec599825af2d1fc172b | [
"MIT"
] | null | null | null | online_main.py | samar-khanna/cs224w-project | 50b255f87fe395cb8b638ec599825af2d1fc172b | [
"MIT"
] | 2 | 2022-03-06T16:26:39.000Z | 2022-03-06T17:14:20.000Z | import os
import torch
import pickle
import argparse
import torch.optim as optim
from gnn_stack import GNNStack
from link_predictor import LinkPredictor
from torch_geometric.data import DataLoader
from ogb.linkproppred import PygLinkPropPredDataset
from train import train
from online_train import online_train
from online_eval import online_eval
from utils import print_and_log
def passed_arguments():
parser = argparse.ArgumentParser(description="Script for training in the online graph setting")
parser.add_argument('--data_path', type=str,
default='./dataset/online_init:1000-online_nodes:10-seed:0.pkl',
help='Path to data .pkl file')
parser.add_argument('--exp_dir', type=str, default=None,
help="Path to exp dir for model checkpoints and experiment logs")
parser.add_argument('--init_epochs', type=int, default=100,
help="Number of epochs for initial subgraph training")
parser.add_argument('--online_steps', type=int, default=10,
help="Number of gradient steps for online learning.")
parser.add_argument('--init_lr', type=float, default=1e-2,
help="Learning rate for initial graph pre-training")
parser.add_argument('--online_lr', type=float, default=1e-2,
help="Learning rate for online node learning")
parser.add_argument('--node_dim', type=int, default=256,
help='Embedding dimension for nodes')
parser.add_argument('--init_batch_size', type=int, default=1024 * 64,
help='Number of links per batch used in initial pre-training')
parser.add_argument('--online_batch_size', type=int, default=32,
help='Number of links per batch used for online learning')
return parser.parse_args()
if __name__ == "__main__":
args = passed_arguments()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
hidden_dim = 32
dropout = 0.5
num_layers = 4
optim_wd = 0
init_train_epochs = args.init_epochs
num_online_steps = args.online_steps
init_lr = args.init_lr
online_lr = args.online_lr
node_emb_dim = args.node_dim
init_batch_size = args.init_batch_size
online_batch_size = args.online_batch_size
path_to_dataset = args.data_path
exp_dir = args.exp_dir
# Get dataset
with open(path_to_dataset, 'rb') as f:
dataset = pickle.load(f)
init_nodes = dataset['init_nodes'].to(device)
init_edge_index = dataset['init_edge_index'].to(device)
init_pos_train = init_edge_index[:, ::2].to(device) # Relying on interleaved order
online_node_edge_index = dataset['online']
# Configure experiment saving directories
if exp_dir is None:
exp_dir = "./experiments"
dir = f"online.init_nodes:{len(init_nodes)}.num_online:{len(online_node_edge_index)}" \
f".{path_to_dataset.split('-')[2]}" \
f".epochs:{init_train_epochs}.online_steps:{num_online_steps}" \
f".layers:{num_layers}.hidden_dim:{hidden_dim}.node_dim:{node_emb_dim}" \
f".init_lr:{init_lr}.online_lr:{online_lr}.optim_wd:{optim_wd}" \
f".init_batch_size:{init_batch_size}.online_batch_size:{online_batch_size}"
exp_dir = os.path.join(exp_dir, dir)
model_dir = os.path.join(exp_dir, 'checkpoints')
logs_dir = os.path.join(exp_dir, 'logs')
os.makedirs(exp_dir, exist_ok=True)
os.makedirs(model_dir, exist_ok=True)
os.makedirs(logs_dir, exist_ok=True)
logfile_path = os.path.join(logs_dir, 'log.txt')
resfile_val_path = os.path.join(logs_dir, 'res_val.pkl')
resfile_test_path = os.path.join(logs_dir, 'res_test.pkl')
logfile = open(logfile_path, "a" if os.path.isfile(logfile_path) else "w", buffering=1)
# Create embedding, model, and optimizer
emb = torch.nn.Embedding(len(init_nodes) + max(online_node_edge_index) + 1, node_emb_dim).to(device)
model = GNNStack(node_emb_dim, hidden_dim, hidden_dim, num_layers, dropout, emb=True).to(device)
link_predictor = LinkPredictor(hidden_dim, hidden_dim, 1, num_layers + 1, dropout).to(device)
optimizer = optim.Adam(
list(model.parameters()) + list(link_predictor.parameters()) + list(emb.parameters()),
lr=init_lr, weight_decay=optim_wd
)
# Train on initial subgraph
for e in range(init_train_epochs):
loss = train(model, link_predictor, emb.weight[:len(init_nodes)], init_edge_index, init_pos_train.T,
init_batch_size, optimizer)
print_and_log(logfile, f"Epoch {e + 1}/{init_train_epochs}: Loss = {round(loss, 5)}")
if (e + 1) % 20 == 0:
torch.save(model.state_dict(), os.path.join(model_dir, f"init_train:{e}.pt"))
# New optimizer for online learning (don't update GraphSAGE)
optimizer = optim.Adam(
list(link_predictor.parameters()) + list(emb.parameters()),
lr=online_lr, weight_decay=optim_wd
)
curr_nodes = init_nodes
curr_edge_index = init_edge_index # (2, E)
val_preds, test_preds = {}, {}
for n_id, node_split in online_node_edge_index.items():
train_msg, train_sup, train_neg, valid, valid_neg, test, test_neg = \
node_split['train_msg'], node_split['train_sup'], node_split['train_neg'], \
node_split['valid'], node_split['valid_neg'], node_split['test'], node_split['test_neg']
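# train_msg edges only carry messages through the GNN; train_sup and
# train_neg supply the positive/negative supervision links, while the
# valid/test splits are held out for evaluation.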
train_msg = train_msg.to(device)
train_sup = train_sup.to(device)
train_neg = train_neg.to(device)
valid = valid.to(device)
valid_neg = valid_neg.to(device)
test = test.to(device)
test_neg = test_neg.to(device)
# Add message edges to edge index
curr_edge_index = torch.cat((curr_edge_index, train_msg.T), dim=1) # (2, E+Tr_msg)
# Add new node to list of curr_nodes
curr_nodes = torch.cat((curr_nodes, torch.as_tensor([n_id], device=device)))
# Create new embedding for n_id
# optimizer.param_groups[0]['params'].extend(node_emb.parameters())
# Warm start embedding for new node
with torch.no_grad():
emb.weight[n_id] = emb.weight[curr_nodes].mean(dim=0)
# Nodes are ordered sequentially (online node ids start at len(init_nodes))
for t in range(num_online_steps):
loss = online_train(model, link_predictor, emb.weight[:n_id + 1],
curr_edge_index, train_sup, train_neg, online_batch_size, optimizer, device)
print_and_log(logfile, f"Step {t + 1}/{num_online_steps}: loss = {round(loss, 5)}")
torch.save(model.state_dict(), os.path.join(model_dir, f"online_id:{n_id}_model.pt"))
torch.save(emb.state_dict(), os.path.join(model_dir, f"online_id:{n_id}_emb.pt"))
torch.save(link_predictor.state_dict(), os.path.join(model_dir, f"online_id:{n_id}_lp.pt"))
val_tp, val_tn, val_fp, val_fn, preds = online_eval(model, link_predictor, emb.weight[:n_id + 1],
curr_edge_index, valid, valid_neg, online_batch_size)
val_preds[n_id] = preds
test_tp, test_tn, test_fp, test_fn, preds = online_eval(model, link_predictor, emb.weight[:n_id + 1],
curr_edge_index, test, test_neg, online_batch_size)
test_preds[n_id] = preds
print_and_log(logfile, f"For node {n_id}")
print_and_log(logfile, f"VAL accuracy: {(val_tp + val_tn) / (val_tp + val_tn + val_fp + val_fn)}")
print_and_log(logfile, f"VAL tp: {val_tp}, fn: {val_fn}, tn: {val_tn}, fp: {val_fp}")
print_and_log(logfile, f"TEST accuracy: {(test_tp + test_tn) / (test_tp + test_tn + test_fp + test_fn)}")
print_and_log(logfile, f"TEST tp: {test_tp}, fn: {test_fn}, tn: {test_tn}, fp: {test_fp}")
with open(resfile_val_path, 'wb') as f:
pickle.dump(val_preds, f)
with open(resfile_test_path, 'wb') as f:
pickle.dump(test_preds, f)
logfile.close()
| 46.204545 | 113 | 0.65728 | 1,181 | 8,132 | 4.237934 | 0.176969 | 0.028771 | 0.01998 | 0.025175 | 0.267133 | 0.224975 | 0.142058 | 0.120879 | 0.085714 | 0.085714 | 0 | 0.008714 | 0.223807 | 8,132 | 175 | 114 | 46.468571 | 0.784221 | 0.060994 | 0 | 0.015038 | 0 | 0.015038 | 0.210602 | 0.070201 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007519 | false | 0.015038 | 0.097744 | 0 | 0.112782 | 0.06015 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8feb7480e4bbc54978c0c32f4a7ee1a2f74788fe | 13,618 | py | Python | playground/simulation/forwardtesting/session.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | playground/simulation/forwardtesting/session.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | playground/simulation/forwardtesting/session.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | __title__ = "simulation"
__author__ = "murlux"
__copyright__ = "Copyright 2019, " + __author__
__credits__ = (__author__, )
__license__ = "MIT"
__email__ = "murlux@protonmail.com"
import bokeh.plotting
import pandas as pd
import numpy as np
import warnings
import time
import logging
from datetime import datetime as dt
from dateutil.parser import parse
from dateutil.relativedelta import relativedelta as rd
from typing import Callable
# Local imports
from playground import settings
from playground.notifiers import TwitterNotifier
from playground.simulation import Account, helpers
from playground.util import setup_logger
from playground.util_ops import get_delta_callable_for_tf
class ForwardTestSession():
"""An object representing a Forward Testing Simulation."""
backdata: pd.DataFrame = None
yesterday: pd.DataFrame = None
today: pd.DataFrame = None
data: pd.DataFrame = pd.DataFrame()
initial_capital: float = 1.0
pair: dict = None
tf: dict = None
tracker: list = None
logic: Callable = None
logger: logging.Logger
_name: str = ''
_tts: str = ''
_simple_tts: str = ''
__start_time: dt = None
__next_candle: dt = None
__next_analysis: dt = None
__analysis_throttle: rd = None
__verbosity: bool = False
def __init__(self, data, yesterday, initial_capital, pair, tf, logic,):
"""Initiate the ForwardTestSession.
:param data: An HLOCV+ pandas dataframe with a datetime index
:type data: pandas.DataFrame
:param yesterday: An HLOCV+ pandas dataframe with a datetime index
:type yesterday: pandas.DataFrame
:param initial_capital: Starting capital to fund account
:type initial_capital: float
:param pair: Operating market pair
:type pair: MarketPair obj
:param tf: Operating timeframe
:type tf: str
:param logic: A function that will be applied to each lookback period of the data
:type logic: function
:return: A forwardtesting simulation
:rtype: ForwardTestSession
"""
if not isinstance(data, pd.DataFrame):
raise ValueError("Data must be a pandas dataframe")
missing = set(['high', 'low', 'open', 'close', 'volume'])-set(data.columns)
if len(missing) > 0:
msg = "Missing {0} column(s), dataframe must be HLOCV+".format(list(missing))
warnings.warn(msg)
self.tracker = []
self.backdata = data.copy()
self.yesterday = yesterday
self.today = None
self.backdata = self.backdata.set_index('datetime').sort_index(ascending=False)
self.account = Account(initial_capital=initial_capital, pair=pair, tf=tf)
self.logic = logic
self.pair = pair
self.tf = tf
self._simple_tts = '{} - {} :: {}\n\n'.format(
self.pair, self.tf, logic.__name__,
)
self._tts = __name__+'\n\n{} - {}\n :: {}\n\n'.format(
self.pair, self.tf, logic.__name__,
)
self._name = __name__+'. {} - {} :: {} :: {}'.format(
self.pair, self.tf, logic.__name__, str(dt.now().date()),
).replace(' ', '')
self.logger = setup_logger(name=self._name)
# rd stands for relativedelta
rd_call: Callable = None
rd_args: dict = None
rd_call, rd_args = get_delta_callable_for_tf(tf=self.tf)
self.__verbosity = settings.FORWARDTESTING_VERBOSITY
self.__analysis_throttle = rd_call(**rd_args)
self.__next_candle = (dt.fromtimestamp(self.yesterday.time) + self.__analysis_throttle)
self.__next_analysis = (self.__next_candle + self.__analysis_throttle)
self.__start_time = dt.now()
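# The next candle closes one timeframe after `yesterday`; analysis is
# deferred a further timeframe so process() only ever sees fully
# closed candles.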
self.logger.info('Forwardtesting session started for: {}-{} using {} at {} '.format(
self.pair, self.tf, self.logic.__name__, self.__start_time,
),
)
self.logger.info('next analysis {}'.format(self.__next_analysis))
def update_dataset(self, dataset):
"""Process ForwardTestSession.
:param dataset: An HLOCV+ pandas dataframe with a datetime index
:type dataset: pandas.DataFrame
"""
self.backdata = dataset
def process(self, today):
"""Process ForwardTestSession.
:param today: An HLOCV+ pandas dataframe with the last closed candle
:type today: pandas.DataFrame
"""
current_time = dt.now()
if current_time > (self.__next_analysis):
self.logger.info(
'Processing... %-4s - %-4s - %-4s ' + '------------'*10,
self.pair, self.tf, today.datetime,
)
self.logger.info(
'O: %-6.6g - H: %-6.6g - L: %-6.6g - C: %-6.6g - V: %-6.6g - MRFI:' \
+' %-6.6g - SMRFI: %-6.6g - RSI: %-6.6g - MFI: %-6.6g - EMA50: %-6.6g - EMA100: %-6.6g', \
today.open, today.high, today.low, today.close, today.volumeto, today.mrfi,
today.smrfi, today.rsi, today.mfi, today.ema50, today.ema100,
)
date = today.get('datetime')
equity = self.account.total_value(today.close)
self.data = self.data.append(today)
self.data.sort_index(inplace=True, ascending=False)
# Handle stop loss
for p in self.account.positions:
if p.type == "long":
if p.stop_hit(today.get('low')):
self.account.close_position(p, 1.0, today.get('low'))
if p.type == "short":
if p.stop_hit(today.get('high')):
self.account.close_position(p, 1.0, today.get('high'))
self.account.purge_positions()
# Update account variables
self.account.date = date
self.account.equity.append(equity)
# Equity tracking
self.tracker.append({
'date': date,
'benchmark_equity': today.get('close'),
'strategy_equity': equity,
})
self.logger.info('Executing trading logic... LookbackData: {} :: Data: {}'.format(
self.backdata.shape, self.data.shape
))
# Execute trading logic and allow full lookback
self.logic(
name=self._name,
pair=self.pair,
timeframe=self.tf,
account=self.account,
dataset=self.backdata,
lookback=self.data,
logger=self.logger,
last_candle=today,
_tts=self._tts,
_simple_tts=self._simple_tts
)
self.__next_candle = (dt.fromtimestamp(today.time) + self.__analysis_throttle)
self.__next_analysis = (self.__next_analysis + self.__analysis_throttle)
self.yesterday = today
# Cleanup empty positions
# self.account.purge_positions()
# ------------------------------------------------------------
def print_results(self):
"""Print results"""
self.logger.info("-------------- Results ----------------\n")
begin_price = self.data.iloc[0].open
final_price = self.data.iloc[-1].close
pc = helpers.percent_change(begin_price, final_price)
tweet_string = "--{}--\n".format(self._name)
tweet_string += "Begin vs end : {0} {1}\n".format(begin_price, final_price)
tweet_string += "Buy and Hold : {0}%\n".format(round(pc*100, 2))
tweet_string += "Net Profit : {0}\n".format(round(helpers.profit(self.account.initial_capital, pc), 2))
pc = helpers.percent_change(self.account.initial_capital, self.account.total_value(final_price))
tweet_string += "Strategy : {0}%\n".format(round(pc*100, 2))
tweet_string += "Net Profit : {0}\n".format(round(helpers.profit(self.account.initial_capital, pc), 2))
longs = len([t for t in self.account.opened_trades if t.type == 'long'])
sells = len([t for t in self.account.closed_trades if t.type == 'long'])
shorts = len([t for t in self.account.opened_trades if t.type == 'short'])
covers = len([t for t in self.account.closed_trades if t.type == 'short'])
tweet_string += "Longs : {0}\n".format(longs)
tweet_string += "Sells : {0}\n".format(sells)
tweet_string += "Shorts : {0}\n".format(shorts)
tweet_string += "Covers : {0}\n".format(covers)
tweet_string += "--------------------\n"
tweet_string += "Total Trades : {0}\n".format(longs + sells + shorts + covers)
tweet_string += "---------------------------------------"
self.logger.info(tweet_string)
#tn = TwitterNotifier()
#tn.post_results_tweet(tweet_string)
def _get_results(self):
"""
Return results as dict.
        TODO: refactor to reduce the duplication below.
        """
longs = len([t for t in self.account.opened_trades if t.type == 'long'])
sells = len([t for t in self.account.closed_trades if t.type == 'long'])
shorts = len([t for t in self.account.opened_trades if t.type == 'short'])
covers = len([t for t in self.account.closed_trades if t.type == 'short'])
if len(self.data) != 0:
begin_price = self.data.iloc[0].open
final_price = self.data.iloc[-1].close
buy_hold_pc = helpers.percent_change(begin_price, final_price)
strategy_pc = helpers.percent_change(self.account.initial_capital, self.account.total_value(final_price))
return {
'name': self._name,
'begin_price': begin_price,
'final_price': final_price,
'buy_and_hold': {
'rate_on_equity': round(buy_hold_pc*100, 2),
'net_profit': round(helpers.profit(self.account.initial_capital, buy_hold_pc), 2),
},
'strategy':{
'rate_on_equity': round(strategy_pc*100, 2),
'net_profit': round(helpers.profit(self.account.initial_capital, strategy_pc), 2),
'long_count': longs,
'sell_count': sells,
'short_count': shorts,
'cover_count': covers,
'total': longs + sells + shorts + covers,
},
'positions': self.account._get_positions(),
}
else:
begin_price = 'N/A'
final_price = 'N/A'
buy_hold_pc = 'N/A'
strategy_pc = 'N/A'
return {
'name': self._name,
'begin_price': begin_price,
'final_price': final_price,
'buy_and_hold': {
'rate_on_equity': 0,
'net_profit': 0,
},
'strategy':{
'rate_on_equity': 0,
'net_profit': 0,
'long_count': longs,
'sell_count': sells,
'short_count': shorts,
'cover_count': covers,
'total': longs + sells + shorts + covers,
},
'positions': self.account._get_positions(),
}
def _get_longs(self):
return self.account._get_longs()
def _get_shorts(self):
return self.account._get_shorts()
def chart(self, show_trades=False, title="Equity Curve"):
"""Chart results.
:param show_trades: Show trades on plot
:type show_trades: bool
:param title: Plot title
:type title: str
"""
        bokeh.plotting.output_file("{0}chart-{1}.html".format(settings.FORWARDTESTS_CHARTS_FOLDER, self._name), title=title)
p = bokeh.plotting.figure(x_axis_type="datetime", plot_width=1000, plot_height=400, title=title)
p.grid.grid_line_alpha = 0.3
p.xaxis.axis_label = 'Date'
p.yaxis.axis_label = 'Equity'
shares = self.account.initial_capital/self.data.iloc[-1].open
base_equity = [price*shares for price in self.data.open]
p.line(self.data.datetime, base_equity, color='#CAD8DE', legend_label='Buy and Hold')
p.line(self.data.datetime, self.account.equity, color='#49516F', legend_label='Strategy')
p.legend.location = "top_left"
if show_trades:
for trade in self.account.opened_trades:
try:
x = time.mktime(trade.date.timetuple())*1000
y = self.account.equity[np.where(self.data.datetime == trade.date)[0][0]]
if trade.type == 'long': p.circle(x, y, size=6, color='green', alpha=0.5)
elif trade.type == 'short': p.circle(x, y, size=6, color='red', alpha=0.5)
                except Exception:
                    # skip trades whose dates cannot be located on the equity curve
                    pass
for trade in self.account.closed_trades:
try:
x = time.mktime(trade.date.timetuple())*1000
y = self.account.equity[np.where(self.data.datetime == trade.date)[0][0]]
if trade.type == 'long': p.circle(x, y, size=6, color='blue', alpha=0.5)
elif trade.type == 'short': p.circle(x, y, size=6, color='orange', alpha=0.5)
                except Exception:
                    # skip trades whose dates cannot be located on the equity curve
                    pass
bokeh.plotting.show(p) | 41.773006 | 123 | 0.561463 | 1,574 | 13,618 | 4.658196 | 0.187421 | 0.05401 | 0.019504 | 0.008729 | 0.364021 | 0.313284 | 0.290644 | 0.282051 | 0.282051 | 0.241408 | 0 | 0.013773 | 0.306873 | 13,618 | 326 | 124 | 41.773006 | 0.763005 | 0.106844 | 0 | 0.224066 | 0 | 0.008299 | 0.118584 | 0.006911 | 0 | 0 | 0 | 0.003067 | 0 | 1 | 0.033195 | false | 0.008299 | 0.062241 | 0.008299 | 0.190871 | 0.004149 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8ff1e45352cd99cbac855e077cebbf9c64db6800 | 12,868 | py | Python | ahrs/filters/roleq.py | jaluebbe/ahrs | 4b4a33b1006e0d455a71ac8379a2697202361758 | [
"MIT"
] | null | null | null | ahrs/filters/roleq.py | jaluebbe/ahrs | 4b4a33b1006e0d455a71ac8379a2697202361758 | [
"MIT"
] | null | null | null | ahrs/filters/roleq.py | jaluebbe/ahrs | 4b4a33b1006e0d455a71ac8379a2697202361758 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Recursive Optimal Linear Estimator of Quaternion
================================================
This is a modified `OLEQ <./oleq.html>`_, where a recursive estimation of the
attitude is made with the measured angular velocity [Zhou2018]_. This
estimation is set as the initial value for the OLEQ estimation, simplifying the
rotational operations.
First, the quaternion :math:`\\mathbf{q}_\\omega` is estimated from the angular
velocity, :math:`\\boldsymbol\\omega=\\begin{bmatrix}\\omega_x & \\omega_y &
\\omega_z \\end{bmatrix}^T`, measured by the gyroscopes, in rad/s, at a time
:math:`t` as:
.. math::
\\mathbf{q}_\\omega = \\Big(\\mathbf{I}_4 + \\frac{\\Delta t}{2}\\boldsymbol\\Omega_t\\Big)\\mathbf{q}_{t-1} =
\\begin{bmatrix}
q_w - \\frac{\\Delta t}{2} \\omega_x q_x - \\frac{\\Delta t}{2} \\omega_y q_y - \\frac{\\Delta t}{2} \\omega_z q_z\\\\
q_x + \\frac{\\Delta t}{2} \\omega_x q_w - \\frac{\\Delta t}{2} \\omega_y q_z + \\frac{\\Delta t}{2} \\omega_z q_y\\\\
q_y + \\frac{\\Delta t}{2} \\omega_x q_z + \\frac{\\Delta t}{2} \\omega_y q_w - \\frac{\\Delta t}{2} \\omega_z q_x\\\\
q_z - \\frac{\\Delta t}{2} \\omega_x q_y + \\frac{\\Delta t}{2} \\omega_y q_x + \\frac{\\Delta t}{2} \\omega_z q_w
\\end{bmatrix}
Then, the attitude is "corrected" through OLEQ using a single multiplication of
its rotation operator:
.. math::
\\mathbf{q}_\\mathbf{ROLEQ} = \\frac{1}{2}\\Big(\\mathbf{I}_4 + \\sum_{i=1}^na_i\\mathbf{W}_i\\Big)\\mathbf{q}_\\omega
where each :math:`\\mathbf{W}` (one for accelerations and one for magnetic
field) is built from their reference vectors, :math:`D^r`, and measurements,
:math:`D^b`, exactly as in OLEQ:
.. math::
\\begin{array}{rcl}
\\mathbf{W} &=& D_x^r\\mathbf{M}_1 + D_y^r\\mathbf{M}_2 + D_z^r\\mathbf{M}_3 \\\\ && \\\\
\\mathbf{M}_1 &=&
\\begin{bmatrix}
D_x^b & 0 & D_z^b & -D_y^b \\\\
0 & D_x^b & D_y^b & D_z^b \\\\
D_z^b & D_y^b & -D_x^b & 0 \\\\
-D_y^b & D_z^b & 0 & -D_x^b
\\end{bmatrix} \\\\
\\mathbf{M}_2 &=&
\\begin{bmatrix}
D_y^b & -D_z^b & 0 & D_x^b \\\\
-D_z^b & -D_y^b & D_x^b & 0 \\\\
0 & D_x^b & D_y^b & D_z^b \\\\
D_x^b & 0 & D_z^b & -D_y^b
\\end{bmatrix} \\\\
\\mathbf{M}_3 &=&
\\begin{bmatrix}
D_z^b & D_y^b & -D_x^b & 0 \\\\
D_y^b & -D_z^b & 0 & D_x^b \\\\
-D_x^b & 0 & -D_z^b & D_y^b \\\\
0 & D_x^b & D_y^b & D_z^b
\\end{bmatrix}
\\end{array}
It is noticeable that, for OLEQ, a random quaternion was used as a starting
value for an iterative procedure to find the optimal quaternion. Here, that
initial value is now :math:`\\mathbf{q}_\\omega` and a simple product (instead
of a large iterative product) is required.
In this way, the quaternions are recursively computed with much fewer
computations, and the accuracy is maintained.
In this case, however, data from all three sensors (gyroscopes, accelerometers
and magnetometers) have to be provided, along with an initial quaternion,
:math:`\\mathbf{q}_0`, upon which the attitude is built.
References
----------
.. [Zhou2018] Zhou, Z.; Wu, J.; Wang, J.; Fourati, H. Optimal, Recursive and
Sub-Optimal Linear Solutions to Attitude Determination from Vector
Observations for GNSS/Accelerometer/Magnetometer Orientation Measurement.
Remote Sens. 2018, 10, 377.
(https://www.mdpi.com/2072-4292/10/3/377)
"""
import numpy as np
from ..common.orientation import ecompass
from ..common.mathfuncs import cosd, sind
class ROLEQ:
"""
Recursive Optimal Linear Estimator of Quaternion
Uses OLEQ to estimate the initial attitude.
Parameters
----------
gyr : numpy.ndarray, default: None
N-by-3 array with measurements of angular velocity in rad/s.
acc : numpy.ndarray, default: None
        N-by-3 array with measurements of acceleration in m/s^2.
mag : numpy.ndarray, default: None
N-by-3 array with measurements of magnetic field in mT.
Attributes
----------
gyr : numpy.ndarray
N-by-3 array with N gyroscope samples.
acc : numpy.ndarray
N-by-3 array with N accelerometer samples.
mag : numpy.ndarray
N-by-3 array with N magnetometer samples.
frequency : float
        Sampling frequency in Hertz.
Dt : float
Sampling step in seconds. Inverse of sampling frequency.
Q : numpy.array, default: None
M-by-4 Array with all estimated quaternions, where M is the number of
samples. Equal to None when no estimation is performed.
Raises
------
ValueError
When dimension of input arrays ``gyr``, ``acc`` or ``mag`` are not
equal.
Examples
--------
>>> gyr_data.shape, acc_data.shape, mag_data.shape # NumPy arrays with sensor data
((1000, 3), (1000, 3), (1000, 3))
>>> from ahrs.filters import ROLEQ
>>> orientation = ROLEQ(gyr=gyr_data, acc=acc_data, mag=mag_data)
>>> orientation.Q.shape # Estimated attitude
(1000, 4)
"""
def __init__(self,
gyr: np.ndarray = None,
acc: np.ndarray = None,
mag: np.ndarray = None,
weights: np.ndarray = None,
magnetic_ref: np.ndarray = None,
frame: str = 'NED',
**kwargs
):
self.gyr = gyr
self.acc = acc
self.mag = mag
self.a = weights if weights is not None else np.ones(2)
self.Q = None
self.frequency = kwargs.get('frequency', 100.0)
self.Dt = kwargs.get('Dt', 1.0/self.frequency)
self.q0 = kwargs.get('q0')
self.frame = frame
# Reference measurements
self._set_reference_frames(magnetic_ref, self.frame)
# Estimate all quaternions if data is given
if self.acc is not None and self.gyr is not None and self.mag is not None:
self.Q = self._compute_all()
    def _set_reference_frames(self, mref, frame: str = 'NED'):
if frame.upper() not in ['NED', 'ENU']:
raise ValueError(f"Invalid frame '{frame}'. Try 'NED' or 'ENU'")
#### Magnetic Reference Vector ####
if mref is None:
# Local magnetic reference of Munich, Germany
from ..common.constants import MUNICH_LATITUDE, MUNICH_LONGITUDE, MUNICH_HEIGHT
from ..utils.wmm import WMM
wmm = WMM(latitude=MUNICH_LATITUDE, longitude=MUNICH_LONGITUDE, height=MUNICH_HEIGHT)
cd, sd = cosd(wmm.I), sind(wmm.I)
self.m_ref = np.array([sd, 0.0, cd]) if frame.upper() == 'NED' else np.array([0.0, cd, -sd])
elif isinstance(mref, (int, float)):
# Use given magnetic dip angle (in degrees)
cd, sd = cosd(mref), sind(mref)
self.m_ref = np.array([sd, 0.0, cd]) if frame.upper() == 'NED' else np.array([0.0, cd, -sd])
else:
self.m_ref = np.copy(mref)
self.m_ref /= np.linalg.norm(self.m_ref)
#### Gravitational Reference Vector ####
self.a_ref = np.array([0.0, 0.0, -1.0]) if frame.upper() == 'NED' else np.array([0.0, 0.0, 1.0])
def _compute_all(self) -> np.ndarray:
"""
Estimate the quaternions given all data.
Attributes ``gyr``, ``acc`` and ``mag`` must contain data.
Returns
-------
Q : numpy.ndarray
M-by-4 Array with all estimated quaternions, where M is the number
of samples.
"""
if self.acc.shape != self.gyr.shape:
raise ValueError("acc and gyr are not the same size")
if self.acc.shape != self.mag.shape:
raise ValueError("acc and mag are not the same size")
num_samples = np.atleast_2d(self.acc).shape[0]
if num_samples < 2:
raise ValueError("ROLEQ needs at least 2 samples of each sensor")
Q = np.zeros((num_samples, 4))
Q[0] = ecompass(-self.acc[0], self.mag[0], frame=self.frame, representation='quaternion') if self.q0 is None else self.q0
for t in range(1, num_samples):
Q[t] = self.update(Q[t-1], self.gyr[t], self.acc[t], self.mag[t])
return Q
def attitude_propagation(self, q: np.ndarray, omega: np.ndarray, dt: float) -> np.ndarray:
"""
Attitude estimation from previous quaternion and current angular velocity.
.. math::
\\mathbf{q}_\\omega = \\Big(\\mathbf{I}_4 + \\frac{\\Delta t}{2}\\boldsymbol\\Omega_t\\Big)\\mathbf{q}_{t-1} =
\\begin{bmatrix}
q_w - \\frac{\\Delta t}{2} \\omega_x q_x - \\frac{\\Delta t}{2} \\omega_y q_y - \\frac{\\Delta t}{2} \\omega_z q_z\\\\
q_x + \\frac{\\Delta t}{2} \\omega_x q_w - \\frac{\\Delta t}{2} \\omega_y q_z + \\frac{\\Delta t}{2} \\omega_z q_y\\\\
q_y + \\frac{\\Delta t}{2} \\omega_x q_z + \\frac{\\Delta t}{2} \\omega_y q_w - \\frac{\\Delta t}{2} \\omega_z q_x\\\\
q_z - \\frac{\\Delta t}{2} \\omega_x q_y + \\frac{\\Delta t}{2} \\omega_y q_x + \\frac{\\Delta t}{2} \\omega_z q_w
\\end{bmatrix}
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
omega : numpy.ndarray
Angular velocity, in rad/s.
dt : float
Time step, in seconds, between consecutive Quaternions.
Returns
-------
q : numpy.ndarray
Attitude as a quaternion.
"""
Omega_t = np.array([
[0.0, -omega[0], -omega[1], -omega[2]],
[omega[0], 0.0, omega[2], -omega[1]],
[omega[1], -omega[2], 0.0, omega[0]],
[omega[2], omega[1], -omega[0], 0.0]])
q_omega = (np.identity(4) + 0.5*dt*Omega_t) @ q # (eq. 37)
return q_omega/np.linalg.norm(q_omega)
def WW(self, Db, Dr):
"""
W Matrix
.. math::
\\mathbf{W} = D_x^r\\mathbf{M}_1 + D_y^r\\mathbf{M}_2 + D_z^r\\mathbf{M}_3
Parameters
----------
Db : numpy.ndarray
Normalized tri-axial observations vector.
Dr : numpy.ndarray
Normalized tri-axial reference vector.
Returns
-------
W_matrix : numpy.ndarray
W Matrix.
"""
bx, by, bz = Db
rx, ry, rz = Dr
M1 = np.array([
[bx, 0.0, bz, -by],
[0.0, bx, by, bz],
[bz, by, -bx, 0.0],
[-by, bz, 0.0, -bx]]) # (eq. 18a)
M2 = np.array([
[by, -bz, 0.0, bx],
[-bz, -by, bx, 0.0],
[0.0, bx, by, bz],
[bx, 0.0, bz, -by]]) # (eq. 18b)
M3 = np.array([
[bz, by, -bx, 0.0],
[by, -bz, 0.0, bx],
[-bx, 0.0, -bz, by],
[0.0, bx, by, bz]]) # (eq. 18c)
return rx*M1 + ry*M2 + rz*M3 # (eq. 20)
def oleq(self, acc: np.ndarray, mag: np.ndarray, q_omega: np.ndarray) -> np.ndarray:
"""
OLEQ with a single rotation by R.
Parameters
----------
acc : numpy.ndarray
Sample of tri-axial Accelerometer.
mag : numpy.ndarray
Sample of tri-axial Magnetometer.
q_omega : numpy.ndarray
Preceding quaternion estimated with angular velocity.
Returns
-------
q : numpy.ndarray
Final quaternion.
"""
a_norm = np.linalg.norm(acc)
m_norm = np.linalg.norm(mag)
if not a_norm > 0 or not m_norm > 0: # handle NaN
return q_omega
        acc = np.copy(acc) / a_norm
        mag = np.copy(mag) / m_norm
sum_aW = self.a[0]*self.WW(acc, self.a_ref) + self.a[1]*self.WW(mag, self.m_ref) # (eq. 31)
R = 0.5*(np.identity(4) + sum_aW) # (eq. 33)
q = R @ q_omega # (eq. 25)
return q / np.linalg.norm(q)
def update(self, q: np.ndarray, gyr: np.ndarray, acc: np.ndarray, mag: np.ndarray, dt: float = None) -> np.ndarray:
"""
Update Attitude with a Recursive OLEQ
Parameters
----------
q : numpy.ndarray
A-priori quaternion.
gyr : numpy.ndarray
Sample of angular velocity in rad/s
acc : numpy.ndarray
Sample of tri-axial Accelerometer in m/s^2
mag : numpy.ndarray
Sample of tri-axial Magnetometer in mT
dt : float, default: None
Time step, in seconds, between consecutive Quaternions.
Returns
-------
q : numpy.ndarray
Estimated quaternion.
"""
dt = self.Dt if dt is None else dt
q_g = self.attitude_propagation(q, gyr, dt) # Quaternion from previous quaternion and angular velocity
q = self.oleq(acc, mag, q_g) # Second stage: Estimate with OLEQ
return q
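if __name__ == "__main__":
    # Minimal usage sketch with synthetic data. All values below are
    # placeholders, not a physically meaningful recording; it only assumes
    # this module is importable as part of the installed ``ahrs`` package.
    num_samples = 100
    gyr_data = 0.01 * np.random.randn(num_samples, 3)           # rad/s
    acc_data = np.tile([0.0, 0.0, -9.81], (num_samples, 1))     # m/s^2
    mag_data = np.tile([21.0, 1.5, 43.0], (num_samples, 1))     # magnetic field
    roleq = ROLEQ(gyr=gyr_data, acc=acc_data, mag=mag_data,
                  magnetic_ref=64.0,  # magnetic dip angle in degrees (placeholder)
                  frequency=100.0)
    print(roleq.Q.shape)    # (100, 4): one unit quaternion per sample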
| 38.41194 | 130 | 0.559994 | 1,897 | 12,868 | 3.698471 | 0.16816 | 0.008552 | 0.037058 | 0.040764 | 0.337799 | 0.289481 | 0.265251 | 0.251995 | 0.215792 | 0.202965 | 0 | 0.02548 | 0.283261 | 12,868 | 334 | 131 | 38.526946 | 0.735227 | 0.560149 | 0 | 0.082474 | 0 | 0 | 0.042728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.072165 | false | 0.020619 | 0.051546 | 0 | 0.195876 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8ff331bc3a412efbad4a3a5552077887b1637581 | 9,475 | py | Python | test/integration_test/test_bad_network.py | heshu-by/likelib-ws | 85987d328dc274622f4b758afa1b6af43d15564f | [
"Apache-2.0"
] | null | null | null | test/integration_test/test_bad_network.py | heshu-by/likelib-ws | 85987d328dc274622f4b758afa1b6af43d15564f | [
"Apache-2.0"
] | null | null | null | test/integration_test/test_bad_network.py | heshu-by/likelib-ws | 85987d328dc274622f4b758afa1b6af43d15564f | [
"Apache-2.0"
] | null | null | null | from tester import test_case, Env, NodeConfig, Id, TEST_CHECK, TEST_CHECK_EQUAL,\
ClientType, get_distributor_address_path, TransactionStatusCode
from time import sleep
import subprocess, os
import concurrent.futures
from random import randrange
def log_subprocess_output(pipe, env):
for line in iter(pipe.readline, b''): # b'\n'-separated lines
env.logger.info(f"start_bad_network.sh: {line}")
def run_bad_nodes(env):
script_path = os.path.realpath(os.path.join(os.path.dirname(os.path.abspath(__file__)),
"..", "..", "test", "integration_test", "tester", "start_bad_network.sh"))
    process = subprocess.Popen([f"{script_path}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    with process.stdout:
        log_subprocess_output(process.stdout, env)
    exitcode = process.wait()
    if exitcode != 0:
        env.logger.debug(f"Script exit code = {exitcode}")
        # fail loudly: callers unpack a (clients, ids) tuple, so returning an
        # int here would only surface later as a confusing TypeError
        raise RuntimeError(f"start_bad_network.sh failed with exit code {exitcode}")
bad_client_pool = []
bad_node_ids = []
bad_node_ids.append(Id(20203, grpc_port=50051, absolute_address="192.168.100.141"))
bad_node_ids.append(Id(20203, grpc_port=50051, absolute_address="192.168.100.142"))
bad_node_ids.append(Id(20203, grpc_port=50051, absolute_address="192.168.100.143"))
env.logger.info("Get client from bad network pool:")
for id in bad_node_ids:
bad_client_pool.append(env.get_grpc_client_to_outside_node(id))
return bad_client_pool, bad_node_ids
@test_case("connect_node_to_bad_network")
def main(env: Env) -> int:
sync_port = 20100
grpc_port = 50100
amount = randrange(1000)
update_time = 0.5
timeout = 2
wait_time = 1
    transaction_update_time = 2
    max_update_request = 10
env.logger.debug(f"Random amount for test = {amount}")
bad_client_pool, bad_node_ids = run_bad_nodes(env)
main_id = Id(sync_port, grpc_port = grpc_port)
env.logger.info("Start main node with connecting to bad network nodes:")
    # connect only to the first node from the bad pool, because its IP is on the good network.
    # If we connected to all of these ids, the nodes in the bad pool would synchronize across 2 network cards
env.start_node(NodeConfig(main_id, nodes=[bad_node_ids[0], ]))
main_client = env.get_client(ClientType.LEGACY_GRPC, main_id)
env.logger.info("Check all nodes:")
TEST_CHECK(main_client.connection_test())
for client in bad_client_pool:
TEST_CHECK(client.connection_test())
env.logger.info("All nodes started success.")
address = main_client.generate_keys(keys_path=f"keys")
distributor_address = main_client.load_address(keys_path=get_distributor_address_path())
TEST_CHECK_EQUAL(main_client.get_balance(address=address.address, timeout=timeout, wait=wait_time), 0)
env.logger.info("New address created.")
transaction = main_client.transfer(to_address=address.address, amount=amount,
from_address=distributor_address, fee=0, wait=wait_time, timeout=timeout)
TEST_CHECK_EQUAL(transaction.status_code, TransactionStatusCode.PENDING)
TEST_CHECK(main_client.transaction_success_wait(transaction=transaction))
TEST_CHECK_EQUAL(main_client.get_balance(address=address.address, timeout=timeout, wait=wait_time), amount)
env.logger.info("Main client transaction checked success.")
for client in bad_client_pool:
TEST_CHECK_EQUAL(client.get_balance(address=address.address, timeout=timeout, wait=wait_time), amount)
env.logger.info("Test ended success.")
return 0
@test_case("double_connection_in_bad_network")
def main(env: Env) -> int:
sync_port = 20100
grpc_port = 50100
amount = randrange(1000)
update_time = 0.5
timeout = 2
wait_time = 1
    transaction_update_time = 2
    max_update_request = 10
env.logger.debug(f"Random amount for test = {amount}")
bad_client_pool, bad_node_ids = run_bad_nodes(env)
main_id = Id(sync_port, grpc_port = grpc_port)
env.logger.info("Start main node with connecting to bad network nodes:")
    # connect to all nodes from the bad pool,
    # so the nodes in the bad pool synchronize across 2 network cards
env.start_node(NodeConfig(main_id, nodes=bad_node_ids))
main_client = env.get_client(ClientType.LEGACY_GRPC, main_id)
env.logger.info("Check all nodes:")
TEST_CHECK(main_client.connection_test())
for client in bad_client_pool:
TEST_CHECK(client.connection_test())
env.logger.info("All nodes started success.")
address = main_client.generate_keys(keys_path=f"keys")
distributor_address = main_client.load_address(keys_path=get_distributor_address_path())
TEST_CHECK_EQUAL(main_client.get_balance(address=address.address, timeout=timeout, wait=wait_time), 0)
env.logger.info("New address created.")
transaction = main_client.transfer(to_address=address.address, amount=amount,
from_address=distributor_address, fee=0, wait=wait_time, timeout=timeout)
TEST_CHECK_EQUAL(transaction.status_code, TransactionStatusCode.PENDING)
TEST_CHECK(main_client.transaction_success_wait(transaction=transaction))
TEST_CHECK_EQUAL(main_client.get_balance(address=address.address, timeout=timeout, wait=wait_time), amount)
env.logger.info("Main client transaction checked success.")
for client in bad_client_pool:
TEST_CHECK_EQUAL(client.get_balance(address=address.address, timeout=timeout, wait=wait_time), amount)
env.logger.info("Test ended success.")
return 0
def node_transfers(env, client, addresses, finish_address, amount, timeout,
                   wait_time, transaction_update_time, max_update_request):
    shift = len(addresses) - 1
    pos = 0
    transactions = []
    for _ in range(len(addresses) * 5):
        pos = (pos + shift) % len(addresses)
        from_address = addresses[pos]
        transactions.append(client.transfer(to_address=finish_address.address, amount=amount,
                            from_address=from_address, fee=0, wait=wait_time, timeout=timeout))
        TEST_CHECK_EQUAL(transactions[-1].status_code, TransactionStatusCode.PENDING)
        env.logger.info(f'Transaction {transactions[-1].tx_hash} is PENDING (from {from_address.address})')
    for transaction in transactions:
        TEST_CHECK(client.transaction_success_wait(transaction=transaction))
@test_case("transaction_stress_in_bad_network")
def main(env: Env) -> int:
amount = randrange(1000)
start_balance = 5*amount
finish_balance = start_balance - amount
update_time = 0.5
timeout = 2
wait_time = 1
    transaction_update_time = 2
    max_update_request = 10
number_addresses_per_thread = 5
env.logger.debug(f"Random amount for test = {amount}")
bad_client_pool, bad_node_ids = run_bad_nodes(env)
env.logger.info("Check all nodes:")
for client in bad_client_pool:
TEST_CHECK(client.connection_test())
env.logger.info("All nodes started success.")
addresses = [bad_client_pool[0].generate_keys(keys_path=f"keys{i}")
for i in range(1, number_addresses_per_thread * len(bad_client_pool) + 1)]
distributor_address = bad_client_pool[0].load_address(keys_path=get_distributor_address_path())
for address in addresses:
TEST_CHECK_EQUAL(bad_client_pool[0].get_balance(address=address.address, timeout=timeout, wait=wait_time), 0)
        env.logger.info(f"Balance of {address.address} is 0")
    env.logger.info(f"New {len(addresses)} addresses created.")
for address in addresses:
        transaction = bad_client_pool[0].transfer(to_address=address.address, amount=start_balance,
                                                  from_address=distributor_address, fee=0, wait=wait_time, timeout=timeout)
TEST_CHECK_EQUAL(transaction.status_code, TransactionStatusCode.PENDING)
TEST_CHECK(bad_client_pool[0].transaction_success_wait(transaction=transaction))
for client in bad_client_pool:
for address in addresses:
TEST_CHECK_EQUAL(client.get_balance(address=address.address, timeout=timeout, wait=wait_time), start_balance)
env.logger.info(f"Node {client.name} check initialize balance success")
env.logger.info("Initialize transfers success, current balanse {start_balance}")
with concurrent.futures.ThreadPoolExecutor(len(bad_client_pool)) as executor:
threads = []
for i in range(len(bad_client_pool)):
first_address_number = i * number_addresses_per_thread
last_address_number = (i * number_addresses_per_thread) + number_addresses_per_thread
            threads.append(
                executor.submit(node_transfers, env, bad_client_pool[i],
                                addresses[first_address_number:last_address_number],
                                addresses[-1], amount, timeout, wait_time, transaction_update_time, max_update_request))
for i in threads:
i.result()
env.logger.info("Check finish_balance (in this test {finish_balance})")
for address in addresses[:-1]:
TEST_CHECK_EQUAL(bad_client_pool[0].get_balance(address=address.address, timeout=timeout,
wait=wait_time), finish_balance)
last_address_finish_balance = start_balance + amount * len(bad_client_pool) * number_addresses_per_thread
env.logger.info("Check balance on last address start_balance + all transfers {last_address_finish_balance}")
TEST_CHECK_EQUAL(bad_client_pool[0].get_balance(address=addresses[-1].address, timeout=timeout,
wait=wait_time), last_address_finish_balance)
return 0
| 46.446078 | 117 | 0.731504 | 1,309 | 9,475 | 5.016807 | 0.123759 | 0.057561 | 0.04751 | 0.038069 | 0.6522 | 0.636059 | 0.60134 | 0.575148 | 0.56388 | 0.56388 | 0 | 0.019553 | 0.16876 | 9,475 | 203 | 118 | 46.674877 | 0.814246 | 0.027441 | 0 | 0.518072 | 0 | 0 | 0.130104 | 0.020743 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036145 | false | 0 | 0.024096 | 0 | 0.090361 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8ff94ac0731085a0d5493ef70f80657677ea8039 | 1,398 | py | Python | ymir/backend/src/ymir_viz/tests/controllers/test_dataset_controller.py | Zhang-SJ930104/ymir | dd6481be6f229ade4cf8fba64ef44a15357430c4 | [
"Apache-2.0"
] | 64 | 2021-11-15T03:48:00.000Z | 2022-03-25T07:08:46.000Z | ymir/backend/src/ymir_viz/tests/controllers/test_dataset_controller.py | Zhang-SJ930104/ymir | dd6481be6f229ade4cf8fba64ef44a15357430c4 | [
"Apache-2.0"
] | 35 | 2021-11-23T04:14:35.000Z | 2022-03-26T09:03:43.000Z | ymir/backend/src/ymir_viz/tests/controllers/test_dataset_controller.py | Aryalfrat/ymir | d4617ed00ef67a77ab4e1944763f608bface4be6 | [
"Apache-2.0"
] | 57 | 2021-11-11T10:15:40.000Z | 2022-03-29T07:27:54.000Z | from mir.tools.mir_storage_ops import MirStorageOps
class TestDatasetController:
def test_get_dataset_info(self, test_client, mocker):
user_id = "user_id"
repo_id = "repo_id"
branch_id = "branch_id"
mir_dataset_content = {
"class_names_count": {
'cat': 34
},
"class_ids_count": {
3: 34
},
"ignored_labels": {
'cat': 5,
},
"negative_info": {
"negative_images_cnt": 0,
"project_negative_images_cnt": 0,
},
"total_images_cnt": 1,
}
mocker.patch.object(MirStorageOps, "load_single_dataset", return_value=mir_dataset_content)
resp = test_client.get(f"/v1/users/{user_id}/repositories/{repo_id}/branches/{branch_id}/datasets")
assert resp.status_code == 200
assert resp.json()["result"] == {
'class_ids_count': {
'3': 34 # int is converted to str in json.dumps.
},
'class_names_count': {
'cat': 34
},
'ignored_labels': {
'cat': 5
},
'negative_info': {
'negative_images_cnt': 0,
'project_negative_images_cnt': 0
},
'total_images_cnt': 1
}
| 29.744681 | 107 | 0.489986 | 137 | 1,398 | 4.635037 | 0.467153 | 0.085039 | 0.107087 | 0.113386 | 0.387402 | 0.280315 | 0.280315 | 0.280315 | 0.280315 | 0.280315 | 0 | 0.026253 | 0.400572 | 1,398 | 46 | 108 | 30.391304 | 0.731504 | 0.027182 | 0 | 0.04878 | 0 | 0 | 0.276141 | 0.092784 | 0 | 0 | 0 | 0 | 0.04878 | 1 | 0.02439 | false | 0 | 0.02439 | 0 | 0.073171 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8ffb9bb104fc4ecdc1b59a3518ce4c5fd9a36d4b | 1,145 | py | Python | core/ctrl/notifications.py | prochor666/ctrl | bb6bef2dd8e0690f632be4990e8564bfe4c1e859 | [
"MIT"
] | null | null | null | core/ctrl/notifications.py | prochor666/ctrl | bb6bef2dd8e0690f632be4990e8564bfe4c1e859 | [
"MIT"
] | null | null | null | core/ctrl/notifications.py | prochor666/ctrl | bb6bef2dd8e0690f632be4990e8564bfe4c1e859 | [
"MIT"
] | null | null | null | from core import app, data, utils
from core.ctrl import mailer
def list_notifications(filter_data, sort_data=None):
finder = {
'collection': 'notifications',
'filter': filter_data,
'sort': sort_data
}
return data.ex(finder)
def email(case, template, subject, html_message_data, att=None):
if app.config['user']['username'] != 'system':
valid_users = data.collect(data.ex({
'collection': 'users',
'filter': {
case: True
}
}))
for user in valid_users:
html_message_data['user'] = user
html_message = mailer.assign_template(
template, html_message_data)
mailer.send(
user['email'], subject, html_message, att)
def db(obj_type, obj_id, message, json_data=''):
notifs = app.db['notifications']
notification = {
'user_id': app.config['user']['_id'],
'created_at': utils.now(),
'obj_type': obj_type,
'obj_id': obj_id,
'message': message,
'json_data': json_data
}
notifs.insert_one(notification)
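# Example usage of db() (hypothetical values): record a site-creation event
#   db('site', site_id, 'Site created', json_data='{"status": "ok"}')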
| 27.261905 | 66 | 0.570306 | 128 | 1,145 | 4.875 | 0.390625 | 0.088141 | 0.072115 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.30131 | 1,145 | 41 | 67 | 27.926829 | 0.78 | 0 | 0 | 0 | 0 | 0 | 0.129258 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8904c57096c354f788916e14175c19f20d188140 | 3,147 | py | Python | fileWork.py | codedandy/imageRenamer | c7795a406fed3f424d6d2e199dd615350fe00c49 | [
"MIT"
] | null | null | null | fileWork.py | codedandy/imageRenamer | c7795a406fed3f424d6d2e199dd615350fe00c49 | [
"MIT"
] | null | null | null | fileWork.py | codedandy/imageRenamer | c7795a406fed3f424d6d2e199dd615350fe00c49 | [
"MIT"
] | null | null | null | import os
import os.path
import shutil
import urllib.parse
def openReadFile(filePath):
try:
openFile = open(filePath)
readFile = openFile.read()
openFile.close()
return readFile
except Exception:
print("Problem reading file, aborting.")
quit()
def createBackups(sourceFile, sourceFolder):
os.chdir(sourceFolder)
os.chdir("../")
    if os.path.isdir(f'{os.getcwd()}/backup'):
        shutil.rmtree(f'{os.getcwd()}/backup')
try:
os.makedirs("backup")
shutil.copy(sourceFile, f'{os.getcwd()}/backup')
shutil.copytree(sourceFolder, f'{os.getcwd()}/backup/images')
print("• Backups created.")
except Exception as e:
print(f'!!! Exception raised:\n{e}\n-= Shutting Down Process =-')
quit()
def assembleNewNames(prefixString, filteredImages):
newNameList = []
imageCounter = 0
processedRefs = []
for image in filteredImages:
        imageExt = ""
        if image in processedRefs:
            print(f'- Skipping duplicate reference: {image}')
        else:
            # count only images that are actually renamed, so skipped
            # duplicates do not leave gaps in the numbering
            imageCounter += 1
if ".jpg" in image:
imageExt = ".jpg"
elif ".png" in image:
imageExt = ".png"
elif ".gif" in image:
imageExt = ".gif"
else:
pass
if imageCounter < 10:
newNameList.append((image, f'{prefixString}_00{imageCounter}{imageExt}'))
elif imageCounter < 100:
newNameList.append((image, f'{prefixString}_0{imageCounter}{imageExt}'))
else:
newNameList.append((image, f'{prefixString}_{imageCounter}{imageExt}'))
processedRefs.append(image)
return newNameList
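# A quick sketch of the numbering produced above (hypothetical file names):
#   assembleNewNames('trip', ['a.jpg', 'b.png', 'a.jpg'])
#   -> [('a.jpg', 'trip_001.jpg'), ('b.png', 'trip_002.png')]
#   (the duplicate 'a.jpg' reference is skipped)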
def updateSourceFile(workingFile, renamedImageSets):
for imageRef in renamedImageSets:
try:
print(f'-- Replacing {imageRef[0]} with {imageRef[1]}')
workingFile = workingFile.replace(imageRef[0], imageRef[1])
except Exception as e:
print(f'!!! Exception replacing {imageRef[0]} with reason given:\n{e}')
return workingFile
def saveSourceDocument(sourceFile, updatedFile):
try:
rewriteFile = open(sourceFile, "w")
rewriteFile.write(updatedFile)
rewriteFile.close()
except Exception as e:
print(f'!!! Exception raised while saving:\n{e}')
def renameImageFiles(sourceFolder, renamedImageSets):
for imageRef in renamedImageSets:
try:
os.rename(f'{sourceFolder}/{imageRef[0]}', f'{sourceFolder}/{imageRef[1]}')
print(f'-- Image file {imageRef[0]} being renamed {imageRef[1]}')
except OSError:
os.rename(urllib.parse.unquote(f'{sourceFolder}/{imageRef[0]}'), f'{sourceFolder}/{imageRef[1]}')
print(f'-- Image file {urllib.parse.unquote(imageRef[0])} being renamed {imageRef[1]}')
except Exception as e:
print(f'!!! Exception raised for {imageRef[0]}: {e}') | 32.443299 | 109 | 0.576104 | 312 | 3,147 | 5.804487 | 0.304487 | 0.026505 | 0.019879 | 0.033131 | 0.323578 | 0.242408 | 0.189398 | 0.157924 | 0.111541 | 0.065157 | 0 | 0.01085 | 0.297108 | 3,147 | 97 | 110 | 32.443299 | 0.807414 | 0 | 0 | 0.240506 | 0 | 0.012658 | 0.259212 | 0.093393 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075949 | false | 0.025316 | 0.050633 | 0 | 0.164557 | 0.126582 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8904f49aef9db23d6afd2eea0a9d01ce4beef6a7 | 3,554 | py | Python | zeropdk/default_library/io.py | lightwave-lab/zeropdk | cc49eb1008c449185cf9dcdbb283ba086ebd8de0 | [
"MIT"
] | 17 | 2019-08-22T15:55:50.000Z | 2022-02-02T20:52:00.000Z | zeropdk/default_library/io.py | lightwave-lab/zeropdk | cc49eb1008c449185cf9dcdbb283ba086ebd8de0 | [
"MIT"
] | 1 | 2020-09-29T00:43:38.000Z | 2020-10-27T07:15:01.000Z | zeropdk/default_library/io.py | lightwave-lab/zeropdk | cc49eb1008c449185cf9dcdbb283ba086ebd8de0 | [
"MIT"
] | 3 | 2019-09-04T07:48:35.000Z | 2021-06-16T09:39:42.000Z | from zeropdk.pcell import (
PCell,
PCellParameter,
TypeDouble,
TypeInt,
TypeLayer,
TypePoint,
Port,
ParamContainer,
)
from zeropdk.layout import insert_shape
from zeropdk.layout.polygons import rectangle
from klayout.db import DPoint, DVector
pad_width = PCellParameter(
name="pad_width",
type=TypeDouble,
description="Width of electrical pad.",
default=120,
unit="um",
)
pad_height = PCellParameter(
name="pad_height",
type=TypeDouble,
description="Height of electrical pad.",
default=120,
unit="um",
)
port_width = PCellParameter(
name="port_width",
type=TypeDouble,
description="Port width (same as trace width)",
default=20,
unit="um",
)
pad_array_count = PCellParameter(
name="pad_array_count", type=TypeInt, description="Number of pads", default=10
)
pad_array_pitch = PCellParameter(
name="pad_array_pitch",
type=TypeDouble,
description="Pad array pitch",
default=150,
unit="um",
)
origin = PCellParameter(name="origin", type=TypePoint, description="Origin", default=DPoint(0, 0))
ex = PCellParameter(
name="ex", type=TypePoint, description="x-axis unit vector", default=DPoint(1, 0)
)
ey = PCellParameter(
name="ey", type=TypePoint, description="y-axis unit vector", default=DPoint(0, 1)
)
layer_metal = PCellParameter(name="layer_metal", type=TypeLayer, description="Metal Layer")
layer_opening = PCellParameter(name="layer_opening", type=TypeLayer, description="Open Layer")
class OrientedCell(PCell):
"""A standard cell that has the following parameters:
- origin: Point
- ex: unit vector of x axis
- ey: unit vector of y axis
"""
params = ParamContainer(origin, ex, ey)
def origin_ex_ey(self):
origin = DPoint(self.params["origin"])
ex = DVector(self.params.ex)
ey = DVector(self.params.ey)
return origin, ex, ey
class DCPad(OrientedCell):
"""A standard DC pad.
Ports: el0
"""
params = ParamContainer(pad_width, pad_height, port_width, layer_metal, layer_opening)
def draw(self, cell):
layout = cell.layout()
origin, ex, ey = self.origin_ex_ey()
cp = self.params
def make_shape_from_dpolygon(dpoly, resize_dx, dbu, layer):
dpoly.resize(resize_dx, dbu)
# if resize_dx > dbu:
# dpoly.round_corners(resize_dx, 100)
insert_shape(cell, layer, dpoly)
return dpoly
def make_pad(origin, pad_width, pad_height, ex, ey):
pad_square = rectangle(origin, pad_width, pad_height, ex, ey)
make_shape_from_dpolygon(pad_square, 0, layout.dbu, cp.layer_metal)
make_shape_from_dpolygon(pad_square, -2.5, layout.dbu, cp.layer_opening)
make_pad(origin + cp.pad_height * ey / 2, cp.pad_width, cp.pad_height, ex, ey)
port = Port("el0", origin + cp.port_width * ey / 2, -ey, cp.port_width, "el_dc")
return cell, {"el0": port}
class DCPadArray(DCPad):
params = ParamContainer(pad_array_count, pad_array_pitch)
def draw(self, cell):
cp = self.params
origin, ex, _ = self.origin_ex_ey()
ports = dict()
for i in range(cp.pad_array_count):
dcpad = DCPad(name=f"pad_{i}", params=cp)
dc_ports = dcpad.place_cell(cell, origin + cp.pad_array_pitch * i * ex)
ports[f"el_{i}"] = dc_ports["el0"].rename(f"el_{i}")
# self.add_port(dc_ports["el0"].rename(f"el_{i}"))
return cell, ports
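# A minimal placement sketch, left as a comment because the layer parameter
# values are PDK/installation-specific (all names below are hypothetical):
#
#   import klayout.db as kdb
#   layout = kdb.Layout()
#   top = layout.create_cell("TOP")
#   pads = DCPadArray(name="pads", params={"pad_array_count": 4})
#   ports = pads.place_cell(top, DPoint(0, 0))   # returns the pad ports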
| 27.129771 | 98 | 0.64969 | 464 | 3,554 | 4.803879 | 0.217672 | 0.080754 | 0.026918 | 0.02288 | 0.139076 | 0.096904 | 0.069987 | 0 | 0 | 0 | 0 | 0.011679 | 0.229038 | 3,554 | 130 | 99 | 27.338462 | 0.801825 | 0.074001 | 0 | 0.159091 | 0 | 0 | 0.096248 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056818 | false | 0 | 0.045455 | 0 | 0.215909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8908edcf96eefb5b56200a032729fbdafb7498d2 | 26,374 | py | Python | MILWRM/ST.py | codyheiser/H2-tissue-labeling | 713a7580f17987f36af73562e06f25d1b92f51c4 | [
"MIT"
] | null | null | null | MILWRM/ST.py | codyheiser/H2-tissue-labeling | 713a7580f17987f36af73562e06f25d1b92f51c4 | [
"MIT"
] | null | null | null | MILWRM/ST.py | codyheiser/H2-tissue-labeling | 713a7580f17987f36af73562e06f25d1b92f51c4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Functions and classes for manipulating 10X Visium spatial transcriptomic (ST) and
histological imaging data
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
import scanpy as sc
sc.set_figure_params(dpi=100, dpi_save=400)
sns.set_style("white")
plt.rcParams["font.family"] = "monospace"
from math import ceil
from matplotlib.lines import Line2D
from scipy.spatial import cKDTree
from scipy.interpolate import interpnd, griddata
from sklearn.metrics.pairwise import euclidean_distances
def bin_threshold(mat, threshmin=None, threshmax=0.5):
"""
Generate binary segmentation from probabilities
Parameters
----------
mat : np.array
The data
threshmin : float or None
Minimum value on [0,1] to assign binary IDs from probabilities.
    threshmax : float
        Maximum value on [0,1] to assign binary IDs from probabilities. Values
        higher than threshmax -> 1. Values lower than threshmax -> 0.
Returns
-------
a : np.array
Thresholded matrix
"""
a = np.ma.array(mat, copy=True)
mask = np.zeros(a.shape, dtype=bool)
if threshmin:
mask |= (a < threshmin).filled(False)
if threshmax:
mask |= (a > threshmax).filled(False)
a[mask] = 1
a[~mask] = 0
return a
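# A quick illustrative call (hypothetical values): with the default
# threshmax=0.5, probabilities above 0.5 map to 1 and the rest to 0, e.g.
#   bin_threshold(np.array([0.2, 0.6, 0.9])) -> [0., 1., 1.]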
def map_pixels(adata, filter_label="in_tissue", img_key="hires", library_id=None):
"""
Map spot IDs to 'pixel space' by assigning spot ID values to evenly spaced grid
Parameters
----------
adata : AnnData.anndata
The data
filter_label : str or None
adata.obs column key that contains binary labels for filtering barcodes. If
None, do not filter.
img_key : str
        Image key from `adata.uns["spatial"]` to use for mapping
    library_id : str or None
        Key of `adata.uns["spatial"]` to use. If None, the first library is used.
Returns
-------
adata : AnnData.anndata
with the following attributes:
adata.uns["pixel_map_df"] : pd.DataFrame
Long-form dataframe of Visium spot barcode IDs, pixel coordinates, and
.obs metadata
adata.uns["pixel_map"] : np.array
Pixel space array of Visium spot barcode IDs
"""
adata.uns["pixel_map_params"] = {
"img_key": img_key
} # create params dict for future use
# add library_id key to params
if library_id is None:
library_id = adata.uns["pixel_map_params"]["library_id"] = list(
adata.uns["spatial"].keys()
)[0]
else:
adata.uns["pixel_map_params"]["library_id"] = library_id
# first get center-to-face pixel distance of hexagonal Visium spots
dist = euclidean_distances(adata.obsm["spatial"])
adata.uns["pixel_map_params"]["ctr_to_face"] = (
np.unique(dist)[np.unique(dist) != 0].min() / 2
)
# also save center-to-vertex pixel distance as vadata attribute
adata.uns["pixel_map_params"]["ctr_to_vert"] = adata.uns["pixel_map_params"][
"ctr_to_face"
] / np.cos(30 * (np.pi / 180))
# get the spot radius from adata.uns["spatial"] as well
adata.uns["pixel_map_params"]["radius"] = (
adata.uns["spatial"][library_id]["scalefactors"]["spot_diameter_fullres"] / 2
)
# get scale factor from adata.uns["spatial"]
adata.uns["pixel_map_params"]["scalef"] = adata.uns["spatial"][library_id][
"scalefactors"
][f"tissue_{img_key}_scalef"]
# determine pixel bounds from spot coords, adding center-to-face distance
adata.uns["pixel_map_params"]["xmin_px"] = int(
np.floor(
adata.uns["pixel_map_params"]["scalef"]
* (
adata.obsm["spatial"][:, 0].min()
- adata.uns["pixel_map_params"]["radius"]
)
)
)
adata.uns["pixel_map_params"]["xmax_px"] = int(
np.ceil(
adata.uns["pixel_map_params"]["scalef"]
* (
adata.obsm["spatial"][:, 0].max()
+ adata.uns["pixel_map_params"]["radius"]
)
)
)
adata.uns["pixel_map_params"]["ymin_px"] = int(
np.floor(
adata.uns["pixel_map_params"]["scalef"]
* (
adata.obsm["spatial"][:, 1].min()
- adata.uns["pixel_map_params"]["radius"]
)
)
)
adata.uns["pixel_map_params"]["ymax_px"] = int(
np.ceil(
adata.uns["pixel_map_params"]["scalef"]
* (
adata.obsm["spatial"][:, 1].max()
+ adata.uns["pixel_map_params"]["radius"]
)
)
)
print("Creating pixel grid and mapping to nearest barcode coordinates")
# define grid for pixel space
grid_y, grid_x = np.mgrid[
adata.uns["pixel_map_params"]["ymin_px"] : adata.uns["pixel_map_params"][
"ymax_px"
],
adata.uns["pixel_map_params"]["xmin_px"] : adata.uns["pixel_map_params"][
"xmax_px"
],
]
# map barcodes to pixel coordinates
pixel_coords = np.column_stack((grid_x.ravel(order="C"), grid_y.ravel(order="C")))
barcode_list = griddata(
np.multiply(adata.obsm["spatial"], adata.uns["pixel_map_params"]["scalef"]),
adata.obs_names,
(pixel_coords[:, 0], pixel_coords[:, 1]),
method="nearest",
)
# save grid_x and grid_y to adata.uns
adata.uns["grid_x"], adata.uns["grid_y"] = grid_x, grid_y
# put results into DataFrame for filtering and reindexing
print("Saving barcode mapping to adata.uns['pixel_map_df'] and adding metadata")
adata.uns["pixel_map_df"] = pd.DataFrame(pixel_coords, columns=["x", "y"])
# add barcodes to long-form dataframe
adata.uns["pixel_map_df"]["barcode"] = barcode_list
# merge master df with self.adata.obs for metadata
adata.uns["pixel_map_df"] = adata.uns["pixel_map_df"].merge(
adata.obs, how="outer", left_on="barcode", right_index=True
)
# filter using label from adata.obs if desired (i.e. "in_tissue")
if filter_label is not None:
print(
"Filtering barcodes using labels in self.adata.obs['{}']".format(
filter_label
)
)
# set empty pixels (no Visium spot) to "none"
adata.uns["pixel_map_df"].loc[
adata.uns["pixel_map_df"][filter_label] == 0,
"barcode",
] = "none"
# subset the entire anndata object using filter_label
adata = adata[adata.obs[filter_label] == 1, :].copy()
print("New size: {} spots x {} genes".format(adata.n_obs, adata.n_vars))
print("Done!")
return adata
def trim_image(
adata, distance_trim=False, threshold=None, channels=None, plot_out=True, **kwargs
):
"""
Trim pixels in image using pixel map output from Visium barcodes
Parameters
----------
adata : AnnData.anndata
The data
distance_trim : bool
Manually trim pixels by distance to nearest Visium spot center
threshold : int or None
Number of pixels from nearest Visium spot center to call barcode ID. Ignored
if `distance_trim==False`.
channels : list of str or None
Names of image channels in axis order. If None, channels are named "ch_0",
"ch_1", etc.
plot_out : bool
Plot final trimmed image
**kwargs
Arguments to pass to `show_pita()` function if `plot_out==True`
Returns
-------
adata.uns["pixel_map_trim"] : np.array
Contains image with unused pixels set to `np.nan`
adata.obsm["spatial_trim"] : np.array
Contains spatial coords with adjusted pixel values after image cropping
"""
assert (
adata.uns["pixel_map_params"] is not None
), "Pixel map not yet created. Run map_pixels() first."
print(
"Cropping image to pixel dimensions and adding values to adata.uns['pixel_map_df']"
)
cropped = adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]][
"images"
][adata.uns["pixel_map_params"]["img_key"]].transpose(1, 0, 2)[
int(adata.uns["pixel_map_params"]["xmin_px"]) : int(
(adata.uns["pixel_map_params"]["xmax_px"])
),
int(adata.uns["pixel_map_params"]["ymin_px"]) : int(
(adata.uns["pixel_map_params"]["ymax_px"])
),
]
# crop x,y coords and save to .obsm as well
print("Cropping Visium spot coordinates and saving to adata.obsm['spatial_trim']")
adata.obsm["spatial_trim"] = adata.obsm["spatial"] - np.repeat(
[
[
adata.uns["pixel_map_params"]["xmin_px"],
adata.uns["pixel_map_params"]["ymin_px"],
]
],
adata.obsm["spatial"].shape[0],
axis=0,
)
# manual trimming of pixels by distance if desired
if distance_trim:
print("Calculating pixel distances from spot centers for thresholding")
tree = cKDTree(adata.obsm["spatial"])
xi = interpnd._ndim_coords_from_arrays(
(adata.uns["grid_x"], adata.uns["grid_y"]),
ndim=adata.obsm["spatial"].shape[1],
)
dists, _ = tree.query(xi)
# determine distance threshold
if threshold is None:
threshold = int(adata.uns["pixel_map_params"]["ctr_to_vert"] + 1)
print(
"Using distance threshold of {} pixels from adata.uns['pixel_map_params']['ctr_to_vert']".format(
threshold
)
)
dist_mask = bin_threshold(dists, threshmax=threshold)
if plot_out:
# plot pixel distances from spot centers on image
show_pita(pita=dists, figsize=(4, 4))
# plot binary thresholded image
show_pita(pita=dist_mask, figsize=(4, 4))
print(
"Trimming pixels by spot distance and adjusting labels in adata.uns['pixel_map_df']"
)
mask_df = pd.DataFrame(dist_mask.T.ravel(order="F"), columns=["manual_trim"])
adata.uns["pixel_map_df"] = adata.uns["pixel_map_df"].merge(
mask_df, left_index=True, right_index=True
)
adata.uns["pixel_map_df"].loc[
adata.uns["pixel_map_df"]["manual_trim"] == 1, ["barcode"]
] = "none" # set empty pixels to empty barcode
adata.uns["pixel_map_df"].drop(
columns="manual_trim", inplace=True
) # remove unneeded label
if channels is None:
# if channel names not specified, name them numerically
channels = ["ch_{}".format(x) for x in range(cropped.shape[2])]
# cast image intensity values to long-form and add to adata.uns["pixel_map_df"]
rgb = pd.DataFrame(
np.column_stack(
[cropped[:, :, x].ravel(order="F") for x in range(cropped.shape[2])]
),
columns=channels,
)
adata.uns["pixel_map_df"] = adata.uns["pixel_map_df"].merge(
rgb, left_index=True, right_index=True
)
adata.uns["pixel_map_df"].loc[
adata.uns["pixel_map_df"]["barcode"] == "none", channels
] = np.nan # set empty pixels to invalid image intensity value
# calculate mean image values for each channel and create .obsm key
adata.obsm["image_means"] = (
adata.uns["pixel_map_df"]
.loc[adata.uns["pixel_map_df"]["barcode"] != "none", ["barcode"] + channels]
.groupby("barcode")
.mean()
.values
)
print(
"Saving cropped and trimmed image to adata.uns['spatial']['{}']['images']['{}_trim']".format(
adata.uns["pixel_map_params"]["library_id"],
adata.uns["pixel_map_params"]["img_key"],
)
)
adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]]["images"][
"{}_trim".format(adata.uns["pixel_map_params"]["img_key"])
] = np.dstack(
[
adata.uns["pixel_map_df"]
.pivot(index="y", columns="x", values=[channels[x]])
.values
for x in range(len(channels))
]
)
# save scale factor as well
adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]]["scalefactors"][
"tissue_{}_trim_scalef".format(adata.uns["pixel_map_params"]["img_key"])
] = adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]][
"scalefactors"
][
"tissue_{}_scalef".format(adata.uns["pixel_map_params"]["img_key"])
]
# plot results if desired
if plot_out:
if len(channels) == 3:
show_pita(
pita=adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]][
"images"
]["{}_trim".format(adata.uns["pixel_map_params"]["img_key"])],
RGB=True,
label=channels,
**kwargs,
)
else:
show_pita(
pita=adata.uns["spatial"][adata.uns["pixel_map_params"]["library_id"]][
"images"
]["{}_trim".format(adata.uns["pixel_map_params"]["img_key"])],
RGB=False,
label=channels,
**kwargs,
)
print("Done!")
def assemble_pita(
adata, features=None, use_rep=None, layer=None, plot_out=True, histo=None, **kwargs
):
"""
Cast feature into pixel space to construct gene expression image ("pita")
Parameters
----------
adata : AnnData.anndata
the data
features : list of int or str
Names or indices of features to cast onto spot image. If `None`, cast all
features. If `plot_out`, first feature in list will be plotted. If not
specified and `plot_out`, first feature (index 0) will be plotted.
use_rep : str
Key from `adata.obsm` to use for plotting. If `None`, use `adata.X`.
    layer : str
        Key from `adata.layers` to use for plotting. Ignored if `use_rep` is
        not `None`.
plot_out : bool
Show resulting image?
histo : str or `None`, optional (default=`None`)
Histology image to show along with pita in gridspec (i.e. "hires",
"hires_trim", "lowres"). If `None` or if `plot_out`==`False`, ignore.
**kwargs
Arguments to pass to `show_pita()` function
Returns
-------
assembled : np.array
Image of desired expression in pixel space
"""
assert (
adata.uns["pixel_map_params"] is not None
), "Pixel map not yet created. Run map_pixels() first."
# coerce features to list if only single string
if features and not isinstance(features, list):
features = [features]
if use_rep is None:
# use all genes if no gene features specified
if not features:
features = adata.var_names # [adata.var.highly_variable == 1].tolist()
if layer is None:
print("Assembling pita with {} features from adata.X".format(len(features)))
mapper = pd.DataFrame(
adata.X[:, [adata.var_names.get_loc(x) for x in features]],
index=adata.obs_names,
)
else:
print(
"Assembling pita with {} features from adata.layers['{}']".format(
len(features), layer
)
)
mapper = pd.DataFrame(
adata.layers[layer][:, [adata.var_names.get_loc(x) for x in features]],
index=adata.obs_names,
)
elif use_rep in [".obs", "obs"]:
assert features is not None, "Must provide feature(s) from adata.obs"
print("Assembling pita with {} features from adata.obs".format(len(features)))
if all(isinstance(x, int) for x in features):
mapper = adata.obs.iloc[:, features].copy()
else:
mapper = adata.obs[features].copy()
features = None # set features to None in case show==True
else:
if not features:
print(
"Assembling pita with {} features from adata.obsm['{}']".format(
adata.obsm[use_rep].shape[1], use_rep
)
)
mapper = pd.DataFrame(adata.obsm[use_rep], index=adata.obs_names)
else:
assert all(
isinstance(x, int) for x in features
), "Features must be integer indices if using rep from adata.obsm"
print(
"Assembling pita with {} features from adata.obsm['{}']".format(
len(features), use_rep
)
)
mapper = pd.DataFrame(
adata.obsm[use_rep][:, features], index=adata.obs_names
)
# cast barcodes into pixel dimensions for reindexing
print("Casting barcodes to pixel dimensions and saving to adata.uns['pixel_map']")
pixel_map = (
adata.uns["pixel_map_df"].pivot(index="y", columns="x", values="barcode").values
)
assembled = np.array(
[mapper.reindex(index=pixel_map[x], copy=True) for x in range(len(pixel_map))]
).squeeze()
if plot_out:
# determine where the histo image is in anndata
if histo is not None:
assert (
histo
in adata.uns["spatial"][list(adata.uns["spatial"].keys())[0]][
"images"
].keys()
), "Must provide one of {} for histo".format(
adata.uns["spatial"][list(adata.uns["spatial"].keys())[0]][
"images"
].keys()
)
histo = adata.uns["spatial"][list(adata.uns["spatial"].keys())[0]][
"images"
][histo]
show_pita(pita=assembled, features=features, histo=histo, **kwargs)
print("Done!")
return assembled
def show_pita(
pita,
features=None,
RGB=False,
histo=None,
label="feature",
ncols=4,
figsize=(7, 7),
save_to=None,
**kwargs,
):
"""
Plot assembled pita using `plt.imshow()`
Parameters
----------
pita : np.array
Image of desired expression in pixel space from `.assemble_pita()`
features : list of int, optional (default=`None`)
List of features by index to show in plot. If `None`, use all features.
RGB : bool, optional (default=`False`)
Treat 3-dimensional array as RGB image
histo : np.array or `None`, optional (default=`None`)
Histology image to show along with pita in gridspec. If `None`, ignore.
label : str
What to title each panel of the gridspec (i.e. "PC" or "usage") or each
channel in RGB image. Can also pass list of names e.g. ["NeuN","GFAP",
"DAPI"] corresponding to channels.
ncols : int
Number of columns for gridspec
figsize : tuple of float
Size in inches of output figure
save_to : str or None
Path to image file to save results. if `None`, show figure.
**kwargs
Arguments to pass to `plt.imshow()` function
Returns
-------
Matplotlib object (if plotting one feature or RGB) or gridspec object (for
multiple features). Saves plot to file if `save_to` is not `None`.
"""
assert pita.ndim > 1, "Pita does not have enough dimensions: {} given".format(
pita.ndim
)
assert pita.ndim < 4, "Pita has too many dimensions: {} given".format(pita.ndim)
# if only one feature (2D), plot it quickly
if (pita.ndim == 2) and histo is None:
fig = plt.figure(figsize=figsize)
plt.imshow(pita, **kwargs)
plt.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
plt.colorbar(shrink=0.8)
plt.tight_layout()
if save_to:
plt.savefig(fname=save_to, transparent=True, bbox_inches="tight", dpi=800)
return fig
if (pita.ndim == 2) and histo is not None:
n_rows, n_cols = 1, 2 # two images here, histo and RGB
fig = plt.figure(figsize=(ncols * n_cols, ncols * n_rows))
# arrange axes as subplots
gs = gridspec.GridSpec(n_rows, n_cols, figure=fig)
# add plots to axes
ax = plt.subplot(gs[0])
im = ax.imshow(histo, **kwargs)
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
ax.set_title(
label="Histology",
loc="left",
fontweight="bold",
fontsize=16,
)
ax = plt.subplot(gs[1])
im = ax.imshow(pita, **kwargs)
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
cbar = plt.colorbar(im, shrink=0.8)
fig.tight_layout()
if save_to:
plt.savefig(fname=save_to, transparent=True, bbox_inches="tight", dpi=800)
return fig
if RGB:
# if third dim has 3 features, treat as RGB and plot it quickly
assert (pita.ndim == 3) & (
pita.shape[2] == 3
), "Need 3 dimensions and 3 given features for an RGB image; shape = {}; features given = {}".format(
pita.shape, len(features)
)
print("Plotting pita as RGB image")
if isinstance(label, str):
# if label is single string, name channels numerically
channels = ["{}_{}".format(label, x) for x in range(pita.shape[2])]
else:
assert (
len(label) == 3
), "Please pass 3 channel names for RGB plot; {} labels given: {}".format(
len(label), label
)
channels = label
if histo is not None:
n_rows, n_cols = 1, 2 # two images here, histo and RGB
fig = plt.figure(figsize=(ncols * n_cols, ncols * n_rows))
# arrange axes as subplots
gs = gridspec.GridSpec(n_rows, n_cols, figure=fig)
# add plots to axes
ax = plt.subplot(gs[0])
im = ax.imshow(histo, **kwargs)
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
ax.set_title(
label="Histology",
loc="left",
fontweight="bold",
fontsize=16,
)
ax = plt.subplot(gs[1])
im = ax.imshow(pita, **kwargs)
# add legend for channel IDs
custom_lines = [
Line2D([0], [0], color=(1, 0, 0), lw=5),
Line2D([0], [0], color=(0, 1, 0), lw=5),
Line2D([0], [0], color=(0, 0, 1), lw=5),
]
plt.legend(custom_lines, channels, fontsize="medium")
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
fig.tight_layout()
if save_to:
plt.savefig(
fname=save_to, transparent=True, bbox_inches="tight", dpi=800
)
return fig
else:
fig = plt.figure(figsize=figsize)
plt.imshow(pita, **kwargs)
# add legend for channel IDs
custom_lines = [
Line2D([0], [0], color=(1, 0, 0), lw=5),
Line2D([0], [0], color=(0, 1, 0), lw=5),
Line2D([0], [0], color=(0, 0, 1), lw=5),
]
plt.legend(custom_lines, channels, fontsize="medium")
plt.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
plt.tight_layout()
if save_to:
plt.savefig(
fname=save_to, transparent=True, bbox_inches="tight", dpi=800
)
return fig
# if pita has multiple features, plot them in gridspec
if isinstance(features, int): # force features into list if single integer
features = [features]
# if no features are given, use all of them
if features is None:
features = [x + 1 for x in range(pita.shape[2])]
else:
assert (
pita.ndim > 2
), "Not enough features in pita: shape {}, expecting 3rd dim with length {}".format(
pita.shape, len(features)
)
        assert (
            len(features) <= pita.shape[2]
        ), "Too many features given: pita has {}, but {} were requested".format(
            pita.shape[2], len(features)
        )
if isinstance(label, str):
# if label is single string, name channels numerically
labels = ["{}_{}".format(label, x) for x in features]
else:
assert len(label) == len(
features
), "Please provide the same number of labels as features; {} labels given, {} features given.".format(
len(label), len(features)
)
labels = label
# calculate gridspec dimensions
if histo is not None:
labels = ["Histology"] + labels # append histo to front of labels
if len(features) + 1 <= ncols:
n_rows, n_cols = 1, len(features) + 1
else:
n_rows, n_cols = ceil((len(features) + 1) / ncols), ncols
else:
if len(features) <= ncols:
n_rows, n_cols = 1, len(features)
else:
n_rows, n_cols = ceil(len(features) / ncols), ncols
fig = plt.figure(figsize=(ncols * n_cols, ncols * n_rows))
# arrange axes as subplots
gs = gridspec.GridSpec(n_rows, n_cols, figure=fig)
# add plots to axes
i = 0
if histo is not None:
# add histology plot to first axes
ax = plt.subplot(gs[i])
im = ax.imshow(histo, **kwargs)
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
ax.set_title(
label=labels[i],
loc="left",
fontweight="bold",
fontsize=16,
)
i = i + 1
for feature in features:
ax = plt.subplot(gs[i])
im = ax.imshow(pita[:, :, feature - 1], **kwargs)
ax.tick_params(labelbottom=False, labelleft=False)
sns.despine(bottom=True, left=True)
ax.set_title(
label=labels[i],
loc="left",
fontweight="bold",
fontsize=16,
)
cbar = plt.colorbar(im, shrink=0.8)
i = i + 1
fig.tight_layout()
if save_to:
plt.savefig(fname=save_to, transparent=True, bbox_inches="tight", dpi=800)
return fig
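

# --- Added illustration: the ceil-based gridspec sizing used above ---
# A minimal standalone sketch with example values (not taken from any run):
#   >>> from math import ceil
#   >>> n_panels, ncols = 7, 4
#   >>> (ceil(n_panels / ncols), ncols) if n_panels > ncols else (1, n_panels)
#   (2, 4)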
| 36.990182 | 113 | 0.574733 | 3,327 | 26,374 | 4.436129 | 0.136159 | 0.054204 | 0.066942 | 0.08239 | 0.474964 | 0.444068 | 0.421844 | 0.381733 | 0.327868 | 0.292906 | 0 | 0.009319 | 0.300182 | 26,374 | 712 | 114 | 37.042135 | 0.790323 | 0.240047 | 0 | 0.418651 | 0 | 0 | 0.194389 | 0.01431 | 0 | 0 | 0 | 0 | 0.02381 | 1 | 0.009921 | false | 0.001984 | 0.021825 | 0 | 0.047619 | 0.039683 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
890a5161d6eed2959007d6f815eb9b7dd35c2414 | 2,541 | py | Python | src/main.py | chanleoc/kbc_demo | 9138de9083d92f5c8bab1dfc42d3dde50544920d | [
"MIT"
] | null | null | null | src/main.py | chanleoc/kbc_demo | 9138de9083d92f5c8bab1dfc42d3dde50544920d | [
"MIT"
] | null | null | null | src/main.py | chanleoc/kbc_demo | 9138de9083d92f5c8bab1dfc42d3dde50544920d | [
"MIT"
] | 1 | 2019-02-01T19:37:30.000Z | 2019-02-01T19:37:30.000Z | __author__ = 'Leo Chan'
__credits__ = 'Keboola 2019'
__project__ = 'kbc_demo'
"""
Python 3 environment
"""
#import pip
#pip.main(['install', '--disable-pip-version-check', '--no-cache-dir', 'logging_gelf'])
import sys
import os
import logging
import csv
import json
import pandas as pd
import logging_gelf.formatters
import logging_gelf.handlers
from keboola import docker
### Environment setup
abspath = os.path.abspath(__file__)
script_path = os.path.dirname(abspath)
os.chdir(script_path)
### Logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
datefmt="%Y-%m-%d %H:%M:%S")
"""
logger = logging.getLogger()
logging_gelf_handler = logging_gelf.handlers.GELFTCPSocketHandler(
host=os.getenv('KBC_LOGGER_ADDR'),
port=int(os.getenv('KBC_LOGGER_PORT'))
)
logging_gelf_handler.setFormatter(logging_gelf.formatters.GELFFormatter(null_character=True))
logger.addHandler(logging_gelf_handler)
# removes the initial stdout logging
logger.removeHandler(logger.handlers[0])
"""
### Access the supplied rules
cfg = docker.Config('/data/')
params = cfg.get_parameters()
#data_table = cfg.get_parameters()["data_table"]
### Get proper list of tables
in_tables = cfg.get_input_tables()
out_tables = cfg.get_expected_output_tables()
logging.info("IN tables mapped: "+str(in_tables))
logging.info("OUT tables mapped: "+str(out_tables))
### destination to fetch and output files
DEFAULT_FILE_INPUT = "/data/in/tables/"
DEFAULT_FILE_DESTINATION = "/data/out/tables/"
def get_tables(in_tables):
"""
Evaluate input and output table names.
Only taking the first one into consideration!
"""
### input file
table = in_tables[0]
in_name = table["full_path"]
in_destination = table["destination"]
logging.info("Data table: " + str(in_name))
logging.info("Input table source: " + str(in_destination))
return in_name
def get_output_tables(out_tables):
    """
    Evaluate output table names.
    Only taking the first one into consideration!
    """
    ### output file
    table = out_tables[0]
    out_name = table["full_path"]
    out_source = table["source"]
    logging.info("Output table: " + str(out_name))
    logging.info("Output table source: " + str(out_source))
    return out_name
def main():
"""
Main execution script.
"""
print('demo 2')
print('demo 3')
    print('demo 4')
return
if __name__ == "__main__":
main()
logging.info("Done.")
| 23.1 | 93 | 0.69815 | 335 | 2,541 | 5.059701 | 0.364179 | 0.051917 | 0.031858 | 0.020059 | 0.264307 | 0.234808 | 0.234808 | 0.234808 | 0.234808 | 0.234808 | 0 | 0.005169 | 0.162534 | 2,541 | 109 | 94 | 23.311927 | 0.791353 | 0.19205 | 0 | 0.2 | 0 | 0 | 0.219528 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.06 | false | 0 | 0.18 | 0 | 0.3 | 0.06 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
890cb45c8ba1a1a9867fe4cb3e4acf45b6533679 | 5,504 | py | Python | peac_pkg/contourer.py | emcramer/peac | 74a3b7c5885d84a0b6e1dfadd887d08aa3967866 | [
"MIT"
] | null | null | null | peac_pkg/contourer.py | emcramer/peac | 74a3b7c5885d84a0b6e1dfadd887d08aa3967866 | [
"MIT"
] | null | null | null | peac_pkg/contourer.py | emcramer/peac | 74a3b7c5885d84a0b6e1dfadd887d08aa3967866 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Jul 8 10:01:34 2019
@author: ecramer
"""
import numpy as np
from scipy import interpolate
from skimage.feature import peak_local_max
class Contourer():
""" TODO: Full writeup of class documentation here.
Steps:
1. generate the contours for each factor
2. find the peak coordinates and values
3. transform the peak coordinates into the manifold embedding's space
"""
def __init__(self):
self._interp_method = 'linear'
self._resolution = 50 # default resolution
self._peak_distance = np.floor(self._resolution/10.0).astype(np.uint8)
self._peak_threshold = 0.5
self._factor_names = [] # list to contain the factors of each column
# storage structures for the results
self.contours_ = {}
self.peak_values_ = {}
self.peak_coors_ = {}
self.transformed_peaks_ = {}
self.all_transformed_peaks_ = []
self.peak_names_ = []
def _check_input_dims(self, X, Y):
return (X.shape[1] == 2) and (len(Y.shape) > 0)
def fit(self, X, Y, **kwargs):
"""
        X = the manifold embeddings for a 2D space as a numpy array. Each column must be one embedding dimension.
Y = a pandas dataframe with the values for a series of predictors/factors
kwargs = extra parameters to feed to the contouring and peak finding algorithms
"""
if self._check_input_dims(X, Y):
self.X_ = X
self.Y_ = Y
# get the names of columns if Y is a pandas dataframe, otherwise assign numbers
if hasattr(Y, 'columns'):
self._factor_names = Y.columns
elif Y.ndim > 1:
self._factor_names = np.arange(Y.shape[1])
else:
self._factor_names = [0]
# unpack the dictionary to populate the fields in the class
for key, value in kwargs.items():
setattr(self, key, value)
# run the algorithms
self._gen_contours()
self._find_peaks()
self._transform_peaks()
return self
else:
print('Please double check input for correct dimensions. See documentation for details.')
return False
def _gen_contour(self, x1, x2, z):
"""
Generates a contour from the manifold embeddings and factor levels
"""
x_lin = np.linspace(min(x1), max(x1), self._resolution)
y_lin = np.linspace(min(x2), max(x2), self._resolution)
# create a grid of points
x_grid, y_grid = np.meshgrid(x_lin, y_lin)
z_grid = interpolate.griddata((x1, x2), z, (x_grid, y_grid), method=self._interp_method)
return x_grid, y_grid, z_grid
def _gen_contours(self):
"""
Step 1
Generate the contours for each factor in Y
"""
        # check whether there is more than one factor to contour; if not, contour the single factor directly
if self.Y_.ndim < 2:
z = np.asarray(self.Y_)
# get the values of the manifold embedding
x1 = self.X_[:, 0]
x2 = self.X_[:, 1]
x1g, x2g, zg = self._gen_contour(x1, x2, z)
self.contours_[0] = np.nan_to_num(zg)
else:
col = 0
while col < self.Y_.shape[self.Y_.ndim-1]:
z = np.asarray(self.Y_)[:, col]
# get the values of the manifold embedding
x1 = self.X_[:, 0]
x2 = self.X_[:, 1]
x1g, x2g, zg = self._gen_contour(x1, x2, z)
self.contours_[col] = np.nan_to_num(zg) # zero out the non-contoured points in the 2D space
col += 1 # go to the next column
def _find_peaks(self):
"""
Step 2
Find the local peaks in each contour.
"""
# find the peaks for each contour
for key, contour in self.contours_.items():
# find the peaks such that they are not within _peak_distance 'pixels' of each other
# and the peaks are above the _peak_threshold
peaks = peak_local_max(contour,
min_distance=self._peak_distance,
threshold_rel=self._peak_threshold)
self.peak_coors_[key] = peaks
# get the value of each peak found
self.peak_values_[key] = [contour[i[0], i[1]] for i in peaks]
def _transform_peaks(self):
"""
Step 3
Transform the peaks into the same space as the manifold embedding
"""
        # one coordinate per grid index (the contour grids have _resolution
        # points per axis, so indices run from 0 to _resolution - 1)
        x = np.arange(0, self._resolution, 1)
        x = np.interp(x, (x.min(), x.max()), (self.X_[:, 0].min(), self.X_[:, 0].max()))
        y = np.arange(0, self._resolution, 1)
        y = np.interp(y, (y.min(), y.max()), (self.X_[:, 1].min(), self.X_[:, 1].max()))
for key in self.peak_coors_.keys():
            # peak_local_max returns (row, col) pairs; rows index y, columns index x
            self.transformed_peaks_[key] = np.column_stack(([x[a[1]] for a in self.peak_coors_[key]],
                                                            [y[a[0]] for a in self.peak_coors_[key]]))
self.all_transformed_peaks_ = np.concatenate(tuple(self.transformed_peaks_.values()))
self.peak_names_ = np.concatenate([[self._factor_names[k]]*len(v) for k, v in self.peak_coors_.items()])
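

# --- Added usage sketch (illustrative): fits the contourer on random data to
# show the call sequence from the class docstring. The embedding, factors and
# sizes below are assumptions, not values from any real experiment.
if __name__ == "__main__":
    rng = np.random.RandomState(0)
    X = rng.rand(200, 2)   # a fake 2-D manifold embedding
    Y = rng.rand(200, 3)   # three fake factors to contour
    c = Contourer().fit(X, Y)
    print(len(c.contours_), "contours,", len(c.all_transformed_peaks_), "peaks found")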
| 39.597122 | 112 | 0.553779 | 729 | 5,504 | 3.997257 | 0.263374 | 0.038435 | 0.026767 | 0.02059 | 0.134523 | 0.115992 | 0.115992 | 0.061084 | 0.061084 | 0.061084 | 0 | 0.021607 | 0.344113 | 5,504 | 139 | 113 | 39.597122 | 0.785596 | 0.26508 | 0 | 0.115385 | 0 | 0 | 0.024282 | 0 | 0 | 0 | 0 | 0.007194 | 0 | 1 | 0.089744 | false | 0.012821 | 0.038462 | 0.012821 | 0.192308 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
890cd66cad388ad20cba21addbcdb0f32f1ebfc7 | 1,344 | py | Python | cert_core/cert_store/config.py | johnykkwan/cert-core | 40fdf04bdc255de1b36ca1f99fae10e6994858a1 | [
"MIT"
] | 13 | 2017-03-10T01:03:08.000Z | 2021-06-05T14:13:35.000Z | cert_core/cert_store/config.py | johnykkwan/cert-core | 40fdf04bdc255de1b36ca1f99fae10e6994858a1 | [
"MIT"
] | 2 | 2018-05-09T23:37:21.000Z | 2018-05-09T23:49:56.000Z | cert_core/cert_store/config.py | johnykkwan/cert-core | 40fdf04bdc255de1b36ca1f99fae10e6994858a1 | [
"MIT"
] | 14 | 2017-05-27T16:21:43.000Z | 2022-02-12T16:25:21.000Z | import os
import configargparse
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))
def create_config():
p = configargparse.getArgumentParser(default_config_files=[os.path.join(BASE_DIR, 'conf_test.ini'),
os.path.join(BASE_DIR, 'conf_local.ini'),
os.path.join(BASE_DIR, 'conf.ini'),
'/etc/cert-issuer/conf.ini'])
p.add('-c', '--my-config', required=False, is_config_file=True, help='config file path')
p.add_argument('--mongodb_uri', default='mongodb://localhost:27017/test', type=str, env_var='MONGODB_URI',
help='Mongo connection string, including db containing certificates')
p.add_argument('--cert_store_type', type=str, help='type of key value store to use for Cert Store')
p.add_argument('--cert_store_path', type=str, help='path to file system Cert Store')
p.add_argument('--v1_aware', action='store_true', help='Whether to support v1 certs')
args, _ = p.parse_known_args()
return args
parsed_config = None
def get_config():
global parsed_config
if parsed_config:
return parsed_config
parsed_config = create_config()
return parsed_config
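

# --- Added usage sketch (illustrative) ---
# Attribute names mirror the add_argument() calls above:
#   cfg = get_config()
#   print(cfg.mongodb_uri, cfg.cert_store_type, cfg.v1_aware)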
| 42 | 110 | 0.614583 | 171 | 1,344 | 4.602339 | 0.421053 | 0.045743 | 0.050826 | 0.053367 | 0.179161 | 0.087675 | 0.060991 | 0 | 0 | 0 | 0 | 0.007099 | 0.266369 | 1,344 | 31 | 111 | 43.354839 | 0.791075 | 0 | 0 | 0.086957 | 0 | 0 | 0.267857 | 0.040923 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.086957 | 0 | 0.304348 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
89170f8e42f9d4e440f81502dbb93240f0f1b350 | 11,203 | py | Python | Data_Structures/Graph/unweighted_graph.py | D-Chase-H/PurePy-Data-Structures | 892b9666a80054f4524c090a7b442b125c372403 | [
"MIT"
] | null | null | null | Data_Structures/Graph/unweighted_graph.py | D-Chase-H/PurePy-Data-Structures | 892b9666a80054f4524c090a7b442b125c372403 | [
"MIT"
] | 1 | 2017-12-15T04:13:08.000Z | 2017-12-15T04:13:08.000Z | Data_Structures/Graph/unweighted_graph.py | D-Chase-H/PurePy-Data-Structures | 892b9666a80054f4524c090a7b442b125c372403 | [
"MIT"
] | null | null | null |
# Author: D-Chase-H
"""
License:
MIT License
Copyright (c) 2017 Dustin Chase Harmon
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
class Node(object):
"""docstring for Node."""
def __init__(self):
self.id_num = None
self.edges = set()
class Graph(object):
"""docstring for Graph."""
def __init__(self):
self.nodes = dict()
############################################################################
# Insertion Methods
############################################################################
def insert_node(self, id_num):
try:
self.nodes[id_num]
except KeyError:
new_node = Node()
new_node.id_num = id_num
self.nodes[id_num] = new_node
return
def undirected_insert_nodes_from_edge_pair(self, pair):
""" pair: list/tuple type == [x, y] """
node_1_id_num = pair[0]
self.insert_node(node_1_id_num)
node_1 = self.nodes[node_1_id_num]
node_2_id_num = pair[1]
self.insert_node(node_2_id_num)
node_2 = self.nodes[node_2_id_num]
node_1.edges.add(node_2)
node_2.edges.add(node_1)
return
def directed_insert_nodes_from_edge_pair(self, from_node, to_node):
"""
from_node = integer
to_node = integer
"""
node_1_id_num = from_node
self.insert_node(node_1_id_num)
node_1 = self.nodes[node_1_id_num]
node_2_id_num = to_node
self.insert_node(node_2_id_num)
node_2 = self.nodes[node_2_id_num]
node_1.edges.add(node_2)
return
############################################################################
# Disjoint-Set Methods
############################################################################
def arr_of_disjoint_sets(self):
def search_forest(curr_node, curr_subset):
if curr_node.id_num in curr_subset:
return
curr_subset.add(curr_node.id_num)
visited_nodes.add(curr_node.id_num)
for node in curr_node.edges:
search_forest(node, curr_subset)
return
arr = []
visited_nodes = set()
for n in self.nodes.values():
if n.id_num not in visited_nodes:
curr_subset = set()
search_forest(n, curr_subset)
if curr_subset:
arr.append(curr_subset)
return arr
############################################################################
# Depth-First-Search Methods
############################################################################
def depth_first_search_bool(self, start, end):
"""
start = integer
end = integer
Returns: Bool
"""
def dfs_check(curr_node):
nonlocal is_connected
nonlocal end_node
if is_connected is True:
return
if curr_node in visited_nodes:
return
visited_nodes.add(curr_node)
for node in curr_node.edges:
if node == end_node:
is_connected = True
return
else:
dfs_check(node)
start_node = self.nodes[start]
end_node = self.nodes[end]
if start_node == end_node:
return True
visited_nodes = set([])
is_connected = False
dfs_check(start_node)
return is_connected
    def depth_first_search_all_paths(self, start, end):
        """
        start = integer
        end = integer
        Returns: list of paths from start to end (each path is a tuple of Nodes)
        """
        from copy import copy
def dfs_find_path(curr_node, visited_nodes=set(), path=[]):
nonlocal all_paths
nonlocal end_node
if curr_node in visited_nodes:
return
visited_nodes.add(curr_node)
path.append(curr_node)
for node in curr_node.edges:
if node == end_node:
temp_path = tuple(copy(path) + [node])
all_paths.append(temp_path)
else:
temp_visited_nodes = copy(visited_nodes)
temp_path = copy(path)
dfs_find_path(node, temp_visited_nodes, temp_path)
start_node = self.nodes[start]
end_node = self.nodes[end]
if start_node == end_node:
return [[start_node]]
all_paths = []
dfs_find_path(start_node)
return all_paths
############################################################################
# Breadth-First-Search Methods
############################################################################
def breadth_first_search_bool(self, start, end):
"""
start = integer
end = integer
Returns: Bool
"""
def bfs_check(curr_edges):
nonlocal is_connected
nonlocal visited_nodes
nonlocal end_node
if end_node in curr_edges:
is_connected = True
return
new_edges = set()
for node in curr_edges:
visited_nodes.add(node)
for node_edge in node.edges:
if node_edge not in visited_nodes:
new_edges.add(node_edge)
if not new_edges:
return
else:
bfs_check(new_edges)
start_node = self.nodes[start]
end_node = self.nodes[end]
if start_node == end_node:
return True
visited_nodes = set([start_node])
curr_edges = set([edg for edg in start_node.edges])
is_connected = False
bfs_check(curr_edges)
return is_connected
def bfs_shortest_path(self, start, end):
"""
start = integer
end = integer
        Returns: tuple of shortest paths (each path is a tuple of Nodes)
"""
def determine_path():
nonlocal curr_edges
nonlocal visited_nodes
nonlocal start_node
nonlocal end_node
nonlocal depth
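            # Backtrack from the end node: each loop iteration peels off one
            # BFS layer, keeping only neighbours marked visited on the way out.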
paths = [[end_node]]
for loop_num in range(depth):
remove_nodes = set()
new_paths = []
for index, p in enumerate(paths):
last_node = p[-1]
poss_paths = []
for node in last_node.edges:
# If we are on the last node, then skip any edge-node
# that is not the start node.
if loop_num == depth - 1:
if node != start_node:
continue
if node in visited_nodes:
temp = p + [node]
poss_paths.append(temp)
remove_nodes.add(node)
for sub_path in poss_paths:
new_paths.append(sub_path)
for node in remove_nodes:
visited_nodes.remove(node)
paths = new_paths
paths = tuple([tuple(reversed(p)) for p in paths])
return paths
def bfs_check(curr_edges):
nonlocal is_connected
nonlocal visited_nodes
nonlocal end_node
nonlocal depth
depth += 1
if end_node in curr_edges:
is_connected = True
return
new_edges = set()
for node in curr_edges:
visited_nodes.add(node)
for node_edge in node.edges:
if node_edge not in visited_nodes:
new_edges.add(node_edge)
if not new_edges:
return
else:
bfs_check(new_edges)
start_node = self.nodes[start]
end_node = self.nodes[end]
if start_node == end_node:
return [[start_node]]
depth = 0
visited_nodes = set([start_node])
curr_edges = set([edg for edg in start_node.edges])
is_connected = False
bfs_check(curr_edges)
paths = determine_path()
return paths
if __name__ == '__main__':
import sys
from random import randrange
print("START\n")
############################################################################
pairs = set()
while len(pairs) < 20:
num1 = randrange(20)
num2 = randrange(20)
if num1 == num2:
continue
new = (num1, num2)
pairs.add(new)
pairs = list(pairs)
g = Graph()
print(pairs, "\n")
for p in pairs:
g.undirected_insert_nodes_from_edge_pair(p)
d_set = g.arr_of_disjoint_sets()
print("Disjoint Set: ", d_set, "\n")
start = pairs[9][0]
end = pairs[4][1]
dfs = g.depth_first_search_bool(start, end)
print("DFS: Path from {} to {} is {}\n".format(start, end, dfs))
dfs_paths = g.depth_first_search_all_paths(start, end)
dfs_paths = [[node.id_num for node in path] for path in dfs_paths]
print("DFS: All paths from {} to {} are {}\n".format(start, end, dfs_paths))
bfs = g.breadth_first_search_bool(start, end)
print("BFS: Path from {} to {} is {}\n".format(start, end, bfs))
bfs_short = g.bfs_shortest_path(start, end)
bfs_short = [[j.id_num for j in i] for i in bfs_short]
print("BFS Shortest: Path from {} to {} is {}\n".format(start, end, bfs_short))
############################################################################
print("\nEND")
| 28.652174 | 84 | 0.496028 | 1,243 | 11,203 | 4.224457 | 0.170555 | 0.022853 | 0.013712 | 0.011426 | 0.403733 | 0.374595 | 0.336126 | 0.336126 | 0.322986 | 0.31156 | 0 | 0.00694 | 0.369722 | 11,203 | 390 | 85 | 28.725641 | 0.736723 | 0.136214 | 0 | 0.479452 | 0 | 0 | 0.021537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073059 | false | 0 | 0.013699 | 0 | 0.200913 | 0.03653 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8919dcadf341d13761481e587ce0aa0f760975fb | 3,550 | py | Python | buildtest/tools/unittests.py | buildntest/buildtest | d371048631cdd33ae7bf66f795f5afed83491a90 | [
"MIT"
] | 29 | 2017-10-20T02:47:10.000Z | 2020-03-26T17:24:03.000Z | buildtest/tools/unittests.py | shahzebsiddiqui/testgen-HPC | e69d9334cf2939af4fca59e75f397b0b1edbbfaf | [
"MIT"
] | 219 | 2017-08-25T13:21:53.000Z | 2020-04-18T19:07:05.000Z | buildtest/tools/unittests.py | shahzebsiddiqui/BuildTest | 5a04641f37ba588c906112b3848249b241061a9c | [
"MIT"
] | 5 | 2017-08-24T11:20:30.000Z | 2020-02-21T04:28:40.000Z | import argparse
import os
import shutil
import sys
import coverage
import pytest
from buildtest.defaults import (
BUILDTEST_ROOT,
BUILDTEST_UNITTEST_ROOT,
BUILDTEST_USER_HOME,
VAR_DIR,
console,
)
from buildtest.utils.file import is_dir, resolve_path
def run_unit_tests(pytestopts=None, sourcefiles=None, enable_coverage=False):
"""Entry point for running buildtest unit tests. This method can be invoked via ``buildtest unittests`` or run
via command line as standalone program. The unit tests are run via `pytest <https://docs.pytest.org/>`_ and `coverage <https://coverage.readthedocs.io/en/6.2/>`_
for measuring coverage report. This method will report coverage results that can be viewable in html or json.
Args:
pytestopts (str): Specify options to pytest command.
sourcefiles (list): List of source files to run with pytest
enable_coverage (bool): Enable coverage when running regression test
"""
if not os.getenv("BUILDTEST_ROOT"):
sys.exit(
"Please check your buildtest installation by running 'source setup.sh'"
)
pytestopts = pytestopts.split() if pytestopts else []
sources = []
# if --sourcefiles specified we resolve path to each argument otherwise default to BUILDTEST_UNITTEST_ROOT which is root of test directory
sourcefiles = sourcefiles or [BUILDTEST_UNITTEST_ROOT]
for fpath in sourcefiles:
sources.append(resolve_path(fpath))
# need to remove any None types from list since resolve_path method can return None if path is invalid
sources = list(filter(None, sources))
pytest_cmd = pytestopts + sources
html_dir = os.path.join(BUILDTEST_ROOT, "htmlcov")
if is_dir(BUILDTEST_USER_HOME):
shutil.rmtree(BUILDTEST_USER_HOME)
if is_dir(VAR_DIR):
shutil.rmtree(VAR_DIR)
cov = coverage.Coverage(branch=True)
# run regression test with coverage if --coverage is specified
if enable_coverage:
cov.erase()
cov.start()
# run regression test
retcode = pytest.main(pytest_cmd)
# if there is a failure in pytest raise exit 1
if retcode == pytest.ExitCode.TESTS_FAILED:
sys.exit(1)
if enable_coverage:
cov.stop()
cov.html_report(title="buildtest unittests coverage report", directory=html_dir)
cov.json_report(outfile=os.path.join(BUILDTEST_ROOT, "coverage.json"))
cov.report(ignore_errors=True, skip_empty=True, sort="-cover", precision=2)
print("\n\n")
console.print("Writing coverage results to: ", html_dir)
coverage_file = os.path.join(html_dir, "index.html")
assert os.path.exists(coverage_file)
console.print("You can view coverage report by viewing file: ", coverage_file)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
prog="unittest",
description="Run buildtest unit tests",
)
parser.add_argument(
"-c",
"--coverage",
action="store_true",
help="Enable coverage when running regression test",
)
parser.add_argument("-p", "--pytestopts", type=str, help="Specify option to pytest")
parser.add_argument(
"-s",
"--sourcefiles",
type=str,
help="Specify path to file or directory when running regression test",
action="append",
)
args = parser.parse_args()
run_unit_tests(
pytestopts=args.pytestopts,
sourcefiles=args.sourcefiles,
enable_coverage=args.coverage,
)
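
# --- Added usage note (illustrative): the pytest options and source path are
# example values, but the flag names match the argparse setup above ---
#   python unittests.py --coverage --pytestopts "-v" --sourcefiles tests/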
| 32.568807 | 165 | 0.684225 | 454 | 3,550 | 5.211454 | 0.365639 | 0.04142 | 0.026627 | 0.031699 | 0.052409 | 0.032967 | 0 | 0 | 0 | 0 | 0 | 0.001818 | 0.225352 | 3,550 | 108 | 166 | 32.87037 | 0.858545 | 0.266197 | 0 | 0.054795 | 0 | 0 | 0.179267 | 0 | 0 | 0 | 0 | 0 | 0.013699 | 1 | 0.013699 | false | 0 | 0.109589 | 0 | 0.123288 | 0.041096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8919eadb500a2ec6192616456774d8da32531229 | 2,909 | py | Python | dashboard/__init__.py | VisionTale/StreamHelper-overlay | f50ea9ddfecf177db10c4ee99eac6880d362c3cc | [
"MIT"
] | null | null | null | dashboard/__init__.py | VisionTale/StreamHelper-overlay | f50ea9ddfecf177db10c4ee99eac6880d362c3cc | [
"MIT"
] | null | null | null | dashboard/__init__.py | VisionTale/StreamHelper-overlay | f50ea9ddfecf177db10c4ee99eac6880d362c3cc | [
"MIT"
] | null | null | null | from os.path import realpath, join, dirname
from flask import render_template, request, flash
from flask_login import login_required
from flask_wtf.csrf import validate_csrf
from wtforms.validators import ValidationError
from webapi.libs.network import is_up
from webapi.libs.text import camel_case
from .. import bp, name
from .forms import create_data_form, create_settings_form
from .caspar_connector import is_caspar_up
from .backend_connector import backend_request
@bp.route('/', methods=['POST', 'GET'])
@bp.route('/dashboard', methods=['POST', 'GET'])
@login_required
def dashboard():
"""
    Create the dashboard page.
    :return: the rendered dashboard template
"""
from .. import config
# Create default config values
_init_server_config()
# Create the server settings form
settings = create_settings_form(config)
# Save config values if submitted
if settings.validate_on_submit():
_set_server_config(settings.server.data, settings.overlay_server.data)
# Read config values
server, port = _get_caspar_server_and_port()
if not is_up(server):
# Check if server is not reachable
flash(f"Server {server} is not reachable")
reachable = False
elif not is_caspar_up(server, port):
# Check if CasparCG server is not reachable
flash(f"CasparCG server on route {server} port {port} not reachable")
reachable = False
else:
reachable = True
# If a form was submitted, check the csrf token for security
validated = False
if request.form.get('csrf_token'):
try:
validate_csrf(request.form.get('csrf_token'))
validated = True
except ValidationError as e:
flash(f'ValidationError: {e}')
if reachable and validated:
backend_request(request.form.to_dict())
    # MAGIC: collect per-overlay form definitions from the plugin's definitions module
from .. import definitions
defs = dict()
for e in dir(definitions):
if e.endswith('_definition'):
defs[e.replace('_definition', '')] = getattr(definitions, e)
form_list = list()
for e in defs:
# Create the form for the overlay and save it
form = create_data_form(defs[e], e)
form_list.append((camel_case(e, '_'), form))
return render_template('overlay_dashboard.html', settings=settings, forms=form_list)
def _init_server_config():
from .. import config
config.set_if_none(name, 'server', 'localhost:5250')
config.set_if_none(name, 'overlay_server', 'http://localhost:5000/overlay/')
def _set_server_config(server: str, overlay_server: str):
from .. import config
if '://' in server:
server = server.split('://')[1]
config.set(name, 'server', server)
config.set(name, 'overlay_server', overlay_server)
def _get_caspar_server_and_port() -> tuple:
from .. import config
return config.get(name, 'server').split(':')
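

# --- Added note (assumption) ---
# The dashboard view expects the plugin's `definitions` module to expose one
# attribute per overlay named `<overlay>_definition`, each handed to
# create_data_form() to build its form. A hypothetical example:
#   lower_third_definition = {...}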
| 29.989691 | 88 | 0.682709 | 382 | 2,909 | 5.013089 | 0.282723 | 0.031332 | 0.03342 | 0.031332 | 0.093995 | 0.027154 | 0 | 0 | 0 | 0 | 0 | 0.003939 | 0.214507 | 2,909 | 96 | 89 | 30.302083 | 0.834136 | 0.12616 | 0 | 0.098361 | 0 | 0 | 0.118536 | 0.008751 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065574 | false | 0 | 0.262295 | 0 | 0.360656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
891b9abd04593024e436067023f0359758fe9dff | 13,412 | py | Python | UserCode/John/MergeBinaries.py | RunzZhang/SBCcode | e75b8e751cec5fb2c28950edef0c82f005caedcb | [
"MIT"
] | 4 | 2018-08-27T18:02:34.000Z | 2020-06-09T21:19:04.000Z | UserCode/John/MergeBinaries.py | RunzZhang/SBCcode | e75b8e751cec5fb2c28950edef0c82f005caedcb | [
"MIT"
] | null | null | null | UserCode/John/MergeBinaries.py | RunzZhang/SBCcode | e75b8e751cec5fb2c28950edef0c82f005caedcb | [
"MIT"
] | 4 | 2019-06-20T21:36:26.000Z | 2020-11-10T17:23:14.000Z | ## John Gresl
import os
from collections import defaultdict, OrderedDict
import time
from copy import deepcopy
import numpy as np
from SBCcode.DataHandling.ReadBinary import ReadBlock as RB
from SBCcode.DataHandling.WriteBinary import WriteBinaryNtupleFile as WB
from SBCcode.AnalysisModules.AnalyzeDytran import dytranAnalysis as da
from SBCcode.AnalysisModules.EventAnalysis import EventAnalysis as eva
from SBCcode.AnalysisModules.ImageAnalysis import BubbleFinder
from SBCcode.AnalysisModules.AcousticT0 import AcousticAnalysis as aa
from SBCcode.AnalysisModules.PMTComprehensiveModule import PMTcm as pmtpa
from SBCcode.AnalysisModules.PMTfastDAQalignment import PMTandFastDAQalignment as pmtfda
from SBCcode.AnalysisModules.PTData import main as ptd
from SBCcode.AnalysisModules.TimingAnalysis import TimingAnalysis as ta
from SBCcode.DataHandling.GetSBCEvent import GetEvent as get_event
from SBCcode.DataHandling.WriteBinary import WriteBinaryNtupleFile as wb
from SBCcode.UserCode.John.NewT0 import calculate_t0 as calculate_t0
def sort_runs(arr):
# Input:
# arr: An array of run_ids as strings. Should look like ["20170623_0", "20170623_5", etc...]
# Outputs: A natural-ish sorted version that puts the dates in order and the run numbers for each date in order
dates_only = []
runs_only = []
for run_id in arr:
dates_only.append(run_id.split("_")[0])
runs_only.append(run_id.split("_")[1])
run_dict = defaultdict(list)
for date, run in zip(dates_only, runs_only):
run_dict[date].append(run)
k = sorted(list(run_dict.keys()), key=int)
out_list = []
for date in k:
run_ids_d = sorted(run_dict[date], key=int)
for run_id in run_ids_d:
out_list.append(date+"_"+run_id)
return out_list
def trim_runlist(arr, start=None, stop=None):
# Inputs:
# arr: An array of run_ids as strings. Should look like ["20170623_0", "20170623_5", etc...]
# start: Start run number. If this is not supplied, will start at the beginning
# stop: Stop run number. If this is not supplied, will continue to end
# Outputs: A sorted, trimmed runlist that goes from start to stop
arr = sort_runs(arr)
start = arr[0] if start == None else start
stop = arr[-1] if stop == None else stop
start_date = int(start.split("_")[0])
start_run_num = int(start.split("_")[1])
stop_date = int(stop.split("_")[0])
stop_run_num = int(stop.split("_")[1])
    out = []
for run in arr:
date = int(run.split("_")[0])
run_num = int(run.split("_")[1])
if start_date > date or date > stop_date:
continue
if (start_run_num > run_num and date == start_date) or (run_num > stop_run_num and date == stop_date):
continue
out.append(run)
return out
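

# --- Added illustration (example values) ---
#   >>> trim_runlist(["20170623_0", "20170623_5", "20170624_1"], start="20170623_5")
#   ['20170623_5', '20170624_1']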
def dictionary_append(d1, *args):
# Inputs:
# d1: Dictionary
# args: Any number of dictionaries
# Outputs: A dictionary with the same keys, but the values are the values from args appended to the values of d1
# Note: The keys in d1 and args MUST match, and all of the values MUST be lists.
d1 = deepcopy(d1)
if len(args) == 0:
raise TypeError("dictionary_append must be called with at least 2 arguments.")
for arg in args:
if type(arg) not in [dict, defaultdict, OrderedDict]:
raise TypeError("args must be dictionaries!")
for d2 in args:
if not set(d1.keys()) == set(d2.keys()):
raise KeyError("The keys for the two dictionaries must match! Mismatched keys = {}".\
format(set(d1.keys()).symmetric_difference(set(d2.keys()))))
for d2 in args: # This '2nd' for loop because we want to make sure all the keys are the same first.
for k,v in d2.items():
if type(v) not in [list, np.ndarray]:
raise ValueError("The values of the dictionary MUST be list or np.ndarray. Key {} has type(value)={}".\
format(k, type(v)))
try:
d1[k].extend(v) # <-- If we have python lists
except AttributeError:
print("DEBUG:", k, d1[k].shape, v.shape)
d1[k] = np.append(d1[k], v, axis=0) # <-- If we have numpy arrays
return d1
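

# --- Added illustration (example values) ---
#   >>> dictionary_append({"a": [1]}, {"a": [2, 3]})
#   {'a': [1, 2, 3]}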
if __name__ == "__main__":
file_templates = [#"AcousticAnalysis_{runid}.bin",
#"DytranAnalysis_{runid}.bin",
#"EventAnalysis_{runid}.bin",
#"HistoryAnalysis_{runid}.bin",
#"ImageAnalysis_{runid}.bin",
#"PMTfastDAQalignment_{runid}.bin",
#"PMTpulseAnalysis_{runid}.bin",
"TimingAnalysis_{runid}.bin",
]
defaults = [#aa(None, None),
#da(None),
#eva(None),
#ptd(None),
#BubbleFinder(None, None, None, None, None, None),
#pmtfda(None),
#pmtpa(None),
ta(None, None, None),
#calculate_t0(None, None, None, None)
]
p_list = [(f, d) for f, d in zip(file_templates, defaults)]
recon_directory = "/pnfs/coupp/persistent/grid_output/SBC-17-T0Test3/output/"
output_directory = "/nashome/j/jgresl/"
runid_list = sort_runs([f for f in os.listdir(recon_directory) if os.path.isdir(os.path.join(recon_directory, f))])
runid_list = trim_runlist(runid_list, start="20170623_3")
bad_run_file = "BadRunsV6.npy"
remake_badruns = False
if remake_badruns:
print("Building bad run list.")
bad_runs = defaultdict()
for f_temp in file_templates:
bad_runs[f_temp] = set()
bad_runs["AcousticAnalysis_{runid}.bin"] = set()
for f_temp in file_templates:
if f_temp != "AcousticTEST_{runid}.bin":
for runid in runid_list:
if not os.path.isfile(os.path.join(recon_directory, runid, f_temp.format(runid=runid))):
print("\tSkipping {}. File not present."\
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
continue
try:
RB(os.path.join(recon_directory, runid, f_temp.format(runid=runid)), max_file_size=800)
except IndexError:
print("\tSkipping {}. File exists, but unable to read properly." \
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
except OSError:
print("\tSkipping {}. File above maximum file size. (Raise this in ReadBlock(...)" \
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
if f_temp == "AcousticTEST_{runid}.bin":
f_temp = "AcousticAnalysis_{runid}.bin"
for runid in runid_list:
recon_directory = "/pnfs/coupp/persistent/grid_output/SBC-17-T0Test2/output"
if not os.path.isfile(os.path.join(recon_directory, runid, f_temp.format(runid=runid))):
print("\tSkipping {}. File not present."\
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
continue
try:
RB(os.path.join(recon_directory, runid, f_temp.format(runid=runid)), max_file_size=800)
except IndexError:
print("\tSkipping {}. File exists, but unable to read properly." \
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
except OSError:
print("\tSkipping {}. File above maximum file size. (Raise this in ReadBlock(...)" \
.format(os.path.join(recon_directory, runid, f_temp.format(runid=runid))))
bad_runs[f_temp].add(runid)
print("----------")
print("-Bad Runs-")
print("----------")
for k,v in bad_runs.items():
print("{} has {} bad runs.".format(k, len(v)))
print("\tSaving bad runs to {}.".format(bad_run_file))
np.save(bad_run_file, bad_runs)
###############################################################################
###############################################################################
###############################################################################
###############################################################################
###############################################################################
do_merge = True
if do_merge:
print("Beginning to merge all files.")
big_out = []
tstart = time.time()
try:
            bad_runs = np.load(bad_run_file, allow_pickle=True).flat[0]
except:
print("Couldn't find bad run file...")
bad_runs = defaultdict(list)
print("----------")
print("-Bad Runs-")
print("----------")
for k, v in bad_runs.items():
print("\t{} has {} bad runs.".format(k, len(v)))
bad_list_intersection = set.intersection(*bad_runs.values())
print("\t\tIntersection of bad events (failed for all analyses): {} items: {}".format(len(bad_list_intersection),
bad_list_intersection))
for f_temp, d_default in p_list:
if f_temp == "AcousticTEST_{runid}.bin":
recon_directory = "/pnfs/coupp/persistent/grid_output/SBC-17-T0Test/output/"
f_temp = "AcousticAnalysis_{runid}.bin"
t0 = time.time()
out = {}
first_real_run = True
print("\tStarting {}".format(f_temp))
n_to_process = len(runid_list) - len(bad_runs[f_temp])
n_processed = 0
for n, runid in enumerate(runid_list):
npev = np.array([int(runid.split("_")[1])], dtype=np.int32)
nprunid = np.int32(runid.split("_"))
print(nprunid)
if n%25 == 0: # Print a status message every 25 runs
print("\t\t{:.2f}% done with file {}".\
format(n_processed/n_to_process*100, f_temp))
if runid in bad_list_intersection: # If it failed for all analysis, skip it entirely.
continue
if runid in bad_runs[f_temp]:
# print("#####FAKE VALUES#####")
temp = deepcopy(d_default)
temp["runid"] = np.array([nprunid])
temp["ev"] = npev
if first_real_run:
out = temp
first_real_run = False
# big_out.append(out)
else:
# big_out.append(temp)
out = dictionary_append(out, temp)
continue
# print("\t\t\tRunID: {}".format(runid))
if first_real_run:
out = RB(os.path.join(recon_directory, runid, f_temp.format(runid=runid)), max_file_size=800)
# big_out.append(out)
first_real_run = False
else:
# print("******REAL VALUES*******")
d=RB(os.path.join(recon_directory, runid, f_temp.format(runid=runid)),
max_file_size=800)
# big_out.append(d)
try:
out = dictionary_append(out, d)
except:
print("Failed.#########")
if runid == "20170805_4":
for eee in range(12):
temp = deepcopy(d_default)
temp["runid"] = np.array([nprunid])
temp["ev"] = npev
out = dictionary_append(out, temp)
else:
temp = deepcopy(d_default)
temp["runid"] = np.array([nprunid])
temp["ev"] = npev
out = dictionary_append(out, temp)
n_processed += 1
t1 = time.time()
print("\tTook {:.2f} seconds to read input files for {}".format(t1 - t0, f_temp))
print("\tStarting to write output merged file.")
WB(os.path.join(output_directory, f_temp.format(runid="all")), out, rowdef=1)
t2 = time.time()
print("\tTook {:.2f} seconds to write merged file {}".format(t2 - t1, f_temp.format(runid="all")))
print("\tTook {:.2f} seconds for entire process to read and create {}". \
format(t2 - t0, f_temp.format(runid="all")))
tfinish = time.time()
| 49.308824 | 121 | 0.530122 | 1,542 | 13,412 | 4.457198 | 0.191959 | 0.025462 | 0.024007 | 0.034919 | 0.374363 | 0.332315 | 0.315001 | 0.292594 | 0.283282 | 0.262331 | 0 | 0.016357 | 0.329928 | 13,412 | 271 | 122 | 49.490775 | 0.748414 | 0.115419 | 0 | 0.351852 | 0 | 0 | 0.134898 | 0.030746 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013889 | false | 0 | 0.083333 | 0 | 0.111111 | 0.12963 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
891bdc5511e0a29c4be2339f1dc933bccb992ce1 | 2,066 | py | Python | src/ndc/get_ndc.py | TEI-EAJ/auto_aozora_tei | 5535abef680a1e186f8a7dc6efc30a1dcf4efeec | [
"CC0-1.0"
] | 3 | 2019-02-12T13:28:22.000Z | 2021-07-25T20:58:07.000Z | src/ndc/get_ndc.py | TEI-EAJ/auto_aozora_tei | 5535abef680a1e186f8a7dc6efc30a1dcf4efeec | [
"CC0-1.0"
] | null | null | null | src/ndc/get_ndc.py | TEI-EAJ/auto_aozora_tei | 5535abef680a1e186f8a7dc6efc30a1dcf4efeec | [
"CC0-1.0"
] | 1 | 2019-02-12T22:04:00.000Z | 2019-02-12T22:04:00.000Z | import urllib.request
from bs4 import BeautifulSoup
import json
def extract(index):
url = "http://yozora.main.jp/"
html = urllib.request.urlopen(url)
    # parse the HTML with BeautifulSoup
soup = BeautifulSoup(html, "lxml")
a_list = soup.find_all(class_="navi")[index].find_all("a")
arr = []
for a in a_list:
id = a.get("href").split(".")[0]
name = a.text
print(name)
print("id\t" + id)
obj = {}
obj["name"] = name
obj["id"] = id
arr2 = []
obj["children"] = arr2
arr.append(obj)
url2 = "http://yozora.main.jp/" + id + ".html"
html2 = urllib.request.urlopen(url2)
        # parse the HTML with BeautifulSoup
soup2 = BeautifulSoup(html2, "lxml")
a_list2 = soup2.find(class_="navi").find_all("a")
for a2 in a_list2:
id2 = a2.get("href").split(".")[0]
print("id2\t" + id2)
obj2 = {}
obj2["name"] = a2.text
obj2["id"] = id2
arr3 = []
obj2["children"] = arr3
arr2.append(obj2)
url3 = "http://yozora.main.jp/" + id2 + ".html"
html3 = urllib.request.urlopen(url3)
            # parse the HTML with BeautifulSoup
soup3 = BeautifulSoup(html3, "lxml")
a_list3 = soup3.find(class_="navi").find_all("a")
for a3 in a_list3:
id3 = a3.get("href").split(".")[0]
id3 = id2.split("/")[0] + "/" + id3
print("id3\t" + id3)
obj3 = {}
obj3["name"] = a3.text
obj3["id"] = id3
obj3["value"] = 0
arr3.append(obj3)
return arr
result = {}
result["name"] = "all"
arr = []
result["children"] = arr
arr.append({
"name": "分野別トップ",
"children": extract(0)
})
arr.append({
"name": "児童書トップ",
"children": extract(1)
})
with open('data/ndc.json', 'w', encoding='utf-8') as outfile:
json.dump(result, outfile, ensure_ascii=False, indent=4, sort_keys=True, separators=(',', ': '))
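
# --- Added note ---
# Shape of the JSON written above (category names left untranslated):
#   {"name": "all", "children": [{"name": ..., "children": [...]}, ...]}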
| 22.703297 | 100 | 0.494676 | 234 | 2,066 | 4.303419 | 0.34188 | 0.051639 | 0.041708 | 0.047666 | 0.047666 | 0.047666 | 0.047666 | 0 | 0 | 0 | 0 | 0.042972 | 0.335431 | 2,066 | 90 | 101 | 22.955556 | 0.690459 | 0.031462 | 0 | 0.098361 | 0 | 0 | 0.120681 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0 | 0.04918 | 0 | 0.081967 | 0.065574 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
891be27156d1066f41832049260bb47279ac1736 | 4,116 | py | Python | backend/server/task/views.py | munteanugabriel25/Javascript-Django-TodoList- | e3cb8d4a573dfbb84960839b7a01a24a195c7755 | [
"Unlicense"
] | null | null | null | backend/server/task/views.py | munteanugabriel25/Javascript-Django-TodoList- | e3cb8d4a573dfbb84960839b7a01a24a195c7755 | [
"Unlicense"
] | null | null | null | backend/server/task/views.py | munteanugabriel25/Javascript-Django-TodoList- | e3cb8d4a573dfbb84960839b7a01a24a195c7755 | [
"Unlicense"
] | null | null | null | from django.shortcuts import render
from rest_framework.views import APIView
from rest_framework import status
from .serializers import TaskListCreateSerializer, UserCreateSerializer, UserLoginSerializer, UserSerializer, TaskUpdateSerializer
from rest_framework.response import Response
from .models import Task
from rest_framework.views import csrf_exempt
from django.contrib.auth import login, authenticate
from rest_framework.authtoken.models import Token
from django.shortcuts import get_object_or_404
from django.contrib.auth.models import User
from rest_framework.permissions import IsAuthenticated
# Create your views here.
class ListCreateApiView(APIView):
permission_classes = [IsAuthenticated]
def get(self, request):
filter = request.GET.get("period", None)
user = get_object_or_404(User, username=request.user)
        if filter is not None:
if filter == 'week':
query = Task.objects.next_week_tasks(user.id)
else:
query = Task.objects.today_tasks(user.id)
else:
query = Task.objects.all_tasks(user.id)
serializer = TaskListCreateSerializer(query, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
def post(self, request):
user = get_object_or_404(User,username=request.user)
serializer = TaskListCreateSerializer(data=request.data, partial=True)
if serializer.is_valid():
serializer.save(user=user)
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
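
    # --- Added note (illustrative; the URL prefix is an assumption) ---
    # Requests handled by get() above:
    #   GET /tasks/?period=week   -> Task.objects.next_week_tasks(user.id)
    #   GET /tasks/?period=<else> -> Task.objects.today_tasks(user.id)
    #   GET /tasks/               -> Task.objects.all_tasks(user.id)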
class UserRegisterApiView(APIView):
serializer_class = UserCreateSerializer
def post(self, request):
serializer = UserCreateSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
class UserLoginApiView(APIView):
def post(self, request):
serializer = UserLoginSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
user = serializer.validated_data["user"]
token, created = Token.objects.get_or_create(user=user)
user_serializer = UserSerializer(user, context={"request":request})
return Response(user_serializer.data, status=status.HTTP_200_OK)
class RetrieveDeleteUpdateApiView(APIView):
permission_classes = [IsAuthenticated]
def get(self, request, pk):
task_object = get_object_or_404(Task, pk=pk)
user = get_object_or_404(User, username=request.user)
if task_object.user == user:
serializer = TaskUpdateSerializer(task_object)
return Response(serializer.data, status=status.HTTP_200_OK)
else:
return Response({"status": "you don't have permissions for this task"}, status=status.HTTP_400_BAD_REQUEST)
def delete(self, request, pk):
task_object = get_object_or_404(Task, pk=pk)
user = get_object_or_404(User, username=request.user)
if task_object.user == user:
task_object.delete()
return Response({"status": "deleted"}, status=status.HTTP_200_OK)
else:
return Response({"status": "you don't have permissions for this task"}, status=status.HTTP_400_BAD_REQUEST)
    def put(self, request, pk):
        task_object = get_object_or_404(Task, pk=pk)
        user = get_object_or_404(User, username=request.user)
        if task_object.user == user:
            serializer = TaskUpdateSerializer(task_object, data=request.data, partial=True)
            if serializer.is_valid():
                serializer.save()
                return Response({"status": "updated"}, status=status.HTTP_200_OK)
            return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
        else:
return Response({"status": "you don't have permissions for this task"}, status=status.HTTP_400_BAD_REQUEST) | 40.752475 | 130 | 0.688047 | 478 | 4,116 | 5.740586 | 0.198745 | 0.061224 | 0.069971 | 0.045918 | 0.581268 | 0.514942 | 0.505831 | 0.470481 | 0.416545 | 0.388484 | 0 | 0.019743 | 0.224733 | 4,116 | 101 | 131 | 40.752475 | 0.840175 | 0.005588 | 0 | 0.455696 | 0 | 0 | 0.04521 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088608 | false | 0 | 0.151899 | 0 | 0.481013 | 0.012658 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64dfbc4c6711f4cdd56dde0eb4ae1be40d05958f | 1,334 | py | Python | capsule.py | lebek/reversible-raytracer | 9b502737da0e0a7cfd664a795b3a38c1809c1774 | [
"MIT"
] | 15 | 2015-04-11T14:40:35.000Z | 2020-06-05T14:17:53.000Z | capsule.py | lebek/RRT | 9b502737da0e0a7cfd664a795b3a38c1809c1774 | [
"MIT"
] | null | null | null | capsule.py | lebek/RRT | 9b502737da0e0a7cfd664a795b3a38c1809c1774 | [
"MIT"
] | 3 | 2016-02-09T18:12:51.000Z | 2018-05-24T13:07:52.000Z | import numpy as np
import theano
import theano.tensor as T
class Capsule():
def __init__(self, name, n_hidden, n_output, num_caps):
self.name = name
bias = np.asarray([0,0, 3 * num_caps,1,1,1], dtype=theano.config.floatX)/ num_caps
self.params = [self.init_capsule_weight(n_hidden),
theano.shared(bias, borrow=True)]
def init_capsule_weight(self, n_hidden_l3):
        # Glorot-style uniform range: 4 * sqrt(6 / (fan_in + fan_out))
        l3_to_center = 0.05 * np.asarray(
            np.random.uniform(
                low=-4 * np.sqrt(6. / (6 + n_hidden_l3)),
                high=4 * np.sqrt(6. / (6 + n_hidden_l3)),
                size=(n_hidden_l3, 3)
            ), dtype=theano.config.floatX)
        l3_to_radius = 0.0005 * np.asarray(
            np.random.uniform(
                low=-4 * np.sqrt(6. / (6 + n_hidden_l3)),
                high=4 * np.sqrt(6. / (6 + n_hidden_l3)),
                size=(n_hidden_l3, 3)
            ), dtype=theano.config.floatX)
return theano.shared(np.concatenate((l3_to_center, l3_to_radius), 1))
#return theano.shared(0.07*np.asarray(
# np.random.uniform(
# low=-4 * np.sqrt(6. / n_output+n_hidden_l3),
# high=4 * np.sqrt(6. / n_output+n_hidden_l3),
# size=(n_hidden_l3, n_output)
# ), dtype=theano.config.floatX),borrow=True)
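
    # --- Added usage sketch (illustrative; `hidden` is an assumed symbolic
    # (batch, 128) matrix, and the sizes are example values) ---
    #   cap = Capsule("cap0", n_hidden=128, n_output=6, num_caps=4)
    #   W, b = cap.params            # shared (128, 6) weights, length-6 bias
    #   pose = T.dot(hidden, W) + b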
| 33.35 | 90 | 0.55997 | 192 | 1,334 | 3.65625 | 0.25 | 0.119658 | 0.128205 | 0.068376 | 0.42735 | 0.42735 | 0.42735 | 0.408832 | 0.403134 | 0.346154 | 0 | 0.054054 | 0.306597 | 1,334 | 39 | 91 | 34.205128 | 0.704865 | 0.1994 | 0 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.136364 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64e0d083a4907b69b2dd393b09adcf2074033d97 | 60,731 | py | Python | SubExperiment.py | zhangyintai/Experiment_Manager | 800f95068a12b64d4a7e524fe406d5ef3b47f521 | [
"MIT"
] | null | null | null | SubExperiment.py | zhangyintai/Experiment_Manager | 800f95068a12b64d4a7e524fe406d5ef3b47f521 | [
"MIT"
] | null | null | null | SubExperiment.py | zhangyintai/Experiment_Manager | 800f95068a12b64d4a7e524fe406d5ef3b47f521 | [
"MIT"
] | null | null | null | # For controlling experiments for the ion trap lab led by Prof. Yiheng Lin
# The code is written by Yintai Zhang, School of Physical Sciences, USTC
# Last updated: April 29th, 2019
from PyQt5 import QtWidgets, QtCore, QtGui
# import pylint
from Ui_SubExperiment import Ui_SubExperiment_Dialog
import sys
import os
import Functions
import DataType
import time
##-------------------------------------------------------------------------------
class SubExperiment(QtWidgets.QWidget, Ui_SubExperiment_Dialog):
def __init__(self, exp_name):
##Configure window
self.SubExperiment_Dialog = QtWidgets.QDialog()
super(SubExperiment, self).__init__()
self.setupUi(self.SubExperiment_Dialog)
self._translate = QtCore.QCoreApplication.translate
        ##Initialize Parameters
self.FVar_num = 0
self.TVar_num = 0
self.AmpVar_num = 0
self.PhVar_num = 0
self.OVar_num = 0
self.exp_name = exp_name
#self.channels = 16 ## this number is for test
self.FScan = 0
self.TScan = 0
self.AmpScan = 0
self.PhScan = 0
self.OScan = 0
self.FScan_step = 0
self.TScan_step = 0
self.AmpScan_step = 0
self.PhScan_step = 0
self.OScan_step = 0
self.name = ''
self.exp_dir = ''
self.script_dir = ''
self.winconfig_dir = ''
self.FVar_list = []
self.TVar_list = []
self.AmpVar_list = []
self.PhVar_list = []
self.OVar_list = []
        ##Initialize Widgets
self.FVar_scan_CheckBox.setDisabled(True)
self.TVar_scan_CheckBox.setDisabled(True)
self.AmpVar_scan_CheckBox.setDisabled(True)
self.PhVar_scan_CheckBox.setDisabled(True)
self.OVar_scan_CheckBox.setDisabled(True)
self.FVar_step_SpinBox.setDisabled(True)
self.TVar_step_SpinBox.setDisabled(True)
self.AmpVar_step_SpinBox.setDisabled(True)
self.PhVar_step_SpinBox.setDisabled(True)
self.OVar_step_SpinBox.setDisabled(True)
self.FVar_lb_SpinBox.setDisabled(True)
self.FVar_ub_SpinBox.setDisabled(True)
self.FVar_var_SpinBox.setDisabled(True)
self.OVar_lb_SpinBox.setDisabled(True)
self.OVar_ub_SpinBox.setDisabled(True)
self.OVar_var_SpinBox.setDisabled(True)
self.TVar_lb_SpinBox.setDisabled(True)
self.TVar_ub_SpinBox.setDisabled(True)
self.TVar_var_SpinBox.setDisabled(True)
self.AmpVar_lb_SpinBox.setDisabled(True)
self.AmpVar_ub_SpinBox.setDisabled(True)
self.AmpVar_var_SpinBox.setDisabled(True)
self.PhVar_lb_SpinBox.setDisabled(True)
self.PhVar_ub_SpinBox.setDisabled(True)
self.PhVar_var_SpinBox.setDisabled(True)
#self.FVarChannel_ComboBox.setDisabled(True)
#self.AmpVarChannel_ComboBox.setDisabled(True)
#self.TVarChannel_ComboBox.setDisabled(True)
#self.PhVarChannel_ComboBox.setDisabled(True)
self.ScriptSave_Button.setDisabled(True)
self.ScriptDirectoryBrowse_Button.setDisabled(True)
self.SetDir_Button.setDisabled(True)
self.FVar_times_SpinBox.setDisabled(True)
self.TVar_times_SpinBox.setDisabled(True)
self.PhVar_times_SpinBox.setDisabled(True)
self.AmpVar_times_SpinBox.setDisabled(True)
self.OVar_times_SpinBox.setDisabled(True)
self.ExpScriptRun_Button.setDisabled(True)
self.ExpScriptView_Button.setDisabled(True)
self.TitleConfirm_Button.setDisabled(True)
self.WinConfigView_Button.setDisabled(True)
self.ParaScriptView_Button.setDisabled(True)
self.f1shortcut = QtWidgets.QShortcut(QtGui.QKeySequence(QtCore.Qt.Key_F1), self.FVar_Label)
self.f1shortcut.activated.connect(self.bilibili)
self.f5shortcut = QtWidgets.QShortcut(QtGui.QKeySequence(QtCore.Qt.Key_F5), self.FVar_Label)
self.f5shortcut.activated.connect(self.ExpScriptRun)
self.f2shortcut = QtWidgets.QShortcut(QtGui.QKeySequence(QtCore.Qt.Key_F2), self.FVar_Label)
self.f2shortcut.activated.connect(self.arxiv)
##
self.SubExperiment_Dialog.setWindowTitle("Experiment Name: " + self.exp_name + "[*]")
##Connect Widgets
self.ConfigFileBrowse_Button.clicked.connect(self.ConfigFileBrowse)
self.ConfigFileConfirm_Button.clicked.connect(self.ConfigFileConfirm)
self.ScriptDirectoryBrowse_Button.clicked.connect(self.ScriptDirectoryBrowse)
self.SetDir_Button.clicked.connect(self.SetDir)
self.ScriptSave_Button.clicked.connect(self.ScriptSave)
self.FVar_ComboBox.currentIndexChanged.connect(self.FVarIndexChanged)
self.TVar_ComboBox.currentIndexChanged.connect(self.TVarIndexChanged)
self.AmpVar_ComboBox.currentIndexChanged.connect(self.AmpVarIndexChanged)
self.PhVar_ComboBox.currentIndexChanged.connect(self.PhVarIndexChanged)
self.OVar_ComboBox.currentIndexChanged.connect(self.OVarIndexChanged)
self.FVar_lb_SpinBox.valueChanged.connect(self.FVar_lbChanged)
self.FVar_ub_SpinBox.valueChanged.connect(self.FVar_ubChanged)
self.FVar_var_SpinBox.valueChanged.connect(self.FVar_varChanged)
self.FVar_step_SpinBox.valueChanged.connect(self.FVar_stepChanged)
self.FVar_scan_CheckBox.stateChanged.connect(self.FVar_scanChanged)
self.OVar_lb_SpinBox.valueChanged.connect(self.OVar_lbChanged)
self.OVar_ub_SpinBox.valueChanged.connect(self.OVar_ubChanged)
self.OVar_var_SpinBox.valueChanged.connect(self.OVar_varChanged)
self.OVar_step_SpinBox.valueChanged.connect(self.OVar_stepChanged)
self.OVar_scan_CheckBox.stateChanged.connect(self.OVar_scanChanged)
self.TVar_lb_SpinBox.valueChanged.connect(self.TVar_lbChanged)
self.TVar_ub_SpinBox.valueChanged.connect(self.TVar_ubChanged)
self.TVar_var_SpinBox.valueChanged.connect(self.TVar_varChanged)
self.TVar_step_SpinBox.valueChanged.connect(self.TVar_stepChanged)
self.TVar_scan_CheckBox.stateChanged.connect(self.TVar_scanChanged)
self.AmpVar_lb_SpinBox.valueChanged.connect(self.AmpVar_lbChanged)
self.AmpVar_ub_SpinBox.valueChanged.connect(self.AmpVar_ubChanged)
self.AmpVar_var_SpinBox.valueChanged.connect(self.AmpVar_varChanged)
self.AmpVar_step_SpinBox.valueChanged.connect(self.AmpVar_stepChanged)
self.AmpVar_scan_CheckBox.stateChanged.connect(self.AmpVar_scanChanged)
self.PhVar_lb_SpinBox.valueChanged.connect(self.PhVar_lbChanged)
self.PhVar_ub_SpinBox.valueChanged.connect(self.PhVar_ubChanged)
self.PhVar_var_SpinBox.valueChanged.connect(self.PhVar_varChanged)
self.PhVar_step_SpinBox.valueChanged.connect(self.PhVar_stepChanged)
self.PhVar_scan_CheckBox.stateChanged.connect(self.PhVar_scanChanged)
#self.FVarChannel_ComboBox.currentIndexChanged.connect(self.FVarChannel_Change)
#self.TVarChannel_ComboBox.currentIndexChanged.connect(self.TVarChannel_Change)
#self.AmpVarChannel_ComboBox.currentIndexChanged.connect(self.AmpVarChannel_Change)
#self.PhVarChannel_ComboBox.currentIndexChanged.connect(self.PhVarChannel_Change)
self.FVar_times_SpinBox.valueChanged.connect(self.FVar_timesChanged)
self.TVar_times_SpinBox.valueChanged.connect(self.TVar_timesChanged)
self.AmpVar_times_SpinBox.valueChanged.connect(self.AmpVar_timesChanged)
self.PhVar_times_SpinBox.valueChanged.connect(self.PhVar_timesChanged)
self.OVar_times_SpinBox.valueChanged.connect(self.OVar_timesChanged)
self.ExpDirBrowse_Button.clicked.connect(self.ExpDirBrowse)
self.ExpDirSet_Button.clicked.connect(self.ExpDirSet)
self.ExpScriptView_Button.clicked.connect(self.ExpScriptView)
self.ExpScriptRun_Button.clicked.connect(self.ExpScriptRun)
self.WinConfigView_Button.clicked.connect(self.WinConfigView)
self.TitleConfirm_Button.clicked.connect(self.TitleConfirm)
self.Help_Button.clicked.connect(self.bilibili)
self.ParaScriptView_Button.clicked.connect(self.ParaScriptView)
def arxiv(self):
os.system("explorer https://arxiv.org/")
def bilibili(self):
os.system("explorer https://www.bilibili.com/")
def TitleConfirm(self):
try:
text = self.Title_LEdit.text()
if text != '':
text = Functions.RemoveSpace(text)
self.Title_LEdit.setText(text)
self.name = text
else:
                self.Title_LEdit.setText(self.name)
except:
pass
def test(self):
print("test passed!")
def ConfigFileBrowse(self):
try:
path = QtWidgets.QFileDialog.getOpenFileName(self, "Browse Configuration File", "explorer", "(*.zyt)")
self.ConfigFile_LEdit.setText(path[0])
if os.path.exists(path[0]):
self.WinConfigView_Button.setEnabled(True)
self.winconfig_dir = path[0]
else:
self.WinConfigView_Button.setDisabled(True)
except:
self.ConfigFile_LEdit.clear()
def ExpDirBrowse(self):
try:
path = QtWidgets.QFileDialog.getOpenFileName(self, "Browse Experiment Script Directory", "explorer", "(*.py)")
print(path)
self.ExpDir_LineEdit.setText(path[0])
except:
self.ExpDir_LineEdit.clear()
def ScriptDirectoryBrowse(self):
try:
path = QtWidgets.QFileDialog.getExistingDirectory(self, "Browse Parameters Script Directory", "explorer")
print(path)
self.ScriptDirectory_LineEdit.setText(path)
self.script_dir = path
self.ParaScriptView_Button.setEnabled(True)
except:
self.ScriptDirectory_LineEdit.clear()
self.ParaScriptView_Button.setDisabled(True)
def ParaScriptView(self):
try:
print(self.script_dir + "/" + self.name + "_para.py")
if (os.path.exists(self.script_dir + "/" + self.name + "_para.py")):
print("exists!")
os.system("notepad " + self.script_dir + "/" + self.name + "_para.py")
except:
pass
def SetDir(self):
directory = self.ScriptDirectory_LineEdit.text()
if os.path.exists(directory):
self.ScriptSave_Button.setEnabled(True)
else:
self.ScriptSave_Button.setDisabled(True)
def ConfigFileConfirm(self):##Read Configuration File
if not os.path.exists(self.ConfigFile_LEdit.text()):
self.FVar_step_SpinBox.setDisabled(True)
self.OVar_step_SpinBox.setDisabled(True)
self.TVar_step_SpinBox.setDisabled(True)
self.AmpVar_step_SpinBox.setDisabled(True)
self.PhVar_step_SpinBox.setDisabled(True)
self.FVar_lb_SpinBox.setDisabled(True)
self.FVar_ub_SpinBox.setDisabled(True)
self.FVar_var_SpinBox.setDisabled(True)
self.OVar_lb_SpinBox.setDisabled(True)
self.OVar_ub_SpinBox.setDisabled(True)
self.OVar_var_SpinBox.setDisabled(True)
self.TVar_lb_SpinBox.setDisabled(True)
self.TVar_ub_SpinBox.setDisabled(True)
self.TVar_var_SpinBox.setDisabled(True)
self.AmpVar_lb_SpinBox.setDisabled(True)
self.AmpVar_ub_SpinBox.setDisabled(True)
self.AmpVar_var_SpinBox.setDisabled(True)
self.PhVar_lb_SpinBox.setDisabled(True)
self.PhVar_ub_SpinBox.setDisabled(True)
self.PhVar_var_SpinBox.setDisabled(True)
#self.FVarChannel_ComboBox.setDisabled(True)
#self.TVarChannel_ComboBox.setDisabled(True)
#self.AmpVarChannel_ComboBox.setDisabled(True)
#self.PhVarChannel_ComboBox.setDisabled(True)
self.FVar_times_SpinBox.setDisabled(True)
self.OVar_times_SpinBox.setDisabled(True)
self.TVar_times_SpinBox.setDisabled(True)
self.AmpVar_times_SpinBox.setDisabled(True)
self.PhVar_times_SpinBox.setDisabled(True)
self.TitleConfirm_Button.setDisabled(True)
self.ScriptDirectoryBrowse_Button.setDisabled(True)
self.SetDir_Button.setDisabled(True)
try:
inputfilename = self.ConfigFile_LEdit.text()
inputfile = open(inputfilename, 'r+')
text = inputfile.readlines()
flag = 0
flag_another = 0
self.FVar_list.clear()
self.OVar_list.clear()
self.TVar_list.clear()
self.AmpVar_list.clear()
self.PhVar_list.clear()
for line in text:
if flag == 0:
self.name = line.replace("\n", "")
self.Title_LEdit.setText(self.name)
flag = flag + 1
else:
try:
num = int(line)
if flag_another == 0:
self.FVar_num = num
elif flag_another == 1:
self.TVar_num = num
elif flag_another == 2:
self.AmpVar_num = num
elif flag_another == 3:
self.PhVar_num = num
elif flag_another == 4:
self.OVar_num = num
else:
pass
flag_another = flag_another + 1
except:
if True:
s_list = Functions.StringSeparate(line)
name = s_list[0]
lb = float(s_list[1])
ub = float(s_list[2])
var = float(s_list[3])
llb = float(s_list[4])
uub = float(s_list[5])
##print(flag_another)
if flag_another == 1:
self.FVar_list.append(DataType.FVar(name, lb, ub, var, llb, uub))
elif flag_another == 2:
self.TVar_list.append(DataType.TVar(name, lb, ub, var, llb, uub))
elif flag_another == 3:
self.AmpVar_list.append(DataType.AmpVar(name, lb, ub, var, llb, uub))
elif flag_another == 4:
self.PhVar_list.append(DataType.PhVar(name, lb, ub, var, llb, uub))
elif flag_another == 5:
self.OVar_list.append(DataType.OVar(name, lb, ub, var, llb, uub))
print("Input Finished!")
self.ScriptDirectoryBrowse_Button.setEnabled(True)
self.SetDir_Button.setEnabled(True)
self.VarCombo_Init()
##break
inputfile.close()
except:
pass
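    # Illustrative sketch of the .zyt configuration layout implied by the
    # parser in ConfigFileConfirm above. This is an assumption reconstructed
    # from the code, not a documented format; the field separator is whatever
    # Functions.StringSeparate expects, and all names/values are hypothetical:
    #
    #   MyExperiment        <- first line: experiment title
    #   2                   <- number of frequency variables (FVar)
    #   freq1 1.0 5.0 2.0 0.0 10.0     <- name, lb, ub, var, llb, uub
    #   freq2 1.0 5.0 2.0 0.0 10.0
    #   1                   <- number of time variables (TVar)
    #   t1 0.0 1.0 0.5 0.0 2.0
    #   ...                 <- then AmpVar, PhVar and OVar blocks follow in
    #                          the same count-then-variable-lines pattern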
def VarCombo_Init(self):
#self.FVarChannel_ComboBox.clear()
#self.TVarChannel_ComboBox.clear()
#self.AmpVarChannel_ComboBox.clear()
#self.PhVarChannel_ComboBox.clear()
self.FVar_ComboBox.clear()
self.TVar_ComboBox.clear()
self.AmpVar_ComboBox.clear()
self.PhVar_ComboBox.clear()
self.OVar_ComboBox.clear()
#for i in range(0, self.channels):
#self.FVarChannel_ComboBox.addItem(str(i))
#self.TVarChannel_ComboBox.addItem(str(i))
#self.AmpVarChannel_ComboBox.addItem(str(i))
#self.PhVarChannel_ComboBox.addItem(str(i))
##pass
##Add items to each combobox
##The index of each combobox starts from 0
for fvar in self.FVar_list:
var = fvar.var
ub = fvar.ub
lb = fvar.lb
step = fvar.step
scan = fvar.scan
self.FVar_ComboBox.addItem(fvar.name)
fvar.var = var
fvar.ub = ub
fvar.lb = lb
fvar.step = step
fvar.scan = scan
##print(self.FVar_list[0].ub, self.FVar_list[0].lb)
for tvar in self.TVar_list:
var = tvar.var
ub = tvar.ub
lb = tvar.lb
step = tvar.step
scan = tvar.scan
self.TVar_ComboBox.addItem(tvar.name)
tvar.var = var
tvar.ub = ub
tvar.lb = lb
tvar.step = step
tvar.scan = scan
for ampvar in self.AmpVar_list:
var = ampvar.var
ub = ampvar.ub
lb = ampvar.lb
step = ampvar.step
scan = ampvar.scan
self.AmpVar_ComboBox.addItem(ampvar.name)
ampvar.var = var
ampvar.ub = ub
ampvar.lb = lb
ampvar.step = step
ampvar.scan = scan
for phvar in self.PhVar_list:
var = phvar.var
ub = phvar.ub
lb = phvar.lb
step = phvar.step
scan = phvar.scan
self.PhVar_ComboBox.addItem(phvar.name)
phvar.var = var
phvar.ub = ub
phvar.lb = lb
phvar.step = step
phvar.scan = scan
for ovar in self.OVar_list:
var = ovar.var
ub = ovar.ub
lb = ovar.lb
step = ovar.step
scan = ovar.scan
self.OVar_ComboBox.addItem(ovar.name)
ovar.var = var
ovar.ub = ub
ovar.lb = lb
ovar.step = step
ovar.scan = scan
##Initiate the rest part
self.FVar_step_SpinBox.setEnabled(True)
self.TVar_step_SpinBox.setEnabled(True)
self.AmpVar_step_SpinBox.setEnabled(True)
self.PhVar_step_SpinBox.setEnabled(True)
self.OVar_step_SpinBox.setEnabled(True)
self.FVar_lb_SpinBox.setEnabled(True)
self.FVar_ub_SpinBox.setEnabled(True)
self.FVar_var_SpinBox.setEnabled(True)
self.TVar_lb_SpinBox.setEnabled(True)
self.TVar_ub_SpinBox.setEnabled(True)
self.TVar_var_SpinBox.setEnabled(True)
self.AmpVar_lb_SpinBox.setEnabled(True)
self.AmpVar_ub_SpinBox.setEnabled(True)
self.AmpVar_var_SpinBox.setEnabled(True)
self.PhVar_lb_SpinBox.setEnabled(True)
self.PhVar_ub_SpinBox.setEnabled(True)
self.PhVar_var_SpinBox.setEnabled(True)
self.OVar_lb_SpinBox.setEnabled(True)
self.OVar_ub_SpinBox.setEnabled(True)
self.OVar_var_SpinBox.setEnabled(True)
#self.FVarChannel_ComboBox.setEnabled(True)
#self.TVarChannel_ComboBox.setEnabled(True)
#self.AmpVarChannel_ComboBox.setEnabled(True)
#self.PhVarChannel_ComboBox.setEnabled(True)
self.FVar_times_SpinBox.setEnabled(True)
self.TVar_times_SpinBox.setEnabled(True)
self.AmpVar_times_SpinBox.setEnabled(True)
self.PhVar_times_SpinBox.setEnabled(True)
self.TitleConfirm_Button.setEnabled(True)
self.OVar_times_SpinBox.setEnabled(True)
try:
self.FVarIndexChanged(0)
except:
pass
try:
self.TVarIndexChanged(0)
except:
pass
try:
self.AmpVarIndexChanged(0)
except:
pass
try:
self.PhVarIndexChanged(0)
except:
pass
try:
self.OVarIndexChanged(0)
except:
pass
def FVarIndexChanged(self, i):
self.FVar_lb_SpinBox.setMinimum(self.FVar_list[i].llb)
self.FVar_lb_SpinBox.setMaximum(self.FVar_list[i].uub)
self.FVar_ub_SpinBox.setMinimum(self.FVar_list[i].llb)
self.FVar_ub_SpinBox.setMaximum(self.FVar_list[i].uub)
self.FVar_var_SpinBox.setMaximum(self.FVar_list[i].uub)
self.FVar_var_SpinBox.setMinimum(self.FVar_list[i].llb)
##print(self.FVar_list[i].lb, self.FVar_list[i].ub, self.FVar_list[i].var)
self.FVar_lb_SpinBox.setValue(self.FVar_list[i].lb)
self.FVar_ub_SpinBox.setValue(self.FVar_list[i].ub)
self.FVar_var_SpinBox.setValue(self.FVar_list[i].var)
self.FVar_times_SpinBox.setValue(self.FVar_list[i].times)
self.FVar_lb_SpinBox.setMinimum(self.FVar_list[i].llb)
self.FVar_lb_SpinBox.setMaximum(self.FVar_list[i].ub)
self.FVar_ub_SpinBox.setMinimum(self.FVar_list[i].lb)
self.FVar_ub_SpinBox.setMaximum(self.FVar_list[i].uub)
self.FVar_var_SpinBox.setMaximum(self.FVar_list[i].ub)
self.FVar_var_SpinBox.setMinimum(self.FVar_list[i].lb)
self.FVar_step_SpinBox.setValue(self.FVar_list[i].step)
self.FVar_step_SpinBox.setMaximum(self.FVar_list[i].uub - self.FVar_list[i].llb)
self.FVar_step_SpinBox.setMinimum(-(self.FVar_list[i].uub - self.FVar_list[i].llb))
if self.FVar_list[i].step == 0:
self.FVar_scan_CheckBox.setDisabled(True)
else:
self.FVar_scan_CheckBox.setEnabled(True)
self.FVar_scan_CheckBox.setCheckState(self.FVar_list[i].scan)
#self.FVarChannel_ComboBox.setCurrentIndex(self.FVar_list[i].channel)
def TVarIndexChanged(self, i):
self.TVar_lb_SpinBox.setMinimum(self.TVar_list[i].llb)
self.TVar_lb_SpinBox.setMaximum(self.TVar_list[i].uub)
self.TVar_var_SpinBox.setMaximum(self.TVar_list[i].uub)
self.TVar_var_SpinBox.setMinimum(self.TVar_list[i].llb)
self.TVar_ub_SpinBox.setMinimum(self.TVar_list[i].llb)
self.TVar_ub_SpinBox.setMaximum(self.TVar_list[i].uub)
self.TVar_lb_SpinBox.setValue(self.TVar_list[i].lb)
self.TVar_ub_SpinBox.setValue(self.TVar_list[i].ub)
self.TVar_var_SpinBox.setValue(self.TVar_list[i].var)
self.TVar_times_SpinBox.setValue(self.TVar_list[i].times)
self.TVar_lb_SpinBox.setMinimum(self.TVar_list[i].llb)
self.TVar_lb_SpinBox.setMaximum(self.TVar_list[i].ub)
self.TVar_var_SpinBox.setMaximum(self.TVar_list[i].ub)
self.TVar_var_SpinBox.setMinimum(self.TVar_list[i].lb)
self.TVar_ub_SpinBox.setMinimum(self.TVar_list[i].lb)
self.TVar_ub_SpinBox.setMaximum(self.TVar_list[i].uub)
self.TVar_step_SpinBox.setValue(self.TVar_list[i].step)
self.TVar_step_SpinBox.setMaximum(self.TVar_list[i].uub - self.TVar_list[i].llb)
self.TVar_step_SpinBox.setMinimum(-(self.TVar_list[i].uub - self.TVar_list[i].llb))
if self.TVar_list[i].step == 0:
self.TVar_scan_CheckBox.setDisabled(True)
else:
self.TVar_scan_CheckBox.setEnabled(True)
self.TVar_scan_CheckBox.setCheckState(self.TVar_list[i].scan)
# self.TVarChannel_ComboBox.setCurrentIndex(self.TVar_list[i].channel)
def AmpVarIndexChanged(self, i):
self.AmpVar_lb_SpinBox.setMinimum(self.AmpVar_list[i].llb)
self.AmpVar_lb_SpinBox.setMaximum(self.AmpVar_list[i].uub)
self.AmpVar_var_SpinBox.setMaximum(self.AmpVar_list[i].uub)
self.AmpVar_var_SpinBox.setMinimum(self.AmpVar_list[i].llb)
self.AmpVar_ub_SpinBox.setMinimum(self.AmpVar_list[i].llb)
self.AmpVar_ub_SpinBox.setMaximum(self.AmpVar_list[i].uub)
self.AmpVar_lb_SpinBox.setValue(self.AmpVar_list[i].lb)
self.AmpVar_ub_SpinBox.setValue(self.AmpVar_list[i].ub)
self.AmpVar_var_SpinBox.setValue(self.AmpVar_list[i].var)
self.AmpVar_times_SpinBox.setValue(self.AmpVar_list[i].times)
self.AmpVar_lb_SpinBox.setMinimum(self.AmpVar_list[i].llb)
self.AmpVar_lb_SpinBox.setMaximum(self.AmpVar_list[i].ub)
self.AmpVar_var_SpinBox.setMaximum(self.AmpVar_list[i].ub)
self.AmpVar_var_SpinBox.setMinimum(self.AmpVar_list[i].lb)
self.AmpVar_ub_SpinBox.setMinimum(self.AmpVar_list[i].lb)
self.AmpVar_ub_SpinBox.setMaximum(self.AmpVar_list[i].uub)
        self.AmpVar_step_SpinBox.setValue(self.AmpVar_list[i].step)
        self.AmpVar_step_SpinBox.setMaximum(self.AmpVar_list[i].uub - self.AmpVar_list[i].llb)
self.AmpVar_step_SpinBox.setMinimum(-(self.AmpVar_list[i].uub - self.AmpVar_list[i].llb))
if self.AmpVar_list[i].step == 0:
self.AmpVar_scan_CheckBox.setDisabled(True)
else:
self.AmpVar_scan_CheckBox.setEnabled(True)
self.AmpVar_scan_CheckBox.setCheckState(self.AmpVar_list[i].scan)
# self.AmpVarChannel_ComboBox.setCurrentIndex(self.AmpVar_list[i].channel)
def PhVarIndexChanged(self, i):
self.PhVar_lb_SpinBox.setMinimum(self.PhVar_list[i].llb)
self.PhVar_lb_SpinBox.setMaximum(self.PhVar_list[i].uub)
self.PhVar_ub_SpinBox.setMinimum(self.PhVar_list[i].llb)
self.PhVar_ub_SpinBox.setMaximum(self.PhVar_list[i].uub)
self.PhVar_var_SpinBox.setMaximum(self.PhVar_list[i].uub)
self.PhVar_var_SpinBox.setMinimum(self.PhVar_list[i].llb)
self.PhVar_lb_SpinBox.setValue(self.PhVar_list[i].lb)
self.PhVar_ub_SpinBox.setValue(self.PhVar_list[i].ub)
self.PhVar_var_SpinBox.setValue(self.PhVar_list[i].var)
self.PhVar_times_SpinBox.setValue(self.PhVar_list[i].times)
self.PhVar_lb_SpinBox.setMinimum(self.PhVar_list[i].llb)
self.PhVar_lb_SpinBox.setMaximum(self.PhVar_list[i].ub)
self.PhVar_ub_SpinBox.setMinimum(self.PhVar_list[i].lb)
self.PhVar_ub_SpinBox.setMaximum(self.PhVar_list[i].uub)
self.PhVar_var_SpinBox.setMaximum(self.PhVar_list[i].ub)
self.PhVar_var_SpinBox.setMinimum(self.PhVar_list[i].lb)
        self.PhVar_step_SpinBox.setValue(self.PhVar_list[i].step)
        self.PhVar_step_SpinBox.setMaximum(self.PhVar_list[i].uub - self.PhVar_list[i].llb)
self.PhVar_step_SpinBox.setMinimum(-(self.PhVar_list[i].uub - self.PhVar_list[i].llb))
if self.PhVar_list[i].step == 0:
self.PhVar_scan_CheckBox.setDisabled(True)
else:
self.PhVar_scan_CheckBox.setEnabled(True)
self.PhVar_scan_CheckBox.setCheckState(self.PhVar_list[i].scan)
# self.PhVarChannel_ComboBox.setCurrentIndex(self.PhVar_list[i].channel)
def OVarIndexChanged(self, i):
self.OVar_lb_SpinBox.setMinimum(self.OVar_list[i].llb)
self.OVar_lb_SpinBox.setMaximum(self.OVar_list[i].uub)
self.OVar_var_SpinBox.setMaximum(self.OVar_list[i].uub)
self.OVar_var_SpinBox.setMinimum(self.OVar_list[i].llb)
self.OVar_ub_SpinBox.setMinimum(self.OVar_list[i].llb)
self.OVar_ub_SpinBox.setMaximum(self.OVar_list[i].uub)
self.OVar_lb_SpinBox.setValue(self.OVar_list[i].lb)
self.OVar_ub_SpinBox.setValue(self.OVar_list[i].ub)
self.OVar_var_SpinBox.setValue(self.OVar_list[i].var)
self.OVar_times_SpinBox.setValue(self.OVar_list[i].times)
self.OVar_lb_SpinBox.setMinimum(self.OVar_list[i].llb)
self.OVar_lb_SpinBox.setMaximum(self.OVar_list[i].ub)
self.OVar_var_SpinBox.setMaximum(self.OVar_list[i].ub)
self.OVar_var_SpinBox.setMinimum(self.OVar_list[i].lb)
self.OVar_ub_SpinBox.setMinimum(self.OVar_list[i].lb)
self.OVar_ub_SpinBox.setMaximum(self.OVar_list[i].uub)
self.OVar_step_SpinBox.setValue(self.OVar_list[i].step)
self.OVar_step_SpinBox.setMaximum(self.OVar_list[i].uub - self.OVar_list[i].llb)
self.OVar_step_SpinBox.setMinimum(-(self.OVar_list[i].uub - self.OVar_list[i].llb))
if self.OVar_list[i].step == 0:
self.OVar_scan_CheckBox.setDisabled(True)
else:
self.OVar_scan_CheckBox.setEnabled(True)
        self.OVar_scan_CheckBox.setCheckState(self.OVar_list[i].scan)
def FVarSelect(self):
index = self.FVar_ComboBox.currentIndex()
print("Current FVar index is", index)
        self.FVarIndexChanged(index)
def TVarSelect(self):
index = self.TVar_ComboBox.currentIndex()
        self.TVarIndexChanged(index)
def AmpVarSelect(self):
index = self.AmpVar_ComboBox.currentIndex()
        self.AmpVarIndexChanged(index)
def PhVarSelect(self):
index = self.PhVar_ComboBox.currentIndex()
        self.PhVarIndexChanged(index)
def OVarSelect(self):
index = self.OVar_ComboBox.currentIndex()
print("Current OVar index is", index)
        self.OVarIndexChanged(index)
def FVar_lbChanged(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_lb(self.FVar_lb_SpinBox.value())
if self.FVar_list[index].var < self.FVar_list[index].lb:
self.FVar_list[index].set_var(self.FVar_list[index].lb)
self.FVar_var_SpinBox.setValue(self.FVar_list[index].lb)
self.FVar_var_SpinBox.setMinimum(self.FVar_list[index].lb)
self.FVar_ub_SpinBox.setMinimum(self.FVar_list[index].lb)
except:
print("FVAR LB CHANGE Warning!")
def FVar_ubChanged(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_ub(self.FVar_ub_SpinBox.value())
if self.FVar_list[index].var > self.FVar_list[index].ub:
self.FVar_list[index].set_var(self.FVar_list[index].ub)
self.FVar_var_SpinBox.setValue(self.FVar_list[index].ub)
self.FVar_var_SpinBox.setMaximum(self.FVar_list[index].ub)
self.FVar_lb_SpinBox.setMaximum(self.FVar_list[index].ub)
except:
print("FVAR UB CHANGE Warning!")
def FVar_varChanged(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_var(self.FVar_var_SpinBox.value())
except:
print("FVAR VAR CHANGE Warning!")
def FVar_timesChanged(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_times(self.FVar_times_SpinBox.value())
except:
print("FVAR TIMES CHANGE Warning!")
def FVar_stepChanged(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_step(self.FVar_step_SpinBox.value())
if self.FVar_list[index].step == 0:
self.FVar_scan_CheckBox.setDisabled(True)
self.FVar_list[index].set_scan(0)
else:
self.FVar_scan_CheckBox.setEnabled(True)
self.FVar_scan_CheckBox.setCheckState(self.FVar_list[index].scan)
except:
print("FVar step Warning!")
def FVar_scanChanged(self):
try:
print("FVar scan changed")
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_scan(self.FVar_scan_CheckBox.checkState())
except:
print("Fvar scan Warning!")
def TVar_lbChanged(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_lb(self.TVar_lb_SpinBox.value())
print(self.TVar_list[index].lb)
if self.TVar_list[index].var < self.TVar_list[index].lb:
self.TVar_list[index].set_var(self.TVar_list[index].lb)
self.TVar_var_SpinBox.setValue(self.TVar_list[index].lb)
self.TVar_var_SpinBox.setMinimum(self.TVar_list[index].lb)
self.TVar_ub_SpinBox.setMinimum(self.TVar_list[index].lb)
except:
pass
def TVar_ubChanged(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_ub(self.TVar_ub_SpinBox.value())
if self.TVar_list[index].var > self.TVar_list[index].ub:
self.TVar_list[index].set_var(self.TVar_list[index].ub)
self.TVar_var_SpinBox.setValue(self.TVar_list[index].ub)
self.TVar_var_SpinBox.setMaximum(self.TVar_list[index].ub)
self.TVar_lb_SpinBox.setMaximum(self.TVar_list[index].ub)
except:
print("TVar ub change Warning!")
def TVar_varChanged(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_var(self.TVar_var_SpinBox.value())
print(self.TVar_var_SpinBox.value(), self.TVar_list[index].var)
except:
print("TVar var change Warning!")
def TVar_timesChanged(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_times(self.TVar_times_SpinBox.value())
except:
print("TVar var change Warning!")
def TVar_stepChanged(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_step(self.TVar_step_SpinBox.value())
if self.TVar_list[index].step == 0:
self.TVar_scan_CheckBox.setDisabled(True)
self.TVar_list[index].set_scan(0)
else:
self.TVar_scan_CheckBox.setEnabled(True)
self.TVar_scan_CheckBox.setCheckState(self.TVar_list[index].scan)
except:
print("TVar step changeWarning!")
def TVar_scanChanged(self):
try:
print("changed")
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_scan(self.TVar_scan_CheckBox.checkState())
except:
print("TVar scan changed Warning!")
def OVar_lbChanged(self):
try:
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_lb(self.OVar_lb_SpinBox.value())
if self.OVar_list[index].var < self.OVar_list[index].lb:
self.OVar_list[index].set_var(self.OVar_list[index].lb)
self.OVar_var_SpinBox.setValue(self.OVar_list[index].lb)
self.OVar_var_SpinBox.setMinimum(self.OVar_list[index].lb)
self.OVar_ub_SpinBox.setMinimum(self.OVar_list[index].lb)
except:
print("OVar LB CHANGE Warning!")
def OVar_ubChanged(self):
try:
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_ub(self.OVar_ub_SpinBox.value())
if self.OVar_list[index].var > self.OVar_list[index].ub:
self.OVar_list[index].set_var(self.OVar_list[index].ub)
self.OVar_var_SpinBox.setValue(self.OVar_list[index].ub)
self.OVar_var_SpinBox.setMaximum(self.OVar_list[index].ub)
self.OVar_lb_SpinBox.setMaximum(self.OVar_list[index].ub)
except:
print("OVar UB CHANGE Warning!")
def OVar_varChanged(self):
try:
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_var(self.OVar_var_SpinBox.value())
except:
print("OVar VAR CHANGE Warning!")
def OVar_timesChanged(self):
try:
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_times(self.OVar_times_SpinBox.value())
except:
print("OVar TIMES CHANGE Warning!")
def OVar_stepChanged(self):
try:
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_step(self.OVar_step_SpinBox.value())
if self.OVar_list[index].step == 0:
self.OVar_scan_CheckBox.setDisabled(True)
self.OVar_list[index].set_scan(0)
else:
self.OVar_scan_CheckBox.setEnabled(True)
self.OVar_scan_CheckBox.setCheckState(self.OVar_list[index].scan)
except:
print("OVar step Warning!")
def OVar_scanChanged(self):
try:
print("OVar scan changed")
index = self.OVar_ComboBox.currentIndex()
self.OVar_list[index].set_scan(self.OVar_scan_CheckBox.checkState())
except:
print("OVar scan Warning!")
def AmpVar_lbChanged(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_lb(self.AmpVar_lb_SpinBox.value())
if self.AmpVar_list[index].var < self.AmpVar_list[index].lb:
self.AmpVar_list[index].set_var(self.AmpVar_list[index].lb)
self.AmpVar_var_SpinBox.setValue(self.AmpVar_list[index].lb)
self.AmpVar_var_SpinBox.setMinimum(self.AmpVar_list[index].lb)
self.AmpVar_ub_SpinBox.setMinimum(self.AmpVar_list[index].lb)
except:
print("AmpVar Warning!")
def AmpVar_ubChanged(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_ub(self.AmpVar_ub_SpinBox.value())
if self.AmpVar_list[index].var > self.AmpVar_list[index].ub:
self.AmpVar_list[index].set_var(self.AmpVar_list[index].ub)
                self.AmpVar_var_SpinBox.setValue(self.AmpVar_list[index].ub)
            self.AmpVar_var_SpinBox.setMaximum(self.AmpVar_list[index].ub)
            self.AmpVar_lb_SpinBox.setMaximum(self.AmpVar_list[index].ub)
except:
print("AmpVar Warning!")
def AmpVar_varChanged(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_var(self.AmpVar_var_SpinBox.value())
except:
print("AmpVar Warning!")
def AmpVar_timesChanged(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_times(self.AmpVar_times_SpinBox.value())
except:
print("AmpVar Warning!")
def AmpVar_stepChanged(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_step(self.AmpVar_step_SpinBox.value())
if self.AmpVar_list[index].step == 0:
self.AmpVar_scan_CheckBox.setDisabled(True)
self.AmpVar_list[index].set_scan(0)
else:
self.AmpVar_scan_CheckBox.setEnabled(True)
self.AmpVar_scan_CheckBox.setCheckState(self.AmpVar_list[index].scan)
except:
print("AmpVar Warning!")
def AmpVar_scanChanged(self):
try:
print("changed")
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_scan(self.AmpVar_scan_CheckBox.checkState())
except:
print("AmpVar Warning!")
def PhVar_lbChanged(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_lb(self.PhVar_lb_SpinBox.value())
self.PhVar_var_SpinBox.setMinimum(self.PhVar_list[index].lb)
self.PhVar_ub_SpinBox.setMinimum(self.PhVar_list[index].lb)
if self.PhVar_list[index].var < self.PhVar_list[index].lb:
self.PhVar_list[index].set_var(self.PhVar_list[index].lb)
self.PhVar_var_SpinBox.setValue(self.PhVar_list[index].lb)
except:
print("PhVar Warning!")
def PhVar_ubChanged(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_ub(self.PhVar_ub_SpinBox.value())
self.PhVar_var_SpinBox.setMaximum(self.PhVar_list[index].ub)
self.PhVar_lb_SpinBox.setMaximum(self.PhVar_list[index].ub)
if self.PhVar_list[index].var > self.PhVar_list[index].ub:
self.PhVar_list[index].set_var(self.PhVar_list[index].ub)
self.PhVar_var_SpinBox.setValue(self.PhVar_list[index].ub)
except:
print("PhVar Warning!")
def PhVar_varChanged(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_var(self.PhVar_var_SpinBox.value())
except:
print("PhVar Warning!")
def PhVar_timesChanged(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_times(self.PhVar_times_SpinBox.value())
except:
print("PhVar Warning!")
def PhVar_stepChanged(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_step(self.PhVar_step_SpinBox.value())
if self.PhVar_list[index].step == 0:
self.PhVar_scan_CheckBox.setDisabled(True)
self.PhVar_list[index].set_scan(0)
else:
self.PhVar_scan_CheckBox.setEnabled(True)
self.PhVar_scan_CheckBox.setCheckState(self.PhVar_list[index].scan)
except:
print("PhVar Warning!")
def PhVar_scanChanged(self):
try:
print("changed")
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_scan(self.PhVar_scan_CheckBox.checkState())
except:
print("PhVar Warning!")
"""
def FVarChannel_Change(self):
try:
index = self.FVar_ComboBox.currentIndex()
self.FVar_list[index].set_channel(self.FVarChannel_ComboBox.currentIndex())
except:
print("FVar Warning!")
def TVarChannel_Change(self):
try:
index = self.TVar_ComboBox.currentIndex()
self.TVar_list[index].set_channel(self.TVarChannel_ComboBox.currentIndex())
except:
print("TVar Channel Change Warning!")
def AmpVarChannel_Change(self):
try:
index = self.AmpVar_ComboBox.currentIndex()
self.AmpVar_list[index].set_channel(self.AmpVarChannel_ComboBox.currentIndex())
except:
print("AmpVar Warning!")
def PhVarChannel_Change(self):
try:
index = self.PhVar_ComboBox.currentIndex()
self.PhVar_list[index].set_channel(self.PhVarChannel_ComboBox.currentIndex())
except:
print("PhVar Warning!")
"""
def Configure_change(self):
print("Configure change Warning!")
def ScriptSave(self):
script_name = self.script_dir + "/" + self.name + "_para.py"
try:
script_file = open(script_name, "w")
print("#This is a the list of all defined variables!", file = script_file)
f_count = 0
t_count = 0
ph_count = 0
amp_count = 0
o_count = 0
for var in self.FVar_list:
if var.name != "None":
print(var.name, " = ", var.var, file = script_file)
print(var.name + "_lb", " = ", var.lb, file = script_file)
print(var.name + "_ub", " = ", var.ub, file = script_file)
#print(var.name + "_channel", " = ", var.channel, file = script_file)
print(var.name+"_times", " = ", var.times, file = script_file)
print(var.name + "_step", " = ", var.step, file = script_file)
print(var.name + "_type", " = \'fvar\'", file = script_file)
print(var.name + "_name = \'" + var.name + '\'', file = script_file)
if var.scan == 0:
print(var.name + "_scan = False", file = script_file)
else:
print(var.name + "_scan = True", file = script_file)
f_count = f_count + 1
print(file = script_file)
print("n_fvar =", f_count, file = script_file)
print(file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name, " = ", var.var, file = script_file)
print(var.name + "_lb", " = ", var.lb, file = script_file)
print(var.name + "_ub", " = ", var.ub, file = script_file)
#print(var.name + "_channel", " = ", var.channel, file = script_file)
print(var.name+"_times", " = ", var.times, file = script_file)
print(var.name + "_step", " = ", var.step, file = script_file)
print(var.name + "_type", " = \'tvar\'", file = script_file)
print(var.name + "_name = \'" + var.name + '\'', file = script_file)
if var.scan == 0:
print(var.name + "_scan = False", file = script_file)
else:
print(var.name + "_scan = True", file = script_file)
t_count = t_count + 1
print(file = script_file)
print("n_tvar =", t_count, file = script_file)
print(file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name, " = ", var.var, file = script_file)
print(var.name + "_lb", " = ", var.lb, file = script_file)
print(var.name + "_ub", " = ", var.ub, file = script_file)
#print(var.name + "_channel", " = ", var.channel, file = script_file)
print(var.name+"_times", " = ", var.times, file = script_file)
print(var.name + "_step", " = ", var.step, file = script_file)
print(var.name + "_type", " = \'ampvar\'", file = script_file)
print(var.name + "_name = \'" + var.name + '\'', file = script_file)
if var.scan == 0:
print(var.name + "_scan = False", file = script_file)
else:
print(var.name + "_scan = True", file = script_file)
amp_count = amp_count + 1
print(file = script_file)
print("n_ampvar =", amp_count, file = script_file)
print(file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name, " = ", var.var, file = script_file)
print(var.name + "_lb", " = ", var.lb, file = script_file)
print(var.name + "_ub", " = ", var.ub, file = script_file)
#print(var.name + "_channel", " = ", var.channel, file = script_file)
print(var.name+"_times", " = ", var.times, file = script_file)
print(var.name + "_step", " = ", var.step, file = script_file)
print(var.name + "_type", " = \'phvar\'", file = script_file)
print(var.name + "_name = \'" + var.name + '\'', file = script_file)
if var.scan == 0:
print(var.name + "_scan = False", file = script_file)
else:
print(var.name + "_scan = True", file = script_file)
ph_count = ph_count + 1
print("n_phvar =", ph_count, file = script_file)
print(file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name, " = ", var.var, file = script_file)
print(var.name + "_lb", " = ", var.lb, file = script_file)
print(var.name + "_ub", " = ", var.ub, file = script_file)
#print(var.name + "_channel", " = ", var.channel, file = script_file)
print(var.name+"_times", " = ", var.times, file = script_file)
print(var.name + "_step", " = ", var.step, file = script_file)
print(var.name + "_type", " = \'ovar\'", file = script_file)
print(var.name + "_name = \'" + var.name + '\'', file = script_file)
if var.scan == 0:
print(var.name + "_scan = False", file = script_file)
else:
print(var.name + "_scan = True", file = script_file)
o_count = o_count + 1
print("n_ovar =", o_count, file = script_file)
print(file = script_file)
print(file = script_file)
print("#___________________________________________", file = script_file)
print ("var_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name, ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name, ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name, ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name, ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name, ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_lb_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_lb", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_lb", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_lb", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_lb", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_lb", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_ub_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_ub", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_ub", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_ub", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_ub", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_ub", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_step_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_step", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_step", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_step", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_step", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_step", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_times_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_times", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_times", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_times", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_times", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_times", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_scan_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_scan", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_scan", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_scan", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_scan", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_scan", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_type_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_type", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_type", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_type", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_type", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_type", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print ("var_name_list = [", end = '', file = script_file)
for var in self.FVar_list:
if var.name != "None":
print(var.name + "_name", ", ", sep = "", end = '', file = script_file)
for var in self.TVar_list:
if var.name != "None":
print(var.name + "_name", ", ", sep = "", end = '', file = script_file)
for var in self.AmpVar_list:
if var.name != "None":
print(var.name + "_name", ", ", sep = "", end = '', file = script_file)
for var in self.PhVar_list:
if var.name != "None":
print(var.name + "_name", ", ", sep = "", end = '', file = script_file)
for var in self.OVar_list:
if var.name != "None":
print(var.name + "_name", ", ", sep = "", end = '', file = script_file)
print("]", file = script_file)
print("#____________________________________________", file = script_file)
print("#END", file = script_file)
script_file.close()
except:
print("SCRIPT SAVE Warning!")
def ExpDirSet(self):
try:
directory = self.ExpDir_LineEdit.text()
print(directory)
if os.path.exists(directory):
self.exp_dir = directory
self.ExpScriptRun_Button.setEnabled(True)
self.ExpScriptView_Button.setEnabled(True)
else:
self.ExpScriptRun_Button.setDisabled(True)
self.ExpScriptView_Button.setDisabled(True)
except:
pass
def ExpScriptView(self):
time.sleep(0.001)
try:
if os.path.exists(self.exp_dir):
os.system("notepad " + self.exp_dir)
except:
pass
def WinConfigView(self):
try:
if os.path.exists(self.winconfig_dir):
os.system("notepad " + self.winconfig_dir)
except:
pass
def ExpScriptRun(self):
print("~")
try:
if os.path.exists(self.exp_dir):
print("python \" "+ self.exp_dir + "\"")
os.system("python \""+ self.exp_dir + "\"")
except:
pass
##--------------------------------------------------------------------------------
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
win = SubExperiment("TEST")
##win.SubExperiment_Dialog.setCentralWidget(win.centralWidget)
win.SubExperiment_Dialog.show()
sys.exit(app.exec_())
| 41.941298 | 123 | 0.556783 | 6,650 | 60,731 | 4.847669 | 0.041654 | 0.03648 | 0.054285 | 0.037131 | 0.730124 | 0.603127 | 0.573627 | 0.542451 | 0.495704 | 0.433663 | 0 | 0.002493 | 0.332993 | 60,731 | 1,447 | 124 | 41.970283 | 0.793325 | 0.042334 | 0 | 0.462255 | 0 | 0 | 0.040159 | 0.0016 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054054 | false | 0.01398 | 0.006524 | 0 | 0.06151 | 0.157502 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64e1becee21ec789ab3fba8c1362708a1fcff647 | 1,509 | py | Python | train_DT.py | caspase-like-homolog-identifier/c14_witcher | e2c481607b85fed749daec0e9b3b29b65d6b448f | [
"MIT"
] | null | null | null | train_DT.py | caspase-like-homolog-identifier/c14_witcher | e2c481607b85fed749daec0e9b3b29b65d6b448f | [
"MIT"
] | null | null | null | train_DT.py | caspase-like-homolog-identifier/c14_witcher | e2c481607b85fed749daec0e9b3b29b65d6b448f | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
from IPython.display import Image
from sklearn import metrics
from six import StringIO
import pandas as pd
import pydotplus
import argparse
import pickle
c14reference = pd.read_csv("c14reference.tsv", delimiter = "\t")
c14reference.shape
c14_ref = c14reference.dropna()
feature_cols = c14_ref.columns[:-1]
c14_ref = c14_ref[['p20', 'linker', 'p10','Classification']]
# +
# labels
y = c14_ref.loc[:,"Classification"].values
# attributes (features)
X = c14_ref.drop(["Classification"], axis = 1).values
# -
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
c14classifier = DecisionTreeClassifier(random_state=0)
c14classifier.fit(X_train, y_train)
y_pred = c14classifier.predict(X_test)
y_pred
y_test
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
dot_data = StringIO()
export_graphviz(c14classifier,
out_file=dot_data,
filled=True,
rounded=True,
special_characters=True,
feature_names = feature_cols,class_names=['MCP','Type_I','Type_II', 'Type_III'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('c14classifier.png')
Image(graph.create_png())
pkl_filename = "c14classifier.pickle"
with open(pkl_filename, 'wb') as file:
pickle.dump(c14classifier, file)
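# A minimal sketch of reloading the pickled classifier for later inference.
# The feature row below is a hypothetical placeholder; real inputs must give
# the three training features ('p20', 'linker', 'p10') in the same order.
with open(pkl_filename, 'rb') as file:
    loaded_clf = pickle.load(file)
print(loaded_clf.predict([[0.5, 0.2, 0.7]]))  # prints the predicted class label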
| 24.33871 | 96 | 0.726972 | 203 | 1,509 | 5.162562 | 0.458128 | 0.034351 | 0.026718 | 0.040076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0347 | 0.159708 | 1,509 | 61 | 97 | 24.737705 | 0.791798 | 0.027833 | 0 | 0 | 0 | 0 | 0.098698 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.27027 | 0 | 0.27027 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64e2f7bd0c5248b10b722215e23712c6705e3215 | 3,205 | py | Python | data/process_data.py | YvesDeutschmann/disaster-response-pipeline-project | 6ce33642a9bc05ed063c5adfab42fd69c076bd40 | [
"MIT"
] | null | null | null | data/process_data.py | YvesDeutschmann/disaster-response-pipeline-project | 6ce33642a9bc05ed063c5adfab42fd69c076bd40 | [
"MIT"
] | null | null | null | data/process_data.py | YvesDeutschmann/disaster-response-pipeline-project | 6ce33642a9bc05ed063c5adfab42fd69c076bd40 | [
"MIT"
] | null | null | null | import sys
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
def load_data(messages_filepath, categories_filepath):
"""
Loads the data.
Args:
messages_filepath: String - csv file containing disaster messages.
categories_filepath: String - csv file containing categories for each disaster message.
Returns:
df: DataFrame containing messages and categories.
"""
messages = pd.read_csv(messages_filepath)
categories = pd.read_csv(categories_filepath)
df = messages.merge(categories)
return df
def clean_data(df):
"""
Clean up the message dataframe.
Args:
df: DataFrame containing messages and categories.
Returns:
df: cleaned DataFrame containing messages and categories.
"""
# create a dataframe of the 36 individual category columns
categories = df.categories.str.split(';', expand=True)
# extract column names for categories from first row
row = categories.iloc[0,:]
colnames = [column[:-2] for column in row.values]
categories.columns = colnames
# Convert category values to 0 or 1
for column in categories:
# set each value to be the last character of the string
categories[column] = categories[column].apply(lambda x: x[-1])
# convert column from string to numeric
categories[column] = pd.to_numeric(categories[column])
# Replace categories column in df with new category columns
df.drop('categories', axis=1, inplace=True)
df = pd.concat([df, categories], axis=1)
# remove duplicates
df.drop_duplicates(inplace=True)
# remove rows with a value of 2 in 'related' column
df.drop(df[df.related==2].index, inplace=True)
return df
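# For illustration: a raw `categories` cell in the source CSV looks like
#   "related-1;request-0;offer-0;..."
# clean_data above splits it on ';', names each column from everything before
# the final two characters ("related", "request", "offer"), and keeps the
# trailing digit as that column's 0/1 value.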
def save_data(df, database_filename):
"""
Store data in database.
Args:
df: cleaned DataFrame containing messages and categories.
database_filename: String - Name of Database the DataFrame is stored in.
"""
engine = create_engine(r'sqlite:///{}'.format(database_filename))
df.to_sql('CleanData', engine, index=False, if_exists='replace')
def main():
if len(sys.argv) == 4:
messages_filepath, categories_filepath, database_filepath = sys.argv[1:]
print('Loading data...\n MESSAGES: {}\n CATEGORIES: {}'
.format(messages_filepath, categories_filepath))
df = load_data(messages_filepath, categories_filepath)
print('Cleaning data...')
df = clean_data(df)
print('Saving data...\n DATABASE: {}'.format(database_filepath))
save_data(df, database_filepath)
print('Cleaned data saved to database!')
else:
print('Please provide the filepaths of the messages and categories '\
'datasets as the first and second argument respectively, as '\
'well as the filepath of the database to save the cleaned data '\
'to as the third argument. \n\nExample: python process_data.py '\
'disaster_messages.csv disaster_categories.csv '\
'DisasterResponse.db')
if __name__ == '__main__':
main() | 32.05 | 95 | 0.659282 | 391 | 3,205 | 5.294118 | 0.319693 | 0.046377 | 0.062802 | 0.0657 | 0.158454 | 0.128502 | 0.047343 | 0 | 0 | 0 | 0 | 0.00541 | 0.250234 | 3,205 | 100 | 96 | 32.05 | 0.856013 | 0.294228 | 0 | 0.044444 | 0 | 0 | 0.226512 | 0.020465 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088889 | false | 0 | 0.088889 | 0 | 0.222222 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64e6d9802ff131b02145f06b9894c085a64f01d6 | 795 | py | Python | streamselect/concept_representations/__init__.py | BenHals/streamselect | ca5e80f3a8a31a38ac52bccfd92528d73f387a6a | [
"BSD-3-Clause"
] | null | null | null | streamselect/concept_representations/__init__.py | BenHals/streamselect | ca5e80f3a8a31a38ac52bccfd92528d73f387a6a | [
"BSD-3-Clause"
] | null | null | null | streamselect/concept_representations/__init__.py | BenHals/streamselect | ca5e80f3a8a31a38ac52bccfd92528d73f387a6a | [
"BSD-3-Clause"
] | null | null | null | """ Base classes for concept representations.
A concept is a joint distribution between x and y.
A concept representation is a finite-sized approximation of this distribution using a given classifier.
Each concept representation should provide a method of construction from a window of observations and
a similarity method for comparing it to another concept representation. Ideally, it should also support
online updates."""
from .base import ConceptRepresentation
from .error_rate_representation import ErrorRateRepresentation
from .meta_feature_distributions import (
DistributionTypes,
GaussianDistribution,
SingleValueDistribution,
)
__all__ = [
"ConceptRepresentation",
"ErrorRateRepresentation",
"DistributionTypes",
"SingleValueDistribution",
"GaussianDistribution",
]
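# A minimal, self-contained sketch of the contract described in the module
# docstring: built from a window of observations, with a similarity method to
# another representation. The class below is purely illustrative and is NOT
# part of this package's API.
class _ErrorRateSketch:
    def __init__(self, window):
        # Finite-sized summary of the window: the mean 0/1 error rate,
        # where `window` holds (y_pred, y_true) pairs.
        errors = [0 if y_pred == y_true else 1 for y_pred, y_true in window]
        self.error_rate = sum(errors) / max(1, len(errors))

    def similarity(self, other):
        # Closer error rates -> higher similarity, in [0, 1].
        return 1.0 - abs(self.error_rate - other.error_rate)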
| 34.565217 | 103 | 0.798742 | 86 | 795 | 7.290698 | 0.604651 | 0.025518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153459 | 795 | 22 | 104 | 36.136364 | 0.931649 | 0.50566 | 0 | 0 | 0 | 0 | 0.26943 | 0.173575 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64ea22c140e09fcc03e94150afada75a7e282353 | 1,099 | py | Python | src/layers/tsm.py | zhaojieting/e3d_lstm | e77d5523ad3a6f062042c095f1d40a29ee054db4 | [
"Apache-2.0"
] | null | null | null | src/layers/tsm.py | zhaojieting/e3d_lstm | e77d5523ad3a6f062042c095f1d40a29ee054db4 | [
"Apache-2.0"
] | null | null | null | src/layers/tsm.py | zhaojieting/e3d_lstm | e77d5523ad3a6f062042c095f1d40a29ee054db4 | [
"Apache-2.0"
] | null | null | null | """Module for constructing TSN layers Cells."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import tensorflow.contrib.layers as layers
def TSM_layer(inputs, output_channels, kernel_shape, padding='same'):
    with tf.variable_scope('generator'):
        # Assumes statically known dims: (time, batch, height, width, channels).
        input_shape = inputs.get_shape().as_list()
        frames = tf.unstack(inputs)
        shifted = []
        for i in range(len(frames)):
            # Placeholder "shift": adding zeros leaves the frame unchanged; a
            # real temporal shift would mix channels across adjacent frames.
            shift_input = tf.zeros_like(frames[i])
            shifted.append(shift_input + frames[i])
        out_puts = tf.stack(shifted)
        # Fold the two leading axes together so a 2D convolution can be applied.
        out_puts = tf.reshape(out_puts, [-1, input_shape[2], input_shape[3], input_shape[4]])
        out_puts = tf.layers.conv2d(out_puts, output_channels, kernel_shape, padding=padding)
        # Restore the original layout; the last axis now has output_channels.
        out_puts = tf.reshape(out_puts, [input_shape[0], input_shape[1], input_shape[2], input_shape[3], output_channels])
return out_puts | 42.269231 | 121 | 0.622384 | 142 | 1,099 | 4.521127 | 0.373239 | 0.17134 | 0.074766 | 0.065421 | 0.165109 | 0.105919 | 0.105919 | 0.105919 | 0.105919 | 0.105919 | 0 | 0.016518 | 0.283894 | 1,099 | 26 | 122 | 42.269231 | 0.799238 | 0.037307 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.3 | 0 | 0.4 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64ecd16af3bba6904bb50e63ae398928437708ec | 1,946 | py | Python | tests/tmp.py | ribeirojose/d6tstack | 4d974cca4dc75ff988269443a6622ca9922127e6 | [
"MIT"
] | 176 | 2018-04-30T15:40:34.000Z | 2022-03-16T09:31:08.000Z | tests/tmp.py | tsering10/d6tstack | 7b6c5851b53bdd221466facfb7aebdc96006bf41 | [
"MIT"
] | 29 | 2018-10-28T15:35:24.000Z | 2022-01-31T03:23:35.000Z | tests/tmp.py | tsering10/d6tstack | 7b6c5851b53bdd221466facfb7aebdc96006bf41 | [
"MIT"
] | 45 | 2018-07-27T04:16:28.000Z | 2022-01-10T18:29:21.000Z | import importlib
import d6tstack.utils
importlib.reload(d6tstack.utils)
import time
import yaml
config = yaml.safe_load(open('tests/.test-cred.yaml'))
cfg_uri_psql = config['rds']
cfg_uri_psql = config['wlo']
import pandas as pd
df = pd.DataFrame({'a':range(10),'b':range(10)})
d6tstack.utils.pd_to_psql(df,cfg_uri_psql,'quick',sep='\t',if_exists='replace')
import sqlalchemy
sqlengine = sqlalchemy.create_engine(cfg_uri_psql)
print(pd.read_sql_table('quick',sqlengine))
import yaml
config = yaml.safe_load(open('.test-cred.yaml'))
cfg_uri_psql = config['wlo']
import pandas as pd
df = pd.DataFrame({'a':range(10),'b':range(10),'name':['name,first name']*10})
import d6tstack.utils
d6tstack.utils.pd_to_psql(df,cfg_uri_psql,'quick',sep='\t',if_exists='replace')
import sqlalchemy
sqlengine = sqlalchemy.create_engine(cfg_uri_psql)
print(pd.read_sql_table('quick',sqlengine))
config = yaml.safe_load(open('tests/.test-cred.yaml'))
cfg_uri_mysql = config['local-mysql']
sqlengine = sqlalchemy.create_engine(cfg_uri_mysql)
importlib.reload(d6tstack.utils)
d6tstack.utils.pd_to_mysql(df,cfg_uri_mysql,'quick',if_exists='replace')
print(pd.read_sql_table('quick',sqlengine))
import sqlalchemy
sqlengine = sqlalchemy.create_engine(cfg_uri_psql)
sqlengine = sqlalchemy.create_engine(cfg_uri_mysql)
sqlengine = sqlalchemy.create_engine(cfg_uri_psql)
print(pd.read_sql_table('benchmark',sqlengine).head())
dft = pd.read_sql_table('benchmark',sqlengine)
dft.shape
# cursor = sqlengine.cursor()
sql = sqlengine.execute("SELECT * FROM benchmark;")
dft2 = pd.DataFrame(sql.fetchall())
dft2.shape
sql.keys()
importlib.reload(d6tstack.utils)
start_time = time.time()
dft2 = d6tstack.utils.pd_from_sqlengine(cfg_uri_psql, "SELECT * FROM benchmark;")
assert dft2.shape==(100000, 23)
print("--- %s seconds ---" % (time.time() - start_time))
start_time = time.time()
dft = pd.read_sql_table('benchmark',sqlengine)
assert dft.shape==(100000, 23)
print("--- %s seconds ---" % (time.time() - start_time))
d6tstack.utils.test()
| 25.605263 | 81 | 0.751285 | 293 | 1,946 | 4.791809 | 0.215017 | 0.055556 | 0.064103 | 0.059829 | 0.696581 | 0.696581 | 0.630342 | 0.491453 | 0.491453 | 0.443732 | 0 | 0.022396 | 0.08222 | 1,946 | 75 | 82 | 25.946667 | 0.763718 | 0.013875 | 0 | 0.632653 | 0 | 0 | 0.138831 | 0.021921 | 0 | 0 | 0 | 0 | 0.040816 | 1 | 0 | false | 0 | 0.265306 | 0 | 0.265306 | 0.122449 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64eea514dc744490d62df2dbfa0ff759f1bd366a | 13,313 | py | Python | pi_weather.py | mitgobla/Pi-Weather | aa5a8a4a543d721ba9c7ebe3a69444512133d4cc | [
"MIT"
] | 1 | 2021-08-22T20:56:37.000Z | 2021-08-22T20:56:37.000Z | pi_weather.py | mitgobla/Pi-Weather | aa5a8a4a543d721ba9c7ebe3a69444512133d4cc | [
"MIT"
] | null | null | null | pi_weather.py | mitgobla/Pi-Weather | aa5a8a4a543d721ba9c7ebe3a69444512133d4cc | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import json
import os
from sys import argv
from time import sleep
from papirus import PapirusComposite
from weather import Weather
DIRECTORY = os.path.dirname(os.path.realpath(__file__))
class PiWeather:
def __init__(self):
self.config = self.load_config()["weather"]
self.unit = self.get_unit()
self.weather = Weather(unit=self.unit)
self.location = self.get_location()
self.lookup = {}
self.compass_dirs = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
"S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]
self.compass_dirs_simple = ["N", "NE", "NE", "NE", "E", "SE", "SE", "SE",
"S", "SW", "SW", "SW", "W", "NW", "NW", "NW"]
@staticmethod
def load_config():
"""Load PiWeather Config
Returns:
            dict -- Dictionary of config options
"""
with open(os.path.join(DIRECTORY, 'config.json')) as config_file:
return json.load(config_file)
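    # Sketch of the expected config.json shape, inferred from the keys read
    # throughout this file (the values shown are illustrative defaults):
    #
    #   {
    #     "weather": {
    #       "unit": "c",
    #       "location": "London",
    #       "wind_direction": "compass",
    #       "stats": {"temperature": true, "humidity": true},
    #       "forecast": {"enabled": true, "sixday": false}
    #     }
    #   }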
def get_unit(self):
"""Read the selected temperature unit from config
Returns:
str -- String of unit in lowercase
"""
if "unit" in self.config:
return self.config["unit"].lower()
return "c"
def get_location(self):
"""Read the location set in the config
Returns:
str -- String of the location
"""
if len(argv) > 1:
return str(argv[1])
if "location" in self.config:
return self.config["location"]
return "London"
def get_wind_direction(self, direction):
"""Converts the direction from degrees to compass
Arguments:
direction {int} -- Direction in degrees
Returns:
str -- Compass/Degrees direction depending on config
"""
ix = int((int(direction) + 11.25)/22.5 - 0.02)
if self.config["wind_direction"] == "compass":
return self.compass_dirs[ix % 16]
elif self.config["wind_direction"] == "simplecompass":
return self.compass_dirs_simple[ix % 16]
return direction
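    # Worked example: direction=90 gives ix = int((90 + 11.25)/22.5 - 0.02)
    # = int(4.48) = 4, and compass_dirs[4 % 16] == "E". The +11.25 offset
    # centres each 22.5-degree sector on its compass point.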
@staticmethod
def convert24(time, meridiem):
"""Convert hour to 24 hour format
Arguments:
time {list} -- Array of Hour, Minute
meridiem {str} -- String of meridiem
Returns:
int -- 24 Hour format of hour
"""
if meridiem == 'am' and time[0] == '12':
return 0
elif meridiem == 'am':
return int(time[0])
elif meridiem == 'pm' and time[0] == '12':
return int(time[0])
return int(time[0])+12
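    # Worked examples: convert24(['12', '00'], 'am') -> 0 (midnight),
    # convert24(['12', '30'], 'pm') -> 12, and convert24(['7', '30'], 'pm')
    # -> 19.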
def get_suntime(self, suntime):
"""Convert sunrise/set to 24 hour
Arguments:
suntime {str} -- String of time in 'HH:MM pm' format
Returns:
str -- Returns HH:MM in 24 format
"""
meridiem = suntime.split(' ')[-1]
suntime = suntime.split(' ')[0].split(':')
sun_hour = self.convert24(suntime, meridiem)
sun_minute = int(suntime[1])
return str(sun_hour)+":"+str(sun_minute)
def get_weather(self):
"""Get weather and populate lookup dictonary
"""
lookup_data = self.weather.lookup_by_location(self.location)
self.lookup = {
"temperature": lookup_data.condition.temp+"°"+lookup_data.units.temperature,
"humidity": lookup_data.atmosphere.humidity+"%",
"wind": {
"speed": lookup_data.wind.speed+lookup_data.units.speed,
"direction": self.get_wind_direction(lookup_data.wind.direction)
},
"pressure": lookup_data.atmosphere.pressure+lookup_data.units.pressure,
"visibility": lookup_data.atmosphere.visibility+lookup_data.units.distance,
"sunrise": self.get_suntime(lookup_data.astronomy.sunrise),
"sunset": self.get_suntime(lookup_data.astronomy.sunset),
"weather_type": lookup_data.condition.text,
"weather_code": lookup_data.condition.code,
"forecast": lookup_data.forecast
}
class PiDisplay(PiWeather):
def __init__(self):
PiWeather.__init__(self)
self.display = PapirusComposite(False)
self.unknown_icon = "3200.png"
self.order = []
self.gotWeather = False
self.initalize_order()
self.initalize_display()
def initalize_order(self):
"""Create the order that information is displayed
"""
for stat in self.config["stats"]:
if self.config["stats"][stat]:
self.order.append(stat)
def initalize_display(self):
"""Add all the screen elements to the e-ink display
"""
if self.config["forecast"]["enabled"]:
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 0, 0, (48, 48), Id="WeatherIcon")
self.display.AddText("Loading...", 48, 0, size=13, Id="LineOne",
fontPath='/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf')
self.display.AddText("Loading...", 48, 20, size=12, Id="LineTwo")
self.display.AddText("Loading...", 48, 34,
size=12, Id="LineThree")
if self.config["forecast"]["sixday"]:
self.display.AddText("...", 3, 49, size=12, Id="ForecastOne")
self.display.AddText("...", 35, 49, size=12, Id="ForecastTwo")
self.display.AddText(
"...", 68, 49, size=12, Id="ForecastThree")
self.display.AddText(
"...", 101, 49, size=12, Id="ForecastFour")
self.display.AddText(
"...", 135, 49, size=12, Id="ForecastFive")
self.display.AddText("...", 167, 49, size=12, Id="ForecastSix")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 1, 63, (32, 32), Id="ForecastIconOne")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 34, 63, (32, 32), Id="ForecastIconTwo")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 67, 63, (32, 32), Id="ForecastIconThree")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 100, 63, (32, 32), Id="ForecastIconFour")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 133, 63, (32, 32), Id="ForecastIconFive")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 166, 63, (32, 32), Id="ForecastIconSix")
else:
self.display.AddText("Today: ...", 25, 51,
size=12, Id="ForecastOne")
self.display.AddText("Tomorrow: ...", 25,
74, size=12, Id="ForecastTwo")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 1, 49, (23, 23), Id="ForecastIconOne")
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 1, 72, (23, 23), Id="ForecastIconTwo")
else:
self.display.AddImg(os.path.join(
DIRECTORY, 'images', 'weather', self.unknown_icon), 1, 15, (80, 80), Id="WeatherIcon")
self.display.AddText("Loading...", 1, 1, size=13, Id="LineOne",
fontPath='/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf')
self.display.AddText("Loading...", 82, 15, size=12, Id="LineTwo")
self.display.AddText("Loading...", 82, 30,
size=12, Id="LineThree")
self.display.WriteAll()
def update(self):
"""Regurlarly update the screen with new information
"""
self.gotWeather = False
while not self.gotWeather:
try:
self.get_weather()
self.gotWeather = True
except Exception:
# the weather lookup failed (e.g. network error); retry in a minute
sleep(60)
if not self.lookup:
print("Invalid Location")
exit()
self.display.UpdateImg("WeatherIcon", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["weather_code"])+'.png'))
self.display.UpdateText("LineOne", self.lookup["weather_type"])
if self.config["forecast"]["enabled"]:
if self.config["forecast"]["sixday"]:
self.display.UpdateText(
"ForecastOne", self.lookup["forecast"][0].day)
self.display.UpdateText(
"ForecastTwo", self.lookup["forecast"][1].day)
self.display.UpdateText(
"ForecastThree", self.lookup["forecast"][2].day)
self.display.UpdateText(
"ForecastFour", self.lookup["forecast"][3].day)
self.display.UpdateText(
"ForecastFive", self.lookup["forecast"][4].day)
self.display.UpdateText(
"ForecastSix", self.lookup["forecast"][5].day)
self.display.UpdateImg("ForecastIconOne", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][0].code)+'.png'))
self.display.UpdateImg("ForecastIconTwo", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][1].code)+'.png'))
self.display.UpdateImg("ForecastIconThree", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][2].code)+'.png'))
self.display.UpdateImg("ForecastIconFour", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][3].code)+'.png'))
self.display.UpdateImg("ForecastIconFive", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][4].code)+'.png'))
self.display.UpdateImg("ForecastIconSix", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][5].code)+'.png'))
else:
self.display.UpdateText(
"ForecastOne", "Today: "+self.lookup["forecast"][0].text)
self.display.UpdateText(
"ForecastTwo", "Tomorrow: "+self.lookup["forecast"][1].text)
self.display.UpdateImg("ForecastIconOne", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][0].code)+'.png'))
self.display.UpdateImg("ForecastIconTwo", os.path.join(
DIRECTORY, 'images', 'weather', str(self.lookup["forecast"][1].code)+'.png'))
for stat in self.order:
if stat == "temperature":
self.display.UpdateText("LineTwo", "Temp: "+self.lookup[stat])
self.display.UpdateText(
"LineThree", "Hi: "+self.lookup["forecast"][0].high+" Lo: "+self.lookup["forecast"][0].low)
elif stat == "humidity":
self.display.UpdateText(
"LineTwo", "Humidity: "+self.lookup[stat])
humidity = int(self.lookup[stat][:-1])
scale = ""
if humidity < 25:
scale = "Very Dry"
elif humidity < 60:
scale = "Dry"
elif humidity < 80:
scale = "Wet"
else:
scale = "Very Wet"
self.display.UpdateText("LineThree", scale)
elif stat == "wind":
self.display.UpdateText(
"LineTwo", "Speed: "+self.lookup[stat]["speed"])
self.display.UpdateText(
"LineThree", "Direction: "+self.lookup[stat]["direction"])
elif stat == "pressure":
self.display.UpdateText("LineTwo", "Pressure")
self.display.UpdateText("LineThree", self.lookup[stat])
elif stat == "visibility":
self.display.UpdateText("LineTwo", "Visibility")
self.display.UpdateText("LineThree", self.lookup[stat])
elif stat == "sunrise":
self.display.UpdateText("LineTwo", "Sunrise")
self.display.UpdateText("LineThree", self.lookup[stat])
elif stat == "sunset":
self.display.UpdateText("LineTwo", "Sunset")
self.display.UpdateText("LineThree", self.lookup[stat])
self.display.WriteAll()
if len(self.order) >= 3:
sleep(20)
elif self.order:
sleep(int(60 / len(self.order)))
else:
sleep(60)  # no stats enabled; avoid a division by zero
# Can only request weather data every 43 seconds (2000 calls a day)
# 20 seconds per slide is safe
PI = PiDisplay()
if __name__ == "__main__":
while True:
PI.update()
| 40.588415 | 113 | 0.534966 | 1,380 | 13,313 | 5.096377 | 0.188406 | 0.092279 | 0.068676 | 0.054031 | 0.349637 | 0.309683 | 0.274563 | 0.247263 | 0.235888 | 0.207308 | 0 | 0.027526 | 0.320514 | 13,313 | 327 | 114 | 40.712538 | 0.749834 | 0.082401 | 0 | 0.264317 | 0 | 0 | 0.154065 | 0.008531 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052863 | false | 0.026432 | 0.026432 | 0 | 0.14978 | 0.004405 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64eedb6c77b1e955f83b08c16eb463ea0891406d | 961 | py | Python | crystallography/cube_symmetry.py | rpw199912j/matsci_animation | cd613853a40cdee73f9cdff7bdf23a02451bb1ef | [
"MIT"
] | null | null | null | crystallography/cube_symmetry.py | rpw199912j/matsci_animation | cd613853a40cdee73f9cdff7bdf23a02451bb1ef | [
"MIT"
] | null | null | null | crystallography/cube_symmetry.py | rpw199912j/matsci_animation | cd613853a40cdee73f9cdff7bdf23a02451bb1ef | [
"MIT"
] | null | null | null | from manim import *
class CubeSymmetry(ThreeDScene):
def construct(self):
# define a 3D axes
axes_3d = ThreeDAxes(
tips=False
)
# define a cube with side length 2 placed at the origin
cube = Cube(stroke_color=YELLOW, stroke_width=3)
# define a line that aligns with one of the cube's edges
line = Line3D(
start=np.array([1, -1, 1]),
end=np.array([1, 1, 1]),
stroke_color=PURPLE
)
self.add(axes_3d)
self.wait()
self.play(
FadeIn(cube)
)
self.wait()
self.move_camera(phi=(90 - 35.26) * DEGREES, theta=-45 * DEGREES)
self.wait()
self.play(
Create(line)
)
self.wait()
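# three 120-degree rotations about the body diagonal (1, -1, 1) map the
# cube onto itself, illustrating one of its four 3-fold symmetry axes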
for _ in range(3):
self.play(
Rotate(VGroup(cube, line), angle=120 * DEGREES, axis=np.array([1, -1, 1]))
)
self.wait()
| 23.439024 | 90 | 0.495317 | 115 | 961 | 4.078261 | 0.556522 | 0.025586 | 0.051173 | 0.057569 | 0.063966 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046233 | 0.3923 | 961 | 40 | 91 | 24.025 | 0.756849 | 0.119667 | 0 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.034483 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64f18694f99e323c1c04d6f1bc14cbfb5fcf7280 | 3,226 | py | Python | src/python/services/rpc_services/rpc_requests.py | rockmind/LoveAndMarriage | 2877d6af626eff2a3134a05ab7f03c52f14fde5c | [
"Apache-2.0"
] | null | null | null | src/python/services/rpc_services/rpc_requests.py | rockmind/LoveAndMarriage | 2877d6af626eff2a3134a05ab7f03c52f14fde5c | [
"Apache-2.0"
] | 1 | 2021-12-18T16:07:39.000Z | 2021-12-18T16:07:39.000Z | src/python/services/rpc_services/rpc_requests.py | rockmind/LoveAndMarriage | 2877d6af626eff2a3134a05ab7f03c52f14fde5c | [
"Apache-2.0"
] | null | null | null | from asyncio import sleep, get_event_loop
from aiohttp import ClientSession
from typing import OrderedDict, Union, List
from numpy.random import randint
from oauthlib.oauth2 import LegacyApplicationClient
from requests_oauthlib import OAuth2Session
from services import json_dumps, json_loads
class RequestRpc:
REFRESH_TOKEN_TIME = 30*60 # in sec
def __init__(self, url: str, username: str, password: str, token_url: str, ):
self.url = url
self.username = username
self.password = password
self.token_url = token_url
self._oauth = OAuth2Session(client=LegacyApplicationClient(client_id=username))
self._token = None
self._token_refresh_task = get_event_loop().create_task(self.refresh_token_loop())
self._session = None
async def refresh_token_loop(self):
while True:
try:
await self.refresh_token()
except Exception:
pass  # token refresh failed; keep the old token and retry on the next cycle
await sleep(self.REFRESH_TOKEN_TIME)
async def refresh_token(self):
self._token = self._oauth.fetch_token(
token_url=self.token_url,
username=self.username,
password=self.password
)
async def rpc_request(self, methods: Union[OrderedDict, List[str], str]):
if not self._token:
await self.refresh_token()
if isinstance(methods, OrderedDict):
body = [{
"jsonrpc": "2.0",
"method": m,
"params": v or dict(),
"id": randint(10000000)
} for m, v in methods.items()]
elif isinstance(methods, List):
body = [{
"jsonrpc": "2.0",
"method": m,
"id": randint(10000000)
} for m in methods]
elif isinstance(methods, str):
body = [{
"jsonrpc": "2.0",
"method": methods,
"id": randint(10000000)
}]
else:
raise TypeError("Unexpected type for 'methods' parameter.")
if not self._session:
self._session = ClientSession(json_serialize=json_dumps)
for i in range(4):
async with self._session.get(self.url+'authentication_check', headers=self._prepare_headers()) as resp:
if resp.status == 200:
results = await resp.json(loads=json_loads)
if results.get('Status') == 'OK':
break
await sleep(5**i)
await self.refresh_token()
continue
async with self._session.post(self.url, headers=self._prepare_headers(), json=body) as resp:
results = await resp.json(loads=json_loads)
for result in results:
if 'error' in result:
raise Exception(result['error'].get('message'))
if isinstance(methods, str):
return results[0]
return results
def _prepare_headers(self):
headers = {
'Authorization': f'{self._token["token_type"]} {self._token["access_token"]}',
'Content-Type': 'application/json'
}
return headers
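# A minimal usage sketch (the endpoint URL, credentials and method name
# below are placeholders, not taken from this codebase):
#
#     async def main():
#         rpc = RequestRpc(url='https://api.example.com/rpc/',
#                          username='user', password='secret',
#                          token_url='https://api.example.com/oauth/token/')
#         result = await rpc.rpc_request('server.get_status')
#
# Note that __init__ schedules the token-refresh task via get_event_loop(),
# so an event loop must already be available when the instance is created.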
| 33.604167 | 115 | 0.567886 | 348 | 3,226 | 5.091954 | 0.316092 | 0.054176 | 0.045147 | 0.035553 | 0.095372 | 0.060948 | 0.038375 | 0 | 0 | 0 | 0 | 0.020075 | 0.33602 | 3,226 | 95 | 116 | 33.957895 | 0.80719 | 0.00186 | 0 | 0.2 | 0 | 0 | 0.07023 | 0.017402 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0.05 | 0.0875 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64f449e7051ecea8b2412b9c5d7d5fca434151bc | 13,150 | py | Python | pyrallis/wrappers/field_wrapper.py | eladrich/pyrallis | 1e0586f9de9ed5d8d67d061dac1fb44c73f9d4a4 | [
"MIT"
] | 22 | 2021-12-30T16:06:09.000Z | 2022-03-09T23:27:30.000Z | pyrallis/wrappers/field_wrapper.py | eladrich/pyrallis | 1e0586f9de9ed5d8d67d061dac1fb44c73f9d4a4 | [
"MIT"
] | 5 | 2022-01-18T14:05:52.000Z | 2022-03-03T17:23:03.000Z | pyrallis/wrappers/field_wrapper.py | eladrich/pyrallis | 1e0586f9de9ed5d8d67d061dac1fb44c73f9d4a4 | [
"MIT"
] | null | null | null | import argparse
import dataclasses
import inspect
from logging import getLogger
from typing import Any, Optional, List, Type, Dict, Set, Union, Tuple
from . import docstring
from .wrapper import Wrapper
from .. import utils
logger = getLogger(__name__)
class FieldWrapper(Wrapper[dataclasses.Field]):
"""
The FieldWrapper class acts a bit like an 'argparse.Action' class, which
essentially just creates the `option_strings` and `arg_options` that get
passed to the `add_argument(*option_strings, **arg_options)` function of the
`argparse._ArgumentGroup` (in this case represented by the `parent`
attribute, an instance of the class `DataclassWrapper`).
The `option_strings`, `required`, `help`, `default`, etc.
attributes directly populate the like-named arguments of the
above-mentioned `add_argument` function. The `arg_options` attribute fills
in the rest and may overwrite these values, depending on the type of field.
The `field` argument is the actual `dataclasses.Field` instance being wrapped.
"""
def __init__(self, field: dataclasses.Field, parent: Any = None, prefix: str = ""):
super().__init__(wrapped=field, name=field.name)
self.field: dataclasses.Field = field
self.prefix: str = prefix
self._parent: Any = parent
# Holders used to 'cache' the properties.
# (could've used cached_property with Python 3.8).
self._option_strings: Optional[Set[str]] = None
self._required: Optional[bool] = None
self._docstring: docstring.AttributeDocString = docstring.AttributeDocString()
self._help: Optional[str] = None
self._default: Optional[Union[Any, List[Any]]] = None
self._dest: Optional[str] = None
# the argparse-related options:
self._arg_options: Dict[str, Any] = {}
self._type: Optional[Type[Any]] = None
# stores the resulting values for each of the destination attributes.
self._results: Dict[str, Any] = {}
@property
def arg_options(self) -> Dict[str, Any]:
"""Dictionary of values to be passed to the `add_argument` method.
The main feature of this package is to infer these arguments
automatically using features of the built-in `dataclasses` package, as
well as Python's type annotations.
By passing additional keyword arguments to the `field()`
function, the autogenerated arguments can be overwritten,
giving access to all of the usual argparse features you know and love.
NOTE: When passing an `action` keyword argument, we remove all the
autogenerated options that aren't required by the Action class
constructor.
For example, when specifying a custom `action` like "store_true" or
"store_false", the `type` argument autogenerated here shouldn't be
passed to the constructor of the `argparse._StoreFalseAction`, so we
discard it.
"""
if self._arg_options:
return self._arg_options
# get the auto-generated options.
options = self.get_arg_options()
# overwrite the auto-generated options with given ones, if any.
options.update(self.custom_arg_options)
# only keep the arguments used by the Action constructor.
action = options.get("action", "store")
self._arg_options = only_keep_action_args(options, action)
return self._arg_options
def get_arg_options(self) -> Dict[str, Any]:
"""Create the `parser.add_arguments` kwargs for this field."""
if not self.field.init:
return {}
# TODO: Refactor this:
# 1. Create a `get_argparse_options_for_field` function
# 2. Use `get_argparse_options_for_annotation` below as part of that function
# 3. Update the dict returned from 1. with values set in the field() function
# 4. Update the dict from 3. with the values set by the DataclassWrapper, or
# when this field is reused. (are they ever modified externally?)
# 5. Return that dictionary.
_arg_options: Dict[str, Any] = {}
_arg_options["required"] = False # Required arguments can also be set from yaml,
# so do not enforce with argparse
_arg_options["dest"] = self.dest
_arg_options["default"] = self.default
if self.help:
_arg_options["help"] = self.help
elif self.default is not None:
# issue 64: Need to add an empty 'help' string, so that the formatter
# automatically adds the (default: '123')
_arg_options["help"] = " "
_arg_options['type'] = self.type
try:
_arg_options['type'].__name__ = self.type.__repr__().replace('typing.', '')
except Exception:
# only to prettify printing; if it fails, just continue
pass
return _arg_options
@property
def action(self) -> Union[str, Type[argparse.Action]]:
"""The `action` argument to be passed to `add_argument(...)`."""
return self.custom_arg_options.get("action", "store")
@property
def action_str(self) -> str:
if isinstance(self.action, str):
return self.action
return self.action.__name__
@property
def custom_arg_options(self) -> Dict[str, Any]:
"""Custom argparse options that overwrite those in `arg_options`.
Can be set by using the `field` function, passing in a keyword argument
that would usually be passed to the parser.add_argument(
*option_strings, **kwargs) method.
"""
return self.field.metadata.get("custom_args", {})
@property
def option_strings(self) -> List[str]:
"""Generates the `option_strings` argument to the `add_argument` call.
`parser.add_argument(*name_or_flags, **arg_options)`
## Notes:
- Additional names for the same argument can be added via the `field`
function.
- Whenever the name of an attribute includes underscores ("_"), the same
argument can be passed by using dashes ("-") instead. This also includes
aliases.
- If an alias contained leading dashes, either single or double, the
same number of dashes will be used, even in the case where a prefix is
added.
For an illustration of this, see the aliases example.
"""
dashes: List[str] = [] # contains the leading dashes.
options: List[str] = [] # contains the name following the dashes.
# Currently create only a single option name, no support for aliases
dashes.append('--')
options.append(self.dest)
# remove duplicates by creating a set.
option_strings = set(f"{dash}{option}" for dash, option in zip(dashes, options))
return list(sorted(option_strings, key=len))
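# For instance, a field whose dest resolves to "learning_rate" would yield
# the single option string "--learning_rate" (illustrative; the exact dest
# depends on the wrapper's prefix handling).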
@property
def dest(self) -> str:
"""Where the attribute will be stored in the Namespace."""
self._dest = super().dest
return self._dest
@property
def nargs(self):
return self.custom_arg_options.get("nargs", None)
@property
def default(self) -> Any:
"""Either a single default value, when parsing a single argument, or
the list of default values, when this argument is reused multiple times
(which only happens with the `ConflictResolution.ALWAYS_MERGE` option).
In order of increasing priority, this could either be:
1. The default attribute of the field
2. the value of the corresponding attribute on the parent,
if it has a default value
"""
if self._default is not None:
return self._default
default: Any = utils.default_value(self.field)
if default is dataclasses.MISSING:
default = None
self._default = default
return self._default
@default.setter
def default(self, value: Any):
self._default = value
@property
def required(self) -> bool:
if self._required is not None:
return self._required
if self.action_str.startswith("store_"):
# all the store_* actions do not require a value.
self._required = False
elif self.is_optional:
self._required = False
elif self.parent.required:
# if the parent dataclass is required, then this attribute is too.
# TODO: does that make sense though?
self._required = True
elif self.nargs in {"?", "*"}:
self._required = False
elif self.nargs == "+":
self._required = True
elif self.default is None:
self._required = True
else:
self._required = False
return self._required
@required.setter
def required(self, value: bool):
self._required = value
@property
def type(self) -> Type[Any]:
"""Returns the wrapped field's type annotation."""
if self._type is None:
self._type = self.field.type
return self._type
def __str__(self):
return f"""<FieldWrapper for field '{self.dest}'>"""
@property
def help(self) -> Optional[str]:
if self._help:
return self._help
try:
self._docstring = docstring.get_attribute_docstring(
self.parent.dataclass, self.field.name
)
except (SystemExit, Exception) as e:
logger.debug(
f"Couldn't find attribute docstring for field {self.name}, {e}"
)
self._docstring = docstring.AttributeDocString()
if self._docstring.docstring_below:
self._help = self._docstring.docstring_below
elif self._docstring.comment_above:
self._help = self._docstring.comment_above
elif self._docstring.comment_inline:
self._help = self._docstring.comment_inline
return self._help
@help.setter
def help(self, value: str):
self._help = value
@property
def name(self) -> str:
return self.field.name
@property
def is_list(self):
return utils.is_list(self.type)
@property
def is_enum(self) -> bool:
return utils.is_enum(self.type)
@property
def is_tuple(self) -> bool:
return utils.is_tuple(self.type)
@property
def is_bool(self) -> bool:
return utils.is_bool(self.type)
@property
def is_optional(self) -> bool:
return utils.is_optional(self.field.type)
@property
def is_union(self) -> bool:
return utils.is_union(self.field.type)
@property
def type_arguments(self) -> Optional[Tuple[Type, ...]]:
return utils.get_type_arguments(self.type)
@property
def parent(self) -> "DataclassWrapper":
return self._parent
def only_keep_action_args(
options: Dict[str, Any], action: Union[str, Any]
) -> Dict[str, Any]:
"""Remove all the arguments in `options` that aren't required by the Action.
Parameters
----------
options : Dict[str, Any]
A dictionary of options that would usually be passed to
`add_arguments(*option_strings, **options)`.
action : Union[str, Any]
The action class or name.
Returns
-------
Dict[str, Any]
[description]
"""
# TODO: explicitly test these custom actions?
argparse_action_classes: Dict[str, Type[argparse.Action]] = {
"store": argparse._StoreAction,
"store_const": argparse._StoreConstAction,
"store_true": argparse._StoreTrueAction,
"store_false": argparse._StoreFalseAction,
"append": argparse._AppendAction,
"append_const": argparse._AppendConstAction,
"count": argparse._CountAction,
"help": argparse._HelpAction,
"version": argparse._VersionAction,
"parsers": argparse._SubParsersAction,
}
if action not in argparse_action_classes:
# the provided `action` is not a standard argparse-action.
# We don't remove any of the provided options.
return options
# Remove all the keys that aren't needed by the action constructor:
action_class = argparse_action_classes[action]
argspec = inspect.getfullargspec(action_class)
if argspec.varargs is not None or argspec.varkw is not None:
# if the constructor takes variable arguments, pass all the options.
logger.debug("Constructor takes var args. returning all options.")
return options
args_to_keep = argspec.args + ["action"]
kept_options, deleted_options = utils.keep_keys(options, args_to_keep)
if deleted_options:
logger.debug(
f"Some auto-generated options were deleted, as they were "
f"not required by the Action constructor: {deleted_options}."
)
if deleted_options:
logger.debug(f"Kept options: \t{kept_options.keys()}")
logger.debug(f"Removed options: \t{deleted_options.keys()}")
return kept_options
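# Illustrative example, assuming argparse's private action classes keep
# their current constructor signatures:
#
#     only_keep_action_args(
#         {"dest": "verbose", "type": bool, "action": "store_true"},
#         "store_true")
#     # -> {"dest": "verbose", "action": "store_true"}
#
# The "type" entry is dropped because argparse._StoreTrueAction does not
# accept a `type` argument.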
| 35.831063 | 89 | 0.63924 | 1,620 | 13,150 | 5.048765 | 0.201852 | 0.031789 | 0.012226 | 0.011615 | 0.125076 | 0.037657 | 0.008559 | 0.008559 | 0 | 0 | 0 | 0.001668 | 0.270646 | 13,150 | 366 | 90 | 35.928962 | 0.85111 | 0.363422 | 0 | 0.215 | 0 | 0 | 0.069293 | 0.006207 | 0 | 0 | 0 | 0.008197 | 0 | 1 | 0.135 | false | 0.005 | 0.04 | 0.055 | 0.335 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64f496f7a964f4a0c2008a4e6ef5318ca9d938ee | 7,183 | py | Python | up/settings/base.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 4 | 2019-07-19T17:39:04.000Z | 2022-03-22T21:02:15.000Z | up/settings/base.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 53 | 2019-08-04T14:25:40.000Z | 2022-03-26T20:30:55.000Z | up/settings/base.py | rodlukas/UP-admin | 08f36de0773f39c6222da82016bf1384af2cce18 | [
"MIT"
] | 3 | 2020-03-09T07:11:03.000Z | 2020-09-11T01:22:50.000Z | """
Základní konfigurace Django projektu.
Je základem pro konfigurace v souborech local.py a production.py.
"""
import os
import sys
from datetime import timedelta
import environ
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# environment variables
env = environ.Env(
# type declarations and optional default values
BANK_ACTIVE=(bool, True),  # enable the bank integration
BANK_RENT_PRICE=(int, 0),  # rent amount (in CZK)
DATABASE_URL=str,  # URL of the database in use (e.g. postgresql://postgres:postgres@localhost:5432/up)
DEBUG=(bool, False),  # enable the debug environment
ENVIRONMENT=str,  # name of the environment the app currently runs in (for Sentry)
FIO_API_KEY=(str, ""),  # token for accessing the Fio bank API
HEADLESS=(bool, True),  # headless-mode flag for the UI tests
HEROKU=(bool, False),  # flag marking a deployment on Heroku
MANUAL_PRODUCTION=(bool, False),  # set to True to simulate the production version
SECRET_KEY=str,  # secret key for Django
SENTRY_DSN=str,  # DSN key for Sentry
TESTS_RUNNING=(bool, False),  # flag indicating that tests are running
)
# read variables from the .env file
environ.Env.read_env(os.path.join(BASE_DIR, ".env"))
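# A matching .env file might look like this (illustrative values only):
#
#   DEBUG=on
#   SECRET_KEY=change-me
#   DATABASE_URL=postgresql://postgres:postgres@localhost:5432/up
#   ENVIRONMENT=local
#   SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0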
# custom constants
CONST_AUTH_EXPIRATION = 60 * 8  # minutes -> 8 hours (60*8)
CONST_DB_CON_AGE = 600
# custom constants loaded from the environment/.env file
BANK_ACTIVE = env("BANK_ACTIVE")
BANK_RENT_PRICE = env("BANK_RENT_PRICE")
ENVIRONMENT = env("ENVIRONMENT")
FIO_API_KEY = env("FIO_API_KEY")
HEADLESS = env("HEADLESS")
HEROKU = env("HEROKU")
MANUAL_PRODUCTION = env("MANUAL_PRODUCTION")
SENTRY_DSN = env("SENTRY_DSN")
# handling for running tests - detect a command-line launch or the environment variable (needed for IDEs)
TESTS_RUNNING = env("TESTS_RUNNING") or (len(sys.argv) > 1 and sys.argv[1] in ["test", "behave"])
# Django constants
DEBUG = env("DEBUG")
SECRET_KEY = env("SECRET_KEY")
# Application definition
INSTALLED_APPS = [
"whitenoise.runserver_nostatic",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"admin.apps.AdminConfig",
"rest_framework",
"api.apps.ApiConfig",
"django_filters",
"debug_toolbar",
]
if not HEROKU:
INSTALLED_APPS.append("behave_django")
# API
REST_FRAMEWORK = {
# JWTTokenUserAuthentication is used so that no user DB lookup is performed on every request
"DEFAULT_AUTHENTICATION_CLASSES": (
# BasicAuthentication for the OpenAPI documentation and the Browsable API
"rest_framework.authentication.BasicAuthentication",
# JWTTokenUserAuthentication for API access from the frontend
"rest_framework_simplejwt.authentication.JWTTokenUserAuthentication",
),
"DEFAULT_FILTER_BACKENDS": ("django_filters.rest_framework.DjangoFilterBackend",),
"DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
"TEST_REQUEST_DEFAULT_FORMAT": "json",
}
SIMPLE_JWT = {
# Sliding tokens are used - one and the same token serves for both authentication and refresh
"SLIDING_TOKEN_LIFETIME": timedelta(minutes=CONST_AUTH_EXPIRATION),
"SLIDING_TOKEN_REFRESH_LIFETIME": timedelta(days=2),
"AUTH_TOKEN_CLASSES": ("rest_framework_simplejwt.tokens.SlidingToken",),
"AUTH_HEADER_TYPES": ("Bearer",),
}
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"csp.middleware.CSPMiddleware",
"whitenoise.middleware.WhiteNoiseMiddleware",
"debug_toolbar.middleware.DebugToolbarMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "up.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [os.path.join(BASE_DIR, "admin/templates")],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
]
},
}
]
WSGI_APPLICATION = "up.wsgi.application"
CACHES = {"default": {"BACKEND": "django.core.cache.backends.locmem.LocMemCache"}}
# Database
DATABASES = {"default": env.db()}
# enable persistent DB connections (except during tests - it causes problems there)
if not TESTS_RUNNING:
DATABASES["default"]["CONN_MAX_AGE"] = CONST_DB_CON_AGE
# https://docs.djangoproject.com/fr/3.2/releases/3.2/#customizing-type-of-auto-created-primary-keys
DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
# Password validation
AUTH_PASSWORD_VALIDATORS = [
{"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"},
{"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
{"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
{"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]
# Internationalization
LANGUAGE_CODE = "cs"
TIME_ZONE = "Europe/Prague"
USE_I18N = True
USE_L10N = True
USE_TZ = True
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
# Static files
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATIC_URL = "/static/"
# debug toolbar
DEBUG_TOOLBAR_CONFIG = {
"SHOW_TOOLBAR_CALLBACK": lambda request: True if DEBUG else False,
"SHOW_COLLAPSED": True,
}
# Django security constants
SECURE_REFERRER_POLICY = "strict-origin-when-cross-origin"  # the Referer header must be sent to Sentry
X_FRAME_OPTIONS = "DENY"
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_BROWSER_XSS_FILTER = True
# CSP
# CSP for Google Analytics, see https://developers.google.com/tag-manager/web/csp#universal_analytics_google_analytics
CSPURL_GOOGLE_ANALYTICS = "https://www.google-analytics.com"
CSPURL_GOOGLE_ANALYTICS_SSL = "https://ssl.google-analytics.com"
# CSP for Google Fonts
CSPURL_GOOGLE_FONTS_STYLE = "fonts.googleapis.com"
CSPURL_GOOGLE_FONTS_FONT = "fonts.gstatic.com"
# CSP for Sentry
CSPURL_SENTRY = "https://sentry.io"
# CSP for unpkg.com
CSPURL_UNPKG = "https://unpkg.com"
CSP_SELF = "'self'"
CSP_NONE = "'none'"
# CSP configuration
CSP_DEFAULT_SRC = (CSP_NONE,)
CSP_STYLE_SRC = (
CSP_SELF,
"'unsafe-inline'",
CSPURL_GOOGLE_FONTS_STYLE,
CSPURL_UNPKG,
) # 'unsafe-inline' is needed because of inline CSS in the Sentry feedback form
CSP_CONNECT_SRC = (CSP_SELF, CSPURL_GOOGLE_ANALYTICS, CSPURL_SENTRY)
CSP_SCRIPT_SRC = (
CSP_SELF,
CSPURL_SENTRY,
CSPURL_GOOGLE_ANALYTICS,
CSPURL_GOOGLE_ANALYTICS_SSL,
CSPURL_UNPKG,
)
CSP_FONT_SRC = (CSP_SELF, CSPURL_GOOGLE_FONTS_FONT)
CSP_IMG_SRC = (CSP_SELF, CSPURL_GOOGLE_ANALYTICS, "data:")
CSP_FRAME_ANCESTORS = (CSP_NONE,)
CSP_FORM_ACTION = (CSP_NONE,)
CSP_BASE_URI = (CSP_NONE,)
CSP_MANIFEST_SRC = (CSP_SELF,) # site.webmanifest
| 35.384236 | 118 | 0.738828 | 873 | 7,183 | 5.853379 | 0.399771 | 0.035616 | 0.023288 | 0.010959 | 0.091194 | 0.059491 | 0.01683 | 0.008806 | 0 | 0 | 0 | 0.0044 | 0.145761 | 7,183 | 202 | 119 | 35.559406 | 0.82839 | 0.244466 | 0 | 0.027027 | 0 | 0 | 0.418496 | 0.301824 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.033784 | 0.027027 | 0 | 0.027027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64f625ff3b06630f6e5652636658877b09051eea | 10,470 | py | Python | acme/agents/jax/ail/learning.py | wookayin/acme | 71b2ab8577a118c103718f034fa62c5ad2c0fd97 | [
"Apache-2.0"
] | null | null | null | acme/agents/jax/ail/learning.py | wookayin/acme | 71b2ab8577a118c103718f034fa62c5ad2c0fd97 | [
"Apache-2.0"
] | null | null | null | acme/agents/jax/ail/learning.py | wookayin/acme | 71b2ab8577a118c103718f034fa62c5ad2c0fd97 | [
"Apache-2.0"
] | null | null | null | # Copyright 2018 DeepMind Technologies Limited. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""AIL learner implementation."""
import functools
import itertools
import time
from typing import Any, Callable, Iterator, List, NamedTuple, Optional, Tuple
import acme
from acme import types
from acme.agents.jax.ail import losses
from acme.agents.jax.ail import networks as ail_networks
from acme.jax import networks as networks_lib
from acme.jax import utils
from acme.utils import counting
from acme.utils import loggers
from acme.utils import reverb_utils
import jax
import optax
import reverb
class DiscriminatorTrainingState(NamedTuple):
"""Contains training state for the discriminator."""
# State of the optimizer used to optimize the discriminator parameters.
optimizer_state: optax.OptState
# Parameters of the discriminator.
discriminator_params: networks_lib.Params
# State of the discriminator
discriminator_state: losses.State
# For AIRL variants, we need the policy params to compute the loss.
policy_params: Optional[networks_lib.Params]
# Key for random number generation.
key: networks_lib.PRNGKey
# Training step of the discriminator.
steps: int
class TrainingState(NamedTuple):
"""Contains training state of the AIL learner."""
rewarder_state: DiscriminatorTrainingState
learner_state: Any
def ail_update_step(
state: DiscriminatorTrainingState, data: Tuple[types.Transition,
types.Transition],
optimizer: optax.GradientTransformation,
ail_network: ail_networks.AILNetworks,
loss_fn: losses.Loss) -> Tuple[DiscriminatorTrainingState, losses.Metrics]:
"""Run an update steps on the given transitions.
Args:
state: The learner state.
data: Demo and rb transitions.
optimizer: Discriminator optimizer.
ail_network: AIL networks.
loss_fn: Discriminator loss to minimize.
Returns:
A new state and metrics.
"""
demo_transitions, rb_transitions = data
key, discriminator_key, loss_key = jax.random.split(state.key, 3)
def compute_loss(
discriminator_params: networks_lib.Params) -> losses.LossOutput:
discriminator_fn = functools.partial(
ail_network.discriminator_network.apply,
discriminator_params,
state.policy_params,
is_training=True,
rng=discriminator_key)
return loss_fn(discriminator_fn, state.discriminator_state,
demo_transitions, rb_transitions, loss_key)
loss_grad = jax.grad(compute_loss, has_aux=True)
grads, (loss, new_discriminator_state) = loss_grad(state.discriminator_params)
update, optimizer_state = optimizer.update(
grads,
state.optimizer_state,
params=state.discriminator_params)
discriminator_params = optax.apply_updates(state.discriminator_params, update)
new_state = DiscriminatorTrainingState(
optimizer_state=optimizer_state,
discriminator_params=discriminator_params,
discriminator_state=new_discriminator_state,
policy_params=state.policy_params, # Not modified.
key=key,
steps=state.steps + 1,
)
return new_state, loss
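# The update above follows the standard jax/optax training-step pattern.
# A minimal standalone sketch of the same structure, with a toy quadratic
# loss standing in for the AIL discriminator loss (illustrative only):
#
#     import jax.numpy as jnp
#
#     params = jnp.zeros(3)
#     opt = optax.sgd(1e-2)
#     opt_state = opt.init(params)
#
#     def toy_loss(p):
#         loss = jnp.sum(p ** 2)
#         return loss, {'loss': loss}  # aux output, like losses.Metrics here
#
#     grads, metrics = jax.grad(toy_loss, has_aux=True)(params)
#     updates, opt_state = opt.update(grads, opt_state, params=params)
#     params = optax.apply_updates(params, updates)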
class AILSample(NamedTuple):
discriminator_sample: types.Transition
direct_sample: reverb.ReplaySample
demonstration_sample: types.Transition
class AILLearner(acme.Learner):
"""AIL learner."""
def __init__(
self,
counter: counting.Counter,
direct_rl_learner_factory: Callable[[Iterator[reverb.ReplaySample]],
acme.Learner],
loss_fn: losses.Loss,
iterator: Iterator[AILSample],
discriminator_optimizer: optax.GradientTransformation,
ail_network: ail_networks.AILNetworks,
discriminator_key: networks_lib.PRNGKey,
is_sequence_based: bool,
num_sgd_steps_per_step: int = 1,
policy_variable_name: Optional[str] = None,
logger: Optional[loggers.Logger] = None):
"""AIL Learner.
Args:
counter: Counter.
direct_rl_learner_factory: Function that creates the direct RL learner
when passed a replay sample iterator.
loss_fn: Discriminator loss.
iterator: Iterator that returns AILSamples.
discriminator_optimizer: Discriminator optax optimizer.
ail_network: AIL networks.
discriminator_key: RNG key.
is_sequence_based: If True, a direct rl algorithm is using SequenceAdder
data format. Otherwise the learner assumes that the direct rl algorithm
is using NStepTransitionAdder.
num_sgd_steps_per_step: Number of discriminator gradient updates per step.
policy_variable_name: The name of the policy variable to retrieve
direct_rl policy parameters.
logger: Logger.
"""
self._is_sequence_based = is_sequence_based
state_key, networks_key = jax.random.split(discriminator_key)
# Generator expression that works the same as an iterator.
# https://pymbook.readthedocs.io/en/latest/igd.html#generator-expressions
iterator, direct_rl_iterator = itertools.tee(iterator)
direct_rl_iterator = (
self._process_sample(sample.direct_sample)
for sample in direct_rl_iterator)
self._direct_rl_learner = direct_rl_learner_factory(direct_rl_iterator)
self._iterator = iterator
if policy_variable_name is not None:
def get_policy_params():
return self._direct_rl_learner.get_variables([policy_variable_name])[0]
self._get_policy_params = get_policy_params
else:
self._get_policy_params = lambda: None
# General learner book-keeping and loggers.
self._counter = counter or counting.Counter()
self._logger = logger or loggers.make_default_logger(
'learner',
asynchronous=True,
serialize_fn=utils.fetch_devicearray,
steps_key=self._counter.get_steps_key())
# Use the JIT compiler.
self._update_step = functools.partial(
ail_update_step,
optimizer=discriminator_optimizer,
ail_network=ail_network,
loss_fn=loss_fn)
self._update_step = utils.process_multiple_batches(self._update_step,
num_sgd_steps_per_step)
self._update_step = jax.jit(self._update_step)
discriminator_params, discriminator_state = (
ail_network.discriminator_network.init(networks_key))
self._state = DiscriminatorTrainingState(
optimizer_state=discriminator_optimizer.init(discriminator_params),
discriminator_params=discriminator_params,
discriminator_state=discriminator_state,
policy_params=self._get_policy_params(),
key=state_key,
steps=0,
)
# Do not record timestamps until after the first learning step is done.
# This is to avoid including the time it takes for actors to come online and
# fill the replay buffer.
self._timestamp = None
self._get_reward = jax.jit(
functools.partial(
ail_networks.compute_ail_reward, networks=ail_network))
def _process_sample(self, sample: reverb.ReplaySample) -> reverb.ReplaySample:
"""Updates the reward of the replay sample.
Args:
sample: Replay sample to update the reward to.
Returns:
The replay sample with an updated reward.
"""
transitions = reverb_utils.replay_sample_to_sars_transition(
sample, is_sequence=self._is_sequence_based)
rewards = self._get_reward(self._state.discriminator_params,
self._state.discriminator_state,
self._state.policy_params, transitions)
return sample._replace(data=sample.data._replace(reward=rewards))
def step(self):
sample = next(self._iterator)
rb_transitions = sample.discriminator_sample
demo_transitions = sample.demonstration_sample
if demo_transitions.reward.shape != rb_transitions.reward.shape:
raise ValueError(
'Different shapes for demo transitions and rb_transitions: '
f'{demo_transitions.reward.shape} != {rb_transitions.reward.shape}')
# Update the parameters of the policy before doing a gradient step.
state = self._state._replace(policy_params=self._get_policy_params())
self._state, metrics = self._update_step(state,
(demo_transitions, rb_transitions))
# The order is important for AIRL.
# In AIRL, the discriminator update depends on the logpi of the direct rl
# policy.
# When updating the discriminator, we want the logpi for which the
# transitions were made with and not an updated one.
# Get data from replay (dropping extras if any). Note there is no
# extra data here because we do not insert any into Reverb.
self._direct_rl_learner.step()
# Compute elapsed time.
timestamp = time.time()
elapsed_time = timestamp - self._timestamp if self._timestamp else 0
self._timestamp = timestamp
# Increment counts and record the current time.
counts = self._counter.increment(steps=1, walltime=elapsed_time)
# Attempts to write the logs.
self._logger.write({**metrics, **counts})
def get_variables(self, names: List[str]) -> List[Any]:
rewarder_dict = {'discriminator': self._state.discriminator_params}
learner_names = [name for name in names if name not in rewarder_dict]
learner_dict = {}
if learner_names:
learner_dict = dict(
zip(learner_names,
self._direct_rl_learner.get_variables(learner_names)))
variables = [
rewarder_dict.get(name, learner_dict.get(name, None)) for name in names
]
return variables
def save(self) -> TrainingState:
return TrainingState(
rewarder_state=self._state,
learner_state=self._direct_rl_learner.save())
def restore(self, state: TrainingState):
self._state = state.rewarder_state
self._direct_rl_learner.restore(state.learner_state)
| 35.612245 | 80 | 0.720153 | 1,279 | 10,470 | 5.671618 | 0.228303 | 0.019851 | 0.020678 | 0.015715 | 0.14654 | 0.084367 | 0.032534 | 0.032534 | 0 | 0 | 0 | 0.001811 | 0.209074 | 10,470 | 293 | 81 | 35.733788 | 0.87417 | 0.288252 | 0 | 0.02439 | 0 | 0 | 0.019535 | 0.008254 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054878 | false | 0 | 0.097561 | 0.012195 | 0.280488 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
64f62b0e9a31a02193441b2301ca86a8c2d36616 | 9,878 | py | Python | json_database/utils/__init__.py | NeonJarbas/json_database | 026d01faff79f178ab8e5a8505666959279761cb | [
"MIT"
] | 8 | 2020-05-30T12:44:35.000Z | 2022-02-14T15:12:53.000Z | json_database/utils/__init__.py | NeonJarbas/json_database | 026d01faff79f178ab8e5a8505666959279761cb | [
"MIT"
] | 4 | 2021-08-18T23:40:45.000Z | 2021-09-30T00:43:42.000Z | json_database/utils/__init__.py | NeonJarbas/json_database | 026d01faff79f178ab8e5a8505666959279761cb | [
"MIT"
] | 7 | 2020-05-30T12:44:41.000Z | 2021-09-30T00:27:04.000Z | import json
from difflib import SequenceMatcher
def fuzzy_match(x, against):
"""Perform a 'fuzzy' comparison between two strings.
Returns:
float: match percentage -- 1.0 for perfect match,
down to 0.0 for no match at all.
"""
return SequenceMatcher(None, x, against).ratio()
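# e.g. fuzzy_match("hello", "helo") ~= 0.89; fuzzy_match("abc", "xyz") == 0.0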
def match_one(query, choices):
"""
Find best match from a list or dictionary given an input
Arguments:
query: string to test
choices: list or dictionary of choices
Returns: tuple with best match, score
"""
if isinstance(choices, dict):
_choices = list(choices.keys())
elif isinstance(choices, list):
_choices = choices
else:
raise ValueError('a list or dict of choices must be provided')
best = (_choices[0], fuzzy_match(query, _choices[0]))
for c in _choices[1:]:
score = fuzzy_match(query, c)
if score > best[1]:
best = (c, score)
if isinstance(choices, dict):
return (choices[best[0]], best[1])
else:
return best
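# e.g. match_one("fuzy", {"fuzzy": 1, "exact": 2}) -> (1, 0.888...): the key
# "fuzzy" scores highest, so its value is returned along with the score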
def merge_dict(base, delta, merge_lists=True, skip_empty=True,
no_dupes=True, new_only=False):
"""
Recursively merging configuration dictionaries.
Args:
base: Target for merge
delta: Dictionary to merge into base
merge_lists: if a list is found merge contents instead of replacing
skip_empty: if an item in delta is empty, dont overwrite base
no_dupes: when merging lists deduplicate entries
new_only: only merge keys not yet in base
"""
for k, d in delta.items():
b = base.get(k)
if isinstance(d, dict) and isinstance(b, dict):
merge_dict(b, d, merge_lists, skip_empty, no_dupes, new_only)
else:
if new_only and k in base:
continue
if skip_empty and not d and d is not False:
# dont replace if new entry is empty
pass
elif all((isinstance(b, list), isinstance(d, list), merge_lists)):
if no_dupes:
base[k] += [item for item in d if item not in base[k]]
else:
base[k] += d
else:
base[k] = d
return base
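# e.g. merge_dict({"a": 1, "b": [1, 2]}, {"b": [2, 3], "c": "x"})
#      -> {"a": 1, "b": [1, 2, 3], "c": "x"}  (lists merged without dupes)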
def load_commented_json(filename):
""" Loads an JSON file, ignoring comments
Supports a trivial extension to the JSON file format. Allow comments
to be embedded within the JSON, requiring that a comment be on an
independent line starting with '//' or '#'.
NOTE: A file created with these style comments will break strict JSON
parsers. This is similar to but lighter-weight than "human json"
proposed at https://hjson.org
Args:
filename (str): path to the commented JSON file
Returns:
obj: decoded Python object
"""
with open(filename, encoding='utf-8') as f:
contents = f.read()
return json.loads(uncomment_json(contents))
def uncomment_json(commented_json_str):
""" Removes comments from a JSON string.
Supporting a trivial extension to the JSON format. Allow comments
to be embedded within the JSON, requiring that a comment be on an
independent line starting with '//' or '#'.
Example...
{
// comment
'name' : 'value'
}
Args:
commented_json_str (str): a JSON string
Returns:
str: uncommented, legal JSON
"""
lines = commented_json_str.splitlines()
# remove all comment lines, starting with // or #
nocomment = []
for line in lines:
stripped = line.lstrip()
if stripped.startswith("//") or stripped.startswith("#"):
continue
nocomment.append(line)
return " ".join(nocomment)
def is_jsonifiable(thing):
if not isinstance(thing, dict):
if isinstance(thing, str):
try:
json.loads(thing)
return True
except ValueError:
pass
else:
try:
thing.__dict__
return True
except AttributeError:
pass
return False
return True
def get_key_recursively(search_dict, field, filter_None=True):
"""
Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided.
"""
if not is_jsonifiable(search_dict):
raise ValueError("unparseable format")
fields_found = []
for key, value in search_dict.items():
if value is None and filter_None:
continue
if key == field:
fields_found.append(search_dict)
elif isinstance(value, dict):
fields_found += get_key_recursively(value, field, filter_None)
elif isinstance(value, list):
for item in value:
if not isinstance(item, dict):
try:
if get_key_recursively(item.__dict__, field, filter_None):
fields_found.append(item)
except Exception:
continue  # can't parse this item
else:
fields_found += get_key_recursively(item, field, filter_None)
return fields_found
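# e.g. get_key_recursively({"a": {"b": 1}, "b": 2}, "b")
#      -> [{"b": 1}, {"a": {"b": 1}, "b": 2}]  (every dict containing the key)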
def get_key_recursively_fuzzy(search_dict, field, thresh=0.6, filter_None=True):
"""
Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided.
"""
if not is_jsonifiable(search_dict):
raise ValueError("unparseable format")
fields_found = []
for key, value in search_dict.items():
if value is None and filter_None:
continue
score = 0
if isinstance(key, str):
score = fuzzy_match(key, field)
if score >= thresh:
fields_found.append((search_dict, score))
elif isinstance(value, dict):
fields_found += get_key_recursively_fuzzy(value, field, thresh, filter_None)
elif isinstance(value, list):
for item in value:
if not isinstance(item, dict):
try:
if get_key_recursively_fuzzy(item.__dict__, field, thresh, filter_None):
fields_found.append((item, score))
except Exception:
continue  # can't parse this item
else:
fields_found += get_key_recursively_fuzzy(item, field, thresh, filter_None)
return sorted(fields_found, key=lambda i: i[1], reverse=True)
def get_value_recursively(search_dict, field, target_value):
"""
Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided.
"""
if not is_jsonifiable(search_dict):
raise ValueError("unparseable format")
fields_found = []
for key, value in search_dict.items():
if key == field and value == target_value:
fields_found.append(search_dict)
elif isinstance(value, dict):
fields_found += get_value_recursively(value, field, target_value)
elif isinstance(value, list):
for item in value:
if not isinstance(item, dict):
try:
if get_value_recursively(item.__dict__, field, target_value):
fields_found.append(item)
except Exception:
continue  # can't parse this item
else:
fields_found += get_value_recursively(item, field, target_value)
return fields_found
def get_value_recursively_fuzzy(search_dict, field, target_value, thresh=0.6):
"""
Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided.
"""
if not is_jsonifiable(search_dict):
raise ValueError("unparseable format")
fields_found = []
for key, value in search_dict.items():
if key == field:
if isinstance(value, str):
score = fuzzy_match(target_value, value)
if score >= thresh:
fields_found.append((search_dict, score))
elif isinstance(value, list):
for item in value:
score = fuzzy_match(target_value, item)
if score >= thresh:
fields_found.append((search_dict, score))
elif isinstance(value, dict):
fields_found += get_value_recursively_fuzzy(value, field, target_value, thresh)
elif isinstance(value, list):
for item in value:
if not isinstance(item, dict):
try:
found = get_value_recursively_fuzzy(item.__dict__, field, target_value, thresh)
if len(found):
fields_found.append((item, found[0][1]))
except Exception:
continue  # can't parse this item
else:
fields_found += get_value_recursively_fuzzy(item, field, target_value, thresh)
return sorted(fields_found, key=lambda i: i[1], reverse=True)
def jsonify_recursively(thing):
if isinstance(thing, list):
jsonified = list(thing)
for idx, item in enumerate(thing):
jsonified[idx] = jsonify_recursively(item)
elif isinstance(thing, dict):
try:
# can't import at top level to do proper check
jsonified = dict(thing.db)
except Exception:
jsonified = dict(thing)
for key in jsonified.keys():
value = jsonified[key]
jsonified[key] = jsonify_recursively(value)
else:
try:
jsonified = thing.__dict__
except AttributeError:
jsonified = thing
return jsonified
| 32.071429 | 103 | 0.57542 | 1,161 | 9,878 | 4.745047 | 0.179156 | 0.049918 | 0.027773 | 0.020875 | 0.476493 | 0.414413 | 0.387185 | 0.387185 | 0.383917 | 0.373026 | 0 | 0.003096 | 0.346021 | 9,878 | 307 | 104 | 32.175896 | 0.84969 | 0.215934 | 0 | 0.519126 | 0 | 0 | 0.016651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060109 | false | 0.016393 | 0.010929 | 0 | 0.153005 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f02541a7ae70196b397d818b7763843b81bed3a | 6,213 | py | Python | pong_v2b.py | jasonj2333/Pico-Pong-2021 | dabd57a9b6a44723a1d690a614265e1a6d5d9b1f | [
"MIT"
] | null | null | null | pong_v2b.py | jasonj2333/Pico-Pong-2021 | dabd57a9b6a44723a1d690a614265e1a6d5d9b1f | [
"MIT"
] | null | null | null | pong_v2b.py | jasonj2333/Pico-Pong-2021 | dabd57a9b6a44723a1d690a614265e1a6d5d9b1f | [
"MIT"
] | 1 | 2021-03-11T08:34:21.000Z | 2021-03-11T08:34:21.000Z | #####################################################
####### Pico Pong 2021 by Jerzy Jasonek ########
####### version 2.0 ########
####### add training mode - 1 player mode ########
#####################################################
from machine import Pin, I2C, ADC
from ssd1306 import SSD1306_I2C
import framebuf
from utime import sleep
from random import randint
################################### Hardware Settings #################################
WIDTH = 128
HEIGHT = 64
i2c = I2C(1, scl = Pin(3), sda = Pin(2), freq=400000)
oled = SSD1306_I2C(WIDTH, HEIGHT, i2c)
Pot = ADC(26) # player 1 controller
Pot2 = ADC(27) # player 2 controller / scroll max left on start screen to turn on training mode
conversion_factor = 3.3 / (65535) # Conversion from Pin read to proper voltage
button = machine.Pin(14, machine.Pin.IN, machine.Pin.PULL_DOWN) # level button (wired to button_handler, cycles the level)
start_button = machine.Pin(15, machine.Pin.IN, machine.Pin.PULL_DOWN) # start button (wired to button_start)
global level1, level2,level3
level1 = machine.Pin(13, machine.Pin.OUT) # level 1 led
level2 = machine.Pin(12, machine.Pin.OUT) #level 2 led
level3 = machine.Pin(11, machine.Pin.OUT) #level 3 led
led_one_player_game = machine.Pin(1, machine.Pin.OUT) #training mode - 1 player game - led
################################### Game Settings #################################
one_player_game = False # training mode - you control player2
one_player_game_score = 0
game_over = False
#ball = bytearray(b'?\x00\x7f\x80\xff\xc0\xff\xc0\xff\xc0\xff\xc0\xff\xc0\xff\xc0\x7f\x80?\x00')
ball = bytearray(b'x\xfc\xfc\xfc\xfcx')
ball_x = 1
ball_y = 1
player1 = bytearray(b'\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0\xe0')
player1X = 5
player1Y = int((HEIGHT-20)/2)
player2X = WIDTH-8
player2Y = int((HEIGHT-20)/2)
player1_score = 0
player2_score = 0
ball_buff = framebuf.FrameBuffer(ball, 6, 6, framebuf.MONO_HLSB)
player1_buff = framebuf.FrameBuffer(player1, 3, 20, framebuf.MONO_HLSB)
player2_buff = framebuf.FrameBuffer(player1, 3, 20, framebuf.MONO_HLSB)
global level
level = 1
level1.value(0)
level2.value(0)
level3.value(0)
global start
start = False
################################### Function #################################
def button_handler(pin):
global level
level +=1
if level == 4:
level=1
def button_start(pin):
global start
if not start:
start = True
button.irq(trigger=machine.Pin.IRQ_RISING, handler=button_handler)
start_button.irq(trigger=machine.Pin.IRQ_RISING, handler=button_start)
def check_level(level):
global level1, level2,level3
if level == 1:
level1.value(1)
level2.value(0)
level3.value(0)
elif level == 2:
level1.value(1)
level2.value(1)
level3.value(0)
elif level == 3:
level1.value(1)
level2.value(1)
level3.value(1)
# map function: rescale x from [in_min, in_max] to [out_min, out_max]
def convert(x, in_min, in_max, out_min, out_max):
return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
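# e.g. convert(1.65, 0, 3.3, 0, 44) -> 22.0 (a mid-range pot voltage maps to
# the middle of the paddle's travel)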
def set_ball_y(y, playerY):
pY = int(playerY)
if (y >= pY-3 and y <= pY+2):
return -2
elif (y >= pY+16 and y <= pY+19):
return 2
elif y >= pY+3 and y <= pY+6:
return -1
elif y >= pY+12 and y <= pY+15:
return 1
else:
return 0
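# The deflection depends on where the ball hits the 20 px paddle: the top
# rows give -2, then -1, the centre gives 0, then 1 and 2 towards the bottom.
# e.g. set_ball_y(25, 20) -> -1 (ball near the paddle's upper-middle section)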
################################### Start Screen #################################
oled.fill(0)
x = int((WIDTH-4)/2)
y = int((HEIGHT-4)/2)
oled.text('Pico Pong 2021', 10,21)
oled.text('by Jerzy Jasonek', 0,41)
oled.show()
check_level(level)
sleep(2)
################################### Game loop #################################
while not game_over:
check_level(level)
if not start:
player2Y = (Pot2.read_u16())
if player2Y < 1000:
one_player_game = True
led_one_player_game.value(1)
else:
one_player_game = False
led_one_player_game.value(0)
else:
#update player position
if not one_player_game:
player1Y = (Pot.read_u16() * conversion_factor)
player1Y = convert(player1Y, 0, 3.3, 0, 44)
else:
player1Y = y-10
player2Y = (Pot2.read_u16() * conversion_factor)
player2Y = convert(player2Y, 0, 3.3, 0, 44)
#draw screen
oled.fill(0)
oled.text(str(player1_score), 40,3)
oled.text(str(player2_score), 88,3)
oled.blit(ball_buff, x, y)
oled.blit(player1_buff, player1X, int(player1Y))
oled.blit(player2_buff, player2X, int(player2Y))
if one_player_game:
oled.text('Score:'+str(one_player_game_score), 28,55)
oled.show()
# check collision with the top and bottom walls
if y > HEIGHT-7 or y < 0:
ball_y *= -1
if x < 0 or x > WIDTH-6:
if x < 0:
player2_score +=1
else:
player1_score +=1
x = int((WIDTH-4)/2)
y = int((HEIGHT-4)/2)
ball_x *= -1
if player1_score == 15 or player2_score == 15:
game_over = True
else:
sleep(1)
#collision with player
if player1X+3 <=x and player1X+4 >=x and player1Y-3 <= y and player1Y+21 >= y:
ball_x *= -1
ball_y = set_ball_y(y, player1Y)
if one_player_game:
ball_y = randint(0,2)
if player2X-5 <= x and player2X-4 >= x and player2Y-3 <= y and player2Y+21 >= y:
ball_x *= -1
ball_y = set_ball_y(y, player2Y)
if one_player_game:
one_player_game_score +=level
x += ball_x*level
y += ball_y*level
################################### Game over screen #################################
oled.fill(0)
oled.text(str(player1_score), 40,3)
oled.text(str(player2_score), 88,3)
if one_player_game:
oled.text('Score:'+str(one_player_game_score), 28,55)
oled.text('Game over', 30,31)
oled.show()
| 30.014493 | 104 | 0.541445 | 839 | 6,213 | 3.882002 | 0.191895 | 0.035002 | 0.049739 | 0.062634 | 0.30089 | 0.254222 | 0.233344 | 0.214922 | 0.19343 | 0.133866 | 0 | 0.072368 | 0.266055 | 6,213 | 206 | 105 | 30.160194 | 0.641886 | 0.110736 | 0 | 0.355705 | 0 | 0.006711 | 0.030113 | 0.016168 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033557 | false | 0 | 0.033557 | 0.006711 | 0.107383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f0774dc003329dbb4aee294ed1abf6b04d27049 | 14,904 | py | Python | gh/views.py | Gepetto/dashboard | a24bbcec7c13c00b2a783c840658130083ad3b30 | [
"BSD-2-Clause"
] | null | null | null | gh/views.py | Gepetto/dashboard | a24bbcec7c13c00b2a783c840658130083ad3b30 | [
"BSD-2-Clause"
] | 7 | 2018-02-21T18:03:36.000Z | 2021-04-29T15:17:59.000Z | gh/views.py | Gepetto/dashboard | a24bbcec7c13c00b2a783c840658130083ad3b30 | [
"BSD-2-Clause"
] | 1 | 2018-07-10T15:19:31.000Z | 2018-07-10T15:19:31.000Z | """Views for dashboard_apps."""
import hmac
import logging
import re
import traceback
from hashlib import sha1
from ipaddress import ip_address, ip_network
from json import loads

from asgiref.sync import sync_to_async, async_to_sync
from django.conf import settings
from django.core.mail import mail_admins
from django.http import HttpRequest
from django.http.response import (HttpResponse, HttpResponseBadRequest, HttpResponseForbidden, HttpResponseRedirect,
                                  HttpResponseServerError)
from django.shortcuts import get_object_or_404, reverse
from django.utils.encoding import force_bytes
from django.views.decorators.csrf import csrf_exempt

import git
import github
from gitlab import GitlabDeleteError

from autoslug.utils import slugify
from dashboard.middleware import ip_laas
from rainboard.models import Namespace, Project
from rainboard.utils import SOURCES

from . import models

logger = logging.getLogger(__name__)

PR_MASTER_MSG = """Hi ! This project doesn't usually accept pull requests on master. If this wasn't intentional, you
can change the base branch of this pull request to devel (no need to close it for that). Best, a bot."""


async def check_suite(request: HttpRequest, rep: str) -> HttpResponse:
    """Manage Github's check suites."""
    data = loads(request.body.decode())
    slug = slugify(data['repository']['name'])
    if 'ros-release' in slug:  # Don't run check suites on ros-release repositories
        return HttpResponse(rep)
    await sync_to_async(models.GithubCheckSuite.objects.get_or_create)(id=data['check_suite']['id'])
    return HttpResponse(rep)


async def pull_request(request: HttpRequest, rep: str) -> HttpResponse:
    """Manage Github's Pull Requests."""
    logger.info('process gh pr')
    data = loads(request.body.decode())
    event = data['action']
    branch = f'pr/{data["number"]}'
    login = slugify(data["pull_request"]["head"]["repo"]["owner"]["login"])
    namespace = await sync_to_async(get_object_or_404)(Namespace,
                                                       slug_github=slugify(data['repository']['owner']['login']))
    project = await sync_to_async(get_object_or_404)(Project,
                                                     main_namespace=namespace,
                                                     slug=slugify(data['repository']['name']))
    git_repo = await sync_to_async(project.git)()

    logger.debug(f'{namespace.slug}/{project.slug}: Pull request on {branch}: {event}')

    # Prevent pull requests on master when necessary
    if event in ['opened', 'reopened']:
        gh = await sync_to_async(project.github)()
        pr = await sync_to_async(gh.get_pull)(data["number"])
        pr_branch = pr.base.ref
        branches = [b.name for b in await sync_to_async(gh.get_branches)()]
        if (not project.accept_pr_to_master and pr_branch == 'master' and 'devel' in branches
                and login != namespace.slug_github):
            logger.info(f"{namespace.slug}/{project.slug}: New pr {data['number']} to master")
            await sync_to_async(pr.create_issue_comment)(PR_MASTER_MSG)

    gh_remote_name = f'github/{login}'
    if gh_remote_name not in git_repo.remotes:
        remote = await sync_to_async(git_repo.create_remote)(gh_remote_name,
                                                             data["pull_request"]["head"]["repo"]["clone_url"])
    else:
        remote = await sync_to_async(git_repo.remote)(gh_remote_name)

    # Sync the pull request with the pr/XX branch on Gitlab
    if event in ['opened', 'reopened', 'synchronize']:
        remote.fetch()
        commit = data['pull_request']['head']['sha']

        # Update branch to the latest commit
        if branch in git_repo.branches:
            git_repo.heads[branch].commit = commit
        else:
            await sync_to_async(git_repo.create_head)(branch, commit=commit)

        # Create a gitlab remote if it doesn't exist
        gl_remote_name = f'gitlab/{namespace.slug}'
        if gl_remote_name not in git_repo.remotes:
            url = await sync_to_async(project.remote_url_gitlab)()
            await sync_to_async(git_repo.create_remote)(gl_remote_name, url=url)

        # Push the changes to gitlab
        logger.info(f'{namespace.slug}/{project.slug}: Pushing {commit} on {branch} on gitlab')
        try:
            git_repo.git.push(gl_remote_name, branch)
        except git.exc.GitCommandError:
            logger.warning(f'{namespace.slug}/{project.slug}: Failed to push on {branch} on gitlab, force pushing ...')
            git_repo.git.push(gl_remote_name, branch, force=True)

    # The pull request was closed, delete the branch pr/XX on Gitlab
    elif event == 'closed':
        if branch in git_repo.branches:
            git_repo.delete_head(branch, force=True)
        git_repo.delete_remote(gh_remote_name)
        gitlab = await sync_to_async(project.gitlab)()
        try:
            await sync_to_async(gitlab.branches.delete)(branch)
            logger.info(f'{namespace.slug}/{project.slug}: Deleted branch {branch}')
        except GitlabDeleteError as e:
            logger.info(f'{namespace.slug}/{project.slug}: branch {branch} not deleted: {e}')

    return HttpResponse(rep)


async def push(request: HttpRequest, source: SOURCES, rep: str) -> HttpResponse:
    """Someone pushed on github or gitlab. Synchronise local & remote repos."""
    data = loads(request.body.decode())
    slug = slugify(data['repository']['name'])
    if 'ros-release' in slug:  # Don't sync ros-release repositories
        return HttpResponse(rep)
    if source == SOURCES.gitlab:
        namespace = await sync_to_async(get_object_or_404)(Namespace,
                                                           slug_gitlab=slugify(
                                                               data['project']['path_with_namespace'].split('/')[0]))
    else:
        namespace = await sync_to_async(get_object_or_404)(Namespace,
                                                           slug_github=slugify(data['repository']['owner']['login']))
    project = await sync_to_async(get_object_or_404)(Project, main_namespace=namespace, slug=slug)
    branch = data['ref'][11:]  # strip 'refs/heads/'
    commit = data['after']
    gl_remote_name = f'gitlab/{namespace.slug}'
    gh_remote_name = f'github/{namespace.slug}'
    git_repo = await sync_to_async(project.git)()

    logger.debug(f'{namespace.slug}/{slug}: Push detected on {source.name} {branch} (commit {commit})')

    if branch.startswith('pr/'):  # Don't sync pr/XX branches here, they are already handled by pull_request()
        return HttpResponse(rep)

    if branch.startswith('release/'):  # Don't sync release/X.Y.Z branches at all
        return HttpResponse(rep)

    # Fetch the latest commit from gitlab
    if gl_remote_name in git_repo.remotes:
        gl_remote = await sync_to_async(git_repo.remote)(gl_remote_name)
    else:
        url = await sync_to_async(project.remote_url_gitlab)()
        gl_remote = await sync_to_async(git_repo.create_remote)(gl_remote_name, url=url)
    gl_remote.fetch()

    # Fetch the latest commit from github
    if gh_remote_name in git_repo.remotes:
        gh_remote = await sync_to_async(git_repo.remote)(gh_remote_name)
    else:
        url = await sync_to_async(project.remote_url_github)()
        gh_remote = await sync_to_async(git_repo.create_remote)(gh_remote_name, url=url)
    gh_remote.fetch()

    # The branch was deleted on one remote, delete the branch on the other remote as well
    if commit == "0000000000000000000000000000000000000000":
        if branch in git_repo.branches:
            git_repo.delete_head(branch, force=True)
        if source == SOURCES.gitlab:
            github = await sync_to_async(project.github)()
            github.get_git_ref(f'heads/{branch}').delete()
        else:
            gitlab = await sync_to_async(project.gitlab)()
            gitlab.branches.delete(branch)
        logger.info(f'{namespace.slug}/{slug}: Deleted branch {branch}')
        return HttpResponse(rep)

    # Make sure we fetched the latest commit
    ref = gl_remote.refs[branch] if source == SOURCES.gitlab else gh_remote.refs[branch]
    if str(ref.commit) != commit:
        fail = f'Push: wrong commit: {ref.commit} vs {commit}'
        logger.error(f'{namespace.slug}/{slug}: ' + fail)
        return HttpResponseBadRequest(fail)

    # Update the branch to the latest commit
    if branch in git_repo.branches:
        git_repo.heads[branch].commit = commit
    else:
        await sync_to_async(git_repo.create_head)(branch, commit=commit)

    # Push the changes to the other remote
    try:
        if source == SOURCES.gitlab and (branch not in gh_remote.refs or str(gh_remote.refs[branch].commit) != commit):
            logger.info(f'{namespace.slug}/{slug}: Pushing {commit} on {branch} on github')
            await sync_to_async(git_repo.git.push)(gh_remote_name, branch)
        elif branch not in gl_remote.refs or str(gl_remote.refs[branch].commit) != commit:
            logger.info(f'{namespace.slug}/{slug}: Pushing {commit} on {branch} on gitlab')
            await sync_to_async(git_repo.git.push)(gl_remote_name, branch)
        else:
            return HttpResponse('already synced')
    except git.exc.GitCommandError:
        # Probably failed because of a force push
        logger.exception(f'{namespace.slug}/{slug}: Forge sync failed')
        message = traceback.format_exc()
        message = re.sub(r'://.*@', '://[REDACTED]@', message)  # Hide access tokens in the mail
        await sync_to_async(mail_admins)(f'Forge sync failed for {namespace.slug}/{slug}', message)

    return HttpResponse(rep)


async def pipeline(request: HttpRequest, rep: str) -> HttpResponse:
    """Something happened on a Gitlab pipeline. Tell Github if necessary."""
    data = loads(request.body.decode())
    branch, commit, gl_status, pipeline_id = (data['object_attributes'][key] for key in ['ref', 'sha', 'status', 'id'])
    namespace = await sync_to_async(get_object_or_404)(Namespace,
                                                       slug_gitlab=slugify(
                                                           data['project']['path_with_namespace'].split('/')[0]))
    project = await sync_to_async(get_object_or_404)(Project,
                                                     main_namespace=namespace,
                                                     slug=slugify(data['project']['name']))
    gh_repo = await sync_to_async(project.github)()
    ci_web_url = f'{project.url_gitlab()}/pipelines/{pipeline_id}'
    logger.debug(f'{namespace.slug}/{project.slug}: Pipeline #{pipeline_id} on commit {commit} for branch {branch}, '
                 f'status: {gl_status}')

    # Report the status to Github
    if gl_status in ['pending', 'success', 'failed']:
        gh_status = gl_status if gl_status != 'failed' else 'failure'
        if branch.startswith('pr/'):
            sha = await sync_to_async(gh_repo.get_commit)(sha=commit)
            await sync_to_async(sha.create_status)(state=gh_status, target_url=ci_web_url, context='gitlab-ci')
        else:
            try:
                sha = await sync_to_async(gh_repo.get_branch)(branch)
                await sync_to_async(sha.commit.create_status)(state=gh_status,
                                                              target_url=ci_web_url,
                                                              context='gitlab-ci')
            except github.GithubException as e:
                if e.status == 404:
                    # Happens when a new branch is created on gitlab and the pipeline event comes before the push event
                    logger.warning(f"Branch {branch} does not exist on github, unable to report the pipeline status.")
                else:
                    raise

    return HttpResponse(rep)


@sync_to_async
@csrf_exempt
@async_to_sync
async def webhook(request: HttpRequest) -> HttpResponse:
    """
    Process a request incoming from a github webhook.

    thx https://simpleisbetterthancomplex.com/tutorial/2016/10/31/how-to-handle-github-webhooks-using-django.html
    """
    # validate ip source
    forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR').split(', ')[0]
    # networks = httpx.get('https://api.github.com/meta').json()['hooks']  # Fails if API rate limit exceeded
    networks = ['185.199.108.0/22', '140.82.112.0/20']
    if not any(ip_address(forwarded_for) in ip_network(net) for net in networks):
        logger.warning('not from github IP')
        return HttpResponseRedirect(reverse('login'))

    # validate signature
    signature = request.META.get('HTTP_X_HUB_SIGNATURE')
    if signature is None:
        logger.warning('no signature')
        return HttpResponseRedirect(reverse('login'))
    algo, signature = signature.split('=')
    if algo != 'sha1':
        logger.warning('signature not sha-1')
        return HttpResponseServerError('I only speak sha1.', status=501)
    mac = hmac.new(force_bytes(settings.GITHUB_WEBHOOK_KEY), msg=force_bytes(request.body), digestmod=sha1)
    if not hmac.compare_digest(force_bytes(mac.hexdigest()), force_bytes(signature)):
        logger.warning('wrong signature')
        return HttpResponseForbidden('wrong signature.')

    # process event
    event = request.META.get('HTTP_X_GITHUB_EVENT', 'ping')
    if event == 'ping':
        return HttpResponse('pong')
    if event == 'push':
        return await push(request, SOURCES.github, 'push event detected')
    if event == 'check_suite':
        return await check_suite(request, 'check_suite event detected')
    if event == 'pull_request':
        return await pull_request(request, 'pull_request event detected')
    return HttpResponseForbidden('event not found')


@sync_to_async
@csrf_exempt
@async_to_sync
async def gl_webhook(request: HttpRequest) -> HttpResponse:
    """Process a request incoming from a gitlab webhook."""
    # validate ip source
    if not ip_laas(request):
        logger.warning('not from LAAS IP')
        return HttpResponseRedirect(reverse('login'))

    # validate token
    token = request.META.get('HTTP_X_GITLAB_TOKEN')
    if token is None:
        logger.warning('no token')
        return HttpResponseRedirect(reverse('login'))
    if token != settings.GITLAB_WEBHOOK_KEY:
        logger.warning('wrong token')
        return HttpResponseForbidden('wrong token.')

    event = request.META.get('HTTP_X_GITLAB_EVENT')
    if event == 'ping':
        return HttpResponse('pong')
    elif event == 'Pipeline Hook':
        return await pipeline(request, 'pipeline event detected')
    elif event == 'Push Hook':
        return await push(request, SOURCES.gitlab, 'push event detected')
    return HttpResponseForbidden('event not found')
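# --- Hedged usage sketch (addition, not part of the original module) ---
# One way these views might be wired up in the project's urls.py; the URL
# paths below are assumptions, not the actual dashboard configuration:
#
#     from django.urls import path
#     from gh import views
#
#     urlpatterns = [
#         path('gh/webhook', views.webhook),
#         path('gl/webhook', views.gl_webhook),
#     ]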
| 45.577982 | 119 | 0.650631 | 1,900 | 14,904 | 4.933684 | 0.164737 | 0.026243 | 0.048112 | 0.06486 | 0.449755 | 0.387881 | 0.332836 | 0.279923 | 0.243759 | 0.213783 | 0 | 0.009806 | 0.240472 | 14,904 | 326 | 120 | 45.717791 | 0.818286 | 0.078502 | 0 | 0.349794 | 0 | 0.020576 | 0.18201 | 0.041201 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.09465 | 0 | 0.205761 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f0ac5d355c9a9038a09d1d81879dac676f628c0 | 2,844 | py | Python | plugins/serializables/global_packets.py | wesleyd1124/WLUS | ce319962d57d91dc9c8b06cc435469c7b24da826 | [
"MIT"
] | 25 | 2018-06-05T22:45:03.000Z | 2021-09-01T08:15:38.000Z | plugins/serializables/global_packets.py | wesleyd1124/WLUS | ce319962d57d91dc9c8b06cc435469c7b24da826 | [
"MIT"
] | 15 | 2018-07-10T10:39:55.000Z | 2021-07-01T20:56:26.000Z | plugins/serializables/global_packets.py | wesleyd1124/WLUS | ce319962d57d91dc9c8b06cc435469c7b24da826 | [
"MIT"
] | 13 | 2018-05-19T19:44:59.000Z | 2021-07-18T18:45:58.000Z | """
Contains all the packets which are sent by either the client or server.
"""
from pyraknet import bitstream


class HandshakePacket(bitstream.Serializable):
    """
    [53-00-00-00]
    Global handshake packet serializable.
    This packet is sent to establish a connection.
    """

    def __init__(self):
        self.game_version = 171022
        self.unknown_0 = 0
        self.remote_connection_type = 0  # For auth this is 1, otherwise it is 4
        self.process_id = 1124
        self.local_port = 0xff

    def serialize(self, stream: bitstream.WriteStream) -> None:
        stream.write(bitstream.c_uint32(self.game_version))
        stream.write(bitstream.c_uint32(self.unknown_0))
        stream.write(bitstream.c_uint32(self.remote_connection_type))
        stream.write(bitstream.c_uint32(self.process_id))
        stream.write(bitstream.c_uint16(self.local_port))
        stream.write("127.0.0.1", allocated_length=33)

    @classmethod
    def deserialize(cls, stream: bitstream.ReadStream) -> bitstream.Serializable:
        packet = HandshakePacket()
        packet.game_version = stream.read(bitstream.c_uint32)
        packet.unknown_0 = stream.read(bitstream.c_uint32)
        packet.remote_connection_type = stream.read(bitstream.c_uint32)
        packet.process_id = stream.read(bitstream.c_uint32)
        packet.local_port = stream.read(bitstream.c_uint16)
        return packet


class DisconnectNotifyPacket(bitstream.Serializable):
    """
    [53-00-00-01]
    This packet is sent when the server and client disconnect from each other.
    """

    def __init__(self):
        self.disconnect_id = 0

    def serialize(self, stream: bitstream.WriteStream) -> None:
        stream.write(bitstream.c_uint32(self.disconnect_id))

    @classmethod
    def deserialize(cls, stream: bitstream.ReadStream) -> bitstream.Serializable:
        packet = DisconnectNotifyPacket()
        packet.disconnect_id = stream.read(bitstream.c_uint32)
        return packet

    def send(self, generic_game_server, address):
        disconnect_packet = bitstream.WriteStream()
        disconnect_packet.write(b"S\x00\x00\x01\x00\x00\x00\x00")
        disconnect_packet.write(self)
        generic_game_server.send(disconnect_packet, address)
        generic_game_server.delete_session(ip_address=address[0])


class GeneralNotifyPacket(bitstream.Serializable):
    """
    [53-00-00-02]
    This packet is sent to notify the player?
    """

    def __init__(self):
        self.notify_id = 0

    def serialize(self, stream: bitstream.WriteStream) -> None:
        stream.write(bitstream.c_uint32(self.notify_id))

    @classmethod
    def deserialize(cls, stream: bitstream.ReadStream) -> bitstream.Serializable:
        packet = GeneralNotifyPacket()
        packet.notify_id = stream.read(bitstream.c_uint32)
        return packet
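# --- Hedged usage sketch (addition, not part of the original module) ---
# Serializing a handshake the same way DisconnectNotifyPacket.send() builds
# its stream, assuming the 8-byte [53-00-00-00] header convention above:
#
#     handshake = bitstream.WriteStream()
#     handshake.write(b"S\x00\x00\x00\x00\x00\x00\x00")
#     handshake.write(HandshakePacket())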
| 35.55 | 81 | 0.697961 | 347 | 2,844 | 5.538905 | 0.262248 | 0.072841 | 0.099896 | 0.076483 | 0.468783 | 0.407908 | 0.291883 | 0.291883 | 0.25026 | 0.25026 | 0 | 0.041998 | 0.204641 | 2,844 | 79 | 82 | 36 | 0.807692 | 0.123769 | 0 | 0.3 | 0 | 0 | 0.015683 | 0.011969 | 0 | 0 | 0.001651 | 0 | 0 | 1 | 0.2 | false | 0 | 0.02 | 0 | 0.34 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f0b154543475c7fee417eb340ebc23a9a65e18b | 1,278 | py | Python | tests/test_rasterize.py | PADAS/django-raster | 68b2d181c70827dffad3c07f4f38d3490872a3eb | [
"BSD-3-Clause"
] | null | null | null | tests/test_rasterize.py | PADAS/django-raster | 68b2d181c70827dffad3c07f4f38d3490872a3eb | [
"BSD-3-Clause"
] | null | null | null | tests/test_rasterize.py | PADAS/django-raster | 68b2d181c70827dffad3c07f4f38d3490872a3eb | [
"BSD-3-Clause"
] | null | null | null | from django.contrib.gis.gdal import GDALRaster, OGRGeometry
from django.test import TestCase

from raster.rasterize import rasterize


class RasterizeGeometryTests(TestCase):

    def setUp(self):
        self.rast = GDALRaster({
            'datatype': 1,
            'driver': 'MEM',
            'width': 2,
            'height': 2,
            'nr_of_bands': 1,
            'srid': 3086,
            'origin': (500000, 400000),
            'scale': (100, -100),
            'skew': (0, 0),
            'bands': [{
                'nodata_value': 10,
                'data': range(4)
            }],
        })

    def test_covering_geom_rasterization(self):
        geom = OGRGeometry.from_bbox(self.rast.extent)
        geom.srid = 3086
        result = rasterize(geom, self.rast)
        self.assertEqual(result.bands[0].data().ravel().tolist(), [1, 1, 1, 1])
        self.assertEqual(result.geotransform, self.rast.geotransform)
        self.assertEqual(result.srs.wkt, self.rast.srs.wkt)

    def test_half_covering_geom_rasterization(self):
        geom = OGRGeometry.from_bbox((500000.0, 399800.0, 500200.0, 399900.0))
        geom.srid = 3086
        result = rasterize(geom, self.rast)
        self.assertEqual(result.bands[0].data().ravel().tolist(), [0, 0, 1, 1])
| 32.769231 | 79 | 0.571205 | 145 | 1,278 | 4.951724 | 0.406897 | 0.066852 | 0.116992 | 0.08078 | 0.370474 | 0.370474 | 0.370474 | 0.370474 | 0.225627 | 0.225627 | 0 | 0.084615 | 0.28795 | 1,278 | 38 | 80 | 33.631579 | 0.704396 | 0 | 0 | 0.125 | 0 | 0 | 0.061815 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.09375 | false | 0 | 0.09375 | 0 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f0ecc69fcbb900b73340911f3b2c0dbdb93c8a1 | 6,577 | py | Python | otcs.py | neckro/mr-otcs | 5782f3664afb7213729e207881ae855fb60e43a0 | [
"MIT"
] | null | null | null | otcs.py | neckro/mr-otcs | 5782f3664afb7213729e207881ae855fb60e43a0 | [
"MIT"
] | null | null | null | otcs.py | neckro/mr-otcs | 5782f3664afb7213729e207881ae855fb60e43a0 | [
"MIT"
] | null | null | null | import datetime
import errno
import itertools
import os
import subprocess
import sys

###############################################################################
# Configuration.

# Program paths. Use absolute paths.
MEDIA_PLAYER_PATH = "/usr/bin/vlc"
FFPROBE_PATH = "/usr/bin/ffprobe"

# Base path for all video files, including the trailing slash.
# This path will also contain play_index.txt and play_history.txt.
BASE_PATH = "/media/videos/"

# Video files, including subdirectories.
MEDIA_PLAYLIST = ['video1.mp4', 'video2.mp4', 'Series/E01.mp4']

# Number of videos to keep in the history log, saved in play_history.txt in
# BASE_PATH. Set to 0 to disable.
PLAY_HISTORY_LENGTH = 100

# Path for the HTML schedule written by write_schedule().
# See template.html for the file to be read by this script.
# Set to None to disable writing the schedule.
SCHEDULE_PATH = "/var/www/schedule.html"

# Number of upcoming shows to write in the schedule.
# High settings can cause delays in playing the next file.
# Setting this too high can cause a MemoryError.
SCHEDULE_UPCOMING_LENGTH = 10

###############################################################################
# Function definitions.


def get_length(file):
    """Run ffprobe and retrieve the length of file in seconds."""
    result = subprocess.run([FFPROBE_PATH, "-v", "error", "-select_streams", "v:0",
                             "-show_entries", "stream=duration", "-of",
                             "default=noprint_wrappers=1:nokey=1", file],
                            capture_output=True, text=True).stdout
    return result


def write_schedule(file_list, previous_file=None):
    """
    Write an HTML file containing file names and lengths read from a list
    containing video file paths. Optionally, include the most recently played
    file as well.
    """
    # next_time contains the start times of upcoming videos.
    # For the first file in file_list, this is the current system time.
    # Time is retrieved in UTC, to be converted to the user's local time when
    # they load the schedule in their browser.
    next_time = datetime.datetime.utcnow()
    coming_up_next = []

    for filename in file_list:
        # Get the length of the next video in seconds from ffprobe.
        duration = float(get_length(os.path.join(BASE_PATH, filename)))

        # Remove the .mp4 extension from file names and convert backslashes
        # to forward slashes.
        filename = os.path.splitext(filename)[0].replace("\\", "/")

        # Append the start time and stripped filename to the list as a tuple.
        coming_up_next.append((next_time, filename))

        # Add the length of the current video to the current time and use it
        # as the starting time for the next video. str() yields an ISO 8601
        # style string for Day.js on the client side.
        next_time = next_time + datetime.timedelta(seconds=duration)

    # Format the coming_up_next list into a string suitable for assigning as
    # a JavaScript array of objects.
    js_array = "[" + ",".join(["{{time:'{}',name:' {}'}}".format(i, n)
                               for i, n in coming_up_next]) + "]"

    # Generate the HTML contents.
    with open(os.path.join(sys.path[0], "template.html"), "r") as html_template:
        html_contents = html_template.read()
    html_contents = html_contents.format(js_array=js_array)

    with open(SCHEDULE_PATH, "w") as html_file:
        html_file.write(html_contents)


###############################################################################
# Main loop.

# Keep the playlist index stored in play_index.txt. Create it if it does
# not exist.
try:
    with open(os.path.join(BASE_PATH, "play_index.txt"), "r") as index_file:
        play_index = int(index_file.read())
except FileNotFoundError:
    with open(os.path.join(BASE_PATH, "play_index.txt"), "w") as index_file:
        index_file.write("0")
    play_index = 0

# Loop over the playlist indefinitely.
while True:
    if play_index < len(MEDIA_PLAYLIST):
        video_time = datetime.datetime.now()
        video_file = MEDIA_PLAYLIST[play_index]
        video_file_fullpath = os.path.join(BASE_PATH, video_file)

        # Check that video_file exists and raise an exception if it does not.
        if not os.path.isfile(video_file_fullpath):
            raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT),
                                    video_file_fullpath)

        # Write a history of played video files and timestamps, limited to
        # PLAY_HISTORY_LENGTH entries.
        if PLAY_HISTORY_LENGTH > 0:
            # The history file may not exist yet on the first run.
            try:
                with open(os.path.join(BASE_PATH, "play_history.txt"), "r") as play_history:
                    play_history_buffer = play_history.readlines()
            except FileNotFoundError:
                play_history_buffer = []
            with open(os.path.join(BASE_PATH, "play_history.txt"), "w+") as play_history:
                play_history_buffer.append("{},{}\n".format(video_time, video_file))
                play_history.writelines(play_history_buffer[-PLAY_HISTORY_LENGTH:])

        # TODO: Write the schedule in a second thread.
        # If HTML schedule writing is enabled, retrieve the next videos in the
        # list up to SCHEDULE_UPCOMING_LENGTH and pass them to write_schedule.
        if SCHEDULE_PATH is not None:
            # Copy of the media list sliced from the current video to the end.
            media_progress = MEDIA_PLAYLIST[play_index:]

            # Pass the sliced list to write_schedule.
            if len(media_progress) >= SCHEDULE_UPCOMING_LENGTH:
                media_copy = media_progress[:SCHEDULE_UPCOMING_LENGTH + 1]
            # If media_progress is shorter than SCHEDULE_UPCOMING_LENGTH,
            # cycle through the full playlist until the needed length is
            # reached.
            else:
                media_copy = media_progress + list(
                    itertools.islice(itertools.cycle(MEDIA_PLAYLIST),
                                     SCHEDULE_UPCOMING_LENGTH
                                     - len(media_progress) + 1))

            write_schedule(media_copy,
                           previous_file=MEDIA_PLAYLIST[play_index - 1])

        # TODO: Delay playback for several seconds to account for window
        # capture delay.
        print("Now playing: " + video_file)
        result = subprocess.run([MEDIA_PLAYER_PATH, video_file_fullpath, "--play-and-exit"])

        # Increment play_index and write play_index.txt in BASE_PATH.
        play_index = play_index + 1
        with open(os.path.join(BASE_PATH, "play_index.txt"), "w") as index_file:
            index_file.write(str(play_index))
    else:
        # Reset the index at the end of the playlist.
        play_index = 0
        with open(os.path.join(BASE_PATH, "play_index.txt"), "w") as index_file:
            index_file.write("0")
| 39.383234 | 103 | 0.636764 | 854 | 6,577 | 4.728337 | 0.289227 | 0.040119 | 0.022288 | 0.027737 | 0.145864 | 0.09262 | 0.077761 | 0.077761 | 0.077761 | 0.077266 | 0 | 0.006401 | 0.239927 | 6,577 | 166 | 104 | 39.620482 | 0.80136 | 0.35168 | 0 | 0.121622 | 0 | 0 | 0.092335 | 0.014166 | 0.013514 | 0 | 0 | 0.006024 | 0 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.121622 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f0f210fb96be3418eb569e273c87bbeadbc980a | 4,674 | py | Python | src/lambda_functions/lex_v2_cfn_cr/lex_v2_cfn_cr/slot.py | mohsenari/aws-lex-v2-cfn-cr | 619b223d5b6fb4561ca3adb4c278ad03cc978cf0 | [
"Apache-2.0"
] | 11 | 2021-06-24T23:23:16.000Z | 2021-09-07T16:38:01.000Z | src/lambda_functions/lex_v2_cfn_cr/lex_v2_cfn_cr/slot.py | mohsenari/aws-lex-v2-cfn-cr | 619b223d5b6fb4561ca3adb4c278ad03cc978cf0 | [
"Apache-2.0"
] | 3 | 2021-09-23T00:07:36.000Z | 2021-11-24T00:29:33.000Z | src/lambda_functions/lex_v2_cfn_cr/lex_v2_cfn_cr/slot.py | mohsenari/aws-lex-v2-cfn-cr | 619b223d5b6fb4561ca3adb4c278ad03cc978cf0 | [
"Apache-2.0"
] | 4 | 2021-07-11T02:46:36.000Z | 2022-01-13T22:47:39.000Z | #!/usr/bin/env python3.8
################################################################################
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance with the License. #
# A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the 'license' file accompanying this file. This file is distributed #
# on an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express #
# or implied. See the License for the specific language governing #
# permissions and limitations under the License. #
################################################################################
"""Amazon Lex CloudFormation Custom Resource Slot Manager"""
import logging
from typing import Any, Dict, Optional, TYPE_CHECKING
import boto3
from .shared.api import get_api_parameters
if TYPE_CHECKING:
    from mypy_boto3_lexv2_models import LexModelsV2Client
    from mypy_boto3_lexv2_models.type_defs import (
        CreateSlotResponseTypeDef,
        UpdateSlotResponseTypeDef,
    )
else:
    LexModelsV2Client = object
    CreateSlotResponseTypeDef = object
    UpdateSlotResponseTypeDef = object


class Slot:
    """Lex V2 CloudFormation Custom Resource Slot"""

    def __init__(
        self,
        client: Optional[LexModelsV2Client] = None,
        logger: Optional[logging.Logger] = None,
    ):
        self._client = client or boto3.client("lexv2-models")
        self._logger = logger or logging.getLogger(__name__)

    def get_slot_id(
        self,
        bot_id: str,
        bot_version: str,
        intent_id: str,
        locale_id: str,
        slot_name: str,
    ) -> str:
        """Get Slot ID from Name"""
        list_slots_args: Dict[str, Any] = dict(
            botId=bot_id,
            botVersion=bot_version,
            localeId=locale_id,
            intentId=intent_id,
            filters=[
                {
                    "name": "SlotName",
                    "values": [slot_name],
                    "operator": "EQ",
                }
            ],
            sortBy={
                "attribute": "SlotName",
                "order": "Ascending",
            },
        )

        # Page through list_slots until a match is found or pages run out.
        while True:
            response = self._client.list_slots(**list_slots_args)
            self._logger.debug(response)

            slot_summaries = response["slotSummaries"]
            slot_id = slot_summaries[0]["slotId"] if slot_summaries else ""
            if slot_id:
                break

            next_token = response.get("nextToken")
            if next_token:
                list_slots_args["nextToken"] = next_token
            else:
                break

        if not slot_id:
            self._logger.warning("could not find slot named: %s", slot_name)

        return slot_id

    def create_slot(self, input_parameters: Dict[str, Any]) -> CreateSlotResponseTypeDef:
        """Create Slot"""
        operation = "CreateSlot"
        operation_parameters = get_api_parameters(
            operation=operation,
            input_parameters=input_parameters,
            client=self._client,
            logger=self._logger,
        )
        response = self._client.create_slot(**operation_parameters)
        self._logger.debug(response)

        return response

    def delete_slot(self, input_parameters: Dict[str, Any]) -> None:
        """Delete Slot"""
        operation = "DeleteSlot"
        operation_parameters = get_api_parameters(
            operation=operation,
            input_parameters=input_parameters,
            client=self._client,
            logger=self._logger,
        )
        self._client.delete_slot(**operation_parameters)

    def update_slot(self, input_parameters: Dict[str, Any]) -> UpdateSlotResponseTypeDef:
        """Update Slot"""
        operation = "UpdateSlot"
        operation_parameters = get_api_parameters(
            operation=operation,
            input_parameters=input_parameters,
            client=self._client,
            logger=self._logger,
        )
        response = self._client.update_slot(**operation_parameters)
        self._logger.debug(response)

        return response
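# --- Hedged usage sketch (addition, not part of the original module) ---
# Looking up a slot by name; every identifier below is a placeholder:
#
#     slot = Slot()
#     slot_id = slot.get_slot_id(
#         bot_id="BOTID12345",
#         bot_version="DRAFT",
#         intent_id="INTENTID12",
#         locale_id="en_US",
#         slot_name="my_slot",
#     )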
| 34.367647 | 89 | 0.537441 | 431 | 4,674 | 5.614849 | 0.345708 | 0.03719 | 0.026446 | 0.028512 | 0.267769 | 0.247934 | 0.247934 | 0.207025 | 0.207025 | 0.157438 | 0 | 0.005921 | 0.349594 | 4,674 | 135 | 90 | 34.622222 | 0.790132 | 0.232135 | 0 | 0.27957 | 0 | 0 | 0.04994 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053763 | false | 0 | 0.064516 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f170c370bacf2243ee7c2b4a5545bf2f5612009 | 7,102 | py | Python | okcupyd_testing/util.py | sphericalcow/okcupyd | ae0a99d248c515eea9a6d21a9c89f51e299b33f5 | [
"MIT"
] | 89 | 2015-01-09T19:58:07.000Z | 2022-03-03T21:56:50.000Z | okcupyd_testing/util.py | sphericalcow/okcupyd | ae0a99d248c515eea9a6d21a9c89f51e299b33f5 | [
"MIT"
] | 51 | 2015-01-18T23:09:35.000Z | 2017-04-24T03:16:03.000Z | okcupyd_testing/util.py | sphericalcow/okcupyd | ae0a99d248c515eea9a6d21a9c89f51e299b33f5 | [
"MIT"
] | 24 | 2015-01-16T17:43:21.000Z | 2020-09-18T12:19:15.000Z | import copy
import inspect
import logging
import os
import zlib

from six.moves import urllib
import simplejson
import vcr
import wrapt

from okcupyd import settings
from okcupyd import util

log = logging.getLogger(__name__)

TESTING_USERNAME = 'username'
TESTING_PASSWORD = 'password'
WBITS = 16 + zlib.MAX_WBITS
SHOULD_SCRUB = False
REPLACEMENTS = []
REMOVE_OLD_CASSETTES = False


@wrapt.decorator
def check_should_scrub(function, instance, args, kwargs):
    if SHOULD_SCRUB:
        return function(*args)
    else:
        return args[0]  # The request or response


@util.curry
def remove_headers(request, headers_to_remove=()):
    headers = copy.copy(request.headers)
    headers_to_remove = [h.lower() for h in headers_to_remove]
    keys = [k for k in headers if k.lower() in headers_to_remove]
    if keys:
        for k in keys:
            headers.pop(k)
        request.headers = headers
    return request


def scrub_request_body(request):
    if urllib.parse.urlsplit(request.uri).path == '/login':
        request.body = scrub_query_string(request.body)
    request.uri = scrub_uri(request.uri)
    return request


def scrub_uri(uri):
    replaced = util.replace_all_case_insensitive(uri, settings.USERNAME,
                                                 TESTING_USERNAME)
    return util.replace_all_case_insensitive(replaced, settings.PASSWORD,
                                             TESTING_PASSWORD)


def scrub_query_string(query_string):
    request_dict = urllib.parse.parse_qs(query_string)
    if 'password' not in request_dict:
        return query_string
    for key in request_dict:
        request_dict[key] = request_dict[key][0]
    request_dict['username'] = TESTING_USERNAME
    request_dict['password'] = TESTING_PASSWORD
    return urllib.parse.urlencode(request_dict)


def gzip_string(incoming):
    if isinstance(incoming, str) and bytes is not str:
        incoming = bytes(incoming, 'utf8')
    else:
        incoming = incoming.encode('utf8')
    compress_object = zlib.compressobj(6, zlib.DEFLATED, WBITS)
    start = compress_object.compress(incoming)
    end = compress_object.flush()
    if not isinstance(start, str):
        return start + end
    return ''.join([start, end])


def scrub_response_headers(response):
    for item in ('location', 'Location'):
        if item in response['headers']:
            response['headers'][item] = [scrub_uri(uri)
                                         for uri in response['headers'][item]]
    return response


def replace_json_fields(body):
    try:
        response_dict = simplejson.loads(body)
    except Exception:
        return body
    if 'screenname' not in response_dict:
        return body
    if response_dict['screenname'] is not None:
        response_dict['screenname'] = TESTING_USERNAME
    response_dict['userid'] = 1
    response_dict['thumbnail'] = ''
    return simplejson.dumps(response_dict)


def scrub_response(response):
    if not SHOULD_SCRUB:
        return response
    response = response.copy()
    response = scrub_response_headers(response)
    body = response['body']['string']
    try:
        body = zlib.decompress(response['body']['string'], WBITS).decode('utf8')
    except Exception:
        should_recompress = False
    else:
        should_recompress = True
    body = replace_json_fields(body)
    body = util.replace_all_case_insensitive(body, settings.USERNAME,
                                             TESTING_USERNAME)
    if should_recompress:
        body = gzip_string(body)
    response['body']['string'] = body
    return response


before_record = check_should_scrub(util.compose(
    scrub_request_body, remove_headers(headers_to_remove=(
        'Set-Cookie',
        'Cookie'
    ))
))


def _maybe_decode(maybe_bytes):
    try:
        return maybe_bytes.decode('utf-8')
    except (AttributeError, UnicodeDecodeError):
        return maybe_bytes


def _match_search_query(left, right):
    left_filter = set([value for param_name, value in left
                       if 'filter' in _maybe_decode(param_name)])
    right_filter = set([value for param_name, value in right
                        if 'filter' in _maybe_decode(param_name)])
    left_rest = set([(param_name, value) for param_name, value in left
                     if 'filter' not in _maybe_decode(param_name)])
    right_rest = set([(param_name, value) for param_name, value in right
                      if 'filter' not in _maybe_decode(param_name)])
    try:
        log.info(simplejson.dumps(
            {
                'filter_differences': list(
                    left_filter.symmetric_difference(right_filter)
                ),
                'rest_differences': list(
                    left_rest.symmetric_difference(right_rest)
                ),
            }, encoding='utf-8'
        ))
    except Exception as e:
        log.warning(e)
    return left_filter == right_filter and left_rest == right_rest


def match_search_query(left, right):
    return _match_search_query(left.query, right.query)


def body_as_query_string(left, right):
    if left.path == right.path and 'ajaxuploader' in left.path:
        # We can't seem to handle matching photo uploads, likely because of
        # requests internals.
        return True
    try:
        left_qs_items = list(urllib.parse.parse_qs(left.body).items())
        right_qs_items = list(urllib.parse.parse_qs(right.body).items())
    except Exception as exc:
        log.debug(exc)
        return left.body == right.body
    else:
        left_qs_items = [(k, tuple(v)) for k, v in left_qs_items]
        right_qs_items = [(k, tuple(v)) for k, v in right_qs_items]
        return _match_search_query(left_qs_items, right_qs_items)


cassette_library_directory = os.path.join(os.path.dirname(os.path.dirname(__file__)),
                                          'tests', 'vcr_cassettes')
okcupyd_vcr = vcr.VCR(match_on=('path', 'method', 'match_search_query',
                                'body_as_query_string'),
                      before_record=(before_record,),
                      before_record_response=scrub_response,
                      cassette_library_dir=cassette_library_directory,
                      path_transformer=vcr.VCR.ensure_suffix('.yaml'))
okcupyd_vcr.register_matcher('body_as_query_string', body_as_query_string)
okcupyd_vcr.register_matcher('match_search_query', match_search_query)
match_on_no_body = list(filter(lambda x: 'body' not in x, okcupyd_vcr.match_on))


@wrapt.adapter_factory
def add_request_to_signature(function):
    argspec = inspect.getargspec(function)
    return inspect.ArgSpec(argspec.args + ['request'], argspec.varargs, argspec.keywords, argspec.defaults)


@wrapt.decorator(adapter=add_request_to_signature)
def skip_if_live(function, instance, args, kwargs):
    request = kwargs.pop('request')
    if request.config.getoption('skip_vcrpy'):
        log.debug("Skipping {0} because vcrpy is being skipped.".format(
            function.__name__
        ))
    else:
        return function(*args, **kwargs)


use_cassette = okcupyd_vcr.use_cassette
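# --- Hedged usage sketch (addition, not part of the original module) ---
# okcupyd_vcr.use_cassette behaves like vcrpy's use_cassette, so it can be
# applied directly to a test; the test name below is a placeholder:
#
#     @use_cassette
#     def test_login():
#         ...  # HTTP traffic is recorded/replayed from tests/vcr_cassettes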
| 31.286344 | 107 | 0.656998 | 870 | 7,102 | 5.106897 | 0.222989 | 0.020257 | 0.025208 | 0.018006 | 0.141796 | 0.111411 | 0.087779 | 0.064821 | 0.049516 | 0.018006 | 0 | 0.002248 | 0.248381 | 7,102 | 226 | 108 | 31.424779 | 0.830086 | 0.015207 | 0 | 0.177143 | 0 | 0 | 0.063949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0.028571 | 0.062857 | 0.005714 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f1d431916ff72728e735e8e734dd11974fd8bb1 | 1,474 | py | Python | utils/checkpoint.py | Jackson-Kang/VQVC-Pytorch | d2267b5c52253b6ae11a5767963a65320ae335c2 | [
"MIT"
] | 13 | 2021-02-11T17:48:40.000Z | 2022-02-08T06:37:12.000Z | utils/checkpoint.py | Jackson-Kang/VQVC-Pytorch | d2267b5c52253b6ae11a5767963a65320ae335c2 | [
"MIT"
] | 1 | 2022-01-17T17:07:22.000Z | 2022-01-18T06:51:21.000Z | utils/checkpoint.py | Jackson-Kang/VQVC-Pytorch | d2267b5c52253b6ae11a5767963a65320ae335c2 | [
"MIT"
] | 3 | 2021-03-10T08:40:00.000Z | 2022-01-17T17:08:48.000Z | import torch
import os, glob

from .path import create_dir, get_path


def load_checkpoint(checkpoint_path, model, optimizer=None, scheduler=None):
    if optimizer is not None:
        if not os.path.exists(checkpoint_path):
            print("[WARNING] No checkpoint exists. Start from scratch.")
            global_step = 0
        else:
            print("[WARNING] Already exists. Restart to train model.")
            last_model_path = sorted(glob.glob(get_path(checkpoint_path, '*.pth.tar')))[-1]
            state = torch.load(last_model_path)

            model.load_state_dict(state['model'])
            global_step = state['global_step']
            optimizer.load_state_dict(state['optimizer'])
            scheduler.load_state_dict(state['scheduler'])
    else:
        last_model_path = sorted(glob.glob(get_path(checkpoint_path, '*.pth.tar')))[-1]
        state = torch.load(last_model_path)
        model.load_state_dict(state['model'])
        global_step = 0

        print("[WARNING] Model: {} has been loaded.".format(
            last_model_path.split("/")[-1].replace(".pth.tar", "")))

    return global_step


def save_checkpoint(checkpoint_path, global_step, model, optimizer, scheduler):
    create_dir("/".join(checkpoint_path.split("/")[:-1]))
    checkpoint_path = create_dir(checkpoint_path)

    cur_checkpoint_name = "model-{:03d}k.pth.tar".format(global_step // 1000)
    state = {
        'global_step': global_step,
        'model': model.state_dict(),
        'optimizer': optimizer.state_dict(),
        'scheduler': scheduler.state_dict()
    }
    torch.save(state, get_path(checkpoint_path, cur_checkpoint_name))
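# --- Hedged usage sketch (addition, not part of the original module) ---
# Typical training-loop wiring; the model, optimizer, scheduler and the
# checkpoint directory below are placeholders:
#
#     global_step = load_checkpoint("ckpt/vqvc", model, optimizer, scheduler)
#     for batch in loader:
#         ...
#         global_step += 1
#         if global_step % 10000 == 0:
#             save_checkpoint("ckpt/vqvc", global_step, model, optimizer, scheduler)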
| 30.708333 | 111 | 0.729308 | 205 | 1,474 | 4.990244 | 0.263415 | 0.123167 | 0.063539 | 0.070381 | 0.29521 | 0.234604 | 0.234604 | 0.234604 | 0.234604 | 0.234604 | 0 | 0.009252 | 0.120081 | 1,474 | 47 | 112 | 31.361702 | 0.779491 | 0 | 0 | 0.294118 | 0 | 0 | 0.175832 | 0.014257 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.176471 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f21a46afa8fc39f3ecf728165185537c6f76296 | 1,712 | py | Python | lib/tsetmc_api/core/symbol.py | mahs4d/tsetmc-api | 4d7252b9e9aeda870e0340d7641aa244427a4ab1 | [
"MIT"
] | 18 | 2020-06-01T06:12:41.000Z | 2021-05-08T07:57:47.000Z | lib/tsetmc_api/core/symbol.py | mahs4d/tsetmc-api | 4d7252b9e9aeda870e0340d7641aa244427a4ab1 | [
"MIT"
] | 3 | 2020-08-07T11:25:53.000Z | 2021-04-09T12:37:00.000Z | lib/tsetmc_api/core/symbol.py | mahs4d/tsetmc-api | 4d7252b9e9aeda870e0340d7641aa244427a4ab1 | [
"MIT"
] | 6 | 2021-04-09T12:37:40.000Z | 2021-11-08T20:50:16.000Z | from datetime import date
import requests
from bs4 import BeautifulSoup


def get_symbol_details(symbol_id):
    raw = requests.get(f'http://www.tsetmc.com/Loader.aspx?Partree=15131M&i={symbol_id}',
                       timeout=20, verify=False).text
    ret = {}
    trs = BeautifulSoup(raw, 'lxml').find_all('tr')
    for tr in trs:
        tds = tr.find_all('td')
        ret[tds[0].contents[0]] = str(tds[1].contents[0])
    return ret


def get_daily_history(symbol_id):
    daily_content = requests.get(
        f'http://members.tsetmc.com/tsev2/data/InstTradeHistory.aspx?i={symbol_id}&Top=99999&A=0',
        timeout=20, verify=False).text
    raw_ticks = daily_content.split(';')

    ticks = []
    for raw_tick in raw_ticks:
        if raw_tick == '':
            continue

        tick_data = raw_tick.split('@')
        date_raw = tick_data[0]
        high_price = tick_data[1]
        low_price = tick_data[2]
        close_price = tick_data[3]
        last_price = tick_data[4]
        first_price = tick_data[5]
        yesterday_price = tick_data[6]
        value = tick_data[7]
        volume = tick_data[8]
        count = tick_data[9]

        ticks.append({
            'date': date(year=int(date_raw[:4]), month=int(date_raw[4:6]), day=int(date_raw[6:])),
            'first_price': int(first_price[:-3]),
            'high_price': int(high_price[:-3]),
            'low_price': int(low_price[:-3]),
            'close_price': int(close_price[:-3]),
            'last_price': int(last_price[:-3]),
            'yesterday_price': int(yesterday_price[:-3]),
            'value': int(float(value)),
            'volume': int(float(volume)),
            'count': int(float(count)),
        })

    return ticks
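# --- Hedged usage sketch (addition, not part of the original module) ---
# Fetching the trade history for a symbol; the id below is a placeholder,
# not a real TSETMC instrument id:
#
#     ticks = get_daily_history('1234567890123456')
#     if ticks:
#         print(ticks[0]['date'], ticks[0]['close_price'])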
| 30.035088 | 120 | 0.586449 | 234 | 1,712 | 4.076923 | 0.350427 | 0.092243 | 0.081761 | 0.033543 | 0.050314 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032385 | 0.260514 | 1,712 | 56 | 121 | 30.571429 | 0.721169 | 0 | 0 | 0 | 0 | 0.022727 | 0.142523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.068182 | 0 | 0.159091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f231583cf8cbaed36b765c3c82ea90dabd12d76 | 975 | py | Python | shamester_api/handlers/new_website_handler.py | heynemann/shamester | b098c922be941037410d3c7b3214a9aecde67495 | [
"MIT"
] | 1 | 2015-01-25T13:13:23.000Z | 2015-01-25T13:13:23.000Z | shamester_api/handlers/new_website_handler.py | heynemann/shamester | b098c922be941037410d3c7b3214a9aecde67495 | [
"MIT"
] | null | null | null | shamester_api/handlers/new_website_handler.py | heynemann/shamester | b098c922be941037410d3c7b3214a9aecde67495 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-

from tornado.web import RequestHandler, asynchronous
import tornado.gen
import motor

from ujson import loads, dumps

from shamester_api.models import Website


class NewWebsiteHandler(RequestHandler):

    @property
    def websites(self):
        return self.application.mongo.websites

    @asynchronous
    @tornado.gen.coroutine
    def post(self):
        website = loads(self.request.body)

        if website.get('url', None) is None:
            self.write(dumps({
                "success": False,
                "reason": "Url is required!"
            }))
            # Without an early return the handler would crash below on the
            # missing 'url' key.
            self.finish()
            return

        website = Website(url=website['url'])
        website_data = website.to_dict()

        new_website = yield motor.Op(self.websites.insert, website_data)

        self.application.redis.publish("new-website", website_data)

        self.write(dumps({
            "success": True,
            "websiteId": str(new_website)
        }))
        self.finish()
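# --- Hedged usage sketch (addition, not part of the original module) ---
# Posting to this handler from the command line, assuming it is routed at
# /websites on the default port (the route is not shown in this file):
#
#     curl -X POST http://localhost:8888/websites -d '{"url": "http://example.com"}'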
| 23.780488 | 72 | 0.61641 | 107 | 975 | 5.551402 | 0.53271 | 0.055556 | 0.047138 | 0.070707 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001403 | 0.268718 | 975 | 40 | 73 | 24.375 | 0.831697 | 0.038974 | 0 | 0.148148 | 0 | 0 | 0.06631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.185185 | 0.037037 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f246adac761633444e7917b2e83510f4f23aa8d | 8,368 | py | Python | TM271A-ctrl.py | wb4bxo/TM-271A-ctrl | 45f9f931553b08f4bc1b30360c2c946b07b54074 | [
"Unlicense"
] | null | null | null | TM271A-ctrl.py | wb4bxo/TM-271A-ctrl | 45f9f931553b08f4bc1b30360c2c946b07b54074 | [
"Unlicense"
] | null | null | null | TM271A-ctrl.py | wb4bxo/TM-271A-ctrl | 45f9f931553b08f4bc1b30360c2c946b07b54074 | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python3
import sys
import os
import time
import serial

# This is a quick and dirty control program for the Kenwood TM-271A and TM-281A
# transceivers to allow remote-base-like operations for use with Allstar or
# other digital modes. It is primarily targeted at the Raspberry Pi, but being
# in Python allows it to be built and run on multiple platforms including
# Windows and Linux.
#
# This targets Python 3 and you must install the pyserial library by
# issuing "pip3 install pyserial".

### Some global variables, mostly for configuration and operation modes

usage = """
Arguments passed in can be:
    ser xxx
        Where xxx is the name for the serial port appropriate for the OS.
        For example "ser COM3" for Windows or "ser /dev/tty0" for linux.
        NOTE - must be the first argument if used. Environment variable
        "TM271Aser" or "TM281Aser" is read if it exists as the default
        port to use.
    mem xxx
        Where xxx is up to a 3 digit memory number
    vfo xxxxxxxxxx{-|+}
        Where xxxxxxxxxx is the 10 digit frequency in Hz.
        If the leading digit is not "0", a zero is prepended as the GHz value.
        If 10 digits are not supplied, "0"s are appended to the end to make
        10 digits. Thus you can enter 0147330000 or 14733 for the same thing.
        The optional + or - sets the offset.
        This command clears any tone setting, set the desired tone afterwards.
    tone {x}xx.x
        Where {x}xx.x is a 2 or 3 digit whole number followed by a decimal.
        For example tone 141.3
        Note these must match exactly the standard tones
    ctcss {x}xx.x
        Where {x}xx.x is a 2 or 3 digit whole number followed by a decimal.
        For example ctcss 141.3
        Note these must match exactly the standard tones
    pow [h|l]
        Set transmit power to high or low (h or l)
    freq
        Read the frequency from the display, suitable for use with TTS.

Multiple arguments can be passed like "mem 33 freq" to change to a memory
and read back what the frequency is. Or "vfo 147330+ tone 100.0".
"""

serialName = os.getenv("TM271Aser")
if serialName is None:
    serialName = os.getenv("TM281Aser")
if serialName is None:
    serialName = "/dev/ttyUSB0"

verbose = 0
radioID = ""

CTCSS_Tones = {  # dictionary mapping tone frequency to control number for the radio
    "67.0": "00",
    "69.3": "01",
    "71.9": "02",
    "74.4": "03",
    "77.0": "04",
    "79.7": "05",
    "82.5": "06",
    "85.4": "07",
    "88.5": "08",
    "91.5": "09",
    "94.8": "10",
    "97.4": "11",
    "100.0": "12",
    "103.5": "13",
    "107.2": "14",
    "110.9": "15",
    "114.8": "16",
    "118.8": "17",
    "123.0": "18",
    "127.3": "19",
    "131.8": "20",
    "136.5": "21",
    "141.3": "22",
    "146.2": "23",
    "151.4": "24",
    "156.7": "25",
    "162.2": "26",
    "167.9": "27",
    "173.8": "28",
    "179.9": "29",
    "186.2": "30",
    "192.8": "31",
    "203.5": "32",
    "206.5": "33",
    "210.7": "34",
    "218.1": "35",
    "225.7": "36",
    "229.1": "37",
    "233.6": "38",
    "241.8": "39",
    "250.3": "40",
    "254.1": "41"
}

### Some functions we'll use


# Send a command and check for the same thing to echo, trying to resync if needed.
def sendAndWait(data):
    cnt = 50
    while 1:
        if cnt == 0:
            return "ERR"
        cnt -= 1
        ser.read(1000)
        ser.write((data + "\r").encode())
        rtn = ser.readline().decode()
        if rtn[0:2] == data[0:2]:
            break
        # Sometimes the radio gets out of sync and will return ?, E or the tail of something else...
        # It has not taken the command if it doesn't echo it back.
        if verbose >= 2:
            print("Retrying - Sent: " + data + " Got: " + rtn)
        # time.sleep(0.25)
        ser.write(("\r").encode())
        ser.read(1000)  # force timeout to flush buffers
        ser.read(1000)  # force timeout to flush buffers
    if verbose >= 2:
        print(rtn)
    return rtn


# Select a memory channel. Should be 3 digits but will fix it up if not.
def memorySelect(mem):
    data = "VM 1"
    sendAndWait(data)
    if len(mem) > 3:  # sanity check in case more digits are passed in than the radio can handle
        mem = mem[-3:]
    while len(mem) < 3:  # radio requires 3 digit memory numbers
        mem = "0" + mem
    data = "MR " + mem
    sendAndWait(data)
    return


# Select and set the vfo frequency passed in as a string.
# freq should be 10 digits as Hz, as in 0147330000
# An appended + or - is used to signify offset
# VF format: (spaces only to align with description, omit when sending to radio)
#    3          14 16 18 20 22 24 26  29  32  36       45 47
# VF 0147330000, 0, 0, 0, 1, 0, 0, 13, 13,056,00600000,0 ,0
#    freq,step,shift,reverse,Tone,CTCSS,DCS,ENC,DEC,DCS,Offset ,Narrow,BeatShift
def vfoSelect(freq):
    data = "VM 0"
    sendAndWait(data)
    current = sendAndWait("VF")
    if current[-1] == "\r":
        current = current[0:-1]
    if freq[-1] == "-":
        shift = "2"
        freq = freq[0:-1]
    elif freq[-1] == "+":
        shift = "1"
        freq = freq[0:-1]
    else:
        shift = "0"
    if freq[0] != "0":
        freq = "0" + freq
    if len(freq) > 10:
        freq = freq[0:10]
    while len(freq) < 10:
        freq = freq + "0"
    data = current[0:3] + freq + ",0," + shift + current[17:20] + "0,0,0" + current[25:]
    sendAndWait(data)
    return


# Set the tone parameters for the current VFO setting. Reads what is in the radio,
# makes the changes, then writes it back.
# VF format: (spaces only to align with description, omit when sending to radio)
#    3          14 16 18 20 22 24 26  29  32  36       45 47
# VF 0147330000, 0, 0, 0, 1, 0, 0, 13, 13,056,00600000,0 ,0
#    freq,step,shift,reverse,Tone,CTCSS,DCS,ENC,DEC,DCS,Offset ,Narrow,BeatShift
def vfoTone(toneFreq, tx, rx):
    if rx == 1:  # there can only be one
        tx = 0
    current = sendAndWait("VF")
    if current[-1] == "\r":
        current = current[0:-1]
    if toneFreq == "0":  # tone of zero to turn off tone
        tx = 0
        rx = 0
        theToneNumber = "00"
    else:
        theToneNumber = CTCSS_Tones[toneFreq]
    if verbose >= 2:
        print("Tone set to: " + theToneNumber)
    data = current[0:20] + str(tx) + "," + str(rx) + ",0," + theToneNumber + "," + theToneNumber + current[31:]
    if verbose >= 2:
        print("Setting: " + data)
    sendAndWait(data)
    return


def powerSelect(pow):
    pow = pow.lower()[0:1]
    if pow == "h":
        sendAndWait("PC 0")
    elif pow == "l":
        sendAndWait("PC 2")
    return


# Read the radio frequency
def getFreq():
    rtn = sendAndWait("FQ")
    # rtn will be "FQ 0147330000,0"
    mhz = rtn[4:7]
    khz = rtn[7:13]
    print(mhz + "." + khz)


# Initialize the serial port; the result is assigned to global variable ser
def serialInit(serPort):
    ser = serial.Serial(
        port=serPort,  # Replace ttyS0 with ttyAM0 for Pi1, Pi2, Pi0
        baudrate=9600,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        bytesize=serial.EIGHTBITS,
        rtscts=False,
        timeout=0.100
    )
    time.sleep(0.5)  # mostly needed on Windows to allow the port to settle in the background
    return ser


#### Start of execution

i = 1
ser = None

if (len(sys.argv) > i) and ((sys.argv[i].lower())[0:2] == "-v"):
    # verbose must be first
    verbose = len(sys.argv[i]) - 1
    i += 1
    print("Verbose: " + str(verbose))

try:
    # serial init must happen first or second
    if (len(sys.argv) > i) and (sys.argv[i].lower() == "ser"):
        serialName = sys.argv[i + 1]
        i += 2
    ser = serialInit(serialName)
    radioID = sendAndWait("ID")
except Exception:
    print("Could not open: " + serialName)
    sys.exit(1)

while i < len(sys.argv):
    if sys.argv[i].lower() == "mem":
        memorySelect(sys.argv[i + 1])
        i += 2
    elif sys.argv[i].lower() == "vfo":
        vfoSelect(sys.argv[i + 1])
        i += 2
    elif sys.argv[i].lower() == "tone":
        vfoTone(sys.argv[i + 1], 1, 0)
        i += 2
    elif sys.argv[i].lower() == "ctcss":
        vfoTone(sys.argv[i + 1], 0, 1)
        i += 2
    elif sys.argv[i].lower()[0:3] == "pow":
        powerSelect(sys.argv[i + 1])
        i += 2
    elif sys.argv[i].lower()[0:4] == "freq":
        getFreq()
        i += 1
    elif sys.argv[i].lower() == "help":
        print(usage)
        break
    else:
        print("Error input: " + sys.argv[i])
        break
# while

if ser is not None:
    ser.close()
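# --- Example invocations (addition; the serial port name is an assumption) ---
#     python3 TM271A-ctrl.py ser /dev/ttyUSB0 vfo 147330+ tone 100.0
#     python3 TM271A-ctrl.py mem 33 freq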
| 30.652015 | 112 | 0.58162 | 1,296 | 8,368 | 3.752315 | 0.320988 | 0.028789 | 0.031256 | 0.024059 | 0.229899 | 0.206046 | 0.196381 | 0.192474 | 0.177257 | 0.172733 | 0 | 0.097666 | 0.278083 | 8,368 | 272 | 113 | 30.764706 | 0.707333 | 0.275215 | 0 | 0.214286 | 0 | 0 | 0.31249 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0.008929 | 0.017857 | 0 | 0.080357 | 0.040179 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f247daa65827948d6dd176860fe6b66dc1abfcd | 2,347 | py | Python | snake/games/snake_gen.py | TeamSerpentine/retro-baselines | 9b2c725604496aca9c382a53f456d31fdbcaa5b1 | [
"BSD-3-Clause"
] | 2 | 2019-12-09T08:41:13.000Z | 2020-10-22T02:29:22.000Z | snake/games/snake_gen.py | TeamSerpentine/retro-baselines | 9b2c725604496aca9c382a53f456d31fdbcaa5b1 | [
"BSD-3-Clause"
] | null | null | null | snake/games/snake_gen.py | TeamSerpentine/retro-baselines | 9b2c725604496aca9c382a53f456d31fdbcaa5b1 | [
"BSD-3-Clause"
] | null | null | null |
import itertools
import numpy as np
from snake.objects.utils import Point
from snake.games.base_game import SnakeGame
from snake.boards.classic import Board
from snake.displays.single_image import SingleImage
class Snake(SnakeGame):
    """
    Classic snake game, which outputs a numpy ndarray of size (24,):
    8 scan directions for each of the 3 non-ground object types.
    """

    def __init__(self):
        board = Board()
        render = SingleImage(board.width, board.height)
        self._image = np.zeros((board.width, board.height, 3), dtype=np.uint8)
        super().__init__(board, render)

    def obs(self):
        """
        Generates the output array.

        The output will be a (24,) numpy array, with 3 times 8 directions:
        wall distance, snake distance and food distance for each of
        ["UP", "DOWN", "LEFT", "LEFT UP", "LEFT DOWN", "RIGHT", "RIGHT UP", "RIGHT DOWN"]
        """
        object_types = [v for k, v in self.board.object_types.items() if k != "ground"]
        obs_directions = [x for x in itertools.product([0, 1, -1], repeat=2)][1:]
        obs_out = np.zeros((len(object_types), len(obs_directions)), dtype=np.int)

        snake = self.board.objects['snake'][0]
        for idx_direction, direction in enumerate(obs_directions):
            scan_direction = Point(*direction)
            object_found = False
            scan_counter = 1
            # Walk outward from the snake's head until any object is hit.
            while not object_found:
                scan_x = snake.position.x + scan_direction.x * scan_counter
                scan_y = snake.position.y + scan_direction.y * scan_counter
                for idx_object, object_type in enumerate(object_types):
                    if isinstance(self.board.board[scan_x, scan_y], object_type):
                        obs_out[idx_object, idx_direction] = scan_counter
                        object_found = True
                scan_counter += 1
        return obs_out.flatten()

    def reward(self):
        """Returns the number of apples eaten during the entire game."""
        snake = self.board.objects['snake'][0]
        return len(snake) - snake.LEN_SNAKE_START

    def render(self):
        obs = self.board._get_obs(attribute="colour")
        for x in range(obs.shape[0]):
            for y in range(obs.shape[1]):
                self._image[x, y] = obs[x, y]
        return self.display.render(self._image)
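# --- Hedged usage sketch (addition, not part of the original module) ---
# Reading one observation from the game defined above; how actions are fed
# in depends on the SnakeGame base class, which is not shown here:
#
#     game = Snake()
#     features = game.obs()    # shape (24,): per-direction object distances
#     score = game.reward()    # apples eaten so far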
| 39.116667 | 93 | 0.607158 | 306 | 2,347 | 4.5 | 0.349673 | 0.039216 | 0.020334 | 0.030501 | 0.039216 | 0.039216 | 0 | 0 | 0 | 0 | 0 | 0.011926 | 0.285471 | 2,347 | 59 | 94 | 39.779661 | 0.809183 | 0.157648 | 0 | 0.051282 | 0 | 0 | 0.011721 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.153846 | 0 | 0.358974 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f272df30ff4e1643b5772b92ea4e650ad48af7e | 473 | py | Python | hw5/close_p_q.py | rocke97/crypto | 89c4e595adf74558e12ceb1762025fd2f0275fec | [
"MIT"
] | null | null | null | hw5/close_p_q.py | rocke97/crypto | 89c4e595adf74558e12ceb1762025fd2f0275fec | [
"MIT"
] | null | null | null | hw5/close_p_q.py | rocke97/crypto | 89c4e595adf74558e12ceb1762025fd2f0275fec | [
"MIT"
] | null | null | null | import math
# Fermat factorization: find s and t with n = t**2 - s**2, so that
# n = (t + s) * (t - s) when p and q are close together.
def find_s_and_t(n):
    s = 0
    t = 0
    for possible_t in range(math.ceil(math.sqrt(n)), n):
        # possible_t works when possible_t**2 - n is a perfect square.
        if math.sqrt(pow(possible_t, 2) - n) == math.ceil(math.sqrt(pow(possible_t, 2) - n)):
            t = possible_t
            s_squared = pow(t, 2) - n
            s = math.floor(math.sqrt(s_squared))
            break
    return (s, t)
result = find_s_and_t(310485170747)
p = result[0] + result[1]
q = result[1] - result[0]
print("p = ", p)
print("q = ", q)
| 24.894737 | 97 | 0.547569 | 81 | 473 | 3.049383 | 0.333333 | 0.145749 | 0.036437 | 0.072874 | 0.178138 | 0.178138 | 0.178138 | 0 | 0 | 0 | 0 | 0.061584 | 0.27907 | 473 | 19 | 98 | 24.894737 | 0.662757 | 0 | 0 | 0 | 0 | 0 | 0.016878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f2a2ed8a1f8b461a2fc0955e0ccc67710df3bfc | 1,051 | py | Python | Week_4/xcoverage.py | actaylor05/learning_python | d8c72fdb7c07bac4176a4418f83d75013db2245a | [
"MIT"
] | null | null | null | Week_4/xcoverage.py | actaylor05/learning_python | d8c72fdb7c07bac4176a4418f83d75013db2245a | [
"MIT"
] | null | null | null | Week_4/xcoverage.py | actaylor05/learning_python | d8c72fdb7c07bac4176a4418f83d75013db2245a | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# Write a program that simulates random BAC coverage over a genome
# Command line arguments include
# Genome size (e.g. 1000)
# X coverage (e.g. 5)
# Use assert() to check parameter bounds
# Report min, max, and histogram of coverage
# Note that your output may vary due to random function
import sys
import random
assert(len(sys.argv) == 3)
size = int(sys.argv[1])
coverage = float(sys.argv[2])  # can use float cause 5.5 is ok
assert(size > 0)
assert(coverage > 0)

bacs = int(size * coverage)
genome = [0] * size
for i in range(bacs):
    r = random.randint(0, size - 1)
    genome[r] += 1

genome.sort()
min = genome[0]
max = genome[-1]
hist = [0] * (max + 1)
for v in genome:
    hist[v] += 1

# output
print(f'Size: {size}')
print(f'X: {coverage}')
print(f'BACs: {bacs}')
print(f'Min: {min}')
print(f'Max: {max}')
print(f'Counts:')
for i in range(len(hist)):
    print(i, hist[i])
"""
Size: 1000
X: 5.0
BACs: 5000
Min: 0
Max: 13
Counts:
0 5
1 39
2 88
3 144
4 175
5 150
6 151
7 116
8 59
9 40
10 20
11 5
12 6
13 2
"""
| 16.169231 | 66 | 0.647954 | 199 | 1,051 | 3.422111 | 0.452261 | 0.052863 | 0.017621 | 0.032305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097735 | 0.201713 | 1,051 | 64 | 67 | 16.421875 | 0.713945 | 0.317793 | 0 | 0 | 0 | 0 | 0.111693 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.269231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f2afa44239c14e6b1bb586457468cf46963a9c2 | 3,352 | py | Python | python/desc/sims_ci_pipe/psf_mag_check.py | jchiang87/sims_ci_pipe | db8f5ba03880c8def4242fc80ab4cfe6e225e72f | [
"BSD-3-Clause"
] | 3 | 2019-12-04T02:47:34.000Z | 2021-07-04T16:25:34.000Z | python/desc/sims_ci_pipe/psf_mag_check.py | jchiang87/sims_ci_pipe | db8f5ba03880c8def4242fc80ab4cfe6e225e72f | [
"BSD-3-Clause"
] | 5 | 2019-12-10T15:54:49.000Z | 2020-07-19T02:25:39.000Z | python/desc/sims_ci_pipe/psf_mag_check.py | jchiang87/sims_ci_pipe | db8f5ba03880c8def4242fc80ab4cfe6e225e72f | [
"BSD-3-Clause"
] | 1 | 2020-07-15T15:41:34.000Z | 2020-07-15T15:41:34.000Z | """
Compute visit-level distributions of psf_mag - calib_mag to check
for biases in photometry.
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import lsst.daf.persistence as dp
from .ellipticity_distributions import get_point_sources
__all__ = ['get_psf_calib_mags', 'psf_mag_check']
def get_psf_calib_mags(butler, visit, sn_min=150):
"""
Compute psf and calib magnitudes.
Parameters
----------
butler: lsst.daf.persistence.Butler
Butler pointing at the data repo with the calexps.
visit: int
Visit number to consider.
sn_min: float [150]
        Minimum signal-to-noise cut on psfFlux/psfFluxErr.
Returns
-------
pandas.DataFrame containing the psf_mag and calib_mag values.
"""
datarefs = butler.subset('src', visit=visit)
psf_mags = []
calib_mags = []
psf_fluxes = []
psf_fluxErrs = []
for dataref in list(datarefs):
try:
src = dataref.get('src')
photoCalib = dataref.get('calexp_photoCalib')
        except Exception:  # skip sensors whose src or photoCalib data are missing
            continue
visit = dataref.dataId['visit']
stars = get_point_sources(src)
psf_mags.extend(photoCalib.instFluxToMagnitude(
stars, 'base_PsfFlux').transpose()[0])
calib_mags.extend(photoCalib.instFluxToMagnitude(
stars, 'base_CircularApertureFlux_12_0').transpose()[0])
psf_fluxes.extend(stars['base_PsfFlux_instFlux'])
psf_fluxErrs.extend(stars['base_PsfFlux_instFluxErr'])
psf_mags = np.array(psf_mags)
calib_mags = np.array(calib_mags)
psf_fluxes = np.array(psf_fluxes)
psf_fluxErrs = np.array(psf_fluxErrs)
psf_flux_sn = psf_fluxes/psf_fluxErrs
    index = np.where((psf_flux_sn == psf_flux_sn) & (psf_flux_sn > sn_min))  # x == x filters out NaNs
return pd.DataFrame(data=dict(psf_mag=psf_mags[index],
calib_mag=calib_mags[index]))
def psf_mag_check(repo, visit, dmag_range=(-0.05, 0.05), sn_min=150):
"""
Plot distribution of delta_mag = psf_mag - calib_mag values, and
return estimate of the delta_mag peak location.
Parameters
----------
    repo: str
        Path to the data repo with the calexps; a Butler is constructed from it.
visit: int
Visit number to consider.
dmag_range: (float, float) [(-0.05, 0.05)]
Magnitude range to use for plotting and median estimation.
sn_min: float [150]
        Minimum signal-to-noise cut on psfFlux/psfFluxErr.
Returns
-------
float: An estimate of the delta_mag peak location.
"""
butler = dp.Butler(repo)
df = get_psf_calib_mags(butler, visit, sn_min=sn_min)
if len(df) == 0:
return None
delta_mag = (df['psf_mag'] - df['calib_mag']).to_numpy()
delta_mag = delta_mag[np.where(delta_mag == delta_mag)]
index = np.where((dmag_range[0] < delta_mag) & (delta_mag < dmag_range[1]))
dmag_median = np.median(delta_mag[index])
plt.hist(delta_mag, range=dmag_range, bins=100, histtype='step')
plt.axvline(0, linestyle=':')
plt.axvline(dmag_median, linestyle='--')
plt.annotate(f'median: {dmag_median*1000:.2f} mmag\n'
f'psfFlux/psfFluxErr > {sn_min}', (0.05, 0.95),
xycoords='axes fraction', verticalalignment='top')
plt.xlabel('psf_mag - calib_mag')
return dmag_median
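# Example usage (illustrative; assumes a repo containing calexp/src products):
#   median_offset = psf_mag_check('/path/to/repo', visit=12345)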
| 34.204082 | 79 | 0.656026 | 452 | 3,352 | 4.641593 | 0.294248 | 0.045758 | 0.017159 | 0.020019 | 0.283603 | 0.283603 | 0.224976 | 0.193518 | 0.163966 | 0.163966 | 0 | 0.018133 | 0.22673 | 3,352 | 97 | 80 | 34.556701 | 0.791281 | 0.28401 | 0 | 0 | 0 | 0 | 0.119363 | 0.042882 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.098039 | 0 | 0.196078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f332ee1aa858c59191df24f355feb7d4151e658 | 3,052 | py | Python | Baseline/NABA/obs_naba.py | sarthak-chakraborty/PARIMA | c6ceb6e17fc3c934603fa843febc42a8b6ee5bb1 | [
"MIT"
] | 13 | 2021-03-06T16:53:33.000Z | 2022-02-04T20:28:13.000Z | Baseline/NABA/obs_naba.py | sarthak-chakraborty/Adaptive-360-video | c6ceb6e17fc3c934603fa843febc42a8b6ee5bb1 | [
"MIT"
] | 6 | 2021-06-02T08:08:09.000Z | 2022-03-12T00:58:26.000Z | Baseline/NABA/obs_naba.py | sarthak-chakraborty/Adaptive-360-video | c6ceb6e17fc3c934603fa843febc42a8b6ee5bb1 | [
"MIT"
] | 3 | 2021-05-26T03:32:04.000Z | 2021-07-17T14:34:20.000Z | import numpy as np
import math
import pickle
from naba import get_data, tiling, alloc_bitrate, calc_qoe
import argparse
import json
def main():
parser = argparse.ArgumentParser(description='Run NABA algorithm and calculate Average QoE of a video for all users')
parser.add_argument('-D', '--dataset', type=int, required=True, help='Dataset ID (1 or 2)')
parser.add_argument('-T', '--topic', required=True, help='Topic in the particular Dataset (video name)')
parser.add_argument('--fps', type=int, required=True, help='fps of the video')
parser.add_argument('-O', '--offset', type=int, default=0, help='Offset for the start of the video in seconds (when the data was logged in the dataset) [default: 0]')
parser.add_argument('--fpsfrac', type=float, default=1.0, help='Fraction with which fps is to be multiplied to change the chunk size [default: 1.0]')
parser.add_argument('-Q', '--quality', required=True, help='Preferred bitrate quality of the video (360p, 480p, 720p, 1080p, 1440p)')
args = parser.parse_args()
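	# Example invocation (hypothetical topic name):
	#   python obs_naba.py -D 1 -T paris --fps 30 -Q 720p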
if args.dataset != 1 and args.dataset != 2:
print("Incorrect value of the Dataset ID provided!!...")
print("======= EXIT ===========")
exit()
pred_nframe = args.fps * args.fpsfrac
# Get the necessary information regarding the dimensions of the video
print("Reading JSON...")
file = open('./meta.json', )
jsonRead = json.load(file)
nusers = jsonRead["dataset"][args.dataset-1]["nusers"]
width = jsonRead["dataset"][args.dataset-1]["width"]
height = jsonRead["dataset"][args.dataset-1]["height"]
view_width = jsonRead["dataset"][args.dataset-1]["view_width"]
view_height = jsonRead["dataset"][args.dataset-1]["view_height"]
milisec = jsonRead["dataset"][args.dataset-1]["milisec"]
pref_bitrate = jsonRead["bitrates"][args.quality]
ncol_tiles = jsonRead["ncol_tiles"]
nrow_tiles = jsonRead["nrow_tiles"]
player_width = jsonRead["player_width"]
player_height = jsonRead["player_height"]
final_qoe = []
for usernum in range(nusers):
print('User_{}'.format(usernum))
data, frame_nos = [],[]
data, frame_nos, max_frame = get_data(data, frame_nos, args.dataset, args.topic, usernum+1, args.fps, milisec, width, height, view_width, view_height)
act_tiles, chunk_frames = tiling(data, frame_nos, max_frame, width, height, nrow_tiles, ncol_tiles, args.fps, pred_nframe)
		# Skip the first 5 seconds of frames, to stay consistent with our model
		i = 0
		while True:
			curr_frame = frame_nos[i]
			if curr_frame < 5 * args.fps:
				i += 1
			else:
				break
frame_nos = frame_nos[i:]
vid_bitrate = alloc_bitrate(frame_nos, chunk_frames, pref_bitrate, nrow_tiles, ncol_tiles)
q = calc_qoe(vid_bitrate, act_tiles, frame_nos, chunk_frames, width, height, nrow_tiles, ncol_tiles, player_width, player_height)
final_qoe.append(q)
print("QoE: {}".format(q))
# Find averaged results
final_qoe.sort()
avg_qoe = np.mean(final_qoe)
# Print averaged results
	print('Topic: ' + args.topic)
print('Qoe NABA')
print('Pred nframe',(args.fps*args.fpsfrac))
print('Avg. QoE: ',avg_qoe)
print('\n\n')
if __name__ == "__main__":
main()
| 35.488372 | 167 | 0.713303 | 452 | 3,052 | 4.652655 | 0.311947 | 0.047076 | 0.039943 | 0.07418 | 0.1864 | 0.119829 | 0 | 0 | 0 | 0 | 0 | 0.01402 | 0.135321 | 3,052 | 85 | 168 | 35.905882 | 0.782872 | 0.047182 | 0 | 0 | 0 | 0.016393 | 0.260076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0 | 0.098361 | 0 | 0.114754 | 0.163934 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f397a404c6c6b7845cf8d3cc2ad927c19c0bc7f | 1,568 | py | Python | tests/time_delay_layers_test.py | veqtor/veqtor_keras | 303f81b7c6aaa7962b288541275fe7ea618804b9 | [
"MIT"
] | 1 | 2020-08-07T14:47:16.000Z | 2020-08-07T14:47:16.000Z | tests/time_delay_layers_test.py | veqtor/veqtor_keras | 303f81b7c6aaa7962b288541275fe7ea618804b9 | [
"MIT"
] | null | null | null | tests/time_delay_layers_test.py | veqtor/veqtor_keras | 303f81b7c6aaa7962b288541275fe7ea618804b9 | [
"MIT"
] | null | null | null | import tensorflow as tf
from tensorflow.keras.utils import custom_object_scope
from tensorflow.python.keras.testing_utils import layer_test
from veqtor_keras.layers.time_delay_layers import TimeDelayLayer1D, DepthGroupwiseTimeDelayLayer1D, \
DepthGroupwiseTimeDelayLayerFake2D, TimeDelayLayerFake2D
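# Smoke tests: Keras' layer_test utility builds each layer, runs it on random
# data of the given input_shape, and round-trips the layer's config.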
class TimeDelayLayer1DTest(tf.test.TestCase):
def test_simple(self):
with custom_object_scope({'TimeDelayLayer1D': TimeDelayLayer1D}):
layer_test(
TimeDelayLayer1D, kwargs={'output_dim': 4}, input_shape=(5, 32, 3))
class SeparableTimeDelayLayer1DTest(tf.test.TestCase):
def test_simple(self):
with custom_object_scope(
{'DepthGroupwiseTimeDelayLayer1D': DepthGroupwiseTimeDelayLayer1D,
'TimeDelayLayer1D': TimeDelayLayer1D}):
layer_test(
DepthGroupwiseTimeDelayLayer1D, kwargs={'output_mul': 2}, input_shape=(5, 32, 3))
class SeparableTimeDelayLayerFake2DTest(tf.test.TestCase):
def test_simple(self):
with custom_object_scope({'DepthGroupwiseTimeDelayLayerFake2D': DepthGroupwiseTimeDelayLayerFake2D}):
layer_test(
DepthGroupwiseTimeDelayLayerFake2D, input_shape=(5, 16, 16, 3))
class TimeDelayLayerFake2DTest(tf.test.TestCase):
def test_simple(self):
with custom_object_scope({'TimeDelayLayerFake2D': TimeDelayLayerFake2D}):
layer_test(
TimeDelayLayerFake2D, kwargs={'output_dim': 4}, input_shape=(5, 16, 16, 3))
if __name__ == '__main__':
tf.test.main()
| 38.243902 | 109 | 0.720026 | 147 | 1,568 | 7.421769 | 0.319728 | 0.054995 | 0.07791 | 0.062328 | 0.284143 | 0.284143 | 0.240147 | 0.190651 | 0.190651 | 0.190651 | 0 | 0.035377 | 0.188776 | 1,568 | 40 | 110 | 39.2 | 0.822327 | 0 | 0 | 0.275862 | 0 | 0 | 0.098214 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.137931 | 0 | 0.413793 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f3f574c981c25a097dafd8bdfc599460d07e952 | 660 | py | Python | PyTester/data/Root.py | Sildra/PyTester | ebe16dc4dc169416ee839adc03e42806d8d57620 | [
"Apache-2.0"
] | null | null | null | PyTester/data/Root.py | Sildra/PyTester | ebe16dc4dc169416ee839adc03e42806d8d57620 | [
"Apache-2.0"
] | null | null | null | PyTester/data/Root.py | Sildra/PyTester | ebe16dc4dc169416ee839adc03e42806d8d57620 | [
"Apache-2.0"
] | null | null | null | from data.Category import Category
class Root(Category):
"""description of class"""
instance = None
def __init__(self, args, name="Root", depth=-1):
super().__init__(args.path, name, depth)
        # register on the class attribute so the static helpers below can use it
        Root.instance = self
self.args = args
def accept(self, visitor):
visitor.visit(self)
if len(self.categories) > 0:
for node in self.categories.values():
node.accept(visitor)
visitor.leave(self, node)
@staticmethod
def get_root_option(a, b):
        Root.instance.get_option(a, b)
@staticmethod
def args():
        return Root.instance.args
| 23.571429 | 52 | 0.589394 | 77 | 660 | 4.909091 | 0.493506 | 0.042328 | 0.042328 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004348 | 0.30303 | 660 | 27 | 53 | 24.444444 | 0.817391 | 0.030303 | 0 | 0.1 | 0 | 0 | 0.006309 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.05 | 0.05 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f41984526bd0a507c9c13a13aba537060703cb9 | 2,880 | py | Python | predict.py | wangruichens/textcnn | 99dadc2da13d6dff48cc824492788046ceb82031 | [
"Apache-2.0"
] | null | null | null | predict.py | wangruichens/textcnn | 99dadc2da13d6dff48cc824492788046ceb82031 | [
"Apache-2.0"
] | null | null | null | predict.py | wangruichens/textcnn | 99dadc2da13d6dff48cc824492788046ceb82031 | [
"Apache-2.0"
] | null | null | null | # @Time : 18-11-5
# @Author : wangrc
# @Refers :
# @Outputs :
# @Desc :
import tensorflow as tf
import numpy as np
import os
import time
import datetime
import data_helpers
from text_cnn import TextCNN
from tensorflow.contrib import learn
import csv
import read_nlpcc
# Eval Parameters
tf.flags.DEFINE_integer("batch_size", 64, "Batch Size (default: 64)")
tf.flags.DEFINE_string("checkpoint_dir", "./runs/1541395146/checkpoints/", "Checkpoint directory from training run")
tf.flags.DEFINE_boolean("eval_train", False, "Evaluate on all training data")
# Misc Parameters
tf.flags.DEFINE_boolean("allow_soft_placement", True, "Allow device soft device placement")
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")
#
FLAGS = tf.flags.FLAGS
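# Example (illustrative; assumes a trained checkpoint directory exists):
#   python predict.py --checkpoint_dir ./runs/1541395146/checkpoints/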
x_raw=read_nlpcc.load_app_data()
# Map data into vocabulary
vocab_path = os.path.join(FLAGS.checkpoint_dir, "..", "vocab")
vocab_processor = learn.preprocessing.VocabularyProcessor.restore(vocab_path)
x_test = np.array(list(vocab_processor.transform(x_raw)))
print("\nEvaluating...\n")
# Evaluation
# ==================================================
checkpoint_dir="./runs/1541395146/checkpoints/"
checkpoint_file = tf.train.latest_checkpoint(checkpoint_dir)
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
saver.restore(sess, checkpoint_file)
# Get the placeholders from the graph by name
input_x = graph.get_operation_by_name("input_x").outputs[0]
# input_y = graph.get_operation_by_name("input_y").outputs[0]
dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
# Tensors we want to evaluate
predictions = graph.get_operation_by_name("output/predictions").outputs[0]
# Generate batches for one epoch
batches = data_helpers.batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
# Collect the predictions here
all_predictions = []
for x_test_batch in batches:
print('waiting')
batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
all_predictions = np.concatenate([all_predictions, batch_predictions])
# Save the evaluation to a csv
predictions_human_readable = np.column_stack((np.array(x_raw), all_predictions))
out_path = os.path.join(FLAGS.checkpoint_dir, "..", "prediction.csv")
print("Saving evaluation to {0}".format(out_path))
with open(out_path, 'w') as f:
csv.writer(f).writerows(predictions_human_readable)
| 36 | 116 | 0.721875 | 390 | 2,880 | 5.092308 | 0.369231 | 0.021148 | 0.032729 | 0.038268 | 0.131923 | 0.108761 | 0.032226 | 0 | 0 | 0 | 0 | 0.015158 | 0.152431 | 2,880 | 79 | 117 | 36.455696 | 0.798443 | 0.160417 | 0 | 0 | 0 | 0 | 0.171321 | 0.02501 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.23913 | 0 | 0.23913 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f46e0716183980d44e4a86f4c6e12b6c8d6a358 | 1,422 | py | Python | innuendo/core/interface.py | innuendoio/innuendo-agent-python | bcd79ddaf39083fa6498d1c9af2be2d79e495fc2 | [
"MIT"
] | null | null | null | innuendo/core/interface.py | innuendoio/innuendo-agent-python | bcd79ddaf39083fa6498d1c9af2be2d79e495fc2 | [
"MIT"
] | null | null | null | innuendo/core/interface.py | innuendoio/innuendo-agent-python | bcd79ddaf39083fa6498d1c9af2be2d79e495fc2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Backwards compatibility imports
from __future__ import absolute_import, division, print_function
from builtins import *
# Imports
import sys
import imp
import os
import argparse
import traceback
from innuendo.utils import file_manager as fm, parser
class TerminalInterface():
def __init__(self):
try:
# Private constants for PATHs
self._PATH = os.path.dirname(os.path.abspath(__file__))
self._CONF_FOLDER_PATH = 'config'
self._CONF_FILE_PATH = '{}/../{}/conf.yml'.format(self._PATH, self._CONF_FOLDER_PATH)
# Loads a configuration file
self.conf = fm.load_yaml(self._CONF_FILE_PATH)
self.arguments = self.conf.get('arguments', dict())
except IOError as e:
traceback.print_exc(file=sys.stdout)
except Exception as e:
traceback.print_exc(file=sys.stdout)
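    # Expected conf.yml layout (illustrative, inferred from process_args below):
    #   arguments:
    #     command:
    #       help: "command to run"
    #       type: "str"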
def process_args(self):
arg_p = argparse.ArgumentParser()
# Sets the arguments
for k, v in self.arguments.items():
arg_p.add_argument(k, help=v.get(
'help', ''), type=parser.get_value_type(v.get('type', '')))
args = arg_p.parse_args()
print(args)
print(args.command)
def run(self):
try:
self.process_args()
print('Run Forrest')
except Exception as e:
print(e)
| 27.346154 | 97 | 0.613221 | 175 | 1,422 | 4.748571 | 0.445714 | 0.057762 | 0.028881 | 0.043321 | 0.079422 | 0.079422 | 0.079422 | 0.079422 | 0 | 0 | 0 | 0.000978 | 0.280591 | 1,422 | 51 | 98 | 27.882353 | 0.811339 | 0.094937 | 0 | 0.176471 | 0 | 0 | 0.039844 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.235294 | 0 | 0.352941 | 0.205882 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f483a2d601c9006e62be6731d151449cfbbb0bc | 8,108 | py | Python | IMLearn/learners/gaussian_estimators.py | guymkaplan/IML.HUJI | cd0aac71c3684bca9a64df13b0ba15d42ec88e98 | [
"MIT"
] | null | null | null | IMLearn/learners/gaussian_estimators.py | guymkaplan/IML.HUJI | cd0aac71c3684bca9a64df13b0ba15d42ec88e98 | [
"MIT"
] | null | null | null | IMLearn/learners/gaussian_estimators.py | guymkaplan/IML.HUJI | cd0aac71c3684bca9a64df13b0ba15d42ec88e98 | [
"MIT"
] | null | null | null | from __future__ import annotations
import math
import numpy as np
from numpy.linalg import inv, det, slogdet
class UnivariateGaussian:
"""
Class for univariate Gaussian Distribution Estimator
"""
def __init__(self, biased_var: bool = False) -> UnivariateGaussian:
"""
Estimator for univariate Gaussian mean and variance parameters
Parameters
----------
biased_var : bool, default=False
Should fitted estimator of variance be a biased or unbiased estimator
Attributes
----------
fitted_ : bool
Initialized as false indicating current estimator instance has not been fitted.
To be set as True in `UnivariateGaussian.fit` function.
mu_: float
Estimated expectation initialized as None. To be set in `UnivariateGaussian.fit`
function.
var_: float
Estimated variance initialized as None. To be set in `UnivariateGaussian.fit`
function.
"""
self.biased_ = biased_var
self.fitted_, self.mu_, self.var_ = False, None, None
def fit(self, X: np.ndarray) -> UnivariateGaussian:
"""
Estimate Gaussian expectation and variance from given samples
Parameters
----------
X: ndarray of shape (n_samples, )
Training data
Returns
-------
self : returns an instance of self.
Notes
-----
Sets `self.mu_`, `self.var_` attributes according to calculated estimation (where
estimator is either biased or unbiased). Then sets `self.fitted_` attribute to `True`
"""
self.mu_ = X.mean()
# n-1 is for unbiased, n is for biased
if self.biased_:
self.var_ = X.var()
else:
self.var_ = X.var(ddof=1)
self.fitted_ = True
return self
def pdf(self, X: np.ndarray) -> np.ndarray:
"""
Calculate PDF of observations under Gaussian model with fitted estimators
Parameters
----------
X: ndarray of shape (n_samples, )
Samples to calculate PDF for
Returns
-------
pdfs: ndarray of shape (n_samples, )
Calculated values of given samples for PDF function of N(mu_, var_)
Raises
------
        ValueError: In case function was called prior to fitting the model
"""
if not self.fitted_:
raise ValueError("Estimator must first be fitted before calling `pdf` function")
pdf_on_vector = np.vectorize(self.probability_density_func_uni)
return pdf_on_vector(self.mu_, self.var_, X)
@staticmethod
def log_likelihood(mu: float, sigma: float, X: np.ndarray) -> float:
"""
Calculate the log-likelihood of the data under a specified Gaussian model
Parameters
----------
mu : float
Expectation of Gaussian
sigma : float
Variance of Gaussian
X : ndarray of shape (n_samples, )
Samples to calculate log-likelihood with
Returns
-------
log_likelihood: float
log-likelihood calculated
"""
return -(len(X)/2) * math.log(2 * math.pi * (sigma)) - (np.sum(
((X - mu) ** 2)) / (2 * (sigma)))
@staticmethod
def probability_density_func_uni(mu: float, sigma: float, sample: float) -> float:
"""
        Computes the pdf of a Univariate Gaussian Distribution, as defined
in literature.
:param mu: Expectation of Gaussian
:param sigma: Variance of Gaussian
:param sample: a single sample
:return: the PDF of a single sample
"""
coof = 1/math.sqrt(sigma * 2 * math.pi)
exponent = math.exp((-1/(2*sigma))*((sample-mu)**2))
return coof * exponent
class MultivariateGaussian:
"""
Class for multivariate Gaussian Distribution Estimator
"""
def __init__(self):
"""
Initialize an instance of multivariate Gaussian estimator
Attributes
----------
fitted_ : bool
Initialized as false indicating current estimator instance has not been fitted.
To be set as True in `MultivariateGaussian.fit` function.
mu_: ndarray of shape (n_features,)
Estimated expectation initialized as None. To be set in `MultivariateGaussian.fit`
function.
cov_: ndarray of shape (n_features, n_features)
Estimated covariance initialized as None. To be set in `MultivariateGaussian.fit`
function.
"""
self.inv_cov_ = None
self.cov_det_ = 0
self.mu_, self.cov_ = None, None
self.fitted_ = False
def fit(self, X: np.ndarray) -> MultivariateGaussian:
"""
Estimate Gaussian expectation and covariance from given samples
Parameters
----------
X: ndarray of shape (n_samples, n_features)
Training data
Returns
-------
self : returns an instance of self
Notes
-----
Sets `self.mu_`, `self.cov_` attributes according to calculated estimation.
Then sets `self.fitted_` attribute to `True`
"""
self.mu_ = np.mean(X, axis=0)
# no need for ddof as default computes sum(X)/N-1:
self.cov_ = np.cov(X, rowvar=False)
self.cov_det_ = np.linalg.det(self.cov_)
self.inv_cov_ = np.linalg.inv(self.cov_)
self.fitted_ = True
return self
def pdf(self, X: np.ndarray):
"""
Calculate PDF of observations under Gaussian model with fitted estimators
Parameters
----------
X: ndarray of shape (n_samples, n_features)
Samples to calculate PDF for
Returns
-------
pdfs: ndarray of shape (n_samples, )
Calculated values of given samples for PDF function of N(mu_, cov_)
Raises
------
        ValueError: In case function was called prior to fitting the model
"""
if not self.fitted_:
raise ValueError("Estimator must first be fitted before calling `pdf` function")
        # apply the density sample-by-sample (row-wise); np.vectorize would
        # wrongly map the function over individual scalars of X
        return np.apply_along_axis(self.probability_density_func_multi, 1, X)
@staticmethod
def log_likelihood(mu: np.ndarray, cov: np.ndarray, X: np.ndarray) -> float:
"""
Calculate the log-likelihood of the data under a specified Gaussian model
Parameters
----------
mu : ndarray of shape (n_features,)
Expectation of Gaussian
cov : ndarray of shape (n_features, n_features)
covariance matrix of Gaussian
X : ndarray of shape (n_samples, n_features)
Samples to calculate log-likelihood with
Returns
-------
log_likelihood: float
log-likelihood calculated over all input data and under given parameters of Gaussian
"""
        A = np.linalg.inv(cov)
        detA = np.linalg.det(A)
        # log-likelihood = -(n*d/2)*log(2*pi) - (n/2)*log(det(cov)) - quadratic
        # term; since det(A) = 1/det(cov), the determinant term enters with '+'
        coof = -((len(X) * len(A))/2) * np.log(2 * np.pi) + ((len(X)/2) * np.log(detA))
        delta = X - mu
        return coof - (0.5 * np.sum((delta @ A) * delta))  # <x_1, Ax_1> +...+ <x_n, Ax_n>
def probability_density_func_multi(self, samples: np.ndarray) -> float:
"""
        Computes the pdf of a Multivariate Gaussian Distribution, as defined
        in literature, using the fitted `mu_` and `cov_` attributes.
        :param samples: a single sample vector of shape (n_features,)
        :return: the PDF of the sample under N(mu_, cov_)
"""
coof = 1/math.sqrt(((math.pi * 2) ** len(self.cov_)) * self.cov_det_)
delta = samples-self.mu_
exponent = math.exp(-0.5*(np.transpose(delta) @ self.inv_cov_ @ delta))
return coof * exponent
| 32.562249 | 115 | 0.583004 | 958 | 8,108 | 4.818372 | 0.160752 | 0.023397 | 0.036395 | 0.038995 | 0.602686 | 0.560659 | 0.485269 | 0.485269 | 0.465771 | 0.433276 | 0 | 0.005595 | 0.316601 | 8,108 | 248 | 116 | 32.693548 | 0.827468 | 0.519364 | 0 | 0.216667 | 0 | 0 | 0.042857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.066667 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f499412eff769d36ce3fc8f434016dea692f534 | 1,509 | py | Python | pycqed/analysis/fit_toolbox/init_guess.py | nuttamas/PycQED_py3 | 1ee35c7428d36ed42ba4afb5d4bda98140b2283e | [
"MIT"
] | 60 | 2016-08-03T10:00:18.000Z | 2021-11-10T11:46:16.000Z | pycqed/analysis/fit_toolbox/init_guess.py | nuttamas/PycQED_py3 | 1ee35c7428d36ed42ba4afb5d4bda98140b2283e | [
"MIT"
] | 512 | 2016-08-03T17:10:02.000Z | 2022-03-31T14:03:43.000Z | pycqed/analysis/fit_toolbox/init_guess.py | nuttamas/PycQED_py3 | 1ee35c7428d36ed42ba4afb5d4bda98140b2283e | [
"MIT"
] | 34 | 2016-10-19T12:00:52.000Z | 2022-03-19T04:43:26.000Z | import numpy
from scipy import *
def lorentzian(x_data, y_data):
p=4*[0]
y_min = min(y_data)
index_y_min = y_data.tolist().index(y_min)
x_min = x_data[index_y_min]
y_max = max(y_data)
index_y_max = y_data.tolist().index(y_max)
y_mean = y_data.mean()
HM = (y_max - y_min)/2
#print 'check 3'
#print x_min
#print index_y_max
HM_index = index_y_min
#print HM_index
value_found = False
index_array = numpy.linspace(index_y_min, index_y_max, abs(index_y_max-index_y_min)+1)
if sign(index_y_min-index_y_max)>0:
index_array_2 = numpy.linspace(index_y_min, size(y_data), abs(index_y_min-size(y_data))+1)
else:
index_array_2 = numpy.linspace(index_y_min, 0, abs(index_y_min)+1)
#print 'check 4'
#print index_array
#print index_array_2
index_i = 0
while (not value_found):
index1 = index_array[index_i]
index2 = index_array_2[index_i]
if y_data[index1] > (y_max-HM):
HM_index = index1
#print 'check 2'
#print HM_index
value_found = True
elif y_data[index2] > (y_max-HM):
HM_index = index2
#print 'check 2'
#print HM_index
value_found = True
index_i+=1
HWHM = abs(x_data[HM_index] - x_min)
#print 'check 1'
#print 2*HWHM
FWHM = 2*HWHM
p[0] = x_min
p[1] = -2*HM
p[2] = FWHM
p[3] = y_max
#print p
return p
| 26.473684 | 98 | 0.588469 | 247 | 1,509 | 3.259109 | 0.174089 | 0.126708 | 0.122981 | 0.063354 | 0.445963 | 0.252174 | 0.173913 | 0.173913 | 0.091925 | 0 | 0 | 0.030564 | 0.306163 | 1,509 | 56 | 99 | 26.946429 | 0.7383 | 0.132538 | 0 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f4a9032ca67ddad37ba103c8f41aa58eaf22f85 | 268 | py | Python | src/rez/data/tests/commands/packages/rextest2/2/package.py | alexey-pelykh/rez | ad12105d89d658e4d2ea9249e537b3de90391f0e | [
"Apache-2.0"
] | null | null | null | src/rez/data/tests/commands/packages/rextest2/2/package.py | alexey-pelykh/rez | ad12105d89d658e4d2ea9249e537b3de90391f0e | [
"Apache-2.0"
] | null | null | null | src/rez/data/tests/commands/packages/rextest2/2/package.py | alexey-pelykh/rez | ad12105d89d658e4d2ea9249e537b3de90391f0e | [
"Apache-2.0"
] | 1 | 2020-09-24T08:33:43.000Z | 2020-09-24T08:33:43.000Z | name = 'rextest2'
version = '2'
requires = ["rextest-1.3"]
def commands():
# prepend to existing var
env.REXTEST_DIRS.append('{root}/data2')
setenv("REXTEST2_REXTEST_VER", '{resolve.rextest.version}')
env.REXTEST2_REXTEST_BASE = resolve.rextest.base
| 24.363636 | 63 | 0.697761 | 34 | 268 | 5.352941 | 0.676471 | 0.164835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030702 | 0.149254 | 268 | 10 | 64 | 26.8 | 0.767544 | 0.085821 | 0 | 0 | 0 | 0 | 0.316872 | 0.102881 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f4cb0e01f8ead732bd4879f7c1e12e4253c6239 | 937 | py | Python | examples/CheckFirmwareVersion.py | drizztguen77/PTHat | f46d05054875599e80b396f74bc5a348cfcefbfb | [
"Apache-2.0"
] | 5 | 2021-01-28T13:26:08.000Z | 2022-02-24T08:15:44.000Z | examples/CheckFirmwareVersion.py | drizztguen77/PTHat | f46d05054875599e80b396f74bc5a348cfcefbfb | [
"Apache-2.0"
] | null | null | null | examples/CheckFirmwareVersion.py | drizztguen77/PTHat | f46d05054875599e80b396f74bc5a348cfcefbfb | [
"Apache-2.0"
] | null | null | null | """
This is an example of querying the firmware version of the PTHat through an
Axis object over the serial interface.
This example does not auto send the commands. It gets the command and then sends it to the send_command method.
"""
from pthat.pthat import Axis
def wait_for_responses(axis, responses_to_check, msg):
responses = axis.get_all_responses()
while not all(x in responses for x in responses_to_check):
responses = responses + axis.get_all_responses()
# Print the responses
print(msg)
axis.parse_responses(responses)
xaxis = Axis("X", command_id=1, serial_device="/dev/ttyS0")
xaxis.debug = True
# Get the firmware version
firmware_version_cmd = xaxis.get_firmware_version()
xaxis.send_command(firmware_version_cmd)
# Show the responses
wait_for_responses(xaxis, ["RI00FW*", "CI00FW*"], "------- Get firmware version command responses -------")
| 32.310345 | 113 | 0.742796 | 143 | 937 | 4.706294 | 0.454545 | 0.111441 | 0.047548 | 0.056464 | 0.08321 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007643 | 0.16222 | 937 | 28 | 114 | 33.464286 | 0.849682 | 0.358591 | 0 | 0 | 0 | 0 | 0.133672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f583755beca1d6f083766bc7e6fb3691cf83619 | 1,015 | py | Python | pw22/__main__.py | paeronskruven/pyweek22 | 4657b03a49c011581af6ae460fd97b6d58d13ead | [
"MIT"
] | null | null | null | pw22/__main__.py | paeronskruven/pyweek22 | 4657b03a49c011581af6ae460fd97b6d58d13ead | [
"MIT"
] | null | null | null | pw22/__main__.py | paeronskruven/pyweek22 | 4657b03a49c011581af6ae460fd97b6d58d13ead | [
"MIT"
] | null | null | null | import pyglet
import logging
logging.basicConfig(format='%(asctime)s %(module)s %(levelname)s %(message)s', level=logging.DEBUG)
# setup resource paths before importing any game code
pyglet.resource.path = ['data', 'data/tiles']
pyglet.resource.reindex()
from .scenes import SceneManager, GameScene
window = pyglet.window.Window(width=1024, height=768, caption='The Nightmare')
scene_manager = SceneManager(window)
scene_manager.push(GameScene(scene_manager))
clock_display = pyglet.clock.ClockDisplay()
pyglet.gl.glEnable(pyglet.gl.GL_BLEND)
pyglet.gl.glBlendFunc(pyglet.gl.GL_SRC_ALPHA, pyglet.gl.GL_ONE_MINUS_SRC_ALPHA)
@window.event
def on_draw():
window.clear()
try:
scene_manager.on_draw()
except IndexError:
pyglet.app.exit()
pyglet.gl.glLoadIdentity()
clock_display.draw()
def on_update(dt):
try:
scene_manager.on_update(dt)
except IndexError:
pyglet.app.exit()
def main():
    pyglet.clock.schedule(on_update)
    pyglet.app.run()


if __name__ == '__main__':
    main()
| 22.065217 | 99 | 0.729064 | 136 | 1,015 | 5.301471 | 0.485294 | 0.066574 | 0.041609 | 0.047157 | 0.080444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008083 | 0.146798 | 1,015 | 45 | 100 | 22.555556 | 0.82448 | 0.050246 | 0 | 0.206897 | 0 | 0 | 0.077963 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.103448 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f58b85680a38832bb5ae69272da473e0e9adc35 | 1,993 | py | Python | src/web-scrapers/GetFixtures.py | CharlesFrankum/FF_Team_Selector | f230904faa6713dcec97e086e14eb7d841de9278 | [
"Apache-2.0"
] | null | null | null | src/web-scrapers/GetFixtures.py | CharlesFrankum/FF_Team_Selector | f230904faa6713dcec97e086e14eb7d841de9278 | [
"Apache-2.0"
] | 3 | 2021-03-31T19:24:31.000Z | 2021-12-13T20:07:43.000Z | src/web-scrapers/GetFixtures.py | CharlesFrankum/FF_Team_Selector | f230904faa6713dcec97e086e14eb7d841de9278 | [
"Apache-2.0"
] | 1 | 2019-08-08T06:46:13.000Z | 2019-08-08T06:46:13.000Z | import os
import sys
sys.path.insert(1, f'{os.path.dirname(os.getcwd())}\\models\\')
from datetime import datetime
from time import sleep
import pandas as pd
from Mapper import df_ISO3_mapper
def get_fixture_data(url, driver):
# Get Fixture data for gameweeks 1-38
home_teams = []
away_teams = []
date_times = []
gameweeks = []
gw_counter = 0
for i in range(1,39):
gw_counter += 1
week = url+str(i)
driver.get(week)
sleep(1)
game_days = driver.find_elements_by_css_selector('div.sc-bdVaJa.eIzRjw')
for day in game_days:
date = day.find_element_by_tag_name('h4').text
game_day = day.find_element_by_tag_name('ul').text
games = game_day.split('\n')
            n_games = []
            if ':' in game_day:
                # workaround: split the "HH:MM" kick-off times apart so each
                # fixture flattens to four entries: [home, HH, MM, away]
                for item in games:
                    for part in item.split(':'):
                        n_games.append(part)
            for idx in range(0, len(n_games), 4):
                home_teams.append(n_games[idx])
                away_teams.append(n_games[idx + 3])
                date_time = datetime.strptime(date, '%A %d %B %Y')
                date_times.append(date_time)
                gameweeks.append(gw_counter)
df = pd.DataFrame({'home_team':home_teams,'away_team':away_teams,'datetime':date_times,'gameweek':gameweeks})
return df[['home_team','away_team','gameweek','datetime']]
def save_csv(data):
path = f'{os.path.dirname(os.getcwd())}\\data\\Fixtures\\fixtures.csv'
data.to_csv(path, index=0, sep=',')
def collect(driver, mapper):
print('Collecting fixtures...')
fixtures_url = 'https://fantasy.premierleague.com/fixtures/'
fixtures = get_fixture_data(fixtures_url, driver)
fixtures = df_ISO3_mapper(fixtures, mapper)
save_csv(fixtures)
| 31.634921 | 117 | 0.586553 | 263 | 1,993 | 4.235741 | 0.38403 | 0.02693 | 0.037702 | 0.025135 | 0.113106 | 0.08079 | 0 | 0 | 0 | 0 | 0 | 0.012074 | 0.293527 | 1,993 | 62 | 118 | 32.145161 | 0.779119 | 0.044656 | 0 | 0 | 0 | 0 | 0.143609 | 0.052604 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.130435 | 0 | 0.217391 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f5cb3300698dddd958b6e5a7a02e3cb797a505c | 463 | py | Python | stackstats/settings.py | kapsali29/StackStatsAPI | 5181bd5275129080206350e147ce6b1db18a0b69 | [
"MIT"
] | null | null | null | stackstats/settings.py | kapsali29/StackStatsAPI | 5181bd5275129080206350e147ce6b1db18a0b69 | [
"MIT"
] | null | null | null | stackstats/settings.py | kapsali29/StackStatsAPI | 5181bd5275129080206350e147ce6b1db18a0b69 | [
"MIT"
] | null | null | null | # =================================
# STACKEXCHANGE APP SETTINGS
# =================================
CLIENT_ID = "***"
CLIENT_SECRET = "*****"
KEY = "****"
ACCESS_TOKEN = "*****"
# =================================
# STACKEXCHANGE API SETTINGS
# =================================
STACKEXCHANGE_URL = "api.stackexchange.com"
API_VERSION = "2.2"
ANSWERS_URL = "answers"
QUESTIONS_URL = "questions"
COMMENTS_URL = "answers/{answerID}/comments"
SECONDS_TO_SLEEP = 10 | 27.235294 | 44 | 0.490281 | 37 | 463 | 5.864865 | 0.594595 | 0.092166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009592 | 0.099352 | 463 | 17 | 45 | 27.235294 | 0.510791 | 0.408207 | 0 | 0 | 0 | 0 | 0.313433 | 0.179104 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f60c2158a21875e4f1814a10f74c2d6e01951da | 410 | py | Python | src/leetcode/1997/sol_0.py | kagemeka/competitive-programming | c70fe481bcd518f507b885fc9234691d8ce63171 | [
"MIT"
] | 1 | 2021-07-11T03:20:10.000Z | 2021-07-11T03:20:10.000Z | src/leetcode/1997/sol_0.py | kagemeka/competitive-programming | c70fe481bcd518f507b885fc9234691d8ce63171 | [
"MIT"
] | 39 | 2021-07-10T05:21:09.000Z | 2021-12-15T06:10:12.000Z | src/leetcode/1997/sol_0.py | kagemeka/competitive-programming | c70fe481bcd518f507b885fc9234691d8ce63171 | [
"MIT"
] | null | null | null | import typing
import functools
class Solution:
def firstDayBeenInAllRooms(
self,
nx: typing.List[int],
) -> int:
mod = 10 ** 9 + 7
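    # dfs(i) = first day on which room i is reached for the first time.
    # dfs(i) = dfs(i - 1) + 2 + (dfs(i - 1) - dfs(nx[i - 1]))  (mod 1e9 + 7):
    # the forced revisit sends us back to room nx[i - 1] and replays that stretch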
@functools.lru_cache(maxsize=None)
def dfs(i: int):
if i == 0: return 0
res = 2 + dfs(i - 1)
if nx[i - 1] != i - 1:
res += dfs(i - 1) - dfs(nx[i - 1])
return res % mod
return dfs(len(nx) - 1)
| 19.52381 | 42 | 0.495122 | 59 | 410 | 3.423729 | 0.474576 | 0.049505 | 0.049505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0.365854 | 410 | 21 | 43 | 19.52381 | 0.726923 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f639c82ba6fc3b596994756f2ed202a124ee6d6 | 472 | py | Python | scripts/audio/replace.py | Y4SSIN/video-editor | 879e53ee689e0085140d10f3c7b35a4048ca233b | [
"MIT"
] | 8 | 2019-01-21T13:14:33.000Z | 2020-10-02T14:40:21.000Z | scripts/audio/replace.py | Y4SSIN/video-editor | 879e53ee689e0085140d10f3c7b35a4048ca233b | [
"MIT"
] | 3 | 2021-06-08T21:30:11.000Z | 2022-03-12T00:28:37.000Z | scripts/audio/replace.py | Y4SSIN/video-editor | 879e53ee689e0085140d10f3c7b35a4048ca233b | [
"MIT"
] | 2 | 2020-12-01T16:59:04.000Z | 2021-02-01T03:31:21.000Z | '''
This function gives you the possibility to
replace the video audio.
'''
import os
def replace_audio(video_file_path, audio_file_path):
    old_filename = os.path.basename(video_file_path)  # last path component
old_extension = os.path.splitext(video_file_path)[1]
new_filename = old_filename.replace(old_extension, '.mp4')
os.system(f'ffmpeg -i {video_file_path} -i {audio_file_path} -map 0:0 -map 1:0 -c:v copy -c:a aac -b:a 256k -shortest assets/videos/{new_filename}')
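# Example (hypothetical paths, relative to the project root):
#   replace_audio('clips/video.mp4', 'tracks/audio.aac')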
| 31.466667 | 152 | 0.722458 | 79 | 472 | 4.075949 | 0.493671 | 0.149068 | 0.161491 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02716 | 0.141949 | 472 | 14 | 153 | 33.714286 | 0.767901 | 0.144068 | 0 | 0 | 0 | 0.166667 | 0.353535 | 0.070707 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f64fd79f6f4e0f16023d3c4112423cb2c29995a | 405 | py | Python | sims-g2/pos-adv/code/plot2D.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 1 | 2019-12-19T16:21:13.000Z | 2019-12-19T16:21:13.000Z | sims-g2/pos-adv/code/plot2D.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | null | null | null | sims-g2/pos-adv/code/plot2D.py | ammarhakim/ammar-simjournal | 85b64ddc9556f01a4fab37977864a7d878eac637 | [
"MIT",
"Unlicense"
] | 2 | 2020-01-08T06:23:33.000Z | 2020-01-08T07:06:50.000Z | from pylab import *
X = linspace(-1, 1, 50)
XX, YY = meshgrid(X, X)
def calcf(XX, YY, mu1):
f1 = 0.5
f2 = 1/(2*sqrt(3)*mu1)
f3 = 1/(2*sqrt(3)*mu1)
f4 = 1/(6*mu1**2)
    # each coefficient scales its basis function; the bilinear term carries f4
    return f1*0.5 + f2*sqrt(3)*XX/2 + f3*sqrt(3)*YY/2 + f4*3*XX*YY/2
mu1 = 3.0/5.0
f1 = calcf(XX, YY, mu1)
pcolormesh(XX, YY, transpose(f1))
axis('image')
colorbar()
print("Max: %g. Min = %g" % (f1.max(), f1.min()))
show()
| 19.285714 | 65 | 0.540741 | 84 | 405 | 2.607143 | 0.404762 | 0.091324 | 0.082192 | 0.109589 | 0.091324 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137072 | 0.207407 | 405 | 20 | 66 | 20.25 | 0.545171 | 0 | 0 | 0 | 0 | 0 | 0.054321 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.1875 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f651734783eec2d577591561e02a2c193bbe807 | 4,312 | py | Python | cfn_custom_resources_backed_by_step_functions/cfn_custom_resources_backed_by_step_functions_stack.py | bitesizedserverless/cfn-custom-resources-backed-by-step-functions | 45c424a9d6f491700e1729ef88c5fee36beb5e44 | [
"MIT"
] | null | null | null | cfn_custom_resources_backed_by_step_functions/cfn_custom_resources_backed_by_step_functions_stack.py | bitesizedserverless/cfn-custom-resources-backed-by-step-functions | 45c424a9d6f491700e1729ef88c5fee36beb5e44 | [
"MIT"
] | null | null | null | cfn_custom_resources_backed_by_step_functions/cfn_custom_resources_backed_by_step_functions_stack.py | bitesizedserverless/cfn-custom-resources-backed-by-step-functions | 45c424a9d6f491700e1729ef88c5fee36beb5e44 | [
"MIT"
] | null | null | null | """Module for the main CfnCustomResourcesBackedByStepFunctions Stack."""
# Standard library imports
import time
# Related third party imports
# -
# Local application/library specific imports
from aws_cdk import (
core as cdk,
aws_lambda as lambda_,
aws_stepfunctions as sfn,
aws_stepfunctions_tasks as sfn_tasks,
)
class CfnCustomResourcesBackedByStepFunctionsStack(cdk.Stack):
"""The CfnCustomResourcesBackedByStepFunctions Stack."""
def __init__(
self,
scope: cdk.Construct,
construct_id: str,
**kwargs,
) -> None:
"""Construct a new CfnCustomResourcesBackedByStepFunctionsStack."""
super().__init__(scope, construct_id, **kwargs)
# Define the Lambda functions for the state machine
fail_50_percent_lambda = lambda_.Function(
scope=self,
id="Fail50PercentOfUpdates",
code=lambda_.Code.from_asset("lambda/functions/fail_50_percent_of_updates"),
handler="index.lambda_handler",
runtime=lambda_.Runtime.PYTHON_3_9,
)
requests_layer = lambda_.LayerVersion(
scope=self,
id="RequestsLayer",
code=lambda_.Code.from_asset("lambda/layers/requests_layer/python.zip"),
)
update_cfn_lambda = lambda_.Function(
scope=self,
id="UpdateCfnLambda",
code=lambda_.Code.from_asset("lambda/functions/update_cfn_custom_resource"),
handler="index.lambda_handler",
runtime=lambda_.Runtime.PYTHON_3_9,
layers=[requests_layer],
)
# The State Machine looks like this:
# Start
# |
# V
#
# Lambda (fails 50% of the time)
#
# | |
# success \ / catch
# V
#
# Lambda (update CFN)
fail_50_percent_step = sfn_tasks.LambdaInvoke(
scope=self,
id="Lambda (Fail 50%)",
lambda_function=fail_50_percent_lambda,
retry_on_service_exceptions=False,
)
update_cfn_step = sfn_tasks.LambdaInvoke(
scope=self,
id="Update CloudFormation",
lambda_function=update_cfn_lambda,
# We pass both the original execution input AND the lambda execution
# results to the Update CloudFormation Lambda. The function will use
# the Lambda execution results to determine success or failure, and will
# use the original Step Functions Execution Input to fetch the CloudFormation
# callback parameters (ResponseURL, StackId, RequestId and LogicalResourceId).
payload=sfn.TaskInput.from_object(
{
"ExecutionInput": sfn.JsonPath.string_at("$$.Execution.Input"),
"LambdaResults.$": "$",
}
),
)
# Make sure failures are also handled by the update_cfn_step
fail_50_percent_step.add_catch(handler=update_cfn_step, errors=["States.ALL"])
# Create the state machine.
state_machine = sfn.StateMachine(
self,
"StateMachine",
definition=fail_50_percent_step.next(update_cfn_step),
timeout=cdk.Duration.minutes(1),
)
# The Lambda Function backing the custom resource
custom_resource_handler_function = lambda_.Function(
scope=self,
id="CustomResourceHandler",
code=lambda_.Code.from_asset("lambda/functions/custom_resource_handler"),
handler="index.lambda_handler",
runtime=lambda_.Runtime.PYTHON_3_9,
environment={"STATE_MACHINE_ARN": state_machine.state_machine_arn},
)
state_machine.grant_start_execution(custom_resource_handler_function)
# The CFN Custom Resource
cdk.CustomResource(
scope=self,
id="CustomResource",
service_token=custom_resource_handler_function.function_arn,
# Passing the time as a parameter will trigger the custom
# resource with every deployment.
properties={"ExecutionTime": str(time.time())},
)
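        # The handler Lambda starts the state machine and returns immediately;
        # the 'Update CloudFormation' step later posts the outcome to the
        # ResponseURL, completing the custom resource asynchronously.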
| 35.344262 | 90 | 0.606679 | 424 | 4,312 | 5.910377 | 0.351415 | 0.038308 | 0.030726 | 0.028731 | 0.226257 | 0.173184 | 0.136872 | 0.063448 | 0.063448 | 0.063448 | 0 | 0.008452 | 0.314007 | 4,312 | 121 | 91 | 35.636364 | 0.838742 | 0.265074 | 0 | 0.202703 | 0 | 0 | 0.143314 | 0.066539 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013514 | false | 0 | 0.027027 | 0 | 0.054054 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f6693ddb85d3bb717698451379c61708255fc9d | 9,726 | py | Python | Pokemon/pokemon_item.py | V-FEXrt/Pokemon-Spoof-Plus | d397d680742496b7f64b401511da7eb57f63c973 | [
"MIT"
] | 2 | 2017-05-04T20:24:19.000Z | 2017-05-04T20:58:07.000Z | Pokemon/pokemon_item.py | V-FEXrt/Pokemon-Spoof-Plus | d397d680742496b7f64b401511da7eb57f63c973 | [
"MIT"
] | null | null | null | Pokemon/pokemon_item.py | V-FEXrt/Pokemon-Spoof-Plus | d397d680742496b7f64b401511da7eb57f63c973 | [
"MIT"
] | null | null | null | import random
class Item():
def __init__(self, name, hex):
self.name = name
self.hex = hex
def __str__(self):
return self.name
@staticmethod
def fromBytes(hex):
return Item.reverse[hex]
@staticmethod
def buildReverse():
reverse = {}
Item.members = [attr for attr in dir(Item) if not callable(getattr(Item, attr)) and not attr.startswith("__")]
for member in Item.members:
item = getattr(Item, member)
reverse[item.hex] = item
Item.reverse = reverse
@staticmethod
def rnd():
return getattr(Item, random.choice(Item.members))
Item.NOTHING = Item('Nothing', 0x00)
Item.MASTER_BALL = Item("Master Ball", 0x01)
Item.ULTRA_BALL = Item("Ultra Ball", 0x02)
Item.BRIGHT_POWDER = Item("BrightPowder", 0x03)
Item.GREAT_BALL = Item("Great Ball", 0x04)
Item.POKE_BALL = Item("Poke Ball", 0x05)
Item.BICYCLE = Item("Bicycle", 0x06)
Item.MOON_STONE = Item("Moon Stone", 0x08)
Item.ANTIDOTE = Item("Antidote", 0x09)
Item.BURN_HEAL = Item("Burn Heal", 0x0A)
Item.ICE_HEAL = Item("Ice Heal", 0x0B)
Item.AWAKENING = Item("Awakening", 0x0C)
Item.PARLYZ_HEAL = Item("Parlyz Heal", 0x0D)
Item.FULL_RESTORE = Item("Full Restore", 0x0E)
Item.MAX_POTION = Item("Max Potion", 0x0F)
Item.HYPER_POTION = Item("Hyper Potion", 0x10)
Item.SUPER_POTION = Item("Super Potion", 0x11)
Item.POTION = Item("Potion", 0x12)
Item.ESCAPE_ROPE = Item("Escape Rope", 0x13)
Item.REPEL = Item("Repel", 0x14)
Item.MAX_ELIXER = Item("Max Elixer", 0x15)
Item.FIRE_STONE = Item("Fire Stone", 0x16)
Item.THUNDER_STONE = Item("Thunder Stone", 0x17)
Item.WATER_STONE = Item("Water Stone", 0x18)
Item.HP_UP = Item("HP Up", 0x1A)
Item.PROTEIN = Item("Protein", 0x1B)
Item.IRON = Item("Iron", 0x1C)
Item.CARBOS = Item("Carbos", 0x1D)
Item.LUCKY_PUNCH = Item("Lucky Punch", 0x1E)
Item.CALCIUM = Item("Calcium", 0x1F)
Item.RARE_CANDY = Item("Rare Candy", 0x20)
Item.X_ACCURACY = Item("X Accuracy", 0x21)
Item.LEAF_STONE = Item("Leaf Stone", 0x22)
Item.METAL_POWDER = Item("Metal Powder", 0x23)
Item.NUGGET = Item("Nugget", 0x24)
Item.POKE_DOLL = Item("Poke Doll", 0x25)
Item.FULL_HEAL = Item("Full Heal", 0x26)
Item.REVIVE = Item("Revive", 0x27)
Item.MAX_REVIVE = Item("Max Revive", 0x28)
Item.GUARD_SPEC = Item("Guard Spec.", 0x29)
Item.SUPER_REPEL = Item("Super Repel", 0x2A)
Item.MAX_REPEL = Item("Max Repel", 0x2B)
Item.DIRE_HIT = Item("Dire Hit", 0x2C)
Item.FRESH_WATER = Item("Fresh Water", 0x2E)
Item.SODA_POP = Item("Soda Pop", 0x2F)
Item.LEMONADE = Item("Lemonade", 0x30)
Item.X_ATTACK = Item("X Attack", 0x31)
Item.X_DEFEND = Item("X Defend", 0x33)
Item.X_SPEED = Item("X Speed", 0x34)
Item.X_SPECIAL = Item("X Special", 0x35)
Item.COIN_CASE = Item("Coin Case", 0x36)
Item.ITEMFINDER = Item("Itemfinder", 0x37)
Item.EXP_SHARE = Item("Exp Share", 0x39)
Item.OLD_ROD = Item("Old Rod", 0x3A)
Item.GOOD_ROD = Item("Good Rod", 0x3B)
Item.SILVER_LEAF = Item("Silver Leaf", 0x3C)
Item.SUPER_ROD = Item("Super Rod", 0x3D)
Item.PP_UP = Item("PP Up", 0x3E)
Item.ETHER = Item("Ether", 0x3F)
Item.MAX_ETHER = Item("Max Ether", 0x40)
Item.ELIXER = Item("Elixer", 0x41)
Item.RED_SCALE = Item("Red Scale", 0x42)
Item.SECRET_POTION = Item("SecretPotion", 0x43)
Item.SS_TICKET = Item("S.S. Ticket", 0x44)
Item.MYSTERY_EGG = Item("Mystery Egg", 0x45)
Item.CLEAR_BELL = Item("Clear Bell*", 0x46)
Item.SILVER_WING = Item("Silver Wing", 0x47)
Item.MOOMOO_MILK = Item("Moomoo Milk", 0x48)
Item.QUICK_CLAW = Item("Quick Claw", 0x49)
Item.PSN_CURE_BERRY = Item("PSNCureBerry", 0x4A)
Item.GOLD_LEAF = Item("Gold Leaf", 0x4B)
Item.SOFT_SAND = Item("Soft Sand", 0x4C)
Item.SHARP_BEAK = Item("Sharp Beak", 0x4D)
Item.PRZ_CURE_BERRY = Item("PRZCureBerry", 0x4E)
Item.BURNT_BERRY = Item("Burnt Berry", 0x4F)
Item.ICE_BERRY = Item("Ice Berry", 0x50)
Item.POISON_BARB = Item("Poison Barb", 0x51)
Item.KINGS_ROCK = Item("King's Rock", 0x52)
Item.BITTER_BERRY = Item("Bitter Berry", 0x53)
Item.MINT_BERRY = Item("Mint Berry", 0x54)
Item.RED_APRICORN = Item("Red Apricorn", 0x55)
Item.TINY_MUSHROOM = Item("TinyMushroom", 0x56)
Item.BIG_MUSHROOM = Item("Big Mushroom", 0x57)
Item.SILVER_POWDER = Item("SilverPowder", 0x58)
Item.BLU_APRICORN = Item("Blu Apricorn", 0x59)
Item.AMULET_COIN = Item("Amulet Coin", 0x5B)
Item.YLW_APRICORN = Item("Ylw Apricorn", 0x5C)
Item.GRN_APRICORN = Item("Grn Apricorn", 0x5D)
Item.CLEANSE_TAG = Item("Cleanse Tag", 0x5E)
Item.MYSTIC_WATER = Item("Mystic Water", 0x5F)
Item.TWISTED_SPOON = Item("TwistedSpoon", 0x60)
Item.WHT_APRICORN = Item("Wht Apricorn", 0x61)
Item.BLACK_BELT = Item("Black Belt", 0x62)
Item.BLK_APRICORN = Item("Blk Apricorn", 0x63)
Item.PNK_APRICORN = Item("Pnk Apricorn", 0x65)
Item.BLACK_GLASSES = Item("BlackGlasses", 0x66)
Item.SLOWPOKE_TAIL = Item("SlowpokeTail", 0x67)
Item.PINK_BOW = Item("Pink Bow", 0x68)
Item.STICK = Item("Stick", 0x69)
Item.SMOKE_BALL = Item("Smoke Ball", 0x6A)
Item.NEVER_MELT_ICE = Item("NeverMeltIce", 0x6B)
Item.MAGNET = Item("Magnet", 0x6C)
Item.MIRACLE_BERRY = Item("MiracleBerry", 0x6D)
Item.PEARL = Item("Pearl", 0x6E)
Item.BIG_PEARL = Item("Big Pearl", 0x6F)
Item.EVERSTONE = Item("Everstone", 0x70)
Item.SPELL_TAG = Item("Spell Tag", 0x71)
Item.RAGE_CANDY_BAR = Item("RageCandyBar", 0x72)
Item.GS_BALL = Item("GS Ball*", 0x73)
Item.BLUE_CARD = Item("Blue Card*", 0x74)
Item.MIRACLE_SEED = Item("Miracle Seed", 0x75)
Item.THICK_CLUB = Item("Thick Club", 0x76)
Item.FOCUS_BAND = Item("Focus Band", 0x77)
Item.ENERGY_POWDER = Item("EnergyPowder", 0x79)
Item.ENERGY_ROOT = Item("Energy Root", 0x7A)
Item.HEAL_POWDER = Item("Heal Powder", 0x7B)
Item.REVIVAL_HERB = Item("Revival Herb", 0x7C)
Item.HARD_STONE = Item("Hard Stone", 0x7D)
Item.LUCKY_EGG = Item("Lucky Egg", 0x7E)
Item.CARD_KEY = Item("Card Key", 0x7F)
Item.MACHINE_PART = Item("Machine Part", 0x80)
Item.EGG_TICKET = Item("Egg Ticket*", 0x81)
Item.LOST_ITEM = Item("Lost Item", 0x82)
Item.STARDUST = Item("Stardust", 0x83)
Item.STAR_PIECE = Item("Star Piece", 0x84)
Item.BASEMENT_KEY = Item("Basement Key", 0x85)
Item.PASS = Item("Pass", 0x86)
Item.CHARCOAL = Item("Charcoal", 0x8A)
Item.BERRY_JUICE = Item("Berry Juice", 0x8B)
Item.SCOPE_LENS = Item("Scope Lens", 0x8C)
Item.METAL_COAT = Item("Metal Coat", 0x8F)
Item.DRAGON_FANG = Item("Dragon Fang", 0x90)
Item.LEFTOVERS = Item("Leftovers", 0x92)
Item.MYSTERY_BERRY = Item("MysteryBerry", 0x96)
Item.DRAGON_SCALE = Item("Dragon Scale", 0x97)
Item.BERSERK_GENE = Item("Berserk Gene", 0x98)
Item.SACRED_ASH = Item("Sacred Ash", 0x9C)
Item.HEAVY_BALL = Item("Heavy Ball", 0x9D)
Item.FLOWER_MAIL = Item("Flower Mail", 0x9E)
Item.LEVEL_BALL = Item("Level Ball", 0x9F)
Item.LURE_BALL = Item("Lure Ball", 0xA0)
Item.FAST_BALL = Item("Fast Ball", 0xA1)
Item.LIGHT_BALL = Item("Light Ball", 0xA3)
Item.FRIEND_BALL = Item("Friend Ball", 0xA4)
Item.MOON_BALL = Item("Moon Ball", 0xA5)
Item.LOVE_BALL = Item("Love Ball", 0xA6)
Item.NORMAL_BOX = Item("Normal Box", 0xA7)
Item.GORGEOUS_BOX = Item("Gorgeous Box", 0xA8)
Item.SUN_STONE = Item("Sun Stone", 0xA9)
Item.POLKADOT_BOW = Item("Polkadot Bow", 0xAA)
Item.UP_GRADE = Item("Up-Grade", 0xAC)
Item.BERRY = Item("Berry", 0xAD)
Item.GOLD_BERRY = Item("Gold Berry", 0xAE)
Item.SQUIRT_BOTTLE = Item("SquirtBottle", 0xAF)
Item.PARK_BALL = Item("Park Ball", 0xB1)
Item.RAINBOW_WING = Item("Rainbow Wing", 0xB2)
Item.BRICK_PIECE = Item("Brick Piece", 0xB4)
Item.SURF_MAIL = Item("Surf Mail", 0xB5)
Item.LITEBLUEMAIL = Item("Litebluemail", 0xB6)
Item.PORTRAITMAIL = Item("Portraitmail", 0xB7)
Item.LOVELY_MAIL = Item("Lovely Mail", 0xB8)
Item.EON_MAIL = Item("Eon Mail", 0xB9)
Item.MORPH_MAIL = Item("Morph Mail", 0xBA)
Item.BLUESKY_MAIL = Item("Bluesky Mail", 0xBB)
Item.MUSIC_MAIL = Item("Music Mail", 0xBC)
Item.MIRAGE_MAIL = Item("Mirage Mail", 0xBD)
Item.TM01 = Item("TM01", 0xBF)
Item.TM02 = Item("TM02", 0xC0)
Item.TM03 = Item("TM03", 0xC1)
Item.TM04 = Item("TM04", 0xC2)
Item.TM05 = Item("TM05", 0xC4)
Item.TM06 = Item("TM06", 0xC5)
Item.TM07 = Item("TM07", 0xC6)
Item.TM08 = Item("TM08", 0xC7)
Item.TM09 = Item("TM09", 0xC8)
Item.TM10 = Item("TM10", 0xC9)
Item.TM11 = Item("TM11", 0xCA)
Item.TM12 = Item("TM12", 0xCB)
Item.TM13 = Item("TM13", 0xCC)
Item.TM14 = Item("TM14", 0xCD)
Item.TM15 = Item("TM15", 0xCE)
Item.TM16 = Item("TM16", 0xCF)
Item.TM17 = Item("TM17", 0xD0)
Item.TM18 = Item("TM18", 0xD1)
Item.TM19 = Item("TM19", 0xD2)
Item.TM20 = Item("TM20", 0xD3)
Item.TM21 = Item("TM21", 0xD4)
Item.TM22 = Item("TM22", 0xD5)
Item.TM23 = Item("TM23", 0xD6)
Item.TM24 = Item("TM24", 0xD7)
Item.TM25 = Item("TM25", 0xD8)
Item.TM26 = Item("TM26", 0xD9)
Item.TM27 = Item("TM27", 0xDA)
Item.TM28 = Item("TM28", 0xDB)
Item.TM29 = Item("TM29", 0xDD)
Item.TM30 = Item("TM30", 0xDE)
Item.TM31 = Item("TM31", 0xDF)
Item.TM32 = Item("TM32", 0xE0)
Item.TM33 = Item("TM33", 0xE1)
Item.TM34 = Item("TM34", 0xE2)
Item.TM35 = Item("TM35", 0xE3)
Item.TM36 = Item("TM36", 0xE4)
Item.TM37 = Item("TM37", 0xE5)
Item.TM38 = Item("TM38", 0xE6)
Item.TM39 = Item("TM39", 0xE7)
Item.TM40 = Item("TM40", 0xE8)
Item.TM41 = Item("TM41", 0xE9)
Item.TM42 = Item("TM42", 0xEA)
Item.TM43 = Item("TM43", 0xEB)
Item.TM44 = Item("TM44", 0xEC)
Item.TM45 = Item("TM45", 0xED)
Item.TM46 = Item("TM46", 0xEE)
Item.TM47 = Item("TM47", 0xEF)
Item.TM48 = Item("TM48", 0xF0)
Item.TM49 = Item("TM49", 0xF1)
Item.TM50 = Item("TM50", 0xF2)
Item.HM01 = Item("HM01", 0xF3)
Item.HM02 = Item("HM02", 0xF4)
Item.HM03 = Item("HM03", 0xF5)
Item.HM04 = Item("HM04", 0xF6)
Item.HM05 = Item("HM05", 0xF7)
Item.HM06 = Item("HM06", 0xF8)
Item.HM07 = Item("HM07", 0xF9)
Item.HM08 = Item("HM08", 0xFA)
Item.HM09 = Item("HM09", 0xFB)
Item.HM10 = Item("HM10", 0xFC)
Item.HM11 = Item("HM11", 0xFD)
Item.HM12 = Item("HM12", 0xFE)
Item.UNKNOWN = Item("Unknown", 0xFF)
Item.buildReverse()
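# Example usage:
#   Item.fromBytes(0x01)  # -> Item.MASTER_BALL
#   Item.rnd()            # -> a random Item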
| 36.980989 | 118 | 0.699054 | 1,493 | 9,726 | 4.45144 | 0.358339 | 0.018056 | 0.004514 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089473 | 0.127802 | 9,726 | 262 | 119 | 37.122137 | 0.693976 | 0 | 0 | 0.011952 | 0 | 0 | 0.193728 | 0 | 0 | 0 | 0.09419 | 0 | 0 | 1 | 0.01992 | false | 0.003984 | 0.003984 | 0.011952 | 0.039841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f66a163bf1e5878e2474fa634b0488a8aa1b816 | 3,589 | py | Python | train.py | Markus-Goetz/block-prediction | 3f89d17d449f023d60fae5ec6bd712cb6cc8cb50 | [
"MIT"
] | 5 | 2018-11-28T22:18:29.000Z | 2021-08-16T22:09:35.000Z | train.py | Markus-Goetz/block-prediction | 3f89d17d449f023d60fae5ec6bd712cb6cc8cb50 | [
"MIT"
] | null | null | null | train.py | Markus-Goetz/block-prediction | 3f89d17d449f023d60fae5ec6bd712cb6cc8cb50 | [
"MIT"
] | 5 | 2018-12-03T08:40:46.000Z | 2022-02-21T14:21:52.000Z | #!/usr/bin/env python
import argparse
import pickle
import h5py
from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.layers import Activation, add, BatchNormalization, Conv2D, Dense, Dropout, Flatten, Input, ZeroPadding2D
from keras.models import load_model, Model
from keras.regularizers import l2
from keras.utils import plot_model
import numpy as np
def positive_int(value):
try:
parsed = int(value)
if not parsed > 0:
raise ValueError()
return parsed
except ValueError:
raise argparse.ArgumentTypeError('value must be an positive integer')
def parse_cli():
parser = argparse.ArgumentParser()
parser.add_argument(
'-e', '--epochs',
nargs='?',
type=positive_int,
action='store',
default=10,
help='number of training epochs'
)
parser.add_argument(
metavar='TRAIN',
type=str,
dest='train',
help='path to the HDF5 file with the training data'
)
parser.add_argument(
metavar='MODEL',
type=str,
dest='model',
help='path where to store the model'
)
return parser.parse_args()
def load_data(path):
with h5py.File(path, 'r') as handle:
data = np.array(handle['diagonalset'])
labels = np.array(handle['vectorset'])
return data, labels
def preprocess(data, labels):
# simply add an additional dimension for the channels for data
# swap axis of the label set
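    # shape sketch (inferred from build_model below, not guaranteed by
    # the data): data (N, H, W) -> (N, H, W, 1); labels (W, N) -> (N, W)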
return np.expand_dims(data, axis=3), np.moveaxis(labels, 0, -1)
def build_model(input_shape):
input_img = Input(shape=input_shape)
# first bottleneck unit
bn_1 = BatchNormalization()(input_img)
activation_1 = Activation('selu')(bn_1)
conv_1 = Conv2D(32, kernel_size=(5, 5,), padding='same', kernel_regularizer=l2(0.02))(activation_1)
bn_2 = BatchNormalization()(conv_1)
activation_2 = Activation('selu')(bn_2)
conv_2 = Conv2D(128, kernel_size=(3, 3,), padding='same', kernel_regularizer=l2(0.02))(activation_2)
merged = add([input_img, conv_2])
# corner detection
bn_3 = BatchNormalization()(merged)
padding = ZeroPadding2D(padding=(0, 3))(bn_3)
conv_3 = Conv2D( 32, kernel_size=(21, 7,), padding='valid', activation='tanh')(padding)
conv_4 = Conv2D(128, kernel_size=( 1, 3,), padding='same', activation='tanh')(conv_3)
# fully-connected predictor
flat = Flatten()(conv_4)
classify = Dense(512, activation='sigmoid')(flat)
dropout = Dropout(0.1)(classify)
result = Dense(input_shape[1], activation='sigmoid')(dropout)
model = Model(inputs=input_img, outputs=result)
model.compile(optimizer=optimizers.Nadam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])
return model
def train_network(model, data, labels, model_file, epochs):
plot_model(model, to_file='{}.png'.format(model_file), show_shapes=True)
checkpoint = ModelCheckpoint(model_file, monitor='val_loss', verbose=True, save_best_only=True, save_weights_only=False, mode='auto')
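    # class_weight down-weights label 0 relative to the (presumably rarer)
    # label 1; validation_split=1/5 holds out the last 20% of the samples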
training = model.fit(data, labels, epochs=epochs, batch_size=8, validation_split=1.0/5.0, class_weight={0: 0.1, 1: 0.9}, callbacks=[checkpoint])
with open('{}.history'.format(model_file), 'wb') as handle:
pickle.dump(training.history, handle)
if __name__ == '__main__':
arguments = parse_cli()
data, labels = preprocess(*load_data(arguments.train))
model = build_model(input_shape=data.shape[1:])
train_network(model, data, labels, arguments.model, arguments.epochs)
| 31.482456 | 148 | 0.679019 | 474 | 3,589 | 4.987342 | 0.381857 | 0.022843 | 0.021574 | 0.020305 | 0.059222 | 0.036379 | 0.036379 | 0.036379 | 0 | 0 | 0 | 0.029036 | 0.193926 | 3,589 | 113 | 149 | 31.761062 | 0.788109 | 0.048203 | 0 | 0.063291 | 0 | 0 | 0.088002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075949 | false | 0 | 0.126582 | 0.012658 | 0.265823 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f6cda2b221292f939019bccf17b0c0c955ce9d9 | 492 | py | Python | options/test_options.py | salfamusic/encoder4editing | 8263cb9d42cd4811f4ab2768dfcc9085259fc251 | [
"MIT"
] | null | null | null | options/test_options.py | salfamusic/encoder4editing | 8263cb9d42cd4811f4ab2768dfcc9085259fc251 | [
"MIT"
] | null | null | null | options/test_options.py | salfamusic/encoder4editing | 8263cb9d42cd4811f4ab2768dfcc9085259fc251 | [
"MIT"
] | null | null | null | from .base_options import BaseOptions
class TestOptions(BaseOptions):
def initialize(self, parser):
BaseOptions.initialize(self, parser)
parser.set_defaults(phase='test')
parser.add_argument('--only_for_test', type=str, default='...')
parser.add_argument('--network_pkl', type=str, default='gdrive:networks/stylegan2-ffhq-config-f.pkl')
parser.add_argument('--max_result_snapshots', default=30, help='max result snapshots')
return parser | 41 | 109 | 0.705285 | 59 | 492 | 5.711864 | 0.610169 | 0.080119 | 0.151335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007264 | 0.160569 | 492 | 12 | 110 | 41 | 0.808717 | 0 | 0 | 0 | 0 | 0 | 0.243408 | 0.131846 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f745bc0d930118b19141f5dfac7b6915950c7e9 | 1,772 | py | Python | f5/bigiq/cm/device/licensing/pool/initial_activation.py | nghia-tran/f5-common-python | acb23a6e5830a119b460c19a578654113419f5c3 | [
"Apache-2.0"
] | 272 | 2016-02-23T06:05:44.000Z | 2022-02-20T02:09:32.000Z | f5/bigiq/cm/device/licensing/pool/initial_activation.py | nghia-tran/f5-common-python | acb23a6e5830a119b460c19a578654113419f5c3 | [
"Apache-2.0"
] | 1,103 | 2016-02-11T17:48:03.000Z | 2022-02-15T17:13:37.000Z | f5/bigiq/cm/device/licensing/pool/initial_activation.py | nghia-tran/f5-common-python | acb23a6e5830a119b460c19a578654113419f5c3 | [
"Apache-2.0"
] | 167 | 2016-02-11T17:48:21.000Z | 2022-01-17T20:13:05.000Z | # coding=utf-8
#
# Copyright 2017 F5 Networks Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""BIG-IQ® license pool regkeys.
REST URI
``http://localhost/mgmt/cm/device/licensing/pool/initial-activation``
REST Kind
``cm:device:licensing:pool:initial-activation:*``
"""
from f5.bigiq.resource import Collection
from f5.bigiq.resource import Resource
class Initial_Activations(Collection):
def __init__(self, pool):
super(Initial_Activations, self).__init__(pool)
self._meta_data['required_json_kind'] = \
'cm:device:licensing:pool:initial-activation:initialactivationworkercollectionstate' # NOQA
self._meta_data['allowed_lazy_attributes'] = [Initial_Activation]
self._meta_data['attribute_registry'] = {
'cm:device:licensing:pool:initial-activation:initialactivationworkeritemstate': Initial_Activation # NOQA
}
class Initial_Activation(Resource):
def __init__(self, initial_activations):
super(Initial_Activation, self).__init__(initial_activations)
self._meta_data['required_creation_parameters'] = {'name', 'regKey'}
self._meta_data['required_json_kind'] = \
'cm:device:licensing:pool:initial-activation:initialactivationworkeritemstate'
| 36.916667 | 118 | 0.738149 | 219 | 1,772 | 5.780822 | 0.488584 | 0.120853 | 0.067141 | 0.082938 | 0.28752 | 0.248025 | 0.218009 | 0.104265 | 0.104265 | 0.104265 | 0 | 0.008081 | 0.161964 | 1,772 | 47 | 119 | 37.702128 | 0.843771 | 0.426072 | 0 | 0.117647 | 0 | 0 | 0.35146 | 0.287009 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.117647 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f78b69e4772845ec293a193b3ef41aeb3b1c4fc | 1,491 | py | Python | prophet_gcp/utils.py | SpikeLab-CL/paralell_prophet | c04b069ae27eb8645dd10e0cf9992415e585ba62 | [
"WTFPL"
] | 7 | 2018-10-18T18:06:27.000Z | 2021-11-02T19:53:31.000Z | prophet_gcp/utils.py | SpikeLab-CL/paralell_prophet | c04b069ae27eb8645dd10e0cf9992415e585ba62 | [
"WTFPL"
] | null | null | null | prophet_gcp/utils.py | SpikeLab-CL/paralell_prophet | c04b069ae27eb8645dd10e0cf9992415e585ba62 | [
"WTFPL"
] | 5 | 2020-01-23T22:03:00.000Z | 2022-02-17T08:28:51.000Z | import dask.dataframe as dd
import pandas as pd
def load_parse_file(file_path, date_column="date"):
"""Loads a file into Pandas dataframe, and parse the datetime columns
Arguments:
file_path: string path to the input file.
Returns:
Dataframe: dask.dataframe from the file
"""
data = dd.read_csv(file_path)
data[date_column] = dd.to_datetime(data[date_column], format='%Y-%m-%d')
return data
def get_frames_by_id(dataframe, index_col=None):
"""Group by the dataframe by index
Arguments:
dataframe: dask.dataframe.
index_col: string with the index_col to order
Returns:
list: list of dask.dataframe with the data filtered
"""
    assert index_col is not None, "Must specify an index column"
indexs_vals = dataframe[index_col].unique().compute()
dfs = []
for index in indexs_vals:
print("Doing ",index)
d = dataframe[(dataframe[index_col] == index)]
d = d.compute(scheduler='processes')
dfs.append(d)
return dfs
def write_results(dataframes=None, file_name=None):
"""Group by the dataframe by index
Arguments:
dataframes: pandas.dataframe.
Returns:
string: path to the output file
"""
file_name = "output.csv" if file_name == None else file_name
dataframe_ = pd.concat(dataframes, axis=0, copy=False, sort=False)
dataframe_.to_csv(file_name)
return file_name | 32.413043 | 77 | 0.646546 | 199 | 1,491 | 4.688442 | 0.38191 | 0.051447 | 0.072883 | 0.032154 | 0.083601 | 0.083601 | 0.083601 | 0.083601 | 0 | 0 | 0 | 0.000906 | 0.259557 | 1,491 | 46 | 78 | 32.413043 | 0.844203 | 0.326626 | 0 | 0 | 0 | 0 | 0.075862 | 0 | 0 | 0 | 0 | 0 | 0.047619 | 1 | 0.142857 | false | 0 | 0.095238 | 0 | 0.380952 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
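# Minimal end-to-end sketch of the helpers above; the file and column
# names are illustrative placeholders, not part of this module:
#
#   data = load_parse_file("input.csv", date_column="date")
#   frames = get_frames_by_id(data, index_col="series_id")
#   # ... fit one Prophet model per frame here ...
#   print(write_results(dataframes=frames, file_name="forecasts.csv"))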
8f79be8eb8ccee8775fd0b6dbe06883ce3e72270 | 4,756 | py | Python | eva-accession-release-automation/run_release_in_embassy/analyze_vcf_validation_results.py | sundarvenkata-EBI/eva-accession | b26f0b5e5acaafe63d0755bad81837b9a5976237 | [
"Apache-2.0"
] | 3 | 2018-02-28T17:14:53.000Z | 2020-03-17T17:19:45.000Z | eva-accession-release-automation/run_release_in_embassy/analyze_vcf_validation_results.py | sundarvenkata-EBI/eva-accession | b26f0b5e5acaafe63d0755bad81837b9a5976237 | [
"Apache-2.0"
] | 52 | 2018-03-29T15:44:23.000Z | 2022-02-16T00:54:28.000Z | eva-accession-release-automation/run_release_in_embassy/analyze_vcf_validation_results.py | sundarvenkata-EBI/eva-accession | b26f0b5e5acaafe63d0755bad81837b9a5976237 | [
"Apache-2.0"
] | 15 | 2018-03-02T13:34:19.000Z | 2021-06-22T15:54:59.000Z | #!/usr/bin/env python3
# Copyright 2019 EMBL - European Bioinformatics Institute
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import click
import glob
import logging
import sys
from ebi_eva_common_pyutils.command_utils import run_command_with_output
from ebi_eva_common_pyutils.logger import logging_config as log_cfg
from run_release_in_embassy.release_metadata import vcf_validation_output_file_pattern, asm_report_output_file_pattern
logger = log_cfg.get_logger(__name__)
def analyze_vcf_validation_files(vcf_validation_report_files):
exit_code = 0
vcf_validation_report_error_classes_to_ignore = ["Error: Duplicated variant",
"Warning: Reference and alternate alleles "
"do not share the first nucleotide",
"the input file is not valid",
"the input file is valid",
"not listed in a valid meta-data ALT entry"]
vcf_validation_error_grep_command_chain = " | ".join(['grep -v "{0}"'.format(error_class) for error_class in
vcf_validation_report_error_classes_to_ignore])
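    # chained together this composes a shell fragment such as
    #   grep -v "Error: Duplicated variant" | grep -v "..." | wc -l
    # so only lines outside the ignored error classes are counted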
for vcf_validation_report_file in vcf_validation_report_files:
logger.info("Analyzing file {0} ....".format(vcf_validation_report_file))
command_to_run = "cat {0} | {1} | wc -l".format(vcf_validation_report_file,
vcf_validation_error_grep_command_chain)
number_of_lines_with_unusual_errors = \
int(run_command_with_output("Checking unusual errors in {0}".format(vcf_validation_report_file),
command_to_run, return_process_output=True))
if number_of_lines_with_unusual_errors > 0:
logger.error("Unusual error(s) found in VCF validation log: {0}. \nRun command\n {1} \nfor details."
.format(vcf_validation_report_file, command_to_run))
exit_code = -1
return exit_code
def analyze_asm_report_files(asm_report_files):
exit_code = 0
assembly_report_error_classes_to_ignore = ["not present in FASTA file", "does not match the reference sequence"]
asm_report_error_grep_command_chain = " | ".join(['grep -v "{0}"'.format(error_class) for error_class in
assembly_report_error_classes_to_ignore])
for asm_report_file in asm_report_files:
logger.info("Analyzing file {0} ....".format(asm_report_file))
command_to_run = "cat {0} | {1} | wc -l".format(asm_report_file, asm_report_error_grep_command_chain)
number_of_lines_with_unusual_errors = \
int(run_command_with_output("Checking unusual errors in {0}".format(asm_report_file), command_to_run,
return_process_output=True))
if number_of_lines_with_unusual_errors > 0:
logger.error("Unusual error(s) found in assembly report log: {0}. \nRun command\n {1} \nfor details."
.format(asm_report_file, command_to_run))
exit_code = -1
return exit_code
def analyze_vcf_validation_results(species_release_folder, assembly_accession):
vcf_validation_report_files = glob.glob("{0}/{1}/{2}".format(species_release_folder, assembly_accession,
vcf_validation_output_file_pattern))
exit_code = analyze_vcf_validation_files(vcf_validation_report_files)
asm_report_files = glob.glob("{0}/{1}/{2}".format(species_release_folder, assembly_accession,
asm_report_output_file_pattern))
exit_code = exit_code or analyze_asm_report_files(asm_report_files)
sys.exit(exit_code)
@click.option("--species-release-folder", required=True)
@click.option("--assembly-accession", required=True)
@click.command()
def main(species_release_folder, assembly_accession):
analyze_vcf_validation_results(species_release_folder, assembly_accession)
if __name__ == '__main__':
main()
| 52.263736 | 118 | 0.660849 | 596 | 4,756 | 4.901007 | 0.271812 | 0.089011 | 0.071551 | 0.039028 | 0.618281 | 0.535433 | 0.495036 | 0.436837 | 0.371106 | 0.295104 | 0 | 0.009994 | 0.263667 | 4,756 | 90 | 119 | 52.844444 | 0.824101 | 0.126156 | 0 | 0.163934 | 0 | 0.032787 | 0.163448 | 0.005794 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065574 | false | 0 | 0.114754 | 0 | 0.213115 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f7c0232a0c1a08c2f41b65eb62c5d3ff5bd11ae | 3,562 | py | Python | tests/test_subnet.py | Diapolo10/iplib | 001479b2095fd8008f9db726b1bd9c0b0ee16eac | [
"MIT"
] | 6 | 2021-04-18T19:46:40.000Z | 2021-06-28T22:03:25.000Z | tests/test_subnet.py | Diapolo10/iplib | 001479b2095fd8008f9db726b1bd9c0b0ee16eac | [
"MIT"
] | 10 | 2021-05-01T19:46:35.000Z | 2021-07-04T08:39:35.000Z | tests/test_subnet.py | Diapolo10/iplib | 001479b2095fd8008f9db726b1bd9c0b0ee16eac | [
"MIT"
] | 4 | 2021-05-01T22:04:24.000Z | 2021-06-13T14:29:20.000Z | """Unit tests for iplib3.subnet"""
import pytest
from iplib3.subnet import ( # pylint: disable=import-error,no-name-in-module
SubnetMask,
PureSubnetMask,
)
from iplib3.constants import ( # pylint: disable=import-error,no-name-in-module
IPV4_MIN_SUBNET_VALUE,
IPV4_MAX_SUBNET_VALUE,
IPV6_MAX_SUBNET_VALUE,
)
def test_pure_subnet_mask():
"""Test the PureSubnetMask base class"""
_ = PureSubnetMask()
def test_pure_subnet_mask_prefix_length():
"""Test PureSubnetMask prefix length"""
subnet = PureSubnetMask()
another = PureSubnetMask()
another._prefix_length = None
assert subnet._prefix_length == IPV4_MIN_SUBNET_VALUE
assert another._prefix_length is None
def test_pure_subnet_mask_string():
"""Test PureSubnetMask string represesetation"""
subnet = PureSubnetMask()
assert str(subnet) == '0'
assert repr(subnet) == "iplib3.PureSubnetMask('0')"
def test_pure_subnet_mask_equality():
"""Test PureSubnetMask equality"""
subnet = PureSubnetMask()
assert subnet == PureSubnetMask()
assert subnet == IPV4_MIN_SUBNET_VALUE
assert subnet == '0'
def test_pure_subnet_mask_inequality():
"""Test PureSubnetMask inequality"""
subnet = PureSubnetMask()
another = PureSubnetMask()
another._prefix_length = None
assert subnet != 3.14
assert subnet != another
def test_subnet_mask_subnet_type():
"""Test SubnetMask subnet type"""
assert SubnetMask()._subnet_type == 'ipv6'
assert SubnetMask('255.255.255.0')._subnet_type == 'ipv4'
def test_subnet_mask_string():
"""Test SubnetMask string representation"""
assert (
repr(SubnetMask(24, subnet_type='ipv4'))
== "iplib3.SubnetMask('255.255.255.0')")
assert repr(SubnetMask(24)) == "iplib3.SubnetMask('24')"
def test_subnet_mask_subnet_to_num():
"""Test SubnetMask subnet to number converter"""
assert SubnetMask._subnet_to_num(None) is None
assert SubnetMask._subnet_to_num(24) == 24
assert SubnetMask._subnet_to_num('24') == 24
assert SubnetMask._subnet_to_num(None, subnet_type='ipv4') is None
assert SubnetMask._subnet_to_num(24, subnet_type='ipv4') == 24
assert SubnetMask._subnet_to_num('24', subnet_type='ipv4') == 24
assert SubnetMask._subnet_to_num('255.255.128.0', subnet_type='ipv4') == 17
def test_subnet_mask_subnet_to_num_errors():
"""Test SubnetMask subnet to number converter errors"""
with pytest.raises(TypeError):
SubnetMask._subnet_to_num([255, 255, 255, 0])
with pytest.raises(ValueError):
SubnetMask._subnet_to_num('255.255.255.0')
with pytest.raises(ValueError):
SubnetMask._subnet_to_num('3e2')
with pytest.raises(ValueError):
SubnetMask._subnet_to_num(IPV4_MAX_SUBNET_VALUE+1, subnet_type='ipv4')
with pytest.raises(ValueError):
SubnetMask._subnet_to_num(IPV6_MAX_SUBNET_VALUE+1)
with pytest.raises(ValueError):
SubnetMask._subnet_to_num('255.6.0.0', subnet_type='ipv4')
def test_subnet_mask_prefix_to_subnet_mask():
"""Test SubnetMask number to mask converter"""
assert (
SubnetMask._prefix_to_subnet_mask(24, subnet_type='ipv4')
== '255.255.255.0'
)
def test_subnet_mask_prefix_to_subnet_mask_errors():
"""Test SubnetMask number to mask converter"""
with pytest.raises(ValueError):
SubnetMask._prefix_to_subnet_mask(24, subnet_type='ipv6')
with pytest.raises(ValueError):
SubnetMask._prefix_to_subnet_mask(IPV4_MAX_SUBNET_VALUE+1, subnet_type='ipv4')
| 29.683333 | 86 | 0.714486 | 455 | 3,562 | 5.265934 | 0.138462 | 0.113523 | 0.068865 | 0.11394 | 0.641903 | 0.5697 | 0.474124 | 0.450751 | 0.308431 | 0.20576 | 0 | 0.044302 | 0.169848 | 3,562 | 119 | 87 | 29.932773 | 0.765979 | 0.150477 | 0 | 0.236111 | 0 | 0 | 0.067814 | 0.028003 | 0 | 0 | 0 | 0 | 0.291667 | 1 | 0.152778 | false | 0 | 0.041667 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f7e37d4906c116b6d7ca399d7e8dabb52aaae91 | 2,570 | py | Python | examples/linear_regression_with_database.py | facebookresearch/svinfer | 14edce1af6c91e622b8691f5d78a490a8585e7b5 | [
"Apache-2.0"
] | 14 | 2020-05-29T18:45:16.000Z | 2022-03-21T03:30:27.000Z | examples/linear_regression_with_database.py | facebookresearch/svinfer | 14edce1af6c91e622b8691f5d78a490a8585e7b5 | [
"Apache-2.0"
] | null | null | null | examples/linear_regression_with_database.py | facebookresearch/svinfer | 14edce1af6c91e622b8691f5d78a490a8585e7b5 | [
"Apache-2.0"
] | 1 | 2020-07-30T17:01:20.000Z | 2020-07-30T17:01:20.000Z | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Illustrate how to run linear model (y ~ x1 + x2) with statistically
valid inference when x1, x2 contains designed noise, when training data
is stored as a table in SQLite database.
"""
from svinfer.processor import DatabaseProcessor
from svinfer.linear_model import LinearRegression
import sqlite3
from linear_regression_with_dataframe import simulate_training_data
if __name__ == "__main__":
# get training data
    # assume the variances of the added noise are 4 and 1 for the two predictors
    # assume the training data is stored as a table called my_data in a SQLite database
x_s2 = [4, 1]
data = simulate_training_data(x_s2)
connection = sqlite3.connect(":memory:")
data.to_sql("my_data", con=connection)
# fit y ~ x1 + x2, where x1 and x2 have added noise
db_data = DatabaseProcessor(connection, "my_data")
model = LinearRegression(
["x1", "x2"], # column names for predictors
"y", # column name for the response
x_s2, # variances of the added noises to each predictor
random_state=123, # optional, to ensure reproducibility
).fit(db_data)
# check result
print("beta_tilde is: \n{}".format(model.beta))
# expect results to be close to
# beta_tilde is:
# [10.53475783 12.26662045 -3.11457588]
print("beta_tilde's standard error is: \n{}".format(model.beta_standarderror))
# expect results to be close to
# beta_tilde's standard error is:
# [1.28940235 0.45779356 0.17814397]
print("beta_tile's variance-covariance matrix: \n{}".format(model.beta_vcov))
# expect results to be close to
# beta_tile's variance-covariance matrix:
# [[1.66255843 0.35312458 -0.17656444]
# [0.35312458 0.20957495 -0.07915853]
# [-0.17656444 -0.07915853 0.03173527]]
print("estimated residual variance is {}".format(model.sigma_sq))
# expect results to be close to
# estimated residual variance is 0.5136891806650965
| 39.538462 | 85 | 0.716342 | 374 | 2,570 | 4.828877 | 0.470588 | 0.033223 | 0.033223 | 0.037652 | 0.172204 | 0.153378 | 0.083056 | 0.036545 | 0 | 0 | 0 | 0.087567 | 0.19572 | 2,570 | 64 | 86 | 40.15625 | 0.786164 | 0.618677 | 0 | 0 | 0 | 0 | 0.17766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
8f808924f32b0bba54dcbd5d9c58b33439b7f83b | 2,705 | py | Python | sightpy/backgrounds/skybox.py | ulises1229/Python-Raytracer | ad89b9dabda1c3eeb68af2d3578c3f38dee9f5b9 | [
"MIT"
] | 326 | 2020-08-14T07:29:40.000Z | 2022-03-30T11:13:32.000Z | sightpy/backgrounds/skybox.py | ulises1229/Python-Raytracer | ad89b9dabda1c3eeb68af2d3578c3f38dee9f5b9 | [
"MIT"
] | 7 | 2020-08-14T21:57:56.000Z | 2021-06-09T00:53:04.000Z | sightpy/backgrounds/skybox.py | ulises1229/Python-Raytracer | ad89b9dabda1c3eeb68af2d3578c3f38dee9f5b9 | [
"MIT"
] | 37 | 2020-08-14T17:37:56.000Z | 2022-03-30T09:37:22.000Z | from ..geometry import Cuboid_Collider, Primitive
from ..materials import Material
from ..utils.vector3 import vec3
from ..utils.constants import SKYBOX_DISTANCE
from ..utils.image_functions import load_image, load_image_as_linear_sRGB
from .util.blur_background import blur_skybox
class SkyBox(Primitive):
def __init__(self, cubemap, center = vec3(0.,0.,0.), light_intensity = 0.0, blur = 0.0):
super().__init__(center, SkyBox_Material(cubemap, light_intensity, blur), shadow = False)
l = SKYBOX_DISTANCE
self.light_intensity = light_intensity
        # single cuboid collider enclosing the scene at SKYBOX_DISTANCE
        self.collider_list += [Cuboid_Collider(assigned_primitive = self, center = center, width = 2*l, height =2*l ,length =2*l )]
def get_uv(self, hit):
u,v = hit.collider.get_uv(hit)
u,v = u/4,v/3
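        # the cubemap texture is assumed to be a 4x3 face atlas (the
        # standard horizontal-cross layout), so the face-local uv from
        # the collider is rescaled into atlas coordinates here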
return u,v
class SkyBox_Material(Material):
def __init__(self, cubemap, light_intensity, blur):
self.texture = load_image_as_linear_sRGB("sightpy/backgrounds/" + cubemap)
if light_intensity != 0.0:
self.lightmap = load_image("sightpy/backgrounds/lightmaps/" + cubemap)
if blur != 0.0:
self.blur_image = blur_skybox(load_image("sightpy/backgrounds/" + cubemap), blur, cubemap)
self.blur = blur
self.light_intensity = light_intensity
self.repeat = 1.0
def get_texture_color(self, hit, ray):
u,v = hit.get_uv()
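        # sample the (optionally blurred) atlas: the negative row index
        # flips v vertically, and ``repeat`` tiles the texture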
if (self.blur != 0.0) :
im = self.blur_image[-((v * self.blur_image.shape[0]*self.repeat ).astype(int)% self.blur_image.shape[0]) , (u * self.blur_image.shape[1]*self.repeat).astype(int) % self.blur_image.shape[1] ].T
else:
im = self.texture[-((v * self.texture.shape[0]*self.repeat ).astype(int)% self.texture.shape[0]) , (u * self.texture.shape[1]*self.repeat).astype(int) % self.texture.shape[1] ].T
if (ray.depth != 0) and (self.light_intensity != 0.0):
ls = self.lightmap[-((v * self.texture.shape[0]*self.repeat ).astype(int)% self.texture.shape[0]) , (u * self.texture.shape[1]*self.repeat).astype(int) % self.texture.shape[1] ].T
color = vec3(im[0] + self.light_intensity * ls[0], im[1] + self.light_intensity * ls[1], im[2] + self.light_intensity * ls[2])
else:
color = vec3(im[0] , im[1] , im[2] )
return color
def get_color(self, scene, ray, hit):
hit.point = (ray.origin + ray.dir * hit.distance)
return hit.material.get_texture_color(hit,ray)
| 46.637931 | 209 | 0.598521 | 361 | 2,705 | 4.315789 | 0.204986 | 0.107831 | 0.082157 | 0.073171 | 0.28113 | 0.195122 | 0.195122 | 0.18742 | 0.139923 | 0.139923 | 0 | 0.024723 | 0.267283 | 2,705 | 57 | 210 | 47.45614 | 0.761352 | 0.002218 | 0 | 0.097561 | 0 | 0 | 0.026515 | 0.011364 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.146341 | 0 | 0.390244 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56a8d54ad528be2aaf7182e51f33608226c5e2df | 45,398 | py | Python | CGATPipelines/pipeline_exome_cancer.py | cdrakesmith/CGATPipelines | 3c94ae4f9d87d51108255dc405c4b95af7c8b694 | [
"MIT"
] | null | null | null | CGATPipelines/pipeline_exome_cancer.py | cdrakesmith/CGATPipelines | 3c94ae4f9d87d51108255dc405c4b95af7c8b694 | [
"MIT"
] | null | null | null | CGATPipelines/pipeline_exome_cancer.py | cdrakesmith/CGATPipelines | 3c94ae4f9d87d51108255dc405c4b95af7c8b694 | [
"MIT"
] | null | null | null | """
======================
Exome Cancer pipeline
======================
.. todo::
*Final filtering of SNPs/INDELs is currently done in the
reporting. This should be handled by the pipeline. The SNP output
would also then be passed to the mutational signature task
*Document
*fully make phone home/key option work - GATK public key?
*Summarise Indel calling (size of indels called)
The exome cancer pipeline imports unmapped reads from matched sample fastqs or
sra files and aligns them to the genome using BWA. Post-alignment
quality control is performed using Picard. The pipeline then performs
local realignment around indels and base quality score recalibration
using GATK. Next variants (SNVs and indels) are called and filtered
1. Align to genome using gapped alignment (BWA)
2. Check alignment quality and target region coverage (Picard)
3. Local realignment and BQSR in GATK
4. Variant calling (SNPs) on control samples using muTect to generate
a "panel of normal" variants
5a. Variant calling (SNPs) with tumour samples using muTect including
filtering
5b. Variant calling (indels) using Strelka
6a. Variant annotation using SNPeff, GATK VariantAnnotator, and SnpSift
6b. Variant annotation with data from eBIO
6c. Load Network of Cancer Genes (NCG) for Variant annotation in reporting
.. note::
An optional downsampling analysis can also be performed to assess how
the coverage of the control sample affects the called variants
1. Currently the pipeline is not able to deal with replicates, i.e.
replicates will be treated separately.
Usage
=====
See :ref:`PipelineSettingUp` and :ref:`PipelineRunning` on general
information how to use CGAT pipelines.
Configuration
-------------
Input
-----
Reads are imported by placing files or linking to files in the
:term:`working directory`.
The default file format assumes the following convention:
<patientID>-<tissue>-<replicate>.<suffix>
``patientID`` and ``tissue`` make up an :term:`experiment`, while ``replicate``
denotes the :term:`replicate` within an :term:`experiment`.
The ``suffix`` determines the file type.
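For example, ``NU16C-Control-1.fastq.1.gz`` and
``NU16C-Tumour-1.fastq.1.gz`` would denote the matched control and
tumour samples of patient ``NU16C`` (assuming ``sample_control`` and
``sample_tumour`` are set to ``Control`` and ``Tumour`` in the ini).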
The following suffixes/file types are possible:
sra
Short-Read Archive format. Reads will be extracted using the
:file:`fastq-dump` tool.
fastq.gz
Single-end reads in fastq format.
fastq.1.gz, fastq.2.gz
Paired-end reads in fastq format. The two fastq files must be sorted
by read-pair.
.. note::
Quality scores need to be of the same scale for all input
files. Thus it might be difficult to mix different formats.
Documentation
-------------
If you would like the genes of interest to be flagged in your vcf,
make add_genes_of_interest=1 (default=0) and provide a list of comma
separated genes (without spaces) in the ini file.
If you would like to annotate genes of interest with a particular
value in the results table, create a file called [label]_annotations.tsv
in your working directory listing all the genes. For example, to
annotate all genes identified in a previous shRNA screen, add a file
called shRNA_annotations.tsv listing the genes and the results table
will contain a column called "shRNA" with values "shRNA" and "null".
Requirements
------------
On top of the default CGAT setup, the pipeline requires the following
software to be in the path:
+--------------------+------------+-------------------------------------------+
|*Program*           |*Version*   |*Purpose*                                  |
+--------------------+------------+-------------------------------------------+
|Stampy              |>=0.9.0     |read mapping                               |
+--------------------+------------+-------------------------------------------+
|BWA                 |            |read mapping                               |
+--------------------+------------+-------------------------------------------+
|SAMtools            |            |filtering, SNV / indel calling             |
+--------------------+------------+-------------------------------------------+
|BEDTools            |            |filtering                                  |
+--------------------+------------+-------------------------------------------+
|sra-tools           |            |extracting reads from .sra files           |
+--------------------+------------+-------------------------------------------+
|picard              |>=1.38      |bam/sam files. The .jar files need to be in|
|                    |            |your CLASSPATH environment variable.       |
+--------------------+------------+-------------------------------------------+
|vcf-tools           |            |VCF filtering                              |
+--------------------+------------+-------------------------------------------+
|GATK                |2.5-2       |local realignment, BQSR, variant calling   |
+--------------------+------------+-------------------------------------------+
|SNPeff              |3.3         |variant annotation                         |
+--------------------+------------+-------------------------------------------+
Pipeline output
===============
The major output is a csvdb containing quality control information
and variant information by patientID and an html report with
similar information.
Example
=======
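A typical invocation (a sketch using the standard CGAT pipeline
command line; adjust options to your site)::
   python pipeline_exome_cancer.py make full -v5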
Code
====
"""
# load modules
from ruffus import *
# from rpy2.robjects import r as R
import numpy
import CGAT.Experiment as E
import sys
import os
import sqlite3
import CGAT.IOTools as IOTools
import CGATPipelines.PipelineMapping as PipelineMapping
import CGATPipelines.PipelineMappingQC as PipelineMappingQC
import CGATPipelines.Pipeline as P
import re
import CGATPipelines.PipelineExome as PipelineExome
USECLUSTER = True
#########################################################################
#########################################################################
def connect():
'''connect to database.
Use this method to connect to additional databases.
Returns a database connection.
'''
dbh = sqlite3.connect(PARAMS["database_name"])
return dbh
#########################################################################
P.getParameters(
["%s/pipeline.ini" % os.path.splitext(__file__)[0],
"../pipeline.ini",
"pipeline.ini"],
defaults={
'paired_end': False},
only_import=__name__ != "__main__")
PARAMS = P.PARAMS
PipelineMapping.PARAMS = PARAMS
PipelineMappingQC.PARAMS = PARAMS
PipelineExome.PARAMS = PARAMS
#########################################################################
#########################################################################
# Load manual annotations
#########################################################################
@transform("*_annotations.tsv",
suffix(".tsv"),
".load")
def loadManualAnnotations(infile, outfile):
    '''reformat a one-gene-per-line annotation list as a two-column
    table and load it into the database'''
    tmp = P.getTempFilename(".")
annotation = P.snip(infile, "_annotations.tsv")
with IOTools.openFile(tmp, "w") as outf:
outf.write("%s\tgene_id\n" % annotation)
with IOTools.openFile(infile, "r") as inf:
for line in inf:
outf.write("%s\t%s" % (annotation, line))
P.load(tmp, outfile, options="--add-index=gene_id")
os.unlink(tmp)
#########################################################################
# Alignment to a reference genome
#########################################################################
@follows(mkdir("bam"))
@transform(("*.fastq.1.gz", "*.fastq.gz", "*.sra"),
regex(r"(\S+).(fastq.1.gz|fastq.gz|sra)"),
r"bam/\1.bam")
def mapReads(infile, outfile):
'''Map reads to the genome using BWA, sort and index BAM file,
generate alignment statistics and deduplicate using Picard'''
job_threads = PARAMS["bwa_threads"]
job_memory = PARAMS["bwa_memory"]
if PARAMS["bwa_algorithm"] == "aln":
m = PipelineMapping.BWA(
remove_non_unique=PARAMS["bwa_remove_non_unique"],
strip_sequence=False)
elif PARAMS["bwa_algorithm"] == "mem":
m = PipelineMapping.BWAMEM(
remove_non_unique=PARAMS["bwa_remove_non_unique"],
strip_sequence=False)
    else:
        raise ValueError(
            "bwa algorithm '%s' not known" % PARAMS["bwa_algorithm"])
statement = m.build((infile,), outfile)
print(statement)
P.run()
@merge(mapReads, "picard_duplicate_stats.load")
def loadPicardDuplicateStats(infiles, outfile):
'''Merge Picard duplicate stats into single table and load into SQLite.'''
PipelineMappingQC.loadPicardDuplicateStats(infiles, outfile)
#########################################################################
# Post-alignment QC
#########################################################################
@follows(mapReads)
@merge("bam/*.picard_stats", "picard_stats.load")
def loadPicardAlignStats(infiles, outfile):
'''Merge Picard alignment stats into single table and load into SQLite.'''
PipelineMappingQC.loadPicardAlignmentStats(infiles, outfile)
#########################################################################
@transform(mapReads, regex(r"bam/(\S+).bam"), r"bam/\1.cov")
def buildCoverageStats(infile, outfile):
'''Generate coverage statistics for regions of interest from a
bed file using Picard'''
# TS check whether this is always required or specific to current baits
# file
# baits file requires modification to make picard accept it
# this is performed before CalculateHsMetrics
to_cluster = USECLUSTER
baits = PARAMS["roi_baits"]
modified_baits = infile + "_temp_baits_final.bed"
regions = PARAMS["roi_regions"]
statement = '''samtools view -H %(infile)s > %(infile)s_temp_header.txt;
awk 'NR>2' %(baits)s |
awk -F '\\t' 'BEGIN { OFS="\\t" } {print $1,$2,$3,"+",$4;}'
> %(infile)s_temp_baits.bed;
cat %(infile)s_temp_header.txt %(infile)s_temp_baits.bed
> %(modified_baits)s; checkpoint ;
rm -rf %(infile)s_temp_baits.bed %(infile)s_temp_header.txt
'''
P.run()
PipelineMappingQC.buildPicardCoverageStats(
infile, outfile, modified_baits, modified_baits)
IOTools.zapFile(modified_baits)
@follows(buildCoverageStats)
@merge(buildCoverageStats, "coverage_stats.load")
def loadCoverageStats(infiles, outfile):
PipelineMappingQC.loadPicardCoverageStats(infiles, outfile)
#########################################################################
#########################################################################
#########################################################################
# GATK realign bams
#########################################################################
@transform(mapReads,
regex(r"bam/(\S+).bam"),
r"bam/\1.bqsr.bam")
def GATKpreprocessing(infile, outfile):
'''Reorders BAM according to reference fasta and add read groups using
SAMtools, realigns around indels and recalibrates base quality scores
using GATK'''
to_cluster = USECLUSTER
track = P.snip(os.path.basename(infile), ".bam")
tmpdir_gatk = P.getTempDir()
job_memory = PARAMS["gatk_memory"]
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
outfile1 = outfile.replace(".bqsr", ".readgroups.bqsr")
outfile2 = outfile.replace(".bqsr", ".realign.bqsr")
PipelineExome.GATKReadGroups(infile, outfile1, genome,
PARAMS["readgroup_library"],
PARAMS["readgroup_platform"],
PARAMS["readgroup_platform_unit"])
PipelineExome.GATKIndelRealign(outfile1, outfile2, genome,
PARAMS["gatk_threads"])
IOTools.zapFile(outfile1)
PipelineExome.GATKBaseRecal(outfile2, outfile, genome,
PARAMS["gatk_dbsnp"],
PARAMS["gatk_solid_options"])
IOTools.zapFile(outfile2)
@transform(GATKpreprocessing,
regex("bam/(\S+)-%s-(\d+).bqsr.bam" % PARAMS["sample_control"]),
r"bam/\1-%s-\2.merged.bam" % PARAMS["sample_control"])
def mergeSampleBams(infile, outfile):
'''merge control and tumor bams'''
# Note: need to change readgroup headers for merge and subsequent
# splitting of bam files
to_cluster = USECLUSTER
job_memory = PARAMS["gatk_memory"]
tmpdir_gatk = P.getTempDir(shared=True)
outfile_tumor = outfile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
infile_tumor = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
infile_base = os.path.basename(infile)
infile_tumor_base = infile_base.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
track = P.snip(os.path.basename(infile), ".bam")
track_tumor = track.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
library = PARAMS["readgroup_library"]
platform = PARAMS["readgroup_platform"]
platform_unit = PARAMS["readgroup_platform_unit"]
control_id = "Control.bam"
tumor_id = control_id.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
statement = '''picard AddOrReplaceReadGroups
INPUT=%(infile)s
OUTPUT=%(tmpdir_gatk)s/%(infile_base)s
RGLB=%(library)s RGPL=%(platform)s
RGPU=%(platform_unit)s RGSM=%(track)s
ID=%(track)s
VALIDATION_STRINGENCY=SILENT ;
checkpoint ;'''
statement += '''picard AddOrReplaceReadGroups
INPUT=%(infile_tumor)s
OUTPUT=%(tmpdir_gatk)s/%(infile_tumor_base)s
RGLB=%(library)s RGPL=%(platform)s
RGPU=%(platform_unit)s RGSM=%(track_tumor)s
ID=%(track_tumor)s
VALIDATION_STRINGENCY=SILENT ;
checkpoint ;'''
statement += '''samtools merge -rf
%(outfile)s
%(tmpdir_gatk)s/%(infile_base)s
%(tmpdir_gatk)s/%(infile_tumor_base)s
; checkpoint ;'''
statement += '''samtools index %(outfile)s ;
checkpoint ;'''
statement += '''rm -rf %(tmpdir_gatk)s ;
checkpoint ; '''
P.run()
IOTools.zapFile(infile)
IOTools.zapFile(infile_tumor)
@transform(mergeSampleBams,
regex("bam/(\S+)-%s-(\d+).merged.bam" % PARAMS["sample_control"]),
r"bam/\1-%s-\2.realigned.bqsr.bam" % PARAMS["sample_control"])
def realignMatchedSample(infile, outfile):
''' repeat realignments with merged bam of control and tumor
this should help avoid problems with sample-specific realignments'''
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.GATKIndelRealign(infile, outfile, genome)
IOTools.zapFile(infile)
@transform(realignMatchedSample,
regex("bam/(\S+)-%s-(\d+).realigned.bqsr.bam" %
PARAMS["sample_control"]),
r"bam/\1-%s-\2.realigned.split.bqsr.bam" % PARAMS["sample_control"])
def splitMergedRealigned(infile, outfile):
''' split realignment file and truncate intermediate bams'''
track = P.snip(os.path.basename(infile), ".realigned.bqsr.bam") + ".bqsr"
track_tumor = track.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
outfile_tumor = outfile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
statement = '''samtools view -hb %(infile)s
-r %(track)s > %(outfile)s;
samtools view -hb %(infile)s
-r %(track_tumor)s > %(outfile_tumor)s; checkpoint ;
samtools index %(outfile)s;
samtools index %(outfile_tumor)s; checkpoint;'''
P.run()
IOTools.zapFile(infile)
@transform(splitMergedRealigned,
regex("bam/(\S+)-%s-(\S+).realigned.split.bqsr.bam" %
PARAMS["sample_control"]),
r"bam/\1-%s-\2.realigned.picard_stats" % PARAMS["sample_control"])
def runPicardOnRealigned(infile, outfile):
    '''generate Picard alignment statistics for the realigned
    control and tumour bams'''
    to_cluster = USECLUSTER
    job_memory = PARAMS["gatk_memory"]
outfile_tumor = outfile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
infile_tumor = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
track = P.snip(os.path.basename(infile), ".bam")
track_tumor = track.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineMappingQC.buildPicardAlignmentStats(infile, outfile, genome)
PipelineMappingQC.buildPicardAlignmentStats(infile_tumor,
outfile_tumor, genome)
@follows(runPicardOnRealigned)
@merge("bam/*.realigned.picard_stats", "realigned_picard_stats.load")
def loadPicardRealignedAlignStats(infiles, outfile):
'''Merge Picard alignment stats into single table and load into SQLite.'''
PipelineMappingQC.loadPicardAlignmentStats(infiles, outfile)
#########################################################################
#########################################################################
#########################################################################
# Variant Calling
#########################################################################
@follows(mkdir("normal_panel_variants"))
@transform(splitMergedRealigned,
regex(r"bam/(\S+)-%s-(\S).realigned.split.bqsr.bam" %
PARAMS["sample_control"]),
r"normal_panel_variants/\1_normal_mutect.vcf")
def callControlVariants(infile, outfile):
'''run mutect to call snps in control sample'''
basename = P.snip(outfile, "_normal_mutect.vcf")
call_stats_out = basename + "_call_stats.out"
mutect_log = basename + ".log"
cosmic, dbsnp, = (PARAMS["mutect_cosmic"],
PARAMS["gatk_dbsnp"])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.mutectSNPCaller(infile, outfile, mutect_log, genome, cosmic,
dbsnp, call_stats_out, PARAMS[
'mutect_memory'],
PARAMS['mutect_threads'], artifact=True)
@transform(callControlVariants,
suffix(".vcf"),
"_slim.vcf.gz")
def indexControlVariants(infile, outfile):
'''index control vcf for intersection by vcftools'''
outfile = P.snip(outfile, ".gz")
statement = '''cut -f1-8 %(infile)s > %(outfile)s;
bgzip -f %(outfile)s;
tabix -f %(outfile)s.gz'''
P.run()
# parameterise vcf intersection (number of req. observations - currently 1)
@merge(indexControlVariants,
"normal_panel_variants/combined.vcf")
def mergeControlVariants(infiles, outfile):
''' intersect control vcfs to generate a panel of normals for mutect'''
infiles = " ".join(infiles)
# remove module command when Sebastian has made latest version executable
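    # ``-n +1`` keeps positions present in at least one control vcf;
    # increase this threshold to require support from several normals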
statement = '''module load bio/vcftools/0.1.08a;
vcf-isec -o -n +1 %(infiles)s
> %(outfile)s'''
P.run()
@follows(mkdir("variants"), callControlVariants)
@transform(splitMergedRealigned,
regex(r"bam/(\S+)-%s-(\S).realigned.split.bqsr.bam" %
PARAMS["sample_control"]),
add_inputs(mergeControlVariants),
r"variants/\1.mutect.snp.vcf")
def runMutect(infiles, outfile):
'''calls somatic SNPs using MuTect'''
infile, normal_panel = infiles
infile_tumour = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
basename = P.snip(outfile, ".mutect.snp.vcf")
call_stats_out = basename + "_call_stats.out"
mutect_log = basename + ".log"
(cosmic, dbsnp, quality, max_alt_qual, max_alt,
max_fraction, tumor_LOD, strand_LOD) = (
PARAMS["mutect_cosmic"], PARAMS["gatk_dbsnp"],
PARAMS["mutect_quality"], PARAMS["mutect_max_alt_qual"],
PARAMS["mutect_max_alt"], PARAMS["mutect_max_fraction"],
PARAMS["mutect_lod"], PARAMS["mutect_strand_lod"])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.mutectSNPCaller(
infile_tumour, outfile, mutect_log, genome,
cosmic, dbsnp, call_stats_out,
PARAMS['mutect_memory'], PARAMS['mutect_threads'],
quality, max_alt_qual,
max_alt, max_fraction, tumor_LOD, strand_LOD,
normal_panel, infile)
@transform(runMutect,
regex(r"variants/(\S+).mutect.snp.vcf"),
r"variants/\1_call_stats.load")
def loadMutectExtendedOutput(infile, outfile):
'''Load mutect extended output into database'''
infile = infile.replace(".mutect.snp.vcf", "_call_stats.out")
indices = "contig,position"
P.load(infile, outfile, options="--add-index=%(indices)s" % locals())
@transform(splitMergedRealigned,
regex(r"bam/(\S+)-%s-(\S).realigned.split.bqsr.bam" %
PARAMS["sample_control"]),
r"variants/\1/results/all.somatic.indels.vcf")
def indelCaller(infile, outfile):
'''Call somatic indels using Strelka'''
infile_tumour = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
outdir = "/".join(outfile.split("/")[0:2])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.strelkaINDELCaller(infile, infile_tumour, outfile,
genome, PARAMS['strelka_config'], outdir,
PARAMS['strelka_memory'],
PARAMS['strelka_threads'])
##########################################################################
##########################################################################
##########################################################################
# repeat mutect in reverse and on subsampled control bam as quality control
##########################################################################
# this analysis should be part of an optional check of mutect parameters
# mutect parameters should be identical to the runMutect function above
@follows(mergeControlVariants)
@transform(splitMergedRealigned,
regex(r"bam/(\S+)-%s-(\S).realigned.split.bqsr.bam" %
PARAMS["sample_control"]),
add_inputs(mergeControlVariants),
r"variants/\1.mutect.reverse.snp.vcf")
def runMutectReverse(infiles, outfile):
    '''Use the control as tumour and vice versa to estimate the false positive rate'''
infile, normal_panel = infiles
infile_tumour = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
basename = P.snip(outfile, "_normal_mutect.vcf")
call_stats_out = basename + "_call_stats.out"
mutect_log = basename + ".log"
basename = P.snip(outfile, ".mutect.reverse.snp.vcf")
call_stats_out = basename + "_call_stats.reverse.out"
coverage_wig_out = basename + "_coverage.reverse.wig"
mutect_log = basename + ".reverse.log"
(cosmic, dbsnp, quality, max_alt_qual, max_alt,
max_fraction, tumor_LOD) = (
PARAMS["mutect_cosmic"], PARAMS["gatk_dbsnp"],
PARAMS["mutect_quality"], PARAMS["mutect_max_alt_qual"],
PARAMS["mutect_max_alt"], PARAMS["mutect_max_fraction"],
PARAMS["mutect_LOD"])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.mutectSNPCaller(infile, outfile, mutect_log, genome,
cosmic, dbsnp, call_stats_out,
PARAMS['mutect_memory'],
PARAMS['mutect_threads'],
quality, max_alt_qual,
max_alt, max_fraction, tumor_LOD,
normal_panel, infile_tumour)
# generalise the functions below
# 1. identify sample with highest coverage in control
# - should this check coverage in tumour also?
# 2. subset control bam
# 3. run mutect calling function with subset against unsubsetted tumour
# 4. summary table
adeno_bam = "bam/NU16C-Control-1.realigned.bqsr.bam"
@subdivide(adeno_bam,
regex("(\S+).bqsr.bam"),
[r"\1.0.1.bqsr.bam",
r"\1.0.2.bqsr.bam",
r"\1.0.3.bqsr.bam",
r"\1.0.4.bqsr.bam",
r"\1.0.5.bqsr.bam",
r"\1.0.6.bqsr.bam",
r"\1.0.7.bqsr.bam",
r"\1.0.8.bqsr.bam",
r"\1.0.9.bqsr.bam",
r"\1.1.0.bqsr.bam"])
def subsetControlBam(infile, outfiles):
    '''subsample the control bam at fractions 0.1-1.0 with samtools;
    ``view -s FLOAT`` uses the integer part of FLOAT to seed the RNG
    and the fractional part as the fraction of reads to keep'''
    n = 0
    for fraction in numpy.arange(0.1, 1.1, 0.1):
        # round away floating point artefacts (e.g. 0.30000000000000004)
        # so the fraction passed to samtools matches the output filename
        fraction = round(fraction, 1)
        outfile = outfiles[n]
        n += 1
statement = '''samtools view -s %(fraction)s -b %(infile)s
> %(outfile)s'''
P.run()
@transform(subsetControlBam,
suffix(".bam"),
".bam.bai")
def indexSubsets(infile, outfile):
    '''index each subsampled control bam'''
statement = '''samtools index %(infile)s'''
P.run()
@follows(indexSubsets)
@transform(subsetControlBam,
regex(r"bam/(\S+)-%s-1.realigned.(\S+).bqsr.bam" %
PARAMS["sample_control"]),
add_inputs(mergeControlVariants),
r"variants/\1-downsampled-\2.mutect.snp.vcf")
def runMutectOnDownsampled(infiles, outfile):
'''call somatic SNPs using MuTect on downsampled bams'''
infile, normal_panel = infiles
infile_tumour = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
basename = P.snip(outfile, "_normal_mutect.vcf")
call_stats_out = basename + "_call_stats.out"
mutect_log = basename + ".log"
(cosmic, dbsnp, quality, max_alt_qual, max_alt,
max_fraction, tumor_LOD) = (
PARAMS["mutect_cosmic"], PARAMS["gatk_dbsnp"],
PARAMS["mutect_quality"], PARAMS["mutect_max_alt_qual"],
PARAMS["mutect_max_alt"], PARAMS["mutect_max_fraction"],
PARAMS["mutect_LOD"])
genome = "%s/%s.fa" % (PARAMS["bwa_index_dir"],
PARAMS["genome"])
PipelineExome.mutectSNPCaller(infile_tumour, outfile, mutect_log, genome,
cosmic, dbsnp, call_stats_out,
PARAMS['mutect_memory'], PARAMS[
'mutect_threads'],
quality, max_alt_qual,
max_alt, max_fraction, tumor_LOD,
normal_panel, infile)
##############################################################################
##############################################################################
##############################################################################
# Variant Annotation and Recalibration
##############################################################################
@collate(splitMergedRealigned,
regex(r"bam/(\S+)-(\S+)-(\S+).realigned.split.bqsr.bam"),
r"bam/\1.list")
def listOfBAMs(infiles, outfile):
'''generates a file containing a list of BAMs for each patient,
for use in variant calling'''
with IOTools.openFile(outfile, "w") as outf:
for infile in infiles:
infile_tumour = infile.replace(
PARAMS["sample_control"], PARAMS["sample_tumour"])
outf.write(infile + '\n')
outf.write(infile_tumour + '\n')
@transform(runMutect,
regex(r"variants/(\S+).mutect.snp.vcf"),
r"variants/\1.mutect.snp.snpeff.vcf")
def annotateVariantsSNPeff(infile, outfile):
'''Annotate SNP variants using SNPeff'''
to_cluster = USECLUSTER
job_memory = "4G"
job_threads = 2
snpeff_genome = PARAMS["annotation_snpeff_genome"]
config = PARAMS["annotation_snpeff_config"]
statement = '''java -Xmx4G -jar /ifs/apps/bio/snpEff-3.3-dev/snpEff.jar
-c %(config)s -v %(snpeff_genome)s -o gatk
%(infile)s > %(outfile)s'''
P.run()
@transform(indelCaller,
regex("variants/(\S+)/results/all.somatic.indels.vcf"),
r"variants/\1.indels.snpeff.vcf")
def annotateVariantsINDELsSNPeff(infile, outfile):
'''Annotate INDEL variants using SNPeff'''
to_cluster = USECLUSTER
job_memory = "4G"
job_threads = 2
snpeff_genome = PARAMS["annotation_snpeff_genome"]
config = PARAMS["annotation_snpeff_config"]
statement = '''java -Xmx4G -jar /ifs/apps/bio/snpEff-3.3-dev/snpEff.jar
-c %(config)s -v %(snpeff_genome)s -o gatk
%(infile)s > %(outfile)s'''
P.run()
#########################################################################
# Annotate SNP and INDEL variants
#########################################################################
# Need to check whether the variant annotator is using both bams
# from a single patient?
# should just be the tumour bam or else scores will be wrong!
@follows(annotateVariantsSNPeff, listOfBAMs)
@transform(runMutect,
regex(r"variants/(\S+).mutect.snp.vcf"),
add_inputs(r"bam/\1.list",
r"variants/\1.mutect.snp.snpeff.vcf"),
r"variants/\1.mutect.snp.annotated.vcf")
def variantAnnotator(infiles, outfile):
'''Annotate variant file using GATK VariantAnnotator'''
to_cluster = USECLUSTER
infile, bamlist, effFile = infiles
dbsnp = PARAMS["gatk_dbsnp"]
statement = '''GenomeAnalysisTK
-T VariantAnnotator
-R %(bwa_index_dir)s/%(genome)s.fa
-I %(bamlist)s
-A SnpEff --snpEffFile %(effFile)s
-o %(outfile)s
--variant %(infile)s
-L %(infile)s
--dbsnp %(dbsnp)s
-A HaplotypeScore
-A MappingQualityRankSumTest
-A ReadPosRankSumTest
-A AlleleBalanceBySample'''
P.run()
@follows(annotateVariantsINDELsSNPeff, listOfBAMs)
@transform(indelCaller,
regex("variants/(\S+)/results/all.somatic.indels.vcf"),
add_inputs(r"bam/\1.list", r"variants/\1.indels.snpeff.vcf"),
r"variants/\1.indels.annotated.vcf")
def variantAnnotatorIndels(infiles, outfile):
'''Annotate variant file using GATK VariantAnnotator'''
to_cluster = USECLUSTER
infile, bamlist, effFile = infiles
statement = '''GenomeAnalysisTK
-T VariantAnnotator
-R %(bwa_index_dir)s/%(genome)s.fa
-I %(bamlist)s
-A SnpEff --snpEffFile %(effFile)s
-o %(outfile)s
--variant %(infile)s
-L %(infile)s
-A Coverage
-A FisherStrand
-A HaplotypeScore
-A MappingQualityRankSumTest
-A ReadPosRankSumTest
-A AlleleBalanceBySample
-A RMSMappingQuality'''
P.run()
######################################################################
# this does not work - insufficient number of indels in mills+
# therefore this task is not a dependency of task full
@transform(variantAnnotatorIndels,
suffix(".annotated.vcf"),
".annotated.recalibrated.vcf")
def variantRecalibrator(infile, outfile):
'''Create variant recalibration file for indels'''
to_cluster = USECLUSTER
job_memory = PARAMS["gatk_memory"]
job_threads = 6
track = P.snip(os.path.basename(outfile), ".annotated.recalibrated.vcf")
mills = PARAMS["gatk_mills"]
statement = '''GenomeAnalysisTK
-T VariantRecalibrator
-R %(bwa_index_dir)s/%(genome)s.fa
-input %(infile)s
-resource:mills,known=true,training=true,truth=true,prior=12.0
%(mills)s
-an DP -an MQRankSum -an ReadPosRankSum
-mode INDEL
-tranche 100.0 -tranche 99.9 -tranche 99.0 -tranche 90.0
--maxGaussians 4
-recalFile %(outfile)s
-tranchesFile variants/%(track)s.tranches
-rscriptFile variants/%(track)s.plots.R'''
P.run()
##############################################################################
# Filter SNPs and INDELs
##############################################################################
@transform(variantAnnotatorIndels,
suffix(".annotated.vcf"),
".annotated.filtered.vcf")
def filterIndels(infile, outfile):
''' use SnpSift to filter INDELS using VCF fields'''
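    # Strelka INFO fields used below (a sketch of their meanings):
    # QSI_NT is the somatic indel quality under the inferred normal
    # genotype, IHP the largest interrupted homopolymer length, and
    # RC/IC the number of times the repeat unit RU occurs in the
    # reference/indel alleles respectively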
statement = '''cat %(infile)s |
java -Xmx2g -jar /ifs/apps/bio/snpEff-3.1/SnpSift.jar filter
"(QSI_NT>%(filter_indel_nt)s &
IHP<%(filter_indel_ihp)s &
RC<%(filter_indel_rc)s &
IC<%(filter_indel_rc)s) "
> %(outfile)s '''
P.run()
@transform(variantAnnotator,
regex("variants/(\S+).mutect.snp.annotated.vcf"),
r"variants/\1.mutect.snp.annotated.filtered.vcf")
def filterMutect(infile, outfile):
''' filter mutect snps using allele frequencies '''
logfile = outfile.replace(".vcf", ".log")
min_t_alt = PARAMS["filter_minimum_tumor_allele"]
min_t_alt_freq = PARAMS["filter_minimum_tumor_allele_frequency"]
min_n_depth = PARAMS["filter_minimum_normal_depth"]
max_n_alt_freq = PARAMS["filter_maximum_normal_allele_frequency"]
min_ratio = PARAMS["filter_minimum_ratio"]
PipelineExome.filterMutect(
infile, outfile, logfile,
PARAMS["sample_control"], PARAMS["sample_tumour"],
min_t_alt, min_n_depth, max_n_alt_freq,
min_t_alt_freq, min_ratio)
##############################################################################
# Intersect filtered SNPs and INDELs
##############################################################################
@mkdir("intersection.dir")
@collate((filterIndels, filterMutect),
regex(r"variants/(\S+)\.(\S+).annotated.filtered.vcf"),
r"intersection.dir/overlap_\2_heatmap.png")
def intersectHeatmap(infiles, outfile):
    ''' intersect the filtered variants across patients and plot the overlap as a heatmap'''
PipelineExome.intersectionHeatmap(infiles, outfile)
#########################################################################
#########################################################################
# convert vcf to tsv files and load into database
@transform(filterMutect,
regex("variants/(\S+).annotated.filtered.vcf"),
r"variants/\1.annotated.filtered.tsv")
def snpvcfToTable(infile, outfile):
'''Converts vcf to tab-delimited file'''
to_cluster = USECLUSTER
statement = '''GenomeAnalysisTK
-T VariantsToTable -R %(bwa_index_dir)s/%(genome)s.fa
-V %(infile)s --showFiltered --allowMissingData
-F CHROM -F POS -F ID -F REF -F ALT -F QUAL -F FILTER
-F INFO -F BaseQRankSum
-F HaplotypeScore -F MQRankSum -F ReadPosRankSum
-F SNPEFF_EFFECT -F SNPEFF_IMPACT -F SNPEFF_FUNCTIONAL_CLASS
-F SNPEFF_CODON_CHANGE -F SNPEFF_AMINO_ACID_CHANGE
-F SNPEFF_GENE_NAME -F SNPEFF_GENE_BIOTYPE
-F SNPEFF_TRANSCRIPT_ID -F SNPEFF_EXON_ID
-GF GT -GF AD -GF SS -GF FA -GF AB -GF DP
-o %(outfile)s'''
P.run()
@transform(filterIndels,
regex("variants/(\S+).annotated.filtered.vcf"),
r"variants/\1.annotated.filtered.tsv")
def indelvcfToTable(infile, outfile):
'''Converts vcf to tab-delimited file'''
to_cluster = USECLUSTER
statement = '''GenomeAnalysisTK
-T VariantsToTable -R %(bwa_index_dir)s/%(genome)s.fa
-V %(infile)s --showFiltered --allowMissingData
-F CHROM -F POS -F ID -F REF -F ALT -F QUAL -F FILTER
-F INFO -F BaseQRankSum
-F HaplotypeScore -F MQRankSum -F ReadPosRankSum
-F SNPEFF_EFFECT -F SNPEFF_IMPACT -F SNPEFF_FUNCTIONAL_CLASS
-F SNPEFF_CODON_CHANGE -F SNPEFF_AMINO_ACID_CHANGE
-F SNPEFF_GENE_NAME -F SNPEFF_GENE_BIOTYPE
-F SNPEFF_TRANSCRIPT_ID -F SNPEFF_EXON_ID
-F TQSI -F TQSI_NT -F DP -F IC -F IHP -F NT
-F QSI -F QSI_NT -F RC -F RU -F SGT
-GF DP -GF DP2 -GF DP50 -GF SUBDP50 -GF TAR -GF TIR -GF TOR
-o %(outfile)s'''
P.run()
@transform([snpvcfToTable,
indelvcfToTable],
regex(r"variants/(\S+).annotated.filtered.tsv"),
r"variants/\1_annotated.load")
def loadVariantAnnotation(infile, outfile):
'''Load VCF annotations into database'''
# the indel and the mutect SNP tables are indexed identically; a single
# assignment also avoids an undefined name for unexpected filenames
indices = "CHROM,POS,SNPEFF_GENE_NAME"
P.load(infile, outfile, options="--add-index=%(indices)s" % locals())
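# Hedged usage note: once loaded, the annotation tables live in the pipeline
# database (by CGAT convention an sqlite file named "csvdb"; the table name
# below is an assumption derived from applying P.toTable to the .load target):
#
#   sqlite3 csvdb "SELECT CHROM, POS, SNPEFF_GENE_NAME
#                  FROM <track>_mutect_snp_annotated LIMIT 10;"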
#########################################################################
# Genes of interest
# check this will run in the correct position if option selected
# @active_if(PARAMS["annotation_add_genes_of_interest"] == 1)
# @transform((annotateVariantsSNPsift),
# regex(r"variants/(\S+).haplotypeCaller.snpsift.vcf"),
# r"variants/\1.genes.vcf")
# def findGenes(infile, outfile):
# '''Adds expression "GENE_OF_INTEREST" to the FILTER column of the vcf
# if variant is within a gene of interest as defined in the ini
# file'''
#
# geneList = P.asList(PARAMS["annotation_genes_of_interest"])
# expression = '\'||SNPEFF_GENE_NAME==\''.join(geneList)
# statement = '''GenomeAnalysisTK -T VariantFiltration
# -R %%(bwa_index_dir)s/%%(genome)s.fa
# --variant %(infile)s
# --filterExpression "SNPEFF_GENE_NAME=='%(expression)s'"
# --filterName "GENE_OF_INTEREST" -o %(outfile)s''' % locals()
# P.run()
#########################################################################
#########################################################################
#########################################################################
# vcf statistics - this only summarises the nucleotide changes
# this currently does not provide useful output!
@transform((variantAnnotator,
variantAnnotatorIndels),
regex(r"variants/(\S+).vcf"),
r"variants/\1.vcfstats")
def buildVCFstats(infile, outfile):
'''Calculate statistics on VCF file'''
to_cluster = USECLUSTER
statement = '''vcf-stats %(infile)s
> %(outfile)s 2>>%(outfile)s.log;'''
P.run()
@merge(buildVCFstats, "vcf_stats.load")
def loadVCFstats(infiles, outfile):
'''Import variant statistics into SQLite'''
filenames = " ".join(infiles)
tablename = P.toTable(outfile)
csv2db_options = PARAMS["csv2db_options"]
E.info("Loading vcf stats...")
statement = '''cgat vcfstats2db
%(filenames)s >> %(outfile)s; '''
statement += '''cat vcfstats.txt |
cgat csv2db %(csv2db_options)s
--allow-empty-file --add-index=track --table=vcf_stats
>> %(outfile)s; '''
P.run()
#########################################################################
@transform(runMutect,
suffix(".mutect.snp.vcf"),
"_mutect_filtering_summary.tsv")
def summariseFiltering(infile, outfile):
'''Summarise the reasons why mutect rejected candidate calls'''
infile = infile.replace(".mutect.snp.vcf", "_call_stats.out")
PipelineExome.parseMutectCallStats(infile, outfile, submit=True)
@transform(summariseFiltering,
regex(r"variants/(\S+)_mutect_filtering_summary.tsv"),
r"variants/\1_mutect_filtering_summary.load")
def loadMutectFilteringSummary(infile, outfile):
'''Load mutect extended output into database'''
dbh = connect()
tablename = P.toTable(outfile)
statement = '''cat %(infile)s |
cgat csv2db
--table %(tablename)s --retry --ignore-empty
> %(outfile)s'''
P.run()
#########################################################################
#########################################################################
#########################################################################
@originate("eBio_studies.tsv")
def defineEBioStudies(outfile):
''' For the cancer types specified in pipeline.ini, identify the
relevant studies in eBio '''
cancer_types = PARAMS["annotation_ebio_cancer_types"]
PipelineExome.defineEBioStudies(cancer_types, outfile, submit=False)
@transform(defineEBioStudies,
suffix("eBio_studies.tsv"),
add_inputs(filterIndels, filterMutect),
"eBio_studies_gene_frequencies.tsv")
def extractEBioinfo(infiles, outfile):
'''find the number of mutations identified in previous studies (ebio_ids)
for the mutated genes in the annotated vcfs'''
eBio_ids = infiles[0]
vcfs = infiles[1:]
PipelineExome.extractEBioinfo(eBio_ids, vcfs, outfile, submit=False)
@transform(extractEBioinfo,
suffix(".tsv"),
".load")
def loadEBioInfo(infile, outfile):
'''load the frequencies from the eBIO portal'''
P.load(infile, outfile, options="--add-index=gene")
#########################################################################
#########################################################################
#########################################################################
# load Network of Cancer Genes table
# parameterise file location:
@originate("cancergenes.load")
def loadNCG(outfile):
'''Load NCG into database'''
infile = PARAMS["cancergenes_table"]
# infile = "/ifs/projects/proj053/backup/NCG/cancergenes2016.tsv"
P.load(infile, outfile, options="--add-index=symbol")
#########################################################################
#########################################################################
#########################################################################
# analyse the mutational signature of the filtered variants
@merge(filterMutect,
["variants/mutational_signature.tsv",
"variants/mutational_signature_table.tsv"])
def mutationalSignature(infiles, outfiles):
PipelineExome.compileMutationalSignature(
infiles, outfiles)
@transform(mutationalSignature,
suffix(".tsv"),
".load")
def loadMutationalSignature(infiles, outfile):
outfile2 = re.sub(".load", "_table.load", outfile)
P.load(infiles[0], outfile)
P.load(infiles[1], outfile2)
#########################################################################
#########################################################################
#########################################################################
@follows(loadManualAnnotations,
loadMutectFilteringSummary,
loadMutectExtendedOutput,
loadVariantAnnotation,
loadCoverageStats,
loadPicardRealigenedAlignStats,
loadPicardAlignStats,
loadNCG,
loadMutationalSignature,
loadEBioInfo,
intersectHeatmap)
def full():
pass
@follows(defineEBioStudies)
def test():
pass
@follows(runMutectOnDownsampled,
runMutectReverse)
def TestMutect():
'''This target runs functions that can be used to assess the chosen
mutect parameters'''
pass
# @follows(loadROI,
# loadROI2Gene)
# def loadMetadata():
# pass
@follows(mapReads)
def mapping():
pass
@follows(loadPicardDuplicateStats,
loadPicardAlignStats,
buildCoverageStats,
loadCoverageStats)
def postMappingQC():
pass
@follows(GATKpreprocessing,
runPicardOnRealigned)
def gatk():
pass
@follows(runMutect,
indelCaller)
def callVariants():
pass
@follows(loadVariantAnnotation)
def tabulation():
pass
@follows(buildVCFstats,
loadVCFstats)
def vcfstats():
pass
#########################################################################
#########################################################################
#########################################################################
@follows()
def publish():
'''publish files.'''
P.publish_report()
@follows(mkdir("report"))
def build_report():
'''build report from scratch.'''
E.info("starting documentation build process from scratch")
P.run_report(clean=True)
@follows(mkdir("report"))
def update_report():
'''update report.'''
E.info("updating documentation")
P.run_report(clean=False)
def main(argv=None):
if argv is None:
argv = sys.argv
P.main(argv)
if __name__ == "__main__":
sys.exit(P.main(sys.argv))
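# Hedged usage note: CGAT-style pipelines such as this one are normally driven
# from the command line; assuming this file is the pipeline entry point (the
# script name below is illustrative, not confirmed by the source), typical
# invocations are:
#
#   python pipeline_exome_cancer.py make full -v 5
#   python pipeline_exome_cancer.py make build_report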
| 35.973059 | 81 | 0.552998 | 4,554 | 45,398 | 5.390426 | 0.170839 | 0.021998 | 0.022446 | 0.016295 | 0.373676 | 0.337176 | 0.314201 | 0.296969 | 0.277049 | 0.265643 | 0 | 0.005522 | 0.230076 | 45,398 | 1,261 | 82 | 36.001586 | 0.696793 | 0.22675 | 0 | 0.392908 | 0 | 0.011348 | 0.389166 | 0.121364 | 0.001418 | 0 | 0 | 0.000793 | 0 | 1 | 0.08227 | false | 0.011348 | 0.01844 | 0 | 0.102128 | 0.002837 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56af02a969b9dab95dd47f1d92c922008e2433c4 | 435 | py | Python | guestbook/models.py | hcpthanks/vCard | cc9a301f413961c398c355426013c0cc05fbb1b7 | [
"MIT"
] | null | null | null | guestbook/models.py | hcpthanks/vCard | cc9a301f413961c398c355426013c0cc05fbb1b7 | [
"MIT"
] | null | null | null | guestbook/models.py | hcpthanks/vCard | cc9a301f413961c398c355426013c0cc05fbb1b7 | [
"MIT"
] | null | null | null | import reprlib
from django.db import models
class Message(models.Model):
"""留言消息类
"""
name = models.CharField('用户名', max_length=20)
email = models.EmailField('邮箱', max_length=200)
message = models.TextField('留言')
active = models.BooleanField('有效', default=True)
posted = models.DateTimeField('发布时间', auto_now_add=True)
def __str__(self):
return f'{self.name}{reprlib.repr(self.message)}'
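# Hedged usage sketch (not part of the original module): how the Message model
# above might be used from a view or the Django shell; values are illustrative.
#
#   from guestbook.models import Message
#   msg = Message.objects.create(name='alice', email='a@example.com',
#                                message='hello')
#   recent = Message.objects.filter(active=True).order_by('-posted')[:10]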
| 27.1875 | 60 | 0.671264 | 55 | 435 | 5.163636 | 0.709091 | 0.091549 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.183908 | 435 | 15 | 61 | 29 | 0.785915 | 0.011494 | 0 | 0 | 0 | 0 | 0.124105 | 0.093079 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0.1 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56b027366621352ff84a9bd75357a7f9c2bdede8 | 293 | py | Python | rationalratio/rationalratio.py | omarchehab98/open.kattis.com-problems | 0523e2e641151dad719ef05cc9811a8ef5c6a278 | [
"MIT"
] | 1 | 2020-10-04T22:41:04.000Z | 2020-10-04T22:41:04.000Z | rationalratio/rationalratio.py | omarchehab98/open.kattis.com-problems | 0523e2e641151dad719ef05cc9811a8ef5c6a278 | [
"MIT"
] | null | null | null | rationalratio/rationalratio.py | omarchehab98/open.kattis.com-problems | 0523e2e641151dad719ef05cc9811a8ef5c6a278 | [
"MIT"
] | null | null | null | from fractions import Fraction
# x is the decimal expansion; d is the count of trailing digits that repeat
x, d = input().split(' ')
d = int(d)
# k: number of non-repeating digits after the decimal point
k = len(x) - x.index('.') - d - 1
# a/b: the non-repeating prefix as an exact fraction over 10**k
a, b = x[0:-d].replace('.', ''), 10 ** k
ab = Fraction(int(a), b)
# the repeating block r contributes r / ((10**d - 1) * 10**k)
rd = Fraction(int(x[-d:]), (10 ** d - 1) * b)
result = ab + rd
print(str(result.numerator) + '/' + str(result.denominator))
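# Worked example (input format as the code assumes: the number, then the count
# of trailing repeating digits). For "0.16 1" the repeating digit is 6 and
# k = 1, so the value is 1/10 + 6/(9 * 10) = 9/90 + 6/90 = 15/90 = 1/6,
# and the program prints "1/6".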
| 26.636364 | 60 | 0.546075 | 50 | 293 | 3.2 | 0.5 | 0.025 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029289 | 0.1843 | 293 | 10 | 61 | 29.3 | 0.640167 | 0 | 0 | 0 | 0 | 0 | 0.013652 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56b18a5e976b97460394eeab951ee9f6df83fd21 | 3,636 | py | Python | examples/doq_server.py | SouvikGhosh05/aioquic | da566b8ee616b9c83d51f0f5ad0521393119f40f | [
"BSD-3-Clause"
] | null | null | null | examples/doq_server.py | SouvikGhosh05/aioquic | da566b8ee616b9c83d51f0f5ad0521393119f40f | [
"BSD-3-Clause"
] | null | null | null | examples/doq_server.py | SouvikGhosh05/aioquic | da566b8ee616b9c83d51f0f5ad0521393119f40f | [
"BSD-3-Clause"
] | null | null | null | import argparse
import asyncio
import logging
import struct
from typing import Dict, Optional
from dnslib.dns import DNSRecord
from aioquic.asyncio import QuicConnectionProtocol, serve
from aioquic.quic.configuration import QuicConfiguration
from aioquic.quic.events import QuicEvent, StreamDataReceived
from aioquic.quic.logger import QuicFileLogger
from aioquic.tls import SessionTicket
class DnsServerProtocol(QuicConnectionProtocol):
def quic_event_received(self, event: QuicEvent):
if isinstance(event, StreamDataReceived):
# parse query
length = struct.unpack("!H", bytes(event.data[:2]))[0]
query = DNSRecord.parse(event.data[2 : 2 + length])
# perform lookup and serialize answer
data = query.send(args.resolver, 53)
data = struct.pack("!H", len(data)) + data
# send answer
self._quic.send_stream_data(event.stream_id, data, end_stream=True)
class SessionTicketStore:
"""
Simple in-memory store for session tickets.
"""
def __init__(self) -> None:
self.tickets: Dict[bytes, SessionTicket] = {}
def add(self, ticket: SessionTicket) -> None:
self.tickets[ticket.ticket] = ticket
def pop(self, label: bytes) -> Optional[SessionTicket]:
return self.tickets.pop(label, None)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="DNS over QUIC server")
parser.add_argument(
"--host",
type=str,
default="::",
help="listen on the specified address (defaults to ::)",
)
parser.add_argument(
"--port",
type=int,
default=4784,
help="listen on the specified port (defaults to 4784)",
)
parser.add_argument(
"-k",
"--private-key",
type=str,
help="load the TLS private key from the specified file",
)
parser.add_argument(
"-c",
"--certificate",
type=str,
required=True,
help="load the TLS certificate from the specified file",
)
parser.add_argument(
"--resolver",
type=str,
default="8.8.8.8",
help="Upstream Classic DNS resolver to use",
)
parser.add_argument(
"--retry",
action="store_true",
help="send a retry for new connections",
)
parser.add_argument(
"-q",
"--quic-log",
type=str,
help="log QUIC events to QLOG files in the specified directory",
)
parser.add_argument(
"-v", "--verbose", action="store_true", help="increase logging verbosity"
)
args = parser.parse_args()
logging.basicConfig(
format="%(asctime)s %(levelname)s %(name)s %(message)s",
level=logging.DEBUG if args.verbose else logging.INFO,
)
if args.quic_log:
quic_logger = QuicFileLogger(args.quic_log)
else:
quic_logger = None
configuration = QuicConfiguration(
alpn_protocols=["doq-i03"],
is_client=False,
quic_logger=quic_logger,
)
configuration.load_cert_chain(args.certificate, args.private_key)
ticket_store = SessionTicketStore()
loop = asyncio.get_event_loop()
loop.run_until_complete(
serve(
args.host,
args.port,
configuration=configuration,
create_protocol=DnsServerProtocol,
session_ticket_fetcher=ticket_store.pop,
session_ticket_handler=ticket_store.add,
retry=args.retry,
)
)
try:
loop.run_forever()
except KeyboardInterrupt:
pass
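# Hedged client-side sketch: a minimal DNS-over-QUIC query mirroring the server
# above. It assumes aioquic's asyncio connect() helper and the stream API of
# QuicConnectionProtocol, in the spirit of aioquic's doq_client example; treat
# it as an illustration rather than a definitive client implementation.
#
#   import asyncio
#   import struct
#   from aioquic.asyncio import connect
#   from aioquic.quic.configuration import QuicConfiguration
#   from dnslib.dns import DNSRecord
#
#   async def query(host: str, port: int, qname: str) -> DNSRecord:
#       configuration = QuicConfiguration(
#           alpn_protocols=["doq-i03"], is_client=True)
#       async with connect(host, port, configuration=configuration) as client:
#           payload = bytes(DNSRecord.question(qname).pack())
#           payload = struct.pack("!H", len(payload)) + payload
#           reader, writer = await client.create_stream()
#           writer.write(payload)
#           writer.write_eof()
#           answer = await reader.read()
#           return DNSRecord.parse(answer[2:])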
| 27.338346 | 81 | 0.622937 | 401 | 3,636 | 5.513716 | 0.38404 | 0.032564 | 0.061511 | 0.013569 | 0.055179 | 0.033469 | 0.033469 | 0 | 0 | 0 | 0 | 0.007553 | 0.271727 | 3,636 | 132 | 82 | 27.545455 | 0.827417 | 0.028603 | 0 | 0.12381 | 0 | 0 | 0.152817 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038095 | false | 0.009524 | 0.104762 | 0.009524 | 0.171429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56b2d0fc3f97f8a7563d2f632e5448b894ac8ef4 | 9,483 | py | Python | extended_templates/backends/pdf.py | knivets/djaodjin-extended-templates | 71bc725b3900fc45968e5a625d72dc0931561856 | [
"BSD-2-Clause"
] | null | null | null | extended_templates/backends/pdf.py | knivets/djaodjin-extended-templates | 71bc725b3900fc45968e5a625d72dc0931561856 | [
"BSD-2-Clause"
] | null | null | null | extended_templates/backends/pdf.py | knivets/djaodjin-extended-templates | 71bc725b3900fc45968e5a625d72dc0931561856 | [
"BSD-2-Clause"
] | null | null | null | # Copyright (c) 2018, Djaodjin Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from __future__ import unicode_literals
import logging, re, subprocess, io, warnings
from bs4 import BeautifulSoup
from django.conf import settings as django_settings
from django.core.exceptions import ImproperlyConfigured
from django.template import TemplateDoesNotExist
from django.template.exceptions import TemplateSyntaxError
from django.template.response import TemplateResponse
from django.utils.module_loading import import_string
from django.utils import six
from django.utils.functional import cached_property
import weasyprint
from .. import settings
from ..compat import BaseEngine, _dirs_undefined, RemovedInDjango110Warning
from ..helpers import build_absolute_uri
LOGGER = logging.getLogger(__name__)
class PdfTemplateResponse(TemplateResponse):
"""
Response as PDF content.
"""
#pylint:disable=too-many-arguments
def __init__(self, request, template, context=None, content_type=None,
status=None, **kwargs):
# Django 1.9 added (charset=None, using=None) to the prototype.
# Django 1.10 removed (current_app=None) to the prototype.
# We do not declare them explicitly but accept them through **kwargs instead
# so that our prototype is compatible with versions from Django 1.7
# through to Django 1.10.
super(PdfTemplateResponse, self).__init__(request, template,
context=context, content_type='application/pdf', status=status,
**kwargs)
@property
def rendered_content(self):
"""
Converts the HTML content generated from the template
as a Pdf document on the fly.
"""
html_content = super(PdfTemplateResponse, self).rendered_content
soup = BeautifulSoup(html_content.encode('utf-8'), 'html.parser')
for lnk in soup.find_all('a'):
href = lnk.get('href')
if href and href.startswith('/'):
lnk['href'] = build_absolute_uri(self._request, href)
html_content = soup.prettify()
cstr = io.BytesIO()
weasyprint.HTML(string=html_content).write_pdf(cstr)
return cstr.getvalue()
class PdfTemplateError(Exception):
pass
class PdfEngine(BaseEngine):
#pylint: disable=no-member
app_dirname = 'pdf'
def __init__(self, params):
params = params.copy()
options = params.pop('OPTIONS').copy()
super(PdfEngine, self).__init__(params)
self.file_charset = options.get(
'file_charset', django_settings.FILE_CHARSET)
self.loaders = options.get('loaders', [])
# This is an ugly way to add the search paths for .pdf template files.
@cached_property
def template_loaders(self):
return self.get_template_loaders(self.loaders)
def get_template_loaders(self, template_loaders):
loaders = []
for loader in template_loaders:
if isinstance(loader, (tuple, list)):
args = list(loader[1:])
loader = loader[0]
else:
args = []
if isinstance(loader, six.string_types):
loader_class = import_string(loader)
if getattr(loader_class, '_accepts_engine_in_init', False):
args.insert(0, self)
loader = loader_class(self, *args)
if loader is not None:
loaders.append(loader)
else:
raise ImproperlyConfigured(
"Invalid value in template loaders configuration: %r" % loader)
return loaders
def find_template(self, template_name, dirs=None, skip=None):
tried = []
# if dirs is None:
# dirs = self.dirs
# for search_dir in dirs:
for loader in self.template_loaders:
if hasattr(loader, 'get_contents'):
# From Django 1.9, this is the code that should be executed.
for origin in loader.get_template_sources(
template_name, template_dirs=dirs):
if skip is not None and origin in skip:
tried.append((origin, 'Skipped'))
continue
try:
contents = loader.get_contents(origin)
except TemplateDoesNotExist:
tried.append((origin, 'Source does not exist'))
continue
else:
template = Template(
contents, origin, origin.template_name)
return template, template.origin
else:
# This code is there to support Django 1.8 only.
try:
source, template_path = loader.load_template_source(
template_name, template_dirs=dirs)
origin = self.make_origin(
template_path, loader.load_template_source,
template_name, dirs)
template = Template(source, origin, template_path)
return template, template.origin
except TemplateDoesNotExist:
pass
raise TemplateDoesNotExist(template_name, tried=tried)
def from_string(self, template_code):
raise TemplateSyntaxError(
"The from_string() method is not implemented")
def get_template(self, template_name, dirs=_dirs_undefined):
#pylint:disable=arguments-differ
if template_name and template_name.endswith('.pdf'):
if dirs is _dirs_undefined:
dirs = None
else:
warnings.warn(
"The dirs argument of get_template is deprecated.",
RemovedInDjango110Warning, stacklevel=2)
template, origin = self.find_template(template_name, dirs)
if not hasattr(template, 'render'):
# template needs to be compiled
template = Template(template, origin, template_name)
return template
raise TemplateDoesNotExist(template_name)
class Template(object):
"""
Fills a PDF template
"""
def __init__(self, template_string, origin=None, name=None):
#pylint:disable=unused-argument
self.name = name
self.origin = origin
def render(self, context=None, request=None):
#pylint:disable=unused-argument
if self.origin:
template_path = self.origin.name
else:
template_path = self.name
output, err = self.fill_form(context, template_path)
if err:
raise PdfTemplateError(err)
return output
@staticmethod
def fill_form(fields, src, pdf_flatform_bin=None):
if pdf_flatform_bin is None:
assert hasattr(settings, 'PDF_FLATFORM_BIN'), "PDF generation"\
" requires podofo-flatform (https://github.com/djaodjin/podofo-flatform)."\
" Edit your PDF_FLATFORM_BIN settings accordingly."
pdf_flatform_bin = settings.PDF_FLATFORM_BIN
cmd = [pdf_flatform_bin]
for key, value in six.iteritems(fields):
if not isinstance(value, six.string_types):
value = str(value)
# We substitute non-standard whitespaces here because
# they interact poorly with the Python utf-8 encoder.
value = re.sub(r"\s", ' ', value)
if len(value) > 0:
# We don't want to end-up with ``--fill key=``
cmd += ["--fill", '%s=%s' % (key, value)]
cmd += [src, '-']
cmdline = cmd[0]
for param in cmd[1:]:
try:
key, value = param.split('=')
if any(char in value for char in [' ', ';']):
value = '"%s"' % value
cmdline += " %s=%s" % (key, value)
except ValueError:
cmdline += " " + param
LOGGER.info("RUN: %s", ' '.join(cmd))
return subprocess.check_output(cmd), None
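# Hedged usage sketch: filling a form template through this backend directly
# (the template path and field names below are assumptions for illustration).
#
#   template = Template(None, name="invoice-template.pdf")
#   pdf_bytes = template.render(context={"customer": "ACME", "total": "42.00"})
#
# render() shells out to the podofo-flatform binary configured through
# settings.PDF_FLATFORM_BIN, passing one --fill key=value pair per context item.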
| 39.348548 | 80 | 0.61953 | 1,069 | 9,483 | 5.369504 | 0.315248 | 0.025087 | 0.017073 | 0.008014 | 0.088153 | 0.040418 | 0.040418 | 0.040418 | 0.023693 | 0.023693 | 0 | 0.005427 | 0.300432 | 9,483 | 240 | 81 | 39.5125 | 0.859813 | 0.240114 | 0 | 0.115385 | 0 | 0 | 0.066601 | 0.003239 | 0 | 0 | 0 | 0 | 0.00641 | 1 | 0.070513 | false | 0.012821 | 0.102564 | 0.00641 | 0.25641 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56b2fc9930c872a6e85f2a12f4ba1b8f96b7e270 | 1,484 | py | Python | python/practices/docset.py | gloomyline/ML | 3764ac7dd64e3a92de1b34d6a92a809e02f7c038 | [
"MIT"
] | null | null | null | python/practices/docset.py | gloomyline/ML | 3764ac7dd64e3a92de1b34d6a92a809e02f7c038 | [
"MIT"
] | null | null | null | python/practices/docset.py | gloomyline/ML | 3764ac7dd64e3a92de1b34d6a92a809e02f7c038 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# @Author: Administrator
# @Date: 2018-05-17 11:09:22
# @Last Modified by: Administrator
# @Last Modified time: 2018-05-17 11:23:24
class Dict(dict):
'''
Simple dict but also support access as x.y style.
>>> d1 = Dict()
>>> d1['x'] = 100
>>> d1.x
100
>>> d1.y = 200
>>> d1['y']
200
>>> d2 = Dict(a=1, b=2, c='3')
>>> d2.c
'3'
>>> d2['empty']
Traceback (most recent call last):
...
KeyError: 'empty'
>>> d2.empty
Traceback (most recent call last):
...
AttributeError: 'Dict' object has no attribute 'empty'
'''
def __init__(self, **kw):
super(Dict, self).__init__(**kw)
def __getattr__(self, key):
try:
return self[key]
except KeyError:
raise AttributeError(r"'Dict' object has no attribute '%s'" % key)
def __setattr__(self, key, value):
self[key] = value
def fact(n):
'''
Calculate 1*2*3...(n-1)*n
>>> fact(1)
1
>>> fact(10)
3628800
>>> fact(-1)
Traceback (most recent call last):
...
ValueError
'''
if n < 1:
raise ValueError()
if n == 1:
return 1
else:
return n*fact(n-1)
if __name__ == '__main__':
import doctest
doctest.testmod() | 21.823529 | 84 | 0.584232 | 210 | 1,484 | 3.985714 | 0.466667 | 0.033453 | 0.0681 | 0.082437 | 0.170848 | 0.081243 | 0.081243 | 0 | 0 | 0 | 0 | 0.074009 | 0.235175 | 1,484 | 68 | 85 | 21.823529 | 0.663436 | 0.607143 | 0 | 0 | 0 | 0 | 0.088477 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.05 | 0 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
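# Hedged usage note: the doctests above can also be run without executing this
# module directly, e.g.
#
#   python -m doctest docset.py -v
#
# which reports every interactive example as it is checked.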
56b748953a338c9c796774f938f8407392ae2efe | 1,498 | py | Python | docs/latex/src/plots/FOvsAsy2.py | vbertone/apfelxx | 7a37b982083b2a1cded2f5d6ab3aae267877f3c4 | [
"MIT"
] | 5 | 2019-10-07T14:01:59.000Z | 2021-04-13T19:54:47.000Z | docs/latex/src/plots/FOvsAsy2.py | vbertone/apfelxx | 7a37b982083b2a1cded2f5d6ab3aae267877f3c4 | [
"MIT"
] | 3 | 2017-05-30T10:43:40.000Z | 2018-09-11T14:29:53.000Z | docs/latex/src/plots/FOvsAsy2.py | vbertone/apfelxx | 7a37b982083b2a1cded2f5d6ab3aae267877f3c4 | [
"MIT"
] | 4 | 2019-06-23T08:42:00.000Z | 2022-03-18T15:25:46.000Z | import ruamel.yaml as yaml
import numpy as np
import matplotlib.pyplot as plt
import MatplotlibSettings
# Load data
data = np.loadtxt("FOvsAsy2.dat")
f, (ax1, ax2) = plt.subplots(2, 1, sharex = "all", gridspec_kw = dict(width_ratios = [1], height_ratios = [4, 1]))
plt.subplots_adjust(wspace = 0, hspace = 0)
ax1.set_title(r"\textbf{SIDIS at $\mathcal{O}(\alpha_s)$, $\sqrt{s}=10.5$ GeV}")
ax1.text(0.0002, 0.2, r"\textbf{$Q^2 = 2$ GeV$^2$}", fontsize = 16)
ax1.text(0.0002, 0.1, r"\textbf{$x = 0.1$}", fontsize = 16)
ax1.text(0.0002, 0.05, r"\textbf{$z = 0.2$}", fontsize = 16)
ax1.set(ylabel = r"$\displaystyle\left|\frac{d\sigma}{dy dz dQ dq_T}\right|$")
ax1.set_xscale("log")
ax1.set_yscale("log")
ax1.set_xlim([0.0001, 1])
ax1.set_ylim([0.0001, 10])
ax1.plot(data[:, 0], np.absolute(data[:, 1]), color = "red", label = r"\textbf{Fixed order}")
ax1.plot(data[:, 0], np.absolute(data[:, 2]), color = "blue", label = r"\textbf{Asymptotic}")
ax1.plot(data[:, 0], np.absolute(data[:, 1] - data[:, 2]), color = "orange", label = r"\textbf{Difference}")
ax1.legend(fontsize = 20)
ax2.set_xlabel(r"\textbf{$q_T$ [GeV]}")
ax2.set_ylabel(r"\textbf{Ratio}", fontsize = 16)
ax2.set_ylim([0.55, 1.45])
ax2.plot(data[:, 0], np.absolute(data[:, 1] / data[:, 2]), color = "green")
ax2.plot(data[:, 0], np.absolute(data[:, 1] / data[:, 1]), color = "black", ls = "--", lw = 1.5)
ax2.set_xlim([0.0001, 1])
plt.savefig("FOvsAsy2.pdf")
plt.close()
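# Hedged note on the assumed layout of "FOvsAsy2.dat", inferred from the plot
# labels above: column 0 is q_T in GeV, column 1 the fixed-order cross section
# and column 2 its small-q_T asymptotic expansion.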
| 41.611111 | 114 | 0.646862 | 261 | 1,498 | 3.643678 | 0.398467 | 0.066246 | 0.047319 | 0.057834 | 0.255521 | 0.214511 | 0.214511 | 0.138801 | 0.107256 | 0.071504 | 0 | 0.084465 | 0.11482 | 1,498 | 35 | 115 | 42.8 | 0.63273 | 0.006676 | 0 | 0 | 0 | 0.034483 | 0.222746 | 0.04105 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.172414 | 0 | 0.172414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56b9ba77444e2cd8a93d2c91b41f8c6f997f8056 | 2,006 | py | Python | generation/process_datasets/process-NYT.py | Pratik-11/roft | 29c54c9712832051170c47909a5d38790ff5350b | [
"MIT"
] | 10 | 2020-05-31T19:19:42.000Z | 2022-01-15T01:44:33.000Z | generation/process_datasets/process-NYT.py | kirubarajan/trick | 04ef53c1d9646e0d7e7ec0eb47cc94d423682421 | [
"MIT"
] | 121 | 2020-06-05T20:29:24.000Z | 2021-09-24T21:33:33.000Z | generation/process_datasets/process-NYT.py | kirubarajan/trick | 04ef53c1d9646e0d7e7ec0eb47cc94d423682421 | [
"MIT"
] | 2 | 2020-06-05T20:10:29.000Z | 2020-09-30T14:55:48.000Z | '''
Script to parse out the raw text of articles from the NYT Articles Corpus
This script will look for a directory named raw and find any .ta.xml
files inside, parse out the "text" field in each file, strip all newlines and
carriage returns, and then write the text out, one article per line, in a
roughly 90/5/5 split to "nyt-articles-train.txt", "nyt-articles-dev.txt" and
"nyt-articles-test.txt"
'''
import errno, os, json, random
import xml.etree.ElementTree as xml
corpus_location = './raw'
pretraining_output_file_path = './processed/nyt-articles-train.txt'
dev_output_file_path = './processed/nyt-articles-dev.txt'
sampling_output_file_path = './processed/nyt-articles-test.txt'
def clean(text):
return text.replace('\n', ' ').replace('\r', '') + '\n'
def get_outfile(filename):
rng = random.random()
if rng < 0.90:
return pretraining_output_file_path
elif rng < 0.95:
return dev_output_file_path
else:
return sampling_output_file_path
def makedirs(filename):
''' https://stackoverflow.com/a/12517490 '''
if not os.path.exists(os.path.dirname(filename)):
try:
os.makedirs(os.path.dirname(filename))
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
return filename
if __name__ == '__main__':
if os.path.exists(corpus_location) and os.path.isdir(corpus_location):
total = len(os.listdir(corpus_location))
for index, filename in enumerate(os.listdir(corpus_location)):
if filename.endswith('.ta.xml'):
path = os.path.join(corpus_location, filename)
outfile = get_outfile(path)
with open(path, 'r+') as f:
with open(makedirs(outfile), 'a+') as out_f:
data = json.load(f)
out_f.write(clean(data['text']))
print('Read in file {0}/{1}: {2}'.format(index, total, path))
| 37.849057 | 84 | 0.638584 | 277 | 2,006 | 4.494585 | 0.422383 | 0.053012 | 0.06747 | 0.055422 | 0.081928 | 0.081928 | 0 | 0 | 0 | 0 | 0 | 0.013917 | 0.247757 | 2,006 | 52 | 85 | 38.576923 | 0.811133 | 0.228814 | 0 | 0 | 0 | 0 | 0.105368 | 0.065606 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.055556 | 0.027778 | 0.277778 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56baa2531aac4a1b2d5cf0f754d7c2d4f1573f35 | 853 | py | Python | DataMGT/consumers.py | BerryBC/SpyDataWebAppAndAPI | 6dd42a186e6955575fb747f7ff69c5b5a060ca19 | [
"MIT"
] | null | null | null | DataMGT/consumers.py | BerryBC/SpyDataWebAppAndAPI | 6dd42a186e6955575fb747f7ff69c5b5a060ca19 | [
"MIT"
] | null | null | null | DataMGT/consumers.py | BerryBC/SpyDataWebAppAndAPI | 6dd42a186e6955575fb747f7ff69c5b5a060ca19 | [
"MIT"
] | null | null | null | '''
@Description:
@Author: BerryBC
@Date: 2020-02-24 23:40:18
@LastEditors: BerryBC
@LastEditTime: 2020-04-29 22:28:49
'''
import json
import Lib.LLearn as LLearn
from channels.generic.websocket import WebsocketConsumer
class wsCreatSklearnModel(WebsocketConsumer):
'''WebSocket consumer that kicks off a scikit-learn run and streams
feedback messages to the client.'''
# send a feedback message and a status code back to the client
def funFB2C(self, strMsg, intCode):
self.send(text_data=json.dumps({
'msg': strMsg, 'code': intCode
}))
def connect(self):
self.accept()
self.funFB2C('OK', 1)
print(' Client connected: starting sklearn learn websocket.')
def disconnect(self, close_code):
print(' Learn Websocket disconnected')
def receive(self, text_data):
objRevData = json.loads(text_data)
intCode = objRevData['doCode']
if intCode == 0:
LLearn.funGoLearn(self.funFB2C)
self.funFB2C('Done', 3)
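# Hedged wiring sketch (assumption: Channels 3-style routing; the URL pattern
# and module layout are illustrative, not taken from this project):
#
#   # DataMGT/routing.py
#   from django.urls import path
#   from . import consumers
#
#   websocket_urlpatterns = [
#       path('ws/sklearn/', consumers.wsCreatSklearnModel.as_asgi()),
#   ]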
| 23.054054 | 56 | 0.638921 | 98 | 853 | 5.520408 | 0.622449 | 0.044362 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054096 | 0.241501 | 853 | 37 | 57 | 23.054054 | 0.782071 | 0.135991 | 0 | 0 | 0 | 0 | 0.117808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.15 | 0 | 0.4 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
56bb48fc93cbdd9d51e108045eb0d3f0918f92f4 | 821 | py | Python | main/test/test_image.py | kittenh2o/mosaic | 19dc7cb3300b00a055fad874a097aa7a011ca56f | [
"MIT"
] | null | null | null | main/test/test_image.py | kittenh2o/mosaic | 19dc7cb3300b00a055fad874a097aa7a011ca56f | [
"MIT"
] | null | null | null | main/test/test_image.py | kittenh2o/mosaic | 19dc7cb3300b00a055fad874a097aa7a011ca56f | [
"MIT"
] | null | null | null | import unittest
from main.core.process_pic import Image
class TestImage(unittest.TestCase):
def test_read(self):
uris = [
"https://res.cloudinary.com/dwf6x1ohn/image/upload/v1534347950/bgnppredgmslafb5pkpw.jpg",
"https://res.cloudinary.com/dwf6x1ohn/image/upload/v1534347979/wptzfdqidfnlyhgt3kti.jpg"
]
sizes = [
(540, 547),
(259, 194)
]
for (uri, size) in zip(uris, sizes):
image = Image(uri)
self.assertEqual(size[0], image.width())
self.assertEqual(size[1], image.height())
self.assertEqual(size[0] * size[1], image.size())
if __name__ == "__main__":
suite = unittest.TestLoader().loadTestsFromTestCase(TestImage)
unittest.TextTestRunner(verbosity=2).run(suite)
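# Hedged usage note: the suite can also be run through unittest discovery, e.g.
#
#   python -m unittest main.test.test_image
#
# Bear in mind the tests fetch the two Cloudinary URLs above, so they need
# network access.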
| 28.310345 | 101 | 0.618758 | 87 | 821 | 5.724138 | 0.586207 | 0.090361 | 0.114458 | 0.084337 | 0.164659 | 0.164659 | 0.164659 | 0 | 0 | 0 | 0 | 0.069805 | 0.249695 | 821 | 28 | 102 | 29.321429 | 0.738636 | 0 | 0 | 0 | 0 | 0 | 0.219245 | 0 | 0 | 0 | 0 | 0 | 0.15 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |