hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10aad86e7ff44123a9ea653ae8ca81813915a013 | 1,733 | py | Python | eruditio/shared_apps/django_userhistory/userhistory.py | genghisu/eruditio | 5f8f3b682ac28fd3f464e7a993c3988c1a49eb02 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | eruditio/shared_apps/django_userhistory/userhistory.py | genghisu/eruditio | 5f8f3b682ac28fd3f464e7a993c3988c1a49eb02 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | eruditio/shared_apps/django_userhistory/userhistory.py | genghisu/eruditio | 5f8f3b682ac28fd3f464e7a993c3988c1a49eb02 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | from django_userhistory.models import UserTrackedContent
class UserHistoryRegistry(object):
"""
Registry for UserHistory handlers. Necessary so that only one
receiver is registered for each UserTrackedContent object.
"""
def __init__(self):
self._registry = {}
self._handlers = {}
user_tracked_contents = UserTrackedContent.objects.all()
for content in user_tracked_contents:
self.register(content.content_type, content.action)
def get_handler(self, content_name):
"""
        Attempt to get a handler for the target content type, based
        on the following naming convention:
content_type.model_class()._meta.db_table as StudlyCaps + Handler
"""
import django_userhistory.handlers as handlers
def to_studly(x):
return "".join([token.capitalize() for token in x.split("_")])
handler_class = getattr(handlers,
"%sHandler" % (to_studly(content_name)),
handlers.BaseUserHistoryHandler)
return handler_class
def register(self, content_type, action):
"""
Registers a handler from django_userhistory.handlers with the target
content type.
"""
content_name = content_type.model_class()._meta.db_table
        if content_name not in self._registry:
HandlerClass = self.get_handler(content_name)
handler = HandlerClass(content_type, action)
self._registry[content_name] = content_type
self._handlers[content_name] = handler
user_history_registry = UserHistoryRegistry() | 39.386364 | 76 | 0.627813 | 176 | 1,733 | 5.931818 | 0.397727 | 0.084291 | 0.04023 | 0.04023 | 0.061303 | 0.061303 | 0.061303 | 0 | 0 | 0 | 0 | 0 | 0.29775 | 1,733 | 44 | 77 | 39.386364 | 0.857847 | 0.209463 | 0 | 0 | 0 | 0 | 0.007855 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0.041667 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
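A minimal sketch of the StudlyCaps handler-resolution convention described in `get_handler` above; the `user_comment` table name and `UserCommentHandler` class are hypothetical stand-ins, not part of the module.

# Illustrative only: resolving a handler name from a db_table value.
def to_studly(x):
    # "user_comment" -> "UserComment"
    return "".join(token.capitalize() for token in x.split("_"))

assert to_studly("user_comment") == "UserComment"
# The registry then does, in effect:
#   getattr(handlers, "UserCommentHandler", handlers.BaseUserHistoryHandler)
# so a model without a dedicated handler silently falls back to the base class.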
10b1c5c7ce4e9217c6d0fbeeff70f73b9438c3c9 | 949 | py | Python | configs/oscar/oscar_nlvr2_large.py | linxi1158/iMIX | af87a17275f02c94932bb2e29f132a84db812002 | [
"Apache-2.0"
] | 23 | 2021-06-26T08:45:19.000Z | 2022-03-02T02:13:33.000Z | configs/oscar/oscar_nlvr2_large.py | XChuanLee/iMIX | 99898de97ef8b45462ca1d6bf2542e423a73d769 | [
"Apache-2.0"
] | null | null | null | configs/oscar/oscar_nlvr2_large.py | XChuanLee/iMIX | 99898de97ef8b45462ca1d6bf2542e423a73d769 | [
"Apache-2.0"
] | 9 | 2021-06-10T02:36:20.000Z | 2021-11-09T02:18:16.000Z | _base_ = [
'../_base_/models/oscar/oscar_nlvr2_config.py',
'../_base_/datasets/oscar/oscar_nlvr2_dataset.py',
'../_base_/default_runtime.py',
]
# Override the parameters defined in the files above
model = dict(
params=dict(
model_name_or_path='/home/datasets/mix_data/model/oscar/large-vg-labels/ep_55_1617000',
cls_hidden_scale=2,
))
lr_config = dict(
num_warmup_steps=5000, # warmup_proportion=0
    num_training_steps=36000,  # ceil(total 86373 / batch size 24 / GPUs 2) * epoch size 20
)
nlvr_reader_train_cfg = dict(model_name_or_path='/home/datasets/mix_data/model/oscar/large-vg-labels/ep_55_1617000', )
nlvr_reader_test_cfg = dict(model_name_or_path='/home/datasets/mix_data/model/oscar/large-vg-labels/ep_55_1617000', )
train_data = dict(
samples_per_gpu=24,
data=dict(reader=nlvr_reader_train_cfg, ),
sampler='DistributedSampler',
)
test_data = dict(data=dict(reader=nlvr_reader_test_cfg, ), )
| 31.633333 | 118 | 0.733404 | 142 | 949 | 4.514085 | 0.443662 | 0.062403 | 0.060842 | 0.070203 | 0.421217 | 0.346334 | 0.346334 | 0.346334 | 0.346334 | 0.346334 | 0 | 0.063415 | 0.135933 | 949 | 29 | 119 | 32.724138 | 0.718293 | 0.120126 | 0 | 0 | 0 | 0 | 0.4 | 0.378313 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
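For context, a hedged sketch of the `_base_`-style config inheritance this file relies on; `merge_config` below is an illustrative helper, not the actual iMIX loader.

# Assumption: child config dicts recursively override values from _base_ files.
def merge_config(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {'model': {'params': {'cls_hidden_scale': 1, 'dropout': 0.1}}}
child = {'model': {'params': {'cls_hidden_scale': 2}}}
assert merge_config(base, child)['model']['params'] == {
    'cls_hidden_scale': 2, 'dropout': 0.1}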
10b1e3b9d1d4213de73da1374bd3a913cf3c0b32 | 1,608 | py | Python | commander/src/commander/executors/CameraExecutor.py | ugnelis/ros-cameras-controller | 0a7811f3c33da13b89d0b4f1723afd50dabd7bc7 | [
"BSD-3-Clause"
] | 4 | 2018-09-06T11:01:17.000Z | 2018-09-08T23:02:32.000Z | commander/src/commander/executors/CameraExecutor.py | ugnelis/ros-cameras-controller | 0a7811f3c33da13b89d0b4f1723afd50dabd7bc7 | [
"BSD-3-Clause"
] | 6 | 2018-10-03T13:20:01.000Z | 2018-10-18T05:54:33.000Z | commander/src/commander/executors/CameraExecutor.py | ugnelis/ros-cameras-controller | 0a7811f3c33da13b89d0b4f1723afd50dabd7bc7 | [
"BSD-3-Clause"
] | 3 | 2018-10-08T18:14:59.000Z | 2018-10-16T02:15:00.000Z | import rospy
import rospkg
import time
from commander.executors.Executor import Executor
from commander.utils.Process import Process
class CameraExecutor(Executor):
"""
    Executor class for adding and removing a camera.
"""
def __init__(self):
Executor.__init__(self)
def execute(self, **kwargs):
"""
Execute the command.
:param kwargs: key-worded arguments.
:keyword stream_url: Video stream's URL.
:keyword namespace: Camera's namespace.
"""
stream_url = kwargs.get('stream_url', "https://localhost:mjpg/video.mjpg")
namespace = kwargs.get('namespace', "/")
# Stop process if it's running.
if self.process is not None:
self.stop()
rospack = rospkg.RosPack()
file_path = rospack.get_path(
'video_stream_to_topic') + "/launch/video_stream_to_topic.launch"
args_list = ['roslaunch', file_path, "stream_url:=%s" % stream_url]
env = {'ROS_NAMESPACE': namespace}
# Create and launch process.
self.process = Process.create(*args_list, env=env)
# Be sure that topics are running.
while not rospy.get_published_topics(namespace):
time.sleep(0.5)
def stop(self):
"""
        Stop the executor.
"""
if self.process is not None:
self.process = Process.terminate(self.process)
def is_running(self):
"""
Check if executor is running.
:return: True if it's running, False if not.
"""
return Process.is_running(self.process)
| 27.724138 | 82 | 0.61194 | 189 | 1,608 | 5.05291 | 0.380952 | 0.06911 | 0.010471 | 0.025131 | 0.104712 | 0.05445 | 0.05445 | 0 | 0 | 0 | 0 | 0.001733 | 0.282338 | 1,608 | 57 | 83 | 28.210526 | 0.825823 | 0.229478 | 0 | 0.076923 | 0 | 0 | 0.130357 | 0.050893 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.192308 | 0 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
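A usage sketch for the executor above, assuming a running ROS master and the `video_stream_to_topic` package; the stream URL and namespace are hypothetical.

# Hypothetical usage (requires a ROS environment):
# executor = CameraExecutor()
# executor.execute(stream_url='http://192.0.2.10:8080/video.mjpg',
#                  namespace='/camera_front')
# ...
# if executor.is_running():
#     executor.stop()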
10b3c739973daf911e526da937baf03a510983c2 | 4,022 | py | Python | mkdocs_simple_plugin/generator.py | gwhitney/mkdocs-simple-plugin | b9c68821621d4aad4ef4f6345b030fb2fdf5595d | [
"Apache-2.0"
] | null | null | null | mkdocs_simple_plugin/generator.py | gwhitney/mkdocs-simple-plugin | b9c68821621d4aad4ef4f6345b030fb2fdf5595d | [
"Apache-2.0"
] | null | null | null | mkdocs_simple_plugin/generator.py | gwhitney/mkdocs-simple-plugin | b9c68821621d4aad4ef4f6345b030fb2fdf5595d | [
"Apache-2.0"
] | null | null | null | """md
# Mkdocs Simple Generator
`mkdocs_simple_gen` is a program that will automatically create a `mkdocs.yml`
configuration file (only if needed) and optionally install dependencies, build,
and serve the site.
## Installation
Install the plugin with pip.
```bash
pip install mkdocs-simple-plugin
```
_Python 3.x, 3.6, 3.7, 3.8, 3.9 supported._
## Usage
```bash
mkdocs_simple_gen
```
### Command line options
See `--help`
```txt
Usage: mkdocs_simple_gen [OPTIONS]
Options:
--build / --no-build build the site using mkdocs build
--serve / --no-serve serve the site using mkdocs serve
--help Show this message and exit.
```
default flags:
```bash
mkdocs_simple_gen --build --no-serve
```
"""
import click
import tempfile
import os
import yaml
def default_config():
"""Get default configuration for mkdocs.yml file."""
config = {}
config['site_name'] = os.path.basename(os.path.abspath("."))
if "SITE_NAME" in os.environ.keys():
config['site_name'] = os.environ["SITE_NAME"]
if "SITE_URL" in os.environ.keys():
config['site_url'] = os.environ["SITE_URL"]
if "REPO_URL" in os.environ.keys():
config['repo_url'] = os.environ["REPO_URL"]
# Set the docs dir to temporary directory, or docs if the folder exists
config['docs_dir'] = os.path.join(
tempfile.gettempdir(),
'mkdocs-simple',
os.path.basename(os.getcwd()),
"docs")
if os.path.exists(os.path.join(os.getcwd(), "docs")):
config['docs_dir'] = "docs"
config['plugins'] = ("simple", "search")
return config
class MkdocsConfigDumper(yaml.Dumper):
"""Format yaml files better."""
def increase_indent(self, flow=False, indentless=False):
"""Indent lists."""
return super(MkdocsConfigDumper, self).increase_indent(flow, False)
def write_config(config_file, config):
"""Write configuration file."""
with open(config_file, 'w+') as file:
try:
yaml.dump(
data=config,
stream=file,
sort_keys=False,
default_flow_style=False,
Dumper=MkdocsConfigDumper)
except yaml.YAMLError as exc:
print(exc)
def setup_config():
"""Create the mkdocs.yml file with defaults for params that don't exist."""
config_file = "mkdocs.yml"
config = default_config()
if not os.path.exists(config_file):
        # If the config file doesn't exist, create a simple one, guessing the
        # site name from the folder name.
write_config(config_file, config)
# Open the config file to verify settings.
with open(config_file, 'r') as stream:
try:
local_config = yaml.load(stream, yaml.Loader)
if local_config:
# Overwrite default config values with local mkdocs.yml
config.update(local_config)
print(config)
if not os.path.exists(config["docs_dir"]):
# Ensure docs directory exists.
print("making docs_dir {}".format(config["docs_dir"]))
os.makedirs(config["docs_dir"], exist_ok=True)
except yaml.YAMLError as exc:
print(exc)
raise
write_config(config_file, config)
@click.command()
@click.option('--build/--no-build', default=False,
help="build the site using mkdocs build")
@click.option('--serve/--no-serve', default=False,
help="serve the site using mkdocs serve")
@click.argument('mkdocs-args', nargs=-1)
def main(build, serve, mkdocs_args):
"""Generate and build a mkdocs site."""
setup_config()
if build:
os.system("mkdocs build " + " ".join(mkdocs_args))
if serve:
os.system("mkdocs serve " + " ".join(mkdocs_args))
if __name__ == "__main__":
# pylint doesn't know how to parse the click decorators,
# so disable no-value-for-parameter on main
# pylint: disable=no-value-for-parameter
main(['--help'])
| 28.524823 | 79 | 0.624316 | 519 | 4,022 | 4.722543 | 0.306358 | 0.03672 | 0.02652 | 0.029376 | 0.181151 | 0.126887 | 0.049776 | 0 | 0 | 0 | 0 | 0.003309 | 0.248633 | 4,022 | 140 | 80 | 28.728571 | 0.807743 | 0.341372 | 0 | 0.117647 | 0 | 0 | 0.136329 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073529 | false | 0 | 0.058824 | 0 | 0.176471 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
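A short sketch of driving the generator programmatically rather than via the CLI; assumes the package is installed and the working directory is the project root.

from mkdocs_simple_plugin.generator import default_config, setup_config

config = default_config()       # site_name is guessed from the folder name
print(config['plugins'])        # ('simple', 'search')
setup_config()                  # writes mkdocs.yml only if it doesn't exist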
10b434a1f677ee35c21627675c3cfb556dcfeabd | 7,851 | py | Python | helper-scripts/0-merge-fastq.py | GordonLab/riesling-pipeline | 384f41dc964db0f59b3992f775e87c651e846f2b | [
"MIT"
] | 9 | 2017-10-25T18:27:23.000Z | 2020-10-15T08:06:42.000Z | helper-scripts/0-merge-fastq.py | GordonLab/riesling-pipeline | 384f41dc964db0f59b3992f775e87c651e846f2b | [
"MIT"
] | null | null | null | helper-scripts/0-merge-fastq.py | GordonLab/riesling-pipeline | 384f41dc964db0f59b3992f775e87c651e846f2b | [
"MIT"
] | 4 | 2016-11-05T23:21:48.000Z | 2020-02-25T12:35:33.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# This script merges one sample across lanes.
#
# This script will *only* be useful if you have the same multiplexed sample loaded into multiple lanes of a flowcell.
#
# This concatenates the same index's paired-end files (L*_R*_* .fastq.gz) across multiple
# lanes into one set of PE files per-sample.
#
#
# Copyright (c) 2014-2016 Nick Semenkovich <semenko@alum.mit.edu>.
# https://nick.semenkovich.com/
#
# Developed for the Gordon Lab, Washington University in St. Louis (WUSTL)
# https://gordonlab.wustl.edu/
#
# This software is released under the MIT License:
# http://opensource.org/licenses/MIT
#
# Source: https://github.com/GordonLab/riesling-pipeline
from __future__ import absolute_import, division, print_function, unicode_literals
__author__ = 'Nick Semenkovich <semenko@alum.mit.edu>'
__copyright__ = 'Gordon Lab at Washington University in St. Louis'
__license__ = 'MIT'
__version__ = '1.0.3'
from collections import OrderedDict
import _logshim
import _script_helpers
import argparse
import glob
import os
import pprint
import re
def fastq_map_predict(input_path, verbose=False):
"""
Determine a sane .fastq muti-lane merge strategy.
Fail if we can't merge correctly, if there are remaining files, etc.
sample file name: Gordon-Ad2-11-AAGAGGCA-AAGAGGCA_S7_L001_R1_001.fastq.gz
Args:
input_path: An input path containing .fastq / .fastq.gz files
Returns:
A dict of mappings.
"""
fastq_map_logger = _logshim.getLogger('fastq_map_predict')
if not os.path.isdir(input_path):
raise ValueError("Input must be a directory. You gave: %s" % (input_path))
all_files = glob.glob(input_path + "/*_R*.fastq.gz") # Ignore index files, must have _R in title
all_files.extend(glob.glob(input_path + "/*_R*.fastq"))
if len(all_files) == 0:
raise ValueError("Input directory is empty!")
# Given paired ends, we must always have an even number of input files.
if len(all_files) % 2 != 0:
raise ValueError("Input directory contains an odd number of files.")
re_pattern = re.compile(r'^(.*)_L(\d+)_R(\d)_\d+(\.fastq|\.fastq\.gz)$')
file_dict = OrderedDict()
prefixes_seen = []
lanes_seen = []
pe_seen = []
for file in sorted(all_files):
if not os.access(file, os.R_OK):
raise OSError("Cannot read file: %s" % (file))
filename_only = file.rsplit('/', 1)[-1]
result = re.match(re_pattern, filename_only)
file_dict[file] = {'prefix': str(result.group(1)),
'L': int(result.group(2)),
'R': int(result.group(3))}
prefixes_seen.append(file_dict[file]['prefix'])
lanes_seen.append(file_dict[file]['L'])
pe_seen.append(file_dict[file]['R'])
# Sanity checking here. Missing files? Other oddities?
if len(file_dict) % len(set(lanes_seen)) != 0:
raise ValueError("Missing or extra file(s)? Saw %d lanes, and %d input files." %
(len(file_dict), len(set(lanes_seen))))
if len(set(pe_seen)) != 2:
raise ValueError("Saw %d paired ends, expecting exactly two. That's confusing!" % (len(set(pe_seen))))
if pe_seen.count(1) != pe_seen.count(2):
raise ValueError("Uneven pairing of paired ends (are you missing a file)? R1 count: %d, R2 count: %d" %
(pe_seen.count(1), pe_seen.count(2)))
fastq_map_logger.info("Files seen: %d" % (len(all_files)))
fastq_map_logger.info("Samples seen: %d" % (len(set(prefixes_seen))))
fastq_map_logger.info("Lanes seen: %d" % (len(set(lanes_seen))))
merge_strategy = {}
fastq_map_logger.info("Sample IDs:")
for prefix in sorted(set(prefixes_seen)):
fastq_map_logger.info(" %s" % (prefix))
    for file in file_dict:
merge_strategy.setdefault(file_dict[file]['prefix'] + ".PE" + str(file_dict[file]['R']), []).append(file)
if verbose:
fastq_map_logger.debug("Merge strategy is:")
fastq_map_logger.debug(pprint.pformat(merge_strategy))
return merge_strategy
def fastq_merge(merge_strategy, output_path, disable_parallel=False):
"""
Concatenate multiple fastq files (from multiple lanes) into one.
:param merge_strategy:
:param output_path:
:return:
"""
merge_log = _logshim.getLogger('fastq_merge')
if disable_parallel:
shell_job_runner = _script_helpers.ShellJobRunner(merge_log)
else:
shell_job_runner = _script_helpers.ShellJobRunner(merge_log, delay_seconds=45)
    for merged_name, merge_inputs in merge_strategy.items():
merge_input_files = ' '.join(merge_inputs)
merge_log.info('Spawning niced process to merge: %s' % (merged_name))
for filename in merge_inputs:
assert(" " not in filename)
assert(";" not in filename) # Vague sanity testing for input filenames
merge_log.debug(' Input: %s' % (filename))
# WARNING: Using shell has security implications! Don't work on untrusted input filenames.
command = "zcat %s | gzip -1 > %s/%s.fastq.gz" % (merge_input_files, output_path, merged_name)
shell_job_runner.run(command)
shell_job_runner.finish()
return True
def main():
# Parse & interpret command line flags.
    parser = argparse.ArgumentParser(description='Intelligently merge fastq/fastq.gz files from an Illumina pipeline. '
                                                 'Merges all L*_R*_* .fastq.gz files into one per sample.',
epilog="Written by Nick Semenkovich <semenko@alum.mit.edu> for the Gordon Lab at "
"Washington University in St. Louis: http://gordonlab.wustl.edu.",
usage='%(prog)s [options]',
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--input-path', '-i', dest="input_path", metavar='input_dir/', type=str,
help='Input path.', required=True)
parser.add_argument('--output-path', '-o', dest="output_path", metavar='output_dir/', type=str,
help='Output path.', required=True)
parser.add_argument('--no-parallel', '-np', dest="no_parallel", default=False, action='store_true',
help='Disable parallel job spawning.')
# parser.add_argument('--skip-stats', dest="skip_stats", action='store_true',
# help='Skip statistics generation.', required=False)
parser.add_argument("--verbose", "-v", dest="verbose", default=False, action='store_true')
parser.add_argument("--no-log", "-nl", dest="nolog", default=False, action='store_true',
help="Do not create a log file.")
args = parser.parse_args()
output_path = _script_helpers.setup_output_path(args.output_path)
_logshim.startLogger(verbose=args.verbose, noFileLog=args.nolog, outPath=output_path)
# Our goal is to intelligently merge .fastq/.fastq.gz output from an Illumina run
# The Illumina standard pipeline splits by barcode w/ semi-predictable filenames we can use, e.g.
# IsoA-M1-CD4_S1_L001_I1_001.fastq.gz # index (discard)
# IsoA-M1-CD4_S1_L001_R1_001.fastq.gz # end 1, lane 1
# IsoA-M1-CD4_S1_L001_R2_001.fastq.gz # end 2, lane 2
# IsoA-M1-CD4_S1_L002_I1_001.fastq.gz # index (discard), lane 2
# IsoA-M1-CD4_S1_L002_R1_001.fastq.gz # end 1, lane 2
# ...
# TODO: Move some lower glob code up so we can test these functions
merge_strategy = fastq_map_predict(args.input_path, verbose=args.verbose)
fastq_merge(merge_strategy, args.output_path, disable_parallel=args.no_parallel)
if __name__ == '__main__':
main()
| 38.674877 | 119 | 0.654821 | 1,066 | 7,851 | 4.624765 | 0.306754 | 0.019878 | 0.022718 | 0.018256 | 0.216227 | 0.150507 | 0.086004 | 0.045842 | 0 | 0 | 0 | 0.016211 | 0.222137 | 7,851 | 202 | 120 | 38.866337 | 0.791059 | 0.265444 | 0 | 0 | 0 | 0.01 | 0.219335 | 0.015553 | 0 | 0 | 0 | 0.004951 | 0.02 | 1 | 0.03 | false | 0 | 0.09 | 0 | 0.14 | 0.03 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
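For illustration, the merge strategy `fastq_map_predict` would build for one paired-end sample spread across two lanes (filenames hypothetical):

# Inputs following the Illumina naming convention described above:
merge_strategy = {
    'SampleA_S1.PE1': ['in/SampleA_S1_L001_R1_001.fastq.gz',
                       'in/SampleA_S1_L002_R1_001.fastq.gz'],
    'SampleA_S1.PE2': ['in/SampleA_S1_L001_R2_001.fastq.gz',
                       'in/SampleA_S1_L002_R2_001.fastq.gz'],
}
# fastq_merge then runs, per output file, roughly:
#   zcat <lane files> | gzip -1 > out/SampleA_S1.PE1.fastq.gz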
10b43c0a3e7aad299a903692dc3ea12445b85006 | 2,840 | py | Python | src/common.py | ashishpatel26/tartarus | b214f66dd4e61e83edc45ffc5c280efe7318a1b6 | [
"MIT"
] | 104 | 2017-06-30T06:51:54.000Z | 2022-02-17T08:47:57.000Z | src/common.py | ashishpatel26/tartarus | b214f66dd4e61e83edc45ffc5c280efe7318a1b6 | [
"MIT"
] | 5 | 2017-11-24T04:12:02.000Z | 2021-06-03T08:20:21.000Z | src/common.py | sergiooramas/tartarus | 555bf8e12f54af7d3e68a2cfca0e2b79968f09d2 | [
"MIT"
] | 26 | 2017-07-19T07:19:25.000Z | 2022-01-02T00:19:04.000Z | import glob
import json
import os
import pandas as pd
from sklearn.preprocessing import StandardScaler
# Files and extensions
DATA_DIR = "/Users/Sergio/webserver/tartarus/dummy-data"
DEFAULT_TRAINED_MODELS_FILE = DATA_DIR+"/trained_models.tsv"
DEFAULT_MODEL_PREFIX = "model_"
MODELS_DIR = DATA_DIR+"/models"
PATCHES_DIR = DATA_DIR+"/patches"
DATASETS_DIR = DATA_DIR+"/splits"
TRAINDATA_DIR = DATA_DIR+"/train_data"
PREDICTIONS_DIR = DATA_DIR+"/predictions"
RESULTS_DIR = DATA_DIR+"/results"
REC_DIR = DATA_DIR+"/playlists"
MODEL_EXT = ".json"
PLOT_EXT = ".png"
WEIGHTS_EXT = ".h5"
MAX_N_SCALER = 300000
# Create spectrogram folders
SPECTRO_PATH = DATA_DIR+"/spectrograms/"
INDEX_PATH = DATA_DIR+"/index/"
### Spectrograms
config_spectro = {
'SUPER' : {
'audio_folder' : DATA_DIR+'/audio/',
'spectrograms_name' : 'SUPER',
'resample_sr' : 22050,
'hop' : 1024,
'spectrogram_type' : 'cqt',
'cqt_bins' : 96,
'convert_id' : True, # converts the (path) name of a file to its ID name - correspondence in index_file.
'index_file' : 'index_audio_SUPER.tsv', # index to be converted. THIS IS THE LIST THAT ONE WILL COMPUTE
'audio_ext' : ['mp3'] , # in list form
'num_process' : 8,
'compute_spectro' : True
}
}
def ensure_dir(directory):
"""Makes sure that the given directory exists."""
if not os.path.exists(directory):
os.makedirs(directory)
def get_next_model_id(models_dir=MODELS_DIR,
model_prefix=DEFAULT_MODEL_PREFIX,
model_ext=MODEL_EXT):
"""Gets the next model id based on the models in `models_dir`
directory."""
models = glob.glob(os.path.join(models_dir, model_prefix + '*'))
return model_prefix + str(len(models) + 1)
def save_model(model, model_file):
"""Saves the model into the given model file."""
json_string = model.to_json()
with open(model_file, 'w') as f:
json.dump(json_string, f)
def minmax_normalize(X):
"""Normalizes X into the -0.5, 0.5 range."""
# X -= X.min()
# X /= X.max()
# X -= 0.5
X = (X-X.min()) / (X.max() - X.min())
return X
def save_trained_model(trained_models_file, trained_model):
try:
df = pd.read_csv(trained_models_file, sep='\t')
except IOError:
df = pd.DataFrame(columns=trained_model.keys())
df = df.append(trained_model, ignore_index=True)
df.to_csv(trained_models_file, sep='\t', index=False)
def preprocess_data(X, scaler=None, max_N=MAX_N_SCALER):
shape = X.shape
X.shape = (shape[0], shape[2] * shape[3])
if not scaler:
scaler = StandardScaler()
        N = min(len(X), max_N)  # Limit the number of patches used to fit the scaler
scaler.fit(X[:N])
X = scaler.transform(X)
X.shape = shape
return X, scaler
| 30.212766 | 112 | 0.653873 | 405 | 2,840 | 4.365432 | 0.377778 | 0.047511 | 0.039593 | 0.026018 | 0.027149 | 0.027149 | 0 | 0 | 0 | 0 | 0 | 0.013453 | 0.214789 | 2,840 | 93 | 113 | 30.537634 | 0.779372 | 0.170423 | 0 | 0 | 0 | 0 | 0.144828 | 0.027586 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.073529 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
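A small usage sketch for `preprocess_data`; numpy is assumed, and the patch shape (N, 1, bins, frames) is illustrative.

import numpy as np

X_train = np.random.rand(10, 1, 96, 8).astype('float32')
X_train, scaler = preprocess_data(X_train)            # fits a new scaler
X_test = np.random.rand(4, 1, 96, 8).astype('float32')
X_test, _ = preprocess_data(X_test, scaler=scaler)    # reuses training stats
assert X_train.shape == (10, 1, 96, 8)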
10b516b21eb839237b199374485e82a9b5b8fa74 | 1,633 | py | Python | salt/hardening/print_dependent_modules.py | megacool/salt-states | 5910ee0c664aa781aebeb9a594760f17533675b0 | [
"MIT"
] | 1 | 2020-02-12T01:23:16.000Z | 2020-02-12T01:23:16.000Z | salt/hardening/print_dependent_modules.py | thusoy/salt-states | 3184686e8b0ad41788051e770d8ab56695f561a2 | [
"MIT"
] | 16 | 2016-05-19T06:18:54.000Z | 2017-03-05T19:22:36.000Z | salt/hardening/print_dependent_modules.py | get-wrecked/salt-states | 72f4f367f331e685eaf917190c6b495e649e6e6c | [
"MIT"
] | 1 | 2016-08-19T19:47:42.000Z | 2016-08-19T19:47:42.000Z | #!/usr/bin/env python
'''
Prints the kernel modules that depend on the given module, ending with the module itself.
Prints nothing if the module isn't loaded.
'''
from __future__ import print_function
import argparse
import subprocess
def main():
args = get_args()
for module in get_module_dependencies(args.module):
print(module)
def get_args():
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('module')
return parser.parse_args()
def get_module_dependencies(target_module, module_tree=None):
if module_tree is None:
module_tree = get_current_modules()
dependents = module_tree.get(target_module)
if dependents is None:
return []
all_modules = []
for dep in dependents:
all_modules.extend(unique(get_module_dependencies(dep, module_tree)))
all_modules.append(target_module)
return unique(all_modules)
def get_current_modules():
lsmod = ['lsmod']
modules = {}
    for line in subprocess.check_output(lsmod, universal_newlines=True).split('\n')[1:]:
if not line:
continue
line_parts = line.split()
module = line_parts[0]
dependents = line_parts[3].split(',') if len(line_parts) == 4 else []
modules[module] = dependents
return modules
def unique(iterable):
"Yields unique elements from iterable, preserving order."
seen = set()
# Shortcut this lookup to save python some work
seen_add = seen.add
for element in iterable:
if element in seen:
continue
seen_add(element)
yield element
if __name__ == '__main__':
main()
| 23.666667 | 85 | 0.672994 | 208 | 1,633 | 5.043269 | 0.413462 | 0.047664 | 0.060057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003205 | 0.235762 | 1,633 | 68 | 86 | 24.014706 | 0.83734 | 0.153092 | 0 | 0.045455 | 0 | 0 | 0.053846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113636 | false | 0 | 0.068182 | 0 | 0.272727 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
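An illustrative session; the `snd` module tree below is hypothetical, but shows the dependents-first ordering.

# $ python print_dependent_modules.py snd
# snd_seq
# snd_pcm
# snd_timer
# snd
#
# `unique` itself is easy to check in isolation:
print(list(unique([3, 1, 3, 2, 1])))   # -> [3, 1, 2]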
10b7fb63df39757a966861cacb8449c7cb8c64d1 | 1,834 | py | Python | day09/qiubaiproject/qiubaiproject/spiders/qiu.py | Mhh123/spider | fe4410f9f3b4a9c0a5dac51bd93a1434aaa4f888 | [
"Apache-2.0"
] | null | null | null | day09/qiubaiproject/qiubaiproject/spiders/qiu.py | Mhh123/spider | fe4410f9f3b4a9c0a5dac51bd93a1434aaa4f888 | [
"Apache-2.0"
] | null | null | null | day09/qiubaiproject/qiubaiproject/spiders/qiu.py | Mhh123/spider | fe4410f9f3b4a9c0a5dac51bd93a1434aaa4f888 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
from qiubaiproject.items import QiubaiprojectItem
class QiuSpider(scrapy.Spider):
name = 'qiu'
allowed_domains = ['www.qiushibaike.com']
start_urls = ['http://www.qiushibaike.com/']
    # Support crawling multiple pages
page = 1
url = 'https://www.qiushibaike.com/8hr/page/{}/'
def parse(self, response):
div_list = response.xpath('//div[@id="content-left"]/div')
for oDiv in div_list:
            # Create the item object
item = QiubaiprojectItem()
face = oDiv.xpath('./div[1]//img/@src')[0].extract()
try:
name = oDiv.xpath('./div[1]//h2/text()')[0].extract().strip('\t\n\r')
except Exception as identifier:
                name = '匿名用户'  # "anonymous user"
try:
age = oDiv.xpath('./div[1]/div/text()')[0].extract()
except Exception as identifier:
age = ''
content = oDiv.xpath('./a[1]//span[1]/text()').extract()
content = ''.join(content).strip('\n\t ')
            # Number of "funny" votes
            haha_count = oDiv.xpath('//div[@id="content-left"]/div/div[@class="stats"]/span[1]//i/text()')[0].extract()
            # Number of comments
            ping_count = oDiv.xpath('//div[@id="content-left"]/div/div[@class="stats"]/span[2]//i/text()')[0].extract()
            # Save every extracted field into the item
item['face'] = face
item['name'] = name
item['age'] = age
item['content'] = content
item['haha_count'] = haha_count
item['ping_count'] = ping_count
            # Hand the item to the engine so the pipelines can process it
yield item
        # Continue crawling the next page
if self.page <= 5:
self.page += 1
url = self.url.format(self.page)
            # Send a new request with the assembled URL and hand it to the scheduler
yield scrapy.Request(url=url, callback=self.parse)
| 31.084746 | 119 | 0.512541 | 206 | 1,834 | 4.514563 | 0.407767 | 0.051613 | 0.064516 | 0.054839 | 0.133333 | 0.133333 | 0.107527 | 0.107527 | 0.107527 | 0.107527 | 0 | 0.014343 | 0.315703 | 1,834 | 58 | 120 | 31.62069 | 0.726693 | 0.061069 | 0 | 0.111111 | 0 | 0.055556 | 0.223846 | 0.108124 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
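A sketch of the shape of one yielded item; every field value below is an invented placeholder, not real scraped data.

# Running `scrapy crawl qiu` would yield QiubaiprojectItem instances like:
# {
#     'face': '//pic.example.com/avatar/0001.jpg',   # hypothetical URL
#     'name': 'some_user',
#     'age': '33',
#     'content': 'joke text ...',
#     'haha_count': '1024',
#     'ping_count': '12',
# }
# The spider stops requesting new pages once self.page exceeds 5,
# so pages 1-6 are crawled in total.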
10b85c9d35c8395a363f63dabf340f4280891979 | 2,948 | py | Python | main_surreal.py | bdvllrs/recvis-project-3d-pose | b7f36cad50ab78e76af483bf08d2e52d999f2275 | [
"MIT"
] | null | null | null | main_surreal.py | bdvllrs/recvis-project-3d-pose | b7f36cad50ab78e76af483bf08d2e52d999f2275 | [
"MIT"
] | 1 | 2019-01-06T15:21:51.000Z | 2019-01-12T10:38:22.000Z | main_surreal.py | bdvllrs/recvis-project-3d-pose | b7f36cad50ab78e76af483bf08d2e52d999f2275 | [
"MIT"
] | null | null | null | import torch
import torch.utils.data
from utils.data import SurrealDatasetWithVideoContinuity as SurrealDataset
from utils import Config
from models import StackedHourGlass, Linear
from utils import SurrealTrainer as Trainer
config = Config('./config')
device_type = "cuda" if torch.cuda.is_available() and config.device_type == "cuda" else "cpu"
print("Using", device_type)
config_video_constraints = config.video_constraints
config_surreal = config.surreal
config = config.hourglass
device = torch.device(device_type)
test_set, val_set, train_set = [], [], []
# Load dataset
print("Loading datasets...")
if config.data_type == "surreal":
test_set = SurrealDataset(config_surreal.data_path, 'test', config_surreal.run,
frames_before=config_video_constraints.frames_before,
frames_after=config_video_constraints.frames_after)
val_set = SurrealDataset(config_surreal.data_path, 'val', config_surreal.run,
frames_before=config_video_constraints.frames_before,
frames_after=config_video_constraints.frames_after)
train_set = SurrealDataset(config_surreal.data_path, 'train', config_surreal.run,
frames_before=config_video_constraints.frames_before,
frames_after=config_video_constraints.frames_after)
# Define Torch dataset
test_dataset = torch.utils.data.DataLoader(test_set, batch_size=config.batch_size, shuffle=False)
val_dataset = torch.utils.data.DataLoader(val_set, batch_size=config.batch_size, shuffle=False)
train_dataset = torch.utils.data.DataLoader(train_set, batch_size=config.batch_size, shuffle=True)
print("Loaded.")
print("Loading models...")
# Load pretrained model of stacked hourglass for 2D pose estimation
hg_model = StackedHourGlass(config.n_channels, config.n_stack, config.n_modules, config.n_reductions,
config.n_joints)
hg_model.to(device)
hg_model.load_state_dict(torch.load(config.pretrained_path, map_location=device)['model_state'])
hg_model.eval()
number_frames = config_video_constraints.frames_before + config_video_constraints.frames_after + 1
model = Linear(input_size=2 * config.n_joints * number_frames, hidden_size=1024,
output_size=48).to(device)
print("Loaded.")
optimizer = torch.optim.Adam(model.parameters())
trainer = Trainer(train_dataset, test_dataset, optimizer, model, hg_model,
save_folder='builds', plot_logs=config.plot_logs,
video_constraints=config_video_constraints.use,
frames_before=config_video_constraints.frames_before,
frames_after=config_video_constraints.frames_after,
regularization_video_constraints=config_video_constraints.regularization).to(device)
trainer.train(config.n_epochs)
# trainer.load('./builds/2019-01-03 15:28:25')
# trainer.val()
| 44 | 102 | 0.734735 | 364 | 2,948 | 5.651099 | 0.266484 | 0.124453 | 0.149733 | 0.136121 | 0.44142 | 0.327662 | 0.252795 | 0.236266 | 0.198347 | 0.198347 | 0 | 0.009442 | 0.173677 | 2,948 | 66 | 103 | 44.666667 | 0.834975 | 0.053596 | 0 | 0.191489 | 0 | 0 | 0.039511 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12766 | 0 | 0.12766 | 0.106383 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
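A shape-level sketch of the 2D-to-3D lifting step this script trains, reusing the `Linear` class imported above; the joint count (16) is an assumption inferred from the 48-dimensional output (16 joints x 3 coordinates).

import torch

batch, n_joints, number_frames = 8, 16, 3
pose_2d_window = torch.randn(batch, 2 * n_joints * number_frames)
lifter = Linear(input_size=2 * n_joints * number_frames,
                hidden_size=1024, output_size=48)
pose_3d = lifter(pose_2d_window)          # 3D pose for the centre frame
assert pose_3d.shape == (batch, 48)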
10bb4fa4003789219023006be73310ea3e926495 | 30,607 | py | Python | testing/tools/integration/end_points.py | vishalbelsare/wallaroo | 2b985a3e756786139316c72ebcca342346546ba7 | [
"Apache-2.0"
] | 1,459 | 2017-09-16T13:13:15.000Z | 2020-10-05T06:19:50.000Z | testing/tools/integration/end_points.py | vishalbelsare/wallaroo | 2b985a3e756786139316c72ebcca342346546ba7 | [
"Apache-2.0"
] | 1,413 | 2017-09-14T18:18:14.000Z | 2020-09-28T08:10:30.000Z | testing/tools/integration/end_points.py | vishalbelsare/wallaroo | 2b985a3e756786139316c72ebcca342346546ba7 | [
"Apache-2.0"
] | 80 | 2017-09-27T23:16:23.000Z | 2020-06-02T09:18:53.000Z | # Copyright 2017 The Wallaroo Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
import datetime
import errno
import io
import logging
import threading
import time
import socket
import struct
from .errors import TimeoutError
from .logger import INFO2
from .stoppable_thread import StoppableThread
from wallaroo.experimental.connectors import (BaseIter,
BaseSource,
MultiSourceConnector)
try:
basestring
except:
basestring = (str, bytes)
class SingleSocketReceiver(StoppableThread):
"""
Read length or newline encoded data from a socket and append it to an
accumulator list.
Multiple SingleSocketReceivers may write to the same accumulator safely,
so long as they perform atomic writes (e.g. each append adds a single,
complete entry).
"""
__base_name__ = 'SingleSocketReceiver'
def __init__(self, sock, accumulator, mode='framed', header_fmt='>I',
name=None):
super(SingleSocketReceiver, self).__init__()
self.sock = sock
self.accumulator = accumulator
self.mode = mode
self.header_fmt = header_fmt
self.header_length = struct.calcsize(self.header_fmt)
if name:
self.name = '{}:{}:{}'.format(self.__base_name__, name,
sock.fileno())
else:
self.name = '{}:{}'.format(self.__base_name__, sock.fileno())
def try_recv(self, bs, flags=0):
"""
        Try to run `sock.recv(bs)`, returning None on error.
"""
try:
return self.sock.recv(bs, flags)
except:
return None
def append(self, bs):
if self.mode == 'framed':
self.accumulator.append(bs)
else:
self.accumulator.append(bs + b'\n')
def run(self):
if self.mode == 'framed':
self.run_framed()
else:
self.run_newlines()
def run_newlines(self):
data = []
while not self.stopped():
buf = self.try_recv(1024)
if not buf:
self.stop()
if data:
self.append(b''.join(data))
break
# We must be careful not to accidentally join two separate lines
# nor split a line
            split = buf.split(b'\n')  # a b'\n' shows up as b'' in the list after split
s0 = split.pop(0)
if s0:
if data:
data.append(s0)
self.append(b''.join(data))
data = []
else:
self.append(s0)
else:
# s0 is '', so first line is a '\n', and overflow is a
# complete message if it isn't empty
if data:
self.append(b''.join(data))
data = []
for s in split[:-1]:
self.append(s)
if split: # not an empty list
if split[-1]: # not an empty string, i.e. it wasn't a '\n'
data.append(split[-1])
time.sleep(0.000001)
def run_framed(self):
while not self.stopped():
header = self.try_recv(self.header_length, socket.MSG_WAITALL)
if not header:
self.stop()
continue
expect = struct.unpack(self.header_fmt, header)[0]
data = self.try_recv(expect, socket.MSG_WAITALL)
if not data:
self.stop()
else:
self.append(b''.join((header, data)))
time.sleep(0.000001)
def stop(self, *args, **kwargs):
super(self.__class__, self).stop(*args, **kwargs)
self.sock.close()
class MultiClientStreamView(object):
def __init__(self, initial_streams, blocking=True):
self.streams = {s.name: s.accumulator for s in initial_streams}
self.positions = {s.name: 0 for s in initial_streams}
self.keys = list(self.positions.keys())
self.key_position = 0
self.blocking = blocking
def add_stream(self, stream):
if stream.name in self.streams:
raise KeyError("Stream {} already in view!".format(stream.name))
self.streams[stream.name] = stream.accumulator
self.positions[stream.name] = 0
self.keys.append(stream.name)
def throw(self, type=None, value=None, traceback=None):
raise StopIteration
def __iter__(self):
return self
def next(self):
return self.__next__()
def __next__(self):
# sleep condition
origin = self.key_position
while True:
# get current key
cur = self.keys[self.key_position]
# set key for next iteration
self.key_position = (self.key_position + 1) % len(self.keys)
# Can we read from current key?
if self.positions[cur] < len(self.streams[cur]):
# read next value
val = self.streams[cur][self.positions[cur]]
# Increment position
self.positions[cur] += 1
return val
elif self.key_position == origin:
if self.blocking:
# sleep after a full round on all keys produces no value
time.sleep(0.001)
else:
time.sleep(0.001)
return None
# implicit: continue
class TCPReceiver(StoppableThread):
"""
Listen on a (host,port) pair and write any incoming data to an accumulator.
    If `port` is 0, an available port will be chosen by the operating system.
`get_connection_info` may be used to obtain the (host, port) pair after
`start()` is called.
`max_connections` specifies the number of total concurrent connections
supported.
    `mode` specifies how the receiver handles parsing the network stream
into records. `'newlines'` will split on newlines, and `'framed'` will
use a length-encoded framing, along with the `header_fmt` value (default
mode is `'framed'` with `header_fmt='>I'`).
You can read any data saved to the accumulator (a list) at any time
by reading the `data` attribute of the receiver, although this attribute
is only guaranteed to stop growing after `stop()` has been called.
"""
__base_name__ = 'TCPReceiver'
def __init__(self, host, port=0, max_connections=1000, mode='framed',
split_streams=False, header_fmt='>I'):
"""
Listen on a (host, port) pair for up to max_connections connections.
Each connection is handled by a separate client thread.
"""
super(TCPReceiver, self).__init__()
self.host = host
self.port = port
        self.address = '{}:{}'.format(host, port)
self.max_connections = max_connections
self.mode = mode
self.split_streams = split_streams
self.header_fmt = header_fmt
self.header_length = struct.calcsize(self.header_fmt)
# use an in-memory byte buffer
self.data = {}
# Create a socket and start listening
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.clients = []
self.err = None
self.event = threading.Event()
self.start_time = None
self.views = []
def __len__(self):
return sum(map(len, self.data.values()))
def bytes_received(self):
        return sum(sum(map(len, acc)) for acc in self.data.values())
def get_connection_info(self, timeout=10):
is_connected = self.event.wait(timeout)
if not is_connected:
raise TimeoutError("{} Couldn't get connection info after {}"
" seconds".format(self.__base_name__, timeout))
return self.sock.getsockname()
def run(self):
self.start_time = datetime.datetime.now()
try:
self.sock.bind((self.host, self.port))
self.sock.listen(self.max_connections)
self.host, self.port = self.sock.getsockname()
self.event.set()
while not self.stopped():
try:
(clientsocket, address) = self.sock.accept()
except Exception as err:
try:
if self.stopped():
break
else:
raise err
except OSError as err:
if err.errno == errno.ECONNABORTED:
# [ECONNABORTED] A connection arrived, but it was
# closed while waiting on the listen queue.
# This happens on macOS during normal
# harness shutdown.
return
else:
logging.error("socket accept errno {}"
.format(err.errno))
self.err = err
raise
if self.split_streams:
# Use a counter to identify unique streams
client_accumulator = self.data.setdefault(len(self.data),
[])
else:
# use * to identify the "everything" stream
client_accumulator = self.data.setdefault('*', [])
cl = SingleSocketReceiver(clientsocket,
client_accumulator,
self.mode,
self.header_fmt,
name='{}-{}'.format(
self.__base_name__,
len(self.clients)))
logging.debug("{}:{} accepting connection from ({}, {}) on "
"port {}."
.format(self.__base_name__, self.name, self.host,
self.port, address[1]))
self.clients.append(cl)
if self.views:
for v in self.views:
v.add_stream(cl)
cl.start()
except Exception as err:
self.err = err
raise
def stop(self, *args, **kwargs):
if not self.stopped():
super(TCPReceiver, self).stop(*args, **kwargs)
try:
self.sock.shutdown(socket.SHUT_RDWR)
except OSError as err:
if err.errno == errno.ENOTCONN:
# [ENOTCONN] Connection is already closed or unopened
# and can't be shutdown.
pass
else:
raise
self.sock.close()
for cl in self.clients:
cl.stop()
def view(self, blocking=True):
view = MultiClientStreamView(self.clients, blocking=blocking)
self.views.append(view)
return view
def save(self, path):
files = []
if self.split_streams:
# Save streams separately
for stream, data in self.data.items():
base, suffix = path.rsplit('.', 1)
new_path = '{}.{}.{}'.format(base, stream, suffix)
logging.debug("Saving stream {} to path {}".format(
stream, new_path))
with open(new_path, 'wb') as f:
files.append(new_path)
for item in data:
f.write(item)
f.flush()
else:
# only have stream '*' to save
logging.debug("Saving stream * to path {}".format(path))
with open(path, 'wb') as f:
files.append(path)
for item in self.data['*']:
f.write(item)
f.flush()
return files
class Metrics(TCPReceiver):
__base_name__ = 'Metrics'
class Sink(TCPReceiver):
__base_name__ = 'Sink'
class Sender(StoppableThread):
"""
Send length framed data to a destination (addr).
`address` is the full address in the host:port format
`reader` is a Reader instance
`batch_size` denotes how many records to send at once (default=1)
`interval` denotes the minimum delay between transmissions, in seconds
(default=0.001)
    `header_fmt` is the struct format of the length header; its byte length
    is derived via `struct.calcsize`
`reconnect` is a boolean denoting whether sender should attempt to
reconnect after a connection is lost.
"""
def __init__(self, address, reader, batch_size=1, interval=0.001,
header_fmt='>I', reconnect=False):
logging.info("Sender({address}, {reader}, {batch_size}, {interval},"
" {header_fmt}, {reconnect}) created".format(
address=address, reader=reader, batch_size=batch_size,
interval=interval, header_fmt=header_fmt,
reconnect=reconnect))
super(Sender, self).__init__()
self.daemon = True
self.reader = reader
self.batch_size = batch_size
self.batch = []
self.interval = interval
self.header_fmt = header_fmt
self.header_length = struct.calcsize(self.header_fmt)
self.address = address
(host, port) = address.split(":")
self.host = host
self.port = int(port)
self.name = 'Sender'
self.error = None
self._bytes_sent = 0
self.reconnect = reconnect
self.pause_event = threading.Event()
self.data = []
self.start_time = None
def pause(self):
self.pause_event.set()
def paused(self):
return self.pause_event.is_set()
def resume(self):
self.pause_event.clear()
def send(self, bs):
try:
self.sock.sendall(bs)
except OSError as err:
if err.errno == 104 or err.errno == 54:
# ECONNRESET on Linux or macOS, respectively
is_econnreset = True
else:
is_econnreset = False
logging.info("socket errno {} ECONNRESET {} stopped {}"
.format(err.errno, is_econnreset, self.stopped()))
self.data.append(bs)
self._bytes_sent += len(bs)
def bytes_sent(self):
return self._bytes_sent
def batch_append(self, bs):
self.batch.append(bs)
def batch_send(self):
if len(self.batch) >= self.batch_size:
self.batch_send_final()
time.sleep(self.interval)
def batch_send_final(self):
if self.batch:
self.send(b''.join(self.batch))
self.batch = []
def run(self):
self.start_time = datetime.datetime.now()
while not self.stopped():
try:
logging.info("Sender connecting to ({}, {})."
.format(self.host, self.port))
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.sock.connect((self.host, self.port))
while not self.stopped():
while self.paused():
# make sure to empty the send buffer before
#entering pause state!
self.batch_send_final()
time.sleep(0.001)
header = self.reader.read(self.header_length)
if not header:
self.maybe_stop()
break
expect = struct.unpack(self.header_fmt, header)[0]
body = self.reader.read(expect)
if not body:
self.maybe_stop()
break
self.batch_append(header + body)
self.batch_send()
time.sleep(0.000000001)
self.batch_send_final()
self.sock.close()
except KeyboardInterrupt:
logging.info("KeyboardInterrupt received.")
self.stop()
break
except Exception as err:
self.error = err
logging.error(err)
if not self.reconnect:
break
if not self.stopped():
logging.info("Waiting 1 second before retrying...")
time.sleep(1)
self.sock.close()
def maybe_stop(self):
if not self.batch:
self.stop()
def stop(self, *args, **kwargs):
if not self.stopped():
logging.log(INFO2, "Sender received stop instruction.")
super(Sender, self).stop(*args, **kwargs)
if self.batch:
logging.warning("Sender stopped, but send buffer size is {}"
.format(len(self.batch)))
def last_sent(self):
if isinstance(self.reader.gen, MultiSequenceGenerator):
return self.reader.gen.last_sent()
else:
raise ValueError("Can only use last_sent on a sender with "
"a MultiSequenceGenerator, or an ALOSender.")
class NoNonzeroError(ValueError):
pass
def first_nonzero_index(seq):
idx = 0
for item in seq:
if item == 0:
idx += 1
else:
return idx
else:
raise NoNonzeroError("No nonzero values found in list")
class Sequence(object):
def __init__(self, index, val=0):
self.index = '{index:07d}'.format(index = index)
self.key = self.index.encode()
self.val = val
def __next__(self):
self.val += 1
return (self.key, self.val)
def __iter__(self):
return self
def next(self):
return self.__next__()
def throw(self, type=None, value=None, traceback=None):
raise StopIteration
class MultiSequenceGenerator(object):
"""
A growable collection of sequence generators.
- Each new generator has its own partition
- Messages are emitted in a round-robin fashion over the generator list
- When a new generator joins, it takes over all new messages until it
catches up
- At stoppage time, all generators are allowed to reach the same final
value
"""
def __init__(self, base_index=0, initial_partitions=1, base_value=0):
self._base_value = base_value
self._next_index = base_index + initial_partitions
self.seqs = [Sequence(x, self._base_value)
for x in range(base_index, self._next_index)]
# self.seqs stores the last value sent for each sequence
self._idx = 0 # the idx of the last sequence sent
self._remaining = []
self.lock = threading.Lock()
def format_value(self, value, partition):
return struct.pack('>IQ{}s'.format(len(partition)), 8+len(partition),
value, partition)
def _next_value_(self):
# Normal operation next value: round robin through the sets
if self._idx >= len(self.seqs):
self._idx = 0
next_seq = self.seqs[self._idx]
self._idx += 1
return next(next_seq)
def _next_catchup_value(self):
# After stop() was called: all sets catch up to current max
try:
idx = first_nonzero_index(self._remaining)
next_seq = self.seqs[idx]
self._remaining[idx] -= 1
return next(next_seq)
except NoNonzeroError:
# reset self._remaining so it can be reused
if not self.max_val:
self._remaining = []
logging.debug("MultiSequenceGenerator: Stop condition "
"reached. Final values are: {}".format(
self.seqs))
self.throw()
def add_sequence(self):
if not self._remaining:
logging.debug("MultiSequenceGenerator: adding new sequence")
self.seqs.append(Sequence(self._next_index, self._base_value))
self._next_index += 1
def stop(self):
logging.info("MultiSequenceGenerator: stop called")
logging.debug("seqs are: {}".format(self.seqs))
with self.lock:
self.max_val = max([seq.val for seq in self.seqs])
self._remaining = [self.max_val - seq.val for seq in self.seqs]
logging.debug("_remaining: {}".format(self._remaining))
def last_sent(self):
return [('{}'.format(key), val) for (key,val) in
[(seq.index, seq.val) for seq in self.seqs]]
def send(self, ignored_arg):
with self.lock:
if self._remaining:
idx, val = self._next_catchup_value()
else:
idx, val = self._next_value_()
return self.format_value(val, idx)
def throw(self, type=None, value=None, traceback=None):
raise StopIteration
def __iter__(self):
return self
def next(self):
return self.__next__()
def __next__(self):
return self.send(None)
def close(self):
"""Raise GeneratorExit inside generator.
"""
try:
self.throw(GeneratorExit)
except (GeneratorExit, StopIteration):
pass
else:
raise RuntimeError("generator ignored GeneratorExit")
def sequence_generator(stop=1000, start=0, header_fmt='>I', partition=''):
"""
Generate a sequence of integers, encoded as big-endian U64.
`stop` denotes the maximum value of the sequence (inclusive)
`start` denotes the starting value of the sequence (exclusive)
    `header_fmt` is the struct format used for encoding the length header via
    `struct.pack`
`partition` is a string representing the optional partition key. It is
empty by default.
"""
partition = partition.encode()
size = 8 + len(partition)
fmt = '>Q{}s'.format(len(partition)) if partition else '>Q'
for x in range(start+1, stop+1):
yield struct.pack(header_fmt, size)
if partition:
yield struct.pack(fmt, x, partition)
else:
yield struct.pack(fmt, x)
def iter_generator(items,
to_bytes=lambda s: s.encode()
if isinstance(s, basestring) else str(s).encode(),
header_fmt='>I',
on_next=None):
"""
Generate a sequence of length encoded binary records from an iterator.
`items` is the iterator of items to encode
`to_bytes` is a function for converting items to a bytes
(default:`lambda s: s.encode() if isinstance(s, basestring) else
str(s).encode()`)
`header_fmt` is the format to use for encoding the length using
`struct.pack`
"""
for val in items:
if on_next:
on_next(val)
bs = to_bytes(val)
yield struct.pack(header_fmt, len(bs))
yield bs
def files_generator(files, mode='framed', header_fmt='>I', on_next=None):
"""
Generate a sequence of binary data stubs from a set of files.
- `files`: either a single filepath or a list of filepaths.
The same filepath may be provided multiple times, in which case it will
be read that many times from start to finish.
- `mode`: 'framed' or 'newlines'. If 'framed' is used, `header_fmt` is
used to determine how many bytes to read each time. Default: 'framed'
- `header_fmt`: the format of the length encoding header used in the files
Default: '>I'
"""
if isinstance(files, basestring):
files = [files]
for path in files:
if mode == 'newlines':
for l in newline_file_generator(path):
if on_next:
on_next(l)
yield l
elif mode == 'framed':
for l in framed_file_generator(path, header_fmt):
if on_next:
on_next(l)
yield l
else:
raise ValueError("`mode` must be either 'framed' or 'newlines'")
def newline_file_generator(filepath, header_fmt='>I', on_next=None):
"""
Generate length-encoded strings from a newline-delimited file.
"""
with open(filepath, 'rb') as f:
f.seek(0, 2)
fin = f.tell()
f.seek(0)
while f.tell() < fin:
o = f.readline().strip(b'\n')
if o:
if on_next:
on_next(o)
yield struct.pack(header_fmt, len(o))
yield o
def framed_file_generator(filepath, header_fmt='>I', on_next=None):
"""
Generate length encoded records from a length-framed binary file.
"""
header_length = struct.calcsize(header_fmt)
with open(filepath, 'rb') as f:
while True:
header = f.read(header_length)
if not header:
break
expect = struct.unpack(header_fmt, header)[0]
body = f.read(expect)
if not body:
break
if on_next:
on_next(header + body)
yield header
yield body
class Reader(object):
"""
A BufferedReader interface over a bytes generator
"""
def __init__(self, generator):
self.gen = generator
self.overflow = b''
def read(self, num):
remaining = num
out = io.BufferedWriter(io.BytesIO())
remaining -= out.write(self.overflow)
while remaining > 0:
try:
remaining -= out.write(next(self.gen))
except StopIteration:
break
# first num bytes go to return, remainder to overflow
out.seek(0)
r = out.raw.read(num)
self.overflow = out.raw.read()
return r
class ALOSequenceGenerator(BaseIter, BaseSource):
"""
A sequence generator with a resettable position.
Starts at 1, and stops aftering sending `stop`.
Usage: `ALOSequenceGenerator(partition, stop=1000, data=None)`
if `data` is a list, data generated is appended to it in order
as (position, value) tuples.
"""
def __init__(self, key, stop=None, start=0):
self.partition = key
self.name = key.encode()
self.key = key.encode()
self.position = start
self._stop = stop
self.start = start
        self.stopped = False
        self.paused = False
        self.closed = False  # set by close(); initialized here so it always exists
def __str__(self):
return ("ALOSequenceGenerator(partition: {}, stopped: {}, point_of_ref: {})"
.format(self.name, self.stopped, self.point_of_ref()))
def point_of_ref(self):
return self.position
def reset(self, pos=None):
if pos is None:
pos = self.start
        if pos == 18446744073709551615:
            # 2**64 - 1: sentinel the connector sends when there is no
            # valid point of reference to resume from
            pos = self.start
self.position = pos
def __next__(self):
# This has to be before the increment, otherwise point_of_ref()
# doesn't return the previous position!
if self.stopped:
raise StopIteration
if self._stop is not None:
if self.position >= self._stop:
raise StopIteration
if self.paused:
return (None, self.position)
self.position += 1
val, pos, key = (self.position, self.position, self.key)
payload = struct.pack('>Q{}s'.format(len(key)), val, key)
return (payload, pos)
    def wallaroo_acked(self, point_of_ref):
        pass  # acks carry no state for this generator
def close(self):
self.closed = True
def stop(self):
self.stopped = True
def pause(self):
self.paused = True
def resume(self):
self.paused = False
class ALOSender(StoppableThread):
"""
A wrapper for MultiSourceConnector to look like a regular TCP Sender
"""
def __init__(self, sources, version, cookie, program_name, instance_name,
addr):
super(ALOSender, self).__init__()
host, port = addr.split(':')
port = int(port)
self.client = client = MultiSourceConnector(
version,
cookie,
program_name,
instance_name,
host, port)
self.name = "ALOSender_{}".format("-".join(
[source.partition for source in sources]))
self.sources = sources
logging.debug("ALO: sources = {}".format(sources))
self.data = []
self.client.data = self.data
for source in self.sources:
source.data = self.data
self.host = host
self.port = port
self.start_time = None
self.error = None
self.batch = [] # for compatibility with Sender during validations
def run(self):
self.start_time = datetime.datetime.now()
self.client.connect()
for source in self.sources:
self.client.add_source(source)
self.error = self.client.join()
def stop(self, error=None):
logging.debug("ALOSender stop")
for source in self.sources:
logging.debug("source to stop: {}".format(source))
source.stop()
if error is not None:
self.client.shutdown(error=error)
def pause(self):
logging.debug("ALOSender pause: pausing {} sources"
.format(len(self.sources)))
for source in self.sources:
source.pause()
def resume(self):
logging.debug("ALOSender resume: resuming {} sources"
.format(len(self.sources)))
for source in self.sources:
source.resume()
def last_sent(self):
return [(source.partition, source.position) for source in self.sources]
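# Hedged wiring sketch (illustrative; version/cookie/address values are
# assumptions, not real deployment settings):
#
#   source = ALOSequenceGenerator('part-0', stop=1000)
#   sender = ALOSender([source], version='0.0.1', cookie='cookie',
#                      program_name='prog', instance_name='inst',
#                      addr='127.0.0.1:7100')
#   sender.start()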
| 34.467342 | 84 | 0.55422 | 3,525 | 30,607 | 4.696454 | 0.156596 | 0.019571 | 0.008457 | 0.005436 | 0.2119 | 0.157052 | 0.120507 | 0.104077 | 0.092963 | 0.080882 | 0 | 0.007796 | 0.350443 | 30,607 | 887 | 85 | 34.506201 | 0.824908 | 0.192309 | 0 | 0.34345 | 0 | 0 | 0.051434 | 0.005081 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121406 | false | 0.004792 | 0.019169 | 0.025559 | 0.21885 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
529d951b9eba055452835cb62f5708f0310592ba | 392 | py | Python | ABC130/ABC130d.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | [
"Unlicense"
] | null | null | null | ABC130/ABC130d.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | [
"Unlicense"
] | null | null | null | ABC130/ABC130d.py | VolgaKurvar/AtCoder | 21acb489f1594bbb1cdc64fbf8421d876b5b476d | [
"Unlicense"
] | null | null | null | # Putting this at the top makes input() faster
import sys
input = sys.stdin.readline
n, k = map(int, input().split())
a = list(map(int, input().split()))
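# Worked example (values chosen for illustration, not an official test case):
# for the input
#   4 10
#   6 1 2 7
# the contiguous subarrays with sum >= k are [6, 1, 2, 7] and [1, 2, 7],
# so the script prints 2.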
ruisekiwa = [0]  # prefix sums ("ruisekiwa" = cumulative sum)
for i in range(1,n+1):
ruisekiwa.append(ruisekiwa[i - 1] + a[i-1])
count = 0
i = 0
j = 0
#print(ruisekiwa)
while i < n + 1:
    if ruisekiwa[i] - ruisekiwa[j] >= k:
count += n-i+1
j += 1
else:
i += 1
print(count) | 16.333333 | 47 | 0.566327 | 65 | 392 | 3.415385 | 0.415385 | 0.036036 | 0.099099 | 0.144144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040404 | 0.242347 | 392 | 24 | 48 | 16.333333 | 0.707071 | 0.089286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52a3b24d265c70f69941d93b5f00be2dff577330 | 8,883 | py | Python | volapi/auxo.py | volafiled/python-volapi | 8b213c1a30fe24166403da18b87ca2a3b2409689 | [
"MIT"
] | 5 | 2019-05-12T01:18:33.000Z | 2021-03-25T07:34:29.000Z | volapi/auxo.py | volafiled/python-volapi | 8b213c1a30fe24166403da18b87ca2a3b2409689 | [
"MIT"
] | 11 | 2018-12-06T17:17:27.000Z | 2021-07-05T07:54:58.000Z | volapi/auxo.py | RealDolos/volapi | 8b213c1a30fe24166403da18b87ca2a3b2409689 | [
"MIT"
] | 9 | 2015-11-28T08:51:29.000Z | 2018-09-29T10:46:44.000Z | """
The MIT License (MIT)
Copyright © 2015 RealDolos
Copyright © 2018 Szero
See LICENSE
"""
# pylint: disable=broad-except
import logging
import sys
import asyncio
from collections import namedtuple, defaultdict
from functools import wraps
from threading import Thread, Event, RLock, Condition, Barrier, get_ident
from urllib.parse import urlsplit
from copy import copy
from requests import Request
from requests.cookies import get_cookie_header
from autobahn.asyncio.websocket import WebSocketClientFactory, WebSocketClientProtocol
logger = logging.getLogger(__name__)
def call_async(func):
"""Decorates a function to be called async on the loop thread"""
@wraps(func)
def wrapper(self, *args, **kw):
"""Wraps instance method to be called on loop thread"""
def call():
"""Calls function on loop thread"""
try:
func(self, *args, **kw)
except Exception:
logger.exception(
"failed to call async [%r] with [%r] [%r]", func, args, kw
)
self.loop.call_soon_threadsafe(call)
return wrapper
def call_sync(func):
"""Decorates a function to be called sync on the loop thread"""
@wraps(func)
def wrapper(self, *args, **kw):
"""Wraps instance method to be called on loop thread"""
# Just return when already on the event thread
if self.thread.ident == get_ident():
return func(self, *args, **kw)
barrier = Barrier(2)
result = None
ex = None
def call():
"""Calls function on loop thread"""
nonlocal result, ex
try:
result = func(self, *args, **kw)
except Exception as exc:
ex = exc
finally:
barrier.wait()
self.loop.call_soon_threadsafe(call)
barrier.wait()
        if ex:
            raise ex  # re-raise the exception captured on the loop thread
return result
return wrapper
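# Hedged usage note (illustrative): both decorators assume the instance
# exposes `loop` (an asyncio event loop) and, for call_sync, `thread`
# (the thread running that loop), as ListenerArbitrator below provides.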
class Awakener:
"""
Callable helper thread to awaken non-event loop threads.
The issue is that notify_all will temporarily release the
lock to fire the listeners.
If a listener would interact with volapi from there, this
might mean a deadlock.
Having the release on another thread will not block the
event loop thread, therefore problem solved
"""
def __init__(self, condition):
self.condition = condition
self.count = 0
self.lock = RLock()
self.event = Event()
self.thread = Thread(daemon=True, target=self.target)
self.thread.start()
def __call__(self):
with self.lock:
self.count += 1
self.event.set()
def target(self):
"""Thread routine"""
while self.event.wait():
self.event.clear()
while True:
with self.lock:
if not self.count:
break
self.count -= 1
with self.condition:
self.condition.notify_all()
class ListenerArbitrator:
"""Manages the asyncio loop and thread"""
def __init__(self):
self.loop = None
self.condition = Condition()
barrier = Barrier(2)
self.awaken = Awakener(self.condition)
self.thread = Thread(daemon=True, target=lambda: self._loop(barrier))
self.thread.start()
barrier.wait()
def _loop(self, barrier):
"""Actual thread"""
if sys.platform != "win32":
self.loop = asyncio.new_event_loop()
else:
self.loop = asyncio.ProactorEventLoop()
asyncio.set_event_loop(self.loop)
barrier.wait()
try:
self.loop.run_forever()
except Exception:
sys.exit(1)
@call_async
def create_connection(self, room, ws_url, agent, cookies):
"""Creates a new connection"""
urlparts = urlsplit(ws_url)
req = Request("GET", ws_url)
cookies = get_cookie_header(cookies, req)
if cookies:
headers = dict(Cookie=cookies)
else:
headers = None
factory = WebSocketClientFactory(ws_url, headers=headers, loop=self.loop)
factory.useragent = agent
factory.protocol = lambda: room
conn = self.loop.create_connection(
factory,
host=urlparts.netloc,
port=urlparts.port or 443,
ssl=urlparts.scheme == "wss",
)
asyncio.ensure_future(conn, loop=self.loop)
def __send_message(self, proto, payload):
# pylint: disable=no-self-use
"""Sends a message"""
try:
if not isinstance(payload, bytes):
payload = payload.encode("utf-8")
if not proto.connected:
raise IOError("not connected")
proto.sendMessage(payload)
logger.debug("sent: %r", payload)
except Exception as ex:
logger.exception("Failed to send message with payload of:\n%r", payload)
proto.reraise(ex)
@call_async
def send_message(self, proto, payload):
self.__send_message(proto, payload)
@call_sync
def close(self, proto):
# pylint: disable=no-self-use
"""Closes a connection"""
try:
proto.sendClose()
except Exception as ex:
logger.exception("Failed to send close")
proto.reraise(ex)
ARBITRATOR = ListenerArbitrator()
class Listeners(namedtuple("Listeners", ("callbacks", "queue", "enlock", "lock"))):
"""Collection of Listeners
`callbacks` are function objects.
`queue` holds data that will be sent to each function object
in `callbacks` variable.
    Each call to `process` runs the registered callbacks
    against queued items whose types match. After that the
    `queue` is cleared and the number of callbacks is returned.
    Callbacks that return False will be removed."""
def __new__(cls):
return super().__new__(
cls, defaultdict(list), defaultdict(list), RLock(), RLock()
)
def process(self):
"""Process queue for these listeners. Only the items with type that
matches """
        with self.lock, self.enlock:
            # copy() is shallow: the per-type lists stay shared with
            # self.queue/self.callbacks, so removals below mutate both views
            queue = copy(self.queue)
            self.queue.clear()
            callbacks = copy(self.callbacks)
with self.lock:
rm_cb = False
for ki, vi in queue.items():
if ki in self.callbacks:
for item in vi:
for cb in self.callbacks[ki]:
if cb(item) is False:
callbacks[ki].remove(cb)
if not callbacks[ki]:
del callbacks[ki]
rm_cb = True
with self.lock:
if rm_cb:
self.callbacks.clear()
for k, v in callbacks.items():
self.callbacks[k].extend(v)
return len(self.callbacks)
def add(self, callback_type, callback):
"""Add a new listener"""
with self.lock:
self.callbacks[callback_type].append(callback)
def enqueue(self, item_type, item):
"""Queue a new data item, make item iterable"""
with self.enlock:
self.queue[item_type].append(item)
def __len__(self):
"""Return number of listeners in collection"""
with self.lock:
return len(self.callbacks)
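# Hedged usage sketch (illustrative, not part of the original module):
def _listeners_example():
    listeners = Listeners()
    seen = []
    listeners.add('chat', seen.append)    # append returns None, so it stays
    listeners.enqueue('chat', 'hello')
    listeners.process()                   # dispatches 'hello' to seen.append
    return seen                           # -> ['hello']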
class Protocol(WebSocketClientProtocol):
"""Implements the websocket protocol"""
def __init__(self, conn):
super().__init__()
self.conn = conn
self.connected = False
self.max_id = 0
self.send_count = 1
self.session = None
def onConnect(self, _response):
self.connected = True
async def onOpen(self):
await self.conn.on_open()
def onMessage(self, payload, isBinary):
if not payload:
logger.warning("empty frame!")
return
try:
if not isBinary:
payload = payload.decode("utf-8")
self.conn.on_message(payload)
except Exception:
logger.exception("something went horribly wrong")
async def onClose(self, _wasClean, _code, _reason):
await self.conn.on_close()
self.connected = False
def connection_lost(self, exc):
self.reraise(exc)
self.connected = False
def reraise(self, ex):
if hasattr(self, "conn"):
self.conn.reraise(ex)
else:
logger.error("Cannot reraise")
def __repr__(self):
return f"<Protocol({self.session},max_id={self.max_id},send_count={self.send_count})>"
| 28.747573 | 94 | 0.575932 | 1,011 | 8,883 | 4.964392 | 0.276954 | 0.017533 | 0.016736 | 0.008368 | 0.141861 | 0.125922 | 0.077705 | 0.052202 | 0.052202 | 0.033871 | 0 | 0.003856 | 0.328493 | 8,883 | 308 | 95 | 28.840909 | 0.837217 | 0.170438 | 0 | 0.235 | 0 | 0 | 0.045322 | 0.010566 | 0 | 0 | 0 | 0 | 0 | 1 | 0.13 | false | 0 | 0.055 | 0.01 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52a548400e4445ff6a7e4508d0c6e1b0af2b4a49 | 1,382 | py | Python | vaccine_feed_ingest/runners/ca/metrolink/parse.py | jeremyschlatter/vaccine-feed-ingest | 215f6c144fe5220deaccdb5db3e96f28b7077b3f | [
"MIT"
] | 27 | 2021-04-24T02:11:18.000Z | 2021-05-17T00:54:45.000Z | vaccine_feed_ingest/runners/ca/metrolink/parse.py | jeremyschlatter/vaccine-feed-ingest | 215f6c144fe5220deaccdb5db3e96f28b7077b3f | [
"MIT"
] | 574 | 2021-04-06T18:09:11.000Z | 2021-08-30T07:55:06.000Z | vaccine_feed_ingest/runners/ca/metrolink/parse.py | jeremyschlatter/vaccine-feed-ingest | 215f6c144fe5220deaccdb5db3e96f28b7077b3f | [
"MIT"
] | 47 | 2021-04-23T05:31:14.000Z | 2021-07-01T20:22:46.000Z | #!/usr/bin/env python3
import json
import os
import pathlib
import re
import sys
from bs4 import BeautifulSoup
input_dir = pathlib.Path(sys.argv[2])
output_dir = pathlib.Path(sys.argv[1])
input_filenames = [p for p in pathlib.Path(input_dir).iterdir() if p.is_file()]
for filename in input_filenames:
sites = []
with filename.open() as fin:
content = fin.read()
soup = BeautifulSoup(content, "html.parser")
table = soup.find(id="vaxLocationsTable")
for row in table.find("tbody").find_all("tr"):
cells = row.find_all("td")
        # renderContents() returns bytes; str(...)[2:-1] strips the b'...' wrapper
        longname = str(cells[0].renderContents())[2:-1]
site_name = longname.split(" <br/> ")[0]
address_tokens = re.search("(.*), (.*)", longname.split(" <br/> ")[1])
site_address, site_city = None, None
if address_tokens is not None:
site_address = address_tokens.group(1)
site_city = address_tokens.group(2)
site = {
"name": site_name,
"address": site_address,
"city": site_city,
"metrolink_line": cells[1].string,
"metrolink_station": cells[2].string,
}
sites.append(site)
with (output_dir / (os.path.basename(filename) + ".parsed.ndjson")).open(
"w"
) as fout:
for site in sites:
json.dump(site, fout)
fout.write("\n")
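# Illustrative note (field values are assumptions): each emitted ndjson line
# has the shape {"name": ..., "address": ..., "city": ...,
#                "metrolink_line": ..., "metrolink_station": ...}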
| 27.098039 | 79 | 0.592619 | 177 | 1,382 | 4.497175 | 0.435028 | 0.065327 | 0.035176 | 0.042714 | 0.052764 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012783 | 0.26411 | 1,382 | 50 | 80 | 27.64 | 0.769912 | 0.015195 | 0 | 0 | 0 | 0 | 0.091176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.157895 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52a722a71a7d967de469d0276cdea355d5919d88 | 2,304 | py | Python | accuracy/z_1_gen_horseshoe_bats.py | thejasvibr/itsfm | 4b5af1824656d3eefe9bb1994aa6a35512e377e5 | [
"MIT"
] | 1 | 2020-03-19T21:22:44.000Z | 2020-03-19T21:22:44.000Z | accuracy/z_1_gen_horseshoe_bats.py | thejasvibr/measure_horseshoe_bat_calls | 4b5af1824656d3eefe9bb1994aa6a35512e377e5 | [
"MIT"
] | null | null | null | accuracy/z_1_gen_horseshoe_bats.py | thejasvibr/measure_horseshoe_bat_calls | 4b5af1824656d3eefe9bb1994aa6a35512e377e5 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Generating the CF-FM synthetic calls
====================================
Module that creates the data for accuracy testing horseshoe bat type calls
"""
import h5py
from itsfm.simulate_calls import make_cffm_call
import numpy as np
import pandas as pd
import scipy.signal as signal
from tqdm import tqdm
cf_durations = [0.005, 0.010, 0.015]
cf_peakfreq = [40000, 60000, 90000]
fm_durations = [0.001, 0.002]
fm_bw = [5000, 10000, 20000]
all_combinations = np.array(np.meshgrid(cf_peakfreq, cf_durations,
fm_bw,fm_durations,
np.flip(fm_bw),np.flip(fm_durations)))
all_params = all_combinations.flatten().reshape(6,-1).T
col_names = ['cf_peak_frequency', 'cf_duration',
'upfm_bw', 'upfm_duration',
'downfm_bw', 'downfm_duration']
parameter_space = pd.DataFrame(all_params, columns=col_names)
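# Sanity note (derived from the grids above): 3 peak frequencies x 3 CF
# durations x (3 x 2) upward-FM settings x (3 x 2) downward-FM settings
# gives 3 * 3 * 3 * 2 * 3 * 2 = 324 rows in parameter_space.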
parameter_space['upfm_terminal_frequency'] = parameter_space['cf_peak_frequency'] - parameter_space['upfm_bw']
parameter_space['downfm_terminal_frequency'] = parameter_space['cf_peak_frequency'] - parameter_space['downfm_bw']
parameter_columns = ['cf_peak_frequency', 'cf_duration',
'upfm_terminal_frequency', 'upfm_duration',
'downfm_terminal_frequency', 'downfm_duration']
all_calls = {}
for row_number, parameters in tqdm(parameter_space.iterrows(),
total=parameter_space.shape[0]):
cf_peak, cf_durn, upfm_terminal, upfm_durn, downfm_terminal, downfm_durn = parameters[parameter_columns]
call_parameters = {'cf':(cf_peak, cf_durn),
'upfm':(upfm_terminal, upfm_durn),
'downfm':(downfm_terminal, downfm_durn),
}
    fs = 250*10**3  # 250 kHz sampling rate
synthetic_call, _ = make_cffm_call(call_parameters, fs)
synthetic_call *= signal.tukey(synthetic_call.size, 0.1)
all_calls[row_number] = synthetic_call
# now save the data into an hdf5 file
with h5py.File('horseshoe_test.hdf5','w') as f:
for index, audio in all_calls.items():
f.create_dataset(str(index), data=audio)
f.create_dataset('fs', data=np.array([fs]))
parameter_space.to_csv('horseshoe_test_parameters.csv')
| 38.4 | 114 | 0.661458 | 297 | 2,304 | 4.83165 | 0.363636 | 0.097561 | 0.041812 | 0.023693 | 0.179791 | 0.124042 | 0.083624 | 0.083624 | 0.083624 | 0 | 0 | 0.037507 | 0.213108 | 2,304 | 60 | 115 | 38.4 | 0.753999 | 0.098958 | 0 | 0 | 0 | 0 | 0.163196 | 0.060533 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.15 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52a8035536127f968470bcc871b54853198eee7b | 5,271 | py | Python | src/utils/plots.py | RUBIX-ML/credit-analysis | 68c4c2a16176f295e852800483802e4183d1df43 | [
"MIT"
] | 1 | 2019-09-23T04:55:20.000Z | 2019-09-23T04:55:20.000Z | src/utils/plots.py | RUBIX-ML/credit-analysis | 68c4c2a16176f295e852800483802e4183d1df43 | [
"MIT"
] | null | null | null | src/utils/plots.py | RUBIX-ML/credit-analysis | 68c4c2a16176f295e852800483802e4183d1df43 | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import re
import plotly.graph_objects as go
from plotly.graph_objs import *
from plotly.offline import plot
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
import IPython, graphviz
from sklearn.tree import export_graphviz
def missing_value_plot(df):
"""
Plot missing values
Args:
df: pd.DataFrame, input data
"""
patterns = ['Nan', 'Null', 'NULL', 'NAN', '']
col_names = []
indexes = []
for column in df.columns:
current_indexes = list(df[df[column].isin(patterns)].index)
indexes = indexes + current_indexes
col_names = col_names + [column for x in current_indexes]
y0 = indexes
x0 = col_names
fig = go.Figure()
fig.add_trace(go.Scatter(
name='Matching points',
x=x0,
y=y0,
mode='text',
text='______________',
textfont=dict(
color="red",
size=11)))
fig.add_trace(go.Bar(
name='All pints',
x=df.columns,
y=[len(df) for x in df.columns],
marker=dict(color='green'),
# fillcolor='green',
opacity=0.6
# marker_size=2,
# mode='markers'
))
fig.update_layout(
title='Pattern Filtering Cross All Columns',
paper_bgcolor='rgba(255,255,255,255)',
plot_bgcolor='rgba(255,255,255,255)',
font=dict(size=10, color="#333333"),
width=1200,
height=len(df) * 8,
xaxis=dict(autorange=True, gridcolor='#eeeeee', title='Columns'),
yaxis=dict(autorange='reversed', gridcolor='#eeeeee', title='Index number'),
bargap=0.2)
    fig.show()
def histogram_plot(df, column):
"""
Plot distributions chart of a column/feature
Args:
df: pd.DataFrame, input data
column: column/feature name
"""
trace = go.Histogram(
x=df[column],
marker=dict(color='#337ab7'),
opacity=1)
data = [trace]
layout = go.Layout(
        paper_bgcolor='rgba(255,255,255,255)',
plot_bgcolor='rgba(255,255,255,255)',
font=dict(size=8, color="#333333"),
width=600,
height=100,
xaxis=dict(autorange=True, gridcolor='#eeeeee'),
yaxis=dict(autorange=True, gridcolor='#eeeeee'),
bargap=0.1,
margin=go.layout.Margin(l=10, r=10, b=40, t=20, pad=0))
fig = go.Figure(data=data, layout=layout).update_xaxes(categoryorder='total descending')
fig.show()
def bar_plot(df, x_cols, y_cols, chart_name='', x_title='', y_title=''):
    """
    Plot a grouped bar chart for every (x, y) column pair
    Args:
        df: pd.DataFrame, input data
        x_cols: list of column names plotted on the x axis
        y_cols: list of column names plotted on the y axis
        chart_name: chart title
        x_title: x-axis title
        y_title: y-axis title
    """
    traces = []
for i in range(len(x_cols)):
for j in range(len(y_cols)):
trace = go.Bar(
x=df[x_cols[i]],
y=df[y_cols[j]],
name=str(y_cols[j]) + ' | ' + str(x_cols[i]),
)
traces.append(trace)
data = traces
layout = go.Layout(
title=chart_name,
height=500,
xaxis=dict(title=x_title, autorange=True, gridcolor='#eeeeee'),
yaxis=dict(title=y_title, autorange=True, gridcolor='#eeeeee'),
barmode='group',
paper_bgcolor='rgba(255,255,255,255)',
plot_bgcolor='rgba(255,255,255,255)',
font=dict(size=9, color="#333333"))
fig = go.Figure(data=data, layout=layout)
fig.show()
def correlation_matrix(df):
"""
Plot correlation matrix
Args:
df: pd.DataFrame, input data
"""
trace = {
"type": "heatmap",
"x": df.columns,
"y": df.columns,
"z": [list(df.iloc[row, :]) for row in range(df.shape[0])],
"colorscale": "Viridis"
}
data = [trace]
layout = {
"title": "Correlation Matrix",
"width": 800,
"xaxis": {"automargin": True},
"yaxis": {"automargin": True},
"height": 600,
"autosize": True,
"showlegend": True
}
fig = go.Figure(data=data, layout=layout)
fig.show()
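# Hedged usage note (illustrative): this function expects an already-computed
# square correlation matrix, e.g.
#
#   correlation_matrix(raw_df.corr())   # raw_df is an assumed DataFrame name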
def feature_importance_plot(df):
    """
    Plot feature importance
    Args:
        df: dict mapping feature name -> importance value
    """
    traces = []
trace = go.Bar(
x=list(df.keys()),
y=list(df.values()),
)
traces.append(trace)
data = traces
layout = go.Layout(
title='Feature Importance',
height=300,
xaxis=dict(title='Features', autorange=True, gridcolor='#eeeeee'),
yaxis=dict(title='Impact', autorange=True, gridcolor='#eeeeee'),
barmode='group',
paper_bgcolor='rgba(255,255,255,255)',
plot_bgcolor='rgba(255,255,255,255)',
font=dict(size=9, color="#333333"))
fig = go.Figure(data=data, layout=layout)
fig.show()
def draw_tree(t, df, size=70, ratio=4, precision=0):
"""
Draws a representation of a random forest in IPython.
Args:
t: The tree to draw
df: The data used to train the tree. This is used to get the names of the features.
example: draw_tree(m.estimators_[0], X_train, precision=2)
"""
s=export_graphviz(t, out_file=None, feature_names=df.columns, filled=True, special_characters=True, rotate=True, precision=precision)
IPython.display.display(graphviz.Source(re.sub('Tree {', f'Tree {{ size={size}; ratio={ratio}', s))) | 28.33871 | 137 | 0.578638 | 666 | 5,271 | 4.487988 | 0.282282 | 0.048177 | 0.048177 | 0.0455 | 0.320509 | 0.307795 | 0.245233 | 0.196387 | 0.196387 | 0.165607 | 0 | 0.047594 | 0.274521 | 5,271 | 186 | 138 | 28.33871 | 0.734048 | 0.10738 | 0 | 0.246154 | 0 | 0 | 0.130929 | 0.036721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046154 | false | 0 | 0.084615 | 0 | 0.130769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ac1fbfb6a84e1d8bb4cd73f7561fd81ee8d053 | 4,299 | py | Python | rx/operators/observable/delay.py | yutiansut/RxPY | c3bbba77f9ebd7706c949141725e220096deabd4 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | rx/operators/observable/delay.py | yutiansut/RxPY | c3bbba77f9ebd7706c949141725e220096deabd4 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | rx/operators/observable/delay.py | yutiansut/RxPY | c3bbba77f9ebd7706c949141725e220096deabd4 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | from typing import Union, Callable
from datetime import datetime, timedelta
from rx.core import ObservableBase, AnonymousObservable
from rx.disposables import CompositeDisposable, SerialDisposable, MultipleAssignmentDisposable
from rx.concurrency import timeout_scheduler
class Timestamp(object):
def __init__(self, value, timestamp):
self.value = value
self.timestamp = timestamp
def observable_delay_timespan(source: ObservableBase, duetime: Union[timedelta, int]) -> ObservableBase:
def subscribe(observer, scheduler=None):
nonlocal duetime
scheduler = scheduler or timeout_scheduler
if isinstance(duetime, datetime):
duetime = scheduler.to_datetime(duetime) - scheduler.now
else:
duetime = scheduler.to_timedelta(duetime)
cancelable = SerialDisposable()
exception = [None]
active = [False]
running = [False]
queue = []
def on_next(notification):
should_run = False
with source.lock:
if notification.value.kind == 'E':
del queue[:]
queue.append(notification)
exception[0] = notification.value.exception
should_run = not running[0]
else:
queue.append(Timestamp(value=notification.value,
timestamp=notification.timestamp + duetime))
should_run = not active[0]
active[0] = True
if should_run:
if exception[0]:
observer.on_error(exception[0])
else:
mad = MultipleAssignmentDisposable()
cancelable.disposable = mad
def action(scheduler, state):
if exception[0]:
return
with source.lock:
running[0] = True
while True:
result = None
if queue and queue[0].timestamp <= scheduler.now:
result = queue.pop(0).value
if result:
result.accept(observer)
if not result:
break
should_continue = False
recurse_duetime = 0
if queue:
should_continue = True
diff = queue[0].timestamp - scheduler.now
zero = timedelta(0) if isinstance(diff, timedelta) else 0
recurse_duetime = max(zero, diff)
else:
active[0] = False
ex = exception[0]
running[0] = False
if ex:
observer.on_error(ex)
elif should_continue:
mad.disposable = scheduler.schedule_relative(
recurse_duetime, action)
mad.disposable = scheduler.schedule_relative(
duetime, action)
subscription = source.materialize().timestamp(
).subscribe_(on_next, scheduler=scheduler)
return CompositeDisposable(subscription, cancelable)
return AnonymousObservable(subscribe)
def delay(duetime: Union[datetime, int]) -> Callable[[ObservableBase], ObservableBase]:
"""Time shifts the observable sequence by duetime. The relative time
intervals between the values are preserved.
1 - res = rx.Observable.delay(datetime())
2 - res = rx.Observable.delay(5000)
Keyword arguments:
duetime -- Absolute (specified as a datetime object) or relative
time (specified as an integer denoting milliseconds) by which
to shift the observable sequence.
Returns time-shifted sequence.
"""
def partial(source: ObservableBase) -> ObservableBase:
return observable_delay_timespan(source, duetime)
return partial
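# Hedged usage sketch (illustrative; how the source observable is built
# depends on this RxPY revision's public API):
#
#   shifted = delay(5000)(source)   # time-shift `source` values by 5000 ms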
| 37.060345 | 104 | 0.521749 | 358 | 4,299 | 6.184358 | 0.315642 | 0.022584 | 0.020777 | 0.026197 | 0.058717 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009127 | 0.413817 | 4,299 | 115 | 105 | 37.382609 | 0.869444 | 0.095138 | 0 | 0.125 | 0 | 0 | 0.00026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0875 | false | 0 | 0.0625 | 0.0125 | 0.225 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ad3b9fabe04fbf150d454e7b8668326c62311a | 4,804 | py | Python | state_neighbors/state_neighbors.py | HandyCodeJob/state-neighbors | d4cd1274c13a1a93e08dd8fdbe04774336866fa9 | [
"0BSD"
] | null | null | null | state_neighbors/state_neighbors.py | HandyCodeJob/state-neighbors | d4cd1274c13a1a93e08dd8fdbe04774336866fa9 | [
"0BSD"
] | null | null | null | state_neighbors/state_neighbors.py | HandyCodeJob/state-neighbors | d4cd1274c13a1a93e08dd8fdbe04774336866fa9 | [
"0BSD"
] | null | null | null | # -*- coding: utf-8 -*-
from collections import OrderedDict
from json import dump, dumps
import csv
class StateNeighbors(object):
"""
    Object that handles the finding of neighbors
    :param file_name: CSV that contains pairings of states
    :param ordered: Whether or not to use an `OrderedDict` so that the states
        are all in order. If set to `False`, then a normal `dict` will be used.
    :param contiguous: How to handle states that have no bordering states (i.e.
        Alaska and Hawaii). If set to `True` then states without neighbors will
        not have any neighbors. If `False` then the nearest state to a non-
        contiguous state will be considered its neighbor.
    :param allow_blank: Only used if `contiguous==True`. If this is set to
        `True`, then states that have no neighbors will return an empty list.
        If it is `False`, then states with no neighbors will be removed from
        the dictionary. This will raise a `ValueError` if a non-contiguous
        state is supplied as an argument.
"""
def __init__(self, file_name='state_neighbors/state_neighbors.csv', ordered=True,
contiguous=False, allow_blank=False):
self.file_name = file_name
self.ordered = ordered
self.contiguous = contiguous
self.allow_blank = allow_blank
self.states = self.get_state_neighbors_dict()
def get_state_neighbors_dict(self):
"""
        Loads the csv and parses the file to create an `OrderedDict` if
        `ordered` is set to `True`, else it will return a `dict`. This also
        handles states that are not contiguous: it either removes them
        (`contiguous=True`, `allow_blank=False`) or keeps them with empty
        neighbor lists (`allow_blank=True`).
"""
with open(self.file_name) as csvfile:
states = OrderedDict() if self.ordered else dict()
reader = csv.DictReader(csvfile)
for row in reader:
states.setdefault(row['state'], []).append(row['neighbor'])
if self.contiguous:
                # dict.popitem() takes no key argument; pop() removes by key
                states.pop('AK', None)
                states.pop('HI', None)
if self.allow_blank:
states.update({'AK': []})
states.update({'HI': []})
return states
def state_neighbors(self, state):
"""
        Creates the list of neighbors from `self.states` for the given `state`.
        This is the function that will most likely be used. As such the global
        `neighboring` function is set to this function from an initialized obj.
        :param state: A state that is in `file_name` and thus in `states`
        :returns: A list of states that neighbor `state`.
        :rtype: list
"""
neighbors = []
try:
for neighbor in self.states[state]:
neighbors.append(neighbor)
except KeyError:
raise ValueError("Could not find the state \"%s\"" % state)
return neighbors
def neighbors(self, depth=1):
"""
        Creates the nested dictionary that holds the states and their neighbors
        in a list.
        :param depth: How far to follow neighbors, as in a `depth=2` would
            produce a list of dictionaries with keys being the state and then
            the neighbors of that state as a list. Used to get states that are
            not neighbors of a state, but that state's neighbors as well. Not
            implemented currently.
:returns: An `OrderedDict` or `dict` of states as keys, with the value
as the list of states that are neighbors to it.
:rtype: OrderedDict, dict
.. todo: Allow for a depth that is greater than one.
"""
if depth != 1:
raise NotImplementedError
neighbors = OrderedDict() if self.ordered else dict()
states = self.states.keys()
for _ in range(depth):
for state in states:
state_neighbor = self.state_neighbors(state)
neighbors.setdefault(state, []).append(state_neighbor)
return neighbors
def json(self, depth=1, file_name=None):
"""
Produces the dictionary of states as a json string, or outputs the json
to file.
        :param depth: Passed directly to `neighbors()`
        :param file_name: If `file_name` is provided, the json will output to
that file and return `True`.
:rtype: bool, str
.. seealso: neighbors()
"""
if file_name:
with open(file_name, 'w') as output:
dump(self.neighbors(depth), output)
return True
else:
return dumps(self.neighbors(depth))
_state_neighbors = StateNeighbors()
neighboring = _state_neighbors.state_neighbors
STATES = _state_neighbors.states
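# Hedged usage sketch (illustrative; the exact neighbor list depends on the
# pairings in the bundled CSV):
#
#   neighboring('NY')   # -> list of states paired with 'NY' in the CSV
#   'CA' in STATES      # -> True if 'CA' appears in the CSV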
| 39.056911 | 85 | 0.620316 | 622 | 4,804 | 4.726688 | 0.276527 | 0.057143 | 0.009184 | 0.028571 | 0.034694 | 0.021769 | 0 | 0 | 0 | 0 | 0 | 0.001495 | 0.303705 | 4,804 | 122 | 86 | 39.377049 | 0.877429 | 0.485637 | 0 | 0.038462 | 0 | 0 | 0.039114 | 0.016494 | 0 | 0 | 0 | 0.008197 | 0 | 1 | 0.096154 | false | 0 | 0.057692 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52af147db8a5109a892c61bb35ace8dd82981174 | 397 | py | Python | getbing.py | Cursor-S/GetBing | 56bd4512446842d7b849a2cf859cd0f4714b4318 | [
"MIT"
] | 3 | 2020-09-18T11:58:50.000Z | 2021-01-18T13:48:09.000Z | getbing.py | Cursor-S/GetBing | 56bd4512446842d7b849a2cf859cd0f4714b4318 | [
"MIT"
] | 2 | 2020-09-19T10:40:25.000Z | 2021-01-18T13:48:05.000Z | getbing.py | Cursor-S/GetBing | 56bd4512446842d7b849a2cf859cd0f4714b4318 | [
"MIT"
] | null | null | null | import requests
print("Collecting JSON...")
url_text = requests.get("https://bing.com/HPImageArchive.aspx?format=js&idx=0&n=1").json()
# the API returns a site-relative image path, so it is prefixed with the host below
response = url_text["images"][0]["url"]
img_url = f"https://bing.com{response}"
print(f"Collecting Image from {img_url}")
img = requests.get(img_url)
print("Writing...")
with open("./bing.jpg", "wb") as i:
i.write(img.content)
print("Done!")
| 24.8125 | 91 | 0.664987 | 61 | 397 | 4.245902 | 0.57377 | 0.069498 | 0.092664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008596 | 0.120907 | 397 | 15 | 92 | 26.466667 | 0.733524 | 0 | 0 | 0 | 0 | 0.090909 | 0.437173 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52b06c9570bf49554c1be63557fa135cb472397c | 5,466 | py | Python | tuneme/tests/test_admin.py | praekelt/molo-unpfa-yep | 449dada5a6101fe02bfa033c73fcb1663b88dcf0 | [
"BSD-2-Clause"
] | null | null | null | tuneme/tests/test_admin.py | praekelt/molo-unpfa-yep | 449dada5a6101fe02bfa033c73fcb1663b88dcf0 | [
"BSD-2-Clause"
] | 438 | 2015-07-28T09:53:45.000Z | 2018-11-01T08:19:35.000Z | tuneme/tests/test_admin.py | praekeltfoundation/molo-tuneme | 449dada5a6101fe02bfa033c73fcb1663b88dcf0 | [
"BSD-2-Clause"
] | 2 | 2015-11-12T15:39:23.000Z | 2015-11-20T12:56:59.000Z | from django.contrib.auth import get_user_model
from django.test import TestCase
from django.test.client import Client
from molo.core.models import SiteLanguageRelation, Main, Languages, ArticlePage
from molo.core.tests.base import MoloTestCaseMixin
from molo.surveys.models import (
MoloSurveyPage,
MoloSurveyFormField,
SurveysIndexPage
)
User = get_user_model()
class AdminTestCase(TestCase, MoloTestCaseMixin):
def setUp(self):
self.client = Client()
self.mk_main()
self.main = Main.objects.all().first()
self.language_setting = Languages.objects.create(
site_id=self.main.get_site().pk)
self.english = SiteLanguageRelation.objects.create(
language_setting=self.language_setting,
locale='en',
is_active=True)
self.french = SiteLanguageRelation.objects.create(
language_setting=self.language_setting,
locale='fr',
is_active=True)
self.section = self.mk_section(self.section_index, title='section')
self.article = self.mk_article(self.section, title='article')
# Create surveys index pages
self.surveys_index = SurveysIndexPage.objects.child_of(
self.main).first()
self.user = User.objects.create_user(
username='tester',
email='tester@example.com',
password='tester')
self.super_user = User.objects.create_superuser(
username='testuser', password='password', email='test@email.com')
def create_molo_survey_page(self, parent, **kwargs):
molo_survey_page = MoloSurveyPage(
title='Test Survey', slug='test-survey',
introduction='Introduction to Test Survey ...',
thank_you_text='Thank you for taking the Test Survey',
**kwargs
)
parent.add_child(instance=molo_survey_page)
molo_survey_page.save_revision().publish()
molo_survey_form_field = MoloSurveyFormField.objects.create(
page=molo_survey_page,
sort_order=1,
label='Your favourite animal',
admin_label='fav_animal',
field_type='singleline',
required=True
)
return molo_survey_page, molo_survey_form_field
def test_convert_to_article(self):
        username = 'tester'  # the regular (non-super) user created in setUp
        password = 'tester'
molo_survey_page, molo_survey_form_field = \
self.create_molo_survey_page(parent=self.section_index)
self.client.login(username=username, password=password)
response = self.client.get(molo_survey_page.url)
self.assertContains(response, molo_survey_page.title)
key = '{}'.format(
molo_survey_form_field.label.lower().replace(' ', '-')
)
response = self.client.post(
molo_survey_page.url, {key: 'python'}, follow=True)
self.assertEquals(response.status_code, 200)
self.client.logout()
self.client.login(
username=self.super_user.username, password='password')
# test shows convert to article button when no article created yet
response = self.client.get(
'/admin/surveys/submissions/%s/' % molo_survey_page.id)
self.assertContains(response, 'Convert to Article')
# convert submission to article
SubmissionClass = molo_survey_page.get_submission_class()
submission = SubmissionClass.objects.filter(
page=molo_survey_page).first()
response = self.client.get(
'/surveys/submissions/%s/article/%s/' % (
molo_survey_page.id, submission.pk))
self.assertEquals(response.status_code, 302)
article = ArticlePage.objects.last()
submission = SubmissionClass.objects.filter(
page=molo_survey_page).first()
self.assertEquals(article.title, article.slug)
self.assertEquals(submission.article_page, article)
article_stream_data = article.body.stream_data
self.assertEqual(
sorted([
body_elem['type'] for body_elem in article_stream_data]),
[u'paragraph'],
)
self.assertEqual(
sorted([
body_elem['value'] for body_elem in article_stream_data]),
['python'],
)
        # explicit assertion that username and created_at are not imported
self.assertNotIn(
username,
[body_elem['value'] for body_elem in article_stream_data])
self.assertNotIn(
str(submission.created_at),
[body_elem['value'] for body_elem in article_stream_data])
# first time it goes to the move page
self.assertEquals(
response['Location'],
'/admin/pages/%d/move/' % article.id)
# second time it should redirect to the edit page
response = self.client.get(
'/surveys/submissions/%s/article/%s/' % (
molo_survey_page.id, submission.pk))
self.assertEquals(response.status_code, 302)
self.assertEquals(
response['Location'],
'/admin/pages/%d/edit/' % article.id)
response = self.client.get(
'/admin/surveys/submissions/%s/' % molo_survey_page.id)
# it should not show convert to article as there is already article
self.assertNotContains(response, 'Convert to Article')
| 36.44 | 79 | 0.634102 | 600 | 5,466 | 5.59 | 0.27 | 0.065593 | 0.075134 | 0.031306 | 0.308587 | 0.276386 | 0.276386 | 0.222123 | 0.222123 | 0.142218 | 0 | 0.002494 | 0.266557 | 5,466 | 149 | 80 | 36.684564 | 0.834123 | 0.062203 | 0 | 0.275862 | 0 | 0 | 0.097304 | 0.033607 | 0 | 0 | 0 | 0 | 0.12069 | 1 | 0.025862 | false | 0.043103 | 0.051724 | 0 | 0.094828 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52b21ccf832db156b4660a15ac34cd4eb1b662ce | 2,078 | py | Python | djangocms_page_tags/models.py | Bernardvdv/djangocms-page-tags | 64553f07613a2531da7cc31192a46e05c22a046c | [
"BSD-3-Clause"
] | 7 | 2015-08-25T14:44:07.000Z | 2018-01-31T18:04:18.000Z | djangocms_page_tags/models.py | Bernardvdv/djangocms-page-tags | 64553f07613a2531da7cc31192a46e05c22a046c | [
"BSD-3-Clause"
] | 20 | 2015-08-15T07:35:08.000Z | 2021-06-25T15:24:06.000Z | djangocms_page_tags/models.py | Bernardvdv/djangocms-page-tags | 64553f07613a2531da7cc31192a46e05c22a046c | [
"BSD-3-Clause"
] | 6 | 2015-08-11T16:07:49.000Z | 2021-06-24T14:12:41.000Z | from cms.extensions import PageExtension, TitleExtension
from cms.extensions.extension_pool import extension_pool
from cms.models import Page, Title
from django.core.cache import cache
from django.db.models.signals import post_save, pre_delete
from django.dispatch import receiver
from django.utils.translation import ugettext_lazy as _
from taggit_autosuggest.managers import TaggableManager
from .utils import get_cache_key
@extension_pool.register
class PageTags(PageExtension):
tags = TaggableManager()
def copy_relations(self, oldinstance, language):
""" Needed to copy tags when publishing page """
self.tags.set(*oldinstance.tags.all())
class Meta:
verbose_name = _("Page tags (all languages)")
@extension_pool.register
class TitleTags(TitleExtension):
tags = TaggableManager()
def copy_relations(self, oldinstance, language):
""" Needed to copy tags when publishing page """
self.tags.set(*oldinstance.tags.all())
class Meta:
verbose_name = _("Page tags (language-dependent)")
# Cache cleanup when deleting pages / editing page extensions
@receiver(pre_delete, sender=Page)
def cleanup_page(sender, instance, **kwargs):
site_id = instance.node.site_id
key = get_cache_key(None, instance, "", site_id, False)
cache.delete(key)
@receiver(pre_delete, sender=Title)
def cleanup_title(sender, instance, **kwargs):
site_id = instance.page.node.site_id
key = get_cache_key(None, instance.page, instance.language, site_id, True)
cache.delete(key)
@receiver(post_save, sender=PageTags)
def cleanup_pagetags(sender, instance, **kwargs):
site_id = instance.extended_object.node.site_id
key = get_cache_key(None, instance.extended_object, "", site_id, False)
cache.delete(key)
@receiver(post_save, sender=TitleTags)
def cleanup_titletags(sender, instance, **kwargs):
site_id = instance.extended_object.page.node.site_id
key = get_cache_key(None, instance.extended_object.page, instance.extended_object.language, site_id, True)
cache.delete(key)
| 32.46875 | 110 | 0.750722 | 273 | 2,078 | 5.531136 | 0.25641 | 0.047682 | 0.036424 | 0.063576 | 0.535099 | 0.535099 | 0.490066 | 0.380132 | 0.316556 | 0.292715 | 0 | 0 | 0.149663 | 2,078 | 63 | 111 | 32.984127 | 0.854556 | 0.069297 | 0 | 0.325581 | 0 | 0 | 0.028631 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139535 | false | 0 | 0.209302 | 0 | 0.488372 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52b36db664954866432cb36328ff954839040769 | 5,248 | py | Python | src/main/resources/add_vars.py | ShreyaPrasad31/simulation_lung-cancer_patients- | fe1bb453be7fc31034cada7a2604c081d9637aa9 | [
"Apache-2.0"
] | null | null | null | src/main/resources/add_vars.py | ShreyaPrasad31/simulation_lung-cancer_patients- | fe1bb453be7fc31034cada7a2604c081d9637aa9 | [
"Apache-2.0"
] | null | null | null | src/main/resources/add_vars.py | ShreyaPrasad31/simulation_lung-cancer_patients- | fe1bb453be7fc31034cada7a2604c081d9637aa9 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import csv
import random  # stdlib random; importing numpy's "random" here would shadow it
#pseudo code
list_molecular_profile = ['BRAF','CTNNB1','HRAS','SUB_OPTIMAL_SAMPLE','PIK3CA','TP53','NRAS','EGFR','KRAS','STK11','RET','ERBB2']
ls_mut_PICK3CA = ['MUT_N345K']
ls_mut_CTNNB1 = ['MUT_S37C','MUT_D32G','MUT_D32V']
ls_mut_NRAS = ['MUT_Q61K']
ls_mut_STK11 = ['MUT_Q170']
ls_mut_RET = ['MUT_H784R']
ls_mut_ERBB2 = ['MUT_A775_G776INSAVMA','MUT_G776VC']
ls_mut_TP53 = ['MUT_R110H','MUT_R248L', 'MUT_V157F','MUT_V274A']
ls_mut_SUB_OPTIMAL_SAMPLE = ['Unknown']
ls_mut_HRAS = ['MUT_G13R']
ls_mut_BRAF = ['MUT_V600E','MUT_G469A']
ls_mut_EGFR = ['MUT_E709A','MUT_G719S','MUT_E746_A750del','MUT_E746_A750DELINSQP','MUT_L747_S752del','MUT_L858R']
ls_mut_KRAS = ['MUT_G12A','MUT_G12C','MUT_G12D','MUT_G12S','MUT_G12V','MUT_G13C','MUT_G13D','MUT_Q129E','MUT_Q61H','MUT_G12I']
ls_response_options = ['Yes', 'No']  # possible drug responses
ls_response_prob_EGFR = [0.5,0.5]
ls_response_prob_BRAF = [0.5,0.5]
ls_response_prob_KRAS = [0.3,0.7]
drug_EGFR = 'Afatinib'
drug_KRAS = 'Docetaxel'
drug_BRAF = 'Dabrafenib'
ls_response = []  # per-sample responses collected in the loop below
#probability_distribution = [0.8,0.1,0.1]
#mp = choice(list_molecular_profile, 1, probability_distribution)
#mp = random.choice(list_molecular_profile,9, replace = True )
#mp = random.sample(list_molecular_profile, 10)
#mp = random.choice(list_molecular_profile)
ls_mp = []
for i in range (0, 100):
mp = random.choice(list_molecular_profile)
ls_mp.append(mp)
print(mp)
ls_drug = []
ls_mutation = []
length = len(ls_mp)
for i in range (0, length):
if ls_mp[i] == 'PIK3CA':
mut = random.choice(ls_mut_PICK3CA)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'TP53':
mut = random.choice(ls_mut_TP53)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'NRAS':
mut = random.choice(ls_mut_NRAS)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'EGFR':
mut = random.choice(ls_mut_EGFR)
ls_mutation.append(mut)
ls_drug.append(drug_EGFR)
        # np.random.choice's third positional arg is `replace`, not the
        # probabilities; pass them by keyword and draw from the options list
        res = np.random.choice(ls_response_options, p=ls_response_prob_EGFR)
        ls_response.append(res)
elif ls_mp[i] == 'KRAS':
mut = random.choice(ls_mut_KRAS)
ls_mutation.append(mut)
ls_drug.append(drug_KRAS)
        res = np.random.choice(ls_response_options, p=ls_response_prob_KRAS)
        ls_response.append(res)
elif ls_mp[i] == 'CTNNB1':
mut = random.choice(ls_mut_CTNNB1)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'HRAS':
mut = random.choice(ls_mut_HRAS)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'SUB_OPTIMAL_SAMPLE':
mut = random.choice(ls_mut_SUB_OPTIMAL_SAMPLE)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'STK11':
mut = random.choice(ls_mut_STK11)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'RET':
mut = random.choice(ls_mut_RET)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'ERBB2':
mut = random.choice(ls_mut_ERBB2)
ls_mutation.append(mut)
ls_drug.append('Unknown')
ls_response.append('Unknown')
elif ls_mp[i] == 'BRAF':
mut = random.choice(ls_mut_BRAF)
ls_mutation.append(mut)
ls_drug.append(drug_BRAF)
        res = np.random.choice(ls_response_options, p=ls_response_prob_BRAF)
        ls_response.append(res)
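# Hedged refactor sketch (illustrative, not executed): the branch ladder above
# could be driven by lookup tables instead, e.g.
#
#   MUTATIONS = {'PIK3CA': ls_mut_PICK3CA, 'TP53': ls_mut_TP53, ...}
#   DRUGS = {'EGFR': drug_EGFR, 'KRAS': drug_KRAS, 'BRAF': drug_BRAF}
#   mut = random.choice(MUTATIONS[ls_mp[i]])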
fields = ['Molecular_Profile', 'Mutation', 'Drugs','Response']
print(len(ls_mutation))
print(len(ls_drug))
csv_file = "C:/Users/Shreya/Desktop/123.csv"
rows = zip(ls_mp, ls_mutation, ls_drug,ls_response)
with open(csv_file, 'w') as f:
writer = csv.writer(f, lineterminator = '\n')
writer.writerow(fields)
writer.writerows(rows)
#writer.writerows(zip(ls_mp, ls_mutation))
"""with open(csv_file , "w") as output:
writer = csv.writer(output, lineterminator = '\n')
for val in ls_mp:
writer.writerow([val])
for val in ls_mutation:
writer.writerow([val])
"""
#mp = choice(list_molecular_profile,1,probability_distribution)_
#_mp.append(mp)
"""print(mp)
mutation = 'no_mutation'
if mp =='EFGR':
mutation = 'G919S'
drug = 'ftainib'
elif mp =='TP53':
mutation = 'fgg'
drug = 'fgr'
elif mp =='KRAS':
mutation =='dfgg'
drug = 'frgerg'
#rint('The molecular profile is {} the muation is {} and the drug is {}'.format(mp,mutation,drug))""" | 30.870588 | 129 | 0.637576 | 739 | 5,248 | 4.246279 | 0.189445 | 0.038241 | 0.01912 | 0.06501 | 0.531549 | 0.422881 | 0.401211 | 0.370937 | 0.313257 | 0.27884 | 0 | 0.039259 | 0.218559 | 5,248 | 170 | 130 | 30.870588 | 0.725921 | 0.078506 | 0 | 0.342105 | 0 | 0 | 0.15831 | 0.012071 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.052632 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52b470e8f12c583f668a46f8cdb6365e9edc6202 | 2,048 | py | Python | python/src/analib/api_inbound.py | IBM/transparent-supply-toolbox | cfdbf1eb4180f5399265d1d75c1aaec56819260a | [
"Apache-2.0"
] | null | null | null | python/src/analib/api_inbound.py | IBM/transparent-supply-toolbox | cfdbf1eb4180f5399265d1d75c1aaec56819260a | [
"Apache-2.0"
] | null | null | null | python/src/analib/api_inbound.py | IBM/transparent-supply-toolbox | cfdbf1eb4180f5399265d1d75c1aaec56819260a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
from analib import api_common as api_common
import utils
class APIInbound:
debug_level = 3
env = "staging"
header_entitled_org = None
def __init__(self, debug_level, org_id, env, header_entitled_org):
self.env = env
self.org_id = org_id
self.debug_level = debug_level
self.header_entitled_org = header_entitled_org
self.a = api_common.Authenticate(self.debug_level, self.org_id, self.env)
self.token = self.a.get_token()
return
    # Quit the program on prod; return False when pushing is not allowed.
def validate_inbound_data_rules(self):
if self.env == "prod":
# you can NEVER push an XML to prod.
utils.d_print(1, "env can never be prod")
utils.quit("Tried to push to PROD !")
return False
if self.env == "sandbox":
return False
return True
# push a given XML into IFT
# you can NEVER push to prod
def push_xml(self, xml_data):
utils.d_print(self.debug_level, "ENTER", "len(xml_data)", len(xml_data))
        if not self.validate_inbound_data_rules():
            return  # pushing is blocked for this environment (e.g. sandbox)
utils.d_print(self.debug_level+1, "XMLData:", xml_data)
auth_token = "Bearer " + self.token['onboarding_token']
post_header = {
"Authorization": auth_token,
"Content-Type": "application/xml"
}
if self.header_entitled_org:
post_header["IFT-Entitled-Orgs"] = self.header_entitled_org
post_data = xml_data.encode('utf-8')
post_url = utils.global_args.config['urls']['hosts'][self.env] + utils.global_args.config['urls']['inbound'][self.env]
p = api_common.PostRequest(post_url, post_data, post_header, 4)
r = p.execute()
utils.d_print(self.debug_level, "EXIT", "Response Text:", utils.color.make_color('RED', r.text), "Response Headers:", r.headers)
utils.d_print(0, "EXIT", "Response Text:", utils.color.make_color('RED', r.text))
return
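# Hedged usage sketch (illustrative; argument values are assumptions):
#
#   api = APIInbound(debug_level=3, org_id='my-org', env='staging',
#                    header_entitled_org=None)
#   api.push_xml('<epcis:EPCISDocument>...</epcis:EPCISDocument>')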
| 33.57377 | 136 | 0.618164 | 277 | 2,048 | 4.33935 | 0.33213 | 0.066556 | 0.084859 | 0.052413 | 0.217138 | 0.133943 | 0.071547 | 0.071547 | 0.071547 | 0.071547 | 0 | 0.004008 | 0.269043 | 2,048 | 60 | 137 | 34.133333 | 0.798931 | 0.072754 | 0 | 0.1 | 0 | 0 | 0.133192 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0 | 0.05 | 0 | 0.35 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52b594299494c2fb7d273768b133918a66a7f90b | 10,902 | py | Python | Scripts/development/lib/legacy_deps.py | mmaction/core | 9a5b28ecae445191fb7b125662d787f1602d6644 | [
"BSD-2-Clause"
] | 2 | 2019-03-15T03:35:54.000Z | 2019-03-15T07:50:36.000Z | Scripts/development/lib/legacy_deps.py | ass-a2s/opnsense-core | a0634d180325f6afe3be7f514b4470e47ff5eb75 | [
"BSD-2-Clause"
] | null | null | null | Scripts/development/lib/legacy_deps.py | ass-a2s/opnsense-core | a0634d180325f6afe3be7f514b4470e47ff5eb75 | [
"BSD-2-Clause"
] | null | null | null | """
Copyright (c) 2015 Ad Schellevis <ad@opnsense.org>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------------
Crawler class to find module (require/include) dependencies
"""
import os
import os.path
class DependancyCrawler(object):
""" Legacy dependency crawler and grapher
"""
def __init__(self, root):
""" init
:param root: start crawling at
:return:
"""
self._all_dependencies = {}
self._all_dependencies_src = {}
self._all_functions = {}
self._exclude_deps = ['/usr/local/opnsense/mvc/app/config/config.php']
self.root = root
def get_dependency_by_src(self, src_filename):
""" dependencies are stored by a single name, this method maps a filename back to its name
usually the basename of the file.
:param src_filename:
:return:
"""
if src_filename in self._all_dependencies_src:
return self._all_dependencies_src[src_filename]
else:
return None
    def fetch_php_modules(self, src_filename):
        """ collect include/require dependencies of a php file
        :param src_filename:
        :return:
        """
        # create a new list for this base filename
base_filename = os.path.basename(src_filename)
if base_filename in self._all_dependencies:
base_filename = '%s__%s' % (src_filename.split('/')[-2], base_filename)
self._all_dependencies[base_filename] = []
self._all_dependencies_src[src_filename] = base_filename
source_data = open(src_filename).read()
# fetch all include, include_once, require, require_once statements and
# add dependencies to object dependency list.
for tag in ('include', 'require'):
data = source_data
while True:
startpos = data.find(tag)
if startpos == -1:
break
else:
strlen = data[startpos:].find(';')
if strlen > -1:
# parse (single) statement, check if this could be an include type command
dep_stmt = data[startpos-1:strlen+startpos]
if dep_stmt[0] in (' ', '\n'):
dep_stmt = dep_stmt[1:].replace("'", '"')
if dep_stmt.find('\n') == -1 and dep_stmt.count('"') == 2:
dep_filename = dep_stmt.split('"')[1]
if dep_filename not in self._all_dependencies[base_filename]:
if dep_filename not in self._exclude_deps:
self._all_dependencies[base_filename].append(dep_filename)
data = data[strlen+startpos:]
def fetch_php_functions(self, src_filename):
""" find php functions
:param src_filename:
:return:
"""
base_filename = os.path.basename(src_filename)
if base_filename in self._all_functions:
base_filename = '%s__%s' % (src_filename.split('/')[-2], base_filename)
function_list = []
for line in open(src_filename,'r').read().split('\n'):
if line.find('function ') > -1 and line.find('(') > -1:
if line.find('*') > -1 and line.find('function') > line.find('*'):
continue
function_nm = line.split('(')[0].strip().split(' ')[-1].strip()
function_list.append(function_nm)
self._all_functions[base_filename] = function_list
def find_files(self, analyse_dirs=('etc','www', 'captiveportal', 'sbin')):
"""
:param analyse_dirs: directories to analyse
:return:
"""
for analyse_dir in analyse_dirs:
analyse_dir = ('%s/%s' % (self.root, analyse_dir)).replace('//', '/')
for wroot, wdirs, wfiles in os.walk(analyse_dir):
for src_filename in wfiles:
src_filename = '%s/%s' % (wroot, src_filename)
if src_filename.split('.')[-1] in ('php', 'inc','class') \
or open(src_filename).read(1024).find('/bin/php') > -1:
yield src_filename
    def crawl(self):
        """ Crawl through legacy code (directories come from find_files())
        :return: None
        """
for src_filename in self.find_files():
self.fetch_php_modules(src_filename)
self.fetch_php_functions(src_filename)
def where_used(self, src):
"""
:param src: source object name (base name)
:return: dictionary containing files and functions
"""
        where_used_lst = {}
for src_filename in self.find_files():
data = open(src_filename,'r').read().replace('\n',' ').replace('\t',' ').replace('@',' ')
use_list = []
for function in self._all_functions[src]:
if data.find(' %s(' % (function)) > -1 or \
data.find('!%s ' % (function)) > -1 or \
data.find('!%s(' % (function)) > -1 or \
data.find('(%s(' % (function)) > -1 or \
data.find('(%s ' % (function)) > -1 or \
data.find(' %s ' % (function)) > -1:
use_list.append(function)
if len(use_list) > 0:
where_used_lst[src_filename] = sorted(use_list)
return where_used_lst
def get_total_files(self):
""" get total number of analysed files
:return: int
"""
return len(self._all_dependencies)
def get_total_dependencies(self):
""" get total number of dependencies
:return: int
"""
count = 0
for src_filename in self._all_dependencies:
count += len(self._all_dependencies[src_filename])
return count
def get_files(self):
""" retrieve all analysed files as iterator (ordered by name)
:return: iterator
"""
for src_filename in sorted(self._all_dependencies):
yield src_filename
def trace(self, src_filename, parent_filename=None, result=None, level=0):
""" trace dependencies (recursive)
:param src_filename:
:param parent_filename:
:param result:
:param level:
:return:
"""
if result is None:
result = {}
if src_filename not in result:
result[src_filename] = {'level': level, 'dup': list(), 'parent': parent_filename}
else:
result[src_filename]['dup'].append(parent_filename)
return
if src_filename in self._all_dependencies:
for dependency in self._all_dependencies[src_filename]:
self.trace(dependency, src_filename, result, level=level+1)
return result
def file_info(self, src_filename):
""" retrieve file info, like maximum recursive depth and number of duplicate dependencies
:param src_filename:
:return:
"""
result = {'levels': 0,'dup_count':0}
if src_filename in self._all_dependencies:
data = self.trace(src_filename)
for dep_filename in data:
if data[dep_filename]['level'] > result['levels']:
result['levels'] = data[dep_filename]['level']
result['dup_count'] += len(data[dep_filename]['dup'])
return result
def generate_dot(self, filename_to_inspect):
""" convert trace data to do graph
:param filename_to_inspect: source filename to generate graph for
:return: string (dot) data
"""
trace_data = self.trace(filename_to_inspect)
result = list()
result.append('digraph dependencies {')
result.append('\toverlap=scale;')
nodes = {}
for level in range(100):
for src_filename in trace_data:
if trace_data[src_filename]['level'] == level:
if trace_data[src_filename]['parent'] is not None:
result.append('\tedge [color=black style=filled];')
result.append('\t"%s" -> "%s" [weight=%d];' % (trace_data[src_filename]['parent'],
src_filename, trace_data[src_filename]['level']))
if len(trace_data[src_filename]['dup']) > 0:
for target in trace_data[src_filename]['dup']:
result.append('\tedge [color=red style=dotted];')
result.append('\t"%s" -> "%s";' % (target, src_filename))
if trace_data[src_filename]['parent'] is None:
nodes[src_filename] = '[shape=Mdiamond]'
elif len(trace_data[src_filename]['dup']) > 0:
nodes[src_filename] = '[shape=box,style=filled,color=".7 .3 1.0"]'
else:
nodes[src_filename] = '[shape=box]'
for node in nodes:
result.append('\t"%s" %s;' % (node, nodes[node]))
result.append('}')
return '\n'.join(result)
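    # Hedged example: the returned text can be rendered with Graphviz (assumes the
    # dot binary is installed; the filenames are placeholders):
    #   dot_src = crawler.generate_dot('system.inc')
    #   open('/tmp/system.dot', 'w').write(dot_src)
    #   # then: dot -Tsvg /tmp/system.dot -o /tmp/system.svg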
@staticmethod
def generate_index_html(filelist):
html_body = "<html><head><title></title></head><body><table><tr><th>Name</th></tr>\n%s</body>"
html_row = '<tr><td><a href="%s">%s</a></td></tr>\n'
html = html_body % ('\n'.join(map(lambda x: html_row % (x, x), sorted(filelist))))
return html
| 42.92126 | 120 | 0.560264 | 1,241 | 10,902 | 4.746978 | 0.233683 | 0.104566 | 0.051604 | 0.02716 | 0.25191 | 0.160075 | 0.12935 | 0.094381 | 0.094381 | 0.078085 | 0 | 0.00648 | 0.320583 | 10,902 | 253 | 121 | 43.090909 | 0.788848 | 0.245918 | 0 | 0.111111 | 0 | 0.006944 | 0.08385 | 0.024031 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097222 | false | 0 | 0.013889 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ba9dcbcb374df87f5583cc7ca53a5a8168a093 | 4,057 | py | Python | deps/node-10.15.3/deps/v8/test/mozilla/testcfg.py | gatarelib/LiquidCore | 9405979363f2353ac9a71ad8ab59685dd7f919c9 | [
"MIT"
] | 2,151 | 2020-04-18T07:31:17.000Z | 2022-03-31T08:39:18.000Z | test/mozilla/testcfg.py | nacersalaheddine/v8 | cffe6247ad06d90d2fb69e3710358983eaed39d5 | [
"BSD-3-Clause"
] | 395 | 2020-04-18T08:22:18.000Z | 2021-12-08T13:04:49.000Z | test/mozilla/testcfg.py | nacersalaheddine/v8 | cffe6247ad06d90d2fb69e3710358983eaed39d5 | [
"BSD-3-Clause"
] | 338 | 2020-04-18T08:03:10.000Z | 2022-03-29T12:33:22.000Z | # Copyright 2008 the V8 project authors. All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
from functools import reduce  # reduce is a builtin on Python 2; this import keeps the file working on Python 3
from testrunner.local import testsuite
from testrunner.objects import testcase
from testrunner.outproc import mozilla
EXCLUDED = ["CVS", ".svn"]
FRAMEWORK = """
browser.js
shell.js
jsref.js
template.js
""".split()
TEST_DIRS = """
ecma
ecma_2
ecma_3
js1_1
js1_2
js1_3
js1_4
js1_5
""".split()
class TestSuite(testsuite.TestSuite):
def __init__(self, *args, **kwargs):
super(TestSuite, self).__init__(*args, **kwargs)
self.testroot = os.path.join(self.root, "data")
def ListTests(self):
tests = []
for testdir in TEST_DIRS:
current_root = os.path.join(self.testroot, testdir)
for dirname, dirs, files in os.walk(current_root):
for dotted in [x for x in dirs if x.startswith(".")]:
dirs.remove(dotted)
for excluded in EXCLUDED:
if excluded in dirs:
dirs.remove(excluded)
dirs.sort()
files.sort()
for filename in files:
if filename.endswith(".js") and not filename in FRAMEWORK:
fullpath = os.path.join(dirname, filename)
relpath = fullpath[len(self.testroot) + 1 : -3]
testname = relpath.replace(os.path.sep, "/")
case = self._create_test(testname)
tests.append(case)
return tests
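  # For illustration: a file data/ecma/Array/15.4.1.js (relative to self.testroot)
  # yields the test name "ecma/Array/15.4.1" in the loop above.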
def _test_class(self):
return TestCase
class TestCase(testcase.TestCase):
def _get_files_params(self):
files = [os.path.join(self.suite.root, "mozilla-shell-emulation.js")]
testfilename = self.path + ".js"
testfilepath = testfilename.split("/")
    for i in range(len(testfilepath)):
script = os.path.join(self.suite.testroot,
reduce(os.path.join, testfilepath[:i], ""),
"shell.js")
if os.path.exists(script):
files.append(script)
files.append(os.path.join(self.suite.testroot, testfilename))
return files
def _get_suite_flags(self):
return ['--expose-gc']
def _get_source_path(self):
return os.path.join(self.suite.testroot, self.path + self._get_suffix())
@property
def output_proc(self):
if not self.expected_outcomes:
if self.path.endswith('-n'):
return mozilla.MOZILLA_PASS_NEGATIVE
return mozilla.MOZILLA_PASS_DEFAULT
if self.path.endswith('-n'):
return mozilla.NegOutProc(self.expected_outcomes)
return mozilla.OutProc(self.expected_outcomes)
def GetSuite(*args, **kwargs):
return TestSuite(*args, **kwargs)
| 32.98374 | 76 | 0.690905 | 538 | 4,057 | 5.13197 | 0.39777 | 0.021731 | 0.028975 | 0.030424 | 0.126041 | 0.101775 | 0.072438 | 0.049258 | 0.049258 | 0.049258 | 0 | 0.005984 | 0.217402 | 4,057 | 122 | 77 | 33.254098 | 0.863622 | 0.374168 | 0 | 0.053333 | 0 | 0 | 0.073647 | 0.01035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106667 | false | 0.026667 | 0.053333 | 0.053333 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52bb072378654c39b1b223dbad24a330fab42f8f | 2,533 | py | Python | old/userinfo.py | lifeofchrome/TwincastBot | 250cd59724ddfcb6a7a347c39b00a46c1ef4f6bf | [
"MIT"
] | 2 | 2017-09-02T18:17:47.000Z | 2019-02-21T05:43:17.000Z | old/userinfo.py | lifeofchrome/TwincastBot | 250cd59724ddfcb6a7a347c39b00a46c1ef4f6bf | [
"MIT"
] | 1 | 2017-10-07T18:03:23.000Z | 2017-10-07T18:04:20.000Z | old/userinfo.py | lifeofchrome/TwincastBot | 250cd59724ddfcb6a7a347c39b00a46c1ef4f6bf | [
"MIT"
] | 2 | 2017-10-07T17:53:10.000Z | 2017-10-14T17:00:29.000Z | from discord.ext import commands
from discord import Embed, Member
from discord.ext.commands import BucketType
import rethinkdb as r
class UserInfo: # Inside this class we make our own command.
def __init__(self, bot):
self.bot = bot # Makes this class a command/extension
self.connection = r.connect(db='twincastbot')
@commands.command(aliases=["stats", "pinfo", "user", "u"], description="Get Twincast info about a user",
help="Shows how many twincasts, single casts, and failures a user has", usage="<user>",
brief="Get Twincast info about a user")
@commands.cooldown(1, 8, BucketType.user)
async def userinfo(self, ctx, user: Member = None): # the function's name is our command name.
if not user:
user = ctx.author
if r.table('users').get(str(user.id)).run(self.connection):
if r.table('users').get(str(user.id)).has_fields('twincasts').run(self.connection):
twincasts = r.table('users').get(str(user.id)).run(self.connection)['twincasts']
else:
twincasts = 0
else:
twincasts = 0
if r.table('users').get(str(user.id)).run(self.connection):
if r.table('users').get(str(user.id)).has_fields('single_casts').run(self.connection):
single_casts = r.table('users').get(str(user.id)).run(self.connection)['single_casts']
else:
single_casts = 0
else:
single_casts = 0
if r.table('users').get(str(user.id)).run(self.connection):
if r.table('users').get(str(user.id)).has_fields('failures').run(self.connection):
failures = r.table('users').get(str(user.id)).run(self.connection)['failures']
else:
failures = 0
else:
failures = 0
if user:
embed = Embed(
description="Twincasts: " + str(twincasts) + "\nSingle Casts: " +
str(single_casts) + "\nFailures: " + str(failures),
colour=0xFFFF00)
embed.set_author(name=user.name + "'s Twincast info", icon_url=user.avatar_url)
await ctx.send(embed=embed)
def setup(bot): # This function outside of the class initializes our extension/command and makes it readable by the
# main file, TwincastBot.py
bot.add_cog(UserInfo(bot)) # Well, yeah. adds the extension/command.
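# Hedged usage note: with discord.py's extension loader, this cog would be enabled
# via something like bot.load_extension("old.userinfo") (the module path is an
# assumption based on this file's location in the repo).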
| 45.232143 | 116 | 0.594552 | 323 | 2,533 | 4.609907 | 0.309598 | 0.094023 | 0.066488 | 0.084621 | 0.323036 | 0.268637 | 0.235057 | 0.235057 | 0.235057 | 0.235057 | 0 | 0.005978 | 0.273589 | 2,533 | 55 | 117 | 46.054545 | 0.803261 | 0.11212 | 0 | 0.326087 | 0 | 0 | 0.146744 | 0 | 0 | 0 | 0.003568 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.086957 | 0 | 0.152174 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52be9e80a9fb87a985a133a24563205fd6312a69 | 787 | py | Python | Dmitry_Shevelev/h4_task5.py | Perekalskiyigor/Sirius | 2dcf792b072fa2f3fe4c2e900a9d4b6d0c2bd9b8 | [
"MIT"
] | null | null | null | Dmitry_Shevelev/h4_task5.py | Perekalskiyigor/Sirius | 2dcf792b072fa2f3fe4c2e900a9d4b6d0c2bd9b8 | [
"MIT"
] | null | null | null | Dmitry_Shevelev/h4_task5.py | Perekalskiyigor/Sirius | 2dcf792b072fa2f3fe4c2e900a9d4b6d0c2bd9b8 | [
"MIT"
] | null | null | null | print("Блок на числа:")
a: int = int(input("От числа>"))
b: int = int(input("До числа>"))
print("На числа:")
c: int = int(input("От числа>"))
d: int = int(input("До числа>")) # Вводим переменные
factor1: list = list(range(a, b+1))
factor2: list = list(range(c, d+1)) # Диапазон чисел в листы
print(" ", end="\t") # Угловая клетка
for x in factor1:
if x != b:
print(x, end="\t") # Выводим шапку таблицы вместо концов TAB
else:
print(x) # Последний элемент шапки, конец - обычный
for y in factor2:
print(y, end="\t") # Выводим заданные столбцы
for x in factor1:
if x != b:
print(y*x, end="\t") # Выводим содержимое таблицы вместо концовTAB
else:
print(y*x) # Последний элемент содержимого, конец - обычный
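# Worked example: for the ranges 2..4 (header) and 5..6 (rows) the script prints:
#       2   3   4
#   5   10  15  20
#   6   12  18  24
# (columns are TAB-separated in the actual output)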
| 31.48 | 79 | 0.601017 | 118 | 787 | 4.008475 | 0.415254 | 0.05074 | 0.093023 | 0.054968 | 0.245243 | 0.093023 | 0.093023 | 0.093023 | 0 | 0 | 0 | 0.011785 | 0.245235 | 787 | 24 | 80 | 32.791667 | 0.784512 | 0.320203 | 0 | 0.285714 | 0 | 0 | 0.129278 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.380952 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c0ce3dd42ded6e42a1ab104a143a5c392b78b2 | 4,409 | py | Python | DBHelp.py | Matcha00/LeetCode_FS | c732606e003ad69c2c748cf720f55f696f404880 | [
"MIT"
] | null | null | null | DBHelp.py | Matcha00/LeetCode_FS | c732606e003ad69c2c748cf720f55f696f404880 | [
"MIT"
] | 5 | 2021-05-10T16:36:40.000Z | 2022-02-26T18:32:50.000Z | DBHelp.py | Matcha00/LeetCode_FS | c732606e003ad69c2c748cf720f55f696f404880 | [
"MIT"
] | null | null | null | import pymysql
class DBHelp():
def __init__(self,host,port,user,password,database,charset):
self.host = host
self.port = port
self.user = user
self.password = password
self.database = database
self.charset = charset
self.db = None
self.cursor = None
def openDB(self):
self.db = pymysql.connect(host=self.host,port = self.port,user=self.user,password=self.password,database=self.database,charset=self.charset,cursorclass=pymysql.cursors.DictCursor)
self.cursor = self.db.cursor()
def close(self):
self.cursor.close()
self.db.close()
    # data modification: insert / update / delete
def cud(self, sql, params):
self.openDB()
try:
self.cursor.execute(sql, params)
self.db.commit()
print("ok")
except Exception as e:
print(e)
self.db.rollback()
self.close()
    # fetch a single row
def find_one(self, sql, params=None):
self.openDB()
row = None
try:
self.cursor.execute(sql, params)
row = self.cursor.fetchone()
            if not row:
                self.close()
                return None
            if len(row.keys()) == 1:
                key = list(row.keys())[0]
                self.close()
                return row[key]
            self.close()
            return row
except Exception as e:
self.db.rollback()
self.close()
            print('find_one error:', e)
def find_all(self,sql,params = None):
self.openDB()
rows = None
try:
self.cursor.execute(sql,params)
rows = self.cursor.fetchall()
            if not rows:
                self.close()
                return None
            if len(rows[0].keys()) == 1:
                simple_list = []
                key = list(rows[0].keys())[0]
                for row in rows:
                    simple_list.append(row[key])
                self.close()
                return simple_list
            self.close()
            return rows
except Exception as e:
self.db.rollback()
self.close()
            print('find_all error:', e)
def cud_sql(self,sql):
self.openDB()
try:
self.cursor.execute(sql)
self.db.commit()
self.db.close()
print('ok')
print(self.db)
print(self.cursor)
except Exception as e:
self.db.rollback()
self.close()
print(e)
def insertone(self,sql,params = None):
self.openDB()
lastrowid = 0
try:
self.cursor.execute(sql,params)
self.db.commit()
lastrowid = self.cursor.lastrowid
self.close()
return lastrowid
except Exception as e:
self.db.rollback()
self.close()
print(e)
def insertmany(self,sql,params = None,batch_size=1000):
self.openDB()
cnt = 0
try:
batch_cnt = int(len(params) / batch_size) + 1
for i in range(batch_cnt):
sub_array = params[i * batch_size:(i + 1) * batch_size]
if sub_array:
cnt += self.cursor.executemany(sql, sub_array)
self.db.commit()
return cnt
except Exception as e:
self.db.rollback()
self.close()
print(e)
# def execute(sql, param=None):
# """
    # Execute a SQL statement: update or delete
    # :param sql: SQL statement
    # :param param: string|list
    # :return: number of affected rows
# """
# con = connect_mysql()
# cur = con.cursor()
# cnt = execute_process(con, cur, sql, param)
# cur.close()
# con.close()
# return cnt
# def execute_process(con, cur, sql, param=None):
# """ execute:内部调用 """
# cnt = 0
# try:
# cnt = cur.execute(sql, param)
# con.commit()
# except Exception as e:
# con.rollback()
# logger.error(traceback.format_exc())
# logger.error("[sql]:{} [param]:{}".format(sql, param))
# return cnt
#
def updateOrdelete(self,sql,params = None):
self.openDB()
cnt = 0
try:
cnt = self.cursor.execute(sql,params)
self.db.commit()
return cnt
except Exception as e:
self.db.rollback()
self.close()
print(e)
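# Minimal usage sketch. The connection parameters and table name below are
# placeholders, not values from this project; adjust them for a real MySQL instance.
if __name__ == '__main__':
    helper = DBHelp(host='127.0.0.1', port=3306, user='root',
                    password='secret', database='test', charset='utf8mb4')
    # each call opens its own connection via openDB() and closes it when done
    row_count = helper.find_one('SELECT COUNT(*) AS cnt FROM some_table')
    print(row_count)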
| 25.935294 | 187 | 0.490814 | 482 | 4,409 | 4.439834 | 0.184647 | 0.050467 | 0.063551 | 0.06729 | 0.35514 | 0.343925 | 0.256075 | 0.20514 | 0.192991 | 0.154673 | 0 | 0.005988 | 0.393967 | 4,409 | 169 | 188 | 26.088757 | 0.79491 | 0.140168 | 0 | 0.517241 | 0 | 0 | 0.005862 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086207 | false | 0.025862 | 0.008621 | 0 | 0.181034 | 0.094828 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c1531519a26ca71503c94f2a0810d69513c24f | 8,975 | py | Python | scripts/supervised/.~c9_invoke_vRCWi.py | fredshentu/public_model_based_controller | 9301699bc56aa49ba5c699f7d5be299046a8aa0c | [
"MIT"
] | null | null | null | scripts/supervised/.~c9_invoke_vRCWi.py | fredshentu/public_model_based_controller | 9301699bc56aa49ba5c699f7d5be299046a8aa0c | [
"MIT"
] | null | null | null | scripts/supervised/.~c9_invoke_vRCWi.py | fredshentu/public_model_based_controller | 9301699bc56aa49ba5c699f7d5be299046a8aa0c | [
"MIT"
] | null | null | null | """
Author: Dian Chen
Note that this script only applies to Box3dReachPixel environments
"""
from sandbox.rocky.tf.envs.base import TfEnv
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.misc import logger
from railrl.predictors.dynamics_model import NoEncoder, FullyConnectedEncoder, ConvEncoder, InverseModel, ForwardModel
import argparse
import tensorflow as tf
from os import listdir
import os.path as osp
import joblib
def read_and_decode(filename_queue, obs_shape, action_shape, batch_size=128, num_threads=12, queue_capacity=20000):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
features={
'image_raw': tf.FixedLenFeature([], tf.string),
'next_image_raw': tf.FixedLenFeature([], tf.string),
'action': tf.FixedLenFeature([4], tf.float32)
})
obs = tf.decode_raw(features['image_raw'], tf.uint8)
next_obs = tf.decode_raw(features['next_image_raw'], tf.uint8)
obs = tf.reshape(obs, obs_shape)
next_obs = tf.reshape(next_obs, obs_shape)
obs = tf.cast(obs, tf.float32) * (1. / 255) - 0.5
next_obs = tf.cast(next_obs, tf.float32) * (1. / 255) - 0.5
action = tf.cast(features['action'], tf.float32)
obs_batch, next_obs_batch, action_batch = tf.train.batch(
[obs, next_obs, action],
batch_size=batch_size,
num_threads=num_threads,
capacity=queue_capacity,
enqueue_many=False,
)
return obs_batch, next_obs_batch, action_batch
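# Note: obs_batch/next_obs_batch come back as float32 tensors scaled to
# [-0.5, 0.5] with shape (batch_size,) + obs_shape; action_batch has shape
# (batch_size, 4), per the FixedLenFeature definitions above.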
def save_snapshot(encoder, inverse_model, forward_model, tfmodel_path):
save_dict = dict(
encoder=encoder,
inverse_model=inverse_model,
forward_model=forward_model
)
joblib.dump(save_dict, tfmodel_path, compress=3)
logger.log("Saved ICM model to {}".format(tfmodel_path))
def cos_loss(A, B):
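    # cosine distance: 1 - mean over the batch of the cosine similarity between
    # corresponding rows of A and B (rows are L2-normalized first)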
dotproduct = tf.reduce_sum(tf.multiply(tf.nn.l2_normalize(A, 1), tf.nn.l2_normalize(B,1)), axis = 1)
return 1 - tf.reduce_mean(dotproduct)
def main():
parser = argparse.ArgumentParser()
parser.add_argument('env_name', type=str,
help="name of gym env")
parser.add_argument('dataset_path', type=str,
help="path of training and validation dataset")
parser.add_argument('--tfboard_path', type=str, default='/tmp/tfboard')
parser.add_argument('--tfmodel_path', type=str, default='/tmp/tfmodels')
# Training parameters
parser.add_argument('--val_ratio', type=float, default=0.1,
help="ratio of validation sets")
parser.add_argument('--num_itr', type=int, default=10000000)
parser.add_argument('--val_freq', type=int, default=1000)
parser.add_argument('--log_freq', type=int, default=200)
parser.add_argument('--save_freq', type=int, default=5000)
# ICM parameters
parser.add_argument('--init_lr', type=float, default=1e-4)
parser.add_argument('--forward_weight', type=float, default=0.8,
help="the ratio of forward loss vs inverse loss")
parser.add_argument('--cos_forward', action='store_true',
help="whether to use cosine forward loss")
# parser.add_argument('--norm_input', action='store_true',
# help="whether to normalize observation input")
args = parser.parse_args()
env = TfEnv(normalize(env=GymEnv(args.env_name,record_video=False, \
log_dir='/tmp/gym_test',record_log=False)))
# Get dataset
dataset_names = list(map(lambda file_name: osp.join(args.dataset_path, file_name), listdir(args.dataset_path)))
val_set_names = dataset_names[:int(len(dataset_names)*args.val_ratio)]
train_set_names = dataset_names[int(len(dataset_names)*args.val_ratio):]
train_queue = tf.train.string_input_producer(train_set_names, num_epochs=None)
val_queue = tf.train.string_input_producer(val_set_names, num_epochs=None)
train_obs, train_next_obs, train_action = read_and_decode(train_queue, env.observation_space.shape, env.action_space.shape)
val_obs, val_next_obs, val_action = read_and_decode(val_queue, env.observation_space.shape, env.action_space.shape)
# Build ICM model
# if args.norm_input:
# train_obs = train_obs * (1./255) - 0.5
# train_next_obs = train_next_obs *(1./255) - 0.5
# val_obs = val_obs * (1./255) - 0.5
# val_next_obs = val_next_obs * (1./255) - 0.5
# train_obs = tf.cast(train_obs, tf.float32) / 255.0 - 0.5
# train_next_obs = tf.cast(train_next_obs, tf.float32) / 255.0 - 0.5
# val_obs = tf.cast(val_obs, tf.float32) / 255.0 - 0.5
# val_next_obs = tf.cast(val_next_obs, tf.float32) / 255.0 - 0.5
# else:
# train_obs = tf.cast(train_obs, tf.float32)
# train_next_obs = tf.cast(train_next_obs, tf.float32)
# val_obs = tf.cast(val_obs, tf.float32)
# val_next_obs = tf.cast(val_next_obs, tf.float32)
_encoder = ConvEncoder(
feature_dim=256,
input_shape=env.observation_space.shape,
conv_filters=(64, 64, 64, 32),
conv_filter_sizes=((5,5), (5,5), (5,5), (3,3)),
conv_strides=(3, 2, 2, 2),
conv_pads=('SAME', 'SAME', 'SAME', 'SAME'),
hidden_sizes=(256,),
hidden_activation=tf.nn.elu,
)
_inverse_model = InverseModel(
feature_dim=256,
env_spec=env.spec,
hidden_sizes=(256,),
hidden_activation=tf.nn.tanh,
output_activation=tf.nn.tanh,
)
_forward_model = ForwardModel(
feature_dim=256,
env_spec=env.spec,
hidden_sizes=(256,),
hidden_activation=tf.nn.elu,
)
sess = tf.Session()
_encoder.sess = sess
_inverse_model.sess = sess
_forward_model.sess = sess
with sess.as_default():
# Initialize variables for get_copy to work
sess.run(tf.initialize_all_variables())
train_encoder1 = _encoder.get_weight_tied_copy(observation_input=train_obs)
train_encoder2 = _encoder.get_weight_tied_copy(observation_input=train_next_obs)
train_inverse_model = _inverse_model.get_weight_tied_copy(feature_input1=train_encoder1.output, feature_input2=train_encoder2.output)
train_forward_model = _forward_model.get_weight_tied_copy(feature_input=train_encoder1.output, action_input=train_action)
val_encoder1 = _encoder.get_weight_tied_copy(observation_input=val_obs)
val_encoder2 = _encoder.get_weight_tied_copy(observation_input=val_next_obs)
val_inverse_model = _inverse_model.get_weight_tied_copy(feature_input1=val_encoder1.output, feature_input2=val_encoder2.output)
val_forward_model = _forward_model.get_weight_tied_copy(feature_input=val_encoder1.output, action_input=val_action)
if args.cos_forward:
train_forward_loss = cos_loss(train_encoder2.output, train_forward_model.output)
val_forward_loss = cos_loss(val_encoder2.output, val_forward_model.output)
else:
train_forward_loss = tf.reduce_mean(tf.square(train_encoder2.output - train_forward_model.output))
val_forward_loss = tf.reduce_mean(tf.square(val_encoder2.output - val_forward_model.output))
train_inverse_loss = tf.reduce_mean(tf.square(train_action - train_inverse_model.output))
val_inverse_loss = tf.reduce_mean(tf.square(val_action - val_inverse_model.output))
train_total_loss = args.forward_weight * train_forward_loss + (1. - args.forward_weight) * train_inverse_loss
val_total_loss = args.forward_weight * val_forward_loss + (1. - args.forward_weight) * val_inverse_loss
icm_opt = tf.train.AdamOptimizer(args.init_lr).minimize(train_total_loss)
# Setup summaries
summary_writer = tf.summary.FileWriter(args.tfboard_path, graph=tf.get_default_graph())
train_inverse_loss_summ = tf.summary.scalar("train/icm_inverse_loss", train_inverse_loss)
train_forward_loss_summ = tf.summary.scalar("train/icm_forward_loss", train_forward_loss)
train_total_loss_summ = tf.summary.scalar("train/icm_total_loss", train_total_loss)
val_inverse_loss_summ = tf.summary.scalar("val/icm_inverse_loss", val_inverse_loss)
val_forward_loss_summ = tf.summary.scalar("val/icm_forward_loss", val_forward_loss)
val_total_loss_summ = tf.summary.scalar("val/icm_total_loss", val_total_loss)
train_summary_op = tf.summary.merge(
[train_inverse_loss_summ,
train_forward_loss_summ,
train_total_loss_summ])
val_summary_op = tf.summary.merge(
[val_inverse_loss_summ,
val_forward_loss_summ,
val_total_loss_summ])
logger.log("Finished creating ICM model")
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
for timestep in range(args.num_itr):
if timestep % args.log_freq == 0:
logger.log("Start itr {}".format(timestep))
_, train_summary = sess.run(
[icm_opt, train_summary_op]
)
else:
sess.run(icm_opt)
if timestep % args.log_freq == 0:
summary_writer.add_summary(train_summary, timestep)
if timestep % args.save_freq == 0:
save_snapshot(_encoder, _inverse_model, _forward_model, args.tfmodel_path)
if timestep % args.val_freq == 0:
val_summary = sess.run(
val_summary_op
)
summary_writer.add_summary(val_summary, timestep)
except KeyboardInterrupt:
print ("End training...")
pass
coord.join(threads)
sess.close()
if __name__ == "__main__":
main()
# test_img()
| 36.632653 | 135 | 0.755766 | 1,353 | 8,975 | 4.679231 | 0.181079 | 0.018954 | 0.034908 | 0.021482 | 0.392039 | 0.353183 | 0.276576 | 0.187016 | 0.126678 | 0.111515 | 0 | 0.024424 | 0.124123 | 8,975 | 244 | 136 | 36.782787 | 0.780944 | 0.106407 | 0 | 0.095808 | 0 | 0 | 0.077241 | 0.005508 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023952 | false | 0.005988 | 0.05988 | 0 | 0.095808 | 0.005988 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c608d9a32aa50c6e72c6fe820240c3e65ac148 | 2,120 | py | Python | tests/models/var_len_categorical_test.py | China-ChallengeHub/DeepTables | 6a60a26e5f4a591be416b93d3146eadc4ef409db | [
"Apache-2.0"
] | null | null | null | tests/models/var_len_categorical_test.py | China-ChallengeHub/DeepTables | 6a60a26e5f4a591be416b93d3146eadc4ef409db | [
"Apache-2.0"
] | null | null | null | tests/models/var_len_categorical_test.py | China-ChallengeHub/DeepTables | 6a60a26e5f4a591be416b93d3146eadc4ef409db | [
"Apache-2.0"
] | 1 | 2020-12-11T13:58:35.000Z | 2020-12-11T13:58:35.000Z | # -*- encoding: utf-8 -*-
import numpy as np
from sklearn.model_selection import train_test_split
from deeptables.models import deeptable
from deeptables.preprocessing.transformer import MultiVarLenFeatureEncoder
from deeptables.utils import consts
from deeptables.datasets import dsutils
class TestVarLenCategoricalFeature:
def setup_class(cls):
cls.df = dsutils.load_movielens().drop(['timestamp', "title"], axis=1)
def test_encoder(self):
df = self.df.copy()
df['genres_copy'] = df['genres']
multi_encoder = MultiVarLenFeatureEncoder([('genres', '|'), ('genres_copy', '|'), ])
result_df = multi_encoder.fit_transform(df)
assert multi_encoder._encoders['genres'].max_element_length > 0
assert multi_encoder._encoders['genres_copy'].max_element_length > 0
shape = np.array(result_df['genres'].tolist()).shape
assert shape[1] == multi_encoder._encoders['genres'].max_element_length
def test_var_categorical_feature(self):
X = self.df.copy()
y = X.pop('rating').values.astype('float32')
conf = deeptable.ModelConfig(nets=['dnn_nets'],
task=consts.TASK_REGRESSION,
categorical_columns=["movie_id", "user_id", "gender", "occupation", "zip", "title", "age"],
metrics=['mse'],
fixed_embedding_dim=True,
embeddings_output_dim=4,
apply_gbm_features=False,
apply_class_weight=True,
earlystopping_patience=5,
var_len_categorical_columns=[('genres', "|", "max")])
dt = deeptable.DeepTable(config=conf)
X_train, X_validation, y_train, y_validation = train_test_split(X, y, test_size=0.2)
model, history = dt.fit(X_train, y_train, validation_data=(X_validation, y_validation), epochs=10, batch_size=32)
assert 'genres' in model.model.input_names
| 42.4 | 128 | 0.600943 | 228 | 2,120 | 5.324561 | 0.482456 | 0.049423 | 0.049423 | 0.06425 | 0.100494 | 0.069193 | 0.069193 | 0 | 0 | 0 | 0 | 0.009927 | 0.287264 | 2,120 | 49 | 129 | 43.265306 | 0.793514 | 0.010849 | 0 | 0 | 0 | 0 | 0.07685 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 1 | 0.085714 | false | 0 | 0.171429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c935a632eba072a676ea6a5a4105f05455eb4d | 1,170 | py | Python | src/app/api/login.py | Wedding-APIs-System/Backend-APi | 5a03be5f36ce8ca7e3abba2d64b63c55752697f3 | [
"MIT"
] | null | null | null | src/app/api/login.py | Wedding-APIs-System/Backend-APi | 5a03be5f36ce8ca7e3abba2d64b63c55752697f3 | [
"MIT"
] | null | null | null | src/app/api/login.py | Wedding-APIs-System/Backend-APi | 5a03be5f36ce8ca7e3abba2d64b63c55752697f3 | [
"MIT"
] | null | null | null | from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from sql_app.database import orm_connection
from sql_app import schemas, models, crud
SessionLocal, engine = orm_connection()
models.Base.metadata.create_all(bind=engine)
# Dependency
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
router = APIRouter()
# @router.get("/login/{guest_number}", response_model=schemas.GuestFamily)
@router.get("/login/{guest_number}", tags=["Login endpoint"])
async def get_family(guest_number: str, db: Session = Depends(get_db)):
db_family = crud.get_guest(db, phone_number=guest_number)
assistants_number = crud.get_assistants(db, phone_number=guest_number)
if db_family is None:
raise HTTPException(status_code=404, detail="User not found")
elif assistants_number is None:
raise HTTPException(status_code=404, detail="Number of assistants not found")
return {'family_name':f'{db_family.family.family_name}',
'attendance_confirmation': f'{db_family.attendance_confirmation}',
'Number_of_assistants': f'{assistants_number}'} | 34.411765 | 86 | 0.728205 | 151 | 1,170 | 5.423841 | 0.417219 | 0.067155 | 0.02442 | 0.046398 | 0.224664 | 0.105006 | 0.105006 | 0.105006 | 0 | 0 | 0 | 0.006135 | 0.164103 | 1,170 | 34 | 87 | 34.411765 | 0.831288 | 0.07094 | 0 | 0 | 0 | 0 | 0.2 | 0.100461 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c94e588f6388e27498c7856eedc215ddbc65d0 | 1,823 | py | Python | tempest/tests/lib/services/volume/v3/test_attachments_client.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 254 | 2015-01-05T19:22:52.000Z | 2022-03-29T08:14:54.000Z | tempest/tests/lib/services/volume/v3/test_attachments_client.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 13 | 2015-03-02T15:53:04.000Z | 2022-02-16T02:28:14.000Z | tempest/tests/lib/services/volume/v3/test_attachments_client.py | rishabh20111990/tempest | df15531cd4231000b0da016f5cd8641523ce984e | [
"Apache-2.0"
] | 367 | 2015-01-07T15:05:39.000Z | 2022-03-04T09:50:35.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib.services.volume.v3 import attachments_client
from tempest.tests.lib import fake_auth_provider
from tempest.tests.lib.services import base
from oslo_utils.fixture import uuidsentinel as uuids
class TestAttachmentsClient(base.BaseServiceTest):
FAKE_ATTACHMENT_INFO = {
"attachment": {
"status": "attaching",
"detached_at": "2015-09-16T09:28:52.000000",
"connection_info": {},
"attached_at": "2015-09-16T09:28:52.000000",
"attach_mode": "ro",
"instance": uuids.instance_id,
"volume_id": uuids.volume_id,
"id": uuids.id,
}
}
def setUp(self):
super(TestAttachmentsClient, self).setUp()
fake_auth = fake_auth_provider.FakeAuthProvider()
self.client = attachments_client.AttachmentsClient(fake_auth,
'volume',
'regionOne')
def test_show_attachment(self):
self.check_service_client_function(
self.client.show_attachment,
'tempest.lib.common.rest_client.RestClient.get',
self.FAKE_ATTACHMENT_INFO, attachment_id=uuids.id)
| 38.787234 | 78 | 0.637411 | 212 | 1,823 | 5.349057 | 0.542453 | 0.05291 | 0.022928 | 0.028219 | 0.040564 | 0.040564 | 0.040564 | 0 | 0 | 0 | 0 | 0.034117 | 0.276467 | 1,823 | 46 | 79 | 39.630435 | 0.825625 | 0.299506 | 0 | 0 | 0 | 0 | 0.163233 | 0.076862 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c973fc5ebbd69322a75885957b2810133a248c | 2,968 | py | Python | api/routes/users.py | a-samir97/author-manager-restapi | f9931edd76dfdc94f9c1c59774334c10b7ccd33b | [
"MIT"
] | null | null | null | api/routes/users.py | a-samir97/author-manager-restapi | f9931edd76dfdc94f9c1c59774334c10b7ccd33b | [
"MIT"
] | null | null | null | api/routes/users.py | a-samir97/author-manager-restapi | f9931edd76dfdc94f9c1c59774334c10b7ccd33b | [
"MIT"
] | null | null | null | from flask import Blueprint, request, url_for, render_template_string
from api.utils.responses import response_with
from api.utils import responses as resp
from api.models.users import User, UserSchema
from api.utils.database import db
from api.utils.token import generate_verification_token, confirm_verification_token
from api.utils.email import send_email
from flask_jwt_extended import create_access_token
# blueprint for user routes
user_routes = Blueprint("user_routes", __name__)
# create a new user
@user_routes.route('/', methods=['POST'])
def create_user():
try:
data = request.get_json()
if User.find_by_email(email=data.get('email')) is not None or User.find_by_username(username=data.get('username')) is not None:
return response_with(resp.INVALID_INPUT_422)
data['password'] = User.generate_hash(data['password'])
user_schema = UserSchema()
user = user_schema.load(data)
token = generate_verification_token(data.get('email'))
verification_email = url_for('user_routes.verify_email', token=token, _external=True)
html = render_template_string("<p>Welcome! Thanks for\
signing up. Please follow this link to activate your\
account:</p> <p><a href='{{ verification_email }}'>{{\
verification_email }}</a></p> <br> <p>Thanks!</p>",
verification_email=verification_email
)
subject = "Please Verify your email"
send_email(user.email, subject, html)
result = user_schema.dump(user.create())
return response_with(resp.SUCCESS_201)
except Exception as e:
print(e)
return response_with(resp.INVALID_INPUT_422)
# login users (authentication)
@user_routes.route('/login', methods=['POST'])
def authenticate_user():
try:
data = request.get_json()
if data.get('email'):
current_user = User.find_by_email(data['email'])
elif data.get('username'):
current_user = User.find_by_username(data['username'])
if not current_user:
return response_with(resp.SERVER_ERROR_404)
if current_user and not current_user.is_verified:
return response_with(resp.BAD_REQUEST_400)
if User.verify_hash(data['password'], current_user.password):
access_token = create_access_token(identity=data['username'])
return response_with(resp.SUCCESS_201, value={'message': 'Logged in as {}'.format(current_user.username), "access_token":access_token})
else:
return response_with(resp.UNAUTHORIZED_401)
except Exception as e:
print(e)
return response_with(resp.INVALID_INPUT_422)
@user_routes.route('/confirm/<token>', methods=['GET'])
def verify_email(token):
try:
email = confirm_verification_token(token)
except Exception as e:
return response_with(resp.SERVER_ERROR_401)
user = User.query.filter_by(email=email).first_or_404()
if user.is_verified:
return response_with(resp.INVALID_INPUT_422)
else:
user.is_verified = True
db.session.add(user)
db.session.commit()
return response_with(resp.SUCCESS_200, value={"message": 'E-mail verified, you can proceed to login now.'}) | 32.977778 | 138 | 0.760445 | 429 | 2,968 | 5.020979 | 0.284382 | 0.066852 | 0.091922 | 0.112349 | 0.233055 | 0.199629 | 0.139276 | 0.056639 | 0.056639 | 0.056639 | 0 | 0.013793 | 0.12062 | 2,968 | 90 | 139 | 32.977778 | 0.811494 | 0.024259 | 0 | 0.238806 | 0 | 0 | 0.096785 | 0.008296 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0.029851 | 0.119403 | 0 | 0.328358 | 0.059701 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c97543de1860be24945ea8c74fb7a56ba6196d | 2,700 | py | Python | csn.py | MachineLP/conditional-similarity-networks | 8914fd5900d3b21cd6bd31e0efe787c9fbb48eea | [
"BSD-3-Clause"
] | 1 | 2021-04-16T08:22:44.000Z | 2021-04-16T08:22:44.000Z | csn.py | MachineLP/conditional-similarity-networks | 8914fd5900d3b21cd6bd31e0efe787c9fbb48eea | [
"BSD-3-Clause"
] | null | null | null | csn.py | MachineLP/conditional-similarity-networks | 8914fd5900d3b21cd6bd31e0efe787c9fbb48eea | [
"BSD-3-Clause"
] | null | null | null | import torch
import torch.nn as nn
import numpy as np
class ConditionalSimNet(nn.Module):
def __init__(self, embeddingnet, n_conditions, embedding_size, learnedmask=True, prein=False):
""" embeddingnet: The network that projects the inputs into an embedding of embedding_size
n_conditions: Integer defining number of different similarity notions
embedding_size: Number of dimensions of the embedding output from the embeddingnet
learnedmask: Boolean indicating whether masks are learned or fixed
prein: Boolean indicating whether masks are initialized in equally sized disjoint
sections or random otherwise"""
super(ConditionalSimNet, self).__init__()
self.learnedmask = learnedmask
self.embeddingnet = embeddingnet
# create the mask
if learnedmask:
if prein:
# define masks
self.masks = torch.nn.Embedding(n_conditions, embedding_size)
# initialize masks
mask_array = np.zeros([n_conditions, embedding_size])
mask_array.fill(0.1)
mask_len = int(embedding_size / n_conditions)
for i in range(n_conditions):
mask_array[i, i*mask_len:(i+1)*mask_len] = 1
# no gradients for the masks
self.masks.weight = torch.nn.Parameter(torch.Tensor(mask_array), requires_grad=True)
else:
# define masks with gradients
self.masks = torch.nn.Embedding(n_conditions, embedding_size)
# initialize weights
self.masks.weight.data.normal_(0.9, 0.7) # 0.1, 0.005
else:
# define masks
self.masks = torch.nn.Embedding(n_conditions, embedding_size)
# initialize masks
mask_array = np.zeros([n_conditions, embedding_size])
mask_len = int(embedding_size / n_conditions)
for i in range(n_conditions):
mask_array[i, i*mask_len:(i+1)*mask_len] = 1
# no gradients for the masks
self.masks.weight = torch.nn.Parameter(torch.Tensor(mask_array), requires_grad=False)
def forward(self, x, c):
        # obtain the feature embedding from the backbone network
embedded_x = self.embeddingnet(x)
        # self.masks is an Embedding of shape [n_conditions, embedding_size];
        # self.masks(c) selects the mask row for condition c
self.mask = self.masks(c)
if self.learnedmask:
self.mask = torch.nn.functional.relu(self.mask)
masked_embedding = embedded_x * self.mask
return masked_embedding, self.mask.norm(1), embedded_x.norm(2), masked_embedding.norm(2)
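# Minimal smoke test (hypothetical: a Linear layer stands in for the real
# embedding backbone used in this repo, purely for illustration).
if __name__ == "__main__":
    dummy_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 64))
    csn = ConditionalSimNet(dummy_net, n_conditions=4, embedding_size=64)
    x = torch.randn(2, 3, 8, 8)          # two dummy 8x8 RGB inputs
    c = torch.tensor([0, 2])             # one condition index per input
    masked, mask_l1, embed_l2, masked_l2 = csn(x, c)
    print(masked.shape)                  # expected: torch.Size([2, 64])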
| 50.943396 | 112 | 0.632963 | 324 | 2,700 | 5.104938 | 0.314815 | 0.079807 | 0.084643 | 0.101572 | 0.420193 | 0.381499 | 0.381499 | 0.381499 | 0.381499 | 0.381499 | 0 | 0.009891 | 0.288519 | 2,700 | 52 | 113 | 51.923077 | 0.851119 | 0.27 | 0 | 0.382353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.205882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52c9cd6db5c0e01c03973d912b4c73d54838fe27 | 33,901 | py | Python | core/plugin.py | alturiak/nio-smith | e54774af0aefcb2119caee946fcd4d84619452bb | [
"Apache-2.0"
] | 16 | 2020-07-03T23:04:13.000Z | 2022-02-21T15:29:33.000Z | core/plugin.py | alturiak/nio-smith | e54774af0aefcb2119caee946fcd4d84619452bb | [
"Apache-2.0"
] | 37 | 2020-07-02T21:41:17.000Z | 2022-02-05T23:40:17.000Z | core/plugin.py | alturiak/nio-smith | e54774af0aefcb2119caee946fcd4d84619452bb | [
"Apache-2.0"
] | 4 | 2020-08-03T20:25:54.000Z | 2021-12-30T07:02:50.000Z | import os.path
from os import remove, path
import pickle
from typing import List, Any, Dict, Callable, Union, Hashable, Tuple
import datetime
import yaml
from core.chat_functions import send_text_to_room, send_reaction, send_replace
from asyncio import sleep
import logging
from nio import AsyncClient, JoinedMembersResponse, RoomMember, RoomSendResponse, RoomSendError
from core.timer import Timer
from fuzzywuzzy import fuzz
import copy
import jsonpickle
logger = logging.getLogger(__name__)
class Plugin:
def __init__(self, name: str, category: str, description: str):
"""
        commands (Dict[str, PluginCommand]): dictionary of registered commands, mapping each trigger to its PluginCommand
"""
self.category: str = category
self.name: str = name
self.description: str = description
self.commands: Dict[str, PluginCommand] = {}
self.help_texts: Dict[str, str] = {}
self.hooks: Dict[str, List[PluginHook]] = {}
self.timers: List[Timer] = []
self.rooms: List[str] = []
if path.isdir(f"plugins/{self.name}"):
self.is_directory_based: bool = True
self.basepath: str = f"plugins/{self.name}/{self.name}"
else:
self.is_directory_based: bool = False
self.basepath: str = f"plugins/{self.name}"
self.plugin_data_filename: str = f"{self.basepath}.pkl"
self.plugin_dataj_filename: str = f"{self.basepath}.json"
self.plugin_state_filename: str = f"{self.basepath}_state.json"
self.config_items_filename: str = f"{self.basepath}.yaml"
self.plugin_data: Dict[str, Any] = {}
self.config_items: Dict[str, Any] = {}
self.configuration: Union[Dict[Hashable, Any], list, None] = self.__load_config()
def is_valid_for_room(self, room_id: str) -> bool:
if self.rooms == [] or room_id in self.rooms:
return True
else:
return False
def get_help_text(self):
"""
Extract helptexts from commands
:return:
List[dict]: {command: helptext}
"""
        command_help: List[Dict] = []
        for command, help_text in self.help_texts.items():
            command_help.append({command: help_text})
        return command_help
def add_command(self, command: str, method: Callable, help_text: str, room_id: List[str] = None, power_level: int = 0, command_type: str = "static"):
"""
Adds a new command
:param command: the actual name of the command, e.g. !help
:param method: the method to be called when the command is received
:param help_text: the command's helptext, e.g. as displayed by !help
:param power_level: an optional matrix room power_level needed for users to be able to execute the command (defaults to 0, all users may execute the command)
:param room_id: an optional room_id-list the command will be added on (defaults to none, command will be active on all rooms)
:param command_type: the optional type of the command, currently "static" (default) or "dynamic"
:return:
"""
plugin_command = PluginCommand(command, method, help_text, power_level=power_level, room_id=room_id, command_type=command_type)
if command not in self.commands.keys():
self.commands[command] = plugin_command
self.help_texts[command] = help_text
# Add rooms from command to the rooms the plugin is valid for
if room_id:
for room in room_id:
if room not in self.rooms:
self.rooms.append(room)
logger.debug(f"Added command {command} to rooms {room_id}")
if command_type == "dynamic":
self._save_state()
else:
logger.error(f"Error adding command {command} - command already exists")
def del_command(self, command: str) -> bool:
"""
Removes an active command if it's of type dynamic
:param command: the actual name of the command
:return: True, if the command has been found and removed,
False, otherwise
"""
command: str
if command in self.commands.keys():
if self.commands.get(command).command_type == "dynamic":
del self.commands[command]
self._save_state()
return True
else:
logger.warning(f"Plugin {self.name} tried to remove static command {command}.")
else:
return False
def get_commands(self):
"""
Extract called methods from commands
:return:
dict: {command: method}
"""
return self.commands
def add_hook(self, event_type: str, method: Callable, room_id: List[str] or None = None, event_ids: List[str] or None = None, hook_type: str = "static"):
"""
Hook into events defined by event_type with `method`.
        Multiple hooks may be registered for the same event_type; each registered method is called when a matching event is received.
:param event_type: event-type to hook into, currently "m.reaction" and "m.room.message"
:param method: method to be called when an event is received
:param room_id: optional list of room_ids the hook is active on
:param event_ids: optional list of event-ids, the hook is applicable for, currently only useful for "m.reaction"-hooks
:param hook_type: the optional type of the hook, currently "static" (default) or "dynamic"
:return:
"""
plugin_hook = PluginHook(event_type, method, room_id=room_id, event_ids=event_ids, hook_type=hook_type)
if event_type not in self.hooks.keys():
self.hooks[event_type] = [plugin_hook]
else:
self.hooks[event_type].append(plugin_hook)
if hook_type == "dynamic":
self._save_state()
logger.debug(f"Added hook for {event_type} to rooms {room_id}")
def get_hooks(self):
return self.hooks
def del_hook(self, event_type: str, method: Callable) -> bool:
"""
Remove an active hook
:param event_type:
:param method:
:return: True, if hook for given event_type and method has been found and removed
False, otherwise
"""
if event_type in self.hooks.keys():
hooks = self.hooks
hook: PluginHook
for hook in hooks.get(event_type):
if hook.method == method:
if hook.hook_type == "dynamic":
self.hooks[event_type].remove(hook)
self._save_state()
return True
else:
logger.warning(f"Plugin {self.name} tried to remove static hook for {event_type}.")
return False
def add_timer(self, method: Callable, frequency: str or datetime.timedelta or None = None, timer_type: str = "static"):
"""
:param method: the method to be called when the timer trigger
:param frequency: frequency in which the timer should trigger
can be:
- datetime.timedelta: to specify timeperiods between triggers or
- str:
- "weekly": run once a week roughly at Monday, 00:00
- "daily": run once a day roughly at midnight
- "hourly": run once an hour roughly at :00
- None: triggers about every thirty seconds
:param timer_type:
:return:
"""
self.timers.append(Timer(f"{self.name}.{method.__name__}", method, frequency=frequency, timer_type=timer_type))
if timer_type == "dynamic":
self._save_state()
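    # Hedged example: a dynamic timer firing roughly every ten minutes
    # (poll_feed is a hypothetical plugin method):
    #   plugin.add_timer(poll_feed, frequency=datetime.timedelta(minutes=10), timer_type="dynamic")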
def get_timers(self) -> List[Timer]:
return self.timers
def has_timer_for_method(self, method: Callable) -> bool:
"""
Check if timers exist for a given method
:param method: Callable to check for
:return: True, if there are any timers for the given method
False, otherwise
"""
timer: Timer
timers = self.get_timers()
for timer in timers:
if timer.method == method:
return True
return False
def del_timer(self, method: Callable) -> bool:
"""
Remove an existing timer, if it is of timer_type dynamic
:param method:
:return:
"""
timer: Timer
timers = self.get_timers()
for timer in timers:
if timer.method == method:
if timer.timer_type == "dynamic":
self.get_timers().remove(timer)
self._save_state()
return True
else:
logger.warning(f"Plugin {self.name} tried to remove static timer {timer.name}.")
return False
async def store_data(self, name: str, data: Any) -> bool:
"""
        Store data in the plugin's json data file (plugins/<pluginname>.json)
:param name: Name of the data to store, used as a reference to retrieve it later
:param data: data to be stored
:return: True, if data was successfully stored
False, if data could not be stored
"""
if data != self.plugin_data.get(name):
self.plugin_data[name] = data
return await self.__save_data_to_file()
else:
return True
async def read_data(self, name: str) -> Any:
"""
Read data from self.plugin_data
:param name: Name of the data to be retrieved
:return: the previously stored data
"""
if name in self.plugin_data:
return copy.deepcopy(self.plugin_data[name])
else:
return None
async def clear_data(self, name: str) -> bool:
"""
Clear a specific field in self.plugin_data
:param name: name of the field to be cleared
:return: True, if successfully cleared
False, if name not contained in self.plugin_data or data could not be saved to disk
"""
if name in self.plugin_data:
del self.plugin_data[name]
return await self.__save_data_to_file()
else:
return False
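    # Hedged round-trip example for the persistence helpers above:
    #   await plugin.store_data("counter", 1)
    #   value = await plugin.read_data("counter")   # -> 1 (a deep copy)
    #   await plugin.clear_data("counter")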
async def __load_pickle_data_from_file(self, filename: str) -> Dict[str, Any]:
"""
Load data from a pickle-file
:param filename: filename to load data from
:return: loaded data
"""
file = open(filename, "rb")
data = pickle.load(file)
file.close()
return data
async def __load_json_data_from_file(self, filename: str, convert: bool = False) -> Dict[str, Any]:
"""
Load data from a json-file
:param filename: filename to load data from
:param convert: If data needs to be converted from single-file to directory-based
:return: loaded data
"""
file = open(filename, "r")
json_data: str = file.read()
if convert and f"\"py/object\": \"plugins.{self.name}.{self.name}." not in json_data:
json_data = json_data.replace(f"\"py/object\": \"plugins.{self.name}.", f"\"py/object\": \"plugins.{self.name}.{self.name}.")
data = jsonpickle.decode(json_data)
return data
async def _load_data_from_file(self) -> Dict[str, Any]:
"""
Load plugin_data from file
:return: Data read from file to be loaded into self.plugin_data
"""
plugin_data_from_json: Dict[str, Any] = {}
plugin_data_from_pickle: Dict[str, Any] = {}
abandoned_data: bool = False
try:
if os.path.isfile(self.plugin_dataj_filename):
# local json data found, convert if needed
plugin_data_from_json = await self.__load_json_data_from_file(self.plugin_dataj_filename, self.is_directory_based)
if os.path.isfile(self.plugin_data_filename):
logger.warning(f"Data for {self.name} read from {self.plugin_dataj_filename}, but {self.plugin_data_filename} still exists. After "
f"verifying, that {self.name} is running correctly, please remove {self.plugin_data_filename}")
elif os.path.isfile(self.plugin_data_filename):
# local pickle-data found
logger.warning(f"Reading data for {self.name} from pickle. This should only happen once. Data will be stored in new format.")
plugin_data_from_pickle = await self.__load_pickle_data_from_file(self.plugin_data_filename)
else:
# no local data found, check for abandoned data
if self.is_directory_based:
abandoned_json_file: str = f"plugins/{self.name}.json"
if os.path.isfile(abandoned_json_file):
abandoned_data = True
logger.warning(f"Loading abandoned data for {self.name} from {abandoned_json_file}. This should only happen once.")
plugin_data_from_json = await self.__load_json_data_from_file(abandoned_json_file, convert=True)
except Exception as err:
logger.critical(f"Could not load plugin_data for {self.name}: {err}")
return {}
if abandoned_data:
if 'abandoned_json_file' in locals() and os.path.isfile(abandoned_json_file):
logger.warning(f"You may remove {abandoned_json_file} now, it is no longer being used.")
await self.__save_data_to_json_file(plugin_data_from_json, self.plugin_dataj_filename)
if plugin_data_from_pickle != {} and not os.path.isfile(self.plugin_dataj_filename):
logger.warning(f"Converting data for {self.name} to {self.plugin_dataj_filename}. This should only happen once.")
if os.path.isfile(self.plugin_data_filename):
logger.warning(f"You may remove {self.plugin_data_filename} now, it is no longer being used.")
if plugin_data_from_pickle != {}:
return plugin_data_from_pickle
elif plugin_data_from_json != {}:
return plugin_data_from_json
else:
return {}
async def __save_data_to_pickle_file(self, data: Dict[str, Any], filename: str):
"""
Save data to a pickle-file
:param data: data to save
:param filename: filename to save the data to
:return: True, if data stored successfully
False, otherwise
"""
try:
pickle.dump(data, open(filename, "wb"))
return True
except Exception as err:
logger.critical(f"Could not write plugin_data to {self.plugin_data_filename}: {err}")
return False
async def __save_data_to_json_file(self, data: Dict[str, Any], filename: str):
"""
Save data to a json file
:param data: data to save
:param filename: filename to save the data to
:return: True, if data stored successfully
False, otherwise
"""
try:
json_data = jsonpickle.encode(data)
file = open(filename, "w")
file.write(json_data)
file.close()
return True
except Exception as err:
logger.critical(f"Could not write plugin_data to {self.plugin_data_filename}: {err}")
return False
async def __save_data_to_file(self) -> bool:
"""
Save modified plugin_data to disk
:return: True, if data stored successfully
False, otherwise
"""
if self.plugin_data != {}:
"""there is actual data to save"""
return await self.__save_data_to_json_file(self.plugin_data, self.plugin_dataj_filename)
else:
logger.debug("No data to save, remove datafiles")
"""no data to save, remove file"""
if os.path.isfile(self.plugin_data_filename):
try:
remove(self.plugin_data_filename)
return True
except Exception as err:
logger.critical(f"Could not remove file {self.plugin_data_filename}: {err}")
return False
if os.path.isfile(self.plugin_dataj_filename):
try:
remove(self.plugin_dataj_filename)
return True
except Exception as err:
logger.critical(f"Could not remove file {self.plugin_dataj_filename}: {err}")
return False
async def message(self, client, room_id, message: str, delay: int = 0) -> str or None:
"""
Send a message to a room, usually utilized by plugins to respond to commands
:param client: AsyncClient used to send the message
:param room_id: room_id to send to message to
:param message: the actual message
:param delay: optional delay with typing notification, 1..1000ms
:return: the event_id of the sent message or None in case of an error
"""
if delay > 0:
if delay > 1000:
delay = 1000
await client.room_typing(room_id, timeout=delay)
await sleep(float(delay/1000))
await client.room_typing(room_id, typing_state=False)
event_response: RoomSendResponse or RoomSendError
event_response = await send_text_to_room(client, room_id, message, notice=False)
if isinstance(event_response, RoomSendResponse):
return event_response.event_id
else:
return None
async def reply(self, command, message: str, delay: int = 0) -> str or None:
"""
Simplified version of self.message() to reply to commands
:param command: the command object passed by the message we're responding to
:param message: the actual message
:param delay: optional delay with typing notification, 1..1000ms
:return: the event_id of the sent message or None in case of an error
"""
return await self.message(command.client, command.room.room_id, message, delay)
async def notice(self, client, room_id: str, message: str) -> str or None:
"""
Send a notice to a room, usually utilized by plugins to post errors, help texts or other messages not warranting pinging users
:param client: AsyncClient used to send the message
:param room_id: room_id to send to message to
:param message: the actual message
:return: the event_id of the sent message or None in case of an error
"""
event_response: RoomSendResponse
event_response = await send_text_to_room(client, room_id, message, notice=True)
if event_response:
return event_response.event_id
else:
return None
async def reply_notice(self, command, message: str) -> str or None:
"""
Simplified version of self.notice() to reply to commands
:param command: the command object passed by the message we're responding to
:param message: the actual message
:return: the event_id of the sent message or None in case of an error
"""
return await self.notice(command.client, command.room.room_id, message)
async def react(self, client, room_id: str, event_id: str, reaction: str):
"""
React to a specific event
:param client: (nio.AsyncClient) The client to communicate to matrix with
:param room_id: (str) room_id to send the reaction to (is this actually being used?)
:param event_id: (str) event_id to react to
:param reaction: (str) the reaction to send
:return:
"""
await send_reaction(client, room_id, event_id, reaction)
async def replace(self, client: AsyncClient, room_id: str, event_id: str, message: str) -> str or None:
"""
        Edits an event. send_replace() will check if the new content actually differs before really sending the replacement
:param client: (nio.AsyncClient) The client to communicate to matrix with
:param room_id: (str) room_id of the original event
:param event_id: (str) event_id to edit
:param message: (str) the new message
:return: (str) the event-id of the new room-event, if the original event has been replaced or
None, if the event has not been edited
"""
return await send_replace(client, room_id, event_id, message)
async def message_redact(self, client: AsyncClient, room_id: str, event_id: str, reason: str = ""):
"""
Redact an event
:param client: (nio.AsyncClient) The client to communicate to matrix with
:param room_id: (str) room_id to send the redaction to
:param event_id: (str) event_id to redact
:param reason: (str) optional reason for the redaction
:return:
"""
await client.room_redact(room_id, event_id, reason)
async def message_delete(self, client: AsyncClient, room_id: str, event_id: str, reason: str = ""):
"""
Alias for message_redact
"""
await self.message_redact(client, room_id, event_id, reason)
    async def is_user_in_room(self, client: AsyncClient, room_id: str, display_name: str, strictness: str = "loose", fuzziness: int = 75) -> Union[RoomMember, None]:
"""
        Try to determine if a given display name currently belongs to a member of the room
:param client: AsyncClient
:param room_id: id of the room to check for a user
:param display_name: displayname of the user
:param strictness: how strict to match the nickname
strict: exact match
loose: case-insensitive match (default)
fuzzy: fuzzy matching
:param fuzziness: if strictness == fuzzy, fuzziness determines the required percentage for a match
:return: RoomMember matching the displayname if found,
None otherwise
"""
room_members: JoinedMembersResponse = await client.joined_members(room_id)
room_member: RoomMember
if strictness == "strict" or strictness == "loose":
for room_member in room_members.members:
if strictness == "strict":
if room_member.display_name == display_name:
return room_member
else:
"""loose matching"""
if room_member.display_name.lower() == display_name.lower():
return room_member
else:
return None
else:
"""attempt fuzzy matching"""
ratios: Dict[int, RoomMember] = {}
for room_member in room_members.members:
score: int = 0
if room_member.display_name and (score := fuzz.ratio(display_name.lower(), room_member.display_name.lower())) >= fuzziness:
ratios[score] = room_member
if ratios != {}:
return ratios[max(ratios.keys())]
else:
return None
    async def link_user(self, client: AsyncClient, room_id: str, display_name: str, strictness: str = "loose", fuzziness: int = 75) -> Union[str, None]:
"""
Given a displayname and a command, returns a userlink
:param client: AsyncClient
:param room_id: id of the room
:param display_name: displayname of the user
:param strictness: how strict to match the nickname
strict: exact match
loose: case-insensitive match (default)
fuzzy: fuzzy matching
:param fuzziness: if strictness == fuzzy, fuzziness determines the required percentage for a match
:return: string with the userlink-html-code if found,
None otherwise
"""
user: RoomMember
if user := await self.is_user_in_room(client, room_id, display_name, strictness=strictness, fuzziness=fuzziness):
return f"<a href=\"https://matrix.to/#/{user.user_id}\">{user.display_name}</a>"
else:
return None
    async def get_mx_user_id(self, client: AsyncClient, room_id: str, display_name: str, strictness: str = "loose", fuzziness: int = 75) -> Union[str, None]:
"""
Given a displayname and a command, returns a mx user id
:param client:
:param room_id:
:param display_name: displayname of the user
:param strictness: how strict to match the nickname
strict: exact match
loose: case-insensitive match (default)
fuzzy: fuzzy matching
:param fuzziness: if strictness == fuzzy, fuzziness determines the required percentage for a match
:return: string with the user_id if found
None otherwise
"""
user: RoomMember
if user := await self.is_user_in_room(client, room_id, display_name, strictness=strictness, fuzziness=fuzziness):
return user.user_id
else:
return None
def add_config(self, config_item: str, default_value: Any = None, is_required: bool = False) -> bool:
"""
        Add a config value to be searched for in the plugin-specific configuration file upon loading; raises KeyError if a required
        config_item cannot be found
:param config_item: the name of the configuration Item
:param default_value: The value to use if the item is not specified in the configuration
:param is_required: specify whether the configuration has to be present in the config-file for successful loading
:return: True, if the config_item is stored and not a duplicate
False, if the config_item already exists
"""
        if config_item in self.config_items:
logger.warning(f"{self.name}: Configuration item {config_item} has been defined already")
return False
else:
# check for the value in configuration file and apply it if found
if self.configuration and self.configuration.get(config_item):
self.config_items[config_item] = self.configuration.get(config_item)
# otherwise apply default
elif default_value:
self.config_items[config_item] = default_value
# if no value and no default, but item is not required, set it to None
elif not is_required:
self.config_items[config_item] = None
# no value, no default, and item is required
else:
err_str: str = f"Required configuration item {config_item} for plugin {self.name} could not be found"
logger.warning(err_str)
raise KeyError(err_str)
return True
def read_config(self, config_item: str) -> Any:
"""
Read and return a specific value from the configuration
If the actual value hasn't been read from the configuration, return the default value
:param config_item: the name of the configuration item to look for
:return: The value of the requested config_item
None, if the value could not be found or does not have a default value defined
"""
try:
return self.config_items[config_item]
except KeyError:
return None
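    # Example (hedged sketch): a plugin would typically declare its
    # configuration once at load time and read it back later, e.g.
    #
    #   plugin.add_config("api_key", is_required=True)        # raises KeyError if missing
    #   plugin.add_config("greeting", default_value="Hello")  # falls back to the default
    #   greeting = plugin.read_config("greeting")
    #
    # `plugin` stands for an instance of this class; the item names are illustrative.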
    def __load_config(self) -> Union[Dict[Hashable, Any], list, None]:
"""
        Load the plugin's configuration from a .yaml-file. The filename needs to match the plugin's name,
        e.g. the configuration for sample has to be provided in sample.yaml
:return: result of yaml.safe_load, if the config file has been successfully read
None, if there was an error reading the configuration
"""
if path.exists(self.config_items_filename):
try:
with open(self.config_items_filename) as file_stream:
return yaml.safe_load(file_stream.read())
except OSError as err:
logger.info(f"Error loading {self.config_items_filename}: {format(err)}")
return None
else:
return None
def _save_state(self) -> bool:
"""
Save dynamic commands, dynamic hooks and all timers to state file
:return:
"""
dynamic_commands: Dict[str, PluginCommand] = {}
dynamic_hooks: Dict[str, List[PluginHook]] = {}
command: PluginCommand
for name, command in self.get_commands().items():
if command.command_type == "dynamic":
dynamic_commands[name] = command
hooks_list: List[PluginHook]
for event_type, hooks_list in self.get_hooks().items():
hook: PluginHook
for hook in hooks_list:
if hook.hook_type == "dynamic":
if event_type in dynamic_hooks.keys():
dynamic_hooks.get(event_type).append(hook)
else:
dynamic_hooks[event_type] = [hook]
plugin_state: Tuple[Dict, Dict, List] = (dynamic_commands, dynamic_hooks, self.get_timers())
if plugin_state != ({}, {}, []):
# we have an actual state to save
try:
json_data = jsonpickle.encode(plugin_state)
                with open(self.plugin_state_filename, "w") as file:
                    file.write(json_data)
return True
except Exception as err:
logger.critical(f"Could not write plugin_state to {self.plugin_state_filename}: {err}")
return False
else:
# state is empty, remove file if it exists
if os.path.isfile(self.plugin_state_filename):
try:
remove(self.plugin_state_filename)
return True
except Exception as err:
logger.critical(f"Could not remove file {self.plugin_state_filename}: {err}")
return False
def _load_state(self):
"""
Load dynamic commands, dynamic hooks and all timers from state file
:return:
"""
try:
            with open(self.plugin_state_filename, "r") as file:
                json_data: str = file.read()
except Exception as err:
logger.debug(f"Could not load plugin_state from {self.plugin_state_filename}: {err}")
return
dynamic_commands: Dict[str, PluginCommand]
        dynamic_hooks: Dict[str, List[PluginHook]]
timers: List[Timer]
(dynamic_commands, dynamic_hooks, timers) = jsonpickle.decode(json_data)
# add dynamic commands
self.commands.update(dynamic_commands)
# add dynamic hooks
event: str
hooks_list: List[PluginHook]
for event, hooks_list in dynamic_hooks.items():
self.get_hooks()[event] += hooks_list
# add last execution for static timers and all dynamic timers
state_timer: Timer
static_timer: Timer
for state_timer in timers:
for static_timer in self.get_timers():
if static_timer.name == state_timer.name:
# update last execution from state, keep everything else
static_timer.last_execution = state_timer.last_execution
break
else:
# state_timer not found in static_timers, add if dynamic
if state_timer.timer_type == "dynamic":
self.timers.append(state_timer)
class PluginCommand:
def __init__(self, command: str, method: Callable, help_text: str, power_level, room_id: List[str], command_type: str = "static"):
"""
Initialise a PluginCommand
:param command: the actual command used to call the Command, e.g. "help"
:param method: the method that's being called by the command
:param help_text: the help_text displayed for the command
:param power_level: an optional required power_level to execute the command
:param room_id: an optional list of room_ids the command will be active on
:param command_type: the optional type of the command, currently "static" (default) or "dynamic"
"""
self.command: str = command
self.method: Callable = method
self.help_text: str = help_text
self.power_level: int = power_level
self.room_id: List[str] = room_id
self.command_type: str = command_type
class PluginHook:
    def __init__(self, event_type: str, method: Callable, room_id: Union[List[str], None] = None, event_ids: Union[List[str], None] = None, hook_type: str = "static"):
"""
Initialise a PluginHook
:param event_type: the event_type the hook is being executed for
:param method: the method that's being called when the hook is called
        :param room_id: an optional list of room_ids the hook will be active on
:param event_ids: optional list of event-ids, the hook is applicable for, currently only useful for "m.reaction"-hooks
:param hook_type: the optional type of the hook, currently "static" (default) or "dynamic"
"""
self.event_type: str = event_type
self.method: Callable = method
        self.room_id: Union[List[str], None] = room_id
        self.event_ids: Union[List[str], None] = event_ids
self.hook_type: str = hook_type
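# Illustrative only (not part of the API surface shown here): constructing the
# two record types directly. Real plugins presumably register these through
# helper methods on Plugin rather than instantiating them by hand.
#
#   cmd = PluginCommand("help", my_help_method, "Display help", power_level=0,
#                       room_id=[], command_type="static")
#   hook = PluginHook("m.reaction", my_reaction_method, event_ids=["$someevent"])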
| 41.698647 | 165 | 0.606855 | 4,268 | 33,901 | 4.672212 | 0.093955 | 0.017451 | 0.018254 | 0.014342 | 0.485582 | 0.401484 | 0.364876 | 0.316283 | 0.272303 | 0.252244 | 0 | 0.001805 | 0.313531 | 33,901 | 812 | 166 | 41.75 | 0.855062 | 0.158904 | 0 | 0.341646 | 0 | 0.037406 | 0.101838 | 0.027407 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049875 | false | 0 | 0.034913 | 0.004988 | 0.249377 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ca43fc59a33e1efc8189017af385a4128eb0dc | 1,158 | py | Python | Python Advanced/Advanced/Stacks, Queues, Tuples and Sets/Exercise/Task03.py | IvanTodorovBG/SoftUni | 7b667f6905d9f695ab1484efbb02b6715f6d569e | [
"MIT"
] | 1 | 2022-03-16T10:23:04.000Z | 2022-03-16T10:23:04.000Z | Python Advanced/Advanced/Stacks, Queues, Tuples and Sets/Exercise/Task03.py | IvanTodorovBG/SoftUni | 7b667f6905d9f695ab1484efbb02b6715f6d569e | [
"MIT"
] | null | null | null | Python Advanced/Advanced/Stacks, Queues, Tuples and Sets/Exercise/Task03.py | IvanTodorovBG/SoftUni | 7b667f6905d9f695ab1484efbb02b6715f6d569e | [
"MIT"
] | null | null | null | from collections import deque
chocolate = [int(num) for num in input().split(",")]
milk = deque([int(num) for num in input().split(",")])
milkshakes = 0
milkshakes_success = False
while chocolate and milk:
current_chocolate = chocolate[-1]
current_milk = milk[0]
if current_chocolate <= 0 and current_milk <= 0:
chocolate.pop()
milk.popleft()
continue
if current_chocolate <= 0:
chocolate.pop()
continue
elif current_milk <= 0:
milk.popleft()
continue
if current_chocolate == current_milk:
chocolate.pop()
milk.popleft()
milkshakes += 1
if milkshakes == 5:
milkshakes_success = True
break
else:
milk.append(milk.popleft())
        chocolate[-1] -= 5  # take 5 grams from the actual last chocolate, not a local copy
if milkshakes_success:
print("Great! You made all the chocolate milkshakes needed!")
else:
print("Not enough milkshakes.")
if chocolate:
print(f"Chocolate: {', '.join([str(x) for x in chocolate])}")
else:
print("Chocolate: empty")
if milk:
print(f"Milk: {', '.join([str(x) for x in milk])}")
else:
print("Milk: empty")
| 21.444444 | 65 | 0.606218 | 140 | 1,158 | 4.928571 | 0.314286 | 0.115942 | 0.078261 | 0.034783 | 0.217391 | 0.217391 | 0.069565 | 0 | 0 | 0 | 0 | 0.011792 | 0.267703 | 1,158 | 53 | 66 | 21.849057 | 0.801887 | 0 | 0 | 0.325 | 0 | 0 | 0.168394 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.025 | 0 | 0.025 | 0.15 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ca5fe97f243d732791a64c6369d1b4bbdb8a45 | 579 | py | Python | architectures/DumbFeat.py | xcmax/FewShotWithoutForgetting | 214286c8ab2a4c2d30696eab6fcf67823db7e17f | [
"MIT"
] | 497 | 2018-04-26T09:27:55.000Z | 2022-03-29T02:58:16.000Z | architectures/DumbFeat.py | xcmax/FewShotWithoutForgetting | 214286c8ab2a4c2d30696eab6fcf67823db7e17f | [
"MIT"
] | 30 | 2018-05-22T11:16:45.000Z | 2021-11-23T08:34:46.000Z | architectures/DumbFeat.py | xcmax/FewShotWithoutForgetting | 214286c8ab2a4c2d30696eab6fcf67823db7e17f | [
"MIT"
] | 111 | 2018-04-27T20:37:26.000Z | 2022-03-09T00:37:18.000Z | import torch
import torch.nn as nn
import math
class DumbFeat(nn.Module):
    def __init__(self, opt):
        super(DumbFeat, self).__init__()
        dropout = opt['dropout'] if ('dropout' in opt) else 0.0
        self.dropout = (
            torch.nn.Dropout(p=dropout, inplace=False) if dropout > 0.0
            else None)

    def forward(self, x):
        # Flatten everything but the batch dimension.
        if x.dim() > 2:
            x = x.view(x.size(0), -1)
        assert x.dim() == 2
        if self.dropout is not None:
            x = self.dropout(x)
        return x
def create_model(opt):
return DumbFeat(opt)
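# A minimal usage sketch (assumes torch is available; shapes are illustrative):
#
#   model = create_model({'dropout': 0.5})
#   x = torch.randn(8, 3, 32, 32)
#   feats = model(x)   # -> shape (8, 3072); inputs are flattened per sample
#
# DumbFeat is effectively an identity feature extractor: it only flattens and
# (optionally) applies dropout.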
| 23.16 | 71 | 0.56304 | 84 | 579 | 3.77381 | 0.416667 | 0.104101 | 0.031546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019901 | 0.305699 | 579 | 24 | 72 | 24.125 | 0.768657 | 0 | 0 | 0 | 0 | 0 | 0.02418 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 1 | 0.157895 | false | 0 | 0.157895 | 0.052632 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52cba731160c708dd2290dc3e8a551a6a6439b1a | 14,362 | py | Python | odoo-14.0/addons/l10n_it_edi/models/account_invoice.py | Yomy1996/P1 | 59e24cdd5f7f82005fe15bd7a7ff54dd5364dd29 | [
"CC-BY-3.0"
] | null | null | null | odoo-14.0/addons/l10n_it_edi/models/account_invoice.py | Yomy1996/P1 | 59e24cdd5f7f82005fe15bd7a7ff54dd5364dd29 | [
"CC-BY-3.0"
] | null | null | null | odoo-14.0/addons/l10n_it_edi/models/account_invoice.py | Yomy1996/P1 | 59e24cdd5f7f82005fe15bd7a7ff54dd5364dd29 | [
"CC-BY-3.0"
] | null | null | null | # -*- coding:utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
import base64
import zipfile
import io
import logging
import re
from datetime import date, datetime
from lxml import etree
from odoo import api, fields, models, _
from odoo.tools import float_repr
from odoo.exceptions import UserError, ValidationError
from odoo.addons.base.models.ir_mail_server import MailDeliveryException
from odoo.tests.common import Form
_logger = logging.getLogger(__name__)
DEFAULT_FACTUR_ITALIAN_DATE_FORMAT = '%Y-%m-%d'
class AccountMove(models.Model):
_inherit = 'account.move'
l10n_it_send_state = fields.Selection([
('new', 'New'),
('other', 'Other'),
('to_send', 'Not yet send'),
('sent', 'Sent, waiting for response'),
('invalid', 'Sent, but invalid'),
('delivered', 'This invoice is delivered'),
('delivered_accepted', 'This invoice is delivered and accepted by destinatory'),
('delivered_refused', 'This invoice is delivered and refused by destinatory'),
('delivered_expired', 'This invoice is delivered and expired (expiry of the maximum term for communication of acceptance/refusal)'),
('failed_delivery', 'Delivery impossible, ES certify that it has received the invoice and that the file \
could not be delivered to the addressee') # ok we must do nothing
], default='to_send', copy=False)
l10n_it_stamp_duty = fields.Float(default=0, string="Dati Bollo", readonly=True, states={'draft': [('readonly', False)]})
l10n_it_ddt_id = fields.Many2one('l10n_it.ddt', string='DDT', readonly=True, states={'draft': [('readonly', False)]}, copy=False)
l10n_it_einvoice_name = fields.Char(compute='_compute_l10n_it_einvoice')
l10n_it_einvoice_id = fields.Many2one('ir.attachment', string="Electronic invoice", compute='_compute_l10n_it_einvoice')
@api.depends('edi_document_ids', 'edi_document_ids.attachment_id')
def _compute_l10n_it_einvoice(self):
fattura_pa = self.env.ref('l10n_it_edi.edi_fatturaPA')
for invoice in self:
einvoice = invoice.edi_document_ids.filtered(lambda d: d.edi_format_id == fattura_pa)
invoice.l10n_it_einvoice_id = einvoice.attachment_id
invoice.l10n_it_einvoice_name = einvoice.attachment_id.name
def _check_before_xml_exporting(self):
# DEPRECATED use AccountEdiFormat._l10n_it_edi_check_invoice_configuration instead
errors = self.env['account.edi.format']._l10n_it_edi_check_invoice_configuration(self)
if errors:
raise self.env['account.edi.format']._format_error_message(_("Invalid configuration:"), errors)
def invoice_generate_xml(self):
self.ensure_one()
        report_name = self.env['account.edi.format']._l10n_it_edi_generate_electronic_invoice_filename(self)  # model names use dots in the registry
data = b"<?xml version='1.0' encoding='UTF-8'?>" + self._export_as_xml()
description = _('Italian invoice: %s', self.move_type)
attachment = self.env['ir.attachment'].create({
'name': report_name,
'res_id': self.id,
'res_model': self._name,
'datas': base64.encodebytes(data),
'description': description,
'type': 'binary',
})
self.message_post(
body=(_("E-Invoice is generated on %s by %s") % (fields.Datetime.now(), self.env.user.display_name))
)
return {'attachment': attachment}
def _prepare_fatturapa_export_values(self):
self.ensure_one()
def format_date(dt):
# Format the date in the italian standard.
dt = dt or datetime.now()
return dt.strftime(DEFAULT_FACTUR_ITALIAN_DATE_FORMAT)
def format_monetary(number, currency):
# Format the monetary values to avoid trailing decimals (e.g. 90.85000000000001).
return float_repr(number, min(2, currency.decimal_places))
def format_numbers(number):
            # format number to str with between 2 and 8 decimals (even if it's .00)
number_splited = str(number).split('.')
if len(number_splited) == 1:
return "%.02f" % number
cents = number_splited[1]
if len(cents) > 8:
return "%.08f" % number
return float_repr(number, max(2, len(cents)))
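        # Illustrative behaviour of format_numbers (values checked by hand):
        #   format_numbers(12)          -> "12.00"      (no decimal part: pad to 2)
        #   format_numbers(1.5)         -> "1.50"       (at least 2 decimals)
        #   format_numbers(0.123456789) -> "0.12345679" (capped at 8 decimals)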
def format_numbers_two(number):
            # format number to str with exactly 2 decimals (even if it's .00)
return "%.02f" % number
def discount_type(discount):
return 'SC' if discount > 0 else 'MG'
def format_phone(number):
if not number:
return False
number = number.replace(' ', '').replace('/', '').replace('.', '')
if len(number) > 4 and len(number) < 13:
return number
return False
def get_vat_number(vat):
return vat[2:].replace(' ', '')
def get_vat_country(vat):
return vat[:2].upper()
def in_eu(partner):
europe = self.env.ref('base.europe', raise_if_not_found=False)
country = partner.country_id
if not europe or not country or country in europe.country_ids:
return True
return False
formato_trasmissione = "FPR12"
if len(self.commercial_partner_id.l10n_it_pa_index or '1') == 6:
formato_trasmissione = "FPA12"
if self.move_type == 'out_invoice':
document_type = 'TD01'
elif self.move_type == 'out_refund':
document_type = 'TD04'
else:
document_type = 'TD0X'
pdf = self.env.ref('account.account_invoices')._render_qweb_pdf(self.id)[0]
pdf = base64.b64encode(pdf)
pdf_name = re.sub(r'\W+', '', self.name) + '.pdf'
# tax map for 0% taxes which have no tax_line_id
tax_map = dict()
for line in self.line_ids:
for tax in line.tax_ids:
if tax.amount == 0.0:
tax_map[tax] = tax_map.get(tax, 0.0) + line.price_subtotal
# Create file content.
template_values = {
'record': self,
'format_date': format_date,
'format_monetary': format_monetary,
'format_numbers': format_numbers,
'format_numbers_two': format_numbers_two,
'format_phone': format_phone,
'discount_type': discount_type,
'get_vat_number': get_vat_number,
'get_vat_country': get_vat_country,
'in_eu': in_eu,
'abs': abs,
'formato_trasmissione': formato_trasmissione,
'document_type': document_type,
'pdf': pdf,
'pdf_name': pdf_name,
'tax_map': tax_map,
}
return template_values
def _export_as_xml(self):
        '''DEPRECATED: this will be moved to AccountEdiFormat in a future version.
Create the xml file content.
:return: The XML content as str.
'''
template_values = self._prepare_fatturapa_export_values()
content = self.env.ref('l10n_it_edi.account_invoice_it_FatturaPA_export')._render(template_values)
return content
def _post(self, soft=True):
# OVERRIDE
posted = super()._post(soft=soft)
for move in posted.filtered(lambda m: m.l10n_it_send_state == 'to_send' and m.move_type == 'out_invoice' and m.company_id.country_id.code == 'IT'):
move.send_pec_mail()
return posted
def send_pec_mail(self):
self.ensure_one()
allowed_state = ['to_send', 'invalid']
if (
not self.company_id.l10n_it_mail_pec_server_id
or not self.company_id.l10n_it_mail_pec_server_id.active
or not self.company_id.l10n_it_address_send_fatturapa
):
self.message_post(
body=(_("Error when sending mail with E-Invoice: Your company must have a mail PEC server and must indicate the mail PEC that will send electronic invoice."))
)
self.l10n_it_send_state = 'invalid'
return
if self.l10n_it_send_state not in allowed_state:
raise UserError(_("%s isn't in a right state. It must be in a 'Not yet send' or 'Invalid' state.") % (self.display_name))
message = self.env['mail.message'].create({
'subject': _('Sending file: %s') % (self.l10n_it_einvoice_name),
'body': _('Sending file: %s to ES: %s') % (self.l10n_it_einvoice_name, self.env.company.l10n_it_address_recipient_fatturapa),
'author_id': self.env.user.partner_id.id,
'email_from': self.env.company.l10n_it_address_send_fatturapa,
'reply_to': self.env.company.l10n_it_address_send_fatturapa,
'mail_server_id': self.env.company.l10n_it_mail_pec_server_id.id,
'attachment_ids': [(6, 0, self.l10n_it_einvoice_id.ids)],
})
mail_fattura = self.env['mail.mail'].sudo().with_context(wo_bounce_return_path=True).create({
'mail_message_id': message.id,
'email_to': self.env.company.l10n_it_address_recipient_fatturapa,
})
try:
mail_fattura.send(raise_exception=True)
self.message_post(
body=(_("Mail sent on %s by %s") % (fields.Datetime.now(), self.env.user.display_name))
)
self.l10n_it_send_state = 'sent'
except MailDeliveryException as error:
self.message_post(
body=(_("Error when sending mail with E-Invoice: %s") % (error.args[0]))
)
self.l10n_it_send_state = 'invalid'
def _compose_info_message(self, tree, element_tags):
output_str = ""
elements = tree.xpath(element_tags)
for element in elements:
output_str += "<ul>"
for line in element.iter():
if line.text:
text = " ".join(line.text.split())
if text:
output_str += "<li>%s: %s</li>" % (line.tag, text)
output_str += "</ul>"
return output_str
def _compose_multi_info_message(self, tree, element_tags):
output_str = "<ul>"
for element_tag in element_tags:
elements = tree.xpath(element_tag)
if not elements:
continue
for element in elements:
text = " ".join(element.text.split())
if text:
output_str += "<li>%s: %s</li>" % (element.tag, text)
return output_str + "</ul>"
class AccountTax(models.Model):
_name = "account.tax"
_inherit = "account.tax"
l10n_it_vat_due_date = fields.Selection([
("I", "[I] IVA ad esigibilità immediata"),
("D", "[D] IVA ad esigibilità differita"),
("S", "[S] Scissione dei pagamenti")], default="I", string="VAT due date")
l10n_it_has_exoneration = fields.Boolean(string="Has exoneration of tax (Italy)", help="Tax has a tax exoneration.")
l10n_it_kind_exoneration = fields.Selection(selection=[
("N1", "[N1] Escluse ex art. 15"),
("N2", "[N2] Non soggette"),
("N2.1", "[N2.1] Non soggette ad IVA ai sensi degli artt. Da 7 a 7-septies del DPR 633/72"),
("N2.2", "[N2.2] Non soggette – altri casi"),
("N3", "[N3] Non imponibili"),
("N3.1", "[N3.1] Non imponibili – esportazioni"),
("N3.2", "[N3.2] Non imponibili – cessioni intracomunitarie"),
("N3.3", "[N3.3] Non imponibili – cessioni verso San Marino"),
("N3.4", "[N3.4] Non imponibili – operazioni assimilate alle cessioni all’esportazione"),
("N3.5", "[N3.5] Non imponibili – a seguito di dichiarazioni d’intento"),
("N3.6", "[N3.6] Non imponibili – altre operazioni che non concorrono alla formazione del plafond"),
("N4", "[N4] Esenti"),
("N5", "[N5] Regime del margine / IVA non esposta in fattura"),
("N6", "[N6] Inversione contabile (per le operazioni in reverse charge ovvero nei casi di autofatturazione per acquisti extra UE di servizi ovvero per importazioni di beni nei soli casi previsti)"),
("N6.1", "[N6.1] Inversione contabile – cessione di rottami e altri materiali di recupero"),
("N6.2", "[N6.2] Inversione contabile – cessione di oro e argento puro"),
("N6.3", "[N6.3] Inversione contabile – subappalto nel settore edile"),
("N6.4", "[N6.4] Inversione contabile – cessione di fabbricati"),
("N6.5", "[N6.5] Inversione contabile – cessione di telefoni cellulari"),
("N6.6", "[N6.6] Inversione contabile – cessione di prodotti elettronici"),
("N6.7", "[N6.7] Inversione contabile – prestazioni comparto edile esettori connessi"),
("N6.8", "[N6.8] Inversione contabile – operazioni settore energetico"),
("N6.9", "[N6.9] Inversione contabile – altri casi"),
("N7", "[N7] IVA assolta in altro stato UE (vendite a distanza ex art. 40 c. 3 e 4 e art. 41 c. 1 lett. b, DL 331/93; prestazione di servizi di telecomunicazioni, tele-radiodiffusione ed elettronici ex art. 7-sexies lett. f, g, art. 74-sexies DPR 633/72)")],
string="Exoneration",
help="Exoneration type",
default="N1")
l10n_it_law_reference = fields.Char(string="Law Reference", size=100)
@api.constrains('l10n_it_has_exoneration',
'l10n_it_kind_exoneration',
'l10n_it_law_reference',
'amount',
'l10n_it_vat_due_date')
def _check_exoneration_with_no_tax(self):
for tax in self:
if tax.l10n_it_has_exoneration:
if not tax.l10n_it_kind_exoneration or not tax.l10n_it_law_reference or tax.amount != 0:
raise ValidationError(_("If the tax has exoneration, you must enter a kind of exoneration, a law reference and the amount of the tax must be 0.0."))
if tax.l10n_it_kind_exoneration == 'N6' and tax.l10n_it_vat_due_date == 'S':
raise UserError(_("'Scissione dei pagamenti' is not compatible with exoneration of kind 'N6'"))
| 45.163522 | 271 | 0.612658 | 1,835 | 14,362 | 4.585286 | 0.249591 | 0.032802 | 0.016639 | 0.010696 | 0.198003 | 0.124554 | 0.079391 | 0.075826 | 0.038745 | 0.038745 | 0 | 0.027483 | 0.272873 | 14,362 | 317 | 272 | 45.305994 | 0.776693 | 0.045049 | 0 | 0.083004 | 0 | 0.023715 | 0.285652 | 0.017844 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075099 | false | 0 | 0.051383 | 0.019763 | 0.264822 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52cc16b8dc6ab04237a2ffdd63437a1cda8a60d7 | 5,628 | py | Python | cbcli_logic.py | ZwodahS/CBCLI | bca3c6f942a852adff132ed9d3004654ae95161c | [
"WTFPL"
] | 1 | 2015-09-23T19:27:37.000Z | 2015-09-23T19:27:37.000Z | cbcli_logic.py | ZwodahS/CBCLI | bca3c6f942a852adff132ed9d3004654ae95161c | [
"WTFPL"
] | null | null | null | cbcli_logic.py | ZwodahS/CBCLI | bca3c6f942a852adff132ed9d3004654ae95161c | [
"WTFPL"
] | null | null | null | #!/usr/bin/env python
# every node in roots will have a key. meta_info and synced is not part of the bookmarks
# every node after that , the node are stored like a tree, with their key being "childrens".
# each node will have
# .1 date_added
# .2 date_modified
# .3 id ?? probably the id that is registered.
# .4 meta_info ?? not sure what this is for
# .5 type - I only saw 2 value so far, "url" and "folder"
# .6 name - The name of this bookmark/folder <-- possibly the one we want to search on
# .7 url - url to open.
import re
import os
root = os.path.dirname(os.path.abspath(__file__))
# Load the cache and return the list of urls from the last search.
# In the future I want to keep the url, name and other information as well,
# to allow displaying the last search results, but for now this will do.
def loadCache() :
actualpath = os.path.join(root, "cached")
if os.path.isfile(actualpath) :
fin = open(actualpath, "r")
l = fin.readlines()
final = []
for item in l :
final.append(item.strip())
fin.close()
return final
else :
return []
# cache the result and write it to the cached file
# the cached file is used when opening a link
def cache(result):
actualpath = os.path.join(root, "cached")
if os.path.isfile(actualpath):
os.remove(actualpath)
fout = open(actualpath, "w")
for value in result:
fout.write(value["url"] + "\n")
fout.close()
# this will open with default browser
# this is the mac function to open a link
# create one for each OS that is needed
def macopen(link, background):
cmd = "open " + link + " "
if background :
cmd += "-g"
os.system(cmd)
# public call to open a link
# switch to different os if necessary.
def openlink(link, background=False) :
# if you want to extend to linux or other os, you can check it here.
macopen(link, background)
##### all the match functions should be put into this area #####
# generic match for regex
def match(data, searchstring):
return re.search(searchstring, data["name"]) != None
# regex match but convert all to lower case first.
def matchignorecase(data, searchstring):
return re.search(searchstring.lower(), data["name"].lower()) != None
#################################################################
# filter into dictionary instead of list
def tag_filterdict(data, taglist) :
newdata = {}
for d in data :
tokens = d["name"].split()
for t in taglist :
if "#"+t in tokens :
newdata[d["id"]] = d;
break
return newdata
# tag list is always a AND operator now
def tag_filter(data, taglist) :
if len(taglist) == 0 :
return data
newdata = []
for d in data :
tokens = d["name"].split()
for t in taglist :
if "#"+t in tokens :
newdata.append(d);
break
return newdata
# This is for the string search option.
def stringsearch(data, searchstring, taglist = []):
return search(tag_filter(data, taglist), searchstring, matchignorecase)
# the generic search function.
# pass in the match function to use.
def search(data, searchstring, matchfunction=match):
l = {}
for d in data:
if matchfunction(d, searchstring) :
l[d["id"]] = d
return l
def findtags(data) :
hashtags = {}
for d in data :
tokens = d["name"].split()
for token in tokens :
if token[0] == "#" :
if token[1:] in hashtags :
hashtags[token[1:]] += 1
else :
hashtags[token[1:]] = 1
return hashtags
def has_folders(data, folders) :
if "parent" in data :
for f in folders :
if f in data["parent"] :
return True
return False
################ JSON parsing code ##############################
# parent is a list, such that '/'.join(parent) gives use the absolute path to the child.
def createChild(data, parent):
childData = {}
childData["name"] = data["name"]
childData["url"] = data["url"]
childData["id"] = data["id"]
childData["parent"] = parent
return childData
# this parse should return each object individually as a dict
# with an additional attribute , the abs path to the bookmark.
def parseRecursive(data, parent) :
    # if we got here, this node is a folder
name = data['name']
current = list(parent)
current.append(name)
output = []
for child in data["children"] :
if("children" in child) : # if the child has a children, means it is a folder
l = parseRecursive(child, current)
output += l
else :
output.append(createChild(child, current))
return output
# parse the whole json file and return a list of url objects.
# This is the main parse function
# return a triplet (data, checksum, version)
def parse(data) :
checksum = data["checksum"]
version = data["version"]
roots = data["roots"]
l = []
for (k, v) in roots.items() :
# there are 2 values we don't want ( as of [ September 20, 2013 ] )
# .1 meta_info
# .2 synced
# .3 sync_transaction_version
# in additional to that, in case new values are added, I will also make sure that it is a dictionary before parsing recursively
if(k != "meta_info" and k != "synced" and k != "sync_transaction_version" and isinstance(v, dict)) :
path = ["root", k]
l = l + parseRecursive(v, path)
return (l, checksum, version)
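# Worked example (hedged; a heavily trimmed Bookmarks file): given
#
#   data = {"checksum": "abc", "version": 1, "roots": {"bookmark_bar": {
#       "name": "Bookmarks Bar",
#       "children": [{"name": "news #daily", "url": "http://example.com", "id": "1"}]}}}
#
# parse(data) returns
#
#   ([{"name": "news #daily", "url": "http://example.com", "id": "1",
#      "parent": ["root", "bookmark_bar", "Bookmarks Bar"]}], "abc", 1)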
| 32.72093 | 135 | 0.604833 | 778 | 5,628 | 4.352185 | 0.312339 | 0.012404 | 0.007088 | 0.011813 | 0.103071 | 0.103071 | 0.078263 | 0.078263 | 0.078263 | 0.069699 | 0 | 0.006365 | 0.274165 | 5,628 | 171 | 136 | 32.912281 | 0.822521 | 0.365672 | 0 | 0.203704 | 0 | 0 | 0.050808 | 0.007048 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.018519 | 0.027778 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ccc4bc545fdc7871f6f21f441838ac4f5a47ba | 3,389 | py | Python | src/sales/forms.py | vladimirtkach/yesjob | 83800f4d29bf2dab30b14fc219d3150e3bc51e15 | [
"MIT"
] | null | null | null | src/sales/forms.py | vladimirtkach/yesjob | 83800f4d29bf2dab30b14fc219d3150e3bc51e15 | [
"MIT"
] | 18 | 2020-02-12T00:41:40.000Z | 2022-02-10T12:00:03.000Z | src/sales/forms.py | vladimirtkach/yesjob | 83800f4d29bf2dab30b14fc219d3150e3bc51e15 | [
"MIT"
] | null | null | null | from authtools.models import User
from django import forms
from .services import normalize_phone
from .models import *
from datetime import date
from datetime import timedelta
class ContactForm(forms.ModelForm):
class Meta:
model = Contact
exclude = ['agent', 'in_sales', 'is_client', 'created_at', 'updated_at', 'last_contact_date', 'next_contact_date', 'color', 'cv_url', 'cv_title']
widgets = {
'comment': forms.Textarea(attrs={'cols': 45, 'rows': 6}),
}
def save(self, user, commit=True):
inst = super(ContactForm, self).save(commit=False)
if commit:
inst.agent = user
inst.save()
self.save_m2m()
return inst
def clean_phone_main(self):
phone = self.cleaned_data['phone_main']
phone = normalize_phone(phone)
if phone is None:
            raise forms.ValidationError("The number %s has an invalid format; enter it in the format 380XX XXX XX XX" % self.cleaned_data['phone_main'])
return phone
class AgentForm(forms.Form):
agent = forms.ChoiceField(choices=[])
def __init__(self, *args, **kwargs):
super(AgentForm, self).__init__(*args, **kwargs)
choices = [(i.pk, i.name) for i in
(User.objects.filter(groups__name='Agent') | User.objects.filter(groups__name='SuperAgent'))]
choices.append((1, "Admin"))
self.fields['agent'].choices = choices
class AgentStats(AgentForm):
    start_date = forms.DateField(label="Period start", widget=forms.TextInput(attrs={'autocomplete': 'off'}),
                                 initial=date.today() - timedelta(days=3))
    end_date = forms.DateField(label="Period end", widget=forms.TextInput(attrs={'autocomplete': 'off'}),
                               initial=date.today() + timedelta(days=1))
agent = forms.ChoiceField(choices=[])
def __init__(self, *args, **kwargs):
super(AgentForm, self).__init__(*args, **kwargs)
choices = [(i.pk, i.name) for i in
(User.objects.filter(groups__name='Agent') | User.objects.filter(groups__name='SuperAgent'))]
choices.insert(0, ("all", "Все"))
self.fields['agent'].choices = choices
class ContactSourceForm(forms.ModelForm):
class Meta:
        model = ContactSource
        fields = '__all__'
class InteractionForm(forms.ModelForm):
    date = forms.DateTimeField(label="Next contact date", widget=forms.TextInput(attrs={'autocomplete': 'off'}))
class Meta:
model = Interaction
exclude = ["agent", "interaction_date", "contact"]
widgets = {
'result': forms.Textarea(attrs={'cols': 35, 'rows': 6}),
}
labels = {
"result": "Результат",
"type": "Тип",
}
def save(self, agent, contact, commit=True):
inst = super(InteractionForm, self).save(commit=False)
inst.agent = agent
inst.contact = contact
if commit:
inst.save()
self.save_m2m()
return inst
# def __init__(self, data, choices=None, *args, **kwargs):
# super(InteractionForm, self).__init__(data, *args, **kwargs)
# self.fields['sale_position'] = forms.ChoiceField(choices=choices)
class SkillProfileForm(forms.ModelForm):
class Meta:
model = SkillProfile
fields = '__all__' | 36.44086 | 153 | 0.61375 | 376 | 3,389 | 5.361702 | 0.343085 | 0.029762 | 0.027778 | 0.045635 | 0.402778 | 0.337302 | 0.28373 | 0.28373 | 0.251984 | 0.251984 | 0 | 0.005903 | 0.250221 | 3,389 | 93 | 154 | 36.44086 | 0.787485 | 0.056359 | 0 | 0.378378 | 0 | 0 | 0.130829 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.081081 | 0 | 0.391892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52cde76ff7c128fffc3d5c9222878e785b9eb387 | 2,465 | py | Python | backend-project/small_eod/notifications/views.py | merito/small_eod | ab19b82f374cd7c4b21d8f9412657dbe7f7f03e2 | [
"MIT"
] | 64 | 2019-12-30T11:24:03.000Z | 2021-06-24T01:04:56.000Z | backend-project/small_eod/notifications/views.py | merito/small_eod | ab19b82f374cd7c4b21d8f9412657dbe7f7f03e2 | [
"MIT"
] | 465 | 2018-06-13T21:43:43.000Z | 2022-01-04T23:33:56.000Z | backend-project/small_eod/notifications/views.py | merito/small_eod | ab19b82f374cd7c4b21d8f9412657dbe7f7f03e2 | [
"MIT"
] | 72 | 2018-12-02T19:47:03.000Z | 2022-01-04T22:54:49.000Z | from django.forms.models import model_to_dict
from rest_framework.views import APIView
class NotificationsView(APIView):
notified_users_field = None
notification_diff_ignored_fields = []
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._http_action = None
self._init_instance = None
def send_notifications(self, request, **kwargs):
self.kwargs["pk"] = self.kwargs.get("pk", None) or kwargs["data"].get(
"id", None
)
instance = (
self.get_object()
if self._http_action in ["post", "put", "patch"]
else self._init_instance
)
if self._http_action == "put" and not self.has_instance_changed(
self._init_instance, instance
):
return
        notified_users = self.get_notified_users(instance)
kwargs["source"] = self.basename
kwargs["action"] = self.action
kwargs["instance"] = instance
kwargs["request"] = request
for user in notified_users:
user.notify(**kwargs)
def has_instance_changed(self, init_instance, instance):
d1 = model_to_dict(init_instance)
d2 = model_to_dict(instance)
for field in self.notification_diff_ignored_fields:
d1.pop(field, None)
d2.pop(field, None)
return d1 != d2
    def get_notified_users(self, instance):
if not self.notified_users_field:
raise TypeError(
"{} is missing a `notified_users_field` attribute.".format(
self.__class__.__name__
)
)
for attr in self.notified_users_field.split("."):
instance = getattr(instance, attr, None)
if not instance:
return []
return instance.all()
def initial(self, request, *args, **kwargs):
self._http_action = next(
k for k, v in self.action_map.items() if v == self.action
)
if self._http_action in ["put", "delete", "patch"]:
self._init_instance = self.get_object()
return super().initial(request, *args, **kwargs)
def dispatch(self, request, *args, **kwargs):
response = super().dispatch(request, *args, **kwargs)
if self._http_action in ["delete", "post", "put", "patch"]:
self.send_notifications(request=request, data=response.data)
return response
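# A hypothetical subclass sketch (names are illustrative, not taken from this
# project). Note that NotificationsView relies on ViewSet attributes such as
# `action`, `action_map`, `basename` and `get_object`, so it is meant to be
# mixed into a DRF ViewSet:
#
#   class CaseViewSet(NotificationsView, viewsets.ModelViewSet):
#       notified_users_field = "audited_institution.users"
#       notification_diff_ignored_fields = ["modified_on"]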
| 31.602564 | 78 | 0.59432 | 280 | 2,465 | 4.967857 | 0.289286 | 0.056075 | 0.060388 | 0.04601 | 0.133717 | 0.060388 | 0.060388 | 0 | 0 | 0 | 0 | 0.00345 | 0.294523 | 2,465 | 77 | 79 | 32.012987 | 0.796435 | 0 | 0 | 0 | 0 | 0 | 0.054361 | 0.008925 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.033333 | 0 | 0.283333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ce9668e4cbf74bf1d1f16d04552686c73f00c0 | 809 | py | Python | tests/python/extensions/pybind/focus/presum.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | null | null | null | tests/python/extensions/pybind/focus/presum.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | 1 | 2021-12-23T00:00:31.000Z | 2021-12-23T00:00:31.000Z | tests/python/extensions/pybind/focus/presum.py | isce3-testing/isce3-circleci-poc | ec1dfb6019bcdc7afb7beee7be0fa0ce3f3b87b3 | [
"Apache-2.0"
] | 1 | 2021-12-02T21:10:11.000Z | 2021-12-02T21:10:11.000Z | #!/usr/bin/env python3
import numpy as np
import isce3.ext.isce3 as isce3
def test_presum_weights():
n = 31
rng = np.random.default_rng(12345)
t = np.linspace(0, 9, n) + rng.normal(0.1, size=n)
t.sort()
tout = 4.5
L = 1.0
acor = isce3.core.AzimuthKernel(L)
offset, w = isce3.focus.get_presum_weights(acor, t, tout)
i = slice(offset, offset + len(w))
assert all(abs(t[i] - tout) <= L)
def test_fill_weights():
lut = {
123: np.array([1, 2, 3.]),
456: np.array([4, 5, 6.]),
789: np.array([7, 8, 9.]),
}
nr = 1000
ids = np.random.choice(list(lut.keys()), nr)
weights = isce3.focus.fill_weights(ids, lut)
i = 0
# This function just accelerates this particular dict lookup.
assert all(weights[:, i] == lut[ids[i]])
| 26.096774 | 65 | 0.587145 | 131 | 809 | 3.564886 | 0.526718 | 0.044968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073892 | 0.247219 | 809 | 30 | 66 | 26.966667 | 0.692939 | 0.100124 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52d07ff1396b8fa69b6323cb66add3d5beec13f0 | 1,849 | py | Python | tests/conftest.py | cffbots/ewatercycle | 29571aace32fcea8f70948259e33a62c9c834808 | [
"Apache-2.0"
] | 18 | 2021-03-25T08:25:32.000Z | 2022-03-25T09:23:09.000Z | tests/conftest.py | cffbots/ewatercycle | 29571aace32fcea8f70948259e33a62c9c834808 | [
"Apache-2.0"
] | 323 | 2016-08-11T12:13:58.000Z | 2022-03-30T11:29:04.000Z | tests/conftest.py | cffbots/ewatercycle | 29571aace32fcea8f70948259e33a62c9c834808 | [
"Apache-2.0"
] | 4 | 2018-06-27T11:47:23.000Z | 2022-02-02T14:14:13.000Z | from collections import OrderedDict
from pathlib import Path
import pytest
from ewatercycle.parametersetdb import build_from_urls
@pytest.fixture
def yaml_config_url():
return "data:text/plain,data: data/PEQ_Hupsel.dat\nparameters:\n cW: 200\n cV: 4\n cG: 5.0e+6\n cQ: 10\n cS: 4\n dG0: 1250\n cD: 1500\n aS: 0.01\n st: loamy_sand\nstart: 367416 # 2011120000\nend: 368904 # 2012020000\nstep: 1\n" # noqa: E501
@pytest.fixture
def yaml_config():
return OrderedDict(
[
("data", "data/PEQ_Hupsel.dat"),
(
"parameters",
OrderedDict(
[
("cW", 200),
("cV", 4),
("cG", 5000000.0),
("cQ", 10),
("cS", 4),
("dG0", 1250),
("cD", 1500),
("aS", 0.01),
("st", "loamy_sand"),
]
),
),
("start", 367416),
("end", 368904),
("step", 1),
]
)
@pytest.fixture
def sample_parameterset(yaml_config_url):
return build_from_urls(
config_format="yaml",
config_url=yaml_config_url,
datafiles_format="svn",
datafiles_url="http://example.com",
)
@pytest.fixture
def sample_shape():
return str(
Path(__file__).parents[1] / "docs" / "examples" / "data" / "Rhine" / "Rhine.shp"
)
@pytest.fixture
def sample_marrmot_forcing_file():
# Downloaded from
# https://github.com/wknoben/MARRMoT/blob/master/BMI/Config/BMI_testcase_m01_BuffaloRiver_TN_USA.mat
return str(
Path(__file__).parent
/ "models"
/ "data"
/ "BMI_testcase_m01_BuffaloRiver_TN_USA.mat"
)
| 26.797101 | 254 | 0.510546 | 200 | 1,849 | 4.51 | 0.47 | 0.072062 | 0.088692 | 0.073171 | 0.177384 | 0.075388 | 0.075388 | 0 | 0 | 0 | 0 | 0.08692 | 0.359113 | 1,849 | 68 | 255 | 27.191176 | 0.674262 | 0.067604 | 0 | 0.166667 | 0 | 0.018519 | 0.236047 | 0.055233 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092593 | false | 0 | 0.074074 | 0.092593 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52d3e7fe051106625a10784cf84618101a2a5507 | 2,701 | py | Python | unit_07/02/product_controller.py | janusnic/21v-pyqt | 8ee3828e1c6e6259367d6cedbd63b9057cf52c24 | [
"MIT"
] | null | null | null | unit_07/02/product_controller.py | janusnic/21v-pyqt | 8ee3828e1c6e6259367d6cedbd63b9057cf52c24 | [
"MIT"
] | null | null | null | unit_07/02/product_controller.py | janusnic/21v-pyqt | 8ee3828e1c6e6259367d6cedbd63b9057cf52c24 | [
"MIT"
] | 2 | 2019-11-14T15:04:22.000Z | 2021-10-31T07:34:46.000Z | from controller_class import *
class ProductController(ShopController):
"""creates a controller to add/delete/amend product records in the
myshop database"""
def __init__(self):
super(ProductController,self).__init__()
def add_product(self,name,price,product_type):
sql = """insert into Product (Name, ProductTypeID, Price)
values (?,?,?)"""
self.query(sql,(name,product_type,price))
def delete_product(self,product_id):
sql = "delete from Product where ProductID = ?"
self.query(sql,(product_id,))
def product_details(self,product_id=None,name=None):
sql = None
data = []
if product_id != None:
sql = "select * from Product where ProductID = ?"
data = (product_id,)
elif name != None:
sql = "select * from Product where Name = ?"
data = (name,)
else:
sql = "select * from Product"
return self.select_query(sql,data)
def amend_product(self,product_id,name=None,product_type_id=None,price=None):
updates = {}
data = []
if name != None:
updates["Name"] = name
if product_type_id != None:
updates["ProductTypeID"] = product_type_id
if price != None:
updates["Price"] = price
sql = "update Product set "
for key, value in updates.items():
sql += "{0}=?, ".format(key)
data.append(value)
sql = sql[:-2]
sql += " where ProductID = ?"
data.append(product_id)
self.query(sql,data)
def product_headings(self):
sql = "PRAGMA table_info(Product)"
return self.select_query(sql)
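    # Example usage sketch (assumes the ShopController base wires up a sqlite
    # connection exposing query()/select_query(); values are illustrative):
    #
    #   pc = ProductController()
    #   pc.add_product("Widget", 9.99, product_type=1)
    #   rows = pc.product_details(name="Widget")
    #   pc.amend_product(rows[0][0], price=10.99)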
def add_product_type(self,name):
sql = "insert into ProductType (Description) values (?)"
self.query(sql,(name,))
def delete_product_type(self,product_type_id):
sql = "delete from ProductType where ProductTypeID = ?"
self.query(sql,(product_type_id,))
    def amend_product_type(self,product_type_id,name):
        # Description is the column written by add_product_type, not Name
        sql = "update ProductType set Description = ? where ProductTypeID = ?"
        self.query(sql,(name,product_type_id))
    def product_type_details(self,product_type_id=None,name=None):
        # ProductType stores its text in the Description column (see
        # add_product_type above), so match on Description rather than Name.
        if product_type_id != None:
            sql = "select * from ProductType where ProductTypeID = ?"
            data = (product_type_id,)
        elif name != None:
            sql = "select * from ProductType where Description = ?"
            data = (name,)
        else:
            # without this fallback, sql/data were unbound when both args were None
            sql = "select * from ProductType"
            data = []
        return self.select_query(sql,data)
def product_type_headings(self):
sql = "PRAGMA table_info(ProductType)"
return self.select_query(sql) | 33.7625 | 81 | 0.597186 | 314 | 2,701 | 4.949045 | 0.187898 | 0.120335 | 0.083655 | 0.043758 | 0.372587 | 0.266409 | 0.074646 | 0 | 0 | 0 | 0 | 0.001041 | 0.288412 | 2,701 | 80 | 82 | 33.7625 | 0.807492 | 0.029248 | 0 | 0.190476 | 0 | 0 | 0.222222 | 0.008812 | 0 | 0 | 0 | 0 | 0 | 1 | 0.174603 | false | 0 | 0.015873 | 0 | 0.269841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52d7d6ef8abf829d0132cf325a786a38a6279301 | 834 | py | Python | bilstm_crf_ner/train.py | rufinob/ehr-relation-extraction | d23e38a08de8c08e1d2de3f840b65fbbb4b61d8c | [
"MIT"
] | 43 | 2020-12-02T11:04:38.000Z | 2022-03-28T15:07:06.000Z | bilstm_crf_ner/train.py | rufinob/ehr-relation-extraction | d23e38a08de8c08e1d2de3f840b65fbbb4b61d8c | [
"MIT"
] | 17 | 2021-05-05T00:19:17.000Z | 2022-03-17T03:14:58.000Z | bilstm_crf_ner/train.py | rufinob/ehr-relation-extraction | d23e38a08de8c08e1d2de3f840b65fbbb4b61d8c | [
"MIT"
] | 23 | 2020-11-12T00:03:19.000Z | 2022-03-16T10:32:30.000Z | from model.data_utils import CoNLLDataset
from model.config import Config
from model.ner_model import NERModel
from model.ner_learner import NERLearner
# from model.ent_model import EntModel
# from model.ent_learner import EntLearner
def main():
# create instance of config
config = Config()
if config.use_elmo: config.processing_word = None
    # build model
model = NERModel(config)
# create datasets
dev = CoNLLDataset(config.filename_dev, config.processing_word,
config.processing_tag, config.max_iter, config.use_crf)
train = CoNLLDataset(config.filename_train, config.processing_word,
config.processing_tag, config.max_iter, config.use_crf)
learn = NERLearner(config, model)
learn.fit(train, dev)
if __name__ == "__main__":
main()
| 27.8 | 80 | 0.713429 | 104 | 834 | 5.480769 | 0.355769 | 0.094737 | 0.105263 | 0.091228 | 0.224561 | 0.224561 | 0.224561 | 0.224561 | 0.224561 | 0.224561 | 0 | 0 | 0.211031 | 834 | 29 | 81 | 28.758621 | 0.866261 | 0.155875 | 0 | 0.125 | 0 | 0 | 0.011478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52e3ea8d73030ba47f405db5df255830e0f534e3 | 7,115 | py | Python | rest.py | sverrirab/generic-rest | 3c5f7c2d499b89e06778d86f3802190cbe59a24c | [
"MIT"
] | 3 | 2019-09-30T23:15:53.000Z | 2021-06-08T12:35:50.000Z | rest.py | sverrirab/generic-rest | 3c5f7c2d499b89e06778d86f3802190cbe59a24c | [
"MIT"
] | null | null | null | rest.py | sverrirab/generic-rest | 3c5f7c2d499b89e06778d86f3802190cbe59a24c | [
"MIT"
] | null | null | null | from __future__ import print_function
from flask import Flask, request
from flask_restful import reqparse, abort, Api, Resource
import argparse
import json
import os
import random
CHARS = "BCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz123456789" # 60 unique characters.
UNIQUE_LEN = 6 # 46B unique
app = Flask(__name__)
api = Api(app)
db = None
request_parser = reqparse.RequestParser()
verbose = 0
strict_put = False
authorization_token = ""
def create_unique():
"""
:return: string very likely to be unique.
"""
result = ""
for i in range(UNIQUE_LEN):
result += random.choice(CHARS)
return result
def strip_tag(field, tag):
"""
If 'field' is in the form 'str_tag1_tag2' and tag matches any tag remove it.
:param field: arbitrary string
:return: str, bool (indicating if 'tag' was removed)
"""
s = field.split("_")
found = False
if tag in s:
s.remove(tag)
found = True
return "_".join(s), found
def strip_from_end(s, c):
"""
Remove all 'c' from the end of 's'
:param s: a string.
:param c: character or substring to remove
:return: remainder of 's'
"""
if c:
while s.endswith(c):
s = s[:-len(c)]
return s
def strip_from_start(s, c):
"""
Remove all 'c' from the start of 's'
:param s: a string.
:param c: character or substring to remove
:return: remainder of 's'
"""
if c:
while s.startswith(c):
s = s[len(c):]
return s
def validate_authorization():
if authorization_token:
auth = request.headers.get("Authorization", "").split(" ")
if len(auth) == 2 and auth[0].lower() == "bearer" and auth[1] == authorization_token:
return # Success!
print("Authentication failed: '{}'".format(" ".join(auth)))
abort(401)
class DataBase(object):
def __init__(self, file_name=""):
self._data = {}
self._file_name = file_name
self.load_from_disk()
def load_from_disk(self):
if self._file_name and os.path.exists(self._file_name):
with open(self._file_name, "r") as f:
self._data = json.load(f)
if verbose:
print("Loaded data from file", self._file_name, "- records found:", len(self._data))
def persist_to_disk(self):
if self._file_name:
with open(self._file_name, "w") as f:
json.dump(self._data, f, sort_keys=True, indent=4, separators=(",", ": "))
if verbose:
print("Persisted data to file", self._file_name, "- records to store:", len(self._data))
def throw_if_does_not_exist(self, unique_id):
if not self.exists(unique_id):
abort(404, message="Item with id '{}' does not exist.".format(unique_id))
def all(self):
return self._data
def exists(self, unique_id):
return unique_id in self._data
def get(self, unique_id):
self.throw_if_does_not_exist(unique_id)
return self._data[unique_id]
def get_field(self, unique_id, field):
record = self.get(unique_id)
if field not in record:
abort(404, message="field '{}' not found!".format(field))
return record[field]
def delete(self, unique_id):
validate_authorization()
self.throw_if_does_not_exist(unique_id)
        del self._data[unique_id]  # operate on this instance's store, not the module-level handle
self.persist_to_disk()
def put(self, unique_id, record, only_update=True):
validate_authorization()
if only_update:
self.throw_if_does_not_exist(unique_id)
self._data[unique_id] = record
self.persist_to_disk()
def post(self, record):
validate_authorization()
while True:
unique_id = create_unique()
if not self.exists(unique_id):
self._data[unique_id] = record
self.persist_to_disk()
return unique_id
class ItemList(Resource):
def get(self):
return db.all()
def post(self):
validate_authorization()
args = request_parser.parse_args()
unique_id = db.post(args)
return unique_id, 201
class Item(Resource):
def get(self, unique_id):
return db.get(unique_id)
def delete(self, unique_id):
db.delete(unique_id)
return '', 204
def put(self, unique_id):
args = request_parser.parse_args()
db.put(unique_id, args, strict_put)
return args, 201
class ItemField(Resource):
def get(self, unique_id, field):
return db.get_field(unique_id, field)
def main():
global verbose, strict_put, authorization_token, db
parser = argparse.ArgumentParser("Simple rest api server")
parser.add_argument("-v", "--verbose", action="count", default=0,
help="Increase output verbosity")
parser.add_argument("-d", "--debug", action="store_true", default=False,
help="Run in debug mode")
parser.add_argument("-a", "--api", default="/api",
help="API root. Default '%(default)s'")
parser.add_argument("-f", "--file-name", default="",
help="Filename for persistent storage")
parser.add_argument("-t", "--token", default="",
help="Authorization token required for updates")
parser.add_argument("-s", "--strict-put", action="store_true", default=False,
help="Only allow PUT on existing resource")
parser.add_argument("field", nargs="*", default=["text", "count_optional_int", "help_optional"],
help="Fields in API. If empty: 'text count_optional_int help_optional'")
args = parser.parse_args()
verbose = args.verbose
strict_put = args.strict_put
authorization_token = args.token
db = DataBase(file_name=args.file_name)
api_normalized = strip_from_end(strip_from_start(args.api, "/"), "/")
api.add_resource(ItemList, "/" + api_normalized)
if len(api_normalized) > 0:
api_normalized = "/" + api_normalized
api.add_resource(Item, api_normalized + "/<unique_id>")
api.add_resource(ItemField, api_normalized + "/<unique_id>/<field>")
print("Starting API server on", args.api)
for field in args.field:
argvars = {}
field, optional = strip_tag(field, "optional")
field, required = strip_tag(field, "required")
field, type_int = strip_tag(field, "int")
field, type_str = strip_tag(field, "str")
if type_int:
argvars["type"] = int
if not optional:
argvars["required"] = True
else:
if type_int:
argvars["default"] = 0
else:
argvars["default"] = ""
print("Adding field:", field, ("required" if not optional else "optional"), ("int" if type_int else "str"))
request_parser.add_argument(field, **argvars)
app.run(debug=args.debug, host="0.0.0.0", port=5000)
return 0
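# Example session (hedged; flags as defined by the argparse setup above):
#
#   python rest.py -f store.json text count_optional_int
#   curl -X POST -d "text=hello" -d "count=3" http://localhost:5000/api
#   curl http://localhost:5000/api/<returned id>
#   curl http://localhost:5000/api/<returned id>/text
#
# Field specs follow the "name[_optional][_int|_str]" convention handled by
# strip_tag() in main().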
if __name__ == '__main__':
exit(main())
| 29.769874 | 115 | 0.607309 | 907 | 7,115 | 4.553473 | 0.2172 | 0.061985 | 0.029056 | 0.013559 | 0.230024 | 0.173608 | 0.113559 | 0.104358 | 0.060048 | 0.060048 | 0 | 0.009823 | 0.270274 | 7,115 | 238 | 116 | 29.894958 | 0.785632 | 0.069571 | 0 | 0.179641 | 0 | 0 | 0.122912 | 0.009195 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137725 | false | 0 | 0.041916 | 0.02994 | 0.305389 | 0.035928 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52e5a3209d8fed1bb2e419371c2306705ce2dc89 | 2,381 | py | Python | examples/tutorials/scipy2008/ploteditor.py | janvonrickenbach/Chaco_wxPhoenix_py3 | 21a10cfd81100f28e3fbc273357ac45642519f33 | [
"BSD-3-Clause"
] | null | null | null | examples/tutorials/scipy2008/ploteditor.py | janvonrickenbach/Chaco_wxPhoenix_py3 | 21a10cfd81100f28e3fbc273357ac45642519f33 | [
"BSD-3-Clause"
] | null | null | null | examples/tutorials/scipy2008/ploteditor.py | janvonrickenbach/Chaco_wxPhoenix_py3 | 21a10cfd81100f28e3fbc273357ac45642519f33 | [
"BSD-3-Clause"
] | null | null | null | from numpy import linspace, sin
from chaco.api import ArrayPlotData, Plot
from chaco.tools.api import PanTool, ZoomTool
from enable.component_editor import ComponentEditor
from traits.api import Enum, HasTraits, Instance
from traitsui.api import Item, Group, View
class PlotEditor(HasTraits):
plot = Instance(Plot)
plot_type = Enum("scatter", "line")
orientation = Enum("horizontal", "vertical")
traits_view = View(
Item(
'orientation', label="Orientation"),
Item(
'plot', editor=ComponentEditor(), show_label=False),
width=500,
height=500,
resizable=True)
def __init__(self, *args, **kw):
HasTraits.__init__(self, *args, **kw)
# Create the data and the PlotData object
x = linspace(-14, 14, 100)
y = sin(x) * x**3
plotdata = ArrayPlotData(x=x, y=y)
# Create the plot (scatter or line, depending on self.plot_type)
plot = Plot(plotdata)
plot.plot(("x", "y"), type=self.plot_type, color="blue")
plot.tools.append(PanTool(plot))
plot.tools.append(ZoomTool(plot))
self.plot = plot
def _orientation_changed(self):
if self.orientation == "vertical":
self.plot.orientation = "v"
else:
self.plot.orientation = "h"
self.plot.request_redraw()
#===============================================================================
# demo object that is used by the demo.py application.
#===============================================================================
class Demo(HasTraits):
# Scatter plot.
scatter_plot = Instance(PlotEditor)
# Line plot.
line_plot = Instance(PlotEditor)
traits_view = View(
Group(
Item(
'@scatter_plot', show_label=False), label='Scatter'),
Group(
Item(
'@line_plot', show_label=False), label='Line'),
title='Chaco Plot',
resizable=True)
def __init__(self, *args, **kws):
super(Demo, self).__init__(*args, **kws)
# Share one 2D data range so panning/zooming either plot keeps both views in sync.
self.scatter_plot.plot.range2d = self.line_plot.plot.range2d
def _scatter_plot_default(self):
return PlotEditor(plot_type="scatter")
def _line_plot_default(self):
return PlotEditor(plot_type="line")
demo = Demo()
if __name__ == "__main__":
demo.configure_traits()
| 29.395062 | 80 | 0.573289 | 261 | 2,381 | 5.038314 | 0.325671 | 0.048669 | 0.031939 | 0.030418 | 0.136882 | 0.101901 | 0.059316 | 0 | 0 | 0 | 0 | 0.008949 | 0.249055 | 2,381 | 80 | 81 | 29.7625 | 0.72651 | 0.133557 | 0 | 0.178571 | 0 | 0 | 0.065239 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089286 | false | 0 | 0.107143 | 0.035714 | 0.392857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52e6081f2a531560e71eadf9ebb1a80e4d382951 | 466 | py | Python | TCP_client_demo.py | markzxx/ImplementNetwork | f46f172a5aafe6ef975e77764a31f73cd2f2e269 | [
"MIT"
] | null | null | null | TCP_client_demo.py | markzxx/ImplementNetwork | f46f172a5aafe6ef975e77764a31f73cd2f2e269 | [
"MIT"
] | null | null | null | TCP_client_demo.py | markzxx/ImplementNetwork | f46f172a5aafe6ef975e77764a31f73cd2f2e269 | [
"MIT"
] | null | null | null | from mysocket import *
from LinkLayer import util
ip = util.get_local_ipv4_address()
port = 5000
local_address = (ip, port)
remote_address = ('10.20.117.131', 5000)
ClientSocket = socket(AF_INET, SOCK_STREAM)
ClientSocket.bind(local_address)
ClientSocket.connect(remote_address)
for i in range(10):
ClientSocket.send("{}".format(i).encode())
raw_message = ClientSocket.recv(2048)
print("\n---------------\n", raw_message.decode(), "\n----------------\n") | 35.846154 | 78 | 0.688841 | 63 | 466 | 4.920635 | 0.619048 | 0.077419 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060096 | 0.107296 | 466 | 13 | 78 | 35.846154 | 0.685096 | 0 | 0 | 0 | 0 | 0 | 0.115632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52e96b81ee897f3cc98d6620682ccb2e9e920b1e | 11,731 | py | Python | dusty/scanners/performer.py | hunkom/dusty-1 | 04ec2dc26f395f5d97b58354473ba6f280c4efb0 | [
"Apache-2.0"
] | 6 | 2019-02-25T08:22:24.000Z | 2021-11-02T04:34:35.000Z | dusty/scanners/performer.py | hunkom/dusty-1 | 04ec2dc26f395f5d97b58354473ba6f280c4efb0 | [
"Apache-2.0"
] | 7 | 2019-01-08T17:32:38.000Z | 2021-05-27T08:03:23.000Z | dusty/scanners/performer.py | hunkom/dusty-1 | 04ec2dc26f395f5d97b58354473ba6f280c4efb0 | [
"Apache-2.0"
] | 19 | 2018-12-13T07:42:32.000Z | 2021-07-21T09:07:29.000Z | #!/usr/bin/python3
# coding=utf-8
# pylint: disable=I0011,R0903,W0702,W0703,R0914,R0912,R0915
# Copyright 2019 getcarrier.io
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Scanning performer
"""
import importlib
import traceback
import pkgutil
import time
import concurrent.futures
from ruamel.yaml.comments import CommentedMap
from dusty.tools import log
from dusty.tools import dependency
from dusty.models.module import ModuleModel
from dusty.models.performer import PerformerModel
from dusty.models.error import Error
from . import constants
class ScanningPerformer(ModuleModel, PerformerModel):
""" Runs scanners """
def __init__(self, context):
""" Initialize instance """
super().__init__()
self.context = context
def prepare(self):
""" Prepare for action """
log.debug("Preparing")
config = self.context.config["scanners"]
# Schedule scanners
for scanner_type in list(config):
for scanner_name in list(config[scanner_type]):
if isinstance(config[scanner_type][scanner_name], bool) and \
not config[scanner_type][scanner_name]:
continue
try:
self.schedule_scanner(scanner_type, scanner_name, dict())
except:
log.exception(
"Failed to prepare %s scanner %s",
scanner_type, scanner_name
)
error = Error(
tool=f"{scanner_type}.{scanner_name}",
error=f"Failed to prepare {scanner_type} scanner {scanner_name}",
details=f"```\n{traceback.format_exc()}\n```"
)
self.context.errors.append(error)
# Resolve dependencies once again
dependency.resolve_depencies(self.context.scanners)
def perform(self):
""" Perform action """
log.info("Starting scanning")
reporting = self.context.performers.get("reporting", None)
# Create executors
executor = dict()
settings = self.context.config["settings"]
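# Example "settings" snippet (hypothetical values) that would give the
# sast executor two workers while other scanner types keep the default 1:
#   settings:
#     sast:
#       max_concurrent_scanners: 2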
for scanner_type in self.context.config["scanners"]:
max_workers = settings.get(scanner_type, dict()).get("max_concurrent_scanners", 1)
executor[scanner_type] = concurrent.futures.ThreadPoolExecutor(max_workers=max_workers)
log.info("Made %s executor with %d workers", scanner_type.upper(), max_workers)
# Starting scanning
if reporting:
reporting.on_start()
# Submit scanners
futures = list()
future_map = dict()
future_dep_map = dict()
for item in self.context.scanners:
scanner = self.context.scanners[item]
scanner_type = scanner.__class__.__module__.split(".")[-3]
scanner_module = scanner.__class__.__module__.split(".")[-2]
depencies = list()
for dep in scanner.depends_on() + scanner.run_after():
if dep in future_dep_map:
depencies.append(future_dep_map[dep])
future = executor[scanner_type].submit(self._execute_scanner, scanner, depencies)
future_dep_map[scanner_module] = future
future_map[future] = item
futures.append(future)
# Wait for executors to start and finish
started = set()
finished = set()
while True:
# Check for started executors
for future in futures:
if future not in started and (future.running() or future.done()):
item = future_map[future]
scanner = self.context.scanners[item]
if not scanner.get_meta("meta_scanner", False):
log.info(f"Started {item} ({scanner.get_description()})")
if reporting:
reporting.on_scanner_start(item)
# Add to started set
started.add(future)
# Check for finished executors
for future in futures:
if future not in finished and future.done():
item = future_map[future]
try:
future.result()
except:
log.exception("Scanner %s failed", item)
error = Error(
tool=item,
error=f"Scanner {item} failed",
details=f"```\n{traceback.format_exc()}\n```"
)
self.context.errors.append(error)
# Collect scanner findings and errors
scanner = self.context.scanners[item]
scanner_type = scanner.__class__.__module__.split(".")[-3]
for result in scanner.get_findings():
result.set_meta("scanner_type", scanner_type)
self.context.findings.append(result)
for error in scanner.get_errors():
error.set_meta("scanner_type", scanner_type)
self.context.errors.append(error)
if not scanner.get_meta("meta_scanner", False):
if reporting:
reporting.on_scanner_finish(item)
# Add to finished set
finished.add(future)
# Exit if all executors done
if self._all_futures_done(futures):
break
# Sleep for some short time
time.sleep(constants.EXECUTOR_STATUS_CHECK_INTERVAL)
# All scanners completed
if reporting:
reporting.on_finish()
@staticmethod
def _execute_scanner(scanner, depencies):
if depencies:
concurrent.futures.wait(depencies)
scanner.execute()
@staticmethod
def _all_futures_done(futures):
for item in futures:
if not item.done():
return False
return True
def get_module_meta(self, module, name, default=None):
""" Get submodule meta value """
try:
module_name = importlib.import_module(
f"dusty.scanners.{module}.scanner"
).Scanner.get_name()
if module_name in self.context.scanners:
return self.context.scanners[module_name].get_meta(name, default)
return default
except:
return default
def set_module_meta(self, module, name, value):
""" Set submodule meta value """
try:
module_name = importlib.import_module(
f"dusty.scanners.{module}.scanner"
).Scanner.get_name()
if module_name in self.context.scanners:
self.context.scanners[module_name].set_meta(name, value)
except:
pass
def schedule_scanner(self, scanner_type, scanner_name, scanner_config):
""" Schedule scanner run in current context after all already configured scanners """
try:
# Init scanner instance
scanner = importlib.import_module(
f"dusty.scanners.{scanner_type}.{scanner_name}.scanner"
).Scanner
if scanner.get_name() in self.context.scanners:
log.debug("Scanner %s.%s already scheduled", scanner_type, scanner_name)
return
# Prepare config
config = self.context.config["scanners"]
if scanner_type not in config:
config[scanner_type] = dict()
if scanner_name not in config[scanner_type] or \
not isinstance(config[scanner_type][scanner_name], dict):
config[scanner_type][scanner_name] = dict()
general_config = dict()
if "settings" in self.context.config:
general_config = self.context.config["settings"]
if scanner_type in general_config:
merged_config = general_config[scanner_type].copy()
merged_config.update(config[scanner_type][scanner_name])
config[scanner_type][scanner_name] = merged_config
config[scanner_type][scanner_name].update(scanner_config)
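# Merge order (hypothetical keys): settings.<type>.* supplies defaults,
# the scanner's own config section overrides them, and the scanner_config
# passed by the caller wins over both.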
# Validate config
scanner.validate_config(config[scanner_type][scanner_name])
# Add to context
scanner = scanner(self.context)
self.context.scanners[scanner.get_name()] = scanner
# Resolve dependencies
dependency.resolve_depencies(self.context.scanners)
# Prepare scanner
scanner.prepare()
# Done
log.debug("Scheduled scanner %s.%s", scanner_type, scanner_name)
except:
log.exception(
"Failed to schedule %s scanner %s",
scanner_type, scanner_name
)
error = Error(
tool=f"{scanner_type}.{scanner_name}",
error=f"Failed to schedule {scanner_type} scanner {scanner_name}",
details=f"```\n{traceback.format_exc()}\n```"
)
self.context.errors.append(error)
@staticmethod
def fill_config(data_obj):
""" Make sample config """
data_obj.insert(len(data_obj), "scanners", CommentedMap(), comment="Scanners config")
scanner_obj = data_obj["scanners"]
scanners_module = importlib.import_module("dusty.scanners")
for _, name, pkg in pkgutil.iter_modules(scanners_module.__path__):
if not pkg:
continue
# general_scanner_obj = data_obj["settings"][name] # This can also be used
scanner_type = importlib.import_module("dusty.scanners.{}".format(name))
scanner_obj.insert(len(scanner_obj), name, CommentedMap())
inner_obj = scanner_obj[name]
for _, inner_name, inner_pkg in pkgutil.iter_modules(scanner_type.__path__):
if not inner_pkg:
continue
try:
scanner = importlib.import_module(
"dusty.scanners.{}.{}.scanner".format(name, inner_name)
)
inner_obj.insert(
len(inner_obj), inner_name, CommentedMap(),
comment=scanner.Scanner.get_description()
)
scanner.Scanner.fill_config(inner_obj[inner_name])
except:
pass # Skip scanner, it may be DAST scanner (in SAST image) or vice versa
@staticmethod
def validate_config(config):
""" Validate config """
if "scanners" not in config:
log.warning("No scanners defined in config")
config["scanners"] = dict()
@staticmethod
def get_name():
""" Module name """
return "scanning"
@staticmethod
def get_description():
""" Module description or help message """
return "performs scanning"
| 41.161404 | 99 | 0.571136 | 1,218 | 11,731 | 5.318555 | 0.200328 | 0.066224 | 0.063909 | 0.057734 | 0.317073 | 0.210713 | 0.154677 | 0.154677 | 0.131522 | 0.119173 | 0 | 0.005422 | 0.339698 | 11,731 | 284 | 100 | 41.306338 | 0.83088 | 0.132811 | 0 | 0.331754 | 0 | 0 | 0.09278 | 0.035203 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056872 | false | 0.009479 | 0.085308 | 0 | 0.184834 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ebb4cb376cf5a2c773ef97a0105e544d1fccc1 | 1,058 | py | Python | tree/python/leetcode104_Maximum_Depth_of_Binary_Tree.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null | tree/python/leetcode104_Maximum_Depth_of_Binary_Tree.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null | tree/python/leetcode104_Maximum_Depth_of_Binary_Tree.py | wenxinjie/leetcode | c459a01040c8fe0783e15a16b8d7cca4baf4612a | [
"Apache-2.0"
] | null | null | null |
# Given a binary tree, find its maximum depth.
# The maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
# Note: A leaf is a node with no children.
# Example:
# Given binary tree [3,9,20,null,null,15,7],
#      3
#     / \
#    9  20
#      /  \
#     15   7
# return its depth = 3.
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def maxDepth(self, root):
"""
:type root: TreeNode
:rtype: int
"""
if not root: return 0
queue = [root]
depth = 0
while queue:
queue1 = []
for node in queue:
if node.left:
queue1.append(node.left)
if node.right:
queue1.append(node.right)
queue = queue1
depth += 1
return depth
# Time: O(n)
# Space: O(n)
# Difficulty: medium | 21.591837 | 116 | 0.513233 | 136 | 1,058 | 3.963235 | 0.492647 | 0.055659 | 0.040816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.391304 | 1,058 | 49 | 117 | 21.591837 | 0.801242 | 0.510397 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ec959ce800103240cc7fb362a8c2e0a3f764a0 | 3,584 | py | Python | precompute/convert_kanjidic2_to_hashtable.py | 011000101101/VRAR_project | 7b0be02517de3e3975c9a697e4d6353c3fd6225f | [
"MIT"
] | null | null | null | precompute/convert_kanjidic2_to_hashtable.py | 011000101101/VRAR_project | 7b0be02517de3e3975c9a697e4d6353c3fd6225f | [
"MIT"
] | null | null | null | precompute/convert_kanjidic2_to_hashtable.py | 011000101101/VRAR_project | 7b0be02517de3e3975c9a697e4d6353c3fd6225f | [
"MIT"
] | null | null | null | import xml.etree.ElementTree as ET
import pickle
import os
from utils.params import *
import utils.stringutils as stringutils
def hiraganify_single_on_yomi(on_yomi: str):
kun_yomi = ""
for char in on_yomi:
if char == 'ー':
# new_char = kun_yomi[len(kun_yomi)-1]
kun_yomi += 'あ' # TODO dirty workaround for only occurrence of 'ー' (ダース)
continue
# convert the single-char string to a byte array holding its UTF-8 encoding
char_bytes = list(char.encode("utf-8"))
# skip non-katakana chars
if char_bytes[0] != 227:
continue
## assertion holds on full dataset
# assert (
# (char_bytes[1] == 130 and 161 <= char_bytes[2] <= 191)
# or
# (char_bytes[1] == 131 and 128 <= char_bytes[2] <= 182)
# ), "{} is not a katakana char: {}".format(char, char_bytes) # 82a1 <= ... <= 83B6
# change katakana char to equivalent hiragana char according to utf-8 codepoint table
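# e.g. katakana 'タ' (e3 82 bf) becomes hiragana 'た' (e3 81 9f): the second
# byte drops from 0x82 to 0x81 and the third byte drops by 0x20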
if char_bytes[1] == 130: # 82
char_bytes[1] = 129
char_bytes[2] -= 32 # bf - 9f, distance of "ta"
elif char_bytes[1] == 131:
if char_bytes[2] < 160: # a0
char_bytes[1] = 129
char_bytes[2] += 32 # 9f - bf, distance of "mi"
else:
char_bytes[1] = 130
char_bytes[2] -= 32 # a0 - 80, distance of "mu"
else:
continue # skip non-katakana chars
# convert byte array holding utf-8 codepoint of single char back to utf-8 string
new_char = bytes(char_bytes).decode("utf-8")
# concatenate the characters
kun_yomi += new_char
return kun_yomi
def hiraganify_on_yomi(readings_on: list):
return list(map(hiraganify_single_on_yomi, readings_on))
def isolate_actual_readings(readings_kun: list):
return [extended_reading.split('.')[0] for extended_reading in readings_kun]
def cut_non_hiragana_chars(kun_yomi: str):
hiragana_only = ""
for char in kun_yomi:
if char == 'ー':
hiragana_only += 'ー'
continue
if not stringutils.is_kana(char):
continue
# concatenate the characters
hiragana_only += char
return hiragana_only
def entry_list_to_map(entries_in: list):
kanji_dict = {}
for entry in entries_in:
kanji = entry.find("literal").text
readings_on = [reading.text for reading in entry.findall("reading_meaning/rmgroup/reading[@r_type='ja_on']")]
readings_kun = [reading.text for reading in entry.findall("reading_meaning/rmgroup/reading[@r_type='ja_kun']")]
readings_nanori = [reading.text for reading in entry.findall("reading_meaning/nanori")]
readings = hiraganify_on_yomi(readings_on) + list(
map(cut_non_hiragana_chars, isolate_actual_readings(readings_kun))
)
readings_nanori = list(map(cut_non_hiragana_chars, readings_nanori))
kanji_dict[kanji] = (readings, readings_nanori)
return kanji_dict
def convert():
tree = ET.parse(os.path.join(ROOT_DIR, "resources/kanjidic2/kanjidic2.xml"))
entries = tree.findall("character")
kanji_dict_map = entry_list_to_map(entries)
with open(os.path.join(ROOT_DIR, "bin_blobs/kanjidic2_hashtable.pkl"), 'wb') as f:
pickle.dump(kanji_dict_map, f, pickle.HIGHEST_PROTOCOL)
if __name__ == "__main__":
convert()
| 29.377049 | 119 | 0.625558 | 481 | 3,584 | 4.430353 | 0.305613 | 0.076021 | 0.032848 | 0.018301 | 0.247302 | 0.162834 | 0.112154 | 0.112154 | 0.088691 | 0.065697 | 0 | 0.033944 | 0.268415 | 3,584 | 121 | 120 | 29.619835 | 0.778795 | 0.213449 | 0 | 0.19697 | 0 | 0 | 0.084406 | 0.066166 | 0 | 0 | 0 | 0.008264 | 0 | 1 | 0.090909 | false | 0 | 0.075758 | 0.030303 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ee8ec529a645bffe1f58dac476a76515f1e923 | 15,445 | py | Python | runtime_config/awsdownloader.py | fpga-open-speech-tools/utils | 768e647b4b279b72a68729f32218b73708f43470 | [
"MIT"
] | null | null | null | runtime_config/awsdownloader.py | fpga-open-speech-tools/utils | 768e647b4b279b72a68729f32218b73708f43470 | [
"MIT"
] | 17 | 2020-04-22T18:50:45.000Z | 2021-01-28T17:54:24.000Z | runtime_config/awsdownloader.py | fpga-open-speech-tools/utils | 768e647b4b279b72a68729f32218b73708f43470 | [
"MIT"
] | 1 | 2021-02-10T14:09:58.000Z | 2021-02-10T14:09:58.000Z | #!/usr/bin/python3
"""
Download files for an SoC FPGA project from AWS.
This script downloads the bitstream, device tree overlay, and device
drivers located in a user-supplied directory in a user-supplied S3 bucket.
Parameters
----------
s3bucket : str
Name of the S3 bucket
s3directory : str
Name of the S3 directory in `s3bucket` where the desired files are located
driver_path : str
Path prefix where the device drivers will be downloaded to; drivers are
placed in a subdirectory of this path
config_path : str
Where to put the UI.json and Linker.json config files
progress : list of str
How to display download progress; options are 'bar' and 'json'
endpoint : str
HTTP endpoint, specified as http://ip:port, to send download progress to
verbose : bool
Print verbose output
Notes
-----
boto3, tqdm, and requests must be installed on the system in order to run this
script; they can all be installed with pip.
By convention, S3 directories are all lowercase, with words separated by hyphens
when doing so improves readability. For example, the directory for the
Audio Mini sound effects project is audiomini/sound-effects. Additionally,
The bitstream and device tree overlays are named the same as the project, but
with underscores instead of hyphens, e.g. sound_effects.rbf.
The directory name can be given with or without a trailing slash.
The .dtbo and .rbf files need to be on the firmware search path, so they
will always be placed in /lib/firmware. If placing drivers in a non-default
path, users will need to supply that path as an argument to drivermgr.sh.
Displaying download progress as json messages is intended to be read by
another program that can display a progress bar to a user on a web app.
In order for the progress bar to stay at the bottom of the
console output, tqdm.write() is used instead of print().
Examples
--------
Download files for the Audio Mini sound effects project
$ ./awsdownloader.py -b nih-demos -d audiomini/sound-effects
Download files for the Audio Mini passthrough project and show a progress bar
# ./awsdownloader.py -b nih-demos -d audiomini/passthrough --progress bar
Copyright
---------
Copyright 2020 Audio Logic
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Trevor Vannoy, Tyler Davis
Audio Logic
985 Technology Blvd
Bozeman, MT 59718
openspeech@flatearthinc.com
"""
import boto3
import sys
import argparse
import os
import json
import requests
from tqdm import tqdm
from collections import namedtuple
from botocore.client import Config
from botocore import UNSIGNED
FIRMWARE_PATH = '/lib/firmware/'
DEFAULT_DRIVER_PATH = '/lib/modules/'
DEFAULT_CONFIG_PATH = '../config/'
FIRMWARE_EXTENSIONS = ('.rbf', '.dtbo')
DRIVER_EXTENSIONS = ('.ko')
CONFIG_EXTENSIONS = ('.json')
HTTP_HEADERS = {'Content-type': 'application/json', 'Accept': 'text/plain'}
"""
Named tuple to group together info about S3 files.
Parameters
----------
names
A tuple or list of file names
keys
A tuple or list of the S3 keys corresponding to the files
sizes
A tuple or list of the file sizes in bytes
"""
_S3Files = namedtuple('S3Files', ['names', 'keys', 'sizes'])
class _ProgressMonitor(object):
"""
A download progress monitor.
Monitors the download progress of all the S3 files and displays a
progress indicator on stdout. This class is used as the callback
to the boto3 download_file method, which calls the __call__ method.
Parameters
----------
total_download_size
Size of all the S3 files being downloaded, in bytes
show_json : bool
Show download progress as a json message
show_bar : bool
Show download progress as a progress bar
Attributes
----------
status : str
User-definable download status message
bytes_received : int
The number of bytes received from S3 so far
percent_downloaded : int
How much of the files have been downloaded so far
json_status_message : dict
Download status message and progress as JSON
Notes
-----
The JSON status message can be read from stdout and used by other programs
to report the download progress/status. Its format is
{"progress": 42, "status": "downloading file x"}
"""
def __init__(self, total_download_size, show_json=False, show_bar=False,
endpoint=None):
self.total_download_size = total_download_size
self.show_json = show_json
self.show_bar = show_bar
self.status = ""
self.bytes_received = 0
self.percent_downloaded = 0
self.json_status_message = {
"progress": 0,
"status": ""
}
self.endpoint = endpoint
if self.show_bar:
self._progress_bar = tqdm(
bar_format='{l_bar}{bar}| {n_fmt}B/{total_fmt}B [{elapsed}]', desc="Downloading", total=self.total_download_size,
mininterval=0.05, unit_scale=True)
else:
self._progress_bar = None
def __call__(self, bytes_received):
"""
Update and print the download progress.
Download progress is only printed if show_json and/or show_bar are True.
Parameters
----------
bytes_received : int
The number of bytes received since the previous callback from boto3
"""
self.bytes_received += bytes_received
self.percent_downloaded = (
int(self.bytes_received / self.total_download_size * 100)
)
self.json_status_message['progress'] = self.percent_downloaded
self.json_status_message['status'] = self.status
json_str = json.dumps(self.json_status_message)
if self.show_json:
tqdm.write(json_str)
if self.show_bar:
self._progress_bar.update(bytes_received)
if self.endpoint:
try:
requests.put(self.endpoint, data=json_str, headers=HTTP_HEADERS)
except Exception as e:
print(e)
# TODO: real error handling; at least use a more specific exception type once I know what it is
def parseargs():
"""
Parse command-line arguments.
Returns
-------
args : Namespace
Object containing the parsed arguments
"""
# Create the argument parser
parser = argparse.ArgumentParser(add_help=False)
# Create a new group for the required arguments
required_args = parser.add_argument_group('required arguments')
# Add arguments for the directory and the bucket name
required_args.add_argument('-d', '--directory', type=str, required=True,
help="S3 directory to download files from")
required_args.add_argument('-b', '--bucket', type=str, required=True,
help="S3 bucket name")
# Create a group for optional arguments so required arguments print first
optional_args = parser.add_argument_group('optional arguments')
# Add optional arguments
optional_args.add_argument('-h', '--help', action='help',
help="show this help message and exit")
optional_args.add_argument('-v', '--verbose', action='store_true',
help="print verbose output", default=False)
optional_args.add_argument(
'--driver-path', type=str, default=DEFAULT_DRIVER_PATH,
help="path prefix where kernel modules folder gets created \
(default: " + DEFAULT_DRIVER_PATH + ")"
)
optional_args.add_argument(
'--config-path', type=str, default=DEFAULT_CONFIG_PATH,
help="where to put the UI.json and Linker.json config files \
(default: " + DEFAULT_CONFIG_PATH + ")"
)
optional_args.add_argument(
'-p', '--progress', action='append', choices=['bar', 'json'],
default=[], help="progress monitoring; 'bar' displays a progress bar, \
and 'json' displays progress in json format; multiple arguments \
can be given",
)
optional_args.add_argument('-e', '--endpoint', type=str,
help="HTTP endpoint to send download progress to; format is http://ip:port"
)
# Parse the arguments
args = parser.parse_args()
# Ensure paths end in a trailing slash
if args.driver_path[-1] != '/':
args.driver_path += '/'
if args.config_path[-1] != '/':
args.config_path += '/'
return args
def _get_file_info(s3objects, file_extensions):
"""
Get information about files in an s3 objects list.
Given a list of s3 objects, this function extracts the key, filename,
and file size for each file that ends in an extension in `file_extensions`
Parameters
----------
s3objects : list
List of dictionaries return by boto3's download_file()
file_extensions : tuple
File extensions to match keys against
Returns
-------
_S3Files
A named tuple containing tuples of file names, keys, and sizes
Notes
-----
boto3's list_objects_v2 returns a dictionary of information about the S3
objects. Within the 'Contents' key, which is what needs to be fed to this
function, each object in the list has 'Key' and 'Size' keys.
The 'Key' attribute is of the form "some/directory/filename.extension".
"""
# Get firmware keys that end with any extension in file_extensions
keys = tuple(obj['Key'] for obj in s3objects
if obj['Key'].endswith(file_extensions))
# If no keys matching file_extensions were found, exit early
if not keys:
return None
# Get firmware filenames (the part of the key after the last slash)
# A typical key will be '<device name>/<project name>/<file name>'
names = tuple(key.split('/')[-1] for key in keys)
# Get file sizes for all keys that end with an extension in file_extensions
sizes = tuple(obj['Size'] for obj in s3objects
if obj['Key'].endswith(file_extensions))
# Pack everything into a _S3Files named tuple
return _S3Files(names=names, keys=keys, sizes=sizes)
def main(s3bucket, s3directory, driver_path=DEFAULT_DRIVER_PATH,
config_path=DEFAULT_CONFIG_PATH, progress=[], endpoint=None,
verbose=False):
"""
Download files for an SoC FPGA project from AWS.
This script downloads the bitstream, device tree overlay, and device
drivers located in a user-supplied directory in a user-supplied S3 bucket.
Parameters
----------
s3bucket : str
Name of the S3 bucket
s3directory : str
Name of the S3 directory in `s3bucket` where the desired files are located
driver_path : str
Path prefix where the device drivers will be downloaded to; drivers are
placed in a subdirectory of this path
config_path : str
Where to put the UI.json and Linker.json config files
progress : list of str
How to display download progress; valid options are 'bar' and 'json'
endpoint : str
HTTP endpoint, specified as http://ip:port, to send download progress to
verbose : bool
Print verbose output
"""
project_name = s3directory.split('/')[-1].replace('-', '_')
# Create an s3 client that doesn't need/use aws credentials
client = boto3.client('s3', region_name='us-west-2',
config=Config(signature_version=UNSIGNED))
# Get all of the s3 objects that are in the desired bucket and directory.
# The "directory" isn't really a directory, but a prefix in the object keys,
# e.g. Key: some/directory/<actual_file>
# list_objects_v2 returns a big dictionary about the objects; the Contents
# key is what contains the actual directory contents
objects = client.list_objects_v2(
Bucket=s3bucket, Prefix=s3directory)['Contents']
# Get info about the firmware files in the s3 directory
firmware_files = _get_file_info(objects, FIRMWARE_EXTENSIONS)
# Kill the program if the s3 directory doesn't have firmware files
if firmware_files is None:
print("The s3 directory {} does not contain an overlay".format(
s3directory), file=sys.stderr)
exit(1)
total_download_size = sum(firmware_files.sizes)
# Get info about any driver files in the s3 directory
driver_files = _get_file_info(objects, DRIVER_EXTENSIONS)
if driver_files:
# Create a directory for the drivers if one doesn't already exist
if not os.path.isdir(driver_path + project_name):
os.mkdir(driver_path + project_name)
total_download_size += sum(driver_files.sizes)
# Get info about any config files in the s3 directory
config_files = _get_file_info(objects, CONFIG_EXTENSIONS)
if config_files:
# Create a directory for the config files if one doesn't already exist
if not os.path.isdir(config_path):
os.mkdir(config_path)
total_download_size += sum(config_files.sizes)
# Set up a progress monitor
show_bar = False
show_json = False
if 'bar' in progress:
show_bar = True
if 'json' in progress:
show_json = True
progressMonitor = _ProgressMonitor(total_download_size, show_bar=show_bar,
show_json=show_json, endpoint=endpoint)
# Download the firmware files
for (key, filename) in zip(firmware_files.keys, firmware_files.names):
progressMonitor.status = "downloading {}".format(key)
if verbose:
tqdm.write('Downloading file {} to {}...'.format(
filename, FIRMWARE_PATH + filename))
client.download_file(s3bucket, key, FIRMWARE_PATH + filename,
Callback=progressMonitor)
# If there are driver files, download them
if driver_files:
for key, filename in zip(driver_files.keys, driver_files.names):
progressMonitor.status = "downloading {}".format(key)
if verbose:
tqdm.write('Downloading file {} to {}...'.format(
filename, driver_path + project_name + '/' + filename))
client.download_file(s3bucket, key, driver_path + project_name
+ '/' + filename, Callback=progressMonitor)
# If there are config files, download them
if config_files:
for key, filename in zip(config_files.keys, config_files.names):
progressMonitor.status = "downloading {}".format(key)
if verbose:
tqdm.write('Downloading file {} to {}...'.format(
filename, config_path + filename))
client.download_file(s3bucket, key, config_path + filename,
Callback=progressMonitor)
return firmware_files
if __name__ == "__main__":
args = parseargs()
main(args.bucket, args.directory, args.driver_path,
args.config_path, args.progress, args.endpoint, args.verbose)
| 34.943439 | 129 | 0.669861 | 2,058 | 15,445 | 4.913508 | 0.207969 | 0.014834 | 0.016812 | 0.013647 | 0.30271 | 0.248121 | 0.201246 | 0.17415 | 0.166238 | 0.166238 | 0 | 0.007676 | 0.249336 | 15,445 | 441 | 130 | 35.022676 | 0.864499 | 0.467595 | 0 | 0.136646 | 0 | 0 | 0.107961 | 0.002782 | 0 | 0 | 0 | 0.002268 | 0 | 1 | 0.031056 | false | 0 | 0.062112 | 0 | 0.124224 | 0.018634 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52ef6266c9222c0ce33626f4cdc6a1884aba54a8 | 8,433 | py | Python | python/modules/data_set.py | CppPhil/fix_mogasens_csv | eedfe3037c1bd626f81a40073bd616b58d6ba677 | [
"Unlicense"
] | null | null | null | python/modules/data_set.py | CppPhil/fix_mogasens_csv | eedfe3037c1bd626f81a40073bd616b58d6ba677 | [
"Unlicense"
] | null | null | null | python/modules/data_set.py | CppPhil/fix_mogasens_csv | eedfe3037c1bd626f81a40073bd616b58d6ba677 | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf8 -*-
import _csv
import csv
import sys
import traceback
from .constants import *
from .this_sensor import this_sensor
class DataSet:
def __init__(self, time, hardware_timestamp, extract_id, trigger,
accelerometer_x, accelerometer_y, accelerometer_z, gyroscope_x,
gyroscope_y, gyroscope_z):
self.time = time
self.hardware_timestamp = hardware_timestamp
self.extract_id = extract_id
self.trigger = trigger
self.accelerometer_x = accelerometer_x
self.accelerometer_y = accelerometer_y
self.accelerometer_z = accelerometer_z
self.gyroscope_x = gyroscope_x
self.gyroscope_y = gyroscope_y
self.gyroscope_z = gyroscope_z
@classmethod
def from_file(cls, csv_file_name):
obj = cls.__new__(cls)
super(DataSet, obj).__init__()
obj.time = []
obj.hardware_timestamp = []
obj.extract_id = []
obj.trigger = []
obj.accelerometer_x = []
obj.accelerometer_y = []
obj.accelerometer_z = []
obj.gyroscope_x = []
obj.gyroscope_y = []
obj.gyroscope_z = []
with open(csv_file_name, 'r', newline='', encoding='utf-8') as csv_file:
try:
plots = csv.reader(csv_file, delimiter=',')
for row_count, row in enumerate(plots):
if row_count == 0: # Skip the header row
continue
try:
obj.time.append(float(row[time_column_index()]))
obj.hardware_timestamp.append(int(row[hardware_timestamp_index()]))
obj.extract_id.append(int(row[extract_id_column_index()]))
obj.trigger.append(float(row[trigger_index()]))
obj.accelerometer_x.append(
float(row[accelerometer_x_column_index()]))
obj.accelerometer_y.append(
float(row[accelerometer_y_column_index()]))
obj.accelerometer_z.append(
float(row[accelerometer_z_column_index()]))
obj.gyroscope_x.append(float(row[gyroscope_x_column_index()]))
obj.gyroscope_y.append(float(row[gyroscope_y_column_index()]))
obj.gyroscope_z.append(float(row[gyroscope_z_column_index()]))
except IndexError as err:
print(
f"data_set.py: DataSet.from_file: IndexError for file \"{csv_file_name}\": \"{err}\", row_count: {row_count}, row: {row}",
file=sys.stderr)
traceback.print_exc(file=sys.stderr)
sys.exit(1)
except ValueError as value_error:
print(
f"data_set.py: DataSet.from_file: ValueError for file \"{csv_file_name}\": \"{value_error}\", row_count: {row_count}, row: {row}",
file=sys.stderr)
traceback.print_exc(file=sys.stderr)
sys.exit(1)
except _csv.Error as err:
print(
f"data_set.py: DataSet.from_file: _csv.Error for file \"{csv_file_name}\": \"{err}\"",
file=sys.stderr)
sys.exit(1)
return obj
def size(self):
return len(self.time)
def is_empty(self):
return self.size() == 0
def has_elements(self):
return not self.is_empty()
def filter_by_sensor(self, sensor):
(this_sensor_time, this_sensor_hardware_timestamp, this_sensor_extract_id,
this_sensor_trigger, this_sensor_accelerometer_x, this_sensor_accelerometer_y,
this_sensor_accelerometer_z, this_sensor_gyroscope_x, this_sensor_gyroscope_y,
this_sensor_gyroscope_z) = this_sensor(
sensor, self.time, self.hardware_timestamp, self.extract_id,
self.trigger, self.accelerometer_x, self.accelerometer_y,
self.accelerometer_z, self.gyroscope_x, self.gyroscope_y,
self.gyroscope_z)
return DataSet(this_sensor_time, this_sensor_hardware_timestamp,
this_sensor_extract_id, this_sensor_trigger,
this_sensor_accelerometer_x, this_sensor_accelerometer_y,
this_sensor_accelerometer_z, this_sensor_gyroscope_x,
this_sensor_gyroscope_y, this_sensor_gyroscope_z)
def apply_filter(self, func):
self.accelerometer_x = func(self.accelerometer_x)
self.accelerometer_y = func(self.accelerometer_y)
self.accelerometer_z = func(self.accelerometer_z)
self.gyroscope_x = func(self.gyroscope_x)
self.gyroscope_y = func(self.gyroscope_y)
self.gyroscope_z = func(self.gyroscope_z)
return self
def write_to_file(self, file_path):
with open(file_path, 'w', newline='\n', encoding='utf-8') as csv_file:
writer = csv.writer(csv_file,
delimiter=',',
quotechar='"',
quoting=csv.QUOTE_MINIMAL)
writer.writerow([
'Time (s)', 'HWTimestamp (ms)', 'ExtractID', 'Trigger', 'Channel 1',
'Channel 2', 'Channel 3', 'Channel 4', 'Channel 5', 'Channel 6'
])
for i in range(self.size()):
writer.writerow([
self.time[i], self.hardware_timestamp[i], self.extract_id[i],
self.trigger[i], self.accelerometer_x[i], self.accelerometer_y[i],
self.accelerometer_z[i], self.gyroscope_x[i], self.gyroscope_y[i],
self.gyroscope_z[i]
])
def channel_by_str(self, string):
if string == channel1_string():
return self.accelerometer_x
if string == channel2_string():
return self.accelerometer_y
if string == channel3_string():
return self.accelerometer_z
if string == channel4_string():
return self.gyroscope_x
if string == channel5_string():
return self.gyroscope_y
if string == channel6_string():
return self.gyroscope_z
raise Exception(f"\"{string}\" is not a valid input to channel_by_str!")
def segmenting_hardware_timestamps(self, segmentation_points):
return [
self.hardware_timestamp[segmentation_point]
for segmentation_point in segmentation_points
]
def segment_by(self, segmenting_hwstamps):
segments = []
last_idx = 0
for hwstamp in segmenting_hwstamps:
idx = self.hardware_timestamp.index(hwstamp)
segments.append(
DataSet(self.time[last_idx:idx],
self.hardware_timestamp[last_idx:idx],
self.extract_id[last_idx:idx], self.trigger[last_idx:idx],
self.accelerometer_x[last_idx:idx],
self.accelerometer_y[last_idx:idx],
self.accelerometer_z[last_idx:idx],
self.gyroscope_x[last_idx:idx],
self.gyroscope_y[last_idx:idx],
self.gyroscope_z[last_idx:idx]))
last_idx = idx
segments.append(
DataSet(self.time[last_idx:], self.hardware_timestamp[last_idx:],
self.extract_id[last_idx:], self.trigger[last_idx:],
self.accelerometer_x[last_idx:],
self.accelerometer_y[last_idx:],
self.accelerometer_z[last_idx:], self.gyroscope_x[last_idx:],
self.gyroscope_y[last_idx:], self.gyroscope_z[last_idx:]))
return segments
def crop_front(self, exercise_start_timestamp):
# inclusive
crop_index = self.hardware_timestamp.index(exercise_start_timestamp)
self.time = self.time[crop_index:]
self.hardware_timestamp = self.hardware_timestamp[crop_index:]
self.extract_id = self.extract_id[crop_index:]
self.trigger = self.trigger[crop_index:]
self.accelerometer_x = self.accelerometer_x[crop_index:]
self.accelerometer_y = self.accelerometer_y[crop_index:]
self.accelerometer_z = self.accelerometer_z[crop_index:]
self.gyroscope_x = self.gyroscope_x[crop_index:]
self.gyroscope_y = self.gyroscope_y[crop_index:]
self.gyroscope_z = self.gyroscope_z[crop_index:]
def crop_back(self, exercise_end_timestamp):
# exclusive
crop_index = self.hardware_timestamp.index(exercise_end_timestamp)
self.time = self.time[:crop_index]
self.hardware_timestamp = self.hardware_timestamp[:crop_index]
self.extract_id = self.extract_id[:crop_index]
self.trigger = self.trigger[:crop_index]
self.accelerometer_x = self.accelerometer_x[:crop_index]
self.accelerometer_y = self.accelerometer_y[:crop_index]
self.accelerometer_z = self.accelerometer_z[:crop_index]
self.gyroscope_x = self.gyroscope_x[:crop_index]
self.gyroscope_y = self.gyroscope_y[:crop_index]
self.gyroscope_z = self.gyroscope_z[:crop_index]
| 39.591549 | 274 | 0.665362 | 1,067 | 8,433 | 4.939082 | 0.137769 | 0.116129 | 0.049336 | 0.023909 | 0.501328 | 0.453131 | 0.308918 | 0.278937 | 0.273245 | 0.273245 | 0 | 0.003379 | 0.227914 | 8,433 | 212 | 275 | 39.778302 | 0.806021 | 0.009724 | 0 | 0.104972 | 0 | 0 | 0.046969 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071823 | false | 0 | 0.033149 | 0.022099 | 0.187845 | 0.027624 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52f174c7dff1a5b08c6fe5d6c64499c249c20e96 | 1,292 | py | Python | openhands/datasets/isolated/autsl.py | AI4Bharat/OpenHands | a3c2c416395d70c7eb63294d955a84d1c8ea4410 | [
"Apache-2.0"
] | 13 | 2021-10-09T14:42:40.000Z | 2022-03-21T10:40:50.000Z | openhands/datasets/isolated/autsl.py | AI4Bharat/OpenHands | a3c2c416395d70c7eb63294d955a84d1c8ea4410 | [
"Apache-2.0"
] | 15 | 2021-10-10T03:20:44.000Z | 2022-03-16T03:19:14.000Z | openhands/datasets/isolated/autsl.py | AI4Bharat/OpenHands | a3c2c416395d70c7eb63294d955a84d1c8ea4410 | [
"Apache-2.0"
] | 2 | 2022-03-05T14:25:08.000Z | 2022-03-17T07:31:44.000Z | import os
import pandas as pd
from .base import BaseIsolatedDataset
from ..data_readers import load_frames_from_video
class AUTSLDataset(BaseIsolatedDataset):
"""
Turkish Isolated Sign language dataset from the paper:
`AUTSL: A Large Scale Multi-modal Turkish Sign Language Dataset and Baseline Methods <https://arxiv.org/abs/2008.00932>`_
"""
lang_code = "tsm"
def read_glosses(self):
class_mappings_df = pd.read_csv(self.class_mappings_file_path)
self.id_to_glosses = dict(
zip(class_mappings_df["ClassId"], class_mappings_df["TR"])
)
self.glosses = sorted(self.id_to_glosses.values())
def read_original_dataset(self):
df = pd.read_csv(self.split_file, header=None)
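# The split file is headerless: column 0 holds the sample name and column 1
# the class id, e.g. a row like "signer1_sample42,17" (hypothetical values).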
if self.modality == "rgb":
file_suffix = "color.mp4"
elif self.modality == "pose":
file_suffix = "color.pkl"
for i in range(len(df)):
instance_entry = df[0][i] + "_" + file_suffix, df[1][i]
self.data.append(instance_entry)
def read_video_data(self, index):
video_name, label = self.data[index]
video_path = os.path.join(self.root_dir, video_name)
imgs = load_frames_from_video(video_path)
return imgs, label, video_name
| 33.128205 | 125 | 0.655573 | 173 | 1,292 | 4.653179 | 0.508671 | 0.064596 | 0.055901 | 0.047205 | 0.037267 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012232 | 0.240712 | 1,292 | 38 | 126 | 34 | 0.808359 | 0.136997 | 0 | 0 | 0 | 0 | 0.03483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52f2ad2509c5a23112bfdda59cf0b013697af9ff | 1,117 | py | Python | fool/dictionary.py | xlturing/FoolNLTK | 76166d5f7cac1f70e697bcac4b0b1a1fdd128da9 | [
"Apache-2.0"
] | 2 | 2018-05-03T00:57:36.000Z | 2021-05-16T16:08:51.000Z | fool/dictionary.py | sdd031215/FoolNLTK | 76166d5f7cac1f70e697bcac4b0b1a1fdd128da9 | [
"Apache-2.0"
] | null | null | null | fool/dictionary.py | sdd031215/FoolNLTK | 76166d5f7cac1f70e697bcac4b0b1a1fdd128da9 | [
"Apache-2.0"
] | 1 | 2018-03-01T14:39:00.000Z | 2018-03-01T14:39:00.000Z | #!/usr/bin/env python
# -*-coding:utf-8-*-
from fool import trie
class Dictionary():
def __init__(self):
self.trie = trie.Trie()
self.weights = {}
self.sizes = 0
def delete_dict(self):
self.trie = trie.Trie()
self.weights = {}
self.sizes = 0
def add_dict(self, path):
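# Each line of the dictionary file is "word" or "word weight", e.g.
# "难受 3.0" (hypothetical entry); a missing weight defaults to 1.0.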
with open(path) as f:
for i, line in enumerate(f):
line = line.strip("\n").strip()
if not line:
continue
line = line.split()
word = line[0].strip()
self.trie.add_keyword(word)
if len(line) == 1:
weight = 1.0
else:
weight = float(line[1])
weight = float(weight)
self.weights[word] = weight
self.sizes = len(self.weights)  # recompute so repeated add_dict calls do not double-count
def parse_words(self, text):
matchs = self.trie.parse_text(text)
return matchs
def get_weight(self, word):
return self.weights.get(word, 1.0)
| 25.386364 | 47 | 0.477171 | 128 | 1,117 | 4.085938 | 0.40625 | 0.105163 | 0.045889 | 0.061185 | 0.16826 | 0.16826 | 0.16826 | 0.16826 | 0.16826 | 0.16826 | 0 | 0.015038 | 0.404655 | 1,117 | 43 | 48 | 25.976744 | 0.771429 | 0.034915 | 0 | 0.181818 | 0 | 0 | 0.001859 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.151515 | false | 0 | 0.030303 | 0.030303 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52f476362c5d4b1a6d29f1d69457c7e3d7f1faad | 10,170 | py | Python | modules/j2ASTwalker.py | ankudinov/J2parser | b783c0337c366d43baa991e6f907284be138ab74 | [
"MIT"
] | 1 | 2018-05-14T12:34:40.000Z | 2018-05-14T12:34:40.000Z | modules/j2ASTwalker.py | ankudinov/J2parser | b783c0337c366d43baa991e6f907284be138ab74 | [
"MIT"
] | null | null | null | modules/j2ASTwalker.py | ankudinov/J2parser | b783c0337c366d43baa991e6f907284be138ab74 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
__author__ = 'Petr Ankudinov'
from jinja2 import meta, FileSystemLoader
import jinja2.nodes
import os
import sys
import yaml
from modules.tools import merge_dict, build_dict
def build_dict_recursive(lst_or_tpl):
# Recursive function that builds a hierarchical dictionary from lists and sublists of (key_list, value) tuples.
if isinstance(lst_or_tpl, tuple):
if isinstance(lst_or_tpl[1], list):
value = list()
for e in lst_or_tpl[1]:
value.append(build_dict_recursive(e))
elif isinstance(lst_or_tpl[1], tuple):
value = build_dict_recursive(lst_or_tpl[1])
else:
value = lst_or_tpl[1]
result = build_dict(list(reversed(lst_or_tpl[0])), value)
elif isinstance(lst_or_tpl, list):
result = dict()
for e in lst_or_tpl:
result = merge_dict(result, build_dict_recursive(e))
else:
result = lst_or_tpl
return result
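# A minimal sketch of the intended behavior (assuming the build_dict helper
# from modules.tools nests its leaf-first key list from the inside out, and
# merge_dict deep-merges two dicts):
#   build_dict_recursive((['a', 'b'], 1)) -> {'a': {'b': 1}}
#   build_dict_recursive([(['a'], 1), (['b'], 2)]) -> {'a': 1, 'b': 2}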
value_dict = {
# these values will be assigned to extracted variables
'not defined': '{{ not defined }}',
'error': 'Error!',
'list': '{{ more elements in the list }}',
}
class J2Meta:
def __init__(self, template_realpath):
self.env = jinja2.Environment(loader=FileSystemLoader(searchpath=os.path.dirname(template_realpath)))
self.parent_template = os.path.basename(template_realpath)
self.known_templates = self.get_known_templates(self.parent_template)
# INTERNAL methods
def get_known_templates(self, template_name):
# initialise known template list and append parent template name
known_template_list = set()
known_template_list.add(template_name)
# parse parent template
template_src = self.env.loader.get_source(self.env, template_name)[0]
parsed_template = self.env.parse(source=template_src)
# get referenced templates and walk over these templates recursively
referenced_template_list = meta.find_referenced_templates(parsed_template)
for child_template in referenced_template_list:
known_template_list.add(child_template)
known_template_list.update(self.get_known_templates(child_template))
# return parent and all child template names
return known_template_list
def j2_ast_walk_main(self, j2node):
# The script will start walking over Jinja2 AST here looking for Getattr, Assign, Name, For nodes.
result_list = list()
recursion_required_nodes = [jinja2.nodes.Template, jinja2.nodes.Output]
recursion_required = False
for node in recursion_required_nodes:
if isinstance(j2node, node):
recursion_required = True
if recursion_required:
for child_node in j2node.iter_child_nodes():
# Recursion to get more specific nodes
for e in self.j2_ast_walk_main(child_node):
result_list.append(e)
else:
# Node specific walk
if isinstance(j2node, jinja2.nodes.For):
for e in self.j2_ast_walk_for(j2node):
result_list.append(e)
if isinstance(j2node, jinja2.nodes.If):
for e in self.j2_ast_walk_if(j2node):
result_list.append(e)
if isinstance(j2node, jinja2.nodes.Getattr):
for e in self.j2_ast_walk_getattr(j2node):
result_list.append(e)
if isinstance(j2node, jinja2.nodes.Assign):
for e in self.j2_ast_walk_assign(j2node):
result_list.append(e)
if isinstance(j2node, jinja2.nodes.Name):
for e in self.j2_ast_walk_name(j2node):
result_list.append(e)
# Ignore the following nodes
ignored_node_list = [
jinja2.nodes.TemplateData,
jinja2.nodes.Literal,
jinja2.nodes.Expr,
jinja2.nodes.Const,
jinja2.nodes.Include,
]
for ignored_node in ignored_node_list:
if isinstance(j2node, ignored_node):
pass # do nothing
# Generate alert for future debugging
alert_nodes_list = [
jinja2.nodes.Macro,
jinja2.nodes.CallBlock,
jinja2.nodes.FilterBlock,
jinja2.nodes.With,
jinja2.nodes.Block,
jinja2.nodes.Import,
jinja2.nodes.FromImport,
jinja2.nodes.ExprStmt,
jinja2.nodes.AssignBlock,
jinja2.nodes.BinExpr,
jinja2.nodes.UnaryExpr,
jinja2.nodes.Tuple,
jinja2.nodes.List,
jinja2.nodes.Dict,
jinja2.nodes.Pair,
jinja2.nodes.Keyword,
jinja2.nodes.CondExpr,
jinja2.nodes.Filter,
jinja2.nodes.Test,
jinja2.nodes.Call,
jinja2.nodes.Getitem,
jinja2.nodes.Slice,
jinja2.nodes.Concat,
jinja2.nodes.Compare,
jinja2.nodes.Operand,
]
for i, ignored_node in enumerate(alert_nodes_list):
if isinstance(j2node, ignored_node):
print("Ignoring %s!" % alert_nodes_list[i], file=sys.stderr)
print(j2node, file=sys.stderr)
return result_list
@staticmethod
def j2_ast_walk_name(j2node):
key_list = [j2node.name]
value = False
if j2node.ctx == 'load':
value = value_dict['not defined']
else: # ctx == 'store'
pass # ctx should be 'load' for Name node
if not value:
value = value_dict['error']
key_list = list(key_list)
return [(key_list, value)] # return a list with a single tuple
def j2_ast_walk_getattr(self, j2node):
result_list = list()
for child_node in j2node.iter_child_nodes():
for e in self.j2_ast_walk_main(child_node):
result_list.append(e)
for tpl in result_list:
tpl[0].append(j2node.attr) # add parent key to each tuple
return result_list
def j2_ast_walk_assign(self, j2node):
key_list = list()
value = False
for child in j2node.iter_child_nodes():
if isinstance(child, jinja2.nodes.Name):
if child.ctx == 'store': # 'store' should be the only context for Assign node
key_list.append(child.name)
else:
value = child.name
if isinstance(child, jinja2.nodes.Pair):
if isinstance(child.value, jinja2.nodes.Const):
if not value:
value = child.value.value
if isinstance(child.value, jinja2.nodes.Name):
if not value:
value = child.value.name
if isinstance(child.value, jinja2.nodes.Dict):
for temp_list, value in self.j2_ast_walk_assign(child.value):
key_list = key_list + temp_list
key_list.append(child.key.value)
if isinstance(child, jinja2.nodes.Dict):
temp_list, value = self.j2_ast_walk_assign(child)
key_list = key_list + temp_list
key_list = list(reversed(key_list))
return [(key_list, value)]
def j2_ast_walk_for(self, j2node):
result_list = list()
iter_list = self.j2_ast_walk_main(j2node.iter)
target_key_list = self.j2_ast_walk_main(j2node.target) # value will be ignored
target_key_length = len(target_key_list)
target_child_key_list = list()
for node in j2node.body:
for e in self.j2_ast_walk_main(node):
for tk in target_key_list:
if e[0][:target_key_length] == tk[0]:
if e[0][target_key_length:]: # verify if there are any other key apart from target
target_child_key_list.append((e[0][target_key_length:], e[1]))
else:
result_list.append(e)
for ik in iter_list:
if target_child_key_list:
result_list.append((ik[0], [target_child_key_list, value_dict['list']]))
else:
result_list.append((ik[0], [ik[1], value_dict['list']]))
return result_list
def j2_ast_walk_if(self, j2node):
result_list = list()
if isinstance(j2node.test, jinja2.nodes.Compare):
for key_list, value in self.j2_ast_walk_getattr(j2node.test.expr):
result_list.append((key_list, value))
for node in j2node.body:
for key_list, value in self.j2_ast_walk_main(node):
result_list.append((key_list, value))
for node in j2node.else_:
for key_list, value in self.j2_ast_walk_main(node):
result_list.append((key_list, value))
return result_list
# EXTERNAL methods
def get_template_list(self):
return self.known_templates
def parse(self, variables):
j2_template = self.env.get_template(self.parent_template) # get parent template
config = j2_template.render(variables)
return config
def get_variables(self):
result_list = list()
for template in self.known_templates:
template_src = self.env.loader.get_source(self.env, template)[0]
parsed_template = self.env.parse(source=template_src)
for e in self.j2_ast_walk_main(parsed_template):
result_list.append(e)
var_dict = build_dict_recursive(result_list)
return var_dict
if __name__ == '__main__':
# Extract variables from the specified template and display as YAML
template_name = sys.argv[1]
template_meta = J2Meta(template_name)
print(
yaml.dump(template_meta.get_variables(), default_flow_style=False)
)
| 38.669202 | 115 | 0.596362 | 1,233 | 10,170 | 4.679643 | 0.163017 | 0.085789 | 0.034315 | 0.036049 | 0.350087 | 0.275043 | 0.197574 | 0.153726 | 0.124783 | 0.104333 | 0 | 0.018277 | 0.322124 | 10,170 | 262 | 116 | 38.816794 | 0.818683 | 0.094395 | 0 | 0.240566 | 0 | 0 | 0.015349 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0.009434 | 0.037736 | 0.004717 | 0.150943 | 0.014151 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52f640632163d843693cecbc6b17624fdaefae0e | 3,225 | py | Python | Scripts/simulation/routing/route_events/route_event_type_balloon.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/routing/route_events/route_event_type_balloon.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | Scripts/simulation/routing/route_events/route_event_type_balloon.py | velocist/TS4CheatsInfo | b59ea7e5f4bd01d3b3bd7603843d525a9c179867 | [
"Apache-2.0"
] | null | null | null | # uncompyle6 version 3.7.4
# Python bytecode 3.7 (3394)
# Decompiled from: Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)]
# Embedded file name: T:\InGame\Gameplay\Scripts\Server\routing\route_events\route_event_type_balloon.py
# Compiled at: 2019-05-04 03:39:00
# Size of source mod 2**32: 3792 bytes
from balloon.balloon_enums import BALLOON_TYPE_LOOKUP
from balloon.balloon_request import BalloonRequest
from balloon.balloon_variant import BalloonVariant
from balloon.tunable_balloon import TunableBalloon
from event_testing.resolver import SingleSimResolver
from routing.route_events.route_event_mixins import RouteEventDataBase
from sims4.tuning.tunable import HasTunableFactory, AutoFactoryInit, TunableList, OptionalTunable, TunableRange
import sims4.random
class RouteEventTypeBalloon(RouteEventDataBase, HasTunableFactory, AutoFactoryInit):
FACTORY_TUNABLES = {'balloons':TunableList(description='\n A list of the possible balloons and balloon categories.\n ',
tunable=BalloonVariant.TunableFactory()),
'_duration_override':TunableRange(description='\n The duration we want this route event to have. This modifies how\n much of the route time this event will take up to play the\n animation. For route events that freeze locomotion, you might\n want to set this to a very low value. Bear in mind that high\n values are less likely to be scheduled for shorter routes.\n ',
tunable_type=float,
default=0,
minimum=0)}
def __init__(self, *args, **kwargs):
(super().__init__)(*args, **kwargs)
self._balloon_icons = None
def is_valid_for_scheduling(self, actor, path):
if not self._balloon_icons:
return False
return True
@property
def duration_override(self):
return self._duration_override
def prepare(self, actor):
if self._balloon_icons is None:
self._balloon_icons = []
resolver = SingleSimResolver(actor)
balloons = []
for balloon in self.balloons:
balloons = balloon.get_balloon_icons(resolver)
self._balloon_icons.extend(balloons)
def execute(self, actor, **kwargs):
pass
def process(self, actor):
if not self._balloon_icons:
return
else:
balloon = sims4.random.weighted_random_item(self._balloon_icons)
if balloon is None:
return
resolver = SingleSimResolver(actor)
icon_info = balloon.icon(resolver, balloon_target_override=None)
if icon_info[0] is None and icon_info[1] is None:
return
category_icon = None
if balloon.category_icon is not None:
category_icon = balloon.category_icon(resolver, balloon_target_override=None)
balloon_type, priority = BALLOON_TYPE_LOOKUP[balloon.balloon_type]
balloon_overlay = balloon.overlay
request = BalloonRequest(actor, icon_info[0], icon_info[1], balloon_overlay, balloon_type, priority, TunableBalloon.BALLOON_DURATION, 0, 0, category_icon)
request.distribute() | 48.863636 | 439 | 0.694264 | 402 | 3,225 | 5.390547 | 0.422886 | 0.044301 | 0.051684 | 0.021228 | 0.08491 | 0.059068 | 0 | 0 | 0 | 0 | 0 | 0.030645 | 0.231008 | 3,225 | 66 | 440 | 48.863636 | 0.843145 | 0.102326 | 0 | 0.132075 | 0 | 0.018868 | 0.171686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113208 | false | 0.018868 | 0.150943 | 0.018868 | 0.415094 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52f6d2520bc89f161468a3e3b9b932e6698c32e7 | 8,280 | py | Python | rom_to_cc.py | MegaIng/turing-complete-interface | 7f79ca13e1965f7ac9c509437def57dc28d62303 | [
"MIT"
] | 4 | 2022-01-23T20:29:16.000Z | 2022-03-20T06:10:47.000Z | rom_to_cc.py | MegaIng/logic_nodes | 7f79ca13e1965f7ac9c509437def57dc28d62303 | [
"MIT"
] | null | null | null | rom_to_cc.py | MegaIng/logic_nodes | 7f79ca13e1965f7ac9c509437def57dc28d62303 | [
"MIT"
] | 1 | 2022-01-28T02:41:25.000Z | 2022-01-28T02:41:25.000Z | #!/usr/bin/env python3
from typing import Iterator
from turing_complete_interface.scripts import *
import argparse
def read_file(file_name: str, word_size: int, dont_cares: list[int]) -> list[int]:
with open(file_name, "rb") as f:
data = f.read()
data = [data[i:i + word_size] for i in range(0, len(data), word_size)]
data = [int.from_bytes(b, "little") for b in data]
if dont_cares:
data = [None if b in dont_cares else b for b in data]
return data
def xnor_lfsr(width: int) -> Iterator[int]:
result = 0
mask = 1 << (width - 1)
yield 0
while True:
next_r = (result << 1)
xor = (next_r ^ result) & mask
result = next_r & (2 ** width - 1)
if xor == 0:
result ^= 1
yield result
def apply_lfsr(lfsr_size: int, data: list[int]) -> list[int]:
lfsr = xnor_lfsr(lfsr_size)
positions = [next(lfsr) for _ in range(2 ** lfsr_size)]
for i, p in enumerate(positions):
if positions.index(p) < i:
print(f"Warning: LFSR only has {i} unique values. Will attempt to pack data.")
del positions[i:]
break
mask = 2 ** lfsr_size - 1
data3: list[int | None] = [None] * len(data)
for i, d in enumerate(data):
if d is None:
continue
assert i & mask < len(positions), f"Unable to pack data into this LFSR at index {i}, {i & mask} >= {len(positions)}"
p = positions[i & mask] | (i & ~mask)
data3[p] = d
return data3
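# Illustrative sanity check (an addition of this edit, not part of the original
# CLI; call it manually if needed). For width=3 the XNOR LFSR above visits
# 2**3 - 1 = 7 distinct states -- the all-ones state is the XNOR lock-up --
# which is why apply_lfsr() may truncate `positions` and warn about packing.
def _demo_lfsr(width: int = 3) -> None:
    seq = xnor_lfsr(width)
    states = [next(seq) for _ in range(2 ** width)]
    print(states)  # [0, 1, 3, 6, 5, 2, 4, 0] for width=3; the cycle then repeats
    assert len(set(states)) == 2 ** width - 1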
def decoding_template(out_bits: int) -> tuple[str, list[str]]:
if out_bits <= 16:
return "LUTs/Templates/Expand16", ["Expand16D"]
elif out_bits <= 32:
return "LUTs/Templates/Expand32", ["Expand16A", "Expand16D"]
elif out_bits <= 64:
return "LUTs/Templates/Expand64", ["Expand16A", "Expand16B", "Expand16C", "Expand16D"]
else:
raise f"Too many outputs: {out_bits}, no template available"
def output_template(out_bits: int) -> tuple[str, str]:
if out_bits <= 1:
return "LUTs/Templates/Output1", "Output1"
elif out_bits <= 8:
return "LUTs/Templates/Output8", "Output8"
elif out_bits <= 64:
return "LUTs/Templates/Output64", "Output64"
else:
raise f"Too many outputs: {out_bits}, no template available"
def input_template(in_bits: int, inverted_inputs: bool) -> tuple[str, str, str]:
if in_bits <= 8 and inverted_inputs:
return "LUTs/Templates/Input8-Inverted", "ByteSplitter", "ByteSplitter"
elif in_bits <= 32 and inverted_inputs:
return "LUTs/Templates/Input32-Inverted", "ByteSplitter", "ByteSplitter",
elif in_bits <= 8:
return "LUTs/Templates/Input8", "ByteSplitter", "Not"
elif in_bits <= 32:
return "LUTs/Templates/Input32", "ByteSplitter", "ByteSplitter"
else:
raise f"Too many inputs: {in_bits}, no template available"
def ttgen(in_bits, inverted_inputs, out_bits) -> CompactTruthTableGenerator:
input_circuit, kind_pos, kind_neg = input_template(in_bits, inverted_inputs)
input_pattern = Pattern.from_circuit(load_circuit(input_circuit), {
"pos": FilterPins(SortPins(FromGates(GatesByKind(kind_pos), outputs=True), "xy"), lambda p, _: p[1] <= 0),
"neg": FilterPins(SortPins(FromGates(GatesByKind(kind_neg), outputs=True), "xy"), lambda p, _: p[1] > 0),
})
single_entry_pattern = Pattern.from_circuit(load_circuit("LUTs/Templates/EntrySingle"), {
"ins": FromGates(CustomByName("Or14"), inputs=True),
"out": FromGates(CustomByName("Combine-NOR"), outputs=True)
})
double_entry_pattern = Pattern.from_circuit(load_circuit("LUTs/Templates/EntryDouble"), {
"a": FromGates(CustomByName("Or14"), inputs=True),
"b": FromGates(CustomByName("Or14R"), inputs=True),
"out": FromGates(CustomByName("Combine-NAND"), outputs=True)
})
decoding_circuit, decoding_gates = decoding_template(out_bits)
decoding_pattern = Pattern.from_circuit(load_circuit(decoding_circuit), {
"values": Concatenate([
SortPins(FromGates(CustomByName(gate), inputs=True,
filter=lambda _, pin: isinstance(pin.name, int) or pin.name.isdigit()), "xy")
for gate in decoding_gates
]),
"prev": FromGates(GatesByKind("QwordOr"), inputs=True, filter=lambda _, pin: pin.name == "a"),
"next": FromGates(GatesByKind("QwordOr"), outputs=True)
})
output_circuit, output_kind = output_template(out_bits)
output_pattern = Pattern.from_circuit(load_circuit(output_circuit), {
"prev": FromGates(GatesByKind(output_kind), inputs=True)
})
return CompactTruthTableGenerator(
input_pattern,
single_entry_pattern,
double_entry_pattern,
decoding_pattern,
output_pattern)
def rom_to_cc(data: list[int],
in_bits: int,
inverted_inputs: bool,
lfsr_size: int,
output_file_name: str,
out_bits: int,
layout: LevelLayout = None,
print_lut: bool = False):
# Reorder data for LFSR counter
if lfsr_size > 0:
data = apply_lfsr(lfsr_size, data)
# Create the LUT
lut = lut_from_bytes(data, in_bits, out_bits)
lut.truth.prune_zeros()
lut.truth.reduce_dupes()
if print_lut:
print(lut)
select_level("component_factory")
gen = ttgen(in_bits, inverted_inputs, out_bits)
circuit = gen.generate(lut.truth, layout=layout)
try:
old = load_circuit(output_file_name)
print(f"Found {output_file_name} {old.save_version}")
circuit.save_version = old.save_version
except FileNotFoundError:
pass
circuit.delay = 2 if inverted_inputs else 4
save_custom_component(circuit, output_file_name)
print(f"Wrote to {output_file_name}, LUT cost is {circuit.nand}/{circuit.delay}")
def main():
parser = argparse.ArgumentParser(description="""
example: %(prog)s microcode.bin 4 9 19 SAP-Microcode
Used to convert binary files to LUT components for Turing Complete
""")
parser.add_argument('in_file',
help='Input ROM file')
parser.add_argument('in_alignment',
type=int,
help='Input ROM entry alignment, in bytes')
parser.add_argument('in_bits',
type=int,
help='Input width, in bits')
parser.add_argument('out_bits',
type=int,
help='Output width, in bits')
parser.add_argument('out_file',
help='Output component name')
parser.add_argument('-i', '--inverted-inputs',
action='store_true',
help='Inverted inputs available')
parser.add_argument('-p', '--prune',
type=int,
action="append",
default=[0],
help="Prune this value from the LUT (don't cares)")
parser.add_argument('-n', '--disable-prune-zero',
action='store_true',
help="Disable pruning zero from the LUT (don't cares)")
parser.add_argument('-l', '--lfsr-size',
type=int,
default=0,
help='Encode LUT for XNOR LFSR lookup')
parser.add_argument('-v', '--verbose',
action='count',
default=0,
help='Increase log level')
parser.add_argument('-V', '--version',
action='version',
version='%(prog)s 1.0')
options = parser.parse_args()
if options.disable_prune_zero:
options.prune.remove(0)
data = read_file(
options.in_file,
options.in_alignment,
options.prune)
rom_to_cc(
data,
options.in_bits,
options.inverted_inputs,
options.lfsr_size,
options.out_file,
options.out_bits,
None,
options.verbose > 0)
if __name__ == '__main__':
main()
| 38.511628 | 124 | 0.592633 | 987 | 8,280 | 4.794326 | 0.238095 | 0.026627 | 0.039518 | 0.026416 | 0.287194 | 0.217244 | 0.110313 | 0.070161 | 0.06044 | 0.022401 | 0 | 0.015447 | 0.288527 | 8,280 | 214 | 125 | 38.691589 | 0.787812 | 0.007971 | 0 | 0.111702 | 0 | 0.005319 | 0.205334 | 0.039216 | 0 | 0 | 0 | 0 | 0.005319 | 1 | 0.047872 | false | 0.005319 | 0.015957 | 0 | 0.132979 | 0.031915 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52fa3873d05bd8605ebd6971d5837e4ea96ef3aa | 3,784 | py | Python | cli/main.py | raba-jp/gh-label-sync | 4c5dc1be1e947138dd136d021bb0364a34c4da53 | [
"MIT"
] | null | null | null | cli/main.py | raba-jp/gh-label-sync | 4c5dc1be1e947138dd136d021bb0364a34c4da53 | [
"MIT"
] | null | null | null | cli/main.py | raba-jp/gh-label-sync | 4c5dc1be1e947138dd136d021bb0364a34c4da53 | [
"MIT"
] | null | null | null | import os
from itertools import chain
from typing import Dict, List
import github
import yaml
from fire import Fire
GITHUB_TOKEN: str = os.getenv('GITHUB_TOKEN')
gh: github.Github = github.Github(GITHUB_TOKEN)
class Label(object):
def __init__(self, data: Dict) -> None:
self.name = data.get('name')
self.color = data.get('color')
self.description = data.get('description')
def load_config(filepath: str) -> Dict:
data = None
def __load() -> Dict:
nonlocal data
if data is None:
            with open(filepath, 'r') as f:
                data = yaml.safe_load(f)
return data
return __load()
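# For reference, the config.yaml shape this loader expects, inferred from the
# keys read below ('user'/'organizations' and 'labels'); values are illustrative:
#
#   user: true              # or instead:  organizations: [my-org]
#   labels:
#     - name: bug
#       color: d73a4a
#       description: Something isn't working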
def expected_labels(data) -> List[Label]:
    labels: List[Label] = [Label(d) for d in data]
    return labels
def user_repositories(gh: github.Github) -> List[github.Repository.Repository]:
username: str = gh.get_user().login
return [
r for r in gh.get_user().get_repos()
if (not r.archived) and (r.owner.login == username)
]
def org_repositories(orgname: str,
gh: github.Github) -> List[github.Repository.Repository]:
return [
r for r in gh.get_organization(orgname).get_repos() if not r.archived
]
def target_repositories(data: Dict) -> List[github.Repository.Repository]:
if data.get("user"):
return user_repositories(gh)
else:
return chain.from_iterable([
org_repositories(orgname, gh)
for orgname in data.get('organizations')
])
def create_labels(current: List[github.Label.Label], expected: List[Label],
repo: github.Repository.Repository, run: bool) -> None:
def __creation_plan(current: List[github.Label.Label],
expected: List[Label]) -> List[Label]:
current_names: List[str] = [l.name for l in current]
expected_names: List[str] = [l.name for l in expected]
names: List[str] = list(set(expected_names) - set(current_names))
return [l for l in expected if l.name in names]
for l in __creation_plan(current, expected):
print(' Create `' + l.name + '`')
if run:
repo.create_label(l.name, l.color, l.description)
def delete_labels(current: List[github.Label.Label], expected: List[Label],
run: bool) -> None:
def __deletion_plan(current: List[github.Label.Label],
expected: List[Label]) -> List[github.Label.Label]:
current_names: List[str] = [l.name for l in current]
        expected_names: List[str] = [l.name for l in expected]
        names = list(set(current_names) - set(expected_names))
return [l for l in current if l.name in names]
for l in __deletion_plan(current, expected):
print(' Delete `' + l.name + '`')
if run:
l.delete()
def edit_labels(current: List[github.Label.Label], expected: List[Label],
run: bool) -> None:
for c in current:
for e in expected:
if c.name != e.name:
continue
print(' Edit `' + e.name + '`')
if run:
c.edit(e.name, e.color, e.description)
def sync(filepath='config.yaml', run=False):
print('======= In sync... =======')
if not run:
print('======= Dry Run Mode =======')
config: Dict = load_config(filepath)
for repo in target_repositories(config):
print(repo.full_name)
        current: List[github.Label.Label] = repo.get_labels()
expected: List[Label] = expected_labels(config.get('labels'))
delete_labels(current, expected, run)
create_labels(current, expected, repo, run)
edit_labels(current, expected, run)
print('======= Complete!! =======')
if __name__ == '__main__':
Fire({'sync': sync})
| 31.016393 | 79 | 0.604123 | 485 | 3,784 | 4.581443 | 0.171134 | 0.045005 | 0.021602 | 0.059406 | 0.307831 | 0.307831 | 0.271827 | 0.216022 | 0.19802 | 0.175518 | 0 | 0 | 0.262421 | 3,784 | 121 | 80 | 31.272727 | 0.79613 | 0 | 0 | 0.098901 | 0 | 0 | 0.050476 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.065934 | 0.010989 | 0.318681 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52fc11170ea0d482a87c4a495e1bb8d239745a5c | 32,301 | py | Python | imagej/imagej.py | oeway/pyimagej | 6b6b4a8cf57824a4e42fcac6348b06db843c4522 | [
"Apache-2.0"
] | 1 | 2021-03-31T19:18:19.000Z | 2021-03-31T19:18:19.000Z | imagej/imagej.py | oeway/pyimagej | 6b6b4a8cf57824a4e42fcac6348b06db843c4522 | [
"Apache-2.0"
] | null | null | null | imagej/imagej.py | oeway/pyimagej | 6b6b4a8cf57824a4e42fcac6348b06db843c4522 | [
"Apache-2.0"
] | null | null | null | """
Wrapper for ImageJ and Python integration using ImgLyb.
"""
# TODO: Unify version declaration to one place.
# https://www.python.org/dev/peps/pep-0396/#deriving
__version__ = '0.6.0.dev0'
__author__ = 'Curtis Rueden, Yang Liu, Michael Pinkert'
import logging, os, re, sys
import scyjava_config
import jnius_config
from pathlib import Path
import numpy
import xarray as xr
_logger = logging.getLogger(__name__)
# Enable debug logging if DEBUG environment variable is set.
try:
debug = os.environ['DEBUG']
if debug:
_logger.setLevel(logging.DEBUG)
except KeyError as e:
pass
def _dump_exception(exc):
if _logger.isEnabledFor(logging.DEBUG) and hasattr(exc, 'stacktrace'):
_logger.debug("\n\tat ".join([str(e) for e in exc.stacktrace]))
def search_for_jars(ij_dir, subfolder):
"""
    Search a directory tree and collect .jar files into a list.
    :param ij_dir: System path for Fiji.app
    :param subfolder: the subfolder to be searched
:return: a list of jar files
"""
jars = []
for root, dirs, files in os.walk(ij_dir + subfolder):
for f in files:
if f.endswith('.jar'):
path = root + '/' + f
jars.append(path)
_logger.debug('Added %s', path)
return jars
def set_ij_env(ij_dir):
"""
    Create a list of required JARs and add them to the Java classpath.
    :param ij_dir: System path for Fiji.app
    :return: the number of JAR files added (int)
"""
jars = []
# search jars directory
jars.extend(search_for_jars(ij_dir, '/jars'))
# search plugins directory
jars.extend(search_for_jars(ij_dir, '/plugins'))
# add to classpath
scyjava_config.add_classpath(os.pathsep.join(jars))
return len(jars)
def init(ij_dir_or_version_or_endpoint=None, headless=True, new_instance=False):
"""
Initialize the ImageJ environment.
:param ij_dir_or_version_or_endpoint:
Path to a local ImageJ installation (e.g. /Applications/Fiji.app),
OR version of net.imagej:imagej artifact to launch (e.g. 2.0.0-rc-67),
OR endpoint of another artifact (e.g. sc.fiji:fiji) that uses imagej.
OR list of Maven artifacts to include (e.g. ['net.imagej:imagej-legacy', 'net.preibisch:BigStitcher'])
:param headless: Whether to start the JVM in headless or gui mode.
:param new_instance: If JVM is already running, setting this parameter to
True will create a new ImageJ instance.
:return: an instance of the net.imagej.ImageJ gateway
"""
global ij
if jnius_config.vm_running and not new_instance:
_logger.warning('The JVM is already running.')
return ij
if not jnius_config.vm_running:
if headless:
scyjava_config.add_options('-Djava.awt.headless=true')
if ij_dir_or_version_or_endpoint is None:
# Use latest release of ImageJ.
_logger.debug('Using newest ImageJ release')
scyjava_config.add_endpoints('net.imagej:imagej')
elif isinstance(ij_dir_or_version_or_endpoint, list):
# Assume that this is a list of Maven endpoints
endpoint = '+'.join(ij_dir_or_version_or_endpoint)
_logger.debug('List of Maven coordinates given: %s', ij_dir_or_version_or_endpoint)
scyjava_config.add_endpoints(endpoint)
elif os.path.isdir(ij_dir_or_version_or_endpoint):
# Assume path to local ImageJ installation.
path = ij_dir_or_version_or_endpoint
_logger.debug('Local path to ImageJ installation given: %s', path)
num_jars = set_ij_env(path)
_logger.info("Added " + str(num_jars + 1) + " JARs to the Java classpath.")
plugins_dir = str(Path(path, 'plugins'))
scyjava_config.add_options('-Dplugins.dir=' + plugins_dir)
elif re.match('^(/|[A-Za-z]:)', ij_dir_or_version_or_endpoint):
# Looks like a file path was intended, but it's not a folder.
path = ij_dir_or_version_or_endpoint
_logger.error('Local path given is not a directory: %s', path)
return False
elif ':' in ij_dir_or_version_or_endpoint:
# Assume endpoint of an artifact.
# Strip out white spaces
endpoint = ij_dir_or_version_or_endpoint.replace(" ", "")
_logger.debug('Maven coordinate given: %s', endpoint)
scyjava_config.add_endpoints(endpoint)
else:
# Assume version of net.imagej:imagej.
version = ij_dir_or_version_or_endpoint
_logger.debug('ImageJ version given: %s', version)
scyjava_config.add_endpoints('net.imagej:imagej:' + version)
# Must import imglyb (not scyjava) to spin up the JVM now.
import imglyb
from jnius import autoclass, JavaException, cast
import scyjava
# Initialize ImageJ.
ImageJ = autoclass('net.imagej.ImageJ')
ij = ImageJ()
# Append some useful utility functions to the ImageJ gateway.
from scyjava import jclass, isjava, to_java, to_python
Dataset = autoclass('net.imagej.Dataset')
ImgPlus = autoclass('net.imagej.ImgPlus')
Img = autoclass('net.imglib2.img.Img')
RandomAccessibleInterval = autoclass('net.imglib2.RandomAccessibleInterval')
Axes = autoclass('net.imagej.axis.Axes')
Double = autoclass('java.lang.Double')
# EnumeratedAxis is a new axis made for xarray, so is only present in ImageJ versions that are released
# later than March 2020. This check defaults to LinearAxis instead if Enumerated does not work.
try:
EnumeratedAxis = autoclass('net.imagej.axis.EnumeratedAxis')
except JavaException:
DefaultLinearAxis = autoclass('net.imagej.axis.DefaultLinearAxis')
def EnumeratedAxis(axis_type, values):
origin = values[0]
scale = values[1] - values[0]
axis = DefaultLinearAxis(axis_type, scale, origin)
return axis
# Try to define the legacy service, and create a dummy method if it doesn't exist.
try:
LegacyService = autoclass('net.imagej.legacy.LegacyService')
legacyService = cast(LegacyService, ij.get("net.imagej.legacy.LegacyService"))
except JavaException:
class LegacyService:
def isActive(self):
return False
legacyService = LegacyService()
# Create a method to get the legacy service that is similar to other ImageJ services
def legacy():
try:
legacyService = cast(LegacyService, ij.get('net.imagej.legacy.LegacyService'))
except JavaException:
legacyService = LegacyService()
return legacyService
setattr(ij, 'legacy', legacy)
if legacyService.isActive():
WindowManager = autoclass('ij.WindowManager')
else:
class WindowManager:
def getCurrentImage(self):
"""
Throw an error saying IJ1 is not available
:return:
"""
raise ImportError("Your ImageJ installation does not support IJ1. This function does not work.")
WindowManager = WindowManager()
class ImageJPython:
def __init__(self, ij):
self._ij = ij
def dims(self, image):
"""
Return the dimensions of the equivalent numpy array for the image. Reverse dimension order from Java.
"""
if self._is_arraylike(image):
return image.shape
if not isjava(image):
raise TypeError('Unsupported type: ' + str(type(image)))
if jclass('net.imglib2.Dimensions').isInstance(image):
return [image.dimension(d) for d in range(image.numDimensions() -1, -1, -1)]
if jclass('ij.ImagePlus').isInstance(image):
dims = image.getDimensions()
dims.reverse()
dims = [dim for dim in dims if dim > 1]
return dims
raise TypeError('Unsupported Java type: ' + str(jclass(image).getName()))
def dtype(self, image_or_type):
"""
Return the dtype of the equivalent numpy array for the given image or type.
"""
if type(image_or_type) == numpy.dtype:
return image_or_type
if self._is_arraylike(image_or_type):
return image_or_type.dtype
if not isjava(image_or_type):
raise TypeError('Unsupported type: ' + str(type(image_or_type)))
# -- ImgLib2 types --
if jclass('net.imglib2.type.Type').isInstance(image_or_type):
ij2_types = {
'net.imglib2.type.logic.BitType': 'bool',
'net.imglib2.type.numeric.integer.ByteType': 'int8',
'net.imglib2.type.numeric.integer.ShortType': 'int16',
'net.imglib2.type.numeric.integer.IntType': 'int32',
'net.imglib2.type.numeric.integer.LongType': 'int64',
'net.imglib2.type.numeric.integer.UnsignedByteType': 'uint8',
'net.imglib2.type.numeric.integer.UnsignedShortType': 'uint16',
'net.imglib2.type.numeric.integer.UnsignedIntType': 'uint32',
'net.imglib2.type.numeric.integer.UnsignedLongType': 'uint64',
'net.imglib2.type.numeric.real.FloatType': 'float32',
'net.imglib2.type.numeric.real.DoubleType': 'float64',
}
for c in ij2_types:
if jclass(c).isInstance(image_or_type):
return numpy.dtype(ij2_types[c])
raise TypeError('Unsupported ImgLib2 type: {}'.format(image_or_type))
# -- ImgLib2 images --
if jclass('net.imglib2.IterableInterval').isInstance(image_or_type):
ij2_type = image_or_type.firstElement()
return self.dtype(ij2_type)
if jclass('net.imglib2.RandomAccessibleInterval').isInstance(image_or_type):
Util = autoclass('net.imglib2.util.Util')
ij2_type = Util.getTypeFromInterval(image_or_type)
return self.dtype(ij2_type)
# -- ImageJ1 images --
if jclass('ij.ImagePlus').isInstance(image_or_type):
ij1_type = image_or_type.getType()
ImagePlus = autoclass('ij.ImagePlus')
ij1_types = {
ImagePlus.GRAY8: 'uint8',
ImagePlus.GRAY16: 'uint16',
ImagePlus.GRAY32: 'float32', # NB: ImageJ1's 32-bit type is float32, not uint32.
}
for t in ij1_types:
if ij1_type == t:
return numpy.dtype(ij1_types[t])
raise TypeError('Unsupported ImageJ1 type: {}'.format(ij1_type))
raise TypeError('Unsupported Java type: ' + str(jclass(image_or_type).getName()))
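        # Illustrative mapping (assumes a running gateway `ij`; shown as
        # comments because it needs a live JVM):
        #   ij.py.dtype(numpy.zeros((2, 2), dtype='uint16'))  -> dtype('uint16')
        #   ij.py.dtype(ij.py.to_java(numpy.ones((2, 2))))    -> dtype('float64')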
def new_numpy_image(self, image):
"""
Creates a numpy image (NOT a Java image) dimensioned the same as
the given image, and with the same pixel type as the given image.
"""
try:
dtype_to_use = self.dtype(image)
except TypeError:
dtype_to_use = numpy.dtype('float64')
return numpy.zeros(self.dims(image), dtype=dtype_to_use)
def rai_to_numpy(self, rai):
"""
Convert a RandomAccessibleInterval into a numpy array
"""
result = self.new_numpy_image(rai)
self._ij.op().run("copy.rai", self.to_java(result), rai)
return result
def run_plugin(self, plugin, args=None, ij1_style=True):
"""
Run an ImageJ plugin
:param plugin: The string name for the plugin command
:param args: A dict of macro arguments in key/value pairs
:param ij1_style: Whether to use implicit booleans in IJ1 style or explicit booleans in IJ2 style
:return: The plugin output
"""
macro = self._assemble_plugin_macro(plugin, args=args, ij1_style=ij1_style)
return self.run_macro(macro)
def run_macro(self, macro, args=None):
"""
Run an ImageJ1 style macro script
:param macro: The macro code
:param args: Arguments for the script as a dictionary of key/value pairs
:return:
"""
if not ij.legacy().isActive():
raise ImportError("Your IJ endpoint does not support IJ1, and thus cannot use IJ1 macros.")
try:
if args is None:
return self._ij.script().run("macro.ijm", macro, True).get()
else:
return self._ij.script().run("macro.ijm", macro, True, to_java(args)).get()
except Exception as exc:
_dump_exception(exc)
raise exc
def run_script(self, language, script, args=None):
"""
Run a script in an IJ scripting language
:param language: The file extension for the scripting language
:param script: A string containing the script code
:param args: Arguments for the script as a dictionary of key/value pairs
:return:
"""
script_lang = self._ij.script().getLanguageByName(language)
if script_lang is None:
script_lang = self._ij.script().getLanguageByExtension(language)
if script_lang is None:
raise ValueError("Unknown script language: " + language)
exts = script_lang.getExtensions()
if exts.isEmpty():
raise ValueError("Script language '" + script_lang.getLanguageName() + "' has no extensions")
ext = exts.get(0)
try:
if args is None:
return self._ij.script().run("script." + ext, script, True).get()
return self._ij.script().run("script." + ext, script, True, to_java(args)).get()
except Exception as exc:
_dump_exception(exc)
raise exc
def to_java(self, data):
"""
Converts the data into a java equivalent. For numpy arrays, the java image points to the python array.
In addition to the scyjava types, we allow ndarray-like and xarray-like variables
"""
if self._is_memoryarraylike(data):
return imglyb.to_imglib(data)
if self._is_xarraylike(data):
return self.to_dataset(data)
return to_java(data)
def to_dataset(self, data):
"""Converts the data into an ImageJ dataset"""
if self._is_xarraylike(data):
return self._xarray_to_dataset(data)
if self._is_arraylike(data):
return self._numpy_to_dataset(data)
if scyjava.isjava(data):
return self._java_to_dataset(data)
raise TypeError(f'Type not supported: {type(data)}')
def _numpy_to_dataset(self, data):
rai = imglyb.to_imglib(data)
return self._java_to_dataset(rai)
def _ends_with_channel_axis(self, xarr):
ends_with_axis = xarr.dims[len(xarr.dims)-1].lower() in ['c', 'channel']
return ends_with_axis
def _xarray_to_dataset(self, xarr):
"""
Converts a xarray dataarray to a dataset, inverting C-style (slow axis first) to F-style (slow-axis last)
:param xarr: Pass an xarray dataarray and turn into a dataset.
:return: The dataset
"""
if self._ends_with_channel_axis(xarr):
vals = numpy.moveaxis(xarr.values, -1, 0)
dataset = self._numpy_to_dataset(vals)
else:
dataset = self._numpy_to_dataset(xarr.values)
axes = self._assign_axes(xarr)
dataset.setAxes(axes)
self._assign_dataset_metadata(dataset, xarr.attrs)
return dataset
def _assign_axes(self, xarr):
"""
Obtain xarray axes names, origin, and scale and convert into ImageJ Axis; currently supports EnumeratedAxis
:param xarr: xarray that holds the units
:return: A list of ImageJ Axis with the specified origin and scale
"""
axes = ['']*len(xarr.dims)
for axis in xarr.dims:
axis_str = self._pydim_to_ijdim(axis)
ax_type = Axes.get(axis_str)
ax_num = self._get_axis_num(xarr, axis)
scale = self._get_scale(xarr.coords[axis])
if scale is None:
logging.warning(f"The {ax_type.label} axis is non-numeric and is translated to a linear index.")
doub_coords = [Double(numpy.double(x)) for x in numpy.arange(len(xarr.coords[axis]))]
else:
doub_coords = [Double(numpy.double(x)) for x in xarr.coords[axis]]
# EnumeratedAxis is a new axis made for xarray, so is only present in ImageJ versions that are released
# later than March 2020. This actually returns a LinearAxis if using an earlier version.
java_axis = EnumeratedAxis(ax_type, ij.py.to_java(doub_coords))
axes[ax_num] = java_axis
return axes
def _pydim_to_ijdim(self, axis):
"""Convert between the lowercase Python convention (x, y, z, c, t) to IJ (X, Y, Z, C, T)"""
if str(axis) in ['x', 'y', 'z', 'c', 't']:
return str(axis).upper()
return str(axis)
def _ijdim_to_pydim(self, axis):
"""Convert the IJ uppercase dimension convention (X, Y, Z C, T) to lowercase python (x, y, z, c, t) """
if str(axis) in ['X', 'Y', 'Z', 'C', 'T']:
return str(axis).lower()
return str(axis)
def _get_axis_num(self, xarr, axis):
"""
Get the xarray -> java axis number due to inverted axis order for C style numpy arrays (default)
:param xarr: Xarray to convert
:param axis: Axis number to convert
:return: Axis idx in java
"""
py_axnum = xarr.get_axis_num(axis)
if numpy.isfortran(xarr.values):
return py_axnum
if self._ends_with_channel_axis(xarr):
if axis == len(xarr.dims) - 1:
return axis
else:
return len(xarr.dims) - py_axnum - 2
else:
return len(xarr.dims) - py_axnum - 1
def _assign_dataset_metadata(self, dataset, attrs):
"""
:param dataset: ImageJ Java dataset
:param attrs: Dictionary containing metadata
"""
dataset.getProperties().putAll(self.to_java(attrs))
def _get_origin(self, axis):
"""
Get the coordinate origin of an axis, assuming it is the first entry.
:param axis: A 1D list like entry accessible with indexing, which contains the axis coordinates
:return: The origin for this axis.
"""
return axis.values[0]
def _get_scale(self, axis):
"""
Get the scale of an axis, assuming it is linear and so the scale is simply second - first coordinate.
:param axis: A 1D list like entry accessible with indexing, which contains the axis coordinates
:return: The scale for this axis or None if it is a non-numeric scale.
"""
try:
return axis.values[1] - axis.values[0]
except TypeError:
return None
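        # Worked example (illustrative; uses the xr import above):
        #   axis = xr.DataArray(numpy.zeros(4), dims='x',
        #                       coords={'x': [0.0, 0.5, 1.0, 1.5]}).coords['x']
        #   _get_origin(axis) -> 0.0 and _get_scale(axis) -> 0.5 - 0.0 = 0.5;
        #   a string-labelled axis (e.g. ['a', 'b', 'c']) fails the subtraction
        #   and yields None, triggering the linear-index fallback in _assign_axes.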
def _java_to_dataset(self, data):
"""
Converts the data into a ImageJ Dataset
"""
            # This try checking is necessary because the set of ImageJ converters is not complete. E.g., there is no way
# to directly go from Img to Dataset, instead you need to chain the Img->ImgPlus->Dataset converters.
try:
if self._ij.convert().supports(data, Dataset):
return self._ij.convert().convert(data, Dataset)
if self._ij.convert().supports(data, ImgPlus):
imgPlus = self._ij.convert().convert(data, ImgPlus)
return self._ij.dataset().create(imgPlus)
if self._ij.convert().supports(data, Img):
img = self._ij.convert().convert(data, Img)
return self._ij.dataset().create(ImgPlus(img))
if self._ij.convert().supports(data, RandomAccessibleInterval):
rai = self._ij.convert().convert(data, RandomAccessibleInterval)
return self._ij.dataset().create(rai)
except Exception as exc:
_dump_exception(exc)
raise exc
raise TypeError('Cannot convert to dataset: ' + str(type(data)))
def from_java(self, data):
"""
Converts the data into a python equivalent
"""
            # TODO: convert a dataset to xarray
if not isjava(data): return data
try:
if self._ij.convert().supports(data, Dataset):
# HACK: Converter exists for ImagePlus -> Dataset, but not ImagePlus -> RAI.
data = self._ij.convert().convert(data, Dataset)
return self._dataset_to_xarray(data)
if self._ij.convert().supports(data, RandomAccessibleInterval):
rai = self._ij.convert().convert(data, RandomAccessibleInterval)
return self.rai_to_numpy(rai)
except Exception as exc:
_dump_exception(exc)
raise exc
return to_python(data)
def _dataset_to_xarray(self, dataset):
"""
Converts an ImageJ dataset into an xarray, inverting F-style (slow idx last) to C-style (slow idx first)
:param dataset: ImageJ dataset
:return: xarray with reversed (C-style) dims and coords as labeled by the dataset
"""
attrs = self._ij.py.from_java(dataset.getProperties())
axes = [(cast('net.imagej.axis.CalibratedAxis', dataset.axis(idx)))
for idx in range(dataset.numDimensions())]
dims = [self._ijdim_to_pydim(axes[idx].type().getLabel()) for idx in range(len(axes))]
values = self.rai_to_numpy(dataset)
coords = self._get_axes_coords(axes, dims, numpy.shape(numpy.transpose(values)))
if dims[len(dims)-1].lower() in ['c', 'channel']:
xarr_dims = self._invert_except_last_element(dims)
values = numpy.moveaxis(values, 0, -1)
else:
xarr_dims = list(reversed(dims))
xarr = xr.DataArray(values, dims=xarr_dims, coords=coords, attrs=attrs)
return xarr
def _invert_except_last_element(self, lst):
"""
Invert a list except for the last element.
:param lst:
:return:
"""
cut_list = lst[0:-1]
reverse_cut = list(reversed(cut_list))
reverse_cut.append(lst[-1])
return reverse_cut
def _get_axes_coords(self, axes, dims, shape):
"""
Get xarray style coordinate list dictionary from a dataset
:param axes: List of ImageJ axes
:param dims: List of axes labels for each dataset axis
:param shape: F-style, or reversed C-style, shape of axes numpy array.
:return: Dictionary of coordinates for each axis.
"""
coords = {dims[idx]: [axes[idx].calibratedValue(position) for position in range(shape[idx])]
for idx in range(len(dims))}
return coords
def show(self, image, cmap=None):
"""
Display a java or python 2D image.
:param image: A java or python image that can be converted to a numpy array
:param cmap: The colormap of the image, if it is not RGB
:return:
"""
if image is None:
raise TypeError('Image must not be None')
# NB: Import this only here on demand, rather than above.
# Otherwise, some headless systems may experience errors
# like "ImportError: Failed to import any qt binding".
from matplotlib import pyplot
pyplot.imshow(self.from_java(image), interpolation='nearest', cmap=cmap)
pyplot.show()
def _is_arraylike(self, arr):
return hasattr(arr, 'shape') and \
hasattr(arr, 'dtype') and \
hasattr(arr, '__array__') and \
hasattr(arr, 'ndim')
def _is_memoryarraylike(self, arr):
return self._is_arraylike(arr) and \
hasattr(arr, 'data') and \
type(arr.data).__name__ == 'memoryview'
def _is_xarraylike(self, xarr):
return hasattr(xarr, 'values') and \
hasattr(xarr, 'dims') and \
hasattr(xarr, 'coords') and \
self._is_arraylike(xarr.values)
def _assemble_plugin_macro(self, plugin: str, args=None, ij1_style=True):
"""
Assemble an ImageJ macro string given a plugin to run and optional arguments in a dict
:param plugin: The string call for the function to run
:param args: A dict of macro arguments in key/value pairs
:param ij1_style: Whether to use implicit booleans in IJ1 style or explicit booleans in IJ2 style
:return: A string version of the macro run
"""
if args is None:
macro = "run(\"{}\");".format(plugin)
return macro
macro = """run("{0}", \"""".format(plugin)
for key, value in args.items():
argument = self._format_argument(key, value, ij1_style)
if argument is not None:
macro = macro + ' {}'.format(argument)
macro = macro + """\");"""
return macro
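        # Worked example (illustrative plugin name): with
        # args={'sigma': 3, 'stack': True} and ij1_style=True this assembles
        #   run("Gaussian Blur...", " sigma=[3] stack");
        # since non-boolean values are bracket-wrapped by _format_value below.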
def _format_argument(self, key, value, ij1_style):
if value is True:
argument = '{}'.format(key)
if not ij1_style:
argument = argument + '=true'
elif value is False:
argument = None
if not ij1_style:
argument = '{0}=false'.format(key)
elif value is None:
raise NotImplementedError('Conversion for None is not yet implemented')
else:
val_str = self._format_value(value)
argument = '{0}={1}'.format(key, val_str)
return argument
def _format_value(self, value):
temp_value = str(value).replace('\\', '/')
if temp_value.startswith('[') and temp_value.endswith(']'):
return temp_value
final_value = '[' + temp_value + ']'
return final_value
def window_manager(self):
"""
Get the ImageJ1 window manager if legacy mode is enabled. It may not work properly if in headless mode.
:return: WindowManager
"""
            if not ij.legacy().isActive():
raise ImportError("Your ImageJ installation does not support IJ1. This function does not work.")
elif ij.ui().isHeadless():
logging.warning("Operating in headless mode - The WindowManager will not be fully funtional.")
else:
return WindowManager
def active_xarray(self, sync=True):
"""
Convert the active image to a xarray.DataArray, synchronizing from IJ1 -> IJ2
:param sync: Manually synchronize the current IJ1 slice if True
:return: numpy array containing the image data
"""
# todo: make the behavior use pure IJ2 if legacy is not active
if ij.legacy().isActive():
imp = self.active_image_plus(sync=sync)
return self._ij.py.from_java(imp)
else:
dataset = self.active_dataset()
return self._ij.py.from_java(dataset)
def active_dataset(self):
"""Get the currently active Dataset from the Dataset service"""
return self._ij.imageDisplay().getActiveDataset()
def active_image_plus(self, sync=True):
"""
Get the currently active IJ1 image, optionally synchronizing from IJ1 -> IJ2
:param sync: Manually synchronize the current IJ1 slice if True
:return: The ImagePlus corresponding to the active image
"""
imp = WindowManager.getCurrentImage()
if sync:
self.synchronize_ij1_to_ij2(imp)
return imp
def synchronize_ij1_to_ij2(self, imp):
"""
Synchronize between a Dataset or ImageDisplay linked to an ImagePlus by accepting the ImagePlus data as true
:param imp: The IJ1 ImagePlus that needs to be synchronized
"""
# This code is necessary because an ImagePlus can sometimes be modified without modifying the
# linked Dataset/ImageDisplay. This happens when someone uses the ImageProcessor of the ImagePlus to change
# values on a slice. The imagej-legacy layer does not synchronize when this happens to prevent
# significant overhead, as otherwise changing a single pixel would mean syncing a whole slice. The
# ImagePlus also has a stack, which in the legacy case links to the Dataset/ImageDisplay. This stack is
# updated by the legacy layer when you change slices, using ImageJVirtualStack.setPixelsZeroBasedIndex().
# As such, we only need to make sure that the current 2D image slice is up to date. We do this by manually
# setting the stack to be the same as the imageprocessor.
stack = imp.getStack()
pixels = imp.getProcessor().getPixels()
# Don't sync if the ImagePlus is not linked back to a corresponding dataset
if str(type(pixels)) == '<class \'jnius.ByteArray\'>':
return
stack.setPixels(pixels, imp.getCurrentSlice())
ij.py = ImageJPython(ij)
# Forward stdout and stderr from Java to Python.
from jnius import PythonJavaClass, java_method
class JavaOutputListener(PythonJavaClass):
__javainterfaces__ = ['org/scijava/console/OutputListener']
@java_method('(Lorg/scijava/console/OutputEvent;)V')
def outputOccurred(self, e):
source = e.getSource().toString()
output = e.getOutput()
if source == 'STDOUT':
sys.stdout.write(output)
elif source == 'STDERR':
sys.stderr.write(output)
else:
sys.stderr.write('[{}] {}'.format(source, output))
ij.py._outputMapper = JavaOutputListener()
ij.console().addOutputListener(ij.py._outputMapper)
return ij
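# Typical usage from a client script (illustrative; spins up a JVM, so shown
# as comments only):
#   import imagej, numpy
#   ij = imagej.init(headless=True)        # newest ImageJ release
#   dataset = ij.py.to_java(numpy.random.rand(64, 64))
#   print(ij.py.dims(dataset))             # -> [64, 64]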
def imagej_main():
    args = sys.argv[1:]
ij = init(headless='--headless' in args)
# TODO: Investigate why ij.launch(args) doesn't work.
ij.ui().showUI()
def help():
"""
print the instruction for using imagej module
:return:
"""
print(("Please set the environment variables first:\n"
"Fiji.app: ij_dir = 'your local fiji.app path'\n"
"Then call init(ij_dir)"))
| 42.669749 | 120 | 0.581561 | 3,815 | 32,301 | 4.797641 | 0.166972 | 0.009179 | 0.010818 | 0.009944 | 0.254166 | 0.210348 | 0.169371 | 0.142217 | 0.110474 | 0.100967 | 0 | 0.007421 | 0.328349 | 32,301 | 756 | 121 | 42.72619 | 0.83623 | 0.257577 | 0 | 0.175676 | 0 | 0 | 0.116983 | 0.04075 | 0 | 0 | 0 | 0.005291 | 0 | 1 | 0.108108 | false | 0.002252 | 0.033784 | 0.009009 | 0.308559 | 0.002252 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52fc11811d8171f6e5dd556a354f614af7c939d5 | 1,270 | py | Python | demo/ecb.py | uldisa/tuxedo-python | 59bd44ee9be1807b63599b48b3af9b4dc4ac4277 | [
"MIT"
] | null | null | null | demo/ecb.py | uldisa/tuxedo-python | 59bd44ee9be1807b63599b48b3af9b4dc4ac4277 | [
"MIT"
] | null | null | null | demo/ecb.py | uldisa/tuxedo-python | 59bd44ee9be1807b63599b48b3af9b4dc4ac4277 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# HTTP+XML client as Oracle Tuxedo server
# Caches rates in memory until client.py calls RELOAD_* services
import os
import sys
import urllib.request
from xml.etree import ElementTree as et
import tuxedo as t
class Server:
def tpsvrinit(self, args):
self._rates = None
t.userlog('Server startup')
t.tpadvertise('GETRATE')
t.tpadvertisex('RELOAD_' + str(os.getpid()), 'RELOAD', t.TPSINGLETON + t.TPSECONDARYRQ)
return 0
def tpsvrdone(self):
t.userlog('Server shutdown')
def GETRATE(self, args):
if self._rates is None:
t.userlog('Loading rates')
f = urllib.request.urlopen('https://www.ecb.europa.eu/stats/eurofxref/eurofxref-daily.xml')
x = et.fromstring(f.read().decode('utf8'))
self._rates = {}
for r in x.findall('.//*[@currency]'):
self._rates[r.attrib['currency']] = float(r.attrib['rate'])
return t.tpreturn(t.TPSUCCESS, 0, {'RATE': self._rates[args['CURRENCY'][0]]})
def RELOAD(self, args):
t.userlog('Singleton called with' + str(args))
self._rates = None
return t.tpreturn(t.TPSUCCESS, 0, {})
if __name__ == '__main__':
t.run(Server(), sys.argv)
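# Hypothetical client call (the tpcall signature and return shape here are
# assumptions of this sketch, not verified against the tuxedo module):
#   _, _, data = t.tpcall('GETRATE', {'CURRENCY': 'USD'})
#   print(data['RATE'])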
| 31.75 | 103 | 0.61811 | 166 | 1,270 | 4.63253 | 0.5 | 0.070221 | 0.03381 | 0.044213 | 0.06762 | 0.06762 | 0 | 0 | 0 | 0 | 0 | 0.006198 | 0.237795 | 1,270 | 39 | 104 | 32.564103 | 0.788223 | 0.097638 | 0 | 0.068966 | 0 | 0 | 0.170604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.172414 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
52fdfc231fed1779605dedf6f02ce7a70650e2f0 | 4,462 | py | Python | tests/image/test_processing.py | lorenzo-cavazzi/bblab | 60146d2b073ae9bc4e380814ceb7015242ee1e2a | [
"MIT"
] | null | null | null | tests/image/test_processing.py | lorenzo-cavazzi/bblab | 60146d2b073ae9bc4e380814ceb7015242ee1e2a | [
"MIT"
] | 2 | 2018-09-23T17:02:35.000Z | 2019-04-15T12:25:52.000Z | tests/image/test_processing.py | lorenzo-cavazzi/bblab | 60146d2b073ae9bc4e380814ceb7015242ee1e2a | [
"MIT"
] | null | null | null | """
Unit tests for bblab/image/processing.py
Please keep them up-to-date when developing new code.
Run the tests with
>>> python -m unittest
For detailed information please refer to https://readthedocsmissinglink.temp or
docs/source/tests.rst
"""
import os
import unittest
import platform
import numpy
import cv2
import csv
from bblab.image import processing
from pathlib import Path
FOLDER_CHANNELS = "./data/1i_channels"
FOLDER_OVERLAY = "./data/1o_overlay"
FOLDER_MASK = "./data/2i_mask"
FOLDER_HIGHLIGHT = "./data/2o_highlight"
FOLDER_MEAN = "./data/3i_mean"
FOLDER_TEMP = "./tests/image"
class TestProcessing(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
# @unittest.expectedFailure
# also available: @unittest.skip(reason) and @unittest.skipIf(condition, reason)
def test_check_images_availability(self):
self.assertEqual(len(os.listdir(str(Path(FOLDER_CHANNELS)))), 3,
"Channels images not available for tests")
self.assertEqual(len(os.listdir(str(Path(FOLDER_OVERLAY)))), 1,
"Overlay image not available for tests")
self.assertEqual(len(os.listdir(str(Path(FOLDER_MASK)))), 1,
"Mask image not available for tests")
self.assertEqual(len(os.listdir(str(Path(FOLDER_HIGHLIGHT)))), 1,
"Highlight image not available for tests")
# self.assertEqual(len(os.listdir(str(Path(FOLDER_MEAN)))), 1,
# "Mean Csv not available for tests")
def test_get_filenames_from_folder(self):
filenames = processing._get_filenames_from_folder(FOLDER_CHANNELS)
self.assertEqual(len(filenames), 3)
def test_get_validated_filenames(self):
filenames = processing._get_filenames_from_folder(FOLDER_CHANNELS)
self.assertEqual(processing._get_validated_filenames(filenames), filenames)
# TODO: smarter way to test path...
def test_build_validated_filename(self):
filename_function = processing._build_validated_filename(FOLDER_OVERLAY, "fakename", extension=".tiff")
if platform.system() == "Windows":
filename_manual = r"data\1o_overlay\fakename.tiff"
else:
filename_manual = r"data/1o_overlay/fakename.tiff"
self.assertEqual(str(filename_function), filename_manual)
def test_validate_filename(self):
filename_function = processing._validate_filename(FOLDER_OVERLAY + "/fakename.tiff")
if platform.system() == "Windows":
filename_manual = r"data\1o_overlay\fakename.tiff"
else:
filename_manual = r"data/1o_overlay/fakename.tiff"
self.assertEqual(str(filename_function), filename_manual)
def test_get_channel_mean(self):
temp_array = numpy.array([[100, 200, 300], [100, 200, 300], [100, 200, 300]], numpy.uint16)
temp_mask = numpy.array([[True, False, False], [True, False, False], [True, False, False]])
self.assertEqual(processing._get_channel_mean(temp_array, temp_mask), 100)
# TODO: using Pillow to load images to be compared? Would it be a double check?
def test_overlay_channels(self):
        image_processed = processing.overlay_channels(FOLDER_CHANNELS, False, return_image=True)
files_loaded = processing._get_filenames_from_folder(FOLDER_OVERLAY)
image_loaded = cv2.imread(str(files_loaded[0]), cv2.IMREAD_UNCHANGED)
self.assertTrue((image_processed == image_loaded).all())
def test_highlight_cells(self):
        image_processed = processing.highlight_cells(FOLDER_OVERLAY, FOLDER_MASK, FOLDER_HIGHLIGHT, return_image=True)
files_loaded = processing._get_filenames_from_folder(FOLDER_HIGHLIGHT)
image_loaded = cv2.imread(str(files_loaded[0]), cv2.IMREAD_UNCHANGED)
self.assertTrue((image_processed == image_loaded).all())
def test_compute_mean(self):
data_processed = processing.compute_mean(FOLDER_HIGHLIGHT, False, True)
data_processed_stringified = [{k: str(v) for k, v in row.items()} for row in data_processed]
file_paths = processing._get_filenames_from_folder(FOLDER_MEAN)
with open(str(file_paths[0]), "r", newline="") as file_reader:
data_loaded = [{k: v for k, v in row.items()} for row in csv.DictReader(file_reader, skipinitialspace=True)]
self.assertEqual(data_processed_stringified, data_loaded)
# if __name__ == "__main__":
# unittest.main() | 43.745098 | 120 | 0.70641 | 564 | 4,462 | 5.340426 | 0.255319 | 0.054781 | 0.035857 | 0.043825 | 0.444223 | 0.418991 | 0.383466 | 0.383466 | 0.370186 | 0.370186 | 0 | 0.01516 | 0.186912 | 4,462 | 102 | 121 | 43.745098 | 0.81505 | 0.136934 | 0 | 0.26087 | 0 | 0 | 0.104769 | 0.030232 | 0 | 0 | 0 | 0.009804 | 0.173913 | 1 | 0.15942 | false | 0.028986 | 0.115942 | 0 | 0.289855 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0026d3a268cfe4bc974d504f1b3c71b1673bce | 1,032 | py | Python | mongodb/factory/rpdata.py | RaenonX/Jelly-Bot-API | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 5 | 2020-08-26T20:12:00.000Z | 2020-12-11T16:39:22.000Z | mongodb/factory/rpdata.py | RaenonX/Jelly-Bot | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 234 | 2019-12-14T03:45:19.000Z | 2020-08-26T18:55:19.000Z | mongodb/factory/rpdata.py | RaenonX/Jelly-Bot-API | c7da1e91783dce3a2b71b955b3a22b68db9056cf | [
"MIT"
] | 2 | 2019-10-23T15:21:15.000Z | 2020-05-22T09:35:55.000Z | from models import OID_KEY, PendingRepairDataModel
from extutils.mongo import get_codec_options
from ._base import BaseCollection
from ._dbctrl import SINGLE_DB_NAME
from .factory import MONGO_CLIENT
from ..utils import BulkWriteDataHolder
__all__ = ("PendingRepairDataManager",)
DB_NAME = "pdrp"
class _PendingRepairDataManager:
def __init__(self):
if SINGLE_DB_NAME:
self._db = MONGO_CLIENT.get_database(SINGLE_DB_NAME)
else:
self._db = MONGO_CLIENT.get_database(DB_NAME)
def new_bulk_holder(self, col_inst: BaseCollection) -> BulkWriteDataHolder:
if SINGLE_DB_NAME:
col_full_name = f"{DB_NAME}.{col_inst.get_col_name()}"
else:
col_full_name = col_inst.full_name
col = self._db.get_collection(col_full_name, codec_options=get_codec_options())
col.create_index(f"{PendingRepairDataModel.Data.key}.{OID_KEY}", unique=True)
return BulkWriteDataHolder(col)
PendingRepairDataManager = _PendingRepairDataManager()
| 30.352941 | 87 | 0.734496 | 125 | 1,032 | 5.632 | 0.36 | 0.059659 | 0.068182 | 0.039773 | 0.079545 | 0.079545 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 1,032 | 33 | 88 | 31.272727 | 0.838095 | 0 | 0 | 0.173913 | 0 | 0 | 0.102713 | 0.098837 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.26087 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e047ec076af5340bb838887f4c38e67502ddae2 | 1,102 | py | Python | mysensors/cli/gateway_tcp.py | alexdz18/pymysensors | 41d002b5c9f4b2594147b72178de2fc2293fdb89 | [
"MIT"
] | 66 | 2015-05-29T16:15:29.000Z | 2022-01-07T14:06:24.000Z | mysensors/cli/gateway_tcp.py | alexdz18/pymysensors | 41d002b5c9f4b2594147b72178de2fc2293fdb89 | [
"MIT"
] | 95 | 2015-04-07T17:46:25.000Z | 2022-01-24T17:16:18.000Z | mysensors/cli/gateway_tcp.py | alexdz18/pymysensors | 41d002b5c9f4b2594147b72178de2fc2293fdb89 | [
"MIT"
] | 59 | 2015-04-03T02:06:05.000Z | 2022-01-19T17:03:17.000Z | """Start a tcp gateway."""
import click
from mysensors.cli.helper import (
common_gateway_options,
handle_msg,
run_async_gateway,
run_gateway,
)
from mysensors.gateway_tcp import AsyncTCPGateway, TCPGateway
def common_tcp_options(func):
"""Supply common tcp gateway options."""
func = click.option(
"-p",
"--port",
default=5003,
show_default=True,
type=int,
help="TCP port of the connection.",
)(func)
func = click.option(
"-H", "--host", required=True, help="TCP address of the gateway."
)(func)
return func
@click.command(options_metavar="<options>")
@common_tcp_options
@common_gateway_options
def tcp_gateway(**kwargs):
"""Start a tcp gateway."""
gateway = TCPGateway(event_callback=handle_msg, **kwargs)
run_gateway(gateway)
@click.command(options_metavar="<options>")
@common_tcp_options
@common_gateway_options
def async_tcp_gateway(**kwargs):
"""Start an async tcp gateway."""
gateway = AsyncTCPGateway(event_callback=handle_msg, **kwargs)
run_async_gateway(gateway)
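# Illustrative invocation (option names come from the decorators above; the
# executable name and any options added by common_gateway_options are
# assumptions of this sketch):
#   pymysensors tcp-gateway --host 192.168.1.10 --port 5003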
| 24.488889 | 73 | 0.683303 | 134 | 1,102 | 5.395522 | 0.335821 | 0.082988 | 0.082988 | 0.04426 | 0.284924 | 0.284924 | 0.19917 | 0.19917 | 0.19917 | 0.19917 | 0 | 0.004489 | 0.19147 | 1,102 | 44 | 74 | 25.045455 | 0.806958 | 0.094374 | 0 | 0.30303 | 0 | 0 | 0.090072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.212121 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e071ee47c7d5003d3647aa350fd9e7b1ae14d7a | 2,309 | py | Python | project/forms.py | bjthorpe/cogs3 | d6f0091c41f784ff1884456037685bfabd7c1897 | [
"MIT"
] | null | null | null | project/forms.py | bjthorpe/cogs3 | d6f0091c41f784ff1884456037685bfabd7c1897 | [
"MIT"
] | 150 | 2018-08-07T09:34:47.000Z | 2019-08-15T20:16:11.000Z | project/forms.py | bjthorpe/cogs3 | d6f0091c41f784ff1884456037685bfabd7c1897 | [
"MIT"
] | 2 | 2019-02-20T15:40:30.000Z | 2019-07-01T15:03:55.000Z | from django import forms
from project.models import Project
from project.models import ProjectUserMembership
class ProjectAdminForm(forms.ModelForm):
class Meta:
model = Project
fields = '__all__'
def clean_code(self):
"""
Ensure the project code is unique.
"""
current_code = self.instance.code
updated_code = self.cleaned_data['code']
if current_code != updated_code:
if Project.objects.filter(code=updated_code).exists():
raise forms.ValidationError('Project code must be unique.')
return updated_code
class ProjectCreationForm(forms.ModelForm):
class Meta:
model = Project
exclude = [
'code',
'category',
'status',
'allocation_systems',
'members',
'tech_lead',
'notes',
'economic_user',
'allocation_rse',
'reason_decision',
]
widgets = {
'start_date': forms.DateInput(attrs={
'class': 'datepicker'
}),
'end_date': forms.DateInput(attrs={
'class': 'datepicker'
}),
}
class ProjectUserMembershipCreationForm(forms.Form):
project_code = forms.CharField(max_length=20)
def clean_project_code(self):
# Verify the project code is valid and the project has been approved.
project_code = self.cleaned_data['project_code']
try:
project = Project.objects.get(code=project_code)
user = self.initial.get('user', None)
# The technical lead will automatically be added as a member of the of project.
if project.tech_lead == user:
raise forms.ValidationError("You are currently a member of the project.")
if project.awaiting_approval():
raise forms.ValidationError("The project is currently awaiting approval.")
if ProjectUserMembership.objects.filter(project=project, user=user).exists():
raise forms.ValidationError("A membership request for this project already exists.")
except Project.DoesNotExist:
raise forms.ValidationError("Invalid Project Code.")
return project_code
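# Illustrative usage from a Django view (hedged sketch; `request` and the
# project code value are assumptions):
#   form = ProjectUserMembershipCreationForm(
#       data={'project_code': 'scw0001'}, initial={'user': request.user})
#   if form.is_valid():
#       ...  # create a ProjectUserMembership for form.cleaned_data['project_code']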
| 32.985714 | 100 | 0.601559 | 232 | 2,309 | 5.857759 | 0.409483 | 0.080942 | 0.091979 | 0.033848 | 0.107432 | 0.107432 | 0 | 0 | 0 | 0 | 0 | 0.001257 | 0.310957 | 2,309 | 69 | 101 | 33.463768 | 0.852923 | 0.078389 | 0 | 0.150943 | 0 | 0 | 0.17166 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0 | 0.056604 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e07441b244e0dfffabf0e07017fe73eb14c3333 | 8,234 | py | Python | memc_load_mp.py | alexyvassili/otuspy-memcload | b1b79206293ba2426b697b1d2e12e0601d2e00c3 | [
"MIT"
] | null | null | null | memc_load_mp.py | alexyvassili/otuspy-memcload | b1b79206293ba2426b697b1d2e12e0601d2e00c3 | [
"MIT"
] | null | null | null | memc_load_mp.py | alexyvassili/otuspy-memcload | b1b79206293ba2426b697b1d2e12e0601d2e00c3 | [
"MIT"
] | 1 | 2019-11-28T10:09:20.000Z | 2019-11-28T10:09:20.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import gzip
import sys
import glob
import logging
import collections
from optparse import OptionParser
# brew install protobuf
# protoc --python_out=. ./appsinstalled.proto
# pip install protobuf
import appsinstalled_pb2
# pip install python-memcached
import memcache
from multiprocessing import Queue, Process, Array, current_process
from itertools import islice
NORMAL_ERR_RATE = 0.01
BATCH_SIZE = 10000
PARSERS_NUM = 4
AppsInstalled = collections.namedtuple("AppsInstalled", ["dev_type", "dev_id", "lat", "lon", "apps"])
def dot_rename(path):
head, fn = os.path.split(path)
# atomic in most cases
os.rename(path, os.path.join(head, "." + fn))
def insert_appsinstalled(memc_addr, memc_clients, appsinstalled, dry_run=False):
ua = appsinstalled_pb2.UserApps()
ua.lat = appsinstalled.lat
ua.lon = appsinstalled.lon
key = "%s:%s" % (appsinstalled.dev_type, appsinstalled.dev_id)
ua.apps.extend(appsinstalled.apps)
packed = ua.SerializeToString()
# @TODO persistent connection
# @TODO retry and timeouts!
try:
if dry_run:
logging.debug("%s - %s -> %s" % (memc_addr, key, str(ua).replace("\n", " ")))
else:
memc_clients[memc_addr].set(key, packed)
except Exception as e:
logging.exception("Cannot write to memc %s: %s" % (memc_addr, e))
return False
return True
def parse_appsinstalled(line):
# line = line.decode()
line_parts = line.strip().split("\t")
if len(line_parts) < 5:
return
dev_type, dev_id, lat, lon, raw_apps = line_parts
if not dev_type or not dev_id:
return
try:
apps = [int(a.strip()) for a in raw_apps.split(",")]
except ValueError:
apps = [int(a.strip()) for a in raw_apps.split(",") if a.isidigit()]
logging.info("Not all user apps are digits: `%s`" % line)
try:
lat, lon = float(lat), float(lon)
except ValueError:
logging.info("Invalid geo coords: `%s`" % line)
return AppsInstalled(dev_type, dev_id, lat, lon, apps)
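# Worked example (sample line taken from prototest() below):
#   parse_appsinstalled("idfa\t1rfw452y52g2gq4g\t55.55\t42.42\t1423,43,567,3,7,23")
# returns
#   AppsInstalled(dev_type='idfa', dev_id='1rfw452y52g2gq4g',
#                 lat=55.55, lon=42.42, apps=[1423, 43, 567, 3, 7, 23])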
def process_gz(file, batch_queue):
logging.info('Processing %s' % file)
fd = gzip.open(file, 'rt')
batch = list(islice(fd, BATCH_SIZE))
while batch:
batch_queue.put((file, batch))
batch = list(islice(fd, BATCH_SIZE))
batch_queue.put((file, ['EOF']))
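# Batching sketch: islice(fd, BATCH_SIZE) pulls up to BATCH_SIZE lines per
# chunk without reading the whole file into memory. The same mechanism on a
# small iterator, for illustration:
#   it = iter(range(7))
#   list(islice(it, 3))  # -> [0, 1, 2]
#   list(islice(it, 3))  # -> [3, 4, 5]
#   list(islice(it, 3))  # -> [6], then [] once exhausted (the loop's exit test)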
def process_batch(batch, memc_clients, device_memc, options):
logging.info('Process %s: working on batch' % current_process())
errors, processed = 0, 0
for line in batch:
line = line.strip()
if not line:
continue
appsinstalled = parse_appsinstalled(line)
if not appsinstalled:
errors += 1
continue
memc_addr = device_memc.get(appsinstalled.dev_type)
if not memc_addr:
errors += 1
logging.error("Unknow device type: %s" % appsinstalled.dev_type)
continue
ok = insert_appsinstalled(memc_addr, memc_clients, appsinstalled, options.dry)
if ok:
processed += 1
else:
errors += 1
return processed, errors
def add_statistic(processed, errors, file,
file_stats_processed, file_stats_errors, file_stats_map):
ix = file_stats_map[file]
file_stats_processed[ix] += processed
file_stats_errors[ix] += errors
def parser(batch_queue: Queue, memc_clients, device_memc, options,
file_stats_processed, file_stats_errors, file_stats_map):
while 1:
file, batch = batch_queue.get()
logging.info('Process %s: get batch' % current_process())
if not batch:
logging.info('Process %s: empty batch, exiting' % current_process())
return
elif batch[0] == 'EOF':
logging.info('Process %s: this is EOF batch' % current_process())
logging.info('Ending %s' % file)
dot_rename(file)
else:
processed, errors = process_batch(batch, memc_clients, device_memc, options)
add_statistic(processed, errors, file,
file_stats_processed, file_stats_errors, file_stats_map)
def show_statistic(file_stats_processed, file_stats_errors, file_stats_map):
for file, ix in file_stats_map.items():
errors = file_stats_errors[ix]
processed = file_stats_processed[ix]
if not processed:
continue
err_rate = float(errors) / processed
if err_rate < NORMAL_ERR_RATE:
logging.info("File: {}: Acceptable error rate {}. Successfull load".format(file, err_rate))
else:
logging.error("File: {}: High error rate ({} > {}). Failed load".format(file,
err_rate, NORMAL_ERR_RATE))
def main(options):
device_memc = {
"idfa": options.idfa,
"gaid": options.gaid,
"adid": options.adid,
"dvid": options.dvid,
}
# Memcached clients
memc_clients = dict((key, memcache.Client([address]))
for key, address in device_memc.items())
batch_queue = Queue()
# Getting files list and shared arrays for statistic on files
files = list(glob.iglob(options.pattern))
file_stats_map = {file: ix for ix, file in enumerate(files)}
file_stats_processed = Array('i', [0 for _ in range(len(files))])
file_stats_errors = Array('i', [0 for _ in range(len(files))])
# Create parsers pool
parsers = []
for i in range(PARSERS_NUM):
p = Process(target=parser, args=(batch_queue,
memc_clients,
device_memc,
options,
file_stats_processed,
file_stats_errors,
file_stats_map))
p.start()
parsers.append(p)
# Sending batches to Queue
for file in files:
process_gz(file, batch_queue)
# Put ending batches to Queue
for _ in range(PARSERS_NUM):
batch_queue.put(('', list()))
# join parsers
for p in parsers:
p.join()
show_statistic(file_stats_processed, file_stats_errors, file_stats_map)
def prototest():
logging.info('Starting test')
sample = "idfa\t1rfw452y52g2gq4g\t55.55\t42.42\t1423,43,567,3,7,23\ngaid\t7rfw452y52g2gq4g\t55.55\t42.42\t7423,424"
for line in sample.splitlines():
dev_type, dev_id, lat, lon, raw_apps = line.strip().split("\t")
apps = [int(a) for a in raw_apps.split(",") if a.isdigit()]
lat, lon = float(lat), float(lon)
ua = appsinstalled_pb2.UserApps()
ua.lat = lat
ua.lon = lon
ua.apps.extend(apps)
packed = ua.SerializeToString()
unpacked = appsinstalled_pb2.UserApps()
unpacked.ParseFromString(packed)
assert ua == unpacked
if __name__ == '__main__':
op = OptionParser()
op.add_option("-t", "--test", action="store_true", default=False)
op.add_option("-l", "--log", action="store", default=None)
op.add_option("--dry", action="store_true", default=False)
# op.add_option("--pattern", action="store", default="/data/appsinstalled/*.tsv.gz")
op.add_option("--pattern", action="store", default="/mnt/data/tmp/otuspy/*.tsv.gz")
op.add_option("--idfa", action="store", default="127.0.0.1:33013")
op.add_option("--gaid", action="store", default="127.0.0.1:33014")
op.add_option("--adid", action="store", default="127.0.0.1:33015")
op.add_option("--dvid", action="store", default="127.0.0.1:33016")
(opts, args) = op.parse_args()
logging.basicConfig(filename=opts.log, level=logging.INFO if not opts.dry else logging.DEBUG,
format='[%(asctime)s] %(levelname).1s %(message)s', datefmt='%Y.%m.%d %H:%M:%S')
if opts.test:
prototest()
sys.exit(0)
logging.info("Memc loader started with options: %s" % opts)
try:
main(opts)
except Exception as e:
logging.exception("Unexpected error: %s" % e)
sys.exit(1)
| 35.188034 | 119 | 0.609424 | 1,039 | 8,234 | 4.6641 | 0.246391 | 0.050144 | 0.03343 | 0.034668 | 0.308089 | 0.278787 | 0.23215 | 0.178704 | 0.129179 | 0.101527 | 0 | 0.02053 | 0.266456 | 8,234 | 233 | 120 | 35.339056 | 0.781788 | 0.060967 | 0 | 0.174863 | 0 | 0.005464 | 0.11173 | 0.017239 | 0 | 0 | 0 | 0.004292 | 0.005464 | 1 | 0.054645 | false | 0 | 0.060109 | 0 | 0.153005 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e07f390e232fdb95b93a9274432225ea5b45290 | 333 | py | Python | eval/eval_binding.py | rillian/coptic-nlp | c8931baad7a543361a73d7b4e0cf081fe541c3c6 | [
"Apache-2.0"
] | 12 | 2016-06-11T01:57:55.000Z | 2022-02-19T06:44:14.000Z | eval/eval_binding.py | rillian/coptic-nlp | c8931baad7a543361a73d7b4e0cf081fe541c3c6 | [
"Apache-2.0"
] | 20 | 2016-05-08T20:46:09.000Z | 2021-09-28T20:24:10.000Z | eval/eval_binding.py | rillian/coptic-nlp | c8931baad7a543361a73d7b4e0cf081fe541c3c6 | [
"Apache-2.0"
] | 5 | 2019-02-14T20:44:55.000Z | 2022-02-09T06:54:34.000Z | import sys
import os
PY3 = sys.version_info[0] == 3
script_dir = os.path.dirname(os.path.realpath(__file__)) + os.sep
err_dir = script_dir + "errors" + os.sep
lib = os.path.abspath(script_dir + os.sep + ".." + os.sep + "lib")
sys.path.append(lib)
import binder
run_eval = binder.run_eval
if __name__ == "__main__":
binder.main()
| 22.2 | 66 | 0.696697 | 55 | 333 | 3.872727 | 0.472727 | 0.093897 | 0.103286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01049 | 0.141141 | 333 | 14 | 67 | 23.785714 | 0.734266 | 0 | 0 | 0 | 0 | 0 | 0.057057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.272727 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0848e8643125033bef200b113dc252ce56b386 | 1,272 | py | Python | pychemia/code/vasp/incar.py | petavazohi/PyChemia | e779389418771c25c830aed360773c63bb069372 | [
"MIT"
] | 67 | 2015-01-31T07:44:55.000Z | 2022-03-21T21:43:34.000Z | pychemia/code/vasp/incar.py | petavazohi/PyChemia | e779389418771c25c830aed360773c63bb069372 | [
"MIT"
] | 13 | 2016-06-03T19:07:51.000Z | 2022-03-31T04:20:40.000Z | pychemia/code/vasp/incar.py | petavazohi/PyChemia | e779389418771c25c830aed360773c63bb069372 | [
"MIT"
] | 37 | 2015-01-22T15:37:23.000Z | 2022-03-21T15:38:10.000Z | import os
from .input import VaspInput
__author__ = "Guillermo Avendano-Franco"
__copyright__ = "Copyright 2016"
__version__ = "0.1"
__maintainer__ = "Guillermo Avendano-Franco"
__email__ = "gtux.gaf@gmail.com"
__status__ = "Development"
__date__ = "May 13, 2016"
def read_incar(filename='INCAR'):
"""
    Load the INCAR file found inside the directory 'filename', or read the
    file 'filename' directly, and return an input-variables object
    (VaspInput) for pychemia
:param filename: (str) Filename of a INCAR file format
:return:
"""
if os.path.isfile(filename):
filename = filename
elif os.path.isdir(filename) and os.path.isfile(filename + '/INCAR'):
filename += '/INCAR'
else:
raise ValueError('[ERROR] INCAR path not found: %s' % filename)
iv = VaspInput(filename=filename)
return iv
def write_incar(iv, filepath='INCAR'):
"""
    Take an input-variables object (VaspInput) from pychemia and save it as
    an INCAR file, either inside the directory 'filepath' or directly at
    the path 'filepath'
:param iv: (VaspInput) VASP Input variables
:param filepath: (str) File path to write the INCAR file
"""
if os.path.isdir(filepath):
filename = filepath + '/INCAR'
else:
filename = filepath
iv.write(filename)
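# Minimal round-trip sketch, assuming a readable INCAR file exists in the
# current directory (the paths are hypothetical):
#   iv = read_incar('INCAR')
#   write_incar(iv, 'INCAR.copy')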
| 24.941176 | 73 | 0.665881 | 164 | 1,272 | 4.981707 | 0.420732 | 0.034272 | 0.056304 | 0.034272 | 0.078335 | 0.078335 | 0.078335 | 0.078335 | 0 | 0 | 0 | 0.01227 | 0.231132 | 1,272 | 50 | 74 | 25.44 | 0.823108 | 0.331761 | 0 | 0.083333 | 0 | 0 | 0.21374 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0b48f92d435f4f95d7684dd0630c61330e179a | 3,677 | py | Python | coredis/commands/function.py | alisaifee/aredis | c5764a5a2a29c4ed25278548aa54eece94974440 | [
"MIT"
] | null | null | null | coredis/commands/function.py | alisaifee/aredis | c5764a5a2a29c4ed25278548aa54eece94974440 | [
"MIT"
] | null | null | null | coredis/commands/function.py | alisaifee/aredis | c5764a5a2a29c4ed25278548aa54eece94974440 | [
"MIT"
] | null | null | null | from __future__ import annotations
import weakref
from typing import AnyStr, Generic
from coredis.exceptions import FunctionError
from coredis.typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
KeyT,
Optional,
StringT,
ValueT,
)
from coredis.utils import EncodingInsensitiveDict, nativestr
if TYPE_CHECKING:
import coredis.client
class Library(Generic[AnyStr]):
def __init__(
self,
client: coredis.client.AbstractRedis,
name: StringT,
code: Optional[StringT] = None,
):
"""
Abstraction over a library of redis functions
Example::
library_code = "redis.register_function('myfunc', function(k, a) return a[1] end)"
lib = await Library(client, "mylib", library_code)
assert "1" == await lib["myfunc"]([], [1])
"""
self._client: weakref.ReferenceType[coredis.client.AbstractRedis] = weakref.ref(
client
)
self._name = nativestr(name)
self._code = code
self._functions: EncodingInsensitiveDict = EncodingInsensitiveDict()
@property
def client(self) -> coredis.client.AbstractRedis:
c = self._client()
assert c
return c
@property
def functions(self) -> Dict[str, Function]:
"""
mapping of function names to :class:`~coredis.commands.function.Function`
instances that can be directly called.
"""
return self._functions
async def update(self, new_code: StringT) -> bool:
"""
Update the code of a library with :paramref:`new_code`
"""
if await self.client.function_load(new_code, replace=True):
await self.__initialize()
return True
return False
def __getitem__(self, function: str) -> Optional[Function]:
return self._functions.get(function)
async def __initialize(self):
self._functions.clear()
if self._code:
await self.client.function_load(self._code)
library = (await self.client.function_list(self._name)).get(self._name)
if not library:
raise FunctionError(f"No library found for {self._name}")
for name, function in library["functions"].items():
self._functions[name] = Function(self.client, self._name, name)
def __await__(self):
async def closure():
await self.__initialize()
return self
return closure().__await__()
class Function:
def __init__(
self, client: coredis.client.AbstractRedis, library: StringT, name: StringT
):
"""
Wrapper to call a redis function that has already been loaded
Example::
func = await Function(client, "mylib", "myfunc")
response = await func(keys=["a"], args=[1])
"""
self._client: weakref.ReferenceType[coredis.client.AbstractRedis] = weakref.ref(
client
)
self._library = Library(client, library)
self._name = name
@property
def client(self) -> coredis.client.AbstractRedis:
c = self._client()
assert c
return c
def __await__(self):
async def closure():
await self._library
return closure().__await__()
async def __call__(
self,
*,
keys: Optional[Iterable[KeyT]] = None,
args: Optional[Iterable[ValueT]] = None,
) -> Any:
"""
Wrapper to call :meth:`~coredis.Redis.fcall`
:param args:
:param keys:
"""
return await self.client.fcall(self._name, keys or [], args or [])
| 27.036765 | 94 | 0.600761 | 389 | 3,677 | 5.488432 | 0.272494 | 0.051522 | 0.073068 | 0.032319 | 0.238876 | 0.213583 | 0.213583 | 0.173302 | 0.139578 | 0.139578 | 0 | 0.001545 | 0.295893 | 3,677 | 135 | 95 | 27.237037 | 0.823098 | 0.145499 | 0 | 0.345238 | 0 | 0 | 0.014899 | 0 | 0 | 0 | 0 | 0 | 0.02381 | 1 | 0.095238 | false | 0 | 0.083333 | 0.011905 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0b9abdf225a472d052a63e4bab1777c336f1e6 | 407 | py | Python | hardhat/recipes/x11/libXxf86dga.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/x11/libXxf86dga.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/x11/libXxf86dga.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | from .base import X11BaseRecipe
class LibXxf86dgaRecipe(X11BaseRecipe):
def __init__(self, *args, **kwargs):
super(LibXxf86dgaRecipe, self).__init__(*args, **kwargs)
self.sha256 = '8eecd4b6c1df9a3704c04733c2f4fa93' \
'ef469b55028af5510b25818e2456c77e'
self.name = 'libXxf86dga'
self.version = '1.1.4'
self.depends = ['libX11', 'libXext']
| 31.307692 | 64 | 0.643735 | 34 | 407 | 7.470588 | 0.676471 | 0.07874 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190323 | 0.238329 | 407 | 12 | 65 | 33.916667 | 0.629032 | 0 | 0 | 0 | 0 | 0 | 0.228501 | 0.157248 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0f180f537297eb8a720c672f6b85b639452c9e | 838 | py | Python | m2core/common/int_enum.py | mdutkin/m2core | 1e08acbc99e9e6c60a03d63110e2fcec96a35ec0 | [
"MIT"
] | 18 | 2017-11-02T16:06:41.000Z | 2019-04-16T08:11:37.000Z | m2core/common/int_enum.py | mdutkin/m2core | 1e08acbc99e9e6c60a03d63110e2fcec96a35ec0 | [
"MIT"
] | 4 | 2018-06-19T08:45:26.000Z | 2019-02-08T04:28:28.000Z | m2core/common/int_enum.py | mdutkin/m2core | 1e08acbc99e9e6c60a03d63110e2fcec96a35ec0 | [
"MIT"
] | 2 | 2017-11-10T07:27:22.000Z | 2018-06-27T12:16:27.000Z | __author__ = 'Maxim Dutkin (max@dutkin.ru)'
from enum import IntEnum
class M2CoreIntEnum(IntEnum):
@classmethod
def get(cls, member_value: int or str):
"""
        Returns the enum member matching an `int` value or a `str` name.
        If no member is found, returns `None`.
:param member_value:
:return: enum member or `None`
"""
if type(member_value) is int:
try:
return cls(member_value)
except ValueError:
return None
elif type(member_value) is str:
for m in cls.all():
if m.name == member_value:
return m
return None
else:
raise AttributeError('You can load enum members only by `str` name or `int` value')
@classmethod
def all(cls):
return [_ for _ in cls]
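# Hedged usage sketch: the _ColorExample enum below is hypothetical, defined
# only to show lookups by `int` value, by `str` name, and the `None` fallback.
class _ColorExample(M2CoreIntEnum):
    RED = 1
    GREEN = 2
assert _ColorExample.get(1) is _ColorExample.RED
assert _ColorExample.get('GREEN') is _ColorExample.GREEN
assert _ColorExample.get(99) is None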
| 25.393939 | 95 | 0.546539 | 100 | 838 | 4.46 | 0.48 | 0.147982 | 0.06278 | 0.076233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001905 | 0.373508 | 838 | 32 | 96 | 26.1875 | 0.847619 | 0.147971 | 0 | 0.2 | 0 | 0 | 0.12908 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0.05 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e0f1d5f912e1eab38c895dcb13b786a4065bc53 | 1,811 | py | Python | measure/sender_src/WiFi/ESP32-DevKitC/delay.py | ASHIJANKEN/CompareCommProtocols | e3dcd38ec7c2b193591ceaee0d017706bd6fc46f | [
"MIT"
] | 1 | 2021-03-20T07:46:08.000Z | 2021-03-20T07:46:08.000Z | measure/sender_src/WiFi/ESP32-DevKitC/delay.py | ASHIJANKEN/CompareCommProtocols | e3dcd38ec7c2b193591ceaee0d017706bd6fc46f | [
"MIT"
] | null | null | null | measure/sender_src/WiFi/ESP32-DevKitC/delay.py | ASHIJANKEN/CompareCommProtocols | e3dcd38ec7c2b193591ceaee0d017706bd6fc46f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import time
import sys
import socket
import subprocess
import random
family_addr = socket.AF_INET
host = 'esp_server.local'
PORT = 3333
def getdata(sock, send_bytes, max_speed_hz):
start_time = time.time()
# Send data
sock.sendall(bytes(bytearray(send_bytes)))
# Receive data
result = sock.recv(1)
end_time = time.time()
return result[0], end_time - start_time
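# Hedged usage sketch: time a single-byte echo round trip over an already
# connected socket (the third argument is passed through but unused by getdata):
#   echoed, rtt = getdata(sock, [0x42], speed_hz)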
if __name__ == '__main__':
try:
argvs = sys.argv
base_dir = argvs[1]
speed_hz = int(argvs[2])
for send_bytes in [10000]:
            # Create the results file
file_path = base_dir + str(speed_hz) + 'Hz' + '_' + str(send_bytes) + 'bytes.txt'
with open(file_path, mode = 'w', encoding = 'utf-8') as fh:
pass
            # Build the random payload to send
send = []
for i in range(send_bytes):
send.append(random.randint(0, 255))
            # Connect to the ESP32
sock = socket.socket(family_addr, socket.SOCK_STREAM)
sock.connect((host, PORT))
            # Run send_bytes trials
for i in range(send_bytes):
                # Send the data
result, execution_time = getdata(sock, [send[i]], speed_hz)
                # Error-check the received byte
err = 0 if result == send[i] else 1
print('[TCP delay] {0}:{1}\t{2}\t{3}\t{4}'.format(i, send_bytes, speed_hz, execution_time, err))
with open(file_path, mode = 'a', encoding = 'utf-8') as fh:
fh.write('{0}:{1}\t{2}\n'.format(i, execution_time, err))
            # Close the connection to the ESP32
sock.close()
            # Clear the screen
proc = subprocess.Popen(['clear'])
proc.wait()
print('[TCP delay] Recorded : {0}\t{1}'.format(send_bytes, speed_hz))
sock.close()
sys.exit(0)
except KeyboardInterrupt:
sock.close()
sys.exit(0)
except socket.error as msg:
        print('Could not connect to ESP32: ' + str(msg))
        sys.exit(1)
| 24.146667 | 104 | 0.604638 | 257 | 1,811 | 4.097276 | 0.40856 | 0.068376 | 0.030389 | 0.030389 | 0.150047 | 0.081671 | 0 | 0 | 0 | 0 | 0 | 0.031525 | 0.246825 | 1,811 | 74 | 105 | 24.472973 | 0.740469 | 0.070127 | 0 | 0.152174 | 0 | 0.021739 | 0.095096 | 0.013158 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0.021739 | 0.108696 | 0 | 0.152174 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e15b30df7fde1ebd49446090d6d40df27539b0c | 1,200 | py | Python | azure-mgmt-media/azure/mgmt/media/models/streaming_endpoint_access_control.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 1 | 2021-09-07T18:36:04.000Z | 2021-09-07T18:36:04.000Z | azure-mgmt-media/azure/mgmt/media/models/streaming_endpoint_access_control.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 2 | 2019-10-02T23:37:38.000Z | 2020-10-02T01:17:31.000Z | azure-mgmt-media/azure/mgmt/media/models/streaming_endpoint_access_control.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | [
"MIT"
] | 1 | 2019-06-17T22:18:23.000Z | 2019-06-17T22:18:23.000Z | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class StreamingEndpointAccessControl(Model):
"""StreamingEndpoint access control definition.
:param akamai: The access control of Akamai
:type akamai: ~azure.mgmt.media.models.AkamaiAccessControl
:param ip: The IP access control of the StreamingEndpoint.
:type ip: ~azure.mgmt.media.models.IPAccessControl
"""
_attribute_map = {
'akamai': {'key': 'akamai', 'type': 'AkamaiAccessControl'},
'ip': {'key': 'ip', 'type': 'IPAccessControl'},
}
def __init__(self, **kwargs):
super(StreamingEndpointAccessControl, self).__init__(**kwargs)
self.akamai = kwargs.get('akamai', None)
self.ip = kwargs.get('ip', None)
| 36.363636 | 76 | 0.611667 | 122 | 1,200 | 5.934426 | 0.590164 | 0.053867 | 0.041436 | 0.055249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000998 | 0.165 | 1,200 | 32 | 77 | 37.5 | 0.721557 | 0.593333 | 0 | 0 | 0 | 0 | 0.159292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e1763fd5810fa3bc2bb92d5a2e5fbe3b0a2b675 | 528 | py | Python | algorithm/__init__.py | ctlab/evoguess | c0649438fdbfc0e3133cdcf79e70ad9145d9c867 | [
"MIT"
] | 1 | 2021-12-12T15:09:56.000Z | 2021-12-12T15:09:56.000Z | algorithm/__init__.py | ctlab/evoguess | c0649438fdbfc0e3133cdcf79e70ad9145d9c867 | [
"MIT"
] | null | null | null | algorithm/__init__.py | ctlab/evoguess | c0649438fdbfc0e3133cdcf79e70ad9145d9c867 | [
"MIT"
] | null | null | null | from .impl import algorithms
from .portfolio import portfolio, schemas
from .module import modules, limit, tuner, evolution
from util import load_modules
algorithms = {
**portfolio,
**algorithms,
}
modules = {
**modules,
**schemas,
}
def Algorithm(configuration, **kwargs):
slug = configuration.pop('slug')
loaded_modules = load_modules(modules, **configuration)
return algorithms.get(slug)(**kwargs, **loaded_modules)
__all__ = [
'limit',
'tuner',
'evolution',
'Algorithm',
]
| 17.6 | 59 | 0.674242 | 53 | 528 | 6.566038 | 0.433962 | 0.057471 | 0.109195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19697 | 528 | 29 | 60 | 18.206897 | 0.820755 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e17e6a3b43339186fca073e3436782489157e2a | 12,802 | py | Python | code/import_examples/db_functions.py | Biosystems-Analytics-Lab/shellbase | 0e257ef716956d6f4f12d975eb744ba4b49e6403 | [
"CC0-1.0"
] | 1 | 2021-08-31T16:26:20.000Z | 2021-08-31T16:26:20.000Z | code/import_examples/db_functions.py | Biosystems-Analytics-Lab/shellbase | 0e257ef716956d6f4f12d975eb744ba4b49e6403 | [
"CC0-1.0"
] | 1 | 2021-03-02T22:51:55.000Z | 2021-03-02T22:51:55.000Z | code/import_examples/db_functions.py | Biosystems-Analytics-Lab/shellbase | 0e257ef716956d6f4f12d975eb744ba4b49e6403 | [
"CC0-1.0"
] | 1 | 2021-02-11T20:29:33.000Z | 2021-02-11T20:29:33.000Z | import psycopg2
from shapely.geometry import Polygon
import traceback
from multiprocessing import Process, Queue, Event, current_process
import time
def database_connect(**kwargs):
try:
if kwargs.get('type', 'postgres') == 'postgres':
db_conn = psycopg2.connect(host=kwargs['db_host'],
dbname=kwargs['db_name'],
user=kwargs['db_user'],
password=kwargs['db_pwd'])
return db_conn
except Exception as e:
raise e
def get_id(db_cursor, sql):
try:
db_cursor.execute(sql)
rec = db_cursor.fetchone()
if rec:
return rec[0]
except Exception as e:
raise e
return None
def obs_id(db_cursor, obs_name):
sql = "SELECT id FROM lkp_sample_type WHERE name='%s'" % (obs_name)
return get_id(db_cursor, sql)
def uom_id(db_cursor, uom_name):
sql = "SELECT id FROM lkp_sample_units WHERE name='%s'" % (uom_name)
return get_id(db_cursor, sql)
def fc_analysis_method_id(db_cursor, analysis_name):
sql = "SELECT id FROM lkp_fc_analysis_method WHERE name='%s'" % (analysis_name)
return get_id(db_cursor, sql)
def sample_reason_id(db_cursor, sample_reason_name):
sql = "SELECT id FROM lkp_sample_reason WHERE name='%s'" % (sample_reason_name)
return get_id(db_cursor, sql)
def tide_id(db_cursor, tide_name):
sql = "SELECT id FROM lkp_tide WHERE name='%s'" % (tide_name)
return get_id(db_cursor, sql)
def strategy_id(db_cursor, strategy_name):
sql = "SELECT id FROM lkp_sample_strategy WHERE name='%s'" % (strategy_name)
return get_id(db_cursor, sql)
def add_strategy(db_cursor, strategy_name, description):
try:
sql = "INSERT INTO lkp_sample_strategy (id,name,description) VALUES(%d,'%s','%s')" % (0,strategy_name, description)
db_cursor.execute(sql)
return True
except Exception as e:
raise e
return False
def reason_id(db_cursor, reason_name):
sql = "SELECT id FROM lkp_sample_reason WHERE name='%s'"%(reason_name)
return get_id(db_cursor, sql)
def area_id(db_cursor, name, state):
sql = "SELECT id FROM areas WHERE name='%s' AND state='%s'" % (name, state)
return get_id(db_cursor, sql)
def add_area(db_cursor, area_name, state):
try:
sql = "INSERT INTO areas (name,state) VALUES('%s','%s')" % (area_name, state)
db_cursor.execute(sql)
return True
except Exception as e:
raise e
return False
def station_id(db_cursor, station_name):
sql = "SELECT id FROM stations WHERE name='%s'" % (station_name)
return get_id(db_cursor, sql)
def classification_id(db_cursor, classification_type):
sql = "SELECT id FROM lkp_area_classification WHERE name='%s'" % (classification_type)
return get_id(db_cursor, sql)
def growing_area_id(db_cursor, growing_area_name):
sql = "SELECT id FROM areas WHERE name='%s'" % (growing_area_name)
return get_id(db_cursor, sql)
def add_growing_area(db_cursor, growing_area_name, state, classification_type):
#Get the classification ID
try:
class_id = classification_id(db_cursor, classification_type)
if class_id is not None:
print("Inserting growing area: {area} state: {state} classification: {classification}({class_id})".format(
area=growing_area_name,
state=state,
classification=classification_type, class_id=class_id
))
sql = "INSERT INTO areas (name, state, classification)"\
"VALUES('{growing_area}','{state}',{class_id})"\
.format(growing_area=growing_area_name, state=state, class_id=class_id)
db_cursor.execute(sql)
return True
else:
print("ERROR, could not find classification: %s" % (classification_type))
except Exception as e:
raise e
return False
def add_station(db_cursor,
station_name,
state,area_id,
longitude, latitude,
sample_depth, sample_depth_type,
active):
try:
print("Adding Station: %s State: %s lon: %f lat: %f" % (station_name, state, longitude, latitude))
sql = "INSERT INTO stations\
(name, state, area_id, lat, long, sample_depth_type, sample_depth,active)\
VALUES('%s','%s',%d,%f,%f,'%s',%f,%r)" % \
(station_name, state, area_id, latitude, longitude, sample_depth_type, sample_depth,active)
db_cursor.execute(sql)
return True
except Exception as e:
raise e
return False
def add_sample(db_cursor,
station_name,
sample_date,
date_only,
obs_type,
obs_uom,
obs_value,
tide,
strategy,
reason,
fc_analysis_method,
flag,
sql_file=None):
try:
sta_id = station_id(db_cursor, station_name)
obs_name_id = obs_id(db_cursor, obs_type)
uom_name_id = uom_id(db_cursor, obs_uom)
tide_name_id = tide_id(db_cursor, tide)
strategy_name_id = strategy_id(db_cursor, strategy)
reasonid = reason_id(db_cursor, reason)
fc_analysis_method_name_id = fc_analysis_method_id(db_cursor, fc_analysis_method)
if uom_name_id is None:
uom_name_id = 'NULL'
if tide_name_id is None:
tide_name_id = 'NULL'
sql = "INSERT INTO samples"\
"(sample_datetime,date_only,station_id,tide,strategy,reason,fc_analysis_method,type,units,value,flag)"\
"VALUES('{sample_date}',{date_only},{sta_id},{tide_name_id},{strategy_name_id},{reasonid}," \
"{fc_analysis_method_name_id},{obs_name_id},{uom_name_id},{obs_value},'{flag}');".format(
sample_date=sample_date,
date_only=date_only,
sta_id=sta_id,
tide_name_id=tide_name_id,
strategy_name_id=strategy_name_id,
reasonid=reasonid,
fc_analysis_method_name_id=fc_analysis_method_name_id,
obs_name_id=obs_name_id,
uom_name_id=uom_name_id,
obs_value=obs_value,
flag=flag)
'''
sql = "INSERT INTO samples"\
"(sample_datetime,date_only,station_id,tide,strategy,reason,fc_analysis_method,type,units,value,flag)"\
"VALUES('%s',%r,%d,%d,%d,%d,%d,%d,%d,%f,'%s')"%\
(sample_date, date_only,
sta_id,
tide_name_id,
strategy_name_id,
reasonid,
fc_analysis_method_name_id,
obs_name_id,
uom_name_id,
obs_value,
flag)
'''
if sql_file is None:
db_cursor.execute(sql)
else:
sql_file.write(sql)
sql_file.write('\n')
return True
except Exception as e:
raise e
return False
def add_sample_with_ids(db_cursor,
sta_id,
sample_date,
date_only,
obs_name_id,
uom_name_id,
obs_value,
tide_name_id,
strategy_name_id,
reasonid,
fc_analysis_method_name_id,
flag,
sql_file=None):
try:
if uom_name_id is None:
uom_name_id = 'NULL'
if tide_name_id is None:
tide_name_id = 'NULL'
sql = "INSERT INTO samples"\
"(sample_datetime,date_only,station_id,tide,strategy,reason,fc_analysis_method,type,units,value,flag)"\
"VALUES('{sample_date}',{date_only},{sta_id},{tide_name_id},{strategy_name_id},{reasonid}," \
"{fc_analysis_method_name_id},{obs_name_id},{uom_name_id},{obs_value},'{flag}');".format(
sample_date=sample_date,
date_only=date_only,
sta_id=sta_id,
tide_name_id=tide_name_id,
strategy_name_id=strategy_name_id,
reasonid=reasonid,
fc_analysis_method_name_id=fc_analysis_method_name_id,
obs_name_id=obs_name_id,
uom_name_id=uom_name_id,
obs_value=obs_value,
flag=flag)
if sql_file is None:
db_cursor.execute(sql)
else:
sql_file.write(sql)
sql_file.write('\n')
return True
except Exception as e:
raise e
return False
def update_sample_with_ids(db_cursor,
sta_id,
sample_date,
date_only,
obs_name_id,
uom_name_id,
obs_value,
tide_name_id,
strategy_name_id,
reasonid,
fc_analysis_method_name_id,
flag,
sql_file=None):
try:
if uom_name_id is None:
uom_name_id = 'NULL'
if tide_name_id is None:
tide_name_id = 'NULL'
sql = "UPDATE samples "\
"SET tide={tide_name_id},strategy={strategy_name_id},reason={reasonid}," \
"fc_analysis_method={fc_analysis_method_name_id},units={uom_name_id},value={obs_value},flag='{flag}' "\
"WHERE sample_datetime='{sample_date}' AND station_id={sta_id} AND type={obs_name_id};".format(
tide_name_id=tide_name_id,
strategy_name_id=strategy_name_id,
reasonid=reasonid,
fc_analysis_method_name_id=fc_analysis_method_name_id,
uom_name_id=uom_name_id,
obs_value=obs_value,
flag=flag,
sample_date=sample_date,
sta_id=sta_id,
obs_name_id=obs_name_id
)
if sql_file is None:
db_cursor.execute(sql)
else:
sql_file.write(sql)
sql_file.write('\n')
return True
except Exception as e:
raise e
return False
def get_growing_area(growing_areas, station_loc):
growing_area = None
for ndx,shape in enumerate(growing_areas.iterShapes()):
area_rec = None
if shape.shapeTypeName.lower() == "polygon":
growing_area = Polygon(shape.points)
if growing_area is not None:
if station_loc.within(growing_area):
area_rec = growing_areas.record(ndx)
break
return area_rec
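# Hedged usage sketch: station_loc is assumed to be a shapely Point and
# growing_areas a pyshp Reader (both inferred from the calls above; the
# shapefile path and coordinates are hypothetical):
#   import shapefile
#   from shapely.geometry import Point
#   areas = shapefile.Reader('growing_areas.shp')
#   rec = get_growing_area(areas, Point(-79.93, 32.78))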
def add_classification_area(db_cursor,
area_name,
classification_name,
is_current,
start_date,
end_data,
comments):
try:
areaid = area_id(db_cursor, area_name)
classificationid = classification_id(db_cursor, classification_name)
sql = "INSERT INTO history_areas_classification (area_id,classification,current,start_date,end_date,comments)"\
"VALUES(%d,%d,%r,'%s','%s','%s')" % (areaid,classificationid,is_current,start_date,end_data,comments)
db_cursor.execute(sql)
return True
except Exception as e:
raise e
return False
def build_lookup_id_map(db_recs):
lookup_map = {}
for rec in db_recs:
lookup_map[rec[1]] = rec[0]
return lookup_map
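# Hedged sketch: db_recs is assumed to be (id, name) rows, as returned by the
# lkp_* lookup tables used above; the values here are illustrative only.
#   build_lookup_id_map([(1, 'fc'), (2, 'temperature')])
#   # -> {'fc': 1, 'temperature': 2}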
class sample_saver(Process):
def __init__(self):
Process.__init__(self)
self._host = None
self._dbname = None
self._user = None
self._password = None
self._input_queue = Queue()
self._stop_event = Event()
return
def initialize(self, **kwargs):
        self._host = kwargs['db_host']
        self._dbname = kwargs['db_name']
        self._user = kwargs['db_user']
        self._password = kwargs['db_pwd']
@property
def input_queue(self):
return self._input_queue
@property
def stop_event(self):
return self._stop_event
    def run(self):
        try:
            db_conn = database_connect(type='postgres',
                                       db_host=self._host,
                                       db_name=self._dbname,
                                       db_user=self._user,
                                       db_pwd=self._password)
            for sample in iter(self._input_queue.get, 'STOP'):
                start_time = time.time()
                # Per-sample insert logic would go here, using db_conn.
                print("ID: %s handled sample in %0.3fs" % (current_process().name, time.time() - start_time))
            db_conn.close()
        except Exception as e:
traceback.print_exc() | 35.759777 | 123 | 0.580847 | 1,596 | 12,802 | 4.313283 | 0.095238 | 0.067112 | 0.04939 | 0.037769 | 0.587304 | 0.520918 | 0.449884 | 0.417199 | 0.377833 | 0.358367 | 0 | 0.000696 | 0.326199 | 12,802 | 358 | 124 | 35.759777 | 0.797357 | 0.001953 | 0 | 0.530744 | 0 | 0.006472 | 0.165755 | 0.080878 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090615 | false | 0.009709 | 0.016181 | 0.006472 | 0.223301 | 0.012945 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e18d90345b3780041bc8062c7cd6284e5729bce | 11,536 | py | Python | todocal.py | jjwithanr/todocal | 8c19280fb42757c188f681f2b06819142c539528 | [
"MIT"
] | null | null | null | todocal.py | jjwithanr/todocal | 8c19280fb42757c188f681f2b06819142c539528 | [
"MIT"
] | null | null | null | todocal.py | jjwithanr/todocal | 8c19280fb42757c188f681f2b06819142c539528 | [
"MIT"
] | null | null | null | # Tkinter class structure
import tkinter as tk
from tkinter import ttk, messagebox, font
from datetime import *
import sqlite3 as sq
import GoogleCal
class Scheduler(tk.Frame):
def __init__(self, root, task):
tk.Frame.__init__(self, root)
self.root = root
self.task = task
self.selected = []
self._draw()
def show_tasks(self):
self.Tasks.delete(0,'end')
for i in self.task: self.Tasks.insert('end', i)
def get_selected(self):
self.selected = [self.task[i] for i in self.Tasks.curselection()]
self.selectWindow.destroy()
self.schedule_event()
def open_task_selection(self) -> list:
# Create new window
self.selectWindow = tk.Toplevel(self.root)
self.selectWindow.title("Choose tasks to schedule")
self.selectWindow.geometry("400x300")
icon = tk.PhotoImage(file="todo-icon.png")
self.selectWindow.iconphoto(False, icon)
# Window widgets
scrollbar = tk.Scrollbar(self.selectWindow)
scrollbar.pack(side="right", fill = "both")
self.Tasks = tk.Listbox(self.selectWindow, height=11, width=30, font=font.Font(size=15), selectmode=tk.EXTENDED)
self.Tasks.pack(fill=tk.BOTH, expand=True)
self.Tasks.config(yscrollcommand = scrollbar.set)
scrollbar.config(command=self.Tasks.yview)
# ! add warning for no selected tasks?
self.close_bio_btn = tk.Button(self.selectWindow, text="Confirm", width=25, command=self.get_selected)
self.close_bio_btn.pack(side=tk.TOP, pady=5)
self.show_tasks()
return self.selected
def schedule_event(self):
# ? move spinbox to selectedWindow?
# ? add footer saying what Google account signed into.
# ! sign-in button and schedule button?
def getAvailability() -> dict:
# Get time from spinboxes and convert to string
s = self.start_hour.get() + ":" + self.start_min.get() + self.start_clock12hr.get()
e = self.end_hour.get() + ":" + self.end_min.get() + self.end_clock12hr.get()
if not (s and e):
messagebox.showerror("Error", "Invalid time range")
return {}
# Convert string into datetime time objects
start = datetime.strptime(s, "%I:%M%p").time()
end = datetime.strptime(e, "%I:%M%p").time()
if start >= end:
messagebox.showerror("Error","Invalid time range")
return {1:1}
else:
# * insert setting for time duration here
return GoogleCal.schedule_time(start, end, time_duaration=7)
if not self.task:
            # Must have at least one task to schedule
messagebox.showinfo('Cannot schedule', 'Add a task')
else:
# Check busyness and find available date
scheduled_times = getAvailability()
if scheduled_times == {1:1}: return 0
elif scheduled_times:
# All tasks are in the description
# NOTE: can use settings for single select task scheduling? this is default ...
GoogleCal.create_event(scheduled_times["start"], scheduled_times["end"], description="\n".join(self.selected) if self.selected else "\n".join(self.task))
started = scheduled_times["start"].strftime("%I:%M%p")
ended = scheduled_times["end"].strftime("%I:%M%p")
date = scheduled_times["end"].strftime("%b %d")
messagebox.showinfo("Sucess", "Created event from " + started + " to " + ended + " on " + date)
else:
messagebox.showinfo("Failure", "You are unavailable during this time")
def _draw(self):
scheduleFrame = tk.Frame(master=self.root)
scheduleFrame.pack(fill=tk.BOTH, side=tk.BOTTOM, expand=False)
calendarLabel = ttk.Label(scheduleFrame, text='Schedule Time:')
calendarLabel.pack(side=tk.LEFT, padx=5)
# Spinboxes to pick time
self.start_hour = ttk.Spinbox(master=scheduleFrame, from_=1,to=12, wrap=True, width=3, state="readonly")
self.start_hour.set(4)
self.start_hour.pack(side=tk.LEFT, padx=5)
self.start_min = ttk.Spinbox(master=scheduleFrame, from_=0,to=59, wrap=True, width=3, state="readonly")
self.start_min.set(0)
self.start_min.pack(side=tk.LEFT, padx=5)
self.start_clock12hr = ttk.Spinbox(master=scheduleFrame, values=("AM", "PM"), wrap=True, width=3)
self.start_clock12hr.set("PM")
self.start_clock12hr.pack(side=tk.LEFT, padx=5)
bufferLabel = tk.Label(scheduleFrame, text="to")
bufferLabel.pack(side=tk.LEFT, padx=5)
self.end_hour = ttk.Spinbox(master=scheduleFrame, from_=1,to=12, wrap=True, width=3, state="readonly")
self.end_hour.set(6)
self.end_hour.pack(side=tk.LEFT, padx=5)
self.end_min = ttk.Spinbox(master=scheduleFrame, from_=0,to=59, wrap=True, width=3, state="readonly")
self.end_min.set(0)
self.end_min.pack(side=tk.LEFT, padx=5)
self.end_clock12hr = ttk.Spinbox(master=scheduleFrame, values=("AM", "PM"), wrap=True, width=3)
self.end_clock12hr.set("PM")
self.end_clock12hr.pack(side=tk.LEFT, padx=5)
# Callback to create event for desired time
scheduleBtn = ttk.Button(scheduleFrame, text='Confirm', width=10, command=self.open_task_selection)
scheduleBtn.pack(side=tk.LEFT, padx=5)
class View(tk.Frame):
def __init__(self, root):
tk.Frame.__init__(self, root)
self.root = root
self._draw()
def _draw(self):
scrollbar = tk.Scrollbar(self.root)
scrollbar.pack(side="right", fill = "both")
viewFrame = tk.Frame(master=self.root)
viewFrame.pack(fill=tk.BOTH, side=tk.LEFT, expand=True, padx=10, pady=10)
self.viewTasks = tk.Listbox(viewFrame, height=11, width = 30, font=font.Font(size=15), selectmode=tk.SINGLE)
self.viewTasks.pack(fill=tk.BOTH, side=tk.LEFT, expand=True)
self.viewTasks.config(yscrollcommand = scrollbar.set)
scrollbar.config(command=self.viewTasks.yview)
self.viewTasks.config(selectmode=tk.SINGLE)
# ? Add a scheduled tasks view? shows which ones are already assigned with its date next to it?
class MainApp(tk.Frame):
def __init__(self, root):
tk.Frame.__init__(self, root)
self.root = root
self._init_database()
self._draw()
def addTask(self):
word = self.entry.get()
if len(word)==0:
messagebox.showinfo('Empty Entry', 'Enter task name')
else:
self.task.append(word)
self.cur.execute('insert into tasks values (?)', (word,))
self.listUpdate()
self.entry.delete(0,'end')
def listUpdate(self):
self.clearList()
for i in self.task:
self.view.viewTasks.insert('end', i)
def delOne(self):
try:
val = self.view.viewTasks.get(self.view.viewTasks.curselection())
if val in self.task:
self.oldTasks.append(val)
self.task.remove(val)
self.listUpdate()
self.cur.execute('delete from tasks where title = ?', (val,))
except:
messagebox.showinfo('Cannot Delete', 'No Task Item Selected')
def deleteAll(self):
mb = messagebox.askyesno('Delete All','Are you sure?')
if mb == True:
while(len(self.task) != 0):
deleted = self.task.pop()
self.oldTasks.append(deleted)
self.cur.execute('delete from tasks')
self.listUpdate()
def undoTask(self):
try:
word = self.oldTasks.pop()
self.task.append(word)
self.cur.execute('insert into tasks values (?)', (word,))
self.listUpdate()
except:
messagebox.showerror("Error", "Nothing to undo")
def clearList(self):
self.view.viewTasks.delete(0,'end')
def bye(self):
self.root.destroy()
def retrieveDB(self):
while(len(self.task) != 0):
self.task.pop()
for row in self.cur.execute('select title from tasks'):
self.task.append(row[0])
def _init_database(self):
self.conn = sq.connect('todo.db')
self.cur = self.conn.cursor()
self.cur.execute('create table if not exists tasks (title text)')
self.task = []
self.oldTasks = []
self.oldIndex = 0
def new_settings(self):
self.settingsWindow = tk.Toplevel(self.root)
self.settingsWindow.title("Scheduler Settings")
self.settingsWindow.geometry("400x200")
# Configure time length
        # ! convert this to # of weeks / week label
        self.start_hour = ttk.Spinbox(master=self.settingsWindow, from_=1, to=4, wrap=True, width=5, state="readonly")
        self.start_hour.set(1)
        self.start_hour.pack(padx=5)
        self.start_min = ttk.Spinbox(master=self.settingsWindow, from_=0, to=59, wrap=True, width=3, state="readonly")
        self.start_min.set(0)
        self.start_min.pack(side=tk.LEFT, padx=5)
        # Confirm button
        self.confirm_settings_btn = tk.Button(self.settingsWindow, text="Confirm", width=25, command=self.settingsWindow.destroy)
        self.confirm_settings_btn.pack(pady=5)
def _draw(self):
menu_bar = tk.Menu(self.root)
self.root['menu'] = menu_bar
menu_settings = tk.Menu(menu_bar, tearoff=0)
menu_bar.add_cascade(menu=menu_settings, label='Settings')
# Change from 1 week to up to 1 month
# ! time settings
# Change from all tasks in description to single select task and make that the title
        menu_settings.add_command(label="Schedule Preferences", command=self.new_settings)
# Add buttons
buttonFrame = tk.Frame(master=self.root, width=50)
buttonFrame.pack(fill=tk.BOTH, side=tk.LEFT, padx=10)
todoLabel = ttk.Label(buttonFrame, text = 'To-Do List')
todoLabel.pack(pady=7)
taskLabel = ttk.Label(buttonFrame, text='Enter task title: ')
taskLabel.pack(pady=5)
self.entry = ttk.Entry(buttonFrame, width=21)
self.entry.pack(pady=5)
addBtn = ttk.Button(buttonFrame, text='Add task', width=20, command=self.addTask)
addBtn.pack(pady=5)
delBtn = ttk.Button(buttonFrame, text='Delete', width=20, command=self.delOne)
delBtn.pack(pady=5)
clearBtn = ttk.Button(buttonFrame, text='Delete all', width=20, command=self.deleteAll)
clearBtn.pack(pady=5)
undoBtn = ttk.Button(buttonFrame, text='Undo delete', width=20, command=self.undoTask)
undoBtn.pack(pady=5)
exitBtn = ttk.Button(buttonFrame, text='Exit', width=20, command=self.bye)
exitBtn.pack(pady=5)
self.scheduler = Scheduler(self.root, self.task)
self.view = View(self.root)
self.retrieveDB()
self.listUpdate()
self.root.mainloop()
self.conn.commit()
self.cur.close()
if __name__ == "__main__":
root = tk.Tk()
root.title('ToDo Cal')
icon = tk.PhotoImage(file="todo-icon.png")
root.iconphoto(False, icon)
root.geometry("650x280")
root.configure(bg="#d6d6d6")
root.minsize(650, 280)
MainApp(root) | 41.646209 | 169 | 0.615985 | 1,477 | 11,536 | 4.728504 | 0.202437 | 0.024055 | 0.018614 | 0.02205 | 0.305842 | 0.261312 | 0.217211 | 0.183706 | 0.142755 | 0.128007 | 0 | 0.017828 | 0.256068 | 11,536 | 277 | 170 | 41.646209 | 0.795968 | 0.086859 | 0 | 0.198157 | 0 | 0 | 0.07928 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.023041 | 0 | 0.152074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e1987abf8877aa3f402cd02b8bbde9420ddd416 | 5,848 | py | Python | src/dfx.py | deanishe/alfred-default-folder-x | 24786979033963a7387bbac76ae92a375f5b883e | [
"MIT"
] | 22 | 2016-04-09T21:29:36.000Z | 2021-11-17T23:21:42.000Z | src/dfx.py | deanishe/alfred-default-folder-x | 24786979033963a7387bbac76ae92a375f5b883e | [
"MIT"
] | 2 | 2019-07-07T08:20:57.000Z | 2022-03-16T06:03:04.000Z | src/dfx.py | deanishe/alfred-default-folder-x | 24786979033963a7387bbac76ae92a375f5b883e | [
"MIT"
] | 1 | 2020-02-17T15:40:18.000Z | 2020-02-17T15:40:18.000Z | #!/usr/bin/python
# encoding: utf-8
#
# Copyright (c) 2016 Dean Jackson <deanishe@deanishe.net>
#
# MIT Licence. See http://opensource.org/licenses/MIT
#
# Created on 2016-03-22
#
"""dfx.py [-t <type>...] [<query>]
Usage:
dfx.py [-t <type>...] [<query>]
dfx.py -u
dfx.py -h | --help
dfx.py --version
Options:
-t <TYPE>, --type=<TYPE> Show only items of type. May be "fav", "rfile",
"rfolder" or "all" [default: all].
-u, --update Update cached data.
-h, --help Show this message and exit.
--version Show version number and exit.
"""
from __future__ import print_function, unicode_literals, absolute_import
from collections import namedtuple
import os
from subprocess import check_output
import sys
from time import time
import docopt
from workflow import Workflow3, ICON_WARNING
from workflow.background import is_running, run_in_background
log = None
# Initial values for `settings.json`
DEFAULT_SETTINGS = {}
# Auto-update from GitHub releases
UPDATE_SETTINGS = {
'github_slug': 'deanishe/alfred-default-folder-x',
}
HELP_URL = 'https://github.com/deanishe/alfred-default-folder-x/issues'
# Where data will be cached by `update.py`
DFX_CACHE_KEY = 'dfx-entries'
MAX_CACHE_AGE = 10 # seconds
# Data model. `type` is one of 'fav', 'rfolder' or 'rfile'
# ("favorite", "recent folder" and "recent file" respectively)
# `name` is the basename of `path` and `pretty_path` is `path` with
# $HOME replaced with ~
DfxEntry = namedtuple('DfxEntry', ['type', 'path', 'name', 'pretty_path'])
def get_dfx_data():
"""Return DFX favourites and recent items.
Returns:
list: Sequence of `DfxEntry` objects.
"""
st = time()
script = wf.workflowfile('DFX Files.scpt')
output = wf.decode(check_output(['/usr/bin/osascript', script]))
log.debug('DFX files updated in %0.3fs', time() - st)
entries = []
home = os.getenv('HOME')
for line in [s.strip() for s in output.split('\n') if s.strip()]:
row = line.split('\t')
if len(row) != 2:
log.warning('Invalid output from DFX : %r', line)
continue
typ, path = row
# Remove trailing slash from path or things go wrong...
path = path.rstrip('/')
e = DfxEntry(
typ,
path,
os.path.basename(path),
path.replace(home, '~'),
)
log.debug('entry=%r', e)
entries.append(e)
return entries
def do_update():
"""Update cached DFX files and folders."""
log.info('Updating DFX data...')
data = get_dfx_data()
wf.cache_data(DFX_CACHE_KEY, data)
def prefix_name(entry):
"""Prepend a Unicode icon to `entry.name` based on `entry.type`.
Args:
entry (DfxEntry): `DfxEntry` or something else with `type`
and `name` properties.
Returns:
unicode: `entry.name` with Unicode icon prefix.
"""
if entry.type == 'fav':
prefix = '\U00002764' # HEAVY BLACK HEART
else:
prefix = '\U0001F55E' # CLOCK FACE THREE-THIRTY
return '{} {}'.format(prefix, entry.name)
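# Hedged sketch (path values are illustrative): a 'fav' entry gets the heart
# prefix, any other type the clock prefix.
#   e = DfxEntry('fav', '/Users/me/Desktop', 'Desktop', '~/Desktop')
#   prefix_name(e)  # -> u'\u2764 Desktop'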
def main(wf):
"""Run workflow script."""
# Parse input
wf.args
args = docopt.docopt(__doc__, version=wf.version)
query = args.get('<query>') or b''
query = wf.decode(query).strip()
types = args.get('--type')
log.debug('args=%r', args)
# -----------------------------------------------------------------
# Update cached DFX data
if args.get('--update'):
return do_update()
# -----------------------------------------------------------------
# Script Filter
# Load cached entries first and start update if they've
# expired (or don't exist)
entries = wf.cached_data(DFX_CACHE_KEY, max_age=0)
if not entries or not wf.cached_data_fresh(DFX_CACHE_KEY, MAX_CACHE_AGE):
if not is_running('update'):
run_in_background(
'update',
['/usr/bin/python', wf.workflowfile('dfx.py'), '--update']
)
# Tell Alfred to re-run the Script Filter if cache is being updated
if is_running('update'):
wf.rerun = 1
# No data in cache yet. Show warning and exit.
if entries is None:
wf.add_item('Waiting for Default Folder X data…',
'Please try again in a second or two',
icon=ICON_WARNING)
wf.send_feedback()
return
# Filter entries
if types != ['all']:
log.debug('Filtering for types : %r', types)
entries = [e for e in entries if e.type in types]
# Remove duplicates and non-existent files
entries = [e for e in set(entries) if os.path.exists(e.path)]
# Filter data against query if there is one
if query:
total = len(entries)
entries = wf.filter(query, entries, lambda e: e.name, min_score=30)
log.info('%d/%d entries match `%s`', len(entries), total, query)
# Prepare Alfred results
if not entries:
wf.add_item(
'Nothing found',
'Try a different query?',
icon=ICON_WARNING)
for e in entries:
if types == ['all']:
title = prefix_name(e)
else:
title = e.name
wf.add_item(
title,
e.pretty_path,
arg=e.path,
uid=e.path,
copytext=e.path,
largetext=e.path,
type='file',
valid=True,
icon=e.path,
icontype='fileicon')
wf.send_feedback()
return 0
if __name__ == '__main__':
wf = Workflow3(
default_settings=DEFAULT_SETTINGS,
update_settings=UPDATE_SETTINGS,
help_url=HELP_URL,
)
log = wf.logger
sys.exit(wf.run(main))
| 27.074074 | 77 | 0.57712 | 750 | 5,848 | 4.404 | 0.342667 | 0.009083 | 0.013321 | 0.006055 | 0.04178 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009229 | 0.27736 | 5,848 | 215 | 78 | 27.2 | 0.771652 | 0.32541 | 0 | 0.070175 | 0 | 0 | 0.142709 | 0.008318 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.078947 | 0 | 0.157895 | 0.008772 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e1abc8a94b3a392ed50fc0b3ccdc815e74cb702 | 488 | py | Python | GUIScripts/Face Lock OpenCV/Utils/faceRegister.py | tanujadasari/Awesome_Python_Scripts | 7c5f7cfd40475cd02cbc0db966044a7d530861e6 | [
"MIT"
] | 1 | 2021-12-07T17:37:56.000Z | 2021-12-07T17:37:56.000Z | GUIScripts/Face Lock OpenCV/Utils/faceRegister.py | tanujadasari/Awesome_Python_Scripts | 7c5f7cfd40475cd02cbc0db966044a7d530861e6 | [
"MIT"
] | null | null | null | GUIScripts/Face Lock OpenCV/Utils/faceRegister.py | tanujadasari/Awesome_Python_Scripts | 7c5f7cfd40475cd02cbc0db966044a7d530861e6 | [
"MIT"
] | 2 | 2021-10-03T16:22:08.000Z | 2021-10-03T17:35:14.000Z | import cv2
import copy
cap = cv2.VideoCapture(0)
name=input("Please Enter Your First Name : ")
while(True):
ret,frame = cap.read()
frame1=copy.deepcopy(frame)
frame1 = cv2.putText(frame1, 'Press \'k\' to click photo', (200,200), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,255), 3, cv2.LINE_AA)
cv2.imshow('img1',frame1)
if cv2.waitKey(1) & 0xFF == ord('k'):
cv2.imwrite('faceRegister/%s.png'%name,frame)
cv2.destroyAllWindows()
break
cap.release() | 30.5 | 131 | 0.647541 | 71 | 488 | 4.408451 | 0.661972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080605 | 0.186475 | 488 | 16 | 132 | 30.5 | 0.707809 | 0 | 0 | 0 | 0 | 0 | 0.161554 | 0 | 0 | 0 | 0.00818 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e1ba41a49d67813eea251efa3d4b9c85e5c7d26 | 5,409 | py | Python | NMT.py | yansun1996/Neural-Machine-Translation-Chinese2English | 394c8dd43f26acfcec9de081c39376d56f128266 | [
"MIT"
] | null | null | null | NMT.py | yansun1996/Neural-Machine-Translation-Chinese2English | 394c8dd43f26acfcec9de081c39376d56f128266 | [
"MIT"
] | null | null | null | NMT.py | yansun1996/Neural-Machine-Translation-Chinese2English | 394c8dd43f26acfcec9de081c39376d56f128266 | [
"MIT"
] | 4 | 2018-10-24T23:06:05.000Z | 2018-12-27T04:18:21.000Z | #!/usr/bin/env python
#-*- coding:utf-8 -*-
# author:Darksoul
# datetime:11/24/2018 22:02
# software: PyCharm
from network import *
from torch import optim
from torch.nn.utils import clip_grad_norm_
from random import randint
from tqdm import tqdm
# import loss func
import masked_cross_entropy
import numpy as np
def nmt_training(src, tgt, pairs, test_src, test_tgt, test_pairs):
num_batch = len(pairs) // cfg.batch_size
encoder_test = Encoder(src.num, cfg.embed_size, cfg.hidden_size, cfg.n_layers_encoder, dropout=cfg.dropout)
decoder_test = Decoder(cfg.embed_size, cfg.hidden_size, tgt.num, cfg.n_layers_decoder, dropout=cfg.dropout)
net = Seq2Seq(encoder_test,decoder_test).cuda()
net = load_checkpoint(net, cfg)
opt = optim.Adam(net.parameters(), cfg.lr)
total_loss = []
for step in range(1, cfg.iteration-cfg.load_checkpoint):
tmp_loss = 0
for batch_index in range(num_batch):
input_batches, input_lengths, \
target_batches, target_lengths = random_batch(src, tgt, pairs, cfg.batch_size, batch_index)
opt.zero_grad()
output = net(input_batches, input_lengths, target_batches, target_lengths)
# mask loss
if cfg.loss_type == 'mask':
loss = masked_cross_entropy.compute_loss(
output.transpose(0, 1).contiguous(),
target_batches.transpose(0, 1).contiguous(),
target_lengths, ignore_index=cfg.PAD_idx
)
else:
loss = F.nll_loss(output[1:].view(-1, tgt.num),
target_batches[1:].contiguous().view(-1),
ignore_index=cfg.PAD_idx)
tmp_loss += loss.item()
total_loss.append(loss.item())
clip_grad_norm_(net.parameters(), cfg.grad_clip)
loss.backward()
opt.step()
if (step + cfg.load_checkpoint) % cfg.save_iteration == 0:
test_idx = randint(0, cfg.batch_size-1)
with open('./loss_log_train.txt', 'w') as outfile:
for item in total_loss:
outfile.write("%s\n" % item)
print("Epoch: {}, Loss: {}".format(step + cfg.load_checkpoint, tmp_loss/num_batch))
save_checkpoint(net, cfg, step + cfg.load_checkpoint)
_, pred = net.inference(input_batches[:, test_idx].reshape(input_lengths[0].item(), 1),
input_lengths[0].reshape(1))
try:
inp = ' '.join([src.idx2w[t] for t in input_batches[:,test_idx].cpu().numpy() if t != PAD_idx])
pred = ' '.join([tgt.idx2w[t] for t in pred if t != PAD_idx])
gt = ' '.join([tgt.idx2w[t] for t in target_batches[:,test_idx].cpu().numpy() if t != PAD_idx])
print("Input: {}".format(inp))
print("Ground Truth: {}".format(gt))
print("Prediction: {}".format(pred))
except Exception as e:
print(e)
# print("Epoch {} finished".format(str(step)))
random.shuffle(pairs)
def nmt_testing(src, tgt, pairs, test_src, test_tgt, test_pairs):
encoder_test = Encoder(src.num, cfg.embed_size, cfg.hidden_size, cfg.n_layers_encoder, dropout=cfg.dropout)
decoder_test = Decoder(cfg.embed_size, cfg.hidden_size, tgt.num, cfg.n_layers_decoder, dropout=cfg.dropout)
net = Seq2Seq(encoder_test,decoder_test).cuda()
net = BeamSearch(net.encoder, net.decoder, cfg.beam_widths).cuda()
net = load_checkpoint(net, cfg)
# if don't want beam search, set beam width = [1]
for i in cfg.beam_widths:
blue_score = []
for index_sample in tqdm(range(len(test_pairs))):
input_batches, input_lengths, \
target_batches, target_lengths = random_batch(test_src, test_tgt, test_pairs, 1, index_sample)
for test_idx in range(1):
pred = net(input_batches[:, test_idx].reshape(input_lengths[0].item(), 1), input_lengths[0].reshape(1),
i, MAX_LENGTH)
inp = ' '.join([test_src.idx2w[t] for t in input_batches[:, test_idx].cpu().numpy()])
mt = ' '.join([test_tgt.idx2w[t] for t in pred if t != PAD_idx])
idx = mt.find('<eos>')
mt = mt[:idx + 5]
ref = ' '.join([test_tgt.idx2w[t] for t in target_batches[:, test_idx].cpu().numpy() if t != PAD_idx])
blue_score.append(bleu([mt], [[ref]], 4))
if index_sample % 100 == 0:
print(str(index_sample))
# print('INPUT:\n' + inp)
# print('REF:\n' + ref)
# print('PREDICTION:\n' + mt)
# print("------")
print(str(i) + " finished: " + str(np.mean(blue_score)))
if __name__ == '__main__':
if not os.path.exists(cfg.checkpoints_path):
os.mkdir(cfg.checkpoints_path)
src, tgt, pairs = prepareData(cfg.data_path, 'english', 'chinese')
src.trim()
tgt.trim()
test_src, test_tgt, test_pairs = prepareData('data/small_set/test.txt', 'english', 'chinese')
test_src.trim()
test_tgt.trim()
if cfg.is_training:
nmt_training(src, tgt, pairs, test_src, test_tgt, test_pairs)
else:
nmt_testing(src, tgt, pairs, test_src, test_tgt, test_pairs)
| 38.913669 | 119 | 0.588649 | 716 | 5,409 | 4.22486 | 0.236034 | 0.020826 | 0.021818 | 0.027769 | 0.463471 | 0.416529 | 0.385785 | 0.383141 | 0.361322 | 0.358347 | 0 | 0.013067 | 0.278425 | 5,409 | 138 | 120 | 39.195652 | 0.761978 | 0.063228 | 0 | 0.130435 | 0 | 0 | 0.033248 | 0.004552 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.076087 | 0 | 0.097826 | 0.076087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e231fd0d0d1f3e3c8776518fd18c86da5e4e419 | 7,453 | py | Python | model/model_enc/b.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | model/model_enc/b.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | model/model_enc/b.py | Tommy-Liu/MovieQA_Contest | 4281bf4a731aa14a0d19f18adda31d59a4a297cb | [
"MIT"
] | null | null | null | import numpy as np
import tensorflow as tf
from config import MovieQAPath
from legacy.input import Input
_mp = MovieQAPath()
hp = {'emb_dim': 300, 'feat_dim': 512,
'learning_rate': 10 ** (-3), 'decay_rate': 0.83, 'decay_type': 'exp', 'decay_epoch': 2,
'opt': 'adam', 'checkpoint': '', 'dropout_rate': 0.1, 'pos_len': 40}
def dropout(x, training):
return tf.layers.dropout(x, hp['dropout_rate'], training=training)
def language_encode(a, b, al, bl, a_pos, b_pos):
gamma = tf.convert_to_tensor(10 ** (-8), tf.float32)
al, bl = tf.expand_dims(al, axis=-1), tf.expand_dims(bl, axis=-1)
ab_dot = tf.tensordot(a, b, axes=[[-1], [-1]]) / (hp['emb_dim'] ** 0.5) # (1, L_q, N, L_s)
a_attention = tf.expand_dims(tf.nn.softmax(tf.reduce_sum(ab_dot, axis=[2, 3])), axis=2) # (1, L_q, 1)
a_output = (a + a * a_attention) * tf.nn.tanh(a_pos) # (1, L_q, E_t)
a_enc = tf.reduce_sum(a_output, axis=1) / (tf.to_float(al) + gamma) # (1, E_t)
b_attention = tf.expand_dims(tf.nn.softmax(tf.reduce_sum(ab_dot, axis=[0, 1])), axis=2) # (N, L_s)
b_output = (b + b * b_attention) * tf.nn.tanh(b_pos) # (N, L_s, E_t)
b_enc = tf.reduce_sum(b_output, axis=1) / (tf.to_float(bl) + gamma) # (N, E_t)
return a_enc, b_enc
class Model(object):
def __init__(self, data, training=False):
self.data = data
self.initializer = tf.orthogonal_initializer()
q_mask = tf.sequence_mask(self.data.ql, maxlen=25) # (1, L_q)
s_mask = tf.sequence_mask(self.data.sl, maxlen=29) # (N, L_s)
a_mask = tf.sequence_mask(self.data.al, maxlen=34) # (5, L_a)
with tf.variable_scope('Embedding'):
self.embedding = tf.get_variable('embedding_matrix',
initializer=np.load(_mp.embedding_file), trainable=False)
self.ques = tf.nn.embedding_lookup(self.embedding, self.data.ques) # (1, L_q, E)
self.ans = tf.nn.embedding_lookup(self.embedding, self.data.ans) # (5, L_a, E)
self.subt = tf.nn.embedding_lookup(self.embedding, self.data.subt) # (N, L_s, E)
# self.ques = tf.layers.dropout(self.ques, hp['dropout_rate'], training=training) # (1, L_q, E)
# self.ans = tf.layers.dropout(self.ans, hp['dropout_rate'], training=training) # (5, L_a, E)
# self.subt = tf.layers.dropout(self.subt, hp['dropout_rate'], training=training) # (N, L_s, E)
with tf.variable_scope('Embedding_Linear'):
self.ques_embedding = tf.layers.dense(
self.ques, hp['emb_dim'], use_bias=False, kernel_initializer=self.initializer) # (1, L_q, E_t)
self.ans_embedding = tf.layers.dense(self.ans, hp['emb_dim'], use_bias=False, reuse=True) # (5, L_a, E_t)
self.subt_embedding = tf.layers.dense(self.subt, hp['emb_dim'], use_bias=False,
reuse=True, ) # (N, L_s, E_t)
with tf.variable_scope('Language_Encode'):
position_attn = tf.get_variable('position_attention', shape=[hp['pos_len'], hp['emb_dim']],
initializer=self.initializer, trainable=False)
ques_pos, _ = tf.split(position_attn, [25, hp['pos_len'] - 25])
ans_pos, _ = tf.split(position_attn, [34, hp['pos_len'] - 34])
subt_pos, _ = tf.split(position_attn, [29, hp['pos_len'] - 29])
q_qa_enc, a_qa_enc = language_encode(self.ques, self.ans, self.data.ql, self.data.al, ques_pos, ans_pos)
q_qs_enc, s_qs_enc = language_encode(self.ques, self.subt, self.data.ql, self.data.sl, ques_pos, subt_pos)
a_as_enc, s_as_enc = language_encode(self.ans, self.subt, self.data.al, self.data.sl, ans_pos, subt_pos)
self.ques_enc = tf.layers.dense(tf.concat(
[q_qa_enc, q_qs_enc], axis=-1), hp['feat_dim'],
kernel_initializer=self.initializer, activation=tf.nn.tanh) # (1, F_t)
self.ans_enc = tf.layers.dense(tf.concat(
[a_qa_enc, a_as_enc], axis=-1), hp['feat_dim'],
kernel_initializer=self.initializer, activation=tf.nn.tanh) # (5, F_t)
self.subt_enc = tf.layers.dense(tf.concat(
[s_qs_enc, s_as_enc], axis=-1), hp['feat_dim'],
kernel_initializer=self.initializer, activation=tf.nn.tanh) # (N, F_t)
#
# self.ques_enc = tf.layers.dense(self.ques_enc, hp['feat_dim']) # (1, L_q, 2 * E_t)
# self.ans_enc = tf.layers.dense(self.ques_enc, hp['feat_dim']) # (5, L_a, 2 * E_t)
# self.subt_enc = tf.layers.dense(self.ques_enc, hp['feat_dim']) # (N, L_s, 2 * E_t)
#
# self.m_subt = tf.layers.dense(
# self.subt_enc, hp['feat_dim'], use_bias=False, name='encode_transform') # (N, F_t)
# self.m_ques = tf.layers.dense(
# self.ques_enc, hp['feat_dim'], use_bias=False, reuse=True, name='encode_transform') # (1, F_t)
# self.m_ans = tf.layers.dense(
# self.ans_enc, hp['feat_dim'], use_bias=False, reuse=True, name='encode_transform') # (5, F_t)
#
# self.m_subt = tf.layers.dropout(self.m_subt, hp['dropout_rate'], training=training)
# self.m_ques = tf.layers.dropout(self.m_ques, hp['dropout_rate'], training=training)
# self.m_ans = tf.layers.dropout(self.m_ans, hp['dropout_rate'], training=training)
#
t_shape = tf.shape(self.subt_enc)
split_num = tf.cast(tf.ceil(t_shape[0] / 5), dtype=tf.int32)
pad_num = split_num * 5 - t_shape[0]
paddings = tf.convert_to_tensor([[0, pad_num], [0, 0]])
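# Descriptive note (derived from the code below): the subtitle encodings are padded
# so their row count is a multiple of 5; every 5 consecutive subtitles are then
# mean-pooled into a single memory node inside the Memory_Block scope.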
with tf.variable_scope('Memory_Block'):
self.mem_feat = tf.pad(self.subt_enc, paddings)
self.mem_block = tf.reshape(self.mem_feat, [split_num, 5, hp['feat_dim']])
self.mem_node = tf.reduce_mean(self.mem_block, axis=1)
self.mem_opt = tf.layers.dense(self.mem_node, hp['feat_dim'],
activation=tf.nn.tanh, kernel_initializer=self.initializer)
self.mem_direct = tf.matmul(self.mem_node, self.mem_opt, transpose_b=True) / (hp['feat_dim'] ** 0.5)
self.mem_fw_direct = tf.nn.softmax(self.mem_direct)
self.mem_bw_direct = tf.nn.softmax(self.mem_direct, axis=0)
self.mem_self = tf.matmul(self.mem_fw_direct, self.mem_node) + tf.matmul(self.mem_bw_direct, self.mem_node)
self.mem_attn = tf.nn.softmax(tf.matmul(self.mem_self, self.ques_enc, transpose_b=True))
self.mem_output = tf.reduce_sum(self.mem_self * self.mem_attn, axis=0)
self.output = tf.reduce_sum(self.mem_output * self.ans_enc, axis=1)
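# self.output is a (5,) score vector, one entry per candidate answer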
def main():
data = Input(split='train')
model = Model(data)
for v in tf.global_variables():
print(v)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
with tf.Session(config=config) as sess:
sess.run([model.data.initializer, tf.global_variables_initializer()], )
# q, a, s = sess.run([model.ques_enc, model.ans_enc, model.subt_enc])
# print(q.shape, a.shape, s.shape)
t, tt = sess.run([model.output, data.gt])
print(t, tt)
print(t.shape, tt.shape)
if __name__ == '__main__':
main()
| 48.396104 | 119 | 0.608748 | 1,157 | 7,453 | 3.674157 | 0.146067 | 0.041167 | 0.039755 | 0.039991 | 0.46836 | 0.351211 | 0.241825 | 0.189602 | 0.14773 | 0.14279 | 0 | 0.017353 | 0.234536 | 7,453 | 153 | 120 | 48.712418 | 0.727783 | 0.202469 | 0 | 0.033333 | 0 | 0 | 0.054293 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.044444 | 0.011111 | 0.122222 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e2367d878b415218b0850fdded8bf92ca5665fe | 12,010 | py | Python | modules/finance.py | NikhilNGY/telegram-Bot | fabd5930f3ae426b30791d76d5237fd78e57dd8c | [
"Apache-2.0"
] | 5 | 2018-01-10T03:39:00.000Z | 2021-08-08T15:22:31.000Z | modules/finance.py | NikhilNGY/telegram-Bot | fabd5930f3ae426b30791d76d5237fd78e57dd8c | [
"Apache-2.0"
] | null | null | null | modules/finance.py | NikhilNGY/telegram-Bot | fabd5930f3ae426b30791d76d5237fd78e57dd8c | [
"Apache-2.0"
] | 12 | 2018-10-08T13:52:39.000Z | 2021-07-20T16:13:38.000Z | #!/usr/bin/env python3.5
# -*- coding: utf-8 -*-
""" This is the finance module, controlling all the money commands.
The data is stored in data/owed.json, which is loaded using some of the helper
functions.
"""
from telegram.ext import (ConversationHandler, MessageHandler, CommandHandler,
Filters, CallbackQueryHandler)
from telegram import InlineKeyboardMarkup, ReplyKeyboardRemove
from modules import helper, strings
import Bot
# give numbers for the ConversationHandler buttons used in /iowes
OWER, OWEE, AMOUNT = range(3)
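# The /owes conversation (defined in owes_handler below) walks through these states
# in order: OWER (pick/type who owes) -> OWEE (pick/type who is owed) -> AMOUNT (type the sum).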
def list_owed(bot, update, args):
owed = helper.loadjson(loc_owedjson)
chat_id = str(update.message.chat_id)
reply_markup = None
try:
owed[chat_id]
except KeyError:
owed[chat_id] = {}
if len(args) == 0:
keyboard = helper.make_keyboard(owed[chat_id].keys(), "owers")
reply_markup = InlineKeyboardMarkup(keyboard)
res = msgListMoneyOwed
elif len(args) == 1:
if args[0] == "all": # if arg1 is "all", print all debts
if len(owed[chat_id]) == 0:
res = msgNoDebts
else:
res = msgListMoneyOwed
for ower in owed[chat_id]:
res += helper.print_owed(owed, chat_id, ower)
else: # else, print debts of name
try:
res = msgListMoneyOwedIndiv.format(args[0])
res += helper.print_owed(owed, chat_id, args[0])
except KeyError:
res = args[0] + " has no debts!"
else:
res = strings.errBadFormat
update.message.reply_text(res, reply_markup=reply_markup)
def clear(bot, update, args):
owed = helper.loadjson(loc_owedjson)
chat_id = str(update.message.chat_id)
sender = update.message.from_user
try:
owed[chat_id]
except KeyError:
owed[chat_id] = {}
if sender.id != Bot.OWNER_ID:
update.message.reply_text(strings.errNotAdmin)
print(strings.errUnauthCommand.format(sender.username))
return
if len(args) == 1 and args[0] == "all":
helper.dumpjson(loc_bckpjson, owed)
owed.pop(chat_id)
update.message.reply_text(msgAllDebtsCleared)
print(msgAllDebtsClearedTerm)
elif len(args) == 2:
try:
owed[chat_id][args[0]]
except KeyError:
update.message.reply_text(strings.errNoOwer + args[0])
return
if args[1] == "all":
owed[chat_id].pop(args[0])
print(msgDebtsOfCleared.format(args[0]))
else:
owed[chat_id][args[0]].pop(args[1])
update.message.reply_text(
msgDebtsOfToCleared.format(args[0], args[1]))
# remove from database if no debts
if owed[chat_id] == {}:
owed.pop(chat_id)
elif owed[chat_id][args[0]] == {}:
owed[chat_id].pop(args[0])
else:
update.message.reply_text(strings.errBadFormat)
helper.dumpjson(loc_owedjson, owed)
def owes_helper(chat_id, ower, owee, amount):
owed = helper.loadjson(loc_owedjson)
try:
owed[chat_id]
except KeyError:
owed[chat_id] = {}
try:
owed[chat_id][ower]
except KeyError:
owed[chat_id][ower] = {}
print("Added new ower: " + ower + ".")
try:
owed[chat_id][ower][owee]
except KeyError:
owed[chat_id][ower][owee] = 0
print("Added new owee for ower " + ower + ".")
owed[chat_id][ower][owee] += float(amount)
result = owed[chat_id][ower][owee]
# check whether owed sum is now 0 and removes if necessary
if owed[chat_id][ower][owee] == 0:
owed[chat_id][ower].pop(owee)
if owed[chat_id][ower] == {}:
owed[chat_id].pop(ower)
if owed[chat_id] == {}:
owed.pop(chat_id)
helper.dumpjson(loc_owedjson, owed)
return result
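# Illustrative shape of the data stored in data/owed.json (names are made up):
# {"<chat_id>": {"alice": {"bob": 12.5}}}  means Alice owes Bob 12.5 in that chat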
def inline_owes(bot, update, args, user_data):
owed = helper.loadjson(loc_owedjson)
chat_id = str(update.message.chat_id)
res = ""
if len(args) == 0:
try:
keyboard = helper.make_keyboard(owed[chat_id].keys(), "")
res = msgCurrentOwers
except KeyError:
res = msgNoDebts
keyboard = []
reply_markup = InlineKeyboardMarkup(keyboard)
update.message.reply_text(res, reply_markup=reply_markup)
return OWER
elif len(args) == 1:
if args[0] == "all":
update.message.reply_text(strings.errAllName)
return ConversationHandler.END
try:
res = msgWhoOwedTo.format(args[0])
keyboard = helper.make_keyboard(owed[chat_id][args[0]].keys(), "")
except KeyError:
keyboard = []
user_data["ower"] = args[0]
reply_markup = InlineKeyboardMarkup(keyboard)
update.message.reply_text(res, reply_markup=reply_markup)
return OWEE
elif len(args) == 2:
if args[0] == "all" or args[1] == "all":
update.message.reply_text(strings.errAllName)
return ConversationHandler.END
user_data["ower"] = args[0]
user_data["owee"] = args[1]
update.message.reply_text(msgHowMuch.format(args[0], args[1]))
return AMOUNT
elif len(args) == 3:
if args[0] == "all" or args[1] == "all":
update.message.reply_text(strings.errAllName)
else:
try:
amount = owes_helper(chat_id, args[0], args[1], args[2])
update.message.reply_text(args[0] + " now owes " + args[1] + " " + Bot.CURRENCY
+ str(amount) + ".")
except ValueError:
update.message.reply_text(strings.errNotInt)
else:
update.message.reply_text(strings.errBadFormat)
return ConversationHandler.END
def cancel(bot, update, user_data):
try:
user_data.pop("ower")
user_data.pop("owee")
except KeyError:
pass
update.message.reply_text("Command cancelled",
reply_markup=ReplyKeyboardRemove())
return ConversationHandler.END
def create_ower(bot, update, user_data):
ower = update.message.text
user_data["ower"] = ower
update.message.reply_text(msgNewOwer.format(ower, ower))
return OWEE
def create_owee(bot, update, user_data):
owee = update.message.text
ower = user_data["ower"]
user_data["owee"] = owee
update.message.reply_text(msgNewOwee.format(owee, ower, ower, owee))
return AMOUNT
def amount_owed(bot, update, user_data):
chat_id = str(update.message.chat_id)
ower = user_data["ower"]
owee = user_data["owee"]
amount = update.message.text
try:
amount = owes_helper(chat_id, ower, owee, amount) # save
msg = ower + " now owes " + owee + " " + Bot.CURRENCY + str(amount) + "."
except ValueError:
msg = strings.errNotInt
update.message.reply_text(msg)
user_data.pop("ower")
user_data.pop("owee")
return ConversationHandler.END
def ower_button(bot, update, user_data):
owed = helper.loadjson(loc_owedjson)
query = update.callback_query
chat_id = str(query.message.chat_id)
ower = query.data # this is the name pressed
user_data["ower"] = ower
try:
keyboard = helper.make_keyboard(owed[chat_id][ower].keys(), "")
except KeyError:
keyboard = []
reply = InlineKeyboardMarkup(keyboard)
bot.editMessageText(text=msgWhoOwedTo.format(ower),
chat_id=query.message.chat_id,
message_id=query.message.message_id,
reply_markup=reply)
return OWEE
def owee_button(bot, update, user_data):
query = update.callback_query
ower = user_data["ower"]
owee = query.data # this is the name pressed
user_data["owee"] = owee
bot.editMessageText(text=msgHowMuch.format(ower, owee),
chat_id=query.message.chat_id,
message_id=query.message.message_id)
return AMOUNT
def list_owed_button(bot, update, user_data):
query = update.callback_query
chat_id = str(query.message.chat_id)
message_here = strings.errButtonMsg
reply_markup = None
if query.data.startswith("owers"):
ower = query.data[5:]
user_data["oweower"] = ower
owed = helper.loadjson(loc_owedjson)
keyboard = helper.make_keyboard(owed[chat_id][ower].keys(), "owees")
message_here = msgListMoneyOwedIndiv.format(ower)
reply_markup = InlineKeyboardMarkup(keyboard)
elif query.data.startswith("owees"):
owed = helper.loadjson(loc_owedjson)
ower = user_data["oweower"]
owee = query.data[5:]
message_here = ower + " owes " + owee + " " + Bot.CURRENCY \
+ str(owed[chat_id][ower][owee])
user_data.pop("oweower") # cleanup
else:
message_here = strings.errUnknownCallback
bot.editMessageText(text=message_here,
chat_id=query.message.chat_id,
message_id=query.message.message_id,
reply_markup=reply_markup)
def reset_owes(bot, update, user_data):
try:
user_data.pop("ower")
user_data.pop("owee")
except KeyError:
pass
update.message.reply_text(strings.errCommandStillRunning,
reply_markup=ReplyKeyboardRemove())
return ConversationHandler.END
# define handlers
clear_handler = CommandHandler("clear", clear, pass_args=True)
owe_handler = CommandHandler("owe", list_owed, pass_args=True)
owes_buttons_handler = CallbackQueryHandler(list_owed_button, pass_user_data=True)
owes_handler = ConversationHandler(
entry_points=[CommandHandler("owes",
inline_owes,
pass_args=True,
pass_user_data=True)],
states={
OWER: [MessageHandler(Filters.text,
create_ower,
pass_user_data=True),
CallbackQueryHandler(ower_button,
pass_user_data=True)],
OWEE: [MessageHandler(Filters.text,
create_owee,
pass_user_data=True),
CallbackQueryHandler(owee_button,
pass_user_data=True)],
AMOUNT: [MessageHandler(Filters.text,
amount_owed,
pass_user_data=True)]
},
fallbacks=[CommandHandler("cancel",
cancel,
pass_user_data=True),
CommandHandler("owes",
reset_owes,
pass_user_data=True)]
)
loc_owedjson = "./data/owed.json"
loc_bckpjson = "./data/bckp.json"
msgNoMoneyOwed = "Lucky... that person doesn't owe anything!"
msgListMoneyOwed = "Here is a list of people owing money:\n"
msgListMoneyOwedIndiv = "Here is a list of everyone {} owes money to: \n"
msgHowMuch = "How much does {} owe {}? Please type a number."
msgNewOwer = "{} was saved as a new ower. Please input the name of the person that {} owes " \
"money to."
msgNewOwee = "{} was saved as a new owee for {}. " + msgHowMuch
msgCurrentOwers = "Here is the current list of people who owe money. Please select one, or reply " \
"with a new name to add a new ower."
msgWhoOwedTo = "Who does {} owe money to? Type in a new name to add a new owee."
msgNoDebts = "Wow... there are no debts here!"
msgAllDebtsCleared = "All debts cleared!"
msgAllDebtsClearedTerm = msgAllDebtsCleared + " A backup file can be found in bckp.json."
msgDebtsOfCleared = "{} had all debts cleared by the owner."
msgDebtsOfToCleared = "{}'s debts to {} were cleared."
| 31.856764 | 100 | 0.596503 | 1,396 | 12,010 | 4.982808 | 0.15043 | 0.049166 | 0.047441 | 0.063255 | 0.466504 | 0.35552 | 0.299166 | 0.247269 | 0.222685 | 0.165469 | 0 | 0.006146 | 0.295504 | 12,010 | 376 | 101 | 31.941489 | 0.815979 | 0.041049 | 0 | 0.503497 | 0 | 0 | 0.079927 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041958 | false | 0.048951 | 0.013986 | 0 | 0.111888 | 0.024476 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e23756e604347068b55a50bce901aa100335e52 | 3,174 | py | Python | examples/unsupervised_learning/tsdae/train_tsdae_from_file.py | djstrong/sentence-transformers | 878e05436a155d7072b0bc64bf5cd216baf49758 | [
"Apache-2.0"
] | 1 | 2021-07-04T07:06:19.000Z | 2021-07-04T07:06:19.000Z | examples/unsupervised_learning/tsdae/train_tsdae_from_file.py | triet1102/sentence-transformers | 6992f4c9b7e600ce89f69d6bc0b495ec177b0312 | [
"Apache-2.0"
] | null | null | null | examples/unsupervised_learning/tsdae/train_tsdae_from_file.py | triet1102/sentence-transformers | 6992f4c9b7e600ce89f69d6bc0b495ec177b0312 | [
"Apache-2.0"
] | null | null | null | """
This file loads sentences from a provided text file. It is expected that there is one sentence per line in that text file.
TSDAE will be trained on these sentences. Checkpoints are stored every 500 steps in the output folder.
Usage:
python train_tsdae_from_file.py path/to/sentences.txt
"""
from sentence_transformers import SentenceTransformer, LoggingHandler
from sentence_transformers import models, datasets, losses
import logging
import gzip
from torch.utils.data import DataLoader
from datetime import datetime
import sys
import tqdm
#### Just some code to print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO,
handlers=[LoggingHandler()])
#### /print debug information to stdout
# Train Parameters
model_name = 'bert-base-uncased'
batch_size = 8
#Input file path (a text file, each line a sentence)
if len(sys.argv) < 2:
print("Run this script with: python {} path/to/sentences.txt".format(sys.argv[0]))
exit()
filepath = sys.argv[1]
# Save path to store our model
output_name = ''
if len(sys.argv) >= 3:
output_name = "-"+sys.argv[2].replace(" ", "_").replace("/", "_").replace("\\", "_")
model_output_path = 'output/train_tsdae{}-{}'.format(output_name, datetime.now().strftime("%Y-%m-%d_%H-%M-%S"))
################# Read the train corpus #################
train_sentences = []
with gzip.open(filepath, 'rt', encoding='utf8') if filepath.endswith('.gz') else open(filepath, encoding='utf8') as fIn:
for line in tqdm.tqdm(fIn, desc='Read file'):
line = line.strip()
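# keep only reasonably long lines (at least 10 characters) as training sentences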
if len(line) >= 10:
train_sentences.append(line)
logging.info("{} train sentences".format(len(train_sentences)))
################# Initialize an SBERT model #################
word_embedding_model = models.Transformer(model_name)
# Apply **cls** pooling to get one fixed sized sentence vector
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
pooling_mode_mean_tokens=False,
pooling_mode_cls_token=True,
pooling_mode_max_tokens=False)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
################# Train and evaluate the model (it needs about 1 hour for one epoch of AskUbuntu) #################
# We wrap our training sentences in the DenoisingAutoEncoderDataset to add deletion noise on the fly
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)
logging.info("Start training")
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=1,
weight_decay=0,
scheduler='constantlr',
optimizer_params={'lr': 3e-5},
show_progress_bar=True,
checkpoint_path=model_output_path,
use_amp=False #Set to True, if your GPU supports FP16 cores
)
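# Illustrative follow-up (not part of the original script): a saved checkpoint
# folder can be reloaded for encoding, e.g. (assuming "<step>" names one of the
# numbered checkpoint subfolders written during training):
# trained_model = SentenceTransformer(os.path.join(model_output_path, "<step>"))
# embeddings = trained_model.encode(["An example sentence."])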
| 35.662921 | 127 | 0.687461 | 416 | 3,174 | 5.086538 | 0.466346 | 0.016541 | 0.02552 | 0.017013 | 0.033081 | 0.005671 | 0 | 0 | 0 | 0 | 0 | 0.007654 | 0.176749 | 3,174 | 88 | 128 | 36.068182 | 0.802143 | 0.257404 | 0 | 0 | 0 | 0 | 0.101436 | 0.019749 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.163265 | 0 | 0.163265 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e25089a713df45c8e53da3a2d7ea9e4aebc9787 | 1,057 | py | Python | tests/test_Settings.py | MafferDragonhand/ABookDownloader | 963c437698fc43d64ad84b625bdb7b1e3e2c6eb2 | [
"MIT"
] | 1 | 2021-01-14T10:38:06.000Z | 2021-01-14T10:38:06.000Z | tests/test_Settings.py | MafferDragonhand/ABookDownloader | 963c437698fc43d64ad84b625bdb7b1e3e2c6eb2 | [
"MIT"
] | null | null | null | tests/test_Settings.py | MafferDragonhand/ABookDownloader | 963c437698fc43d64ad84b625bdb7b1e3e2c6eb2 | [
"MIT"
] | null | null | null | import sys
import os
import json
sys.path.append('.')
sys.path.append('..')
import src.Settings # noqa: E402
def test_Settings_1():
os.makedirs('tests', exist_ok=True)
settingsPath = './tests/test_Settings_1.json'
settings = src.Settings.Settings(settingsPath)
with open(settingsPath, 'r', encoding='utf-8') as file:
localSettings = json.load(file)
assert localSettings == settings.DEFAULT_SETTINGS
assert localSettings['download_path'] == settings['download_path']
settings['debug'] = not settings['debug']
assert localSettings['debug'] != settings['debug']
os.remove(settingsPath)
def test_Settings_2():
os.makedirs('tests', exist_ok=True)
settingsPath = './tests/test_Settings_2.json'
localSettings = {
"debug": True,
}
with open(settingsPath, 'w', encoding='utf-8') as file:
json.dump(localSettings, file, ensure_ascii=False, indent=4)
settings = src.Settings.Settings(settingsPath)
assert settings['debug'] is True
os.remove(settingsPath)
| 32.030303 | 74 | 0.680227 | 126 | 1,057 | 5.595238 | 0.349206 | 0.068085 | 0.036879 | 0.056738 | 0.317731 | 0.156028 | 0.156028 | 0.156028 | 0.156028 | 0.156028 | 0 | 0.011574 | 0.182592 | 1,057 | 32 | 75 | 33.03125 | 0.804398 | 0.009461 | 0 | 0.214286 | 0 | 0 | 0.1311 | 0.053589 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e27ffd152210caae507abad2eee7abb9aa58e82 | 3,730 | py | Python | tests/testAnkiClasses.py | ConorSheehan1/org_to_anki | 1fa7633313e6888e06416ba9f43f55de04be0201 | [
"MIT"
] | null | null | null | tests/testAnkiClasses.py | ConorSheehan1/org_to_anki | 1fa7633313e6888e06416ba9f43f55de04be0201 | [
"MIT"
] | null | null | null | tests/testAnkiClasses.py | ConorSheehan1/org_to_anki | 1fa7633313e6888e06416ba9f43f55de04be0201 | [
"MIT"
] | null | null | null | import sys
import os
sys.path.append('../org_to_anki')
# Anki deck
from org_to_anki.ankiClasses.AnkiDeck import AnkiDeck
from org_to_anki.ankiClasses.AnkiQuestion import AnkiQuestion
def testGettingDeckNames():
# Create deck with subdeck
parent = AnkiDeck("parent")
child = AnkiDeck("child")
subChild = AnkiDeck("subChild")
child.addSubdeck(subChild)
parent.addSubdeck(child)
deckNames = parent.getDeckNames()
assert(deckNames == ["parent", "parent::child", "parent::child::subChild"])
def testDeckNameSetFor_GetAllDeckQuestion():
parent = AnkiDeck("parent")
child = AnkiDeck("child")
subChild = AnkiDeck("subChild")
child.addSubdeck(subChild)
parent.addSubdeck(child)
# Expected question
expectedQuestion1 = AnkiQuestion("What is the capital of Ireland")
expectedQuestion1.addAnswer("Dublin")
expectedQuestion1.setDeckName("parent")
expectedQuestion2 = AnkiQuestion("What is the capital of France")
expectedQuestion2.addAnswer("Paris")
expectedQuestion2.setDeckName("parent::child")
expectedQuestion3 = AnkiQuestion("What is the capital of Germany")
expectedQuestion3.addAnswer("Berlin")
expectedQuestion3.setDeckName("parent::child::subChild")
# Add questions
firstQuestion = AnkiQuestion("What is the capital of Ireland")
firstQuestion.addAnswer("Dublin")
parent.addQuestion(firstQuestion)
secondQuestion = AnkiQuestion("What is the capital of France")
secondQuestion.addAnswer("Paris")
child.addQuestion(secondQuestion)
thirdQuestion = AnkiQuestion("What is the capital of Germany")
thirdQuestion.addAnswer("Berlin")
subChild.addQuestion(thirdQuestion)
# Compare
questions = parent.getQuestions()
assert(questions == [expectedQuestion1, expectedQuestion2, expectedQuestion3])
def testCommentsAndParametersForAnkiQuestion():
q = AnkiQuestion("Test question")
q.addAnswer("Test Answer")
q.addTag("test tag")
q.addComment("Test comment")
q.addParameter("type", "basic")
q.addParameter("type1", "basic1")
assert(q.getAnswers() == ["Test Answer"])
assert(q.getTags() == ["test tag"])
assert(q.getComments() == ["Test comment"])
assert(q.getParameter("type") == "basic")
assert(q.getParameter("type1") == "basic1")
assert(q.getParameter("notFound") is None)
def testQuestionInheritParamsFromDeck():
q1 = AnkiQuestion("Test question")
q1.addAnswer("Test Answer")
q1.addParameter("type", "reversed")
deck = AnkiDeck("Test Deck")
deck.addParameter("type1", "basic1")
deck.addParameter("type", "basic")
deck.addQuestion(q1)
questions = deck.getQuestions()
assert(questions[0].getParameter("type") == "reversed")
assert(questions[0].getParameter("type1") == "basic1")
def testDecksInheritParamsFromParentDeck():
q1 = AnkiQuestion("Test question")
q1.addAnswer("Test Answer")
q1.addParameter("q0", "question")
deck0 = AnkiDeck("deck0")
deck0.addParameter("deck0", "deck0")
deck0.addQuestion(q1)
deck1 = AnkiDeck("deck1")
deck1.addParameter("deck1", "deck1")
deck1.addSubdeck(deck0)
deck2 = AnkiDeck("deck2")
deck2.addParameter("deck2", "deck2")
deck2.addParameter("deck1", "deck2")
deck2.addParameter("deck0", "deck2")
deck2.addParameter("q0", "deck2")
deck2.addSubdeck(deck1)
questions = deck2.getQuestions()
print(deck2._parameters)
print(questions[0]._parameters)
assert(questions[0].getParameter("deck2") == "deck2")
assert(questions[0].getParameter("deck1") == "deck1")
assert(questions[0].getParameter("deck0") == "deck0")
assert(questions[0].getParameter("q0") == "question")
| 29.370079 | 82 | 0.697319 | 368 | 3,730 | 7.043478 | 0.225543 | 0.040509 | 0.041667 | 0.048611 | 0.232253 | 0.213735 | 0.213735 | 0.128858 | 0.128858 | 0.128858 | 0 | 0.023657 | 0.161394 | 3,730 | 126 | 83 | 29.603175 | 0.804987 | 0.019839 | 0 | 0.162791 | 0 | 0 | 0.193151 | 0.012603 | 0 | 0 | 0 | 0 | 0.162791 | 1 | 0.05814 | false | 0 | 0.046512 | 0 | 0.104651 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e29f50c4ff76f899cfe2c47fc4e488e52280c33 | 320 | py | Python | PARTE_1/EX015/index.py | 0Fernando0/CursoPython | 1dcfdb6556e41c6dedcba2857aa4382b2f81aa59 | [
"MIT"
] | null | null | null | PARTE_1/EX015/index.py | 0Fernando0/CursoPython | 1dcfdb6556e41c6dedcba2857aa4382b2f81aa59 | [
"MIT"
] | null | null | null | PARTE_1/EX015/index.py | 0Fernando0/CursoPython | 1dcfdb6556e41c6dedcba2857aa4382b2f81aa59 | [
"MIT"
] | null | null | null | '''
script to compute the cost of a rented car,
using the kilometres driven
and also the number of days it stayed rented
'''
km = float(input('how many kilometres were driven? '))
dias = int(input('how many days was the vehicle rented? '))
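# rental rate (as used in the formula below): R$60.00 per day plus R$0.15 per kilometre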
preco = (dias * 60) + (km * 0.15)
print(f'you will pay R${preco:.2f}') | 24.615385 | 64 | 0.7125 | 48 | 320 | 4.75 | 0.729167 | 0.149123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022388 | 0.1625 | 320 | 13 | 65 | 24.615385 | 0.828358 | 0.36875 | 0 | 0 | 0 | 0 | 0.517949 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e2a88d3705d628c7317c1f7af97bc064784c79e | 1,452 | py | Python | systemcontrol/heaterPWM.py | hallee/espresso-arm | d535cc7d8fa41043c6f27fcefa52f98168df4cd4 | [
"MIT"
] | 36 | 2016-06-20T21:40:57.000Z | 2022-03-29T20:24:12.000Z | systemcontrol/heaterPWM.py | hallee/espresso-arm | d535cc7d8fa41043c6f27fcefa52f98168df4cd4 | [
"MIT"
] | 4 | 2016-08-25T01:52:25.000Z | 2018-04-29T19:07:21.000Z | systemcontrol/heaterPWM.py | hallee/espresso-arm | d535cc7d8fa41043c6f27fcefa52f98168df4cd4 | [
"MIT"
] | 7 | 2017-01-31T15:59:36.000Z | 2021-02-10T16:24:47.000Z | """
Software PWM.
Hardkernel doesn't support hardware PWM on the Odroid C2.
This should work fine on the Raspberry Pi GPIO as well, though.
"""
import time
try:
import wiringpi2 as wp
except ImportError:
import wiringpi as wp
from multiprocessing import Process, Value
class SoftwarePWM:
"""
Software PWM.
Opens a process that runs continuously.
"""
def __init__(self, pinNum):
self.Process = Process
self.duration = None
self.onTime = Value("d", 0.0)
self.offTime = Value("d", 0.0)
self.stop = False
self.processStarted = False
self.pin = pinNum
def controlPin(self, onTime, offTime, pin):
wp.pinMode(pin, 1)
while not self.stop:
wp.digitalWrite(pin, 1)
time.sleep(onTime.value)
wp.digitalWrite(pin, 0)
time.sleep(offTime.value)
def pwmUpdate(self, dutyCycle, f):
self.duration = 1/f
self.onTime.value = self.duration * (dutyCycle / 100)
self.offTime.value = self.duration - self.onTime.value
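# e.g. f = 1 Hz and dutyCycle = 25 give duration = 1 s, onTime = 0.25 s, offTime = 0.75 s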
if not self.processStarted:
self.p = self.Process(target = self.controlPin, args = (self.onTime, self.offTime, self.pin,))
self.p.start()
self.processStarted = True
# class HeaterControl:
#
# def main():
# controller = SoftwarePWM(27)
# controller.pwmUpdate(25, 1)
#
#
# if __name__ == '__main__':
# main()
| 24.610169 | 106 | 0.613636 | 176 | 1,452 | 4.994318 | 0.4375 | 0.056883 | 0.051195 | 0.018203 | 0.027304 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017159 | 0.277548 | 1,452 | 58 | 107 | 25.034483 | 0.820782 | 0.225207 | 0 | 0 | 0 | 0 | 0.001835 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.166667 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e2ab831671ccf5c451c37ed048bb11a3edb92e7 | 1,807 | py | Python | zentral/contrib/inventory/clients/dummy.py | arubdesu/zentral | ac0fe663f6e1c27f9a9f55a7500a87e6ac7d9190 | [
"Apache-2.0"
] | 634 | 2015-10-30T00:55:40.000Z | 2022-03-31T02:59:00.000Z | zentral/contrib/inventory/clients/dummy.py | arubdesu/zentral | ac0fe663f6e1c27f9a9f55a7500a87e6ac7d9190 | [
"Apache-2.0"
] | 145 | 2015-11-06T00:17:33.000Z | 2022-03-16T13:30:31.000Z | zentral/contrib/inventory/clients/dummy.py | arubdesu/zentral | ac0fe663f6e1c27f9a9f55a7500a87e6ac7d9190 | [
"Apache-2.0"
] | 103 | 2015-11-07T07:08:49.000Z | 2022-03-18T17:34:36.000Z | import logging
from .base import BaseInventory
logger = logging.getLogger('zentral.contrib.inventory.backends.dummy')
DUMMY_MACHINES = [
{'serial_number': '0123456789',
'groups': [{'reference': 'dummy_group_1',
'name': 'Dummy Group 1'}],
'os_version': {'name': 'OSX',
'build': 'Build1',
'major': 10,
'minor': 11,
'patch': 1},
'system_info': {'computer_name': 'dummy1',
'hardware_model': 'MacBook1,2',
'cpu_type': "Intel Core M @ 1.3GHz",
'cpu_physical_cores': 2,
'physical_memory': 8 * 2**30},
'osx_app_instances': [{'app': {'bundle_name': 'Dummy.app', 'bundle_version_str': '1.0'}}],
},
{'serial_number': '9876543210',
'groups': [{'reference': 'dummy_group_2',
'name': 'Dummy Group 2'}],
'os_version': {'name': 'OSX',
'build': 'Build2',
'major': 10,
'minor': 11,
'patch': 2},
'system_info': {'computer_name': 'dummy2',
'hardware_model': 'MacBook2,1',
'cpu_type': "Intel Core M @ 1.3GHz",
'cpu_physical_cores': 2,
'physical_memory': 8 * 2**30},
'osx_app_instances': [{'app': {'bundle_name': 'Dummy.app', 'bundle_version_str': '2.0'}}],
}
]
class InventoryClient(BaseInventory):
def __init__(self, config_d):
super(InventoryClient, self).__init__(config_d)
def get_machines(self):
for idx, machine_snapshot_d in enumerate(DUMMY_MACHINES):
machine_snapshot_d['reference'] = '{}${}'.format(self.__module__, idx)
yield machine_snapshot_d
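# Minimal usage sketch (illustrative only; assumes BaseInventory accepts an
# arbitrary config dict):
# client = InventoryClient({})
# for snapshot in client.get_machines():
#     print(snapshot['reference'], snapshot['serial_number'])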
| 37.645833 | 95 | 0.510791 | 179 | 1,807 | 4.849162 | 0.430168 | 0.046083 | 0.0553 | 0.057604 | 0.343318 | 0.251152 | 0.251152 | 0.251152 | 0.251152 | 0.251152 | 0 | 0.049464 | 0.328722 | 1,807 | 47 | 96 | 38.446809 | 0.666117 | 0 | 0 | 0.285714 | 0 | 0 | 0.348644 | 0.022136 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.047619 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e2b349192af6fe46d5f08ec199f771ed35f2189 | 6,293 | py | Python | src/librender/tests/test_integrator.py | tizian/layer-laboratory | 008cc94b76127e9eb74227fcd3d0145da8ddec30 | [
"CNRI-Python"
] | 7 | 2020-07-24T03:19:59.000Z | 2022-03-30T10:56:12.000Z | src/librender/tests/test_integrator.py | tizian/layer-laboratory | 008cc94b76127e9eb74227fcd3d0145da8ddec30 | [
"CNRI-Python"
] | 1 | 2021-04-07T22:30:23.000Z | 2021-04-08T00:55:36.000Z | src/librender/tests/test_integrator.py | tizian/layer-laboratory | 008cc94b76127e9eb74227fcd3d0145da8ddec30 | [
"CNRI-Python"
] | 2 | 2020-06-08T08:25:09.000Z | 2021-04-05T22:13:08.000Z | # """Tests common to all integrator implementations."""
import os
import mitsuba
import pytest
import enoki as ek
import numpy as np
from enoki.dynamic import Float32 as Float
from mitsuba.python.test.scenes import SCENES
integrators = [
'int_name', [
"depth",
"direct",
"path",
]
]
def make_integrator(kind, xml=""):
from mitsuba.core.xml import load_string
integrator = load_string("<integrator version='2.0.0' type='%s'>"
"%s</integrator>" % (kind, xml))
assert integrator is not None
return integrator
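# e.g. make_integrator("path") loads "<integrator version='2.0.0' type='path'></integrator>"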
scene_i = 0
def _save(film, int_name, suffix=''):
"""Quick utility to save rendered scenes for debugging."""
global scene_i
fname = os.path.join("/tmp",
"scene_{}_{}{}.exr".format(scene_i, int_name, suffix))
film.set_destination_file(fname)
film.develop()
print('Saved debug image to: ' + fname)
scene_i += 1
def check_scene(int_name, scene_name, is_empty=False):
from mitsuba.core.xml import load_string
from mitsuba.core import Bitmap, Struct
variant_name = mitsuba.variant()
print("variant_name:", variant_name)
integrator = make_integrator(int_name, "")
scene = SCENES[scene_name]['factory']()
integrator_type = {
'direct': 'direct',
'depth': 'depth',
# All other integrators: 'full'
}.get(int_name, 'full')
sensor = scene.sensors()[0]
avg = SCENES[scene_name][integrator_type]
film = sensor.film()
status = integrator.render(scene, sensor)
assert status, "Rendering ({}) failed".format(variant_name)
if False:
_save(film, int_name, suffix='_' + variant_name)
converted = film.bitmap(raw=True).convert(Bitmap.PixelFormat.RGBA, Struct.Type.Float32, False)
values = np.array(converted, copy=False)
means = np.mean(values, axis=(0, 1))
# Very noisy images, so we add a tolerance
assert ek.allclose(means, avg, rtol=5e-2), \
"Mismatch: {} integrator, {} scene, {}".format(
int_name, scene_name, variant_name)
return np.array(film.bitmap(raw=False), copy=True)
@pytest.mark.parametrize(*integrators)
def test01_create(variants_all_rgb, int_name):
integrator = make_integrator(int_name)
assert integrator is not None
if int_name == "direct":
# These properties should be queried
integrator = make_integrator(int_name, xml="""
<integer name="emitter_samples" value="3"/>
<integer name="bsdf_samples" value="12"/>
""")
# Cannot specify both shading_samples and (emitter_samples | bsdf_samples)
with pytest.raises(RuntimeError):
integrator = make_integrator(int_name, xml="""
<integer name="shading_samples" value="3"/>
<integer name="emitter_samples" value="5"/>
""")
elif int_name == "path":
# These properties should be queried
integrator = make_integrator(int_name, xml="""
<integer name="rr_depth" value="5"/>
<integer name="max_depth" value="-1"/>
""")
# Cannot use a negative `max_depth`, except -1 (unlimited depth)
with pytest.raises(RuntimeError):
integrator = make_integrator(int_name, xml="""
<integer name="max_depth" value="-2"/>
""")
@pytest.mark.parametrize(*integrators)
def test02_render_empty_scene(variants_all_rgb, int_name):
if not mitsuba.variant().startswith('gpu'):
check_scene(int_name, 'empty', is_empty=True)
@pytest.mark.parametrize(*integrators)
def test03_render_teapot(variants_all_rgb, int_name):
check_scene(int_name, 'teapot')
@pytest.mark.parametrize(*integrators)
def test04_render_box(variants_all_rgb, int_name):
check_scene(int_name, 'box')
@pytest.mark.parametrize(*integrators)
def test05_render_museum_plane(variants_all_rgb, int_name):
check_scene(int_name, 'museum_plane')
@pytest.mark.parametrize(*integrators)
def test06_render_timeout(variants_cpu_rgb, int_name):
from timeit import timeit
if mitsuba.core.DEBUG:
pytest.skip("Timeout is unreliable in debug mode.")
# Very long rendering job, but interrupted by a short timeout
timeout = 0.5
integrator = make_integrator(int_name,
"""<float name="timeout" value="{}"/>""".format(timeout))
scene = SCENES['teapot']['factory'](spp=100000)
sensor = scene.sensors()[0]
def wrapped():
assert integrator.render(scene, sensor) is True
effective = timeit(wrapped, number=1)
# Check that timeout is respected +/- 0.5s.
assert ek.allclose(timeout, effective, atol=0.5)
def make_reference_renders():
"""Produces reference images for the test scenes, printing the per-channel
average pixel values to update `scenes.SCENE_AVERAGES` when needed."""
mitsuba.set_variant('scalar_rgb')
from mitsuba.core import Bitmap, Struct
integrators = {
'depth': make_integrator('depth'),
'direct': make_integrator('direct'),
'full': make_integrator('path'),
}
spp = 32
averages = {n: {} for n in SCENES}
for int_name, integrator in integrators.items():
for scene_name, props in SCENES.items():
scene = props['factory'](spp=spp)
sensor = scene.sensors()[0]
film = sensor.film()
status = integrator.render(scene, sensor)
# Extract per-channel averages
converted = film.bitmap(raw=True).convert(
Bitmap.PixelFormat.RGBA, Struct.Type.Float32, False)
values = np.array(converted, copy=False)
averages[scene_name][int_name] = np.mean(values, axis=(0, 1))
# Save files
fname = "scene_{}_{}_reference.exr".format(scene_name, int_name)
film.set_destination_file(os.path.join("/tmp", fname))
film.develop()
print('========== Test scenes: per channel averages ==========')
for k, avg in averages.items():
print("'{}': {{".format(k))
for int_name, v in avg.items():
print(" {:<12}{},".format("'{}':".format(int_name), list(v)))
print('},')
if __name__ == '__main__':
make_reference_renders()
| 32.271795 | 98 | 0.634991 | 764 | 6,293 | 5.053665 | 0.265707 | 0.056203 | 0.043512 | 0.048951 | 0.375032 | 0.268324 | 0.203574 | 0.185962 | 0.161616 | 0.13209 | 0 | 0.012006 | 0.232322 | 6,293 | 194 | 99 | 32.438144 | 0.787208 | 0.083585 | 0 | 0.244444 | 0 | 0 | 0.163251 | 0.016361 | 0 | 0 | 0 | 0 | 0.044444 | 1 | 0.081481 | false | 0 | 0.088889 | 0 | 0.185185 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e2c387925002e004d41740e8d786fd0c2b731b6 | 4,476 | py | Python | script/hexadec.py | franbar1/TravisCI | 0dcbed84846df264d1c588dbbe7e88821274b04f | [
"MIT"
] | null | null | null | script/hexadec.py | franbar1/TravisCI | 0dcbed84846df264d1c588dbbe7e88821274b04f | [
"MIT"
] | null | null | null | script/hexadec.py | franbar1/TravisCI | 0dcbed84846df264d1c588dbbe7e88821274b04f | [
"MIT"
] | null | null | null | import os
import sys
import csv
import re
# Creation of the combined script
# adjust the paths according to the revision
def getFirstLetter(s):
return s[0].capitalize()
def listaPresenti():
lista=[]
with open('presenti.txt', 'r') as presenti:
lista = presenti.read().splitlines()
return lista
def aggiungiVoce(titolo, descrizione):
presenti = listaPresenti()
if len(str(titolo)) == 0:
print("--- Error: empty title ---")
exit()
if titolo == 'q':
print('End')
exit()
if str(titolo).lower() in presenti:
print('--- Error, term '+titolo+' already present ---')
exit()
if titolo[0].capitalize() not in 'QWERTYUIOPASDFGHJKLZXCVBNM':
print('--- Shit, we did not account for words that '
'start with something other than a letter, sorry ---')
exit()
# capitalize the first letter
if len(titolo) > 1:
titolo = titolo[0].capitalize()+titolo[1:]
else:
titolo = titolo.capitalize()
with open('../RR/Esterni/Glossario/sezioni/'+getFirstLetter(titolo)+'.tex', 'a') as file:
if os.stat('../RR/Esterni/Glossario/sezioni/'+getFirstLetter(titolo)+'.tex').st_size == 0:
file.write('\section{\\quad$'+getFirstLetter(titolo)+'\\quad$}\n')
file.write('\subsection{'+titolo+'}\n')
file.write('\index{' + titolo + '}\n')
file.write(descrizione+'\n\n')
with open('presenti.txt', 'a') as file2:
file2.write(titolo.lower()+'\n')
presenti.append(titolo.lower())
print('--- Added '+titolo+' ---')
# When the command "python hexadec.py" is called without arguments, this is the result
if len(sys.argv) == 1:
print('Script usage: \n'
'\t hexadec.py -a \t\t\t=> enter the terms manually \n'
'\t hexadec.py -f nomeFile.csv\t=> import the glossary from a csv file \n'
'\t hexadec.py -g \t=> compute the Gulpease index')
exit()
elif len(sys.argv) ==2 and sys.argv[1] == '-a':
print('Manual entry of the terms')
while True:
print('=> Enter the name of the NEW TERM (\'q\' to quit):\n')
titolo = input()
print('=> enter the description ')
descrizione = input()
aggiungiVoce(titolo, descrizione)
elif len(sys.argv) ==3 and sys.argv[1] == '-f':
# read the csv file and write the glossary entries
print('Inserting the terms into the glossary')
# reset presenti.txt and the per-letter glossary section files
open('presenti.txt', 'w').close()
for char in 'QWERTYUIOPASDFGHJKLZXCVBNM':
open('../RR/Esterni/Glossario/sezioni/'+char+'.tex', 'w').close()
types_of_encoding = ["utf8", "cp1252"]
for encoding_type in types_of_encoding:
try:
with open(sys.argv[2], encoding=encoding_type) as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
intestazione = True
for dato in csv_reader:
if intestazione:
print('Opening the document..')
intestazione = False
else:
if dato[1] == "":
print(dato[0]+' has no description! Cannot continue')
exit()
aggiungiVoce(dato[0], dato[1])
break  # parsed successfully with this encoding; do not re-read the file
except UnicodeDecodeError:
continue  # this encoding failed, try the next one
print('Done')
elif len(sys.argv) ==2 and sys.argv[1] == '-g':
print('Computing the Gulpease index')
rev = '../RR/'
revisione ='Revisione dei Requisiti'
rootE = 'Esterni'
rootI ='Interni'
docs =dict()
# fill the docs dict
docs['Glo'] = 'Glossario//standard_documento/sezioni/'
docs['NdP'] = 'NormeDiProgetto/standard_documento/sezioni/'
docs['SdF'] = 'studioDiFattibilita/standard_documento/sezioni/'
docs['PdP'] = 'PianoDiProgetto/'
docs['AdR'] = 'AnalisiDeiRequisiti//standard_documento/sezioni/'
docs['1VI'] = 'Verbale_I_2016-12-10/'
docs['2VI'] = 'Verbale_I_2016-12-19/'
docs['1VE'] = 'Verbale_E_2016-12-17/'
docs['PdQ'] = 'PianoDiQualifica/standard_documento/sezioni/'
docs['SDK'] = 'AnalisiSDK/'
docs['LdP'] = 'LetteraDiPresentazione/'
print('+------------+--------------+---------+\n'
+'+------------+--------------+---------+\n'
+'\t'+revisione+
'\n+------------+--------------+---------+\n'
'| Document | Value | Result |\n'
'+-----------+--------------+----------+')
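# NOTE: the per-document Gulpease computation itself is not implemented above.
# A minimal sketch of the index (illustrative only; assumes plain-text input,
# and uses the `re` module imported at the top of this file):
# def gulpease(text):
#     letters = sum(c.isalpha() for c in text)
#     words = len(text.split())
#     sentences = max(1, len(re.findall(r'[.!?]', text)))
#     return 89 + (300 * sentences - 10 * letters) / words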
else:
print("command not recognized")
| 32.671533 | 98 | 0.561662 | 495 | 4,476 | 5.034343 | 0.39596 | 0.022472 | 0.048154 | 0.05618 | 0.072632 | 0.05939 | 0.05939 | 0.020867 | 0.020867 | 0 | 0 | 0.015854 | 0.253128 | 4,476 | 136 | 99 | 32.911765 | 0.729584 | 0.049374 | 0 | 0.107843 | 0 | 0 | 0.358304 | 0.145583 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.04902 | 0.009804 | 0.098039 | 0.156863 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e36318801c585e48d9a61570fdf5dca4857bd3d | 2,093 | py | Python | day-19-race/race.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | [
"CNRI-Python"
] | null | null | null | day-19-race/race.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | [
"CNRI-Python"
] | null | null | null | day-19-race/race.py | jskolnicki/100-Days-of-Python | 146af2b73914a525121f1c91737abd4857dc2f89 | [
"CNRI-Python"
] | null | null | null | import turtle
import random
import os
def turtle_in_lead(list_of_turtles, list_of_turtles_str):
x_coordinates = []
for turtle in list_of_turtles:
x_coordinates.append(turtle.xcor())
max_index = x_coordinates.index(max(x_coordinates))
return list_of_turtles_str[max_index]
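# e.g. turtle_in_lead([jared, mike], ['jared', 'mike']) returns the name of the
# turtle whose x coordinate is currently largest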
screen = turtle.Screen()
screen.setup(width=500, height=400)
is_race_on = False
user_bet = screen.textinput(title="Get Your Pick In!", prompt="Jared, David, Aaron, Mike.\nWho will win? Enter Your Pick: ").lower()
is_race_on = True
jared = turtle.Turtle()
david = turtle.Turtle()
aaron = turtle.Turtle()
mike = turtle.Turtle()
jared.color('green')
david.color('blue')
aaron.color('orange')
mike.color('purple')
jared.penup()
david.penup()
aaron.penup()
mike.penup()
os.chdir(os.path.dirname(__file__))
screen.addshape("lordbiz.gif")
screen.bgpic("finish_line.gif")
jared.shape('turtle')
david.shape('turtle')
aaron.shape('turtle')
mike.shape('lordbiz.gif')
jared.setposition(-230, 75)
david.setposition(-230, 25)
aaron.setposition(-230,-25)
mike.setposition(-230, -75)
turtles = [jared,david,aaron,mike]
turtles_str = ['jared','david','aaron','mike']
while is_race_on:
jared.forward(random.randint(0,10))
david.forward(random.randint(0,10))
aaron.forward(random.randint(0,10))
mike.forward(random.randint(1,11))
for turtle in turtles:
if turtle.xcor() >= 250:
is_race_on = False
print("")
if user_bet == 'mike':
if turtle_in_lead(turtles,turtles_str) == 'mike':
print("You guessed correctly. Lord Biz never loses..")
else:
print(f"Great guess. Unfortunately, {turtle_in_lead(turtles,turtles_str)} won the race somehow but still a great guess.")
elif turtle_in_lead(turtles,turtles_str) == 'mike':
print("Never bet against Lord Biz...")
else:
print(f"You guessed correctly. The winner of the race is {turtle_in_lead(turtles,turtles_str)}. (but still don't bet against Lord Biz)")
screen.exitonclick()
# things I could add: finish the race and then give the standings
#figure out how to handle ties. | 24.057471 | 140 | 0.712852 | 315 | 2,093 | 4.590476 | 0.368254 | 0.038728 | 0.041494 | 0.052559 | 0.140387 | 0.092669 | 0.052559 | 0.052559 | 0 | 0 | 0 | 0.022918 | 0.145246 | 2,093 | 87 | 141 | 24.057471 | 0.785355 | 0.044434 | 0 | 0.068966 | 0 | 0.034483 | 0.247124 | 0.037519 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0 | 0.051724 | 0 | 0.086207 | 0.086207 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e380f83ea26c7bdf940e55e96e1845576dad1bd | 7,746 | py | Python | examples/00-EDB/03_5G_antenna_example.py | Samuelopez-ansys/pyaedt | 87df97641aeff02024e9d5756ae82d78bb4bc033 | [
"MIT"
] | 12 | 2021-07-01T06:35:12.000Z | 2021-09-22T15:53:07.000Z | examples/00-EDB/03_5G_antenna_example.py | Samuelopez-ansys/pyaedt | 87df97641aeff02024e9d5756ae82d78bb4bc033 | [
"MIT"
] | 111 | 2021-07-01T16:02:36.000Z | 2021-09-29T12:36:44.000Z | examples/00-EDB/03_5G_antenna_example.py | Samuelopez-ansys/pyaedt | 87df97641aeff02024e9d5756ae82d78bb4bc033 | [
"MIT"
] | 5 | 2021-07-09T14:24:59.000Z | 2021-09-07T12:42:03.000Z | """
5G linear array antenna
-----------------------
This example shows how to use HFSS 3D Layout to create and solve a 5G linear array antenna.
"""
# sphinx_gallery_thumbnail_path = 'Resources/5gantenna.png'
###############################################################################
# Import the Hfss3dLayout object
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# This example imports the `Hfss3dLayout` object and initializes it on version
# 2021.2.
import tempfile
from pyaedt import Edb
from pyaedt.generic.general_methods import generate_unique_name
from pyaedt import Hfss3dLayout
import os
class Patch:
def __init__(self, width=0.0, height=0.0, position=0.0):
self.width = width
self.height = height
self.position = position
@property
def points(self):
return [
[self.position, -self.height / 2],
[self.position + self.width, -self.height / 2],
[self.position + self.width, self.height / 2],
[self.position, self.height / 2],
]
class Line:
def __init__(self, length=0.0, width=0.0, position=0.0):
self.length = length
self.width = width
self.position = position
@property
def points(self):
return [
[self.position, -self.width / 2],
[self.position + self.length, -self.width / 2],
[self.position + self.length, self.width / 2],
[self.position, self.width / 2],
]
class LinearArray:
def __init__(self, nb_patch=1, array_length=10e-3, array_width=5e-3):
self.nbpatch = nb_patch
self.length = array_length
self.width = array_width
@property
def points(self):
return [
[-1e-3, -self.width / 2 - 1e-3],
[self.length + 1e-3, -self.width / 2 - 1e-3],
[self.length + 1e-3, self.width / 2 + 1e-3],
[-1e-3, self.width / 2 + 1e-3],
]
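# Each `points` property above returns the four corners of an axis-aligned
# rectangle, counter-clockwise from the lower-left corner. For example
# (illustrative): Patch(width=1e-3, height=2e-3, position=0.0).points
# == [[0.0, -0.001], [0.001, -0.001], [0.001, 0.001], [0.0, 0.001]]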
tmpfold = tempfile.gettempdir()
aedb_path = os.path.join(tmpfold, generate_unique_name("pcb") + ".aedb")
print(aedb_path)
edb = Edb(edbpath=aedb_path, edbversion="2021.2")
###############################################################################
# Create a Stackup
# ~~~~~~~~~~~~~~~~
# This method adds the stackup layers.
#
if edb:
edb.core_stackup.stackup_layers.add_layer("Virt_GND")
edb.core_stackup.stackup_layers.add_layer("Gap", "Virt_GND", layerType=1, thickness="0.05mm", material="Air")
edb.core_stackup.stackup_layers.add_layer("GND", "Gap")
edb.core_stackup.stackup_layers.add_layer("Substrat", "GND", layerType=1, thickness="0.5mm", material="Duroid (tm)")
edb.core_stackup.stackup_layers.add_layer("TOP", "Substrat")
###############################################################################
# Creating the linear array.
# First patch
first_patch = Patch(width=1.4e-3, height=1.2e-3, position=0.0)
first_patch_poly = edb.core_primitives.Shape("polygon", points=first_patch.points)
edb.core_primitives.create_polygon(first_patch_poly, "TOP", net_name="Array_antenna")
# First line
first_line = Line(length=2.4e-3, width=0.3e-3, position=first_patch.width)
first_line_poly = edb.core_primitives.Shape("polygon", points=first_line.points)
edb.core_primitives.create_polygon(first_line_poly, "TOP", net_name="Array_antenna")
###############################################################################
# Linear array
patch = Patch(width=2.29e-3, height=3.3e-3)
line = Line(length=1.9e-3, width=0.2e-3)
linear_array = LinearArray(nb_patch=8, array_width=patch.height)
current_patch = 1
current_position = first_line.position + first_line.length
while current_patch <= linear_array.nbpatch:
patch.position = current_position
patch_shape = edb.core_primitives.Shape("polygon", points=patch.points)
edb.core_primitives.create_polygon(patch_shape, "TOP", net_name="Array_antenna")
current_position += patch.width
if current_patch < linear_array.nbpatch:
line.position = current_position
line_shape = edb.core_primitives.Shape("polygon", points=line.points)
edb.core_primitives.create_polygon(line_shape, "TOP", net_name="Array_antenna")
current_position += line.length
current_patch += 1
linear_array.length = current_position
###############################################################################
# Adding ground
gnd_shape = edb.core_primitives.Shape("polygon", points=linear_array.points)
edb.core_primitives.create_polygon(gnd_shape, "GND", net_name="GND")
###############################################################################
# Connector central pin
edb.core_padstack.create_padstack(padstackname="Connector_pin", holediam="100um", paddiam="0", antipaddiam="200um")
con_pin = edb.core_padstack.place_padstack(
[first_patch.width / 4, 0],
"Connector_pin",
net_name="Array_antenna",
fromlayer="TOP",
tolayer="GND",
via_name="coax",
)
###############################################################################
# Connector GND
virt_gnd_shape = edb.core_primitives.Shape("polygon", points=first_patch.points)
edb.core_primitives.create_polygon(virt_gnd_shape, "Virt_GND", net_name="GND")
edb.core_padstack.create_padstack("gnd_via", "100um", "0", "0", "GND", "Virt_GND")
con_ref1 = edb.core_padstack.place_padstack(
[first_patch.points[0][0] + 0.2e-3, first_patch.points[0][1] + 0.2e-3],
"gnd_via",
fromlayer="GND",
tolayer="Virt_GND",
net_name="GND",
)
con_ref2 = edb.core_padstack.place_padstack(
[first_patch.points[1][0] - 0.2e-3, first_patch.points[1][1] + 0.2e-3],
"gnd_via",
fromlayer="GND",
tolayer="Virt_GND",
net_name="GND",
)
con_ref3 = edb.core_padstack.place_padstack(
[first_patch.points[2][0] - 0.2e-3, first_patch.points[2][1] - 0.2e-3],
"gnd_via",
fromlayer="GND",
tolayer="Virt_GND",
net_name="GND",
)
con_ref4 = edb.core_padstack.place_padstack(
[first_patch.points[3][0] + 0.2e-3, first_patch.points[3][1] - 0.2e-3],
"gnd_via",
fromlayer="GND",
tolayer="Virt_GND",
net_name="GND",
)
###############################################################################
# Adding excitation port
edb.core_padstack.set_solderball(con_pin, "Virt_GND", isTopPlaced=False, ballDiam=0.1e-3)
port_name = edb.core_padstack.create_coax_port(con_pin)
###############################################################################
# Save the EDB
if edb:
edb.standalone = False
edb.save_edb()
edb.close_edb()
print("EDB saved correctly to {}. You can import in AEDT.".format(aedb_path))
###############################################################################
# Launch Hfss3d Layout and open Edb
#
project = os.path.join(aedb_path, "edb.def")
h3d = Hfss3dLayout(projectname=project, specified_version="2021.2", new_desktop_session=True, non_graphical=False)
###############################################################################
# Create Setup and Sweeps
#
setup = h3d.create_setup()
setup.props["AdaptiveSettings"]["SingleFrequencyDataList"]["AdaptiveFrequencyData"]["AdaptiveFrequency"] = "20GHz"
setup.props["AdaptiveSettings"]["SingleFrequencyDataList"]["AdaptiveFrequencyData"]["MaxPasses"] = 4
setup.update()
h3d.create_linear_count_sweep(
setupname=setup.name,
unit="GHz",
freqstart=20,
freqstop=50,
num_of_freq_points=1001,
sweepname="sweep1",
sweep_type="Interpolating",
interpolation_tol_percent=1,
interpolation_max_solutions=255,
save_fields=False,
use_q3d_for_dc=False,
)
###############################################################################
# Solve Setup
#
h3d.analyze_nominal()
h3d.post.create_rectangular_plot(["db(S({0},{1}))".format(port_name, port_name)])
h3d.save_project()
h3d.release_desktop()
| 34.580357 | 120 | 0.604441 | 949 | 7,746 | 4.717597 | 0.222339 | 0.040652 | 0.045566 | 0.022783 | 0.46683 | 0.386196 | 0.359392 | 0.24235 | 0.162162 | 0.162162 | 0 | 0.029088 | 0.143429 | 7,746 | 223 | 121 | 34.735426 | 0.645667 | 0.078234 | 0 | 0.208054 | 0 | 0 | 0.103213 | 0.014281 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040268 | false | 0.006711 | 0.040268 | 0.020134 | 0.120805 | 0.013423 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e3b9e81979b17b89bff2982f711ffac7d052003 | 89,972 | py | Python | MonteCarloMarginalizeCode/Code/RIFT/likelihood/factored_likelihood.py | spfanning/research-projects-RIT | 34afc69ccb502825c81285733dac8ff993f79503 | [
"MIT"
] | 8 | 2019-10-23T01:18:44.000Z | 2021-07-09T18:24:36.000Z | MonteCarloMarginalizeCode/Code/RIFT/likelihood/factored_likelihood.py | spfanning/research-projects-RIT | 34afc69ccb502825c81285733dac8ff993f79503 | [
"MIT"
] | 7 | 2020-01-03T14:38:26.000Z | 2022-01-17T16:57:02.000Z | MonteCarloMarginalizeCode/Code/RIFT/likelihood/factored_likelihood.py | spfanning/research-projects-RIT | 34afc69ccb502825c81285733dac8ff993f79503 | [
"MIT"
] | 11 | 2019-10-23T01:19:50.000Z | 2021-11-20T23:35:39.000Z | # Copyright (C) 2013 Evan Ochsner, R. O'Shaughnessy
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
Code to compute the log likelihood of parameters of a gravitational
waveform. Precomputes terms that depend only on intrinsic parameters
and computes the log likelihood for given values of extrinsic parameters.
Requires Python SWIG bindings of the LIGO Algorithms Library (LAL).
"""
from __future__ import print_function
import lal
import lalsimulation as lalsim
import RIFT.lalsimutils as lsu # problem of relative comprehensive import - dangerous due to package name
import numpy as np
try:
import cupy
from . import optimized_gpu_tools
from . import Q_inner_product
xpy_default=cupy
junk_to_check_installed = cupy.array(5) # this will fail if GPU not installed correctly
except:
print(' no cupy (factored)')
cupy=np  # fall back to numpy under the name cupy, so the module pointer is identical
optimized_gpu_tools=None
Q_inner_product=None
xpy_default=np
# Old code
#from SphericalHarmonics_gpu_orig import SphericalHarmonicsVectorized_orig as SphericalHarmonicsVectorized
# New code
from .SphericalHarmonics_gpu import SphericalHarmonicsVectorized
from scipy import interpolate, integrate
from scipy import special
from itertools import product
import math
from .vectorized_lal_tools import ComputeDetAMResponse,TimeDelayFromEarthCenter
import os
if 'PROFILE' not in os.environ:
def profile(fn):
return fn
__author__ = "Evan Ochsner <evano@gravity.phys.uwm.edu>, R. O'Shaughnessy"
try:
import NRWaveformCatalogManager3 as nrwf
useNR =True
print(" factored_likelihood.py : NRWaveformCatalogManager3 available ")
except ImportError:
useNR=False
try:
import RIFT.physics.ROMWaveformManager as romwf
print(" factored_likelihood.py: ROMWaveformManager as romwf")
useROM=True
rom_basis_scale = 1.0*1e-21 # Fundamental problem: Inner products with ROM basis vectors/Sh are tiny. Need to rescale to avoid overflow/underflow and simplify comparisons
except ImportError:
useROM=False
print(" factored_likelihood.py: - no ROM - ")
rom_basis_scale =1
try:
hasEOB=True
import RIFT.physics.EOBTidalExternalC as eobwf
# import EOBTidalExternal as eobwf
except:
hasEOB=False
print(" factored_likelihood: no EOB ")
distMpcRef = 1000 # a fiducial distance for the template source.
tWindowExplore = [-0.15, 0.15] # Not used in main code. Provided for backward compatibility for ROS. Should be consistent with t_ref_wind in ILE.
rosDebugMessages = True
rosDebugMessagesDictionary = {} # Mutable after import (passed by reference). Not clear if it can be used by calling routines
# BUT if every module has a `PopulateMessagesDictionary' module, I can set their internal copies
rosDebugMessagesDictionary["DebugMessages"] = False
rosDebugMessagesDictionary["DebugMessagesLong"] = False
#
# Main driver functions
#
def PrecomputeLikelihoodTerms(event_time_geo, t_window, P, data_dict,
psd_dict, Lmax, fMax, analyticPSD_Q=False,
inv_spec_trunc_Q=False, T_spec=0., verbose=True,quiet=False,
NR_group=None,NR_param=None,
ignore_threshold=1e-4, # dangerous: for a peak lnL of order 25**2/2 ~ 300, discarding modes at this level can bias results
use_external_EOB=False,nr_lookup=False,nr_lookup_valid_groups=None,no_memory=True,perturbative_extraction=False,perturbative_extraction_full=False,hybrid_use=False,hybrid_method='taper_add',use_provided_strain=False,ROM_group=None,ROM_param=None,ROM_use_basis=False,ROM_limit_basis_size=None,skip_interpolation=False):
"""
Compute < h_lm(t) | d > and < h_lm | h_l'm' >
Returns:
- Dictionary of interpolating functions, keyed on detector, then (l,m)
e.g. rholms_intp['H1'][(2,2)]
- Dictionary of "cross terms" <h_lm | h_l'm' > keyed on (l,m),(l',m')
e.g. crossTerms[((2,2),(2,1))]
- Dictionary of discrete time series of < h_lm(t) | d >, keyed the same
as the interpolating functions.
Their main use is to validate the interpolating functions
"""
assert data_dict.keys() == psd_dict.keys()
global distMpcRef
detectors = list(data_dict.keys())
first_data = data_dict[detectors[0]]
rholms = {}
rholms_intp = {}
crossTerms = {}
crossTermsV = {}
# Compute hlms at a reference distance, distance scaling is applied later
P.dist = distMpcRef*1e6*lsu.lsu_PC
if not quiet:
print(" ++++ Template data being computed for the following binary +++ ")
P.print_params()
if use_external_EOB:
# Mass sanity check
if (P.m1/lal.MSUN_SI)>3 or P.m2/lal.MSUN_SI>3:
print(" ----- external EOB code: MASS DANGER ---")
# Compute all hlm modes with l <= Lmax
# Zero-pad to same length as data - NB: Assuming all FD data same resolution
P.deltaF = first_data.deltaF
if not( ROM_group is None) and not (ROM_param is None):
# For ROM, use the ROM basis. Note that hlmoff -> basis_off henceforth
acatHere= romwf.WaveformModeCatalog(ROM_group,ROM_param,max_nbasis_per_mode=ROM_limit_basis_size,lmax=Lmax)
if ROM_use_basis:
if hybrid_use:
# WARNING
# - Hybridization is NOT enabled
print(" WARNING: Hybridization will not be applied (obviously) if you are using a ROM basis. ")
bT = acatHere.basis_oft(P,return_numpy=False,force_T=1./P.deltaF)
# Fake names, to re-use the code below.
hlms = {}
hlms_conj = {}
for mode in bT:
if mode[0]<=Lmax: # don't report waveforms from modes outside the target L range
if rosDebugMessagesDictionary["DebugMessagesLong"]:
print(" FFT for mode ", mode, bT[mode].data.length, " note duration = ", bT[mode].data.length*bT[mode].deltaT)
hlms[mode] = lsu.DataFourier(bT[mode])
# print " FFT for conjugate mode ", mode, bT[mode].data.length
bT[mode].data.data = np.conj(bT[mode].data.data)
hlms_conj[mode] = lsu.DataFourier(bT[mode])
# APPLY SCALE FACTOR
hlms[mode].data.data *=rom_basis_scale
hlms_conj[mode].data.data *=rom_basis_scale
else:
# this code is modular but inefficient: the waveform is regenerated twice
hlms = acatHere.hlmoff(P, use_basis=False,deltaT=P.deltaT,force_T=1./P.deltaF,Lmax=Lmax,hybrid_use=hybrid_use,hybrid_method=hybrid_method) # Must force duration consistency, very annoying
hlms_conj = acatHere.conj_hlmoff(P, force_T=1./P.deltaF, use_basis=False,deltaT=P.deltaT,Lmax=Lmax,hybrid_use=hybrid_use,hybrid_method=hybrid_method) # Must force duration consistency, very annoying
mode_list = list(hlms.keys()) # make copy: dictionary will change during iteration
for mode in mode_list:
if no_memory and mode[1]==0 and P.SoftAlignedQ():
# skip memory modes if requested to do so. DANGER
print(" WARNING: Deleting memory mode in precompute stage ", mode)
del hlms[mode]
del hlms_conj[mode]
continue
elif (not nr_lookup) and (not NR_group) and ( P.approx ==lalsim.SEOBNRv2 or P.approx == lalsim.SEOBNRv1 or P.approx==lalsim.SEOBNRv3 or P.approx == lsu.lalSEOBv4 or P.approx ==lsu.lalSEOBNRv4HM or P.approx == lalsim.EOBNRv2 or P.approx == lsu.lalTEOBv2 or P.approx==lsu.lalTEOBv4 ):
# note: the alternative to this branch is to call hlmoff, which will actually *work* if ChooseTDModes is properly implemented for that model
# or P.approx == lsu.lalSEOBNRv4PHM or P.approx == lsu.lalSEOBNRv4P
if not quiet:
print(" FACTORED LIKELIHOOD WITH SEOB ")
hlmsT = {}
hlmsT = lsu.hlmoft(P,Lmax) # do a standard function call NOT anything special; should be wrapped properly now!
# if P.approx == lalsim.SEOBNRv3:
# hlmsT = lsu.hlmoft_SEOBv3_dict(P) # only 2,2 modes -- Lmax irrelevant
# else:
# if useNR:
# nrwf.HackRoundTransverseSpin(P) # HACK, to make reruns of NR play nicely, without needing to rerun
# hlmsT = lsu.hlmoft_SEOB_dict(P, Lmax) # only 2,2 modes -- Lmax irrelevant
if not quiet:
print(" hlm generation complete ")
if P.approx == lalsim.SEOBNRv3 or P.deltaF is None: # h_lm(t) should be zero-padded properly inside code
TDlen = int(1./(P.deltaF*P.deltaT))#TDlen = lsu.nextPow2(hlmsT[(2,2)].data.length)
if not quiet:
print(" Resizing to ", TDlen, " from ", hlmsT[(2,2)].data.length)
for mode in hlmsT:
hlmsT[mode] = lal.ResizeCOMPLEX16TimeSeries(hlmsT[mode],0, TDlen)
#h22 = hlmsT[(2,2)]
#h2m2 = hlmsT[(2,-2)]
#hlmsT[(2,2)] = lal.ResizeCOMPLEX16TimeSeries(h22, 0, TDlen)
#hlmsT[(2,-2)] = lal.ResizeCOMPLEX16TimeSeries(h2m2, 0, TDlen)
hlms = {}
hlms_conj = {}
for mode in hlmsT:
if verbose:
print(" FFT for mode ", mode, hlmsT[mode].data.length, " note duration = ", hlmsT[mode].data.length*hlmsT[mode].deltaT)
hlms[mode] = lsu.DataFourier(hlmsT[mode])
if verbose:
print(" -> ", hlms[mode].data.length)
print(" FFT for conjugate mode ", mode, hlmsT[mode].data.length)
hlmsT[mode].data.data = np.conj(hlmsT[mode].data.data)
hlms_conj[mode] = lsu.DataFourier(hlmsT[mode])
elif (not (NR_group) or not (NR_param)) and (not use_external_EOB) and (not nr_lookup):
if not quiet:
print( " FACTORED LIKELIHOOD WITH hlmoff (default ChooseTDModes) " )
hlms_list = lsu.hlmoff(P, Lmax) # a linked list of hlms
if not isinstance(hlms_list, dict):
hlms = lsu.SphHarmFrequencySeries_to_dict(hlms_list, Lmax) # a dictionary
else:
hlms = hlms_list
hlms_conj_list = lsu.conj_hlmoff(P, Lmax)
if not isinstance(hlms_list,dict):
hlms_conj = lsu.SphHarmFrequencySeries_to_dict(hlms_conj_list, Lmax) # a dictionary
else:
hlms_conj = hlms_conj_list
elif (nr_lookup or NR_group) and useNR:
# look up simulation
# use nrwf to get hlmf
print(" Using NR waveforms ")
group = None
param = None
if nr_lookup:
compare_dict = {}
compare_dict['q'] = P.m2/P.m1 # Need to match the template parameter. NOTE: VERY IMPORTANT that P is updated with the event params
compare_dict['s1z'] = P.s1z
compare_dict['s1x'] = P.s1x
compare_dict['s1y'] = P.s1y
compare_dict['s2z'] = P.s2z
compare_dict['s2x'] = P.s2x
compare_dict['s2y'] = P.s2y
print(" Parameter matching condition ", compare_dict)
good_sim_list = nrwf.NRSimulationLookup(compare_dict,valid_groups=nr_lookup_valid_groups)
if len(good_sim_list)< 1:
print(" ------- NO MATCHING SIMULATIONS FOUND ----- ")
import sys
sys.exit(0)
print(" Identified set of matching NR simulations ", good_sim_list)
try:
print(" Attempting to pick longest simulation matching the simulation ")
MOmega0 = 1
good_sim = None
for key in good_sim_list:
print(key, nrwf.internal_EstimatePeakL2M2Emission[key[0]][key[1]])
if nrwf.internal_WaveformMetadata[key[0]][key[1]]['Momega0'] < MOmega0:
good_sim = key
MOmega0 = nrwf.internal_WaveformMetadata[key[0]][key[1]]['Momega0']
print(" Picked ",key, " with MOmega0 ", MOmega0, " and peak duration ", nrwf.internal_EstimatePeakL2M2Emission[key[0]][key[1]])
except:
good_sim = good_sim_list[0] # pick the first one. Note we will want to reduce /downselect the lookup process
group = good_sim[0]
param = good_sim[1]
else:
group = NR_group
param = NR_param
print(" Identified matching NR simulation ", group, param)
mtot = P.m1 + P.m2
q = P.m2/P.m1
# Load the catalog
wfP = nrwf.WaveformModeCatalog(group, param, \
clean_initial_transient=True,clean_final_decay=True, shift_by_extraction_radius=True,perturbative_extraction_full=perturbative_extraction_full,perturbative_extraction=perturbative_extraction,lmax=Lmax,align_at_peak_l2_m2_emission=True, build_strain_and_conserve_memory=True,use_provided_strain=use_provided_strain)
# Overwrite the parameters in wfP to set the desired scale
wfP.P.m1 = mtot/(1+q)
wfP.P.m2 = mtot*q/(1+q)
wfP.P.dist =distMpcRef*1e6*lal.PC_SI # fiducial distance
wfP.P.approx = P.approx
wfP.P.deltaT = P.deltaT
wfP.P.deltaF = P.deltaF
wfP.P.fmin = P.fmin
hlms = wfP.hlmoff( deltaT=P.deltaT,force_T=1./P.deltaF,hybrid_use=hybrid_use,hybrid_method=hybrid_method) # force a window. Check the time
hlms_conj = wfP.conj_hlmoff( deltaT=P.deltaT,force_T=1./P.deltaF,hybrid_use=hybrid_use) # force a window. Check the time
if rosDebugMessages:
print("NR variant: Length check: ",hlms[(2,2)].data.length, first_data.data.length)
# Remove memory modes (ALIGNED ONLY: Dangerous for precessing spins)
if no_memory and wfP.P.SoftAlignedQ():
for key in hlms.keys():
if key[1]==0:
hlms[key].data.data *=0.
hlms_conj[key].data.data *=0.
elif hasEOB and use_external_EOB:
print(" Using external EOB interface (Bernuzzi) ")
# Code WILL FAIL IF LAMBDA=0
P.taper = lsu.lsu_TAPER_START
lambda_crit=1e-3 # Needed to have adequate i/o output
if P.lambda1<lambda_crit:
P.lambda1=lambda_crit
if P.lambda2<lambda_crit:
P.lambda2=lambda_crit
if P.deltaT > 1./16384:
print(" Bad idea to use such a low sampling rate for EOB tidal ")
wfP = eobwf.WaveformModeCatalog(P,lmax=Lmax)
hlms = wfP.hlmoff(force_T=1./P.deltaF,deltaT=P.deltaT)
# Reflection symmetric
hlms_conj = wfP.conj_hlmoff(force_T=1./P.deltaF,deltaT=P.deltaT)
# The code will not shorten the EOB waveform, so later steps can fail if the data segment is too short
print(" External EOB length check ", hlms[(2,2)].data.length, first_data.data.length, first_data.data.length*P.deltaT)
print(" External EOB length check (in M) ", end=' ')
print(" Comparison EOB duration check vs epoch vs window size (sec) ", wfP.estimateDurationSec(), -hlms[(2,2)].epoch, 1./hlms[(2,2)].deltaF)
assert hlms[(2,2)].data.length ==first_data.data.length
if rosDebugMessagesDictionary["DebugMessagesLong"]:
hlmT_ref = lsu.DataInverseFourier(hlms[(2,2)])
print(" External EOB: Time offset of largest sample (should be zero) ", hlms[(2,2)].epoch + np.argmax(np.abs(hlmT_ref.data.data))*P.deltaT)
elif useNR: # NR signal required
mtot = P.m1 + P.m2
# Load the catalog
wfP = nrwf.WaveformModeCatalog(NR_group, NR_param, \
clean_initial_transient=True,clean_final_decay=True, shift_by_extraction_radius=True,
lmax=Lmax,align_at_peak_l2_m2_emission=True,use_provided_strain=use_provided_strain)
# Overwrite the parameters in wfP to set the desired scale
q = wfP.P.m2/wfP.P.m1
wfP.P.m1 *= mtot/(1+q)
wfP.P.m2 *= mtot*q/(1+q)
wfP.P.dist =distMpcRef*1e6*lal.PC_SI # fiducial distance.
hlms = wfP.hlmoff( deltaT=P.deltaT,force_T=1./P.deltaF) # force a window
else:
print(" No waveform available ")
import sys
sys.exit(0)
if not(ignore_threshold is None) and (not ROM_use_basis):
crossTermsFiducial = ComputeModeCrossTermIP(hlms,hlms, psd_dict[detectors[0]],
P.fmin, fMax,
1./2./P.deltaT, P.deltaF, analyticPSD_Q, inv_spec_trunc_Q, T_spec,verbose=verbose)
theWorthwhileModes = IdentifyEffectiveModesForDetector(crossTermsFiducial, ignore_threshold, detectors)
# Make sure worthwhile modes satisfy reflection symmetry! Do not truncate egregiously!
theWorthwhileModes = theWorthwhileModes.union( set([(p,-q) for (p,q) in theWorthwhileModes]))
print(" Worthwhile modes : ", theWorthwhileModes)
hlmsNew = {}
hlmsConjNew = {}
for pair in theWorthwhileModes:
hlmsNew[pair]=hlms[pair]
hlmsConjNew[pair] = hlms_conj[pair]
hlms =hlmsNew
hlms_conj= hlmsConjNew
if len(hlms.keys()) == 0:
print(" Failure ")
import sys
sys.exit(0)
# Print statistics on timeseries provided
if verbose:
print(" Mode npts(data) npts epoch epoch/deltaT ")
for mode in hlms.keys():
print(mode, first_data.data.length, hlms[mode].data.length, hlms[mode].data.length*P.deltaT, hlms[mode].epoch, hlms[mode].epoch/P.deltaT)
for det in detectors:
# This is the event time at the detector
t_det = ComputeArrivalTimeAtDetector(det, P.phi, P.theta,event_time_geo)
# This is the difference between the time of the leading edge of the
# time window we wish to compute the likelihood in, and
# the time corresponding to the first sample in the rholms
rho_epoch = data_dict[det].epoch - hlms[list(hlms.keys())[0]].epoch
t_shift = float(float(t_det) - float(t_window) - float(rho_epoch))
# assert t_shift > 0 # because NR waveforms may start at any time, they don't always have t_shift > 0 !
# The leading edge of our time window of interest occurs
# this many samples into the rholms
N_shift = int( t_shift / P.deltaT + 0.5 ) # be careful about rounding: might be one sample off!
# Number of samples in the window [t_ref - t_window, t_ref + t_window]
N_window = int( 2 * t_window / P.deltaT )
# Compute cross terms < h_lm | h_l'm' >
crossTerms[det] = ComputeModeCrossTermIP(hlms, hlms, psd_dict[det], P.fmin,
fMax, 1./2./P.deltaT, P.deltaF, analyticPSD_Q,
inv_spec_trunc_Q, T_spec,verbose=verbose)
crossTermsV[det] = ComputeModeCrossTermIP(hlms_conj, hlms, psd_dict[det], P.fmin,
fMax, 1./2./P.deltaT, P.deltaF, analyticPSD_Q,
inv_spec_trunc_Q, T_spec,prefix="V",verbose=verbose)
# Compute rholm(t) = < h_lm(t) | d >
rholms[det] = ComputeModeIPTimeSeries(hlms, data_dict[det],
psd_dict[det], P.fmin, fMax, 1./2./P.deltaT, N_shift, N_window,
analyticPSD_Q, inv_spec_trunc_Q, T_spec)
rhoXX = rholms[det][list(rholms[det].keys())[0]]
# The vector of time steps within our window of interest
# for which we have discrete values of the rholms
# N.B. I don't simply do rho_epoch + t_shift, b/c t_shift is the
# precise desired time, while we round and shift an integer number of
# steps of size deltaT
t = np.arange(N_window) * P.deltaT\
+ float(rho_epoch + N_shift * P.deltaT )
if verbose:
print("For detector", det, "...")
print("\tData starts at %.20g" % float(data_dict[det].epoch))
print("\trholm starts at %.20g" % float(rho_epoch))
print("\tEvent time at detector is: %.18g" % float(t_det))
print("\tInterpolation window has half width %g" % t_window)
print("\tComputed t_shift = %.20g" % t_shift)
print("\t(t_shift should be t_det - t_window - t_rholm = %.20g)" %\
(t_det - t_window - float(rho_epoch)))
print("\tInterpolation starts at time %.20g" % t[0])
print("\t(Should start at t_event - t_window = %.20g)" %\
(float(rho_epoch + N_shift * P.deltaT)))
# The minus N_shift indicates we need to roll left
# to bring the desired samples to the front of the array
if not skip_interpolation:
rholms_intp[det] = InterpolateRholms(rholms[det], t,verbose=verbose)
else:
rholms_intp[det] = None
if not ROM_use_basis:
return rholms_intp, crossTerms, crossTermsV, rholms, None
else:
return rholms_intp, crossTerms, crossTermsV, rholms, acatHere # labels are misleading for use_rom_basis
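# Example (illustrative sketch, not from the original pipeline; assumes the
# caller has prepared a ChooseWaveformParams-like object 'P', frequency-domain
# data in 'data_dict', and PSDs in 'psd_dict', each keyed on detector name):
#   rholms_intp, crossTerms, crossTermsV, rholms, _ = PrecomputeLikelihoodTerms(
#       event_time_geo, 0.1, P, data_dict, psd_dict, Lmax=2, fMax=2000.)
#   rho22_H1 = rholms_intp['H1'][(2,2)]   # callable: t -> <h_22(t)|d>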
def ReconstructPrecomputedLikelihoodTermsROM(P,acat_rom,rho_intp_rom,crossTerms_rom, crossTermsV_rom, rho_rom,verbose=True):
"""
Using a set of ROM coefficients for hlm[lm] = coef[l,m,basis] w[basis], reconstructs <h[lm]|data>, <h[lm]|h[l'm']>
Requires ROM also be loaded in top level, for simplicity
"""
# Extract coefficients
coefs = acat_rom.coefficients(P)
# Identify available modes
modelist = acat_rom.modes_available
detectors = crossTerms_rom.keys()
rholms = {}
rholms_intp = {}
crossTerms = {}
crossTermsV = {}
# Reproduce rholms and rholms_intp
# Loop over detectors
for det in detectors:
rholms[det] ={}
rholms_intp[det] ={}
# Loop over available modes
for mode in modelist:
# Identify relevant terms in the sum
indx_list_ok = [indx for indx in coefs.keys() if indx[0]==mode[0] and indx[1]==mode[1]]
# Discrete case:
# - Create data structure to hold it
indx0 = indx_list_ok[0]
rhoTS = lal.CreateCOMPLEX16TimeSeries("rho",rho_rom[det][indx0].epoch,rho_rom[det][indx0].f0,rho_rom[det][indx0].deltaT,rho_rom[det][indx0].sampleUnits,rho_rom[det][indx0].data.length)
rhoTS.data.data = np.zeros( rho_rom[det][indx0].data.length) # problems with data initialization common with LAL
# - fill the data structure
fn_list_here = []
wt_list_here = []
for indx in indx_list_ok:
rhoTS.data.data+= np.conj(coefs[indx])*rho_rom[det][indx].data.data
wt_list_here.append(np.conj(coefs[indx]) )
fn_list_here.append(rho_intp_rom[det][indx]) # accumulate per-basis interpolating functions (was overwriting with a scalar)
rholms[det][mode]=rhoTS
# Interpolated case
# - create a lambda structure for it, holding the coefficients. NOT IMPLEMENTED since not used in production
if verbose:
print(" factored_likelihood: ROM: interpolated timeseries ", det, mode, " NOT CREATED")
wt_list_here = np.array(wt_list_here)
rholms_intp[det][mode] = lambda t, fns=fn_list_here, wts=wt_list_here: np.sum(np.array([fn(t) for fn in fns])*wts) # bind via defaults so each mode keeps its own coefficients; list comprehension because map() is lazy in python3
# Reproduce crossTerms, crossTermsV
for det in detectors:
crossTerms[det] ={}
crossTermsV[det] ={}
for mode1 in modelist:
indx_list_ok1 = [indx for indx in coefs.keys() if indx[0]==mode1[0] and indx[1]==mode1[1]]
for mode2 in modelist:
crossTerms[det][(mode1,mode2)] =0.j
indx_list_ok2 = [indx for indx in coefs.keys() if indx[0]==mode2[0] and indx[1]==mode2[1]]
crossTerms[det][(mode1,mode2)] = np.sum(np.array([ np.conj(coefs[indx1])*coefs[indx2]*crossTerms_rom[det][(indx1,indx2)] for indx1 in indx_list_ok1 for indx2 in indx_list_ok2]))
crossTermsV[det][(mode1,mode2)] = np.sum(np.array([ coefs[indx1]*coefs[indx2]*crossTermsV_rom[det][(indx1,indx2)] for indx1 in indx_list_ok1 for indx2 in indx_list_ok2]))
if verbose:
print(" : U populated ", (mode1, mode2), " = ",crossTerms[det][(mode1,mode2) ])
print(" : V populated ", (mode1, mode2), " = ",crossTermsV[det][(mode1,mode2) ])
return rholms_intp, crossTerms, crossTermsV, rholms, None # Same return pattern as Precompute...
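# The reconstruction above implements, per detector, the bilinear identities
#   rho_lm(t)      = sum_k conj(c_{lm,k}) rho_{lm,k}(t)
#   U[(lm),(l'm')] = sum_{k,k'} conj(c_{lm,k}) c_{l'm',k'} U_rom[(lm,k),(l'm',k')]
#   V[(lm),(l'm')] = sum_{k,k'}       c_{lm,k} c_{l'm',k'} V_rom[(lm,k),(l'm',k')]
# Minimal numpy sketch of the linear term (names hypothetical):
#   rho_of_t = sum(np.conj(c)*basis_rho for c, basis_rho in zip(coef_list, basis_rho_list))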
def FactoredLogLikelihood(extr_params, rholms,rholms_intp, crossTerms, crossTermsV, Lmax,interpolate=True):
"""
Compute the log-likelihood = -1/2 < d - h | d - h > from:
- extr_params is an object containing values of all extrinsic parameters
- rholms_intp is a dictionary of interpolating functions < h_lm(t) | d >
- crossTerms is a dictionary of < h_lm | h_l'm' >
- Lmax is the largest l-index of any h_lm mode considered
N.B. rholms_intp and crossTerms are the first two outputs of the function
'PrecomputeLikelihoodTerms'
"""
# Sanity checks
assert rholms_intp.keys() == crossTerms.keys()
detectors = rholms_intp.keys()
RA = extr_params.phi
DEC = extr_params.theta
tref = extr_params.tref # geocenter time
phiref = extr_params.phiref
incl = extr_params.incl
psi = extr_params.psi
dist = extr_params.dist
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
# In practice, all detectors have the same set of Ylms selected, so we only compute for a subset
Ylms = ComputeYlms(Lmax, incl, -phiref, selected_modes=rholms_intp[list(rholms_intp.keys())[0]].keys())
lnL = 0.
for det in detectors:
CT = crossTerms[det]
CTV = crossTermsV[det]
F = ComplexAntennaFactor(det, RA, DEC, psi, tref)
# This is the GPS time at the detector
t_det = ComputeArrivalTimeAtDetector(det, RA, DEC, tref)
det_rholms = {} # rholms evaluated at time at detector
if (interpolate):
for key in rholms_intp[det]:
func = rholms_intp[det][key]
det_rholms[key] = func(float(t_det))
else:
# do not interpolate, just use nearest neighbor.
for key, rhoTS in rholms[det].items():
tfirst = t_det
ifirst = int(np.round(( float(tfirst) - float(rhoTS.epoch)) / rhoTS.deltaT) + 0.5)
det_rholms[key] = rhoTS.data.data[ifirst]
lnL += SingleDetectorLogLikelihood(det_rholms, CT, CTV,Ylms, F, dist)
return lnL
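# Example (sketch; 'extr_params' is assumed to carry the extrinsic parameters
# phi, theta, tref, phiref, incl, psi, dist, as read off above):
#   lnL = FactoredLogLikelihood(extr_params, rholms, rholms_intp,
#       crossTerms, crossTermsV, Lmax=2)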
def FactoredLogLikelihoodTimeMarginalized(tvals, extr_params, rholms_intp, rholms, crossTerms, crossTermsV, Lmax, interpolate=False):
"""
Compute the log-likelihood = -1/2 < d - h | d - h > from:
- extr_params is an object containing values of all extrinsic parameters
- rholms_intp is a dictionary of interpolating functions < h_lm(t) | d >
- crossTerms is a dictionary of < h_lm | h_l'm' >
- Lmax is the largest l-index of any h_lm mode considered
tvals is an array of timeshifts relative to the detector,
used to compute the marginalized integral.
It provides both the time prior and the sample points used for the integral.
N.B. rholms_intp and crossTerms are the first two outputs of the function
'PrecomputeLikelihoodTerms'
"""
# Sanity checks
assert rholms_intp.keys() == crossTerms.keys()
detectors = rholms_intp.keys()
RA = extr_params.phi
DEC = extr_params.theta
tref = extr_params.tref # geocenter time
phiref = extr_params.phiref
incl = extr_params.incl
psi = extr_params.psi
dist = extr_params.dist
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, incl, -phiref, selected_modes=rholms[list(rholms.keys())[0]].keys()) # select modes from the discrete series: rholms_intp entries may be None when interpolation was skipped
# lnL = 0.
lnL = np.zeros(len(tvals),dtype=np.float128)
for det in detectors:
CT = crossTerms[det]
CTV = crossTermsV[det]
F = ComplexAntennaFactor(det, RA, DEC, psi, tref)
# This is the GPS time at the detector
t_det = ComputeArrivalTimeAtDetector(det, RA, DEC, tref)
det_rholms = {} # rholms evaluated at time at detector
if ( interpolate ):
# use the interpolating functions.
for key, func in rholms_intp[det].items():
det_rholms[key] = func(float(t_det)+tvals)
else:
# do not interpolate, just use nearest neighbors.
for key, rhoTS in rholms[det].items():
tfirst = float(t_det)+tvals[0]
ifirst = int(np.round(( float(tfirst) - float(rhoTS.epoch)) / rhoTS.deltaT) + 0.5)
ilast = ifirst + len(tvals)
det_rholms[key] = rhoTS.data.data[ifirst:ilast]
lnL += SingleDetectorLogLikelihood(det_rholms, CT, CTV, Ylms, F, dist)
maxlnL = np.max(lnL)
return maxlnL + np.log(integrate.simps(np.exp(lnL - maxlnL), dx=tvals[1]-tvals[0]))
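# The return line uses the standard log-sum-exp stabilization: subtracting
# max(lnL) before exponentiating keeps the integrand in floating-point range,
# and the maximum is added back after the log. Toy sketch of the same trick:
#   lnL = np.array([-310., -300., -305.])
#   m = np.max(lnL)
#   lnLmarg = m + np.log(integrate.simps(np.exp(lnL - m), dx=1e-3))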
#
# Internal functions
#
def SingleDetectorLogLikelihoodModel( crossTermsDictionary,crossTermsVDictionary, tref, RA,DEC, thS,phiS,psi, dist, Lmax, det):
"""
DOCUMENT ME!!!
"""
global distMpcRef
crossTerms = crossTermsDictionary[det]
crossTermsV = crossTermsVDictionary[det]
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS)
if (det == "Fake"):
F=1
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tref)
distMpc = dist/(lsu.lsu_PC*1e6)
keys = list(Ylms.keys()) # iterate over mode pairs; the old crossTermsDictionary.keys()[:0] was an empty slice (and dict views are not sliceable in python3)
# Eq. 26 of Richard's notes
# APPROXIMATING V BY U (appropriately swapped). THIS APPROXIMATION MUST BE FIXED FOR PRECESSING SOURCES
term2 = 0.
for pair1 in keys:
for pair2 in keys:
term2 += F * np.conj(F) * ( crossTerms[(pair1,pair2)])* np.conj(Ylms[pair1]) * Ylms[pair2] + F*F*Ylms[pair1]*Ylms[pair2]*crossTermsV[(pair1,pair2)] #((-1)**pair1[0])*crossTerms[((pair1[0],-pair1[1]),pair2)]
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
return term2
def SingleDetectorLogLikelihoodData(epoch,rholmsDictionary,tref, RA,DEC, thS,phiS,psi, dist, Lmax, det):
"""
DOCUMENT ME!!!
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS)
if (det == "Fake"):
F=1
tshift= tref - epoch
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tref)
detector = lalsim.DetectorPrefixToLALDetector(det)
tshift = ComputeArrivalTimeAtDetector(det, RA,DEC, tref)
rholms_intp = rholmsDictionary[det]
distMpc = dist/(lsu.lsu_PC*1e6)
term1 = 0.
for key in rholms_intp.keys():
l = key[0]
m = key[1]
term1 += np.conj(F * Ylms[(l,m)]) * rholms_intp[(l,m)]( float(tshift))
term1 = np.real(term1) / (distMpc/distMpcRef)
return term1
# Prototyping speed of time marginalization. Not yet confirmed
def NetworkLogLikelihoodTimeMarginalized(epoch,rholmsDictionary,crossTerms,crossTermsV, tref, RA,DEC, thS,phiS,psi, dist, Lmax, detList):
"""
DOCUMENT ME!!!
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS, selected_modes = rholmsDictionary[list(rholmsDictionary.keys())[0]].keys())
distMpc = dist/(lsu.lsu_PC*1e6)
F = {}
tshift= {}
for det in detList:
F[det] = ComplexAntennaFactor(det, RA,DEC,psi,tref)
detector = lalsim.DetectorPrefixToLALDetector(det)
tshift[det] = float(ComputeArrivalTimeAtDetector(det, RA,DEC, tref)) # detector time minus reference time (so far)
term2 = 0.
for det in detList:
for pair1 in rholmsDictionary[det]:
for pair2 in rholmsDictionary[det]:
term2 += F[det] * np.conj(F[det]) * ( crossTerms[det][(pair1,pair2)])* np.conj(Ylms[pair1]) * Ylms[pair2] \
+ F[det]*F[det]*Ylms[pair1]*Ylms[pair2]*crossTermsV[det][(pair1,pair2)] #((-1)**pair1[0])*crossTerms[det][((pair1[0],-pair1[1]),pair2)]
# + F[det]*F[det]*Ylms[pair1]*Ylms[pair2]*((-1)**pair1[0])*crossTerms[det][((pair1[0],-pair1[1]),pair2)]
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
def fnIntegrand(dt):
term1 = 0.
for det in detList:
for pair in rholmsDictionary[det]:
term1+= np.conj(F[det]*Ylms[pair])*rholmsDictionary[det][pair]( float(tshift[det]) + dt)
term1 = np.real(term1) / (distMpc/distMpcRef)
return np.exp(np.max([term1+term2,-15.])) # avoid hugely negative numbers. This floor on the log likelihood here will not significantly alter any physical result.
# empirically this procedure will find a gaussian with width less than 0.5e-3 times the window length. This procedure *should* therefore work for a sub-second window
LmargTime = integrate.quad(fnIntegrand, tWindowExplore[0], tWindowExplore[1],points=[0],limit=300)[0] # the second return value is the error
# LmargTime = integrate.quadrature(fnIntegrand, tWindowExplore[0], tWindowExplore[1],maxiter=400) # very slow, not reliable
return np.log(LmargTime)
# Prototyping speed of time marginalization. Not yet confirmed
def NetworkLogLikelihoodPolarizationMarginalized(epoch,rholmsDictionary,crossTerms, crossTermsV, tref, RA,DEC, thS,phiS,psi, dist, Lmax, detList):
"""
DOCUMENT ME!!!
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS, selected_modes = rholmsDictionary[list(rholmsDictionary.keys())[0]].keys())
distMpc = dist/(lsu.lsu_PC*1e6)
F = {}
tshift= {}
for det in detList:
F[det] = ComplexAntennaFactor(det, RA,DEC,psi,tref)
detector = lalsim.DetectorPrefixToLALDetector(det)
tshift[det] = float(ComputeArrivalTimeAtDetector(det, RA,DEC, tref)) # detector time minus reference time (so far)
term2a = 0.
term2b = 0.
for det in detList:
for pair1 in rholmsDictionary[det]:
for pair2 in rholmsDictionary[det]:
term2a += F[det] * np.conj(F[det]) * ( crossTerms[det][(pair1,pair2)])* np.conj(Ylms[pair1]) * Ylms[pair2]
term2b += F[det]*F[det]*Ylms[pair1]*Ylms[pair2]*crossTermsV[det][(pair1,pair2)] #((-1)**pair1[0])*crossTerms[det][((pair1[0],-pair1[1]),pair2)]
term2a = -np.real(term2a) / 4. /(distMpc/distMpcRef)**2
term2b = -term2b/4./(distMpc/distMpcRef)**2 # coefficient of exp(-4ipsi)
term1 = 0.
for det in detList:
for pair in rholmsDictionary[det]:
term1+= np.conj(F[det]*Ylms[pair])*rholmsDictionary[det][pair]( float(tshift[det]) )
term1 = term1 / (distMpc/distMpcRef) # coefficient of exp(-2ipsi)
# if the coefficients of the exponential are too large, do the integral by hand, in the gaussian limit? NOT IMPLEMENTED YET
if False: # disabled gaussian-limit branch: term2a+np.abs(term2b)+np.abs(term1)>100:
return term2a+ np.log(special.iv(0,np.abs(term1))) # an approximation, ignoring term2b entirely!
else:
# marginalize over phase. Ideally done analytically. Only works if the terms are not too large -- otherwise overflow can occur.
# Should probably implement a special solution if overflow occurs
def fnIntegrand(x):
return np.exp( term2a+ np.real(term2b*np.exp(-4.j*x)+ term1*np.exp(+2.j*x)))/np.pi # remember how the two terms enter -- note signs!
LmargPsi = integrate.quad(fnIntegrand,0,np.pi,limit=100,epsrel=1e-4)[0]
return np.log(LmargPsi)
def SingleDetectorLogLikelihood(rholm_vals, crossTerms,crossTermsV, Ylms, F, dist):
"""
Compute the value of the log-likelihood at a single detector from
several intermediate pieces of data.
Inputs:
- rholm_vals: A dictionary of values of inner product between data
and h_lm modes, < h_lm(t*) | d >, at a single time of interest t*
- crossTerms: A dictionary of inner products between h_lm modes:
< h_lm | h_l'm' >
- Ylms: Dictionary of values of -2-spin-weighted spherical harmonic modes
for a certain inclination and ref. phase, Y_lm(incl, - phiref)
- F: Complex-valued antenna pattern depending on sky location and
polarization angle, F = F_+ + i F_x
- dist: The distance from the source to detector in meters
Outputs: The value of ln L for a single detector given the inputs.
"""
global distMpcRef
distMpc = dist/(lsu.lsu_PC*1e6)
invDistMpc = distMpcRef/distMpc
Fstar = np.conj(F)
# Eq. 35 of Richard's notes
term1 = 0.
# for mode in rholm_vals:
for mode, Ylm in Ylms.items():
term1 += Fstar * np.conj( Ylms[mode]) * rholm_vals[mode]
term1 = np.real(term1) *invDistMpc
# Eq. 26 of Richard's notes
term2 = 0.
for pair1 in rholm_vals:
for pair2 in rholm_vals:
term2 += F * np.conj(F) * ( crossTerms[(pair1,pair2)])* np.conj(Ylms[pair1]) * Ylms[pair2] \
+ F*F*Ylms[pair1]*Ylms[pair2]*crossTermsV[pair1,pair2] #((-1)**pair1[0])* crossTerms[((pair1[0],-pair1[1]),pair2)]
# + F*F*Ylms[pair1]*Ylms[pair2]*((-1)**pair1[0])* crossTerms[((pair1[0],-pair1[1]),pair2)]
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
return term1 + term2
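# In equation form, with D the luminosity distance and D_ref = distMpcRef,
# the two terms computed above are
#   term1 = Re[ conj(F) sum_{lm} conj(Y_lm) rho_lm ] * (D_ref/D)
#   term2 = -(1/4) Re[ |F|^2 sum conj(Y_lm) U_{lm,l'm'} Y_{l'm'}
#                      + F^2 sum Y_lm V_{lm,l'm'} Y_{l'm'} ] * (D_ref/D)^2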
def ComputeModeIPTimeSeries(hlms, data, psd, fmin, fMax, fNyq,
N_shift, N_window, analyticPSD_Q=False,
inv_spec_trunc_Q=False, T_spec=0.):
r"""
Compute the complex-valued overlap between
each member of a SphHarmFrequencySeries 'hlms'
and the interferometer data COMPLEX16FrequencySeries 'data',
weighted the power spectral density REAL8FrequencySeries 'psd'.
The integrand is non-zero in the range: [-fNyq, -fmin] union [fmin, fNyq].
This integrand is then inverse-FFT'd to get the inner product
at a discrete series of time shifts.
Returns a SphHarmTimeSeries object containing the complex inner product
for discrete values of the reference time tref. The epoch of the
SphHarmTimeSeries object is set to account for the transformation
"""
rholms = {}
assert data.deltaF == hlms[list(hlms.keys())[0]].deltaF
assert data.data.length == hlms[list(hlms.keys())[0]].data.length
deltaT = data.data.length/(2*fNyq)
# Create an instance of class to compute inner product time series
IP = lsu.ComplexOverlap(fmin, fMax, fNyq, data.deltaF, psd,
analyticPSD_Q, inv_spec_trunc_Q, T_spec, full_output=True)
# Loop over modes and compute the overlap time series
for pair in hlms.keys():
rho, rhoTS, rhoIdx, rhoPhase = IP.ip(hlms[pair], data)
rhoTS.epoch = data.epoch - hlms[pair].epoch
# rholms[pair] = lal.CutCOMPLEX16TimeSeries(rhoTS, N_shift, N_window) # Warning: code currently fails w/o this cut.
tmp= lsu.DataRollBins(rhoTS, N_shift) # restore functionality for bidirectional shifts: waveform need not start at t=0
rholms[pair] =lal.CutCOMPLEX16TimeSeries(rhoTS, 0, N_window)
return rholms
def InterpolateRholm(rholm, t,verbose=False):
h_re = np.real(rholm.data.data)
h_im = np.imag(rholm.data.data)
if verbose:
print("Interpolation length check ", len(t), len(h_re))
# spline interpolate the real and imaginary parts of the time series
h_real = interpolate.InterpolatedUnivariateSpline(t, h_re[:len(t)], k=3,ext='zeros')
h_imag = interpolate.InterpolatedUnivariateSpline(t, h_im[:len(t)], k=3,ext='zeros')
return lambda ti: h_real(ti) + 1j*h_imag(ti)
# Little faster
#def anon_intp(ti):
#idx = np.searchsorted(t, ti)
#return rholm.data.data[idx]
#return anon_intp
#from pygsl import spline
#spl_re = spline.cspline(len(t))
#spl_im = spline.cspline(len(t))
#spl_re.init(t, np.real(rholm.data.data))
#spl_im.init(t, np.imag(rholm.data.data))
#@profile
#def anon_intp(ti):
#re = spl_re.eval_e_vector(ti)
#return re + 1j*im
#return anon_intp
# Doesn't work, hits recursion depth
#from scipy.signal import cspline1d, cspline1d_eval
#re_coef = cspline1d(np.real(rholm.data.data))
#im_coef = cspline1d(np.imag(rholm.data.data))
#dx, x0 = rholm.deltaT, float(rholm.epoch)
#return lambda ti: cspline1d_eval(re_coef, ti) + 1j*cspline1d_eval(im_coef, ti)
def InterpolateRholms(rholms, t,verbose=False):
"""
Return a dictionary keyed on mode index tuples, (l,m)
where each value is an interpolating function of the overlap against data
as a function of time shift:
rholm_intp(t) = < h_lm(t) | d >
'rholms' is a dictionary keyed on (l,m) containing discrete time series of
< h_lm(t_i) | d >
't' is an array of the discrete times:
[t_0, t_1, ..., t_N]
"""
rholm_intp = {}
for mode in rholms.keys():
rholm = rholms[mode]
# The mode is identically zero, don't bother with it
if sum(abs(rholm.data.data)) == 0.0:
continue
rholm_intp[ mode ] = InterpolateRholm(rholm, t,verbose)
return rholm_intp
def ComputeModeCrossTermIP(hlmsA, hlmsB, psd, fmin, fMax, fNyq, deltaF,
analyticPSD_Q=False, inv_spec_trunc_Q=False, T_spec=0., verbose=True,prefix="U"):
"""
Compute the 'cross terms' between waveform modes, i.e.
< h_lm | h_l'm' >.
The inner product is weighted by power spectral density 'psd' and
integrated over the interval [-fNyq, -fmin] union [fmin, fNyq]
Returns a dictionary of inner product values keyed by tuples of mode indices
i.e. ((l,m),(l',m'))
"""
# Create an instance of class to compute inner product
IP = lsu.ComplexIP(fmin, fMax, fNyq, deltaF, psd, analyticPSD_Q,
inv_spec_trunc_Q, T_spec)
crossTerms = {}
for mode1 in hlmsA.keys():
for mode2 in hlmsB.keys():
crossTerms[ (mode1,mode2) ] = IP.ip(hlmsA[mode1], hlmsB[mode2])
if verbose:
print(" : ", prefix, " populated ", (mode1, mode2), " = ",\
crossTerms[(mode1,mode2) ])
return crossTerms
def ComplexAntennaFactor(det, RA, DEC, psi, tref):
"""
Function to compute the complex-valued antenna pattern function:
F+ + i Fx
'det' is a detector prefix string (e.g. 'H1')
'RA' and 'DEC' are right ascension and declination (in radians)
'psi' is the polarization angle
'tref' is the reference GPS time
"""
detector = lalsim.DetectorPrefixToLALDetector(det)
Fp, Fc = lal.ComputeDetAMResponse(detector.response, RA, DEC, psi, lal.GreenwichMeanSiderealTime(tref))
return Fp + 1j * Fc
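# Example (sketch): complex response of H1 at a fiducial sky location and GPS time.
#   F = ComplexAntennaFactor('H1', 0.5, -0.3, 0.0, 1000000000)
#   Fp, Fc = F.real, F.imag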
def ComputeYlms(Lmax, theta, phi, selected_modes=None):
"""
Return a dictionary keyed by tuples
(l,m)
that contains the values of all
-2Y_lm(theta,phi)
with
l <= Lmax
-l <= m <= l
"""
Ylms = {}
for l in range(2,Lmax+1):
for m in range(-l,l+1):
if selected_modes is not None and (l,m) not in selected_modes:
continue
Ylms[ (l,m) ] = lal.SpinWeightedSphericalHarmonic(theta, phi,-2, l, m)
return Ylms
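# Example (sketch): all -2 spin-weighted harmonics with l<=2 for an inclined
# source, or a restricted subset of modes:
#   Ylms = ComputeYlms(2, 0.3, -0.5)
#   Ylms22 = ComputeYlms(4, 0.3, -0.5, selected_modes=[(2,2),(2,-2)])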
def ComputeArrivalTimeAtDetector(det, RA, DEC, tref):
"""
Function to compute the time of arrival at a detector
from the time of arrival at the geocenter.
'det' is a detector prefix string (e.g. 'H1')
'RA' and 'DEC' are right ascension and declination (in radians)
'tref' is the reference time at the geocenter. It can be either a float (in which case the return is a float) or a GPSTime object (in which case it returns a GPSTime)
"""
detector = lalsim.DetectorPrefixToLALDetector(det)
# if tref is a float or a GPSTime object,
# it should be automagically converted in the appropriate way
return tref + lal.TimeDelayFromEarthCenter(detector.location, RA, DEC, tref)
def ComputeArrivalTimeAtDetectorWithoutShift(det, RA, DEC, tref):
"""
Function to compute the time of arrival at a detector
from the time of arrival at the geocenter.
'det' is a detector prefix string (e.g. 'H1')
'RA' and 'DEC' are right ascension and declination (in radians)
'tref' is the reference time at the geocenter. It can be either a float (in which case the return is a float) or a GPSTime object (in which case it returns a GPSTime)
"""
detector = lalsim.DetectorPrefixToLALDetector(det)
print(detector, detector.location)
# if tref is a float or a GPSTime object,
# it should be automagically converted in the appropriate way
return lal.TimeDelayFromEarthCenter(detector.location, RA, DEC, tref)
# Create complex FD data that does not assume Hermitianity - i.e.
# contains positive and negative freq. content
# TIMING INFO:
# - epoch set so the merger event occurs at total time P.tref
def non_herm_hoff(P):
hp, hc = lalsim.SimInspiralChooseTDWaveform(P.phiref, P.deltaT, P.m1, P.m2,
P.s1x, P.s1y, P.s1z, P.s2x, P.s2y, P.s2z, P.fmin, P.fref, P.dist,
P.incl, P.lambda1, P.lambda2, P.waveFlags, P.nonGRparams,
P.ampO, P.phaseO, P.approx)
hp.epoch = hp.epoch + P.tref
hc.epoch = hc.epoch + P.tref
hoft = lalsim.SimDetectorStrainREAL8TimeSeries(hp, hc,
P.phi, P.theta, P.psi,
lalsim.InstrumentNameToLALDetector(str(P.detector))) # Propagates signal to the detector, including beampattern and time delay
if rosDebugMessages:
print(" +++ Injection creation for detector ", P.detector, " ++ ")
print(" : Creating signal for injection with epoch ", float(hp.epoch), " and event time centered at ", lsu.stringGPSNice(P.tref))
Fp, Fc = lal.ComputeDetAMResponse(lalsim.InstrumentNameToLALDetector(str(P.detector)).response, P.phi, P.theta, P.psi, lal.GreenwichMeanSiderealTime(hp.epoch))
print(" : creating signal for injection with (det, t,RA, DEC,psi,Fp,Fx)= ", P.detector, float(P.tref), P.phi, P.theta, P.psi, Fp, Fc)
if P.taper != lsu.lsu_TAPER_NONE: # Taper if requested
lalsim.SimInspiralREAL8WaveTaper(hoft.data, P.taper)
if P.deltaF is None:
TDlen = lsu.nextPow2(hoft.data.length) # nextPow2 lives in lalsimutils (lsu)
else:
TDlen = int(1./P.deltaF * 1./P.deltaT)
assert TDlen >= hoft.data.length
fwdplan=lal.CreateForwardCOMPLEX16FFTPlan(TDlen,0)
hoft = lal.ResizeREAL8TimeSeries(hoft, 0, TDlen)
hoftC = lal.CreateCOMPLEX16TimeSeries("hoft", hoft.epoch, hoft.f0,
hoft.deltaT, hoft.sampleUnits, TDlen)
# copy h(t) into a COMPLEX16 array which happens to be purely real
for i in range(TDlen):
hoftC.data.data[i] = hoft.data.data[i]
FDlen = TDlen
hoff = lal.CreateCOMPLEX16FrequencySeries("Template h(f)",
hoft.epoch, hoft.f0, 1./hoft.deltaT/TDlen, lal.lalHertzUnit,
FDlen)
lal.COMPLEX16TimeFreqFFT(hoff, hoftC, fwdplan)
return hoff
# def rollTimeSeries(series_dict, nRollRight):
# """
# rollTimeSeries
# Deprecated -- see DataRollBins in lalsimutils.py
# """
# # Use the fact that we pass by value and that we swig bind numpy arrays
# for det in series_dict:
# print " Rolling timeseries ", nRollRight
# np.roll(series_dict.data.data, nRollRight)
def estimateUpperDistanceBoundInMpc(rholms,crossTerms):
# For nonprecessing sources, use the 22 mode to estimate the optimally oriented distance
Qbar = 0
nDet = 0
for det in rholms:
nDet+=1
rho22 = rholms[det][( 2, 2)]
Qbar+=np.abs(crossTerms[det][(2,2), (2,2)])/np.max(np.abs(rho22.data.data)) # one value for each detector
fudgeFactor = 1.1 # let's give ourselves a buffer -- we can't afford to be too tight
return fudgeFactor*distMpcRef* Qbar/nDet *np.sqrt(5/(4.*np.pi))/2.
def estimateEventTimeRelative(theEpochFiducial,rholms, rholms_intp):
return 0
def evaluateFast1dInterpolator(x,y,xlow,xhigh):
# linear interpolation on a uniform grid of len(y) samples spanning [xlow,xhigh]
dx = (xhigh-xlow)/(len(y)-1.)
indx = int(np.floor((x-xlow)/dx)) # integer cell index (np.floor returns a float)
indx = min(max(indx,0),len(y)-2) # clamp so y[indx+1] stays in range
x0 = xlow + indx*dx
return y[indx] + (y[indx+1]-y[indx])*(x-x0)/dx
def makeFast1dInterpolator(y,xlow,xhigh):
return lambda x: evaluateFast1dInterpolator(x,y, xlow, xhigh)
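# Example (sketch): tabulate a function on a uniform grid and evaluate between
# the samples with the fast linear interpolator above.
#   yvals = np.sin(np.linspace(0., np.pi, 101))
#   f = makeFast1dInterpolator(yvals, 0., np.pi)
#   f(0.5)   # approximately np.sin(0.5)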
def NetworkLogLikelihoodTimeMarginalizedDiscrete(epoch,rholmsDictionary,crossTerms, crossTermsV, tref, deltaTWindow, RA,DEC, thS,phiS,psi, dist, Lmax, detList,array_output=False):
"""
NetworkLogLikelihoodTimeMarginalizedDiscrete
Uses DiscreteSingleDetectorLogLikelihoodData to calculate the lnL(t) on a discrete grid.
- Mode 1: array_output = False
Computes \int L dt/T over tref+deltaTWindow (GPSTime + pair, with |deltaTWindow|=T)
- Mode 2: array_output = True
Returns lnL(t) as a raw numpy array.
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS)
distMpc = dist/(lsu.lsu_PC*1e6)
F = {}
tshift= {}
for det in detList:
if (det == "Fake"):
F[det]=1
tshift= tref - epoch
else:
F[det] = ComplexAntennaFactor(det, RA,DEC,psi,tref)
term2 = 0.
keys = constructLMIterator(Lmax)
for det in detList:
for pair1 in keys:
for pair2 in keys:
term2 += F[det] * np.conj(F[det]) * ( crossTerms[det][(pair1,pair2)])* np.conj(Ylms[pair1]) * Ylms[pair2] + F[det]*F[det]*Ylms[pair1]*Ylms[pair2]*crossTermsV[det][(pair1,pair2)] #((-1)**pair1[0])*crossTerms[det][((pair1[0],-pair1[1]),pair2)]
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
print(detList)
rho22 = rholmsDictionary[detList[0]][( 2,2)]
nBins =int( (deltaTWindow[1]-deltaTWindow[0])/rho22.deltaT)
term1 =np.zeros(nBins)
for det in detList:
term1+=DiscreteSingleDetectorLogLikelihoodData(epoch, rholmsDictionary, tref+deltaTWindow[0], nBins,RA,DEC, thS,phiS,psi, dist, Lmax, det)
# Compute integral. Note the NORMALIZATION interval is assumed to be tWindow.
# This is equivalent to dividing by 1/N in *this case*. That formula will not hold if the prior and integration region are different.
if array_output:
return term1+term2 # output is lnL(t), NO marginalization
else:
LmargTime = rho22.deltaT*np.sum(np.exp(term1+term2))/(deltaTWindow[1]-deltaTWindow[0])
return np.log(LmargTime)
def DiscreteSingleDetectorLogLikelihoodData(epoch,rholmsDictionary, tStart,nBins, RA,DEC, thS,phiS,psi, dist, Lmax, det):
"""
DiscreteSingleDetectorLogLikelihoodData
Returns lnLdata array, evaluated at the geocenter, based on a DISCRETE timeshift.
- At low sampling rates, this procedure will be considerably time offset (~ ms).
It will also be undersampled for use in integration.
Return value is
- a RAW numpy array
- associated with nBins following tStart
Uses 'tStart' (a GPSTime) to identify the current detector orientations and hence time of flight delay.
- the assumption is that nBins will be very small
Does NOT
- resample the Q array : it is nearest-neighbor timeshifted before computing lnL
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlms(Lmax, thS, -phiS)
if (det == "Fake"):
F=1
tshift= tStart - epoch
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tStart)
detector = lalsim.DetectorPrefixToLALDetector(det)
tshift = ComputeArrivalTimeAtDetector(det, RA,DEC, tStart) - epoch # detector time minus reference time (so far)
rholms_grid = rholmsDictionary[det]
distMpc = dist/(lsu.lsu_PC*1e6)
rho22 = rholms_grid[( 2,2)]
nShiftL = int( float(tshift)/rho22.deltaT)
term1 = 0.
# Only loop over terms available in the keys
for key in rholms_grid.keys():
# for l in range(2,Lmax+1):
# for m in range(-l,l+1):
l = int(key[0])
m = int(key[1])
rhoTSnow = rholms_grid[( l,m)]
term1 += np.conj(F * Ylms[(l,m)]) * np.roll(rhoTSnow.data.data,nShiftL)
term1 = np.real(term1) / (distMpc/distMpcRef)
nBinLow = int(( tStart + tshift - rho22.epoch )/rho22.deltaT) # time interval is specified in GEOCENTER, but rho is at each IFO
if (nBinLow>-1):
return term1[nBinLow:nBinLow+nBins]
else:
tmp = np.roll(term1,nBinLow) # Good enough
return tmp[0:nBins] # Good enough
def constructLMIterator(Lmax): # returns a list of (l,m) pairs covering all modes, as a list. Useful for building iterators without nested lists
mylist = []
for L in np.arange(2, Lmax+1):
for m in np.arange(-L, L+1):
mylist.append((L,m))
return mylist
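# Example: constructLMIterator(2) yields (as numpy integers)
#   [(2, -2), (2, -1), (2, 0), (2, 1), (2, 2)]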
def IdentifyEffectiveModesForDetector(crossTermsOneDetector, fac,det):
# extract a list of possible pairs
pairsOfPairs = crossTermsOneDetector.keys()
pairsUnion = []
for x in pairsOfPairs:
pairsUnion.append(x[1])
pairsUnion = set(pairsUnion) # a list of unique pairs that occur in crossTerms' first index
# Find modes which are less effective
pairsIneffective = []
for pair in pairsUnion:
isEffective = False
threshold = np.abs(crossTermsOneDetector[((2,2),(2,2))])*fac # compare magnitudes: the cross terms are complex
for pair2 in pairsUnion:
if np.abs(crossTermsOneDetector[(pair,pair2)]) > threshold:
isEffective = True
if not isEffective:
pairsIneffective.append(pair)
if rosDebugMessagesDictionary["DebugMessages"]:
print(" ", pair, " - no significant impact on U, less than ", threshold)
return pairsUnion - set(pairsIneffective)
####
#### Reimplementation with arrays [NOT YET GENERALIZED TO USE V]
####
def PackLikelihoodDataStructuresAsArrays(pairKeys, rholms_intpDictionaryForDetector, rholmsDictionaryForDetector,crossTermsForDetector, crossTermsForDetectorV):
"""
Accepts list of LM pairs, dictionary for rholms against keys, and cross terms (a dictionary)
PROBLEM: Different detectors may have different time zeros. User must use the returned arrays with great care.
"""
#print pairKeys, rholmsDictionaryForDetector
nKeys = len(pairKeys)
keyRef = list(pairKeys)[0]
npts = rholmsDictionaryForDetector[keyRef].data.length
### Step 0: Create two lookup tables: index->pair and pair->index
lookupNumberToKeys = np.zeros((nKeys,2),dtype=int)
lookupKeysToNumber = {}
for indx, val in enumerate(pairKeys):
lookupNumberToKeys[indx][0]= val[0]
lookupNumberToKeys[indx][1]= val[1]
lookupKeysToNumber[val] = indx
# Now create a *second* lookup table, for complex-conjugation-in-time: (l,m)->(l,-m)
lookupNumberToNumberConjugation = np.zeros(nKeys,dtype=int)
for indx in np.arange(nKeys):
l = lookupNumberToKeys[indx][0]
m = lookupNumberToKeys[indx][1]
indxOut = lookupKeysToNumber[(l,-m)]
lookupNumberToNumberConjugation[indx] = indxOut
### Step 1: Convert crossTermsForDetector explicitly into a matrix
crossTermsArrayU = np.zeros((nKeys,nKeys),dtype=complex) # Make sure complex numbers can be stored
crossTermsArrayV = np.zeros((nKeys,nKeys),dtype=complex) # Make sure complex numbers can be stored
for pair1 in pairKeys:
for pair2 in pairKeys:
indx1 = lookupKeysToNumber[pair1]
indx2 = lookupKeysToNumber[pair2]
crossTermsArrayU[indx1][indx2] = crossTermsForDetector[(pair1,pair2)]
crossTermsArrayV[indx1][indx2] = crossTermsForDetectorV[(pair1,pair2)]
# pair1New = (pair1[0], -pair1[1])
# crossTermsArrayV[indx1][indx2] = (-1)**pair1[0]*crossTermsForDetector[(pair1New,pair2)] # this actually should be a separate array in general; we are assuming reflection symmetry to populate it
if rosDebugMessagesDictionary["DebugMessagesLong"]:
print(" Built cross-terms matrix ", crossTermsArray)
### Step 2: Convert rholmsDictionaryForDetector
rholmArray = np.zeros((nKeys,npts),dtype=complex)
for pair1 in pairKeys:
indx1 = lookupKeysToNumber[pair1]
rholmArray[indx1][:] = rholmsDictionaryForDetector[pair1].data.data # Copy the array of time values.
### Step 3: Create rholm_intp array-ized structure
rholm_intpArray = [None]*nKeys # a mutable python list of the desired size, to hold function pointers (range() is immutable in python3)
if rholms_intpDictionaryForDetector:
for pair1 in pairKeys:
indx1 = lookupKeysToNumber[pair1]
rholm_intpArray[indx1] = rholms_intpDictionaryForDetector[pair1]
### step 4: create dictionary (one per detector) with epoch associated with the starting point for that IFO. (should be the same for all modes for a given IFO)
epochHere = float(rholmsDictionaryForDetector[keyRef].epoch) # epoch should be the same for all modes of a given IFO
return lookupNumberToKeys,lookupKeysToNumber, lookupNumberToNumberConjugation, crossTermsArrayU,crossTermsArrayV, rholmArray, rholm_intpArray, epochHere
def SingleDetectorLogLikelihoodDataViaArray(epoch,lookupNK, rholms_intpArrayDict,tref, RA,DEC, thS,phiS,psi, dist, det):
"""
SingleDetectorLogLikelihoodDataViaArray evaluates everything using *arrays* for each (l,m) pair
Note arguments passed are STILL SCALARS
DEPRECATED: use DiscreteFactoredLogLikelihoodViaArray for end-to-end use
USED IN : FactoredLogLikelihoodViaArray
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
Ylms = ComputeYlmsArray(lookupNK[det], thS,-phiS)
if (det == "Fake"):
F=np.exp(-2.*1j*psi) # psi is applied through *F* in our model
tshift= tref - epoch
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tref)
detector = lalsim.DetectorPrefixToLALDetector(det)
tshift = ComputeArrivalTimeAtDetector(det, RA,DEC, tref) - epoch # detector time minus reference time (so far)
rholmsArray = np.array([complex(fn(float(tshift))) for fn in rholms_intpArrayDict[det]],dtype=complex) # Evaluate interpolating functions at target time; list comprehension because map() is lazy in python3
distMpc = dist/(lal.PC_SI*1e6)
# Following loop *should* be implemented as an array multiply!
term1 = 0.j
term1 = np.dot(np.conj(F*Ylms),rholmsArray) # be very careful re how this multiplication is done: suitable to use this form of multiply
term1 = np.real(term1) / (distMpc/distMpcRef)
return term1
def DiscreteSingleDetectorLogLikelihoodDataViaArray(tvals,extr_params,lookupNK, rholmsArrayDict,Lmax=2,det='H1'):
"""
SingleDetectorLogLikelihoodDataViaArray evaluates everything using *arrays* for each (l,m) pair.
Uses discrete arrays. Compare to FactoredLogLikelihoodTimeMarginalized
"""
global distMpcRef
RA = extr_params.phi
DEC = extr_params.theta
tref = extr_params.tref # geocenter time
phiref = extr_params.phiref
incl = extr_params.incl
psi = extr_params.psi
dist = extr_params.dist
npts = len(tvals)
deltaT = extr_params.deltaT
Ylms = ComputeYlmsArray(lookupNK[det], incl,-phiref)
if (det == "Fake"):
F=np.exp(-2.*1j*psi) # psi is applied through *F* in our model
tshift= tref - epoch
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tref)
detector = lalsim.DetectorPrefixToLALDetector(det)
t_det = ComputeArrivalTimeAtDetector(det, RA, DEC, tref)
tshift= t_det - tref
rhoTS = rholmsArrayDict[det]
distMpc = dist/(lal.PC_SI*1e6)
npts = len(rhoTS[0])
# Following loop *should* be implemented as an array multiply!
term1 = np.zeros(npts,dtype=complex)
term1 = np.dot(np.conj(F*Ylms),rhoTS) # be very careful re how this multiplication is done: suitable to use this form of multiply
term1 = np.real(term1) / (distMpc/distMpcRef)
# Apply timeshift *at end*, without loss of generality: this is a single detector. Note no subsample interpolation
# This timeshift should *only* be applied if all detectors start at the same array index!
nShiftL =int(np.round(float(tshift)/deltaT))
# return structure: different shifts
term1 = np.roll(term1,-nShiftL)
return term1
def SingleDetectorLogLikelihoodModelViaArray(lookupNKDict,ctUArrayDict,ctVArrayDict, tref, RA,DEC, thS,phiS,psi, dist,det):
"""
DOCUMENT ME!!!
"""
global distMpcRef
# N.B.: The Ylms are a function of - phiref b/c we are passively rotating
# the source frame, rather than actively rotating the binary.
# Said another way, the m^th harmonic of the waveform should transform as
# e^{- i m phiref}, but the Ylms go as e^{+ i m phiref}, so we must give
# - phiref as an argument so Y_lm h_lm has the proper phiref dependence
U = ctUArrayDict[det]
V = ctVArrayDict[det]
Ylms = ComputeYlmsArray(lookupNKDict[det], thS,-phiS)
if (det == "Fake"):
F=np.exp(-2.*1j*psi) # psi is applied through *F* in our model
else:
F = ComplexAntennaFactor(det, RA,DEC,psi,tref)
distMpc = dist/(lal.PC_SI*1e6)
# Term 2 part 1 : conj(Ylms*F)*crossTermsU*F*Ylms
# Term 2 part 2: Ylms*F*crossTermsV*F*Ylms
term2 = 0.j
term2 += F*np.conj(F)*(np.dot(np.conj(Ylms), np.dot(U,Ylms)))
term2 += F*F*np.dot(Ylms,np.dot(V,Ylms))
term2 = np.sum(term2)
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
return term2
def FactoredLogLikelihoodViaArray(epoch, P, lookupNKDict, rholms_intpArrayDict, ctUArrayDict,ctVArrayDict):
"""
FactoredLogLikelihoodViaArray uses the array-ized data structures to compute the log likelihood, a single scalar value.
This generally is marginally faster, particularly if Lmax is large.
The timeseries quantities are computed via interpolation onto the desired grid
Speed-wise, because we extract a *single* scalar value, this code has the same efficiency as FactoredLogLikelihood
Note 'P' must have the *sampling rate* set to correctly interpret the event time.
Note arguments passed are STILL SCALARS
"""
global distMpcRef
detectors = rholms_intpArrayDict.keys()
RA = P.phi
DEC = P.theta
tref = P.tref # geocenter time
phiref = P.phiref
incl = P.incl
psi = P.psi
dist = P.dist
deltaT = P.deltaT
term1 = 0.
term2 = 0.
for det in detectors:
term1 += SingleDetectorLogLikelihoodDataViaArray(epoch,lookupNKDict, rholms_intpArrayDict,tref, RA, DEC, incl,phiref,psi,dist,det)
term2 += SingleDetectorLogLikelihoodModelViaArray(lookupNKDict, ctUArrayDict, ctVArrayDict, tref, RA, DEC, incl,phiref,psi,dist,det)
return term1+term2
def DiscreteFactoredLogLikelihoodViaArray(tvals, P, lookupNKDict, rholmsArrayDict, ctUArrayDict,ctVArrayDict,epochDict,Lmax=2,array_output=False):
"""
DiscreteFactoredLogLikelihoodViaArray uses the array-ized data structures to compute the log likelihood,
either as an array vs time *or* marginalized in time.
This generally is marginally faster, particularly if Lmax is large.
The timeseries quantities are computed via discrete shifts of an existing grid
Note 'P' must have the *sampling rate* set to correctly interpret the event time.
Note arguments passed are STILL SCALARS
"""
global distMpcRef
detectors = rholmsArrayDict.keys()
npts = len(tvals)
RA = P.phi
DEC = P.theta
tref = P.tref # geocenter time?
phiref = P.phiref
incl = P.incl
psi = P.psi
dist = P.dist
distMpc = dist/(lal.PC_SI*1e6)
invDistMpc = distMpcRef/distMpc
deltaT = P.deltaT
lnL = np.zeros(npts,dtype=np.float128)
for det in detectors:
assert len(tvals) <= len(rholmsArrayDict[det][0]) # code cannot work if window too large!
U = ctUArrayDict[det]
V = ctVArrayDict[det]
Ylms = ComputeYlmsArray(lookupNKDict[det], incl,-phiref)
t_ref = epochDict[det]
# BE CAREFUL ABOUT DETECTOR ROTATION: in general we need the response at the event time, not the start time! Use 'reference time' to do better
F = ComplexAntennaFactor(det, RA, DEC, psi, tref)
invDistMpc = distMpcRef/distMpc
# This is the GPS time at the detector
t_det = ComputeArrivalTimeAtDetector(det, RA, DEC, tref) # target time to explore around; should be CENTERED in interval
tfirst = float(t_det)+tvals[0]
ifirst = int(round(( float(tfirst) - t_ref) / P.deltaT) + 0.5) # this should be fast, done once. Should also be POSITIVE
ilast = ifirst + npts
det_rholms = np.zeros(( len(lookupNKDict[det]),npts),dtype=np.complex64) # rholms evaluated at the detector time, in window, packed
# do not interpolate, just use nearest neighbors.
for indx in np.arange(len(lookupNKDict[det])):
det_rholms[indx] = rholmsArrayDict[det][indx][ifirst:ilast]
# Quadratic term: SingleDetectorLogLikelihoodModelViaArray
term2 = 0.j
term2 += F*np.conj(F)*(np.dot(np.conj(Ylms), np.dot(U,Ylms)))
term2 += F*F*np.dot(Ylms,np.dot(V,Ylms))
term2 = np.sum(term2)
term2 = -np.real(term2) / 4. /(distMpc/distMpcRef)**2
# Linear term
term1 = np.zeros(len(tvals), dtype=complex)
        term1 = np.dot(np.conj(F*Ylms),det_rholms) # note the ordering: conj(F*Ylms) contracted against the (n_lms, npts) array yields a time series
term1 = np.real(term1) / (distMpc/distMpcRef)
lnL+= term1+term2
if array_output: # return the raw array
return lnL
else: # return the marginalized lnL in time
lnLmax = np.max(lnL)
lnLmargT = np.log(integrate.simps(np.exp(lnL-lnLmax), dx=deltaT)) + lnLmax
return lnLmargT
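# Usage sketch (T_window below is an assumed variable, not defined in this
# module): evaluate lnL on a discrete window centered on the event, then
# marginalize over time:
#   tvals = np.arange(-T_window, T_window, P.deltaT)
#   lnLmargT = DiscreteFactoredLogLikelihoodViaArray(tvals, P, lookupNKDict,
#       rholmsArrayDict, ctUArrayDict, ctVArrayDict, epochDict)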
def DiscreteFactoredLogLikelihoodViaArrayVector(tvals, P_vec, lookupNKDict, rholmsArrayDict, ctUArrayDict,ctVArrayDict,epochDict,Lmax=2,array_output=False,xpy=xpy_default):
"""
    DiscreteFactoredLogLikelihoodViaArrayVector uses the array-ized data structures to compute the log likelihood,
either as an array vs time *or* marginalized in time.
This generally is marginally faster, particularly if Lmax is large.
The timeseries quantities are computed via discrete shifts of an existing grid
Note 'P' must have the *sampling rate* set to correctly interpret the event time.
Note arguments passed are NOW ARRAYS, in contrast to similar function which does not have 'Vector' postfix
"""
detectors = rholmsArrayDict.keys()
npts = len(tvals)
npts_extrinsic = len(P_vec.phi)
# All arrays of length `npts_extrinsic`, except for `tref` which is a scalar
RA = P_vec.phi
DEC = P_vec.theta
tref = P_vec.tref # geocenter time, stored as a scalar
phiref = P_vec.phiref
incl = P_vec.incl
psi = P_vec.psi
dist = P_vec.dist
distMpc = dist/(lal.PC_SI*1e6)
invDistMpc = distMpcRef/distMpc
deltaT = P_vec.deltaT # this is stored as a scalar
# Array to use for work
lnL = np.zeros(npts,dtype=np.float128)
lnL_array = np.zeros((npts_extrinsic,npts),dtype=np.float128)
# Array to use for output
lnLmargOut = np.zeros(npts_extrinsic,dtype=np.float128)
# term1 = np.zeros(npts, dtype=complex) # workspace
for det in detectors: # strings right now - need to change to make ufunc-able
# these do not depend on extrinsic params
U= ctUArrayDict[det]
V = ctVArrayDict[det]
# these do depend on extrinsic params
Ylms_vec = ComputeYlmsArrayVector(lookupNKDict[det], incl,-phiref)
F_vec = lalF(det, RA, DEC, psi, tref)
invDistMpc = distMpcRef/distMpc
t_ref = epochDict[det] # a constant for each IFO
        # This is the GPS time at the detector ... an array
t_det = lalT(det, RA, DEC, tref)
for indx_ex in np.arange(npts_extrinsic): # effectively a loop over RA, DEC
tfirst = float(t_det[indx_ex])+tvals[0]
d_here = distMpc[indx_ex]
# pull out scalars
Ylms = Ylms_vec.T[indx_ex].T # yank out Ylms for this specific set of parameters
F = complex(F_vec.T[indx_ex]) # should be scalar
# these are scalars
ifirst = int(round(( float(tfirst) - t_ref) / P_vec.deltaT) + 0.5) # this should be fast, done once
ilast = ifirst + npts
            det_rholms = np.zeros(( len(lookupNKDict[det]),npts),dtype=np.complex64) # rholms evaluated at the detector arrival time, in window, packed
            # Do not interpolate; just use nearest neighbors.
for indx in np.arange(len(lookupNKDict[det])):
det_rholms[indx] = rholmsArrayDict[det][indx][ifirst:ilast]
# Quadratic term: SingleDetectorLogLikelihoodModelViaArray
term2 = 0.j
term2 += F*np.conj(F)*(np.dot(np.conj(Ylms), np.dot(U,Ylms)))
term2 += F*F*np.dot(Ylms,np.dot(V,Ylms))
term2 = np.sum(term2)
term2 = -np.real(term2) / 4. /(d_here/distMpcRef)**2
# Linear term
term1 = np.zeros(len(tvals), dtype=complex) # workspace
            term1 = np.dot(np.conj(F*Ylms),det_rholms) # note the ordering: conj(F*Ylms) contracted against the (n_lms, npts) array yields a time series
term1 = np.real(term1) / (d_here/distMpcRef)
lnL = term1+term2
lnL_array[indx_ex] += lnL # copy into array. Add, because we will get terms from other IFOs
            maxlnL = np.max(lnL_array[indx_ex])   # use the max of the *accumulated* lnL across IFOs for overflow protection
            lnLmargOut[indx_ex] = maxlnL + np.log(integrate.simps(np.exp(lnL_array[indx_ex] - maxlnL), dx=deltaT)) # integrate term by term, minimize overflows
return lnLmargOut
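# Usage sketch (assumed shapes): P_vec carries equal-length arrays of the
# extrinsic parameters (phi, theta, phiref, incl, psi, dist) plus scalar tref
# and deltaT; the return value holds one time-marginalized lnL per sample:
#   lnLs = DiscreteFactoredLogLikelihoodViaArrayVector(tvals, P_vec,
#       lookupNKDict, rholmsArrayDict, ctUArrayDict, ctVArrayDict, epochDict)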
def DiscreteFactoredLogLikelihoodViaArrayVectorNoLoopOrig(tvals, P_vec, lookupNKDict, rholmsArrayDict, ctUArrayDict,ctVArrayDict,epochDict,Lmax=2,array_output=False):
"""
    DiscreteFactoredLogLikelihoodViaArrayVectorNoLoopOrig uses the array-ized data structures to compute the log likelihood,
either as an array vs time *or* marginalized in time.
This generally is marginally faster, particularly if Lmax is large.
The timeseries quantities are computed via discrete shifts of an existing grid
Note 'P' must have the *sampling rate* set to correctly interpret the event time.
Note arguments passed are NOW ARRAYS, in contrast to similar function which does not have 'Vector' postfix
"""
global distMpcRef
detectors = rholmsArrayDict.keys()
npts = len(tvals)
npts_extrinsic = len(P_vec.phi)
# All arrays of length `npts_extrinsic`, except for `tref` which is a scalar
RA = P_vec.phi
DEC = P_vec.theta
tref = P_vec.tref # geocenter time, stored as a scalar
phiref = P_vec.phiref
incl = P_vec.incl
psi = P_vec.psi
dist = P_vec.dist
distMpc = dist/(lal.PC_SI*1e6)
invDistMpc = distMpcRef/distMpc
deltaT = P_vec.deltaT # this is stored as a scalar
# Array to use for work
lnL = np.zeros(npts,dtype=np.float64)
lnL_t_accum = np.zeros((npts_extrinsic,npts),dtype=np.float64)
for det in detectors: # strings right now - need to change to make ufunc-able
# These do not depend on extrinsic params.
# Arrays of shape (n_lms, n_lms).
# Axis 0 corresponds to (l,m), and axis 1 corresponds to (l',m').
U = ctUArrayDict[det]
V = ctVArrayDict[det]
n_lms = len(U)
# These do depend on extrinsic params
# Array of shape (npts_extrinsic, n_lms,)
Ylms_vec = ComputeYlmsArrayVector(lookupNKDict[det], incl, -phiref).T
# Array of shape (npts_extrinsic,)
F_vec = lalF(det, RA, DEC, psi, tref)
# Array of shape (npts_extrinsic,)
invDistMpc = distMpcRef/distMpc
# Scalar -- is constant for each IFO
t_ref = epochDict[det]
# This is the GPS time at the detector, an array of shape (npts_extrinsic,)
t_det = lalT(det, RA, DEC, tref)
tfirst = t_det + tvals[0]
ifirst = (np.round((tfirst-t_ref) / P_vec.deltaT) + 0.5).astype(int)
ilast = ifirst + npts
# Note: Very inefficient, need to avoid making `Qlms` by doing the
# inner product in a CUDA kernel.
det_rholms = rholmsArrayDict[det]
Qlms = np.empty((npts_extrinsic, npts, n_lms), dtype=np.complex128)
for i in range(npts_extrinsic):
Qlms[i] = det_rholms[..., ifirst[i]:ilast[i]].T
# Has shape (npts_extrinsic,)
term2 = ( (F_vec*np.conj(F_vec)).real *np.einsum(
"...i,...j,ij",
np.conj(Ylms_vec), Ylms_vec, U,
).real )
term2 += (np.square(F_vec) *
np.einsum(
"...i,...j,ij",
Ylms_vec, Ylms_vec, V,
)
).real
term2 *= -0.25 * np.square(distMpcRef / distMpc)
# Has shape (npts_extrinsic, npts).
# Starts as term1, and accumulates term2 after.
# View into F with shape (npts_extrinsic, n_lms)
F_vec_dummy_lm = F_vec[..., np.newaxis]
# View into F * Ylm with shape (npts_extrinsic, npts, n_lms)
FY_dummy_t = np.broadcast_to(
(F_vec_dummy_lm * Ylms_vec)[:, np.newaxis],
Qlms.shape,
)
lnL_t_accum += np.einsum(
"...i,...i",
np.conj(FY_dummy_t), Qlms,
).real * (distMpcRef/distMpc)[...,None]
# Accumulate term2 into the time-dependent log likelihood.
# Have to create a view with an extra axis so they broadcast.
lnL_t_accum += term2[..., np.newaxis]
# Take exponential of the log likelihood in-place.
lnLmax = np.max(lnL_t_accum)
L_t = np.exp(lnL_t_accum - lnLmax, out=lnL_t_accum)
# Integrate out the time dimension. We now have an array of shape
# (npts_extrinsic,)
L = integrate.simps(L_t, dx=deltaT, axis=-1)
# Compute log likelihood in-place.
lnL = lnLmax+ np.log(L, out=L)
return lnL
def DiscreteFactoredLogLikelihoodViaArrayVectorNoLoop(tvals, P_vec, lookupNKDict, rholmsArrayDict, ctUArrayDict,ctVArrayDict,epochDict,Lmax=2,array_output=False,xpy=np):
"""
    DiscreteFactoredLogLikelihoodViaArrayVectorNoLoop uses the array-ized data structures to compute the log likelihood,
either as an array vs time *or* marginalized in time.
This generally is marginally faster, particularly if Lmax is large.
The timeseries quantities are computed via discrete shifts of an existing grid
Note 'P' must have the *sampling rate* set to correctly interpret the event time.
Note arguments passed are NOW ARRAYS, in contrast to similar function which does not have 'Vector' postfix
"""
global distMpcRef
detectors = rholmsArrayDict.keys()
npts = len(tvals)
npts_extrinsic = len(P_vec.phi)
# npts_full = len(rholmsArrayDict[detectors[0]][0]) # all have same size
# print " npts :", npts
# print " npts_full:", npts_full
# All arrays of length `npts_extrinsic`, except for `tref` which is a scalar
RA = P_vec.phi
DEC = P_vec.theta
# geocenter time, stored as a scalar
tref = P_vec.tref
phiref = P_vec.phiref
incl = P_vec.incl
psi = P_vec.psi
dist = P_vec.dist
distMpc = dist/(lal.PC_SI*1e6)
invDistMpc = distMpcRef/distMpc
deltaT = float(P_vec.deltaT) # this is stored as a scalar
# Convert tref to greenwich mean sidereal time
greenwich_mean_sidereal_time_tref = xpy.asarray(
lal.GreenwichMeanSiderealTime(tref)
)
# this is stored as a scalar
deltaT = P_vec.deltaT
# Array to accumulate lnL(t) summed across all detectors.
lnL_t_accum = xpy.zeros((npts_extrinsic, npts), dtype=np.float64)
if (xpy is np) or (optimized_gpu_tools is None):
simps = integrate.simps
elif not (xpy is np):
simps = optimized_gpu_tools.simps
else:
raise NotImplementedError("Backend not supported: {}".format(xpy))
# strings right now - need to change to make ufunc-able
for det in detectors:
# Compute the detector's location and response matrix
detector = lalsim.DetectorPrefixToLALDetector(det)
detector_location = xpy.asarray(detector.location)
detector_response = xpy.asarray(detector.response)
# These do not depend on extrinsic params.
# Arrays of shape (n_lms, n_lms).
# Axis 0 corresponds to (l,m), and axis 1 corresponds to (l',m').
U = ctUArrayDict[det]
V = ctVArrayDict[det]
lms = lookupNKDict[det]
n_lms = len(lms)
# These do depend on extrinsic params
# Array of shape (npts_extrinsic, n_lms,)
Ylms_vec = SphericalHarmonicsVectorized(
lms, incl, -phiref,
xpy=xpy,
l_max=Lmax,
)
# Array of shape (npts_extrinsic,)
# F_vec_old = xpy.asarray(lalF(det, RA, DEC, psi, tref))
F_vec = ComputeDetAMResponse(
detector_response,
RA, DEC, psi,
greenwich_mean_sidereal_time_tref,
xpy=xpy
)
# Scalar -- is constant for each IFO
t_ref = epochDict[det]
# This is the GPS time at the detector,
        # Note: to preserve precision relative to ...NoLoopOrig, we CHANGE the t_det definition to be relative to the IFO start time t_ref
        # ... this means we don't carry a ~1e9 GPS offset out in front, so we keep more significant digits in the event time (and can, if needed, reduce precision in GPU ops)
# an array of shape (npts_extrinsic,)
t_det = float(tref - float(t_ref)) + TimeDelayFromEarthCenter(
detector_location, RA, DEC,
float(greenwich_mean_sidereal_time_tref),
xpy=xpy
)
tfirst = t_det + tvals[0]
ifirst = (xpy.rint((tfirst) / deltaT) + 0.5).astype(np.int32) # C uses 32 bit integers : be careful
# ilast = ifirst + npts
Q = xpy.ascontiguousarray(rholmsArrayDict[det].T)
# # Note: Very inefficient, need to avoid making `Qlms` by doing the
# # inner product in a CUDA kernel.
# det_rholms = xpy.asarray(rholmsArrayDict[det])
# Qlms = xpy.empty((npts_extrinsic, npts, n_lms), dtype=complex)
# for i in range(npts_extrinsic):
# Qlms[i] = det_rholms[...,ifirst[i]:ilast[i]].T
# Has shape (npts_extrinsic,)
term2 = (
(F_vec*xpy.conj(F_vec)).real *
xpy.einsum(
"...i,...j,ij",
xpy.conj(Ylms_vec), Ylms_vec, U,
).real
)
term2 += (
xpy.square(F_vec) *
xpy.einsum(
"...i,...j,ij",
Ylms_vec, Ylms_vec, V,
)
).real
term2 *= -0.25 * xpy.square(distMpcRef / distMpc)
# Has shape (npts_extrinsic, npts).
# Starts as term1, and accumulates term2 after.
# View into F with shape (npts_extrinsic, n_lms)
F_vec_dummy_lm = F_vec[..., np.newaxis]
# # View into F * Ylm with shape (npts_extrinsic, npts, n_lms)
# FY_dummy_t = xpy.broadcast_to(
# (F_vec_dummy_lm * Ylms_vec)[:, np.newaxis],
# Qlms.shape,
# )
# lnL_t_accum += xpy.einsum(
# "...i,...i",
# xpy.conj(FY_dummy_t), Qlms,
# ).real * (distMpcRef/distMpc)[...,None]
if not (xpy is np):
FY_conj = xpy.conj(F_vec_dummy_lm * Ylms_vec)
# Shape Q = (npts_time_full, nlms)
# Shape A=FY_conj = (npts_extrinsic, nlms)
# shape result = (npts_extrinsic, npts_time_*window* = npts)
Q_prod_result = Q_inner_product.Q_inner_product_cupy(
Q, FY_conj,
ifirst, npts,
).real
else:
# Use old code completely unchanged ... very wasteful on memory management!
Qlms = xpy.empty((npts_extrinsic, npts, n_lms), dtype=np.complex128)
for i in range(npts_extrinsic):
Qlms[i] = rholmsArrayDict[det][...,ifirst[i]:(ifirst[i]+npts)].T
FY_dummy_t = np.broadcast_to(
(F_vec_dummy_lm * Ylms_vec)[:, np.newaxis],
Qlms.shape,
)
Q_prod_result = np.einsum(
"...i,...i",
np.conj(FY_dummy_t), Qlms,
).real
lnL_t_accum += Q_prod_result * (distMpcRef/distMpc)[...,None]
# lnL_t_accum += Q_inner_product.Q_inner_product_cupy(
# FY_conj, Q,
# ifirst, npts,
# ).real * (distMpcRef/distMpc)[...,None]
# Accumulate term2 into the time-dependent log likelihood.
# Have to create a view with an extra axis so they broadcast.
lnL_t_accum += term2[..., np.newaxis]
# print lnL_t_accum.shape, lnL_t.shape
# lnL_t_accum += lnL_t
# Take exponential of the log likelihood in-place.
lnLmax = xpy.max(lnL_t_accum)
L_t = xpy.exp(lnL_t_accum - lnLmax, out=lnL_t_accum)
L = simps(L_t, dx=deltaT, axis=-1)
# Compute log likelihood in-place.
lnL = lnLmax+ xpy.log(L, out=L)
return lnL
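# Usage sketch (assumes cupy and the optional GPU helper modules import
# successfully, and that the rholm arrays and U/V matrices were staged on the
# device by the caller): passing xpy=cupy routes the einsum/exp/Simpson work
# onto the GPU:
#   import cupy
#   lnLs = DiscreteFactoredLogLikelihoodViaArrayVectorNoLoop(tvals, P_vec,
#       lookupNKDict, rholmsArrayDict, ctUArrayDict, ctVArrayDict, epochDict,
#       xpy=cupy)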
def ComputeYlmsArray(lookupNK, theta, phi):
"""
Returns an array Ylm[k] where lookup(k) = l,m. Only computes the LM values needed.
theta, phi arguments are *scalars*
SHOULD BE DEPRECATED
"""
Ylms=None
if isinstance(lookupNK, dict):
        pairs = list(lookupNK.keys())   # list() so the keys can be indexed under Python 3
Ylms = np.zeros(len(pairs),dtype=complex)
for indx in np.arange(len(pairs)):
l = int(pairs[indx][0])
m = int(pairs[indx][1])
Ylms[ indx] = lal.SpinWeightedSphericalHarmonic(theta, phi,-2, l, m)
elif isinstance(lookupNK, np.ndarray):
Ylms = np.zeros(len(lookupNK),dtype=complex)
for indx in np.arange(len(lookupNK)):
l = int(lookupNK[indx][0])
m = int(lookupNK[indx][1])
Ylms[ indx] = lal.SpinWeightedSphericalHarmonic(theta, phi,-2, l, m)
return Ylms
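# Example (a sketch): for the ndarray form of the lookup, this evaluates the
# two spin-weight -2 harmonics at a single (theta, phi):
#   ComputeYlmsArray(np.array([[2, -2], [2, 2]]), 0.5, 0.1)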
try:
import numba
from numba import vectorize, complex128, float64, int64
numba_on = True
print(" Numba on ")
    # Very inefficient: decorating each call individually
    # Problem: lately, the numba compiler does not correctly identify the return value of this code
    # Should just use SphericalHarmonicsVectorized
@vectorize([complex128(float64,float64,int64,int64,int64)])
def lalylm(th,ph,s,l,m):
return lal.SpinWeightedSphericalHarmonic(th,ph,s,l,m)
# @vectorize
# def lalF(det, RA,DEC,psi,tref):
# return ComplexAntennaFactor(det, RA, DEC, psi, tref)
# @vectorize
    # def lalT(det, RA, DEC, tref):
# return ComputeArrivalTimeAtDetector(det, RA, DEC, tref)
def lalF(det, RA, DEC,psi,tref): # note tref is a SCALAR
F = np.zeros( len(RA), dtype=complex)
for indx in np.arange(len(RA)):
F[indx] = ComplexAntennaFactor(det, RA[indx],DEC[indx], psi[indx], tref)
return F
def lalT(det, RA, DEC,tref): # note tref is a SCALAR
T = np.zeros( len(RA), dtype=float)
for indx in np.arange(len(RA)):
T[indx] = ComputeArrivalTimeAtDetector(det, RA[indx],DEC[indx], tref)
return T
except Exception:   # fall back to plain-python loops if numba is missing or vectorization fails
numba_on = False
print(" Numba off ")
# Very inefficient
def lalylm(th,ph,s,l,m):
return lal.SpinWeightedSphericalHarmonic(th,ph,s,l,m)
def lalF(det, RA, DEC,psi,tref):
if isinstance(RA, float):
return ComplexAntennaFactor(det, RA, DEC, psi,tref)
F = np.zeros( len(RA), dtype=complex)
for indx in np.arange(len(RA)):
F[indx] = ComplexAntennaFactor(det, RA[indx],DEC[indx], psi[indx], tref)
return F
def lalT(det, RA, DEC,tref):
if isinstance(RA, float):
return ComputeArrivalTimeAtDetector(det, RA, DEC,tref)
T = np.zeros( len(RA), dtype=float)
for indx in np.arange(len(RA)):
T[indx] = ComputeArrivalTimeAtDetector(det, RA[indx],DEC[indx], tref)
return T
# lalF = ComplexAntennaFactor
# lalT = ComputeArrivalTimeAtDetector
def ComputeYlmsArrayVector(lookupNK, theta, phi):
"""
Returns an array Ylm[k] where lookup(k) = l,m. Only computes the LM values needed.
    theta, phi arguments are *vectors*. Output shape is (len(lookupNK), len(theta))
Should be combined with the previous routine ComputeYlmsArray (redundant)
Example:
th = np.linspace(0,np.pi, 5); lookupNK =[[2,-2], [2,2]]
factored_likelihood.ComputeYlmsArrayVector(lookupNK,th,th)
"""
# Allocate
Ylms = np.zeros((len(lookupNK), len(theta)),dtype=complex)
    # Force cast to array. Normally a no-op, but it avoids failures when the inputs arrive with 'object' dtype
theta = np.array(theta,dtype=float)
phi = np.array(phi,dtype=float)
# Loop over l, m and evaluate.
for indx in range(len(lookupNK)):
l = int(lookupNK[indx][0])*np.ones(len(theta),dtype=int) # use np.repeat instead for speed
m = int(lookupNK[indx][1])*np.ones(len(theta),dtype=int)
s = -2 * np.ones(len(theta),dtype=int)
Ylms[indx] = lalylm(theta, phi, s, l, m)
return Ylms
| 45.076152 | 353 | 0.638765 | 11,974 | 89,972 | 4.720979 | 0.112744 | 0.005042 | 0.005378 | 0.004086 | 0.482531 | 0.446673 | 0.411046 | 0.376745 | 0.352686 | 0.331653 | 0 | 0.014968 | 0.259681 | 89,972 | 1,995 | 354 | 45.098747 | 0.833714 | 0.356311 | 0 | 0.408 | 0 | 0.000889 | 0.043393 | 0.002161 | 0 | 0 | 0 | 0 | 0.007111 | 1 | 0.040889 | false | 0 | 0.022222 | 0.005333 | 0.110222 | 0.055111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5e3c05d3e94f5d74ca62fb3a55ef73ddd48cfd63 | 2,144 | py | Python | polaris/polaris/transaction/serializers.py | antb123/django-polaris | c89d57f4dbdec379028feaeb1f7ae00d01b23cce | [
"Apache-2.0"
] | 1 | 2021-08-10T00:07:16.000Z | 2021-08-10T00:07:16.000Z | polaris/polaris/transaction/serializers.py | antb123/django-polaris | c89d57f4dbdec379028feaeb1f7ae00d01b23cce | [
"Apache-2.0"
] | null | null | null | polaris/polaris/transaction/serializers.py | antb123/django-polaris | c89d57f4dbdec379028feaeb1f7ae00d01b23cce | [
"Apache-2.0"
] | null | null | null | """This module defines a serializer for the transaction model."""
from rest_framework import serializers
from polaris.models import Transaction
class PolarisDecimalField(serializers.DecimalField):
@staticmethod
def strip_trailing_zeros(num):
if num == num.to_integral():
return round(num, 1)
else:
return num.normalize()
def to_representation(self, value):
        return str(self.strip_trailing_zeros(value))
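# Behavior sketch for PolarisDecimalField: Decimal("1.2500") serializes as
# "1.25", while an integral value like Decimal("30000.0000") becomes "30000.0";
# the round(num, 1) branch avoids normalize() returning scientific notation
# such as Decimal("3E+4").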
class TransactionSerializer(serializers.ModelSerializer):
"""Defines the custom serializer for a transaction."""
id = serializers.CharField()
amount_in = PolarisDecimalField(max_digits=50, decimal_places=25)
amount_out = PolarisDecimalField(max_digits=50, decimal_places=25)
amount_fee = PolarisDecimalField(max_digits=50, decimal_places=25)
more_info_url = serializers.SerializerMethodField()
def get_more_info_url(self, transaction_instance):
return self.context.get("more_info_url")
def to_representation(self, instance):
"""
Removes the "address" part of the to_address and from_address field
names from the serialized representation.
Since "from" is a python keyword, a "from" variable could not be
defined directly as an attribute.
"""
data = super().to_representation(instance)
data["to"] = data.pop("to_address")
data["from"] = data.pop("from_address")
return data
class Meta:
model = Transaction
fields = [
"id",
"kind",
"status",
"status_eta",
"amount_in",
"amount_out",
"amount_fee",
"started_at",
"completed_at",
"stellar_transaction_id",
"external_transaction_id",
"from_address",
"to_address",
"external_extra",
"external_extra_text",
"deposit_memo",
"deposit_memo_type",
"withdraw_anchor_account",
"withdraw_memo",
"withdraw_memo_type",
"more_info_url",
]
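# Usage sketch (assumes an existing Transaction instance `tx` and a URL chosen
# by the caller):
#   data = TransactionSerializer(
#       tx, context={"more_info_url": "https://example.com/info"}).data
#   # data carries "to"/"from" keys in place of to_address/from_address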
| 31.072464 | 75 | 0.619403 | 224 | 2,144 | 5.683036 | 0.450893 | 0.025137 | 0.034564 | 0.070699 | 0.115475 | 0.115475 | 0.115475 | 0.080126 | 0 | 0 | 0 | 0.008519 | 0.288246 | 2,144 | 68 | 76 | 31.529412 | 0.825688 | 0.148321 | 0 | 0 | 0 | 0 | 0.175339 | 0.038462 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.040816 | 0.040816 | 0.387755 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaa593e5965ed8254b829487af88f0e67c391eef | 1,589 | py | Python | normal.py | leonheld/INE5118 | 356ac119275237f6291efd2e8d5df0cc9cb69c52 | [
"WTFPL"
] | null | null | null | normal.py | leonheld/INE5118 | 356ac119275237f6291efd2e8d5df0cc9cb69c52 | [
"WTFPL"
] | null | null | null | normal.py | leonheld/INE5118 | 356ac119275237f6291efd2e8d5df0cc9cb69c52 | [
"WTFPL"
] | null | null | null | #envio4
# -*- coding: utf-8 -*-
"""
Created on Fri May 31 10:52:39 2019
@author: Leon
"""
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
from scipy import stats
import pandas as pd
import random
bw = []
with open('bodyweight.txt') as inputfile:
for line in inputfile:
bw.append(float(line.strip()))
# normal-distribution parameters of the raw body-weight data
mubw = np.mean(bw)
sigmabw = np.std(bw)
mediaAmostraCinco = []
for i in range(1, 1000):
amostraCinco = random.sample(population = bw, k = 5)
mediaAmostraCinco.append(np.mean(amostraCinco))
# normal-distribution parameters of the five-sample means
muAmostraCinco = np.mean(mediaAmostraCinco)
sigmaAmostraCinco = np.std(mediaAmostraCinco)
mediaAmostraCinquenta = []
for i in range(1, 1000):
amostraCinquenta = random.sample(population = bw, k = 50)
mediaAmostraCinquenta.append(np.mean(amostraCinquenta))
# normal-distribution parameters of the fifty-sample means
muAmostraCinquenta = np.mean(mediaAmostraCinquenta)
sigmaAmostraCinquenta = np.std(mediaAmostraCinquenta)
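# Sanity check (a sketch added for illustration): by the central limit theorem
# the spread of the sample means should shrink roughly as sigma/sqrt(n).
print("n=5 : predicted", sigmabw/np.sqrt(5), " observed", sigmaAmostraCinco)
print("n=50: predicted", sigmabw/np.sqrt(50), " observed", sigmaAmostraCinquenta)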
plt.figure(1)
sns.distplot(bw, kde=False, norm_hist=1, color = "#3dc1ff")
axes = plt.gca()
axes.set_xlim([0, 40])
axes.set_ylim([0, 0.175])
plt.figure(2)
sns.distplot(mediaAmostraCinco, kde=False, norm_hist=1, color = "#3dc1ff")
axes = plt.gca()
axes.set_xlim([0, 40])
axes.set_ylim([0, 0.3])
plt.figure(3)
sns.distplot(mediaAmostraCinquenta, kde=False, norm_hist=1, color = "#3ddacf")
axes = plt.gca()
axes.set_xlim([0, 40])
axes.set_ylim([0, 1])
plt.show()
| 24.075758 | 78 | 0.70107 | 215 | 1,589 | 5.139535 | 0.404651 | 0.038009 | 0.048869 | 0.054299 | 0.311312 | 0.199095 | 0.150226 | 0.150226 | 0.150226 | 0.150226 | 0 | 0.043247 | 0.170548 | 1,589 | 65 | 79 | 24.446154 | 0.795144 | 0.129641 | 0 | 0.195122 | 0 | 0 | 0.025529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.170732 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaa71b38a5e9e1402d624599b6971ebc72e4bf75 | 895 | py | Python | 02_assignment/toolbox/Toolbox_Python02450/Scripts/ex11_2_2.py | LukaAvbreht/ML_projects | 8b36acdeb017ce8a57959c609b96111968852d5f | [
"MIT"
] | null | null | null | 02_assignment/toolbox/Toolbox_Python02450/Scripts/ex11_2_2.py | LukaAvbreht/ML_projects | 8b36acdeb017ce8a57959c609b96111968852d5f | [
"MIT"
] | null | null | null | 02_assignment/toolbox/Toolbox_Python02450/Scripts/ex11_2_2.py | LukaAvbreht/ML_projects | 8b36acdeb017ce8a57959c609b96111968852d5f | [
"MIT"
] | null | null | null | # exercise 11.2.2
import numpy as np
from matplotlib.pyplot import figure, subplot, hist, title, show, plot
from scipy.stats.kde import gaussian_kde
# Draw samples from mixture of gaussians (as in exercise 11.1.1)
N = 1000; M = 1
x = np.linspace(-10, 10, 50)
X = np.empty((N,M))
m = np.array([1, 3, 6]); s = np.array([1, .5, 2])
c_sizes = np.random.multinomial(N, [1./3, 1./3, 1./3])
for c_id, c_size in enumerate(c_sizes):
X[c_sizes.cumsum()[c_id]-c_sizes[c_id]:c_sizes.cumsum()[c_id],:] = np.random.normal(m[c_id], np.sqrt(s[c_id]), (c_size,M))
# x-values to evaluate the KDE
xe = np.linspace(-10, 10, 100)
# Compute kernel density estimate
kde = gaussian_kde(X.ravel())
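# Optional (a sketch): the bandwidth can be adjusted before evaluating, e.g.
# kde.set_bandwidth(bw_method=kde.factor/2.) for a narrower kernel; scipy's
# default is Scott's rule.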
# Plot kernel density estimate
figure(figsize=(6,7))
subplot(2,1,1)
hist(X,x)
title('Data histogram')
subplot(2,1,2)
plot(xe, kde.evaluate(xe))
title('Kernel density estimate')
show()
print('Ran Exercise 11.2.2') | 27.96875 | 126 | 0.686034 | 173 | 895 | 3.462428 | 0.393064 | 0.03005 | 0.026711 | 0.040067 | 0.050083 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064433 | 0.132961 | 895 | 32 | 127 | 27.96875 | 0.707474 | 0.18771 | 0 | 0 | 0 | 0 | 0.077562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaa85ae31a886898388e530052568bfa066c1dd0 | 1,903 | py | Python | xawscf/utils/loader.py | DmitryBogomolov/aws-cloudformation-sample | f0454b203973e07027a4cdf5f36468d137d310fd | [
"MIT"
] | null | null | null | xawscf/utils/loader.py | DmitryBogomolov/aws-cloudformation-sample | f0454b203973e07027a4cdf5f36468d137d310fd | [
"MIT"
] | 36 | 2018-04-20T06:11:41.000Z | 2018-07-07T21:55:55.000Z | xawscf/utils/loader.py | DmitryBogomolov/aws-cloudformation-sample | f0454b203973e07027a4cdf5f36468d137d310fd | [
"MIT"
] | null | null | null | from os import getcwd, path
import yaml
# pylint: disable=too-few-public-methods
class Custom:
def __init__(self, tag, value):
self.tag = tag
self.value = value
def __eq__(self, other):
return self.tag == other.tag and self.value == other.value
def custom_constructor(loader, node):
construct = getattr(loader, 'construct_' + node.id)
return Custom(node.tag, construct(node))
TYPE_TO_REPRESENTER = {
'list': 'sequence',
'dict': 'mapping'
}
def custom_representer(dumper, data):
represent = getattr(dumper,
'represent_' + TYPE_TO_REPRESENTER.get(type(data.value).__name__, 'scalar'))
return represent(data.tag, data.value)
INTRINSIC_FUNCTIONS = (
'!Base64',
'!Cidr',
'!FindInMap',
'!GetAtt',
'!GetAZs',
'!ImportValue',
'!Join',
'!Select',
'!Split',
'!Sub',
'!Ref'
)
for intrinsic in INTRINSIC_FUNCTIONS:
yaml.add_constructor(intrinsic, custom_constructor)
yaml.add_representer(Custom, custom_representer)
CWD = path.join(getcwd(), '')
def include_file_constructor(loader, node):
value = loader.construct_scalar(node)
path_to_file = path.realpath(path.join(path.dirname(loader.stream.name), value))
if path.commonprefix([CWD, path_to_file]) != CWD:
raise Exception('File path *{}* is not valid'.format(value))
return load_template_from_file(path_to_file)
yaml.add_constructor('!include', include_file_constructor)
def load_template(template: str):
return yaml.load(template, Loader=yaml.Loader)
def load_template_from_file(file_path: str):
with open(file_path, mode='r', encoding='utf-8') as file_object:
return load_template(file_object)
def save_template_to_file(file_path: str, data):
with open(file_path, mode='w', encoding='utf-8') as file_object:
yaml.dump(data, file_object, default_flow_style=False, Dumper=yaml.Dumper)
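# Usage sketch (hypothetical template text): intrinsic-function tags survive a
# load as Custom objects and round-trip through dumping:
#   tpl = load_template('Value: !Ref MyBucket')
#   assert tpl['Value'] == Custom('!Ref', 'MyBucket')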
| 28.833333 | 84 | 0.694167 | 248 | 1,903 | 5.08871 | 0.366935 | 0.044374 | 0.023772 | 0.031696 | 0.069731 | 0.038035 | 0 | 0 | 0 | 0 | 0 | 0.002538 | 0.171834 | 1,903 | 65 | 85 | 29.276923 | 0.798223 | 0.019968 | 0 | 0 | 0 | 0 | 0.091251 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156863 | false | 0 | 0.058824 | 0.039216 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaa9f059e43ce27806eeb238eac5644f37ebc8b5 | 6,132 | py | Python | src/image.py | piraaa/VideoDigitalWatermarking | 6439881dc88fb7257a3dd9856b185e5c667b89b4 | [
"MIT"
] | 38 | 2017-11-06T08:59:23.000Z | 2022-02-21T01:42:50.000Z | src/image.py | qiuqiu888888/VideoDigitalWatermarking | 6439881dc88fb7257a3dd9856b185e5c667b89b4 | [
"MIT"
] | 2 | 2018-10-01T15:56:37.000Z | 2018-10-01T15:59:19.000Z | src/image.py | qiuqiu888888/VideoDigitalWatermarking | 6439881dc88fb7257a3dd9856b185e5c667b89b4 | [
"MIT"
] | 9 | 2017-09-09T02:39:44.000Z | 2021-10-19T08:56:57.000Z | #
# image.py
# Created by pira on 2017/07/28.
#
#coding: utf-8
u"""For image processing."""
import sys
import numpy as np
import cv2
RED = 2
GREEN = 1
BLUE = 0
def readGrayImage(filename):
u"""Read grayscale image.
@param filename : filename
@return img : 2 dimension np.ndarray[Height][Width]
"""
#imread flags=0(cv2.IMREAD_GRAYSCALE):GrayScale
img = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
print('Read "' + filename + '".')
return img
def readColorImage(filename):
u"""Read color image. [notice] Grayscale images are treated as RGB image. (ex. if pixel value is 100, it's treated [100][100][100] RGB image.)
@param filename : filename
@return img : 3 dimension np.ndarray[Height][Width][BGR]
"""
#imread flags>0(cv2.IMREAD_COLOR):3ChannelColors
    # Even grayscale input images are forcibly treated as RGB here.
img = cv2.imread(filename, cv2.IMREAD_COLOR)
print('Read "' + filename + '".')
return img
def getRgbLayer(img, rgb=RED):
u"""Read grayscale image.
@param img : a 3 dimension color image like np.ndarray[Height][Width][BGR]
@param rgb : a returned layer number. Blue is 0, Green is 1 and Red is 2. You can also give a colorname like RED.
@return layer : a color layer of image, only red, green or blue.
"""
height = img.shape[0]
width = img.shape[1]
layer = np.empty([height, width])
for i in np.arange(height):
for j in np.arange(width):
layer[i][j] = img[i][j][rgb]
return layer
def writeImage(filename, img):
u"""Export image data.
@param filename : filename for export image data
@param img : 2 or 3 dimension image array
"""
cv2.imwrite(filename, img)
print('Write "'+filename+'".')
def showImage(img):
u"""Show imsge data.
@param img : image array
"""
cv2.imshow('Image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
def grayimage2block(img, size):
u"""Divide gray image into blocks.
@param img : a 2 dimension gray image like a np.array[height][width]
@param size : block size list like a [height, width]
@return blocks : a 4 dimension blocks like a np.ndarray[block_height][block_width][height][width]
"""
image_height = int(img.shape[0])
image_width = int(img.shape[1])
block_height = int(size[0])
block_width = int(size[1])
#print(image_height, image_width, block_height, block_width)
if image_height%block_height != 0 or image_width%block_width != 0:
        print('Please give an appropriate block size.')
print('You can use only numbers that are divisors of image height and width as block size.')
sys.exit()
row = int(image_height / block_height)
col = int(image_width / block_width)
blocks = np.empty([row, col, block_height, block_width])
#print('blocks shape =', blocks.shape)
for i in np.arange(row):
for j in np.arange(col):
for k in np.arange(block_height):
for l in np.arange(block_width):
blocks[i][j][k][l] = img[i*block_height+k][j*block_width+l]
return blocks
def grayblock2image(blocks):
u"""Convert the blocks to a gray image..
@param blocks : a 4 dimension gray image such as np.ndarray[block_y][block_x][block_height][block_width]
@return image : an image array.
"""
block_y, block_x, block_height, block_width = blocks.shape
image = np.empty((0,block_x*block_width), int)
for x in blocks:
col_block = np.empty((block_height,0), int)
for block in x:
col_block = np.concatenate([col_block, block], axis=1)
image = np.concatenate([image, col_block], axis=0)
return image
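# Round-trip sketch (assumes the block size divides the image size evenly):
#   blocks = grayimage2block(img, [8, 8])          # img has shape (H, W)
#   restored = grayblock2image(blocks)             # equals img element-wise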
def colorimage2block(img, size):
u"""Divide color image into blocks.
@param img : a 3 dimension image like a np.array[height][width][BGR]
@param size : block size list like a [height, width]
@return blocks : a 5 dimension blocks like a np.ndarray[block_height][block_width][height][width][BGR]
"""
image_height = int(img.shape[0])
image_width = int(img.shape[1])
block_height = int(size[0])
block_width = int(size[1])
color_num = int(img.shape[2])
#print(image_height, image_width, block_height, block_width)
if image_height%block_height != 0 or image_width%block_width != 0:
        print('Please give an appropriate block size.')
print('You can use only numbers that are divisors of image height and width as block size.')
sys.exit()
row = int(image_height / block_height)
col = int(image_width / block_width)
blocks = np.empty([row, col, block_height, block_width, color_num])
#print('blocks shape =', blocks.shape)
for i in np.arange(row):
for j in np.arange(col):
for k in np.arange(block_height):
for l in np.arange(block_width):
for rgb in np.arange(color_num):
blocks[i][j][k][l][rgb] = img[i*block_height+k][j*block_width+l][rgb]
return blocks
def rgb2ycc(img):
u"""RGB to YCbCr.
@param img : 3 dimension np.ndarray[Height][Width][RGB]
@return ycc_data : 3 dimension np.ndarray[Height][Width][YCC]
"""
height = img.shape[0]
width = img.shape[1]
ycc_data = np.empty([height,width,3])
for i in np.arange(height):
for j in np.arange(width):
ycc_data[i][j][0] = 0.299*img[i][j][2] + 0.587*img[i][j][1] + 0.114*img[i][j][0] #Y
ycc_data[i][j][1] = -0.169*img[i][j][2] - 0.331*img[i][j][1] + 0.500*img[i][j][0] #Cb
ycc_data[i][j][2] = 0.500*img[i][j][2] - 0.419*img[i][j][1] - 0.081*img[i][j][0] #Cr
return ycc_data
def ycc2rgb(img):
u"""YCbCr to BGR.
@param img : 3 dimension np.ndarray[Height][Width][YCC]
@return rgb_data : 3 dimension np.ndarray[Height][Width][BGR]
"""
height = img.shape[0]
width = img.shape[1]
rgb_data = np.empty([height,width,3])
for i in np.arange(height):
for j in np.arange(width):
rgb_data[i][j][0] = img[i][j][0] + 1.772*img[i][j][1] #B
rgb_data[i][j][1] = img[i][j][0] - 0.344*img[i][j][1] - 0.714*img[i][j][2] #G
rgb_data[i][j][2] = img[i][j][0] + 1.402*img[i][j][2] #R
return rgb_data
def get_y(img):
u"""Get only Y from YCC image.
@param img : 3 dimension np.ndarray[Height][Width][YCC]
@return y_data : 2 dimension np.ndarray[Height][Width]. Including only Y.
"""
height = img.shape[0]
width = img.shape[1]
y_data = np.empty([height,width])
for i in np.arange(height):
for j in np.arange(width):
y_data[i][j] = img[i][j][0]
return y_data
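# Pipeline sketch (assumes a BGR image loaded with readColorImage): convert to
# YCbCr and keep only the luma channel:
#   ycc = rgb2ycc(img)
#   luma = get_y(ycc)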
| 31.446154 | 143 | 0.676614 | 1,054 | 6,132 | 3.85389 | 0.145161 | 0.013786 | 0.022157 | 0.044313 | 0.621123 | 0.562038 | 0.490399 | 0.449532 | 0.395372 | 0.381585 | 0 | 0.032315 | 0.167319 | 6,132 | 194 | 144 | 31.608247 | 0.76322 | 0.375571 | 0 | 0.431034 | 0 | 0 | 0.073434 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.094828 | false | 0 | 0.025862 | 0 | 0.198276 | 0.060345 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaab0e1b6ada6a01e24a7b5a93f377831c51ea0c | 5,399 | py | Python | example/match_gaussian_basis.py | amandadumi/semiempy | 2f8ed7e9fb033772f2c34381de3c3faa3a05e3ab | [
"BSD-3-Clause"
] | 6 | 2019-01-14T15:50:10.000Z | 2019-03-26T07:08:33.000Z | example/match_gaussian_basis.py | amandadumi/semiempy | 2f8ed7e9fb033772f2c34381de3c3faa3a05e3ab | [
"BSD-3-Clause"
] | 4 | 2019-01-14T15:45:08.000Z | 2019-03-04T15:26:15.000Z | example/match_gaussian_basis.py | amandadumi/semiempy | 2f8ed7e9fb033772f2c34381de3c3faa3a05e3ab | [
"BSD-3-Clause"
] | 2 | 2019-01-14T15:23:44.000Z | 2019-01-14T15:51:07.000Z | import numpy as np
import basis
from utils.molecule import Molecule
from basis.basis import Basis
from basis.gaussian import Gaussian
# STO-NG fitted parameters from Table I of doi:10.1063/1.1672392
sto_alpha_1s = [
# pad with two empty lists so index corresponds to sto_ng
[],[],[],
# sto_3g
[3.207831241e+00, 5.843104649e-01, 1.581372150e-01]
]
sto_alpha_2s = [
# pad with two empty lists so index corresponds to sto_ng
[],[],[],
# sto_3g
[5.145620502e+00, 1.195731544e+00, 3.888890096e-01]
]
# These are just the same as the 2s but if we ever decide to do the fit
# by ourselves we could let them become different values
sto_alpha_2p = [
# pad with two empty lists so index corresponds to sto_ng
[],[],[],
# sto_3g
[5.145620502e+00, 1.195731544e+00, 3.888890096e-01]
]
sto_coeff_1s = [
# pad with two empty lists so index corresponds to sto_ng
[],[],[],
# sto_3g
[1.543289673e-01, 5.353281423e-01, 4.446345422e-01]
]
sto_coeff_2s = [
# pad with two empty lists so index corr38esponds to sto_ng
[],[],[],
# sto_3g
[-9.996722919e-02, 3.995128261e-01, 7.001154689e-01]
]
sto_coeff_2p = [
# pad with two empty lists so index corresponds to sto_ng
[],[],[],
# sto_3g
[1.559162750e-01, 6.076837186e-01, 3.919573931e-01]
]
# These functions are fit to Slaters with \zeta == 1. To scale them to a
# paticular atom multiply by \zeta_opt**2
# The following optimized zetas are from Table I of doi: 10.1063/1.1733573
# except for hydrogen which is from doi:10.1063/1.1727227
# Formally Li and Be don't require p functions and weren't provided in the
# paper so we repeat the 2s values. This would be consistent if these were
# calculated using Slater's rules for screening doi:10.1103/PhysRev.36.57
# because the screening only would depend on what the principal quantum
# number is.
opt_zeta_1s = {
"H" : 1.0
}
opt_zeta_2s = {
"O" : 1.0
}
opt_zeta_2p = {
"O" : 1.0
}
class MinimalNoCore(Basis):
"""
A minimal STO-NG basis with no core electrons
Attributes
----------
n_func : int
number of atomic basis functions
funcs : list
list of basis functions
"""
def __init__(self, mol, num_gaussians):
Basis.__init__(self, mol)
self.num_gaussians = num_gaussians
self.n_func = 0
self.funcs = []
self.populate_basis_from_mol()
def populate_basis_from_mol(self):
for i in range(self.mol.n_atom):
# hydrogen and helium only have s functions
if(self.mol.at_num[i] < 3):
self.n_func += 1
exps = [j*opt_zeta_1s[self.mol.symb[i]]**2 for j in sto_alpha_1s[self.num_gaussians]]
contract_coeff = sto_coeff_1s[self.num_gaussians]
ang_mom = (0,0,0)
pos = self.mol.xyz[i]
on_center = i
center_Z = self.mol.at_num[i]
center_symb = self.mol.symb[i]
new_func = Gaussian(exps, contract_coeff, ang_mom, pos, on_center, center_Z, center_symb)
self.funcs.append(new_func)
elif (self.mol.at_num[i] > 2 and self.mol.at_num[i] < 11):
self.n_func += 4
exps = [j*opt_zeta_2s[self.mol.symb[i]]**2 for j in sto_alpha_2s[self.num_gaussians]]
contract_coeff = sto_coeff_1s[self.num_gaussians]
ang_mom = (0,0,0)
pos = self.mol.xyz[i]
on_center = i
center_Z = self.mol.at_num[i]
center_symb = self.mol.symb[i]
new_func = Gaussian(exps, contract_coeff, ang_mom, pos, on_center, center_Z, center_symb)
self.funcs.append(new_func)
exps = [j*opt_zeta_2p[self.mol.symb[i]]**2 for j in sto_alpha_2p[self.num_gaussians]]
contract_coeff = sto_coeff_1s[self.num_gaussians]
ang_mom = (1,0,0)
pos = self.mol.xyz[i]
on_center = i
center_Z = self.mol.at_num[i]
center_symb = self.mol.symb[i]
new_func = Gaussian(exps, contract_coeff, ang_mom, pos, on_center, center_Z, center_symb)
self.funcs.append(new_func)
exps = [j*opt_zeta_2p[self.mol.symb[i]]**2 for j in sto_alpha_2p[self.num_gaussians]]
contract_coeff = sto_coeff_1s[self.num_gaussians]
ang_mom = (0,1,0)
pos = self.mol.xyz[i]
on_center = i
center_Z = self.mol.at_num[i]
center_symb = self.mol.symb[i]
new_func = Gaussian(exps, contract_coeff, ang_mom, pos, on_center, center_Z, center_symb)
self.funcs.append(new_func)
exps = [j*opt_zeta_2p[self.mol.symb[i]]**2 for j in sto_alpha_2p[self.num_gaussians]]
contract_coeff = sto_coeff_1s[self.num_gaussians]
ang_mom = (0,0,1)
pos = self.mol.xyz[i]
on_center = i
center_Z = self.mol.at_num[i]
center_symb = self.mol.symb[i]
new_func = Gaussian(exps, contract_coeff, ang_mom, pos, on_center, center_Z, center_symb)
self.funcs.append(new_func)
else:
raise NotImplementedError("Functions above 2p are not implemented.")
| 37.493056 | 105 | 0.601778 | 806 | 5,399 | 3.831266 | 0.227047 | 0.058938 | 0.056995 | 0.03886 | 0.559909 | 0.543394 | 0.543394 | 0.543394 | 0.519106 | 0.519106 | 0 | 0.088737 | 0.294499 | 5,399 | 143 | 106 | 37.755245 | 0.721974 | 0.242452 | 0 | 0.510204 | 0 | 0 | 0.010432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.05102 | 0 | 0.081633 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eaacb35fdc29fceff2cd414e734549aedf165845 | 508 | py | Python | setup.py | class4kayaker/grader_toolkit | 21b5d287cb63b93015e503dccdfcccba478a2806 | [
"MIT"
] | null | null | null | setup.py | class4kayaker/grader_toolkit | 21b5d287cb63b93015e503dccdfcccba478a2806 | [
"MIT"
] | null | null | null | setup.py | class4kayaker/grader_toolkit | 21b5d287cb63b93015e503dccdfcccba478a2806 | [
"MIT"
] | null | null | null | """
A utility to ease grading.
"""
from setuptools import find_packages, setup
package = 'grader_toolkit'
version = '0.1.0'
dependencies = [
'click',
'click-repl',
'ruamel.yaml',
'six',
'sqlalchemy',
'typing'
]
setup(
name=package,
version=version,
package_dir={'': 'src'},
packages=find_packages(where='src'),
install_requires=dependencies,
entry_points={
'console_scripts': [
'py-grtk = grader_toolkit.cli:cli_main'
]
},
)
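# Usage sketch (assumes a local checkout): install in editable mode and run the
# console script declared in entry_points:
#   pip install -e .
#   py-grtk --help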
| 16.933333 | 51 | 0.598425 | 54 | 508 | 5.462963 | 0.722222 | 0.081356 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007874 | 0.25 | 508 | 29 | 52 | 17.517241 | 0.766404 | 0.051181 | 0 | 0 | 0 | 0 | 0.257384 | 0.056962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eab2825eb021bd00bfb66b718f522acc49226b31 | 5,175 | py | Python | damast/postgres_rest_api/person.py | UniStuttgart-VISUS/damast | 05ceb150e86227d81f0f4bb441af6d4f8bce65ee | [
"MIT"
] | 2 | 2021-12-22T17:01:02.000Z | 2022-01-18T17:05:48.000Z | damast/postgres_rest_api/person.py | UniStuttgart-VISUS/damast | 05ceb150e86227d81f0f4bb441af6d4f8bce65ee | [
"MIT"
] | 94 | 2021-12-22T11:21:57.000Z | 2022-03-31T22:40:56.000Z | damast/postgres_rest_api/person.py | UniStuttgart-VISUS/damast | 05ceb150e86227d81f0f4bb441af6d4f8bce65ee | [
"MIT"
] | 1 | 2022-01-04T09:04:47.000Z | 2022-01-04T09:04:47.000Z | import flask
import psycopg2
import json
from ..authenticated_blueprint_preparator import AuthenticatedBlueprintPreparator
import werkzeug.exceptions
from .user_action import add_user_action
from .decorators import rest_endpoint
name = 'person'
app = AuthenticatedBlueprintPreparator(name, __name__, template_folder=None, url_prefix='/person')
@app.route('/<int:person_id>', methods=['GET', 'PUT', 'PATCH', 'DELETE'], role='user')
@rest_endpoint
def person_data(c, person_id):
'''
CRUD endpoint to manipulate person tuples.
[all] @param person_id ID of person tuple, 0 or `None` for PUT
C/PUT @payload application/json
@returns application/json
Create a new person tuple. `name` is a required field, the rest is optional.
Returns the ID for the created entity. Fails if a person with that name
already exists.
Exemplary payload for `PUT /person/0`:
{
"name": "Testperson",
"comment": "Test comment",
"time_range": "6th century",
"person_type": 2
}
R/GET @returns application/json
Get person data for the person with ID `person_id`.
@param person_id Integer, `id` in table `person`
@returns application/json
This returns the data from table `person` as a single JSON object.
Example return value:
{
"id": 12,
"name": "Testperson",
"comment": "Test comment",
"time_range": "6th century",
"person_type": 2
}
U/PATCH @payload application/json
@returns application/json
Update one or more of the fields 'comment', 'name', 'time_range',
'person_type', or 'name'.
Exemplary payload for `PATCH /person/12345`:
{
"comment": "updated comment...",
"name": "updated name"
}
D/DELETE @returns application/json
Delete a person if there are no conflicts. Otherwise, fail. Returns the ID
of the deleted tuple.
'''
if flask.request.method == 'PUT':
return put_person_data(c)
else:
if c.one('select count(*) from person where id = %s;', (person_id,)) == 0:
return flask.abort(404, F'Person {person_id} does not exist.')
if flask.request.method == 'GET':
return get_person_data(c, person_id)
if flask.request.method == 'DELETE':
return delete_person_data(c, person_id)
if flask.request.method == 'PATCH':
return update_person_data(c, person_id)
flask.abort(405)
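# Usage sketch (hypothetical host; the full URL prefix depends on how the
# blueprint is mounted):
#   curl -X PUT -H 'Content-Type: application/json' \
#        -d '{"name": "Testperson", "person_type": 2}' \
#        https://localhost/rest/person/0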
def get_person_data(c, person_id):
data = c.one('select * from person where id = %(person_id)s;', **locals())
return flask.jsonify(data._asdict())
def update_person_data(c, person_id):
payload = flask.request.json
if type(payload) is not dict or len(payload) == 0:
return flask.abort(415, 'Payload must be non-empty JSON.')
old_value = c.one('select * from person where id = %s;', (person_id,))
allowed_kws = ('comment', 'time_range', 'person_type', 'name')
if type(payload) is not dict \
or len(payload) == 0 \
or all(map(lambda x: x not in payload, allowed_kws)) \
or any(map(lambda x: x not in allowed_kws, payload.keys())):
return flask.abort(400, 'Payload must be a JSON object with one or more of these fields: '
+ ', '.join(map(lambda x: F"'{x}'", allowed_kws)))
if 'name' in payload and payload['name'] == '':
return 'Person name must not be empty', 400
kws = list(filter(lambda x: x in payload, allowed_kws))
update_str = ', '.join(map(lambda x: F'{x} = %({x})s', kws))
query_str = 'UPDATE person SET ' + update_str + ' WHERE id = %(person_id)s;'
query = c.mogrify(query_str, dict(person_id=person_id, **payload))
c.execute(query)
add_user_action(c, None, 'UPDATE',
F'Update person {person_id}: {c.mogrify(update_str, payload).decode("utf-8")}',
old_value._asdict())
return '', 205
def delete_person_data(c, person_id):
query = c.mogrify('delete from person where id = %s returning *;', (person_id,))
old_value = c.one(query)._asdict()
add_user_action(c, None, 'DELETE', F'Delete person {old_value["name"]} ({person_id}).', old_value)
return flask.jsonify(dict(deleted=dict(person=person_id))), 205
def put_person_data(c):
payload = flask.request.json
if not payload or 'name' not in payload:
flask.abort(400, 'Payload must be a JSON object with at least the `name` field.')
comment = payload.get('comment', None)
time_range = payload.get('time_range', '')
name = payload.get('name', None)
person_type = payload.get('person_type', None)
person_id = c.one('insert into person (name, comment, time_range, person_type) values (%(name)s, %(comment)s, %(time_range)s, %(person_type)s) returning id;', **locals())
add_user_action(c, None, 'CREATE', F'Create person {name} with ID {person_id}.', None)
return flask.jsonify(dict(person_id=person_id)), 201
| 32.34375 | 174 | 0.6257 | 700 | 5,175 | 4.488571 | 0.225714 | 0.0662 | 0.031509 | 0.037874 | 0.311903 | 0.237747 | 0.140993 | 0.112667 | 0.112667 | 0.087842 | 0 | 0.011553 | 0.247343 | 5,175 | 159 | 175 | 32.54717 | 0.795122 | 0.277295 | 0 | 0.044776 | 0 | 0.014925 | 0.256231 | 0.013162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074627 | false | 0 | 0.104478 | 0 | 0.358209 | 0.029851 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eab43307e356a8c760f1d866d379438e007aad5e | 715 | py | Python | tools/hex.py | AlexanderHD27/toolkit | f927c616fadc2c5be57dccc326243b6aa1bffb14 | [
"MIT"
] | 2 | 2020-12-15T20:57:51.000Z | 2021-03-14T15:29:20.000Z | tools/hex.py | AlexanderHD27/toolkit | f927c616fadc2c5be57dccc326243b6aa1bffb14 | [
"MIT"
] | null | null | null | tools/hex.py | AlexanderHD27/toolkit | f927c616fadc2c5be57dccc326243b6aa1bffb14 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import struct
import sys
if len(sys.argv) < 2:
sys.stderr.write("Usage: <hex>\n")
sys.stderr.flush()
exit(1)
CHUNKSIZE = 4
hex_data = []
hex_string = ""
for i in sys.argv[1:]:
hex_string += i.replace("0x", "")
while len(hex_string) >= CHUNKSIZE:
hex_data.append(hex_string[:CHUNKSIZE])
hex_string = hex_string[CHUNKSIZE:]
if hex_string != "":
hex_data.append("0"*(CHUNKSIZE-len(hex_string)) + hex_string)
try:
for i,j in enumerate(hex_data):
hex_data[i] = int(j, 16)
except ValueError:
sys.stderr.write("\"{}\" is not hex!\n".format(i))
sys.stderr.flush()
exit(1)
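# Each 4-hex-digit chunk is packed as one little-endian unsigned short,
# e.g. struct.pack("<H", 0xDEAD) == b'\xad\xde'.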
for i in hex_data:
sys.stdout.buffer.write(struct.pack("<H", i))
| 21.029412 | 65 | 0.632168 | 112 | 715 | 3.901786 | 0.392857 | 0.185355 | 0.12357 | 0.08238 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017241 | 0.188811 | 715 | 33 | 66 | 21.666667 | 0.736207 | 0.023776 | 0 | 0.16 | 0 | 0 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |