File: douyu/douyu/spiders/spider.py | Repo: smujm/ScrapyProjects | License: MIT | Language: Python

```python
import scrapy
import json

from douyu.items import DouyuItem


class SpiderSpider(scrapy.Spider):
    name = 'douyu'
    # allowed_domains entries should be bare domains, not full URLs
    allowed_domains = ['douyu.com']
    base_url = 'http://capi.douyucdn.cn/api/v1/getVerticalRoom?limit=20&offset='
    offset = 0
    start_urls = [base_url + str(offset)]

    def parse(self, response):
        # extract the data
        data_list = json.loads(response.body)['data']
        if len(data_list) == 0:
            return
        for data in data_list:
            item = DouyuItem()
            item['nickname'] = data['nickname'].encode('utf-8')
            item['vertical_src'] = data['vertical_src']
            yield item
        self.offset += 20
        url = self.base_url + str(self.offset)
        # callback: hand the response of this request back to parse() itself
        yield scrapy.Request(url=url, callback=self.parse, dont_filter=True)
```
File: synonym.py | Repo: amber5634/Synonym-Generator-using-Word-Net | License: MIT | Language: Python

```python
import nltk
from nltk.corpus import wordnet


class Keyword:
    def synonym_generator(self):
        synonyms = []
        antonyms = []
        word = input("enter the word : ")
        for syn in wordnet.synsets(word):
            for l in syn.lemmas():
                synonyms.append(l.name())
                if l.antonyms():
                    antonyms.append(l.antonyms()[0].name())
        print(set(synonyms))
        print(set(antonyms))


p1 = Keyword()
p1.synonym_generator()
```
File: scripts/src/__main__.py | Repo: 9999years/dotfiles | License: MIT | Language: Python | Stars: 1

```python
"""Entry point for linking dotfiles.
"""
from __future__ import annotations

import argparse
import os
import subprocess
import sys
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

from . import log
from .link import Linker
from .resolver import Resolver
from .scan import Scanner
from .schema import DotfilesJson, PrettyPath


def main() -> None:
    """Entry point.
    """
    args = Args.parse_args()

    if args.dotfiles is None:
        repo_root = _get_repo_root()
        dotfiles_fh = open(repo_root / "dotfiles.json")
    else:
        repo_root = args.dotfiles.parent.absolute()
        dotfiles_fh = args.dotfiles.open()

    dotfiles = DotfilesJson.load_from_file(dotfiles_fh)
    dotfiles_fh.close()

    link_root = Path.home() if args.link_root is None else args.link_root
    resolver = Resolver(
        repo_root=repo_root, link_root=link_root, relative=not args.absolute
    )
    resolved = resolver.resolve_all(dotfiles)

    if args.scan:
        log.warn("Scanning for dotfiles is an experimental feature.")
        scanner = Scanner(link_root, resolved.ignored, resolved.dotfiles)
        for p in scanner.find_dotfiles():
            # TODO: Fill in scanner processing.
            # Actions:
            # - skip
            # - quit
            # - ignore the path
            # - move it to dotfiles
            #   - if it's a directory, recurse
            #   - if it's a file, cat it / display its stat
            #
            # Should also note if it's a directory or file.
            p_disp = str(PrettyPath.from_path(p).disp)
            if p.is_dir():
                log.info("📁 " + p_disp)
            else:
                log.info(p_disp)
        # TODO: Offer to commit new files...?
    else:
        linker = Linker(verbose=args.verbose)
        linker.link_all(resolved.dotfiles)


@dataclass
class Args:
    """Command-line arguments; see ``_argparser``.
    """

    dotfiles: Optional[Path]
    link_root: Optional[Path]
    absolute: bool
    scan: bool
    verbose: bool

    @classmethod
    def parse_args(cls) -> Args:
        """Parse args from ``sys.argv``.
        """
        args = _argparser().parse_args()
        return cls(
            dotfiles=args.dotfiles,
            link_root=args.link_root,
            absolute=args.absolute,
            scan=args.scan,
            verbose=args.verbose,
        )


def _argparser() -> argparse.ArgumentParser:
    """Command-line argument parser.
    """
    parser = argparse.ArgumentParser(description="links dotfiles")
    parser.add_argument(
        "-d", "--dotfiles", type=Path, help="The dotfiles.json file to load",
    )
    parser.add_argument(
        "-l",
        "--link-root",
        type=Path,
        help="Where to create links from; defaults to your home directory",
    )
    parser.add_argument(
        "-a",
        "--absolute",
        action="store_true",
        help="Create absolute links, rather than relative ones",
    )
    parser.add_argument(
        "-s", "--scan", action="store_true", help="Scan for untracked dotfiles",
    )
    parser.add_argument(
        "-v", "--verbose", action="store_true", help="Make output more verbose",
    )
    return parser


def _get_repo_root() -> Path:
    try:
        proc = subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            capture_output=True,
            text=True,
            check=False,
        )
    except FileNotFoundError:
        log.fatal(
            "Couldn't run `git` to determine repo root; pass --dotfiles explicitly."
        )
        sys.exit(1)
    if proc.returncode != 0:
        log.fatal("Couldn't get repo root from git; pass --dotfiles explicitly.")
        sys.exit(1)
    return Path(proc.stdout.strip()).absolute()


if __name__ == "__main__":
    main()
```
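The `Args` dataclass in the file above simply mirrors the parser's flags. The same pattern in miniature (flag names trimmed here purely for illustration; this is a sketch, not the dotfiles project's actual interface):

```python
import argparse
from dataclasses import dataclass
from pathlib import Path
from typing import Optional


@dataclass
class Args:
    dotfiles: Optional[Path]
    verbose: bool

    @classmethod
    def parse_args(cls, argv=None):
        # build the parser, then copy the parsed namespace into the dataclass
        p = argparse.ArgumentParser()
        p.add_argument('-d', '--dotfiles', type=Path)
        p.add_argument('-v', '--verbose', action='store_true')
        ns = p.parse_args(argv)
        return cls(dotfiles=ns.dotfiles, verbose=ns.verbose)


args = Args.parse_args(['-d', 'dotfiles.json', '-v'])
print(args)  # prints the populated Args instance
```

Passing an explicit `argv` list (instead of letting argparse read `sys.argv`) is what makes this pattern easy to unit-test.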
File: verified.py | Repo: tophersmith/veracode-verified-checker | License: MIT | Language: Python

```python
import sys
import json
import requests
from veracode_api_signing.plugin_requests import RequestsAuthPluginVeracodeHMAC
from pprint import pprint
from datetime import datetime

from app_definition import AppDefinition
from verified_check import VerifiedStandard, VerifiedTeam, VerifiedContinuous
from verified_report import VerifiedReport, ConsoleReport

url_base = 'https://api.veracode.com/appsec'
min_severity = 3  # findings api only returns medium +


def main():
    if len(sys.argv) != 4:
        print('Usage: [API Key] [API Secret Key] [Check Type s=Standard t=Team c=Continuous a=All]')
        exit(1)
    auth = RequestsAuthPluginVeracodeHMAC(api_key_id=sys.argv[1],
                                          api_key_secret=sys.argv[2])
    '''
    Process:
        Make Veracode Verified Checks class
        Make reporter
        Get all policies
        Get all apps
        For each app
            Get findings for the app
            Check the app + policies based on the Verified level
        Report any failures from the Verified Checks
    '''
    try:
        checks = make_checks(sys.argv[3])
        report = ConsoleReport()
        policies_dict = get_policies_dict(auth)
        apps_list = get_applications_list(auth)
        apps_size = len(apps_list)
        print('%d apps found' % (apps_size))
        count = 1
        for app in apps_list:
            print('Checking %s (%d/%d)' % (app.name, count, apps_size))
            add_findings_to_app(auth, app)
            check(app, policies_dict, report, checks)
            count = count + 1
        report.output()
    except Exception as e:
        print('Error while scanning or uploading. ' + str(e))
        raise e


def get_policies_dict(auth):
    # Get all policies available to the user as a dict of 'policy_name': 'policy_json'
    done = False
    policies = {}
    page_count = 0
    while not done:
        r = requests.get(url_base + '/v1/policies', auth=auth, params={'size': 500, 'page': page_count})
        if not r.ok:
            print(r.text)
            raise Exception('ERROR: Received status code %s while trying to get applications' % r.status_code)
        # Check pagination
        total_pages = r.json()['page']['total_pages']
        page_count = page_count + 1
        if page_count == total_pages:
            done = True
        policies.update({policy['name']: policy for policy in r.json()['_embedded']['policy_versions']})
    return policies


def make_checks(check_type):
    # Create the Verified Check class for the given check_type
    cases = {'s': [VerifiedStandard],
             't': [VerifiedTeam],
             'c': [VerifiedContinuous],
             'a': [VerifiedStandard, VerifiedTeam, VerifiedContinuous]}
    if check_type in cases:
        return cases[check_type]
    else:
        raise Exception('Unknown case. Must be one of %s' % (', '.join(cases.keys())))


def get_applications_list(auth):
    # Get all applications
    done = False
    apps_list = []
    page_count = 0
    while not done:
        r = requests.get(url_base + '/v1/applications', auth=auth, params={'size': 500, 'page': page_count})
        if not r.ok:
            print(r.text)
            raise Exception('ERROR: Received status code %s while trying to get applications' % r.status_code)
        # Check pagination
        total_pages = r.json()['page']['total_pages']
        page_count = page_count + 1
        if page_count == total_pages:
            done = True
        apps_list.extend([AppDefinition(application) for application in r.json()['_embedded']['applications']])
    return apps_list


def add_findings_to_app(auth, app):
    # Add the findings json to the app
    r = requests.get(url_base + ('/v2/applications/%s/findings' % app.guid), auth=auth, params={'severity_gte': min_severity})
    if not r.ok:
        print(r.text)
        raise Exception('ERROR: Received status code %s while trying to get findings' % r.status_code)
    app.add_findings(r.json())


def check(app, policies_dict, report, checks):
    # Using the Verified Check, check the app + policies
    for check_func in checks:
        check = check_func(app, policies_dict)
        check.do_check(report)


if __name__ == '__main__':
    sys.exit(main())
```
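Both `get_policies_dict` and `get_applications_list` above follow the same page-count pagination loop. The pattern, isolated with a stubbed fetcher so it runs offline (`paginate`, `fetch_page`, and the fake page bodies are illustrative names, not part of the Veracode API):

```python
def paginate(fetch_page):
    """Collect items from a paged API whose every response reports 'total_pages'."""
    items, page, done = [], 0, False
    while not done:
        body = fetch_page(page)
        items.extend(body['items'])
        page += 1
        if page == body['total_pages']:
            done = True
    return items


# stubbed 3-page response for demonstration
pages = [{'items': [0, 1], 'total_pages': 3},
         {'items': [2, 3], 'total_pages': 3},
         {'items': [4], 'total_pages': 3}]
print(paginate(lambda p: pages[p]))  # -> [0, 1, 2, 3, 4]
```

Incrementing the counter before comparing against `total_pages` is what makes the final page get consumed exactly once.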
File: try-except.py | Repo: arhue/python-learning | License: MIT | Language: Python

```python
x = input("Enter a no. I will convert to integer")
z = 1
try:
    y = int(float(x))
    z = "float"
except:
    z = "wrong"
if z == "wrong":
    print("fix your input")
else:
    print("int of your input is:", y)
```
File: cuhk03/init_env.py | Repo: cwpeng-cn/TorchReID | License: MIT | Language: Python

```python
import zipfile
import os


def download_and_prepare():
    reid_path = "/content/drive/My Drive/Colab/datasets/reid.zip"
    file_zip = zipfile.ZipFile(reid_path, 'r')
    for file in file_zip.namelist():
        file_zip.extract(file, r'.')
    with open("/content/drive/My Drive/Colab/ReID works/CVPR fintuning/resnet_ibn_b.py", "rb") as f, open(
            './resnet_ibn_b.py',
            'wb') as fw:
        fw.write(f.read())
    with open("/content/drive/My Drive/Colab/ReID works/CVPR fintuning/net_149.pth", "rb") as f, open(
            './net_149.pth',
            'wb') as fw:
        fw.write(f.read())


if not os.path.exists('./resnet_ibn_b.py'):
    download_and_prepare()
```
File: myutils/dictionaries.py | Repo: joeledwardson/betfair-browser | License: MIT | Language: Python | Stars: 3

```python
from typing import Iterable, Dict
import copy
from collections.abc import Mapping

from .exceptions import DictException


def validate_config(cfg: Dict, cfg_spec: Dict):
    _cfg = copy.deepcopy(cfg)
    for k, spec in cfg_spec.items():
        exist = k in _cfg
        val = _cfg.pop(k, None)
        if not spec.get('optional'):
            if not exist:
                raise DictException(f'expected key "{k}" in configuration dict as per config spec: "{cfg_spec}"')
        if exist:
            # if 'type' in spec:
            if not isinstance(val, spec['type']):
                raise DictException(f'expected key "{k}" value to be type "{spec["type"]}", got "{type(val)}"')
    if _cfg:
        raise DictException(f'configuration dictionary has unexpected values: "{_cfg}"')


def is_dict_subset(x, y):
    """recursively determine if key value pairs in x are a subset of y"""
    for k, v in x.items():
        if k not in y:
            return False
        elif type(v) is dict:
            if not isinstance(y[k], Iterable):
                return False
            elif not is_dict_subset(v, y[k]):
                return False
        elif v != y[k]:
            return False
    return True


def dict_update(updates: Mapping, base_dict: Mapping):
    """recursively update key value pairs of base_dict with updates"""
    for k, v in updates.items():
        if type(v) is not dict:
            # value is not dict
            base_dict[k] = v
            continue
        # value is dict
        if k not in base_dict:
            # value is dict & key not found in base_dict
            base_dict[k] = v
            continue
        # value is dict & key found in base_dict
        if isinstance(base_dict[k], Iterable):
            # value is dict & key found in base_dict & value is iterable
            dict_update(v, base_dict[k])
            continue
        # value is dict & key found in base_dict & value is not iterable
        base_dict[k] = v


def dict_sort(d: dict, key=lambda item: item[1]) -> Dict:
    """sort a dictionary's items"""
    return {k: v for k, v in sorted(d.items(), key=key)}
```
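A quick sanity check of the recursive-subset behaviour. `is_dict_subset` is restated standalone here so the snippet runs without the package's `DictException` import; the logic matches the function above:

```python
from typing import Iterable


def is_dict_subset(x, y):
    """True if every key/value pair in x appears (recursively) in y."""
    for k, v in x.items():
        if k not in y:
            return False
        elif type(v) is dict:
            if not isinstance(y[k], Iterable):
                return False
            elif not is_dict_subset(v, y[k]):
                return False
        elif v != y[k]:
            return False
    return True


base = {'db': {'host': 'localhost', 'port': 5432}, 'debug': False}
print(is_dict_subset({'db': {'port': 5432}}, base))  # True: nested pair matches
print(is_dict_subset({'db': {'port': 9999}}, base))  # False: nested value differs
```

Note that extra keys in `y` (here `'host'` and `'debug'`) do not affect the result; only the keys present in `x` are checked.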
File: metallic/metalearners/mbml/base.py | Repo: Renovamen/metallic | License: MIT | Language: Python | Stars: 5

```python
import os
from abc import ABC, abstractmethod
from typing import Callable, Optional, Tuple

import torch
from torch import nn, optim

from ..base import MetaLearner


class MBML(MetaLearner, ABC):
    """
    A base class for metric-based meta-learning algorithms.

    Parameters
    ----------
    model : torch.nn.Module
        Model to be wrapped

    optim : torch.optim.Optimizer
        Optimizer

    root : str
        Root directory to save checkpoints

    save_basename : str, optional
        Base name of the saved checkpoints

    lr_scheduler : callable, optional
        Learning rate scheduler

    loss_function : callable, optional
        Loss function

    device : optional
        Device on which the model is defined. If `None`, device will be
        detected automatically.
    """

    def __init__(
        self,
        model: nn.Module,
        optim: optim.Optimizer,
        root: Optional[str] = None,
        save_basename: Optional[str] = None,
        lr_scheduler: Optional[Callable] = None,
        loss_function: Optional[Callable] = None,
        device: Optional = None
    ) -> None:
        super(MBML, self).__init__(
            model=model,
            root=root,
            save_basename=save_basename,
            lr_scheduler=lr_scheduler,
            loss_function=loss_function,
            device=device
        )
        self.optim = optim

    @classmethod
    def load(cls, model_path: str, **kwargs):
        """Load a trained model."""
        state = torch.load(model_path)

        # load model and optimizer
        kwargs['model'] = state['model']
        kwargs['optim'] = state['optim']

        # model name and save path
        if 'root' not in kwargs:
            kwargs['root'] = os.path.dirname(model_path)
        if 'save_basename' not in kwargs:
            kwargs['save_basename'] = os.path.basename(model_path)

        return cls(**kwargs)

    def save(self, prefix: Optional[str] = None) -> str:
        """Save the trained model."""
        if self.root is None or self.save_basename is None:
            raise RuntimeError('The root directory or save basename of the '
                               'checkpoints is not defined.')

        state = {
            'model': self.model,
            'optim': self.optim
        }

        name = self.save_basename
        if prefix is not None:
            name = prefix + name + '.pth.tar'
        path = os.path.join(self.root, name)
        torch.save(state, path)
        return path

    def step(self, batch: dict, meta_train: bool = True) -> Tuple[float]:
        if meta_train:
            self.model.train()
        else:
            self.model.eval()

        task_batch, n_tasks = self.get_tasks(batch)

        losses, accuracies = 0., 0.

        self.optim.zero_grad()
        for task_data in task_batch:
            loss, accuracy = self.single_task(task_data)
            losses += loss.detach().item()
            accuracies += accuracy.item()
            if meta_train:
                (loss / n_tasks).backward()
        self.optim.step()

        # average the losses and accuracies
        losses /= n_tasks
        accuracies /= n_tasks
        return losses, accuracies

    @abstractmethod
    def single_task(
        self, task: Tuple[torch.Tensor], meta_train: bool = True
    ) -> Tuple[float]:
        pass
```
File: python/sorting/group_0s_1s.py | Repo: amitsaha/playground | License: Unlicense | Language: Python | Stars: 4

```python
'''
Groups the 0s and 1s together from a random array

Reference: http://www.geeksforgeeks.org/segregate-0s-and-1s-in-an-array-by-traversing-array-once/
'''
from __future__ import print_function


def rearrange(arr):
    p1 = 0
    p2 = len(arr) - 1
    while p1 < p2:
        if arr[p1] == 0:
            p1 += 1
        if arr[p2] == 1:
            p2 -= 1
        if p1 < p2:
            arr[p1], arr[p2] = arr[p2], arr[p1]
    return arr


print(rearrange([0, 0, 1, 1]))
print(rearrange([1, 0, 0, 1, 1]))
print(rearrange([1, 0, 0, 0, 1, 0, 0]))
print(rearrange([0, 1, 0, 1, 0, 1, 0, 1]))
```
File: GABClient/GAB.Client/wwwroot/ml/pipeline1/mu.py | Repo: intelequia/GAB2019ScienceLab.Client | License: MIT | Language: Python

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
from scipy.signal import savgol_filter
import sys


def Interpolate(time, mask, y):
    yy = np.array(y)
    t_ = np.delete(time, mask)
    y_ = np.delete(y, mask, axis=0)
    if len(yy.shape) == 1:
        yy[mask] = np.interp(time[mask], t_, y_)
    elif len(yy.shape) == 2:
        for n in range(yy.shape[1]):
            yy[mask, n] = np.interp(time[mask], t_, y_[:, n])
    else:
        raise Exception("Array ``y`` must be either 1- or 2-d.")
    return yy


def Chunks(l, n, all=False):
    if all:
        jarr = range(0, n - 1)
    else:
        jarr = [0]
    for j in jarr:
        for i in range(j, len(l), n):
            if i + 2 * n <= len(l):
                yield l[i:i + n]
            else:
                if not all:
                    yield l[i:]
                break


def Smooth(x, window_len=100, window='hanning'):
    if window_len == 0:
        return np.zeros_like(x)
    s = np.r_[2 * x[0] - x[window_len - 1::-1], x, 2 * x[-1] - x[-1:-window_len:-1]]
    if window == 'flat':
        w = np.ones(window_len, 'd')
    else:
        w = eval('np.' + window + '(window_len)')
    y = np.convolve(w / w.sum(), s, mode='same')
    return y[window_len:-window_len + 1]


def Scatter(y, win=13, remove_outliers=False):
    if remove_outliers:
        if len(y) >= 50:
            ys = y - Smooth(y, 50)
        else:
            ys = y
        M = np.nanmedian(ys)
        MAD = 1.4826 * np.nanmedian(np.abs(ys - M))
        out = []
        for i, _ in enumerate(y):
            if (ys[i] > M + 5 * MAD) or (ys[i] < M - 5 * MAD):
                out.append(i)
        out = np.array(out, dtype=int)
        y = np.delete(y, out)
    if len(y):
        return 1.e6 * np.nanmedian([np.std(yi) / np.sqrt(win) for yi in Chunks(y, win, all=True)])
    else:
        return np.nan


def SavGol(y, win=49):
    if len(y) >= win:
        return y - savgol_filter(y, win, 2) + np.nanmedian(y)
    else:
        return y


def _float(s):
    try:
        res = float(s)
    except (TypeError, ValueError):
        res = np.nan
    return res


def Downbin(x, newsize, axis=0, operation='mean'):
    assert newsize < x.shape[axis], "The new size of the array must be smaller than the current size."
    oldsize = x.shape[axis]
    newshape = list(x.shape)
    newshape[axis] = newsize
    newshape.insert(axis + 1, oldsize // newsize)
    trim = oldsize % newsize
    if trim:
        xtrim = x[:-trim]
    else:
        xtrim = x
    if operation == 'mean':
        xbin = np.nanmean(xtrim.reshape(newshape), axis=axis + 1)
    elif operation == 'sum':
        xbin = np.nansum(xtrim.reshape(newshape), axis=axis + 1)
    elif operation == 'quadsum':
        xbin = np.sqrt(np.nansum(xtrim.reshape(newshape) ** 2, axis=axis + 1))
    elif operation == 'median':
        xbin = np.nanmedian(xtrim.reshape(newshape), axis=axis + 1)
    else:
        raise ValueError("`operation` must be either `mean`, `sum`, `quadsum`, or `median`.")
    return xbin
```
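The core of `Downbin` is the reshape trick: fold the binned axis into a new trailing axis and reduce over it. A standalone sketch of the 1-d case (`downbin_1d` is an illustrative name, not part of the file above):

```python
import numpy as np


def downbin_1d(x, newsize, operation='mean'):
    """Downbin a 1-d array to `newsize` samples (trailing remainder trimmed)."""
    oldsize = len(x)
    trim = oldsize % newsize          # samples that don't fill a whole bin
    xtrim = x[:-trim] if trim else x
    folded = xtrim.reshape(newsize, oldsize // newsize)
    reducer = {'mean': np.nanmean, 'sum': np.nansum, 'median': np.nanmedian}[operation]
    return reducer(folded, axis=1)    # reduce each bin along the folded axis


x = np.arange(10, dtype=float)        # [0, 1, ..., 9]; 10 % 3 = 1 sample trimmed
print(downbin_1d(x, 3))               # -> [1. 4. 7.], the means of [0,1,2], [3,4,5], [6,7,8]
```

Using the NaN-aware reductions (`np.nanmean` etc.) means bins that contain missing samples still yield a value from the remaining ones.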
File: setup.py | Repo: farouk-muha/pav_bsc | License: MIT | Language: Python

```python
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages

with open('requirements.txt') as f:
    install_requires = f.read().strip().split('\n')

# get version from __version__ variable in pav_bsc/__init__.py
from pav_bsc import __version__ as version

setup(
    name='pav_bsc',
    version=version,
    description='Partner ERPNext - Add Value On Balanced Scorecard',
    author='Farouk Muharram',
    author_email='farouk1dev@gmail.com',
    packages=find_packages(),
    zip_safe=False,
    include_package_data=True,
    install_requires=install_requires
)
```
d8adc735050a0fd5a61d2b42aa76a945a006c221 | 2,957 | py | Python | components/resnet-cmle/resnet/deploy.py | cbreuel/pipelines | 22a85b4af642b896b57293c0d15d0f20c995be99 | ["Apache-2.0"] | 9 | 2019-03-28T02:20:45.000Z | 2021-12-01T22:43:36.000Z
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import argparse
import os
from time import gmtime, strftime
import time
import subprocess
import logging
logging.getLogger().setLevel(logging.INFO)
def parse_arguments():
"""Parse command line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument('--model',
type = str,
default = 'flowers_model',
help = 'What to name your ml-engine model')
parser.add_argument('--version',
type = str,
default = 'resnet',
help = 'What to name the version of the model')
parser.add_argument('--model_dir',
type = str,
required=True,
help = 'The model directory generated by the train component.')
parser.add_argument('--project_id',
type = str,
required = True,
default = '',
help = 'Pass in your project id.')
parser.add_argument('--region',
type = str,
default = 'us-central1',
help = 'Region to use.')
parser.add_argument('--TFVERSION',
type = str,
default = '1.8',
help = 'Version of TensorFlow to use.')
args = parser.parse_args()
return args
if __name__ == "__main__":
args = parse_arguments()
model_export_dir = os.path.join(args.model_dir, 'export')
logging.info('Writing latest model directory name: ' + model_export_dir)
subprocess.call('gsutil ls ' + model_export_dir + ' | tail -1 > model.txt', shell=True)
with open("./model.txt", "r") as model_path_file:
model_location = model_path_file.read()[:-1]
logging.info('Deploying ' + args.model + ' ' + args.version + ' from ' + model_location + ' ... this will take a few minutes')
subprocess.call('gcloud ml-engine versions delete ' + args.version + ' --model=' + args.model + ' --quiet', shell=True)
subprocess.call('gcloud ml-engine models create ' + args.model + ' --regions ' + args.region, shell=True)
subprocess.check_call('gcloud ml-engine versions create ' + args.version + ' --model ' + args.model +
        ' --origin ' + str(model_location) + ' --runtime-version=' + args.TFVERSION, shell=True)
d8adf264910375ea507ebd88b7147dd9829ca904 | 3,506 | py | Python | tests/test_quickbooks_payroll.py | fulfilio/trytond-quickbooks-payroll | 18148e6f366025268b4335a89f07d2506ad5f446 | ["BSD-3-Clause"] | null | null | null
# -*- coding: utf-8 -*-
"""
tests/test_quickbooks_payroll.py
"""
import csv
import tempfile
class TestQuickBooksPayroll:
def test_views(self, install_module):
"Test all tryton views"
from trytond.tests.test_tryton import test_view
test_view('quickbooks_payroll')
def test_depends(self, install_module):
"Test missing depends on fields"
from trytond.tests.test_tryton import test_depends
test_depends()
def test_import_payroll_item(self, test_dataset, transaction):
"Test import payroll item wizard"
Date = self.POOL.get('ir.date')
Account = self.POOL.get('account.account')
Move = self.POOL.get('account.move')
Employee = self.POOL.get('company.employee')
QuickBooksPayroll = self.POOL.get('quickbooks.payroll_account')
ImportPayrollItem = self.POOL.get(
'quickbooks.wizard_import_payroll_item', type='wizard'
)
# Map quickbooks payroll item to tryton
main_expense, = Account.search([('name', '=', 'Main Expense')])
main_expense.party_required = True
main_expense.save()
main_tax, = Account.search([('name', '=', 'Main Tax')])
main_tax.party_required = True
main_tax.save()
main_cash, = Account.search([('name', '=', 'Main Cash')])
QuickBooksPayroll.create([{
'account': main_expense.id,
'payroll_item': 'Salary Expense',
}, {
'account': main_tax.id,
'payroll_item': 'Federal Income Taxes Payable',
}, {
'account': main_tax.id,
'payroll_item': 'State Income Taxes Payable',
}, {
'account': main_tax.id,
'payroll_item': 'FICA Taxes Payable',
}])
# Map employee to quickbooks source name
employee, = Employee.search([])
employee.quickbooks_source_name = 'Pandey, Prakash'
employee.save()
credit_account, = Account.search([], limit=1)
import_payroll_item = ImportPayrollItem(
ImportPayrollItem.create()[0]
)
import_payroll_item.start.credit_account = main_cash
with tempfile.NamedTemporaryFile(delete=False) as csv_file:
csv_writer = csv.writer(csv_file, quoting=csv.QUOTE_ALL)
csv_writer.writerow([
'Date', 'Num', 'Type', 'Source Name', 'Payroll Item',
'Wage Base', 'Amount',
])
csv_writer.writerow([
Date.today(), '309333', 'Cash', "Pandey, Prakash",
'Salary Expense', '', '-100000',
])
csv_writer.writerow([
'', '', '', "Pandey, Prakash", 'Federal Income Taxes Payable',
'', 15000,
])
csv_writer.writerow([
'', '', '', "Pandey, Prakash", 'State Income Taxes Payable',
'', 5000,
])
csv_writer.writerow([
'', '', '', "Pandey, Prakash", 'FICA Taxes Payable', '', 7650,
])
csv_writer.writerow([
'', '', '', '', '', '', 72350
])
csv_file.flush()
import_payroll_item.start.csv_file = \
buffer(open(csv_file.name).read())
_, res = import_payroll_item.do_import_(action=None)
move, = Move.search([])
assert move.id in res['res_id']
assert len(move.lines) == 5
Move.post([move])
d8aed52f5f4d4d6a14a346f71946749b037d0d84 | 4,284 | py | Python | general/cc12m.py | robvanvolt/DALLE-datasets | 527e54aeac879bc4da669fa5c5b64c9354890728 | ["MIT"] | 60 | 2021-05-09T02:51:10.000Z | 2022-03-27T06:36:04.000Z
import pandas as pd
import os
import requests
from pathlib import Path
from PIL import Image
from tqdm import tqdm
from multiprocessing import Pool
import gc
import glob
cc_url = 'https://storage.googleapis.com/conceptual_12m/cc12m.tsv'
root_folder = './'
total = 12423374
maxwidth = 256
maxheight = 256
thread_count = 16
batch = 10000
def load_caption(x):
name, caption, text_folder = x
fid = str(int(int(name) / 10000 ))
subdir = "0"*(5-len(fid)) + fid
os.makedirs(Path(text_folder+"/"+subdir), exist_ok=True)
fp = text_folder + '/' + subdir + "/" + "0"*(9-len(str(name))) + str(name) + '.txt'
with open(fp, 'w') as f:
f.write(caption)
def download_file(url):
response = requests.get(url, stream=True)
    total_size_in_bytes = int(response.headers.get('content-length', 0))
block_size = 1024
progress_bar = tqdm(total=total_size_in_bytes, unit='iB', unit_scale=True)
with open(Path(root_folder + '/cc12m.tsv'), 'wb') as file:
for data in response.iter_content(block_size):
progress_bar.update(len(data))
file.write(data)
progress_bar.close()
if total_size_in_bytes != 0 and progress_bar.n != total_size_in_bytes:
print("Error, something went wrong...")
def load_image(x):
name, url, image_folder, skip_folder = x
fid = str(int(int(name) / 10000 ))
subdir = "0"*(5-len(fid)) + fid
os.makedirs(Path(image_folder+"/"+subdir), exist_ok=True)
id = subdir + "/" + "0"*(9-len(str(name))) + str(name)
try:
with Image.open(requests.get(url,
headers={'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0'},
stream=True, timeout=3).raw) as foo:
a = max(maxwidth/foo.size[0], maxheight/foo.size[1])
foo = foo.resize((int(foo.size[0] * a), int(foo.size[1] * a)), Image.ANTIALIAS)
with open(Path(image_folder + "/" + id + '.jpg'), 'wb') as file:
foo.save(file, optimize=True, quality=85)
except Exception:
os.makedirs(Path(skip_folder+"/"+subdir), exist_ok=True)
        open(Path(skip_folder + '/' + id), 'a').close()
pass
if __name__ == '__main__':
if not os.path.isfile(Path(root_folder + '/cc12m.tsv')):
print('Missing cc12m url-caption-dataset. Downloading...')
download_file(cc_url)
else:
print('cc12m.tsv already downloaded. Proceeding with downloading images!')
dfc = pd.read_csv(root_folder + "cc12m.tsv", sep='\t', names=["url", "caption"])
image_folder = root_folder + '/images'
text_folder = root_folder + '/texts'
skip_folder = root_folder + '/skip'
paths = [image_folder, text_folder, skip_folder]
for path in paths:
os.makedirs(path, exist_ok=True)
def list_ids(path):
return [int(os.path.splitext(os.path.basename(a))[0]) for a in glob.glob(path+"/**/*")]
skiplist = list_ids(text_folder)
remaining = total - len(skiplist)
percent_remaining = 100 * (total - remaining) / total
df = dfc.loc[~dfc.index.isin(skiplist)]
print('Remaining {} captions to be written - {} ({:.5f} %) already written.'.format(remaining, len(skiplist), percent_remaining))
if len(df) > 0:
captions = zip(df.index, df["caption"], [text_folder]*len(df))
pool = Pool(thread_count)
for _ in tqdm(pool.imap_unordered(load_caption, captions), total=len(df)):
pass
pool.close()
print('Done with captions!')
skiplist = list_ids(skip_folder) + list_ids(image_folder)
remaining = total - len(skiplist)
percent_remaining = 100 * (total - remaining) / total
df = dfc.loc[~dfc.index.isin(skiplist)]
print('Remaining {} images to be downloaded - {} ({:.5f} %) already downloaded.'.format(remaining, len(skiplist), percent_remaining))
images = list(zip(df.index, df["url"], [image_folder]*len(df), [skip_folder]*len(df)))
for i in tqdm(range(0, len(df), batch)):
pool = Pool(thread_count)
for _ in tqdm(pool.imap_unordered(load_image, images[i:i+batch]), total=batch):
pass
pool.terminate()
pool.join()
del pool
gc.collect()
print('Finished downloading available images from conceptual images!')
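Both `load_caption` and `load_image` above shard outputs into 10,000-per-folder subdirectories: the folder is the id divided by 10000 zero-padded to 5 digits, the filename the id zero-padded to 9. The path logic, isolated as a pure function (the helper name is hypothetical; the script inlines this computation):

```python
# Pure-function sketch of the id-sharding used by load_caption/load_image.
def shard_path(name):
    fid = str(int(int(name) / 10000))
    subdir = "0" * (5 - len(fid)) + fid          # 5-digit folder, e.g. 00001
    return subdir + "/" + "0" * (9 - len(str(name))) + str(name)  # 9-digit file id

print(shard_path(12345))  # → 00001/000012345
```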
d8b00b965eee02af4b8f3676c77e8a154d98eecb | 5,707 | py | Python | src/itint/widget.py | ColorsWind/iTint | 48d18ed42d9ca44caa2c71104cf4f489fe54d98d | ["MIT"] | 1 | 2022-01-15T07:01:41.000Z | 2022-01-15T07:01:41.000Z
import numpy as np
from PySide2.QtCore import Qt, QUrl, QSize, QEventLoop
from PySide2.QtGui import QPixmap, QDropEvent, QDragEnterEvent, QMouseEvent, QResizeEvent, QHideEvent
from PySide2.QtWidgets import QApplication, QWidget, QHBoxLayout, QFileDialog, QWidgetItem
from itint.octree import Octree
from itint.ui_widget import Ui_MainWidget
from itint.widget_color_display import qimage_to_pil, ColorDisplayWidget
from itint.widget_screen_color_picker import ScreenColorPicker
from itint.widget_screenshot import WidgetScreenShot
class MainWidget(QWidget):
def __init__(self, parent=None):
super(MainWidget, self).__init__(parent)
self.setAcceptDrops(True)
self.internal_loader = Ui_MainWidget()
self.internal_loader.setupUi(self)
self.screen = WidgetScreenShot()
self.picker = ScreenColorPicker()
self.layout = QHBoxLayout(self.internal_loader.colorDisplayContent)
self.layout.setAlignment(Qt.AlignLeft)
self.internal_loader.btnFromScan.clicked.connect(self.btn_from_screen)
self.default_text = self.internal_loader.labelImagePreview.text()
self.internal_loader.labelImagePreview.mousePressEvent = self.btn_from_file
self.internal_loader.btnColorPickup.clicked.connect(self.btn_from_screen_color_picker)
self.internal_loader.btnFromClipboard.clicked.connect(self.btn_from_clipboard)
self.pixmap = QPixmap()
self.hide_callback = None
def dropEvent(self, event: QDropEvent) -> None:
url: QUrl = event.mimeData().urls()[0]
self.pixmap.load(url.toLocalFile())
self.update_image()
if not self.pixmap.isNull():
self.update_color_display(self.pixmap)
def dragEnterEvent(self, event: QDragEnterEvent) -> None:
if event.mimeData().hasUrls() and event.mimeData().urls()[0].isLocalFile():
event.acceptProposedAction()
def check_and_clear_color_display(self):
if self.internal_loader.cBtnAutoClear.isChecked():
for i in range(self.layout.count()):
color_display_widget: QWidgetItem = self.layout.itemAt(i)
color_display_widget.widget().deleteLater()
def update_color_display(self, image: QPixmap):
if image.isNull():
return
data = np.asarray(qimage_to_pil(image)).reshape((-1, 3))
tree = Octree()
tree.build(data, 8)
colors = tree.get_color(tree.root)
self.check_and_clear_color_display()
for r, g, b in colors:
color_display_widget = ColorDisplayWidget(r, g, b, self)
self.layout.addWidget(color_display_widget)
def resizeEvent(self, event: QResizeEvent):
self.update_image()
def hideEvent(self, event: QHideEvent):
if self.hide_callback is not None:
self.hide_callback()
self.hide_callback = None
def update_image(self):
if self.pixmap.isNull():
self.internal_loader.labelImagePreview.setText(self.default_text)
else:
pixel_ratio = QApplication.primaryScreen().devicePixelRatio()
pixmap_aspect = self.pixmap.width() / self.pixmap.height()
label_width = self.internal_loader.labelImagePreview.width() * pixel_ratio
label_height = self.internal_loader.labelImagePreview.height() * pixel_ratio
label_aspect = label_width / label_height
if pixmap_aspect > label_aspect:
pixmap = self.pixmap.scaled(
QSize(label_width,
label_width / pixmap_aspect),
Qt.KeepAspectRatio,
Qt.SmoothTransformation,
)
else:
pixmap = self.pixmap.scaled(
QSize(label_height * pixmap_aspect,
label_height),
Qt.KeepAspectRatio,
Qt.SmoothTransformation,
)
self.internal_loader.labelImagePreview.setPixmap(pixmap)
def btn_from_screen_color_picker(self):
def callback_screen_color_picker(rgb):
r, g, b = rgb
color_display_widget = ColorDisplayWidget(r, g, b, self)
self.layout.addWidget(color_display_widget)
if self.internal_loader.cBtnAutoHide.isChecked():
self.setVisible(True)
self.setWindowOpacity(1.0)
if self.internal_loader.cBtnAutoHide.isChecked():
self.setVisible(False)
self.setWindowOpacity(0.0)
QApplication.processEvents(QEventLoop.AllEvents)
        # time.sleep(0.20)  # window animation
self.picker.pick_color(callback=callback_screen_color_picker)
def btn_from_screen(self):
def callback_captured_image(pixmap: QPixmap):
self.pixmap = pixmap
self.update_image()
if self.internal_loader.cBtnAutoHide.isChecked():
self.setVisible(True)
self.update_color_display(pixmap)
if self.internal_loader.cBtnAutoHide.isChecked():
self.setVisible(False)
self.screen.capture_screen(callback=callback_captured_image)
def btn_from_file(self, event: QMouseEvent):
filepath, _ = QFileDialog.getOpenFileName(self, "选择文件", "", "图片 (*.png;*.jpg;*.gif;*.bmp);;所有类型 (*)")
self.pixmap.load(filepath)
self.update_image()
if not self.pixmap.isNull():
self.update_color_display(self.pixmap)
def btn_from_clipboard(self):
clipboard = QApplication.clipboard()
self.pixmap = clipboard.pixmap()
self.update_image()
self.update_color_display(self.pixmap)
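`update_image` above letterboxes the pixmap by comparing the image and label aspect ratios before calling `QPixmap.scaled`. The size computation, separated from Qt as a pure function (the helper name is hypothetical; the widget inlines this arithmetic):

```python
# Sketch of the aspect-fit sizing computed in MainWidget.update_image;
# Qt does the actual scaling, this only derives the target box.
def fit_size(img_w, img_h, box_w, box_h):
    if img_w / img_h > box_w / box_h:            # image wider than box: clamp width
        return box_w, round(box_w * img_h / img_w)
    return round(box_h * img_w / img_h), box_h   # otherwise clamp height

print(fit_size(400, 200, 100, 100))  # → (100, 50)
```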
d8b41261c2c681fcdb62fde84ac5266ed078c65f | 816 | py | Python | hwtLib/examples/statements/constDriver_test.py | optical-o/hwtLib | edad621f5ad4cdbea20a5751ff4468979afe2f77 | ["MIT"] | null | null | null
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from hwt.hdl.constants import Time
from hwt.simulator.simTestCase import SingleUnitSimTestCase
from hwtLib.examples.statements.constDriver import ConstDriverUnit
class ConstDriverTC(SingleUnitSimTestCase):
@classmethod
def getUnit(cls):
cls.u = ConstDriverUnit()
return cls.u
def test_simple(self):
u = self.u
self.runSim(20 * Time.ns)
self.assertValSequenceEqual(u.out0._ag.data, [0, 0])
self.assertValSequenceEqual(u.out1._ag.data, [1, 1])
if __name__ == "__main__":
import unittest
suite = unittest.TestSuite()
# suite.addTest(TwoCntrsTC('test_nothingEnable'))
suite.addTest(unittest.makeSuite(ConstDriverTC))
runner = unittest.TextTestRunner(verbosity=3)
runner.run(suite)
d8b44009ab655e1119911f81cd812061c34aa19f | 491 | py | Python | tutorial_web_scraper.py | mariusciurea/webscraping-tutorials | 9fb53252c4cc08d5e2b8b0d46e67c2374e7c84c5 | ["Unlicense"] | null | null | null
import requests
from bs4 import BeautifulSoup
# with open('index.html', 'rb') as hf:
# soup = BeautifulSoup(hf, 'html.parser')
# print(soup.prettify())
# print(soup.head.title.text)
# print(soup.li.a.h2.text)
# print(soup.li.a.p.text)
source_code = requests.get('https://mariusciurea.github.io/links/')
soup = BeautifulSoup(source_code.content, 'lxml')
apps = soup.find_all('a', {'title':'Ajuta un elev sa aleaga informat facultatea'})
for app in apps:
print(app)
d8ba6e17bc85f2ea591e7b78c0b6ba596ae2eb60 | 2,866 | py | Python | google_assist.py | eholic/dash-assistant | 97204e1402fbb742fb7838e995110a22ea814ab5 | ["MIT"] | null | null | null
import os
import sys
import requests
import logging
import json
import google.auth.transport.grpc
import google.auth.transport.requests
import google.oauth2.credentials
from google.assistant.embedded.v1alpha2 import (
embedded_assistant_pb2,
embedded_assistant_pb2_grpc
)
from config import Config
# Ref: https://github.com/googlesamples/assistant-sdk-python/blob/master/google-assistant-sdk/googlesamples/assistant/grpc/textinput.py
ASSISTANT_API_ENDPOINT = 'embeddedassistant.googleapis.com'
DEFAULT_GRPC_DEADLINE = 60 * 3 + 5
def gassist(text_query, lang_code='en-US'):
logging.info(text_query)
# Load OAuth 2.0 credentials.
try:
with open(Config.CREDENTIALS, 'r') as f:
credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))
session = requests.Session()
http_request = google.auth.transport.requests.Request(session)
credentials.refresh(http_request)
except Exception as e:
logging.error('Error loading credentials', exc_info=True)
sys.exit(-1)
# Create an authorized gRPC channel.
grpc_channel = google.auth.transport.grpc.secure_authorized_channel(
credentials, http_request, ASSISTANT_API_ENDPOINT)
# Create an assistant.
assistant = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(grpc_channel)
def assist(text_query):
def iter_assist_requests():
config = embedded_assistant_pb2.AssistConfig(
audio_out_config=embedded_assistant_pb2.AudioOutConfig(
encoding='LINEAR16',
sample_rate_hertz=16000,
volume_percentage=0,
),
dialog_state_in=embedded_assistant_pb2.DialogStateIn(
language_code=lang_code,
conversation_state=None,
is_new_conversation=True,
),
device_config=embedded_assistant_pb2.DeviceConfig(
device_id=Config.DEVICE_ID,
device_model_id=Config.DEVICE_MODEL_ID,
),
text_query=text_query,
)
req = embedded_assistant_pb2.AssistRequest(config=config)
yield req
text_response = None
html_response = None
for resp in assistant.Assist(iter_assist_requests(), DEFAULT_GRPC_DEADLINE):
if resp.screen_out.data:
html_response = resp.screen_out.data
if resp.dialog_state_out.supplemental_display_text:
text_response = resp.dialog_state_out.supplemental_display_text
return text_response, html_response
text, html = assist(text_query)
logging.info(text)
grpc_channel.close()
session.close()
return text
if __name__ == '__main__':
print(gassist('hello'))
| 34.95122 | 135 | 0.665736 | 316 | 2,866 | 5.756329 | 0.386076 | 0.074766 | 0.08796 | 0.042881 | 0.04508 | 0.04508 | 0.04508 | 0 | 0 | 0 | 0 | 0.0127 | 0.2582 | 2,866 | 81 | 136 | 35.382716 | 0.842897 | 0.075715 | 0 | 0.046154 | 0 | 0 | 0.03177 | 0.012103 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046154 | false | 0 | 0.153846 | 0 | 0.230769 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8bd67134893a262683665a0dbc9878a51447c79 | 15,809 | py | Python | menu.py | Jasonlmx/Touhou-Star-Salvation | a8804450625957af7b81d0075873a68708374db8 | [
"MIT"
] | 4 | 2021-10-15T13:18:43.000Z | 2022-03-05T10:49:47.000Z | menu.py | Jasonlmx/Touhou-Star-Salvation | a8804450625957af7b81d0075873a68708374db8 | [
"MIT"
] | null | null | null | menu.py | Jasonlmx/Touhou-Star-Salvation | a8804450625957af7b81d0075873a68708374db8 | [
"MIT"
] | 1 | 2021-11-29T04:17:32.000Z | 2021-11-29T04:17:32.000Z | import pygame,sys
import random
import math
from pygame.locals import *
from pygame.sprite import Group
import gF
import Bullet
import DADcharacter
import Slave
import global_var
import Effect
import Item
import gameRule
class titleStar(pygame.sprite.Sprite):
def __init__(self):
super(titleStar,self).__init__()
self.tx=0.0
self.ty=0.0
self.speedx=0
self.speedy=0
self.image=pygame.Surface((64,64)).convert_alpha()
self.image.fill((0,0,0,0))
self.image.blit(global_var.get_value('titleStar'),(0,0),(0,0,64,64))
self.lastFrame=0
self.rAngle=random.random()*360
self.rDirection=random.randint(0,1)
if self.rDirection==0:
self.rDirection=-1
self.rotation=(random.random()*1.5+1.2)*self.rDirection
self.maxFrame=270+random.randint(0,80)
self.shadowInt=4
self.voidifyFrame=30
self.speed=0
self.dDeg=-0.07*random.random()-0.07
def initial(self,posx,posy):
self.tx=posx
self.ty=posy
def movement(self):
tick=global_var.get_value('DELTA_T')
self.tx+=self.speedx*60/1000*tick
self.ty+=self.speedy*60/1000*tick
def speedAlter(self,speedx,speedy):
self.speedx=speedx
self.speedy=speedy
def countAngle(self):
if self.speedx!=0:
t=self.speedy/self.speedx
deg=math.atan(t)*180/math.pi
else:
if self.speedy>0:
deg=90
if self.speedy<0:
deg=270
if deg<0:
deg+=360
if self.speedy>0 and deg>=180:
deg=deg-180
if self.speedy<0 and deg<=180:
deg=deg+180
if self.speedy==0 and self.speedx<0:
deg=180
self.angle=deg
def setSpeed(self,angle,speed):
s=math.sin(math.radians(angle))
c=math.cos(math.radians(angle))
self.speedy=s*speed
self.speedx=c*speed
self.speed=speed
def arc(self):
if self.angle>95:
angle=self.angle+self.dDeg
self.setSpeed(angle,self.speed)
def checkValid(self):
if self.lastFrame>self.maxFrame:
self.kill()
def update(self,screen,titleDec):
self.lastFrame+=1
self.rAngle+=self.rotation
self.movement()
self.countAngle()
self.arc()
self.draw(screen)
if self.lastFrame%self.shadowInt==0:
self.newShadow(titleDec)
self.checkValid()
def newShadow(self,titleDec):
new_shadow=starShadow((self.tx,self.ty),80,self.rAngle)
titleDec.add(new_shadow)
def draw(self,screen):
pos=(round(self.tx)-32,round(self.ty)-32)
if self.lastFrame<=self.voidifyFrame:
tempImg=self.image
alpha=round((256-56)*self.lastFrame/self.voidifyFrame+56)
tempImg.set_alpha(alpha)
gF.drawRotation(tempImg,pos,self.rAngle,screen)
elif (self.maxFrame-self.lastFrame)<=self.voidifyFrame:
tempImg=self.image
alpha=round((256-56)*(self.maxFrame-self.lastFrame)/self.voidifyFrame+56)
tempImg.set_alpha(alpha)
gF.drawRotation(tempImg,pos,self.rAngle,screen)
else:
#pos=(round(self.tx)-32,round(self.ty)-32)
gF.drawRotation(self.image,pos,self.rAngle,screen)
#screen.blit(self.image,pos)
class starShadow(pygame.sprite.Sprite):
def __init__(self,pos,length=20,angle=0):
super(starShadow,self).__init__()
self.maxFrame=length
self.angle=angle
self.pos=pos
self.image=pygame.Surface((64,64)).convert_alpha()
self.image.fill((0,0,0,0))
self.image.blit(global_var.get_value('titleStar'),(0,0),(0,0,64,64))
self.lastFrame=0
def checkValid(self):
if self.lastFrame>=self.maxFrame:
self.kill()
def update(self,screen,*arg):
self.lastFrame+=1
self.draw(screen)
self.checkValid()
def draw(self,screen):
self.percentage=self.lastFrame/self.maxFrame
self.alpha=round((120-0)*(1-self.percentage)+0)
self.size=round(33*(1-self.percentage))+1
tempImg=pygame.Surface((64,64)).convert_alpha()
tempImg.fill((0,0,0,0))
tempImg.blit(self.image,(0,0),(0,0,64,64))
tempImg=pygame.transform.smoothscale(tempImg,(self.size,self.size))
tempImg.set_alpha(self.alpha)
x,y=self.pos
pos=(round(x-self.size/2),round(y-self.size/2))
gF.drawRotation(tempImg,pos,self.angle,screen)
class Menu():
def __init__(self):
super(Menu,self).__init__()
self.image=pygame.image.load('resource/title/menu.png').convert()
self.sign=global_var.get_value('menuSign')
self.shadow=global_var.get_value('menuShadow')
self.playerTitleImg=global_var.get_value('playerTitleImg')
self.kanjiLogo=global_var.get_value('kanjiLogo')
self.engLogo=global_var.get_value('engLogo')
self.lightLogo=global_var.get_value('lightLogo')
self.tachie=global_var.get_value('reimuLogo')
self.selectImg=global_var.get_value('menuSelectImg')
self.levelImg=global_var.get_value('levelImg')
self.font=pygame.font.SysFont('arial', 20)
self.selectNum=[0,0,0,0]
self.stairMax=[7,0,1,1]
self.menuStair=0 #0:main menu, 1 stage selection, 2 player selection, 3 practice menu
self.playerReset=False
self.lightStrength=0.0
self.logoPosAdj=[0,0]
self.lastFrame=0
self.testSpellNum=1
self.ifSpell=False
self.substract=False
self.plus=False
self.starInt=180
def update(self,screen,pressed_keys,pressed_keys_last,player,titleDec):
self.lastFrame+=1
self.addTitleStar(titleDec)
if self.lastFrame>360:
self.lastFrame=self.lastFrame%360
screen.blit(self.image,(0,0))
self.alterSelect(pressed_keys,pressed_keys_last)
self.drawSign(screen,titleDec)
self.doSelection(pressed_keys,pressed_keys_last,player)
def addTitleStar(self,titleDec):
if self.lastFrame%self.starInt==0:
new_star=titleStar()
i_x=300+random.random()*660
i_y=random.random()*5+10
new_star.initial(i_x,i_y)
new_star.setSpeed(135+random.random()*10,1.8+0.6*random.random())
titleDec.add(new_star)
def alterSelect(self,pressed_keys,pressed_keys_last):
if self.menuStair!=2 and self.menuStair!=3:
if not (pressed_keys[K_UP] and pressed_keys_last[K_UP]):
if pressed_keys[K_UP]:
self.selectNum[self.menuStair]-=1
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if not (pressed_keys[K_DOWN] and pressed_keys_last[K_DOWN]):
                if pressed_keys[K_DOWN]:
                    self.selectNum[self.menuStair]+=1
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
        elif self.menuStair==2:
            if not (pressed_keys[K_LEFT] and pressed_keys_last[K_LEFT]):
                if pressed_keys[K_LEFT]:
                    self.selectNum[self.menuStair]-=1
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if not (pressed_keys[K_RIGHT] and pressed_keys_last[K_RIGHT]):
                if pressed_keys[K_RIGHT]:
                    self.selectNum[self.menuStair]+=1
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
        elif self.menuStair==3:
            if not (pressed_keys[K_LEFT] and pressed_keys_last[K_LEFT]):
                if pressed_keys[K_LEFT]:
                    self.testSpellNum-=1
                    self.substract=True
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if not (pressed_keys[K_RIGHT] and pressed_keys_last[K_RIGHT]):
                if pressed_keys[K_RIGHT]:
                    self.testSpellNum+=1
                    self.plus=True
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if self.testSpellNum>10:
                self.testSpellNum=1
            elif self.testSpellNum<1:
                self.testSpellNum=10
            if not (pressed_keys[K_DOWN] and pressed_keys_last[K_DOWN]):
                if pressed_keys[K_DOWN]:
                    self.ifSpell=False
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if not (pressed_keys[K_UP] and pressed_keys_last[K_UP]):
                if pressed_keys[K_UP]:
                    self.ifSpell=True
                    global_var.get_value('select_sound').stop()
                    global_var.get_value('select_sound').play()
            if not self.ifSpell and self.testSpellNum==10:
                if self.substract:
                    self.testSpellNum=9
                elif self.plus:
                    self.testSpellNum=1
                else:
                    self.ifSpell=True
            self.substract=False
            self.plus=False
        if (pressed_keys[K_ESCAPE]!=pressed_keys_last[K_ESCAPE] and pressed_keys[K_ESCAPE]) or (pressed_keys[K_x]!=pressed_keys_last[K_x] and pressed_keys[K_x]):
            if self.menuStair>0:
                self.menuStair-=1
                global_var.get_value('cancel_sound').play()
            else:
                if self.selectNum[0]!=7:
                    self.selectNum[0]=7
                    global_var.get_value('cancel_sound').play()
                else:
                    global_var.get_value('cancel_sound').play()
                    sys.exit()
        if self.selectNum[self.menuStair]>self.stairMax[self.menuStair]:
            self.selectNum[self.menuStair]=0
        elif self.selectNum[self.menuStair]<0:
            self.selectNum[self.menuStair]=self.stairMax[self.menuStair]

    def drawSign(self,screen,titleDec):
        #stars
        if self.menuStair!=0:
            for entity in titleDec:
                entity.update(screen,titleDec)
        if self.menuStair==0:
            screen.blit(self.tachie,(600,90))
            for entity in titleDec:
                entity.update(screen,titleDec)
            self.logoPosAdj=[math.sin(self.lastFrame*math.pi/180)*20,math.sin(self.lastFrame*0.5*math.pi/180)*5]
            screen.blit(self.kanjiLogo,(100+self.logoPosAdj[0],30+self.logoPosAdj[1]))
            self.lightStrength=0.5*math.sin(self.lastFrame*2*math.pi/180)+0.5
            alpha=round(self.lightStrength*256)
            self.lightLogo.set_alpha(alpha)
            screen.blit(self.lightLogo,(100-5,164))
            screen.blit(self.engLogo,(100,164))
            for i in range(0,8):
                if i!=self.selectNum[self.menuStair]:
                    screen.blit(self.shadow[i],(100,250+i*48))
                else:
                    screen.blit(self.sign[i],(100,250+i*48))
        elif self.menuStair==1:
            screen.blit(self.selectImg[0],(40,10))
            screen.blit(self.levelImg[0],(288,264))
        elif self.menuStair==2:
            if self.selectNum[0]==0 or self.selectNum[0]==2:
                screen.blit(self.selectImg[1],(40,10))
                for i in range(0,2):
                    self.playerTitleImg[i].set_alpha(256)
                if self.selectNum[2]==0:
                    self.playerTitleImg[1].set_alpha(100)
                elif self.selectNum[2]==1:
                    self.playerTitleImg[0].set_alpha(100)
                for i in range(0,2):
                    screen.blit(self.playerTitleImg[i],(450*i,120))
        elif self.menuStair==3:
            if self.selectNum[0]==2:
                if self.ifSpell:
                    pracText=self.font.render('Test: Start From Spell No.'+str(self.testSpellNum),True,(255,255,255))
                else:
                    pracText=self.font.render('Test: Start From non-Spell No.'+str(self.testSpellNum),True,(255,255,255))
                screen.blit(pracText,(200,300))

    def doSelection(self,pressed_keys,pressed_keys_last,player):
        if pressed_keys[K_z]!=pressed_keys_last[K_z] and pressed_keys[K_z]:
            if self.menuStair==0:
                if self.selectNum[self.menuStair]==0:
                    global_var.get_value('ok_sound').play()
                    self.menuStair+=1
                elif self.selectNum[self.menuStair]==2:
                    global_var.get_value('ok_sound').play()
                    self.menuStair+=1
                elif self.selectNum[self.menuStair]==7:
                    global_var.get_value('ok_sound').play()
                    pygame.quit()
                    sys.exit()
                else:
                    global_var.get_value('invalid_sound').stop()
                    global_var.get_value('invalid_sound').play()
            elif self.menuStair==1:
                if self.selectNum[0]==0 or self.selectNum[0]==2:
                    if self.selectNum[self.menuStair]==0:
                        global_var.get_value('ok_sound').play()
                        self.menuStair+=1
            elif self.menuStair==2:
                if self.selectNum[0]==0:
                    if self.selectNum[self.menuStair]==0:
                        global_var.set_value('playerNum',0)
                    elif self.selectNum[self.menuStair]==1:
                        global_var.set_value('playerNum',1)
                    global_var.get_value('ok_sound').play()
                    global_var.get_value('ok_sound').play()
                    global_var.set_value('ifTest',False)
                    pygame.mixer.music.stop()
                    pygame.mixer.music.load('resource/bgm/lightnessOnTheWay.mp3') # load the background music file
                    #pygame.mixer.music.load('resource/bgm/上海アリス幻樂団 - 死体旅行~ Be of good cheer!.mp3')
                    pygame.mixer.music.set_volume(0.6) # set the background music volume
                    pygame.mixer.music.play(loops=-1)
                    self.menuStair=0
                    global_var.set_value('menu',False)
                    self.playerReset=True
                if self.selectNum[0]==2:
                    if self.selectNum[self.menuStair]==0:
                        global_var.set_value('playerNum',0)
                    elif self.selectNum[self.menuStair]==1:
                        global_var.set_value('playerNum',1)
                    global_var.get_value('ok_sound').play()
                    self.menuStair+=1
            elif self.menuStair==3:
                if self.selectNum[0]==2:
                    global_var.get_value('ok_sound').play()
                    global_var.set_value('ifTest',True)
                    global_var.set_value('ifSpellTest',self.ifSpell)
                    global_var.set_value('spellNum',self.testSpellNum)
                    pygame.mixer.music.stop()
                    pygame.mixer.music.load('resource/bgm/lightnessOnTheWay.mp3') # load the background music file
                    #pygame.mixer.music.load('resource/bgm/上海アリス幻樂団 - 死体旅行~ Be of good cheer!.mp3')
                    pygame.mixer.music.set_volume(0.6) # set the background music volume
                    pygame.mixer.music.play(loops=-1)
                    self.menuStair=0
                    global_var.set_value('menu',False)
                    self.playerReset=True
from django.contrib.auth import get_user_model
from django.urls import reverse
from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient
from core.models import Product
from product.serializers import ProductSerializer
PRODUCTS_URL = reverse('product:product-list')
def detail_url(product_slug):
    """Return product detail URL"""
    return reverse('product:product-detail', args=[product_slug])


def sample_product(**params):
    """Create and return sample product"""
    defaults = {
        'name': 'TestNameCase',
        'description': "test description for test Product",
        'cost': 45
    }
    defaults.update(params)

    return Product.objects.create(**defaults)


class PublicProductsApiTests(TestCase):
    """Test the publicly available products API"""

    def setUp(self):
        self.client = APIClient()

    def test_login_required(self):
        """Test that login is required to access the endpoint"""
        res = self.client.get(PRODUCTS_URL)

        self.assertEqual(res.status_code, status.HTTP_401_UNAUTHORIZED)


class PrivateProductApiTests(TestCase):
    """Test products can be retrieved by authorized user"""

    def setUp(self):
        self.client = APIClient()
        self.user = get_user_model().objects.create_user(
            email='TestMail@gmail.com',
            password='TestPassword123'
        )
        self.client.force_authenticate(self.user)

    def test_retrieve_product_list(self):
        """Test retrieving list of products"""
        params = {
            'name': 'TestProduct',
            'description': 'Test description for second test product',
            'cost': 5.00
        }
        sample_product(**params)
        sample_product()

        products = Product.objects.all()
        serializer = ProductSerializer(products, many=True)

        res = self.client.get(PRODUCTS_URL)

        self.assertEqual(res.status_code, status.HTTP_200_OK)
        self.assertEqual(res.data, serializer.data)

    def test_view_product_detail(self):
        """Test viewing product detail"""
        product = sample_product()

        url = detail_url(product.slug)
        res = self.client.get(url)

        serializer = ProductSerializer(product)
        self.assertEqual(serializer.data, res.data)
import os
from typing import Dict, Tuple, List
import json
import time

import tensorflow as tf
import numpy as np

from type_def import BOUNDARY_BOX_TYPE, PERSONAL_INFO_TYPE


class FaceFeatureExtractor():
    def __init__(self, base_model_path: str, nationality_model_path: str, label_path: str) -> None:
        '''Load the files this instance needs.

        Parameter
        ----------
        base_model_path: path to the convolution-only part of MobileNetV2
        nationality_model_path: path to the nationality model
        label_path: path to the file mapping each model's numeric output to a string
            Example file content:
                {
                    "gender": {
                        0: "female",
                        1: "male"
                    },
                    ...
                }
        '''
        self.base_model = self.__load_model(base_model_path)
        self.nationality_model = self.__load_model(nationality_model_path)
        self.labels = self.__load_labels(label_path)

    def get_personal_data_from_faces(self, img_batch: np.array, rect_list: BOUNDARY_BOX_TYPE) -> PERSONAL_INFO_TYPE:
        '''Predict personal attributes (here: nationality) from face image data.

        Parameter
        ----------
        img_batch: batch of images
        rect_list: face coordinates

        Returns
        ----------
        Example:
            [
                {
                    "coodinate": [x, y, W, H],
                    "attrributes": {
                        "nationality": "japanese"
                    }
                },
                ...
            ]
        '''
        features = self.get_feature_batch(img_batch)
        features = features.reshape(len(features), -1)
        # nationality prediction
        nationality_list = self.predict_facial_expression(features, self.nationality_model)
        result_list = [None] * len(rect_list)
        assert len(rect_list) == len(nationality_list)
        for i, (rect, nationality) in enumerate(zip(rect_list, nationality_list)):
            result = {'coodinate': None, 'attribute': {}}
            result['coodinate'] = list(rect)
            result['attribute']['nationality'] = self.labels['nationality'][str(nationality)]
            result_list[i] = result
        return result_list

    def get_feature_batch(self, img_batch: np.array) -> np.array:
        '''Extract features for a batch of images from the MobileNet base model.

        Parameter
        ---------
        img_batch: batch of images

        Returns
        ---------
        features output by MobileNet
        '''
        assert isinstance(img_batch, np.ndarray)
        assert img_batch.ndim == 4
        x = tf.keras.applications.mobilenet_v2.preprocess_input(img_batch)
        features = self.base_model.predict(x)
        return features

    def predict_facial_expression(
            self,
            features: np.array,
            model: 'trained prediction-head model') -> List[int]:
        '''Predict a facial attribute with the given model.

        Parameter
        ---------
        features: features to feed into the model
        model: attribute prediction model (this relies on Keras's
            predict_classes method, which returns class labels; when using
            another framework, wrap the model in a class exposing the same
            interface.)

        Returns
        ---------
        list whose elements are the predicted numeric labels
        '''
        return model.predict_classes(features).tolist()

    def __load_labels(self, label_path: str) -> Dict[str, Dict[str, str]]:
        '''Load, from a JSON file, the dictionary describing what each one-hot label means.

        Parameter
        ----------
        label_path: path to the JSON file describing the string behind each model label

        Returns
        ----------
        dictionary mapping one-hot labels to strings
        Example:
            {
                "gender":
                {
                    "0": "female",
                    "1": "male"
                },
                "age": {
                    "0": "10s",
                    "1": "20s",
                    "2": "30s",
                    "3": "40s",
                    "4": "50s"
                },
                "race":
                {
                    "0": "Asian",
                    "1": "Black",
                    "2": "Indian",
                    "3": "others",
                    "4": "White"
                }
            }
        '''
        self.__check_file_exists(label_path)
        with open(label_path, 'r') as f:
            labels = json.load(f)
        return labels

    def __load_model(self, model_path: str) -> 'keras model':
        '''Load a Keras model.

        Parameter
        ----------
        model_path: path of the model to load

        Returns
        ----------
        the return value of tf.keras.models.load_model
        '''
        self.__check_file_exists(model_path)
        return tf.keras.models.load_model(model_path)

    def __check_file_exists(self, file_path: str) -> None:
        '''Check that a file exists (raises an exception when it does not).

        Parameter
        ----------
        file_path: file whose existence is checked
        '''
        if not os.path.exists(file_path):
            raise FileNotFoundError('[{}] does not exist.'.format(file_path))
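The label file that `__load_labels` expects is plain JSON. A minimal runnable sketch of the round trip — the label values `"japanese"`/`"other"` and the predicted indices are made-up placeholders, not from the real project:

```python
import json
import os
import tempfile

# Hypothetical label file content; the real project ships its own mapping.
labels = {"nationality": {"0": "japanese", "1": "other"}}

fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as f:
    json.dump(labels, f)

# Mimic __load_labels: read the JSON back in.
with open(path, "r") as f:
    loaded = json.load(f)
os.remove(path)

# predict_facial_expression returns numeric labels; map them to strings
# the same way get_personal_data_from_faces does (note the str() key).
predicted = [0, 1, 0]
names = [loaded["nationality"][str(n)] for n in predicted]
```

The `str(nationality)` lookup matters because JSON object keys are always strings, while the classifier returns integers.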
from datetime import datetime
from flask import render_template, flash, redirect, url_for, request, g, \
    jsonify, current_app
from flask_login import current_user, login_required
from flask_babel import _, get_locale
#from guess_language import guess_language
from app import db
from app.main.forms import CompanyForm
from app.models import Company_list
from app.main import bp


@bp.route("/company", methods=['GET'])
def company_namelist():
    """
    List all company
    """
    #loan_requests = Loan_request.query.all()
    page = request.args.get('page', 1, type=int)
    companys = Company_list.query.order_by(Company_list.id.asc()).paginate(
        page, current_app.config['POSTS_PER_PAGE'], False)
    next_url = url_for('main.company_namelist', page=companys.next_num) \
        if companys.has_next else None
    prev_url = url_for('main.company_namelist', page=companys.prev_num) \
        if companys.has_prev else None
    return render_template('company/company_namelists.html', companys=companys.items, title="companys", next_url=next_url, prev_url=prev_url)


@bp.route('/company/add', methods=['GET', 'POST'])
def add_company():
    form = CompanyForm()
    if form.validate_on_submit():
        company = Company_list(names_one=form.names_one.data,
                               names_two=form.names_two.data,
                               names_three=form.names_three.data,
                               branches=form.branches.data)

        # add employee to the database
        db.session.add(company)
        db.session.commit()
        flash('You have successfully registered!')

        # redirect to the login page
        return redirect(url_for('main.company_namelist'))

    # load registration template
    return render_template('company/company_add.html', form=form, title='LoanTypeAdd')


@bp.route('/companys/edit/<int:id>', methods=['GET', 'POST'])
def edit_company(id):
    """
    Edit a user
    """
    add_company = False

    companys = Company_list.query.get_or_404(id)
    form = CompanyForm(obj=companys)
    if form.validate_on_submit():
        companys.names_one = form.names_one.data
        companys.names_two = form.names_two.data
        companys.names_three = form.names_three.data
        companys.branches = form.branches.data
        db.session.add(companys)
        db.session.commit()
        flash('You have successfully edited the companys.')

        # redirect to the roles page
        return redirect(url_for('main.company_namelist'))

    form.names_one.data = companys.names_one
    form.names_two.data = companys.names_two
    form.names_three.data = companys.names_three
    form.branches.data = companys.branches
    return render_template('company/company_edit.html', add_company=add_company,
                           form=form, title="Edit company")


@bp.route('/company/delete/<int:id>', methods=['GET', 'POST'])
def delete_company(id):
    """
    Delete a employee from the database
    """
    companyss = Company_list.query.get_or_404(id)
    db.session.delete(companyss)
    db.session.commit()
    flash('You have successfully deleted the company.')

    # redirect to the roles page
    return redirect(url_for('main.company_namelist'))
"""
Main generic object classes:
- 1. Quay_wall
- 2. Berth
- 3. Cyclic_Unloader
- STS crane
- 4. Horizontal transport
- Tractor trailer
- 5. Commodity
- TEU
- 6. Containers
- Laden
- Reefer
- Empty
- OOG
- 7. Laden and reefer stack
- 8. Stack equipment
- 9. Empty stack
- 10. OOG stack
- 11. Gates
- 12. Empty handler
- 13. Vessel
- 14. Labour
- 15. Energy
- 16. General
- 17. Indirect Costs
"""
# package(s) for data handling
import pandas as pd
# *** Default inputs: Quay_Wall class *** todo add values of RHDHV or general (e.g. PIANC)
quay_wall_data = {"name": 'Quay',
"ownership": 'Port authority',
"delivery_time": 2,
"lifespan": 50,
"mobilisation_min": 2_500_000,
"mobilisation_perc": 0.02,
"maintenance_perc": 0.01,
"insurance_perc": 0.01,
"berthing_gap": 15, # see PIANC (2014), p 98
"freeboard": 4, # m
"Gijt_constant": 753.24, # Source: (J. de Gijt, 2011) Figure 2 ; USD/m (if 1.0 EUR = 1.12 USD, 670.45 EUR = 757.8 USD)
"Gijt_coefficient": 1.2729, # Source: (J. de Gijt, 2011) Figure 2
"max_sinkage": 0.5,
"wave_motion": 0.5,
"safety_margin": 0.5,
"apron_width": 65.5, # see PIANC (2014b), p 62
"apron_pavement": 125} # all values from Ijzermans, 2019, P 91
# *** Default inputs: Berth class ***
berth_data = {"name": 'Berth',
"crane_type": 'Mobile cranes',
"delivery_time": 2,
"max_cranes": 3} # STS cranes
# *** Default inputs: Crane class *** todo check sources sts_crane_data and check small sts_crane_data for the barge berths
sts_crane_data = {"name": 'STS_crane',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"unit_rate": 10_000_000, # USD per unit
"mobilisation_perc": 0.15, # percentage
"maintenance_perc": 0.02, # percentage
"insurance_perc": 0.01, # percentage
"consumption": 8, # Source: Peter Beamish (RHDHV)
"crew": 5.5, # 1.5 crane driver, 2 quay staff, 2 twistlock handler (per shift)
"crane_type": 'STS crane',
"lifting_capacity": 2.13, # weighted average of TEU per lift
"hourly_cycles": 25, # PIANC wg135
"eff_fact": 0.75}
# *** Default inputs: Barge_Berth class ***
barge_berth_data = {"name": 'Barge_Berth',
"delivery_time": 2, # years
"max_cranes": 1.0} # barge_cranes/barge_berth (Source: RHDHV)
barge_quay_wall_data = {"name": 'Barge_Quay',
"ownership": "Terminal operator",
"delivery_time": 2, # years
"lifespan": 50, # equal to quay wall OGV
"mobilisation_min": 1_000_000, # todo add source
"mobilisation_perc": 0.02,
"maintenance_perc": 0.01,
"insurance_perc": 0.01,
"berthing_gap": 15, # see PIANC (2014), p 98
"freeboard": 4, # m
"Gijt_constant": 753.24, # Source: (J. de Gijt, 2011) Figure 2 ; USD/m (if 1.0 EUR = 1.12 USD, 670.45 EUR = 757.8 USD)
"Gijt_coefficient": 1.2729, # Source: (J. de Gijt, 2011) Figure 2
"max_sinkage": 0.5,
"wave_motion": 0.5,
"safety_margin": 0.5,
"apron_width": 30, # todo add source, check PIANC 2014b
"apron_pavement": 125} # all values from Ijzermans, 2019, P 91
barge_crane_data = {"name": 'Barge Crane',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"unit_rate": 5_000_000, # USD per unit
"mobilisation_perc": 0.15, # percentage
"maintenance_perc": 0.02, # percentage
"insurance_perc": 0.01, # percentage
"consumption": 4, # RHDHV
"crew": 1.5, # 1.5 crane driver (per shift)
"lifting_capacity": 1.60, # RHDHV, weighted average of TEU per lift
"avg_utilisation": 0.9, # RHDHV
"nom_crane_productivity": 15.0, # moves per hour
"utilisation": 0.90, # rate
"efficiency": 0.75, # rate
"handling_time_ratio": 0.90, # handling time to berthing time ratio
"peak_factor": 1.10} # RHDHV
# *** Default inputs: ***
channel_data = {"name": 'Channel',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"capital_dredging_rate": 7.0, # USD per m3 (source: Payra, $6.82)
"infill_dredging_rate": 5.5, # USD per m3 (source: Payra, $5.25)
"maintenance_dredging_rate": 4.5, # USD per m3 (source: Payra, $4.43)
"mobilisation_min": 2_500_000,
"mobilisation_perc": 0.02,
"maintenance_perc": 0.10,
"insurance_perc": 0.01}
bridge_data = {"name": 'Bridge',
"ownership": 'Port authority',
"delivery_time": 3,
"lifespan": 50, # years
"unit_rate": 100_000_000, # USD per km
"maintenance_perc": 0.025,
"insurance_perc": 0.01}
reclamation_data = {"name": 'Reclamation',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"reclamation_rate": 12.50, # USD per m3
"maintenance_perc": 0.02,
"insurance_perc": 0.00}
revetment_data = {"name": 'Revetment',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"revetment_rate": 180_000, # USD per m
"quay_length_rate": 1.5,
"maintenance_perc": 0.01,
"insurance_perc": 0.00}
breakwater_data = {"name": 'Breakwater',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"breakwater_rate": 275_000, # USD per m
"quay_length_rate": 1.5,
"maintenance_perc": 0.01,
"insurance_perc": 0.00}
# Default inputs: Horizontal_Transport class *** #todo add sources
tractor_trailer_data = {"name": 'Tractor-trailer',
"type": 'tractor_trailer',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"mobilisation": 1_000,
"unit_rate": 85_000,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"crew": 1,
"salary": 30_000, # dummy
"utilisation": 0.80,
"fuel_consumption": 2, # liter per box move
"productivity": 1,
"required": 5, # typical 3 - 6 see PIANC 2014b, p 58
"non_essential_moves": 1.2} # todo input value for tractor productivity
# *** Default inputs: Container class #todo add sources
laden_container_data = {"name": 'Laden container',
"type": 'laden_container',
"teu_factor": 1.60,
"dwell_time": 3, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 0.7,
"stack_occupancy": 0.8, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
reefer_container_data = {"name": 'Reefer container',
"type": 'reefer_container',
"teu_factor": 1.75,
"dwell_time": 3, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 0.7,
"stack_occupancy": 0.8, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 21, # TEU
"height": 4, # TEU
"length": 4 # TEU
}
empty_container_data = {"name": 'Empty container',
"type": 'empty_container',
"teu_factor": 1.55,
"dwell_time": 10, # days, PIANC (2014b) p 64 (10 - 20)
"peak_factor": 1.2,
"stack_ratio": 1, # looking for a good reference for this value
"stack_occupancy": 0.7, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
oog_container_data = {"name": 'OOG container',
"type": 'oog_container',
"teu_factor": 1.55,
"dwell_time": 4, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 1, # by definition the H of oog stacks is 1
"stack_occupancy": 0.9, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
# *** Default inputs: Laden_Stack class within the stacks
rtg_stack_data = {"name": 'RTG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 25_000, # USD
"maintenance_perc": 0.1,
# "width": 6, # TEU
# "height": 5, # TEU
# "length": 30, # TEU
# "capacity": 900, # TEU
"gross_tgs": 18, # TEU Ground Slot [m2/teu]
"area_factor": 2.04, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
rmg_stack_data = {"name": 'RMG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 50_000, # USD
"maintenance_perc": 0.1,
# "width": 6, # TEU
# "height": 5, # TEU
# "length": 40, # TEU
# "capacity": 1200, # TEU
"gross_tgs": 18.67, # TEU Ground Slot [m2/teu]
"area_factor": 2.79, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
sc_stack_data = {"name": 'SC Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 50_000, # USD
"maintenance_perc": 0.1,
# "width": 45, # TEU
# "height": 3, # TEU
# "length": 22, # TEU
# "capacity": 1200, # TEU
"gross_tgs": 27.3, # TEU Ground Slot [m2/teu]
"area_factor": 1.45, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # DUMMY
"drainage": 50, # DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
rs_stack_data = {"name": 'RS Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 10_000, # USD
"maintenance_perc": 0.1,
# "width": 4, # TEU
# "height": 4, # TEU
# "length": 20, # TEU
# "capacity": 320, # TEU
"gross_tgs": 18, # TEU Ground Slot [m2/teu]
"area_factor": 3.23, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
# *** Default inputs: Other_Stack class
empty_stack_data = {"name": 'Empty Stack',
"ownership": 'Terminal operator',
"delivery_time": 1,
"lifespan": 40,
"mobilisation": 25_000,
"maintenance_perc": 0.1,
"width": 8, # TEU
"height": 6, # TEU
"length": 10, # TEU
"capacity": 480, # TEU
"gross_tgs": 18, # TEU Ground Slot
"area_factor": 2.04, # Based on grasshopper layout
"pavement": 200, # DUMMY
"drainage": 50,
"household": 1.05,
"digout": 1.05} # DUMMY
oog_stack_data = {"name": 'OOG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1,
"lifespan": 40,
"mobilisation": 25_000,
"maintenance_perc": 0.1,
"width": 10, # TEU
"height": 1, # TEU
"length": 10, # TEU
"capacity": 100, # TEU
"gross_tgs": 64, # TEU Ground Slot
"area_factor": 1.05, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # DUMMY
"drainage": 50} # DUMMY
# *** Default inputs: Stack_Equipment class
rtg_data = {"name": 'RTG',
"type": 'rtg',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 1_400_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 1, # dummy
"salary": 50_000, # dummy
"required": 3,
"fuel_consumption": 1, # dummy
"power_consumption": 0
}
rmg_data = {"name": 'RMG',
"type": 'rmg',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 2_500_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 0, # dummy
"salary": 50_000, # dummy
"required": 1, # one per stack
"fuel_consumption": 0, # dummy
"power_consumption": 15 # kWh/box move
}
sc_data = {"name": 'Straddle carrier',
"type": 'sc',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 2_000_000, # dummy
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 0, # dummy
"salary": 50_000, # dummy
"required": 5,
"fuel_consumption": 0, # dummy
"power_consumption": 30
}
rs_data = {"name": 'Reach stacker',
"type": 'rs',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 500_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 2, # dummy
"salary": 50_000, # dummy
"required": 4,
"fuel_consumption": 1, # dummy
"power_consumption": 0
}
# *** Default inputs: Gate class ***
gate_data = {"name": 'Gate',
"type": 'gate',
"ownership": "Terminal operator",
"delivery_time": 1, # years
"lifespan": 15, # years
"unit_rate": 30_000, # USD/gate
"mobilisation": 5000, # USD/gate
"maintenance_perc": 0.02,
"crew": 2, # crew
"salary": 30_000, # Dummy
"canopy_costs": 250, # USD/m2 # Dummy
"area": 288.75, # PIANC WG135
"staff_gates": 1, #
"service_gates": 1, #
"design_capacity": 0.98, #
"exit_inspection_time": 3, # min #dummy
"entry_inspection_time": 2, # min #dummy
"peak_hour": 0.125, # dummy
"peak_day": 0.25, # dummy
"peak_factor": 1.2,
"truck_moves": 0.75,
"operating_days": 7,
"capacity": 60}
# *** Default inputs: ECH class***
empty_handler_data = {"name": 'Empty Handler',
"type": 'empty_handler',
"ownership": "Terminal operator",
"delivery_time": 1,
"lifespan": 15,
"unit_rate": 500_000,
"mobilisation": 5000,
"maintenance_perc": 0.02,
"crew": 1,
"salary": 35_000, # dummy
"fuel_consumption": 1.5,
"required": 5}
# *** Default inputs: Commodity class ***
container_data = {"name": 'Laden',
"handling_fee": 150,
"fully_cellular_perc": 0,
"panamax_perc": 0,
"panamax_max_perc": 0,
"post_panamax_I_perc": 0,
"post_panamax_II_perc": 0,
"new_panamax_perc": 100,
"VLCS_perc": 0,
"ULCS_perc": 0}
# *** Default inputs: Vessel class *** (Source: i) The Geography of Transport Systems, Jean-Paul Rodrigue (2017), ii) UNCTAD)
fully_cellular_data = {"name": 'Fully_Cellular_1',
"type": 'Fully_Cellular',
"delivery_time": 0, # years
"call_size": 2500 / 8, # TEU
"LOA": 215, # m
"draught": 10.0, # m
"beam": 20.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source
"mooring_time": 6, # berthing + deberthing time
"demurrage_rate": 730, # USD todo edit
"transport_costs": 200, # USD per TEU, RHDHV
"all_in_transport_costs": 2128 # USD per TEU, Ports and Terminals p.158
}
panamax_data = {"name": 'Panamax_1',
"type": 'Panamax',
"delivery_time": 0, # years
"call_size": 3400 / 8, # TEU
"LOA": 250, # m
"draught": 12.5, # m
"beam": 32.2, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 6, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 180, # USD per TEU, RHDHV
"all_in_transport_costs": 1881 # USD per TEU, Ports and Terminals p.158
}
panamax_max_data = {"name": 'Panamax_Max_1',
"type": 'Panamax_Max',
"delivery_time": 0, # years
"call_size": 4500 / 8, # TEU
"LOA": 290, # m
"draught": 12.5, # m
"beam": 32.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 160, # USD per TEU, RHDHV
"all_in_transport_costs": 1682 # USD per TEU, Ports and Terminals p.158
}
post_panamax_I_data = {"name": 'Post_Panamax_I_1',
"type": 'Post_Panamax_I',
"delivery_time": 0, # years
"call_size": 6000 / 8, # TEU
"LOA": 300, # m
"draught": 13.0, # m
"beam": 40.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 150, # USD per TEU, RHDHV
"all_in_transport_costs": 1499 # USD per TEU, Ports and Terminals p.158
}
post_panamax_II_data = {"name": 'Post_Panamax_II_1',
"type": 'Post_Panamax_II',
"delivery_time": 0, # years
"call_size": 8500 / 8, # TEU
"LOA": 340, # m
"draught": 14.5, # m
"beam": 43.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 140, # USD per TEU, RHDHV
"all_in_transport_costs": 1304 # USD per TEU, Ports and Terminals p.158
}
new_panamax_data = {"name": 'New_Panamax_1',
"type": 'New_Panamax',
"delivery_time": 0, # years
"call_size": 12500 / 8, # TEU
"LOA": 366, # m
"draught": 15.2, # m
"beam": 49.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 6, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 120, # USD per TEU, RHDHV
"all_in_transport_costs": 1118 # USD per TEU, Ports and Terminals p.158
}
VLCS_data = {"name": 'VLCS_1',
"type": 'VLCS',
"delivery_time": 0, # years
"call_size": 15000 / 8, # TEU
"LOA": 397, # m
"draught": 15.5, # m
"beam": 56.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 4, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 80, # USD per TEU, RHDHV
"all_in_transport_costs": 2128 # USD per TEU, Ports and Terminals p.158
}
ULCS_data = {"name": 'ULCS_1',
"type": 'ULCS',
"delivery_time": 0, # years
"call_size": 21000 / 8, # TEU
"LOA": 400, # m
"draught": 16.0, # m
"beam": 59.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 4, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 60, # USD per TEU, RHDHV
"all_in_transport_costs": 908 # USD per TEU, Ports and Terminals p.158
}
# *** Default inputs: Barge class *** # todo add sources
small_barge_data = {"name": 'Small_Barge_1',
"type": 'small',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 200, # TEU
"LOA": 90, # m
"draught": 4.5, # m
"beam": 12.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
medium_barge_data = {"name": 'Medium_Barge_1',
"type": 'medium',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 250, # TEU
"LOA": 100, # m
"draught": 5.0, # m
"beam": 13.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
large_barge_data = {"name": 'Large_Barge_1',
"type": 'large',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 300, # TEU
"LOA": 120, # m
"draught": 5.5, # m
"beam": 14.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
truck_data = {"name": 'Truck',
"ownership": 'Port authority',
"delivery_time": 1,
"lifespan": 10,
"unit_rate": 10_000, # USD per truck
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01}
# *** Default inputs: Labour class ***
labour_data = {"name": 'Labour',
"international_salary": 105_000,
"international_staff": 4,
"local_salary": 18_850,
"local_staff": 10,
"operational_salary": 16_750,
"shift_length": 6.5, # hr per shift
"annual_shifts": 200,
"daily_shifts": 5, # shifts per day
"blue_collar_salary": 25_000, # USD per crew per day
"white_collar_salary": 35_000} # USD per crew per day
# *** Default inputs: Energy class ***
energy_data = {"name": 'Energy',
"price": 0.10}
# *** Default inputs: General_Services class ***
general_services_data = {"name": 'General_Services',
"type": 'general_services',
"office": 2400,
"office_cost": 1500,
"workshop": 2400,
"workshop_cost": 1000,
"fuel_station_cost": 500_000,
"scanning_inspection_area": 2700,
"scanning_inspection_area_cost": 1000,
"lighting_mast_required": 1.2, # masts per ha
"lighting_mast_cost": 30_000,
"firefight_cost": 2_000_000,
"maintenance_tools_cost": 10_000_000,
"terminal_operating_software_cost": 10_000_000,
"electrical_station_cost": 2_000_000,
"repair_building": 100,
"repair_building_cost": 1000,
"ceo": 1, # FTE per 500 k TEU
"secretary": 1, # FTE per 500 k TEU
"administration": 3, # FTE per 500 k TEU
"hr": 2, # FTE per 500 k TEU
"commercial": 1, # FTE per 500 k TEU
"operations": 4, # FTE/shirt per 500 k TEU
"engineering": 2, # FTE/shift per 500 k TEU
"security": 2, # FTE/shift per 500 k TEU
"general_maintenance": 0.015,
"crew_required": 500_000, # for each 500_k TEU an additional crew team is added
"delivery_time": 1,
"lighting_consumption": 1,
"general_consumption": 1000}
# *** Default inputs: Indirect_Costs class ***
indirect_costs_data = {"name": 'Indirect_Costs',
"preliminaries": 0.15,
"engineering": 0.05,
"miscellaneous": 0.15,
"electrical_works_fuel_terminal": 0.12,
"electrical_works_power_terminal": 0.15}
# === File: pyoogle/preprocessing/crawl/crawler.py (repo: DanDits/Pyoogle, license: Apache-2.0) ===
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 6 12:49:02 2016
@author: daniel
"""
import logging
import threading # For main processing thread
import urllib # For downloading websites
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor # each downloads a website
from http.client import RemoteDisconnected
from queue import Queue, Empty # For processing downloaded websites
from socket import timeout as socket_timeout
from pyoogle.config import LOGGING_LEVEL
from pyoogle.preprocessing.crawl.linkconstraint import LinkConstraint # constraint to which links are allowed
from pyoogle.preprocessing.web.net import WebNet
from pyoogle.preprocessing.web.node import WebNode
from pyoogle.preprocessing.web.nodestore import WebNodeStore # for permanently saving created WebNodes
from pyoogle.preprocessing.web.parser import WebParser # parses the downloaded html site and extracts info
logging.getLogger().setLevel(LOGGING_LEVEL)
NotResolvable = "NOT_RESOLVABLE_LINK"
class Crawler:
# Initializes the Crawler. If max_sites is greater than zero it will only
# download this many sites and stop afterwards, else until no new site is found.
def __init__(self, store_path, link_constraint, max_sites=0, max_workers=2, timeout=30):
self.store_path = store_path
self.pending_links = Queue()
self.pending_websites = Queue()
self.web_net = None
self.link_constraint = link_constraint
if self.link_constraint is None:
raise ValueError("No link constraint given!")
self.already_processed_links = set()
self.already_processed_websites = set()
self.is_crawling = False
self.max_sites = max_sites
self.processed_sites_count = 0
self.max_workers = max_workers
self.timeout = timeout
self.starting_processor = None
self.links_processor = None
self.websites_processor = None
def _is_finished(self):
return not self.is_crawling or self.has_maximum_sites_processed()
def has_maximum_sites_processed(self):
return 0 < self.max_sites <= self.processed_sites_count
def process_link(self, link):
if self._is_finished():
return
website = Crawler.download_website(link, self.timeout)
if website is None:
logging.debug("Website %s not downloaded", link)
if website is NotResolvable:
logging.debug("Website %s not resolvable and not trying again.", link)
return
return self, link, website
@staticmethod
def link_got_processed(future):
if future.done() and future.result() is not None:
self, link, website = future.result()
if self._is_finished():
return
if website is None:
# revert and try later
logging.debug("Website %s not downloaded, retrying later ", link)
self.add_link(link)
return
if not self.has_maximum_sites_processed():
self.pending_websites.put((link, website))
def obtain_new_link(self):
link = None
while link is None and not self._is_finished():
try:
link = self.pending_links.get(timeout=self.timeout)
except Empty:
logging.info("No more links found to process!")
return
if link in self.already_processed_links:
link = None
continue # already processed
if link is not None:
self.already_processed_links.add(link)
return link
def process_links(self):
logging.info("Starting to process links")
try:
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
while not self._is_finished():
                # when testing with a limited max_sites (> 0) this may submit
                # more futures than needed, but the extra ones are simply ignored
link = self.obtain_new_link()
if link is None:
return
future = executor.submit(self.process_link, link)
future.add_done_callback(Crawler.link_got_processed)
finally:
self.stop() # ensure crawler is really stopped
def process_website(self, link, website):
logging.debug("Starting to parse %s pending links %d", link, self.pending_links.qsize())
try:
webparser = WebParser(link, website)
except ValueError:
logging.debug("Website %s not parsable, ignored but out link kept", link)
return
web_hash = hash(webparser)
if web_hash in self.already_processed_websites:
# Already processed but with a different url, add this url to node so we know this in the future!
logging.debug("Website %s already processed (with different url)!", link)
node = self.web_net.get_by_content_hash(web_hash)
if node is not None:
node.add_url(link)
return
logging.info("Processed %d.link %s pending websites %d",
self.processed_sites_count + 1, link, self.pending_websites.qsize())
self.already_processed_websites.add(web_hash)
self.processed_sites_count += 1
builder = WebNode.Builder(self.link_constraint)
builder.init_from_webparser(webparser)
webnode = builder.make_node()
self.web_net.add_node(webnode)
        # use a fresh name so the link parameter is not shadowed
        for out_link in webnode.get_out_links():
            self.add_link(out_link)
def process_websites(self, clear_store):
# We are required to open the store in the same thread the store is modified in
logging.info("Starting to process websites")
with WebNodeStore(self.store_path, clear_store) as node_store:
try:
while not self._is_finished():
data = self.pending_websites.get(block=True)
if data is None:
break
link, website = data
self.process_website(link, website)
node_store.save_webnodes(self.web_net.get_nodes())
finally:
self.stop() # ensure crawler is really stopped
def _init_net(self, clear_store):
self.web_net = WebNet()
if not clear_store:
# Do not clear the store but add new nodes to it, load and add existing to webnet
with WebNodeStore(self.store_path, clear=False) as node_store:
for node in node_store.load_webnodes(True):
self.already_processed_websites.add(node.get_content_hash())
for link in node.get_urls():
self.already_processed_links.add(link)
self.web_net.add_node(node)
# After we marked all already processed links, add new outgoings to restart
restart_link_count = 0
total_link_out = 0
for node in self.web_net:
for link in node.get_out_links():
total_link_out += 1
if link not in self.already_processed_links:
self.add_link(link)
restart_link_count += 1
logging.info("Restarting with %d links of %d", restart_link_count, total_link_out)
def _start_async(self, clear_store):
self._init_net(clear_store)
self.links_processor = threading.Thread(target=self.process_links)
self.links_processor.start()
self.websites_processor = threading.Thread(target=Crawler.process_websites, args=[self, clear_store])
self.websites_processor.start()
def join(self):
try:
self.starting_processor.join() # If this stops blocking, the other processors are valid
self.websites_processor.join()
self.links_processor.join()
except KeyboardInterrupt:
self.stop()
def start(self, start_url, clear_store=True):
logging.info("Starting crawling at %s", start_url)
self.is_crawling = True
self.add_link(start_url)
self.starting_processor = threading.Thread(target=Crawler._start_async, args=[self, clear_store])
self.starting_processor.start()
def add_link(self, link):
link = self.link_constraint.get_valid(link)
if link is None:
return
self.pending_links.put(link)
def stop(self):
if self.is_crawling: # Race condition safe (could be executed multiple times)
logging.info("Stopping crawling")
self.is_crawling = False
self.pending_websites.put(None) # Ensure threads do not wait forever and exit
self.pending_links.put(None)
@staticmethod
def download_website(url, timeout):
# Download and read website
logging.debug("Downloading website %s", url)
try:
website = urllib.request.urlopen(url, timeout=timeout).read()
except socket_timeout:
logging.debug("Timeout error when downloading %s", url)
website = None
except urllib.error.HTTPError as err:
if int(err.code / 100) == 4:
logging.debug("Client http error when downloading %s %s", url, err)
website = NotResolvable # 404 Not Found or other Client Error, ignore link in future
else:
logging.debug("HTTP Error when downloading %d %s %s", err.code, url, err)
website = None
except urllib.error.URLError as err:
logging.debug("Url error when downloading %s %s", url, err)
website = None
except RemoteDisconnected as disc:
logging.debug("(RemoteDisconnect) error when downloading %s %s", url, disc)
website = NotResolvable
except UnicodeEncodeError:
logging.debug("(UnicodeEncodeError) error when downloading %s", url)
website = NotResolvable
return website
def crawl_mathy():
# Build constraint that describes which outgoing WebNode links to follow
constraint = LinkConstraint('http', 'www.math.kit.edu')
# Prevent downloading links with these endings
# Frequent candidates: '.png', '.jpg', '.jpeg', '.pdf', '.ico', '.doc', '.txt', '.gz', '.zip', '.tar','.ps',
# '.docx', '.tex', 'gif', '.ppt', '.m', '.mw', '.mp3', '.wav', '.mp4'
forbidden_endings = ['.pdf', '.png', '.ico', '#top'] # for fast exclusion
constraint.add_rule(lambda link: all((not link.lower().endswith(ending) for ending in forbidden_endings)))
# Forbid every point in the last path segment as this likely is a file and we are not interested in it
def rule_no_point_in_last_path_segment(link_parsed):
split = link_parsed.path.split("/")
return len(split) == 0 or "." not in split[-1]
constraint.add_rule(rule_no_point_in_last_path_segment, parsed_link=True)
# Start the crawler from a start domain, optionally loading already existing nodes
from pyoogle.config import DATABASE_PATH
path = DATABASE_PATH
c = Crawler(path, constraint)
c.start("http://www.math.kit.edu", clear_store=False)
# Wait for the crawler to finish
c.join()
webnet = c.web_net
logging.info("DONE, webnet contains %d nodes", len(webnet))
return path, webnet
def crawl_spon():
constraint = LinkConstraint('', 'www.spiegel.de')
# Forbid every point in the last path segment as this likely is a file and we are not interested in it
def rule_no_point_in_last_path_segment(link_parsed):
split = link_parsed.path.split("/")
return len(split) == 0 or ("." not in split[-1] or
split[-1].lower().endswith(".html") or split[-1].lower().endswith(".htm"))
constraint.add_rule(rule_no_point_in_last_path_segment, parsed_link=True)
path = "/home/daniel/PycharmProjects/PageRank/spon.db"
c = Crawler(path, constraint)
c.start("http://www.spiegel.de", clear_store=False)
# Wait for the crawler to finish
c.join()
webnet = c.web_net
logging.info("DONE, webnet contains %d nodes", len(webnet))
return path, webnet
if __name__ == "__main__":
crawl_spon()
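For reference, the retry/give-up dispatch implied by `download_website` and `link_got_processed` can be isolated; the names below are illustrative, not part of the module:

```python
NOT_RESOLVABLE = "NOT_RESOLVABLE_LINK"  # module-level sentinel, compared by identity

def classify_result(website):
    # permanent failure (e.g. HTTP 4xx): never retry this link
    if website is NOT_RESOLVABLE:
        return "drop"
    # transient failure (timeout, 5xx): put the link back in the queue
    if website is None:
        return "retry"
    return "process"

print(classify_result(NOT_RESOLVABLE))    # → drop
print(classify_result(None))              # → retry
print(classify_result(b"<html></html>"))  # → process
```

The `is` checks work because every reference to the sentinel points at the same string object.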
# === File: 13.part2.py (repo: elp2/advent_of_code_2018, license: Apache-2.0) ===
from collections import defaultdict
REAL=open("13.txt").readlines()
SAMPLE=open("13.sample2").readlines()
def parse_lines(lines):
return list(map(list, lines))
CARTS = "^>v<"
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]
def cart_positions(start, facing, board):
poses = []
pos = start
corners = 0
fidx = DIRS.index(facing)
while True:
poses.append(pos)
x, y = pos
here = board[y][x]
delta = 0
if here == "\\":
delta = [-1, 1, -1, 1][fidx]
elif here == "/":
delta = [1, -1, 1, -1][fidx]
elif here == "+":
cmod = corners % 3
if cmod == 0:
delta = -1
elif cmod == 1:
delta = 0
elif cmod == 2:
delta = 1
corners += 1
else:
assert here in CARTS or here in "|-+"
fidx = (fidx + len(DIRS) + delta) % len(DIRS)
facing = DIRS[fidx]
dx, dy = facing
x += dx
y += dy
pos = (x, y)
if pos == start and corners % 3 == 0:
break
return poses
def solve(lines):
carts = []
parsed = parse_lines(lines)
ats = {}
for y in range(len(lines)):
for x in range(len(lines[y])):
here = parsed[y][x]
if here in CARTS:
facing = DIRS[CARTS.index(here)]
pos = (x, y)
carts.append(cart_positions(pos, facing, parsed))
ats[pos] = len(carts) - 1
t = 0
dead_carts = set()
while True:
moved = set()
for y in range(len(parsed)):
for x in range(len(parsed[y])):
pos = (x, y)
if pos not in ats:
continue
cidx = ats[pos]
if cidx in moved:
continue
moved.add(cidx)
cart = carts[cidx]
cart_next = cart[(t + 1) % len(cart)]
if cart_next in ats:
dead_carts.add(cidx)
dead2 = ats[cart_next]
dead_carts.add(dead2)
print("Crash at ", cart_next, " from ", pos, cidx, dead2)
del ats[cart_next]
del ats[pos]
if len(ats) == 1:
at = list(ats.keys())[0]
print("EARLY: " + str(at[0]) + "," + str(at[1]))
else:
ats[cart_next] = cidx
del ats[pos]
# assert len(ats) + len(dead_carts) == len(carts)
# assert len(set(ats.keys()).intersection(dead_carts)) == 0
if len(ats) == 1:
at = list(ats.keys())[0]
return str(at[0]) + "," + str(at[1])
t += 1
sample = solve(SAMPLE)
assert sample == "6,4"
print("*** SAMPLE PASSED ***")
print(solve(REAL)) # not 93,59
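The `cmod = corners % 3` branch in `cart_positions` encodes the puzzle's intersection rule (turn left, go straight, turn right, repeating); isolated here for clarity:

```python
def turn_delta(corners):
    # -1 = turn left, 0 = straight, +1 = turn right, cycling every 3 intersections
    return (-1, 0, 1)[corners % 3]

print([turn_delta(i) for i in range(6)])  # → [-1, 0, 1, -1, 0, 1]
```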
# === File: docs/_downloads/485d1a22616717976d2f85cbaf046db3/plot__jitterdodge_position.py (repo: IKupriyanov-HORIS/lets-plot-docs, license: MIT) ===
"""
Jitterdodge Position
====================
Position adjustments determine how to arrange geoms that would otherwise
occupy the same space.
Simultaneously dodge and jitter in one function:
``position_jitterdodge()``.
See
`position_jitterdodge() <https://jetbrains.github.io/lets-plot-docs/pages/api/lets_plot.position_jitterdodge.html#lets_plot.position_jitterdodge>`__.
"""
# sphinx_gallery_thumbnail_path = "gallery_py\_position_adjustments\_jitterdodge_position.png"
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
# %%
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
# %%
ggplot(df, aes('cyl', 'hwy', group='drv', fill='drv')) + \
geom_boxplot() + \
    geom_point(position='jitterdodge', shape=21, color='black')
# === File: mask_example/classification_vars.py (repo: ami-a/MaskDetection, license: MIT) ===
"""loading the classification model variables for the detector object"""
import numpy as np
import cv2
from TrackEverything.tool_box import ClassificationVars
def get_class_vars(class_model_path):
"""loading the classification model variables for the detector object
We define here the model interpolation function so the detector
can use the classification model
Args:
class_model_path (str): classification model path
Returns:
ClassificationVars: classification variables for the detector
"""
#custom classification model interpolation
def custom_classify_detection(model,det_images,size=(224,224)):
"""Classify a batch of images
Args:
model (tensorflow model): classification model
det_images (np.array): batch of images in numpy array to classify
size (tuple, optional): size to resize to, 1-D int32 Tensor of 2 elements:
new_height, new_width (if None then no resizing).
(In custom function you can use model.inputs[0].shape.as_list()
and set size to default)
Returns:
            NxM numpy array where N is the number of images and M the number of
                classes, filled with scores. For example, two images (car, plane)
                with three possible classes (car, plane, lion) that are identified
                correctly with 90% in the correct category and the rest divided
                equally will return [[0.9,0.05,0.05],[0.05,0.9,0.05]].
"""
#resize bounding box capture to fit classification model
if size is not None:
det_images=np.asarray(
[
cv2.resize(img, size, interpolation = cv2.INTER_LINEAR) for img in det_images
]
)
predictions=model.predict(det_images/255.)
#if class is binary make sure size is 2
if len(predictions)>0 and len(predictions[0])<2:
            reshaped_pred=np.ones((len(predictions),2))
            #size of classification list is 1 so turn it to 2
            for ind,pred in enumerate(predictions):
                #use the scalar score so the length-2 row assignment broadcasts
                score=float(pred[0])
                reshaped_pred[ind,:]=score,1-score
                #print(reshaped_pred)
            predictions=reshaped_pred
return predictions
#providing only the classification model path for ClassificationVars
    #since the default loading method
#tf.keras.models.load_model(path) will work
return ClassificationVars(
class_model_path=class_model_path,
class_proccessing=custom_classify_detection
)
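The binary-to-two-column reshape performed by `custom_classify_detection` can be checked in isolation; a minimal numpy sketch with made-up scores:

```python
import numpy as np

# single-sigmoid model output for two detections (made-up scores)
predictions = np.array([[0.75], [0.5]])

reshaped = np.ones((len(predictions), 2))
for ind, pred in enumerate(predictions):
    score = float(pred[0])           # scalar score for the positive class
    reshaped[ind, :] = score, 1 - score

print(reshaped.tolist())  # → [[0.75, 0.25], [0.5, 0.5]]
```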
# === File: detector/urls.py (repo: SPIN-RD/data_analysis, license: MIT) ===
from django.urls import path
from .views import (
MeasurementCreateView,
MeasurementRetrieveView,
energy_spectrum_analysis,
half_life_analysis,
index,
)
urlpatterns = [
path("api/measurements/", MeasurementCreateView.as_view()),
path(
"api/measurements/<str:device_id>/<str:mode>", MeasurementRetrieveView.as_view()
),
path("", index, name="index"),
path(
"detector/half-life/<str:device_id>",
half_life_analysis,
name="half-life-analysis",
),
path(
"detector/energy-spectrum/<str:device_id>",
energy_spectrum_analysis,
name="energy-spectrum-analysis",
),
]
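Under the hood, Django compiles a route like `detector/half-life/<str:device_id>` into a regex with a named capture group; a rough stdlib illustration (the exact pattern Django builds differs slightly):

```python
import re

# approximate regex Django derives from "detector/half-life/<str:device_id>"
pattern = re.compile(r"^detector/half-life/(?P<device_id>[^/]+)$")

match = pattern.match("detector/half-life/device42")
print(match.group("device_id"))  # → device42
```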
# === File: bin/pylint_runner.py (repo: PickBas/meta-social, license: MIT) ===
'''
The MIT License (MIT)
Copyright (c) 2015 Matthew Peveler
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
# https://github.com/MasterOdin/pylint_runner
from argparse import ArgumentParser
import configparser
import os
import sys
import colorama
import pylint
import pylint.lint
PYTHON_VERSION = ".".join([str(x) for x in sys.version_info[0:3]])
class Runner:
""" A pylint runner that will lint all files recursively from the CWD. """
DEFAULT_IGNORE_FOLDERS = [".git", ".idea", "__pycache__"]
DEFAULT_ARGS = ["--reports=n", "--output-format=colorized"]
DEFAULT_RCFILE = ".pylintrc"
def __init__(self, args=None):
colorama.init(autoreset=True)
self.verbose = False
self.args = self.DEFAULT_ARGS
self.rcfile = self.DEFAULT_RCFILE
self.ignore_folders = self.DEFAULT_IGNORE_FOLDERS
self._parse_args(args or sys.argv[1:])
self._parse_ignores()
def _parse_args(self, args):
"""Parses any supplied command-line args and provides help text. """
parser = ArgumentParser(description="Runs pylint recursively on a directory")
parser.add_argument(
"-v",
"--verbose",
dest="verbose",
action="store_true",
default=False,
help="Verbose mode (report which files were found for testing).",
)
parser.add_argument(
"--rcfile",
dest="rcfile",
action="store",
default=".pylintrc",
help="A relative or absolute path to your pylint rcfile. Defaults to\
`.pylintrc` at the current working directory",
)
options, _ = parser.parse_known_args(args)
self.verbose = options.verbose
if options.rcfile:
if not os.path.isfile(options.rcfile):
options.rcfile = os.getcwd() + "/" + options.rcfile
self.rcfile = options.rcfile
return options
def _parse_ignores(self):
""" Parse the ignores setting from the pylintrc file if available. """
error_message = (
colorama.Fore.RED
+ "{} does not appear to be a valid pylintrc file".format(self.rcfile)
+ colorama.Fore.RESET
)
if not os.path.isfile(self.rcfile):
if not self._is_using_default_rcfile():
print(error_message)
sys.exit(1)
else:
return
config = configparser.ConfigParser()
try:
config.read(self.rcfile)
except configparser.MissingSectionHeaderError:
print(error_message)
sys.exit(1)
if config.has_section("MASTER") and config.get("MASTER", "ignore"):
self.ignore_folders += config.get("MASTER", "ignore").split(",")
def _is_using_default_rcfile(self):
return self.rcfile == os.getcwd() + "/" + self.DEFAULT_RCFILE
def _print_line(self, line):
""" Print output only with verbose flag. """
        skip_tokens = ('test', 'migrations')  # 'test' also covers 'test_settings'
        if self.verbose and line != 'pylint_runner.py' \
                and not any(token in line for token in skip_tokens):
            print(line)
def get_files_from_dir(self, current_dir):
"""
Recursively walk through a directory and get all python files and then walk
through any potential directories that are found off current directory,
so long as not within self.IGNORE_FOLDERS
:return: all python files that were found off current_dir
"""
if current_dir[-1] != "/" and current_dir != ".":
current_dir += "/"
files = []
for dir_file in os.listdir(current_dir):
if current_dir != ".":
file_path = current_dir + dir_file
else:
file_path = dir_file
if os.path.isfile(file_path):
file_split = os.path.splitext(dir_file)
if len(file_split) == 2 and file_split[0] != "" \
and file_split[1] == ".py":
files.append(file_path)
elif (os.path.isdir(dir_file) or os.path.isdir(file_path)) \
and dir_file not in self.ignore_folders:
path = dir_file + os.path.sep
if current_dir not in ["", "."]:
path = os.path.join(current_dir.rstrip(os.path.sep), path)
files += self.get_files_from_dir(path)
return files
def run(self, output=None, error=None):
""" Runs pylint on all python files in the current directory """
pylint_output = output if output is not None else sys.stdout
pylint_error = error if error is not None else sys.stderr
savedout, savederr = sys.__stdout__, sys.__stderr__
sys.stdout = pylint_output
sys.stderr = pylint_error
        pylint_files = self.get_files_from_dir(os.curdir)
        for pylint_file in pylint_files:
            # we need to recast this as a string, else pylint enters an endless recursion
            split_file = str(pylint_file).split("/")
            split_file[-1] = colorama.Fore.CYAN + split_file[-1] + colorama.Fore.RESET
            colored_file = "/".join(split_file)
            if 'pylint' not in colored_file:
                self._print_line(colored_file)
        # actually lint the collected files; exit=False (pylint >= 2.5) keeps the
        # process alive so the original streams can be restored afterwards
        pylint.lint.Run(self.args + ["--rcfile=" + self.rcfile] + pylint_files, exit=False)
        sys.stdout, sys.stderr = savedout, savederr
def main(output=None, error=None, verbose=False):
""" The main (cli) interface for the pylint runner. """
runner = Runner(args=["--verbose"] if verbose is not False else None)
runner.run(output, error)
if __name__ == "__main__":
main(verbose=True)
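`get_files_from_dir` walks the directory tree by hand; an equivalent sketch built on `os.walk` (illustrative, not part of the original package):

```python
import os
import tempfile

def find_py_files(root, ignore=(".git", ".idea", "__pycache__")):
    """os.walk-based sketch equivalent to Runner.get_files_from_dir."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune ignored folders in place so os.walk never descends into them
        dirnames[:] = [d for d in dirnames if d not in ignore]
        found.extend(os.path.join(dirpath, name)
                     for name in filenames if name.endswith(".py"))
    return found

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "pkg"))
    os.makedirs(os.path.join(tmp, "__pycache__"))
    for rel in ("a.py", os.path.join("pkg", "b.py"),
                os.path.join("__pycache__", "c.py"), "notes.txt"):
        open(os.path.join(tmp, rel), "w").close()
    hits = sorted(os.path.relpath(p, tmp) for p in find_py_files(tmp))
    print(hits)  # → ['a.py', 'pkg/b.py'] on POSIX
```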
# === File: mysite/polls/migrations/0007_auto_20150314_0332.py (repo: aaronkrolik/rule46, license: Apache-2.0) ===
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations


class Migration(migrations.Migration):

    dependencies = [
        ('polls', '0006_auto_20150314_0320'),
    ]

    operations = [
        migrations.CreateModel(
            name='Accolade',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('title', models.CharField(max_length=200)),
                ('accolade_text', models.TextField()),
                ('player', models.ForeignKey(to='polls.Player')),
            ],
            options={
            },
            bases=(models.Model,),
        ),
        migrations.AddField(
            model_name='player',
            name='position',
            field=models.CharField(default='x', max_length=200),
            preserve_default=False,
        ),
        migrations.AddField(
            model_name='player',
            name='salary',
            field=models.IntegerField(default=0),
            preserve_default=True,
        ),
        migrations.AddField(
            model_name='player',
            name='team',
            field=models.CharField(default='x', max_length=200),
            preserve_default=False,
        ),
    ]
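`preserve_default=False` in the `AddField` operations above means the `'x'` default is used only once, to backfill rows that existed before the column was added, and is not kept on the model afterwards. A pure-Python sketch of that backfill semantics (the row dicts and helper name are hypothetical, not Django API):

```python
# Illustrates what a one-off default (preserve_default=False) implies for
# existing rows: rows that predate the new column get the value backfilled,
# rows that already have a value are untouched. Hypothetical helper.
def backfill_field(rows, name, one_off_default):
    for row in rows:
        row.setdefault(name, one_off_default)
    return rows

players = [{"id": 1}, {"id": 2, "position": "GK"}]
backfill_field(players, "position", "x")
```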
# File: core/objs/zona.py (repo: aanacleto/erp-, license: MIT)
#!/usr/bin/env python3
# -*- encoding: utf-8 -*-
"""
ERP+
"""
__author__ = 'António Anacleto'
__credits__ = []
__version__ = "1.0"
__maintainer__ = "António Anacleto"
__status__ = "Development"
__model_name__ = 'zona.Zona'

import auth, base_models
from orm import *
from form import *


class Zona(Model, View):

    def __init__(self, **kargs):
        Model.__init__(self, **kargs)
        self.__name__ = 'zona'
        self.__title__ = 'Zonas de Distribuição'
        self.__model_name__ = __model_name__
        self.__list_edit_mode__ = 'inline'
        self.__order_by__ = 'zona.nome'
        self.__auth__ = {
            'read': ['All'],
            'write': ['Gestor'],
            'create': ['Gestor'],
            'delete': ['Gestor'],
            'full_access': ['Gestor']
        }
        self.__get_options__ = ['nome']
        self.nome = string_field(view_order=1, name='Nome', size=80)
        self.contratos = list_field(view_order=2, name='Contratos', model_name='contrato.Contrato', condition="zona='{id}'", list_edit_mode='edit', onlist=False)
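The `__auth__` mapping above associates each operation with the groups allowed to perform it, with `'All'` opening an operation to everyone. A minimal sketch of how such a check could work (the helper and its semantics are assumptions, not the actual ERP+ auth module):

```python
# Hypothetical role check against a Zona-style __auth__ mapping.
AUTH = {
    'read': ['All'],
    'write': ['Gestor'],
    'create': ['Gestor'],
    'delete': ['Gestor'],
    'full_access': ['Gestor'],
}

def is_allowed(auth, operation, user_groups):
    # 'All' grants the operation to any user; otherwise the user needs at
    # least one group listed for that operation.
    allowed = auth.get(operation, [])
    return 'All' in allowed or any(g in allowed for g in user_groups)
```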
# File: biothings/hub/upgrade.py (repo: sirloon/biothings.api, license: Apache-2.0)
import sys
from biothings.utils.hub_db import get_src_dump, get_data_plugin, get_hub_db_conn, backup, restore
from biothings import config

logging = config.logger


def migrate_0dot1_to_0dot2():
    """
    mongodb src_dump/data_plugin changed:
    1. "data_folder" and "release" under "download"
    2. "data_folder" and "release" in upload.jobs[subsrc] taken from "download"
    3. no more "err" under "upload"
    4. no more "status" under "upload"
    5. "pending_to_upload" is now "pending": ["upload"]
    """
    src_dump = get_src_dump()
    data_plugin = get_data_plugin()
    for srccol in [src_dump, data_plugin]:
        logging.info("Converting collection %s" % srccol)
        srcs = [src for src in srccol.find()]
        wasdue = False
        for src in srcs:
            logging.info("\tConverting '%s'" % src["_id"])
            # 1.
            for field in ["data_folder", "release"]:
                if field in src:
                    logging.debug("\t\t%s: found '%s' in document, moving under 'download'" % (src["_id"], field))
                    try:
                        src["download"][field] = src.pop(field)
                        wasdue = True
                    except KeyError as e:
                        logging.warning("\t\t%s: no such field '%s' found, skip it (error: %s)" % (src["_id"], field, e))
            # 2.
            for subsrc_name in src.get("upload", {}).get("jobs", {}):
                for field in ["data_folder", "release"]:
                    if field not in src["upload"]["jobs"][subsrc_name]:
                        logging.debug("\t\t%s: no '%s' found in upload jobs, taking it from 'download' (or from root keys)" % (src["_id"], field))
                        try:
                            src["upload"]["jobs"][subsrc_name][field] = src["download"][field]
                            wasdue = True
                        except KeyError:
                            try:
                                src["upload"]["jobs"][subsrc_name][field] = src[field]
                                wasdue = True
                            except KeyError:
                                logging.warning("\t\t%s: no such field '%s' found, skip it" % (src["_id"], field))
            # 3. & 4.
            for field in ["err", "status"]:
                if field in src.get("upload", {}):
                    logging.debug("\t\t%s: removing '%s' key from 'upload'" % (src["_id"], field))
                    src["upload"].pop(field)
                    wasdue = True
            # 5.
            if "pending_to_upload" in src:
                logging.debug("\t%s: found 'pending_to_upload' field, moving to 'pending' list" % src["_id"])
                src.pop("pending_to_upload")
                wasdue = True
                if "upload" not in src.get("pending", []):
                    src.setdefault("pending", []).append("upload")
            if wasdue:
                logging.info("\tFinishing converting document for '%s'" % src["_id"])
                srccol.save(src)
            else:
                logging.info("\tDocument for '%s' already converted" % src["_id"])


def migrate(from_version, to_version, restore_if_failure=True):
    func_name = "migrate_%s_to_%s" % (from_version.replace(".", "dot"),
                                      to_version.replace(".", "dot"))
    # backup
    db = get_hub_db_conn()[config.DATA_HUB_DB_DATABASE]
    logging.info("Backing up %s" % db.name)
    path = backup()
    logging.info("Backup file: %s" % path)
    thismodule = sys.modules[__name__]
    try:
        func = getattr(thismodule, func_name)
    except AttributeError:
        logging.error("Can't upgrade, no such function to migrate from '%s' to '%s'" % (from_version, to_version))
        raise
    # resolve A->C = A->B then B->C
    logging.info("Start upgrading from '%s' to '%s'" % (from_version, to_version))
    try:
        func()
    except Exception as e:
        logging.exception("Failed upgrading: %s" % e)
        if restore_if_failure:
            logging.info("Now restoring original database from '%s'" % path)
            restore(db, path, drop=True)
            logging.info("Done. If you want to keep converted data for inspection, use restore_if_failure=False")
        else:
            logging.info("*not* restoring original data. It can still be restored using file '%s'" % path)
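Step 1 of `migrate_0dot1_to_0dot2` moves the top-level `"data_folder"`/`"release"` keys under `"download"`. The same transformation on a plain dict, detached from MongoDB (the helper name is hypothetical; the original guards the move with `try/except KeyError` rather than a membership test):

```python
# Standalone sketch of step 1 of the 0.1 -> 0.2 document migration: move
# top-level "data_folder"/"release" under "download". Plain dicts stand in
# for MongoDB documents; helper name is hypothetical.
def move_under_download(src):
    wasdue = False
    for field in ["data_folder", "release"]:
        if field in src and "download" in src:
            src["download"][field] = src.pop(field)
            wasdue = True
    return wasdue

doc = {"_id": "mysrc", "download": {}, "data_folder": "/data/mysrc", "release": "2020-01"}
move_under_download(doc)
```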
# File: capirca/lib/gce.py (repo: supertylerc/capirca, license: Apache-2.0)
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Google Compute Engine firewall generator.
More information about GCE networking and firewalls:
https://cloud.google.com/compute/docs/networking
https://cloud.google.com/compute/docs/reference/latest/firewalls
"""
import copy
import datetime
import ipaddress
import json
import logging
import re
from typing import Dict, Any
from capirca.lib import gcp
from capirca.lib import nacaddr
import six
class Error(Exception):
  """Generic error class."""


class GceFirewallError(Error):
  """Raised with problems in formatting for GCE firewall."""


class ExceededAttributeCountError(Error):
  """Raised when the total attribute count of a policy is above the maximum."""
def IsDefaultDeny(term):
  """Returns true if a term is a default deny without IPs, ports, etc."""
  skip_attrs = ['flattened', 'flattened_addr', 'flattened_saddr',
                'flattened_daddr', 'action', 'comment', 'name', 'logging']
  if 'deny' not in term.action:
    return False
  # This list comprehension will look through all methods and attributes of the
  # object. It returns only the attributes that need to be looked at to
  # determine if this is a default deny.
  for i in [a for a in dir(term) if not a.startswith('__') and
            a.islower() and not callable(getattr(term, a))]:
    if i in skip_attrs:
      continue
    v = getattr(term, i)
    if isinstance(v, str) and v:
      return False
    if isinstance(v, list) and v:
      return False

  return True


def GetNextPriority(priority):
  """Get the priority for the next rule."""
  return priority
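The attribute scan in `IsDefaultDeny` can be demonstrated on a dummy term object: any non-empty string or list attribute outside `skip_attrs` disqualifies the term. The class below is a stand-in, not capirca's real `Term`:

```python
# Minimal demonstration of the IsDefaultDeny attribute scan on a dummy
# term object (stand-in class, not capirca's Term).
class DummyTerm:
    def __init__(self):
        self.action = ['deny']
        self.comment = ['default deny']   # skipped via skip_attrs
        self.name = 'default-deny'        # skipped via skip_attrs
        self.protocol = []                # empty, so does not disqualify

def is_default_deny(term, skip_attrs=('action', 'comment', 'name', 'logging')):
    if 'deny' not in term.action:
        return False
    # Scan lowercase, non-dunder, non-callable attributes only.
    for i in [a for a in dir(term) if not a.startswith('__')
              and a.islower() and not callable(getattr(term, a))]:
        if i in skip_attrs:
            continue
        v = getattr(term, i)
        if isinstance(v, (str, list)) and v:
            return False
    return True
```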
class Term(gcp.Term):
  """Creates the term for the GCE firewall."""

  ACTION_MAP = {'accept': 'allowed',
                'deny': 'denied'}
  # Restrict the number of addresses per term to 256.
  # Similar restrictions apply to source and target tags, and ports.
  # Details: https://cloud.google.com/vpc/docs/quota#per_network_2
  _TERM_ADDRESS_LIMIT = 256
  _TERM_SOURCE_TAGS_LIMIT = 30
  _TERM_TARGET_TAGS_LIMIT = 70
  _TERM_PORTS_LIMIT = 256

  # Firewall rule name has to match specific RE:
  # The first character must be a lowercase letter, and all following characters
  # must be a dash, lowercase letter, or digit, except the last character, which
  # cannot be a dash.
  # Details: https://cloud.google.com/compute/docs/reference/latest/firewalls
  _TERM_NAME_RE = re.compile(r'^[a-z]([-a-z0-9]*[a-z0-9])?$')

  # Protocols allowed by name from:
  # https://cloud.google.com/vpc/docs/firewalls#protocols_and_ports
  _ALLOW_PROTO_NAME = frozenset(
      ['tcp', 'udp', 'icmp', 'esp', 'ah', 'ipip', 'sctp',
       'all'  # Needed for default deny, do not use in policy file.
      ])

  # Any protocol not in _ALLOW_PROTO_NAME must be passed by number.
  ALWAYS_PROTO_NUM = set(gcp.Term.PROTO_MAP.keys()) - _ALLOW_PROTO_NAME
  def __init__(self, term, inet_version='inet', policy_inet_version='inet'):
    super().__init__(term)
    self.term = term
    self.inet_version = inet_version
    # This is to handle mixed, where the policy_inet_version is mixed,
    # but the term inet version is either inet/inet6.
    # This is only useful for term name and priority.
    self.policy_inet_version = policy_inet_version

    self._validateDirection()
    if self.term.source_address_exclude and not self.term.source_address:
      raise GceFirewallError(
          'GCE firewall does not support address exclusions without a source '
          'address list.')
    # The reason for the error below isn't because of a GCE restriction, but
    # because we don't want to use a bad default of GCE that allows talking
    # to anything when there's no source address, source tag, or source service
    # account.
    if (not self.term.source_address and
        not self.term.source_tag) and self.term.direction == 'INGRESS':
      raise GceFirewallError(
          'GCE firewall needs either to specify source address or source tags.')
    if self.term.source_port:
      raise GceFirewallError(
          'GCE firewall does not support source port restrictions.')
    if (self.term.source_address_exclude and self.term.source_address or
        self.term.destination_address_exclude and
        self.term.destination_address):
      self.term.FlattenAll()
      if not self.term.source_address and self.term.direction == 'INGRESS':
        raise GceFirewallError(
            'GCE firewall rule no longer contains any source addresses after '
            'the prefixes in source_address_exclude were removed.')
      # Similarly to the comment above, the reason for this error is also
      # because we do not want to use the bad default of GCE that allows for
      # talking to anything when there is no IP address provided for this field.
      if not self.term.destination_address and self.term.direction == 'EGRESS':
        raise GceFirewallError(
            'GCE firewall rule no longer contains any destination addresses '
            'after the prefixes in destination_address_exclude were removed.')
  def __str__(self):
    """Convert term to a string."""
    return json.dumps(self.ConvertToDict(), indent=2,
                      separators=(six.ensure_str(','), six.ensure_str(': ')))
  def _validateDirection(self):
    if self.term.direction == 'INGRESS':
      if not self.term.source_address and not self.term.source_tag:
        raise GceFirewallError(
            'Ingress rule missing required field oneof "sourceRanges" or '
            '"sourceTags".')
      if self.term.destination_address:
        raise GceFirewallError('Ingress rules cannot include '
                               '"destinationRanges.')
    elif self.term.direction == 'EGRESS':
      if self.term.source_address:
        raise GceFirewallError(
            'Egress rules cannot include "sourceRanges".')
      if not self.term.destination_address:
        raise GceFirewallError(
            'Egress rule missing required field "destinationRanges".')
      if self.term.destination_tag:
        raise GceFirewallError(
            'GCE Egress rule cannot have destination tag.')
  def ConvertToDict(self):
    """Convert term to a dictionary.

    This is used to get a dictionary describing this term which can be
    output easily as a JSON blob.

    Returns:
      A dictionary that contains all fields necessary to create or update a GCE
      firewall.

    Raises:
      GceFirewallError: The term name is too long.
    """
    if self.term.owner:
      self.term.comment.append('Owner: %s' % self.term.owner)
    term_dict = {
        'description': ' '.join(self.term.comment),
        'name': self.term.name,
        'direction': self.term.direction
    }
    if self.term.network:
      term_dict['network'] = self.term.network
      term_dict['name'] = '%s-%s' % (
          self.term.network.split('/')[-1], term_dict['name'])

    # Identify if this is inet6 processing for a term under a mixed policy.
    mixed_policy_inet6_term = False
    if self.policy_inet_version == 'mixed' and self.inet_version == 'inet6':
      mixed_policy_inet6_term = True

    # Update term name to have the IPv6 suffix for the inet6 rule.
    if mixed_policy_inet6_term:
      term_dict['name'] = gcp.GetIpv6TermName(term_dict['name'])

    # Checking counts of tags, and ports to see if they exceeded limits.
    if len(self.term.source_tag) > self._TERM_SOURCE_TAGS_LIMIT:
      raise GceFirewallError(
          'GCE firewall rule exceeded number of source tags per rule: %s' %
          self.term.name)
    if len(self.term.destination_tag) > self._TERM_TARGET_TAGS_LIMIT:
      raise GceFirewallError(
          'GCE firewall rule exceeded number of target tags per rule: %s' %
          self.term.name)

    if self.term.source_tag:
      if self.term.direction == 'INGRESS':
        term_dict['sourceTags'] = self.term.source_tag
      elif self.term.direction == 'EGRESS':
        term_dict['targetTags'] = self.term.source_tag
    if self.term.destination_tag and self.term.direction == 'INGRESS':
      term_dict['targetTags'] = self.term.destination_tag

    if self.term.priority:
      term_dict['priority'] = self.term.priority
      # Update term priority for the inet6 rule.
      if mixed_policy_inet6_term:
        term_dict['priority'] = GetNextPriority(term_dict['priority'])

    rules = []

    # If 'mixed' ends up in indvidual term inet_version, something has gone
    # horribly wrong. The only valid values are inet/inet6.
    term_af = self.AF_MAP.get(self.inet_version)
    if self.inet_version == 'mixed':
      raise GceFirewallError(
          'GCE firewall rule has incorrect inet_version for rule: %s' %
          self.term.name)

    # Exit early for inet6 processing of mixed rules that have only tags,
    # and no IP addresses, since this is handled in the inet processing.
    if mixed_policy_inet6_term:
      if not self.term.source_address and not self.term.destination_address:
        if 'targetTags' in term_dict or 'sourceTags' in term_dict:
          return []

    saddrs = sorted(self.term.GetAddressOfVersion('source_address', term_af),
                    key=ipaddress.get_mixed_type_key)
    daddrs = sorted(
        self.term.GetAddressOfVersion('destination_address', term_af),
        key=ipaddress.get_mixed_type_key)

    # If the address got filtered out and is empty due to address family, we
    # don't render the term. At this point of term processing, the direction
    # has already been validated, so we can just log and return empty rule.
    if self.term.source_address and not saddrs:
      logging.warning(
          'WARNING: Term %s is not being rendered for %s, '
          'because there are no addresses of that family.', self.term.name,
          self.inet_version)
      return []
    if self.term.destination_address and not daddrs:
      logging.warning(
          'WARNING: Term %s is not being rendered for %s, '
          'because there are no addresses of that family.', self.term.name,
          self.inet_version)
      return []

    if not self.term.protocol:
      raise GceFirewallError(
          'GCE firewall rule contains no protocol, it must be specified.')
    proto_dict = copy.deepcopy(term_dict)

    if self.term.logging:
      proto_dict['logConfig'] = {'enable': True}

    filtered_protocols = []
    for proto in self.term.protocol:
      # ICMP filtering by inet_version
      # Since each term has inet_version, 'mixed' is correctly processed here.
      # Convert protocol to number for uniformity of comparison.
      # PROTO_MAP always returns protocol number.
      if proto in self._ALLOW_PROTO_NAME:
        proto_num = self.PROTO_MAP[proto]
      else:
        proto_num = proto
      if proto_num == self.PROTO_MAP['icmp'] and self.inet_version == 'inet6':
        logging.warning(
            'WARNING: Term %s is being rendered for inet6, ICMP '
            'protocol will not be rendered.', self.term.name)
        continue
      if proto_num == self.PROTO_MAP['icmpv6'] and self.inet_version == 'inet':
        logging.warning(
            'WARNING: Term %s is being rendered for inet, ICMPv6 '
            'protocol will not be rendered.', self.term.name)
        continue
      if proto_num == self.PROTO_MAP['igmp'] and self.inet_version == 'inet6':
        logging.warning(
            'WARNING: Term %s is being rendered for inet6, IGMP '
            'protocol will not be rendered.', self.term.name)
        continue
      filtered_protocols.append(proto)
    # If there is no protocol left after ICMP/IGMP filtering, drop this term.
    if not filtered_protocols:
      return []

    for proto in filtered_protocols:
      # If the protocol name is not supported, protocol number is used.
      # This is done by default in policy.py.
      if proto not in self._ALLOW_PROTO_NAME:
        logging.info(
            'INFO: Term %s is being rendered using protocol number',
            self.term.name)
      dest = {
          'IPProtocol': proto
      }

      if self.term.destination_port:
        ports = []
        for start, end in self.term.destination_port:
          if start == end:
            ports.append(str(start))
          else:
            ports.append('%d-%d' % (start, end))
        if len(ports) > self._TERM_PORTS_LIMIT:
          raise GceFirewallError(
              'GCE firewall rule exceeded number of ports per rule: %s' %
              self.term.name)
        dest['ports'] = ports

      action = self.ACTION_MAP[self.term.action[0]]
      dict_val = []
      if action in proto_dict:
        dict_val = proto_dict[action]
        if not isinstance(dict_val, list):
          dict_val = [dict_val]
      dict_val.append(dest)
      proto_dict[action] = dict_val

    # There's a limit of 256 addresses each term can contain.
    # If we're above that limit, we're breaking it down in more terms.
    if saddrs:
      source_addr_chunks = [
          saddrs[x:x + self._TERM_ADDRESS_LIMIT] for x in range(
              0, len(saddrs), self._TERM_ADDRESS_LIMIT)]
      for i, chunk in enumerate(source_addr_chunks):
        rule = copy.deepcopy(proto_dict)
        if len(source_addr_chunks) > 1:
          rule['name'] = '%s-%d' % (rule['name'], i + 1)
        rule['sourceRanges'] = [str(saddr) for saddr in chunk]
        rules.append(rule)
    elif daddrs:
      dest_addr_chunks = [
          daddrs[x:x + self._TERM_ADDRESS_LIMIT] for x in range(
              0, len(daddrs), self._TERM_ADDRESS_LIMIT)]
      for i, chunk in enumerate(dest_addr_chunks):
        rule = copy.deepcopy(proto_dict)
        if len(dest_addr_chunks) > 1:
          rule['name'] = '%s-%d' % (rule['name'], i + 1)
        rule['destinationRanges'] = [str(daddr) for daddr in chunk]
        rules.append(rule)
    else:
      rules.append(proto_dict)

    # Sanity checking term name lengths.
    long_rules = [rule['name'] for rule in rules if len(rule['name']) > 63]
    if long_rules:
      raise GceFirewallError(
          'GCE firewall name ended up being too long: %s' % long_rules)
    return rules
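The address-splitting at the end of `ConvertToDict` breaks a sorted address list into chunks of at most `_TERM_ADDRESS_LIMIT` (256) entries and emits one rule per chunk. The slicing pattern in isolation, with a small limit for illustration:

```python
# The list-chunking pattern from ConvertToDict: slice a list into pieces
# of at most `limit` entries (256 in the generator; small here for demo).
def chunk_addresses(addrs, limit):
    return [addrs[x:x + limit] for x in range(0, len(addrs), limit)]

chunks = chunk_addresses(list(range(10)), 4)  # three chunks: 4 + 4 + 2
```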
class GCE(gcp.GCP):
  """A GCE firewall policy object."""

  _PLATFORM = 'gce'
  SUFFIX = '.gce'
  _SUPPORTED_AF = frozenset(('inet', 'inet6', 'mixed'))
  _ANY_IP = {
      'inet': nacaddr.IP('0.0.0.0/0'),
      'inet6': nacaddr.IP('::/0'),
  }
  # Supported is 63 but we need to account for dynamic updates when the term
  # is rendered (which can add proto and a counter).
  _TERM_MAX_LENGTH = 53
  _GOOD_DIRECTION = ['INGRESS', 'EGRESS']
  _OPTIONAL_SUPPORTED_KEYWORDS = set(['expiration',
                                      'destination_tag',
                                      'source_tag'])
  def _BuildTokens(self):
    """Build supported tokens for platform.

    Returns:
      tuple containing both supported tokens and sub tokens
    """
    supported_tokens, _ = super()._BuildTokens()
    # add extra things
    supported_tokens |= {'destination_tag',
                         'expiration',
                         'owner',
                         'priority',
                         'source_tag'}
    # remove unsupported things
    supported_tokens -= {'icmp_type',
                         'platform',
                         'platform_exclude',
                         'verbatim'}
    # easier to make a new structure
    supported_sub_tokens = {'action': {'accept', 'deny'}}
    return supported_tokens, supported_sub_tokens
  def _TranslatePolicy(self, pol, exp_info):
    self.gce_policies = []
    max_attribute_count = 0
    total_attribute_count = 0
    total_rule_count = 0

    current_date = datetime.datetime.utcnow().date()
    exp_info_date = current_date + datetime.timedelta(weeks=exp_info)

    for header, terms in pol.filters:
      if self._PLATFORM not in header.platforms:
        continue

      filter_options = header.FilterOptions(self._PLATFORM)
      filter_name = header.FilterName(self._PLATFORM)

      network = ''
      direction = 'INGRESS'
      if filter_options:
        for i in self._GOOD_DIRECTION:
          if i in filter_options:
            direction = i
            filter_options.remove(i)

      # Get the address family if set.
      address_family = 'inet'
      for i in self._SUPPORTED_AF:
        if i in filter_options:
          address_family = i
          filter_options.remove(i)

      for opt in filter_options:
        try:
          max_attribute_count = int(opt)
          logging.info(
              'Checking policy for max attribute count %d', max_attribute_count)
          filter_options.remove(opt)
          break
        except ValueError:
          continue

      if filter_options:
        network = filter_options[0]
      else:
        logging.warning('GCE filter does not specify a network.')

      term_names = set()
      if IsDefaultDeny(terms[-1]):
        terms[-1].protocol = ['all']
        terms[-1].priority = 65534
        if direction == 'EGRESS':
          if address_family != 'mixed':
            # Default deny also gets processed as part of terms processing.
            # The name and priority get updated there.
            terms[-1].destination_address = [self._ANY_IP[address_family]]
          else:
            terms[-1].destination_address = [self._ANY_IP['inet'],
                                             self._ANY_IP['inet6']]
        else:
          if address_family != 'mixed':
            terms[-1].source_address = [self._ANY_IP[address_family]]
          else:
            terms[-1].source_address = [
                self._ANY_IP['inet'], self._ANY_IP['inet6']
            ]

      for term in terms:
        if term.stateless_reply:
          logging.warning('WARNING: Term %s in policy %s is a stateless reply '
                          'term and will not be rendered.',
                          term.name, filter_name)
          continue
        term.network = network
        if not term.comment:
          term.comment = header.comment
        if direction == 'EGRESS':
          term.name += '-e'
        term.name = self.FixTermLength(term.name)
        if term.name in term_names:
          raise GceFirewallError('Duplicate term name %s' % term.name)
        term_names.add(term.name)

        term.direction = direction
        if term.expiration:
          if term.expiration <= exp_info_date:
            logging.info('INFO: Term %s in policy %s expires '
                         'in less than two weeks.', term.name, filter_name)
          if term.expiration <= current_date:
            logging.warning('WARNING: Term %s in policy %s is expired and '
                            'will not be rendered.', term.name, filter_name)
            continue
        if term.option:
          raise GceFirewallError(
              'GCE firewall does not support term options.')

        # Handle mixed for each indvidual term as inet and inet6.
        # inet/inet6 are treated the same.
        term_address_families = []
        if address_family == 'mixed':
          term_address_families = ['inet', 'inet6']
        else:
          term_address_families = [address_family]
        for term_af in term_address_families:
          for rules in Term(term, term_af, address_family).ConvertToDict():
            logging.debug('Attribute count of rule %s is: %d', term.name,
                          GetAttributeCount(rules))
            total_attribute_count += GetAttributeCount(rules)
            total_rule_count += 1
            if max_attribute_count and total_attribute_count > max_attribute_count:
              # Stop processing rules as soon as the attribute count is over the
              # limit.
              raise ExceededAttributeCountError(
                  'Attribute count (%d) for %s exceeded the maximum (%d)' %
                  (total_attribute_count, filter_name, max_attribute_count))
            self.gce_policies.append(rules)
    logging.info('Total rule count of policy %s is: %d', filter_name,
                 total_rule_count)
    logging.info('Total attribute count of policy %s is: %d', filter_name,
                 total_attribute_count)

  def __str__(self):
    out = '%s\n\n' % (json.dumps(self.gce_policies, indent=2,
                                 separators=(six.ensure_str(','),
                                             six.ensure_str(': ')),
                                 sort_keys=True))
    return out
def GetAttributeCount(dict_term: Dict[str, Any]) -> int:
  """Calculate the attribute count of a term in its dictionary form.

  The attribute count of a rule is the sum of the number of ports, protocols, IP
  ranges, tags and target service account.

  Note: The goal of this function is not to determine if a term is valid, but
  to calculate its attribute count regardless of correctness.

  Args:
    dict_term: A dict object.

  Returns:
    int: The attribute count of the term.
  """
  addresses = (len(dict_term.get('destinationRanges', []))
               or len(dict_term.get('sourceRanges', [])))

  proto_ports = 0
  for allowed in dict_term.get('allowed', []):
    proto_ports += len(allowed.get('ports', [])) + 1  # 1 for ipProtocol
  for denied in dict_term.get('denied', []):
    proto_ports += len(denied.get('ports', [])) + 1  # 1 for ipProtocol

  tags = 0
  for _ in dict_term.get('sourceTags', []):
    tags += 1
  for _ in dict_term.get('targetTags', []):
    tags += 1

  service_accounts = 0
  for _ in dict_term.get('targetServiceAccount', []):
    service_accounts += 1

  return addresses + proto_ports + tags + service_accounts
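A worked example of the attribute-count bookkeeping on a sample rule dict; the counting logic is re-implemented here so the snippet is self-contained and mirrors `GetAttributeCount`:

```python
# Self-contained mirror of GetAttributeCount applied to a sample rule.
def attribute_count(rule):
    # Address count: destination ranges if present, else source ranges.
    addresses = (len(rule.get('destinationRanges', []))
                 or len(rule.get('sourceRanges', [])))
    # Each allowed/denied entry costs 1 for its IPProtocol plus its ports.
    proto_ports = sum(len(e.get('ports', [])) + 1
                      for e in rule.get('allowed', []) + rule.get('denied', []))
    tags = len(rule.get('sourceTags', [])) + len(rule.get('targetTags', []))
    accounts = len(rule.get('targetServiceAccount', []))
    return addresses + proto_ports + tags + accounts

rule = {
    'sourceRanges': ['10.0.0.0/8', '192.168.0.0/16'],            # 2 addresses
    'allowed': [{'IPProtocol': 'tcp', 'ports': ['80', '443']}],  # 1 proto + 2 ports
    'targetTags': ['web'],                                       # 1 tag
}
count = attribute_count(rule)  # 2 + 3 + 1 = 6
```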
# File: myhabitatagent.py (repo: karkuspeter/habitat-challenge, license: MIT)
import argparse
import habitat
import random
import numpy as np
import scipy
import os
import cv2
import time
from habitat.tasks.nav.shortest_path_follower import ShortestPathFollower
from habitat.utils.visualizations import maps
from gibsonagents.expert import Expert
from gibsonagents.pathplanners import Dstar_planner, Astar3D, VI_planner
from gibsonagents.classic_mapping import rotate_2d, ClassicMapping, map_path_for_sim
from utils.dotdict import dotdict
from utils.tfrecordfeatures import tf_bytes_feature, tf_int64_feature, sequence_feature_wrapper # tf_bytelist_feature
from habitat_utils import load_map_from_file, encode_image_to_png, get_model_id_from_episode, get_floor_from_json
from vin import grid_actions_from_trajectory, project_state_and_goal_to_smaller_map
import quaternion
from multiprocessing import Queue, Process
import atexit
import platform
from arguments import parse_args
import tensorflow as tf
from train import get_brain, get_tf_config
from common_net import load_from_file, count_number_trainable_params
from visualize.visualize_habitat_training import plot_viewpoints, plot_target_and_path, mapping_visualizer
from gen_habitat_data import actions_from_trajectory
from gen_planner_data import rotate_map_and_poses, Transform2D
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.gridspec as gridspec
try:
    import ipdb as pdb
except ImportError:
    import pdb
# # Fix multiprocessing on mac OSX
# if platform.system() == "Darwin":
# import multiprocessing
# multiprocessing.set_start_method('spawn')
ACTION_SOURCE = "plan" #"expert" # "plan"
# START_WITH_SPIN = False # True
SPIN_TARGET = np.deg2rad(370) # np.deg2rad(270) # np.deg2rad(360 - 70)
SPIN_DIRECTION = 1 # 1 for same direction as target, -1 for opposite direction. Opposite is better if target < 360
PLANNER_SINGLE_THREAD = False
PLANNER_STOP_THREAD_EACH_EPISODE = False
# COST_SETTING = 0 # 2
# SOFT_COST_MAP = True
PLANNER2D_TIMEOUT = 200 # 200. # 0.08
# PLANNER3D_TIMEOUT = 2.5 # 1.5 # 200. # 0.08 - ------------------
RECOVER_ON_COLLISION = True
COLLISION_DISTANCE_THRESHOLD = 0.6 # 0.8
MAX_SHORTCUT_TURNS = 2 # was 1 in submission
NEAR_TARGET_COLLISION_STOP_DISTANCE = 5. # when colliding withing this radius to the goal, stop instead
# # Patch map with collisions and around target
TARGET_MAP_MARGIN = 2
OBSTACLE_DOWNWEIGHT_DISTANCE = 20 # from top, smaller the further
OBSTACLE_DOWNWEIGHT_SCALARS = (0.3, 0.8) # (0.3, 0.8)
EXTRA_STEPS_WHEN_EXPANDING_MAP = 30
# !!!!!!
SUPRESS_EXCEPTIONS = False
INTERACTIVE_ON_EXCEPTIONS = True
PLOT_EVERY_N_STEP = -1
PRINT_TIMES = True
INTERACTIVE_PLOT = True
PLOT_PROCESS = False # True
SAVE_VIDEO = True # will save params.num_video number of videos, or all if interactive
USE_ASSERTS = False
# 42 * 60 * 60 - 3 * 60 * 60 # 30 * 60 - 5 * 60 #
TOTAL_TIME_LIMIT = 42 * 60 * 60 - 30 * 60 # challenge gave up at 38h and finished at 39h so 120 minutes should be enough. Even more recently. all giveup finished in 6 mins.
# 42 hours = 2520 mins for 1000-2000 episodes.
# Average episode time should be < 75.6 sec
ERROR_ON_TIMEOUT = False # True
SKIP_FIRST_N_FOR_TEST = -1 # 10 # 10 # 10
VIDEO_FRAME_SKIP = 1  # 6
VIDEO_FPS = 5 # 5 # 30
VIDEO_LARGE_PLOT = False
VIDEO_DETAILED = True
DEBUG_DUMMY_ACTIONS_ONLY = False
SKIP_FIRST_N = -1 # 1000
SKIP_AFTER_N = -1 # 1500
SKIP_MAP_SHAPE_MISMATCH = True
# !!!!!!!
REPLACE_WITH_RANDOM_ACTIONS = False
EXIT_AFTER_N_STEPS_FOR_SPEED_TEST = -1
FAKE_INPUT_FOR_SPEED_TEST = False
MAX_MAP_SIZE_FOR_SPEED_TEST = False
# DATA GENERATION - for both sim scenarios and real spot
DATA_TYPE = "scenario"
SAVE_DATA_EVERY_N = 1 # 4
DATA_FIRST_STEP_ONLY = True
DATA_MAX_TRAJLEN = 50 # when DATA_FIRST_STEP_ONLY == False
DATA_INCLUDE_NONPLANNED_ACTIONS = False
DATA_USE_LAST_SEGMENT = False # when map is smaller use either the last or the first trajectory segment
# DATA_SEPARATE_FILES = True # for real spot data
DATA_SEPARATE_FILES = False # for simulated scenario data
def giveup_settings(giveup_setting):
# # Give up settings - submission
if giveup_setting == "1":
GIVE_UP_NO_PROGRESS_STEPS = 90 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 6 # 100 # TODO increase later distances
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [150, 220], [300, 150], [400, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes ! and distance reduction from beginning
# Give up settings - more aggressive for submission2
elif giveup_setting == "2":
GIVE_UP_NO_PROGRESS_STEPS = 90 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 6
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [150, 220], [300, 100], [400, 50]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes! and distance reduction from beginning
# Relaxed giveup settings for local evaluation (3)
elif giveup_setting == "3":
GIVE_UP_NO_PROGRESS_STEPS = 100 # 100
NO_PROGRESS_THRESHOLD = 12
GIVE_UP_NUM_COLLISIONS = 8 # 100 # TODO increase later distances
GIVE_UP_STEP_AND_DISTANCE = [[0, 440], [150, 320], [300, 250], [400, 150]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[10., 100], [15., 120], [20., 300], [30., 400]] # in minutes ! and distance reduction from beginning
# Almost never give up -- august submission4
elif giveup_setting == "4":
GIVE_UP_NO_PROGRESS_STEPS = 100
NO_PROGRESS_THRESHOLD = 12
GIVE_UP_NUM_COLLISIONS = 20
GIVE_UP_STEP_AND_DISTANCE = [[0, 440], [150, 300], [200, 250], [250, 200], [300, 150], [350, 100], [400, 40]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] #[10., 100], [15., 120], [20., 300], [30., 400]] # in minutes ! and distance reduction from beginning
elif giveup_setting == "5":
# # Almost never give up -- sept submission5
GIVE_UP_NO_PROGRESS_STEPS = 100
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 300], [300, 200], [400, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- sept submission6
elif giveup_setting == "6":
GIVE_UP_NO_PROGRESS_STEPS = 120
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 400], [300, 250], [400, 150],
[450, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- sept submission7
elif giveup_setting == "7":
GIVE_UP_NO_PROGRESS_STEPS = 120
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 500], [300, 300], [400, 175], [450, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- nov submission8
elif giveup_setting == "8":
GIVE_UP_NO_PROGRESS_STEPS = 1000
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 400], [300, 240], [400, 160]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # no giveup but 300 limit for data generation
elif giveup_setting == "data300":
GIVE_UP_NO_PROGRESS_STEPS = 1000
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[300, 1], ] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # No giveup
elif giveup_setting == "never":
GIVE_UP_NO_PROGRESS_STEPS = 1000 # 100
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # Always give up
elif giveup_setting == "always":
GIVE_UP_NO_PROGRESS_STEPS = 1 # 100
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1
GIVE_UP_STEP_AND_DISTANCE = [[0, 1]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # Very aggressive for fast testing
elif giveup_setting == "fast":
GIVE_UP_NO_PROGRESS_STEPS = 50 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 1
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [100, 200], [200, 100], [300, 50]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes ! and distance reduction from beginning
else:
raise ValueError('Unknown giveup_setting %s' % giveup_setting)
return GIVE_UP_NO_PROGRESS_STEPS, NO_PROGRESS_THRESHOLD, GIVE_UP_NUM_COLLISIONS, GIVE_UP_STEP_AND_DISTANCE, GIVE_UP_TIME_AND_REDUCTION
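# A hypothetical sketch of how the [step, distance] schedule returned by giveup_settings() could be interpreted at runtime; distance_threshold_for_step is not part of the original code, only an illustration of the schedule semantics:

```python
# Hypothetical helper illustrating the [step, distance] schedule semantics:
# each pair activates a give-up distance threshold once the step counter
# reaches `step`; later pairs override earlier ones.
def distance_threshold_for_step(step_and_distance, step_i):
    threshold = None
    for step, distance in step_and_distance:
        if step_i >= step:
            threshold = distance
    return threshold

schedule = [[0, 340], [150, 220], [300, 150], [400, 100]]  # setting "1"
print(distance_threshold_for_step(schedule, 200))  # 220
```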
class DSLAMAgent(habitat.Agent):
def __init__(self, task_config, params, env=None, logdir='./temp/', tfwriters=()):
self.start_time = time.time()
self._POSSIBLE_ACTIONS = task_config.TASK.POSSIBLE_ACTIONS
self.step_i = 0
self.episode_i = -2
self.env = env
self.task_config = task_config
self.tfwriters = tfwriters
self.num_data_entries = 0
if env is None:
self.follower = None
assert ACTION_SOURCE != "expert"
else:
self.follower = ShortestPathFollower(env._sim, 0.36/2., False)
# if len(params.gpu) > 0 and int(params.gpu[0]) > 4:
# print ("Try to explicitly disable gpu")
# try:
# tf.config.experimental.set_visible_devices([], 'GPU')
# except Exception as e:
# print("Exception " + str(e))
print (params)
self.params = params
# Giveup setting
self.GIVE_UP_NO_PROGRESS_STEPS, self.NO_PROGRESS_THRESHOLD, self.GIVE_UP_NUM_COLLISIONS, \
self.GIVE_UP_STEP_AND_DISTANCE, self.GIVE_UP_TIME_AND_REDUCTION = giveup_settings(params.giveup)
if INTERACTIVE_PLOT or self.params.interactive_video:
plt.ion()
assert params.sim in ['habitat', 'spot', 'spotsmall', 'spotsmall2', 'habitat2021']
self.map_source = self.params.agent_map_source
self.pose_source = self.params.agent_pose_source
self.action_source = ACTION_SOURCE
self.max_confidence = 0.96 # 0.98
self.confidence_threshold = None # (0.2, 0.01) # (0.35, 0.05)
self.use_custom_visibility = (self.params.visibility_mask in [2, 20, 21])
assert self.params.agent_map_source in ['true', 'true-saved', 'true-saved-sampled', 'true-saved-hrsampled',
'true-partial', 'true-partial-sampled', 'pred']
assert self.params.agent_pose_source in ['slam', 'slam-truestart', 'true']
_, gpuname = get_tf_config(devices=params.gpu) # sets CUDA_VISIBLE_DEVICES
if params.skip_slam:
print ("SKIP SLAM: overwriting particles and removing noise.")
assert self.pose_source == 'true'
assert params.num_particles == 1
assert params.odom_source == 'relmotion'
self.accumulated_spin = 0.
self.spin_direction = None
self.map_ch = 2
# slam_map_ch = 1
self.max_map_size = (self.params.global_map_size, self.params.global_map_size) # (360, 360)
params.batchsize = 1
params.trajlen = 1
sensor_ch = (1 if params.mode == 'depth' else (3 if params.mode == 'rgb' else 4))
batchsize = params.batchsize
if params.seed is not None and params.seed > 0:
print("Fix Numpy and TF seed to %d" % params.seed)
tf.set_random_seed(params.seed)
np.random.seed(params.seed)
random.seed(params.seed)
# Build graph for slam and planner
with tf.Graph().as_default():
with tf.variable_scope(tf.get_variable_scope(), reuse=False):
# Choose planner
if self.params.planner == 'astar3d':
self.max_map_size = (370, 370) # also change giveup setting when changing this
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = True
assert self.params.agent_map_downscale == 1
# assert MAP_SOURCE != "true"
self.pathplanner = Astar3D(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, timeout=self.params.planner_timeout)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['dstar_track_fixsize', 'dstar4_track_fixsize']:
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = True
assert self.params.agent_map_downscale == 1
if self.params.planner in ['dstar4_track_fixsize']:
assert not self.params.connect8
else:
assert self.params.connect8
self.pathplanner = Dstar_planner(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, connect8=self.params.connect8)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['dstar_track', 'dstar2d']:
self.max_map_size = (900, 900)
self.fixed_map_size = False
self.planner_needs_cont_map = False
self.allow_shrink_map = False
assert self.params.agent_map_downscale == 1
assert self.params.connect8 # add to def config
self.pathplanner = Dstar_planner(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, connect8=self.params.connect8)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['vi4', 'vi8', 'vi4-e1', 'vi8-e1', 'vi4-noshrink', 'vi8-noshrink']:
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = (self.params.planner not in ['vi4-noshrink', 'vi8-noshrink'])
if self.params.planner in ['vi4', 'vi4-e1', 'vi4-noshrink']:
assert not self.params.connect8
else:
assert self.params.connect8
self.pathplanner = VI_planner(max_map_size=(None, None), brain="trueplanner", params=self.params, connect8=self.params.connect8,
downscale=self.params.agent_map_downscale)
self.need_to_stop_planner_thread = False
elif self.params.planner in ['vin', 'vin-e1', 'vinpred']:
self.fixed_map_size = True
self.planner_needs_cont_map = (self.params.planner in ['vinpred'])
self.allow_shrink_map = False
self.pathplanner = VI_planner(max_map_size=(None, None), brain=self.params.agent_planner_brain, params=self.params, connect8=self.params.connect8,
downscale=self.params.agent_map_downscale)
self.need_to_stop_planner_thread = False
else:
raise ValueError("Unknown planner %s"%self.params.planner)
# Test data and network
assert params.target in ["traj"]
train_brain = get_brain(params.brain, params)
req = train_brain.requirements()
self.brain_requirements = req
self.local_map_shape = req.local_map_size
# Build slam brain with placeholder inputs
# global_map_input = tf.placeholder(shape=(batchsize, None, None, slam_map_ch,), dtype=tf.float32)
# self.images_input = tf.placeholder(shape=(batchsize, None) + req.sensor_shape + (sensor_ch,),
# dtype=tf.float32)
# self.visibility_input = (
# tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
# if params.visibility_mask == 2
# else tf.zeros((batchsize, None, 0, 0, 1)))
self.new_images_input = tf.placeholder(shape=(batchsize, 1) + req.sensor_shape + (sensor_ch,),
dtype=tf.float32)
self.last_images_input = tf.placeholder(shape=(batchsize, 1) + req.sensor_shape + (sensor_ch,),
dtype=tf.float32)
self.past_visibility_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.visibility_input = tf.placeholder(shape=(batchsize, 1) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.past_local_maps_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.past_needed_image_features_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (req.latent_map_ch,), dtype=tf.float32)
self.particle_xy_input = tf.placeholder(shape=(batchsize, None, params.num_particles, 2,), dtype=tf.float32)
self.particle_yaw_input = tf.placeholder(shape=(batchsize, None, params.num_particles, 1,), dtype=tf.float32)
self.last_step_particle_logits_input = tf.placeholder(shape=(batchsize, params.num_particles),
dtype=tf.float32)
self.new_action_input = tf.placeholder(shape=(batchsize, 1, 1,), dtype=tf.int32)
self.new_rel_xy_input = tf.placeholder(shape=(batchsize, 1, 2,), dtype=tf.float32)
self.new_rel_yaw_input = tf.placeholder(shape=(batchsize, 1, 1,), dtype=tf.float32)
self.true_xy_input = tf.placeholder(shape=(batchsize, None, 2,), dtype=tf.float32)
self.true_yaw_input = tf.placeholder(shape=(batchsize, None, 1,), dtype=tf.float32)
self.inference_timesteps_input = tf.placeholder(shape=(batchsize, None), dtype=tf.int32) # indexes history to be used for slam update
self.global_map_shape_input = tf.placeholder(shape=(2, ), dtype=tf.int32)
if self.params.obstacle_downweight:
custom_obstacle_prediction_weight = Expert.get_obstacle_prediction_weight(OBSTACLE_DOWNWEIGHT_DISTANCE, OBSTACLE_DOWNWEIGHT_SCALARS, self.local_map_shape)
else:
custom_obstacle_prediction_weight = None
if FAKE_INPUT_FOR_SPEED_TEST:
self.inference_outputs = train_brain.sequential_localization_with_past_and_pred_maps(
tf.zeros_like(self.past_local_maps_input), tf.ones_like(self.past_visibility_input),
tf.zeros_like(self.past_needed_image_features_input),
tf.zeros_like(self.new_images_input), tf.zeros_like(self.true_xy_input), tf.zeros_like(self.true_yaw_input),
tf.zeros_like(self.visibility_input),
tf.zeros_like(self.particle_xy_input), tf.zeros_like(self.particle_yaw_input),
tf.zeros_like(self.new_action_input), tf.zeros_like(self.new_rel_xy_input), tf.zeros_like(self.new_rel_yaw_input),
particle_logits_acc=tf.zeros_like(self.last_step_particle_logits_input),
global_map_shape=self.global_map_shape_input,
max_confidence=self.max_confidence)
else:
###
# THIS IS USED NORMALLY
###
self.inference_outputs = train_brain.sequential_localization_with_past_and_pred_maps(
self.past_local_maps_input, self.past_visibility_input, self.past_needed_image_features_input,
self.new_images_input, self.true_xy_input, self.true_yaw_input, self.visibility_input,
self.particle_xy_input, self.particle_yaw_input,
self.new_action_input, self.new_rel_xy_input, self.new_rel_yaw_input,
inference_timesteps=self.inference_timesteps_input,
particle_logits_acc=self.last_step_particle_logits_input,
global_map_shape=(tuple(self.max_map_size) if self.fixed_map_size else self.global_map_shape_input), # self.global_map_shape_input, tuple(self.max_map_size),
max_confidence=self.max_confidence,
custom_obstacle_prediction_weight=custom_obstacle_prediction_weight,
last_images=self.last_images_input,
use_true_pose_instead_of_slam=(self.params.agent_pose_source == 'true'),
)
if PLOT_EVERY_N_STEP < 0:
self.inference_outputs = self.drop_output(self.inference_outputs, drop_names=['tiled_visibility_mask'])
self.inference_outputs_without_map = self.drop_output(self.inference_outputs, drop_names=['global_map_logodds'])
# self.inference_outputs = train_brain.sequential_localization_with_map_prediction(
# self.images_input, self.true_xy_input, self.true_yaw_input, self.visibility_input,
# self.particle_xy_input, self.particle_yaw_input,
# self.new_action_input, self.new_rel_xy_input, self.new_rel_yaw_input,
# particle_logits_acc=self.last_step_particle_logits_input)
# self.inference_outputs = train_brain.sequential_localization_with_past_and_pred_maps(
# self.past_local_maps_input, self.past_visibility_input, NEED_IMAGES,
# self.new_images_input, self.true_xy_input, self.true_yaw_input, self.visibility_input,
# self.particle_xy_input, self.particle_yaw_input,
# self.new_action_input, self.new_rel_xy_input, self.new_rel_yaw_input,
# particle_logits_acc=self.last_step_particle_logits_input)
#
# TODO pass in map inference inputs. Could produce one processed and one unprocess map for slam.
# self.true_map_input = tf.placeholder(shape=self.max_map_size + (1, ), dtype=tf.uint8)
# self.images_input = tf.placeholder(shape=req.sensor_shape + (sensor_ch,), dtype=tf.float32)
# self.xy_input = tf.placeholder(shape=(2,), dtype=tf.float32)
# self.yaw_input = tf.placeholder(shape=(1, ), dtype=tf.float32)
# # self.action_input = tf.placeholder(shape=(2,), dtype=tf.float32)
# actions = tf.zeros((1, 1, 2), dtype=tf.float32)
# self.global_map_input = tf.placeholder(shape=self.max_map_size + (self.map_ch, ), dtype=tf.float32)
# self.visibility_input = tf.placeholder(shape=self.local_map_shape + (1, ), dtype=tf.uint8) if self.use_custom_visibility else None
# local_obj_map_labels = tf.zeros((1, 1, ) + self.local_map_shape + (1, ), dtype=np.uint8)
#
# self.inference_outputs = train_brain.sequential_inference(
# self.true_map_input[None], self.images_input[None, None], self.xy_input[None, None], self.yaw_input[None, None],
# actions, prev_global_map_logodds=self.global_map_input[None],
# local_obj_maps=local_obj_map_labels,
# confidence_threshold=self.confidence_threshold,
# max_confidence=self.max_confidence,
# max_obj_confidence=0.8,
# custom_visibility_maps=None if self.visibility_input is None else self.visibility_input[None, None],
# is_training=True)
# self.true_map_input = tf.zeros(shape=self.max_map_size + (1, ), dtype=tf.uint8)
# self.images_input = tf.zeros(shape=req.sensor_shape + (sensor_ch,), dtype=tf.float32)
# self.xy_input = tf.ones(shape=(2,), dtype=tf.float32)
# self.yaw_input = tf.zeros(shape=(1, ), dtype=tf.float32)
# # self.action_input = tf.placeholder(shape=(2,), dtype=tf.float32)
# actions = tf.ones((1, 1, 2), dtype=tf.float32)
# self.global_map_input = tf.ones(shape=self.max_map_size + (self.map_ch, ), dtype=tf.float32)
# self.visibility_input = tf.ones(shape=self.local_map_shape + (1, ), dtype=tf.uint8) if self.use_custom_visibility else None
# local_obj_map_labels = tf.zeros((1, 1, ) + self.local_map_shape + (1, ), dtype=np.uint8)
#
# self.inference_outputs = train_brain.sequential_inference(
# self.true_map_input[None], self.images_input[None, None], self.xy_input[None, None], self.yaw_input[None, None],
# actions, prev_global_map_logodds=self.global_map_input[None],
# local_obj_maps=local_obj_map_labels,
# confidence_threshold=self.confidence_threshold,
# max_confidence=self.max_confidence,
# max_obj_confidence=0.8,
# custom_visibility_maps=None if self.visibility_input is None else self.visibility_input[None, None],
# is_training=True)
# Add the variable initializer Op.
init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
count_number_trainable_params(verbose=True)
# training session
gpuconfig, gpuname = get_tf_config(devices=params.gpu)
self.sess = tf.Session(config=gpuconfig)
# # Debug
# self.sess.run(init) # without init, if a variable is not loaded we get an error
# outputs = self.sess.run(self.inference_outputs)
# print ("Success")
# pdb.set_trace()
# self.sess.run(init) # without init, if a variable is not loaded we get an error
load_from_file(self.sess, params.load, partialload=params.partialload, loadcenter=[],
skip=params.loadskip, autorename=False)
self.global_map_logodds = None
self.xy = None
self.yaw = None
self.target_xy = None
self.step_i = -1
self.t = time.time()
# datafile
self.scenario_traj_data = []
# video
self.frame_traj_data = []
self.num_videos_saved = 0
self.summary_str = ""
self.filename_addition = ""
self.logdir = logdir
self.saved_map_i = 0
if self.params.interactive_video and PLOT_PROCESS:
print ("Starting plot process.. ANY PLOT IN THIS THREAD WILL LEAD TO A CRASH")
atexit.register(self.cleanup) # to stop process
self.plot_queue = Queue()
self.plot_process = Process(target=self.plot_loop, args=(self.plot_queue, ))
self.plot_process.start()
# # plotting should be only done in one thread... any plot in this thread will lead to a crash
# plt.show = lambda *args: None
# plt.figure = lambda *args: None
# plt.imshow = lambda *args: None
# plt.plot = lambda *args: None
# # import matplotlib
# # matplotlib.use('Agg')
# # plt.figure = lambda *args: None
else:
self.plot_queue = None
self.plot_process = None
self.reset()
def cleanup(self):
try:
if not self.plot_process or not self.plot_process.is_alive():
return
print("Stopping plot process")
self.plot_queue.put(("exit", None), block=False)
self.plot_process.join(timeout=4.0)
self.plot_process.terminate()
except Exception as e:
print ("Destructor had an exception. %s" % str(e))
def get_scene_name(self):
scene_path = "unknown" if self.env is None else self.env._sim._current_scene
scene_name = scene_path.split('/')[-1].split('.')[0]
return scene_name
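# The path-to-name parsing in get_scene_name() above can be illustrated standalone (the example path below is hypothetical, not taken from the original code):

```python
# Same split logic as get_scene_name(): strip directories, then extension.
scene_path = "data/scene_datasets/gibson/Adrian.glb"  # hypothetical example path
scene_name = scene_path.split('/')[-1].split('.')[0]
print(scene_name)  # Adrian
```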
def _to_grid_pos(self, agent_pos_0, agent_pos_2, top_down_map_dict):
if not hasattr(maps, 'COORDINATE_MAX'):
grid_pos = maps.to_grid(
agent_pos_2, # note the order!
agent_pos_0,
top_down_map_dict['map'].shape[0:2],
sim=self.env.sim,
keep_float=True,
)
else:
del top_down_map_dict
map_shape = (self.task_config.TASK.TOP_DOWN_MAP.MAP_RESOLUTION,
self.task_config.TASK.TOP_DOWN_MAP.MAP_RESOLUTION)
grid_pos = maps.to_grid(
agent_pos_0, agent_pos_2, # order does not matter here
maps.COORDINATE_MIN, maps.COORDINATE_MAX,
map_shape, keep_float=True)
return np.array(grid_pos, np.float32)
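# A minimal sketch of the linear world-to-grid mapping that the older maps.to_grid API (the COORDINATE_MIN/COORDINATE_MAX branch above) performs; the constants and the exact axis convention here are illustrative assumptions, not habitat's actual values:

```python
# Illustrative world-to-grid conversion: the map spans
# [coord_min, coord_max] in world units, divided into map_shape cells.
def world_to_grid(x0, x2, coord_min, coord_max, map_shape):
    grid_size = (coord_max - coord_min) / map_shape[0]
    row = int((coord_max - x0) / grid_size)
    col = int((x2 - coord_min) / grid_size)
    return row, col

print(world_to_grid(0.0, 1.0, -2.0, 2.0, (8, 8)))  # (4, 6)
```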
def reset(self, last_success=None):
self.step_i = -1
self.episode_i += 1
self.t = time.time()
self.episode_t = self.t
self.accumulated_spin = 0.
self.spin_direction = None
self.distance_history = []
self.raw_xy_transform = lambda xys: xys
self.raw_yaw_offset = 0.
self.recover_step_i = 0
self.num_collisions = 0
self.num_shortcut_actions = 0
self.num_wrong_obstacle = 0
self.num_wrong_free = 0
self.num_wrong_free_area = 0
self.num_wrong_free_area2 = 0
self.num_wrong_free_area3 = 0
self.map_mismatch_count = 0
self.plan_times = []
assert self.params.recoverpolicy in ['back5', 'back1']
num_recover_back_steps = (5 if self.params.recoverpolicy == 'back5' else 1)
self.recover_policy = [3] * 6 + [1] * num_recover_back_steps
self.global_map_logodds = None # will initialize in act() np.zeros((1, 1) + (1, ), np.float32)
self.collision_timesteps = []
for tfwriter in self.tfwriters:
tfwriter.flush()
self.pathplanner.reset()
self.reset_scenario_data_writer()
self.reset_video_writer(last_success=last_success)
print ("Resetting agent %d. Scene %s."%(self.episode_i, self.get_scene_name()))
self.last_call_time = time.time()
def drop_output(self, outputs, drop_names):
return dotdict({key: val for key, val in outputs.items() if key not in drop_names})
def plan_and_control(self, xy, yaw, target_xy, global_map_pred, ang_vel, target_fi, allow_shrink_map=False, cont_global_map_pred=None):
if self.params.start_with_spin and np.abs(self.accumulated_spin) < SPIN_TARGET and self.step_i < 40:
if self.spin_direction is None:
self.spin_direction = SPIN_DIRECTION * np.sign(target_fi) # spin opposite direction to the goal
self.accumulated_spin += ang_vel
# spin
status_message = "%d: spin %f: %f"%(self.step_i, self.spin_direction, self.accumulated_spin)
action = (2 if self.spin_direction > 0 else 3)
planned_path = np.zeros((0, 2))
return action, planned_path, status_message, (global_map_pred * 255.).astype(np.uint8), None
assert global_map_pred.dtype == np.float32
assert cont_global_map_pred is None or cont_global_map_pred.dtype == np.float32
if self.params.soft_cost_map:
assert not allow_shrink_map
assert cont_global_map_pred is None
# keep global_map as continuous input
elif allow_shrink_map:
assert cont_global_map_pred is None # otherwise would need to shrink it
global_map_pred = (global_map_pred * 255.).astype(np.uint8)
xy, target_xy, global_map_pred, offset_xy = self.shrink_map(xy, target_xy, global_map_pred)
else:
global_map_pred = (global_map_pred * 255.).astype(np.uint8)
offset_xy = None
# Scan map and cost graph.
scan_graph, eroded_scan_map, normal_scan_map, costmap = Expert.get_graph_and_eroded_map(
raw_trav_map=global_map_pred[..., :1],
trav_map_for_simulator=global_map_pred[..., :1],
raw_scan_map=global_map_pred,
rescale_scan_map=1.,
erosion=self.params.map_erosion_for_planning,
build_graph=False,
interactive_channel=False,
cost_setting=self.params.cost_setting,
soft_cost_map=self.params.soft_cost_map,
)
# plt.figure()
# plt.imshow(costmap)
# plt.show()
# pdb.set_trace()
# scan_map = global_map_pred[..., :1]
#
# scan_map = cv2.erode(scan_map, kernel=np.ones((3, 3)))
# scan_map[scan_map<255] = 0
#
# costmap = np.zeros_like(global_map_pred, dtype=np.float32)
# costmap[global_map_pred == 0] = 1000.
#
# temp_map1 = scan_map
# temp_map2 = cv2.erode(temp_map1, kernel=np.ones((3, 3)))
# temp_filter = np.logical_and(temp_map2 < 255, temp_map1 == 255)
# costmap[temp_filter] = 100.
#
# temp_map1 = scan_map
# temp_map2 = cv2.erode(temp_map1, kernel=np.ones((7, 7)))
# temp_filter = np.logical_and(temp_map2 < 255, temp_map1 == 255)
# costmap[temp_filter] = 1.
assert self.params.goalpolicy in ['twostep', 'none']
start_time = time.time()
if self.params.planner == 'astar3d':
assert self.params.goalpolicy in ['twostep'] # need to add support below for removing twostep
action, obstacle_distance, planned_path, status_message = Expert.discrete3d_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
pathplanner=self.pathplanner)
elif self.params.planner == 'dstar2d':
assert self.params.goalpolicy in ['twostep'] # need to add support below for removing twostep
action, obstacle_distance, planned_path, status_message = Expert.discrete_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_asserts=True,
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.dstar_path(
cost_map, source_tuple, target_tuple, timeout=PLANNER2D_TIMEOUT))
elif self.params.planner in ['dstar_track', 'dstar_track_fixsize', 'dstar4_track_fixsize']:
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.dstar_path(
cost_map, source_tuple, target_tuple, timeout=PLANNER2D_TIMEOUT))
elif self.params.planner in ['vi4', 'vi8', 'vin', 'vi4-noshrink', 'vi8-noshrink']:
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
scan_map, cost_map, source_tuple, target_tuple, sess=self.sess))
elif self.params.planner in ['vinpred']:
assert not allow_shrink_map # because we are shortcutting map, not using the shrunk map
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
assert cont_global_map_pred is not None
# Shortcut map input to path planner directly
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
cont_global_map_pred * 255., # this is intentionally shortcutting the scan_map input
cost_map, source_tuple, target_tuple, sess=self.sess))
elif self.params.planner in ['vi4-e1', 'vi8-e1', 'vin-e1']:
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
normal_scan_map, cost_map, source_tuple, target_tuple, sess=self.sess))
else:
raise ValueError("Unknown planner %s"%(self.params.planner))
status_message = "%d/%d: %.2f %s"%(self.episode_i, self.step_i, time.time()-self.t, status_message)
self.t = time.time()
self.plan_times.append(time.time() - start_time)
if allow_shrink_map:
planned_path = planned_path + offset_xy[None]
return action, planned_path, status_message, eroded_scan_map, offset_xy
def act(self, observations):
if SUPRESS_EXCEPTIONS or INTERACTIVE_ON_EXCEPTIONS:
try:
return_val = self.wrapped_act(observations)
if self.need_to_stop_planner_thread and return_val['action'] == 0:
self.pathplanner.stop_thread()
self.last_call_time = time.time()
return return_val
except Exception as e:
print ("Exception " + str(e))
if INTERACTIVE_ON_EXCEPTIONS:
self.reset_scenario_data_writer()
self.reset_video_writer(last_success=False)
print ("Data and video saved. Continue?")
pdb.set_trace()
self.last_call_time = time.time()
return {"action": 0, "xy_error": 0.}
else:
return_val = self.wrapped_act(observations)
if self.need_to_stop_planner_thread and return_val['action'] == 0:
self.pathplanner.stop_thread()
self.last_call_time = time.time()
return return_val
def wrapped_act(self, observations):
time_sim = time.time() - self.last_call_time
time_last = time.time()
time_now = time.time()
time_since_beginning = time_now - self.start_time
if REPLACE_WITH_RANDOM_ACTIONS and self.episode_i > 2:
self.step_i += 1
if self.step_i > 100:
action = 0
else:
action = np.random.choice([1, 2, 3])
return {"action": action, "xy_error": 0.}
initial_target_r_meters, initial_target_fi = observations['pointgoal']
if self.step_i == -1:
self.step_i += 1
# self.initial_xy = np.zeros((2, ), np.float32)
# self.initial_yaw = np.zeros((1, ), np.float32)
self.episode_t = time_now
if self.env is not None:
# info = self.env.get_metrics()
# print (info['top_down_map']['agent_map_coord'])
# pdb.set_trace()
stay_action = self.task_config.SIMULATOR.get('STAY_ACTION', 3) # turn right by default
return {"action": stay_action} # turn right, because first step does not provide the top down map
# otherwise continue below
if TOTAL_TIME_LIMIT > 0 and time_since_beginning > TOTAL_TIME_LIMIT:
print ("Giving up because total time limit of %d sec reached."%TOTAL_TIME_LIMIT)
if ERROR_ON_TIMEOUT:
raise ValueError("Timeout.. only for minival!")
return {"action": 0, "xy_error": 0.}
if (SKIP_FIRST_N_FOR_TEST > 0 and self.episode_i < SKIP_FIRST_N_FOR_TEST) or (SKIP_FIRST_N > 0 and self.episode_i < SKIP_FIRST_N) or (SKIP_AFTER_N > 0 and self.episode_i >= SKIP_AFTER_N):
print ("Skip")
return {"action": 0, "xy_error": 0.}
# if EXIT_AFTER_N_STEPS_FOR_SPEED_TEST > 0 and self.step_i > EXIT_AFTER_N_STEPS_FOR_SPEED_TEST:
# raise SystemExit
# Check for possible shortcut
shortcut_action = None
need_map = True
if self.params.skip_plan_when_turning and len(self.pathplanner.cached_action_path) >= 2 and self.num_shortcut_actions < MAX_SHORTCUT_TURNS:
cached_next_action = self.pathplanner.cached_action_path[1]
if cached_next_action in [2, 3]: # turning
print ("Shortcut turn action")
shortcut_action = cached_next_action
need_map = False
self.num_shortcut_actions += 1
self.pathplanner.num_timeouts += 1
self.pathplanner.cached_action_path = self.pathplanner.cached_action_path[1:]
else:
self.num_shortcut_actions = 0
else:
self.num_shortcut_actions = 0
if RECOVER_ON_COLLISION and self.recover_step_i > 0:
need_map = False
# True pose and map
if self.env is not None:
# When using slam, must run with --habitat_eval local not localtest.
# That's because with --localtest we skip the first step, but that ruins the goal observation.
assert self.pose_source != 'slam'
info = self.env.get_metrics()
agent_pos = self.env.sim.get_agent_state().position
goal_pos = self.env.current_episode.goals[0].position
# First deal with the observed map and optionally sample a random rotation
true_global_map = info['top_down_map']['map']
true_global_map = (true_global_map > 0).astype(np.uint8) * 255
true_global_map = np.atleast_3d(true_global_map)
if self.step_i > 0:
# Count pixels that used to be free but not in the latest map
if not self.params.random_rotations:
new_map_mismatch_count = np.count_nonzero(np.logical_and(true_global_map == 0, self.last_true_global_map))
if new_map_mismatch_count > 200:
print ("TOO MANY MAP MISMATCHES %d"%new_map_mismatch_count)
# assert False
self.map_mismatch_count += new_map_mismatch_count
# assert true_global_map.shape == self.last_true_global_map.shape
true_global_map = self.last_true_global_map # keep the first
else:
if self.map_source in ['true-saved', 'true-saved-sampled', 'true-saved-hrsampled', 'true-partial-sampled']:
saved_global_map = load_map_from_file(scene_id=self.get_scene_name(), height=agent_pos[1], map_name=(
"map" if self.map_source == 'true-saved' else ('sampledmap' if self.map_source in [
'true-saved-sampled', 'true-partial-sampled'] else 'hrsampledmap')), basepath=map_path_for_sim(self.params.sim))
assert saved_global_map.dtype == np.uint8
if saved_global_map.shape != true_global_map.shape:
# This can happen if floors are not perfectly aligned, etc. It's a problem because we cannot recover
# the map pose anymore.
# Save log
print ('Map shape mismatch for %s. Example saved under ./temp/failures/'%self.get_scene_name())
if not os.path.exists('./temp/failures'):
os.mkdir('./temp/failures')
cv2.imwrite('./temp/failures/%s_ep%d_envmap.png'%(self.get_scene_name(), int(self.env.current_episode.episode_id)), true_global_map)
cv2.imwrite('./temp/failures/%s_ep%d_savedmap.png'%(self.get_scene_name(), int(self.env.current_episode.episode_id)), saved_global_map)
if SKIP_MAP_SHAPE_MISMATCH:
print("Skip because of map mismatch")
return {"action": 0, "xy_error": 0.}
else:
raise ValueError('Map shape mismatch. Example saved under ./temp/failures/')
self.map_mismatch_count = np.count_nonzero(np.logical_and(saved_global_map == 0, true_global_map)) # cells free in the saved map but occupied in the env map
true_global_map = saved_global_map
else:
self.map_mismatch_count = 0
# Rotate true map once and keep this for the episode. Remember the transformation and apply it from all raw observations
if self.params.random_rotations:
self.raw_yaw_offset = np.random.rand() * 2. * np.pi
true_global_map, new_poses, transform = rotate_map_and_poses(true_global_map, self.raw_yaw_offset, poses=[np.zeros((1, 2), np.float32)], constant_value=0)
assert true_global_map.dtype == np.uint8
# Reapply threshold
true_global_map = (true_global_map > 128).astype(np.uint8) * 255
self.raw_xy_transform = transform
self.last_true_global_map = true_global_map
# Deal with observed pose
# TODO this might be wrong here if map shapes don't match and/or change during episode.
true_xy = np.array(info['top_down_map']['agent_map_coord']) # x: downwards; y: rightwards
if np.any(true_xy < 0.):
raise ValueError("Map coordinates are less than zero. On Spot this happens if the dummy environment "
"happens to be smaller than the real world.")
true_xy = self.raw_xy_transform(true_xy[None])[0]
true_yaw = info['top_down_map']['agent_angle'] # 0 downwards, positive ccw. Forms standard coord system with x and y.
true_yaw = true_yaw + self.raw_yaw_offset
true_yaw = np.array((true_yaw, ), np.float32)
true_yaw = (true_yaw + np.pi) % (2 * np.pi) - np.pi # normalize
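# Aside on the normalization above (a worked example, not part of the pipeline):
# (a + pi) % (2*pi) - pi wraps any angle into [-pi, pi). E.g. for a = 3*pi/2:
# (3*pi/2 + pi) % (2*pi) - pi = pi/2 - pi = -pi/2, i.e. 270 deg becomes -90 deg.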
# Recover from simulator pos
true_xy_from_pos = self._to_grid_pos(agent_pos[0], agent_pos[2], info['top_down_map'])
true_xy_from_pos = self.raw_xy_transform(true_xy_from_pos[None])[0]
offset_xy = true_xy - true_xy_from_pos
global_true_target_xy = self._to_grid_pos(goal_pos[0], goal_pos[2], info['top_down_map'])
true_target_xy = self.raw_xy_transform(global_true_target_xy[None])[0]
true_target_xy += offset_xy
del offset_xy
# # Debug pose
# print("position", true_xy, self.env.sim.get_agent_state().position)
# rot_euler = quaternion.as_euler_angles(self.env.sim.get_agent_state().rotation)
# print("rotation", np.rad2deg(true_yaw), np.rad2deg(rot_euler))
#
# quat = quaternion.from_euler_angles(np.pi, true_yaw - np.pi, np.pi)
# print (np.rad2deg(quaternion.as_euler_angles(quat)))
#
# pdb.set_trace()
else:
true_xy = np.zeros((2,), np.float32)
true_yaw = np.zeros((1,), np.float32)
true_target_xy = np.zeros((2,), np.float32)
global_true_target_xy = None
info = None
true_global_map = np.zeros([self.max_map_size[0], self.max_map_size[1], 1], np.float32)
# Initialize everything
if self.step_i == 0:
if self.pose_source in ['true', 'slam-truestart']:
# Initialize with true things. Only makes sense if we access it
assert self.env is not None
self.true_xy_offset = -true_xy.astype(np.int32)
# self.true_xy_transform = Transform2D
# self.true_xy_transform.add_translation()
if self.fixed_map_size:
self.global_map_logodds = np.zeros((self.max_map_size[0], self.max_map_size[1], 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
else:
self.global_map_logodds = np.zeros((1, 1, 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
self.prev_yaw = true_yaw
self.xy = true_xy + self.true_xy_offset
self.yaw = true_yaw
particle_xy0 = np.tile((self.xy)[None], [self.params.num_particles, 1])
particle_yaw0 = np.tile(self.yaw[None], [self.params.num_particles, 1])
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal']
initial_target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
initial_target_xy = rotate_2d(np.array([initial_target_r, 0.], np.float32), initial_target_fi + true_yaw + np.deg2rad(30)) + true_xy + self.true_xy_offset
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal_with_gps_compass']
target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
observed_target_xy = rotate_2d(np.array([target_r, 0.], np.float32), initial_target_fi + true_yaw) + true_xy + self.true_xy_offset
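# Aside on the polar-to-grid conversion above (illustrative; the exact sign
# convention of rotate_2d is defined elsewhere): at 0.05 m per cell, a range of
# r meters becomes r / 0.05 cells, and rotating the vector [r_cells, 0] by the
# bearing (fi + yaw) gives the target offset in map coordinates. E.g. r = 1.0 m
# at fi = pi/2 with yaw = 0 is 20 cells rotated a quarter turn.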
print ("Target observed: (%d, %d) true: (%d, %d) initial (%d, %d)"%(
observed_target_xy[0], observed_target_xy[1], true_target_xy[0] + self.true_xy_offset[0], true_target_xy[1] + self.true_xy_offset[1], initial_target_xy[0], initial_target_xy[1]))
if np.linalg.norm(observed_target_xy - (true_target_xy + self.true_xy_offset)) > 0.001:
pdb.set_trace()
self.target_xy = observed_target_xy
elif self.pose_source == "slam":
self.true_xy_offset = np.zeros((2,), np.int32) # we don't know it
if self.fixed_map_size:
self.global_map_logodds = np.zeros((self.max_map_size[0], self.max_map_size[1], 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
else:
self.global_map_logodds = np.zeros((1, 1, 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
self.prev_yaw = 0.
self.xy = np.zeros((2, ), np.float32)
self.yaw = np.zeros((1, ), np.float32)
particle_xy0 = np.zeros((self.params.num_particles, 2), np.float32)
particle_yaw0 = np.zeros((self.params.num_particles, 1), np.float32)
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal']
target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
observed_target_xy = rotate_2d(np.array([target_r, 0.], np.float32), initial_target_fi)
self.target_xy = observed_target_xy
else:
raise ValueError("Unknown pose estimation source.")
self.particle_xy_list = [particle_xy0]
self.particle_yaw_list = [particle_yaw0]
self.particle_logit_acc_list = [np.zeros((self.params.num_particles,), np.float32)]
self.xy_loss_list = [0.]
self.yaw_loss_list = [0.]
self.true_xy_traj = [true_xy]
self.true_yaw_traj = [true_yaw]
self.action_traj = []
# Resize map and add offset
map_shape = self.global_map_logodds.shape
if self.fixed_map_size:
# xy_map_margin = 10 # this is before slam update. a single step can move 6 cells plus estimation may change.
# # TODO xy_map_margin was occasionally too small. Expose as param and increase
#
# # Keep a fixed map size. Dont even update it, only move the offset, such that center point is between current pose and goal
# assert map_shape[:2] == self.max_map_size
# assert self.max_map_size[0] == self.max_map_size[1]
#
# center_xy = (self.xy + self.target_xy) * 0.5
# desired_center_xy = np.array(self.max_map_size, np.float32) * 0.5
# offset_xy = (desired_center_xy - center_xy).astype(np.int)
#
# new_xy = self.xy + offset_xy
#
# # Handle the case when xy would fall out of the map area or would be too near the edge.
# # These will be only nonzero if xy is outside the allowed area
# # if np.any(new_xy < xy_map_margin):
# offset_xy += np.ceil(np.maximum(xy_map_margin - new_xy, 0.)).astype(np.int32)
# # if np.any(new_xy >= self.max_map_size[0] - xy_map_margin):
# offset_xy -= np.ceil(np.maximum(new_xy - (self.max_map_size[0] - xy_map_margin), 0.)).astype(np.int32)
#
# self.particle_xy_list = [xy + offset_xy for xy in self.particle_xy_list]
# self.target_xy += offset_xy
# self.true_xy_offset += offset_xy
# self.xy += offset_xy
#
# # Handle the case when target is outside of the map area
# if np.any(self.target_xy < TARGET_MAP_MARGIN) or np.any(self.target_xy >= self.max_map_size[0] - TARGET_MAP_MARGIN):
# # Find the free map cell closest to the target
# global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
# # TODO this should use the same threshold instead of 0.5
# free_x, free_y = np.nonzero(np.squeeze(global_map_pred[TARGET_MAP_MARGIN:-TARGET_MAP_MARGIN, TARGET_MAP_MARGIN:-TARGET_MAP_MARGIN], axis=-1) >= 0.5)
# free_xy = np.stack([free_x, free_y], axis=-1)
# free_xy = free_xy.astype(np.float32)
# free_xy += 0.5
# free_xy += TARGET_MAP_MARGIN
# dist = np.linalg.norm(free_xy - self.target_xy[None], axis=1)
# # pretend the closest free cell is the target
# self.target_xy_for_planning = free_xy[np.argmin(dist)]
# print ("Moving target within the map: %s --> %s"%(str(self.target_xy), str(self.target_xy_for_planning)))
# else:
# self.target_xy_for_planning = self.target_xy.copy()
# Keep a fixed map size. Don't even update it, only move the offset, such that the center point is between current pose and goal
assert map_shape[:2] == self.max_map_size
# Find the free map cell closest to the target
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
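# Aside (assuming the usual log-odds convention l = log(p / (1 - p))):
# inverse_logodds recovers the occupancy probability via the sigmoid,
# p = 1 / (1 + exp(-l)); l = 0 maps to p = 0.5 (unknown), which is why the
# free-space test below thresholds at 0.5.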
is_free_map = (np.squeeze(global_map_pred, axis=-1) >= 0.5) # TODO this should use the same threshold instead of 0.5
# TODO xy_map_margin was occasionally too small. Expose as param and increase
offset_ij, projected_target_xy = project_state_and_goal_to_smaller_map(
self.max_map_size, self.xy, self.target_xy, is_free_map, xy_map_margin=10, target_map_margin=TARGET_MAP_MARGIN)
self.particle_xy_list = [xy + offset_ij for xy in self.particle_xy_list]
self.target_xy += offset_ij
self.true_xy_offset += offset_ij
self.xy += offset_ij
self.target_xy_for_planning = projected_target_xy
if np.any(self.target_xy != self.target_xy_for_planning):
print("Moving target within the map: %s --> %s" % (str(self.target_xy), str(self.target_xy_for_planning)))
else:
# Expand map and offset pose if needed, such that target and the surrounding of current pose are all in the map.
if MAX_MAP_SIZE_FOR_SPEED_TEST:
offset_ij = np.array(((self.max_map_size[0]-map_shape[0])//2, (self.max_map_size[1]-map_shape[1])//2), np.int32)
expand_xy = offset_ij.copy()
else:
local_map_max_extent = 110 # TODO need to adjust to local map size and scaler
local_map_max_extent += 10 # to account for how much the robot may move in one step, including max overshooting
target_margin = 8
min_particle_xy = self.particle_xy_list[-1].min(axis=0) # the last step is enough because earlier steps already fit on the map
max_particle_xy = self.particle_xy_list[-1].max(axis=0)
min_x = int(min(self.target_xy[0] - target_margin, min_particle_xy[0] - local_map_max_extent) - 1)
min_y = int(min(self.target_xy[1] - target_margin, min_particle_xy[1] - local_map_max_extent) - 1)
max_x = int(max(self.target_xy[0] + target_margin, max_particle_xy[0] + local_map_max_extent) + 1)
max_y = int(max(self.target_xy[1] + target_margin, max_particle_xy[1] + local_map_max_extent) + 1)
offset_ij = np.array([max(0, -min_x), max(0, -min_y)])
expand_xy = np.array([max(0, max_x+1-map_shape[0]), max(0, max_y+1-map_shape[1])])
is_offset = np.any(offset_ij > 0)
is_expand = np.any(expand_xy > 0)
if is_offset:
offset_ij += 0 if MAX_MAP_SIZE_FOR_SPEED_TEST else EXTRA_STEPS_WHEN_EXPANDING_MAP
self.particle_xy_list = [xy + offset_ij for xy in self.particle_xy_list]
self.target_xy += offset_ij
self.true_xy_offset += offset_ij
if is_expand:
expand_xy += 0 if MAX_MAP_SIZE_FOR_SPEED_TEST else EXTRA_STEPS_WHEN_EXPANDING_MAP
if is_offset or is_expand:
prev_shape = self.global_map_logodds.shape
self.global_map_logodds = np.pad(
self.global_map_logodds, [[offset_ij[0], expand_xy[0]], [offset_ij[1], expand_xy[1]], [0, 0]],
mode='constant', constant_values=0.)
print ("Increasing map size: (%d, %d) --> (%d, %d) offset (%d, %d), expand (%d, %d)"%(
prev_shape[0], prev_shape[1], self.global_map_logodds.shape[0], self.global_map_logodds.shape[1],
offset_ij[0], offset_ij[1], expand_xy[0], expand_xy[1]))
excess_xy = np.array(self.global_map_logodds.shape[:2], np.int32) - np.array(self.max_map_size[:2], np.int32)
excess_xy = np.maximum(excess_xy, np.zeros_like(excess_xy))
if np.any(excess_xy > 0):
print ("Reducing map by excess (%d, %d) to fit max size"%(excess_xy[0], excess_xy[1]))
if self.target_xy[0] > self.global_map_logodds.shape[0] // 2:
self.global_map_logodds = self.global_map_logodds[excess_xy[0]:]
else:
self.global_map_logodds = self.global_map_logodds[:-excess_xy[0]]
if self.target_xy[1] > self.global_map_logodds.shape[1] // 2:
self.global_map_logodds = self.global_map_logodds[:, excess_xy[1]:]
else:
self.global_map_logodds = self.global_map_logodds[:, :-excess_xy[1]]
self.target_xy_for_planning = self.target_xy.copy()
map_shape = self.global_map_logodds.shape
# Offset true map
if self.env is not None:
reduce_xy = np.maximum(-self.true_xy_offset, np.zeros((2,), np.int32)).astype(np.int32)
extend_xy = np.maximum(self.true_xy_offset, np.zeros((2,), np.int32)).astype(np.int32)
global_map_label = true_global_map * (1./255.)
global_map_label = global_map_label[reduce_xy[0]:, reduce_xy[1]:]
global_map_label = np.pad(global_map_label, [[extend_xy[0], 0], [extend_xy[1], 0], [0, 0]])
global_map_label = np.pad(global_map_label, [[0, max(map_shape[0]-global_map_label.shape[0], 0)], [0, max(map_shape[1]-global_map_label.shape[1], 0)], [0, 0]])
global_map_label = global_map_label[:map_shape[0], :map_shape[1]]
assert global_map_label.shape == map_shape
else:
global_map_label = None
# Get image observations
rgb = observations['rgb']
depth = observations['depth']
if USE_ASSERTS:
assert rgb.dtype == np.uint8
assert depth.dtype == np.float32 and np.all(depth <= 1.)
rgb = cv2.resize(rgb, (160, 90), )
rgb = rgb.astype(np.float32) * (1. / 255.) # scale uint8 0..255 down to 0..1 to match depth
depth = cv2.resize(depth, (160, 90), ) # interpolation=cv2.INTER_NEAREST)
depth = np.atleast_3d(depth)
if self.params.mode == 'both':
images = np.concatenate([depth, rgb], axis=-1) # these are 0..1 float format
elif self.params.mode == 'depth':
images = depth
else:
images = rgb
images = (images * 255).astype(np.uint8)
images = np.array(images, np.float32)
# images = images * 255 # to unit8 0..255 format
images = images * (2. / 255.) - 1. # to network input -1..1 format
# Get visibility map from depth if needed
if self.use_custom_visibility:
visibility_map_input = ClassicMapping.is_visible_from_depth(depth, self.local_map_shape, sim=self.params.sim, zoom_factor=self.brain_requirements.transform_window_scaler,
fix_habitat_depth=self.params.fix_habitat_depth)
visibility_map_input = visibility_map_input[:, :, None].astype(np.float32)
assert np.all(visibility_map_input <= 1.)
else:
visibility_map_input = np.zeros(self.visibility_input.shape[2:], dtype=np.float32)
# # Map prediction only, using known pose
# last_global_map_input = np.zeros(self.max_map_size + (self.map_ch, ), np.float32)
# last_global_map_input[:map_shape[0], :map_shape[1]] = self.global_map_logodds
# true_map_input = np.zeros(self.max_map_size + (1, ), np.uint8)
# true_map_input[:global_map_label.shape[0], :global_map_label.shape[1]] = global_map_label
#
# feed_dict = {
# self.images_input: images, self.xy_input: true_xy, self.yaw_input: np.array((true_yaw, )),
# self.global_map_input: last_global_map_input,
# self.true_map_input: true_map_input,
# }
# if self.visibility_input is not None:
# visibility_map_input = ClassicMapping.is_visible_from_depth(depth, self.local_map_shape, sim=self.params.sim, zoom_factor=self.brain_requirements.transform_window_scaler)
# visibility_map_input = visibility_map_input[:, :, None].astype(np.uint8)
# feed_dict[self.visibility_input] = visibility_map_input
#
# mapping_output = self.run_inference(feed_dict)
# global_map_logodds = np.array(mapping_output.global_map_logodds[0, -1]) # squeeze batch and traj
# global_map_logodds = global_map_logodds[:map_shape[0], :map_shape[1]]
# self.global_map_logodds = global_map_logodds
time_prepare = time.time() - time_last
time_last = time.time()
# SLAM prediction
if self.step_i == 0:
# For the first step we don't do a pose update, but we still need to obtain local maps and image features
self.image_traj = [images.copy()]
# Get local maps for first
feed_dict = {
self.new_images_input: images[None, None],
self.visibility_input: visibility_map_input[None, None],
}
# TODO we should predict global map as well with a single local map added to it
new_local_maps, new_visibility_maps, new_image_features = self.sess.run([self.inference_outputs['new_local_maps'], self.inference_outputs['new_visibility_maps'], self.inference_outputs['new_image_features']], feed_dict=feed_dict)
self.local_map_traj = [new_local_maps[0, 0]]
self.visibility_traj = [new_visibility_maps[0, 0]]
self.image_features_traj = [new_image_features[0, 0]]
slam_outputs = None
# Transform predictions
global_map_true_partial = None
assert self.global_map_logodds.shape[-1] == 1
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
slam_xy = np.mean(self.particle_xy_list[-1], axis=0)
slam_yaw = np.mean(self.particle_yaw_list[-1], axis=0)
slam_mean_xy = slam_xy
slam_mean_yaw = slam_yaw
slam_mean2_xy = slam_xy
slam_mean2_yaw = slam_yaw
slam_ml_xy = slam_xy
slam_ml_yaw = slam_yaw
slam_traj_xy = None
slam_traj_yaw = None
else:
assert len(self.action_traj) > 0
assert len(self.particle_xy_list) == len(self.action_traj)
assert self.visibility_traj[-1].dtype == np.float32
assert np.all(self.visibility_traj[-1] <= 1.)
inference_trajlen = self.params.inference_trajlen
self.image_traj.append(images.copy())
self.true_xy_traj.append(true_xy)
self.true_yaw_traj.append(true_yaw)
new_action = np.array((self.action_traj[-1], ), np.int32)[None]
new_rel_xy, new_rel_yaw = actions_from_trajectory(
np.stack([self.true_xy_traj[-2], self.true_xy_traj[-1]], axis=0), np.stack([self.true_yaw_traj[-2], self.true_yaw_traj[-1]], axis=0))
# Pick best segment of the trajectory based on how much viewing areas overlap
current_trajlen = len(self.particle_xy_list) + 1
assert len(self.true_xy_traj) == current_trajlen and len(self.image_features_traj) == current_trajlen - 1
if self.params.slam_use_best_steps:
mean_traj_xy, mean_traj_yaw = ClassicMapping.mean_particle_traj(
np.array(self.particle_xy_list), np.array(self.particle_yaw_list), self.particle_logit_acc_list[-1][None, :, None])
mean_traj_xy, mean_traj_yaw = ClassicMapping.propage_trajectory_with_action(mean_traj_xy, mean_traj_yaw, self.action_traj[-1])
segment_steps = ClassicMapping.get_steps_with_largest_overlapping_view(
mean_traj_xy, mean_traj_yaw, segment_len=inference_trajlen, view_distance=30*self.brain_requirements.transform_window_scaler)
else:
segment_steps = np.arange(max(current_trajlen-inference_trajlen, 0), current_trajlen)
assert segment_steps.ndim == 1
past_particle_xy = np.stack(self.particle_xy_list, axis=0)
past_particle_yaw = np.stack(self.particle_yaw_list, axis=0)
true_xy_seg = np.stack([self.true_xy_traj[i] for i in segment_steps], axis=0) + self.true_xy_offset[None]
true_yaw_seg = np.stack([self.true_yaw_traj[i] for i in segment_steps], axis=0)
past_image_features_seg = np.stack([self.image_features_traj[i] for i in segment_steps[:-1]], axis=0)
past_local_maps = np.stack(self.local_map_traj, axis=0)
past_visibility = np.stack(self.visibility_traj, axis=0)
feed_dict = {
self.inference_timesteps_input: segment_steps[None],
self.new_images_input: images[None, None],
self.last_images_input: self.image_traj[-2][None, None],
self.visibility_input: visibility_map_input[None, None],
self.past_local_maps_input: past_local_maps[None],
self.past_visibility_input: past_visibility[None],
self.past_needed_image_features_input: past_image_features_seg[None],
self.global_map_shape_input: np.array(map_shape[:2], np.int32),
# global_map_input: global_map,
# self.images_input: images_seg[None], # always input both images and global map, only one will be connected
self.true_xy_input: true_xy_seg[None], # used for global to local transition and loss
self.true_yaw_input: true_yaw_seg[None],
# self.visibility_input: visibility_seg[None],
# self.particle_xy_input: particle_xy_seg[None],
# self.particle_yaw_input: particle_yaw_seg[None],
self.particle_xy_input: past_particle_xy[None],
self.particle_yaw_input: past_particle_yaw[None],
self.new_action_input: new_action[None],
self.new_rel_xy_input: new_rel_xy[None],
self.new_rel_yaw_input: new_rel_yaw[None],
self.last_step_particle_logits_input: self.particle_logit_acc_list[-1][None],
}
slam_outputs = self.run_inference(feed_dict, need_map=need_map)
# Deal with resampling
self.particle_xy_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_xy_list]
self.particle_yaw_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_yaw_list]
self.particle_logit_acc_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_logit_acc_list]
# Store new particles
self.particle_xy_list.append(slam_outputs.particle_xy_t[0])
self.particle_yaw_list.append(slam_outputs.particle_yaw_t[0])
self.particle_logit_acc_list.append(slam_outputs.particle_logits_acc[0])
if FAKE_INPUT_FOR_SPEED_TEST:
self.particle_xy_list[-1] = self.particle_xy_list[-1] * 0 + true_xy[None] + self.true_xy_offset[None]
# Store local map prediction
self.local_map_traj.append(slam_outputs.new_local_maps[0, 0])
self.visibility_traj.append(slam_outputs.new_visibility_maps[0, 0])
self.image_features_traj.append(slam_outputs.new_image_features[0, 0])
print (self.image_features_traj[-1].shape)
# Store losses. only meaningful if true state was input
self.xy_loss_list.append(slam_outputs.loss_xy_all[0])
self.yaw_loss_list.append(slam_outputs.loss_yaw_all[0])
# Update map
if need_map:
global_map_logodds = np.array(slam_outputs.global_map_logodds[0]) # squeeze batch and traj
# if global_map_logodds.shape != self.global_map_logodds.shape:
# raise ValueError("Unexpected global map shape output from slam net.")
if not self.fixed_map_size:
global_map_logodds = global_map_logodds[:map_shape[0], :map_shape[1]]
self.global_map_logodds = global_map_logodds
# Transform predictions
global_map_true_partial = None
assert self.global_map_logodds.shape[-1] == 1
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
slam_mean_xy = slam_outputs.mean_xy[0, -1]
slam_mean_yaw = slam_outputs.mean_yaw[0, -1]
slam_mean2_xy = slam_outputs.mean2_xy[0, -1]
slam_mean2_yaw = slam_outputs.mean2_yaw[0, -1]
slam_ml_xy = slam_outputs.ml_xy[0, -1]
slam_ml_yaw = slam_outputs.ml_yaw[0, -1]
slam_traj_xy = slam_outputs.xy[0, :] # the one used for mapping
slam_traj_yaw = slam_outputs.yaw[0, :] # the one used for mapping
slam_xy = slam_outputs.xy[0, -1] # the one used for mapping
slam_yaw = slam_outputs.yaw[0, -1]
# TODO should separately reassemble the map for the whole trajectory using the mean particle trajectory
# do NOT use the most likely particle, it's meaningless after resampling. Density is what matters.
# need to implement reasonable sequential averaging of yaws..
# Compute mean separately here
if self.params.brain == 'habslambrain_v1' and USE_ASSERTS:
mean_xy_from_np, mean_yaw_from_np = ClassicMapping.mean_particle_traj(self.particle_xy_list[-1], self.particle_yaw_list[-1], self.particle_logit_acc_list[-1][:, None])
xy_diff = np.abs(mean_xy_from_np - slam_mean_xy)
yaw_diff = np.abs(mean_yaw_from_np - slam_mean_yaw)
yaw_diff = (yaw_diff + np.pi) % (2 * np.pi) - np.pi
if not np.all(xy_diff < 1.) or not np.all(yaw_diff < np.deg2rad(10.)):
raise ValueError("SLAM mean and numpy mean don't match. Mean difference: %s vs %s | %s vs. %s" % (
str(mean_xy_from_np), str(slam_mean_xy), str(mean_yaw_from_np), str(slam_mean_yaw)))
# Pose source
if self.pose_source == 'true':
xy = true_xy + self.true_xy_offset
yaw = true_yaw
traj_xy = np.array(self.true_xy_traj) + self.true_xy_offset[None]
traj_yaw = np.array(self.true_yaw_traj)
assert slam_traj_xy is None or traj_xy.shape[0] == slam_traj_xy.shape[0]
elif self.pose_source in ["slam-truestart", "slam"]:
xy = slam_xy
yaw = slam_yaw
traj_xy = slam_traj_xy
traj_yaw = slam_traj_yaw
# TODO weighted mean of particles
else:
raise NotImplementedError
self.xy = xy
self.yaw = yaw
# Verify true pose
if USE_ASSERTS and self.params.agent_pose_source == 'true':
assert np.all(np.isclose(traj_xy[:, None], np.array(self.particle_xy_list), atol=1e-3))
assert np.all(np.isclose(traj_yaw[:, None], np.array(self.particle_yaw_list), atol=1e-3))
# last_action = self.action_traj[-1]
# if last_action == 1:
# nominal_xy = traj_xy[-2] + rotate_2d(np.array([5., 0.], np.float32), traj_yaw[-2])
# else:
# nominal_xy = traj_xy[-2]
# move_error = np.linalg.norm(xy - nominal_xy)
# move_amount = np.linalg.norm(xy - traj_xy[-2])
# print ("Act %d. Moved %f. Error %f"%(last_action, move_amount, move_error))
# if move_error > 3.:
# pdb.set_trace()
local_map_label = None
# local_map_label = slam_outputs.local_map_label[0, 0, :, :, 0]
# local_map_pred = slam_outputs.combined_local_map_pred[0, 0, :, :, 0]
ang_vel = yaw - self.prev_yaw
ang_vel = (ang_vel + np.pi) % (2*np.pi) - np.pi
target_dist = np.linalg.norm(self.target_xy - xy)
true_target_dist = np.linalg.norm(true_target_xy - true_xy)
xy_error, yaw_error = self.pose_error(slam_xy, slam_yaw, true_xy, true_yaw)
mean_xy_error, mean_yaw_error = self.pose_error(slam_mean_xy, slam_mean_yaw, true_xy, true_yaw)
mean2_xy_error, _ = self.pose_error(slam_mean2_xy, slam_mean2_yaw, true_xy, true_yaw)
ml_xy_error, _ = self.pose_error(slam_ml_xy, slam_ml_yaw, true_xy, true_yaw)
self.distance_history.append(target_dist)
if self.pose_source != 'slam' and not FAKE_INPUT_FOR_SPEED_TEST:
assert np.abs(np.sqrt(self.xy_loss_list[-1]) - xy_error) < 2. # one is before resampling, other is after
# Detect collision
is_colliding = False
if self.step_i > 2 and self.action_traj[-1] == 1 and self.recover_step_i == 0: # moved forward
last_step_len = np.linalg.norm(traj_xy[-2] - traj_xy[-1], axis=0)
if last_step_len < COLLISION_DISTANCE_THRESHOLD:
is_colliding = True
self.collision_timesteps.append(self.step_i)
self.num_collisions += 1
if self.recover_step_i >= len(self.recover_policy):
self.recover_step_i = 0 # done with recovery
dist_hist = np.array(self.distance_history[-self.GIVE_UP_NO_PROGRESS_STEPS:])
time_slam = time.time() - time_last
time_last = time.time()
should_give_up = False
# Modify state if its out of bounds, or give up if goal is out of bounds
if (np.any(self.target_xy_for_planning < TARGET_MAP_MARGIN)
or np.any(self.target_xy_for_planning + TARGET_MAP_MARGIN >= np.array(self.max_map_size))):
should_give_up = True
if USE_ASSERTS and self.fixed_map_size:
raise ValueError("Target is outside of map area -- this should not happen for fixed size map.")
elif (np.any(self.xy < 0) or np.any(self.xy >= np.array(self.max_map_size))):
print ("State is outside of map area -- this can happen for a fixed-size map because it is cropped before the slam update.")
if self.fixed_map_size:
new_xy = np.clip(xy, [0., 0.], np.array(self.max_map_size, np.float32) - 0.001)
print ("moving state.. %s --> %s"%(str(xy), str(new_xy)))
xy = new_xy
self.xy = new_xy
else:
print ("Giving up")
should_give_up = True
# Check for time and distance limits
try:
for time_thres, dist_thres in self.GIVE_UP_STEP_AND_DISTANCE:
if self.step_i >= time_thres and target_dist >= dist_thres:
should_give_up = True
break
except Exception as e:
print ("Exception " + str(e))
# Give up if no progress for too long wallclock time
try:
mins_since_ep_start = (time.time() - self.episode_t) / 60
reduction_since_beginning = self.distance_history[0] - self.distance_history[-1]
for time_thres, reduct_thres in self.GIVE_UP_TIME_AND_REDUCTION:
if mins_since_ep_start >= time_thres and reduction_since_beginning < reduct_thres:
print ("Give up because of wallclock time and reduction t=%f reduct=%f"%(mins_since_ep_start, reduction_since_beginning))
should_give_up = True
break
except Exception as e:
print ("Exception " + str(e))
giving_up_collision = False
giving_up_distance = False
giving_up_progress = False
is_done = False
# Plan
planned_path = np.zeros([0, 2], dtype=np.float32)
# Choose which map to use for planning
global_map_for_planning, cont_global_map_for_planning = self.get_global_map_for_planning(global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, self.map_source, keep_soft=self.params.soft_cost_map)
shrunk_map_offset_xy = None
if self.params.interactive_action:
while True:
ans = input("Manual action: ")
try:
if ans and int(ans) >= 0 and int(ans) <= 3:
action = int(ans)
break
except ValueError: # int(ans) failed on non-numeric input
pass
plan_status_msg = "Manual %d"%action
elif target_dist < self.params.agent_stop_near_target_dist:
# Close enough to target. Normal requirement is 0.36/0.05 = 7.2
plan_status_msg = "Manual stop"
is_done = True
action = 0
elif should_give_up:
plan_status_msg = "Giving up because target is too far (or state was outside of map)"
giving_up_distance = True
action = 0
elif shortcut_action is not None:
# NOTE must be before recover on collision - because we already incremented recover policy
plan_status_msg = "Shortcut action"
action = shortcut_action
elif RECOVER_ON_COLLISION and (is_colliding or self.recover_step_i > 0):
plan_status_msg = ("Recover from collision %d / %d."%(self.recover_step_i, len(self.recover_policy)))
action = self.recover_policy[self.recover_step_i]
self.recover_step_i += 1
self.pathplanner.reset() # to clear out its cache
if target_dist < NEAR_TARGET_COLLISION_STOP_DISTANCE:
plan_status_msg += " --> Attempt to stop instead, near target"
is_done = True
action = 0
elif self.GIVE_UP_NUM_COLLISIONS > 0 and self.num_collisions >= self.GIVE_UP_NUM_COLLISIONS:
plan_status_msg = "Too many collisions (%d). Giving up.."%(self.num_collisions, )
giving_up_collision = True
action = 0
elif self.GIVE_UP_NO_PROGRESS_STEPS > 0 and self.step_i > self.GIVE_UP_NO_PROGRESS_STEPS and self.step_i > 100 and np.max(dist_hist) - np.min(dist_hist) < self.NO_PROGRESS_THRESHOLD:
plan_status_msg = "No progress for %d steps. Giving up.."%(self.GIVE_UP_NO_PROGRESS_STEPS, )
giving_up_progress = True
action = 0
else:
action, planned_path, plan_status_msg, processed_map_for_planning, shrunk_map_offset_xy = self.plan_and_control(
xy, yaw, self.target_xy_for_planning, global_map_for_planning, ang_vel, initial_target_fi,
allow_shrink_map=self.allow_shrink_map,
cont_global_map_pred=cont_global_map_for_planning if self.planner_needs_cont_map else None)
is_done = (action == 0)
# Visualize agent
if self.step_i % PLOT_EVERY_N_STEP == 0 and PLOT_EVERY_N_STEP > 0 and slam_outputs is not None:
local_map_pred = self.local_map_traj[-1][:, :, 0]
self.visualize_agent(slam_outputs.tiled_visibility_mask[0, 0, :, :, 0], images, global_map_pred,
global_map_for_planning,
# processed_map_for_planning.astype(np.float32)/255.,
global_map_label,
global_map_true_partial, local_map_pred, local_map_label, planned_path,
sim_rgb=observations['rgb'], # uint
xy=xy, yaw=yaw, true_xy=true_xy + self.true_xy_offset, true_yaw=true_yaw, target_xy=self.target_xy_for_planning)
# pdb.set_trace()
# Overwrite with expert
if self.action_source == 'expert':
best_action = self.follower.get_next_action(goal_pos)
action = best_action
if action == 0 and EXIT_AFTER_N_STEPS_FOR_SPEED_TEST > 0:
print ("Spin instead of stopping.")
action = 3
is_done = (action == 0)
if DEBUG_DUMMY_ACTIONS_ONLY:
action = 1
# # Overwrite with manual actions
# if self.params.interactive_action:
# ans = input("Overwrite %d: "%action)
# if ans and int(ans) >= 0 and int(ans) <= 3:
# action = int(ans)
time_plan = time.time() - time_last
time_last = time.time()
# Save data
if len(self.tfwriters) > 0 and self.step_i % SAVE_DATA_EVERY_N == 0:
is_using_planner = planned_path.shape[0] > 0 and target_dist >= 10.5
if DATA_TYPE == "planinstance":
assert self.map_source != "pred"
if not is_using_planner: # two step strategy for <= 10
if DATA_INCLUDE_NONPLANNED_ACTIONS:
raise NotImplementedError
else:
pred_map_for_planning, _ = self.get_global_map_for_planning(
global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, "pred", keep_soft=True)
self.write_datapoint(global_map_for_planning, pred_map_for_planning, self.target_xy_for_planning,
planned_path.astype(np.int32), action, shrunk_map_offset_xy)
elif DATA_TYPE == "scenario":
assert SAVE_DATA_EVERY_N == 1 # need to save all steps for meaningful slam and image data
pred_map_for_planning, _ = self.get_global_map_for_planning(
global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, "pred", keep_soft=True)
# TODO for predmap the maps we save will not necessarily be meaningful. Never tested.
# convert maps
assert global_map_for_planning.dtype == np.float32 and pred_map_for_planning.dtype == np.float32
assert global_map_for_planning.shape == pred_map_for_planning.shape
assert global_true_target_xy is not None # unchanged goal coordinates on the map
data_true_map_png = encode_image_to_png((global_map_for_planning * 255.).astype(np.uint8))
data_pred_map_png = encode_image_to_png((pred_map_for_planning * 255.).astype(np.uint8)) # encode predicted probability as uint8
depth_data = np.atleast_3d(observations['depth']) if self.params.data_highres_images else depth
rgb_data = observations['rgb'] if self.params.data_highres_images else (rgb * 255.).astype(np.uint8)
depth_png = encode_image_to_png((depth_data * 255.).astype(np.uint8))
rgb_png = encode_image_to_png(rgb_data)
global_xy = np.array(info['top_down_map']['agent_map_coord']) # x: downwards; y: rightwards
global_yaw = np.array(info['top_down_map']['agent_angle']) # 0 downwards, positive ccw. Forms a standard coordinate system with x and y.
self.scenario_traj_data.append({
'action': np.array(action, np.int32),
'local_est_xy': xy.copy(),
'local_est_yaw': yaw.copy(),
'local_true_xy': (true_xy + self.true_xy_offset).copy(),
'local_true_yaw': true_yaw.copy(),
'local_goal_xy': self.target_xy_for_planning.copy(),
'true_map_png': data_true_map_png,
'pred_map_png': data_pred_map_png,
'depth_png': depth_png,
'rgb_png': rgb_png,
'global_xy': global_xy.copy(),
'global_yaw': global_yaw.copy(),
'is_using_planner': is_using_planner,
'is_colliding': is_colliding,
})
# Metadata
if len(self.scenario_traj_data) == 1:
ep = self.env.current_episode
episode_id = ep.episode_id
model_id = get_model_id_from_episode(ep)
height = ep.start_position[1]
floor = get_floor_from_json(model_id, height, map_path_for_sim(self.params.sim))
self.scenario_traj_data[0].update({
'global_goal_xy': global_true_target_xy.copy(),
'model_id': str(model_id),
'floor': int(floor),
'episode_id': int(episode_id),
})
else:
raise NotImplementedError(DATA_TYPE)
# pdb.set_trace()
# if self.episode_i == 0:
# cv2.imwrite('./temp/ep%d-step%d.png'%(self.episode_i, self.step_i), observations['rgb'])
# if self.step_i == 0:
# top_down_map = maps.get_topdown_map(
# self.env.sim, map_resolution=(5000, 5000)
# )
# plt.imshow(top_down_map)
# plt.show()
self.prev_yaw = yaw
self.action_traj.append(action)
self.step_i += 1
slam_status_msg = "Pose errors mean=%.1f mean2=%.1f ml=%.1f yaw=%.1f. Loss=%.1f "%(
mean_xy_error, mean2_xy_error, ml_xy_error, np.rad2deg(mean_yaw_error), np.sqrt(self.xy_loss_list[-1]))
act_status_msg = "Est dist=%.1f. True dist=%.1f Act=%d %s"%(
target_dist, true_target_dist, action, "COL" if is_colliding else "")
print (plan_status_msg)
print (slam_status_msg + act_status_msg)
# Get map statistics
if global_map_label is not None:
ij = xy.astype(np.int32)
self.num_wrong_obstacle += 1. if not is_colliding and global_map_label[ij[0], ij[1]] < 0.5 else 0.
self.num_wrong_free += 1. if is_colliding and global_map_label[ij[0], ij[1]] >= 0.5 else 0.
self.num_wrong_free_area += 1. if is_colliding and np.all(global_map_label[max(ij[0]-1, 0):ij[0]+2, max(ij[1]-1, 0):ij[1]+2] >= 0.5) else 0.
self.num_wrong_free_area2 += 1. if is_colliding and np.all(global_map_label[max(ij[0]-2, 0):ij[0]+3, max(ij[1]-2, 0):ij[1]+3] >= 0.5) else 0.
self.num_wrong_free_area3 += 1. if is_colliding and np.all(global_map_label[max(ij[0]-3, 0):ij[0]+4, max(ij[1]-3, 0):ij[1]+4] >= 0.5) else 0.
# Video output
if self.params.interactive_video or self.params.save_video > self.num_videos_saved:
if not isinstance(planned_path, np.ndarray) or planned_path.ndim != 2:
print ("planned path has an unexpected format")
pdb.set_trace()
# Set outcome text
if (giving_up_collision or giving_up_progress or giving_up_distance):
outcome = 'giveup'
elif is_done:
outcome = 'done'
else:
outcome = 'timeout'
frame_data = dict(
rgb=observations['rgb'],
depth=observations['depth'],
global_map=global_map_pred.copy(),
global_map_for_planning=global_map_for_planning.copy(),
cont_global_map_for_planning=cont_global_map_for_planning.copy(),
true_global_map=global_map_label.copy(),
xy=self.xy.copy(), yaw=self.yaw.copy(),
target_xy=self.target_xy_for_planning.copy(),
path=planned_path.copy(), # subgoal=planned_subgoal.copy(),
target_status=slam_status_msg, control_status=plan_status_msg, act_status=act_status_msg,
outcome=outcome)
if self.plot_process:
while not self.plot_queue.empty():
time.sleep(0.01)
self.plot_queue.put(("step", frame_data))
else:
self.frame_traj_data.append(frame_data)
if self.params.interactive_video:
self.video_update(VIDEO_FRAME_SKIP * len(self.frame_traj_data))
time_output = time.time() - time_last
time_last = time.time()
if PRINT_TIMES:
print ("Time sim %.3f prep %.3f slam %.3f plan %.3f output %.3f"%(time_sim, time_prepare, time_slam, time_plan, time_output))
# Pause for interactive run every n steps
if self.params.interactive_step > 0 and self.step_i > 0 and self.step_i % self.params.interactive_step == 0:
print ("pause..")
pdb.set_trace()
return {"action": action, "has_collided": float(self.num_collisions > 0), "num_collisions": self.num_collisions,
"xy_error": xy_error, # "mean_xy_error": mean_xy_error, "mean2_xy_error": mean2_xy_error, "ml_xy_error": ml_xy_error,
'mean_yaw_error': mean_yaw_error, 'target_dist': target_dist,
'num_wrong_obstacle': self.num_wrong_obstacle, 'num_wrong_free': self.num_wrong_free,
'num_wrong_free_area': self.num_wrong_free_area,
'num_wrong_free_area2': self.num_wrong_free_area2, 'num_wrong_free_area3': self.num_wrong_free_area3,
'time_plan': 0. if len(self.plan_times) == 0 else np.mean(self.plan_times),
'map_mismatch_count': float(self.map_mismatch_count) / self.step_i, # TODO remove
'giveup_collision': float(giving_up_collision), 'giveup_progress': float(giving_up_progress),
'giveup_distance': float(giving_up_distance), 'is_done': is_done} # 0: stop, forward, left, right
# return {"action": numpy.random.choice(self._POSSIBLE_ACTIONS)}
def reset_scenario_data_writer(self):
if len(self.scenario_traj_data) == 0:
return
assert len(self.tfwriters) == len(self.params.data_map_sizes) == 1
metadata = self.scenario_traj_data[0]
trajdata = self.scenario_traj_data
context_features = {
'trajlen': tf_int64_feature(len(trajdata)),
'goal_xy': tf_bytes_feature(metadata['global_goal_xy'].astype(np.float32).tobytes()),
'model_id': tf_bytes_feature(str(metadata['model_id']).encode()),
'floor': tf_int64_feature(metadata['floor']),
'episode_id': tf_int64_feature(metadata['episode_id']), # int(ep.episode_id)
'map_id': tf_int64_feature(self.saved_map_i), # tf_bytes_feature(np.array((,), np.int32).tobytes()),
}
sequence_features = {
'actions': sequence_feature_wrapper([stepdata['action'].astype(np.int32) for stepdata in trajdata]),
# global map coordinates
'xys': sequence_feature_wrapper([stepdata['global_xy'].astype(np.float32) for stepdata in trajdata]),
'yaws': sequence_feature_wrapper([stepdata['global_yaw'].astype(np.float32) for stepdata in trajdata]),
'is_using_planner': sequence_feature_wrapper([np.array(stepdata['is_using_planner'], dtype=bool) for stepdata in trajdata]),
'is_colliding': sequence_feature_wrapper([np.array(stepdata['is_colliding'], dtype=bool) for stepdata in trajdata]),
# coordinates in rotated local coordinate frame for planning. cropped compared to global pose
'local_true_xys': sequence_feature_wrapper([stepdata['local_true_xy'].astype(np.float32) for stepdata in trajdata]),
'local_true_yaws': sequence_feature_wrapper([stepdata['local_true_yaw'].astype(np.float32) for stepdata in trajdata]),
'local_est_xys': sequence_feature_wrapper([stepdata['local_est_xy'].astype(np.float32) for stepdata in trajdata]),
'local_est_yaws': sequence_feature_wrapper([stepdata['local_est_yaw'].astype(np.float32) for stepdata in trajdata]),
'local_goal_xys': sequence_feature_wrapper([stepdata['local_goal_xy'].astype(np.float32) for stepdata in trajdata]),
# maps used for planning (typically true-partial) and (accumulated) predicted map
'true_maps': sequence_feature_wrapper([stepdata['true_map_png'] for stepdata in trajdata]),
'pred_maps': sequence_feature_wrapper([stepdata['pred_map_png'] for stepdata in trajdata]),
'depths': sequence_feature_wrapper([stepdata['depth_png'] for stepdata in trajdata]),
'rgbs': sequence_feature_wrapper([stepdata['rgb_png'] for stepdata in trajdata]),
}
# store
example = tf.train.SequenceExample(context=tf.train.Features(feature=context_features),
feature_lists=tf.train.FeatureLists(feature_list=sequence_features))
if DATA_SEPARATE_FILES:
data_filename = os.path.join(
self.logdir, "habscenarios.episode.m%d.tfrecords.%d" % (self.params.data_map_sizes[0], self.num_data_entries))
with tf.python_io.TFRecordWriter(data_filename) as tfwriter:
tfwriter.write(example.SerializeToString())
else:
self.tfwriters[0].write(example.SerializeToString())
self.saved_map_i += 1
self.num_data_entries += 1
self.scenario_traj_data = []
def write_datapoint(self, map_for_planning, pred_map, target_xy, planned_path, action, shrunk_map_offset_xy):
assert planned_path.shape[0] > 0
assert planned_path.dtype == np.int32
planned_actions = grid_actions_from_trajectory(planned_path, connect8=self.params.connect8)
target_ij = target_xy.astype(np.int32)
if planned_path.shape[0] < self.params.trainlen:
return
if planned_path[-1][0] != target_ij[0] or planned_path[-1][1] != target_ij[1]:
print ("Skip because path does not reach goal")
print (planned_path)
print (target_ij)
return
# convert maps
assert map_for_planning.dtype == np.float32 and pred_map.dtype == np.float32
assert map_for_planning.shape == pred_map.shape
map_for_planning = (map_for_planning * 255.).astype(np.uint8)
pred_map = (pred_map * 255.).astype(np.uint8) # encode predicted probability as uint8
if np.any(map_for_planning[planned_path[:, 0], planned_path[:, 1]] < 127):
print ("Skip because path is not collision free")
return
# Q values. Assumes the planner is a VI planner and that it was called in this time step; otherwise the path would be None.
qs = self.pathplanner.last_qs_value
if shrunk_map_offset_xy is not None:
shrunk_map_offset_xy = shrunk_map_offset_xy.astype(np.int32)
qs = np.pad(qs, [[shrunk_map_offset_xy[0], pred_map.shape[0]-qs.shape[0]-shrunk_map_offset_xy[0]],
[shrunk_map_offset_xy[1], pred_map.shape[1]-qs.shape[1]-shrunk_map_offset_xy[1]],
[0, 0]])
assert qs.shape[:2] == pred_map.shape[:2]
assert len(self.tfwriters) == len(self.params.data_map_sizes)
for tfwriter, map_size in zip(self.tfwriters, self.params.data_map_sizes):
self.write_data_for_map_size(map_for_planning, pred_map, qs, planned_actions, planned_path, target_xy, tfwriter, map_size)
self.saved_map_i += 1
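The padding of shrunk-map Q values back into the full map frame (done above with `np.pad` and `shrunk_map_offset_xy`) can be sketched in isolation. This is a minimal, self-contained illustration; the function and variable names are illustrative, not the ones used elsewhere in this file.

```python
import numpy as np

def unshrink_qs(qs, full_shape, offset_ij):
    # Pad Q values computed on a shrunk map back into the full map frame.
    # qs: (h, w, num_actions); full_shape: (H, W) of the full map;
    # offset_ij: top-left corner of the shrunk map inside the full map.
    pad_i = (offset_ij[0], full_shape[0] - qs.shape[0] - offset_ij[0])
    pad_j = (offset_ij[1], full_shape[1] - qs.shape[1] - offset_ij[1])
    return np.pad(qs, [pad_i, pad_j, (0, 0)])

qs = np.ones((3, 4, 5), np.float32)     # Q values on a 3x4 shrunk map
full = unshrink_qs(qs, (10, 12), (2, 3))  # restore to the 10x12 map frame
```

After padding, the Q-value array is index-aligned with the uncropped map, which is what the shape assertion above relies on.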
def write_data_for_map_size(self, map_for_planning, pred_map, qs, planned_actions, planned_path, target_xy, tfwriter, map_size):
segment_len = self.params.trainlen
if map_size < map_for_planning.shape[0]:
assert False # We need to replan for q values to be valid.
if DATA_USE_LAST_SEGMENT:
# Find last trajectory segment that is still within the map size
margin = 2
for start_i in range(len(planned_path)):
range_ij = np.max(planned_path[start_i:], axis=0) - np.min(planned_path[start_i:], axis=0)
if np.all(range_ij < map_size - 2 * margin):
break
planned_path = planned_path[start_i:]
planned_actions = planned_actions[start_i:]
else:
# Find first trajectory segment that is within the map size and change goal
margin = 2
for end_i in range(len(planned_path), 0, -1): # go backwards
range_ij = np.max(planned_path[:end_i], axis=0) - np.min(planned_path[:end_i], axis=0)
if np.all(range_ij < map_size - 2 * margin):
break
planned_path = planned_path[:end_i]
planned_actions = planned_actions[:end_i]
target_xy = planned_path[-1].astype(np.float32) + 0.5
# Crop map
offset_ij = np.min(planned_path, axis=0)
range_ij = np.max(planned_path, axis=0) - offset_ij
# add half of the remaining spacing to the beginning
topleft_space = (map_size - range_ij) // 2
offset_ij = offset_ij - topleft_space
offset_ij = np.maximum(offset_ij, np.zeros((2, ), np.int32)) # cannot be less than zero
offset_ij = np.minimum(offset_ij, np.array(map_for_planning.shape[:2], np.int32) - map_size)
# crop the given size starting from offset_ij
map_for_planning = map_for_planning[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
pred_map = pred_map[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
qs = qs[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
qs = qs.astype(np.float32)
# Move poses to cropped frame
planned_path = planned_path - offset_ij[None]
target_xy = target_xy - offset_ij.astype(np.float32)
if planned_path.shape[0] < self.params.trainlen:
return
assert map_for_planning.shape[0] == map_size and map_for_planning.shape[1] == map_size
assert np.all(planned_path >= 0) and np.all(planned_path < map_size)
# Limit trajlen so we only save the trajectory segment near the current pose
if DATA_FIRST_STEP_ONLY:
max_trajlen = segment_len # there will be only one segment
else:
max_trajlen = DATA_MAX_TRAJLEN // segment_len * segment_len
assert max_trajlen >= 2
planned_path = planned_path[:max_trajlen]
planned_actions = planned_actions[:max_trajlen-1]
# Abstract Q values along trajectory and make sure they are consistent with the action choices
q_traj = qs[planned_path[:, 0].astype(np.int32), planned_path[:, 1].astype(np.int32), :]
q_for_actions = q_traj[np.arange(q_traj.shape[0]-1), planned_actions]
assert np.all(np.isclose(q_for_actions, q_traj[:-1].max(axis=1)))
planned_xy = planned_path.astype(np.float32) + 0.5
true_map_png = cv2.imencode('.png', map_for_planning)[1].tobytes()
pred_map_png = cv2.imencode('.png', pred_map)[1].tobytes()
# segments
traj_segments = []
overlap = 1 # use extra step because last action will be dropped
assert overlap < segment_len
start_i = 0
while start_i < planned_path.shape[0] - overlap: # include all steps
# for incomplete last segment, start earlier overlapping with previous segment
if start_i + segment_len > planned_path.shape[0]:
start_i = planned_path.shape[0] - segment_len
overlap = -1 # this is to trigger the break at the end of this iteration
segment = tuple(range(start_i, start_i + segment_len))
traj_segments.append(segment)
start_i += segment_len - overlap
del overlap
assert not DATA_FIRST_STEP_ONLY or len(traj_segments) == 1
# store each segment
goal_xy = target_xy.copy()
for segment_i, segment in enumerate(traj_segments): # repeat multiple times
xy_segment = planned_xy[segment, :]
grid_action_segment = planned_actions[segment[:-1],]
q_segment = q_traj[segment[:-1],]
# dummy yaw and action
yaw_segment = np.ones((segment_len, 1), np.float32) * -1
action_segment = np.ones((segment_len - 1, 1), np.int32) * -1
# tfrecord features
context_features = {
'true_map': tf_bytes_feature(true_map_png),
'pred_map': tf_bytes_feature(pred_map_png),
'trajlen': tf_int64_feature(len(segment)),
'goal_xy': tf_bytes_feature(goal_xy.astype(np.float32).tobytes()),
'xy': tf_bytes_feature(xy_segment.astype(np.float32).tobytes()),
'yaw': tf_bytes_feature(yaw_segment.astype(np.float32).tobytes()),
'action': tf_bytes_feature(action_segment.astype(np.int32).tobytes()),
'grid_q_values': tf_bytes_feature(q_segment.astype(np.float32).tobytes()),
'grid_action': tf_bytes_feature(grid_action_segment.astype(np.int32).tobytes()),
'qs': tf_bytes_feature(qs.tobytes()),
'episode_id': tf_bytes_feature(np.array((self.episode_i + self.params.skip_first_n, ), np.int32).tobytes()),
'map_id': tf_bytes_feature(np.array((self.saved_map_i, ), np.int32).tobytes()),
'segment_i': tf_bytes_feature(np.array((segment_i, ), np.int32).tobytes()),
}
sequence_features = {
# 'local_map': tf.train.FeatureList(feature=[tf.train.Feature(bytes_list=tf.train.BytesList(value=[local_map_pngs[i]])) for i in segment]),
# 'visibility': tf.train.FeatureList(feature=[tf.train.Feature(bytes_list=tf.train.BytesList(value=[visibility_map_pngs[i]])) for i in segment]),
}
# store
example = tf.train.SequenceExample(context=tf.train.Features(feature=context_features),
feature_lists=tf.train.FeatureLists(feature_list=sequence_features))
tfwriter.write(example.SerializeToString())
self.num_data_entries += 1
def get_global_map_for_planning(self, global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, map_source, keep_soft):
if map_source in ['true', 'true-saved', 'true-saved-sampled', 'true-saved-hrsampled']:
assert global_map_label.ndim == 3
global_map_for_planning = global_map_label.copy()
assert global_map_for_planning.shape == map_shape
elif map_source in ['true-partial', 'true-partial-sampled']:
global_map_for_planning = global_map_label.copy()
# Overwrite unseen areas with 0.5
unseen_mask = np.isclose(global_map_pred, 0.5)
global_map_for_planning[unseen_mask] = 0.5
else:
global_map_for_planning = global_map_pred.copy()
# Erode with float values before thresholding and before adding patches for collision.
# This is used to account for larger robot than used in training data, like spot.
if self.params.map_erosion_pre_planning > 1:
global_map_for_planning = np.squeeze(global_map_for_planning, axis=-1)
global_map_for_planning = cv2.erode(
global_map_for_planning, Expert.get_kernel_for_erosion(self.params.map_erosion_pre_planning))
global_map_for_planning = global_map_for_planning[..., None]
if self.params.collision_patch_radius > 0 and self.step_i > 1:
global_map_for_planning = self.patch_map_with_collisions(global_map_for_planning, traj_xy,
traj_yaw, self.collision_timesteps,
self.params.collision_patch_radius)
if self.params.agent_clear_target_radius > 0:
try:
min_xy = self.target_xy_for_planning.astype(np.int32) - self.params.agent_clear_target_radius
max_xy = self.target_xy_for_planning.astype(np.int32) + self.params.agent_clear_target_radius + 1
global_map_for_planning[min_xy[0]:max_xy[0], min_xy[1]:max_xy[1]] = 1.
except Exception as e:
print ("Exception clearing target. " + str(e))
raise e
#
# if self.step_i == 1:
# print ("DEBUG !!!!!!! REMOVE !!!!!!!")
# self.collision_timesteps.append(1)
# threshold
cont_global_map_for_planning = global_map_for_planning
if not keep_soft:
traversable_threshold = self.params.traversable_threshold # higher than this is traversable
object_threshold = 0. # treat everything as non-object
threshold_const = np.array((traversable_threshold, object_threshold))[None, None, :self.map_ch - 1]
global_map_for_planning = np.array(global_map_for_planning >= threshold_const, np.float32)
return global_map_for_planning, cont_global_map_for_planning
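The thresholding step at the end of this method (when `keep_soft` is false) reduces a soft occupancy map to a binary traversability map. A minimal sketch, with an illustrative threshold value:

```python
import numpy as np

def threshold_map(soft_map, traversable_threshold=0.499):
    # Cells with probability >= threshold become traversable (1.),
    # everything else becomes an obstacle (0.). The threshold value here
    # is illustrative; the real one comes from params.traversable_threshold.
    return np.array(soft_map >= traversable_threshold, np.float32)

m = np.array([[0.1, 0.5], [0.9, 0.49]], np.float32)
binary = threshold_map(m)
```

The soft map is returned alongside the thresholded one so planners that consume continuous occupancy probabilities can still use it.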
@staticmethod
def shrink_map(xy, target_xy, global_map, margin=8):
assert margin > 6 # one step in each direction requires a margin of at least 6
assert global_map.dtype == np.uint8
obst_i, obst_j, _ = np.nonzero(global_map == 0)
if obst_i.shape[0] == 0:
obst_i = np.array([xy[0]], np.int32)
obst_j = np.array([xy[1]], np.int32)
min_i = min(int(xy[0]), int(target_xy[0]), np.min(obst_i)) - margin
min_j = min(int(xy[1]), int(target_xy[1]), np.min(obst_j)) - margin
max_i = max(int(xy[0]), int(target_xy[0]), np.max(obst_i)) + margin + 1
max_j = max(int(xy[1]), int(target_xy[1]), np.max(obst_j)) + margin + 1
min_i = max(min_i, 0)
min_j = max(min_j, 0)
max_i = min(max_i, global_map.shape[0])
max_j = min(max_j, global_map.shape[1])
offset_xy = np.array([min_i, min_j], np.float32)
if min_i > 0 or min_j > 0 or max_i < global_map.shape[0] or max_j < global_map.shape[1]:
global_map = global_map[min_i:max_i, min_j:max_j]
xy = xy - offset_xy
target_xy = target_xy - offset_xy
return xy, target_xy, global_map, offset_xy
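`shrink_map` crops the planning map to the bounding box of all obstacle cells plus the agent and target positions, with a safety margin. A simplified numpy sketch of the same idea (illustrative names, 2-D map, no pose remapping):

```python
import numpy as np

def shrink_to_content(grid, points, margin=2):
    # Crop `grid` to the bounding box of obstacle cells (value 0) plus the
    # given points of interest, padded by `margin` and clamped to the grid.
    obst_i, obst_j = np.nonzero(grid == 0)
    all_i = np.concatenate([obst_i, points[:, 0]])
    all_j = np.concatenate([obst_j, points[:, 1]])
    min_i = max(int(all_i.min()) - margin, 0)
    min_j = max(int(all_j.min()) - margin, 0)
    max_i = min(int(all_i.max()) + margin + 1, grid.shape[0])
    max_j = min(int(all_j.max()) + margin + 1, grid.shape[1])
    offset = np.array([min_i, min_j])  # needed to map poses back later
    return grid[min_i:max_i, min_j:max_j], offset

grid = np.ones((20, 20), np.uint8)
grid[10, 10] = 0  # single obstacle
points = np.array([[5, 5], [12, 12]])  # e.g. agent and target cells
cropped, offset = shrink_to_content(grid, points, margin=2)
```

Shrinking reduces the planner's grid size; the returned offset is what lets the caller translate the plan back into the full map frame.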
@staticmethod
def patch_map_with_collisions(global_map_for_planning, traj_xy, traj_yaw, collision_timesteps, patch_radius):
for timestep in collision_timesteps:
xy = traj_xy[timestep]
yaw = traj_yaw[timestep]
if patch_radius > 0.5:
num_samples = max(int(2 * patch_radius), 6)
ego_x, ego_y = np.meshgrid(
np.linspace(0, 2 * patch_radius, num_samples) - 0.4,
np.linspace(-patch_radius, patch_radius, num_samples),
indexing='ij')
ego_xy = np.stack((ego_x.flatten(), ego_y.flatten()), axis=-1)
abs_xy = xy[None] + rotate_2d(ego_xy, yaw[None])
abs_ij = abs_xy.astype(np.int32)
else:
abs_ij = xy[None].astype(np.int32)
# Filter out of range
abs_ij = abs_ij[np.logical_and.reduce([
abs_ij[:, 0] >= 0, abs_ij[:, 1] >= 0, abs_ij[:, 0] < global_map_for_planning.shape[0],
abs_ij[:, 1] < global_map_for_planning.shape[1] ])]
# Set map not traversable (0.)
global_map_for_planning[abs_ij[:, 0], abs_ij[:, 1]] = 0.
return global_map_for_planning
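The collision patching above stamps a rotated rectangle of cells in front of each collision pose as non-traversable. A self-contained sketch of that stamping, where `rotate_2d` is a local stand-in assumed to rotate points counter-clockwise (the real helper is defined elsewhere in this repo):

```python
import numpy as np

def rotate_2d(xy, yaw):
    # Rotate 2-D points counter-clockwise by yaw; illustrative stand-in.
    c, s = np.cos(yaw), np.sin(yaw)
    return xy @ np.array([[c, -s], [s, c]]).T

def stamp_obstacle(grid, xy, yaw, radius):
    # Mark a rotated patch in front of the collision pose as non-traversable.
    n = max(int(2 * radius), 4)
    ex, ey = np.meshgrid(np.linspace(0., 2 * radius, n),
                         np.linspace(-radius, radius, n), indexing='ij')
    ego = np.stack([ex.ravel(), ey.ravel()], axis=-1)
    ij = (xy[None] + rotate_2d(ego, yaw)).astype(np.int32)
    keep = ((ij >= 0) & (ij < np.array(grid.shape))).all(axis=1)  # in bounds
    ij = ij[keep]
    grid[ij[:, 0], ij[:, 1]] = 0.  # 0. = not traversable
    return grid

g = np.ones((20, 20), np.float32)
g = stamp_obstacle(g, np.array([5., 5.]), 0.0, 2.0)
```

The patch extends forward from the pose because forward motion is what caused the collision; later planning then routes around the stamped cells.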
def pose_error(self, slam_xy, slam_yaw, true_xy, true_yaw):
xy_error = np.linalg.norm(true_xy + self.true_xy_offset - slam_xy)
yaw_error = true_yaw - slam_yaw
yaw_error = np.abs((yaw_error + np.pi) % (2 * np.pi) - np.pi)
return xy_error, yaw_error
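The yaw error in `pose_error` is wrapped into [0, pi] with the usual modular trick, so angles that differ only by a full turn compare as equal. A minimal sketch of just that wrapping:

```python
import numpy as np

def wrapped_yaw_error(true_yaw, est_yaw):
    # Absolute angular difference wrapped into [0, pi]: shifting by pi before
    # the modulo maps the raw difference into (-pi, pi], then abs() folds it.
    err = true_yaw - est_yaw
    return np.abs((err + np.pi) % (2 * np.pi) - np.pi)
```

Without the wrap, an estimate near +pi compared against ground truth near -pi would report an error close to 2*pi instead of the true small error.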
def run_inference(self, feed_dict, need_map=True):
outputs = self.sess.run((self.inference_outputs if need_map else self.inference_outputs_without_map), feed_dict=feed_dict)
return outputs
def video_update(self, frame_i):
# only update every VIDEO_FRAME_SKIP-th frame
if frame_i % VIDEO_FRAME_SKIP == 0:
ind = min(frame_i // VIDEO_FRAME_SKIP, len(self.frame_traj_data)-1)
self.video_image_ax.set_data(self.frame_traj_data[ind]['rgb'])
self.video_image_ax2.set_data(1.-self.frame_traj_data[ind]['depth'][..., 0])
# self.video_text_ax1.set_text(self.frame_traj_data[ind]['target_status'])
split_str = self.frame_traj_data[ind]['control_status'] + " " + self.frame_traj_data[ind]['act_status']
# Attempt to break lines
segs = split_str.split("[")
if len(segs) > 1:
split_str = segs[0] + "\n["+"[".join(segs[1:])
segs = split_str.split(" v=")
if len(segs) > 1:
split_str = segs[0] + "\nv=" + " v=".join(segs[1:])
# self.video_text_ax2.set_text(split_str)
self.video_text_ax1.set_text("t = %d"%(frame_i // VIDEO_FRAME_SKIP + 1))
if self.video_global_map_ax is not None:
xy = self.frame_traj_data[ind]['xy']
target_xy = self.frame_traj_data[ind]['target_xy']
#subgoal = self.frame_traj_data[ind]['subgoal']
path = self.frame_traj_data[ind]['path'].copy()
if len(path) == 0:
path = xy[None]
path = np.array(path)[:, :2]
global_map = np.atleast_3d(self.frame_traj_data[ind]['global_map'])
global_map = np.tile(global_map[:, :, :1], [1, 1, 3])
true_map = np.atleast_3d(self.frame_traj_data[ind]['true_global_map'])
true_map = np.tile(true_map[:, :, :1], [1, 1, 3])
# map_for_planning = np.atleast_3d(self.frame_traj_data[ind]['global_map_for_planning'])
map_for_planning = np.atleast_3d(self.frame_traj_data[ind]['cont_global_map_for_planning'])
map_for_planning = np.tile(map_for_planning[:, :, :1], [1, 1, 3])
if self.fixed_map_size:
# Fix window to full map
window_size = self.max_map_size[0]
map_for_planning_crop, path_crop, target_xy_crop, xy_crop = self.crop_experience_window(
window_size, map_for_planning, path, target_xy, xy)
# Use a fixed global view
combined_map = map_for_planning_crop
combined_map2 = global_map
combined_map3 = true_map
else:
# Follow agent with a window
window_size = 220
map_for_planning_crop, path_crop, target_xy_crop, xy_crop = self.crop_experience_window(
window_size, map_for_planning, path, target_xy, xy)
combined_map = map_for_planning_crop # global_map_crop if MAP_SOURCE == 'pred' else true_map_crop
global_map_crop, _, _, temp_xy_crop = self.crop_experience_window(window_size, global_map, path, target_xy, xy)
assert np.all(temp_xy_crop == xy_crop)
combined_map2 = global_map_crop
true_map_crop, _, _, temp_xy_crop = self.crop_experience_window(window_size, true_map, path, target_xy, xy)
assert np.all(temp_xy_crop == xy_crop)
combined_map3 = true_map_crop
xy = xy_crop
target_xy = target_xy_crop
path = path_crop
planned_path_skip = 4
# global_map = global_map[:map_size-map_offset_xy[0], :map_size-map_offset_xy[1]]
# combined_map[map_offset_xy[0]:map_offset_xy[0]+global_map.shape[0], map_offset_xy[1]:map_offset_xy[1]+global_map.shape[1]] = global_map
# TODO add mild colors to cont_global_map_for_planning
combined_map[int(xy_crop[0])-1:int(xy_crop[0])+2, int(xy_crop[1])-1:int(xy_crop[1])+2] = (1., 0., 1.)
combined_map2[int(xy[0])-1:int(xy[0])+2, int(xy[1])-1:int(xy[1])+2] = (1., 0., 1.)
combined_map3[int(xy[0])-1:int(xy[0])+2, int(xy[1])-1:int(xy[1])+2] = (1., 0., 1.)
# print (self.video_ax.get_xlim())
self.video_ax.set_xlim(-0.5, combined_map.shape[1]-0.5)
self.video_ax.set_ylim(combined_map.shape[0]-0.5, -0.5)
self.video_global_map_ax.set_data(combined_map)
self.video_global_map_ax.set_extent([-0.5, combined_map.shape[1]-0.5, combined_map.shape[0]-0.5, -0.5])
self.video_path_scatter.set_offsets(np.flip(path_crop[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter.set_offsets([np.flip(target_xy_crop, axis=-1)])
if VIDEO_LARGE_PLOT:
self.video_ax2.set_xlim(-0.5, combined_map2.shape[1]-0.5)
self.video_ax2.set_ylim(combined_map2.shape[0]-0.5, -0.5)
self.video_global_map_ax2.set_data(combined_map2)
self.video_global_map_ax2.set_extent([-0.5, combined_map2.shape[1]-0.5, combined_map2.shape[0]-0.5, -0.5])
self.video_path_scatter2.set_offsets(np.flip(path[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter2.set_offsets([np.flip(target_xy, axis=-1)])
self.video_ax3.set_xlim(-0.5, combined_map3.shape[1]-0.5)
self.video_ax3.set_ylim(combined_map3.shape[0]-0.5, -0.5)
self.video_global_map_ax3.set_data(combined_map3)
self.video_global_map_ax3.set_extent([-0.5, combined_map3.shape[1]-0.5, combined_map3.shape[0]-0.5, -0.5])
self.video_path_scatter3.set_offsets(np.flip(path[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter3.set_offsets([np.flip(target_xy, axis=-1)])
if VIDEO_DETAILED:
# View angle
half_fov = 0.5 * np.deg2rad(70)
for ang_i, angle in enumerate([half_fov, -half_fov]):
angle = angle - float(self.frame_traj_data[ind]['yaw']) + np.pi/2
# angle = angle + yaw[batch_i, traj_i, 0]
v = np.array([np.cos(angle), np.sin(angle)]) * 10.
x1 = np.array([xy[1], xy[0]]) # need to be flipped for display
x2 = v + x1
self.video_view_angle_lines[ang_i].set_data([x1[0], x2[0]], [x1[1], x2[1]])
#
# # pdb.set_trace()
# Path
# # print(self.frame_traj_data[ind]['xy'], path[0])
# for i in range(len(self.video_path_circles)-2):
# path_i = min(i * 4, len(path)-1)
# xy = path[path_i]
# self.video_path_circles[i].center = ([xy[1], xy[0]])
# # Sub-goal
# xy = self.frame_traj_data[ind]['subgoal']
# self.video_path_circles[-2].center = ([xy[1], xy[0]])
# # Target
# xy = path[-1]
# self.video_path_circles[-1].center = ([xy[1], xy[0]])
# self.video_text_ax2.set_data(self.summary_str)
if self.params.interactive_video:
plt.draw()
plt.show()
plt.waitforbuttonpress(0.01)
return self.video_image_ax
def crop_experience_window(self, map_size, global_map, path, target_xy, xy):
# Crop to a fixed map_size x map_size window centered between agent and target
center_xy = (xy + target_xy) * 0.5
desired_center_xy = np.array(map_size, np.float32) * 0.5
center_xy = center_xy.astype(np.int32)
desired_center_xy = desired_center_xy.astype(np.int32)
offset_xy = (desired_center_xy - center_xy).astype(np.int32)
xy = xy + offset_xy
target_xy = target_xy + offset_xy
# subgoal += offset_xy
path = path + offset_xy[None]
map_start_xy = np.maximum(center_xy - map_size // 2, 0)
map_cutoff_xy = -np.minimum(center_xy - map_size // 2, 0)
global_map = global_map[map_start_xy[0]:map_start_xy[0] + map_size - map_cutoff_xy[0], map_start_xy[1]:map_start_xy[1] + map_size - map_cutoff_xy[1]]
global_map_crop = np.ones((map_size, map_size, 3), np.float32) * 0.5
global_map_crop[map_cutoff_xy[0]:map_cutoff_xy[0] + global_map.shape[0], map_cutoff_xy[1]:global_map.shape[1] + map_cutoff_xy[1]] = global_map
return global_map_crop, path, target_xy, xy
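`crop_experience_window` extracts a fixed-size window from the map, filling regions that fall outside the map with 0.5 (unknown). The start/cutoff bookkeeping can be sketched in a simplified 2-D form (illustrative names; the real method also shifts the path, target, and agent pose into the window frame):

```python
import numpy as np

def crop_window(grid, center_ij, window):
    # Extract a window x window crop centered on center_ij; regions outside
    # the map are left at the fill value 0.5 (unknown).
    out = np.full((window, window) + grid.shape[2:], 0.5, grid.dtype)
    start = center_ij - window // 2
    cut = -np.minimum(start, 0)   # how far the window hangs off the map
    start = np.maximum(start, 0)  # clamp the source slice to the map
    piece = grid[start[0]:start[0] + window - cut[0],
                 start[1]:start[1] + window - cut[1]]
    out[cut[0]:cut[0] + piece.shape[0], cut[1]:cut[1] + piece.shape[1]] = piece
    return out

grid = np.ones((10, 10), np.float32)
window_crop = crop_window(grid, np.array([0, 0]), 6)  # center near a corner
```

Because the output size is constant regardless of where the window falls, the downstream matplotlib axes never need to be resized per frame.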
def plot_loop(self, queue):
# Infinite loop that takes frame data or reset request from queue and does plotting in a separate thread.
plt.ion()
print ("plot loop")
while True:
cmd, frame_data = queue.get(block=True)
# print ("plot command %s"%cmd)
if cmd == "reset":
self.reset_video_writer(called_from_plot_process=True)
elif cmd == "exit":
# self.reset_video_writer(called_from_plot_process=True)
# TODO could save it here, but usually trying to create a new figure in this thread raises an exception
plt.close('all')
return
elif cmd == "step":
self.frame_traj_data.append(frame_data)
self.video_update(VIDEO_FRAME_SKIP * len(self.frame_traj_data)) # hack to plot last frame
else:
raise ValueError("Unknown plot command")
def reset_video_writer(self, last_success=None, called_from_plot_process=False):
if not called_from_plot_process and self.plot_process:
self.plot_queue.put(("reset", None))
return
if self.params.interactive_video or (SAVE_VIDEO and len(self.frame_traj_data) > 0):
# Save video
if False:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
self.video_global_map_ax = None
self.video_text_ax0 = fig.text(0.04, 0.9, self.summary_str, transform=fig.transFigure, fontsize=10, verticalalignment='top') # bottom left
self.video_text_ax1 = fig.text(0.96, 0.9, "Target", transform=fig.transFigure, fontsize=10, verticalalignment='top', horizontalalignment='right')
self.video_text_ax2 = fig.text(0.04, 0.05, "Status2", transform=fig.transFigure, fontsize=10, verticalalignment='bottom', wrap=True)
else:
# fig = plt.figure(figsize=(6, 9)) # aspect ratio
        # ax = fig.add_subplot(221 if VIDEO_LARGE_PLOT else 121)
        # # ax.set_aspect('equal')
        # ax.get_xaxis().set_visible(False)
        # ax.get_yaxis().set_visible(False)
        # self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
        if self.params.interactive_video and not called_from_plot_process:
            plt.close('all')
        fig = plt.figure(constrained_layout=True, figsize=(9, 5))  # figsize overwritten later
        gs = gridspec.GridSpec(20, 30)
        ax = plt.subplot(gs[:9, :15])  # image
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
        self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
        ax = plt.subplot(gs[9:18, 0:15])  # depth
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
        self.video_image_ax2 = ax.imshow(np.zeros((90, 160)), cmap='Greys', vmin=0., vmax=1.)
        ax = plt.subplot(gs[:18, 15:])  # map window
        # ax.set_aspect('equal')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
        self.video_global_map_ax = ax.imshow(np.zeros((1200, 1200, 3)))
        self.video_ax = ax
        self.video_path_scatter = ax.scatter([0.], [1.], s=2., c='green', marker='o')
        self.video_target_scatter = ax.scatter([0.], [1.], s=2., c='red', marker='o')
        if VIDEO_LARGE_PLOT:
            ax = fig.add_subplot(223)
            # ax.set_aspect('equal')
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)
            self.video_global_map_ax2 = ax.imshow(np.zeros((1200, 1200, 3)))
            self.video_ax2 = ax
            self.video_path_scatter2 = ax.scatter([0.], [1.], s=2., c='green', marker='o')
            self.video_target_scatter2 = ax.scatter([0.], [1.], s=2., c='red', marker='o')
            ax = fig.add_subplot(224)
            # ax.set_aspect('equal')
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)
            self.video_global_map_ax3 = ax.imshow(np.zeros((1200, 1200, 3)))
            self.video_ax3 = ax
            self.video_path_scatter3 = ax.scatter([0.], [1.], s=2., c='green', marker='o')
            self.video_target_scatter3 = ax.scatter([0.], [1.], s=2., c='red', marker='o')
        if VIDEO_DETAILED:
            # self.video_view_angle_lines = [mlines.Line2D([0., 0.], [10., 10.], color='green') for _ in range(2)]
            # ax.add_line(self.video_view_angle_lines[0])
            # ax.add_line(self.video_view_angle_lines[1])
            # self.video_path_circles = []
            # for i in range(20):
            #     circle = plt.Circle((0., 0.), 2., color=('red' if i >= 18 else 'orange'), fill=False, transform='data')
            #     ax.add_artist(circle)
            #     self.video_path_circles.append(circle)
            self.video_view_angle_lines = []
            for _ in range(2):
                self.video_view_angle_lines.extend(ax.plot([0., 1.], [0., 1.], '-', color='blue'))  # plot returns a list of lines
        # self.video_text_ax0 = fig.text(0.04, 0.9, self.summary_str, transform=fig.transFigure, fontsize=9,
        #                                verticalalignment='top')  # bottom left
        # self.video_text_ax1 = fig.text(0.96, 0.9, "Target", transform=fig.transFigure, fontsize=9,
        #                                verticalalignment='top', horizontalalignment='right')
        # self.video_text_ax2 = fig.text(0.04, 0.05, "Status2", transform=fig.transFigure, fontsize=9,
        #                                verticalalignment='bottom', wrap=True)
        self.video_text_ax1 = fig.text(0.5, 0.05, "Status2", transform=fig.transFigure, fontsize=9,
                                       verticalalignment='bottom', horizontalalignment='center', wrap=False)
        # im.set_clim([0, 1])
        if self.params.interactive_video:
            fig.set_size_inches([9., 5.])
        else:
            fig.set_size_inches([9. / 2, 5. / 2])
        plt.tight_layout()
        if SAVE_VIDEO and len(self.frame_traj_data) > 0:
            # The interval argument is the time between frames in ms; it is overwritten by the fps below.
            ani = animation.FuncAnimation(fig, self.video_update, len(self.frame_traj_data) * VIDEO_FRAME_SKIP + 21, interval=100)
            writer = animation.writers['ffmpeg'](fps=VIDEO_FPS)  # default h264 is lossless, but could not be found in docker
            if last_success is not None:
                outcome_str = '_S' if last_success else '_F'
            else:
                outcome_str = ''
            outcome_str = outcome_str + '_' + self.frame_traj_data[-1]['outcome']
            video_filename = os.path.join(self.logdir, '%s_%d%s%s.mp4' % (self.get_scene_name(), self.episode_i, outcome_str, self.filename_addition))
            ani.save(video_filename, writer=writer, dpi=200)
            print("Video saved to " + video_filename)
            self.num_videos_saved += 1
        self.frame_traj_data = []
    def visualize_agent(self, visibility_mask, images, global_map_pred, global_map_for_planning, global_map_label,
                        global_map_true_partial, local_map_pred, local_map_label, planned_path, sim_rgb=None,
                        local_obj_map_pred=None, xy=None, yaw=None, true_xy=None, true_yaw=None, target_xy=None):
        # Coordinate systems don't match the ones assumed in these plot functions, but it all cancels out except for yaw.
        yaw = yaw - np.pi / 2
        if true_yaw is not None:
            true_yaw = true_yaw - np.pi / 2
        status_msg = "step %d" % (self.step_i,)
        if global_map_label is not None:
            # assert global_map_label.shape[-1] == 3
            global_map_label = np.concatenate(
                [global_map_label, np.zeros_like(global_map_label), np.zeros_like(global_map_label)], axis=-1)
            plt.figure("Global map label")
            plt.imshow(global_map_label)
            plot_viewpoints(xy[0], xy[1], yaw)
            if true_xy is not None and true_yaw is not None:
                plot_viewpoints(true_xy[0], true_xy[1], true_yaw, color='green')
            plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
            plt.title(status_msg)
            plt.savefig('./temp/global-map-label.png')
        plt.figure("Global map (%d)" % self.step_i)
        map_to_plot = global_map_pred[..., :1]
        map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3 - map_to_plot.shape[-1]]])
        plt.imshow(map_to_plot)
        plot_viewpoints(xy[0], xy[1], yaw)
        plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
        # plot_target_and_path(target_xy=target_xy_vel, path=np.array(self.hist2)[:, :2])
        plt.title(status_msg)
        plt.savefig('./temp/global-map-pred.png')
        if global_map_pred.shape[-1] == 2:
            map_to_plot = global_map_pred[..., 1:2]
            map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3 - map_to_plot.shape[-1]]])
            plt.imshow(map_to_plot)
            plot_viewpoints(xy[0], xy[1], yaw)
            plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
            plt.title(status_msg)
            plt.savefig('./temp/global-obj-map-pred.png')
        # if global_map_true_partial is not None:
        #     plt.figure("Global map true (%d)" % self.step_i)
        #     map_to_plot = global_map_true_partial
        #     map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3 - map_to_plot.shape[-1]]])
        #     plt.imshow(map_to_plot)
        #     plot_viewpoints(xy[0], xy[1], yaw)
        #     plot_target_and_path(target_xy=self.target_xy, path=planned_path)
        #     # plot_target_and_path(target_xy=self.target_xy, path=np.array(self.hist1)[:, :2])
        #     # plot_target_and_path(target_xy=self.target_xy_vel, path=np.array(self.hist2)[:, :2])
        #     plt.title(status_msg)
        #     plt.savefig('./temp/global-map-true.png')
        # plt.figure("Global map plan (%d)" % self.step_i)
        map_to_plot = global_map_for_planning
        map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3 - map_to_plot.shape[-1]]])
        plt.imshow(map_to_plot)
        plot_viewpoints(xy[0], xy[1], yaw)
        plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
        plt.title(status_msg)
        plt.savefig('./temp/global-map-plan.png')
        depth, rgb = mapping_visualizer.recover_depth_and_rgb(images)
        if self.params.mode == 'depth' and sim_rgb is not None:
            rgb = sim_rgb
            rgb[:5, :5, :] = 0  # indicate this is not observed
        images_fig, images_axarr = plt.subplots(2, 2, squeeze=True)
        plt.title(status_msg)
        plt.axes(images_axarr[0, 0])
        plt.imshow(depth)
        plt.axes(images_axarr[0, 1])
        plt.imshow(rgb)
        plt.axes(images_axarr[1, 0])
        if local_map_pred is not None:
            plt.imshow(local_map_pred * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0., vmax=1.)
        plt.axes(images_axarr[1, 1])
        if local_obj_map_pred is not None:
            plt.imshow(local_obj_map_pred * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0., vmax=1.)
        elif local_map_label is not None:
            plt.imshow(local_map_label * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0., vmax=1.)
        plt.savefig('./temp/inputs.png')

        if INTERACTIVE_PLOT:
            plt.figure('step')
            plt.show()
            # pdb.set_trace()
            button_res = plt.waitforbuttonpress(0.01)  # True for keyboard, False for mouse, None for timeout
            if button_res:
                print('pause')
                pdb.set_trace()
        else:
            plt.close('all')
# def main():
#     params = parse_args(default_files=('./gibson_submission.conf', ))
#     is_submission = (params.gibson_mode == 'submission')
#
#     parser = argparse.ArgumentParser()
#     parser.add_argument("--evaluation", type=str, required=True, choices=["local", "remote"])
#     args = parser.parse_args()
#
#     config_paths = os.environ["CHALLENGE_CONFIG_FILE"]
#     config = habitat.get_config(config_paths)
#
#     # agent = RandomAgent(task_config=config)
#
#     if args.evaluation == "local":
#         challenge = habitat.Challenge(eval_remote=False)
#     else:
#         challenge = habitat.Challenge(eval_remote=True)
#
#     env = challenge._env
#     agent = DSLAMAgent(task_config=config, env=env)
#
#     challenge.submit(agent)
#
#
# if __name__ == "__main__":
#     main()
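The repeated `np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3 - map_to_plot.shape[-1]]])` pattern in `visualize_agent` zero-pads the channel axis so that one- or two-channel maps can be passed to `plt.imshow` as RGB images. A standalone sketch of the same trick (the helper name and shapes are illustrative, not part of the code above):

```python
import numpy as np

def to_rgb(map_to_plot):
    # Pad the channel axis with zeros up to 3 channels, so a
    # (H, W, 1) or (H, W, 2) map can be rendered as an RGB image.
    missing = 3 - map_to_plot.shape[-1]
    return np.pad(map_to_plot, [[0, 0], [0, 0], [0, missing]])

single_channel = np.ones((4, 4, 1))
rgb = to_rgb(single_channel)
print(rgb.shape)            # (4, 4, 3)
print(rgb[0, 0].tolist())   # [1.0, 0.0, 0.0]: occupancy shows up in the red channel
```

The padded channels stay at zero, so a single-channel occupancy map renders in red and a second (object) channel would render in green.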
# --- examples/nightlight/nightlight.py from pimoroni/breakout-garden (MIT license) ---
import time
from ltr559 import LTR559
from rgbmatrix5x5 import RGBMatrix5x5
print("""This Pimoroni Breakout Garden example requires an
LTR-559 Light and Proximity Breakout and a 5x5 RGB Matrix Breakout.
This example creates a little nightlight that can be toggled on or
off by tapping the proximity sensor with your finger, or triggered
automatically when it's dark.
Press Ctrl+C to exit.
""")
# Set up the LTR-559 sensor
ltr559 = LTR559()
# Set up the 5x5 RGB matrix
rgbmatrix5x5 = RGBMatrix5x5()
rgbmatrix5x5.set_clear_on_exit()
rgbmatrix5x5.set_brightness(0.8)
# Initial variables to keep track of state of light
state = False
last_state = False
toggled = False
light_threshold = 100 # Low-light trigger level
prox_threshold = 1000 # Proximity trigger level
colour = (255, 165, 0) # Orange-ish
# Function to toggle the RGB matrix on or off depending on state
def toggle_matrix():
global state, last_state
if state is True and last_state is False:
rgbmatrix5x5.set_all(*colour)
rgbmatrix5x5.show()
elif state is False and last_state is True:
rgbmatrix5x5.clear()
rgbmatrix5x5.show()
last_state = state
# Read the sensor once, as the first values are always squiffy
ltr559.update_sensor()
lux = ltr559.get_lux()
prox =ltr559. get_proximity()
time.sleep(1)
try:
while True:
# Read the light and proximity sensor
ltr559.update_sensor()
lux = ltr559.get_lux()
prox = ltr559.get_proximity()
# If it's dark and the light isn't toggled on, turn on
if lux < light_threshold and not toggled:
state = True
if state != last_state:
print("It's dark! Turning light ON")
toggle_matrix()
# If it's light and the light isn't on, turn off
elif lux >= light_threshold and not toggled:
state = False
if state != last_state:
print("It's light! Turning light OFF")
toggle_matrix()
# If there's a tap on the sensor
if prox > prox_threshold:
# Toggle it off if it's currently on
if toggled:
state = False
toggled = False
if state != last_state:
print("Toggling light OFF")
toggle_matrix()
# Toggle it on if it's currently off
else:
state = True
toggled = True
if state != last_state:
print("Toggling light ON")
toggle_matrix()
# Wait a short while to prevent the on/off switch
# from immediately re-triggering
time.sleep(0.5)
elif prox < prox_threshold and lux >= light_threshold:
state = False
time.sleep(0.05)
except KeyboardInterrupt:
pass
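The loop above interleaves two triggers: ambient light drives the automatic behaviour, while a proximity tap flips a manual override. The core state logic can be exercised without any hardware; the simplified sketch below is illustrative only (the function and constants are not part of the ltr559 or rgbmatrix5x5 libraries, and the debounce sleeps are omitted):

```python
LIGHT_THRESHOLD = 100   # illustrative stand-ins for the globals above
PROX_THRESHOLD = 1000

def next_state(lux, prox, state, toggled):
    """Apply the nightlight rules to a single (lux, prox) reading."""
    if prox > PROX_THRESHOLD:
        # A tap flips the manual override; the light follows it.
        toggled = not toggled
        state = toggled
    elif not toggled:
        # No override active: follow ambient light.
        state = lux < LIGHT_THRESHOLD
    return state, toggled

state, toggled = False, False
state, toggled = next_state(lux=50, prox=0, state=state, toggled=toggled)
assert state is True                        # dark -> turns on automatically
state, toggled = next_state(lux=50, prox=1500, state=state, toggled=toggled)
assert state is True and toggled is True    # tap -> manual ON
state, toggled = next_state(lux=500, prox=1500, state=state, toggled=toggled)
assert state is False and toggled is False  # second tap -> manual OFF
```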
# --- leaguedirector/sequence/sequenceTrackView.py from santutu/league-director (Apache-2.0 license) ---
import statistics
from operator import attrgetter
from PySide2.QtCore import Signal, Qt, QEvent
from PySide2.QtGui import QPen, QMouseEvent
from PySide2.QtWidgets import QGraphicsView, QGraphicsScene, QAbstractScrollArea, QApplication, QGraphicsItem
from leaguedirector.libs.memoryCache import MemoryCache
from leaguedirector.sequence.constant import PRECISION, ADJACENT
from leaguedirector.sequence.sequenceKeyframe import SequenceKeyframe
from leaguedirector.sequence.sequenceTime import SequenceTime
from leaguedirector.sequence.sequenceTrack import SequenceTrack
from leaguedirector.widgets import schedule
class SequenceTrackView(QGraphicsView):
selectionChanged = Signal()
def __init__(self, api, headers):
self.api = api
self.scene = QGraphicsScene()
QGraphicsView.__init__(self, self.scene)
self.tracks = {}
self.timer = schedule(10, self.animate)
self.scale(1.0 / PRECISION, 1.0)
self.setDragMode(QGraphicsView.NoDrag)
self.setAlignment(Qt.AlignLeft | Qt.AlignTop)
self.setTransformationAnchor(QGraphicsView.AnchorUnderMouse)
self.setSizeAdjustPolicy(QAbstractScrollArea.AdjustToContents)
for index, name in enumerate(self.api.sequence.keys()):
track = SequenceTrack(self.api, name, index)
self.scene.addItem(track)
self.tracks[name] = track
self.time = SequenceTime(0, 1, 0, self.scene.height() - 2)
self.time.setPen(QPen(QApplication.palette().highlight(), 1))
self.time.setFlags(QGraphicsItem.ItemIgnoresTransformations)
self.scene.addItem(self.time)
self.api.playback.updated.connect(self.update)
self.api.sequence.updated.connect(self.update)
self.api.sequence.dataLoaded.connect(self.reload)
headers.addKeyframe.connect(self.addKeyframe)
headers.verticalScrollBar().valueChanged.connect(lambda value: self.verticalScrollBar().setValue(value))
self.verticalScrollBar().valueChanged.connect(lambda value: headers.verticalScrollBar().setValue(value))
self.scene.selectionChanged.connect(self.selectionChanged.emit)
self.clipboard = MemoryCache()
self.clipboard.set('copied_key_frames', [])
def copyKeyframes(self):
self.clipboard.set('copied_key_frames',
[(keyframe.track.name, copy.deepcopy(keyframe.item)) for keyframe in
self.selectedKeyframes()])
return self
def pasteKeyframes(self):
keyframes = self.clipboard.get('copied_key_frames')
for keyframe in keyframes:
[name, item] = keyframe
item = copy.deepcopy(item)
self.api.sequence.appendKeyframe(name, item)
SequenceKeyframe(self.api, item, self.tracks[name])
def reload(self):
for track in self.tracks.values():
track.reload()
def selectedKeyframes(self):
return [key for key in self.scene.selectedItems() if isinstance(key, SequenceKeyframe)]
def allKeyframes(self):
return [key for key in self.scene.items() if isinstance(key, SequenceKeyframe)]
def addKeyframe(self, name):
self.tracks[name].addKeyframe()
def clearKeyframes(self):
for track in self.tracks.values():
track.clearKeyframes()
def deleteSelectedKeyframes(self):
for selected in self.selectedKeyframes():
selected.delete()
def selectAllKeyframes(self):
for child in self.allKeyframes():
child.setSelected(True)
def selectAdjacentKeyframes(self):
for selected in self.selectedKeyframes():
for child in self.allKeyframes():
if abs(child.time - selected.time) < ADJACENT:
child.setSelected(True)
def selectNextKeyframe(self):
selectionSorted = sorted(self.selectedKeyframes(), key=attrgetter('time'))
trackSelection = {key.track: key for key in selectionSorted}
for track, selected in trackSelection.items():
for child in sorted(track.childItems(), key=attrgetter('time')):
if child.time > selected.time:
trackSelection[track] = child
break
self.scene.clearSelection()
for item in trackSelection.values():
item.setSelected(True)
def selectPrevKeyframe(self):
selectionSorted = sorted(self.selectedKeyframes(), key=attrgetter('time'), reverse=True)
trackSelection = {key.track: key for key in selectionSorted}
for track, selected in trackSelection.items():
for child in sorted(track.childItems(), key=attrgetter('time'), reverse=True):
if child.time < selected.time:
trackSelection[track] = child
break
self.scene.clearSelection()
for item in trackSelection.values():
item.setSelected(True)
def seekSelectedKeyframe(self):
selected = [key.time for key in self.selectedKeyframes()]
if selected:
self.api.playback.pause(statistics.mean(selected))
def update(self):
for track in self.tracks.values():
track.update()
def mousePressEvent(self, event):
if event.button() == Qt.RightButton:
self.setDragMode(QGraphicsView.ScrollHandDrag)
QGraphicsView.mousePressEvent(self, QMouseEvent(
QEvent.GraphicsSceneMousePress,
event.pos(),
Qt.MouseButton.LeftButton,
Qt.MouseButton.LeftButton,
Qt.KeyboardModifier.NoModifier
))
elif event.button() == Qt.LeftButton:
if event.modifiers() == Qt.ShiftModifier:
self.setDragMode(QGraphicsView.RubberBandDrag)
QGraphicsView.mousePressEvent(self, event)
QGraphicsView.mousePressEvent(self, event)
def mouseDoubleClickEvent(self, event):
QGraphicsView.mouseDoubleClickEvent(self, event)
if not self.scene.selectedItems() and not event.isAccepted():
self.api.playback.pause(self.mapToScene(event.pos()).x() / PRECISION)
def mouseReleaseEvent(self, event):
QGraphicsView.mouseReleaseEvent(self, event)
self.setDragMode(QGraphicsView.NoDrag)
def wheelEvent(self, event):
if event.angleDelta().y() > 0:
self.scale(1.1, 1.0)
else:
self.scale(0.9, 1.0)
def animate(self):
self.time.setPos(self.api.playback.currentTime * PRECISION, 0)
# --- annealed_flow_transport/flows.py from deepmind/annealed_flow_transport (Apache-2.0 license) ---
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Code for normalizing flows.
For a review of normalizing flows see: https://arxiv.org/abs/1912.02762
The abstract base class ConfigurableFlow demonstrates our minimal interface.
Although the standard change of variables formula requires that
normalizing flows are invertible, none of the algorithms in train.py
require evaluating that inverse explicitly so inverses are not implemented.
"""
import abc
from typing import Callable, List, Tuple
import annealed_flow_transport.aft_types as tp
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import numpy as np
Array = tp.Array
ConfigDict = tp.ConfigDict
class ConfigurableFlow(hk.Module, abc.ABC):
  """Abstract base class for configurable normalizing flows.

  This is the interface expected by all flow based algorithms called in train.py
  """

  def __init__(self, config: ConfigDict):
    super().__init__()
    self._check_configuration(config)
    self._config = config

  def _check_input(self, x: Array) -> Array:
    chex.assert_rank(x, 1)

  def _check_outputs(self, x: Array, transformed_x: Array,
                     log_abs_det_jac: Array) -> Array:
    chex.assert_rank(x, 1)
    chex.assert_equal_shape([x, transformed_x])
    chex.assert_shape(log_abs_det_jac, ())

  def _check_members_types(self, config: ConfigDict, expected_members_types):
    for elem, elem_type in expected_members_types:
      if elem not in config:
        raise ValueError('Flow config element not found: ', elem)
      if not isinstance(config[elem], elem_type):
        msg = 'Flow config element ' + elem + ' is not of type ' + str(elem_type)
        raise TypeError(msg)

  def __call__(self, x: Array) -> Tuple[Array, Array]:
    """Call transform_and_log_abs_det_jac with automatic shape checking.

    This calls transform_and_log_abs_det_jac which needs to be implemented
    in derived classes.

    Args:
      x: Array size (num_dim,) containing input to flow.

    Returns:
      Array size (num_dim,) containing output and Scalar log abs det Jacobian.
    """
    self._check_input(x)
    output, log_abs_det_jac = self.transform_and_log_abs_det_jac(x)
    self._check_outputs(x, output, log_abs_det_jac)
    return output, log_abs_det_jac

  @abc.abstractmethod
  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    """Transform x through the flow and compute log abs determinant of Jacobian.

    Args:
      x: (num_dim,) input to the flow.

    Returns:
      Array size (num_dim,) containing output and Scalar log abs det Jacobian.
    """

  @abc.abstractmethod
  def _check_configuration(self, config: ConfigDict):
    """Check the configuration includes the necessary fields.

    Will typically raise Assertion like errors.

    Args:
      config: A ConfigDict including the fields required by the flow.
    """
class DiagonalAffine(ConfigurableFlow):
  """An affine transformation with a positive diagonal matrix."""

  def _check_configuration(self, unused_config: ConfigDict):
    pass

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    num_elem = x.shape[0]
    unconst_diag_init = hk.initializers.Constant(jnp.zeros((num_elem,)))
    bias_init = hk.initializers.Constant(jnp.zeros((num_elem,)))
    unconst_diag = hk.get_parameter('unconst_diag',
                                    shape=[num_elem],
                                    dtype=x.dtype,
                                    init=unconst_diag_init)
    bias = hk.get_parameter('bias',
                            shape=[num_elem],
                            dtype=x.dtype,
                            init=bias_init)
    output = jnp.exp(unconst_diag) * x + bias
    log_abs_det = jnp.sum(unconst_diag)
    return output, log_abs_det
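`DiagonalAffine` computes `y = exp(unconst_diag) * x + bias`, whose Jacobian is the diagonal matrix `diag(exp(unconst_diag))`, so the log abs determinant is simply `sum(unconst_diag)`. That identity can be checked with plain NumPy (a standalone re-derivation, not the Haiku module itself):

```python
import numpy as np

rng = np.random.default_rng(0)
unconst_diag = rng.normal(size=4)   # unconstrained diagonal parameters
bias = rng.normal(size=4)
x = rng.normal(size=4)

output = np.exp(unconst_diag) * x + bias
log_abs_det = np.sum(unconst_diag)

# Brute-force check: d(output)/dx = diag(exp(unconst_diag)), and
# log|det diag(exp(d))| = sum(log(exp(d))) = sum(d).
jacobian = np.diag(np.exp(unconst_diag))
brute_force = np.log(np.abs(np.linalg.det(jacobian)))
assert np.allclose(log_abs_det, brute_force)
```

Exponentiating the diagonal keeps every scale strictly positive, which is why no absolute value is needed when summing.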
def rational_quadratic_spline(x: Array,
                              bin_positions: Array,
                              bin_heights: Array,
                              derivatives: Array) -> Tuple[Array, Array]:
  """Compute a rational quadratic spline.

  See https://arxiv.org/abs/1906.04032

  Args:
    x: A single real number.
    bin_positions: A sorted array of bin positions of length num_bins+1.
    bin_heights: An array of bin heights of length num_bins+1.
    derivatives: An array of derivatives at bin positions of length num_bins+1.

  Returns:
    Value of the rational quadratic spline at x.
    Derivative with respect to x of rational quadratic spline at x.
  """
  bin_index = jnp.searchsorted(bin_positions, x)
  array_index = bin_index % len(bin_positions)
  lower_x = bin_positions[array_index - 1]
  upper_x = bin_positions[array_index]
  lower_y = bin_heights[array_index - 1]
  upper_y = bin_heights[array_index]
  lower_deriv = derivatives[array_index - 1]
  upper_deriv = derivatives[array_index]
  delta_x = upper_x - lower_x
  delta_y = upper_y - lower_y
  slope = delta_y / delta_x
  alpha = (x - lower_x) / delta_x
  alpha_squared = jnp.square(alpha)
  beta = alpha * (1. - alpha)
  gamma = jnp.square(1. - alpha)
  epsilon = upper_deriv + lower_deriv - 2. * slope
  numerator_quadratic = delta_y * (slope * alpha_squared + lower_deriv * beta)
  denominator_quadratic = slope + epsilon * beta
  interp_x = lower_y + numerator_quadratic / denominator_quadratic
  # Now compute the derivative.
  numerator_deriv = jnp.square(slope) * (
      upper_deriv * alpha_squared + 2. * slope * beta + lower_deriv * gamma)
  sqrt_denominator_deriv = slope + epsilon * beta
  denominator_deriv = jnp.square(sqrt_denominator_deriv)
  deriv = numerator_deriv / denominator_deriv
  return interp_x, deriv
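A useful sanity check on the rational quadratic formula: when both edge derivatives equal the chord slope `(y1 - y0) / (x1 - x0)`, the `epsilon` term vanishes and the bin reduces to linear interpolation. The check below re-implements the single-bin value formula with plain floats, mirroring the expressions above but independent of JAX:

```python
def rq_bin(x, x0, x1, y0, y1, d0, d1):
    """Rational quadratic interpolant on one bin, same algebra as above."""
    slope = (y1 - y0) / (x1 - x0)
    alpha = (x - x0) / (x1 - x0)
    beta = alpha * (1. - alpha)
    epsilon = d0 + d1 - 2. * slope
    numerator = (y1 - y0) * (slope * alpha ** 2 + d0 * beta)
    denominator = slope + epsilon * beta
    return y0 + numerator / denominator

# With d0 = d1 = slope, epsilon = 0 and the bin is exactly linear.
val = rq_bin(0.25, x0=0., x1=1., y0=2., y1=4., d0=2., d1=2.)
print(val)  # 2.5, i.e. 2 + 0.25 * (4 - 2)
```

This degenerate case also explains why the identity padding below uses derivative 1 at the boundary: it makes the spline blend smoothly into the identity map.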
def identity_padded_rational_quadratic_spline(
    x: Array, bin_positions: Array, bin_heights: Array,
    derivatives: Array) -> Tuple[Array, Array]:
  """An identity padded rational quadratic spline.

  Args:
    x: the value to evaluate the spline at.
    bin_positions: sorted values of bin x positions of length num_bins+1.
    bin_heights: absolute height of bin of length num_bins-1.
    derivatives: derivatives at internal bin edge of length num_bins-1.

  Returns:
    The value of the spline at x.
    The derivative with respect to x of the spline at x.
  """
  lower_limit = bin_positions[0]
  upper_limit = bin_positions[-1]
  bin_height_sequence = (jnp.atleast_1d(jnp.array(lower_limit)),
                         bin_heights,
                         jnp.atleast_1d(jnp.array(upper_limit)))
  full_bin_heights = jnp.concatenate(bin_height_sequence)
  derivative_sequence = (jnp.ones((1,)),
                         derivatives,
                         jnp.ones((1,)))
  full_derivatives = jnp.concatenate(derivative_sequence)
  in_range = jnp.logical_and(jnp.greater(x, lower_limit),
                             jnp.less(x, upper_limit))
  multiplier = in_range * 1.
  multiplier_complement = jnp.logical_not(in_range) * 1.
  spline_val, spline_deriv = rational_quadratic_spline(x,
                                                       bin_positions,
                                                       full_bin_heights,
                                                       full_derivatives)
  identity_val = x
  identity_deriv = 1.
  val = spline_val * multiplier + multiplier_complement * identity_val
  deriv = spline_deriv * multiplier + multiplier_complement * identity_deriv
  return val, deriv
class AutoregressiveMLP(hk.Module):
  """An MLP which is constrained to have autoregressive dependency."""

  def __init__(self,
               num_hiddens_per_input_dim: List[int],
               include_self_links: bool,
               non_linearity,
               zero_final: bool,
               bias_last: bool,
               name=None):
    super().__init__(name=name)
    self._num_hiddens_per_input_dim = num_hiddens_per_input_dim
    self._include_self_links = include_self_links
    self._non_linearity = non_linearity
    self._zero_final = zero_final
    self._bias_last = bias_last

  def __call__(self, x: Array) -> Array:
    input_dim = x.shape[0]
    hidden_representation = jnp.atleast_2d(x).T
    prev_hid_per_dim = 1
    num_hidden_layers = len(self._num_hiddens_per_input_dim)
    final_index = num_hidden_layers - 1
    for layer_index in range(num_hidden_layers):
      is_last_layer = (final_index == layer_index)
      hid_per_dim = self._num_hiddens_per_input_dim[layer_index]
      name_stub = '_' + str(layer_index)
      layer_shape = (input_dim,
                     prev_hid_per_dim,
                     input_dim,
                     hid_per_dim)
      in_degree = prev_hid_per_dim * input_dim
      if is_last_layer and self._zero_final:
        w_init = jnp.zeros
      else:
        w_init = hk.initializers.TruncatedNormal(1. / np.sqrt(in_degree))
      bias_init = hk.initializers.Constant(jnp.zeros((input_dim, hid_per_dim,)))
      weights = hk.get_parameter(name='weights' + name_stub,
                                 shape=layer_shape,
                                 dtype=x.dtype,
                                 init=w_init)
      if is_last_layer and not self._bias_last:
        biases = jnp.zeros((input_dim, hid_per_dim,))
      else:
        biases = hk.get_parameter(name='biases' + name_stub,
                                  shape=(input_dim, hid_per_dim),
                                  dtype=x.dtype,
                                  init=bias_init)
      if not self._include_self_links and is_last_layer:
        k = -1
      else:
        k = 0
      mask = jnp.tril(jnp.ones((input_dim, input_dim)),
                      k=k)
      masked_weights = mask[:, None, :, None] * weights
      new_hidden_representation = jnp.einsum('ijkl,ij->kl',
                                             masked_weights,
                                             hidden_representation) + biases
      prev_hid_per_dim = hid_per_dim
      if not is_last_layer:
        hidden_representation = self._non_linearity(new_hidden_representation)
      else:
        hidden_representation = new_hidden_representation
    return hidden_representation
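The autoregressive constraint in `AutoregressiveMLP` comes entirely from the `jnp.tril` mask: entry `[i, j]` of the mask gates whether input unit `i` may feed output unit `j`, keeping the dependency structure (and hence the flow's Jacobian) triangular. With `k=-1`, used on the last layer when self links are excluded, the diagonal is removed as well. A NumPy illustration of the two masks:

```python
import numpy as np

input_dim = 4
inclusive = np.tril(np.ones((input_dim, input_dim)), k=0)   # keeps self links
strict = np.tril(np.ones((input_dim, input_dim)), k=-1)     # drops self links

# The strict mask is exactly the inclusive mask with its diagonal removed,
# so composing layers of these masks never creates a cyclic dependency.
assert np.array_equal(strict, inclusive - np.eye(input_dim))
print(strict.astype(int))
```

Stacking several masked layers preserves the triangular pattern, which is what makes the final Jacobian determinant a product of diagonal terms.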
class InverseAutogressiveFlow(object):
  """A generic inverse autoregressive flow.

  See https://arxiv.org/abs/1606.04934

  Takes two functions as input.

  1) autoregressive_func takes an array of (num_dim,)
  and returns an array (num_dim, num_features).
  It is autoregressive in the sense that the output[i, :]
  depends only on the input[:i]. This is not checked.

  2) transform_func takes an array of (num_dim, num_features) and
  an array of (num_dim,) and returns an output of shape (num_dim,)
  and a single log_det_jacobian value. This represents the transformation
  acting on the inputs with the given parameters.
  """

  def __init__(self,
               autoregressive_func: Callable[[Array], Array],
               transform_func: Callable[[Array, Array], Tuple[Array, Array]]):
    self._autoregressive_func = autoregressive_func
    self._transform_func = transform_func

  def __call__(self, x: Array) -> Tuple[Array, Array]:
    """x is of shape (num_dim,)."""
    transform_features = self._autoregressive_func(x)
    output, log_abs_det = self._transform_func(transform_features, x)
    return output, log_abs_det
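The two callables given to `InverseAutogressiveFlow` are deliberately decoupled: any parameter network can be paired with any elementwise transformer. A toy instance with plain NumPy functions (illustrative stand-ins, not the networks used elsewhere in this file): the "network" emits, for each dimension, a shift computed only from strictly earlier dimensions, and the transformer adds it, giving a unit-triangular Jacobian and log abs determinant zero.

```python
import numpy as np

def toy_autoregressive_func(x):
    # Feature i is the sum of strictly earlier inputs, so row i of the
    # output depends only on x[:i] -- the autoregressive property.
    shifts = np.concatenate([[0.], np.cumsum(x)[:-1]])
    return shifts[:, None]  # shape (num_dim, num_features=1)

def toy_transform_func(features, x):
    # A pure shift: the Jacobian w.r.t. x is unit triangular, so log|det| = 0.
    return x + features[:, 0], 0.0

x = np.array([1., 2., 3.])
features = toy_autoregressive_func(x)
y, log_det = toy_transform_func(features, x)
print(y.tolist())  # [1.0, 3.0, 6.0]
print(log_det)     # 0.0
```

The spline and affine flows below plug richer transformers into exactly this slot.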
class SplineInverseAutoregressiveFlow(ConfigurableFlow):
  """An inverse autoregressive flow with spline transformer.

  config must contain the following fields:
    num_spline_bins: Number of bins for rational quadratic spline.
    intermediate_hids_per_dim: See AutoregressiveMLP.
    num_layers: Number of layers for AutoregressiveMLP.
    identity_init: Whether to initialize the flow to the identity.
    bias_last: Whether to include biases on the last layer of AutoregressiveMLP.
    lower_lim: Lower limit of active region for rational quadratic spline.
    upper_lim: Upper limit of active region for rational quadratic spline.
    min_bin_size: Minimum bin size for rational quadratic spline.
    min_derivative: Minimum derivative for rational quadratic spline.
  """

  def __init__(self,
               config: ConfigDict):
    super().__init__(config)
    self._num_spline_bins = config.num_spline_bins
    num_spline_parameters = 3 * config.num_spline_bins - 1
    num_hids_per_input_dim = [config.intermediate_hids_per_dim
                             ] * config.num_layers + [
                                 num_spline_parameters
                             ]
    self._autoregressive_mlp = AutoregressiveMLP(
        num_hids_per_input_dim,
        include_self_links=False,
        non_linearity=jax.nn.leaky_relu,
        zero_final=config.identity_init,
        bias_last=config.bias_last)
    self._lower_lim = config.lower_lim
    self._upper_lim = config.upper_lim
    self._min_bin_size = config.min_bin_size
    self._min_derivative = config.min_derivative

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [
        ('num_spline_bins', int),
        ('intermediate_hids_per_dim', int),
        ('num_layers', int),
        ('identity_init', bool),
        ('bias_last', bool),
        ('lower_lim', float),
        ('upper_lim', float),
        ('min_bin_size', float),
        ('min_derivative', float)
    ]
    self._check_members_types(config, expected_members_types)
  def _unpack_spline_params(self, raw_param_vec) -> Tuple[Array, Array, Array]:
    unconst_bin_size_x = raw_param_vec[:self._num_spline_bins]
    unconst_bin_size_y = raw_param_vec[self._num_spline_bins:2 *
                                       self._num_spline_bins]
    unconst_derivs = raw_param_vec[2 * self._num_spline_bins:(
        3 * self._num_spline_bins - 1)]
    return unconst_bin_size_x, unconst_bin_size_y, unconst_derivs

  def _transform_raw_to_spline_params(
      self, raw_param_vec: Array) -> Tuple[Array, Array, Array]:
    unconst_bin_size_x, unconst_bin_size_y, unconst_derivs = self._unpack_spline_params(
        raw_param_vec)

    def normalize_bin_sizes(unconst_bin_sizes: Array) -> Array:
      bin_range = self._upper_lim - self._lower_lim
      reduced_bin_range = (
          bin_range - self._num_spline_bins * self._min_bin_size)
      return jax.nn.softmax(
          unconst_bin_sizes) * reduced_bin_range + self._min_bin_size

    bin_size_x = normalize_bin_sizes(unconst_bin_size_x)
    bin_size_y = normalize_bin_sizes(unconst_bin_size_y)
    # Get the x bin positions.
    array_sequence = (jnp.ones((1,)) * self._lower_lim, bin_size_x)
    x_bin_pos = jnp.cumsum(jnp.concatenate(array_sequence))
    # Get the y bin positions, ignoring redundant terms.
    stripped_y_bin_pos = self._lower_lim + jnp.cumsum(bin_size_y[:-1])

    def forward_positive_transform(unconst_value: Array,
                                   min_value: Array) -> Array:
      return jax.nn.softplus(unconst_value) + min_value

    def inverse_positive_transform(const_value: Array,
                                   min_value: Array) -> Array:
      return jnp.log(jnp.expm1(const_value - min_value))

    inverted_one = inverse_positive_transform(1., self._min_derivative)
    derivatives = forward_positive_transform(unconst_derivs + inverted_one,
                                             self._min_derivative)
    return x_bin_pos, stripped_y_bin_pos, derivatives
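`normalize_bin_sizes` guarantees that the bins always tile the active interval: softmax produces positive fractions summing to one, each bin is then guaranteed at least `min_bin_size`, and the total width is exactly `upper_lim - lower_lim`. Checked numerically with a standalone NumPy re-implementation of the formula above (the constants are illustrative):

```python
import numpy as np

lower_lim, upper_lim = -3.0, 3.0
num_bins, min_bin_size = 5, 0.01

def normalize_bin_sizes(unconstrained):
    bin_range = upper_lim - lower_lim
    reduced = bin_range - num_bins * min_bin_size
    # Numerically stable softmax over the unconstrained sizes.
    exps = np.exp(unconstrained - unconstrained.max())
    softmax = exps / exps.sum()
    return softmax * reduced + min_bin_size

sizes = normalize_bin_sizes(np.array([0.3, -1.2, 0.7, 2.0, -0.5]))
print(np.all(sizes >= min_bin_size))                    # every bin has minimum width
print(np.isclose(sizes.sum(), upper_lim - lower_lim))   # bins exactly tile the interval
```

The cumulative sums of these sizes then give strictly increasing bin edges, which is what keeps the spline monotonic.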
  def _get_spline_values(self,
                         raw_parameters: Array,
                         x: Array) -> Tuple[Array, Array]:
    bat_get_parameters = jax.vmap(self._transform_raw_to_spline_params)
    bat_x_bin_pos, bat_stripped_y, bat_derivatives = bat_get_parameters(
        raw_parameters)
    # Vectorize spline over data and parameters.
    bat_get_spline_vals = jax.vmap(identity_padded_rational_quadratic_spline,
                                   in_axes=[0, 0, 0, 0])
    spline_vals, derivs = bat_get_spline_vals(x, bat_x_bin_pos, bat_stripped_y,
                                              bat_derivatives)
    log_abs_det = jnp.sum(jnp.log(jnp.abs(derivs)))
    return spline_vals, log_abs_det

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    iaf = InverseAutogressiveFlow(self._autoregressive_mlp,
                                  self._get_spline_values)
    return iaf(x)
class AffineInverseAutoregressiveFlow(ConfigurableFlow):
  """An inverse autoregressive flow with affine transformer.

  config must contain the following fields:
    intermediate_hids_per_dim: See AutoregressiveMLP.
    num_layers: Number of layers for AutoregressiveMLP.
    identity_init: Whether to initialize the flow to the identity.
    bias_last: Whether to include biases on the last layer of AutoregressiveMLP.
  """

  def __init__(self,
               config: ConfigDict):
    super().__init__(config)
    num_affine_params = 2
    num_hids_per_input_dim = [config.intermediate_hids_per_dim
                             ] * config.num_layers + [num_affine_params]
    self._autoregressive_mlp = AutoregressiveMLP(
        num_hids_per_input_dim,
        include_self_links=False,
        non_linearity=jax.nn.leaky_relu,
        zero_final=config.identity_init,
        bias_last=config.bias_last)

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [('intermediate_hids_per_dim', int),
                              ('num_layers', int),
                              ('identity_init', bool),
                              ('bias_last', bool)
                              ]
    self._check_members_types(config, expected_members_types)

  def _get_affine_transformation(self,
                                 raw_parameters: Array,
                                 x: Array) -> Tuple[Array, Array]:
    shifts = raw_parameters[:, 0]
    scales = raw_parameters[:, 1] + jnp.ones_like(raw_parameters[:, 1])
    log_abs_det = jnp.sum(jnp.log(jnp.abs(scales)))
    output = x * scales + shifts
    return output, log_abs_det

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    iaf = InverseAutogressiveFlow(self._autoregressive_mlp,
                                  self._get_affine_transformation)
    return iaf(x)
def affine_transformation(params: Array,
                          x: Array) -> Tuple[Array, Array]:
    shift = params[0]
    # Assuming params start at zero, adding 1 to the scale gives the identity transform.
    scale = params[1] + 1.
    output = x * scale + shift
    return output, jnp.log(jnp.abs(scale))
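As the comment notes, adding 1 to the raw scale makes all-zero parameters the identity map with zero log-determinant. A pure-Python check of the same shift/scale arithmetic (mirroring `affine_transformation` without the JAX dependency):

```python
import math

def affine(params, x):
    shift = params[0]
    scale = params[1] + 1.0  # zero raw parameters -> scale 1, i.e. identity
    return x * scale + shift, math.log(abs(scale))

out, log_det = affine((0.0, 0.0), 2.5)    # identity: output equals input, log|det| is 0
out2, log_det2 = affine((1.0, 1.0), 2.0)  # 2.0 * 2 + 1 = 5.0, log|det| = log(2)
```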
class RationalQuadraticSpline(ConfigurableFlow):
    """A learnt monotonic rational quadratic spline with identity padding.

    Each input dimension is operated on by a separate spline.
    The spline is initialized to the identity.

    config must contain the following fields:
        num_bins: Number of bins for the rational quadratic spline.
        lower_lim: Lower limit of the active region for the spline.
        upper_lim: Upper limit of the active region for the spline.
        min_bin_size: Minimum bin size for the spline.
        min_derivative: Minimum derivative for the spline.
    """

    def __init__(self,
                 config: ConfigDict):
        super().__init__(config)
        self._num_bins = config.num_bins
        self._lower_lim = config.lower_lim
        self._upper_lim = config.upper_lim
        self._min_bin_size = config.min_bin_size
        self._min_derivative = config.min_derivative

    def _check_configuration(self, config: ConfigDict):
        expected_members_types = [
            ('num_bins', int),
            ('lower_lim', float),
            ('upper_lim', float),
            ('min_bin_size', float),
            ('min_derivative', float)
        ]
        self._check_members_types(config, expected_members_types)

    def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
        """Apply the spline transformation.

        Args:
            x: (num_dim,) DeviceArray representing the flow input.

        Returns:
            output: (num_dim,) transformed sample through the flow.
            log_abs_det_jac: Scalar log absolute determinant of the Jacobian.
        """
        num_dim = x.shape[0]
        bin_parameter_shape = (num_dim, self._num_bins)
        # Set up the bin position and height parameters.
        bin_init = hk.initializers.Constant(jnp.ones(bin_parameter_shape))
        unconst_bin_size_x = hk.get_parameter(
            'unconst_bin_size_x',
            shape=bin_parameter_shape,
            dtype=x.dtype,
            init=bin_init)
        unconst_bin_size_y = hk.get_parameter(
            'unconst_bin_size_y',
            shape=bin_parameter_shape,
            dtype=x.dtype,
            init=bin_init)

        def normalize_bin_sizes(unconst_bin_sizes):
            bin_range = self._upper_lim - self._lower_lim
            reduced_bin_range = (bin_range - self._num_bins * self._min_bin_size)
            return jax.nn.softmax(
                unconst_bin_sizes) * reduced_bin_range + self._min_bin_size

        batched_normalize = jax.vmap(normalize_bin_sizes)
        bin_size_x = batched_normalize(unconst_bin_size_x)
        bin_size_y = batched_normalize(unconst_bin_size_y)
        array_sequence = (jnp.ones((num_dim, 1)) * self._lower_lim, bin_size_x)
        bin_positions = jnp.cumsum(jnp.concatenate(array_sequence, axis=1), axis=1)
        # Don't include the redundant bin heights.
        stripped_bin_heights = self._lower_lim + jnp.cumsum(
            bin_size_y[:, :-1], axis=1)

        # Set up the derivative parameters.
        def forward_positive_transform(unconst_value, min_value):
            return jax.nn.softplus(unconst_value) + min_value

        def inverse_positive_transform(const_value, min_value):
            return jnp.log(jnp.expm1(const_value - min_value))

        deriv_parameter_shape = (num_dim, self._num_bins - 1)
        inverted_one = inverse_positive_transform(1., self._min_derivative)
        deriv_init = hk.initializers.Constant(
            jnp.ones(deriv_parameter_shape) * inverted_one)
        unconst_deriv = hk.get_parameter(
            'unconst_deriv',
            shape=deriv_parameter_shape,
            dtype=x.dtype,
            init=deriv_init)
        batched_positive_transform = jax.vmap(
            forward_positive_transform, in_axes=[0, None])
        deriv = batched_positive_transform(unconst_deriv, self._min_derivative)
        # Set up batching, then apply the spline.
        batch_padded_rq_spline = jax.vmap(
            identity_padded_rational_quadratic_spline, in_axes=[0, 0, 0, 0])
        output, jac_terms = batch_padded_rq_spline(x, bin_positions,
                                                   stripped_bin_heights, deriv)
        log_abs_det_jac = jnp.sum(jnp.log(jac_terms))
        return output, log_abs_det_jac
class ComposedFlows(ConfigurableFlow):
    """Class to compose flows based on a list of configs.

    config should contain flow_configs, a list of flow configs to compose.
    """

    def __init__(self, config: ConfigDict):
        super().__init__(config)
        self._flows = []
        for flow_config in self._config.flow_configs:
            base_flow_class = globals()[flow_config.type]
            flow = base_flow_class(flow_config)
            self._flows.append(flow)

    def _check_configuration(self, config: ConfigDict):
        expected_members_types = [
            ('flow_configs', list),
        ]
        self._check_members_types(config, expected_members_types)

    def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
        log_abs_det = 0.
        progress = x
        for flow in self._flows:
            progress, log_abs_det_increment = flow(progress)
            log_abs_det += log_abs_det_increment
        return progress, log_abs_det
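`ComposedFlows` chains the flows and simply sums their log-Jacobian terms, which is valid because the determinant of a composition of maps factorizes. A minimal stand-alone illustration with two toy 1-D maps (hypothetical flows, not the classes above):

```python
import math

def scale_flow(x):   # x -> 2x, log|det J| = log 2
    return 2.0 * x, math.log(2.0)

def shift_flow(x):   # x -> x + 3, log|det J| = 0
    return x + 3.0, 0.0

def compose(flows, x):
    # Thread the sample through each flow, accumulating log-determinants.
    log_abs_det = 0.0
    for flow in flows:
        x, increment = flow(x)
        log_abs_det += increment
    return x, log_abs_det

y, log_det = compose([scale_flow, shift_flow], 1.0)  # (2 * 1) + 3 = 5.0
```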
# Initial framework taken from https://github.com/OctThe16th/PPO-Keras/blob/master/Main.py
import numpy as np
import gym
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
import random
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
print(tf.__version__)

# NOTE: the environment-name constant was lost in extraction; CartPole-v0 is an
# assumption that matches NUM_STATE=4, NUM_ACTIONS=2 and the 195.0 solved
# threshold used below.
ENV = 'CartPole-v0'
env = gym.make(ENV)
CONTINUOUS = False
# num_states = env.observation_space.shape[0]
LOSS_CLIPPING = 0.2 # Only implemented clipping for the surrogate loss, paper said it was best
NOISE = 1.0 # Exploration noise
GAMMA = 0.99
BUFFER_SIZE = 256
BATCH_SIZE = 64
NUM_ACTIONS = 2
NUM_STATE = 4
HIDDEN_SIZE = 128
NUM_LAYERS = 2
ENTROPY_LOSS = 1e-3
LR = 1e-4 # Lower lr stabilises training greatly
'''def exponential_average(old, new, b1):
return old * b1 + (1-b1) * new'''
def proximal_policy_optimization_loss(advantage, old_prediction):
    def loss(y_true, y_pred):
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob / (old_prob + 1e-10)
        return -K.mean(
            K.minimum(
                r * advantage,
                K.clip(r, min_value=1 - LOSS_CLIPPING, max_value=1 + LOSS_CLIPPING) * advantage
            ) + ENTROPY_LOSS * (prob * K.log(prob + 1e-10))
        )
    return loss
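The loss above is the PPO clipped surrogate: taking the minimum of the raw and clipped ratio-weighted advantage caps the incentive to move the policy far from the old one. A scalar sketch of the clipping logic (pure Python, illustrative values):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    # clip the probability ratio to [1 - eps, 1 + eps], then take the
    # pessimistic (minimum) of the raw and clipped objectives
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: the objective stops growing once ratio > 1 + eps.
capped = clipped_surrogate(1.5, 1.0)    # ~1.2, not 1.5
# Negative advantage: ratios below 1 - eps are penalized at the clipped value.
floored = clipped_surrogate(0.5, -1.0)  # ~-0.8, not -0.5
```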
class Agent:
    def __init__(self):
        self.critic = self.build_critic()
        self.actor = self.build_actor()
        self.env = gym.make(ENV)
        print(self.env.action_space, 'action_space', self.env.observation_space, 'observation_space')
        self.episode = 0
        self.observation = self.env.reset()
        self.val = False
        self.reward = []
        self.reward_over_time = []
        self.name = self.get_name()
        self.scores = []
        self.episode_reward = 0

    def get_name(self):
        name = 'AllRuns/'
        name += 'discrete/'
        name += ENV
        return name
    def build_actor(self):
        state_input = Input(shape=(NUM_STATE,))
        advantage = Input(shape=(1,))
        old_prediction = Input(shape=(NUM_ACTIONS,))
        x = Dense(units=HIDDEN_SIZE, activation='tanh')(state_input)
        for _ in range(NUM_LAYERS - 1):
            x = Dense(HIDDEN_SIZE, activation='tanh')(x)
        out_actions = Dense(units=NUM_ACTIONS, activation='softmax', name='output')(x)
        model = Model(
            inputs=[state_input, advantage, old_prediction],
            outputs=[out_actions],
            name="actor_model"
        )
        model.compile(
            optimizer=Adam(lr=LR),
            loss=[proximal_policy_optimization_loss(advantage=advantage, old_prediction=old_prediction)]
        )
        model.summary()
        return model

    def build_critic(self):
        state_input = Input(shape=(NUM_STATE,))
        x = Dense(units=HIDDEN_SIZE, activation='tanh')(state_input)
        for _ in range(NUM_LAYERS - 1):
            x = Dense(units=HIDDEN_SIZE, activation='tanh')(x)
        out_value = Dense(units=1)(x)
        model = Model(
            inputs=[state_input],
            outputs=[out_value],
            name="critic_model"
        )
        model.compile(
            optimizer=Adam(lr=LR),
            loss='mse'
        )
        model.summary()
        return model
    def reset_env(self):
        self.episode += 1
        if self.episode % 100 == 0:
            self.val = True
        else:
            self.val = False
        self.observation = self.env.reset()
        self.reward = []
        self.episode_reward = 0

    def get_action(self):
        DUMMY_VALUE = np.zeros((1, 1))
        DUMMY_ACTION = np.zeros((1, NUM_ACTIONS))
        p = self.actor.predict([self.observation.reshape(1, NUM_STATE), DUMMY_VALUE, DUMMY_ACTION])
        if self.val is False:
            action = np.random.choice(NUM_ACTIONS, p=np.nan_to_num(p[0]))
        else:
            action = np.argmax(p[0])
        action_matrix = np.zeros(NUM_ACTIONS)
        action_matrix[action] = 1
        return action, action_matrix, p

    def transform_reward(self):
        for j in range(len(self.reward) - 2, -1, -1):
            self.reward[j] += self.reward[j + 1] * GAMMA
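`transform_reward` walks the episode backwards so each entry becomes the discounted return G_t = r_t + GAMMA * G_{t+1}. The same loop on a tiny hand-made episode (illustrative numbers, outside the class):

```python
GAMMA = 0.99
rewards = [1.0, 1.0, 1.0]
# Backward pass: each reward absorbs the discounted reward after it.
for j in range(len(rewards) - 2, -1, -1):
    rewards[j] += rewards[j + 1] * GAMMA
# rewards == [1 + 0.99 * 1.99, 1 + 0.99, 1.0] == [2.9701, 1.99, 1.0]
```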
    def get_batch(self):
        batch = [[], [], [], []]
        tmp_batch = [[], [], []]
        while len(batch[0]) < BUFFER_SIZE:
            action, action_matrix, actor_p = self.get_action()
            observation, reward, done, info = self.env.step(action)
            self.reward.append(reward)
            self.episode_reward = self.episode_reward + reward
            tmp_batch[0].append(self.observation)
            tmp_batch[1].append(action_matrix)
            tmp_batch[2].append(actor_p)
            self.observation = observation
            if done:
                self.transform_reward()
                if self.val is False:
                    for i in range(len(tmp_batch[0])):
                        obs, action, pred = tmp_batch[0][i], tmp_batch[1][i], tmp_batch[2][i]
                        r = self.reward[i]
                        batch[0].append(obs)
                        batch[1].append(action)
                        batch[2].append(pred)
                        batch[3].append(r)
                tmp_batch = [[], [], []]
                # print("EPISODE REWARD ", self.episode_reward)
                self.scores.append(self.episode_reward)
                self.reset_env()
        obs = np.array(batch[0])
        action = np.array(batch[1])
        pred = np.array(batch[2])
        pred = np.reshape(pred, (pred.shape[0], pred.shape[2]))
        reward = np.reshape(np.array(batch[3]), (len(batch[3]), 1))
        return obs[:BUFFER_SIZE], action[:BUFFER_SIZE], pred[:BUFFER_SIZE], reward[:BUFFER_SIZE]
    def run(self):
        total_episodes = 100000
        epochs = 10
        while self.episode < total_episodes:
            if len(self.scores) > 1:
                print("EPISODE ", self.episode, self.scores[-1])
            obs, action, pred, reward = self.get_batch()
            old_prediction = pred
            pred_values = self.critic.predict(obs)
            advantage = reward - pred_values
            # advantage = (advantage - advantage.mean()) / advantage.std()
            actor_loss = self.actor.fit(
                x=[obs, advantage, old_prediction],
                y=[action],
                batch_size=BATCH_SIZE,
                shuffle=True,
                epochs=epochs,
                verbose=0
            )
            critic_loss = self.critic.fit(
                x=[obs],
                y=[reward],
                batch_size=BATCH_SIZE,
                shuffle=True,
                epochs=epochs,
                verbose=0
            )
            if self.episode % 10 == 0:
                print('(episode, score) = ' + str((self.episode, self.episode_reward)))
            # Solved condition
            if len(self.scores) >= 110:
                if np.mean(self.scores[-100:]) >= 195.0:
                    print('Solved after ' + str(self.episode - 100) + ' episodes')
                    break
        plt.plot(self.scores)


if __name__ == '__main__':
    ag = Agent()
    ag.run()
import math
import numpy as np
import torch
from torch import nn as nn
from config import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from .helpers import build_model_with_cfg
from .layers import SelectiveKernelConv, ConvBnAct, create_attn
from .registry import register_model
from .resnet import ResNet
from .layers import Shiftlution
from cupy_layers.aggregation_zeropad import LocalConvolution
def _cfg(url='', **kwargs):
    return {
        'url': url,
        'num_classes': 1000, 'input_size': (3, 224, 224),
        'crop_pct': 0.875, 'interpolation': 'bicubic',
        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
        'classifier': 'fc',
        **kwargs
    }


default_cfgs = {
    'san19': _cfg(
        url=''),
}


def conv1x1(in_planes, out_planes, stride=1):
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class SAM(nn.Module):
    def __init__(self, in_planes, rel_planes, out_planes, share_planes, kernel_size=3, stride=1, dilation=1):
        super(SAM, self).__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.conv1 = nn.Conv2d(in_planes, rel_planes, kernel_size=1)
        self.conv2 = nn.Conv2d(in_planes, rel_planes, kernel_size=1)
        self.conv3 = nn.Conv2d(in_planes, out_planes, kernel_size=1)
        self.conv_w = nn.Sequential(
            nn.BatchNorm2d(rel_planes * (pow(kernel_size, 2) + 1)), nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * (pow(kernel_size, 2) + 1), out_planes // share_planes, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_planes // share_planes), nn.ReLU(inplace=True),
            nn.Conv2d(out_planes // share_planes, pow(kernel_size, 2) * out_planes // share_planes, kernel_size=1))
        self.unfold_j = nn.Unfold(kernel_size=kernel_size, dilation=dilation, padding=0, stride=stride)
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        # self.aggregation = Aggregation(kernel_size, stride, (dilation * (kernel_size - 1) + 1) // 2, dilation, pad_mode=1)
        self.local_conv = LocalConvolution(out_planes, out_planes, kernel_size=self.kernel_size, stride=1, padding=(self.kernel_size - 1) // 2, dilation=1)

    def forward(self, x):
        x1, x2, x3 = self.conv1(x), self.conv2(x), self.conv3(x)
        x2 = self.unfold_j(self.pad(x2)).view(x.shape[0], -1, x1.shape[2], x1.shape[3])
        w = self.conv_w(torch.cat([x1, x2], 1))
        w = w.view(x1.shape[0], -1, self.kernel_size * self.kernel_size, x1.shape[2], x1.shape[3])
        w = w.unsqueeze(1)
        # x = self.aggregation(x3, w)
        x = self.local_conv(x3, w)
        return x
class SAM_lowRank(nn.Module):
    def __init__(self, in_planes, rel_planes, out_planes, share_planes, kernel_size=3, stride=1, dilation=1):
        super(SAM_lowRank, self).__init__()
        self.rel_planes = rel_planes
        self.out_planes = out_planes
        self.kernel_size, self.stride = kernel_size, stride
        self.pool_size = min(512 // out_planes, 4)
        self.down = nn.AvgPool2d(self.pool_size, self.pool_size, padding=0) if self.pool_size > 1 else None
        self.unfold_j = nn.Unfold(kernel_size=kernel_size, dilation=dilation, padding=0, stride=stride)
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, out_planes + 2 * rel_planes, kernel_size=1, bias=False),
            # nn.BatchNorm2d(out_planes + rel_planes),
            # nn.ReLU(inplace=True),
        )
        self.key_embed = nn.Sequential(
            nn.BatchNorm2d(rel_planes * self.kernel_size * self.kernel_size),
            nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * self.kernel_size * self.kernel_size, rel_planes, 1, bias=False),
        )
        self.conv_w = nn.Sequential(
            nn.BatchNorm2d(rel_planes * 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * 2, out_planes * self.kernel_size * 2, kernel_size=1, bias=False)
        )
        self.local_conv = LocalConvolution(out_planes, out_planes, kernel_size=self.kernel_size, stride=1, padding=(self.kernel_size - 1) // 2, dilation=1)

    def forward(self, x):
        x = self.conv(x)
        q, k, x = torch.split(x, [self.rel_planes, self.rel_planes, self.out_planes], 1)
        x2 = self.unfold_j(self.pad(k))
        x2 = x2.view(x.shape[0], -1, x.shape[2], x.shape[3])
        x2 = self.key_embed(x2)
        qk = torch.cat([q, x2], 1)
        if self.pool_size > 1:
            qk = self.down(qk)
        b, c, qk_hh, qk_ww = qk.size()
        embed = self.conv_w(qk)
        embed_h, embed_w = torch.split(embed, embed.shape[1] // 2, dim=1)
        embed_h = embed_h.view(b, -1, self.kernel_size, 1, qk_hh, qk_ww)
        embed_w = embed_w.view(b, -1, 1, self.kernel_size, qk_hh, qk_ww)
        w = embed_h * embed_w
        w = w.view(x.shape[0], -1, self.kernel_size * self.kernel_size, qk_hh, qk_ww)
        if self.pool_size > 1:
            w = w.view(b, -1, self.kernel_size * self.kernel_size, qk_hh, 1, qk_ww, 1)
            w = w.expand(b, -1, self.kernel_size * self.kernel_size, qk_hh, self.pool_size, qk_ww, self.pool_size).contiguous()
            w = w.view(b, -1, self.kernel_size * self.kernel_size, x.shape[2], x.shape[3])
        w = w.unsqueeze(1)
        x = self.local_conv(x, w)
        return x
class Bottleneck(nn.Module):
    def __init__(self, in_planes, rel_planes, mid_planes, out_planes, share_planes=8, kernel_size=7, stride=1):
        super(Bottleneck, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.sam = SAM(in_planes, rel_planes, mid_planes, share_planes, kernel_size, stride)
        self.bn2 = nn.BatchNorm2d(mid_planes)
        self.conv = nn.Conv2d(mid_planes, out_planes, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.stride = stride

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(x))
        out = self.relu(self.bn2(self.sam(out)))
        out = self.conv(out)
        out += identity
        return out
class SAN(nn.Module):
    def __init__(self, in_chans, block, layers, kernels, num_classes, **kwargs):
        super(SAN, self).__init__()
        c = 64
        self.conv_in, self.bn_in = conv1x1(3, c), nn.BatchNorm2d(c)
        self.conv0, self.bn0 = conv1x1(c, c), nn.BatchNorm2d(c)
        self.layer0 = self._make_layer(block, c, layers[0], kernels[0])

        c *= 4
        self.conv1, self.bn1 = conv1x1(c // 4, c), nn.BatchNorm2d(c)
        self.layer1 = self._make_layer(block, c, layers[1], kernels[1])

        c *= 2
        self.conv2, self.bn2 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer2 = self._make_layer(block, c, layers[2], kernels[2])

        c *= 2
        self.conv3, self.bn3 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer3 = self._make_layer(block, c, layers[3], kernels[3])

        c *= 2
        self.conv4, self.bn4 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer4 = self._make_layer(block, c, layers[4], kernels[4])

        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(c, num_classes)

    def _make_layer(self, block, planes, blocks, kernel_size=7, stride=1):
        layers = []
        for _ in range(0, blocks):
            layers.append(block(planes, planes // 16, planes // 4, planes, 8, kernel_size, stride))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.relu(self.bn_in(self.conv_in(x)))
        x = self.relu(self.bn0(self.layer0(self.conv0(self.pool(x)))))
        x = self.relu(self.bn1(self.layer1(self.conv1(self.pool(x)))))
        x = self.relu(self.bn2(self.layer2(self.conv2(self.pool(x)))))
        x = self.relu(self.bn3(self.layer3(self.conv3(self.pool(x)))))
        x = self.relu(self.bn4(self.layer4(self.conv4(self.pool(x)))))
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
def _create_san(variant, pretrained=False, **kwargs):
    return build_model_with_cfg(
        SAN, variant, default_cfg=default_cfgs[variant], pretrained=pretrained, **kwargs)


@register_model
def san19(pretrained=False, **kwargs):
    # model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 3, 3, 3, 3], **kwargs)
    # model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 5, 5, 5, 5], **kwargs)
    model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 7, 7, 7, 7], **kwargs)
    return _create_san('san19', pretrained, **model_args)
from django.contrib import admin
from .models import Answer, AnswerOption, AnswerInstance, Question,\
    OpenEndedResponse, ClosedEndedQuestion

# Register your models here.


@admin.register(AnswerOption)
class AnswerOptionAdmin(admin.ModelAdmin):
    list_display = ('id', 'text')


@admin.register(Answer)
class AnswerAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'owner', 'correct_answer')


@admin.register(AnswerInstance)
class AnswerInstanceAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'student', 'question', 'answer_option', 'was_correct')

    def was_correct(self, obj):
        my_question = ClosedEndedQuestion.objects.get(pk=obj.question.pk)
        if obj.answer_option == my_question.answer.correct_answer:
            return True
        return False


def activate(modeladmin, request, queryset):
    for obj in queryset:
        obj.activate()


def deactivate(modeladmin, request, queryset):
    for obj in queryset:
        obj.deactivate()


@admin.register(Question)
class QuestionAdmin(admin.ModelAdmin):
    list_display = ('id', 'owner', 'text', 'active')
    actions = [activate, deactivate]


@admin.register(ClosedEndedQuestion)
class ClosedEndedQuestionAdmin(admin.ModelAdmin):
    list_display = ('id', 'owner', 'text', 'answer', 'active')
    actions = [activate, deactivate]
@admin.register(ClosedEndedQuestion)
class ClosedEndedQuestionAdmin(admin.ModelAdmin):
list_display = ('id', 'owner', 'text', 'answer', 'active')
actions = [activate, deactivate] | 29.348837 | 88 | 0.759113 | 142 | 1,262 | 6.65493 | 0.359155 | 0.068783 | 0.100529 | 0.137566 | 0.275132 | 0.245503 | 0.171429 | 0.093122 | 0 | 0 | 0 | 0 | 0.108558 | 1,262 | 43 | 89 | 29.348837 | 0.84 | 0.020602 | 0 | 0.129032 | 0 | 0 | 0.098785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.064516 | 0 | 0.612903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8e180bf8b4b157c9e27b0c8c553c612b8e2d1ec | 6,212 | py | Python | Bot/Cogs/jisho.py | No767/Rin-Bot | b4c64e0ebccc9465100006ec2cb023eecb425570 | [
"Apache-2.0"
] | null | null | null | Bot/Cogs/jisho.py | No767/Rin-Bot | b4c64e0ebccc9465100006ec2cb023eecb425570 | [
"Apache-2.0"
] | null | null | null | Bot/Cogs/jisho.py | No767/Rin-Bot | b4c64e0ebccc9465100006ec2cb023eecb425570 | [
"Apache-2.0"
] | null | null | null | import re
import discord
import requests
import ujson
from discord.ext import commands
from dotenv import load_dotenv
from jamdict import Jamdict
load_dotenv()
jam = Jamdict()
# Use array loop instead.
def kanjiv2(search):
    res = jam.lookup(search.replace("\n", " "))
    for c in res.chars:
        return str(c).replace("\n", " ")


def hiragana(search):
    result = jam.lookup(search)
    for word in result.entries:
        m = re.findall("[ぁ-ん]", str(word))
        r = str(m).replace("'", "").replace(",", "").replace(" ", "")
        return str(r)


def katakana(search):
    result = jam.lookup(search.replace("\n", " "))
    for entry in result.entries:
        m = re.findall("[ァ-ン]", str(entry))
        r = (
            str(m)
            .replace("[", " ")
            .replace("]", " ")
            .replace("'", " ")
            .replace(",", "")
            .replace(" ", "")
        )
        return str(r)
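The helpers above rely on the contiguous Unicode blocks for kana: `[ぁ-ん]` spans hiragana and `[ァ-ン]` spans most of katakana. A small stand-alone check with a hypothetical mixed string:

```python
import re

text = "猫のネコはカワイイ"  # kanji + hiragana particles + katakana words
hira = re.findall("[ぁ-ん]", text)  # picks out の and は
kata = re.findall("[ァ-ン]", text)  # picks out ネコ and カワイイ
```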
# Old kanji lookup system; use the function kanjiv2 instead.
def kanji(search):
    result = jam.lookup(search)
    result_search = result.text(separator=" | ", with_chars=False, no_id=True)
    m = re.findall(".[一-龯]", result_search)
    all_kanji = str(m).replace(",", "")[1:-1]
    all_kanjiv2 = all_kanji.replace("'", "").replace(" ", "").replace("", ", ")
    return all_kanjiv2


def searcher(search):
    result = jam.lookup(search)
    for word in result.entries:
        return str(word[4:10])


def better_hiragana(search):
    searcher(search)
def tag(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_tag = str(jisho["data"][0]["tags"])
    return jisho_tag.replace("[", " ").replace("]", " ").replace("'", " ")


def jlpt(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_jlpt = str(jisho["data"][0]["tags"])
    return jisho_jlpt.replace("[", " ").replace("]", " ").replace("'", " ")


def is_common(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jishov1 = str(jisho["data"][0]["is_common"])
    return jishov1.replace("[", " ").replace("]", " ")


def pos(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["parts_of_speech"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def see_also(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["see_also"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def antonyms(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["antonyms"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def links(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["links"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")
class jisho_dict(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command(name="jisho")
    async def jisho(self, ctx, search: str):
        try:
            result = jam.lookup(search)
            link = f"https://jisho.org/api/v1/search/words?keyword={search}"
            r = requests.get(link)
            jisho = ujson.loads(r.text)
            res = jam.lookup(search.replace("\n", " "))
            embedVar = discord.Embed()
            embedVar.add_field(
                name="Kanji",
                value=[str(c).replace("'", "") for c in res.chars],
                inline=False,
            )
            embedVar.add_field(
                name="Position of Speech (POS)", value=pos(search), inline=False
            )
            embedVar.add_field(name="Is Common?", value=is_common(search), inline=False)
            embedVar.add_field(
                name="Other Info",
                value=f"Tags >> {tag(search)}\nJLPT >> {jlpt(search)}\nAntonyms >> {antonyms(search)}\nSee Also >> {see_also(search)}\nLinks >> {links(search)}",
                inline=False,
            )
            embedVar.add_field(
                name="Attributions",
                value=f"JMDict >> {jisho['data'][0]['attribution']['jmdict']}\nJMNEDict >> {jisho['data'][0]['attribution']['jmnedict']}\nDBPedia >> {jisho['data'][0]['attribution']['dbpedia']}",
                inline=False,
            )
            embedVar.add_field(
                name="HTTP Status (Jisho API)",
                value=f"{jisho['meta']['status']}",
                inline=False,
            )
            embedVar.description = str([str(word[0]) for word in result.entries])
            await ctx.send(embed=embedVar)
        except Exception as e:
            embed_discord = discord.Embed()
            embed_discord.description = (
                f"An error has occurred. Please try again\nReason: {e}"
            )
            await ctx.send(embed=embed_discord)

    @jisho.error
    async def on_message_error(
        self, ctx: commands.Context, error: commands.CommandError
    ):
        if isinstance(error, commands.MissingRequiredArgument):
            embed_discord = discord.Embed()
            embed_discord.description = f"Missing a required argument: {error.param}"
            await ctx.send(embed=embed_discord)


def setup(bot):
    bot.add_cog(jisho_dict(bot))
#!/usr/bin/python3
import requests
import sys
import time
from wait_for_port_ready import wait_for_port_ready
import traceback
import json

action = sys.argv[1]
assert action in ('activate', 'deactivate', 'getStatus', 'shutdown')
url = 'http://localhost:12321/executor?action={action}'.format(action=action)

if action == 'getStatus':
    r = requests.post(url, timeout=5)
    assert r.status_code == 200
    assert json.loads(r.text)['isActive'] == 'true'
else:
    wait_for_port_ready(12321, 15)
    retries = 0
    retry_count = 15
    success = False
    while not success:
        try:
            r = requests.post(url, timeout=5)
            print(r.status_code)
            print(r.text)
            if r.json()['status'] == 'success':
                success = True
            if not success:
                raise Exception('Attempt to ' + action + ' executor failed')
        except Exception as ex:
            print(traceback.format_exc())
            sys.stdout.flush()
            retries += 1
            if retries > retry_count:
                raise Exception('Attempt to ' + action + ' executor failed')
            print('waiting for 1 second...')
            time.sleep(1)
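The retry loop above (post, check the status, sleep, give up after `retry_count` attempts) is a common pattern; factored into a reusable helper it could look like this (a sketch, not part of the original script):

```python
import time

def retry(fn, attempts=15, delay=1.0):
    """Call fn until it returns without raising; re-raise after `attempts` tries."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Demo with a function that fails twice before succeeding.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("executor not ready")
    return "success"

result = retry(flaky, attempts=5, delay=0.0)  # succeeds on the third call
```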
# -*- coding: utf-8 -*-
"""HTTP client."""
import logging
import warnings
from urllib.parse import urlparse, urlunparse
import requests
from .constants import (
LOG_LEVEL_HTTP,
MAX_BODY_LEN,
REQUEST_ATTR_MAP,
RESPONSE_ATTR_MAP,
TIMEOUT_CONNECT,
TIMEOUT_RESPONSE,
)
from .exceptions import HttpError
from .logs import get_obj_log, set_log_level
from .tools import join_url, json_reload, listify, path_read
from .version import __version__
InsecureRequestWarning = requests.urllib3.exceptions.InsecureRequestWarning
class Http:
"""HTTP client wrapper around :obj:`requests.Session`."""
def __init__(
self,
url,
connect_timeout=TIMEOUT_CONNECT,
response_timeout=TIMEOUT_RESPONSE,
certpath=None,
certwarn=True,
certverify=False,
cert_client_both=None,
cert_client_cert=None,
cert_client_key=None,
http_proxy=None,
https_proxy=None,
save_last=True,
save_history=False,
log_level=LOG_LEVEL_HTTP,
log_level_urllib="warning",
log_request_attrs=None,
log_response_attrs=None,
log_request_body=False,
log_response_body=False,
):
"""HTTP client wrapper around :obj:`requests.Session`.
Notes:
* If certpath is supplied, certverify is ignored
* private key supplied to cert_client_key or cert_client_both
can **NOT** be password encrypted
Args:
url (:obj:`str` or :obj:`ParserUrl`): URL to connect to
connect_timeout (:obj:`int`, optional):
default :data:`TIMEOUT_CONNECT` - seconds to
wait for connections to open to :attr:`url`
response_timeout (:obj:`int`, optional):
default :data:`TIMEOUT_RESPONSE` - seconds to
wait for responses from :attr:`url`
certpath (:obj:`str` or :obj:`pathlib.Path`, optional): default ``None`` -
path to CA bundle file to use when verifying certs offered by :attr:`url`
instead of the system CA bundle
certwarn (:obj:`bool`, optional): default ``True`` - show warnings from
requests about certs offered by :attr:`url` that are self signed:
* if ``True`` show warning only the first time it happens
* if ``False`` never show warning
* if ``None`` show warning every time it happens
certverify (:obj:`bool`, optional): default ``False`` - control
validation of certs offered by :attr:`url`:
* if ``True`` raise exception if cert is invalid/self-signed
* if ``False`` only raise exception if cert is invalid
cert_client_both (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to cert file containing both the private key and
cert to offer to :attr:`url`
cert_client_cert (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to cert file to offer to :attr:`url`
(*must also supply cert_client_key*)
cert_client_key (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to private key file for cert_client_cert to
offer to :attr:`url` (*must also supply cert_client_cert*)
http_proxy (:obj:`str`, optional): default ``None`` - proxy to use
when making http requests to :attr:`url`
https_proxy (:obj:`str`, optional): default ``None`` - proxy to use
when making https requests to :attr:`url`
save_last (:obj:`bool`, optional): default ``True`` -
* if ``True`` save request to :attr:`LAST_REQUEST` and
response to :attr:`LAST_RESPONSE`
* if ``False`` do not save request to :attr:`LAST_REQUEST` and
response to :attr:`LAST_RESPONSE`
save_history (:obj:`bool`, optional): default ``False`` -
* if ``True`` append responses to :attr:`HISTORY`
* if ``False`` do not append responses to :attr:`HISTORY`
log_level (:obj:`str`):
default :data:`axonius_api_client.LOG_LEVEL_HTTP` -
logging level to use for this objects logger
log_level_urllib (:obj:`str`): default ``"warning"`` -
logging level to use for this urllib logger
log_request_attrs (:obj:`bool`): default ``None`` - control logging
of request attributes:
* if ``True``, log request attributes defined in
:data:`axonius_api_client.LOG_REQUEST_ATTRS_VERBOSE`
* if ``False``, log request attributes defined in
:data:`axonius_api_client.LOG_REQUEST_ATTRS_BRIEF`
* if ``None``, do not log any request attributes
log_response_attrs (:obj:`bool`): default ``None`` - control logging
of response attributes:
* if ``True``, log response attributes defined in
:data:`axonius_api_client.LOG_RESPONSE_ATTRS_VERBOSE`
* if ``False``, log response attributes defined in
:data:`axonius_api_client.LOG_RESPONSE_ATTRS_BRIEF`
* if ``None``, do not log any response attributes
log_request_body (:obj:`bool`): default ``False`` - control logging
of request bodies:
* if ``True``, log request bodies
* if ``False``, do not log request bodies
log_response_body (:obj:`bool`): default ``False`` - control logging
of response bodies:
* if ``True``, log response bodies
* if ``False``, do not log response bodies
Raises:
:exc:`HttpError`: if either cert_client_cert or cert_client_key
are supplied, and the other is not supplied
:exc:`HttpError`: if any of cert_path, cert_client_cert,
cert_client_key, or cert_client_both are supplied and the file does
not exist
"""
self.LOG = get_obj_log(obj=self, level=log_level)
""":obj:`logging.Logger`: Logger for this object."""
if isinstance(url, ParserUrl):
self.URLPARSED = url
else:
self.URLPARSED = ParserUrl(url=url, default_scheme="https")
self.url = self.URLPARSED.url
""":obj:`str`: URL to connect to"""
self.LAST_REQUEST = None
""":obj:`requests.PreparedRequest`: last request sent"""
self.LAST_RESPONSE = None
""":obj:`requests.Response`: last response received"""
self.HISTORY = []
""":obj:`list` of :obj:`requests.Response`: all responses received."""
self.SAVE_LAST = save_last
""":obj:`bool`: save requests to :attr:`LAST_REQUEST` and responses
to :attr:`LAST_RESPONSE`"""
self.SAVEHISTORY = save_history
""":obj:`bool`: Append all responses to :attr:`HISTORY`"""
self.CONNECT_TIMEOUT = connect_timeout
""":obj:`int`: seconds to wait for connections to open to :attr:`url`"""
self.RESPONSE_TIMEOUT = response_timeout
""":obj:`int`: seconds to wait for responses from :attr:`url`"""
self.session = requests.Session()
""":obj:`requests.Session`: session object to use"""
self.LOG_REQUEST_BODY = log_request_body
""":obj:`bool`: Log the full request body."""
self.LOG_RESPONSE_BODY = log_response_body
""":obj:`bool`: Log the full response body."""
self.log_request_attrs = log_request_attrs
self.log_response_attrs = log_response_attrs
self.session.proxies = {}
self.session.proxies["https"] = https_proxy
self.session.proxies["http"] = http_proxy
if certpath:
path_read(obj=certpath, binary=True)
self.session.verify = certpath
else:
self.session.verify = certverify
if cert_client_both:
path_read(obj=cert_client_both, binary=True)
self.session.cert = str(cert_client_both)
elif cert_client_cert or cert_client_key:
if not all([cert_client_cert, cert_client_key]):
error = (
"You must supply both a 'cert_client_cert' and 'cert_client_key'"
" or use 'cert_client_both'!"
)
raise HttpError(error)
path_read(obj=cert_client_cert, binary=True)
path_read(obj=cert_client_key, binary=True)
self.session.cert = (str(cert_client_cert), str(cert_client_key))
if certwarn is True:
warnings.simplefilter("once", InsecureRequestWarning)
elif certwarn is False:
warnings.simplefilter("ignore", InsecureRequestWarning)
urllog = logging.getLogger("urllib3.connectionpool")
set_log_level(obj=urllog, level=log_level_urllib)
def __call__(
self,
path=None,
route=None,
method="get",
data=None,
params=None,
headers=None,
json=None,
files=None,
# fmt: off
**kwargs
# fmt: on
):
"""Create, prepare, and then send a request using :attr:`session`.
Args:
path (:obj:`str`, optional): default ``None`` - path to append to
:attr:`url`
route (:obj:`str`, optional): default ``None`` - route to append to
:attr:`url`
method (:obj:`str`, optional): default ``"get"`` - method to use
data (:obj:`str`, optional): default ``None`` - body to send
params (:obj:`dict`, optional): default ``None`` - parameters to url encode
headers (:obj:`dict`, optional): default ``None`` - headers to send
json (:obj:`dict`, optional): default ``None`` - obj to encode as json
files (:obj:`tuple` of :obj:`tuple`, optional): default ``None`` - files to
send
**kwargs:
overrides for object attributes
* connect_timeout (:obj:`int`): default :attr:`CONNECT_TIMEOUT` -
seconds to wait for connection to open to :attr:`url`
* response_timeout (:obj:`int`): default :attr:`RESPONSE_TIMEOUT` -
seconds to wait for response from :attr:`url`
* proxies (:obj:`dict`): default ``None`` -
use custom proxies instead of proxies defined in :attr:`session`
* verify (:obj:`bool` or :obj:`str`): default ``None`` - use custom
verification of cert offered by :attr:`url` instead of verification
defined in :attr:`session`
* cert (:obj:`str`): default ``None`` - use custom
client cert to offer to :attr:`url` cert defined in :attr:`session`
Returns:
:obj:`requests.Response`: raw response object
"""
url = join_url(self.url, path, route)
headers = headers or {}
headers.setdefault("User-Agent", self.user_agent)
request = requests.Request(
url=url,
method=method,
data=data,
headers=headers,
params=params,
json=json,
files=files or [],
)
prepped_request = self.session.prepare_request(request=request)
prepped_request.body_size = len(prepped_request.body or "")
if self.SAVE_LAST:
self.LAST_REQUEST = prepped_request
if self.log_request_attrs:
lattrs = ", ".join(self.log_request_attrs).format(request=prepped_request)
self.LOG.debug(f"REQUEST ATTRS: {lattrs}")
send_args = self.session.merge_environment_settings(
url=prepped_request.url,
proxies=kwargs.get("proxies", {}),
stream=kwargs.get("stream", None),
verify=kwargs.get("verify", None),
cert=kwargs.get("cert", None),
)
send_args["request"] = prepped_request
send_args["timeout"] = (
kwargs.get("connect_timeout", self.CONNECT_TIMEOUT),
kwargs.get("response_timeout", self.RESPONSE_TIMEOUT),
)
if self.LOG_REQUEST_BODY:
self.log_body(body=prepped_request.body, body_type="REQUEST")
response = self.session.send(**send_args)
response.body_size = len(response.text or "")
if self.SAVE_LAST:
self.LAST_RESPONSE = response
if self.SAVEHISTORY:
self.HISTORY.append(response)
if self.log_response_attrs:
lattrs = ", ".join(self.log_response_attrs).format(response=response)
self.LOG.debug(f"RESPONSE ATTRS: {lattrs}")
if self.LOG_RESPONSE_BODY:
self.log_body(body=response.text, body_type="RESPONSE")
return response
def __str__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return "{c.__module__}.{c.__name__}(url={url!r})".format(
c=self.__class__, url=self.url
)
def __repr__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return self.__str__()
@property
def user_agent(self):
"""Value to use in User-Agent header.
Returns:
:obj:`str`: user agent string
"""
return f"{__name__}.{self.__class__.__name__}/{__version__}"
@property
def log_request_attrs(self):
"""Get the request attributes that should be logged."""
return self._get_log_attrs("request")
@log_request_attrs.setter
def log_request_attrs(self, value):
"""Set the request attributes that should be logged."""
attr_map = REQUEST_ATTR_MAP
attr_type = "request"
self._set_log_attrs(attr_map=attr_map, attr_type=attr_type, value=value)
@property
def log_response_attrs(self):
"""Get the response attributes that should be logged."""
return self._get_log_attrs("response")
@log_response_attrs.setter
def log_response_attrs(self, value):
"""Set the response attributes that should be logged."""
attr_map = RESPONSE_ATTR_MAP
attr_type = "response"
self._set_log_attrs(attr_map=attr_map, attr_type=attr_type, value=value)
def _get_log_attrs(self, attr_type):
return getattr(self, "_LOG_ATTRS", {}).get(attr_type, [])
def _set_log_attrs(self, attr_map, attr_type, value):
if not hasattr(self, "_LOG_ATTRS"):
self._LOG_ATTRS = {"response": [], "request": []}
value = [x.lower().strip() for x in listify(value)]
if not value:
self._LOG_ATTRS[attr_type] = []
return
log_attrs = self._LOG_ATTRS[attr_type]
if "all" in value:
for k, v in attr_map.items():
entry = f"{k}={v}"
if entry not in log_attrs:
log_attrs.append(entry)
return
for item in value:
if item in attr_map:
value = attr_map[item]
entry = f"{item}={value}"
if entry not in log_attrs:
log_attrs.append(entry)
def log_body(self, body, body_type):
"""Pass."""
body = body or ""
body = json_reload(obj=body, error=False, trim=MAX_BODY_LEN)
self.LOG.debug(f"{body_type} BODY:\n{body}")
class ParserUrl:
"""Parse a URL and ensure it has the neccessary bits."""
def __init__(self, url, default_scheme="https"):
"""Parse a URL and ensure it has the neccessary bits.
Args:
url (:obj:`str`): URL to parse
default_scheme (:obj:`str`, optional): default ``"https"`` - default
scheme to use if url does not contain a scheme
Raises:
:exc:`HttpError`:
if parsed URL winds up without a hostname, port, or scheme.
"""
self._init_url = url
""":obj:`str`: initial URL provided"""
self._init_scheme = default_scheme
""":obj:`str`: default scheme provided"""
self._init_parsed = urlparse(url)
""":obj:`urllib.parse.ParseResult`: first pass of parsing URL"""
self.parsed = self.reparse(
parsed=self._init_parsed, default_scheme=default_scheme
)
""":obj:`urllib.parse.ParseResult`: second pass of parsing URL"""
for part in ["hostname", "port", "scheme"]:
if not getattr(self.parsed, part, None):
error = (
f"Parsed URL into {self.parsed_str!r} and no {part!r} provided"
f" in URL {url!r}"
)
raise HttpError(error)
def __str__(self):
"""Show object info.
Returns:
:obj:`str`
"""
cls = self.__class__
return f"{cls.__module__}.{cls.__name__}({self.parsed_str})"
def __repr__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return self.__str__()
@property
def hostname(self):
"""Hostname part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: hostname value
"""
return self.parsed.hostname
@property
def port(self):
"""Port part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`int`: port value
"""
return int(self.parsed.port)
@property
def scheme(self):
"""Scheme part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: scheme value
"""
return self.parsed.scheme
@property
def url(self):
"""Get scheme, hostname, and port from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: schema, hostname, and port unparsed values
"""
return self.unparse_base(parsed_result=self.parsed)
@property
def url_full(self):
"""Get full URL from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: full unparsed url
"""
return self.unparse_all(parsed_result=self.parsed)
@property
def parsed_str(self):
"""Get a str value of :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: str value of :attr:`ParserUrl.parsed`
"""
parsed = getattr(self, "parsed", None)
attrs = [
"scheme",
"netloc",
"hostname",
"port",
"path",
"params",
"query",
"fragment",
]
atmpl = "{a}={v!r}".format
attrs = [atmpl(a=a, v="{}".format(getattr(parsed, a, "")) or "") for a in attrs]
return ", ".join(attrs)
def make_netloc(self, host, port):
"""Create netloc from host and port.
Args:
host (:obj:`str`): host part to use in netloc
port (:obj:`str`): port part to use in netloc
Returns:
:obj:`str`: host and port values joined by :
"""
return ":".join([x for x in [host, port] if x])
def reparse(self, parsed, default_scheme=""):
"""Reparse a parsed URL into a parsed URL with values fixed.
Args:
parsed (:obj:`urllib.parse.ParseResult`): parsed URL to reparse
default_scheme (:obj:`str`, optional): default ``""`` -
default scheme to use if URL does not contain a scheme
Returns:
:obj:`urllib.parse.ParseResult`: reparsed result
"""
scheme, netloc, path, params, query, fragment = parsed
host = parsed.hostname
port = format(parsed.port or "")
if not netloc and scheme and path and path.split("/")[0].isdigit():
"""For case:
>>> urllib.parse.urlparse('host:443/')
ParseResult(
scheme='host', netloc='', path='443/', params='', query='', fragment=''
)
"""
host = scheme # switch host from scheme to host
port = path.split("/")[0] # remove / from path and assign to port
path = "" # empty out path
scheme = default_scheme
netloc = ":".join([host, port])
if not netloc and path:
"""For cases:
>>> urllib.parse.urlparse('host:443')
ParseResult(
scheme='', netloc='', path='host:443', params='', query='', fragment=''
)
>>> urllib.parse.urlparse('host')
ParseResult(
scheme='', netloc='', path='host', params='', query='', fragment=''
)
"""
netloc, path = path, netloc
if ":" in netloc: # pragma: no cover
# can't get this to trigger anymore, ignore test coverage
host, port = netloc.split(":", 1)
netloc = ":".join([host, port]) if port else host
else:
host = netloc
scheme = scheme or default_scheme
if not scheme and port:
if format(port) == "443":
scheme = "https"
elif format(port) == "80":
scheme = "http"
if not port:
if scheme == "https":
netloc = self.make_netloc(host, "443")
elif scheme == "http":
netloc = self.make_netloc(host, "80")
pass2 = urlunparse((scheme, netloc, path, params, query, fragment))
return urlparse(pass2)
def unparse_base(self, parsed_result):
"""Unparse a parsed URL into just the scheme, hostname, and port parts.
Args:
parsed (:obj:`urllib.parse.ParseResult`): parsed URL to unparse
Returns:
:obj:`str`: unparsed url
"""
# only unparse self.parsed into url with scheme and netloc
bits = (parsed_result.scheme, parsed_result.netloc, "", "", "", "")
return urlunparse(bits)
def unparse_all(self, parsed_result):
"""Unparse a parsed URL with all the parts.
Args:
parsed (:obj:`urllib.parse.ParseResult`): parsed URL to unparse
Returns:
:obj:`str`: unparsed url
"""
return urlunparse(parsed_result)
| 35.479495 | 88 | 0.563973 | 2,584 | 22,494 | 4.744582 | 0.117647 | 0.017618 | 0.020147 | 0.013703 | 0.38385 | 0.296982 | 0.24739 | 0.179038 | 0.143638 | 0.129201 | 0 | 0.001963 | 0.32053 | 22,494 | 633 | 89 | 35.535545 | 0.800183 | 0.375834 | 0 | 0.128114 | 0 | 0 | 0.067447 | 0.014706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088968 | false | 0.007117 | 0.032028 | 0.003559 | 0.202847 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
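`ParserUrl.reparse` above compensates for `urlparse`'s handling of scheme-less inputs such as `'host:443'`, where the host lands in the scheme slot and the port in the path. A minimal stdlib sketch of that fix-up (`normalize_url` is an illustrative name, not part of the library):

```python
from urllib.parse import urlparse, urlunparse


def normalize_url(url, default_scheme="https"):
    """Normalize scheme-less URLs the way ParserUrl.reparse does.

    Hedged sketch, not the library code: handles the two quirky cases
    'host:443' (host parsed as scheme, port as path) and bare 'host'
    (everything parsed as path).
    """
    parsed = urlparse(url)
    scheme, netloc, path = parsed.scheme, parsed.netloc, parsed.path
    # urlparse('host:443') -> scheme='host', netloc='', path='443'
    if not netloc and scheme and path and path.split("/")[0].isdigit():
        netloc = scheme + ":" + path.split("/")[0]
        scheme, path = default_scheme, ""
    elif not netloc and path:
        # urlparse('host') -> scheme='', netloc='', path='host'
        netloc, path = path, ""
        scheme = scheme or default_scheme
    return urlunparse((scheme, netloc, path, "", "", ""))


print(normalize_url("host:443"))  # -> https://host:443
print(normalize_url("host"))      # -> https://host
```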
d8edf7cbcf7cedddc71ad9cf461c4f588b745f8c | 427 | py | Python | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | from fpdf import FPDF
import os
pdf = FPDF()
pdf.add_page()
#pdf.add_font('CMUSerif-UprightItalic', fname=os.path.abspath('./src/Fonts/Computer Modern/cmunui.ttf'), uni=True)
#pdf.set_font('CMUSerif-UprightItalic', size=16)
pdf.add_font('BerlinSansFB-Bold', fname='C:\\Windows\\Fonts\\VINERITC.TTF', uni=True)
pdf.set_font('BerlinSansFB-Bold')
pdf.cell(40, 10, "Hello World! (It's a great day today!)")
pdf.output("test.pdf")
| 35.583333 | 114 | 0.735363 | 69 | 427 | 4.478261 | 0.608696 | 0.058252 | 0.064725 | 0.084142 | 0.12945 | 0.12945 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.0726 | 427 | 11 | 115 | 38.818182 | 0.765152 | 0.374707 | 0 | 0 | 0 | 0 | 0.422642 | 0.120755 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8f564b8365eed4a07a4dd31237eb8da98838a5f | 3,064 | py | Python | docs/talks/xdc2016/compare_cairo.py | juhapekka/ezbench_work | ac0cb9ccbc205746d4790a9e33e598fbd5732741 | [
"BSD-3-Clause"
] | 3 | 2019-06-25T16:49:25.000Z | 2021-04-30T06:36:54.000Z | docs/talks/xdc2016/compare_cairo.py | juhapekka/ezbench_work | ac0cb9ccbc205746d4790a9e33e598fbd5732741 | [
"BSD-3-Clause"
] | 4 | 2019-12-10T00:50:49.000Z | 2022-03-10T06:18:42.000Z | docs/talks/xdc2016/compare_cairo.py | juhapekka/ezbench_work | ac0cb9ccbc205746d4790a9e33e598fbd5732741 | [
"BSD-3-Clause"
] | 1 | 2021-04-30T06:36:36.000Z | 2021-04-30T06:36:36.000Z | #!/usr/bin/env python3
"""
Copyright (c) 2015, Intel Corporation
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
# Import ezbench from the utils/ folder
ezbench_dir = os.path.abspath(sys.path[0]+'/../')
sys.path.append(ezbench_dir+'/utils/')
sys.path.append(ezbench_dir+'/utils/env_dump')
from ezbench import *
from env_dump_parser import *
if __name__ == "__main__":
import argparse
# parse the options
parser = argparse.ArgumentParser()
parser.add_argument("log_folder")
args = parser.parse_args()
report = Report(args.log_folder, silentMode=True)
report.enhance_report([])
print("Test name, cairo image perf, xlib perf, cairo image energy, xlib energy")
for result in report.commits[0].results:
test_name = result.test.full_name
if not test_name.startswith("x11:cairo:xlib:"):
continue
img_res = report.find_result_by_name(report.commits[0], test_name.replace("x11:cairo:xlib:", "x11:cairo:image:"))
if img_res is None:
img_res = report.find_result_by_name(report.commits[0], test_name.replace("x11:cairo:xlib:", "x11:cairo:ximage:"))
test_name = test_name.replace(":xlib:", ':')
if img_res is None:
print("could not find the cpu result for test '{}'".format(test_name))
perf_cpu = img_res.result().mean()
perf_gpu = result.result().mean()
pwr_cpu = img_res.result("metric_rapl0.package-0:energy").mean()
pwr_gpu = result.result("metric_rapl0.package-0:energy").mean()
print("{},{},{},{},{}".format(test_name, perf_cpu, perf_gpu, pwr_cpu, pwr_gpu))
| 41.972603 | 126 | 0.72748 | 437 | 3,064 | 4.98627 | 0.409611 | 0.033043 | 0.019275 | 0.021111 | 0.245067 | 0.190913 | 0.165213 | 0.133089 | 0.133089 | 0.133089 | 0 | 0.009182 | 0.182441 | 3,064 | 72 | 127 | 42.555556 | 0.860679 | 0.514687 | 0 | 0.066667 | 0 | 0 | 0.21327 | 0.039269 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
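The xlib-to-image name mapping used by compare_cairo.py can be isolated as a tiny helper; `image_counterparts` is an illustrative name, not part of the script, which tries the `image:` prefix first and falls back to `ximage:`:

```python
def image_counterparts(test_name):
    """Return the candidate CPU-rendering names for an xlib test, in the
    lookup order used by compare_cairo.py (illustrative helper)."""
    if not test_name.startswith("x11:cairo:xlib:"):
        return []
    return [test_name.replace("x11:cairo:xlib:", prefix)
            for prefix in ("x11:cairo:image:", "x11:cairo:ximage:")]


print(image_counterparts("x11:cairo:xlib:gvim"))
```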
d8f89ca57ebf1d8154f7f2629edeea9594a44b41 | 9,541 | py | Python | generator/blocks/write_back/base/memory_block_base.py | biarmic/OpenCache | bb9e110e434deb83900de328cc76b63901ba582f | [
"BSD-3-Clause"
] | null | null | null | generator/blocks/write_back/base/memory_block_base.py | biarmic/OpenCache | bb9e110e434deb83900de328cc76b63901ba582f | [
"BSD-3-Clause"
] | null | null | null | generator/blocks/write_back/base/memory_block_base.py | biarmic/OpenCache | bb9e110e434deb83900de328cc76b63901ba582f | [
"BSD-3-Clause"
] | null | null | null | # See LICENSE for licensing information.
#
# Copyright (c) 2021 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#
from block_base import block_base
from amaranth import Cat, C
from state import state


class memory_block_base(block_base):
    """
    This is the base class of memory controller always block modules.
    Methods of this class can be overridden for specific implementation of
    different cache designs.

    In this block, cache communicates with memory components such as tag array,
    data array, use array, and DRAM.
    """

    def __init__(self):
        super().__init__()

    def add_reset(self, c, m):
        """ Add statements for the RESET state. """
        # In the RESET state, cache sends write request to the tag array to reset
        # the current set.
        # set register is incremented by the Request Block.
        # When set register reaches the end, state switches to IDLE.
        with m.Case(state.RESET):
            c.tag_array.write(c.set, 0)
            c.data_array.write(c.set, 0)

    def add_flush(self, c, m):
        """ Add statements for the FLUSH state. """
        # In the FLUSH state, cache sends write request to DRAM.
        # set register is incremented by the Request Block.
        # way register is incremented by the Replacement Block.
        # When set and way registers reach the end, state switches to IDLE.
        with m.Case(state.FLUSH):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            with m.Switch(c.way):
                for i in range(c.num_ways):
                    with m.Case(i):
                        # Check if current set is clean or DRAM is available,
                        # and all ways of the set are checked
                        if i == c.num_ways - 1:
                            with m.If(~c.tag_array.output().dirty(i) | ~c.dram.stall()):
                                # Request the next tag and data lines from SRAMs
                                c.tag_array.read(c.set + 1)
                                c.data_array.read(c.set + 1)
                        # Check if current set is dirty and DRAM is available
                        with m.If(c.tag_array.output().dirty(i) & ~c.dram.stall()):
                            # Update dirty bits in the tag line
                            c.tag_array.write(c.set, Cat(c.tag_array.output().tag(i), C(2, 2)), i)
                            # Send the write request to DRAM
                            c.dram.write(Cat(c.set, c.tag_array.output().tag(i)), c.data_array.output(i))

    def add_idle(self, c, m):
        """ Add statements for the IDLE state. """
        # In the IDLE state, cache waits for CPU to send a new request.
        # Until there is a new request from the cache, stall is low.
        # When there is a new request from the cache stall is asserted, request
        # is decoded and corresponding tag, data, and use array lines are read
        # from internal SRAMs.
        with m.Case(state.IDLE):
            # Read next lines from SRAMs even though CPU is not sending a new
            # request since read is non-destructive.
            c.tag_array.read(c.addr.parse_set())
            c.data_array.read(c.addr.parse_set())

    def add_compare(self, c, m):
        """ Add statements for the COMPARE state. """
        # In the COMPARE state, cache compares tags.
        with m.Case(state.COMPARE):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            # Assuming that current request is miss, check if it is dirty miss
            with c.check_dirty_miss(m):
                # If DRAM is available, switch to WAIT_WRITE and wait for DRAM to
                # complete writing
                with m.If(~c.dram.stall()):
                    c.dram.write(Cat(c.set, c.tag_array.output().tag()), c.data_array.output())
            # Else, assume that current request is clean miss
            with c.check_clean_miss(m):
                # If DRAM is busy, switch to READ and wait for DRAM to be available
                # If DRAM is available, switch to WAIT_READ and wait for DRAM to
                # complete reading
                with m.If(~c.dram.stall()):
                    c.dram.read(Cat(c.set, c.tag))
            # Check if current request is hit
            with c.check_hit(m):
                # Set DRAM's csb to 1 again since it could be set 0 above
                c.dram.disable()
                # Perform the write request
                with m.If(~c.web_reg):
                    # Update dirty bit
                    c.tag_array.write(c.set, Cat(c.tag, C(3, 2)))
                    # Perform write request
                    c.data_array.write(c.set, c.data_array.output())
                    c.data_array.write_input(0, c.offset, c.din_reg, c.wmask_reg if c.num_masks else None)
                # Read next lines from SRAMs even though the CPU is not sending
                # a new request since read is non-destructive.
                c.tag_array.read(c.addr.parse_set())
                c.data_array.read(c.addr.parse_set())

    def add_write(self, c, m):
        """ Add statements for the WRITE state. """
        # In the WRITE state, cache waits for DRAM to be available.
        # When DRAM is available, write request is sent.
        with m.Case(state.WRITE):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            # If DRAM is busy, wait in this state.
            # If DRAM is available, switch to WAIT_WRITE and wait for DRAM to
            # complete writing.
            with m.If(~c.dram.stall()):
                with m.Switch(c.way):
                    for i in range(c.num_ways):
                        with m.Case(i):
                            c.dram.write(Cat(c.set, c.tag_array.output().tag(c.way)), c.data_array.output(i))

    def add_wait_write(self, c, m):
        """ Add statements for the WAIT_WRITE state. """
        # In the WAIT_WRITE state, cache waits for DRAM to complete writing.
        # When DRAM completes writing, read request is sent.
        with m.Case(state.WAIT_WRITE):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            # If DRAM is busy, wait in this state.
            # If DRAM completes writing, switch to WAIT_READ and wait for DRAM to
            # complete reading.
            with m.If(~c.dram.stall()):
                c.dram.read(Cat(c.set, c.tag))

    def add_read(self, c, m):
        """ Add statements for the READ state. """
        # In the READ state, cache waits for DRAM to be available.
        # When DRAM is available, read request is sent.
        # TODO: Is this state really necessary? WAIT_WRITE state may be used instead
        with m.Case(state.READ):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            # If DRAM is busy, wait in this state.
            # If DRAM completes writing, switch to WAIT_READ and wait for DRAM to
            # complete reading.
            with m.If(~c.dram.stall()):
                c.dram.read(Cat(c.set, c.tag))

    def add_wait_read(self, c, m):
        """ Add statements for the WAIT_READ state. """
        # In the WAIT_READ state, cache waits for DRAM to complete reading.
        # When DRAM completes reading, request is completed.
        with m.Case(state.WAIT_READ):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)
            # If DRAM is busy, cache waits in this state.
            # If DRAM completes reading, cache switches to:
            #   IDLE if CPU isn't sending a new request
            #   COMPARE if CPU is sending a new request
            with m.If(~c.dram.stall()):
                # Update tag line
                c.tag_array.write(c.set, Cat(c.tag, ~c.web_reg, C(1, 1)), c.way)
                # Update data line
                c.data_array.write(c.set, c.dram.output(), c.way)
                # Perform the write request
                with m.If(~c.web_reg):
                    c.data_array.write_input(c.way, c.offset, c.din_reg, c.wmask_reg if c.num_masks else None)
                # Read next lines from SRAMs even though the CPU is not sending
                # a new request since read is non-destructive
                c.tag_array.read(c.addr.parse_set())
                c.data_array.read(c.addr.parse_set())

    def add_flush_hazard(self, c, m):
        """ Add statements for the FLUSH_HAZARD state. """
        # In the FLUSH_HAZARD state, cache waits in this state for 1 cycle.
        # Read requests are sent to tag and data arrays.
        with m.Case(state.FLUSH_HAZARD):
            c.tag_array.read(0)
            c.data_array.read(0)

    def add_wait_hazard(self, c, m):
        """ Add statements for the WAIT_HAZARD state. """
        # In the WAIT_HAZARD state, cache waits in this state for 1 cycle.
        # Read requests are sent to tag and data arrays.
        with m.Case(state.WAIT_HAZARD):
            c.tag_array.read(c.set)
            c.data_array.read(c.set)

    def add_flush_sig(self, c, m):
        """ Add flush signal control. """
        # If flush is high, state switches to FLUSH.
        # In the FLUSH state, cache will write all data lines back to DRAM.
        with m.If(c.flush):
            c.tag_array.read(0)
c.data_array.read(0) | 42.977477 | 110 | 0.570276 | 1,400 | 9,541 | 3.802857 | 0.132857 | 0.021788 | 0.038881 | 0.039068 | 0.633546 | 0.584711 | 0.54846 | 0.480278 | 0.422239 | 0.422239 | 0 | 0.003808 | 0.339482 | 9,541 | 222 | 111 | 42.977477 | 0.841003 | 0.440625 | 0 | 0.455556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004505 | 0 | 1 | 0.133333 | false | 0 | 0.033333 | 0 | 0.177778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
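The miss-handling flow the comments in memory_block_base describe (a dirty miss writes the victim back before reading the new line; a clean miss reads immediately; DRAM stalls hold the controller in place) can be modeled as a plain-Python state machine. This is a software sketch of the documented transitions, not the RTL, and `next_state` is an illustrative name:

```python
from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    COMPARE = auto()
    WRITE = auto()
    WAIT_WRITE = auto()
    READ = auto()
    WAIT_READ = auto()


def next_state(state, hit, dirty, dram_stall):
    """One step of the write-back miss path described in the comments."""
    if state is State.COMPARE:
        if hit:
            return State.IDLE  # request completes in place
        if dirty:
            # Dirty miss: write the victim line back first.
            return State.WRITE if dram_stall else State.WAIT_WRITE
        # Clean miss: go straight to reading the missing line.
        return State.READ if dram_stall else State.WAIT_READ
    if state is State.WRITE and not dram_stall:
        return State.WAIT_WRITE
    if state is State.WAIT_WRITE and not dram_stall:
        return State.WAIT_READ   # write-back done, issue the read
    if state is State.READ and not dram_stall:
        return State.WAIT_READ
    if state is State.WAIT_READ and not dram_stall:
        return State.IDLE        # line refilled, request completes
    return state                 # DRAM stall: hold the current state


print(next_state(State.COMPARE, hit=False, dirty=True, dram_stall=False))  # -> State.WAIT_WRITE
```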
# File: combinatorics/p11069.py (repo: sajjadt/competitive-programming, license: MIT)
# f(n) = number of valid sequences with n items
# f(n) = {"attaching n to"} f(n-2) + {"attaching n-1 to "} f(n-3)
LIMIT = 76 + 1
f_table = [0, 1, 2, 2]
for i in range(LIMIT):
f_table.append(f_table[-2] + f_table[-3])
while True:
try:
n = int(input())
print(f_table[n])
    except EOFError:
break
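The recurrence can be sanity-checked on its own; the first few values past the base cases follow directly from f(n) = f(n-2) + f(n-3):

```python
# Standalone sanity check of the recurrence f(n) = f(n-2) + f(n-3),
# with base cases f(1) = 1, f(2) = 2, f(3) = 2 (f[0] is a placeholder).
f = [0, 1, 2, 2]
for _ in range(76):
    f.append(f[-2] + f[-3])

print(f[4], f[5], f[6], f[7])  # prints: 3 4 5 7
```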
# File: aita/api/course.py (repo: ze-lin/AITA, license: MIT)
import datetime, time, os
from flask import Blueprint, jsonify, request, g
from aita.auth import login_required
from aita.db import get_collection
from werkzeug.utils import secure_filename
bp = Blueprint('course', __name__, url_prefix='/course')
@bp.route('/getall', methods=['GET'])
def get_all_course():
COURSE = get_collection('course')
result = COURSE.find()
json_body = {}
for i, document in enumerate(result):
json_body[i] = document
json_body[i].pop('_id')
return jsonify(json_body)
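The `_id`-stripping pattern above can be tried without a database (plain dicts below stand in for pymongo documents; this is an illustration, not part of the API):

```python
# Drop the non-JSON-serializable Mongo '_id' field before building the body,
# mirroring the enumerate-and-pop loop in get_all_course().
docs = [{'_id': 'a1', 'title': 'Algebra'}, {'_id': 'b2', 'title': 'Calculus'}]
json_body = {}
for i, document in enumerate(docs):
    json_body[i] = {k: v for k, v in document.items() if k != '_id'}
```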
@bp.route('/getall-teacher', methods=['GET'])
@login_required
def get_all_course_teacher():
COURSE = get_collection('course')
if g.usr['role'] == 'student':
return 'student'
result = COURSE.find({ 'teacher': g.usr['usr'] })
json_body = {}
for i, document in enumerate(result):
json_body[i] = document
json_body[i].pop('_id')
return jsonify(json_body)
@bp.route('/create', methods=['GET'])
@login_required
def create_course():
COURSE = get_collection('course')
document = {
'genre': request.args.get('genre'),
'title': request.args.get('title'),
'exam': request.args.get('exam'),
'time': request.args.get('time'),
'teacher': g.usr['usr'],
'video': secure_filename(request.args.get('video')),
'article': secure_filename(request.args.get('article')),
'date': str(datetime.date.today()),
'id': str(time.time()),
'view': 0
}
COURSE.insert_one(document)
return 'Success!'
@bp.route('/delete', methods=['GET'])
@login_required
def delete_course():
    # Cascade delete: also remove all collection entries referencing this course
COURSE = get_collection('course')
COLLECTION = get_collection('collection') # delete all
course_id = request.args.get('id')
COLLECTION.delete_many({ 'id': course_id })
COURSE.delete_one({'id': course_id})
return 'Success!'
@bp.route('/uploadfile', methods=['POST'])
@login_required
def upload():
"""
    Saving the file record to the database is left to submit_class.
"""
file = request.files['file']
file_name = secure_filename(file.filename)
if not os.path.exists(os.path.join('aita/static', file_name)):
file.save(os.path.join('aita/static', file_name))
return 'Success!'
@bp.route('/getreading', methods=['GET'])
@login_required
def get_reading():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
file_name = result['article']
file_path = os.path.join('aita/static', file_name)
content = ''
with open(file_path, 'r') as f:
for line in f:
content += line
return content
@bp.route('/getvideo', methods=['GET'])
@login_required
def get_video():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
file_name = result['video']
return file_name
@bp.route('/getexam', methods=['GET'])
@login_required
def get_exam():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
return result['exam']
@bp.route('/view', methods=['GET'])
@login_required
def view():
COURSE = get_collection('course')
course_id = request.args.get('id')
result = COURSE.find_one({'id': course_id })
result['view'] += 1
COURSE.replace_one({'id': course_id }, result)
COLLECTION = get_collection('collection')
document = {
'id': course_id,
'usr': g.usr['usr']
}
result = COLLECTION.find_one(document)
if not result:
COLLECTION.insert_one(document)
return 'Success!'
# File: main.py (repo: AuroraBTH/minecraft-modpack-randomizer, license: MIT)
from bs4 import BeautifulSoup
from requests import get
import json
def get_amount_of_pages(minecraft_version):
initial_site_response = get("https://www.curseforge.com/minecraft/mc-mods?filter-game-version=" + minecraft_version + "&filter-sort=5&")
soup = BeautifulSoup(initial_site_response.text, "html.parser")
amount_of_pages = soup.find('li', class_="dots").find_next_sibling().text
return int(amount_of_pages)
def get_project_and_author_id(mod):
mod_id = mod.find("a", class_="button--download").get("data-nurture-data")
author_id = mod.find("a", class_="button--download").get("data-nurture-data")
if mod_id is None:
mod_id = json.loads(mod.find("a", class_="button--download").get("data-exp-nurture"))["ProjectID"]
author_id = json.loads(mod.find("a", class_="button--download").get("data-exp-nurture"))["AuthorID"]
else:
mod_id = json.loads(mod_id)["ProjectID"]
author_id = json.loads(author_id)["AuthorID"]
return [mod_id, author_id]
def write_mods_to_json(minecraft_version, file_name):
domain = "https://www.curseforge.com"
page_number = 1
amount_of_pages = get_amount_of_pages(minecraft_version)
mod_list = []
while page_number <= amount_of_pages:
url = "https://www.curseforge.com/minecraft/mc-mods?filter-game-version=" + minecraft_version + "&filter-sort=5&page=" + str(page_number)
response = get(url)
data = BeautifulSoup(response.text, "html.parser")
list_of_mods = data.find_all("li", class_="project-list-item")
for mod in list_of_mods:
id_list = get_project_and_author_id(mod)
project_name = mod.find("h2", class_="list-item__title").text.strip()
project_id = id_list[0]
project_author_id = id_list[1]
project_category = mod.find("a", class_="category__item")["title"].strip()
project_description = mod.find("div", class_="list-item__description").p.text.strip()
project_downloads = int(mod.find("span", class_="count--download").text.strip().replace(",", ""))
project_link = mod.find("div", class_="list-item__details").a["href"]
mod_data = {}
mod_data["id"] = project_id
mod_data["name"] = project_name
mod_data["author_id"] = project_author_id
mod_data["category"] = project_category
mod_data["description"] = project_description
mod_data["downloads"] = project_downloads
mod_data["link"] = domain + project_link
mod_list.append(mod_data)
progress_percent = round((page_number / amount_of_pages) * 100, 2)
print("Done with page " + str(page_number) + "/" + str(amount_of_pages) + " (" + str(progress_percent) + "%)")
page_number = page_number + 1
with open(file_name, "w") as f:
amount_of_mods = len(mod_list)
pretty_json = json.loads(json.JSONEncoder().encode(mod_list))
f.write(json.dumps(pretty_json, indent=4))
print("Done indexing " + str(amount_of_mods) + " mods, see " + file_name + " for more details.")
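The paging and progress arithmetic in write_mods_to_json can be seen in isolation (a sketch with no network access; the version string matches the 1.12.2 filter value defined later in this file):

```python
# Build one URL per page by appending an incrementing `page` query parameter,
# and report progress as a rounded percentage, as write_mods_to_json does.
base = "https://www.curseforge.com/minecraft/mc-mods?filter-game-version="
version = "2020709689%3A6756"  # the 1.12.2 filter value used below
urls = [base + version + "&filter-sort=5&page=" + str(p) for p in range(1, 4)]
progress = round((2 / 3) * 100, 2)  # after finishing page 2 of 3
```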
user_version = input("0: 1.7.10\n1: 1.12.2\n_________\n->")
file_name = input("Name on file output (default is data.json):\n->")
if file_name == "":
file_name = "data.json"
if user_version == "0":
minecraft_1_7_10 = "2020709689%3A4449"
write_mods_to_json(minecraft_1_7_10, file_name)
elif user_version == "1":
minecraft_1_12_2 = "2020709689%3A6756"
write_mods_to_json(minecraft_1_12_2, file_name)
# File: product/views/brand_details.py (repo: Rafeen/Inventory-Management-and-POS, license: MIT)
from django.shortcuts import render, redirect, get_object_or_404
from product.models.brand_model import Brand
from django.contrib.auth.decorators import login_required
@login_required(login_url='/login/')
def brand_detail_view(request, id):
"""
    This view renders the Brand Detail page with the details of the selected brand
"""
brand_obj = get_object_or_404(Brand, brand_id=id)
context = {
"brand": brand_obj,
        "title": "Brand Details"
}
return render(request, "brand_details.html", context)
| 21.6 | 74 | 0.709259 | 72 | 540 | 5.097222 | 0.555556 | 0.054496 | 0.059946 | 0.076294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013857 | 0.198148 | 540 | 24 | 75 | 22.5 | 0.833718 | 0.122222 | 0 | 0 | 0 | 0 | 0.113586 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b0a6f1bbc8afafe4db77b3247308ff00dd67a64 | 1,264 | py | Python | 1-100q/40.py | rampup01/Leetcode | 8450a95a966ef83b24ffe6450f06ce8de92b3efb | [
"MIT"
] | 990 | 2018-06-05T11:49:22.000Z | 2022-03-31T08:59:17.000Z | 1-100q/40.py | rampup01/Leetcode | 8450a95a966ef83b24ffe6450f06ce8de92b3efb | [
"MIT"
] | 1 | 2021-11-01T01:29:38.000Z | 2021-11-01T01:29:38.000Z | 1-100q/40.py | rampup01/Leetcode | 8450a95a966ef83b24ffe6450f06ce8de92b3efb | [
"MIT"
] | 482 | 2018-06-12T22:16:53.000Z | 2022-03-29T00:23:29.000Z | '''
Given a collection of candidate numbers (candidates) and a target number (target), find all unique combinations in candidates where the candidate numbers sums to target.
Each number in candidates may only be used once in the combination.
Note:
All numbers (including target) will be positive integers.
The solution set must not contain duplicate combinations.
Example 1:
Input: candidates = [10,1,2,7,6,1,5], target = 8,
A solution set is:
[
[1, 7],
[1, 2, 5],
[2, 6],
[1, 1, 6]
]
'''
class Solution(object):
def combinationSum2(self, candidates, target):
"""
:type candidates: List[int]
:type target: int
:rtype: List[List[int]]
"""
result = []
candidates.sort()
def recursive(candidates, target, currList, index):
if target < 0:
return
if target == 0:
result.append(currList)
return
previous = -1
for start in range(index, len(candidates)):
if previous != candidates[start]:
recursive(candidates, target - candidates[start], currList + [candidates[start]], start+1)
previous = candidates[start]
recursive(candidates, target, [], 0)
        return result
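For a quick check against the example in the docstring, the same sort-and-skip-duplicates backtracking can be run standalone (re-implemented below with my own names; this is a sketch, not the LeetCode harness):

```python
def combination_sum2(candidates, target):
    candidates = sorted(candidates)
    result = []

    def backtrack(start, remaining, path):
        if remaining == 0:
            result.append(path)
            return
        prev = None
        for i in range(start, len(candidates)):
            c = candidates[i]
            if c > remaining:
                break          # sorted, so no later candidate fits either
            if c == prev:
                continue       # skip duplicate values at the same depth
            backtrack(i + 1, remaining - c, path + [c])
            prev = c

    backtrack(0, target, [])
    return result
```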
# File: src/clustar_project/clustarray.py (repo: jz5jx/Test_Repo, license: MIT)
import warnings
import numpy as np
import itertools
import matplotlib.pyplot as plt
class ClustArray:
''' Class for working with data from FITS images
Initialized from a numpy array from an image
Methods for denoising images
'''
def __init__(self, np_array):
self.im_array = np_array
self.noise_est = None
self.denoised_arr = None
def circle_crop(self, rad_factor = 1.0):
'''Function to crop square images to a circle
Params
------
rad_factor: float multiple allowing change to size of circle_crop
default is 1
value equal to 0.7 crops to a circle with radius that is 70% as large as the max image radius
values < 0 not allowed
values >= sqrt(2) will return original image
Outputs
-------
new_imdata: np array of same size as image data array, but with values outside radius set to nan;
sets self.denoised_arr to equal this array
'''
if rad_factor < 0:
raise ValueError('rad_factor must be >= 0')
if self.denoised_arr is None:
new_imdata = self.im_array.copy()
else:
new_imdata = self.denoised_arr.copy()
rad = (new_imdata.shape[0]/2)
rad_sq = (rad*rad_factor)**2
for ix,iy in np.ndindex(new_imdata.shape):
if (ix - rad)**2 + (iy - rad)**2 > rad_sq:
new_imdata[ix, iy] = np.nan
self.denoised_arr = new_imdata
return new_imdata
def pb_multiply(self, pb_array):
'''Function to multiply a FITS image by a .pb file to deemphasize edges
Inputs
------
pb_array: numpy array from a .pb file
Outputs
-------
new_imdata: np array of same size as image data array
consisting of elementwise multiplication of image and pb file;
sets self.denoised_arr to equal this array
'''
if self.denoised_arr is None:
imdata = self.im_array.copy()
else:
imdata = self.denoised_arr.copy()
new_imdata = np.multiply(imdata, pb_array)
self.denoised_arr = new_imdata
return new_imdata
def get_noise_level(self, nchunks = 3, rms_quantile = 0):
'''Calculates estimated noise level in image intensity
Stores value in FitsImage object noise attribute
Arguments
---------
nchunks: int number of chunks to use in grid, must be odd
rms_quantile: float in range [0, 1] indicating quantile of chunk RMS to use for noise level (0 = min RMS, 0.5 = median, etc)
Returns
-------
noise: float estimated noise in image intensity values;
sets self.noise_est to this value
'''
if self.denoised_arr is None:
imdata = self.im_array.copy()
warnings.warn('Calculating noise level from uncleaned image')
else:
imdata = self.denoised_arr.copy()
        # Now break the image into chunks and do the same analysis; one of the
        # chunks should have no signal in it and gives an estimate of the noise (= RMS).
        # An odd number of chunks in each direction is used so that the centre of the
        # image does not correspond to the edge of chunks; when you ask for observations
        # with ALMA, you usually specify that the object of interest be in the center
        # of your image.
size = [i//nchunks for i in imdata.shape]
remain = [i % nchunks for i in imdata.shape]
chunks = dict()
k = 0
for j,i in itertools.product(range(nchunks),range(nchunks)):
chunks[k] = size.copy()
            k += 1
        # Next, account for when the image dimensions are not evenly divisible by `nchunks`.
row_remain, column_remain = 0, 0
for k in chunks:
if k % nchunks < remain[0]:
row_remain = 1
if k // nchunks < remain[1]:
column_remain = 1
if row_remain > 0:
chunks[k][0] += 1
row_remain -= 1
if column_remain > 0:
chunks[k][1] += 1
                column_remain -= 1
        # With that in hand, calculate the lower-left corner indices of each chunk.
indices = dict()
for k in chunks:
indices[k] = chunks[k].copy()
if k % nchunks == 0:
indices[k][0] = 0
elif k % nchunks != 0:
indices[k][0] = indices[k-1][0] + chunks[k][0]
if k >= nchunks:
indices[k][1] = indices[k-nchunks][1] + chunks[k][1]
else:
indices[k][1] = 0
stddev_chunk = dict()
rms_chunk = dict()
for k in chunks:
i,j = indices[k]
di,dj = chunks[k]
x = imdata[i:i+di,j:j+dj]
stddev_this = np.nanstd(x)
rms_this = np.sqrt(np.nanmean(x**2))
stddev_chunk[k] = stddev_this
rms_chunk[k] = rms_this
noise = np.quantile(list(rms_chunk.values()), q = rms_quantile)
self.noise_est = noise
return(noise)
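The chunked-RMS idea above can be sketched without numpy: split the image into an n-by-n grid, take the per-chunk RMS, and return a low quantile. `chunk_rms_noise` below is a simplified stand-in for the class method (even chunk split, no NaN handling):

```python
import math

def chunk_rms_noise(img, nchunks=3, q=0.0):
    """Estimate noise as a low quantile of per-chunk RMS values."""
    H, W = len(img), len(img[0])
    rms = []
    for bi in range(nchunks):
        for bj in range(nchunks):
            r0, r1 = bi * H // nchunks, (bi + 1) * H // nchunks
            c0, c1 = bj * W // nchunks, (bj + 1) * W // nchunks
            vals = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            rms.append(math.sqrt(sum(v * v for v in vals) / len(vals)))
    rms.sort()
    idx = min(int(q * (len(rms) - 1)), len(rms) - 1)
    return rms[idx]
```

With q=0 this returns the minimum chunk RMS, i.e. the quietest chunk sets the noise floor.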
def denoise(self, pb_array = None, rad_factor = 1.0, rms_quantile = 0, grid_chunks = 3):
'''Wrapper function to perform entire denoising process
Crops image to a circle, multiplies by a pb file (if desired), and calculates RMS noise level
Inputs
------
im_array: 2d array representing a FITS image data
pb_array: optional numpy array from a .pb file
Params
------
rad_factor: float multiple allowing change to size of circle_crop
default is 1
value equal to 0.7 crops to a circle with radius that is 70% as large as the max image radius
values < 0 not allowed
values >= sqrt(2) will return original image
grid_chunks: int number of chunks to use in grid, must be odd
rms_quantile: float in range [0, 1] indicating quantile of chunk RMS to use for noise level (0 = min RMS, 0.5 = median, etc)
Outputs
-------
'''
self.circle_crop(rad_factor)
if pb_array is not None:
self.pb_multiply(pb_array)
noise_lvl = self.get_noise_level()
return(noise_lvl)
def extract_subgroup(self, group_indices, square = True, buffer = 0.0):
'''Function for extracting a subgroup of an image
Inputs
------
group_indices: list containing indices of subgroup [row_min, row_max, col_min, col_max]
Params
------
square: if True, widen shorter axis range to make subgroup a square
buffer: fraction to add to each dimension
(e.g. if subgroup is 200x200 pixels, buffer = 0.1 will return 220x220 pixels)
'''
row_min = group_indices[0]
row_max = group_indices[1]
col_min = group_indices[2]
col_max = group_indices[3]
if square:
diff = (row_max - row_min) - (col_max - col_min)
if diff == 0:
#already square
pass
elif diff < 0:
#adjust row min/max
row_min += int(np.floor(diff/2))
row_max -= int(np.ceil(diff/2))
else:
#adjust col min/max
col_min -= int(np.floor(diff/2))
col_max += int(np.ceil(diff/2))
buffer_width = int(buffer*(col_max - col_min)/2)
buffer_height = int(buffer*(row_max - row_min)/2)
row_min -= buffer_height
row_max += buffer_height
col_min -= buffer_width
col_max += buffer_width
subgroup = self.im_array[row_min:row_max, col_min:col_max]
return subgroup
def plot_subgroup(self, group_indices, square = True, buffer = 0.0, colorbar = True):
'''Function for plotting a subgroup of an image
Inputs
------
group_indices: list containing indices of subgroup [row_min, row_max, col_min, col_max]
Params
------
square: if True, widen shorter axis range to make subgroup a square
buffer: fraction to add to each dimension
(e.g. if subgroup is 200x200 pixels, buffer = 0.1 will return 220x220 pixels)
colorbar: boolean indicating whether or not to include a colorbar with the plot
'''
subgroup = self.extract_subgroup(group_indices, square, buffer)
plt.imshow(subgroup, origin='lower')
if colorbar:
plt.colorbar()
# File: rpa_logger/task.py (repo: kangasta/rpa_logger, license: MIT)
'''Constants and helpers for describing RPA tasks and their status.
'''
from collections import Counter
from dataclasses import dataclass
from typing import Any, Dict, Hashable, List, Union
from uuid import uuid4
from .utils import timestamp
from .utils.output import OutputText
STARTED = 'STARTED'
SUCCESS = 'SUCCESS'
IGNORED = 'IGNORED'
FAILURE = 'FAILURE'
ERROR = 'ERROR'
SKIPPED = 'SKIPPED'
STATUSES = (STARTED, SUCCESS, IGNORED, FAILURE, ERROR, SKIPPED,)
@dataclass
class BaseTask:
'''Base class to define common functionality of `rpa_logger.task.Task` and
`rpa_logger.task.TaskSuite`
'''
type: str
'''Used to identify task type, when task is presented as dict'''
name: Union[str, None]
'''Human-readable name of the task.'''
status: str
'''Describes state of the task. For example `SUCCESS` or `ERROR`.'''
started: str
'''UTC ISO-8601 timestamp that stores the start time of the task.
Defined automatically when instance is created.
'''
finished: Union[str, None]
'''UTC ISO-8601 timestamp that stores the finish time of the task.
Defined automatically when `rpa_logger.task.BaseTask.finish` method is
called.
'''
metadata: Dict[str, Any]
'''Container for any other data stored in the task. Could, for example,
contain information about the execution environment or data that was
processed in the task.
'''
def __init__(self, name: Union[str, None], status: str = STARTED) -> None:
'''
Args:
name: Name of the task.
status: Status to use for the started task.
'''
self.status = status
self.name = name
self.started = timestamp()
self.finished = None
self.metadata = dict()
def finish(self, status) -> None:
'''Set finished timestamp and end status of the task
Args:
status: Status to use for the finished task.
'''
self.status = status
self.finished = timestamp()
def log_metadata(self, key: str, value: Any) -> None:
'''Log metadata for the task.
Args:
key: Key for the metadata item.
value: Value for the metadata item. If task data is saved as json
or yaml, this value must be serializable.
'''
self.metadata[key] = value
@dataclass
class Task(BaseTask):
'''Defines single task and stores its output and metadata
'''
output: List[OutputText]
def __init__(self, name: str, status: str = STARTED):
'''
Args:
name: Name of the task.
status: Status to use for the started task.
'''
super().__init__(name, status)
self.output = list()
@property
def type(self):
return 'TASK'
def log_output(self, text: str, stream: str = 'stdout') -> None:
'''Append new `rpa_logger.utils.output.OutputText` to task output.
Args:
text: Output text content.
stream: Output stream. Defaults to `stdout`.
'''
self.output.append(OutputText(text, stream))
@dataclass
class TaskSuite(BaseTask):
'''Defines task suite and stores its tasks and metadata
'''
description: Union[str, None]
tasks: List[Task]
def __init__(
self,
name: Union[str, None],
description: str = None,
status: str = STARTED):
'''
Args:
name: Name of the task suite.
description: Description of the task suite.
status: Status to use for the started task suite.
'''
super().__init__(name, status)
self.description = description
self._tasks: Dict[Hashable, Task] = dict()
@property
def type(self):
return 'SUITE'
@property
def tasks(self) -> List[Task]: # pylint: disable=function-redefined
'''Return suites tasks as list sorted by the started time.
'''
tasks = list(self._tasks.values())
tasks.sort(key=lambda i: i.started)
return tasks
@property
def active_tasks(self) -> List[Task]:
'''Return suites active tasks as list sorted by the started time.
Task is active until it is finished; Task is active, if its finished
variable is None.
'''
return [i for i in self.tasks if i.finished is None]
@property
def task_status_counter(self) -> Counter:
'''Return `Counter` instance initialized with suites task statuses.
'''
return Counter(i.status for i in self._tasks.values())
def create_task(
self,
name: str,
key: Hashable = None,
status: str = STARTED):
'''Create new task and store it in the suite tasks.
Args:
name: Name of the task.
key: Key to identify the created task with.
status: Status to use for the started task.
Returns:
Key of the created task.
'''
if not key:
key = uuid4()
self._tasks[key] = Task(name, status)
return key
def log_task(self, status: str, name: str) -> None:
'''Create and finish a new task.
Args:
name: Name of the task.
status: Status to use for the finished task.
Returns:
Key of the created task.
'''
key = self.create_task(name)
self.finish_task(key, status)
return key
def finish_task(self, key: Hashable, status: str) -> None:
'''Set finished timestamp and end status of the task
Args:
key: Key of the task to finish
status: Status to use for the finished task.
'''
return self._tasks[key].finish(status)
def get_task(self, key: Hashable) -> Task:
'''Get `rpa_logger.task.Task` with given key.
Args:
key: Key to try to find from suite.
Returns:
Task with matching key.
'''
return self._tasks.get(key)
def log_metadata(
self,
key: str,
value: Any,
task_key: Hashable = None) -> None:
'''Log metadata into the task suite or any of its tasks.
Args:
key: Key for the metadata item.
value: Value for the metadata item. If task data is saved as json
or yaml, this value must be serializable.
task_key: Key of a task to log metadata into. If None, metadata
is logged to the suite.
'''
if task_key:
self._tasks[task_key].log_metadata(key, value)
return
super().log_metadata(key, value)
def log_output(self, key: Hashable, text: str,
stream: str = 'stdout') -> None:
'''Append new `rpa_logger.utils.output.OutputText` to task output.
Args:
key: Key of the task to log output to.
text: Output text content.
stream: Output stream. Defaults to `stdout`.
'''
self._tasks[key].log_output(text, stream)
| 29.549587 | 78 | 0.586212 | 889 | 7,151 | 4.654668 | 0.173228 | 0.030449 | 0.030449 | 0.028758 | 0.389319 | 0.337361 | 0.332286 | 0.278154 | 0.195505 | 0.182697 | 0 | 0.002063 | 0.322193 | 7,151 | 241 | 79 | 29.672199 | 0.851661 | 0.351419 | 0 | 0.214286 | 0 | 0 | 0.018021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173469 | false | 0 | 0.061224 | 0.020408 | 0.459184 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b0d8b35ff7e943b202f21481e50e5769f2ff2f4 | 13,760 | py | Python | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 16 | 2021-07-01T16:18:26.000Z | 2022-02-21T05:19:39.000Z | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 1 | 2022-02-22T22:46:37.000Z | 2022-02-22T22:46:37.000Z | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 1 | 2021-11-08T19:52:40.000Z | 2021-11-08T19:52:40.000Z | import sys, os
import numpy as np
import cv2
import torch
import torch.nn.functional as F
from torch_geometric.data import Data, Batch
import torchvision.transforms as transforms
from . import constants
from .util import utilities as util_
def get_resnet50_fpn_model(pretrained=True, trainable_layer_names=[]):
"""Load ResNet50 + FPN model, pre-trained on COCO 2017."""
import torchvision.models.detection.backbone_utils as backbone_utils
from torch.utils.model_zoo import load_url as load_url
pretrained_backbone=False
rn50_fpn = backbone_utils.resnet_fpn_backbone('resnet50', pretrained_backbone)
# This is an instance of BackboneWithFPN: https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L11
if pretrained:
model_urls = {
'maskrcnn_resnet50_fpn_coco':
'https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth',
}
pretrained_state_dict = load_url(model_urls['maskrcnn_resnet50_fpn_coco'],
progress=True)
# Hack to load only the backbone weights to the model, instead of all of MaskRCNN
rn50_fpn_dict = rn50_fpn.state_dict()
pretrained_dict = {k : pretrained_state_dict['backbone.' + k] for k in rn50_fpn_dict.keys()}
rn50_fpn_dict.update(pretrained_dict)
rn50_fpn.load_state_dict(rn50_fpn_dict)
rn50_fpn = rn50_fpn.to(constants.DEVICE)
# Freeze layers unless specified
for name, parameter in rn50_fpn.named_parameters():
parameter.requires_grad_(False)
for layer_name in trainable_layer_names:
if layer_name in name:
parameter.requires_grad_(True)
return rn50_fpn
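The state-dict "hack" above can be illustrated without torch: filter a full detector checkpoint down to just the keys the backbone expects by prepending the 'backbone.' prefix. The dicts below are stand-ins, not real weights:

```python
# A full MaskRCNN checkpoint holds backbone, RPN, and head weights; the
# backbone-only model wants keys without the 'backbone.' prefix.
pretrained_state_dict = {
    'backbone.body.conv1.weight': 'w0',
    'backbone.fpn.inner.weight': 'w1',
    'rpn.head.conv.weight': 'w2',   # not part of the backbone, ignored
}
rn50_fpn_dict = {'body.conv1.weight': None, 'fpn.inner.weight': None}
filtered = {k: pretrained_state_dict['backbone.' + k] for k in rn50_fpn_dict}
rn50_fpn_dict.update(filtered)
```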
def extract_rgb_img_features(model, img):
"""Run model (COCO2017 pre-trained ResNet50+FPN) on image.
Args:
model: output from get_resnet50_fpn_model()
img: a [3 x H x W] torch.FloatTensor. Should have been standardized already
Returns:
an OrderedDict of torch.FloatTensors of shape [1, 256, H, W].
"""
H,W = img.shape[1:]
features = model(img.unsqueeze(0).to(constants.DEVICE))
    for key in list(features.keys()):  # copy keys: 'pool' is deleted during iteration
if key == 'pool':
del features[key]
continue
features[key] = F.interpolate(features[key], size=(H,W), mode='bilinear')
return features
def FPN_feature_key(mask):
"""Compute which FPN layer to use.
Args:
mask: a [H x W] torch tensor with values in {0,1}
Returns:
a string
"""
x_min, y_min, x_max, y_max = util_.mask_to_tight_box(mask)
    roi_w = x_max - x_min + 1
    roi_h = y_max - y_min + 1
    roi_w = roi_w.float()
    roi_h = roi_h.float()
k = torch.floor(4 + torch.log2(torch.sqrt(roi_w*roi_h)/224)) # Taken from FPN paper
k = min(max(int(k), 2), 5)
features_key = str(k-2) # P2 -> '0', P3 -> '1', P4 -> '2', P5 -> '3'
return features_key
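The same level assignment, restated in pure Python (`fpn_key` is my name for this sketch; it assumes the box width/height are plain numbers rather than tensors):

```python
import math

def fpn_key(roi_w, roi_h):
    """FPN-paper level assignment: k = floor(4 + log2(sqrt(w*h) / 224))."""
    k = math.floor(4 + math.log2(math.sqrt(roi_w * roi_h) / 224))
    k = min(max(int(k), 2), 5)  # clamp to P2..P5
    return str(k - 2)           # P2 -> '0', P3 -> '1', P4 -> '2', P5 -> '3'
```

A 224x224 box lands on P4 (key '2'); halving the side length drops one level, and the clamp keeps tiny or huge boxes within P2..P5.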
def crop_tensor_to_nchw(tensor,
x_min, y_min, x_max, y_max,
img_size=(64,64),
mode='bilinear'):
"""Crop a tensor and reshape.
Args:
tensor: a torch.Tensor of shape [H x W], [C x H x W], or [N x C x H x W]
x_min: int
y_min: int
x_max: int
y_max: int
x_axis: int
y_axis: int
img_size: tuple of (H, W)
Returns:
a torch.Tensor of shape [N x C x img_size[0] x img_size[1]]
"""
y_axis = tensor.ndim - 2
x_axis = tensor.ndim - 1
crop = torch.narrow(tensor, y_axis, y_min, y_max - y_min + 1)
crop = torch.narrow(crop, x_axis, x_min, x_max - x_min + 1)
while crop.ndim < 4: # NCHW
crop.unsqueeze_(0)
crop = F.interpolate(crop, img_size, mode=mode)
return crop
def construct_segmentation_graph(rgb_img_features,
xyz_img,
masks,
create_edge_indices=True,
compute_bg_node=True,
neighbor_dist=10,
padding_config=None,
device=None):
"""Construct Graph from img + masks.
Args:
rgb_img_features: an OrderedDict of image features. Output of extract_rgb_img_features()
xyz_img: a [3 x H x W] torch.FloatTensor. 3D point cloud from camera frame of reference
masks: a [H x W] torch.FloatTensor of masks in {0, 1, ..., K-1}. HW
OR
a [N x H x W] torch.FloatTensor of masks in {0,1}. NHW
compute_bg_node: bool.
create_edge_indices: bool.
neighbor_dist: int. Used to create edge indices.
padding_config: a Python dictionary with padding parameters.
Returns:
graph: a torch_geometric.data.Data instance with keys:
- rgb: a [N, 256, h, w] torch.FloatTensor of ResnNet50+FPN rgb image features
- depth: a [N, 3, h, w] torch.FloatTensor. XYZ image
- mask: a [N, h, w] torch.FloatTensor of values in {0, 1}
- orig_masks: a [N, H, W] torch.FloatTensor of values in {0, 1}. Original image size.
- crop_indices: a [N, 4] torch.LongTensor. xmin, ymin, xmax, ymax.
"""
if device is None:
device = constants.DEVICE
H, W = xyz_img.shape[1:]
if padding_config is None:
padding_config = {
'inference' : True,
'padding_percentage' : 0.25,
'new_H' : 64,
'new_W' : 64,
}
new_H = padding_config['new_H']
new_W = padding_config['new_W']
# Get relevant masks
if masks.ndim == 2:
orig_masks = util_.convert_mask_HW_to_NHW(masks, to_ignore=range(0,constants.OBJECTS_LABEL)) # [N x H x W]
elif masks.ndim == 3:
orig_masks = masks
masks = util_.convert_mask_NHW_to_HW(orig_masks, start_label=constants.OBJECTS_LABEL)
else:
raise ValueError(f"<masks> MUST be in HW or NHW format. Got shape: {masks.shape}")
N = orig_masks.shape[0] # Number of objects, and nodes in graph
# Crop/Resize Masks/Depth
rgb_channels_dim = 256 # hard-coded based on ResNet50+FPN output
rgb_cr = torch.zeros((N, rgb_channels_dim, new_H, new_W), dtype=torch.float32, device=device) # background node, if requested, is appended later
depth_cr = torch.zeros((N, 3, new_H, new_W), dtype=torch.float32, device=device)
mask_cr = torch.zeros((N, 1, new_H, new_W), dtype=torch.float32, device=device)
crop_indices = torch.zeros((N, 4), dtype=torch.long, device=device)
for i, mask in enumerate(orig_masks):
x_min, y_min, x_max, y_max = util_.crop_indices_with_padding(mask, padding_config, inference=padding_config['inference'])
crop_indices[i] = torch.stack([x_min, y_min, x_max, y_max])
features_key = FPN_feature_key(mask)
layer_features = rgb_img_features[features_key] # Shape: [1 x C x h x w]. C = 256
rgb_cr[i] = crop_tensor_to_nchw(layer_features, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W))[0]
depth_cr[i] = crop_tensor_to_nchw(xyz_img, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W), mode='nearest')[0]
mask_cr[i] = crop_tensor_to_nchw(mask, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W), mode='nearest')[0]
# Background node
if compute_bg_node:
crop_indices = torch.cat([torch.LongTensor([[0, 0, W-1, H-1]]).to(device),
crop_indices], axis=0)
rgb_cr = torch.cat([crop_tensor_to_nchw(rgb_img_features['3'], *crop_indices[0]), # deepest layer. Semantic features
rgb_cr], axis=0)
depth_cr = torch.cat([crop_tensor_to_nchw(xyz_img, *crop_indices[0]).to(device),
depth_cr], axis=0)
bg_orig_mask = (masks == 0).float().unsqueeze(0) # [1, H, W]
orig_masks = torch.cat([bg_orig_mask,
orig_masks], axis=0)
mask_cr = torch.cat([crop_tensor_to_nchw(bg_orig_mask, *crop_indices[0]),
mask_cr], axis=0)
N += 1
# Check to make sure no masks are 0
valid_indices = []
for i in range(N):
if torch.sum(mask_cr[i]) > 0:
valid_indices.append(i)
valid_indices = np.array(valid_indices)
N = len(valid_indices)
graph = Data(rgb=rgb_cr[valid_indices],
depth=depth_cr[valid_indices],
mask=mask_cr[valid_indices],
orig_masks=orig_masks[valid_indices],
crop_indices=crop_indices[valid_indices],
)
if create_edge_indices:
build_edge_index(graph, neighbor_dist=neighbor_dist)
graph = graph.to(device)
return graph
def build_edge_index(graph, neighbor_dist):
edge_index = util_.neighboring_mask_indices(graph.orig_masks, reduction_factor=1,
neighbor_dist=neighbor_dist)
edge_index = torch.cat([edge_index, edge_index.flip([1])], dim=0).T # Shape: [2 x E]
graph.edge_index = edge_index.to(graph.mask.device)
def remove_bg_node(data_list):
"""Return a list of new graphs with background node removed.
Note: the RGB/Depth/Mask is not copied over, but assigned. Thus, losses can be applied to
the new graphs (w/out BG nodes) and gradients will still flow through the old graphs.
Args:
graph: Can be a torch_geometric.Data instance, torch_geometric.Batch instance,
or a List of torch_geometric.Data instances.
Returns:
Same data type as input. A copy of the graphs, but without background nodes and with updated edge_index.
"""
if isinstance(data_list, Data):
input_type = 'Data'
data_list = [data_list]
elif isinstance(data_list, Batch):
data_list = Batch.to_data_list(data_list)
input_type = 'Batch'
elif isinstance(data_list, list):
input_type = 'list'
else:
raise NotImplementedError()
# Note: data_list is now of type list
new_data_list = []
for graph in data_list:
# Double check to make sure background node hasn't already been removed
if 'background_removed' in graph:
raise Exception("Cannot remove background node if it has already been removed...")
new_graph = Data()
new_graph.rgb = graph.rgb[1:]
new_graph.depth = graph.depth[1:]
new_graph.mask = graph.mask[1:]
new_graph.orig_masks = graph.orig_masks[1:]
new_graph.crop_indices = graph.crop_indices[1:]
new_graph.background_removed = True
# Special cases
if 'edge_index' in graph:
edge_mask = torch.all(graph.edge_index != 0, dim=0) # [E]
new_graph.edge_index = graph.edge_index[:, edge_mask] - 1 # -1 since we removed background
if 'paths' in graph and 'split' in graph: # Splitting is stored
new_graph.paths = {k - 1: graph.paths[k] for k in graph.paths.keys()}
new_graph.split = graph.split[1:]
new_data_list.append(new_graph)
if input_type == 'Data':
return new_data_list[0]
elif input_type == 'Batch':
return convert_list_to_batch(new_data_list)
elif input_type == 'list':
return new_data_list
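The edge re-indexing done in `remove_bg_node` (drop every edge touching node 0, then shift the remaining indices down by one) can be illustrated on a toy edge list; the values here are made up:

```python
# Plain-list version of the edge_index filtering above; node 0 is the background.
edge_index = [(0, 1), (1, 2), (2, 0), (1, 3)]   # hypothetical directed edges
kept = [(u - 1, v - 1) for (u, v) in edge_index if u != 0 and v != 0]
print(kept)  # [(0, 1), (0, 2)]
```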
def convert_list_to_batch(graph_list, external_key='crop_indices'):
"""Convert list of graphs into a Batch(Data) instance.
Args:
graph_list: a Python list of torch_geometric.data.Data instances
Returns:
a torch_geometric.data.Batch instance
"""
for graph in graph_list:
if 'x' not in graph.keys: # Batch.from_data_list needs 'x' to run correctly (to compute graph.num_nodes)
graph.x = graph[external_key]
return Batch.from_data_list(graph_list)
def convert_batch_to_list(batch_graph):
"""Convert Batch(Data) instance into a list of Data instances.
Undoes the convert_list_to_batch() function.
Args:
batch_graph: a torch_geometric.Batch instance
Returns:
a Python list of torch_geometric.data.Data instances.
"""
return Batch.to_data_list(batch_graph)
def get_edge_graph(graph, rgb_img_features, xyz_img, padding_config=None):
"""Compute graph where each node is an edge of original graph.
Creates a new graph such that each node in the new graph corresponds
to an edge in the original graph. The new graph is constructed
in the same way, but the crop_indices cover the union of the
masks. This graph has no edges.
Args:
graph: a torch_geometric.data.Data instance
rgb_img_features: an OrderedDict of image features. Output of extract_rgb_img_features()
xyz_img: a [3 x H x W] torch.FloatTensor. 3D point cloud from camera frame of reference
padding_config: a Python dictionary.
Returns:
a torch_geometric.Data instance
"""
union_orig_masks = torch.clamp(graph.orig_masks[graph.edge_index[0]] + \
graph.orig_masks[graph.edge_index[1]], max=1) # Shape: [E x H x W]
return construct_segmentation_graph(
rgb_img_features,
xyz_img,
union_orig_masks,
compute_bg_node=False,
create_edge_indices=False,
padding_config=padding_config
)
def add_zero_channel_to_masks(graph):
"""Add an empty channel of 0's to graph.mask."""
graph.mask = torch.cat([graph.mask, torch.zeros_like(graph.mask)], dim=1)
# --- File: src/views/botones/informacion/boton_informacion.py (julianVelandia/UI_RETEDECON, MIT) ---
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
# local imports
from .funciones_informacion import Funcion_informacion
from src.views.botones.inicio.funciones import *
class Boton_informacion(Funcion_informacion):
def boton_informacion_manual(self, widget):
self.informacion_manual = QToolButton(widget)
self.informacion_manual.setText('Manual de Usuario')
self.informacion_manual.setObjectName("button") # object name linked to the CSS stylesheet
self.informacion_manual.setIcon(QIcon('src/views/static/icons/icono_manual_usuario')) # icon
self.informacion_manual.setIconSize(QSize(self.height/11, self.height/11))
self.informacion_manual.setToolButtonStyle(Qt.ToolButtonTextUnderIcon)
self.informacion_manual.setGeometry(self.width/4.5, self.height/2.8,
self.width/4, self.height/3.9)
self.informacion_manual.clicked.connect(self.InformacionManual)
self.informacion_manual.setVisible(False)
def boton_informacion_fabricante(self, widget):
self.informacion_fabricante = QToolButton(widget)
self.informacion_fabricante.setText('Información del\nFabricante')
self.informacion_fabricante.setObjectName("button") # object name linked to the CSS stylesheet
self.informacion_fabricante.setIcon(QIcon('src/views/static/icons/favicon3')) # icon
self.informacion_fabricante.setIconSize(QSize(self.height/11, self.height/11))
self.informacion_fabricante.setToolButtonStyle(Qt.ToolButtonTextUnderIcon)
self.informacion_fabricante.clicked.connect(self.InformacionFabricante)
self.informacion_fabricante.setGeometry(self.width/1.9, self.height/2.8,
self.width/4, self.height/3.9)
self.informacion_fabricante.setVisible(False)
def qr_informacion_qr(self, widget):
self.informacion_qr = QToolButton(widget)
self.informacion_qr.setObjectName("button_trasnparente") # object name linked to the CSS stylesheet
self.informacion_qr.setIcon(QIcon('src/views/static/icons/QRDRIVE.png')) # icon
self.informacion_qr.setIconSize(QSize(self.height/5, self.height/5))
self.informacion_qr.setGeometry((self.width/2) - (self.height/7), (self.height/2) - (self.height/7),
self.height/5, self.height/5)
self.informacion_qr.setVisible(False)
def label_informacion_label(self, widget):
self.informacion_label = QLabel(widget)
self.informacion_label.setObjectName("FabInfo") # object name linked to the CSS stylesheet
self.informacion_label.setText("GRACIAS POR USAR RETEDECON\n"
"\n"
"RETEDECON es fabricado por:\n"
" - Julián C. Velandia\n"
" - Sebastian Cubides\n"
" - Brayan Guevara\n"
" - Jhon B. Muñoz\n"
"Con la coolaboración de: \n"
" - Diego A. Tibaduiza\n"
"Bajo la supervición y sustento de la Unidad De Gestion De La Innovación,\n"
"Facultad De Ingeniería (Ingnova), de La Universidad Nacional De Colombia.\n\n"
"Si desea contactarse con nosotros puede hacerlo a través de los siguientes medios:\n"
" - Celular/Whatsapp: +57 313 8244012\n"
" - E-Mail: scubidest@unal.edu.co\n\n"
"Versión del Software: 1.0")
self.informacion_label.setGeometry((self.width / 6), (self.height/9),
self.width / 1.2, self.height/1.2)
self.informacion_label.setVisible(False)

# --- File: 1030 Brick Layout.py (ansabgillani/binarysearchcomproblems, MIT) ---
class Solution:
def solve(self, bricks, width, height):
dp = [0]*(width+1)
dp[0] = 1
for i in range(len(dp)):
for brick in bricks:
dp[i] += dp[i-brick] if i-brick >= 0 else 0
return dp[-1]**height
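To sanity-check the recurrence, here is the same DP as a standalone function (rows are counted independently and raised to the height, exactly as above) with a hand-checked example using hypothetical brick widths:

```python
def count_layouts(bricks, width, height):
    # dp[i] = number of ways to tile a single row of width i
    dp = [0] * (width + 1)
    dp[0] = 1
    for i in range(1, len(dp)):
        for brick in bricks:
            if i - brick >= 0:
                dp[i] += dp[i - brick]
    # rows are independent, so the wall count is the row count to the height-th power
    return dp[-1] ** height

print(count_layouts([1, 2], 3, 2))  # dp over a row of width 3 gives 3 tilings; 3**2 = 9
```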
# --- File: src/feature_creation.py (aswain571/m5_forecasting, MIT) ---
import pandas as pd
import numpy as np
import pickle
from preprocess import process_ds
from sklearn.preprocessing import LabelEncoder
def transform_cat_feats(df):
"""makes null columns into unknown and cat columns
are label encoded
Args:
df (pd.DataFrame): Dataframe with the sales data.
Returns:
Dataframe with the sales data including lag and rolling
features.
"""
# nan_features = [
#'event_name_1',
#'event_type_1',
#'event_name_2',
#'event_type_2',]
# for feature in nan_features:
# df[feature].fillna('unknown', inplace = True)
cat = [
"item_id",
"dept_id",
"cat_id",
"store_id",
"state_id",
"event_name_1",
"event_type_1",
"event_name_2",
"event_type_2",
]
for feature in cat:
encoder = LabelEncoder()
df[feature] = encoder.fit_transform(df[feature])
return df
def calculate_time_features(df):
"""Clagged and rolling mean features
of the sales data.
Args:
df (pd.DataFrame): Dataframe with the sales data.
Returns:
Dataframe with the sales data including lag and rolling
features.
"""
dayLags = [28]
lagSalesCols = [f"lag_{dayLag}" for dayLag in dayLags]
for dayLag, lagSalesCol in zip(dayLags, lagSalesCols):
df[lagSalesCol] = (
df[["id", "item_sales"]].groupby("id")["item_sales"].shift(dayLag)
)
windows = [7, 28]
for window in windows:
for dayLag, lagSalesCol in zip(dayLags, lagSalesCols):
df[f"rmean_{dayLag}_{window}"] = (
df[["id", lagSalesCol]]
.groupby("id")[lagSalesCol]
.transform(lambda x: x.rolling(window).mean())
)
return df
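The lag/rolling logic above can be illustrated without pandas. This pure-Python sketch (made-up sales numbers, a lag of 2 and a window of 3 rather than the 28/7 used above) shows what the lag column and its rolling mean look like for one item:

```python
sales = [3, 1, 4, 1, 5, 9, 2, 6]           # hypothetical daily sales for one item
day_lag = 2
lag = [None] * day_lag + sales[:-day_lag]  # equivalent of groupby('id').shift(day_lag)

window = 3
rmean = []
for i in range(len(lag)):
    win = lag[max(0, i + 1 - window): i + 1]
    if len(win) < window or None in win:
        rmean.append(None)                 # pandas would emit NaN here
    else:
        rmean.append(sum(win) / window)
```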
def cat_ts_feats(df):
"""Build categorical and time series feats.
Args:
df (pd.Dataframe) : Dataframe with sales data
Returns:
Dataframe with sales data including categorical
features and lag/rolling mean features
"""
df = transform_cat_feats(df)
df = calculate_time_features(df)
return df
def get_test_train_data():
"""Build train and test dataset. Test is
used for inference
Args:
None
Returns:
train and test dataframes
"""
df = process_ds()
df = cat_ts_feats(df)
df = df.reset_index().set_index("date")
# remove unused columns
cols_not_used = ["id", "weekday", "d", "index"]
df.drop(columns=cols_not_used, inplace=True)
df.dropna(inplace=True)
# convert T/F to boolean - lightgbm throws error otherwise
df["is_weekend"] = df["is_weekend"].astype(int)
df["no_sell_price"] = df["no_sell_price"].astype(int)
print(df)
train_start_date = "2014-04-24"
train_end_date = "2016-04-23"
test_start_date = "2016-04-24"
test_end_date = "2016-05-23"
df_train = df.loc[train_start_date:train_end_date]
df_test = df.loc[test_start_date:test_end_date]
# save train and test dataframes for later use
df_train.to_pickle("../data/df_train.pkl")
df_test.to_pickle("../data/df_test.pkl")
if __name__ == "__main__":
get_test_train_data()
# --- File: cosmosis/output/fits_output.py (annis/cosmosis, BSD-2-Clause) ---
from .output_base import OutputBase
from . import utils
import numpy as np
import os
from glob import glob
from collections import OrderedDict
try:
import fitsio
except ImportError:
fitsio = None
comment_indicator = "_cosmosis_comment_indicator_"
final_metadata_indicator = "FINALMETA"
unreserve_indicator = "UNRES"
reserved_keys = [
"XTENSION",
"BITPIX",
"NAXIS",
"NAXIS1",
"NAXIS2",
"PCOUNT",
"GCOUNT",
"TFIELDS",
"TTYPE1",
"COMMENT",
]
def check_fitsio():
if fitsio is None:
raise RuntimeError("You need to have the fitsio library installed to output FITS files. Try running: pip install --install-option=\"--use-system-fitsio\" git+git://github.com/joezuntz/fitsio")
class FitsOutput(OutputBase):
FILE_EXTENSION = ".fits"
_aliases = ["fits"]
def __init__(self, filename, rank=0, nchain=1, clobber=True):
super(FitsOutput, self).__init__()
#If filename already ends in .txt then remove it for a moment
if filename.endswith(self.FILE_EXTENSION):
filename = filename[:-len(self.FILE_EXTENSION)]
if nchain > 1:
filename = filename + "_{}".format(rank+1)
self._filename = filename + self.FILE_EXTENSION
self.filename_base = filename
check_fitsio()
self._fits = fitsio.FITS(self._filename, "rw", clobber=clobber)
self._hdu = None
#also used to store comments:
self._metadata = OrderedDict()
self._final_metadata = OrderedDict()
def _close(self):
self._flush_metadata(self._final_metadata)
self._final_metadata={}
self._fits.close()
def _flush_metadata(self, metadata):
for (key,(value,comment)) in list(metadata.items()):
if key.startswith(comment_indicator):
self._hdu.write_comment(value)
elif comment:
self._hdu.write_key(key, value, comment)
else:
self._hdu.write_key(key, value)
def _begun_sampling(self, params):
#write the name line
self._fits.create_table_hdu(data=params, names=[c[0] for c in self.columns])
self._hdu = self._fits[-1]
self._dtype = self._hdu.get_rec_dtype()[0]
self._flush_metadata(self._metadata)
self._metadata={}
@staticmethod
def is_reserved_fits_keyword(key):
for k in reserved_keys:
if key.upper().startswith(k):
return True
return False
def _write_metadata(self, key, value, comment=''):
#We save the metadata until we get the first
#parameters since up till then the columns can
#be changed
if self.is_reserved_fits_keyword(key):
key=unreserve_indicator + key
self._metadata[key]= (value, comment)
def _write_comment(self, comment):
#save comments along with the metadata - nice as
#preserves order
self._metadata[comment_indicator +
"_%d" % (len(self._metadata))] = (comment,None)
def _write_parameters(self, params):
row = np.core.records.fromarrays(params, dtype=self._dtype)
row=np.atleast_1d(row)
self._hdu.append(row)
def _write_final(self, key, value, comment=''):
#I suppose we can put this at the end - why not?
if self.is_reserved_fits_keyword(key):
key=unreserve_indicator + key
self._final_metadata[key]= (value, final_metadata_indicator+comment)
def name_for_sampler_resume_info(self):
return self.filename_base + '.sampler_status'
@classmethod
def from_options(cls, options, resume=False):
#Look up the required parameters in the ini file.
#How this is done will depend on the ini format.
if resume:
raise ValueError("Cannot resume from FITS output")
filename = options['filename']
delimiter = options.get('delimiter', '\t')
rank = options.get('rank', 0)
nchain = options.get('parallel', 1)
clobber = utils.boolean_string(options.get('clobber', True))
return cls(filename, rank, nchain, clobber=clobber)
@classmethod
def load_from_options(cls, options):
check_fitsio()
filename = options['filename']
cut = False
if filename.endswith(cls.FILE_EXTENSION):
filename = filename[:-len(cls.FILE_EXTENSION)]
cut = True
# first look for serial file
if os.path.exists(filename+cls.FILE_EXTENSION):
datafiles = [filename+cls.FILE_EXTENSION]
elif os.path.exists(filename) and not cut:
datafiles = [filename]
else:
datafiles = glob(filename+"_[0-9]*"+cls.FILE_EXTENSION)
if not datafiles:
raise RuntimeError("No datafiles found starting with %s!"%filename)
#Read the metadata
metadata = []
final_metadata = []
data = []
comments = []
column_names = None
for datafile in datafiles:
print('LOADING CHAIN FROM FILE: ', datafile)
chain = []
chain_metadata = {}
chain_final_metadata = {}
chain_comments = []
f = fitsio.FITS(datafile, "r")
hdu = f[1]
chain = f[1].read()
#convert to unstructured format
chain = chain.view((chain.dtype[0], len(chain.dtype.names)))
column_names = hdu.get_colnames()
hdr = hdu.read_header()
chain_comments = [r['comment'] for r in hdr.records() if r['name'].lower()=="comment"]
for r in hdr.records():
key = r['name']
if key=='COMMENT':
continue
if key.startswith(unreserve_indicator):
key = key[len(unreserve_indicator):]
value = r['value']
key=key.lower()
if r['comment'].startswith(final_metadata_indicator):
chain_final_metadata[key] = value
else:
chain_metadata[key] = value
data.append(np.array(chain))
metadata.append(chain_metadata)
final_metadata.append(chain_final_metadata)
comments.append(chain_comments)
if column_names is None:
raise ValueError("Could not find column names header in file starting %s"%filename)
return column_names, data, metadata, comments, final_metadata
# --- File: app.py (M3nin0/selectToTex, BSD-2-Clause) ---
from selecttotex.totex import Totex
# Create the SelectToTex instance
tt = Totex()
# Commands to be executed
commands = ['SELECT * FROM aluno;', 'SELECT * FROM materia;', 'SELECT * FROM matricula;']
# Call the conversion function
tt.to_tex(commands, 'tabelas.txt')
# --- File: python/vtool/maya_lib/ui.py (louisVottero/vtool, MIT) ---
# Copyright (C) 2022 Louis Vottero louis.vot@gmail.com All rights reserved.
from __future__ import absolute_import
import maya.cmds as cmds
import maya.utils
import maya.mel as mel
from maya.app.general.mayaMixin import MayaQWidgetBaseMixin, MayaQWidgetDockableMixin
from maya import OpenMayaUI as omui
from .. import qt_ui, qt
from .. import util, util_file
from .ui_lib import ui_fx, ui_shape_combo, ui_corrective
from .ui_lib import ui_rig
from .ui_lib import ui_anim
from .ui_lib import ui_model
from . import ui_core
from ..process_manager import process
from . import core
from . import attr
from . import space
from . import geo
from . import deform
from . import rigs_util
def load_into_tool_manager(window):
if ToolManager._last_instance:
parent_name = ToolManager._last_instance.parent().objectName()
if parent_name.find('WorkspaceControl') > -1:
window.show()
window_name = window.parent().objectName()
cmds.workspaceControl(window_name, e = True, tabToControl = (parent_name,-1))#, uiScript = command, li = False, retain = False)
if not ToolManager._last_instance:
window.show()
#window_name = window.parent().objectName()
#cmds.workspaceControl(window_name, e = True)#, tabToControl = (parent_name,-1))#, uiScript = command, li = False, retain = False)
if hasattr(window, 'initialize_settings'):
window.show()
window.initialize_settings()
def pose_manager(shot_sculpt_only = False):
window = ui_rig.pose_manager(shot_sculpt_only)
load_into_tool_manager(window)
def shape_combo():
window = ui_rig.shape_combo()
load_into_tool_manager(window)
def picker():
window = ui_rig.picker()
if ToolManager._last_instance:
ToolManager._last_instance.add_tab(window, window.title)
def tool_manager(name = None, directory = None):
workspace_name = ToolManager.title + 'WorkspaceControl'
ui_core.delete_workspace_control(workspace_name)
manager = ToolManager(name)
workspace_control = manager.title + 'WorkspaceControl'
if not ui_core.was_floating(manager.title):
tab_name = ui_core.get_stored_tab(manager.title)
manager.show()
ui_core.add_tab(workspace_control, tab_name)
else:
manager.show()
if directory:
manager.set_directory(directory)
return manager
def process_manager(directory = None):
ui_core.delete_workspace_control(ui_rig.ProcessMayaWindow.title + 'WorkspaceControl')
window = ui_rig.ProcessMayaWindow()
if directory:
window.set_directory(directory)
window.show()
return window
def ramen():
ui_core.delete_workspace_control(ui_rig.RamenMayaWindow.title + 'WorkspaceControl')
window = ui_rig.RamenMayaWindow()
window.show()
return window
def script_manager(directory):
ui_core.delete_workspace_control(ui_rig.ScriptMayaWindow.title + 'WorkspaceControl')
window = ui_rig.ScriptMayaWindow()
window.set_directory(directory)
window.show()
return window
class ToolManager(ui_core.MayaDirectoryWindowMixin):
#class ToolManager(ui_core.MayaDockMixin, qt_ui.BasicWidget):
#class ToolManager(ui_core.MayaDockMixin,qt.QWidget):
title = (util.get_custom('vetala_name', 'VETALA') + ' HUB')
#_last_instance = None
def __init__(self,name = None):
if name:
self.title = name
self.default_docks = []
self.docks = []
super(ToolManager, self).__init__()
self.setWindowTitle(self.title)
ui_core.new_tool_signal.signal.connect(load_into_tool_manager)
def _build_widgets(self):
self.main_layout.setAlignment(qt.QtCore.Qt.AlignTop)
header_layout = qt.QHBoxLayout()
version = qt.QLabel('%s' % util_file.get_vetala_version())
version.setMaximumHeight(30)
header_layout.addWidget(version)
self.main_layout.addLayout(header_layout)
self.rigging_widget = ui_rig.RigManager()
self.main_layout.addWidget(self.rigging_widget)
def add_tab(self, widget, name):
self.add_dock(widget, name)
def add_dock(self, widget , name):
self.dock_window.add_dock(widget, name)
def set_directory(self, directory):
super(ToolManager, self).set_directory(directory)
self.rigging_widget.set_directory(directory)
class Dock(ui_core.MayaBasicMixin,qt_ui.BasicWindow):
def __init__(self, name = None):
self.docks = []
super(Dock, self).__init__()
def _get_dock_widgets(self):
children = self.children()
found = []
for child in children:
if isinstance(child, qt.QDockWidget):
found.append(child)
return found
def _build_widgets(self):
self.main_widget.setSizePolicy(qt.QSizePolicy.Minimum, qt.QSizePolicy.Minimum)
self.centralWidget().hide()
self.setTabPosition(qt.QtCore.Qt.TopDockWidgetArea, qt.QTabWidget.West)
self.setDockOptions( self.AllowTabbedDocks)
def add_dock(self, widget , name):
docks = self._get_dock_widgets()
for dock in docks:
if dock.windowTitle() == name:
dock.deleteLater()
dock.close()
old_parent = widget.parent()
old_parent_name = None
if old_parent:
old_parent_name = old_parent.objectName()
dock_widget = ui_core.MayaDockWidget(self)
dock_widget.setWindowTitle(name)
dock_widget.setWidget(widget)
if old_parent_name and old_parent_name.find('Mixin') > -1:
old_parent.close()
cmds.deleteUI(old_parent_name)
self.addDockWidget(qt.QtCore.Qt.TopDockWidgetArea, dock_widget)
if docks:
self.tabifyDockWidget( docks[-1], dock_widget)
dock_widget.show()
dock_widget.raise_()
return dock_widget
# --- File: xtbservice/models.py (cheminfo-py/xtbservice, MIT) ---
# -*- coding: utf-8 -*-
from dataclasses import dataclass
from typing import Dict, List, Optional
import numpy as np
from ase import Atoms
from pydantic import BaseModel, Field, validator
ALLOWED_METHODS = ("GFNFF", "GFN2xTB", "GFN1xTB")
ALLOWED_FF = ("uff", "mmff94", "mmff94s")
@dataclass
class OptimizationResult:
atoms: Atoms
forces: np.ndarray
energy: float
class IRResult(BaseModel):
wavenumbers: List[float] = Field(None, description="List of wavenumbers in cm^-1")
intensities: List[float] = Field(
None, description="List of IR intensities in (D/Å)^2 amu^-1"
)
ramanIntensities: List[float] = Field(
None,
description="List of Raman intensities in (D/Å)^2 amu^-1, computed using Placzek and Bond Polarization (using values from Lippincott/Stuttman) approximation",
)
zeroPointEnergy: float = Field(None, description="Zero point energy in a.u.")
modes: Optional[List[dict]] = Field(
None,
description="List of dictionaries with the keys `number` - number of the mode (zero indexed), `displacements` - xyz file with the displacement vectors, `intensity` - IR intensity of the mode in D/Å)^2 amu^-1, `ramanIntensity` - Raman intensity of mode, `imaginary` - true if mode is imaginary, `mostDisplaceAtoms` - sorted list of atom indices (zero indiced) according to they displacement (Euclidean norm), `mostContributingAtoms` - most contributing atoms according to a distance criterion.",
)
mostRelevantModesOfAtoms: Optional[Dict[int, List[int]]] = Field(
None,
    description="Dictionary keyed by atom indices (zero indexed), with as values the mode indices (zero indexed) that are most relevant for the given atom",
)
mostRelevantModesOfBonds: Optional[List[dict]] = Field(
None,
    description="List of dictionaries with the keys `startAtom`, `endAtom` and `mode`",
)
hasImaginaryFrequency: bool = Field(
None, description="True if there is any mode with imaginary frequency"
)
isLinear: bool = Field(None, description="True if the molecule is linear.")
momentsOfInertia: List[float] = Field(
None,
description="Moments of inertia around principal axes. For a linear molecule one only expects two non-zero components.",
)
hasLargeImaginaryFrequency: bool = Field(
None,
description="True if there is a large imaginary frequency, indicating a failed geometry optimization.",
)
class IRRequest(BaseModel):
smiles: Optional[str] = Field(
None,
description="SMILES string of input molecule. The service will add implicit hydrogens",
)
molFile: Optional[str] = Field(
None,
description="String with molfile with expanded hydrogens. The service will not attempt to add implicit hydrogens to ensure that the atom ordering is preserved.",
)
method: Optional[str] = Field(
"GFNFF",
    description="String with method that is used for geometry optimization and calculation of the vibrational frequencies. Allowed values are `GFNFF`, `GFN2xTB`, and `GFN1xTB`. `GFNFF` is the computationally cheapest method, but can be less accurate than the xTB methods",
)
@validator("method")
def method_match(cls, v):
    if v not in ALLOWED_METHODS:
raise ValueError(f"method must be in {ALLOWED_METHODS}")
return v
class ConformerRequest(BaseModel):
smiles: Optional[str] = Field(
None,
description="SMILES string of input molecule. The service will add implicit hydrogens",
)
molFile: Optional[str] = Field(
None,
description="String with molfile with expanded hydrogens. The service will not attempt to add implicit hydrogens to ensure that the atom ordering is preserved.",
)
forceField: Optional[str] = Field(
"uff",
    description="String with the force field that is used for energy minimization. Options are 'uff', 'mmff94', and 'mmff94s'",
)
rmsdThreshold: Optional[float] = Field(
0.5, description="RMSD threshold that is used to prune conformer library."
)
maxConformers: Optional[int] = Field(
1,
description="Maximum number of conformers that are generated (after pruning).",
)
@validator("forceField")
def method_match(cls, v):
    if v not in ALLOWED_FF:
raise ValueError(f"forceField must be in {ALLOWED_FF}")
return v
class Conformer(BaseModel):
molFile: str = Field(
None, description="String with molfile.",
)
energy: str = Field(
None, description="Final energy after energy minimization.",
)
class ConformerLibrary(BaseModel):
conformers: List[Conformer]
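The `method` and `forceField` validators above both follow the same allowlist pattern. A minimal stand-alone sketch of that check (deliberately without pydantic, so it can run in isolation; `validate_method` is an illustrative name, not part of the service):

```python
ALLOWED_METHODS = ("GFNFF", "GFN2xTB", "GFN1xTB")

def validate_method(v: str) -> str:
    """Reject any method name outside the allowlist, mirroring the pydantic validator."""
    if v not in ALLOWED_METHODS:
        raise ValueError(f"method must be in {ALLOWED_METHODS}")
    return v

print(validate_method("GFNFF"))  # → GFNFF
try:
    validate_method("PM6")
except ValueError as exc:
    print("rejected:", exc)
```

With pydantic in place, the same function body sits under an `@validator("method")` decorator and pydantic calls it on model construction.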
| 40.931034 | 502 | 0.68829 | 574 | 4,748 | 5.679443 | 0.34669 | 0.046933 | 0.104294 | 0.042331 | 0.321779 | 0.312883 | 0.30092 | 0.244172 | 0.221472 | 0.221472 | 0 | 0.006223 | 0.221567 | 4,748 | 115 | 503 | 41.286957 | 0.875812 | 0.004423 | 0 | 0.22449 | 0 | 0.071429 | 0.486138 | 0.004868 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.05102 | 0 | 0.408163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b219b5d2c6acf165fc3fb183df871cbdfc2a9e9 | 3,068 | py | Python | Aprior.py | zhangmingming-chb/Aprior | 69bea22f34d20bdc9984faf1fa021fac6e60ef38 | [
"MIT"
] | null | null | null | Aprior.py | zhangmingming-chb/Aprior | 69bea22f34d20bdc9984faf1fa021fac6e60ef38 | [
"MIT"
] | null | null | null | Aprior.py | zhangmingming-chb/Aprior | 69bea22f34d20bdc9984faf1fa021fac6e60ef38 | [
"MIT"
] | null | null | null | #-*-coding:utf-8-*-
from typing import List
from itertools import chain
class Aprior():
def __init__(self, support, confidence):
self.support = support
self.confidence = confidence
def set_transactions(self, transactions: List[List[str]]) -> None:
self.transactions = transactions
def get_I(self) -> List[str]:
return sorted(set(chain(*self.transactions)))
  def F(self, items: List) -> List[List[str]]:  # accepts List[str] or List[List[str]]
    # count how many transactions contain each candidate itemset
records = {}
query_table = {}
for i in items:
for j in self.transactions:
query_table[id(i)] = i
if set(i).issubset(set(j)):
if id(i) not in records:
records[id(i)] = 1
else:
records[id(i)] += 1
    # keep the candidates that reach the support threshold (the frequent k-itemsets)
item = []
for k, v in records.items():
if v / len(self.transactions) >= self.support:
item.append(query_table[k])
item = list(map(lambda x: sorted(list(x)), item))
return item
def generate_1_items(self, I: List[str]) -> List[List[str]]:
return self.F(I)
def generate_k_items(self, k_last_items: List[List[str]]) -> List[List[str]]:
prefix = []
for i in k_last_items:
prefix.append(",".join(i[:-1]))
records = {}
for i in prefix:
records[i] = []
for i in k_last_items:
      # group itemsets that differ only in their last element
current_prefix = ",".join(i[:-1])
if current_prefix in records.keys():
records[current_prefix].append(i)
items = []
for v in records.values():
for i in range(len(v)):
for j in range(i + 1, len(v)):
temp = sorted(list(set(v[i]).union(v[j])))
          # check that the last k-1 elements also form a frequent (k-1)-itemset
if temp[1:] in k_last_items:
items.append(temp)
return self.F(items)
def k_items_result(self) -> List[List[str]]:
I = self.get_I()
items = self.generate_1_items(I)
    item_max_length = max(len(t) for t in self.transactions)
while True:
if len(items) == 1 or len(items[0]) > item_max_length:
break
last_items = items[::]
items = self.generate_k_items(items)
      # no frequent itemsets at this size; return the previous round's result
if len(items) == 0:
return last_items
return items
tran = [
['1','2','3'],
['1','2','4'],
['1','3','4'],
['1','2','3','5'],
['1','3','5'],
['2','4','5'],
['1','2','3','4']
]
# tran = [
# ['apple','banana','orange'],
# ['apple','banana','peer'],
# ['apple','orange','peer'],
# ['apple','banana','orange','mongo'],
# ['apple','orange','mongo'],
# ['banana','peer','mongo'],
# ['apple','banana','orange','peer']
# ]
ap = Aprior(support=3/7,confidence=5/7)
ap.set_transactions(tran)
print(ap.k_items_result()) # [['1', '2', '3']]
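The Apriori result above can be cross-checked with a naive brute-force miner that enumerates every candidate itemset. This sketch is independent verification code, not part of the original module; the support test uses the same `count / n >= support` comparison as `Aprior.F` to avoid floating-point boundary surprises:

```python
from itertools import chain, combinations

def frequent_itemsets(transactions, support):
    """Enumerate all itemsets and keep those meeting the support threshold."""
    items = sorted(set(chain(*transactions)))
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            count = sum(1 for t in transactions if set(cand) <= set(t))
            if count / len(transactions) >= support:
                frequent.append(list(cand))
    return frequent

tran = [['1', '2', '3'], ['1', '2', '4'], ['1', '3', '4'], ['1', '2', '3', '5'],
        ['1', '3', '5'], ['2', '4', '5'], ['1', '2', '3', '4']]
result = frequent_itemsets(tran, 3 / 7)
largest = max(len(i) for i in result)
print([i for i in result if len(i) == largest])  # → [['1', '2', '3']]
```

The largest frequent itemset found by exhaustive search matches the `[['1', '2', '3']]` that `Aprior.k_items_result` prints for the same data.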
| 29.5 | 81 | 0.496415 | 383 | 3,068 | 3.872063 | 0.227154 | 0.047202 | 0.051922 | 0.030344 | 0.057991 | 0.021578 | 0 | 0 | 0 | 0 | 0 | 0.021919 | 0.330834 | 3,068 | 103 | 82 | 29.786408 | 0.700438 | 0.117666 | 0 | 0.055556 | 0 | 0 | 0.009294 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.097222 | false | 0 | 0.027778 | 0.027778 | 0.222222 | 0.013889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b23c62c9bf29b77cf256e932af29e6d9da15c7b | 686 | py | Python | dlex/tf/models/base_v2.py | dvtrung/dl-torch | b49e57d10d32bb223e2d7643f2579ccc32c63a9a | [
"MIT"
] | null | null | null | dlex/tf/models/base_v2.py | dvtrung/dl-torch | b49e57d10d32bb223e2d7643f2579ccc32c63a9a | [
"MIT"
] | null | null | null | dlex/tf/models/base_v2.py | dvtrung/dl-torch | b49e57d10d32bb223e2d7643f2579ccc32c63a9a | [
"MIT"
] | null | null | null | import tensorflow as tf
from dlex import Params
from dlex.datasets.tf import Dataset
class BaseModel(tf.keras.Model):
def __init__(self, params: Params, dataset: Dataset):
super().__init__()
self.params = params
self.dataset = dataset
self._optimizer = None
self._loss = None
@property
def model(self):
    raise NotImplementedError
def compile(self):
super().compile(
optimizer=self.optimizer,
loss=self.loss,
metrics=self.metrics)
return self.model
@property
def optimizer(self):
return tf.keras.optimizers.SGD(learning_rate=0.02) | 25.407407 | 58 | 0.603499 | 76 | 686 | 5.302632 | 0.421053 | 0.039702 | 0.069479 | 0.099256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006342 | 0.310496 | 686 | 27 | 59 | 25.407407 | 0.845666 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.136364 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2476addfb055d48f5d5ac598a8041fdc9fee29 | 1,259 | py | Python | pi/commands/token/reset.py | pan-net-security/pi-bundle | 1819caede77357331465216e0355eb2499d09cb4 | [
"MIT"
] | 2 | 2017-12-15T20:50:58.000Z | 2020-10-21T15:48:48.000Z | pi/commands/token/reset.py | pan-net-security/pi-bundle | 1819caede77357331465216e0355eb2499d09cb4 | [
"MIT"
] | 1 | 2017-10-26T09:28:30.000Z | 2017-10-26T10:33:41.000Z | pi/commands/token/reset.py | pan-net-security/pi-bundle | 1819caede77357331465216e0355eb2499d09cb4 | [
"MIT"
] | null | null | null | from pi.commands.token.base import TokenBase
import json
import re
class Reset(TokenBase):
def __init__(self):
super().__init__()
def run(self):
handler = self.parse_subcommand_
handler()
def reset(self):
results = []
# currently supporting just one argument
arg_user = self.request.args[0]
# options not yet supported
# future implementation: serial - to reset one specific token failcounter
# self.request.options)
user = {'name': arg_user}
try:
reset_tokens = self.reset_tokens(user=arg_user)
if reset_tokens:
reset_tokens = json.loads(reset_tokens.content)
#print(json.dumps(reset_tokens, indent=4, sort_keys=True))
user['result'] = reset_tokens['result']['status']
else:
user['result'] = False
except Exception as e:
self.fail(e)
results.append(user)
self.response.content(results, template='token_reset').send()
@property
def parse_subcommand_(self):
if self.request.args:
return self.reset
self.fail("This command requires at least one argument and none was passed.") | 26.787234 | 85 | 0.599682 | 144 | 1,259 | 5.076389 | 0.534722 | 0.105335 | 0.04104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002286 | 0.305004 | 1,259 | 47 | 85 | 26.787234 | 0.833143 | 0.17077 | 0 | 0 | 0 | 0 | 0.099134 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0.034483 | 0.103448 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
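The `reset` method decodes a JSON response body of the shape `{'result': {'status': ...}}` and falls back to `False` when no response came back. A small sketch of that decoding step; `FakeResponse` and `extract_status` are hypothetical stand-ins for the HTTP response object and the inline logic:

```python
import json

class FakeResponse:
    """Stand-in for the HTTP response object returned by reset_tokens."""
    def __init__(self, content):
        self.content = content

def extract_status(response):
    # mirrors: json.loads(reset_tokens.content)['result']['status'], else False
    if not response:
        return False
    payload = json.loads(response.content)
    return payload['result']['status']

print(extract_status(FakeResponse(b'{"result": {"status": true}}')))  # → True
```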
2b25709c41a264855b79fbdf3a37d395af6fdc3b | 4,500 | py | Python | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | """Library for loading dynamic library files.
Python library for choosing and loading dynamic library
files compatible with the operating environment.
"""
import doctest
import sys
import os.path
import platform
from ctypes import cdll, create_string_buffer
from multiprocessing import Pool
class canaries():
"""
Wrapper class for static methods.
"""
@staticmethod
def _xdll(path):
"""
Load a library using the appropriate method.
"""
system = platform.system()
xdll = cdll
if system == 'Windows':
# pylint: disable=import-outside-toplevel
from ctypes import windll as xdll # pragma: no cover
return xdll.LoadLibrary(path)
@staticmethod
def _probe(lib):
"""
Probe whether a library has a correctly implemented
verification method.
"""
# Build input and output buffers.
treat = create_string_buffer(5)
for (i, c) in enumerate('treat'):
try:
treat[i] = c
except:
treat[i] = ord(c)
chirp = create_string_buffer(5)
# Attempt to invoke the canary method.
r = lib.canary(chirp, treat)
# Decode results.
chirp = chirp.raw
if isinstance(chirp, bytes):
chirp = chirp.decode()
# Check that results are correct.
return r == 0 and chirp == 'chirp'
@staticmethod
def _isolated(path):
"""
Method to be used by isolated probe process.
"""
return canaries._probe(canaries._xdll(path))
@staticmethod
def canary(system, path):
"""
Single-path wrapper method for convenience.
"""
paths = {}
paths[system] = [path]
obj = canaries(paths)
return obj.lib if hasattr(obj, 'lib') else None
@staticmethod
def load(paths):
"""
Wrapper method for backwards compatibility.
"""
obj = canaries(paths)
return obj.lib if hasattr(obj, 'lib') else None
def __init__(self, paths):
"""
Attempt to load a library at one of the supplied
paths based on the platform. Retains state in order
to record all exceptions and incorrect outputs.
"""
if not isinstance(paths, (str, list, dict)):
raise TypeError(
"input must be a string, list, or dictionary"
)
if isinstance(paths, dict) and\
not all(isinstance(p, (str, list)) for p in paths.values()):
raise TypeError(
"path values in dictionary must be strings or lists of strings"
)
self.lib = None
self.exceptions = []
self.outputs = []
system = platform.system()
if isinstance(paths, str):
self.lib = self._canary(system, paths)
elif isinstance(paths, list):
for path in paths:
self.lib = self._canary(system, path)
if self.lib is not None:
break
elif isinstance(paths, dict):
if system in paths:
ps = paths[system]
for path in [ps] if isinstance(ps, str) else ps:
self.lib = self._canary(system, path)
if self.lib is not None:
break
def _canary(self, system, path):
"""
Attempt to load a library file at the supplied path
and verify that its exported functions work.
"""
lib = None
# Only attempt to load object files that exist.
if os.path.exists(path):
# Confirm that the library's exported functions work.
try:
# Invoke compatibility validation method.
with Pool(1) as p:
task = p.imap(canaries._isolated, [path])
          if task.next(5):  # Process has five seconds to succeed.
lib = canaries._xdll(path)
except:
self.exceptions.append((
(system, path),
(
sys.exc_info()[0], sys.exc_info()[1],
sys.exc_info()[2].tb_lineno
)
))
return lib
# Provide direct access to static methods.
canary = canaries.canary
load = canaries.load
if __name__ == "__main__":
doctest.testmod() # pragma: no cover
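The `_canary` method above delegates the probe to a one-worker `multiprocessing.Pool` so a crashing or hanging library cannot take the main process down, and bounds the wait with `imap(...).next(5)`. A reduced sketch of that isolation pattern; `probe` is a hypothetical stand-in for `canaries._isolated` (the real probe loads and exercises the library):

```python
from multiprocessing import Pool

def probe(path):
    # stand-in for canaries._isolated: pretend a path ending in .so "chirps"
    return path.endswith('.so')

if __name__ == '__main__':
    with Pool(1) as pool:
        task = pool.imap(probe, ['libdemo.so'])
        ok = task.next(5)  # the isolated process gets five seconds to answer
    print(ok)
```

If the worker hangs, `next(5)` raises `multiprocessing.TimeoutError` instead of blocking forever, which is exactly the failure mode `_canary` converts into "library not loaded".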
| 29.220779 | 79 | 0.543778 | 496 | 4,500 | 4.866935 | 0.340726 | 0.024855 | 0.02237 | 0.021127 | 0.110605 | 0.083679 | 0.083679 | 0.083679 | 0.083679 | 0.083679 | 0 | 0.002829 | 0.371556 | 4,500 | 153 | 80 | 29.411765 | 0.850778 | 0.241778 | 0 | 0.255556 | 0 | 0 | 0.042373 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077778 | false | 0 | 0.077778 | 0 | 0.233333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b28b58e2579cbe5e2ab26c5528edcabd5571c91 | 1,074 | py | Python | docs/end-to-end/library/GeocontribOnCoordinatesLibrary.py | hcharp/geocontrib | 87ee241c737aae23eff358d2550bddba714f9c7b | [
"Apache-2.0"
] | 3 | 2020-12-02T09:44:41.000Z | 2021-04-17T13:05:30.000Z | docs/end-to-end/library/GeocontribOnCoordinatesLibrary.py | hcharp/geocontrib | 87ee241c737aae23eff358d2550bddba714f9c7b | [
"Apache-2.0"
] | 14 | 2020-01-27T09:49:33.000Z | 2021-06-14T08:04:10.000Z | docs/end-to-end/library/GeocontribOnCoordinatesLibrary.py | hcharp/geocontrib | 87ee241c737aae23eff358d2550bddba714f9c7b | [
"Apache-2.0"
] | 9 | 2020-01-16T12:37:39.000Z | 2021-04-22T09:57:59.000Z | # Copyright (c) 2017-2021 Neogeo-Technologies.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from selenium.webdriver.common.action_chains import ActionChains
from utils import get_driver
def geocontrib_click_at_coordinates(pos_x, pos_y):
actions = ActionChains(get_driver())
my_map = get_driver().find_element_by_xpath("//html/body/main/div/div/form/div[3]/div/div/div[1]/div[4]/div")
actions.move_to_element_with_offset(my_map, pos_x, pos_y).click().perform()
get_driver().find_element_by_xpath("//button[@type='submit']").click()
| 42.96 | 113 | 0.76257 | 168 | 1,074 | 4.732143 | 0.630952 | 0.075472 | 0.032704 | 0.040252 | 0.067925 | 0.067925 | 0 | 0 | 0 | 0 | 0 | 0.016112 | 0.133147 | 1,074 | 24 | 114 | 44.75 | 0.837809 | 0.546555 | 0 | 0 | 0 | 0.142857 | 0.182203 | 0.182203 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2938cdd0a73902522794999f575e5ff3fb8b89 | 3,341 | py | Python | torch/quantization/fx/qconfig_utils.py | deltabravozulu/pytorch | c6eef589971e45bbedacc7f65533d1b8f80a6895 | [
"Intel"
] | 1 | 2021-06-17T13:02:45.000Z | 2021-06-17T13:02:45.000Z | torch/quantization/fx/qconfig_utils.py | deltabravozulu/pytorch | c6eef589971e45bbedacc7f65533d1b8f80a6895 | [
"Intel"
] | 1 | 2022-01-18T12:17:29.000Z | 2022-01-18T12:17:29.000Z | torch/quantization/fx/qconfig_utils.py | deltabravozulu/pytorch | c6eef589971e45bbedacc7f65533d1b8f80a6895 | [
"Intel"
] | 2 | 2021-07-02T10:18:21.000Z | 2021-08-18T10:10:28.000Z | import torch
from collections import OrderedDict
from typing import Union, Callable, Any, Dict
import re
from .utils import _parent_name
QConfigAny = Union[torch.quantization.QConfig,
torch.quantization.QConfigDynamic, None]
def get_flattened_qconfig_dict(qconfig_dict):
""" flatten the global, object_type and module_name qconfig
to the same qconfig_dict so that it can be used by
propagate_qconfig_ function.
"module_name_regex" is ignored for now since it's not supported
in propagate_qconfig_, but it can be fixed later.
For example:
Input: {
"": qconfig,
"object_type": [
(torch.add, qconfig)
],
"module_name": [
("conv", qconfig)
]
}
Output: {
"": qconfig,
torch.add: qconfig,
"conv": qconfig
}
"""
flattened = dict()
if '' in qconfig_dict:
flattened[''] = qconfig_dict['']
def flatten_key(key):
if key in qconfig_dict:
for (obj, qconfig) in qconfig_dict[key].items():
flattened[obj] = qconfig
flatten_key('object_type')
flatten_key('module_name')
return flattened
def convert_dict_to_ordered_dict(qconfig_dict: Any) -> Dict[str, Dict[Any, Any]]:
""" Convert dict in qconfig_dict to ordered dict
"""
# convert a qconfig list for a type to OrderedDict
def _convert_to_ordered_dict(key, qconfig_dict):
qconfig_dict[key] = OrderedDict(qconfig_dict.get(key, []))
_convert_to_ordered_dict('object_type', qconfig_dict)
_convert_to_ordered_dict('module_name_regex', qconfig_dict)
_convert_to_ordered_dict('module_name', qconfig_dict)
return qconfig_dict
def get_object_type_qconfig(
qconfig_dict: Any,
object_type: Union[Callable, str],
fallback_qconfig: QConfigAny) -> QConfigAny:
# object_type can be
# 1. module type (call_module)
# 2. function (call_function)
# 3. string (call_method)
return qconfig_dict['object_type'].get(
object_type, fallback_qconfig)
def get_module_name_regex_qconfig(qconfig_dict, module_name, fallback_qconfig):
for regex_pattern, qconfig in \
qconfig_dict['module_name_regex'].items():
if re.match(regex_pattern, module_name):
# first match wins
return qconfig
return fallback_qconfig
def get_module_name_qconfig(qconfig_dict, module_name, fallback_qconfig):
if module_name == '':
# module name qconfig not found
return fallback_qconfig
if module_name in qconfig_dict['module_name']:
return qconfig_dict['module_name'][module_name]
else:
parent, _ = _parent_name(module_name)
return get_module_name_qconfig(qconfig_dict, parent, fallback_qconfig)
# get qconfig for module_name,
# fallback to module_name_regex_qconfig, module_type_qconfig,
# global_qconfig if necessary
def get_qconfig(qconfig_dict, module_type, module_name, global_qconfig):
module_type_qconfig = get_object_type_qconfig(
qconfig_dict, module_type, global_qconfig)
module_name_regex_qconfig = get_module_name_regex_qconfig(
qconfig_dict, module_name, module_type_qconfig)
module_name_qconfig = get_module_name_qconfig(
qconfig_dict, module_name, module_name_regex_qconfig)
return module_name_qconfig
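The precedence implemented by `get_qconfig` is: an exact `module_name` match (checked up the parent chain) beats a `module_name_regex` match, which beats an `object_type` match, which beats the global default. A stand-alone sketch of that resolution order, using plain strings in place of real qconfig objects and a simplified `_parent_name`:

```python
import re
from collections import OrderedDict

def resolve_qconfig(qconfig_dict, module_type, module_name, global_qconfig):
    """Mirror get_qconfig's fallback chain with plain dicts."""
    # object-type match falls back to the global default
    qconfig = qconfig_dict.get('object_type', {}).get(module_type, global_qconfig)
    # a module-name regex match overrides the type match (first match wins)
    for pattern, cfg in qconfig_dict.get('module_name_regex', OrderedDict()).items():
        if re.match(pattern, module_name):
            qconfig = cfg
            break
    # an exact module-name match, checked up the parent chain, wins outright
    name = module_name
    while name:
        if name in qconfig_dict.get('module_name', {}):
            return qconfig_dict['module_name'][name]
        name = name.rsplit('.', 1)[0] if '.' in name else ''
    return qconfig

qd = {'object_type': {'Linear': 'per-type'},
      'module_name_regex': OrderedDict([(r'encoder\..*', 'per-regex')]),
      'module_name': {'encoder.layer0': 'per-name'}}
print(resolve_qconfig(qd, 'Linear', 'encoder.layer0.fc', 'global'))  # → per-name
print(resolve_qconfig(qd, 'Linear', 'decoder.fc', 'global'))         # → per-type
```

Note how `encoder.layer0.fc` resolves via its parent `encoder.layer0`, just as `get_module_name_qconfig` recurses through `_parent_name`.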
| 33.41 | 81 | 0.701287 | 427 | 3,341 | 5.128806 | 0.192037 | 0.141553 | 0.057534 | 0.067123 | 0.267123 | 0.210959 | 0.130594 | 0.116895 | 0.042009 | 0 | 0 | 0.001148 | 0.2176 | 3,341 | 99 | 82 | 33.747475 | 0.836649 | 0.243939 | 0 | 0.037736 | 0 | 0 | 0.045811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150943 | false | 0 | 0.09434 | 0.018868 | 0.415094 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2a9c9a4b580fa5f2d5bbe38b2d93f37f8e19c1 | 3,062 | py | Python | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | from typing import List
import urllib.parse
from .adapter import Adapter
__all__ = ['Wrapper']
class Wrapper:
"""Wrapper class for the Adapter class.
This class is used to wrap the Adapter class and provide a more
convenient interface for the user.
"""
def __init__(self, adapter: Adapter) -> None:
self._adapter = adapter
def get_bulk(self, params=None) -> dict:
"""Get a list of researchers from the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
Returns
-------
:class:`dict`
"""
return self._adapter.get_bulk(params=params)
def set_bulk(self, jsondata=None, params=None) -> dict:
    """Register bulk data with the API.
Parameters
----------
jsondata : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
if jsondata is None:
jsondata = {}
    data = self._adapter.set_bulk(params=params, jsondata=jsondata)
    # the returned url carries the bulk id as a query parameter
    bulk_data = {}
    bulk_data['id'] = urllib.parse.parse_qs(urllib.parse.urlparse(data['url']).query)['id'][0]
    error = self._adapter.get_bulk_results(bulk_data)
    bulk_data['display_type'] = "success"
    succeed = self._adapter.get_bulk_results(bulk_data)
    print(succeed)
    print(error)
    return succeed
def set_bulk_apply(self, params=None) -> dict:
    """Apply a previously registered bulk request via the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
return self._adapter.set_bulk_apply(params=params)
def get_bulk_results(self, params=None) -> dict:
    """Get the results of a bulk request from the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
return self._adapter.get_bulk_results(params=params)
def search_researcher(self, payload=None) -> dict:
"""Search for a researcher in the API.
Parameters
----------
payload : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
Returns
-------
:class:`dict`
"""
if payload is None:
payload = {}
return self._adapter.search_researcher(payload)
def usage(self) -> dict:
return self._adapter.get_usage()
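`set_bulk` recovers the bulk job id from the query string of the URL the API returns, using only the standard library. That extraction can be sketched on its own; the URL below is a made-up example, not a real researchmap endpoint:

```python
import urllib.parse

def bulk_id_from_url(url: str) -> str:
    """Pull the `id` query parameter out of a result URL."""
    query = urllib.parse.urlparse(url).query
    return urllib.parse.parse_qs(query)['id'][0]

url = 'https://example.invalid/bulk/results?id=20240101-42&display_type=success'
print(bulk_id_from_url(url))  # → 20240101-42
```

`parse_qs` returns lists of values per key, hence the trailing `[0]` in both this sketch and `set_bulk`.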
| 26.17094 | 95 | 0.612998 | 387 | 3,062 | 4.726098 | 0.173127 | 0.045927 | 0.039366 | 0.06561 | 0.588846 | 0.562603 | 0.551668 | 0.49754 | 0.49754 | 0.49754 | 0 | 0.000448 | 0.270738 | 3,062 | 116 | 96 | 26.396552 | 0.81863 | 0.423253 | 0 | 0.153846 | 0 | 0 | 0.023125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179487 | false | 0 | 0.076923 | 0.025641 | 0.435897 | 0.102564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2dc91a67e56678c390a41ec58ff7af3ed3237a | 2,888 | py | Python | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | from typing import List
import cv2
import numpy
import magicmind.python.runtime as mm
from magicmind.python.common.types import get_numpy_dtype_by_datatype
import os
import sys
def preprocess(img, input_size, swap=(2, 0, 1)):
if len(img.shape) == 3:
padded_img = numpy.ones((input_size[0], input_size[1], 3), dtype=numpy.uint8) * 114
else:
padded_img = numpy.ones(input_size, dtype=numpy.uint8) * 114
r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])
resized_img = cv2.resize(
img,
(int(img.shape[1] * r), int(img.shape[0] * r)),
interpolation=cv2.INTER_LINEAR,
).astype(numpy.uint8)
padded_img[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized_img
padded_img = padded_img.transpose(swap)
padded_img = numpy.ascontiguousarray(padded_img, dtype=numpy.float32)
return padded_img, r
def load_multi_image(data_paths: List[str], input_wh: List[int], target_dtype: mm.DataType = mm.DataType.FLOAT32) -> numpy.ndarray:
# Load multiple pre-processed image into a NCHW style ndarray
images = []
for path in data_paths:
img = cv2.imread(path)
images.append(preprocess(img, input_wh)[0][numpy.newaxis, :])
ret = numpy.concatenate(tuple(images), axis = 0)
return numpy.ascontiguousarray(
ret.astype(dtype = get_numpy_dtype_by_datatype(target_dtype)))
class FixedCalibData(mm.CalibDataInterface):
def __init__(self, shape: mm.Dims, data_type: mm.DataType, max_samples: int, data_paths: str):
super().__init__()
self.shape_ = shape
self.data_type_ = data_type
self.batch_size_ = shape.GetDimValue(0)
self.input_wh = [shape.GetDimValue(3), shape.GetDimValue(2)]
data_lines = [itd.strip() for itd in open(data_paths).readlines() if os.path.isfile(itd.strip())]
self.max_samples_ = min(max_samples, len(data_lines))
self.data_paths_ = data_lines
self.current_sample_ = None
self.outputed_sample_count = 0
def get_shape(self):
return self.shape_
def get_data_type(self):
return self.data_type_
def get_sample(self):
return self.current_sample_
def next(self):
beg_ind = self.outputed_sample_count
end_ind = self.outputed_sample_count + self.batch_size_
if end_ind > self.max_samples_:
return mm.Status(mm.Code.OUT_OF_RANGE, "End reached")
self.current_sample_ = load_multi_image(self.data_paths_[beg_ind:end_ind],
input_wh = self.input_wh,
target_dtype = self.data_type_)
self.outputed_sample_count = end_ind
return mm.Status.OK()
def reset(self):
self.current_sample_ = None
self.outputed_sample_count = 0
return mm.Status.OK()
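`preprocess` letterboxes the image: it scales by the smaller of the two height/width ratios and pads the remainder with the value 114. The scale-and-size arithmetic can be checked without OpenCV; `letterbox_geometry` is an illustrative helper, not part of the module:

```python
def letterbox_geometry(img_h, img_w, input_h, input_w):
    """Return the scale factor and the resized (h, w) used before padding with 114."""
    r = min(input_h / img_h, input_w / img_w)
    return r, (int(img_h * r), int(img_w * r))

# a 400x600 frame letterboxed into a 300x300 network input:
# the width ratio (0.5) is the limiting one, so the image becomes 200x300
r, (new_h, new_w) = letterbox_geometry(400, 600, 300, 300)
print(r, new_h, new_w)  # → 0.5 200 300
```

The remaining 100 rows of the 300x300 canvas stay at the pad value, exactly the `padded_img[...] = resized_img` assignment in `preprocess`.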
| 35.219512 | 132 | 0.655125 | 400 | 2,888 | 4.465 | 0.28 | 0.040314 | 0.050392 | 0.06439 | 0.182531 | 0.113102 | 0.050392 | 0.050392 | 0.050392 | 0 | 0 | 0.017671 | 0.235803 | 2,888 | 81 | 133 | 35.654321 | 0.791572 | 0.020429 | 0 | 0.096774 | 0 | 0 | 0.003891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.112903 | 0.048387 | 0.387097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2eb592acc995c4132c3288aeaefe49afa5e490 | 66,478 | py | Python | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | null | null | null | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | null | null | null | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | 1 | 2022-01-26T08:17:50.000Z | 2022-01-26T08:17:50.000Z | #!/usr/bin/env python
"""Action (was: problem) registration, GUI-toolkit-independent code
"""
import os
# import sys
import pathlib
import functools
import probreg.gui as gui
import probreg.shared as shared # import DataError, et_projnames
import probreg.dml_django as dmls
import probreg.dml_xml as dmlx
LIN = os.name == 'posix'
class Page():
"base class for notebook page"
def __init__(self, parent, pageno, standard=True):
self.parent = parent
self.pageno = pageno
self.is_text_page = standard
if standard:
self.gui = gui.PageGui(parent, self)
def get_toolbar_data(self, textfield):
"return texts, shortcuts and picture names for setting op toolbar"
return (('&Bold', 'Ctrl+B', 'icons/sc_bold', 'Toggle Bold', textfield.text_bold,
textfield.update_bold),
('&Italic', 'Ctrl+I', 'icons/sc_italic', 'Toggle Italic', textfield.text_italic,
textfield.update_italic),
('&Underline', 'Ctrl+U', 'icons/sc_underline', 'Toggle Underline',
textfield.text_underline, textfield.update_underline),
('Strike&through', 'Ctrl+~', 'icons/sc_strikethrough', 'Toggle Strikethrough',
textfield.text_strikethrough),
# ("Toggle &Monospace", 'Shift+Ctrl+M', 'icons/text',
# 'Switch using proportional font off/on', textfield.toggle_monospace),
(),
("&Enlarge text", 'Ctrl+Up', 'icons/sc_grow', 'Use bigger letters',
textfield.enlarge_text),
("&Shrink text", 'Ctrl+Down', 'icons/sc_shrink', 'Use smaller letters',
textfield.shrink_text),
(),
('To &Lower Case', 'Shift+Ctrl+L', 'icons/sc_changecasetolower',
'Use lower case letters', textfield.case_lower),
('To &Upper Case', 'Shift+Ctrl+U', 'icons/sc_changecasetoupper',
'Use upper case letters', textfield.case_upper),
(),
("Indent &More", 'Ctrl+]', 'icons/sc_incrementindent', 'Increase indentation',
textfield.indent_more),
("Indent &Less", 'Ctrl+[', 'icons/sc_decrementindent', 'Decrease indentation',
textfield.indent_less),
(),
# ("Normal Line Spacing", '', 'icons/sc_spacepara1',
# 'Set line spacing to 1', textfield.linespacing_1),
# ("1.5 Line Spacing", '', 'icons/sc_spacepara15',
# 'Set line spacing to 1.5', textfield.linespacing_15),
# ("Double Line Spacing", '', 'icons/sc_spacepara2',
# 'Set line spacing to 2', textfield.linespacing_2),
# (),
("Increase Paragraph &Spacing", '', 'icons/sc_paraspaceincrease',
'Increase spacing between paragraphs', textfield.increase_paragraph_spacing),
("Decrease &Paragraph Spacing", '', 'icons/sc_paraspacedecrease',
'Decrease spacing between paragraphs', textfield.decrease_paragraph_spacing))
def vulp(self):
        """fill the fields with the data to be shown, plus other initialisations
        method to be called before showing the page"""
self.initializing = True
self.parent.parent.enable_settingsmenu()
if self.parent.current_tab == 0:
text = self.seltitel
else:
            state = self.parent.current_tab == 1 and self.parent.newitem
self.enable_buttons(state)
text = self.parent.tabs[self.parent.current_tab].split(None, 1)
if self.parent.pagedata:
text = str(self.parent.pagedata.id) + ' ' + self.parent.pagedata.titel
self.parent.parent.set_windowtitle("{} | {}".format(self.parent.parent.title, text))
self.parent.parent.set_statusmessage()
if 1 < self.parent.current_tab < 6:
self.oldbuf = ''
is_readonly = False
if self.parent.pagedata is not None:
if self.parent.current_tab == 2 and self.parent.pagedata.melding:
self.oldbuf = self.parent.pagedata.melding
if self.parent.current_tab == 3 and self.parent.pagedata.oorzaak:
self.oldbuf = self.parent.pagedata.oorzaak
if self.parent.current_tab == 4 and self.parent.pagedata.oplossing:
self.oldbuf = self.parent.pagedata.oplossing
if self.parent.current_tab == 5 and self.parent.pagedata.vervolg:
self.oldbuf = self.parent.pagedata.vervolg
# self.text1.setReadOnly(self.parent.pagedata.arch)
is_readonly = self.parent.pagedata.arch
# print('in Page.vulp, setting text:', self.oldbuf)
self.gui.set_textarea_contents(self.oldbuf)
# print('in Page.vulp, set text')
if not is_readonly:
is_readonly = not self.parent.parent.is_user
self.gui.set_text_readonly(is_readonly)
self.gui.enable_toolbar(self.parent.parent.is_user)
# print('in Page.vulp, getting text')
self.oldbuf = self.gui.get_textarea_contents() # make sure it's rich text
# print('in Page.vulp, got text:', self.oldbuf)
self.gui.move_cursor_to_end()
# print(' set cursor to end')
self.initializing = False
# self.parent.checked_for_leaving = True - wx version only, belongs with the gui
# print('end of Page.vulp')
def readp(self, pid):
"lezen van een actie"
if self.parent.pagedata: # clean up leftovers from the previous action
self.parent.pagedata.clear()
self.parent.pagedata = shared.Actie[self.parent.parent.datatype](self.parent.fnaam, pid,
self.parent.parent.user)
self.parent.parent.imagelist = self.parent.pagedata.imagelist
self.parent.old_id = self.parent.pagedata.id
self.parent.newitem = False
def nieuwp(self, *args):
"""voorbereiden opvoeren nieuwe actie"""
shared.log('opvoeren nieuwe actie')
self.parent.newitem = True
if self.leavep():
if self.parent.current_tab == 0:
self.parent.parent.gui.enable_book_tabs(True, tabfrom=1)
self.parent.pagedata = shared.Actie[self.parent.parent.datatype](self.parent.fnaam, 0,
self.parent.parent.user)
self.parent.pagedata.events.append((shared.get_dts(), 'Actie opgevoerd'))
self.parent.parent.imagelist = self.parent.pagedata.imagelist
if self.parent.current_tab == 1:
self.vulp() # to clear the fields
self.gui.set_focus()
else:
self.goto_page(1, check=False)
else:
self.parent.newitem = False
shared.log("leavep() geeft False: nog niet klaar met huidige pagina")
def leavep(self):
"afsluitende acties uit te voeren alvorens de pagina te verlaten"
newbuf = []
if self.parent.current_tab > 0:
newbuf = self.oldbuf
newbuf = self.gui.build_newbuf()
ok_to_leave = True
if self.parent.current_tab == 0:
pass
elif self.parent.changed_item:
message = "\n".join(("De gegevens op de pagina zijn gewijzigd, ",
"wilt u de wijzigingen opslaan voordat u verder gaat?"))
ok, cancel = gui.ask_cancel_question(self.gui, message)
if ok:
ok_to_leave = self.savep()
elif cancel:
# self.parent.checked_for_leaving = ok_to_leave = False
ok_to_leave = False
if not cancel:
self.parent.parent.gui.enable_all_other_tabs(True)
return ok_to_leave
def savep(self, *args):
"gegevens van een actie opslaan afhankelijk van pagina"
if not self.gui.can_save:
return False
self.enable_buttons(False)
if self.parent.current_tab <= 1 or self.parent.current_tab == 6:
return False
text = self.gui.get_textarea_contents()
event_text = ''
if self.parent.current_tab == 2 and text != self.parent.pagedata.melding:
self.oldbuf = self.parent.pagedata.melding = text
event_text = "Meldingtekst aangepast"
if self.parent.current_tab == 3 and text != self.parent.pagedata.oorzaak:
self.oldbuf = self.parent.pagedata.oorzaak = text
event_text = "Beschrijving oorzaak aangepast"
if self.parent.current_tab == 4 and text != self.parent.pagedata.oplossing:
self.oldbuf = self.parent.pagedata.oplossing = text
event_text = "Beschrijving oplossing aangepast"
if self.parent.current_tab == 5 and text != self.parent.pagedata.vervolg:
self.oldbuf = self.parent.pagedata.vervolg = text
event_text = "Tekst vervolgactie aangepast"
if event_text:
self.parent.pagedata.events.append((shared.get_dts(), event_text))
self.update_actie()
self.parent.pages[0].gui.set_item_text(self.parent.pages[0].gui.get_selection(), 3,
self.parent.pagedata.updated)
return True
def savepgo(self, *args):
"opslaan en naar de volgende pagina"
if not self.gui.can_saveandgo():
return
if self.savep():
self.goto_next()
else:
self.enable_buttons()
def restorep(self, *args):
"oorspronkelijke (laatst opgeslagen) inhoud van de pagina herstellen"
# reset font - are these also needed: case? indent? linespacing? paragraphspacing?
if self.parent.current_tab > 1:
self.gui.reset_font()
self.vulp()
def on_text(self, *args):
"""callback voor EVT_TEXT e.d.
de initializing flag wordt uitgevraagd omdat deze event ook tijdens vulp()
en tijdens vul_combos plaatsvindt"""
if not self.initializing:
newbuf = self.gui.build_newbuf()
changed = newbuf != self.oldbuf
self.enable_buttons(changed)
def on_choice(self):
"callback voor combobox (? wordt on_text hier niet gewoon voor gebruikt?)"
self.enable_buttons()
def update_actie(self):
"""pass page data from the GUI to the internal storage
"""
self.parent.pagedata.imagecount = self.parent.parent.imagecount
self.parent.pagedata.imagelist = self.parent.parent.imagelist
if self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.write(self.parent.parent.user)
else:
self.parent.pagedata.write()
self.parent.pagedata.read() # to refresh the "updated" attribute
if self.parent.newitem:
# create a new entry in the table for panel 0
newindex = len(self.parent.data) # + 1
pagegui = self.parent.pages[1].gui
itemdata = (pagegui.get_text('date'),
" - ".join((pagegui.get_text('proc'),
pagegui.get_text('desc'))),
pagegui.get_choice_data('stat')[0],
pagegui.get_choice_data('cat')[0],
pagegui.get_text('id'))
self.parent.data[newindex] = itemdata # why not append?
# also create a new entry in the visual tree
page = self.parent.pages[0]
self.parent.current_item = page.gui.add_listitem(itemdata[0].split(' ')[0])
page.gui.set_selection()
self.parent.newitem = False
self.parent.rereadlist = True
def enable_buttons(self, state=True):
"buttons wel of niet bruikbaar maken"
self.gui.enable_buttons(state)
self.parent.changed_item = state
if self.parent.current_tab > 0:
self.parent.parent.gui.enable_all_other_tabs(not state)
def goto_actie(self, *args):
"naar startpagina actie gaan"
self.goto_page(1)
def goto_next(self, *args):
"naar de volgende pagina gaan"
if not self.leavep():
return
next_tab = self.parent.current_tab + 1
if next_tab >= len(self.parent.pages):
next_tab = 0
self.parent.parent.gui.set_page(next_tab)
def goto_prev(self, *args):
"naar de vorige pagina gaan"
if not self.leavep():
return
next_tab = self.parent.current_tab - 1
if next_tab < 0:
next_tab = len(self.parent.pages) - 1
self.parent.parent.gui.set_page(next_tab)
def goto_page(self, page_num, check=True):
"naar de aangegeven pagina gaan"
if check and not self.leavep():
return
if 0 <= page_num < len(self.parent.pages):
self.parent.parent.gui.set_page(page_num)
def get_textarea_contents(self):
"get the page text"
return self.gui.get_textarea_contents()
class Page0(Page):
"pagina 0: overzicht acties"
def __init__(self, parent):
self.parent = parent
super().__init__(parent, pageno=0, standard=False)
self.selection = 'excl. gearchiveerde'
self.sel_args = {}
self.sorted = (0, "A")
widths = [94, 24, 146, 90, 400] if LIN else [64, 24, 114, 72, 292]
if self.parent.parent.datatype == shared.DataType.SQL.name:
widths[4] = 90 if LIN else 72
extra = 310 if LIN else 220
widths.append(extra)
self.gui = gui.Page0Gui(parent, self, widths)
self.gui.enable_buttons()
self.sort_via_options = False
def vulp(self):
"""te tonen gegevens invullen in velden e.a. initialisaties
methode aan te roepen voorafgaand aan het tonen van de pagina
"""
# print('in Page0.vulp')
self.saved_sortopts = None
if (self.parent.parent.datatype == shared.DataType.SQL.name
and self.parent.parent.filename):
if self.parent.parent.is_user:
self.saved_sortopts = dmls.SortOptions(self.parent.parent.filename)
test = bool(self.saved_sortopts.load_options())
self.sort_via_options = test
value = not test
else:
value = False
self.gui.enable_sorting(value)
self.seltitel = 'alle meldingen ' + self.selection
super().vulp()
msg = ''
if self.parent.rereadlist:
self.parent.data = {}
select = self.sel_args.copy()
arch = "" # "alles"
if "arch" in select:
arch = select.pop("arch")
data = shared.get_acties[self.parent.parent.datatype](self.parent.fnaam, select,
arch, self.parent.parent.user)
for idx, item in enumerate(data):
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.data[idx] = (item[0],
item[1],
".".join((item[3][1], item[3][0])),
".".join((item[2][1], item[2][0])),
item[5],
item[4],
item[6] == 'arch')
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.data[idx] = (item[0],
item[1],
".".join((item[5], item[4])),
".".join((str(item[3]), item[2])),
item[8],
item[6],
item[7],
item[9])
msg = self.populate_list()
# needed for sorting? No idea, but if it serves a purpose it should move
# to the gui module, because sortItems is a qt method
# if self.parent.parent.datatype == shared.DataType.XML.name:
# self.gui.p0list.sortItems(self.sorted[0], sortorder[self.sorted[1]]) # , True)
#
self.parent.current_item = self.gui.get_first_item()
self.parent.parent.enable_all_book_tabs(False)
self.gui.enable_buttons()
if self.gui.has_selection():
self.parent.parent.enable_all_book_tabs(True)
self.gui.set_selection()
self.gui.ensure_visible(self.parent.current_item)
self.parent.parent.set_statusmessage(msg)
def populate_list(self):
"list control vullen"
self.gui.clear_list()
self.parent.rereadlist = False
items = self.parent.data.items()
# dict.items() is never None, so only an empty result needs handling
if not items:
return
for _, data in items:
new_item = self.gui.add_listitem(data[0])
self.gui.set_listitem_values(new_item, [data[0]] + list(data[2:]))
def change_selected(self, item_n):
"""callback voor wijzigen geselecteerd item, o.a. door verplaatsen van de
cursor of door klikken
"""
self.parent.current_item = item_n
self.gui.set_selection()
if not self.parent.newitem:
selindx = self.gui.get_selected_action()
self.readp(selindx)
hlp = "&Herleef" if self.parent.pagedata.arch else "&Archiveer"
self.gui.set_archive_button_text(hlp)
def activate_item(self):
"""callback voor activeren van item, door doubleclick of enter
"""
self.goto_actie()
def select_items(self, event=None):
"""tonen van de selectie dialoog
niet alleen selecteren op tekst(deel) maar ook op status, soort etc
"""
args = self.sel_args, None
if self.parent.parent.datatype == shared.DataType.SQL.name:
data = dmls.SelectOptions(self.parent.fnaam, self.parent.parent.user)
args, sel_args = data.load_options(), {}
for key, value in args.items():
if key == 'nummer':
for item in value: # split into idgt, id and idlt
if len(item) == 1:
sel_args['id'] = 'and' if item[0] == 'en' else 'or'
elif item[1] == 'GT':
sel_args['idgt'] = item[0]
elif item[1] == 'LT':
sel_args['idlt'] = item[0]
# elif key == 'arch':
# sel_args[key] = {0: 'narch', 1: 'arch', 2: 'alles'}[value]
elif value:
sel_args[key] = value
args = sel_args, data
while True:
test = gui.show_dialog(self.gui, gui.SelectOptionsDialog, args)
if not test:
break
self.parent.rereadlist = True
try:
self.vulp()
except (dmlx.DataError, dmls.DataError) as msg:
self.parent.rereadlist = False
gui.show_message(self, str(msg))
else:
break
def sort_items(self, *args):
"""tonen van de sorteer-opties dialoog
sortering mogelijk op datum/tijd, soort, titel, status via schermpje met
2x4 comboboxjes waarin je de volgorde van de rubrieken en de sorteervolgorde
per rubriek kunt aangeven"""
sortopts, sortlist = {}, []
if self.parent.parent.datatype == shared.DataType.XML.name:
gui.show_message(self.gui, 'Sorry, multi-column sorteren werkt nog niet')
return
if self.parent.parent.datatype == shared.DataType.SQL.name:
sortopts = self.saved_sortopts.load_options()
try:
sortlist = [x[0] for x in dmls.SORTFIELDS]
except AttributeError:
pass
if not sortlist:
sortlist = list(self.parent.ctitels)
sortlist[1] = "Soort"
sortlist.insert(0, "(geen)")
args = sortopts, sortlist
test = gui.show_dialog(self.gui, gui.SortOptionsDialog, args)
if not test:
return
if self.sort_via_options:
self.gui.enable_sorting(False)
self.parent.rereadlist = True
try:
self.vulp()
# should the actual sorting perhaps still happen here (for XML)?
except (dmlx.DataError, dmls.DataError) as msg:
self.parent.rereadlist = False
gui.show_message(self, str(msg))
else:
self.gui.enable_sorting(True)
def archiveer(self, *args):
"archiveren of herleven van het geselecteerde item"
selindx = self.gui.get_selected_action()
if self.parent.parent.datatype == shared.DataType.XML.name:
selindx = shared.data2str(selindx)
else:
selindx = shared.data2int(selindx)
self.readp(selindx)
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.pagedata.arch = not self.parent.pagedata.arch
hlp = "gearchiveerd" if self.parent.pagedata.arch else "herleefd"
self.parent.pagedata.events.append((shared.get_dts(), "Actie {0}".format(hlp)))
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.set_arch(not self.parent.pagedata.arch)
self.update_actie() # self.parent.pagedata.write()
self.parent.rereadlist = True
self.vulp()
self.parent.parent.gui.set_tabfocus(0)
# the following only applies to the selection "gearchiveerd en actief"
if self.sel_args.get("arch", "") == "alles":
self.gui.ensure_visible(self.parent.current_item)
hlp = "&Herleef" if self.parent.pagedata.arch else "&Archiveer"
self.gui.set_archive_button_text(hlp)
def enable_buttons(self, value=None):
"buttons wel of niet bruikbaar maken"
if value is not None:
self.gui.enable_buttons(value)
else:
self.gui.enable_buttons()
def get_items(self):
"retrieve all listitems"
return self.gui.get_items()
def get_item_text(self, item_or_index, column):
"get the item's text for a specified column"
return self.gui.get_item_text(item_or_index, column)
def clear_selection(self):
"initialize selection criteria"
self.sel_args = {}
class Page1(Page):
"pagina 1: startscherm actie"
def __init__(self, parent):
self.parent = parent
super().__init__(parent, pageno=1, standard=False)
self.gui = gui.Page1Gui(parent, self)
def vulp(self):
"""te tonen gegevens invullen in velden e.a. initialisaties
methode aan te roepen voorafgaand aan het tonen van de pagina"""
super().vulp()
self.initializing = True
self.gui.init_fields()
self.parch = False
if self.parent.pagedata is not None: # and not self.parent.newitem:
self.gui.set_text('id', str(self.parent.pagedata.id))
self.gui.set_text('date', self.parent.pagedata.datum)
self.parch = self.parent.pagedata.arch
if self.parent.parent.datatype == shared.DataType.XML.name:
if self.parent.pagedata.titel is not None:
if " - " in self.parent.pagedata.titel:
hlp = self.parent.pagedata.titel.split(" - ", 1)
else:
hlp = self.parent.pagedata.titel.split(": ", 1)
self.gui.set_text('proc', hlp[0])
if len(hlp) > 1:
self.gui.set_text('desc', hlp[1])
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.gui.set_text('proc', self.parent.pagedata.over)
self.gui.set_text('desc', self.parent.pagedata.titel)
self.gui.set_choice('stat', self.parent.pagedata.status)
self.gui.set_choice('cat', self.parent.pagedata.soort)
self.oldbuf = self.gui.set_oldbuf()
if self.parch:
aanuit = False
self.gui.set_text('arch', "Deze actie is gearchiveerd")
self.gui.set_archive_button_text("Herleven")
else:
aanuit = True
self.gui.set_text('arch', '')
self.gui.set_archive_button_text("Archiveren")
if not self.parent.parent.is_user:
aanuit = False
self.gui.enable_fields(aanuit)
self.initializing = False
def savep(self, *args):
"opslaan van de paginagegevens"
super().savep()
proc = self.gui.get_text('proc')
self.gui.set_text('proc', proc.capitalize())
self.enable_buttons(False)
desc = self.gui.get_text('desc')
if proc == "" or desc == "":
gui.show_message(self.gui, "Beide tekstrubrieken moeten worden ingevuld")
return False
wijzig = False
procdesc = " - ".join((proc, desc))
if procdesc != self.parent.pagedata.titel:
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.pagedata.titel = procdesc
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.over = proc
self.parent.pagedata.events.append(
(shared.get_dts(), 'Onderwerp gewijzigd in "{0}"'.format(proc)))
self.parent.pagedata.titel = procdesc = desc
self.parent.pagedata.events.append(
(shared.get_dts(), 'Titel gewijzigd in "{0}"'.format(procdesc)))
wijzig = True
newstat, sel = self.gui.get_choice_data('stat')
if newstat != self.parent.pagedata.status:
self.parent.pagedata.status = newstat
self.parent.pagedata.events.append(
(shared.get_dts(), 'Status gewijzigd in "{0}"'.format(sel)))
wijzig = True
newcat, sel = self.gui.get_choice_data('cat')
if newcat != self.parent.pagedata.soort:
self.parent.pagedata.soort = newcat
self.parent.pagedata.events.append(
(shared.get_dts(), 'Categorie gewijzigd in "{0}"'.format(sel)))
wijzig = True
if self.parch != self.parent.pagedata.arch:
self.parent.pagedata.set_arch(self.parch)
hlp = "gearchiveerd" if self.parch else "herleefd"
self.parent.pagedata.events.append(
(shared.get_dts(), "Actie {0}".format(hlp)))
wijzig = True
if wijzig:
self.update_actie()
# teksten op panel 0 bijwerken
pagegui = self.parent.pages[0].gui
item = pagegui.get_selection()
pagegui.set_item_text(item, 1, self.parent.pagedata.get_soorttext()[0].upper())
pagegui.set_item_text(item, 2, self.parent.pagedata.get_statustext())
pagegui.set_item_text(item, 3, self.parent.pagedata.updated)
if self.parent.parent.datatype == shared.DataType.XML.name:
pagegui.set_item_text(item, 4, self.parent.pagedata.titel)
elif self.parent.parent.datatype == shared.DataType.SQL.name:
pagegui.set_item_text(item, 4, self.parent.pagedata.over)
pagegui.set_item_text(item, 5, self.parent.pagedata.titel)
self.oldbuf = self.gui.set_oldbuf()
return True
def archiveer(self, *args):
"archiveren/herleven"
self.parch = not self.parch
self.savep()
self.parent.rereadlist = True
self.vulp()
def vul_combos(self):
"vullen comboboxen"
self.initializing = True
self.gui.clear_stats()
self.gui.clear_cats()
for key in sorted(self.parent.stats.keys()):
text, value = self.parent.stats[key][:2]
self.gui.add_stat_choice(text, value)
for key in sorted(self.parent.cats.keys()):
text, value = self.parent.cats[key][:2]
self.gui.add_cat_choice(text, value)
self.initializing = False
def get_field_text(self, entry_type):
"return a screen field's text"
return self.gui.get_field_text(entry_type)
class Page6(Page):
"pagina 6: voortgang"
def __init__(self, parent):
super().__init__(parent, pageno=6, standard=False)
self.current_item = 0
self.oldtext = ""
self.event_list, self.event_data, self.old_list, self.old_data = [], [], [], []
self.gui = gui.Page6Gui(parent, self)
def vulp(self):
"""te tonen gegevens invullen in velden e.a. initialisaties
methode aan te roepen voorafgaand aan het tonen van de pagina"""
super().vulp()
self.initializing = True
self.gui.init_textfield()
# self.progress_text.clear()
# self.progress_text.setReadOnly(True)
if self.parent.pagedata:
self.event_list = [x[0] for x in self.parent.pagedata.events]
self.event_list.reverse()
self.old_list = self.event_list[:]
self.event_data = [x[1] for x in self.parent.pagedata.events]
self.event_data.reverse()
self.old_data = self.event_data[:]
if self.parent.parent.is_user:
text = '-- doubleclick or press Shift-Ctrl-N to add new item --'
else:
text = '-- adding new items is disabled --'
self.gui.init_list(text)
for idx, datum in enumerate(self.event_list):
self.gui.add_item_to_list(idx, datum)
if self.parent.parent.datatype == shared.DataType.SQL.name:
self.gui.set_list_callback()
# self.gui.clear_textfield() - already done in init_textfield
self.oldbuf = (self.old_list, self.old_data)
self.oldtext = ''
self.initializing = False
def savep(self, *args):
"opslaan van de paginagegevens"
super().savep()
# in case "save" was chosen right after editing a text,
# check whether the text has already been updated in self.event_data.
idx = self.current_item
hlp = self.gui.get_textfield_contents()
if idx > 0:
idx -= 1
if self.event_data[idx] != hlp:
self.event_data[idx] = hlp
self.oldtext = hlp
short_text = hlp.split("\n")[0]
if len(short_text) >= 80:
short_text = short_text[:80] + "..."
if self.parent.parent.datatype == shared.DataType.XML.name:
short_text = short_text.encode('latin-1')
self.gui.set_listitem_text(idx + 1, "{} - {}".format(self.event_list[idx], short_text))
self.gui.set_listitem_data(idx + 1)
wijzig = False
if self.event_list != self.old_list or self.event_data != self.old_data:
wijzig = True
hlp = len(self.event_list) - 1
for idx, data in enumerate(self.parent.pagedata.events):
if data != (self.event_list[hlp - idx], self.event_data[hlp - idx]):
self.parent.pagedata.events[idx] = (self.event_list[hlp - idx],
self.event_data[hlp - idx])
for idx in range(len(self.parent.pagedata.events), hlp + 1):
if self.event_data[hlp - idx]:
self.parent.pagedata.events.append((self.event_list[hlp - idx],
self.event_data[hlp - idx]))
if wijzig:
self.update_actie()
# what is this for (self.book.current_item.setText)?
# self.parent.current_item = self.parent.page0.p0list.topLevelItem(x)
# self.parent.current_item.setText(4, self.parent.pagedata.updated)
self.parent.pages[0].gui.set_item_text(self.parent.current_item, 3,
self.parent.pagedata.updated)
# this used to be self.parent.page0.p0list.currentItem().setText( -- isn't that the same?
self.old_list = self.event_list[:]
self.old_data = self.event_data[:]
self.oldbuf = (self.old_list, self.old_data)
return True
def goto_prev(self, *args):
"set the selection to the previous row, if possible"
test = self.gui.get_list_row() - 1
if test > 0:
self.gui.set_list_row(test)
def goto_next(self, *args):
"set the selection to the next row, if possible"
test = self.gui.get_list_row() + 1
if test < self.gui.get_list_rowcount():
self.gui.set_list_row(test)
def on_text(self, *args):
"""callback voor wanneer de tekst gewijzigd is
de initializing flag wordt uitgevraagd omdat deze event ook tijdens vulp()
en wijzigen van list positie plaatsvindt
"""
if self.initializing:
return
# read the contents of the text field and compare with the buffer
tekst = self.gui.get_textfield_contents()
# str(self.progress_text.get_contents()) # self.progress_list.GetItemText(ix)
if tekst != self.oldtext:
# set the buffer to the new text
self.oldtext = tekst
# make it plain text for updating the listbox later on
tekst_plat = self.gui.convert_text(self.oldtext, to='plain')
# mark that we cannot leave this screen without updating
if self.parent.parent.is_user:
self.enable_buttons()
self.current_item = self.gui.get_list_row()
if self.current_item > 0:
indx = self.current_item - 1
self.event_data[indx] = tekst
# item = self.progress_list.currentItem()
# datum = str(item.text()).split(' - ')[0]
datum = self.gui.get_listitem_text(self.current_item).split(' - ')[0]
short_text = ' - '.join((datum, tekst_plat.split("\n")[0]))
if len(short_text) >= 80:
short_text = short_text[:80] + "..."
# item.setText(short_text)
self.gui.set_listitem_text(self.current_item, short_text)
class TabOptions:
"hulp klasse bij dialoog voor mogelijke tab headers"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Tab titels"
self.data = []
for key in sorted(parent.master.book.tabs.keys()):
tab_text = parent.master.book.tabs[key].split(" ", 1)[1]
self.data.append(tab_text)
self.tekst = ["De tab titels worden getoond in de volgorde",
"zoals ze van links naar rechts staan.",
"Er kunnen geen tabs worden verwijderd of toegevoegd."]
self.editable = False
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newtabs = {}
for idx, item in enumerate(optionslist):
self.newtabs[str(idx)] = str(item)
parent.master.save_settings("tab", self.newtabs)
class StatOptions:
"hulp klasse bij dialoog voor de mogelijke statussen"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Status codes en waarden"
self.data = []
for key in sorted(parent.master.book.stats.keys()):
if parent.master.datatype == shared.DataType.XML.name:
item_text, item_value = parent.master.book.stats[key]
self.data.append(": ".join((item_value, item_text)))
elif parent.master.datatype == shared.DataType.SQL.name:
item_text, item_value, row_id = parent.master.book.stats[key]
self.data.append(": ".join((item_value, item_text, row_id)))
self.tekst = ["De waarden voor de status worden getoond in dezelfde volgorde",
"als waarin ze in de combobox staan.",
"Vóór de dubbele punt staat de code, erachter de waarde.",
"Denk erom dat als je codes wijzigt of statussen verwijdert, deze",
"ook niet meer getoond en gebruikt kunnen worden in de registratie.",
"Omschrijvingen kun je rustig aanpassen"]
self.editable = True
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newstats = {}
for sortkey, item in enumerate(optionslist):
try:
value, text = str(item).split(": ")
except ValueError:
return 'Foutieve waarde: bevat geen dubbele punt'
self.newstats[value] = (text, sortkey)
parent.master.save_settings("stat", self.newstats)
return ''
class CatOptions:
"hulp klasse bij dialoog voor de mogelijke categorieen"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Soort codes en waarden"
self.data = []
for key in sorted(parent.master.book.cats.keys()):
if parent.master.datatype == shared.DataType.XML.name:
item_value, item_text = parent.master.book.cats[key]
self.data.append(": ".join((item_text, item_value)))
elif parent.master.datatype == shared.DataType.SQL.name:
item_value, item_text, row_id = parent.master.book.cats[key]
self.data.append(": ".join((item_text, item_value, str(row_id))))
self.tekst = ["De waarden voor de soorten worden getoond in dezelfde volgorde",
"als waarin ze in de combobox staan.",
"Vóór de dubbele punt staat de code, erachter de waarde.",
"Denk erom dat als je codes wijzigt of soorten verwijdert, deze",
"ook niet meer getoond en gebruikt kunnen worden in de registratie.",
"Omschrijvingen kun je rustig aanpassen"]
self.editable = True
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newcats = {}
for sortkey, item in enumerate(optionslist):
try:
value, text = str(item).split(": ")
except ValueError:
return 'Foutieve waarde: bevat geen dubbele punt'
self.newcats[value] = (text, sortkey)
parent.master.save_settings("cat", self.newcats)
return ''
class MainWindow():
"""Hoofdscherm met menu, statusbalk, notebook en een "quit" button"""
def __init__(self, parent, fnaam="", version=None):
# if not version:
# raise ValueError('No data method specified')
self.parent = parent
self.datatype = version
self.dirname, self.filename = '', ''
self.title = 'Actieregistratie'
self.initializing = True
self.exiting = False
self.helptext = ''
# self.pagedata = None
# self.oldbuf = None
self.is_newfile = False
self.oldsort = -1
self.idlist, self.actlist, self.alist = [], [], []  # distinct lists, not aliases of one list
shared.log('fnaam is %s', fnaam)
self.projnames = dmls.get_projnames()
if fnaam:
if fnaam == 'xml' or os.path.exists(fnaam):
self.datatype = shared.DataType.XML.name
if fnaam != 'xml':
test = pathlib.Path(fnaam)
self.dirname, self.filename = test.parent, test.name
shared.log('XML: %s %s', self.dirname, self.filename)
elif fnaam == 'sql' or fnaam.lower() in [x[0] for x in self.projnames]:
self.datatype = shared.DataType.SQL.name
if fnaam == 'basic':
self.filename = '_basic'
elif fnaam != 'sql':
self.filename = fnaam.lower()
shared.log('SQL: %s', self.filename)
else:
fnaam = ''
self.gui = gui.MainGui(self)
if not self.datatype:
self.filename = ''
choice = gui.get_choice_item(None, 'Select Mode', ['XML', 'SQL'])
if choice == 'XML':
self.datatype = shared.DataType.XML.name
elif choice == 'SQL':
self.datatype = shared.DataType.SQL.name
else:
raise SystemExit('No datatype selected')
self.user = None # start without user
self.is_user = self.is_admin = False
if self.datatype == shared.DataType.XML.name:
self.user = 1 # pretend user
self.is_user = self.is_admin = True # force editability for XML mode
self.create_book()
self.gui.create_menu()
self.gui.create_actions()
self.create_book_pages()
if self.datatype == shared.DataType.XML.name:
if self.filename == "":
self.open_xml()
else:
self.startfile()
elif self.datatype == shared.DataType.SQL.name:
if self.filename:
self.open_sql(do_sel=False)
else:
self.open_sql()
self.initializing = False
def get_menu_data(self):
"""Define application menu
"""
data = [("&File", [("&Open", self.open_xml, 'Ctrl+O', " Open a new file"),
("&New", self.new_file, 'Ctrl+N', " Create a new file"),
('',),
("&Print", (("Dit &Scherm", self.print_scherm, 'Shift+Ctrl+P',
"Print the contents of the current screen"),
("Deze &Actie", self.print_actie, 'Alt+Ctrl+P',
"Print the contents of the current issue"))),
('',),
("&Quit", self.exit_app, 'Ctrl+Q', " Terminate the program")]),
("&Login", [("&Go", self.sign_in, 'Ctrl+L', " Sign in to the database")]),
("&Settings", (("&Applicatie", (("&Lettertype", self.font_settings, '',
" Change the size and font of the text"),
("&Kleuren", self.colour_settings, '',
" Change the colours of various items"))),
("&Data", (("&Tabs", self.tab_settings, '',
" Change the titles of the tabs"),
("&Soorten", self.cat_settings, '',
" Add/change type categories"),
("St&atussen", self.stat_settings, '',
" Add/change status categories"))),
("&Het leven", self.silly_menu, '',
" Change the way you look at life"))),
("&View", []),
("&Help", (("&About", self.about_help, 'F1', " Information about this program"),
("&Keys", self.hotkey_help, 'Ctrl+H', " List of shortcut keys")))]
for tabnum, tabtitle in self.book.tabs.items():
data[3][1].append(('&{}'.format(tabtitle),
functools.partial(self.gui.go_to, int(tabnum)),
'Alt+{}'.format(tabnum), "switch to tab"))
if self.datatype == shared.DataType.XML.name:
data.pop(1)
elif self.datatype == shared.DataType.SQL.name:
data[0][1][0] = ("&Other project", self.open_sql, 'Ctrl+O', " Select a project")
data[0][1][1] = ("&New", self.new_file, 'Ctrl+N', " Create a new project")
return data
def create_book(self):
"""define the tabbed interface and its subclasses
"""
self.book = self.gui.get_bookwidget()
self.book.parent = self
self.book.fnaam = ""
if self.filename and self.datatype == shared.DataType.SQL.name:
self.book.fnaam = self.filename
self.book.current_item = None
self.book.data = {}
self.book.rereadlist = True
self.lees_settings()
# print('in create book na lees_settings: book.tabs is', self.book.tabs)
self.book.ctitels = ["actie", " ", "status", "L.wijz."]
if self.datatype == shared.DataType.XML.name:
self.book.ctitels.append("titel")
elif self.datatype == shared.DataType.SQL.name:
self.book.ctitels.extend(("betreft", "omschrijving"))
self.book.current_tab = -1
self.book.pages = []
self.book.newitem = False
self.book.changed_item = True
self.book.pagedata = None
def create_book_pages(self):
"add the pages to the tabbed widget"
self.book.pages.append(Page0(self.book))
self.book.pages.append(Page1(self.book))
self.book.pages.append(Page(self.book, 2))
self.book.pages.append(Page(self.book, 3))
self.book.pages.append(Page(self.book, 4))
self.book.pages.append(Page(self.book, 5))
self.book.pages.append(Page6(self.book))
# print('in create_book_pages: book.tabs is', self.book.tabs)
for i, page in enumerate(self.book.pages):
self.gui.add_book_tab(page, "&" + self.book.tabs[i])
self.enable_all_book_tabs(False)
def not_implemented_message(self):
"information"
gui.show_message(self.gui, "Sorry, werkt nog niet")
def new_file(self, event=None):
"Menukeuze: nieuw file"
if self.datatype == shared.DataType.SQL.name:
self.not_implemented_message()
return
self.is_newfile = False
# self.dirname = str(self.dirname) # defaults to '.' so no need for `or os.getcwd()`
fname = gui.get_save_filename(self.gui, start=self.dirname)
if fname:
test = pathlib.Path(fname)
if test.suffix != '.xml':
gui.show_message(self.gui, 'Naam voor nieuw file moet wel extensie .xml hebben')
return
self.dirname, self.filename = test.parent, test.name
self.is_newfile = True
self.startfile()
self.is_newfile = False
self.enable_all_book_tabs(False)
def open_xml(self, event=None):
"Menukeuze: open file"
shared.log('in open_xml: %s', self.filename)
self.dirname = self.dirname or os.getcwd()
fname = gui.get_open_filename(self.gui, start=self.dirname)
if fname:
test = pathlib.Path(fname)
self.dirname, self.filename = test.parent, test.name
self.startfile()
def open_sql(self, event=None, do_sel=True):
"Menukeuze: open project"
shared.log('in open_sql: %s', self.filename)
current = choice = 0
data = self.projnames
if self.filename in data:
current = data.index(self.filename)
if do_sel:
choice = gui.get_choice_item(self.gui, 'Kies een project om te openen',
[": ".join((h[0], h[2])) for h in data], current)
else:
for h in data:
shared.log(h)
if h[0] == self.filename or (h[0] == 'basic' and self.filename == "_basic"):
choice = h[0]
break
if choice:
self.filename = choice.split(': ')[0]
if self.filename in ("Demo", 'basic'):
self.filename = "_basic"
self.startfile()
def print_something(self, event=None):
"""callback voor ctrl-P(rint)
vraag om printen scherm of actie, bv. met een InputDialog
"""
choices = ['huidig scherm', 'huidige actie']
choice = gui.get_choice_item(self, 'Wat wil je afdrukken?', choices)
if choice == choices[0]:
self.print_scherm()
elif choice == choices[1]:
self.print_actie()
def print_scherm(self, event=None):
"Menukeuze: print dit scherm"
self.printdict = {'lijst': [], 'actie': [], 'sections': [], 'events': []}
self.hdr = "Actie: {} {}".format(self.book.pagedata.id,
self.book.pagedata.titel)
if self.book.current_tab == 0:
self.hdr = "Overzicht acties uit " + self.filename
lijst = []
page = self.book.pages[0]
for item in page.get_items():
actie = page.get_item_text(item, 0)
started = ''
soort = page.get_item_text(item, 1)
for x in self.book.cats.values():
oms, code = x[0], x[1]
if code == soort:
soort = oms
break
status = page.get_item_text(item, 2)
l_wijz = page.get_item_text(item, 3)
titel = page.get_item_text(item, 4)
if self.datatype == shared.DataType.SQL.name:
over = titel
titel = page.get_item_text(item, 5)
l_wijz = l_wijz[:19]
actie = actie + " - " + over
started = started[:19]
if status != self.book.stats[0][0]:
if l_wijz:
l_wijz = ", laatst behandeld op " + l_wijz
l_wijz = "status: {}{}".format(status, l_wijz)
else:
hlp = "status: {}".format(status)
if l_wijz and not started:
hlp += ' op {}'.format(l_wijz)
l_wijz = hlp
lijst.append((actie, titel, soort, started, l_wijz))
self.printdict['lijst'] = lijst
elif self.book.current_tab == 1:
data = {x: self.book.pages[1].get_field_text(x) for x in ('actie', 'datum', 'oms',
'tekst', 'soort', 'status')}
self.hdr = "Informatie over actie {}: samenvatting".format(data["actie"])
self.printdict.update(data)
elif 2 <= self.book.current_tab <= 5:
title = self.book.tabs[self.book.current_tab].split(None, 1)[1]
text = self.book.pages[self.book.current_tab].get_textarea_contents()
self.printdict['sections'] = [(title, text)]
elif self.book.current_tab == 6:
events = []
for idx, data in enumerate(self.book.pages[6].event_list):
if self.datatype == shared.DataType.SQL.name:
data = data[:19]
events.append((data, self.book.pages[6].event_data[idx]))
self.printdict['events'] = events
self.gui.preview()
def print_actie(self, event=None):
"Menukeuze: print deze actie"
if self.book.pagedata is None: # or self.book.newitem:
gui.show_message(self.gui, "Wel eerst een actie kiezen om te printen")
return
self.hdr = ("Actie: {} {}".format(self.book.pagedata.id, self.book.pagedata.titel))
tekst = self.book.pagedata.titel
try:
oms, tekst = tekst.split(" - ", 1)
except ValueError:
try:
oms, tekst = tekst.split(": ", 1)
except ValueError:
oms = ''
srt = "(onbekende soort)"
for srtoms, srtcode, *_ in self.book.cats.values():
if srtcode == self.book.pagedata.soort:
srt = srtoms
break
stat = "(onbekende status)"
for statoms, statcode, *_ in self.book.stats.values():
if statcode == self.book.pagedata.status:
stat = statoms
break
self.printdict = {'lijst': [],
'actie': self.book.pagedata.id,
'datum': self.book.pagedata.datum,
'oms': oms,
'tekst': tekst,
'soort': srt,
'status': stat}
empty = "(nog niet beschreven)"
sections = [[title.split(None, 1)[1], ''] for key, title in
self.book.tabs.items() if key > 2]
sections[0][1] = self.book.pagedata.melding or empty
sections[1][1] = self.book.pagedata.oorzaak or empty
sections[2][1] = self.book.pagedata.oplossing or empty
sections[3][1] = self.book.pagedata.vervolg or ''
if not sections[3][1]:
sections.pop()
self.printdict['sections'] = sections
self.printdict['events'] = [(x, y) for x, y in self.book.pagedata.events]
self.gui.preview()
def exit_app(self, event=None):
"Menukeuze: exit applicatie"
self.exiting = True
ok_to_leave = True # while we don't have pages yet
if self.book.current_tab > -1:
ok_to_leave = self.book.pages[self.book.current_tab].leavep()
if ok_to_leave:
self.gui.exit()
def sign_in(self, *args):
"""aanloggen in SQL/Django mode
"""
logged_in = False
while not logged_in:
ok = gui.show_dialog(self.gui, gui.LoginBox)
if not ok:
break
test = dmls.validate_user(*self.gui.dialog_data)
if test:
text = 'Login accepted'
logged_in = True
else:
text = 'Login failed'
gui.show_message(self.gui, text)
if logged_in:
self.user, self.is_user, self.is_admin = test
# print('in signin:', self.user, self.is_user)
self.book.rereadlist = True
self.gui.refresh_page()
def tab_settings(self, event=None):
"Menukeuze: settings - data - tab titels"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(TabOptions, "Wijzigen tab titels"))
def stat_settings(self, event=None):
"Menukeuze: settings - data - statussen"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(StatOptions, "Wijzigen statussen"))
def cat_settings(self, event=None):
"Menukeuze: settings - data - soorten"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(CatOptions, "Wijzigen categorieën"))
def font_settings(self, event=None):
"Menukeuze: settings - applicatie - lettertype"
self.not_implemented_message()
def colour_settings(self, event=None):
"Menukeuze: settings - applicatie - kleuren"
self.not_implemented_message()
def hotkey_settings(self, event=None):
"Menukeuze: settings - applicatie- hotkeys (niet geactiveerd)"
self.not_implemented_message()
def about_help(self, event=None):
"Menukeuze: help - about"
gui.show_message(self.gui, "PyQt versie van mijn actiebox")
def hotkey_help(self, event=None):
"menukeuze: help - keys"
if not self.helptext:
lines = ["=== Albert's actiebox ===\n",
"Keyboard shortcuts:",
" Alt left/right: verder - terug",
" Alt-0 t/m Alt-6: naar betreffende pagina",
" Alt-O op tab 1: S_o_rteren",
" Alt-I op tab 1: F_i_lteren",
" Alt-G of Enter op tab 1: _G_a naar aangegeven actie",
" Alt-N op elke tab: _N_ieuwe actie opvoeren",
" Ctrl-P: _p_rinten (scherm of actie)",
" Shift-Ctrl-P: print scherm",
" Alt-Ctrl-P: print actie",
" Ctrl-Q: _q_uit actiebox",
" Ctrl-H: _h_elp (dit scherm)",
" Ctrl-S: gegevens in het scherm op_s_laan",
" Ctrl-G: oplaan en _g_a door naar volgende tab",
" Ctrl-Z in een tekstveld: undo",
" Shift-Ctrl-Z in een tekstveld: redo",
" Alt-Ctrl-Z overal: wijzigingen ongedaan maken",
" Shift-Ctrl-N op tab 6: nieuwe regel opvoeren",
" Ctrl-up/down op tab 6: move in list"]
if self.datatype == shared.DataType.XML.name:
lines.insert(8, " Ctrl-O: _o_pen een (ander) actiebestand")
lines.insert(8, " Ctrl-N: maak een _n_ieuw actiebestand")
elif self.datatype == shared.DataType.SQL.name:
lines.insert(8, " Ctrl-O: selecteer een (ander) pr_o_ject")
self.helptext = "\n".join(lines)
gui.show_message(self.gui, self.helptext)
def silly_menu(self, event=None):
"Menukeuze: settings - het leven"
gui.show_message(self.gui, "Yeah you wish...\nHet leven is niet in te stellen helaas")
def startfile(self):
"initialisatie t.b.v. nieuw bestand"
if self.datatype == shared.DataType.XML.name:
fullname = self.dirname / self.filename
retval = dmlx.checkfile(fullname, self.is_newfile)
if retval != '':
gui.show_message(self.gui, retval)
return retval
self.book.fnaam = fullname
self.title = self.filename
elif self.datatype == shared.DataType.SQL.name:
self.book.fnaam = self.title = self.filename
self.book.rereadlist = True
self.book.sorter = None
self.lees_settings()
self.gui.set_tab_titles(self.book.tabs)
self.book.pages[0].clear_selection()
self.book.pages[1].vul_combos()
if self.book.current_tab == 0:
self.book.pages[0].vulp()
else:
self.gui.select_first_tab()
self.book.changed_item = True
return ''
def lees_settings(self):
"""instellingen (tabnamen, actiesoorten en actiestatussen) inlezen"""
self.book.stats = {0: ('dummy,', 0, 0)}
self.book.cats = {0: ('dummy,', ' ', 0)}
self.book.tabs = {0: '0 start'}
data = shared.Settings[self.datatype](self.book.fnaam)
## print(data.meld) # "Standaard waarden opgehaald"
self.imagecount = data.imagecount
self.book.stats = {}
self.book.cats = {}
self.book.tabs = {}
self.book.pagehelp = ["Overzicht van alle acties",
"Identificerende gegevens van de actie",
"Beschrijving van het probleem of wens",
"Analyse van het probleem of wens",
"Voorgestelde oplossing",
"Eventuele vervolgactie(s)",
"Overzicht stand van zaken"]
for item_value, item in data.stat.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.stats[int(sortkey)] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.stats[int(sortkey)] = (item_text, item_value, row_id)
for item_value, item in data.cat.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.cats[int(sortkey)] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.cats[int(sortkey)] = (item_text, item_value, row_id)
for tab_num, tab_text in data.kop.items():
if self.datatype == shared.DataType.XML.name:
self.book.tabs[int(tab_num)] = " ".join((tab_num, tab_text))
elif self.datatype == shared.DataType.SQL.name:
tab_text = tab_text[0] # , tab_adr = tab_text
self.book.tabs[int(tab_num)] = " ".join((tab_num, tab_text.title()))
# print('in lees_settings voor', self.book.fnaam, 'book.tabs is', self.book.tabs)
def save_settings(self, srt, data):
"""instellingen (tabnamen, actiesoorten of actiestatussen) terugschrijven
argumenten: soort, data
data is een dictionary die in een van de dialogen TabOptions, CatOptions
of StatOptions wordt opgebouwd"""
settings = shared.Settings[self.datatype](self.book.fnaam)
if srt == "tab":
settings.kop = data
settings.write()
self.book.tabs = {}
for item_value, item_text in data.items():
item = " ".join((item_value, item_text))
self.book.tabs[int(item_value)] = item
self.gui.set_page_title(int(item_value), item)
elif srt == "stat":
settings.stat = data
settings.write()
self.book.stats = {}
for item_value, item in data.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.stats[sortkey] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.stats[sortkey] = (item_text, item_value, row_id)
elif srt == "cat":
settings.cat = data
settings.write()
self.book.cats = {}
for item_value, item in data.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.cats[sortkey] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.cats[sortkey] = (item_text, item_value, row_id)
self.book.pages[1].vul_combos()
def goto_next(self, *args):
"""redirect to the method of the current page
"""
Page.goto_next(self.book.pages[self.book.current_tab])
def goto_prev(self, *args):
"""redirect to the method of the current page
"""
Page.goto_prev(self.book.pages[self.book.current_tab])
def goto_page(self, page):
"""redirect to the method of the current page
"""
# print('in MainWindow.goto_page naar page', page, 'van page', self.book.current_tab)
Page.goto_page(self.book.pages[self.book.current_tab], page)
def enable_settingsmenu(self):
"instellen of gebruik van settingsmenu mogelijk is"
self.gui.enable_settingsmenu()
def set_windowtitle(self, text):
"build title for window"
self.gui.set_window_title(text)
def set_statusmessage(self, msg=''):
"""stel tekst in statusbar in
"""
if not msg:
msg = self.book.pagehelp[self.book.current_tab]
if self.book.current_tab == 0:
msg += ' - {} items'.format(len(self.book.data))
self.gui.set_statusmessage(msg)
if self.datatype == shared.DataType.SQL.name:
if self.user:
msg = 'Aangemeld als {}'.format(self.user.username)
else:
msg = 'Niet aangemeld'
self.gui.show_username(msg)
def get_focus_widget_for_tab(self, tabno):
"determine field to set focus on"
return (self.book.pages[0].gui.p0list,
self.book.pages[1].gui.proc_entry,
self.book.pages[2].gui.text1,
self.book.pages[3].gui.text1,
self.book.pages[4].gui.text1,
self.book.pages[5].gui.text1,
self.book.pages[6].gui.progress_list)[tabno]
def enable_all_book_tabs(self, state):
"make all tabs (in)accessible"
self.gui.enable_book_tabs(state, tabfrom=1)
def main(arg=None):
"opstart routine"
# if arg is None:
# version = shared.DataType.SQL.name
# else:
# version = shared.DataType.XML.name
# try:
frame = MainWindow(None, arg) # , version)
frame.gui.go()
# except ValueError as err:
# print(err)
# Source: dailyblink/media.py from ptrstn/dailyblink (MIT license)
import pathlib
from mutagen.mp4 import MP4
def create_file(content, path, mode):
pathlib.Path(path).parent.mkdir(parents=True, exist_ok=True)
with open(path, mode) as file:
file.write(content)
def save_media(media, file_path):
create_file(content=media, path=file_path, mode="wb")
def save_text(text, file_path):
create_file(content=text, path=file_path, mode="w+")
def set_m4a_meta_data(
filename,
artist=None,
title=None,
album=None,
track_number=None,
total_track_number=None,
genre=None,
):
mp4_file = MP4(filename)
if not mp4_file.tags:
mp4_file.add_tags()
tags = mp4_file.tags
if artist:
tags["\xa9ART"] = artist
if title:
tags["\xa9nam"] = title
if album:
tags["\xa9alb"] = album
if track_number and total_track_number:
tags["trkn"] = [(track_number, total_track_number)]
if genre:
tags["\xa9gen"] = genre
tags.save(filename)
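# Illustrative usage sketch (file name and tag values are made up; the file
# must already exist as a valid MP4/M4A container):
#
#     save_media(audio_bytes, "blinks/episode.m4a")
#     set_m4a_meta_data("blinks/episode.m4a", artist="Some Author",
#                       title="Some Blink", album="Daily Blinks",
#                       track_number=1, total_track_number=5, genre="Podcast")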
#!/usr/bin/env python
# Source: src/server/server.py from HanseMerkur/cassh (Apache-2.0 license)
"""
Sign a user's SSH public key.
"""
from argparse import ArgumentParser
from json import dumps
from os import remove
from re import compile as re_compile, IGNORECASE
from tempfile import NamedTemporaryFile
from urllib.parse import unquote_plus
# Third party library imports
from configparser import ConfigParser, NoOptionError
from ldap import initialize, SCOPE_SUBTREE
from web import application, config, data, httpserver
from web.wsgiserver import CherryPyWSGIServer
# Own library
from ssh_utils import get_fingerprint
from tools import get_principals, get_pubkey, random_string, response_render, timestamp, unquote_custom, Tools
# DEBUG
# from pdb import set_trace as st
STATES = {
0: 'ACTIVE',
1: 'REVOKED',
2: 'PENDING',
}
URLS = (
'/admin/([a-z]+)', 'Admin',
'/ca', 'Ca',
'/client', 'Client',
'/client/status', 'ClientStatus',
'/cluster/status', 'ClusterStatus',
'/health', 'Health',
'/krl', 'Krl',
'/ping', 'Ping',
'/test_auth', 'TestAuth',
)
VERSION = '1.9.2'
PARSER = ArgumentParser()
PARSER.add_argument('-c', '--config', action='store', help='Configuration file')
PARSER.add_argument('-v', '--verbose', action='store_true', default=False, help='Add verbosity')
ARGS = PARSER.parse_args()
if not ARGS.config:
PARSER.error('--config argument is required !')
CONFIG = ConfigParser()
CONFIG.read(ARGS.config)
SERVER_OPTS = {}
SERVER_OPTS['ca'] = CONFIG.get('main', 'ca')
SERVER_OPTS['krl'] = CONFIG.get('main', 'krl')
SERVER_OPTS['port'] = CONFIG.get('main', 'port')
try:
SERVER_OPTS['admin_db_failover'] = CONFIG.get('main', 'admin_db_failover')
except NoOptionError:
SERVER_OPTS['admin_db_failover'] = False
SERVER_OPTS['ldap'] = False
SERVER_OPTS['ssl'] = False
if CONFIG.has_section('postgres'):
try:
SERVER_OPTS['db_host'] = CONFIG.get('postgres', 'host')
SERVER_OPTS['db_name'] = CONFIG.get('postgres', 'dbname')
SERVER_OPTS['db_user'] = CONFIG.get('postgres', 'user')
SERVER_OPTS['db_password'] = CONFIG.get('postgres', 'password')
except NoOptionError:
if ARGS.verbose:
print('Option reading error (postgres).')
exit(1)
if CONFIG.has_section('ldap'):
try:
SERVER_OPTS['ldap'] = True
SERVER_OPTS['ldap_host'] = CONFIG.get('ldap', 'host')
SERVER_OPTS['ldap_bind_dn'] = CONFIG.get('ldap', 'bind_dn')
SERVER_OPTS['ldap_admin_cn'] = CONFIG.get('ldap', 'admin_cn')
SERVER_OPTS['filterstr'] = CONFIG.get('ldap', 'filterstr')
except NoOptionError:
if ARGS.verbose:
print('Option reading error (ldap).')
exit(1)
if CONFIG.has_section('ssl'):
try:
SERVER_OPTS['ssl'] = True
SERVER_OPTS['ssl_private_key'] = CONFIG.get('ssl', 'private_key')
SERVER_OPTS['ssl_public_key'] = CONFIG.get('ssl', 'public_key')
except NoOptionError:
if ARGS.verbose:
print('Option reading error (ssl).')
exit(1)
# Cluster mode is used for revocation
try:
SERVER_OPTS['cluster'] = CONFIG.get('main', 'cluster').split(',')
except NoOptionError:
# Standalone mode
PROTO = 'http'
if SERVER_OPTS['ssl']:
PROTO = 'https'
SERVER_OPTS['cluster'] = ['%s://localhost:%s' % (PROTO, SERVER_OPTS['port'])]
try:
SERVER_OPTS['clustersecret'] = CONFIG.get('main', 'clustersecret')
except NoOptionError:
# Standalone mode
SERVER_OPTS['clustersecret'] = random_string(32)
try:
SERVER_OPTS['debug'] = bool(CONFIG.get('main', 'debug') != 'False')
except NoOptionError:
SERVER_OPTS['debug'] = False
TOOLS = Tools(SERVER_OPTS, STATES, VERSION)
def data2map():
"""
Returns a map from data POST
"""
data_map = {}
data_str = data().decode('utf-8')
if data_str == '':
return data_map
for key in data_str.split('&'):
data_map[key.split('=')[0]] = '='.join(key.split('=')[1:])
return data_map
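# Illustrative example (hypothetical request body): a POST payload such as
#     realname=user%40domain.fr&password=abc
# is mapped to {'realname': 'user%40domain.fr', 'password': 'abc'}.
# Note that values keep their percent-encoding here; callers decode them
# later with unquote_plus.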
def ldap_authentification(admin=False):
"""
Return True if the user is properly authenticated
realname=xxxxx@domain.fr
password=xxxxx
"""
if SERVER_OPTS['ldap']:
credentials = data2map()
if 'realname' in credentials:
realname = unquote_plus(credentials['realname'])
else:
return False, 'Error: No realname option given.'
if 'password' in credentials:
password = unquote_plus(credentials['password'])
else:
return False, 'Error: No password option given.'
if password == '':
return False, 'Error: password is empty.'
ldap_conn = initialize("ldap://"+SERVER_OPTS['ldap_host'])
try:
ldap_conn.bind_s(realname, password)
except Exception as e:
return False, 'Error: %s' % e
if admin:
memberof_admin_list = ldap_conn.search_s(
SERVER_OPTS['ldap_bind_dn'],
SCOPE_SUBTREE,
filterstr='(&(%s=%s)(memberOf=%s))' % (
SERVER_OPTS['filterstr'],
realname,
SERVER_OPTS['ldap_admin_cn']))
if not memberof_admin_list:
return False, 'Error: user %s is not an admin.' % realname
return True, 'OK'
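# Illustrative example (configuration values are made up): with
#     filterstr = userPrincipalName
#     ldap_admin_cn = CN=cassh-admin,OU=Groups,DC=example,DC=org
# the admin check above issues an LDAP search with the filter
#     (&(userPrincipalName=user@example.org)(memberOf=CN=cassh-admin,OU=Groups,DC=example,DC=org))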
class Admin():
"""
Class admin to action or revoke keys.
"""
def POST(self, username):
"""
Revoke or activate keys.
/admin/<username>
revoke=true/false => Revoke user
status=true/false => Display status
"""
# LDAP authentication
is_admin_auth, message = ldap_authentification(admin=True)
if not is_admin_auth:
return response_render(message, http_code='401 Unauthorized')
payload = data2map()
if 'revoke' in payload:
do_revoke = payload['revoke'].lower() == 'true'
else:
do_revoke = False
if 'status' in payload:
do_status = payload['status'].lower() == 'true'
else:
do_status = False
pg_conn, message = TOOLS.pg_connection()
if pg_conn is None:
return response_render(message, http_code='503 Service Unavailable')
cur = pg_conn.cursor()
if username == 'all' and do_status:
return response_render(
TOOLS.list_keys(),
content_type='application/json')
# Search if key already exists
cur.execute('SELECT * FROM USERS WHERE NAME=(%s)', (username,))
user = cur.fetchone()
# If user dont exist
if user is None:
cur.close()
pg_conn.close()
message = 'User does not exist.'
elif do_revoke:
cur.execute('UPDATE USERS SET STATE=1 WHERE NAME=(%s)', (username,))
pg_conn.commit()
pubkey = get_pubkey(username, pg_conn)
cur.execute('INSERT INTO REVOCATION VALUES \
((%s), (%s), (%s))', \
(pubkey, timestamp(), username))
pg_conn.commit()
message = 'Revoke user=%s.' % username
cur.close()
pg_conn.close()
# Display status
elif do_status:
return response_render(
TOOLS.list_keys(username=username),
content_type='application/json')
# If user is in PENDING or REVOKED state
elif user[2] in (1, 2):
cur.execute('UPDATE USERS SET STATE=0 WHERE NAME=(%s)', (username,))
pg_conn.commit()
cur.close()
pg_conn.close()
message = 'Active user=%s. SSH Key active but need to be signed.' % username
else:
cur.close()
pg_conn.close()
message = 'user=%s already active. Nothing done.' % username
return response_render(message)
def PATCH(self, username):
"""
Set the value of the first recognized key.
/admin/<username>
key=value => Set the key value. Keys are in status output.
"""
# LDAP authentication
is_admin_auth, message = ldap_authentification(admin=True)
if not is_admin_auth:
return response_render(message, http_code='401 Unauthorized')
pg_conn, message = TOOLS.pg_connection()
if pg_conn is None:
return response_render(message, http_code='503 Service Unavailable')
cur = pg_conn.cursor()
payload = data2map()
for key, value in payload.items():
if key == 'expiry':
pattern = re_compile('^\\+([0-9]+)[dh]$')
if pattern.match(value) is None:
return response_render(
'ERROR: Value %s is malformed. Should match pattern ^\\+([0-9]+)[dh]$' \
% value,
http_code='400 Bad Request')
cur.execute('UPDATE USERS SET EXPIRY=(%s) WHERE NAME=(%s)', (value, username))
pg_conn.commit()
cur.close()
pg_conn.close()
return response_render('OK: %s=%s for %s' % (key, value, username))
elif key == 'principals':
value = unquote_plus(value)
pattern = re_compile("^([a-zA-Z-]+)$")
for principal in value.split(','):
if pattern.match(principal) is None:
return response_render(
'ERROR: Value %s is malformed. Should match pattern ^([a-zA-Z-]+)$' \
% principal,
http_code='400 Bad Request')
cur.execute('UPDATE USERS SET PRINCIPALS=(%s) WHERE NAME=(%s)', (value, username))
pg_conn.commit()
cur.close()
pg_conn.close()
return response_render('OK: %s=%s for %s' % (key, value, username))
return response_render('WARNING: No key found...')
def DELETE(self, username):
"""
Delete keys (but DOESN'T REVOKE)
/admin/<username>
"""
# LDAP authentication
is_admin_auth, message = ldap_authentification(admin=True)
if not is_admin_auth:
return response_render(message, http_code='401 Unauthorized')
pg_conn, message = TOOLS.pg_connection()
if pg_conn is None:
return response_render(message, http_code='503 Service Unavailable')
cur = pg_conn.cursor()
# Search if key already exists
cur.execute('DELETE FROM USERS WHERE NAME=(%s)', (username,))
pg_conn.commit()
cur.close()
pg_conn.close()
return response_render('OK')
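# Illustrative admin calls (host, credentials and target user are made up):
#     curl -X POST -d 'realname=admin%40example.org&password=xxx&revoke=true' \
#         https://cassh.example.org:8443/admin/jdoe
#     curl -X PATCH -d 'realname=admin%40example.org&password=xxx&expiry=+7d' \
#         https://cassh.example.org:8443/admin/jdoe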
class Ca():
"""
Class CA.
"""
def GET(self):
"""
Return ca.
"""
return response_render(
open(SERVER_OPTS['ca'] + '.pub', 'rb'),
content_type='application/octet-stream')
class ClientStatus():
"""
ClientStatus main class.
"""
def POST(self):
"""
Get client key status.
/client/status
"""
# LDAP authentication
is_auth, message = ldap_authentification()
if not is_auth:
return response_render(message, http_code='401 Unauthorized')
payload = data2map()
if 'realname' in payload:
realname = unquote_plus(payload['realname'])
else:
return response_render(
'Error: No realname option given.',
http_code='400 Bad Request')
return response_render(
TOOLS.list_keys(realname=realname),
content_type='application/json')
class Client():
"""
Client main class.
"""
def POST(self):
"""
Ask to sign pub key.
/client
username=xxxxxx => Unique username. Used by default to connect on server.
realname=xxxxx@domain.fr => The LDAP/AD user name.
# Optional
admin_force=true|false
"""
# LDAP authentication
is_auth, message = ldap_authentification()
if not is_auth:
return response_render(message, http_code='401 Unauthorized')
# Check if user is an admin and want to force signature when db fail
force_sign = False
# LDAP ADMIN authentication
is_admin_auth, _ = ldap_authentification(admin=True)
payload = data2map()
if is_admin_auth and SERVER_OPTS['admin_db_failover'] \
and 'admin_force' in payload and payload['admin_force'].lower() == 'true':
force_sign = True
# Get username
if 'username' in payload:
username = payload['username']
else:
return response_render(
'Error: No username option given. Update your CASSH >= 1.3.0',
http_code='400 Bad Request')
username_pattern = re_compile("^([a-z]+)$")
if username_pattern.match(username) is None or username == 'all':
return response_render(
"Error: Username doesn't match pattern %s" \
% username_pattern.pattern,
http_code='400 Bad Request')
# Get realname
if 'realname' in payload:
realname = unquote_plus(payload['realname'])
else:
return response_render(
'Error: No realname option given.',
http_code='400 Bad Request')
# Get public key
if 'pubkey' in payload:
pubkey = unquote_custom(payload['pubkey'])
else:
return response_render(
'Error: No pubkey given.',
http_code='400 Bad Request')
tmp_pubkey = NamedTemporaryFile(delete=False)
tmp_pubkey.write(bytes(pubkey, 'utf-8'))
tmp_pubkey.close()
pubkey_fingerprint = get_fingerprint(tmp_pubkey.name)
if pubkey_fingerprint == 'Unknown':
remove(tmp_pubkey.name)
return response_render(
'Error : Public key unprocessable',
http_code='422 Unprocessable Entity')
pg_conn, message = TOOLS.pg_connection()
# Admin force signature case
if pg_conn is None and force_sign:
cert_contents = TOOLS.sign_key(tmp_pubkey.name, username, '+12h', username)
remove(tmp_pubkey.name)
return response_render(cert_contents, content_type='application/octet-stream')
# Else, if db is down it fails.
elif pg_conn is None:
remove(tmp_pubkey.name)
return response_render(message, http_code='503 Service Unavailable')
cur = pg_conn.cursor()
# Search if key already exists
cur.execute('SELECT * FROM USERS WHERE SSH_KEY=(%s) AND NAME=lower(%s)', (pubkey, username))
user = cur.fetchone()
if user is None:
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render(
'Error : User or Key absent, add your key again.',
http_code='400 Bad Request')
if username != user[0] or realname != user[1]:
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render(
'Error : (username, realname) couple mismatch.',
http_code='401 Unauthorized')
status = user[2]
expiry = user[6]
principals = get_principals(user[7], username, shell=True)
if status > 0:
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render("Status: %s" % STATES[user[2]])
cert_contents = TOOLS.sign_key(tmp_pubkey.name, username, expiry, principals, db_cursor=cur)
remove(tmp_pubkey.name)
pg_conn.commit()
cur.close()
pg_conn.close()
return response_render(
cert_contents,
content_type='application/octet-stream')
def PUT(self):
"""
Add or update an SSH public key.
/client
username=xxxxxx => Unique username. Used by default to connect on server.
realname=xxxxx@domain.fr => The LDAP/AD user name.
"""
# LDAP authentication
is_auth, message = ldap_authentification()
if not is_auth:
return response_render(message, http_code='401 Unauthorized')
payload = data2map()
if 'username' in payload:
username = payload['username']
else:
return response_render(
'Error: No username option given.',
http_code='400 Bad Request')
username_pattern = re_compile("^([a-z]+)$")
if username_pattern.match(username) is None or username == 'all':
return response_render(
"Error: Username doesn't match pattern %s" \
% username_pattern.pattern,
http_code='400 Bad Request')
if 'realname' in payload:
realname = unquote_plus(payload['realname'])
else:
return response_render(
'Error: No realname option given.',
http_code='400 Bad Request')
realname_pattern = re_compile(
r"(^[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*"
r'|^"([\001-\010\013\014\016-\037!#-\[\]-\177]|\\[\001-\011\013\014\016-\177])*"'
r')@(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?$', IGNORECASE)
if realname_pattern.match(realname) is None:
return response_render(
"Error: Realname doesn't match pattern",
http_code='400 Bad Request')
# Get public key
if 'pubkey' in payload:
pubkey = unquote_custom(payload['pubkey'])
else:
return response_render(
'Error: No pubkey given.',
http_code='400 Bad Request')
tmp_pubkey = NamedTemporaryFile(delete=False)
tmp_pubkey.write(bytes(pubkey, 'utf-8'))
tmp_pubkey.close()
pubkey_fingerprint = get_fingerprint(tmp_pubkey.name)
if pubkey_fingerprint == 'Unknown':
remove(tmp_pubkey.name)
return response_render(
'Error : Public key unprocessable',
http_code='422 Unprocessable Entity')
pg_conn, message = TOOLS.pg_connection()
if pg_conn is None:
remove(tmp_pubkey.name)
return response_render(message, http_code='503 Service Unavailable')
cur = pg_conn.cursor()
# Search if key already exists
cur.execute('SELECT * FROM USERS WHERE NAME=(%s)', (username,))
user = cur.fetchone()
# CREATE NEW USER
if user is None:
cur.execute('INSERT INTO USERS VALUES \
((%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s))', \
(username, realname, 2, 0, pubkey_fingerprint, pubkey, '+12h', ''))
pg_conn.commit()
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render(
'Create user=%s. Pending request.' % username,
http_code='201 Created')
else:
# Check if realname is the same
cur.execute('SELECT * FROM USERS WHERE NAME=(%s) AND REALNAME=lower((%s))', \
(username, realname))
if cur.fetchone() is None:
pg_conn.commit()
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render(
'Error : (username, realname) couple mismatch.',
http_code='401 Unauthorized')
# Update entry into database
cur.execute('UPDATE USERS SET SSH_KEY=(%s), SSH_KEY_HASH=(%s), STATE=2, EXPIRATION=0 \
WHERE NAME=(%s)', (pubkey, pubkey_fingerprint, username))
pg_conn.commit()
cur.close()
pg_conn.close()
remove(tmp_pubkey.name)
return response_render('Update user=%s. Pending request.' % username)
class ClusterStatus():
"""
ClusterStatus main class.
"""
def GET(self):
"""
/cluster/status
"""
message = dict()
alive_nodes, dead_nodes = TOOLS.cluster_alived()
for node in alive_nodes:
message.update({node: {'status': 'OK'}})
for node in dead_nodes:
message.update({node: {'status': 'KO'}})
return response_render(
dumps(message),
content_type='application/json')
class Health():
"""
Class Health
"""
def GET(self):
"""
Return a health check
"""
health = {}
health['name'] = 'cassh'
health['version'] = VERSION
return response_render(
dumps(health, indent=4, sort_keys=True),
content_type='application/json')
class Krl():
"""
Class KRL.
"""
def GET(self):
"""
Return krl.
"""
return TOOLS.get_last_krl()
class Ping():
"""
Class Ping
"""
def GET(self):
"""
Return a pong
"""
return response_render('pong')
class TestAuth():
"""
Test authentication
"""
def POST(self):
"""
Test authentication
"""
# LDAP authentication
is_auth, message = ldap_authentification()
if not is_auth:
return response_render(message, http_code='401 Unauthorized')
return response_render('OK')
class MyApplication(application):
"""
Can change port or other stuff
"""
def run(self, port=int(SERVER_OPTS['port']), *middleware):
func = self.wsgifunc(*middleware)
return httpserver.runsimple(func, ('0.0.0.0', port))
if __name__ == "__main__":
if SERVER_OPTS['ssl']:
CherryPyWSGIServer.ssl_certificate = SERVER_OPTS['ssl_public_key']
CherryPyWSGIServer.ssl_private_key = SERVER_OPTS['ssl_private_key']
if ARGS.verbose:
print('SSL: %s' % SERVER_OPTS['ssl'])
print('LDAP: %s' % SERVER_OPTS['ldap'])
print('Admin DB Failover: %s' % SERVER_OPTS['admin_db_failover'])
APP = MyApplication(URLS, globals())
config.debug = SERVER_OPTS['debug']
if SERVER_OPTS['debug']:
print('Debug mode on')
APP.run()
# ---------------------------------------------------------------------------
# File: app/api/inventory_routes.py (repo jon-wehner/MyPantry, MIT license)
# ---------------------------------------------------------------------------
from flask import Blueprint, request
from app.models import UserItem, User, db
from app.forms import InventoryItemForm
from flask_login import login_required
from app.utils import validation_errors_to_error_messages
inventory_routes = Blueprint('inventory', __name__)
# Get all of a user's Items
@inventory_routes.route('/<int:user_id>')
@login_required
def user_inventory(user_id):
user = User.query.get(user_id)
if user:
return {"inventory": user.inventory()}
else:
return {"errors": "User Not Found"}
# Add an item to a user intentory
@inventory_routes.route('/<int:user_id>', methods=['POST'])
@login_required
def add_item(user_id):
user = User.query.get(user_id)
form = InventoryItemForm()
form['csrf_token'].data = request.cookies['csrf_token']
if form.validate_on_submit():
item = UserItem(
item_id=form.data['item_id'],
user_id=user_id,
expiration_date=form.data['expiration_date'],
quantity=form.data['quantity'],
measurement_id=form.data['measurement_id']
)
db.session.add(item)
if form.errors:
return {"errors": validation_errors_to_error_messages(form.errors)}
else:
db.session.commit()
return {"inventory": user.inventory()}
@inventory_routes.route('/<int:user_id>/<int:item_id>',
methods=['PUT', 'DELETE'])
@login_required
def edit_delete_item(user_id, item_id):
user = User.query.get(user_id)
item = UserItem.query.get(item_id)
form = InventoryItemForm()
if request.method == 'PUT':
form['csrf_token'].data = request.cookies['csrf_token']
form['item_id'].data = item.item.id
if form.validate_on_submit():
item.expiration_date = form.data['expiration_date']
            item.quantity = form.data['quantity']
            item.measurement_id = form.data['measurement_id']
db.session.add(item)
if request.method == 'DELETE':
db.session.delete(item)
if form.errors:
return {"errors": validation_errors_to_error_messages(form.errors)}
else:
db.session.commit()
return {"inventory": user.inventory()}
# ---------------------------------------------------------------------------
# File: src/doremi/__init__.py (repo jpivarski/doremi, BSD-3-Clause license)
# ---------------------------------------------------------------------------
# BSD 3-Clause License; see https://github.com/jpivarski/doremi/blob/main/LICENSE
from ._version import version as __version__
from typing import Optional
import doremi.parsing
import doremi.abstract
import doremi.concrete
def compose(
source: str,
scale: doremi.concrete.AnyScale = "C major",
bpm: float = 120.0,
scope: Optional[doremi.abstract.Scope] = None,
) -> doremi.concrete.Composition:
scale = doremi.concrete.get_scale(scale)
abstract_collection = doremi.abstract.abstracttree(source)
num_beats, abstract_notes, scope = abstract_collection.evaluate(scope)
return doremi.concrete.Composition(
scale, bpm, num_beats, scope, abstract_collection, abstract_notes
)
__all__ = ("__version__", "compose")
# ---------------------------------------------------------------------------
# File: Lib/glyphsLib/interpolation.py (repo anthrotype/glyphsLib,
# Apache-2.0 license)
# ---------------------------------------------------------------------------
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import (print_function, division, absolute_import,
unicode_literals)
import logging
import os
from glyphsLib.builder import set_redundant_data, set_custom_params,\
set_default_params, GLYPHS_PREFIX
from glyphsLib.util import build_ufo_path, write_ufo, clean_ufo, clear_data
__all__ = [
'interpolate', 'build_designspace', 'apply_instance_data'
]
logger = logging.getLogger(__name__)
DEFAULT_LOC = 100
def interpolate(ufos, master_dir, out_dir, instance_data, debug=False):
"""Create MutatorMath designspace and generate instances.
Returns instance UFOs, or unused instance data if debug is True.
"""
from mutatorMath.ufo import build
designspace_path, instance_files = build_designspace(
ufos, master_dir, out_dir, instance_data)
logger.info('Building instances')
for path, _ in instance_files:
clean_ufo(path)
build(designspace_path, outputUFOFormatVersion=3)
instance_ufos = apply_instance_data(instance_files)
if debug:
return clear_data(instance_data)
return instance_ufos
def build_designspace(masters, master_dir, out_dir, instance_data):
"""Just create MutatorMath designspace without generating instances.
Returns the path of the resulting designspace document and a list of
(instance_path, instance_data) tuples which map instance UFO filenames to
Glyphs data for that instance.
"""
from mutatorMath.ufo.document import DesignSpaceDocumentWriter
for font in masters:
write_ufo(font, master_dir)
# needed so that added masters and instances have correct relative paths
tmp_path = os.path.join(master_dir, 'tmp.designspace')
writer = DesignSpaceDocumentWriter(tmp_path)
base_family, base_style = add_masters_to_writer(writer, masters)
instance_files = add_instances_to_writer(
writer, base_family, instance_data, out_dir)
basename = '%s%s.designspace' % (
base_family, ('-' + base_style) if base_style else '')
writer.path = os.path.join(master_dir, basename.replace(' ', ''))
writer.save()
return writer.path, instance_files
def add_masters_to_writer(writer, ufos):
"""Add master UFOs to a MutatorMath document writer.
Returns the masters' family name and shared style names. These are used for
naming instances and the designspace path.
"""
master_data = []
base_family = None
base_style = None
# only write dimension elements if defined in at least one of the masters
dimension_names = []
for s in ('weight', 'width', 'custom'):
key = GLYPHS_PREFIX + s + 'Value'
if any(key in font.lib for font in ufos):
dimension_names.append(s)
for font in ufos:
family, style = font.info.familyName, font.info.styleName
if base_family is None:
base_family = family
else:
assert family == base_family, 'Masters must all have same family'
if base_style is None:
base_style = style.split()
else:
base_style = [s for s in style.split() if s in base_style]
master_data.append((font.path, family, style, {
s: font.lib.get(GLYPHS_PREFIX + s + 'Value', DEFAULT_LOC)
for s in dimension_names}))
# pick a master to copy info, features, and groups from, trying to find the
# master with a base style shared between all masters (or just Regular) and
# defaulting to the first master if nothing is found
base_style = ' '.join(base_style)
info_source = 0
for i, (path, family, style, location) in enumerate(master_data):
if family == base_family and style == (base_style or 'Regular'):
info_source = i
break
for i, (path, family, style, location) in enumerate(master_data):
is_base = (i == info_source)
writer.addSource(
path=path, name='%s %s' % (family, style),
familyName=family, styleName=style, location=location,
copyFeatures=is_base, copyGroups=is_base, copyInfo=is_base,
copyLib=is_base)
return base_family, base_style
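# The info-source selection above (prefer the master whose style matches the
# shared base style, falling back to 'Regular', else the first master) can be
# sketched standalone. `pick_info_source` is a hypothetical name used only for
# this illustration; it is not part of glyphsLib.

```python
def pick_info_source(master_data, base_family, base_style):
    """Index of the master to copy info, features, and groups from.

    Mirrors the loop above: pick the master whose style equals the shared
    base style (or 'Regular' when no style tokens are shared by all
    masters), defaulting to the first master when nothing matches.
    """
    for i, (path, family, style, location) in enumerate(master_data):
        if family == base_family and style == (base_style or 'Regular'):
            return i
    return 0
```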
def add_instances_to_writer(writer, family_name, instance_data, out_dir):
"""Add instances from Glyphs data to a MutatorMath document writer.
Returns a list of <ufo_path, font_data> pairs, corresponding to the
instances which will be output by the document writer. The font data is the
Glyphs data for this instance as a dict.
"""
default_family_name = instance_data.pop('defaultFamilyName')
instance_data = instance_data.pop('data')
ofiles = []
# only write dimension elements if defined in at least one of the instances
dimension_names = []
for s in ('weight', 'width', 'custom'):
key = 'interpolation' + s.title()
if any(key in instance for instance in instance_data):
dimension_names.append(s)
for instance in instance_data:
# Glyphs.app recognizes both "exports=0" and "active=0" as a flag
# to mark instances as inactive. Those should not be instantiated.
# https://github.com/googlei18n/glyphsLib/issues/129
if (not int(instance.pop('exports', 1))
or not int(instance.pop('active', 1))):
continue
instance_family = default_family_name
custom_params = instance.get('customParameters', ())
for i in range(len(custom_params)):
if custom_params[i]['name'] == 'familyName':
instance_family = custom_params[i]['value']
break
if not instance_family:
continue
style_name = instance.pop('name')
ufo_path = build_ufo_path(out_dir, instance_family, style_name)
ofiles.append((ufo_path, instance))
writer.startInstance(
name=' '.join((instance_family, style_name)),
location={
s: instance.pop('interpolation' + s.title(), DEFAULT_LOC)
for s in dimension_names},
familyName=instance_family,
styleName=style_name,
fileName=ufo_path)
writer.writeInfo()
writer.writeKerning()
writer.endInstance()
return ofiles
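# The exports/active check above is the only filtering step applied to
# instances. A minimal standalone predicate with the same semantics (a
# hypothetical helper for illustration, using non-destructive get() instead of
# the pop() calls above):

```python
def is_active_instance(instance):
    # Glyphs.app marks inactive instances with either "exports=0" or
    # "active=0"; both keys default to active when absent.
    return bool(int(instance.get('exports', 1))) and \
        bool(int(instance.get('active', 1)))
```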
def apply_instance_data(instance_data):
"""Open instances, apply data, and re-save.
Args:
instance_data: List of (path, data) tuples, one for each instance.
dst_ufo_list: List to add opened instances to.
Returns:
List of opened and updated instance UFOs.
"""
from defcon import Font
instance_ufos = []
for path, data in instance_data:
ufo = Font(path)
set_custom_params(ufo, data=data)
set_default_params(ufo)
set_redundant_data(ufo)
ufo.save()
instance_ufos.append(ufo)
return instance_ufos
    return instance_ufos


# ---------------------------------------------------------------------------
# File: code/analysis/plot_group_statistics.py
# (repo INM-6/reproducing-polychronization, MIT license)
# ---------------------------------------------------------------------------
import argparse
import numpy as np
import os
import sys
import matplotlib
matplotlib.use('Agg')
import json
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import mpl_toolkits.axes_grid.inset_locator
import helper as hf
import plot_helper as phf
import seaborn as sns
import scipy.stats as stat
from matplotlib import mlab
import pandas as pd
# sns.set_palette(sns.light_palette("blue"))
parser = argparse.ArgumentParser()
parser.add_argument("-glo", '--groupstatlist_original', help="list of group stat files", nargs="+")
parser.add_argument("-glp", '--groupstatlist_python', help="list of group stat files", nargs="+")
parser.add_argument('--output', type=str)
args = parser.parse_args()
def return_val(stats):
if 'Failed' in stats.keys():
ngr='Failed'
elif 'Failed' not in stats.keys():
ngr = len(stats['N_fired'])
else:
ngr = np.nan
return ngr
exp,reps=[],[]
ngr_o=[]
ngr_p=[]
rate_exc=[]
rate_inh=[]
spek_peak=[]
experiments=[i.split('/')[2] for i in args.groupstatlist_original]+[i.split('/')[2] for i in args.groupstatlist_python]
for experiment in np.unique(experiments):
repetitions = [i.split('/')[3] for i in args.groupstatlist_original if i.split('/')[2]==experiment] + \
[i.split('/')[3] for i in args.groupstatlist_python if i.split('/')[2]==experiment]
repetitions=np.sort(np.unique(repetitions))
for repetition in repetitions:
print(repetition,experiment)
file_p='data/NEST_model/{e}/{r}/stats.json'.format(e=experiment,r=repetition)
if os.path.isfile(file_p):
with open(file_p, "r") as f_p:
stats_p = json.load(f_p)
ngr_p_ = return_val(stats_p)
else:
ngr_p_=np.nan
#print(stats_p)
file_o='data/NEST_model/{e}/{r}/stats_orig.json'.format(e=experiment,r=repetition )
#print(file)
if os.path.isfile(file_o):
with open(file_o, "r") as f_o:
stats_o = json.load(f_o)
ngr_o_ = return_val(stats_o)
else:
ngr_o_ = np.nan
spk_fl = 'data/NEST_model/{e}/{r}/spikes-1001.gdf'.format(e=experiment,r=repetition )
data = np.loadtxt(spk_fl)
senders, times = data[:, 0], data[:, 1]
mean_ex, mean_inh, max_freq = phf.get_rates(times, senders)
if max_freq<50:
spek_peak.append('low')
else:
spek_peak.append('high')
ngr_o.append(ngr_o_)
ngr_p.append(ngr_p_)
exp.append(experiment.replace('_',' '))
reps.append(repetition)
rate_exc.append(mean_ex)
rate_inh.append(mean_inh)
df=pd.DataFrame({'Number of groups':ngr_o,
'Number of groups (nest)':ngr_p,
'Experiment':exp,
'reps':reps,
'exc_rate':rate_exc,
'inh_rate': rate_inh,
'spektral peak':spek_peak
})
def iqr(df):
return df.quantile(.75)-df.quantile(.25)
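# iqr() above feeds the groupby aggregations below. As a sanity check, a
# hedged standalone version over a plain sequence, using numpy percentiles
# instead of pandas quantiles (`iqr_np` is a hypothetical name for this
# sketch only):

```python
import numpy as np

def iqr_np(values):
    # interquartile range: 75th minus 25th percentile, same formula as iqr()
    return np.percentile(values, 75) - np.percentile(values, 25)
```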
df_latex=df.replace(value=np.nan,to_replace='Failed').groupby(['Experiment'])['Number of groups', 'Number of groups (nest)','exc_rate','inh_rate','spektral peak'].agg([np.median,iqr,'min','max','count']) #.agg([np.median,iqr])
print(df_latex.to_latex())
print(df_latex)
df_latex_spek=df.replace(value=np.nan,to_replace='Failed').groupby(['Experiment','spektral peak'])['Number of groups', 'Number of groups (nest)','exc_rate','inh_rate','spektral peak'].agg([np.median,iqr,'min','max','count']) #.agg([np.median,iqr])
print(df_latex_spek.to_latex())
print(df_latex_spek)
phf.latexify(fig_height=6., columns=1)
fig = plt.figure()
N = 9
N_bot = 5
M = 4
gs0 = gridspec.GridSpec(N, M)
ax_orig = plt.subplot(gs0[:N_bot, :M - 1])
ax_nest = plt.subplot(gs0[N_bot:, 0:M - 1])
ax_orig_broken = fig.add_subplot(gs0[:N_bot, M - 1]) # , sharey=ax_orig)
ax_nest_broken = fig.add_subplot(gs0[N_bot:, M - 1]) # , sharey=ax_nest)
orig_pal = ['C2', 'C1', 'C0', 'C5', 'C4', 'C4', 'C4', 'C4', 'C4', 'C4', 'C4']
orig_exp_order = ['initial reproduction',
'bitwise reproduction',
'qualitative model',
'poisson stimulus',
'stdp window match',
'const add value 0p0',
'synapse update interval 0p1s',
'synapse update interval 10s',
'time driven additive 1s',
'tau syn update interval 2s',
'tau syn update interval 1000s']
orig_names = ['Initial model',
'Bitwise reproduction',
'Qualitative model',
'Poisson stimulus',
'STDP window match',
'No additive factor',
'Buffer length $0.1\;\mathrm{s}$',
'Buffer length $10\;\mathrm{s}$',
'No elig. trace',
'Elig. trace $2\;\mathrm{s}$',
'Elig. trace $1000\;\mathrm{s}$',
]
width = 1.25
ax_orig = sns.boxplot(data=df.replace(value=np.nan,to_replace='Failed'),
y='Experiment',
x='Number of groups',
order=orig_exp_order,
palette=orig_pal,
fliersize=0,
ax=ax_orig,
linewidth=width,
width=0.6)
ax_orig_broken = sns.boxplot(data=df.replace(value=np.nan,to_replace='Failed'),
y='Experiment',
x='Number of groups',
order=orig_exp_order,
palette=orig_pal,
fliersize=0,
ax=ax_orig_broken,
linewidth=width,
width=0.6)
ax_orig.set_yticklabels(orig_names)
nest_pal = ['C1', 'C0', 'C3', 'C3', 'C3', 'C3','C4', 'C4']
nest_exp_order = ['bitwise reproduction',
'qualitative model',
'delay distribution 20',
'delay distribution 15',
'delay distribution 10',
'delay distribution 5',
'resolution 0p1 W pspmatched',
'qualitative model high res',
]
name_order = ['Bitwise reproduction',
'Qualitative model',
r'Delay $\in \left[1,20\right]\;\mathrm{ms}$',
r'Delay $\in \left[1,15\right]\;\mathrm{ms}$',
r'Delay $\in \left[1,10\right]\;\mathrm{ms}$',
r'Delay $\in \left[1,5\right]\;\mathrm{ms}$',
r'Resolution $0.1\;\mathrm{ms}$',
r'Improved integration',
]
ax_nest = sns.boxplot(data=df.replace(value=np.nan,to_replace='Failed'),
y='Experiment',
x='Number of groups (nest)',
order=nest_exp_order,
palette=nest_pal,
fliersize=0,
ax=ax_nest, linewidth=width,
width=0.5)
ax_nest_broken = sns.boxplot(data=df.replace(value=np.nan,to_replace='Failed'),
y='Experiment',
x='Number of groups (nest)',
order=nest_exp_order,
palette=nest_pal,
fliersize=0,
ax=ax_nest_broken,
linewidth=width,
width=0.6)
print(ax_nest.get_yticks())
ax_nest.set_yticklabels(name_order)
for ax in [ax_orig_broken, ax_nest_broken]:
# ax.axis('off')
# if ax !=ax_delay:
# ax.set_xscale('log')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.set_yticks(())
ax.set_ylabel('')
ax_orig_broken.set_xlabel('')
ax_nest_broken.set_xlabel('')
ax_orig_broken.set_xlim((6000, 70000))
ax_orig_broken.set_xticks((10000, 70000))
ax_orig_broken.set_xticklabels(('10k', '70k'))
ax_nest_broken.set_xlim((10000, 40000))
ax_nest_broken.set_xticks((20000, 40000))
ax_nest_broken.set_xticklabels(('20k', '40k'))
ax_orig.set_xlim((-500, 6000))
ax_orig.set_xticks((0, 2500, 5000))
ax_nest.set_xlim((-500, 8500))
for ax in [ax_orig, ax_nest]:
# ax.axis('off')
# if ax !=ax_delay:
# ax.set_xscale('log')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
# ax.set_yticks(())
# if ax !=ax_delay:
# ax.set_xticks(())
# ax.set_xlabel('')
# ax.spines['bottom'].set_visible(False)
# ax_orig.set_ylabel('original group finding algorithm')
# ax_nest.set_ylabel('NEST group finding algorithm')
ax_orig.set_ylabel('')
ax_nest.set_ylabel('')
ax_orig.set_xlabel('Number of groups')
ax_nest.set_xlabel('Number of groups (Python)')
# iqr=df.loc[df['Experiment']=='qualitative model','Number of Groups'].quantile(0.75)-df.loc[df['Experiment']=='qualitative model','Number of Groups'].quantile(0.25)
min_line = df.loc[df['Experiment'] == 'bitwise reproduction', 'Number of groups'].quantile(0.25) # -1.5*iqr
max_line = df.loc[df['Experiment'] == 'bitwise reproduction', 'Number of groups'].quantile(0.75) # +1.5*iqr
min_line
ax_orig.axvline(min_line, zorder=0, linestyle='--', color='C1')
ax_orig.axvline(max_line, zorder=0, linestyle='--', color='C1')
min_line = df.loc[df['Experiment'] == 'bitwise reproduction', 'Number of groups (nest)'].quantile(0.25)
max_line = df.loc[df['Experiment'] == 'bitwise reproduction', 'Number of groups (nest)'].quantile(0.75)
min_line
ax_nest.axvline(min_line, zorder=0, linestyle='--', color='C1')
ax_nest.axvline(max_line, zorder=0, linestyle='--', color='C1')
ax_orig.annotate(r'\textbf{A}', xy=(-0.95, 1.05), xycoords='axes fraction',
horizontalalignment='left', verticalalignment='top', annotation_clip=False)
ax_nest.annotate(r'\textbf{B}', xy=(-0.95, 1.06), xycoords='axes fraction',
horizontalalignment='left', verticalalignment='top', annotation_clip=False)
xy = (0, ax_nest.get_yticks()[-1])
#ax_nest.annotate(xy=xy, xytext=xy, s=r'\textbf{X}', ha='center', va='center')
# xy=(ax_orig_broken.get_xticks()[-1],ax_orig.get_yticks()[-1])
# ax_orig_broken.annotate(xy=xy,xytext=xy,s=r'$\rip$',ha='center',va='center',fontsize=20)
# xy=(ax_orig_broken.get_xticks()[-1],ax_orig.get_yticks()[-1])
# ax_orig_broken.annotate(xy=xy,xytext=xy,s=r'$\rip$',ha='center',va='center',fontsize=20)
xy = (ax_orig_broken.get_xticks()[-1], ax_orig.get_yticks()[-4])
ax_orig_broken.annotate(xy=xy, xytext=xy, s=r'$\rip$', ha='center', va='center', fontsize=20)
# xy=(ax_orig_broken.get_xticks()[-1],ax_orig.get_yticks()[-4])
# ax_orig_broken.annotate(xy=xy,xytext=xy,s=r'$\rip$',ha='center',va='center',fontsize=20)
gs0.update(left=0.4, right=0.95, top=0.97, bottom=0.07, hspace=1.99, wspace=0.35)
plt.savefig(args.output)


# ---------------------------------------------------------------------------
# File: api-back/extract_resume.py (repo Bitseat/demo, MIT license)
# ---------------------------------------------------------------------------
# importing all required libraries
import os
import traceback
# importing libraries for computer vision
import numpy as np
import cv2
import imutils
from imutils import contours
from imutils.perspective import four_point_transform
from skimage.filters import threshold_local
# importing libraries to read text from image
from PIL import Image
import pytesseract
import re
import json
from docx2pdf import convert
from pyresparser import ResumeParser
import image_text_extractor
from image_text_extractor import image_extract
import subprocess
from os import rename
import shutil
import time
def main():
# import resumes from directory
directory = 'resumes/'
directory3 = 'pdfs/'
dir_list = os.listdir(directory)
dir_list.sort(key=lambda f: os.path.splitext(f)[1], reverse = True)
for filename in dir_list:
if filename.endswith(".pdf"):
full_path = os.path.join(directory, filename)
extract_info(full_path)
elif filename.endswith(".docx"):
full_path = os.path.join(directory, filename)
out = subprocess.Popen(['unoconv', str(full_path)], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout,stderr = out.communicate()
time.sleep(5)
target_path = os.path.join(os.path.dirname(__file__), str(full_path[:-5]) + ".pdf")
m = str(target_path).replace('resumes/','pdfs/')
shutil.move(str(target_path), os.path.join(directory3, str(m)))
time.sleep(5)
#new_path = str(target_path[:-4]) + ".docx" + ".pdf"
#rename(target_path, new_path)
extract_info(full_path)
elif filename.endswith(".jpg"):
full_path = os.path.join(directory, filename)
x = image_extract()
out = subprocess.Popen(['unoconv', str(full_path)], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout,stderr = out.communicate()
time.sleep(5)
target_path = os.path.join(os.path.dirname(__file__), str(x) + ".pdf")
file_name = str(full_path[:-4]) + ".pdf"
n = file_name.replace('resumes/','')
shutil.move(os.path.join(directory, str(n)), os.path.join(directory3, str(n)))
time.sleep(5)
extract_info(x)
else:
pass
def extract_info(full_path):
directory = 'resumes/'
directory2 = 'jsons/'
directory3 = 'pdfs/'
data = {}
with open(full_path, 'r') as f:
#print(full_path)
data = ResumeParser(full_path).get_extracted_data()
time.sleep(5)
z = full_path.replace('resumes/','')
json_file_name = str(directory2) + str(z) + ".json"
clean_data = re.sub('\u2013', '', str(data))
clean_data = re.sub('\uf0b7', '', clean_data)
clean_data = re.sub('\u200b', '', clean_data)
clean_data = re.sub(r'\\uf0b7', '', clean_data)
clean_data = re.sub(r'[^\x00-\x7F]+|\x0c',' ', clean_data)
clean_data = re.sub(r"'", '"', clean_data)
clean_data = re.sub(r'None', 'null', clean_data)
clean_data = json.loads(clean_data.replace("\'", '"'))
jpg_file_name = str(directory2) + str(z[:-5]) + ".json"
pdf_file_name = str(full_path[:-9]) + ".pdf"
l = pdf_file_name.replace('resumes/','')
word_file_name = str(full_path[:-5]) + ".pdf"
m = word_file_name.replace('resumes/','')
if full_path.endswith(".jpg.docx"):
with open(jpg_file_name, 'w') as outfile:
json.dump(clean_data, outfile)
#shutil.move(str(pdf_file_name), os.path.join(directory3, str(l)))
os.remove(full_path)
os.remove(str(full_path[:-5]))
elif full_path.endswith(".pdf"):
with open(json_file_name, 'w') as outfile:
json.dump(clean_data, outfile)
shutil.move(os.path.join(directory, str(z)), os.path.join(directory3, str(z)))
time.sleep(5)
else:
with open(json_file_name, 'w') as outfile:
json.dump(clean_data, outfile)
os.remove(full_path)
if __name__ == '__main__':
main()
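# The chain of re.sub calls in extract_info above normalizes pyresparser's
# dict repr into parseable JSON. A condensed standalone sketch of that idea
# (hypothetical helper, simplified relative to the original chain):

```python
import json
import re

def sanitize_to_json(raw):
    # strip en dashes, bullet glyphs, zero-width chars, then any other
    # non-ASCII bytes and form feeds
    cleaned = re.sub(r'[\u2013\uf0b7\u200b]', '', raw)
    cleaned = re.sub(r'[^\x00-\x7F]+|\x0c', ' ', cleaned)
    # dict-repr quirks: single quotes and Python's None are not valid JSON
    cleaned = cleaned.replace("'", '"').replace('None', 'null')
    return json.loads(cleaned)
```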
# ---------------------------------------------------------------------------
# File: week12_telegram_bots/Peter Sergeev Homework/mysubstratedb.py
# (repo pserg1/msai-python, MIT license)
# ---------------------------------------------------------------------------
import sqlalchemy
import pyodbc
from sqlalchemy import create_engine
from sqlalchemy import Column, Integer, String, DateTime, Float
from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import database_exists, create_database
from sqlalchemy.orm import sessionmaker, Session
CONNECTION_STR = "mssql+pyodbc://user:password@127.0.0.1/testDB?driver=SQL+Server"
# create db
Base = declarative_base()
class Transaction(Base):
__tablename__ = 'Transactions'
id = Column(Integer, primary_key=True)
fromAddr = Column(String)
Destination = Column(String)
Amount = Column(Float)
Module = Column(String)
Method = Column(String)
created_at = Column(DateTime(timezone=True), server_default=func.now())
def __repr__(self):
str = f"Transaction(fromAddr='{self.fromAddr}', Destination='{self.Destination}',\
Amount='{self.Amount}', Module='{self.Module}, Method={self.Method}, 'created_at={self.created_at}')"
return str
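# The same schema can be exercised without an MSSQL server or SQLAlchemy: a
# stdlib sqlite3 sketch of the Transactions table (illustration only, not the
# SQLAlchemy/pyodbc path used in this module).

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE Transactions ('
    ' id INTEGER PRIMARY KEY,'
    ' fromAddr TEXT, Destination TEXT,'
    ' Amount REAL, Module TEXT, Method TEXT)'
)
conn.execute(
    'INSERT INTO Transactions'
    ' (fromAddr, Destination, Amount, Module, Method)'
    ' VALUES (?, ?, ?, ?, ?)',
    ('1', '2011', 1000000.0, 'Balances', 'Transfer'),
)
demo_row = conn.execute(
    'SELECT Module, Method, Amount FROM Transactions'
).fetchone()
```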
# write data to db
def writeData(data):
engine = create_engine(CONNECTION_STR)
if not database_exists(engine.url):
create_database(engine.url)
Session = sessionmaker(engine)
session = Session()
new_trx = Transaction(
fromAddr = data['Signature'],
Destination = data['Dest'],
Amount = data['Amount'],
Module = data['Pallet'],
Method = data['Call'],
)
session.add(new_trx)
print(f'\nExecuting add query:\n', new_trx)
session.commit()
# some testing + table creation
'''
trx = Transaction(fromAddr=1, Destination=2011, Amount=1000000, Module='Balances', Method='Transfer')
engine = create_engine(CONNECTION_STR)
if not database_exists(engine.url):
create_database(engine.url)
Base.metadata.create_all(engine)
Session = sessionmaker(engine)
with Session.begin() as session:
session.add(trx)
trxs = session.query(Transaction).all()
print(trxs)
'''
# ---------------------------------------------------------------------------
# File: figuras/Pycharm_Papoulis_Probability_Report/buffon_needle_long.py
# (repo bor9/estudiando_el_papoulis, MIT license)
# ---------------------------------------------------------------------------
import matplotlib.pyplot as plt
import numpy as np
import math
from matplotlib import patches
from matplotlib import transforms
import matplotlib.colors as colors
from matplotlib import cm
from matplotlib import rc
__author__ = 'ernesto'
# whether to use LaTeX or mathtext
rc('text', usetex=False)
rc('mathtext', fontset='cm')
# colors from coolwarm
cNorm = colors.Normalize(vmin=0, vmax=1)
scalarMap = cm.ScalarMappable(norm=cNorm, cmap=cm.coolwarm)
col10 = scalarMap.to_rgba(0)
col11 = scalarMap.to_rgba(0.2)
col20 = scalarMap.to_rgba(1)
col21 = scalarMap.to_rgba(0.85)
col22 = scalarMap.to_rgba(0.7)
# read image
im = 'buffon_needle_3.png'
img = plt.imread(im)
# image parameters
img_height = 25
img_width = img_height * img.shape[1] / img.shape[0]
# needle original angle (radians)
theta = math.atan(img.shape[0]/img.shape[1])
# needle length
l = math.sqrt(img_height**2 + img_width**2)
# axis span
x_max = 35
y_max = 42
# lines y-coordinate
n_lines = 3 # number of lines
start_gap = 1 # y-coordinate of the first lines
d = (y_max - 2 * start_gap) / (n_lines - 1) # distance between lines
lines_y = start_gap + np.arange(n_lines) * d # y-coordinate of lines
# needle 1 position
x1_start = 8
y1_start = lines_y[1]
# needle 1 angle
theta1 = math.asin(d/l)
rot1 = transforms.Affine2D().rotate_around(x1_start, y1_start, theta1 - theta)
# needle 2 height and width
img1_height = l * math.sin(theta1)
img1_width = l * math.cos(theta1)
# needle 2 position
x2_start = 3
y2_start = start_gap
# needle 2 angle
theta2 = theta1 * 0.6
rot2 = transforms.Affine2D().rotate_around(x2_start, y2_start, theta2 - theta)
# needle 2 height and width
img2_height = l * math.sin(theta2)
img2_width = l * math.cos(theta2)
# needle center
x2c = x2_start + img2_width / 2
y2c = y2_start + img2_height / 2
# needle size mark
r = 4 # distance from the needle
s = 1 # lines of the borders
# parameters for plot
dashes = (5, 2)
fontsize = 14
# plot
fig = plt.figure(0, figsize=(10, 5), frameon=False)
ax = plt.subplot2grid((1, 10), (0, 0), rowspan=1, colspan=5)
ax.set_xlim(0, x_max)
ax.set_ylim(0, y_max)
# plot lines
plt.plot([0, x_max], [lines_y, lines_y], color=col10, lw=2)
# plot angle arc 1
arc1 = patches.Arc((x1_start, y1_start), 10, 10, angle=0, theta1=0, theta2=theta1 * 180 / math.pi, linewidth=1,
fill=False, zorder=1)
ax.add_patch(arc1)
plt.text(x1_start + 6 * math.cos(theta1/2), y1_start + 6 * math.sin(theta1/2), r'$\theta=\arcsin\;\dfrac{d}{l}$',
fontsize=fontsize, ha='left', va='center')
# plot angle arc 2
arc2 = patches.Arc((x2_start, y2_start), 10, 10, angle=0, theta1=0, theta2=theta2 * 180 / math.pi, linewidth=1,
fill=False, zorder=1)
ax.add_patch(arc2)
plt.text(x2_start + 6 * math.cos(theta2/2), y2_start + 6 * math.sin(theta2/2), r'$\theta$', fontsize=fontsize,
ha='left', va='center')
# show needle 1 image
ax.imshow(img, transform=rot1 + ax.transData,
extent=[x1_start, x1_start + img_width, y1_start, y1_start + img_height], zorder=2)
# show needle 2 image
ax.imshow(img, transform=rot2 + ax.transData,
extent=[x2_start, x2_start + img_width, y2_start, y2_start + img_height], zorder=2)
# mark needle 2 center
plt.plot(x2c, y2c, 'k.', markersize=6, zorder=3)
plt.plot([x2c, x2c], [lines_y[0], y2c], 'k--', lw=1, dashes=dashes)
plt.text(x2c + 1, (lines_y[0] + y2c) / 2, r'$z=\dfrac{l}{2}\;\sin\;\theta$', fontsize=fontsize, ha='left', va='center')
# plot distance between lines
xl = 2
plt.plot((x1_start + img1_width) * np.array([1, 1]), lines_y[1] + np.array([0, img1_height]),
'k--', lw=1, dashes=dashes)
plt.text(x1_start + img1_width + 1, lines_y[1] + img1_height / 2, r'$d$', fontsize=fontsize, ha='left', va='center')
# plot needle 2 size marker
xm = x2_start - r * math.sin(theta2)
ym = y2_start + r * math.cos(theta2)
plt.plot(xm + np.array([0, img2_width]), ym + np.array([0, img2_height]), 'k--', lw=1, dashes=dashes)
plt.plot(xm + s * math.sin(theta2) * np.array([-1, 1]),
ym + s * math.cos(theta2) * np.array([1, -1]),
'k-', lw=1)
plt.plot(xm + img2_width + s * math.sin(theta2) * np.array([-1, 1]),
ym + img2_height + s * math.cos(theta2) * np.array([1, -1]),
'k-', lw=1)
plt.text(xm + img2_width / 2 - 1 * math.sin(theta2), ym + img2_height / 2 + 1 * math.cos(theta2), r'$l$',
fontsize=fontsize, ha='center', va='baseline')
plt.axis('off')
# SAMPLE SPACE PLOT
# scale l and d
f = 10
l /= f
d /= f
# axis limits
z_max = l / 2
t_max = math.pi / 2
delta_ax = 0.3
z_ax_min = -0.1
z_ax_max = z_max + delta_ax
t_ax_min = -0.1
t_ax_max = t_max + delta_ax
# theta vector
ts = np.linspace(0, t_max, 100)
sin_ts = (l / 2) * np.sin(ts)
zs = np.linspace(0, d/2, 100)
asin_zs = np.arcsin(2 * zs / l)
ax = plt.subplot2grid((1, 10), (0, 5), rowspan=1, colspan=5)
plt.axis([z_ax_min, z_ax_max, t_ax_min, t_ax_max])
# axis arrows
plt.annotate("", xytext=(z_ax_min, 0), xycoords='data', xy=(z_ax_max, 0), textcoords='data',
arrowprops=dict(width=0.2, headwidth=6, headlength=8, facecolor='black', shrink=0.002))
plt.annotate("", xytext=(0, t_ax_min), xycoords='data', xy=(0, t_ax_max), textcoords='data',
arrowprops=dict(width=0.2, headwidth=6, headlength=8, facecolor='black', shrink=0.002))
plt.plot([d/2, d/2], [0, t_max], color=col10, lw=2)
plt.plot([0, d/2], [t_max, t_max], color=col10, lw=2)
plt.plot(sin_ts, ts, color=col20, lw=2)
ax.fill_between([0, d/2], math.asin(d/l) * np.array([1, 1]), t_max, color=col21)
ax.fill_between(zs, asin_zs, math.asin(d/l), color=col22)
# z labels
z_baseline = -0.14
plt.text(z_ax_max, z_baseline, r'$z$', fontsize=fontsize, ha='center', va='baseline')
plt.text(d/2, z_baseline, r'$\dfrac{d}{2}$', fontsize=fontsize, ha='center', va='baseline')
plt.text(-0.05, z_baseline, r'$0$', fontsize=fontsize, ha='right', va='baseline')
plt.plot([l/2, l/2], [0, math.pi/2], 'k--', lw=1, dashes=dashes)
plt.text(l/2, z_baseline, r'$\dfrac{l}{2}$', fontsize=fontsize, ha='center', va='baseline')
# theta labels
plt.text(-0.05, t_ax_max, r'$\theta$', fontsize=fontsize, ha='right', va='center')
plt.text(-0.05, math.pi/2, r'$\dfrac{\pi}{2}$', fontsize=fontsize, ha='right', va='center')
plt.plot([0, math.pi/2], [l/2, l/2], 'k--', lw=1, dashes=dashes, zorder=1)
plt.plot([0, d/2], math.asin(d/l) * np.array([1, 1]), 'k--', lw=1, dashes=dashes)
plt.text(-0.05, math.asin(d/l), r'$\arcsin\;\dfrac{d}{l}$', fontsize=fontsize, ha='right', va='center')
plt.text(d/2, math.pi/2, r'$\Omega$', fontsize=fontsize, ha='right', va='bottom', color=col10)
z1 = 1.16
plt.annotate(r'$z=\dfrac{l}{2}\;\sin\;\theta$', xytext=((d+l)/4, 0.45), xycoords='data', xy=(z1, math.asin(2*z1/l)),
textcoords='data', fontsize=fontsize, va="center", ha="center",
arrowprops=dict(arrowstyle="-|>, head_width=0.1, head_length=0.4", facecolor='black', relpos=(0.4, 1),
patchA=None, patchB=None, shrinkA=4, shrinkB=1))
plt.text(d/4, (math.pi/2+math.asin(d/l)) / 2, r'$D_1$', fontsize=fontsize, ha='center', va='center')
plt.text(d/4, 0.75*math.asin(d/l), r'$D_2$', fontsize=fontsize, ha='center', va='center')
plt.axis('off')
plt.savefig('buffon_needle_long.pdf', bbox_inches='tight', dpi=900)
plt.show()
# File: app.py
# Repo: chrisvoncsefalvay/dash-sir-interactive-model (BSD-3-Clause license)

import os
import flask
import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objects as go
import dash_defer_js_import as dji
import numpy as np
from components import solve
external_stylesheets = ['https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css',
'https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.18.1/styles/monokai-sublime.min.css']
external_scripts = ['https://code.jquery.com/jquery-3.2.1.slim.min.js',
'https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js',
'https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js']
# Server definition
server = flask.Flask(__name__)
app = dash.Dash(__name__,
external_stylesheets=external_stylesheets,
external_scripts=external_scripts,
server=server)
filepath = os.path.split(os.path.realpath(__file__))[0]
narrative_text = open(os.path.join(filepath, "narrative.md"), "r").read()
refs_text = open(os.path.join(filepath, "references.md"), "r").read()
edvs_text = open(os.path.join(filepath, "edvs.md"), "r").read()
mathjax_script = dji.Import(src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.7/latest.js?config=TeX-AMS-MML_SVG")
app.index_string = '''
<!DOCTYPE html>
<html>
<head>
{%metas%}
<title>{%title%}</title>
{%favicon%}
{%css%}
</head>
<body>
{%app_entry%}
<footer>
{%config%}
{%scripts%}
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {
inlineMath: [ ['$','$'],],
processEscapes: true
}
});
</script>
{%renderer%}
</footer>
</body>
</html>
'''
# COMPONENTS
# ==========
def display_SIR_solution(data) -> go.Figure:
    S, I, R = data
    tspace = np.linspace(0, len(S), len(S))

    fig = go.Figure()

    # Susceptible
    fig.add_trace(go.Scatter(x=tspace, y=S, mode="lines", name="Susceptible"))

    # Infectious
    fig.add_trace(go.Scatter(x=tspace, y=I, mode="lines", name="Infectious"))

    # Removed
    fig.add_trace(go.Scatter(x=tspace, y=R, mode="lines", name="Removed"))

    return fig
## Interactors
## -----------
R0_slider = dcc.Slider(id="r0_input", min=0, max=6.5, step=0.01, value=2.67, marks={x: str(x) for x in [0, 1, 2, 3, 4, 5, 6]})
delta_slider = dcc.Slider(id="delta_input", min=0, max=1, step=0.01, value=0.25, marks={x: f"{100*x:.0f}%" for x in np.linspace(0, 1, 11)})
tau_slider = dcc.Slider(id="tau_input", min=3, max=20, step=0.5, value=8.5, marks={x: str(x) for x in [3+2*x for x in range(0, 9)]})
# APP LAYOUT
# ==========
app.layout = html.Div([
dbc.Container(children=[
dcc.Markdown(narrative_text, dangerously_allow_html=True),
dcc.Graph(id="sir_solution", figure=display_SIR_solution(solve(delta=0.5, R0=2.67, tau=8.5))),
dbc.Row(children=[dbc.Col(children=[R0_slider], className="col-md-8"), dbc.Col(children=["$R_0$ (basic reproduction number)"], className="col-md-4")]),
html.Br(),
dbc.Row(children=[dbc.Col(children=[delta_slider], className="col-md-8"),
dbc.Col(children=["$\delta$ (social distancing fraction)"], className="col-md-4")]),
html.Br(),
dbc.Row(children=[dbc.Col(children=[tau_slider], className="col-md-8"),
dbc.Col(children=["$\\tau$ (duration of illness)"], className="col-md-4")]),
html.Br(),
html.Br(),
dcc.Markdown(edvs_text, dangerously_allow_html=True),
html.Br(),
dcc.Markdown(refs_text, dangerously_allow_html=True)
]),
mathjax_script
])
# INTERACTION
# ===========
@app.callback(Output("sir_solution", "figure"),
[Input("r0_input", "value"),
Input("delta_input", "value"),
Input("tau_input", "value")])
def update_plot(r0_input, delta_input, tau_input):
    return display_SIR_solution(solve(delta=delta_input, R0=r0_input, tau=tau_input))
if __name__ == '__main__':
    app.run_server(debug=True)
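The `solve` function imported from `components` is not shown in this file. A standalone sketch of what such an SIR solver could look like follows — the parameter names mirror the app's sliders, but the forward-Euler scheme and the choice `beta = R0 * (1 - delta) / tau` are assumptions for illustration, not the app's actual implementation:

```python
def sir_euler(R0=2.67, delta=0.25, tau=8.5, days=160, dt=1.0, i0=1e-4):
    """Forward-Euler integration of an SIR model where a social
    distancing fraction `delta` scales down the contact rate."""
    beta = R0 * (1.0 - delta) / tau   # effective transmission rate
    gamma = 1.0 / tau                 # recovery rate (1 / illness duration)
    S, I, R = 1.0 - i0, i0, 0.0       # normalized compartments, S + I + R = 1
    Ss, Is, Rs = [S], [I], [R]
    for _ in range(int(days / dt)):
        new_inf = beta * S * I * dt   # newly infected this step
        new_rec = gamma * I * dt      # newly removed this step
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        Ss.append(S)
        Is.append(I)
        Rs.append(R)
    return Ss, Is, Rs

S, I, R = sir_euler()
```

By construction each Euler step conserves the total population, so `S[t] + I[t] + R[t]` stays at 1 up to floating-point error.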
# File: autoesk_main/anual_defis.py
# Repo: SilasPDJ/autoesk_project_v2 (MIT license)

from imports import WDShorcuts
from imports import press_key_b4, activate_window, tk_msg
from imports import TimeoutException, ElementClickInterceptedException, NoSuchElementException, NoAlertPresentException
from imports import ActionChains
from imports import Keys, By, WebDriverWait, expected_conditions
from imports import ExcelToData
from _new_set_paths import NewSetPaths
import subprocess
import os
from time import sleep
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import UnexpectedAlertPresentException
# let's go
class Defis(WDShorcuts, NewSetPaths, ExcelToData):
def __init__(self, compt=None):
"""
:param compt: from GUI
# remember past_only arg from self.get_atual_competencia
"""
import pandas as pd
from default.webdriver_utilities.pre_drivers import pgdas_driver
# The DAS due date (whatever the compt is) is correct, since it refers to the current month
sh_names = ['DEFIS']
sh_name = sh_names[0]
if compt is None:
compt = super().get_compt_only()
excel_file_name = super().excel_file_path()
COMPT = compt = f"DEFIS_{self.y()}"
# rewriting compt so it does not end up as 02/2021
# excel_file_name = '/'.join(excel_file_name.split('/')[:-1])
excel_file_name = os.path.dirname(excel_file_name)
excel_file_name += f'/DEFIS-anual.xlsx'
pdExcelFile = pd.ExcelFile(excel_file_name)
msh = pdExcelFile.parse(sheet_name=str(sh_name))
col_str_dic = {column: str for column in list(msh)}
msh = pdExcelFile.parse(sheet_name=str(sh_name), dtype=col_str_dic)
READ = self.le_excel_each_one(msh)
self.after_READ = self.readnew_lista(READ, False)
msh_socio = pdExcelFile.parse(sheet_name='Socios')
col_str_dic = {column: str for column in list(msh_socio)}
msh_socio = pdExcelFile.parse(sheet_name='Socios', dtype=col_str_dic)
self.after_socio = self.readnew_lista(self.le_excel_each_one(msh_socio))
SK = list(self.after_socio.keys())
# FINALLY FOUND THE RESPONSIVE WAY TO DECLARE IT, SO I DON'T HAVE TO KEEP WRITING IT OUT IN FULL
cont_soc = 0
for i, CNPJ in enumerate(self.after_READ['CNPJ']):
_cliente = self.empresa_now = self.after_READ['Razão Social'][i]
_ja_declared = self.after_READ['Declarado'][i].upper().strip()
_cod_sim = self.after_READ['Código Simples'][i]
_cpf = self.after_READ['CPF'][i]
_cert_or_login = self.after_READ['CERTORLOGIN'][i]
# DEFIS-specific fields
_dirf = self.after_READ['DIRF'][i]
# +2 because the data starts at row 2, so Excel treats that as the index
while int(self.after_socio[SK[-4]][cont_soc])-2 != i:
cont_soc += 1
__ate_soc = self.after_socio[SK[-3]][cont_soc]
__ate_soc = int(__ate_soc) + cont_soc
self.socios_now__cnpj = self.after_socio[SK[0]][cont_soc:__ate_soc]
self.socios_now__cpf = self.after_socio[SK[1]][cont_soc:__ate_soc]
self.socios_now__nome = self.after_socio[SK[2]][cont_soc:__ate_soc]
self.socios_now__cota = self.after_socio[SK[3]][cont_soc:__ate_soc]
self.socios_now__tipo = self.after_socio[SK[5]][cont_soc:__ate_soc]
self.client_path = self.files_pathit(_cliente, COMPT, )
if _ja_declared not in ['S', 'OK', 'FORA']:
print('-' * 60)
# print(f'CNPJ: {CNPJ}, {CNPJ.strip()==self.socios_now__cnpj[0]}')
self.the_print()
__client_path = self.client_path
self.driver = pgdas_driver(__client_path)
now_process = subprocess.Popen(f'explorer {__client_path}')
driver = self.driver
super().__init__(driver)
if _cert_or_login == 'certificado':
self.loga_cert()
# logs into ECAC, inserts the CNPJ
self.change_ecac_client(CNPJ)
self.current_url = driver.current_url
self.opta_script() if self.m() == 12 else None
else:
self.loga_simples(CNPJ, _cpf, _cod_sim, _cliente)
self.current_url = driver.current_url
self.opta_script() if self.m() == 12 else None
driver.get('https://www8.receita.fazenda.gov.br/SimplesNacional/Aplicacoes/ATSPO/defis.app/entrada.aspx')
while True:
try:
WebDriverWait(self.driver, 10).until(expected_conditions.presence_of_element_located((By.TAG_NAME, 'input')))
my_radios_bt = driver.find_elements_by_name('ctl00$conteudo$AnoC')
my_radios_bt[-2].click()
driver.find_element_by_id('ctl00_conteudo_lnkContinuar').click()
break
except TimeoutException:
driver.get('https://sinac.cav.receita.fazenda.gov.br/SimplesNacional/Aplicacoes/ATSPO/defis.app/entrada.aspx')
(print('sleeping'), sleep(5))
self.send_keys_anywhere(Keys.TAB, 2)
self.send_keys_anywhere(Keys.ENTER, 1)
self.contains_text(str(self.y()-1)).click()
self.contains_text('Continuar').click()
driver.implicitly_wait(10)
self.send_keys_anywhere(Keys.TAB, 9)
self.send_keys_anywhere(Keys.ENTER, 1)
self.send_keys_anywhere(Keys.TAB, 2)
self.send_keys_anywhere(Keys.ENTER, 1)
WebDriverWait(self.driver, 5)
try:
self.send_keys_anywhere(Keys.TAB, 1)
self.send_keys_anywhere(Keys.ENTER, 1)
except UnexpectedAlertPresentException:
pass
else:
# if 3 => from the whole MP
self.send_keys_anywhere(Keys.TAB, 2)
self.send_keys_anywhere(Keys.ENTER)
self.send_keys_anywhere(Keys.TAB, 1)
# Economic and tax information of the establishment
ac = ActionChains(self.driver)
for sdc in range(13):
ac.send_keys('0')
ac.send_keys(Keys.TAB)
ac.perform()
self.send_keys_anywhere(Keys.TAB, 11, pause=.1)
self.send_keys_anywhere(Keys.RIGHT)
self.send_keys_anywhere(Keys.TAB)
self.send_keys_anywhere(Keys.RIGHT)
self.send_keys_anywhere(Keys.TAB, 15, pause=.001)
self.send_keys_anywhere(Keys.ENTER)
# Reaches the default fields
print('\033[1;31m PRESS F8 to continue \033[m')
which_one = press_key_b4('f8')
now_process.kill()
print('-' * 30)
print(f'already declared {_cliente}')
print('-' * 30)
def loga_cert(self):
"""
:return: mixes the two functions above (show_actual_tk_window, mensagem)
"""
from threading import Thread
from pyautogui import hotkey
from time import sleep
driver = self.driver
while True:
try:
driver.get('https://cav.receita.fazenda.gov.br/autenticacao/login')
driver.set_page_load_timeout(30)
break
except TimeoutException:
driver.refresh()
finally:
sleep(1)
activate_window('eCAC - Centro Virtual de Atendimento')
"""
while True:
try:
driver.get('https://cav.receita.fazenda.gov.br/')
driver.set_page_load_timeout(5)
break
except TimeoutException:
driver.refresh()
finally:
sleep(1)
"""
# initial = driver.find_element_by_id('caixa1-login-certificado')
driver.get(
'https://sso.acesso.gov.br/authorize?response_type=code&client_id=cav.receita.fazenda.gov.br&'
'scope=openid+govbr_recupera_certificadox509+govbr_confiabilidades&'
'redirect_uri=https://cav.receita.fazenda.gov.br/autenticacao/login/govbrsso')
initial = driver.find_element_by_link_text('Certificado digital')
print('activating window above, logging in with the certificate below, lines 270')
sleep(2)
# self.thread_pool_executor(initial.click, [hotkey, 'enter'])
t = Thread(target=initial.click)
t.start()
tt = Thread(target=sleep(2.5))
tt.start()
# Before pressing enter, go down because of castilho's certificate
tb4 = Thread(target=hotkey('down'))
tb4.start()
tt2 = Thread(target=sleep(2))
tt2.start()
t2 = Thread(target=hotkey('enter'))
t2.start()
def loga_simples(self, CNPJ, CPF, CodSim, CLIENTE):
driver = self.driver
driver.get(
'https://www8.receita.fazenda.gov.br/SimplesNacional/controleAcesso/Autentica.aspx?id=60')
driver.get(
'https://www8.receita.fazenda.gov.br/SimplesNacional/controleAcesso/Autentica.aspx?id=60')
while str(driver.current_url.strip()).endswith('id=60'):
self.tags_wait('body')
self.tags_wait('html')
self.tags_wait('input')
# driver.find_elements_by_xpath("//*[contains(text(), 'CNPJ:')]")[0].click()
# pygui.hotkey('tab', interval=0.5)
cpcp = driver.find_element_by_name('ctl00$ContentPlaceHolder$txtCNPJ')
cpcp.clear()
cpcp.send_keys(CNPJ)
cpfcpf = driver.find_element_by_name('ctl00$ContentPlaceHolder$txtCPFResponsavel')
cpfcpf.clear()
cpfcpf.send_keys(CPF)
cod = driver.find_element_by_name('ctl00$ContentPlaceHolder$txtCodigoAcesso')
cod.clear()
cod.send_keys(CodSim)
cod_caract = driver.find_element_by_id('txtTexto_captcha_serpro_gov_br')
btn_som = driver.find_element_by_id('btnTocarSom_captcha_serpro_gov_br')
sleep(2.5)
btn_som.click()
sleep(.5)
cod_caract.click()
print(f'PRESS ENTER TO CONTINUE, {CLIENTE}')
press_key_b4('enter')
while True:
try:
submit = driver.find_element_by_xpath("//input[@type='submit']").click()
break
except (NoSuchElementException, ElementClickInterceptedException):
print('sleeping, line 167. Where is the submit button?')
driver.refresh()
sleep(5)
sleep(5)
def change_ecac_client(self, CNPJ):
""":return: vai até ao site de declaração do ECAC."""
driver = self.driver
def elem_with_text(elem, searched):
_tag = driver.find_element_by_xpath(f"//{elem}[contains(text(),'{searched.rstrip()}')]")
return _tag
self.tags_wait('html', 'span')
sleep(5)
# nextcl = elem_with_text("span", "Alterar perfil de acesso")
# nextcl.click()
btn_perfil = WebDriverWait(self.driver, 20).until(
expected_conditions.presence_of_element_located((By.ID, 'btnPerfil')))
self.click_ac_elementors(btn_perfil)
# changes the profile and sends the CNPJ
self.tags_wait('label')
cnpj = elem_with_text("label", "Procurador de pessoa jurídica - CNPJ")
cnpj.click()
sleep(.5)
self.send_keys_anywhere(CNPJ)
sleep(1)
self.send_keys_anywhere(Keys.TAB)
self.send_keys_anywhere(Keys.ENTER)
sleep(1)
# driver.find_element_by_class_name('access-button').click()
# sleep(10)
antigo = driver.current_url
"""I GOT IT"""
# switch_to.frame...
sleep(5)
driver.get(
'https://sinac.cav.receita.fazenda.gov.br/simplesnacional/aplicacoes/atspo/pgdasd2018.app/')
sleep(2.5)
driver.get(antigo)
driver.get('https://cav.receita.fazenda.gov.br/ecac/Aplicacao.aspx?id=10009&origem=menu')
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
sleep(2)
while True:
try:
driver.find_element_by_xpath('//span[@class="glyphicon glyphicon-off"]').click()
driver.refresh()
break
except ElementClickInterceptedException:
print('---> PRESS ESC TO CONTINUE <--- glyphicon-off intercepted')
press_key_b4('esc')
except NoSuchElementException:
print('---> PRESS ESC TO CONTINUE, NoSuchElement glyphicon-off')
press_key_b4('esc')
driver.get(
'https://sinac.cav.receita.fazenda.gov.br/simplesnacional/aplicacoes/atspo/pgdasd2018.app/')
driver.implicitly_wait(5)
break
sleep(3)
driver.switch_to.default_content()
"""I GOT IT"""
# reached everyone...
driver.get(
'https://sinac.cav.receita.fazenda.gov.br/simplesnacional/aplicacoes/atspo/pgdasd2018.app/')
driver.implicitly_wait(5)
def simples_and_ecac_utilities(self, option, compt):
"""
:param int option: somente de 1 a 2, sendo
:param str compt: competência
1 -> Gerar Das somente se for consolidar para outra DATA
2 -> Gerar Protocolos
:return:
"""
# estou na "declaração", aqui faço o que quiser
from datetime import datetime
now_year = str(datetime.now().year)
compt = ''.join(v for v in compt if v.isdigit())
month_compt = compt[:2]
year_compt = compt[2:]
driver = self.driver
current_url = self.current_url
link_gera_das, download_protocolos_das = 'Das/PorPa', '/Consulta'
if option == 2:
self.get_sub_site(download_protocolos_das, current_url)
driver.implicitly_wait(5)
if now_year != year_compt:
self.send_keys_anywhere(year_compt)
self.find_submit_form()
sleep(3.5)
comp_clic = driver.find_elements_by_class_name('pa')
lenc = len(comp_clic) - 1
comp_clic[lenc].click()
for i in range(3):
sleep(2)
self.send_keys_anywhere(Keys.TAB)
self.send_keys_anywhere(Keys.ENTER)
elif option == 1:
# generates the DAS
venc_month_compt = int(month_compt) + 1
venc = self.get_last_business_day_of_month(venc_month_compt, int(year_compt))
retifica_p_dia = f'{venc}{venc_month_compt:02d}{year_compt}'
self.get_sub_site(link_gera_das, current_url)
self.tags_wait('input')
driver.implicitly_wait(10)
periodo = driver.find_element_by_id('pa')
periodo.send_keys(compt)
self.find_submit_form()
sleep(2.5)
# if len(driver.find_elements_by_id('msgBox')) == 0  # IN CASE THE DAS DOES NOT EXIST
consolida = driver.find_element_by_id('btnConsolidarOutraData')
consolida.click()
sleep(2.5)
validade_id = 'txtDataValidade'
driver.execute_script(f"document.getElementById('{validade_id}').focus();")
validade_change = driver.find_element_by_id(validade_id)
for e, val in enumerate(retifica_p_dia):
validade_change.send_keys(val)
if e == 0:
sleep(.25)
sleep(1)
driver.find_element_by_id('btnDataValidade').click()
# set the validity date
# generated the DAS
driver.implicitly_wait(5)
self.find_submit_form()
# GERAR DAS
else:
tk_msg(f'Try another option, around line 550, opc: {option}')
def opta_script(self):
driver = self.driver
try:
# #################################################### opts in
self.get_sub_site('/RegimeApuracao/Optar', self.current_url)
# driver.execute_script("""window.location.href += '/RegimeApuracao/Optar'""")
anocalendario = Select(driver.find_element_by_id('anocalendario'))
anocalendario.select_by_value('2021')
self.find_submit_form()
# competence period
competencia, caixa = '0', '1'
driver.find_element_by_css_selector(f"input[type='radio'][value='{competencia}']").click()
self.find_submit_form()
sleep(2.5)
# driver.find_element_by_id('btnSimConfirm').click()
try:
driver.implicitly_wait(10)
self.click_ac_elementors(driver.find_element_by_class_name('glyphicon-save'))
except NoSuchElementException:
input('Could not do it')
else:
print('No exception was raised')
# ########################################################
except NoSuchElementException:
pass
finally:
driver.get(self.current_url)
driver.execute_script("""window.location.href += '/declaracao?clear=1'""")
sleep(2.5)
def the_print(self):
len_nome = len(self.socios_now__nome)
print(self.empresa_now)
print(f'{"CNPJ":<10}{"Nome":>10}{"CPF":>38}{"COTA":>21}{"COTA %":>10}')
total_calc = sum(int(v) for v in self.socios_now__cota)
for ins in range(len(self.socios_now__cnpj)):
soc_cnpj = self.socios_now__cnpj[ins]
soc_nome = self.socios_now__nome[ins]
soc_cpf = self.socios_now__cpf[ins]
soc_cota = self.socios_now__cota[ins]
print(f'{soc_cnpj:<16}', end='')
print(f'{soc_nome:<{40 - len_nome}}', end='')
print(f'{soc_cpf:>9}', end='')
print(f'{soc_cota:>10}', end='')
cota = int(soc_cota) / total_calc
print(' ', cota)
print(self.socios_now__tipo)
print('-' * 60)
print()
Defis()
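The ownership-share arithmetic in `the_print` above — each partner's cota divided by the quota total — can be sketched standalone; the partner names and quota values below are hypothetical, and the string-typed quotas mimic how the Excel sheet is read (every column forced to `str`):

```python
# Hypothetical partner quotas, stored as strings like the Excel reader does.
cotas = {'Partner A': '60', 'Partner B': '30', 'Partner C': '10'}

# Same computation as the_print(): total first, then each share as a fraction.
total = sum(int(v) for v in cotas.values())
shares = {name: int(v) / total for name, v in cotas.items()}
```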
# File: src/Python/Bezier.py
# Repo: rparak/Bezier_Curve_Simple (MIT license)

"""
## =========================================================================== ##
MIT License
Copyright (c) 2021 Roman Parak
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
## =========================================================================== ##
Author : Roman Parak
Email : Roman.Parak@outlook.com
Github : https://github.com/rparak
File Name: Bezier.py
## =========================================================================== ##
"""
# Numpy (Array computing) [pip3 install numpy]
import numpy as np
# Support for type hints
import typing
# Initialization of constants
CONST_NUM_OF_ENTRY_POINTS_LINEAR = 2
CONST_NUM_OF_ENTRY_POINTS_QUADRATIC = 3
CONST_NUM_OF_ENTRY_POINTS_CUBIC = 4
# Time t ∈ [0: The starting value of the sequence,
# 1: The end value of the sequence]
CONST_T_START = 0
CONST_T_STOP = 1
def Linear(num_of_samples: typing.Union[int], points: typing.Union[typing.List[int], typing.List[float]]) -> typing.Union[typing.List[int], typing.List[float]]:
    """
    Description:
        Given two control points p_{0} and p_{1} we define the linear Bézier curve to be the curve parametrized by:
            p(t) = (1 - t)*p_{0} + t*p_{1}, t ∈ [0, 1]

    Args:
        (1) num_of_samples [INT]: Number of samples to generate. Must be non-negative.
        (2) points [p_{0, 1}] [Int/Float Matrix]: Multiple points to create a curve.

    Returns:
        (1) parameter [Int/Float Matrix]: Resulting points of the curve.

    Example:
        res = Linear(num_of_samples, points),
        where points are equal to [[px_id_0, py_id_0], [px_id_1, py_id_1]] in 2D space
        and [[px_id_0, py_id_0, pz_id_0], [px_id_1, py_id_1, pz_id_1]] in 3D space
    """

    try:
        assert len(points) == CONST_NUM_OF_ENTRY_POINTS_LINEAR
        assert num_of_samples >= 0

        # Return evenly spaced numbers over a specified interval.
        t = np.linspace(CONST_T_START, CONST_T_STOP, num_of_samples)

        return [(1 - t) * p[0] + t * p[1]
                for _, p in enumerate(np.transpose(points))]
    except AssertionError as error:
        print('[ERROR] Insufficient number of entry points.')
        print('[ERROR] The correct number of entry points is %d.' % CONST_NUM_OF_ENTRY_POINTS_LINEAR)
        print('[ERROR] The number of samples must not be negative.')
def Quadratic(num_of_samples: typing.Union[int], points: typing.Union[typing.List[int], typing.List[float]]) -> typing.Union[typing.List[int], typing.List[float]]:
    """
    Description:
        Given three control points p_{0}, p_{1} and p_{2} we define the quadratic Bézier curve (degree 2 Bézier curve)
        to be the curve parametrized by:
            p(t) = ((1 - t)^2)*p_{0} + 2*t*(1 - t)*p_{1} + (t^2)*p_{2}, t ∈ [0, 1]

    Args:
        (1) num_of_samples [INT]: Number of samples to generate. Must be non-negative.
        (2) points [p_{0, 1, 2}] [Int/Float Matrix]: Multiple points to create a curve.

    Returns:
        (1) parameter [Int/Float Matrix]: Resulting points of the curve.

    Example:
        res = Quadratic(num_of_samples, points),
        where points are equal to [[px_id_0, py_id_0], [px_id_1, py_id_1], [px_id_2, py_id_2]] in 2D space
        and [[px_id_0, py_id_0, pz_id_0], [px_id_1, py_id_1, pz_id_1], [px_id_2, py_id_2, pz_id_2]] in 3D space
    """

    try:
        assert len(points) == CONST_NUM_OF_ENTRY_POINTS_QUADRATIC
        assert num_of_samples >= 0

        # Return evenly spaced numbers over a specified interval.
        t = np.linspace(CONST_T_START, CONST_T_STOP, num_of_samples)

        return [(1 - t)**2 * p[0] + 2 * t * (1 - t) * p[1] + t**2 * p[2]
                for _, p in enumerate(np.transpose(points))]
    except AssertionError as error:
        print('[ERROR] Insufficient number of entry points.')
        print('[ERROR] The correct number of entry points is %d.' % CONST_NUM_OF_ENTRY_POINTS_QUADRATIC)
        print('[ERROR] The number of samples must not be negative.')
def Cubic(num_of_samples: int, points: typing.Union[typing.List[int], typing.List[float]]) -> typing.Union[typing.List[int], typing.List[float]]:
    """
    Description:
        Given four control points p_{0}, p_{1}, p_{2} and p_{3}, we define the cubic Bézier curve (degree 3 Bézier curve)
        to be the curve parametrized by:
            p(t) = ((1 - t)^3)*p_{0} + 3*t*((1 - t)^2)*p_{1} + (3*t^2)*(1 - t)*p_{2} + (t^3)*p_{3}, t ∈ [0, 1]
    Args:
        (1) num_of_samples [INT]: Number of samples to generate. Must be non-negative.
        (2) points [p_{0, 1, 2, 3}] [Int/Float Matrix]: Multiple points to create a curve.
    Returns:
        (1) parameter [Int/Float Matrix]: Resulting points of the curve.
    Example:
        res = Cubic(num_of_samples, points),
        where points are equal to [[px_id_0, py_id_0], [px_id_1, py_id_1], [px_id_2, py_id_2], [px_id_3, py_id_3]] in 2D space
        and [[px_id_0, py_id_0, pz_id_0], [px_id_1, py_id_1, pz_id_1], [px_id_2, py_id_2, pz_id_2], [px_id_3, py_id_3, pz_id_3]] in 3D space
    """
    try:
        assert len(points) == CONST_NUM_OF_ENTRY_POINTS_CUBIC
        assert num_of_samples >= 0

        # Return evenly spaced numbers over a specified interval.
        t = np.linspace(CONST_T_START, CONST_T_STOP, num_of_samples)

        return [((1 - t)**3) * p[0] + (3 * t * (1 - t)**2) * p[1] + 3 * (t**2) * (1 - t) * p[2] + (t**3) * p[3]
                for p in np.transpose(points)]
    except AssertionError:
        print('[ERROR] Insufficient number of entry points.')
        print('[ERROR] The correct number of entry points is %d.' % CONST_NUM_OF_ENTRY_POINTS_CUBIC)
        print('[ERROR] The number of samples must not be negative.')
class N_Degree(object):
    """
    Description:
        Class for efficient solution of an N-degree Bézier curve.
    Note:
        A Bézier curve is a parametric curve used in computer graphics and related fields.
    Initialization of the Class:
        Input:
            (1) num_of_samples [INT]: Number of samples to generate. Must be non-negative.
        Example:
            Initialization:
                Cls = N_Degree(num_of_samples)
            Calculation:
                res = Cls.Solve(p, simplification_factor)
            where p is equal to [[px_id_0, py_id_0], .., [px_id_n, py_id_n]] in 2D space
            and [[px_id_0, py_id_0, pz_id_0], .., [px_id_n, py_id_n, pz_id_n]] in 3D space
    """
    def __init__(self, num_of_samples: int) -> None:
        # << PUBLIC >> #
        try:
            assert num_of_samples >= 0

            # Return evenly spaced numbers over a specified interval.
            self.t = np.linspace(CONST_T_START, CONST_T_STOP, num_of_samples)
        except AssertionError:
            print('[ERROR] The number of samples must not be negative.')

        # << PRIVATE >> #
        # Points [Float Matrix]
        self.__points = []
        # Number of samples to generate
        self.__num_of_samples = num_of_samples
    @staticmethod
    def __path_simplification(points, simplification_factor):
        """
        Description:
            Function to simplify the path through the simplification factor. The first and last points do not change;
            the others are kept or dropped depending on the factor coefficient.
        Example:
            Input Points:
                points = [1.0, 1.0], [1.25, 2.0], [1.75, 2.0], [2.0, 1.0], [1.0, -1.0], [1.25, -2.0], [1.75, -2.0], [2.0, -1.0]
            Number of points:
                n = 8
            Simplification Factor:
                1\ Example:
                    simplification_factor = 1
                    points_new = [1.0, 1.0], [1.25, 2.0], [1.75, 2.0], [2.0, 1.0], [1.0, -1.0], [1.25, -2.0], [1.75, -2.0], [2.0, -1.0]
                    n = 8
                2\ Example:
                    simplification_factor = 2
                    points_new = [1.0, 1.0], [None], [1.75, 2.0], [None], [1.0, -1.0], [None], [1.75, -2.0], [2.0, -1.0]
                    points_new = [1.0, 1.0], [1.75, 2.0], [1.0, -1.0], [1.75, -2.0], [2.0, -1.0]
                    n = 5
        Args:
            (1) points [p_{0, .., n}] [Int/Float Matrix]: Multiple points to create a curve.
            (2) simplification_factor [INT]: Simplification factor used to simplify the path.
        Returns:
            (1) parameter [Int/Float Matrix]: New simplified matrix of points to create a curve.
        """
        points_aux = []
        points_aux.append(points[0])

        for i in range(1, len(points) - 1):
            if i % simplification_factor == 0:
                points_aux.append(points[i])

        if points_aux[-1] != points[-1]:
            points_aux.append(points[-1])

        return points_aux
    @staticmethod
    def __binomial_coefficient(n, k):
        """
        Description:
            Calculation of the binomial coefficient C(n k) from a pair of integers n ≥ k ≥ 0. The binomial
            coefficients are the positive integers that occur as coefficients in the binomial theorem.
                (n k) = n! / (k! * (n - k)!)
            Simplification of the calculation:
                (n k) = ((n - k + 1) * (n - k + 2) * ... * (n - 1) * (n)) / (1 * 2 * ... * (k - 1) * k)
        Args:
            (1) n [INT]: Integer number 1 (numerator)
            (2) k [INT]: Integer number 2 (denominator)
        Returns:
            (1) parameter [INT]: Binomial coefficient C(n k).
        """
        try:
            assert n >= k

            if k == 0:
                return 1
            elif k == 1:
                return n
            else:
                c_nk = 1
                # Calculation from the simplified equation
                for i in range(0, k):
                    c_nk *= (n - i)   # numerator
                    c_nk /= (i + 1)   # denominator
                return c_nk
        except AssertionError:
            print('[ERROR] The number n must be larger than or equal to k.')
            return 0
    def __n_index_curve(self, i, point, n, c_ni):
        """
        Description:
            Given n + 1 control points p_{0}, p_{1}, ..., p_{n} we define the degree n Bézier curve to
            be the curve parametrized by (De Casteljau's algorithm):
                p(t) = sum(i = 0 -> n) C(n i) * (t ^ i) * ((1 - t) ^ (n - i)) * p_{i}, t ∈ [0, 1]
            where C(n i) is a binomial coefficient.
        Args:
            (1) i [INT]: Iteration.
            (2) point [Int/Float Matrix]: Point (2D/3D) in iteration (i).
            (3) n [INT]: Degree of the curve (number of points - 1).
            (4) c_ni [INT]: Binomial coefficient C(n i) in iteration (i).
        Returns:
            (1) parameter [Int/Float Matrix]: Results of curve values in iteration (i).
        """
        return [c_ni * (self.t**i) * ((1 - self.t)**(n - i)) * p
                for p in point]
    def __n_degree(self):
        """
        Description:
            The main control function for creating a Bézier curve of degree n.
        Returns:
            (1) parameter [{0 .. Number of dimensions - 1}] [Int/Float Matrix]: Resulting points of the curve.
        """
        # Degree of the curve (number of points in the matrix - 1)
        n = len(self.__points) - 1

        # Calculation of the binomial coefficient for the first iteration
        c_nk = self.__binomial_coefficient(n, 0)
        # Calculation of the first curve positions
        result = self.__n_index_curve(0, self.__points[0], n, c_nk)

        for i in range(1, n + 1):
            # Binomial coefficient in iteration (i)
            c_ni = self.__binomial_coefficient(n, i)
            # Calculation of positions in iteration (i)
            aux_result = self.__n_index_curve(i, self.__points[i], n, c_ni)

            # The sum of all positions for the resulting Bézier curve
            for j in range(0, len(aux_result)):
                result[j] += aux_result[j]

        return result
    def Solve(self, points: typing.Union[typing.List[int], typing.List[float]], simplification_factor: int) -> typing.Union[typing.List[int], typing.List[float]]:
        """
        Description:
            Function for automatic calculation of a suitably selected Bézier curve.
        Args:
            (1) points [p_{0, .., n}] [Int/Float Matrix]: Multiple points to create a curve.
            (2) simplification_factor [INT]: Simplification factor used to simplify the path.
        Returns:
            (1) parameter [Int/Float Matrix]: Resulting points of the curve.
        """
        try:
            assert len(points) > 1

            # If simplification_factor > 1 and there are more than 4 input points, the path is first
            # simplified; with enough points removed, one of the faster closed-form methods below
            # (Cubic/Quadratic/Linear) may be selected instead of the general n-degree calculation.
            if simplification_factor > 1 and len(points) > 4:
                self.__points = self.__path_simplification(points, simplification_factor)
            else:
                self.__points = points

            # Select the calculation method based on the number of points in the matrix (p).
            if len(self.__points) > 4:
                return self.__n_degree()
            if len(self.__points) == 4:
                return Cubic(self.__num_of_samples, self.__points)
            elif len(self.__points) == 3:
                return Quadratic(self.__num_of_samples, self.__points)
            elif len(self.__points) == 2:
                return Linear(self.__num_of_samples, self.__points)
        except AssertionError:
            print('[ERROR] Insufficient number of entry points.')
            print('[ERROR] The minimum number of entry points is %d.' % CONST_NUM_OF_ENTRY_POINTS_LINEAR)
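For reference, a minimal self-contained sketch of how the linear case above behaves. The `CONST_*` values here are assumptions (their definitions are outside this chunk); in the module they bound `t` to [0, 1] and fix the expected point count:

```python
import numpy as np

# Assumed values for the module-level constants used above.
CONST_T_START, CONST_T_STOP = 0.0, 1.0

def linear_bezier(num_of_samples, points):
    # Same computation as Linear(): interpolate each coordinate axis separately.
    t = np.linspace(CONST_T_START, CONST_T_STOP, num_of_samples)
    return [(1 - t) * p[0] + t * p[1] for p in np.transpose(points)]

# Five samples on the segment from (0, 0) to (4, 8).
x, y = linear_bezier(5, [[0.0, 0.0], [4.0, 8.0]])
print(x)  # [0. 1. 2. 3. 4.]
print(y)  # [0. 2. 4. 6. 8.]
```

Each returned array holds one coordinate axis sampled along the curve, which is why the functions transpose `points` before iterating.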
# ==== dist/weewx-4.5.1/examples/basic/install.py (repo: v0rts/docker-weewx, license: Apache-2.0) ====
# installer for the 'basic' skin
# Copyright 2014 Matthew Wall

from weecfg.extension import ExtensionInstaller


def loader():
    return BasicInstaller()


class BasicInstaller(ExtensionInstaller):
    def __init__(self):
        super(BasicInstaller, self).__init__(
            version="0.1",
            name='basic',
            description='Very basic skin for weewx.',
            author="Matthew Wall",
            author_email="mwall@users.sourceforge.net",
            config={
                'StdReport': {
                    'basic': {
                        'skin': 'basic',
                        'HTML_ROOT': 'basic',
                        'Extras': {
                            'current': 'INST_SKIN_ROOT/basic/current.inc',
                            'hilo': 'INST_SKIN_ROOT/basic/hilo.inc'}}}},
            files=[('skins/basic',
                    ['skins/basic/basic.css',
                     'skins/basic/current.inc',
                     'skins/basic/favicon.ico',
                     'skins/basic/hilo.inc',
                     'skins/basic/index.html.tmpl',
                     'skins/basic/skin.conf']),
                   ])
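For orientation, the `config` dictionary above maps onto weewx's ConfigObj-style `weewx.conf`. The merged stanza should look roughly like this (a sketch of the expected result, not copied from the repo):

```
[StdReport]
    [[basic]]
        skin = basic
        HTML_ROOT = basic
        [[[Extras]]]
            current = INST_SKIN_ROOT/basic/current.inc
            hilo = INST_SKIN_ROOT/basic/hilo.inc
```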
# ==== Code/ReceiverZX.py (repo: eastOffice/MsgBrokerTest, license: MIT) ====
#!/usr/bin/env python
import pika
import random
import time
import sys
import datetime

import QoECurve

'''
MsgBroker Configuration
'''
max_priority = 250

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

c_properties = dict()
c_properties['x-max-priority'] = max_priority
channel.queue_declare(queue='hello', durable=False, arguments=c_properties)

'''
Msg Handler
'''
def datahandler(body):
    # print(str(body))
    message_body = str(body).split()
    message_body[0] = message_body[0].strip('b\'')  # time when the request was generated
    message_body[1] = message_body[1].strip('\'')   # non-back-end delay of the request
    # message_body[2] = message_body[2].strip('\'')  # message index
    # message_index = int(message_body[2])
    now_time = int(round(time.time() * 1000))
    e2e_latency = now_time - int(message_body[0]) + int(message_body[1]) + 0.0  # total e2e delay
    # back_end_zero = int(message_body[1])  # only non-back-end delay
    sa, sb = QoECurve.QoECurve(e2e_latency)
    # sa_d, sb_d = QoECurve.QoECurve(back_end_zero)
    print(sa)
    time.sleep(0.005)


def callback(ch, method, properties, body):
    # print(" [x] Received " + str(body) + ' ' + str(datetime.datetime.now()))
    datahandler(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='hello')

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
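The parsing in `datahandler` is easiest to see on a synthetic message. A self-contained sketch follows; the field meanings are taken from the comments above, and the two-field body format is an assumption about what the sender emits:

```python
import time

def parse_e2e_latency(body):
    # Mirrors datahandler: strip the bytes-literal artifacts left by str(body),
    # then combine transit time with the reported non-back-end delay.
    fields = str(body).split()
    sent_ms = int(fields[0].strip("b'"))
    extra_delay_ms = int(fields[1].strip("'"))
    now_ms = int(round(time.time() * 1000))
    return now_ms - sent_ms + extra_delay_ms

# A message "sent" 30 ms ago with a reported 7 ms internal delay.
body = ("%d 7" % (int(round(time.time() * 1000)) - 30)).encode()
print(parse_e2e_latency(body))  # roughly 37 (ms)
```

Note that `str(body)` on a bytes object yields `"b'...'"`, which is why the original strips `b` and quote characters instead of calling `body.decode()`.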
# ==== problem12.py (repo: rentes/Euler, license: MIT) ====
"""Project Euler - Problem 12 - http://projecteuler.net/problem=12"""
import sys
import time

import tools.timeutils as timeutils


def number_of_factors(n):
    """
    Returns the number of factors of number n,
    counting them by trial division with an early exit
    """
    max_limit = 0
    nr_factors = 2  # 1 and n are always factors
    for m in range(2, n):
        if n % m == 0:
            # found a new factor
            nr_factors += 1
            # I only have to divide n by m until m reaches the result of
            # the quotient of the first factor encountered
            # for example: consider number 28. the first factor is 2 and
            # the quotient gives 14, since 28 / 2 = 14. 14 is then the max
            # limit where m has to increase to, because we know for sure that
            # any m > 14 will not be a factor of 28, and we break the cycle
            # when this condition occurs. This way we only have to make fewer
            # divisions to find out all the factors of number n
            quotient = int(n / m)
            if max_limit < quotient:
                max_limit = quotient
        # only break once a factor has set max_limit; otherwise numbers
        # without small factors (e.g. 9) would be cut off too early
        if max_limit and m > max_limit:
            break
    return nr_factors


def main():
    """Main entry point for the script"""
    start = time.time()
    triangular_number = 1
    n = 2
    while number_of_factors(triangular_number) <= 500:
        triangular_number += n
        n += 1
    timeutils.elapsed_time(time.time() - start)
    print(triangular_number)


def test_number_of_factors():
    """Testing the number of factors method [problem 12]"""
    assert number_of_factors(28) == 6
    assert number_of_factors(76576500) > 500


if __name__ == '__main__':
    sys.exit(main())
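The early-exit trick in `number_of_factors` still scans linearly up to n divided by its smallest factor. A common alternative is to count divisors in pairs up to the square root of n, which gives the same answers much faster (a sketch, not part of the original solution):

```python
def number_of_factors_sqrt(n):
    # Every divisor m with m*m < n pairs with the distinct divisor n // m.
    count = 0
    m = 1
    while m * m < n:
        if n % m == 0:
            count += 2
        m += 1
    if m * m == n:
        count += 1  # a perfect square's root pairs with itself
    return count

print(number_of_factors_sqrt(28))        # 6
print(number_of_factors_sqrt(76576500))  # 576
```

With 576 divisors, 76576500 comfortably clears the problem's 500-divisor threshold.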
# ==== external_code/Correlations_pipeline/MultivariateXWASCorr.py (repo: SamuelDiai/Dash-Website, license: MIT) ====
from scipy import stats
import pandas as pd
import numpy as np
path_mutlivariate_feat_imps = '/n/groups/patel/samuel/EWAS/feature_importances_paper/'
Environmental = ['Clusters_Alcohol', 'Clusters_Diet', 'Clusters_Education', 'Clusters_ElectronicDevices',
'Clusters_Employment', 'Clusters_FamilyHistory', 'Clusters_Eyesight', 'Clusters_Mouth',
'Clusters_GeneralHealth', 'Clusters_Breathing', 'Clusters_Claudification', 'Clusters_GeneralPain',
'Clusters_ChestPain', 'Clusters_CancerScreening', 'Clusters_Medication', 'Clusters_Hearing',
'Clusters_Household', 'Clusters_MentalHealth', 'Clusters_OtherSociodemographics',
'Clusters_PhysicalActivityQuestionnaire', 'Clusters_SexualFactors', 'Clusters_Sleep', 'Clusters_SocialSupport',
'Clusters_SunExposure', 'Clusters_EarlyLifeFactors', 'Clusters_Smoking']
Biomarkers = ['Clusters_PhysicalActivity', 'Clusters_HandGripStrength', 'Clusters_BrainGreyMatterVolumes', 'Clusters_BrainSubcorticalVolumes',
'Clusters_HeartSize', 'Clusters_HeartPWA', 'Clusters_ECGAtRest', 'Clusters_AnthropometryImpedance',
'Clusters_UrineBiochemistry', 'Clusters_BloodBiochemistry', 'Clusters_BloodCount',
'Clusters_EyeAutorefraction', 'Clusters_EyeAcuity', 'Clusters_EyeIntraoculaPressure',
'Clusters_BraindMRIWeightedMeans', 'Clusters_Spirometry', 'Clusters_BloodPressure',
'Clusters_AnthropometryBodySize', 'Clusters_ArterialStiffness', 'Clusters_CarotidUltrasound',
'Clusters_BoneDensitometryOfHeel', 'Clusters_HearingTest', 'Clusters_CognitiveFluidIntelligence', 'Clusters_CognitiveMatrixPatternCompletion',
'Clusters_CognitiveNumericMemory', 'Clusters_CognitivePairedAssociativeLearning', 'Clusters_CognitivePairsMatching', 'Clusters_CognitiveProspectiveMemory',
'Clusters_CognitiveReactionTime', 'Clusters_CognitiveSymbolDigitSubstitution', 'Clusters_CognitiveTowerRearranging', 'Clusters_CognitiveTrailMaking']
Pathologies = ['medical_diagnoses_%s' % letter for letter in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J',
                                                              'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T',
                                                              'U', 'V', 'W', 'X', 'Y', 'Z']]
Clusters = []
All = Environmental + Biomarkers + Pathologies #+ ['Genetics']
organs = ['\*', '*instances01', '*instances1.5x', '*instances23', 'Abdomen' , 'AbdomenLiver' , 'AbdomenPancreas' , 'Arterial' , 'ArterialCarotids' , 'ArterialPulseWaveAnalysis' , 'Biochemistry' , 'BiochemistryBlood' , 'BiochemistryUrine' , 'Brain' , 'BrainCognitive' , 'BrainMRI' , 'Eyes' , 'EyesAll' , 'EyesFundus' , 'EyesOCT' , 'Hearing' , 'Heart' , 'HeartECG' , 'HeartMRI' , 'ImmuneSystem' , 'Lungs' , 'Musculoskeletal' , 'MusculoskeletalFullBody' , 'MusculoskeletalHips' , 'MusculoskeletalKnees' , 'MusculoskeletalScalars' , 'MusculoskeletalSpine' , 'PhysicalActivity']
path_heritability = '/n/groups/patel/Alan/Aging/Medical_Images/GWAS_hits_Age'
def Create_data(corr_type, model):
    df_corr_env = pd.DataFrame(columns=['env_dataset', 'organ_1', 'organ_2', 'corr', 'sample_size'])
    for env_dataset in All:
        print("Env dataset : ", env_dataset)
        for organ_1 in organs:
            try:
                df_1 = pd.read_csv(path_mutlivariate_feat_imps + 'FeatureImp_%s_%s_%s.csv' % (env_dataset, organ_1, model)).set_index('features').fillna(0)
            except FileNotFoundError:
                continue
            for organ_2 in organs:
                try:
                    df_2 = pd.read_csv(path_mutlivariate_feat_imps + 'FeatureImp_%s_%s_%s.csv' % (env_dataset, organ_2, model)).set_index('features').fillna(0)
                except FileNotFoundError:
                    continue
                try:
                    if corr_type == 'Spearman':
                        corr, _ = stats.spearmanr(df_1.weight, df_2.weight)
                    elif corr_type == 'Pearson':
                        corr, _ = stats.pearsonr(df_1.weight, df_2.weight)
                except ValueError:
                    # The two feature sets differ in length; re-align on the shared features first.
                    commun_indexes = df_1.weight.index.intersection(df_2.weight.index)
                    if corr_type == 'Spearman':
                        corr, _ = stats.spearmanr(df_1.weight.loc[commun_indexes], df_2.weight.loc[commun_indexes])
                    elif corr_type == 'Pearson':
                        corr, _ = stats.pearsonr(df_1.weight.loc[commun_indexes], df_2.weight.loc[commun_indexes])
                sample_size = len(df_1.weight)
                df_corr_env = df_corr_env.append({'env_dataset': env_dataset, 'organ_1': organ_1, 'organ_2': organ_2, 'corr': corr, 'sample_size': sample_size}, ignore_index=True)
    df_corr_env.to_csv('/n/groups/patel/samuel/EWAS/Correlations/CorrelationsMultivariate_%s_%s.csv' % (corr_type, model))


for model in ['LightGbm', 'ElasticNet', 'NeuralNetwork']:
    for corr_type in ['Pearson', 'Spearman']:
        Create_data(corr_type, model)
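The `ValueError` fallback in `Create_data` exists because two feature-importance files can cover different feature sets, and `spearmanr`/`pearsonr` reject unequal-length inputs. A small self-contained illustration of the re-alignment (the feature names and weights here are made up):

```python
import pandas as pd
from scipy import stats

s1 = pd.Series([0.5, 0.3, 0.2], index=['age', 'bmi', 'smoking'])
s2 = pd.Series([0.6, 0.3, 0.1, 0.05], index=['age', 'bmi', 'smoking', 'alcohol'])

# Restrict both series to the shared features before correlating.
common = s1.index.intersection(s2.index)
corr, _ = stats.spearmanr(s1.loc[common], s2.loc[common])
print(corr)  # 1.0: identical rank ordering on the shared features
```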
# ==== dataset2/Channel-PFLocalization-DataSet2.py (repo: herolab-uga/pf-doa-localization, license: MIT) ====
import math
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as pb
import random
from datetime import datetime
import time
import sys
import csv
def dist(x, y, pos):
    return math.sqrt((pos[0] - x)**2 + (pos[1] - y)**2)
areaSize=(10, 10)
node_pos=[(0,0),(10,0),(10,10),(0,10)]
centroid = (sum([node_pos[i][0] for i in range(len(node_pos))])/len(node_pos) , sum([node_pos[i][1] for i in range(len(node_pos))])/len(node_pos))
possible_x = list(range(10, 90))
possible_y = list(range(10, 90))
num_particles = 200
def gen_wifi(freq=2.4, power=20, trans_gain=0, recv_gain=0, size=areaSize, pos=(5, 5), shadow_dev=1, n=2, rss0=-40, noise=1):
    if pos is None:
        pos = (random.randrange(size[0]), random.randrange(size[1]))
    random.seed(datetime.now())
    # Log-normal shadowing term, one sample per grid cell.
    normal_dist = np.random.normal(0, shadow_dev, size=[size[0] + 1, size[1] + 1])
    rss = []
    random.seed(datetime.now())
    for x in range(0, 4):
        distance = dist(node_pos[x][0], node_pos[x][1], pos)
        val = rss0 - 10 * n * math.log10(distance) + normal_dist[int(pos[0])][int(pos[1])]
        rss.append(val - noise * random.random())
    return rss
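`gen_wifi` follows the log-distance path-loss model, rss = rss0 - 10*n*log10(d), plus a shadowing term. With the defaults (rss0 = -40 dBm, exponent n = 2), each doubling of distance costs about 6 dB, which is easy to check by hand:

```python
import math

rss0, n = -40, 2  # defaults used by gen_wifi above
for d in (1, 2, 4, 8):
    rss = rss0 - 10 * n * math.log10(d)
    print(d, round(rss, 2))
# 1 -40.0, 2 -46.02, 4 -52.04, 8 -58.06
```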
rssi_dict = []
for i in range(4):
    with open(sys.argv[1] + "s" + str(i) + ".csv") as f:
        dict_from_csv = [{k: v for k, v in row.items()}
                         for row in csv.DictReader(f, delimiter=';', skipinitialspace=True)]
        rssi_dict.append(dict_from_csv)

min_length = len(rssi_dict[0])
for i in range(1, 4):
    if len(rssi_dict[i]) < min_length:
        min_length = len(rssi_dict[i])
RSS0 = -47
overall_rss={}
original_tragectory={}
path_loss_list = []
received_signal_log = []
for i in range(min_length):
    x, y = float(rssi_dict[0][i]['x']), float(rssi_dict[0][i]['y'])
    # if (x, y) != Previous_pos:
    random.seed(datetime.now())
    rss = [int(rssi_dict[0][i]['rssi']) - random.random(),
           int(rssi_dict[1][i]['rssi']) - random.random(),
           int(rssi_dict[2][i]['rssi']) - random.random(),
           int(rssi_dict[3][i]['rssi']) - random.random()]
    # if rssi_dict[0][i]['channel'] == rssi_dict[1][i]['channel'] == rssi_dict[2][i]['channel'] == rssi_dict[3][i]['channel']:
    if rssi_dict[3][i]['channel'] in overall_rss:
        if (x, y) in overall_rss[rssi_dict[3][i]['channel']]:
            overall_rss[rssi_dict[3][i]['channel']][(x, y)].append(rss)
        else:
            overall_rss[rssi_dict[3][i]['channel']][(x, y)] = [rss]
        # original_tragectory[rssi_dict[3][i]['channel']].append((x, y))
    else:
        overall_rss[rssi_dict[3][i]['channel']] = {(x, y): [rss]}
    # Commented-out calibration of RSS0 and the path-loss exponent from measured distances:
    # if float(rssi_dict[0][i]['distance']) > 1.4 and float(rssi_dict[0][i]['distance']) < 1.5:
    #     RSS0 = int(rssi_dict[0][i]['rssi'])
    # elif float(rssi_dict[1][i]['distance']) > 1.4 and float(rssi_dict[1][i]['distance']) < 1.5:
    #     RSS0 = int(rssi_dict[1][i]['rssi'])
    # elif float(rssi_dict[2][i]['distance']) > 1.4 and float(rssi_dict[2][i]['distance']) < 1.5:
    #     RSS0 = int(rssi_dict[2][i]['rssi'])
    # elif float(rssi_dict[3][i]['distance']) > 1.4 and float(rssi_dict[3][i]['distance']) < 1.5:
    #     RSS0 = int(rssi_dict[3][i]['rssi'])
    # for j in range(4):
    #     path_loss_list.append(20 - rss[j])
    #     received_signal_log.append(10 * math.log10(float(rssi_dict[j][i]['distance'])))
    # Previous_pos = (x, y)
# average_path_loss = np.average(path_loss_list)
# average_received_signal_log = np.average(received_signal_log)
# nominator = 0
# demonimator = 0
# for i in range(len(path_loss_list)):
# nominator += (path_loss_list[i] - average_path_loss)*(received_signal_log[i] - average_received_signal_log)
# demonimator += math.pow((received_signal_log[i] - average_received_signal_log),2)
# pathloss_exponent = nominator / demonimator
# doa=[]
# for i in range(0,len(overall_rss)):
# inner_curr = i
# limit = i-500 if i>500 else 0
# est_sin_sum = 0
# est_cos_sum = 0
# starting_curr = inner_curr
# weight_sum = 0
# # average estimated DoA calculated
# while inner_curr >= limit:
# gx = ((overall_rss[i][1]-overall_rss[i][0])/2) + ((overall_rss[i][2]-overall_rss[i][3])/2)
# gy = ((overall_rss[i][2]-overall_rss[i][1])/2) + ((overall_rss[i][3]-overall_rss[i][0])/2)
# estimated_grad=np.arctan(gy/gx)
# if estimated_grad > math.pi:
# estimated_grad = -2 * math.pi + estimated_grad
# elif estimated_grad < -math.pi:
# estimated_grad = math.pi - abs(-math.pi - estimated_grad)
# weight = 0.99 ** (inner_curr - starting_curr)
# weight_sum += weight
# estimated_grad = weight * estimated_grad
# est_sin_sum += math.sin(estimated_grad)
# est_cos_sum += math.cos(estimated_grad)
# inner_curr -= 1
# avg_est_sin = est_sin_sum / weight_sum
# avg_est_cos = est_cos_sum / weight_sum
# avg_grad = math.atan2(avg_est_sin, avg_est_cos)
# doa.append(avg_grad)
resultFile = open("error_boundry_particleFilter_full" + sys.argv[1].split("/")[1] + ".csv", "a")  # append mode
resultFile.write("Channel," + "Mean," + "StDev" + "\n")
for channel in overall_rss:
    # if int(channel) % 5 == 0:
    print("---------Channel %s--------" % channel)
    poses = overall_rss[channel]
    random.seed(datetime.now())
    previous_errors = []
    distance_error = []
    particles = []
    times = []
    # num_particles = len(particles)
    # print("Number of particle filters", num_particles)
    for original_pos in poses:
        rss_values = np.average(poses[original_pos], axis=0)
        # print(rss_values)
        for p in range(num_particles):
            particles.append((random.choice(possible_x) / 10, random.choice(possible_y) / 10))
        start_time = time.time()
        positions = []
        errors = []
        weights = []
        error = 0
        # DoA estimate from the RSS gradient across the four anchor nodes.
        gx = (((rss_values[1] - rss_values[0]) / 20) + ((rss_values[2] - rss_values[3]) / 20)) / 2
        gy = (((rss_values[2] - rss_values[1]) / 20) + ((rss_values[3] - rss_values[0]) / 20)) / 2
        estimated_doa = math.atan2(gy, gx)
        for particle in particles:
            x, y = particle[0], particle[1]
            adoa = math.atan2(y - centroid[1], x - centroid[0])
            error = adoa - estimated_doa
            if len(previous_errors) > 2:
                std_error = np.std(previous_errors)
            else:
                std_error = 0.01
            # Gaussian likelihood of the current DoA error...
            omega = (1 / (std_error * math.sqrt(2 * math.pi))) * math.pow(math.e, -(math.pow(error, 2) / (2 * (std_error ** 2))))
            # ...combined with the likelihoods of the most recent errors.
            for j in range(len(previous_errors) - 1, len(previous_errors) - 4 if len(previous_errors) > 5 else 0, -1):
                omega = omega * (1 / (std_error * math.sqrt(2 * math.pi))) * math.pow(math.e, -(math.pow(previous_errors[j], 2) / (2 * (std_error ** 2))))
            weights.append(omega)
            positions.append((x, y))
            errors.append(error)
        # Normalize the weights and keep the maximum-weight particle as the estimate.
        sum_weight = np.sum(weights)
        if sum_weight == 0:
            pass
        for j in range(0, len(weights)):
            weights[j] = weights[j] / sum_weight
        max_weight = max(weights)
        max_index = weights.index(max_weight)
        pos = positions[max_index]
        previous_errors.append(errors[max_index])
        # print("Actual position: ", original_pos, "Predicted Position: ", pos, "DOA: ", estimated_doa * 180 / math.pi, "ADOA: ", (errors[max_index] + estimated_doa) * 180 / math.pi, "Error: ", errors[max_index])
        distance_error.append(dist(pos[0], pos[1], original_pos))
        times.append(time.time() - start_time)
    distcumulativeEror = np.sum(distance_error)
    distmeanError = np.average(distance_error)
    distStandardDeviationError = np.std(distance_error)
    print("--- Average Computation Time per Iteration : %s seconds ---" % (np.average(times)))
    # print("rss0", RSS0, "path loss exponent: ", pathloss_exponent)
    print("DIST_ERROR: Cumulative Error: " + str(distcumulativeEror) + "\tMean Error: " + str(distmeanError) + "\tStandard Deviation: " + str(distStandardDeviationError))
    resultFile.write(str(channel) + "," + str(distmeanError) + "," + str(distStandardDeviationError) + "\n")
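One note on the filter above: particles are re-drawn uniformly for every position and only the single maximum-weight particle is kept, so the computed weights never propagate belief between steps. A standard refinement is to resample the particle set from the normalized weights; below is a self-contained sketch of systematic resampling (an addition, not part of the original script):

```python
import numpy as np

def systematic_resample(particles, weights, seed=None):
    # Draw len(particles) indexes with probability proportional to weight,
    # using one uniform offset and evenly spaced sampling positions.
    rng = np.random.default_rng(seed)
    n = len(particles)
    sample_positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indexes = np.searchsorted(cumulative, sample_positions)
    return [particles[i] for i in indexes]

particles = [(0.0, 0.0), (5.0, 5.0), (9.0, 9.0)]
weights = [0.1, 0.8, 0.1]
resampled = systematic_resample(particles, weights, seed=0)
print(resampled)  # the high-weight particle dominates the new set
```

Feeding the resampled set (plus a small jitter) into the next position's weighting step would let the filter concentrate particles near the current estimate instead of restarting from a uniform prior each time.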
# ==== test/connectivity/acts/tests/google/bt/native/BtNativeTest.py (repo: Keneral/atools, license: Unlicense) ====
#!/usr/bin/env python3.4
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy of
# the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under
# the License.
import time
from acts.base_test import BaseTestClass
from acts.controllers import native_android_device
from acts.test_utils.bt.native_bt_test_utils import setup_native_bluetooth
from acts.test_utils.bt.bt_test_utils import generate_id_by_size
class BtNativeTest(BaseTestClass):
    tests = None

    def __init__(self, controllers):
        BaseTestClass.__init__(self, controllers)
        setup_native_bluetooth(self.native_devices)
        self.droid = self.native_devices[0].droid
        self.tests = ("test_binder_get_name",
                      "test_binder_get_name_invalid_parameter",
                      "test_binder_set_name_get_name",
                      "test_binder_get_address", )
        if len(self.native_devices) > 1:
            self.droid1 = self.native_devices[1].droid
            self.tests = self.tests + ("test_two_devices_set_get_name", )

    def test_binder_get_name(self):
        result = self.droid.BluetoothBinderGetName()
        self.log.info("Bluetooth device name: {}".format(result))
        return True

    def test_binder_get_name_invalid_parameter(self):
        try:
            self.droid.BluetoothBinderGetName("unexpected_parameter")
            return False
        except Exception:
            return True

    def test_binder_set_name_get_name(self):
        test_name = generate_id_by_size(4)
        result = self.droid.BluetoothBinderSetName(test_name)
        if not result:
            return False
        name = self.droid.BluetoothBinderGetName()
        if test_name != name:
            return False
        return True

    def test_binder_get_address(self):
        result = self.droid.BluetoothBinderGetAddress()
        self.log.info("Found BT address: {}".format(result))
        if not result:
            return False
        return True

    def test_two_devices_set_get_name(self):
        test_name = generate_id_by_size(4)
        for n in self.native_devices:
            d = n.droid
            d.BluetoothBinderSetName(test_name)
            name = d.BluetoothBinderGetName()
            if name != test_name:
                return False
        return True
| 35.435897 | 79 | 0.678365 | 353 | 2,764 | 5.070822 | 0.354108 | 0.044693 | 0.043575 | 0.037989 | 0.249721 | 0.164246 | 0.040223 | 0.040223 | 0.040223 | 0.040223 | 0 | 0.007703 | 0.248553 | 2,764 | 77 | 80 | 35.896104 | 0.854117 | 0.213821 | 0 | 0.269231 | 0 | 0 | 0.094576 | 0.055169 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.096154 | 0 | 0.442308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b61d7390ab2819c257c38fbffc3a703a9852f12 | 5,176 | py | Python | PEPit/examples/unconstrained_convex_minimization/accelerated_gradient_convex.py | PerformanceEstimation/PEPit | 7005bc9a9da11dea448966437365c897734ec341 | [
"MIT"
] | 1 | 2022-03-30T11:18:37.000Z | 2022-03-30T11:18:37.000Z | PEPit/examples/unconstrained_convex_minimization/accelerated_gradient_convex.py | PerformanceEstimation/PEPit | 7005bc9a9da11dea448966437365c897734ec341 | [
"MIT"
] | 1 | 2022-02-23T10:26:38.000Z | 2022-02-23T10:26:38.000Z | PEPit/examples/unconstrained_convex_minimization/accelerated_gradient_convex.py | PerformanceEstimation/PEPit | 7005bc9a9da11dea448966437365c897734ec341 | [
"MIT"
] | null | null | null | from PEPit import PEP
from PEPit.functions import SmoothStronglyConvexFunction
def wc_accelerated_gradient_convex(mu, L, n, verbose=1):
"""
Consider the convex minimization problem
.. math:: f_\\star \\triangleq \\min_x f(x),
where :math:`f` is :math:`L`-smooth and :math:`\\mu`-strongly convex (:math:`\\mu` is possibly 0).
This code computes a worst-case guarantee for an **accelerated gradient method**, a.k.a. **fast gradient method**.
That is, it computes the smallest possible :math:`\\tau(n, L, \\mu)` such that the guarantee
.. math:: f(x_n) - f_\\star \\leqslant \\tau(n, L, \\mu) \\|x_0 - x_\\star\\|^2
is valid, where :math:`x_n` is the output of the accelerated gradient method,
and where :math:`x_\\star` is the minimizer of :math:`f`.
In short, for given values of :math:`n`, :math:`L` and :math:`\\mu`,
:math:`\\tau(n, L, \\mu)` is computed as the worst-case value of
:math:`f(x_n)-f_\\star` when :math:`\\|x_0 - x_\\star\\|^2 \\leqslant 1`.
**Algorithm**:
The accelerated gradient method of this example is provided by
.. math::
:nowrap:
\\begin{eqnarray}
x_{t+1} & = & y_t - \\frac{1}{L} \\nabla f(y_t) \\\\
y_{t+1} & = & x_{t+1} + \\frac{t-1}{t+2} (x_{t+1} - x_t).
\\end{eqnarray}
**Theoretical guarantee**:
When :math:`\\mu=0`, a tight **empirical** guarantee can be found in [1, Table 1]:
.. math:: f(x_n)-f_\\star \\leqslant \\frac{2L\\|x_0-x_\\star\\|^2}{n^2 + 5 n + 6},
where tightness is obtained on some Huber loss functions.
**References**:
`[1] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite
convex optimization. SIAM Journal on Optimization, 27(3):1283–1313.
<https://arxiv.org/pdf/1512.07516.pdf>`_
Args:
mu (float): the strong convexity parameter
L (float): the smoothness parameter.
n (int): number of iterations.
verbose (int): Level of information details to print.
-1: No verbose at all.
0: This example's output.
1: This example's output + PEPit information.
2: This example's output + PEPit information + CVXPY details.
Returns:
pepit_tau (float): worst-case value
theoretical_tau (float): theoretical value
Example:
>>> pepit_tau, theoretical_tau = wc_accelerated_gradient_convex(mu=0, L=1, n=1, verbose=1)
(PEPit) Setting up the problem: size of the main PSD matrix: 4x4
(PEPit) Setting up the problem: performance measure is minimum of 1 element(s)
(PEPit) Setting up the problem: initial conditions (1 constraint(s) added)
(PEPit) Setting up the problem: interpolation conditions for 1 function(s)
function 1 : 6 constraint(s) added
(PEPit) Compiling SDP
(PEPit) Calling SDP solver
(PEPit) Solver status: optimal (solver: SCS); optimal value: 0.16666666668209376
*** Example file: worst-case performance of accelerated gradient method ***
PEPit guarantee: f(x_n)-f_* <= 0.166667 ||x_0 - x_*||^2
Theoretical guarantee: f(x_n)-f_* <= 0.166667 ||x_0 - x_*||^2
"""
# Instantiate PEP
problem = PEP()
# Declare a strongly convex smooth function
func = problem.declare_function(SmoothStronglyConvexFunction, mu=mu, L=L)
# Start by defining its unique optimal point xs = x_* and corresponding function value fs = f_*
xs = func.stationary_point()
fs = func.value(xs)
# Then define the starting point x0 of the algorithm
x0 = problem.set_initial_point()
# Set the initial constraint that is the distance between x0 and x^*
problem.set_initial_condition((x0 - xs) ** 2 <= 1)
# Run n steps of the fast gradient method
x_new = x0
y = x0
for i in range(n):
x_old = x_new
x_new = y - 1 / L * func.gradient(y)
y = x_new + i / (i + 3) * (x_new - x_old)
# Set the performance metric to the function value accuracy
problem.set_performance_metric(func.value(x_new) - fs)
# Solve the PEP
pepit_verbose = max(verbose, 0)
pepit_tau = problem.solve(verbose=pepit_verbose)
# Theoretical guarantee (for comparison)
theoretical_tau = 2 * L / (n ** 2 + 5 * n + 6)  # tight only for mu=0, see [1], Table 1 (column 1, line 1)
if mu != 0:
print('Warning: momentum is tuned for non-strongly convex functions.')
# Print conclusion if required
if verbose != -1:
print('*** Example file: worst-case performance of accelerated gradient method ***')
print('\tPEPit guarantee:\t f(x_n)-f_* <= {:.6} ||x_0 - x_*||^2'.format(pepit_tau))
print('\tTheoretical guarantee:\t f(x_n)-f_* <= {:.6} ||x_0 - x_*||^2'.format(theoretical_tau))
# Return the worst-case guarantee of the evaluated method (and the reference theoretical value)
return pepit_tau, theoretical_tau
if __name__ == "__main__":
pepit_tau, theoretical_tau = wc_accelerated_gradient_convex(mu=0, L=1, n=1, verbose=1)
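The closed-form guarantee from [1, Table 1] can be sanity-checked without PEPit or any SDP solver. The sketch below (hypothetical helper name `agm_bound`, not part of the PEPit API) evaluates it directly; for `n=1, L=1` it reproduces the solver's optimal value of 1/6 shown in the docstring:

```python
# Standalone check of the theoretical bound 2L / (n^2 + 5n + 6); no solver needed.
def agm_bound(L, n):
    return 2 * L / (n ** 2 + 5 * n + 6)

print(agm_bound(1, 1))  # 0.16666..., matching the SDP value in the docstring
```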
| 41.079365 | 118 | 0.624614 | 750 | 5,176 | 4.193333 | 0.285333 | 0.048331 | 0.006677 | 0.008903 | 0.204452 | 0.1469 | 0.121463 | 0.108744 | 0.108744 | 0.07186 | 0 | 0.03353 | 0.24517 | 5,176 | 125 | 119 | 41.408 | 0.77118 | 0.68296 | 0 | 0 | 0 | 0.071429 | 0.188897 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.071429 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b62bceba3f71a5c3bc433ee4f5eefd5ac1873e5 | 4,052 | py | Python | 2_import/rna_seq/01_import_merged_tsv.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 5 | 2020-07-30T02:35:20.000Z | 2020-12-24T01:26:47.000Z | 2_import/rna_seq/01_import_merged_tsv.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 6 | 2021-03-04T10:30:11.000Z | 2022-03-16T16:47:47.000Z | 2_import/rna_seq/01_import_merged_tsv.py | weng-lab/SCREEN | e8e7203e2f9baa2de70e2f75bdad3ae24b568367 | [
"MIT"
] | 2 | 2020-12-08T10:05:02.000Z | 2022-03-10T09:41:19.000Z | #!/usr/bin/env python
# SPDX-License-Identifier: MIT
# Copyright (c) 2016-2020 Michael Purcaro, Henry Pratt, Jill Moore, Zhiping Weng
from __future__ import print_function
import os
import sys
import json
import psycopg2
import argparse
import gzip
sys.path.append(os.path.join(os.path.dirname(__file__), '../../common/'))
from dbconnect import db_connect
from config import Config
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"../../metadata/utils"))
from db_utils import getcursor, makeIndex, makeIndexRev, makeIndexArr, makeIndexIntRange, makeIndexMultiCol
from files_and_paths import Dirs, Tools, Genome, Datasets
from utils import AddPath, Utils, printt, importedNumRows
AddPath(__file__, '../../common/')
from dbconnect import db_connect
from constants import chroms, paths, DB_COLS
from config import Config
from table_names import GeData, GeExperimentList
class ImportRNAseq(object):
def __init__(self, curs, assembly):
self.curs = curs
self.assembly = assembly
def _tableNameData(self, isNormalized):
return GeData(self.assembly, isNormalized)
def _tableNameExperimentList(self):
return GeExperimentList(self.assembly)
def run(self):
for isNormalized in [True, False]:
tableNameData = self._tableNameData(isNormalized)
fnp = paths.geFnp(self.assembly, isNormalized)
self._setupAndCopy(tableNameData, fnp)
self._doIndexData(tableNameData)
# normalized and unnormalized tables should have the same experiments!
self._extractExpIDs(tableNameData, self._tableNameExperimentList())
def _setupAndCopy(self, tableNameData, fnp):
printt("dropping and creating", tableNameData)
self.curs.execute("""
DROP TABLE IF EXISTS {tableNameData};
CREATE TABLE {tableNameData} (
id serial PRIMARY KEY,
ensembl_id VARCHAR(256) NOT NULL,
gene_name VARCHAR(256) NOT NULL,
expID VARCHAR(256) NOT NULL,
fileID VARCHAR(256) NOT NULL,
replicate INT NOT NULL,
fpkm NUMERIC NOT NULL,
tpm NUMERIC NOT NULL);
""".format(tableNameData=tableNameData))
printt("importing", fnp)
with gzip.open(fnp) as f:
self.curs.copy_from(f, tableNameData, '\t',
columns=("expID", "replicate", "ensembl_id", "gene_name",
"fileID", "tpm", "fpkm"))
importedNumRows(self.curs)
def _extractExpIDs(self, tableNameData, tableNameExperimentList):
printt("dropping and creating", tableNameExperimentList)
self.curs.execute("""
DROP TABLE IF EXISTS {tableNameExperimentList};
CREATE TABLE {tableNameExperimentList} AS
SELECT DISTINCT expID, fileID, replicate
FROM {tableNameData}
""".format(tableNameData = tableNameData,
tableNameExperimentList = tableNameExperimentList))
importedNumRows(self.curs)
def _doIndexData(self, tableNameData):
printt("creating indices in", tableNameData, "...")
makeIndex(self.curs, tableNameData, ["gene_name", "tpm"])
def doIndex(self):
for isNormalized in [True, False]:
self._doIndexData(self._tableNameData(isNormalized))
def run(args, DBCONN):
assemblies = Config.assemblies
if args.assembly:
assemblies = [args.assembly]
for assembly in assemblies:
with getcursor(DBCONN, "08_setup_log") as curs:
im = ImportRNAseq(curs, assembly)
if args.index:
im.doIndex()
else:
im.run()
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument("--assembly", type=str, default="")
parser.add_argument('--index', action="store_true", default=False)
args = parser.parse_args()
return args
def main():
args = parse_args()
DBCONN = db_connect(os.path.realpath(__file__))
run(args, DBCONN)
return 0
if __name__ == '__main__':
sys.exit(main())
| 32.15873 | 107 | 0.67152 | 436 | 4,052 | 6.087156 | 0.357798 | 0.024115 | 0.019593 | 0.025622 | 0.105501 | 0.105501 | 0.082894 | 0.058779 | 0.027129 | 0 | 0 | 0.007653 | 0.226061 | 4,052 | 125 | 108 | 32.416 | 0.838648 | 0.048371 | 0 | 0.105263 | 0 | 0 | 0.191589 | 0.01324 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115789 | false | 0 | 0.221053 | 0.021053 | 0.389474 | 0.063158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b663e447ff6dde531cade9c45704d5b63408a17 | 4,618 | py | Python | code/src/helpers/sequencer.py | mcd01/arvalus-experiments | 1c075853885d0d81284eee55988ba8747d33584e | [
"MIT"
] | null | null | null | code/src/helpers/sequencer.py | mcd01/arvalus-experiments | 1c075853885d0d81284eee55988ba8747d33584e | [
"MIT"
] | null | null | null | code/src/helpers/sequencer.py | mcd01/arvalus-experiments | 1c075853885d0d81284eee55988ba8747d33584e | [
"MIT"
] | null | null | null | import torch
from src.transforms import MultiNodeData
import collections
import dill
import os
from src.utils import create_dirs
class Sequencer(object):
"Determines sequences in a dataset and annotates elements accordingly."
def __init__(self, path_to_dir : str, node_classes : list = [], graph_classes : list = [], exclude_normal : bool = False, transform_key =""):
self.path_to_dir = path_to_dir
self.is_fitted : bool = False
self.__exclude_normal__ = exclude_normal
self.__node_classes__ = node_classes
self.__graph_classes__ = graph_classes
self.__sequence_dict__ : dict = {}
self.__latest_group__ : tuple = None
self.__id__ = Sequencer.get_id(node_classes=self.__node_classes__,
graph_classes=self.__graph_classes__,
exclude_normal=self.__exclude_normal__,
transform_key=transform_key)
def __repr__(self):
return f"{self.__class__.__name__}(exclude_normal={self.__exclude_normal__}, #sequence_groups={self.__latest_group__[0] + 1})" # starts with zero
def __str__(self):
return f"{self.__class__.__name__}(exclude_normal={self.__exclude_normal__}, #sequence_groups={self.__latest_group__[0] + 1})" # starts with zero
@staticmethod
def get_id(*args, **kwargs):
sorted_kwargs = collections.OrderedDict(sorted(kwargs.items()))
return ", ".join(f"{key}={value}" for key, value in sorted_kwargs.items())
@classmethod
def getInstance(cls, path_to_dir : str, **kwargs):
path_to_file = os.path.join(path_to_dir, "sequencer.pkl")
if os.path.exists(path_to_file):
with open(path_to_file, 'rb') as dill_file:
obj = dill.load(dill_file)
if obj.__id__ == Sequencer.get_id(**kwargs):
return obj
else:
return Sequencer(path_to_dir, **kwargs)
else:
return Sequencer(path_to_dir, **kwargs)
def save(self):
create_dirs(self.path_to_dir)
with open(os.path.join(self.path_to_dir, "sequencer.pkl"), "wb") as dill_file:
dill.dump(self, dill_file)
def __call__(self, data: MultiNodeData):
file_idx : int = data["file_idx"]
look_up_dict = self.__sequence_dict__.get(file_idx, {})
for key, value in look_up_dict.items():
data[key] = value
return data
def annotate(self, data: MultiNodeData):
"The calling function iterates over a dataset and sequentially inputs elements."
# extract properties from object
identifiers : list = data["identifiers"]
y_compact = data["y"]
y_full = data["y_full"]
sequence_node_group : str = None
sequence_transitional : str = "steady"
sequence_anomaly_index : int = torch.argmax(y_compact).item()
sequence_anomaly : int = self.__graph_classes__[sequence_anomaly_index]
sequence_anomaly_index = self.__node_classes__.index(sequence_anomaly) # overwrite
if y_compact[0,0] == 1 and not self.__exclude_normal__:
sequence_node_group = "cluster"
else:
identifier_index : int = torch.argmax(y_full[: ,sequence_anomaly_index]).item()
identifier : str = identifiers[identifier_index]
sequence_node_group = data[f"group_{identifier}"]
# handle group incrementing
if self.__latest_group__ is None:
self.__latest_group__ = (0, sequence_anomaly, sequence_node_group)
elif (self.__latest_group__[1] != sequence_anomaly) or (self.__latest_group__[2] != sequence_node_group):
sequence_transitional = "up / down"
self.__latest_group__ = (self.__latest_group__[0] + 1, sequence_anomaly, sequence_node_group)
# add properties to this object
data["sequence_group"] = self.__latest_group__[0]
data["sequence_anomaly"] = sequence_anomaly
data["sequence_node_group"] = sequence_node_group
self.__sequence_dict__[data["file_idx"]] = {
"sequence_transitional": sequence_transitional,
"sequence_id": self.__latest_group__[0],
"file_id": data["file_idx"]
}
return data
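The `__latest_group__` bookkeeping in `annotate` is essentially run-length grouping: a new group id starts whenever the `(anomaly, node_group)` pair differs from the previous element's. A minimal standalone sketch of that idea (hypothetical helper, not part of the class):

```python
def assign_groups(labels):
    """Give each run of consecutive identical labels an incrementing group id."""
    groups, current, gid = [], object(), -1  # sentinel never equals a label
    for lab in labels:
        if lab != current:
            gid += 1
            current = lab
        groups.append(gid)
    return groups

print(assign_groups([("a", "g1"), ("a", "g1"), ("b", "g1"), ("b", "g2")]))
# [0, 0, 1, 2]
```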
| 40.867257 | 153 | 0.60654 | 513 | 4,618 | 4.927875 | 0.241715 | 0.028481 | 0.065269 | 0.037975 | 0.191851 | 0.105222 | 0.105222 | 0.078323 | 0.078323 | 0.078323 | 0 | 0.004342 | 0.301862 | 4,618 | 113 | 154 | 40.867257 | 0.779777 | 0.062148 | 0 | 0.1125 | 0 | 0.025 | 0.132737 | 0.053408 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.075 | 0.025 | 0.2875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b676d4042d46bee66b146595fc707221e3e2e2a | 2,184 | py | Python | pymix/lattice_classes.py | vpbereznev/Pymix | 74f87a099169f8d215399f5d52eed80a574c8b3b | [
"MIT"
] | null | null | null | pymix/lattice_classes.py | vpbereznev/Pymix | 74f87a099169f8d215399f5d52eed80a574c8b3b | [
"MIT"
] | null | null | null | pymix/lattice_classes.py | vpbereznev/Pymix | 74f87a099169f8d215399f5d52eed80a574c8b3b | [
"MIT"
] | null | null | null | from math import sqrt, sin, cos, pi, ceil
class HexLattice:
def __init__(self, pitch, pattern):
self.pitch = pitch
self.pattern = pattern
def num_nodes(self):
return len(self.pattern)
def num_rings(self):
return ceil((1 + sqrt(1 + 4 / 3 * (self.num_nodes() - 1))) / 2)
def spiral_coord(self):
coord = [(0.0, 0.0)] * self.num_nodes()
for i in range(1, self.num_rings()):
for j in range(6):
coord[3 * i * (i - 1) + 2 + j * i - 1] = (self.pitch * i * cos(j / 3.0 * pi + pi / 6.0),
self.pitch * i * sin(j / 3.0 * pi + pi / 6.0))
if i > 1:
for j in range(5):
a = 3 * i * (i - 1) + 2 + i * j - 1
b = a + i
for k in range(1, i):
coord[a + k] = (coord[a][0] + (coord[b][0] - coord[a][0]) / i * k,
coord[a][1] + (coord[b][1] - coord[a][1]) / i * k)
a = 3 * i * (i - 1) + 2 + i * 5 - 1
b = 3 * i * (i - 1) + 2 + i * 0 - 1
for k in range(1, i):
coord[a + k] = (coord[a][0] + (coord[b][0] - coord[a][0]) / i * k,
coord[a][1] + (coord[b][1] - coord[a][1]) / i * k)
return coord
class RectangularLattice:
def __init__(self, nx, ny, dx, dy, pattern):
self.nx = nx
self.ny = ny
self.dx = dx
self.dy = dy
self.pattern = pattern
def get_coord(self):
coord = []
for i in range(self.nx):
for j in range(self.ny):
coord.append(((i + 1) * self.dx, (j + 1) * self.dy))
return coord
class CircleLattice:
def __init__(self, nodes, pitch, pattern):
self.nodes = nodes
self.pitch = pitch
self.pattern = pattern
def get_coord(self):
coord = []
angle = 360.0 / self.nodes / 180.0 * pi
for i in range(self.nodes):
coord.append((0.5 * self.pitch * cos(i * angle), 0.5 * self.pitch * sin(i * angle)))
return coord
| 33.6 | 104 | 0.42674 | 311 | 2,184 | 2.932476 | 0.157556 | 0.065789 | 0.013158 | 0.017544 | 0.361842 | 0.323465 | 0.316886 | 0.22807 | 0.144737 | 0.144737 | 0 | 0.053797 | 0.421245 | 2,184 | 64 | 105 | 34.125 | 0.667722 | 0 | 0 | 0.346154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.019231 | 0.038462 | 0.326923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b6e420a92dfca372820374b206351ebdc97a95a | 1,105 | py | Python | Leetcode/medium/binary-tree-from-postorder-and-inorder.py | jen-sjen/data-structures-basics-leetcode | addac32974b16e0a37aa60c210ab7820b349b279 | [
"MIT"
] | 6 | 2021-07-29T03:26:20.000Z | 2022-01-28T15:11:45.000Z | Leetcode/medium/binary-tree-from-postorder-and-inorder.py | jen-sjen/data-structures-basics-leetcode | addac32974b16e0a37aa60c210ab7820b349b279 | [
"MIT"
] | 2 | 2021-09-30T09:47:23.000Z | 2022-01-31T03:08:24.000Z | Leetcode/medium/binary-tree-from-postorder-and-inorder.py | jen-sjen/data-structures-basics-leetcode | addac32974b16e0a37aa60c210ab7820b349b279 | [
"MIT"
] | 5 | 2021-08-10T06:41:11.000Z | 2022-01-29T17:50:20.000Z | """
# CREATE BINARY TREE FROM POSTORDER AND INORDER
Given inorder and postorder traversal of a tree, construct the binary tree.
Note:
You may assume that duplicates do not exist in the tree.
For example, given
inorder = [9,3,15,20,7]
postorder = [9,15,7,20,3]
Return the following binary tree:
    3
   / \
  9  20
     / \
    15  7
"""
# Definition for a binary tree node.
class TreeNode:
def __init__(self, val=0, left=None, right=None):
self.val = val
self.left = left
self.right = right
class Solution:
def buildTree(self, inorder, postorder) -> TreeNode:
if len(postorder) == 0:
return None
return self.tree(postorder, inorder)
def tree(self, post, inorder):
if len(post) < 1:
return None
if len(post) == 1:
return TreeNode(post[0], None, None)
root = post[-1]
index = inorder.index(root)
x = self.tree(post[:index], inorder[:index])
y = self.tree(post[index:len(post)-1], inorder[index+1:])
return TreeNode(root, x, y) | 24.021739 | 75 | 0.58733 | 152 | 1,105 | 4.243421 | 0.355263 | 0.062016 | 0.037209 | 0.031008 | 0.049612 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037516 | 0.300452 | 1,105 | 46 | 76 | 24.021739 | 0.796895 | 0.335747 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15 | false | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
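The same recursion can be sketched standalone with tuples instead of `TreeNode` (hypothetical helper name `build`): the last postorder element is the root, and its inorder index splits both traversals into left and right subtrees. Checked against the docstring example:

```python
def build(post, ino):
    # Root is the last postorder element; its inorder index splits the subtrees.
    if not post:
        return None
    root = post[-1]
    i = ino.index(root)
    return (root, build(post[:i], ino[:i]), build(post[i:-1], ino[i + 1:]))

tree = build([9, 15, 7, 20, 3], [9, 3, 15, 20, 7])
print(tree)  # (3, (9, None, None), (20, (15, None, None), (7, None, None)))
```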
2b76612269c85e9f247752fe2f6a4d09415e6758 | 2,644 | py | Python | hyper_param/utils.py | EnisBerk/hyperopt-keras-sample | dc6892f023b83ee3b5b92f2a258676ad6bbc0a94 | [
"MIT"
] | null | null | null | hyper_param/utils.py | EnisBerk/hyperopt-keras-sample | dc6892f023b83ee3b5b92f2a258676ad6bbc0a94 | [
"MIT"
] | null | null | null | hyper_param/utils.py | EnisBerk/hyperopt-keras-sample | dc6892f023b83ee3b5b92f2a258676ad6bbc0a94 | [
"MIT"
] | null | null | null |
"""Json utils to print, save and load training results."""
import os
import json
from bson import json_util
import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder, tag_constants
from tensorflow.python.client import device_lib
import keras.backend as K
from gradient_sdk import model_dir, export_dir
EXPERIMENT_NAME = os.environ.get('EXPERIMENT_NAME')
RESULTS_DIR = model_dir(EXPERIMENT_NAME)
def is_gpu_available():
return tf.test.is_gpu_available()
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos if x.device_type == 'GPU']
def print_json(result):
"""Pretty-print a jsonable structure (e.g.: result)."""
print(json.dumps(
result,
default=json_util.default, sort_keys=True,
indent=4, separators=(',', ': ')
))
def save_json_result(model_name, result):
"""Save json to a directory and a filename."""
print("Prepare to save best result")
result_name = '{}.txt.json'.format(model_name)
if not os.path.exists(RESULTS_DIR):
os.makedirs(RESULTS_DIR)
with open(os.path.join(RESULTS_DIR, result_name), 'w') as f:
json.dump(
result, f,
default=json_util.default, sort_keys=True,
indent=4, separators=(',', ': ')
)
print("Result save to json finished")
def load_json_result(best_result_name):
"""Load json from a path (directory + filename)."""
result_path = os.path.join(RESULTS_DIR, best_result_name)
with open(result_path, 'r') as f:
return json.JSONDecoder().decode(
f.read()
# default=json_util.default,
# separators=(',', ': ')
)
def load_best_hyperspace():
results = [
f for f in list(sorted(os.listdir(RESULTS_DIR))) if 'json' in f
]
if len(results) == 0:
return None
best_result_name = results[-1]
return load_json_result(best_result_name)["space"]
def export_model(model_name):
try:
# Export Model
tf.logging.info("Export trained model")
export_path = export_dir(EXPERIMENT_NAME)
model_path = os.path.join(export_path, model_name, '1')
K.set_learning_phase(0)
builder = saved_model_builder.SavedModelBuilder(model_path)
with K.get_session() as sess:
builder.add_meta_graph_and_variables(
sess=sess,
tags=[tag_constants.SERVING],
)
builder.save()
except Exception as e:
tf.logging.error('Model export has failed with error: %s', e)
| 28.430108 | 87 | 0.655446 | 358 | 2,644 | 4.617318 | 0.332402 | 0.036298 | 0.033878 | 0.039927 | 0.119782 | 0.095584 | 0.061706 | 0.061706 | 0.061706 | 0.061706 | 0 | 0.002963 | 0.234115 | 2,644 | 92 | 88 | 28.73913 | 0.813333 | 0.095688 | 0 | 0.064516 | 0 | 0 | 0.067596 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112903 | false | 0 | 0.129032 | 0.016129 | 0.322581 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b774945cd3adbd39f821d0dd8b129b94b59f146 | 2,941 | py | Python | cog_modules/taunts/cog.py | michael-byrd/HammerBot | f9ad90179b486949f76a2e69a1e8b26414e2b21a | [
"MIT"
] | 3 | 2021-12-30T19:45:24.000Z | 2022-03-07T19:14:26.000Z | cog_modules/taunts/cog.py | michael-byrd/HammerBot | f9ad90179b486949f76a2e69a1e8b26414e2b21a | [
"MIT"
] | 29 | 2022-01-07T20:07:48.000Z | 2022-03-30T01:10:16.000Z | cog_modules/taunts/cog.py | michael-byrd/HammerBot | f9ad90179b486949f76a2e69a1e8b26414e2b21a | [
"MIT"
] | 4 | 2022-01-07T20:17:56.000Z | 2022-03-24T00:20:50.000Z | import os
import disnake
from disnake.ext import commands, tasks
from dotenv import load_dotenv
class Taunts(commands.Cog):
"""Replies with taunts from AoE2"""
def __init__(self, bot: commands.Bot):
self.bot = bot
@commands.command(name="1")
async def yes_1(self, ctx: commands.Context):
"""
Command: 1
Returns: The age taunt #1. (Yes.)
"""
response = "Yes."
await ctx.send(response)
@commands.command(name="2")
async def no_2(self, ctx: commands.Context):
"""
Command: 2
Returns: The age taunt #2. (No.)
"""
response = "No."
await ctx.send(response)
@commands.command(name="28")
async def otherguy_28(self, ctx: commands.Context):
"""
Command: 28
Returns: The age taunt #28. (Yeah, well, you should see the other guy.)
"""
response = "Yeah, well, you should see the other guy."
await ctx.send(response)
@commands.command(name="30")
async def monk_30(self, ctx: commands.Context):
"""
Command: 30
Returns: The age taunt #30. (Wololo!)
"""
response = "Wololo!"
await ctx.send(response)
@commands.command(name="14", help="Returns AoE2 taunt #14.")
# @commands.cooldown(1, 30, commands.BucketType.user)
async def startTheGame(self, ctx: commands.Context):
"""
Command: 14
Returns: The age2 taunt #14. (Start the game already!)
"""
response = "Start the game already!"
await ctx.send(response)
@commands.command(name="13", help="Returns AoE2 taunt #13.")
# @commands.cooldown(1, 30, commands.BucketType.user)
async def isp(self, ctx: commands.Context):
"""
Command: 13
Returns: The age2 taunt #13. (Sure, blame it on your ISP.)
"""
response = "Sure, blame it on your ISP."
await ctx.send(response)
@commands.command(name="age?", help='Replies "Well, duh."')
# @commands.cooldown(1, 30, commands.BucketType.user)
async def questionableAge(self, ctx: commands.Context):
"""
Command: age?
Returns: The phrase "Well, duh."
"""
response = "Well, duh."
await ctx.send(response)
@commands.command(name="11")
async def laugh(self, ctx: commands.Context):
"""
Command: 11
Returns: The age taunt #11. (*laughter*)
"""
response = "🤣"
await ctx.send(response)
@commands.command(name="!gg")
async def gg(self, ctx: commands.Context):
"""
Command: :gg:
Returns: The server GG emote.
"""
response = "<:gg:861701719050551307>"
await ctx.send(response)
def setup(bot: commands.Bot):
bot.add_cog(Taunts(bot))
| 29.118812 | 80 | 0.555593 | 338 | 2,941 | 4.807692 | 0.227811 | 0.083077 | 0.105231 | 0.121846 | 0.505846 | 0.345231 | 0.320615 | 0.128615 | 0.090462 | 0 | 0 | 0.040159 | 0.314179 | 2,941 | 100 | 81 | 29.41 | 0.764998 | 0.063244 | 0 | 0.2 | 0 | 0 | 0.122383 | 0.012882 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.088889 | 0 | 0.155556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b77f7eedc8e7e3dc9ed83b6fd8ae34f45c97d94 | 2,475 | py | Python | sources/models/DeepCNN2.py | cwi-dis/affect-gan | aea0f7dd7dc412f7e3fc44bc2db3526b09aaf131 | [
"MIT"
] | null | null | null | sources/models/DeepCNN2.py | cwi-dis/affect-gan | aea0f7dd7dc412f7e3fc44bc2db3526b09aaf131 | [
"MIT"
] | null | null | null | sources/models/DeepCNN2.py | cwi-dis/affect-gan | aea0f7dd7dc412f7e3fc44bc2db3526b09aaf131 | [
"MIT"
] | null | null | null | import config
import tensorflow as tf
import tensorflow.keras.layers as layers
from models.Blocks import *
class DeepCNN(tf.keras.Model):
def __init__(self, hparams, *args, **kwargs):
super(DeepCNN, self).__init__(*args, **kwargs)
self.layers_count = hparams[config.HP_DEEP_LAYERS]
self.dual_output = hparams[config.HP_LOSS_TYPE] == "DUAL_BCE"
self.input_len = 500
self.down_res_layers = [DownResLayer
(
hparams[config.HP_DEEP_CHANNELS] * 2**l,
kernel_size=hparams[config.HP_DEEP_KERNEL_SIZE],
first_layer=(l == 0),
use_dropout=True
) for l in range(self.layers_count - 1)]
self.down_res_layer_final_a = DownResLayer(
hparams[config.HP_DEEP_CHANNELS] * 2**(self.layers_count-1),
kernel_size=hparams[config.HP_DEEP_KERNEL_SIZE],
first_layer=False
)
self.down_res_layer_final_v = DownResLayer(
hparams[config.HP_DEEP_CHANNELS] * 2**(self.layers_count-1),
kernel_size=hparams[config.HP_DEEP_KERNEL_SIZE],
first_layer=False
)
self.feature_pool_a = layers.GlobalAveragePooling1D()
self.feature_pool_v = layers.GlobalAveragePooling1D()
self.lrelu_out_a = layers.LeakyReLU()
self.lrelu_out_v = layers.LeakyReLU()
if hparams[config.HP_LOSS_TYPE] == "MSE":
activation = None
else:
activation = 'sigmoid'
self.dense_out_a = layers.Dense(units=1, activation=activation, name="arousal_class")
self.dense_out_v = layers.Dense(units=1, activation=activation, name="valence_class")
def call(self, inputs, training=None, mask=None):
x = inputs
for i in range(self.layers_count - 1):
x = self.down_res_layers[i](x, training=training)
x_a = self.down_res_layer_final_a(x, training=training)
x_a = self.lrelu_out_a(x_a)
x_a = self.feature_pool_a(x_a)
if self.dual_output:
x_v = self.down_res_layer_final_v(x, training=training)
x_v = self.lrelu_out_v(x_v)
x_v = self.feature_pool_v(x_v)
return self.dense_out_a(x_a), self.dense_out_v(x_v)
else:
return self.dense_out_a(x_a)
def model(self):
x = layers.Input(shape=(500, 5))
return tf.keras.Model(inputs=[x], outputs=self.call(x))
| 35.869565 | 93 | 0.623434 | 333 | 2,475 | 4.315315 | 0.24024 | 0.08142 | 0.093946 | 0.092554 | 0.458594 | 0.426583 | 0.306889 | 0.192763 | 0.192763 | 0.192763 | 0 | 0.01055 | 0.272323 | 2,475 | 68 | 94 | 36.397059 | 0.78734 | 0 | 0 | 0.166667 | 0 | 0 | 0.017778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.074074 | 0 | 0.203704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b78cd2e8c328cd6e908ab353389cea7a0e9d949 | 4,517 | py | Python | z3/finding_celebrities.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 279 | 2015-01-10T09:55:35.000Z | 2022-03-28T02:34:03.000Z | z3/finding_celebrities.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 10 | 2017-10-05T15:48:50.000Z | 2021-09-20T12:06:52.000Z | z3/finding_celebrities.py | Wikunia/hakank | 030bc928d2efe8dcbc5118bda3f8ae9575d0fd13 | [
"MIT"
] | 83 | 2015-01-20T03:44:00.000Z | 2022-03-13T23:53:06.000Z | #!/usr/bin/python -u
# -*- coding: latin-1 -*-
#
# Finding celebrities problem in Z3
#
# From Uwe Hoffmann
# "Finding celebrities at a party"
# http://www.codemanic.com/papers/celebs/celebs.pdf
# """
# Problem: Given a list of people at a party and for each person the list of
# people they know at the party, we want to find the celebrities at the party.
# A celebrity is a person that everybody at the party knows but that
# only knows other celebrities. At least one celebrity is present at the party.
# """
# (This paper also has an implementation in Scala.)
#
# Note: The original of this problem is
# Richard Bird and Sharon Curtis:
# "Functional pearls: Finding celebrities: A lesson in functional programming"
# J. Funct. Program., 16(1):13-20, 2006.
#
# The problem from Hoffmann's paper is to find of who are the
# celebrity/celebrities in this party graph:
# Adam knows {Dan,Alice,Peter,Eva},
# Dan knows {Adam,Alice,Peter},
# Eva knows {Alice,Peter},
# Alice knows {Peter},
# Peter knows {Alice}
#
# Solution: the celebrities are Peter and Alice.
#
# I blogged about this problem in "Finding celebrities at a party"
# http://www.hakank.org/constraint_programming_blog/2010/01/finding_celebrities_at_a_party.html
#
# This Z3 model was written by Hakan Kjellerstrand (hakank@gmail.com)
# See also my Z3 page: http://hakank.org/z3/
#
from z3_utils_hakank import *
def finding_celebrities(problem):
    graph = problem
    n = len(graph)
    sol = Solver()

    # variables
    celebrities = makeIntVector(sol, "celebrities", n, 0, 1)  # 1 if a celebrity
    num_celebrities = makeIntVar(sol, "num_celebrities", 0, n)

    # constraints
    sol.add(num_celebrities == Sum(celebrities))

    # All persons know the celebrities,
    # and the celebrities only know celebrities.
    for i in range(n):
        sol.add((celebrities[i] == 1) == (Sum([If(graph[j][i] == 1, 1, 0) for j in range(n)]) == n))
        sol.add((celebrities[i] == 1) == (Sum([If(graph[i][j] == 1, 1, 0) for j in range(n)]) == num_celebrities))

    num_solutions = 0
    while sol.check() == sat:
        num_solutions += 1
        mod = sol.model()
        print("num_celebrities :", mod.eval(num_celebrities))
        print("celebrities :", [i for i in range(n) if mod.eval(celebrities[i]) == 1])
        print()
        getDifferentSolution(sol, mod, celebrities)
    print("num_solutions:", num_solutions)
    print()
#
# The party graph of the example above:
#
# Adam knows [Dan,Alice,Peter,Eva], [2,3,4,5]
# Dan knows [Adam,Alice,Peter], [1,4,5]
# Eva knows [Alice,Peter], [4,5]
# Alice knows [Peter], [5]
# Peter knows [Alice] [4]
#
# Solution: Peter and Alice (4,5) are the celebrities.
#
problem1 = [[1,1,1,1,1], # 1
[1,1,0,1,1], # 2
[0,0,1,1,1], # 3
[0,0,0,1,1], # 4
[0,0,0,1,1] # 5
]
# In this example Alice (4) also knows Adam (1),
# which makes Alice a non-celebrity, and since
# Peter (5) knows Alice, Peter is now also a
# non-celebrity, which means that there are no
# celebrities at this party.
#
problem2 = [[1,1,1,1,1],
[1,1,0,1,1],
[0,0,1,1,1],
[1,0,0,1,1],
[0,0,0,1,1]
]
#
# Here is another example. It has the following
# cliques:
# [1,2]
# [4,5,6]
# [6,7,8]
# [3,9,10]
#
# The celebrities are [3,9,10]
#
problem3 = [[0,1,1,0,0,0,0,1,1,1],
[1,0,1,0,0,0,0,0,1,1],
[0,0,1,0,0,0,0,0,1,1],
[0,1,1,0,1,1,0,0,1,1],
[0,0,1,1,0,1,0,0,1,1],
[0,0,1,1,1,0,1,1,1,1],
[0,0,1,0,0,1,0,1,1,1],
[0,0,1,0,0,1,1,0,1,1],
[0,0,1,0,0,0,0,0,1,1],
[0,0,1,0,0,0,0,0,1,1]
]
#
# This is the same graph as the one above
# with the following changes:
# - 9 doesn't know 3 or 10
# This party graph now contains just
# one celebrity: [9]
#
problem4 = [[0,1,1,0,0,0,0,1,1,1],
[1,0,1,0,0,0,0,0,1,1],
[0,0,1,0,0,0,0,0,1,1],
[0,1,1,0,1,1,0,0,1,1],
[0,0,1,1,0,1,0,0,1,1],
[0,0,1,1,1,0,1,1,1,1],
[0,0,1,0,0,1,0,1,1,1],
[0,0,1,0,0,1,1,0,1,1],
[0,0,0,0,0,0,0,0,1,0],
[0,0,1,0,0,0,0,0,1,1]
]
print("problem1")
problem = problem1
finding_celebrities(problem)
print("\nproblem2")
problem = problem2
finding_celebrities(problem)
print("\nproblem3")
problem = problem3
finding_celebrities(problem)
print("\nproblem4")
problem = problem4
finding_celebrities(problem)
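The celebrity definition can also be checked without Z3. The following is a hypothetical brute-force cross-check (not part of the original model): start from everyone who is known by all n people, then repeatedly drop anyone who knows somebody outside the candidate set, until a fixed point is reached.

```python
def celebrities_bruteforce(graph):
    """Return the 0-based indices of celebrities in an adjacency matrix:
    people known by everybody who themselves know only celebrities."""
    n = len(graph)
    # Candidates: known by all n people (first half of the definition).
    cands = {i for i in range(n) if all(graph[j][i] == 1 for j in range(n))}
    changed = True
    while changed:
        changed = False
        for i in list(cands):
            # Drop anyone who knows somebody outside the candidate set.
            if any(graph[i][j] == 1 and j not in cands for j in range(n)):
                cands.discard(i)
                changed = True
    return sorted(cands)

# Hoffmann's example graph (problem1 above): the celebrities are
# indices 3 and 4, i.e. Alice and Peter when numbering persons from 1.
party = [[1, 1, 1, 1, 1],
         [1, 1, 0, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1],
         [0, 0, 0, 1, 1]]
```

On problem2, where Alice also knows Adam, the same fixed-point check returns an empty list, in agreement with the Z3 model.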
| 26.570588 | 109 | 0.590658 | 787 | 4,517 | 3.360864 | 0.219822 | 0.054442 | 0.045369 | 0.037807 | 0.20794 | 0.166352 | 0.14707 | 0.118715 | 0.104348 | 0.080151 | 0 | 0.099592 | 0.239761 | 4,517 | 169 | 110 | 26.727811 | 0.670646 | 0.484171 | 0 | 0.432836 | 0 | 0 | 0.048466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0 | 0.014925 | 0 | 0.029851 | 0.134328 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b79194f124eff83fdb228ce81236856c628bf5e | 3,495 | py | Python | features/count_encoding_present_domains.py | wantedly/recsys2020-challenge | d9967860cc4767380d28d2ed7af00d467cc6941a | [
"Apache-2.0"
] | 35 | 2020-06-23T05:33:50.000Z | 2021-11-22T08:22:42.000Z | features/count_encoding_present_domains.py | wantedly/recsys2020-challenge | d9967860cc4767380d28d2ed7af00d467cc6941a | [
"Apache-2.0"
] | 15 | 2020-12-28T05:31:06.000Z | 2021-01-22T06:49:28.000Z | features/count_encoding_present_domains.py | wantedly/recsys2020-challenge | d9967860cc4767380d28d2ed7af00d467cc6941a | [
"Apache-2.0"
] | 2 | 2020-06-30T10:02:05.000Z | 2021-05-22T09:57:19.000Z | import os
import pandas as pd
from base import BaseFeature
from encoding_func import target_encoding
from google.cloud import storage, bigquery
from google.cloud import bigquery_storage_v1beta1
class CountEncodingPresentDomains(BaseFeature):
    def import_columns(self):
        return [
            "tweet_id",
            "engaging_user_id",
        ]

    def _read_present_domains_count_from_bigquery(
        self, train_table_name: str, test_table_name: str
    ) -> pd.DataFrame:
        self._logger.info(f"Reading from {train_table_name} and {test_table_name}")
        query = """
            WITH subset AS (
                (
                    SELECT tweet_id, any_value(present_domains) AS present_domains
                    FROM {}
                    GROUP BY tweet_id
                )
                UNION ALL
                (
                    SELECT tweet_id, any_value(present_domains) AS present_domains
                    FROM {}
                    GROUP BY tweet_id
                )
            )
            , unnest_subset AS (
                SELECT tweet_id, present_domain
                FROM subset,
                    unnest(present_domains) AS present_domain
            )
            , count_present_domain AS (
                SELECT present_domain, COUNT(*) AS cnt
                FROM unnest_subset
                GROUP BY present_domain
            )
            SELECT
                tweet_id,
                AVG(cnt) AS mean_value,
                MIN(cnt) AS min_value,
                MAX(cnt) AS max_value,
                CASE WHEN STDDEV(cnt) IS NULL THEN 1 ELSE STDDEV(cnt) END AS std_value
            FROM (
                SELECT A.tweet_id, A.present_domain, B.cnt
                FROM unnest_subset AS A
                LEFT OUTER JOIN count_present_domain AS B
                ON A.present_domain = B.present_domain
            )
            GROUP BY
                tweet_id
        """.format(train_table_name, test_table_name)
        if self.debugging:
            query += " LIMIT 10000"

        bqclient = bigquery.Client(project=self.PROJECT_ID)
        bqstorageclient = bigquery_storage_v1beta1.BigQueryStorageClient()
        df = (
            bqclient.query(query)
            .result()
            .to_dataframe(bqstorage_client=bqstorageclient)
        )
        return df

    def make_features(self, df_train_input, df_test_input):
        # Read per-tweet count statistics of the unnested present_domains.
        count_present_domains = self._read_present_domains_count_from_bigquery(
            self.train_table, self.test_table
        )
        feature_names = ["mean_value", "max_value", "min_value", "std_value"]
        print(count_present_domains.shape)
        print(count_present_domains.isnull().sum())

        df_train_features = pd.DataFrame()
        df_test_features = pd.DataFrame()
        df_train_input = pd.merge(df_train_input, count_present_domains, on="tweet_id", how="left").fillna(0)
        df_test_input = pd.merge(df_test_input, count_present_domains, on="tweet_id", how="left").fillna(0)
        for fe in feature_names:
            df_train_features[fe] = df_train_input[fe].values
            df_test_features[fe] = df_test_input[fe].values
        print(df_train_features.isnull().sum())
        print(df_test_features.isnull().sum())
        return df_train_features, df_test_features


if __name__ == "__main__":
    CountEncodingPresentDomains.main()
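For local testing without BigQuery, the aggregation performed by the query can be mirrored with plain pandas. This is a hypothetical sketch (the function name and the in-memory mapping of `tweet_id` to domains are illustrative, not part of the original pipeline):

```python
import pandas as pd

def domain_count_features(tweets):
    """tweets: dict mapping tweet_id -> list of present domains.
    Mirrors the SQL: per-tweet mean/min/max/std of global domain counts."""
    # Explode to (tweet_id, domain) pairs, like UNNEST in the query.
    rows = [(tid, d) for tid, doms in tweets.items() for d in doms]
    df = pd.DataFrame(rows, columns=["tweet_id", "present_domain"])
    # Global occurrence count per domain (the count_present_domain CTE).
    df["cnt"] = df.groupby("present_domain")["present_domain"].transform("count")
    agg = df.groupby("tweet_id")["cnt"].agg(
        mean_value="mean", min_value="min", max_value="max", std_value="std"
    )
    # SQL: CASE WHEN STDDEV(cnt) IS NULL THEN 1 -- single-domain tweets get std 1.
    agg["std_value"] = agg["std_value"].fillna(1)
    return agg.reset_index()
```

Note that pandas' `std` uses the sample standard deviation (ddof=1), matching BigQuery's `STDDEV`, and yields NaN for tweets with a single domain, which the `fillna(1)` replaces just like the `CASE WHEN` clause.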
| 35.663265 | 109 | 0.578827 | 389 | 3,495 | 4.858612 | 0.277635 | 0.088889 | 0.050265 | 0.036508 | 0.189947 | 0.174603 | 0.174603 | 0.174603 | 0.174603 | 0.122751 | 0 | 0.005291 | 0.351073 | 3,495 | 97 | 110 | 36.030928 | 0.828042 | 0.007725 | 0 | 0.072289 | 0 | 0 | 0.45528 | 0.021639 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036145 | false | 0 | 0.084337 | 0.012048 | 0.168675 | 0.048193 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b7933c47db153c1ec83f5874cfd167e2b409ed3 | 1,214 | py | Python | IntroDataScience/ejercicios/06/mean.py | aess14/Cursos-Uniandes | be016b25f2f49788235fbe91ec577fd16b9ad613 | [
"MIT"
] | null | null | null | IntroDataScience/ejercicios/06/mean.py | aess14/Cursos-Uniandes | be016b25f2f49788235fbe91ec577fd16b9ad613 | [
"MIT"
] | null | null | null | IntroDataScience/ejercicios/06/mean.py | aess14/Cursos-Uniandes | be016b25f2f49788235fbe91ec577fd16b9ad613 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
def prior(mu):
    """
    Probability density of mu (uniform prior over the grid).
    """
    p = np.ones(len(mu)) / (mu.max() - mu.min())
    return p

def like(x, sigma, mu):
    """
    Likelihood of observing data x with uncertainties sigma.
    """
    L = np.ones(len(mu))
    for x_i, sigma_i in zip(x, sigma):
        L *= (1.0 / np.sqrt(2.0 * np.pi * sigma_i**2)) * np.exp(-0.5 * (x_i - mu)**2 / (sigma_i**2))
    return L

def posterior(mu, x, sigma):
    """
    Posterior computed with the proper normalization.
    """
    post = like(x, sigma, mu) * prior(mu)
    evidencia = np.trapz(post, mu)
    return post / evidencia

def maximo_incertidumbre(x, y):
    deltax = x[1] - x[0]
    # maximum of y
    ii = np.argmax(y)
    # second derivative (finite differences) gives the curvature at the peak
    d = (y[ii + 1] - 2 * y[ii] + y[ii - 1]) / (deltax**2)
    return x[ii], 1.0 / np.sqrt(-d)

x = [4.6, 6.0, 2.0, 5.8]
sigma = [2.0, 1.5, 5.0, 1.0]
mu = np.linspace(0.0, 10.0, 1000)
post = posterior(mu, x, sigma)
mu_best, incertidumbre = maximo_incertidumbre(mu, np.log(post))

plt.figure()
plt.plot(mu, post)
plt.title(r'$\mu$= {:.2f} $\pm$ {:.2f}'.format(mu_best, incertidumbre))
plt.xlabel(r'$\mu$')
plt.ylabel(r'prob($\mu$|data)')
plt.savefig('mean.png')
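With Gaussian likelihoods and a flat prior, the grid estimate above has a closed form it can be cross-checked against (a sketch added here, not part of the original script): the posterior is itself Gaussian, peaked at the inverse-variance weighted mean, with width 1/sqrt(sum(1/sigma_i**2)).

```python
import numpy as np

def gaussian_posterior_exact(x, sigma):
    """Closed-form posterior mean and width for Gaussian data, flat prior."""
    w = 1.0 / np.asarray(sigma) ** 2          # inverse-variance weights
    mu_hat = np.sum(w * np.asarray(x)) / np.sum(w)
    return mu_hat, 1.0 / np.sqrt(np.sum(w))

# Same data as in the script; the grid result should agree to about
# the grid resolution (10/1000 = 0.01 here).
mu_hat, err = gaussian_posterior_exact([4.6, 6.0, 2.0, 5.8], [2.0, 1.5, 5.0, 1.0])
```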
| 22.072727 | 86 | 0.581549 | 212 | 1,214 | 3.29717 | 0.363208 | 0.042918 | 0.025751 | 0.031474 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046218 | 0.215815 | 1,214 | 54 | 87 | 22.481481 | 0.688025 | 0.132619 | 0 | 0 | 0 | 0 | 0.055666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.066667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b795e47993f764317453f8d08fc171b991375f7 | 582 | py | Python | quickSorting.py | slowy07/pythonApps | 22f9766291dbccd8185035745950c5ee4ebd6a3e | [
"MIT"
] | 10 | 2020-10-09T11:05:18.000Z | 2022-02-13T03:22:10.000Z | quickSorting.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
] | null | null | null | quickSorting.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
] | 6 | 2020-11-26T12:49:43.000Z | 2022-03-06T06:46:43.000Z | def partition(arr, low, high):
    # Lomuto partition scheme: the last element is the pivot.
    i = low - 1
    pivot = arr[high]
    for j in range(low, high):
        if arr[j] <= pivot:
            i = i + 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quickSorting(arr, low, high):
    if low < high:
        # Place the pivot, then sort both partitions recursively.
        pi = partition(arr, low, high)
        quickSorting(arr, low, pi - 1)
        quickSorting(arr, pi + 1, high)

arr = [10, 9, 8, 7, 2, 3, 5, 2]
print("array is ", arr)
numberLength = len(arr)
quickSorting(arr, 0, numberLength - 1)
print("sorted array is ", arr)
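A quick randomized sanity check (not part of the original script) compares this Lomuto-partition quicksort against Python's built-in sorted; the two functions are repeated here so the snippet is self-contained:

```python
import random

# Self-contained copies of partition/quickSorting, for testing in isolation:
def partition(arr, low, high):
    i = low - 1
    pivot = arr[high]
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quickSorting(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quickSorting(arr, low, pi - 1)
        quickSorting(arr, pi + 1, high)

def quicksort_matches_builtin(trials=100, seed=0):
    """Sort random lists in place and compare against sorted()."""
    rng = random.Random(seed)
    for _ in range(trials):
        data = [rng.randint(-50, 50) for _ in range(rng.randint(0, 30))]
        expected = sorted(data)
        quickSorting(data, 0, len(data) - 1)
        if data != expected:
            return False
    return True
```

Passing every random trial, including empty lists and lists with duplicates, gives reasonable confidence in the in-place sort.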