hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b317eed8bc3c7f37020ba36b17b2500b5686a571 | 4,433 | py | Python | clifford/taylor_expansions.py | tBuLi/clifford | 31651d095834ad4c2c5da0692414b55c88823840 | [
"BSD-3-Clause"
] | 21 | 2015-03-21T14:20:29.000Z | 2020-04-20T02:12:03.000Z | clifford/taylor_expansions.py | tBuLi/clifford | 31651d095834ad4c2c5da0692414b55c88823840 | [
"BSD-3-Clause"
] | 13 | 2016-09-01T11:58:24.000Z | 2017-11-19T17:01:51.000Z | clifford/taylor_expansions.py | tBuLi/clifford | 31651d095834ad4c2c5da0692414b55c88823840 | [
"BSD-3-Clause"
] | 7 | 2016-08-31T16:19:42.000Z | 2017-11-16T17:27:45.000Z | """
.. currentmodule:: clifford.taylor_expansions
=====================================================
taylor_expansions (:mod:`clifford.taylor_expansions`)
=====================================================
.. versionadded:: 1.4.0
This file implements various Taylor expansions for useful functions of multivectors.
For some algebra signatures there may exist closed forms of these functions which would likely be faster
and more accurate. Nonetheless, having pre-written taylor expansions for the general case is useful.
.. note::
Many of these functions are also exposed as :class:`~clifford.MultiVector` methods,
such as :meth:`clifford.MultiVector.sin`. This means that ``mv.sin()`` or even ``np.sin(mv)`` can be used
as a convenient interface to functions in this module, without having to import it directly.
For example::
>>> from clifford.g3 import *
>>> import numpy as np
>>> np.sin(np.pi*e12/4)
(0.86867^e12)
Implemented functions
---------------------
.. autofunction:: exp
.. autofunction:: sin
.. autofunction:: cos
.. autofunction:: tan
.. autofunction:: sinh
.. autofunction:: cosh
.. autofunction:: tanh
"""
import math
import numpy as np
from . import _numba_utils
from . import _settings
@_numba_utils.njit
def exp(x, max_order=15):
"""
This implements the series expansion of :math:`\exp x` where :math:`x` is a multivector
The parameter `max_order` is the maximum order of the taylor series to use
"""
result = 1.0 + 0.0*x
if max_order == 0:
return result
# scale by power of 2 so that its norm is < 1
max_val = int(np.max(np.abs(x.value)))
scale = 1
if max_val > 1:
max_val <<= 1
while max_val:
max_val >>= 1
scale <<= 1
scaled = x * (1.0 / scale)
# taylor approximation
tmp = 1.0 + 0.0*x
for i in range(1, max_order):
if np.any(np.abs(tmp.value) > _settings._eps):
tmp = tmp*scaled * (1.0 / i)
result = result + tmp
else:
break
# undo scaling
while scale > 1:
result = result*result
scale >>= 1
return result
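
# Illustrative sketch (not part of the library): the same scale-by-2**k,
# Taylor-series, repeated-squaring strategy applied to a plain float,
# using only the standard library; exp(x) == exp(x / 2**k) ** (2**k).
def _scalar_exp_demo(x, max_order=15):
    k = 0
    while abs(x) > 1.0:  # halve the argument until the series converges quickly
        x *= 0.5
        k += 1
    result = term = 1.0
    for i in range(1, max_order):
        term *= x / i  # accumulates x**i / i!
        result += term
    for _ in range(k):  # undo the scaling by squaring k times
        result *= result
    return result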
@_numba_utils.njit
def sin(X, max_order=30):
"""
A taylor series expansion for sin
The parameter `max_order` is the maximum order of the taylor series to use
"""
op = +X
X2 = X*X
X2np1 = X
for n in range(1, max_order):
X2np1 = X2np1 * X2
op = op + ((-1) ** (n) / math.gamma(2 * n + 2)) * X2np1
return op
@_numba_utils.njit
def cos(X, max_order=30):
"""
A taylor series expansion for cos
The parameter `max_order` is the maximum order of the taylor series to use
"""
op = 1 + 0*X
X2 = X * X
X2n = 1 + 0*X
for n in range(1, max_order):
X2n = X2n*X2
op = op + ((-1) ** (n) / math.gamma(2 * n + 1)) * X2n
return op
def tan(X, max_order=30):
"""
The tan function as the ratio of sin and cos
The parameter `max_order` is the maximum order of the taylor series to use
.. note::
It would probably be better to implement this as its own taylor series. This function
is not JITed as currently we do not overload the truediv operator for multivectors.
"""
return sin(X, max_order) / cos(X, max_order)
@_numba_utils.njit
def sinh(X, max_order=30):
"""
A taylor series expansion for sinh
The parameter `max_order` is the maximum order of the taylor series to use
"""
op = +X
X2 = X * X
X2np1 = X
for n in range(1, max_order):
X2np1 = X2np1 * X2
op = op + (1 / math.gamma(2 * n + 2)) * X2np1
return op
@_numba_utils.njit
def cosh(X, max_order=30):
"""
A taylor series expansion for cosh
The parameter `max_order` is the maximum order of the taylor series to use
"""
op = 1 + 0 * X
X2 = X * X
X2n = 1 + 0 * X
for n in range(1, max_order):
X2n = X2n * X2
op = op + (1 / math.gamma(2 * n + 1)) * X2n
return op
def tanh(X, max_order=30):
"""
The tanh function as the ratio of sinh and cosh
The parameter `max_order` is the maximum order of the taylor series to use
.. note::
It would probably be better to implement this as its own taylor series. This function
is not JITed as currently we do not overload the truediv operator for multivectors.
"""
return sinh(X, max_order) / cosh(X, max_order)
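
# Hypothetical quick check (assumes the clifford package is importable, with
# e12 as in the module docstring example). tan is defined above as sin/cos,
# so both expressions run the same computation:
#
#   >>> from clifford.g3 import e12
#   >>> X = 0.3 * e12
#   >>> tan(X)  # same multivector as sin(X) / cos(X)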
| 26.866667 | 109 | 0.606361 | 667 | 4,433 | 3.95952 | 0.226387 | 0.0727 | 0.037486 | 0.05301 | 0.491102 | 0.455509 | 0.455509 | 0.455509 | 0.455509 | 0.380538 | 0 | 0.033261 | 0.267539 | 4,433 | 164 | 110 | 27.030488 | 0.780105 | 0.554252 | 0 | 0.455882 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102941 | false | 0 | 0.058824 | 0 | 0.279412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b318fb68451b5682ff5190f3321869e6da4e8c1a | 2,521 | py | Python | month05/Spider/Tecent/Tecent/spiders/tencent.py | chaofan-zheng/python_learning_code | 5d05848911d55aa49eaee4afd7ffd80536fad7aa | [
"Apache-2.0"
] | null | null | null | month05/Spider/Tecent/Tecent/spiders/tencent.py | chaofan-zheng/python_learning_code | 5d05848911d55aa49eaee4afd7ffd80536fad7aa | [
"Apache-2.0"
] | null | null | null | month05/Spider/Tecent/Tecent/spiders/tencent.py | chaofan-zheng/python_learning_code | 5d05848911d55aa49eaee4afd7ffd80536fad7aa | [
"Apache-2.0"
] | null | null | null | import json
import scrapy
import time
import requests
from ..items import TecentItem
class TencentSpider(scrapy.Spider):
name = 'tencent'
allowed_domains = ['careers.tencent.com']
first_url = 'https://careers.tencent.com/tencentcareer/api/post/Query?timestamp={}&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=&attrId=&keyword={}&pageIndex={}&pageSize=10&language=en-us&area='
headers = {
'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"}
second_url = 'https://careers.tencent.com/tencentcareer/api/post/ByPostId?timestamp={}&postId={}&num=7&language=en-us'
def get_timestamp(self):
timestamp = str(int(time.time() * 1000))
return timestamp
def get_pagen(self, timestamp, keyword):
url = self.first_url.format(timestamp, keyword, 1)
html = requests.get(url=url, headers=self.headers).json()
count = html['Data']['Count']
        # integer division: "/" would yield a float under Python 3 and break range() below
        pagen = count // 10 if count % 10 == 0 else count // 10 + 1
return pagen
def start_requests(self):
        keyword = input('Please enter a keyword: ')
timestamp = self.get_timestamp()
pagen = self.get_pagen(timestamp, keyword)
for page in range(1, pagen + 1):
page_url = self.first_url.format(timestamp, keyword, page)
yield scrapy.Request(url=page_url, callback=self.parse_first)
def parse_first(self, response):
post_list = json.loads(response.text)['Data']['Posts']
for post in post_list:
timestamp = self.get_timestamp()
post_id = post['PostId']
href = self.second_url.format(timestamp, post_id)
yield scrapy.Request(url=href, callback=self.parse_second)
def parse_second(self, response):
item = TecentItem()
item['post_name'] = json.loads(response.text)['Data']['RecruitPostName']
item['location_name'] = json.loads(response.text)['Data']['LocationName']
item['category'] = json.loads(response.text)['Data']['CategoryName']
item['update_time'] = json.loads(response.text)['Data']['LastUpdateTime']
item['responsibility'] = json.loads(response.text)['Data']['Responsibility']
item['requirement'] = json.loads(response.text)['Data']['Requirement']
yield item
if __name__ == '__main__':
spider = TencentSpider()
print(spider.get_pagen(spider.get_timestamp(), 'python'))
# print(spider.get_timestamp())
| 42.016667 | 219 | 0.659262 | 310 | 2,521 | 5.23871 | 0.367742 | 0.038793 | 0.073276 | 0.090517 | 0.21367 | 0.1367 | 0.100985 | 0.055419 | 0 | 0 | 0 | 0.022102 | 0.192384 | 2,521 | 59 | 220 | 42.728814 | 0.77554 | 0.011503 | 0 | 0.042553 | 0 | 0.06383 | 0.270281 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106383 | false | 0 | 0.106383 | 0 | 0.382979 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b31b96b0f97a4a311c4207315ff1606066d471f8 | 837 | py | Python | scripts/plot_discuss_features.py | WEgeophysics/watex | 21616ce35372a095c3dd624f82a5282b15cb2c91 | [
"MIT"
] | 3 | 2021-06-19T02:16:46.000Z | 2021-07-16T15:56:49.000Z | scripts/plot_discuss_features.py | WEgeophysics/watex | 21616ce35372a095c3dd624f82a5282b15cb2c91 | [
"MIT"
] | null | null | null | scripts/plot_discuss_features.py | WEgeophysics/watex | 21616ce35372a095c3dd624f82a5282b15cb2c91 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Fri Oct 8 09:04:27 2021
@author: @Daniel03
"""
from watex.viewer.plot import QuickPlot
# path to dataset
# data_fn = 'data/geo_fdata/BagoueDataset2.xlsx'
data_fn ='data/geo_fdata/main.bagciv.data2.csv'
# set figure title
fig_title = '`sfi` vs `ohmS`|`geol`'
# list of features to discuss
features2discuss = ['ohmS', 'sfi', 'geol', 'flow']
qkObj = QuickPlot( fig_legend_kws={'loc':'upper right'},
fig_title = fig_title,
)
# sns keywords arguments
sns_pkws={'aspect':2 ,
"height": 2}
# marker keywords arguments
map_kws={'edgecolor':"w"}
qkObj.discussingFeatures(
data_fn =data_fn ,
    features=features2discuss,
map_kws=map_kws, **sns_pkws
)
| 22.026316 | 57 | 0.57945 | 100 | 837 | 4.69 | 0.64 | 0.051173 | 0.063966 | 0.055437 | 0.076759 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033727 | 0.291517 | 837 | 37 | 58 | 22.621622 | 0.757167 | 0.285544 | 0 | 0 | 0 | 0 | 0.183533 | 0.06175 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b31be5d103cfb9e8ae792650af1addd027a89193 | 309 | py | Python | main/Customer/urls.py | VikasSherawat/OrderFood | 1c9b67d48f3add57053b50b8a07695016fe2bb4a | [
"MIT"
] | null | null | null | main/Customer/urls.py | VikasSherawat/OrderFood | 1c9b67d48f3add57053b50b8a07695016fe2bb4a | [
"MIT"
] | 4 | 2020-02-11T21:44:22.000Z | 2021-06-10T17:28:46.000Z | main/Customer/urls.py | VikasSherawat/OrderFood | 1c9b67d48f3add57053b50b8a07695016fe2bb4a | [
"MIT"
] | 1 | 2017-03-13T04:21:49.000Z | 2017-03-13T04:21:49.000Z | from django.conf.urls import url
from . import views
app_name = 'customer'
urlpatterns = [
# url(r'^$', views.index, name='index'),
url(r'^fooditems/(?P<shop_id>[0-9]+)/$', views.fooditems, name="fooditems"),
url(r'^buyitem/(?P<fooditem_id>[0-9]+)/$', views.buy_fooditem, name="buy_fooditem")
] | 28.090909 | 87 | 0.647249 | 45 | 309 | 4.333333 | 0.488889 | 0.061538 | 0.041026 | 0.092308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014815 | 0.126214 | 309 | 11 | 88 | 28.090909 | 0.707407 | 0.122977 | 0 | 0 | 0 | 0 | 0.351852 | 0.244444 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b31c17bc22fb02364bf3e27057e5970be530fe76 | 2,149 | py | Python | playground/messaging/producer/config.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | playground/messaging/producer/config.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | playground/messaging/producer/config.py | murlokito/playground | 405a7091bbfd6705db967e872ed6c4591bd892e6 | [
"MIT"
] | null | null | null | __title__ = "simulation"
__author__ = "murlux"
__copyright__ = "Copyright 2019, " + __author__
__credits__ = (__author__, )
__license__ = "MIT"
__email__ = "murlux@protonmail.com"
# Global imports
from enum import Enum
from typing import Any, Dict
from playground.messaging.flow import Flow, flow_from_json
from playground.messaging.stream import Stream, stream_from_json
class ProducerConfig:
"""An object representing the ProducerConfig."""
name: str = None
socket_ip: str = None
socket_port: int = None
flow: Flow = None
produce_stream: Stream = None
def __init__(self,
name: str = None, socket_ip: str = None, socket_port: int = None, flow: Flow = None, produce_stream: Stream = None,
):
"""
Simply initiate the ProducerConfig.
"""
if name is None:
raise Exception("ProducerConfig class needs `name` param to designate itself")
self.name = name
if socket_ip is None:
raise Exception("ProducerConfig class needs `socket_ip` param to connect to stream")
self.socket_ip = socket_ip
if socket_port is None:
raise Exception("ProducerConfig class needs `socket_port` param to connect to stream")
self.socket_port = socket_port
if flow is None:
raise Exception("ConsumerConfig class needs `flow` param to connect to stream")
self.flow = flow
if produce_stream is None:
raise Exception("ConsumerConfig class needs `produce_stream` param to connect to stream")
self.produce_stream = produce_stream
    @staticmethod
    def producer_config_from_json(json: Dict[str, Any] = None) -> 'ProducerConfig':
        """
        Build and return a ProducerConfig from a JSON dict (a factory, so no
        instance state is used and this is a staticmethod).
        """
if json is None:
raise Exception("ProducerConfig from_json method needs `json` param")
return ProducerConfig(
name=json.get('name', None),
socket_ip=json.get('socket_ip', None),
socket_port=json.get('socket_port', None),
flow=flow_from_json(json=json.get('flow', None)),
produce_stream=stream_from_json(json=json.get('stream', None)),
)
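
# Hypothetical usage sketch (values are illustrative; the nested 'flow' and
# 'stream' dicts must match whatever flow_from_json/stream_from_json expect):
#
#   cfg = ProducerConfig.producer_config_from_json(json={
#       'name': 'demo-producer',
#       'socket_ip': '127.0.0.1',
#       'socket_port': 5555,
#       'flow': {...},
#       'stream': {...},
#   })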
| 30.7 | 119 | 0.665891 | 265 | 2,149 | 5.132075 | 0.226415 | 0.047059 | 0.048529 | 0.088235 | 0.494853 | 0.422059 | 0.332353 | 0.188235 | 0.114706 | 0.114706 | 0 | 0.002463 | 0.2443 | 2,149 | 69 | 120 | 31.144928 | 0.834975 | 0.060493 | 0 | 0 | 0 | 0 | 0.233418 | 0.010633 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.295455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3208faab0bb9858fdeabda875efe2eb39010200 | 5,673 | py | Python | robot3_18c1054/scripts/visualizer.py | 18C1054-S-K/robot3_ros | c34cf2989829b3f38648b5e32be1d5cc5264ad80 | [
"MIT"
] | null | null | null | robot3_18c1054/scripts/visualizer.py | 18C1054-S-K/robot3_ros | c34cf2989829b3f38648b5e32be1d5cc5264ad80 | [
"MIT"
] | null | null | null | robot3_18c1054/scripts/visualizer.py | 18C1054-S-K/robot3_ros | c34cf2989829b3f38648b5e32be1d5cc5264ad80 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import rospy
import numpy as np
import math
import time
from sensor_msgs.msg import JointState
from std_msgs.msg import Float32, Float32MultiArray, Bool, Header
from robot3_18c1054.msg import HandClose
from robot3_18c1054.srv import GetHandState, GetHandStateResponse, GetInitTime, GetInitTimeResponse
class VisualizerNode():
GRAVITY = 9.8
JOINT_NUM = 6
DELTA_TIME = 0.05
arm_ang = [0.0]*6
is_hand_close = False
hand_state = [0.0]*6
hand_ang_close = 0.0
hand_ang_open = 0.8
is_shooted = False
shoot_timestamp = time.time()
is_ball_flying = False
shooter_state = [0.0, -1.0, 0.0, 0.0]
ball_state = [0.0]*6
ball_initial_state = [0.0]*6
init_time = 0.0
def __init__(self):
#get init_time
rospy.wait_for_service('get_init_time')
try:
get_init_time = rospy.ServiceProxy('get_init_time', GetInitTime)
resp = get_init_time()
self.init_time = resp.year
self.init_time = self.init_time*12 + resp.month
self.init_time = self.init_time*30 + resp.day
self.init_time = self.init_time*24 + resp.hour
self.init_time = self.init_time*60 + resp.minute
self.init_time = self.init_time*60 + resp.second
self.init_time += resp.lower
except rospy.ServiceException:
print('service err in visualizer')
self.init_time = time.time()
self.pub_viz = rospy.Publisher('joint_states', JointState, queue_size=10)
self.sub_arm = rospy.Subscriber('arm_ang_angv', Float32MultiArray, self.update_arm)
self.sub_hand = rospy.Subscriber('hand_close', HandClose, self.update_hand)
self.sub_shooter = rospy.Subscriber('shooter_state', Float32MultiArray, self.update_shooter)
self.sub_shoot_1 = rospy.Subscriber('ball_initial_state', Float32MultiArray, self.shoot)
self.timer = rospy.Timer(rospy.Duration(self.DELTA_TIME), self.redisp)
def update_arm(self, msg):
for i in range(self.JOINT_NUM):
self.arm_ang[i] = msg.data[i]
def update_hand(self, msg):
self.is_hand_close = msg.close
if self.is_hand_close:
count = (msg.time_stamp + self.init_time) - self.shoot_timestamp
for i in range(3):
self.ball_state[i+3] = self.ball_initial_state[i+3]
self.ball_state[i] = self.ball_initial_state[i]
self.ball_state[i] += count * self.ball_initial_state[i+3]
self.ball_state[4] -= self.GRAVITY * count
self.ball_state[1] -= (self.GRAVITY/2.0)*(count**2)
self.check_catch()
# self.is_ball_flying = False
SAFE_DIST_X = 0.02
SAFE_DIST_Y = 0.05
SAFE_DIST_Z = 0.05
SAFE_COS_ANG_Z = 0.75
SAFE_COS_ANG_Y = 0.5
def check_catch(self):
rospy.wait_for_service('get_hand_state')
try:
get_hand_state = rospy.ServiceProxy('get_hand_state', GetHandState)
resp = get_hand_state()
hand_pos = np.zeros((3,1))
hand_pos_v = np.zeros((3,1))
hand_att = np.zeros((3,3))
for i in range(3):
hand_pos[i,0]=resp.hand_state[i]
hand_pos_v[i,0]=resp.hand_state[i+3]
for j in range(3):
hand_att[i,j]=resp.hand_state[i*3+j+6]
ball_pos = np.zeros((3,1))
ball_pos_v = np.zeros((3,1))
for i in range(3):
ball_pos[i,0] = self.ball_state[i]
ball_pos_v[i,0] = self.ball_state[i+3]
print('--------------------------')
rel_p = ball_pos - hand_pos
rel_p = hand_att.T @ rel_p
print("ball's relative position from hand :")
print(rel_p)
if abs(rel_p[0,0]) < self.SAFE_DIST_X and abs(rel_p[1,0]) < self.SAFE_DIST_Y and abs(rel_p[2,0]) < self.SAFE_DIST_Z:
rel_v = ball_pos_v - hand_pos_v
rel_v = hand_att.T @ rel_v
v_norm = np.linalg.norm(rel_v)
                # the second bound must be negative, otherwise the parenthesized
                # test is true for any direction except exact equality
                if (rel_v[2,0] > v_norm*self.SAFE_COS_ANG_Z or rel_v[2,0] < -v_norm*self.SAFE_COS_ANG_Z) and rel_v[1,0] < v_norm*self.SAFE_COS_ANG_Y:
self.is_ball_flying = False
if self.is_ball_flying and self.is_shooted:
print('ball catch : miss')
elif self.is_shooted:
print('ball catch : success')
print('--------------------------')
except rospy.ServiceException: pass
def update_shooter(self, msg):
for i in range(4):
self.shooter_state[i] = msg.data[i]
def shoot(self, msg):
self.shoot_timestamp = msg.data[6] + self.init_time
self.is_shooted = True
self.is_ball_flying = True
for i in range(6):
self.ball_initial_state[i]=msg.data[i]
def redisp(self, event):
if not rospy.is_shutdown():
j_s = JointState()
j_s.header = Header()
j_s.header.stamp = rospy.Time.now()
j_s.name = ['hand_1','hand_2','shooter_x','shooter_z','shooter_roll','shooter_pitch','ball_x','ball_y','ball_z', 'joint_1','joint_2','joint_3','joint_4','joint_5','joint_6']
arr = [0.0]*(9+self.JOINT_NUM)
#arm
for i in range(self.JOINT_NUM):
arr[i+9] = self.arm_ang[i]
#hand
if self.is_hand_close:
arr[0]=self.hand_ang_close
arr[1]=self.hand_ang_close
else:
arr[0]=self.hand_ang_open
arr[1]=self.hand_ang_open
#shooter
for i in range(4):
arr[i+2]=self.shooter_state[i]
#ball
if self.is_shooted and self.is_ball_flying:
count = time.time() - self.shoot_timestamp
for i in range(3):
self.ball_state[i+3] = self.ball_initial_state[i+3]
self.ball_state[i] = self.ball_initial_state[i]
self.ball_state[i] += count * self.ball_initial_state[i+3]
self.ball_state[4] -= self.GRAVITY * count
self.ball_state[1] -= (self.GRAVITY/2.0)*(count**2)
elif not self.is_shooted:
self.ball_state[0]=self.shooter_state[0]
self.ball_state[1]=0.0
self.ball_state[2]=self.shooter_state[1]
for i in range(3):
arr[i+6]=self.ball_state[i]
#publish
j_s.position = arr
try:
self.pub_viz.publish(j_s)
except rospy.ROSException: pass
def main():
rospy.init_node('visualizer', anonymous=True)
node = VisualizerNode()
rospy.spin()
if __name__ == '__main__':
main()
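
# Minimal standalone sketch of the ballistic update used in update_hand() and
# redisp() above (assumes the same state layout [x, y, z, vx, vy, vz] with
# gravity acting along -y):
def propagate_ball(initial_state, gravity, t):
    state = list(initial_state)
    for i in range(3):
        state[i + 3] = initial_state[i + 3]                      # velocity carried over
        state[i] = initial_state[i] + t * initial_state[i + 3]   # constant-velocity part
    state[4] -= gravity * t               # vy picks up -g*t
    state[1] -= (gravity / 2.0) * t ** 2  # y picks up -(g/2)*t^2
    return state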
| 30.5 | 176 | 0.693637 | 988 | 5,673 | 3.719636 | 0.154858 | 0.050068 | 0.056599 | 0.029932 | 0.326259 | 0.244354 | 0.162721 | 0.144762 | 0.128435 | 0.128435 | 0 | 0.034635 | 0.160233 | 5,673 | 185 | 177 | 30.664865 | 0.736776 | 0.015688 | 0 | 0.174497 | 0 | 0 | 0.072108 | 0.009327 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053691 | false | 0.013423 | 0.053691 | 0 | 0.248322 | 0.04698 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3216e51e921df93ba62232e2dc93c7a485cd2b1 | 6,980 | py | Python | gpu_bdb/min/large_ddf.py | cylondata/gpu-bdb | 6edef0985e9953c6bc9b4e0639b0dff1c9facefa | [
"Apache-2.0"
] | null | null | null | gpu_bdb/min/large_ddf.py | cylondata/gpu-bdb | 6edef0985e9953c6bc9b4e0639b0dff1c9facefa | [
"Apache-2.0"
] | null | null | null | gpu_bdb/min/large_ddf.py | cylondata/gpu-bdb | 6edef0985e9953c6bc9b4e0639b0dff1c9facefa | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2019-2020, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import sys
import dask
import dask.dataframe as dd
from dask.distributed import Client
from dask.distributed import wait
import dask_cudf
import pandas as pd
import numpy as np
import cudf
import cupy
def create_pandas_df(nrows, ncols, start=0):
df = pd.DataFrame()
for i in range(ncols):
df["col-" + str(i)] = np.arange(start, nrows+start, dtype="int64")
print("pandas df constructed.")
print("rows in pandas df:", len(df.index))
return df
def create_cudf_df(nrows, ncols, start=0):
df = cudf.DataFrame()
for i in range(ncols):
df["col-" + str(i)] = cupy.arange(start, nrows + start, dtype="int64")
print("cudf df constructed.")
print("rows in cudf df:", len(df.index))
return df
# this first creates the df on the client gpu,
# then df is transferred from client gpu to worker gpus by partitioning
# persist/wait mechanism makes sure that the df on the client gpu is transferred to worker gpus
# this method does not scale over 2GB
def create_dask_cudf_df1(nrows, ncols, npartitions):
df = create_cudf_df(nrows=nrows, ncols=ncols)
ddf = dask_cudf.from_cudf(df, npartitions=npartitions)
ddf = ddf.persist()
wait(ddf)
print("dask_cudf ddf constructed.")
print("rows in ddf:", len(ddf.index))
return ddf
# this first creates a single large pandas dataframe on the client cpu,
# then dask.df is constructed from it with partitions
# then the dask.df is transferred from client cpu to worker gpus as a dask_cudf.df
# this method does not scale and it does not work for 4GB and higher
def create_dask_cudf_df2(nrows, ncols, npartitions):
df = create_pandas_df(nrows=nrows, ncols=ncols)
ddf = dd.from_pandas(df, npartitions=npartitions)
ddf = dask_cudf.from_dask_dataframe(ddf)
ddf = ddf.persist()
wait(ddf)
print("dask_cudf ddf constructed.")
print("rows in ddf:", len(ddf.index))
print("npartitions in ddf:", ddf.npartitions)
return ddf
# this method first creates a partition as a cudf df on the client gpu,
# then df is transferred from client gpu to a worker gpu as a single df
# when all partitions are transferred to worker gpus, all concatted to make a single dask_cudf df
# this method works when each partition size is 1GB or smaller, but it is very slow.
# since all partitions are first created on client gpu and transferred to worker gpus sequentially
def create_dask_cudf_df3(nrows, ncols, npartitions):
nrows = int(nrows/npartitions)
df_list = []
for i in range(npartitions):
df = create_cudf_df(nrows=nrows, ncols=ncols, start=nrows * i)
ddf = dask_cudf.from_cudf(df, npartitions=1)
ddf = ddf.persist()
wait(ddf)
df_list.append(ddf)
print(i, "dask_cudf ddf constructed.")
print(i, "rows in ddf:", len(ddf.index))
ddf = dask_cudf.concat(df_list)
ddf = ddf.persist()
wait(ddf)
print("concated dask_cudf ddf.")
print("rows in ddf:", len(ddf.index))
print("npartitions in ddf:", ddf.npartitions)
return ddf
# this method first creates a partition as a pandas df on the client cpu,
# this pandas df is converted to a dask.df
# then the dask.df is transferred from client cpu to a worker gpu as a single dask_cudf.df
# when all partitions are transferred to worker gpus, all dask_cudf.df's are concatted to make a single dask_cudf.df
# this method works for 16GB, or 32GB when each partition size is 1GB or smaller
# but it is very slow since all data is created on the client cpu and transferred to gpus sequentially
def create_dask_cudf_df4(nrows, ncols, npartitions):
nrows = int(nrows/npartitions)
df_list = []
for i in range(npartitions):
df = create_pandas_df(nrows=nrows, ncols=ncols, start=nrows * i)
ddf = dd.from_pandas(df, npartitions=1)
ddf = dask_cudf.from_dask_dataframe(ddf)
ddf = ddf.persist()
wait(ddf)
df_list.append(ddf)
print(i, "dask_cudf ddf constructed.")
print(i, "rows in ddf:", len(ddf.index))
ddf = dask_cudf.concat(df_list)
ddf = ddf.persist()
wait(ddf)
print("concated dask_cudf ddf.")
print("rows in ddf:", len(ddf.index))
print("npartitions in ddf:", ddf.npartitions)
return ddf
# create each partition as a cudf df in parallel on the worker gpus with dask.delayed
@dask.delayed
def create_single_cudf_partition(nrows, ncols, start=0):
    # returns a plain cudf df; dd.from_delayed assembles the partitions below
    return create_cudf_df(nrows=nrows, ncols=ncols, start=start)

# create each partition in parallel on worker gpus with dask.delayed,
# then assemble them into a single dask_cudf df with dd.from_delayed
def create_dask_cudf_df5(nrows, ncols, npartitions):
    nrows = int(nrows/npartitions)
    part_list = []
    for i in range(npartitions):
        part = create_single_cudf_partition(nrows=nrows, ncols=ncols, start=nrows * i)
        part_list.append(part)
        print(i, "delayed partition constructed.")
    ddf = dd.from_delayed(part_list)
    ddf = ddf.persist()
    wait(ddf)
    print("assembled dask_cudf ddf.")
    print("rows in ddf:", len(ddf.index))
    print("npartitions in ddf:", ddf.npartitions)
    print("sum of first column in ddf:", ddf["col-0"].sum().compute())
    return ddf
if __name__ == "__main__":
scheduler_file = "dask-local-directory/scheduler.json"
try:
with open(scheduler_file) as fp:
print(fp.read())
client = Client(scheduler_file=scheduler_file)
print('Connected!')
except OSError as e:
sys.exit(f"Unable to create a Dask Client connection: {e}")
print("client:", client)
import cupy as cp
import rmm
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
client.run(cp.cuda.set_allocator, rmm.rmm_cupy_allocator)
ncols = 4
# nrows = 125000000 * 4
    nrows = 125000000  # 125M rows * 8 bytes = 1GB per column
# nrows = 125000 * 4
npartitions = 4 * 4
print("nrows, ncols, npartitions:", nrows, ncols, npartitions)
ddf = create_dask_cudf_df5(nrows=nrows, ncols=ncols, npartitions=npartitions)
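
    # Illustrative CPU-only analogue of method 5 (assumes only dask and pandas
    # are installed; no GPUs or scheduler file needed): partitions are built
    # lazily with dask.delayed and assembled with dd.from_delayed, so nothing
    # is materialized on the client.
    #
    #   @dask.delayed
    #   def make_part(nrows, ncols, start):
    #       return pd.DataFrame({"col-" + str(i): np.arange(start, start + nrows)
    #                            for i in range(ncols)})
    #
    #   parts = [make_part(1000, 4, 1000 * i) for i in range(8)]
    #   cpu_ddf = dd.from_delayed(parts)
    #   print(len(cpu_ddf.index))  # 8000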
| 36.165803 | 116 | 0.698281 | 1,074 | 6,980 | 4.436685 | 0.188082 | 0.052046 | 0.018468 | 0.029381 | 0.564743 | 0.529276 | 0.469045 | 0.420147 | 0.391396 | 0.33893 | 0 | 0.013101 | 0.201719 | 6,980 | 192 | 117 | 36.354167 | 0.842067 | 0.34914 | 0 | 0.495868 | 0 | 0 | 0.143238 | 0.007785 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066116 | false | 0 | 0.099174 | 0 | 0.231405 | 0.239669 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3245056e3f0f3fb5389269f801f8bf1920f67ff | 13,768 | py | Python | ehb_service/apps/api/views/externalrecord.py | chop-dbhi/ehb-service | 62f26cbb1495026caf37e1c326a593253b88bdd9 | [
"BSD-2-Clause"
] | 1 | 2018-11-03T15:53:50.000Z | 2018-11-03T15:53:50.000Z | ehb_service/apps/api/views/externalrecord.py | chop-dbhi/ehb-service | 62f26cbb1495026caf37e1c326a593253b88bdd9 | [
"BSD-2-Clause"
] | 78 | 2015-12-16T20:12:20.000Z | 2020-04-08T19:42:35.000Z | ehb_service/apps/api/views/externalrecord.py | chop-dbhi/ehb-service | 62f26cbb1495026caf37e1c326a593253b88bdd9 | [
"BSD-2-Clause"
] | 1 | 2019-11-04T06:24:02.000Z | 2019-11-04T06:24:02.000Z | import json
import logging
from django.db.models import Q
from core.forms import ExternalRecordForm, ExternalRecordRelationForm
from .constants import ErrorConstants
from api.helpers import FormHelpers
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status
from rest_framework.decorators import permission_classes
from rest_framework import permissions
from core.models.identities import ExternalRecord, \
ExternalRecordRelation, ExternalRecordLabel, Subject, ExternalSystem
log = logging.getLogger(__name__)
class ExternalRecordQuery(APIView):
def responseLabels(self, subjid, subj_org, subj_org_id, esid, esname, esurl, path):
label_dict = {}
# Subject based labels
if subjid:
label_dict['subject_id'] = subjid
elif subj_org and subj_org_id:
label_dict['subject_org'] = subj_org
label_dict['subject_org_id'] = subj_org_id
else:
label_dict['subject_'] = 'not_provided'
# External system based labels
if esid:
label_dict['external_system_id'] = esid
elif esname:
label_dict['external_system_name'] = esname
elif esurl:
label_dict['external_system_url'] = esurl
else:
label_dict['external_system_'] = 'not_provided'
# Path modifiers
if path:
label_dict['path'] = path
else:
label_dict['path_'] = 'not_provided'
return label_dict
def appendError(self, response, errormsg, subjid, subj_org, subj_org_id, esid, esname, esurl, path):
response_dict = self.responseLabels(subjid, subj_org, subj_org_id, esid, esname, esurl, path)
response_dict['errors'] = errormsg
response.append(response_dict)
def post(self, request):
"""
This method is intended for querying for ExternalRecord records.
The query can be on any combination of Subject, ExternalSystem and path.
It is not necessary to specify all 3, but at least 1 must be provided.
The Subject can be specified by supplying the subject_id OR the subject_org AND subject_org_id.
The ExternalSystem can be specified by supplying the external_system_id OR external_system_name OR external_system_url
The path can be specified by supplying the path. This is the path to the record collection on the external system
"""
content_type = request.META.get("CONTENT_TYPE")
response = []
if content_type == "application/json":
for s in request.data:
# Look for valid subject and externalSystem identifiers
subjid = s.get('subject_id')
subj_org = s.get('subject_org')
subj_org_id = s.get('subject_org_id')
esid = s.get('external_system_id')
esname = s.get('external_system_name')
esurl = s.get('external_system_url')
subj = None
es = None
path = s.get('path')
# try to query for Subject, External System
try:
if subjid:
subj = Subject.objects.get(pk=subjid)
elif subj_org and subj_org_id:
qs = Subject.objects.filter(organization=subj_org).filter(organization_subject_id=subj_org_id)
                        if len(qs) == 1:
subj = qs[0]
if esid:
es = ExternalSystem.objects.get(pk=esid)
elif esname:
es = ExternalSystem.objects.get(name=esname)
elif esurl:
es = ExternalSystem.objects.get(url=esurl)
# Try to query for ExternalRecords
ers = ExternalRecord.objects.all()
if subj:
ers = ers.filter(subject__pk=str(subj.pk))
if es:
ers = ers.filter(external_system__pk=str(es.pk))
if path:
ers = ers.filter(path=path)
if ers:
era = []
for er in ers:
era.append(er.responseFieldDict())
response_dict = self.responseLabels(subjid, subj_org, subj_org_id, esid, esname, esurl, path)
response_dict['external_record'] = era
response.append(response_dict)
else:
log.error("No ExternalRecord found for query.")
response.append({"errors": [{"Query": ErrorConstants.ERROR_NO_RECORD_FOUND_FOR_QUERY}]})
except Subject.DoesNotExist:
log.error("No ExternalRecord found. Subject does not exist")
self.appendError(
response,
{
"Query": ErrorConstants.ERROR_NO_RECORD_FOUND_FOR_QUERY
},
subjid,
subj_org,
subj_org_id,
esid,
esname,
esurl,
path
)
except ExternalSystem.DoesNotExist:
log.error("No ExternalRecord found. External System does not exist")
self.appendError(
response,
{
"Query": ErrorConstants.ERROR_NO_RECORD_FOUND_FOR_QUERY
},
subjid,
subj_org,
subj_org_id,
esid,
esname,
esurl,
path
)
return Response(response)
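
# Example request body for the query POST above (field names follow the
# docstring; the values here are illustrative only):
#
#   [
#       {
#           "subject_org": "example_org",
#           "subject_org_id": "1234",
#           "external_system_name": "example_system",
#           "path": "demo"
#       }
#   ]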
class ExternalRecordView(APIView):
supported_accept_types = ['application/json', 'application/xml']
model = 'core.models.identities.ExternalRecord'
def get(self, request, pk):
er = None
try:
er = ExternalRecord.objects.get(pk=pk)
except ExternalRecord.DoesNotExist:
log.error("Unable to retrieve ExternalRecord[{0}]. It does not exists.".format(pk))
return Response(status=status.HTTP_404_NOT_FOUND)
if er:
r = er.responseFieldDict()
            return Response(r)
def post(self, request):
"""This method is intended for adding new ExternalRecord records"""
content_type = request.META.get("CONTENT_TYPE")
response = []
if content_type == "application/json":
for s in request.data:
label = s.get('label_id')
recordid = s.get('record_id')
path = s.get('path')
form = ExternalRecordForm(s)
args = {'record_id': recordid}
if path:
args['path'] = path
if label:
args['label_id'] = int(label)
else:
args['label_id'] = 1
FormHelpers.processFormJsonResponse(form, response, valid_dict=args, invalid_dict=args)
return Response(response)
def put(self, request):
"""This method is intended for updating an existing ExternalRecord record"""
content_type = request.META.get("CONTENT_TYPE")
response = []
if content_type == "application/json":
for item in request.data:
pkval = item.get('id')
s = item.get('external_record')
if not pkval or not s:
log.error("Unable to update existing ExternalRecord no identifier provided")
return Response(status=status.HTTP_422_UNPROCESSABLE_ENTITY)
try:
er = ExternalRecord.objects.get(pk=pkval)
subjid = s.get('subject', str(er.subject.pk))
esid = s.get('external_system', str(er.external_system.pk))
erri = s.get('record_id', er.record_id)
erpath = s.get('path', er.path)
if er.label:
label = s.get('label', er.label.id)
else:
label = s.get('label')
js = {
"subject": subjid,
"external_system": esid,
"record_id": erri,
"path": erpath,
"label": label
}
form = ExternalRecordForm(js, instance=er)
FormHelpers.processFormJsonResponse(form, response, invalid_dict={'id': pkval})
except ExternalRecord.DoesNotExist:
log.error("Unable to update ExternalRecord. ExternalRecord not found")
response.append(
{
'id': pkval,
'success': False,
'errors': [
{
'id': ErrorConstants.ERROR_RECORD_ID_NOT_FOUND
}
]
}
)
return Response(response)
def delete(self, request, pk):
try:
er = ExternalRecord.objects.get(pk=pk)
er.delete()
return Response(status=status.HTTP_204_NO_CONTENT)
except ExternalRecord.DoesNotExist:
log.error("Unable to delete ExternalRecord as it does not exist")
return Response(status=status.HTTP_404_NOT_FOUND)
class ExternalRecordLabelView(APIView):
'''
    View exposing ExternalRecord labels.
'''
supported_accept_type = ['application/json']
model = 'core.models.identities.ExternalRecordLabel'
def get(self, request, pk):
try:
erl = ExternalRecordLabel.objects.get(pk=pk)
return Response(json.dumps({"id": erl.id, "label": erl.label}))
except ExternalRecordLabel.DoesNotExist:
log.error("Unable to retrieve ExternalRecord label. Label not found.")
return Response(status=status.HTTP_404_NOT_FOUND)
def post(self, request):
response = []
labels = ExternalRecordLabel.objects.all()
for label in labels:
response.append({
"id": label.id,
"label": label.label
})
return Response(response)
class ExternalRecordRelationView(APIView):
'''
    View exposing related ExternalRecords.
'''
def get(self, request, pk, link=None):
response = []
if link:
relations = ExternalRecordRelation.objects.filter(external_record=pk, pk=link)
else:
relations = ExternalRecordRelation.objects.filter(Q(external_record=pk) | Q(related_record=pk))
data = []
for relation in relations:
r = relation.to_dict()
primary = False
if int(pk) == r['external_record']['id']:
primary = True
d = {
'external_record': r['related_record'],
'type': r['type'],
'description': r['relation_description'],
'id': r['id'],
'primary': primary
}
if (r['related_record']['id'] == int(pk)):
d['external_record'] = r['external_record']
data.append(d)
if link:
return Response(data[0])
else:
return Response(data)
def post(self, request, pk):
'''
This method is intended for adding new ExternalRecordRelation records
{
"related_record": 2,
"relation_type": 1
}
'''
content_type = request.META.get("CONTENT_TYPE")
response = []
if content_type == "application/json":
s = request.data
s['external_record'] = pk
# Check if this already exists
args = {}
args['external_record'] = request.data.get('external_record')
args['related_record'] = request.data.get('related_record')
args['relation_type'] = request.data.get('relation_type')
if len(ExternalRecordRelation.objects.filter(external_record=args['external_record'], related_record=args['related_record'], relation_type=args['relation_type'])) > 0:
return Response(json.dumps({'success': False, 'error': 'Record relation already exists'}))
form = ExternalRecordRelationForm(s)
r = FormHelpers.processFormJsonResponse(form, response, invalid_dict=args, valid_dict=args)
return Response(r)
def delete(self, request, pk, link):
'''
This method deletes the specified ExternalRecordRelation based id passed
via the URL
'''
response = []
try:
record = ExternalRecordRelation.objects.get(pk=link)
record.delete()
response.append(
{
'success': True
}
)
except ExternalRecordRelation.DoesNotExist:
response.append(
{
'error': 'Record relation does not exist',
'success': False
}
)
return Response(response)
| 36.519894 | 179 | 0.529198 | 1,329 | 13,768 | 5.329571 | 0.148984 | 0.021742 | 0.013977 | 0.011859 | 0.311591 | 0.260765 | 0.208951 | 0.158549 | 0.126641 | 0.115064 | 0 | 0.002955 | 0.385532 | 13,768 | 376 | 180 | 36.617021 | 0.834279 | 0.087231 | 0 | 0.34296 | 0 | 0 | 0.122868 | 0.006386 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043321 | false | 0 | 0.043321 | 0 | 0.176895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b327fc80caa49ef916375787bf5e4e7f062d013b | 8,409 | py | Python | cardbuilder/resolution/printer.py | jrhoff/cardbuilder | 857360b1827494a286cee9928cb004af882e55b4 | [
"MIT"
] | null | null | null | cardbuilder/resolution/printer.py | jrhoff/cardbuilder | 857360b1827494a286cee9928cb004af882e55b4 | [
"MIT"
] | null | null | null | cardbuilder/resolution/printer.py | jrhoff/cardbuilder | 857360b1827494a286cee9928cb004af882e55b4 | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from collections import OrderedDict
from os import mkdir
from os.path import exists, join
from typing import Optional, Callable, get_type_hints, Dict
import requests
from cardbuilder.common.util import dedup_by, retry_with_logging
from cardbuilder.exceptions import CardBuilderUsageException
from cardbuilder.lookup.value import SingleValue, ListValue, MultiListValue, MultiValue, Value
class Printer(ABC):
@abstractmethod
def __call__(self, *args, **kwargs) -> str:
raise NotImplementedError()
def get_input_type(self) -> type:
return next(val for key, val in get_type_hints(self.__call__).items() if key != 'return')
class WrappingPrinter(Printer, ABC):
def __init__(self, printer: Printer):
self._printer = printer
def get_input_type(self) -> type:
return self._printer.get_input_type()
class SingleValuePrinter(Printer):
"""The printer class for single values, like a word, part of speech, or single sentence definition."""
value_format = '{value}'
def __init__(self, format_string=value_format):
if self.value_format not in format_string:
raise CardBuilderUsageException('Format string {} does not include '.
format(format_string) + self.value_format)
self.format_string = format_string
def __call__(self, value: SingleValue) -> str:
return self.format_string.format(value=value.get_data())
class MultiValuePrinter(Printer):
def __init__(self, value_printer: SingleValuePrinter = SingleValuePrinter(),
header_printer: Optional[SingleValuePrinter] = SingleValuePrinter('{value}: '), join_string: str = ', ',
max_length: int = 10, print_lone_header: bool = True):
self.value_printer = value_printer
self.header_printer = header_printer
self.join_string = join_string
self.max_length = max_length
self.print_lone_header = print_lone_header
def __call__(self, value: MultiValue) -> str:
if len(value.get_data()) == 1 and not self.print_lone_header:
header_printer = None
else:
header_printer = self.header_printer
return self.join_string.join([(header_printer(header) if header is not None and header_printer is not None
else '') + self.value_printer(value) for value, header in value.get_data()
][:self.max_length])
class ListValuePrinter(Printer):
def __init__(self, single_value_printer: SingleValuePrinter = SingleValuePrinter(), join_string: str = ', ',
number_format_string: Optional[str] = None, sort_key: Optional[Callable[[SingleValue], int]] = None,
max_length: int = 10):
self.single_value_printer = single_value_printer
self.join_string = join_string
self.num_fstring = number_format_string
self.sort_key = sort_key
self.max_length = max_length
if self.num_fstring is not None:
if '{number}' not in self.num_fstring:
raise CardBuilderUsageException('Number format string must include "{number}"')
def __call__(self, value: ListValue) -> str:
data = (value.get_data() if self.sort_key is None else sorted(value.get_data(),
key=self.sort_key))[:self.max_length]
return self.join_string.join([
(self.num_fstring.format(number=idx) if self.num_fstring is not None else '') + self.single_value_printer(
val)
for idx, val in enumerate(data)
])
class MultiListValuePrinter(Printer):
def __init__(self, list_printer: ListValuePrinter = ListValuePrinter(number_format_string='{number}. ',
join_string='\n'),
header_printer: Optional[SingleValuePrinter] = SingleValuePrinter('{value}\n'),
join_string: str = '\n\n', group_by_header: bool = True, max_length: int = 10,
print_lone_header: bool = True):
self.list_printer = list_printer
self.header_printer = header_printer
self.join_string = join_string
self.group_by_header = group_by_header
self.max_length = max_length
self.print_lone_header = print_lone_header
def __call__(self, value: MultiListValue) -> str:
data = value.get_data()
if self.group_by_header:
grouped_data = OrderedDict()
for data_list, header in data:
if header not in grouped_data:
grouped_data[header] = list()
grouped_data[header].extend(x.get_data() for x in data_list.get_data())
data = list((ListValue(val), key) for key, val in grouped_data.items())
data = data[:self.max_length]
if len(data) == 1 and not self.print_lone_header:
header_printer = None
else:
header_printer = self.header_printer
return self.join_string.join([
(header_printer(header) if header is not None and header_printer is not None else '') +
self.list_printer(data_list) for data_list, header in data
])
class TatoebaPrinter(MultiValuePrinter):
def __init__(self, **kwargs):
if 'header_printer' not in kwargs:
kwargs['header_printer'] = SingleValuePrinter('{value}\n')
if 'join_string' not in kwargs:
kwargs['join_string'] = '\n\n'
super(TatoebaPrinter, self).__init__(**kwargs)
def __call__(self, value: MultiValue) -> str:
deduped_value = MultiValue([(x.get_data(), y.get_data()) for x, y in
dedup_by(dedup_by(value.get_data(), lambda x: x[0]), lambda x: x[1])])
return super().__call__(deduped_value)
class DefaultPrinter(Printer):
def __call__(self, value: Value):
return {
SingleValue: SingleValuePrinter(),
MultiValue: MultiValuePrinter(),
ListValue: ListValuePrinter(),
MultiListValue: MultiListValuePrinter()
}[type(value)](value)
class CasePrinter(Printer):
def __init__(self, printers_by_type: Dict[type, Printer]):
self.printers_by_type = printers_by_type
def __call__(self, value: Value) -> str:
if type(value) in self.printers_by_type:
return self.printers_by_type[type(value)](value)
else:
raise CardBuilderUsageException(f'{type(self).__name__} that supports types '
f'{set(self.printers_by_type.keys())} received type {type(value).__name__}'
f'to print')
class FirstValuePrinter(CasePrinter):
def __init__(self):
super(FirstValuePrinter, self).__init__({
ListValue: ListValuePrinter(max_length=1),
MultiValue: MultiValuePrinter(max_length=1, print_lone_header=False),
MultiListValue: MultiListValuePrinter(max_length=1, print_lone_header=False,
list_printer=ListValuePrinter(max_length=1))
})
class DownloadPrinter(Printer):
def __init__(self, output_directory: str, format_string='{directory}/{filename}'):
self.output_directory = output_directory
self.format_string = format_string
def __call__(self, value: Value) -> str:
if isinstance(value, SingleValue):
url = value.get_data()
elif isinstance(value, MultiValue):
url = value.get_data()[0][0].get_data()
elif isinstance(value, MultiListValue):
url = value.get_data()[0][0].get_data()[0].get_data()
else:
            raise CardBuilderUsageException('{} is not supported for printing by {}'.format(
                type(value).__name__, DownloadPrinter.__name__))
filename = url.split('/')[-1]
if not exists(self.output_directory):
mkdir(self.output_directory)
r = retry_with_logging(requests.get, tries=2, delay=1, fargs=[url])
with open(join(self.output_directory, filename), 'wb') as f:
f.write(r.content)
return self.format_string.format(directory=self.output_directory, filename=filename)
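
# Hypothetical composition sketch (construction of the Value objects is
# elided, since their constructors live in cardbuilder.lookup.value): a
# numbered, newline-joined list whose items use a customized SingleValuePrinter.
#
#   printer = ListValuePrinter(
#       single_value_printer=SingleValuePrinter('{value}'),
#       number_format_string='{number}. ',
#       join_string='\n',
#       max_length=5,
#   )
#   text = printer(some_list_value)  # some_list_value: a ListValue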
| 41.220588 | 121 | 0.633964 | 959 | 8,409 | 5.252346 | 0.153285 | 0.046456 | 0.023824 | 0.025412 | 0.305738 | 0.265436 | 0.223347 | 0.165972 | 0.156442 | 0.138972 | 0 | 0.00358 | 0.269235 | 8,409 | 203 | 122 | 41.423645 | 0.816111 | 0.011416 | 0 | 0.194805 | 0 | 0 | 0.046346 | 0.012038 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12987 | false | 0 | 0.058442 | 0.025974 | 0.331169 | 0.25974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b329631d1ae39b00c66566b6ebd6369fb6aa8d9a | 19,912 | py | Python | pysrc/map_info_rdr.py | CrackerCat/xed | 428712c28e831573579b7f749db63d3a58dcdbd9 | [
"Apache-2.0"
] | 1,261 | 2016-12-16T14:29:30.000Z | 2022-03-30T20:21:25.000Z | pysrc/map_info_rdr.py | CrackerCat/xed | 428712c28e831573579b7f749db63d3a58dcdbd9 | [
"Apache-2.0"
] | 190 | 2016-12-17T13:44:09.000Z | 2022-03-27T09:28:13.000Z | pysrc/map_info_rdr.py | CrackerCat/xed | 428712c28e831573579b7f749db63d3a58dcdbd9 | [
"Apache-2.0"
] | 155 | 2016-12-16T22:17:20.000Z | 2022-02-16T20:53:59.000Z | #!/usr/bin/env python
# -*- python -*-
#BEGIN_LEGAL
#
#Copyright (c) 2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
import sys
import os
import re
import shlex
import collections
import math
import genutil
import enumer
import enum_txt_writer
import codegen
# dict space name -> numerical space id
_space_id = {'legacy':0, 'vex':1, 'evex':2, 'xop':3, 'knc':4}
_space_id_to_name = {v: k for k,v in _space_id.items()}
# list ordered by numerical space id
_space_id_sorted = sorted(_space_id.keys(), key=lambda x: _space_id[x])
def _encoding_space_max():
return max(_space_id.values())
def _encoding_space_range():
#Could make this dynamic based on what spaces are enabled
return range(0, _encoding_space_max()+1)
def vexvalid_to_encoding_space(vv):
"""Input number, output string"""
return _space_id_sorted[vv]
def encoding_space_to_vexvalid(space):
"""Input string, output number"""
return _space_id[space]
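
# e.g. vexvalid_to_encoding_space(1) == 'vex' and
#      encoding_space_to_vexvalid('evex') == 2, per _space_id above.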
def _die(s):
genutil.die(s)
def _msgb(b,s=''):
genutil.msgerr("[{}] {}".format(b,s))
class map_info_t(object):
def __init__(self):
self.map_name = None
self.space = None # legacy, vex, evex, xop
self.legacy_escape = None # N/A or 0f
self.legacy_opcode = None # N/A or 38, 3a
self.map_id = None # N/A or 0,1,2,3,... 8,9,0xA
# "var" means variable, requires a table generated based on defined instructions
self.modrm = None # var,yes,no, has modrm
self.disp = None # var,yes,no, has disp
self.imm = None # var,0,1,2,4 (bytes) var=7
self.opcpos = None # 0,1,2, ... -1 (last) opcode position in pattern
self.priority = 10
# search_pattern is the string that we use to identify this
# map in the XED decode patterns. The pattern may have spaces
# in it. (and motivates using shlex to parse the input lines)
self.search_pattern = None
def is_legacy(self):
return self.space == 'legacy'
def is_vex(self):
return self.space == 'vex'
def is_evex(self):
return self.space == 'evex'
def is_xop(self):
return self.space == 'xop'
def map_short_name(self):
if self.map_name == 'amd_3dnow':
return 'AMD'
h = hex(self.map_id)[-1]
return str(h)
def ild_enum(self):
s = self.map_short_name()
        if self.space == 'xop':  # encoding space names are lowercase (see _space_id)
s = '_XOP{}'.format(s)
return 'XED_ILD_MAP{}'.format(s)
def get_legacy_escapes(self):
if self.legacy_opcode == 'N/A':
return (self.legacy_escape, None)
return (self.legacy_escape, self.legacy_opcode)
def has_variable_modrm(self):
return self.modrm == 'var'
def has_regular_modrm(self):
return self.modrm == 'yes'
def has_variable_disp(self):
return self.disp == 'var'
def has_variable_imm(self):
return self.imm == 'var'
def __str__(self):
s = []
s.append("name: {}".format(self.map_name))
s.append("space: {}".format(self.space))
s.append("legacyesc: {}".format(self.legacy_escape))
s.append("legacyopc: {}".format(self.legacy_opcode))
s.append("mapid: {}".format(self.map_id))
s.append("modrm: {}".format(self.modrm))
s.append("disp: {}".format(self.disp))
s.append("imm: {}".format(self.imm))
s.append("opcpos: {}".format(self.opcpos))
s.append("priority: {}".format(self.priority))
s.append("search_pattern: {}".format(self.search_pattern))
return " ".join(s)
_map_info_fields = ['map_name',
'space',
'legacy_escape',
'legacy_opcode',
'map_id',
'modrm',
'disp',
'imm',
'opcpos',
'search_pattern' ]
def _parse_map_line(s):
global _map_info_fields
# shlex allows for quoted substrings containing spaces as
# individual args.
t = shlex.split(s.strip())
if len(t) != len(_map_info_fields):
_die("Bad map description line: [{}]".format(s))
mi = map_info_t()
for i,fld in enumerate(_map_info_fields):
setattr(mi,fld,t[i])
# this gets used in function names so must only be legal characters
mi.map_name = re.sub('-', '_', mi.map_name)
if mi.space == 'legacy':
if mi.legacy_escape != 'N/A':
mi.legacy_escape_int = int(mi.legacy_escape,16)
if mi.legacy_opcode != 'N/A':
mi.legacy_opcode_int = int(mi.legacy_opcode,16)
else:
mi.legacy_opcode_int = None
mi.map_id_fixup=False
if mi.space not in ['legacy','vex','evex', 'xop','knc']:
_die("Bad map description encoding space [{}]".format(s))
if mi.space == 'legacy':
if genutil.is_hex(mi.legacy_escape):
pass
elif mi.legacy_escape != 'N/A':
_die("Bad map description legacy escape [{}]".format(s))
if genutil.is_hex(mi.legacy_opcode):
pass
elif mi.legacy_opcode != 'N/A':
_die("Bad map description legacy opcode [{}]".format(s))
if mi.map_id == 'N/A':
_die("Bad map description map-id [{}]".format(s))
elif genutil.numeric(mi.map_id):
mi.map_id = genutil.make_numeric(mi.map_id)
else:
mi.map_id_fixup=True
else:
if mi.legacy_escape != 'N/A':
_die("Bad map description legacy escape [{}]".format(s))
if mi.legacy_opcode != 'N/A':
_die("Bad map description legacy opcode [{}]".format(s))
if genutil.numeric(mi.map_id):
mi.map_id = genutil.make_numeric(mi.map_id)
else:
_die("Bad map description map id [{}]".format(s))
if mi.disp not in ['var','no']:
_die("Bad map description disp specifier [{}]".format(s))
if mi.modrm not in ['var','yes','no']:
_die("Bad map description modrm specifier [{}]".format(s))
if mi.imm not in ['var','0','1','2','4']:
_die("Bad map description imm specifier [{}]".format(s))
if genutil.numeric(mi.opcpos):
mi.opcpos = genutil.make_numeric(mi.opcpos)
else:
_die("Bad map description opcode position specifier [{}]".format(s))
# we want the longer patterns first when we sort the map_info_t.
mi.priority = 100-len(mi.search_pattern)
return mi
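# Illustrative parse (hypothetical map-description line; fields follow the
# _map_info_fields order):
#   mi = _parse_map_line('map2 legacy 0f 38 2 var var var 2 "0x0F 0x38"')
# yields legacy_escape_int=0x0f, legacy_opcode_int=0x38, map_id=2, and a
# priority of 100 minus the search-pattern length (longer patterns sort first).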
def emit_enums(agi):
emit_ild_enum_dups(agi) # XED_ILD_*
emit_ild_enum_unique(agi) # XED_MAPU_*
file_list = emit_map_info_tables(agi)
agi.hdr_files.extend(file_list)
def emit_map_info_tables(agi):
'''variable modrm,disp,imm tables, per encoding space using natural
map ids. returns list of files generated'''
map_features_cfn = 'xed-map-feature-tables.c'
map_features_hfn = 'xed-map-feature-tables.h'
private_gendir = os.path.join(agi.common.options.gendir,'include-private')
hfe = codegen.xed_file_emitter_t(agi.common.options.xeddir,
private_gendir,
map_features_hfn)
for h in [ 'xed-map-info.h' ]:
hfe.add_header(h)
hfe.start()
sorted_list = sorted(agi.map_info, key=lambda x: x.map_name)
spaces = list(set([ mi.space for mi in sorted_list ]))
sorted_spaces = sorted(spaces, key=lambda x: encoding_space_to_vexvalid(x))
max_space_id = _encoding_space_max() # legacy,vex,evex,xop,knc
#max_space_id = encoding_space_to_vexvalid(sorted_spaces[-1])
max_map_id = max([mi.map_id for mi in agi.map_info]) #0...31
fields = ['modrm', 'disp', 'imm']
cvt_yes_no_var = { 'yes':1, 'no':0, 'var':2 }
cvt_imm = { '0':0, '1':1, '2':2, '4':4, 'var':7 }
field_to_cvt = { 'modrm': cvt_yes_no_var,
'disp' : cvt_yes_no_var,
'imm' : cvt_imm }
bits_per_chunk = 64
# The field width in bits must be a power of 2 for current design,
# otherwise the bits of interest can span the 64b chunks we are
# using to store the values.
field_to_bits = { 'modrm': 2,
'disp' : 2,
'imm' : 4 }
def collect_codes(field, space_maps):
'''cvt is dict converting strings to integers. the codes are indexed by map id.'''
cvt = field_to_cvt[field]
codes = { key:0 for key in range(0,max_map_id+1) }
for mi in space_maps:
codes[mi.map_id] = cvt[getattr(mi,field)]
codes_as_list = [ codes[i] for i in range(0,max_map_id+1) ]
return codes_as_list
def convert_list_to_integer(lst, bits_per_field):
        '''return an integer, or a list of integers if more than 64 bits are needed'''
integers = []
tot = 0
shift = 0
for v in lst:
if shift >= 64:
integers.append(tot)
tot = 0
shift = 0
tot = tot + (v << shift)
shift = shift + bits_per_field
integers.append(tot)
if len(integers) == 1:
return integers[0]
return integers
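    # Worked example (illustrative): convert_list_to_integer([1, 2, 3], 2)
    # packs the codes little-end-first: 1 + (2 << 2) + (3 << 4) = 0x39, so
    # code i is later recovered with (constant >> (2*i)) & 0x3.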
for space_id in _encoding_space_range():
space = _space_id_to_name[space_id]
space_maps = [ mi for mi in sorted_list if mi.space == space ]
for field in fields:
bits_per_field = field_to_bits[field]
            total_bits = (max_map_id+1) * bits_per_field  # ids run 0..max_map_id inclusive
required_chunks = math.ceil(total_bits / bits_per_chunk)
values_per_chunk = bits_per_chunk // bits_per_field
ilog2_values_per_chunk = int(math.log2(values_per_chunk))
mask = (1<<bits_per_field)-1
f = codegen.function_object_t('xed_ild_has_{}_{}'.format(field,space),
'xed_bool_t',
static=True, inline=True)
f.add_arg('xed_uint_t m')
if space_maps:
codes = collect_codes(field, space_maps)
constant = convert_list_to_integer(codes,bits_per_field)
else:
codes = [0]
constant = 0
f.add_code('/* {} */'.format(codes))
if set(codes) == {0}: # all zero values...
f.add_code_eol('return 0')
f.add_code_eol('(void)m')
else:
if required_chunks <= 1:
f.add_code_eol('const xed_uint64_t data_const = 0x{:x}ULL'.format(constant))
f.add_code_eol('return (xed_bool_t)((data_const >> ({}*m)) & {})'.format(
bits_per_field, mask))
else:
f.add_code('const xed_uint64_t data_const[{}] = {{'.format(required_chunks))
ln = ['0x{:x}ULL'.format(c) for c in constant]
f.add_code_eol(' {} }}'.format(", ".join(ln)))
f.add_code_eol('const xed_uint64_t chunkno = m >> {}'.format(ilog2_values_per_chunk))
f.add_code_eol('const xed_uint64_t offset = m & ({}-1)'.format(values_per_chunk))
f.add_code_eol('return (xed_bool_t)((data_const[chunkno] >> ({}*offset)) & {})'.format(
bits_per_field, mask))
hfe.write(f.emit()) # emit the inline function in the header
# emit a function that covers all spaces
for field in fields:
bits_per_field = field_to_bits[field]
        total_bits = (max_map_id+1) * bits_per_field  # ids run 0..max_map_id inclusive
required_chunks = math.ceil(total_bits / bits_per_chunk)
values_per_chunk = bits_per_chunk // bits_per_field
ilog2_values_per_chunk = int(math.log2(values_per_chunk))
mask = (1<<bits_per_field)-1
f = codegen.function_object_t('xed_ild_has_{}'.format(field),
'xed_bool_t',
static=True, inline=True)
f.add_arg('xed_uint_t vv')
f.add_arg('xed_uint_t m')
if required_chunks <= 1:
f.add_code('const xed_uint64_t data_const[{}] = {{'.format(max_space_id+1))
else:
f.add_code('const xed_uint64_t data_const[{}][{}] = {{'.format(max_space_id+1,
required_chunks))
for space_id in _encoding_space_range():
space = _space_id_to_name[space_id]
space_maps = [ mi for mi in sorted_list if mi.space == space ]
if space_maps:
codes = collect_codes(field, space_maps)
constant = convert_list_to_integer(codes,bits_per_field)
else:
codes = [0]*required_chunks
if required_chunks <= 1:
constant = 0
else:
constant = [0]*required_chunks
f.add_code('/* {} {} */'.format(codes,space))
if required_chunks <= 1:
f.add_code(' 0x{:x}ULL,'.format(constant))
else:
ln = ['0x{:x}ULL'.format(c) for c in constant]
f.add_code('{{ {} }},'.format(", ".join(ln)))
f.add_code_eol('}')
f.add_code_eol('xed_assert(vv < {})'.format(max_space_id+1))
if required_chunks <= 1:
f.add_code_eol('return (xed_bool_t)((data_const[vv] >> ({}*m)) & {})'.format(bits_per_field,
mask))
else:
f.add_code_eol('const xed_uint64_t chunkno = m >> {}'.format(ilog2_values_per_chunk))
f.add_code_eol('const xed_uint64_t offset = m & ({}-1)'.format(values_per_chunk))
f.add_code_eol('return (xed_bool_t)((data_const[vv][chunkno] >> ({}*offset)) & {})'.format(
bits_per_field, mask))
hfe.write(f.emit()) # emit the inline function in the header
# emit a set of functions for determining the valid maps in each encoding space
if max_map_id > 64:
genutil.die("Need to make this work with multiple chunks of u64")
for space_id in _encoding_space_range():
space = _space_id_to_name[space_id]
space_maps = [ mi for mi in sorted_list if mi.space == space ]
f = codegen.function_object_t('xed_ild_map_valid_{}'.format(space),
'xed_bool_t',
static=True, inline=True)
f.add_arg('xed_uint_t m')
max_id = _encoding_space_max()
#max_id = max( [mi.map_id for mi in space_maps ] )
codes_dict = { key:0 for key in range(0,max_map_id+1) }
for mi in space_maps:
codes_dict[mi.map_id] = 1
codes = [ codes_dict[i] for i in range(0,max_map_id+1) ]
f.add_code('/* {} */'.format(codes))
constant = convert_list_to_integer(codes,1)
f.add_code_eol('const xed_uint64_t data_const = 0x{:x}ULL'.format(constant))
        # no need for a max-map test since the upper bits of the
        # constant will be zero already
f.add_code_eol('return (xed_bool_t)((data_const >> m) & 1)')
hfe.write(f.emit()) # emit the inline function in the header
# emit a table filling in "xed_map_info_t xed_legacy_maps[] = { ... }"
legacy_maps = [ mi for mi in sorted_list if mi.space == 'legacy' ]
legacy_maps = sorted(legacy_maps,
key=lambda x: -len(x.search_pattern) * 10 + x.map_id)
hfe.add_code('const xed_map_info_t xed_legacy_maps[] = {')
for mi in legacy_maps:
if mi.map_id == 0:
continue
has_legacy_opcode = 1 if mi.legacy_opcode != 'N/A' else 0
legacy_opcode = mi.legacy_opcode if mi.legacy_opcode != 'N/A' else 0
legacy_escape = mi.legacy_escape if mi.legacy_escape != 'N/A' else 0
hfe.add_code('{{ {}, {}, {}, {}, {} }},'.format(legacy_escape,
has_legacy_opcode,
legacy_opcode,
mi.map_id,
mi.opcpos))
hfe.add_code_eol('}')
hfe.close()
return [hfe.full_file_name]
def emit_ild_enum_unique(agi):
"""modify map_info_t values to include mapu enum name so that we can
build other arrays for the C-code based on that unique enum"""
sorted_list = sorted(agi.map_info, key=lambda x: x.map_name)
evalues = ['INVALID']
for mi in sorted_list:
s = mi.map_name.upper()
evalues.append(s)
mi.mapu_name = 'XED_MAPU_{}'.format(s)
enum = enum_txt_writer.enum_info_t(evalues,
agi.common.options.xeddir,
agi.common.options.gendir,
'xed-mapu',
'xed_mapu_enum_t',
'XED_MAPU_',
cplusplus=False)
enum.run_enumer()
agi.add_file_name(enum.src_full_file_name)
agi.add_file_name(enum.hdr_full_file_name, header=True)
agi.all_enums['xed_mapu_enum_t'] = evalues
def emit_ild_enum_dups(agi):
evalues = []
sorted_list = sorted(agi.map_info, key=lambda x: x.map_name)
for mi in sorted_list:
val = None
if isinstance(mi.map_id,int):
val = str(mi.map_id)
e = enumer.enumer_value_t(mi.map_name.upper(), val)
evalues.append(e)
evalues.append('MAP_INVALID')
enum = enum_txt_writer.enum_info_t(evalues,
agi.common.options.xeddir,
agi.common.options.gendir,
'xed-ild',
'xed_ild_map_enum_t',
'XED_ILD_',
cplusplus=False)
enum.run_enumer()
agi.add_file_name(enum.src_full_file_name)
agi.add_file_name(enum.hdr_full_file_name, header=True)
agi.all_enums['xed_ild_map_enum_t'] = evalues
def fix_nonnumeric_maps(maps):
d = collections.defaultdict(list)
for mi in maps:
if not mi.map_id_fixup:
d[mi.space].append(mi.map_id)
mx = {} # max per key
for k in d.keys():
mx[k] = max(d[k])
for mi in maps:
if mi.map_id_fixup:
maxval = mx[mi.space] + 1
mi.map_id = maxval
mx[mi.space] = maxval
mi.map_id_fixup = False
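# Example (illustrative): if the legacy maps carry numeric ids {0, 1, 2} and one
# entry is flagged map_id_fixup, it is assigned id 3 and the per-space maximum
# is bumped, so further fixups in the same space receive 4, 5, ...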
def read_file(fn):
    with open(fn, 'r') as f:  # use a context manager so the file handle is closed
        lines = f.readlines()
lines = map(genutil.no_comments, lines)
lines = list(filter(genutil.blank_line, lines))
maps = [] # list of map_info_t
for line in lines:
maps.append( _parse_map_line(line) )
fix_nonnumeric_maps(maps)
maps.sort(key=lambda x: x.priority)
#for m in maps:
# _msgb("MAPINFO",m)
return maps
if __name__ == "__main__":
read_file(sys.argv[1])
sys.exit(0)
| 38.072658 | 107 | 0.558206 | 2,675 | 19,912 | 3.903925 | 0.144299 | 0.018194 | 0.018386 | 0.016853 | 0.430911 | 0.356986 | 0.341856 | 0.325194 | 0.310543 | 0.304989 | 0 | 0.011977 | 0.32493 | 19,912 | 522 | 108 | 38.145594 | 0.764916 | 0.131227 | 0 | 0.30829 | 0 | 0 | 0.120042 | 0.011283 | 0 | 0 | 0 | 0 | 0.002591 | 1 | 0.072539 | false | 0.005181 | 0.025907 | 0.025907 | 0.163212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b32c8ac533a9595c7935a15d0c1ab0a2d7c89655 | 2,271 | py | Python | specs/monitor/events_v1_spec.py | marojor/sysdig-sdk-python | c554f9f747ff88ce075b50a50d81f75aff327bb1 | [
"MIT"
] | 45 | 2016-04-11T16:50:15.000Z | 2020-07-11T23:37:51.000Z | specs/monitor/events_v1_spec.py | marojor/sysdig-sdk-python | c554f9f747ff88ce075b50a50d81f75aff327bb1 | [
"MIT"
] | 74 | 2016-08-09T17:10:55.000Z | 2020-07-09T08:36:16.000Z | specs/monitor/events_v1_spec.py | marojor/sysdig-sdk-python | c554f9f747ff88ce075b50a50d81f75aff327bb1 | [
"MIT"
] | 39 | 2016-04-20T17:22:23.000Z | 2020-07-08T17:25:52.000Z | import os
import time
from expects import expect, have_key, contain, have_keys, be_empty, equal
from mamba import it, before, description
from sdcclient.monitor import EventsClientV1
from specs import be_successful_api_call
with description("Events v1", "integration") as self:
with before.all:
self.client = EventsClientV1(sdc_url=os.getenv("SDC_MONITOR_URL", "https://app.sysdigcloud.com"),
token=os.getenv("SDC_MONITOR_TOKEN"))
self.event_name = "event_v1_test_ci"
with it("is able to create a custom event"):
call = self.client.post_event(name=self.event_name,
description="This event was created in a CI pipeline for the Python SDK library")
expect(call).to(be_successful_api_call)
with it("is able to retrieve an event by ID"):
ok, res = self.client.post_event(name=self.event_name,
description="This event was created in a CI pipeline for the Python SDK library")
expect((ok, res)).to(be_successful_api_call)
event = res["event"]
event_id = event["id"]
ok, res = self.client.get_event(id=event_id)
expect((ok, res)).to(be_successful_api_call)
expect(res["event"]).to(equal(event))
with it("is able to list the events happened without any filter"):
time.sleep(3) # Wait for the event to appear in the feed
ok, res = self.client.get_events()
expect((ok, res)).to(be_successful_api_call)
expect(res).to(have_key("events"))
with it("is able to list the events created by the tests"):
time.sleep(3) # Wait for the event to appear in the feed
ok, res = self.client.get_events(last_s=60)
expect((ok, res)).to(be_successful_api_call)
expect(res).to(have_key("events", contain(have_keys(name=self.event_name))))
with it("is able to remove the event from the feed"):
time.sleep(3)
_, res = self.client.get_events(last_s=60)
events = [event for event in res["events"] if event["name"] == self.event_name]
expect(events).to_not(be_empty)
call = self.client.delete_event(events[0])
expect(call).to(be_successful_api_call)
| 41.290909 | 122 | 0.647292 | 335 | 2,271 | 4.226866 | 0.259701 | 0.056497 | 0.074153 | 0.093927 | 0.554379 | 0.47387 | 0.47387 | 0.434322 | 0.352401 | 0.352401 | 0 | 0.006989 | 0.243945 | 2,271 | 54 | 123 | 42.055556 | 0.817705 | 0.035667 | 0 | 0.268293 | 0 | 0 | 0.214449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.146341 | 0 | 0.146341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b32ce48bfac67262240dc37403bd03bc3776e361 | 322 | py | Python | app/cryptprice/exceptions/app_exceptions.py | MihailButnaru/cryptprice | b9636dbb0fb88d345fc53a9e8ec0554a888e7302 | [
"MIT"
] | 2 | 2020-07-04T23:02:14.000Z | 2021-02-04T21:21:14.000Z | app/cryptprice/exceptions/app_exceptions.py | MihailButnaru/cryptprice | b9636dbb0fb88d345fc53a9e8ec0554a888e7302 | [
"MIT"
] | 4 | 2021-03-30T13:50:34.000Z | 2021-09-22T19:22:57.000Z | app/cryptprice/exceptions/app_exceptions.py | MihailButnaru/cryptprice | b9636dbb0fb88d345fc53a9e8ec0554a888e7302 | [
"MIT"
] | null | null | null | from rest_framework.exceptions import APIException
__author__ = "Mihail Butnaru"
__copyright__ = "Copyright 2020, All rights reserved."
class APPServerError(APIException):
status_code = 500
default_detail = "Service temporarily unavailable, contact the administrator"
default_code = "internal_server_error"
| 29.272727 | 81 | 0.795031 | 34 | 322 | 7.117647 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025362 | 0.142857 | 322 | 10 | 82 | 32.2 | 0.851449 | 0 | 0 | 0 | 0 | 0 | 0.400621 | 0.065217 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3306de4e386f43f3bd60bacfbc628514aa6597c | 2,683 | py | Python | python/Data Structures/Arrays/maximumRange.py | sinderpl/CodingExamples | 9bc59a0345589bf51fc74fe9ad527e9498b9b5c9 | [
"MIT"
] | null | null | null | python/Data Structures/Arrays/maximumRange.py | sinderpl/CodingExamples | 9bc59a0345589bf51fc74fe9ad527e9498b9b5c9 | [
"MIT"
] | null | null | null | python/Data Structures/Arrays/maximumRange.py | sinderpl/CodingExamples | 9bc59a0345589bf51fc74fe9ad527e9498b9b5c9 | [
"MIT"
] | null | null | null | class Solution:
def maximumPopulation(self, logs: List[List[int]]) -> int:
years_mapped = dict()
        # Map every year of each person's lifespan = O(n * (death - birth))
        # First bottleneck: every year in every lifespan has to be visited
        # Second bottleneck: extra storage is needed for every distinct year seen
for years in logs:
currYear = years[0]
while currYear < years[1]:
years_mapped[currYear] = years_mapped.get(currYear,0) + 1
currYear += 1
max_year = 0
max_year_count = 0
for year, count in years_mapped.items():
if count > max_year_count:
max_year = year
max_year_count = count
elif count == max_year_count and year < max_year:
max_year = year
max_year_count = count
        # Overall: O(n * (death - birth)) time
return max_year
#Replacing the hashmap with a flat list is not really that much more efficient
#The runtime was slightly faster, most likely due to speedier indexed lookup than a dictionary
# Not much of a space improvement either
class Solution:
def maximumPopulation(self, logs: List[List[int]]) -> int:
years_mapped = [0] * 100
# Map each person = O(n)
for years in logs:
currYear = years[0]
            # O(n * (death - birth)) for this inner loop
while currYear < years[1]:
years_mapped[currYear - 1950] += 1
currYear += 1
max_year = 0
max_year_count = 0
for year, count in enumerate(years_mapped):
if count > max_year_count:
max_year = year
max_year_count = count
elif count == max_year_count and year < max_year:
max_year = year
max_year_count = count
return 1950 + max_year
class Solution:
def maximumPopulation(self, logs: List[List[int]]) -> int:
years_mapped = [0] * 101
constant_diff = 1950
# Map each person = O(n)
# here a running tally is utilised for the total pop count
for years in logs:
years_mapped[years[0] - constant_diff] += 1
years_mapped[years[1] - constant_diff] -= 1
max_year = 0
running_pop = 0
max_sum = 0
for year, count in enumerate(years_mapped):
running_pop += count
if running_pop > max_sum:
max_sum = running_pop
max_year = year
return max_year + constant_diff
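# Worked example (illustrative) for the difference-array solution above:
#   Solution().maximumPopulation([[1950, 1961], [1960, 1971], [1970, 1981]])
# adds +1 at each birth index and -1 at each death index; the running tally
# peaks at 2 when the sweep reaches 1960, so the call returns 1960.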
| 33.123457 | 94 | 0.542303 | 324 | 2,683 | 4.317901 | 0.277778 | 0.115082 | 0.085776 | 0.048606 | 0.530379 | 0.498213 | 0.498213 | 0.40386 | 0.364546 | 0.364546 | 0 | 0.024465 | 0.390608 | 2,683 | 81 | 95 | 33.123457 | 0.831193 | 0.191577 | 0 | 0.660377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056604 | false | 0 | 0 | 0 | 0.169811 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b330802951dfea9ae95601d2e89610bb4ee484ba | 5,027 | py | Python | vision_transformer/Vit.py | ximingxing/Deep-Learning-in-Action | 38d5d3d6990553ff9d3ea771d8e83f8b47241b9a | [
"MIT"
] | 1 | 2020-09-16T09:17:37.000Z | 2020-09-16T09:17:37.000Z | vision_transformer/Vit.py | ximingxing/Deep-Learning-in-Action | 38d5d3d6990553ff9d3ea771d8e83f8b47241b9a | [
"MIT"
] | 1 | 2021-05-13T05:20:07.000Z | 2021-05-13T05:20:07.000Z | vision_transformer/Vit.py | ximingxing/Deep-Learning-in-Action | 38d5d3d6990553ff9d3ea771d8e83f8b47241b9a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Project : Deep-Learning-in-Action
File : Vit.py
Description :
       Paper : An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
       Link  : https://arxiv.org/abs/2010.11929
Author : xingximing.xxm
Date : 2021/7/11 11:36 AM
"""
import torch
from torch import nn, einsum
import torch.nn.functional as F
from einops import rearrange, repeat
from einops.layers.torch import Rearrange
def pair(t):
return t if isinstance(t, tuple) else (t, t)
class PreNorm(nn.Module):
def __init__(self, dim, fn):
super().__init__()
self.norm = nn.LayerNorm(dim)
self.fn = fn
def forward(self, x, **kwargs):
return self.fn(self.norm(x), **kwargs)
class FeedForward(nn.Module):
def __init__(self, dim, hidden_dim, dropout=0.):
super().__init__()
self.net = nn.Sequential(
nn.Linear(dim, hidden_dim),
nn.GELU(),
nn.Dropout(dropout),
nn.Linear(hidden_dim, dim),
nn.Dropout(dropout)
)
def forward(self, x):
return self.net(x)
class Attention(nn.Module):
def __init__(self, dim, heads=8, dim_head=64, dropout=0.):
super().__init__()
inner_dim = dim_head * heads
project_out = not (heads == 1 and dim_head == dim)
self.heads = heads
self.scale = dim_head ** -0.5
self.attend = nn.Softmax(dim=-1)
self.to_qkv = nn.Linear(dim, inner_dim * 3, bias=False)
self.to_out = nn.Sequential(
nn.Linear(inner_dim, dim),
nn.Dropout(dropout)
) if project_out else nn.Identity()
def forward(self, x):
b, n, _, h = *x.shape, self.heads
qkv = self.to_qkv(x).chunk(3, dim=-1)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), qkv)
dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale
attn = self.attend(dots)
out = einsum('b h i j, b h j d -> b h i d', attn, v)
out = rearrange(out, 'b h n d -> b n (h d)')
return self.to_out(out)
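# Shape walk-through for Attention.forward (illustrative): with x of shape
# (b, n, dim), to_qkv yields three (b, n, heads*dim_head) tensors, rearranged
# to (b, heads, n, dim_head); dots is (b, heads, n, n); out is rearranged back
# to (b, n, heads*dim_head) before the final output projection.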
class Transformer(nn.Module):
def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout=0.):
super().__init__()
self.layers = nn.ModuleList([])
for _ in range(depth):
self.layers.append(nn.ModuleList([
PreNorm(dim, Attention(dim, heads=heads, dim_head=dim_head, dropout=dropout)),
PreNorm(dim, FeedForward(dim, mlp_dim, dropout=dropout))
]))
def forward(self, x):
for attn, ff in self.layers:
x = attn(x) + x
x = ff(x) + x
return x
class ViT(nn.Module):
def __init__(self, *, image_size, patch_size, num_classes, dim, depth, heads, mlp_dim, pool='cls', channels=3,
dim_head=64, dropout=0., emb_dropout=0.):
super().__init__()
image_height, image_width = pair(image_size)
patch_height, patch_width = pair(patch_size)
assert image_height % patch_height == 0 and image_width % patch_width == 0, 'Image dimensions must be divisible by the patch size.'
num_patches = (image_height // patch_height) * (image_width // patch_width)
patch_dim = channels * patch_height * patch_width
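        # e.g., with image_size=256, patch_size=32, channels=3 (as in __main__
        # below): num_patches = (256//32)**2 = 64 and patch_dim = 3*32*32 = 3072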
assert pool in {'cls', 'mean'}, 'pool type must be either cls (cls token) or mean (mean pooling)'
        # Turn the image into a sequence of image patches
self.to_patch_embedding = nn.Sequential(
# C x H x W -> N x (P x P x C), where N = (H x W) / p^2
Rearrange('b c (h p1) (w p2) -> b (h w) (p1 p2 c)', p1=patch_height, p2=patch_width),
            # The Linear layer takes inputs of size P x P x C and outputs dimension D
nn.Linear(patch_dim, dim),
)
self.pos_embedding = nn.Parameter(torch.randn(1, num_patches + 1, dim))
self.cls_token = nn.Parameter(torch.randn(1, 1, dim))
self.dropout = nn.Dropout(emb_dropout)
self.transformer = Transformer(dim, depth, heads, dim_head, mlp_dim, dropout)
self.pool = pool
self.to_latent = nn.Identity()
self.mlp_head = nn.Sequential(
nn.LayerNorm(dim),
nn.Linear(dim, num_classes)
)
def forward(self, img):
x = self.to_patch_embedding(img)
b, n, _ = x.shape
cls_tokens = repeat(self.cls_token, '() n d -> b n d', b=b)
x = torch.cat((cls_tokens, x), dim=1)
x += self.pos_embedding[:, :(n + 1)]
x = self.dropout(x)
x = self.transformer(x)
x = x.mean(dim=1) if self.pool == 'mean' else x[:, 0]
x = self.to_latent(x)
return self.mlp_head(x)
if __name__ == '__main__':
v = ViT(
image_size=256,
patch_size=32,
num_classes=1000,
dim=1024,
depth=6,
heads=16,
mlp_dim=2048,
dropout=0.1,
emb_dropout=0.1
)
img = torch.randn(1, 3, 256, 256)
preds = v(img)
print(preds.shape) # torch.Size([1, 1000])
| 30.652439 | 139 | 0.571912 | 734 | 5,027 | 3.749319 | 0.239782 | 0.022892 | 0.019985 | 0.027253 | 0.144622 | 0.09484 | 0.02907 | 0.023983 | 0 | 0 | 0 | 0.027417 | 0.296201 | 5,027 | 163 | 140 | 30.840491 | 0.750424 | 0.083947 | 0 | 0.089286 | 0 | 0.008929 | 0.062336 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 1 | 0.098214 | false | 0 | 0.044643 | 0.026786 | 0.241071 | 0.008929 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33142a09cea192c1496daaf28a083bed7fd400a | 3,518 | py | Python | sheets.py | caleby117/xybot | 6dcc6ac99ef05ae7791c6c29c1f2f2c4242fb647 | [
"Apache-2.0"
] | null | null | null | sheets.py | caleby117/xybot | 6dcc6ac99ef05ae7791c6c29c1f2f2c4242fb647 | [
"Apache-2.0"
] | null | null | null | sheets.py | caleby117/xybot | 6dcc6ac99ef05ae7791c6c29c1f2f2c4242fb647 | [
"Apache-2.0"
] | null | null | null | import gspread
import time
gc = gspread.service_account(filename='creds.json')
sheetKey = "<insert sheet key here>"
wb = gc.open_by_key(sheetKey)
localtime = time.asctime()
newSheetName = f'XY on {localtime}'
worksheet = wb.sheet1
scoreSheet = worksheet.duplicate(insert_sheet_index=1, new_sheet_name=newSheetName)
#make sure you share the spreadsheet with admin first
#Using the template, define the cells for each round and team. Returns a dictionary with the aXbY cells as well
def init_scoreboard(no_teams):
if no_teams < 12:
scoreSheet.delete_columns(start_index=no_teams+2, end_index=13)
# Reference to where the cells are eg round 1 team 1 card cell
class operator_cells():
def __init__(self, no_teams):
self.no_teams = no_teams
def readcombi(self, round_no):
coord = self.aXbY_cell(round_no)
return scoreSheet.cell(coord['r'], coord['c']).value
def aXbY_cell(self, round_no):
col = self.no_teams + 2
row = round_no*2 + 2
return {'r': row, 'c':col}
def create_scoring_matrix(self):
colOffset = 17
'''
Updating multiple cells at once requires new list for a new col - gspread interprets
it as list[r][c]=rowcol.value
Example:
We want to update A1:A10
sheets.update('A1:A10', [[a1val], [a2val], [a3val]...])
'''
xMatrix = [[0]]
for gain in range(1, self.no_teams):
no_y = self.no_teams - gain
xMatrix.append([no_y*10])
xMatrix.append([-10])
yMatrix = []
for gain in range(len(xMatrix)):
yMatrix.append([xMatrix[gain][0]*-1])
print(yMatrix)
print(xMatrix)
x_col = self.no_teams + 6
y_col = self.no_teams + 7
xMatrix.reverse()
xRange = scoring_matrix_range(x_col, xMatrix)
yRange = scoring_matrix_range(y_col, yMatrix)
scoreSheet.update(f'{xRange}', xMatrix)
scoreSheet.update(f'{yRange}', yMatrix)
aXbY_col = self.no_teams + 5
aXbY = aXbY_list(self.no_teams)
aXbYRange = scoring_matrix_range(aXbY_col, aXbY)
scoreSheet.update(f'{aXbYRange}', aXbY)
return {'X': xMatrix, 'Y': yMatrix}
"""
Object called team; syntax team(<team_no>(int))
team's cells: team1.<cellType>(<roundno>(int))
"""
class team():
def __init__(self, team_no):
self.team_no = team_no
def __repr__(self):
return f'Team {self.team_no}'
def writecard(self, card, round_no):
cell = self.card_cell(round_no)
scoreSheet.update_cell(cell['r'], cell['c'], card)
def readcard(self, round_no):
cell = self.card_cell(round_no)
return scoreSheet.cell(cell['r'], cell['c']).value
def readscore(self, round_no):
cell = self.score_cell(round_no)
return scoreSheet.cell(cell['r'], cell['c']).value
def card_cell(self, round_no):
rowOffset = 2
row = round_no*2 + rowOffset
col = self.team_no + 1
cell = {'r': row,
'c' : col
}
return cell
def score_cell(self, round_no):
rowOffset = 3
row = round_no*2 + rowOffset
col = self.team_no + 1
cell = {'r': row, 'c': col}
return cell
def aXbY_list(no_teams):
aXbY_desc = []
for n in range(no_teams+1):
aXbY_desc.append([f'{no_teams-n}X {n}Y'])
return aXbY_desc
def scoring_matrix_range(col, matrix):
start = f'{colnum_string(col)}4'
end = f'{colnum_string(col)}{len(matrix)+3}'
return(f'{start}:{end}')
def colnum_string(n):
string = ""
while n > 0:
n, remainder = divmod(n - 1, 26)
string = chr(65 + remainder) + string
return string
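# Example (illustrative): colnum_string(1) == 'A', colnum_string(26) == 'Z',
# colnum_string(27) == 'AA', colnum_string(28) == 'AB'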
def initialise_sheet(no_teams):
init_scoreboard(no_teams)
op = operator_cells(no_teams)
matrix = op.create_scoring_matrix()
sheetid = scoreSheet.id
return op, matrix, sheetid
| 21.716049 | 92 | 0.688459 | 548 | 3,518 | 4.231752 | 0.293796 | 0.057352 | 0.042691 | 0.024148 | 0.169901 | 0.127641 | 0.114273 | 0.114273 | 0.093144 | 0.093144 | 0 | 0.017188 | 0.17311 | 3,518 | 161 | 93 | 21.850932 | 0.779993 | 0.057987 | 0 | 0.105263 | 0 | 0 | 0.066421 | 0.018786 | 0 | 0 | 0 | 0 | 0 | 1 | 0.168421 | false | 0 | 0.021053 | 0.010526 | 0.326316 | 0.021053 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33207e52f5d4757fcc591f2721a05242c76cc96 | 3,538 | py | Python | rgn/preprocessing.py | alifkurniawan/tesis | 6330dba32f5dc12785e956875c94d83344d788a8 | [
"MIT"
] | null | null | null | rgn/preprocessing.py | alifkurniawan/tesis | 6330dba32f5dc12785e956875c94d83344d788a8 | [
"MIT"
] | 3 | 2022-01-13T03:13:37.000Z | 2022-03-12T00:48:18.000Z | rgn/preprocessing.py | alifkurniawan/tesis | 6330dba32f5dc12785e956875c94d83344d788a8 | [
"MIT"
] | null | null | null | import glob
import os
import numpy as np
from io import BytesIO
from tqdm import tqdm
import bcolz
import platform
MAX_SEQUENCE_LENGTH = 700
def filter_input_files(input_files):
disallowed_file_endings = (".gitignore", ".DS_Store")
return list(filter(lambda x: not x.endswith(disallowed_file_endings), input_files))
input_files = glob.glob("../data/raw/*")
input_files_filtered = filter_input_files(input_files)
for file_path in input_files_filtered:
if platform.system() == 'Windows':
filename = file_path.split('\\')[-1]
else:
filename = file_path.split('/')[-1]
preprocessed_file_name = "../data/bcolz/" + filename + ".bc"
ids = []
seqs = []
evs = []
coords = []
masks = ['init', '/n']
id_next, pri_next, ev_next, ter_next, msk_next = False, False, False, False, False
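    # State machine over the ProteinNet-style record markers: each *_next flag
    # says which section the current line belongs to. A new [SECTION] marker
    # line is appended to the previous section's list before it is detected, so
    # the pop() calls below discard it again (the [ID] case pops twice, which
    # also drops the blank line separating records).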
with open('../data/raw/' + filename) as fp:
for line in tqdm(iter(fp.readline, '')):
if id_next:
ids.append(line[:-1])
elif pri_next:
seqs.append(line[:-1])
elif ev_next:
evs.append(np.genfromtxt(BytesIO(line.encode())))
elif ter_next:
coords.append(np.genfromtxt(BytesIO(line.encode())))
elif msk_next:
masks.append(line[:-1])
if np.core.defchararray.find(line, "[ID]", end=5) != -1:
id_next = True
masks.pop()
masks.pop()
pri_next, ev_next, ter_next, msk_next = False, False, False, False
elif np.core.defchararray.find(line, "[PRIMARY]", end=10) != -1:
pri_next = True
ids.pop()
id_next, ev_next, ter_next, msk_next = False, False, False, False
elif np.core.defchararray.find(line, "[EVOLUTIONARY]", end=15) != -1:
ev_next = True
seqs.pop()
id_next, pri_next, ter_next, msk_next = False, False, False, False
elif np.core.defchararray.find(line, "[TERTIARY]", end=11) != -1:
ter_next = True
evs.pop()
id_next, pri_next, ev_next, msk_next = False, False, False, False
elif np.core.defchararray.find(line, "[MASK]", end=7) != -1:
msk_next = True
coords.pop()
id_next, pri_next, ev_next, ter_next = False, False, False, False
pssm = evs
xyz = coords
# loop through each evolutionary section
for i in range(len(ids)):
# first store the id and sequence
id = ids[i]
seq = seqs[i]
# next get the PSSM matrix for the sequence
sp = 21 * i
ep = 21 * (i + 1)
psi = np.array(pssm[sp:ep])
pssmi = np.stack([p for p in psi], axis=1)
# then get the coordinates
sx = 3 * i
ex = 3 * (i + 1)
xi = np.array(xyz[sx:ex])
xyzi = np.stack([c for c in xi], axis=1) / 100 # have to scale by 100 to match PDB
# lastly convert the mask to indices
msk_idx = np.where(np.array(list(masks[i])) == '+')[0]
# bracket id or get "setting an array element with a sequence"
zt = np.array([[id], seq, pssmi, xyzi, msk_idx])
if i == 0:
bc = bcolz.carray([zt], rootdir=preprocessed_file_name, mode='w', expectedlen=len(ids))
bc.flush()
else:
            bc = bcolz.carray(rootdir=preprocessed_file_name, mode='a')  # open existing carray for append; mode='w' would truncate it
bc.append([zt])
bc.flush()
| 34.019231 | 99 | 0.550594 | 469 | 3,538 | 4.014925 | 0.309168 | 0.100903 | 0.103558 | 0.074349 | 0.359002 | 0.277217 | 0.243229 | 0.190653 | 0.173659 | 0.173659 | 0 | 0.01623 | 0.320803 | 3,538 | 103 | 100 | 34.349515 | 0.767374 | 0.075466 | 0 | 0.075 | 0 | 0 | 0.037718 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0125 | false | 0 | 0.0875 | 0 | 0.1125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b336e30f32a8c91df3cdcc80f11eabae458079ff | 1,058 | py | Python | metropolis/core/tests/test_serializer.py | Ashon/_study_nats_worker | 9c137a5cb68b13ddd91f999e61ac5d6125cd155a | [
"MIT"
] | 5 | 2020-01-08T07:58:54.000Z | 2021-03-03T13:22:11.000Z | metropolis/core/tests/test_serializer.py | Ashon/python-nats-gateway | 9c137a5cb68b13ddd91f999e61ac5d6125cd155a | [
"MIT"
] | 3 | 2019-11-05T01:15:47.000Z | 2019-11-07T09:50:25.000Z | metropolis/core/tests/test_serializer.py | Ashon/python-nats-rpc | 9c137a5cb68b13ddd91f999e61ac5d6125cd155a | [
"MIT"
] | 1 | 2019-11-06T04:49:52.000Z | 2019-11-06T04:49:52.000Z | import unittest
from metropolis.core.serializer import DefaultMessageSerializer
from metropolis.core.serializer import JsonMessageSerializer
class TestDefaultMessageSerializer(unittest.TestCase):
def test_default_serializer_should_returns_bytes_msg(self):
msg = 'hello'
encoded = DefaultMessageSerializer.serialize(msg)
self.assertEqual(type(encoded), bytes)
def test_default_serializer_should_returns_string_msg(self):
byte_msg = b'hello'
decoded = DefaultMessageSerializer.deserialize(byte_msg)
self.assertEqual(type(decoded), str)
class TestJsonMessageSerializer(unittest.TestCase):
def test_default_serializer_should_returns_bytes_msg(self):
msg = {"msg": "hello"}
encoded = JsonMessageSerializer.serialize(msg)
self.assertEqual(type(encoded), bytes)
def test_default_serializer_should_returns_string_msg(self):
byte_msg = b'{"msg": "hello"}'
decoded = JsonMessageSerializer.deserialize(byte_msg)
self.assertEqual(type(decoded), dict)
| 36.482759 | 64 | 0.751418 | 110 | 1,058 | 6.972727 | 0.290909 | 0.073012 | 0.073012 | 0.125163 | 0.644068 | 0.555411 | 0.555411 | 0.440678 | 0.440678 | 0.440678 | 0 | 0 | 0.165406 | 1,058 | 28 | 65 | 37.785714 | 0.86863 | 0 | 0 | 0.285714 | 0 | 0 | 0.032136 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 1 | 0.190476 | false | 0 | 0.142857 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b338f100fce3bc7e987544b20907d50a2e935653 | 5,704 | py | Python | ceph_provisioned/ceph_provisioned.py | CCI-MOC/zabbix-ceph | 500b7a714225ef03fe4b202af1f49072cc879651 | [
"Apache-2.0"
] | 1 | 2020-02-22T20:50:47.000Z | 2020-02-22T20:50:47.000Z | ceph_provisioned/ceph_provisioned.py | CCI-MOC/zabbix-ceph | 500b7a714225ef03fe4b202af1f49072cc879651 | [
"Apache-2.0"
] | null | null | null | ceph_provisioned/ceph_provisioned.py | CCI-MOC/zabbix-ceph | 500b7a714225ef03fe4b202af1f49072cc879651 | [
"Apache-2.0"
] | null | null | null | #! /bin/python
"""Calculate provisioned space"""
from __future__ import division
import sys
import subprocess
import json
import functools
import multiprocessing
import configparser
from pprint import pprint
from math import ceil
from pyzabbix import ZabbixMetric, ZabbixSender
from pyzabbix_socketwrapper import PyZabbixPSKSocketWrapper
CONFIG_FILE = "/etc/zabbix-ceph/config.ini"
def execute_command(command):
"""Execute `command` and return the json output"""
command += " --format json"
return json.loads(subprocess.check_output(command.split()))
def get_pools():
"""Return a list of pool names"""
jsonout = execute_command("ceph osd lspools")
return [item["poolname"] for item in jsonout]
def get_pool_size(ceph_pool):
"""Get the size of pool.
It attempts to first find the size assuming it's a pool used for rbd
storage. If that returns all zero, then we try to get the size using
ceph df in case the pool was used for rgw or cephfs.
"""
jsonout = execute_command("rbd du -p " + ceph_pool)
total_used_size = jsonout["total_used_size"]
total_provisioned_size = jsonout["total_provisioned_size"]
if total_used_size != 0 or total_provisioned_size != 0:
return {"total_used_size": total_used_size,
"total_provisioned_size": total_provisioned_size}
# If the total_used_size and total_provisioned_size are both zero, then
# there's no rbd usage. Then the pool is being used for rgw or cephfs
# and in that case the provisioned usage is the same as allocated, and we
# get those numbers from ceph df.
jsonout = execute_command("ceph df")
jsonout = jsonout["pools"]
for item in jsonout:
if item["name"] == ceph_pool:
total_used_size = item["stats"]["bytes_used"]
total_provisioned_size = item["stats"]["bytes_used"]
break
return {"total_used_size": total_used_size,
"total_provisioned_size": total_provisioned_size}
def get_replication_factor(ceph_pool):
"""Get the replication factor.
This returns the factor to multiply our pool size with to get the actual
space used on disk. This factor depends on if the pool is erasure coded
or simply a replicated pool.
"""
jsonout = execute_command("ceph osd pool get " + ceph_pool + " all")
if "erasure_code_profile" in jsonout:
ec_profile = jsonout["erasure_code_profile"]
jsonout = execute_command("ceph osd erasure-code-profile get " + ec_profile)
return 1 + float(jsonout["m"]) / float(jsonout["k"])
else:
return jsonout["size"]
def get_pool_root_map():
"""
Figure out the root to pool map.
Returns a dict which maps pool name to their root name.
"""
# This is rule_id and root_name mapping.
jsonout = execute_command("ceph osd crush dump")
rules = {}
for rule in jsonout["rules"]:
crush_rule_id = rule["rule_id"]
item_name_found = False
for step in rule["steps"]:
if "item_name" in step:
item_name = step["item_name"]
item_name_found = True
break
if not item_name_found:
raise Exception("item_name not found.")
rules[crush_rule_id] = item_name
# This gives pool name to crush_rule_id mapping.
jsonout = execute_command("ceph osd pool ls detail")
pool_root_map = {}
for item in jsonout:
root_name = rules[item["crush_rule"]]
pool_name = item["pool_name"]
pool_root_map[pool_name] = root_name
return pool_root_map
def main():
"""main"""
config = configparser.ConfigParser()
config.read(CONFIG_FILE)
PSK = config['general']['PSK']
PSK_IDENTITY = config['general']['PSK_IDENTITY']
HOST_IN_ZABBIX = config['general']['HOST_IN_ZABBIX']
ZABBIX_SERVER = config['general']['ZABBIX_SERVER']
custom_wrapper = functools.partial(
PyZabbixPSKSocketWrapper, identity=PSK_IDENTITY, psk=bytes(bytearray.fromhex(PSK)))
zabbix_sender = ZabbixSender(
zabbix_server=ZABBIX_SERVER, socket_wrapper=custom_wrapper)
pools = get_pools()
pool_root_map = get_pool_root_map()
results = []
for pool in pools:
d = {}
d = get_pool_size(pool)
d["pool_name"] = pool
d["replication_factor"] = get_replication_factor(pool)
d["raw_total_used_size"] = d["total_used_size"] * d["replication_factor"]
d["raw_total_provisioned_size"] = d["total_provisioned_size"] * \
d["replication_factor"]
d["pool_root"] = pool_root_map[pool]
results.append(d)
print("Results breakdown:")
pprint(results)
root_results = {}
for item in results:
if item["pool_root"] in root_results:
root_results[item["pool_root"]]["raw_total_used_size"] += item["raw_total_used_size"]
root_results[item["pool_root"]]["raw_total_provisioned_size"] += item["raw_total_provisioned_size"]
else:
root_results[item["pool_root"]] = {"raw_total_used_size": item["raw_total_used_size"], "raw_total_provisioned_size": item["raw_total_provisioned_size"]}
print("Results that matter for zabbix")
pprint(root_results)
    for root, stats in root_results.items():
        KEY = "ceph.custom.root.raw.total.provisioned[" + root + "]"
        zabbix_sender.send([ZabbixMetric(HOST_IN_ZABBIX, KEY, int(ceil(stats["raw_total_provisioned_size"])))])
        KEY = "ceph.custom.root.raw.total.used[" + root + "]"
        zabbix_sender.send([ZabbixMetric(HOST_IN_ZABBIX, KEY, int(ceil(stats["raw_total_used_size"])))])
if __name__ == "__main__":
main()
| 32.409091 | 164 | 0.673562 | 765 | 5,704 | 4.75817 | 0.224837 | 0.042033 | 0.057143 | 0.041209 | 0.255495 | 0.195055 | 0.143132 | 0.136813 | 0.136813 | 0.109341 | 0 | 0.000677 | 0.222651 | 5,704 | 175 | 165 | 32.594286 | 0.820253 | 0.167251 | 0 | 0.095238 | 0 | 0 | 0.228424 | 0.073422 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.104762 | 0 | 0.228571 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33a0a1302faab794aabfcd653702fef3513e51b | 6,307 | py | Python | live_cnn/webcam_cnn_pipeline.py | apapiu/live_cnn | d560c14269c81d9b437f4f25169e28c983a7b0af | [
"Apache-2.0"
] | 3 | 2016-11-15T16:04:14.000Z | 2017-02-05T23:51:29.000Z | live_cnn/webcam_cnn_pipeline.py | apapiu/live_cnn | d560c14269c81d9b437f4f25169e28c983a7b0af | [
"Apache-2.0"
] | null | null | null | live_cnn/webcam_cnn_pipeline.py | apapiu/live_cnn | d560c14269c81d9b437f4f25169e28c983a7b0af | [
"Apache-2.0"
] | null | null | null | #functions:
#to get the dependencies create a conda environment:
#conda create --name live_cnn keras ipython opencv scikit-learn
#note that the dim ordering here is the theano one - it's faster on CPU.
#TODO: do some detection/ bounding box CNN's?
import time
import os
import cv2
import numpy as np
from keras.models import Sequential, load_model, Model
from keras.layers import Dense, Dropout, Convolution2D, MaxPooling2D, Flatten
from keras.utils.np_utils import to_categorical
from keras.optimizers import adam
from sklearn.cross_validation import train_test_split
font = cv2.FONT_HERSHEY_SIMPLEX
font = cv2.FONT_ITALIC
def annotate(frame, label, size = 1):
"""writes label on image"""
cv2.putText(frame, label, (20,30), font,
fontScale = size,
color = (255, 255, 0),
thickness = 1,
lineType = cv2.LINE_AA)
def imgs_to_arr(cp, label, nr = 100, nframe = 20):
"""captures video and saves an image to an array a few times/second"""
imgs = []
#range is made so that nr is actually the number of photos taken:
for i in range(int(nr)*nframe + 80):
ret, frame = cp.read(0)
if i < 75:
annotate(frame, "Prepare: {0}".format(label), size = 0.7)
#capture every n frames and leave a few frames to get in position:
if i % nframe == 0 and i > 70:
imgs.append(frame)
print((i - 70)/nframe)
#annotate(frame, str((i - 70)/nframe))
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
imgs = np.array(imgs)
return(imgs)
def create_labels(sizes):
"""create labels starting at 0 with specified sizes
sizes must be a tuple
"""
labels = []
for i, size in enumerate(sizes):
labels += size*[i]
return(labels)
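# Example (illustrative): create_labels((2, 3)) -> [0, 0, 1, 1, 1]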
def create_matrices(array_list):
""" create X and label matrices from a bunch of numpy arrays"""
X = np.vstack(array_list)
X = X.transpose(0, 3, 1, 2)
X = X/255
sizes = (X.shape[0] for X in array_list)
y = create_labels(sizes)
#TODO: create a time based validation - take the last 15% percent of data and make it val
return(X, y)
def return_compiled_model(input_shape, num_class = 2):
"""a one-layer perceptron"""
model = Sequential()
model.add(MaxPooling2D((8,8), input_shape = input_shape))
model.add(Flatten())
model.add(Dense(64, activation = "relu"))
model.add(Dropout(0.3))
model.add(Dense(num_class, activation = "softmax"))
model.compile(loss = "categorical_crossentropy", optimizer = adam(lr = 0.001),
metrics = ["accuracy"])
return(model)
def return_compiled_model_2(input_shape, num_class = 2):
"""a 3-layer convnet"""
model = Sequential()
#max pooling at first since images are big and
#also for small translation invariance:
model.add(MaxPooling2D((3,3), input_shape = input_shape))
model.add(Convolution2D(32, 3, 3, activation = "relu", border_mode = "same"))
model.add(MaxPooling2D((2,2)))
model.add(Convolution2D(64, 3, 3, activation = "relu", border_mode = "same"))
model.add(MaxPooling2D((2,2)))
model.add(Convolution2D(64, 3, 3, activation = "relu", border_mode = "same"))
model.add(MaxPooling2D((2,2)))
model.add(Flatten())
model.add(Dense(128, activation = "relu"))
model.add(Dropout(0.5))
model.add(Dense(num_class, activation = "softmax"))
model.compile(loss = "categorical_crossentropy", optimizer = "adam", metrics = ["accuracy"])
return(model)
#model = return_compiled_model_2(input_shape = (3, w, h), num_class = 5)
#model.summary()
os.chdir("/Users/alexpapiu/Documents/Data/OpenCV_CNN")
model = load_model("basic_model")
feat_extr = Model(input = model.input, output = model.get_layer("dropout_1").output)
#TODO: make this accept arbitrarily sized input:
def pre_trained_model(num_class, lr = 0.0005):
"""
returns a pretrained model adding a fully connected
layer on top for prediction with as many final units
as number of classes
"""
os.chdir("/Users/alexpapiu/Documents/Data/OpenCV_CNN")
model = load_model("basic_model")
feat_extr = Model(input = model.input, output = model.get_layer("dropout_1").output)
output = Dense(num_class, activation = "softmax", name = "bleh")(feat_extr.output)
new_model = Model(input = feat_extr.input, output = output)
#make all layers except last not trainable:
for layer in new_model.layers[:-1]:
layer.trainable = False
new_model.layers[-1].trainable = True
new_model.compile(loss = "categorical_crossentropy",
optimizer = adam(lr = lr), metrics = ["accuracy"])
return new_model
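# Minimal usage sketch (hypothetical data: X shaped to match the saved model's
# channels-first input and y holding integer class labels):
#   clf = pre_trained_model(num_class=3)
#   clf.fit(X, to_categorical(y))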
def predict_from_frame(model, frame, labelz, resize = False, input_shape = (128, 128)):
"""takes a frame and outputs a class prediction
Parameters:
-----------
model: a keras or sklearn model
frame: a frame given as an numpy array
labelz: dictionary of classes for example {0:"cat", 1:"dog"}
resize: Boolean indicating if the image should be reshaped before
being fed into the model
input shape: 2-dimensional tuple giving the height and width
the image should be reshaped to - this should be the shape
accepted by the model
"""
if resize:
frame = cv2.resize(frame, input_shape)
frame = frame.transpose((2, 0, 1)) #this because we use theano shape defaults
frame = np.expand_dims(frame, 0)
frame = frame/255.0
#preds = model.predict_classes(frame, verbose = False)[0]
preds = np.argmax(model.predict(frame, verbose = False)[0])
label = labelz[preds]
return(label)
def real_time_pred(model, labelz, cp, nframes = 1000, resize = False,
input_shape = (128, 128)):
for i in range(nframes):
ret, frame = cp.read(0)
#predict every 10 frames:
if i % 10 == 0:
label = predict_from_frame(model, frame, labelz,
resize = resize,
input_shape = input_shape)
annotate(frame, label)
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
| 30.322115 | 96 | 0.641985 | 875 | 6,307 | 4.534857 | 0.323429 | 0.032258 | 0.025202 | 0.015121 | 0.32006 | 0.292843 | 0.21875 | 0.199597 | 0.185484 | 0.185484 | 0 | 0.032197 | 0.241636 | 6,307 | 207 | 97 | 30.468599 | 0.797407 | 0.269859 | 0 | 0.259615 | 0 | 0 | 0.068187 | 0.034876 | 0 | 0 | 0.001789 | 0.009662 | 0 | 1 | 0.086538 | false | 0 | 0.086538 | 0 | 0.182692 | 0.009615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33b2269e6d2e9516aade7f62d1a13fd0a9f22d6 | 2,292 | py | Python | robotarm/controllers/api_service.py | AmidBidee/Robot-Arm | cfacfc779b2f025846e9748167bcfb15ce207923 | [
"MIT"
] | 1 | 2022-03-27T20:09:10.000Z | 2022-03-27T20:09:10.000Z | robotarm/controllers/api_service.py | AmidBidee/Robot-Arm | cfacfc779b2f025846e9748167bcfb15ce207923 | [
"MIT"
] | 4 | 2022-03-25T03:45:10.000Z | 2022-03-29T14:31:16.000Z | robotarm/controllers/api_service.py | AmidBidee/RobotArm | cfacfc779b2f025846e9748167bcfb15ce207923 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
"""
State Controller Module
"""
import subprocess
from robotarm.controllers import proxy_url
import requests
import pathlib
import psutil
BASE_DIR = pathlib.Path(__file__).resolve().parent.parent
#print(BASE_DIR)
class APIServiceController():
"""
contolls api service
actions:
health_check: checks api status
"""
__api_start_script = 'start_arm_api'
__base_api = proxy_url + '/status'
def start_service(self):
"""
starts the api service process
"""
with open(f'{BASE_DIR}/logfile', mode='w', encoding='utf8') as file:
subprocess.Popen([f'./scripts/{self.__api_start_script}'], stderr=file, cwd=BASE_DIR)
def stop_service(self):
"""
stops the api service process
"""
        p_name = self.__api_start_script
        script_process = None
        for proc in psutil.process_iter(['pid', 'name']):
            if p_name == proc.name():
                script_process = proc
                break
        if script_process is None:
            return  # nothing to stop; avoids a NameError when the script isn't running
        pid = script_process.pid  # get the pid before killing
        script_process.kill()
        # original heuristic: the gunicorn master is assumed to be spawned
        # immediately after the script, i.e. at pid + 1
        server_process = psutil.Process(pid + 1)
        if server_process.name() == 'gunicorn':
            server_process.kill()
def health_check(self):
"""
sends a request to the /states/create endpoint to create a new
development environment state
Example:
curl -H "Content-Type" -X GET \
http://127.0.0:5555/api/status/
Args:
args (list): argument list
Return:
running: OK
"""
print('performing api health check\n==========================================================')
headers = {'content-type': 'application/json'}
url = self.__base_api
try:
request = requests.get(url, headers=headers).json()
status = request.get('status')
if status == 'OK':
print('STATUS: APIs Are Running :)')
else:
print("well I'm confused :(")
except Exception as e:
print("APIs aren't running, are you sure you started robotarm :(")
return False
print('==========================================================\nchecks complete')
return True
| 29.012658 | 104 | 0.544939 | 248 | 2,292 | 4.862903 | 0.491935 | 0.023217 | 0.034826 | 0.033168 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007495 | 0.301483 | 2,292 | 78 | 105 | 29.384615 | 0.745784 | 0.196771 | 0 | 0 | 0 | 0 | 0.234979 | 0.098751 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.131579 | 0 | 0.342105 | 0.131579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33f2a6de57090a527f6078cd21b1bc58dc5b6e8 | 1,266 | py | Python | examples/basic.py | kpchand/firepack | 082b2477024c928e1999691b17eb3c07f015d79f | [
"MIT"
] | 1 | 2021-06-14T10:18:57.000Z | 2021-06-14T10:18:57.000Z | examples/basic.py | kpchand/firepack | 082b2477024c928e1999691b17eb3c07f015d79f | [
"MIT"
] | null | null | null | examples/basic.py | kpchand/firepack | 082b2477024c928e1999691b17eb3c07f015d79f | [
"MIT"
] | null | null | null | from firepack.fields import IntField, StrField
from firepack.service import FireService
from firepack.errors import SkipError, ValidationError
CRAWLED_DB = []
def page_name_validator(name, value):
if not value.endswith('.html'):
raise ValidationError(name, 'I only know html pages!')
class Crawler(FireService):
# All fields are required by default
user_id = IntField(min_value=1)
page_name = StrField(validators=[page_name_validator])
def url(self):
# Fields are directly accessible using instance __dict__
return 'http://example.com/{}/{}'.format(self.user_id, self.page_name)
def pre_fire(self):
if self.url() in CRAWLED_DB:
# Control directly goes to post_fire method
raise SkipError('Page already crawled!')
def fire(self, **kwargs):
CRAWLED_DB.append(self.url())
def post_fire(self, fired, exc):
if fired:
print('I crawled!')
else:
print('I skipped crawling because: ', exc)
crawler = Crawler()
crawler.call({
'user_id': 1,
'page_name': 'about.html'
})
# Values are stored in native python types wherever possible:
print(type(crawler.user_id), type(crawler.page_name)) # <class 'int'> <class 'str'> | 28.133333 | 84 | 0.665877 | 163 | 1,266 | 5.030675 | 0.496933 | 0.058537 | 0.041463 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00203 | 0.221959 | 1,266 | 45 | 84 | 28.133333 | 0.830457 | 0.172986 | 0 | 0 | 0 | 0 | 0.131478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.178571 | false | 0 | 0.107143 | 0.035714 | 0.428571 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b33f68c75544b629936c29fb4b8ae81e16ce8cfe | 10,460 | py | Python | dependencies/panda/Panda3D-1.10.0-x64/python/Lib/JOD/JamoDrumNodePath.py | CrankySupertoon01/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 1 | 2021-02-13T22:40:50.000Z | 2021-02-13T22:40:50.000Z | dependencies/panda/Panda3D-1.10.0-x64/python/Lib/JOD/JamoDrumNodePath.py | CrankySupertoonArchive/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 1 | 2018-07-28T20:07:04.000Z | 2018-07-30T18:28:34.000Z | dependencies/panda/Panda3D-1.10.0-x64/python/Lib/JOD/JamoDrumNodePath.py | CrankySupertoonArchive/Toontown-2 | 60893d104528a8e7eb4aced5d0015f22e203466d | [
"MIT"
] | 2 | 2019-12-02T01:39:10.000Z | 2021-02-13T22:41:00.000Z | """
NodePath for Jam-o-Drum model
@author: Ben Buchwald <bb2@alumni.cmu.edu>
Last Updated: 11/29/2005
@var DRUMPAD_RADIUS: constant representing the radius of the drumpad as a percentage
of the radius of the table
@type DRUMPAD_RADIUS: float
@var SPINNER_RADIUS: constant representing the radius of the spinner as a percentage
of the radius of the table
@type SPINNER_RADIUS: float
"""
from direct.showbase.DirectObject import DirectObject
from direct.showbase.Audio3DManager import Audio3DManager
from direct.gui.DirectGui import *
from direct.task import Task
import math
from pandac.PandaModules import NodePath
from pandac.PandaModules import Texture, CardMaker, DepthOffsetAttrib, OrthographicLens
DRUMPAD_RADIUS = .167
SPINNER_RADIUS = .254
class JamoDrumNodePath(NodePath):
"""
A C{NodePath} of a heirarchy of a Jam-o-Drum. Creating one of these sets up
the camera as well for displaying an orthographic view of this Jam-o-drum
@ivar stations: list of 4 L{Station} objects representing each Jam-o-Drum station
@type stations: L{Station}[]
"""
def __init__(self,table=None,mask=None):
"""
@keyword table: filename of a table texture. See table_template.psd. Either
paint anywhere inside the mask for a complete background
or turn off the pads and spinner and paint in the table circle
for just a table texture that will have spinners and pads
put on top of it.
@type mask: str
@keyword mask: filename of a mask texture of the non-Jam-o-Drum area. probably
jod_mask.png that comes with the Jam-o-Drum library.
@type mask: str
"""
NodePath.__init__(self,"JamoDrum")
totalHeight = max(1.0,math.sqrt(2)/4.0+SPINNER_RADIUS)*2
cm = CardMaker("card")
cm.setFrame(-1,1,-1,1)
self.tableCard = self.attachNewNode(cm.generate())
self.tableCard.setP(-90)
self.tableCard.setScale(4.0/3.0)
self.tableCard.setLightOff()
self.tableCard.setBin("background",0)
self.tableCard.setDepthTest(0)
self.tableCard.setDepthWrite(0)
self.tableCard.hide()
if (table):
self.setTableTexture(loader.loadTexture(table))
if (mask):
cm = CardMaker("JOD Mask")
cm.setFrame(-4.0/3.0,4.0/3.0,-4.0/3.0,4.0/3.0)
self.mask = aspect2d.attachNewNode(cm.generate())
#self.mask.setP(-90)
self.mask.setTexture(loader.loadTexture(mask),1)
self.mask.setTransparency(1)
self.mask.setDepthTest(0)
else:
self.mask = None
self.stations = []
for i in range(4):
station = Station(self,i)
station.reparentTo(self)
self.stations.append(station)
self.reparentTo(render)
base.disableMouse()
self.lens = OrthographicLens()
self.lens.setFilmSize(totalHeight*base.getAspectRatio(),totalHeight)
base.cam.node().setLens(self.lens)
camera.setPosHpr(0,0,10.0, 0,-90,0)
base.setBackgroundColor(0,0,0)
self.audio3d = Audio3DManager(base.sfxManagerList[0],self)
self.audio3d.setDropOffFactor(0)
# end __init__
def setTableTexture(self,texture,scale=None):
"""
sets the background texture on the table
@param texture: C{Texture} or image file to show on the table
@type texture: C{Texture} or string
@keyword scale: scale of the texture. Default is set for the table_template.psd
@type scale: float
"""
if (not isinstance(texture,Texture)):
texture = loader.loadTexture(texture)
self.tableCard.setTexture(texture)
if (scale):
self.tableCard.setScale(scale)
self.showTable()
# end setTableTexture
def showTable(self):
"""
displays the table's background texture
"""
self.tableCard.show()
# end showTable
def hideTable(self):
"""
hides the table's background texture
"""
self.tableCard.hide()
    # end hideTable
def loadSfx(self,file,object=None):
"""
load a sound with positional audio support
@param file: wav file to load. Must be mono.
@type file: string
@keyword object: object to attach sound to
@type object: C{NodePath}
@return: a Panda sound object
@rtype: C{AudioSound}
"""
sfx = self.audio3d.loadSfx(file)
if (object):
self.attachSoundToObject(sfx,object)
return sfx
    # end loadSfx
def attachSoundToObject(self,sound,object):
"""
attach a positional sound to an object it will come from
@param sound: positional sound object to attach
@type sound: C{AudioSound}
@param object: object sound should come from
@type object: C{NodePath}
"""
self.audio3d.attachSoundToObject(sound,object)
# end attachSoundToObject
def detachSound(self,sound):
"""
detach a positional sound from it's object. It will no longer move.
@param sound: positional sound object to detach
@type sound: C{AudioSound}
"""
self.audio3d.detachSound(sound)
# end detachSound
# end class JamoDrum
class Station(NodePath):
"""
A C{NodePath} to a specific Jam-o-Drum station. This C{NodePath} should contain
subparts call pad and spinner. This is automatically created by L{JamoDrumNodePath}
and should not be created by hand.
@ivar index: index of this station (0-3)
@type index: int
@ivar pad: C{NodePath} to the pad subpart
@type pad: C{NodePath}
@ivar spinner: C{NodePath} to the spinner subpart
@type spinner: C{NodePath}
"""
def __init__(self,jod,index):
"""
@param jod: containing Jam-o-Drum
@type jod: L{JamoDrumNodePath}
@param index: index of the station
@type index: int
"""
NodePath.__init__(self,"station%02d"%index)
self.jod = jod
self.index = index
angle = 45.0-self.index*90.0
self.setPos(math.cos((angle-90.0)*math.pi/180.0),math.sin((angle-90.0)*math.pi/180.0),0)
self.setH(angle)
cm = CardMaker("card")
cm.setFrame(-1,1,-1,1)
self.spinner = self.attachNewNode("spinner")
self.spinnerCard = self.spinner.attachNewNode(cm.generate())
self.spinnerCard.setTransparency(1)
self.spinnerCard.setBin("background",1)
self.spinnerCard.node().setAttrib(DepthOffsetAttrib.make(1))
self.spinnerCard.setScale(1.0/3.0)
self.spinnerCard.setP(-90)
self.spinnerCard.setLightOff()
self.spinnerCard.hide()
self.pad = self.attachNewNode("pad")
self.padCard = self.pad.attachNewNode(cm.generate())
self.padCard.setTransparency(1)
self.padCard.setBin("background",2)
self.padCard.node().setAttrib(DepthOffsetAttrib.make(2))
self.padCard.setScale(1.0/3.0)
self.padCard.setP(-90)
self.padCard.setLightOff()
self.padCard.hide()
# end __init__
def getParent(self):
"""
get the containing L{JamoDrumNodePath}
@return: the containing Jam-o-Drum
@rtype: L{JamoDrumNodePath}
"""
return self.jod
# end getParent
def setSpinnerTexture(self,texture,scale=None):
"""
sets the background texture on the spinner
@param texture: C{Texture} or image file to show on the spinner
@type texture: C{Texture} or string
@keyword scale: scale of the texture. Default is set for the spinner_template.psd
@type scale: float
"""
if (not isinstance(texture,Texture)):
texture = loader.loadTexture(texture)
self.spinnerCard.setTexture(texture)
if (scale):
self.spinnerCard.setScale(scale)
self.showSpinner()
# end setSpinnerTexture
def setPadTexture(self,texture,scale=None):
"""
sets the background texture on the pad
@param texture: C{Texture} or image file to show on the pad
@type texture: C{Texture} or string
@keyword scale: scale of the texture. Default is set for the spinner_template.psd
@type scale: float
"""
if (not isinstance(texture,Texture)):
texture = loader.loadTexture(texture)
self.padCard.setTexture(texture)
if (scale):
self.padCard.setScale(scale)
self.showPad()
# end setPadTexture
def showSpinner(self):
"""
displays the spinner's background texture
"""
self.spinnerCard.show()
# end showSpinner
def showPad(self):
"""
displays the pad's background texture
"""
self.padCard.show()
# end showPad
def hideSpinner(self):
"""
hides the spinner's background texture
"""
self.spinnerCard.hide()
# end hideSpinner
def hidePad(self):
"""
hide the pad's background texture
"""
self.padCard.hide()
# end hidePad
def loadSfx(self,file,object=None):
"""
load a sound with positional audio support and attach it to this station
@param file: wav file to load. Must be mono.
@type file: string
@keyword object: object to attach sound to. default: this station
@type object: C{NodePath}
@return: a Panda sound object
@rtype: C{AudioSound}
"""
return self.jod.loadSfx(file,object or self)
#end loadSfx
def attachSound(self,sound,object=None):
"""
attach a positional sound to this station
@param sound: positional sound object to attach
@type sound: C{AudioSound}
@keyword object: object sound should come from. default: this station
@type object: C{NodePath}
"""
self.jod.attachSoundToObject(sound,object or self)
# end attachSound
# end class Station
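
# A minimal usage sketch (hedged): how the classes above might be driven from a
# Panda3D ShowBase app. Illustrative only -- the JamoDrumNodePath constructor
# arguments and the asset file names are assumptions, not part of this file.
#
#   jod = JamoDrumNodePath(table="table.png", mask="mask.png")
#   station = jod.stations[0]
#   station.setSpinnerTexture("spinner.png")
#   station.setPadTexture("pad.png")
#   hit = station.loadSfx("hit.wav")   # mono wav; positional, attached to the station
#   hit.play()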
| 33.101266 | 96 | 0.60631 | 1,247 | 10,460 | 5.056937 | 0.194066 | 0.024738 | 0.011418 | 0.016175 | 0.342372 | 0.315255 | 0.295116 | 0.230891 | 0.230891 | 0.230891 | 0 | 0.01731 | 0.298566 | 10,460 | 315 | 97 | 33.206349 | 0.84217 | 0.39847 | 0 | 0.169355 | 0 | 0 | 0.013998 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137097 | false | 0 | 0.056452 | 0 | 0.233871 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34056015462e1089eaab43dc86088ca8783c08f | 2,927 | py | Python | src/apps/alarm_clock/tests/test_api.py | stefanhoelzl/alarm-clock | efba84e71fcade26bef020dc7eaa10181ea9f96c | ["MIT"] | 1 | 2019-07-31T12:39:53.000Z | 2019-07-31T12:39:53.000Z | src/apps/alarm_clock/tests/test_api.py | stefanhoelzl/alarm-clock | efba84e71fcade26bef020dc7eaa10181ea9f96c | ["MIT"] | null | null | null | src/apps/alarm_clock/tests/test_api.py | stefanhoelzl/alarm-clock | efba84e71fcade26bef020dc7eaa10181ea9f96c | ["MIT"] | 1 | 2019-10-04T04:32:20.000Z | 2019-10-04T04:32:20.000Z |
import pytest
from unittest.mock import patch

from ..api import *
from ..alarm import Alarm
from .. import AlarmClockApp
from ..alarm_states import Disabled, Enabled


class AlarmClockMock(AlarmClockApp):
    auto_store = False

    def __init__(self, alarms=None, off=False, snooze=False):
        super().__init__()
        self.alarms = alarms if alarms is not None else {}
        self._off = off
        self._snooze = snooze

    def off(self):
        self._off = True

    def snooze(self):
        self._snooze = True


@pytest.mark.asyncio
async def test_getAlarms():
    aid = 123
    alarm = Alarm(h=7, m=35, days=(1, 2), snooze_time=50,
                  daylight_mode=True, daylight_time=10)
    with patch("uaos.App.get_app", return_value=AlarmClockMock({aid: alarm})):
        alarms = (await get_alarms(""))
    assert alarms[0]['aid'] == 0
    assert alarms[0]['hour'] == 7
    assert alarms[0]['minute'] == 35
    assert alarms[0]['days'] == (1, 2)
    assert alarms[0]['daylight_mode'] == True
    assert alarms[0]['daylight_time'] == 10
    assert alarms[0]['snooze_time'] == 50
    assert alarms[0]['daylight'] == 0
    assert alarms[0]['ringing'] is False
    assert alarms[0]['snoozing'] is False


@pytest.mark.asyncio
async def test_off():
    ac = AlarmClockMock()
    with patch("uaos.App.get_app", return_value=ac):
        await off("")
    assert ac._off  # the off() call must have flipped the mock's flag


@pytest.mark.asyncio
async def test_snooze():
    ac = AlarmClockMock()
    with patch("uaos.App.get_app", return_value=ac):
        await snooze("")
    assert ac._snooze


@pytest.mark.asyncio
async def test_disable():
    ac = AlarmClockMock({123: Alarm()})
    assert ac.alarms[123] == Enabled
    with patch("uaos.App.get_app", return_value=ac):
        await disable("123")
    assert ac.alarms[123] == Disabled


@pytest.mark.asyncio
async def test_enable():
    ac = AlarmClockMock({123: Alarm()})
    ac.alarms[123].disable()
    assert ac.alarms[123] == Disabled
    with patch("uaos.App.get_app", return_value=ac):
        await enable("123")
    assert ac.alarms[123] == Enabled


@pytest.mark.asyncio
async def test_new():
    ac = AlarmClockMock()
    with patch("uaos.App.get_app", return_value=ac):
        await new({"hour": 7,
                   "minute": 24,
                   "days": (1, 2, 3),
                   "daylight_mode": True,
                   "daylight_time": 120,
                   "snooze_time": 60
                   })
    assert ac.alarms[0] == Enabled
    assert ac.alarms[0].hour == 7
    assert ac.alarms[0].minute == 24
    assert ac.alarms[0].days == (1, 2, 3)
    assert ac.alarms[0].daylight_mode == True
    assert ac.alarms[0].daylight_time == 120
    assert ac.alarms[0].snooze_time == 60


@pytest.mark.asyncio
async def test_delete():
    ac = AlarmClockMock({123: Alarm()})
    with patch("uaos.App.get_app", return_value=ac):
        await delete("123")
    assert 123 not in ac.alarms
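
# A minimal fixture-style sketch, assuming the same "uaos.App.get_app" patching
# used by the tests above; it merely factors out the repeated boilerplate and is
# not part of the original test module's API.
import contextlib

@contextlib.contextmanager
def running_app(alarms=None):
    """Yield an AlarmClockMock installed as the current app."""
    ac = AlarmClockMock(alarms if alarms is not None else {})
    with patch("uaos.App.get_app", return_value=ac):
        yield ac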
| 27.35514 | 78 | 0.617014 | 393 | 2,927 | 4.473282 | 0.183206 | 0.067691 | 0.0876 | 0.0876 | 0.471559 | 0.331058 | 0.182594 | 0.182594 | 0.163823 | 0.163823 | 0 | 0.044045 | 0.239836 | 2,927 | 106 | 79 | 27.613208 | 0.746067 | 0 | 0 | 0.270588 | 0 | 0 | 0.08507 | 0 | 0 | 0 | 0 | 0 | 0.282353 | 1 | 0.035294 | false | 0 | 0.070588 | 0 | 0.129412 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b340e1518e138957e56fd973250c45b5d93fb474 | 4,519 | py | Python | skipole/skiadmin/skiadminpackages/editfiles/editfile.py | bernie-skipole/skipole | b45d3291c593e7c03c053ab4f192f1ecc5c3e9b9 | ["MIT"] | null | null | null | skipole/skiadmin/skiadminpackages/editfiles/editfile.py | bernie-skipole/skipole | b45d3291c593e7c03c053ab4f192f1ecc5c3e9b9 | ["MIT"] | null | null | null | skipole/skiadmin/skiadminpackages/editfiles/editfile.py | bernie-skipole/skipole | b45d3291c593e7c03c053ab4f192f1ecc5c3e9b9 | ["MIT"] | null | null | null |
"Functions implementing FilePage editing"
from ... import ValidateError, FailPage, ServerError
from ....ski.project_class_definition import SectionData
from ... import skilift
from ....skilift import editpage
from .. import utils
def retrieve_edit_filepage(skicall):
"Retrieves widget data for the edit file page"
call_data = skicall.call_data
pd = call_data['pagedata']
# clears any session data, keeping page_number, pchange and any status message
utils.clear_call_data(call_data, keep=["page_number", "pchange", "status"])
project = call_data['editedprojname']
if 'page_number' in call_data:
pagenumber = call_data['page_number']
str_pagenumber = str(pagenumber)
else:
raise FailPage(message = "page missing")
if not pagenumber:
raise FailPage(message = "Invalid page")
try:
pageinfo = skilift.page_info(project, pagenumber)
if pageinfo.item_type != 'FilePage':
raise FailPage(message = "Invalid page")
call_data['pchange'] = pageinfo.change
filepath, mimetype = editpage.file_parameters(project, pagenumber)
except ServerError as e:
raise FailPage(message = e.message)
# fill in sections
sd_adminhead = SectionData("adminhead")
sd_page_edit = SectionData("page_edit")
# fills in the data for editing page name, brief, parent, etc.,
sd_adminhead["page_head","large_text"] = pageinfo.name
sd_page_edit['p_ident','page_ident'] = (project,str_pagenumber)
sd_page_edit['p_name','page_ident'] = (project,str_pagenumber)
sd_page_edit['p_description','page_ident'] = (project,str_pagenumber)
sd_page_edit['p_rename','input_text'] = pageinfo.name
sd_page_edit['p_parent','input_text'] = "%s,%s" % (project, pageinfo.parentfolder_number)
sd_page_edit['p_brief','input_text'] = pageinfo.brief
pd.update(sd_adminhead)
pd.update(sd_page_edit)
pd['p_file','input_text'] = filepath
pd['p_mime','input_text'] = mimetype
pd['enable_cache','radio_checked'] = pageinfo.enable_cache
def submit_new_filepath(skicall):
"Sets new page filepath"
call_data = skicall.call_data
project = call_data['editedprojname']
if 'page_number' in call_data:
pagenumber = call_data['page_number']
else:
raise FailPage(message = "page missing")
if not pagenumber:
raise FailPage(message = "Invalid page")
pchange = call_data['pchange']
if not 'filepath' in call_data:
raise FailPage(message="No filepath given")
new_filepath = call_data['filepath']
if not new_filepath:
raise FailPage(message="No filepath given")
try:
call_data['pchange'] = editpage.page_filepath(project, pagenumber, pchange, new_filepath)
except ServerError as e:
raise FailPage(message=e.message)
call_data['status'] = 'Page filepath set: %s' % (new_filepath,)
def submit_mimetype(skicall):
"Sets mimetype"
call_data = skicall.call_data
project = call_data['editedprojname']
if 'page_number' in call_data:
pagenumber = call_data['page_number']
else:
raise FailPage(message = "page missing")
if not pagenumber:
raise FailPage(message = "Invalid page")
pchange = call_data['pchange']
if not 'mime_type' in call_data:
raise FailPage(message="No mimetype given")
# Set the page mimetype
try:
call_data['pchange'] = editpage.page_mimetype(project, pagenumber, pchange, call_data['mime_type'])
except ServerError as e:
raise FailPage(message=e.message)
call_data['status'] = 'Mimetype set'
def submit_cache(skicall):
"Sets cache true or false"
call_data = skicall.call_data
# this function is duplicated in editpage, may be better to remove this file and transfer conetents to editpage
project = call_data['editedprojname']
pagenumber = call_data['page_number']
pchange = call_data['pchange']
if 'cache' not in call_data:
raise FailPage(message="No cache instruction given")
try:
# Set the page cache
if call_data['cache'] == 'True':
enable_cache = True
message = "Cache Enabled"
else:
enable_cache = False
message = "Cache Disabled"
call_data['pchange'] = editpage.page_enable_cache(project, pagenumber, pchange, enable_cache)
except ServerError as e:
raise FailPage(message=e.message)
call_data['status'] = message
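
# A small sketch, not part of the original module: the submit functions above
# repeat the same page_number / pchange checks, which a helper like this
# (hypothetical name, same call_data conventions) could centralise.
def _require_page(call_data):
    "Return (project, pagenumber, pchange) from call_data or raise FailPage."
    if not call_data.get('page_number'):
        raise FailPage(message="page missing")
    return call_data['editedprojname'], call_data['page_number'], call_data['pchange']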
| 31.823944 | 115 | 0.680239 | 563 | 4,519 | 5.250444 | 0.198934 | 0.102842 | 0.101489 | 0.022327 | 0.456022 | 0.403248 | 0.366712 | 0.315968 | 0.315968 | 0.259134 | 0 | 0 | 0.215313 | 4,519 | 141 | 116 | 32.049645 | 0.833615 | 0.100686 | 0 | 0.464646 | 0 | 0 | 0.204475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040404 | false | 0 | 0.050505 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b342a3c46ccc9eff45c7a93f20bcee1cba5cd708 | 23,313 | py | Python | src/radixlib/provider.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | ["MIT"] | 32 | 2022-01-12T16:52:28.000Z | 2022-03-24T18:05:47.000Z | src/radixlib/provider.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | ["MIT"] | 3 | 2022-01-12T17:01:55.000Z | 2022-02-12T15:14:16.000Z | src/radixlib/provider.py | 0xOmarA/RadixLib | 85d75a47d4c4df4c1a319b74857ae2c513933623 | ["MIT"] | 1 | 2022-01-21T04:28:07.000Z | 2022-01-21T04:28:07.000Z |
from radixlib.api_types.identifiers import (
    TransactionIdentifier,
    ValidatorIdentifier,
    NetworkIdentifier,
    AccountIdentifier,
    TokenIdentifier,
    StateIdentifier,
)
from radixlib.actions import ActionType
from radixlib.network import Network
import radixlib as radix
import requests

from typing import Optional, Any, Dict, Union, List


class Provider():
    """ An implementation of a provider for the Gateway API of the Radix blockchain.

    This provider is implemented in a way that makes it easy to make requests to the API. However,
    it is not the job of the provider to parse the responses from the API. The provider only goes
    as far as trying to load the response as JSON if possible, but that is about it. This is
    because the provider's job is to provide an easy way to communicate with the gateway API, not
    to parse responses.
    """
    def __init__(
        self,
        network: Network,
        custom_gateway_url: Optional[str] = None,
        open_api_version: str = "1.1.2",
    ) -> None:
        """ Instantiates a new provider object through the passed parameters for the given network.

        This method is used to create a new provider object for the given network object passed in
        the arguments. The provider supports default RPC urls for both the mainnet and the stokenet.
        Aside from that, if you wish to connect to some other network, the :obj:`custom_gateway_url`
        is no longer an optional argument.

        Args:
            network (Network): The type of network that the provider will connect to.
            custom_gateway_url (:obj:`str`, optional): An optional argument that defaults to None.
                This is the url of the RPC to connect to if we wish to connect to a custom gateway.
            open_api_version (str): An optional argument that defaults to "1.1.2" and it defines the
                value for the X-Radixdlt-Target-Gw-Api header which is requested by the gateway API.

        Raises:
            ValueError: Raised when a network other than the mainnet or the stokenet is used without
                providing a custom_gateway_url.
        """

        # Checking to see if the network provides a default gateway URL or not
        if network.default_gateway_url or custom_gateway_url:
            self.base_url: str = custom_gateway_url or network.default_gateway_url # type: ignore
            self.network: Network = network
            self.open_api_version: str = open_api_version
        else:
            raise ValueError(
                "The network provided does not have a default gateway API URL and no URL was "
                "supplied to the custom_gateway_url"
            )

    def __str__(self) -> str:
        """ Represents the provider as a string """
        return f"Provider(base_url={self.base_url}, network={self.network.name}, open_api_version={self.open_api_version})"

    def __repr__(self) -> str:
        """ Represents the provider """
        return str(self)

    def __dispatch(
        self,
        endpoint: str,
        params: Dict[Any, Any],
        http_method: str = "POST"
    ) -> Dict[Any, Any]:
        """ Dispatches HTTP calls to the endpoints with the params provided

        Args:
            endpoint (str): The endpoint to make the HTTP call to.
            params (dict): The JSON payload to include in the request body.
            http_method (str): The type of request to make, defaults to a POST request.

        Returns:
            dict: A dictionary of the response from the API.

        Raises:
            TypeError: Raised if the response from the API is not a JSON response.
        """

        # The network identifier is always in the JSON body of all requests made to the gateway API.
        # So, we add the network identifier to the request parameters
        params['network_identifier'] = NetworkIdentifier(self.network)

        # Making the request to the gateway API
        response: requests.Response = requests.request(
            method = str(http_method),
            url = f'{self.base_url}/{endpoint}',
            json = radix.utils.remove_none_values_recursively(
                radix.utils.convert_to_dict_recursively(
                    iterable = params # type: ignore
                )
            ),
            headers = {
                "X-Radixdlt-Target-Gw-Api": self.open_api_version
            }
        )

        # Checking the type of the content sent back from the API. If the content is in JSON then
        # we are good. If not then we throw an exception.
        if "application/json" not in str(response.headers.get('content-type')):
            raise TypeError(
                f"The provider expects a JSON response but got a response of the type: "
                f"{str(response.headers.get('content-type'))}. Response: {response.text}"
            )

        # Converting the response body to JSON and checking if there are errors in the response
        json_response: Dict[Any, Any] = response.json()
        return json_response

    # #######################################
    # ---------- Gateway Endpoints ----------
    # #######################################

    def get_gateway_info(self) -> Dict[str, Any]:
        """ Returns the Gateway API version, network and current ledger state. """
        return self.__dispatch(
            endpoint = "gateway",
            params = {}
        )

    # #######################################
    # ---------- Account Endpoints ----------
    # #######################################

    def derive_account_identifier(
        self,
        public_key: str
    ) -> Dict[str, Any]:
        """ Derives the wallet address for the given public key.

        This method is similar to the `derive.wallet_address_from_public_key` method with the only
        exception being that in this case we're asking the node to derive the account identifier
        (wallet address) for us. This might be useful if a change suddenly happens to the network
        and all of the HRPs are changed, or any case where computing the wallet address locally does
        not make sense.

        Args:
            public_key (str): The public key to derive the wallet address for.

        Returns:
            Dict[str, Any]: A dictionary of the account identifier.
        """
        return self.__dispatch(
            endpoint = "account/derive",
            params = {
                "public_key": {
                    "hex": public_key
                }
            }
        )

    def get_account_balances(
        self,
        account_address: str,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns an account's available and staked token balances, given an account address.

        Args:
            account_address (str): The account to get the balances for.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the account balances
        """
        return self.__dispatch(
            endpoint = "account/balances",
            params = {
                "account_identifier": AccountIdentifier(account_address),
                "at_state_identifier": state_identifier
            }
        )

    def get_stake_positions(
        self,
        account_address: str,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns the xrd which the account has in pending and active delegated stake positions
        with validators, given an account address.

        Args:
            account_address (str): The account to get the stakes for.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the account stake positions.
        """
        return self.__dispatch(
            endpoint = "account/stakes",
            params = {
                "account_identifier": AccountIdentifier(account_address),
                "at_state_identifier": state_identifier
            }
        )

    def get_unstake_positions(
        self,
        account_address: str,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns the xrd which the account has in pending and temporarily-locked delegated
        unstake positions with validators, given an account address.

        Args:
            account_address (str): The account to get the unstakes for.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the account unstake positions.
        """
        return self.__dispatch(
            endpoint = "account/unstakes",
            params = {
                "account_identifier": AccountIdentifier(account_address),
                "at_state_identifier": state_identifier
            }
        )

    def get_account_transactions(
        self,
        account_address: str,
        state_identifier: Optional[StateIdentifier] = None,
        cursor: Optional[str] = None,
        limit: int = 30,
    ) -> Dict[str, Any]:
        """ Returns user-initiated transactions involving the given account address which have been
        successfully committed to the ledger. The transactions are returned in a paginated format,
        ordered by most recent.

        Args:
            account_address (str): The account to get the transactions for.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.
            cursor (:obj:`str`, optional): A timestamp of when to begin getting transactions.
            limit (int): The page size requested. The maximum value is 30 at present

        Returns:
            dict: A dictionary of the transactions information.
        """
        return self.__dispatch(
            endpoint = "account/transactions",
            params = {
                "account_identifier": AccountIdentifier(account_address),
                "at_state_identifier": state_identifier,
                "cursor": cursor,
                "limit": limit
            },
        )

    # ######################################
    # ---------- Token Endpoints ----------
    # ######################################

    def get_native_token_info(
        self,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns information about XRD, including its Radix Resource Identifier (RRI).

        Args:
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the token information
        """
        return self.__dispatch(
            endpoint = "token/native",
            params = {
                "at_state_identifier": state_identifier,
            }
        )

    def get_token_info(
        self,
        token_rri: str,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns information about any token, given its Radix Resource Identifier (RRI).

        Args:
            token_rri (str): The RRI of the token to get the information for.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the token information
        """
        return self.__dispatch(
            endpoint = "token",
            params = {
                "token_identifier": TokenIdentifier(token_rri),
                "at_state_identifier": state_identifier,
            }
        )

    def derive_token_identifier(
        self,
        public_key: str,
        symbol: str
    ) -> Dict[str, Any]:
        """ Returns the Radix Resource Identifier of a token with the given symbol, created by an
        account with the given public key.

        Args:
            public_key (str): The public key of the token creator.
            symbol (str): The 3 to 8 character long symbol assigned to the token.

        Returns:
            dict: A dictionary containing the token's RRI.
        """
        return self.__dispatch(
            endpoint = "token/derive",
            params = {
                "symbol": symbol.lower(),
                "public_key": {
                    "hex": public_key
                }
            }
        )

    # #########################################
    # ---------- Validator Endpoints ----------
    # #########################################

    def get_validator(
        self,
        validator_address: str,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns information about a validator, given a validator address

        Args:
            validator_address (str): An identifier for the validator to get info on.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the validator info.
        """
        return self.__dispatch(
            endpoint = "validator",
            params = {
                "validator_identifier": ValidatorIdentifier(validator_address),
                "at_state_identifier": state_identifier
            }
        )

    def get_validator_identifier(
        self,
        public_key: str,
    ) -> Dict[str, Any]:
        """ Returns the validator address associated with the given public key

        Args:
            public_key (str): The public key of the validator

        Returns:
            dict: A dictionary of the validator info.
        """
        return self.__dispatch(
            endpoint = "validator/derive",
            params = {
                "public_key": {
                    "hex": public_key
                }
            }
        )

    def get_validators(
        self,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns information about all validators.

        Args:
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the validators
        """
        return self.__dispatch(
            endpoint = "validators",
            params = {
                "at_state_identifier": state_identifier
            }
        )

    def get_validator_stakes(
        self,
        validator_address: str,
        state_identifier: Optional[StateIdentifier] = None,
        cursor: Optional[str] = None,
        limit: int = 30,
    ) -> Dict[str, Any]:
        """ Returns paginated results about the delegated stakes from accounts to a validator. The
        results are totalled by account, and ordered by account age (oldest to newest).

        Args:
            validator_address (str): A string of the validator address
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.
            cursor (:obj:`str`, optional): A timestamp of when to begin getting transactions.
            limit (int): The page size requested. The maximum value is 30 at present.

        Returns:
            dict: A dictionary of the validator stakes
        """
        return self.__dispatch(
            endpoint = "validator/stakes",
            params = {
                "at_state_identifier": state_identifier,
                "validator_identifier": ValidatorIdentifier(validator_address),
                "cursor": cursor,
                "limit": limit
            }
        )

    # ###########################################
    # ---------- Transaction Endpoints ----------
    # ###########################################

    def get_transaction_rules(
        self,
        state_identifier: Optional[StateIdentifier] = None
    ) -> Dict[str, Any]:
        """ Returns the current rules used to build and validate transactions in the Radix Engine.

        Args:
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Returns:
            dict: A dictionary of the transaction rules.
        """
        return self.__dispatch(
            endpoint = "transaction/rules",
            params = {}
        )

    def build_transaction(
        self,
        actions: Union[List[ActionType], radix.ActionBuilder],
        fee_payer: str,
        message_bytes: Optional[Union[str, bytes, bytearray]] = None,
        state_identifier: Optional[StateIdentifier] = None,
        disable_token_mint_and_burn: Optional[bool] = None,
    ) -> Dict[str, Any]:
        """ Returns a built unsigned transaction payload, from a set of intended actions.

        Args:
            actions (Union[List[ActionType], radix.ActionBuilder]): Either a list of actions or an
                ActionBuilder used to create the actions.
            fee_payer (str): The address of the wallet paying the fees of the transaction.
            message_bytes (Union[str, bytes, bytearray], optional): An optional argument for the
                message to include in the transaction. This argument expects the bytes to be passed
                to it. So, this should either be the hex string of the bytes or a bytes object.
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.
            disable_token_mint_and_burn (bool, optional): If true, mints and burns (aside from fee
                payments) are not permitted during transaction execution.

        Returns:
            dict: A dictionary of the transaction details and blob
        """
        return self.__dispatch(
            endpoint = "transaction/build",
            params = {
                "at_state_identifier": state_identifier,
                "actions": actions if isinstance(actions, list) else actions.to_action_list(),
                "fee_payer": AccountIdentifier(fee_payer),
                "message": message_bytes.hex() if isinstance(message_bytes, (bytes, bytearray)) else message_bytes,
                "disable_token_mint_and_burn": disable_token_mint_and_burn
            }
        )

    def finalize_transaction(
        self,
        unsigned_transaction: Union[str, bytes, bytearray],
        signature_der: Union[str, bytes, bytearray],
        public_key: str,
        submit: Optional[bool] = None
    ) -> Dict[str, Any]:
        """ Returns a signed transaction payload and transaction identifier, from an unsigned
        transaction payload and signature.

        Args:
            unsigned_transaction (Union[str, bytes, bytearray]): A bytes like object containing the
                transaction blob.
            signature_der (Union[str, bytes, bytearray]): A bytes like object of the signature in
                the DER format.
            public_key (str): The public key of the sender of the transaction.
            submit (Optional[bool]): An optional boolean which defines whether or not a transaction
                should be submitted immediately upon finalization.

        Returns:
            dict: A dictionary of the signed transaction information.
        """
        return self.__dispatch(
            endpoint = "transaction/finalize",
            params = {
                "unsigned_transaction": unsigned_transaction if isinstance(unsigned_transaction, str) else unsigned_transaction.hex(),
                "signature": {
                    "bytes": signature_der if isinstance(signature_der, str) else signature_der.hex(),
                    "public_key": {
                        "hex": public_key,
                    }
                },
                "submit": submit
            }
        )

    def submit_transaction(
        self,
        signed_transaction: Union[str, bytes, bytearray]
    ) -> Dict[str, Any]:
        """ Submits a signed transaction payload to the network. The transaction identifier from
        finalize or submit can then be used to track the transaction status.

        Args:
            signed_transaction (Union[str, bytes, bytearray]): A string or bytes like object which
                contains the bytes of the signed transaction to submit to the network.

        Returns:
            dict: A dictionary of the submitted transaction information.
        """
        return self.__dispatch(
            endpoint = "transaction/submit",
            params = {
                "signed_transaction": signed_transaction if isinstance(signed_transaction, str) else signed_transaction.hex(),
            }
        )

    def transaction_status(
        self,
        transaction_hash: str,
        state_identifier: Optional[StateIdentifier] = None,
    ) -> Dict[str, Any]:
        """ Returns the status and contents of the transaction with the given transaction identifier.
        Transaction identifiers which aren't recognised as either belonging to a committed
        transaction or a transaction submitted through this Network Gateway may return a
        TransactionNotFoundError. Transaction identifiers relating to failed transactions will,
        after a delay, also be reported as a TransactionNotFoundError.

        Args:
            transaction_hash (str): An identifier for the transaction
            state_identifier (:obj:`StateIdentifier`, optional): An optional argument that defaults
                to None. Allows a client to request a response referencing an earlier ledger state.

        Return:
            dict: A dictionary of the transaction information.
        """
        return self.__dispatch(
            endpoint = "transaction/status",
            params = {
                "transaction_identifier": TransactionIdentifier(transaction_hash),
                "at_state_identifier": state_identifier,
            }
) | 39.313659 | 134 | 0.59306 | 2,513 | 23,313 | 5.388778 | 0.144847 | 0.050953 | 0.01403 | 0.034559 | 0.48213 | 0.421356 | 0.362871 | 0.34345 | 0.310959 | 0.299586 | 0 | 0.001009 | 0.320079 | 23,313 | 593 | 135 | 39.313659 | 0.853322 | 0.473513 | 0 | 0.443262 | 0 | 0.003546 | 0.124018 | 0.024156 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078014 | false | 0 | 0.021277 | 0 | 0.177305 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34624bfe3809ef151eb67f5a9916a24938c1e64 | 3,109 | py | Python | setup.py | Mr0l3/filometro | 46af52121fc3ed94935c9965d2ff51d2d26d1809 | ["MIT"] | null | null | null | setup.py | Mr0l3/filometro | 46af52121fc3ed94935c9965d2ff51d2d26d1809 | ["MIT"] | null | null | null | setup.py | Mr0l3/filometro | 46af52121fc3ed94935c9965d2ff51d2d26d1809 | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-

import os
import sys
from shutil import rmtree

from setuptools import setup, find_packages, Command

from filometro import __version__
from filometro import __author__


here = os.path.abspath(os.path.dirname(__file__))


class PublishCommand(Command):
    """Support setup.py publish."""

    description = 'Build and publish the package on PyPI.'
    user_options = []

    @staticmethod
    def print_status(msg):
        """Prints message in bold and yellow."""
        print('\033[1;33m{m}\033[0m'.format(m=msg))

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        try:
            self.print_status('Removing previous builds…')
            rmtree(os.path.join(here, 'dist'))
            rmtree(os.path.join(here, 'build'))
        except OSError:
            pass

        self.print_status('Build Source and Wheel distribution…')
        os.system('{python} setup.py sdist bdist_wheel'.format(python=sys.executable))

        self.print_status('Uploading the package to PyPi via Twine…')
        os.system('twine upload --config-file .pypirc --repository pypi dist/*')

        sys.exit()
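
# Hedged usage note: once registered via the cmdclass mapping below, the
# command above is invoked as
#
#   python setup.py publish
#
# which removes old build artefacts, rebuilds sdist/wheel, and uploads via Twine.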

with open(os.path.join(here, 'README.md'), mode='r', encoding='utf-8') as f:
    long_description = '\n' + f.read()


setup(
    name='filometro',
    version=__version__,
    description='A Python wrapper for the "De Olho na Fila" website: get data on the covid-19 vaccination sites in São Paulo',
    long_description=long_description,
    long_description_content_type='text/markdown',
    license='MIT License',
    author=__author__,
    author_email='matheusfelipeog@protonmail.com',
    url='https://github.com/matheusfelipeog/filometro',
    packages=find_packages(
        exclude=('tests',)
    ),
    install_requires=[
        'requests',
        'pandas'
    ],
    zip_safe=False,
    python_requires='>=3.7',
    project_urls={
        "Bug Tracker": "https://github.com/matheusfelipeog/filometro/issues",
        "Documentation": "https://github.com/matheusfelipeog/filometro",
        "Source Code": "https://github.com/matheusfelipeog/filometro",
    },
    keywords=[
        'filometro', 'de-olho-na-fila', 'data', 'sao-paulo',
        'covid-19', 'vacina', 'vacinasampa', 'python', 'wrapper'
    ],
    classifiers=[
        'Development Status :: 4 - Beta',
        'Intended Audience :: Developers',
        'Intended Audience :: Information Technology',
        'License :: OSI Approved :: MIT License',
        'Natural Language :: Portuguese (Brazilian)',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Programming Language :: Python :: 3.9',
        'Programming Language :: Python :: Implementation :: CPython',
        'Topic :: Software Development :: Libraries',
        'Topic :: Software Development :: Libraries :: Python Modules'
    ],

    # setup.py publish support
    cmdclass={
        'publish': PublishCommand
    }
) | 30.480392 | 129 | 0.630749 | 344 | 3,109 | 5.607558 | 0.511628 | 0.015552 | 0.0648 | 0.060135 | 0.099533 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010513 | 0.235124 | 3,109 | 102 | 130 | 30.480392 | 0.796888 | 0.034738 | 0 | 0.076923 | 0 | 0.012821 | 0.432107 | 0.010033 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0.038462 | 0.076923 | 0 | 0.166667 | 0.064103 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34719cd1fc2bb1e55eb9ba89ae807337753e699 | 801 | py | Python | examples/abcbook/example206.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | ["MIT"] | 2 | 2019-07-10T08:54:43.000Z | 2019-09-09T16:17:04.000Z | examples/abcbook/example206.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | ["MIT"] | 1 | 2019-09-11T21:29:47.000Z | 2019-09-11T21:29:47.000Z | examples/abcbook/example206.py | martinlackner/approval-multiwinner | 17bb6294b1910531b66c457f7b1a34a966d4113d | ["MIT"] | 1 | 2019-09-11T16:56:23.000Z | 2019-09-11T16:56:23.000Z |
"""
Example 2.6 (PAV, seq-PAV, revseq-PAV).

From "Multi-Winner Voting with Approval Preferences"
by Martin Lackner and Piotr Skowron
https://arxiv.org/abs/2007.01795
"""

from abcvoting import abcrules
from abcvoting.preferences import Profile
from abcvoting import misc
from abcvoting.output import output, DETAILS


output.set_verbosity(DETAILS)

print(misc.header("Example 6", "*"))

# Approval profile
num_cand = 4
a, b, c, d = range(4)  # a = 0, b = 1, c = 2, ...
cand_names = "abcd"

profile = Profile(num_cand, cand_names=cand_names)
profile.add_voters([{a, b}, {a, b, c}, {a, b, d}, {a, c, d}, {a, c, d}, {b}, {c}, {d}])

print(misc.header("Input:"))
print(profile.str_compact())

committees_av = abcrules.compute_av(profile, 1)

committees_revseqpav = abcrules.compute_revseqpav(profile, 1)
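
# A short, hedged addition: the example computes the committees above but never
# shows them; printing the raw return values keeps the script self-contained.
print(misc.header("Output:"))
print(committees_av)
print(committees_revseqpav)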
| 24.272727 | 87 | 0.705368 | 125 | 801 | 4.424 | 0.456 | 0.094033 | 0.068716 | 0.014467 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027457 | 0.13608 | 801 | 32 | 88 | 25.03125 | 0.771676 | 0.25593 | 0 | 0 | 0 | 0 | 0.034072 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.266667 | 0 | 0.266667 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3490a3f712a2661b2490d387f96a6b22dd40ef8 | 7,605 | py | Python | Cogs/Encode.py | Damiian1/techwizardshardware | 97ceafc15036be4136e860076d73d74f1887f041 | ["MIT"] | null | null | null | Cogs/Encode.py | Damiian1/techwizardshardware | 97ceafc15036be4136e860076d73d74f1887f041 | ["MIT"] | null | null | null | Cogs/Encode.py | Damiian1/techwizardshardware | 97ceafc15036be4136e860076d73d74f1887f041 | ["MIT"] | null | null | null |
import asyncio
import discord
from discord.ext import commands
import base64
import binascii
import re
from Cogs import Nullify


def setup(bot):
    # Add the bot and deps
    settings = bot.get_cog("Settings")
    bot.add_cog(Encode(bot, settings))


class Encode:
    # Init with the bot reference
    def __init__(self, bot, settings):
        self.bot = bot
        self.settings = settings

    def suppressed(self, guild, msg):
        # Check if we're suppressing @here and @everyone mentions
        if self.settings.getServerStat(guild, "SuppressMentions"):
            return Nullify.clean(msg)
        else:
            return msg

    # Helper methods
    def _to_bytes(self, in_string):
        return in_string.encode('utf-8')

    def _to_string(self, in_bytes):
        return in_bytes.decode('utf-8')

    # Check hex value
    def _check_hex(self, hex_string):
        if hex_string.lower().startswith("0x"):
            hex_string = hex_string[2:]
        hex_string = re.sub(r'[^0-9A-Fa-f]+', '', hex_string)
        return hex_string

    # To base64 methods
    def _ascii_to_base64(self, ascii_string):
        ascii_bytes = self._to_bytes(ascii_string)
        base_64 = base64.b64encode(ascii_bytes)
        return self._to_string(base_64)

    def _hex_to_base64(self, hex_string):
        hex_string = self._check_hex(hex_string)
        hex_s_bytes = self._to_bytes(hex_string)
        hex_bytes = binascii.unhexlify(hex_s_bytes)
        base64_bytes = base64.b64encode(hex_bytes)
        return self._to_string(base64_bytes)

    # To ascii methods
    def _hex_to_ascii(self, hex_string):
        hex_string = self._check_hex(hex_string)
        hex_bytes = self._to_bytes(hex_string)
        ascii_bytes = binascii.unhexlify(hex_bytes)
        return self._to_string(ascii_bytes)

    def _base64_to_ascii(self, base64_string):
        base64_bytes = self._to_bytes(base64_string)
        ascii_bytes = base64.b64decode(base64_bytes)
        return self._to_string(ascii_bytes)

    # To hex methods
    def _ascii_to_hex(self, ascii_string):
        ascii_bytes = self._to_bytes(ascii_string)
        hex_bytes = binascii.hexlify(ascii_bytes)
        return self._to_string(hex_bytes)

    def _base64_to_hex(self, base64_string):
        b64_string = self._to_bytes(base64_string)
        base64_bytes = base64.b64decode(b64_string)
        hex_bytes = binascii.hexlify(base64_bytes)
        return self._to_string(hex_bytes)

    @commands.command(pass_context=True)
    async def slide(self, ctx, input_hex = None):
        """Calculates your slide value for Clover based on an input address (in hex)."""
        try:
            # We're accepting strings here - convert
            start_addr = int(input_hex, 16)
        except:
            await ctx.send("Malformed input hex - try again.")
            return
        # Setup our temp vars
        first_str = "0x100000"
        first = int(first_str, 16)
        secon_str = "0x200000"
        secon = int(secon_str, 16)
        slide_float = ( start_addr - first ) / secon
        if slide_float > int(slide_float):
            # has a > 0 decimal - round up
            slide_float = int(slide_float) + 1
        await ctx.send("```\nslide={}\n```".format(slide_float))

    @commands.command(pass_context=True)
    async def hexdec(self, ctx, *, input_hex = None):
        """Converts hex to decimal."""
        if input_hex == None:
            await ctx.send("Usage: `{}hexdec [input_hex]`".format(ctx.prefix))
            return
        input_hex = self._check_hex(input_hex)
        if not len(input_hex):
            await ctx.send("Malformed hex - try again.")
            return
        try:
            dec = int(input_hex, 16)
        except Exception:
            await ctx.send("I couldn't make that conversion!")
            return
        await ctx.send(dec)

    @commands.command(pass_context=True)
    async def dechex(self, ctx, *, input_dec = None):
        """Converts an int to hex."""
        if input_dec == None:
            await ctx.send("Usage: `{}dechex [input_dec]`".format(ctx.prefix))
            return
        try:
            input_dec = int(input_dec)
        except Exception:
            await ctx.send("Input must be an integer.")
            return
        await ctx.send("0x" + "{:x}".format(input_dec).upper())

    @commands.command(pass_context=True)
    async def strbin(self, ctx, *, input_string = None):
        """Converts the input string to its binary representation."""
        if input_string == None:
            await ctx.send("Usage: `{}strbin [input_string]`".format(ctx.prefix))
            return
        msg = ''.join('{:08b}'.format(ord(c)) for c in input_string)
        # Format into blocks:
        # - First split into chunks of 8
        msg_list = re.findall('.{8}', msg)
        # Now we format!
        msg = "```\n"
        msg += " ".join(msg_list)
        msg += "```"
        if len(msg) > 1993:
            await ctx.send("Well... that was *a lot* of 1s and 0s. Maybe try a smaller string... Discord won't let me send all that.")
            return
        await ctx.send(msg)

    @commands.command(pass_context=True)
    async def binstr(self, ctx, *, input_binary = None):
        """Converts the input binary to its string representation."""
        if input_binary == None:
            await ctx.send("Usage: `{}binstr [input_binary]`".format(ctx.prefix))
            return
        # Clean the string
        new_bin = ""
        for char in input_binary:
            if char == "0" or char == "1":
                new_bin += char
        if not len(new_bin):
            await ctx.send("Usage: `{}binstr [input_binary]`".format(ctx.prefix))
            return
        msg = ''.join(chr(int(new_bin[i:i+8], 2)) for i in range(0, len(new_bin), 8))
        await ctx.send(self.suppressed(ctx.guild, msg))

    @commands.command(pass_context=True)
    async def binint(self, ctx, *, input_binary = None):
        """Converts the input binary to its integer representation."""
        if input_binary == None:
            await ctx.send("Usage: `{}binint [input_binary]`".format(ctx.prefix))
            return
        try:
            msg = int(input_binary, 2)
        except Exception:
            msg = "I couldn't make that conversion!"
        await ctx.send(msg)

    @commands.command(pass_context=True)
    async def intbin(self, ctx, *, input_int = None):
        """Converts the input integer to its binary representation."""
        if input_int == None:
            await ctx.send("Usage: `{}intbin [input_int]`".format(ctx.prefix))
            return
        try:
            input_int = int(input_int)
        except Exception:
            await ctx.send("Input must be an integer.")
            return
        await ctx.send("{:08b}".format(input_int))

    @commands.command(pass_context=True)
    async def encode(self, ctx, value = None , from_type = None, *, to_type = None):
        """Data converter from ascii <--> hex <--> base64."""
        if value == None or from_type == None or to_type == None:
            msg = 'Usage: `{}encode "[value]" [from_type] [to_type]`\nTypes include ascii, hex, and base64.'.format(ctx.prefix)
            await ctx.send(msg)
            return
        types = [ "base64", "hex", "ascii" ]
        if not from_type.lower() in types:
            await ctx.send("Invalid *from* type!")
            return
        if not to_type.lower() in types:
            await ctx.send("Invalid *to* type!")
            return
        if from_type.lower() == to_type.lower():
            await ctx.send("*Poof!* Your encoding was done before it started!")
            return
        try:
            if from_type.lower() == "base64":
                if to_type.lower() == "hex":
                    await ctx.send(self.suppressed(ctx.guild, self._base64_to_hex(value)))
                    return
                elif to_type.lower() == "ascii":
                    await ctx.send(self.suppressed(ctx.guild, self._base64_to_ascii(value)))
                    return
            elif from_type.lower() == "hex":
                if to_type.lower() == "ascii":
                    await ctx.send(self.suppressed(ctx.guild, self._hex_to_ascii(value)))
                    return
                elif to_type.lower() == "base64":
                    await ctx.send(self.suppressed(ctx.guild, self._hex_to_base64(value)))
                    return
            elif from_type.lower() == "ascii":
                if to_type.lower() == "hex":
                    await ctx.send(self.suppressed(ctx.guild, self._ascii_to_hex(value)))
                    return
                elif to_type.lower() == "base64":
                    await ctx.send(self.suppressed(ctx.guild, self._ascii_to_base64(value)))
                    return
        except Exception:
            await ctx.send("I couldn't make that conversion!")
            return
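
# A minimal round-trip sketch of the pure conversion helpers above; the
# bot/settings arguments are unused by these methods, so dummies suffice.
# Hedged: this block is illustrative and not part of the cog's runtime path.
if __name__ == "__main__":
    _enc = Encode(bot=None, settings=None)
    assert _enc._ascii_to_hex("abc") == "616263"
    assert _enc._hex_to_ascii("0x616263") == "abc"
    assert _enc._base64_to_ascii(_enc._ascii_to_base64("abc")) == "abc"
    print("round-trip helpers OK")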
| 30.298805 | 126 | 0.687968 | 1,134 | 7,605 | 4.424162 | 0.169312 | 0.049432 | 0.074148 | 0.041459 | 0.480965 | 0.418975 | 0.340243 | 0.270082 | 0.243771 | 0.228025 | 0 | 0.019943 | 0.175805 | 7,605 | 250 | 127 | 30.42 | 0.780472 | 0.046811 | 0 | 0.352941 | 0 | 0.010695 | 0.128918 | 0 | 0 | 0 | 0.002355 | 0 | 0 | 1 | 0.064171 | false | 0.042781 | 0.037433 | 0.010695 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34955dfe5c35177be930f5f64d0fb5e134d2832 | 1,968 | py | Python | 9.classes/a_first_look_at_classes.py | yuishihara/python_tutorial | c88db5b1c002dcf69e183ed1a45c02a08aee905c | ["MIT"] | null | null | null | 9.classes/a_first_look_at_classes.py | yuishihara/python_tutorial | c88db5b1c002dcf69e183ed1a45c02a08aee905c | ["MIT"] | null | null | null | 9.classes/a_first_look_at_classes.py | yuishihara/python_tutorial | c88db5b1c002dcf69e183ed1a45c02a08aee905c | ["MIT"] | null | null | null |
#!/usr/bin/env python3


class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'


def class_objects():
    print(MyClass.i)
    print(MyClass.f)
    print(MyClass.f(None))
    x = MyClass()

    class Complex:
        def __init__(self, realpart, imagpart):
            self.r = realpart
            self.i = imagpart

    x = Complex(3.0, -4.5)
    print(x.r, x.i)


def instance_objects():
    x = MyClass()
    x.counter = 1
    while x.counter < 10:
        x.counter = x.counter*2
    print(x.counter)
    del x.counter
    x.counter = 1


def method_objects():
    x = MyClass()
    print(x.f())
    xf = x.f
    print(xf())


def good_designed_class():
    class Dog:
        kind = 'canine'

        def __init__(self, name):
            self.tricks = []
            self.name = name

        def add_trick(self, trick):
            self.tricks.append(trick)

    d = Dog('Fido')
    e = Dog('Buddy')
    d.add_trick('roll over')
    e.add_trick('play dead')
    print('good', d.tricks)
    print('good', e.tricks)


def bad_designed_class():
    class Dog:
        kind = 'canine'
        tricks = []  # class attribute: a single list shared by every Dog instance

        def __init__(self, name):
            self.name = name

        def add_trick(self, trick):
            self.tricks.append(trick)

    d = Dog('Fido')
    e = Dog('Buddy')
    d.add_trick('roll over')
    e.add_trick('play dead')
    print('bad', d.tricks)
    print('bad', e.tricks)


def class_and_instance_variables():
    class Dog:
        kind = 'canine'

        def __init__(self, name):
            self.name = name

        def add_trick(self, trick):
            self.tricks.append(trick)

    d = Dog('Fido')
    e = Dog('Buddy')
    print(d.kind)
    print(e.kind)
    print(d.name)
    print(e.name)


bad_designed_class()
good_designed_class()
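
# A short hedged addendum (not from the original tutorial file) making the
# shared-state pitfall above explicit with a second mutable class attribute:
def shared_state_demo():
    class Counter:
        values = []  # class attribute: one list shared by all instances

        def add(self, v):
            self.values.append(v)  # falls through to the shared class attribute

    a = Counter()
    b = Counter()
    a.add(1)
    print('shared', b.values)  # prints [1]: b observes a's append


shared_state_demo()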

if __name__ == "__main__":
    class_objects()
    instance_objects()
    method_objects()
    class_and_instance_variables()
| 18.055046 | 47 | 0.554878 | 257 | 1,968 | 4.050584 | 0.241245 | 0.053794 | 0.042267 | 0.051873 | 0.400576 | 0.400576 | 0.358309 | 0.358309 | 0.358309 | 0.305476 | 0 | 0.011029 | 0.308943 | 1,968 | 108 | 48 | 18.222222 | 0.754412 | 0.022358 | 0 | 0.434211 | 0 | 0 | 0.059437 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0 | 0 | 0.013158 | 0.328947 | 0.197368 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34a6562345bbb78b3671e1bb467aac47ea7b5ce | 1,664 | py | Python | omega_miya/plugins/bilibili_dynamic_monitor/util.py | Ailitonia/nonebot_miya | 71d8cf3e91bbb0dd63cbee85fd757c738b99f10c | ["MIT"] | 5 | 2020-10-03T14:24:41.000Z | 2021-03-05T12:36:44.000Z | omega_miya/plugins/bilibili_dynamic_monitor/util.py | Ailitonia/nonebot_miya | 71d8cf3e91bbb0dd63cbee85fd757c738b99f10c | ["MIT"] | null | null | null | omega_miya/plugins/bilibili_dynamic_monitor/util.py | Ailitonia/nonebot_miya | 71d8cf3e91bbb0dd63cbee85fd757c738b99f10c | ["MIT"] | null | null | null |
import base64
import aiohttp
from io import BytesIO
from nonebot import log


# Convert an image (fetched from a url) to a base64 uri string
async def pic_2_base64(url: str) -> str:
    async def get_image(pic_url: str):
        timeout_count = 0
        while timeout_count < 3:
            try:
                timeout = aiohttp.ClientTimeout(total=10)
                async with aiohttp.ClientSession(timeout=timeout) as session:
                    headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                                             'AppleWebKit/537.36 (KHTML, like Gecko) '
                                             'Chrome/83.0.4103.116 Safari/537.36'}
                    async with session.get(url=pic_url, headers=headers, timeout=timeout) as resp:
                        __res = await resp.read()
                        return __res
            except Exception as err:
                log.logger.warning(f'{__name__}: error info: {err}. '
                                   f'Occurred in try {timeout_count + 1} using paras: {pic_url}')
            finally:
                timeout_count += 1
        else:
            # the while/else branch only runs when all three attempts failed
            log.logger.warning(f'{__name__}: error info: Exceeds the set timeout time. '
                               f'Timeout in {timeout_count} times, using paras: {pic_url}')
            return None

    origin_image_f = BytesIO()
    try:
        origin_image_f.write(await get_image(pic_url=url))
    except Exception as e:
        log.logger.error(f'{__name__}: error info: {e}.')
        return ''
    b64 = base64.b64encode(origin_image_f.getvalue())
    b64 = str(b64, encoding='utf-8')
    b64 = 'base64://' + b64
    origin_image_f.close()
    return b64
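
# A minimal usage sketch (hedged; the url is a placeholder and running this
# performs real network I/O):
#
#   import asyncio
#   b64_uri = asyncio.run(pic_2_base64('https://example.com/pic.jpg'))
#   # b64_uri is 'base64://...' on success, '' on failure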
| 39.619048 | 98 | 0.546875 | 196 | 1,664 | 4.44898 | 0.443878 | 0.034404 | 0.055046 | 0.048165 | 0.068807 | 0.068807 | 0.068807 | 0 | 0 | 0 | 0 | 0.056744 | 0.353966 | 1,664 | 41 | 99 | 40.585366 | 0.754419 | 0.005409 | 0 | 0.054054 | 0 | 0 | 0.221416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.108108 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34a72eb51bbf4be0f21e5a82d02a446a82a711f | 2,972 | py | Python | server_obj.py | varadhodiyil/vision_pipeline | 3cf2925b9d799bbd0dbf213d23ca35bfed927381 | ["MIT"] | 1 | 2020-10-05T10:46:44.000Z | 2020-10-05T10:46:44.000Z | server_obj.py | varadhodiyil/vision_pipeline | 3cf2925b9d799bbd0dbf213d23ca35bfed927381 | ["MIT"] | null | null | null | server_obj.py | varadhodiyil/vision_pipeline | 3cf2925b9d799bbd0dbf213d23ca35bfed927381 | ["MIT"] | null | null | null |
from __future__ import print_function
import sys
import socket
import numpy as np
from PIL import Image
try:
    import cPickle as pickle
except ImportError:
    import pickle
from datetime import datetime
import threading
from client_send_message import MessageSender
from color_classifier import detect_color
import json

sender = MessageSender()

from yolo import YOLO
pred = YOLO()

s = socket.socket()
print("Socket Conn Started")
s.bind((b'',6666))
s.listen(1)
# Socket server to receive an image and send it for object detection and color detection
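# A hedged sketch of the matching client side (host assumed local; the server
# below reads until EOF, so the client must half-close its write end before
# reading the pickled response):
#
#   c = socket.socket()
#   c.connect(('127.0.0.1', 6666))
#   c.sendall(pickle.dumps(frame, protocol=2))   # frame: a numpy image array
#   c.shutdown(socket.SHUT_WR)
#   resp = pickle.loads(b"".join(iter(lambda: c.recv(4096), b"")))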
idx = 0
while True:
    c,a = s.accept()
    # data = b''
    _data = []
    print(a)
    while True:
        block = c.recv(4096)
        if not block:
            break
        # print(block.decode("ascii","ignore"))
        # if block.decode("ascii","ignore") in 'EOF':
        #     break
        # data += block
        _data.append(block)
        # if len(block) < 4096:
        #     break
    # c.shutdown(socket.SHUT_RD)
    # c.close()
    # c.shutdown()
    data = b"".join(_data)
    if sys.version_info.major < 3:
        unserialized_input = pickle.loads(data)
    else:
        unserialized_input = pickle.loads(data,encoding='bytes')

    if unserialized_input is not None:
        img = Image.fromarray(unserialized_input)
        rec = datetime.now()
        # img = unserialized_input
        # images = list()
        # images.append(img)
        # images = np.array(images,dtype=float)
        resp = pred.detect_image(img)
        # c.connect(a)
        class_ = len(resp)
        _proc = "{:f}".format(float((datetime.now() -rec).total_seconds()))
        hasCar = False
        if class_ > 0:
            hasCar = True
        resp_json = dict()
        resp_json['has_car'] = hasCar
        resp_json['num_cars'] = class_
        resp_json['num_cars_time'] = _proc
        # c.close()
        # respS = socket.socket()
        # respS.bind((b'',6666))
        # respS.close()
        # sender.send_message("{0},num_cars,{1},num_cars_time,{2}".format(idx,class_,_proc))
        dete_colours = list()
        rec = datetime.now()
        for _i,_r in enumerate(resp):
            # print(_r)
            _img = img.crop(_r)
            _img = _img.resize((224,224))
            _img = np.array(_img)
            car_color = detect_color(_img)
            print(car_color)
            dete_colours.append(car_color)
        d_clr = "---".join(dete_colours)
        _proc = "{:f}".format(float((datetime.now() - rec).total_seconds()))
        if not hasCar:
            _proc = ""
        resp_json['colours'] = d_clr
        resp_json['colours_time'] = _proc
        print("Sending",resp_json)
        c.sendall(pickle.dumps(resp_json,protocol=2))
        c.close()
        # sender.send_message("{0},colours,{1},colours_time,{2}".format(idx,d_clr,_proc))
        idx = idx + 1
        # print(img.size)
# img.show() | 27.018182 | 92 | 0.569314 | 363 | 2,972 | 4.46281 | 0.349862 | 0.039506 | 0.011111 | 0.02716 | 0.119753 | 0.051852 | 0.051852 | 0.051852 | 0.051852 | 0 | 0 | 0.016481 | 0.305855 | 2,972 | 110 | 93 | 27.018182 | 0.768783 | 0.222073 | 0 | 0.088235 | 0 | 0 | 0.038899 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.205882 | 0 | 0.205882 | 0.073529 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34ceef8d6c2e4dc17ff23fbef63dc632e613112 | 4,332 | py | Python | legacy/metamodel/nn_generator/nn_trainer_evaluate.py | AlexJew/CityEnergyAnalyst | 6eb372c79e5100a2d0abce78561ae368fb409cd1 | ["MIT"] | null | null | null | legacy/metamodel/nn_generator/nn_trainer_evaluate.py | AlexJew/CityEnergyAnalyst | 6eb372c79e5100a2d0abce78561ae368fb409cd1 | ["MIT"] | null | null | null | legacy/metamodel/nn_generator/nn_trainer_evaluate.py | AlexJew/CityEnergyAnalyst | 6eb372c79e5100a2d0abce78561ae368fb409cd1 | ["MIT"] | null | null | null |
# coding=utf-8
"""
'input_matrix.py' script hosts the following functions:
(1) collect CEA inputs
(2) collect CEA outputs (demands)
(3) add delay to time-sensitive inputs
(4) return the input and target matrices
"""
from math import sqrt
import pandas as pd
from legacy.metamodel.nn_generator import random_variables, target_parameters
from legacy.metamodel.nn_generator import nn_model_collector
from legacy.metamodel.nn_generator import sampling_single
from sklearn.metrics import mean_squared_error, mean_absolute_error
import cea.inputlocator
import cea.config
from cea.demand.demand_main import properties_and_schedule
from cea.utilities import epwreader
__author__ = "Jimeno A. Fonseca","Fazel Khayatian"
__copyright__ = "Copyright 2017, Architecture and Building Systems - ETH Zurich"
__credits__ = ["Jimeno A. Fonseca", "Fazel Khayatian"]
__license__ = "MIT"
__version__ = "0.1"
__maintainer__ = "Daren Thomas"
__email__ = "cea@arch.ethz.ch"
__status__ = "Production"
def get_nn_performance(model, scalerT, scalerX, urban_input_matrix, urban_taget_matrix, locator):
input_NN_x = urban_input_matrix
target_NN_t = urban_taget_matrix
inputs_x = scalerX.transform(input_NN_x)
model_estimates = model.predict(inputs_x)
filtered_predict = scalerT.inverse_transform(model_estimates)
rmse_Qhsf = sqrt(mean_squared_error(target_NN_t[:, 0], filtered_predict[:, 0]))
rmse_Qcsf = sqrt(mean_squared_error(target_NN_t[:, 1], filtered_predict[:, 1]))
rmse_Qwwf = sqrt(mean_squared_error(target_NN_t[:, 2], filtered_predict[:, 2]))
rmse_Ef = sqrt(mean_squared_error(target_NN_t[:, 3], filtered_predict[:, 3]))
rmse_T_int = sqrt(mean_squared_error(target_NN_t[:, 4], filtered_predict[:, 4]))
mbe_Qhsf = mean_absolute_error(target_NN_t[:, 0], filtered_predict[:, 0])
mbe_Qcsf = mean_absolute_error(target_NN_t[:, 1], filtered_predict[:, 1])
mbe_Qwwf = mean_absolute_error(target_NN_t[:, 2], filtered_predict[:, 2])
mbe_Ef = mean_absolute_error(target_NN_t[:, 3], filtered_predict[:, 3])
mbe_T_int = mean_absolute_error(target_NN_t[:, 4], filtered_predict[:, 4])
print ("the rmse of Qhsf is %d and the mbe is %d" %(rmse_Qhsf, mbe_Qhsf))
print (rmse_Qcsf, mbe_Qcsf)
print (rmse_Qwwf, mbe_Qwwf)
print (rmse_Ef, mbe_Ef)
print (rmse_T_int, mbe_T_int)
model_estimates = locator.get_neural_network_estimates()
filtered_predict = pd.DataFrame(filtered_predict)
filtered_predict.to_csv(model_estimates, index=False, header=False, float_format='%.3f', decimal='.')
return urban_input_matrix, urban_taget_matrix
def eval_nn_performance(locator, random_variables, target_parameters, list_building_names,
config, nn_delay, climatic_variables, year,
use_stochastic_occupancy):
urban_input_matrix, urban_taget_matrix = sampling_single(locator, random_variables, target_parameters,
list_building_names, config,
nn_delay, climatic_variables, year,
use_stochastic_occupancy)
model, scalerT, scalerX = nn_model_collector(locator)
get_nn_performance(model, scalerT, scalerX, urban_input_matrix, urban_taget_matrix, locator)
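
# A dependency-light sketch of the two error metrics reported above, using the
# functions already imported in this module (illustrative values only; the
# "mbe" label follows this module's naming, though mean_absolute_error is MAE):
def _metrics_demo():
    target = [1.0, 2.0, 3.0]
    predicted = [1.1, 1.9, 3.2]
    rmse = sqrt(mean_squared_error(target, predicted))
    mbe = mean_absolute_error(target, predicted)
    print("demo rmse %.3f, demo mbe %.3f" % (rmse, mbe))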

def main(config):
    locator = cea.inputlocator.InputLocator(scenario=config.scenario)

    weather_data = epwreader.epw_reader(config.weather)[['year', 'drybulb_C', 'wetbulb_C',
                                                         'relhum_percent', 'windspd_ms', 'skytemp_C']]
    year = weather_data['year'][0]
    settings = config.demand

    building_properties, schedules_dict, date = properties_and_schedule(locator, year)
    list_building_names = building_properties.list_building_names()

    eval_nn_performance(locator, random_variables, target_parameters, list_building_names,
                        config=config, nn_delay=config.neural_network.nn_delay,
                        climatic_variables=config.neural_network.climatic_variables,
                        year=config.neural_network.year,
                        use_stochastic_occupancy=config.demand.use_stochastic_occupancy)


if __name__ == '__main__':
    main(cea.config.Configuration())
| 47.086957 | 106 | 0.707756 | 547 | 4,332 | 5.182815 | 0.279708 | 0.074074 | 0.034921 | 0.049383 | 0.401411 | 0.381658 | 0.299824 | 0.273369 | 0.164021 | 0.164021 | 0 | 0.009513 | 0.199215 | 4,332 | 91 | 107 | 47.604396 | 0.807726 | 0.051247 | 0 | 0 | 0 | 0 | 0.068747 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044776 | false | 0 | 0.149254 | 0 | 0.208955 | 0.074627 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b34f561782604d15fbd0ad5fb7b41eb487472431 | 1,512 | py | Python | src/feedfetcher.py | moretti/comic-scraper | 12c767499d64f43e0a9703598a6adcad6ffd6330 | [
"Unlicense",
"MIT"
] | 1 | 2015-02-17T12:37:28.000Z | 2015-02-17T12:37:28.000Z | src/feedfetcher.py | moretti/comic-scraper | 12c767499d64f43e0a9703598a6adcad6ffd6330 | [
"Unlicense",
"MIT"
] | null | null | null | src/feedfetcher.py | moretti/comic-scraper | 12c767499d64f43e0a9703598a6adcad6ffd6330 | [
"Unlicense",
"MIT"
] | null | null | null | import feedparser
import models
import time
import datetime as dt
from bs4 import BeautifulSoup
VALID_IMAGE_ATTRIBUTES = ('alt', 'title', 'src')
def fetch(url):
f = feedparser.parse(url)
feed = models.Feed()
feed.link = url
feed.website = f.feed.get('link')
feed.title = f.feed.get('title')
feed.author = f.feed.get('author')
entries = []
for e in f.entries:
entry = models.Entry()
entry.link = e.get('link')
entry.title = e.get('title')
published = get_first_or_default(
e, ('updated_parsed', 'published_parsed'))
if published:
entry.published = dt.datetime.fromtimestamp(time.mktime(published))
if 'content' in e and e.content and isinstance(e.content, list):
first_content = e.content[0]
content = first_content.get('value')
else:
content = e.get('description')
entry.content = content
if content:
entry.image_content = get_comic_image(content)
entries.append(entry)
return (feed, entries)
def get_comic_image(html):
    soup = BeautifulSoup(html, 'html.parser')
img = soup.find('img')
if img:
img.attrs = {key: value for key, value in img.attrs.iteritems() if key in VALID_IMAGE_ATTRIBUTES}
return unicode(img)
else:
return None
def get_first_or_default(d, sequence, default=None):
for element in sequence:
if element in d:
return d[element]
return default
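# Usage sketch (the feed URL is hypothetical):
#   feed, entries = fetch('https://example.com/comic/rss')
#   for entry in entries:
#       print(entry.title, entry.image_content)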
| 25.2 | 105 | 0.618386 | 194 | 1,512 | 4.721649 | 0.324742 | 0.016376 | 0.026201 | 0.037118 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001817 | 0.271825 | 1,512 | 59 | 106 | 25.627119 | 0.830154 | 0 | 0 | 0.044444 | 0 | 0 | 0.060185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.111111 | 0 | 0.288889 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b35206db6eaaa312fca86f170531ab36f181b6a0 | 6,745 | py | Python | server.py | eugenetan01/TranslatorBot | e814d11b5a492a26ecb84737c31dc2c92c9711d8 | [
"MIT"
] | null | null | null | server.py | eugenetan01/TranslatorBot | e814d11b5a492a26ecb84737c31dc2c92c9711d8 | [
"MIT"
] | null | null | null | server.py | eugenetan01/TranslatorBot | e814d11b5a492a26ecb84737c31dc2c92c9711d8 | [
"MIT"
] | null | null | null | import json
import logging
from telegram import InlineKeyboardButton, InlineKeyboardMarkup
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackQueryHandler, ConversationHandler
import os
from googletrans import Translator
import Controller as controller
import DBController as dbController
from gtts import gTTS
changetranslationlanguage = 0
setdefaultlanguage = 0
PORT = int(os.environ.get('PORT', 5000))
# Enable logging
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
level=logging.INFO)
logger = logging.getLogger(__name__)
# Define a few command handlers. These usually take the two arguments update and
# context. Error handlers also receive the raised TelegramError object in error.
def start(update, context):
"""Send a message when the command /start is issued."""
update.message.reply_text("Hi! I'm TranslatorBot! What would you like me to translate today?")
def help(update, context):
"""Send a message when the command /help is issued."""
update.message.reply_text('Help!')
def languageButtons(update, context):
placeholder = "Please choose any of the following languages you would like to change to: "
languages = controller.langCodes()
keyboard = []
for key in languages:
keyboard.append([InlineKeyboardButton(key, callback_data=languages[key])])
reply_markup = InlineKeyboardMarkup(keyboard)
update.message.reply_text(placeholder, reply_markup=reply_markup)
if(update.message.text == "/changetranslationlanguage"):
return changetranslationlanguage
else:
return setdefaultlanguage
def buttonTranslateChange(update, context):
query = update.callback_query
# CallbackQueries need to be answered, even if no notification to the user is needed
# Some clients may have trouble otherwise. See https://core.telegram.org/bots/api#callbackquery
query.answer()
query.edit_message_text(text="Language translation changed to: {}".format(controller.langNames()[query.data]))
#global language
#language = query.data
controller.updateLanguageTranslation(update.callback_query.from_user['username'], query.data)
return ConversationHandler.END
def buttonInputChange(update, context):
query = update.callback_query
# CallbackQueries need to be answered, even if no notification to the user is needed
# Some clients may have trouble otherwise. See https://core.telegram.org/bots/api#callbackquery
query.answer()
query.edit_message_text(text="Default Language changed to: {}".format(controller.langNames()[query.data]))
#global language
#language = query.data
# print("im here")
controller.updateLanguageInputDefault(update.callback_query.from_user['username'], query.data)
return ConversationHandler.END
def translate(update, context):
"""Translate the user message."""
defaultLanguage = dbController.getUserDefaultInputLanguage(update.message.from_user['username'])
language = dbController.getUserDefaultLanguage(update.message.from_user['username'])
translator = Translator()
if update.message.voice is not None:
voice = context.bot.getFile(update.message.voice.file_id)
toTranslate = controller.audioConversionToText(voice, defaultLanguage, update.message.from_user['username'])
else:
toTranslate = update.message.text
if toTranslate == "errorAudio" :
update.message.reply_text(translator.translate("Could not understand audio, please send another voice recording please.",src="en", dest=defaultLanguage).text)
elif toTranslate == "errorService":
update.message.reply_text(translator.translate("Speech input translation service is down. Please try again later.",src="en", dest=defaultLanguage).text)
else:
result = translator.translate(toTranslate, src=defaultLanguage, dest=language)
update.message.reply_text(result.text)
if result.pronunciation is not None and result.pronunciation != toTranslate and result.pronunciation != result.text:
update.message.reply_text(result.pronunciation)
#else:
# errorText = "No text pronunciation available"
# update.message.reply_text(translator.translate(errorText, src="en", dest=defaultLanguage).text)
#send text to speech translation
try:
tts = gTTS(result.text, lang=language)
placeholderEn = "Click here for audio pronunciation in {}: ".format(controller.langNames()[language])
update.message.reply_text(translator.translate(placeholderEn, src = "en", dest = defaultLanguage).text + tts.get_urls()[0])
except:
pass
def error(update, context):
"""Log Errors caused by Updates."""
logger.warning('Update "%s" caused error "%s"', update, context.error)
def main():
"""Start the bot."""
# Create the Updater and pass it your bot's token.
# Make sure to set use_context=True to use the new context based callbacks
# Post version 12 this will no longer be necessary
with open('config.json') as config_file:
data = json.load(config_file)
TOKEN = data['TOKEN']
updater = Updater(TOKEN, use_context=True)
# Get the dispatcher to register handlers
dp = updater.dispatcher
dp.add_handler(CommandHandler("start", start))
dp.add_handler(ConversationHandler(
entry_points=[CommandHandler('setdefaultlanguage', languageButtons)],
states={
setdefaultlanguage: [CallbackQueryHandler(buttonInputChange)]
},
fallbacks=[CommandHandler('setdefaultlanguage', languageButtons)]
))
dp.add_handler(ConversationHandler(
entry_points=[CommandHandler('changetranslationlanguage', languageButtons)],
states={
changetranslationlanguage: [CallbackQueryHandler(buttonTranslateChange)]
},
fallbacks=[CommandHandler('changetranslationlanguage', languageButtons)]
))
dp.add_handler(CommandHandler("help", help))
dp.add_handler(MessageHandler(Filters.voice, translate))
dp.add_handler(MessageHandler(Filters.text, translate))
# log all errors
dp.add_error_handler(error)
# Start the Bot
#updater.start_webhook(listen="0.0.0.0",
# port=int(PORT),
# url_path=TOKEN)
#updater.bot.setWebhook('https://herokuappname.herokuapp.com/' + TOKEN)
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process receives SIGINT,
# SIGTERM or SIGABRT. This should be used most of the time, since
# start_polling() is non-blocking and will stop the bot gracefully.
updater.idle()
if __name__ == '__main__':
main()
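# The bot reads its token from a config.json next to this script; a minimal
# sketch:
#   {"TOKEN": "<telegram-bot-token>"}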
| 40.878788 | 166 | 0.722016 | 767 | 6,745 | 6.273794 | 0.33116 | 0.043225 | 0.033666 | 0.041147 | 0.317747 | 0.238155 | 0.191604 | 0.168329 | 0.15212 | 0.15212 | 0 | 0.002345 | 0.178206 | 6,745 | 164 | 167 | 41.128049 | 0.865777 | 0.238251 | 0 | 0.153061 | 0 | 0 | 0.135013 | 0.014958 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0.010204 | 0.091837 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b35557efbdcd9de6dfaad18c72d80170e6a3f1c7 | 871 | py | Python | tests/test_cluster_assignment.py | mo6zes/Reproducing-Deep-Fair-Clustering | 91f915436821eb05cdd021d3e9eb050a248fe993 | [
"Unlicense"
] | 4 | 2021-01-30T12:36:18.000Z | 2022-03-23T10:10:45.000Z | tests/test_cluster_assignment.py | mo6zes/Reproducing-Deep-Fair-Clustering | 91f915436821eb05cdd021d3e9eb050a248fe993 | [
"Unlicense"
] | 1 | 2021-11-05T09:16:36.000Z | 2021-11-05T15:27:25.000Z | tests/test_cluster_assignment.py | mo6zes/Reproducing-Deep-Fair-Clustering | 91f915436821eb05cdd021d3e9eb050a248fe993 | [
"Unlicense"
] | 1 | 2021-03-21T19:44:45.000Z | 2021-03-21T19:44:45.000Z | """
Source: https://github.com/vlukiyanov/pt-dec/blob/master/tests/test_cluster.py
"""
import torch
from unittest import TestCase
from module import ClusterAssignment
class TestClusterAssignment(TestCase):
@classmethod
def setUpClass(cls) -> None:
cls.cluster = ClusterAssignment(
cluster_number=2,
embedding_dimension=2,
cluster_centers=torch.Tensor([[-1, -1], [1, 1]]).float(),
alpha=1,
)
def test_calculation(self):
test_tensor = torch.Tensor([-2, -2]).float().unsqueeze(0)
den = float(1) / 3 + float(1) / 19
gold = torch.Tensor([(float(1) / 3) / den, (float(1) / 19) / den])
output = self.cluster(test_tensor).data
self.assertAlmostEqual((gold - output).numpy()[0][0], 0.0)
self.assertAlmostEqual((gold - output).numpy()[0][1], 0.0)
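# Where the expected weights come from (alpha=1): the soft assignment uses a
# Student's t kernel, q_j proportional to (1 + ||x - mu_j||^2)^(-1).
# For x = (-2, -2): ||x - (-1, -1)||^2 = 2 -> 1/3, ||x - (1, 1)||^2 = 18 -> 1/19,
# normalized by den = 1/3 + 1/19, matching `gold` above.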
| 30.034483 | 82 | 0.608496 | 106 | 871 | 4.933962 | 0.45283 | 0.045889 | 0.011472 | 0.118547 | 0.141491 | 0.141491 | 0 | 0 | 0 | 0 | 0 | 0.041979 | 0.234214 | 871 | 28 | 83 | 31.107143 | 0.742129 | 0.089552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.105263 | false | 0 | 0.157895 | 0 | 0.315789 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b357152a7c97a6a98a37a74b2b7b43161d57c224 | 2,789 | py | Python | products/models.py | anonshubh/eCommerce-rostores- | 7503e855d650556e216c42fc1c5b95a42bb9c501 | [
"Apache-2.0"
] | null | null | null | products/models.py | anonshubh/eCommerce-rostores- | 7503e855d650556e216c42fc1c5b95a42bb9c501 | [
"Apache-2.0"
] | null | null | null | products/models.py | anonshubh/eCommerce-rostores- | 7503e855d650556e216c42fc1c5b95a42bb9c501 | [
"Apache-2.0"
] | null | null | null | from django.db import models
from django.urls import reverse
import random,os
from ecommerce_src.utils import unique_slug_generator,get_filename
from django.db.models.signals import pre_save
from django.utils.encoding import smart_str
from django.core.files.storage import FileSystemStorage
from django.conf import settings
def get_filename_ext(filename):
base_name = os.path.basename(filename)
name,ext = os.path.splitext(base_name)
return name,ext
def upload_image_path(instance,filename):
new_filename = random.randint(1,100000000)
name,ext= get_filename_ext(filename)
final_filename= f'{new_filename}{ext}'
return f'products/{new_filename}/{final_filename}'
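# e.g. upload_image_path(instance, 'photo.jpg') yields something like
# 'products/48291734/48291734.jpg', where 48291734 is the random id drawn above.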
class Product(models.Model):
title = models.CharField(max_length=256)
slug = models.SlugField(unique=True,editable=False)
description = models.TextField()
price = models.DecimalField(max_digits=100,decimal_places=2)
image = models.ImageField(upload_to=upload_image_path)
available = models.BooleanField(default=True)
featured = models.BooleanField(default=False)
is_digital = models.BooleanField(default=False)
def __str__(self):
return smart_str(self.title[:50])
def get_absolute_url(self):
#return '/products/{slug}/'.format(slug=self.slug)
return reverse('products:detail',kwargs={'slug':self.slug})
@property
def name(self):
return self.title
def get_download(self):
qs = self.productfile_set.all()
return qs
def product_pre_save_receiver(sender,instance,*args,**kwargs):
if not instance.slug:
instance.slug = unique_slug_generator(instance)
pre_save.connect(product_pre_save_receiver,sender=Product)
def upload_product_file_loc(instance,filename):
slug = instance.product.slug
if not slug:
slug = unique_slug_generator(instance.product)
location = f"products/{slug}/"
return location + filename
class ProductFile(models.Model):
product = models.ForeignKey(Product,on_delete=models.CASCADE)
name = models.CharField(max_length = 128,null=True,blank=True)
file = models.FileField(upload_to=upload_product_file_loc,storage=FileSystemStorage(location=settings.PROTECTED_ROOT))
free = models.BooleanField(default=False)
user_required = models.BooleanField(default=False)
def __str__(self):
return smart_str(self.file.name)
def get_download_url(self):
return reverse('products:download',kwargs={'slug':self.product.slug,'pk':self.pk})
@property
def display_name(self):
f_name = get_filename(self.file.name)
if self.name:
return self.name
return f_name
def get_default_url(self):
return self.product.get_absolute_url()
| 32.811765 | 122 | 0.726067 | 363 | 2,789 | 5.380165 | 0.311295 | 0.030722 | 0.064004 | 0.061444 | 0.119816 | 0.059396 | 0.059396 | 0.059396 | 0.059396 | 0.059396 | 0 | 0.009528 | 0.172105 | 2,789 | 84 | 123 | 33.202381 | 0.836293 | 0.017569 | 0 | 0.0625 | 0 | 0 | 0.042748 | 0.014615 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.125 | 0.09375 | 0.734375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b35836893267f52783e00ae095462a2ce8e96ecc | 5,473 | py | Python | ai/project/scraper.py | BoraKurucu/Artificial-Intelligence | 5681ad43a79e349b36eab3f8c1fd994fffcf0cfe | [
"MIT"
] | null | null | null | ai/project/scraper.py | BoraKurucu/Artificial-Intelligence | 5681ad43a79e349b36eab3f8c1fd994fffcf0cfe | [
"MIT"
] | null | null | null | ai/project/scraper.py | BoraKurucu/Artificial-Intelligence | 5681ad43a79e349b36eab3f8c1fd994fffcf0cfe | [
"MIT"
] | 1 | 2021-02-23T19:34:12.000Z | 2021-02-23T19:34:12.000Z | import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
class SeleniumCrosswordHelper:
"""
A class to determine the web-operations of Selenium.
Methods
---------
get_clues()
Returns the data of the clues using the tags that are embedded in the html file.
get_cells()
Returns the data of the cells similarly to get_clues().
reveal_solutions()
The processes for the revealment of the solutions using Selenium.
_click_ok()
Selenium function to click OK button.
_click_reveal_menu_button()
Selenium function to click reveal menu button.
_click_puzzle_reveal_button()
Selenium function to click reveal button.
_click_reveal_confirmation_button()
Selenium function to click confirmation button.
_close_pop_up()
Selenium function to close pop-ups.
"""
def __init__(self):
options = Options()
options.headless = False
options.add_argument("--log-level=3")
options.add_experimental_option('excludeSwitches', ['enable-logging'])
self.driver = webdriver.Chrome(options=options)
print("Reaching to https://www.nytimes.com/crosswords/game/mini...")
self.driver.get("https://www.nytimes.com/crosswords/game/mini")
time.sleep(1)
def get_clues(self):
"""
This function first finds the class name tag related with the clues Then,
it stores the elements which are accessed by the tag names such as 'div','h3','li','span'
Then the function returns the data array that contains the elements for clues of the puzzle.
"""
print("Scraping clues...")
data = {}
clue_lists = self.driver.find_element_by_class_name('Layout-clueLists--10_Xl')
divs = clue_lists.find_elements_by_tag_name('div')
for div in divs:
title = div.find_element_by_tag_name('h3').text.lower()
data[title] = []
list_items = div.find_elements_by_tag_name('li')
for list_item in list_items:
spans = list_item.find_elements_by_tag_name('span')
data[title].append({'id': spans[0].text, 'text': spans[1].text})
return data
def get_cells(self):
"""
Creates an empty set at first, then according to the tags of html, adds cells up to it.
Returns the set at the end of the function.
"""
print("Scraping puzzle geometry and solutions...")
data = {}
cell_table = self.driver.find_element_by_css_selector('g[data-group=cells]')
cells = cell_table.find_elements_by_tag_name('g')
for cell in cells:
cell_data = {'block': False, 'text': '', 'number': ''}
rect = cell.find_element_by_tag_name('rect')
cell_id = rect.get_attribute('id').split('-')[2]
if 'Cell-block' in rect.get_attribute('class'):
cell_data['block'] = True
text_fields = cell.find_elements_by_tag_name('text')
for text_field in text_fields:
if text_field.get_attribute('text-anchor') == 'start':
cell_data['number'] = text_field.text
if text_field.get_attribute('text-anchor') == 'middle':
cell_data['text'] = text_field.text
data[cell_id] = cell_data
return data
def reveal_solutions(self):
"""
Using the simple functions below, this function performs the process of revealing solutions.
"""
print("Revealing the solution...")
self._click_ok()
self._click_reveal_menu_button()
self._click_puzzle_reveal_button()
self._click_reveal_confirmation_button()
self._close_pop_up()
def _click_ok(self):
"""
Clicks the OK button.
"""
ok_button = self.driver.find_element_by_css_selector('button[aria-label="OK"]')
ok_button.click()
def _click_reveal_menu_button(self):
"""
Clicks the reveal menu button.
"""
reveal_button = self.driver.find_element_by_css_selector('button[aria-label="reveal"]')
reveal_button.click()
def _click_puzzle_reveal_button(self):
"""
Clicks the reveal button.
"""
puzzle_reveal_button = self.driver.find_element_by_link_text('Puzzle')
puzzle_reveal_button.click()
def _click_reveal_confirmation_button(self):
"""
Clicks the confirmation button.
"""
reveal_button = self.driver.find_element_by_css_selector('button[aria-label="Reveal"]')
reveal_button.click()
def _close_pop_up(self):
"""
Clicks and closes the pop-up screens.
"""
spans = self.driver.find_elements_by_tag_name('span')
for span in spans:
if 'closeX' in span.get_attribute('class'):
span.click()
return
class NYCrossword(SeleniumCrosswordHelper):
"""
A class to initialize and use the Selenium through NYTimes Mini Puzzle
"""
def __init__(self):
super(NYCrossword, self).__init__()
self.reveal_solutions()
def get_data(self):
return {'clues': self.get_clues(), 'cells': self.get_cells()}
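# Usage sketch (needs a matching chromedriver on PATH; opens a browser window):
#   crossword = NYCrossword()
#   data = crossword.get_data()
#   print(data['clues'], data['cells'])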
| 35.083333 | 101 | 0.610451 | 660 | 5,473 | 4.80303 | 0.251515 | 0.04164 | 0.032808 | 0.039748 | 0.320505 | 0.17918 | 0.141325 | 0.076025 | 0.076025 | 0.076025 | 0 | 0.002314 | 0.289421 | 5,473 | 155 | 102 | 35.309677 | 0.812805 | 0.251233 | 0 | 0.105263 | 0 | 0 | 0.133572 | 0.027541 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144737 | false | 0 | 0.039474 | 0.013158 | 0.263158 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b359b7be06a297e1d25880bf92aada1e13ba8821 | 3,930 | py | Python | server.py | LutherMckeiver/http-server | 4ce81dfad82c5b2792bf21f088985ddb784d43c9 | [
"MIT"
] | null | null | null | server.py | LutherMckeiver/http-server | 4ce81dfad82c5b2792bf21f088985ddb784d43c9 | [
"MIT"
] | null | null | null | server.py | LutherMckeiver/http-server | 4ce81dfad82c5b2792bf21f088985ddb784d43c9 | [
"MIT"
] | null | null | null | from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from cowpy import cow
import os
import json
httpcow = '''<!DOCTYPE html>
<html>
<head>
<title> cowsay </title>
</head>
<body>
<header>
<nav>
<ul>
<li><a href="/cow">cowsay</a></li>
</ul>
</nav>
<header>
<main>
<!-- project description defining how users can further interact with the application -->
</main>
</body>
</html>'''
class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        """
        Handle incoming HTTP GET requests.
        """
cheese = cow.Moose()
parsed_path = urlparse(self.path)
parsed_qs = parse_qs(parsed_path.query)
# set a status code
# set any headers
# set any body data on the response
# end headers
if parsed_path.path == '/':
self.send_response(200)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(httpcow.encode())
return
elif parsed_path.path == '/cow':
try:
if len(parsed_qs['msg'][0]) < 1:
self.send_response(400)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(b'400 Bad Request')
return
msg = cheese.milk(parsed_qs['msg'][0])
self.send_response(200)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(f'''<html><body>{msg}</body></html>'''.encode())
return
except KeyError:
self.send_response(400)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(b'400 Bad Request')
return
self.send_response(404)
self.end_headers()
    def do_POST(self):
        """
        Handle incoming HTTP POST requests.
        """
cheese = cow.Moose()
parsed_path = urlparse(self.path)
parsed_qs = parse_qs(parsed_path.query)
# set a status code
# set any headers
# set any body data on the response
# end headers
if parsed_path.path == '/cow':
try:
if len(parsed_qs['msg'][0]) < 1:
self.send_response(400)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(b'400 Bad Request')
return
msg = cheese.milk(parsed_qs['msg'][0])
return_json = f'{{"content": "{msg}"}}'
self.send_response(200)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(return_json.encode())
return
except KeyError:
self.send_response(400)
self.send_header('Content-Type', 'text/html')
self.end_headers()
self.wfile.write(b'400 Bad Request')
return
self.send_response(404)
self.end_headers()
def create_server():
"""
This method will create the server
"""
return HTTPServer(
('127.0.0.1', int(os.environ['PORT'])),
SimpleHTTPRequestHandler
)
def run_forever():
"""
This method will run the server until an interrupt occurs
"""
server = create_server()
try:
print(f'Server running on {os.environ["PORT"]}')
server.serve_forever()
except KeyboardInterrupt:
server.shutdown()
server.server_close()
if __name__ == '__main__':
run_forever()
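# Example requests once the server is running (the PORT env var must be set,
# e.g. PORT=8000):
#   curl 'http://127.0.0.1:8000/cow?msg=hello'
#   curl -X POST 'http://127.0.0.1:8000/cow?msg=hello'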
| 28.273381 | 97 | 0.526718 | 432 | 3,930 | 4.655093 | 0.261574 | 0.06365 | 0.071606 | 0.073098 | 0.584784 | 0.584784 | 0.584784 | 0.584784 | 0.584784 | 0.584784 | 0 | 0.02015 | 0.35598 | 3,930 | 138 | 98 | 28.478261 | 0.774397 | 0.08855 | 0 | 0.540816 | 0 | 0 | 0.193825 | 0.016581 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.05102 | 0 | 0.183673 | 0.010204 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b35e517ebb50cbef91f32f99c017eebf79d3a397 | 2,873 | py | Python | byceps/services/site/dbmodels/site.py | GSH-LAN/byceps | ab8918634e90aaa8574bd1bb85627759cef122fe | [
"BSD-3-Clause"
] | 33 | 2018-01-16T02:04:51.000Z | 2022-03-22T22:57:29.000Z | byceps/services/site/dbmodels/site.py | GSH-LAN/byceps | ab8918634e90aaa8574bd1bb85627759cef122fe | [
"BSD-3-Clause"
] | 7 | 2019-06-16T22:02:03.000Z | 2021-10-02T13:45:31.000Z | byceps/services/site/dbmodels/site.py | GSH-LAN/byceps | ab8918634e90aaa8574bd1bb85627759cef122fe | [
"BSD-3-Clause"
] | 14 | 2019-06-01T21:39:24.000Z | 2022-03-14T17:56:43.000Z | """
byceps.services.site.dbmodels.site
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:Copyright: 2006-2021 Jochen Kupperschmidt
:License: Revised BSD (see `LICENSE` file for details)
"""
from typing import Optional
from ....database import db
from ....typing import BrandID, PartyID
from ....util.instances import ReprBuilder
from ...board.transfer.models import BoardID
from ...brand.dbmodels.brand import Brand
from ...news.dbmodels.channel import Channel as NewsChannel
from ...shop.storefront.transfer.models import StorefrontID
from ..transfer.models import SiteID
site_news_channels = db.Table(
'site_news_channels',
db.Column('site_id', db.UnicodeText, db.ForeignKey('sites.id'), primary_key=True),
db.Column('news_channel_id', db.UnicodeText, db.ForeignKey('news_channels.id'), primary_key=True),
)
class Site(db.Model):
"""A site."""
__tablename__ = 'sites'
id = db.Column(db.UnicodeText, primary_key=True)
title = db.Column(db.UnicodeText, unique=True, nullable=False)
server_name = db.Column(db.UnicodeText, unique=True, nullable=False)
brand_id = db.Column(db.UnicodeText, db.ForeignKey('brands.id'), index=True, nullable=False)
brand = db.relationship(Brand, backref='sites')
party_id = db.Column(db.UnicodeText, db.ForeignKey('parties.id'), index=True, nullable=True)
enabled = db.Column(db.Boolean, nullable=False)
user_account_creation_enabled = db.Column(db.Boolean, nullable=False)
login_enabled = db.Column(db.Boolean, nullable=False)
board_id = db.Column(db.UnicodeText, db.ForeignKey('boards.id'), index=True, nullable=True)
storefront_id = db.Column(db.UnicodeText, db.ForeignKey('shop_storefronts.id'), index=True, nullable=True)
archived = db.Column(db.Boolean, default=False, nullable=False)
news_channels = db.relationship(
NewsChannel,
secondary=site_news_channels,
lazy='subquery',
backref=db.backref('news_channels', lazy=True),
)
def __init__(
self,
site_id: SiteID,
title: str,
server_name: str,
brand_id: BrandID,
enabled: bool,
user_account_creation_enabled: bool,
login_enabled: bool,
*,
party_id: Optional[PartyID] = None,
board_id: Optional[BoardID] = None,
storefront_id: Optional[StorefrontID] = None,
) -> None:
self.id = site_id
self.title = title
self.server_name = server_name
self.brand_id = brand_id
self.party_id = party_id
self.enabled = enabled
self.user_account_creation_enabled = user_account_creation_enabled
self.login_enabled = login_enabled
self.board_id = board_id
self.storefront_id = storefront_id
def __repr__(self) -> str:
return ReprBuilder(self) \
.add_with_lookup('id') \
.build()
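# Construction sketch (all identifiers are hypothetical):
#   site = Site(SiteID('acmecon-2021'), 'ACMECon 2021', 'www.acmecon.example',
#               BrandID('acmecon'), enabled=True,
#               user_account_creation_enabled=True, login_enabled=True)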
| 34.202381 | 110 | 0.678037 | 355 | 2,873 | 5.295775 | 0.256338 | 0.055319 | 0.058511 | 0.078191 | 0.257979 | 0.180319 | 0.180319 | 0.046809 | 0 | 0 | 0 | 0.003456 | 0.194222 | 2,873 | 83 | 111 | 34.614458 | 0.808639 | 0.06126 | 0 | 0 | 0 | 0 | 0.053651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.145161 | 0.016129 | 0.435484 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b35f99354012d3ad1fdad1496663748a0d44b801 | 1,799 | py | Python | src/air_controller.py | n-guitar/flask-natuer-remo | 197ff7b6031caa3ac36eaf2bbeca71fbeac090db | [
"MIT"
] | 5 | 2021-08-13T14:55:58.000Z | 2022-03-21T12:11:11.000Z | src/air_controller.py | n-guitar/flask-natuer-remo | 197ff7b6031caa3ac36eaf2bbeca71fbeac090db | [
"MIT"
] | null | null | null | src/air_controller.py | n-guitar/flask-natuer-remo | 197ff7b6031caa3ac36eaf2bbeca71fbeac090db | [
"MIT"
] | null | null | null | from flask import Blueprint
from api.json_dataclient import AppliancesDataClient
from api.remoclient import NatureRemoClient
air_controller = Blueprint('air_controller', __name__, url_prefix='/air/api')
@air_controller.route('/send/power/<appliance_id>/<signal>', methods=['POST'])
def send_power(appliance_id,signal):
nclient = NatureRemoClient()
if signal == "power-on":
signal = ""
result = nclient.send_aircon_settings(appliance_id=appliance_id,button=signal)
return result
@air_controller.route('/send/temp/<appliance_id>/<signal>', methods=['POST'])
def send_temp(appliance_id,signal):
nclient = NatureRemoClient()
result = nclient.send_aircon_settings(appliance_id=appliance_id,temperature=signal)
# update json data
appliances_client = AppliancesDataClient()
appliances_client.json_load()
appliances_client.appliances_json_update_air_temp(appliance_id=appliance_id, temperature=signal)
return result
@air_controller.route('/send/mode/<appliance_id>/<signal>', methods=['POST'])
def send_mode(appliance_id,signal):
nclient = NatureRemoClient()
signal = signal[5:]
result = nclient.send_aircon_settings(appliance_id=appliance_id,operation_mode=signal)
return result
@air_controller.route('/send/vol/<appliance_id>/<signal>', methods=['POST'])
def send_vol(appliance_id,signal):
nclient = NatureRemoClient()
signal = signal[4:]
result = nclient.send_aircon_settings(appliance_id=appliance_id,air_volume=signal)
return result
@air_controller.route('/send/dir/<appliance_id>/<signal>', methods=['POST'])
def send_dir(appliance_id,signal):
nclient = NatureRemoClient()
signal = signal[4:]
result = nclient.send_aircon_settings(appliance_id=appliance_id,air_direction=signal)
return result
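# Example request against one of the routes above (the appliance id is
# hypothetical, assuming Flask's default port):
#   curl -X POST http://localhost:5000/air/api/send/temp/<appliance_id>/26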
| 38.276596 | 100 | 0.761534 | 220 | 1,799 | 5.95 | 0.213636 | 0.184874 | 0.12987 | 0.10084 | 0.71505 | 0.621849 | 0.583652 | 0.288006 | 0.288006 | 0.166539 | 0 | 0.001889 | 0.117287 | 1,799 | 46 | 101 | 39.108696 | 0.822418 | 0.008894 | 0 | 0.324324 | 0 | 0 | 0.122965 | 0.094891 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0 | 0.081081 | 0 | 0.351351 | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b36181990f0e7a289739c1df1c506a6c08cadfb1 | 1,952 | py | Python | code/api_controlers/apis.py | AICyberTeam/retrievalSystem | 3a6b4a3f38ed179b529d3ac0cf0aad9aa6b98903 | [
"MIT"
] | 51 | 2021-11-07T03:14:12.000Z | 2022-03-31T11:47:56.000Z | code/api_controlers/apis.py | AICyberTeam/retrievalSystem | 3a6b4a3f38ed179b529d3ac0cf0aad9aa6b98903 | [
"MIT"
] | null | null | null | code/api_controlers/apis.py | AICyberTeam/retrievalSystem | 3a6b4a3f38ed179b529d3ac0cf0aad9aa6b98903 | [
"MIT"
] | 13 | 2021-11-07T03:14:14.000Z | 2022-03-20T11:14:03.000Z | from flask import Flask, request, jsonify
import json, os
from threading import Timer
from api_controlers import base_function, image_encode_ctl, delete_encode_ctl,\
text_search_ctl, image_search_ctl, semantic_localization_ctl, utils
def api_run(cfg):
    app = Flask(__name__)  # Flask initialization
    # ======================= API routes ============================
    # Image encoding
@app.route(cfg['apis']['image_encode']['route'], methods=['post'])
def image_encode():
request_data = json.loads(request.data.decode('utf-8'))
return_json = image_encode_ctl.image_encode_append(request_data)
return return_json
    # Delete encodings
@app.route(cfg['apis']['delete_encode']['route'], methods=['post'])
def delete_encode():
request_data = json.loads(request.data.decode('utf-8'))
return_json = delete_encode_ctl.delete_encode(request_data)
return return_json
    # Text retrieval
@app.route(cfg['apis']['text_search']['route'], methods=['post'])
def text_search():
request_data = json.loads(request.data.decode('utf-8'))
return_json = text_search_ctl.text_search(request_data)
return return_json
    # Image retrieval
@app.route(cfg['apis']['image_search']['route'], methods=['post'])
def image_search():
request_data = json.loads(request.data.decode('utf-8'))
return_json = image_search_ctl.image_search(request_data)
return return_json
    # Semantic localization
@app.route(cfg['apis']['semantic_localization']['route'], methods=['post'])
def semantic_localization():
request_data = json.loads(request.data.decode('utf-8'))
return_json = semantic_localization_ctl.semantic_localization(request_data)
return return_json
    # Scheduled task:
    # periodically check whether the encoding pool still holds unencoded data.
    # (threading.Timer fires only once, 5 s after start(), so
    # image_encode_runner is assumed to reschedule itself to keep polling.)
    check_unembeded_image = Timer(5, image_encode_ctl.image_encode_runner)
check_unembeded_image.start()
app.run(host=cfg['apis']['hosts']['ip'], port=cfg['apis']['hosts']['port'])
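# Request sketch (route paths and payload keys come from cfg and the *_ctl
# modules, so both are assumptions here):
#   curl -X POST http://<ip>:<port>/text_search -d '{"text": "harbor with ships"}'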
| 36.830189 | 83 | 0.665471 | 245 | 1,952 | 5.012245 | 0.228571 | 0.134365 | 0.044788 | 0.061075 | 0.513844 | 0.289088 | 0.235342 | 0.235342 | 0.235342 | 0.235342 | 0 | 0.00372 | 0.173668 | 1,952 | 52 | 84 | 37.538462 | 0.757595 | 0.059426 | 0 | 0.285714 | 0 | 0 | 0.100219 | 0.011501 | 0 | 0 | 0 | 0 | 0 | 1 | 0.171429 | false | 0 | 0.114286 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b36472502cc6bb056b8f449921017ab2cdc6baa1 | 2,440 | py | Python | atnp/collecter.py | alanfgn/text-analysis-political-news | 2a6e81bbc68eac7cacd63b0fcf157161264a91b1 | [
"MIT"
] | null | null | null | atnp/collecter.py | alanfgn/text-analysis-political-news | 2a6e81bbc68eac7cacd63b0fcf157161264a91b1 | [
"MIT"
] | null | null | null | atnp/collecter.py | alanfgn/text-analysis-political-news | 2a6e81bbc68eac7cacd63b0fcf157161264a91b1 | [
"MIT"
] | null | null | null | import atnp.utils as utils
import requests
import csv
import re
import os
def slice_url(url):
match = re.search(utils.LINK_PATTERN, url)
return match.group(1), match.group(2), match.group(3)
def gen_unique_name(domain, path):
return "{}__{}".format(domain, path.replace("/", "_"))
def makerequest(row, header):
url = row[header.index("url")]
request = requests.get(url)
_, domain, path = slice_url(url)
print("[%d] %s" % (request.status_code, url))
return {
"fileid": gen_unique_name(domain, path),
"url": url,
"subject": row[header.index("subject")],
"journal": row[header.index("journal")],
"html": request.text
}
def report(links_file, destination):
print("Making report of file %s to %s" % (links_file, destination))
utils.create_if_not_exists(destination)
files = os.listdir(destination)
links = csv.reader(open(links_file, newline="\n"), delimiter=',')
header = next(links)
lines = count_not_downloaded = count_dupl = file_abnormal = 0
fileids = []
for row in links:
lines = lines + 1
_, domain, path = slice_url(row[header.index("url")])
fileid = gen_unique_name(domain, path) + ".json"
if fileid not in files:
count_not_downloaded = count_not_downloaded + 1
print("[%s] Not Downloaded" % row[header.index("url")])
if fileid in fileids:
count_dupl = count_dupl + 1
print("[%s] Duplicatet" % fileid)
fileids.append(fileid)
for filename in files:
if filename not in fileids:
file_abnormal = file_abnormal + 1
print("[%s] Abnormal" % filename)
print("\n########################\n")
print("%0*d Lines in csv %s" % (3, lines, links_file))
print("%0*d Files Downloaded" % (3, len(files)))
print("%0*d Files not downloaded" % (3, count_not_downloaded))
print("%0*d Files duplicated" % (3, count_dupl))
print("%0*d Files abnormals" % (3, file_abnormal))
def download(links_file, destination):
print("Making requests of file %s to %s" % (links_file, destination))
with open(links_file, newline="\n") as links:
reader = csv.reader(links, delimiter=',')
header = next(reader)
for row in reader:
filejson = makerequest(row, header=header)
utils.save_json(destination, filejson["fileid"], filejson)
| 28.045977 | 73 | 0.611475 | 312 | 2,440 | 4.644231 | 0.262821 | 0.043478 | 0.048309 | 0.033126 | 0.196687 | 0.081435 | 0.041408 | 0.041408 | 0 | 0 | 0 | 0.009698 | 0.239344 | 2,440 | 86 | 74 | 28.372093 | 0.771013 | 0 | 0 | 0 | 0 | 0 | 0.133607 | 0.011475 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084746 | false | 0 | 0.084746 | 0.016949 | 0.220339 | 0.20339 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b3654a8261cc1c66da5a7989b216f4998e241a89 | 754 | py | Python | alembic/versions/5ce477cd7ee3_add_otp_column.py | ashrafmahmood/oscarine-api | 775f3ad5a3d3c50e36e228ee348ef40f075e95ec | [
"MIT"
] | 7 | 2019-09-18T19:45:46.000Z | 2020-05-18T20:07:07.000Z | alembic/versions/5ce477cd7ee3_add_otp_column.py | ashrafmahmood/oscarine-api | 775f3ad5a3d3c50e36e228ee348ef40f075e95ec | [
"MIT"
] | 252 | 2019-09-18T20:25:03.000Z | 2022-03-25T11:23:50.000Z | alembic/versions/5ce477cd7ee3_add_otp_column.py | ashrafmahmood/oscarine-api | 775f3ad5a3d3c50e36e228ee348ef40f075e95ec | [
"MIT"
] | 8 | 2019-09-18T11:02:45.000Z | 2021-05-18T17:08:51.000Z | """add otp column
Revision ID: 5ce477cd7ee3
Revises: 54f029830204
Create Date: 2020-05-12 19:04:09.302116
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '5ce477cd7ee3'
down_revision = '54f029830204'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('owner', sa.Column('otp', sa.Integer(), nullable=False))
op.add_column('user', sa.Column('otp', sa.Integer(), nullable=False))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('user', 'otp')
op.drop_column('owner', 'otp')
# ### end Alembic commands ###
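# Roughly the SQL emitted by upgrade() (PostgreSQL-style sketch):
#   ALTER TABLE "owner" ADD COLUMN otp INTEGER NOT NULL;
#   ALTER TABLE "user" ADD COLUMN otp INTEGER NOT NULL;
# Note: adding a NOT NULL column without a server_default fails if the tables
# already contain rows.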
| 24.322581 | 74 | 0.679045 | 97 | 754 | 5.206186 | 0.494845 | 0.053465 | 0.083168 | 0.091089 | 0.304951 | 0.304951 | 0.304951 | 0.174257 | 0 | 0 | 0 | 0.0864 | 0.171088 | 754 | 30 | 75 | 25.133333 | 0.7216 | 0.392573 | 0 | 0 | 0 | 0 | 0.128266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b367e04a0c647330d45d344829776dd6eaf4daa7 | 3,087 | py | Python | keras_approach.py | NicolaOrritos/pricenet | 1667e564d48bf0021eb16dcd529017cd00643b03 | [
"MIT"
] | null | null | null | keras_approach.py | NicolaOrritos/pricenet | 1667e564d48bf0021eb16dcd529017cd00643b03 | [
"MIT"
] | null | null | null | keras_approach.py | NicolaOrritos/pricenet | 1667e564d48bf0021eb16dcd529017cd00643b03 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import tensorflow
import pandas as pd
import numpy as np
from time import time
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import TensorBoard
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
import data_provider as dp
def validate_model(model, test_data, test_targets):
    predictions = []
    for data in test_data:
        predictions.append(model.predict(data.reshape(-1, len(data))))
    predictions = np.array(predictions).flatten()
    print(test_targets)
    print(predictions)
    # Nudge exact-zero targets (and the matching predictions) to avoid a
    # division by zero in the relative error below; the original .iloc-based
    # indexing does not exist on the numpy arrays passed in here.
    zero_mask = test_targets == 0
    test_targets = test_targets + 0.1 * zero_mask
    predictions = predictions + 0.1 * zero_mask
    return np.absolute(1 - predictions / test_targets).mean(axis=None)
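# e.g. for targets [2.0, 4.0] and predictions [1.8, 4.4] the returned error is
# mean(|1 - [1.8/2.0, 4.4/4.0]|) = mean([0.1, 0.1]) = 0.1, i.e. a 10% mean
# relative deviation.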
# Get data:
data = dp.load(resolution='hour')
data = dp.remove_fields(data, ['time'])
# Scale them all:
# print('Scaling data...')
data, min_values, max_values = dp.min_max_scale(data)
# Split into X and y:
# print('Splitting data to "X" and "y" sets...')
X, y = dp.split_to_X_y(data, groups_size=4)
X_last = np.array(X.pop())
y_last = np.array(y.pop())
X = np.array(X)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
features_number = len(X[0])
# Now let's use a neural network:
def baseline_model():
model = Sequential()
model.add(Dense(features_number, input_dim=features_number, activation='relu'))
model.add(Dense(256, kernel_initializer='normal', activation='relu'))
model.add(Dense(128, kernel_initializer='normal', activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
return model
# evaluate model with standardized dataset
estimators = []
tensorboard = TensorBoard(log_dir="logs/{}".format(time()))
estimators.append(('regressor', KerasRegressor(build_fn=baseline_model, epochs=200, batch_size=10, verbose=0)))
pipeline = Pipeline(estimators)
pipeline.fit(X_train, y_train, regressor__callbacks=[tensorboard])
score = pipeline.score(X_test, y_test)
error = validate_model(pipeline, X_test, y_test)
print('Score: ', score)
print('Error: {}%'.format(round(error * 100, 4)))
# # Fix random seed for reproducibility
# seed = 7
# np.random.seed(seed)
#
# # Evaluate using 10-fold cross validation
# kfold = KFold(n_splits=2, shuffle=True, random_state=seed)
# results = cross_val_score(pipeline, X, y, cv=kfold)
# print('Results: ', results.mean())
# float() extracts the scalar from the (possibly 0-d) prediction array so the
# round() calls below work
predicted = float(pipeline.predict(X_last.reshape(-1, features_number)))
actual = float(y_last)
difference = (1 - actual / predicted) * 100
print('Predicted:', predicted)
print('Actual: ', actual)
print('Error: {}%'.format(round(difference, 2)))
# pipeline.named_steps['regressor'].model.save('model.h5')
| 27.318584 | 112 | 0.694201 | 424 | 3,087 | 4.898585 | 0.360849 | 0.031777 | 0.025036 | 0.03611 | 0.105922 | 0.048146 | 0.048146 | 0.048146 | 0 | 0 | 0 | 0.01536 | 0.177519 | 3,087 | 112 | 113 | 27.5625 | 0.802678 | 0.169096 | 0 | 0 | 0 | 0 | 0.049302 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0 | 0.22807 | 0 | 0.298246 | 0.122807 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b371587fd6814326d2d07cd04a37a379d3827420 | 1,893 | py | Python | Playlist.py | FloSewn/Jukeberry | 9afb4a544ba1f35c6bde41b7bc25be0aa6b87954 | [
"MIT"
] | null | null | null | Playlist.py | FloSewn/Jukeberry | 9afb4a544ba1f35c6bde41b7bc25be0aa6b87954 | [
"MIT"
] | null | null | null | Playlist.py | FloSewn/Jukeberry | 9afb4a544ba1f35c6bde41b7bc25be0aa6b87954 | [
"MIT"
] | null | null | null | from Songlist import Songlist
import random
class Playlist(Songlist):
def __init__(self, songpool, n_minimum=3, n_maximum=7):
super().__init__("Playlist")
random.seed(539)
self.songpool = songpool
self.n_minimum = min((n_minimum, len(songpool.data)))
self.n_maximum = max((n_minimum, n_maximum))
self.musicplayer = None
self.add_random_songs()
def add_random_songs(self, take_out=True):
''' Add randomly songs from the songpool
until the required minimum of songs in the
playlist is met
'''
# Add random songs to initialize the playlist
n_pool = len(self.songpool.data)
n_add = 0
while (len(self.data) < self.n_minimum) and (n_pool > 0):
index = random.randint(0, n_pool-1)
song = self.songpool.data[index].song
song.queue( self )
if take_out:
song.hide()
self.songpool.remove_song( song )
n_pool = len(self.songpool.data)
n_add += 1
return (n_add > 0)
def next_song(self):
''' Play the next song in the playlist
'''
# Remove previous head
if len(self.data) < 1: return
current_song = self.data[0].song
current_song.is_queued = False
self.remove_song( current_song )
# Play next song
self.musicplayer.play_head()
# Make new head
if len(self.data) < 1: return
        current_song = self.data[0].song  # new head (assigned but currently unused)
def is_taking_songs(self):
''' Returns true, if the playlist can take some
more songs
FADE OUT LABEL: https://stackoverflow.com/questions/35351729/having-a-label-fade-out-in-kivy/35354967
'''
if len(self.data) < self.n_maximum:
return True
return False
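# Usage sketch (Songlist/Song behaviour is assumed from the surrounding
# project):
#   playlist = Playlist(songpool, n_minimum=3, n_maximum=7)
#   if playlist.is_taking_songs():
#       playlist.add_random_songs()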
| 26.291667 | 113 | 0.577919 | 243 | 1,893 | 4.333333 | 0.316872 | 0.068376 | 0.041785 | 0.037037 | 0.174739 | 0.144349 | 0.144349 | 0.144349 | 0.091168 | 0.091168 | 0 | 0.024352 | 0.327522 | 1,893 | 71 | 114 | 26.661972 | 0.802828 | 0.207079 | 0 | 0.166667 | 0 | 0 | 0.00567 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.055556 | 0 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b37267d6c2c6d0c826bf45142f7dbce6bb47ab24 | 4,131 | py | Python | model.py | khk4912/slash-dpy | 751f66cb0eb330d230695c179e44ce99b4575fd1 | [
"MIT"
] | 2 | 2021-05-09T17:46:10.000Z | 2021-05-09T17:46:13.000Z | model.py | khk4912/slash-dpy | 751f66cb0eb330d230695c179e44ce99b4575fd1 | [
"MIT"
] | null | null | null | model.py | khk4912/slash-dpy | 751f66cb0eb330d230695c179e44ce99b4575fd1 | [
"MIT"
] | null | null | null | import inspect
from dataclasses import dataclass
from typing import Callable, Optional, OrderedDict
import aiohttp
import discord
from discord.member import Member
from discord.role import Role
from discord.user import User
from exceptions import InvaildArgument
SUB_COMMAND = 1
SUB_COMMAND_GROUP = 2
STRING = 3
INTEGER = 4
BOOLEAN = 5
USER = 6
CHANNEL = 7
ROLE = 8
@dataclass
class SlashCommand:
"""
Model of SlashCommand.
Attributes
----------
name : str
Name of SlashCommand
description : str
Description of SlashCommand
options: OrdeeredDict[str, inspect.Parameter]
Options created based on parameter's annotations.
func : Callable
Function covered with slash decorator.
"""
name: str
description: Optional[str]
options: OrderedDict[str, inspect.Parameter]
func: Callable
def to_dict(self) -> dict:
        # Drop the leading context parameter; note this mutates self.options,
        # so to_dict() is expected to be called only once per command.
        self.options.popitem(last=False)
data = {}
options = []
data["name"] = self.name
data["description"] = self.description or f"Command {self.name}"
        for name, param in self.options.items():
            annot = param.annotation
            # Reset per option; in the original these two lived outside the
            # loop, so a required parameter following an Optional one would
            # wrongly keep required=False.
            option_type = None
            required = True
if annot == User or annot == Member:
option_type = USER
elif annot == int:
option_type = INTEGER
elif annot == str:
option_type = STRING
elif annot == Role:
option_type = ROLE
elif annot == Optional[User] or annot == Optional[Member]:
option_type = USER
required = False
elif annot == Optional[int]:
option_type = INTEGER
required = False
elif annot == Optional[str]:
option_type = STRING
required = False
elif annot == Optional[Role]:
option_type = ROLE
required = False
options.append(
{
"name": name,
"description": f"Param {name}",
"type": option_type,
"required": required,
}
)
data["options"] = options
print(data)
return data
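# For e.g. `async def greet(ctx, user: User, times: Optional[int])`, to_dict()
# would produce roughly (sketch; the leading ctx parameter is popped above):
#   {"name": "greet", "description": "Command greet", "options": [
#       {"name": "user", "description": "Param user", "type": 6, "required": True},
#       {"name": "times", "description": "Param times", "type": 4, "required": False}]}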
@dataclass
class InteractionContext:
"""
Model of InteractionContext
Attributes
----------
author : discord.User, optional
Message's author.
options : list, optional
List of options.
id : int
ID of interaction.
token : str
Token of interaction.
"""
author: Optional[User]
options: Optional[list]
id: int
token: str
async def send(
self,
content: Optional[str] = None,
embed: Optional[discord.Embed] = None,
private: bool = False,
tts: bool = False,
):
"""
Function that calllback to interaction.
Parameters
----------
content : str, optional
Content of message.
embed : discord.Embed, optional
Embed.
private : bool,
Whether to send a message that is visible only to the sender.
Default is False.
tts : bool
Whether to send TTS message. Default is False.
"""
embeds = []
if content is None and embed is None:
raise InvaildArgument("Both content and embeds are None.")
if embed is not None:
embeds.append(embed.to_dict())
data = {
"type": 4,
"data": {
"tts": tts,
"content": content,
"embeds": embeds,
},
}
if private:
data["data"]["flags"] = 64
url = f"https://discord.com/api/v8/interactions/{self.id}/{self.token}/callback"
headers = {"Authorization": f"Bot {self.token}"}
async with aiohttp.ClientSession() as session:
async with session.post(url, json=data, headers=headers) as r:
c = await r.text()
print(c)
| 25.5 | 88 | 0.535706 | 417 | 4,131 | 5.270983 | 0.316547 | 0.045496 | 0.030937 | 0.030027 | 0.040946 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004651 | 0.375454 | 4,131 | 161 | 89 | 25.658385 | 0.847287 | 0.123699 | 0 | 0.142857 | 0 | 0.010204 | 0.079922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010204 | false | 0 | 0.091837 | 0 | 0.214286 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
b37268a6e2b21c06a412ba95af142dfc30df87fd | 20,192 | py | Python | labelprop/voxelmorph2d.py | nathandecaux/labelprop | c92c453e12cc8575929d947db4ae7b266433eb52 | [
"MIT"
] | null | null | null | labelprop/voxelmorph2d.py | nathandecaux/labelprop | c92c453e12cc8575929d947db4ae7b266433eb52 | [
"MIT"
] | null | null | null | labelprop/voxelmorph2d.py | nathandecaux/labelprop | c92c453e12cc8575929d947db4ae7b266433eb52 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions.normal import Normal
from monai.networks.nets import UNet,Classifier,DenseNet,BasicUNet,DynUNet
import numpy as np
import math
from kornia.geometry.transform import get_perspective_transform,get_affine_matrix2d
from .MultiLevelNet import MultiLevelNet as MLNet
class VxmDense(nn.Module):
"""
VoxelMorph network for (unsupervised) nonlinear registration between two images.
"""
def __init__(self,
inshape,
int_steps=7,
int_downsize=2,
bidir=False,
use_probs=False,
src_feats=1,
trg_feats=1,
unet_half_res=False,
sub_levels=3):
"""
Parameters:
inshape: Input shape. e.g. (192, 192, 192)
nb_unet_features: Unet convolutional features. Can be specified via a list of lists with
the form [[encoder feats], [decoder feats]], or as a single integer.
If None (default), the unet features are defined by the default config described in
the unet class documentation.
nb_unet_levels: Number of levels in unet. Only used when nb_features is an integer.
Default is None.
unet_feat_mult: Per-level feature multiplier. Only used when nb_features is an integer.
Default is 1.
nb_unet_conv_per_level: Number of convolutions per unet level. Default is 1.
int_steps: Number of flow integration steps. The warp is non-diffeomorphic when this
value is 0.
int_downsize: Integer specifying the flow downsample factor for vector integration.
The flow field is not downsampled when this value is 1.
bidir: Enable bidirectional cost function. Default is False.
use_probs: Use probabilities in flow field. Default is False.
src_feats: Number of source image features. Default is 1.
trg_feats: Number of target image features. Default is 1.
unet_half_res: Skip the last unet decoder upsampling. Requires that int_downsize=2.
Default is False.
"""
super().__init__()
# internal flag indicating whether to return flow or integrated warp during inference
self.training = True
# ensure correct dimensionality
self.inshape=inshape
ndims = len(inshape)
assert ndims in [1, 2, 3], 'ndims should be one of 1, 2, or 3. found: %d' % ndims
self.levels=sub_levels+1
# configure core unet model
# self.unet_model = Unet(
# inshape,
# infeats=(src_feats + trg_feats)
# )
filters= [16, 32, 32, 32]
#DynUNet(spatial_dims, in_channels, out_channels, kernel_size, strides, upsample_kernel_size, filters=None, dropout=None, norm_name=('INSTANCE', {'affine': True}), act_name=('leakyrelu', {'inplace': True, 'negative_slope': 0.01}), deep_supervision=False, deep_supr_num=1, res_block=False, trans_bias=False)
# self.unet_model=DynUNet(2,2,2,(3,3,3,3),(1,2,2,2),(3,3,3,3))
# self.unet_model=UNet(src_feats*2,trg_feats*2,16,filters,strides=(len(filters)-1)*[2],num_res_units=(len(filters)-2))
# self.unet_model=BasicUNet(2,src_feats*2,2,features=filters,upsample='nontrainable')
# self.unet_model=MultiLevelNet(inshape=inshape,levels=sub_levels)
self.unet_model=MLNet(inshape=inshape,levels=sub_levels)
# self.unet_model=SingleLevelNet(inshape)
# configure unet to flow field layer
# Conv = getattr(nn, 'Conv%dd' % ndims)
# self.flow = Conv(16, ndims, kernel_size=3, padding=1)
        # # init flow layer with small weights and bias
# self.flow.weight = nn.Parameter(Normal(0, 1e-5).sample(self.flow.weight.shape))
# self.flow.bias = nn.Parameter(torch.zeros(self.flow.bias.shape))
# probabilities are not supported in pytorch
# configure optional resize layers (downsize)
if not unet_half_res and int_steps > 0 and int_downsize > 1:
self.resize = ResizeTransform(int_downsize, ndims)
else:
self.resize = None
# resize to full res
if int_steps > 0 and int_downsize > 1:
self.fullsize = ResizeTransform(1 / int_downsize, ndims)
else:
self.fullsize = None
# configure bidirectional training
self.bidir = bidir
# configure optional integration layer for diffeomorphic warp
down_shape = [int(dim / int_downsize) for dim in inshape]
self.integrate = VecInt(down_shape, int_steps) if int_steps > 0 else None
# configure transformer
self.transformer = SpatialTransformer(inshape,levels=sub_levels+1)
def forward(self, source, target, registration=False):
'''
Parameters:
source: Source image tensor.
target: Target image tensor.
registration: Return transformed image and flow. Default is False.
'''
# concatenate inputs and propagate unet
x = torch.cat([source, target], dim=1)
flow_field = self.unet_model(x)
# transform into flow field
# flow_field = self.flow(flow_field)
# resize flow for integration
pos_flow = flow_field
if self.resize:
pos_flow = self.resize(pos_flow)
preint_flow = pos_flow
# negate flow for bidirectional model
neg_flow = -pos_flow if self.bidir else None
# integrate to produce diffeomorphic warp
if self.integrate:
pos_flow = self.integrate(pos_flow)
neg_flow = self.integrate(neg_flow) if self.bidir else None
# resize to final resolution
if self.fullsize:
pos_flow = self.fullsize(pos_flow)
neg_flow = self.fullsize(neg_flow) if self.bidir else None
# warp image with flow field
y_source = self.transformer(source, pos_flow)
y_target = self.transformer(target, neg_flow) if self.bidir else None
# return non-integrated flow field if training
if not registration:
return (y_source, y_target, preint_flow) if self.bidir else (y_source, pos_flow, preint_flow)
else:
if self.bidir:
return y_source, pos_flow, neg_flow
else:
return y_source, pos_flow
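# Forward-pass sketch (2-D, 1-channel tensors of shape (B, 1, H, W);
# SpatialTransformer/VecInt/ResizeTransform are defined elsewhere in this
# module):
#   model = VxmDense(inshape=(192, 192))
#   y_source, pos_flow, preint_flow = model(moving, fixed)
#   y_source, pos_flow = model(moving, fixed, registration=True)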
class FeaturesToAffine(nn.Module):
"""
Dense network that takes pixels of features map and convert it to affine matrix
"""
def __init__(self,inshape):
super().__init__()
self.inshape=inshape
self.outshape=4*4
self.conv1=nn.Conv2d(16,1,kernel_size=1)
self.flatten=nn.Flatten()
self.layer=nn.Sequential(nn.Linear(self.inshape,self.outshape),nn.ReLU())
self.layer2=nn.Sequential(nn.Linear(self.outshape,self.outshape),nn.Tanh())
def forward(self,x):
x=self.conv1(x)
x=self.layer(self.flatten(x))
x=self.layer2(x)
return x
class AffineGenerator(nn.Module):
"""
Dense network that takes affine matrix and generate affine transformation
"""
def __init__(self,inshape):
super().__init__()
self.inshape=inshape
self.network=Classifier(inshape,9,(2,2,2,2,2),(2,2,2,2,2))
self.max_angle=nn.Parameter(30.*torch.ones(1))
self.max_scale=nn.Parameter(0.01*torch.ones(1))
def forward(self,x1,x2):
x=torch.cat([x1,x2],dim=1)
x=self.network(x)
# x=get_affine_matrix2d(translations=nn.Tanh()(x[:,0:2])*self.inshape[0]/2,center=nn.Tanh()(x[:,5:7])*self.inshape[0]/2,scale=nn.Tanh()(x[:,2:4])*self.max_scale+1,angle=nn.Tanh()(x[:,4])*self.max_angle)
        # kornia expects translations of shape (B, 2); take both components
        # (the single-column slice in the original would fail the shape check).
        x = get_affine_matrix2d(nn.Tanh()(x[:, 0:2]), center=x[:, 5:7], scale=x[:, 2:4], angle=x[:, 4])
return x
class AffineGenerator3D(nn.Module):
"""
Dense network that takes affine matrix and generate affine transformation
"""
def __init__(self,inshape):
super().__init__()
self.inshape=(2,inshape[0],inshape[1],inshape[2])
print(self.inshape)
#self.network=Classifier(inshape,16,(2,2,2,2,2),(2,2,2,2,2))
self.network=DenseNet(3,2,16)
def forward(self,x1,x2):
x=torch.cat([x1,x2],dim=1)
x=self.network(x)
return x.view(-1,4,4)
class SingleLevelNet(nn.Module):
"""
Convolutional network generating deformation field
"""
def __init__(self,inshape, in_channels=2,features=16):
super().__init__()
        print('Initializing SingleLevelNet')
self.inshape=inshape
self.in_channels=in_channels
self.conv_blocks=self.get_conv_blocks(in_channels,features)
def get_conv_blocks(self, in_channels, intermediate_features):
"""
For each levels, create a convolutional block with two Conv Tanh BatchNorm layers
"""
# return nn.Sequential(nn.BatchNorm2d(in_channels),
# nn.LeakyReLU(0.2),
# nn.Conv2d(in_channels, intermediate_features, kernel_size=3, padding=1),
# nn.BatchNorm2d(intermediate_features),
# nn.LeakyReLU(0.2),
# nn.Conv2d(intermediate_features, intermediate_features, kernel_size=3, padding=1),
# nn.BatchNorm2d(intermediate_features),
# nn.LeakyReLU(0.2),
# nn.Conv2d(intermediate_features, in_channels, kernel_size=3, padding=1),
# nn.LeakyReLU(0.2))
# BatchNorm must match the incoming channel count (in_channels), as in the
# commented-out variant above; the original used intermediate_features here,
# which crashes whenever in_channels != intermediate_features.
return nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.Conv2d(in_channels, intermediate_features, kernel_size=3, padding=1)
)
def forward(self,x):
"""
Forward pass of the network
Args:
x ([Tensor]): Tensor of shape (B,C,H,W)
Returns:
[Tensor]: Tensor of shape (B,C,H,W)
"""
x=self.conv_blocks(x)
print(x.shape)
# Note: conv_blocks outputs intermediate_features channels, so this view
# effectively folds the surplus channels into the batch dimension whenever
# features != in_channels.
return x.view(-1, self.in_channels, x.shape[-2], x.shape[-1])
class MultiLevelNet(nn.Module):
"""
Convolutional network generating deformation field with different scales.
"""
def __init__(self, inshape, in_channels=2, levels=3, features=16):
super().__init__()
print('Initializing MultiLevelNet')
self.inshape = inshape
self.levels = levels
self.in_channels = in_channels
self.downsample_blocks = self.get_downsample_blocks(in_channels, levels)
# Note: the downsample blocks use stride 2, so spatial sizes halve at each
# level; this inshape/(i+1) list does not reflect that and appears unused.
self.shapes = [int(self.inshape[0] / (i + 1)) for i in range(levels + 1)]
self.conv_blocks = self.get_conv_blocks(in_channels, levels, features)
# self.transformers = self.get_transformer_list(levels, inshape)
def get_downsample_blocks(self, in_channels, levels):
blocks = nn.ModuleList()
for i in range(levels):
blocks.append(nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, stride=2))
return blocks
def get_conv_blocks(self, in_channels, levels,intermediate_features):
"""
For each levels, create a convolutional block with two Conv Tanh BatchNorm layers
"""
blocks = nn.ModuleList()
for i in range(levels+1):
blocks.append(nn.Sequential(
nn.BatchNorm2d(in_channels),
nn.Conv2d(in_channels, intermediate_features, kernel_size=3, padding=1),
nn.Tanh(),
nn.BatchNorm2d(intermediate_features),
nn.Conv2d(intermediate_features, in_channels, kernel_size=3, padding=1),
nn.Upsample(self.inshape, mode='bilinear',align_corners=True)))
return blocks
def get_transformer_list(self,levels,inshape):
"""
Create a list of spatial transformers, one per level.
"""
transformers = nn.ModuleList()
for i in range(levels):
transformers.append(SpatialTransformer((inshape[0] // 2 ** i, inshape[1] // 2 ** i)))  # integer division keeps the grid sizes integral
return transformers
def compose_list(self,flows):
flows=list(flows)
compo=flows[-1]
for flow in reversed(flows[:-1]):
compo=self.compose_deformation(flow,compo)
return compo
def compose_deformation(self, flow_i_k, flow_k_j):
"""Return the composed flow field flow_i_j = flow_k_j(flow_i_k(.)).
Args:
flow_i_k: flow field from i to k
flow_k_j: flow field from k to j
Returns:
[Tensor]: Flow field flow_i_j = flow_k_j(flow_i_k(.))
"""
# Note: self.transformer is never assigned in __init__ (get_transformer_list
# is commented out there), so calling this method would raise AttributeError.
flow_i_j = flow_k_j + self.transformer(flow_i_k, flow_k_j)
return flow_i_j
def forward(self,x,registration=False):
"""
For each level, downsample the input and apply the convolutional block.
"""
x_levels=[x]
for downsampling in self.downsample_blocks:
x_downsampled = downsampling(x_levels[-1])
x_levels.append(x_downsampled)
# for i in range(len(self.conv_blocks)):
# x_conv = self.conv_blocks[i](x_levels[-1])
# x_levels.append(x_conv)
# #For each x_levels,interpolate to the original resolution, and sum x_levels
# for i in range(1,len(x_levels)):
# x_levels[i]=F.interpolate(x_levels[i],size=self.inshape,mode='bilinear')
for i in range(len(x_levels)):
x_levels[i]=self.conv_blocks[i](x_levels[i])
return torch.stack(x_levels,dim=0).mean(0)
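# Illustrative sketch (not part of the original module): the network averages
# per-level deformation estimates, each upsampled back to the input resolution.
def _demo_multilevelnet():
    net = MultiLevelNet((32, 32))
    source_and_target = torch.rand(1, 2, 32, 32)  # two stacked images
    return net(source_and_target)  # (1, 2, 32, 32) deformation estimate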
class SpatialTransformer(nn.Module):
"""
N-D Spatial Transformer
"""
def __init__(self, size, mode='bilinear',levels=4):
super().__init__()
self.mode = mode
# create sampling grid
vectors = [torch.arange(0, s) for s in size]
# meshgrid uses 'ij' indexing here (newer PyTorch versions warn and accept
# an explicit indexing='ij' argument).
grids = torch.meshgrid(vectors)
grid = torch.stack(grids)
grid = torch.unsqueeze(grid, 0)
grid = grid.type(torch.FloatTensor)
# grid= torch.cat([grid]*levels,dim=0)
self.levels=levels
# registering the grid as a buffer cleanly moves it to the GPU, but it also
# adds it to the state dict. this is annoying since everything in the state dict
# is included when saving weights to disk, so the model files are way bigger
# than they need to be. so far, there does not appear to be an elegant solution.
# see: https://discuss.pytorch.org/t/how-to-register-buffer-without-polluting-state-dict
self.register_buffer('grid', grid)
def forward(self, src, flow):
# new locations
new_locs = self.grid + flow
shape = flow.shape[2:]
# need to normalize grid values to [-1, 1] for resampler
for i in range(len(shape)):
new_locs[:, i, ...] = 2 * (new_locs[:, i, ...] / (shape[i] - 1) - 0.5)
# move channels dim to last position
# also not sure why, but the channels need to be reversed
if len(shape) == 2:
new_locs = new_locs.permute(0, 2, 3, 1)
new_locs = new_locs[..., [1, 0]]
elif len(shape) == 3:
new_locs = new_locs.permute(0, 2, 3, 4, 1)
new_locs = new_locs[..., [2, 1, 0]]
# if src.shape[0]==1:
# src=src.repeat(self.levels,1,1,1)
return F.grid_sample(src, new_locs, align_corners=True, mode=self.mode)
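# Illustrative sketch (not part of the original module): the transformer adds
# the flow to an identity grid, rescales it to [-1, 1] and samples the source
# with grid_sample, so a zero flow reproduces the input image.
def _demo_spatial_transformer():
    image = torch.rand(1, 1, 32, 32)
    zero_flow = torch.zeros(1, 2, 32, 32)
    warp = SpatialTransformer((32, 32))
    return warp(image, zero_flow)  # approximately equal to image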
class VecInt(nn.Module):
"""
Integrates a vector field via scaling and squaring.
"""
def __init__(self, inshape, nsteps):
super().__init__()
assert nsteps >= 0, 'nsteps should be >= 0, found: %d' % nsteps
self.nsteps = nsteps
self.scale = 1.0 / (2 ** self.nsteps)
self.transformer = SpatialTransformer(inshape)
def forward(self, vec):
vec = vec * self.scale
for _ in range(self.nsteps):
vec = vec + self.transformer(vec, vec)
return vec
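# Illustrative sketch (not part of the original module): scaling and squaring
# treats the input as a stationary velocity field v, scales it by 1 / 2**nsteps,
# then composes the small displacement with itself nsteps times, approximating
# the diffeomorphic warp exp(v).
def _demo_vecint():
    velocity = torch.randn(1, 2, 32, 32)
    integrate = VecInt(inshape=(32, 32), nsteps=7)
    return integrate(velocity)  # displacement field, same shape as velocity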
class ResizeTransform(nn.Module):
"""
Resize a transform, which involves resizing the vector field *and* rescaling it.
"""
def __init__(self, vel_resize, ndims):
super().__init__()
self.factor = 1.0 / vel_resize
self.mode = 'linear'
if ndims == 2:
self.mode = 'bi' + self.mode
elif ndims == 3:
self.mode = 'tri' + self.mode
def forward(self, x):
if self.factor < 1:
# resize first to save memory
x = F.interpolate(x, align_corners=True, scale_factor=self.factor, mode=self.mode)
x = self.factor * x
elif self.factor > 1:
# multiply first to save memory
x = self.factor * x
x = F.interpolate(x, align_corners=True, scale_factor=self.factor, mode=self.mode)
# don't do anything if resize is 1
return x
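# Illustrative sketch (not part of the original module): halving a flow field's
# resolution must also halve its vector magnitudes, because displacements are
# expressed in (now coarser) pixel units.
def _demo_resize_transform():
    flow = torch.randn(1, 2, 64, 64)
    half = ResizeTransform(vel_resize=2, ndims=2)  # factor = 1 / 2
    return half(flow)  # shape (1, 2, 32, 32), values scaled by 0.5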
import torch
import torch.nn.functional as F
import numpy as np
import math
class NCC:
"""
Local (over window) normalized cross correlation loss.
"""
def __init__(self, win=None):
self.win = win
def loss(self, y_true, y_pred, mean=True):
Ii = y_true
Ji = y_pred
# get dimension of volume
# assumes Ii, Ji are sized [batch_size, *vol_shape, nb_feats]
ndims = len(list(Ii.size())) - 2
assert ndims in [1, 2, 3], "volumes should be 1 to 3 dimensions. found: %d" % ndims
# set window size
win = [9] * ndims if self.win is None else self.win
# compute filters
sum_filt = torch.ones([1, 1, *win]).to(y_pred.device)
pad_no = math.floor(win[0] / 2)
if ndims == 1:
stride = (1)
padding = (pad_no)
elif ndims == 2:
stride = (1, 1)
padding = (pad_no, pad_no)
else:
stride = (1, 1, 1)
padding = (pad_no, pad_no, pad_no)
# get convolution function
conv_fn = getattr(F, 'conv%dd' % ndims)
# compute CC squares
I2 = Ii * Ii
J2 = Ji * Ji
IJ = Ii * Ji
I_sum = conv_fn(Ii, sum_filt, stride=stride, padding=padding)
J_sum = conv_fn(Ji, sum_filt, stride=stride, padding=padding)
I2_sum = conv_fn(I2, sum_filt, stride=stride, padding=padding)
J2_sum = conv_fn(J2, sum_filt, stride=stride, padding=padding)
IJ_sum = conv_fn(IJ, sum_filt, stride=stride, padding=padding)
win_size = np.prod(win)
u_I = I_sum / win_size
u_J = J_sum / win_size
cross = IJ_sum - u_J * I_sum - u_I * J_sum + u_I * u_J * win_size
I_var = I2_sum - 2 * u_I * I_sum + u_I * u_I * win_size
J_var = J2_sum - 2 * u_J * J_sum + u_J * u_J * win_size
cc = cross * cross / (I_var * J_var + 1e-5)
if mean:
return -torch.mean(cc)
else:
return cc
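# Illustrative sketch (assumption, not in the original file): NCC expects
# tensors shaped (batch, channels, *spatial); identical inputs give a squared
# local correlation near 1 everywhere, so the loss approaches -1.
def _demo_ncc():
    image = torch.rand(1, 1, 32, 32)
    return NCC(win=[9, 9]).loss(image, image)  # close to tensor(-1.)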
class MSE:
"""
Mean squared error loss.
"""
def loss(self, y_true, y_pred):
return torch.mean((y_true - y_pred) ** 2)
class Dice:
"""
N-D dice for segmentation
"""
def loss(self, y_true, y_pred):
ndims = len(list(y_pred.size())) - 2
vol_axes = list(range(2, ndims + 2))
top = 2 * (y_true * y_pred).sum(dim=vol_axes)
bottom = torch.clamp((y_true + y_pred).sum(dim=vol_axes), min=1e-5)
dice = torch.mean(top / bottom)
return -dice
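# Illustrative sketch (not in the original file): Dice of a mask with itself
# is 1, so the loss (its negation) is -1.
def _demo_dice():
    mask = torch.zeros(1, 1, 8, 8)
    mask[..., :4, :4] = 1.0
    return Dice().loss(mask, mask)  # tensor(-1.)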
class Grad:
"""
Gradient loss on the flow field (this implementation differences only the
two spatial axes of a 2-D field, despite the N-D name).
"""
def __init__(self, penalty='l1', loss_mult=None):
self.penalty = penalty
self.loss_mult = loss_mult
def loss(self, _, y_pred):
dy = torch.abs(y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :])
dx = torch.abs(y_pred[:, :, :, 1:] - y_pred[:, :, :, :-1])
if self.penalty == 'l2':
dy = dy * dy
dx = dx * dx
d = torch.mean(dx) + torch.mean(dy)
grad = d / 2.0
if self.loss_mult is not None:
grad *= self.loss_mult
return grad | 37.186004 | 314 | 0.595186 | 2,683 | 20,192 | 4.320164 | 0.169586 | 0.019843 | 0.004659 | 0.004831 | 0.312398 | 0.268398 | 0.217151 | 0.159952 | 0.126305 | 0.11923 | 0 | 0.02218 | 0.296652 | 20,192 | 543 | 315 | 37.186004 | 0.793902 | 0.332359 | 0 | 0.221053 | 0 | 0 | 0.016905 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 1 | 0.105263 | false | 0 | 0.045614 | 0.003509 | 0.273684 | 0.014035 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fa6051aedcd77c23a8930e26b6cd51d26ec9f9d | 607 | py | Python | 4fun/open_web.py | gatitoz-luan/extras | b5caf73afcdc5e24f3ad97ef565cb15a7fee2764 | [
"MIT"
] | null | null | null | 4fun/open_web.py | gatitoz-luan/extras | b5caf73afcdc5e24f3ad97ef565cb15a7fee2764 | [
"MIT"
] | null | null | null | 4fun/open_web.py | gatitoz-luan/extras | b5caf73afcdc5e24f3ad97ef565cb15a7fee2764 | [
"MIT"
] | null | null | null | from time import sleep
import urllib.request
import webbrowser
try:
urllib.request.urlopen("https://www.youtube.com/watch?v=ffbI0JMW7g4")
except Exception as erro:  # Wifi turned off, or the site is not up
print('\033[1;31mThe best video on youtube is not accessible right now.\033[m')
er = erro  # Variable used to avoid error markers in the code.
else:  # With wifi on:
print('\033[1;32mSuccessfully accessed the link!\033[m')
print('Opening youtube in 3, 2, 1...')
sleep(3)
webbrowser.open('https://www.youtube.com/watch?v=ffbI0JMW7g4', new=2)
| 43.357143 | 86 | 0.708402 | 97 | 607 | 4.43299 | 0.618557 | 0.060465 | 0.069767 | 0.083721 | 0.162791 | 0.162791 | 0.162791 | 0 | 0 | 0 | 0 | 0.057884 | 0.174629 | 607 | 14 | 87 | 43.357143 | 0.800399 | 0.197694 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0.230769 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fa6797946f30447b24a6199a63c2d91f44255e0 | 5,546 | py | Python | crystal_toolkit/components/symmetry.py | mkhorton/mp-dash-components | b9af1b59f0120a90897631d9a7f8d9f0ae561de9 | [
"MIT"
] | null | null | null | crystal_toolkit/components/symmetry.py | mkhorton/mp-dash-components | b9af1b59f0120a90897631d9a7f8d9f0ae561de9 | [
"MIT"
] | 5 | 2018-10-18T19:52:12.000Z | 2018-11-17T19:02:49.000Z | crystal_toolkit/components/symmetry.py | mkhorton/mp-dash-components | b9af1b59f0120a90897631d9a7f8d9f0ae561de9 | [
"MIT"
] | null | null | null | from fractions import Fraction
from dash import html
import numpy as np
from dash import callback_context
from dash.dependencies import Input, Output
from pymatgen.core.structure import Structure
from pymatgen.symmetry.analyzer import SpacegroupAnalyzer
from pymatgen.util.string import unicodeify_spacegroup, unicodeify_species
from crystal_toolkit.core.panelcomponent import PanelComponent
from crystal_toolkit.helpers.layouts import (
H5,
Column,
Columns,
Loading,
get_data_list,
get_table,
)
class SymmetryPanel(PanelComponent):
@staticmethod
def pretty_frac_format(x):
x = x % 1
fraction = Fraction(x).limit_denominator(8)
if np.allclose(x, 1):
x_str = "0"
elif not np.allclose(x, float(fraction)):
x = np.around(x, decimals=3)
x_str = f"{x:.3g}"
else:
x_str = str(fraction)
return x_str
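# Illustrative examples (not in the original file):
#   pretty_frac_format(0.5)   -> '1/2'
#   pretty_frac_format(1.25)  -> '1/4'   (coordinates wrap modulo 1)
#   pretty_frac_format(0.123) -> '0.123' (no small-denominator fraction fits)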
@property
def title(self):
return "Symmetry"
@property
def description(self):
return "Analyze the symmetry of your crystal structure or molecule."
def contents_layout(self):
state = {"symprec": 0.01, "angle_tolerance": 5}
symprec = self.get_numerical_input(
label="Symmetry-finding tolerance",
kwarg_label="symprec",
state=state,
help_str="Tolerance of distance between atomic positions and between lengths "
"of lattice vectors to be tolerated in the symmetry finding in Ångstroms. "
"The angle distortion between lattice vectors is converted to a length and "
"compared with this distance tolerance.",
shape=(),
min=0,
)
angle_tolerance = self.get_numerical_input(
label="Angle tolerance",
kwarg_label="angle_tolerance",
state=state,
help_str="Explicit angle tolerance for symmetry finding in degrees. "
"Set to a negative value to disable.",
shape=(),
)
return html.Div(
[
symprec,
angle_tolerance,
html.Br(),
html.Br(),
Loading(id=self.id("analysis")),
]
)
def generate_callbacks(self, app, cache):
super().generate_callbacks(app, cache)
@app.callback(
Output(self.id("analysis"), "children"),
[
Input(self.id(), "data"),
Input(self.get_kwarg_id("symprec"), "value"),
Input(self.get_kwarg_id("angle_tolerance"), "value"),
],
)
def update_contents(data, symprec, angle_tolerance):
if not data:
return html.Div()
struct = self.from_data(data)
if not isinstance(struct, Structure):
return html.Div(
"Can only analyze symmetry of crystal structures at present."
)
kwargs = self.reconstruct_kwargs_from_state(callback_context.inputs)
symprec = kwargs["symprec"]
angle_tolerance = kwargs["angle_tolerance"]
if symprec <= 0:
return html.Span(
f"Please use a positive symmetry-finding tolerance (currently {symprec})."
)
sga = SpacegroupAnalyzer(
struct, symprec=symprec, angle_tolerance=angle_tolerance
)
try:
data = dict()
data["Crystal System"] = sga.get_crystal_system().title()
data["Lattice System"] = sga.get_lattice_type().title()
data["Hall Number"] = sga.get_hall()
data["International Number"] = sga.get_space_group_number()
data["Symbol"] = unicodeify_spacegroup(sga.get_space_group_symbol())
data["Point Group"] = unicodeify_spacegroup(
sga.get_point_group_symbol()
)
sym_struct = sga.get_symmetrized_structure()
except Exception:
return html.Span(
f"Failed to calculate symmetry with this combination of "
f"symmetry-finding ({symprec}) and angle tolerances ({angle_tolerance})."
)
datalist = get_data_list(data)
wyckoff_contents = []
wyckoff_data = sorted(
zip(sym_struct.wyckoff_symbols, sym_struct.equivalent_sites),
key=lambda x: "".join(filter(lambda w: w.isalpha(), x[0])),
)
for symbol, equiv_sites in wyckoff_data:
wyckoff_contents.append(
html.Label(
f"{symbol}, {unicodeify_species(equiv_sites[0].species_string)}",
className="mpc-label",
)
)
site_data = [
(
self.pretty_frac_format(site.frac_coords[0]),
self.pretty_frac_format(site.frac_coords[1]),
self.pretty_frac_format(site.frac_coords[2]),
)
for site in equiv_sites
]
wyckoff_contents.append(get_table(site_data))
return Columns(
[
Column([H5("Overview"), datalist]),
Column([H5("Wyckoff Positions"), html.Div(wyckoff_contents)]),
]
)
| 33.817073 | 94 | 0.545979 | 555 | 5,546 | 5.284685 | 0.331532 | 0.062053 | 0.021821 | 0.020457 | 0.065462 | 0.034777 | 0.034777 | 0 | 0 | 0 | 0 | 0.005674 | 0.364407 | 5,546 | 163 | 95 | 34.02454 | 0.826383 | 0 | 0 | 0.086957 | 0 | 0 | 0.182474 | 0.009196 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.072464 | 0.014493 | 0.188406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fa6c536a0c61526a9ec8c3d285219c9f61ba8f1 | 14,306 | py | Python | operation_obj.py | Frichetten/aws_api_shapeshifter | c5b8891ca98eb290c0773c5fb46672d64cd7fae0 | [
"MIT"
] | 16 | 2022-02-13T00:35:30.000Z | 2022-02-14T20:11:55.000Z | operation_obj.py | Frichetten/aws_api_shapeshifter | c5b8891ca98eb290c0773c5fb46672d64cd7fae0 | [
"MIT"
] | null | null | null | operation_obj.py | Frichetten/aws_api_shapeshifter | c5b8891ca98eb290c0773c5fb46672d64cd7fae0 | [
"MIT"
] | 1 | 2022-02-16T18:27:34.000Z | 2022-02-16T18:27:34.000Z | from xeger import Xeger
import api_signer
import protocol_formatter
import aws_api_shapeshifter
""" The operation_obj will be the primary location where parameters are altered and configured.
Every user generated modification will pass throug here. As a result this code is ugly and I'm
only partially sorry. """
class Operation:
def __init__(self, metadata, endpoints, shapes, operation):
self.name = operation['name']
self.method = operation['http']['method']
self.request_uri = operation['http']['requestUri']
self.operation = operation
self.endpoints = endpoints['endpoints']
self.metadata = metadata
self.shapes = shapes
self.endpoint_prefix = metadata['endpointPrefix']
self.target_prefix = self._resolve_target_prefix(self.metadata)
self.shape_input = self._resolve_shape_input(operation)
# make_request will take the requested modifications (if any) and make the request to the AWS API
def make_request(self, **kwargs):
name = self.name
self.input_format = self._parse_input_shape(self.metadata['endpointPrefix'], self.shapes, self.operation, kwargs)
version = self.metadata['apiVersion']
credentials = _resolve_credentials(kwargs)
endpoint_prefix = self._resolve_endpoint_prefix(kwargs)
method = self._resolve_method(kwargs)
region = self._resolve_region(kwargs)
host = self._resolve_host(region, endpoint_prefix, kwargs)
endpoint = self._resolve_endpoint(host, kwargs)
request_uri = self._resolve_request_uri(kwargs)
# TODO: Clean this up
if 'noparam' in kwargs.keys() or 'noparams' in kwargs.keys():
# TODO: Should be {}
self.input_format = {}
if 'protocol' in kwargs.keys():
protocol = kwargs['protocol']
else:
protocol = self.metadata['protocol']
# Depending on the protocol we need to format inputs differently
if protocol == "query":
formatted_request = protocol_formatter.query_protocol_formatter(
host,
credentials.token,
name,
version,
kwargs,
self.input_format
)
response = api_signer.query_signer(
credentials,
method,
endpoint_prefix,
host,
region,
endpoint,
request_uri,
formatted_request
)
return response
if protocol == "json":
json_version = self._resolve_json_version(self.metadata)
amz_target = self._resolve_target_prefix(self.metadata)
amz_target += "." + self.name
signing_name = self._resolve_signing_name(self.metadata, kwargs)
formatted_request = protocol_formatter.json_protocol_formatter(
host,
credentials.token,
json_version,
amz_target,
kwargs,
self.input_format
)
response = api_signer.json_signer(
credentials,
method,
endpoint_prefix,
host,
region,
endpoint,
signing_name,
request_uri,
formatted_request
)
return response
if protocol == "rest-json":
json_version = self._resolve_json_version(self.metadata)
signing_name = self._resolve_signing_name(self.metadata, kwargs)
formatted_request = protocol_formatter.rest_json_protocol_formatter(
host,
credentials.token,
json_version,
request_uri,
kwargs,
self.input_format
)
response = api_signer.rest_json_signer(
credentials,
method,
endpoint_prefix,
host,
region,
endpoint,
signing_name,
formatted_request['request_uri'],
formatted_request
)
return response
return None
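# Illustrative usage sketch (assumption, not part of the original file):
# nearly every component of the signed request can be overridden through
# keyword arguments, e.g.
#
#   op.make_request(creds=my_credentials,
#                   region='us-east-1',
#                   method='POST',
#                   params={'MaxResults': 1})
#
# my_credentials and the parameter values above are hypothetical.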
def _has_safe_regions(self, regions):
# return true if it has safe regions
# false if not
for region in regions:
if region in SAFE_REGIONS:
return True
return False
def _get_safe_region(self, regions):
for region in regions:
if region in SAFE_REGIONS:
return region
def _test_hostname(self, hostname):
try:
socket.gethostbyname(hostname)
return True
except socket.error:
return False
def _resolve_endpoint_prefix(self, kwargs):
if 'endpoint_prefix' in kwargs.keys():
return kwargs['endpoint_prefix']
return self.endpoint_prefix
def _resolve_method(self, kwargs):
if 'method' in kwargs.keys():
return kwargs['method']
return self.method
def _resolve_signing_name(self, metadata, kwargs):
if 'signing_name' in kwargs.keys():
return kwargs['signing_name']
# if we have a signing name
if 'signingName' in metadata.keys():
return metadata['signingName']
# Give up and go with endpointPrefix
return metadata['endpointPrefix']
def _resolve_region(self, kwargs):
if 'region' in kwargs.keys():
return kwargs['region']
# ngl, gonna prefer us-east-1
if 'us-east-1' in self.endpoints.keys():
return 'us-east-1'
# Otherwise, pick a random region that they support
# Need to check for a credential scope region
potential = list(self.endpoints.keys())[0]
if "credentialScope" in self.endpoints[potential].keys():
return self.endpoints[potential]['credentialScope']['region']
return potential
def _resolve_host(self, region, endpoint_prefix, kwargs):
if 'host' in kwargs.keys():
return kwargs['host']
# iam is an example of this - Need to check for a hostname for a region
# not in the keys (aws-global)
potential = list(self.endpoints.keys())[0]
if 'credentialScope' in self.endpoints[potential].keys() and region == self.endpoints[potential]['credentialScope']['region']:
return self.endpoints[potential]['hostname']
if 'hostname' in self.endpoints[region].keys():
return self.endpoints[region]['hostname']
# TODO: Check, but I don't think we ever get here.
# I know this is broken but will wait to fix until I have
# more examples
return endpoint_prefix + "." + region + ".amazonaws.com"
def _resolve_endpoint(self, host, kwargs):
if 'endpoint' in kwargs.keys():
return kwargs['endpoint']
return "https://" + host
def _resolve_request_uri(self, kwargs):
if 'request_uri' in kwargs.keys():
return kwargs['request_uri']
return self.request_uri
def _resolve_target_prefix(self, metadata):
if 'targetPrefix' in metadata.keys():
return metadata['targetPrefix']
else:
return metadata['endpointPrefix']
def _resolve_json_version(self, metadata):
if 'jsonVersion' in metadata.keys():
return metadata['jsonVersion']
# Otherwise you likely know what you want to do
# In fact it's likely what you're testing
# I'll give you a 1.0 so you don't complain
return "1.0"
def _resolve_shape_input(self, operation):
if 'input' in operation.keys():
return operation['input']['shape']
return ""
# GIVE THIS THE SHAPE, NOT THE NAME
def _resolve_unknown_shape(self, shapes, unknown_shape):
unknown_shape_type = unknown_shape['type']
if unknown_shape_type == 'string':
return self._gen_string_shape(unknown_shape)
if unknown_shape_type == 'integer':
return 1
if unknown_shape_type == 'boolean':
return "false"
if unknown_shape_type == 'structure':
return self._resolve_structure(shapes, unknown_shape)
if unknown_shape_type == 'list':
return self._resolve_list(shapes, unknown_shape)
if unknown_shape_type == 'timestamp':
return self._resolve_timestamp(shapes, unknown_shape)
if unknown_shape_type == 'blob':
return self._resolve_blob(shapes, unknown_shape)
if unknown_shape_type == 'long':
return 1
if unknown_shape_type == 'map':
return "map"
if unknown_shape_type == 'double' or unknown_shape_type == 'float':
return 2.0
# Map not implemented -Xray
print(unknown_shape_type)
def _resolve_structure(self, shapes, structure):
to_return = {}
for member in structure['members']:
if 'required' in structure.keys() and member in structure['required']:
shape_name = structure['members'][member]['shape']
to_return[member] = self._resolve_unknown_shape(shapes, shapes[shape_name])
return to_return
def _resolve_list(self, shapes, list_shape):
# This is an interesting problem. We should return this in a list
# The reason being that NORMAL operations may have multiple items.
# In our current form, we only give one.
member_shape = list_shape['member']['shape']
#if 'locationName' in list_shape['member'].keys():
# location_name = list_shape['member']['locationName']
#else:
# # Learned from elasticache RemoveTagsFromResource
# location_name = 'member'
result = self._resolve_unknown_shape(shapes, shapes[member_shape])
return [result]
def _resolve_timestamp(self, shapes, timestamp):
return "1615593755.796672"
def _resolve_blob(self, shapes, blob):
return "bbbbbbbbebfbebebbebebb"
def _parse_input_shape(self, name, shapes, operation, kwargs):
to_return = {}
# We may have params in our kwargs and we need to process them
# We are going to support a "Append" format
# Not sure what that means right now. Going to work on it
# Not every operation has an input
if 'input' in operation.keys():
input_shape_name = operation['input']['shape']
shape = shapes[input_shape_name]
if "required" in shape.keys():
# This is actual torture
# TODO: Refactor
for required in shape['required']:
shape_name = shape['members'][required]['shape']
result = self._resolve_unknown_shape(shapes, shapes[shape_name])
to_return[required] = result
# Parsing params for kwargs
if 'params' in kwargs.keys():
passed_params = kwargs['params']
for key in passed_params.keys():
to_return[key] = passed_params[key]
return to_return
# Leaving this code here for now. Going to need to
# improve this. Query protocol is an edge case
# because it uses a weird naming format for
# lists. Maybe move this into the query protocol
# formatter?
# NOTE: the loop below referenced an undefined name ('values'), so it raised
# NameError whenever an operation had no input shape; it is kept commented
# out verbatim until the query-protocol list naming is reworked.
# i = 2
# for item in values:
#     if "." in item[0]:
#         if "Key" in item[0]:
#             new_name = item[0] + "." + str(i // 2)
#             new_name = new_name.replace(".Key.", ".") + ".Key"
#         elif "Value" in item[0]:
#             new_name = item[0] + "." + str(i // 2)
#             new_name = item[0].replace(".Value.", ".") + ".Value"
#         else:
#             new_name = item[0] + "." + str(i // 2)
#             i += 1
#         to_return[new_name] = item[1]
#     else:
#         to_return[item[0]] = item[1]
# return to_return
return {}
def _flatten_list(self, list_in):
if isinstance(list_in, list):
for l in list_in:
for y in self._flatten_list(l):
yield y
else:
yield list_in
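# Example (illustrative): list(self._flatten_list([1, [2, [3]], 4]))
# yields [1, 2, 3, 4].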
def _gen_string_shape(self, member_shape):
# min, max, pattern, enum
if "pattern" in member_shape.keys():
return self._gen_regex_pattern(member_shape['pattern'])
if "enum" in member_shape.keys():
return member_shape['enum'][0]
if "min" in member_shape.keys():
return 'a'*member_shape['min']
return 'aareturngen'
def _gen_regex_pattern(self, pattern):
# Some patterns break
x = Xeger()
try:
result = x.xeger(pattern)
return result
except:
return "aregex"
def _resolve_credentials(kwargs):
""" A user can send a few types of creds at us, and we
have to be able to resolve them. Output should
always be a credential object with .access_key,
.secret_key, and .token accessible """
kwargs_keys = kwargs.keys()
# Assume that access key modification is intentional and
# is a higher priority
if 'access_key' in kwargs_keys:
return aws_api_shapeshifter.Credentials(
kwargs['access_key'],
kwargs['secret_key'],
kwargs['token']
)
elif 'creds' in kwargs_keys:
return kwargs['creds']
elif 'credentials' in kwargs_keys:
return kwargs['credentials']
return aws_api_shapeshifter.Credentials("", "", "")
def _resolve_region_hostname(endpoints, preferred=''):
# If there is a hostname, return it;
# otherwise just return the region.
# (Stub: not implemented in the original file.)
None | 34.555556 | 134 | 0.572557 | 1,556 | 14,306 | 5.073907 | 0.184447 | 0.036479 | 0.021279 | 0.022799 | 0.30665 | 0.223813 | 0.174668 | 0.133882 | 0.08993 | 0.077517 | 0 | 0.004787 | 0.342933 | 14,306 | 414 | 135 | 34.555556 | 0.835106 | 0.14113 | 0 | 0.281588 | 0 | 0 | 0.073753 | 0.001835 | 0 | 0 | 0 | 0.002415 | 0 | 1 | 0.093863 | false | 0.01083 | 0.01444 | 0.00722 | 0.33213 | 0.00361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fa7dec69496a095a2c47ea6cc15a0aee006cc97 | 29,064 | py | Python | chesstab/gui/uci.py | RogerMarsh/chesstab | 01d375dc6bf025b621612a84513e55c4640a78ad | [
"BSD-3-Clause"
] | null | null | null | chesstab/gui/uci.py | RogerMarsh/chesstab | 01d375dc6bf025b621612a84513e55c4640a78ad | [
"BSD-3-Clause"
] | null | null | null | chesstab/gui/uci.py | RogerMarsh/chesstab | 01d375dc6bf025b621612a84513e55c4640a78ad | [
"BSD-3-Clause"
] | null | null | null | # uci.py
# Copyright 2015 Roger Marsh
# License: See LICENSE.TXT (BSD licence)
"""Control multiple UCI compliant chess engines."""
import tkinter
import tkinter.messagebox
import tkinter.filedialog
from solentware_grid.core.dataclient import DataSource
from solentware_misc.gui.exceptionhandler import ExceptionHandler
from uci_net.engine import (
ReservedOptionNames,
)
from ..core.uci import UCI as _UCI
from .enginegrid import EngineGrid
from .enginerow import make_ChessDBrowEngine
from ..core.filespec import (
ENGINE_FILE_DEF,
COMMAND_FIELD_DEF,
)
from .eventspec import EventSpec
import sys
_win32_platform = sys.platform == "win32"
_freebsd_platform = sys.platform.startswith("freebsd")
del sys
class UCI(ExceptionHandler):
def __init__(self, menu_engines, menu_commands):
"""Populate the Engines and Commands menus and create the UCI interface."""
self._do_toplevel = None
self._show_engines_toplevel = None
self.database = None
self.base_engines = None
self.visible_scrollbars = True
# The chess engine definition widgets in their own Toplevel.
# Should these be helper class responsibility as well?
# Similar construct in .chess_ui.ChessUI too.
self.engines_in_toplevels = set()
self.menu_engines = menu_engines
self.menu_commands = menu_commands
menu_engines.add_separator()
for accelerator, function in (
(
EventSpec.menu_engines_position_queues,
self.position_queue_status,
),
(EventSpec.menu_engines_show_engines, self.show_engines),
(EventSpec.menu_engines_start_engine, self.start_engine),
(EventSpec.menu_engines_quit_all_engines, self.quit_all_engines),
):
menu_engines.add_command(
label=accelerator[1],
command=self.try_command(function, menu_engines),
underline=accelerator[3],
)
menu_engines.add_separator()
menu_engines.add_separator()
menu_commands.add_separator()
for accelerator, function in (
(EventSpec.menu_commands_multipv, self.set_multipv),
(EventSpec.menu_commands_depth, self.set_depth),
(EventSpec.menu_commands_hash, self.set_hash),
):
menu_commands.add_command(
label=accelerator[1],
command=self.try_command(function, menu_commands),
underline=accelerator[3],
)
menu_commands.add_separator()
# Must do both these commands between each move to get consistent
# analysis between runs of ChessTab given that positions are evaluated
# in random order.
# menu_commands.add_command(
# label='Ucinewgame off',
# underline=0,
# command=self.try_command(self.set_ucinewgame_off, menu_commands))
# menu_commands.add_command(
# label='Clear Hash off',
# underline=0,
# command=self.try_command(
# self.set_clear_hash_off, menu_commands))
# menu_commands.add_separator()
# Not implemented at present.
# And possibly never will be.
# menu_commands.add_command(
# label='Go infinite',
# underline=3,
# command=self.try_command(self.go_infinite, menu_commands))
# menu_commands.add_command(
# label='Stop',
# underline=0,
# command=self.try_command(self.stop, menu_commands))
# menu_commands.add_separator()
# menu_commands.add_command(
# label='Isready',
# underline=2,
# command=self.try_command(self.isready, menu_commands))
# menu_commands.add_separator()
self.uci = _UCI()
@property
def show_engines_toplevel(self):
""" """
return self._show_engines_toplevel
def remove_engines_and_menu_entries(self):
"""Quit all started UCI compliant Chess Engines."""
ui_names = set(self.uci.uci_drivers)
still_alive = self.uci.quit_all_engines()
ui_names -= set([n for n, p in still_alive])
dead_menu_items = []
for i in range(self.menu_engines.index("end") + 1):
label = self.menu_engines.entryconfigure(i).get("label")
if label is not None:
if label[-1] in ui_names:
dead_menu_items.insert(0, i)
for dmi in dead_menu_items:
self.menu_engines.delete(dmi)
for name, pid in still_alive:
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Stop Engine",
message="".join(
(
name,
" failed to stop.\n\nYou may have to kill",
" process id ",
str(pid),
" manually.",
)
),
)
def quit_all_engines(self):
"""Quit all started UCI compliant Chess Engines after confirmation."""
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Engines",
message="Confirm Quit All Engines.",
)
== tkinter.messagebox.YES
):
self.remove_engines_and_menu_entries()
def start_engine(self):
"""Start an UCI compliant Chess Engine."""
# Avoid "OSError: [WinError 535] Pipe connected" at Python3.3 running
# under Wine on FreeBSD 10.1 by disabling the UCI functions.
# Assume all later Pythons are affected because they do not install
# under Wine at time of writing.
# The OSError stopped happening by wine-2.0_3,1 on FreeBSD 10.1 but
# get_nowait() fails to 'not wait', so ChessTab never gets going under
# wine at present. Leave alone because it looks like the problem is
# being shifted constructively.
# At Python3.5 running under Wine on FreeBSD 10.1, get() does not wait
# when the queue is empty either, and ChessTab does not run under
# Python3.3 because it uses asyncio: so no point in disabling.
# if self.uci.uci_drivers_reply is None:
# tkinter.messagebox.showinfo(
# parent=self.menu_commands,
# title='Chesstab Restriction',
# message=' '.join(
# ('Starting an UCI chess engine is not allowed because',
# 'an interface cannot be created:',
# 'this is expected if running under Wine.')))
# return
if _win32_platform:
filetypes = (("Chess Engines", "*.exe"),)
else:
filetypes = ()
filename = tkinter.filedialog.askopenfilename(
parent=self.menu_commands,
title="Run Chess Engine",
filetypes=filetypes,
initialfile="",
initialdir="~",
)
if not filename:
return
def get(event):
command = self._contents.get()
if command == filename:
self.run_engine(filename)
elif not command.startswith(filename):
tkinter.messagebox.showinfo(
parent=self._do_toplevel,
title="Start Engine",
message="Command must start with selected file name.",
)
else:
command = command.replace(filename, "", 1)
if not command.startswith(" "):
tkinter.messagebox.showinfo(
parent=self._do_toplevel,
title="Start Engine",
message="Command must start with selected file name.",
)
else:
self.run_engine(filename, args=command.strip())
self._cancel()
self.do_command(
filename,
get,
hint="".join(
(
"\nConsult chess engine's documentation for any arguments ",
"which must be appended to the command.\n\nPress enter to ",
"run engine.\n",
)
),
)
def _cancel(self):
self._do_toplevel.destroy()
self._do_toplevel = None
del self._contents
def do_command(self, initial_value, callback, hint=None, wraplength=None):
"""Display a dialogue with an Entry widget for editing a command."""
if self._do_toplevel is not None:
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Engine Command",
message="".join(
(
"A command dialogue is already active:\n\n",
"Cannot start another one.",
)
),
)
return
self._command = initial_value.split(None, maxsplit=1)[0]
self._do_toplevel = tkinter.Toplevel(
master=self.menu_engines.winfo_toplevel()
)
if hint:
if wraplength is None:
wraplength = 400
tkinter.Label(
master=self._do_toplevel,
text=hint,
wraplength=wraplength,
justify=tkinter.LEFT,
).pack()
entrythingy = tkinter.Entry(self._do_toplevel)
entrythingy.pack(fill=tkinter.BOTH)
buttonbar = tkinter.Frame(master=self._do_toplevel)
buttonbar.pack(fill=tkinter.X, expand=tkinter.TRUE)
cancel = tkinter.Button(
master=buttonbar,
text="Cancel",
underline=0,
command=self.try_command(self._cancel, buttonbar),
)
cancel.pack(expand=tkinter.TRUE, side=tkinter.LEFT)
self._contents = tkinter.StringVar()
self._contents.set(initial_value)
entrythingy["textvariable"] = self._contents
entrythingy.bind("<Key-Return>", callback)
entrythingy.focus_set()
def run_engine(self, program_file_name, args=None):
"""Start the chess engine and add a menu entry to quit it."""
self.uci.run_engine(program_file_name, args)
self.menu_engines.insert_command(
self.menu_engines.index("end"),
label=self.uci.uci_drivers_index[self.uci.engine_counter],
command=self.make_quit_engine_command(self.uci.engine_counter),
)
def make_quit_engine_command(self, counter):
"""Return a callback which quits the engine numbered counter.
A closure is used so each menu entry captures its own engine number.
"""
def quit_engine_command():
self.quit_engine(counter)
return quit_engine_command
def quit_engine(self, number):
"""Quit the engine identified by number, after confirmation."""
ui_name = self.uci.uci_drivers_index[number]
for i in range(self.menu_engines.index("end")):
label = self.menu_engines.entryconfigure(i).get("label")
if label is not None:
if label[-1] == ui_name:
if (
tkinter.messagebox.askquestion(
parent=self.menu_engines,
title="Quit Engine",
message="".join(
(
"Please confirm request to quit engine\n\n",
ui_name,
)
),
)
!= tkinter.messagebox.YES
):
continue
if self.uci.kill_engine(number):
self.menu_engines.delete(i)
else:
tkinter.messagebox.showinfo(
parent=self.menu_engines,
title="Quit Engine",
message="".join(
(
ui_name,
" failed to quit.\n\nYou may have to kill",
" process id ",
str(
self.uci.uci_drivers[
ui_name
].driver.pid
),
" manually.",
)
),
)
def send_to_all_engines(self, event):
"""Send the entered command to every running engine."""
command = self._contents.get()
if command.split()[0] != self._command:
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Send to Engine",
message="".join(
(
"Command is not the one used to start dialogue.\n\n",
"Do you want to cancel dialogue?",
)
),
)
== tkinter.messagebox.YES
):
del self._command
del self._contents
self._do_toplevel.destroy()
self._do_toplevel = None
return
for ui_name, ei in self.uci.uci_drivers.items():
try:
ei.to_driver_queue.put(command)
except Exception:
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Send to Engine",
message="".join(
(
"Send command\n\n",
command,
"\n\nto\n\n",
ui_name,
"\n\nfailed.",
)
),
)
del self._command
del self._contents
self._do_toplevel.destroy()
self._do_toplevel = None
def set_hash(self):
"""Set value of Hash option to use on UCI compliant Chess Engine."""
def get(event):
self.uci.hash_size = self._contents.get()
self.uci.set_option_on_empty_queues.add(ReservedOptionNames.Hash)
self._do_toplevel.destroy()
self._do_toplevel = None
del self._spinbox
self.do_spinbox(
self.uci.hash_size,
0,
1000,
get,
hint="".join(
(
"\nSet the size of hash tables, in mega-bytes, within ",
"limits allowed by chess engines.\n\n0 (zero) means use ",
"the default assumed by a chess engine.\n\nPress enter to ",
"set value.\n",
)
),
)
def set_multipv(self):
"""Set value of MultiPV option to use on UCI compliant Chess Engine."""
def get(event):
self.uci.multipv = self._contents.get()
self._do_toplevel.destroy()
self._do_toplevel = None
del self._spinbox
self.do_spinbox(
self.uci.multipv,
0,
100,
get,
hint="".join(
(
"\nSet the number of variations returned by chess engines ",
"which support the MultiPV option.\n\n0 (zero) means use ",
"the default assumed by a chess engine.\n\nPress enter to ",
"set value.\n",
)
),
)
def set_depth(self):
"""Set depth to use in go commands to UCI compliant Chess Engine."""
def get(event):
self.uci.go_depth = self._contents.get()
self._do_toplevel.destroy()
self._do_toplevel = None
del self._spinbox
self.do_spinbox(
self.uci.go_depth,
0,
200,
get,
hint="".join(
(
"\nSet the length of variations, in half-moves, returned ",
"by chess engines.\n\nPress enter to set value.\n",
)
),
)
def _modify_command_menu_item(self, old, new, command):
for i in range(self.menu_commands.index("end")):
label = self.menu_commands.entryconfigure(i).get("label")
if label is not None:
if label[-1] == old:
self.menu_commands.entryconfigure(
i,
label=new,
command=self.try_command(command, self.menu_commands),
)
def set_ucinewgame_off(self):
"""Set to not use ucinewgame command when navigating games."""
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Ucinewgame OFF",
message="".join(
(
"Turn use of ucinewgame command OFF when navigating in or ",
"between game scores?\n\n",
"Consult engine documentation for implications of using, ",
"or not using, the ucinewgame command.\n\n",
"UCI specification states new GUIs should support ucinewgame ",
"command, and engines should not expect ucinewgame commands ",
"if the ucinewgame command is not used before the first ",
"position command. (August 2015)",
)
),
)
== tkinter.messagebox.YES
):
self.uci.use_ucinewgame = False
self._modify_command_menu_item(
"Ucinewgame off", "Ucinewgame on", self.set_ucinewgame_on
)
def set_ucinewgame_on(self):
"""Set to use ucinewgame command when navigating games."""
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Ucinewgame ON",
message="".join(
(
"Turn use of ucinewgame command ON when navigating in or ",
"between game scores?\n\n",
"Consult engine documentation for implications of using, ",
"or not using, the ucinewgame command.\n\n",
"UCI specification states new GUIs should support ucinewgame ",
"command, and engines should not expect ucinewgame commands ",
"if the ucinewgame command is not used before the first ",
"position command. (August 2015)",
)
),
)
== tkinter.messagebox.YES
):
self.uci.use_ucinewgame = True
self._modify_command_menu_item(
"Ucinewgame on", "Ucinewgame off", self.set_ucinewgame_off
)
def stop(self):
"""Send stop command to UCI compliant Chess Engines."""
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Stop Command",
message="Stop not implemented.",
)
def go_infinite(self):
"""Send go infinite command to UCI compliant Chess Engines."""
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Go Infinite Command",
message="Go infinite not implemented.",
)
def isready(self):
"""Send isready command to UCI compliant Chess Engines."""
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Isready Command",
message="Isready not implemented.",
)
def do_spinbox(
self, initial_value, from_, to, callback, hint=None, wraplength=None
):
"""Display a dialogue with a Spinbox widget for choosing a numeric value."""
# Compare with show_engines() method.
# The after() technique used in show_engines does not help here to
# guarantee display of the new Toplevel on second and subsequent use.
# But the problem is seen only when the X server is on a different box
# than the ChessTab process.
if self._do_toplevel is not None:
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Engine Command",
message="".join(
(
"A command dialogue is already active:\n\n",
"Cannot start another one.",
)
),
)
return
def cancel():
self._do_toplevel.destroy()
self._do_toplevel = None
del self._spinbox
def destroy(event=None):
self._do_toplevel = None
self._spinbox = initial_value
self._do_toplevel = tkinter.Toplevel(
master=self.menu_engines.winfo_toplevel()
)
self._do_toplevel.bind("<Destroy>", destroy)
if hint:
if wraplength is None:
wraplength = 400
tkinter.Label(
master=self._do_toplevel,
text=hint,
wraplength=wraplength,
justify=tkinter.LEFT,
).pack()
entrythingy = tkinter.Spinbox(self._do_toplevel)
entrythingy.pack(fill=tkinter.BOTH)
buttonbar = tkinter.Frame(master=self._do_toplevel)
buttonbar.pack(fill=tkinter.X, expand=tkinter.TRUE)
tkinter.Button(
master=buttonbar,
text="Cancel",
underline=0,
command=self.try_command(cancel, buttonbar),
).pack(expand=tkinter.TRUE, side=tkinter.LEFT)
entrythingy.configure(from_=from_, to=to, increment=1)
self._contents = tkinter.IntVar()
self._contents.set(initial_value)
entrythingy["textvariable"] = self._contents
entrythingy.bind("<Key-Return>", callback)
entrythingy.focus_set()
def show_engines(self):
"""Show Chess Engine Descriptions on database."""
if self._show_engines_toplevel is not None:
tkinter.messagebox.showinfo(
parent=self.menu_engines,
title="Show Engines",
message="".join(
(
"A show engines dialogue is already active:\n\n",
"Cannot start another one.",
)
),
)
return
def destroy(event=None):
if self._show_engines_toplevel is not None:
self._close_engine_grid()
self._show_engines_toplevel = None
self._show_engines_toplevel = tkinter.Toplevel(
master=self.menu_engines.winfo_toplevel()
)
self._show_engines_toplevel.wm_title("Chess Engines")
self._show_engines_toplevel.bind("<Destroy>", destroy)
self._close_engine_grid()
# The after_idle version makes it less common for the Toplevel to not
# become visible until 'click on menubar' or 'move pointer out of
# application window' on 'destroy or close' toplevel and 'show engine'
# action sequences.
# The after version gives a noticeably slower response when the X server
# is on a different machine, but has not yet needed the pointer prompt
# to display the new toplevel.
# The problem has not been seen yet on Microsoft Windows XP.
# The problem happens rarely when the X server is on the same machine.
# The problem had never been seen before in the toplevels started from
# the other subclasses of DataGrid: but on looking it can be seen now.
# self._open_engine_grid()
# self._show_engines_toplevel.after_idle(self.try_command(
# self._open_engine_grid, self._show_engines_toplevel))
self._show_engines_toplevel.after(
1,
self.try_command(
self._open_engine_grid, self._show_engines_toplevel
),
)
def set_open_database_and_engine_classes(self, database=None):
"""Set current open database and engine specific dataset classes."""
if self.database is not database:
self._close_engine_grid()
self.database = database
self._open_engine_grid()
def _close_engine_grid(self):
"""Close the existing EngineGrid instance."""
if self.base_engines is not None:
self.base_engines.get_top_widget().pack_forget()
self.base_engines.set_data_source()
self.base_engines = None
def _open_engine_grid(self):
"""Open and show an EngineGrid instance."""
if self._show_engines_toplevel is None:
return
if self.base_engines is None and self.database is not None:
self.base_engines = EngineGrid(self)
self.base_engines.set_data_source(
DataSource(
self.database,
ENGINE_FILE_DEF,
COMMAND_FIELD_DEF,
make_ChessDBrowEngine(self),
),
self.base_engines.on_data_change,
)
self.base_engines.set_focus()
self.base_engines.get_top_widget().pack(
fill=tkinter.BOTH, expand=tkinter.TRUE
)
def show_scrollbars(self):
"""Show the scrollbars in the engine definition display widgets."""
self.visible_scrollbars = True
exceptions = []
for items in (self.engines_in_toplevels,):
for i in items:
try:
i.show_scrollbars()
except tkinter.TclError:
exceptions.append(i, items)
for i, items in exceptions:
items.remove(i)
def hide_scrollbars(self):
"""Show the scrollbars in the engine definition display widgets."""
self.visible_scrollbars = False
exceptions = []
for items in (self.engines_in_toplevels,):
for i in items:
try:
i.hide_scrollbars()
except tkinter.TclError:
exceptions.append((i, items))
for i, items in exceptions:
items.remove(i)
def position_queue_status(self):
"""Display counts of position queued for analysis by engines."""
pending_counts = []
for engine, pending in self.uci.positions_pending.items():
pending_counts.append(
(engine, sum([len(p) for p in pending.values()]))
)
if not pending_counts:
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Position Queues",
message="There are no queues of positions for analysis.",
)
return
pce = []
for e, pc in sorted(pending_counts):
pce.append("\t".join((str(pc), e)))
tkinter.messagebox.showinfo(
parent=self.menu_commands,
title="Position Queues",
message="".join(
(
"Number of positions queued for analysis by ",
"each engine are:\n\n",
"\n".join(pce),
)
),
)
def set_clear_hash_off(self):
"""Set to not clear hash tables before position go command sequence."""
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Clear Hash OFF",
message="".join(
(
"Turn OFF clear hash tables before analysing a position?\n\n",
"Consult engine documentation for implications of clearing ",
"hash tables or not.",
)
),
)
== tkinter.messagebox.YES
):
self.uci.clear_hash = False
self._modify_command_menu_item(
"Clear Hash off", "Clear Hash on", self.set_clear_hash_on
)
def set_clear_hash_on(self):
"""Set to clear hash tables before position go command sequence."""
if (
tkinter.messagebox.askquestion(
parent=self.menu_commands,
title="Clear Hash ON",
message="".join(
(
"Turn ON clear hash tables before analysing a position?\n\n",
"Consult engine documentation for implications of clearing ",
"hash tables or not.",
)
),
)
== tkinter.messagebox.YES
):
self.uci.clear_hash = True
self._modify_command_menu_item(
"Clear Hash on", "Clear Hash off", self.set_clear_hash_off
)
| 37.071429 | 87 | 0.525495 | 2,928 | 29,064 | 5.044057 | 0.155738 | 0.035751 | 0.02749 | 0.025323 | 0.553389 | 0.496648 | 0.459747 | 0.420272 | 0.39197 | 0.354865 | 0 | 0.004698 | 0.392169 | 29,064 | 783 | 88 | 37.118774 | 0.831314 | 0.150874 | 0 | 0.483713 | 0 | 0 | 0.120238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061889 | false | 0 | 0.019544 | 0 | 0.09772 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fa9b6e0222d2c0b70dc7ec3aecbef873f179373 | 4,704 | py | Python | Exercises/flask-blogly/app.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | Exercises/flask-blogly/app.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | Exercises/flask-blogly/app.py | pedwards95/Springboard_Class | 9df8dbd8832223e89b89d12db3f7e0b178e2ed79 | [
"MIT"
] | null | null | null | # python -m venv venv
# source venv/Scripts/activate
# pip install pylint
# pip install ipython
# pip install flask
# pip install flask_debugtoolbar
# pip install pylint-flask
# pip install psycopg2-binary
# pip install flask-sqlalchemy
# pip install pylint_flask_sqlalchemy
# pip install pylint-sqlalchemy
# pip install python-dotenv
# then make a .flaskenv in root directory
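# A minimal .flaskenv might contain (illustrative):
#   FLASK_APP=app.py
#   FLASK_ENV=development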
# pip freeze > requirements.txt
# flask run
"""Blogly application."""
from flask import Flask, redirect, render_template, request, flash
from models import connect_db, User, Post
from seed import seed
app = Flask(__name__)
app.config['SECRET_KEY']="mykey"
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql:///blogly'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SQLALCHEMY_ECHO'] = True
my_db = connect_db(app)
seed(my_db)
@app.route("/")
def redirect_blogly_home():
"Redirects to home route"
return redirect("/home")
@app.route("/home")
def blogly_home():
"Home route"
my_posts = Post.query.order_by(Post.created_at.desc()).limit(5).all()
return render_template("home.html", my_posts=my_posts)
@app.route("/all_users")
def blogly_users():
"Show all users"
users = User.query.order_by(User.last_name, User.first_name).all()
return render_template("all_users.html", users=users)
@app.route("/user/<int:user_id>", methods=["GET"])
def blogly_user_info(user_id):
"Show one user"
my_user = User.query.get_or_404(user_id)
return render_template("user_details.html", user=my_user)
@app.route("/user/<int:user_id>/delete", methods=["POST"])
def blogly_delete_user(user_id):
"Delete user"
my_user = User.query.get_or_404(user_id)
my_db.session.delete(my_user)
my_db.session.commit()
flash("User deleted", "success")
return redirect("/all_users")
@app.route("/user/edit/<int:user_id>", methods=["GET"])
def blogly_edit_user_info(user_id):
"Edit one user form"
my_user = User.query.get_or_404(user_id)
return render_template("user_details_edit.html", user=my_user)
@app.route("/user/edit/<int:user_id>", methods=["POST"])
def blogly_submit_edit_user_info(user_id):
"Edit one user submit"
my_user = User.query.get_or_404(user_id)
my_user.first_name = request.form["first_name"]
my_user.last_name = request.form["last_name"]
my_user.profile_img = request.form["img_url"]
my_db.session.commit()
flash("Edit Successful", "success")
return redirect(f"/user/{my_user.id}")
@app.route("/new_user", methods = ["GET"])
def blogly_new_user_form():
"Show the user creation form"
return render_template("new_user.html")
@app.route("/new_user", methods = ["POST"])
def blogly_new_user_submit():
"Create a new user"
fname = request.form["first_name"]
lname = request.form["last_name"]
img = request.form["img_url"]
if not img:
img = None
new_user = User(first_name=fname,last_name=lname, profile_img=img)
my_db.session.add(new_user)
my_db.session.commit()
return redirect("/all_users")
@app.route("/user/<int:user_id>/posts/create")
def blogly_add_post_form(user_id):
"Show the post creation form"
my_user = User.query.get_or_404(user_id)
return render_template("new_post.html", user=my_user)
@app.route("/posts/<int:post_id>")
def blogly_show_post(post_id):
"Show one post"
my_post = Post.query.get_or_404(post_id)
return render_template("post_details.html", post=my_post, user=my_post.poster)
@app.route("/posts", methods=["POST"])
def blogly_add_post():
"Create a new post"
title = request.form["title"]
content = request.form["content"]
user_id = int(request.form["user"])
new_post = Post(title=title,content=content, posted_by=user_id)
my_db.session.add(new_post)
my_db.session.commit()
flash("Post Added", "success")
return redirect(f"/user/{user_id}")
@app.route("/posts/<int:post_id>/delete")
def blogly_delete_post(post_id):
"Delete post"
my_post = Post.query.get_or_404(post_id)
user_id = my_post.poster.id
my_db.session.delete(my_post)
my_db.session.commit()
flash("Post deleted", "success")
return redirect(f"/user/{user_id}")
@app.route("/posts/edit/<int:post_id>", methods=["GET"])
def blogly_edit_post_form(post_id):
my_post = Post.query.get_or_404(post_id)
return render_template("post_details_edit.html", post=my_post, user=my_post.poster)
@app.route("/posts/edit/<int:post_id>", methods=["POST"])
def blogly_edit_post(post_id):
"Edit one post submit"
my_post = Post.query.get_or_404(post_id)
my_post.title = request.form["title"]
my_post.content = request.form["content"]
my_db.session.commit()
flash("Edit Successful", "success")
return redirect(f"/posts/{my_post.id}") | 31.36 | 87 | 0.713435 | 723 | 4,704 | 4.384509 | 0.159059 | 0.039748 | 0.0347 | 0.036909 | 0.481388 | 0.38265 | 0.34858 | 0.278549 | 0.227445 | 0.209148 | 0 | 0.007153 | 0.13818 | 4,704 | 150 | 88 | 31.36 | 0.77479 | 0.129889 | 0 | 0.194444 | 0 | 0 | 0.229328 | 0.065589 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.027778 | 0 | 0.305556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fad9a654e68fbbd2c8a4a136f03a32478751ee4 | 1,177 | py | Python | emmet/action_utils/utils.py | jingyuexing/py-emmet | e3b1ecb875e0067fc9ef4f4479c7a8d4646aaa11 | [
"MIT"
] | 29 | 2019-11-12T16:15:15.000Z | 2022-02-06T10:51:25.000Z | emmet/action_utils/utils.py | jingyuexing/py-emmet | e3b1ecb875e0067fc9ef4f4479c7a8d4646aaa11 | [
"MIT"
] | 3 | 2020-04-25T11:02:53.000Z | 2021-11-25T10:39:09.000Z | emmet/action_utils/utils.py | jingyuexing/py-emmet | e3b1ecb875e0067fc9ef4f4479c7a8d4646aaa11 | [
"MIT"
] | 7 | 2020-04-25T09:42:54.000Z | 2021-02-16T20:29:41.000Z | from ..scanner_utils import is_space
class SelectItemModel:
__slots__ = ('start', 'end', 'ranges')
def __init__(self, start: int, end: int, ranges: list=None):
self.start = start
self.end = end
self.ranges = ranges
def to_json(self):
return {
'start': self.start,
'end': self.end,
'ranges': self.ranges
}
def push_range(ranges: list, rng: list):
prev = ranges and ranges[-1]
if rng and rng[0] != rng[1] and (not prev or prev[0] != rng[0] or prev[1] != rng[1]):
ranges.append(rng)
def token_list(value: str, offset=0):
"Returns ranges of tokens in given value. Tokens are space-separated words."
ranges = []
l = len(value)
pos = 0
start = 0
end = 0
while pos < l:
end = pos
ch = value[pos]
pos += 1
if is_space(ch):
if start != end:
ranges.append((offset + start, offset + end))
while pos < l and is_space(value[pos]):
pos += 1
start = pos
if start != pos:
ranges.append((offset + start, offset + pos))
return ranges
| 23.078431 | 89 | 0.531011 | 155 | 1,177 | 3.935484 | 0.316129 | 0.034426 | 0.045902 | 0.039344 | 0.095082 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016883 | 0.345794 | 1,177 | 50 | 90 | 23.54 | 0.775325 | 0.062872 | 0 | 0.054054 | 0 | 0 | 0.086661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108108 | false | 0 | 0.027027 | 0.027027 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2faef444b1852c4a4b65fb3d4c3766d0712c2492 | 2,765 | py | Python | tests/test_effect_overlay.py | niklasR/brave | 5234bd9cecd6d7282cd0b50acfda44890cf4cfb8 | [
"Apache-2.0"
] | 1 | 2018-12-04T21:58:27.000Z | 2018-12-04T21:58:27.000Z | tests/test_effect_overlay.py | niklasR/brave | 5234bd9cecd6d7282cd0b50acfda44890cf4cfb8 | [
"Apache-2.0"
] | null | null | null | tests/test_effect_overlay.py | niklasR/brave | 5234bd9cecd6d7282cd0b50acfda44890cf4cfb8 | [
"Apache-2.0"
] | 1 | 2018-12-07T12:21:08.000Z | 2018-12-07T12:21:08.000Z | import time, pytest, inspect
from utils import *
from PIL import Image
def test_effect_overlay_visible_after_creation(run_brave):
run_brave()
time.sleep(0.5)
check_brave_is_running()
add_overlay({'type': 'effect', 'props': {'effect_name': 'edgetv'}})
time.sleep(0.1)
assert_overlays([{'id': 0, 'state': 'NULL', 'props': {'visible': False, 'effect_name': 'edgetv'}}])
update_overlay(0, {'props': {'visible': True}}, status_code=200)
time.sleep(0.1)
assert_overlays([{'id': 0, 'state': 'PLAYING', 'props': {'visible': True,'effect_name': 'edgetv'}}])
add_overlay({'type': 'effect', 'props': {'effect_name': 'solarize'}})
time.sleep(0.1)
assert_overlays([{'id': 0, 'state': 'PLAYING', 'props': {'visible': True,'effect_name': 'edgetv'}},
{'id': 1, 'state': 'NULL', 'props': {'visible': False, 'effect_name': 'solarize'}}])
update_overlay(1, {'props': {'visible': True}}, status_code=200)
time.sleep(0.1)
assert_overlays([{'id': 0, 'state': 'PLAYING', 'props': {'visible': True,'effect_name': 'edgetv'}},
{'id': 1, 'state': 'PLAYING', 'props': {'visible': True,'effect_name': 'solarize'}}])
delete_overlay(0)
time.sleep(0.1)
assert_overlays([{'id': 1, 'state': 'PLAYING', 'props': {'visible': True,'effect_name': 'solarize'}}])
delete_overlay(1)
time.sleep(0.1)
assert_overlays([])
# @pytest.mark.skip(reason="known bug that effects made visible at start should not be permitted")
def test_effect_overlay_visible_at_creation(run_brave):
    '''Test creating an effect overlay with visible:true at creation time. The commented-out skip above records this as a known bug: such effects arguably should not be permitted, yet they currently start out visible and playing.'''
run_brave()
time.sleep(0.5)
check_brave_is_running()
# This time, visible from the start with visible=True
add_overlay({'type': 'effect', 'props': {'effect_name': 'warptv', 'visible': True}}, status_code=200)
time.sleep(0.1)
assert_overlays([{'state': 'PLAYING', 'props': {'visible': True, 'effect_name': 'warptv'}}])
def test_set_up_effect_overlay_in_config_file(run_brave, create_config_file):
'''Test that an effect in a config file works fine'''
output_video_location = create_output_video_location()
config = {
'default_overlays': [
{'type': 'effect', 'props': {'effect_name': 'quarktv', 'visible': True}},
{'type': 'effect', 'props': {'effect_name': 'vertigotv', 'visible': False}}
]
}
config_file = create_config_file(config)
run_brave(config_file.name)
time.sleep(0.5)
check_brave_is_running()
assert_overlays([{'id': 0, 'state': 'PLAYING', 'props': {'effect_name': 'quarktv', 'visible': True}},
{'id': 1, 'state': 'PLAYING', 'props': {'effect_name': 'vertigotv', 'visible': False}}])
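# A hedged sketch of the assert_overlays helper star-imported from utils
# (assumptions: Brave serves overlay state over HTTP; the api_get helper and
# the '/api/overlays' path are hypothetical names, not the project's API):
#
#   def assert_overlays(expected):
#       actual = api_get('/api/overlays')
#       assert len(actual) == len(expected)
#       for exp, act in zip(expected, actual):
#           for key, value in exp.items():
#               assert act[key] == value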
| 41.893939 | 109 | 0.632911 | 353 | 2,765 | 4.739377 | 0.23796 | 0.089659 | 0.059773 | 0.046025 | 0.66049 | 0.607292 | 0.512851 | 0.355051 | 0.337119 | 0.317394 | 0 | 0.018325 | 0.171067 | 2,765 | 65 | 110 | 42.538462 | 0.711606 | 0.100543 | 0 | 0.354167 | 0 | 0 | 0.266263 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fafadc84ca6d4bd015e4f90fe2035ddef798fc3 | 1,659 | py | Python | RaNDaL/packages/models/text_fields/tree_field.py | adam-bentley/RaNDaL | d079ea4773815630f25f83ab0fbb8eefdbf2d17c | [
"MIT"
] | null | null | null | RaNDaL/packages/models/text_fields/tree_field.py | adam-bentley/RaNDaL | d079ea4773815630f25f83ab0fbb8eefdbf2d17c | [
"MIT"
] | null | null | null | RaNDaL/packages/models/text_fields/tree_field.py | adam-bentley/RaNDaL | d079ea4773815630f25f83ab0fbb8eefdbf2d17c | [
"MIT"
] | null | null | null | import json
from numpy import ndarray
from tensorflow.python.keras import Sequential
from ...helpers.segmentation import split_characters
from .text_field import TextArea
class Tree(TextArea):
def __init__(self, frame: ndarray, model: Sequential, inverted: bool = False):
"""
        Constructor for a tree text area
:param frame: The image containing the text
:param model: CNN model for predictions
:param inverted: If its white text on a black background or black on white
"""
# Call parent
super().__init__(frame, model, inverted=inverted)
        # Segment the frame into individual characters
self.characters = split_characters(self.frame)
if len(self.characters) != 0:
self.resize_characters()
self.text = self.predict()
if self.text[0] == "D" or self.text[0] == "I":
self.text = "Delay Tree"
elif self.text[0] == ".":
# If the second letter is a 5
# It could be a .5 Full, .5 X-OVER
# or .5 Pro Tree
if self.text[1] == "5":
if self.text[2] == "F":
self.text = ".5 Full Tree"
elif self.text[2] == "X":
self.text = ".5X"
elif self.text[2] == "P":
self.text = ".5 Pro Tree"
else:
self.text = ""
                # If not .5 or 'D', it has to be a .4 Pro Tree
elif self.text[1] == "4":
self.text = ".4 Pro Tree"
    def toJson(self):
        # json.dumps(self) on an arbitrary object raises TypeError
        # ("is not JSON serializable"); serialize the recognized text instead.
        return json.dumps({'text': self.text}) | 33.18 | 82 | 0.502712 | 200 | 1,659 | 4.11 | 0.395 | 0.145985 | 0.058394 | 0.058394 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020833 | 0.392405 | 1,659 | 50 | 83 | 33.18 | 0.794643 | 0.201929 | 0 | 0 | 0 | 0 | 0.043239 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.178571 | 0.035714 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fb05c28f9fc1d23f3b79a475475fc481afb0de9 | 2,742 | py | Python | pyez/PyEZ_Cookbook_2017/15-Monitoring_IPSEC_tunnels/ike-tunnels.py | rsmekala/junosautomation | b76b3e8ce34048e3460cb071d0aeef176ab3836f | [
"Apache-2.0"
] | 117 | 2016-08-22T15:52:28.000Z | 2022-01-08T00:53:28.000Z | pyez/PyEZ_Cookbook_2017/15-Monitoring_IPSEC_tunnels/ike-tunnels.py | rsmekala/junosautomation | b76b3e8ce34048e3460cb071d0aeef176ab3836f | [
"Apache-2.0"
] | 12 | 2017-10-28T09:44:44.000Z | 2018-11-21T15:12:42.000Z | pyez/PyEZ_Cookbook_2017/15-Monitoring_IPSEC_tunnels/ike-tunnels.py | rsmekala/junosautomation | b76b3e8ce34048e3460cb071d0aeef176ab3836f | [
"Apache-2.0"
] | 78 | 2016-08-19T05:35:28.000Z | 2022-03-13T07:16:27.000Z | # Copyright 2017, Juniper Networks Pvt Ltd.
# All rights reserved.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config
from lxml import etree
from collections import defaultdict
from operator import itemgetter
host = '172.27.75.5'
user = "root"
passwd = "root123"
def main():
dev = Device(host=host, user=user, passwd=passwd)
# open a connection with the device and start a NETCONF session
try:
dev.open()
dev.timeout = 300
except Exception as err:
print("Cannot connect to device:", err)
return
ikepeers = dev.rpc.get_ike_security_associations_information(detail=True)
ipsecpeers = dev.rpc.get_security_associations_information(detail=True)
iketree = etree.ElementTree(ikepeers)
ipsectree = etree.ElementTree(ipsecpeers)
ikeroot = iketree.getroot()
ipsecroot = ipsectree.getroot()
ikeactions = iketree.findall('.//ike-security-associations-block')
ikeparsed = [{field.tag: field.text for field in action}
for action in ikeactions]
ipsecactions = ipsectree.findall('.//ipsec-security-associations-block')
ipsecparsed = [{field.tag: field.text for field in action}
for action in ipsecactions]
d = defaultdict(dict)
for elem in ipsecparsed:
d[elem['sa-tunnel-index']].update(elem)
l3 = sorted(d.values(), key=itemgetter('sa-vpn-name'))
for elem in ikeparsed:
for elem1 in l3:
if elem1['sa-remote-gateway'] == elem['ike-sa-remote-address']:
elem1.update(elem)
for idx, tunnel in enumerate(l3):
try:
print((idx + 1),
' - ',
tunnel['sa-vpn-name'],
' - ',
tunnel['sa-remote-gateway'],
'- ',
tunnel['ike-sa-index'],
'- ',
tunnel['sa-tunnel-index'],
' - ',
"\n\t",
tunnel['sa-local-identity'],
' - ',
tunnel['sa-remote-identity'],
'\n')
except KeyError:
print((idx + 1),
' - ',
tunnel['sa-vpn-name'],
' - ',
tunnel['sa-remote-gateway'],
'- ',
tunnel['sa-tunnel-index'],
' - ',
'\n\t',
tunnel['sa-local-identity'],
' - ',
tunnel['sa-remote-identity'],
'\n')
# End the NETCONF session and close the connection
dev.close()
if __name__ == '__main__':
    main()
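# A hedged alternative to the duplicated print blocks above (sketch only):
# dict.get collapses the try/except KeyError into a single loop.
#
#   for idx, tunnel in enumerate(l3):
#       fields = ['sa-vpn-name', 'sa-remote-gateway', 'ike-sa-index',
#                 'sa-tunnel-index', 'sa-local-identity', 'sa-remote-identity']
#       print(idx + 1, ' - '.join(str(tunnel.get(f, '?')) for f in fields))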
| 32.642857 | 77 | 0.539023 | 286 | 2,742 | 5.13986 | 0.416084 | 0.054422 | 0.038095 | 0.05034 | 0.269388 | 0.213605 | 0.213605 | 0.213605 | 0.213605 | 0.213605 | 0 | 0.01442 | 0.342451 | 2,742 | 83 | 78 | 33.036145 | 0.800887 | 0.063093 | 0 | 0.369863 | 0 | 0 | 0.152496 | 0.035491 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0.027397 | 0.123288 | 0 | 0.150685 | 0.041096 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fb0c6198b297677732f2c4daf939edfc49f4ce4 | 3,484 | py | Python | tools/precommit_converter/extractor/extractor.py | bayeshack2016/icon-service | 36cab484d2e41548d7f2f74526f127ee3a4423fc | [
"Apache-2.0"
] | 52 | 2018-08-24T02:28:43.000Z | 2021-07-06T04:44:22.000Z | tools/precommit_converter/extractor/extractor.py | bayeshack2016/icon-service | 36cab484d2e41548d7f2f74526f127ee3a4423fc | [
"Apache-2.0"
] | 62 | 2018-09-17T06:59:16.000Z | 2021-12-15T06:02:51.000Z | tools/precommit_converter/extractor/extractor.py | bayeshack2016/icon-service | 36cab484d2e41548d7f2f74526f127ee3a4423fc | [
"Apache-2.0"
] | 35 | 2018-09-14T02:42:10.000Z | 2022-02-05T10:34:46.000Z | import json
from typing import List, Tuple, Optional
from iconservice.base.block import Block
from tools.precommit_converter.data.icon_service_info import IconServiceInfo
from tools.precommit_converter.data.key_value import KeyValue
class NotSupportFileException(Exception):
pass
class ExtractException(Exception):
pass
class Extractor:
@classmethod
def extract(cls, path: str) -> Tuple[Optional[IconServiceInfo], List[KeyValue]]:
        # Check the file extension and extract the data with the matching parser
if path.endswith("txt"):
return cls._extract_key_values_from_text(path)
elif path.endswith("json"):
return cls._extract_key_values_from_json(path)
else:
raise NotSupportFileException("Not supported file format.")
@classmethod
def _extract_json(cls, path: str) -> dict:
with open(path, 'r') as f:
precommit_dict: dict = json.load(f)
return precommit_dict
@classmethod
def _extract_key_values_from_json(cls, path: str) -> Tuple[IconServiceInfo, List[KeyValue]]:
json_dict: dict = cls._extract_json(path)
bytes_k_v: list = []
block_batch = json_dict.get("blockBatch")
if block_batch is None:
raise KeyError("blockBatch not found")
for data in block_batch:
key = bytes.fromhex(data["key"][2:])
value = bytes.fromhex(data["value"][2:]) if data["value"] is not None else None
bytes_k_v.append(KeyValue(data["txIndexes"], key, value))
icon_service_info: 'IconServiceInfo' = IconServiceInfo(json_dict["iconservice"],
json_dict["revision"],
Block.from_dict(json_dict["block"]),
json_dict["isStateRootHash"],
json_dict["rcStateRootHash"],
json_dict["stateRootHash"],
json_dict["prevBlockGenerator"])
return icon_service_info, bytes_k_v
@classmethod
def _extract_key_values_from_text(cls, path: str) -> Tuple[None, List[KeyValue]]:
"""
        Extract key/value pairs from the precommit text file.
        If the file format changes, this method should be updated to match.
:param path:
:return:
"""
key_values: list = []
with open(path, "rt") as f:
# Collect the key, values
lines: List[str] = f.readlines()
for line in lines:
sliced_str: list = line.split(" ")
for i, string in enumerate(sliced_str):
if string == "-":
key: str = sliced_str[i - 1]
value: str = sliced_str[i + 1]
if key.startswith("0x"):
key = key[2:]
if value.startswith("0x"):
value = value[2:]
if value[-1] == ",":
value = value[:len(value) - 1]
hex_key = bytes.fromhex(key)
hex_value = bytes.fromhex(value) if value != "None" else None
key_values.append(KeyValue(None, hex_key, hex_value))
return None, key_values
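# A hedged usage sketch ('precommit.json' is a hypothetical path, and the
# .key/.value attribute names on KeyValue are assumptions; only the
# .json/.txt suffix matters to extract()):
#
#   info, key_values = Extractor.extract('precommit.json')
#   for kv in key_values:
#       print(kv.key.hex(), kv.value.hex() if kv.value else None)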
| 39.590909 | 99 | 0.528703 | 360 | 3,484 | 4.936111 | 0.272222 | 0.040518 | 0.047271 | 0.04502 | 0.130557 | 0.070906 | 0 | 0 | 0 | 0 | 0 | 0.004627 | 0.379736 | 3,484 | 87 | 100 | 40.045977 | 0.817677 | 0.054535 | 0 | 0.095238 | 0 | 0 | 0.061325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063492 | false | 0.031746 | 0.079365 | 0 | 0.269841 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fb476dd66bd0ab1d1e4099cd971c61bf78c0144 | 2,325 | py | Python | api/models/predict.py | sanjeet1207/Batman | 658a1bab55de425b23288ffec69f2d7ac93bb007 | [
"MIT"
] | null | null | null | api/models/predict.py | sanjeet1207/Batman | 658a1bab55de425b23288ffec69f2d7ac93bb007 | [
"MIT"
] | null | null | null | api/models/predict.py | sanjeet1207/Batman | 658a1bab55de425b23288ffec69f2d7ac93bb007 | [
"MIT"
] | null | null | null | from datetime import datetime
from PIL import Image
from torchvision import transforms
from urllib.request import urlopen
import requests
import logging
import os
import sys
import torch
model = torch.hub.load('pytorch/vision:v0.5.0', 'resnet18', pretrained=True)
model.eval()
def get_class_labels():
class_dict = {}
counter = 0
try:
dirname = os.path.dirname(__file__)
with open(os.path.join(dirname, 'labels.txt'), 'r') as infile:
for line in infile.readlines():
out = line.split("'")
class_dict[counter] = out[1]
counter += 1
except FileNotFoundError:
logging.info(os.listdir(os.curdir))
logging.info(os.curdir)
raise
return class_dict
def predict_image_from_url(image_url):
logging.info(f'url to be {image_url}')
class_dict = get_class_labels()
# with urlopen(image_url) as testImage:
with requests.get(image_url,stream=True).raw as testImage:
input_image = Image.open(testImage).convert('RGB')
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
with torch.no_grad():
output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
softmax = (torch.nn.functional.softmax(output[0], dim=0))
out = class_dict[softmax.argmax().item()]
response = {
'created': datetime.utcnow().isoformat(),
'predictedTagName': out,
'prediction': softmax.max().item()
}
logging.info(f'returning {response}')
return response
if __name__ == '__main__':
predict_image_from_url(sys.argv[1]) | 33.695652 | 96 | 0.630968 | 293 | 2,325 | 4.866894 | 0.474403 | 0.031557 | 0.019635 | 0.026648 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029582 | 0.258495 | 2,325 | 69 | 97 | 33.695652 | 0.797564 | 0.128172 | 0 | 0 | 0 | 0 | 0.066271 | 0.010386 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.160714 | 0 | 0.232143 | 0.017857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fb6f09cc2a00f81943353736fc4cc723e1daf09 | 1,471 | py | Python | nlp_helper/nlp_helper_sentence_split.py | mstaschik/nlp-helper | 9f0722637cdcce6ae0063a152db735bb99818ec1 | [
"MIT"
] | null | null | null | nlp_helper/nlp_helper_sentence_split.py | mstaschik/nlp-helper | 9f0722637cdcce6ae0063a152db735bb99818ec1 | [
"MIT"
] | null | null | null | nlp_helper/nlp_helper_sentence_split.py | mstaschik/nlp-helper | 9f0722637cdcce6ae0063a152db735bb99818ec1 | [
"MIT"
] | null | null | null | from pathlib import Path
import logging
logging.basicConfig(level=logging.DEBUG)
def split_text_into_columns(nlp_models, df):
    """
    Splits the text in each row's 'content' cell into sentences and expands
    them into one column per sentence.
    Args:
        nlp_models (list): List of spaCy language models
        df (pandas.DataFrame): DataFrame with a 'content' column of raw text
    Returns:
        pandas.DataFrame: input DataFrame concatenated with the sentence columns
    """
import pandas as pd
df['sentences'] = 'default value'
for nlp_model in nlp_models:
for index, row in df.iterrows():
            list1 = []
for sentence in nlp_model(row['content']).sents:
#print(sentence)
list1.append(sentence.text)
#print(list1)
            # Write back with df.at: assigning to the iterrows() row (a copy)
            # would leave the DataFrame unchanged.
            df.at[index, 'sentences'] = list1
            #print(df.at[index, 'sentences'])
logging.info(df['sentences'])
dfcolumns = pd.DataFrame(df['sentences'].values.tolist()).add_prefix('column_')
#print(dfcolumns)
# Apply Version is slower:
# expand df.sentences into its own dataframe
# dfcolumns = df['sentences'].apply(pd.Series)
# rename each variable
#dfcolumns = dfcolumns.rename(columns = lambda x : 'columnname_' + str(x))
# join the column dataframe back to the original dataframe
dfconcat = pd.concat([df[:], dfcolumns[:]], axis=1)
logging.info(dfconcat)
return dfconcat | 25.807018 | 87 | 0.60775 | 165 | 1,471 | 5.339394 | 0.50303 | 0.051078 | 0.024972 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005742 | 0.289599 | 1,471 | 57 | 88 | 25.807018 | 0.837321 | 0.380693 | 0 | 0 | 0 | 0 | 0.072664 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
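# A hedged usage sketch (assumptions: the spaCy model name and the sample
# text; the 'content' column and 'column_' prefix follow the code above):
#
#   import pandas as pd
#   import spacy
#   nlp = spacy.load('en_core_web_sm')
#   df = pd.DataFrame({'content': ['First sentence. Second sentence.']})
#   out = split_text_into_columns([nlp], df)
#   print(out.filter(like='column_'))   # one 'column_N' column per sentence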
2fb82aeb0795716862ec22bbeb4fb9c9fa7db7dd | 5,592 | py | Python | ribo_annotate.py | everyday847/RiboAnnotate | a53abc6987e71538527b5b282129ead54497b31a | [
"MIT"
] | null | null | null | ribo_annotate.py | everyday847/RiboAnnotate | a53abc6987e71538527b5b282129ead54497b31a | [
"MIT"
] | null | null | null | ribo_annotate.py | everyday847/RiboAnnotate | a53abc6987e71538527b5b282129ead54497b31a | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from PIL import Image, ImageFont, ImageDraw
base_fields = ["x", "y", "chain", "segid", "seqpos", "name"]
def base_info_of(line):
"""
Line format:
x y chain segid seqpos name
205.500 -150.750 C AB 0 L14
"""
return dict(zip(base_fields, line.strip().split()))
def test_by_drawing_nt_names(im, residuewise_info):
draw = ImageDraw.Draw(im)
font = ImageFont.truetype("DejaVuSans.ttf", 24)
# Pillow expects the upper left corner of the character; Matlab outputs the center
for info in residuewise_info:
#print(info)
draw.text((float(info["x"])-10, float(info["y"])-10), info["name"], font=font, fill=(0,0,0,255))
im.save("modified2_bw.png")
def add_annotations(im, residuewise_info, annotations_fn):
fields = ["human_name", "start_res", "end_res", "worked_ISAT", "worked_squires", "insertion"]
import re
def annotation_info_of(line):
        #return dict(zip(fields, line.strip().split()))
        # Use \s+ rather than \s*: a pattern that can match the empty string
        # makes re.split break between every character on Python 3.7+.
        return dict(zip(fields, re.split(r'\s+', line.strip())))
def color_of(s):
if s == "Y": return (0,255,0,255)
elif s == "?": return (135,206,250,255) # sky blue
else: return (255,0,0,255)
def xy_of(ann, residuewise_info, offset=(0,0)):
"""
How to calculate the position for an annotation as a function of the loop?
1. Just 'near the base pair where the cut happens'
2. Involve the loop in your decision
3. _______
"""
# Strategy: 'off to the side' by extending the start -> end vector
xy_end, xy_start = None, None
for ri in residuewise_info:
if ri["seqpos"] == ann["end_res"]:
xy_end = (float(ri["x"]), float(ri["y"]))
if ri["seqpos"] == ann["start_res"]:
xy_start = (float(ri["x"]), float(ri["y"]))
if xy_start is not None and xy_end is not None: break
return ((xy_end[0]-xy_start[0])*2+xy_start[0]+offset[0], (xy_end[1]-xy_start[1])*2+xy_start[1]+offset[1])
annotations = []
with open(annotations_fn) as f:
annotations = [annotation_info_of(l) for l in f.readlines()]
print(annotations)
draw = ImageDraw.Draw(im)
font = ImageFont.truetype("DejaVuSans.ttf", 15)
def draw_text_with_box(draw, font, text, fill, location):
        # Box height must scale with the number of '\n' line breaks in the text.
text_size = font.getsize(text)
        # Widest rendered line: split on '\n' (not all whitespace) so that
        # multi-word lines are measured as whole lines.
        actual_x = max(font.getsize(substr)[0] for substr in text.split('\n'))
actual_y = text_size[1]+text_size[1]*text.count('\n')
print(location, actual_x,actual_y)
#draw.rectangle(((location[0]-5, location[1]-5), (location[0]+text_size[0]+5, location[1]+text_size[1]*text.count('\n')+5)), fill='white', outline=fill)
draw.rectangle(((location[0]-5, location[1]-5), (location[0]+actual_x+5, location[1]+actual_y+5)), fill='white', outline=fill)
draw.text(location, text, font=font, fill=fill)
def draw_line_between(draw, pt1, pt2, worked, info):
"""
Note that we have to connect 'residue before start' with 'residue
after end'.
"""
xy_end, xy_start = None, None
for ri in residuewise_info:
if int(ri["seqpos"]) == int(pt1)-1:
print(int(ri["seqpos"]))
xy_start = (float(ri["x"]), float(ri["y"]))
if int(ri["seqpos"]) == int(pt2)+1:
print(int(ri["seqpos"]))
xy_end = (float(ri["x"]), float(ri["y"]))
if xy_start is not None and xy_end is not None: break
draw.line((xy_start[0], xy_start[1], xy_end[0], xy_end[1]), fill=color_of(worked), width=5)
def offset_of(ann_count, ann):
"""
What offset should be applied to labels based on their referred residues having been
annotated before n times?
"""
return (0, ann_count[(ann["start_res"], ann["end_res"])]*40)
ann_count = {}
for ann in annotations:
print(ann)
if (ann["start_res"], ann["end_res"]) in ann_count:
ann_count[(ann["start_res"], ann["end_res"])] += 1
else:
ann_count[(ann["start_res"], ann["end_res"])] = 0
print("Ann:", ann)
ann_worked = '?'
if ann["worked_ISAT"] == 'Y' or ann["worked_squires"] == 'Y': ann_worked = 'Y'
elif ann["worked_ISAT"] == 'N' or ann["worked_squires"] == 'N': ann_worked = 'N'
draw_text_with_box(draw, font, ann["human_name"]+"\n"+ann["insertion"], color_of(ann_worked), xy_of(ann, residuewise_info, offset_of(ann_count, ann)))
draw_line_between(draw, ann["start_res"], ann["end_res"], ann_worked, residuewise_info)
if __name__ == "__main__":
#im = Image.open("drawing_new.png")
#im = Image.open("/Users/amw579/Downloads/drawing_blackwhite.png")
im = Image.open("drawing_blackwhite.png")
im = im.convert("L")
#im.save("temp.png")
#im = Image.open("temp.png")
im = im.convert("RGB")
residuewise_info = []
#with open("/Users/amw579/Downloads/drawing_blackwhite.png.coords.txt") as f:
with open("drawing_blackwhite.png.coords.txt") as f:
for line in f.readlines():
residuewise_info.append(base_info_of(line))
#test_by_drawing_nt_names(im, residuewise_info)
add_annotations(im, residuewise_info, "annotations.txt")
im.save("annotated_bw.png")
add_annotations(im, residuewise_info, "map_annotations.txt")
im.save("annotated_bw_withmap.png")
| 38.833333 | 160 | 0.599428 | 804 | 5,592 | 3.988806 | 0.241294 | 0.060804 | 0.016838 | 0.021827 | 0.396632 | 0.368257 | 0.243218 | 0.189585 | 0.140318 | 0.088556 | 0 | 0.027581 | 0.241416 | 5,592 | 143 | 161 | 39.104895 | 0.72843 | 0.21191 | 0 | 0.177215 | 0 | 0 | 0.119098 | 0.018558 | 0 | 0 | 0 | 0 | 0 | 1 | 0.113924 | false | 0 | 0.025316 | 0.012658 | 0.189873 | 0.075949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fbb0c4e75b2c3ddc001608e2927153cc660a8eb | 5,560 | py | Python | web_app.py | frank-chris/HospitalDatabaseSystem | 4861582d8cdca0ea28c7ef7ed46b2bdcecb3bf0b | [
"MIT"
] | null | null | null | web_app.py | frank-chris/HospitalDatabaseSystem | 4861582d8cdca0ea28c7ef7ed46b2bdcecb3bf0b | [
"MIT"
] | null | null | null | web_app.py | frank-chris/HospitalDatabaseSystem | 4861582d8cdca0ea28c7ef7ed46b2bdcecb3bf0b | [
"MIT"
] | null | null | null | from flask import Flask, render_template, request, redirect, url_for, session
from flask_mysqldb import MySQL
from sql_helper import *
from html_helper import *
app = Flask(__name__)
app.debug = True
app.secret_key = 'mango'
# Enter your database connection details below
app.config['MYSQL_HOST'] = 'localhost'
app.config['MYSQL_USER'] = 'tempuser'
app.config['MYSQL_PASSWORD'] = '123+Temppass'
app.config['MYSQL_DB'] = 'hospitaldb'
# Intialize MySQL
mysql = MySQL(app)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/', methods=['POST'])
def choose():
if request.form.get("start"):
return redirect(url_for('pick_table'))
else:
return render_template('index.html')
@app.route('/pick_table', methods=['POST', 'GET'])
def pick_table():
table_name = ''
if session.get('table_name'):
session.pop('table_name', None)
options = nested_list_to_html_select(show_tables(mysql))
if request.method == 'POST' and 'table' in request.form:
if 'describe' in request.form:
table_name = request.form['table']
table = nested_list_to_html_table(desc_table(mysql, table_name))
return render_template('pick_table.html', table=table, table_name=table_name, options=options)
elif 'pick' in request.form:
session['table_name'] = request.form['table']
return redirect(url_for('edit'))
table = nested_list_to_html_table(show_tables(mysql))
return render_template('pick_table.html', table=table, table_name=table_name, options=options)
@app.route('/edit', methods=['POST', 'GET'])
def edit():
table_name = session['table_name']
operation = None
form_html = ''
if request.method == 'POST' and 'insert_form' in request.form:
operation = 'insert'
table = nested_list_to_html_table(select_with_headers(mysql, table_name), buttons=True)
form_html = get_insert_form(select_with_headers(mysql, table_name)[0])
return render_template('edit.html', table=table, table_name=table_name, operation=operation, form_html=form_html)
elif request.method == 'POST' and 'insert_execute' in request.form:
columns = select_with_headers(mysql, table_name)[0]
values = []
for col in columns:
val = request.form[col]
if val.isnumeric():
values.append(val)
else:
values.append("\'" + val + "\'")
try:
tables = insert_to_table(mysql, table_name, columns, values)
except Exception as e:
return render_template('invalid.html', e=str(e))
tables = [nested_list_to_html_table(t) for t in tables]
return render_template('insert_results.html', tables=tables, table_name=table_name)
elif request.method == 'POST' and 'delete_button' in request.form:
values = request.form['delete_button'].split(',')
values = [val if val.isnumeric() else "\'" + val + "\'" for val in values]
columns = select_with_headers(mysql, table_name)[0]
where = []
for col, val in zip(columns, values):
where.append(col + " = " + val)
where = " AND ".join(where)
try:
tables = delete_from_table(mysql, table_name, where)
except Exception as e:
return render_template('invalid.html', e=str(e))
tables = [nested_list_to_html_table(t) for t in tables]
return render_template('delete_results.html', tables=tables, table_name=table_name)
elif request.method == 'POST' and 'update_button' in request.form:
operation = 'update'
table = nested_list_to_html_table(select_with_headers(mysql, table_name), buttons=True)
values = request.form['update_button'].split(',')
form_html = get_update_form(select_with_headers(mysql, table_name)[0], values)
values = [val if val.isnumeric() else "\'" + val + "\'" for val in values]
columns = select_with_headers(mysql, table_name)[0]
where = []
for col, val in zip(columns, values):
where.append(col + " = " + val)
where = " AND ".join(where)
session['update_where'] = where
return render_template('edit.html', table=table, table_name=table_name, operation=operation, form_html=form_html)
elif request.method == 'POST' and 'update_execute' in request.form:
columns = select_with_headers(mysql, table_name)[0]
values = []
for col in columns:
val = request.form[col]
if val.isnumeric():
values.append(val)
else:
values.append("\'" + val + "\'")
set_statement = []
for col, val in zip(columns, values):
set_statement.append(col + " = " + val)
set_statement = ", ".join(set_statement)
try:
tables = update_table(mysql, table_name, set_statement, session['update_where'])
except Exception as e:
return render_template('invalid.html', e=str(e))
tables = [nested_list_to_html_table(t) for t in tables]
if session.get('update_where'):
session.pop('update_where', None)
return render_template('update_results.html', tables=tables, table_name=table_name)
table = nested_list_to_html_table(select_with_headers(mysql, table_name), buttons=True)
return render_template('edit.html', table=table, table_name=table_name, operation=operation, form_html=form_html)
if __name__ == '__main__':
app.run() | 42.442748 | 121 | 0.645863 | 711 | 5,560 | 4.80872 | 0.14346 | 0.094765 | 0.076046 | 0.042118 | 0.617724 | 0.586721 | 0.568002 | 0.538462 | 0.503656 | 0.503656 | 0 | 0.002097 | 0.228237 | 5,560 | 131 | 122 | 42.442748 | 0.794687 | 0.010791 | 0 | 0.45614 | 0 | 0 | 0.109131 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035088 | false | 0.008772 | 0.035088 | 0.008772 | 0.201754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fbcf0d92ad2b40b429d3953d100953a69f41fd2 | 4,379 | py | Python | seq2seq_dataset.py | Leon-Francis/blue-badminton | 4b267ac355173a2ed717f9fcf16edd008960b502 | [
"Apache-2.0"
] | null | null | null | seq2seq_dataset.py | Leon-Francis/blue-badminton | 4b267ac355173a2ed717f9fcf16edd008960b502 | [
"Apache-2.0"
] | null | null | null | seq2seq_dataset.py | Leon-Francis/blue-badminton | 4b267ac355173a2ed717f9fcf16edd008960b502 | [
"Apache-2.0"
] | null | null | null | import os
import re
from victim_module.victim_config import IMDB_Config
from torch.utils.data import Dataset
from tools import logging
from transformers import AutoTokenizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# Setup stopwords list & word (noun, adjective, and verb) lemmatizer
stop_words = set(stopwords.words('english'))
lemmatizer = WordNetLemmatizer()
def clean_text(text):
"""Function to clean text using RegEx operations, removal of stopwords, and lemmatization."""
    # Pass flags by keyword: positionally, re.UNICODE (== 32) would be taken
    # as the count argument and silently cap the number of substitutions.
    text = re.sub(r'[^\w\s]', '', text, flags=re.UNICODE)
text = text.lower()
text = [lemmatizer.lemmatize(token) for token in text.split(' ')]
text = [lemmatizer.lemmatize(token, 'v') for token in text]
text = [word for word in text if word not in stop_words]
text = ' '.join(text)
text = text.lstrip().rstrip()
text = re.sub(r'<br />', '', text)
text = re.sub(' +', ' ', text)
return text
class IMDB_Seq2Seq_Dataset(Dataset):
"""
idx = [CLS] + seq
mask = mask of idx
type = type of idx
label_idx = seq + [SEP]
"""
def __init__(self, train_data=True, debug_mode=False):
super(IMDB_Seq2Seq_Dataset, self).__init__()
if train_data:
self.path = IMDB_Config.TRAIN_DATA_PATH
else:
self.path = IMDB_Config.TEST_DATA_PATH
self.sentences, self.labels = self.read_standard_data(
self.path, debug_mode)
self.tokenizer = AutoTokenizer.from_pretrained(
IMDB_Config.TOKENIZER_NAME)
self.encodings_sentence = []
self.decodings_sentence = []
for sen in self.sentences:
self.encodings_sentence.append('[CLS] ' + sen)
self.decodings_sentence.append(sen + ' [SEP]')
self.encodings = self.tokenizer(self.encodings_sentence,
add_special_tokens=False,
padding=True,
truncation=True,
max_length=IMDB_Config.MAX_TOKENIZATION_LENGTH,
return_tensors='pt')
self.decodings = self.tokenizer(self.decodings_sentence,
add_special_tokens=False,
padding=True,
truncation=True,
max_length=IMDB_Config.MAX_TOKENIZATION_LENGTH,
return_tensors='pt')
def read_standard_data(self, path, debug_mode=False):
data = []
labels = []
if debug_mode:
i = 250
with open(path, 'r', encoding='utf-8') as file:
for line in file:
i -= 1
line = line.strip('\n')
sentence = line[:-1]
sentence = re.sub(r'<br />', '', sentence)
sentence = sentence.lstrip().rstrip()
sentence = re.sub(' +', ' ', sentence)
data.append(sentence)
labels.append(int(line[-1]))
if i == 0:
break
logging(f'loading data {len(data)} from {path}')
return data, labels
with open(path, 'r', encoding='utf-8') as file:
for line in file:
line = line.strip('\n')
sentence = line[:-1]
sentence = re.sub(r'<br />', '', sentence)
sentence = sentence.lstrip().rstrip()
sentence = re.sub(' +', ' ', sentence)
if IMDB_Config.APPLY_CLEANING:
sentence = clean_text(sentence)
data.append(sentence)
labels.append(int(line[-1]))
logging(f'loading data {len(data)} from {path}')
return data, labels
def __getitem__(self, item):
return self.encodings['input_ids'][item], self.encodings[
'attention_mask'][item], self.encodings['token_type_ids'][
item], self.decodings['input_ids'][item]
def __len__(self):
return len(self.encodings['input_ids'])
if __name__ == '__main__':
dataset = IMDB_Seq2Seq_Dataset(train_data=True, debug_mode=True)
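    # A minimal sketch (assumption: batching with a PyTorch DataLoader);
    # __getitem__ above yields (input_ids, attention_mask, token_type_ids,
    # label_ids), which the default collate function stacks into batches.
    from torch.utils.data import DataLoader
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    input_ids, attention_mask, token_type_ids, label_ids = next(iter(loader))
    print(input_ids.shape, label_ids.shape)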
pass | 38.752212 | 97 | 0.542818 | 468 | 4,379 | 4.891026 | 0.290598 | 0.045435 | 0.010485 | 0.010485 | 0.349498 | 0.330275 | 0.330275 | 0.301442 | 0.301442 | 0.264744 | 0 | 0.004904 | 0.348025 | 4,379 | 113 | 98 | 38.752212 | 0.796848 | 0.053665 | 0 | 0.351648 | 0 | 0 | 0.057879 | 0.00535 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054945 | false | 0.010989 | 0.087912 | 0.021978 | 0.208791 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fbf69f0c7833581d5b839c771d79dfeca5e3540 | 771 | py | Python | tests/read_head_test.py | aescwork/sqliteminor | 83aa4f76a28141b085bbfc7f725531e0b67aeed5 | [
"BSD-2-Clause"
] | 1 | 2020-01-31T11:38:47.000Z | 2020-01-31T11:38:47.000Z | tests/read_head_test.py | aescwork/sqliteminor | 83aa4f76a28141b085bbfc7f725531e0b67aeed5 | [
"BSD-2-Clause"
] | null | null | null | tests/read_head_test.py | aescwork/sqliteminor | 83aa4f76a28141b085bbfc7f725531e0b67aeed5 | [
"BSD-2-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import unittest
import sys
sys.path.append("../sqliteminor/")
sys.path.append("../temp/")
import sqliteminor
import setup_db
class ReadHeadTest(unittest.TestCase):
def setUp(self):
self.conn = setup_db.setup_db()
self.read_head_comp = [(1, u'Ash'), (2, u'Yew'), (3, u'White Pine'), (4, u'Scots Pine'), (5, u'Elm'), (6, u'Sugar Maple')]
self.sm = sqliteminor.SQLiteMinor(self.conn, "trees")
self.read_head_output = self.sm.read_head(6, "tree_id, name")
def test_read_head(self):
self.assertEqual(self.read_head_output, self.read_head_comp)
def test_result(self):
self.assertEqual(self.sm.result, "OK")
def tearDown(self):
self.sm.__del__()
del(self.conn)
if __name__ == '__main__':
unittest.main()
| 15.734694 | 124 | 0.673152 | 116 | 771 | 4.232759 | 0.431034 | 0.09776 | 0.09776 | 0.065173 | 0.089613 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.149157 | 771 | 48 | 125 | 16.0625 | 0.73628 | 0.027237 | 0 | 0 | 0 | 0 | 0.123306 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 1 | 0.190476 | false | 0 | 0.190476 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fbf7f011904cba5b1dd2d9a8b76bf67ae5b7a0e | 5,955 | py | Python | Gallery/views.py | CiganOliviu/InfiniteShoot | 14f7fb21e360e3c58876d82ebbe206054c72958e | [
"MIT"
] | 1 | 2021-04-02T16:45:37.000Z | 2021-04-02T16:45:37.000Z | Gallery/views.py | CiganOliviu/InfiniteShoot-1 | 6322ae34f88caaffc1de29dfa4f6d86d175810a7 | [
"Apache-2.0"
] | null | null | null | Gallery/views.py | CiganOliviu/InfiniteShoot-1 | 6322ae34f88caaffc1de29dfa4f6d86d175810a7 | [
"Apache-2.0"
] | null | null | null | from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import render
from django.urls import reverse
from django.views import generic
from django.views.generic.edit import FormMixin
from .models import *
from Gallery.form import ClientCatalogueForm
def gallery_view(request):
template_name = "gallery/gallery_view.html"
images_for_first_column = PlatformPresentationImage.objects.filter(column="First Column")
images_for_second_column = PlatformPresentationImage.objects.filter(column="Second Column")
images_for_third_column = PlatformPresentationImage.objects.filter(column="Third Column")
images_for_fourth_column = PlatformPresentationImage.objects.filter(column="Fourth Column")
context = {
'images_for_first_column': images_for_first_column,
'images_for_second_column': images_for_second_column,
'images_for_third_column': images_for_third_column,
'images_for_fourth_column': images_for_fourth_column,
}
return render(request, template_name, context)
def get_images_from_database(request):
images_first_column_client_query = ImagesClient.objects.filter(client=request.user, column="First Column")
images_second_column_client_query = ImagesClient.objects.filter(client=request.user, column="Second Column")
images_third_column_client_query = ImagesClient.objects.filter(client=request.user, column="Third Column")
images_fourth_column_client_query = ImagesClient.objects.filter(client=request.user, column="Fourth Column")
return images_first_column_client_query, images_fourth_column_client_query, images_second_column_client_query, \
images_third_column_client_query
@login_required
def personal_gallery_view(request):
template_name = 'gallery/personal_gallery_view.html'
images_first_column_client_query, images_fourth_column_client_query, images_second_column_client_query, \
images_third_column_client_query = get_images_from_database(
request)
context = {
'images_first_column_client_query': images_first_column_client_query,
'images_second_column_client_query': images_second_column_client_query,
'images_third_column_client_query': images_third_column_client_query,
'images_fourth_column_client_query': images_fourth_column_client_query,
}
return render(request, template_name, context)
@login_required  # assumption: this view filters by request.user, like personal_gallery_view
def choose_preferred_images_view(request):
template_name = 'gallery_details/pick_images_from_gallery.html'
images_first_column_client_query, images_fourth_column_client_query, images_second_column_client_query, \
images_third_column_client_query = get_images_from_database(
request)
context = {
'images_first_column_client_query': images_first_column_client_query,
'images_second_column_client_query': images_second_column_client_query,
'images_third_column_client_query': images_third_column_client_query,
'images_fourth_column_client_query': images_fourth_column_client_query,
}
return render(request, template_name, context)
class ChoosePreferredPhotosDetailView(LoginRequiredMixin, FormMixin, generic.DetailView):
model = ImagesClient
template_name = 'gallery_details/personal_image_details.html'
slug_field = 'image_slug'
slug_url_kwarg = 'image_slug'
form_class = ClientCatalogueForm
def get_success_url(self):
return reverse('choose_preferred_photos')
def get_context_data(self, **kwargs):
context = super(ChoosePreferredPhotosDetailView, self).get_context_data(**kwargs)
context['form'] = ClientCatalogueForm(initial={'post': self.object})
return context
def get_number_of_instances_from_database(self, database_name, filter_name):
return database_name.objects.filter(image_positioning=filter_name).count()
def validate_query(self, database_name, filter_name):
if filter_name == "Cover Image":
if self.get_number_of_instances_from_database(database_name, "Cover Image") == 1:
return False
if filter_name == "Content Image":
if self.get_number_of_instances_from_database(database_name, "Content Image") == 4:
return False
if filter_name == "Back Image":
if self.get_number_of_instances_from_database(database_name, "Back Image") == 1:
return False
return True
def post(self, request, *args, **kwargs):
self.object = self.get_object()
form = self.get_form()
if form.is_valid():
form_obj = form.save(commit=False)
form_obj.client = request.user
form_obj.image = self.__get_specific_image(request.user, self.kwargs['image_slug']).image
if self.validate_query(ClientCatalogue, form_obj.image_positioning):
return self.form_valid(form_obj)
return self.form_invalid(form)
else:
return self.form_invalid(form)
@staticmethod
def __get_specific_image(user, name_id):
result = ImagesClient.objects.get(client=user, name=name_id)
return result
def form_valid(self, form):
form.save()
return super(ChoosePreferredPhotosDetailView, self).form_valid(form)
@login_required  # assumption: this view filters by request.user, like personal_gallery_view
def my_catalogue(request):
template_name = 'gallery_details/my_catalogue.html'
get_cover_image = ClientCatalogue.objects.filter(client=request.user, image_positioning='Cover Image')
get_image = ClientCatalogue.objects.filter(client=request.user, image_positioning='Content Image')
get_back_image = ClientCatalogue.objects.filter(client=request.user, image_positioning='Back Image')
context = {
'get_cover_image': get_cover_image,
'get_image': get_image,
'get_back_image': get_back_image,
}
return render(request, template_name, context)
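# A hedged companion sketch of a urls.py wiring these views up. Only the
# 'choose_preferred_photos' name is grounded (reverse() above uses it); the
# paths and other route names are assumptions:
#
#   from django.urls import path
#   from Gallery import views
#
#   urlpatterns = [
#       path('gallery/', views.gallery_view, name='gallery'),
#       path('my-gallery/', views.personal_gallery_view, name='personal_gallery'),
#       path('pick/', views.choose_preferred_images_view, name='choose_preferred_photos'),
#       path('pick/<slug:image_slug>/', views.ChoosePreferredPhotosDetailView.as_view(),
#            name='image_detail'),
#       path('catalogue/', views.my_catalogue, name='my_catalogue'),
#   ]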
| 37.21875 | 116 | 0.751637 | 718 | 5,955 | 5.82312 | 0.13649 | 0.091844 | 0.130112 | 0.126525 | 0.597226 | 0.478833 | 0.410428 | 0.376704 | 0.376704 | 0.329347 | 0 | 0.000607 | 0.170109 | 5,955 | 159 | 117 | 37.45283 | 0.845407 | 0 | 0 | 0.25 | 0 | 0 | 0.140218 | 0.093535 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.074074 | 0.018519 | 0.398148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc03895160f2fd0bb43ba9092bb74994194db31 | 4,766 | py | Python | kat/controller.py | TOT0RoKR/kat | 5fafe688593a60700f1067ca51ec9d5f148a396e | [
"MIT"
] | 3 | 2019-10-27T08:26:15.000Z | 2019-11-26T12:23:38.000Z | kat/controller.py | tot0rokr/kat | 5fafe688593a60700f1067ca51ec9d5f148a396e | [
"MIT"
] | null | null | null | kat/controller.py | tot0rokr/kat | 5fafe688593a60700f1067ca51ec9d5f148a396e | [
"MIT"
] | 1 | 2020-01-25T00:14:51.000Z | 2020-01-25T00:14:51.000Z | from kat.lib.tag import Tag
from kat.lib.file import File
from kat.lib.scope import Scope
from kat.tracer import ccscanner as ccs
from kat.tracer import ccparser as ccp
from kat.tracer import ppscanner as pps
from kat.tracer import ppparser as ppp
from kat.ui.tabpage import *
import kat.ui.filetree as ft
import kat.ui.taglist as tl
import kat.ui.explorer as ep
from kat.katconfig import Katconfig
import vim
import os.path
import time
import pickle
katconfig = {}
global_tags = {}
global_tags['preprocess'] = {}
global_tags['curconfig'] = {}
def initializeKAT(configPath):
config = Katconfig(configPath)
kernel_root_dir = vim.vars['KATRootDir'].decode()
database = kernel_root_dir + '/' + "kat.database"
katref = kernel_root_dir + '/' + "kat.ref"
cc_tags = []
pp_tags = []
with open(kernel_root_dir + '/.config', "r") as f:
raw_data = f.read()
tokens = ccs.scan(raw_data)
cc_tags = ccp.parse(tokens)
for it in cc_tags:
if it.name in global_tags['curconfig']:
global_tags['curconfig'][it.name].append(it)
else:
global_tags['curconfig'][it.name] = [it]
files = []
for it in config.files:
files.append(File(it, kernel_root_dir + '/'))
files = sorted(files, key=lambda x: x.path)
katconfig['files'] = files
# read database
if os.path.exists(database):
# with open(database, "rb") as f:
f = open(database, "rb")
katref_time = pickle.load(f)
if katref_time != time.ctime(os.path.getmtime(katref)):
f.close()
pp_tags = initialize_database(katconfig)
else:
katref_data = pickle.load(f)
f.close()
pp_tags = katref_data
else:
pp_tags = initialize_database(katconfig)
# pp_tags = sorted(pp_tags, key=lambda x: x.path.path)
for it in pp_tags:
if it.name in global_tags['preprocess']:
global_tags['preprocess'][it.name].append(it)
else:
global_tags['preprocess'][it.name] = [it]
# print("files load success")
kconfigs = []
for it in config.kconfigs:
kconfigs.append(File(it, vim.vars['KATRootDir'].decode() + '/'))
kconfigs = sorted(kconfigs, key=lambda x: x.path)
katconfig['kconfigs'] = kconfigs
# print("kconfigs load success")
vim.vars['CompletedLoad'] = True
initialize_tab()
def initialize_tab():
if vim.vars['CompletedLoad'] == 0:
return
tab = TabPage(katconfig, global_tags)
vim.current.tabpage.vars['tabid'] = tab.tabpageNumber
# print(type(tabpages[currentTabpageNumber()].global_tags))
# print(type(global_tags))
ft.preInitialize()
tl.preInitialize()
ep.preInitialize()
initialize_buffer()
tl.show_taglist_buf()
def initialize_database(katconfig):
kernel_root_dir = vim.vars['KATRootDir'].decode()
database = kernel_root_dir + '/' + "kat.database"
katref = kernel_root_dir + '/' + "kat.ref"
pp_tags = []
i = 0
files_nr = len(katconfig['files'])
for it in katconfig['files']:
i += 1
print(str(i) + "/" + str(files_nr) + " - " + it.path)
filename = kernel_root_dir + '/' + it.path
with open(filename, "r", encoding="utf-8") as f:
try:
raw_data = f.read()
except UnicodeDecodeError:
with open(filename, "r", encoding="iso-8859-1") as f2:
raw_data = f2.read()
tokens = pps.scan(raw_data)
it.scope = Scope(it.path, None, 0, 0)
tags, _, _ = ppp.parse(tokens, it)
pp_tags += tags
with open(database, "wb") as f:
pickle.dump(time.ctime(os.path.getmtime(katref)), f)
pickle.dump(pp_tags, f)
return pp_tags
def initialize_window():
pass
def initialize_buffer():
if vim.vars['CompletedLoad'] == 0:
return
kernel_root_dir = vim.vars['KATRootDir'].decode()
buf = vim.current.buffer
tab = tabpages[currentTabpageNumber()]
filename = buf.name
if filename in buffers:
return
files = list(filter(lambda x: x.path in filename, tab.katconfig['files']))
if len(files) > 1:
raise AssertionError(str(files))
elif len(files) <= 0:
return
else:
pass
with open(filename, "r", encoding="utf-8") as f:
try:
raw_data = f.read()
except UnicodeDecodeError:
with open(filename, "r", encoding="iso-8859-1") as f2:
raw_data = f2.read()
tokens = pps.scan(raw_data)
it = File(filename)
it.scope = Scope(it, None, 0, 0)
tags, _, _ = ppp.parse(tokens, it)
buffers[filename] = Buffer(buf, tags)
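# A hedged usage sketch: from inside Vim with this plugin on the Python
# path, initialization might be driven roughly like the lines below.
# 'g:KATConfigPath' is a hypothetical variable name; only 'KATRootDir' and
# 'CompletedLoad' appear in the code above.
#
#   :let g:KATRootDir = '/path/to/kernel'
#   :py3 from kat import controller
#   :py3 controller.initializeKAT(vim.eval('g:KATConfigPath'))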
| 29.974843 | 78 | 0.610575 | 619 | 4,766 | 4.578352 | 0.198708 | 0.042343 | 0.050459 | 0.028229 | 0.37156 | 0.304164 | 0.255469 | 0.184898 | 0.166549 | 0.166549 | 0 | 0.007345 | 0.257239 | 4,766 | 158 | 79 | 30.164557 | 0.79322 | 0.051616 | 0 | 0.338462 | 0 | 0 | 0.065824 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 1 | 0.038462 | false | 0.015385 | 0.123077 | 0 | 0.2 | 0.007692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc482b29b5ed517acbe06792cf699b040d68832 | 7,015 | py | Python | corehq/apps/data_interfaces/utils.py | andyasne/commcare-hq | c59a24e57bdd4d2536493f9ecdcc9906f4ae1b88 | [
"BSD-3-Clause"
] | 471 | 2015-01-10T02:55:01.000Z | 2022-03-29T18:07:18.000Z | corehq/apps/data_interfaces/utils.py | andyasne/commcare-hq | c59a24e57bdd4d2536493f9ecdcc9906f4ae1b88 | [
"BSD-3-Clause"
] | 14,354 | 2015-01-01T07:38:23.000Z | 2022-03-31T20:55:14.000Z | corehq/apps/data_interfaces/utils.py | andyasne/commcare-hq | c59a24e57bdd4d2536493f9ecdcc9906f4ae1b88 | [
"BSD-3-Clause"
] | 175 | 2015-01-06T07:16:47.000Z | 2022-03-29T13:27:01.000Z | from typing import List, Optional
from django.utils.translation import ugettext as _
from couchdbkit import ResourceNotFound
from soil import DownloadBase
from corehq.apps.casegroups.models import CommCareCaseGroup
from corehq.apps.hqcase.utils import get_case_by_identifier
from corehq.form_processor.interfaces.dbaccessors import FormAccessors
from corehq.motech.repeaters.const import RECORD_CANCELLED_STATE
def add_cases_to_case_group(domain, case_group_id, uploaded_data, progress_tracker):
response = {
'errors': [],
'success': [],
}
try:
case_group = CommCareCaseGroup.get(case_group_id)
except ResourceNotFound:
response['errors'].append(_("The case group was not found."))
return response
num_rows = len(uploaded_data)
progress_tracker(0, num_rows)
for row_number, row in enumerate(uploaded_data):
identifier = row.get('case_identifier')
case = None
if identifier is not None:
case = get_case_by_identifier(domain, str(identifier))
if not case:
response['errors'].append(
_("Could not find case with identifier '{}'.").format(identifier)
)
elif case.doc_type != 'CommCareCase':
response['errors'].append(
_("It looks like the case with identifier '{}' "
"is marked as deleted.").format(identifier)
)
elif case.case_id in case_group.cases:
response['errors'].append(
_("A case with identifier '{}' already exists in this "
"group.").format(identifier)
)
else:
case_group.cases.append(case.case_id)
response['success'].append(
_("Case with identifier '{}' has been added to this "
"group.").format(identifier)
)
progress_tracker(row_number + 1, num_rows)
if response['success']:
case_group.save()
return response
def archive_or_restore_forms(domain, user_id, username, form_ids, archive_or_restore, task=None, from_excel=False):
response = {
'errors': [],
'success': [],
}
missing_forms = set(form_ids)
success_count = 0
if task:
DownloadBase.set_progress(task, 0, len(form_ids))
for xform in FormAccessors(domain).iter_forms(form_ids):
missing_forms.discard(xform.form_id)
if xform.domain != domain:
response['errors'].append(_("XForm {form_id} does not belong to domain {domain}").format(
form_id=xform.form_id, domain=domain))
continue
xform_string = _("XForm {form_id} for domain {domain} by user '{username}'").format(
form_id=xform.form_id,
domain=xform.domain,
username=username)
try:
if archive_or_restore.is_archive_mode():
xform.archive(user_id=user_id)
message = _("Successfully archived {form}").format(form=xform_string)
else:
xform.unarchive(user_id=user_id)
message = _("Successfully unarchived {form}").format(form=xform_string)
response['success'].append(message)
success_count = success_count + 1
except Exception as e:
response['errors'].append(_("Could not archive {form}: {error}").format(
form=xform_string, error=e))
if task:
DownloadBase.set_progress(task, success_count, len(form_ids))
for missing_form_id in missing_forms:
response['errors'].append(
_("Could not find XForm {form_id}").format(form_id=missing_form_id))
if from_excel:
return response
response["success_count_msg"] = _("{success_msg} {count} form(s)".format(
success_msg=archive_or_restore.success_text,
count=success_count))
return {"messages": response}
def property_references_parent(case_property):
return isinstance(case_property, str) and (
case_property.startswith("parent/") or
case_property.startswith("host/")
)
def operate_on_payloads(
repeat_record_ids: List[str],
domain: str,
action, # type: Literal['resend', 'cancel', 'requeue'] # 3.8+
use_sql: bool,
task: Optional = None,
from_excel: bool = False,
):
if not repeat_record_ids:
return {'messages': {'errors': [_('No payloads specified')]}}
response = {
'errors': [],
'success': [],
}
success_count = 0
if task:
DownloadBase.set_progress(task, 0, len(repeat_record_ids))
for record_id in repeat_record_ids:
if use_sql:
record = _get_sql_repeat_record(domain, record_id)
else:
record = _get_couch_repeat_record(domain, record_id)
if record:
try:
if action == 'resend':
record.fire(force_send=True)
message = _("Successfully resent repeat record (id={})").format(record_id)
elif action == 'cancel':
if use_sql:
record.state = RECORD_CANCELLED_STATE
else:
record.cancel()
record.save()
message = _("Successfully cancelled repeat record (id={})").format(record_id)
elif action == 'requeue':
record.requeue()
if not use_sql:
record.save()
message = _("Successfully requeued repeat record (id={})").format(record_id)
else:
raise ValueError(f'Unknown action {action!r}')
response['success'].append(message)
success_count = success_count + 1
except Exception as e:
message = _("Could not perform action for repeat record (id={}): {}").format(record_id, e)
response['errors'].append(message)
if task:
DownloadBase.set_progress(task, success_count, len(repeat_record_ids))
if from_excel:
return response
if success_count:
response["success_count_msg"] = _(
"Successfully performed {action} action on {count} form(s)"
).format(action=action, count=success_count)
else:
response["success_count_msg"] = ''
return {"messages": response}
def _get_couch_repeat_record(domain, record_id):
from corehq.motech.repeaters.models import RepeatRecord
try:
couch_record = RepeatRecord.get(record_id)
except ResourceNotFound:
return None
if couch_record.domain != domain:
return None
return couch_record
def _get_sql_repeat_record(domain, record_id):
from corehq.motech.repeaters.models import SQLRepeatRecord
try:
return SQLRepeatRecord.objects.get(domain=domain, pk=record_id)
except SQLRepeatRecord.DoesNotExist:
return None
| 33.564593 | 115 | 0.606557 | 771 | 7,015 | 5.285344 | 0.217899 | 0.029448 | 0.039264 | 0.020614 | 0.242454 | 0.211043 | 0.16638 | 0.132515 | 0.113865 | 0.090307 | 0 | 0.002015 | 0.292373 | 7,015 | 208 | 116 | 33.725962 | 0.818896 | 0.00727 | 0 | 0.327381 | 0 | 0 | 0.149691 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.059524 | 0.005952 | 0.172619 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc4fb97dec276ac9c856a99d96ed1184a3ff45c | 4,725 | py | Python | category_prediction/evalCategoryModel.py | jakelever/corona-ml | 8ceb22af50d7277ebf05f2fd21bbbf68c080ed76 | [
"MIT"
] | 7 | 2021-02-01T22:39:23.000Z | 2021-08-09T16:28:38.000Z | category_prediction/evalCategoryModel.py | jakelever/corona-ml | 8ceb22af50d7277ebf05f2fd21bbbf68c080ed76 | [
"MIT"
] | 1 | 2021-05-17T13:14:40.000Z | 2021-05-20T10:26:09.000Z | category_prediction/evalCategoryModel.py | jakelever/corona-ml | 8ceb22af50d7277ebf05f2fd21bbbf68c080ed76 | [
"MIT"
] | 1 | 2021-01-04T14:11:18.000Z | 2021-01-04T14:11:18.000Z |
import argparse
import json
import sys
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
import sklearn.metrics
from coronacode import DocumentClassifier
def main():
parser = argparse.ArgumentParser('Build a model for a classifier')
parser.add_argument('--categoriesFile',required=True,type=str,help='Category list file')
parser.add_argument('--params',required=True,type=str,help='JSON string with parameters')
parser.add_argument('--useTestSet',action='store_true',help='Whether to use the test set instead of the validation set')
parser.add_argument('--inJSON',required=True,type=str,help='Filename of JSON documents')
args = parser.parse_args()
print("Running with --params %s" % args.params)
params = json.loads(args.params)
with open(args.inJSON) as f:
documents = json.load(f)
with open(args.categoriesFile) as f:
categories = [ line.strip() for line in f ]
#test_docs = [ d for d in documents if 'phase4' in d['annotations'] ]
#documents = [ d for d in documents if not 'phase4' in d['annotations'] ]
#viruses = {'SARS-CoV-2','SARS-CoV','MERS-CoV'}
#documents = [ d for d in documents if any(entity['type'] == 'Virus' for entity in d['entities']) or any( v in d['annotations'] for v in viruses) ]
train_docs = [ d for d in documents if len(d['annotations']) > 0 and d['phase'] != 'testset' ]
test_docs = [ d for d in documents if d['phase'] == 'testset' ]
#other_docs = [ d for d in documents if len(d['annotations']) == 0 ]
toRemoveFromTraining = {'RemoveFromCorpus?','NotAllEnglish','NotRelevant','FixAbstract'}
train_docs = [ d for d in train_docs if not any (f in d['annotations'] for f in toRemoveFromTraining) ]
if not args.useTestSet:
train_docs, test_docs = train_test_split(train_docs, test_size=0.25, random_state=42)
train_categories = [ [ a for a in d['annotations'] if a in categories ] for d in train_docs ]
test_categories = [ [ a for a in d['annotations'] if a in categories ] for d in test_docs ]
encoder = MultiLabelBinarizer()
train_targets = encoder.fit_transform(train_categories)
    # transform (not fit_transform): refitting the encoder on the test set
    # could reorder or drop label columns relative to the training encoding.
    test_targets = encoder.transform(test_categories)
target_names = encoder.classes_
assert len(target_names) == len(categories)
print("len(train_docs):",len(train_docs))
print("len(test_docs):",len(test_docs))
print("class balance for train:", 100*sum(train_targets)/len(train_targets))
print("class balance for test:", 100*sum(test_targets)/len(test_targets))
sys.stdout.flush()
clf = DocumentClassifier(params)
print('train_targets.shape=',train_targets.shape)
sys.stdout.flush()
clf.fit(train_docs, train_targets, target_names)
predictions = clf.predict(test_docs)
print('predictions.shape=',predictions.shape)
sys.stdout.flush()
results = {}
all_tn, all_fp, all_fn, all_tp = 0,0,0,0
all_precisions, all_recalls, all_f1_scores = [],[],[]
for i,label in enumerate(target_names):
gold_for_label = test_targets[:,i]
predictions_for_label = predictions[:,i] > 0.5
tn, fp, fn, tp = sklearn.metrics.confusion_matrix(gold_for_label, predictions_for_label).ravel()
tn, fp, fn, tp = map(int, [tn, fp, fn, tp])
all_tn += tn
all_fp += fp
all_fn += fn
all_tp += tp
precision = sklearn.metrics.precision_score(gold_for_label,predictions_for_label)
recall = sklearn.metrics.recall_score(gold_for_label,predictions_for_label)
f1_score = sklearn.metrics.f1_score(gold_for_label,predictions_for_label)
all_precisions.append(precision)
all_recalls.append(recall)
all_f1_scores.append(f1_score)
print(f"{label}\t{precision}\t{recall}\t{f1_score}")
sys.stdout.flush()
results[label] = {'tn':tn,'fp':fp,'fn':fn,'tp':tp,'precision':precision,'recall':recall,'f1_score':f1_score}
micro_precision = all_tp / (all_tp + all_fp) if (all_tp + all_fp) > 0 else 0
micro_recall = all_tp / (all_tp + all_fn) if (all_tp + all_fn) > 0 else 0
micro_f1 = 2 * (micro_precision * micro_recall) / (micro_precision + micro_recall) if (micro_precision + micro_recall) > 0 else 0
macro_precision = sum(all_precisions) / len(all_precisions)
macro_recall = sum(all_recalls) / len(all_recalls)
macro_f1 = sum(all_f1_scores) / len(all_f1_scores)
results['MICRO'] = {'tn':all_tn,'fp':all_fp,'fn':all_fn,'tp':all_tp,'precision':micro_precision,'recall':micro_recall,'f1_score':micro_f1}
results['MACRO'] = {'precision':macro_precision,'recall':macro_recall,'f1_score':macro_f1}
print("-"*30)
print(f"MICRO\t{micro_precision}\t{micro_recall}\t{micro_f1}")
print(f"MACRO\t{macro_precision}\t{macro_recall}\t{macro_f1}")
print("-"*30)
output = {'params':params, 'results':results}
print(json.dumps(output))
print("Done")
if __name__ == '__main__':
main()
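# A hedged invocation sketch; the --params JSON keys depend on
# DocumentClassifier and are assumptions here:
#
#   python evalCategoryModel.py \
#       --categoriesFile categories.txt \
#       --params '{"model": "logistic_regression"}' \
#       --inJSON documents.json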
| 35.526316 | 148 | 0.729735 | 721 | 4,725 | 4.571429 | 0.209431 | 0.024272 | 0.016384 | 0.014867 | 0.168386 | 0.134102 | 0.118325 | 0.069175 | 0.053398 | 0.053398 | 0 | 0.012364 | 0.126984 | 4,725 | 132 | 149 | 35.795455 | 0.786667 | 0.084444 | 0 | 0.072289 | 0 | 0 | 0.171296 | 0.033796 | 0 | 0 | 0 | 0 | 0.012048 | 1 | 0.012048 | false | 0 | 0.084337 | 0 | 0.096386 | 0.168675 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc5e09486ca793c293ae78afb5fd7c1bcc6bbb2 | 1,930 | py | Python | simple/cubes.py | sawantss/simple-cozmo | 54c85d7a42e351616edfc19ea000c94b3eee47f4 | [
"Apache-2.0"
] | 1 | 2019-10-19T22:03:33.000Z | 2019-10-19T22:03:33.000Z | simple/cubes.py | sawantss/simple-cozmo | 54c85d7a42e351616edfc19ea000c94b3eee47f4 | [
"Apache-2.0"
] | null | null | null | simple/cubes.py | sawantss/simple-cozmo | 54c85d7a42e351616edfc19ea000c94b3eee47f4 | [
"Apache-2.0"
] | null | null | null | """
This class makes it easier to deal with cube tapped events in a procedural way using if statements.
"""
from __future__ import with_statement
__all__ = ["LightCubesEventsSerializer"]
__author__ = "Shitalkumar Sawant"
__copyright__ = "Copyright(c) 2018 All rights reserved"
from threading import Lock
import cozmo
from cozmo.robot import Robot
from cozmo.objects import LightCube, LightCube1Id, LightCube2Id, LightCube3Id, EvtObjectTapped
class LightCubesEventsSerializer:
cube_Ids = [LightCube1Id, LightCube2Id, LightCube3Id]
taps_status_locks = [Lock(), Lock(), Lock()]
taps_status = [False, False, False]
def __init__(self, robot: Robot):
self.robot = robot
for i in range(1, 4):
self.when_cube_tapped(self._cube_tapped, i)
def _cube_tapped(self, cube_no):
print("Cube tapped: ", cube_no)
with self.taps_status_locks[cube_no - 1]:
self.taps_status[cube_no - 1] = True
def _get_cube(self, cube_no=1) -> LightCube:
return self.robot.world.get_light_cube(self.cube_Ids[cube_no-1])
def when_cube_tapped(self, f, cube_no=None):
cube = self._get_cube(cube_no)
def cube_tapped_handler(evt, obj=None, tap_count=None, **kwargs):
f(cube_no)
if cube is None:
cozmo.logger.warning("Cozmo is not connected to a " + cube_no + " check the battery.")
else:
cube.add_event_handler(EvtObjectTapped, cube_tapped_handler)
"""
Returns true if the specified cube was tapped since this method was called last
cube_no must be an integer in the range of 1 to 3
"""
def was_cube_tapped(self, cube_no) -> bool:
cube_tapped = False
with self.taps_status_locks[cube_no - 1]:
if self.taps_status[cube_no - 1]:
cube_tapped=True
self.taps_status[cube_no - 1] = False
return cube_tapped
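# Minimal usage sketch (hypothetical program; assumes a connected Cozmo robot):
#
#     import cozmo
#     from cubes import LightCubesEventsSerializer
#
#     def cozmo_program(robot: cozmo.robot.Robot):
#         cubes = LightCubesEventsSerializer(robot)
#         while True:
#             if cubes.was_cube_tapped(1):
#                 robot.say_text("Cube one tapped").wait_for_completed()
#
#     cozmo.run_program(cozmo_program)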
| 31.129032 | 99 | 0.672021 | 265 | 1,930 | 4.615094 | 0.366038 | 0.07359 | 0.040065 | 0.044154 | 0.133279 | 0.100572 | 0.04906 | 0.04906 | 0 | 0 | 0 | 0.015007 | 0.240415 | 1,930 | 61 | 100 | 31.639344 | 0.819236 | 0.051295 | 0 | 0.054054 | 0 | 0 | 0.084482 | 0.015578 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.135135 | 0.027027 | 0.459459 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc616d3f84483fe97876ef204e5c1ea508cf80c | 7,771 | py | Python | test/test_basic.py | brunoginciene/mergin-db-sync | 0126e952df0e290ce503629b9831fbd1623edf7e | [
"MIT"
] | null | null | null | test/test_basic.py | brunoginciene/mergin-db-sync | 0126e952df0e290ce503629b9831fbd1623edf7e | [
"MIT"
] | null | null | null | test/test_basic.py | brunoginciene/mergin-db-sync | 0126e952df0e290ce503629b9831fbd1623edf7e | [
"MIT"
] | null | null | null |
import pytest
import os
import shutil
import sqlite3
import tempfile
import psycopg2
from mergin import MerginClient, ClientError
import sys
sys.path.insert(0, '/home/martin/lutra/mergin-db-sync')  # NOTE: machine-specific path to the dbsync checkout
from dbsync import dbsync_init, dbsync_pull, dbsync_push, dbsync_status, config
GEODIFFINFO_EXE = os.environ.get('TEST_GEODIFFINFO_EXE')
DB_CONNINFO = os.environ.get('TEST_DB_CONNINFO')
SERVER_URL = os.environ.get('TEST_MERGIN_URL')
API_USER = os.environ.get('TEST_API_USERNAME')
USER_PWD = os.environ.get('TEST_API_PASSWORD')
TMP_DIR = tempfile.gettempdir()
TEST_DATA_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'test_data')
@pytest.fixture(scope='function')
def mc():
assert SERVER_URL and API_USER and USER_PWD
#assert SERVER_URL and SERVER_URL.rstrip('/') != 'https://public.cloudmergin.com' and API_USER and USER_PWD
return MerginClient(SERVER_URL, login=API_USER, password=USER_PWD)
def cleanup(mc, project, dirs):
""" cleanup leftovers from previous test if needed such as remote project and local directories """
try:
print("Deleting project on Mergin server: " + project)
mc.delete_project(project)
except ClientError as e:
print("Deleting project error: " + str(e))
pass
for d in dirs:
if os.path.exists(d):
shutil.rmtree(d)
def cleanup_db(conn, schema_base, schema_main):
""" Removes test schemas from previous tests """
cur = conn.cursor()
cur.execute("DROP SCHEMA IF EXISTS {} CASCADE".format(schema_base))
cur.execute("DROP SCHEMA IF EXISTS {} CASCADE".format(schema_main))
cur.execute("COMMIT")
def init_sync_from_geopackage(mc, project_name, source_gpkg_path):
"""
Initialize sync from given GeoPackage file:
- (re)create Mergin project with the file
- (re)create local project working directory and sync directory
- configure DB sync and let it do the init (make copies to the database)
"""
full_project_name = API_USER + "/" + project_name
project_dir = os.path.join(TMP_DIR, project_name + '_work') # working directory
sync_project_dir = os.path.join(TMP_DIR, project_name + '_dbsync') # used by dbsync
db_schema_main = project_name + '_main'
db_schema_base = project_name + '_base'
conn = psycopg2.connect(DB_CONNINFO)
cleanup(mc, full_project_name, [project_dir, sync_project_dir])
cleanup_db(conn, db_schema_base, db_schema_main)
# prepare a new Mergin project
mc.create_project(project_name)
mc.download_project(full_project_name, project_dir)
shutil.copy(source_gpkg_path, os.path.join(project_dir, 'test_sync.gpkg'))
mc.push_project(project_dir)
# prepare sync dir
mc.download_project(full_project_name, sync_project_dir)
# prepare dbsync config
config.geodiffinfo_exe = GEODIFFINFO_EXE
config.mergin_username = API_USER
config.mergin_password = USER_PWD
config.mergin_url = SERVER_URL
config.db_conn_info = DB_CONNINFO
config.project_working_dir = sync_project_dir
config.mergin_sync_file = 'test_sync.gpkg'
config.db_driver = 'postgres'
config.db_schema_modified = db_schema_main
config.db_schema_base = db_schema_base
dbsync_init(from_gpkg=True)
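# At this point the Mergin project holds 'test_sync.gpkg', a working copy lives
# in project_dir, dbsync keeps its own copy in sync_project_dir, and the
# <project>_main / <project>_base schemas are populated in PostgreSQL.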
def test_basic_pull(mc):
"""
Test initialization and one pull from Mergin to DB
1. create a Mergin project using py-client with a testing gpkg
2. run init, check that everything is fine
3. make change in gpkg (copy new version), check everything is fine
"""
project_name = 'test_sync_pull'
source_gpkg_path = os.path.join(TEST_DATA_DIR, 'base.gpkg')
project_dir = os.path.join(TMP_DIR, project_name + '_work') # working directory
init_sync_from_geopackage(mc, project_name, source_gpkg_path)
conn = psycopg2.connect(DB_CONNINFO)
# test that database schemas are created + tables are populated
cur = conn.cursor()
cur.execute("SELECT count(*) from test_sync_pull_main.simple")
assert cur.fetchone()[0] == 3
# make change in GPKG and push
shutil.copy(os.path.join(TEST_DATA_DIR, 'inserted_1_A.gpkg'), os.path.join(project_dir, 'test_sync.gpkg'))
mc.push_project(project_dir)
# pull the change from Mergin to DB
dbsync_pull()
# check that a feature has been inserted
cur = conn.cursor()
cur.execute("SELECT count(*) from test_sync_pull_main.simple")
assert cur.fetchone()[0] == 4
print("---")
dbsync_status()
def test_basic_push(mc):
""" Initialize a project and test push of a new row from PostgreSQL to Mergin """
project_name = 'test_sync_push'
source_gpkg_path = os.path.join(TEST_DATA_DIR, 'base.gpkg')
project_dir = os.path.join(TMP_DIR, project_name + '_work') # working directory
init_sync_from_geopackage(mc, project_name, source_gpkg_path)
conn = psycopg2.connect(DB_CONNINFO)
# test that database schemas are created + tables are populated
cur = conn.cursor()
cur.execute("SELECT count(*) from test_sync_push_main.simple")
assert cur.fetchone()[0] == 3
# make a change in PostgreSQL
cur = conn.cursor()
cur.execute("INSERT INTO test_sync_push_main.simple (name, rating) VALUES ('insert in postgres', 123)")
cur.execute("COMMIT")
cur.execute("SELECT count(*) from test_sync_push_main.simple")
assert cur.fetchone()[0] == 4
# push the change from DB to Mergin
dbsync_push()
# pull new version of the project to the work project directory
mc.pull_project(project_dir)
# check that the insert has been applied to our GeoPackage
gpkg_conn = sqlite3.connect(os.path.join(project_dir, 'test_sync.gpkg'))
gpkg_cur = gpkg_conn.cursor()
gpkg_cur.execute("SELECT count(*) FROM simple")
assert gpkg_cur.fetchone()[0] == 4
print("---")
dbsync_status()
def test_basic_both(mc):
""" Initializes a sync project and does both a change in Mergin and in the database,
and lets DB sync handle it: changes in PostgreSQL need to be rebased on top of
changes in Mergin server.
"""
project_name = 'test_sync_both'
source_gpkg_path = os.path.join(TEST_DATA_DIR, 'base.gpkg')
project_dir = os.path.join(TMP_DIR, project_name + '_work') # working directory
init_sync_from_geopackage(mc, project_name, source_gpkg_path)
conn = psycopg2.connect(DB_CONNINFO)
# test that database schemas are created + tables are populated
cur = conn.cursor()
cur.execute(f"SELECT count(*) from {project_name}_main.simple")
assert cur.fetchone()[0] == 3
# make change in GPKG and push
shutil.copy(os.path.join(TEST_DATA_DIR, 'inserted_1_A.gpkg'), os.path.join(project_dir, 'test_sync.gpkg'))
mc.push_project(project_dir)
# make a change in PostgreSQL
cur = conn.cursor()
cur.execute(f"INSERT INTO {project_name}_main.simple (name, rating) VALUES ('insert in postgres', 123)")
cur.execute("COMMIT")
cur.execute(f"SELECT count(*) from {project_name}_main.simple")
assert cur.fetchone()[0] == 4
# first pull changes from Mergin to DB (+rebase changes in DB) and then push the changes from DB to Mergin
dbsync_pull()
dbsync_push()
# pull new version of the project to the work project directory
mc.pull_project(project_dir)
# check that the insert has been applied to our GeoPackage
gpkg_conn = sqlite3.connect(os.path.join(project_dir, 'test_sync.gpkg'))
gpkg_cur = gpkg_conn.cursor()
gpkg_cur.execute("SELECT count(*) FROM simple")
assert gpkg_cur.fetchone()[0] == 5
# check that the insert has been applied to the DB
cur = conn.cursor()
cur.execute(f"SELECT count(*) from {project_name}_main.simple")
assert cur.fetchone()[0] == 5
print("---")
dbsync_status()
| 34.847534 | 111 | 0.713808 | 1,143 | 7,771 | 4.633421 | 0.167979 | 0.049849 | 0.030211 | 0.024169 | 0.545128 | 0.510385 | 0.487915 | 0.487538 | 0.480929 | 0.456193 | 0 | 0.005991 | 0.18376 | 7,771 | 222 | 112 | 35.004505 | 0.828945 | 0.245528 | 0 | 0.476563 | 0 | 0.007813 | 0.187435 | 0.046467 | 0 | 0 | 0 | 0 | 0.078125 | 1 | 0.054688 | false | 0.03125 | 0.070313 | 0 | 0.132813 | 0.039063 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fc6dec78a4c6244e5db1d19faef976eaab1b58b | 20,773 | py | Python | code/ARAX/ARAXQuery/Expand/bte_querier.py | MayankMurali/RTX | 7d86305fb654c013c5ed2450dfe222f8666bfab2 | [
"MIT"
] | null | null | null | code/ARAX/ARAXQuery/Expand/bte_querier.py | MayankMurali/RTX | 7d86305fb654c013c5ed2450dfe222f8666bfab2 | [
"MIT"
] | null | null | null | code/ARAX/ARAXQuery/Expand/bte_querier.py | MayankMurali/RTX | 7d86305fb654c013c5ed2450dfe222f8666bfab2 | [
"MIT"
] | null | null | null | #!/bin/env python3
import sys
import os
import traceback
import asyncio
from biothings_explorer.user_query_dispatcher import SingleEdgeQueryDispatcher
sys.path.append(os.path.dirname(os.path.abspath(__file__))+"/../../UI/OpenAPI/python-flask-server/")
from swagger_server.models.node import Node
from swagger_server.models.edge import Edge
import Expand.expand_utilities as eu
class BTEQuerier:
def __init__(self, response_object):
self.response = response_object
self.use_synonyms = response_object.data['parameters'].get('use_synonyms')
self.synonym_handling = response_object.data['parameters'].get('synonym_handling')
self.enforce_directionality = response_object.data['parameters'].get('enforce_directionality')
self.continue_if_no_results = response_object.data['parameters'].get('continue_if_no_results')
def answer_one_hop_query(self, query_graph, qnodes_using_curies_from_prior_step):
answer_kg = {'nodes': dict(), 'edges': dict()}
edge_to_nodes_map = dict()
curie_map = dict()
valid_bte_inputs_dict = self._get_valid_bte_inputs_dict()
if self.response.status != 'OK':
return answer_kg, edge_to_nodes_map
# Validate our input to make sure it will work with BTE
qedge, input_qnode, output_qnode = self._validate_and_pre_process_input(query_graph=query_graph,
valid_bte_inputs_dict=valid_bte_inputs_dict,
enforce_directionality=self.enforce_directionality)
if self.response.status != 'OK':
return answer_kg, edge_to_nodes_map
# Add synonyms to our input query node, if desired
if self.use_synonyms:
curie_map = eu.add_curie_synonyms_to_query_nodes(qnodes=[input_qnode, output_qnode],
log=self.response,
override_node_type=False,
format_for_bte=True,
qnodes_using_curies_from_prior_step=qnodes_using_curies_from_prior_step)
if self.response.status != 'OK':
return answer_kg, edge_to_nodes_map
# Use BTE to answer the query
answer_kg, accepted_curies = self._answer_query_using_bte(input_qnode=input_qnode,
output_qnode=output_qnode,
qedge=qedge,
answer_kg=answer_kg,
valid_bte_inputs_dict=valid_bte_inputs_dict)
if self.response.status != 'OK':
return answer_kg, edge_to_nodes_map
# Do any post-processing after ALL curies in the input qnode have been queried
if eu.qg_is_fulfilled(query_graph, answer_kg) and input_qnode.curie and output_qnode.curie:
answer_kg = self._prune_answers_to_achieve_curie_to_curie_query(answer_kg, output_qnode, qedge)
if eu.qg_is_fulfilled(query_graph, answer_kg) and self.use_synonyms and self.synonym_handling == 'map_back':
answer_kg = self._remove_synonym_nodes(answer_kg, input_qnode, output_qnode, qedge, curie_map)
answer_kg = self._remove_redundant_edges(answer_kg, qedge.id)
# Report our findings
if eu.qg_is_fulfilled(query_graph, answer_kg):
answer_kg = eu.switch_kg_to_arax_curie_format(answer_kg)
edge_to_nodes_map = self._create_edge_to_nodes_map(answer_kg, input_qnode.id, output_qnode.id)
num_results_string = ", ".join([f"{qg_id}: {count}" for qg_id, count in sorted(eu.get_counts_by_qg_id(answer_kg).items())])
self.response.info(f"Query for edge {qedge.id} returned results ({num_results_string})")
else:
self._log_proper_no_results_message(accepted_curies, self.continue_if_no_results, valid_bte_inputs_dict['curie_prefixes'])
return answer_kg, edge_to_nodes_map
def _validate_and_pre_process_input(self, query_graph, valid_bte_inputs_dict, enforce_directionality):
# Make sure we have a valid one-hop query graph
if len(query_graph.edges) != 1 or len(query_graph.nodes) != 2:
self.response.error(f"BTE can only accept one-hop query graphs (your QG has {len(query_graph.nodes)} "
f"nodes and {len(query_graph.edges)} edges)", error_code="InvalidQueryGraph")
return None, None, None
qedge = query_graph.edges[0]
# Make sure at least one of our qnodes has a curie
qnodes_with_curies = [qnode for qnode in query_graph.nodes if qnode.curie]
if not qnodes_with_curies:
self.response.error(f"Neither qnode for qedge {qedge.id} has a curie specified. BTE requires that at least"
f" one of them has a curie. Your query graph is: {query_graph.to_dict()}")
return None, None, None
# Figure out which query node is input vs. output and validate which qnodes have curies
if enforce_directionality:
input_qnode = next(qnode for qnode in query_graph.nodes if qnode.id == qedge.source_id)
output_qnode = next(qnode for qnode in query_graph.nodes if qnode.id == qedge.target_id)
else:
qnodes_with_curies = [qnode for qnode in query_graph.nodes if qnode.curie]
input_qnode = qnodes_with_curies[0] if qnodes_with_curies else None
output_qnode = next(qnode for qnode in query_graph.nodes if qnode.id != input_qnode.id)
if not input_qnode.curie:
self.response.error(f"BTE cannot expand edges with a non-specific (curie-less) source node (source node is:"
f" {input_qnode.to_dict()})", error_code="InvalidInput")
elif not enforce_directionality:
self.response.warning(f"BTE cannot do bidirectional queries; the query for this edge will be directed, "
f"going: {input_qnode.id}-->{output_qnode.id}")
if self.response.status != 'OK':
return None, None, None
# Make sure predicate is allowed
if qedge.type not in valid_bte_inputs_dict['predicates'] and qedge.type is not None:
self.response.error(f"BTE does not accept predicate '{qedge.type}'. Valid options are "
f"{valid_bte_inputs_dict['predicates']}", error_code="InvalidInput")
return None, None, None
# Process qnode types (guess one if none provided, convert to preferred format, make sure allowed)
if not input_qnode.type:
input_qnode.type = eu.guess_qnode_type(input_qnode.curie, self.response)
if not output_qnode.type:
output_qnode.type = eu.guess_qnode_type(output_qnode.curie, self.response)
input_qnode.type = eu.convert_string_to_pascal_case(input_qnode.type)
output_qnode.type = eu.convert_string_to_pascal_case(output_qnode.type)
qnodes_missing_type = [qnode.id for qnode in [input_qnode, output_qnode] if not qnode.type]
if qnodes_missing_type:
self.response.error(f"BTE requires every query node to have a type. QNode(s) missing a type: "
f"{', '.join(qnodes_missing_type)}", error_code="InvalidInput")
return None, None, None
invalid_qnode_types = [qnode.type for qnode in [input_qnode, output_qnode] if qnode.type not in valid_bte_inputs_dict['node_types']]
if invalid_qnode_types:
self.response.error(f"BTE does not accept QNode type(s): {', '.join(invalid_qnode_types)}. Valid options are"
f" {valid_bte_inputs_dict['node_types']}", error_code="InvalidInput")
return None, None, None
# Make sure our input node curies are in list form and use prefixes BTE prefers
input_curie_list = eu.convert_string_or_list_to_list(input_qnode.curie)
input_qnode.curie = [eu.convert_curie_to_bte_format(curie) for curie in input_curie_list]
return qedge, input_qnode, output_qnode
def _answer_query_using_bte(self, input_qnode, output_qnode, qedge, answer_kg, valid_bte_inputs_dict):
accepted_curies = set()
# Send this single-edge query to BTE, once per input curie (adding findings to our answer KG as we go)
for curie in input_qnode.curie:
if eu.get_curie_prefix(curie) in valid_bte_inputs_dict['curie_prefixes']:
accepted_curies.add(curie)
try:
loop = asyncio.new_event_loop()
seqd = SingleEdgeQueryDispatcher(input_cls=input_qnode.type,
output_cls=output_qnode.type,
pred=qedge.type,
input_id=eu.get_curie_prefix(curie),
values=eu.get_curie_local_id(curie),
loop=loop)
self.response.debug(f"Sending query to BTE: {curie}-{qedge.type if qedge.type else ''}->{output_qnode.type}")
seqd.query()
reasoner_std_response = seqd.to_reasoner_std()
except Exception:
trace_back = traceback.format_exc()
error_type, error, _ = sys.exc_info()
self.response.error(f"Encountered a problem while using BioThings Explorer. {trace_back}",
error_code=error_type.__name__)
return answer_kg, accepted_curies
else:
answer_kg = self._add_answers_to_kg(answer_kg, reasoner_std_response, input_qnode.id, output_qnode.id, qedge.id)
return answer_kg, accepted_curies
def _add_answers_to_kg(self, answer_kg, reasoner_std_response, input_qnode_id, output_qnode_id, qedge_id):
kg_to_qg_ids_dict = self._build_kg_to_qg_id_dict(reasoner_std_response['results'])
if reasoner_std_response['knowledge_graph']['edges']:
remapped_node_ids = dict()
self.response.debug(f"Got results back from BTE for this query "
f"({len(reasoner_std_response['knowledge_graph']['edges'])} edges)")
for node in reasoner_std_response['knowledge_graph']['nodes']:
swagger_node = Node()
bte_node_id = node.get('id')
swagger_node.name = node.get('name')
swagger_node.type = eu.convert_string_to_snake_case(node.get('type'))
# Map the returned BTE qg_ids back to the original qnode_ids in our query graph
bte_qg_id = kg_to_qg_ids_dict['nodes'].get(bte_node_id)
if bte_qg_id == "n0":
qnode_id = input_qnode_id
elif bte_qg_id == "n1":
qnode_id = output_qnode_id
else:
self.response.error("Could not map BTE qg_id to ARAX qnode_id", error_code="UnknownQGID")
return answer_kg
# Find and use the preferred equivalent identifier for this node (if it's an 'output' node)
if qnode_id == output_qnode_id:
if bte_node_id in remapped_node_ids:
swagger_node.id = remapped_node_ids.get(bte_node_id)
else:
equivalent_curies = [f"{prefix}:{eu.get_curie_local_id(local_id)}" for prefix, local_ids in
node.get('equivalent_identifiers').items() for local_id in local_ids]
swagger_node.id = eu.get_best_equivalent_curie(equivalent_curies, swagger_node.type)
remapped_node_ids[bte_node_id] = swagger_node.id
else:
swagger_node.id = bte_node_id
eu.add_node_to_kg(answer_kg, swagger_node, qnode_id)
for edge in reasoner_std_response['knowledge_graph']['edges']:
swagger_edge = Edge()
swagger_edge.id = edge.get("id")
swagger_edge.type = edge.get('type')
swagger_edge.source_id = remapped_node_ids.get(edge.get('source_id'), edge.get('source_id'))
swagger_edge.target_id = remapped_node_ids.get(edge.get('target_id'), edge.get('target_id'))
swagger_edge.is_defined_by = "BTE"
swagger_edge.provided_by = edge.get('edge_source')
# Map the returned BTE qg_id back to the original qedge_id in our query graph
bte_qg_id = kg_to_qg_ids_dict['edges'].get(swagger_edge.id)
if bte_qg_id != "e1":
self.response.error("Could not map BTE qg_id to ARAX qedge_id", error_code="UnknownQGID")
return answer_kg
eu.add_edge_to_kg(answer_kg, swagger_edge, qedge_id)
return answer_kg
def _log_proper_no_results_message(self, accepted_curies, continue_if_no_results, valid_prefixes):
if continue_if_no_results:
if not accepted_curies:
self.response.warning(f"BTE could not accept any of the input curies. Valid curie prefixes for BTE are:"
f" {valid_prefixes}")
self.response.warning(f"No paths were found in BTE satisfying this query graph")
else:
if not accepted_curies:
self.response.error(f"BTE could not accept any of the input curies. Valid curie prefixes for BTE are: "
f"{valid_prefixes}", error_code="InvalidPrefix")
self.response.error(f"No paths were found in BTE satisfying this query graph", error_code="NoResults")
@staticmethod
def _remove_synonym_nodes(kg, input_qnode, output_qnode, qedge, curie_map):
for qnode_id, curie_mappings in curie_map.items():
ids_of_nodes_in_kg = set(list(kg['nodes'][qnode_id].keys()))
for original_curie, curies_used in curie_mappings.items():
synonyms_used_set = set(curies_used).difference({original_curie})
ids_of_synonym_nodes = synonyms_used_set.intersection(ids_of_nodes_in_kg)
if ids_of_synonym_nodes:
# Remap to the original curie if it's present, otherwise pick the best synonym node
if original_curie in ids_of_nodes_in_kg:
node_id_to_keep = original_curie
node_ids_to_remove = ids_of_synonym_nodes
else:
qnode_type = input_qnode.type if qnode_id == input_qnode.id else output_qnode.type
node_id_to_keep = eu.get_best_equivalent_curie(list(ids_of_synonym_nodes), qnode_type)
node_ids_to_remove = ids_of_synonym_nodes.difference({node_id_to_keep})
# Remove the nodes we don't want
for node_id in node_ids_to_remove:
kg['nodes'][qnode_id].pop(node_id)
# And remap their edges to point to the node we kept
for edge in kg['edges'][qedge.id].values():
if edge.source_id in node_ids_to_remove:
edge.source_id = node_id_to_keep
if edge.target_id in node_ids_to_remove:
edge.target_id = node_id_to_keep
return kg
@staticmethod
def _remove_redundant_edges(kg, qedge_id):
# Figure out which edges are redundant (can happen due to synonym remapping)
edges_already_seen = set()
edge_ids_to_remove = set()
for edge_id, edge in kg['edges'][qedge_id].items():
identifier_tuple_for_edge = (edge.source_id, edge.type, edge.target_id, edge.provided_by)
if identifier_tuple_for_edge in edges_already_seen:
edge_ids_to_remove.add(edge_id)
else:
edges_already_seen.add(identifier_tuple_for_edge)
# Then remove them
for edge_id in edge_ids_to_remove:
kg['edges'][qedge_id].pop(edge_id)
return kg
@staticmethod
def _prune_answers_to_achieve_curie_to_curie_query(kg, output_qnode, qedge):
"""
This is a way of hacking around BTE's limitation where it can only do (node with curie)-->(non-specific node)
kinds of queries. We do the non-specific query, and then use this function to remove all of the answer nodes
that do not correspond to the curie we wanted for the 'output' node.
"""
# Remove 'output' nodes in the KG that aren't actually the ones we were looking for
desired_output_curies = set(eu.convert_string_or_list_to_list(output_qnode.curie))
all_output_node_ids = set(list(kg['nodes'][output_qnode.id].keys()))
output_node_ids_to_remove = all_output_node_ids.difference(desired_output_curies)
for node_id in output_node_ids_to_remove:
kg['nodes'][output_qnode.id].pop(node_id)
# And remove any edges that used them
edge_ids_to_remove = set()
for edge_id, edge in kg['edges'][qedge.id].items():
if edge.target_id in output_node_ids_to_remove: # Edge target_id always contains output node ID for BTE
edge_ids_to_remove.add(edge_id)
for edge_id in edge_ids_to_remove:
kg['edges'][qedge.id].pop(edge_id)
return kg
@staticmethod
def _create_edge_to_nodes_map(answer_kg, input_qnode_id, output_qnode_id):
edge_to_nodes_map = dict()
for qedge_id, edges in answer_kg['edges'].items():
for edge_key, edge in edges.items():
# BTE single-edge queries are always directed (meaning, edge.source_id == input qnode ID)
edge_to_nodes_map[edge.id] = {input_qnode_id: edge.source_id, output_qnode_id: edge.target_id}
return edge_to_nodes_map
@staticmethod
def _get_valid_bte_inputs_dict():
# TODO: Load these using the soon to be built method in ARAX/KnowledgeSources (then will be regularly updated)
node_types = {'ChemicalSubstance', 'Transcript', 'AnatomicalEntity', 'Disease', 'GenomicEntity', 'Gene',
'BiologicalProcess', 'Cell', 'SequenceVariant', 'MolecularActivity', 'PhenotypicFeature',
'Protein', 'CellularComponent', 'Pathway'}
curie_prefixes = {'ENSEMBL', 'CHEBI', 'HP', 'DRUGBANK', 'MOP', 'MONDO', 'GO', 'HGNC', 'CL', 'DOID', 'MESH',
'OMIM', 'SO', 'SYMBOL', 'Reactome', 'UBERON', 'UNIPROTKB', 'PR', 'NCBIGene', 'UMLS',
'CHEMBL.COMPOUND', 'MGI', 'DBSNP', 'WIKIPATHWAYS', 'MP'}
predicates = {'disrupts', 'coexists_with', 'caused_by', 'subclass_of', 'affected_by', 'manifested_by',
'physically_interacts_with', 'prevented_by', 'has_part', 'negatively_regulates',
'functional_association', 'precedes', 'homologous_to', 'negatively_regulated_by',
'positively_regulated_by', 'has_subclass', 'contraindication', 'located_in', 'prevents',
'disrupted_by', 'preceded_by', 'treats', 'produces', 'treated_by', 'derives_from',
'gene_to_transcript_relationship', 'predisposes', 'affects', 'metabolize', 'has_gene_product',
'produced_by', 'derives_info', 'related_to', 'causes', 'contraindicated_by', 'part_of',
'metabolic_processing_affected_by', 'positively_regulates', 'manifestation_of'}
return {'node_types': node_types, 'curie_prefixes': curie_prefixes, 'predicates': predicates}
@staticmethod
def _build_kg_to_qg_id_dict(results):
kg_to_qg_ids = {'nodes': dict(), 'edges': dict()}
for node_binding in results['node_bindings']:
node_id = node_binding['kg_id']
qnode_id = node_binding['qg_id']
kg_to_qg_ids['nodes'][node_id] = qnode_id
for edge_binding in results['edge_bindings']:
edge_ids = eu.convert_string_or_list_to_list(edge_binding['kg_id'])
qedge_ids = edge_binding['qg_id']
for kg_id in edge_ids:
kg_to_qg_ids['edges'][kg_id] = qedge_ids
return kg_to_qg_ids
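# The returned mapping has the shape (IDs illustrative):
#   {'nodes': {'CHEBI:6801': 'n0', 'MONDO:0005148': 'n1'}, 'edges': {'df58d...': 'e1'}}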
| 60.211594 | 140 | 0.620854 | 2,670 | 20,773 | 4.500749 | 0.146442 | 0.023966 | 0.017475 | 0.022468 | 0.397603 | 0.293917 | 0.217276 | 0.169843 | 0.124906 | 0.121994 | 0 | 0.000547 | 0.296154 | 20,773 | 344 | 141 | 60.386628 | 0.821353 | 0.090983 | 0 | 0.1875 | 0 | 0.007353 | 0.163779 | 0.035445 | 0 | 0 | 0 | 0.002907 | 0 | 1 | 0.044118 | false | 0 | 0.029412 | 0 | 0.161765 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fcabbf4dd03d68a4332042abb2b0e3d729ffc0b | 2,088 | py | Python | filehandling.py | theambidextrous/py-learn-trail | 33283d29862654972edaab5baf318ae7298e6f76 | [
"Apache-2.0"
] | null | null | null | filehandling.py | theambidextrous/py-learn-trail | 33283d29862654972edaab5baf318ae7298e6f76 | [
"Apache-2.0"
] | null | null | null | filehandling.py | theambidextrous/py-learn-trail | 33283d29862654972edaab5baf318ae7298e6f76 | [
"Apache-2.0"
] | null | null | null | ### FILE HADLING #############
## File OPen
""""
Use open() to manipulate files. Open() function takes two parameters; filename, and mode.
There are four different methods (modes) for opening a file:
===================================================================
"r" - Read - Default value. Opens a file for reading, error if the file does not exist
"a" - Append - Opens a file for appending, creates the file if it does not exist
"w" - Write - Opens a file for writing, creates the file if it does not exist
"x" - Create - Creates the specified file, returns an error if the file exists
In addition you can specify if the file should be handled as binary or text mode
"t" - Text - Default value. Text mode
"b" - Binary - Binary mode (e.g. images)
"""
#### apending reading
f = open('learn.txt', 'a')
f.write('material: Some learning material here\n')
f.close()
#### reading as text
f = open('learn.txt', 'rt')
#print(f.read())
""" read() method can take an int to specify how many chars to read e.g."""
#print(f.read(10))
#### reading as binary
f = open('learn.txt', 'rb')
#print(f.read())
### Reading single line
""" use readline() to return a single line from the file """
f = open('learn.txt', 'rt')
print(f.readline())
print(f.readline())
print(f.readline())
print('-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-')
#### print by looping
f = open('learn.txt', 'rt')
for material in f:
print(material)
f.close()
print('-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-')
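#### reading with a context manager
""" the 'with' statement closes the file automatically, even if an error occurs """
with open('learn.txt', 'rt') as f:
    print(f.read())
print('-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-')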
##### Deleting Files
""" import OS module to delete files """
import os
f = open('demo.txt', 'x')
f.close()
os.remove("demo.txt")
print('file deleted')
print('-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-')
#### Check if file exisits
if os.path.exists('learn.txt'):
    print('file exists')
else:
    print('file not found')
##### Deleting folders
"""" use os.rmdir("myfolder") to delete folders """
| 34.229508 | 120 | 0.534962 | 270 | 2,088 | 4.137037 | 0.4 | 0.037601 | 0.044763 | 0.058192 | 0.14145 | 0.128021 | 0.128021 | 0.053715 | 0 | 0 | 0 | 0.00111 | 0.137452 | 2,088 | 60 | 121 | 34.8 | 0.619101 | 0.440613 | 0 | 0.48 | 0 | 0 | 0.556818 | 0.378409 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.04 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fcb5b1b96a29a0e18900da614e07c230a2d10b4 | 5,046 | py | Python | train_utils.py | mengcz13/pytorch_training_toolbox | fae446b53d687248c7cf4667b45530b6a9676f43 | [
"MIT"
] | null | null | null | train_utils.py | mengcz13/pytorch_training_toolbox | fae446b53d687248c7cf4667b45530b6a9676f43 | [
"MIT"
] | null | null | null | train_utils.py | mengcz13/pytorch_training_toolbox | fae446b53d687248c7cf4667b45530b6a9676f43 | [
"MIT"
] | null | null | null | import os
import sys
import socket
import time
import argparse
import pickle
import datetime
import multiprocessing
import numpy as np
import torch
import torch.optim as optim
import torch.nn.functional as torchF
from torch.optim import lr_scheduler
from torch.utils.data import DataLoader
import torch.multiprocessing as mp
class MyArgs:
'''Stores arguments to __dict__ of the class such that all arguments can be visited as attributes. This class is designed for visiting dictionary keys as attributes.
It's recommended to store experiment arguments and runtime arguments in two objects.
- Experiment arguments: arguments having effect on training, such as learning rate, batch size, random seed, and model hyperparameters;
- Runtime arguments: arguments only related to running (theoretically), such as device, GPU numbers, training/test mode...
Usually we only need to serialize and persist experiment arguments.
Args:
argdict: a dictionary storing all arguments, such as lr (learning rate), batch_size, ...
'''
def __init__(self, **argdict):
for k, v in argdict.items():
if isinstance(v, dict):
self.__dict__[k] = MyArgs(**v)
else:
self.__dict__[k] = v
def to_argdict(self):
'''Transform the object to a dictionary. Can be saved with yaml or other serialization tools.
'''
argdict = dict()
for k, v in self.__dict__.items():
if isinstance(v, MyArgs):
argdict[k] = v.to_argdict()
else:
argdict[k] = v
return argdict
def load_argdict(self, argdict):
'''Transform a dictionary to the object. Will overwrite existing keys!
'''
for k, v in argdict.items():
if isinstance(v, dict):
self.__dict__[k] = MyArgs(**v)
else:
self.__dict__[k] = v
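# Example usage (illustrative values):
#
#     exp_args = MyArgs(lr=1e-3, batch_size=64, model={'hidden': 128})
#     exp_args.model.hidden    # -> 128 (nested dicts become MyArgs instances)
#     exp_args.to_argdict()    # -> {'lr': 0.001, 'batch_size': 64, 'model': {'hidden': 128}}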
def fetch_ckpt_namelist(ckptdir, suffix='_checkpoint.pt'):
'''Auxiliary function to get a list of numbered checkpoints under the specified directory.
Here we assume that all numbered checkpoint files are named as `{:d}{}`.format(epochnum, suffix), e.g., `1_checkpoint.pt`.
The best checkpoint file storing the model with the best performance on the validaiton set won't be listed.
Return a list of (filename, epoch_number) tuples sorted by epoch number (or an empty list if there are no numbered checkpoint files).
'''
ckpts = []
for x in os.listdir(ckptdir):
if x.endswith(suffix) and (not x.startswith('best')):
xs = x.replace(suffix, '')
ckpts.append((x, int(xs)))
if len(ckpts) == 0:
return []
else:
ckpts.sort(key=lambda x: x[1])
return ckpts
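# e.g. a directory holding '3_checkpoint.pt', '1_checkpoint.pt' and
# 'best_checkpoint.pt' yields [('1_checkpoint.pt', 1), ('3_checkpoint.pt', 3)]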
def get_last_ckpt(ckptdir, device, suffix='_checkpoint.pt', specify=None):
'''Auxiliary function for getting the latest checkpoint or the specified checkpoint.
'''
if specify is not None:
last_ckpt = torch.load(os.path.join(ckptdir, '{}'.format(specify) + suffix))
else:
ckpts = fetch_ckpt_namelist(ckptdir, suffix)
if len(ckpts) == 0:
last_ckpt = None
else:
last_ckpt = torch.load(os.path.join(ckptdir, ckpts[-1][0]), map_location=device)
if os.path.exists(os.path.join(ckptdir, 'best' + suffix)):
best_ckpt = torch.load(os.path.join(ckptdir, 'best' + suffix), map_location=device)
else:
best_ckpt = None
return {
'last': last_ckpt, 'best': best_ckpt
}
def save_ckpt(epoch, best_valid_loss, best_valid_epoch, model, optimizer, scheduler, ckptdir,
prefix, suffix='_checkpoint.pt', max_to_keep=3):
'''Save checkpoints and keep only latest several checkpoints.
'''
ckptdict = {
'epoch': epoch,
'best_valid_loss': best_valid_loss,
'best_valid_epoch': best_valid_epoch,
'model': model.state_dict(),
'optimizer': optimizer.state_dict(),
'scheduler': scheduler.state_dict()
}
torch.save(ckptdict, os.path.join(ckptdir, prefix + suffix))
# remove too old ckpts
ckpts = fetch_ckpt_namelist(ckptdir, suffix)
if len(ckpts) > max_to_keep:
for tdfname, _ in ckpts[:len(ckpts) - max_to_keep]:
to_del_path = os.path.join(ckptdir, tdfname)
os.remove(to_del_path)
return ckptdict
def load_ckpt(model, optimizer, scheduler, ckpt, restore_opt_sche=True):
epoch = ckpt['epoch']
best_valid_loss = ckpt['best_valid_loss']
best_valid_epoch = ckpt['best_valid_epoch']
try:
model.load_state_dict(ckpt['model'])
except RuntimeError:
# the checkpoint keys likely carry a 'module.' prefix (saved from a
# DataParallel-wrapped model); wrap the model so the keys match
model = torch.nn.DataParallel(model)
model.load_state_dict(ckpt['model'])
model = model.module
if restore_opt_sche:
optimizer.load_state_dict(ckpt['optimizer'])
scheduler.load_state_dict(ckpt['scheduler'])
return epoch, best_valid_loss, best_valid_epoch, model, optimizer, scheduler
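# Typical resume flow (sketch; assumes model/optimizer/scheduler are already built):
#
#     ckpts = get_last_ckpt(ckptdir, device)
#     if ckpts['last'] is not None:
#         epoch, best_loss, best_epoch, model, optimizer, scheduler = \
#             load_ckpt(model, optimizer, scheduler, ckpts['last'])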
def print_2way(f, *x):
print(*x)
print(*x, file=f)
f.flush() | 36.565217 | 169 | 0.652794 | 669 | 5,046 | 4.763827 | 0.306428 | 0.033888 | 0.018826 | 0.032005 | 0.213681 | 0.187951 | 0.136806 | 0.127393 | 0.106056 | 0.077816 | 0 | 0.002104 | 0.246334 | 5,046 | 138 | 170 | 36.565217 | 0.835919 | 0.291518 | 0 | 0.214286 | 0 | 0 | 0.053444 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.153061 | 0 | 0.306122 | 0.030612 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fd3ac7c1ba41bd5ab199c3429b9b16299c4d677 | 3,677 | py | Python | get_images.py | agnes-yang/firecam | 9282d1b5b83be3abf6a137f7a72c090a9eca05f6 | [
"Apache-2.0"
] | 10 | 2019-12-19T02:37:33.000Z | 2021-12-07T04:47:08.000Z | get_images.py | agnes-yang/firecam | 9282d1b5b83be3abf6a137f7a72c090a9eca05f6 | [
"Apache-2.0"
] | 5 | 2019-10-27T23:22:52.000Z | 2020-02-13T23:08:15.000Z | get_images.py | agnes-yang/firecam | 9282d1b5b83be3abf6a137f7a72c090a9eca05f6 | [
"Apache-2.0"
] | 13 | 2019-09-24T18:53:24.000Z | 2021-07-16T05:57:18.000Z | # Copyright 2018 The Fuego Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Download images for a given camera with date/time closest to the specified time,
from either the public HPWREN archive or Fuego's AlertWildfire archive
"""
import sys
import os
fuegoRoot = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(fuegoRoot, 'lib'))
sys.path.insert(0, fuegoRoot)
import settings
settings.fuegoRoot = fuegoRoot
import collect_args
import goog_helper
import img_archive
import db_manager
import logging
import time, datetime, dateutil.parser
def main():
reqArgs = [
["c", "cameraID", "ID (code name) of camera"],
["s", "startTime", "starting date and time in ISO format (e.g., 2019-02-22T14:34:56 in Pacific time zone)"],
]
optArgs = [
["e", "endTime", "ending date and time in ISO format (e.g., 2019-02-22T14:34:56 in Pacific time zone)"],
["p", "periodSeconds", "override default of 60 seconds period between images to download"],
["o", "outputDir", "directory to save the output image"],
]
args = collect_args.collectArgs(reqArgs, optionalArgs=optArgs, parentParsers=[goog_helper.getParentParser()])
periodSeconds = int(args.periodSeconds) if args.periodSeconds else 60
outputDir = args.outputDir if args.outputDir else settings.downloadDir
startTimeDT = dateutil.parser.parse(args.startTime)
if args.endTime:
endTimeDT = dateutil.parser.parse(args.endTime)
else:
endTimeDT = startTimeDT
assert startTimeDT.year == endTimeDT.year
assert startTimeDT.month == endTimeDT.month
assert startTimeDT.day == endTimeDT.day
assert endTimeDT >= startTimeDT
alertWildfire = False
hpwren = False
files = None
googleServices = goog_helper.getGoogleServices(settings, args)
dbManager = db_manager.DbManager(sqliteFile=settings.db_file,
psqlHost=settings.psqlHost, psqlDb=settings.psqlDb,
psqlUser=settings.psqlUser, psqlPasswd=settings.psqlPasswd)
if args.cameraID.startswith('Axis-'):
alertWildfire = True
elif args.cameraID.endswith('-mobo-c'):
hpwren = True
else:
logging.error('Unexpected camera ID %s. Must start with either "Axis-" or end with "mobo-c"', args.cameraID)
exit(1)
if hpwren:
camArchives = img_archive.getHpwrenCameraArchives(googleServices['sheet'], settings)
gapMinutes = max(round(float(periodSeconds)/60), 1) # convert to minutes and ensure at least 1 minute
files = img_archive.getHpwrenImages(googleServices, settings, outputDir, camArchives, args.cameraID, startTimeDT, endTimeDT, gapMinutes)
else:
assert alertWildfire
files = img_archive.getAlertImages(googleServices, dbManager, settings, outputDir, args.cameraID, startTimeDT, endTimeDT, periodSeconds)
if files:
logging.warning('Found %d files.', len(files))
else:
logging.error('No matches for camera ID %s', args.cameraID)
if __name__=="__main__":
main()
| 39.537634 | 144 | 0.689421 | 447 | 3,677 | 5.61745 | 0.438479 | 0.023895 | 0.010354 | 0.012744 | 0.044604 | 0.044604 | 0.044604 | 0.044604 | 0.044604 | 0.044604 | 0 | 0.015862 | 0.19418 | 3,677 | 92 | 145 | 39.967391 | 0.83159 | 0.226815 | 0 | 0.064516 | 0 | 0.048387 | 0.173111 | 0 | 0 | 0 | 0 | 0 | 0.080645 | 1 | 0.016129 | false | 0.016129 | 0.145161 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fd4b59a4281b93484f61543a3964063cbea187d | 2,578 | py | Python | run_compile_all_cython.py | SamanFekri/BookRecommendation | 07dfa875154af39546cb263d4407339ce26d47e8 | [
"MIT"
] | null | null | null | run_compile_all_cython.py | SamanFekri/BookRecommendation | 07dfa875154af39546cb263d4407339ce26d47e8 | [
"MIT"
] | null | null | null | run_compile_all_cython.py | SamanFekri/BookRecommendation | 07dfa875154af39546cb263d4407339ce26d47e8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on 30/03/2019
@author: Maurizio Ferrari Dacrema
"""
import sys, glob, traceback, os
from CythonCompiler.run_compile_subprocess import run_compile_subprocess
if __name__ == '__main__':
# cython_file_list = glob.glob('**/*.pyx', recursive=True)
subfolder_to_compile_list = [
"MatrixFactorization",
"Cython_examples",
"Base/Similarity",
"SLIM_BPR",
]
cython_file_list = []
for subfolder_to_compile in subfolder_to_compile_list:
if subfolder_to_compile == "Cython_examples":
cython_file_list.extend(glob.glob('{}/*.pyx'.format(subfolder_to_compile), recursive=True))
else:
cython_file_list.extend(glob.glob('{}/Cython/*.pyx'.format(subfolder_to_compile), recursive=True))
print("run_compile_all_cython: Found {} Cython files in {} folders...".format(len(cython_file_list), len(subfolder_to_compile_list)))
print("run_compile_all_cython: All files will be compiled using your current python environment: '{}'".format(sys.executable))
save_folder_path = "./result_experiments/"
log_file_path = save_folder_path + "run_compile_all_cython.txt"
# If directory does not exist, create
if not os.path.exists(save_folder_path):
os.makedirs(save_folder_path)
log_file = open(log_file_path, "w")
fail_count = 0
for file_index, file_path in enumerate(cython_file_list):
file_path = file_path.replace("\\", "/").split("/")
file_name = file_path[-1]
file_path = "/".join(file_path[:-1]) + "/"
log_string = "Compiling [{}/{}]: {}... ".format(file_index+1, len(cython_file_list), file_name)
print(log_string)
try:
run_compile_subprocess(file_path, [file_name])
log_string += "PASS\n"
print(log_string)
log_file.write(log_string)
log_file.flush()
except Exception as exc:
traceback.print_exc()
fail_count += 1
log_string += "FAIL: {}\n".format(str(exc))
print(log_string)
log_file.write(log_string)
log_file.flush()
log_string = "run_compile_all_cython: Compilation finished. "
if fail_count != 0:
log_string += "FAILS {}/{}.".format(fail_count, len(cython_file_list))
else:
log_string += "SUCCESS."
log_string += "\nCompilation log can be found here: '{}'".format(log_file_path)
print(log_string)
log_file.write(log_string)
log_file.close() | 28.644444 | 137 | 0.643134 | 324 | 2,578 | 4.762346 | 0.333333 | 0.081659 | 0.072586 | 0.062216 | 0.207388 | 0.17628 | 0.139987 | 0.08814 | 0.08814 | 0.08814 | 0 | 0.008024 | 0.226532 | 2,578 | 90 | 138 | 28.644444 | 0.765797 | 0.074864 | 0 | 0.215686 | 0 | 0 | 0.194105 | 0.048842 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.019608 | 0.039216 | 0 | 0.039216 | 0.137255 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fd84b89ad959d00f7cb28ec23fef9355fc7136b | 1,473 | py | Python | angr/procedures/posix/getenv.py | BA7JCM/angr | 187a713c35759d998d93dfc5280630976d42d717 | [
"BSD-2-Clause"
] | null | null | null | angr/procedures/posix/getenv.py | BA7JCM/angr | 187a713c35759d998d93dfc5280630976d42d717 | [
"BSD-2-Clause"
] | null | null | null | angr/procedures/posix/getenv.py | BA7JCM/angr | 187a713c35759d998d93dfc5280630976d42d717 | [
"BSD-2-Clause"
] | null | null | null | import angr
######################################
# getenv
######################################
class getenv(angr.SimProcedure):
'''
The getenv() function searches the environment list to find the
environment variable name, and returns a pointer to the
corresponding value string.
'''
#pylint:disable=arguments-differ
def run(self, m_name):
strlen = angr.SIM_PROCEDURES['libc']['strlen']
name_len = self.inline_call(strlen, m_name)
name_expr = self.state.memory.load(m_name, name_len.max_null_index, endness='Iend_BE')
name = self.state.solver.eval(name_expr, cast_to=bytes)
p = self.state.posix.environ
if p is None:
return self.state.solver.BVS(b"getenv__" + name, self.state.arch.bits, key=('api', 'getenv', name.decode()))
while True:
m_line = self.state.memory.load(p, self.state.arch.byte_width, endness=self.arch.memory_endness)
if self.state.solver.eval(m_line, cast_to=int) == 0:
break
line_len = self.inline_call(strlen, m_line)
line_expr = self.state.memory.load(m_line, line_len.max_null_index, endness='Iend_BE')
line = self.state.solver.eval(line_expr, cast_to=bytes)
kv = line.split(b'=', maxsplit=1)
if len(kv) == 2 and kv[0] == name:
return m_line + (name_len.max_null_index + 1)
p += self.state.arch.bytes
return 0
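# angr normally picks this SimProcedure up automatically through its
# SIM_PROCEDURES registry; hooking it by hand would look like this sketch:
#
#     proj = angr.Project('./a.out', auto_load_libs=False)
#     proj.hook_symbol('getenv', getenv())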
| 39.810811 | 120 | 0.596062 | 199 | 1,473 | 4.236181 | 0.386935 | 0.117438 | 0.071174 | 0.067616 | 0.207592 | 0.180308 | 0.066429 | 0 | 0 | 0 | 0 | 0.005357 | 0.239647 | 1,473 | 36 | 121 | 40.916667 | 0.747321 | 0.126273 | 0 | 0 | 0 | 0 | 0.035413 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.045455 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fd88f709bdb101553b7e313c87147a9ef4f3e60 | 1,571 | py | Python | pyblaze/utils/examples/normalizing_flows.py | Greenroom-Robotics/pyblaze | e45e27fbd400b6ae2365ad2347165c7b5154ac51 | [
"MIT"
] | 20 | 2020-03-29T08:43:15.000Z | 2021-12-17T21:38:17.000Z | pyblaze/utils/examples/normalizing_flows.py | borchero/bxtorch | 8d01568c8ee9fc05f5b3c84ca3ec68ea74eef9eb | [
"MIT"
] | 4 | 2020-10-27T20:43:40.000Z | 2021-04-29T12:19:39.000Z | pyblaze/utils/examples/normalizing_flows.py | borchero/bxtorch | 8d01568c8ee9fc05f5b3c84ca3ec68ea74eef9eb | [
"MIT"
] | 2 | 2020-08-16T18:10:49.000Z | 2021-03-31T23:17:28.000Z | # pylint: disable=missing-docstring
import torch
import torch.optim as optim
from torch.utils.data.dataset import TensorDataset
import matplotlib.pyplot as plt
import pyblaze.nn as xnn
import pyblaze.plot as P
def train(engine, data, lr=1e-2, epochs=1000):
data_ = torch.as_tensor(data, dtype=torch.float)
# pylint: disable=no-member
loader = TensorDataset(data_).loader(batch_size=4096)
optimizer = optim.Adam(engine.model.parameters(), lr=lr)
return engine.train(
loader,
epochs=epochs,
optimizer=optimizer,
loss=xnn.TransformedNormalLoss(),
callbacks=[
xnn.EpochProgressLogger(),
],
gpu=False
)
def train_and_plot(engine, datasets, **kwargs):
plt.figure(figsize=plt.figaspect(0.4))
loss = xnn.TransformedNormalLoss(reduction='none')
num_datasets = len(datasets)
for (i, dataset) in enumerate(datasets):
print(f"Dataset ({i+1}/{num_datasets})...")
plt.subplot(2, num_datasets, i+1)
plt.xlim((-2.5, 2.5))
plt.ylim((-2.5, 2.5))
plt.scatter(*dataset.T, s=1, color='orange')
plt.xticks([])
plt.yticks([])
train(engine, dataset, **kwargs)
plt.subplot(2, num_datasets, num_datasets+i+1)
P.density_plot2d(lambda x: (-loss(*engine.model.eval()(x))).exp(), (-2.5, 2.5), (-2.5, 2.5))
plt.xticks([])
plt.yticks([])
if i == num_datasets - 1:
cbar = plt.colorbar(label='Density')
cbar.set_ticks([])
plt.tight_layout()
plt.show()
| 29.092593 | 100 | 0.615532 | 206 | 1,571 | 4.621359 | 0.456311 | 0.016807 | 0.015756 | 0.021008 | 0.072479 | 0.008403 | 0 | 0 | 0 | 0 | 0 | 0.029851 | 0.232336 | 1,571 | 53 | 101 | 29.641509 | 0.759536 | 0.037556 | 0 | 0.095238 | 0 | 0 | 0.033135 | 0.016567 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.142857 | 0 | 0.214286 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fd8994fcfe97d0f5782b8f01e1cbbee48079342 | 4,896 | py | Python | pracnbastats/team.py | practicallypredictable/pracnbstats | 55cd6e285394686a62f3d1535a5b3468cd5410ea | [
"MIT"
] | null | null | null | pracnbastats/team.py | practicallypredictable/pracnbstats | 55cd6e285394686a62f3d1535a5b3468cd5410ea | [
"MIT"
] | null | null | null | pracnbastats/team.py | practicallypredictable/pracnbstats | 55cd6e285394686a62f3d1535a5b3468cd5410ea | [
"MIT"
] | null | null | null | import numpy as np
from . import params
from . import scrape
from . import utils
from . import league
class BoxScores(league.BoxScores):
def __init__(
self, *, scraper,
season=params.Season.default(),
season_type=params.SeasonType.default(),
date_from=params.DateFrom.default(),
date_to=params.DateTo.default(),
counter=params.NBACounter.default(),
sorter=params.Sorter.default(),
sort_direction=params.SortDirection.default()):
super().__init__(
scraper=scraper,
season=season,
season_type=season_type,
date_from=date_from,
date_to=date_to,
player_team_flag=params.PlayerTeamFlag.Team,
counter=counter,
sorter=sorter,
sort_direction=sort_direction,
)
self._additional_formatting()
def _additional_formatting(self):
self._data = self._data.rename(columns={
'plus_minus': 'mov',
})
start = [
'season',
'season_type',
'team_id',
'team_abbr',
'game_id',
'date',
'opp_team_abbr',
'home_road',
'win_loss',
]
end = [
'video',
]
self._data = utils.order_columns(
self._data,
first_cols=start,
last_cols=end
)
@property
def team_records(self):
df = super().matchups
info = (
df[['season', 'season_type', 'team_id_h', 'team_abbr_h']]
.drop_duplicates(subset=['team_abbr_h'])
)
info = info.rename(columns={
'team_id_h': 'team_id',
'team_abbr_h': 'team_abbr'
}).set_index(['team_abbr'])
home = (
df.groupby(['team_abbr_h'])['win_loss_h']
.value_counts()
.unstack()
)
road = (
df.groupby(['team_abbr_r'])['win_loss_r']
.value_counts()
.unstack()
)
df = home.join(road, lsuffix='_h', rsuffix='_r').reset_index()
df = df.rename(columns={
'team_abbr_h': 'team_abbr',
'L_h': 'home_losses',
'W_h': 'home_wins',
'L_r': 'road_losses',
'W_r': 'road_wins',
})
df['wins'] = df['home_wins'] + df['road_wins']
df['losses'] = df['home_losses'] + df['road_losses']
df['home'] = df['home_wins'] + df['home_losses']
df['road'] = df['road_wins'] + df['road_losses']
df['games'] = df['wins'] + df['losses']
cols = [
'team_abbr',
'games',
'wins',
'losses',
'home',
'road',
'home_wins',
'home_losses',
'road_wins',
'road_losses'
]
df = df[cols].set_index(['team_abbr'])
df['win_pct'] = np.where(
df['games'] > 0,
df['wins'] / df['games'],
np.nan
)
df['home_win_pct'] = np.where(
df['home'] > 0,
df['home_wins'] / df['home'],
np.nan
)
df['road_win_pct'] = np.where(
df['road'] > 0,
df['road_wins'] / df['road'],
np.nan
)
df = info.join(df)
df = df.reset_index()
return df
class AdvancedBoxScores(scrape.NBAStats):
# TO DO: Fix opposing team ID
def __init__(
self, *,
scraper,
team=None,
opposing_team=None,
measure_type=params.MeasureType.default(),
season=params.Season.default(),
season_type=params.SeasonType.default(),
season_segment=params.SeasonSegment.default(),
month=params.SeasonMonth.default(),
game_outcome=params.GameOutcome.default(),
game_location=params.GameLocation.default(),
game_segment=params.GameSegment.default(),
period=params.Period.default(),
vs_div=params.Division.default(),
vs_conf=params.Conference.default(),
po_round=params.PlayoffRound.default()):
args = params.Arguments(
team=team,
opposing_team=opposing_team,
MeasureType=measure_type,
Season=season,
SeasonType=season_type,
SeasonSegment=season_segment,
Month=month,
Outcome=game_outcome,
Location=game_location,
GameSegment=game_segment,
Period=period,
VsDivision=vs_div,
VsConference=vs_conf,
PORound=po_round,
)
super().__init__(
scraper=scraper,
api_endpoint="teamgamelogs",
params=args)
| 30.409938 | 70 | 0.500817 | 483 | 4,896 | 4.795031 | 0.250518 | 0.044905 | 0.01943 | 0.015544 | 0.143782 | 0.050086 | 0.050086 | 0.050086 | 0.050086 | 0 | 0 | 0.000975 | 0.371324 | 4,896 | 160 | 71 | 30.6 | 0.751462 | 0.005515 | 0 | 0.137255 | 0 | 0 | 0.120814 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026144 | false | 0 | 0.03268 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fde19d1fd4d8c645e2a58569add1946141ead92 | 3,429 | py | Python | aishell/prepare_data.py | luweishuang/GSoC-2019-ASR-Pipeline | db43575418d57711e8be2ea83c32b931b90fa814 | [
"MIT"
] | 1 | 2019-12-25T03:37:01.000Z | 2019-12-25T03:37:01.000Z | aishell/prepare_data.py | luweishuang/GSoC-2019-ASR-Pipeline | db43575418d57711e8be2ea83c32b931b90fa814 | [
"MIT"
] | null | null | null | aishell/prepare_data.py | luweishuang/GSoC-2019-ASR-Pipeline | db43575418d57711e8be2ea83c32b931b90fa814 | [
"MIT"
] | 1 | 2020-03-20T07:59:37.000Z | 2020-03-20T07:59:37.000Z | """
Command : prepare_data.py --src [...] --dst [...]
"""
import argparse
import os
import sys
from tqdm import tqdm
def findaudiofiles(dir):
files = []
for dirpath, _, filenames in os.walk(dir):
for filename in filenames:
if filename.endswith(".wav"):
files.append(os.path.join(dirpath, filename))
return files
def write_sample(filename, idx, dst, transcripts_dict):
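"""Write one sample as four files named '%09d' % idx plus an extension:
.wav (copy of the audio), .wrd (raw transcript), .tkn (characters separated
by spaces, with ' | ' between words), and .id (the numeric file id).
Returns True on success, False if the wav has no transcript entry."""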
# filename, input, lbl = line.split(" ", 2)
# assert filename and input and lbl
basepath = os.path.join(dst, "%09d" % idx)
name = os.path.splitext(os.path.basename(filename))[0]
if name not in transcripts_dict:
return False
# wav
os.system(
"cp {src} {dst}".format(
src=filename,
dst=basepath + ".wav",
)
)
# wrd
words = transcripts_dict[name].strip()
with open(basepath + ".wrd", "w", encoding="utf-8") as f:
f.write(words)
# ltr
spellings = " | ".join([" ".join(w) for w in words.split()])
with open(basepath + ".tkn", "w", encoding="utf-8") as f:
f.write(spellings)
# id
with open(basepath + ".id", "w") as f:
f.write("file_id\t{fid}".format(fid=idx))
return True
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Aishell Dataset creation.")
parser.add_argument("--src", help="source directory")
parser.add_argument("--dst", help="destination directory", default="./aishell")
args = parser.parse_args()
assert os.path.isdir(
str(args.src)
), "Aishell src directory not found - '{d}'".format(d=args.src)
transcript_subpath = os.path.join(args.src, "transcript/aishell_transcript_v0.8.txt")
transcripts_dict = {}
with open(transcript_subpath, 'r', encoding="utf-8") as f:
for line in f:
fname, transcript = line.split(None, 1)
transcripts_dict[fname] = transcript
subpaths = ["wav/train", "wav/dev", "wav/test"]
os.makedirs(args.dst, exist_ok=True)
for subpath in subpaths:
src = os.path.join(args.src, subpath)
dst = os.path.join(args.dst, "data", subpath)
os.makedirs(dst, exist_ok=True)
wavs = []
assert os.path.exists(src), "Unable to find the directory - '{src}'".format(
src=src
)
sys.stdout.write("analyzing {src}...\n".format(src=src))
sys.stdout.flush()
wavfiles = findaudiofiles(src)
# transcriptfiles.sort()
sys.stdout.write("writing to {dst}...\n".format(dst=dst))
sys.stdout.flush()
idx = 0
n_samples = len(wavfiles)
for n in tqdm(range(n_samples)):
idx += write_sample(wavfiles[n], idx, dst, transcripts_dict)
# create tokens dictionary
tkn_file = os.path.join(args.dst, "data", "wav/tokens.txt")
sys.stdout.write("creating tokens file {t}...\n".format(t=tkn_file))
sys.stdout.flush()
with open(tkn_file, "w") as f:
f.write("|\n")
tokens = set()
for key in transcripts_dict:
chars = list(transcripts_dict[key].strip().replace(' ', ''))
for char in chars:
tokens.add(char)
for token in tokens:
f.write(token + "\n")
# f.write("'\n")
# for alphabet in range(ord("a"), ord("z") + 1):
# f.write(chr(alphabet) + "\n")
sys.stdout.write("Done !\n")
| 29.560345 | 89 | 0.580927 | 443 | 3,429 | 4.417607 | 0.316027 | 0.030659 | 0.030659 | 0.018396 | 0.100664 | 0.043945 | 0.022483 | 0.022483 | 0 | 0 | 0 | 0.004736 | 0.261009 | 3,429 | 115 | 90 | 29.817391 | 0.767561 | 0.083115 | 0 | 0.038961 | 0 | 0 | 0.130838 | 0.012156 | 0 | 0 | 0 | 0 | 0.025974 | 1 | 0.025974 | false | 0 | 0.051948 | 0 | 0.116883 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fe14f420e86345764e26b8b68fa3a9d231ede94 | 7,717 | py | Python | ddpg.py | fengredrum/DDPG-v2 | 78d7bfbedb52b4dec76d2723f4e5d04d340f1275 | [
"MIT"
] | 3 | 2018-12-26T09:33:30.000Z | 2020-04-22T12:11:30.000Z | ddpg.py | fengredrum/DDPG-v2 | 78d7bfbedb52b4dec76d2723f4e5d04d340f1275 | [
"MIT"
] | null | null | null | ddpg.py | fengredrum/DDPG-v2 | 78d7bfbedb52b4dec76d2723f4e5d04d340f1275 | [
"MIT"
] | 1 | 2018-12-20T08:19:08.000Z | 2018-12-20T08:19:08.000Z | import time
import numpy as np
from replayBuffer import Memory
from network import *
class DDPG:
    """Deep Deterministic Policy Gradient (DDPG) agent, TensorFlow 1.x graph API."""
    def __init__(self, sess, env, args):
        self.s_dim = env.observation_space.shape[0]
        self.act_dim = env.action_space.shape[0]
        self.sess = sess
        self.args = args
        self._build_graph()
        self.memory = Memory(self.args.replayBuffer_size, dims=2 * self.s_dim + self.act_dim + 1)

    def _build_graph(self):
        self._placeholders()
        self._actor_critic()
        self._loss_train_op()
        self.score = tf.Variable(0., trainable=False, dtype=tf.float32, name='score')
        self.score_summary = tf.summary.scalar('score', self.score)
        self.sess.run(tf.global_variables_initializer())
        self.writer = tf.summary.FileWriter('logs/')
        self.writer.add_graph(self.sess.graph)

    def _placeholders(self):
        with tf.name_scope('inputs'):
            self.current_state = tf.placeholder(tf.float32, shape=[None, self.s_dim], name='s')
            self.reward = tf.placeholder(tf.float32, [None, 1], name='r')
            self.next_state = tf.placeholder(tf.float32, shape=[None, self.s_dim], name='s_')
            self.is_training = tf.placeholder(tf.bool, name='is_training')
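
    # The target actor/critic below are tracked with exponential moving averages
    # (Polyak averaging): target_w <- (1 - tau) * target_w + tau * online_w,
    # since the EMA decay is set to 1 - tau.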
    def _actor_critic(self):
        self.actor, self.actor_summary = build_actor(self.current_state, self.act_dim, self.is_training)
        self.actor_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'Actor')
        actor_ema = tf.train.ExponentialMovingAverage(decay=1 - self.args.tau)
        self.update_targetActor = actor_ema.apply(self.actor_vars)
        self.targetActor, _ = build_actor(self.next_state, self.act_dim, False,
                                          reuse=True, getter=get_getter(actor_ema))
        self.critic, self.critic_summary = build_critic(self.current_state, self.actor, self.act_dim)
        self.critic_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'Critic')
        critic_ema = tf.train.ExponentialMovingAverage(decay=1 - self.args.tau)
        self.update_targetCritic = critic_ema.apply(self.critic_vars)
        self.targetCritic, _ = build_critic(self.next_state, self.targetActor, self.act_dim,
                                            reuse=True, getter=get_getter(critic_ema))
    def _loss_train_op(self):
        max_grad = 2
        with tf.variable_scope('target_q'):
            self.target_q = self.reward + self.args.gamma * self.targetCritic
        with tf.variable_scope('TD_error'):
            self.critic_loss = tf.squared_difference(self.target_q, self.critic)
        with tf.variable_scope('critic_grads'):
            self.critic_grads = tf.gradients(ys=self.critic_loss, xs=self.critic_vars)
            for ix, grad in enumerate(self.critic_grads):
                self.critic_grads[ix] = grad / self.args.batch_size
        with tf.variable_scope('C_train'):
            critic_optimizer = tf.train.AdamOptimizer(self.args.critic_lr, epsilon=1e-5)
            self.train_critic = critic_optimizer.apply_gradients(zip(self.critic_grads, self.critic_vars))
        with tf.variable_scope('a_grad'):
            self.a_grads = tf.gradients(self.critic, self.actor)[0]
        with tf.variable_scope('actor_grads'):
            self.actor_grads = tf.gradients(ys=self.actor, xs=self.actor_vars, grad_ys=self.a_grads)
            for ix, grad in enumerate(self.actor_grads):
                self.actor_grads[ix] = tf.clip_by_norm(grad / self.args.batch_size, max_grad)
        with tf.variable_scope('A_train'):
            update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
            with tf.control_dependencies(update_ops):
                actor_optimizer = tf.train.AdamOptimizer(-self.args.actor_lr,
                                                         epsilon=1e-5)  # (- learning rate) for ascent policy
                self.train_actor = actor_optimizer.apply_gradients(zip(self.actor_grads, self.actor_vars))
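
    # Note: the actor update above follows the deterministic policy gradient,
    # dJ/dtheta ~ E[dQ(s, a)/da * dmu(s)/dtheta], realized by passing
    # grad_ys=a_grads to tf.gradients and negating the learning rate for ascent.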
    def choose_action(self, state):
        state = state[np.newaxis, :]  # single state
        return self.sess.run(self.actor, feed_dict={self.current_state: state,
                                                    self.is_training: False})[0]  # single action

    def train(self, episode=None, ep_reward=None):
        b_m = self.memory.sample(self.args.batch_size)
        b_s = b_m[:, :self.s_dim]
        b_a = b_m[:, self.s_dim: self.s_dim + self.act_dim]
        b_r = b_m[:, -self.s_dim - 1: -self.s_dim]
        b_s_ = b_m[:, -self.s_dim:]
        if episode is None:
            critic_feed_dict = {self.current_state: b_s, self.actor: b_a, self.reward: b_r, self.next_state: b_s_}
            self.sess.run([self.train_critic, self.update_targetCritic],
                          feed_dict=critic_feed_dict)
            actor_feed_dict = {self.current_state: b_s, self.next_state: b_s_, self.is_training: True}
            self.sess.run([self.train_actor, self.update_targetActor],
                          feed_dict=actor_feed_dict)
        else:
            update_score = self.score.assign(tf.convert_to_tensor(ep_reward, dtype=tf.float32))
            with tf.control_dependencies([update_score]):
                merged_score = tf.summary.merge([self.score_summary])
            critic_feed_dict = {self.current_state: b_s, self.actor: b_a, self.reward: b_r, self.next_state: b_s_}
            _, _, critic = self.sess.run([self.train_critic, self.update_targetCritic, self.critic_summary],
                                         feed_dict=critic_feed_dict)
            self.writer.add_summary(critic, episode)
            actor_feed_dict = {self.current_state: b_s, self.next_state: b_s_, self.is_training: True}
            merged = tf.summary.merge([merged_score, self.actor_summary])
            _, _, actor = self.sess.run([self.train_actor, self.update_targetActor, merged],
                                        feed_dict=actor_feed_dict)
            self.writer.add_summary(actor, episode)

    def perceive(self, state, action, reward, next_state, episode=None, ep_reward=None):
        # Store transition (s_t, a_t, r_t, s_{t+1}) in the replay buffer
        self.memory.store_transition(state, action, reward, next_state)
        # Collect transitions until the replay start size is reached, then start training
        if self.memory.pointer > self.args.replayBuffer_size:
            self.train(episode, ep_reward)
def learn(args, env, agent):
    render = False
    var = 3  # control exploration
    start = time.time()
    for e in range(args.num_episodes):
        obs = env.reset()
        ep_reward = 0
        if var >= 0.1:
            var *= .995  # decay the action randomness
        for j in range(args.num_steps):
            if render:
                env.render()
            action = agent.choose_action(obs)
            # Add exploration noise
            action = np.clip(np.random.normal(action, var), -2, 2)  # add randomness to action selection for exploration
            next_obs, reward, done, info = env.step(action)
            ep_reward += reward
            if j == args.num_steps - 1:
                agent.perceive(obs, action, reward * 0.1, next_obs, episode=e, ep_reward=ep_reward)
                end = time.time()
                total_num_steps = (e + 1) * args.num_steps
                print('Episode:', e, 'FPS:', int(total_num_steps / (end - start)),
                      'Reward: %i' % int(ep_reward), 'Explore: %.2f' % var, )
                if ep_reward > 10000:
                    render = True
                break
            else:
                agent.perceive(obs, action, reward * 0.1, next_obs)
                obs = next_obs
    agent.sess.close()
| 51.791946 | 120 | 0.621485 | 1,022 | 7,717 | 4.446184 | 0.181018 | 0.035651 | 0.017606 | 0.029269 | 0.387544 | 0.25132 | 0.193222 | 0.18794 | 0.166813 | 0.109155 | 0 | 0.008487 | 0.267073 | 7,717 | 148 | 121 | 52.141892 | 0.794908 | 0.041208 | 0 | 0.078125 | 0 | 0 | 0.019093 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070313 | false | 0 | 0.03125 | 0 | 0.117188 | 0.007813 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fe4fadf475b13cfb0692513254c3cb049b9b1c5 | 3,025 | py | Python | chainlet_unittests/test_grammar/test_subscription.py | maxfischer2781/chainlet | 4e17f9992b4780bd0d9309202e2847df640bffe8 | [
"MIT"
] | 1 | 2017-11-22T21:23:34.000Z | 2017-11-22T21:23:34.000Z | chainlet_unittests/test_grammar/test_subscription.py | maxfischer2781/chainlet | 4e17f9992b4780bd0d9309202e2847df640bffe8 | [
"MIT"
] | 10 | 2017-02-16T16:20:09.000Z | 2018-03-12T16:57:38.000Z | chainlet_unittests/test_grammar/test_subscription.py | maxfischer2781/chainlet | 4e17f9992b4780bd0d9309202e2847df640bffe8 | [
"MIT"
] | null | null | null | import itertools
import unittest
from chainlet.dataflow import MergeLink
from chainlet_unittests.utility import Adder
class ChainSubscription(unittest.TestCase):
    def test_pair(self):
        """Subscribe chain[i:j:k] for `a >> b`"""
        elements = [Adder(val) for val in (0, -2, -1E6)]
        for elements in itertools.product(elements, repeat=2):
            with self.subTest(elements=elements):
                a, b = elements

                def factory():
                    return a >> b
                self._assert_subscriptable(factory)

    def test_flatchain(self):
        """Subscribe chain[i:j:k] for `a >> b >> c ...`"""
        elements = [Adder(val) for val in (0, -2, -1E6)]
        for elements in itertools.product(elements, repeat=5):
            with self.subTest(elements=elements):
                a, b, c, d, e = elements

                def factory():
                    return a >> b >> c >> d >> e
                self._assert_subscriptable(factory)

    def test_fork(self):
        """Subscribe chain[i:j:k] for `a >> (b, c) >> d ...`"""
        elements = [Adder(val) for val in (0, -2, -1E6)]
        for elements in itertools.product(elements, repeat=5):
            with self.subTest(elements=elements):
                a, b, c, d, e = elements

                def factory():
                    return a >> (b, c) >> MergeLink() >> d >> e
                self._assert_subscriptable(factory)
    def _assert_subscriptable(self, chain_factory):
        """Verify __len__/__getitem__, slicing, re-composition and data flow for chains built by chain_factory."""
        with self.subTest(verify='interface available'):
            chain_instance = chain_factory()
            self.assertIsNotNone(getattr(chain_instance, '__len__', None))
            self.assertIsNotNone(getattr(chain_instance, '__getitem__', None))
        with self.subTest(verify='indexing'):
            chain_instance = chain_factory()
            self.assertEqual(len(chain_instance), len(chain_instance.elements))
            self.assertEqual([chain_instance[idx] for idx in range(len(chain_instance))], list(chain_instance.elements))
        with self.subTest(verify='slicing'):
            chain_instance = chain_factory()
            for start in range(len(chain_instance)):
                for stop in range(start, len(chain_instance)):
                    sub_chain = chain_instance[start:stop]
                    self.assertEqual(sub_chain.elements, chain_instance.elements[start:stop])
        with self.subTest(verify='consistency'):
            chain_instance = chain_factory()
            for index in range(len(chain_instance)):
                self.assertEqual(chain_instance[:index] >> chain_instance[index:], chain_instance)
        with self.subTest(verify='data flow'):
            chain_instance = chain_factory()
            for index in range(len(chain_instance)):
                first_chain, second_chain = chain_instance[:index], chain_instance[index:]
                temp_result = first_chain.send(1)
                self.assertEqual(second_chain.send(temp_result), chain_instance.send(1))
| 44.485294 | 120 | 0.596364 | 347 | 3,025 | 5.0317 | 0.210375 | 0.178694 | 0.068729 | 0.060137 | 0.571019 | 0.44559 | 0.365979 | 0.306987 | 0.306987 | 0.292096 | 0 | 0.007878 | 0.286612 | 3,025 | 67 | 121 | 45.149254 | 0.801205 | 0.042975 | 0 | 0.425926 | 0 | 0 | 0.025009 | 0 | 0 | 0 | 0 | 0 | 0.203704 | 1 | 0.12963 | false | 0 | 0.074074 | 0.055556 | 0.277778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fe84a2906f5bf097d1442024ee1714c19469089 | 2,727 | py | Python | tests/test_rules.py | f213/richtypo.py | 717cf4b640289c410b5ecddbd240d7cc818b8b68 | [
"MIT"
] | 9 | 2017-01-12T16:52:35.000Z | 2020-11-08T19:08:34.000Z | tests/test_rules.py | f213/richtypo.py | 717cf4b640289c410b5ecddbd240d7cc818b8b68 | [
"MIT"
] | 115 | 2018-06-12T20:48:09.000Z | 2020-07-29T02:14:58.000Z | tests/test_rules.py | f213/richtypo.py | 717cf4b640289c410b5ecddbd240d7cc818b8b68 | [
"MIT"
] | 1 | 2022-02-10T06:40:58.000Z | 2022-02-10T06:40:58.000Z | # -*- coding: utf-8
import re

import six

from richtypo import Richtypo
from richtypo.rules import ABRule, Rule

try:
    from unittest.mock import patch
except ImportError:
    from mock import patch
def test_rule_generic():
    r = Rule(
        pattern='b{2}',
        replacement='c'
    )
    assert r.apply('abb') == 'ac'


def test_rule_generic_no_match():
    r = Rule(
        pattern='b{2}',
        replacement='c'
    )
    assert r.apply('acc') == 'acc'


def test_rule_applying():
    r1 = Rule(
        pattern='b{2}',
        replacement='cc'
    )
    r2 = Rule(
        pattern='c{2}',
        replacement='dd'
    )
    r = Richtypo()
    r.rules = [r1, r2]
    r.text = 'abb'
    r.apply_rule_chain()
    assert r.text == 'add'


def test_rule_order():
    """
    Check that rules are applied in the order they are supplied
    """
    r1 = Rule(
        pattern='b{2}',
        replacement='cc'
    )
    r2 = Rule(
        pattern='c{2}',
        replacement='dd'
    )
    r = Richtypo()
    r.rules = [r2, r1]  # r2 works only after r1
    r.text = 'abb'
    r.apply_rule_chain()
    assert r.text == 'acc'  # so the text should not be changed


def test_get_rule_from_available():
    r = Richtypo()
    r.available_rules = {
        'b': Rule(pattern='b', replacement='d')
    }
    rule = r._get_rule('b')
    assert rule.pattern == 'b'


def test_get_rule_from_predefined_rules():
    r = Richtypo()
    rule = r._get_rule(ABRule)
    assert rule == ABRule
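
# Note: richtypo appears to compile patterns with re.UNICODE under Python 2,
# while Python 3 patterns are Unicode-aware by default (hence flags 0 below).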
def test_rule_flags():
    rule = Rule(pattern='A', replacement='b')
    rule._compile()
    if six.PY2:
        reference_flags = re.UNICODE
    else:
        reference_flags = 0
    assert rule._re == re.compile('A', flags=reference_flags)
    rule.flags = ['I']
    rule._compile()
    assert rule._re == re.compile('A', flags=reference_flags | re.I)


def test_all_occurancies_are_replaced():
    rule = Rule(pattern='(a|b)', replacement=r'\1c')
    assert rule.apply('aab') == 'acacbc'


@patch('richtypo.Richtypo._get_ruleset_rules')
def test_build_rule_chain(rules):
    r = Richtypo()
    r.available_rules = {
        'b': Rule(pattern='b', replacement='d')
    }
    with patch('richtypo.Richtypo._get_ruleset_rules') as rules:
        rules.return_value = ['b', ABRule]
        r.build_rule_chain('generic')
    assert len(r.rules) == 2


@patch('richtypo.Richtypo._get_ruleset_rules')
def test_ruleset_input_param(get_ruleset):
    get_ruleset.return_value = [
        ABRule,
    ]
    r = Richtypo(ruleset='generic')  # this ruleset really should exist
    assert len(r.rules) == 1
    assert ABRule in r.rules


def test_rule_loading_by_default():
    r = Richtypo()
    assert len(r.available_rules.keys()) >= 1
| 20.350746 | 70 | 0.604327 | 364 | 2,727 | 4.337912 | 0.258242 | 0.048765 | 0.053198 | 0.032932 | 0.395187 | 0.372388 | 0.349588 | 0.349588 | 0.295124 | 0.243192 | 0 | 0.011341 | 0.256326 | 2,727 | 133 | 71 | 20.503759 | 0.767258 | 0.059406 | 0 | 0.382979 | 0 | 0 | 0.082482 | 0.042419 | 0 | 0 | 0 | 0 | 0.138298 | 1 | 0.117021 | false | 0 | 0.074468 | 0 | 0.191489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fe8b058bff633879ee2bcceb0dc667d24109f50 | 786 | py | Python | accounts/urls.py | UTpay/UTpay | 5e43f07f9046781ab2b50ada3c3a022d13da1765 | [
"MIT"
] | 1 | 2017-11-04T20:33:04.000Z | 2017-11-04T20:33:04.000Z | accounts/urls.py | UTpay/utpay | 5e43f07f9046781ab2b50ada3c3a022d13da1765 | [
"MIT"
] | 7 | 2020-06-05T17:38:37.000Z | 2022-03-11T23:17:27.000Z | accounts/urls.py | UTpay/UTpay | 5e43f07f9046781ab2b50ada3c3a022d13da1765 | [
"MIT"
] | null | null | null | from django.contrib.auth import views as auth_views
from django.urls import path
from .views import *
app_name = 'accounts'
urlpatterns = [
    path('signup/', SignUpView.as_view(), name='signup'),
    path('activation/<key>/', ActivationView.as_view(), name='activation'),
    path('login/', auth_views.LoginView.as_view(template_name='login.html'), name='login'),
    path('logout/', auth_views.LogoutView.as_view(template_name='logout.html'), name='logout'),
    path('mypage/', MyPageView.as_view(), name='mypage'),
    path('mypage/contract/', ContractView.as_view(), name='contract'),
    path('mypage/contract/register/', ContractRegisterView.as_view(), name='contract_register'),
    path('mypage/contract/<address>/', ContractDetailView.as_view(), name='contract_detail'),
]
| 46.235294 | 96 | 0.717557 | 97 | 786 | 5.649485 | 0.350515 | 0.087591 | 0.109489 | 0.09854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100509 | 786 | 16 | 97 | 49.125 | 0.775106 | 0 | 0 | 0 | 0 | 0 | 0.270992 | 0.064886 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2feaafe4312b2826f2437121c11a4ae1d7ccd4e8 | 10,940 | py | Python | latest version/web/index.py | Kanae25805/Arcaea-server | 0218e797aec835e07488a1dfbd121925d56245b5 | [
"MIT"
] | null | null | null | latest version/web/index.py | Kanae25805/Arcaea-server | 0218e797aec835e07488a1dfbd121925d56245b5 | [
"MIT"
] | null | null | null | latest version/web/index.py | Kanae25805/Arcaea-server | 0218e797aec835e07488a1dfbd121925d56245b5 | [
"MIT"
] | null | null | null | from flask import (
    Blueprint, flash, g, redirect, render_template, request, url_for
)
from web.login import login_required
from werkzeug.utils import secure_filename
import sqlite3
import web.webscore
import web.system
import time
import server.arcscore
import os

UPLOAD_FOLDER = 'database'
ALLOWED_EXTENSIONS = {'db'}

bp = Blueprint('index', __name__, url_prefix='/web')


def is_number(s):
    try:  # determine whether the string s represents a float
        float(s)
        return True
    except ValueError:
        pass
    return False
@bp.route('/index')
@bp.route('/')
@login_required
def index():
    # Home page
    return render_template('web/index.html')


@bp.route('/singleplayer', methods=['POST', 'GET'])
@login_required
def single_player_score():
    # Query a single player's scores
    if request.method == 'POST':
        name = request.form['name']
        user_code = request.form['user_code']
        error = None
        if name or user_code:
            conn = sqlite3.connect('./database/arcaea_database.db')
            c = conn.cursor()
            if user_code:
                c.execute('''select user_id from user where user_code=:a''', {
                    'a': user_code})
            else:
                c.execute(
                    '''select user_id from user where name=:a''', {'a': name})
            user_id = c.fetchone()
            posts = []
            if user_id:
                user_id = user_id[0]
                posts = web.webscore.get_user_score(c, user_id)
                if not posts:
                    error = '无成绩 No score.'
            else:
                error = '玩家不存在 The player does not exist.'
            conn.commit()
            conn.close()
        else:
            error = '输入为空 Null Input.'
        if error:
            flash(error)
        else:
            return render_template('web/singleplayer.html', posts=posts)
    return render_template('web/singleplayer.html')
@bp.route('/singleplayerptt', methods=['POST', 'GET'])
@login_required
def single_player_ptt():
    # Query a single player's PTT (potential rating) details
    if request.method == 'POST':
        name = request.form['name']
        user_code = request.form['user_code']
        error = None
        if name or user_code:
            conn = sqlite3.connect('./database/arcaea_database.db')
            c = conn.cursor()
            if user_code:
                c.execute('''select user_id from user where user_code=:a''', {
                    'a': user_code})
            else:
                c.execute(
                    '''select user_id from user where name=:a''', {'a': name})
            user_id = c.fetchone()
            posts = []
            if user_id:
                user_id = user_id[0]
                user = web.webscore.get_user(c, user_id)
                posts = web.webscore.get_user_score(c, user_id, 30)
                recent, recentptt = web.webscore.get_user_recent30(c, user_id)
                if not posts:
                    error = '无成绩 No score.'
                else:
                    bestptt = 0
                    for i in posts:
                        if i['rating']:
                            bestptt += i['rating']
                    bestptt = bestptt / 30
            else:
                error = '玩家不存在 The player does not exist.'
            conn.commit()
            conn.close()
        else:
            error = '输入为空 Null Input.'
        if error:
            flash(error)
        else:
            return render_template('web/singleplayerptt.html', posts=posts, user=user, recent=recent, recentptt=recentptt, bestptt=bestptt)
    return render_template('web/singleplayerptt.html')
@bp.route('/allplayer', methods=['GET'])
@login_required
def all_player():
    # All player data, sorted by PTT in descending order
    conn = sqlite3.connect('./database/arcaea_database.db')
    c = conn.cursor()
    c.execute('''select * from user order by rating_ptt DESC''')
    x = c.fetchall()
    error = None
    if x:
        posts = []
        # NOTE: the numeric indices below assume the column order of the user table.
        for i in x:
            join_date = None
            time_played = None
            if i[3]:
                join_date = time.strftime('%Y-%m-%d %H:%M:%S',
                                          time.localtime(int(i[3])//1000))
            if i[20]:
                time_played = time.strftime('%Y-%m-%d %H:%M:%S',
                                            time.localtime(int(i[20])//1000))
            posts.append({'name': i[1],
                          'user_id': i[0],
                          'join_date': join_date,
                          'user_code': i[4],
                          'rating_ptt': i[5],
                          'song_id': i[11],
                          'difficulty': i[12],
                          'score': i[13],
                          'shiny_perfect_count': i[14],
                          'perfect_count': i[15],
                          'near_count': i[16],
                          'miss_count': i[17],
                          'time_played': time_played,
                          'clear_type': i[21],
                          'rating': i[22]
                          })
    else:
        error = '没有玩家数据 No player data.'
    conn.commit()
    conn.close()
    if error:
        flash(error)
        return render_template('web/allplayer.html')
    else:
        return render_template('web/allplayer.html', posts=posts)
@bp.route('/allsong', methods=['GET'])
@login_required
def all_song():
    # All song data

    def defnum(x):
        # Convert the stored chart constant (an integer, x10) back to a float
        if x >= 0:
            return x / 10
        else:
            return None

    conn = sqlite3.connect('./database/arcsong.db')
    c = conn.cursor()
    c.execute('''select * from songs''')
    x = c.fetchall()
    error = None
    if x:
        posts = []
        for i in x:
            posts.append({'song_id': i[0],
                          'name_en': i[1],
                          'rating_pst': defnum(i[12]),
                          'rating_prs': defnum(i[13]),
                          'rating_ftr': defnum(i[14]),
                          'rating_byn': defnum(i[15])
                          })
    else:
        error = '没有铺面数据 No song data.'
    conn.commit()
    conn.close()
    if error:
        flash(error)
        return render_template('web/allsong.html')
    else:
        return render_template('web/allsong.html', posts=posts)
@bp.route('/singlecharttop', methods=['GET', 'POST'])
@login_required
def single_chart_top():
    # Song leaderboard
    if request.method == 'POST':
        song_name = request.form['sid']
        difficulty = request.form['difficulty']
        if difficulty.isdigit():
            difficulty = int(difficulty)
        error = None
        conn = sqlite3.connect('./database/arcsong.db')
        c = conn.cursor()
        song_name = '%'+song_name+'%'
        c.execute('''select sid, name_en from songs where sid like :a limit 1''',
                  {'a': song_name})
        x = c.fetchone()
        conn.commit()
        conn.close()
        print(x)
        if x:
            song_id = x[0]
            posts = server.arcscore.arc_score_top(song_id, difficulty, -1)
            for i in posts:
                i['time_played'] = time.strftime('%Y-%m-%d %H:%M:%S',
                                                 time.localtime(i['time_played']))
        else:
            error = '查询为空 No song.'
        if not error:
            return render_template('web/singlecharttop.html', posts=posts, song_name_en=x[1], song_id=song_id, difficulty=difficulty)
        else:
            flash(error)
    return render_template('web/singlecharttop.html')
def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS


@bp.route('/updatedatabase', methods=['GET', 'POST'])
@login_required
def update_database():
    # Update the database from an uploaded file
    error = None
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('无文件 No file part.')
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            flash('未选择文件 No selected file.')
            return redirect(request.url)
        if file and allowed_file(file.filename) and file.filename in ['arcsong.db', 'arcaea_database.db']:
            filename = 'old_' + secure_filename(file.filename)
            file.save(os.path.join(UPLOAD_FOLDER, filename))
            flash('上传成功 Success upload.')
            try:
                web.system.update_database()
                flash('数据更新成功 Success update data.')
            except Exception:
                flash('数据更新失败 Cannot update data.')
        else:
            error = '上传失败 Upload error.'
    if error:
        flash(error)
    return render_template('web/updatedatabase.html')
@bp.route('/changesong', methods=['GET'])
@login_required
def change_song():
    # Modify song data
    return render_template('web/changesong.html')


@bp.route('/changesong/addsong', methods=['POST'])
@login_required
def add_song():
    # Add song data

    def get_rating(x):
        # Convert a chart constant to the stored x10 integer representation
        if is_number(x):
            x = float(x)
            if x >= 0:
                return int(x*10)
            else:
                return -1
        else:
            return -1

    error = None
    song_id = request.form['sid']
    name_en = request.form['name_en']
    rating_pst = get_rating(request.form['rating_pst'])
    rating_prs = get_rating(request.form['rating_prs'])
    rating_ftr = get_rating(request.form['rating_ftr'])
    rating_byd = get_rating(request.form['rating_byd'])
    if len(song_id) >= 256:
        song_id = song_id[:200]
    if len(name_en) >= 256:
        name_en = name_en[:200]
    conn = sqlite3.connect('./database/arcsong.db')
    c = conn.cursor()
    c.execute(
        '''select exists(select * from songs where sid=:a)''', {'a': song_id})
    if c.fetchone() == (0,):
        c.execute('''insert into songs(sid,name_en,rating_pst,rating_prs,rating_ftr,rating_byn) values(:a,:b,:c,:d,:e,:f)''', {
            'a': song_id, 'b': name_en, 'c': rating_pst, 'd': rating_prs, 'e': rating_ftr, 'f': rating_byd})
        flash('歌曲添加成功 Successfully add the song.')
    else:
        error = '歌曲已存在 The song exists.'
    conn.commit()
    conn.close()
    if error:
        flash(error)
    return redirect(url_for('index.change_song'))
@bp.route('/changesong/deletesong', methods=['POST'])
@login_required
def delete_song():
    # Delete song data
    error = None
    song_id = request.form['sid']
    conn = sqlite3.connect('./database/arcsong.db')
    c = conn.cursor()
    c.execute(
        '''select exists(select * from songs where sid=:a)''', {'a': song_id})
    if c.fetchone() == (1,):
        c.execute('''delete from songs where sid=:a''', {'a': song_id})
        flash('歌曲删除成功 Successfully delete the song.')
    else:
        error = "歌曲不存在 The song doesn't exist."
    conn.commit()
    conn.close()
    if error:
        flash(error)
    return redirect(url_for('index.change_song'))
| 30.304709 | 139 | 0.522121 | 1,262 | 10,940 | 4.38748 | 0.177496 | 0.018422 | 0.046957 | 0.054 | 0.484017 | 0.443742 | 0.382156 | 0.355066 | 0.324183 | 0.312805 | 0 | 0.012717 | 0.345887 | 10,940 | 360 | 140 | 30.388889 | 0.76104 | 0.009049 | 0 | 0.491639 | 0 | 0.003344 | 0.176056 | 0.041287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046823 | false | 0.003344 | 0.0301 | 0.010033 | 0.160535 | 0.010033 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2feca74f95fd3ef72f4807ca5abf86a33a361fda | 14,122 | py | Python | taggd/core/demultiplex.py | jfnavarro/taggd | c423a4bb8a2afc1ffaabc6704a8968dc3e829b4f | [
"BSD-3-Clause"
] | 3 | 2015-05-07T12:23:12.000Z | 2019-12-07T11:30:43.000Z | taggd/core/demultiplex.py | JoelSjostrand/taggd | d778dcce5205ca61bd82cb518854583f32f63baf | [
"BSD-3-Clause"
] | 1 | 2017-06-29T11:49:28.000Z | 2017-06-29T17:59:03.000Z | taggd/core/demultiplex.py | JoelSjostrand/taggd | d778dcce5205ca61bd82cb518854583f32f63baf | [
"BSD-3-Clause"
] | 3 | 2015-05-11T08:00:03.000Z | 2020-11-30T07:49:27.000Z | """
Runs taggd, a tool to demultiplex (link molecular barcodes back to) a file of genetic reads,
typically obtained by sequencing. For matched reads, the barcode and its related properties
are added to the read. Unmatched reads, ambiguously matched reads, and stats are by default
produced as output files as well.
The input ID file should be tab-delimited with the following format:
<barcode> <prop1> <prop2> ...
<barcode> <prop1> <prop2> ...
The input files can be in FASTA, FASTQ, SAM or BAM format. Matched files will be appended
with the barcode and properties like this:
B0:Z:<barcode> B1:Z:<prop1> B2:Z:<prop2> ...
Source: https://github.com/SpatialTranscriptomicsResearch/taggd
Python package: https://pypi.python.org/pypi/taggd
Contact: joel.sjostrand@gmail.com;jose.fernandez.navarro@scilifelab.se
"""
import os
import time
import multiprocessing as mp
import argparse
import taggd.io.barcode_utils as bu
import taggd.core.demultiplex_core_functions as core
import taggd.core.demultiplex_sub_functions as sub
import taggd.core.demultiplex_search_functions as srch
def main(argv=None):
    """
    Main application.
    Starts a timer, creates the argument parser, parses the parameters,
    and runs all the steps of the demultiplexing.
    """
    start_time = time.time()
    # Create a parser
    parser = argparse.ArgumentParser(description=__doc__,
                                     formatter_class=argparse.RawTextHelpFormatter)

    # Needed parameters
    parser.add_argument('barcodes_infile',
                        metavar='barcodes-infile',
                        help="The file with true barcode IDs and other properties.")
    parser.add_argument('reads_infile',
                        metavar='reads-infile',
                        help="The FASTQ, FASTA, SAM or BAM file with reads.")
    parser.add_argument('outfile_prefix',
                        metavar='outfile-prefix', help="The output files prefix.")

    # Optional arguments.
    parser.add_argument('--no-matched-output',
                        help='Do not output matched reads',
                        default=False, action='store_true')
    parser.add_argument('--no-ambiguous-output',
                        help='Do not output ambiguous reads',
                        default=False, action='store_true')
    parser.add_argument('--no-unmatched-output',
                        help='Do not output unmatched reads',
                        default=False, action='store_true')
    parser.add_argument('--no-results-output',
                        help='Do not output a tab-separated results file with stats on the reads',
                        default=False, action='store_true')
    parser.add_argument('--start-position',
                        type=int,
                        help='The start position for barcodes in reads (default: %(default)d)',
                        default=0, metavar="[int]")
    parser.add_argument('--k',
                        type=int,
                        help='The kmer length (default: %(default)d)',
                        default=6, metavar="[int]")
    parser.add_argument('--max-edit-distance',
                        type=int,
                        help='The max edit distance for allowing hits (default: %(default)d)',
                        default=2, metavar="[int]")
    parser.add_argument('--metric',
                        help="Distance metric: Subglobal, Levenshtein or Hamming (default: %(default)s)",
                        default="Subglobal", metavar="[string]")
    parser.add_argument('--ambiguity-factor',
                        type=float,
                        help='Top matches within this factor from the best match are considered ambiguous,\n'
                             'for instance with factor=1.5, having one match with distance 2 and two matches\n'
                             'with distance 4 yields all three matches as ambiguous hits. Perfect hits are always\n'
                             'considered non-ambiguous, irrespective of factor. (default: %(default)d)',
                        default=1.0, metavar="[int]")
    parser.add_argument('--slider-increment',
                        type=int, help="Space between kmer searches, "
                                       "0 yields kmer length (default: %(default)d)",
                        default=0, metavar="[int]")
    parser.add_argument('--overhang',
                        type=int,
                        help="Additional flanking bases around read barcode\n"
                             "to allow for insertions when matching (default: %(default)d)",
                        default=2, metavar="[int]")
    parser.add_argument('--seed',
                        help="Random number generator seed for shuffling ambiguous hits (default: %(default)s)",
                        default=None, metavar="[string]")
    parser.add_argument('--homopolymer-filter',
                        type=int,
                        help="If set, excludes reads where the barcode part contains\n"
                             "a homopolymer of the given length,\n"
                             "0 means no filter (default: %(default)d)",
                        default=8, metavar="[int]")
    parser.add_argument('--subprocesses',
                        type=int,
                        help="Number of subprocesses started (default: 0, yielding number of machine cores - 1)",
                        default=0, metavar="[int]")
    parser.add_argument('--estimate-min-edit-distance',
                        type=int,
                        help="If set, estimates the min edit distance among true\n"
                             "barcodes by comparing the specified number of pairs,\n"
                             "0 means no estimation (default: %(default)d)",
                        default=0, metavar="[int]")
    parser.add_argument('--no-offset-speedup',
                        help="Turns off an offset speedup routine.\n"
                             "Increases running time but may yield more hits.",
                        default=False, action='store_true')
    parser.add_argument('--multiple-hits-keep-one',
                        help="When multiple kmer hits are found for a record\n"
                             "keep one as unambiguous and the rest as ambiguous",
                        default=False, action='store_true')
    parser.add_argument('--trim-sequences', nargs='+', type=int, default=None,
                        help="Trims from the barcodes in the input file\n"
                             "the bases given in the list of tuples as START END START END .. where\n"
                             "START is the integer position of the first base (0 based) and END is the integer\n"
                             "position of the last base.\nTrimming sequences can be given several times.")
    parser.add_argument('--barcode-tag',
                        type=str,
                        help='Use the sequence in the specified tag instead of the read sequence for the barcode demultiplexing.\n'
                             'The tag must be a two-letter string and be present for all records in the input file.\n'
                             'Can only be used with SAM or BAM formatted input files.',
                        default=None, metavar="[str]")
    parser.add_argument('--version', action='version', version='%(prog)s ' + "0.3.2")
    # Parse
    if argv is None:
        options = parser.parse_args()
    else:
        options = parser.parse_args(argv)
    # Validate all options.
    if not os.path.isfile(options.barcodes_infile):
        raise ValueError("Invalid true barcodes input file path.")
    if not os.path.isfile(options.reads_infile):
        raise ValueError("Invalid reads input file path.")
    if not (options.reads_infile.upper().endswith(".FASTQ") or
            options.reads_infile.upper().endswith(".FQ") or
            options.reads_infile.upper().endswith(".SAM") or
            options.reads_infile.upper().endswith(".FASTA") or
            options.reads_infile.upper().endswith(".FA") or
            options.reads_infile.upper().endswith(".BAM")):
        raise ValueError("Invalid reads input file format: must be FASTQ, "
                         "FASTA, SAM or BAM format and file end with .fq, .fastq, .fa, .fasta, .sam or .bam")
    if options.outfile_prefix is None or options.outfile_prefix == "":
        raise ValueError("Invalid output file prefix.")
    if options.k <= 0:
        raise ValueError("Invalid kmer length. Must be > 0.")
    if options.max_edit_distance < 0:
        raise ValueError("Invalid max edit distance. Must be >= 0.")
    if options.metric not in ("Subglobal", "Levenshtein", "Hamming"):
        raise ValueError("Invalid metric. Must be Subglobal, Levenshtein or Hamming.")
    if options.slider_increment < 0:
        raise ValueError("Invalid slider increment. Must be >= 0.")
    if options.slider_increment == 0:
        options.slider_increment = int(options.k)
    if options.start_position < 0:
        raise ValueError("Invalid start position. Must be >= 0.")
    if options.overhang < 0:
        raise ValueError("Invalid overhang. Must be >= 0.")
    if options.metric == "Hamming" and options.overhang > 0:
        raise ValueError("Invalid overhang. Must be 0 for Hamming metric.")
    if options.subprocesses < 0:
        raise ValueError("Invalid no. of subprocesses. Must be >= 0.")
    if options.ambiguity_factor < 1.0:
        raise ValueError("Invalid ambiguity factor. Must be >= 1.")
    # Check that the trimming sequences given are valid: the number of
    # positions must be even (START END pairs) and none may be negative.
    if options.trim_sequences is not None \
            and (len(options.trim_sequences) % 2 != 0 or min(options.trim_sequences) < 0):
        raise ValueError("Invalid trimming sequences given. "
                         "The number of positions given must be even and they must fit into the barcode length.")
    if options.barcode_tag:
        if len(options.barcode_tag) != 2:
            raise ValueError("Invalid: the \"--barcode-tag\" option must specify a two-letter string, current length is "+str(len(options.barcode_tag))+" letters (\""+options.barcode_tag+"\").\n")
        if not (options.reads_infile.upper().endswith(".SAM") or options.reads_infile.upper().endswith(".BAM")):
            raise ValueError("Invalid: the \"--barcode-tag\" option can only be used with SAM or BAM formatted input files.\n")
    # Read barcodes file
    true_barcodes = bu.read_barcode_file(options.barcodes_infile)

    # Paths
    frmt = options.reads_infile.split(".")[-1]
    fn_bc = os.path.abspath(options.barcodes_infile)
    fn_reads = os.path.abspath(options.reads_infile)
    fn_prefix = os.path.abspath(options.outfile_prefix)
    fn_matched = None if options.no_matched_output else fn_prefix + "_matched." + frmt
    fn_ambig = None if options.no_ambiguous_output else fn_prefix + "_ambiguous." + frmt
    fn_unmatched = None if options.no_unmatched_output else fn_prefix + "_unmatched." + frmt
    fn_results = None if options.no_results_output else fn_prefix + "_results.tsv"

    # Subprocesses
    if options.subprocesses == 0:
        options.subprocesses = mp.cpu_count() - 1

    print("# Options: " + str(options).split("Namespace")[-1])
    print("# Barcodes input file: " + str(fn_bc))
    print("# Reads input file: " + str(fn_reads))
    print("# Matched output file: " + str(fn_matched))
    print("# Ambiguous output file: " + str(fn_ambig))
    print("# Unmatched output file: " + str(fn_unmatched))
    print("# Results output file: " + str(fn_results))
    print("# Number of barcodes in input: " + str(len(true_barcodes)))
    lngth = len(list(true_barcodes.keys())[0])
    print("# Barcode length: " + str(lngth))
    print("# Barcode length when overhang added: " +
          str(lngth + min(options.start_position, options.overhang) + options.overhang))

    # Check barcodes file.
    if options.estimate_min_edit_distance > 0:
        # estimate_min_edit_distance is assumed to live in taggd.io.barcode_utils (imported as bu)
        min_dist = bu.estimate_min_edit_distance(true_barcodes, options.estimate_min_edit_distance)
        if min_dist <= options.max_edit_distance:
            raise ValueError("Invalid max edit distance: exceeds or equal "
                             "to estimated minimum edit distance among true barcodes.")
        print("# Estimate of minimum edit distance between true barcodes (may be less): " + str(min_dist))
    else:
        print("# Estimate of minimum edit distance between true barcodes (may be less): Not estimated")
    # Make the input trim coordinates a list of tuples
    trim_sequences = None
    if options.trim_sequences is not None:
        trim_sequences = list()
        for i in range(len(options.trim_sequences) - 1):
            if i % 2 == 0:
                trim_sequences.append((options.trim_sequences[i],
                                       options.trim_sequences[i+1]))

    # Initialize main components
    sub.init(true_barcodes,
             options.start_position,
             min(options.start_position, options.overhang),
             options.overhang,
             options.max_edit_distance,
             options.homopolymer_filter,
             options.seed,
             options.multiple_hits_keep_one,
             trim_sequences,
             options.barcode_tag)

    srch.init(true_barcodes,
              options.k,
              options.max_edit_distance,
              options.metric,
              options.slider_increment,
              min(options.start_position, options.overhang),
              options.overhang,
              options.ambiguity_factor,
              options.no_offset_speedup)

    # Demultiplex
    print("# Starting demultiplexing...")
    stats = core.demultiplex(fn_reads,
                             fn_matched,
                             fn_ambig,
                             fn_unmatched,
                             fn_results,
                             options.subprocesses)
    print("# ...finished demultiplexing")
    print("# Wall time in secs: " + str(time.time() - start_time))
    print(str(stats))
| 52.303704 | 195 | 0.599278 | 1,651 | 14,122 | 5.028468 | 0.20533 | 0.024934 | 0.047097 | 0.020597 | 0.283064 | 0.215972 | 0.174175 | 0.150325 | 0.133341 | 0.112985 | 0 | 0.006452 | 0.29755 | 14,122 | 269 | 196 | 52.498141 | 0.830444 | 0.087877 | 0 | 0.125 | 0 | 0.009259 | 0.333489 | 0.008958 | 0.00463 | 0 | 0 | 0 | 0 | 1 | 0.00463 | false | 0 | 0.037037 | 0 | 0.041667 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fed9c1cc996668efe8fcc3cc02dc64126644948 | 18,586 | py | Python | plots_matlabformat.py | apiasecki/benthic_calibration | 739a42096018f51af2c2043413b90aa072fcd81d | [
"MIT"
] | null | null | null | plots_matlabformat.py | apiasecki/benthic_calibration | 739a42096018f51af2c2043413b90aa072fcd81d | [
"MIT"
] | null | null | null | plots_matlabformat.py | apiasecki/benthic_calibration | 739a42096018f51af2c2043413b90aa072fcd81d | [
"MIT"
] | null | null | null | # THIS TAKES AN ALREADY FORMATED DATA TABLE from matlab AND DOES THE REGRESSIONS
# IT ALSO MAKES THE PLOTS
# LAST EDITED 11-29-17
import pandas
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from pylab import *
from pyteomics import mass # can do cool isotope math stuff, not used here
# plt.style.use('ggplot')
#plt.style.use('presentation') #have this when i want slides, basically makes it readable from the back row
# import file and make main variables
xls_file = pandas.ExcelFile('out_02_13_18.xlsx')
avg = xls_file.parse('avg')
avg10 = xls_file.parse('avg10')
reps = xls_file.parse('all')
cibs = xls_file.parse('cibs')
lent = xls_file.parse('len')
pyrgo = xls_file.parse('pyrgo')
site = xls_file.parse('sites')
uvi = xls_file.parse('uvi')
elegans = xls_file.parse('helegans')
other = xls_file.parse('assorted')
# big = xls_file.parse('big')
# notbig = xls_file.parse('not')
otherpeople = pandas.ExcelFile('others.xlsx')
travertine = otherpeople.parse('travertine')
leeds = otherpeople.parse('leeds')
ETH = otherpeople.parse('ETH')
cavepearls = otherpeople.parse('cavepearls')
magali = otherpeople.parse('bonifacie')
#others.to_csv('others.csv')
# create 1/T^2 variable
# avg['XT'] = 1e6 /((273.15+avg['temp'])**2)
# avg10['XT'] = 1e6 /((273.15+avg10['temp'])**2)
# reps['XT'] = 1e6 /((273.15+reps['temp'])**2)
# cibs['XT'] = 1e6 /((273.15+cibs['temp'])**2)
# lent['XT'] = 1e6 /((273.15+lent['temp'])**2)
# pyrgo['XT'] = 1e6 /((273.15+pyrgo['temp'])**2)
# uvi['XT'] = 1e6 /((273.15+uvi['temp'])**2)
# elegans['XT'] = 1e6 /((273.15+elegans['temp'])**2)
# other['XT'] = 1e6 /((273.15+other['temp'])**2)
# big['XT'] = 1e6 /((273.15+big['temp'])**2)
# notbig['XT'] = 1e6 /((273.15+notbig['temp'])**2)
# site['XT'] = 1e6 /((273.15+site['temp'])**2)
travertine['XT'] = 1e6 /((273.15+travertine['temp'])**2)
leeds['XT'] = 1e6 /((273.15+leeds['temp'])**2)
ETH['XT'] = 1e6 /((273.15+ETH['temp'])**2)
cavepearls['XT'] = 1e6 /((273.15+cavepearls['temp'])**2)
# regression
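# line() is the linear model y = a*x + b; scipy's curve_fit performs a
# least-squares fit of (a, b) to each (1e6/T^2, D47) data set below.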
def line(x, a, b):
    return a*x+b
popt_all, pcov_all = curve_fit(line, reps['XT'], reps['D47'])
#popt5, pcov5 = curve_fit(line, avg10['XT'], avg10['D47'], sigma=avg10['D47_sterr'])
popt5, pcov5 = curve_fit(line, avg10['XT'], avg10['D47_avg'])
popt, pcov = curve_fit(line, avg['XT'], avg['D47_avg'])
poptt, pcovt = curve_fit(line, travertine['XT'], travertine['D47'])
poptl, pcovl = curve_fit(line, leeds['XT'], leeds['D47'])
poptE, pcovE = curve_fit(line, ETH['XT'], ETH['D47'])
poptc, pcovc = curve_fit(line, cavepearls['XT'], cavepearls['D47'])
poptBF, pcovBF = curve_fit(line, magali['XT'], magali['D47'])
popt18, pcov18 = curve_fit(line, avg10['d18O_avg'], avg10['D47_avg'])
popt18b, pcov18b = curve_fit(line, avg10['T'], avg10['d18O_avg'] )
# poptb, pcovb = curve_fit(line, big['XT'], big['D47_avg'])
# poptnb, pcovnb = curve_fit(line, notbig['XT'], notbig['D47_avg'])
popts, pcovs = curve_fit(line, site['XT'], site['D47_avg'])
str_all = 'y='+ '%.3f' % popt_all[0] + 'x+' + '%.3f' % popt_all[1]
strc = 'y='+ '%.3f' % popt5[0] + 'x+' + '%.3f' % popt5[1]
strt = 'y='+ '%.3f' % poptt[0] + 'x+' + '%.3f' % poptt[1]
strl = 'y='+ '%.3f' % poptl[0] + 'x+' + '%.3f' % poptl[1]
strE = 'y='+ '%.3f' % poptE[0] + 'x+' + '%.3f' % poptE[1]
strcp = 'y='+ '%.3f' % poptc[0] + 'x+' + '%.3f' % poptc[1]
# strb = 'y='+ '%.3f' % poptb[0] + 'x+' + '%.3f' % poptb[1]
# strnb = 'y='+ '%.3f' % poptnb[0] + 'x+' + '%.3f' % poptnb[1]
strs = 'y='+ '%.4f' % popts[0] + 'x+' + '%.3f' % popts[1]
strB = 'y=0.0422x+0.208'
# r squared
residuals = avg10['D47_avg']- line(avg10['XT'], popt5[0], popt5[1])
ss_res = np.sum(residuals**2)
ss_tot = np.sum((avg10['D47_avg']-np.mean(avg10['D47_avg']))**2)
r_squared = 1 - (ss_res / ss_tot)
strr = 'r$^2$=' '%.2f' % r_squared
residuals_reps = reps['D47'] - line(reps['XT'], popt_all[0], popt_all[1])
ss_res_reps = np.sum(residuals_reps**2)
ss_tot_reps = np.sum((reps['D47']-np.mean(reps['D47']))**2)
r_squared_reps = 1 - (ss_res_reps / ss_tot_reps)
strr_reps = 'r$^2$=' '%.2f' % r_squared_reps
residuals_site = site['D47_avg'] - line(site['XT'], popts[0], popts[1])
ss_res_site = np.sum(residuals_site**2)
ss_tot_site = np.sum((site['D47_avg']-np.mean(site['D47_avg']))**2)
r_squared_site = 1 - (ss_res_site / ss_tot_site)
strr_site = 'r$^2$=' '%.2f' % r_squared_site
# confidence interval
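# The 95% bands below are a parametric bootstrap: 10000 (slope, intercept)
# pairs are drawn from a multivariate normal with mean popt and covariance
# pcov, and the 2.5th/97.5th percentiles of the resulting lines are taken
# pointwise along xplot.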
xplot = linspace(np.min(avg10['XT'])-0.1, np.max(avg10['XT'])+0.1, 100)
ps = np.random.multivariate_normal(popt5, pcov5, 10000)
ysample = np.asarray([line(xplot, *pi) for pi in ps])
lower = percentile(ysample, 2.5, axis = 0)
upper = percentile(ysample, 97.5, axis = 0)
ps_all = np.random.multivariate_normal(popt_all, pcov_all, 10000)
ysample_all = np.asarray([line(xplot, *pi) for pi in ps_all])
lower_all = percentile(ysample_all, 2.5, axis = 0)
upper_all = percentile(ysample_all, 97.5, axis = 0)
ps_site = np.random.multivariate_normal(popts, pcovs, 10000)
ysample_site = np.asarray([line(xplot, *pi) for pi in ps_site])
lower_site = percentile(ysample_site, 2.5, axis = 0)
upper_site = percentile(ysample_site, 97.5, axis = 0)
ps_bonifacie = np.random.multivariate_normal(poptBF, pcovBF, 10000)
ysample_BF = np.asarray([line(xplot, *pi) for pi in ps_bonifacie])
lower_BF = percentile(ysample_BF, 2.5, axis= 0)
upper_BF = percentile(ysample_BF, 97.5, axis = 0)
# plot my data plus other peoples basically the OG plot
# fig1 = plt.figure()
# plt.rc('font', family='Helvetica')
# ax = fig1.add_subplot(1,1,1)
# ax.plot(reps['XT'], reps['D47'], '+', color = '0.75', label = 'reps', zorder=1)
# ax.errorbar(avg['XT'], avg['D47_avg'], yerr = avg['D47_sterr'],
# fmt = 'o', color = 'b', ecolor= 'b', label = 'avg')
# ax.errorbar(avg10['XT'], avg10['D47_avg'], yerr = avg10['D47_sterr'],
# fmt = 'o', color = 'm', ecolor = 'm', label = 'avg>10')
# ax.plot(travertine['XT'], travertine['D47'], 'o', color = 'g', label = 'travertine')
# ax.plot(leeds['XT'], leeds['D47'], 'o', color = 'r', label = 'leeds')
# ax.plot(ETH['XT'], ETH['D47'], 'o', color = 'k', label = 'ETH')
# ax.plot(cavepearls['XT'], cavepearls['D47'], 'o', color = 'y', label = 'cavepearls')
# ax.plot(xplot, line(xplot, popt[0], popt[1]), 'k--', linewidth=0.5)
# ax.plot(xplot, line(xplot, popt5[0], popt5[1]), 'm-')
# ax.plot(xplot, line(xplot, poptt[0], poptt[1]), 'g--')
# ax.plot(xplot, line(xplot, poptl[0], poptl[1]), 'r--')
# ax.plot(xplot, line(xplot, poptE[0], poptE[1]), 'k--')
# ax.plot(xplot, line(xplot, poptc[0], poptc[1]), 'y--')
#
# #ax.plot(xplot, lower, 'm--')
# #ax.plot(xplot, upper, 'm--')
# ax.fill_between(xplot, upper, lower, facecolor='m', alpha=0.5)
#
# txt = strc + '\n' + strr + '\n' + strt + '\n' + strl + '\n' + strE + '\n' + strcp
# # txt = strc + '\n' + strr
# ax.text(0.05, 0.72, txt, transform=ax.transAxes)
# ax.set_xlabel('$10^6/$T$^2 ($K$)$')
# ax.set_ylabel('$\Delta_{47}$ (\u2030)')
# ax.legend(loc='lower right')
# ax.set_xlim([11.5, 13.75])
# ax.set_ylim([0.65, 0.85])
#
# # a2 = fig1.add_subplot(2,1,2)
# # a2.plot(avg10['XT'], residuals, 'ko')
# # a2.set_xlabel('$10^6/$T$^2 ($K$)$')
# # a2.set_ylabel('residual')
# #plt.show()
# #avg.to_csv('avgout.csv')
# plt.savefig('12_7_all_noerr.pdf')
# plot just my data
# fig2 = plt.figure()
# plt.rc('font', family='Helvetica')
# ax2 = fig2.add_subplot(1,1,1)
# ax2.plot(reps['XT'], reps['D47'], '+', color = '0.75', label = 'Replicates', zorder=1)
# ax2.errorbar(avg10['XT'], avg10['D47_avg'], yerr = avg10['D47_sterr'],
# fmt = 'o', color = 'm', ecolor = 'm', label = 'Sample Averages')
# # plot regression based on Averages
# ax2.plot(xplot, line(xplot, popt5[0], popt5[1]), 'm-')
# ax2.fill_between(xplot, upper, lower, facecolor='m', alpha=0.5)
# txt2 = strc + '\n' + strr
#
# # plot regression based on Replicates
# # ax2.plot(xplot, line(xplot, popt_all[0], popt_all[1]), 'm-')
# # ax2.fill_between(xplot, upper_all, lower_all, facecolor='m', alpha=0.5)
# # txt2 = str_all + '\n' + strr_reps
# ax2.text(0.47, 0.02, txt2, transform=ax.transAxes)
# ax2.set_xlabel('$10^6/$T$^2 ($K$)$')
# ax2.set_ylabel('$\Delta_{47}$ (\u2030)')
# ax2.legend(loc='upper left')
# ax2.set_xlim([11.5, 13.75])
# ax2.set_ylim([0.65, 0.85])
# plt.savefig('12_7_data_noerr.pdf')
# plot regression comparisons
# fig3 = plt.figure()
# plt.rc('font', family='Helvetica')
# ax3 = fig3.add_subplot(1,1,1)
# ax3.errorbar(avg10['XT'], avg10['D47_avg'], yerr = avg10['D47_sterr'],
# fmt = 'o', color = 'm', ecolor = 'm', label = 'This Study')
# ax3.plot(travertine['XT'], travertine['D47'], 'd', color = 'g', label = 'ETH travertine')
# ax3.plot(leeds['XT'], leeds['D47'], '^', color = 'r', label = 'ETH synthetic')
# # ax3.plot(ETH['XT'], ETH['D47'], 's', color = 'k', label = 'ETH synthetic')
# # ax3.plot(cavepearls['XT'], cavepearls['D47'], 'H', color = 'y', label = 'Cambridge')
# ax3.plot(xplot, line(xplot, popt5[0], popt5[1]), 'm-')
# ax3.plot(xplot, line(xplot, poptt[0], poptt[1]), 'g--')
# ax3.plot(xplot, line(xplot, poptl[0], poptl[1]), 'r--')
# # ax3.plot(xplot, line(xplot, poptE[0], poptE[1]), 'k--')
# # ax3.plot(xplot, line(xplot, poptc[0], poptc[1]), 'y--')
# ax3.plot(xplot, line(xplot, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
# ax3.fill_between(xplot, upper, lower, facecolor='m', alpha=0.5)
# txt3 = strt + '\n' + strl + '\n' + strB + '\n' + strc
# ax3.text(-0.01, 0.55, txt3, transform=ax.transAxes)
# ax3.set_xlabel('$10^6/$T$^2 ($K$)$')
# ax3.set_ylabel('$\Delta_{47}$ (\u2030)')
# ax3.legend(loc='lower right')
# ax3.set_xlim([11.5, 13.75])
# ax3.set_ylim([0.65, 0.85])
# plt.savefig('12_7_compare_noerr.pdf')
# # plot regression comparisons
# fig3 = plt.figure()
# plt.rc('font', family='Helvetica')
# ax3 = fig3.add_subplot(1,1,1)
# ax3.errorbar(avg10['XT'], avg10['D47'], yerr = avg10['D47_sterr'],
# fmt = 'o', color = 'm', ecolor = 'm', label = 'This Study')
# ax3.plot(travertine['XT'], travertine['D47'], 'd', color = 'g', label = 'ETH travertine')
# ax3.plot(leeds['XT'], leeds['D47'], '^', color = 'r', label = 'ETH synthetic')
# # ax3.plot(ETH['XT'], ETH['D47'], 's', color = 'k', label = 'ETH synthetic')
# # ax3.plot(cavepearls['XT'], cavepearls['D47'], 'H', color = 'y', label = 'Cambridge')
# xplot2 = linspace(8-0.1, 14+0.1, 100)
# ax3.plot(xplot, line(xplot, popt5[0], popt5[1]), 'm-')
# ax3.plot(xplot2, line(xplot2, 0.0464, 0.1535), 'g--')
# ax3.plot(xplot2, line(xplot2, 0.0464, 0.132), 'r--')
# # ax3.plot(xplot, line(xplot, poptE[0], poptE[1]), 'k--')
# # ax3.plot(xplot, line(xplot, poptc[0], poptc[1]), 'y--')
# ax3.plot(xplot2, line(xplot2, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
# ax3.fill_between(xplot, upper, lower, facecolor='m', alpha=0.5)
# txt3 = strt + '\n' + strl + '\n' + strB + '\n' + strc
# ax3.text(-0.01, 0.55, txt3, transform=ax.transAxes)
# ax3.set_xlabel('$10^6/$T$^2 ($K$)$')
# ax3.set_ylabel('$\Delta_{47}$ (\u2030)')
# ax3.legend(loc='best')
# ax3.set_xlim([8, 14])
# ax3.set_ylim([0.45, 0.75])
# plt.savefig('12_7_compare_noerr.eps', format = 'eps', dpi = 1000)
# plot species specific
fig4 = plt.figure()
plt.rc('font', family='Helvetica')
ax4 = fig4.add_subplot(1,1,1)
ax4b = ax4.twiny()
# ax4.plot(xplot, line(xplot, popt5[0], popt5[1]), 'k--', label = 'This Study')
# ax4.fill_between(xplot, upper, lower, facecolor='k', alpha=0.2)
ax4.plot(xplot, line(xplot, popts[0], popts[1]), 'k--', label = 'This Study')
ax4.fill_between(xplot, upper_site, lower_site, facecolor='k', alpha=0.2)
ax4.errorbar(cibs['XT'], cibs['D47_avg'], yerr = cibs['D47_sterr'],
             fmt = 'o', color = 'r', ecolor = 'r', label = 'C. pachyderma')
ax4.errorbar(lent['XT'], lent['D47_avg'], yerr = lent['D47_sterr'],
             fmt = '^', color = 'g', ecolor = 'g', label = 'Lenticulina spp')
ax4.errorbar(pyrgo['XT'], pyrgo['D47_avg'], yerr = pyrgo['D47_sterr'],
             fmt = 'h', color = 'c', ecolor = 'c', label = 'Pyrgo spp')
ax4.errorbar(uvi['XT'], uvi['D47_avg'], yerr = uvi['D47_sterr'],
             fmt = 's', color = 'b', ecolor = 'b', label = 'U. mediteranea')
ax4.errorbar(elegans['XT'], elegans['D47_avg'], yerr = elegans['D47_sterr'],
             fmt = 'p', color = 'darkviolet', ecolor = 'darkviolet', label = 'H. elegans')
ax4.errorbar(other['XT'], other['D47_avg'], yerr = other['D47_sterr'],
             fmt = 'd', color = 'm', ecolor = 'm', label = 'Assorted')
# ax4.plot(xplot, line(xplot, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
ax4.set_xlabel('$10^6/$T$^2 ($K$)$')
ax4.set_ylabel('$\Delta_{47}$ (\u2030)')
ax4.legend(loc='lower right')
ax4.set_xlim([11.55, 13.7])
ax4.set_ylim([0.65, 0.85])
ax4Ticks = ax4.get_xticks()
ax4bTicks = ax4Ticks
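# tick_function converts the primary-axis ticks (in 1e6/T^2, with T in kelvin)
# back to temperature in degrees Celsius for the secondary x-axis.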
def tick_function(X):
    V = -273.15 + ((1e6/X)**(0.5))
    return ['%.1f' % z for z in V]
ax4b.set_xticks(ax4bTicks)
ax4b.set_xbound(ax4.get_xbound())
ax4b.set_xticklabels(tick_function(ax4bTicks))
ax4b.set_xlabel('T ($^\circ$C)')
plt.savefig('02_13_18_species_noerr.pdf')
# d18O plotz
fig5 = plt.figure()
plt.rc('font', family= 'Helvetica')
ax5 = fig5.add_subplot(1,1,1)
x18plot = linspace(np.min(avg10['d18O_avg'])-0.1, np.max(avg10['d18O_avg'])+0.1, 100)
# ax5.errorbar(avg10['d18O_avg'], avg10['D47_avg'], yerr = avg10['D47_sterr'], xerr = avg10['d18O_sterr'],
#              fmt = 'd', color = 'k', ecolor = 'k', label = 'Averages >10')
ax5.errorbar(cibs['d18O_avg'], cibs['D47_avg'], yerr = cibs['D47_sterr'], xerr = cibs['d18O_sterr'],
             fmt = 'o', color = 'r', ecolor = 'r', label = 'C. pachyderma')
ax5.errorbar(lent['d18O_avg'], lent['D47_avg'], yerr = lent['D47_sterr'], xerr = lent['d18O_sterr'],
             fmt = '^', color = 'g', ecolor = 'g', label = 'Lenticulina spp')
ax5.errorbar(pyrgo['d18O_avg'], pyrgo['D47_avg'], yerr = pyrgo['D47_sterr'], xerr = pyrgo['d18O_sterr'],
             fmt = 'h', color = 'c', ecolor = 'c', label = 'Pyrgo spp')
ax5.errorbar(uvi['d18O_avg'], uvi['D47_avg'], yerr = uvi['D47_sterr'], xerr = uvi['d18O_sterr'],
             fmt = 's', color = 'b', ecolor = 'b', label = 'U. mediteranea')
ax5.errorbar(elegans['d18O_avg'], elegans['D47_avg'], yerr = elegans['D47_sterr'], xerr = elegans['d18O_sterr'],
             fmt = 'p', color = 'darkviolet', ecolor = 'darkviolet', label = 'H. elegans')
ax5.errorbar(other['d18O_avg'], other['D47_avg'], yerr = other['D47_sterr'], xerr = other['d18O_sterr'],
             fmt = 'd', color = 'm', ecolor = 'm', label = 'Assorted')
ax5.plot(x18plot, line(x18plot, popt18[0], popt18[1]), 'k--')
ax5.legend(loc = 'upper left')
ax5.set_xlabel('$\delta^{18}$O (\u2030)')
ax5.set_ylabel('$\Delta_{47}$ (\u2030)')
plt.savefig('02_13_18_d18O.pdf')
# site average plot
fig6 = plt.figure()
plt.rc('font', family = 'Helvetica')
ax6 = fig6.add_subplot(1,1,1)
ax6b = ax6.twiny()
ax6.plot(xplot, line(xplot, popts[0], popts[1]), 'k--', label = 'This Study')
ax6.fill_between(xplot, upper_site, lower_site, facecolor='k', alpha=0.2)
# ax6.plot(xplot, line(xplot, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
ax6.errorbar(site['XT'], site['D47_avg'], yerr = site['D47_sterr'],
             fmt = 'o', color = 'k', ecolor = 'k', label = 'Site Averages', zorder=10)
ax6.plot(reps['XT'], reps['D47'], '+', color = '0.75', label = 'Replicates', zorder=1)
ax6.plot(xplot, line(xplot, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
ax6.plot(magali['XT'], magali['D47'], 'd', color = 'c', alpha=0.2, label = 'Bonifacie et al 2017')
ax6.set_xlabel('$10^6/$T$^2 ($K$)$')
ax6.set_ylabel('$\Delta_{47}$ (\u2030)')
ax6.legend(loc='upper left')
ax6.set_xlim([11.55, 13.7])
ax6.set_ylim([0.65, 0.85])
txts = strs + '\n' + strr_site
ax6.text(0.77, 0.02, txts, transform=ax6.transAxes)
ax6Ticks = ax6.get_xticks()
ax6bTicks = ax6Ticks
ax6b.set_xticks(ax6bTicks)
ax6b.set_xbound(ax6.get_xbound())
ax6b.set_xticklabels(tick_function(ax6bTicks))
ax6b.set_xlabel('T ($^\circ$C)')
# ax6b.set_xlim([-1.5, 20])
plt.savefig('02_13_18_site_noerr.pdf')
# different d18O plot
fig7 = plt.figure()
plt.rc('font', family = 'Helvetica')
ax7 = fig7.add_subplot(1,1,1)
x18plot2 = linspace(np.min(avg10['T'])-1, np.max(avg10['T'])+1, 100)
# ax5.errorbar(avg10['d18O_avg'], avg10['D47_avg'], yerr = avg10['D47_sterr'], xerr = avg10['d18O_sterr'],
#              fmt = 'd', color = 'k', ecolor = 'k', label = 'Averages >10')
ax7.errorbar(cibs['T'], cibs['d18O_avg'], yerr = cibs['d18O_sterr'],
             fmt = 'o', color = 'r', ecolor = 'r', label = 'C. pachyderma')
ax7.errorbar(lent['T'], lent['d18O_avg'], yerr = lent['d18O_sterr'],
             fmt = '^', color = 'g', ecolor = 'g', label = 'Lenticulina spp')
ax7.errorbar(pyrgo['T'], pyrgo['d18O_avg'], yerr = pyrgo['d18O_sterr'],
             fmt = 'h', color = 'c', ecolor = 'c', label = 'Pyrgo spp')
ax7.errorbar(uvi['T'], uvi['d18O_avg'], yerr = uvi['d18O_sterr'],
             fmt = 's', color = 'b', ecolor = 'b', label = 'U. mediteranea')
ax7.errorbar(elegans['T'], elegans['d18O_avg'], yerr = elegans['d18O_sterr'],
             fmt = 'p', color = 'darkviolet', ecolor = 'darkviolet', label = 'H. elegans')
ax7.errorbar(other['T'], other['d18O_avg'], yerr = other['d18O_sterr'],
             fmt = 'd', color = 'm', ecolor = 'm', label = 'Assorted')
ax7.plot(x18plot2, line(x18plot2, popt18b[0], popt18b[1]), 'k--')
ax7.legend(loc = 'lower left')
ax7.set_xlabel('T ($^\circ$C)')
ax7.set_ylabel('$\delta^{18}$O (\u2030)')
plt.savefig('02_13_18_d18OvT.pdf')
# DATA COMPARE
fig8 = plt.figure()
plt.rc('font', family = 'Helvetica')
ax8 = fig8.add_subplot(1,1,1)
ax8b = ax8.twiny()
ax8.plot(xplot, line(xplot, popts[0], popts[1]), 'k--', label = 'This Study')
ax8.fill_between(xplot, upper_site, lower_site, facecolor='k', alpha=0.2)
ax8.plot(xplot, line(xplot, 0.0422, 0.2082), 'c--', label = 'Bonifacie et al 2017')
ax8.errorbar(site['XT'], site['D47_avg'], yerr = site['D47_sterr'],
             fmt = 'o', color = 'k', ecolor = 'k', label = 'Site Averages')
# ax8.plot(reps['XT'], reps['D47'], '+', color = '0.75', label = 'Replicates', zorder=1)
# ax8.plot(magali['XT'], magali['D47'], 'o', color = 'blue', label = 'Bonifacie et al 2017')
ax8.fill_between(xplot, upper_BF, lower_BF, facecolor='c', alpha=0.2)
ax8.set_xlabel('$10^6/$T$^2 ($K$)$')
ax8.set_ylabel('$\Delta_{47}$ (\u2030)')
ax8.legend(loc='upper left')
# ax8.set_xlim([11.55, 13.7])
# ax8.set_ylim([0.65, 0.85])
txts = strs + '\n' + strr_site
ax8.text(0.77, 0.02, txts, transform=ax8.transAxes)
ax8Ticks = ax8.get_xticks()
ax8bTicks = ax8Ticks
ax8b.set_xticks(ax8bTicks)
ax8b.set_xbound(ax8.get_xbound())
ax8b.set_xticklabels(tick_function(ax8bTicks))
ax8b.set_xlabel('T ($^\circ$C)')
# ax8b.set_xlim([-1.5, 20])
plt.savefig('02_13_18_compare_magali_noerr.pdf')
| 46.698492 | 112 | 0.620144 | 2,961 | 18,586 | 3.783857 | 0.116852 | 0.017137 | 0.029008 | 0.040164 | 0.535077 | 0.464834 | 0.432703 | 0.37076 | 0.36362 | 0.315602 | 0 | 0.085677 | 0.137146 | 18,586 | 397 | 113 | 46.816121 | 0.612958 | 0.421554 | 0 | 0.123762 | 0 | 0 | 0.175914 | 0.007764 | 0.034653 | 0 | 0 | 0 | 0 | 1 | 0.009901 | false | 0 | 0.029703 | 0.00495 | 0.049505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fef0f20d9583beaad642d2b113a19d757ae987a | 36,252 | py | Python | ai_pomdp_agent.py | Grottoh/Deep-Active-Inference-for-Partially-Observable-MDPs | 11fedf09cefaada3dd60f1af430d59d87cbd706e | [
"MIT"
] | 8 | 2020-09-17T06:48:33.000Z | 2022-01-08T16:51:20.000Z | ai_pomdp_agent.py | pl-robotdecision/Deep-Active-Inference-for-Partially-Observable-MDPs | 11fedf09cefaada3dd60f1af430d59d87cbd706e | [
"MIT"
] | null | null | null | ai_pomdp_agent.py | pl-robotdecision/Deep-Active-Inference-for-Partially-Observable-MDPs | 11fedf09cefaada3dd60f1af430d59d87cbd706e | [
"MIT"
] | 1 | 2021-01-13T12:23:35.000Z | 2021-01-13T12:23:35.000Z | __author__ = "Otto van der Himst"
__credits__ = "Otto van der Himst, Pablo Lanillos"
__version__ = "1.0"
__email__ = "o.vanderhimst@student.ru.nl"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import datetime
import sys
import gym
import torchvision.transforms as T
from PIL import Image
import matplotlib.pyplot as plt
class ReplayMemory():
def __init__(self, capacity, obs_shape, device='cpu'):
self.device=device
self.capacity = capacity # The maximum number of items to be stored in memory
self.obs_shape = obs_shape # the shape of observations
# Initialize (empty) memory tensors
self.obs_mem = torch.empty([capacity]+[dim for dim in self.obs_shape], dtype=torch.float32, device=self.device)
self.action_mem = torch.empty(capacity, dtype=torch.int64, device=self.device)
self.reward_mem = torch.empty(capacity, dtype=torch.int8, device=self.device)
self.done_mem = torch.empty(capacity, dtype=torch.int8, device=self.device)
self.push_count = 0 # The number of times new data has been pushed to memory
def push(self, obs, action, reward, done):
# Store data in memory
self.obs_mem[self.position()] = obs
self.action_mem[self.position()] = action
self.reward_mem[self.position()] = reward
self.done_mem[self.position()] = done
self.push_count += 1
def position(self):
# Returns the next position (index) to which data is pushed
return self.push_count % self.capacity
def sample(self, obs_indices, action_indices, reward_indices, done_indices, max_n_indices, batch_size):
# Fine as long as max_n is not greater than the fewest number of time steps an episode can take
# Pick indices at random
end_indices = np.random.choice(min(self.push_count, self.capacity)-max_n_indices*2, batch_size, replace=False) + max_n_indices
# Correct for sampling near the position where data was last pushed
for i in range(len(end_indices)):
if end_indices[i] in range(self.position(), self.position()+max_n_indices):
end_indices[i] += max_n_indices
# Retrieve the specified indices that come before the end_indices
obs_batch = self.obs_mem[np.array([index-obs_indices for index in end_indices])]
action_batch = self.action_mem[np.array([index-action_indices for index in end_indices])]
reward_batch = self.reward_mem[np.array([index-reward_indices for index in end_indices])]
done_batch = self.done_mem[np.array([index-done_indices for index in end_indices])]
# Correct for sampling over multiple episodes
for i in range(len(end_indices)):
index = end_indices[i]
for j in range(1, max_n_indices):
if self.done_mem[index-j]:
for k in range(len(obs_indices)):
if obs_indices[k] >= j:
obs_batch[i, k] = torch.zeros_like(self.obs_mem[0])
for k in range(len(action_indices)):
if action_indices[k] >= j:
action_batch[i, k] = torch.zeros_like(self.action_mem[0]) # Assigning action '0' might not be the best solution, perhaps as assigning at random, or adding an action for this specific case would be better
for k in range(len(reward_indices)):
if reward_indices[k] >= j:
reward_batch[i, k] = torch.zeros_like(self.reward_mem[0]) # Reward of 0 will probably not make sense for every environment
for k in range(len(done_indices)):
if done_indices[k] >= j:
done_batch[i, k] = torch.zeros_like(self.done_mem[0])
break
return obs_batch, action_batch, reward_batch, done_batch
def get_last_n_obs(self, n):
""" Get the last n observations stored in memory (of a single episode) """
last_n_obs = torch.zeros([n]+[dim for dim in self.obs_shape], device=self.device)
n = min(n, self.push_count)
for i in range(1, n+1):
if self.position() >= i:
if self.done_mem[self.position()-i]:
return last_n_obs
last_n_obs[-i] = self.obs_mem[self.position()-i]
else:
if self.done_mem[-i+self.position()]:
return last_n_obs
last_n_obs[-i] = self.obs_mem[-i+self.position()]
return last_n_obs
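# Minimal usage sketch (illustrative, not from the original repository): the
# index lists passed to sample() mean "this many steps before the sampled end
# index"; the shapes and counts here are arbitrary.
def _replay_memory_demo():
    mem = ReplayMemory(capacity=64, obs_shape=(3, 4, 4))
    for t in range(40):
        mem.push(torch.zeros(3, 4, 4), 0, 1, int(t % 10 == 9))
    # obs at i-2 and i-1, actions at i-2 and i-1, reward at i-1, done at i
    obs_b, act_b, rew_b, done_b = mem.sample([2, 1], [2, 1], [1], [0], 3, 8)
    return obs_b.shape, act_b.shape  # (8, 2, 3, 4, 4) and (8, 2)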
class Model(nn.Module):
def __init__(self, n_inputs, n_outputs, n_hidden=64, lr=1e-3, softmax=False, device='cpu'):
super(Model, self).__init__()
self.n_inputs = n_inputs # Number of inputs
self.n_hidden = n_hidden # Number of hidden units
self.n_outputs = n_outputs # Number of outputs
self.softmax = softmax # If true apply a softmax function to the output
self.fc1 = nn.Linear(self.n_inputs, self.n_hidden) # Hidden layer
self.fc2 = nn.Linear(self.n_hidden, self.n_outputs) # Output layer
self.optimizer = optim.Adam(self.parameters(), lr) # Adam optimizer
self.device = device
self.to(self.device)
def forward(self, x):
# Define the forward pass:
h_relu = F.relu(self.fc1(x))
y = self.fc2(h_relu)
if self.softmax: # If true apply a softmax function to the output
            y = F.softmax(y, dim=-1).clamp(min=1e-9, max=1-1e-9)
return y
class VAE(nn.Module):
# In part taken from:
# https://github.com/pytorch/examples/blob/master/vae/main.py
def __init__(self, n_screens, n_latent_states, lr=1e-5, device='cpu'):
super(VAE, self).__init__()
self.device = device
self.n_screens = n_screens
self.n_latent_states = n_latent_states
# The convolutional encoder
self.encoder = nn.Sequential(
nn.Conv3d(3, 16, (5,5,1), (2,2,1)),
nn.BatchNorm3d(16),
nn.ReLU(inplace=True),
nn.Conv3d(16, 32, (5,5,1), (2,2,1)),
nn.BatchNorm3d(32),
nn.ReLU(inplace=True),
nn.Conv3d(32, 32, (5,5,1), (2,2,1)),
nn.BatchNorm3d(32),
nn.ReLU(inplace=True)
).to(self.device)
# The size of the encoder output
self.conv3d_shape_out = (32, 2, 8, self.n_screens)
self.conv3d_size_out = np.prod(self.conv3d_shape_out)
# The convolutional decoder
self.decoder = nn.Sequential(
nn.ConvTranspose3d(32, 32, (5,5,1), (2,2,1)),
nn.BatchNorm3d(32),
nn.ReLU(inplace=True),
nn.ConvTranspose3d(32, 16, (5,5,1), (2,2,1)),
nn.BatchNorm3d(16),
nn.ReLU(inplace=True),
nn.ConvTranspose3d(16, 3, (5,5,1), (2,2,1)),
nn.BatchNorm3d(3),
nn.ReLU(inplace=True),
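            # NOTE (annotation): the ReLU above feeds into the Sigmoid below,
            # which limits decoder outputs to the range [0.5, 1).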
nn.Sigmoid()
).to(self.device)
# Fully connected layers connected to encoder
self.fc1 = nn.Linear(self.conv3d_size_out, self.conv3d_size_out // 2)
self.fc2_mu = nn.Linear(self.conv3d_size_out // 2, self.n_latent_states)
self.fc2_logvar = nn.Linear(self.conv3d_size_out // 2, self.n_latent_states)
# Fully connected layers connected to decoder
self.fc3 = nn.Linear(self.n_latent_states, self.conv3d_size_out // 2)
self.fc4 = nn.Linear(self.conv3d_size_out // 2, self.conv3d_size_out)
self.optimizer = optim.Adam(self.parameters(), lr)
self.to(self.device)
def encode(self, x):
# Deconstruct input x into a distribution over latent states
conv = self.encoder(x)
h1 = F.relu(self.fc1(conv.view(conv.size(0), -1)))
mu, logvar = self.fc2_mu(h1), self.fc2_logvar(h1)
return mu, logvar
def reparameterize(self, mu, logvar):
# Apply reparameterization trick
std = torch.exp(0.5*logvar)
eps = torch.randn_like(std)
return mu + eps*std
def decode(self, z, batch_size=1):
# Reconstruct original input x from the (reparameterized) latent states
h3 = F.relu(self.fc3(z))
deconv_input = self.fc4(h3)
deconv_input = deconv_input.view([batch_size] + [dim for dim in self.conv3d_shape_out])
y = self.decoder(deconv_input)
return y
def forward(self, x, batch_size=1):
# Deconstruct and then reconstruct input x
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
recon = self.decode(z, batch_size)
return recon, mu, logvar
# Reconstruction + KL divergence losses summed over all elements and batch
def loss_function(self, recon_x, x, mu, logvar, batch=True):
if batch:
BCE = F.binary_cross_entropy(recon_x, x, reduction='none')
BCE = torch.sum(BCE, dim=(1, 2, 3, 4))
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
else:
BCE = F.binary_cross_entropy(recon_x, x, reduction='sum')
# see Appendix B from VAE paper:
# Kingma and Welling. Auto-Encoding Variational Bayes. ICLR, 2014
# https://arxiv.org/abs/1312.6114
# 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return BCE + KLD
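# Minimal smoke-test sketch (illustrative, not part of the original file; the
# 37x85 screen size matches the dimensions used elsewhere in this script).
def _vae_smoke_test(device='cpu'):
    vae = VAE(n_screens=4, n_latent_states=32, device=device)
    x = torch.rand(2, 3, 37, 85, 4, device=device)  # (batch, C, H, W, screens)
    recon, mu, logvar = vae(x, batch_size=2)
    loss = torch.mean(vae.loss_function(recon, x, mu, logvar, batch=True))
    loss.backward()
    return loss.item()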
class Agent():
def __init__(self, argv):
self.set_parameters(argv) # Set parameters
self.c = 3 # The number of (color) channels of observations
self.h = 37 # The height of observations (screens)
self.w = 85 # The width of observations (screens)
self.obs_shape = (self.c, self.h, self.w)
self.n_actions = self.env.action_space.n # The number of actions available to the agent
self.freeze_cntr = 0 # Keeps track of when to (un)freeze the target network
# Initialize the networks:
self.vae = VAE(self.n_screens, self.n_latent_states, lr=self.lr_vae, device=self.device)
self.transition_net = Model(self.n_latent_states*2+1, self.n_latent_states, self.n_hidden_trans, lr=self.lr_trans, device=self.device)
self.policy_net = Model(self.n_latent_states*2, self.n_actions, self.n_hidden_pol, lr=self.lr_pol, softmax=True, device=self.device)
self.value_net = Model(self.n_latent_states*2, self.n_actions, self.n_hidden_val, lr=self.lr_val, device=self.device)
self.target_net = Model(self.n_latent_states*2, self.n_actions, self.n_hidden_val, lr=self.lr_val, device=self.device)
self.target_net.load_state_dict(self.value_net.state_dict())
if self.load_network: # If true: load the networks given paths
self.vae.load_state_dict(torch.load(self.network_load_path.format("vae"), map_location=self.device))
self.vae.eval()
self.transition_net.load_state_dict(torch.load(self.network_load_path.format("trans"), map_location=self.device))
self.transition_net.eval()
self.policy_net.load_state_dict(torch.load(self.network_load_path.format("pol"), map_location=self.device))
self.policy_net.eval()
self.value_net.load_state_dict(torch.load(self.network_load_path.format("val"), map_location=self.device))
self.value_net.eval()
if self.load_pre_trained_vae: # If true: load a pre-trained VAE
self.vae.load_state_dict(torch.load(self.pt_vae_load_path, map_location=self.device))
self.vae.eval()
# Initialize the replay memory
self.memory = ReplayMemory(self.memory_capacity, self.obs_shape, device=self.device)
# Used to pre-process the observations (screens)
self.resize = T.Compose([T.ToPILImage(),
                                 T.Resize(40, interpolation=Image.BICUBIC),
T.ToTensor()])
# When sampling from memory at index i, obs_indices indicates that we want observations with indices i-obs_indices, works the same for the others
self.obs_indices = [(self.n_screens+1)-i for i in range(self.n_screens+2)]
self.action_indices = [2, 1]
self.reward_indices = [1]
self.done_indices = [0]
        self.max_n_indices = max(self.obs_indices + self.action_indices + self.reward_indices + self.done_indices) + 1 # the largest look-back over all index lists
def set_parameters(self, argv):
# The default parameters
default_parameters = {'run_id':"_rX", 'device':'cuda',
'env':'CartPole-v1', 'n_episodes':5000,
'n_screens':4, 'n_latent_states':32, 'lr_vae':1e-5, 'alpha':25000,
'n_hidden_trans':64, 'lr_trans':1e-3,
'n_hidden_pol':64, 'lr_pol':1e-3,
'n_hidden_val':64, 'lr_val':1e-4,
'memory_capacity':65536, 'batch_size':32, 'freeze_period':25,
'Beta':0.99, 'gamma':12.00,
'print_timer':100,
'keep_log':True, 'log_path':"logs/ai_pomdp_log{}.txt", 'log_save_timer':10,
'save_results':True, 'results_path':"results/ai_pomdp_results{}.npz", 'results_save_timer':500,
'save_network':True, 'network_save_path':"networks/ai_pomdp_{}net{}.pth", 'network_save_timer':500,
'load_network':False, 'network_load_path':"networks/ai_pomdp_{}net_rX.pth",
'pre_train_vae':False, 'pt_vae_n_episodes':500, 'pt_vae_plot':False,
'load_pre_trained_vae':True, 'pt_vae_load_path':"networks/pre_trained_vae/vae_n{}_end.pth"}
# Possible commands:
# python ai_pomdp_agent.py device=cuda:0
# python ai_pomdp_agent.py device=cuda:0 load_pre_trained_vae=False pre_train_vae=True
# Adjust the custom parameters according to the arguments in argv
custom_parameters = default_parameters.copy()
custom_parameter_msg = "Custom parameters:\n"
for arg in argv:
key, value = arg.split('=')
if key in custom_parameters:
custom_parameters[key] = value
custom_parameter_msg += " {}={}\n".format(key, value)
else:
print("Argument {} is unknown, terminating.".format(arg))
sys.exit()
def interpret_boolean(param):
if type(param) == bool:
return param
elif param in ['True', '1']:
return True
elif param in ['False', '0']:
return False
else:
sys.exit("param '{}' cannot be interpreted as boolean".format(param))
# Set all parameters
self.run_id = custom_parameters['run_id'] # Is appended to paths to distinguish between runs
self.device = custom_parameters['device'] # The device used to run the code
self.env = gym.make(custom_parameters['env']) # The environment in which to train
self.n_episodes = int(custom_parameters['n_episodes']) # The number of episodes for which to train
# Set number of hidden nodes and learning rate for each network
        self.n_screens = int(custom_parameters['n_screens']) # The number of observations (screens) that are passed to the VAE
self.n_latent_states = int(custom_parameters['n_latent_states'])
self.lr_vae = float(custom_parameters['lr_vae'])
self.alpha = int(custom_parameters['alpha']) # Used to scale down the VAE's loss
self.n_hidden_trans = int(custom_parameters['n_hidden_trans'])
self.lr_trans = float(custom_parameters['lr_trans'])
self.n_hidden_val = int(custom_parameters['n_hidden_val'])
self.lr_val = float(custom_parameters['lr_val'])
self.n_hidden_pol = int(custom_parameters['n_hidden_pol'])
self.lr_pol = float(custom_parameters['lr_pol'])
self.memory_capacity = int(custom_parameters['memory_capacity']) # The maximum number of items to be stored in memory
self.batch_size = int(custom_parameters['batch_size']) # The mini-batch size
self.freeze_period = int(custom_parameters['freeze_period']) # The number of time-steps the target network is frozen
self.Beta = float(custom_parameters['Beta']) # The discount rate
self.gamma = float(custom_parameters['gamma']) # A precision parameter
self.print_timer = int(custom_parameters['print_timer']) # Print progress every print_timer episodes
self.keep_log = interpret_boolean(custom_parameters['keep_log']) # If true keeps a (.txt) log concerning data of this run
self.log_path = custom_parameters['log_path'].format(self.run_id) # The path to which the log is saved
self.log_save_timer = int(custom_parameters['log_save_timer']) # The number of episodes after which the log is saved
self.save_results = interpret_boolean(custom_parameters['save_results']) # If true saves the results to an .npz file
self.results_path = custom_parameters['results_path'].format(self.run_id) # The path to which the results are saved
self.results_save_timer = int(custom_parameters['results_save_timer']) # The number of episodes after which the results are saved
self.save_network = interpret_boolean(custom_parameters['save_network']) # If true saves the policy network (state_dict) to a .pth file
self.network_save_path = custom_parameters['network_save_path'].format("{}", self.run_id) # The path to which the network is saved
self.network_save_timer = int(custom_parameters['network_save_timer']) # The number of episodes after which the network is saved
self.load_network = interpret_boolean(custom_parameters['load_network']) # If true loads a (policy) network (state_dict) instead of initializing a new one
        self.network_load_path = custom_parameters['network_load_path'] # The path from which to load the network
self.pre_train_vae = interpret_boolean(custom_parameters['pre_train_vae']) # If true pre trains the vae
        self.pt_vae_n_episodes = int(custom_parameters['pt_vae_n_episodes']) # The number of episodes for which to pre-train the vae
self.pt_vae_plot = interpret_boolean(custom_parameters['pt_vae_plot']) # If true plots stuff while training the vae
self.load_pre_trained_vae = interpret_boolean(custom_parameters['load_pre_trained_vae']) # If true loads a pre trained vae
self.pt_vae_load_path = custom_parameters['pt_vae_load_path'].format(self.n_latent_states) # The path from which to load the pre trained vae
msg = "Default parameters:\n"+str(default_parameters)+"\n"+custom_parameter_msg
print(msg)
if self.keep_log: # If true: write a message to the log
self.record = open(self.log_path, "a")
self.record.write("\n\n-----------------------------------------------------------------\n")
self.record.write("File opened at {}\n".format(datetime.datetime.now()))
self.record.write(msg+"\n")
def get_screen(self, env, device='cuda', displacement_h=0, displacement_w=0):
"""
Get a (pre-processed, i.e. cropped, cart-focussed) observation/screen
For the most part taken from:
https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
"""
def get_cart_location(env, screen_width):
world_width = env.x_threshold * 2
scale = screen_width / world_width
return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
# Returned screen requested by gym is 400x600x3, but is sometimes larger
# such as 800x1200x3. Transpose it into torch order (CHW).
screen = env.render(mode='rgb_array').transpose((2, 0, 1))
# Cart is in the lower half, so strip off the top and bottom of the screen
_, screen_height, screen_width = screen.shape
screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)]
view_width = int(screen_width * 0.6)
cart_location = get_cart_location(env, screen_width)+displacement_w
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
# Strip off the edges, so that we have a square image centered on a cart
screen = screen[:, :, slice_range]
# Convert to float, rescale, convert to torch tensor
# (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Resize, and add a batch dimension (BCHW)
screen = self.resize(screen).unsqueeze(0).to(device)
screen = screen[:, :, 2:-1, 3:-2]
return screen
def select_action(self, obs):
with torch.no_grad():
            # Derive a distribution over states from the last n observations (screens):
prev_n_obs = self.memory.get_last_n_obs(self.n_screens-1)
x = torch.cat((prev_n_obs, obs), dim=0).view(1, self.c, self.h, self.w, self.n_screens)
state_mu, state_logvar = self.vae.encode(x)
# Determine a distribution over actions given the current observation:
x = torch.cat((state_mu, torch.exp(state_logvar)), dim=1)
policy = self.policy_net(x)
return torch.multinomial(policy, 1)
def get_mini_batches(self):
# Retrieve transition data in mini batches
all_obs_batch, all_actions_batch, reward_batch_t1, done_batch_t2 = self.memory.sample(
self.obs_indices, self.action_indices, self.reward_indices,
self.done_indices, self.max_n_indices, self.batch_size)
# Retrieve a batch of observations for 3 consecutive points in time
obs_batch_t0 = all_obs_batch[:, 0:self.n_screens, :, :, :].view(self.batch_size, self.c, self.h, self.w, self.n_screens)
obs_batch_t1 = all_obs_batch[:, 1:self.n_screens+1, :, :, :].view(self.batch_size, self.c, self.h, self.w, self.n_screens)
obs_batch_t2 = all_obs_batch[:, 2:self.n_screens+2, :, :, :].view(self.batch_size, self.c, self.h, self.w, self.n_screens)
# Retrieve a batch of distributions over states for 3 consecutive points in time
state_mu_batch_t0, state_logvar_batch_t0 = self.vae.encode(obs_batch_t0)
state_mu_batch_t1, state_logvar_batch_t1 = self.vae.encode(obs_batch_t1)
state_mu_batch_t2, state_logvar_batch_t2 = self.vae.encode(obs_batch_t2)
# Combine the sufficient statistics (mean and variance) into a single vector
state_batch_t0 = torch.cat((state_mu_batch_t0, torch.exp(state_logvar_batch_t0)), dim=1)
state_batch_t1 = torch.cat((state_mu_batch_t1, torch.exp(state_logvar_batch_t1)), dim=1)
state_batch_t2 = torch.cat((state_mu_batch_t2, torch.exp(state_logvar_batch_t2)), dim=1)
# Reparameterize the distribution over states for time t1
z_batch_t1 = self.vae.reparameterize(state_mu_batch_t1, state_logvar_batch_t1)
# Retrieve the agent's action history for time t0 and time t1
action_batch_t0 = all_actions_batch[:, 0].unsqueeze(1)
action_batch_t1 = all_actions_batch[:, 1].unsqueeze(1)
# At time t0 predict the state at time t1:
X = torch.cat((state_batch_t0.detach(), action_batch_t0.float()), dim=1)
pred_batch_t0t1 = self.transition_net(X)
# Determine the prediction error wrt time t0-t1:
pred_error_batch_t0t1 = torch.mean(F.mse_loss(
pred_batch_t0t1, state_mu_batch_t1, reduction='none'), dim=1).unsqueeze(1)
return (state_batch_t1, state_batch_t2, action_batch_t1,
reward_batch_t1, done_batch_t2, pred_error_batch_t0t1,
obs_batch_t1, state_mu_batch_t1,
state_logvar_batch_t1, z_batch_t1)
def compute_value_net_loss(self, state_batch_t1, state_batch_t2,
action_batch_t1, reward_batch_t1,
done_batch_t2, pred_error_batch_t0t1):
with torch.no_grad():
# Determine the action distribution for time t2:
policy_batch_t2 = self.policy_net(state_batch_t2)
# Determine the target EFEs for time t2:
target_EFEs_batch_t2 = self.target_net(state_batch_t2)
# Weigh the target EFEs according to the action distribution:
weighted_targets = ((1-done_batch_t2) * policy_batch_t2 *
target_EFEs_batch_t2).sum(-1).unsqueeze(1)
# Determine the batch of bootstrapped estimates of the EFEs:
EFE_estimate_batch = -reward_batch_t1 + pred_error_batch_t0t1 + self.Beta * weighted_targets
# Determine the EFE at time t1 according to the value network:
EFE_batch_t1 = self.value_net(state_batch_t1).gather(1, action_batch_t1)
# Determine the MSE loss between the EFE estimates and the value network output:
value_net_loss = F.mse_loss(EFE_estimate_batch, EFE_batch_t1)
return value_net_loss
def compute_VFE(self, vae_loss, state_batch_t1, pred_error_batch_t0t1):
# Determine the action distribution for time t1:
policy_batch_t1 = self.policy_net(state_batch_t1)
# Determine the EFEs for time t1:
EFEs_batch_t1 = self.value_net(state_batch_t1)
# Take a gamma-weighted Boltzmann distribution over the EFEs:
boltzmann_EFEs_batch_t1 = torch.softmax(-self.gamma * EFEs_batch_t1, dim=1).clamp(min=1e-9, max=1-1e-9)
# Weigh them according to the action distribution:
energy_term_batch = -(policy_batch_t1 * torch.log(boltzmann_EFEs_batch_t1)).sum(-1).unsqueeze(1)
# Determine the entropy of the action distribution
entropy_batch = -(policy_batch_t1 * torch.log(policy_batch_t1)).sum(-1).unsqueeze(1)
# Determine the VFE, then take the mean over all batch samples:
VFE_batch = vae_loss + pred_error_batch_t0t1 + (energy_term_batch - entropy_batch)
VFE = torch.mean(VFE_batch)
return VFE
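    # Annotation (not in the original source): the VFE assembled above sums
    #   vae_loss              - reconstruction error plus the KL term of the
    #                           state model,
    #   pred_error_batch_t0t1 - the transition network's prediction error, and
    #   energy - entropy      - which equals KL(policy || softmax(-gamma * EFE))
    #                           and vanishes when the policy matches the
    #                           Boltzmann distribution over negative EFEs.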
def learn(self):
# If there are not enough transitions stored in memory, return
if self.memory.push_count - self.max_n_indices*2 < self.batch_size:
return
# After every freeze_period time steps, update the target network
if self.freeze_cntr % self.freeze_period == 0:
self.target_net.load_state_dict(self.value_net.state_dict())
self.freeze_cntr += 1
# Retrieve mini-batches of data from memory
(state_batch_t1, state_batch_t2, action_batch_t1,
reward_batch_t1, done_batch_t2, pred_error_batch_t0t1,
obs_batch_t1, state_mu_batch_t1,
state_logvar_batch_t1, z_batch_t1) = self.get_mini_batches()
# Determine the reconstruction loss for time t1
recon_batch = self.vae.decode(z_batch_t1, self.batch_size)
vae_loss = self.vae.loss_function(recon_batch, obs_batch_t1, state_mu_batch_t1, state_logvar_batch_t1, batch=True) / self.alpha
# Compute the value network loss:
value_net_loss = self.compute_value_net_loss(state_batch_t1, state_batch_t2,
action_batch_t1, reward_batch_t1,
done_batch_t2, pred_error_batch_t0t1)
# Compute the variational free energy:
VFE = self.compute_VFE(vae_loss, state_batch_t1.detach(), pred_error_batch_t0t1)
# Reset the gradients:
self.vae.optimizer.zero_grad()
self.policy_net.optimizer.zero_grad()
self.transition_net.optimizer.zero_grad()
self.value_net.optimizer.zero_grad()
# Compute the gradients:
VFE.backward(retain_graph=True)
value_net_loss.backward()
# Perform gradient descent:
self.vae.optimizer.step()
self.policy_net.optimizer.step()
self.transition_net.optimizer.step()
self.value_net.optimizer.step()
def train_vae(self):
""" Train the VAE separately. """
vae_batch_size = 256
vae_obs_indices = [self.n_screens-i for i in range(self.n_screens)]
losses = []
for ith_episode in range(self.pt_vae_n_episodes):
self.env.reset()
obs = self.get_screen(self.env, self.device)
done = False
while not done:
action = self.env.action_space.sample()
self.memory.push(obs, -99, -99, done)
_, _, done, _ = self.env.step(action)
obs = self.get_screen(self.env, self.device)
if self.memory.push_count > vae_batch_size + self.n_screens*2:
obs_batch, _, _, _ = self.memory.sample(vae_obs_indices, [], [], [], len(vae_obs_indices), vae_batch_size)
obs_batch = obs_batch.view(vae_batch_size, self.c, self.h, self.w, self.n_screens)
recon, mu, logvar = self.vae.forward(obs_batch, vae_batch_size)
loss = torch.mean(self.vae.loss_function(recon, obs_batch, mu, logvar))
self.vae.optimizer.zero_grad()
loss.backward()
self.vae.optimizer.step()
                    losses.append(loss.item()) # store a plain float rather than the graph-attached tensor
print("episode %4d: vae_loss=%5.2f"%(ith_episode, loss.item()))
if done:
                if ith_episode > 0 and ith_episode % 10 == 0 and self.pt_vae_plot: # plot diagnostics every 10 episodes
plt.plot(losses)
plt.show()
plt.plot(losses[-1000:])
plt.show()
for i in range(self.n_screens):
plt.imshow(obs_batch[0, :, :, :, i].detach().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
plt.show()
plt.imshow(recon[0, :, :, :, i].detach().cpu().squeeze(0).permute(1, 2, 0).numpy(), interpolation='none')
plt.show()
if done:
self.memory.push(obs, -99, -99, done)
if ith_episode > 0 and ith_episode % 100 == 0:
torch.save(self.vae.state_dict(), "networks/pre_trained_vae/vae_n{}_{:d}.pth".format(
self.n_latent_states, ith_episode))
self.memory.push_count = 0
torch.save(self.vae.state_dict(), "networks/pre_trained_vae/vae_n{}_end.pth".format(self.n_latent_states))
def train(self):
if self.pre_train_vae: # If True: pre-train the VAE
msg = "Environment is: {}\nPre-training vae. Starting at {}".format(self.env.unwrapped.spec.id, datetime.datetime.now())
print(msg)
if self.keep_log:
self.record.write(msg+"\n")
self.train_vae()
msg = "Environment is: {}\nTraining started at {}".format(self.env.unwrapped.spec.id, datetime.datetime.now())
print(msg)
if self.keep_log:
self.record.write(msg+"\n")
results = []
for ith_episode in range(self.n_episodes):
total_reward = 0
self.env.reset()
obs = self.get_screen(self.env, self.device)
done = False
reward = 0
while not done:
action = self.select_action(obs)
self.memory.push(obs, action, reward, done)
_, reward, done, _ = self.env.step(action[0].item())
obs = self.get_screen(self.env, self.device)
total_reward += reward
self.learn()
if done:
self.memory.push(obs, -99, -99, done)
results.append(total_reward)
# Print and keep a (.txt) record of stuff
if ith_episode > 0 and ith_episode % self.print_timer == 0:
avg_reward = np.mean(results)
last_x = np.mean(results[-self.print_timer:])
msg = "Episodes: {:4d}, avg score: {:3.2f}, over last {:d}: {:3.2f}".format(ith_episode, avg_reward, self.print_timer, last_x)
print(msg)
if self.keep_log:
self.record.write(msg+"\n")
if ith_episode % self.log_save_timer == 0:
self.record.close()
self.record = open(self.log_path, "a")
# If enabled, save the results and the network (state_dict)
if self.save_results and ith_episode > 0 and ith_episode % self.results_save_timer == 0:
np.savez("results/intermediary/intermediary_results{}_{:d}".format(self.run_id, ith_episode), np.array(results))
if self.save_network and ith_episode > 0 and ith_episode % self.network_save_timer == 0:
torch.save(self.value_net.state_dict(), "networks/intermediary/intermediary_networks{}_{:d}.pth".format(self.run_id, ith_episode))
self.env.close()
# If enabled, save the results and the network (state_dict)
if self.save_results:
np.savez("results/intermediary/intermediary_results{}_end".format(self.run_id), np.array(results))
np.savez(self.results_path, np.array(results))
if self.save_network:
torch.save(self.value_net.state_dict(), "networks/intermediary/intermediary_networks{}_end.pth".format(self.run_id))
torch.save(self.value_net.state_dict(), self.network_save_path)
# Print and keep a (.txt) record of stuff
msg = "Training finished at {}".format(datetime.datetime.now())
print(msg)
if self.keep_log:
self.record.write(msg)
self.record.close()
if __name__ == "__main__":
agent = Agent(sys.argv[1:])
agent.train() | 50.702098 | 232 | 0.594698 | 4,754 | 36,252 | 4.306689 | 0.127472 | 0.013676 | 0.011722 | 0.011625 | 0.387907 | 0.287926 | 0.229804 | 0.201377 | 0.162255 | 0.148188 | 0 | 0.021237 | 0.305087 | 36,252 | 715 | 233 | 50.702098 | 0.791481 | 0.173784 | 0 | 0.180085 | 0 | 0.002119 | 0.063631 | 0.018343 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052966 | false | 0 | 0.023305 | 0.002119 | 0.129237 | 0.025424 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fefb9f1bc9de1b69287eaed8f707942f874fa69 | 2,465 | py | Python | src/BribeNet/bribery/temporal/action/briberyAction.py | RobMurray98/BribeNet | 09ddd8f15d9ab5fac44ae516ed92c6ba5e5119bc | [
"MIT"
] | null | null | null | src/BribeNet/bribery/temporal/action/briberyAction.py | RobMurray98/BribeNet | 09ddd8f15d9ab5fac44ae516ed92c6ba5e5119bc | [
"MIT"
] | null | null | null | src/BribeNet/bribery/temporal/action/briberyAction.py | RobMurray98/BribeNet | 09ddd8f15d9ab5fac44ae516ed92c6ba5e5119bc | [
"MIT"
] | null | null | null | from abc import ABC, abstractmethod
from typing import List
from BribeNet.helpers.bribeNetException import BribeNetException
class BriberyActionExecutedMultipleTimesException(BribeNetException):
pass
class BriberyActionTimeNotCorrectException(BribeNetException):
pass
class BriberyAction(ABC):
def __init__(self, graph):
from BribeNet.graph.temporal.ratingGraph import TemporalRatingGraph # local import to remove cyclic dependency
from BribeNet.bribery.temporal.briber import GraphNotSubclassOfTemporalRatingGraphException
if not issubclass(graph.__class__, TemporalRatingGraph):
raise GraphNotSubclassOfTemporalRatingGraphException(f"{graph.__class__.__name__} is not a subclass of "
"TemporalRatingGraph")
self.graph = graph
self.__time_step = self.graph.get_time_step()
self.__performed = False
@classmethod
@abstractmethod
def empty_action(cls, graph):
raise NotImplementedError
def perform_action(self):
"""
Perform the action safely
:raises BriberyActionTimeNotCorrectException: if action not at same time step as graph
:raises BriberyActionExecutedMultipleTimesException: if action already executed
"""
if not self.__performed:
if self.__time_step == self.graph.get_time_step():
self._perform_action()
self.__performed = True
else:
message = f"The time step of the TemporalRatingGraph ({self.graph.get_time_step()}) is not equal to " \
f"the intended execution time ({self.__time_step})"
raise BriberyActionTimeNotCorrectException(message)
else:
raise BriberyActionExecutedMultipleTimesException()
def get_time_step(self):
return self.__time_step
def get_performed(self):
return self.__performed
@abstractmethod
def _perform_action(self):
"""
Perform the stored bribery actions simultaneously
"""
raise NotImplementedError
@abstractmethod
def is_bribed(self, node_id) -> (bool, List[int]):
"""
Determine if the bribery action results in a node being bribed this time step
:param node_id: the node
:return: whether the node is bribed this time step
"""
raise NotImplementedError
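# Minimal illustrative subclass (an assumption about intended use, not part of
# the original module): it shows the three members a concrete action provides.
class _NullBriberyAction(BriberyAction):

    @classmethod
    def empty_action(cls, graph):
        return cls(graph)

    def _perform_action(self):
        pass  # a no-op bribery step

    def is_bribed(self, node_id) -> (bool, List[int]):
        return False, []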
| 34.71831 | 119 | 0.669371 | 236 | 2,465 | 6.762712 | 0.338983 | 0.06015 | 0.037594 | 0.030075 | 0.095238 | 0.082707 | 0.045113 | 0.045113 | 0.045113 | 0 | 0 | 0 | 0.270994 | 2,465 | 70 | 120 | 35.214286 | 0.888147 | 0.177688 | 0 | 0.238095 | 0 | 0 | 0.105455 | 0.029091 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.047619 | 0.119048 | 0.047619 | 0.404762 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fefd6e5e58e9aad34db52d1d9799e3ab83c554a | 4,269 | py | Python | jax_md/quantity.py | niklasschmitz/jax-md | 5702b9a852d990ff1f7bae87bf06842734df1a90 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | jax_md/quantity.py | niklasschmitz/jax-md | 5702b9a852d990ff1f7bae87bf06842734df1a90 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | jax_md/quantity.py | niklasschmitz/jax-md | 5702b9a852d990ff1f7bae87bf06842734df1a90 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Describes different physical quantities."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from jax import grad, vmap
import jax.numpy as np
from jax_md import space
from jax_md.util import *
from functools import partial
def force(energy):
"""Computes the force as the negative gradient of an energy."""
return grad(lambda R, *args, **kwargs: -energy(R, *args, **kwargs))
def canonicalize_force(energy_or_force, quantity):
if quantity is Force:
return energy_or_force
elif quantity is Energy:
return force(energy_or_force)
raise ValueError(
'Expected quantity to be Energy or Force, but found {}'.format(quantity))
class Force(object):
"""Dummy object to denote whether a quantity is a force."""
pass
Force = Force()
class Energy(object):
"""Dummy object to denote whether a quantity is an energy."""
pass
Energy = Energy()
class Dynamic(object):
"""Object used to denote dynamic shapes and species."""
pass
Dynamic = Dynamic()
def kinetic_energy(V, mass=1.0):
"""Computes the kinetic energy of a system with some velocities."""
return 0.5 * np.sum(mass * V ** 2)
def temperature(V, mass=1.0):
"""Computes the temperature of a system with some velocities."""
N, dim = V.shape
return np.sum(mass * V ** 2) / (N * dim)
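# Illustrative sanity check (not part of jax_md): with unit masses the
# estimator above is <m v^2> / dim, i.e. k_B T in reduced units.
def _temperature_sanity_check():
  V = np.ones((10, 3))
  return temperature(V)  # == 1.0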
def canonicalize_mass(mass):
if isinstance(mass, float):
return mass
elif isinstance(mass, np.ndarray):
if len(mass.shape) == 2 and mass.shape[1] == 1:
return mass
elif len(mass.shape) == 1:
return np.reshape(mass, (mass.shape[0], 1))
elif len(mass.shape) == 0:
return mass
elif (isinstance(mass, f32) or
isinstance(mass, f64)):
return mass
msg = (
      'Expected mass to be either a floating point number or a one-dimensional '
'ndarray. Found {}.'.format(mass)
)
raise ValueError(msg)
def cosine_angles(dR):
"""Returns cosine of angles for all atom triplets.
Args:
dR: Matrix of displacements; ndarray(shape=[num_atoms, num_neighbors,
spatial_dim]).
Returns:
Tensor of cosine of angles;
ndarray(shape=[num_atoms, num_neighbors, num_neighbors]).
"""
def angle_between_two_vectors(dR_12, dR_13):
dr_12 = space.distance(dR_12) + 1e-7
dr_13 = space.distance(dR_13) + 1e-7
cos_angle = np.dot(dR_12, dR_13) / dr_12 / dr_13
return np.clip(cos_angle, -1.0, 1.0)
angles_between_all_triplets = vmap(
vmap(vmap(angle_between_two_vectors, (0, None)), (None, 0)), 0)
return angles_between_all_triplets(dR, dR)
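# Minimal usage sketch (illustrative): orthogonal displacement vectors give
# zero cosines off the diagonal and (clipped) ones on it.
def _cosine_angles_demo():
  dR = np.array([[[1.0, 0.0], [0.0, 1.0]]])  # (1 atom, 2 neighbors, 2D)
  return cosine_angles(dR)  # shape (1, 2, 2); off-diagonal entries ~ 0.0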
def pair_correlation(displacement_or_metric, rs, sigma):
metric = space.canonicalize_displacement_or_metric(displacement_or_metric)
sigma = np.array(sigma, f32)
# NOTE(schsam): This seems rather harmless, but possibly something to look at
rs = np.array(rs + 1e-7, f32)
  # TODO(schsam): Get this working with cell lists.
def compute_fun(R, **dynamic_kwargs):
_metric = partial(metric, **dynamic_kwargs)
_metric = space.map_product(_metric)
dr = _metric(R, R)
# TODO(schsam): Clean up.
dr = np.where(dr > f32(1e-7), dr, f32(1e7))
dim = R.shape[1]
exp = np.exp(-f32(0.5) * (dr[:, :, np.newaxis] - rs) ** 2 / sigma ** 2)
gaussian_distances = exp / np.sqrt(2 * np.pi * sigma ** 2)
return np.mean(gaussian_distances, axis=1) / rs ** (dim - 1)
return compute_fun
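# Usage sketch (illustrative; assumes the free-space displacement helper):
#   displacement_fn, _ = space.free()
#   g_fn = pair_correlation(displacement_fn, np.linspace(0.5, 2.5, 50), 0.1)
#   g = g_fn(R)  # R: (N, dim) positions -> per-particle smoothed g(r)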
def box_size_at_number_density(
particle_count, number_density, spatial_dimension):
return np.power(particle_count / number_density, 1 / spatial_dimension)
def bulk_modulus(elastic_tensor):
return np.einsum('iijj->', elastic_tensor) / elastic_tensor.shape[0] ** 2
| 29.441379 | 79 | 0.69829 | 643 | 4,269 | 4.497667 | 0.339036 | 0.020747 | 0.017981 | 0.008299 | 0.118257 | 0.091286 | 0.029737 | 0.029737 | 0.029737 | 0 | 0 | 0.025123 | 0.188803 | 4,269 | 144 | 80 | 29.645833 | 0.809991 | 0.309909 | 0 | 0.089744 | 0 | 0 | 0.051478 | 0 | 0 | 0 | 0 | 0.006944 | 0 | 1 | 0.141026 | false | 0.038462 | 0.102564 | 0.025641 | 0.487179 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2fefd9d669adc52d9fc25135e72257c4fc66d661 | 9,748 | py | Python | nsfiu/support/Python/Micronix.py | TobiasSchof/KPIC | 619ea31d9705c0168220d79f5d2f6f70535ca416 | [
"Unlicense"
] | null | null | null | nsfiu/support/Python/Micronix.py | TobiasSchof/KPIC | 619ea31d9705c0168220d79f5d2f6f70535ca416 | [
"Unlicense"
] | null | null | null | nsfiu/support/Python/Micronix.py | TobiasSchof/KPIC | 619ea31d9705c0168220d79f5d2f6f70535ca416 | [
"Unlicense"
] | null | null | null | from serial import Serial
from time import sleep
from telnetlib import Telnet
from logging import debug
class TimeoutError(Exception):
"""An error to be thrown for a movement timeout"""
pass
# TODO: Can we check which axes are connected in the open_* methods?
class Micronix_Device():
"""Class for controlling Micronex stages"""
def __init__(self):
"""Constructor
Args:
precision = how far a controller can report a stage as being
from target before it is considered on target
"""
# a variable to keep track of what kind of connection is open
self.con_type = None
def open_Serial(self, devnm:str, baud:int):
"""Opens a serial connection to a device
Does nothing if a connection is already open
Args:
devnm = the device name to connect to
baud = the baudrate
"""
# if connection is already open, return
if not self.con_type is None:
debug("Device is already open. Cannot open {}".format(devnm))
return
debug("Connecting to serial: {}:{}...".format(devnm, baud))
self.con = Serial()
self.con.port = devnm
self.con.baudrate = baud
self.con.timeout = 1
self.con.open()
self.con_type = "serial"
def open_Telnet(self, host:str, port:int):
"""Opens a telnet connection to device
Does nothing if a connection is already open
Args:
host = the hostname of the telnet connection
port = the port to connect to
"""
# if connection is already open, return
if not self.con_type is None:
debug("Device is already opeen. Cannot open {}".format(host))
return
debug("Connecting to telnet: {}:{}...".format(host, port))
self.con = Telnet()
self.con.timeout = 1
self.con.open(host, port)
self.con_type = "telnet"
#TODO: check: is first message ignored?
def close_Connection(self):
"""Closes whatever connection is currently open"""
# if there is no connection, return
if self.con_type is None:
debug("Connection is not open, doing nothing.")
return
debug("Closing {} connection".format(self.con_type))
self.con.close()
self.con_type = None
def close(self):
"""Closes the device connection if open"""
if self.con_type is None:
debug("Cannot close device connection. Already closed.")
else:
self.con.close()
self.con_type = None
def move(self, newPOS:dict, isBlocking:bool = False):
"""Moves this device to the given position
Args:
newPOS = a dict where keys correspond to axis, values to positions
isBlocking = a boolean that will block the program until the move is
done.
"""
debug("Performing the following moves: {}".format(newPOS))
# for each axis, prep a syncronous move
for axis in newPOS:
self._write("{}MVA{}".format(axis, newPOS[axis]))
        # if blocking was requested, poll until each axis reports it has
        # stopped moving (time out after roughly 5 seconds per axis)
cnt = 0
if isBlocking:
for axis in newPOS:
while self.isMoving(axis)[axis] and cnt < 500:
sleep(.01)
cnt += 1
def isMoving(self, axes:list) -> dict:
"""A method to check whether the given axes are moving
Args:
axes = a list of axes to check
Returns:
dict = keys as axes, values as True/False for moving/not
"""
# if axes is a singleton, make it a list
if type(axes) is int: axes = [axes]
# for each axis, get status and check bit 3
ret = {axis:not (int(self._query("{}STA?".format(axis))) & 8) for axis in axes}
# return values
return ret
def isConnected(self) -> bool:
"""Returns whether this device is connected
Returns:
bool = True if device is connected, False otherwise
"""
return self.con_type is not None
def home(self, axes:list, isBlocking:bool = False):
"""A method to home the given axes
Args:
axes = the axes to home. Also accepts an int
isBlocking = whether this method should block execution until all
home commands are complete
"""
debug("Homing the following axes: {}".format(axes))
# convert axes to list if an int was provided
if type(axes) is int: axes = [axes]
# request a home for each axis
for axis in axes:
self._write("{}HOM".format(axis))
# block for no more than 5 seconds
cnt = 0
if isBlocking:
for axis in axes:
while not self.isHomed(axis)[axis] and cnt < 500:
sleep(.01)
cnt += 1
def setLoopState(self, target:dict):
"""A method to set the loop state
Args:
target = keys as axes, values as a valid FBK state
"""
debug("Setting the following loop states: {}".format(target))
for axis in target: self._write("{}FBK{}".format(axis, target[axis]))
def getLoopState(self, axes:list) -> dict:
"""A method to change the loop state
Args:
axes = list of axes (or axis) to query
Returns:
dict = keys as axes, values as ints representing FBK options for controller
"""
debug("Requesting loop state for the following axes: {}".format(axes))
# if a single axis was provided, format it as a list
if type(axes) is int: axes = [axes]
# check FBK status for each axis
ret = {axis:self._query("{}FBK?".format(axis)) for axis in axes}
return ret
def isHomed(self, axes:list) -> dict:
"""A method to check whether the given axes are homed
Args:
axes = which axes (or axis) to query. Also accepts an int
Returns:
dict = keys as axis, values as True/False for homed/not
"""
debug("Checking if the following axes are homed: {}".format(axes))
# convert axes to list if an int was provided
if type(axes) is int: axes = [axes]
# request homed status for each axis
ret = {axis:bool(int(self._query("{}HOM?".format(axis)))) for axis in axes}
return ret
def getPos(self, axes:list) -> dict:
"""A method to return the current position of the device.
Args:
axes = the axes to query, also accepts an int
Returns:
dict = keys correspond to axes, values to current position
"""
debug("Getting the current position for the following axes: {}".format(axes))
# if a single axis was supplied, store it as a list
if type(axes) is int: axes = [axes]
# query position for each axis and store results
# the controller returns position as theoretical,actual
ret = {axis:self._query("{}POS?".format(axis)).split(",")[1] for axis in axes}
return ret
def getError(self, axes:list) -> dict:
"""A method to get the stored error and clear it from controller's memory
Args:
            axes = the list of axes to get errors for; also accepts an int
Returns:
dict = keys correspond to axes, values to errors
"""
debug("Getting stored errors for the following axes: {}".format(axes))
# if a single axis was supplied, store it as a list
if type(axes) is int: axes = [axes]
# if there's no error, we'll get a timeout, so set timeout to a short time
tmp = self.con.timeout
self.con.timeout = .1
# query error for each axis and store results
ret = {axis:self._query("{}ERR?".format(axis)) for axis in axes}
ret = {axis:0 if ret[axis] == '' else int(ret[axis].split(" ")[1]) for axis in ret}
# change timeout back
self.con.timeout = tmp
return ret
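    # Illustrative usage sketch (host, port and axis numbers are assumptions,
    # not values taken from this module):
    #     dev = Micronix_Device()
    #     dev.open_Telnet('192.168.1.100', 23)
    #     dev.home([1, 2], isBlocking=True)
    #     dev.move({1: 5.0, 2: -2.5}, isBlocking=True)
    #     print(dev.getPos([1, 2]))
    #     dev.close()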
def _write(self, MSG:str):
"""Formats a <MSG> to be sent to the stage
        <MSG> should be a normal string and not end in CR/LF
Args:
MSG = a string containing a command (e.g. "1VER?")
"""
# raise an error if MSG isn't a string
try: assert type(MSG) is str
except AssertionError as e:
raise ValueError("Message to send must be a string.")
# if there is already a carriage return, no need to add one
if MSG.endswith("\r"):
debug("Sending message: {}\\r".format(MSG[:-1]))
self.con.write(MSG.encode())
else:
debug("Sending message: {}\\n\\r".format(MSG))
self.con.write((MSG+"\n\r").encode())
def _query(self, MSG:str) -> str:
"""Formats query <MSG> to be sent and returns result
<MSG> should end in a ?
Args:
MSG = a string containing a query command
Returns:
str = the device's response
"""
# validate MSG
try:
assert type(MSG) is str
assert MSG[-1] == "?" or MSG[-2:] == "?\r" or MSG[-3:] == "?\n\r"
except AssertionError as e:
raise ValueError("Message to send must be a string ending in ?.")
# send query
self._write(MSG)
# read response, stripping new line carriage return at end and # at beginning
return self.con.read_until("\n\r".encode())[1:-2].decode() | 31.856209 | 91 | 0.572733 | 1,298 | 9,748 | 4.276579 | 0.20339 | 0.034048 | 0.021798 | 0.016393 | 0.366961 | 0.334714 | 0.290758 | 0.24428 | 0.233471 | 0.20771 | 0 | 0.004769 | 0.333197 | 9,748 | 306 | 92 | 31.856209 | 0.849231 | 0.387156 | 0 | 0.359649 | 0 | 0 | 0.140275 | 0 | 0 | 0 | 0 | 0.006536 | 0.04386 | 1 | 0.140351 | false | 0.008772 | 0.035088 | 0 | 0.280702 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ff20fd15245634531744ee562835b4c0d1a75be | 2,251 | py | Python | tests/test_pipeline_wgbs_index.py | Multiscale-Genomics/mg-process-fastq | 50c7115c0c1a6af48dc34f275e469d1b9eb02999 | [
"Apache-2.0"
] | 2 | 2017-07-31T11:45:46.000Z | 2017-08-09T09:32:35.000Z | tests/test_pipeline_wgbs_index.py | Multiscale-Genomics/mg-process-fastq | 50c7115c0c1a6af48dc34f275e469d1b9eb02999 | [
"Apache-2.0"
] | 28 | 2016-11-17T11:12:32.000Z | 2018-11-02T14:09:13.000Z | tests/test_pipeline_wgbs_index.py | Multiscale-Genomics/mg-process-fastq | 50c7115c0c1a6af48dc34f275e469d1b9eb02999 | [
"Apache-2.0"
] | 4 | 2017-02-12T17:47:21.000Z | 2018-05-29T08:16:27.000Z | """
.. See the NOTICE file distributed with this work for additional information
regarding copyright ownership.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import print_function
import os.path
import pytest
from basic_modules.metadata import Metadata
from process_bs_seeker_index import process_bs_seeker_index
@pytest.mark.wgbs
@pytest.mark.pipeline
def test_wgbs_pipeline_index():
"""
    Test case to ensure that the WGBS index pipeline code works.
Running the pipeline with the test data from the command line:
"""
home = os.path.expanduser('~')
resource_path = os.path.join(os.path.dirname(__file__), "data/")
genomefa_file = resource_path + "bsSeeker.Mouse.GRCm38.fasta"
files = {
"genome": genomefa_file
}
metadata = {
"genome": Metadata(
"Assembly", "fasta", files['genome'], None,
{'assembly': 'GRCm38'}
)
}
files_out = {
"index": resource_path + "bsSeeker.Mouse.GRCm38.fasta.bt2.tar.gz"
}
print("WGBS TEST FILES:", files)
rs_handle = process_bs_seeker_index(
configuration={
"bss_path": home + "/lib/BSseeker2",
"aligner": "bowtie2",
"aligner_path": home + "/lib/bowtie2-2.3.4-linux-x86_64",
"execution": resource_path
}
)
rs_files, rs_meta = rs_handle.run(files, metadata, files_out) # pylint: disable=unused-variable
assert len(rs_files) == 1
# Add tests for all files created
for f_out in rs_files:
print("WGBS RESULTS FILE:", f_out)
assert rs_files[f_out] == files_out[f_out]
assert os.path.isfile(rs_files[f_out]) is True
assert os.path.getsize(rs_files[f_out]) > 0
| 30.418919 | 100 | 0.67259 | 306 | 2,251 | 4.787582 | 0.480392 | 0.040956 | 0.030717 | 0.040956 | 0.049147 | 0.049147 | 0 | 0 | 0 | 0 | 0 | 0.013302 | 0.231897 | 2,251 | 73 | 101 | 30.835616 | 0.834008 | 0.377166 | 0 | 0 | 0 | 0 | 0.176856 | 0.069869 | 0 | 0 | 0 | 0 | 0.102564 | 1 | 0.025641 | false | 0 | 0.128205 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ff388db8366ea8591858ff099655cca6fbb8662 | 7,625 | py | Python | finance_manager/functions.py | jehboyes/finance_manager | d310a3a4c2c6b6e5564e2a83e3f355b23266b773 | [
"MIT"
] | null | null | null | finance_manager/functions.py | jehboyes/finance_manager | d310a3a4c2c6b6e5564e2a83e3f355b23266b773 | [
"MIT"
] | null | null | null | finance_manager/functions.py | jehboyes/finance_manager | d310a3a4c2c6b6e5564e2a83e3f355b23266b773 | [
"MIT"
] | null | null | null | """
A collection of generic functions and iterators intended for use in various parts of the App,
and will probably be of use to future development.
"""
from importlib import import_module as imp
from os import listdir, path
import sys
import functools
import click
class periods():
"""
Iterator for financial periods.
Exists for brevity/clarity in actual code. Outputs the numbers 1 to 12,
unless restricted by passing the ``end`` parameter on construction.
"""
def __init__(self, end=12):
"""
Parameters
----------
end : int, optional
The final month to output, useful for dynamic in-year processing, but by default 12.
"""
self.end = end
def __iter__(self):
self.a = 1
return self
def __next__(self):
if self.a <= self.end:
x = self.a
self.a += 1
return x
else:
raise StopIteration
def period_to_month(period, acad_year):
"""
Financial month and year to calendar month and year.
Converts a period and academic year into the actual month number and calendar year.
Parameters
----------
period : int
Accounting period
acad_year : int
Academic year (calendar year commencing)
Returns
-------
tuple
Month, Calendar year
Examples
--------
Period 1 (August) in the 2020 financial year:
>>> period_to_month(1,2020)
(8, 2020)
Period 6 (January) in the 1984 financial year:
>>> period_to_month(6, 1984)
(1, 1985)
"""
# Because August is P1
period += 7
# Increment calendar year if new period is in next year (i.e. >12)
acad_year += (period-1)//12
# Bring period back to legitimate month number, and correct for 0
period = period % 12
if period == 0:
period = 12
return period, acad_year
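# Illustrative check (not in the original module): the final period of the
# financial year lands in the following calendar year.
# >>> period_to_month(12, 2020)
# (7, 2021)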
def sa_con_string(dialect, server, db, py_driver=None, user=None, password='', driver=None):
"""
Formats connection variables into SQL Alchemy string.
Intended for brevity elsewhere in the App. For more detail,
see the `SQLAlchemy Engine Configuration <https://docs.sqlalchemy.org/en/13/core/engines.html>`_ page.
Parameters
----------
dialect : str
SQLAlchemy-recognised name for the DBMS, such as `mssql` or `sqlite`
server : str
Server/host name
db : str
Database name
py_driver : str
Name of additional driver required for dialect connection (e.g. pyodbc)
user : str
Username, if used. If ommitted, connection uses windows credentials (via trusted connection)
password : str
Password for given username. Can be blank.
driver : str
Specific driver to use when connecting.
Returns
-------
str
SQL Alchemy engine connection string.
"""
# Configure security
user = '' if user is None else user
if len(user) > 0:
login = user + ':' + password
trust = ''
else:
login = ''
trust = '?trusted_connection=yes'
# Configure dialect
if py_driver is not None:
dialect = '+'.join([dialect, py_driver])
# configure additional dialect
    if driver is not None and len(driver) > 0:
        driver = '&driver=' + driver.replace(" ", "+")
    else:
        driver = ''  # avoid interpolating the string 'None' into the URL
con = f"{dialect}://{login}@{server}/{db}{trust}{driver}" + \
";MARS_Connection=Yes"
return con
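# Illustrative call (server and database names are made up):
# >>> sa_con_string('mssql', 'MYSERVER', 'Finance', py_driver='pyodbc',
# ...               driver='ODBC Driver 17 for SQL Server')
# 'mssql+pyodbc://@MYSERVER/Finance?trusted_connection=yes&driver=ODBC+Driver+17+for+SQL+Server;MARS_Connection=Yes'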
def normalise_period(val):
"""Return an integer from 1 to 12.
Parameters
----------
val : str or int
Variant for period. Should at least contain numeric characters.
Returns
-------
int
Number corresponding to financial period.
Examples
--------
>>> normalise_period('P6')
6
>>> normalise_period(202106)
6
"""
val = ''.join(c for c in str(val) if c.isdigit())
return int(val[-2:])
def level_to_session(level):
"""
Converts study level to a year of study.
Intended for use with the level descriptions that come out of the
HE In Year Cohort web report, but applicable to other instances.
Parameters
----------
level : str
The text version of a level. Should begin with the word 'level'.
Returns
-------
int
The year of study that the level (typically) corresponds to.
"""
session = "X"
if level[:5].upper() == "LEVEL":
session = int(level[-1]) - 3
else:
session = 1
return session
def name_to_aos(name):
"""
Converts a verbose course name to its aos_code
Essentially a fuzzy matching function, intended for use with reverse engineering web reports
Parameters
----------
name : str
The course description. Can include year.
Returns
-------
str
The 6-character aos_code.
int
Session, i.e. year of study. If no numeric characters were
in the ``name``, this will default to -1.
Examples
--------
>>> name_to_aos('Jazz Year 1')
('HBAMJA', 1)
When no numeric year information appears
>>> name_to_aos('Jazz Year Two')
('HBAMJA', -1)
"""
aos_abbr = [["Business", "BU", ""],
["Classical", "CM", "C"],
["Film", "FM"],
["Folk", "FO", "F"],
["Jazz", "JA", "J"],
["Production", "PR", "M"],
["Popular", "PM", "P"],
["Songwriting", "SW"],
["Acting", "ACT"],
["Actor Musician", "AMU"],
["Musical Theatre", "MTH"]]
aos_code = ""
quals = ["BA ", "FD", "MMus", "MA "]
fd_triggers = ["electronic", "foundation degree", "FD"]
pg_triggers = ["creative", "mmus"]
# Check the name contains qualification
has_qual = any([qual.lower() in name.lower() for qual in quals])
if any([t.lower() in name.lower() for t in pg_triggers]):
aos_code = "HMMCRM"
elif any([t.lower() in name.lower() for t in fd_triggers]):
aos_code = "HFD"
if "Electronic" in name or "EMP" in name:
aos_code += "EMP"
else:
aos_code += "MPM"
elif name[:2] == "BA" or not has_qual: # IE assume BA if not specified
aos_code = "HBA"
if "with" in name:
# i.e. is combined
aos_code += "C"
withpos = name.index("with")
for p in aos_abbr:
if p[0] in name[:withpos]:
aos_code += p[2]
for p in aos_abbr:
if p[0] in name[withpos:]:
aos_code += p[2]
else: # Music and Acting/MT
for p in aos_abbr:
if p[0] in name:
if len(p[1]) == 2:
aos_code += "M"
aos_code += p[1]
break
if len(aos_code) != 6:
raise ValueError(
f"Unable to recognise {name}. Got as far as '{aos_code}''.")
# And then the numeric bit
num = -1
for char in name:
if char.isdigit():
num = int(char)
break
return aos_code, num
def _add_subcommands(parent, file, package):
p = path.dirname(file)
files = listdir(p)
this_package = sys.modules[package].__name__
modules = [imp(this_package+"."+f.replace(".py", ""), )
for f in files if f[0] != "_"]
commands = [getattr(module, module.__name__[module.__name__.rfind(".")+1:])
for module in modules]
for _ in commands:
parent.add_command(_)
| 26.943463 | 107 | 0.557902 | 953 | 7,625 | 4.370409 | 0.342078 | 0.026891 | 0.010084 | 0.011525 | 0.061945 | 0.036735 | 0.036735 | 0.036735 | 0.036735 | 0.02425 | 0 | 0.017649 | 0.323803 | 7,625 | 282 | 108 | 27.039007 | 0.790147 | 0.429639 | 0 | 0.106195 | 0 | 0 | 0.100908 | 0.018418 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079646 | false | 0.026549 | 0.044248 | 0 | 0.19469 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ff4614b2c2823a60726232798f4f0d9f9766030 | 5,157 | py | Python | tests/test_openml.py | carefree0910/carefree-learn-benchmark | d5507c1719e0b625230e3befc23149a96e057d14 | [
"MIT"
] | null | null | null | tests/test_openml.py | carefree0910/carefree-learn-benchmark | d5507c1719e0b625230e3befc23149a96e057d14 | [
"MIT"
] | null | null | null | tests/test_openml.py | carefree0910/carefree-learn-benchmark | d5507c1719e0b625230e3befc23149a96e057d14 | [
"MIT"
] | null | null | null | import os
import cflearn
import platform
import unittest
import numpy as np
from typing import Dict
from cflearn_benchmark import Benchmark
from scipy.sparse import csr_matrix
from cftool.ml import patterns_type
from cftool.ml import Tracker
from cftool.ml import Comparer
from cftool.misc import timestamp
from cfdata.tabular import TabularData
from cfml.misc.toolkit import Experiment
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

class TestOpenML(unittest.TestCase):
    Experiment.suppress_warnings()

    num_repeat = 2
    num_jobs = 0 if platform.system() == "Linux" else 2
    project_name = "carefree-learn-benchmark"
    logging_folder = "__test_openml__"
    openml_indices = [38]
    # openml_indices = [38, 46, 179, 184, 389, 554, 772, 917, 1049, 1111, 1120, 1128, 293]
    task_names = [f"openml_{openml_id}" for openml_id in openml_indices]
    messages: Dict[str, str] = {}

    def _get_benchmark_saving_folder(self, task_name: str) -> str:
        benchmark_sub_folder = os.path.join("benchmarks", f"{task_name}_benchmark")
        return os.path.join(self.logging_folder, benchmark_sub_folder)

    def test1(self) -> None:
        for openml_id, task_name in zip(self.openml_indices, self.task_names):
            # preparation
            bunch = fetch_openml(data_id=openml_id, as_frame=False)
            x, y = bunch.data, bunch.target
            if isinstance(x, csr_matrix):
                x = x.toarray()
            feature_names = bunch.feature_names
            if bunch.categories is None:
                categorical_columns = None
            else:
                categorical_columns = [
                    i
                    for i, name in enumerate(feature_names)
                    if name in bunch.categories
                ]
            data = TabularData(
                process_methods=None,
                valid_columns=set(range(x.shape[1])),
                categorical_columns=categorical_columns,
            )
            data.read(x, y.reshape([-1, 1]))
            comparer_list = []
            sk_bases = [
                LinearSVC,
                SVC,
                DecisionTreeClassifier,
                RandomForestClassifier,
                LogisticRegression,
            ]
            # cflearn benchmark
            benchmark = Benchmark(
                task_name,
                "clf",
                num_jobs=self.num_jobs,
                models=["fcnn", "tree_dnn"],
                increment_config={
                    "fixed_epoch": 2,
                    "data_config": {"categorical_columns": categorical_columns},
                },
            )
            results = benchmark.k_random(self.num_repeat, 0.1, *data.converted.xy)
            msg = results.comparer.log_statistics(verbose_level=None)
            TestOpenML.messages[task_name] = msg
            benchmark.save(self._get_benchmark_saving_folder(task_name))
            best_methods = list(set(results.best_methods.values()))
            comparer_list.append(results.comparer.select(best_methods))
            # sklearn
            exp = benchmark.experiment
            data_folders = benchmark.data_folders
            for data_folder in data_folders:
                sklearn_patterns: Dict[str, patterns_type] = {}
                x_tr, y_tr = exp.fetch_data(data_folder=data_folder)
                x_te, y_te = exp.fetch_data("_te", data_folder=data_folder)
                assert isinstance(y_tr, np.ndarray)
                for base in sk_bases:
                    clf = base()
                    # Bind `clf` (and the training data) as default arguments:
                    # plain closures are late-bound, so every pattern would
                    # otherwise end up using the last classifier in the loop.
                    sklearn_patterns.setdefault(base.__name__, []).append(
                        cflearn.ModelPattern(
                            init_method=lambda clf=clf, x=x_tr, y=y_tr: clf.fit(x, y.ravel()),
                            predict_method=lambda x_, clf=clf: clf.predict(x_).reshape([-1, 1]),
                            predict_prob_method="predict_proba",
                        )
                    )
                comparer_list.append(
                    cflearn.evaluate(
                        x_te,
                        y_te,
                        metrics=["acc", "auc"],
                        other_patterns=sklearn_patterns,
                        comparer_verbose_level=None,
                    )
                )
            comparer = Comparer.merge(comparer_list)
            msg = comparer.log_statistics(method_length=24)
            tracker = Tracker(self.project_name, f"{task_name}_summary")
            tracker.track_message(timestamp(), msg)

    def test2(self) -> None:
        # Relies on unittest's default name ordering: test1 must already have
        # saved the benchmarks and recorded the reference messages.
        for task_name in self.task_names:
            saving_folder = self._get_benchmark_saving_folder(task_name)
            benchmark = Benchmark.load(saving_folder)
            loaded_msg = benchmark.results.comparer.log_statistics(verbose_level=None)
            self.assertEqual(TestOpenML.messages[task_name], loaded_msg)
        cflearn._rmtree(self.logging_folder)


if __name__ == "__main__":
    unittest.main()
| 38.774436 | 90 | 0.587745 | 549 | 5,157 | 5.251366 | 0.336976 | 0.027749 | 0.012487 | 0.01873 | 0.055498 | 0.055498 | 0.055498 | 0 | 0 | 0 | 0 | 0.016802 | 0.330619 | 5,157 | 132 | 91 | 39.068182 | 0.818366 | 0.023657 | 0 | 0 | 0 | 0 | 0.039364 | 0.008946 | 0 | 0 | 0 | 0 | 0.017544 | 1 | 0.026316 | false | 0 | 0.166667 | 0 | 0.27193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ff8aa8452d3785af93c4d934bd42e377d0cab4b | 6,117 | py | Python | final/predict.py | terrifyzhao/information_extract | d950c25911f5a8d9f7e10eb9ecfd53f897cef8d0 | [
"Apache-2.0"
] | null | null | null | final/predict.py | terrifyzhao/information_extract | d950c25911f5a8d9f7e10eb9ecfd53f897cef8d0 | [
"Apache-2.0"
] | null | null | null | final/predict.py | terrifyzhao/information_extract | d950c25911f5a8d9f7e10eb9ecfd53f897cef8d0 | [
"Apache-2.0"
] | null | null | null | import json
from final.model import model
import os
import numpy as np
import tensorflow as tf
import jieba
from tqdm import tqdm
import ahocorasick
from gensim.models import Word2Vec
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
path = os.getcwd()
graph = tf.get_default_graph()
train_data = json.load(open(path + '/data/train.json'))
id2predicate, predicate2id = json.load(open(path + '/data/schemas.json'))
id2predicate = {int(i): j for i, j in id2predicate.items()}
id2char, char2id = json.load(open(path + '/data/vocab.json'))
num_classes = len(id2predicate)
# Word vectors
word2vec = Word2Vec.load(path + '/word2vec.model')
id2word = {i + 1: j for i, j in enumerate(word2vec.wv.index2word)}
word2id = {j: i for i, j in id2word.items()}
word2vec = word2vec.wv.syn0  # raw embedding matrix (gensim < 4 API)
# Prepend a zero row so that index 0 acts as the padding vector
word2vec = np.concatenate([np.zeros((1, word2vec.shape[1])), word2vec])
max_s = 14
max_len = 140
train_model, subject_model, object_model = model(len(char2id), max_len, len(predicate2id))
subject_model.load_weights(path + '/out2/subject_model.weights')
object_model.load_weights(path + '/out2/object_model.weights')

def seq_padding(X, padding=0):
    L = [len(x) for x in X]
    ML = max(L)
    return np.array([
        np.concatenate([x, [padding] * (ML - len(x))]) if len(x) < ML else x for x in X
    ])
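
# A small self-check sketch (assumption, not in the original): seq_padding
# right-pads variable-length sequences to the length of the longest one.
def _demo_seq_padding():
    out = seq_padding([[1, 2], [3]])
    assert out.shape == (2, 2)
    assert (out == np.array([[1, 2], [3, 0]])).all()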

class SopSearch:
    """Aho-Corasick lookup of (subject, predicate, object) triples seen in
    the training data."""

    def __init__(self):
        self.ac_s = ahocorasick.Automaton()
        self.ac_o = ahocorasick.Automaton()
        self.sop_dic = {}
        self.sop_total = {}
        for i, d in enumerate(tqdm(train_data, desc='build SOP search')):
            for s, p, o in d['spo_list']:
                self.ac_s.add_word(s, s)
                self.ac_o.add_word(o, o)
                if (s, o) not in self.sop_dic:
                    self.sop_dic[(s, o)] = set()
                if (s, p, o) not in self.sop_total:
                    self.sop_total[(s, p, o)] = set()
                self.sop_dic[(s, o)].add(p)
                self.sop_total[(s, p, o)].add(i)
        self.ac_s.make_automaton()
        self.ac_o.make_automaton()

    def find(self, text, i=None):
        spo = set()
        for s in self.ac_s.iter(text):
            for o in self.ac_o.iter(text):
                if (s[1], o[1]) in self.sop_dic:
                    for p in self.sop_dic.get((s[1], o[1])):
                        if i is None:
                            spo.add((s[1], p, o[1]))
                        elif self.sop_total[(s[1], p, o[1])] - {i}:
                            spo.add((s[1], p, o[1]))
        return list(spo)
spo_search = SopSearch()
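
# Hedged note (not in the original): `find` pairs every subject hit with every
# object hit from the two automata and keeps pairs seen in training; passing a
# training-sample index excludes triples that only occur in that sample
# (a leave-one-out lookup).
def _demo_spo_search(text, sample_index=None):
    # Returns a list of (subject, predicate, object) triples found in `text`.
    return spo_search.find(text, sample_index)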

def sent2vec(S):
    V = []
    for s in S:
        V.append([])
        for w in s:
            # Repeat each word id once per character so the word vectors line
            # up with the character-level inputs.
            for _ in w:
                V[-1].append(word2id.get(w, 0))
    V = seq_padding(V)
    V = word2vec[V]
    return V

def extract_items(text):
    R = []
    text = text[:max_len]
    char_index = [char2id.get(c, 1) for c in text]
    pre_po = np.zeros((len(text), num_classes, 2))
    pre_s = np.zeros((len(text), 2))
    # Mark the start/end positions of subjects and objects already known from
    # the training-data triple lookup as prior features.
    for s, p, o in spo_search.find(text):
        pre_s[text.find(s), 0] = 1
        pre_s[text.find(s) + len(s) - 1, 1] = 1
        pre_po[text.find(o), predicate2id[p], 0] = 1
        pre_po[text.find(o) + len(o) - 1, predicate2id[p], 1] = 1
    word = sent2vec([list(jieba.cut(text))])
    pre_s = np.expand_dims(pre_s, 0)
    char_index = np.array([char_index])
    s_star, s_end = subject_model.predict([char_index, word, pre_s])
    s_star, s_end = s_star[0, :, 0], s_end[0, :, 0]
    # positions where the start/end probability exceeds 0.5
    s_star_out, s_end_out = np.where(s_star > 0.5)[0], np.where(s_end > 0.5)[0]
    # one-hot masks over the same positions
    s_star_in, s_end_in = np.where(s_star > 0.5, 1, 0), np.where(s_end > 0.5, 1, 0)
    s_star, s_end = s_star_out, s_end_out
    subjects = []
    for i in s_star:
        j = s_end[s_end >= i]
        if len(j) > 0:
            j = j[0]
            subject = text[i: j + 1]
            subjects.append((subject, i, j))
    # subjects.append(('阿斯达', 1, 4))
    # subjects.append(('得到的', 2, 5))
    if subjects:
        s_index = []
        for subject in subjects:
            s_index.append([char2id.get(c, 1) for c in subject[0]])
        # s_index = [char2id.get(c, 1) for c in subjects[0][0]]
        # s_index = np.array([s_index])
        s_index = seq_padding(s_index)
        s_star_in = np.array([s_star_in])
        s_end_in = np.array([s_end_in])
        pre_po = pre_po.reshape(pre_po.shape[0], -1)
        pre_po = np.expand_dims(pre_po, 0)
        # Tile every per-sentence input once per candidate subject so the
        # object model scores all subjects in a single batch.
        char_index = np.repeat(char_index, len(subjects), 0)
        word = np.repeat(word, len(subjects), 0)
        s_star_in = np.repeat(s_star_in, len(subjects), 0)
        s_end_in = np.repeat(s_end_in, len(subjects), 0)
        pre_s = np.repeat(pre_s, len(subjects), 0)
        pre_po = np.repeat(pre_po, len(subjects), 0)
        o1, o2 = object_model.predict([char_index, word, s_index, s_star_in, s_end_in, pre_s, pre_po])
        for i, subject in enumerate(subjects):
            _oo1, _oo2 = np.where(o1[i] > 0.5), np.where(o2[i] > 0.5)
            for _ooo1, _c1 in zip(*_oo1):
                for _ooo2, _c2 in zip(*_oo2):
                    if _ooo1 <= _ooo2 and _c1 == _c2:
                        _object = text[_ooo1: _ooo2 + 1]
                        _predicate = id2predicate[_c1]
                        R.append((subject[0], _predicate, _object))
                        break
        zhuanji, gequ = [], []  # albums, songs
        for s, p, o in R[:]:
            if p == u'妻子':  # "wife": also add the inverse "husband" triple
                R.append((o, u'丈夫', s))
            elif p == u'丈夫':  # "husband": also add the inverse "wife" triple
                R.append((o, u'妻子', s))
            if p == u'所属专辑':  # "album of": remember album and song names
                zhuanji.append(o)
                gequ.append(s)
        spo_list = set()
        for s, p, o in R:
            # Drop "singer"/"lyricist"/"composer" triples whose subject is an
            # album name (and not also a song name).
            if p in [u'歌手', u'作词', u'作曲']:
                if s in zhuanji and s not in gequ:
                    continue
            spo_list.add((s, p, o))
        return list(spo_list)
    else:
        return []

def predict(text):
    global graph
    with graph.as_default():
        result = extract_items(text)
    return result


if __name__ == '__main__':
    while True:
        text = input('text:')
        r = predict(text)
        print(r)
| 32.194737 | 102 | 0.541605 | 956 | 6,117 | 3.282427 | 0.172594 | 0.020395 | 0.007648 | 0.007648 | 0.205864 | 0.101976 | 0.053537 | 0.022307 | 0 | 0 | 0 | 0.032235 | 0.310283 | 6,117 | 189 | 103 | 32.365079 | 0.711543 | 0.026647 | 0 | 0.013158 | 0 | 0 | 0.033132 | 0.008914 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.059211 | 0 | 0.144737 | 0.006579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ff8c1d733f52e6e0c92e54a6bb944e263727097 | 16,976 | py | Python | agent/state/model.py | abiro/minerl-agent | c33361c4a5ad0d5fe903f211665ff4da16439b69 | [
"MIT"
] | null | null | null | agent/state/model.py | abiro/minerl-agent | c33361c4a5ad0d5fe903f211665ff4da16439b69 | [
"MIT"
] | null | null | null | agent/state/model.py | abiro/minerl-agent | c33361c4a5ad0d5fe903f211665ff4da16439b69 | [
"MIT"
] | null | null | null | import os
from collections import defaultdict
import logging
from operator import itemgetter
from pathlib import Path
from typing import List, Optional, Tuple, Dict, Iterable, Iterator
import time
import numpy as np
import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.tensorboard import SummaryWriter
from agent.config import Config, StateConfig
from agent.state.data import Batch, BatchLoader, Dataset

class FrameEncoder(nn.Module):
    def __init__(self, conf: StateConfig):
        super().__init__()
        # TODO how does batch norm behave when repeatedly applied before back
        # prop
        # Note: the first positional argument of nn.LeakyReLU is
        # negative_slope, so ``inplace`` must be passed by keyword.
        self._main = nn.Sequential(
            # 32x32 out for 64x64 input
            nn.BatchNorm2d(3),
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3,
                      padding=1),  # noqa 501
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(32, 16, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
            # 16x16 out
            nn.BatchNorm2d(16),
            nn.Conv2d(16, 64, 3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(64, 32, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
            # 8x8 out
            nn.BatchNorm2d(32),
            nn.Conv2d(32, 128, 3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(128, 64, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
            # 4x4 out
            nn.BatchNorm2d(64),
            nn.Conv2d(64, 256, 3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(256, 128, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
            # 2x2 out
            nn.BatchNorm2d(128),
            nn.Conv2d(128, 256, 3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(256, 128, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
            # 1x1 out
            nn.BatchNorm2d(128),
            nn.Conv2d(128, 256, 3, padding=1),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(256, conf.cnn_out, 1),
            nn.LeakyReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, frame: T.Tensor) -> T.Tensor:
        return self._main(frame).flatten(start_dim=1)
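
# A hedged shape sketch (assumption: illustrative only, not in the original):
# a 64x64 RGB batch is reduced to one `conf.cnn_out`-dimensional vector per
# frame.
def _demo_frame_encoder_shape(conf: StateConfig) -> None:
    encoder = FrameEncoder(conf).eval()
    with T.no_grad():
        out = encoder(T.zeros(2, 3, 64, 64))
    assert out.shape == (2, conf.cnn_out)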

class InvEncoder(nn.Module):
    def __init__(self, conf: StateConfig):
        super().__init__()
        self._main = nn.Sequential(
            # Inv observations are standardized in data loader
            nn.Linear(conf.num_inv, 2 * conf.num_inv),
            nn.LeakyReLU(inplace=True),
            nn.Linear(2 * conf.num_inv, conf.inv_out),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, features: T.Tensor) -> T.Tensor:
        return self._main(features)

class StateEncoder(nn.Module):
    @staticmethod
    def _get_input_indices(step_size: int, num_steps: int) -> List[int]:
        """Get the input indices for one of the step sizes.

        Args:
            step_size: Exponent for the step size.
            num_steps: The number of steps.

        Returns:
            The input indices in reversed order, so that t_0 is always last.
            E.g. for step_size=2 and num_steps=2 the output is [2, 0].

        Raises:
            ValueError for step_size < 0 or num_steps < 0
        """
        if step_size < 0 or num_steps < 0:
            raise ValueError(('Input must be non-negative. step_size: {} '
                              'num_steps: {}').format(step_size, num_steps))
        stop = step_size * num_steps
        return list(reversed(range(0, stop, step_size)))
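
    # Example values (a sketch, not in the original):
    #   _get_input_indices(1, 2) -> [1, 0]
    #   _get_input_indices(2, 3) -> [4, 2, 0]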

    def __init__(self, conf: StateConfig):
        super().__init__()
        if len(conf.seq_lens) > 1:
            raise NotImplementedError('Only implemented for one step size')
        self._seq_lens = dict(conf.seq_lens)
        self._out_size = conf.state_size
        self._num_act = conf.num_act
        self._frame_encoder = FrameEncoder(conf)
        self._inv_encoder = InvEncoder(conf)
        self._input_indices = {}
        flat_indices = set()
        for step_size, num_steps in self._seq_lens.items():
            ii = self._get_input_indices(step_size, num_steps)
            self._input_indices[step_size] = ii
            flat_indices.update(self._input_indices[step_size])
        # Index 0 is the current time step which we want last
        self._flat_indices = sorted(flat_indices, reverse=True)
        rnn_in = conf.cnn_out + conf.inv_out + conf.num_act
        self._bn = nn.BatchNorm1d(rnn_in)
        self._rnn = nn.GRU(input_size=rnn_in,
                           hidden_size=conf.state_size,
                           num_layers=conf.rnn_layers)
        self._skip_merge = nn.ModuleDict()

    @property
    def out_size(self):
        return self._out_size

    def forward(self, batch: Batch) -> T.Tensor:
        """Compute a forward pass.

        Args:
            batch: Batch

        Returns:
            The state encoding for each time step.
        """
        inputs = []
        for i in self._flat_indices:
            f_out = self._frame_encoder(batch.frames[i])
            i_out = self._inv_encoder(batch.inv[i])
            joined = T.cat([f_out, i_out, batch.actions[i]], dim=-1)
            inputs.append(self._bn(joined))
        # TODO skip connection from inputs to state
        inputs_t = T.stack(inputs, dim=0)
        state, _ = self._rnn(inputs_t)
        # State embeddings for t+1 from each time step t
        return state

class FrameGenerator(nn.Module):
    """DCGAN Generator for frame generation from state.

    Based on: https://github.com/pytorch/examples/blob/master/dcgan/main.py
    """

    def __init__(self, conf: StateConfig):
        super().__init__()
        ngf = conf.frame_gen_filters
        # Note: the first positional argument of nn.LeakyReLU is
        # negative_slope, so ``inplace`` must be passed by keyword.
        self.main = nn.Sequential(
            # (ngf*8) x 4 x 4 out
            nn.BatchNorm2d(conf.state_size),
            nn.ConvTranspose2d(in_channels=conf.state_size,
                               out_channels=ngf * 8,
                               kernel_size=4,
                               stride=1,
                               padding=0),
            nn.LeakyReLU(inplace=True),
            # (ngf*4) x 8 x 8 out
            nn.BatchNorm2d(ngf * 8),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1),
            nn.LeakyReLU(inplace=True),
            # (ngf*2) x 16 x 16 out
            nn.BatchNorm2d(ngf * 4),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1),
            nn.LeakyReLU(inplace=True),
            # ngf x 32 x 32 out
            nn.BatchNorm2d(ngf * 2),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1),
            nn.LeakyReLU(inplace=True),
            # 3 x 64 x 64 out
            nn.BatchNorm2d(ngf),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1),
        )

    def forward(self, state: T.Tensor) -> T.Tensor:
        """Computes a forward pass

        Args:
            state: (batch_size, state_size)

        Returns:
            (batch_size, 3, 64, 64)
        """
        s = state.view((state.shape[0], state.shape[1], 1, 1))
        return self.main(s)

class InvGenerator(nn.Module):
    """Generate inventory features from state."""

    def __init__(self, conf: StateConfig):
        super().__init__()
        self.main = nn.Sequential(
            nn.BatchNorm1d(conf.state_size),
            nn.Linear(conf.state_size, 2 * conf.num_inv),
            nn.LeakyReLU(inplace=True),
            nn.BatchNorm1d(2 * conf.num_inv),
            nn.Linear(2 * conf.num_inv, 2 * conf.num_inv),
            nn.LeakyReLU(inplace=True),
            nn.BatchNorm1d(2 * conf.num_inv),
            nn.Linear(2 * conf.num_inv, conf.num_inv),
        )

    def forward(self, state: T.Tensor) -> T.Tensor:
        """Computes a forward pass

        Args:
            state: (batch_size, state_size)

        Returns:
            (batch_size, num_inv)
        """
        return self.main(state)

class Generator(nn.Module):
    """GAN Generator"""

    def __init__(self, conf: StateConfig):
        super().__init__()
        self._frame_gen = FrameGenerator(conf)
        self._inv_gen = InvGenerator(conf)

    def forward(self, state: T.Tensor) -> Tuple[T.Tensor, T.Tensor]:
        return self._frame_gen(state), self._inv_gen(state)

class Critic(nn.Module):
    """DCGAN Critic."""

    def __init__(self, conf: StateConfig, encoder: StateEncoder):
        super().__init__()
        self.encoder = encoder
        self.main = nn.Sequential(
            nn.BatchNorm1d(conf.state_size),
            nn.Linear(conf.state_size, 2 * conf.state_size),
            nn.LeakyReLU(inplace=True),
            nn.Linear(2 * conf.state_size, 1),
        )

    def forward(self, batch: Batch) -> Tuple[T.Tensor, T.Tensor]:
        state = self.encoder(batch)
        # state[-1] is last time step which is step 0 in the batch
        # dicts
        pred = self.main(state[-1]).mean()
        return state, pred

# TODO use reward from dense environment as signal. Problem: reward will be
# sparse in evaluation environment, so it can't be input to encoder.
class Training:
    @staticmethod
    def _ensure_dir(dir_path: Path) -> Path:
        os.makedirs(dir_path, exist_ok=True)
        return dir_path

    @staticmethod
    def _save_model(ckpt_dir: Path, epoch: int, model: nn.Module,
                    optimizer: Optional[optim.Optimizer] = None):
        fname = 'epoch_{:03d}.tar'.format(epoch)
        p = ckpt_dir / fname
        d = {
            'epoch': epoch,
            'model_state_dict': model.state_dict(),
        }
        if optimizer is not None:
            # No trailing comma here: it would wrap the state dict in a
            # one-element tuple and break restoring the optimizer.
            d['optimizer_state_dict'] = optimizer.state_dict()
        T.save(d, p)
        logging.info('Saved checkpoint to: {}'.format(p))

    def __init__(self, dataset: Dataset, conf: Config, device: T.device,
                 ckpt_dir: Path, write_freq: int = 100,
                 writer: Optional[SummaryWriter] = None):
        self.sconf = conf.state
        self.encoder_ckpt = self._ensure_dir(ckpt_dir / 'encoder')
        self.generator_ckpt = self._ensure_dir(ckpt_dir / 'generator')
        self.critic_ckpt = self._ensure_dir(ckpt_dir / 'critic')
        self.device = device
        self.encoder = StateEncoder(self.sconf).train().to(device)
        self.generator = Generator(self.sconf).train().to(device)
        self.critic = Critic(self.sconf, self.encoder).train().to(device)
        self.gen_opt = optim.RMSprop(self.generator.parameters(),
                                     lr=self.sconf.lr)
        self.crit_opt = optim.RMSprop(self.critic.parameters(),
                                      lr=self.sconf.lr)
        self.loader = BatchLoader(dataset.warm_up(),
                                  conf.state,
                                  Dataset.collate_fn,
                                  pin_memory=True)
        self.writer = writer
        self._one = T.tensor(1.).to(device)
        self._mone = T.tensor(-1.).to(device)
        self._write_freq = write_freq
        self._global_step = 1
        self._epoch = 1
        self._epoch_start = 0
        self._epoch_end = 0
        self._train_end = False
        self._metrics = defaultdict(list)

    def _save_all(self):
        self._save_model(self.encoder_ckpt, self._epoch, self.encoder)
        self._save_model(self.generator_ckpt, self._epoch, self.generator,
                         self.gen_opt)
        self._save_model(self.critic_ckpt, self._epoch, self.critic,
                         self.crit_opt)

    def _flow_batches(self):
        self._on_train_start()
        self._global_step = 0
        for i in range(1, self.sconf.epochs + 1):
            self._epoch = i
            self._on_epoch_start()
            for batch in self.loader:
                # We want the first train step to be 1, but it shouldn't be
                # incremented after the last batch of the last epoch.
                self._global_step += 1
                yield batch.to(self.device, non_blocking=True)
                self._on_step_end()
            self._on_epoch_end()
        self._on_train_end()

    def _flush_metrics(self):
        tmpl = '| {}: {:.5f}'
        messages = []
        mean_metrics = {key: np.mean(vals) for key, vals in
                        self._metrics.items() if len(vals) > 0}
        for key, val in sorted(mean_metrics.items(), key=itemgetter(0)):
            if self.writer is not None:
                self.writer.add_scalar(key, val, self._global_step)
            else:
                messages.append(tmpl.format(key, val))
        if len(messages) > 0:
            msg = '|step: {:05d}'.format(self._global_step)
            msg = ' '.join([msg] + messages)
            logging.info(msg)
        self._metrics = defaultdict(list)

    def _on_epoch_start(self):
        self._epoch_start = time.perf_counter()

    def _on_epoch_end(self):
        epoch_time = round((time.perf_counter() - self._epoch_start) / 60, 2)
        logging.info('Epoch {} complete in {} m'.format(self._epoch,
                                                        epoch_time))
        if self._epoch % self.sconf.checkpoint_freq == 0:
            self._save_all()

    def _on_step_end(self):
        if self._global_step % self._write_freq == 0:
            self._flush_metrics()

    def _on_train_start(self):
        self._train_end = False
        self._metrics = defaultdict(list)

    def _on_train_end(self):
        self._train_end = True
        if self._global_step % self._write_freq != 0:
            self._flush_metrics()
        if self._epoch % self.sconf.checkpoint_freq != 0:
            self._save_all()

    def _get_grad_penalty(self):
        # Based on https://github.com/caogang/wgan-gp
        raise NotImplementedError

    def _train_critic(self, batches: Iterator):
        if self._epoch == 1:
            n_iters = self.sconf.critic_turbo_iters
        else:
            n_iters = self.sconf.critic_iters
        for i in range(n_iters):
            try:
                batch = next(batches)
            except StopIteration:
                break
            self.crit_opt.zero_grad()
            # TODO gradient penalty instead of clamping
            for p in self.critic.parameters():
                p.data.clamp_(-self.sconf.critic_clamp,
                              self.sconf.critic_clamp)
            real_state, real_pred = self.critic(batch)
            real_pred.backward(self._mone)
            with T.no_grad():
                # real_state[-2] is the state prediction for t_0
                fake_frames, fake_invs = self.generator(real_state[-2])
            batch.frames[0] = fake_frames
            batch.inv[0] = fake_invs
            _, fake_pred = self.critic(batch)
            fake_pred.backward(self._one)
            self.crit_opt.step()
            # Wasserstein distance between real and fake distribution.
            # The larger its magnitude, the better the critic.
            # The optimizer minimizes, so the distance is negative, but it
            # makes more sense to display it as a positive number, hence the
            # abs.
            wd = abs(fake_pred.item() - real_pred.item())
            self._metrics['wd'].append(wd)

    def _train_generator(self, batches: Iterator) -> \
            Optional[Tuple[float, float]]:
        try:
            batch = next(batches)
        except StopIteration:
            return None
        self.gen_opt.zero_grad()
        with T.no_grad():
            real_state = self.encoder(batch)
        # real_state[-2] is the state prediction for t_0
        fake_frames, fake_invs = self.generator(real_state[-2])
        real_frames, real_invs = batch.frames[0], batch.inv[0]
        batch.frames[0] = fake_frames
        batch.inv[0] = fake_invs
        _, fake_pred = self.critic(batch)
        fake_pred.backward(self._mone)
        self.gen_opt.step()
        # The lower the error, the better the generator
        self._metrics['err_g'].append(abs(fake_pred.item()))
        with T.no_grad():
            l1_frame = F.l1_loss(fake_frames, real_frames).cpu().item()
            l1_inv = F.l1_loss(fake_invs, real_invs).cpu().item()
        self._metrics['l1_frame'].append(l1_frame)
        self._metrics['l1_inv'].append(l1_inv)
        last_l1_frame = self._metrics['l1_frame'][-1]
        last_l1_inv = self._metrics['l1_inv'][-1]
        return last_l1_frame, last_l1_inv

    def train(self) -> Tuple[float, float]:
        # Critic and generator take different number of steps, so we need a
        # mechanism to yield steps_per_epoch * epoch batches, while
        # keeping track of the current global step and epoch.
        batch_flow = self._flow_batches()
        l1_scores = (-1., -1.)
        while not self._train_end:
            self._train_critic(batch_flow)
            res = self._train_generator(batch_flow)
            if res:
                l1_scores = res
        return l1_scores
| 33.615842 | 79 | 0.568332 | 2,128 | 16,976 | 4.322838 | 0.173872 | 0.025111 | 0.018263 | 0.02435 | 0.302315 | 0.257854 | 0.211545 | 0.179585 | 0.163387 | 0.147842 | 0 | 0.026447 | 0.325106 | 16,976 | 504 | 80 | 33.68254 | 0.776469 | 0.129889 | 0 | 0.267062 | 0 | 0 | 0.019123 | 0 | 0 | 0 | 0 | 0.005952 | 0 | 1 | 0.091988 | false | 0 | 0.04451 | 0.011869 | 0.198813 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2ffe22f93b98651c83789010c629f07c9626d409 | 475 | py | Python | src/maximize_product_of_array.py | marco-zangari/code-katas | 1dfda1cfbbe8687b17e97e414358b38d964df675 | [
"MIT"
] | null | null | null | src/maximize_product_of_array.py | marco-zangari/code-katas | 1dfda1cfbbe8687b17e97e414358b38d964df675 | [
"MIT"
] | null | null | null | src/maximize_product_of_array.py | marco-zangari/code-katas | 1dfda1cfbbe8687b17e97e414358b38d964df675 | [
"MIT"
] | null | null | null | """Maximize_product_of_array (array series #2), Codewars Kata, level 7."""


def max_product(li, n):
    """Return the maximal product of n integers taken from a list.

    input: list of integers
    output: integer, the product of the n chosen integers
    ex: max_product([4, 3, 5], 2) returns 20,
    since with the size (n) equal to 2, it's 5 * 4 = 20
    """
    # Take the n largest values and multiply them together.
    std_li = sorted(li)[-n:]
    product = 1
    for num in std_li:
        product *= num
    return product
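
# A small self-check sketch (assumption, not part of the kata solution):
def _demo_max_product():
    assert max_product([4, 3, 5], 2) == 20
    assert max_product([8, 10, 9, 7], 3) == 720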
| 26.388889 | 74 | 0.631579 | 75 | 475 | 3.92 | 0.666667 | 0.061224 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03966 | 0.256842 | 475 | 17 | 75 | 27.941176 | 0.793201 | 0.633684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
640e5f2dbc283de85c337ecaac27efdeff2ecfa4 | 352 | py | Python | file_open.py | farhan1503001/Data-structures-with-python-Coursera | 04c04fea15f96b83056d846a8cc59f94d9df66ca | [
"MIT"
] | null | null | null | file_open.py | farhan1503001/Data-structures-with-python-Coursera | 04c04fea15f96b83056d846a8cc59f94d9df66ca | [
"MIT"
] | null | null | null | file_open.py | farhan1503001/Data-structures-with-python-Coursera | 04c04fea15f96b83056d846a8cc59f94d9df66ca | [
"MIT"
] | null | null | null | # Opening file
filename = open('words.txt')
for line in filename:
    print(line.upper().rstrip())
filename.close()

second_file = open('mbox.txt')
total = 0  # avoid shadowing the built-in sum()
count = 0
for line in second_file:
    if line.startswith("X-DSPAM-Confidence:"):
        # Grab the numeric value, skipping the trailing newline
        total = total + float(line[-7:-1])
        count = count + 1
second_file.close()
print("Average spam confidence:", total / count)
| 20.705882 | 47 | 0.644886 | 51 | 352 | 4.411765 | 0.54902 | 0.062222 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017731 | 0.198864 | 352 | 17 | 48 | 20.705882 | 0.780142 | 0.034091 | 0 | 0 | 0 | 0 | 0.185185 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6412c114e0148cac74c8016e5dd9e4636887e532 | 2,196 | py | Python | authentik/root/asgi.py | BeryJu/passbook | 350f0d836580f4411524614f361a76c4f27b8a2d | [
"MIT"
] | 15 | 2020-01-05T09:09:57.000Z | 2020-11-28T05:27:39.000Z | authentik/root/asgi.py | BeryJu/passbook | 350f0d836580f4411524614f361a76c4f27b8a2d | [
"MIT"
] | 302 | 2020-01-21T08:03:59.000Z | 2020-12-04T05:04:57.000Z | authentik/root/asgi.py | BeryJu/passbook | 350f0d836580f4411524614f361a76c4f27b8a2d | [
"MIT"
] | 3 | 2020-03-04T08:21:59.000Z | 2020-08-01T20:37:18.000Z | """
ASGI config for authentik project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/asgi/
"""
import django
from channels.routing import ProtocolTypeRouter, URLRouter
from defusedxml import defuse_stdlib
from django.core.asgi import get_asgi_application
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware
# DJANGO_SETTINGS_MODULE is set in gunicorn.conf.py
defuse_stdlib()
django.setup()
# pylint: disable=wrong-import-position
from authentik.root import websocket # noqa # isort:skip

class LifespanApp:
    """
    temporary shim for https://github.com/django/channels/issues/1216
    needed so that hypercorn doesn't display an error.
    this uses ASGI 2.0 format, not the newer 3.0 single callable
    """

    def __init__(self, scope):
        self.scope = scope

    async def __call__(self, receive, send):
        if self.scope["type"] == "lifespan":
            while True:
                message = await receive()
                if message["type"] == "lifespan.startup":
                    await send({"type": "lifespan.startup.complete"})
                elif message["type"] == "lifespan.shutdown":
                    await send({"type": "lifespan.shutdown.complete"})
                    return


class RouteNotFoundMiddleware:
    """Middleware to ignore 404s for websocket requests
    taken from https://github.com/django/daphne/issues/165#issuecomment-808284950"""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        try:
            return await self.app(scope, receive, send)
        except ValueError as exc:
            if "No route found for path" in str(exc) and scope["type"] == "websocket":
                await send({"type": "websocket.close"})
            else:
                raise exc


application = SentryAsgiMiddleware(
    ProtocolTypeRouter(
        {
            "http": get_asgi_application(),
            "websocket": RouteNotFoundMiddleware(URLRouter(websocket.websocket_urlpatterns)),
            "lifespan": LifespanApp,
        }
    )
)
| 30.929577 | 93 | 0.652095 | 250 | 2,196 | 5.624 | 0.532 | 0.042674 | 0.027738 | 0.02845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015115 | 0.246812 | 2,196 | 70 | 94 | 31.371429 | 0.834946 | 0.285519 | 0 | 0 | 0 | 0 | 0.123198 | 0.033421 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6412f72910f9d2b8e997d384ba6143ae4869f3af | 4,922 | py | Python | pysaintcoinach/ex/variant2/__init__.py | icykoneko/saintcoinach-py | 66898385e1198203a7ec9da83787427bf6fe5c83 | [
"MIT"
] | 7 | 2019-11-20T17:24:49.000Z | 2022-03-29T04:17:53.000Z | pysaintcoinach/ex/variant2/__init__.py | icykoneko/saintcoinach-py | 66898385e1198203a7ec9da83787427bf6fe5c83 | [
"MIT"
] | 7 | 2019-04-08T07:36:46.000Z | 2022-01-17T22:51:54.000Z | pysaintcoinach/ex/variant2/__init__.py | icykoneko/saintcoinach-py | 66898385e1198203a7ec9da83787427bf6fe5c83 | [
"MIT"
] | 3 | 2019-04-08T08:24:22.000Z | 2021-06-27T22:19:15.000Z | from struct import unpack_from
from typing import Iterable as IterableT, TypeVar, Union, Dict
from ..datasheet import DataRowBase, IDataSheet, IDataRow
from ..column import Column
from ..relational import IRelationalDataRow
from ..relational.datasheet import IRelationalDataSheet

class SubRow(DataRowBase, IRelationalDataRow):
    @property
    def parent_row(self): return self.__parent_row

    @property
    def full_key(self): return str(self.parent_row.key) + "." + str(self.key)

    def __init__(self, parent: IDataRow, key: int, offset: int):
        super(SubRow, self).__init__(parent.sheet, key, offset)
        self.__parent_row = parent

    @property
    def sheet(self) -> IRelationalDataSheet:
        return super(SubRow, self).sheet

    @property
    def default_value(self) -> object:
        def_col = self.sheet.header.default_column
        return self[def_col.index] if def_col is not None else None

    def __getitem__(self, item: str) -> object:
        if isinstance(item, int):
            return super(SubRow, self).__getitem__(item)
        col = self.sheet.header.find_column(item)
        if col is None:
            raise KeyError(item)
        return self[col.index]

    def get_raw(self, column_name: Union[str, int] = None, **kwargs) -> object:
        if 'column_index' in kwargs:
            return super(SubRow, self).get_raw(**kwargs)
        if isinstance(column_name, int):
            return super(SubRow, self).get_raw(column_name)
        column = self.sheet.header.find_column(column_name)
        if column is None:
            raise KeyError(column_name)
        return self.get_raw(column.index)

class DataRow(DataRowBase):
    METADATA_LENGTH = 0x06

    @property
    def length(self): return self.__length

    @property
    def sub_row_count(self): return self.__sub_row_count

    @property
    def sub_row_keys(self) -> IterableT[int]:
        if not self.__is_read:
            self._read()
        return self.__sub_rows.keys()

    @property
    def sub_rows(self) -> IterableT[SubRow]:
        if not self.__is_read:
            self._read()
        return self.__sub_rows.values()

    def get_sub_row(self, key) -> SubRow:
        if not self.__is_read:
            self._read()
        return self.__sub_rows[key]

    def __init__(self, sheet: IDataSheet, key: int, offset: int):
        super(DataRow, self).__init__(sheet, key, offset + self.METADATA_LENGTH)
        self.__is_read = False
        self.__sub_rows = {}  # type: Dict[int, SubRow]
        b = sheet.get_buffer()
        if len(b) < (offset + self.METADATA_LENGTH):
            raise ValueError("Index out of range")
        # Big-endian int32 data length followed by int16 sub-row count
        # (6 bytes in total, matching METADATA_LENGTH).
        self.__length, self.__sub_row_count = unpack_from(">lh", b, offset)

    def _read(self):
        self.__sub_rows.clear()
        h = self.sheet.header
        b = self.sheet.get_buffer()
        o = self.offset
        for i in range(self.sub_row_count):
            # Each sub-row starts with a big-endian int16 key, followed by
            # the fixed-size column data.
            key, = unpack_from(">h", b, o)
            o += 2
            r = SubRow(self, key, o)
            self.__sub_rows[key] = r
            o += h.fixed_size_data_length
        self.__is_read = True

    def __getitem__(self, item):
        raise RuntimeError('Invalid Operation: Cannot get column on Variant 2 DataRow. Use GetSubRow instead.')

    def get_raw(self, column_index: int, **kwargs):
        raise RuntimeError('Invalid Operation: Cannot get column on Variant 2 DataRow. Use GetSubRow instead.')

class RelationalDataRow(DataRow, IRelationalDataRow):
    @property
    def sheet(self) -> IRelationalDataSheet:
        return super(RelationalDataRow, self).sheet

    def __str__(self):
        def_col = self.sheet.header.default_column
        if def_col is not None:
            return "%s" % self.get_sub_row(def_col.index).default_value
        else:
            return "%s#%u" % (self.sheet.header.name, self.key)

    def __init__(self, sheet: IDataSheet, key: int, offset: int):
        super(RelationalDataRow, self).__init__(sheet, key, offset)

    @property
    def default_value(self) -> object:
        def_col = self.sheet.header.default_column
        return self[def_col.index] if def_col is not None else None

    def __getitem__(self, item: str) -> object:
        if isinstance(item, int):
            return super(RelationalDataRow, self).__getitem__(item)
        col = self.sheet.header.find_column(item)
        if col is None:
            raise KeyError(item)
        return self[col.index]

    def get_raw(self, column_name: Union[str, int] = None, **kwargs) -> object:
        if 'column_index' in kwargs:
            return super(RelationalDataRow, self).get_raw(**kwargs)
        if isinstance(column_name, int):
            return super(RelationalDataRow, self).get_raw(column_name)
        column = self.sheet.header.find_column(column_name)
        if column is None:
            raise KeyError(column_name)
        return super(RelationalDataRow, self).get_raw(column.index)
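
# A hedged usage sketch (assumption: `row` is a RelationalDataRow from a
# loaded variant-2 sheet; illustrative only). Variant-2 rows hold sub-rows,
# so column access goes through get_sub_row rather than row[...].
def _demo_sub_row_access(row: RelationalDataRow) -> None:
    for key in row.sub_row_keys:
        sub = row.get_sub_row(key)
        print(sub.full_key, sub.default_value)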
| 32.596026 | 111 | 0.645266 | 627 | 4,922 | 4.799043 | 0.15949 | 0.041874 | 0.044865 | 0.02991 | 0.585909 | 0.536058 | 0.519442 | 0.455965 | 0.455965 | 0.455965 | 0 | 0.001634 | 0.253759 | 4,922 | 150 | 112 | 32.813333 | 0.817588 | 0.004673 | 0 | 0.469027 | 0 | 0 | 0.044313 | 0 | 0 | 0 | 0.000817 | 0 | 0 | 1 | 0.19469 | false | 0 | 0.053097 | 0.053097 | 0.451327 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
641310d10a8d7e15c9a7d898de0f6cdccc1b0fc5 | 9,874 | py | Python | wurst/ecoinvent/electricity_markets.py | pjamesjoyce/wurst | 95b37e72eaa18b33bdd83cd4a51d37d9eb4ae7ba | [
"BSD-2-Clause"
] | 1 | 2022-03-29T14:59:13.000Z | 2022-03-29T14:59:13.000Z | wurst/ecoinvent/electricity_markets.py | pjamesjoyce/wurst | 95b37e72eaa18b33bdd83cd4a51d37d9eb4ae7ba | [
"BSD-2-Clause"
] | null | null | null | wurst/ecoinvent/electricity_markets.py | pjamesjoyce/wurst | 95b37e72eaa18b33bdd83cd4a51d37d9eb4ae7ba | [
"BSD-2-Clause"
] | null | null | null | from .. import toolz
from ..searching import get_many, equals
from ..transformations import rescale_exchange
from functools import partial
def get_generators_in_mix(db, name="market for electricity, high voltage"):
"""Get names of inputs to electricity mixes"""
inputs = set()
for act in db:
if act['name'] == name:
for exc in act.technosphere():
producer = exc.input
if producer['unit'] == 'kilowatt hour':
inputs.add(producer['name'])
return inputs
# Valid as of 3.3; edited manually to remove non-generation
high_voltage_providers = {
    'cane sugar production with ethanol by-product',
    'electricity production, deep geothermal',
    'electricity production, hard coal',
    'electricity production, hydro, pumped storage',
    'electricity production, hydro, reservoir, alpine region',
    'electricity production, hydro, reservoir, non-alpine region',
    'electricity production, hydro, reservoir, tropical region',
    'electricity production, hydro, run-of-river',
    'electricity production, lignite',
    'electricity production, natural gas, 10MW',
    'electricity production, natural gas, combined cycle power plant',
    'electricity production, natural gas, conventional power plant',
    'electricity production, nuclear, boiling water reactor',
    'electricity production, nuclear, pressure water reactor',
    'electricity production, nuclear, pressure water reactor, heavy water moderated',
    'electricity production, oil',
    'electricity production, peat',
    'electricity production, wind, 1-3MW turbine, offshore',
    'electricity production, wind, 1-3MW turbine, onshore',
    'electricity production, wind, 2.3MW turbine, precast concrete tower, onshore',
    'electricity production, wind, <1MW turbine, onshore',
    'electricity production, wind, >3MW turbine, onshore',
    'ethanol production from sugarcane',
    'ethanol production from sweet sorghum',
    'ethanol production from wood',
    'heat and power co-generation, biogas, gas engine',
    'heat and power co-generation, diesel, 200kW electrical, SCR-NOx reduction',
    'heat and power co-generation, hard coal',
    'heat and power co-generation, lignite',
    'heat and power co-generation, natural gas, 1MW electrical, lean burn',
    'heat and power co-generation, natural gas, 200kW electrical, lean burn',
    'heat and power co-generation, natural gas, 500kW electrical, lean burn',
    'heat and power co-generation, natural gas, combined cycle power plant, 400MW electrical',
    'heat and power co-generation, natural gas, conventional power plant, 100MW electrical',
    'heat and power co-generation, oil',
    'heat and power co-generation, wood chips, 6667 kW',
    'heat and power co-generation, wood chips, 6667 kW, state-of-the-art 2014',
    'petroleum refinery operation',
    'treatment of bagasse, from sugarcane, in heat and power co-generation unit, 6400kW thermal',
    'treatment of bagasse, from sweet sorghum, in heat and power co-generation unit, 6400kW thermal',
    'treatment of blast furnace gas, in power plant',
    'treatment of coal gas, in power plant',
    # New in 3.5
    'electricity production, hard coal, conventional',
    'electricity production, hard coal, supercritical',
    'electricity production, solar thermal parabolic trough, 50 MW',
    'electricity production, solar tower power plant, 20 MW',
}

medium_voltage_providers = {
    'burnt shale production',
    'electricity, from municipal waste incineration to generic market for electricity, medium voltage',
    'fluting medium production, semichemical',
    'linerboard production, kraftliner',
    'natural gas, burned in gas turbine, for compressor station',
    'treatment of recovered paper to fluting medium, wellenstoff',
    'treatment of recovered paper to linerboard, testliner',
}

low_voltage_providers = {
    'biogas, burned in micro gas turbine 100kWe',
    'biogas, burned in polymer electrolyte membrane fuel cell 2kWe, future',
    'biogas, burned in solid oxide fuel cell 125kWe, future',
    'biogas, burned in solid oxide fuel cell, with micro gas turbine, 180kWe, future',
    'electricity production, photovoltaic, 3kWp facade installation, multi-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp facade installation, multi-Si, panel, mounted',
    'electricity production, photovoltaic, 3kWp facade installation, single-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp facade installation, single-Si, panel, mounted',
    'electricity production, photovoltaic, 3kWp flat-roof installation, multi-Si',
    'electricity production, photovoltaic, 3kWp flat-roof installation, single-Si',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, CIS, panel, mounted',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, CdTe, laminated, integrated',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, a-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, a-Si, panel, mounted',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, multi-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, multi-Si, panel, mounted',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, ribbon-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, ribbon-Si, panel, mounted',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, single-Si, laminated, integrated',
    'electricity production, photovoltaic, 3kWp slanted-roof installation, single-Si, panel, mounted',
    'electricity production, photovoltaic, 570kWp open ground installation, multi-Si',
    'heat and power co-generation, natural gas, 160kW electrical, Jakobsberg',
    'heat and power co-generation, natural gas, 160kW electrical, lambda=1',
    'heat and power co-generation, natural gas, 50kW electrical, lean burn',
    'heat and power co-generation, natural gas, mini-plant 2KW electrical',
    'sawing and planing, paraná pine, kiln dried',
}

high_voltage_transformation = 'electricity voltage transformation from high to medium voltage'
medium_voltage_transformation = 'electricity voltage transformation from medium to low voltage'
low_voltage_mix = 'market for electricity, low voltage'
medium_voltage_mix = 'market for electricity, medium voltage'
high_voltage_mix = 'market for electricity, high voltage'
all_providers = high_voltage_providers.union(medium_voltage_providers).union(low_voltage_providers)

def move_all_generation_to_high_voltage(data):
    """Move all generation sources to the high voltage market.

    Uses the relative shares in the low voltage market, **ignoring transmission losses**. In theory, using the production volumes would be more correct, but these numbers are no longer updated since ecoinvent 3.2.

    Empties out the medium and low voltage mixes."""
    MIXES = {low_voltage_mix, medium_voltage_mix, high_voltage_mix}
    mix_filter = lambda ds: ds['name'] in MIXES
    for group in toolz.groupby("location", filter(mix_filter, data)).values():
        assert len(group) == 3
        # Alphabetical sort of the market names yields high, low, medium.
        high, low, medium = sorted(group, key=lambda x: x['name'])
        medium_in_low = [ex for ex in low['exchanges']
                         if ex['name'] == medium_voltage_transformation][0]['amount']
        high_in_low = [ex for ex in medium['exchanges']
                       if ex['name'] == high_voltage_transformation][0]['amount'] * \
                      medium_in_low
        for exc in high['exchanges']:
            if (exc['name'] in high_voltage_providers or (
                    "electricity" in exc['name'] and
                    "import from" in exc['name'])):
                rescale_exchange(exc, high_in_low)
        high['exchanges'].extend([rescale_exchange(exc, medium_in_low)
                                  for exc in medium['exchanges']
                                  if exc['name'] in medium_voltage_providers])
        high['exchanges'].extend([exc
                                  for exc in low['exchanges']
                                  if exc['name'] in low_voltage_providers])
    data = empty_medium_voltage_markets(data)
    data = empty_low_voltage_markets(data)
    return data

def remove_electricity_trade(data):
    """Delete all electricity trade exchanges.

    Intended to be used when substituting in new trade mixes."""
    MIXES = {low_voltage_mix, medium_voltage_mix, high_voltage_mix}
    mix_filter = lambda ds: ds['name'] in MIXES
    for ds in filter(mix_filter, data):
        ds['exchanges'] = [
            exc for exc in ds['exchanges']
            if not ("electricity" in exc['name'] and
                    "import from" in exc['name'])
        ]
    return data

def include_filter(exc):
    return exc['unit'] != 'kilowatt hour' or (
        'import from' not in exc['name'] and exc['name'] not in all_providers)


def set_conversion_to_one_kwh(ds, conversion):
    for exc in ds['exchanges']:
        if exc['name'] == conversion:
            exc['amount'] = 1


def _empty(data, kind):
    for ds in get_many(data, equals('name', kind)):
        ds['exchanges'] = list(filter(include_filter, ds['exchanges']))
        if kind == low_voltage_mix:
            set_conversion_to_one_kwh(ds, medium_voltage_transformation)
        elif kind == medium_voltage_mix:
            set_conversion_to_one_kwh(ds, high_voltage_transformation)
    return data

empty_low_voltage_markets = partial(_empty, kind=low_voltage_mix)
empty_medium_voltage_markets = partial(_empty, kind=medium_voltage_mix)
empty_high_voltage_markets = partial(_empty, kind=high_voltage_mix)
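
# A hedged usage sketch (assumption, not part of the original module: ``data``
# is a list of dataset dicts in wurst's internal format, e.g. obtained via
# ``wurst.extract_brightway2_databases``):
def _demo_rebuild_high_voltage_mixes(data):
    # Collapse all generation into the high voltage mixes, then drop the
    # inter-market trade exchanges before substituting new trade mixes.
    data = move_all_generation_to_high_voltage(data)
    data = remove_electricity_trade(data)
    return data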
| 51.427083 | 213 | 0.708021 | 1,204 | 9,874 | 5.70515 | 0.225083 | 0.128403 | 0.031446 | 0.036687 | 0.512738 | 0.418838 | 0.344446 | 0.30907 | 0.251274 | 0.174552 | 0 | 0.012525 | 0.199514 | 9,874 | 191 | 214 | 51.696335 | 0.856528 | 0.052967 | 0 | 0.044586 | 0 | 0.006369 | 0.587642 | 0 | 0 | 0 | 0 | 0 | 0.006369 | 1 | 0.038217 | false | 0 | 0.044586 | 0.006369 | 0.11465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
6418b3422f4c2568aa263e251a1ee434a2dfc461 | 634 | py | Python | examples/render/render_sphere.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 515 | 2017-01-25T05:46:52.000Z | 2022-03-29T09:52:27.000Z | examples/render/render_sphere.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 417 | 2017-01-25T10:01:17.000Z | 2022-03-29T09:22:04.000Z | examples/render/render_sphere.py | jkjt/ezdxf | 2acc5611b81476ea16b98063b9f55446a9182b81 | [
"MIT"
] | 149 | 2017-02-01T15:52:02.000Z | 2022-03-17T10:33:38.000Z | # Copyright (c) 2020-2021, Manfred Moitzi
# License: MIT License
from pathlib import Path
import ezdxf
from ezdxf.render.forms import sphere
DIR = Path("~/Desktop/Outbox").expanduser()
doc = ezdxf.new()
doc.layers.new("form", dxfattribs={"color": 5})
doc.layers.new("csg", dxfattribs={"color": 1})
doc.layers.new("normals", dxfattribs={"color": 6})
doc.set_modelspace_vport(6, center=(5, 0))
msp = doc.modelspace()
sphere1 = sphere(count=32, stacks=16, radius=1, quads=True)
sphere1.render_polyface(msp, dxfattribs={"layer": "form"})
sphere1.render_normals(msp, dxfattribs={"layer": "normals"})
doc.saveas(DIR / "sphere.dxf")
| 25.36 | 60 | 0.716088 | 90 | 634 | 5 | 0.544444 | 0.06 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.038596 | 0.100946 | 634 | 24 | 61 | 26.416667 | 0.750877 | 0.094637 | 0 | 0 | 0 | 0 | 0.1331 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |