hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
fab86be6f16580e58ee8836bf4504a1098307651 | 539 | py | Python | server/urls.py | Valchris/AngularJS-Django-Template | 10c90087984dcd9e6d29380eb4380824e65bcecf | [
"MIT"
] | 1 | 2015-07-29T04:28:26.000Z | 2015-07-29T04:28:26.000Z | server/urls.py | Valchris/AngularJS-Django-Template | 10c90087984dcd9e6d29380eb4380824e65bcecf | [
"MIT"
] | null | null | null | server/urls.py | Valchris/AngularJS-Django-Template | 10c90087984dcd9e6d29380eb4380824e65bcecf | [
"MIT"
] | null | null | null | from django.conf.urls import include, url
from django.contrib import admin
from glue.views import *
from glue.api import *
urlpatterns = [
# Examples:
# url(r'^$', 'server.views.home', name='home'),
# url(r'^blog/', include('blog.urls')),
url(r'^api/user/data/', view=user_data),
url(r'^api/user/signout/', view=user_signout),
url(r'^api/user/signin/', view=user_signin),
url(r'^api/user/register/', view=user_register),
url(r'^admin', include(admin.site.urls)),
url(r'^', AngularView.as_view()),
]
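# Illustrative request flow (paths taken from the patterns above): a GET to
# /api/user/data/ dispatches to user_data, while any path not matched earlier
# falls through to the catch-all r'^' and is served by AngularView.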
| 26.95 | 52 | 0.64564 | 79 | 539 | 4.341772 | 0.35443 | 0.093294 | 0.081633 | 0.12828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.152134 | 539 | 19 | 53 | 28.368421 | 0.750547 | 0.172542 | 0 | 0 | 0 | 0 | 0.172336 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
fabc177219e2e95776351ee5bdc5b7834e86aaf5 | 17,943 | py | Python | nebula/dao/strategy_dao.py | threathunterX/nebula_web | 2e32e6e7b225e0bd87ee8c847c22862f12c51bb1 | [
"Apache-2.0"
] | 2 | 2019-05-01T09:42:32.000Z | 2019-05-31T01:08:37.000Z | nebula/dao/strategy_dao.py | threathunterX/nebula_web | 2e32e6e7b225e0bd87ee8c847c22862f12c51bb1 | [
"Apache-2.0"
] | 1 | 2021-06-01T23:30:04.000Z | 2021-06-01T23:30:04.000Z | nebula/dao/strategy_dao.py | threathunterX/nebula_web | 2e32e6e7b225e0bd87ee8c847c22862f12c51bb1 | [
"Apache-2.0"
] | 5 | 2019-05-14T09:30:12.000Z | 2020-09-29T04:57:26.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import json
import logging
from threathunter_common.util import millis_now
from nebula_meta.model import Strategy
from .base_dao import BaseDao, BaseDefaultDao
from . import cache
from ..models.default import StrategyDefaultModel as Model, StrategyDefaultModel
from ..models import StrategyCustModel as CustModel
logger = logging.getLogger('nebula.dao.strategy')
#TODO more nodes
def is_strategy_weigh_cache_avail():
if cache.Strategy_Weigh_Cache is None:
        logger.warning('strategy weigh cache is None')
return False
return True
def add_strategy_weigh_cache(s):
if not is_strategy_weigh_cache_avail():
return
    new_weigh = get_strategy_weigh(s)
    if not new_weigh:
        return
    cache.Strategy_Weigh_Cache[new_weigh['name']] = new_weigh
def delete_strategy_weigh_cache(app=None, name=None):
# @todo
if not is_strategy_weigh_cache_avail():
return
weighs = cache.Strategy_Weigh_Cache.values()
if app:
if name:
weighs = list(filter(lambda w: w['app'] != app or w['name'] != name, weighs))
else:
weighs = list(filter(lambda x: x['app'] != app, weighs))
cache.Strategy_Weigh_Cache = dict((weigh['name'], weigh) for weigh in weighs)
else:
cache.Strategy_Weigh_Cache = dict()
def get_strategy_weigh(s):
blacklist_info = None
config = json.loads(s.config)
terms = config.get('terms', [])
    for term in terms:
        if term['left']['subtype'] == 'setblacklist':
            blacklist_info = term['left']['config']
            break
    if blacklist_info is None:
        logger.error(u'the strategy app:%s, name:%s has no blacklist configuration set', s.app, s.name)
        return None
return {
'app': s.app,
'name': s.name,
'tags': (s.tags or '').split(','),
'category': s.category,
'score': s.score,
'expire': s.endeffect,
'remark': s.remark,
        'test': s.status == 'test',
'scope': term.get('scope', ''),
'checkpoints': blacklist_info.get('checkpoints', ''),
'checkvalue': blacklist_info.get('checkvalue', ''),
'checktype': blacklist_info.get('checktype', ''),
'decision': blacklist_info.get('decision', ''),
'ttl': blacklist_info.get('ttl', 300)
}
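# Illustration only -- a weigh dict built above, with hypothetical values, would look like:
#   {'app': 'nebula', 'name': 'my_strategy', 'tags': ['web'], 'category': 'VISITOR',
#    'score': 50, 'expire': 1431095092730, 'remark': '', 'test': False, 'scope': 'web',
#    'checkpoints': '', 'checkvalue': '', 'checktype': 'IP', 'decision': 'review', 'ttl': 300}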
def update_strategy_weigh_cache(s):
if not is_strategy_weigh_cache_avail():
return
    new_weigh = get_strategy_weigh(s)
    if not new_weigh:
        return
    cache.Strategy_Weigh_Cache[new_weigh['name']] = new_weigh
def init_strategy_weigh():
strategies = StrategyCustDao().list_all_strategies_raw()
result = dict()
for s in strategies:
weigh = get_strategy_weigh(s)
if not weigh:
continue
result[weigh['name']] = weigh
cache.Strategy_Weigh_Cache = result
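# Minimal warm-up sketch (assumes the DAO sessions are already configured;
# 'my_strategy' is a placeholder name):
#   init_strategy_weigh()
#   weigh = cache.Strategy_Weigh_Cache.get('my_strategy')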
class StrategyDefaultDao(BaseDefaultDao):
cached_online_strategies = set()
last_cache_update_ts = 0
def get_strategy_by_app_and_name(self, app, name):
"""
get strategy by app and name.
"""
query = self.session.query(Model)
result = query.filter(Model.name == name, Model.app == app).first()
if result:
return result.to_strategy()
def _get_model_by_app_and_name(self, app, name):
query = self.session.query(Model)
return query.filter(Model.name == name, Model.app == app).first()
def get_strategy_by_id(self, id):
"""
get strategy by id.
"""
query = self.session.query(Model)
result = query.filter(Model.id == id).first()
if result:
return result.to_strategy()
def list_all_strategies(self):
"""
get all strategies
"""
query = self.session.query(Model)
result = query.all() or []
result = [_.to_strategy() for _ in result]
return result
def list_all_strategies_by_status(self, status):
"""
        get all strategies with the given status
"""
return filter(lambda s: s.status == status, self.list_all_strategies())
def list_all_strategies_by_app(self, app):
"""
        get all strategies for the given app
"""
return filter(lambda s: s.app == app, self.list_all_strategies())
def list_all_strategies_in_effect(self):
now = millis_now()
result = self.list_all_strategies() or []
return filter(lambda s: s.start_effect <= now <= s.end_effect, result)
def list_all_online_strategy_names_in_effect(self):
now = millis_now()
result = self.list_all_strategies() or []
result = filter(lambda s: s.start_effect <= now <= s.end_effect and s.status == "online", result)
result = map(lambda s: s.name, result)
return result
def get_cached_online_strategies(self):
current = millis_now()
        if current - StrategyDefaultDao.last_cache_update_ts < 5000:
return StrategyDefaultDao.cached_online_strategies
strategies = self.list_all_online_strategy_names_in_effect()
StrategyDefaultDao.cached_online_strategies = set(strategies)
StrategyDefaultDao.last_cache_update_ts = millis_now()
return StrategyDefaultDao.cached_online_strategies
def add_strategy(self, s):
new = StrategyDefaultModel.from_strategy(s)
new.last_modified = millis_now()
existing = self._get_model_by_app_and_name(s.app, s.name)
if existing:
# update
new.id = existing.id
self.session.merge(new)
update_strategy_weigh_cache(new)
else:
# insert
self.session.add(new)
add_strategy_weigh_cache(new)
self.session.commit()
def change_status(self, app, name, old_status, new_status):
result = self._get_model_by_app_and_name(app, name)
# check whether the internal status is right
if not result:
return
result_strategy = result.to_strategy()
if result_strategy.status != old_status:
return
result_strategy.status = new_status
new_model = StrategyDefaultModel.from_strategy(result_strategy)
new_model.id = result.id
self.session.merge(new_model)
self.session.commit()
def delete_strategy_by_app_and_name(self, app, name):
query = self.session.query(Model)
query.filter(Model.name == name, Model.app == app).delete()
self.session.commit()
delete_strategy_weigh_cache(app=app, name=name)
def delete_strategy(self, s):
self.delete_strategy_by_app_and_name(s.app, s.name)
def delete_strategy_list_by_app(self, app):
query = self.session.query(Model)
if app:
query.filter(Model.app == app).delete()
delete_strategy_weigh_cache(app=app)
else:
query.filter().delete()
delete_strategy_weigh_cache()
self.session.commit()
def clear(self):
"""
clear all the records
"""
query = self.session.query(Model)
query.delete()
self.session.commit()
delete_strategy_weigh_cache()
def count(self):
query = self.session.query(Model)
return query.count()
class StrategyCustDao(BaseDao):
cached_online_strategies = set()
last_cache_update_ts = 0
def get_strategy_by_app_and_name(self, app, name):
"""
        get strategy by app and name. A customized strategy overrides the default one.
        @keep interface unchanged, but its semantics changed, compared with v1.0
"""
result = self._get_model_by_app_and_name(app, name)
if result:
return result.to_strategy()
def _get_model_by_app_and_name(self, app, name):
"""
        fetch the strategy by key only; the customized one takes priority over the default
@add within v2.0
"""
query = self.session.query(CustModel).filter(CustModel.app == app, CustModel.name == name)
cust_strategy = query.first()
if not cust_strategy:
query = StrategyDefaultDao().session.query(Model).filter(Model.app == app, Model.name == name)
return query.first()
else:
return cust_strategy
def _get_cust_model_by_app_name(self, app, name):
"""
        fetch only the customized strategy by key
@add within v2.0
"""
query = self.session.query(CustModel)
return query.filter(CustModel.app == app, CustModel.name == name).first()
def get_strategy_by_id(self, id):
"""
        get strategy by id. The customized one takes priority over the default.
        @keep interface unchanged, but its semantics changed, compared with v1.0
"""
query = self.session.query(CustModel).filter(CustModel.id == id)
cust_strategy = query.first()
if not cust_strategy:
query = StrategyDefaultDao().session.query(Model).filter(Model.id == id)
return query.first()
else:
return cust_strategy
def get_cust_strategy_by_id(self, id):
"""
get cust strategy by id.
@add
"""
query = self.session.query(CustModel)
result = query.filter(CustModel.id == id).first()
if result:
return result.to_strategy()
def list_all_strategies_raw(self):
"""
@new v2.0
"""
default_query = StrategyDefaultDao().session.query(Model)
        strategies = dict(((_.app, _.name), _) for _ in default_query.all())
# key: strategy obj
cust_query = self.session.query(CustModel)
for cq in cust_query.all():
strategies[(cq.app, cq.name)] = cq
return strategies.values()
def list_all_strategies(self):
"""
        list all strategies: the union of customized and default strategies, with customized ones overriding the defaults
        @keep interface unchanged, but its semantics changed, compared with v1.0
"""
default_query = StrategyDefaultDao().session.query(Model)
        strategies = dict(((_.app, _.name), _.to_strategy()) for _ in default_query.all())
# key: strategy obj
cust_query = self.session.query(CustModel)
for cq in cust_query.all():
strategies[(cq.app, cq.name)] = cq.to_strategy()
return strategies.values()
def list_all_cust_strategies(self):
"""
        list all customized strategies
@add within v2.0
"""
query = self.session.query(CustModel)
result = query.all() or []
result = [_.to_strategy() for _ in result]
return result
def list_all_strategies_by_status(self, status):
"""
get strategies with certain status
        @keep interface unchanged with v1.0
"""
return filter(lambda s: s.status == status, self.list_all_strategies())
def list_all_strategies_by_app(self, app):
"""
        get strategies for a certain app
        @keep interface unchanged with v1.0
"""
return filter(lambda s: s.app == app, self.list_all_strategies())
def list_all_strategies_in_effect(self):
"""
        get strategies that have not expired yet
        @keep interface unchanged with v1.0
"""
now = millis_now()
result = self.list_all_strategies() or []
return filter(lambda s: s.start_effect <= now <= s.end_effect, result)
def list_all_online_strategy_names_in_effect(self):
"""
        get online strategies that have not expired yet
        @keep interface unchanged with v1.0
"""
now = millis_now()
result = self.list_all_strategies() or []
result = filter(lambda s: s.start_effect <= now <= s.end_effect and s.status == "online", result)
result = map(lambda s: s.name, result)
return result
def get_cached_online_strategies(self):
"""
        @keep interface unchanged with v1.0
"""
current = millis_now()
        if current - StrategyCustDao.last_cache_update_ts < 5000:
return StrategyCustDao.cached_online_strategies
strategies = self.list_all_online_strategy_names_in_effect()
StrategyCustDao.cached_online_strategies = set(strategies)
StrategyCustDao.last_cache_update_ts = millis_now()
return StrategyCustDao.cached_online_strategies
def add_strategy(self, s):
"""
        only adds customized strategies, overriding the default strategies rather than deleting the key's strategies entirely.
        @keep interface unchanged, but its semantics changed, compared with v1.0
"""
new = CustModel.from_strategy(s)
new.last_modified = millis_now()
existing = self._get_cust_model_by_app_name(s.app, s.name)
if existing:
# update
new.id = existing.id
new.group_id = existing.group_id
self.session.merge(new)
update_strategy_weigh_cache(new)
else:
# insert
self.session.add(new)
add_strategy_weigh_cache(new)
self.session.commit()
def change_status(self, app, name, old_status, new_status):
"""
        only changes customized strategies
        @keep the interface behavior changed, and so did its semantics, compared with v1.0
"""
result = self._get_model_by_app_and_name(app, name)
# check whether the internal status is right
if not result:
return
result_strategy = result.to_strategy()
if result_strategy.status != old_status:
return
result_strategy.status = new_status
new_model = CustModel.from_strategy(result_strategy)
new_model.id = result.id
self.session.merge(new_model)
self.session.commit()
update_strategy_weigh_cache(new_model)
def delete_strategy_by_app_and_name(self, app, name):
"""
        now only the customized strategy can be deleted
        @change the interface's result may have changed, and so has its semantics, compared with v1.0
"""
query = self.session.query(CustModel)
query.filter(CustModel.name == name, CustModel.app == app).delete()
self.session.commit()
delete_strategy_weigh_cache(app=app, name=name)
def delete_strategy(self, s):
"""
        now only the customized strategy can be deleted
        @change the interface's result may have changed, and so has its semantics, compared with v1.0
"""
self.delete_strategy_by_app_and_name(s.app, s.name)
def delete_strategy_list_by_app(self, app):
"""
        now only the customized strategy can be deleted
        @change the interface's result may have changed, and so has its semantics, compared with v1.0
"""
query = self.session.query(CustModel)
if app:
query.filter(CustModel.app == app).delete()
delete_strategy_weigh_cache(app=app)
else:
query.filter().delete()
delete_strategy_weigh_cache()
self.session.commit()
def clear(self):
"""
        clear all customized strategies, resetting to the default strategies (different from before)
        @change the interface's result may have changed, and so has its semantics, compared with v1.0
"""
query = self.session.query(CustModel)
query.delete()
self.session.commit()
delete_strategy_weigh_cache()
def count(self):
"""
        count only the customized strategies
        @change the interface's result may have changed, and so has its semantics, compared with v1.0
"""
query = self.session.query(CustModel)
return query.count()
if __name__ == "__main__":
js = """{
"app": "nebula",
"name": "test_strategy",
"remark": "test strategy",
"version": 1430694092730,
"status": "inedit",
"createtime": 1430693092730,
"modifytime": 1430693092730,
"starteffect": 1430693092730,
"endeffect": 1431095092730,
"terms": [
{
"left":
{
"type": "event",
"subtype": "",
"config": {
"event": ["nebula", "http_static"],
"field": "c_bytes"
}
},
"op": "between",
"right":
{
"type": "constant",
"subtype": "",
"config": {
"value": "1,200"
}
}
},
{
"left":
{
"type": "func",
"subtype": "count",
"config": {
"sourceevent": ["nebula", "http_dynamic"],
"condition": [
{
"left": "method",
"op": "==",
"right": "get"
}
],
"interval": 300,
"algorithm": "count",
"groupby": ["c_ip", "uri_stem"],
"trigger": {
"event": ["nebula", "http_static"],
"keys": ["c_ip","uri_stem"]
}
}
},
"op": "<",
"right":
{
"type": "constant",
"subtype": "",
"config": {
"value": "2"
}
}
}
]
}"""
dao = StrategyDefaultDao()
strategy = Strategy.from_json(js)
    print(StrategyDefaultModel.from_strategy(strategy))
dao.add_strategy(strategy)
for i in dao.list_all_strategies():
        print(i)
dao.list_all_strategies()
dao.list_all_strategies_by_status("inedit")
dao.list_all_strategies_in_effect()
dao.count()
# dao.delete_strategy(dao.get_strategy_by_app_and_name("app", "name"))
| 32.68306 | 113 | 0.567687 | 1,951 | 17,943 | 4.988211 | 0.111225 | 0.040691 | 0.051788 | 0.043157 | 0.72688 | 0.66081 | 0.624127 | 0.590423 | 0.56268 | 0.533498 | 0 | 0.010444 | 0.327649 | 17,943 | 548 | 114 | 32.742701 | 0.796253 | 0.015716 | 0 | 0.572603 | 0 | 0 | 0.16284 | 0 | 0 | 0 | 0 | 0.001825 | 0 | 0 | null | null | 0 | 0.024658 | null | null | 0.005479 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fabc961b87da1b6806ebcaaaca91e938753fd2bc | 2,205 | py | Python | src/21_zip.py | TheFlipbook/python_challenge | 21bd42178088bcaafbe02c25a76bc4f2950509b2 | [
"MIT"
] | null | null | null | src/21_zip.py | TheFlipbook/python_challenge | 21bd42178088bcaafbe02c25a76bc4f2950509b2 | [
"MIT"
] | null | null | null | src/21_zip.py | TheFlipbook/python_challenge | 21bd42178088bcaafbe02c25a76bc4f2950509b2 | [
"MIT"
] | null | null | null | # http://www.pythonchallenge.com/pc/bin/hex.html
import bz2
import io
import urllib.request
import urllib.error
import zipfile
import zlib
out_dir = "_out/idiot"
prompt = "http://www.pythonchallenge.com/pc/hex/unreal.jpg"
prompt_top = "http://www.pythonchallenge.com/pc/hex/"
prompt_range = 1152983631
prompt_pass = b"redavni"
username = "butter"
password = "fly"
def open_section(start=None):
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, prompt_top, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)
headers = {}
if start:
headers["Range"] = "bytes={}-".format(start)
request = urllib.request.Request(prompt, headers=headers)
response = opener.open(request)
return response.read()
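# Note on the Range header (illustrative offset): open_section(start=1024) would send
# "Range: bytes=1024-", asking the server for only the bytes from that offset onward.
# The call in main() uses prompt_range to fetch just the zip archive that sits after
# the image data in unreal.jpg.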
def main():
data = open_section(start=prompt_range)
stream = io.BytesIO(data)
archive = zipfile.ZipFile(stream)
# Get Prompt
with archive.open("readme.txt", pwd=prompt_pass) as readme:
text = (b"".join(readme.readlines())).decode("ascii")
print(text)
# Inspect data
with archive.open("package.pack", pwd=prompt_pass) as package:
generation = package.read()
# Data ping-pongs between compression methods
zlib_header = b"x"
bz2_header = b"BZh"
# Reversing twice means we couldn't find a header
just_reversed = False
for x in range(2000):
if generation.startswith(zlib_header):
print("_", end=" ")
just_reversed = False
generation = zlib.decompress(generation)
elif generation.startswith(bz2_header):
print("B", end=" ")
just_reversed = False
generation = bz2.decompress(generation)
elif just_reversed:
break
else:
print("f")
just_reversed = True
generation = generation[::-1]
print(generation)
return archive
if __name__ == "__main__":
print(main())
# http://www.pythonchallenge.com/pc/hex/copper.html
| 24.5 | 67 | 0.631293 | 249 | 2,205 | 5.453815 | 0.421687 | 0.047865 | 0.064801 | 0.073638 | 0.130339 | 0.066274 | 0 | 0 | 0 | 0 | 0 | 0.011607 | 0.257596 | 2,205 | 89 | 68 | 24.775281 | 0.81796 | 0.096145 | 0 | 0.054545 | 0 | 0 | 0.085599 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0.127273 | 0.109091 | 0 | 0.181818 | 0.109091 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
fabd8c394c9b5ebf1b3c158c1fcc13c3e5dcf49b | 2,596 | py | Python | tests/test_auth/test_upload_passport.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | 1 | 2019-09-09T15:04:07.000Z | 2019-09-09T15:04:07.000Z | tests/test_auth/test_upload_passport.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | 26 | 2019-03-27T16:59:26.000Z | 2021-06-01T23:35:27.000Z | tests/test_auth/test_upload_passport.py | peterwade153/flybob | 85fcd401bffed9adb06e7943f0c748be822fac75 | [
"MIT"
] | null | null | null | import unittest
from unittest.mock import patch, Mock
from werkzeug.datastructures import FileStorage
import io
import json
from app import app
from app.models.base import db
from app.models.user import User
from app.auth.views import UserPassportphotoView
from app.auth import views
class AuthUploadPassportPhotoTestCase(unittest.TestCase):
def setUp(self):
self.app = app.test_client()
app.testing = True
self.user_data = {
"username": "john123",
"email": "john123@john.com",
"password": "john1234556",
}
with app.app_context():
db.drop_all()
db.create_all()
# create admin user
user = User(
username="john123",
email="john123@john.com",
password="john1234556",
role=True,
)
user.save()
@patch.object(views.UserPassportphotoView, "post")
def test_upload_passport_photo(self, mock_post):
upload = UserPassportphotoView()
mock_post.return_value.status_code = 200
res = upload.post(
"/api/v1/auth/upload",
data=dict(file=(io.BytesIO(b"abcdef"), "test.jpg")),
headers={"Content-Type": "multipart/form-data"},
)
self.assertEqual(res.status_code, 200)
def test_upload_photo_with_non_allowed_ext(self):
res = self.app.post(
"/api/v1/auth/login",
data=json.dumps(self.user_data),
headers={"Content-Type": "application/json"},
)
token = json.loads(res.data.decode())["access_token"]
data = {"file": (io.BytesIO(b'my file contents'), 'hello.txt')}
result = self.app.post(
"/api/v1/auth/upload", buffered=True,
headers={
"Authorization": token,
"Content-Type" : 'multipart/form-data',
},
data=data,
)
self.assertEqual(result.status_code, 400)
def test_no_photo_upload(self):
res = self.app.post(
"/api/v1/auth/login",
data=json.dumps(self.user_data),
headers={"Content-Type": "application/json"},
)
token = json.loads(res.data.decode())["access_token"]
result = self.app.post(
"/api/v1/auth/upload", buffered=True,
headers={
"Authorization": token,
"Content-Type" : 'multipart/form-data',
},
data={},
)
self.assertEqual(result.status_code, 400)
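# To run just this test case (assuming tests/ and tests/test_auth/ are packages):
#   python -m unittest tests.test_auth.test_upload_passport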
| 28.844444 | 71 | 0.558937 | 277 | 2,596 | 5.137184 | 0.31769 | 0.024596 | 0.031623 | 0.045678 | 0.479972 | 0.446943 | 0.446943 | 0.446943 | 0.321855 | 0.321855 | 0 | 0.024198 | 0.315485 | 2,596 | 89 | 72 | 29.168539 | 0.77659 | 0.006549 | 0 | 0.305556 | 0 | 0 | 0.166085 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.055556 | false | 0.097222 | 0.138889 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
fac02a78f618b71b9828bd71e56497f77be5f2b6 | 1,571 | py | Python | example_project/news_with_archive/migrations/0001_initial.py | richardbarran/django-minipub | f6df9b15cf49ba95c5aefed5355a7d3de0241c3f | [
"MIT"
] | 7 | 2016-02-19T12:52:01.000Z | 2021-07-07T05:10:41.000Z | example_project/news_with_archive/migrations/0001_initial.py | richardbarran/django-minipub | f6df9b15cf49ba95c5aefed5355a7d3de0241c3f | [
"MIT"
] | 2 | 2018-05-14T09:28:25.000Z | 2021-05-12T19:21:10.000Z | example_project/news_with_archive/migrations/0001_initial.py | richardbarran/django-minipub | f6df9b15cf49ba95c5aefed5355a7d3de0241c3f | [
"MIT"
] | 1 | 2021-03-24T00:44:22.000Z | 2021-03-24T00:44:22.000Z | # Generated by Django 2.0 on 2018-02-14 13:39
from django.db import migrations, models
import django.utils.timezone
import model_utils.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Article',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, editable=False, verbose_name='created')),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, editable=False, verbose_name='modified')),
('status', model_utils.fields.StatusField(choices=[('draft', 'draft'), ('published', 'published'), ('archived', 'archived')], default='draft', max_length=100, no_check_for_status=True, verbose_name='status')),
('status_changed', model_utils.fields.MonitorField(default=django.utils.timezone.now, monitor='status', verbose_name='status changed')),
('start', models.DateField(blank=True, null=True, verbose_name='start date')),
('end', models.DateField(blank=True, null=True, verbose_name='end date')),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField()),
('body', models.TextField()),
],
options={
'abstract': False,
},
),
]
| 44.885714 | 225 | 0.621897 | 163 | 1,571 | 5.871166 | 0.460123 | 0.08046 | 0.083595 | 0.081505 | 0.23093 | 0.200627 | 0.200627 | 0.200627 | 0.110763 | 0 | 0 | 0.015728 | 0.231063 | 1,571 | 34 | 226 | 46.205882 | 0.77649 | 0.027371 | 0 | 0 | 1 | 0 | 0.119921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fac137087e41cae16ef6b6cc8d7e95ccb0632729 | 6,159 | py | Python | recipes/Python/578344_Simple_Finite_State_Machine_class_/recipe-578344.py | tdiprima/code | 61a74f5f93da087d27c70b2efe779ac6bd2a3b4f | [
"MIT"
] | 2,023 | 2017-07-29T09:34:46.000Z | 2022-03-24T08:00:45.000Z | recipes/Python/578344_Simple_Finite_State_Machine_class_/recipe-578344.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 32 | 2017-09-02T17:20:08.000Z | 2022-02-11T17:49:37.000Z | recipes/Python/578344_Simple_Finite_State_Machine_class_/recipe-578344.py | unhacker/code | 73b09edc1b9850c557a79296655f140ce5e853db | [
"MIT"
] | 780 | 2017-07-28T19:23:28.000Z | 2022-03-25T20:39:41.000Z | #! /usr/bin/env python
""" Generic finite state machine class
Initialise the class with a list of tuples - or by adding transitions
Tony Flury - November 2012
Released under an MIT License - free to use so long as the author and other contributers are credited.
"""
class fsm(object):
""" A simple to use finite state machine class.
Allows definition of multiple states, condition functions from state to state and optional callbacks
"""
def __init__(self, states=[]):
self._states=states
self.currentState = None
def start(self,startState=None):
""" Start the finite state machine
"""
if not startState or not (startState in [x[0] for x in self._states]):
raise ValueError("Not a valid start state")
self.currentState = startState
def stop(self):
""" Stop the finite state machine
"""
# Bug fix 15 Dec 2012 - self.currentState should be reset, not startState - Identified by Holger Waldmann
self.currentState = None
def addTransition(self,fromState, toState, condition, callback=None):
""" Add a state transition to the list, order is irellevant, loops are undetected
Can only add a transition if the state machine isn't started.
"""
        if self.currentState:
            raise ValueError("StateMachine already Started - cannot add new transitions")
# add a transition to the state table
self._states.append( (fromState, toState,condition, callback))
def event(self, value):
""" Trigger a transition - return a tuple (<new_state>, <changed>)
Raise an exception if no valid transition exists.
Callee needs to determine if the value will be consumed or re-used
"""
if not self.currentState:
raise ValueError("StateMachine not Started - cannot process event")
# get a list of transitions which are valid
self.nextStates = [ x for x in self._states\
if x[0] == self.currentState \
and (x[2]==True or (callable(x[2]) and x[2](value))) ]
if not self.nextStates:
raise ValueError("No Transition defined from state {0} with value '{1}'".format(self.currentState, value))
elif len(self.nextStates) > 1:
raise ValueError("Ambiguous transitions from state {0} with value '{1}' -> New states defined {2}".format(self.currentState, value, [x[0] for x in self.nextStates]))
else:
if len(self.nextStates[0]) == 4:
current, next, condition, callback = self.nextStates[0]
else:
current, next, condition = self.nextStates[0]
callback = None
self.currentState, changed = (next,True) \
if self.currentState != next else (next, False)
# Execute the callback if defined
if callable(callback):
callback(self, value)
return self.currentState, changed
def CurrentState(self):
""" Return the current State of the finite State machine
"""
return self.currentState
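# Minimal usage sketch (hypothetical two-state machine, separate from the tokeniser below):
#   m = fsm([("Off", "On", lambda v: v == "press"),
#            ("On", "Off", lambda v: v == "press")])
#   m.start("Off")
#   m.event("press")    # -> ("On", True)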
# -------------------------------------------------------------------------------------------------
# Example classes to demonstrate the use of the Finite State Machine Class
# They implement a simple lexical tokeniser.
# These classes are not neccesary for the FSM class to work.
# -------------------------------------------------------------------------------------------------
# Simple storage object for each token
class token(object):
def __init__(self, type):
self.tokenType = type
self.tokenText = ""
def addCharacter(self, char):
self.tokenText += char
def __repr__(self):
return "{0}<{1}>".format(self.tokenType, self.tokenText)
# Token list object - demonstrating the definition of state machine callbacks
class tokenList(object):
def __init__(self):
self.tokenList = []
self.currentToken = None
def StartToken(self, fss, value):
self.currentToken = token(fss.CurrentState())
self.currentToken.addCharacter(value)
def addCharacter(self, fss, value):
self.currentToken.addCharacter(value)
def EndToken(self, fss, value):
self.tokenList.append(self.currentToken)
self.currentToken = None
# Example code - showing population of the state machine in the constructor
# the Machine could also be constructed by multiple calls to addTransition method
# Example code is a simple tokeniser
# Machine transitions back to the Start state whenever the end of a token is detected
if __name__ == "__main__":
t = tokenList()
fs = fsm( [ ("Start","Start",lambda x: x.isspace() ),
("Start","Identifier",str.isalpha, t.StartToken ),
("Identifier","Identifier", str.isalnum, t.addCharacter ),
("Identifier","Start",lambda x: not x.isalnum(), t.EndToken ),
("Start","Operator", lambda x: x in "=+*/-()", t.StartToken ),
("Operator","Start", True, t.EndToken),
("Start","Number",str.isdigit, t.StartToken ),
("Number","Number",lambda x: x.isdigit() or x == ".", t.addCharacter ),
("Number","Start",lambda x: not x.isdigit() and x != ".", t.EndToken ),
("Start","StartQuote",lambda x: x == "\'"),
("StartQuote","String", lambda x: x != "\'", t.StartToken),
("String","String",lambda x: x != "\'", t.addCharacter ),
("String","EndQuote", lambda x: x == "\'", t.EndToken ),
("EndQuote","Start", True ) ] )
fs.start("Start")
a = " x123=MyString+123.65-'hello'*value"
c = 0
while c < len(a):
ret = fs.event(a[c])
# Make sure a transition back to start (from something else) does not consume the character.
if ret[0] != "Start" or (ret[0] == "Start" and ret[1] == False):
c += 1
ret = fs.event("")
    print(t.tokenList)
| 41.06 | 178 | 0.585485 | 717 | 6,159 | 4.988842 | 0.285914 | 0.058149 | 0.015656 | 0.023483 | 0.112385 | 0.04473 | 0.026838 | 0 | 0 | 0 | 0 | 0.009197 | 0.276181 | 6,159 | 149 | 179 | 41.33557 | 0.793181 | 0.177139 | 0 | 0.121951 | 0 | 0 | 0.131148 | 0.008319 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.012195 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fac3e7ee67811ede3b6d8461b25dcb5790afb786 | 475 | py | Python | Searches/models.py | Sofia190/book_store_app | 3c32f269604948bb4a495802d17794a68188e3a5 | [
"MIT"
] | null | null | null | Searches/models.py | Sofia190/book_store_app | 3c32f269604948bb4a495802d17794a68188e3a5 | [
"MIT"
] | null | null | null | Searches/models.py | Sofia190/book_store_app | 3c32f269604948bb4a495802d17794a68188e3a5 | [
"MIT"
] | null | null | null | from django.db import models
from django.conf import settings
from django.utils import timezone
# Create your models here.
class SearchQuery(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, blank=True, null=True, on_delete=models.CASCADE)
query = models.CharField(max_length=570)
    timestamp = models.DateField(auto_now=False, auto_now_add=False, default=timezone.now)  # pass the callable so it is evaluated per save, not once at import time
| 12.837838 | 100 | 0.755789 | 66 | 475 | 5.333333 | 0.545455 | 0.113636 | 0.068182 | 0.102273 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0075 | 0.157895 | 475 | 36 | 101 | 13.194444 | 0.8725 | 0.103158 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
fac956be30414f4e9750c5fba11e0bd38288e8e4 | 585 | py | Python | event/migrations/0003_event_org.py | Ortus-Team/Moim | 57bdd94ffb0c3b5d7dc74396264074e2a9a7f84a | [
"MIT"
] | null | null | null | event/migrations/0003_event_org.py | Ortus-Team/Moim | 57bdd94ffb0c3b5d7dc74396264074e2a9a7f84a | [
"MIT"
] | 6 | 2020-06-05T17:44:24.000Z | 2022-02-09T23:15:16.000Z | event/migrations/0003_event_org.py | Ortus-Team/Moim | 57bdd94ffb0c3b5d7dc74396264074e2a9a7f84a | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.6 on 2018-01-10 08:34
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('org', '0001_initial'),
('event', '0002_auto_20180102_2143'),
]
operations = [
migrations.AddField(
model_name='event',
name='org',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='org', to='org.Org'),
),
]
| 25.434783 | 126 | 0.634188 | 70 | 585 | 5.128571 | 0.671429 | 0.066852 | 0.077994 | 0.122563 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082222 | 0.230769 | 585 | 22 | 127 | 26.590909 | 0.715556 | 0.116239 | 0 | 0 | 1 | 0 | 0.118677 | 0.044747 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
facc8b55215e84d77bae17017885f0c7c6fa4a14 | 2,084 | py | Python | src/main/python/counts_tools/exec/deviation_analysis.py | cday97/beam | 7e1ab50eecaefafd04daab360f8b12bc7cab559b | [
"BSD-3-Clause-LBNL"
] | 123 | 2017-04-06T20:17:19.000Z | 2022-03-02T13:42:15.000Z | src/main/python/counts_tools/exec/deviation_analysis.py | cday97/beam | 7e1ab50eecaefafd04daab360f8b12bc7cab559b | [
"BSD-3-Clause-LBNL"
] | 2,676 | 2017-04-26T20:27:27.000Z | 2022-03-31T16:39:53.000Z | src/main/python/counts_tools/exec/deviation_analysis.py | cday97/beam | 7e1ab50eecaefafd04daab360f8b12bc7cab559b | [
"BSD-3-Clause-LBNL"
] | 60 | 2017-04-06T20:14:32.000Z | 2022-03-30T20:10:53.000Z | import ConfigParser
from datetime import datetime
import os
import sys
import numpy as np
import pandas as pd
import utils.counts
import utils.counts_deviation
__author__ = 'Andrew A Campbell'
# This script finds the days with the greatest deviation from some reference value (such as hourly means or medians)
if __name__ == '__main__':
if len(sys.argv) < 2:
        print('ERROR: need to supply the path to the config file')
        sys.exit(1)
config_path = sys.argv[1]
conf = ConfigParser.ConfigParser()
conf.read(config_path)
# Paths
station_TS_dir = conf.get('Paths', 'station_TS_dir') # Path to station Time Series
ref_counts_file = conf.get('Paths', 'ref_counts_file')
out_file = conf.get('Paths', 'out_file') # Where to write the counts file
# Parameters
start_date = conf.get('Params', 'start_date')
end_date = conf.get('Params', 'end_date')
days = [int(d.strip()) for d in conf.get('Params', 'days').split(',')]
measure = conf.get('Params', 'measure')
# Get target dates
targ_dates = utils.counts.date_string_list(start_date, end_date, days)
# Create the counts file
ref = utils.counts.df_from_counts(ref_counts_file) # DF w/ mean flow for each link
measures = []
keepers = []
for i, stat in enumerate(ref.columns):
# Get path to stat ts file
        print('Processing station: %s' % str(stat))
        print('Number %d of %d' % (i, ref.shape[1]))
ts_path = os.path.join(station_TS_dir, str(stat), 'time_series.csv')
c_dev = utils.counts_deviation.CountsDeviation(ts_path, targ_dates)
if c_dev.missing: # if there is missing data, we skip the whole station
print "Missing data. Skipping station: %s" % str(stat)
continue
c_dev.calc_measure(measure, reference=ref[stat])
measures.append(c_dev.measures[measure])
keepers.append(stat)
df = pd.DataFrame(measures).transpose()
df.columns = keepers
df.index = targ_dates
    df = df.dropna(axis=1)
df['Max_Dev'] = df.apply(np.sum, axis=1)
df.to_csv(out_file)
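    # Sketch of the expected config layout (sections/keys taken from the conf.get()
    # calls above; values are placeholders):
    #   [Paths]
    #   station_TS_dir = <dir with one subdirectory per station>
    #   ref_counts_file = <reference counts file>
    #   out_file = <output csv>
    #   [Params]
    #   start_date = <start date>
    #   end_date = <end date>
    #   days = 0,1,2
    #   measure = <measure name>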
| 32.5625 | 116 | 0.669866 | 306 | 2,084 | 4.385621 | 0.398693 | 0.036513 | 0.038748 | 0.025335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003069 | 0.21833 | 2,084 | 63 | 117 | 33.079365 | 0.820749 | 0.161708 | 0 | 0 | 0 | 0 | 0.158382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.186047 | null | null | 0.093023 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fae463c351e42ad7cf6fcfb323f650cd6cc418ae | 304 | py | Python | tests/test_statebus.py | invisible-college/tightrope | f0c96dd6702e9d4b730cffac70829b56f76077b6 | [
"MIT"
] | 1 | 2021-08-22T05:09:05.000Z | 2021-08-22T05:09:05.000Z | tests/test_statebus.py | invisible-college/tightrope | f0c96dd6702e9d4b730cffac70829b56f76077b6 | [
"MIT"
] | 3 | 2017-09-18T01:45:44.000Z | 2017-10-17T23:26:22.000Z | tests/test_statebus.py | invisible-college/tightrope | f0c96dd6702e9d4b730cffac70829b56f76077b6 | [
"MIT"
] | null | null | null | // Test calls to statebus server
var bus = require('statebus/server')();
bus.ws_client("/*", "ws://aws.local-box.org:45678");
x = bus.fetch("/paul/code");
console.log(JSON.stringify(x));
if (!x.written) {
console.log("No member .written found, setting it now");
x.written = "here it is";
}
save(x);
| 25.333333 | 58 | 0.654605 | 48 | 304 | 4.125 | 0.708333 | 0.141414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018797 | 0.125 | 304 | 11 | 59 | 27.636364 | 0.725564 | 0 | 0 | 0 | 0 | 0 | 0.346535 | 0.092409 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
fae48fbff8a06a587206ef5fa49056a7f5046d73 | 1,205 | py | Python | src/pyams_utils/interfaces/intids.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | src/pyams_utils/interfaces/intids.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | src/pyams_utils/interfaces/intids.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | #
# Copyright (c) 2008-2015 Thierry Florac <tflorac AT ulthar.net>
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
"""PyAMS_utils.interfaces.intids module
Small set of interfaces used by IIntIds utilities.
"""
from zope.interface import Interface
from zope.schema import Int, TextLine
__docformat__ = 'restructuredtext'
from pyams_utils import _
#
# Generic interfaces
#
class IIndexLength(Interface):
"""Index length interface"""
count = Int(title=_("Indexed elements count"),
readonly=True)
class IUniqueID(Interface):
"""Interface used to get unique ID of an object"""
oid = TextLine(title="Unique ID",
description="Globally unique identifier of this object can be used to create "
"internal links",
readonly=True)
| 26.777778 | 97 | 0.699585 | 152 | 1,205 | 5.493421 | 0.651316 | 0.028743 | 0.033533 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010684 | 0.223237 | 1,205 | 44 | 98 | 27.386364 | 0.88141 | 0.53278 | 0 | 0.166667 | 0 | 0 | 0.234522 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
fae717b2d4db53ea73a947ac37133ff735c46b9c | 282 | py | Python | reo/migrations/0083_merge_20201207_1317.py | akuam1/REopt_Lite_API | fb5a88ee52351b725fda5c15712b617f6e97ddca | [
"BSD-3-Clause"
] | 41 | 2020-02-21T08:25:17.000Z | 2022-01-14T23:06:42.000Z | reo/migrations/0083_merge_20201207_1317.py | akuam1/REopt_Lite_API | fb5a88ee52351b725fda5c15712b617f6e97ddca | [
"BSD-3-Clause"
] | 167 | 2020-02-17T17:26:47.000Z | 2022-01-20T20:36:54.000Z | reo/migrations/0083_merge_20201207_1317.py | akuam1/REopt_Lite_API | fb5a88ee52351b725fda5c15712b617f6e97ddca | [
"BSD-3-Clause"
] | 31 | 2020-02-20T00:22:51.000Z | 2021-12-10T05:48:08.000Z | # Generated by Django 2.2.13 on 2020-12-07 13:17
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('reo', '0075_auto_20201125_1947'),
('reo', '0082_chpmodel_chp_unavailability_hourly'),
]
operations = [
]
| 18.8 | 59 | 0.659574 | 34 | 282 | 5.264706 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164384 | 0.223404 | 282 | 14 | 60 | 20.142857 | 0.652968 | 0.163121 | 0 | 0 | 1 | 0 | 0.290598 | 0.264957 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
faeb54d9605b6182b7e92333f3846926f9dfc119 | 8,246 | py | Python | ls/joyous/models/one_off_events.py | tjwalch/ls.joyous | 0ee50d3af71c066bddb2310948b02f74b52ee253 | [
"BSD-3-Clause"
] | 72 | 2018-03-16T16:35:08.000Z | 2022-03-23T08:09:33.000Z | polrev/ls/joyous/models/one_off_events.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | [
"MIT"
] | 41 | 2018-03-25T20:36:52.000Z | 2022-03-10T08:59:27.000Z | polrev/ls/joyous/models/one_off_events.py | polrev-github/polrev-django | 99108ace1a5307b14c3eccb424a9f9616e8c02ae | [
"MIT"
] | 28 | 2018-08-13T22:36:09.000Z | 2022-03-17T12:24:15.000Z | # ------------------------------------------------------------------------------
# Joyous events models
# ------------------------------------------------------------------------------
import datetime as dt
from django.db import models
from django.db.models.query import ModelIterable
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
from wagtail.core.models import Page
from wagtail.admin.edit_handlers import FieldPanel
from wagtail.images.edit_handlers import ImageChooserPanel
from ..utils.telltime import (todayUtc, getAwareDatetime, getLocalDatetime,
getLocalDate, getLocalTime)
from ..utils.telltime import timeFormat
from ..edit_handlers import TimePanel
from ..forms import FormDefender
from .groups import get_group_model_string
from .event_base import (ThisEvent, EventsByDayList,
EventManager, EventQuerySet, EventPageForm, EventBase)
# ------------------------------------------------------------------------------
# Helper types and constants
# ------------------------------------------------------------------------------
_1day = dt.timedelta(days=1)
_2days = dt.timedelta(days=2)
# ------------------------------------------------------------------------------
# Event models
# ------------------------------------------------------------------------------
class SimpleEventQuerySet(EventQuerySet):
def current(self):
qs = super().current()
return qs.filter(date__gte = todayUtc() - _1day)
def future(self):
qs = super().future()
return qs.filter(date__gte = todayUtc() - _1day)
def past(self):
qs = super().past()
return qs.filter(date__lte = todayUtc() + _1day)
def byDay(self, fromDate, toDate):
request = self.request
class ByDayIterable(ModelIterable):
def __iter__(self):
evods = EventsByDayList(fromDate, toDate)
for page in super().__iter__():
pageFromDate = getLocalDate(page.date,
page.time_from, page.tz)
pageToDate = getLocalDate(page.date,
page.time_to, page.tz)
thisEvent = ThisEvent(page, url=page.get_url(request))
evods.add(thisEvent, pageFromDate, pageToDate)
yield from evods
qs = self._clone()
qs._iterable_class = ByDayIterable
return qs.filter(date__range=(fromDate - _2days, toDate + _2days))
class SimpleEventPage(EventBase, Page, metaclass=FormDefender):
events = EventManager.from_queryset(SimpleEventQuerySet)()
class Meta:
verbose_name = _("event page")
verbose_name_plural = _("event pages")
default_manager_name = "objects"
parent_page_types = ["joyous.CalendarPage",
"joyous.SpecificCalendarPage",
"joyous.GeneralCalendarPage",
get_group_model_string()]
subpage_types = []
base_form_class = EventPageForm
date = models.DateField(_("date"), default=dt.date.today)
content_panels = Page.content_panels + [
FieldPanel('category'),
ImageChooserPanel('image'),
FieldPanel('date'),
TimePanel('time_from'),
TimePanel('time_to'),
FieldPanel('tz'),
] + EventBase.content_panels1
# Anything inheriting from models.Model needs its own __init__ or
# modeltranslation patch_constructor may break it
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@property
def when(self):
"""
A string describing when the event occurs (in the local time zone).
"""
return self._getLocalWhen(self.date)
def _getFromTime(self, atDate=None):
"""
Time that the event starts (in the local time zone).
"""
return getLocalTime(self.date, self.time_from, self.tz)
def _getFromDt(self):
"""
Datetime that the event starts (in the local time zone).
"""
return getLocalDatetime(self.date, self.time_from, self.tz)
def _getToDt(self):
"""
Datetime that the event ends (in the local time zone).
"""
return getLocalDatetime(self.date, self.time_to, self.tz)
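# Illustration only: for a SimpleEventPage with date 2020-01-01, time_from 10:00 and
# tz UTC (hypothetical values), _getFromDt() returns that instant converted to the
# active local time zone via getLocalDatetime.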
# ------------------------------------------------------------------------------
class MultidayEventQuerySet(EventQuerySet):
def current(self):
qs = super().current()
return qs.filter(date_to__gte = todayUtc() - _1day)
def future(self):
qs = super().future()
return qs.filter(date_from__gte = todayUtc() - _1day)
def past(self):
qs = super().past()
return qs.filter(date_from__lte = todayUtc() + _1day)
def byDay(self, fromDate, toDate):
request = self.request
class ByDayIterable(ModelIterable):
def __iter__(self):
evods = EventsByDayList(fromDate, toDate)
for page in super().__iter__():
pageFromDate = getLocalDate(page.date_from,
page.time_from, page.tz)
pageToDate = getLocalDate(page.date_to,
page.time_to, page.tz)
thisEvent = ThisEvent(page, url=page.get_url(request))
evods.add(thisEvent, pageFromDate, pageToDate)
yield from evods
qs = self._clone()
qs._iterable_class = ByDayIterable
return qs.filter(date_to__gte = fromDate - _2days) \
.filter(date_from__lte = toDate + _2days)
class MultidayEventPageForm(EventPageForm):
def _checkStartBeforeEnd(self, cleaned_data):
startDate = cleaned_data.get('date_from', dt.date.min)
endDate = cleaned_data.get('date_to', dt.date.max)
if startDate > endDate:
self.add_error('date_to', _("Event cannot end before it starts"))
elif startDate == endDate:
super()._checkStartBeforeEnd(cleaned_data)
class MultidayEventPage(EventBase, Page, metaclass=FormDefender):
events = EventManager.from_queryset(MultidayEventQuerySet)()
class Meta:
verbose_name = _("multiday event page")
verbose_name_plural = _("multiday event pages")
default_manager_name = "objects"
parent_page_types = ["joyous.CalendarPage",
"joyous.SpecificCalendarPage",
"joyous.GeneralCalendarPage",
get_group_model_string()]
subpage_types = []
base_form_class = MultidayEventPageForm
date_from = models.DateField(_("start date"), default=dt.date.today)
date_to = models.DateField(_("end date"), default=dt.date.today)
content_panels = Page.content_panels + [
FieldPanel('category'),
ImageChooserPanel('image'),
FieldPanel('date_from'),
TimePanel('time_from'),
FieldPanel('date_to'),
TimePanel('time_to'),
FieldPanel('tz'),
] + EventBase.content_panels1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@property
def when(self):
"""
A string describing when the event occurs (in the local time zone).
"""
return self._getLocalWhen(self.date_from, self.date_to)
def _getFromTime(self, atDate=None):
"""
Time that the event starts (in the local time zone).
"""
return getLocalTime(self.date_from, self.time_from, self.tz)
def _getFromDt(self):
"""
Datetime that the event starts (in the local time zone).
"""
return getLocalDatetime(self.date_from, self.time_from, self.tz)
def _getToDt(self):
"""
Datetime that the event ends (in the local time zone).
"""
return getLocalDatetime(self.date_to, self.time_to, self.tz)
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
| 37.652968 | 80 | 0.557361 | 782 | 8,246 | 5.654731 | 0.213555 | 0.018091 | 0.025328 | 0.032564 | 0.664631 | 0.64066 | 0.639077 | 0.639077 | 0.587517 | 0.563998 | 0 | 0.002614 | 0.257822 | 8,246 | 218 | 81 | 37.825688 | 0.719935 | 0.173296 | 0 | 0.565517 | 0 | 0 | 0.057117 | 0.016017 | 0 | 0 | 0 | 0 | 0 | 1 | 0.144828 | false | 0 | 0.096552 | 0 | 0.503448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
faeec08412c17e1886d0f4332b15cb71403f5016 | 1,337 | py | Python | project/RealEstateMarketPlace/views/ListConversationsView.py | Mihaaai/RealEstateMarketplace | 9b9fa1376436801303e1ed0207ef09845a7d827e | [
"Apache-2.0"
] | null | null | null | project/RealEstateMarketPlace/views/ListConversationsView.py | Mihaaai/RealEstateMarketplace | 9b9fa1376436801303e1ed0207ef09845a7d827e | [
"Apache-2.0"
] | null | null | null | project/RealEstateMarketPlace/views/ListConversationsView.py | Mihaaai/RealEstateMarketplace | 9b9fa1376436801303e1ed0207ef09845a7d827e | [
"Apache-2.0"
] | null | null | null | from django.views.generic import ListView
from rest_framework import authentication, permissions
from ..models import Message,Listing,User
from django.db.models import Q
class ListConversationsView(ListView):
authentication_classes = (authentication.SessionAuthentication,)
permission_classes = (permissions.IsAuthenticated,)
template_name = 'list_conversations_template.html'
context_object_name = 'conversations'
def get_queryset(self):
        # get each listing for which there is at least one message sent or received by the logged-in user
_listings = Listing.objects.filter(pk__in = Message.objects.filter(Q(receiver_id=self.request.user)|Q(sender_id=self.request.user)).values('listing_id').distinct())
conversations = {}
#for each listing, find all users whom which logged user talked to
for listing in _listings:
sender_id_list = Message.objects.filter(receiver_id=self.request.user).filter(listing_id = listing).values('sender_id').distinct()
receiver_id_list = Message.objects.filter(sender_id=self.request.user).filter(listing_id = listing).values('receiver_id').distinct()
users = User.objects.filter(Q(pk__in = sender_id_list)| Q(pk__in = receiver_id_list)).distinct()
conversations[listing] = users
return conversations | 51.423077 | 174 | 0.735976 | 166 | 1,337 | 5.722892 | 0.36747 | 0.068421 | 0.054737 | 0.071579 | 0.214737 | 0.094737 | 0.094737 | 0.094737 | 0.094737 | 0 | 0 | 0 | 0.169035 | 1,337 | 26 | 175 | 51.423077 | 0.855086 | 0.103964 | 0 | 0 | 0 | 0 | 0.062657 | 0.026734 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0 | 0.611111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
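    # In list_conversations_template.html the dict is exposed as `conversations`
    # (context_object_name above); an illustrative template loop:
    #   {% for listing, users in conversations.items %} ... {% endfor %}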
faf66d5e9e6ff74d2a82b5b0fd5dc4b83c98d750 | 426 | py | Python | respostas/migrations/0016_resposta_materia.py | Samio-Santos/Sistema_Questoes_Django | 415c28b386ac7848fdd244ba51c20239b730f4ae | [
"MIT"
] | null | null | null | respostas/migrations/0016_resposta_materia.py | Samio-Santos/Sistema_Questoes_Django | 415c28b386ac7848fdd244ba51c20239b730f4ae | [
"MIT"
] | null | null | null | respostas/migrations/0016_resposta_materia.py | Samio-Santos/Sistema_Questoes_Django | 415c28b386ac7848fdd244ba51c20239b730f4ae | [
"MIT"
] | null | null | null | # Generated by Django 3.2 on 2021-07-02 21:43
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('respostas', '0015_alter_resposta_banca'),
]
operations = [
migrations.AddField(
model_name='resposta',
name='materia',
field=models.CharField(blank=True, default=None, max_length=20, null=True),
),
]
| 22.421053 | 87 | 0.617371 | 47 | 426 | 5.489362 | 0.829787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064103 | 0.267606 | 426 | 18 | 88 | 23.666667 | 0.762821 | 0.100939 | 0 | 0 | 1 | 0 | 0.128609 | 0.065617 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
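For reference, the AddField operation above corresponds to a model field like the following sketch (the model and field names come from the migration; the rest of the model is assumed):

from django.db import models

class Resposta(models.Model):
    # field introduced by migration 0016_resposta_materia
    materia = models.CharField(blank=True, default=None, max_length=20, null=True)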
faf67b2c9d286ee2d83587f71a298be32213ce3a | 523 | py | Python | libs/menus/menus.py | MilianoJunior/appSalva | d1ad23d06c57aa4b6d380ad637847b6842b68ccd | [
"MIT"
] | null | null | null | libs/menus/menus.py | MilianoJunior/appSalva | d1ad23d06c57aa4b6d380ad637847b6842b68ccd | [
"MIT"
] | null | null | null | libs/menus/menus.py | MilianoJunior/appSalva | d1ad23d06c57aa4b6d380ad637847b6842b68ccd | [
"MIT"
] | null | null | null | from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.uix.toolbar import MDToolbar
class Menus():
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def __call__(self):
box_central = MDBoxLayout(orientation='vertical')
# criar componentes
toolbar = MDToolbar(title='App Salva')
# navigation = NavegationMenu()()
#add componentes
box_central.add_widget(toolbar)
# box_central.add_widget(navigation)
return box_central | 26.15 | 57 | 0.66348 | 55 | 523 | 5.981818 | 0.563636 | 0.121581 | 0.079027 | 0.115502 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.23327 | 523 | 20 | 58 | 26.15 | 0.820449 | 0.216061 | 0 | 0 | 0 | 0 | 0.041872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
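A minimal sketch of wiring the Menus helper into an app, assuming KivyMD is installed (the app class name is hypothetical):

from kivymd.app import MDApp
from libs.menus.menus import Menus

class SalvaApp(MDApp):
    def build(self):
        # Menus() constructs the helper; calling the instance returns the root layout
        return Menus()()

if __name__ == '__main__':
    SalvaApp().run()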
faf6934e3cb37291d228f183808eb0c338d26479 | 1,342 | py | Python | setup.py | t-ceccarini/deep-b-spline-approximation | 9e48b593717486bbdac9bf0269a5645830d76082 | [
"MIT"
] | null | null | null | setup.py | t-ceccarini/deep-b-spline-approximation | 9e48b593717486bbdac9bf0269a5645830d76082 | [
"MIT"
] | null | null | null | setup.py | t-ceccarini/deep-b-spline-approximation | 9e48b593717486bbdac9bf0269a5645830d76082 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 13 23:29:22 2022
@author: Tommaso
"""
from setuptools import setup
VERSION = '0.2.8'
DESCRIPTION = 'A python package for bspline curve approximation using deep learning'
# Setting up
setup(
name='deep-b-spline-approximation',
packages=['deep_b_spline_approximation'],
version=VERSION,
author="Tommaso Ceccarini",
author_email="<tceccarini93@gmail.com>",
description=DESCRIPTION,
long_description_content_type="text/markdown",
url='https://github.com/t-ceccarini/deep-b-spline-approximation',
download_url='https://github.com/t-ceccarini/deep-b-spline-approximation/archive/refs/tags/v_0.2.8.tar.gz',
install_requires=['torch','prettytable','numpy','scipy','matplotlib'],
keywords=['python', 'deep learning', 'mlp', 'cnn', 'cagd', 'bspline', 'bezier'],
classifiers=[
"Development Status :: 1 - Planning",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows",
]
)
| 35.315789 | 111 | 0.651267 | 155 | 1,342 | 5.574194 | 0.6 | 0.109954 | 0.144676 | 0.150463 | 0.118056 | 0.118056 | 0.118056 | 0.118056 | 0.118056 | 0.118056 | 0 | 0.029385 | 0.188525 | 1,342 | 37 | 112 | 36.27027 | 0.764004 | 0.064829 | 0 | 0 | 0 | 0.035714 | 0.608347 | 0.0626 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
faf73cd0b10c574ff66ce3351fe78b6258b32478 | 1,697 | py | Python | source/db_api/crud/crud_documents.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | 1 | 2021-09-19T14:31:44.000Z | 2021-09-19T14:31:44.000Z | source/db_api/crud/crud_documents.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | 4 | 2020-10-13T08:41:49.000Z | 2021-04-29T18:05:40.000Z | source/db_api/crud/crud_documents.py | JungeAlexander/kbase_db_api | f3ec5e8b9ae509f9e8d962183efef21be61ef425 | [
"MIT"
] | null | null | null | from datetime import date
from typing import Iterable
from sqlalchemy.orm import Session
from db_api import models, schemas
def get_document(db: Session, document_id: str) -> models.Document:
return db.query(models.Document).filter(models.Document.id == document_id).first()
def get_documents_by_publication_date(
db: Session, document_date: date
) -> Iterable[models.Document]:
return (
db.query(models.Document)
.filter(models.Document.publication_date == document_date)
.all()
)
def get_documents(
db: Session, skip: int = 0, limit: int = 100
) -> Iterable[models.Document]:
return db.query(models.Document).offset(skip).limit(limit).all()
def get_document_ids(db: Session, skip: int = 0, limit: int = 100):
return db.query(models.Document.id).offset(skip).limit(limit).all()
def search_document_summary(
db: Session, query: str = "query"
) -> Iterable[models.Document]:
search = "%{}%".format(query)
return (
db.query(models.Document).filter(models.Document.summary.ilike(search)).all() # type: ignore
)
def create_document(db: Session, document: schemas.DocumentCreate) -> models.Document:
db_document = models.Document(**document.dict())
db.add(db_document)
db.commit()
db.refresh(db_document)
return db_document
def update_document(db: Session, document: schemas.DocumentUpdate) -> models.Document:
# TODO does not seem to update modified_date
new_document = models.Document(**document.dict())
old_document = get_document(db, new_document.id)
db.delete(old_document)
db.add(new_document)
db.commit()
db.refresh(new_document)
return new_document
| 28.762712 | 101 | 0.70772 | 222 | 1,697 | 5.27027 | 0.252252 | 0.191453 | 0.055556 | 0.081197 | 0.464103 | 0.28547 | 0.241026 | 0.241026 | 0.104274 | 0.104274 | 0 | 0.005662 | 0.167354 | 1,697 | 58 | 102 | 29.258621 | 0.822364 | 0.03241 | 0 | 0.170732 | 0 | 0 | 0.005491 | 0 | 0 | 0 | 0 | 0.017241 | 0 | 1 | 0.170732 | false | 0 | 0.097561 | 0.097561 | 0.439024 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
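A minimal usage sketch for the helpers above; the session factory name and the DocumentCreate fields are assumptions, not part of this module. Note also that update_document replaces the row with a delete/add pair rather than an UPDATE, which is why the TODO about modified_date exists.

from db_api import schemas
from db_api.crud import crud_documents
from db_api.database import SessionLocal  # hypothetical sessionmaker() factory

db = SessionLocal()
try:
    doc = crud_documents.create_document(
        db, schemas.DocumentCreate(id="doc-1", summary="an example summary")  # fields assumed
    )
    matches = crud_documents.search_document_summary(db, query="example")
finally:
    db.close()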
4f03405a3316d902d0f6702b629b6f2aae600c70 | 633 | py | Python | tests/functional_tests.py | Ecotrust/OPCDB | f639408c9cfdfa392a9233042f40e116c703fff1 | [
"MIT"
] | null | null | null | tests/functional_tests.py | Ecotrust/OPCDB | f639408c9cfdfa392a9233042f40e116c703fff1 | [
"MIT"
] | 7 | 2021-03-19T02:36:29.000Z | 2022-01-21T23:51:38.000Z | tests/functional_tests.py | Ecotrust/OPCDB | f639408c9cfdfa392a9233042f40e116c703fff1 | [
"MIT"
] | null | null | null | from selenium import webdriver
import unittest
class FirefoxTest(unittest.TestCase):
def setUp(self):
self.browser = webdriver.Firefox()
def tearDown(self):
self.browser.quit()
def test_page(self): #test method names must start with 'test'
self.browser.get('http://localhost:8000')
self.assertIn('Database', self.browser.title)
# self.fail('Finish the test!')
if __name__ == '__main__':
unittest.main() #call unittest.main(), which launches
# the unittest test runner, which will automatically find test classes and
# methods in the file and run them
| 31.65 | 80 | 0.669826 | 79 | 633 | 5.253165 | 0.620253 | 0.106024 | 0.072289 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008163 | 0.225908 | 633 | 19 | 81 | 33.315789 | 0.838776 | 0.341232 | 0 | 0 | 0 | 0 | 0.089806 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 1 | 0.25 | false | 0 | 0.166667 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
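The test opens a visible Firefox window; a headless variant of the same check, as a sketch (assumes geckodriver is on PATH):

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument('--headless')
browser = webdriver.Firefox(options=options)
try:
    browser.get('http://localhost:8000')
    assert 'Database' in browser.title
finally:
    browser.quit()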
4f0442ce4e58fffb400b80c3f2ed1670944947b9 | 336 | py | Python | hortiradar/database/restart_workers.py | mctenthij/big-tu-top10 | d551f944aa364728d97bb2b672276a97f8019749 | [
"Apache-2.0",
"BSD-2-Clause"
] | 7 | 2019-04-21T15:25:29.000Z | 2021-11-07T23:20:17.000Z | hortiradar/database/restart_workers.py | mctenthij/big-tu-top10 | d551f944aa364728d97bb2b672276a97f8019749 | [
"Apache-2.0",
"BSD-2-Clause"
] | null | null | null | hortiradar/database/restart_workers.py | mctenthij/big-tu-top10 | d551f944aa364728d97bb2b672276a97f8019749 | [
"Apache-2.0",
"BSD-2-Clause"
] | 2 | 2019-04-21T15:25:30.000Z | 2022-01-01T20:49:36.000Z | import os
import re
from subprocess import call
from time import sleep
supervisor_dir = "/etc/supervisor/conf.d/"
_, _, files = next(os.walk(supervisor_dir))
for f in files:
    m = re.match(r"(hortiradar-worker\d)\.conf", f)  # raw string: \d is an invalid escape in a plain string literal
if m:
worker = m.group(1)
call(["supervisorctl", "restart", worker])
sleep(60)
| 19.764706 | 50 | 0.642857 | 48 | 336 | 4.416667 | 0.604167 | 0.122642 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011407 | 0.217262 | 336 | 16 | 51 | 21 | 0.794677 | 0 | 0 | 0 | 0 | 0 | 0.208333 | 0.14881 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
877ea49df20cc55a5d0735462644f88406735b14 | 291 | py | Python | src/dots/token.py | Mokin711/dots-python | 6bc0c98daa331302df9c9829a7579be6e1bd828c | [
"MIT"
] | 1 | 2021-06-14T18:43:53.000Z | 2021-06-14T18:43:53.000Z | src/dots/token.py | Mokin711/dots-python | 6bc0c98daa331302df9c9829a7579be6e1bd828c | [
"MIT"
] | 1 | 2021-11-15T21:33:27.000Z | 2021-11-16T19:22:34.000Z | src/dots/token.py | Mokin711/dots-python | 6bc0c98daa331302df9c9829a7579be6e1bd828c | [
"MIT"
] | 1 | 2022-02-09T19:39:15.000Z | 2022-02-09T19:39:15.000Z | import base64
import dots
def get_auth_token():
    if dots.client_id is None or dots.api_key is None:
raise AssertionError('api_key and/or client_id not set')
token = base64.b64encode(bytes(dots.client_id + ':' + dots.api_key, 'utf-8')).decode('utf-8')
return token
| 24.25 | 97 | 0.670103 | 45 | 291 | 4.155556 | 0.555556 | 0.128342 | 0.128342 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034335 | 0.199313 | 291 | 11 | 98 | 26.454545 | 0.76824 | 0 | 0 | 0 | 0 | 0 | 0.147766 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
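A minimal usage sketch; the credential values and the endpoint URL are placeholders, and the requests dependency is an assumption. The token is simply the base64 of "client_id:api_key", i.e. HTTP Basic credentials.

import dots
import requests  # assumed to be available
from dots.token import get_auth_token

dots.client_id = 'my-client-id'      # placeholder credentials
dots.api_key = 'my-api-key'
resp = requests.get('https://api.example.com/resource',  # endpoint assumed
                    headers={'Authorization': 'Basic ' + get_auth_token()})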
877eab8db5158dc55512e44d21ca46730b5e208f | 946 | py | Python | setup.py | ctsit/lineman | d90e876d70fbc3d6ca18425d2748d70eb00ab485 | [
"Apache-2.0"
] | null | null | null | setup.py | ctsit/lineman | d90e876d70fbc3d6ca18425d2748d70eb00ab485 | [
"Apache-2.0"
] | 2 | 2017-05-23T18:45:01.000Z | 2017-09-26T17:02:34.000Z | setup.py | ctsit/lineman | d90e876d70fbc3d6ca18425d2748d70eb00ab485 | [
"Apache-2.0"
] | 3 | 2017-04-28T13:35:34.000Z | 2017-05-16T14:01:13.000Z | from setuptools import setup
#bring in __version__ from sourcecode
#per https://stackoverflow.com/a/17626524
#and https://stackoverflow.com/a/2073599
with open('lineman/version.py') as ver:
exec(ver.read())
setup(name='lineman',
version=__version__,
description='Lineman fixes data problems that will keep your data from going into redcap.',
url='http://github.com/ctsit/lineman',
author='Patrick White',
author_email='pfwhite9@gmail.com',
license='Apache License 2.0',
packages=['lineman'],
entry_points={
'console_scripts': [
'lineman = lineman.__main__:cli_run',
],
},
install_requires=['cappy==1.1.1',
'docopt==0.6.2',
'pyyaml==3.12',
'python-dateutil==2.6.1'],
dependency_links=["git+https://github.com/ctsit/cappy@1.1.1#egg=cappy-1.1.1"],
zip_safe=False)
| 32.62069 | 97 | 0.598309 | 117 | 946 | 4.675214 | 0.649573 | 0.021938 | 0.038391 | 0.043876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050992 | 0.2537 | 946 | 28 | 98 | 33.785714 | 0.723796 | 0.121564 | 0 | 0 | 0 | 0.045455 | 0.425121 | 0.055556 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.045455 | 0 | 0.045455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8784142013ef93ac4ae61c954de934c0b7a1cd9b | 2,247 | py | Python | cloudmarker/test/test_azwebapphttp20event.py | TinLe/cloudmarker | 29698420457a86d5d8a0bac156bc98bd656198e1 | [
"MIT"
] | 208 | 2019-04-10T05:15:11.000Z | 2022-03-16T17:41:29.000Z | cloudmarker/test/test_azwebapphttp20event.py | TinLe/cloudmarker | 29698420457a86d5d8a0bac156bc98bd656198e1 | [
"MIT"
] | 88 | 2018-12-17T18:24:13.000Z | 2021-05-15T04:19:53.000Z | cloudmarker/test/test_azwebapphttp20event.py | TinLe/cloudmarker | 29698420457a86d5d8a0bac156bc98bd656198e1 | [
"MIT"
] | 15 | 2019-01-03T04:18:33.000Z | 2021-06-03T09:24:31.000Z | """Tests for AzWebAppHttp20Event plugin."""
import copy
import unittest
from cloudmarker.events import azwebapphttp20event
base_record = {
'ext': {
'record_type': 'web_app_config',
'cloud_type': 'azure',
'http20_enabled': True
},
'com': {
'cloud_type': 'azure'
}
}
class AzWebAppHttp20EventTest(unittest.TestCase):
"""Tests for AzWebAppHttp20Event plugin."""
def test_com_bucket_missing(self):
record = copy.deepcopy(base_record)
record['com'] = None
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(events, [])
def test_cloud_type_non_azure(self):
record = copy.deepcopy(base_record)
record['com']['cloud_type'] = 'non_azure'
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(events, [])
def test_ext_bucket_missing(self):
record = copy.deepcopy(base_record)
record['ext'] = None
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(events, [])
def test_record_type_non_web_app_config(self):
record = copy.deepcopy(base_record)
record['ext']['record_type'] = 'non_web_app_config'
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(events, [])
def test_http20_enabled(self):
record = copy.deepcopy(base_record)
record['ext']['http20_enabled'] = True
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(events, [])
def test_http20_disabled(self):
record = copy.deepcopy(base_record)
record['ext']['http20_enabled'] = False
plugin = azwebapphttp20event.AzWebAppHttp20Event()
events = list(plugin.eval(record))
self.assertEqual(len(events), 1)
self.assertEqual(events[0]['ext']['record_type'],
'web_app_http20_event')
self.assertEqual(events[0]['com']['record_type'],
'web_app_http20_event')
| 32.565217 | 59 | 0.640854 | 227 | 2,247 | 6.127753 | 0.189427 | 0.086269 | 0.105679 | 0.094896 | 0.732567 | 0.716751 | 0.641984 | 0.641984 | 0.57872 | 0.505392 | 0 | 0.028605 | 0.23765 | 2,247 | 68 | 60 | 33.044118 | 0.783421 | 0.033378 | 0 | 0.462963 | 0 | 0 | 0.109671 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 1 | 0.111111 | false | 0 | 0.055556 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
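A sketch of exercising the plugin outside unittest, mirroring the http20-disabled case above:

from cloudmarker.events import azwebapphttp20event

record = {
    'ext': {'record_type': 'web_app_config', 'cloud_type': 'azure',
            'http20_enabled': False},
    'com': {'cloud_type': 'azure'},
}
plugin = azwebapphttp20event.AzWebAppHttp20Event()
for event in plugin.eval(record):
    # per the tests, both buckets carry record_type 'web_app_http20_event'
    print(event['ext']['record_type'], event['com']['record_type'])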
8791411f9352e21475daedce2828ad0066226068 | 419 | py | Python | ranking/migrations/0056_auto_20201128_2316.py | horacexd/clist | 9759dfea97b86514bec9825d2430abc36decacf0 | [
"Apache-2.0"
] | 166 | 2019-05-16T23:46:08.000Z | 2022-03-31T05:20:23.000Z | ranking/migrations/0056_auto_20201128_2316.py | horacexd/clist | 9759dfea97b86514bec9825d2430abc36decacf0 | [
"Apache-2.0"
] | 92 | 2020-01-18T22:51:53.000Z | 2022-03-12T01:23:57.000Z | ranking/migrations/0056_auto_20201128_2316.py | VadVergasov/clist | 4afcdfe88250d224043b28efa511749347cec71c | [
"Apache-2.0"
] | 23 | 2020-02-09T17:38:43.000Z | 2021-12-09T14:39:07.000Z | # Generated by Django 2.2.13 on 2020-11-28 23:16
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ranking', '0055_auto_20201009_0735'),
]
operations = [
migrations.AddIndex(
model_name='statistics',
index=models.Index(fields=['place_as_int', '-created'], name='ranking_sta_place_a_42252c_idx'),
),
]
| 23.277778 | 107 | 0.637232 | 49 | 419 | 5.22449 | 0.795918 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115987 | 0.238663 | 419 | 17 | 108 | 24.647059 | 0.68652 | 0.109785 | 0 | 0 | 1 | 0 | 0.242588 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
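For reference, the AddIndex operation corresponds to a Meta declaration like this sketch (the model and index names come from the migration; the field types are assumptions):

from django.db import models

class Statistics(models.Model):
    place_as_int = models.IntegerField(null=True)      # field type assumed
    created = models.DateTimeField(auto_now_add=True)  # field type assumed

    class Meta:
        indexes = [
            models.Index(fields=['place_as_int', '-created'],
                         name='ranking_sta_place_a_42252c_idx'),
        ]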
8797dc3ef1031456c069cbbb89dbdaafc3b5a76e | 137 | py | Python | ty.py | IsSveshuD/lab_2_12 | e7a276292fed67764526fff4dda582a86f2ddf45 | [
"MIT"
] | null | null | null | ty.py | IsSveshuD/lab_2_12 | e7a276292fed67764526fff4dda582a86f2ddf45 | [
"MIT"
] | null | null | null | ty.py | IsSveshuD/lab_2_12 | e7a276292fed67764526fff4dda582a86f2ddf45 | [
"MIT"
] | null | null | null | import re
def c(text, chars=" !?"):
    rx = re.compile(f'[{re.escape(chars)}]')  # character class, so each character in chars is replaced individually
text = rx.sub(r'-', text)
print(text)
a = 'dsf !?#'
c(a)
| 11.416667 | 31 | 0.489051 | 22 | 137 | 3.045455 | 0.636364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255474 | 137 | 11 | 32 | 12.454545 | 0.656863 | 0 | 0 | 0 | 0 | 0 | 0.131387 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.285714 | 0.142857 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8799edb410a9b593ae1ec6c87d90f8b46275cfe2 | 1,005 | py | Python | website/articles.py | ceyeoh/fyp_doppler | 4805378d57870560f8a8b450ec49b6c72a85962a | [
"MIT"
] | null | null | null | website/articles.py | ceyeoh/fyp_doppler | 4805378d57870560f8a8b450ec49b6c72a85962a | [
"MIT"
] | null | null | null | website/articles.py | ceyeoh/fyp_doppler | 4805378d57870560f8a8b450ec49b6c72a85962a | [
"MIT"
] | null | null | null | from flask import Blueprint, render_template
from flask_login import login_required, current_user
articles = Blueprint(
"articles",
__name__,
)
@articles.route("/intro-fgr")
@login_required
def intro():
return render_template("article-intro-fgr.html", user=current_user)
@articles.route("/causes-fgr")
@login_required
def causes():
return render_template("article-causes-fgr.html", user=current_user)
@articles.route("/twinsrisk-fgr")
@login_required
def twinsrisk():
return render_template("article-twinsrisk-fgr.html", user=current_user)
@articles.route("/symptoms-fgr")
@login_required
def symptoms():
return render_template("article-symptoms-fgr.html", user=current_user)
@articles.route("/diagnosis-fgr")
@login_required
def diagnosis():
return render_template("article-diagnosis-fgr.html", user=current_user)
@articles.route("/preventions-fgr")
@login_required
def preventions():
return render_template("article-preventions-fgr.html", user=current_user)
| 22.840909 | 77 | 0.763184 | 126 | 1,005 | 5.880952 | 0.190476 | 0.132254 | 0.153846 | 0.153846 | 0.265857 | 0.236167 | 0.236167 | 0 | 0 | 0 | 0 | 0 | 0.102488 | 1,005 | 43 | 78 | 23.372093 | 0.821508 | 0 | 0 | 0.2 | 0 | 0 | 0.234826 | 0.149254 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.066667 | 0.2 | 0.466667 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
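A minimal sketch of registering this blueprint (the factory name and URL prefix are assumptions; login_required also needs a configured flask_login.LoginManager, omitted here):

from flask import Flask
from website.articles import articles

def create_app():
    app = Flask(__name__)
    # routes become /articles/intro-fgr, /articles/causes-fgr, etc.
    app.register_blueprint(articles, url_prefix='/articles')
    return app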
879ae2125f2be56cca379202ae8598161954d149 | 2,777 | py | Python | rvaconnect/circles/migrations/0001_initial.py | rva-data/rvaconnect | dc7e387dd35971ff5514f2675532e29094843ae2 | [
"BSD-3-Clause"
] | 1 | 2015-01-27T05:24:13.000Z | 2015-01-27T05:24:13.000Z | rvaconnect/circles/migrations/0001_initial.py | rva-data/rvaconnect | dc7e387dd35971ff5514f2675532e29094843ae2 | [
"BSD-3-Clause"
] | null | null | null | rvaconnect/circles/migrations/0001_initial.py | rva-data/rvaconnect | dc7e387dd35971ff5514f2675532e29094843ae2 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'Group'
db.create_table(u'circles_group', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('created', self.gf('model_utils.fields.AutoCreatedField')(default=datetime.datetime.now)),
('modified', self.gf('model_utils.fields.AutoLastModifiedField')(default=datetime.datetime.now)),
('name', self.gf('django.db.models.fields.CharField')(max_length=100)),
('slug', self.gf('django.db.models.fields.SlugField')(max_length=100)),
('description_markdown', self.gf('django.db.models.fields.TextField')(default='')),
('description', self.gf('django.db.models.fields.TextField')(null=True)),
('status', self.gf('model_utils.fields.StatusField')(default='active', max_length=100, no_check_for_status=True)),
('url', self.gf('django.db.models.fields.URLField')(max_length=200, null=True, blank=True)),
('is_active', self.gf('django.db.models.fields.BooleanField')(default=True)),
('notes', self.gf('django.db.models.fields.TextField')(null=True, blank=True)),
))
db.send_create_signal(u'circles', ['Group'])
def backwards(self, orm):
# Deleting model 'Group'
db.delete_table(u'circles_group')
models = {
u'circles.group': {
'Meta': {'ordering': "['name']", 'object_name': 'Group'},
'created': ('model_utils.fields.AutoCreatedField', [], {'default': 'datetime.datetime.now'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'description_markdown': ('django.db.models.fields.TextField', [], {'default': "''"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'modified': ('model_utils.fields.AutoLastModifiedField', [], {'default': 'datetime.datetime.now'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'notes': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '100'}),
'status': ('model_utils.fields.StatusField', [], {'default': "'active'", 'max_length': '100', 'no_check_for_status': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '200', 'null': 'True', 'blank': 'True'})
}
}
complete_apps = ['circles'] | 55.54 | 138 | 0.605329 | 309 | 2,777 | 5.330097 | 0.236246 | 0.082574 | 0.136005 | 0.194293 | 0.700668 | 0.689739 | 0.645416 | 0.549484 | 0.327262 | 0.160291 | 0 | 0.011494 | 0.185452 | 2,777 | 50 | 139 | 55.54 | 0.716622 | 0.023407 | 0 | 0 | 0 | 0 | 0.451458 | 0.289406 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.102564 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
879cdfa799a43a3cd06f5d3f201e4e357ab443c1 | 550 | py | Python | models/tree.py | pigaov10/tree_manager | c85aa03d59536ebe6b8fac0407fd285094df3a65 | [
"Apache-2.0"
] | null | null | null | models/tree.py | pigaov10/tree_manager | c85aa03d59536ebe6b8fac0407fd285094df3a65 | [
"Apache-2.0"
] | null | null | null | models/tree.py | pigaov10/tree_manager | c85aa03d59536ebe6b8fac0407fd285094df3a65 | [
"Apache-2.0"
] | null | null | null | from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
def configure(app):
db.init_app(app)
app.db = db
class Tree(db.Model):
__tablename__ = 'tree'
id = db.Column(db.Integer, primary_key=True)
code = db.Column(db.String(50), nullable=False)
description = db.Column(db.String(255), nullable=False)
age = db.Column(db.Integer(), nullable=False)
# specie_id = db.Column(db.Integer, db.ForeignKey('specie.id'), nullable=False)
def __repr__(self):
return '<Tree %r>' % self.description | 30.555556 | 85 | 0.656364 | 74 | 550 | 4.716216 | 0.445946 | 0.114613 | 0.143266 | 0.146132 | 0.108883 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011416 | 0.203636 | 550 | 18 | 86 | 30.555556 | 0.785388 | 0.141818 | 0 | 0 | 0 | 0 | 0.028634 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0.076923 | 0.769231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
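A minimal wiring sketch for the configure() helper; the database URI and the row values are placeholders:

from flask import Flask
from models.tree import Tree, configure, db

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///trees.db'  # placeholder URI
configure(app)

with app.app_context():
    db.create_all()
    db.session.add(Tree(code='T-001', description='Oak near the gate', age=12))
    db.session.commit()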
87a23d912466c34c97307a017f8d2956a06cdbc3 | 1,729 | py | Python | examples/black_lives/create_progmem.py | fejiso/PxMatrix | fc53edf18af43ab3d0459890c0575243a3592445 | [
"BSD-3-Clause"
] | 599 | 2018-03-31T21:56:45.000Z | 2022-03-26T03:31:30.000Z | examples/black_lives/create_progmem.py | fejiso/PxMatrix | fc53edf18af43ab3d0459890c0575243a3592445 | [
"BSD-3-Clause"
] | 291 | 2018-03-29T11:59:26.000Z | 2022-03-24T19:44:32.000Z | examples/black_lives/create_progmem.py | fejiso/PxMatrix | fc53edf18af43ab3d0459890c0575243a3592445 | [
"BSD-3-Clause"
] | 144 | 2018-03-31T04:45:50.000Z | 2022-03-29T15:00:22.000Z | #!/usr/bin/python
import binascii
import sys
import glob, os
import pdb
file_no=0;
file_names=[];
RGB565=1;
out_string="";
def printrgb565(red, green, blue):
x1 = (red & 0xF8) | (green >> 5);
x2 = ((green & 0x1C) << 3) | (blue >> 3);
#pdb.set_trace()
this_string="0x" + str(binascii.hexlify(chr(x2))) + ",";
this_string+="0x" + str(binascii.hexlify(chr(x1))) + ",";
return this_string;
def printrgb888(red, green, blue):
this_string="0x" + str(binascii.hexlify(red)) + ",";
this_string+="0x" + str(binascii.hexlify(green)) + ",";
this_string+="0x" + str(binascii.hexlify(blue)) + ",";
return this_string;
out_string="uint8_t animation_lengths[]={";
for file in glob.glob("*.rgb"):
file_no=file_no+1;
file_names.append(str(file))
size = os.path.getsize(str(file))/64/32/3
out_string+=str(size)+ ",";
out_string=out_string[:-1];
out_string+="};\nconst uint8_t animations[] PROGMEM = {";
print (out_string)
byte_count=0;
for file_name in file_names:
size = os.path.getsize(str(file_name))
print(str(file_name)+ "- source_size: " + str(size));
with open(file_name, 'rb') as f:
byte0 = f.read(1)
while byte0 != "":
byte1 = f.read(1)
byte2 = f.read(1)
# Do stuff with byte.
if (RGB565):
out_string+=printrgb565(ord(byte0), ord(byte1), ord(byte2))
byte_count=byte_count+2;
else:
                out_string += printrgb888(byte0, byte1, byte2)  # printrgb888 takes three arguments; the stray out_string argument was a bug
byte_count=byte_count+3;
if ((byte_count%10)==0):
out_string+="\n";
byte0 = f.read(1)
#print(str(file_name)+ "- out_size: " + str(byte_count));
out_string+="0x00};";
out_file = open("anim_data.h", "w");
out_file.write(out_string);
out_file.close();
| 27.444444 | 65 | 0.626952 | 256 | 1,729 | 4.046875 | 0.324219 | 0.112934 | 0.057915 | 0.072394 | 0.196911 | 0.196911 | 0.063707 | 0 | 0 | 0 | 0 | 0.048729 | 0.18103 | 1,729 | 62 | 66 | 27.887097 | 0.68291 | 0.061885 | 0 | 0.08 | 0 | 0 | 0.079728 | 0.012979 | 0 | 0 | 0.007417 | 0 | 0 | 0 | null | null | 0 | 0.08 | null | null | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
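The script above is Python 2 (hexlify(chr(...)), the '' sentinel on binary reads). Its printrgb565 is standard RGB565 packing emitted low byte first; a Python 3 sketch of the same conversion:

def rgb888_to_rgb565_le(red, green, blue):
    # 5 bits red, 6 bits green, 5 bits blue, little-endian byte order
    value = ((red & 0xF8) << 8) | ((green & 0xFC) << 3) | (blue >> 3)
    return bytes([value & 0xFF, value >> 8])

assert rgb888_to_rgb565_le(255, 0, 0) == b'\x00\xf8'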
87a487207e754b62b27676fbeca5d8fa0f49a8b7 | 7,340 | py | Python | lisa_flexbe_states_flexbe_behaviors/src/lisa_flexbe_states_flexbe_behaviors/test_multiple_sm.py | lawrence-iviani/lisa-flexbe-states | 5a228b7a9139394c9bd9ea386725226fef7844ac | [
"BSD-3-Clause"
] | null | null | null | lisa_flexbe_states_flexbe_behaviors/src/lisa_flexbe_states_flexbe_behaviors/test_multiple_sm.py | lawrence-iviani/lisa-flexbe-states | 5a228b7a9139394c9bd9ea386725226fef7844ac | [
"BSD-3-Clause"
] | null | null | null | lisa_flexbe_states_flexbe_behaviors/src/lisa_flexbe_states_flexbe_behaviors/test_multiple_sm.py | lawrence-iviani/lisa-flexbe-states | 5a228b7a9139394c9bd9ea386725226fef7844ac | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
###########################################################
# WARNING: Generated code! #
# ************************** #
# Manual changes may get lost if file is generated again. #
# Only code inside the [MANUAL] tags will be kept. #
###########################################################
from flexbe_core import Behavior, Autonomy, OperatableStateMachine, ConcurrencyContainer, PriorityContainer, Logger
from lisa_flexbe_states_flexbe_states.lisa_utter_state import LisaUtterState
from lisa_flexbe_states_flexbe_states.lisa_utter_actionlib_state import LisaUtterActionState
from lisa_flexbe_states_flexbe_states.lisa_utter_and_wait_for_intent_state import LisaUtterAndWaitForIntentState
from flexbe_states.check_condition_state import CheckConditionState
from lisa_flexbe_states_flexbe_states.lisa_extract_payload_key import LisaGetPayloadKeyState
# Additional imports can be added inside the following tags
# [MANUAL_IMPORT]
# [/MANUAL_IMPORT]
'''
Created on Mon Nov 25 2020
@author: lawrence iviani
'''
class test_multipleSM(Behavior):
'''
a test of interactions with several repeated blocks
'''
def __init__(self):
super(test_multipleSM, self).__init__()
self.name = 'test_multiple'
# parameters of this behavior
# references to used behaviors
# Additional initialization code can be added inside the following tags
# [MANUAL_INIT]
# [/MANUAL_INIT]
# Behavior comments:
def create(self):
wait_time_utter = 5
context_id = "test_multiple"
intent_1 = ["GetTime"]
intent_2 = ["YesNo"]
suspend_time = 1.5
wait_time_interaction = 10
# x:633 y:607, x:643 y:65
_state_machine = OperatableStateMachine(outcomes=['finished', 'failed'])
_state_machine.userdata.utter_1 = "Utterance example 1"
_state_machine.userdata.utter_2 = "Utterance example 2, a little bit longer"
_state_machine.userdata.utter_repeat = "Repeat the test"
_state_machine.userdata.utter_and_intent_1 = "Intent is Get Time"
_state_machine.userdata.utter_and_intent_2 = "Intent is Continue Yes or no"
# Additional creation code can be added inside the following tags
# [MANUAL_CREATE]
# [/MANUAL_CREATE]
with _state_machine:
# x:62 y:59
OperatableStateMachine.add('Utter_1',
LisaUtterState(context_id=context_id, wait_time=wait_time_utter, suspend_time=suspend_time),
transitions={'done': 'UtterAndWaitForIntent_1', 'preempt': 'finished', 'timeouted': 'UtterAndWaitForIntent_1', 'error': 'failed'},
autonomy={'done': Autonomy.Off, 'preempt': Autonomy.Off, 'timeouted': Autonomy.Off, 'error': Autonomy.Off},
remapping={'text_to_utter': 'utter_1', 'error_reason': 'error_reason'})
# x:1173 y:38
OperatableStateMachine.add('UtterActionLib',
LisaUtterActionState(text_to_utter='Intent Not Recognized', wait_time=0),
transitions={'uttered_all': 'finished', 'timeout': 'failed', 'command_error': 'failed'},
autonomy={'uttered_all': Autonomy.Off, 'timeout': Autonomy.Off, 'command_error': Autonomy.Off},
remapping={'error_reason': 'error_reason'})
# x:596 y:287
OperatableStateMachine.add('Utter_2',
LisaUtterState(context_id=context_id, wait_time=wait_time_utter, suspend_time=suspend_time),
transitions={'done': 'UtterAndWaitForIntent_2', 'preempt': 'finished', 'timeouted': 'UtterAndWaitForIntent_2', 'error': 'failed'},
autonomy={'done': Autonomy.Off, 'preempt': Autonomy.Off, 'timeouted': Autonomy.Off, 'error': Autonomy.Off},
remapping={'text_to_utter': 'utter_2', 'error_reason': 'error_reason'})
# x:1045 y:374
OperatableStateMachine.add('UtterAndWaitForIntent_2',
LisaUtterAndWaitForIntentState(context_id=context_id, intents=intent_2, wait_time=wait_time_interaction),
transitions={'intent_recognized': 'get_answer', 'intent_not_recognized': 'utter_not_recogn_2', 'preempt': 'finished', 'timeouted': 'utter_not_recogn_2', 'error': 'failed'},
autonomy={'intent_recognized': Autonomy.Off, 'intent_not_recognized': Autonomy.Off, 'preempt': Autonomy.Off, 'timeouted': Autonomy.Off, 'error': Autonomy.Off},
remapping={'text_to_utter': 'utter_and_intent_2', 'payload': 'payload', 'original_sentence': 'original_sentence', 'error_reason': 'error_reason', 'intent_recognized': 'intent_recognized'})
# x:17 y:609
OperatableStateMachine.add('UtterNoTimeout',
LisaUtterState(context_id=context_id, wait_time=0, suspend_time=0),
transitions={'done': 'Utter_1', 'preempt': 'finished', 'timeouted': 'Utter_1', 'error': 'failed'},
autonomy={'done': Autonomy.Off, 'preempt': Autonomy.Off, 'timeouted': Autonomy.Off, 'error': Autonomy.Off},
remapping={'text_to_utter': 'utter_repeat', 'error_reason': 'error_reason'})
# x:1373 y:612
OperatableStateMachine.add('check_finish',
CheckConditionState(predicate=lambda x: x=="Yes"),
transitions={'true': 'UtterActionLib', 'false': 'UtterNoTimeout'},
autonomy={'true': Autonomy.Off, 'false': Autonomy.Off},
remapping={'input_value': 'answer'})
# x:171 y:264
OperatableStateMachine.add('utter_not_recogn_1',
LisaUtterActionState(text_to_utter="Intent 1 not recognized try again", wait_time=wait_time_utter),
transitions={'uttered_all': 'UtterAndWaitForIntent_1', 'timeout': 'UtterAndWaitForIntent_1', 'command_error': 'failed'},
autonomy={'uttered_all': Autonomy.Off, 'timeout': Autonomy.Off, 'command_error': Autonomy.Off},
remapping={'error_reason': 'error_reason'})
# x:937 y:513
OperatableStateMachine.add('utter_not_recogn_2',
LisaUtterActionState(text_to_utter="Intent 2 not recognized try again", wait_time=wait_time_utter),
transitions={'uttered_all': 'UtterAndWaitForIntent_2', 'timeout': 'UtterAndWaitForIntent_2', 'command_error': 'failed'},
autonomy={'uttered_all': Autonomy.Off, 'timeout': Autonomy.Off, 'command_error': Autonomy.Off},
remapping={'error_reason': 'error_reason'})
# x:289 y:121
OperatableStateMachine.add('UtterAndWaitForIntent_1',
LisaUtterAndWaitForIntentState(context_id=context_id, intents=intent_1, wait_time=wait_time_interaction),
transitions={'intent_recognized': 'Utter_2', 'intent_not_recognized': 'utter_not_recogn_1', 'preempt': 'finished', 'timeouted': 'utter_not_recogn_1', 'error': 'failed'},
autonomy={'intent_recognized': Autonomy.Off, 'intent_not_recognized': Autonomy.Off, 'preempt': Autonomy.Off, 'timeouted': Autonomy.Off, 'error': Autonomy.Off},
remapping={'text_to_utter': 'utter_and_intent_1', 'payload': 'payload', 'original_sentence': 'original_sentence', 'error_reason': 'error_reason', 'intent_recognized': 'intent_recognized'})
# x:1342 y:454
OperatableStateMachine.add('get_answer',
LisaGetPayloadKeyState(payload_key='confirm'),
transitions={'done': 'check_finish', 'error': 'failed'},
autonomy={'done': Autonomy.Off, 'error': Autonomy.Off},
remapping={'payload': 'payload', 'payload_value': 'answer'})
return _state_machine
# Private functions can be added inside the following tags
# [MANUAL_FUNC]
# [/MANUAL_FUNC]
| 49.261745 | 198 | 0.700409 | 831 | 7,340 | 5.901324 | 0.222623 | 0.078507 | 0.040783 | 0.045881 | 0.564437 | 0.505506 | 0.450653 | 0.410277 | 0.347675 | 0.330546 | 0 | 0.019778 | 0.152725 | 7,340 | 148 | 199 | 49.594595 | 0.768773 | 0.124659 | 0 | 0.173333 | 1 | 0 | 0.320974 | 0.050621 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026667 | false | 0 | 0.08 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87a653834398311d35699ada567936f2e6f4ca64 | 410 | py | Python | data_preparation/jobScreening_cvpr17/extract_spectograms.py | segurac/richEmbeddings | 3279714c4b70db09740152822951cd0359fda8c8 | [
"Apache-2.0"
] | null | null | null | data_preparation/jobScreening_cvpr17/extract_spectograms.py | segurac/richEmbeddings | 3279714c4b70db09740152822951cd0359fda8c8 | [
"Apache-2.0"
] | null | null | null | data_preparation/jobScreening_cvpr17/extract_spectograms.py | segurac/richEmbeddings | 3279714c4b70db09740152822951cd0359fda8c8 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
import pickle
import numpy as np
from scipy.io import wavfile
import python_speech_features as fextract
audio_filename = sys.argv[1]
features_filename = sys.argv[2]
rate, sig = wavfile.read(audio_filename)
fbank_feat = fextract.logfbank(sig,samplerate=rate)
with open(features_filename, 'wb') as stream:
pickle.dump(fbank_feat, stream)
| 15.769231 | 51 | 0.74878 | 61 | 410 | 4.901639 | 0.622951 | 0.086957 | 0.100334 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011396 | 0.143902 | 410 | 25 | 52 | 16.4 | 0.840456 | 0.104878 | 0 | 0 | 0 | 0 | 0.005495 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.454545 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
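A round-trip sketch; the file names are placeholders. With python_speech_features defaults, logfbank yields 26 log filterbank energies per frame.

# extraction step: python3 extract_spectograms.py input.wav features.pkl
import pickle

with open('features.pkl', 'rb') as stream:
    fbank_feat = pickle.load(stream)
print(fbank_feat.shape)  # (num_frames, 26) with the default nfilt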
87a6ec55d7fc458c9d50fc876766d5e4b737fb6f | 452 | py | Python | urls.py | markbate/whiskerboard | fe157c1eff068c089f6948ac5cf21f5a6ff36600 | [
"MIT"
] | 20 | 2015-03-31T09:43:43.000Z | 2021-06-12T23:41:28.000Z | urls.py | ametaireau/whiskerboard | b539337416069e0c794b4c3e4dfdd1afc64562cb | [
"MIT"
] | 5 | 2015-01-19T23:07:52.000Z | 2021-06-10T17:38:37.000Z | urls.py | ametaireau/whiskerboard | b539337416069e0c794b4c3e4dfdd1afc64562cb | [
"MIT"
] | 6 | 2015-05-14T21:05:31.000Z | 2018-04-07T22:40:39.000Z | from django.conf.urls.defaults import patterns, include, url
from django.contrib import admin
from board.feeds import EventFeed
from board.views import IndexView, ServiceView
admin.autodiscover()
urlpatterns = patterns('',
url(r'^$', IndexView.as_view(), name='index'),
url(r'^services/(?P<slug>[-\w]+)$', ServiceView.as_view(), name='service'),
url(r'^feed$', EventFeed(), name='feed'),
url(r'^admin/', include(admin.site.urls)),
)
| 30.133333 | 79 | 0.692478 | 60 | 452 | 5.183333 | 0.516667 | 0.051447 | 0.064309 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.119469 | 452 | 14 | 80 | 32.285714 | 0.781407 | 0 | 0 | 0 | 0 | 0 | 0.128319 | 0.059735 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
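django.conf.urls.defaults and patterns() were removed in later Django releases; for reference, a sketch of the same routes in modern syntax (admin.autodiscover() is no longer needed):

from django.contrib import admin
from django.urls import path, re_path

from board.feeds import EventFeed
from board.views import IndexView, ServiceView

urlpatterns = [
    path('', IndexView.as_view(), name='index'),
    re_path(r'^services/(?P<slug>[-\w]+)$', ServiceView.as_view(), name='service'),
    path('feed', EventFeed(), name='feed'),
    path('admin/', admin.site.urls),
]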
87b2ac0cb62dd997f1308319e2198c8f122f2f8d | 458 | py | Python | tests/mock_module.py | nullpsifer/cryptosploit | e33cfca07397c05dffa734274c202acc7ff597b4 | [
"MIT"
] | null | null | null | tests/mock_module.py | nullpsifer/cryptosploit | e33cfca07397c05dffa734274c202acc7ff597b4 | [
"MIT"
] | null | null | null | tests/mock_module.py | nullpsifer/cryptosploit | e33cfca07397c05dffa734274c202acc7ff597b4 | [
"MIT"
] | null | null | null | from modules.abstract_module import *
class MockModule(AbstractModule):
executed = False
name = "mock_module"
description = "Module for testing purposes."
arguments = [
ModuleArgumentDescription("Arg1", "Argument 1", True),
ModuleArgumentDescription("Arg2", "Argument 2", False),
ModuleArgumentDescription("Arg3", "Argument 3", False)
]
def execute(self):
self.executed = True | 25.444444 | 67 | 0.635371 | 40 | 458 | 7.225 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017751 | 0.262009 | 458 | 18 | 68 | 25.444444 | 0.837278 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
87b5a69588b76f251c9cd5a2072d8ea5e658ab2d | 850 | py | Python | service/server.py | IsraelAbebe/fake-news-classification | 3c8c46d8e4222a5f70daea423b7a90480cb2044c | [
"MIT"
] | null | null | null | service/server.py | IsraelAbebe/fake-news-classification | 3c8c46d8e4222a5f70daea423b7a90480cb2044c | [
"MIT"
] | null | null | null | service/server.py | IsraelAbebe/fake-news-classification | 3c8c46d8e4222a5f70daea423b7a90480cb2044c | [
"MIT"
] | null | null | null |
import grpc
from concurrent import futures
import time
import sys
sys.path.insert(0, 'service/')
from service_spec import fake_news_pb2
from service_spec import fake_news_pb2_grpc
import json
import test
class fake_news_classificationServicer(fake_news_pb2_grpc.fake_news_classificationServicer):
def classify(self, request, context):
response = fake_news_pb2.OutputMessage()
response.result = test.predict(request.value)
return response
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
fake_news_pb2_grpc.add_fake_news_classificationServicer_to_server(fake_news_classificationServicer(), server)
print('Starting server. Listening on port 7011.')
server.add_insecure_port('0.0.0.0:7011')
server.start()
try:
while True:
time.sleep(86400)
except KeyboardInterrupt:
server.stop(0)
| 23.611111 | 109 | 0.787059 | 114 | 850 | 5.614035 | 0.464912 | 0.1125 | 0.085938 | 0.070313 | 0.1 | 0.1 | 0.1 | 0 | 0 | 0 | 0 | 0.035278 | 0.132941 | 850 | 35 | 110 | 24.285714 | 0.833107 | 0 | 0 | 0 | 0 | 0 | 0.070671 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.333333 | 0 | 0.458333 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
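A minimal client sketch for the service above. The request type name InputMessage and its value field are assumptions inferred from the servicer (the code only shows OutputMessage):

import sys
sys.path.insert(0, 'service/')
import grpc
from service_spec import fake_news_pb2, fake_news_pb2_grpc

channel = grpc.insecure_channel('localhost:7011')
stub = fake_news_pb2_grpc.fake_news_classificationStub(channel)
response = stub.classify(fake_news_pb2.InputMessage(value='Some headline text'))  # request type assumed
print(response.result)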
87ca3a57df23d8770609b8c197e911f0d1988dd7 | 1,473 | py | Python | exchanges/okex.py | soulmachine/crypto-market-data | 1dbf1cfd28754a37dd054777feadc1554e1cccaf | [
"Apache-2.0"
] | null | null | null | exchanges/okex.py | soulmachine/crypto-market-data | 1dbf1cfd28754a37dd054777feadc1554e1cccaf | [
"Apache-2.0"
] | null | null | null | exchanges/okex.py | soulmachine/crypto-market-data | 1dbf1cfd28754a37dd054777feadc1554e1cccaf | [
"Apache-2.0"
] | null | null | null | from typing import Any, Dict, List
from .utils import get_json
def fetch_markets(market_type: str) -> List[Dict[str, Any]]:
    '''Fetch all trading markets from a crypto exchange.'''
if market_type == 'future':
return _fetch_future_markets()
elif market_type == 'option':
return _fetch_option_markets()
elif market_type == 'spot':
return _fetch_spot_markets()
elif market_type == 'swap':
return _fetch_swap_markets()
else:
raise ValueError(f'Unknown market type: {market_type}')
def _fetch_future_markets() -> List[Dict[str, Any]]:
url = 'https://www.okex.com/api/futures/v3/instruments'
return get_json(url)
def _fetch_spot_markets() -> List[Dict[str, Any]]:
url = 'https://www.okex.com/api/spot/v3/instruments'
symbols = get_json(url)
symbols.sort(key=lambda x: x['instrument_id'])
return symbols
def _fetch_swap_markets() -> List[Dict[str, Any]]:
url = 'https://www.okex.com/api/swap/v3/instruments'
return get_json(url)
def _fetch_option_markets_underlying(underlying: str) -> List[Dict[str, Any]]:
url = f'https://www.okex.com/api/option/v3/instruments/{underlying}'
return get_json(url)
def _fetch_option_markets() -> List[Dict[str, Any]]:
underlying = ["BTC-USD", "ETH-USD", "EOS-USD"]
lst: List[Dict[str, Any]] = []
for underlying_symbol in underlying:
lst.extend(_fetch_option_markets_underlying(underlying_symbol))
return lst
| 30.6875 | 78 | 0.680923 | 204 | 1,473 | 4.686275 | 0.284314 | 0.073222 | 0.080544 | 0.10251 | 0.394351 | 0.261506 | 0.261506 | 0.261506 | 0.131799 | 0.131799 | 0 | 0.003309 | 0.179226 | 1,473 | 47 | 79 | 31.340426 | 0.787428 | 0.032587 | 0 | 0.090909 | 0 | 0 | 0.198732 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.060606 | 0 | 0.515152 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
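A minimal usage sketch; note these helpers perform live HTTP requests against the OKEx v3 REST endpoints (the module path is assumed from the repo layout):

from exchanges.okex import fetch_markets

spot = fetch_markets('spot')
print(len(spot), spot[0]['instrument_id'] if spot else None)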
87d835e669333bab23cb9ebd086673441ab97685 | 1,821 | py | Python | src/issues/search_indexes.py | ofirr/OpenCommunity | 7786ac2996530af8f545f4398c071793c73634c8 | [
"BSD-3-Clause"
] | null | null | null | src/issues/search_indexes.py | ofirr/OpenCommunity | 7786ac2996530af8f545f4398c071793c73634c8 | [
"BSD-3-Clause"
] | null | null | null | src/issues/search_indexes.py | ofirr/OpenCommunity | 7786ac2996530af8f545f4398c071793c73634c8 | [
"BSD-3-Clause"
] | null | null | null | from haystack import indexes
from issues.models import Issue, Proposal
from haystack.fields import IntegerField, CharField, BooleanField, DateField, DateTimeField
from datetime import date, datetime, timedelta
class IssueIndex(indexes.ModelSearchIndex, indexes.Indexable):
community = IntegerField(model_attr='community_id')
is_confidential = BooleanField(model_attr='is_confidential')
class Meta:
model = Issue
fields = ['title', 'abstract']
# Note that regular ``SearchIndex`` methods apply.
def index_queryset(self, using=None):
"Used when the entire index for model is updated."
return Issue.objects.active()
class ProposalIndex(indexes.ModelSearchIndex, indexes.Indexable):
text = CharField(document=True, use_template=True)
active = BooleanField(model_attr='active')
title = CharField(model_attr='title')
community = IntegerField(model_attr='issue__community_id')
status = IntegerField(model_attr='status')
task_completed = BooleanField(model_attr='task_completed')
type = IntegerField(model_attr='type')
decided_at = DateTimeField()
assignee = CharField()
due_by = DateField(model_attr='due_by', null=True)
is_confidential = BooleanField(model_attr='is_confidential')
def get_model(self):
return Proposal
def prepare_assignee(self, obj):
return u'' if not obj.assigned_to_user else \
obj.assigned_to_user.display_name
def prepare_decided_at(self, obj):
return obj.created_at if not obj.decided_at_meeting \
else obj.decided_at_meeting.held_at
# Note that regular ``SearchIndex`` methods apply.
def index_queryset(self, using=None):
"Used when the entire index for model is updated."
return Proposal.objects.active()
| 37.163265 | 91 | 0.721032 | 218 | 1,821 | 5.834862 | 0.376147 | 0.070755 | 0.066038 | 0.061321 | 0.253145 | 0.253145 | 0.253145 | 0.176101 | 0.176101 | 0.176101 | 0 | 0 | 0.189456 | 1,821 | 48 | 92 | 37.9375 | 0.861789 | 0.107633 | 0 | 0.166667 | 0 | 0 | 0.122603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.138889 | false | 0 | 0.111111 | 0.083333 | 0.833333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
87e4656207a6810c62e813656c8a6d18731bd5ed | 3,265 | py | Python | Python/ldap/neo2open.py | ebouaziz/miscripts | 9520d31adfd8cf63a06d519b0c308f07dd107b90 | [
"MIT"
] | null | null | null | Python/ldap/neo2open.py | ebouaziz/miscripts | 9520d31adfd8cf63a06d519b0c308f07dd107b90 | [
"MIT"
] | null | null | null | Python/ldap/neo2open.py | ebouaziz/miscripts | 9520d31adfd8cf63a06d519b0c308f07dd107b90 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Create/update LDAP entries from custom directory to opendirectory schema
import binascii
import os
import re
import sys
cmtcre = re.compile(r'#.*$')
try:
filename = sys.argv[1]
except IndexError:
filename = os.path.join(os.path.expanduser('~'), 'Desktop', 'openldap.ldif')
def get_users(filename):
attributes = []
with open(filename, 'rt') as in_:
for (n,l) in enumerate(in_):
l = l.strip('\r\n')
l = cmtcre.sub('', l).rstrip('\t ')
if not l:
if attributes:
dattr = {}
for k,t,v in attributes:
dattr.setdefault(k, []).append((t,v))
try:
dn = dattr['dn'][0][1]
except KeyError:
print >> sys.stderr, "No DN: ", attributes
raise StopIteration
if 'ou=people' in [x.lower() for x in dn.split(',')]:
yield dattr
#raise StopIteration
else:
print >> sys.stderr, "Not a people DN"
attributes = []
continue
#print n,l
if l[0] in ' \t':
# continuation
attributes[-1] = (attributes[-1][0],
attributes[-1][1],
attributes[-1][2]+l[1:])
continue
items = l.split(':')
k,v = items[0], items[-1].lstrip(' \t')
b64 = len(items) > 2
attributes.append((k, b64, v))
OBJECTCLASSES = ['inetOrgPerson','posixAccount','shadowAccount',
#'apple-user',
'extensibleObject','organizationalPerson','top','person']
def update_user(attributes, uid, gid):
# add objectclass
delattrs = []
for attr in attributes:
if attr.lower().startswith('trac'):
delattrs.append(attr)
if attr.lower() in ['objectclass']:
delattrs.append(attr)
for attr in set(delattrs):
del attributes[attr]
attributes['objectclass'] = zip([False]*len(OBJECTCLASSES), OBJECTCLASSES)
attributes['structuralObjectClass'] = [(False, 'inetOrgPerson')]
attributes['uidNumber'] = [(False, str(uid))]
attributes['gidNumber'] = [(False, str(gid))]
attributes['homeDirectory'] = [(False, '/dev/null')]
attributes['loginShell'] = [(False, '/bin/bash')]
def export_user(dn, attrs):
lmax = 77
ndn = []
for it in dn.split(','):
k,v = it.split('=')
if k == 'ou':
k = 'cn'
v = 'users'
ndn.append('='.join([k,v]))
dn = ','.join(ndn)
    print('dn:', dn)
    for k in attrs:
        for t, v in attrs[k]:
            l = '%s:%s %s' % (k, t and ':' or '', v)
            print('\n '.join([l[lmax*x:lmax*(x+1)]
                              for x in range((len(v)+lmax-1)//lmax)]))  # // for integer division on Python 3
    print('')
uid = 1100
gid = 20
for attributes in get_users(filename):
uid += 1
(dn, ) = attributes['dn']
del attributes['dn']
update_user(attributes, uid, gid)
export_user(dn[1], attributes)
#import pprint
#pprint.pprint(attributes) | 32.326733 | 80 | 0.488821 | 357 | 3,265 | 4.448179 | 0.352941 | 0.027708 | 0.020151 | 0.028967 | 0.032746 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014811 | 0.358959 | 3,265 | 101 | 81 | 32.326733 | 0.743908 | 0.061562 | 0 | 0.097561 | 0 | 0 | 0.101113 | 0.006872 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.04878 | null | null | 0.060976 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87e58bcf6dcfe22c279e06b2787f924dca81ae9f | 429 | py | Python | Practice Problem Solutions/5 - Lists/program.py | argosopentech/practical-programming-in-python | ae5aebcda6968ff327b6db3350840813d1c563ba | [
"CC0-1.0"
] | 1 | 2021-01-17T17:29:36.000Z | 2021-01-17T17:29:36.000Z | Practice Problem Solutions/5 - Lists/program.py | argosopentech/practical-programming-in-python | ae5aebcda6968ff327b6db3350840813d1c563ba | [
"CC0-1.0"
] | null | null | null | Practice Problem Solutions/5 - Lists/program.py | argosopentech/practical-programming-in-python | ae5aebcda6968ff327b6db3350840813d1c563ba | [
"CC0-1.0"
] | null | null | null | print('Grocery list:')
print('"add" to add items and "view" to view list')
grocery_list = []
while True:
command = input('Enter command: ')
if command == 'add':
to_add = input('Enter new item: ')
grocery_list.append(to_add)
# elif stands for "else if"
elif command == 'view':
for i in range(len(grocery_list)):
print(grocery_list[i])
else:
print('Invalid command')
| 28.6 | 51 | 0.596737 | 58 | 429 | 4.310345 | 0.448276 | 0.22 | 0.128 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.265734 | 429 | 14 | 52 | 30.642857 | 0.793651 | 0.058275 | 0 | 0 | 0 | 0 | 0.268657 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.307692 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87f3786df52a1d8399072312a80407b63e5e8a0e | 40,723 | py | Python | plugin.py | uwsbel/blenderPlugin | beeab9850c4cc2ea6a3f514ce958a5a153c38f95 | [
"BSD-3-Clause"
] | 3 | 2015-08-24T20:34:33.000Z | 2021-01-03T10:49:33.000Z | plugin.py | uwsbel/blenderPlugin | beeab9850c4cc2ea6a3f514ce958a5a153c38f95 | [
"BSD-3-Clause"
] | null | null | null | plugin.py | uwsbel/blenderPlugin | beeab9850c4cc2ea6a3f514ce958a5a153c38f95 | [
"BSD-3-Clause"
] | null | null | null | /*******************************************************
* Copyright (C) 2013-2014 Daniel Kaczmarek <dankaczma@gmail.com>, Simulation Based Engineering Lab <sbel.wisc.edu>
* Some rights reserved. See LICENSE
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file at the top level of the distribution as well
* as well as at https://github.com/uwsbel/blenderPlugin/blob/master/LICENSE
*******************************************************/
import bpy
import math
import mathutils
import os
import yaml
import tarfile
import shutil
import stat
bl_info = {
"name": "Chrono::Render plugin",
"description": "Allows for easy graphical manipulation of simulated data before rendering with a powerful renderman renderer",
"author": "Daniel <Daphron> Kaczmarek",
"version": (0, 9),
"blender": (2, 67, 1),
"location": "File > Import > Import Chrono::Engine",
"warning": "",
"wiki_url": "TODO",
"tracker_url":"TODO",
"category": "Import-Export"}
DEFAULT_COLOR = (0.4, 0.4, 0.6)
MESH_IMPORT_FUNCTIONS = {"obj": bpy.ops.import_scene.obj,
"stl": bpy.ops.import_mesh.stl,
"ply": bpy.ops.import_mesh.ply}
fin = ""
objects = ""
proxyObjects = ""
changing_params = False
max_dim = 1
min_dim = 1
class AmbientLightProxy:
def __init__(self):
self.material = self.create_material()
self.obj = None
def update(self):
"""Grabs stuff like color, texture and stores them"""
#Color can be diffuse, specular, mirror, and subsurface scattering
if self.obj.active_material is None:
self.obj = bpy.context.scene.objects['Ambient Light Proxy']
self.color = (self.obj.active_material.diffuse_color[0], self.obj.active_material.diffuse_color[1], self.obj.active_material.diffuse_color[2])
def create_material(self):
mat = bpy.data.materials.new("Ambient light proxy material")
mat.diffuse_color = (0,0,0)
mat.diffuse_shader = 'LAMBERT'
mat.diffuse_intensity = 1.0
mat.specular_color = (1.0, 1.0, 1.0)
mat.specular_shader = 'COOKTORR'
mat.specular_intensity = 0.5
mat.alpha = 1.0
mat.ambient = 1
return mat
def addToBlender(self):
bpy.ops.mesh.primitive_monkey_add(location=(6, 6, 6))
bpy.context.active_object.name = "Ambient Light Proxy"
bpy.context.active_object.active_material = self.material
bpy.context.active_object["index"] = "AMBIENT_PROXY"
self.obj = bpy.context.active_object
class Object:
def __init__(self, data, currdir):
# print("DATA:",data)
self.group = data[0]
self.index = int(data[1]) #The objects unique ID/index number
#XYZ locations
self.x = float(data[2])
self.y = float(data[3])
self.z = float(data[4])
self.quat = mathutils.Quaternion((float(data[5]), float(data[6]), float(data[7]), float(data[8])))
# self.euler_zyx = self.quat.to_euler('ZYX')
self.euler = tuple(a for a in self.quat.to_euler())
self.obj_type = data[9].lower()
#Extra parameters (specific to each object type)
# test = []
# for x in range(10,len(data)):
# if data[x] is not '\n':
# test.append(float(data[x]))
# self.ep = [float(data[x]) for x in range(10,len(data)) if data[x] is not '\n']
self.ep = []
for x in range(10,len(data)):
if data[x] is not '\n':
try:
self.ep.append(float(data[x]))
except ValueError:
self.ep.append(data[x].strip("\n"))
self.color = DEFAULT_COLOR
self.currdir = currdir
self.material = self.create_material()
def create_material(self):
mat = bpy.data.materials.new("Object {}'s material".format(self.index))
mat.diffuse_color = self.color
mat.diffuse_shader = 'LAMBERT'
mat.diffuse_intensity = 1.0
mat.specular_color = (1.0, 1.0, 1.0)
mat.specular_shader = 'COOKTORR'
mat.specular_intensity = 0.5
mat.alpha = 1.0
mat.ambient = 1
return mat
def addToBlender(self):
# if self.index % 100 == 0:
# print("index = {}".format(self.index))
# Cube
if self.obj_type == "cube":
#ep[0] = length of one side
bpy.ops.mesh.primitive_cube_add(radius=self.ep[0], location=(self.x, self.y, self.z), rotation=self.euler)
#Box
elif self.obj_type == "box":
bpy.ops.mesh.primitive_cube_add(radius=1.0, location=(self.x, self.y, self.z))
bpy.ops.transform.resize(value=(self.ep[0], self.ep[1], self.ep[2]))
bpy.context.object.rotation_euler = mathutils.Euler(self.euler)
# Cylinder
elif self.obj_type == "cylinder":
# ep[0] = radius of top, 2*ep[1] = depth
bpy.ops.mesh.primitive_cylinder_add(radius=self.ep[0], depth=2*self.ep[1], location=(self.x, self.y, self.z), rotation=self.euler)
# Sphere
elif self.obj_type == "sphere":
# ep[0] = radius of the sphere
# uv sphere looks nicer but icosphere might be the better route
bpy.ops.mesh.primitive_uv_sphere_add(size=self.ep[0], location=(self.x, self.y, self.z), rotation=self.euler)
# Ellipsoid
elif self.obj_type == "ellipsoid":
#ep[0] is the radius, ep[1] is the length in the direction of rotation
bpy.ops.mesh.primitive_uv_sphere_add(size=1.0, location=(self.x, self.y, self.z))
#The right way?
bpy.ops.transform.resize(value=(self.ep[0],self.ep[1],self.ep[2]))
bpy.context.object.rotation_euler = mathutils.Euler(self.euler)
#Cone
elif self.obj_type == "cone":
# self.ep[0] = radius of cone bottom, self.ep[1] = half_height of cone
bpy.ops.mesh.primitive_cone_add(radius1=self.ep[0], depth=2*self.ep[1], location=(self.x, self.y, self.z), rotation=self.euler)
#Torus
elif self.obj_type == "torus":
bpy.ops.mesh.primitive_torus_add(rotation=self.euler, location=(self.x, self.y, self.z), major_radius=self.ep[0], minor_radius=self.ep[1])
#External Mesh
elif self.obj_type in MESH_IMPORT_FUNCTIONS:
filename = os.path.join(self.currdir, "meshes", self.ep[0])
MESH_IMPORT_FUNCTIONS[self.obj_type](filepath=filename, use_split_groups=False, use_split_objects=False)
# bpy.ops.object.join()
for o in bpy.context.selected_objects:
o.location = [self.x, self.y, self.z]
# Now rotate and move to match what renderman render looks like
o.rotation_euler = mathutils.Euler(self.euler)
# o.rotation_euler = self.euler_zyx
# o.rotation_euler.rotate(mathutils.Euler((math.pi, 0, 0)))
# o.rotation_quaternion = self.quat.rotate(mathutils.Euler((180, 0, 0)))
bpy.context.scene.objects.active = o
else:
            print("Object type {} is not currently supported as a primitive in the blender plugin".format(self.obj_type))
bpy.context.active_object.rotation_mode = 'ZYX'
bpy.context.active_object["index"] = self.index
bpy.context.active_object.name = "Obj # {}".format(self.index)
bpy.context.active_object.active_material = self.material
self.obj = bpy.context.active_object
#object.get("index") to get the value
#object["index"] doesn't work?
#TODO: it is taking the obj2 as active_object and then relabling it here. Fixed?
def update(self):
"""Grabs stuff like color, texture and stores them"""
try:
self.obj = bpy.context.scene.objects['Obj # {}'.format(self.index)]
self.color = (self.obj.active_material.diffuse_color[0], self.obj.active_material.diffuse_color[1], self.obj.active_material.diffuse_color[2])
self.mat = self.obj.active_material
except Exception as e:
            print(e)
print("EXCEPTION! Dropping to pdb shell")
import pdb; pdb.set_trace()
class ProxyObject(Object):
    def __init__(self, data, currdir, indicies):
        """data is a split line from the input file; indicies is the list
        of object IDs (one per matching input line) that this proxy object
        represents for its group (sphere, cube, ...)."""
# print("MAKING PROXY OBJ")
Object.__init__(self, data, currdir)
self.indicies = indicies
# print(self.group)
self.color = DEFAULT_COLOR
self.material.name = "Group {}'s material".format(self.group)
def same_params(self, data):
other_ep = []
for x in range(10,len(data)):
            if data[x] != '\n':
try:
other_ep.append(float(data[x]))
except ValueError:
other_ep.append(data[x].strip("\n"))
return other_ep == self.ep
def addToBlender(self):
# print(self.ep)
bpy.ops.mesh.primitive_monkey_add(radius=self.ep[0], location=(self.x, self.y, self.z))
bpy.context.active_object["group"] = self.group
bpy.context.active_object["index"] = "PROXY"
bpy.context.active_object.name = "Proxy " + self.group
bpy.context.active_object.active_material = self.material
self.obj = bpy.context.active_object
def update(self):
try:
self.obj = bpy.context.scene.objects['Proxy {}'.format(self.group)]
self.color = (self.obj.active_material.diffuse_color[0], self.obj.active_material.diffuse_color[1], self.obj.active_material.diffuse_color[2])
self.mat = self.obj.active_material
        except Exception as e:
            print(e)
            print("EXCEPTION! Dropping to pdb shell")
import pdb; pdb.set_trace()
# def update(self):
# """Grabs stuff like color, texture and stores them"""
# #Color can be diffuse, specular, mirror, and subsurface scattering
# if self.obj.active_material is not None:
# self.color = (self.obj.active_material.diffuse_color[0], self.obj.active_material.diffuse_color[1], self.obj.active_material.diffuse_color[2])
# self.mat = self.obj.active_material
def configInitialScene(fin_frame):
# bpy.ops.object.delete()
bpy.data.scenes["Scene"].frame_end = fin_frame
bpy.data.scenes["Scene"].frame_start = 0
bpy.data.scenes["Scene"].frame_current = bpy.data.scenes["Scene"].frame_start
class ImportChronoRender(bpy.types.Operator):
"""Import ChronoRender"""
bl_idname = "import.import_chrono_render"
bl_label = "Import ChronoRender"
filename = bpy.props.StringProperty(subtype='FILE_PATH')
directory = bpy.props.StringProperty(subtype='DIR_PATH')
def invoke(self, context, event):
context.window_manager.fileselect_add(self)
return {'RUNNING_MODAL'}
def process_max_dimensions(self, data):
global max_dim
global min_dim
max_length = 0
        if data[9].lower() in MESH_IMPORT_FUNCTIONS:
pass
#TODO: this could screw up some shadows. Fix. (because now sun shadows out of box)
else:
            max_length = max(float(data[x]) for x in range(10,len(data)) if data[x] != '\n')
for coord in (data[2:5]):
if float(coord) + max_length > max_dim:
max_dim = float(coord) + max_length
if float(coord) - max_length < min_dim:
min_dim = float(coord) - max_length
def import_mesh(self, data):
global extra_geometry_indicies
mesh_filename = os.path.join(self.directory, "meshes", data[10].strip("\n"))
MESH_IMPORT_FUNCTIONS["obj"](filepath=mesh_filename)
extra_geometry_indicies.append(int(data[1]))
for o in bpy.context.selected_objects:
o.location = [float(data[2]), float(data[3]), float(data[4])]
quat = mathutils.Quaternion((float(data[5]), float(data[6]), float(data[7]), float(data[8])))
euler = tuple(a for a in quat.to_euler())
for o in bpy.context.selected_objects:
o.rotation_euler = mathutils.Euler(euler)
def execute(self, context):
global fin_name
global objects
global proxyObjects
global changing_params
global ambient_proxy
global extra_geometry_indicies
global fin_dir
# filename = "/home/xeno/repos/blender-plugin/plugins/blender/blender_input_test.dat"
# individualObjectsIndicies = [1,2,3,4, 5, 6] #LINE NUMBERS
objects = []
proxyObjects = []
extra_geometry_indicies = []
fin_name = self.filename
fin_frame = 10
try:
fin_frame = self.filename.replace(".dat", "")
fin_frame = fin_frame.replace("data_", "")
fin_frame = int(fin_frame)
        except ValueError:
print("Failed to automatically get the framerange from the file. You will likely need to set it manually.")
filepath = os.path.join(self.directory, self.filename)
fin_dir = self.directory
fin = open(filepath, "r")
for i, line in enumerate(fin):
index = line.split(",")[1]
# if line.split(",")[9].lower() == "extrageometry":
# extra_geometry_indicies.append(line.split(",")[1])
# if line.split(",")[9].lower() in MESH_IMPORT_FUNCTIONS:
# self.import_mesh(line.split(","))
# else:
self.process_max_dimensions(line.split(","))
if line.split(",")[0].lower() == "individual":
objects.append(Object(line.split(","), self.directory))
print("Object {}".format(index))
else:
data = line.split(",")
proxyExists = False
for obj in proxyObjects:
if obj.group == data[0]:
obj.indicies.append(index)
if not changing_params and not obj.same_params(data):
changing_params = True
proxyExists = True
if not proxyExists:
print("New Proxy obj num {}".format(index))
proxyObjects.append(ProxyObject(data, self.directory, [index]))
configInitialScene(fin_frame)
for obj in objects:
obj.addToBlender()
for obj in proxyObjects:
obj.addToBlender()
ambient_proxy = AmbientLightProxy()
ambient_proxy.addToBlender()
print("objects added")
return {'FINISHED'}
def add_importChronoRenderButton(self, context):
self.layout.operator(
ImportChronoRender.bl_idname,
text=ImportChronoRender.__doc__,
icon='PLUGIN')
class ExportChronoRender(bpy.types.Operator):
"""Exports to Chrono::Render"""
bl_idname = "export.export_chrono_render"
bl_label = "Export Chrono::Render"
filename = bpy.props.StringProperty(subtype='FILE_PATH')
directory = bpy.props.StringProperty(subtype='DIR_PATH')
def invoke(self, context, event):
context.window_manager.fileselect_add(self)
self.context = context
return {'RUNNING_MODAL'}
def construct_condition(self, indicies):
"""docstring for construct_condition"""
#Very simple way
rtnd = "id == "
if len(indicies) <= 0:
raise Exception("No indicies in this proxy object")
for i in indicies:
rtnd += str(i) + " or id == "
        rtnd = rtnd[:-10] # strip the trailing " or id == "
# Group by ranges
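        # Sweep the indices (assumed ascending), growing [min_elem, max_elem]
        # while they stay consecutive and emitting a range clause at each gap.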
rtn = ""
max_elem = None
min_elem = None
for i in indicies:
i = int(i)
            if min_elem is None:
                min_elem = i
            if max_elem is None:
                max_elem = i
if i == max_elem + 1:
max_elem = i
elif i > max_elem + 1:
rtn += " or ({} <= id <= {})".format(min_elem, max_elem)
min_elem = i
max_elem = i
rtn += " or ({} <= id <= {})".format(min_elem, max_elem)
rtn = rtn[4:]
        return min(rtnd, rtn, key=len) # prefer the shorter of the two condition strings
def export_mesh(self, context, fout, obj):
#TODO: don't use just one file for the whole animation. One per frame. (per obj also?)
for face in obj.obj.data.polygons:
pgonstr = "Polygon "
vertices = '"P" ['
for v in face.vertices:
vert = obj.obj.data.vertices[v].co
vertices += " {} {} {}".format(vert.x, vert.y, vert.z)
vertices += ']\n'
pgonstr += vertices
# fout.write('AttributeBegin\n')
# fout.write('Surface "matte"\n')
# fout.write('Color [{} {} {}]\n'.format(obj.color[0], obj.color[1], obj.color[2]))
#TODO: get rotations to work with any blender rotation scheme
# fout.write('Rotate {} 0 0 1\n'.format(math.degrees(obj.rotation_euler[2])))
# fout.write('Rotate {} 0 1 0\n'.format(math.degrees(obj.rotation_euler[1])))
# fout.write('Rotate {} 1 0 0\n'.format(math.degrees(obj.rotation_euler[0])))
# fout.write('Translate {} {} {}\n'.format(obj.location[0], obj.location[2], -obj.location[1]))
fout.write(pgonstr)
# fout.write('AttributeEnd\n')
def write_object(self, objects, is_proxy=False):
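        # Build one ChronoRender "renderobject" dict per Blender-side object:
        # an id-based condition plus geometry type and shape parameters.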
global changing_params
renderobject = []
for obj in objects:
obj.update()
name = obj.group
#Start writing
color = "{} {} {}".format(obj.color[0], obj.color[1], obj.color[2])
data = dict()
data["name"] = str(name)
if is_proxy:
data["condition"] = self.construct_condition(obj.indicies)
else:
data["condition"] = "id == {}".format(obj.index)
# maxIndex = obj.index
# minIndex = obj.index
# data["condition"] = "id >= {} and id <= {}".format(minIndex, maxIndex)
data["color"] = color
if obj.obj_type in MESH_IMPORT_FUNCTIONS:
data["geometry"] = [{"type" : "archive"}]
else:
data["geometry"] = [{"type" : obj.obj_type}]
data["shader"] = [{"name" : "matte.sl"}] #TODO: not hardcoded
data["geometry"][0]["changingprams"] = changing_params
if obj.obj_type.lower() == "sphere":
data["geometry"][0]["radius"] = obj.ep[0]
elif obj.obj_type.lower() == "cube":
data["geometry"][0]["side"] = obj.ep[0]
elif obj.obj_type.lower() == "cone":
data["geometry"][0]["radius"] = obj.ep[0]
data["geometry"][0]["height"] = obj.ep[1]
elif obj.obj_type.lower() == "cylinder":
data["geometry"][0]["radius"] = obj.ep[0]
data["geometry"][0]["height"] = obj.ep[1]
elif obj.obj_type.lower() == "ellipsoid":
data["geometry"][0]["a"] = obj.ep[0]
data["geometry"][0]["b"] = obj.ep[1]
data["geometry"][0]["c"] = obj.ep[2]
elif obj.obj_type.lower() == "torus":
data["geometry"][0]["rmajor"] = obj.ep[0]
data["geometry"][0]["rminor"] = obj.ep[1]
elif obj.obj_type.lower() == "box":
data["geometry"][0]["xlength"] = obj.ep[0]
data["geometry"][0]["ylength"] = obj.ep[1]
data["geometry"][0]["zlength"] = obj.ep[2]
elif obj.obj_type.lower() in MESH_IMPORT_FUNCTIONS:
extra_rib_filename = "extra_geo_{}".format(obj.index) + ".rib"
data["geometry"][0]["filename"] = extra_rib_filename
renderman_dir = os.path.join(self.directory, "RENDERMAN")
if not os.path.exists(renderman_dir):
os.makedirs(renderman_dir)
ribarchives_dir = os.path.join(renderman_dir, "ribarchives")
if not os.path.exists(ribarchives_dir):
os.makedirs(ribarchives_dir)
fout_fullpath = os.path.join(ribarchives_dir, extra_rib_filename)
fout = open(fout_fullpath, "w")
self.export_mesh(self.context, fout, obj)
fout.close()
else:
print("Geometry type {} not supported by blender export at this time".format(obj.obj_type))
if not obj.obj.hide_render:
renderobject.append(data)
return renderobject
def write_extra_geometry(self, context, obj):
global extra_geometry_indicies
renderobject = []
data = dict()
# data["color"] = "{} {} {}".format(obj.color[0], obj.color[1], obj.color[2])
data["geometry"] = [{"type" : "archive"}]
# data["shader"] = [{"type" : "matte.sl"}]
data["geometry"][0]["filename"] = "extrageometry.rib"
data["name"] = "extrageometry"
id_str = ""
for i in extra_geometry_indicies:
id_str += "id == {} or ".format(i)
id_str = id_str[:-4]
data["condition"] = id_str
renderobject.append(data)
return renderobject
def camera_to_renderman(self, context, obj):
camera_matrix = obj.matrix_world
camera = obj
camera_loc = obj.location
camera_euler = obj.rotation_euler
fov = None
try:
cam_fov = math.degrees(obj.data.angle)
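            # Horizontal FOV from the focal length; the hard-coded 16.0 assumes
            # Blender's default 32 mm sensor width (half-width 16 mm).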
fov = 360.0*math.atan(16.0/camera.data.lens)/math.pi
except AttributeError:
if hasattr(obj.data, "spot_size"):
fov = math.degrees(obj.data.spot_size)
else:
pass
out = ''
if hasattr(obj.data, "type"):
if obj.data.type == 'SUN':
out += ('Projection "orthographic"\n')
else:
out += ('Projection "perspective" "fov" [{}]\n'.format(fov))
else:
out += ('Projection "perspective" "fov" [{}]\n'.format(fov))
out += ("Scale 1 1 -1\n")
out += ("Rotate {} 1 0 0\n".format(-math.degrees(camera_euler[0])))
out += ("Rotate {} 0 1 0\n".format(-math.degrees(camera_euler[1])))
out += ("Rotate {} 0 0 1\n".format(-math.degrees(camera_euler[2])))
out += ("Translate {} {} {}\n".format(-camera_matrix[0][3],
-camera_matrix[1][3],
-camera_matrix[2][3]))
return out
def write_shadowspot(self, context, renderpasses, light_file, obj, end_x, end_y, end_z, delta_angle, index):
name = "shadow_" + obj.data.name
name = name.replace(".", "_")
correct_name = obj.data.name.replace(".", "_")
shadowmap_name = name + ".rib"
shadowmap_file_path = os.path.join(self.fout_dir, shadowmap_name)
        shadowmap_file = open(shadowmap_file_path, 'w')
        shadowmap_file.write(self.camera_to_renderman(context, obj))
        shadowmap_file.close()
light_string = 'LightSource "shadowspot" {} "intensity" {} "coneangle" {} "conedeltaangle" {} "lightcolor" [{} {} {}] "from" [{} {} {}] "to" [{} {} {}] "shadowname" ["{}"]\n'.format(index, obj.data.energy*30, obj.data.spot_size/2.0, delta_angle, obj.data.color[0], obj.data.color[1], obj.data.color[2], obj.location.x, obj.location.y, obj.location.z, end_x+obj.location.x, end_y+obj.location.y, end_z+obj.location.z, name+".shd")
light_file.write(light_string)
#TODO: heuristic for resolution of pass
shadowpass = {
"name": "shadowpass" + str(index),
"type": "shadow",
"settings" : {
"resolution" : "512 512 1",
"shadingrate" : 1.0,
"pixelsamples" : "1 1",
"shadowfilepath" : "shadow_" + correct_name+ ".rib",
"display" : {"output" : "shadow_" + correct_name + ".z",
"outtype" : "zfile",
"mode" : "z"}}}
renderpasses.append(shadowpass)
def write_sun(self, context, renderpasses, light_file, obj, end_x, end_y, end_z, index):
global max_dim
global min_dim
name = "shadow_" + obj.data.name
name = name.replace(".", "_")
correct_name = obj.data.name.replace(".", "_")
shadowmap_name = name + ".rib"
shadowmap_file_path = os.path.join(self.fout_dir, shadowmap_name)
        shadowmap_file = open(shadowmap_file_path, 'w')
        shadowmap_file.write(self.camera_to_renderman(context, obj))
        shadowmap_file.write('ScreenWindow {} {} {} {}'.format(min_dim, max_dim, min_dim, max_dim))
        shadowmap_file.close()
light_string = 'LightSource "shadowdistant" {} "intensity" {} "lightcolor" [{} {} {}] "from" [{} {} {}] "to" [{} {} {}] "shadowname" ["{}"]\n'.format(index, obj.data.energy, obj.data.color[0], obj.data.color[1], obj.data.color[2], 0, 0, 0, end_x, end_y, end_z, name+".shd")
light_file.write(light_string)
shadowpass = {
"name": "shadowpass" + str(index),
"type": "shadow",
"settings" : {
"resolution" : "512 512 1",
"shadingrate" : 1.0,
"pixelsamples" : "1 1",
"shadowfilepath" : "shadow_" + correct_name + ".rib",
"display" : {"output" : "shadow_" + correct_name + ".z",
"outtype" : "zfile",
"mode" : "z"}}}
renderpasses.append(shadowpass)
def write_shadowpoint(self, context, renderpasses, light_file, obj, index):
light_string = 'LightSource "shadowpoint" {} "intensity" {} "lightcolor" [{} {} {}] "from" [{} {} {}]'.format(index, obj.data.energy*20.0, obj.data.color[0], obj.data.color[1], obj.data.color[2], obj.location.x, obj.location.y, obj.location.z)
name = "shadow_" + obj.data.name
name = name.replace(".", "_")
correct_name = obj.data.name.replace(".", "_")
shadowmap_name_base = name + ".rib"
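        # One 90-degree view per cube face (+/-x, +/-y, +/-z); shadowpoint
        # samples all six depth maps to cast shadows in every direction.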
rotations = {'px': 'Rotate -90.0 0.0 1.0 0.0',
'py': 'Rotate 90.0 1.0 0.0 0.0',
'pz': 'Rotate 0.0 0.0 1.0 0.0',
'nx': 'Rotate 90.0 0.0 1.0 0.0',
'ny': 'Rotate -90.0 1.0 0.0 0.0',
'nz': 'Rotate 180 0.0 1.0 0.0'}
for end in ('px', 'py', 'pz', 'nx', 'ny', 'nz'):
shadowmap_name = end + shadowmap_name_base
shadowmap_file_path = os.path.join(self.fout_dir, shadowmap_name)
shadowmap_file = open(shadowmap_file_path, 'w')
light_string += ' "sf{}" ["{}"]'.format(end, end + "shadow_" + correct_name + ".shd")
shadowmap_file.write('Projection "perspective" "fov" [95.0]\n')
# shadowmap_file.write("Scale 1 1 -1\n")
shadowmap_file.write(rotations[end] + "\n")
            shadowmap_file.write('Translate {} {} {}\n'.format(-obj.location.x, -obj.location.y, -obj.location.z))
            shadowmap_file.close()
shadowpass = {
"name": "shadowpass" + str(index) + "_" + end,
"type": "shadow",
"settings" : {
"resolution" : "512 512 1",
"shadingrate" : 1.0,
"pixelsamples" : "1 1",
"shadowfilepath" : shadowmap_name,
"display" : {"output" : end + "shadow_" + correct_name + ".z",
"outtype" : "zfile",
"mode" : "z"}}}
renderpasses.append(shadowpass)
light_string += '\n'
light_file.write(light_string)
def write_ambient_occlusion(self, context, renderpasses, shader):
resolution = "{} {}".format(bpy.data.scenes["Scene"].render.resolution_x,
bpy.data.scenes["Scene"].render.resolution_y)
shadowpass = {
"name": "ambientpass",
"type": "ao",
"settings": {
"resolution": resolution,
"bounces": bpy.context.scene.world.light_settings.indirect_bounces,
"display": {"output" : "out.tif"}},
"shader": {
"name": shader,
"samples": 256}} #TODO: some nice way of setting samples
renderpasses.append(shadowpass)
def execute(self, context):
global fin_name
global objects
global proxyObjects
global ambient_proxy
global fin_dir
#We will ignore the user given output file Chrono::Render is designed
#to accept out.yaml as the yaml file
self.filename = "out.yaml"
renderpasses = []
self.fout_dir = os.path.join(self.directory, "RENDERMAN")
if not os.path.exists(self.fout_dir):
os.makedirs(self.fout_dir)
filepath = os.path.join(self.fout_dir, self.filename)
fout = open(filepath, "w")
print("Export beginning")
##############
#Camera stuff#
##############
current_frame = bpy.context.scene.frame_current
fmax = bpy.data.scenes["Scene"].frame_end
fmin = 0
camera_moved = False
last_camera_output = None
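        # Write one camera RIB per frame; if the transform never changes over
        # the range, a single shared custom_camera.rib is written instead.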
for frame in range(fmin, fmax+1):
bpy.context.scene.frame_set(frame)
cam_file_name = "custom_camera_{}.rib".format(frame)
cam_file_path = os.path.join(self.fout_dir, cam_file_name)
cam_file = open(cam_file_path, 'w')
camera_output = self.camera_to_renderman(context, bpy.data.objects['Camera'])
            if last_camera_output is None:
last_camera_output = camera_output
if camera_output != last_camera_output:
                camera_moved = True
cam_file.write(camera_output)
#TODO: only write the file if camera hasn't moved at all (would have to fix the one camera or indididual camera frames thing)
cam_file.close()
if not camera_moved and frame == fmax:
cam_file_name = "custom_camera.rib"
cam_file_path = os.path.join(self.fout_dir, cam_file_name)
cam_file = open(cam_file_path, 'w')
cam_file.write(camera_output)
cam_file.close()
moving_camera = {"moving_camera" : camera_moved}
cam_file_name = "custom_camera.rib"
bpy.context.scene.frame_current = current_frame
#############
#Light stuff#
#############
light_file_name = "custom_lighting.rib"
light_file_path = os.path.join(self.fout_dir, light_file_name)
light_file = open(light_file_path, 'w')
for i, obj in enumerate(bpy.context.scene.objects):
if obj.type == 'LAMP' and obj.hide_render == False:
light_string = None
e = obj.rotation_euler
M = e.to_matrix()
v = mathutils.Vector((0,0,-1)) #default direction of light
# v.rotate(e)
# end_x, end_y, end_z = v
end_x, end_y, end_z = M*v
# x20 for point and spot intensity as a rough heuristic to get them looking the same in blender and renderman(matte shader)
if obj.data.type == 'SUN':
# intensity = obj.data.energy*
if obj.data.shadow_method == 'NOSHADOW':
light_string = 'LightSource "distantlight" {} "intensity" {} "lightcolor" [{} {} {}] "from" [{} {} {}] "to" [{} {} {}]\n'.format(i, obj.data.energy, obj.data.color[0], obj.data.color[1], obj.data.color[2], 0, 0, 0, end_x, end_y, end_z)
else:
self.write_sun(context, renderpasses, light_file, obj, end_x, end_y, end_z, i)
elif obj.data.type == 'POINT':
if obj.data.shadow_method == 'NOSHADOW':
light_string = 'LightSource "pointlight" {} "intensity" {} "lightcolor" [{} {} {}] "from" [{} {} {}]\n'.format(i, obj.data.energy*20, obj.data.color[0], obj.data.color[1], obj.data.color[2], obj.location.x, obj.location.y, obj.location.z)
else:
self.write_shadowpoint(context, renderpasses, light_file, obj, i)
elif obj.data.type == 'SPOT':
delta_angle = obj.data.spot_size/2 * obj.data.spot_blend
if obj.data.shadow_method == 'NOSHADOW':
light_string = 'LightSource "spotlight" {} "intensity" {} "coneangle" {} "conedeltaangle" {} "lightcolor" [{} {} {}] "from" [{} {} {}] "to" [{} {} {}]\n'.format(i, obj.data.energy*20, obj.data.spot_size/2.0, delta_angle, obj.data.color[0], obj.data.color[1], obj.data.color[2], obj.location.x, obj.location.y, obj.location.z, end_x+obj.location.x, end_y+obj.location.y, end_z+obj.location.z)
else:
self.write_shadowspot(context, renderpasses, light_file, obj, end_x, end_y, end_z, delta_angle, i)
            if light_string is not None:
light_file.write(light_string)
ambient_proxy.update()
light_string = 'LightSource "ambientlight" {} "intensity" {} "lightcolor" [{} {} {}]\n'.format(i, ambient_proxy.obj.active_material.ambient, bpy.data.worlds["World"].ambient_color[0], bpy.data.worlds["World"].ambient_color[1], bpy.data.worlds["World"].ambient_color[2])
light_file.write(light_string)
light_file.close()
#Ambient Occlusion/Color Bleeding
if bpy.context.scene.world.light_settings.use_indirect_light:
self.write_ambient_occlusion(context, renderpasses, "colorbleedinglight.sl")
elif bpy.context.scene.world.light_settings.use_ambient_occlusion:
self.write_ambient_occlusion(context, renderpasses, "occlusionlight.sl")
##########
#The Rest#
##########
renderobject = self.write_object(objects, is_proxy = False)
renderobject += self.write_object(proxyObjects, is_proxy = True)
#Imported meshes
fout_extrageo = open(os.path.join(self.fout_dir, "extrageometry.rib"), "w")
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name != "Ambient Light Proxy":
if not 'index' in obj:
self.export_mesh(context, fout_extrageo, obj)
renderobject += self.write_extra_geometry(context, obj)
fout_extrageo.close()
data_name = "./data/" + "_".join(fin_name.split("_")[:-1]) + "_*.dat"
resolution = "{} {}".format(bpy.data.scenes["Scene"].render.resolution_x,
bpy.data.scenes["Scene"].render.resolution_y)
defaultpass = {
"name": "defaultpass",
"settings" : {
"resolution" : resolution,
"display" : {"output" : "out.tif"}}}
if not bpy.context.scene.world.light_settings.use_ambient_occlusion and not bpy.context.scene.world.light_settings.use_indirect_light:
renderpasses.append(defaultpass)
data = {"chronorender" : {
"rendersettings" : {"searchpaths" : "./"},
"camera" : [{"filename" : cam_file_name}, moving_camera],
"lighting" : [{"filename" : "custom_lighting.rib"}],
# "scene" : [{"filename" : "default_scene.rib"}],
"renderpass" : renderpasses ,
"simulation" : {
"data" : {
"datasource" : [{
"type" : "csv",
"name" : "defaultdata",
"resource" : data_name,
"fields" : [
["group", "string"],
["id", "integer"],
["pos_x", "float"],
["pos_y", "float"],
["pos_z", "float"],
["quat_w", "float"],
["quat_x", "float"],
["quat_y", "float"],
["quat_z", "float"],
["ignore", "string"], #object type
["ep1", "string"], #extra params
["ep2", "string"], #need to modify if more than 4 extra params
["ep3", "string"],
["ep4", "string"],
]}]},
"renderobject" : renderobject}}}
# [{
# "name" : "particle",
# "condition" : "id >= 0",
# "color" : color,
# "geometry" : [{
# "radius" : 0.888,
# "type" : "sphere"}]}]}}}}
yaml.safe_dump(data, fout)
self.move_ribs(self.fout_dir)
print("Export complete! (yes really)")
print("Compression beginning")
self.compress(fin_name, fin_dir, self.filename, self.fout_dir)
print("Compression finished")
print("Cleanup Beginning")
self.cleanup(self.fout_dir)
print("Cleanup Ended")
return {'FINISHED'}
def cleanup(self, fout_dir):
shutil.rmtree(fout_dir, onerror=self.iferror)
def iferror(self, func, path, except_info):
os.chmod(path, stat.S_IWRITE)
func(path)
def move_ribs(self, fout_dir):
"""Moves all rib files to the ribarchive directory"""
ribarchives = os.path.join(fout_dir, "ribarchives")
if not os.path.isdir(ribarchives):
os.mkdir(ribarchives)
init_dir = os.getcwd()
os.chdir(fout_dir)
for f in os.listdir("."):
if f.endswith(".rib"):
dest = os.path.join(ribarchives, os.path.basename(f))
shutil.copy2(f, dest)
os.chdir(init_dir)
def compress(self, fin_name, fin_dir, fout_name, fout_dir, force_data=False):
#TODO: allow user to select force_data
#requires a SEPARATE data directory to work
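        # data.tar.gz collects every .dat frame under job/data/, while the
        # metadata tarball captures the generated RENDERMAN directory.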
data_zipped_path = os.path.join(self.directory, "data.tar.gz")
metadata_zipped_path = os.path.join(self.directory, fout_name.split(".")[0] + ".tar.gz")
        if not os.path.exists(data_zipped_path) or force_data:
            with tarfile.open(data_zipped_path, "w:gz") as tar:
                for filename in os.listdir(fin_dir):
                    if filename.endswith(".dat"):
filepath = os.path.join(fin_dir, filename)
aname = os.path.join(os.path.join("job", "data"), filename)
tar.add(filepath, arcname=aname)
with tarfile.open(metadata_zipped_path, "w:gz") as tar2:
tar2.add(fout_dir, arcname="")
def add_exportChronoRenderButton(self, context):
self.layout.operator(
ExportChronoRender.bl_idname,
text=ExportChronoRender.__doc__,
icon='PLUGIN')
def register():
print("Registering")
bpy.utils.register_class(ImportChronoRender)
# bpy.types.INFO_MT_file.append(add_object_button)
bpy.types.INFO_MT_file_import.append(add_importChronoRenderButton)
bpy.utils.register_class(ExportChronoRender)
bpy.types.INFO_MT_file_export.append(add_exportChronoRenderButton)
def unregister():
    print("Unregistering")
    bpy.utils.unregister_class(ImportChronoRender)
    bpy.types.INFO_MT_file_import.remove(add_importChronoRenderButton)
    bpy.utils.unregister_class(ExportChronoRender)
    bpy.types.INFO_MT_file_export.remove(add_exportChronoRenderButton)
if __name__ == "__main__":
register()
| 44.12026 | 438 | 0.550107 | 4,759 | 40,723 | 4.563144 | 0.13112 | 0.015472 | 0.010131 | 0.016439 | 0.452339 | 0.397725 | 0.333072 | 0.316725 | 0.281912 | 0.247099 | 0 | 0.014632 | 0.311912 | 40,723 | 922 | 439 | 44.168113 | 0.760358 | 0.101122 | 0 | 0.339763 | 0 | 0.008902 | 0.122573 | 0.002092 | 0.001484 | 0 | 0 | 0.002169 | 0 | 0 | null | null | 0.041543 | 0.045994 | null | null | 0.025223 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87f4710e0d278ffa4b65cd1fdbf57b6e8ed23f91 | 7,180 | py | Python | ArtGAN/data/ingest_stl10.py | rh01/caffe-model-for-category-artgan | 911b8fb44c62e8a2c71396099194d8925ed7c826 | [
"BSD-3-Clause"
] | 304 | 2018-07-17T00:18:54.000Z | 2022-03-31T22:26:42.000Z | ArtGAN/data/ingest_stl10.py | cs-chan/Artwork-Synthesis-Classification | ad9cd090c669ca636f6c048d97608092d52dd3e0 | [
"BSD-3-Clause"
] | 9 | 2018-10-16T14:42:51.000Z | 2022-01-13T11:22:02.000Z | ArtGAN/data/ingest_stl10.py | cs-chan/Artwork-Synthesis-Classification | ad9cd090c669ca636f6c048d97608092d52dd3e0 | [
"BSD-3-Clause"
] | 57 | 2018-07-19T02:38:29.000Z | 2022-03-17T11:12:17.000Z | from configargparse import ArgParser
from PIL import Image
import logging
import numpy as np
import os
def transform_and_save(img_arr, output_filename):
"""
Takes an image and optionally transforms it and then writes it out to output_filename
"""
img = Image.fromarray(img_arr)
img.save(output_filename)
class Ingest(object):
def __init__(self, input_dir, out_dir, target_size=96, skipimg=False):
np.random.seed(0)
self.skipimg = skipimg
self.out_dir = out_dir
self.input_dir = input_dir
self.manifests = dict()
for setn in ('train', 'val'):
self.manifests[setn] = os.path.join(self.out_dir, '{}-index.csv'.format(setn))
self.target_size = target_size
self.trainpairlist = {}
self.valpairlist = {}
self.labels = range(10)
if not os.path.exists(self.out_dir):
os.mkdir(self.out_dir)
self.outimgdir = os.path.join(self.out_dir, 'images')
if not os.path.exists(self.outimgdir):
os.mkdir(self.outimgdir)
os.mkdir(os.path.join(self.outimgdir, 'train'))
os.mkdir(os.path.join(self.outimgdir, 'val'))
self.outlabeldir = os.path.join(self.out_dir, 'labels')
if not os.path.exists(self.outlabeldir):
os.mkdir(self.outlabeldir)
def collectdata(self,):
print 'Start Collect Data...'
train_x_path = os.path.join(self.input_dir, 'train_X.bin')
train_y_path = os.path.join(self.input_dir, 'train_y.bin')
test_x_path = os.path.join(self.input_dir, 'test_X.bin')
test_y_path = os.path.join(self.input_dir, 'test_y.bin')
train_xf = open(train_x_path, 'rb')
train_x = np.fromfile(train_xf, dtype=np.uint8)
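        # STL-10 binaries are column-major uint8; reshape to (N, 3, 96, 96)
        # and transpose to (N, H, W, C) so PIL sees standard row-major RGB.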
train_x = np.reshape(train_x, (-1, 3, 96, 96))
train_x = np.transpose(train_x, (0, 3, 2, 1))
train_yf = open(train_y_path, 'rb')
train_y = np.fromfile(train_yf, dtype=np.uint8)
test_xf = open(test_x_path, 'rb')
test_x = np.fromfile(test_xf, dtype=np.uint8)
test_x = np.reshape(test_x, (-1, 3, 96, 96))
test_x = np.transpose(test_x, (0, 3, 2, 1))
test_yf = open(test_y_path, 'rb')
test_y = np.fromfile(test_yf, dtype=np.uint8)
idx = np.zeros(10, dtype=np.int)
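        # STL-10 labels run 1-10, so shift by one for 0-indexed directories.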
for i in xrange(train_x.shape[0]):
outdir = os.path.join(self.outimgdir, 'train', str(train_y[i]-1))
if not os.path.exists(outdir):
os.mkdir(outdir)
if not self.skipimg:
transform_and_save(img_arr=train_x[i], output_filename=os.path.join(outdir, str(idx[train_y[i]-1]) + '.jpg'))
self.trainpairlist[os.path.join('images', 'train', str(train_y[i]-1), str(idx[train_y[i]-1]) + '.jpg')] = \
os.path.join('labels', str(train_y[i] - 1) + '.txt')
idx[train_y[i]-1] += 1
idx = np.zeros(10, dtype=np.int)
for i in xrange(test_x.shape[0]):
outdir = os.path.join(self.outimgdir, 'val', str(test_y[i]-1))
if not os.path.exists(outdir):
os.mkdir(outdir)
if not self.skipimg:
transform_and_save(img_arr=test_x[i],
output_filename=os.path.join(outdir, str(idx[test_y[i]-1]) + '.jpg'))
self.valpairlist[os.path.join('images', 'val', str(test_y[i]-1), str(idx[test_y[i]-1]) + '.jpg')] = \
os.path.join('labels', str(test_y[i] - 1) + '.txt')
idx[test_y[i]-1] += 1
print 'Finished Collect Data...'
def write_label(self, ):
for i, l in enumerate(self.labels):
sdir = os.path.join(self.outlabeldir, str(i) + '.txt')
np.savetxt(sdir, [l], '%d')
def run(self):
"""
resize images then write manifest files to disk.
"""
self.write_label()
self.collectdata()
records = [(fname, tgt)
for fname, tgt in self.trainpairlist.items()]
np.savetxt(self.manifests['train'], records, fmt='%s,%s')
records = [(fname, tgt)
for fname, tgt in self.valpairlist.items()]
np.savetxt(self.manifests['val'], records, fmt='%s,%s')
class IngestUnlabeled(object):
def __init__(self, input_dir, out_dir, target_size=96, skipimg=False):
np.random.seed(0)
self.skipimg = skipimg
self.out_dir = out_dir
self.input_dir = input_dir
self.manifests = dict()
self.manifests = os.path.join(self.out_dir, 'unlabeled-index.csv')
self.target_size = target_size
self.trainpairlist = {}
if not os.path.exists(self.out_dir):
os.mkdir(self.out_dir)
self.outimgdir = os.path.join(self.out_dir, 'images')
if not os.path.exists(self.outimgdir):
os.mkdir(self.outimgdir)
self.unlabeldir = os.path.join(self.outimgdir, 'unlabeled')
if not os.path.exists(self.unlabeldir):
os.mkdir(self.unlabeldir)
def collectdata(self,):
print 'Start Collect Data...'
train_x_path = os.path.join(self.input_dir, 'unlabeled_X.bin')
train_xf = open(train_x_path, 'rb')
train_x = np.fromfile(train_xf, dtype=np.uint8)
train_x = np.reshape(train_x, (-1, 3, 96, 96))
train_x = np.transpose(train_x, (0, 3, 2, 1))
idx = 0
for i in xrange(train_x.shape[0]):
if not self.skipimg:
transform_and_save(img_arr=train_x[i], output_filename=os.path.join(self.unlabeldir, str(idx) + '.jpg'))
self.trainpairlist[os.path.join('images', 'unlabeled', str(idx) + '.jpg')] = 'labels/11.txt'
idx += 1
print 'Finished Collect Data...'
def write_label(self, ):
sdir = os.path.join(self.out_dir, 'labels', '11.txt')
np.savetxt(sdir, [11], '%d')
def run(self):
"""
resize images then write manifest files to disk.
"""
self.write_label()
self.collectdata()
records = [(fname, tgt)
for fname, tgt in self.trainpairlist.items()]
np.savetxt(self.manifests, records, fmt='%s,%s')
if __name__ == "__main__":
parser = ArgParser()
parser.add_argument('--input_dir', help='Directory to find input',
default='/hdd/Dataset/STL10')
parser.add_argument('--out_dir', help='Directory to write ingested files',
default='/home/william/PyProjects/TFcodes/dataset/stl10')
parser.add_argument('--target_size', type=int, default=96,
help='Size in pixels to scale shortest side DOWN to (0 means no scaling)')
    parser.add_argument('--skipImg', action='store_true', default=False,
                        help='Pass this flag to skip processing and copying images')
args = parser.parse_args()
logger = logging.getLogger(__name__)
bw = Ingest(input_dir=args.input_dir, out_dir=args.out_dir, target_size=args.target_size, skipimg=args.skipImg)
# bw = IngestUnlabeled(input_dir=args.input_dir, out_dir=args.out_dir, target_size=args.target_size, skipimg=args.skipImg)
bw.run()
| 38.810811 | 126 | 0.59429 | 1,007 | 7,180 | 4.072493 | 0.156902 | 0.048281 | 0.060961 | 0.061448 | 0.676664 | 0.625457 | 0.590832 | 0.516459 | 0.477445 | 0.427213 | 0 | 0.014917 | 0.262396 | 7,180 | 184 | 127 | 39.021739 | 0.759441 | 0.016713 | 0 | 0.460432 | 0 | 0 | 0.095699 | 0.006752 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.035971 | null | null | 0.028777 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87f4e0add218b91c8358380aec15e53a0b7ec2cc | 615 | py | Python | working_example/python/hello_serverless/lambda/create.py | darko-mesaros/workshop-serverless-with-cdk | bbfd30de43d01251565c019a8ac259706bd6f1d0 | [
"MIT"
] | 33 | 2020-08-12T08:08:08.000Z | 2022-03-20T20:32:18.000Z | working_example/python/hello_serverless/lambda/create.py | darko-mesaros/workshop-serverless-with-cdk | bbfd30de43d01251565c019a8ac259706bd6f1d0 | [
"MIT"
] | 2 | 2020-08-12T09:54:53.000Z | 2020-08-12T13:37:22.000Z | working_example/python/hello_serverless/lambda/create.py | darko-mesaros/workshop-serverless-with-cdk | bbfd30de43d01251565c019a8ac259706bd6f1d0 | [
"MIT"
] | 17 | 2020-08-12T08:09:46.000Z | 2021-07-18T19:52:50.000Z | import os
import json
import boto3
def handler(event, context):
table = os.environ.get('table')
dynamodb = boto3.client('dynamodb')
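    # The low-level DynamoDB API expects typed attribute values, so each
    # field is wrapped as {'S': value} to mark it as a string.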
item = {
"name":{'S':event["queryStringParameters"]["name"]},
"location":{'S':event["queryStringParameters"]["location"]},
"age":{'S':event["queryStringParameters"]["age"]}
}
response = dynamodb.put_item(TableName=table,
Item=item
)
message = 'Status of the write to DynamoDB {}!'.format(response)
return {
"statusCode": 200,
"body": json.dumps(message)
}
| 24.6 | 72 | 0.564228 | 59 | 615 | 5.864407 | 0.576271 | 0.052023 | 0.234104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 0.276423 | 615 | 24 | 73 | 25.625 | 0.766292 | 0 | 0 | 0 | 0 | 0 | 0.256911 | 0.102439 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.263158 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87f74a05f4408addae7f347f4c814a7bd1356155 | 13,528 | py | Python | pypeit/spectrographs/gemini_flamingos.py | ykwang1/PypeIt | a96cff699f1284905ce7ef19d06a9027cd333c63 | [
"BSD-3-Clause"
] | null | null | null | pypeit/spectrographs/gemini_flamingos.py | ykwang1/PypeIt | a96cff699f1284905ce7ef19d06a9027cd333c63 | [
"BSD-3-Clause"
] | null | null | null | pypeit/spectrographs/gemini_flamingos.py | ykwang1/PypeIt | a96cff699f1284905ce7ef19d06a9027cd333c63 | [
"BSD-3-Clause"
] | null | null | null | """
Module for Gemini FLAMINGOS.
.. include:: ../include/links.rst
"""
import os
from pkg_resources import resource_filename
from IPython import embed
import numpy as np
from pypeit import msgs
from pypeit import telescopes
from pypeit.core import framematch
from pypeit.images import detector_container
from pypeit.spectrographs import spectrograph
class GeminiFLAMINGOSSpectrograph(spectrograph.Spectrograph):
"""
Base class for the Gemini FLAMINGOS spectrograph.
"""
ndet = 1
telescope = telescopes.GeminiSTelescopePar()
def init_meta(self):
"""
Define how metadata are derived from the spectrograph files.
That is, this associates the ``PypeIt``-specific metadata keywords
with the instrument-specific header cards using :attr:`meta`.
"""
self.meta = {}
# Required (core)
self.meta['ra'] = dict(ext=0, card='RA')
self.meta['dec'] = dict(ext=0, card='DEC')
self.meta['target'] = dict(ext=0, card='OBJECT')
self.meta['decker'] = dict(ext=0, card='MASKNAME')
self.meta['dichroic'] = dict(ext=0, card='FILTER')
self.meta['binning'] = dict(ext=0, card=None, default='1,1')
self.meta['mjd'] = dict(ext=0, card='MJD-OBS')
self.meta['exptime'] = dict(ext=0, card='EXPTIME')
self.meta['airmass'] = dict(ext=0, card='AIRMASS')
# Extras for config and frametyping
self.meta['dispname'] = dict(ext=0, card='GRISM')
self.meta['idname'] = dict(ext=0, card='OBSTYPE')
class GeminiFLAMINGOS2Spectrograph(GeminiFLAMINGOSSpectrograph):
"""
Gemini/Flamingos2 Echelle spectrograph methods.
"""
name = 'gemini_flamingos2'
camera = 'FLAMINGOS'
supported = True
comment = 'Flamingos-2 NIR spectrograph'
def get_detector_par(self, hdu, det):
"""
Return metadata for the selected detector.
Args:
hdu (`astropy.io.fits.HDUList`_):
The open fits file with the raw image of interest.
det (:obj:`int`):
1-indexed detector number.
Returns:
:class:`~pypeit.images.detector_container.DetectorContainer`:
Object with the detector metadata.
"""
# Detector 1
detector_dict = dict(
binning = '1,1',
det = 1,
dataext = 1,
specaxis = 0,
specflip = True,
spatflip = False,
platescale = 0.1787,
darkcurr = 0.5,
saturation = 700000., #155400.,
nonlinear = 1.0,
mincounts = -1e10,
numamplifiers = 1,
gain = np.atleast_1d(4.44),
ronoise = np.atleast_1d(5.0), #8 CDS read
datasec = np.atleast_1d('[:,:]'),
oscansec = np.atleast_1d('[:,:]'),
)
return detector_container.DetectorContainer(**detector_dict)
@classmethod
def default_pypeit_par(cls):
"""
Return the default parameters to use for this instrument.
Returns:
:class:`~pypeit.par.pypeitpar.PypeItPar`: Parameters required by
all of ``PypeIt`` methods.
"""
par = super().default_pypeit_par()
# Image processing steps
turn_off = dict(use_illumflat=False, use_biasimage=False, use_overscan=False,
use_darkimage=False)
par.reset_all_processimages_par(**turn_off)
# Wavelengths
# 1D wavelength solution with arc lines
par['calibrations']['wavelengths']['rms_threshold'] = 0.5
par['calibrations']['wavelengths']['sigdetect']=5
par['calibrations']['wavelengths']['fwhm'] = 5
par['calibrations']['wavelengths']['n_first']=2
par['calibrations']['wavelengths']['n_final']=4
par['calibrations']['wavelengths']['lamps'] = ['OH_NIRES']
par['calibrations']['wavelengths']['match_toler']=5.0
# Set slits and tilts parameters
par['calibrations']['tilts']['tracethresh'] = 5
par['calibrations']['tilts']['spat_order'] = 4
par['calibrations']['slitedges']['trace_thresh'] = 10.
par['calibrations']['slitedges']['edge_thresh'] = 200.
par['calibrations']['slitedges']['fit_min_spec_length'] = 0.4
par['calibrations']['slitedges']['sync_predict'] = 'nearest'
# Set the default exposure time ranges for the frame typing
par['calibrations']['standardframe']['exprng'] = [None, 30]
par['calibrations']['tiltframe']['exprng'] = [50, None]
par['calibrations']['arcframe']['exprng'] = [50, None]
par['calibrations']['darkframe']['exprng'] = [20, None]
par['scienceframe']['exprng'] = [20, None]
# Scienceimage parameters
par['reduce']['findobj']['sig_thresh'] = 5.0
par['reduce']['skysub']['sky_sigrej'] = 5.0
par['reduce']['findobj']['find_trim_edge'] = [10,10]
# Do not correct for flexure
par['flexure']['spec_method'] = 'skip'
# Sensitivity function parameters
par['sensfunc']['algorithm'] = 'IR'
par['sensfunc']['polyorder'] = 8
# TODO: replace the telluric grid file for Gemini-S site.
par['sensfunc']['IR']['telgridfile'] \
= os.path.join(par['sensfunc']['IR'].default_root,
'TelFit_LasCampanas_3100_26100_R20000.fits')
return par
def config_specific_par(self, scifile, inp_par=None):
"""
Modify the ``PypeIt`` parameters to hard-wired values used for
specific instrument configurations.
Args:
scifile (:obj:`str`):
File to use when determining the configuration and how
to adjust the input parameters.
inp_par (:class:`~pypeit.par.parset.ParSet`, optional):
Parameter set used for the full run of PypeIt. If None,
use :func:`default_pypeit_par`.
Returns:
:class:`~pypeit.par.parset.ParSet`: The PypeIt parameter set
adjusted for configuration specific parameter values.
"""
par = super().config_specific_par(scifile, inp_par=inp_par)
# TODO: Should we allow the user to override these?
if self.get_meta_value(scifile, 'dispname') == 'JH_G5801':
par['calibrations']['wavelengths']['method'] = 'full_template'
par['calibrations']['wavelengths']['reid_arxiv'] = 'Flamingos2_JH_JH.fits'
elif self.get_meta_value(scifile, 'dispname') == 'HK_G5802':
par['calibrations']['wavelengths']['method'] = 'full_template'
par['calibrations']['wavelengths']['reid_arxiv'] = 'Flamingos2_HK_HK.fits'
return par
def check_frame_type(self, ftype, fitstbl, exprng=None):
"""
Check for frames of the provided type.
Args:
ftype (:obj:`str`):
Type of frame to check. Must be a valid frame type; see
frame-type :ref:`frame_type_defs`.
fitstbl (`astropy.table.Table`_):
The table with the metadata for one or more frames to check.
exprng (:obj:`list`, optional):
Range in the allowed exposure time for a frame of type
``ftype``. See
:func:`pypeit.core.framematch.check_frame_exptime`.
Returns:
`numpy.ndarray`_: Boolean array with the flags selecting the
exposures in ``fitstbl`` that are ``ftype`` type frames.
"""
good_exp = framematch.check_frame_exptime(fitstbl['exptime'], exprng)
if ftype in ['pinhole', 'bias']:
# No pinhole or bias frames
return np.zeros(len(fitstbl), dtype=bool)
if ftype in ['pixelflat', 'trace']:
return good_exp & (fitstbl['idname'] == 'FLAT')
if ftype == 'standard':
return good_exp & (fitstbl['idname'] == 'OBJECT')
if ftype == 'science':
return good_exp & (fitstbl['idname'] == 'OBJECT')
if ftype in ['arc', 'tilt']:
return good_exp & (fitstbl['idname'] == 'OBJECT')
msgs.warn('Cannot determine if frames are of type {0}.'.format(ftype))
return np.zeros(len(fitstbl), dtype=bool)
class GeminiFLAMINGOS1Spectrograph(GeminiFLAMINGOSSpectrograph):
"""
Gemini/Flamingos1 Echelle spectrograph methods.
.. todo::
This is a placeholder class that is not yet supported.
"""
name = 'gemini_flamingos1'
camera = 'FLAMINGOS'
def get_detector_par(self, hdu, det):
"""
Return metadata for the selected detector.
Args:
hdu (`astropy.io.fits.HDUList`_):
The open fits file with the raw image of interest.
det (:obj:`int`):
1-indexed detector number.
Returns:
:class:`~pypeit.images.detector_container.DetectorContainer`:
Object with the detector metadata.
"""
# Detector 1
detector_dict = dict(
binning='1,1',
det = 1,
dataext = 1,
specaxis = 0,
specflip = False,
spatflip = False,
platescale = 0.15,
darkcurr = 0.01,
saturation = 320000., #155400.,
nonlinear = 0.875,
mincounts = -1e10,
numamplifiers = 1,
gain = np.atleast_1d(3.8),
ronoise = np.atleast_1d(6.0), # SUTR readout
datasec= np.atleast_1d('[5:2044, 900:1250]'),
oscansec= np.atleast_1d('[:5, 900:1250]'),
)
return detector_container.DetectorContainer(**detector_dict)
@classmethod
def default_pypeit_par(cls):
"""
Return the default parameters to use for this instrument.
Returns:
:class:`~pypeit.par.pypeitpar.PypeItPar`: Parameters required by
all of ``PypeIt`` methods.
"""
par = super().default_pypeit_par()
# Image processing steps
turn_off = dict(use_illumflat=False, use_biasimage=False, use_overscan=False,
use_darkimage=False)
par.reset_all_processimages_par(**turn_off)
# Wavelengths
# 1D wavelength solution with arc lines
par['calibrations']['wavelengths']['rms_threshold'] = 1.0
par['calibrations']['wavelengths']['sigdetect']=3
par['calibrations']['wavelengths']['fwhm'] = 20
par['calibrations']['wavelengths']['n_first']=2
par['calibrations']['wavelengths']['n_final']=4
par['calibrations']['wavelengths']['lamps'] = ['ArI', 'ArII', 'ThAr', 'NeI']
par['calibrations']['wavelengths']['method'] = 'full_template'
par['calibrations']['wavelengths']['reid_arxiv'] = 'magellan_fire_long.fits'
par['calibrations']['wavelengths']['match_toler']=5.0
# Set slits and tilts parameters
par['calibrations']['tilts']['tracethresh'] = 5
par['calibrations']['slitedges']['trace_thresh'] = 5.
par['calibrations']['slitedges']['sync_predict'] = 'nearest'
# Scienceimage parameters
par['reduce']['findobj']['sig_thresh'] = 5.0
# TODO: I think this parameter was removed
par['reduce']['findobj']['find_trim_edge'] = [50,50]
# Do not correct for flexure
par['flexure']['spec_method'] = 'skip'
# Set the default exposure time ranges for the frame typing
par['calibrations']['standardframe']['exprng'] = [None, 60]
par['calibrations']['arcframe']['exprng'] = [1, 50]
par['calibrations']['darkframe']['exprng'] = [20, None]
par['scienceframe']['exprng'] = [20, None]
return par
def check_frame_type(self, ftype, fitstbl, exprng=None):
"""
Check for frames of the provided type.
Args:
ftype (:obj:`str`):
Type of frame to check. Must be a valid frame type; see
frame-type :ref:`frame_type_defs`.
fitstbl (`astropy.table.Table`_):
The table with the metadata for one or more frames to check.
exprng (:obj:`list`, optional):
Range in the allowed exposure time for a frame of type
``ftype``. See
:func:`pypeit.core.framematch.check_frame_exptime`.
Returns:
`numpy.ndarray`_: Boolean array with the flags selecting the
exposures in ``fitstbl`` that are ``ftype`` type frames.
"""
good_exp = framematch.check_frame_exptime(fitstbl['exptime'], exprng)
if ftype in ['pinhole', 'bias']:
# No pinhole or bias frames
return np.zeros(len(fitstbl), dtype=bool)
if ftype in ['pixelflat', 'trace']:
return good_exp & (fitstbl['idname'] == 'PixFlat')
if ftype == 'standard':
return good_exp & (fitstbl['idname'] == 'Telluric')
if ftype == 'science':
return good_exp & (fitstbl['idname'] == 'Science')
if ftype in ['arc', 'tilt']:
return good_exp & (fitstbl['idname'] == 'Arc')
msgs.warn('Cannot determine if frames are of type {0}.'.format(ftype))
return np.zeros(len(fitstbl), dtype=bool)
| 39.325581 | 86 | 0.572664 | 1,458 | 13,528 | 5.214678 | 0.249657 | 0.071025 | 0.068394 | 0.017362 | 0.629357 | 0.610154 | 0.582796 | 0.582796 | 0.548862 | 0.535973 | 0 | 0.022455 | 0.295535 | 13,528 | 343 | 87 | 39.440233 | 0.775341 | 0.291987 | 0 | 0.476471 | 0 | 0 | 0.24145 | 0.012044 | 0 | 0 | 0 | 0.011662 | 0 | 1 | 0.047059 | false | 0 | 0.052941 | 0 | 0.264706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87f916d84fbb3ebf66d1daf924621a5103749784 | 1,200 | py | Python | mywebsite.py | jzorrof/my_website | 3e0d31e5c4d981dd2116c9f7048aa3f111815ff7 | [
"Apache-2.0"
] | null | null | null | mywebsite.py | jzorrof/my_website | 3e0d31e5c4d981dd2116c9f7048aa3f111815ff7 | [
"Apache-2.0"
] | null | null | null | mywebsite.py | jzorrof/my_website | 3e0d31e5c4d981dd2116c9f7048aa3f111815ff7 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
__author__ = 'Fanzhong'
from flask import Flask, render_template
from boto.s3.connection import S3Connection
from boto.s3.key import Key
import json
app = Flask(__name__)
'''
This is my website index
I'll create my website from now
data: 2015.04.10
'''
def get_from_s3():
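    # Download the scraped JSON object 'my_scrapy' from S3 into getjson.json.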
conn = S3Connection('AKIAJZ5NU5RXHVW3QXPA', 'dHE5tDMMk/WwAoyvrd44TaKsJfnNqLSjEUGOmXt5')
bucketname = conn.get_bucket('scrapy_data_2')
print bucketname
k = Key(bucketname)
k.key = 'my_scrapy'
k.get_contents_to_filename('getjson.json')
@app.route("/")
def index():
return render_template('index.html')
@app.route("/qiche")
def qiche():
testdata=[]
try:
with open("getjson.json") as jsf:
            for each_line in jsf:
                js = json.loads(each_line, encoding='utf-8')
                getjson = json.dumps(js, ensure_ascii=False)
                print(type(getjson))
                # keep each record so the template actually receives data
                testdata.append(getjson)
except IOError as err:
        print('err was ' + str(err))
#return render_template('qiche.html' , testdata={'error':'nothingloaded'})
return render_template('qiche.html' , testdata=testdata)
if __name__ == '__main__':
#get_from_s3()
app.run(debug = True) | 27.906977 | 91 | 0.660833 | 155 | 1,200 | 4.896774 | 0.522581 | 0.073781 | 0.079051 | 0.065876 | 0.097497 | 0.097497 | 0 | 0 | 0 | 0 | 0 | 0.025237 | 0.2075 | 1,200 | 43 | 92 | 27.906977 | 0.772871 | 0.089167 | 0 | 0 | 0 | 0 | 0.159564 | 0.039643 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.133333 | null | null | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
87fd884af907e9e970ccc13cfcca8085d841d1bd | 1,520 | py | Python | python/tlbm/wavy_channel/wavy_channel_generator.py | stu314159/HPC_Introduction_with_LBM | cbba81460513166b4814f3028807020be9b5c234 | [
"MIT"
] | null | null | null | python/tlbm/wavy_channel/wavy_channel_generator.py | stu314159/HPC_Introduction_with_LBM | cbba81460513166b4814f3028807020be9b5c234 | [
"MIT"
] | null | null | null | python/tlbm/wavy_channel/wavy_channel_generator.py | stu314159/HPC_Introduction_with_LBM | cbba81460513166b4814f3028807020be9b5c234 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 7 08:53:18 2021
@author: sblair
"""
import numpy as np
import scipy.integrate as integrate
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
L_hx = 30; # cm, length of the heat exchanger
nX = 100; # number of points in the x-direction
n_period = 4;
A_lam_ratio = 0.3; # ratio between amplitude and wavelength
def get_B(A):
return A_lam_ratio*(2*np.pi)/A;
def wave_form_p(x,A):
return (A/2)*np.sin(get_B(A)*x);
def d_wave_form_p(x,A):
return get_B(A)*(A/2)*np.cos(get_B(A)*x);
def get_X_max(A):
return n_period*2.*np.pi/get_B(A);
def chord_length_error(A):
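    # Residual between the arc length of n_period sine periods and the target
    # heat-exchanger length L_hx; fsolve drives this to zero to find A.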
    result = integrate.quad(lambda x: np.sqrt(1. + d_wave_form_p(x, A)**2),
                            0, get_X_max(A));
chord_length = result[0];
return chord_length - L_hx;
A = fsolve(chord_length_error, 0.1)[0];  # fsolve returns an array; keep the scalar root
print('Amplitude = %g cm' % A);
def wave_form(x):
return wave_form_p(x,A);
# def d_wave_form(x):
# return d_wave_form_p(x,A);
# def phi(x):
# return np.arctan(d_wave_form(x));
# offset = 0.5;
# def offset_x(x):
# return offset*(-np.sin(phi(x)));
# def offset_y(x):
# return offset*(np.cos(phi(x)));
print('A: %12.8f' % A);
print('B: %12.8f' % get_B(A));
print('x_max: %12.8f' % get_X_max(A));
xMin = 0;
xMax = get_X_max(A);
X = np.linspace(xMin,xMax,nX);
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(X,wave_form(X))
plt.grid()
ax.set_aspect('equal',adjustable='box');
plt.show() | 19.74026 | 72 | 0.6375 | 291 | 1,520 | 3.151203 | 0.360825 | 0.078517 | 0.032715 | 0.054526 | 0.102508 | 0.082879 | 0 | 0 | 0 | 0 | 0 | 0.036595 | 0.173026 | 1,520 | 77 | 73 | 19.74026 | 0.69292 | 0.280263 | 0 | 0 | 0 | 0 | 0.06797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.108108 | 0.135135 | 0.432432 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
e2033f8cbe6a73bf5cc8da3c75dc093abcd8cd80 | 4,273 | py | Python | opytimark/core/benchmark.py | gugarosa/opytimark | cad25623f23ce4b509d59381cf7bd79e41a966b6 | [
"Apache-2.0"
] | 3 | 2020-06-11T22:58:26.000Z | 2021-03-15T20:12:29.000Z | opytimark/core/benchmark.py | gugarosa/opytimark | cad25623f23ce4b509d59381cf7bd79e41a966b6 | [
"Apache-2.0"
] | 1 | 2020-08-13T12:10:35.000Z | 2020-08-17T14:30:45.000Z | opytimark/core/benchmark.py | gugarosa/opytimark | cad25623f23ce4b509d59381cf7bd79e41a966b6 | [
"Apache-2.0"
] | null | null | null | """Benchmark-based class.
"""
import opytimark.utils.exception as e
class Benchmark:
"""A Benchmark class is the root of any benchmarking function.
It is composed by several properties that defines the traits of a function,
as well as a non-implemented __call__ method.
"""
def __init__(self, name='Benchmark', dims=1, continuous=False, convex=False,
differentiable=False, multimodal=False, separable=False):
"""Initialization method.
Args:
name (str): Name of the function.
dims (int): Number of allowed dimensions.
continuous (bool): Whether the function is continuous.
convex (bool): Whether the function is convex.
differentiable (bool): Whether the function is differentiable.
multimodal (bool): Whether the function is multimodal.
separable (bool): Whether the function is separable.
"""
# Name of the function
self.name = name
# Number of allowed dimensions
self.dims = dims
# Continuous
self.continuous = continuous
# Convexity
self.convex = convex
# Differentiability
self.differentiable = differentiable
# Modality
self.multimodal = multimodal
# Separability
self.separable = separable
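    # A minimal subclass sketch (the Sphere name and formula are illustrative,
    # not part of this package):
    #
    #     class Sphere(Benchmark):
    #         def __init__(self):
    #             super(Sphere, self).__init__(name='Sphere', dims=-1,
    #                                          continuous=True, convex=True,
    #                                          differentiable=True)
    #
    #         def __call__(self, x):
    #             return (x ** 2).sum()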
@property
def name(self):
"""str: Name of the function.
"""
return self._name
@name.setter
def name(self, name):
if not isinstance(name, str):
raise e.TypeError('`name` should be a string')
self._name = name
@property
def dims(self):
"""int: Number of allowed dimensions.
"""
return self._dims
@dims.setter
def dims(self, dims):
if not isinstance(dims, int):
raise e.TypeError('`dims` should be a integer')
if (dims < -1 or dims == 0):
raise e.ValueError('`dims` should be >= -1 and different than 0')
self._dims = dims
@property
def continuous(self):
"""bool: Whether function is continuous or not.
"""
return self._continuous
@continuous.setter
def continuous(self, continuous):
if not isinstance(continuous, bool):
raise e.TypeError('`continuous` should be a boolean')
self._continuous = continuous
@property
def convex(self):
"""bool: Whether function is convex or not.
"""
return self._convex
@convex.setter
def convex(self, convex):
if not isinstance(convex, bool):
raise e.TypeError('`convex` should be a boolean')
self._convex = convex
@property
def differentiable(self):
"""bool: Whether function is differentiable or not.
"""
return self._differentiable
@differentiable.setter
def differentiable(self, differentiable):
if not isinstance(differentiable, bool):
raise e.TypeError('`differentiable` should be a boolean')
self._differentiable = differentiable
@property
def multimodal(self):
"""bool: Whether function is multimodal or not.
"""
return self._multimodal
@multimodal.setter
def multimodal(self, multimodal):
if not isinstance(multimodal, bool):
raise e.TypeError('`multimodal` should be a boolean')
self._multimodal = multimodal
@property
def separable(self):
"""bool: Whether function is separable or not.
"""
return self._separable
@separable.setter
def separable(self, separable):
if not isinstance(separable, bool):
raise e.TypeError('`separable` should be a boolean')
self._separable = separable
def __call__(self, x):
"""This method returns the function's output when the class is called.
Note that it needs to be implemented in every child class as it is the
one to hold the benchmarking function logic.
Args:
x (np.array): An input array for calculating the function's output.
Returns:
The benchmarking function output `f(x)`.
"""
raise NotImplementedError
| 24.699422 | 80 | 0.607302 | 468 | 4,273 | 5.489316 | 0.209402 | 0.042818 | 0.040872 | 0.042818 | 0.171662 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00169 | 0.307512 | 4,273 | 172 | 81 | 24.843023 | 0.866509 | 0.327405 | 0 | 0.098592 | 0 | 0 | 0.098311 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.225352 | false | 0 | 0.014085 | 0 | 0.352113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e20384b81ca4f4f0e6bcef7012f54531808f1314 | 468 | py | Python | tronx/helpers/decorators.py | beastzx18/Tron | 92207b841c80311e484e8f350b96f7df8a76d3b9 | [
"MIT"
] | 8 | 2021-08-22T06:43:34.000Z | 2022-02-24T17:09:49.000Z | tronx/helpers/decorators.py | beastzx18/Tron | 92207b841c80311e484e8f350b96f7df8a76d3b9 | [
"MIT"
] | 61 | 2021-09-12T11:05:33.000Z | 2021-12-07T15:26:18.000Z | tronx/helpers/decorators.py | beastzx18/Tron | 92207b841c80311e484e8f350b96f7df8a76d3b9 | [
"MIT"
] | 6 | 2021-09-08T08:43:04.000Z | 2022-02-24T17:09:50.000Z | from pyrogram.types import CallbackQuery
from .variables import USER_ID
from pyrogram.errors import MessageNotModified
def alert_user(func):
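    # Restricts callback-query handlers to the configured USER_ID list and
    # silently swallows MessageNotModified raised by redundant edits.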
async def wrapper(_, cb: CallbackQuery):
        if cb.from_user and cb.from_user.id not in USER_ID:
await cb.answer(
f"Sorry, but you can't use this userbot ! make your own userbot at @tronuserbot",
show_alert=True
)
else:
try:
await func(_, cb)
except MessageNotModified:
pass
return wrapper
| 20.347826 | 86 | 0.728632 | 68 | 468 | 4.897059 | 0.632353 | 0.054054 | 0.06006 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.202991 | 468 | 22 | 87 | 21.272727 | 0.892761 | 0 | 0 | 0 | 0 | 0 | 0.16453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.0625 | 0.1875 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
e20d9cffb20687d16b9b32885b1c7eab97057a0f | 21,383 | py | Python | dep/scm.py | harveyt/dep | 5a52fda5ce75033c240c52fd98d3ffde99ed6617 | [
"MIT"
] | null | null | null | dep/scm.py | harveyt/dep | 5a52fda5ce75033c240c52fd98d3ffde99ed6617 | [
"MIT"
] | null | null | null | dep/scm.py | harveyt/dep | 5a52fda5ce75033c240c52fd98d3ffde99ed6617 | [
"MIT"
] | null | null | null | #
# Source Code Management
# ======================
#
# %%LICENSE%%
#
import os
import re
from dep import opts
from dep.helpers import *
class Repository:
def __init__(self, work_dir, url, vcs, name):
self.work_dir = work_dir
self.url = url
self.vcs = vcs
self.name = name
self.branch = None
self.commit = None
def write_state_to_config_section(self, section):
section["url"] = self.url
section["vcs"] = self.vcs
if self.branch:
section["branch"] = self.branch
if self.commit:
section["commit"] = self.commit
def read_state_from_config_section(self, section):
self.branch = section["branch"] if section.has_key("branch") else None
self.commit = section["commit"] if section.has_key("commit") else None
def read_state_from_disk(self):
pass
@staticmethod
def determine_vcs_from_url(url):
# TODO: Hard coded for now
return "git"
@staticmethod
def determine_vcs_from_work_dir(work_dir):
# TODO: Hard coded for now
if GitRepository.is_present(work_dir):
return "git"
else:
return "file"
@staticmethod
def determine_name_from_url(url):
# TODO: Hard coded for now
name = os.path.basename(url)
name = re.sub(r"\.git$", "", name)
return name
@staticmethod
def create(work_dir, url=None, name=None, parent=None):
# Determine URL and vcs if none provided
if url is None:
if work_dir is None:
error("Cannot create repository with no URL and no working directory")
url = "file://{}".format(work_dir)
vcs = Repository.determine_vcs_from_work_dir(work_dir)
else:
vcs = Repository.determine_vcs_from_url(url)
# Determine name if none provided
if name is None:
name = Repository.determine_name_from_url(url)
# Determine work_dir if none provided
if work_dir is None:
work_dir = os.path.join(os.getcwd(), name)
# TODO: Support more VCS
if vcs == "git":
return GitRepository(work_dir, url, name, parent)
elif vcs == "file":
return FileRepository(work_dir, url)
else:
error("Cannot determine VCS from repository URL '{}'", url)
def debug_dump(self, prefix=""):
if not opts.args.debug or opts.args.quiet:
return
debug("{}--- {} ---", prefix, self)
debug("{}work_dir = {}", prefix, self.work_dir)
debug("{}url = {}", prefix, self.url)
debug("{}vcs = {}", prefix, self.vcs)
debug("{}name = {}", prefix, self.name)
debug("{}branch = {}", prefix, self.branch)
debug("{}commit = {}", prefix, self.commit)
self._debug_dump_contents(prefix)
def _debug_dump_contents(self, prefix):
pass
class FileRepository(Repository):
def __init__(self, work_dir, url):
name = Repository.determine_name_from_url(url)
Repository.__init__(self, work_dir, url, "file", name)
def __str__(self):
return "{} '{}'".format(self.__class__.__name__, self.work_dir)
def register(self, path):
pass
def unregister(self, path):
pass
def pre_edit(self, path):
pass
def post_edit(self, path):
pass
def download(self):
pass
def checkout(self, branch=None, commit=None):
pass
def has_ignore(self, path):
return False
def add_ignore(self, path):
pass
def remove_ignore(self, path):
pass
def has_local_modifications(self):
return True
def refresh(self):
pass
def record(self):
pass
def merge_branch(self, name):
pass
def status(self, path, kw):
return True
def create_branch(self, name, startpoint):
pass
def create_worktree(self, branch_name):
pass
class GitRepository(Repository):
def __init__(self, work_dir, url, name, parent):
if parent is not None and not isinstance(parent, GitRepository):
error("GitRepository must have Git parent repository or no parent")
Repository.__init__(self, work_dir, url, "git", name)
self.parent = parent
self.dot_git_path = os.path.join(work_dir, ".git")
self.git_dir = self._compute_git_dir()
self.git_common_dir = self._compute_git_common_dir()
self.worktree_path = self._compute_worktree_path()
self.ignore_file = os.path.join(work_dir, ".gitignore")
self.quiet_flag = "--quiet" if opts.args.quiet else None
def __str__(self):
return "{} '{}'".format(self.__class__.__name__, self.git_dir)
def _debug_dump_contents(self, prefix):
debug("{}parent = {}", prefix, self.parent)
debug("{}dot_git_path = {}", prefix, self.dot_git_path)
debug("{}git_dir = {}", prefix, self.git_dir)
debug("{}git_common_dir = {}", prefix, self.git_common_dir)
debug("{}worktree_path = {}", prefix, self.worktree_path)
debug("{}ignore_file = {}", prefix, self.ignore_file)
debug("{}quiet_flag = {}", prefix, self.quiet_flag)
def read_state_from_disk(self):
if os.path.exists(self.dot_git_path):
self.branch = self._get_branch()
self.commit = self._get_commit()
def _read_git_dir(self):
try:
git_dir = None
with open(self.dot_git_path, 'r') as f:
for line in f:
m = re.match(r"^gitdir:\s+(.*)$", line)
if m:
git_dir = m.group(1)
break
if git_dir is None:
error("Cannot find gitdir in '{}'", self.dot_git_path)
if not os.path.isabs(git_dir):
git_dir = os.path.join(self.work_dir, git_dir)
return git_dir
        except IOError as e:
            error("Cannot open '{}' for reading: {}", self.dot_git_path, e)
def _compute_git_dir(self):
# If .git exists as directory, either root or old style so use that always.
# If .git exists as file, contents determines actual git directory location always.
if os.path.isdir(self.dot_git_path):
return self.dot_git_path
elif os.path.isfile(self.dot_git_path):
return self._read_git_dir()
# If root project, simply use the .git directory.
if self.parent is None:
return self.dot_git_path
deps_path = os.path.join("deps", self.name)
git_dir = os.path.join(self.parent.git_common_dir, deps_path)
if self.parent.worktree_path is not None:
git_dir = os.path.join(git_dir, "worktrees/.UNKNOWN.")
return git_dir
def _is_separate_git_dir(self):
return self.git_dir != self.dot_git_path
def _get_separate_git_dir_flag(self):
return "--separate-git-dir" if self._is_separate_git_dir() else None
def _get_separate_git_dir_arg(self):
return self.git_dir if self._is_separate_git_dir() else None
def _compute_git_common_dir(self):
# The repository git_dir is one of:
# WORK_DIR/.git/worktrees/WORKTREE_ID
# WORK_DIR/.git/deps/NAME/worktrees/WORKTREE_ID
m = re.match(r"(.*/\.git(/deps/[^/]*)?)/worktrees/[^/]*$", self.git_dir)
if m:
return m.group(1)
return self.git_dir
def _compute_worktree_path(self):
if self.parent is None:
# Root is a worktree if git_dir and git_common_dir are different
if self.git_dir == self.git_common_dir:
return None
common_root = os.path.dirname(self.git_common_dir)
return os.path.relpath(self.work_dir, common_root)
# Other repositories inherit from parent
return self.parent.worktree_path
@staticmethod
def is_present(work_dir):
dot_git_path = os.path.join(work_dir, ".git")
return os.path.exists(dot_git_path)
def register(self, path):
run("git", "add", path, cwd=self.work_dir)
def unregister(self, path):
run("git", "rm", "--cached", path, cwd=self.work_dir)
def pre_edit(self, path):
pass
def post_edit(self, path):
run("git", "add", path, cwd=self.work_dir)
def _worktree_add(self):
self.parent.debug_dump("parent: ")
self.debug_dump("local: ")
dep_to_root_path = os.path.relpath(self.parent.work_dir, self.work_dir)
dep_path = os.path.relpath(self.work_dir, self.parent.work_dir)
worktree_path = os.path.join(dep_to_root_path, self.worktree_path, dep_path)
parent_common_root = os.path.dirname(self.parent.git_common_dir)
worktree_common_dir = os.path.join(parent_common_root, dep_path)
branch_name = self._branch_name_from_ref(self.branch)
debug("dep_to_root_path={}", dep_to_root_path)
debug("dep_path={}", dep_path)
debug("worktree_path={}", worktree_path)
debug("parent_common_root={}", parent_common_root)
debug("worktree_common_dir={}", worktree_common_dir)
debug("branch_name={}", branch_name)
status("Adding worktree {}\n on branch '{}'",
self.work_dir, branch_name)
run("git", "worktree", "add", worktree_path, branch_name, cwd=worktree_common_dir)
# NOTE: The git_dir will be incorrect (unknown) until after it is created, must update.
self.git_dir = self._compute_git_dir()
self.debug_dump("worktree: ")
def _clone(self):
status("Downloading {}\n from '{}'", self, self.url)
if self._is_separate_git_dir():
make_dirs(os.path.dirname(self.git_dir))
run("git", "clone",
self.quiet_flag, self._get_separate_git_dir_flag(), self._get_separate_git_dir_arg(),
"--no-checkout", self.url, self.work_dir)
def download(self):
validate_dir_notexists_or_empty(self.work_dir)
validate_dir_notexists(self.git_dir)
if self.worktree_path is not None:
self._worktree_add()
else:
self._clone()
def _is_working_dir_empty(self):
        work_dir_contents = [entry for entry in os.listdir(self.work_dir)
                             if entry not in (".", "..", ".git")]
return len(work_dir_contents) == 0
def _need_checkout(self, branch=None, commit=None, force=False):
debug("_need_checkout: force={}", force)
if force or self._is_working_dir_empty():
return True
if branch is not None:
cur_branch = self._get_branch()
debug("_need_checkout: cur_branch={} required={}", cur_branch, branch)
if cur_branch != branch:
return True
if commit is not None:
cur_commit = self._get_commit()
debug("_need_checkout: cur_commit={} required={}", cur_commit, commit)
if cur_commit != commit:
return True
return False
def checkout(self, branch=None, commit=None):
if not self._need_checkout(branch=branch, commit=commit):
return
branch_flag = None if branch is None or commit is None else "-B"
branch_name = None if branch is None else self._branch_name_from_ref(branch)
commit_flag = None if commit is None else commit
branch_mesg = "" if branch is None else "\n on branch '{}'".format(branch)
commit_mesg = "" if commit is None else "\n at commit '{}'".format(commit)
status("Checkout {}{}{}\n in '{}'", self, branch_mesg, commit_mesg, self.work_dir)
run("git", "checkout", self.quiet_flag, branch_flag, branch_name, commit_flag, cwd=self.work_dir)
def _read_ignore(self):
if not os.path.exists(self.ignore_file):
return []
try:
ignores = []
with open(self.ignore_file, 'r') as f:
for line in f:
line = line.strip()
ignores.append(line)
return ignores
        except IOError as e:
            error("Cannot open '{}' for reading: {}", self.ignore_file, e)
def has_ignore(self, path):
path = "/" + path
ignores = self._read_ignore()
return path in ignores
def add_ignore(self, path):
verbose("Adding '{}' to ignore file '{}'", path, self.ignore_file)
if opts.args.dry_run:
return
        # TODO: With git we know we can just post_edit the file to do the right thing.
        # TODO: With other VCSs we might need register/pre_edit.
try:
with open(self.ignore_file, 'a') as f:
f.write('/{}\n'.format(path))
        except IOError as e:
            error("Cannot open '{}' for writing: {}", self.ignore_file, e)
self.post_edit(self.ignore_file)
def remove_ignore(self, path):
verbose("Removing '{}' from ignore file '{}'", path, self.ignore_file)
if opts.args.dry_run:
return
if not os.path.exists(self.ignore_file):
# TODO: There is no ignore file, so cannot remove?
return
        # TODO: With git we know we can just post_edit the file to do the right thing.
        # TODO: With other VCSs we might need pre_edit.
ignores = self._read_ignore()
try:
with open(self.ignore_file, 'w') as f:
for ignore in ignores:
if ignore != "/" + path:
f.write('{}\n'.format(ignore))
        except IOError as e:
            error("Cannot open '{}' for writing: {}", self.ignore_file, e)
self.post_edit(self.ignore_file)
# TODO: Remove if ignore file is now empty?
    def _is_status_conflict(self, line):
        style = line[0:2]
        return style in ("DD", "AU", "UD", "UA", "DU", "AA", "UU")
def _get_status(self):
ahead = 0
behind = 0
changes = 0
conflicts = 0
with Pipe("git", "status", "--porcelain", "--branch", cwd=self.work_dir) as p:
for line in p:
m = re.match(r"##\s+[^[]*(\[(\s*ahead\s+(\d+)\s*)?,?(\s*behind\s+(\d+)\s*)?\])?", line)
if m:
                    ahead = int(m.group(3)) if m.group(3) else 0
                    behind = int(m.group(5)) if m.group(5) else 0
else:
if self._is_status_conflict(line):
conflicts = conflicts + 1
else:
changes = changes + 1
return (changes, ahead, behind, conflicts)
def _is_merge_in_progress(self):
# Local modifications if merge is in progress so merge will be committed.
merge_head_file = os.path.join(self.git_dir, "MERGE_HEAD")
return os.path.exists(merge_head_file)
def has_local_modifications(self):
return self._is_merge_in_progress() or self._get_status()[0] > 0
def is_ahead(self):
return self._get_status()[1] > 0
def refresh(self):
check_local = True
if not os.path.exists(self.work_dir):
check_local = False
if not os.path.exists(self.git_dir):
self.download()
if check_local and self.has_local_modifications():
error("{} has local modifications, not refreshed", self)
self.checkout(self.branch, self.commit)
def _get_branch(self):
branch = run_query("git", "rev-parse", "--symbolic-full-name", "HEAD", cwd=self.work_dir).rstrip("\n")
# TODO: Check it is valid!
if branch == "HEAD":
# Detached head is not supported (yet), need to checkout a branch.
# TODO: Support checkout of tag and arbitary commit - pick the first sensible branch containing that commit.
error("{} is checked out with a detached head, not yet supported; checkout a branch (not a tag)", self)
return branch
def _get_commit(self):
commit = run_query("git", "rev-parse", "HEAD", cwd=self.work_dir).rstrip("\n")
# TODO: Check it is valid!
return commit
    def _get_describe(self):
        describe = run_query("git", "describe", "--tags", "--always", cwd=self.work_dir).rstrip("\n")
        # TODO: Check it is valid!
        return describe
def record(self):
new_branch = self._get_branch()
new_commit = self._get_commit()
if new_branch != self.branch or new_commit != self.commit:
self.branch = new_branch
self.commit = new_commit
status("""Recording {}
at commit '{}'
on branch '{}'""", self, self.commit, self.branch)
def _branch_name_from_ref(self, ref):
return re.sub(r"refs/heads/", "", ref)
def merge_branch(self, name):
run("git", "merge", self.quiet_flag, "--no-commit", "--no-ff", name, cwd=self.work_dir, allow_failure=True)
def status(self, path, kw):
if kw.get('status_long'):
return self.status_long(path, kw)
else:
return self.status_short(path, kw)
def status_short(self, path, kw):
branch = self.branch
commit = self.commit
actual_branch = self._get_branch()
actual_commit = self._get_commit()
changes, ahead, behind, conflicts = self._get_status()
merging = self._is_merge_in_progress()
# Determine modification state
if changes is None:
mod = "?"
elif conflicts:
mod = "C"
elif changes:
mod = "*"
elif merging:
mod = ">"
else:
mod = " "
        # Determine branch and commit differences
if branch is None:
branch_diff = " "
else:
branch_diff = (" " if branch == actual_branch else "*")
if commit is None:
commit_diff = " "
else:
commit_diff = (" " if commit == actual_commit else "*")
# Determine ahead/behind
ahead = "?" if ahead is None else ahead
behind = "?" if behind is None else behind
# Determine values to show
actual_branch = self._branch_name_from_ref(actual_branch)
show_commit = kw.get('status_commit')
show_describe = kw.get('status_describe')
if not show_commit and not show_describe:
show_commit = (actual_branch != "master")
show_describe = (actual_branch == "master")
if not show_commit or show_describe:
actual_commit = self._get_describe()
commit_value = commit_diff + actual_commit
branch_value = branch_diff + actual_branch
lead = ("## " if kw.get('status_long') else "")
if kw.get('status_first'):
status("{}M Branch Commit Push Pull Path", lead)
status("{}- --------------- ---------------------------------------- ---- ---- --------------------------", lead)
status("{}{:1} {:16} {:41} {:>4} {:>4} {}", lead, mod, branch_value, commit_value, ahead, behind, path)
return self._status_is_clean(mod, branch_diff, commit_diff, ahead, behind, kw)
def _status_is_clean(self, mod, branch_diff, commit_diff, ahead, behind, kw):
if mod != " ":
return False
if branch_diff != " ":
return False
if commit_diff != " ":
return False
if kw.get('status_push_clean') and ahead != 0:
return False
if kw.get('status_pull_clean') and behind != 0:
return False
return True
def status_long(self, path, kw):
status_seperator()
kw['status_first'] = True
is_clean = self.status_short(path, kw)
status("")
run("git", "status", "--long", cwd=self.work_dir)
status("")
return is_clean
def create_branch(self, name, startpoint):
starting = ("\n with start point '{}'".format(startpoint) if startpoint is not None else "")
status("Branch {}\n to branch '{}'{}", self, name, starting)
run("git", "checkout", "-b", name, startpoint, cwd=self.work_dir)
def create_worktree(self, branch_name):
worktree_root = "branch"
worktree_path = os.path.join(worktree_root, branch_name)
work_dir = os.path.join(self.work_dir, worktree_path)
status("Adding worktree {}\n on branch '{}'", work_dir, branch_name)
run("git", "worktree", "add", worktree_path, branch_name)
# Ensure worktree_root is ignored.
if not self.has_ignore(worktree_root):
self.add_ignore(worktree_root)
# Create a .deproot so root finding does not go through "branch" to parent directories.
deproot_path = os.path.join(self.work_dir, worktree_root, ".deproot")
if not os.path.exists(deproot_path):
open(deproot_path, 'a').close()
return Repository.create(work_dir)
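# Usage sketch (hypothetical values; `parent` defaults to None for a root
# project, and `Repository.create` infers the VCS from the URL):
#
# repo = Repository.create(work_dir="/src/dep",
#                          url="git@github.com:example/dep.git")
# repo.download()                             # clone, or add a git worktree
# repo.checkout(branch="refs/heads/master")
# repo.read_state_from_disk()                 # populates repo.branch / repo.commit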
| 38.389587 | 127 | 0.582191 | 2,731 | 21,383 | 4.331014 | 0.109484 | 0.034917 | 0.02976 | 0.01302 | 0.348664 | 0.229371 | 0.152266 | 0.123182 | 0.09579 | 0.076852 | 0 | 0.001997 | 0.297573 | 21,383 | 556 | 128 | 38.458633 | 0.785486 | 0.080531 | 0 | 0.29955 | 0 | 0.004505 | 0.111825 | 0.010907 | 0 | 0 | 0 | 0.001799 | 0 | 0 | null | null | 0.036036 | 0.009009 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
e2101c00fb0005b243b277050e3c456984d8e6cc | 1,746 | py | Python | companies/migrations/0003_auto_20210221_1537.py | Ins-V/wc_crm | 5d75907bb48e892328712ed0b2cf96b9083239aa | [
"MIT"
] | null | null | null | companies/migrations/0003_auto_20210221_1537.py | Ins-V/wc_crm | 5d75907bb48e892328712ed0b2cf96b9083239aa | [
"MIT"
] | null | null | null | companies/migrations/0003_auto_20210221_1537.py | Ins-V/wc_crm | 5d75907bb48e892328712ed0b2cf96b9083239aa | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-02-21 13:37
import companies.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('companies', '0002_auto_20210221_1408'),
]
operations = [
migrations.CreateModel(
name='Email',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('owner', models.CharField(max_length=150, verbose_name='владелец')),
('address', models.EmailField(max_length=254, verbose_name='адрес электронной почты')),
],
options={
'verbose_name': 'email',
'verbose_name_plural': 'emails',
},
),
migrations.CreateModel(
name='Phone',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('owner', models.CharField(max_length=150, verbose_name='владелец')),
('number', models.CharField(max_length=15, validators=[companies.validators.PhoneValidator], verbose_name='номер')),
],
options={
'verbose_name': 'телефон',
'verbose_name_plural': 'телефоны',
},
),
migrations.AddField(
model_name='company',
name='emails',
field=models.ManyToManyField(to='companies.Email', verbose_name='emails'),
),
migrations.AddField(
model_name='company',
name='phones',
field=models.ManyToManyField(to='companies.Phone', verbose_name='phones'),
),
]
| 35.632653 | 132 | 0.563574 | 160 | 1,746 | 5.98125 | 0.43125 | 0.137931 | 0.056426 | 0.075235 | 0.428422 | 0.351097 | 0.271682 | 0.271682 | 0.271682 | 0.271682 | 0 | 0.034568 | 0.304124 | 1,746 | 48 | 133 | 36.375 | 0.753086 | 0.025773 | 0 | 0.47619 | 1 | 0 | 0.160683 | 0.013537 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.047619 | 0 | 0.119048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35589980547cf3bbd203b94d5ac8dbe125b385c2 | 1,096 | py | Python | dask/TestNB2.py | mlkimmins/scalingpythonml | 517c6d3e14ce4eb331ab0fd3b0368e0bf10d9986 | [
"Apache-2.0"
] | 13 | 2020-02-09T16:03:10.000Z | 2022-03-19T14:08:16.000Z | dask/TestNB2.py | mlkimmins/scalingpythonml | 517c6d3e14ce4eb331ab0fd3b0368e0bf10d9986 | [
"Apache-2.0"
] | 3 | 2020-10-31T16:20:05.000Z | 2020-11-04T01:17:02.000Z | dask/TestNB2.py | mlkimmins/scalingpythonml | 517c6d3e14ce4eb331ab0fd3b0368e0bf10d9986 | [
"Apache-2.0"
] | 4 | 2020-12-21T22:23:16.000Z | 2022-03-29T20:25:28.000Z | #!/usr/bin/env python
# coding: utf-8
# In[1]:
import dask
from dask_kubernetes import KubeCluster
import numpy as np
# In[ ]:
#tag::remote_lb_deploy[]
# In[2]:
# Specify a remote deployment using a load balancer, necessary for communication between the notebook and the cluster
dask.config.set({"kubernetes.scheduler-service-type": "LoadBalancer"})
# In[4]:
cluster = KubeCluster.from_yaml('worker-spec.yaml', namespace='dask', deploy_mode='remote')
# In[ ]:
#end::remote_lb_deploy[]
# In[5]:
cluster.adapt(minimum=1, maximum=100)
# In[6]:
# Example usage
from dask.distributed import Client
import dask.array as da
# Connect Dask to the cluster
client = Client(cluster)
# In[7]:
client.scheduler_comm.comm.handshake_info()
# In[8]:
# Create a large array and calculate the mean
array = da.ones((1000, 1000, 1000))
print(array.mean().compute()) # Should print 1.0
# In[9]:
print(array.mean().compute())
# In[10]:
print(array.sum().compute())
# In[13]:
dir(array)
# In[18]:
np.take(array, indices=[0, 10]).sum().compute()
# In[15]:
# In[ ]:
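# Optional cleanup: close the client and cluster so the Kubernetes
# LoadBalancer service created for the scheduler is torn down.
client.close()
cluster.close()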
| 10.640777 | 107 | 0.666971 | 160 | 1,096 | 4.5125 | 0.5375 | 0.041551 | 0.038781 | 0.044321 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041712 | 0.168796 | 1,096 | 102 | 108 | 10.745098 | 0.750823 | 0.364051 | 0 | 0.125 | 0 | 0 | 0.106129 | 0.049327 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.3125 | 0 | 0.3125 | 0.1875 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3559247cc27efd7aa5a74724da6869a1c6747c97 | 1,329 | py | Python | api_v1/tests/test_models.py | andela-akiura/yonder | 1e7c2e113b9188b69459b2443e548d83baeb24e2 | [
"MIT"
] | 1 | 2017-09-04T11:45:32.000Z | 2017-09-04T11:45:32.000Z | api_v1/tests/test_models.py | andela-akiura/pixlr | 1e7c2e113b9188b69459b2443e548d83baeb24e2 | [
"MIT"
] | 4 | 2021-06-08T19:30:05.000Z | 2022-03-11T23:17:41.000Z | api_v1/tests/test_models.py | andela-akiura/khali | 1e7c2e113b9188b69459b2443e548d83baeb24e2 | [
"MIT"
] | null | null | null | from django.test import TestCase
from factories import ImageFactory, ThumbnailImageFactory, ThumbnailFilterFactory
from faker import Faker
from django.contrib.auth.models import User
fake = Faker()
class UserModelTest(TestCase):
pass
class ImageModelTest(TestCase):
def setUp(self):
self.image = ImageFactory()
def test_image_name(self):
fake.seed(1738)
self.assertEqual(self.image.image_name, fake.word())
def test_filter_name_is_none(self):
fake.seed(1738)
self.assertEqual(self.image.filter_name, 'NONE')
def test_created_by(self):
self.assertEqual(self.image.created_by,
User.objects.get(username='fake'))
class ThumbImageModelTest(TestCase):
def setUp(self):
self.thumb = ThumbnailImageFactory()
def test_thumbnail_name(self):
self.assertEqual(
self.thumb.thumbnail.name, 'images/thumbnails/example.jpg')
class ThumbFilterTest(TestCase):
def setUp(self):
self.thumb_filter = ThumbnailFilterFactory()
def test_thumbnail_name(self):
self.assertEqual(
self.thumb_filter.filtered_thumbnail.name,
'images/thumbnails/example.jpg')
def test_filter_name(self):
self.assertEqual(
self.thumb_filter.filter_name, 'BLUR')
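# Hypothetical sketch of what the imported factories might look like; the
# real definitions live in api_v1/tests/factories.py and may differ:
#
# import factory
#
# class ImageFactory(factory.django.DjangoModelFactory):
#     class Meta:
#         model = Image
#
#     image_name = factory.Faker('word')
#     filter_name = 'NONE'
#     created_by = factory.SubFactory(UserFactory)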
| 27.122449 | 81 | 0.686983 | 150 | 1,329 | 5.94 | 0.306667 | 0.062851 | 0.127946 | 0.103255 | 0.426487 | 0.399551 | 0.246914 | 0.197531 | 0.107744 | 0 | 0 | 0.007685 | 0.216704 | 1,329 | 48 | 82 | 27.6875 | 0.848223 | 0 | 0 | 0.285714 | 0 | 0 | 0.052671 | 0.043642 | 0 | 0 | 0 | 0 | 0.171429 | 1 | 0.257143 | false | 0.028571 | 0.114286 | 0 | 0.485714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
355ac1914c73eb19b95d9073487e7c446179f0d7 | 17,852 | py | Python | python/calico/felix/frules.py | a0x8o/felix | fb431cc4a5482f1013bcbef89954d93551c8fec6 | [
"Apache-2.0"
] | 6 | 2016-10-18T04:04:25.000Z | 2016-10-18T04:06:49.000Z | python/calico/felix/frules.py | axbaretto/felix | fb431cc4a5482f1013bcbef89954d93551c8fec6 | [
"Apache-2.0"
] | 1 | 2021-06-01T21:45:37.000Z | 2021-06-01T21:45:37.000Z | python/calico/felix/frules.py | axbaretto/felix | fb431cc4a5482f1013bcbef89954d93551c8fec6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Copyright (c) 2015-2016 Tigera, Inc. All rights reserved.
# Copyright (c) 2015 Cisco Systems. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
felix.frules
~~~~~~~~~~~~
Functions for generating iptables rules. This covers our top-level
chains as well as low-level conversion from our datamodel rules to
iptables format.
iptables background
~~~~~~~~~~~~~~~~~~~
iptables configuration is split into multiple tables, which each support
different types of rules. Each table contains multiple chains, which
are sequences of rules. At certain points in packet processing the
packet is handed to one of the always-present kernel chains in a
particular table. The kernel chains have default behaviours but they
can be modified to add or remove rules, including inserting a jump to
another chain.
Felix is mainly concerned with the "filter" table, which is used for
imposing policy rules. There are multiple kernel chains in the filter
table. After the routing decision has been made, packets enter
* the INPUT chain if they are destined for the host itself
* the OUTPUT chain if they are being sent by the host itself
* the FORWARD chain if they are to be forwarded between two interfaces.
Note: packets that are being forwarded do not traverse the INPUT or
OUTPUT chains at all. INPUT and OUTPUT are only used for packets that
the host itself is to receive/send.
Packet paths
~~~~~~~~~~~~
There are a number of possible paths through the filter chains that we
care about:
* Packets from a local workload to another local workload traverse the
FORWARD chain only. Felix must ensure that those packets have *both*
the outbound policy of the sending workload and the inbound policy
of the receiving workload applied.
* Packets from a local workload to a remote address traverse the FORWARD
chain only. Felix must ensure that those packets have the outbound
policy of the local workload applied.
* Packets from a remote address to a local workload traverse the FORWARD
chain only. Felix must apply the inbound policy of the local workload.
* Packets from a local workload to the host itself traverse the INPUT
chain. Felix must apply the outbound policy of the workload.
Chain structure
~~~~~~~~~~~~~~~
Rather than adding many rules to the kernel chains, which are a shared
resource (and hence difficult to unpick), Felix creates its own delegate
chain for each kernel chain and inserts a single jump rule into the
kernel chain:
* INPUT -> felix-INPUT
* FORWARD -> felix-FORWARD
The top-level felix-XXX chains are static and configured at start-of-day.
The felix-FORWARD chain sends packet that arrive from a local workload to
the felix-FROM-ENDPOINT chain, which applies inbound policy. Packets that
are denied by policy are dropped immediately. However, accepted packets
are returned to the felix-FORWARD chain in case they need to be processed
further. felix-FORWARD then directs packets that are going to local
endpoints to the felix-TO-ENDPOINT chain, which applies inbound policy.
Similarly, felix-TO-ENDPOINT either drops or returns the packet. Finally,
if both the FROM-ENDPOINT and TO-ENDPOINT chains allow the packet,
felix-FORWARD accepts the packet and allows it through.
The felix-INPUT sends packets from local workloads to the (shared)
felix-FROM-ENDPOINT chain, which applies outbound policy. Then it
(optionally) accepts packets that are returned.
Since workloads come and go, the TO/FROM-ENDPOINT chains are dynamic and
consist of dispatch tables based on device name. Those chains are managed
by dispatch.py.
The dispatch chains direct packets to per-endpoint ("felix-to/from")
chains, which are responsible for policing IP addresses. Those chains are
managed by endpoint.py. Since the actual policy rules can be shared by
multiple endpoints, we put each set of policy rules in its own chain and
the per-endpoint chains send packets to the relevant policy
(felix-p-xxx-i/o) chains in turn. Policy profile chains are managed by
profilerules.py.
Since an endpoint may be in multiple profiles and we execute the policy
chains of those profiles in sequence, the policy chains need to
communicate three different "return values"; for this we use the packet
Accept MARK (a configured bit in the MARK space):
* Packet was matched by a deny rule. In this case the packet is immediately
dropped.
* Packet was matched by an allow rule. In this case the packet is returned
with Accept MARK==1. The calling chain can then return the packet to its
caller for further processing.
* Packet was not matched at all. In this case, the packet is returned with
Accept MARK==0. The calling chain can then send the packet through the next
profile chain.
"""
import logging
import time
import netaddr
from calico.felix import devices
from calico.felix import futils
from calico.felix.futils import FailedSystemCall
from calico.felix.ipsets import HOSTS_IPSET_V4
_log = logging.getLogger(__name__)
FELIX_PREFIX = "felix-"
# Maximum number of port entries in a "multiport" match rule. Ranges count for
# 2 entries.
MAX_MULTIPORT_ENTRIES = 15
# Name of the global, stateless IP-in-IP device name.
IP_IN_IP_DEV_NAME = "tunl0"
# Rule to catch packets that are being sent down the IPIP tunnel from an
# incorrect local IP address of the host. This happens if:
#
# - the user explicitly binds their socket to the wrong source IP accidentally
# - the user sends traffic to, for example, a Kubernetes service IP, which is
# implemented via NAT instead of routing, leading the kernel to choose the
# wrong source IP.
#
# We NAT the source of the packet to use the tunnel IP. We assume that
# non-local IPs have been correctly routed. Since Calico-assigned IPs are
# non-local (because they're down a veth), they won't get caught by the rule.
# Other remote sources will only reach the tunnel if they're being NATted
# already (for example, a Kubernetes "NodePort"). The kernel will then
# choose the correct source on its own.
POSTROUTING_LOCAL_NAT_FRAGMENT = (
"POSTROUTING "
# Only match if the packet is going out via the tunnel.
"--out-interface %s "
# Match packets that don't have the correct source address. This matches
# local addresses (i.e. ones assigned to this host) limiting the match to
# the output interface (which we matched above as the tunnel). Avoiding
# embedding the IP address lets us use a static rule, which is easier to
# manage.
"-m addrtype ! --src-type LOCAL --limit-iface-out "
# Only match if the IP is also some local IP on the box. This prevents
# us from matching packets from workloads, which are remote as far as the
# routing table is concerned.
"-m addrtype --src-type LOCAL "
# NAT them to use the source IP of the tunnel. Using MASQUERADE means
# the kernel chooses the source automatically.
"-j MASQUERADE" % IP_IN_IP_DEV_NAME
)
# Chain names
# Dispatch chains to and from workload endpoints.
CHAIN_TO_ENDPOINT = FELIX_PREFIX + "TO-ENDPOINT"
CHAIN_FROM_ENDPOINT = FELIX_PREFIX + "FROM-ENDPOINT"
CHAIN_TO_LEAF = FELIX_PREFIX + "TO-EP-PFX"
CHAIN_FROM_LEAF = FELIX_PREFIX + "FROM-EP-PFX"
WORKLOAD_DISPATCH_CHAINS = {
"to_root": CHAIN_TO_ENDPOINT,
"from_root": CHAIN_FROM_ENDPOINT,
"to_leaf": CHAIN_TO_LEAF,
"from_leaf": CHAIN_FROM_LEAF,
}
# Ditto for host endpoints.
CHAIN_TO_IFACE = FELIX_PREFIX + "TO-HOST-IF"
CHAIN_FROM_IFACE = FELIX_PREFIX + "FROM-HOST-IF"
CHAIN_TO_IFACE_LEAF = FELIX_PREFIX + "TO-IF-PFX"
CHAIN_FROM_IFACE_LEAF = FELIX_PREFIX + "FROM-IF-PFX"
HOST_DISPATCH_CHAINS = {
"to_root": CHAIN_TO_IFACE,
"from_root": CHAIN_FROM_IFACE,
"to_leaf": CHAIN_TO_IFACE_LEAF,
"from_leaf": CHAIN_FROM_IFACE_LEAF,
}
# Failsafe whitelist chains.
CHAIN_FAILSAFE_IN = FELIX_PREFIX + "FAILSAFE-IN"
CHAIN_FAILSAFE_OUT = FELIX_PREFIX + "FAILSAFE-OUT"
# Per-endpoint/interface chain prefixes.
CHAIN_TO_PREFIX = FELIX_PREFIX + "to-"
CHAIN_FROM_PREFIX = FELIX_PREFIX + "from-"
# Top-level felix chains.
CHAIN_PREROUTING = FELIX_PREFIX + "PREROUTING"
CHAIN_POSTROUTING = FELIX_PREFIX + "POSTROUTING"
CHAIN_INPUT = FELIX_PREFIX + "INPUT"
CHAIN_OUTPUT = FELIX_PREFIX + "OUTPUT"
CHAIN_FORWARD = FELIX_PREFIX + "FORWARD"
CHAIN_FIP_DNAT = FELIX_PREFIX + 'FIP-DNAT'
CHAIN_FIP_SNAT = FELIX_PREFIX + 'FIP-SNAT'
def load_nf_conntrack():
"""
Try to force the nf_conntrack_netlink kernel module to be loaded.
"""
_log.info("Running conntrack command to force load of "
"nf_conntrack_netlink module.")
try:
# Run a conntrack command to trigger it to load the kernel module if
# it's not already compiled in. We list rules with a randomly-chosen
# link local address. That makes it very unlikely that we generate
# any wasteful output. We used to use "-S" (show stats) here but it
# seems to be bugged on some platforms, generating an error.
futils.check_call(["conntrack", "-L", "-s", "169.254.45.169"])
except FailedSystemCall:
_log.exception("Failed to execute conntrack command to force load of "
"nf_conntrack_netlink module. conntrack commands may "
"fail later.")
def install_global_rules(config, filter_updater, nat_updater, ip_version,
raw_updater=None):
"""
Set up global iptables rules. These are rules that do not change with
endpoint, and are expected never to change (such as the rules that send all
traffic through the top level Felix chains).
    This method therefore:
- ensures that all the required global tables are present;
- applies any changes required.
"""
# If enabled, create the IP-in-IP device, but only for IPv4
if ip_version == 4:
if config.IP_IN_IP_ENABLED:
_log.info("IP-in-IP enabled, ensuring device exists.")
try:
_configure_ipip_device(config)
except FailedSystemCall:
# We've seen this fail occasionally if the kernel is
# concurrently starting the tunl0 device. Retry.
_log.exception("Failed to configure IPIP device, retrying...")
time.sleep(1)
_configure_ipip_device(config)
if config.IP_IN_IP_ENABLED and config.IP_IN_IP_ADDR:
# Add a rule to catch packets originated by this host that are
# going down the tunnel with the wrong source address. NAT them
# to use the address of the tunnel device instead. See comment
# on the constant for more details.
_log.info("IPIP enabled and tunnel address set: inserting "
"MASQUERADE rule to ensure tunnelled packets have "
"correct source.")
nat_updater.ensure_rule_inserted(POSTROUTING_LOCAL_NAT_FRAGMENT,
async=False)
else:
# Clean up the rule that we insert above if IPIP is enabled.
_log.info("IPIP disabled or no tunnel address set: removing "
"MASQUERADE rule.")
nat_updater.ensure_rule_removed(POSTROUTING_LOCAL_NAT_FRAGMENT,
async=False)
# Ensure that Calico-controlled IPv6 hosts cannot spoof their IP addresses.
# (For IPv4, this is controlled by a per-interface sysctl.)
iptables_generator = config.plugins["iptables_generator"]
if raw_updater:
raw_prerouting_chain, raw_prerouting_deps = (
iptables_generator.raw_rpfilter_failed_chain(ip_version=ip_version)
)
raw_updater.rewrite_chains({CHAIN_PREROUTING: raw_prerouting_chain},
{CHAIN_PREROUTING: raw_prerouting_deps},
async=False)
for iface_prefix in config.IFACE_PREFIX:
            # The interface matching string; for example,
            # if interfaces start with "tap" then this string is "tap+".
iface_match = iface_prefix + '+'
raw_updater.ensure_rule_inserted(
"PREROUTING --in-interface %s --match rpfilter --invert "
"--jump %s" %
(iface_match, CHAIN_PREROUTING),
async=False)
# Both IPV4 and IPV6 nat tables need felix-PREROUTING,
# felix-POSTROUTING and felix-OUTPUT, along with the dependent
# DNAT and SNAT tables required for NAT/floating IP support.
prerouting_chain, prerouting_deps = (
iptables_generator.nat_prerouting_chain(ip_version=ip_version)
)
postrouting_chain, postrouting_deps = (
iptables_generator.nat_postrouting_chain(ip_version=ip_version)
)
output_chain, output_deps = (
iptables_generator.nat_output_chain(ip_version=ip_version)
)
nat_updater.rewrite_chains({CHAIN_PREROUTING: prerouting_chain,
CHAIN_POSTROUTING: postrouting_chain,
CHAIN_OUTPUT: output_chain,
CHAIN_FIP_DNAT: [],
CHAIN_FIP_SNAT: []},
{CHAIN_PREROUTING: prerouting_deps,
CHAIN_POSTROUTING: postrouting_deps,
CHAIN_OUTPUT: output_deps},
async=False)
nat_updater.ensure_rule_inserted(
"PREROUTING --jump %s" % CHAIN_PREROUTING, async=False)
nat_updater.ensure_rule_inserted(
"POSTROUTING --jump %s" % CHAIN_POSTROUTING, async=False)
nat_updater.ensure_rule_inserted(
"OUTPUT --jump %s" % CHAIN_OUTPUT, async=False)
# Now the filter table. This needs to have felix-FORWARD and felix-INPUT
# chains, which we must create before adding any rules that send to them.
if ip_version == 4 and config.IP_IN_IP_ENABLED:
hosts_set_name = HOSTS_IPSET_V4.set_name
HOSTS_IPSET_V4.ensure_exists()
else:
hosts_set_name = None
input_chain, input_deps = (
iptables_generator.filter_input_chain(ip_version, hosts_set_name)
)
output_chain, output_deps = (
iptables_generator.filter_output_chain(ip_version)
)
forward_chain, forward_deps = (
iptables_generator.filter_forward_chain(ip_version)
)
failsafe_in_chain, failsafe_in_deps = (
iptables_generator.failsafe_in_chain()
)
failsafe_out_chain, failsafe_out_deps = (
iptables_generator.failsafe_out_chain()
)
filter_updater.rewrite_chains(
{
CHAIN_FORWARD: forward_chain,
CHAIN_INPUT: input_chain,
CHAIN_OUTPUT: output_chain,
CHAIN_FAILSAFE_IN: failsafe_in_chain,
CHAIN_FAILSAFE_OUT: failsafe_out_chain,
},
{
CHAIN_FORWARD: forward_deps,
CHAIN_INPUT: input_deps,
CHAIN_OUTPUT: output_deps,
CHAIN_FAILSAFE_IN: failsafe_in_deps,
CHAIN_FAILSAFE_OUT: failsafe_out_deps,
},
async=False)
filter_updater.ensure_rule_inserted(
"INPUT --jump %s" % CHAIN_INPUT,
async=False)
filter_updater.ensure_rule_inserted(
"OUTPUT --jump %s" % CHAIN_OUTPUT,
async=False)
filter_updater.ensure_rule_inserted(
"FORWARD --jump %s" % CHAIN_FORWARD,
async=False)
def _configure_ipip_device(config):
"""Creates and enables the IPIP tunnel device.
:raises FailedSystemCall on failure.
"""
if not devices.interface_exists(IP_IN_IP_DEV_NAME):
# Make sure the IP-in-IP device exists; since we use the global
# device, this command actually creates it as a side-effect of
# initialising the kernel module rather than explicitly creating
# it.
_log.info("Tunnel device didn't exist; creating.")
futils.check_call(["ip", "tunnel", "add", IP_IN_IP_DEV_NAME,
"mode", "ipip"])
futils.check_call(["ip", "link", "set", IP_IN_IP_DEV_NAME, "mtu",
str(config.IP_IN_IP_MTU)])
if not devices.interface_up(IP_IN_IP_DEV_NAME):
_log.info("Tunnel device wasn't up; enabling.")
futils.check_call(["ip", "link", "set", IP_IN_IP_DEV_NAME, "up"])
# Allow an IP address to be added to the tunnel. This is useful to
# allow the host to have an IP on a private IPIP network so that it can
# originate traffic and have it routed correctly.
_log.info("Setting IPIP device IP to %s", config.IP_IN_IP_ADDR)
tunnel_addrs = [netaddr.IPAddress(config.IP_IN_IP_ADDR)] if config.IP_IN_IP_ADDR else []
devices.set_interface_ips(futils.IPV4, IP_IN_IP_DEV_NAME,
set(tunnel_addrs))
_log.info("Configured IPIP device.")
def interface_to_chain_suffix(config, iface_name):
"""
Extracts the suffix from a given interface name, uniquely shortening it
to 16 characters if necessary.
:param iface_name: The interface name
:returns string: the suffix (shortened if necessary)
"""
for prefix in sorted(config.IFACE_PREFIX, reverse=True):
if iface_name.startswith(prefix):
iface_name = iface_name[len(prefix):]
break
iface_name = futils.uniquely_shorten(iface_name, 16)
return iface_name
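# Usage sketch (hypothetical config with IFACE_PREFIX == ["tap"]):
#
# interface_to_chain_suffix(config, "tap1234567890abcdef")
# # -> "1234567890abcdef", uniquely shortened to at most 16 characters and
# # used to build per-endpoint chain names such as "felix-to-<suffix>".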
| 41.906103 | 92 | 0.698969 | 2,502 | 17,852 | 4.831335 | 0.221423 | 0.0182 | 0.009927 | 0.005956 | 0.198709 | 0.111433 | 0.063865 | 0.046823 | 0.046823 | 0.041694 | 0 | 0.003731 | 0.234259 | 17,852 | 425 | 93 | 42.004706 | 0.880541 | 0.22765 | 0 | 0.145161 | 0 | 0 | 0.142619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.037634 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
355af8d5ae4552973efc6c0ce81832474cc5e594 | 2,716 | py | Python | browse.py | Thorsten-Sick/tags_for_media_ccc_de | ad1a117ea1dfc2b508d287854ba9b2f5c5a438ca | [
"MIT"
] | 1 | 2018-01-11T15:46:56.000Z | 2018-01-11T15:46:56.000Z | browse.py | Thorsten-Sick/tags_for_media_ccc_de | ad1a117ea1dfc2b508d287854ba9b2f5c5a438ca | [
"MIT"
] | 1 | 2018-11-04T18:42:57.000Z | 2018-11-18T22:14:49.000Z | browse.py | Thorsten-Sick/tags_for_media_ccc_de | ad1a117ea1dfc2b508d287854ba9b2f5c5a438ca | [
"MIT"
] | 1 | 2018-11-24T19:17:31.000Z | 2018-11-24T19:17:31.000Z | #!/usr/bin/env python3
# TODO: Write a command line tool to browse and search in the database
# TODO: Define a command set to search for strings, tags, similar talks, mark talks as seen, mark talks as irrelevant, mark talks as relevant, open a browser and watch, show details, quit
# https://opensource.com/article/17/5/4-practical-python-libraries
# TODO: Maybe use fuzzyfinder
# TODO: use prompt_toolkit autocompletion, auto suggestion and history
# TODO: Use pygments for syntax highlighting https://pygments.org/
from prompt_toolkit import prompt
from prompt_toolkit.history import FileHistory
from prompt_toolkit.auto_suggest import AutoSuggestFromHistory
from prompt_toolkit.completion import NestedCompleter
from dropdata import MediaTagger
import argparse
def printHelp():
print("""
tags: list tags
TODO tags + tag: list all talks containing a specific tag
TODO similar: Find similar content
TODO seen: Mark talks as seen
TODO irrelevant: Mark talks as irrelevant
TODO relevant: Mark talks as relevant
TODO show: Show content in browser
TODO details: Show details
quit: quit
help: get help
""")
def getCompleter():
""" Generates a nested completer
:return:
"""
mt = MediaTagger(frab=False, subtitles=False, default=False, offline=True)
    return NestedCompleter.from_nested_dict({
        'help': None,     # Show help
        'quit': None,     # Quit
        'tags': {key: None for key in mt.list_tags() + [""]},  # Search for tags
        'similar': None,  # Find similar content using k-nearest
    })
if __name__ == "__main__":
    ### Parsing args
    parser = argparse.ArgumentParser()
    parser.add_argument("--data", help="Database file name", default="frab.json", type=str)
    args = parser.parse_args()
### Load data
### Logic
BrowserCompleter = getCompleter()
mt = MediaTagger(frab=False, subtitles=False, default=False, offline=True)
mt.read_file(args.data)
    while True:
user_input = prompt('> ',
history=FileHistory("history.txt"),
auto_suggest=AutoSuggestFromHistory(),
completer=BrowserCompleter,
)
user_input = user_input.lower()
if user_input == "quit":
break
elif user_input == "help":
printHelp()
elif user_input == "tags":
# pure tags, list them
print(",".join(mt.list_tags()))
else:
print(user_input)
| 32.722892 | 187 | 0.610088 | 305 | 2,716 | 5.337705 | 0.419672 | 0.038698 | 0.040541 | 0.018428 | 0.072482 | 0.072482 | 0.072482 | 0.072482 | 0.072482 | 0.072482 | 0 | 0.003161 | 0.301178 | 2,716 | 82 | 188 | 33.121951 | 0.854584 | 0.243373 | 0 | 0.041667 | 0 | 0 | 0.217434 | 0 | 0 | 0 | 0 | 0.012195 | 0 | 1 | 0.041667 | false | 0 | 0.125 | 0 | 0.1875 | 0.104167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
355b54f8b2fba95e01f01d6e3b0468747cbcfa07 | 587 | py | Python | Curso-Em-Video-Python/1Materias/08_Utilizando_Modulos/#08 - Utilizando Módulos C random.py | pedrohd21/Cursos-Feitos | b223aad83867bfa45ad161d133e33c2c200d42bd | [
"MIT"
] | null | null | null | Curso-Em-Video-Python/1Materias/08_Utilizando_Modulos/#08 - Utilizando Módulos C random.py | pedrohd21/Cursos-Feitos | b223aad83867bfa45ad161d133e33c2c200d42bd | [
"MIT"
] | null | null | null | Curso-Em-Video-Python/1Materias/08_Utilizando_Modulos/#08 - Utilizando Módulos C random.py | pedrohd21/Cursos-Feitos | b223aad83867bfa45ad161d133e33c2c200d42bd | [
"MIT"
] | null | null | null | import random
# num = random.random() para numeros de 0 e 1
num = random.randint(1, 10)
print(num)
'''import random 'choice'
n1 = str(input('Primeiro aluno: '))
n2 = str(input('Segundo aluno: '))
n3 = str(input('Terceiro aluno: '))
n4 = str(input('Quarto aluno: '))
lista = [n1, n2, n3, n4]
escolha = random.choice(lista)
print(escolha)'''
'''import random 'shuffle'
n1 = str(input('Aluno: '))
n2 = str(input('Aluno: '))
n3 = str(input('Aluno: '))
n4 = str(input('Aluno: '))
lista = [n1, n2, n3, n4]
sorteio = random.shuffle(lista)
print('A ordem de apresentação é ')
print(lista)''' | 24.458333 | 45 | 0.645656 | 88 | 587 | 4.306818 | 0.363636 | 0.168865 | 0.137203 | 0.079156 | 0.094987 | 0.094987 | 0 | 0 | 0 | 0 | 0 | 0.042084 | 0.149915 | 587 | 24 | 46 | 24.458333 | 0.717435 | 0.073254 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
355d9f89110de4ad691f2cde310c459a0094cdbf | 1,520 | py | Python | help.py | TarikCinar/python-sesli-asistan | 1a29a8d3081b67ff352cf03f7b01ac01b7118deb | [
"MIT"
] | 1 | 2021-05-28T17:27:50.000Z | 2021-05-28T17:27:50.000Z | help.py | TarikCinar/python-sesli-asistan | 1a29a8d3081b67ff352cf03f7b01ac01b7118deb | [
"MIT"
] | null | null | null | help.py | TarikCinar/python-sesli-asistan | 1a29a8d3081b67ff352cf03f7b01ac01b7118deb | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'help.ui'
#
# Created by: PyQt5 UI code generator 5.13.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
def setupUi(self, Form):
Form.setObjectName("Form")
Form.resize(400, 450)
Form.setMinimumSize(QtCore.QSize(400, 450))
Form.setMaximumSize(QtCore.QSize(400, 450))
Form.setStyleSheet("\n"
"background-color: qlineargradient(spread:pad, x1:0, y1:0, x2:1, y2:0, stop:0 rgb(40,40,211) , stop:1 rgb(99,136,153) );")
self.textBrowser = QtWidgets.QTextBrowser(Form)
self.textBrowser.setGeometry(QtCore.QRect(10, 20, 381, 421))
self.textBrowser.setMinimumSize(QtCore.QSize(10, 10))
self.textBrowser.setMaximumSize(QtCore.QSize(121121, 325235))
self.textBrowser.setStyleSheet("#textBrowser{\n"
"\n"
"font: 12pt \"Consolas\";\n"
"}")
self.textBrowser.setFrameShape(QtWidgets.QFrame.NoFrame)
self.textBrowser.setObjectName("textBrowser")
self.retranslateUi(Form)
QtCore.QMetaObject.connectSlotsByName(Form)
def retranslateUi(self, Form):
_translate = QtCore.QCoreApplication.translate
Form.setWindowTitle(_translate("Form", "Help"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Form = QtWidgets.QWidget()
ui = Ui_Form()
ui.setupUi(Form)
Form.show()
sys.exit(app.exec_())
| 31.666667 | 122 | 0.675 | 185 | 1,520 | 5.475676 | 0.518919 | 0.103653 | 0.029615 | 0.033564 | 0.041461 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063209 | 0.188158 | 1,520 | 47 | 123 | 32.340426 | 0.757699 | 0.117105 | 0 | 0 | 1 | 0.03125 | 0.138577 | 0.020225 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.15625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35665e1b39e67d688ac135c0ce7cb34d35d57e66 | 1,223 | py | Python | homeassistant/components/launch_library/diagnostics.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 30,023 | 2016-04-13T10:17:53.000Z | 2020-03-02T12:56:31.000Z | homeassistant/components/launch_library/diagnostics.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 24,710 | 2016-04-13T08:27:26.000Z | 2020-03-02T12:59:13.000Z | homeassistant/components/launch_library/diagnostics.py | MrDelik/core | 93a66cc357b226389967668441000498a10453bb | [
"Apache-2.0"
] | 11,956 | 2016-04-13T18:42:31.000Z | 2020-03-02T09:32:12.000Z | """Diagnostics support for Launch Library."""
from __future__ import annotations
from typing import Any
from pylaunches.objects.event import Event
from pylaunches.objects.launch import Launch
from homeassistant.config_entries import ConfigEntry
from homeassistant.core import HomeAssistant
from homeassistant.helpers.update_coordinator import DataUpdateCoordinator
from . import LaunchLibraryData
from .const import DOMAIN
async def async_get_config_entry_diagnostics(
hass: HomeAssistant,
entry: ConfigEntry,
) -> dict[str, Any]:
"""Return diagnostics for a config entry."""
coordinator: DataUpdateCoordinator[LaunchLibraryData] = hass.data[DOMAIN]
if coordinator.data is None:
return {}
def _first_element(data: list[Launch | Event]) -> dict[str, Any] | None:
if not data:
return None
return data[0].raw_data_contents
return {
"next_launch": _first_element(coordinator.data["upcoming_launches"]),
"starship_launch": _first_element(
coordinator.data["starship_events"].upcoming.launches
),
"starship_event": _first_element(
coordinator.data["starship_events"].upcoming.events
),
}
| 29.829268 | 77 | 0.72036 | 133 | 1,223 | 6.428571 | 0.383459 | 0.070175 | 0.080702 | 0.094737 | 0.160234 | 0.11462 | 0.11462 | 0 | 0 | 0 | 0 | 0.001016 | 0.195421 | 1,223 | 40 | 78 | 30.575 | 0.867886 | 0.031889 | 0 | 0.068966 | 0 | 0 | 0.07672 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.310345 | 0 | 0.482759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
3567e722e33bfee718b3bdecb716ef40a5ef9cda | 2,894 | py | Python | py_tests/test_vision_pipeline_manager.py | machine2learn/mlpiot.base | da0b77fccbb0e42d1ddbb6dbc490313433dc7575 | [
"Apache-2.0"
] | 1 | 2021-03-30T20:49:54.000Z | 2021-03-30T20:49:54.000Z | py_tests/test_vision_pipeline_manager.py | machine2learn/mlpiot.base | da0b77fccbb0e42d1ddbb6dbc490313433dc7575 | [
"Apache-2.0"
] | null | null | null | py_tests/test_vision_pipeline_manager.py | machine2learn/mlpiot.base | da0b77fccbb0e42d1ddbb6dbc490313433dc7575 | [
"Apache-2.0"
] | null | null | null | """Tests for mlpiot.base.vision_pipeline_manager"""
import unittest
from mlpiot.base.action_executor import ActionExecutor
from mlpiot.base.event_extractor import EventExtractor
from mlpiot.base.scene_descriptor import SceneDescriptor
from mlpiot.base.trainer import Trainer
from mlpiot.base.vision_pipeline_manager import VisionPipelineManager
from mlpiot.proto import \
Image, ImageWithHelpers, \
VisionPipelineData, VisionPipelineManagerMetadata
class DummySceneDescriptor(SceneDescriptor):
def initialize(self, environ):
pass
def prepare_for_describing(self, output_metadata):
pass
def describe_scene(self, input_image, output_scene_description):
pass
class DummyEventExtractor(EventExtractor):
def initialize(self, environ):
pass
def prepare_for_event_extraction(self, output_metadata):
pass
def extract_events(
self, input_scene_description, output_event_extraction):
pass
class DummyActionExecutor(ActionExecutor):
def initialize(self, environ):
pass
def prepare_for_action_execution(self, output_metadata):
pass
def execute_action(
self, input_event_extraction, output_action_execution):
pass
class DummyTrainer(Trainer):
def initialize(self, environ):
pass
def prepare_for_training(self, output_metadata):
pass
def train(self, dataset, validation_dataset=None):
pass
class TestVisionPipelineManager(unittest.TestCase):
"""Test mlpiot.base.vision_pipeline_manager.VisionPipelineManager"""
def test_smoke(self):
"A simple test to check if everything is importable"
dummy_scene_descriptor = DummySceneDescriptor()
dummy_event_extractor = DummyEventExtractor()
dummy_action_executor = DummyActionExecutor()
dummy_trainer = DummyTrainer()
vision_pipeline_manager = VisionPipelineManager(
dummy_scene_descriptor,
dummy_event_extractor,
[dummy_action_executor],
dummy_trainer)
vpmm = VisionPipelineManagerMetadata()
vision_pipeline_manager.initialize({}, vpmm)
with vision_pipeline_manager.\
prepare_for_running_pipeline() as pipeline_runner:
input_image_proto = Image()
input_image_proto.height = 1
input_image_proto.width = 1
input_image_proto.channels = 1
input_image = ImageWithHelpers(input_image_proto)
vision_pipeline_data = VisionPipelineData()
vision_pipeline_data.id = 1001
pipeline_runner.run_pipeline(input_image, vision_pipeline_data)
initialized_trainer = vision_pipeline_manager.managed_trainer
with initialized_trainer.prepare_for_training() as ready_runner:
ready_runner.train([vision_pipeline_data])
| 29.232323 | 75 | 0.717346 | 295 | 2,894 | 6.718644 | 0.277966 | 0.077699 | 0.074168 | 0.048436 | 0.186176 | 0.120081 | 0.082745 | 0.082745 | 0 | 0 | 0 | 0.003112 | 0.222875 | 2,894 | 98 | 76 | 29.530612 | 0.878168 | 0.054941 | 0 | 0.242424 | 0 | 0 | 0.018018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.19697 | false | 0.181818 | 0.121212 | 0 | 0.393939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
357adb90337719d5723ab2cf058c01616c052b6e | 806 | py | Python | course/views.py | author31/HongsBlog | a94dc56a05062b5b2bab3f28f84b7ede1ae44bf8 | [
"MIT"
] | null | null | null | course/views.py | author31/HongsBlog | a94dc56a05062b5b2bab3f28f84b7ede1ae44bf8 | [
"MIT"
] | null | null | null | course/views.py | author31/HongsBlog | a94dc56a05062b5b2bab3f28f84b7ede1ae44bf8 | [
"MIT"
] | null | null | null | from typing import List
from django.shortcuts import render
from django.views.generic.detail import DetailView
from django.views.generic.list import ListView
from assignment.models import Assignment
from course.models import Course
class CourseListView(ListView):
template_name = 'course/course_list.html'
model = Course
context_object_name = 'course'
class CourseDetailView(DetailView):
template_name = 'course/course_detail.html'
model = Course
context_object_name = 'course'
def get(self, request, *args, **kwargs):
self.pk = kwargs["pk"]
return super().get(request, *args, **kwargs)
def get_context_data(self, **kwargs):
kwargs["assignment"] = Assignment.objects.filter(course__id=self.pk)
return super().get_context_data(**kwargs)
| 29.851852 | 76 | 0.729529 | 100 | 806 | 5.74 | 0.37 | 0.069686 | 0.052265 | 0.076655 | 0.132404 | 0.132404 | 0.132404 | 0 | 0 | 0 | 0 | 0 | 0.168734 | 806 | 26 | 77 | 31 | 0.856716 | 0 | 0 | 0.2 | 0 | 0 | 0.089441 | 0.059627 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.9 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
357cffec267dd0668f1c68a283ee49efc4b0ead9 | 3,077 | py | Python | datastructures/binarytree.py | tkaleas/python-sandbox | 37ebe92c5f89300e27803118259d16f62d67f612 | [
"MIT"
] | null | null | null | datastructures/binarytree.py | tkaleas/python-sandbox | 37ebe92c5f89300e27803118259d16f62d67f612 | [
"MIT"
] | null | null | null | datastructures/binarytree.py | tkaleas/python-sandbox | 37ebe92c5f89300e27803118259d16f62d67f612 | [
"MIT"
] | null | null | null | class Node(object):
def __init__(self, value):
self.value = value
self.left = None
self.right = None
#Binary Tree
class BinaryTree(object):
def __init__(self, root):
self.root = Node(root)
def search(self, find_val):
"""Return True if the value
is in the tree, return
False otherwise."""
return self.preorder_search(self.root, find_val)
    def print_tree(self):
        """Return the values of all tree nodes,
        joined with '-', as they are visited
        in a pre-order traversal."""
        return self.preorder_print(self.root, "")[:-1]
def preorder_search(self, start, find_val):
"""Helper method - use this to create a
recursive search solution."""
if start:
hasVal = False
if start.value == find_val:
hasVal = True
return hasVal or self.preorder_search(start.left, find_val) or self.preorder_search(start.right, find_val)
return False
def preorder_print(self, start, traversal):
"""Helper method - use this to create a
recursive print solution."""
if start:
traversal += str(start.value) + "-"
traversal = self.preorder_print(start.left, traversal)
traversal = self.preorder_print(start.right, traversal)
return traversal
# Binary Search Tree
class BST(object):
def __init__(self, root):
self.root = Node(root)
def insert(self, new_val):
self.insert_helper(self.root, new_val)
def search(self, find_val):
return self.search_helper(self.root, find_val)
def search_helper(self, start, find_val):
if start.value == find_val:
return True
elif find_val < start.value:
if start.left:
return self.search_helper(start.left, find_val)
elif find_val > start.value:
if start.right:
return self.search_helper(start.right, find_val)
return False
def insert_helper(self, start, new_val):
if start.value == new_val:
return
if new_val > start.value:
if start.right:
self.insert_helper(start.right, new_val)
else:
start.right = Node(new_val)
if new_val < start.value:
if start.left:
self.insert_helper(start.left, new_val)
else:
start.left = Node(new_val)
return
def print_tree(self):
"""Print out all tree nodes
as they are visited in
a pre-order traversal."""
return self.preorder_print(self.root,"")[:-1]
def preorder_print(self, start, traversal):
"""Helper method - use this to create a
recursive print solution."""
if start:
traversal += str(start.value) + "-"
traversal = self.preorder_print(start.left, traversal)
traversal = self.preorder_print(start.right, traversal)
return traversal
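# A small usage sketch (illustrative, not part of the original exercise file);
# the values below are arbitrary.
tree = BinaryTree(1)
tree.root.left = Node(2)
tree.root.right = Node(3)
print(tree.search(3))     # True
print(tree.print_tree())  # 1-2-3

bst = BST(4)
bst.insert(2)
bst.insert(14)
print(bst.search(14))     # True
print(bst.search(6))      # False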
| 32.734043 | 118 | 0.577836 | 371 | 3,077 | 4.636119 | 0.150943 | 0.056977 | 0.059302 | 0.060465 | 0.678488 | 0.582558 | 0.543605 | 0.447674 | 0.426163 | 0.426163 | 0 | 0.00097 | 0.329867 | 3,077 | 94 | 119 | 32.734043 | 0.833172 | 0.139747 | 0 | 0.546875 | 0 | 0 | 0.000786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.203125 | false | 0 | 0 | 0.015625 | 0.46875 | 0.15625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
357d8ee4029bbe48236c823d9888079f0ce3ef3f | 4,203 | py | Python | cs15211/StoneGame.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2021-07-05T01:53:30.000Z | 2021-07-05T01:53:30.000Z | cs15211/StoneGame.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | null | null | null | cs15211/StoneGame.py | JulyKikuAkita/PythonPrac | 0ba027d9b8bc7c80bc89ce2da3543ce7a49a403c | [
"Apache-2.0"
] | 1 | 2018-01-08T07:14:08.000Z | 2018-01-08T07:14:08.000Z | __source__ = 'https://leetcode.com/problems/stone-game/'
# Time: O()
# Space: O()
#
# Description: Leetcode # 877. Stone Game
#
# Alex and Lee play a game with piles of stones.
# There are an even number of piles arranged in a row,
# and each pile has a positive integer number of stones piles[i].
#
# The objective of the game is to end with the most stones.
# The total number of stones is odd, so there are no ties.
#
# Alex and Lee take turns, with Alex starting first.
# Each turn, a player takes the entire pile of stones from either the beginning
# or the end of the row. This continues until there are no more piles left,
# at which point the person with the most stones wins.
#
# Assuming Alex and Lee play optimally, return True if and only if Alex wins the game.
#
#
#
# Example 1:
#
# Input: [5,3,4,5]
# Output: true
# Explanation:
# Alex starts first, and can only take the first 5 or the last 5.
# Say he takes the first 5, so that the row becomes [3, 4, 5].
# If Lee takes 3, then the board is [4, 5], and Alex takes 5 to win with 10 points.
# If Lee takes the last 5, then the board is [3, 4], and Alex takes 4 to win with 9 points.
# This demonstrated that taking the first 5 was a winning move for Alex, so we return true.
#
#
# Note:
#
# 2 <= piles.length <= 500
# piles.length is even.
# 1 <= piles[i] <= 500
# sum(piles) is odd.
#
import unittest


class Solution(object):
    def stoneGame(self, piles):
        """
        :type piles: List[int]
        :rtype: bool
        """
        return True


class SolutionDP(object):
    def stoneGame(self, piles):
        """
        :type piles: List[int]
        :rtype: bool
        """
        n = len(piles)
        dp = [[0] * n for _ in range(n)]
        for i in range(n):
            dp[i][i] = piles[i]
        for l in range(2, n + 1):
            for i in range(n - l + 1):
                j = i + l - 1
                # dp[i][j]: best score margin the current player can secure on piles[i..j]
                dp[i][j] = max(piles[i] - dp[i + 1][j], piles[j] - dp[i][j - 1])
        return dp[0][n - 1] > 0


class TestMethods(unittest.TestCase):
    def test_Local(self):
        self.assertEqual(1, 1)


if __name__ == '__main__':
    unittest.main()
Java = '''
# Thought: https://leetcode.com/problems/stone-game/solution/

Approach 1: Dynamic Programming
Complexity Analysis
Time Complexity: O(N^2), where N is the number of piles.
Space Complexity: O(N^2), the space used storing the intermediate results of each subgame.

# 10ms 36.14%
class Solution {
    public boolean stoneGame(int[] piles) {
        int N = piles.length;
        // dp[i+1][j+1] = the value of the game [piles[i], ..., piles[j]].
        int[][] dp = new int[N+2][N+2];
        for (int size = 1; size <= N; ++size) {
            for (int i = 0; i + size <= N; ++i) {
                int j = i + size - 1;
                int parity = (j + i + N) % 2; // j - i - N; but +x = -x (mod 2)
                if (parity == 1) {
                    dp[i + 1][j + 1] = Math.max(piles[i] + dp[i + 2][j + 1], piles[j] + dp[i + 1][j]);
                } else {
                    dp[i + 1][j + 1] = Math.min(-piles[i] + dp[i + 2][j + 1], -piles[j] + dp[i + 1][j]);
                }
            }
        }
        return dp[1][N] > 0;
    }
}

Approach 2: Mathematical
Complexity Analysis
Time and Space Complexity: O(1)

# 3ms 53.69%
class Solution {
    public boolean stoneGame(int[] piles) {
        return true;
    }
}

# 2ms 99.64%
class Solution {
    public boolean stoneGame(int[] piles) {
        int left = 0;
        int right = piles.length - 1;
        int alex = 0;
        int lee = 0;
        // note: alexTurn is never toggled below; the greedy tally still
        // returns true only because Alex always wins this game
        boolean alexTurn = true;
        while (left < right) {
            if (alexTurn) {
                if (piles[left] > piles[right]) {
                    alex += piles[left];
                    left++;
                } else {
                    alex += piles[right];
                    right--;
                }
            } else {
                if (piles[left] > piles[right]) {
                    lee += piles[left];
                    left++;
                } else {
                    lee += piles[right];
                    right--;
                }
            }
        }
        return alex > lee;
    }
}
'''
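# Sanity check of the DP solution against the worked example from the problem
# statement (illustrative; not part of the original solution file):
assert SolutionDP().stoneGame([5, 3, 4, 5]) is True  # Alex wins the example
assert Solution().stoneGame([5, 3, 4, 5]) is True    # the O(1) answer agrees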
| 28.02 | 103 | 0.524863 | 604 | 4,203 | 3.629139 | 0.288079 | 0.015055 | 0.010949 | 0.013686 | 0.208942 | 0.169252 | 0.130018 | 0.110401 | 0.068431 | 0.068431 | 0 | 0.033781 | 0.344992 | 4,203 | 149 | 104 | 28.208054 | 0.762441 | 0.306448 | 0 | 0.229885 | 0 | 0.045977 | 0.745021 | 0 | 0 | 0 | 0 | 0 | 0.011494 | 1 | 0.034483 | false | 0 | 0.011494 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3580b7cb753fcaa31d0c440e5b6586620bfd111a | 745 | py | Python | assignment_solutions/6/is_all_upper.py | dannymeijer/level-up-with-python | 1bd1169aafd0fdc124984c30edc7f0153626cf06 | [
"MIT"
] | null | null | null | assignment_solutions/6/is_all_upper.py | dannymeijer/level-up-with-python | 1bd1169aafd0fdc124984c30edc7f0153626cf06 | [
"MIT"
] | null | null | null | assignment_solutions/6/is_all_upper.py | dannymeijer/level-up-with-python | 1bd1169aafd0fdc124984c30edc7f0153626cf06 | [
"MIT"
] | null | null | null | import re
only_letters = re.compile("[a-zA-Z]")
def is_all_upper(text: str) -> bool:
    # check if text has actual content
    has_no_content = len(only_letters.findall(text)) == 0
    return False if has_no_content else text.upper() == text


if __name__ == '__main__':
    print("Example:")
    print(is_all_upper('ALL UPPER'))

    # These "asserts" are used for self-checking and not for an auto-testing
    assert is_all_upper('ALL UPPER') is True
    assert is_all_upper('all lower') is False
    assert is_all_upper('mixed UPPER and lower') is False
    assert is_all_upper('') is False
    assert is_all_upper(' ') is False
    assert is_all_upper('123') is False
    print("Coding complete? Click 'Check' to earn cool rewards!")
| 29.8 | 76 | 0.689933 | 119 | 745 | 4.067227 | 0.462185 | 0.165289 | 0.165289 | 0.198347 | 0.336777 | 0.210744 | 0.210744 | 0.142562 | 0.142562 | 0.142562 | 0 | 0.0067 | 0.198658 | 745 | 24 | 77 | 31.041667 | 0.80402 | 0.138255 | 0 | 0.133333 | 0 | 0 | 0.203443 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3589c234fc1a0fe7e6d360402ae2ceaf2a97c3d8 | 726 | py | Python | django_analyses/filters/output/output_definition.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 1 | 2020-12-30T12:43:34.000Z | 2020-12-30T12:43:34.000Z | django_analyses/filters/output/output_definition.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 59 | 2019-12-25T13:14:56.000Z | 2021-07-22T12:24:46.000Z | django_analyses/filters/output/output_definition.py | TheLabbingProject/django_analyses | 08cac40a32754a265b37524f08ec6160c69ebea8 | [
"Apache-2.0"
] | 2 | 2020-05-24T06:44:27.000Z | 2020-07-09T15:47:31.000Z | """
Definition of an
:class:`~django_analyses.filters.output.output_definition.OutputDefinitionFilter`
for the :class:`~django_analyses.models.output.definitions.OutputDefinition`
model.
"""
from django_analyses.models.output.definitions.output_definition import \
    OutputDefinition
from django_filters import rest_framework as filters


class OutputDefinitionFilter(filters.FilterSet):
    """
    Provides useful filtering options for the
    :class:`~django_analyses.models.output.definitions.output_definition.OutputDefinition`
    model.
    """

    output_specification = filters.AllValuesFilter("specification_set")

    class Meta:
        model = OutputDefinition
        fields = "key", "output_specification"
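# A minimal sketch of how this FilterSet might be attached to a DRF viewset.
# The viewset name is hypothetical and a serializer_class is omitted for
# brevity; only the filter wiring is the point here.
#
# from django_filters.rest_framework import DjangoFilterBackend
# from rest_framework import viewsets
#
# class OutputDefinitionViewSet(viewsets.ReadOnlyModelViewSet):
#     queryset = OutputDefinition.objects.all()
#     filter_backends = (DjangoFilterBackend,)
#     filterset_class = OutputDefinitionFilter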
| 27.923077 | 90 | 0.774105 | 73 | 726 | 7.534247 | 0.424658 | 0.101818 | 0.103636 | 0.141818 | 0.3 | 0.3 | 0.3 | 0.174545 | 0 | 0 | 0 | 0 | 0.136364 | 726 | 25 | 91 | 29.04 | 0.877193 | 0.438017 | 0 | 0 | 0 | 0 | 0.106383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
358ee60cb29f177fb65f6050d60d87a71d7179ec | 4,801 | py | Python | parser.py | PouletFreak/mailparser | 6877b879cbaaccb5e00491726ead740a42922ae3 | [
"MIT"
] | 1 | 2019-07-02T02:05:07.000Z | 2019-07-02T02:05:07.000Z | parser.py | PouletFreak/mailparser | 6877b879cbaaccb5e00491726ead740a42922ae3 | [
"MIT"
] | null | null | null | parser.py | PouletFreak/mailparser | 6877b879cbaaccb5e00491726ead740a42922ae3 | [
"MIT"
] | null | null | null | import email, json, os, re
import magic
import ssdeep
import hashlib
import datetime
def md5(fname):
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()


def sha1(fname):
    hash_sha1 = hashlib.sha1()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_sha1.update(chunk)
    return hash_sha1.hexdigest()


def sha256(fname):
    hash_sha256 = hashlib.sha256()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_sha256.update(chunk)
    return hash_sha256.hexdigest()


def sha512(fname):
    hash_sha512 = hashlib.sha512()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash_sha512.update(chunk)
    return hash_sha512.hexdigest()


def main():
    file = '31a891f9e074c81b4688ac5b9faac9c1e3786a20'
    f = open(file, 'r')
    msg = email.message_from_file(f)

    message_json = {}
    message_json['parsedate'] = str(datetime.datetime.now())
    message_json['filename'] = file
    message_json['md5'] = md5(file)
    message_json['sha1'] = sha1(file)
    message_json['sha512'] = sha512(file)
    message_json['sha256'] = sha256(file)

    detach_dir = './' + message_json['filename'][0:10]
    if not os.path.exists(detach_dir):
        os.makedirs(detach_dir)

    scan_json = {}
    scan_json['Date'] = msg['Date']
    scan_json['From'] = msg['From']
    scan_json['Subject'] = msg['Subject']
    scan_json['To'] = msg['To']
    scan_json['Cc'] = msg['Cc']
    scan_json['Bcc'] = msg['Bcc']
    scan_json['References'] = msg['References']
    scan_json['body'] = ''
    scan_json['body_html'] = ''
    scan_json['xml'] = ''
    scan_json['email_addresses'] = []
    scan_json['ip_addresses'] = []
    scan_json['attachments'] = []
    message_json['scan'] = scan_json

    attachment = {}
    for part in msg.walk():
        application_pattern = re.compile('application/*')
        image_pattern = re.compile('image/*')
        audio_pattern = re.compile('audio/*')
        video_pattern = re.compile('video/*')
        content_type = part.get_content_type()
        if content_type == 'text/plain':
            ''' Fills the main email part into the JSON Object and searches for valid email and ip addresses '''
            mainpart = part.get_payload()
            scan_json['body'] += mainpart
            mail_matches = re.findall(r'[\w\.-]+@[\w\.-]+', mainpart)  # finds mail addresses in text
            for match in mail_matches:
                if match not in scan_json['email_addresses']:
                    scan_json['email_addresses'].append(match)
            ip_matches = re.findall(r'[0-9]+(?:\.[0-9]+){3}', mainpart)  # finds IP addresses in text
            for match in ip_matches:
                scan_json['ip_addresses'].append(match)
        if content_type == 'text/html':
            scan_json['body_html'] += part.get_payload()
        if content_type == 'text/xml':
            scan_json['xml'] += part.get_payload()
        if re.match(image_pattern, content_type) \
                or re.match(application_pattern, content_type) \
                or re.match(audio_pattern, content_type) \
                or re.match(video_pattern, content_type):
            filename = part.get_filename()
            counter = 1
            if not filename:
                filename = 'part-%03d%s' % (counter, 'bin')
                counter += 1
            att_path = os.path.join(detach_dir, filename)
            print(att_path)
            attachment['filepath'] = att_path  # TODO: get this working
            attachment['filename'] = filename
            attachment['Type'] = content_type
            if not os.path.isfile(att_path):
                fp = open(att_path, 'wb')
                fp.write(part.get_payload(decode=True))
                fp.close()
            attachment['size'] = os.path.getsize(att_path)
            attachment['magic'] = magic.from_file(att_path, mime=True)
            try:
                attachment['ssdeep'] = ssdeep.hash_from_file(att_path)
            except:
                pass
            attachment['md5'] = md5(att_path)
            attachment['sha1'] = sha1(att_path)
            attachment['sha512'] = sha512(att_path)
            attachment['sha256'] = sha256(att_path)
            scan_json['attachments'].append(attachment)
            attachment = {}
    try:
        json_data = json.dumps(message_json, indent=4, sort_keys=True)
    except UnicodeDecodeError:
        json_data = json.dumps(message_json, indent=4, sort_keys=True, ensure_ascii=False)
    print(json_data)


if __name__ == '__main__':
    main() | 32.006667 | 112 | 0.587378 | 585 | 4,801 | 4.623932 | 0.230769 | 0.065065 | 0.031423 | 0.022181 | 0.182255 | 0.182255 | 0.111645 | 0.111645 | 0.111645 | 0.111645 | 0 | 0.036526 | 0.275776 | 4,801 | 150 | 113 | 32.006667 | 0.741444 | 0.016455 | 0 | 0.103448 | 0 | 0 | 0.100216 | 0.013203 | 0 | 0 | 0 | 0.006667 | 0 | 0 | null | null | 0.008621 | 0.043103 | null | null | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
358f891b1298dda3ec2ff6f47a9bf5842305d9ac | 5,937 | py | Python | analysis_vis/scripts/CovarEpi.py | arubenstein/deep_seq | 96c2bc131dc3bd3afb05486bfbc6f7297c57e604 | [
"BSD-2-Clause"
] | null | null | null | analysis_vis/scripts/CovarEpi.py | arubenstein/deep_seq | 96c2bc131dc3bd3afb05486bfbc6f7297c57e604 | [
"BSD-2-Clause"
] | null | null | null | analysis_vis/scripts/CovarEpi.py | arubenstein/deep_seq | 96c2bc131dc3bd3afb05486bfbc6f7297c57e604 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
"""Create edges and nodes from a list of sequences that are a given hamming distance apart"""
import itertools
import sys
import operator
import numpy as np
import argparse
from general_seq import conv
from general_seq import seq_IO
from plot import conv as pconv
import matplotlib.pyplot as plt
import math
import matplotlib
def shiftedColorMap(cmap, start=0, midpoint=0.5, stop=1.0, name='shiftedcmap'):
    '''
    Function to offset the "center" of a colormap. Useful for
    data with a negative min and positive max when you want the
    middle of the colormap's dynamic range to be at zero.

    Input
    -----
    cmap : The matplotlib colormap to be altered
    start : Offset from lowest point in the colormap's range.
        Defaults to 0.0 (no lower offset). Should be between
        0.0 and `midpoint`.
    midpoint : The new center of the colormap. Defaults to
        0.5 (no shift). Should be between 0.0 and 1.0. In
        general, this should be 1 - vmax/(vmax + abs(vmin)).
        For example, if your data range from -15.0 to +5.0 and
        you want the center of the colormap at 0.0, `midpoint`
        should be set to 1 - 5/(5 + 15) or 0.75.
    stop : Offset from highest point in the colormap's range.
        Defaults to 1.0 (no upper offset). Should be between
        `midpoint` and 1.0.
    '''
    cdict = {
        'red': [],
        'green': [],
        'blue': [],
        'alpha': []
    }

    # regular index to compute the colors
    reg_index = np.linspace(start, stop, 257)

    # shifted index to match the data
    shift_index = np.hstack([
        np.linspace(0.0, midpoint, 128, endpoint=False),
        np.linspace(midpoint, 1.0, 129, endpoint=True)
    ])

    for ri, si in zip(reg_index, shift_index):
        r, g, b, a = cmap(ri)
        cdict['red'].append((si, r, r))
        cdict['green'].append((si, g, g))
        cdict['blue'].append((si, b, b))
        cdict['alpha'].append((si, a, a))

    newcmap = matplotlib.colors.LinearSegmentedColormap(name, cdict)
    plt.register_cmap(cmap=newcmap)

    return newcmap
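# Worked example of the midpoint formula above (illustrative, not used by main):
# for data in [-15, 5], midpoint = 1 - vmax / (vmax + abs(vmin)) = 1 - 5/(5 + 15) = 0.75
# centered = shiftedColorMap(matplotlib.cm.bwr, midpoint=0.75, name='bwr_centered')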
def plot_heatmap(ax, data, colormap, ticks, labels, xlabel, ylabel, title, vmin, vmax):
    CS = ax.pcolor(data, cmap=colormap, vmin=vmin, vmax=vmax)
    ax.set_xticklabels('')
    ax.set_yticklabels('')
    ax.set_xticks(ticks, minor=True)
    ax.set_yticks(ticks, minor=True)
    ax.set_xticklabels(labels, minor=True)
    ax.set_yticklabels(labels, minor=True)
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_title(title)
    ax.xaxis.set_ticks_position('none')
    return CS
def main(sequence_file):
    sequences = seq_IO.read_sequences(sequence_file)
    n_char = len(sequences[0])

    fig, axarr = pconv.create_ax(6, 2, shx=False, shy=False)
    fig2, axarr2 = pconv.create_ax(1, 1, shx=False, shy=False)

    ticks = [i + 0.5 for i in np.arange(0, 20)]
    # aa_string = 'DEKRHNQYCGSTAMILVFWP'
    aa_string = 'ACDEFGHIKLMNPQRSTVWY'

    maxes = []
    mins = []
    full_data = []
    positions = []
    full_data_flat = []

    shrunk_cmap = shiftedColorMap(matplotlib.cm.bwr, start=0.25, midpoint=0.5, stop=0.75, name='shrunk')

    for ind, (pos1, pos2) in enumerate(list(itertools.combinations(range(0, 5), 2))):
        # print(pos1, pos2, conv.covar_MI(sequences, pos1, pos2))
        data = np.zeros((20, 20))
        for ind1, aa1 in enumerate(aa_string):
            for ind2, aa2 in enumerate(aa_string):
                data[ind1, ind2] = conv.calc_epi_log(sequences, pos1, pos2, aa1, aa2)
        avg_pos1 = np.sum(data, axis=1)  # should check once more that this is the correct axis
        avg_pos2 = np.sum(data, axis=0)
        # I'm sure there is a cool numpy way to do this but I don't have time for it right now
        for ind1 in range(0, 20):  # was xrange (Python 2); range works in Python 3
            for ind2 in range(0, 20):
                p = (avg_pos1[ind1] + avg_pos2[ind2] - data[ind1, ind2]) / 19  # n-1 = 19
                p = p if p > 0.05 else 0.05  # min 0.05 for rcw
                data[ind1, ind2] = data[ind1, ind2] / p  # rcw
        maxes.append(np.amax(data))
        mins.append(np.amin(data))
        full_data.append(data)
        positions.append((pos1, pos2))
        full_data_flat.extend(data.flatten())

    perc = np.percentile(full_data_flat, 99.9)

    for ind, (data, (pos1, pos2)) in enumerate(zip(full_data, positions)):
        if pos1 == 2 and pos2 == 3:
            CS2 = plot_heatmap(axarr2[0, 0], data, shrunk_cmap, ticks, list(aa_string), "position {0}".format(pos2 + 1), "position {0}".format(pos1 + 1), "", vmin=-1.0 * perc, vmax=perc)
        y_ind = ind % 5
        x_ind = math.floor(ind / 5)
        CS = plot_heatmap(axarr[x_ind, y_ind], data, shrunk_cmap, ticks, list(aa_string), "position {0}".format(pos2 + 1), "position {0}".format(pos1 + 1), "MI: {0:.4f}".format(conv.covar_MI(sequences, pos1, pos2)), vmin=-1.0 * perc, vmax=perc)

    average_data = np.mean(full_data, axis=0)
    max_data = np.max(full_data, axis=0)

    CS = plot_heatmap(axarr[0, 5], average_data, shrunk_cmap, ticks, list(aa_string), "", "", "Averages", vmin=-1.0 * perc, vmax=perc)
    CS = plot_heatmap(axarr[1, 5], max_data, shrunk_cmap, ticks, list(aa_string), "", "", "Maximums", vmin=-1.0 * perc, vmax=perc)

    fig.subplots_adjust(right=0.8)
    cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
    fig2.subplots_adjust(right=0.8)
    cbar_ax2 = fig2.add_axes([0.85, 0.15, 0.05, 0.7])
    plt.colorbar(CS, cax=cbar_ax)
    plt.colorbar(CS2, cax=cbar_ax2)

    pconv.save_fig(fig, sequence_file, "heatmap", 18, 6, tight=False, size=7)
    pconv.save_fig(fig2, sequence_file, "heatmap3_4", 4, 4, tight=False, size=10)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('--sequence_file', '-d', help="text file which contains sequences")
    args = parser.parse_args()

    main(args.sequence_file)
| 36.648148 | 243 | 0.640391 | 926 | 5,937 | 3.993521 | 0.291577 | 0.004868 | 0.011898 | 0.020552 | 0.173607 | 0.156842 | 0.083288 | 0.066522 | 0.048134 | 0.048134 | 0 | 0.046845 | 0.223345 | 5,937 | 161 | 244 | 36.875776 | 0.755151 | 0.056763 | 0 | 0.019802 | 0 | 0 | 0.051395 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.108911 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
3598073bb8b0c52a37225a3d3dc812d2999277d1 | 20,289 | py | Python | backend/hqlib/domain/measurement/metric.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 25 | 2016-11-25T10:41:24.000Z | 2021-07-03T14:02:49.000Z | backend/hqlib/domain/measurement/metric.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 783 | 2016-09-19T12:10:21.000Z | 2021-01-04T20:39:15.000Z | backend/hqlib/domain/measurement/metric.py | ICTU/quality-report | f6234e112228ee7cfe6476c2d709fe244579bcfe | [
"Apache-2.0"
] | 15 | 2015-03-25T13:52:49.000Z | 2021-03-08T17:17:56.000Z | """
Copyright 2012-2019 Ministerie van Sociale Zaken en Werkgelegenheid
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from typing import cast, Dict, List, Optional, Type, Tuple, TYPE_CHECKING
import json
import re
import datetime
import functools
import logging
from hqlib import utils
from hqlib.typing import MetricParameters, MetricValue, DateTime, Number
from .metric_source import MetricSource
from .target import AdaptedTarget
if TYPE_CHECKING:  # pragma: no cover
    from ..software_development.project import Project  # pylint: disable=unused-import
class ExtraInfo(object):
    """ The class represents extra metric information structure, that is serialized to extra_info json tag."""

    def __init__(self, **kwargs):
        """ Class is initialized with column keys and header texts."""
        self.headers = kwargs
        self.title = None
        self.data = []

    def __add__(self, *args):
        """ Adds data rows to the extra_info table, matching arguments by position to the column keys."""
        item = args[0] if isinstance(args[0], tuple) else args
        dictionary_length = len(self.headers)
        for i in range(len(item) // dictionary_length):
            self.data.append(dict(zip(self.headers.keys(), item[dictionary_length * i:dictionary_length * (i + 1)])))
        return self
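# Brief illustration (not from the source) of how __add__ matches positional
# values to the column keys, consuming them in chunks of len(headers):
#
# info = ExtraInfo(metric="Metric", value="Value")
# info.title = "Example extra info"
# info += ("coverage", 87)           # one row: {"metric": "coverage", "value": 87}
# info += ("size", 1200, "age", 3)   # two rows, split into chunks of two values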
class Metric(object):
    """ Base class for metrics. """

    name: str = 'Subclass responsibility'
    template = '{name} heeft {value} {unit}.'
    norm_template: str = 'Subclass responsibility'
    unit: str = 'Subclass responsibility'  # Unit in plural, e.g. "lines of code"
    target_value: MetricValue = 'Subclass responsibility'
    low_target_value: MetricValue = 'Subclass responsibility'
    perfect_value: MetricValue = 'Subclass responsibility'
    missing_template: str = 'De {metric} van {name} kon niet gemeten worden omdat niet alle benodigde bronnen ' \
                            'beschikbaar zijn.'
    missing_source_template: str = 'De {metric} van {name} kon niet gemeten worden omdat de bron ' \
                                   '{metric_source_class} niet is geconfigureerd.'
    missing_source_id_template: str = 'De {metric} van {name} kon niet gemeten worden omdat niet alle benodigde ' \
                                      'bron-ids zijn geconfigureerd. Configureer ids voor de bron ' \
                                      '{metric_source_class}.'
    perfect_template: str = ''
    url_label_text: str = ''
    comment_url_label_text: str = ''
    metric_source_class: Type[MetricSource] = None
    extra_info_headers: Dict[str, str] = None

    def __init__(self, subject=None, project: 'Project' = None) -> None:
        self._subject = subject
        self._project = project
        for source in self._project.metric_sources(self.metric_source_class):
            try:
                source_id = self._subject.metric_source_id(source)
            except AttributeError:
                continue
            if source_id:
                self._metric_source = source
                self._metric_source_id, self._display_url = self.__separate_metric_source_links(source_id)
                break
        else:
            if self.metric_source_class:
                logging.warning("Couldn't find metric source of class %s for %s", self.metric_source_class.__name__,
                                self.stable_id())
            self._metric_source = None
            self._metric_source_id = None
            self._display_url = None
        self.__id_string = self.stable_id()
        self._extra_info_data = list()
        from hqlib import metric_source
        history_sources = self._project.metric_sources(metric_source.History) if self._project else []
        self.__history = cast(metric_source.History, history_sources[0]) if history_sources else None

    def __separate_metric_source_links(self, values) -> tuple:
        if not isinstance(values, list):
            return self.__split_source_and_display(values)
        else:
            source = []
            display = []
            for val in values:
                src, dsp = self.__split_source_and_display(val)
                source.append(src)
                display.append(dsp)
            return source, display

    @staticmethod
    def __split_source_and_display(val) -> tuple:
        return (val['source'], val['display']) if isinstance(val, dict) else (val, val)

    def format_text_with_links(self, text: str) -> str:
        """ Format a text paragraph with additional url. """
        return Metric.format_comment_with_links(text, self.url(), '')

    @staticmethod
    def format_comment_with_links(text: str, url_dict: Dict[str, str],  # pylint: disable=no-self-use
                                  url_label: str) -> str:
        """ Format a text paragraph with optional urls and label for the urls. """
        comment_text = Metric._format_links_in_comment_text(text)
        links = [
            str(utils.format_link_object(href, utils.html_escape(anchor))) for (anchor, href) in list(url_dict.items())
        ]
        if links:
            if url_label:
                url_label += ': '
            comment_text = '{0} [{1}{2}]'.format(comment_text, url_label, ', '.join(sorted(links)))
        return json.dumps(comment_text)[1:-1]  # Strip quotation marks

    @staticmethod
    def _format_links_in_comment_text(text: str) -> str:
        url_pattern = re.compile(r'(?i)\b(http(?:s?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]|'
                                 r'\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|'
                                 r'[^\s`!()\[\]{};:\'".,<>?\xab\xbb\u201c\u201d\u2018\u2019]))')
        return re.sub(url_pattern, r"{'href': '\1', 'text': '\1'}", text.replace('\n', ' '))

    @classmethod
    def norm_template_default_values(cls) -> MetricParameters:
        """ Return the default values for parameters in the norm template. """
        return dict(unit=cls.unit, target=cls.target_value, low_target=cls.low_target_value)

    def is_applicable(self) -> bool:  # pylint: disable=no-self-use
        """ Return whether this metric applies to the specified subject. """
        return True

    @functools.lru_cache(maxsize=1024)
    def normalized_stable_id(self):
        """ Returns stable_id where non-alphanumerics are substituted by _ and codes of other characters are added. """
        return "".join([c if c.isalnum() else "_" for c in self.stable_id()]) + '_' + \
               "".join(['' if c.isalnum() else str(ord(c)) for c in self.stable_id()])

    @functools.lru_cache(maxsize=1024)
    def stable_id(self) -> str:
        """ Return an id that doesn't depend on numbering/order of metrics. """
        stable_id = self.__class__.__name__
        if not isinstance(self._subject, list):
            stable_id += self._subject.name() if self._subject else str(self._subject)
        return stable_id

    def set_id_string(self, id_string: str) -> None:
        """ Set the identification string. This can be set by a client since the identification of a metric may
            depend on the section the metric is reported in. E.g. A-1. """
        self.__id_string = id_string

    def id_string(self) -> str:
        """ Return the identification string of the metric. """
        return self.__id_string

    def target(self) -> MetricValue:
        """ Return the target value for the metric. If the actual value of the
            metric is below the target value, the metric is not green. """
        subject_target = self._subject.target(self.__class__) if hasattr(self._subject, 'target') else None
        return self.target_value if subject_target is None else subject_target

    def low_target(self) -> MetricValue:
        """ Return the low target value for the metric. If the actual value is below the low target value, the metric
            needs immediate action and its status/color is red. """
        subject_low_target = self._subject.low_target(self.__class__) if hasattr(self._subject, 'low_target') else None
        return self.low_target_value if subject_low_target is None else subject_low_target

    def __technical_debt_target(self):
        """ Return the reduced target due to technical debt for the subject. If the subject has technical debt and
            the actual value of the metric is below the technical debt target, the metric is red, else it is grey. """
        try:
            return self._subject.technical_debt_target(self.__class__)
        except AttributeError:
            return None

    @functools.lru_cache(maxsize=8 * 1024)
    def status(self) -> str:
        """ Return the status/color of the metric. """
        for status_string, has_status in [('missing_source', self.__missing_source_configuration),
                                          ('missing', self._missing),
                                          ('grey', self.__has_accepted_technical_debt),
                                          ('red', self._needs_immediate_action),
                                          ('yellow', self._is_below_target),
                                          ('perfect', self.__is_perfect)]:
            if has_status():
                return status_string
        return 'green'

    def status_start_date(self) -> DateTime:
        """ Return since when the metric has the current status. """
        return self.__history.status_start_date(self.stable_id(), self.status()) \
            if self.__history else datetime.datetime.min

    def __has_accepted_technical_debt(self) -> bool:
        """ Return whether the metric is below target but above the accepted technical debt level. """
        technical_debt_target = self.__technical_debt_target()
        if technical_debt_target:
            return self._is_below_target() and self._is_value_better_than(technical_debt_target.target_value())
        return False

    def _missing(self) -> bool:
        """ Return whether the metric source is missing. """
        return self.value() == -1

    def __missing_source_configuration(self) -> bool:
        """ Return whether the metric sources have been completely configured. """
        return self.__missing_source_class() or self.__missing_source_ids()

    def __missing_source_class(self) -> bool:
        """ Return whether a metric source class that needs to be configured for the metric to be measurable is
            available from the project. """
        return not self._project.metric_sources(self.metric_source_class) if self.metric_source_class else False

    def __missing_source_ids(self) -> bool:
        """ Return whether the metric source ids have been configured for the metric source class. """
        return bool(self.metric_source_class) and not self._get_metric_source_ids()

    def _needs_immediate_action(self) -> bool:
        """ Return whether the metric needs immediate action, i.e. its actual value is below its low target value. """
        return not self._is_value_better_than(self.low_target())

    def _is_below_target(self) -> bool:
        """ Return whether the actual value of the metric is below its target value. """
        return not self._is_value_better_than(self.target())

    def __is_perfect(self) -> bool:
        """ Return whether the actual value of the metric equals its perfect value,
            i.e. no further improvement is possible. """
        return self.value() == self.perfect_value

    def value(self) -> MetricValue:
        """ Return the actual value of the metric. """
        raise NotImplementedError

    def _is_value_better_than(self, target: MetricValue) -> bool:
        """ Return whether the actual value of the metric is better than the specified target value. """
        raise NotImplementedError

    def report(self, max_subject_length: int = 200) -> str:
        """ Return the actual value of the metric in the form of a short, mostly one sentence, report. """
        name = self.__subject_name()
        if len(name) > max_subject_length:
            name = name[:max_subject_length] + '...'
        logging.info('Reporting %s on %s', self.__class__.__name__, name)
        return self._get_template().format(**self._parameters())

    def _get_template(self) -> str:
        """ Return the template for the metric report. """
        if self.__missing_source_class():
            return self.missing_source_template
        if self.__missing_source_ids():
            return self.missing_source_id_template
        if self._missing():
            return self.missing_template
        if self.__is_perfect() and self.perfect_template:
            return self.perfect_template
        return self.template

    def _parameters(self) -> MetricParameters:
        """ Return the parameters for the metric report template and for the metric norm template. """
        return dict(name=self.__subject_name(),
                    metric=self.name[0].lower() + self.name[1:],
                    unit=self.unit,
                    target=self.target(),
                    low_target=self.low_target(),
                    value=self.value(),
                    metric_source_class=self.metric_source_class.__name__ if self.metric_source_class
                    else '<metric has no metric source defined>')

    def norm(self) -> str:
        """ Return a description of the norm for the metric. """
        try:
            return self.norm_template.format(**self._parameters())
        except KeyError as reason:
            class_name = self.__class__.__name__
            logging.critical('Key missing in %s parameters (%s) for norm template "%s": %s', class_name,
                             self._parameters(), self.norm_template, reason)
            raise

    def url(self) -> Dict[str, str]:
        """ Return a dictionary of urls for the metric. The key is the anchor, the value the url. """
        label = self._metric_source.metric_source_name if self._metric_source else 'Unknown metric source'
        urls = [url for url in self._metric_source_urls() if url]  # Weed out urls that are empty or None
        if len(urls) == 1:
            return {label: urls[0]}
        return {'{label} ({index}/{count})'.format(label=label, index=index, count=len(urls)): url
                for index, url in enumerate(urls, start=1)}

    def _metric_source_urls(self) -> List[str]:
        """ Return a list of metric source urls to be used to create the url dict. """
        if self._metric_source:
            if self._get_display_urls():
                return self._metric_source.metric_source_urls(*self._get_display_urls())
            return [self._metric_source.url()]
        return []

    def _get_display_urls(self) -> List[str]:
        ids = self._display_url if isinstance(self._display_url, list) else [self._display_url]
        return [id_ for id_ in ids if id_]

    def _get_metric_source_ids(self) -> List[str]:
        """ Allow for subclasses to override what the metric source id is. """
        ids = self._metric_source_id if isinstance(self._metric_source_id, list) else [self._metric_source_id]
        return [id_ for id_ in ids if id_]

    def comment(self) -> str:
        """ Return a comment on the metric. The comment is retrieved from either the technical debt or the subject. """
        comments = [comment for comment in (self.__non_default_target_comment(), self.__technical_debt_comment(),
                                            self.__subject_comment()) if comment]
        return ' '.join(comments)

    def __subject_comment(self) -> str:
        """ Return the comment of the subject about this metric, if any. """
        try:
            return self._subject.metric_options(self.__class__)['comment']
        except (AttributeError, TypeError, KeyError):
            return ''

    def __technical_debt_comment(self) -> str:
        """ Return the comment of the accepted technical debt, if any. """
        td_target = self.__technical_debt_target()
        return td_target.explanation(self.unit) if td_target else ''

    def __non_default_target_comment(self) -> str:
        """ Return a comment about a non-default target, if relevant. """
        return AdaptedTarget(self.low_target(), self.low_target_value).explanation(self.unit)

    def comment_urls(self) -> Dict[str, str]:  # pylint: disable=no-self-use
        """ Return the source for the comment on the metric. """
        return dict()

    def __history_records(self, method: callable) -> List[int]:
        history = method(self.stable_id()) if self.__history else []
        return [int(round(float(value))) if value is not None else None for value in history]

    def recent_history(self) -> List[int]:
        """ Return a list of recent values of the metric, to be used in e.g. a spark line graph. """
        return self.__history_records(self.__history.recent_history) if self.__history else []

    def long_history(self) -> List[int]:
        """ Return a long list of values of the metric, to be used in e.g. a spark line graph. """
        return self.__history_records(self.__history.long_history) if self.__history else []

    def get_recent_history_dates(self) -> str:
        """ Return a list of recent dates when report was generated. """
        return self.__history.get_dates() if self.__history else ""

    def get_long_history_dates(self) -> str:
        """ Return a long list of dates when report was generated. """
        return self.__history.get_dates(long_history=True) if self.__history else ""

    def y_axis_range(self) -> Tuple[int, int]:
        """ Return a two-tuple (min, max) for use in graphs. """
        history = [d for d in self.recent_history() if d is not None]
        if not history:
            return 0, 100
        minimum, maximum = min(history), max(history)
        return (minimum - 1, maximum + 1) if minimum == maximum else (minimum, maximum)

    def numerical_value(self) -> Number:
        """ Return a numerical version of the metric value for use in graphs. By default this simply returns the
            regular value, assuming it is already numerical. Metrics that don't have a numerical value by default
            can override this method to convert the non-numerical value into a numerical value. """
        value = self.value()
        if isinstance(value, tuple):
            value = value[0]
        if isinstance(value, (int, float)):
            return value
        raise NotImplementedError

    def extra_info(self) -> Optional[ExtraInfo]:
        """ Method can be overridden by concrete metrics that fill extra info. """
        extra_info = None
        if self._metric_source and self.extra_info_headers:
            url_list = self.extra_info_rows()
            if url_list:
                extra_info = self.__create_extra_info(url_list)
        return extra_info if extra_info is not None and extra_info.data else None

    def extra_info_rows(self) -> List:
        """ Returns rows of extra info table. """
        return self._extra_info_data

    def __create_extra_info(self, url_list):
        extra_info = ExtraInfo(**self.extra_info_headers)
        extra_info.title = self.url_label_text
        for item in url_list:
            extra_info += self.convert_item_to_extra_info(item)
        return extra_info

    @staticmethod
    def convert_item_to_extra_info(item):
        """ Method should transform an item to the form used in extra info. Should be overridden. """
        return item

    def __subject_name(self) -> str:
        """ Return the subject name, or a string representation if the subject has no name. """
        try:
            return self._subject.name()
        except AttributeError:
            return str(self._subject)
| 48.078199 | 119 | 0.642368 | 2,615 | 20,289 | 4.74761 | 0.153346 | 0.045429 | 0.028353 | 0.013532 | 0.244865 | 0.172614 | 0.115505 | 0.094241 | 0.069352 | 0.054531 | 0 | 0.004855 | 0.258958 | 20,289 | 421 | 120 | 48.192399 | 0.820885 | 0.239785 | 0 | 0.074205 | 0 | 0.024735 | 0.069112 | 0.012717 | 0 | 0 | 0 | 0 | 0 | 1 | 0.190813 | false | 0 | 0.042403 | 0.003534 | 0.515901 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
35981b6b41348f376489c82ca46f8c08bcc7ebf0 | 3,564 | py | Python | examples/decrypt.py | joke325/Pyrop | 79669e3a3362180a239cd496513a60007a914e22 | [
"BSD-2-Clause"
] | null | null | null | examples/decrypt.py | joke325/Pyrop | 79669e3a3362180a239cd496513a60007a914e22 | [
"BSD-2-Clause"
] | null | null | null | examples/decrypt.py | joke325/Pyrop | 79669e3a3362180a239cd496513a60007a914e22 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2020 Janky <box@janky.tech>
# All right reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
# OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
# IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
# THE POSSIBILITY OF SUCH DAMAGE.
# Inspired by https://github.com/rnpgp/rnp/blob/master/src/examples/decrypt.c
from pyrop.bind import RopBind
from pyrop.error import RopError
message = "Dummy"
def example_pass_provider(session, app_ctx, key, pgp_context, buf_len):
    if pgp_context == 'decrypt (symmetric)':
        return True, 'encpassword'
    if pgp_context == 'decrypt':
        return True, 'password'
    return False, None


def decrypt(rop, usekeys):
    alt = rop.tagging()
    try:
        # initialize FFI object
        ses = rop.create_session(rop.KEYSTORE_GPG, rop.KEYSTORE_GPG)

        # check whether we want to use key or password for decryption
        if usekeys:
            try:
                # load secret keyring, as it is required for public-key decryption. However, you may
                # need to load public keyring as well to validate key's signatures.
                keyfile = rop.create_input(path="secring.pgp")
                # we may use secret=True and public=True as well
                ses.load_keys(rop.KEYSTORE_GPG, keyfile, secret=True)
            except RopError:
                print("Failed to read secring")
                raise
            finally:
                rop.drop(object_=keyfile)

        # set the password provider
        ses.set_pass_provider(example_pass_provider, None)
        try:
            # create file input and memory output objects for the encrypted message and decrypted
            # message
            input_ = rop.create_input(path="encrypted.asc")
            output = rop.create_output(max_alloc=0)
            ses.decrypt(input_, output)
            # get the decrypted message from the output structure
            buf = output.memory_get_str(False)
        except RopError:
            print("Public-key decryption failed")
            raise
        print("Decrypted message ({}):\n{}\n".format("with key" if usekeys else "with password", buf))
        global message
        message = buf
    finally:
        rop.drop(from_=alt)


def execute():
    rop = RopBind()
    try:
        decrypt(rop, True)
        decrypt(rop, False)
    finally:
        rop.close()


if __name__ == '__main__':
    execute()
| 36.742268 | 100 | 0.672559 | 463 | 3,564 | 5.103672 | 0.466523 | 0.015235 | 0.017774 | 0.019467 | 0.077867 | 0.057554 | 0.057554 | 0.057554 | 0.057554 | 0.057554 | 0 | 0.002635 | 0.254489 | 3,564 | 96 | 101 | 37.125 | 0.886714 | 0.518519 | 0 | 0.23913 | 0 | 0 | 0.108269 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0.108696 | 0.043478 | 0 | 0.173913 | 0.065217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
359a485e3ad209d745beb2991ca78d9f951ff276 | 1,023 | py | Python | src/jomiel_kore/version.py | guendto/jomiel-kore | 7bbb7193baed13d7bb7baacd6cf63b28f5ddf6ac | [
"Apache-2.0"
] | null | null | null | src/jomiel_kore/version.py | guendto/jomiel-kore | 7bbb7193baed13d7bb7baacd6cf63b28f5ddf6ac | [
"Apache-2.0"
] | null | null | null | src/jomiel_kore/version.py | guendto/jomiel-kore | 7bbb7193baed13d7bb7baacd6cf63b28f5ddf6ac | [
"Apache-2.0"
] | null | null | null | #
# jomiel-kore
#
# Copyright
# 2019-2020 Toni Gündoğdu
#
#
# SPDX-License-Identifier: Apache-2.0
#
"""TODO."""
try:  # py38+
    from importlib.metadata import version as metadata_version
    from importlib.metadata import PackageNotFoundError
except ModuleNotFoundError:
    from importlib_metadata import version as metadata_version
    from importlib_metadata import PackageNotFoundError


def package_version(package_name, destination):
    """Returns the package version string

    Args:
        package_name (str): the package name to look up
        destination (list): the list to store the result (tuple) to
    """
    try:
        version = metadata_version(package_name)
    except PackageNotFoundError:
        version = "<unavailable>"

    if package_name == "pyzmq":
        from zmq import zmq_version
        version = "{} (libzmq version {})".format(
            version,
            zmq_version(),
        )

    destination.append((package_name, version))
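# Illustrative usage (assumed caller, not from this module) — collect
# (name, version) tuples, e.g. for a --version banner:
#
# versions = []
# for pkg in ("pyzmq", "some-missing-package"):
#     package_version(pkg, versions)
# # versions might now hold, for example:
# # [("pyzmq", "25.1.2 (libzmq version 4.3.5)"), ("some-missing-package", "<unavailable>")]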
# vim: set ts=4 sw=4 tw=72 expandtab:
| 22.733333 | 67 | 0.675464 | 115 | 1,023 | 5.895652 | 0.486957 | 0.097345 | 0.123894 | 0.159292 | 0.289086 | 0.289086 | 0.289086 | 0.289086 | 0.289086 | 0.289086 | 0 | 0.02046 | 0.235582 | 1,023 | 44 | 68 | 23.25 | 0.846547 | 0.282502 | 0 | 0.111111 | 0 | 0 | 0.057554 | 0 | 0 | 0 | 0 | 0.022727 | 0 | 1 | 0.055556 | false | 0 | 0.277778 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35a50fd2c3fd502485183ee67073c6d3b767aa38 | 14,065 | py | Python | secfs/fs.py | quinnmagendanz/vFileSystem | 9a3c4b1d27a6325325a4f048f6a8fe93e5d871bf | [
"MIT"
] | null | null | null | secfs/fs.py | quinnmagendanz/vFileSystem | 9a3c4b1d27a6325325a4f048f6a8fe93e5d871bf | [
"MIT"
] | null | null | null | secfs/fs.py | quinnmagendanz/vFileSystem | 9a3c4b1d27a6325325a4f048f6a8fe93e5d871bf | [
"MIT"
] | null | null | null | # This file implements file system operations at the level of inodes.
import time
import secfs.crypto
import secfs.tables
import secfs.access
import secfs.store.tree
import secfs.store.block
from secfs.store.inode import Inode
from secfs.store.tree import Directory
from cryptography.fernet import Fernet
from secfs.types import I, Principal, User, Group
# usermap contains a map from user ID to their public key according to /.users
usermap = {}
# groupmap contains a map from group ID to the list of members according to /.groups
groupmap = {}
# owner is the user principal that owns the current share
owner = None
# root_i is the i of the root of the current share
root_i = None
def get_inode(i):
"""
Shortcut for retrieving an inode given its i.
"""
ihash = secfs.tables.resolve(i)
if ihash == None:
raise LookupError("asked to resolve i {}, but i does not exist".format(i))
return Inode.load(ihash)
def init(owner, users, groups):
"""
init will initialize a new share root as the given user principal. This
includes setting up . and .. in the root directory, as well as adding the
.users and .groups files that list trusted user public keys and group
memberships respectively. This function will only allocate the share's
root, but not map it to any particular share at the server. The new root's
i is returned so that this can be done by the caller.
"""
if not isinstance(owner, User):
raise TypeError("{} is not a User, is a {}".format(owner, type(owner)))
node = Inode()
node.kind = 0
node.ex = True
node.ctime = time.time()
node.mtime = node.ctime
ihash = secfs.store.block.store(node.bytes(), None) # inodes not encrypted
root_i = secfs.tables.modmap(owner, I(owner), ihash)
if root_i == None:
raise RuntimeError
new_ihash = secfs.store.tree.add(root_i, b'.', root_i)
secfs.tables.modmap(owner, root_i, new_ihash)
new_ihash = secfs.store.tree.add(root_i, b'..', root_i) # TODO(eforde): why would .. be mapped to root_i?
secfs.tables.modmap(owner, root_i, new_ihash)
print("CREATED ROOT AT", new_ihash)
init = {
b".users": users,
b".groups": groups,
}
import pickle
for fn, c in init.items():
bts = pickle.dumps(c)
node = Inode()
node.kind = 1
node.size = len(bts)
node.mtime = node.ctime
node.ctime = time.time()
node.blocks = [secfs.store.block.store(bts, None)] # don't encrypt init
ihash = secfs.store.block.store(node.bytes(), None) # inodes not encrypted
i = secfs.tables.modmap(owner, I(owner), ihash)
link(owner, i, root_i, fn)
return root_i
def _create(parent_i, name, create_as, create_for, isdir, encrypt):
"""
_create allocates a new file, and links it into the directory at parent_i
with the given name. The new file is owned by create_for, but is created
using the credentials of create_as. This distinction is necessary as a user
principal is needed for the final i when creating a file as a group.
"""
if not isinstance(parent_i, I):
raise TypeError("{} is not an I, is a {}".format(parent_i, type(parent_i)))
if not isinstance(create_as, User):
raise TypeError("{} is not a User, is a {}".format(create_as, type(create_as)))
if not isinstance(create_for, Principal):
raise TypeError("{} is not a Principal, is a {}".format(create_for, type(create_for)))
assert create_as.is_user() # only users can create
assert create_as == create_for or create_for.is_group() # create for yourself or for a group
if create_for.is_group() and create_for not in groupmap:
raise PermissionError("cannot create for unknown group {}".format(create_for))
# This check is performed by link() below, but better to fail fast
if not secfs.access.can_write(create_as, parent_i):
if parent_i.p.is_group():
raise PermissionError("cannot create in group-writeable directory {0} as {1}; user is not in group".format(parent_i, create_as))
else:
raise PermissionError("cannot create in user-writeable directory {0} as {1}".format(parent_i, create_as))
# TODO(eforde): encrypt if parent directory is encrypted
# encrypt = encrypt or parent_i.encrypted
node = Inode()
node.encrypted = 1 if encrypt else 0
node.ctime = time.time()
node.mtime = node.ctime
node.kind = 0 if isdir else 1
node.ex = isdir
# store the newly created inode on the server
new_hash = secfs.store.block.store(node.bytes(), None) # inodes not encrypted
# map the block to an i owned by create_for, created with credentials of create_as
new_i = secfs.tables.modmap(create_as, I(create_for), new_hash)
if isdir:
# create . and .. if this is a directory
table_key = secfs.tables.get_itable_key(create_for, create_as)
new_ihash = secfs.store.tree.add(new_i, b'.', new_i, table_key)
secfs.tables.modmap(create_as, new_i, new_ihash)
new_ihash = secfs.store.tree.add(new_i, b'..', parent_i, table_key)
secfs.tables.modmap(create_as, new_i, new_ihash)
# link the new i into the directoy at parent_i with the given name
link(create_as, new_i, parent_i, name)
return new_i
def create(parent_i, name, create_as, create_for, encrypt):
"""
Create a new file.
See secfs.fs._create
"""
return _create(parent_i, name, create_as, create_for, False, encrypt)
def mkdir(parent_i, name, create_as, create_for, encrypt):
"""
Create a new directory.
See secfs.fs._create
"""
return _create(parent_i, name, create_as, create_for, True, encrypt)
def read(read_as, i, off, size):
"""
Read reads [off:off+size] bytes from the file at i.
"""
if not isinstance(i, I):
raise TypeError("{} is not an I, is a {}".format(i, type(i)))
if not isinstance(read_as, User):
raise TypeError("{} is not a User, is a {}".format(read_as, type(read_as)))
if not secfs.access.can_read(read_as, i):
if i.p.is_group():
raise PermissionError("cannot read from group-readable file {0} as {1}; user is not in group".format(i, read_as))
else:
raise PermissionError("cannot read from user-readable file {0} as {1}".format(i, read_as))
node = get_inode(i)
table_key = secfs.tables.get_itable_key(i.p, read_as)
return node.read(table_key)[off:off+size]
def write(write_as, i, off, buf):
"""
Write writes the given bytes into the file at i at the given offset.
"""
if not isinstance(i, I):
raise TypeError("{} is not an I, is a {}".format(i, type(i)))
if not isinstance(write_as, User):
raise TypeError("{} is not a User, is a {}".format(write_as, type(write_as)))
if not secfs.access.can_write(write_as, i):
if i.p.is_group():
raise PermissionError("cannot write to group-owned file {0} as {1}; user is not in group".format(i, write_as))
else:
raise PermissionError("cannot write to user-owned file {0} as {1}".format(i, write_as))
node = get_inode(i)
table_key = secfs.tables.get_itable_key(i.p, write_as)
# TODO: this is obviously stupid -- should not get rid of blocks that haven't changed
bts = node.read(table_key)
# write also allows us to extend a file
if off + len(buf) > len(bts):
bts = bts[:off] + buf
else:
bts = bts[:off] + buf + bts[off+len(buf):]
# update the inode
node.blocks = [secfs.store.block.store(bts, table_key if node.encrypted else None)]
node.mtime = time.time()
node.size = len(bts)
# put new hash in tree
new_hash = secfs.store.block.store(node.bytes(), None) # inodes not encrypted
secfs.tables.modmap(write_as, i, new_hash)
return len(buf)
def rename(parent_i_old, name_old, parent_i_new, name_new, rename_as):
"""
Rename renames the given file in parent_i_old into parent_i_new as name_new
"""
if not isinstance(parent_i_old, I):
raise TypeError("{} is not an I, is a {}".format(parent_i_old, type(parent_i_old)))
if not isinstance(parent_i_new, I):
raise TypeError("{} is not an I, is a {}".format(parent_i_new, type(parent_i_new)))
if not isinstance(rename_as, User):
raise TypeError("{} is not a User, is a {}".format(rename_as, type(rename_as)))
if not secfs.access.can_write(rename_as, parent_i_new):
raise PermissionError("no permission to rename {} to {} in new directory {}".format(name_old, name_new, parent_i_new))
# Fetch i we're moving
i = secfs.store.tree.find_under(parent_i_old, name_old, rename_as)
# Remove i from old directory
table_key = secfs.tables.get_itable_key(parent_i_old.p, rename_as)
new_ihash = secfs.store.tree.remove(parent_i_old, name_old, table_key)
secfs.tables.modmap(rename_as, parent_i_old, new_ihash)
# Add i to new directory
table_key = secfs.tables.get_itable_key(parent_i_new.p, rename_as)
new_ihash = secfs.store.tree.add(parent_i_new, name_new, i, table_key)
secfs.tables.modmap(rename_as, parent_i_new, new_ihash)
return i
def unlink(parent_i, i, name, remove_as):
"""
Unlink removes the given file from the parent_inode
"""
if not isinstance(parent_i, I):
raise TypeError("{} is not an I, is a {}".format(parent_i, type(parent_i)))
if not isinstance(remove_as, User):
raise TypeError("{} is not a User, is a {}".format(remove_as, type(remove_as)))
assert remove_as.is_user() # only users can create
if not secfs.access.can_write(remove_as, i):
if i.p.is_group():
raise PermissionError("cannot remove group-owned file {0} as {1}; user is not in group".format(i, remove_as))
else:
raise PermissionError("cannot remove user-owned file {0} as {1}".format(i, remove_as))
table_key = secfs.tables.get_itable_key(i.p, remove_as)
new_ihash = secfs.store.tree.remove(parent_i, name, table_key)
secfs.tables.modmap(remove_as, parent_i, new_ihash)
#TODO(magendanz) remove filr and inode from server using secfs.store.blocks
secfs.tables.remove(i)
def rmdir(parent_i, i, name, remove_as):
"""
rmdir removes the given directory from the parent_inode as well as all subfiles
"""
if not isinstance(parent_i, I):
raise TypeError("{} is not an I, is a {}".format(parent_i, type(parent_i)))
if not isinstance(remove_as, User):
raise TypeError("{} is not a User, is a {}".format(remove_as, type(remove_as)))
assert remove_as.is_user() # only users can create
if not secfs.access.can_write(remove_as, i):
if i.p.is_group():
raise PermissionError("cannot remove group-owned file {0} as {1}; user is not in group".format(i, remove_as))
else:
raise PermissionError("cannot remove user-owned file {0} as {1}".format(i, remove_as))
print("Permissions: {} can edit {} owned file".format(remove_as, i))
table_key = secfs.tables.get_itable_key(i.p, remove_as)
# recursive rm of all subfiles/subdirs
inode = get_inode(i)
sub_is = []
# pass to unlink if not dir
if inode.kind == 0:
dr = Directory(i, table_key)
subfiles = [(sub_name, sub_i) for sub_name, sub_i in dr.children if ((sub_name != b'.') and (sub_name != b'..'))]
print("Subfiles to try and rm {}".format(subfiles))
# confirm that can delete all subfiles/subdirs before starting to delete
for child_name, child_i in subfiles:
print("Checking permissions. {} can edit {}".format(remove_as, child_i))
if not secfs.access.can_write(remove_as, child_i):
raise PermissionError("cannot remove group-owned file {0} as {1}; user is not in group".format(child_i, remove_as))
for child_name, child_i in subfiles:
print("Recursing to delete child {}".format(child_name))
sub_is += rmdir(i, child_i, child_name, remove_as)
# TODO(magendanz) do we need to delete . and ..?
new_ihash = secfs.store.tree.remove(parent_i, name, table_key)
#if parent_i.p != remove_as:
# p_i = Group.(ctx.gid)
secfs.tables.modmap(remove_as, parent_i, new_ihash)
#TODO(magendanz) remove file and inode from server using secfs.store.blocks
secfs.tables.remove(i)
sub_is.append(i)
return sub_is
else:
unlink(parent_i, i, name, remove_as)
return [i] # a list, so the recursive "sub_is += rmdir(...)" above concatenates correctly
def readdir(i, off, read_as):
"""
Return a list of the i's in the directory at i.
Each returned list item is a tuple of an i and an index. The index can be
used to request a suffix of the list at a later time.
"""
table_key = secfs.tables.get_itable_key(i.p, read_as)
dr = Directory(i, table_key)
if dr is None:
return None
return [(i, index+1) for index, i in enumerate(dr.children) if index >= off]
def link(link_as, i, parent_i, name):
"""
Adds the given i into the given parent directory under the given name.
"""
if not isinstance(parent_i, I):
raise TypeError("{} is not an I, is a {}".format(parent_i, type(parent_i)))
if not isinstance(i, I):
raise TypeError("{} is not an I, is a {}".format(i, type(i)))
if not isinstance(link_as, User):
raise TypeError("{} is not a User, is a {}".format(link_as, type(link_as)))
if not secfs.access.can_write(link_as, parent_i):
if parent_i.p.is_group():
raise PermissionError("cannot create in group-writeable directory {0} as {1}; user is not in group".format(parent_i, link_as))
else:
raise PermissionError("cannot create in user-writeable directory {0} as {1}".format(parent_i, link_as))
table_key = secfs.tables.get_itable_key(parent_i.p, link_as)
parent_ihash = secfs.store.tree.add(parent_i, name, i, table_key)
secfs.tables.modmap(link_as, parent_i, parent_ihash)
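# A minimal usage sketch, assuming a live SecFS client where `root_i` is the
# I of a directory and `alice` is a User with write permission on it; the
# names below are hypothetical placeholders.
def _example_rename(root_i, alice):
    # moves old.txt to new.txt within the same directory via rename() above
    return rename(root_i, b"old.txt", root_i, b"new.txt", alice)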
| 40.650289 | 140 | 0.664842 | 2,231 | 14,065 | 4.045271 | 0.115195 | 0.047313 | 0.029917 | 0.037895 | 0.531856 | 0.488753 | 0.476565 | 0.441108 | 0.403435 | 0.379612 | 0 | 0.003118 | 0.224671 | 14,065 | 345 | 141 | 40.768116 | 0.824484 | 0.213082 | 0 | 0.363208 | 0 | 0 | 0.136444 | 0 | 0 | 0 | 0 | 0.014493 | 0.018868 | 1 | 0.056604 | false | 0 | 0.051887 | 0 | 0.165094 | 0.023585 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35a90fa7fe750428ce519a6161eec0ec07750701 | 4,463 | py | Python | recognize.py | aerdem4/rock-paper-scissors | 0e520aa53d8cb146a8ab4f5fd1ebd823ffed3a4b | [
"MIT"
] | null | null | null | recognize.py | aerdem4/rock-paper-scissors | 0e520aa53d8cb146a8ab4f5fd1ebd823ffed3a4b | [
"MIT"
] | 1 | 2020-03-02T13:26:05.000Z | 2020-03-02T13:26:05.000Z | recognize.py | aerdem4/rock-paper-scissors | 0e520aa53d8cb146a8ab4f5fd1ebd823ffed3a4b | [
"MIT"
] | null | null | null | import cv2
import numpy as np
from keras.models import load_model
bg = None
def run_avg(image, acc_weight):
global bg
if bg is None:
bg = image.copy().astype("float")
return
cv2.accumulateWeighted(image, bg, acc_weight)
def segment(image, threshold=10):
global bg
diff = cv2.absdiff(bg.astype("uint8"), image)
thresholded = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)[1]
thresholded = cv2.GaussianBlur(thresholded,(5,5),0)
cnts, _ = cv2.findContours(thresholded.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if len(cnts) == 0:
return None
else:
segmented = max(cnts, key=cv2.contourArea)
return (thresholded, segmented)
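# A minimal pipeline sketch, assuming `frames` is a hypothetical iterable of
# grayscale images: feed the first frames to run_avg() to calibrate the
# background model, then segment() the hand on the rest.
def _example_segment_pipeline(frames, calibration_frames=30):
    for idx, gray in enumerate(frames):
        if idx < calibration_frames:
            run_avg(gray, 0.5)  # accumulate the weighted background average
        else:
            hand = segment(gray)  # (thresholded, largest contour) or None
            if hand is not None:
                return hand
    return None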
#-------------------------------------------------------------------------------
# Main function
#-------------------------------------------------------------------------------
if __name__ == "__main__":
model = load_model("model.h5")
# initialize accumulated weight
accumWeight = 0.5
im_count = 0
# get the reference to the webcam
camera = cv2.VideoCapture(0)
x, y, r = 500, 900, 200
# region of interest (ROI) coordinates
top, right, bottom, left = x-r, y-r, x+r, y+r
# initialize num of frames
num_frames = 0
# calibration indicator
calibrated = False
# keep looping, until interrupted
while(True):
# get the current frame
(grabbed, frame) = camera.read()
# flip the frame so that it is not the mirror view
frame = cv2.flip(frame, 1)
# clone the frame
clone = frame.copy()
# get the height and width of the frame
(height, width) = frame.shape[:2]
# get the ROI
roi = frame[top:bottom, right:left]
# convert the roi to grayscale and blur it
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)
# to get the background, keep accumulating frames until the frame-count
# threshold is reached so that our weighted-average model gets calibrated
if num_frames < 30:
run_avg(gray, accumWeight)
if num_frames == 1:
print "[STATUS] please wait! calibrating..."
elif num_frames == 29:
print "[STATUS] calibration successful..."
else:
# segment the hand region
hand = segment(gray)
# check whether hand region is segmented
if hand is not None:
# if yes, unpack the thresholded image and
# segmented region
(thresholded, segmented) = hand
epsilon = 0.01*cv2.arcLength(segmented,True)
segmented = cv2.approxPolyDP(segmented,epsilon,True)
# draw the segmented region and display the frame
convex_hull = cv2.convexHull(segmented)
cv2.rectangle(clone, (left, top), (right, bottom), (0,0,0), thickness=cv2.cv.CV_FILLED)
cv2.drawContours(clone, [convex_hull + (right, top)], -1, (255, 0, 0), thickness=cv2.cv.CV_FILLED)
cv2.drawContours(clone, [segmented + (right, top)], -1, (0, 255, 255), thickness=cv2.cv.CV_FILLED)
preds = model.predict(cv2.resize(clone[top:bottom, right:left], (64, 64)).reshape((-1, 64, 64, 3)))[0]
index = np.argmax(preds)
text = ["rock", "paper", "scissors"][index] + " " + str(round(preds[index], 2))
cv2.putText(clone, text, (right, top - 10), cv2.FONT_HERSHEY_SIMPLEX, 1,(0,0,255),2)
# draw the segmented hand
cv2.rectangle(clone, (left, top), (right, bottom), (0,255,0), 2)
# increment the number of frames
num_frames += 1
# display the frame with segmented hand
cv2.imshow("Video Feed", clone)
# observe the keypress by the user
keypress = cv2.waitKey(1) & 0xFF
# if the user pressed "q", then stop looping
path = None
if keypress == ord("r"):
path = "r" + str(im_count) + ".png"
elif keypress == ord("p"):
path = "p" + str(im_count) + ".png"
elif keypress == ord("s"):
path = "s" + str(im_count) + ".png"
if path is not None:
cv2.imwrite("data/" + path, clone[top:bottom, right:left])
print "saved", path
im_count += 1
# free up memory
camera.release()
cv2.destroyAllWindows()
| 33.556391 | 118 | 0.561058 | 547 | 4,463 | 4.510055 | 0.374771 | 0.014187 | 0.017025 | 0.021889 | 0.11512 | 0.087556 | 0.087556 | 0.064856 | 0.035671 | 0.035671 | 0 | 0.037999 | 0.292404 | 4,463 | 132 | 119 | 33.810606 | 0.743192 | 0.223168 | 0 | 0.053333 | 0 | 0 | 0.04449 | 0 | 0 | 0 | 0.001163 | 0 | 0 | 0 | null | null | 0 | 0.04 | null | null | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35b02597463fbb5f50f5b66d1352a253be0edf70 | 4,708 | py | Python | loggerBot.py | jskrist/channelLogger | 42d5820d29ce9213c823d76dbdc748e288f45eb8 | [
"MIT"
] | null | null | null | loggerBot.py | jskrist/channelLogger | 42d5820d29ce9213c823d76dbdc748e288f45eb8 | [
"MIT"
] | null | null | null | loggerBot.py | jskrist/channelLogger | 42d5820d29ce9213c823d76dbdc748e288f45eb8 | [
"MIT"
] | null | null | null | import asyncio, discord, json
from discord.ext.commands import Bot
from discord.ext import commands
from tinydb import TinyDB, Query
from tinydb.operations import delete, increment
'''
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
SETUP
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
'''
# Create a bot
bot = Bot(description="Channel Logger Bot by jskrist#3569", command_prefix="!", pm_help = True)
# Start or connect to a database to log the messages
db = TinyDB('data.json')
# This is a Query object to use when searching through the database
msg = Query()
usr = Query()
'''
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
HELPER FUNCTIONS
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
'''
# this function returns the set of all the users that have posted to the server
def getPostingUsers():
postingUsers = set()
for item in db:
postingUsers.add(item['authorName'])
return postingUsers
async def addMsgToDB(message):
# Confirm that the message did not come from this bot, to make sure we
# don't get into an infinite loop if this bot sends out any messages.
# Also check that the first character of the message is not a "!" or
# "]", which would indicate a command. Empty messages (e.g. attachment-
# only posts) are skipped before indexing into the content.
if message.author.id != bot.user.id and message.content \
and message.content[0] not in ('!', ']'):
# if the message content is not in the database yet
if not db.search(msg.content == message.content.lower()):
# Insert the content into the database, along with the name of the user that posted it.
# You could add any other data to the database at this point.
db.insert({'content': message.content.lower(), 'authorName': message.author.name})
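# A minimal lookup sketch mirroring the search above; `content` is a
# hypothetical message string and `db`/`msg` are the module-level handles.
def _example_lookup(content):
    """Return all stored records whose content matches, case-insensitively."""
    return db.search(msg.content == content.lower())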
'''
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
BOT EVENTS AND COMMANDS
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
'''
# This function prints a message to the terminal/command window to let you know the bot started correctly
@bot.event
async def on_ready():
print('Bot is up and running.')
# when a message comes into the server, this function is executed
@bot.listen()
async def on_message(message):
await addMsgToDB(message)
# when a message on the server is edited, this function is executed
@bot.listen()
async def on_message_edit(msgBefore, msgAfter):
'''
update the database to reflect only the edited message. This could create a state where a
duplicate message is on the server, but not represented in the database, e.g.
User1 sends "Hello"
User2 sends "Hello"
Database now has {'content':"hello", "authorName":"User1"}
User1 edits post to say "Hello World"
Database now has {'content':"hello world", "authorName":"User1"}
Should it also contain a copy of the message "hello"? since User2 also sent it?
'''
# db.update({'content': msgAfter.content.lower()}, msg.content == msgBefore.content.lower())
'''
Alternatively, you could just add the updated message to the database:
'''
await addMsgToDB(msgAfter)
@bot.command(pass_context=True)
async def printDB(context):
# this command prints out the contents of the database. It should not be used with a large database.
# the database will be saved into a file called data.json (see the TinyDB('data.json') call near the top of this file).
for item in db:
await bot.send_message(context.message.channel, item)
@bot.command(pass_context=True)
async def stats(context):
# this command returns the stats for each user, at the moment that is just the number of messages
# each user has posted, but could be expanded however you'd like
postingUsers = getPostingUsers()
for user in postingUsers:
userMsgs = db.search(msg.authorName == user)
await bot.send_message(context.message.channel, '{0} has {1} messages'.format(user, len(userMsgs)))
@bot.command(pass_context=True)
async def clearDB_all(context):
# this command removes all messages from the Database
db.purge()
@bot.command(pass_context=True)
async def clearDB_usr(context, User=""):
# this command removes all messages in the Database from the given user
db.remove(usr.authorName == User)
@bot.command(pass_context=True)
async def clearDB_msg(context, Msg=""):
# this command removes the given message from the Database if it exists
db.remove(msg.content == Msg.lower())
'''
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
STARTING THE BOT
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
'''
# this opens up a file named botToken.txt which should contain a single line of text; the bot's token
with open('botToken.txt', 'r') as myfile:
botToken = myfile.read().replace('\n', '')
# start the bot
bot.run(botToken)
| 37.664 | 105 | 0.647409 | 644 | 4,708 | 4.708075 | 0.335404 | 0.043536 | 0.023087 | 0.034631 | 0.139842 | 0.139842 | 0.119393 | 0.07124 | 0.031662 | 0.031662 | 0 | 0.004271 | 0.204333 | 4,708 | 124 | 106 | 37.967742 | 0.805125 | 0.347281 | 0 | 0.18 | 0 | 0 | 0.070961 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02 | false | 0.1 | 0.1 | 0 | 0.14 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
35b754ce093c02acd53d79d1aafbde7ead2584ed | 2,221 | py | Python | src/refactor/parallel.py | luislorenzom/b33th0v3n | cf2665a51ed6779093c273cf9d7c404dd9222493 | [
"MIT"
] | null | null | null | src/refactor/parallel.py | luislorenzom/b33th0v3n | cf2665a51ed6779093c273cf9d7c404dd9222493 | [
"MIT"
] | null | null | null | src/refactor/parallel.py | luislorenzom/b33th0v3n | cf2665a51ed6779093c273cf9d7c404dd9222493 | [
"MIT"
] | null | null | null | from types import FunctionType
import numpy as np
import pandas as pd
from functools import partial
from multiprocessing import Pool, cpu_count
def get_levenshtein_distance(str1: str, str2: str) -> float:
"""
Computes the Levenshtein distance between two strings
:param str1: first string
:param str2: second string
:return: the distance between the two params
"""
size_x = len(str1) + 1
size_y = len(str2) + 1
matrix = np.zeros((size_x, size_y))
for x in range(size_x):
matrix[x, 0] = x
for y in range(size_y):
matrix[0, y] = y
for x in range(1, size_x):
for y in range(1, size_y):
if str1[x - 1] == str2[y - 1]:
matrix[x, y] = min(
matrix[x - 1, y] + 1,
matrix[x - 1, y - 1],
matrix[x, y - 1] + 1
)
else:
matrix[x, y] = min(
matrix[x - 1, y] + 1,
matrix[x - 1, y - 1] + 1,
matrix[x, y - 1] + 1
)
return matrix[size_x - 1, size_y - 1]
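# Quick sanity checks with the classic textbook pairs (not exhaustive tests):
#   get_levenshtein_distance("kitten", "sitting")  # -> 3.0
#   get_levenshtein_distance("flaw", "lawn")       # -> 2.0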
def add_distance_column(filename: str, df: pd.DataFrame) -> pd.DataFrame:
"""
Add a new column to df containing the distance of each cell to filename
:param filename: filename to compare against df
:param df: df with artist or track names
:return: df with the new 'distances' column
"""
df['distances'] = df.applymap(lambda x: get_levenshtein_distance(filename, x))
return df
def parallelize_dataframe(df: pd.DataFrame, func: FunctionType, word: str, n_cores: int = cpu_count() - 1) -> pd.DataFrame:
"""
Apply the given func to the DataFrame, parallelizing the application across chunks
:param df: DataFrame containing the data required by func
:param func: function that will be applied in parallel over df
:param word: string to compute the distance against
:param n_cores: number of worker processes to parallelize over
:return: DataFrame after func has been applied
"""
df_split = np.array_split(df, n_cores) # TODO: add df length check to get n_cores
pool = Pool(n_cores)
f = partial(func, word)
df = pd.concat(pool.map(f, df_split))
pool.close()
pool.join()
return df
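# A minimal end-to-end run, assuming a single string column (the distance is
# applied to every cell by add_distance_column); the data below is made up.
if __name__ == "__main__":
    demo_df = pd.DataFrame({"name": ["hello", "help", "world"]})
    with_distances = parallelize_dataframe(demo_df, add_distance_column,
                                           "hello", n_cores=2)
    print(with_distances)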
| 30.013514 | 123 | 0.594327 | 317 | 2,221 | 4.078864 | 0.315457 | 0.048724 | 0.030936 | 0.027842 | 0.102862 | 0.064192 | 0.053364 | 0.047951 | 0.047951 | 0.047951 | 0 | 0.020833 | 0.30842 | 2,221 | 73 | 124 | 30.424658 | 0.820964 | 0.310671 | 0 | 0.205128 | 0 | 0 | 0.006246 | 0 | 0 | 0 | 0 | 0.013699 | 0 | 1 | 0.076923 | false | 0 | 0.128205 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35beb2659c3525943e08592cd4e9ebc8b9fd9ed7 | 2,239 | py | Python | algolab_class_API/migrations/0011_auto_20190110_1307.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | 1 | 2019-01-10T05:46:09.000Z | 2019-01-10T05:46:09.000Z | algolab_class_API/migrations/0011_auto_20190110_1307.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | 7 | 2018-12-25T15:59:49.000Z | 2019-01-10T05:45:25.000Z | algolab_class_API/migrations/0011_auto_20190110_1307.py | KMU-algolab/algolab_class | fdf22cd10d5af71eae63e259c4f88f2b55b44ec7 | [
"MIT"
] | null | null | null | # Generated by Django 2.1.4 on 2019-01-10 04:07
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('algolab_class_API', '0010_submithistory'),
]
operations = [
migrations.RemoveField(
model_name='boardquestion',
name='context',
),
migrations.RemoveField(
model_name='boardquestion',
name='context_type',
),
migrations.RemoveField(
model_name='boardreply',
name='context',
),
migrations.AddField(
model_name='boardquestion',
name='contents',
field=models.TextField(db_column='Contents', default='내용을 입력하세요.', verbose_name='내용'),
),
migrations.AddField(
model_name='boardquestion',
name='contents_type',
field=models.CharField(choices=[('NOTICE', '공지사항'), ('QUESTION', '질문')], db_column='ContentsType', default='QUESTION', max_length=10, verbose_name='글 종류'),
),
migrations.AddField(
model_name='boardreply',
name='contents',
field=models.TextField(db_column='Contents', default='내용을 입력하세요.', verbose_name='내용'),
),
migrations.AlterField(
model_name='boardquestion',
name='write_time',
field=models.DateTimeField(db_column='WriteTime', verbose_name='작성 시간'),
),
migrations.AlterField(
model_name='course',
name='manager',
field=models.ForeignKey(db_column='Manager', on_delete=django.db.models.deletion.DO_NOTHING, related_name='courseManager_set', to=settings.AUTH_USER_MODEL, verbose_name='교수자'),
),
migrations.AlterField(
model_name='submithistory',
name='status',
field=models.CharField(choices=[('NOT_SOLVED', 'NotSolved'), ('SOLVED', 'Solved'), ('COMPILE_ERROR', 'CompileError'), ('TIME_OVER', 'TimeOver'), ('RUNTIME_ERROR', 'RuntimeError'), ('SERVER_ERROR', 'ServerError')], db_column='Status', default='NOT_SOLVED', max_length=10, verbose_name='제출 결과'),
),
]
| 38.603448 | 305 | 0.604734 | 219 | 2,239 | 5.995434 | 0.415525 | 0.061691 | 0.083778 | 0.09901 | 0.309216 | 0.275704 | 0.275704 | 0.130998 | 0.130998 | 0.130998 | 0 | 0.013781 | 0.254578 | 2,239 | 57 | 306 | 39.280702 | 0.772918 | 0.020098 | 0 | 0.607843 | 1 | 0 | 0.220803 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.117647 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35c0bcc2adb3ea68d0b4f4ffb1f220f03d52c1be | 724 | py | Python | var/spack/repos/builtin/packages/liblzf/package.py | BenWibking/spack | 49b3b43a4a9375210b578635d9240875a5f3106b | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 2,360 | 2017-11-06T08:47:01.000Z | 2022-03-31T14:45:33.000Z | var/spack/repos/builtin/packages/liblzf/package.py | BenWibking/spack | 49b3b43a4a9375210b578635d9240875a5f3106b | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 13,838 | 2017-11-04T07:49:45.000Z | 2022-03-31T23:38:39.000Z | var/spack/repos/builtin/packages/liblzf/package.py | joequant/spack | e028ee0d5903045e1cdeb57550cbff61f2ffb2fa | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 1,793 | 2017-11-04T07:45:50.000Z | 2022-03-30T14:31:53.000Z | # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Liblzf(AutotoolsPackage):
"""LibLZF is a very small data compression library.
It consists of only two .c and two .h files and is very easy to incorporate into
your own programs. The compression algorithm is very, very fast, yet still written
in portable C."""
homepage = "http://oldhome.schmorp.de/marc/liblzf.html"
url = "http://dist.schmorp.de/liblzf/liblzf-3.6.tar.gz"
version('3.6', sha256='9c5de01f7b9ccae40c3f619d26a7abec9986c06c36d260c179cedd04b89fb46a')
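# With this recipe on the Spack repo path, the pinned release above can be
# installed with, e.g.: `spack install liblzf@3.6` (illustrative command).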
| 36.2 | 93 | 0.740331 | 100 | 724 | 5.36 | 0.78 | 0.022388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086235 | 0.167127 | 724 | 19 | 94 | 38.105263 | 0.802653 | 0.577348 | 0 | 0 | 0 | 0 | 0.547368 | 0.224561 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.8 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
35cc8dce8cfa78125ee76beb2078c100cbc1294f | 672 | py | Python | apps/bloguser/migrations/0003_auto_20180505_1717.py | dryprojects/MyBlog | ec04ba2bc658e96cddeb1d4766047ca8e89ff656 | [
"BSD-3-Clause"
] | 2 | 2021-08-17T13:29:21.000Z | 2021-09-04T05:00:01.000Z | apps/bloguser/migrations/0003_auto_20180505_1717.py | dryprojects/MyBlog | ec04ba2bc658e96cddeb1d4766047ca8e89ff656 | [
"BSD-3-Clause"
] | 1 | 2020-07-16T11:22:32.000Z | 2020-07-16T11:22:32.000Z | apps/bloguser/migrations/0003_auto_20180505_1717.py | dryprojects/MyBlog | ec04ba2bc658e96cddeb1d4766047ca8e89ff656 | [
"BSD-3-Clause"
] | 1 | 2020-09-18T10:41:59.000Z | 2020-09-18T10:41:59.000Z | # Generated by Django 2.0.3 on 2018-05-05 17:17
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('bloguser', '0002_auto_20180504_1808'),
]
operations = [
migrations.AddField(
model_name='userprofile',
name='image_url',
field=models.CharField(default='', max_length=100, verbose_name='用户头像url'),
),
migrations.AlterField(
model_name='userprofile',
name='image',
field=models.ImageField(blank=True, default='bloguser/avatar.png', upload_to='bloguser/images/%Y/%m', verbose_name='用户头像'),
),
]
| 28 | 135 | 0.610119 | 72 | 672 | 5.555556 | 0.694444 | 0.045 | 0.1 | 0.12 | 0.145 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068136 | 0.25744 | 672 | 23 | 136 | 29.217391 | 0.733467 | 0.066964 | 0 | 0.235294 | 1 | 0 | 0.1888 | 0.0704 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
35d5e04e5892b72fc7057d291530a91e4883bc62 | 1,127 | py | Python | app/user/serializers.py | falleng0d/medicar-backend | 30bedff54ae84da7a67350852cd508c54e5bf6e7 | [
"MIT"
] | null | null | null | app/user/serializers.py | falleng0d/medicar-backend | 30bedff54ae84da7a67350852cd508c54e5bf6e7 | [
"MIT"
] | null | null | null | app/user/serializers.py | falleng0d/medicar-backend | 30bedff54ae84da7a67350852cd508c54e5bf6e7 | [
"MIT"
] | null | null | null | from collections import OrderedDict
from django.contrib.auth import get_user_model # works even if a custom user model is configured
from rest_framework import serializers
UserModel = get_user_model()
class UserSerializer(serializers.ModelSerializer):
password = serializers.CharField(write_only=True)
def create(self, validated_data):
email = validated_data.get('email', None)
first_name = validated_data.get('first_name', '')
user = UserModel.objects.create_user(
username=validated_data['username'],
password=validated_data['password'],
email=email,
first_name=first_name,
)
return user
def to_representation(self, instance):
instance = super(UserSerializer, self).to_representation(instance)
return OrderedDict([(key, instance[key])
for key in instance if key not in ['email', 'first_name']
or (instance[key] is not None and len(instance[key]) > 1)])
class Meta:
model = UserModel
fields = ("id", "username", "password", "email", "first_name")
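# A minimal driving sketch, assuming Django settings are configured; the
# payload values are hypothetical.
def _example_create_user():
    serializer = UserSerializer(data={
        "username": "jdoe",
        "password": "s3cret-pass",   # write_only: accepted but never echoed
        "email": "jdoe@example.com", # dropped from output when empty/None
    })
    serializer.is_valid(raise_exception=True)
    return serializer.save()  # dispatches to create() above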
| 34.151515 | 87 | 0.645075 | 125 | 1,127 | 5.656 | 0.424 | 0.076379 | 0.059406 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001193 | 0.256433 | 1,127 | 32 | 88 | 35.21875 | 0.842482 | 0.022183 | 0 | 0 | 0 | 0 | 0.071818 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.125 | 0.125 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
35da15160bebb0c093e96b03a913df011244fd8f | 831 | py | Python | model_zoo/jag_utils/python/build_inclusive_from_exclusive.py | jonesholger/lbann | 3214f189a1438565d695542e076c4fa8e7332d34 | [
"Apache-2.0"
] | 194 | 2016-07-19T15:40:21.000Z | 2022-03-19T08:06:10.000Z | model_zoo/jag_utils/python/build_inclusive_from_exclusive.py | jonesholger/lbann | 3214f189a1438565d695542e076c4fa8e7332d34 | [
"Apache-2.0"
] | 1,021 | 2016-07-19T12:56:31.000Z | 2022-03-29T00:41:47.000Z | model_zoo/jag_utils/python/build_inclusive_from_exclusive.py | jonesholger/lbann | 3214f189a1438565d695542e076c4fa8e7332d34 | [
"Apache-2.0"
] | 74 | 2016-07-28T18:24:00.000Z | 2022-01-24T19:41:04.000Z | import sys
if len(sys.argv) != 4 :
print 'usage:', sys.argv[0], 'index_fn id_mapping_fn output_fn'
exit(9)
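# Expected inputs, inferred from the parsing below (illustrative, not a spec):
# index_fn: line 1 skipped; line 2 = header (echoed); line 3 = base dir
# (echoed); then one line per file: "<fn> <num_good> <num_bad> <bad_id> ...".
# id_mapping_fn: one line per file: "<fn> <sample_id> <sample_id> ...".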
a = open(sys.argv[1])
a.readline()
header = a.readline()
dir = a.readline()
#build map: filename -> set of bad samples
mp = {}
mp_good = {}
mp_bad = {}
for line in a :
t = line.split()
mp[t[0]] = set()
mp_good[t[0]] = t[1]
mp_bad[t[0]] = t[2]
for id in t[3:] :
mp[t[0]].add(id)
a.close()
out = open(sys.argv[3], 'w')
out.write('CONDUIT_HDF5_INCLUSION\n')
out.write(header)
out.write(dir)
a = open(sys.argv[2])
bad = 0
for line in a :
t = line.split()
fn = t[0]
out.write(fn + ' ' + mp_good[fn] + ' ' + mp_bad[fn] + ' ')
for id in t[1:] :
if id not in mp[fn] :
out.write(id + ' ')
else :
bad += 1
out.write('\n')
out.close()
print header
print 'num found bad:', bad
| 17.680851 | 65 | 0.56438 | 154 | 831 | 2.967532 | 0.331169 | 0.105033 | 0.07221 | 0.052516 | 0.087527 | 0.087527 | 0.087527 | 0 | 0 | 0 | 0 | 0.028125 | 0.229844 | 831 | 46 | 66 | 18.065217 | 0.685938 | 0.049338 | 0 | 0.105263 | 0 | 0 | 0.105196 | 0.030418 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.026316 | null | null | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea008637c73dda8e84514900695de29b3ed914c6 | 14,069 | py | Python | animation_retarget/animation_retarget_mh.py | curmil/makehuman-utils | 1e1a56479bc1deac613802e891abf440cbeb342e | [
"CC0-1.0"
] | 3 | 2018-04-16T15:14:54.000Z | 2021-08-11T16:00:58.000Z | animation_retarget/animation_retarget_mh.py | curmil/makehuman-utils | 1e1a56479bc1deac613802e891abf440cbeb342e | [
"CC0-1.0"
] | 1 | 2020-10-29T07:53:51.000Z | 2020-10-29T07:53:51.000Z | animation_retarget/animation_retarget_mh.py | curmil/makehuman-utils | 1e1a56479bc1deac613802e891abf440cbeb342e | [
"CC0-1.0"
] | 5 | 2019-08-09T15:21:50.000Z | 2022-02-21T14:02:45.000Z | #!/usr/bin/python
"""
**Project Name:** MakeHuman
**Product Home Page:** http://www.makehuman.org/
**Code Home Page:** https://bitbucket.org/MakeHuman/makehuman/
**Author:** Jonas Hauquier, Thomas Larsson
**Copyright(c):** MakeHuman Team 2001-2015
**Licensing:** AGPL3
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Abstract
--------
Transfer an animation or pose from one skeleton to another by copying each
bone's relative poses, and compensating for differences in bind pose.
Allows transferring the animation from a BVH file imported in Blender to a MH
(or other) skeleton.
Bone names between the two skeletons are matched using fuzzy string matching,
allowing it to automatically find combinations if bone names are similar.
"""
import bpy
import mathutils
from difflib import SequenceMatcher
BONE_NAME_SIMILARITY_THRESHOLD = 0.7
# Credit goes to Thomas Larsson for these derivations
#
# M_b = global bone matrix, relative world (PoseBone.matrix)
# L_b = local bone matrix, relative parent and rest (PoseBone.matrix_local)
# R_b = bone rest matrix, relative armature (Bone.matrix_local)
# T_b = global T-pose marix, relative world
#
#
# M_p = parent global bone matrix
# R_p = parent rest matrix
#
# A_b = A bone matrix, A-pose rest matrix, converts M'_b in A pose to M_b in T pose
# M'_b = bone matrix for the mesh in A pose
#
# T_b = T bone matrix, converts bone matrix from T pose into A pose
#
#
# M_b = M_p R_p^-1 R_b L_b
# M_b = A_b M'_b
# T_b = A_b T'_b
# A_b = T_b T'^-1_b
# B_b = R^-1_b R_p
#
# L_b = R^-1_b R_p M^-1_p A_b M'_b
# L_b = B_b M^-1_p A_b M'_b
#
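# A worked composition of the last relation with mathutils (pre-2.8 `*`
# operator, matching this file); all four matrices are hypothetical inputs:
# b_mat = B_b, parent_world = M_p, a_mat = A_b, src_world = M'_b.
def _example_local_pose(b_mat, parent_world, a_mat, src_world):
    return b_mat * parent_world.inverted() * a_mat * src_world  # = L_b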
def _get_bone_matrix(bone):
"""bone should be a Bone
B_b
"""
if bone.parent:
b_mat = bone.matrix_local.inverted() * bone.parent.matrix_local
else:
b_mat = bone.matrix_local.inverted()
return b_mat
def _get_rest_pose_compensation_matrix(src_pbone, trg_pbone):
"""Bind pose compensation matrix
bones are expected to be of type PoseBone and be in rest pose
A_b
"""
a_mat = src_pbone.matrix.inverted() * trg_pbone.matrix
return a_mat
def set_rotation(pose_bone, rot, frame_idx, group=None):
"""Apply rotation to PoseBone and insert a keyframe.
Rotation can be a matrix, a quaternion or a tuple of euler angles
"""
if not group:
group = pose_bone.name
if pose_bone.rotation_mode == 'QUATERNION':
try:
quat = rot.to_quaternion()
except:
quat = rot
pose_bone.rotation_quaternion = quat
pose_bone.keyframe_insert('rotation_quaternion', frame=frame_idx, group=group)
else:
try:
euler = rot.to_euler(pose_bone.rotation_mode)
except:
euler = rot
pose_bone.rotation_euler = euler
pose_bone.keyframe_insert('rotation_euler', frame=frame_idx, group=group)
def set_translation(pose_bone, trans, frame_idx, group=None):
"""Insert a translation keyframe for a pose bone
"""
if not group:
group = pose_bone.name
try:
trans = trans.to_translation()
except:
pass
pose_bone.location = trans
pose_bone.keyframe_insert("location", frame=frame_idx, group=group)
def fuzzy_stringmatch_ratio(str1, str2):
"""Compare two strings using a fuzzy matching algorithm. Returns the
similarity of both strings as a float, with 1 meaning identical match,
and 0 meaning no similarity at all.
"""
m = SequenceMatcher(None, str1, str2)
return m.ratio()
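# Illustrative scores (approximate; the bone names are hypothetical):
#   fuzzy_stringmatch_ratio("spine01", "spine_01")  # ~0.93 -> mapped
#   fuzzy_stringmatch_ratio("hand.L", "foot.R")     # ~0.17 -> rejected (< 0.7)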
def select_and_set_rest_pose(rig, scn):
"""Select the rig, go into pose mode and clear all rotations (sets to rest
pose)
"""
scn.objects.active = rig
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action='SELECT')
bpy.ops.pose.rot_clear()
bpy.ops.pose.loc_clear()
bpy.ops.pose.scale_clear()
def sort_by_depth(bonemaplist):
"""Sort bone mapping list by depth of target bone.
Creating a breadth-first list through the target skeleton.
This order is needed for correct retargeting, so that we build up the
_trg_mat and _src_mat top to bottom.
"""
def _depth(bonemap):
"""Depth of target bone in the skeleton, is 0 for root bone.
Depth also is the number of parents this bone has.
"""
return len(bonemap.trg_bone.parent_recursive)
sort_tuples = [(_depth(bm), bm) for bm in bonemaplist]
return [x[1] for x in sorted(sort_tuples, key=lambda b: b[0])]
class AnimationRetarget(object):
"""Manages the retargetting operation between two armatures.
"""
def __init__(self, src_amt, trg_amt):
self.src_amt = src_amt
self.trg_amt = trg_amt
self.bone_mappings = []
self.trg_bone_lookup = {} # Lookup a mapping by target bone name
self.src_bone_lookup = {} # Lookup a mapping by source bone name
# Automatically map source bones to target bones using fuzzy matching
self.find_bone_mapping()
self.bone_mappings = sort_by_depth(self.bone_mappings)
self._init_lookup_structures()
def _init_lookup_structures(self):
"""Create lookup dicts that allow quick access to the mappings by
source or target bone name.
"""
for bm in self.bone_mappings:
self.trg_bone_lookup[bm.trg_bone.name] = bm
self.src_bone_lookup[bm.src_bone.name] = bm
def find_bone_mapping(self):
"""Find combination of source and target bones by comparing the bones
from both armatures with a fuzzy string matching algorithm.
"""
# TODO allow more complicated remappings by allowing to specify a mapping file
not_mapped_trg = {}
mapped_src = {}
for trg_bone in self.trg_amt.pose.bones:
if trg_bone.name in self.src_amt.pose.bones:
src_bone = self.src_amt.pose.bones[trg_bone.name]
self.bone_mappings.append(BoneMapping(src_bone, trg_bone, self))
print ("Bone mapped: %s -> %s" % (src_bone.name, trg_bone.name))
mapped_src[src_bone.name] = True
else:
not_mapped_trg[trg_bone.name] = trg_bone
for trg_bone in not_mapped_trg.values():
src_candidates = [b for b in self.src_amt.pose.bones if b.name not in mapped_src]
best_candidate = None
score = -1
for b_idx, src_bone in enumerate(src_candidates):
ratio = fuzzy_stringmatch_ratio(src_bone.name, trg_bone.name)
if ratio > score:
score = ratio
best_candidate = b_idx
if best_candidate is not None and score > BONE_NAME_SIMILARITY_THRESHOLD:
src_bone = src_candidates[best_candidate]
self.bone_mappings.append(BoneMapping(src_bone, trg_bone, self))
print ("Bone mapped: %s -> %s" % (src_bone.name, trg_bone.name))
del src_candidates[best_candidate]
else:
print ("Could not find an appropriate source bone for %s" % trg_bone.name)
def _retarget_frame(self, scn, frame_idx, target_frame, in_place=False):
scn.frame_set(frame_idx)
for b_map in self.bone_mappings:
b_map.retarget(target_frame, in_place)
def _set_rest_frame(self, target_frame, in_place=False):
pose_mat = mathutils.Matrix()
pose_mat.identity()
for b_map in self.bone_mappings:
b_map.insert_keyframe(target_frame, pose_mat, in_place)
def retarget(self, scn, frames, insert_restframes=False, in_place=False):
"""Start the retarget operation for specified frames.
"""
scn.frame_set(0)
select_and_set_rest_pose(self.src_amt, scn)
select_and_set_rest_pose(self.trg_amt, scn)
for bm in self.bone_mappings:
bm.update_matrices()
if insert_restframes:
print ("Rest keyframe insertion is enabled")
tf_idx = 1
for c, frame_idx in enumerate(frames):
print ("Retargeting frame %s/%s" % (c + 1, len(frames)))
if insert_restframes and frame_idx > 2:
self._set_rest_frame(tf_idx, in_place)
tf_idx += 1
self._retarget_frame(scn, frame_idx, tf_idx, in_place)
tf_idx += 1
class BoneMapping(object):
def __init__(self, src_pbone, trg_pbone, container):
"""A mapping of a source bone to a target bone. Retargetting will
transfer the pose from the source bone, compensate it for the difference
in bind pose between source and target bone, and apply a corresponding
pose matrix on the target bone.
src_pbone and trg_pbone are expected to be PoseBones
"""
self.container = container
self.src_bone = src_pbone.bone
self.trg_bone = trg_pbone.bone
self.src_pbone = src_pbone
self.trg_pbone = trg_pbone
self.src_mat = None
self.trg_mat = None
self.a_mat = None
self.b_mat = None
@property
def src_parent(self):
"""Return the bone mapping for the parent of the source bone.
"""
if not self.src_bone.parent:
return None
return self.container.src_bone_lookup[self.src_bone.parent.name]
@property
def trg_parent(self):
"""Return the bone mapping for the parent of the target bone.
"""
if not self.trg_bone.parent:
return None
# TODO guard against unmapped bones
return self.container.trg_bone_lookup[self.trg_bone.parent.name]
def update_matrices(self):
"""Update static matrices. These change only if the rest poses or structure
of one of the two rigs changes.
Should be called when both rigs are in rest pose.
"""
self.a_mat = _get_rest_pose_compensation_matrix(self.src_pbone, self.trg_pbone)
self.b_mat = _get_bone_matrix(self.trg_bone)
#self.src_mat = _get_bone_matrix(self.src_pbone)
#self.b_mat =
def __repr__(self):
return self.__unicode__()
def __str__(self):
return self.__unicode__()
def __unicode__(self):
return '<BoneMapping %s -> %s>' % (self.src_bone.name, self.trg_bone.name)
def insert_keyframe(self, frame_idx, pose_mat, in_place=False):
"""Insert the specified matrix as a keyframe for the target bone.
"""
set_rotation(self.trg_pbone, pose_mat, frame_idx)
if not in_place and not self.trg_bone.parent:
set_translation(self.trg_pbone, pose_mat, frame_idx)
def retarget(self, frame_idx, in_place=False):
"""Retarget the current pose of the source bone to the target bone, and
apply it as keyframe with specified index.
"""
frame_mat = self.src_pbone.matrix.to_4x4()
pose_mat = self.retarget_frame(frame_mat)
self.insert_keyframe(frame_idx, pose_mat, in_place)
def retarget_frame(self, frame_mat):
"""Calculate a pose matrix for the target bone by retargeting the
specified frame_mat, which is a pose on the source bone.
"""
# Store these for reuse in child bones, should be recalculated for every frame
self._src_mat = frame_mat
self._trg_mat = self._src_mat * self.a_mat.to_4x4()
self._trg_mat.col[3] = frame_mat.col[3]
trg_parent = self.trg_parent
if trg_parent:
mat = trg_parent._trg_mat.inverted() * self._trg_mat
else:
mat = self._trg_mat
mat = self.b_mat * mat
# TODO apply rotation locks and corrections
#mat = correctMatrixForLocks(mat, self.order, self.locks, self.trgBone, self.useLimits)
# Don't know why, but apparently we need to modify _trg_mat another time
mat_ = self.b_mat.inverted() * mat
if trg_parent:
self._trg_mat = trg_parent._trg_mat * mat_
else:
self._trg_mat = mat_
return mat
def get_armatures(context):
trg_rig = context.active_object
selected_objs = context.selected_objects[:]
if not trg_rig or len(selected_objs) != 2 or trg_rig.type != "ARMATURE":
raise Exception("Exactly two armatures must be selected. This Addon copies the current animation/pose of the selected armature to the active armature.")
selected_objs.remove(trg_rig)
src_rig = selected_objs[0]
if src_rig.type != "ARMATURE":
raise Exception("Exactly two armatures must be selected. This Addon copies the current animation/pose of the selected armature to the active armature.")
return (src_rig, trg_rig)
def retarget_animation(src_rig, trg_rig, insert_restframes=False, in_place=False):
"""With insert_restframes == True the first frame, which is supposed to contain the
rest pose, is copied in between every two frames. This makes it possible to
blend in each pose using action constraints.
If in_place == True translations of the root bone are ignored.
"""
r = AnimationRetarget(src_rig, trg_rig)
r.retarget(bpy.context.scene, range(1,500+1), insert_restframes, in_place) # TODO determine how many frames to copy
def main():
src_rig, trg_rig = get_armatures(bpy.context)
print ("Retarget animation from %s to %s" % (src_rig.name, trg_rig.name))
retarget_animation(src_rig, trg_rig)
if __name__ == '__main__':
main()
| 36.074359 | 157 | 0.665435 | 2,055 | 14,069 | 4.338686 | 0.186861 | 0.018057 | 0.016151 | 0.006729 | 0.212539 | 0.162517 | 0.102176 | 0.068192 | 0.068192 | 0.061687 | 0 | 0.00438 | 0.253536 | 14,069 | 389 | 158 | 36.167095 | 0.844601 | 0.359372 | 0 | 0.186529 | 0 | 0.010363 | 0.063282 | 0 | 0 | 0 | 0 | 0.010283 | 0 | 1 | 0.139896 | false | 0.005181 | 0.015544 | 0.015544 | 0.238342 | 0.031088 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea030c574075dd05328271b1ccc630cdf7f9c443 | 1,303 | py | Python | solum-6.0.0/solum/objects/sqlalchemy/execution.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 39 | 2015-09-26T01:30:52.000Z | 2021-05-20T23:37:43.000Z | solum-6.0.0/solum/objects/sqlalchemy/execution.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 5 | 2019-08-14T06:46:03.000Z | 2021-12-13T20:01:25.000Z | solum-6.0.0/solum/objects/sqlalchemy/execution.py | scottwedge/OpenStack-Stein | 7077d1f602031dace92916f14e36b124f474de15 | [
"Apache-2.0"
] | 30 | 2015-10-25T18:06:39.000Z | 2020-01-14T12:14:06.000Z | # Copyright 2014 - Rackspace Hosting
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sqlalchemy as sa
from solum.objects import execution as abstract
from solum.objects.sqlalchemy import models as sql
class Execution(sql.Base, abstract.Execution):
"""Represent an execution in sqlalchemy."""
__tablename__ = 'execution'
__resource__ = 'executions'
__table_args__ = sql.table_args()
id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
uuid = sa.Column(sa.String(36))
pipeline_id = sa.Column(sa.Integer, sa.ForeignKey('pipeline.id'))
class ExecutionList(abstract.ExecutionList):
"""Represent a list of executions in sqlalchemy."""
@classmethod
def get_all(cls, context):
return ExecutionList(sql.model_query(context, Execution))
| 33.410256 | 75 | 0.744436 | 179 | 1,303 | 5.318436 | 0.581006 | 0.063025 | 0.031513 | 0.033613 | 0.039916 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009225 | 0.168074 | 1,303 | 38 | 76 | 34.289474 | 0.869004 | 0.492709 | 0 | 0 | 0 | 0 | 0.047022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0.071429 | 0.928571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ea0e70ed7f48b9ea77840bcc6953a91b092a58f3 | 1,166 | py | Python | src/bpp/migrations/0232_auto_20210101_1751.py | iplweb/django-bpp | 85f183a99d8d5027ae4772efac1e4a9f21675849 | [
"BSD-3-Clause"
] | 1 | 2017-04-27T19:50:02.000Z | 2017-04-27T19:50:02.000Z | src/bpp/migrations/0232_auto_20210101_1751.py | mpasternak/django-bpp | 434338821d5ad1aaee598f6327151aba0af66f5e | [
"BSD-3-Clause"
] | 41 | 2019-11-07T00:07:02.000Z | 2022-02-27T22:09:39.000Z | src/bpp/migrations/0232_auto_20210101_1751.py | iplweb/bpp | f027415cc3faf1ca79082bf7bacd4be35b1a6fdf | [
"BSD-3-Clause"
] | null | null | null | # Generated by Django 3.0.11 on 2021-01-01 16:51
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("bpp", "0231_ukryj_status_korekty"),
]
operations = [
migrations.AlterField(
model_name="autor",
name="pseudonim",
field=models.CharField(
blank=True,
help_text="\n Jeżeli w bazie danych znajdują się autorzy o zbliżonych imionach, nazwiskach i tytułach naukowych,\n skorzystaj z tego pola aby ułatwić ich rozróżnienie. Pseudonim pokaże się w polach wyszukiwania\n oraz na podstronie autora, po nazwisku i tytule naukowym.",
max_length=300,
null=True,
),
),
migrations.AlterField(
model_name="uczelnia",
name="sortuj_jednostki_alfabetycznie",
field=models.BooleanField(
default=True,
help_text="Jeżeli ustawione na 'FAŁSZ', sortowanie jednostek będzie odbywało się ręcznie\n tzn za pomocą ustalonej przez administratora systemu kolejności. ",
),
),
]
| 36.4375 | 297 | 0.608919 | 123 | 1,166 | 5.691057 | 0.772358 | 0.057143 | 0.071429 | 0.082857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.028786 | 0.314751 | 1,166 | 31 | 298 | 37.612903 | 0.847309 | 0.039451 | 0 | 0.24 | 1 | 0.08 | 0.447227 | 0.049195 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.16 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea1590334af24435c30185a2cb73b1bcb47990a6 | 12,795 | py | Python | telestream_cloud_qc_sdk/test/test_video_config.py | pandastream/telestream-cloud-python-sdk | ce0ad503299661a0f622661359367173c06889fc | [
"MIT"
] | null | null | null | telestream_cloud_qc_sdk/test/test_video_config.py | pandastream/telestream-cloud-python-sdk | ce0ad503299661a0f622661359367173c06889fc | [
"MIT"
] | 2 | 2016-07-06T14:13:31.000Z | 2018-03-07T12:54:58.000Z | telestream_cloud_qc_sdk/test/test_video_config.py | Telestream/telestream-cloud-python-sdk | ce0ad503299661a0f622661359367173c06889fc | [
"MIT"
] | null | null | null | # coding: utf-8
"""
Qc API
Qc API # noqa: E501
The version of the OpenAPI document: 3.0.0
Contact: cloudsupport@telestream.net
Generated by: https://openapi-generator.tech
"""
from __future__ import absolute_import
import unittest
import datetime
import telestream_cloud_qc
from telestream_cloud_qc.models.video_config import VideoConfig # noqa: E501
from telestream_cloud_qc.rest import ApiException
class TestVideoConfig(unittest.TestCase):
"""VideoConfig unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def make_instance(self, include_optional):
"""Test VideoConfig
include_optional is a boolean; when False only required
params are included, when True both required and
optional params are included """
# model = telestream_cloud_qc.models.video_config.VideoConfig() # noqa: E501
if include_optional :
return VideoConfig(
track_select_test = telestream_cloud_qc.models.track_select_test.track_select_test(
selector = 56,
selector_type = 'TrackIndex',
checked = True, ),
track_id_test = telestream_cloud_qc.models.track_id_test.track_id_test(
track_id = 56,
reject_on_error = True,
checked = True, ),
ignore_vbi_test = telestream_cloud_qc.models.ignore_vbi_test.ignore_vbi_test(
reject_on_error = True,
checked = True, ),
force_color_space_test = telestream_cloud_qc.models.force_color_space_test.force_color_space_test(
color_space = 'CSUnknown',
checked = True, ),
video_segment_detection_test = telestream_cloud_qc.models.video_segment_detection_test.video_segment_detection_test(
black_level_default_or_custom = 'Default',
black_level = 56,
percentage_of_frame = 56,
min_duration_required = 1.337,
min_duration_required_secs_or_frames = 'Seconds',
require_digital_silence = True,
reject_on_error = True,
checked = True, ),
video_layout_test = telestream_cloud_qc.models.layout_test.layout_test(
layout_type = 'LayoutTypeFixedIgnoreStartAndEnd',
start_duration = 1.337,
start_duration_secs_or_frames = 'Seconds',
end_duration = 1.337,
end_duration_secs_or_frames = 'Seconds',
start_enabled = True,
start_hours = 56,
start_minutes = 56,
start_seconds = 56,
start_frames = 56,
end_enabled = True,
end_hours = 56,
end_minutes = 56,
end_seconds = 56,
end_frames = 56,
checked = True, ),
letterboxing_test = telestream_cloud_qc.models.letterboxing_test.letterboxing_test(
ratio_or_lines = 'Ratio',
ratio_horizontal = 56,
ratio_vertical = 56,
lines_top_and_bottom = 56,
lines_left_and_right = 56,
tolerance = 56,
black_level_default_or_custom = 'Default',
black_level = 56,
reject_on_error = True,
checked = True, ),
blanking_test = telestream_cloud_qc.models.blanking_test.blanking_test(
black_level_default_or_custom = 'Default',
black_level = 56,
checked = True, ),
loss_of_chroma_test = telestream_cloud_qc.models.loss_of_chroma_test.loss_of_chroma_test(
level_default_or_custom = 'Default',
level = 56,
tolerance = 56,
reject_on_error = True,
checked = True, ),
chroma_level_test = telestream_cloud_qc.models.chroma_level_test.chroma_level_test(
y_level_default_or_custom = 'Default',
y_level_lower = 56,
y_level_upper = 56,
y_level_max_outside_range = 1.337,
y_level_tolerance_low = 1.337,
y_level_tolerance_high = 1.337,
u_vlevel_default_or_custom = 'Default',
u_vlevel_lower = 56,
u_vlevel_upper = 56,
u_vlevel_max_outside_range = 1.337,
low_pass_filter = 'NoFilter',
reject_on_error = True,
do_correction = True,
checked = True, ),
black_level_test = telestream_cloud_qc.models.black_level_test.black_level_test(
level_default_or_custom = 'Default',
level = 56,
level_max_outside_range = 1.337,
reject_on_error = True,
do_correction = True,
checked = True, ),
rgb_gamut_test = telestream_cloud_qc.models.rgb_gamut_test.rgb_gamut_test(
level_default_or_custom = 'Default',
level_lower = 56,
level_upper = 56,
level_max_outside_range = 1.337,
level_tolerance = 1.337,
low_pass_filter = 'NoFilter',
reject_on_error = True,
do_correction = True,
checked = True, ),
hdr_test = telestream_cloud_qc.models.hdr_test.hdr_test(
hdr_standard = 'GenericHdr',
max_fall_max_enabled = True,
max_fall_max = 56,
max_fall_error_enabled = True,
max_fall_error = 56,
max_cll_max_enabled = True,
max_cll_max = 56,
max_cll_error_enabled = True,
max_cll_error = 56,
always_calculate = True,
always_report = True,
reject_on_error = True,
checked = True, ),
colour_bars_test = telestream_cloud_qc.models.colour_bars_test.colour_bars_test(
color_bar_standard = 'AnyColorBars',
tolerance = 56,
time_range_enabled = True,
start_time = 1.337,
end_time = 1.337,
range_tolerance = 1.337,
time_secs_or_frames = 'Seconds',
not_at_any_other_time = True,
reject_on_error = True,
do_correction = True,
checked = True, ),
black_frame_test = telestream_cloud_qc.models.black_frame_test.black_frame_test(
level_default_or_custom = 'Default',
level = 56,
percentage_of_frame = 56,
start_range_enabled = True,
start_time = 1.337,
end_time = 1.337,
start_range_tolerance = 1.337,
time_secs_or_frames = 'Seconds',
end_range_enabled = True,
end_range = 1.337,
end_range_tolerance = 1.337,
end_secs_or_frames = 'Seconds',
not_at_any_other_time = True,
max_time_allowed = 1.337,
max_time_allowed_secs_or_frames = 'Seconds',
max_time_at_start = True,
max_time_allowed_at_start = 1.337,
max_time_allowed_at_start_secs_or_frames = 'Seconds',
max_time_at_end = True,
max_time_allowed_at_end = 1.337,
max_time_allowed_at_end_secs_or_frames = 'Seconds',
reject_on_error = True,
do_correction = True,
checked = True, ),
single_color_test = telestream_cloud_qc.models.single_color_test.single_color_test(
max_time_allowed = 1.337,
time_secs_or_frames = 'Seconds',
percentage_of_frame = 1.337,
ignore_below = 56,
reject_on_error = True,
checked = True, ),
freeze_frame_test = telestream_cloud_qc.models.freeze_frame_test.freeze_frame_test(
sensitivity = 'Low',
time_range_enabled = True,
start_time = 1.337,
end_time = 1.337,
start_range_tolerance = 1.337,
time_secs_or_frames = 'Seconds',
end_range_enabled = True,
end_range = 1.337,
end_range_duration = 1.337,
end_range_tolerance = 1.337,
end_secs_or_frames = 'Seconds',
not_at_any_other_time = True,
max_time_allowed = 1.337,
max_time_allowed_secs_or_frames = 'Seconds',
reject_on_error = True,
checked = True, ),
blockiness_test = telestream_cloud_qc.models.blockiness_test.blockiness_test(
quality_level = 56,
max_time_below_quality = 1.337,
max_time_below_quality_secs_or_frames = 'Seconds',
reject_on_error = True,
checked = True, ),
field_order_test = telestream_cloud_qc.models.field_order_test.field_order_test(
flagged_field_order = 'UnknownFieldOrder',
baseband_enabled = True,
simple = True,
baseband_field_order = 'UnknownFieldOrder',
reject_on_error = True,
checked = True, ),
cadence_test = telestream_cloud_qc.models.cadence_test.cadence_test(
check_cadence = True,
cadence_required = 'CadenceUnknown',
check_cadence_breaks = True,
report_cadence = True,
check_for_poor_cadence = True,
reject_on_error = True,
checked = True, ),
dropout_test = telestream_cloud_qc.models.dropout_test.dropout_test(
sensitivity = 'Low',
reject_on_error = True,
do_correction = True,
checked = True, ),
digital_dropout_test = telestream_cloud_qc.models.digital_dropout_test.digital_dropout_test(
sensitivity = 'Low',
reject_on_error = True,
checked = True, ),
stripe_test = telestream_cloud_qc.models.stripe_test.stripe_test(
sensitivity = 'Low',
reject_on_error = True,
do_correction = True,
checked = True, ),
corrupt_frame_test = telestream_cloud_qc.models.corrupt_frame_test.corrupt_frame_test(
sensitivity = 'Low',
reject_on_error = True,
do_correction = True,
checked = True, ),
flash_test = telestream_cloud_qc.models.flash_test.flash_test(
check_type = 'PSEStandard',
check_for_extended = True,
check_for_red = True,
check_for_patterns = True,
reject_on_error = True,
do_correction = True,
checked = True, ),
media_offline_test = telestream_cloud_qc.models.media_offline_test.media_offline_test(
reject_on_error = True,
checked = True, )
)
else :
return VideoConfig(
)
def testVideoConfig(self):
"""Test VideoConfig"""
inst_req_only = self.make_instance(include_optional=False)
inst_req_and_optional = self.make_instance(include_optional=True)
if __name__ == '__main__':
unittest.main()
| 47.040441 | 132 | 0.511059 | 1,220 | 12,795 | 4.902459 | 0.156557 | 0.020732 | 0.08527 | 0.107674 | 0.533188 | 0.40729 | 0.321853 | 0.250293 | 0.219863 | 0.194115 | 0 | 0.028845 | 0.428292 | 12,795 | 271 | 133 | 47.214022 | 0.78879 | 0.03517 | 0 | 0.483471 | 1 | 0 | 0.027481 | 0.002609 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016529 | false | 0.016529 | 0.024793 | 0 | 0.053719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea1c12db4d5af227141198911161b74bbd00e24e | 1,271 | py | Python | .tox/scenario/lib/python2.7/site-packages/oslo_middleware/__init__.py | bdrich/neutron-lbaas | b4711abfe0207c4fdd5d7fb7ecbf017e753abbfd | [
"Apache-2.0"
] | null | null | null | .tox/scenario/lib/python2.7/site-packages/oslo_middleware/__init__.py | bdrich/neutron-lbaas | b4711abfe0207c4fdd5d7fb7ecbf017e753abbfd | [
"Apache-2.0"
] | null | null | null | .tox/scenario/lib/python2.7/site-packages/oslo_middleware/__init__.py | bdrich/neutron-lbaas | b4711abfe0207c4fdd5d7fb7ecbf017e753abbfd | [
"Apache-2.0"
] | null | null | null | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__all__ = ['CatchErrors',
'CorrelationId',
'CORS',
'Debug',
'Healthcheck',
'HTTPProxyToWSGI',
'RequestId',
'RequestBodySizeLimiter',
'SSLMiddleware']
from oslo_middleware.catch_errors import CatchErrors
from oslo_middleware.correlation_id import CorrelationId
from oslo_middleware.cors import CORS
from oslo_middleware.debug import Debug
from oslo_middleware.healthcheck import Healthcheck
from oslo_middleware.http_proxy_to_wsgi import HTTPProxyToWSGI
from oslo_middleware.request_id import RequestId
from oslo_middleware.sizelimit import RequestBodySizeLimiter
from oslo_middleware.ssl import SSLMiddleware
| 39.71875 | 78 | 0.739575 | 156 | 1,271 | 5.903846 | 0.525641 | 0.078176 | 0.175896 | 0.034745 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003949 | 0.20299 | 1,271 | 31 | 79 | 41 | 0.905232 | 0.429583 | 0 | 0 | 0 | 0 | 0.14507 | 0.030986 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ea2cea171e2a0a408098703594d6df6015c2788a | 333 | py | Python | python/primes.py | matheuskiser/pdx_code_guild | 49a5c62fb468253eb4d9a1fb11166df79bb10873 | [
"MIT"
] | null | null | null | python/primes.py | matheuskiser/pdx_code_guild | 49a5c62fb468253eb4d9a1fb11166df79bb10873 | [
"MIT"
] | null | null | null | python/primes.py | matheuskiser/pdx_code_guild | 49a5c62fb468253eb4d9a1fb11166df79bb10873 | [
"MIT"
] | null | null | null | """
User picks a number n and the program prints all prime numbers from 0 up to (but not including) n.
"""
def is_prime(num):
for i in range(2, num):
if (num % i) == 0:
return False
return True
number_picked = int(raw_input("Pick a number: "))
if number_picked > 2:
    print 2
for i in range(3, number_picked, 2):
if is_prime(i):
print i, | 17.526316 | 72 | 0.600601 | 56 | 333 | 3.482143 | 0.571429 | 0.071795 | 0.061538 | 0.112821 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025316 | 0.288288 | 333 | 19 | 73 | 17.526316 | 0.797468 | 0 | 0 | 0 | 0 | 0 | 0.059055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea319b174e521eedd1eb8788136bfce6b273b847 | 1,982 | py | Python | test_autogalaxy/util/test_error_util.py | caoxiaoyue/PyAutoGalaxy | ad2b4b27404f5bf0f65ba9a0cd7c3ee6570e2d05 | [
"MIT"
] | 7 | 2021-05-29T08:46:29.000Z | 2022-01-23T14:06:20.000Z | test_autogalaxy/util/test_error_util.py | caoxiaoyue/PyAutoGalaxy | ad2b4b27404f5bf0f65ba9a0cd7c3ee6570e2d05 | [
"MIT"
] | 3 | 2021-01-06T09:42:44.000Z | 2022-03-10T15:52:23.000Z | test_autogalaxy/util/test_error_util.py | caoxiaoyue/PyAutoGalaxy | ad2b4b27404f5bf0f65ba9a0cd7c3ee6570e2d05 | [
"MIT"
] | 3 | 2021-02-10T07:45:16.000Z | 2022-01-21T17:36:40.000Z | from autofit.non_linear.samples.pdf import quantile
import autogalaxy as ag
import numpy as np
def test__quantile_1d_profile():
profile_1d_0 = np.array([1.0, 2.0, 3.0])
profile_1d_1 = np.array([1.0, 2.0, 3.0])
profile_1d_list = [profile_1d_0, profile_1d_1]
median_profile_1d = ag.util.error.quantile_profile_1d(
profile_1d_list=profile_1d_list, q=0.5
)
assert (median_profile_1d == np.array([1.0, 2.0, 3.0])).all()
profile_1d_0 = np.array([1.0, 2.0, 3.0])
profile_1d_1 = np.array([2.0, 4.0, 6.0])
profile_1d_list = [profile_1d_0, profile_1d_1]
median_profile_1d = ag.util.error.quantile_profile_1d(
profile_1d_list=profile_1d_list, q=0.5
)
assert (median_profile_1d == np.array([1.5, 3.0, 4.5])).all()
profile_1d_list = [
profile_1d_0,
profile_1d_0,
profile_1d_0,
profile_1d_1,
profile_1d_1,
profile_1d_1,
profile_1d_1,
]
weights = np.array([9.9996, 9.9996, 9.9996, 1e-4, 1e-4, 1e-4, 1e-4])
median_profile_1d = ag.util.error.quantile_profile_1d(
profile_1d_list=profile_1d_list, q=0.5, weights=weights
)
assert (median_profile_1d == np.array([1.0, 2.0, 3.0])).all()
radial_values = [1.0, 2.0, 3.0, 4.0, 5.0]
weights = [0.1, 0.3, 0.2, 0.05, 0.35]
quantile_result = quantile(x=radial_values, q=0.23, weights=weights)
profile_1d_0 = np.array([1.0])
profile_1d_1 = np.array([2.0])
profile_1d_2 = np.array([3.0])
profile_1d_3 = np.array([4.0])
profile_1d_4 = np.array([5.0])
profile_1d_list = [
profile_1d_0,
profile_1d_1,
profile_1d_2,
profile_1d_3,
profile_1d_4,
]
profile_1d_via_error_util = ag.util.error.quantile_profile_1d(
profile_1d_list=profile_1d_list, q=0.23, weights=weights
)
assert quantile_result == profile_1d_via_error_util[0]
| 27.527778 | 73 | 0.617558 | 337 | 1,982 | 3.302671 | 0.130564 | 0.396226 | 0.134771 | 0.143756 | 0.707996 | 0.639712 | 0.62354 | 0.607367 | 0.540881 | 0.504942 | 0 | 0.124495 | 0.250252 | 1,982 | 71 | 74 | 27.915493 | 0.624495 | 0 | 0 | 0.431373 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078431 | 1 | 0.019608 | false | 0 | 0.058824 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea33e9758376d259b5c2e71586d76551a1685aa4 | 7,772 | py | Python | search_and_change/search_and_change_num.py | physics-sp/frida-tools | 8ae44d041417152f0717f48513043a320649e9e9 | [
"MIT"
] | 5 | 2020-02-08T12:25:40.000Z | 2021-08-25T16:49:59.000Z | search_and_change/search_and_change_num.py | physics-sp/frida-tools | 8ae44d041417152f0717f48513043a320649e9e9 | [
"MIT"
] | null | null | null | search_and_change/search_and_change_num.py | physics-sp/frida-tools | 8ae44d041417152f0717f48513043a320649e9e9 | [
"MIT"
] | 4 | 2020-06-03T04:27:02.000Z | 2021-06-07T15:16:20.000Z | #! /usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
import time
try:
import frida
except ImportError:
sys.exit('install frida\nsudo pip3 install frida')
# number of times that 'old_value' was find in memory
matches = None
def err(msg):
sys.stderr.write(msg + '\n')
def read(msg): # read input from user
def _invalido():
sys.stdout.write('\033[F\r') # Cursor up one line
blank = ' ' * len(str(leido) + msg)
sys.stdout.write('\r' + blank + '\r')
return read(msg)
try:
leido = input(msg)
except EOFError:
return _invalido()
if leido != '' and leido.isdigit() is False:
return _invalido()
if leido.isdigit():
try:
leido = eval(leido)
except SyntaxError:
return _invalido()
if leido < 1 or leido > matches:
return _invalido()
return leido
def on_message(message, data):
global matches
if message['type'] == 'error':
err('[!] ' + message['stack'])
elif message['type'] == 'send':
# recive amount of matches from js script
matches = message['payload']
else:
print(message)
def main(target_process, usb, old_value, new_value, endianness, signed, bits, alignment):
try:
if usb:
session = frida.get_usb_device().attach(target_process)
else:
session = frida.attach(target_process)
except:
        sys.exit('An error occurred while attaching to the process')
script = session.create_script("""
function get_pattern(number, isLittleEndian, bits, signed) {
var negative = (number < 0 && signed == "s");
if (number < 0) {
number *= -1;
}
var hex_string = number.toString(16);
if (hex_string.length %% 2 == 1) {
hex_string = '0' + hex_string;
}
var pattern = "";
hex_string.match(/.{2}/g).forEach(function(byte) {
pattern = (isLittleEndian ? byte + " " + pattern : pattern + " " + byte);
});
if (isLittleEndian) {
pattern = pattern.substring(0, pattern.length - 1);
}
else {
pattern = pattern.substring(1, pattern.length);
}
var cantBytes = pattern.split(" ").length;
var bytesReg = Math.floor(bits/8);
for (i = 0; i < (bytesReg - cantBytes); i++) {
pattern = (isLittleEndian ? pattern + ' 00' : '00 ' + pattern);
}
var lenPattern = pattern.length;
if (negative) {
if (isLittleEndian) {
var prev = pattern.substring(lenPattern-1, lenPattern);
var nvo = parseInt(prev);
nvo |= 256;
nvo = nvo.toString();
pattern = pattern.substring(0, lenPattern-1) + nvo;
}
else {
var prev = pattern.substring(0, 2);
var nvo = parseInt(prev);
nvo |= 256;
nvo = nvo.toString();
pattern = nvo + pattern.substring(2);
}
}
return pattern;
}
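    // worked example (illustrative values): get_pattern(255, true, 32, "u")
    // returns "ff 00 00 00", the little-endian byte pattern Memory.scanSync expects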
function get_byte_array(number, isLittleEndian, bits, signed) {
var pattern = get_pattern(number, isLittleEndian, bits, signed);
var byte_array = [];
var bytes = pattern.split(" ");
for (var i = 0; i < bytes.length; i++) {
byte_array.push(parseInt("0x" + bytes[i]));
}
return byte_array;
}
function isAlligned(pointer, bits) {
var bytesInPointer = parseInt(pointer);
var bytesInRegister = bits / 8;
return bytesInPointer %% bytesInRegister === 0;
}
var old_value = %d;
var new_value = %d;
var isLittleEndian = '%s' == "l";
var signed = '%s';
var bits = %d;
var alignment = %d;
var mustBeAlligned = alignment != 0;
// pattern of bytes that frida will search in memory
var pattern = get_pattern(old_value, isLittleEndian, bits, signed);
// new bytes that will be written
var byte_array = get_byte_array(new_value, isLittleEndian, bits, signed);
console.log("[i] searching for " + pattern);
console.log("");
console.log("List of matches:");
// get array of ranges of memory that are readable and writable
var ranges = Process.enumerateRangesSync({protection: 'rw-', coalesce: true});
var counter = 0;
var addresses = {};
for (var i = 0; i < ranges.length; i++) {
var range = ranges[i];
// get array of addresses where 'old_value' was found in this range of memory
var matches = Memory.scanSync(range.base, range.size, pattern);
for (var j = 0; j < matches.length; j++) {
var address = matches[j].address;
// check if address is alligned in memory if user wants it to be
if (!mustBeAlligned || (mustBeAlligned && isAlligned(address, alignment))) {
// save match in array at index counter
addresses[counter ++] = address;
}
}
}
// show all matches found to user
var lenMax = counter.toString().length
for (var i = 0; i < counter; i++) {
var index = (i + 1).toString();
var padding = " ".repeat(lenMax - index.length);
console.log("(" + index + ") " + padding + addresses[i]);
}
// send amount of matches to python
send(counter);
// recive index selected by user from python
recv('input', function(value) {
Memory.writeByteArray(addresses[value.payload - 1], byte_array);
});
""" % (old_value, new_value, endianness, signed, bits, alignment))
script.on('message', on_message)
script.load()
# wait for scan to finish
while matches is None:
pass
if matches == 0:
print('\nNo matches found')
else:
print('\nIndicate which address you want to overwrite. Press <Enter> to detach.')
index = read('index of address:')
if index != '':
# send index selected by user to js script
script.post({'type': 'input', 'payload': int(index)})
print('address overwritten!')
time.sleep(1)
session.detach()
if __name__ == '__main__':
argc = len(sys.argv)
if argc < 4 or argc > 11:
usage = 'Usage: {} [-U] [-e little|big] [-b 64|32|16|8] [-a 64|32] <process name or PID> <old value> <new value>\n'.format(__file__)
usage += 'The \'-U\' option is for mobile instrumentation.\n'
usage += 'The \'-e\' option is to specify the endianness. Little is the default.\n'
usage += 'The \'-b\' option is to specify the size of the variable in bits. 32 is the default.\n'
usage += 'The \'-a\' option is to specify that the variable must be aligned in memory (and not in between registers). This is disabled by default.\n'
# usage += 'Specify if the variable is signed or unsigned with -s or -u.\n'
sys.exit(usage)
usb = False
endianness = 'l'
bits = 32
signed = 'u'
alignment = 0
for i in range(1, argc - 3):
if sys.argv[i] == '-U':
usb = True
elif sys.argv[i] == '-e':
endianness = sys.argv[i + 1]
if endianness not in ['big', 'little']:
sys.exit('Bad \'-e\' parameter. Specify the endianness (big or little).')
endianness = endianness[0]
elif sys.argv[i] == '-b':
size = sys.argv[i + 1]
if size not in ['64', '32', '16', '8']:
sys.exit('Bad \'-b\' parameter. Specify the size of the variable in bits (64, 32, 16 or 8).')
bits = int(size)
elif sys.argv[i] == '-a':
arch = sys.argv[i + 1]
if arch not in ['64', '32']:
sys.exit('Bad \'-a\' parameter. Specify the architecture (32 or 64).')
alignment = int(arch)
if sys.argv[argc - 3].isdigit():
target_process = int(sys.argv[argc - 3])
else:
target_process = sys.argv[argc - 3]
if sys.argv[argc - 2].replace('-', '').isdigit() is False:
sys.exit('<old value> must be a number.')
if sys.argv[argc - 1].replace('-', '').isdigit() is False:
sys.exit('<new value> must be a number.')
old_value = int(sys.argv[argc - 2])
new_value = int(sys.argv[argc - 1])
if old_value < 0 or new_value < 0:
        sys.exit('Negative numbers aren\'t supported yet.')
if (old_value > (2 ** (bits - 1)) - 1 and signed == 's') or (old_value > (2 ** bits) - 1 and signed == 'u'):
sys.exit(str(old_value) + ' is too large')
if (new_value > (2 ** (bits - 1)) - 1 and signed == 's') or (new_value > (2 ** bits) - 1 and signed == 'u'):
sys.exit(str(new_value) + ' is too large')
main(target_process, usb, old_value, new_value, endianness, signed, bits, alignment)
| 30.478431 | 151 | 0.634328 | 1,100 | 7,772 | 4.420909 | 0.227273 | 0.021592 | 0.011516 | 0.013161 | 0.177257 | 0.128521 | 0.108369 | 0.090685 | 0.067859 | 0.057989 | 0 | 0.018902 | 0.210371 | 7,772 | 254 | 152 | 30.598425 | 0.773505 | 0.04053 | 0 | 0.113744 | 0 | 0.009479 | 0.61305 | 0.044979 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023697 | false | 0.004739 | 0.018957 | 0 | 0.085308 | 0.018957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea384cdec08dafa6c13c92feb566f42fb716a71c | 13,837 | py | Python | Decision_Maker-Temp_Humid/Temp_Humid_Mqtt_Controller.py | cantiusdeepan/Cadrea | bdd341f8e9ee7a5103611d5bdac1a820ab9fdd81 | [
"MIT"
] | null | null | null | Decision_Maker-Temp_Humid/Temp_Humid_Mqtt_Controller.py | cantiusdeepan/Cadrea | bdd341f8e9ee7a5103611d5bdac1a820ab9fdd81 | [
"MIT"
] | null | null | null | Decision_Maker-Temp_Humid/Temp_Humid_Mqtt_Controller.py | cantiusdeepan/Cadrea | bdd341f8e9ee7a5103611d5bdac1a820ab9fdd81 | [
"MIT"
] | null | null | null | ####### File will publish a score between 0-5 on how good an idea
# (0- Don't open, 5 - Very good conditions for opening window)
# it is to open the window to clear the room given the current internal and external weather conditions
# Factors considered - ________
# Internal temperature
# External Temperature (Effect calculated by adaptive modelling formulation - ASHRAE standard)
# External Weather condition(smog,fog etc)
# External Relative Humidity
# External Wind speed/ Air velocity
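# Example score per the logic below: external temp inside the comfort band (base 1),
# a light breeze of 1.1-5.9 m/s (x2) and 45% outdoor relative humidity (x1.25) -> 2.5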
import json
import time
import paho.mqtt.client as MQTT
import numpy as np
import random
import fpformat
import requests
import socket
class MyMQTT:
def __init__(self, broker, port, notifier):
self.broker = broker
self.port = port
self.notifier = notifier
self._paho_mqtt = MQTT.Client("Temp_Humid_Decision_Maker", False)
self._paho_mqtt.on_connect = self.myOnConnect
self._paho_mqtt.on_message = self.myOnMessageReceived
def myOnConnect(self, paho_mqtt, userdata, flags, rc):
# print ("Connected to message broker with result code: " + str(rc))
pass
def myOnMessageReceived(self, paho_mqtt, userdata, msg):
self.notifier.notify(msg.topic, msg.payload)
def myPublish(self, housIDvar, w):
# print("barbastruzzo")
js_pub = {"data": "temp_window", "value": w}
topic_pub = 'house/' + housIDvar + '/temp_local_controller/temp_window'
self._paho_mqtt.publish(topic_pub, json.dumps(js_pub), 2)
print("Publishing TempHumid decision on MQTT")
# self._paho_mqtt.publish(topic, msg, qos)
def mySubscribe(self, topicExtTemp, topicExtWind, topicExtWeather, topicExtRH, topicIntTemp, topicIntRH, qos=2):
self._paho_mqtt.subscribe(topicExtTemp, qos)
self._paho_mqtt.subscribe(topicExtWind, qos)
self._paho_mqtt.subscribe(topicExtWeather, qos)
self._paho_mqtt.subscribe(topicExtRH, qos)
self._paho_mqtt.subscribe(topicIntTemp, qos)
self._paho_mqtt.subscribe(topicIntRH, qos)
def start(self):
self._paho_mqtt.connect(self.broker, self.port)
self._paho_mqtt.loop_start()
def stop(self):
self._paho_mqtt.loop_stop()
class StartTempHumidMqtt():
def __init__(self):
#####Values to be fetched from local config
# resource catalog base url
self.rc_base_url = ""
# Central config server base URL
self.cc_base_url = ""
self.house_id = 0
self.mqtt_broker = ""
self.mqtt_port = 0
self.getLocalConfig()
self.runningMonthMeanOutTemp = 0.0
## Values to be used in the logical decision making section
# setting default initial values
self.internal_temp = 0.0
self.external_temp = 20.0
self.external_wind = 0.0
self.external_weather = 0
self.l_threshold_temp = 15.0
self.u_threshold_temp = 30.0
self.external_rHumidity = 45.0
self.window = 0.0
self.internal_rhumidity = 45.0
self.last_month_ext_temp_list = np.array([])
self.myMqtt = MyMQTT(self.mqtt_broker, self.mqtt_port, self)
self.myMqtt.start()
def getLocalConfig(self):
json_file = open('local_TH_control_config.json').read()
local_config = json.loads(json_file)
if local_config.get("RC_base_url"):
self.rc_base_url = local_config["RC_base_url"]
else:
print "Problem in local json - Can't get RC url"
if local_config.get("Central_config_base_url"):
self.cc_base_url = local_config["Central_config_base_url"]
else:
print "Problem in local json - Can't get Central config url"
if local_config.get("house_id"):
self.house_id = local_config["house_id"]
else:
print "Problem in local json - Can't get house_id"
if local_config.get("mqtt_broker"):
self.mqtt_broker = local_config["mqtt_broker"]
else:
print "Problem in local json - Can't get mqtt_broker"
if local_config.get("mqtt_port"):
self.mqtt_port = local_config["mqtt_port"]
else:
print "Problem in local json - Can't get mqtt_port"
def thresholdValuesFromCentre(self, url, house_ID, reqString='index.html'):
# URL of the GUIWebservice for Central config file
# url = 'http://192.168.1.71:8081/'
updated_url = url + house_ID + "/" + reqString
print "updated_url:", updated_url
try:
response = requests.get(updated_url)
print response.text
return str(response.text)
except:
print("Error in fetching thingspeak ID from resource catalog")
pass
# Getting thingspeak ID from RC using rasp pi IP
    def running_mean(self, current_ext_temp, array_size_limit):
        self.last_month_ext_temp_list = np.append(self.last_month_ext_temp_list, current_ext_temp)
        # 12 readings per hour for 24 h = 288 readings per day
        # 288 readings per day for 30 days = 8640
        if self.last_month_ext_temp_list.size > array_size_limit:
            # keep only the most recent 'array_size_limit' readings
            # (the original deleted the oldest reading before summing but still
            # divided by the full limit, biasing the mean low)
            self.last_month_ext_temp_list = self.last_month_ext_temp_list[-array_size_limit:]
        self.ext_temp_array_size = self.last_month_ext_temp_list.size
        # mean over the (up to) 'array_size_limit' most recent readings
        return np.sum(self.last_month_ext_temp_list) / self.ext_temp_array_size
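    # e.g. with array_size_limit=3 and successive readings 10, 20, 30, 40 the
    # running means are 10, 15, 20 and 30 (the window keeps the last 3 readings)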
def end(self):
self.myMqtt.stop()
# This is just a local temp and humid controller, there is a central controller making
# decisions based on all input like tmp, humid, wind and dust
def local_temp_test_controller(self):
# house_id = self.getIDfromRC(rc_base_url,'getHID4pi:',local_ip_addr)
internal_temp_topic = 'house/' + self.house_id + '/sensor/temp/internal'
internal_RH_topic = 'house/' + self.house_id + '/sensor/rhumidity/internal'
self.myMqtt.mySubscribe('/wunderground/temp/Turin', '/wunderground/wind/Turin', '/wunderground/weather/Turin',
'/wunderground/rhumidity/Turin',
internal_temp_topic, internal_RH_topic, 2)
wind_multiplier = 1.0
humid_multiplier = 1.0
RH_lower_limit = 10.0
RH_upper_limit = 10.0
comf_temp_range = 7.5
# Getting INITAL THRESHOLDS FOR TEMP - after entering loop- adaptive modelling kicks in
l_threshold_temp = float(self.thresholdValuesFromCentre(self.cc_base_url, self.house_id, "init_temp_low"))
u_threshold_temp = float(self.thresholdValuesFromCentre(self.cc_base_url, self.house_id, "init_temp_high"))
while True:
# getting the following thresholds every five mins from centre
RH_lower_limit = float(self.thresholdValuesFromCentre(self.cc_base_url, self.house_id, "init_RH_low"))
RH_upper_limit = float(self.thresholdValuesFromCentre(self.cc_base_url, self.house_id, "init_RH_high"))
comf_temp_range = float(self.thresholdValuesFromCentre(self.cc_base_url, self.house_id, "tempRange"))
# Higher the window multiplier value, better it is to open window
# Check if outside conditions(excluding temp) allow opening of window and by how much
if self.external_weather > 0:
# Wind speed classification based on : https://www.windows2universe.org/earth/Atmosphere/wind_speeds.html
if self.external_wind <= 1.0:
wind_multiplier = 1
elif 1.1 <= self.external_wind <= 5.9:
wind_multiplier = 2
elif 6.0 <= self.external_wind <= 11.9:
wind_multiplier = 3
elif 12.0 <= self.external_wind <= 19.9:
wind_multiplier = 4
elif self.external_wind > 20.0:
wind_multiplier = 0
if (40.0 <= self.external_rHumidity <= 50.0):
humid_multiplier = 1.25
elif (35.0 <= self.external_rHumidity <= 55.0):
humid_multiplier = 1.15
elif (30.0 <= self.external_rHumidity <= 60.0):
humid_multiplier = 1.0
elif (25.0 <= self.external_rHumidity <= 65.0):
humid_multiplier = 0.75
elif (RH_lower_limit <= self.external_rHumidity <= RH_upper_limit):
humid_multiplier = 0.5
else:
humid_multiplier = 0
                # If internal humidity is very bad (too dry or too humid), even better to open
                # the window; humid_multiplier is already 0 when outdoor RH is very bad
                if self.internal_rhumidity <= 20.0 or self.internal_rhumidity >= 70.0:
                    humid_multiplier = humid_multiplier * 2
# Impact of outside temperature on inside temperature
######### ASHRAE standard for thermal comfort #############
# http://www.sciencedirect.com/science/article/pii/S2095263513000320
# 12 readings per hour for 24 h = 288 readings per day
# 288 readings per day for 30 days = 8640
# So array size is being set to 8640 for taking monthly mean
self.runningMonthMeanOutTemp = self.running_mean(self.external_temp, 8640)
tComf = 0.31 * (self.runningMonthMeanOutTemp) + 17.8
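            # worked example: a running monthly mean of 20.0 C gives
            # tComf = 0.31 * 20.0 + 17.8 = 24.0 C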
# Range on both sides from comf temp provided from central config file
print "tComf:", tComf
# print "comf_temp_range:",comf_temp_range
self.l_threshold_temp = float(tComf) - comf_temp_range
self.u_threshold_temp = float(tComf) + comf_temp_range
print"int temp:", self.internal_temp
print"internal_rhumidity:", self.internal_rhumidity
print"external temp:", self.external_temp
print"external_rHumidity:", self.external_rHumidity
print"external_wind:", self.external_wind
print"runningMonthMeanOutTemp:", self.runningMonthMeanOutTemp
print"lower threshold temp:", self.l_threshold_temp
print"higher threshold temp:", self.u_threshold_temp
print("____________________________________________________")
if self.l_threshold_temp <= self.external_temp <= self.u_threshold_temp:
self.window = 1
print "External temp ok - open window"
# <editor-fold desc="Description">
# elif (self.l_threshold_temp > self.internal_temp):
# self.window = 0.5
#
# print "EXT and INT temp both not ok-int lower than lower_threshold, opening window doesn't have major negative impact - open window"
#
#
# elif (self.u_threshold_temp < self.internal_temp):
# self.window = 0.5
#
# print "EXT and INT temp both not ok-int higher than higher_threshold, opening window doesn't have major negative impact - open window"
# # </editor-fold>
else:
self.window = 0
print "EXT temp NOT ok,Negative impact if window opened - Close Window"
print("___________________________________________________")
print "window value based only on temp:", self.window
self.window = self.window * wind_multiplier
print "window value based on temp,wind:", self.window
self.window = self.window * humid_multiplier
print "window value based on temp,wind,humidity:", self.window
print("***************************************************")
self.myMqtt.myPublish(str(self.house_id), str(self.window))
print "window: " + str(self.window)
print("****************************************************")
time.sleep(30)
def notify(self, topic, msg):
# print msg
if topic == "/wunderground/temp/Turin":
self.external_temp = (json.loads(msg)['value'])
if "/sensor/temp/internal" in topic:
internal_temp_temporary = (json.loads(msg)['value'])
if (internal_temp_temporary != "-100"):
self.internal_temp = internal_temp_temporary
# checking if we have value from sensor, if not skip and use default value
else:
self.internal_temp = 0
if "/sensor/rhumidity/internal" in topic:
internal_rhumidity_temporary = (json.loads(msg)['value'])
# checking if we have value from sensor, if not skip and use default value
if (internal_rhumidity_temporary != "-1"):
self.internal_rhumidity = internal_rhumidity_temporary
else:
self.internal_rhumidity = 0
if topic == "/wunderground/wind/Turin":
self.external_wind = float((json.loads(msg)['value']))
if topic == "/wunderground/weather/Turin":
self.external_weather = (json.loads(msg)['value'])
if topic == "/wunderground/rhumidity/Turin":
self.external_rHumidity = (json.loads(msg)['value'])
# print "received under topic %s" % (topic)
if __name__ == "__main__":
start_TH_control = StartTempHumidMqtt()
start_TH_control.local_temp_test_controller()
# time.sleep(30)
# test.end()
| 42.185976 | 152 | 0.627376 | 1,704 | 13,837 | 4.821596 | 0.205399 | 0.035054 | 0.023369 | 0.019474 | 0.289922 | 0.21945 | 0.176728 | 0.139727 | 0.129747 | 0.125609 | 0 | 0.021077 | 0.276505 | 13,837 | 327 | 153 | 42.314985 | 0.79962 | 0.227361 | 0 | 0.069652 | 0 | 0 | 0.146417 | 0.062777 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.00995 | 0.039801 | null | null | 0.139303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea38f8206042b01f8b467e0f3e3183966c12c73d | 6,538 | py | Python | pynagios/perf_data.py | jimbrowne/pynagios | f144b5507b1b3966a8f587bd07d0c7845db90182 | [
"MIT"
] | null | null | null | pynagios/perf_data.py | jimbrowne/pynagios | f144b5507b1b3966a8f587bd07d0c7845db90182 | [
"MIT"
] | null | null | null | pynagios/perf_data.py | jimbrowne/pynagios | f144b5507b1b3966a8f587bd07d0c7845db90182 | [
"MIT"
] | 1 | 2022-02-11T09:27:21.000Z | 2022-02-11T09:27:21.000Z | """
Tools for creating performance data for Nagios plugin responses.
If you're adding performance data to a :py:class:`~pynagios.response.Response`
object, then :py:func:`~pynagios.response.Response.set_perf_data` can be
called instead of having to create an entire :py:class:`PerfData` object.
"""
import re
from pynagios.range import Range
class PerfData(object):
"""
This class represents performance data for a response. Since
performance data has a non-trivial response format, this class
is meant to ease the formation of performance data.
"""
def __init__(self, label, value, uom=None, warn=None, crit=None,
minval=None, maxval=None):
"""Creates a new object representing a single performance data
item for a Nagios response.
Performance data is extra key/value data that can be returned
along with a response. The performance data is not used immediately
by Nagios itself, but can be extracted by 3rd party tools and can
often be helpful additional information for system administrators
to view. The `label` can be any string, but `value` must be a
numeric value.
Raises :class:`ValueError` if any of the parameters are invalid.
The exact nature of the error is in the human readable message
attribute of the exception.
:Parameters:
- `label`: Label for the performance data. This must be a
string.
- `value`: Value of the data point. This must be a number whose
characters are in the class of `[-0-9.]`
- `uom` (optional): Unit of measure. This must only be `%`, `s`
for seconds, `c` for continous data, or a unit of bit space
measurement ('b', 'kb', etc.)
- `warn` (optional): Warning range for this metric.
- `crit` (optional): Critical range for this metric.
- `minval` (optional): Minimum value possible for this metric,
if one exists.
- `maxval` (optional): Maximum value possible for this metric,
if one exists.
"""
self.label = label
self.value = value
self.uom = uom
self.warn = warn
self.crit = crit
self.minval = minval
self.maxval = maxval
@property
def value(self):
"""The value of this metric."""
return self._value
@value.setter
def value(self, value):
if value is None:
raise ValueError("value must not be None")
elif not self._is_valid_value(value):
raise ValueError("value must be in class [-0-9.]")
self._value = value
@property
def warn(self):
"""
The warning range of this metric. This return value of this
will always be a :py:class:`~pynagios.range.Range` object, even
if it was set with a string.
"""
return self._warn
@warn.setter
def warn(self, value):
if value is not None and not isinstance(value, Range):
value = Range(value)
self._warn = value
@property
def crit(self):
"""
The critical range of this metric. This return value of this
will always be a :py:class:`~pynagios.range.Range` object,
even if it was set with a string.
"""
return self._crit
@crit.setter
def crit(self, value):
if value is not None and not isinstance(value, Range):
value = Range(value)
self._crit = value
@property
def minval(self):
"""
The minimum value possible for this metric. This doesn't make
a lot of sense if the `uom` is '%', since that is obviously going
to be 0, but this will return whatever was set.
"""
return self._minval
@minval.setter
def minval(self, value):
if not self._is_valid_value(value):
raise ValueError("minval must be in class [-0-9.]")
self._minval = value
@property
def maxval(self):
"""
The maximum value possible for this metric. This doesn't make
a lot of sense if the `uom` is '%', since that is obviously going
to be 100, but this will return whatever was set.
"""
return self._maxval
@maxval.setter
def maxval(self, value):
if not self._is_valid_value(value):
raise ValueError("maxval must be in class [-0-9.]")
self._maxval = value
@property
def uom(self):
"""
The unit of measure (UOM) for this metric.
"""
return self._uom
@uom.setter
def uom(self, value):
valids = ['', 's', '%', 'b', 'kb', 'mb', 'gb', 'tb', 'c']
if value is not None and not str(value).lower() in valids:
raise ValueError("uom must be in: %s" % valids)
self._uom = value
def __str__(self):
"""
Returns the proper string format that should be outputted
in the plugin response string. This format is documented in
depth in the Nagios developer guidelines, but in general looks
like this:
| 'label'=value[UOM];[warn];[crit];[min];[max]
"""
# Quotify the label
label = self._quote_if_needed(self.label)
        # Check for None in each and make it empty string if so
        uom = self.uom or ''
        warn = self.warn or ''
        crit = self.crit or ''
        # minval/maxval may legitimately be 0, so compare against None explicitly
        minval = self.minval if self.minval is not None else ''
        maxval = self.maxval if self.maxval is not None else ''
# Create the proper format and return it
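        # e.g. with the None checks above, PerfData('load', 0.5, uom='s', minval=0, maxval=10)
        # renders as: load=0.5s;;;0;10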
return "%s=%s%s;%s;%s;%s;%s" % (label, self.value, uom, warn, crit, minval, maxval)
def _is_valid_value(self, value):
"""
Returns boolean noting whether a value is in the proper value
format which certain values for the performance data must adhere to.
"""
value_format = re.compile(r"[-0-9.]+$")
return value is None or value_format.match(str(value))
def _quote_if_needed(self, value):
"""
This handles single quoting the label if necessary. The reason that
this is not done all the time is so that characters can be saved
since Nagios only reads 80 characters and one line of stdout.
"""
if '=' in value or ' ' in value or "'" in value:
# Quote the string and replace single quotes with double single
# quotes and return that
return "'%s'" % value.replace("'", "''")
else:
return value
| 33.701031 | 91 | 0.599725 | 883 | 6,538 | 4.392978 | 0.234428 | 0.027842 | 0.02346 | 0.020624 | 0.259345 | 0.24826 | 0.239237 | 0.218871 | 0.18974 | 0.1686 | 0 | 0.003771 | 0.310493 | 6,538 | 193 | 92 | 33.875648 | 0.856699 | 0.487305 | 0 | 0.151899 | 0 | 0 | 0.064977 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.202532 | false | 0 | 0.025316 | 0 | 0.367089 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea3b4a5064c4a8a783500e634bbd859b7a0ab263 | 2,318 | py | Python | cahoots/parsers/email.py | SerenitySoftwareLLC/cahoots | 866336c51436343ff5e56f83f89dddc82a5693a3 | [
"MIT"
] | 8 | 2015-03-24T15:34:40.000Z | 2016-12-24T22:09:47.000Z | cahoots/parsers/email.py | hickeroar/cahoots | 8fa795d7d933507c6cbf490bd20c1b3562689c5a | [
"MIT"
] | 34 | 2015-03-06T06:27:54.000Z | 2015-05-27T05:23:27.000Z | cahoots/parsers/email.py | hickeroar/cahoots | 8fa795d7d933507c6cbf490bd20c1b3562689c5a | [
"MIT"
] | 4 | 2015-04-05T06:24:50.000Z | 2015-05-30T02:40:21.000Z | """
The MIT License (MIT)
Copyright (c) Serenity Software, LLC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from cahoots.parsers.base import BaseParser
from SereneRegistry import registry
from validate_email import VALID_ADDRESS_REGEXP
import re
class EmailParser(BaseParser):
'''Determines if given data is an email address'''
def __init__(self, config):
"""
:param config: cahoots config
:type config: cahoots.config.BaseConfig
"""
BaseParser.__init__(self, config, "Email", 100)
@staticmethod
def bootstrap(config):
"""
This method is statically called to bootstrap a parser
:param config: cahoots config
:type config: cahoots.config.BaseConfig
"""
email_regex = re.compile(VALID_ADDRESS_REGEXP)
registry.set('EP_valid_regex', email_regex)
def parse(self, data_string):
"""
parses for email addresses
:param data_string: the string we want to parse
:type data_string: str
:return: yields parse result(s) if there are any
:rtype: ParseResult
"""
if len(data_string) > 254 or '@' not in data_string:
return
if registry.get('EP_valid_regex').match(data_string):
yield self.result("Email Address", self.confidence)
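# Usage sketch (config object assumed; bootstrap must run once so the compiled
# regex is available in the registry):
#   EmailParser.bootstrap(config)
#   results = list(EmailParser(config).parse('user@example.com'))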
| 35.661538 | 78 | 0.716566 | 317 | 2,318 | 5.160883 | 0.485804 | 0.05379 | 0.046455 | 0.02934 | 0.069682 | 0.069682 | 0.069682 | 0.069682 | 0.069682 | 0 | 0 | 0.003317 | 0.219586 | 2,318 | 64 | 79 | 36.21875 | 0.90105 | 0.643658 | 0 | 0 | 0 | 0 | 0.068314 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.25 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
ea3d55b836b67869362d50e8f2439e202c28d2d7 | 40,516 | py | Python | Process_Data/audio_processing.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 8 | 2020-08-26T13:32:56.000Z | 2022-01-18T21:05:46.000Z | Process_Data/audio_processing.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 1 | 2020-07-24T17:06:16.000Z | 2020-07-24T17:06:16.000Z | Process_Data/audio_processing.py | Wenhao-Yang/DeepSpeaker-pytorch | 99eb8de3357c85e2b7576da2a742be2ffd773ead | [
"MIT"
] | 5 | 2020-12-11T03:31:15.000Z | 2021-11-23T15:57:55.000Z | #!/usr/bin/env python
# encoding: utf-8
import os
import pathlib
import traceback
import random
import librosa
import numpy as np
import soundfile as sf
import torch
import torch.nn.utils.rnn as rnn_utils
from pydub import AudioSegment
from python_speech_features import fbank, delta, sigproc
from scipy import signal
from scipy.io import wavfile
from scipy.signal import butter, sosfilt
from speechpy.feature import mfe
from speechpy.processing import cmvn, cmvnw
from Process_Data import constants as c
from Process_Data.Compute_Feat.compute_vad import ComputeVadEnergy
from Process_Data.xfcc.common import local_fbank, local_mfcc
def mk_MFB(filename, sample_rate=c.SAMPLE_RATE, use_delta=c.USE_DELTA, use_scale=c.USE_SCALE, use_logscale=c.USE_LOGSCALE):
audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
#audio = audio.flatten()
filter_banks, energies = fbank(audio, samplerate=sample_rate, nfilt=c.FILTER_BANK, winlen=0.025)
if use_logscale:
filter_banks = 20 * np.log10(np.maximum(filter_banks, 1e-5))
if use_delta:
delta_1 = delta(filter_banks, N=1)
delta_2 = delta(delta_1, N=1)
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
delta_1 = normalize_frames(delta_1, Scale=use_scale)
delta_2 = normalize_frames(delta_2, Scale=use_scale)
frames_features = np.hstack([filter_banks, delta_1, delta_2])
else:
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
frames_features = filter_banks
np.save(filename.replace('.wav', '.npy'), frames_features)
return
def resample_wav(in_wav, out_wav, sr):
try:
samples, samplerate = sf.read(in_wav, dtype='float32')
samples = np.asfortranarray(samples)
samples = librosa.resample(samples, samplerate, sr)
sf.write(file=out_wav, data=samples, samplerate=sr, format='WAV')
except Exception as e:
traceback.print_exc()
raise (e)
def butter_bandpass(cutoff, fs, order=15):
nyq = 0.5 * fs
sos = butter(order, np.array(cutoff) / nyq, btype='bandpass', analog=False, output='sos')
return sos
def butter_bandpass_filter(data, cutoff, fs, order=15):
    int2float = False
    if data.dtype == np.int16:
        data = data / 32768.
        data = data.astype(np.float32)
        int2float = True
    sos = butter_bandpass(cutoff, fs, order=order)
    y = sosfilt(sos, data)
    if int2float:
        y = (y * 32768).astype(np.int16)
    return y
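# Usage sketch: keep roughly the 300-3400 Hz telephone band of 16 kHz audio
#   filtered = butter_bandpass_filter(samples, cutoff=[300, 3400], fs=16000)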
def make_Fbank(filename, write_path, # sample_rate=c.SAMPLE_RATE,
use_delta=c.USE_DELTA,
use_scale=c.USE_SCALE,
nfilt=c.FILTER_BANK,
use_logscale=c.USE_LOGSCALE,
use_energy=c.USE_ENERGY,
normalize=c.NORMALIZE):
if not os.path.exists(filename):
raise ValueError('wav file does not exist.')
sample_rate, audio = wavfile.read(filename)
# audio, sr = librosa.load(filename, sr=None, mono=True)
#audio = audio.flatten()
filter_banks, energies = fbank(audio,
samplerate=sample_rate,
nfilt=nfilt,
winlen=0.025,
winfunc=np.hamming)
if use_energy:
energies = energies.reshape(energies.shape[0], 1)
filter_banks = np.concatenate((energies, filter_banks), axis=1)
# frames_features[:, 0] = np.log(energies)
if use_logscale:
# filter_banks = 20 * np.log10(np.maximum(filter_banks, 1e-5))
filter_banks = np.log(np.maximum(filter_banks, 1e-5))
# Todo: extract the normalize step?
if use_delta:
delta_1 = delta(filter_banks, N=1)
delta_2 = delta(delta_1, N=1)
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
delta_1 = normalize_frames(delta_1, Scale=use_scale)
delta_2 = normalize_frames(delta_2, Scale=use_scale)
filter_banks = np.hstack([filter_banks, delta_1, delta_2])
if normalize:
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
frames_features = filter_banks
file_path = pathlib.Path(write_path)
if not file_path.parent.exists():
os.makedirs(str(file_path.parent))
np.save(write_path, frames_features)
# np.save(filename.replace('.wav', '.npy'), frames_features)
return
def compute_fbank_feat(filename, nfilt=c.FILTER_BANK, use_logscale=c.USE_LOGSCALE, use_energy=True, add_energy=True, normalize=c.CMVN, vad=c.VAD):
"""
Making feats more like in kaldi.
:param filename:
:param use_delta:
:param nfilt:
:param use_logscale:
:param use_energy:
:param normalize:
:return:
"""
if not os.path.exists(filename):
raise ValueError('Wav file does not exist.')
sample_rate, audio = wavfile.read(filename)
pad_size = np.ceil((len(audio) - 0.025 * sample_rate) / (0.01 * sample_rate)) * 0.01 * sample_rate - len(audio) + 0.025 * sample_rate
audio = np.lib.pad(audio, (0, int(pad_size)), 'symmetric')
filter_banks, energies = mfe(audio, sample_rate, frame_length=0.025, frame_stride=0.01, num_filters=nfilt, fft_length=512, low_frequency=0, high_frequency=None)
if use_energy:
if add_energy:
# Add an extra dimension to features
energies = energies.reshape(energies.shape[0], 1)
filter_banks = np.concatenate((energies, filter_banks), axis=1)
else:
# replace the 1st dim as energy
energies = energies.reshape(energies.shape[0], 1)
            filter_banks[:, 0] = energies[:, 0]
if use_logscale:
filter_banks = np.log(np.maximum(filter_banks, 1e-5))
# filter_banks = np.log(filter_banks)
    if normalize == 'cmvn':
        # vec(array): input_feature_matrix (size:(num_observation, num_features))
        norm_fbank = cmvn(vec=filter_banks, variance_normalization=True)
    elif normalize == 'cmvnw':
        norm_fbank = cmvnw(vec=filter_banks, win_size=301, variance_normalization=True)
    else:
        # fall back to the un-normalized filter banks so norm_fbank is always defined
        norm_fbank = filter_banks
if use_energy and vad:
voiced = []
ComputeVadEnergy(filter_banks, voiced)
voiced = np.array(voiced)
voiced_index = np.argwhere(voiced==1).squeeze()
norm_fbank = norm_fbank[voiced_index]
return norm_fbank, voiced
return norm_fbank
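# Usage sketch (remaining defaults come from Process_Data.constants):
#   feats = compute_fbank_feat('utt.wav', normalize='cmvn', vad=False)
#   feats, voiced = compute_fbank_feat('utt.wav', normalize='cmvn', vad=True)  # voiced frames only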
def GenerateSpect(wav_path, write_path, windowsize=25, stride=10, nfft=c.NUM_FFT):
"""
Pre-computing spectrograms for wav files
:param wav_path: path of the wav file
:param write_path: where to write the spectrogram .npy file
:param windowsize:
:param stride:
:param nfft:
:return: None
"""
if not os.path.exists(wav_path):
raise ValueError('wav file does not exist.')
#pdb.set_trace()
# samples, sample_rate = wavfile.read(wav_path)
    # soundfile returns (data, samplerate), not (samplerate, data)
    samples, sample_rate = sf.read(wav_path, dtype='int16')
sample_rate_norm = int(sample_rate / 1e3)
frequencies, times, spectrogram = signal.spectrogram(x=samples, fs=sample_rate, window=signal.hamming(windowsize * sample_rate_norm), noverlap=(windowsize-stride) * sample_rate_norm, nfft=nfft)
# Todo: store the whole spectrogram
# spectrogram = spectrogram[:, :300]
# while spectrogram.shape[1]<300:
# # Copy padding
# spectrogram = np.concatenate((spectrogram, spectrogram), axis=1)
#
# # raise ValueError("The dimension of spectrogram is less than 300")
# spectrogram = spectrogram[:, :300]
# maxCol = np.max(spectrogram,axis=0)
# spectrogram = np.nan_to_num(spectrogram / maxCol)
# spectrogram = spectrogram * 255
# spectrogram = spectrogram.astype(np.uint8)
# For voxceleb1
# file_path = wav_path.replace('Data/voxceleb1', 'Data/voxceleb1')
# file_path = file_path.replace('.wav', '.npy')
file_path = pathlib.Path(write_path)
if not file_path.parent.exists():
os.makedirs(str(file_path.parent))
np.save(write_path, spectrogram)
# return spectrogram
def Make_Spect(wav_path, windowsize, stride, window=np.hamming,
bandpass=False, lowfreq=0, highfreq=0, log_scale=True,
               preemph=0.97, duration=False, nfft=None, normalize=False):
"""
read wav as float type. [-1.0 ,1.0]
:param wav_path:
:param windowsize:
:param stride:
:param window: default to np.hamming
    :return: return spectrogram with shape of (len(wav)/stride, windowsize * samplerate / 2 + 1).
"""
# samplerate, samples = wavfile.read(wav_path)
    samples, samplerate = sf.read(wav_path, dtype='int16')
    if not len(samples) > 0:
        raise ValueError('wav file is empty?')
if bandpass and highfreq > lowfreq:
samples = butter_bandpass_filter(data=samples, cutoff=[lowfreq, highfreq], fs=samplerate)
signal = sigproc.preemphasis(samples, preemph)
frames = sigproc.framesig(signal, windowsize * samplerate, stride * samplerate, winfunc=window)
if nfft == None:
nfft = int(windowsize * samplerate)
pspec = sigproc.powspec(frames, nfft)
pspec = np.where(pspec == 0, np.finfo(float).eps, pspec)
if log_scale == True:
feature = np.log(pspec).astype(np.float32)
else:
feature = pspec.astype(np.float32)
# feature = feature.transpose()
if normalize:
feature = normalize_frames(feature)
if duration:
return feature, len(samples) / samplerate
return feature
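# e.g. for 16 kHz input, Make_Spect('utt.wav', windowsize=0.025, stride=0.01)
# yields a (num_frames, 201) log-power spectrogram, since nfft = 0.025 * 16000 = 400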
def Make_Fbank(filename, # sample_rate=c.SAMPLE_RATE,
filtertype='mel', windowsize=0.025, nfft=512, use_delta=c.USE_DELTA, use_scale=c.USE_SCALE,
lowfreq=0, nfilt=c.FILTER_BANK, log_scale=c.USE_LOGSCALE,
use_energy=c.USE_ENERGY, normalize=c.NORMALIZE, duration=False, multi_weight=False):
if not os.path.exists(filename):
raise ValueError('wav file does not exist.')
    # audio, sample_rate = sf.read(filename, dtype='float32')
    audio, sample_rate = sf.read(filename, dtype='int16')
    assert len(audio) > 0, 'wav file is empty?'
filter_banks, energies = local_fbank(audio, samplerate=sample_rate, nfilt=nfilt, nfft=nfft, lowfreq=lowfreq,
winlen=windowsize, filtertype=filtertype, winfunc=np.hamming,
multi_weight=multi_weight)
if use_energy:
energies = energies.reshape(energies.shape[0], 1)
filter_banks = np.concatenate((energies, filter_banks), axis=1)
# frames_features[:, 0] = np.log(energies)
if log_scale:
# filter_banks = 20 * np.log10(np.maximum(filter_banks, 1e-5))
        # filter_banks = 10 * np.log10(filter_banks)
        filter_banks = np.log(filter_banks)
if use_delta:
delta_1 = delta(filter_banks, N=1)
delta_2 = delta(delta_1, N=1)
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
delta_1 = normalize_frames(delta_1, Scale=use_scale)
delta_2 = normalize_frames(delta_2, Scale=use_scale)
filter_banks = np.hstack([filter_banks, delta_1, delta_2])
if normalize:
filter_banks = normalize_frames(filter_banks, Scale=use_scale)
frames_features = filter_banks
if duration:
return frames_features, len(audio) / sample_rate
# np.save(filename.replace('.wav', '.npy'), frames_features)
return frames_features
def Make_MFCC(filename,
filtertype='mel', winlen=0.025, winstep=0.01,
use_delta=c.USE_DELTA, use_scale=c.USE_SCALE,
nfilt=c.FILTER_BANK, numcep=c.FILTER_BANK,
use_energy=c.USE_ENERGY, lowfreq=0, nfft=512,
normalize=c.NORMALIZE,
duration=False):
if not os.path.exists(filename):
raise ValueError('wav file does not exist.')
# sample_rate, audio = wavfile.read(filename)
audio, sample_rate = sf.read(filename, dtype='int16')
# audio, sample_rate = librosa.load(filename, sr=None)
# audio = audio.flatten()
if not len(audio) > 0:
raise ValueError('wav file is empty?')
feats = local_mfcc(audio, samplerate=sample_rate,
nfilt=nfilt, winlen=winlen,
winstep=winstep, numcep=numcep,
nfft=nfft, lowfreq=lowfreq,
highfreq=None, preemph=0.97,
ceplifter=0, appendEnergy=use_energy,
winfunc=np.hamming, filtertype=filtertype)
if use_delta:
delta_1 = delta(feats, N=1)
delta_2 = delta(delta_1, N=1)
filter_banks = normalize_frames(feats, Scale=use_scale)
delta_1 = normalize_frames(delta_1, Scale=use_scale)
delta_2 = normalize_frames(delta_2, Scale=use_scale)
feats = np.hstack([filter_banks, delta_1, delta_2])
if normalize:
feats = normalize_frames(feats, Scale=use_scale)
if duration:
return feats, len(audio) / sample_rate
# np.save(filename.replace('.wav', '.npy'), frames_features)
return feats
def conver_to_wav(filename, write_path, format='m4a'):
"""
Convert other formats into wav.
:param filename: file path for the audio.
:param write_path:
:param format: formats that ffmpeg supports.
:return: None. write the wav to local.
"""
if not os.path.exists(filename):
raise ValueError('File may not exist.')
if not pathlib.Path(write_path).parent.exists():
os.makedirs(str(pathlib.Path(write_path).parent))
sound = AudioSegment.from_file(filename, format=format)
sound.export(write_path, format="wav")
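# e.g. conver_to_wav('audio/utt.m4a', 'audio/utt.wav', format='m4a')  # pydub needs ffmpeg for m4a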
def read_MFB(filename):
#audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
#audio = audio.flatten()
try:
audio = np.load(filename.replace('.wav', '.npy'))
except Exception:
raise ValueError("Load {} error!".format(filename))
return audio
def read_Waveform(filename):
"""
    read raw waveform samples from a wav file as float32, shaped (1, num_samples)
    :param filename: the path of the wav file.
:return:
"""
# audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
# audio = audio.flatten()
audio, sample_rate = sf.read(filename, dtype='int16')
return audio.astype(np.float32).reshape(1, -1)
def read_from_npy(filename):
"""
read features from npy files
:param filename: the path of wav files.
:return:
"""
#audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
#audio = audio.flatten()
audio = np.load(filename.replace('.wav', '.npy'))
return audio
class ConcateVarInput(object):
"""Rescales the input PIL.Image to the given 'size'.
If 'size' is a 2-element tuple or list in the order of (width, height), it will be the exactly size to scale.
If 'size' is a number, it will indicate the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the exactly size or the smaller edge
interpolation: Default: PIL.Image.BILINEAR
"""
    def __init__(self, num_frames=c.NUM_FRAMES_SPECT, frame_shift=c.NUM_SHIFT_SPECT,
                 feat_type='kaldi', remove_vad=False):
super(ConcateVarInput, self).__init__()
self.num_frames = num_frames
self.remove_vad = remove_vad
        self.frame_shift = frame_shift
        self.c_axis = 0 if feat_type != 'wav' else 1
def __call__(self, frames_features):
network_inputs = []
output = frames_features
        while output.shape[self.c_axis] < self.num_frames:
            output = np.concatenate((output, frames_features), axis=self.c_axis)
        input_this_file = int(np.ceil(output.shape[self.c_axis] / self.frame_shift))
        for i in range(input_this_file):
            start = i * self.frame_shift
            if start < output.shape[self.c_axis] - self.num_frames:
                end = start + self.num_frames
            else:
                start = output.shape[self.c_axis] - self.num_frames
                end = output.shape[self.c_axis]
            if self.c_axis == 0:
                network_inputs.append(output[start:end])
            else:
                network_inputs.append(output[:, start:end])
network_inputs = torch.tensor(network_inputs, dtype=torch.float32)
if self.remove_vad:
network_inputs = network_inputs[:, :, 1:]
return network_inputs
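    # e.g. a (1000, 161) spectrogram with num_frames=300 and frame_shift=300 yields
    # a tensor of shape (4, 300, 161): three strided chunks plus an end-aligned tail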
class ConcateInput(object):
"""Rescales the input PIL.Image to the given 'size'.
If 'size' is a 2-element tuple or list in the order of (width, height), it will be the exactly size to scale.
If 'size' is a number, it will indicate the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the exactly size or the smaller edge
interpolation: Default: PIL.Image.BILINEAR
"""
def __init__(self, input_per_file=1, num_frames=c.NUM_FRAMES_SPECT, remove_vad=False):
super(ConcateInput, self).__init__()
self.input_per_file = input_per_file
self.num_frames = num_frames
self.remove_vad = remove_vad
def __call__(self, frames_features):
network_inputs = []
output = frames_features
while len(output) < self.num_frames:
output = np.concatenate((output, frames_features), axis=0)
for i in range(self.input_per_file):
try:
start = np.random.randint(low=0, high=len(output) - self.num_frames + 1)
frames_slice = output[start:start + self.num_frames]
network_inputs.append(frames_slice)
except Exception as e:
print(len(output))
raise e
# pdb.set_trace()
network_inputs = np.array(network_inputs, dtype=np.float32)
if self.remove_vad:
network_inputs = network_inputs[:, :, 1:]
        return torch.tensor(network_inputs.squeeze())
class ConcateNumInput(object):
"""Rescales the input PIL.Image to the given 'size'.
If 'size' is a 2-element tuple or list in the order of (width, height), it will be the exactly size to scale.
If 'size' is a number, it will indicate the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the exactly size or the smaller edge
interpolation: Default: PIL.Image.BILINEAR
"""
def __init__(self, input_per_file=1, num_frames=c.NUM_FRAMES_SPECT, feat_type='kaldi', remove_vad=False):
super(ConcateNumInput, self).__init__()
self.input_per_file = input_per_file
self.num_frames = num_frames
self.remove_vad = remove_vad
self.c_axis = 0 if feat_type != 'wav' else 1
def __call__(self, frames_features):
network_inputs = []
output = frames_features
while output.shape[self.c_axis] < self.num_frames:
output = np.concatenate((output, frames_features), axis=self.c_axis)
if len(output) / self.num_frames >= self.input_per_file:
for i in range(self.input_per_file):
start = i * self.num_frames
frames_slice = output[start:start + self.num_frames] if self.c_axis == 0 else output[:,
start:start + self.num_frames]
network_inputs.append(frames_slice)
else:
for i in range(self.input_per_file):
try:
start = np.random.randint(low=0, high=output.shape[self.c_axis] - self.num_frames + 1)
frames_slice = output[start:start + self.num_frames] if self.c_axis == 0 else output[:,
start:start + self.num_frames]
network_inputs.append(frames_slice)
except Exception as e:
print(len(output))
raise e
# pdb.set_trace()
network_inputs = np.array(network_inputs, dtype=np.float32)
if self.remove_vad:
network_inputs = network_inputs[:, :, 1:]
if len(network_inputs.shape) > 2:
network_inputs = network_inputs.squeeze(0)
return network_inputs
class ConcateNumInput_Test(object):
"""Rescales the input PIL.Image to the given 'size'.
If 'size' is a 2-element tuple or list in the order of (width, height), it will be the exactly size to scale.
If 'size' is a number, it will indicate the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the exactly size or the smaller edge
interpolation: Default: PIL.Image.BILINEAR
"""
def __init__(self, input_per_file=1, num_frames=c.NUM_FRAMES_SPECT, remove_vad=False):
super(ConcateNumInput_Test, self).__init__()
self.input_per_file = input_per_file
self.num_frames = num_frames
self.remove_vad = remove_vad
def __call__(self, frames_features):
network_inputs = []
output = frames_features
while len(output) < self.num_frames:
output = np.concatenate((output, frames_features), axis=0)
start = np.random.randint(low=0, high=len(output) - self.num_frames + 1)
return start, len(output)
class concateinputfromMFB(object):
"""Rescales the input PIL.Image to the given 'size'.
If 'size' is a 2-element tuple or list in the order of (width, height), it will be the exactly size to scale.
If 'size' is a number, it will indicate the size of the smaller edge.
For example, if height > width, then image will be
rescaled to (size * height / width, size)
size: size of the exactly size or the smaller edge
interpolation: Default: PIL.Image.BILINEAR
"""
def __init__(self, input_per_file=1, num_frames=c.NUM_FRAMES_SPECT, remove_vad=False):
super(concateinputfromMFB, self).__init__()
self.input_per_file = input_per_file
self.num_frames = num_frames
self.remove_vad = remove_vad
def __call__(self, frames_features):
network_inputs = []
output = frames_features
while len(output) < self.num_frames:
output = np.concatenate((output, frames_features), axis=0)
for i in range(self.input_per_file):
try:
start = np.random.randint(low=0, high=len(output) - self.num_frames + 1)
frames_slice = output[start:start + self.num_frames]
network_inputs.append(frames_slice)
except Exception as e:
print(len(output))
raise e
# pdb.set_trace()
network_inputs = torch.tensor(network_inputs, dtype=torch.float32)
if self.remove_vad:
network_inputs = network_inputs[:, :, 1:]
return network_inputs
class ConcateOrgInput(object):
"""
prepare feats with true length.
"""
def __init__(self, remove_vad=False):
super(ConcateOrgInput, self).__init__()
self.remove_vad = remove_vad
def __call__(self, frames_features):
# pdb.set_trace()
network_inputs = []
output = np.array(frames_features)
if self.remove_vad:
output = output[:, 1:]
network_inputs.append(output)
network_inputs = torch.tensor(network_inputs, dtype=torch.float32)
return network_inputs
def pad_tensor(vec, pad, dim):
"""
args:
vec - tensor to pad
pad - the size to pad to
dim - dimension to pad
return:
a new tensor padded itself to 'pad' in dimension 'dim'
"""
while vec.shape[dim]<pad:
vec = torch.cat([vec, vec], dim=dim)
start = np.random.randint(low=0, high=vec.shape[dim]-pad+1)
return torch.Tensor.narrow(vec, dim=dim, start=start, length=pad)
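# e.g. pad_tensor(torch.ones(150, 64), pad=300, dim=0) tiles the tensor to 300 rows
# and returns a (300, 64) window (here the only possible one, starting at 0)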
class PadCollate:
"""
    a collate_fn that crops or pads every example in a batch to a shared,
    randomly chosen frame length
"""
def __init__(self, dim=0, min_chunk_size=200, max_chunk_size=400, normlize=True,
num_batch=0,
fix_len=False):
"""
args:
dim - the dimension to be padded (dimension of time in sequences)
"""
self.dim = dim
self.min_chunk_size = min_chunk_size
self.max_chunk_size = max_chunk_size
self.num_batch = num_batch
self.fix_len = fix_len
self.normlize = normlize
if self.fix_len:
self.frame_len = np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size)
else:
assert num_batch > 0
batch_len = []
self.iteration = 0
# print('==> Generating %d different random length...' % (int(np.ceil(num_batch/100))))
# for i in range(int(np.ceil(num_batch/100))):
# batch_len.append(np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size))
# self.batch_len = np.repeat(batch_len, 100)
print('==> Generating %d different random length...' % (num_batch))
for i in range(num_batch):
batch_len.append(np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size))
self.batch_len = np.array(batch_len)
while np.mean(self.batch_len[:num_batch]) < int((self.min_chunk_size + self.max_chunk_size) / 2):
self.batch_len += 1
self.batch_len = self.batch_len.clip(max=self.max_chunk_size)
print('==> Average of utterance length is %d. ' % (np.mean(self.batch_len[:num_batch])))
def pad_collate(self, batch):
"""
args:
batch - list of (tensor, label)
        return:
xs - a tensor of all examples in 'batch' after padding
ys - a LongTensor of all labels in batch
"""
# pdb.set_trace()
if self.fix_len:
frame_len = self.frame_len
else:
# frame_len = np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size)
frame_len = self.batch_len[self.iteration % self.num_batch]
self.iteration += 1
self.iteration %= self.num_batch
if self.iteration == 0:
np.random.shuffle(self.batch_len)
# pad according to max_len
# print()
xs = torch.stack(list(map(lambda x: x[0], batch)), dim=0)
if frame_len < batch[0][0].shape[-2]:
start = np.random.randint(low=0, high=batch[0][0].shape[-2] - frame_len)
end = start + frame_len
xs = xs[:, :, start:end, :].contiguous()
else:
xs = xs.contiguous()
ys = torch.LongTensor(list(map(lambda x: x[1], batch)))
# map_batch = map(lambda x_y: (pad_tensor(x_y[0], pad=frame_len, dim=self.dim - 1), x_y[1]), batch)
# pad_batch = list(map_batch)
#
# xs = torch.stack(list(map(lambda x: x[0], pad_batch)), dim=0)
# ys = torch.LongTensor(list(map(lambda x: x[1], pad_batch)))
return xs, ys
def __call__(self, batch):
return self.pad_collate(batch)
class RNNPadCollate:
"""
    a collate_fn that sorts a batch by sequence length (descending) and packs
    it with pack_padded_sequence for consumption by RNNs
"""
def __init__(self, dim=0):
"""
args:
dim - the dimension to be padded (dimension of time in sequences)
"""
self.dim = dim
def pad_collate(self, batch):
"""
args:
batch - list of (tensor, label)
        return:
xs - a tensor of all examples in 'batch' after padding
ys - a LongTensor of all labels in batch
"""
# pdb.set_trace()
# pad according to max_len
data = [x[0][0] for x in batch]
data = [x[:, :40].float() for x in data]
data_len = np.array([len(x) for x in data])
sort_idx = np.argsort(-data_len)
sort_data = [data[sort_idx[i]] for i in range(len(sort_idx))]
labels = [x[1] for x in batch]
sort_label = [labels[sort_idx[i]] for i in range(len(sort_idx))]
# data.sort(key=lambda x: len(x), reverse=True)
sort_label = torch.LongTensor(sort_label)
data_length = [len(sq) for sq in sort_data]
p_data = rnn_utils.pad_sequence(sort_data, batch_first=True, padding_value=0)
batch_x_pack = rnn_utils.pack_padded_sequence(p_data, data_length, batch_first=True)
return batch_x_pack, sort_label, data_length
def __call__(self, batch):
return self.pad_collate(batch)
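# Example (editor's sketch; `rnn` is a hypothetical batch_first nn.GRU/nn.LSTM):
# the PackedSequence returned above can be fed to an RNN directly, and the
# returned lengths allow unpacking afterwards.
#
#   packed, labels, lengths = RNNPadCollate()(batch)
#   out_packed, _ = rnn(packed)
#   out, _ = rnn_utils.pad_packed_sequence(out_packed, batch_first=True)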
class TripletPadCollate:
    """
    A collate_fn variant that pads/crops anchor, positive and negative
    examples in a batch to a shared, randomly sampled frame length.
    """

    def __init__(self, dim=0):
        """
        args:
            dim - the dimension to be padded (dimension of time in sequences)
        """
        self.dim = dim
        self.min_chunk_size = 300
        self.max_chunk_size = 500
        self.num_chunk = np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size)

    def pad_collate(self, batch):
        """
        args:
            batch - list of (anchor, positive, negative, anchor_label, negative_label)
        return:
            xs_a, xs_p, xs_n - tensors of anchors, positives and negatives after padding
            ys_a, ys_n - LongTensors of the anchor and negative labels
        """
        # pdb.set_trace()
        # find longest sequence
        # max_len = max(map(lambda x: x[0].shape[self.dim], batch))
        frame_len = self.num_chunk

        # pad according to frame_len
        map_batch = map(lambda x_y: (pad_tensor(x_y[0], pad=frame_len, dim=self.dim),
                                     pad_tensor(x_y[1], pad=frame_len, dim=self.dim),
                                     pad_tensor(x_y[2], pad=frame_len, dim=self.dim),
                                     x_y[3],
                                     x_y[4]), batch)
        pad_batch = list(map_batch)

        # stack all
        xs_a = torch.stack(list(map(lambda x: x[0], pad_batch)), dim=0)
        xs_p = torch.stack(list(map(lambda x: x[1], pad_batch)), dim=0)
        xs_n = torch.stack(list(map(lambda x: x[2], pad_batch)), dim=0)
        ys_a = torch.LongTensor(list(map(lambda x: x[3], pad_batch)))
        ys_n = torch.LongTensor(list(map(lambda x: x[4], pad_batch)))

        return xs_a, xs_p, xs_n, ys_a, ys_n

    def __call__(self, batch):
        return self.pad_collate(batch)
class ExtractCollate:
    """
    A collate_fn variant for feature extraction that pads/crops every example
    in a batch to a shared, randomly sampled frame length and keeps the
    utterance ids.
    """

    def __init__(self, dim=0):
        """
        args:
            dim - the dimension to be padded (dimension of time in sequences)
        """
        self.dim = dim
        self.min_chunk_size = 300
        self.max_chunk_size = 500
        self.num_chunk = np.random.randint(low=self.min_chunk_size, high=self.max_chunk_size)

    def extract_collate(self, batch):
        """
        args:
            batch - list of (tensor, label, uid)
        return:
            xs - a tensor of all examples in 'batch' after padding
            ys - a LongTensor of all labels in batch
            uid - list of utterance ids
        """
        # pdb.set_trace()
        # find longest sequence
        # max_len = max(map(lambda x: x[0].shape[self.dim], batch))
        frame_len = self.num_chunk

        # pad according to frame_len
        map_batch = map(lambda x_y: (pad_tensor(x_y[0], pad=frame_len, dim=self.dim), x_y[1]), batch)
        pad_batch = list(map_batch)

        # stack all
        xs = torch.stack(list(map(lambda x: x[0], pad_batch)), dim=0)
        ys = torch.LongTensor(list(map(lambda x: x[1], pad_batch)))
        uid = [x[2] for x in batch]

        return xs, ys, uid

    def __call__(self, batch):
        return self.extract_collate(batch)
class truncatedinputfromSpectrogram(object):
    """Cut a fixed-length slice out of a spectrogram, zero-padding inputs
    that are shorter than the target length."""

    def __init__(self, input_per_file=1):
        super(truncatedinputfromSpectrogram, self).__init__()
        self.input_per_file = input_per_file

    def __call__(self, frames_features):
        network_inputs = []
        frames_features = np.swapaxes(frames_features, 0, 1)
        num_frames = len(frames_features)
        import random

        for i in range(self.input_per_file):
            j = 0
            if c.NUM_PREVIOUS_FRAME_SPECT <= (num_frames - c.NUM_NEXT_FRAME_SPECT):
                j = random.randrange(c.NUM_PREVIOUS_FRAME_SPECT, num_frames - c.NUM_NEXT_FRAME_SPECT)

            # If len(frames_features) < NUM_FRAMES_SPECT, apply zero padding.
            if j == 0:
                # integer division: the shape passed to np.zeros must be integral
                frames_slice = np.zeros((c.NUM_FRAMES_SPECT, c.NUM_FFT // 2 + 1), dtype=np.float32)
                frames_slice[0:(frames_features.shape[0])] = frames_features
            else:
                frames_slice = frames_features[j - c.NUM_PREVIOUS_FRAME_SPECT:j + c.NUM_NEXT_FRAME_SPECT]

            network_inputs.append(frames_slice)

        return np.array(network_inputs)
def read_audio(filename, sample_rate=c.SAMPLE_RATE):
    audio, sr = librosa.load(filename, sr=sample_rate, mono=True)
    audio = audio.flatten()
    return audio


# this is not good
# def normalize_frames(m):
#     return [(v - np.mean(v)) / (np.std(v) + 2e-12) for v in m]

def normalize_frames(m, Scale=True):
    """
    Normalize frames with mean and (optionally) variance, per coefficient.
    :param m: 2-D array of frames, time along axis 0
    :param Scale: if True, also divide by the per-coefficient standard deviation
    :return: the normalized frames
    """
    if Scale:
        return (m - np.mean(m, axis=0)) / (np.std(m, axis=0) + 1e-12)
    return (m - np.mean(m, axis=0))
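# Example (editor's sketch): normalization is per coefficient over the time
# axis, so every column of the result has (approximately) zero mean and unit
# standard deviation.
#
#   m = np.random.rand(300, 40)      # 300 frames, 40 filterbank coefficients
#   n = normalize_frames(m)
#   # np.allclose(n.mean(axis=0), 0) and np.allclose(n.std(axis=0), 1)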
def pre_process_inputs(signal=np.random.uniform(size=32000), target_sample_rate=8000, use_delta=c.USE_DELTA):
    filter_banks, energies = fbank(signal, samplerate=target_sample_rate, nfilt=c.FILTER_BANK, winlen=0.025)
    delta_1 = delta(filter_banks, N=1)
    delta_2 = delta(delta_1, N=1)

    filter_banks = normalize_frames(filter_banks)
    delta_1 = normalize_frames(delta_1)
    delta_2 = normalize_frames(delta_2)

    if use_delta:
        frames_features = np.hstack([filter_banks, delta_1, delta_2])
    else:
        frames_features = filter_banks

    num_frames = len(frames_features)
    network_inputs = []

    """Too complicated
    for j in range(c.NUM_PREVIOUS_FRAME, num_frames - c.NUM_NEXT_FRAME):
        frames_slice = frames_features[j - c.NUM_PREVIOUS_FRAME:j + c.NUM_NEXT_FRAME]
        #network_inputs.append(np.reshape(frames_slice, (32, 20, 3)))
        network_inputs.append(frames_slice)
    """
    import random
    j = random.randrange(c.NUM_PREVIOUS_FRAME, num_frames - c.NUM_NEXT_FRAME)
    frames_slice = frames_features[j - c.NUM_PREVIOUS_FRAME:j + c.NUM_NEXT_FRAME]
    network_inputs.append(frames_slice)

    return np.array(network_inputs)
class truncatedinput(object):
    """Truncate raw audio to TRUNCATE_SOUND_FIRST_SECONDS seconds,
    zero-padding signals that are shorter than that."""

    def __call__(self, input):
        # min_existing_frames = min(self.libri_batch['raw_audio'].apply(lambda x: len(x)).values)
        want_size = int(c.TRUNCATE_SOUND_FIRST_SECONDS * c.SAMPLE_RATE)
        if want_size > len(input):
            output = np.zeros((want_size,))
            output[0:len(input)] = input
            # print("biho check")
            return output
        else:
            return input[0:want_size]


class toMFB(object):
    """Convert raw audio into normalized mel filterbank features
    via pre_process_inputs."""

    def __call__(self, input):
        output = pre_process_inputs(input, target_sample_rate=c.SAMPLE_RATE)
        return output


class totensor(object):
    """Convert a numpy feature array into a float32 tensor with a
    leading channel dimension."""

    def __call__(self, input):
        """
        Args:
            input (numpy.ndarray): Array to be converted to tensor.
        Returns:
            Tensor: Converted array with shape (1, ...).
        """
        input = torch.tensor(input, dtype=torch.float32)
        return input.unsqueeze(0)


class to2tensor(object):
    """Convert a numpy array into a float32 tensor (no channel dimension added)."""

    def __call__(self, pic):
        """
        Args:
            pic (numpy.ndarray): Array to be converted to tensor.
        Returns:
            Tensor: Converted array.
        """
        # if isinstance(pic, np.ndarray):
        # handle numpy array
        img = torch.tensor(pic, dtype=torch.float32)
        return img


class tonormal(object):

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Tensor image of size (C, H, W) to be mean-centered.
        Returns:
            Tensor: Mean-centered image.
        """
        # TODO: make efficient
        tensor = tensor - torch.mean(tensor)
        return tensor.float()


class mvnormal(object):

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Tensor image of size (C, H, W) to be normalized.
        Returns:
            Tensor: Image normalized to zero mean and unit variance along dim -2.
        """
        # TODO: make efficient
        tensor = (tensor - torch.mean(tensor, dim=-2, keepdim=True)) / torch.std(tensor, dim=-2, keepdim=True).add_(
            1e-12)
        return tensor.float()


class tolog(object):

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Tensor to be log-compressed.
        Returns:
            Tensor: Elementwise natural log of the input.
        """
        tensor = torch.log(tensor)
        return tensor.float()
| 34.718081 | 197 | 0.626518 | 5,452 | 40,516 | 4.463866 | 0.082722 | 0.028475 | 0.017093 | 0.006657 | 0.665242 | 0.631056 | 0.604553 | 0.57053 | 0.557628 | 0.536796 | 0 | 0.014965 | 0.264439 | 40,516 | 1,166 | 198 | 34.747856 | 0.801658 | 0 | 0 | 0.486667 | 0 | 0 | 0.014715 | 0 | 0 | 0 | 0 | 0.003431 | 0.003333 | 0 | null | null | 0.013333 | 0.035 | null | null | 0.011667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea4292de7cf5cedb5b3ae53916a825188ffc20c5 | 852 | py | Python | codes/Constant.py | YasserDaho/Saliency-3DSal | 4a8ff399c8b24ccc88bb04311d6f9797d0cae2d1 | [
"MIT"
] | 2 | 2020-04-19T13:25:47.000Z | 2020-05-08T17:14:38.000Z | codes/Constant.py | YasserDaho/Saliency-3DSal | 4a8ff399c8b24ccc88bb04311d6f9797d0cae2d1 | [
"MIT"
] | null | null | null | codes/Constant.py | YasserDaho/Saliency-3DSal | 4a8ff399c8b24ccc88bb04311d6f9797d0cae2d1 | [
"MIT"
] | 1 | 2019-09-24T17:42:08.000Z | 2019-09-24T17:42:08.000Z | """
Path to the Image Dataset directories
"""
TR_IMG_DIR = './WORKSPACE/DATASET/annotation/'
GT_IMG_DIR = './WORKSPACE/DATASET/annotation/'
"""
Path to Numpy Video directories
"""
TR_VID_DIR = './WORKSPACE/DATA/TR_DATA/'
GT_VID_DIR = './WORKSPACE/DATA/GT_DATA/'
"""
Path to Numpy batches directories
"""
TR_VGG_DIR = './WORKSPACE/BATCH/VGG-16/'
TR_BATCH_DIR = './WORKSPACE/BATCH/TR_BATCH/'
GT_BATCH_DIR = './WORKSPACE/BATCH/GT_BATCH/'
"""
Path to the global test dataset directories
"""
TEST_DIR = './WORKSPACE/TEST/annotation/'
TEST_RES = './WORKSPACE/TEST/result/'
"""
Path to the text file, containing the dataset video names
"""
DATASET_INDEX = './train.txt'
TEST_INDEX = './test.txt'
"""
The new image size
"""
IMG_SIZE = 224
"""
The saved model directory
"""
Model_DIR = './WORKSPACE/TRAINED_MODEL/'
| 17.75 | 57 | 0.676056 | 114 | 852 | 4.833333 | 0.333333 | 0.196007 | 0.049002 | 0.079855 | 0.116152 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006729 | 0.127934 | 852 | 47 | 58 | 18.12766 | 0.734859 | 0.048122 | 0 | 0.315789 | 0 | 0 | 0.546139 | 0.506591 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea4430c870df5f7cb35f3b4eb8439be6a855324e | 770 | py | Python | extras/scripts/test-client.py | claudiosv/unisparks | 6215faddbc5a656c7f387c3bea811d435b122042 | [
"Apache-2.0"
] | null | null | null | extras/scripts/test-client.py | claudiosv/unisparks | 6215faddbc5a656c7f387c3bea811d435b122042 | [
"Apache-2.0"
] | 3 | 2022-01-26T22:55:56.000Z | 2022-02-04T18:41:54.000Z | extras/scripts/test-client.py | claudiosv/unisparks | 6215faddbc5a656c7f387c3bea811d435b122042 | [
"Apache-2.0"
] | 1 | 2021-10-05T17:42:55.000Z | 2021-10-05T17:42:55.000Z | #!/usr/bin/python
import socket
import sys
import time
import struct
MCADDR = '239.255.223.01'
PORT = 0xDF0D
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((MCADDR, PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCADDR), socket.INADDR_ANY)
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
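# Expected datagram layout (editor's note, inferred from the unpack format
# "!I12s16sIIBB" used below -- 42 bytes, network byte order):
#   uint32 msgcode | 12-byte reserved | 16-byte NUL-padded effect name
#   | uint32 elapsed | uint32 beat | uint8 hue_med | uint8 hue_dev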
while True:
    data, addr = s.recvfrom(1024)
    try:
        (msgcode, reserved, effect, elapsed, beat, hue_med, hue_dev) = struct.unpack("!I12s16sIIBB", data)
        # rstrip/decode so the effect name prints cleanly on Python 3, where unpack returns bytes
        print("RX %s:%s %-16s elapsed: %04d beat: %04d hue_med: %03d hue_dev: %03d"
              % (addr[0], addr[1], effect.rstrip(b'\0').decode('ascii', 'replace'), elapsed, beat, hue_med, hue_dev))
    except Exception as err:
        print("RX %d bytes, %s" % (len(data), err))
| 33.478261 | 157 | 0.697403 | 122 | 770 | 4.278689 | 0.540984 | 0.068966 | 0.065134 | 0.065134 | 0.088123 | 0.088123 | 0 | 0 | 0 | 0 | 0 | 0.056317 | 0.146753 | 770 | 22 | 158 | 35 | 0.738204 | 0.020779 | 0 | 0 | 0 | 0.055556 | 0.152722 | 0 | 0 | 0 | 0.007968 | 0 | 0 | 0 | null | null | 0 | 0.222222 | null | null | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea4a0594644cef9b0c271ea046c042822efb0f38 | 1,065 | py | Python | setup.py | mariocesar/boot.py | a75098759e91e4fb6be15ccab3745de13840d8d2 | [
"MIT"
] | 2 | 2018-02-16T01:26:50.000Z | 2021-10-31T09:50:50.000Z | setup.py | mariocesar/boot.py | a75098759e91e4fb6be15ccab3745de13840d8d2 | [
"MIT"
] | null | null | null | setup.py | mariocesar/boot.py | a75098759e91e4fb6be15ccab3745de13840d8d2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import sys
from setuptools import find_packages, setup
if sys.version_info < (3, 6):
    sys.exit('Python 3.6 is the minimum required version')

description, long_description = (
    open('README.rst', 'rt').read().split('\n\n', 1))

setup(
    name='boot.py',
    author='Mario César Señoranis Ayala',
    author_email='mariocesar.c50@gmail.com',
    version='0.16',
    url='https://github.com/mariocesar/boot.py',
    description=description,
    long_description=f'\n{long_description}',
    package_dir={'': 'src'},
    packages=find_packages('src'),
    python_requires='>=3.6',
    setup_requires=['pytest-runner'],
    tests_require=['pytest', 'pytest-cov'],
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Intended Audience :: Developers',
        'Programming Language :: Python',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',
        'License :: OSI Approved :: MIT License',
        'Topic :: Software Development :: Libraries :: Python Modules',
    ],
)
| 30.428571 | 71 | 0.634742 | 124 | 1,065 | 5.362903 | 0.620968 | 0.01203 | 0.112782 | 0.078195 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0.201878 | 1,065 | 34 | 72 | 31.323529 | 0.762353 | 0.019718 | 0 | 0 | 0 | 0 | 0.459252 | 0.023011 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.068966 | 0 | 0.068966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea507ff5ce0b048fae3891ba72d4dee04d5ab84a | 5,242 | py | Python | disentanglement_lib/methods/unsupervised/unsupervised_train.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | disentanglement_lib/methods/unsupervised/unsupervised_train.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | disentanglement_lib/methods/unsupervised/unsupervised_train.py | erow/disentanglement_lib | c875207fdeadc44880277542447544941bc0bd0a | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2018 The DisentanglementLib Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Main training protocol used for unsupervised disentanglement models."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
from disentanglement_lib.data.ground_truth import named_data
from disentanglement_lib.data.ground_truth import util
from disentanglement_lib.data.ground_truth.ground_truth_data import *
from disentanglement_lib.methods.shared import losses
from disentanglement_lib.methods.unsupervised import gaussian_encoder_model
from disentanglement_lib.methods.unsupervised import model # pylint: disable=unused-import
from disentanglement_lib.methods.unsupervised.gaussian_encoder_model import GaussianModel
from disentanglement_lib.methods.unsupervised.model import gaussian_log_density
from disentanglement_lib.utils import results
from disentanglement_lib.evaluation.metrics import mig
import numpy as np
from argparse import ArgumentParser
import pytorch_lightning as pl
import torch
from torch import nn as nn
from torch.nn import functional as F
from torch.utils.data import Dataset, DataLoader
import gin
import pathlib, shutil
import wandb
from disentanglement_lib.utils.hub import convert_model
from disentanglement_lib.utils.mi_estimators import estimate_entropies
from disentanglement_lib.visualize.visualize_util import plt_sample_traversal
@gin.configurable("train", denylist=[])
class Train(pl.LightningModule):
"""Trains the estimator and exports the snapshot and the gin config.
The use of this function requires the gin binding 'dataset.name' to be
specified as that determines the data set used for training.
Args:
model: GaussianEncoderModel that should be trained and exported.
training_steps: Integer with number of training steps.
random_seed: Integer with random seed used for training.
batch_size: Integer with the batch size.
name: Optional string with name of the model (can be used to name models).
model_num: Optional integer with model number (can be used to identify
models).
"""
def __init__(self,
model=gin.REQUIRED,
training_steps=gin.REQUIRED,
random_seed=gin.REQUIRED,
batch_size=gin.REQUIRED,
opt_name=torch.optim.Adam,
lr=5e-4,
eval_numbers=10,
name="",
model_num=None):
super().__init__()
self.training_steps = training_steps
self.random_seed = random_seed
self.batch_size = batch_size
self.lr = lr
self.name = name
self.model_num = model_num
self.eval_numbers = eval_numbers
wandb.config['dataset'] = gin.query_parameter('dataset.name')
self.save_hyperparameters()
self.opt_name = opt_name
self.data = named_data.get_named_ground_truth_data()
img_shape = np.array(self.data.observation_shape)[[2, 0, 1]].tolist()
# img_shape = [1,64,64]
self.ae = model(img_shape)
def training_step(self, batch, batch_idx):
if (self.global_step + 1) % (self.training_steps // self.eval_numbers) == 0:
self.evaluate()
x = batch
loss, summary = self.ae.model_fn(x.float(), None)
self.log_dict(summary)
return loss
def evaluate(self) -> None:
model = self.ae
model.cpu()
model.eval()
dic_log = {}
dic_log.update(self.visualize_model(model))
wandb.log(dic_log)
model.cuda()
model.train()
def visualize_model(self, model) -> dict:
_encoder, _decoder = convert_model(model)
num_latent = self.ae.num_latent
mu = torch.zeros(1, num_latent)
fig = plt_sample_traversal(mu, _decoder, 8, range(num_latent), 2)
return {'traversal': wandb.Image(fig)}
def train_dataloader(self) -> DataLoader:
dl = DataLoader(self.data,
batch_size=self.batch_size,
num_workers=4,
shuffle=True,
pin_memory=True)
return dl
def configure_optimizers(self):
optimizer = self.opt_name(self.parameters(), lr=self.lr)
return optimizer
def save_model(self, file):
dir = '/tmp/models/' + str(np.random.randint(99999))
file_path = os.path.join(dir, file)
pathlib.Path(dir).mkdir(parents=True, exist_ok=True)
torch.save(self.ae.state_dict(), file_path)
wandb.save(file_path, base_path=dir)
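# Editor's sketch of how this module is typically driven. The gin binding
# values below are hypothetical (they depend on the configs shipped with
# disentanglement_lib), and wandb.init() must have been called beforehand:
#
#   gin.parse_config("""
#       dataset.name = "dsprites_full"
#       train.model = @vae.BetaVAE
#       train.training_steps = 10000
#       train.random_seed = 0
#       train.batch_size = 64
#   """)
#   task = Train()
#   trainer = pl.Trainer(max_steps=task.training_steps)
#   trainer.fit(task)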
| 38.262774 | 91 | 0.689432 | 682 | 5,242 | 5.118768 | 0.356305 | 0.070753 | 0.081925 | 0.041535 | 0.097393 | 0.06216 | 0.024635 | 0 | 0 | 0 | 0 | 0.007966 | 0.233689 | 5,242 | 136 | 92 | 38.544118 | 0.86109 | 0.251812 | 0 | 0 | 0 | 0 | 0.011808 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076087 | false | 0 | 0.304348 | 0 | 0.434783 | 0.01087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
ea540d35be6aa8bb6a870342c44c751f5089211c | 2,658 | py | Python | Python/visualization.py | richieliuse/SegmentationCNN | 12aaeff53d01f7c2ddd1f27489283b3062bb1d4a | [
"MIT"
] | 54 | 2016-11-19T02:12:04.000Z | 2022-02-24T14:26:41.000Z | Python/visualization.py | richieliuse/SegmentationCNN | 12aaeff53d01f7c2ddd1f27489283b3062bb1d4a | [
"MIT"
] | 7 | 2019-05-01T10:51:36.000Z | 2022-02-10T04:24:54.000Z | Python/visualization.py | richieliuse/SegmentationCNN | 12aaeff53d01f7c2ddd1f27489283b3062bb1d4a | [
"MIT"
] | 13 | 2016-08-06T00:15:55.000Z | 2021-12-26T20:20:35.000Z | # encoding: utf-8
"""
Visualization functions for features and predictions.
Copyright 2016 Matthias Leimeister
"""
import numpy as np
from feature_extraction import load_raw_features
from evaluation import post_processing
import matplotlib.pyplot as plt
import pickle
def visualize_predictions():
    """
    Visualize predictions resulting from a pretrained CNN model
    on the test dataset.
    """
    preds = np.load('../Data/predsTestTracks_100epochs_lr005.npy')
    train_features, train_labels, test_features, test_labels = load_raw_features('../Data/rawFeatures.pickle')
    data = np.load('../Data/testDataNormalized.npz')
    test_y = data['test_y']

    # load file lists and indices
    with open('../Data/fileListsAndIndex.pickle', 'rb') as f:
        train_files, train_idx, test_files, test_idx = pickle.load(f)

    for i in range(len(test_labels)):
        f = test_files[i]
        print(f)
        idx = np.where(test_idx == i)[0]
        labels = test_y[idx]
        preds_track = np.squeeze(np.asarray(preds[idx]))
        preds_track = post_processing(preds_track)
        preds_track = 0.5 + 0.5 * preds_track
        labels *= 0.5
        plt.plot(labels)
        plt.plot(preds_track)
        plt.show()


def visualize_training_data():
    """
    Visualize log Mel beat spectra of the training dataset.
    """
    train_features, train_labels, test_features, test_labels = load_raw_features('../Data/rawFeatures.pickle')

    for features, labels in zip(train_features, train_labels):
        f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
        ax1.imshow(features)
        ax2.plot(labels)
        ax1.set_xlim([0, features.shape[1]])
        ax1.set_ylim([0, 80])
        ax2.set_xlim([0, features.shape[1]])
        ax2.set_ylim([0, 1])
        ax1.set_adjustable('box-forced')
        ax2.set_adjustable('box-forced')
        plt.show()


def visualize_test_data():
    """
    Visualize log Mel beat spectra of the test dataset.
    """
    train_features, train_labels, test_features, test_labels = load_raw_features('../Data/rawFeatures.pickle')

    for features, labels in zip(test_features, test_labels):
        f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
        ax1.imshow(features)
        ax2.plot(labels)
        ax1.set_xlim([0, features.shape[1]])
        ax1.set_ylim([0, 80])
        ax2.set_xlim([0, features.shape[1]])
        ax2.set_ylim([0, 1])
        ax1.set_adjustable('box-forced')
        ax2.set_adjustable('box-forced')
        plt.show()


if __name__ == "__main__":
    visualize_predictions()
    # visualize_test_data()
    # visualize_training_data()
| 25.805825 | 110 | 0.648232 | 351 | 2,658 | 4.695157 | 0.287749 | 0.036408 | 0.036408 | 0.058252 | 0.467233 | 0.467233 | 0.467233 | 0.467233 | 0.424757 | 0.424757 | 0 | 0.029326 | 0.230248 | 2,658 | 102 | 111 | 26.058824 | 0.776149 | 0.034236 | 0 | 0.461538 | 0 | 0 | 0.107706 | 0.08247 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.096154 | null | null | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea55fb37f7d90d99122deeb02d480c900db16d68 | 343 | py | Python | myhood/migrations/0010_remove_neighborhood_occupants_count.py | kiptoo-rotich/neighborhood | 54974922dbd52e83ccfc6ab8c5cf5e3b258211fb | [
"MIT"
] | null | null | null | myhood/migrations/0010_remove_neighborhood_occupants_count.py | kiptoo-rotich/neighborhood | 54974922dbd52e83ccfc6ab8c5cf5e3b258211fb | [
"MIT"
] | null | null | null | myhood/migrations/0010_remove_neighborhood_occupants_count.py | kiptoo-rotich/neighborhood | 54974922dbd52e83ccfc6ab8c5cf5e3b258211fb | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-26 18:46
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('myhood', '0009_business_created_on'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='neighborhood',
            name='occupants_count',
        ),
    ]
| 19.055556 | 47 | 0.609329 | 36 | 343 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.077236 | 0.282799 | 343 | 17 | 48 | 20.176471 | 0.752033 | 0.131195 | 0 | 0 | 1 | 0 | 0.192568 | 0.081081 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea5d2e688d4dea54f8e149ad7683f67e025d7b0f | 1,713 | py | Python | editaveis/prototipos/protoLevenshtein.py | Ziul/tcc1 | 97dc2b9afcd6736aa8158066b95a698301629543 | [
"CC-BY-3.0"
] | null | null | null | editaveis/prototipos/protoLevenshtein.py | Ziul/tcc1 | 97dc2b9afcd6736aa8158066b95a698301629543 | [
"CC-BY-3.0"
] | 2 | 2015-11-21T02:30:20.000Z | 2015-11-21T02:30:35.000Z | editaveis/prototipos/protoLevenshtein.py | Ziul/tcc1 | 97dc2b9afcd6736aa8158066b95a698301629543 | [
"CC-BY-3.0"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Code to rank packages from a search in APT using Levenshtein
"""
from apt import Cache
from Levenshtein import ratio
from exact import Pack, _parser
from multiprocessing.pool import ThreadPool as Pool
_MAX_PEERS = 20
def Thread_Rank(k):
    pack = _args[0]
    item = Pack()
    item.name = k
    item.ratio = ratio(pack, k)
    return item


def Rankilist(pack):
    cache = Cache()
    if _options.single:
        list_app = []
        for k in cache:
            item = Pack()
            item.name = k.name
            item.ratio = ratio(pack, k.name)
            list_app.append(item)
        return list_app
    else:
        _pool = Pool(processes=_MAX_PEERS)
        result = _pool.map(Thread_Rank, cache._set)
        return result


if __name__ == '__main__':
    (_options, _args) = _parser.parse_args()
    package_name = _args[0]
    suffixes = ['core', 'dev', 'common', 'devel']
    prefixes = ['lib']
    lista = Rankilist(package_name)
    if _options.suffix:
        for suffix in suffixes:
            matches = Rankilist('{}-{}'.format(package_name, suffix))
            lista.extend(matches)
    if _options.prefix:
        for prefix in prefixes:
            matches = Rankilist('{}{}'.format(prefix, package_name))
            lista.extend(matches)
    if _options.suffix and _options.prefix:
        for suffix in suffixes:
            for prefix in prefixes:
                matches = Rankilist(
                    '{}{}-{}'.format(prefix, package_name, suffix))
                lista.extend(matches)
    # ultimo = time.time()
    lista = list(set(lista))
    lista = sorted(lista, reverse=True)
    for i in lista[:_options.amount]:
        print(i)
| 27.190476 | 69 | 0.590776 | 203 | 1,713 | 4.79803 | 0.359606 | 0.056468 | 0.067762 | 0.032854 | 0.290554 | 0.179671 | 0.119097 | 0.119097 | 0.119097 | 0.119097 | 0 | 0.004149 | 0.296556 | 1,713 | 62 | 70 | 27.629032 | 0.804149 | 0.024518 | 0 | 0.183673 | 0 | 0 | 0.028213 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081633 | null | null | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea5f3e26198b37258a038046ed6c084a792a8485 | 1,098 | py | Python | PlotGenGain_PathsOfSelection_TBV.py | janaobsteter/Genotype_CODES | 8adf70660ebff4dd106c666db02cdba8b8ce4f97 | [
"Apache-2.0"
] | 1 | 2021-10-07T18:55:03.000Z | 2021-10-07T18:55:03.000Z | PlotGenGain_PathsOfSelection_TBV.py | janaobsteter/Genotype_CODES | 8adf70660ebff4dd106c666db02cdba8b8ce4f97 | [
"Apache-2.0"
] | null | null | null | PlotGenGain_PathsOfSelection_TBV.py | janaobsteter/Genotype_CODES | 8adf70660ebff4dd106c666db02cdba8b8ce4f97 | [
"Apache-2.0"
] | 1 | 2017-04-13T09:07:41.000Z | 2017-04-13T09:07:41.000Z | import pandas as pd
import sys
import numpy as np
import matplotlib.pyplot as plt
T = pd.read_csv('GenTrends_cat.csv')
T.index = T.cat
T = T.drop('cat', axis=1)
tT = np.transpose(T)
tT.loc[:,'Cycle'] = [i.strip('_vars').strip('_mean') for i in list(tT.index)]
tT_mean = tT.iloc[0::2, :]
tT_var = tT.iloc[1::2, :]
cats = [i for i in ['pBM', 'pb', 'gpb', 'genTest', 'k', 'pripust1', 'pripust2', 'mladi'] if i in tT_mean.columns]

for cat in cats:
    tT_meanP = tT_mean[[cat, 'Cycle']]
    plt.plot(tT_meanP.Cycle, tT_meanP.loc[:, cat], label=cat)
plt.xlabel('Selected Generation')
plt.ylabel('Mean Generation TBV')
plt.legend(loc='upper left')
plt.savefig('GenTrends_Mean_PathOfSel.pdf')

plt.figure()  # fresh figure so the variance plot does not overlay the mean plot
for cat in cats:
    tT_varP = tT_var[[cat, 'Cycle']]
    plt.plot(tT_varP.Cycle, tT_varP.loc[:, cat], label=cat)
plt.xlabel('Selected Generation')
plt.ylabel('Generation TBV Variance')
plt.legend(loc='upper left')
plt.savefig('GenTrends_Var_PathOfSel.pdf')

print('Created plots: GenTrends_Mean_PathOfSel.pdf and GenTrends_Var_PathOfSel.pdf')
| 28.894737 | 111 | 0.675774 | 185 | 1,098 | 3.864865 | 0.345946 | 0.033566 | 0.046154 | 0.033566 | 0.461538 | 0.461538 | 0.461538 | 0.461538 | 0.461538 | 0.461538 | 0 | 0.0074 | 0.138434 | 1,098 | 37 | 112 | 29.675676 | 0.748414 | 0 | 0 | 0.413793 | 0 | 0 | 0.273224 | 0.050091 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.137931 | null | null | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea6d08e4b75d46d8c207e5f66fc96bf1e79be92d | 2,434 | py | Python | docs/index_functions.py | guyms/pyansys | 7a9182a7c44098d9b99a0d1eb2fd183b7256ac01 | [
"MIT"
] | null | null | null | docs/index_functions.py | guyms/pyansys | 7a9182a7c44098d9b99a0d1eb2fd183b7256ac01 | [
"MIT"
] | null | null | null | docs/index_functions.py | guyms/pyansys | 7a9182a7c44098d9b99a0d1eb2fd183b7256ac01 | [
"MIT"
] | null | null | null | #==============================================================================
# load a beam and write it
#==============================================================================
import pyansys
from pyansys import examples
# Sample *.cdb
filename = examples.hexarchivefile
# Read ansys archive file
archive = pyansys.Archive(filename)
# Print raw data from cdb
for key in archive.raw:
    print("%s : %s" % (key, archive.raw[key]))
# Create a vtk unstructured grid from the raw data and plot it
archive.ParseFEM()
archive.uGrid.Plot()
# write this as a vtk xml file
archive.save_as_vtk('hex.vtu')
# Load this from vtk
import vtki
grid = vtki.LoadGrid('hex.vtu')  # load the file written above
grid.Plot()
#==============================================================================
# load beam results
#==============================================================================
# Load the reader from pyansys
import pyansys
from pyansys import examples
# Sample result file and associated archive file
rstfile = examples.rstfile
hexarchivefile = examples.hexarchivefile
# Create result reader object by loading the result file
result = pyansys.ResultReader(rstfile)
# Get beam natural frequencies
freqs = result.GetTimeValues()
# Get the node numbers in this result file
nnum = result.nnum
# Get the 1st bending mode shape. Nodes are ordered according to nnum.
disp = result.GetResult(0, True) # uses 0 based indexing
# Load CDB (necessary for display)
result.LoadArchive(hexarchivefile)
# Plot the displacement of Mode 0 in the x direction
result.PlotNodalResult(0, 'x', label='Displacement')
#==============================================================================
# Load KM
#==============================================================================
# Load the reader from pyansys
import pyansys
from pyansys import examples
filename = examples.fullfile
# Create result reader object and read in full file
fobj = pyansys.FullReader(filename)
fobj.LoadFullKM()
import numpy as np
from scipy.sparse import csc_matrix, linalg
ndim = fobj.nref.size
k = csc_matrix((fobj.kdata, (fobj.krows, fobj.kcols)), shape=(ndim, ndim))
m = csc_matrix((fobj.mdata, (fobj.mrows, fobj.mcols)), shape=(ndim, ndim))
# Solve
w, v = linalg.eigsh(k, k=20, M=m, sigma=10000)
# System natural frequencies
f = (np.real(w))**0.5/(2*np.pi)
print('First four natural frequencies')
for i in range(4):
    print('{:.3f} Hz'.format(f[i]))
| 26.747253 | 79 | 0.595727 | 298 | 2,434 | 4.848993 | 0.42953 | 0.038062 | 0.058824 | 0.049827 | 0.120415 | 0.120415 | 0.120415 | 0.085813 | 0.085813 | 0.085813 | 0 | 0.008026 | 0.129827 | 2,434 | 90 | 80 | 27.044444 | 0.674221 | 0.484799 | 0 | 0.162162 | 0 | 0 | 0.059543 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.243243 | null | null | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea7b9d12029f07974525dc659c2414d6e62953e4 | 643 | py | Python | conversion/octalToDecimal.py | slowy07/pythonApps | 22f9766291dbccd8185035745950c5ee4ebd6a3e | [
"MIT"
] | 10 | 2020-10-09T11:05:18.000Z | 2022-02-13T03:22:10.000Z | conversion/octalToDecimal.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
] | null | null | null | conversion/octalToDecimal.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
] | 6 | 2020-11-26T12:49:43.000Z | 2022-03-06T06:46:43.000Z | def octalToDecimal(octString: str)->str:
octString = str(octString).strip()
if not octString:
raise ValueError("empty string was passed to function")
isNegative = octString[0] == "-"
if isNegative:
octString = octString[1:]
if not all(0 <= int(char) <= 7 for char in octString):
raise ValueError("non octal value was passed to function")
decimalNumber = 0
for char in octString:
decimalNumber = 8 * decimalNumber + int(char)
if isNegative:
decimalNumber = -decimalNumber
return decimalNumber
if __name__ == '__main__':
from doctest import testmod
testmod()
| 30.619048 | 66 | 0.656299 | 73 | 643 | 5.671233 | 0.493151 | 0.057971 | 0.115942 | 0.091787 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012448 | 0.250389 | 643 | 20 | 67 | 32.15 | 0.846473 | 0 | 0 | 0.111111 | 0 | 0 | 0.127527 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.111111 | 0.055556 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
ea80ed40b25b2af9be9e4742f1f9e34326e94328 | 879 | py | Python | newsXtract.py | selection-bias-www2018/NewsXtract | 6b66024fea912ed5f34a5ac2fe051d9abf8e5ee2 | [
"BSD-3-Clause"
] | 1 | 2019-10-24T10:04:59.000Z | 2019-10-24T10:04:59.000Z | newsXtract.py | selection-bias-www2018/selection-bias-code | 6b66024fea912ed5f34a5ac2fe051d9abf8e5ee2 | [
"BSD-3-Clause"
] | null | null | null | newsXtract.py | selection-bias-www2018/selection-bias-code | 6b66024fea912ed5f34a5ac2fe051d9abf8e5ee2 | [
"BSD-3-Clause"
] | 1 | 2021-05-04T12:51:23.000Z | 2021-05-04T12:51:23.000Z | import os,json
import requests
BASE_URL = 'http://epfl.elasticsearch.spinn3r.com/content*/_search'
BULK_SIZE = 100
SPINN3R_SECRET = os.environ['SPINN3R_SECRET']
HEADERS = {
    'X-vendor': 'epfl',
    'X-vendor-auth': SPINN3R_SECRET
}

query = {
    "size": BULK_SIZE,
    "query": {
        "bool": {
            "must": {
                "match": {
                    "domain": "afp.com"
                }
            },
            "filter": {
                "range": {
                    "published": {
                        "gte": "18/02/2017",
                        "lte": "20/02/2017",
                        "format": "dd/MM/yyyy"
                    }
                }
            }
        }
    }
}

resp = requests.post(BASE_URL, headers=HEADERS, json=query)
resp_json = json.loads(resp.text)

titles = set()
for r in resp_json['hits']['hits']:
    t = r['_source']['title']
    if t not in titles:
        print(t)
        titles.add(t)
| 18.702128 | 68 | 0.482366 | 96 | 879 | 4.302083 | 0.604167 | 0.094431 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040422 | 0.352673 | 879 | 46 | 69 | 19.108696 | 0.685413 | 0 | 0 | 0 | 0 | 0 | 0.238908 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.052632 | null | null | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea8d3c9ed97f275534ba12a7a2adf9b3f80643b1 | 24,084 | py | Python | nyc_bike_flow.py | AngeloManzatto/NYCBikeFlow | cd7f936c4d4627e4a90e17d416fb1f628b2445c6 | [
"MIT"
] | 1 | 2020-09-09T01:36:57.000Z | 2020-09-09T01:36:57.000Z | nyc_bike_flow.py | AngeloManzatto/NYCBikeFlow | cd7f936c4d4627e4a90e17d416fb1f628b2445c6 | [
"MIT"
] | null | null | null | nyc_bike_flow.py | AngeloManzatto/NYCBikeFlow | cd7f936c4d4627e4a90e17d416fb1f628b2445c6 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Mon Aug 5 14:01:56 2019
@author: Angelo Antonio Manzatto
This implementation use ST-ResNet for inflow / outflow bike prediction on the city of NY
Article: https://arxiv.org/pdf/1610.00081.pdf
References and credits:
Junbo Zhang, Yu Zheng, Dekang Qi. Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction. In AAAI 2017.
The dataset can be download checking the information on the following link:
https://github.com/lucktroy/DeepST/tree/master/data/BikeNYC
"""
##################################################################################
# Libraries
##################################################################################
import os
import math
from datetime import datetime
from datetime import timedelta
import numpy as np
import h5py
import matplotlib.pyplot as plt
import matplotlib.cm
import seaborn as sns
sns.set()
import keras.backend as K
from keras.models import Model
from keras.layers import Input, Dense, Reshape, Activation, Add, LeakyReLU
from keras.layers import Conv2D , BatchNormalization, Lambda, concatenate
from keras.callbacks import ModelCheckpoint, CSVLogger, EarlyStopping
from keras.optimizers import Adam
from keras.engine.topology import Layer
np.random.seed(42) # My nickname Recruta42
############################################################################################
# Load Dataset
############################################################################################
dataset_folder = 'dataset'
dataset_file = os.path.join(dataset_folder,'NYC14_M16x8_T60_NewEnd.h5')
images_folder = 'images'
nyc_map = plt.imread(os.path.join(images_folder,'nyc.jpg'))
# Plot New York Map
f, ax = plt.subplots(figsize=(8,8))
ax.imshow(nyc_map)
# Load dataset file
f = h5py.File(dataset_file, 'r')
data = f['data'][()]
timestamps = f['date'][()]
# Convert data from [batch x flow matrices x map height x map width] to [batch x map height x map width x flow matrices]
data = np.transpose(data, (0, 2, 3, 1))
# Plot some samples from dataset
n_samples = 5
for i in range(n_samples):

    # define the size of images
    f, (ax1, ax2) = plt.subplots(1, 2)
    f.set_figwidth(12)
    f.set_figheight(8)

    # randomly select a sample
    idx = np.random.randint(0, len(data))
    inflow = data[idx][:, :, 0]   # input flow is the first matrix
    outflow = data[idx][:, :, 1]  # output flow is the second matrix
    date = datetime.strptime(timestamps[idx].decode("utf-8"), '%Y%m%d%H')

    hmax1 = sns.heatmap(inflow, cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax1)
    hmax1.imshow(nyc_map, aspect=hmax1.get_aspect(), extent=hmax1.get_xlim() + hmax1.get_ylim(), zorder=1)
    ax1.set_title('In Flow: {0}'.format(date))

    hmax2 = sns.heatmap(outflow, cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax2)
    hmax2.imshow(nyc_map, aspect=hmax2.get_aspect(), extent=hmax2.get_xlim() + hmax2.get_ylim(), zorder=1)
    ax2.set_title('Out Flow: {0}'.format(date))
############################################################################################
# Pre-Process Dataset
############################################################################################
# Convert timestamps from ASCII format to string
formated_timestamps = []
for ts in timestamps:
    formated_timestamps.append(ts.decode("utf-8"))
# Scale in flow and out flow values on the map matrices to a range between [-1,1]
min_value = data.min()
max_value = data.max()
print("Minimum values: {0} , Maximum value: {1}".format(min_value,max_value))
data_scaled = 1. * (data - min_value) / (max_value - min_value)
data_scaled = 2. * data_scaled - 1.
print("Minimum scaled values: {0} , Maximum scaled value: {1}".format(data_scaled.min(),data_scaled.max()))
############################################################################################
# Create Train / Target data
############################################################################################
'''
Minimum granularity will be 1 hour.

To create the input for our model we need to aggregate the inflow and outflow matrices according
to three intervals of time defined in the article as: closeness, period and trend.

For this project:

* Closeness is a 1-hour gap between two consecutive matrices
* Period is a 24-hour gap between two consecutive matrices
* Trend is a 7-day gap between two consecutive matrices

This means, for example, that for a (16 x 8 x 2) pair of inflow/outflow matrices collected
at time stamp 2014-08-07 01:00:00 we have to do the following transformations:

Input closeness = stack of `closeness_len` consecutive matrices, each one closeness interval apart.
Ex: len = 3, interval = 1 hour   -> stack [2014-08-07 01:00, 2014-08-07 02:00, 2014-08-07 03:00]

Input period = stack of `period_len` consecutive matrices, each one period interval apart.
Ex: len = 4, interval = 24 hours -> stack [2014-08-07 01:00, 2014-08-08 01:00, 2014-08-09 01:00, 2014-08-10 01:00]

Input trend = stack of `trend_len` consecutive matrices, each one trend interval apart.
Ex: len = 4, interval = 168 hours -> stack [2014-08-07 01:00, 2014-08-14 01:00, 2014-08-21 01:00, 2014-08-28 01:00]

This is an important point: the dataset should have little or, ideally, no gap between two
consecutive inflow/outflow matrices, i.e. we should avoid missing hours.
'''
# Simple function that receives a string in YmdH format and converts it to a datetime object
def str_to_date(timestamp):
    # We can't directly parse the data using datetime.strptime(ts, '%Y%m%d%H')
    # because the hours are in 01-24 format instead of 00-23
    year, month, day, hour = int(timestamp[:4]), int(timestamp[4:6]), int(timestamp[6:8]), int(timestamp[8:]) - 1
    converted_time = datetime(year, month, day, hour)
    return converted_time
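# Example (editor's note): hour stamps run 01-24, so
# str_to_date('2014040124') -> datetime(2014, 4, 1, 23).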
# Convert timestamp to a one-hot encoded vector taking into account the weekday and whether it is a weekend
def one_hot_day_week(timestamp):
    converted_time = str_to_date(timestamp)
    i = converted_time.weekday()

    one_hot_encoded = np.zeros((8))

    # Day of week (Monday = 0, ..., Sunday = 6) encoder
    one_hot_encoded[i] = 1

    # Weekday / weekend flag: 1 on weekdays, 0 on weekends
    if i >= 5:
        one_hot_encoded[7] = 0
    else:
        one_hot_encoded[7] = 1

    return one_hot_encoded
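# Example (editor's note): 2014-04-01 was a Tuesday (weekday() == 1), so
# one_hot_day_week('2014040108') -> [0, 1, 0, 0, 0, 0, 0, 1]
# (the last slot is 1 on weekdays and 0 on weekends).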
closeness_interval = 1 # distance between hours
period_interval = 24 * closeness_interval # number of time intervals in one day
trend_interval = 7 * period_interval
closeness_len = 3 # recent time (closeness)
period_len = 4 # near history (period)
trend_len = 4 # distant history (trend)
closeness_range = [x * closeness_interval for x in range(1,closeness_len+1)]
period_range = [x * period_interval for x in range(1,period_len + 1)]
trend_range = [x * trend_interval for x in range(1,trend_len+1)]
# Build a dictionary of time stamps. This will ease our work to convert between timestamps to indices to get
# the in/out flow matrices.
ts_dict = {}
ts_list = []
for i, ts in enumerate(formated_timestamps):
    converted_time = str_to_date(ts)

    # Add converted time to a list for iteration and to a dictionary for lookups
    ts_list.append(converted_time)
    ts_dict[converted_time] = i
# Create X, y data
X_Closeness, X_Period, X_Trend, X_External, Y , Y_timestamp = [],[],[],[],[],[]
# Create the datasets for closeness, period and trend.
# Since we predict future frames, we build the dataset starting from the first
# timestamp that has a full trend history behind it.
starting_period = trend_interval * trend_len

# We construct the X, y datasets by looking backwards in time, from the latest trend down to the closest hour
for i in range(starting_period, len(formated_timestamps)):

    # Target date
    date = str_to_date(formated_timestamps[i])

    check_dates = []

    # Get all dates in the closeness, period and trend ranges before the target
    for c in closeness_range:
        check_dates.append(date - timedelta(hours=c))

    for p in period_range:
        check_dates.append(date - timedelta(hours=p))

    for t in trend_range:
        check_dates.append(date - timedelta(hours=t))

    # Check that all selected dates exist in our timestamp dictionary; if not, skip this sample
    break_flag = False
    for check_date in check_dates:
        if check_date not in ts_dict:
            print("Date frame missing!: {0} ".format(formated_timestamps[i]))
            break_flag = True

    if break_flag:
        continue

    # Parse again to create the dataset, stacking the time ranges for closeness, period and trend

    # X Closeness
    xc = []
    for c in closeness_range:
        xc.append(data_scaled[ts_dict[date - timedelta(hours=c)]])
    xc = np.concatenate(xc, axis=-1)

    # X Period
    xp = []
    for p in period_range:
        xp.append(data_scaled[ts_dict[date - timedelta(hours=p)]])
    xp = np.concatenate(xp, axis=-1)

    # X Trend
    xt = []
    for t in trend_range:
        xt.append(data_scaled[ts_dict[date - timedelta(hours=t)]])
    xt = np.concatenate(xt, axis=-1)

    # Target
    y = data_scaled[ts_dict[date]]

    # Add each created sample to the final datasets
    X_Closeness.append(xc)
    X_Period.append(xp)
    X_Trend.append(xt)
    X_External.append(one_hot_day_week(formated_timestamps[i]))
    Y.append(y)
    Y_timestamp.append(formated_timestamps[i])
X_Closeness = np.asarray(X_Closeness)
X_Period = np.asarray(X_Period)
X_Trend = np.asarray(X_Trend)
X_External = np.asarray(X_External)
Y = np.asarray(Y)
print("X_Closeness shape: ", X_Closeness.shape)
print("X_Period shape: ", X_Period.shape)
print("X_Trend shape: ", X_Trend.shape)
print("X_External shape: ", X_External.shape)
print( "Y shape:", Y.shape)
############################################################################################
# Split dataset into Train / Test
############################################################################################
days_test = 10
n_test = 24 * days_test
# Split dataset into training / test sets
XC_train, XP_train, XT_train,XE_train, Y_train = X_Closeness[:-n_test], X_Period[:-n_test], X_Trend[:-n_test],X_External[:-n_test], Y[:-n_test]
XC_test, XP_test, XT_test, XE_test, Y_test = X_Closeness[-n_test:], X_Period[-n_test:], X_Trend[-n_test:],X_External[-n_test:], Y[-n_test:]
# Time stamp split so we can track the period
timestamp_train, timestamp_test = Y_timestamp[:-n_test], Y_timestamp[-n_test:]
# Concatenate closeness , period and trend
X_train = [XC_train,XP_train,XT_train,XE_train]
X_test = [XC_test,XP_test,XT_test,XE_test]
print("X Train size: ", len(X_train))
print("X Test size: ", len(X_test))
############################################################################################
# Spatial Temporal Residual Network
############################################################################################
############################################################################################
# ResNet Identity Block
############################################################################################
def identity_block(inputs, filters, block_id):

    x = BatchNormalization(name='block_' + block_id + '_identity_batch_1')(inputs)
    x = Activation('relu', name='block_' + block_id + '_identity_relu_1')(x)
    x = Conv2D(filters, kernel_size=(3, 3), strides=(1, 1), padding='same', kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_1')(x)

    x = BatchNormalization(name='block_' + block_id + '_identity_batch_2')(x)
    x = Activation('relu', name='block_' + block_id + '_identity_relu_2')(x)
    x = Conv2D(filters, kernel_size=(3, 3), strides=(1, 1), padding='same', kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_2')(x)

    x = Add(name='block_' + block_id + '_add')([inputs, x])

    return x
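# Editor's note: the residual Add requires `filters` to equal the channel count
# of `inputs`; the 'same' padding keeps the spatial dimensions unchanged, so
# identity_block is shape-preserving.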
############################################################################################
# ResNet bottleNeck block
############################################################################################
def bottleneck_block(inputs, kernel_size, filters, block_id):

    f1, f2, f3 = filters

    x = Conv2D(f1, kernel_size=(1, 1), use_bias=False, kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_1')(inputs)
    x = BatchNormalization(name='block_' + block_id + '_identity_batch_1')(x)
    x = Activation('relu', name='block_' + block_id + '_identity_relu_1')(x)

    x = Conv2D(f2, kernel_size=kernel_size, padding='same', use_bias=False, kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_2')(x)
    x = BatchNormalization(name='block_' + block_id + '_identity_batch_2')(x)
    x = Activation('relu', name='block_' + block_id + '_identity_relu_2')(x)

    x = Conv2D(f3, kernel_size=(1, 1), use_bias=False, kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_3')(x)
    x = BatchNormalization(name='block_' + block_id + '_identity_batch_3')(x)

    x = Add(name='block_' + block_id + '_add')([x, inputs])
    x = Activation('relu', name='block_' + block_id + '_identity_relu_3')(x)

    return x
############################################################################################
# ResNetXt group block
############################################################################################
def grouped_block(inputs, filters, cardinality, block_id):

    assert not filters % cardinality

    convolution_groups = []
    n_convs = filters // cardinality

    for j in range(cardinality):
        # Bind j as a default argument so each Lambda keeps its own group index
        group = Lambda(lambda z, j=j: z[:, :, :, j * n_convs:j * n_convs + n_convs])(inputs)
        convolution_groups.append(Conv2D(n_convs, kernel_size=(3, 3), strides=(1, 1), padding='same')(group))

    x = concatenate(convolution_groups, name='block_Xt' + block_id + '_concatenate')

    return x
############################################################################################
# ResNet bottleNeck block
############################################################################################
def resnetXt_block(inputs, filters, cardinality, block_id):

    f1, f2, f3 = filters

    x = Conv2D(f1, kernel_size=(1, 1), use_bias=False, kernel_initializer='he_normal', name='block_' + block_id + '_xt_conv2d_1')(inputs)
    x = BatchNormalization(name='block_' + block_id + '_xt_batch_1')(x)
    x = LeakyReLU(name='block_' + block_id + '_identity_leakyrelu_1')(x)

    x = grouped_block(x, f2, cardinality, block_id)
    x = BatchNormalization(name='block_' + block_id + '_identity_batch_2')(x)
    x = Activation('relu', name='block_' + block_id + '_identity_relu_2')(x)

    x = Conv2D(f3, kernel_size=(1, 1), use_bias=False, kernel_initializer='he_normal', name='block_' + block_id + '_identity_conv2d_3')(x)
    x = BatchNormalization(name='block_' + block_id + '_identity_batch_3')(x)

    x = Add(name='block_' + block_id + '_add')([x, inputs])
    x = LeakyReLU(name='block_' + block_id + '_identity_leakyrelu_relu_3')(x)

    return x
############################################################################################
# Fusion Block
############################################################################################
class FusionLayer(Layer):

    def __init__(self, **kwargs):
        super(FusionLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1:]),
                                      initializer='uniform',
                                      trainable=True)
        super(FusionLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x, mask=None):
        return x * self.kernel

    def get_output_shape_for(self, input_shape):
        return input_shape
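# Editor's note: FusionLayer is an elementwise, learnable scaling -- one weight
# per (height, width, flow) cell -- so Add([FusionLayer()(x_c), ...]) realizes
# the parametric-matrix fusion W_c * X_c + W_p * X_p + W_t * X_t of the
# ST-ResNet paper, with '*' denoting the Hadamard product.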
############################################################################################
# ST-ResNet version 1
############################################################################################
def STResNet_v1(c_conf=(32, 32, 2, 3),
                p_conf=(32, 32, 2, 3),
                t_conf=(32, 32, 2, 3),
                output_shape=(32, 32, 2),
                res_units=3,
                external_dim=None):

    height, width, n_flows = output_shape

    main_inputs = []

    Input_c = Input(shape=(c_conf[0], c_conf[1], c_conf[2] * c_conf[3]), name='input_c')
    Input_p = Input(shape=(p_conf[0], p_conf[1], p_conf[2] * p_conf[3]), name='input_p')
    Input_t = Input(shape=(t_conf[0], t_conf[1], t_conf[2] * t_conf[3]), name='input_t')

    main_inputs.append(Input_c)
    main_inputs.append(Input_p)
    main_inputs.append(Input_t)

    # Input
    x_c = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_input_c')(Input_c)
    x_p = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_input_p')(Input_p)
    x_t = Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_input_t')(Input_t)

    for i in range(res_units):
        x_c = identity_block(x_c, 64, block_id=str(i) + '_c')
        x_p = identity_block(x_p, 64, block_id=str(i) + '_p')
        x_t = identity_block(x_t, 64, block_id=str(i) + '_t')

    # project back to n_flows channels so the three branches (and the external
    # branch below) can be added elementwise
    x_c = Conv2D(n_flows, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_output_c')(x_c)
    x_p = Conv2D(n_flows, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_output_p')(x_p)
    x_t = Conv2D(n_flows, kernel_size=(3, 3), strides=(1, 1), padding="same", name='conv_output_t')(x_t)

    # Fusion Layers
    x_c = FusionLayer()(x_c)
    x_p = FusionLayer()(x_p)
    x_t = FusionLayer()(x_t)

    fusion = Add(name='temporal_fusion')([x_c, x_p, x_t])

    #########################################################################
    # External Block
    #########################################################################
    if external_dim is not None and external_dim > 0:

        # Concatenate external inputs with temporal inputs
        external_input = Input(shape=(external_dim,), name='external_input')
        main_inputs.append(external_input)

        embedding = Dense(10, name='external_dense_1')(external_input)
        embedding = Activation('relu')(embedding)
        embedding = Dense(height * width * n_flows)(embedding)
        embedding = Activation('relu')(embedding)
        external_output = Reshape((height, width, n_flows), name='external_output')(embedding)

        # Fuse with external output
        fusion = Add(name='external_fusion')([fusion, external_output])

    final_output = Activation('tanh', name='Tanh')(fusion)

    model = Model(inputs=main_inputs, outputs=final_output)

    return model
############################################################################################
# Training pipeline
############################################################################################
# Metric for our model
def rmse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true)) ** 0.5
# Hyperparameters
epochs = 500
batch_size = 32
learning_rate = 0.0002
# callbacks
model_path = 'saved_models'
# File were the best model will be saved during checkpoint
model_file = os.path.join(model_path,'nyc_bike_flow.h5')
# Early stop to avoid overfitting our model
early_stopping = EarlyStopping(monitor='val_rmse', patience=5, mode='min')
# Check point for saving the best model
check_pointer = ModelCheckpoint(model_file, monitor='val_rmse', mode='min',verbose=1, save_best_only=True)
# Heatmap parameters
map_height = 16
map_width = 8
n_flows = 2
c_conf=(map_height, map_width, n_flows, closeness_len) # closeness
p_conf=(map_height, map_width, n_flows, period_len) # period
t_conf=(map_height, map_width, n_flows, trend_len) # trend
output_shape=(map_height, map_width, n_flows)
external_dim = 8
# Create ST-ResNet Model
model = STResNet_v1(c_conf, p_conf, t_conf, output_shape, res_units=3, external_dim=external_dim)
# Create Optimizer
optimizer = Adam(lr=learning_rate)
model.compile(optimizer, loss='mse' , metrics=[rmse])
model.summary()
# Train the model
history = model.fit(X_train, Y_train,
                    epochs=epochs,
                    batch_size=batch_size,
                    validation_split=0.1,
                    callbacks=[check_pointer, early_stopping],
                    verbose=1)
############################################################################################
# Predict
############################################################################################
# If we want to test on a pre trained model use the following line
model.load_weights(os.path.join(model_path,'bikenyc-0.0020.h5'), by_name=False)
n_samples = 3
for i in range(n_samples):
    f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4)
    f.set_figwidth(14)
    f.set_figheight(6)
    # Randomly select a test sample
    idx = np.random.randint(0, len(X_test[0]))
    # Add a batch dimension to each input so it matches the model's input shape
    X = [inp[idx][np.newaxis, ...] for inp in X_test]
    y_true = Y_test[idx]
    # Predict values using our trained model
    y_pred = model.predict(X)
    y_pred = np.squeeze(y_pred)
    hmax1 = sns.heatmap(y_true[:, :, 0], cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax1)
    hmax1.imshow(nyc_map, aspect=hmax1.get_aspect(), extent=hmax1.get_xlim() + hmax1.get_ylim(), zorder=1)
    ax1.set_title('True In Flow: {0}'.format(timestamps[idx].decode("utf-8")))
    hmax2 = sns.heatmap(y_pred[:, :, 0], cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax2)
    hmax2.imshow(nyc_map, aspect=hmax2.get_aspect(), extent=hmax2.get_xlim() + hmax2.get_ylim(), zorder=1)
    ax2.set_title('Pred In Flow: {0}'.format(timestamps[idx].decode("utf-8")))
    hmax3 = sns.heatmap(y_true[:, :, 1], cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax3)
    hmax3.imshow(nyc_map, aspect=hmax3.get_aspect(), extent=hmax3.get_xlim() + hmax3.get_ylim(), zorder=1)
    ax3.set_title('True Out Flow: {0}'.format(timestamps[idx].decode("utf-8")))
    hmax4 = sns.heatmap(y_pred[:, :, 1], cmap=matplotlib.cm.winter, alpha=0.3, annot=False, zorder=2, ax=ax4)
    hmax4.imshow(nyc_map, aspect=hmax4.get_aspect(), extent=hmax4.get_xlim() + hmax4.get_ylim(), zorder=1)
    ax4.set_title('Pred Out Flow: {0}'.format(timestamps[idx].decode("utf-8")))
############################################################################################
# Evaluate
############################################################################################
# This information was provided in the original article and its accompanying file.
'''
For the NYC Bike data there are 81 available grid-based areas, each of
which includes at least ONE bike station. Therefore, we modify the final
RMSE by multiplying it by the following factor (i.e., m_factor).
'''
nb_area = 81
m_factor = math.sqrt(1. * map_height * map_width / nb_area)
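# Worked example: with a 16 x 8 grid and 81 valid areas,
# m_factor = sqrt(16 * 8 / 81) = sqrt(128 / 81) ≈ 1.2571. The "real" RMSE below
# additionally undoes the [-1, 1] min-max scaling via (max_value - min_value) / 2.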
score = model.evaluate(X_train, Y_train, batch_size=Y_train.shape[0] // 48, verbose=0)
print('Train score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
      (score[0], score[1], score[1] * (max_value - min_value) / 2. * m_factor))
score = model.evaluate(X_test, Y_test, batch_size=Y_test.shape[0], verbose=0)
print('Test score: %.6f rmse (norm): %.6f rmse (real): %.6f' %
      (score[0], score[1], score[1] * (max_value - min_value) / 2. * m_factor))
| 40.073211 | 161 | 0.594918 | 3,235 | 24,084 | 4.236167 | 0.167852 | 0.017878 | 0.026562 | 0.030356 | 0.332823 | 0.290864 | 0.256567 | 0.242119 | 0.221468 | 0.195271 | 0 | 0.030628 | 0.17576 | 24,084 | 600 | 162 | 40.14 | 0.659715 | 0.125685 | 0 | 0.122867 | 0 | 0 | 0.094438 | 0.00453 | 0 | 0 | 0 | 0 | 0.003413 | 0 | null | null | 0 | 0.054608 | null | null | 0.040956 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
ea8ffc8379c8866cb56248edf88ff78b6a856f02 | 1,838 | py | Python | Breast_cancer_prediction1.py | HagerBesar/Breast_cancer_prediction1 | f391a37f8064cabefdf9c416f2dbb40e3bd0e98a | ["MIT"] | 1 | 2021-03-23T15:03:39.000Z | 2021-03-23T15:03:39.000Z | Breast_cancer_prediction1.py | HagerBesar/Breast_cancer_prediction1 | f391a37f8064cabefdf9c416f2dbb40e3bd0e98a | ["MIT"] | null | null | null | Breast_cancer_prediction1.py | HagerBesar/Breast_cancer_prediction1 | f391a37f8064cabefdf9c416f2dbb40e3bd0e98a | ["MIT"] | null | null | null |
#!/usr/bin/env python
# coding: utf-8
# In[ ]:
####################################<<<<Breast_cancer_prediction>>>>>>####################################
# In[ ]:
# Part (1) -- By: Manar Moeanse
# In[1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# In[2]:
DB = pd.read_csv('Breast_cancer_data.csv')
DB
# In[3]:
DB.head(5)
# In[4]:
DB.describe()
# In[ ]:
DB.info()
# In[ ]:
# Part (2) -- By: Mariam Mamdoh
# In[5]:
uneff = DB[DB.diagnosis == 0]
eff = DB[DB.diagnosis == 1]
len(uneff)
# In[ ]:
len(eff)
# In[6]:
unaffected = (len(uneff) / len(DB)) * 100
print('Unaffected people =', unaffected, '%.')
affected = (len(eff) / len(DB)) * 100
print('Affected people =', affected, '%.')
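# Equivalent class-balance check in one line (same DB frame, standard pandas):
print(DB['diagnosis'].value_counts(normalize=True) * 100)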
# In[ ]:
# Part (3) -- By: Hemat Shawky
# In[7]:
plt.scatter(DB['diagnosis'],DB['mean_area'])
# In[8]:
plt.scatter(DB['mean_area'],DB['mean_texture'])
# In[9]:
plt.scatter(DB['mean_radius'],DB['mean_perimeter'])
# In[10]:
import seaborn as sns
sns.pairplot(data=DB)
# In[ ]:
# Part (4) -- By: Hager Mohamed
# In[11]:
x = DB.drop('diagnosis', axis=1)
y = DB['diagnosis']
x
# In[12]:
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
print(x_train.shape,x_test.shape)
print(y_train.shape,y_test.shape)
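# For a reproducible split you may want to fix the seed (illustrative value):
#   x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)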
# In[13]:
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# In[14]:
model.fit(x_train,y_train)
# In[15]:
pred =model.predict(x_test)
# In[16]:
from sklearn.metrics import mean_squared_error
# In[17]:
error=np.sqrt(mean_squared_error(y_pred=pred,y_true=y_test))
print(error)
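# Note: scikit-learn >= 0.22 can compute RMSE directly (and recent releases
# provide sklearn.metrics.root_mean_squared_error), e.g.:
#   error = mean_squared_error(y_true=y_test, y_pred=pred, squared=False)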
# In[18]:
print(model.score(x_test,y_test))
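# Note: for LinearRegression, .score() returns the R^2 coefficient, not
# classification accuracy. Since 'diagnosis' is binary, a classifier such as
# sklearn.linear_model.LogisticRegression would be the more conventional choice.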
# In[ ]:
"""" BY:
1-Manar Moeanse.
2-Mariam Mamdoh.
3-Hemat Shawky.
4-Hager Mohamed.
| 10.268156 | 106 | 0.600653 | 279 | 1,838 | 3.824373 | 0.34767 | 0.028116 | 0.033739 | 0.024367 | 0.041237 | 0.041237 | 0 | 0 | 0 | 0 | 0 | 0.031725 | 0.176823 | 1,838 | 178 | 107 | 10.325843 | 0.673496 | 0 | 0 | 0 | 0 | 0 | 0.11362 | 0.016023 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.170732 | null | null | 0.146341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |