hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d8bd8bd8d533ca30262aba2bf7317396c7dab909 | 443 | py | Python | blender/arm/logicnode/native/LN_read_storage.py | onelsonic/armory | 55cfead0844923d419d75bf4bd677ebed714b4b5 | [
"Zlib"
] | 2,583 | 2016-07-27T08:25:47.000Z | 2022-03-31T10:42:17.000Z | blender/arm/logicnode/native/LN_read_storage.py | onelsonic/armory | 55cfead0844923d419d75bf4bd677ebed714b4b5 | [
"Zlib"
] | 2,122 | 2016-07-31T14:20:04.000Z | 2022-03-31T20:44:14.000Z | blender/arm/logicnode/native/LN_read_storage.py | onelsonic/armory | 55cfead0844923d419d75bf4bd677ebed714b4b5 | [
"Zlib"
] | 451 | 2016-08-12T05:52:58.000Z | 2022-03-31T01:33:07.000Z | from arm.logicnode.arm_nodes import *
class ReadStorageNode(ArmLogicTreeNode):
"""Reads a stored content.
@seeNode Write Storage"""
bl_idname = 'LNReadStorageNode'
bl_label = 'Read Storage'
arm_section = 'file'
arm_version = 1
def arm_init(self, context):
self.add_input('ArmStringSocket', 'Key')
self.add_input('ArmStringSocket', 'Default')
self.add_output('ArmDynamicSocket', 'Value')
| 26.058824 | 52 | 0.677201 | 49 | 443 | 5.938776 | 0.734694 | 0.072165 | 0.082474 | 0.185567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002841 | 0.205418 | 443 | 16 | 53 | 27.6875 | 0.823864 | 0.106095 | 0 | 0 | 0 | 0 | 0.243523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.1 | 0 | 0.7 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d8bfc1ac4a0ba5561449846faf72e97b38a7b701 | 320 | py | Python | apolo/apolo/apps/adopcion/models.py | XeresRed/TrabajoGrado | 3d7135732ea76410995ed3b5c3332eb83881677c | [
"MIT"
] | null | null | null | apolo/apolo/apps/adopcion/models.py | XeresRed/TrabajoGrado | 3d7135732ea76410995ed3b5c3332eb83881677c | [
"MIT"
] | null | null | null | apolo/apolo/apps/adopcion/models.py | XeresRed/TrabajoGrado | 3d7135732ea76410995ed3b5c3332eb83881677c | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class Persona(AbstractUser):
activos = models.CharField(blank=True, max_length=100)
numeroEmpl = models.IntegerField(blank=True, null=True)
tipo = models.CharField(blank=True, max_length=100)
| 26.666667 | 59 | 0.759375 | 42 | 320 | 5.738095 | 0.571429 | 0.112033 | 0.165975 | 0.19917 | 0.298755 | 0.298755 | 0.298755 | 0 | 0 | 0 | 0 | 0.021978 | 0.146875 | 320 | 11 | 60 | 29.090909 | 0.860806 | 0.075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 3 |
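`Persona` subclasses `AbstractUser`, so it replaces Django's built-in user model; that only takes effect if settings point at it. A minimal sketch, assuming the app label is `adopcion` (inferred from the path `apolo/apps/adopcion/models.py` and possibly different in the real project):

```python
# settings.py (sketch): route Django's auth machinery to the custom user
# model.  'adopcion' is an assumed app label, not confirmed by the source.
AUTH_USER_MODEL = "adopcion.Persona"
```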
d8c08d175a1cbd3180bab825774c67e642b794d4 | 1,731 | py | Python | generator/framework/util/cfg_generator.py | sinsay/ds_generator | 9365e22e8730418caf29b8ed6ada1f30f936a297 | [
"Apache-2.0"
] | null | null | null | generator/framework/util/cfg_generator.py | sinsay/ds_generator | 9365e22e8730418caf29b8ed6ada1f30f936a297 | [
"Apache-2.0"
] | null | null | null | generator/framework/util/cfg_generator.py | sinsay/ds_generator | 9365e22e8730418caf29b8ed6ada1f30f936a297 | [
"Apache-2.0"
] | null | null | null | class HasIdent(object):
"""
Base class used to avoid circular references with CfgGenerator
"""
def __init__(self, ident: int = 0):
self.ident = ident
def increase_ident(self):
self.ident += 1
def decrease_ident(self):
self.ident -= 1
class CfgGeneratorIdent(object):
def __init__(self, gen: HasIdent):
self.gen = gen
def __enter__(self):
self.gen.increase_ident()
def __exit__(self, exc_type, exc_val, exc_tb):
self.gen.decrease_ident()
class CfgGenerator(HasIdent):
def __init__(self, step: int = 4):
super(CfgGenerator, self).__init__()
self.step = step
self.conf = []
def to_cfg_string(self):
"""
Get the generated configuration string
:return:
"""
return "".join(self.conf)
def with_ident(self):
"""
Return a context object that saves the indentation state, for use in a with statement
:return:
"""
return CfgGeneratorIdent(self)
def increase_ident(self, ident: int = 1):
"""
Manually increase or decrease the indentation state
:param ident:
:return:
"""
self.ident += ident
def append_with(self, conf: str = "", new_line: bool = True, with_ident: bool = True):
"""
Append a piece of configuration; when new_line or with_ident is passed, a newline and/or indent is added for it
:param conf:
:param new_line:
:param with_ident:
:return:
"""
ident = "" if not with_ident else self.ident_str()
if conf:
self.conf.append("%s%s" % (ident, conf))
if new_line:
self.conf.append("\n")
def ident_str(self):
return self.ident * self.step * " "
def str_with_ident(self, s) -> str:
"""
Build a string using the current indentation
"""
return "%s%s" % (self.ident_str(), s)
| 22.192308 | 90 | 0.536684 | 196 | 1,731 | 4.494898 | 0.285714 | 0.091941 | 0.037457 | 0.038593 | 0.043133 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004371 | 0.33911 | 1,731 | 77 | 91 | 22.480519 | 0.765734 | 0.137493 | 0 | 0 | 0 | 0 | 0.008475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.371429 | false | 0 | 0 | 0.028571 | 0.571429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
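A self-contained usage sketch of the CfgGenerator API above; the classes are repeated so the snippet runs on its own, with the always-truthy `if str:` guard in `append_with` written as `if conf:`:

```python
# Minimal reproduction of the CfgGenerator classes plus a usage example.

class HasIdent:
    """Base class used to avoid circular references with CfgGenerator."""
    def __init__(self, ident: int = 0):
        self.ident = ident

    def increase_ident(self):
        self.ident += 1

    def decrease_ident(self):
        self.ident -= 1


class CfgGeneratorIdent:
    """Context manager that bumps the indent on entry and restores it on exit."""
    def __init__(self, gen: HasIdent):
        self.gen = gen

    def __enter__(self):
        self.gen.increase_ident()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.gen.decrease_ident()


class CfgGenerator(HasIdent):
    def __init__(self, step: int = 4):
        super().__init__()
        self.step = step
        self.conf = []

    def to_cfg_string(self):
        return "".join(self.conf)

    def with_ident(self):
        return CfgGeneratorIdent(self)

    def append_with(self, conf="", new_line=True, with_ident=True):
        ident = "" if not with_ident else self.ident_str()
        if conf:  # the original tests `if str:`, which is always truthy
            self.conf.append("%s%s" % (ident, conf))
        if new_line:
            self.conf.append("\n")

    def ident_str(self):
        return self.ident * self.step * " "


# Build a tiny two-level config: the `with` block indents nested entries.
gen = CfgGenerator(step=2)
gen.append_with("server:")
with gen.with_ident():
    gen.append_with("port: 8080")
```

With `step=2`, the nested entry is indented by two spaces, so `to_cfg_string()` returns `"server:\n  port: 8080\n"`.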
d8c1279c1f035fd1c0ca93502531ba20b1cf610a | 2,323 | py | Python | app/product/tests/test_product_api.py | RamzeyXD/varanus-ecommerce-api | 4688fc393b73d70a4923d471006caee2ec624f68 | [
"MIT"
] | null | null | null | app/product/tests/test_product_api.py | RamzeyXD/varanus-ecommerce-api | 4688fc393b73d70a4923d471006caee2ec624f68 | [
"MIT"
] | 5 | 2021-03-19T04:52:44.000Z | 2021-09-22T19:12:07.000Z | app/product/tests/test_product_api.py | RamzeyXD/varanus-ecommerce-api | 4688fc393b73d70a4923d471006caee2ec624f68 | [
"MIT"
] | null | null | null | from django.contrib.auth import get_user_model
from django.urls import reverse
from django.test import TestCase
from rest_framework import status
from rest_framework.test import APIClient
from core.models import Product
from product.serializers import ProductSerializer
PRODUCTS_URL = reverse('product:product-list')
def detail_url(product_slug):
"""Return product detail URL"""
return reverse('product:product-detail', args=[product_slug])
def sample_product(**params):
"""Create and return sample product"""
defaults = {
'name': 'TestNameCase',
'description': "test description for test Product",
'cost': 45
}
defaults.update(params)
return Product.objects.create(**defaults)
class PublicProductsApiTests(TestCase):
"""Test the publicly available products API"""
def setUp(self):
self.client = APIClient()
def test_login_required(self):
"""Test that login is required to access the endpoint"""
res = self.client.get(PRODUCTS_URL)
self.assertEqual(res.status_code, status.HTTP_401_UNAUTHORIZED)
class PrivateProductApiTests(TestCase):
"""Test products can be retrieved by authorized user"""
def setUp(self):
self.client = APIClient()
self.user = get_user_model().objects.create_user(
email='TestMail@gmail.com',
password='TestPassword123'
)
self.client.force_authenticate(self.user)
def test_retrieve_product_list(self):
"""Test retrieving list of products"""
params = {
'name': 'TestProduct',
'description': 'Test description for second test product',
'cost': 5.00
}
sample_product(**params)
sample_product()
products = Product.objects.all()
serializer = ProductSerializer(products, many=True)
res = self.client.get(PRODUCTS_URL)
self.assertEqual(res.status_code, status.HTTP_200_OK)
self.assertEqual(res.data, serializer.data)
def test_view_product_detail(self):
"""Test viewing product detail"""
product = sample_product()
url = detail_url(product.slug)
res = self.client.get(url)
serializer = ProductSerializer(product)
self.assertEqual(serializer.data, res.data)
| 28.679012 | 71 | 0.671545 | 262 | 2,323 | 5.828244 | 0.358779 | 0.039293 | 0.02554 | 0.031434 | 0.125737 | 0.125737 | 0.085134 | 0.085134 | 0.085134 | 0.085134 | 0 | 0.007795 | 0.226862 | 2,323 | 80 | 72 | 29.0375 | 0.842428 | 0.112355 | 0 | 0.117647 | 0 | 0 | 0.103159 | 0.010859 | 0 | 0 | 0 | 0 | 0.078431 | 1 | 0.137255 | false | 0.019608 | 0.137255 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8c141a49a479e74699dc9b65661ce60383e9e67 | 4,686 | py | Python | src/face_feature.py | ryota0051/facial_expressions | 763f1108fc56f5360fbd6603e0dc3e40c27a3d1b | [
"MIT"
] | null | null | null | src/face_feature.py | ryota0051/facial_expressions | 763f1108fc56f5360fbd6603e0dc3e40c27a3d1b | [
"MIT"
] | null | null | null | src/face_feature.py | ryota0051/facial_expressions | 763f1108fc56f5360fbd6603e0dc3e40c27a3d1b | [
"MIT"
] | null | null | null | import os
from typing import Dict, List
import json
import tensorflow as tf
import numpy as np
from type_def import BOUNDARY_BOX_TYPE, PERSONAL_INFO_TYPE
class FaceFeatureExtractor():
def __init__(self, base_model_path: str, nationality_model_path: str, label_path: str) -> None:
'''Load the model and label files this extractor needs.
Parameter
----------
base_model_path: path to the convolution-only part of MobileNetV2
nationality_model_path: path to the nationality model
label_path: path to the file mapping each model's numeric outputs to strings
    Example file contents:
    {
        "gender": {
            0: "female",
            1: "male"
        },
        ...
    }
'''
self.base_model = self.__load_model(base_model_path)
self.nationality_model = self.__load_model(nationality_model_path)
self.labels = self.__load_labels(label_path)
def get_personal_data_from_faces(self, img_batch: np.array, rect_list: BOUNDARY_BOX_TYPE) -> PERSONAL_INFO_TYPE:
'''Predict personal attributes (here: nationality) from face image data.
Parameter
----------
img_batch: batch of face images
rect_list: face bounding-box coordinates
Returns
----------
Example:
[
    {
        "coodinate": [x, y, W, H],
        "attribute": {
            "nationality": "japanese"
        }
    },
    ...
]
'''
features = self.get_feature_batch(img_batch)
features = features.reshape(len(features), -1)
# nationality prediction
nationality_list = self.predict_facial_expression(features, self.nationality_model)
result_list = [None] * len(rect_list)
assert len(rect_list) == len(nationality_list)
for i, (rect, nationality) in enumerate(zip(rect_list, nationality_list)):
result = {'coodinate': None, 'attribute': {}}
result['coodinate'] = list(rect)
result['attribute']['nationality'] = self.labels['nationality'][str(nationality)]
result_list[i] = result
return result_list
def get_feature_batch(self, img_batch: np.array) -> np.array:
'''Extract per-batch features from the underlying MobileNet.
Parameter
---------
img_batch: batch of face images
Returns
---------
features output by MobileNet
'''
assert isinstance(img_batch, np.ndarray)
assert img_batch.ndim == 4
x = tf.keras.applications.mobilenet_v2.preprocess_input(img_batch)
features = self.base_model.predict(x)
return features
def predict_facial_expression(
self,
features: np.array,
model: 'trained prediction-head model') -> List[int]:
'''Predict a facial attribute with the given model.
Parameter
---------
features: features fed into model
model: attribute prediction model (this relies on Keras's
predict_classes, which returns class labels; models from other
frameworks should be wrapped in a class exposing the same method.)
Returns
---------
list whose elements are the predicted numeric labels
'''
return model.predict_classes(features).tolist()
def __load_labels(self, label_path: str) -> Dict[str, Dict[str, str]]:
'''Load, from a JSON file, the dictionaries mapping one-hot-vector indices to strings.
Parameter
----------
label_path: path to the JSON file holding the string dictionary for each model's labels
Returns
----------
dictionary of the strings each one-hot vector stands for
Example:
{
"gender":
{
"0": "female",
"1": "male"
},
"age": {
"0": "10代",
"1": "20代",
"2": "30代",
"3": "40代",
"4": "50代"
},
"race":
{
"0": "Asian",
"1": "Black",
"2": "Indian",
"3": "others",
"4": "White"
}
}
'''
self.__check_file_exists(label_path)
with open(label_path, 'r') as f:
labels = json.load(f)
return labels
def __load_model(self, model_path: str) -> 'keras model':
'''Load a Keras model.
Parameter
----------
model_path: path of the model to load
Returns
----------
the return value of tf.keras.models.load_model
'''
self.__check_file_exists(model_path)
return tf.keras.models.load_model(model_path)
def __check_file_exists(self, file_path: str) -> None:
'''Check that a file exists (raises FileNotFoundError if it does not).
Parameter
----------
file_path: file whose existence is checked
'''
if not os.path.exists(file_path):
    raise FileNotFoundError('[{}] does not exist.'.format(file_path))
| 28.573171 | 116 | 0.522621 | 412 | 4,686 | 5.682039 | 0.36165 | 0.038445 | 0.01666 | 0.01965 | 0.058095 | 0.026484 | 0 | 0 | 0 | 0 | 0 | 0.009299 | 0.357448 | 4,686 | 163 | 117 | 28.748466 | 0.768183 | 0.33312 | 0 | 0 | 0 | 0 | 0.039241 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 1 | 0.152174 | false | 0 | 0.152174 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
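TensorFlow aside, the output structure documented for `get_personal_data_from_faces` can be sketched in plain Python; the rectangles and class ids below are made-up stand-ins for the face detector and nationality model outputs:

```python
# Plain-Python sketch of the result assembly in get_personal_data_from_faces.
# labels, rect_list and nationality_list are illustrative stand-ins.
labels = {"nationality": {"0": "japanese", "1": "other"}}
rect_list = [(10, 20, 64, 64), (80, 20, 64, 64)]   # (x, y, W, H) per face
nationality_list = [0, 1]                          # predicted class ids

result_list = []
for rect, nationality in zip(rect_list, nationality_list):
    result_list.append({
        "coodinate": list(rect),  # key spelled as in the module above
        "attribute": {"nationality": labels["nationality"][str(nationality)]},
    })
```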
d8c15c388c58bbae49aac02c97bdee96b885e94e | 3,234 | py | Python | app/main/routes.py | Tsolmon1/company | 270d88e40e0c709247a7338cd41942b0ceb67c5e | [
"MIT"
] | null | null | null | app/main/routes.py | Tsolmon1/company | 270d88e40e0c709247a7338cd41942b0ceb67c5e | [
"MIT"
] | null | null | null | app/main/routes.py | Tsolmon1/company | 270d88e40e0c709247a7338cd41942b0ceb67c5e | [
"MIT"
] | null | null | null | from datetime import datetime
from flask import render_template, flash, redirect, url_for, request, g, \
jsonify, current_app
from flask_login import current_user, login_required
from flask_babel import _, get_locale
#from guess_language import guess_language
from app import db
from app.main.forms import CompanyForm
from app.models import Company_list
from app.main import bp
@bp.route("/company", methods=['GET'])
def company_namelist():
"""
List all companies
"""
page = request.args.get('page', 1, type=int)
companys = Company_list.query.order_by(Company_list.id.asc()).paginate(
page, current_app.config['POSTS_PER_PAGE'], False)
next_url = url_for('main.company_namelist', page=companys.next_num) \
if companys.has_next else None
prev_url = url_for('main.company_namelist', page=companys.prev_num) \
if companys.has_prev else None
return render_template('company/company_namelists.html', companys=companys.items, title="companys", next_url=next_url, prev_url=prev_url)
@bp.route('/company/add', methods=['GET', 'POST'])
def add_company():
form = CompanyForm()
if form.validate_on_submit():
company = Company_list(names_one=form.names_one.data,
names_two=form.names_two.data,
names_three=form.names_three.data,
branches=form.branches.data)
# add the company to the database
db.session.add(company)
db.session.commit()
flash('You have successfully added the company!')
# redirect to the login page
return redirect(url_for('main.company_namelist'))
# load registration template
return render_template('company/company_add.html', form=form, title='Add Company')
@bp.route('/companys/edit/<int:id>', methods=['GET', 'POST'])
def edit_company(id):
"""
Edit a company
"""
add_company = False
companys = Company_list.query.get_or_404(id)
form = CompanyForm(obj=companys)
if form.validate_on_submit():
companys.names_one = form.names_one.data
companys.names_two = form.names_two.data
companys.names_three = form.names_three.data
companys.branches = form.branches.data
db.session.add(companys)
db.session.commit()
flash('You have successfully edited the companys.')
# redirect to the roles page
return redirect(url_for('main.company_namelist'))
form.names_one.data = companys.names_one
form.names_two.data = companys.names_two
form.names_three.data = companys.names_three
form.branches.data = companys.branches
return render_template('company/company_edit.html', add_company=add_company,
form=form, title="Edit company")
@bp.route('/company/delete/<int:id>', methods=['GET', 'POST'])
def delete_company(id):
"""
Delete a company from the database
"""
companyss = Company_list.query.get_or_404(id)
db.session.delete(companyss)
db.session.commit()
flash('You have successfully deleted the company.')
# redirect to the roles page
return redirect(url_for('main.company_namelist')) | 32.019802 | 141 | 0.682746 | 425 | 3,234 | 5.002353 | 0.237647 | 0.0381 | 0.023518 | 0.039981 | 0.419567 | 0.327375 | 0.194732 | 0.11524 | 0.057385 | 0.057385 | 0 | 0.002728 | 0.206555 | 3,234 | 101 | 142 | 32.019802 | 0.825799 | 0.087508 | 0 | 0.135593 | 0 | 0 | 0.152069 | 0.079655 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.135593 | 0 | 0.305085 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
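The `next_url`/`prev_url` pattern used in `company_namelist` can be isolated from Flask; `Page` below is a stdlib stand-in for Flask-SQLAlchemy's pagination object, and the URL string is a simplification of what `url_for('main.company_namelist')` would generate:

```python
# Stdlib-only sketch of the next/prev page-link logic in company_namelist.
from dataclasses import dataclass

@dataclass
class Page:
    page: int    # current page number (1-based)
    pages: int   # total number of pages

    @property
    def has_next(self):
        return self.page < self.pages

    @property
    def has_prev(self):
        return self.page > 1

    @property
    def next_num(self):
        return self.page + 1

    @property
    def prev_num(self):
        return self.page - 1

def page_urls(p, endpoint="/company"):
    # Mirror the view: a link only exists when there is a page to link to.
    next_url = f"{endpoint}?page={p.next_num}" if p.has_next else None
    prev_url = f"{endpoint}?page={p.prev_num}" if p.has_prev else None
    return next_url, prev_url
```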
d8c4609c13c1b5b024cb78f178101d21b07a60ae | 31,034 | py | Python | opentisim/containers/container_defaults.py | TUDelft-CITG/OpenTISim | 443b20572eb2aae2f1909a8a01e95e31be53b675 | [
"MIT"
] | 7 | 2020-02-15T01:34:29.000Z | 2022-02-28T01:24:05.000Z | opentisim/containers/container_defaults.py | TUDelft-CITG/OpenTISim | 443b20572eb2aae2f1909a8a01e95e31be53b675 | [
"MIT"
] | 2 | 2020-02-14T18:44:31.000Z | 2020-04-06T15:39:17.000Z | opentisim/containers/container_defaults.py | TUDelft-CITG/OpenTISim | 443b20572eb2aae2f1909a8a01e95e31be53b675 | [
"MIT"
] | 2 | 2019-07-19T08:50:31.000Z | 2020-02-05T11:14:07.000Z | """
Main generic object classes:
- 1. Quay_wall
- 2. Berth
- 3. Cyclic_Unloader
- STS crane
- 4. Horizontal transport
- Tractor trailer
- 5. Commodity
- TEU
- 6. Containers
- Laden
- Reefer
- Empty
- OOG
- 7. Laden and reefer stack
- 8. Stack equipment
- 9. Empty stack
- 10. OOG stack
- 11. Gates
- 12. Empty handler
- 13. Vessel
- 14. Labour
- 15. Energy
- 16. General
- 17. Indirect Costs
"""
# package(s) for data handling
import pandas as pd
# *** Default inputs: Quay_Wall class *** todo add values of RHDHV or general (e.g. PIANC)
quay_wall_data = {"name": 'Quay',
"ownership": 'Port authority',
"delivery_time": 2,
"lifespan": 50,
"mobilisation_min": 2_500_000,
"mobilisation_perc": 0.02,
"maintenance_perc": 0.01,
"insurance_perc": 0.01,
"berthing_gap": 15, # see PIANC (2014), p 98
"freeboard": 4, # m
"Gijt_constant": 753.24, # Source: (J. de Gijt, 2011) Figure 2 ; USD/m (if 1.0 EUR = 1.12 USD, 670.45 EUR = 757.8 USD)
"Gijt_coefficient": 1.2729, # Source: (J. de Gijt, 2011) Figure 2
"max_sinkage": 0.5,
"wave_motion": 0.5,
"safety_margin": 0.5,
"apron_width": 65.5, # see PIANC (2014b), p 62
"apron_pavement": 125} # all values from Ijzermans, 2019, P 91
# *** Default inputs: Berth class ***
berth_data = {"name": 'Berth',
"crane_type": 'Mobile cranes',
"delivery_time": 2,
"max_cranes": 3} # STS cranes
# *** Default inputs: Crane class *** todo check sources sts_crane_data and check small sts_crane_data for the barge berths
sts_crane_data = {"name": 'STS_crane',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"unit_rate": 10_000_000, # USD per unit
"mobilisation_perc": 0.15, # percentage
"maintenance_perc": 0.02, # percentage
"insurance_perc": 0.01, # percentage
"consumption": 8, # Source: Peter Beamish (RHDHV)
"crew": 5.5, # 1.5 crane driver, 2 quay staff, 2 twistlock handler (per shift)
"crane_type": 'STS crane',
"lifting_capacity": 2.13, # weighted average of TEU per lift
"hourly_cycles": 25, # PIANC wg135
"eff_fact": 0.75}
# *** Default inputs: Barge_Berth class ***
barge_berth_data = {"name": 'Barge_Berth',
"delivery_time": 2, # years
"max_cranes": 1.0} # barge_cranes/barge_berth (Source: RHDHV)
barge_quay_wall_data = {"name": 'Barge_Quay',
"ownership": "Terminal operator",
"delivery_time": 2, # years
"lifespan": 50, # equal to quay wall OGV
"mobilisation_min": 1_000_000, # todo add source
"mobilisation_perc": 0.02,
"maintenance_perc": 0.01,
"insurance_perc": 0.01,
"berthing_gap": 15, # see PIANC (2014), p 98
"freeboard": 4, # m
"Gijt_constant": 753.24, # Source: (J. de Gijt, 2011) Figure 2 ; USD/m (if 1.0 EUR = 1.12 USD, 670.45 EUR = 757.8 USD)
"Gijt_coefficient": 1.2729, # Source: (J. de Gijt, 2011) Figure 2
"max_sinkage": 0.5,
"wave_motion": 0.5,
"safety_margin": 0.5,
"apron_width": 30, # todo add source, check PIANC 2014b
"apron_pavement": 125} # all values from Ijzermans, 2019, P 91
barge_crane_data = {"name": 'Barge Crane',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"unit_rate": 5_000_000, # USD per unit
"mobilisation_perc": 0.15, # percentage
"maintenance_perc": 0.02, # percentage
"insurance_perc": 0.01, # percentage
"consumption": 4, # RHDHV
"crew": 1.5, # 1.5 crane driver (per shift)
"lifting_capacity": 1.60, # RHDHV, weighted average of TEU per lift
"avg_utilisation": 0.9, # RHDHV
"nom_crane_productivity": 15.0, # moves per hour
"utilisation": 0.90, # rate
"efficiency": 0.75, # rate
"handling_time_ratio": 0.90, # handling time to berthing time ratio
"peak_factor": 1.10} # RHDHV
# *** Default inputs: ***
channel_data = {"name": 'Channel',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"capital_dredging_rate": 7.0, # USD per m3 (source: Payra, $6.82)
"infill_dredging_rate": 5.5, # USD per m3 (source: Payra, $5.25)
"maintenance_dredging_rate": 4.5, # USD per m3 (source: Payra, $4.43)
"mobilisation_min": 2_500_000,
"mobilisation_perc": 0.02,
"maintenance_perc": 0.10,
"insurance_perc": 0.01}
bridge_data = {"name": 'Bridge',
"ownership": 'Port authority',
"delivery_time": 3,
"lifespan": 50, # years
"unit_rate": 100_000_000, # USD per km
"maintenance_perc": 0.025,
"insurance_perc": 0.01}
reclamation_data = {"name": 'Reclamation',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"reclamation_rate": 12.50, # USD per m3
"maintenance_perc": 0.02,
"insurance_perc": 0.00}
revetment_data = {"name": 'Revetment',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"revetment_rate": 180_000, # USD per m
"quay_length_rate": 1.5,
"maintenance_perc": 0.01,
"insurance_perc": 0.00}
breakwater_data = {"name": 'Breakwater',
"ownership": 'Port authority',
"delivery_time": 2, # years
"lifespan": 50, # years
"breakwater_rate": 275_000, # USD per m
"quay_length_rate": 1.5,
"maintenance_perc": 0.01,
"insurance_perc": 0.00}
# Default inputs: Horizontal_Transport class *** #todo add sources
tractor_trailer_data = {"name": 'Tractor-trailer',
"type": 'tractor_trailer',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"mobilisation": 1_000,
"unit_rate": 85_000,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"crew": 1,
"salary": 30_000, # dummy
"utilisation": 0.80,
"fuel_consumption": 2, # liter per box move
"productivity": 1,
"required": 5, # typical 3 - 6 see PIANC 2014b, p 58
"non_essential_moves": 1.2} # todo input value for tractor productivity
# *** Default inputs: Container class #todo add sources
laden_container_data = {"name": 'Laden container',
"type": 'laden_container',
"teu_factor": 1.60,
"dwell_time": 3, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 0.7,
"stack_occupancy": 0.8, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
reefer_container_data = {"name": 'Reefer container',
"type": 'reefer_container',
"teu_factor": 1.75,
"dwell_time": 3, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 0.7,
"stack_occupancy": 0.8, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 21, # TEU
"height": 4, # TEU
"length": 4 # TEU
}
empty_container_data = {"name": 'Empty container',
"type": 'empty_container',
"teu_factor": 1.55,
"dwell_time": 10, # days, PIANC (2014b) p 64 (10 - 20)
"peak_factor": 1.2,
"stack_ratio": 1, # looking for a good reference for this value
"stack_occupancy": 0.7, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
oog_container_data = {"name": 'OOG container',
"type": 'oog_container',
"teu_factor": 1.55,
"dwell_time": 4, # days, PIANC (2014b) p 64 (5 - 10)
"peak_factor": 1.2,
"stack_ratio": 1, # by definition the H of oog stacks is 1
"stack_occupancy": 0.9, # acceptable occupancy rate (0.65 to 0.70), Quist and Wijdeven (2014), p 49
"width": 48, # TEU
"height": 4, # TEU
"length": 20 # TEU
}
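Defaults like `teu_factor`, `dwell_time`, `peak_factor` and `stack_occupancy` usually feed a yard-sizing step; the chain below is a common rule of thumb (cf. PIANC 2014b), not necessarily the exact formula this package uses:

```python
# Hedged sketch: required laden-stack capacity from annual throughput.
laden = {"teu_factor": 1.60, "dwell_time": 3,       # dwell time in days
         "peak_factor": 1.2, "stack_occupancy": 0.8}

annual_boxes = 500_000  # hypothetical terminal throughput, boxes per year
annual_teu = annual_boxes * laden["teu_factor"]
stack_capacity = (annual_teu * laden["dwell_time"] / 365
                  * laden["peak_factor"] / laden["stack_occupancy"])  # TEU
```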
# *** Default inputs: Laden_Stack class within the stacks
rtg_stack_data = {"name": 'RTG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 25_000, # USD
"maintenance_perc": 0.1,
# "width": 6, # TEU
# "height": 5, # TEU
# "length": 30, # TEU
# "capacity": 900, # TEU
"gross_tgs": 18, # TEU Ground Slot [m2/teu]
"area_factor": 2.04, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
rmg_stack_data = {"name": 'RMG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 50_000, # USD
"maintenance_perc": 0.1,
# "width": 6, # TEU
# "height": 5, # TEU
# "length": 40, # TEU
# "capacity": 1200, # TEU
"gross_tgs": 18.67, # TEU Ground Slot [m2/teu]
"area_factor": 2.79, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
sc_stack_data = {"name": 'SC Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 50_000, # USD
"maintenance_perc": 0.1,
# "width": 45, # TEU
# "height": 3, # TEU
# "length": 22, # TEU
# "capacity": 1200, # TEU
"gross_tgs": 27.3, # TEU Ground Slot [m2/teu]
"area_factor": 1.45, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # DUMMY
"drainage": 50, # DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
rs_stack_data = {"name": 'RS Stack',
"ownership": 'Terminal operator',
"delivery_time": 1, # years
"lifespan": 40, # years
"mobilisation": 10_000, # USD
"maintenance_perc": 0.1,
# "width": 4, # TEU
# "height": 4, # TEU
# "length": 20, # TEU
# "capacity": 320, # TEU
"gross_tgs": 18, # TEU Ground Slot [m2/teu]
"area_factor": 3.23, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # m2 DUMMY
"drainage": 50, # m2 DUMMY
"household": 0.1, # moves
"digout_margin": 1.2, # percentage
"reefer_factor": 2.33, # RHDHV
"consumption": 4, # kWh per active reefer
"reefer_rack": 3500, # USD
"reefers_present": 0.5} # per reefer spot
# *** Default inputs: Other_Stack class
empty_stack_data = {"name": 'Empty Stack',
"ownership": 'Terminal operator',
"delivery_time": 1,
"lifespan": 40,
"mobilisation": 25_000,
"maintenance_perc": 0.1,
"width": 8, # TEU
"height": 6, # TEU
"length": 10, # TEU
"capacity": 480, # TEU
"gross_tgs": 18, # TEU Ground Slot
"area_factor": 2.04, # Based on grasshopper layout
"pavement": 200, # DUMMY
"drainage": 50,
"household": 1.05,
"digout": 1.05} # DUMMY
oog_stack_data = {"name": 'OOG Stack',
"ownership": 'Terminal operator',
"delivery_time": 1,
"lifespan": 40,
"mobilisation": 25_000,
"maintenance_perc": 0.1,
"width": 10, # TEU
"height": 1, # TEU
"length": 10, # TEU
"capacity": 100, # TEU
"gross_tgs": 64, # TEU Ground Slot
"area_factor": 1.05, # m2/TEU (based on grasshopper layout P. Koster)
"pavement": 200, # DUMMY
"drainage": 50} # DUMMY
# *** Default inputs: Stack_Equipment class
rtg_data = {"name": 'RTG',
"type": 'rtg',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 1_400_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 1, # dummy
"salary": 50_000, # dummy
"required": 3,
"fuel_consumption": 1, # dummy
"power_consumption": 0
}
rmg_data = {"name": 'RMG',
"type": 'rmg',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 2_500_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 0, # dummy
"salary": 50_000, # dummy
"required": 1, # one per stack
"fuel_consumption": 0, # dummy
"power_consumption": 15 # kWh/box move
}
sc_data = {"name": 'Straddle carrier',
"type": 'sc',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 2_000_000, # dummy
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 0, # dummy
"salary": 50_000, # dummy
"required": 5,
"fuel_consumption": 0, # dummy
"power_consumption": 30
}
rs_data = {"name": 'Reach stacker',
"type": 'rs',
"ownership": 'Terminal operator',
"delivery_time": 0,
"lifespan": 10,
"unit_rate": 500_000,
"mobilisation": 5000,
"maintenance_perc": 0.1, # dummy
"insurance_perc": 0,
"crew": 2, # dummy
"salary": 50_000, # dummy
"required": 4,
"fuel_consumption": 1, # dummy
"power_consumption": 0
}
# *** Default inputs: Gate class ***
gate_data = {"name": 'Gate',
"type": 'gate',
"ownership": "Terminal operator",
"delivery_time": 1, # years
"lifespan": 15, # years
"unit_rate": 30_000, # USD/gate
"mobilisation": 5000, # USD/gate
"maintenance_perc": 0.02,
"crew": 2, # crew
"salary": 30_000, # Dummy
"canopy_costs": 250, # USD/m2 # Dummy
"area": 288.75, # PIANC WG135
"staff_gates": 1, #
"service_gates": 1, #
"design_capacity": 0.98, #
"exit_inspection_time": 3, # min #dummy
"entry_inspection_time": 2, # min #dummy
"peak_hour": 0.125, # dummy
"peak_day": 0.25, # dummy
"peak_factor": 1.2,
"truck_moves": 0.75,
"operating_days": 7,
"capacity": 60}
# *** Default inputs: ECH class***
empty_handler_data = {"name": 'Empty Handler',
"type": 'empty_handler',
"ownership": "Terminal operator",
"delivery_time": 1,
"lifespan": 15,
"unit_rate": 500_000,
"mobilisation": 5000,
"maintenance_perc": 0.02,
"crew": 1,
"salary": 35_000, # dummy
"fuel_consumption": 1.5,
"required": 5}
# *** Default inputs: Commodity class ***
container_data = {"name": 'Laden',
"handling_fee": 150,
"fully_cellular_perc": 0,
"panamax_perc": 0,
"panamax_max_perc": 0,
"post_panamax_I_perc": 0,
"post_panamax_II_perc": 0,
"new_panamax_perc": 100,
"VLCS_perc": 0,
"ULCS_perc": 0}
# *** Default inputs: Vessel class *** (Source: i) The Geography of Transport Systems, Jean-Paul Rodrigue (2017), ii) UNCTAD)
fully_cellular_data = {"name": 'Fully_Cellular_1',
"type": 'Fully_Cellular',
"delivery_time": 0, # years
"call_size": 2500 / 8, # TEU
"LOA": 215, # m
"draught": 10.0, # m
"beam": 20.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source
"mooring_time": 6, # berthing + deberthing time
"demurrage_rate": 730, # USD todo edit
"transport_costs": 200, # USD per TEU, RHDHV
"all_in_transport_costs": 2128 # USD per TEU, Ports and Terminals p.158
}
panamax_data = {"name": 'Panamax_1',
"type": 'Panamax',
"delivery_time": 0, # years
"call_size": 3400 / 8, # TEU
"LOA": 250, # m
"draught": 12.5, # m
"beam": 32.2, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 6, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 180, # USD per TEU, RHDHV
"all_in_transport_costs": 1881 # USD per TEU, Ports and Terminals p.158
}
panamax_max_data = {"name": 'Panamax_Max_1',
"type": 'Panamax_Max',
"delivery_time": 0, # years
"call_size": 4500 / 8, # TEU
"LOA": 290, # m
"draught": 12.5, # m
"beam": 32.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 160, # USD per TEU, RHDHV
"all_in_transport_costs": 1682 # USD per TEU, Ports and Terminals p.158
}
post_panamax_I_data = {"name": 'Post_Panamax_I_1',
"type": 'Post_Panamax_I',
"delivery_time": 0, # years
"call_size": 6000 / 8, # TEU
"LOA": 300, # m
"draught": 13.0, # m
"beam": 40.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 150, # USD per TEU, RHDHV
"all_in_transport_costs": 1499 # USD per TEU, Ports and Terminals p.158
}
post_panamax_II_data = {"name": 'Post_Panamax_II_1',
"type": 'Post_Panamax_II',
"delivery_time": 0, # years
"call_size": 8500 / 8, # TEU
"LOA": 340, # m
"draught": 14.5, # m
"beam": 43.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 2, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 140, # USD per TEU, RHDHV
"all_in_transport_costs": 1304 # USD per TEU, Ports and Terminals p.158
}
new_panamax_data = {"name": 'New_Panamax_1',
"type": 'New_Panamax',
"delivery_time": 0, # years
"call_size": 12500 / 8, # TEU
"LOA": 366, # m
"draught": 15.2, # m
"beam": 49.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 6, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 120, # USD per TEU, RHDHV
"all_in_transport_costs": 1118 # USD per TEU, Ports and Terminals p.158
}
VLCS_data = {"name": 'VLCS_1',
"type": 'VLCS',
"delivery_time": 0, # years
"call_size": 15000 / 8, # TEU
"LOA": 397, # m
"draught": 15.5, # m
"beam": 56.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 4, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 80, # USD per TEU, RHDHV
"all_in_transport_costs": 2128 # USD per TEU, Ports and Terminals p.158
}
ULCS_data = {"name": 'ULCS_1',
"type": 'ULCS',
"delivery_time": 0, # years
"call_size": 21000 / 8, # TEU
"LOA": 400, # m
"draught": 16.0, # m
"beam": 59.0, # m
"max_cranes": 4, # STS cranes
"all_turn_time": 31, # todo source [hr]
"mooring_time": 4, # berthing + deberthing time [hr]
"demurrage_rate": 730, # USD todo edit
"transport_costs": 60, # USD per TEU, RHDHV
"all_in_transport_costs": 908 # USD per TEU, Ports and Terminals p.158
}
# *** Default inputs: Barge class *** # todo add sources
small_barge_data = {"name": 'Small_Barge_1',
"type": 'small',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 200, # TEU
"LOA": 90, # m
"draught": 4.5, # m
"beam": 12.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
medium_barge_data = {"name": 'Medium_Barge_1',
"type": 'medium',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 250, # TEU
"LOA": 100, # m
"draught": 5.0, # m
"beam": 13.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
large_barge_data = {"name": 'Large_Barge_1',
"type": 'large',
"ownership": 'Port authority',
"delivery_time": 1, # years
"lifespan": 10, # years
"call_size": 300, # TEU
"LOA": 120, # m
"draught": 5.5, # m
"beam": 14.0, # m
"unit_rate": 1_000_000, # USD per barge
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01,
"mooring_time": 6, # berthing + deberthing time
"transport_costs": 200} # USD per TEU
truck_data = {"name": 'Truck',
"ownership": 'Port authority',
"delivery_time": 1,
"lifespan": 10,
"unit_rate": 10_000, # USD per truck
"operations_perc": 0.10,
"maintenance_perc": 0.10,
"insurance_perc": 0.01}
# *** Default inputs: Labour class ***
labour_data = {"name": 'Labour',
"international_salary": 105_000,
"international_staff": 4,
"local_salary": 18_850,
"local_staff": 10,
"operational_salary": 16_750,
"shift_length": 6.5, # hr per shift
"annual_shifts": 200,
"daily_shifts": 5, # shifts per day
"blue_collar_salary": 25_000, # USD per crew per day
"white_collar_salary": 35_000} # USD per crew per day
# *** Default inputs: Energy class ***
energy_data = {"name": 'Energy',
"price": 0.10}
# *** Default inputs: General_Services class ***
general_services_data = {"name": 'General_Services',
"type": 'general_services',
"office": 2400,
"office_cost": 1500,
"workshop": 2400,
"workshop_cost": 1000,
"fuel_station_cost": 500_000,
"scanning_inspection_area": 2700,
"scanning_inspection_area_cost": 1000,
"lighting_mast_required": 1.2, # masts per ha
"lighting_mast_cost": 30_000,
"firefight_cost": 2_000_000,
"maintenance_tools_cost": 10_000_000,
"terminal_operating_software_cost": 10_000_000,
"electrical_station_cost": 2_000_000,
"repair_building": 100,
"repair_building_cost": 1000,
"ceo": 1, # FTE per 500 k TEU
"secretary": 1, # FTE per 500 k TEU
"administration": 3, # FTE per 500 k TEU
"hr": 2, # FTE per 500 k TEU
"commercial": 1, # FTE per 500 k TEU
"operations": 4, # FTE/shirt per 500 k TEU
"engineering": 2, # FTE/shift per 500 k TEU
"security": 2, # FTE/shift per 500 k TEU
"general_maintenance": 0.015,
"crew_required": 500_000, # for each 500_k TEU an additional crew team is added
"delivery_time": 1,
"lighting_consumption": 1,
"general_consumption": 1000}
# *** Default inputs: Indirect_Costs class ***
indirect_costs_data = {"name": 'Indirect_Costs',
"preliminaries": 0.15,
"engineering": 0.05,
"miscellaneous": 0.15,
"electrical_works_fuel_terminal": 0.12,
"electrical_works_power_terminal": 0.15}
# Source: src/arrays/combination-sum-2.py (repo: vighnesh153/ds-algo, license: MIT)
def recursion(arr, target, index, solutions, current):
if target == 0:
solutions.add(tuple(current))
return
if target < 0 or index >= len(arr):
return
current.append(arr[index])
recursion(arr, target - arr[index], index + 1, solutions, current)
current.pop()
recursion(arr, target, index + 1, solutions, current)
def solve(arr, target):
solutions = set()
arr.sort()
recursion(arr, target, 0, solutions, [])
return [list(item) for item in solutions]
A = [10, 1, 2, 7, 6, 1, 5]
B = 8
print(solve(A, B))
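As a sanity check on `solve` (an illustrative snippet, not part of the original file): a brute-force subset search over the same input yields the four expected unique combinations.

```python
from itertools import combinations

A = [10, 1, 2, 7, 6, 1, 5]
B = 8
# Enumerate every subset and dedupe by sorted tuple — mirroring the
# set-of-tuples dedup used in solve() above.
expected = {tuple(sorted(c))
            for r in range(1, len(A) + 1)
            for c in combinations(A, r)
            if sum(c) == B}
print(sorted(expected))  # [(1, 1, 6), (1, 2, 5), (1, 7), (2, 6)]
```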
# Source: mmhuman3d/core/visualization/__init__.py (repo: ttxskk/mmhuman3d, license: Apache-2.0)
from .visualize_keypoints2d import visualize_kp2d
from .visualize_keypoints3d import visualize_kp3d
from .visualize_smpl import (
render_smpl,
visualize_smpl_calibration,
visualize_smpl_hmr,
visualize_smpl_pose,
visualize_smpl_vibe,
visualize_T_pose,
)
__all__ = [
'visualize_kp2d', 'visualize_kp3d', 'visualize_smpl_pose',
'visualize_T_pose', 'render_smpl', 'visualize_smpl_vibe',
'visualize_smpl_calibration', 'visualize_smpl_hmr'
]
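The `__all__` list above controls what a star-import exposes. A standalone demonstration with a hypothetical module (not mmhuman3d itself):

```python
# Only names listed in __all__ are exported by `from module import *`.
import sys
import types

mod = types.ModuleType("viz_demo")
mod.visualize_kp2d = lambda: "public"
mod._helper = lambda: "private"
mod.__all__ = ["visualize_kp2d"]
sys.modules["viz_demo"] = mod

ns = {}
exec("from viz_demo import *", ns)
print("visualize_kp2d" in ns, "_helper" in ns)  # True False
```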
# Source: pyoogle/preprocessing/crawl/crawler.py (repo: DanDits/Pyoogle, license: Apache-2.0)
# -*- coding: utf-8 -*-
"""
Created on Sat Feb 6 12:49:02 2016
@author: daniel
"""
import logging
import threading # For main processing thread
import urllib # For downloading websites
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor # each downloads a website
from http.client import RemoteDisconnected
from queue import Queue, Empty # For processing downloaded websites
from socket import timeout as socket_timeout
from pyoogle.config import LOGGING_LEVEL
from pyoogle.preprocessing.crawl.linkconstraint import LinkConstraint # constraint to which links are allowed
from pyoogle.preprocessing.web.net import WebNet
from pyoogle.preprocessing.web.node import WebNode
from pyoogle.preprocessing.web.nodestore import WebNodeStore # for permanently saving created WebNodes
from pyoogle.preprocessing.web.parser import WebParser # parses the downloaded html site and extracts info
logging.getLogger().setLevel(LOGGING_LEVEL)
NotResolvable = "NOT_RESOLVABLE_LINK"
class Crawler:
# Initializes the Crawler. If max_sites is greater than zero, it will only
# download that many sites and stop afterwards; otherwise it crawls until no new site is found.
def __init__(self, store_path, link_constraint, max_sites=0, max_workers=2, timeout=30):
self.store_path = store_path
self.pending_links = Queue()
self.pending_websites = Queue()
self.web_net = None
self.link_constraint = link_constraint
if self.link_constraint is None:
raise ValueError("No link constraint given!")
self.already_processed_links = set()
self.already_processed_websites = set()
self.is_crawling = False
self.max_sites = max_sites
self.processed_sites_count = 0
self.max_workers = max_workers
self.timeout = timeout
self.starting_processor = None
self.links_processor = None
self.websites_processor = None
def _is_finished(self):
return not self.is_crawling or self.has_maximum_sites_processed()
def has_maximum_sites_processed(self):
return 0 < self.max_sites <= self.processed_sites_count
def process_link(self, link):
if self._is_finished():
return
website = Crawler.download_website(link, self.timeout)
if website is None:
logging.debug("Website %s not downloaded", link)
if website is NotResolvable:
logging.debug("Website %s not resolvable and not trying again.", link)
return
return self, link, website
@staticmethod
def link_got_processed(future):
if future.done() and future.result() is not None:
self, link, website = future.result()
if self._is_finished():
return
if website is None:
# revert and try later
logging.debug("Website %s not downloaded, retrying later ", link)
self.add_link(link)
return
if not self.has_maximum_sites_processed():
self.pending_websites.put((link, website))
def obtain_new_link(self):
link = None
while link is None and not self._is_finished():
try:
link = self.pending_links.get(timeout=self.timeout)
except Empty:
logging.info("No more links found to process!")
return
if link in self.already_processed_links:
link = None
continue # already processed
if link is not None:
self.already_processed_links.add(link)
return link
def process_links(self):
logging.info("Starting to process links")
try:
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
while not self._is_finished():
# when testing with a limited max_sites (> 0) this may submit many
# extra futures, but their results will be ignored
link = self.obtain_new_link()
if link is None:
return
future = executor.submit(self.process_link, link)
future.add_done_callback(Crawler.link_got_processed)
finally:
self.stop() # ensure crawler is really stopped
def process_website(self, link, website):
logging.debug("Starting to parse %s pending links %d", link, self.pending_links.qsize())
try:
webparser = WebParser(link, website)
except ValueError:
logging.debug("Website %s not parsable, ignored but out link kept", link)
return
web_hash = hash(webparser)
if web_hash in self.already_processed_websites:
# Already processed but with a different url, add this url to node so we know this in the future!
logging.debug("Website %s already processed (with different url)!", link)
node = self.web_net.get_by_content_hash(web_hash)
if node is not None:
node.add_url(link)
return
logging.info("Processed %d.link %s pending websites %d",
self.processed_sites_count + 1, link, self.pending_websites.qsize())
self.already_processed_websites.add(web_hash)
self.processed_sites_count += 1
builder = WebNode.Builder(self.link_constraint)
builder.init_from_webparser(webparser)
webnode = builder.make_node()
self.web_net.add_node(webnode)
for link in webnode.get_out_links():
self.add_link(link)
def process_websites(self, clear_store):
# We are required to open the store in the same thread the store is modified in
logging.info("Starting to process websites")
with WebNodeStore(self.store_path, clear_store) as node_store:
try:
while not self._is_finished():
data = self.pending_websites.get(block=True)
if data is None:
break
link, website = data
self.process_website(link, website)
node_store.save_webnodes(self.web_net.get_nodes())
finally:
self.stop() # ensure crawler is really stopped
def _init_net(self, clear_store):
self.web_net = WebNet()
if not clear_store:
# Do not clear the store but add new nodes to it, load and add existing to webnet
with WebNodeStore(self.store_path, clear=False) as node_store:
for node in node_store.load_webnodes(True):
self.already_processed_websites.add(node.get_content_hash())
for link in node.get_urls():
self.already_processed_links.add(link)
self.web_net.add_node(node)
# After marking all already-processed links, add the new outgoing links to restart
restart_link_count = 0
total_link_out = 0
for node in self.web_net:
for link in node.get_out_links():
total_link_out += 1
if link not in self.already_processed_links:
self.add_link(link)
restart_link_count += 1
logging.info("Restarting with %d links of %d", restart_link_count, total_link_out)
def _start_async(self, clear_store):
self._init_net(clear_store)
self.links_processor = threading.Thread(target=self.process_links)
self.links_processor.start()
self.websites_processor = threading.Thread(target=Crawler.process_websites, args=[self, clear_store])
self.websites_processor.start()
def join(self):
try:
self.starting_processor.join() # If this stops blocking, the other processors are valid
self.websites_processor.join()
self.links_processor.join()
except KeyboardInterrupt:
self.stop()
def start(self, start_url, clear_store=True):
logging.info("Starting crawling at %s", start_url)
self.is_crawling = True
self.add_link(start_url)
self.starting_processor = threading.Thread(target=Crawler._start_async, args=[self, clear_store])
self.starting_processor.start()
def add_link(self, link):
link = self.link_constraint.get_valid(link)
if link is None:
return
self.pending_links.put(link)
def stop(self):
if self.is_crawling: # Race condition safe (could be executed multiple times)
logging.info("Stopping crawling")
self.is_crawling = False
self.pending_websites.put(None) # Ensure threads do not wait forever and exit
self.pending_links.put(None)
@staticmethod
def download_website(url, timeout):
# Download and read website
logging.debug("Downloading website %s", url)
try:
website = urllib.request.urlopen(url, timeout=timeout).read()
except socket_timeout:
logging.debug("Timeout error when downloading %s", url)
website = None
except urllib.error.HTTPError as err:
if int(err.code / 100) == 4:
logging.debug("Client http error when downloading %s %s", url, err)
website = NotResolvable # 404 Not Found or other Client Error, ignore link in future
else:
logging.debug("HTTP Error when downloading %d %s %s", err.code, url, err)
website = None
except urllib.error.URLError as err:
logging.debug("Url error when downloading %s %s", url, err)
website = None
except RemoteDisconnected as disc:
logging.debug("(RemoteDisconnect) error when downloading %s %s", url, disc)
website = NotResolvable
except UnicodeEncodeError:
logging.debug("(UnicodeEncodeError) error when downloading %s", url)
website = NotResolvable
return website
def crawl_mathy():
# Build constraint that describes which outgoing WebNode links to follow
constraint = LinkConstraint('http', 'www.math.kit.edu')
# Prevent downloading links with these endings
# Frequent candidates: '.png', '.jpg', '.jpeg', '.pdf', '.ico', '.doc', '.txt', '.gz', '.zip', '.tar','.ps',
# '.docx', '.tex', 'gif', '.ppt', '.m', '.mw', '.mp3', '.wav', '.mp4'
forbidden_endings = ['.pdf', '.png', '.ico', '#top'] # for fast exclusion
constraint.add_rule(lambda link: all((not link.lower().endswith(ending) for ending in forbidden_endings)))
# Forbid every point in the last path segment as this likely is a file and we are not interested in it
def rule_no_point_in_last_path_segment(link_parsed):
split = link_parsed.path.split("/")
return len(split) == 0 or "." not in split[-1]
constraint.add_rule(rule_no_point_in_last_path_segment, parsed_link=True)
# Start the crawler from a start domain, optionally loading already existing nodes
from pyoogle.config import DATABASE_PATH
path = DATABASE_PATH
c = Crawler(path, constraint)
c.start("http://www.math.kit.edu", clear_store=False)
# Wait for the crawler to finish
c.join()
webnet = c.web_net
logging.info("DONE, webnet contains %d nodes", len(webnet))
return path, webnet
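The last-path-segment rule used above can be exercised on its own with the standard library (illustrative snippet; the URLs are just example strings):

```python
from urllib.parse import urlsplit

def no_point_in_last_segment(url):
    # Same idea as rule_no_point_in_last_path_segment: a dot in the final
    # path segment usually means a file, which the crawler should skip.
    split = urlsplit(url).path.split("/")
    return len(split) == 0 or "." not in split[-1]

print(no_point_in_last_segment("http://www.math.kit.edu/lehre/seminar/"))  # True
print(no_point_in_last_segment("http://www.math.kit.edu/img/logo.png"))    # False
```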
def crawl_spon():
constraint = LinkConstraint('', 'www.spiegel.de')
# Forbid every point in the last path segment as this likely is a file and we are not interested in it
def rule_no_point_in_last_path_segment(link_parsed):
split = link_parsed.path.split("/")
return len(split) == 0 or ("." not in split[-1] or
split[-1].lower().endswith(".html") or split[-1].lower().endswith(".htm"))
constraint.add_rule(rule_no_point_in_last_path_segment, parsed_link=True)
path = "/home/daniel/PycharmProjects/PageRank/spon.db"
c = Crawler(path, constraint)
c.start("http://www.spiegel.de", clear_store=False)
# Wait for the crawler to finish
c.join()
webnet = c.web_net
logging.info("DONE, webnet contains %d nodes", len(webnet))
return path, webnet
if __name__ == "__main__":
crawl_spon()
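The shutdown in `Crawler.stop` relies on a common sentinel pattern: pushing `None` unblocks consumer threads waiting on `Queue.get` so they can exit. A minimal standalone sketch of that pattern:

```python
import threading
from queue import Queue

q = Queue()
results = []

def consumer():
    while True:
        item = q.get()
        if item is None:  # sentinel: producer is done, exit cleanly
            break
        results.append(item * 2)

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    q.put(i)
q.put(None)  # ensure the thread does not wait forever, as in Crawler.stop
t.join()
print(results)  # [0, 2, 4]
```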
# Source: 13.part2.py (repo: elp2/advent_of_code_2018, license: Apache-2.0)
from collections import defaultdict
def return_default():
return 0
REAL=open("13.txt").readlines()
SAMPLE=open("13.sample2").readlines()
def parse_lines(lines):
return list(map(list, lines))
CARTS = "^>v<"
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]
def cart_positions(start, facing, board):
poses = []
pos = start
corners = 0
fidx = DIRS.index(facing)
while True:
poses.append(pos)
x, y = pos
here = board[y][x]
delta = 0
if here == "\\":
delta = [-1, 1, -1, 1][fidx]
elif here == "/":
delta = [1, -1, 1, -1][fidx]
elif here == "+":
cmod = corners % 3
if cmod == 0:
delta = -1
elif cmod == 1:
delta = 0
elif cmod == 2:
delta = 1
corners += 1
else:
assert here in CARTS or here in "|-+"
fidx = (fidx + len(DIRS) + delta) % len(DIRS)
facing = DIRS[fidx]
dx, dy = facing
x += dx
y += dy
pos = (x, y)
if pos == start and corners % 3 == 0:
break
return poses
def solve(lines):
carts = []
parsed = parse_lines(lines)
ats = {}
for y in range(len(lines)):
for x in range(len(lines[y])):
here = parsed[y][x]
if here in CARTS:
facing = DIRS[CARTS.index(here)]
pos = (x, y)
carts.append(cart_positions(pos, facing, parsed))
ats[pos] = len(carts) - 1
t = 0
dead_carts = set()
while True:
moved = set()
for y in range(len(parsed)):
for x in range(len(parsed[y])):
pos = (x, y)
if pos not in ats:
continue
cidx = ats[pos]
if cidx in moved:
continue
moved.add(cidx)
cart = carts[cidx]
cart_next = cart[(t + 1) % len(cart)]
if cart_next in ats:
dead_carts.add(cidx)
dead2 = ats[cart_next]
dead_carts.add(dead2)
print("Crash at ", cart_next, " from ", pos, cidx, dead2)
del ats[cart_next]
del ats[pos]
if len(ats) == 1:
at = list(ats.keys())[0]
print("EARLY: " + str(at[0]) + "," + str(at[1]))
else:
ats[cart_next] = cidx
del ats[pos]
# assert len(ats) + len(dead_carts) == len(carts)
# assert len(set(ats.keys()).intersection(dead_carts)) == 0
if len(ats) == 1:
at = list(ats.keys())[0]
return str(at[0]) + "," + str(at[1])
t += 1
sample = solve(SAMPLE)
assert sample == "6,4"
print("*** SAMPLE PASSED ***")
print(solve(REAL)) # not 93,59
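The cart-turning logic above encodes headings as indices into `DIRS` and turns by adding -1/0/+1 modulo 4. A standalone sketch of that trick:

```python
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left

def turn(fidx, delta):
    # delta: -1 turn left, 0 straight, +1 turn right; adding len(DIRS)
    # keeps the index non-negative before the modulo, as in cart_positions.
    return (fidx + len(DIRS) + delta) % len(DIRS)

print(turn(0, 1), turn(0, -1), turn(3, 1))  # 1 3 0
```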
# Source: mask_example/classification_vars.py (repo: ami-a/MaskDetection, license: MIT)
"""loading the classification model variables for the detector object"""
Jitterdodge Position
====================
Position adjustments determine how to arrange geoms that would otherwise
occupy the same space.
Simultaneously dodge and jitter in one function:
``position_jitterdodge()``.
See
`position_jitterdodge() <https://jetbrains.github.io/lets-plot-docs/pages/api/lets_plot.position_jitterdodge.html#lets_plot.position_jitterdodge>`__.
"""
# sphinx_gallery_thumbnail_path = "gallery_py\_position_adjustments\_jitterdodge_position.png"
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
# %%
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
# %%
ggplot(df, aes('cyl', 'hwy', group='drv', fill='drv')) + \
geom_boxplot() + \
geom_point(position='jitterdodge', shape=21, color='black') | 27.225806 | 151 | 0.703791 | 103 | 844 | 5.563107 | 0.669903 | 0.165794 | 0.041885 | 0.094241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00277 | 0.14455 | 844 | 31 | 152 | 27.225806 | 0.790859 | 0.569905 | 0 | 0 | 0 | 0.142857 | 0.329193 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8ceaa47207dcd451d3a6b75d0d1b483e1ba9218 | 2,537 | py | Python | mask_example/classification_vars.py | ami-a/MaskDetection | 9df329a24a987e63331c17db154319b3ebcaad74 | [
"MIT"
] | 1 | 2021-04-09T09:08:33.000Z | 2021-04-09T09:08:33.000Z | mask_example/classification_vars.py | ami-a/MaskDetection | 9df329a24a987e63331c17db154319b3ebcaad74 | [
"MIT"
] | null | null | null | mask_example/classification_vars.py | ami-a/MaskDetection | 9df329a24a987e63331c17db154319b3ebcaad74 | [
"MIT"
] | null | null | null | """loading the classification model variables for the detector object"""
import numpy as np
import cv2
from TrackEverything.tool_box import ClassificationVars
def get_class_vars(class_model_path):
"""loading the classification model variables for the detector object
We define here the model interpolation function so the detector
can use the classification model
Args:
class_model_path (str): classification model path
Returns:
ClassificationVars: classification variables for the detector
"""
#custom classification model interpolation
def custom_classify_detection(model,det_images,size=(224,224)):
"""Classify a batch of images
Args:
model (tensorflow model): classification model
det_images (np.array): batch of images in numpy array to classify
size (tuple, optional): size to resize to, 1-D int32 Tensor of 2 elements:
new_height, new_width (if None then no resizing).
(In custom function you can use model.inputs[0].shape.as_list()
and set size to default)
Returns:
Numpy NxM matrix where N is the number of images and M the number of classes,
filled with scores. For example, two images (car, plane) with three possible
classes (car, plane, lion), each identified correctly with a 0.9 score in its
true class and the remainder divided equally, will return
[[0.9,0.05,0.05],[0.05,0.9,0.05]].
"""
#resize bounding box capture to fit classification model
if size is not None:
det_images=np.asarray(
[
cv2.resize(img, size, interpolation = cv2.INTER_LINEAR) for img in det_images
]
)
predictions=model.predict(det_images/255.)
#if class is binary make sure size is 2
if len(predictions)>0 and len(predictions[0])<2:
reshaped_pred=np.ones((len(predictions),2))
#size of classification list is 1 so turn it to 2
for ind,pred in enumerate(predictions):
reshaped_pred[ind,:]=pred,1-pred
#print(reshaped_pred)
predictions=reshaped_pred
return predictions
#providing only the classification model path for ClassificationVars
    #since the default loading method
#tf.keras.models.load_model(path) will work
return ClassificationVars(
class_model_path=class_model_path,
class_proccessing=custom_classify_detection
)
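A minimal, standalone sketch of the binary-score expansion described above, using plain Python lists in place of the module's numpy arrays (`expand_binary_scores` is an illustrative name, not part of the original code):

```python
def expand_binary_scores(predictions):
    """Turn [[p], ...] one-class scores into [[p, 1 - p], ...] pairs."""
    return [[p[0], 1 - p[0]] for p in predictions]

print(expand_binary_scores([[0.75], [0.25]]))  # [[0.75, 0.25], [0.25, 0.75]]
```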
| 41.590164 | 97 | 0.658652 | 330 | 2,537 | 4.972727 | 0.421212 | 0.092626 | 0.053626 | 0.042048 | 0.076782 | 0.070689 | 0.070689 | 0.070689 | 0.070689 | 0 | 0 | 0.023433 | 0.276705 | 2,537 | 60 | 98 | 42.283333 | 0.870845 | 0.564446 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.136364 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d26259abf1d70bfe1abffb2493230cee42b319 | 668 | py | Python | detector/urls.py | SPIN-RD/data_analysis | b2ec9ca008781f3015ec3780a858de0dac4549b9 | [
"MIT"
] | null | null | null | detector/urls.py | SPIN-RD/data_analysis | b2ec9ca008781f3015ec3780a858de0dac4549b9 | [
"MIT"
] | null | null | null | detector/urls.py | SPIN-RD/data_analysis | b2ec9ca008781f3015ec3780a858de0dac4549b9 | [
"MIT"
] | null | null | null | from django.urls import path
from .views import (
MeasurementCreateView,
MeasurementRetrieveView,
energy_spectrum_analysis,
half_life_analysis,
index,
)
urlpatterns = [
path("api/measurements/", MeasurementCreateView.as_view()),
path(
"api/measurements/<str:device_id>/<str:mode>", MeasurementRetrieveView.as_view()
),
path("", index, name="index"),
path(
"detector/half-life/<str:device_id>",
half_life_analysis,
name="half-life-analysis",
),
path(
"detector/energy-spectrum/<str:device_id>",
energy_spectrum_analysis,
name="energy-spectrum-analysis",
),
]
| 23.857143 | 88 | 0.646707 | 67 | 668 | 6.253731 | 0.373134 | 0.133652 | 0.157518 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.22006 | 668 | 27 | 89 | 24.740741 | 0.804223 | 0 | 0 | 0.4 | 0 | 0 | 0.270958 | 0.211078 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.08 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d3a26759da05b4ac662c9a2c7f47a9af51214a | 166 | py | Python | site_repo/utils/requests.py | Aviah/one-click-django-server | ddce7181f025b7f8d0979d725f85f8124add6adf | [
"MIT"
] | 10 | 2016-03-22T22:14:40.000Z | 2021-07-23T22:00:02.000Z | site_repo/utils/requests.py | Aviah/one-click-django-server | ddce7181f025b7f8d0979d725f85f8124add6adf | [
"MIT"
] | null | null | null | site_repo/utils/requests.py | Aviah/one-click-django-server | ddce7181f025b7f8d0979d725f85f8124add6adf | [
"MIT"
] | 4 | 2016-04-05T05:41:15.000Z | 2017-01-08T10:03:25.000Z | # requests utils
def get_ip(request):
    """Return the client IP, preferring X-Forwarded-For when present.

    X-Forwarded-For may carry a comma-separated proxy chain; the original
    client address is the first entry.
    """
    forwarded = request.META.get('HTTP_X_FORWARDED_FOR')
    if forwarded:
        return forwarded.split(',')[0].strip()
    return request.META['REMOTE_ADDR']
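A hedged, self-contained sketch of the META lookup this helper relies on, exercised on a plain dict so it runs without Django installed (`client_ip` is an illustrative name, not part of the original module):

```python
def client_ip(meta):
    # Prefer the proxy-supplied header when present, else the socket address.
    return meta.get('HTTP_X_FORWARDED_FOR', meta['REMOTE_ADDR'])

print(client_ip({'REMOTE_ADDR': '10.0.0.1'}))  # 10.0.0.1
```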
| 16.6 | 52 | 0.596386 | 22 | 166 | 4.272727 | 0.636364 | 0.287234 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295181 | 166 | 10 | 53 | 16.6 | 0.803419 | 0.084337 | 0 | 0 | 0 | 0 | 0.205298 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
d8d3a3c4c4228610d78f5c857adca30a66a1762a | 142 | py | Python | backend/app/views/GetConstructsAsGenbankView/__init__.py | Edinburgh-Genome-Foundry/dab | 7eabf76adf3a0b9332c3651b5d0e5e6d98237d2b | [
"MIT"
] | 7 | 2019-04-11T20:36:07.000Z | 2020-03-24T07:12:13.000Z | backend/app/views/GetConstructsAsGenbankView/__init__.py | Edinburgh-Genome-Foundry/dab | 7eabf76adf3a0b9332c3651b5d0e5e6d98237d2b | [
"MIT"
] | null | null | null | backend/app/views/GetConstructsAsGenbankView/__init__.py | Edinburgh-Genome-Foundry/dab | 7eabf76adf3a0b9332c3651b5d0e5e6d98237d2b | [
"MIT"
] | null | null | null | from .GetConstructsAsGenbank import (GetConstructsAsGenbankView,
construct_data_to_assemblies_sequences)
| 47.333333 | 76 | 0.676056 | 9 | 142 | 10.222222 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.295775 | 142 | 2 | 77 | 71 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 5 |
d8d3e8f353c9df0eca02ab9d7ac5d28b32d4bbb1 | 1,287 | py | Python | newsletter/models.py | doctsystems/newsletter | 7355ad7bca5f2d082abf9703a20fc91a0c325454 | [
"MIT"
] | null | null | null | newsletter/models.py | doctsystems/newsletter | 7355ad7bca5f2d082abf9703a20fc91a0c325454 | [
"MIT"
] | null | null | null | newsletter/models.py | doctsystems/newsletter | 7355ad7bca5f2d082abf9703a20fc91a0c325454 | [
"MIT"
] | null | null | null | from django.db import models
class Newsletter(models.Model):
    name = models.CharField(max_length=100)
    description = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.name
class Subscriber(models.Model):
name = models.CharField(max_length=100)
email = models.CharField(max_length=100)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
newsletter = models.ManyToManyField(Newsletter, through='Subscription')
def __str__(self):
return self.name
class Issue(models.Model):
content = models.TextField()
title = models.CharField(max_length=100)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
newsletter = models.ForeignKey(Newsletter, on_delete=models.CASCADE)
def __str__(self):
return self.title
class Subscription(models.Model):
newsletter = models.ForeignKey(Newsletter, on_delete=models.CASCADE)
subscriber = models.ForeignKey(Subscriber, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return f"{self.subscriber} Subscribed to {self.newsletter}"
| 30.642857 | 72 | 0.79798 | 170 | 1,287 | 5.788235 | 0.247059 | 0.065041 | 0.170732 | 0.203252 | 0.697154 | 0.676829 | 0.676829 | 0.634146 | 0.449187 | 0.449187 | 0 | 0.010309 | 0.095571 | 1,287 | 42 | 73 | 30.642857 | 0.835052 | 0 | 0 | 0.580645 | 0 | 0 | 0.04736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.032258 | 0.129032 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 2 |
d8d46c16be9da8396527eb259a0965644da2d48d | 37 | py | Python | veintitres/__init__.py | joelalejandro/veintitres-python | 18fa7aa66688cce3f2c42ebc96ddb780bcd6d4bf | [
"MIT"
] | null | null | null | veintitres/__init__.py | joelalejandro/veintitres-python | 18fa7aa66688cce3f2c42ebc96ddb780bcd6d4bf | [
"MIT"
] | null | null | null | veintitres/__init__.py | joelalejandro/veintitres-python | 18fa7aa66688cce3f2c42ebc96ddb780bcd6d4bf | [
"MIT"
] | null | null | null | from .client import VeintitresClient
| 18.5 | 36 | 0.864865 | 4 | 37 | 8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d8d586caec5e48f58983b527adfdcf89eb123054 | 6,604 | py | Python | bin/pylint_runner.py | PickBas/meta-social | f6fb0a50c30e240086a75917b705dfdc71dbebf9 | [
"MIT"
] | null | null | null | bin/pylint_runner.py | PickBas/meta-social | f6fb0a50c30e240086a75917b705dfdc71dbebf9 | [
"MIT"
] | 15 | 2020-06-07T07:58:05.000Z | 2022-01-19T16:53:47.000Z | bin/pylint_runner.py | PickBas/meta-social | f6fb0a50c30e240086a75917b705dfdc71dbebf9 | [
"MIT"
] | null | null | null | '''
The MIT License (MIT)
Copyright (c) 2015 Matthew Peveler
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''
# https://github.com/MasterOdin/pylint_runner
from argparse import ArgumentParser
import configparser
import os
import sys
import colorama
import pylint
import pylint.lint
PYTHON_VERSION = ".".join([str(x) for x in sys.version_info[0:3]])
class Runner:
""" A pylint runner that will lint all files recursively from the CWD. """
DEFAULT_IGNORE_FOLDERS = [".git", ".idea", "__pycache__"]
DEFAULT_ARGS = ["--reports=n", "--output-format=colorized"]
DEFAULT_RCFILE = ".pylintrc"
def __init__(self, args=None):
colorama.init(autoreset=True)
self.verbose = False
self.args = self.DEFAULT_ARGS
self.rcfile = self.DEFAULT_RCFILE
self.ignore_folders = self.DEFAULT_IGNORE_FOLDERS
self._parse_args(args or sys.argv[1:])
self._parse_ignores()
def _parse_args(self, args):
"""Parses any supplied command-line args and provides help text. """
parser = ArgumentParser(description="Runs pylint recursively on a directory")
parser.add_argument(
"-v",
"--verbose",
dest="verbose",
action="store_true",
default=False,
help="Verbose mode (report which files were found for testing).",
)
parser.add_argument(
"--rcfile",
dest="rcfile",
action="store",
default=".pylintrc",
help="A relative or absolute path to your pylint rcfile. Defaults to\
`.pylintrc` at the current working directory",
)
options, _ = parser.parse_known_args(args)
self.verbose = options.verbose
if options.rcfile:
if not os.path.isfile(options.rcfile):
options.rcfile = os.getcwd() + "/" + options.rcfile
self.rcfile = options.rcfile
return options
def _parse_ignores(self):
""" Parse the ignores setting from the pylintrc file if available. """
error_message = (
colorama.Fore.RED
+ "{} does not appear to be a valid pylintrc file".format(self.rcfile)
+ colorama.Fore.RESET
)
if not os.path.isfile(self.rcfile):
if not self._is_using_default_rcfile():
print(error_message)
sys.exit(1)
else:
return
config = configparser.ConfigParser()
try:
config.read(self.rcfile)
except configparser.MissingSectionHeaderError:
print(error_message)
sys.exit(1)
if config.has_section("MASTER") and config.get("MASTER", "ignore"):
self.ignore_folders += config.get("MASTER", "ignore").split(",")
def _is_using_default_rcfile(self):
return self.rcfile == os.getcwd() + "/" + self.DEFAULT_RCFILE
def _print_line(self, line):
""" Print output only with verbose flag. """
        if (self.verbose and line != 'pylint_runner.py'
                and 'test' not in line and 'migrations' not in line):
            print(line)
def get_files_from_dir(self, current_dir):
"""
Recursively walk through a directory and get all python files and then walk
through any potential directories that are found off current directory,
        so long as not within self.ignore_folders
:return: all python files that were found off current_dir
"""
if current_dir[-1] != "/" and current_dir != ".":
current_dir += "/"
files = []
for dir_file in os.listdir(current_dir):
if current_dir != ".":
file_path = current_dir + dir_file
else:
file_path = dir_file
if os.path.isfile(file_path):
file_split = os.path.splitext(dir_file)
if len(file_split) == 2 and file_split[0] != "" \
and file_split[1] == ".py":
files.append(file_path)
elif (os.path.isdir(dir_file) or os.path.isdir(file_path)) \
and dir_file not in self.ignore_folders:
path = dir_file + os.path.sep
if current_dir not in ["", "."]:
path = os.path.join(current_dir.rstrip(os.path.sep), path)
files += self.get_files_from_dir(path)
return files
    def run(self, output=None, error=None):
        """ Runs pylint on all python files in the current directory """
        pylint_output = output if output is not None else sys.stdout
        pylint_error = error if error is not None else sys.stderr
        savedout, savederr = sys.stdout, sys.stderr
        sys.stdout = pylint_output
        sys.stderr = pylint_error
        try:
            pylint_files = self.get_files_from_dir(os.curdir)
            for pylint_file in pylint_files:
                # we need to recast this as a string, else pylint enters an endless recursion
                split_file = str(pylint_file).split("/")
                split_file[-1] = colorama.Fore.CYAN + split_file[-1] + colorama.Fore.RESET
                pylint_file = "/".join(split_file)
                if 'pylint' not in pylint_file:
                    self._print_line(pylint_file)
        finally:
            # restore the original streams when listing is done
            sys.stdout, sys.stderr = savedout, savederr
def main(output=None, error=None, verbose=False):
""" The main (cli) interface for the pylint runner. """
runner = Runner(args=["--verbose"] if verbose is not False else None)
runner.run(output, error)
if __name__ == "__main__":
main(verbose=True)
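A hedged, standalone illustration of the `.py` filter that `get_files_from_dir` applies to candidate files (`python_files` is an illustrative name, not part of the original runner):

```python
import os

def python_files(names):
    # Keep names with a non-empty root and a ".py" extension.
    keep = []
    for name in names:
        root, ext = os.path.splitext(name)
        if root and ext == ".py":
            keep.append(name)
    return keep

print(python_files(["runner.py", "notes.txt", ".py"]))  # ['runner.py']
```

Note that a bare `.py` is rejected because `os.path.splitext` treats a leading dot as part of the root, leaving an empty extension.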
| 36.087432 | 139 | 0.629164 | 842 | 6,604 | 4.789786 | 0.30285 | 0.024795 | 0.016861 | 0.011158 | 0.062485 | 0.0243 | 0 | 0 | 0 | 0 | 0 | 0.003151 | 0.279073 | 6,604 | 182 | 140 | 36.285714 | 0.84394 | 0.271048 | 0 | 0.073395 | 0 | 0 | 0.078764 | 0.005293 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073395 | false | 0 | 0.06422 | 0.009174 | 0.211009 | 0.045872 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d6b40e4e24a3388d3de220686239dd259ccfe1 | 2,141 | py | Python | util/intensity_scaling.py | LeHenschel/DeepFCD | 494c92dfd80b55aa2369095853bfea84b770acdc | [
"MIT"
] | 3 | 2020-03-24T11:16:50.000Z | 2022-01-21T14:50:41.000Z | util/intensity_scaling.py | LeHenschel/DeepFCD | 494c92dfd80b55aa2369095853bfea84b770acdc | [
"MIT"
] | null | null | null | util/intensity_scaling.py | LeHenschel/DeepFCD | 494c92dfd80b55aa2369095853bfea84b770acdc | [
"MIT"
] | 3 | 2022-01-07T10:53:45.000Z | 2022-03-02T15:31:29.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 7 17:24:20 2020
Rescaling exported synthetic PNGs to the intensity range of real PNGs.
MinMax-Scaling not recommended. Better use Histogram Matching
TODO: Integrate directly into apply_GAN notebook
@author: bdavid
"""
import os
import numpy as np
import glob
from PIL import Image
#from matplotlib import pyplot as plt
from skimage.exposure import match_histograms
SYNTHPATH='/home/bdavid/Deep_Learning/playground/fake_flair_2d/png_cor/synth_T1/'
REALPATH='/home/bdavid/Deep_Learning/playground/fake_flair_2d/png_cor/T1/'
OUTPATH='/home/bdavid/Deep_Learning/playground/intensity_rescaled/T1_synth/test'
def intensity_rescale(synth_img, real_img):
    real_img = np.array(Image.open(real_img))
    synth_img = np.array(Image.open(synth_img))
    min_real = np.min(real_img)
    max_real = np.max(real_img)
    scale = (max_real - min_real) / (np.max(synth_img) - np.min(synth_img))
    offset = max_real - scale * np.max(synth_img)
    synth_img_scaled = scale * synth_img + offset
# plt.figure()
# plt.imshow(synth_img,cmap='gray',vmin=0,vmax=255)
# plt.figure()
# plt.imshow(real_img,cmap='gray',vmin=0,vmax=255)
# plt.figure()
# plt.imshow(synth_img_scaled.astype(int),cmap='gray',vmin=0,vmax=255)
return Image.fromarray(np.uint8(synth_img_scaled))
def histo_matching(synth_img, real_img):
    real_img = np.array(Image.open(real_img))
    synth_img = np.array(Image.open(synth_img))
    synth_img_scaled = match_histograms(synth_img, real_img)
return Image.fromarray(np.uint8(synth_img_scaled))
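A hedged numeric sketch of the min-max mapping used in `intensity_rescale`, applied to plain numbers to show how a source range is transferred onto a reference range (`minmax_rescale` is an illustrative name, not part of the original script):

```python
def minmax_rescale(values, target_min, target_max):
    scale = (target_max - target_min) / (max(values) - min(values))
    offset = target_max - scale * max(values)
    return [scale * v + offset for v in values]

print(minmax_rescale([10, 20, 30], 0, 255))  # [0.0, 127.5, 255.0]
```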
synth_list=glob.glob(os.path.join(SYNTHPATH,"test/*.png"))
real_list=glob.glob(os.path.join(REALPATH,"test/*.png"))
os.makedirs(OUTPATH, exist_ok=True)
for synth_img, real_img in zip(synth_list, real_list):
# synth_img_minmax_scaled= intensity_rescale(synth_img, real_img)
# synth_img_minmax_scaled.save(os.path.join(OUTPATH,'test','minmax',synth_img.split('/')[-1]))
synth_img_histo_scaled = histo_matching(synth_img, real_img)
synth_img_histo_scaled.save(os.path.join(OUTPATH,synth_img.split('/')[-1]))
| 32.439394 | 98 | 0.744512 | 339 | 2,141 | 4.463127 | 0.330383 | 0.137475 | 0.052875 | 0.059484 | 0.487112 | 0.442168 | 0.27495 | 0.27495 | 0.220753 | 0.220753 | 0 | 0.018104 | 0.12284 | 2,141 | 65 | 99 | 32.938462 | 0.78754 | 0.317609 | 0 | 0.214286 | 0 | 0 | 0.154539 | 0.139986 | 0 | 0 | 0 | 0.015385 | 0 | 1 | 0.071429 | false | 0 | 0.178571 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
d8d6b4d53e13b0fd18dcd2609163a130f5b31c93 | 1,311 | py | Python | mysite/polls/migrations/0007_auto_20150314_0332.py | aaronkrolik/rule46 | 20d3e384768caced5b76f37e8fdefc2e9fb129d6 | [
"Apache-2.0"
] | null | null | null | mysite/polls/migrations/0007_auto_20150314_0332.py | aaronkrolik/rule46 | 20d3e384768caced5b76f37e8fdefc2e9fb129d6 | [
"Apache-2.0"
] | null | null | null | mysite/polls/migrations/0007_auto_20150314_0332.py | aaronkrolik/rule46 | 20d3e384768caced5b76f37e8fdefc2e9fb129d6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('polls', '0006_auto_20150314_0320'),
]
operations = [
migrations.CreateModel(
name='Accolade',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('title', models.CharField(max_length=200)),
('accolade_text', models.TextField()),
('player', models.ForeignKey(to='polls.Player')),
],
options={
},
bases=(models.Model,),
),
migrations.AddField(
model_name='player',
name='position',
field=models.CharField(default='x', max_length=200),
preserve_default=False,
),
migrations.AddField(
model_name='player',
name='salary',
field=models.IntegerField(default=0),
preserve_default=True,
),
migrations.AddField(
model_name='player',
name='team',
field=models.CharField(default='x', max_length=200),
preserve_default=False,
),
]
| 29.133333 | 114 | 0.536995 | 115 | 1,311 | 5.93913 | 0.495652 | 0.065886 | 0.052709 | 0.118594 | 0.338214 | 0.338214 | 0.175695 | 0.175695 | 0.175695 | 0.175695 | 0 | 0.030928 | 0.334096 | 1,311 | 44 | 115 | 29.795455 | 0.751432 | 0.016018 | 0 | 0.368421 | 0 | 0 | 0.088509 | 0.017857 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.052632 | 0 | 0.131579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d72c5cdaafc3330e3c8acc96c27ff888c380fc | 2,549 | py | Python | ceraon/api/v1/meals/schema.py | Rdbaker/Mealbound | 37cec6b45a632ac26a5341a0c9556279b6229ea8 | [
"BSD-3-Clause"
] | 1 | 2018-11-03T17:48:50.000Z | 2018-11-03T17:48:50.000Z | ceraon/api/v1/meals/schema.py | Rdbaker/Mealbound | 37cec6b45a632ac26a5341a0c9556279b6229ea8 | [
"BSD-3-Clause"
] | 3 | 2021-03-09T09:47:04.000Z | 2022-02-12T13:04:41.000Z | ceraon/api/v1/meals/schema.py | Rdbaker/Mealbound | 37cec6b45a632ac26a5341a0c9556279b6229ea8 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""Meal schema."""
from datetime import datetime as dt
from marshmallow import Schema, ValidationError, fields, validates
from ceraon.api.v1.locations.schema import LocationSchema
from ceraon.api.v1.users.schema import UserSchema
from ceraon.api.v1.tags.schema import TagSchema
from ceraon.constants import Errors
class MealSchema(Schema):
"""A schema for a Meal model."""
id = fields.UUID(dump_only=True)
name = fields.String(required=True)
description = fields.String()
scheduled_for = fields.DateTime(required=True)
price = fields.Float(required=True, places=2)
host = fields.Nested(UserSchema, dump_only=True)
location = fields.Nested(LocationSchema, dump_only=True,
load_from='user.location')
my_review = fields.Nested('ReviewSchema', dump_only=True,
exclude=('meal',))
num_reviews = fields.Int(dump_only=True)
num_guests = fields.Int(dump_only=True)
max_guests = fields.Int()
avg_rating = fields.Float(places=2, dump_only=True)
tags = fields.Nested(TagSchema, many=True)
guest_fields = ['my_review']
private_fields = ['location.{}'.format(field)
for field in LocationSchema.private_fields] + \
['host.{}'.format(field)
for field in UserSchema.private_fields]
class Meta:
"""Meta class for Meal schema."""
type_ = 'meal'
strict = True
@validates('name')
def validate_name(self, value):
"""Validate the name field."""
if not value:
raise ValidationError(Errors.MEAL_NAME_MISSING[1])
@validates('price')
def validate_price(self, value):
"""Validate the price field."""
if not value:
raise ValidationError(Errors.MEAL_PRICE_MISSING[1])
if value < 0:
raise ValidationError(Errors.MEAL_PRICE_NEGATIVE[1])
@validates('scheduled_for')
def validate_scheduled_for(self, value):
"""Validate the scheduled_for field."""
# if there is no timezone
if value.tzinfo is None:
# assign the server's timezone
value = value.astimezone()
if value <= dt.now().astimezone():
raise ValidationError(Errors.MEAL_CREATE_IN_PAST[1])
@validates('max_guests')
def validate_max_guests(self, value):
"""Validate the max_guests field."""
if value is not None and value < 1:
            raise ValidationError(Errors.BAD_MAX_GUESTS[1])
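A hedged, standalone sketch of the `astimezone()` behavior that `validate_scheduled_for` relies on: calling it with no argument attaches the server's local timezone to a naive datetime, making it comparable against an aware "now":

```python
from datetime import datetime

naive = datetime(2030, 1, 1, 12, 0)
aware = naive.astimezone()  # attaches the local zone to a naive datetime
print(aware.tzinfo is not None)  # True
print(aware <= datetime.now().astimezone())  # False for a far-future meal
```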
| 34.445946 | 69 | 0.640251 | 304 | 2,549 | 5.233553 | 0.315789 | 0.035198 | 0.052797 | 0.050283 | 0.134507 | 0.056568 | 0.056568 | 0.056568 | 0 | 0 | 0 | 0.006276 | 0.249902 | 2,549 | 73 | 70 | 34.917808 | 0.825837 | 0.101608 | 0 | 0.04 | 0 | 0 | 0.040853 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.12 | 0 | 0.54 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d8d7c06e5b65b400c9d18a38d0705a519de0d5e6 | 506 | py | Python | Vbox_postgres_conn.py | RajasKhokle/Drug_Web | e59496fc4af69e34a5b0b4d3acb498a038ad831c | [
"BSD-3-Clause"
] | null | null | null | Vbox_postgres_conn.py | RajasKhokle/Drug_Web | e59496fc4af69e34a5b0b4d3acb498a038ad831c | [
"BSD-3-Clause"
] | null | null | null | Vbox_postgres_conn.py | RajasKhokle/Drug_Web | e59496fc4af69e34a5b0b4d3acb498a038ad831c | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Fri Mar 29 08:14:44 2019
Purpose: Connect to the Postgresql Drug Database on local virtual machine on UBUNTU
@author: Rajas Khokle
"""
import pandas as pd
from sqlalchemy import create_engine
# Create Connection to the database
engine = create_engine('postgres://postgres:raj_drug_2019@10.37.17.10:5432/diabetes')
df = pd.read_csv(r'P:\GA Capstone\Health Forcast\Capstone Deliverables\Tableau\Capstone.csv')
df.to_sql('df', engine, if_exists='append', index=True)
d8d80406f757e14704187e04f0b5d07b32575e58 | 1,071 | py | Python | core/objs/zona.py | aanacleto/erp- | 9c2d5388248cfe4b8cdb8454f6f47df4cb521f0e | [
"MIT"
] | null | null | null | core/objs/zona.py | aanacleto/erp- | 9c2d5388248cfe4b8cdb8454f6f47df4cb521f0e | [
"MIT"
] | null | null | null | core/objs/zona.py | aanacleto/erp- | 9c2d5388248cfe4b8cdb8454f6f47df4cb521f0e | [
"MIT"
] | 2 | 2017-12-04T14:59:22.000Z | 2018-12-06T18:50:29.000Z | # !/usr/bin/env python3
# -*- encoding: utf-8 -*-
"""
ERP+
"""
__author__ = 'António Anacleto'
__credits__ = []
__version__ = "1.0"
__maintainer__ = "António Anacleto"
__status__ = "Development"
__model_name__ = 'zona.Zona'
import auth, base_models
from orm import *
from form import *
class Zona(Model, View):
def __init__(self, **kargs):
Model.__init__(self, **kargs)
self.__name__ = 'zona'
self.__title__ = 'Zonas de Distribuição'
self.__model_name__ = __model_name__
self.__list_edit_mode__ = 'inline'
self.__order_by__ = 'zona.nome'
self.__auth__ = {
'read':['All'],
'write':['Gestor'],
'create':['Gestor'],
'delete':['Gestor'],
'full_access':['Gestor']
}
self.__get_options__ = ['nome']
        self.nome = string_field(view_order=1, name='Nome', size=80)
        self.contratos = list_field(view_order=2, name='Contratos', model_name='contrato.Contrato', condition="zona='{id}'", list_edit_mode='edit', onlist=False)
| 29.75 | 164 | 0.605042 | 119 | 1,071 | 4.773109 | 0.554622 | 0.06338 | 0.045775 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009804 | 0.238095 | 1,071 | 35 | 165 | 30.6 | 0.686275 | 0.047619 | 0 | 0 | 0 | 0 | 0.20099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.111111 | 0 | 0.185185 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d85cecefde2c0134f937fbe84f1d254b9a273b | 4,383 | py | Python | biothings/hub/upgrade.py | sirloon/biothings.api | 8a981fa2151e368d0ca76aaf226eb565d794d4fb | [
"Apache-2.0"
] | null | null | null | biothings/hub/upgrade.py | sirloon/biothings.api | 8a981fa2151e368d0ca76aaf226eb565d794d4fb | [
"Apache-2.0"
] | null | null | null | biothings/hub/upgrade.py | sirloon/biothings.api | 8a981fa2151e368d0ca76aaf226eb565d794d4fb | [
"Apache-2.0"
] | null | null | null | import sys
from biothings.utils.hub_db import get_src_dump, get_data_plugin, get_hub_db_conn, backup, restore
from biothings import config
logging = config.logger
def migrate_0dot1_to_0dot2():
"""
mongodb src_dump/data_plugin changed:
1. "data_folder" and "release" under "download"
2. "data_folder" and "release" in upload.jobs[subsrc] taken from "download"
3. no more "err" under "upload"
4. no more "status" under "upload"
5. "pending_to_upload" is now "pending": ["upload"]
"""
src_dump = get_src_dump()
data_plugin = get_data_plugin()
for srccol in [src_dump,data_plugin]:
logging.info("Converting collection %s" % srccol)
srcs = [src for src in srccol.find()]
wasdue = False
for src in srcs:
logging.info("\tConverting '%s'" % src["_id"])
# 1.
for field in ["data_folder","release"]:
if field in src:
logging.debug("\t\t%s: found '%s' in document, moving under 'download'" % (src["_id"],field))
try:
src["download"][field] = src.pop(field)
wasdue = True
except KeyError as e:
logging.warning("\t\t%s: no such field '%s' found, skip it (error: %s)" % (src["_id"],field,e))
# 2.
for subsrc_name in src.get("upload",{}).get("jobs",{}):
for field in ["data_folder","release"]:
if not field in src["upload"]["jobs"][subsrc_name]:
logging.debug("\t\t%s: no '%s' found in upload jobs, taking it from 'download' (or from root keys)" % (src["_id"],field))
try:
src["upload"]["jobs"][subsrc_name][field] = src["download"][field]
wasdue = True
except KeyError:
try:
src["upload"]["jobs"][subsrc_name][field] = src[field]
wasdue = True
except KeyError:
logging.warning("\t\t%s: no such field '%s' found, skip it" % (src["_id"],field))
# 3. & 4.
for field in ["err","status"]:
if field in src.get("upload",{}):
logging.debug("\t\t%s: removing '%s' key from 'upload'" % (src["_id"],field))
src["upload"].pop(field)
wasdue = True
# 5.
if "pending_to_upload" in src:
logging.debug("\t%s: found 'pending_to_upload' field, moving to 'pending' list" % src["_id"])
src.pop("pending_to_upload")
wasdue = True
if not "upload" in src.get("pending",[]):
src.setdefault("pending",[]).append("upload")
if wasdue:
logging.info("\tFinishing converting document for '%s'" % src["_id"])
srccol.save(src)
else:
logging.info("\tDocument for '%s' already converted" % src["_id"])
def migrate(from_version, to_version,restore_if_failure=True):
func_name = "migrate_%s_to_%s" % (from_version.replace(".","dot"),
to_version.replace(".","dot"))
# backup
db = get_hub_db_conn()[config.DATA_HUB_DB_DATABASE]
logging.info("Backing up %s" % db.name)
path = backup()
logging.info("Backup file: %s" % path)
thismodule = sys.modules[__name__]
try:
func = getattr(thismodule,func_name)
except AttributeError:
logging.error("Can't upgrade, no such function to migrate from '%s' to '%s'" % (from_version, to_version))
raise
# resolve A->C = A->B then B->C
logging.info("Start upgrading from '%s' to '%s'" % (from_version, to_version))
try:
func()
except Exception as e:
logging.exception("Failed upgrading: %s")
if restore_if_failure:
logging.info("Now restoring original database from '%s" % path)
restore(db,path,drop=True)
logging.info("Done. If you want to keep converted data for inspection, use restore_if_failure=False")
else:
logging.info("*not* restoring original data. It can still be restored using file '%s'" % path)
| 44.72449 | 145 | 0.531371 | 525 | 4,383 | 4.293333 | 0.257143 | 0.048802 | 0.006655 | 0.022626 | 0.213398 | 0.116238 | 0.116238 | 0.090506 | 0.035492 | 0.035492 | 0 | 0.004806 | 0.335387 | 4,383 | 97 | 146 | 45.185567 | 0.768967 | 0.080995 | 0 | 0.216216 | 0 | 0.027027 | 0.254395 | 0.006027 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.040541 | 0 | 0.067568 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8d8d4bab6bca93fe7ec5b879bc940d20a949497 | 22,052 | py | Python | capirca/lib/gce.py | supertylerc/capirca | 31235e964c9893f3f3432d84604fbaa727384047 | [
"Apache-2.0"
] | null | null | null | capirca/lib/gce.py | supertylerc/capirca | 31235e964c9893f3f3432d84604fbaa727384047 | [
"Apache-2.0"
] | null | null | null | capirca/lib/gce.py | supertylerc/capirca | 31235e964c9893f3f3432d84604fbaa727384047 | [
"Apache-2.0"
] | null | null | null | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Google Compute Engine firewall generator.
More information about GCE networking and firewalls:
https://cloud.google.com/compute/docs/networking
https://cloud.google.com/compute/docs/reference/latest/firewalls
"""
import copy
import datetime
import ipaddress
import json
import logging
import re
from typing import Dict, Any
from capirca.lib import gcp
from capirca.lib import nacaddr
import six
class Error(Exception):
"""Generic error class."""
class GceFirewallError(Error):
"""Raised with problems in formatting for GCE firewall."""
class ExceededAttributeCountError(Error):
"""Raised when the total attribute count of a policy is above the maximum."""
def IsDefaultDeny(term):
"""Returns true if a term is a default deny without IPs, ports, etc."""
skip_attrs = ['flattened', 'flattened_addr', 'flattened_saddr',
'flattened_daddr', 'action', 'comment', 'name', 'logging']
if 'deny' not in term.action:
return False
# This lc will look through all methods and attributes of the object.
# It returns only the attributes that need to be looked at to determine if
# this is a default deny.
for i in [a for a in dir(term) if not a.startswith('__') and
a.islower() and not callable(getattr(term, a))]:
if i in skip_attrs:
continue
v = getattr(term, i)
if isinstance(v, str) and v:
return False
if isinstance(v, list) and v:
return False
return True
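The attribute scan above can be exercised standalone. The sketch below restates the same reflection-based check on a hypothetical mock term (`MockTerm` and `is_default_deny` are illustrative names only; real terms come from capirca's policy module and carry many more attributes):

```python
# Standalone sketch of the default-deny scan: a term is a default deny when
# its action is 'deny' and every non-skipped string/list attribute is empty.
class MockTerm:
    def __init__(self, action, source_address):
        self.action = action
        self.source_address = source_address
        self.comment = []
        self.name = 'test-term'

def is_default_deny(term, skip_attrs=('action', 'comment', 'name')):
    if 'deny' not in term.action:
        return False
    for attr in [a for a in dir(term) if not a.startswith('__')
                 and a.islower() and not callable(getattr(term, a))]:
        if attr in skip_attrs:
            continue
        value = getattr(term, attr)
        if isinstance(value, (str, list)) and value:
            return False
    return True

print(is_default_deny(MockTerm(['deny'], [])))              # True: nothing else set
print(is_default_deny(MockTerm(['deny'], ['10.0.0.0/8'])))  # False: has source IPs
print(is_default_deny(MockTerm(['accept'], [])))            # False: not a deny
```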
def GetNextPriority(priority):
"""Get the priority for the next rule."""
return priority
class Term(gcp.Term):
"""Creates the term for the GCE firewall."""
ACTION_MAP = {'accept': 'allowed',
'deny': 'denied'}
# Restrict the number of addresses per term to 256.
# Similar restrictions apply to source and target tags, and ports.
# Details: https://cloud.google.com/vpc/docs/quota#per_network_2
_TERM_ADDRESS_LIMIT = 256
_TERM_SOURCE_TAGS_LIMIT = 30
_TERM_TARGET_TAGS_LIMIT = 70
_TERM_PORTS_LIMIT = 256
# Firewall rule name has to match specific RE:
# The first character must be a lowercase letter, and all following characters
# must be a dash, lowercase letter, or digit, except the last character, which
# cannot be a dash.
# Details: https://cloud.google.com/compute/docs/reference/latest/firewalls
_TERM_NAME_RE = re.compile(r'^[a-z]([-a-z0-9]*[a-z0-9])?$')
# Protocols allowed by name from:
# https://cloud.google.com/vpc/docs/firewalls#protocols_and_ports
_ALLOW_PROTO_NAME = frozenset(
['tcp', 'udp', 'icmp', 'esp', 'ah', 'ipip', 'sctp',
'all' # Needed for default deny, do not use in policy file.
])
# Any protocol not in _ALLOW_PROTO_NAME must be passed by number.
ALWAYS_PROTO_NUM = set(gcp.Term.PROTO_MAP.keys()) - _ALLOW_PROTO_NAME
def __init__(self, term, inet_version='inet', policy_inet_version='inet'):
super().__init__(term)
self.term = term
self.inet_version = inet_version
# This is to handle mixed, where the policy_inet_version is mixed,
# but the term inet version is either inet/inet6.
# This is only useful for term name and priority.
self.policy_inet_version = policy_inet_version
self._validateDirection()
if self.term.source_address_exclude and not self.term.source_address:
raise GceFirewallError(
'GCE firewall does not support address exclusions without a source '
'address list.')
# The reason for the error below isn't because of a GCE restriction, but
# because we don't want to use a bad default of GCE that allows talking
# to anything when there's no source address, source tag, or source service
# account.
if (not self.term.source_address and
not self.term.source_tag) and self.term.direction == 'INGRESS':
raise GceFirewallError(
'GCE firewall needs either to specify source address or source tags.')
if self.term.source_port:
raise GceFirewallError(
'GCE firewall does not support source port restrictions.')
if (self.term.source_address_exclude and self.term.source_address or
self.term.destination_address_exclude and
self.term.destination_address):
self.term.FlattenAll()
if not self.term.source_address and self.term.direction == 'INGRESS':
raise GceFirewallError(
'GCE firewall rule no longer contains any source addresses after '
'the prefixes in source_address_exclude were removed.')
# Similarly to the comment above, the reason for this error is also
# because we do not want to use the bad default of GCE that allows for
# talking to anything when there is no IP address provided for this field.
if not self.term.destination_address and self.term.direction == 'EGRESS':
raise GceFirewallError(
'GCE firewall rule no longer contains any destination addresses '
'after the prefixes in destination_address_exclude were removed.')
def __str__(self):
"""Convert term to a string."""
return json.dumps(self.ConvertToDict(), indent=2,
separators=(six.ensure_str(','), six.ensure_str(': ')))
def _validateDirection(self):
if self.term.direction == 'INGRESS':
if not self.term.source_address and not self.term.source_tag:
raise GceFirewallError(
'Ingress rule missing required field oneof "sourceRanges" or '
'"sourceTags".')
if self.term.destination_address:
raise GceFirewallError('Ingress rules cannot include '
'"destinationRanges".')
elif self.term.direction == 'EGRESS':
if self.term.source_address:
raise GceFirewallError(
'Egress rules cannot include "sourceRanges".')
if not self.term.destination_address:
raise GceFirewallError(
'Egress rule missing required field "destinationRanges".')
if self.term.destination_tag:
raise GceFirewallError(
'GCE Egress rule cannot have destination tag.')
def ConvertToDict(self):
"""Convert term to a dictionary.
This is used to get a dictionary describing this term which can be
output easily as a JSON blob.
Returns:
A dictionary that contains all fields necessary to create or update a GCE
firewall.
Raises:
GceFirewallError: The term name is too long.
"""
if self.term.owner:
self.term.comment.append('Owner: %s' % self.term.owner)
term_dict = {
'description': ' '.join(self.term.comment),
'name': self.term.name,
'direction': self.term.direction
}
if self.term.network:
term_dict['network'] = self.term.network
term_dict['name'] = '%s-%s' % (
self.term.network.split('/')[-1], term_dict['name'])
# Identify if this is inet6 processing for a term under a mixed policy.
mixed_policy_inet6_term = False
if self.policy_inet_version == 'mixed' and self.inet_version == 'inet6':
mixed_policy_inet6_term = True
# Update term name to have the IPv6 suffix for the inet6 rule.
if mixed_policy_inet6_term:
term_dict['name'] = gcp.GetIpv6TermName(term_dict['name'])
# Checking counts of tags, and ports to see if they exceeded limits.
if len(self.term.source_tag) > self._TERM_SOURCE_TAGS_LIMIT:
raise GceFirewallError(
'GCE firewall rule exceeded number of source tags per rule: %s' %
self.term.name)
if len(self.term.destination_tag) > self._TERM_TARGET_TAGS_LIMIT:
raise GceFirewallError(
'GCE firewall rule exceeded number of target tags per rule: %s' %
self.term.name)
if self.term.source_tag:
if self.term.direction == 'INGRESS':
term_dict['sourceTags'] = self.term.source_tag
elif self.term.direction == 'EGRESS':
term_dict['targetTags'] = self.term.source_tag
if self.term.destination_tag and self.term.direction == 'INGRESS':
term_dict['targetTags'] = self.term.destination_tag
if self.term.priority:
term_dict['priority'] = self.term.priority
# Update term priority for the inet6 rule.
if mixed_policy_inet6_term:
term_dict['priority'] = GetNextPriority(term_dict['priority'])
rules = []
# If 'mixed' ends up in individual term inet_version, something has gone
# horribly wrong. The only valid values are inet/inet6.
term_af = self.AF_MAP.get(self.inet_version)
if self.inet_version == 'mixed':
raise GceFirewallError(
'GCE firewall rule has incorrect inet_version for rule: %s' %
self.term.name)
# Exit early for inet6 processing of mixed rules that have only tags,
# and no IP addresses, since this is handled in the inet processing.
if mixed_policy_inet6_term:
if not self.term.source_address and not self.term.destination_address:
if 'targetTags' in term_dict or 'sourceTags' in term_dict:
return []
saddrs = sorted(self.term.GetAddressOfVersion('source_address', term_af),
key=ipaddress.get_mixed_type_key)
daddrs = sorted(
self.term.GetAddressOfVersion('destination_address', term_af),
key=ipaddress.get_mixed_type_key)
# If the address got filtered out and is empty due to address family, we
# don't render the term. At this point of term processing, the direction
# has already been validated, so we can just log and return empty rule.
if self.term.source_address and not saddrs:
logging.warning(
'WARNING: Term %s is not being rendered for %s, '
'because there are no addresses of that family.', self.term.name,
self.inet_version)
return []
if self.term.destination_address and not daddrs:
logging.warning(
'WARNING: Term %s is not being rendered for %s, '
'because there are no addresses of that family.', self.term.name,
self.inet_version)
return []
if not self.term.protocol:
raise GceFirewallError(
'GCE firewall rule contains no protocol, it must be specified.')
proto_dict = copy.deepcopy(term_dict)
if self.term.logging:
proto_dict['logConfig'] = {'enable': True}
filtered_protocols = []
for proto in self.term.protocol:
# ICMP filtering by inet_version
# Since each term has inet_version, 'mixed' is correctly processed here.
# Convert protocol to number for uniformity of comparison.
# PROTO_MAP always returns protocol number.
if proto in self._ALLOW_PROTO_NAME:
proto_num = self.PROTO_MAP[proto]
else:
proto_num = proto
if proto_num == self.PROTO_MAP['icmp'] and self.inet_version == 'inet6':
logging.warning(
'WARNING: Term %s is being rendered for inet6, ICMP '
'protocol will not be rendered.', self.term.name)
continue
if proto_num == self.PROTO_MAP['icmpv6'] and self.inet_version == 'inet':
logging.warning(
'WARNING: Term %s is being rendered for inet, ICMPv6 '
'protocol will not be rendered.', self.term.name)
continue
if proto_num == self.PROTO_MAP['igmp'] and self.inet_version == 'inet6':
logging.warning(
'WARNING: Term %s is being rendered for inet6, IGMP '
'protocol will not be rendered.', self.term.name)
continue
filtered_protocols.append(proto)
# If there is no protocol left after ICMP/IGMP filtering, drop this term.
if not filtered_protocols:
return []
for proto in filtered_protocols:
# If the protocol name is not supported, protocol number is used.
# This is done by default in policy.py.
if proto not in self._ALLOW_PROTO_NAME:
logging.info(
'INFO: Term %s is being rendered using protocol number',
self.term.name)
dest = {
'IPProtocol': proto
}
if self.term.destination_port:
ports = []
for start, end in self.term.destination_port:
if start == end:
ports.append(str(start))
else:
ports.append('%d-%d' % (start, end))
if len(ports) > self._TERM_PORTS_LIMIT:
raise GceFirewallError(
'GCE firewall rule exceeded number of ports per rule: %s' %
self.term.name)
dest['ports'] = ports
action = self.ACTION_MAP[self.term.action[0]]
dict_val = []
if action in proto_dict:
dict_val = proto_dict[action]
if not isinstance(dict_val, list):
dict_val = [dict_val]
dict_val.append(dest)
proto_dict[action] = dict_val
# There's a limit of 256 addresses each term can contain.
# If we're above that limit, we break the term into multiple rules.
if saddrs:
source_addr_chunks = [
saddrs[x:x+self._TERM_ADDRESS_LIMIT] for x in range(
0, len(saddrs), self._TERM_ADDRESS_LIMIT)]
for i, chunk in enumerate(source_addr_chunks):
rule = copy.deepcopy(proto_dict)
if len(source_addr_chunks) > 1:
rule['name'] = '%s-%d' % (rule['name'], i+1)
rule['sourceRanges'] = [str(saddr) for saddr in chunk]
rules.append(rule)
elif daddrs:
dest_addr_chunks = [
daddrs[x:x+self._TERM_ADDRESS_LIMIT] for x in range(
0, len(daddrs), self._TERM_ADDRESS_LIMIT)]
for i, chunk in enumerate(dest_addr_chunks):
rule = copy.deepcopy(proto_dict)
if len(dest_addr_chunks) > 1:
rule['name'] = '%s-%d' % (rule['name'], i+1)
rule['destinationRanges'] = [str(daddr) for daddr in chunk]
rules.append(rule)
else:
rules.append(proto_dict)
# Sanity checking term name lengths.
long_rules = [rule['name'] for rule in rules if len(rule['name']) > 63]
if long_rules:
raise GceFirewallError(
'GCE firewall name ended up being too long: %s' % long_rules)
return rules
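The 256-address split above uses a standard list-slicing idiom, with a 1-based numeric suffix appended to rule names only when more than one chunk is produced. A minimal standalone illustration (the `chunk_addresses` helper name is ours, not part of this module):

```python
# Split a list of address strings into slices of at most `limit` entries,
# suffixing the rule name with a 1-based index when multiple chunks result,
# mirroring the source_addr_chunks logic above.
def chunk_addresses(name, addrs, limit=256):
    chunks = [addrs[x:x + limit] for x in range(0, len(addrs), limit)]
    rules = []
    for i, chunk in enumerate(chunks):
        rule_name = '%s-%d' % (name, i + 1) if len(chunks) > 1 else name
        rules.append({'name': rule_name, 'sourceRanges': chunk})
    return rules

addrs = ['10.0.%d.0/24' % i for i in range(5)]
rules = chunk_addresses('allow-web', addrs, limit=2)
print([r['name'] for r in rules])  # ['allow-web-1', 'allow-web-2', 'allow-web-3']
print(rules[-1]['sourceRanges'])   # ['10.0.4.0/24']
```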
class GCE(gcp.GCP):
"""A GCE firewall policy object."""
_PLATFORM = 'gce'
SUFFIX = '.gce'
_SUPPORTED_AF = frozenset(('inet', 'inet6', 'mixed'))
_ANY_IP = {
'inet': nacaddr.IP('0.0.0.0/0'),
'inet6': nacaddr.IP('::/0'),
}
# Supported is 63 but we need to account for dynamic updates when the term
# is rendered (which can add proto and a counter).
_TERM_MAX_LENGTH = 53
_GOOD_DIRECTION = ['INGRESS', 'EGRESS']
_OPTIONAL_SUPPORTED_KEYWORDS = set(['expiration',
'destination_tag',
'source_tag'])
def _BuildTokens(self):
"""Build supported tokens for platform.
Returns:
tuple containing both supported tokens and sub tokens
"""
supported_tokens, _ = super()._BuildTokens()
# add extra things
supported_tokens |= {'destination_tag',
'expiration',
'owner',
'priority',
'source_tag'}
# remove unsupported things
supported_tokens -= {'icmp_type',
'platform',
'platform_exclude',
'verbatim'}
# easier to make a new structure
supported_sub_tokens = {'action': {'accept', 'deny'}}
return supported_tokens, supported_sub_tokens
def _TranslatePolicy(self, pol, exp_info):
self.gce_policies = []
max_attribute_count = 0
total_attribute_count = 0
total_rule_count = 0
current_date = datetime.datetime.utcnow().date()
exp_info_date = current_date + datetime.timedelta(weeks=exp_info)
for header, terms in pol.filters:
if self._PLATFORM not in header.platforms:
continue
filter_options = header.FilterOptions(self._PLATFORM)
filter_name = header.FilterName(self._PLATFORM)
network = ''
direction = 'INGRESS'
if filter_options:
for i in self._GOOD_DIRECTION:
if i in filter_options:
direction = i
filter_options.remove(i)
# Get the address family if set.
address_family = 'inet'
for i in self._SUPPORTED_AF:
if i in filter_options:
address_family = i
filter_options.remove(i)
for opt in filter_options:
try:
max_attribute_count = int(opt)
logging.info(
'Checking policy for max attribute count %d', max_attribute_count)
filter_options.remove(opt)
break
except ValueError:
continue
if filter_options:
network = filter_options[0]
else:
logging.warning('GCE filter does not specify a network.')
term_names = set()
if IsDefaultDeny(terms[-1]):
terms[-1].protocol = ['all']
terms[-1].priority = 65534
if direction == 'EGRESS':
if address_family != 'mixed':
# Default deny also gets processed as part of terms processing.
# The name and priority get updated there.
terms[-1].destination_address = [self._ANY_IP[address_family]]
else:
terms[-1].destination_address = [self._ANY_IP['inet'],
self._ANY_IP['inet6']]
else:
if address_family != 'mixed':
terms[-1].source_address = [self._ANY_IP[address_family]]
else:
terms[-1].source_address = [
self._ANY_IP['inet'], self._ANY_IP['inet6']
]
for term in terms:
if term.stateless_reply:
logging.warning('WARNING: Term %s in policy %s is a stateless reply '
'term and will not be rendered.',
term.name, filter_name)
continue
term.network = network
if not term.comment:
term.comment = header.comment
if direction == 'EGRESS':
term.name += '-e'
term.name = self.FixTermLength(term.name)
if term.name in term_names:
raise GceFirewallError('Duplicate term name %s' % term.name)
term_names.add(term.name)
term.direction = direction
if term.expiration:
if term.expiration <= exp_info_date:
logging.info('INFO: Term %s in policy %s expires '
'in less than two weeks.', term.name, filter_name)
if term.expiration <= current_date:
logging.warning('WARNING: Term %s in policy %s is expired and '
'will not be rendered.', term.name, filter_name)
continue
if term.option:
raise GceFirewallError(
'GCE firewall does not support term options.')
# Handle mixed for each individual term as inet and inet6.
# inet/inet6 are treated the same.
term_address_families = []
if address_family == 'mixed':
term_address_families = ['inet', 'inet6']
else:
term_address_families = [address_family]
for term_af in term_address_families:
for rules in Term(term, term_af, address_family).ConvertToDict():
logging.debug('Attribute count of rule %s is: %d', term.name,
GetAttributeCount(rules))
total_attribute_count += GetAttributeCount(rules)
total_rule_count += 1
if max_attribute_count and total_attribute_count > max_attribute_count:
# Stop processing rules as soon as the attribute count is over the
# limit.
raise ExceededAttributeCountError(
'Attribute count (%d) for %s exceeded the maximum (%d)' %
(total_attribute_count, filter_name, max_attribute_count))
self.gce_policies.append(rules)
logging.info('Total rule count of policy %s is: %d', filter_name,
total_rule_count)
logging.info('Total attribute count of policy %s is: %d', filter_name,
total_attribute_count)
def __str__(self):
out = '%s\n\n' % (json.dumps(self.gce_policies, indent=2,
separators=(six.ensure_str(','),
six.ensure_str(': ')),
sort_keys=True))
return out
def GetAttributeCount(dict_term: Dict[str, Any]) -> int:
"""Calculate the attribute count of a term in its dictionary form.
The attribute count of a rule is the sum of the number of ports, protocols, IP
ranges, tags and target service account.
Note: The goal of this function is not to determine if a term is valid, but
to calculate its attribute count regardless of correctness.
Args:
dict_term: A dict object.
Returns:
int: The attribute count of the term.
"""
addresses = (len(dict_term.get('destinationRanges', []))
or len(dict_term.get('sourceRanges', [])))
proto_ports = 0
for allowed in dict_term.get('allowed', []):
proto_ports += len(allowed.get('ports', [])) + 1 # 1 for ipProtocol
for denied in dict_term.get('denied', []):
proto_ports += len(denied.get('ports', [])) + 1 # 1 for ipProtocol
tags = 0
for _ in dict_term.get('sourceTags', []):
tags += 1
for _ in dict_term.get('targetTags', []):
tags += 1
service_accounts = 0
for _ in dict_term.get('targetServiceAccount', []):
service_accounts += 1
return addresses + proto_ports + tags + service_accounts
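The counting rule implemented above (addresses, plus one per protocol entry plus its ports, plus tags and target service accounts) can be checked by hand on a small rule dict. The snippet below restates the same arithmetic standalone (`attribute_count` is our name for the restatement, not the module's function):

```python
# Standalone restatement of the attribute count: addresses + (protocol
# entries + their ports) + tags + target service accounts.
def attribute_count(rule):
    addresses = (len(rule.get('destinationRanges', []))
                 or len(rule.get('sourceRanges', [])))
    proto_ports = 0
    for entry in rule.get('allowed', []) + rule.get('denied', []):
        proto_ports += len(entry.get('ports', [])) + 1  # +1 for IPProtocol
    tags = len(rule.get('sourceTags', [])) + len(rule.get('targetTags', []))
    return addresses + proto_ports + tags + len(rule.get('targetServiceAccount', []))

rule = {
    'sourceRanges': ['10.0.0.0/8', '192.168.0.0/16'],               # 2 addresses
    'allowed': [{'IPProtocol': 'tcp', 'ports': ['22', '80-443']}],  # 1 proto + 2 ports
    'sourceTags': ['web'],                                          # 1 tag
}
print(attribute_count(rule))  # 2 + 3 + 1 = 6
```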
# myhabitatagent.py (from the karkuspeter/habitat-challenge repository, MIT)
import argparse
import habitat
import random
import numpy as np
import scipy
import os
import cv2
import time
from habitat.tasks.nav.shortest_path_follower import ShortestPathFollower
from habitat.utils.visualizations import maps
from gibsonagents.expert import Expert
from gibsonagents.pathplanners import Dstar_planner, Astar3D, VI_planner
from gibsonagents.classic_mapping import rotate_2d, ClassicMapping, map_path_for_sim
from utils.dotdict import dotdict
from utils.tfrecordfeatures import tf_bytes_feature, tf_int64_feature, sequence_feature_wrapper # tf_bytelist_feature
from habitat_utils import load_map_from_file, encode_image_to_png, get_model_id_from_episode, get_floor_from_json
from vin import grid_actions_from_trajectory, project_state_and_goal_to_smaller_map
import quaternion
from multiprocessing import Queue, Process
import atexit
import platform
from arguments import parse_args
import tensorflow as tf
from train import get_brain, get_tf_config
from common_net import load_from_file, count_number_trainable_params
from visualize.visualize_habitat_training import plot_viewpoints, plot_target_and_path, mapping_visualizer
from gen_habitat_data import actions_from_trajectory
from gen_planner_data import rotate_map_and_poses, Transform2D
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.gridspec as gridspec
try:
import ipdb as pdb
except:
import pdb
# # Fix multiprocessing on mac OSX
# if platform.system() == "Darwin":
# import multiprocessing
# multiprocessing.set_start_method('spawn')
ACTION_SOURCE = "plan" #"expert" # "plan"
# START_WITH_SPIN = False # True
SPIN_TARGET = np.deg2rad(370) # np.deg2rad(270) # np.deg2rad(360 - 70)
SPIN_DIRECTION = 1 # 1 for same direction as target, -1 for opposite direction. Opposite is better if target < 360
PLANNER_SINGLE_THREAD = False
PLANNER_STOP_THREAD_EACH_EPISODE = False
# COST_SETTING = 0 # 2
# SOFT_COST_MAP = True
PLANNER2D_TIMEOUT = 200 # 200. # 0.08
# PLANNER3D_TIMEOUT = 2.5 # 1.5 # 200. # 0.08 - ------------------
RECOVER_ON_COLLISION = True
COLLISION_DISTANCE_THRESHOLD = 0.6 # 0.8
MAX_SHORTCUT_TURNS = 2 # was 1 in submission
NEAR_TARGET_COLLISION_STOP_DISTANCE = 5. # when colliding within this radius of the goal, stop instead
# # Patch map with collisions and around target
TARGET_MAP_MARGIN = 2
OBSTACLE_DOWNWEIGHT_DISTANCE = 20 # from top, smaller the further
OBSTACLE_DOWNWEIGHT_SCALARS = (0.3, 0.8) # (0.3, 0.8)
EXTRA_STEPS_WHEN_EXPANDING_MAP = 30
# !!!!!!
SUPRESS_EXCEPTIONS = False
INTERACTIVE_ON_EXCEPTIONS = True
PLOT_EVERY_N_STEP = -1
PRINT_TIMES = True
INTERACTIVE_PLOT = True
PLOT_PROCESS = False # True
SAVE_VIDEO = True # will save params.num_video number of videos, or all if interactive
USE_ASSERTS = False
# 42 * 60 * 60 - 3 * 60 * 60 # 30 * 60 - 5 * 60 #
TOTAL_TIME_LIMIT = 42 * 60 * 60 - 30 * 60 # challenge gave up at 38h and finished at 39h, so 120 minutes should be enough. More recently, all give-up runs finished in 6 mins.
# 42 hours = 2520 mins for 1000-2000 episodes.
# Average episode time should be < 75.6 sec
ERROR_ON_TIMEOUT = False # True
SKIP_FIRST_N_FOR_TEST = -1 # 10 # 10 # 10
VIDEO_FRAME_SKIP = 1 # 6
VIDEO_FPS = 5 # 5 # 30
VIDEO_LARGE_PLOT = False
VIDEO_DETAILED = True
DEBUG_DUMMY_ACTIONS_ONLY = False
SKIP_FIRST_N = -1 # 1000
SKIP_AFTER_N = -1 # 1500
SKIP_MAP_SHAPE_MISMATCH = True
# !!!!!!!
REPLACE_WITH_RANDOM_ACTIONS = False
EXIT_AFTER_N_STEPS_FOR_SPEED_TEST = -1
FAKE_INPUT_FOR_SPEED_TEST = False
MAX_MAP_SIZE_FOR_SPEED_TEST = False
# DATA GENERATION - for both sim scenarios and real spot
DATA_TYPE = "scenario"
SAVE_DATA_EVERY_N = 1 # 4
DATA_FIRST_STEP_ONLY = True
DATA_MAX_TRAJLEN = 50 # when DATA_FIRST_STEP_ONLY == False
DATA_INCLUDE_NONPLANNED_ACTIONS = False
DATA_USE_LAST_SEGMENT = False # when map is smaller use either the last or the first trajectory segment
# DATA_SEPARATE_FILES = True # for real spot data
DATA_SEPARATE_FILES = False # for simulated scenario data
def giveup_settings(giveup_setting):
# # Give up settings - submission
if giveup_setting == "1":
GIVE_UP_NO_PROGRESS_STEPS = 90 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 6 # 100 # TODO increase TODO increase later distances
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [150, 220], [300, 150], [400, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes ! and distance reduction from beginning
# Give up settings - more agressive for submission2
elif giveup_setting == "2":
GIVE_UP_NO_PROGRESS_STEPS = 90 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 6
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [150, 220], [300, 100], [400, 50]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes ! and distance reduction from beginning
# Relaxed giveup settings for local evaluation (3)
elif giveup_setting == "3":
GIVE_UP_NO_PROGRESS_STEPS = 100 # 100
NO_PROGRESS_THRESHOLD = 12
GIVE_UP_NUM_COLLISIONS = 8 # 100 # TODO increase TODO increase later distances
GIVE_UP_STEP_AND_DISTANCE = [[0, 440], [150, 320], [300, 250], [400, 150]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[10., 100], [15., 120], [20., 300], [30., 400]] # in minutes ! and distance reduction from beginning
# Almost never give up -- august submission4
elif giveup_setting == "4":
GIVE_UP_NO_PROGRESS_STEPS = 100
NO_PROGRESS_THRESHOLD = 12
GIVE_UP_NUM_COLLISIONS = 20
GIVE_UP_STEP_AND_DISTANCE = [[0, 440], [150, 300], [200, 250], [250, 200], [300, 150], [350, 100], [400, 40]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] #[10., 100], [15., 120], [20., 300], [30., 400]] # in minutes ! and distance reduction from beginning
elif giveup_setting == "5":
# # Almost never give up -- sept submission5
GIVE_UP_NO_PROGRESS_STEPS = 100
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 300], [300, 200], [400, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- sept submission6
elif giveup_setting == "6":
GIVE_UP_NO_PROGRESS_STEPS = 120
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 400], [300, 250], [400, 150],
[450, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- sept submission7
elif giveup_setting == "7":
GIVE_UP_NO_PROGRESS_STEPS = 120
NO_PROGRESS_THRESHOLD = 10
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 500], [300, 300], [400, 175], [450, 100]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # Almost never give up -- nov submission8
elif giveup_setting == "8":
GIVE_UP_NO_PROGRESS_STEPS = 1000
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[200, 400], [300, 240], [400, 160]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = []
# # no giveup but 300 limit for data generation
elif giveup_setting == "data300":
GIVE_UP_NO_PROGRESS_STEPS = 1000
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [[300, 1], ] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # No giveup
elif giveup_setting == "never":
GIVE_UP_NO_PROGRESS_STEPS = 1000 # 100
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1000
GIVE_UP_STEP_AND_DISTANCE = [] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # No giveup
elif giveup_setting == "always":
GIVE_UP_NO_PROGRESS_STEPS = 1 # 100
NO_PROGRESS_THRESHOLD = 1
GIVE_UP_NUM_COLLISIONS = 1
GIVE_UP_STEP_AND_DISTANCE = [[0, 1]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [] # in minutes ! and distance reduction from beginning
# # Very agressive for fast testing
elif giveup_setting == "fast":
GIVE_UP_NO_PROGRESS_STEPS = 50 # 100
NO_PROGRESS_THRESHOLD = 15
GIVE_UP_NUM_COLLISIONS = 1
GIVE_UP_STEP_AND_DISTANCE = [[0, 340], [100, 200], [200, 100], [300, 50]] # NOTE if changing first threshold also change max map size.
GIVE_UP_TIME_AND_REDUCTION = [[3.5, 100], [4., 120], [5., 300], [6., 400]] # in minutes ! and distance reduction from beginning
else:
raise ValueError('Unknown giveup_setting %s'%giveup_setting)
return GIVE_UP_NO_PROGRESS_STEPS, NO_PROGRESS_THRESHOLD, GIVE_UP_NUM_COLLISIONS, GIVE_UP_STEP_AND_DISTANCE, GIVE_UP_TIME_AND_REDUCTION
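The `GIVE_UP_STEP_AND_DISTANCE` schedules returned above are consumed elsewhere in the agent. Assuming the usual interpretation of such a schedule (each `[step, distance]` pair means "from this step onward, give up on goals farther than this distance"), the lookup reduces to a last-matching-threshold scan. The helper name and this interpretation are ours, not taken from this file:

```python
# Hypothetical lookup for a [[step, distance], ...] schedule: return the
# distance of the last entry whose step threshold has been reached, or
# None if no entry applies yet. Interpretation assumed, not from this file.
def distance_threshold(schedule, step):
    threshold = None
    for entry_step, entry_distance in schedule:
        if step >= entry_step:
            threshold = entry_distance
    return threshold

schedule = [[0, 340], [150, 220], [300, 150], [400, 100]]
print(distance_threshold(schedule, 0))    # 340
print(distance_threshold(schedule, 200))  # 220
print(distance_threshold(schedule, 450))  # 100
```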
class DSLAMAgent(habitat.Agent):
def __init__(self, task_config, params, env=None, logdir='./temp/', tfwriters=()):
self.start_time = time.time()
self._POSSIBLE_ACTIONS = task_config.TASK.POSSIBLE_ACTIONS
self.step_i = 0
self.episode_i = -2
self.env = env
self.task_config = task_config
self.tfwriters = tfwriters
self.num_data_entries = 0
if env is None:
self.follower = None
assert ACTION_SOURCE != "expert"
else:
self.follower = ShortestPathFollower(env._sim, 0.36/2., False)
# if len(params.gpu) > 0 and int(params.gpu[0]) > 4:
# print ("Try to explicitly disable gpu")
# try:
# tf.config.experimental.set_visible_devices([], 'GPU')
# except Exception as e:
# print("Exception " + str(e))
print (params)
self.params = params
# Giveup setting
self.GIVE_UP_NO_PROGRESS_STEPS, self.NO_PROGRESS_THRESHOLD, self.GIVE_UP_NUM_COLLISIONS, \
self.GIVE_UP_STEP_AND_DISTANCE, self.GIVE_UP_TIME_AND_REDUCTION = giveup_settings(params.giveup)
if INTERACTIVE_PLOT or self.params.interactive_video:
plt.ion()
assert params.sim in ['habitat', 'spot', 'spotsmall', 'spotsmall2', 'habitat2021']
self.map_source = self.params.agent_map_source
self.pose_source = self.params.agent_pose_source
self.action_source = ACTION_SOURCE
self.max_confidence = 0.96 # 0.98
self.confidence_threshold = None # (0.2, 0.01) # (0.35, 0.05)
self.use_custom_visibility = (self.params.visibility_mask in [2, 20, 21])
assert self.params.agent_map_source in ['true', 'true-saved', 'true-saved-sampled', 'true-saved-hrsampled',
'true-partial', 'true-partial-sampled', 'pred']
assert self.params.agent_pose_source in ['slam', 'slam-truestart', 'true']
_, gpuname = get_tf_config(devices=params.gpu) # sets CUDA_VISIBLE_DEVICES
if params.skip_slam:
print ("SKIP SLAM overwriting particles and removing noise.")
assert self.pose_source == 'true'
assert params.num_particles == 1
assert params.odom_source == 'relmotion'
self.accumulated_spin = 0.
self.spin_direction = None
self.map_ch = 2
# slam_map_ch = 1
self.max_map_size = (self.params.global_map_size, self.params.global_map_size) # (360, 360)
params.batchsize = 1
params.trajlen = 1
sensor_ch = (1 if params.mode == 'depth' else (3 if params.mode == 'rgb' else 4))
batchsize = params.batchsize
if params.seed is not None and params.seed > 0:
print("Fix Numpy and TF seed to %d" % params.seed)
tf.set_random_seed(params.seed)
np.random.seed(params.seed)
random.seed(params.seed)
# Build graph for slam and planner
with tf.Graph().as_default():
with tf.variable_scope(tf.get_variable_scope(), reuse=False):
# Choose planner
if self.params.planner == 'astar3d':
self.max_map_size = (370, 370) # also change giveup setting when changing this
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = True
assert self.params.agent_map_downscale == 1
# assert MAP_SOURCE != "true"
self.pathplanner = Astar3D(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, timeout=self.params.planner_timeout)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['dstar_track_fixsize', 'dstar4_track_fixsize']:
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = True
assert self.params.agent_map_downscale == 1
if self.params.planner in ['dstar4_track_fixsize']:
assert not self.params.connect8
else:
assert self.params.connect8
self.pathplanner = Dstar_planner(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, connect8=self.params.connect8)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['dstar_track', 'dstar2d']:
self.max_map_size = (900, 900)
self.fixed_map_size = False
self.planner_needs_cont_map = False
self.allow_shrink_map = False
assert self.params.agent_map_downscale == 1
assert self.params.connect8 # add to def config
self.pathplanner = Dstar_planner(single_thread=PLANNER_SINGLE_THREAD, max_map_size=self.max_map_size, connect8=self.params.connect8)
self.need_to_stop_planner_thread = PLANNER_STOP_THREAD_EACH_EPISODE
elif self.params.planner in ['vi4', 'vi8', 'vi4-e1', 'vi8-e1', 'vi4-noshrink', 'vi8-noshrink']:
self.fixed_map_size = True
self.planner_needs_cont_map = False
self.allow_shrink_map = (self.params.planner not in ['vi4-noshrink', 'vi8-noshrink'])
if self.params.planner in ['vi4', 'vi4-e1', 'vi4-noshrink']:
assert not self.params.connect8
else:
assert self.params.connect8
self.pathplanner = VI_planner(max_map_size=(None, None), brain="trueplanner", params=self.params, connect8=self.params.connect8,
downscale=self.params.agent_map_downscale)
self.need_to_stop_planner_thread = False
elif self.params.planner in ['vin', 'vin-e1', 'vinpred']:
self.fixed_map_size = True
self.planner_needs_cont_map = (self.params.planner in ['vinpred'])
self.allow_shrink_map = False
self.pathplanner = VI_planner(max_map_size=(None, None), brain=self.params.agent_planner_brain, params=self.params, connect8=self.params.connect8,
downscale=self.params.agent_map_downscale)
self.need_to_stop_planner_thread = False
else:
raise ValueError("Unknown planner %s"%self.params.planner)
# Test data and network
assert params.target in ["traj"]
train_brain = get_brain(params.brain, params)
req = train_brain.requirements()
self.brain_requirements = req
self.local_map_shape = req.local_map_size
# Build slam brain with placeholder inputs
self.new_images_input = tf.placeholder(shape=(batchsize, 1) + req.sensor_shape + (sensor_ch,),
dtype=tf.float32)
self.last_images_input = tf.placeholder(shape=(batchsize, 1) + req.sensor_shape + (sensor_ch,),
dtype=tf.float32)
self.past_visibility_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.visibility_input = tf.placeholder(shape=(batchsize, 1) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.past_local_maps_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (1,), dtype=tf.float32)
self.past_needed_image_features_input = tf.placeholder(shape=(batchsize, None) + tuple(req.local_map_size) + (req.latent_map_ch,), dtype=tf.float32)
self.particle_xy_input = tf.placeholder(shape=(batchsize, None, params.num_particles, 2,), dtype=tf.float32)
self.particle_yaw_input = tf.placeholder(shape=(batchsize, None, params.num_particles, 1,), dtype=tf.float32)
self.last_step_particle_logits_input = tf.placeholder(shape=(batchsize, params.num_particles),
dtype=tf.float32)
self.new_action_input = tf.placeholder(shape=(batchsize, 1, 1,), dtype=tf.int32)
self.new_rel_xy_input = tf.placeholder(shape=(batchsize, 1, 2,), dtype=tf.float32)
self.new_rel_yaw_input = tf.placeholder(shape=(batchsize, 1, 1,), dtype=tf.float32)
self.true_xy_input = tf.placeholder(shape=(batchsize, None, 2,), dtype=tf.float32)
self.true_yaw_input = tf.placeholder(shape=(batchsize, None, 1,), dtype=tf.float32)
self.inference_timesteps_input = tf.placeholder(shape=(batchsize, None), dtype=tf.int32) # indexes history to be used for slam update
self.global_map_shape_input = tf.placeholder(shape=(2, ), dtype=tf.int32)
if self.params.obstacle_downweight:
custom_obstacle_prediction_weight = Expert.get_obstacle_prediction_weight(OBSTACLE_DOWNWEIGHT_DISTANCE, OBSTACLE_DOWNWEIGHT_SCALARS, self.local_map_shape)
else:
custom_obstacle_prediction_weight = None
if FAKE_INPUT_FOR_SPEED_TEST:
self.inference_outputs = train_brain.sequential_localization_with_past_and_pred_maps(
tf.zeros_like(self.past_local_maps_input), tf.ones_like(self.past_visibility_input),
tf.zeros_like(self.past_needed_image_features_input),
tf.zeros_like(self.new_images_input), tf.zeros_like(self.true_xy_input), tf.zeros_like(self.true_yaw_input),
tf.zeros_like(self.visibility_input),
tf.zeros_like(self.particle_xy_input), tf.zeros_like(self.particle_yaw_input),
tf.zeros_like(self.new_action_input), tf.zeros_like(self.new_rel_xy_input), tf.zeros_like(self.new_rel_yaw_input),
particle_logits_acc=tf.zeros_like(self.last_step_particle_logits_input),
global_map_shape=self.global_map_shape_input,
max_confidence=self.max_confidence)
else:
###
# THIS IS USED NORMALLY
###
self.inference_outputs = train_brain.sequential_localization_with_past_and_pred_maps(
self.past_local_maps_input, self.past_visibility_input, self.past_needed_image_features_input,
self.new_images_input, self.true_xy_input, self.true_yaw_input, self.visibility_input,
self.particle_xy_input, self.particle_yaw_input,
self.new_action_input, self.new_rel_xy_input, self.new_rel_yaw_input,
inference_timesteps=self.inference_timesteps_input,
particle_logits_acc=self.last_step_particle_logits_input,
global_map_shape=(tuple(self.max_map_size) if self.fixed_map_size else self.global_map_shape_input), # self.global_map_shape_input, tuple(self.max_map_size),
max_confidence=self.max_confidence,
custom_obstacle_prediction_weight=custom_obstacle_prediction_weight,
last_images=self.last_images_input,
use_true_pose_instead_of_slam=(self.params.agent_pose_source == 'true'),
)
if PLOT_EVERY_N_STEP < 0:
self.inference_outputs = self.drop_output(self.inference_outputs, drop_names=['tiled_visibility_mask'])
self.inference_outputs_without_map = self.drop_output(self.inference_outputs, drop_names=['global_map_logodds'])
# TODO pass in map inference inputs. Could produce one processed and one unprocess map for slam.
# self.true_map_input = tf.placeholder(shape=self.max_map_size + (1, ), dtype=tf.uint8)
# self.images_input = tf.placeholder(shape=req.sensor_shape + (sensor_ch,), dtype=tf.float32)
# self.xy_input = tf.placeholder(shape=(2,), dtype=tf.float32)
# self.yaw_input = tf.placeholder(shape=(1, ), dtype=tf.float32)
# # self.action_input = tf.placeholder(shape=(2,), dtype=tf.float32)
# actions = tf.zeros((1, 1, 2), dtype=tf.float32)
# self.global_map_input = tf.placeholder(shape=self.max_map_size + (self.map_ch, ), dtype=tf.float32)
# self.visibility_input = tf.placeholder(shape=self.local_map_shape + (1, ), dtype=tf.uint8) if self.use_custom_visibility else None
# local_obj_map_labels = tf.zeros((1, 1, ) + self.local_map_shape + (1, ), dtype=np.uint8)
#
# self.inference_outputs = train_brain.sequential_inference(
# self.true_map_input[None], self.images_input[None, None], self.xy_input[None, None], self.yaw_input[None, None],
# actions, prev_global_map_logodds=self.global_map_input[None],
# local_obj_maps=local_obj_map_labels,
# confidence_threshold=self.confidence_threshold,
# max_confidence=self.max_confidence,
# max_obj_confidence=0.8,
# custom_visibility_maps=None if self.visibility_input is None else self.visibility_input[None, None],
# is_training=True)
# Add the variable initializer Op.
init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
count_number_trainable_params(verbose=True)
# training session
gpuconfig, gpuname = get_tf_config(devices=params.gpu)
self.sess = tf.Session(config=gpuconfig)
load_from_file(self.sess, params.load, partialload=params.partialload, loadcenter=[],
skip=params.loadskip, autorename=False)
self.global_map_logodds = None
self.xy = None
self.yaw = None
self.target_xy = None
self.step_i = -1
self.t = time.time()
# datafile
self.scenario_traj_data = []
# video
self.frame_traj_data = []
self.num_videos_saved = 0
self.summary_str = ""
self.filename_addition = ""
self.logdir = logdir
self.saved_map_i = 0
if self.params.interactive_video and PLOT_PROCESS:
print ("Starting plot process.. ANY PLOT IN THIS THREAD WILL LEAD TO A CRASH")
atexit.register(self.cleanup) # to stop process
self.plot_queue = Queue()
self.plot_process = Process(target=self.plot_loop, args=(self.plot_queue, ))
self.plot_process.start()
else:
self.plot_queue = None
self.plot_process = None
self.reset()
def cleanup(self):
try:
if not self.plot_process or not self.plot_process.is_alive():
return
print("Stopping plot process")
self.plot_queue.put(("exit", None), block=False)
self.plot_process.join(timeout=4.0)
self.plot_process.terminate()
except Exception as e:
print ("Destructor had an exception. %s" % str(e))
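The guarded shutdown in cleanup() above can be captured as a small free function. This is a minimal sketch with hypothetical names (`stop_plot_process` is not part of the original code); it mirrors the same pattern: no-op when the process is absent or already dead, otherwise send an exit sentinel, join with a timeout, then terminate.

```python
def stop_plot_process(plot_queue, plot_process, timeout=4.0):
    # Guarded shutdown mirroring cleanup(): do nothing when the process
    # is absent or already dead; otherwise send an ("exit", None) sentinel,
    # wait briefly for a clean exit, then force-terminate as a last resort.
    if plot_process is None or not plot_process.is_alive():
        return False
    plot_queue.put(("exit", None), block=False)
    plot_process.join(timeout=timeout)
    plot_process.terminate()
    return True
```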
def get_scene_name(self):
scene_path = "unknown" if self.env is None else self.env._sim._current_scene
scene_name = scene_path.split('/')[-1].split('.')[0]
return scene_name
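The scene-name extraction above is a plain basename-without-extension split; a minimal standalone sketch (the example path is hypothetical):

```python
def scene_name_from_path(scene_path):
    # Basename without extension, as in get_scene_name():
    # "/a/b/Adrian.glb" -> "Adrian"
    return scene_path.split('/')[-1].split('.')[0]
```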
def _to_grid_pos(self, agent_pos_0, agent_pos_2, top_down_map_dict):
if not hasattr(maps, 'COORDINATE_MAX'):
grid_pos = maps.to_grid(
agent_pos_2, # note the order!
agent_pos_0,
top_down_map_dict['map'].shape[0:2],
sim=self.env.sim,
keep_float=True,
)
else:
del top_down_map_dict
map_shape = (self.task_config.TASK.TOP_DOWN_MAP.MAP_RESOLUTION,
self.task_config.TASK.TOP_DOWN_MAP.MAP_RESOLUTION)
grid_pos = maps.to_grid(
agent_pos_0, agent_pos_2, # order does not matter here
maps.COORDINATE_MIN, maps.COORDINATE_MAX,
map_shape, keep_float=True)
return np.array(grid_pos, np.float32)
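The legacy branch above relies on the bounds-based `maps.to_grid` signature. A minimal sketch of that linear world-to-grid mapping, under the assumption that it matches the old habitat API with `keep_float=True` (names here are illustrative, not the library's):

```python
def to_grid_legacy(realworld_x, realworld_y, coord_min, coord_max, grid_resolution):
    # World coordinates in [coord_min, coord_max] mapped linearly onto a
    # grid_resolution[0] x grid_resolution[1] grid; x is flipped so that
    # larger world x maps to smaller grid row.
    grid_size = (
        (coord_max - coord_min) / grid_resolution[0],
        (coord_max - coord_min) / grid_resolution[1],
    )
    grid_x = (coord_max - realworld_x) / grid_size[0]
    grid_y = (realworld_y - coord_min) / grid_size[1]
    return (grid_x, grid_y)
```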
def reset(self, last_success=None):
self.step_i = -1
self.episode_i += 1
self.t = time.time()
self.episode_t = self.t
self.accumulated_spin = 0.
self.spin_direction = None
self.distance_history = []
self.raw_xy_transform = lambda xys: xys
self.raw_yaw_offset = 0.
self.recover_step_i = 0
self.num_collisions = 0
self.num_shortcut_actions = 0
self.num_wrong_obstacle = 0
self.num_wrong_free = 0
self.num_wrong_free_area = 0
self.num_wrong_free_area2 = 0
self.num_wrong_free_area3 = 0
self.map_mismatch_count = 0
self.plan_times = []
assert self.params.recoverpolicy in ['back5', 'back1']
num_recover_back_steps = (5 if self.params.recoverpolicy == 'back5' else 1)
self.recover_policy = [3] * 6 + [1] * num_recover_back_steps
self.global_map_logodds = None # will be initialized in act()
self.collision_timesteps = []
for tfwriter in self.tfwriters:
tfwriter.flush()
self.pathplanner.reset()
self.reset_scenario_data_writer()
self.reset_video_writer(last_success=last_success)
print ("Resetting agent %d. Scene %s."%(self.episode_i, self.get_scene_name()))
self.last_call_time = time.time()
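The recovery sequence built in reset() is six actions of type 3 followed by one or five actions of type 1. A minimal sketch of that list construction (the helper name is hypothetical; the action ids are discrete simulator actions whose exact semantics are sim-specific):

```python
def build_recover_policy(recoverpolicy):
    # Mirrors reset(): [3] * 6 + [1] * num_recover_back_steps,
    # where the number of trailing steps depends on the setting.
    assert recoverpolicy in ['back5', 'back1']
    num_back = 5 if recoverpolicy == 'back5' else 1
    return [3] * 6 + [1] * num_back
```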
def drop_output(self, outputs, drop_names):
return dotdict({key: val for key, val in outputs.items() if key not in drop_names})
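drop_output() is a key filter over the inference-output dict. The same filtering on a plain dict, as a minimal sketch (the real method additionally wraps the result in the project's `dotdict`):

```python
def drop_keys(outputs, drop_names):
    # Keep every entry whose key is not in drop_names.
    return {key: val for key, val in outputs.items() if key not in drop_names}
```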
def plan_and_control(self, xy, yaw, target_xy, global_map_pred, ang_vel, target_fi, allow_shrink_map=False, cont_global_map_pred=None):
if self.params.start_with_spin and np.abs(self.accumulated_spin) < SPIN_TARGET and self.step_i < 40:
if self.spin_direction is None:
self.spin_direction = SPIN_DIRECTION * np.sign(target_fi) # spin opposite direction to the goal
self.accumulated_spin += ang_vel
# spin
status_message = "%d: spin %f: %f"%(self.step_i, self.spin_direction, self.accumulated_spin)
action = (2 if self.spin_direction > 0 else 3)
planned_path = np.zeros((0, 2))
return action, planned_path, status_message, (global_map_pred * 255.).astype(np.uint8), None
assert global_map_pred.dtype == np.float32
assert cont_global_map_pred is None or cont_global_map_pred.dtype == np.float32
if self.params.soft_cost_map:
assert not allow_shrink_map
assert cont_global_map_pred is None
# keep global_map as continuous input
elif allow_shrink_map:
assert cont_global_map_pred is None # otherwise would need to shrink it
global_map_pred = (global_map_pred * 255.).astype(np.uint8)
xy, target_xy, global_map_pred, offset_xy = self.shrink_map(xy, target_xy, global_map_pred)
else:
global_map_pred = (global_map_pred * 255.).astype(np.uint8)
offset_xy = None
# Scan map and cost graph.
scan_graph, eroded_scan_map, normal_scan_map, costmap = Expert.get_graph_and_eroded_map(
raw_trav_map=global_map_pred[..., :1],
trav_map_for_simulator=global_map_pred[..., :1],
raw_scan_map=global_map_pred,
rescale_scan_map=1.,
erosion=self.params.map_erosion_for_planning,
build_graph=False,
interactive_channel=False,
cost_setting=self.params.cost_setting,
soft_cost_map=self.params.soft_cost_map,
)
assert self.params.goalpolicy in ['twostep', 'none']
start_time = time.time()
if self.params.planner == 'astar3d':
assert self.params.goalpolicy in ['twostep'] # need to add support below for removing twostep
action, obstacle_distance, planned_path, status_message = Expert.discrete3d_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
pathplanner=self.pathplanner)
elif self.params.planner == 'dstar2d':
assert self.params.goalpolicy in ['twostep'] # need to add support below for removing twostep
action, obstacle_distance, planned_path, status_message = Expert.discrete_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_asserts=True,
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.dstar_path(
cost_map, source_tuple, target_tuple, timeout=PLANNER2D_TIMEOUT))
elif self.params.planner in ['dstar_track', 'dstar_track_fixsize', 'dstar4_track_fixsize']:
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.dstar_path(
cost_map, source_tuple, target_tuple, timeout=PLANNER2D_TIMEOUT))
elif self.params.planner in ['vi4', 'vi8', 'vin', 'vi4-noshrink', 'vi8-noshrink']:
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
scan_map, cost_map, source_tuple, target_tuple, sess=self.sess))
elif self.params.planner in ['vinpred']:
assert not allow_shrink_map # because we are shortcutting map, not using the shrunk map
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
assert cont_global_map_pred is not None
# Shortcut map input to path planner directly
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
cont_global_map_pred * 255., # this is intentionally shortcutting the scan_map input
cost_map, source_tuple, target_tuple, sess=self.sess))
elif self.params.planner in ['vi4-e1', 'vi8-e1', 'vin-e1']:
assert not self.params.soft_cost_map # because expert policy checks traversability assuming uint map
action, obstacle_distance, planned_path, status_message = Expert.discrete_tracking_policy(
scan_map=eroded_scan_map, pos_map_float=xy, yaw=yaw, target_map_float=target_xy, cost_map=costmap,
use_lookahead=True, use_twostep_approach=(self.params.goalpolicy == 'twostep'),
shortest_path_fn=lambda _, source_tuple, target_tuple, cost_map, scan_map: self.pathplanner.vi_path(
normal_scan_map, cost_map, source_tuple, target_tuple, sess=self.sess))
else:
raise ValueError("Unknown planner %s"%(self.params.planner))
status_message = "%d/%d: %.2f %s"%(self.episode_i, self.step_i, time.time()-self.t, status_message)
self.t = time.time()
self.plan_times.append(time.time() - start_time)
if allow_shrink_map:
planned_path = planned_path + offset_xy[None]
return action, planned_path, status_message, eroded_scan_map, offset_xy
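When the map was shrunk for planning, the returned path is expressed in the shrunk-map frame, and `planned_path + offset_xy[None]` above shifts every waypoint back into the full-map frame. A minimal standalone sketch of that re-anchoring (the function name is hypothetical):

```python
def path_to_full_map_frame(planned_path, offset_xy):
    # Apply the shrunk-map origin offset to every (x, y) waypoint,
    # the same per-waypoint shift as `planned_path + offset_xy[None]`.
    return [(x + offset_xy[0], y + offset_xy[1]) for (x, y) in planned_path]
```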
def act(self, observations):
if SUPRESS_EXCEPTIONS or INTERACTIVE_ON_EXCEPTIONS:
try:
return_val = self.wrapped_act(observations)
if self.need_to_stop_planner_thread and return_val['action'] == 0:
self.pathplanner.stop_thread()
self.last_call_time = time.time()
return return_val
except Exception as e:
print ("Exception " + str(e))
if INTERACTIVE_ON_EXCEPTIONS:
self.reset_scenario_data_writer()
self.reset_video_writer(last_success=False)
print ("Data and video saved. Continue?")
pdb.set_trace()
self.last_call_time = time.time()
return {"action": 0, "xy_error": 0.}
else:
return_val = self.wrapped_act(observations)
if self.need_to_stop_planner_thread and return_val['action'] == 0:
self.pathplanner.stop_thread()
self.last_call_time = time.time()
return return_val
def wrapped_act(self, observations):
time_sim = time.time() - self.last_call_time
time_last = time.time()
time_now = time.time()
time_since_beginning = time_now - self.start_time
if REPLACE_WITH_RANDOM_ACTIONS and self.episode_i > 2:
self.step_i += 1
if self.step_i > 100:
action = 0
else:
action = np.random.choice([1, 2, 3])
return {"action": action, "xy_error": 0.}
initial_target_r_meters, initial_target_fi = observations['pointgoal']
if self.step_i == -1:
self.step_i += 1
# self.initial_xy = np.zeros((2, ), np.float32)
# self.initial_yaw = np.zeros((1, ), np.float32)
self.episode_t = time_now
if self.env is not None:
stay_action = self.task_config.SIMULATOR.get('STAY_ACTION', 3) # turn right by default
return {"action": stay_action} # turn right, because first step does not provide the top down map
# otherwise continue below
if TOTAL_TIME_LIMIT > 0 and time_since_beginning > TOTAL_TIME_LIMIT:
print ("Giving up because total time limit of %d sec reached."%TOTAL_TIME_LIMIT)
if ERROR_ON_TIMEOUT:
raise ValueError("Timeout.. only for minival!")
return {"action": 0, "xy_error": 0.}
if (SKIP_FIRST_N_FOR_TEST > 0 and self.episode_i < SKIP_FIRST_N_FOR_TEST) or (SKIP_FIRST_N > 0 and self.episode_i < SKIP_FIRST_N) or (SKIP_AFTER_N > 0 and self.episode_i >= SKIP_AFTER_N):
print ("Skip")
return {"action": 0, "xy_error": 0.}
# Check for possible shortcut
shortcut_action = None
need_map = True
if self.params.skip_plan_when_turning and len(self.pathplanner.cached_action_path) >= 2 and self.num_shortcut_actions < MAX_SHORTCUT_TURNS:
cached_next_action = self.pathplanner.cached_action_path[1]
if cached_next_action in [2, 3]: # turning
print ("Shortcut turn action")
shortcut_action = cached_next_action
need_map = False
self.num_shortcut_actions += 1
self.pathplanner.num_timeouts += 1
self.pathplanner.cached_action_path = self.pathplanner.cached_action_path[1:]
else:
self.num_shortcut_actions = 0
else:
self.num_shortcut_actions = 0
if RECOVER_ON_COLLISION and self.recover_step_i > 0:
need_map = False
# True pose and map
if self.env is not None:
# When using slam, must run with --habitat_eval local, not localtest.
# That's because with --localtest we skip the first step, which ruins the goal observation.
assert self.pose_source != 'slam'
info = self.env.get_metrics()
agent_pos = self.env.sim.get_agent_state().position
goal_pos = self.env.current_episode.goals[0].position
# First deal with the observed map and optionally sample a random rotation
true_global_map = info['top_down_map']['map']
true_global_map = (true_global_map > 0).astype(np.uint8) * 255
true_global_map = np.atleast_3d(true_global_map)
if self.step_i > 0:
# Count pixels that used to be free but not in the latest map
if not self.params.random_rotations:
new_map_mismatch_count = np.count_nonzero(np.logical_and(true_global_map == 0, self.last_true_global_map))
if new_map_mismatch_count > 200:
print ("TOO MANY MAP MISMATCHES %d"%new_map_mismatch_count)
# assert False
self.map_mismatch_count += new_map_mismatch_count
# assert true_global_map.shape == self.last_true_global_map.shape
true_global_map = self.last_true_global_map # keep the first
else:
if self.map_source in ['true-saved', 'true-saved-sampled', 'true-saved-hrsampled', 'true-partial-sampled']:
saved_global_map = load_map_from_file(scene_id=self.get_scene_name(), height=agent_pos[1], map_name=(
"map" if self.map_source == 'true-saved' else ('sampledmap' if self.map_source in [
'true-saved-sampled', 'true-partial-sampled'] else 'hrsampledmap')), basepath=map_path_for_sim(self.params.sim))
assert saved_global_map.dtype == np.uint8
if saved_global_map.shape != true_global_map.shape:
# This can happen if floors are not perfectly aligned, etc. It's a problem because we cannot recover the map pose anymore.
# Save log
print ('Map shape mismatch for %s. Example saved under ./temp/failures/'%self.get_scene_name())
if not os.path.exists('./temp/failures'):
os.mkdir('./temp/failures')
cv2.imwrite('./temp/failures/%s_ep%d_envmap.png'%(self.get_scene_name(), int(self.env.current_episode.episode_id)), true_global_map)
cv2.imwrite('./temp/failures/%s_ep%d_savedmap.png'%(self.get_scene_name(), int(self.env.current_episode.episode_id)), saved_global_map)
if SKIP_MAP_SHAPE_MISMATCH:
print("Skip because of map mismatch")
return {"action": 0, "xy_error": 0.}
else:
raise ValueError('Map shape mismatch. Example saved under ./temp/failures/')
self.map_mismatch_count = np.count_nonzero(np.logical_and(saved_global_map, true_global_map))
true_global_map = saved_global_map
else:
self.map_mismatch_count = 0
# Rotate the true map once and keep it for the whole episode. Remember the transformation and apply it to all raw observations.
if self.params.random_rotations:
self.raw_yaw_offset = np.random.rand() * 2. * np.pi
true_global_map, new_poses, transform = rotate_map_and_poses(true_global_map, self.raw_yaw_offset, poses=[np.zeros((1, 2), np.float32)], constant_value=0)
assert true_global_map.dtype == np.uint8
# Reapply threshold
true_global_map = (true_global_map > 128).astype(np.uint8) * 255
self.raw_xy_transform = transform
self.last_true_global_map = true_global_map
# Deal with observed pose
# TODO this might be wrong here if map shapes don't match and/or change during episode.
true_xy = np.array(info['top_down_map']['agent_map_coord']) # x: downwards; y: rightwards
if np.any(true_xy < 0.):
raise ValueError("Map coordinates are less than zero. On spot this happens if the dummy environment "
"happens to be smaller than the real world.")
true_xy = self.raw_xy_transform(true_xy[None])[0]
true_yaw = info['top_down_map']['agent_angle'] # 0 downwards, positive ccw. Forms a standard coordinate system with x and y.
true_yaw = true_yaw + self.raw_yaw_offset
true_yaw = np.array((true_yaw, ), np.float32)
true_yaw = (true_yaw + np.pi) % (2 * np.pi) - np.pi # normalize
# Recover from simulator pos
true_xy_from_pos = self._to_grid_pos(agent_pos[0], agent_pos[2], info['top_down_map'])
true_xy_from_pos = self.raw_xy_transform(true_xy_from_pos[None])[0]
offset_xy = true_xy - true_xy_from_pos
global_true_target_xy = self._to_grid_pos(goal_pos[0], goal_pos[2], info['top_down_map'])
true_target_xy = self.raw_xy_transform(global_true_target_xy[None])[0]
true_target_xy += offset_xy
del offset_xy
else:
true_xy = np.zeros((2,), np.float32)
true_yaw = np.zeros((1,), np.float32)
true_target_xy = np.zeros((2,), np.float32)
global_true_target_xy = None
info = None
true_global_map = np.zeros([self.max_map_size[0], self.max_map_size[1], 1], np.float32)
# Initialize everything
if self.step_i == 0:
if self.pose_source in ['true', 'slam-truestart']:
# Initialize with true things. Only makes sense if we access it
assert self.env is not None
self.true_xy_offset = -true_xy.astype(np.int32)
# self.true_xy_transform = Transform2D
# self.true_xy_transform.add_translation()
if self.fixed_map_size:
self.global_map_logodds = np.zeros((self.max_map_size[0], self.max_map_size[1], 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
else:
self.global_map_logodds = np.zeros((1, 1, 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
self.prev_yaw = true_yaw
self.xy = true_xy + self.true_xy_offset
self.yaw = true_yaw
particle_xy0 = np.tile((self.xy)[None], [self.params.num_particles, 1])
particle_yaw0 = np.tile(self.yaw[None], [self.params.num_particles, 1])
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal']
initial_target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
initial_target_xy = rotate_2d(np.array([initial_target_r, 0.], np.float32), initial_target_fi + true_yaw + np.deg2rad(30)) + true_xy + self.true_xy_offset
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal_with_gps_compass']
target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
observed_target_xy = rotate_2d(np.array([target_r, 0.], np.float32), initial_target_fi + true_yaw) + true_xy + self.true_xy_offset
print ("Target observed: (%d, %d) true: (%d, %d) initial (%d, %d)"%(
observed_target_xy[0], observed_target_xy[1], true_target_xy[0] + self.true_xy_offset[0], true_target_xy[1] + self.true_xy_offset[1], initial_target_xy[0], initial_target_xy[1]))
if np.linalg.norm(observed_target_xy - (true_target_xy + self.true_xy_offset)) > 0.001:
pdb.set_trace()
self.target_xy = observed_target_xy
elif self.pose_source == "slam":
self.true_xy_offset = np.zeros((2,), np.int32) # we dont know
if self.fixed_map_size:
self.global_map_logodds = np.zeros((self.max_map_size[0], self.max_map_size[1], 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
else:
self.global_map_logodds = np.zeros((1, 1, 1), np.float32) # np.zeros(true_global_map.shape, np.float32)
self.prev_yaw = 0.
self.xy = np.zeros((2, ), np.float32)
self.yaw = np.zeros((1, ), np.float32)
particle_xy0 = np.zeros((self.params.num_particles, 2), np.float32)
particle_yaw0 = np.zeros((self.params.num_particles, 1), np.float32)
# Target from observed distance. Can only use it after reset
initial_target_r_meters, initial_target_fi = observations['pointgoal']
target_r = initial_target_r_meters / 0.05 # meters to grid cells
# assumes initial pose is 0.0
observed_target_xy = rotate_2d(np.array([target_r, 0.], np.float32), initial_target_fi)
self.target_xy = observed_target_xy
else:
raise ValueError("Unknown pose estimation source.")
self.particle_xy_list = [particle_xy0]
self.particle_yaw_list = [particle_yaw0]
self.particle_logit_acc_list = [np.zeros((self.params.num_particles,), np.float32)]
self.xy_loss_list = [0.]
self.yaw_loss_list = [0.]
self.true_xy_traj = [true_xy]
self.true_yaw_traj = [true_yaw]
self.action_traj = []
# Resize map and add offset
map_shape = self.global_map_logodds.shape
if self.fixed_map_size:
# Keep a fixed map size. Don't update the map itself, only move the offset such that the center point is between the current pose and the goal.
assert map_shape[:2] == self.max_map_size
# Find the free map cell closest to the target
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
is_free_map = (np.squeeze(global_map_pred, axis=-1) >= 0.5) # TODO this should use the same threshold instead of 0.5
# TODO xy_map_margin was occasionally too small. Expose as param and increase
offset_ij, projected_target_xy = project_state_and_goal_to_smaller_map(
self.max_map_size, self.xy, self.target_xy, is_free_map, xy_map_margin=10, target_map_margin=TARGET_MAP_MARGIN)
self.particle_xy_list = [xy + offset_ij for xy in self.particle_xy_list]
self.target_xy += offset_ij
self.true_xy_offset += offset_ij
self.xy += offset_ij
self.target_xy_for_planning = projected_target_xy
if np.any(self.target_xy != self.target_xy_for_planning):
print("Moving target within the map: %s --> %s" % (str(self.target_xy), str(self.target_xy_for_planning)))
else:
# Expand map and offset pose if needed, such that target and the surrounding of current pose are all in the map.
if MAX_MAP_SIZE_FOR_SPEED_TEST:
offset_ij = np.array(((self.max_map_size[0]-map_shape[0])//2, (self.max_map_size[1]-map_shape[1])//2), np.int32)
expand_xy = offset_ij.copy()
else:
local_map_max_extent = 110 # TODO need to adjust to local map size and scaler
local_map_max_extent += 10 # to account for how much the robot may move in one step, including max overshooting
target_margin = 8
min_particle_xy = self.particle_xy_list[-1].min(axis=0) # last step is enough because earlier steps could already fit on the map
max_particle_xy = self.particle_xy_list[-1].max(axis=0)
min_x = int(min(self.target_xy[0] - target_margin, min_particle_xy[0] - local_map_max_extent) - 1)
min_y = int(min(self.target_xy[1] - target_margin, min_particle_xy[1] - local_map_max_extent) - 1)
max_x = int(max(self.target_xy[0] + target_margin, max_particle_xy[0] + local_map_max_extent) + 1)
max_y = int(max(self.target_xy[1] + target_margin, max_particle_xy[1] + local_map_max_extent) + 1)
offset_ij = np.array([max(0, -min_x), max(0, -min_y)])
expand_xy = np.array([max(0, max_x+1-map_shape[0]), max(0, max_y+1-map_shape[1])])
is_offset = np.any(offset_ij > 0)
is_expand = np.any(expand_xy > 0)
if is_offset:
offset_ij += 0 if MAX_MAP_SIZE_FOR_SPEED_TEST else EXTRA_STEPS_WHEN_EXPANDING_MAP
self.particle_xy_list = [xy + offset_ij for xy in self.particle_xy_list]
self.target_xy += offset_ij
self.true_xy_offset += offset_ij
if is_expand:
expand_xy += 0 if MAX_MAP_SIZE_FOR_SPEED_TEST else EXTRA_STEPS_WHEN_EXPANDING_MAP
if is_offset or is_expand:
prev_shape = self.global_map_logodds.shape
self.global_map_logodds = np.pad(
self.global_map_logodds, [[offset_ij[0], expand_xy[0]], [offset_ij[1], expand_xy[1]], [0, 0]],
mode='constant', constant_values=0.)
print ("Increasing map size: (%d, %d) --> (%d, %d) offset (%d, %d), expand (%d, %d)"%(
prev_shape[0], prev_shape[1], self.global_map_logodds.shape[0], self.global_map_logodds.shape[1],
offset_ij[0], offset_ij[1], expand_xy[0], expand_xy[1]))
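As a sanity check on the padding above: `np.pad` with per-axis `[before, after]` widths grows the map by `offset_ij` on the low side (which shifts all map coordinates) and by `expand_xy` on the high side (which only enlarges the array). A minimal sketch with made-up sizes:

```python
import numpy as np

grid = np.zeros((4, 5, 1), np.float32)  # toy H x W x 1 log-odds map
offset_ij = np.array([2, 1])            # pad before: shifts content and coordinates
expand_xy = np.array([3, 0])            # pad after: only enlarges the map
padded = np.pad(
    grid,
    [[offset_ij[0], expand_xy[0]],      # rows: before, after
     [offset_ij[1], expand_xy[1]],      # cols: before, after
     [0, 0]],                           # channel axis unchanged
    mode='constant', constant_values=0.)
# New shape: (4+2+3, 5+1+0, 1) = (9, 6, 1).
# Every stored coordinate (particles, target, pose) must be shifted by offset_ij,
# which is exactly what the surrounding code does.
```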
excess_xy = np.array(self.global_map_logodds.shape[:2], np.int32) - np.array(self.max_map_size[:2], np.int32)
excess_xy = np.maximum(excess_xy, np.zeros_like(excess_xy))
if np.any(excess_xy > 0):
print ("Reducing map by (%d, %d) to fit max size"%(excess_xy[0], excess_xy[1]))
if self.target_xy[0] > self.global_map_logodds.shape[0] // 2:
self.global_map_logodds = self.global_map_logodds[excess_xy[0]:]
else:
self.global_map_logodds = self.global_map_logodds[:-excess_xy[0]]
if self.target_xy[1] > self.global_map_logodds.shape[1] // 2:
self.global_map_logodds = self.global_map_logodds[:, excess_xy[1]:]
else:
self.global_map_logodds = self.global_map_logodds[:, :-excess_xy[1]]
self.target_xy_for_planning = self.target_xy.copy()
map_shape = self.global_map_logodds.shape
# Offset true map
if self.env is not None:
reduce_xy = np.maximum(-self.true_xy_offset, np.zeros((2,), np.int32)).astype(np.int32)
extend_xy = np.maximum(self.true_xy_offset, np.zeros((2,), np.int32)).astype(np.int32)
global_map_label = true_global_map * (1./255.)
global_map_label = global_map_label[reduce_xy[0]:, reduce_xy[1]:]
global_map_label = np.pad(global_map_label, [[extend_xy[0], 0], [extend_xy[1], 0], [0, 0]])
global_map_label = np.pad(global_map_label, [[0, max(map_shape[0]-global_map_label.shape[0], 0)], [0, max(map_shape[1]-global_map_label.shape[1], 0)], [0, 0]])
global_map_label = global_map_label[:map_shape[0], :map_shape[1]]
assert global_map_label.shape == map_shape
else:
global_map_label = None
# Get image observations
rgb = observations['rgb']
depth = observations['depth']
if USE_ASSERTS:
assert rgb.dtype == np.uint8
assert depth.dtype == np.float32 and np.all(depth <= 1.)
rgb = cv2.resize(rgb, (160, 90), )
rgb = rgb.astype(np.float32) / 255. # uint8 0..255 --> float 0..1
depth = cv2.resize(depth, (160, 90), ) # interpolation=cv2.INTER_NEAREST)
depth = np.atleast_3d(depth)
if self.params.mode == 'both':
images = np.concatenate([depth, rgb], axis=-1) # these are 0..1 float format
elif self.params.mode == 'depth':
images = depth
else:
images = rgb
images = (images * 255).astype(np.uint8) # quantize to uint8 0..255 format
images = images.astype(np.float32)
images = images * (2. / 255.) - 1. # to network input -1..1 format
# Get visibility map from depth if needed
if self.use_custom_visibility:
visibility_map_input = ClassicMapping.is_visible_from_depth(depth, self.local_map_shape, sim=self.params.sim, zoom_factor=self.brain_requirements.transform_window_scaler,
fix_habitat_depth=self.params.fix_habitat_depth)
visibility_map_input = visibility_map_input[:, :, None].astype(np.float32)
assert np.all(visibility_map_input <= 1.)
else:
visibility_map_input = np.zeros(self.visibility_input.shape[2:], dtype=np.float32)
# # Map prediction only, using known pose
# last_global_map_input = np.zeros(self.max_map_size + (self.map_ch, ), np.float32)
# last_global_map_input[:map_shape[0], :map_shape[1]] = self.global_map_logodds
# true_map_input = np.zeros(self.max_map_size + (1, ), np.uint8)
# true_map_input[:global_map_label.shape[0], :global_map_label.shape[1]] = global_map_label
#
# feed_dict = {
# self.images_input: images, self.xy_input: true_xy, self.yaw_input: np.array((true_yaw, )),
# self.global_map_input: last_global_map_input,
# self.true_map_input: true_map_input,
# }
# if self.visibility_input is not None:
# visibility_map_input = ClassicMapping.is_visible_from_depth(depth, self.local_map_shape, sim=self.params.sim, zoom_factor=self.brain_requirements.transform_window_scaler)
# visibility_map_input = visibility_map_input[:, :, None].astype(np.uint8)
# feed_dict[self.visibility_input] = visibility_map_input
#
# mapping_output = self.run_inference(feed_dict)
# global_map_logodds = np.array(mapping_output.global_map_logodds[0, -1]) # squeeze batch and traj
# global_map_logodds = global_map_logodds[:map_shape[0], :map_shape[1]]
# self.global_map_logodds = global_map_logodds
time_prepare = time.time() - time_last
time_last = time.time()
# SLAM prediction
if self.step_i == 0:
# For the first step we don't do a pose update, but we need to obtain local maps and image features
self.image_traj = [images.copy()]
# Get local maps for the first step
feed_dict = {
self.new_images_input: images[None, None],
self.visibility_input: visibility_map_input[None, None],
}
# TODO we should predict global map as well with a single local map added to it
new_local_maps, new_visibility_maps, new_image_features = self.sess.run([self.inference_outputs['new_local_maps'], self.inference_outputs['new_visibility_maps'], self.inference_outputs['new_image_features']], feed_dict=feed_dict)
self.local_map_traj = [new_local_maps[0, 0]]
self.visibility_traj = [new_visibility_maps[0, 0]]
self.image_features_traj = [new_image_features[0, 0]]
slam_outputs = None
# Transform predictions
global_map_true_partial = None
assert self.global_map_logodds.shape[-1] == 1
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
slam_xy = np.mean(self.particle_xy_list[-1], axis=0)
slam_yaw = np.mean(self.particle_yaw_list[-1], axis=0)
slam_mean_xy = slam_xy
slam_mean_yaw = slam_yaw
slam_mean2_xy = slam_xy
slam_mean2_yaw = slam_yaw
slam_ml_xy = slam_xy
slam_ml_yaw = slam_yaw
slam_traj_xy = None
slam_traj_yaw = None
else:
assert len(self.action_traj) > 0
assert len(self.particle_xy_list) == len(self.action_traj)
assert self.visibility_traj[-1].dtype == np.float32
assert np.all(self.visibility_traj[-1] <= 1.)
inference_trajlen = self.params.inference_trajlen
self.image_traj.append(images.copy())
self.true_xy_traj.append(true_xy)
self.true_yaw_traj.append(true_yaw)
new_action = np.array((self.action_traj[-1], ), np.int32)[None]
new_rel_xy, new_rel_yaw = actions_from_trajectory(
np.stack([self.true_xy_traj[-2], self.true_xy_traj[-1]], axis=0), np.stack([self.true_yaw_traj[-2], self.true_yaw_traj[-1]], axis=0))
# Pick best segment of the trajectory based on how much viewing areas overlap
current_trajlen = len(self.particle_xy_list) + 1
assert len(self.true_xy_traj) == current_trajlen and len(self.image_features_traj) == current_trajlen - 1
if self.params.slam_use_best_steps:
mean_traj_xy, mean_traj_yaw = ClassicMapping.mean_particle_traj(
np.array(self.particle_xy_list), np.array(self.particle_yaw_list), self.particle_logit_acc_list[-1][None, :, None])
mean_traj_xy, mean_traj_yaw = ClassicMapping.propage_trajectory_with_action(mean_traj_xy, mean_traj_yaw, self.action_traj[-1])
segment_steps = ClassicMapping.get_steps_with_largest_overlapping_view(
mean_traj_xy, mean_traj_yaw, segment_len=inference_trajlen, view_distance=30*self.brain_requirements.transform_window_scaler)
else:
segment_steps = np.arange(max(current_trajlen-inference_trajlen, 0), current_trajlen)
assert segment_steps.ndim == 1
past_particle_xy = np.stack(self.particle_xy_list, axis=0)
past_particle_yaw = np.stack(self.particle_yaw_list, axis=0)
true_xy_seg = np.stack([self.true_xy_traj[i] for i in segment_steps], axis=0) + self.true_xy_offset[None]
true_yaw_seg = np.stack([self.true_yaw_traj[i] for i in segment_steps], axis=0)
past_image_features_seg = np.stack([self.image_features_traj[i] for i in segment_steps[:-1]], axis=0)
past_local_maps = np.stack(self.local_map_traj, axis=0)
past_visibility = np.stack(self.visibility_traj, axis=0)
feed_dict = {
self.inference_timesteps_input: segment_steps[None],
self.new_images_input: images[None, None],
self.last_images_input: self.image_traj[-2][None, None],
self.visibility_input: visibility_map_input[None, None],
self.past_local_maps_input: past_local_maps[None],
self.past_visibility_input: past_visibility[None],
self.past_needed_image_features_input: past_image_features_seg[None],
self.global_map_shape_input: np.array(map_shape[:2], np.int32),
# global_map_input: global_map,
# self.images_input: images_seg[None], # always input both images and global map, only one will be connected
self.true_xy_input: true_xy_seg[None], # used for global to local transition and loss
self.true_yaw_input: true_yaw_seg[None],
# self.visibility_input: visibility_seg[None],
# self.particle_xy_input: particle_xy_seg[None],
# self.particle_yaw_input: particle_yaw_seg[None],
self.particle_xy_input: past_particle_xy[None],
self.particle_yaw_input: past_particle_yaw[None],
self.new_action_input: new_action[None],
self.new_rel_xy_input: new_rel_xy[None],
self.new_rel_yaw_input: new_rel_yaw[None],
self.last_step_particle_logits_input: self.particle_logit_acc_list[-1][None],
}
slam_outputs = self.run_inference(feed_dict, need_map=need_map)
# Deal with resampling
self.particle_xy_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_xy_list]
self.particle_yaw_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_yaw_list]
self.particle_logit_acc_list = [particle[slam_outputs.particle_indices[0]] for particle in self.particle_logit_acc_list]
# Store new particles
self.particle_xy_list.append(slam_outputs.particle_xy_t[0])
self.particle_yaw_list.append(slam_outputs.particle_yaw_t[0])
self.particle_logit_acc_list.append(slam_outputs.particle_logits_acc[0])
if FAKE_INPUT_FOR_SPEED_TEST:
self.particle_xy_list[-1] = self.particle_xy_list[-1] * 0 + true_xy[None] + self.true_xy_offset[None]
# Store local map prediction
self.local_map_traj.append(slam_outputs.new_local_maps[0, 0])
self.visibility_traj.append(slam_outputs.new_visibility_maps[0, 0])
self.image_features_traj.append(slam_outputs.new_image_features[0, 0])
print (self.image_features_traj[-1].shape)
# Store losses. only meaningful if true state was input
self.xy_loss_list.append(slam_outputs.loss_xy_all[0])
self.yaw_loss_list.append(slam_outputs.loss_yaw_all[0])
# Update map
if need_map:
global_map_logodds = np.array(slam_outputs.global_map_logodds[0]) # squeeze batch and traj
# if global_map_logodds.shape != self.global_map_logodds.shape:
# raise ValueError("Unexpected global map shape output from slam net.")
if not self.fixed_map_size:
global_map_logodds = global_map_logodds[:map_shape[0], :map_shape[1]]
self.global_map_logodds = global_map_logodds
# Transform predictions
global_map_true_partial = None
assert self.global_map_logodds.shape[-1] == 1
global_map_pred = ClassicMapping.inverse_logodds(self.global_map_logodds)
slam_mean_xy = slam_outputs.mean_xy[0, -1]
slam_mean_yaw = slam_outputs.mean_yaw[0, -1]
slam_mean2_xy = slam_outputs.mean2_xy[0, -1]
slam_mean2_yaw = slam_outputs.mean2_yaw[0, -1]
slam_ml_xy = slam_outputs.ml_xy[0, -1]
slam_ml_yaw = slam_outputs.ml_yaw[0, -1]
slam_traj_xy = slam_outputs.xy[0, :] # the one used for mapping
slam_traj_yaw = slam_outputs.yaw[0, :] # the one used for mapping
slam_xy = slam_outputs.xy[0, -1] # the one used for mapping
slam_yaw = slam_outputs.yaw[0, -1]
# TODO should separately reassemble the map for the whole trajectory from the mean particle trajectory
# do NOT use the most likely particle; it's meaningless after resampling. Density is what matters.
# need to implement a reasonable sequential averaging of yaws.
# Compute mean separately here
if self.params.brain == 'habslambrain_v1' and USE_ASSERTS:
mean_xy_from_np, mean_yaw_from_np = ClassicMapping.mean_particle_traj(self.particle_xy_list[-1], self.particle_yaw_list[-1], self.particle_logit_acc_list[-1][:, None])
xy_diff = np.abs(mean_xy_from_np - slam_mean_xy)
yaw_diff = np.abs(mean_yaw_from_np - slam_mean_yaw)
yaw_diff = (yaw_diff + np.pi) % (2 * np.pi) - np.pi
if not np.all(xy_diff < 1.) or not np.all(yaw_diff < np.deg2rad(10.)):
raise ValueError("SLAM mean and numpy mean dont match. Mean difference: %s vs %s | %s vs. %s" % (
str(mean_xy_from_np), str(slam_mean_xy), str(mean_yaw_from_np), str(slam_mean_yaw)))
# Pose source
if self.pose_source == 'true':
xy = true_xy + self.true_xy_offset
yaw = true_yaw
traj_xy = np.array(self.true_xy_traj) + self.true_xy_offset[None]
traj_yaw = np.array(self.true_yaw_traj)
assert slam_traj_xy is None or traj_xy.shape[0] == slam_traj_xy.shape[0]
elif self.pose_source in ["slam-truestart", "slam"]:
xy = slam_xy
yaw = slam_yaw
traj_xy = slam_traj_xy
traj_yaw = slam_traj_yaw
# TODO weighted mean of particles
else:
raise NotImplementedError
self.xy = xy
self.yaw = yaw
# Verify true pose
if USE_ASSERTS and self.params.agent_pose_source == 'true':
assert np.all(np.isclose(traj_xy[:, None], np.array(self.particle_xy_list), atol=1e-3))
assert np.all(np.isclose(traj_yaw[:, None], np.array(self.particle_yaw_list), atol=1e-3))
# last_action = self.action_traj[-1]
# if last_action == 1:
# nominal_xy = traj_xy[-2] + rotate_2d(np.array([5., 0.], np.float32), traj_yaw[-2])
# else:
# nominal_xy = traj_xy[-2]
# move_error = np.linalg.norm(xy - nominal_xy)
# move_amount = np.linalg.norm(xy - traj_xy[-2])
# print ("Act %d. Moved %f. Error %f"%(last_action, move_amount, move_error))
# if move_error > 3.:
# pdb.set_trace()
local_map_label = None
# local_map_label = slam_outputs.local_map_label[0, 0, :, :, 0]
# local_map_pred = slam_outputs.combined_local_map_pred[0, 0, :, :, 0]
ang_vel = yaw - self.prev_yaw
ang_vel = (ang_vel + np.pi) % (2*np.pi) - np.pi
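The modulo expression above (also used for `yaw_diff` earlier) wraps an angle difference into the half-open interval [-pi, pi); a standalone sketch of the same trick:

```python
import numpy as np

def wrap_to_pi(angle: float) -> float:
    """Wrap any angle in radians into [-pi, pi)."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

# e.g. a raw difference of 350 degrees becomes -10 degrees, so angular
# velocity estimates don't jump when yaw crosses the +/-pi boundary.
```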
target_dist = np.linalg.norm(self.target_xy - xy)
true_target_dist = np.linalg.norm(true_target_xy - true_xy)
xy_error, yaw_error = self.pose_error(slam_xy, slam_yaw, true_xy, true_yaw)
mean_xy_error, mean_yaw_error = self.pose_error(slam_mean_xy, slam_mean_yaw, true_xy, true_yaw)
mean2_xy_error, _ = self.pose_error(slam_mean2_xy, slam_mean2_yaw, true_xy, true_yaw)
ml_xy_error, _ = self.pose_error(slam_ml_xy, slam_ml_yaw, true_xy, true_yaw)
self.distance_history.append(target_dist)
if self.pose_source != 'slam' and not FAKE_INPUT_FOR_SPEED_TEST:
assert np.abs(np.sqrt(self.xy_loss_list[-1]) - xy_error) < 2. # one is before resampling, other is after
# Detect collision
is_colliding = False
if self.step_i > 2 and self.action_traj[-1] == 1 and self.recover_step_i == 0: # moved forward
last_step_len = np.linalg.norm(traj_xy[-2] - traj_xy[-1], axis=0)
if last_step_len < COLLISION_DISTANCE_THRESHOLD:
is_colliding = True
self.collision_timesteps.append(self.step_i)
self.num_collisions += 1
if self.recover_step_i >= len(self.recover_policy):
self.recover_step_i = 0 # done with recovery
dist_hist = np.array(self.distance_history[-self.GIVE_UP_NO_PROGRESS_STEPS:])
time_slam = time.time() - time_last
time_last = time.time()
should_give_up = False
# Modify state if it's out of bounds, or give up if the goal is out of bounds
if (np.any(self.target_xy_for_planning < TARGET_MAP_MARGIN)
or np.any(self.target_xy_for_planning + TARGET_MAP_MARGIN >= np.array(self.max_map_size))):
should_give_up = True
if USE_ASSERTS and self.fixed_map_size:
raise ValueError("Target is outside of map area -- this should not happen for fixed size map.")
elif (np.any(self.xy < 0) or np.any(self.xy >= np.array(self.max_map_size))):
print ("State is outside of map area -- this can happen for a fixed-size map because it's cropped before the slam update.")
if self.fixed_map_size:
new_xy = np.clip(xy, [0., 0.], np.array(self.max_map_size, np.float32) - 0.001)
print ("moving state.. %s --> %s"%(str(xy), str(new_xy)))
xy = new_xy
self.xy = new_xy
else:
print ("Giving up")
should_give_up = True
# Check for time and distance limits
try:
for time_thres, dist_thres in self.GIVE_UP_STEP_AND_DISTANCE:
if self.step_i >= time_thres and target_dist >= dist_thres:
should_give_up = True
break
except Exception as e:
print ("Exception " + str(e))
# Give up if no progress for too long wallclock time
try:
mins_since_ep_start = (time.time() - self.episode_t) / 60
reduction_since_beginning = self.distance_history[0] - self.distance_history[-1]
for time_thres, reduct_thres in self.GIVE_UP_TIME_AND_REDUCTION:
if mins_since_ep_start >= time_thres and reduction_since_beginning < reduct_thres:
print ("Give up because of wallclock time and reduction t=%f reduct=%f"%(mins_since_ep_start, reduction_since_beginning))
should_give_up = True
break
except Exception as e:
print ("Exception " + str(e))
giving_up_collision = False
giving_up_distance = False
giving_up_progress = False
is_done = False
# Plan
planned_path = np.zeros([0, 2], dtype=np.float32)
# Choose which map to use for planning
global_map_for_planning, cont_global_map_for_planning = self.get_global_map_for_planning(global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, self.map_source, keep_soft=self.params.soft_cost_map)
shrunk_map_offset_xy = None
if self.params.interactive_action:
while True:
ans = input("Manual action: ")
try:
if ans and int(ans) >= 0 and int(ans) <= 3:
action = int(ans)
break
except ValueError:
pass
plan_status_msg = "Manual %d"%action
elif target_dist < self.params.agent_stop_near_target_dist:
# Close enough to target. Normal requirement is 0.36/0.05 = 7.2
plan_status_msg = "Manual stop"
is_done = True
action = 0
elif should_give_up:
plan_status_msg = "Giving up because target is too far (or state was outside of map)"
giving_up_distance = True
action = 0
elif shortcut_action is not None:
# NOTE must be before recover on collision - because we already incremented recover policy
plan_status_msg = "Shortcut action"
action = shortcut_action
elif RECOVER_ON_COLLISION and (is_colliding or self.recover_step_i > 0):
plan_status_msg = ("Recover from collision %d / %d."%(self.recover_step_i, len(self.recover_policy)))
action = self.recover_policy[self.recover_step_i]
self.recover_step_i += 1
self.pathplanner.reset() # to clear out its cache
if target_dist < NEAR_TARGET_COLLISION_STOP_DISTANCE:
plan_status_msg += " --> Attempt to stop instead, near target"
is_done = True
action = 0
elif self.GIVE_UP_NUM_COLLISIONS > 0 and self.num_collisions >= self.GIVE_UP_NUM_COLLISIONS:
plan_status_msg = "Too many collisions (%d). Giving up.."%(self.num_collisions, )
giving_up_collision = True
action = 0
elif self.GIVE_UP_NO_PROGRESS_STEPS > 0 and self.step_i > self.GIVE_UP_NO_PROGRESS_STEPS and self.step_i > 100 and np.max(dist_hist) - np.min(dist_hist) < self.NO_PROGRESS_THRESHOLD:
plan_status_msg = "No progress for %d steps. Giving up.."%(self.GIVE_UP_NO_PROGRESS_STEPS, )
giving_up_progress = True
action = 0
else:
action, planned_path, plan_status_msg, processed_map_for_planning, shrunk_map_offset_xy = self.plan_and_control(
xy, yaw, self.target_xy_for_planning, global_map_for_planning, ang_vel, initial_target_fi,
allow_shrink_map=self.allow_shrink_map,
cont_global_map_pred=cont_global_map_for_planning if self.planner_needs_cont_map else None)
is_done = (action == 0)
# Visualize agent
if self.step_i % PLOT_EVERY_N_STEP == 0 and PLOT_EVERY_N_STEP > 0 and slam_outputs is not None:
local_map_pred = self.local_map_traj[-1][:, :, 0]
self.visualize_agent(slam_outputs.tiled_visibility_mask[0, 0, :, :, 0], images, global_map_pred,
global_map_for_planning,
# processed_map_for_planning.astype(np.float32)/255.,
global_map_label,
global_map_true_partial, local_map_pred, local_map_label, planned_path,
sim_rgb=observations['rgb'], # uint
xy=xy, yaw=yaw, true_xy=true_xy + self.true_xy_offset, true_yaw=true_yaw, target_xy=self.target_xy_for_planning)
# pdb.set_trace()
# Overwrite with expert
if self.action_source == 'expert':
best_action = self.follower.get_next_action(goal_pos)
action = best_action
if action == 0 and EXIT_AFTER_N_STEPS_FOR_SPEED_TEST > 0:
print ("Spin instead of stopping.")
action = 3
is_done = (action == 0)
if DEBUG_DUMMY_ACTIONS_ONLY:
action = 1
# # Overwrite with manual actions
# if self.params.interactive_action:
# ans = input("Overwrite %d: "%action)
# if ans and int(ans) >= 0 and int(ans) <= 3:
# action = int(ans)
time_plan = time.time() - time_last
time_last = time.time()
# Save data
if len(self.tfwriters) > 0 and self.step_i % SAVE_DATA_EVERY_N == 0:
is_using_planner = planned_path.shape[0] > 0 and target_dist >= 10.5
if DATA_TYPE == "planinstance":
assert self.map_source != "pred"
if not is_using_planner: # two step strategy for <= 10
if DATA_INCLUDE_NONPLANNED_ACTIONS:
raise NotImplementedError
else:
pred_map_for_planning, _ = self.get_global_map_for_planning(
global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, "pred", keep_soft=True)
self.write_datapoint(global_map_for_planning, pred_map_for_planning, self.target_xy_for_planning,
planned_path.astype(np.int32), action, shrunk_map_offset_xy)
elif DATA_TYPE == "scenario":
assert SAVE_DATA_EVERY_N == 1 # need to save all steps for meaningful slam and image data
pred_map_for_planning, _ = self.get_global_map_for_planning(
global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, "pred", keep_soft=True)
# TODO for predmap the maps we save will not necessarily be meaningful. Never tested.
# convert maps
assert global_map_for_planning.dtype == np.float32 and pred_map_for_planning.dtype == np.float32
assert global_map_for_planning.shape == pred_map_for_planning.shape
assert global_true_target_xy is not None # unchanged goal coordinates on the map
data_true_map_png = encode_image_to_png((global_map_for_planning * 255.).astype(np.uint8))
data_pred_map_png = encode_image_to_png((pred_map_for_planning * 255.).astype(np.uint8)) # encode predicted probability as uint8
depth_data = np.atleast_3d(observations['depth']) if self.params.data_highres_images else depth
rgb_data = observations['rgb'] if self.params.data_highres_images else (rgb * 255.).astype(np.uint8)
depth_png = encode_image_to_png((depth_data * 255.).astype(np.uint8))
rgb_png = encode_image_to_png(rgb_data)
global_xy = np.array(info['top_down_map']['agent_map_coord']) # x: downwards; y: rightwards
global_yaw = np.array(info['top_down_map']['agent_angle']) # 0 points downwards, positive ccw; forms a standard coordinate system with x and y.
self.scenario_traj_data.append({
'action': np.array(action, np.int32),
'local_est_xy': xy.copy(),
'local_est_yaw': yaw.copy(),
'local_true_xy': (true_xy + self.true_xy_offset).copy(),
'local_true_yaw': true_yaw.copy(),
'local_goal_xy': self.target_xy_for_planning.copy(),
'true_map_png': data_true_map_png,
'pred_map_png': data_pred_map_png,
'depth_png': depth_png,
'rgb_png': rgb_png,
'global_xy': global_xy.copy(),
'global_yaw': global_yaw.copy(),
'is_using_planner': is_using_planner,
'is_colliding': is_colliding,
})
# Metadata
if len(self.scenario_traj_data) == 1:
ep = self.env.current_episode
episode_id = ep.episode_id
model_id = get_model_id_from_episode(ep)
height = ep.start_position[1]
floor = get_floor_from_json(model_id, height, map_path_for_sim(self.params.sim))
self.scenario_traj_data[0].update({
'global_goal_xy': global_true_target_xy.copy(),
'model_id': str(model_id),
'floor': int(floor),
'episode_id': int(episode_id),
})
else:
raise NotImplementedError(DATA_TYPE)
# pdb.set_trace()
# if self.episode_i == 0:
# cv2.imwrite('./temp/ep%d-step%d.png'%(self.episode_i, self.step_i), observations['rgb'])
# if self.step_i == 0:
# top_down_map = maps.get_topdown_map(
# self.env.sim, map_resolution=(5000, 5000)
# )
# plt.imshow(top_down_map)
# plt.show()
self.prev_yaw = yaw
self.action_traj.append(action)
self.step_i += 1
slam_status_msg = "Pose errors mean=%.1f mean2=%.1f ml=%.1f yaw=%.1f. Loss=%.1f "%(
mean_xy_error, mean2_xy_error, ml_xy_error, np.rad2deg(mean_yaw_error), np.sqrt(self.xy_loss_list[-1]))
act_status_msg = "Est dist=%.1f. True dist=%.1f Act=%d %s"%(
target_dist, true_target_dist, action, "COL" if is_colliding else "")
print (plan_status_msg)
print (slam_status_msg + act_status_msg)
# Get map statistics
if global_map_label is not None:
ij = xy.astype(np.int32)
self.num_wrong_obstacle += 1. if not is_colliding and global_map_label[ij[0], ij[1]] < 0.5 else 0.
self.num_wrong_free += 1. if is_colliding and global_map_label[ij[0], ij[1]] >= 0.5 else 0.
self.num_wrong_free_area += 1. if is_colliding and np.all(global_map_label[max(ij[0]-1, 0):ij[0]+2, max(ij[1]-1, 0):ij[1]+2] >= 0.5) else 0.
self.num_wrong_free_area2 += 1. if is_colliding and np.all(global_map_label[max(ij[0]-2, 0):ij[0]+3, max(ij[1]-2, 0):ij[1]+3] >= 0.5) else 0.
self.num_wrong_free_area3 += 1. if is_colliding and np.all(global_map_label[max(ij[0]-3, 0):ij[0]+4, max(ij[1]-3, 0):ij[1]+4] >= 0.5) else 0.
# Video output
if self.params.interactive_video or self.params.save_video > self.num_videos_saved:
if not isinstance(planned_path, np.ndarray) or planned_path.ndim != 2:
print ("planned path has an unexpected format")
pdb.set_trace()
# Set outcome text
if (giving_up_collision or giving_up_progress or giving_up_distance):
outcome = 'giveup'
elif is_done:
outcome = 'done'
else:
outcome = 'timeout'
frame_data = dict(
rgb=observations['rgb'],
depth=observations['depth'],
global_map=global_map_pred.copy(),
global_map_for_planning=global_map_for_planning.copy(),
cont_global_map_for_planning=cont_global_map_for_planning.copy(),
true_global_map=global_map_label.copy(),
xy=self.xy.copy(), yaw=self.yaw.copy(),
target_xy=self.target_xy_for_planning.copy(),
path=planned_path.copy(), # subgoal=planned_subgoal.copy(),
target_status=slam_status_msg, control_status=plan_status_msg, act_status=act_status_msg,
outcome=outcome)
if self.plot_process:
while not self.plot_queue.empty():
time.sleep(0.01)
self.plot_queue.put(("step", frame_data))
else:
self.frame_traj_data.append(frame_data)
if self.params.interactive_video:
self.video_update(VIDEO_FRAME_SKIP * len(self.frame_traj_data))
time_output = time.time() - time_last
time_last = time.time()
if PRINT_TIMES:
print ("Time sim %.3f prep %.3f slam %.3f plan %.3f output %.3f"%(time_sim, time_prepare, time_slam, time_plan, time_output))
# Pause for interactive run every n steps
if self.params.interactive_step > 0 and self.step_i > 0 and self.step_i % self.params.interactive_step == 0:
print ("pause..")
pdb.set_trace()
return {"action": action, "has_collided": float(self.num_collisions > 0), "num_collisions": self.num_collisions,
"xy_error": xy_error, # "mean_xy_error": mean_xy_error, "mean2_xy_error": mean2_xy_error, "ml_xy_error": ml_xy_error,
'mean_yaw_error': mean_yaw_error, 'target_dist': target_dist,
'num_wrong_obstacle': self.num_wrong_obstacle, 'num_wrong_free': self.num_wrong_free,
'num_wrong_free_area': self.num_wrong_free_area,
'num_wrong_free_area2': self.num_wrong_free_area2, 'num_wrong_free_area3': self.num_wrong_free_area3,
'time_plan': 0. if len(self.plan_times) == 0 else np.mean(self.plan_times),
'map_mismatch_count': float(self.map_mismatch_count) / self.step_i, # TODO remove
'giveup_collision': float(giving_up_collision), 'giveup_progress': float(giving_up_progress),
'giveup_distance': float(giving_up_distance), 'is_done': is_done} # 0: stop, forward, left, right
# return {"action": numpy.random.choice(self._POSSIBLE_ACTIONS)}
def reset_scenario_data_writer(self):
if len(self.scenario_traj_data) == 0:
return
assert len(self.tfwriters) == len(self.params.data_map_sizes) == 1
metadata = self.scenario_traj_data[0]
trajdata = self.scenario_traj_data
context_features = {
'trajlen': tf_int64_feature(len(trajdata)),
'goal_xy': tf_bytes_feature(metadata['global_goal_xy'].astype(np.float32).tobytes()),
'model_id': tf_bytes_feature(str(metadata['model_id']).encode()),
'floor': tf_int64_feature(metadata['floor']),
'episode_id': tf_int64_feature(metadata['episode_id']), # int(ep.episode_id)
'map_id': tf_int64_feature(self.saved_map_i),
}
sequence_features = {
'actions': sequence_feature_wrapper([stepdata['action'].astype(np.int32) for stepdata in trajdata]),
# global map coordinates
'xys': sequence_feature_wrapper([stepdata['global_xy'].astype(np.float32) for stepdata in trajdata]),
'yaws': sequence_feature_wrapper([stepdata['global_yaw'].astype(np.float32) for stepdata in trajdata]),
'is_using_planner': sequence_feature_wrapper([np.array(stepdata['is_using_planner'], dtype=bool) for stepdata in trajdata]),
'is_colliding': sequence_feature_wrapper([np.array(stepdata['is_colliding'], dtype=bool) for stepdata in trajdata]),
# coordinates in rotated local coordinate frame for planning. cropped compared to global pose
'local_true_xys': sequence_feature_wrapper([stepdata['local_true_xy'].astype(np.float32) for stepdata in trajdata]),
'local_true_yaws': sequence_feature_wrapper([stepdata['local_true_yaw'].astype(np.float32) for stepdata in trajdata]),
'local_est_xys': sequence_feature_wrapper([stepdata['local_est_xy'].astype(np.float32) for stepdata in trajdata]),
'local_est_yaws': sequence_feature_wrapper([stepdata['local_est_yaw'].astype(np.float32) for stepdata in trajdata]),
'local_goal_xys': sequence_feature_wrapper([stepdata['local_goal_xy'].astype(np.float32) for stepdata in trajdata]),
# maps used for planning (typically true-partial) and (accumulated) predicted map
'true_maps': sequence_feature_wrapper([stepdata['true_map_png'] for stepdata in trajdata]),
'pred_maps': sequence_feature_wrapper([stepdata['pred_map_png'] for stepdata in trajdata]),
'depths': sequence_feature_wrapper([stepdata['depth_png'] for stepdata in trajdata]),
'rgbs': sequence_feature_wrapper([stepdata['rgb_png'] for stepdata in trajdata]),
}
# store
example = tf.train.SequenceExample(context=tf.train.Features(feature=context_features),
feature_lists=tf.train.FeatureLists(feature_list=sequence_features))
if DATA_SEPARATE_FILES:
data_filename = os.path.join(
self.logdir, "habscenarios.episode.m%d.tfrecords.%d" % (self.params.data_map_sizes[0], self.num_data_entries))
with tf.python_io.TFRecordWriter(data_filename) as tfwriter:
tfwriter.write(example.SerializeToString())
else:
self.tfwriters[0].write(example.SerializeToString())
self.saved_map_i += 1
self.num_data_entries += 1
self.scenario_traj_data = []
def write_datapoint(self, map_for_planning, pred_map, target_xy, planned_path, action, shrunk_map_offset_xy):
assert planned_path.shape[0] > 0
assert planned_path.dtype == np.int32
planned_actions = grid_actions_from_trajectory(planned_path, connect8=self.params.connect8)
target_ij = target_xy.astype(np.int32)
if planned_path.shape[0] < self.params.trainlen:
return
if planned_path[-1][0] != target_ij[0] or planned_path[-1][1] != target_ij[1]:
print ("Skip because path does not reach goal")
print (planned_path)
print (target_ij)
return
# convert maps
assert map_for_planning.dtype == np.float32 and pred_map.dtype == np.float32
assert map_for_planning.shape == pred_map.shape
map_for_planning = (map_for_planning * 255.).astype(np.uint8)
pred_map = (pred_map * 255.).astype(np.uint8) # encode predicted probability as uint8
if np.any(map_for_planning[planned_path[:, 0], planned_path[:, 1]] < 127):
print ("Skip because path is not collision free")
return
# Q values. Assumes planner is a VI and it was called in this time step, otherwise path would be None
qs = self.pathplanner.last_qs_value
if shrunk_map_offset_xy is not None:
shrunk_map_offset_xy = shrunk_map_offset_xy.astype(np.int32)
qs = np.pad(qs, [[shrunk_map_offset_xy[0], pred_map.shape[0]-qs.shape[0]-shrunk_map_offset_xy[0]],
[shrunk_map_offset_xy[1], pred_map.shape[1]-qs.shape[1]-shrunk_map_offset_xy[1]],
[0, 0]])
assert qs.shape[:2] == pred_map.shape[:2]
assert len(self.tfwriters) == len(self.params.data_map_sizes)
for tfwriter, map_size in zip(self.tfwriters, self.params.data_map_sizes):
self.write_data_for_map_size(map_for_planning, pred_map, qs, planned_actions, planned_path, target_xy, tfwriter, map_size)
self.saved_map_i += 1
def write_data_for_map_size(self, map_for_planning, pred_map, qs, planned_actions, planned_path, target_xy, tfwriter, map_size):
segment_len = self.params.trainlen
if map_size < map_for_planning.shape[0]:
assert False, "We need to replan for q values to be valid."
if DATA_USE_LAST_SEGMENT:
# Find last trajectory segment that is still within the map size
margin = 2
for start_i in range(len(planned_path)):
range_ij = np.max(planned_path[start_i:], axis=0) - np.min(planned_path[start_i:], axis=0)
if np.all(range_ij < map_size - 2 * margin):
break
planned_path = planned_path[start_i:]
planned_actions = planned_actions[start_i:]
else:
# Find first trajectory segment that is within the map size and change goal
margin = 2
for end_i in range(len(planned_path), 0, -1): # go backwards
range_ij = np.max(planned_path[:end_i], axis=0) - np.min(planned_path[:end_i], axis=0)
if np.all(range_ij < map_size - 2 * margin):
break
planned_path = planned_path[:end_i]
planned_actions = planned_actions[:end_i]
target_xy = planned_path[-1].astype(np.float32) + 0.5
# Crop map
offset_ij = np.min(planned_path, axis=0)
range_ij = np.max(planned_path, axis=0) - offset_ij
# add half of the remaining spacing to the beginning
topleft_space = (map_size - range_ij) // 2
offset_ij = offset_ij - topleft_space
offset_ij = np.maximum(offset_ij, np.zeros((2, ), np.int32)) # cannot be less than zero
offset_ij = np.minimum(offset_ij, np.array(map_for_planning.shape[:2], np.int32) - map_size)
# crop the given size starting from offset_ij
map_for_planning = map_for_planning[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
pred_map = pred_map[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
qs = qs[offset_ij[0]:offset_ij[0]+map_size, offset_ij[1]:offset_ij[1]+map_size]
qs = qs.astype(np.float32)
# Move poses to cropped frame
planned_path = planned_path - offset_ij[None]
target_xy = target_xy - offset_ij.astype(np.float32)
if planned_path.shape[0] < self.params.trainlen:
return
assert map_for_planning.shape[0] == map_size and map_for_planning.shape[1] == map_size
assert np.all(planned_path >= 0) and np.all(planned_path < map_size)
# Limit trajlen so we only save the trajectory segment near the current pose
if DATA_FIRST_STEP_ONLY:
max_trajlen = segment_len # there will be only one segment
else:
max_trajlen = DATA_MAX_TRAJLEN // segment_len * segment_len
assert max_trajlen >= 2
planned_path = planned_path[:max_trajlen]
planned_actions = planned_actions[:max_trajlen-1]
# Abstract Q values along trajectory and make sure they are consistent with the action choices
q_traj = qs[planned_path[:, 0].astype(np.int32), planned_path[:, 1].astype(np.int32), :]
q_for_actions = q_traj[np.arange(q_traj.shape[0]-1), planned_actions]
assert np.all(np.isclose(q_for_actions, q_traj[:-1].max(axis=1)))
planned_xy = planned_path.astype(np.float32) + 0.5
true_map_png = cv2.imencode('.png', map_for_planning)[1].tobytes()
pred_map_png = cv2.imencode('.png', pred_map)[1].tobytes()
# segments
traj_segments = []
overlap = 1 # use extra step because last action will be dropped
assert overlap < segment_len
start_i = 0
while start_i < planned_path.shape[0] - overlap: # include all steps
# for incomplete last segment, start earlier overlapping with previous segment
if start_i + segment_len > planned_path.shape[0]:
start_i = planned_path.shape[0] - segment_len
overlap = -1 # this is to trigger a break at the end of this iteration
segment = tuple(range(start_i, start_i + segment_len))
traj_segments.append(segment)
start_i += segment_len - overlap
del overlap
assert not DATA_FIRST_STEP_ONLY or len(traj_segments) == 1
# store each segment
goal_xy = target_xy.copy()
for segment_i, segment in enumerate(traj_segments): # repeat multiple times
xy_segment = planned_xy[segment, :]
grid_action_segment = planned_actions[segment[:-1],]
q_segment = q_traj[segment[:-1],]
# dummy yaw and action
yaw_segment = np.ones((segment_len, 1), np.float32) * -1
action_segment = np.ones((segment_len - 1, 1), np.int32) * -1
# tfrecord features
context_features = {
'true_map': tf_bytes_feature(true_map_png),
'pred_map': tf_bytes_feature(pred_map_png),
'trajlen': tf_int64_feature(len(segment)),
'goal_xy': tf_bytes_feature(goal_xy.astype(np.float32).tobytes()),
'xy': tf_bytes_feature(xy_segment.astype(np.float32).tobytes()),
'yaw': tf_bytes_feature(yaw_segment.astype(np.float32).tobytes()),
'action': tf_bytes_feature(action_segment.astype(np.int32).tobytes()),
'grid_q_values': tf_bytes_feature(q_segment.astype(np.float32).tobytes()),
'grid_action': tf_bytes_feature(grid_action_segment.astype(np.int32).tobytes()),
'qs': tf_bytes_feature(qs.tobytes()),
'episode_id': tf_bytes_feature(np.array((self.episode_i + self.params.skip_first_n, ), np.int32).tobytes()),
'map_id': tf_bytes_feature(np.array((self.saved_map_i, ), np.int32).tobytes()),
'segment_i': tf_bytes_feature(np.array((segment_i, ), np.int32).tobytes()),
}
sequence_features = {
# 'local_map': tf.train.FeatureList(feature=[tf.train.Feature(bytes_list=tf.train.BytesList(value=[local_map_pngs[i]])) for i in segment]),
# 'visibility': tf.train.FeatureList(feature=[tf.train.Feature(bytes_list=tf.train.BytesList(value=[visibility_map_pngs[i]])) for i in segment]),
}
# store
example = tf.train.SequenceExample(context=tf.train.Features(feature=context_features),
feature_lists=tf.train.FeatureLists(feature_list=sequence_features))
tfwriter.write(example.SerializeToString())
self.num_data_entries += 1
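The while-loop above that splits the planned path into fixed-length, one-step-overlapping segments can be sketched as a standalone helper (the function name and signature are ours, for illustration only):

```python
def split_segments(num_steps, segment_len, overlap=1):
    # Fixed-length windows sharing `overlap` steps; a final window that
    # would run past the end is pulled back so it ends exactly at the last
    # step, mirroring the segmenting loop in write_data_for_map_size.
    segments = []
    start_i = 0
    while start_i < num_steps - overlap:
        if start_i + segment_len > num_steps:
            start_i = num_steps - segment_len
            overlap = -1  # forces the loop to stop after this segment
        segments.append(tuple(range(start_i, start_i + segment_len)))
        start_i += segment_len - overlap
    return segments
```

With `num_steps=10, segment_len=4` this yields `(0..3), (3..6), (6..9)`; with `num_steps=9` the last window overlaps the previous one instead of being dropped.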
def get_global_map_for_planning(self, global_map_pred, global_map_label, traj_xy, traj_yaw, map_shape, map_source, keep_soft):
if map_source in ['true', 'true-saved', 'true-saved-sampled', 'true-saved-hrsampled']:
assert global_map_label.ndim == 3
global_map_for_planning = global_map_label.copy()
assert global_map_for_planning.shape == map_shape
elif map_source in ['true-partial', 'true-partial-sampled']:
global_map_for_planning = global_map_label.copy()
# Overwrite unseen areas with 0.5
unseen_mask = np.isclose(global_map_pred, 0.5)
global_map_for_planning[unseen_mask] = 0.5
else:
global_map_for_planning = global_map_pred.copy()
# Erode with float values before thresholding and before adding patches for collision.
# This is used to account for larger robot than used in training data, like spot.
if self.params.map_erosion_pre_planning > 1:
global_map_for_planning = np.squeeze(global_map_for_planning, axis=-1)
global_map_for_planning = cv2.erode(
global_map_for_planning, Expert.get_kernel_for_erosion(self.params.map_erosion_pre_planning))
global_map_for_planning = global_map_for_planning[..., None]
if self.params.collision_patch_radius > 0 and self.step_i > 1:
global_map_for_planning = self.patch_map_with_collisions(global_map_for_planning, traj_xy,
traj_yaw, self.collision_timesteps,
self.params.collision_patch_radius)
if self.params.agent_clear_target_radius > 0:
try:
min_xy = self.target_xy_for_planning.astype(np.int32) - self.params.agent_clear_target_radius
max_xy = self.target_xy_for_planning.astype(np.int32) + self.params.agent_clear_target_radius + 1
global_map_for_planning[min_xy[0]:max_xy[0], min_xy[1]:max_xy[1]] = 1.
except Exception as e:
print ("Exception clearing target. " + str(e))
raise e
#
# if self.step_i == 1:
# print ("DEBUG !!!!!!! REMOVE !!!!!!!")
# self.collision_timesteps.append(1)
# threshold
cont_global_map_for_planning = global_map_for_planning
if not keep_soft:
traversable_threshold = self.params.traversable_threshold # higher than this is traversable
object_threshold = 0. # treat everything as non-object
threshold_const = np.array((traversable_threshold, object_threshold))[None, None, :self.map_ch - 1]
global_map_for_planning = np.array(global_map_for_planning >= threshold_const, np.float32)
return global_map_for_planning, cont_global_map_for_planning
@staticmethod
def shrink_map(xy, target_xy, global_map, margin=8):
assert margin > 6 # one step in each direction requires at least 6 margin
assert global_map.dtype == np.uint8
obst_i, obst_j, _ = np.nonzero(global_map == 0)
if obst_i.shape[0] == 0:
obst_i = np.array([xy[0]], np.int32)
obst_j = np.array([xy[1]], np.int32)
min_i = min(int(xy[0]), int(target_xy[0]), np.min(obst_i)) - margin
min_j = min(int(xy[1]), int(target_xy[1]), np.min(obst_j)) - margin
max_i = max(int(xy[0]), int(target_xy[0]), np.max(obst_i)) + margin + 1
max_j = max(int(xy[1]), int(target_xy[1]), np.max(obst_j)) + margin + 1
min_i = max(min_i, 0)
min_j = max(min_j, 0)
max_i = min(max_i, global_map.shape[0])
max_j = min(max_j, global_map.shape[1])
offset_xy = np.array([min_i, min_j], np.float32)
if min_i > 0 or min_j > 0 or max_i < global_map.shape[0] or max_j < global_map.shape[1]:
global_map = global_map[min_i:max_i, min_j:max_j]
xy = xy - offset_xy
target_xy = target_xy - offset_xy
return xy, target_xy, global_map, offset_xy
@staticmethod
def patch_map_with_collisions(global_map_for_planning, traj_xy, traj_yaw, collision_timesteps, patch_radius):
for timestep in collision_timesteps:
xy = traj_xy[timestep]
yaw = traj_yaw[timestep]
if patch_radius > 0.5:
num_samples = max(int(2 * patch_radius), 6)
ego_x, ego_y = np.meshgrid(
np.linspace(0, 2 * patch_radius, num_samples) - 0.4,
np.linspace(-patch_radius, patch_radius, num_samples),
indexing='ij')
ego_xy = np.stack((ego_x.flatten(), ego_y.flatten()), axis=-1)
abs_xy = xy[None] + rotate_2d(ego_xy, yaw[None])
abs_ij = abs_xy.astype(np.int32)
else:
abs_ij = xy[None].astype(np.int32)
# Filter out of range
abs_ij = abs_ij[np.logical_and.reduce([
abs_ij[:, 0] >= 0, abs_ij[:, 1] >= 0, abs_ij[:, 0] < global_map_for_planning.shape[0],
abs_ij[:, 1] < global_map_for_planning.shape[1] ])]
# Set map not traversable (0.)
global_map_for_planning[abs_ij[:, 0], abs_ij[:, 1]] = 0.
return global_map_for_planning
def pose_error(self, slam_xy, slam_yaw, true_xy, true_yaw):
xy_error = np.linalg.norm(true_xy + self.true_xy_offset - slam_xy)
yaw_error = true_yaw - slam_yaw
yaw_error = np.abs((yaw_error + np.pi) % (2 * np.pi) - np.pi)
return xy_error, yaw_error
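The yaw term in `pose_error` uses a standard wrap-to-[-pi, pi) trick before taking the magnitude; as a minimal standalone sketch (the helper name is ours, for illustration):

```python
import numpy as np

def wrapped_yaw_error(true_yaw, est_yaw):
    # Wrap the raw difference into [-pi, pi) before taking its magnitude,
    # so an error near +/-pi is not overcounted as nearly 2*pi.
    err = true_yaw - est_yaw
    return np.abs((err + np.pi) % (2 * np.pi) - np.pi)
```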
def run_inference(self, feed_dict, need_map=True):
outputs = self.sess.run((self.inference_outputs if need_map else self.inference_outputs_without_map), feed_dict=feed_dict)
return outputs
def video_update(self, frame_i):
# frame skip of 3
if frame_i % VIDEO_FRAME_SKIP == 0:
ind = min(frame_i // VIDEO_FRAME_SKIP, len(self.frame_traj_data)-1)
self.video_image_ax.set_data(self.frame_traj_data[ind]['rgb'])
self.video_image_ax2.set_data(1.-self.frame_traj_data[ind]['depth'][..., 0])
# self.video_text_ax1.set_text(self.frame_traj_data[ind]['target_status'])
split_str = self.frame_traj_data[ind]['control_status'] + " " + self.frame_traj_data[ind]['act_status']
# Attempt to break lines
segs = split_str.split("[")
if len(segs) > 1:
split_str = segs[0] + "\n["+"[".join(segs[1:])
segs = split_str.split(" v=")
if len(segs) > 1:
split_str = segs[0] + "\nv=" + " v=".join(segs[1:])
# self.video_text_ax2.set_text(split_str)
self.video_text_ax1.set_text("t = %d"%(frame_i // VIDEO_FRAME_SKIP + 1))
if self.video_global_map_ax is not None:
xy = self.frame_traj_data[ind]['xy']
target_xy = self.frame_traj_data[ind]['target_xy']
#subgoal = self.frame_traj_data[ind]['subgoal']
path = self.frame_traj_data[ind]['path'].copy()
if len(path) == 0:
path = xy[None]
path = np.array(path)[:, :2]
global_map = np.atleast_3d(self.frame_traj_data[ind]['global_map'])
global_map = np.tile(global_map[:, :, :1], [1, 1, 3])
true_map = np.atleast_3d(self.frame_traj_data[ind]['true_global_map'])
true_map = np.tile(true_map[:, :, :1], [1, 1, 3])
# map_for_planning = np.atleast_3d(self.frame_traj_data[ind]['global_map_for_planning'])
map_for_planning = np.atleast_3d(self.frame_traj_data[ind]['cont_global_map_for_planning'])
map_for_planning = np.tile(map_for_planning[:, :, :1], [1, 1, 3])
if self.fixed_map_size:
# Fix window to full map
window_size = self.max_map_size[0]
map_for_planning_crop, path_crop, target_xy_crop, xy_crop = self.crop_experience_window(
window_size, map_for_planning, path, target_xy, xy)
# Use a fixed global view
combined_map = map_for_planning_crop
combined_map2 = global_map
combined_map3 = true_map
else:
# Follow agent with a window
window_size = 220
map_for_planning_crop, path_crop, target_xy_crop, xy_crop = self.crop_experience_window(
window_size, map_for_planning, path, target_xy, xy)
combined_map = map_for_planning_crop # global_map_crop if MAP_SOURCE == 'pred' else true_map_crop
global_map_crop, _, _, temp_xy_crop = self.crop_experience_window(window_size, global_map, path, target_xy, xy)
assert np.all(temp_xy_crop == xy_crop)
combined_map2 = global_map_crop
true_map_crop, _, _, temp_xy_crop = self.crop_experience_window(window_size, true_map, path, target_xy, xy)
assert np.all(temp_xy_crop == xy_crop)
combined_map3 = true_map_crop
xy = xy_crop
target_xy = target_xy_crop
path = path_crop
planned_path_skip = 4
# global_map = global_map[:map_size-map_offset_xy[0], :map_size-map_offset_xy[1]]
# combined_map[map_offset_xy[0]:map_offset_xy[0]+global_map.shape[0], map_offset_xy[1]:map_offset_xy[1]+global_map.shape[1]] = global_map
# TODO add mild colors to cont_global_map_for_planning
combined_map[int(xy_crop[0])-1:int(xy_crop[0])+2, int(xy_crop[1])-1:int(xy_crop[1]+2)] = (1., 0., 1.)
combined_map2[int(xy[0])-1:int(xy[0])+2, int(xy[1])-1:int(xy[1]+2)] = (1., 0., 1.)
combined_map3[int(xy[0])-1:int(xy[0])+2, int(xy[1])-1:int(xy[1]+2)] = (1., 0., 1.)
# print (self.video_ax.get_xlim())
self.video_ax.set_xlim(-0.5, combined_map.shape[1]-0.5)
self.video_ax.set_ylim(combined_map.shape[0]-0.5, -0.5)
self.video_global_map_ax.set_data(combined_map)
self.video_global_map_ax.set_extent([-0.5, combined_map.shape[1]-0.5, combined_map.shape[0]-0.5, -0.5])
self.video_path_scatter.set_offsets(np.flip(path_crop[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter.set_offsets([np.flip(target_xy_crop, axis=-1)])
if VIDEO_LARGE_PLOT:
self.video_ax2.set_xlim(-0.5, combined_map2.shape[1]-0.5)
self.video_ax2.set_ylim(combined_map2.shape[0]-0.5, -0.5)
self.video_global_map_ax2.set_data(combined_map2)
self.video_global_map_ax2.set_extent([-0.5, combined_map2.shape[1]-0.5, combined_map2.shape[0]-0.5, -0.5])
self.video_path_scatter2.set_offsets(np.flip(path[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter2.set_offsets([np.flip(target_xy, axis=-1)])
self.video_ax3.set_xlim(-0.5, combined_map3.shape[1]-0.5)
self.video_ax3.set_ylim(combined_map3.shape[0]-0.5, -0.5)
self.video_global_map_ax3.set_data(combined_map3)
self.video_global_map_ax3.set_extent([-0.5, combined_map3.shape[1]-0.5, combined_map3.shape[0]-0.5, -0.5])
self.video_path_scatter3.set_offsets(np.flip(path[planned_path_skip::planned_path_skip], axis=-1))
self.video_target_scatter3.set_offsets([np.flip(target_xy, axis=-1)])
if VIDEO_DETAILED:
# View angle
half_fov = 0.5 * np.deg2rad(70)
for ang_i, angle in enumerate([half_fov, -half_fov]):
angle = angle - float(self.frame_traj_data[ind]['yaw']) + np.pi/2
# angle = angle + yaw[batch_i, traj_i, 0]
v = np.array([np.cos(angle), np.sin(angle)]) * 10.
x1 = np.array([xy[1], xy[0]]) # need to be flipped for display
x2 = v + x1
self.video_view_angle_lines[ang_i].set_data([x1[0], x2[0]], [x1[1], x2[1]])
#
# # pdb.set_trace()
# Path
# # print(self.frame_traj_data[ind]['xy'], path[0])
# for i in range(len(self.video_path_circles)-2):
# path_i = min(i * 4, len(path)-1)
# xy = path[path_i]
# self.video_path_circles[i].center = ([xy[1], xy[0]])
# # Sub-goal
# xy = self.frame_traj_data[ind]['subgoal']
# self.video_path_circles[-2].center = ([xy[1], xy[0]])
# # Target
# xy = path[-1]
# self.video_path_circles[-1].center = ([xy[1], xy[0]])
# self.video_text_ax2.set_data(self.summary_str)
if self.params.interactive_video:
plt.draw()
plt.show()
plt.waitforbuttonpress(0.01)
return self.video_image_ax
def crop_experience_window(self, map_size, global_map, path, target_xy, xy):
# Crop to a fixed map_size x map_size window centered between the agent and the target
center_xy = (xy + target_xy) * 0.5
desired_center_xy = np.array(map_size, np.float32) * 0.5
center_xy = center_xy.astype(np.int32)
desired_center_xy = desired_center_xy.astype(np.int32)
offset_xy = (desired_center_xy - center_xy).astype(np.int32)
xy = xy + offset_xy
target_xy = target_xy + offset_xy
# subgoal += offset_xy
path = path + offset_xy[None]
map_start_xy = np.maximum(center_xy - map_size // 2, 0)
map_cutoff_xy = -np.minimum(center_xy - map_size // 2, 0)
global_map = global_map[map_start_xy[0]:map_start_xy[0] + map_size - map_cutoff_xy[0], map_start_xy[1]:map_start_xy[1] + map_size - map_cutoff_xy[1]]
global_map_crop = np.ones((map_size, map_size, 3), np.float32) * 0.5
global_map_crop[map_cutoff_xy[0]:map_cutoff_xy[0] + global_map.shape[0], map_cutoff_xy[1]:global_map.shape[1] + map_cutoff_xy[1]] = global_map
return global_map_crop, path, target_xy, xy
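The clamp-and-pad logic in `crop_experience_window` can be sketched on its own; a minimal version (function name and shape handling assumed by us) that fills out-of-map area with 0.5 (unknown):

```python
import numpy as np

def crop_with_padding(global_map, center_ij, size):
    # Clamp the desired window to the map bounds; `cutoff` is how far the
    # window hangs past the top-left edge.
    start = np.maximum(center_ij - size // 2, 0)
    cutoff = -np.minimum(center_ij - size // 2, 0)
    src = global_map[start[0]:start[0] + size - cutoff[0],
                     start[1]:start[1] + size - cutoff[1]]
    # Paste the in-bounds part into a 0.5-filled canvas of the target size.
    out = np.full((size, size) + global_map.shape[2:], 0.5, np.float32)
    out[cutoff[0]:cutoff[0] + src.shape[0],
        cutoff[1]:cutoff[1] + src.shape[1]] = src
    return out
```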
def plot_loop(self, queue):
# Infinite loop that takes frame data or reset request from queue and does plotting in a separate thread.
plt.ion()
print ("plot loop")
while True:
cmd, frame_data = queue.get(block=True)
# print ("plot command %s"%cmd)
if cmd == "reset":
self.reset_video_writer(called_from_plot_process=True)
elif cmd == "exit":
# self.reset_video_writer(called_from_plot_process=True)
# TODO could save it here, but trying to create a new figure in this thread usually raises an exception
plt.close('all')
return
elif cmd == "step":
self.frame_traj_data.append(frame_data)
self.video_update(VIDEO_FRAME_SKIP * len(self.frame_traj_data)) # hack to plot last frame
else:
raise ValueError("Unknown plot command")
def reset_video_writer(self, last_success=None, called_from_plot_process=False):
if not called_from_plot_process and self.plot_process:
self.plot_queue.put(("reset", None))
return
if self.params.interactive_video or (SAVE_VIDEO and len(self.frame_traj_data) > 0):
# Save video
if False:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
self.video_global_map_ax = None
self.video_text_ax0 = fig.text(0.04, 0.9, self.summary_str, transform=fig.transFigure, fontsize=10, verticalalignment='top') # bottom left
self.video_text_ax1 = fig.text(0.96, 0.9, "Target", transform=fig.transFigure, fontsize=10, verticalalignment='top', horizontalalignment='right')
self.video_text_ax2 = fig.text(0.04, 0.05, "Status2", transform=fig.transFigure, fontsize=10, verticalalignment='bottom', wrap=True)
else:
# fig = plt.figure(figsize=(6, 9)) # aspect ratio
# ax = fig.add_subplot(221 if VIDEO_LARGE_PLOT else 121)
# # ax.set_aspect('equal')
# ax.get_xaxis().set_visible(False)
# ax.get_yaxis().set_visible(False)
# self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
if self.params.interactive_video and not called_from_plot_process:
plt.close('all')
fig = plt.figure(constrained_layout=True, figsize=(9, 5)) # figsize overwritten later
gs = gridspec.GridSpec(20, 30)
ax = plt.subplot(gs[:9, :15]) # image
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_image_ax = ax.imshow(np.zeros((90, 160, 3)))
ax = plt.subplot(gs[9:18, 0:15]) # depth
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_image_ax2 = ax.imshow(np.zeros((90, 160)), cmap='Greys', vmin=0., vmax=1.)
ax = plt.subplot(gs[:18, 15:]) # map window
# ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_global_map_ax = ax.imshow(np.zeros((1200, 1200, 3)))
self.video_ax = ax
self.video_path_scatter = ax.scatter([0.], [1.], s=2., c='green', marker='o')
self.video_target_scatter = ax.scatter([0.], [1.], s=2., c='red', marker='o')
if VIDEO_LARGE_PLOT:
ax = fig.add_subplot(223)
# ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_global_map_ax2 = ax.imshow(np.zeros((1200, 1200, 3)))
self.video_ax2 = ax
self.video_path_scatter2 = ax.scatter([0.], [1.], s=2., c='green', marker='o')
self.video_target_scatter2 = ax.scatter([0.], [1.], s=2., c='red', marker='o')
ax = fig.add_subplot(224)
# ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
self.video_global_map_ax3 = ax.imshow(np.zeros((1200, 1200, 3)))
self.video_ax3 = ax
self.video_path_scatter3 = ax.scatter([0.], [1.], s=2., c='green', marker='o')
self.video_target_scatter3 = ax.scatter([0.], [1.], s=2., c='red', marker='o')
if VIDEO_DETAILED:
# self.video_view_angle_lines = [mlines.Line2D([0., 0.], [10., 10.,], color='green') for _ in range(2)]
# ax.add_line(self.video_view_angle_lines[0])
# ax.add_line(self.video_view_angle_lines[1])
# self.video_path_circles = []
# for i in range(20):
# circle = plt.Circle((0., 0.), 2., color=('red' if i >= 18 else 'orange'), fill=False, transform='data')
# ax.add_artist(circle)
# self.video_path_circles.append(circle)
self.video_view_angle_lines = []
for _ in range(2):
self.video_view_angle_lines.extend(ax.plot([0., 1.], [0., 1.], '-', color='blue')) # plot returns a list of lines
# self.video_text_ax0 = fig.text(0.04, 0.9, self.summary_str, transform=fig.transFigure, fontsize=9,
# verticalalignment='top') # bottom left
# self.video_text_ax1 = fig.text(0.96, 0.9, "Target", transform=fig.transFigure, fontsize=9,
# verticalalignment='top', horizontalalignment='right')
# self.video_text_ax2 = fig.text(0.04, 0.05, "Status2", transform=fig.transFigure, fontsize=9,
# verticalalignment='bottom', wrap=True)
self.video_text_ax1 = fig.text(0.5, 0.05, "Status2", transform=fig.transFigure, fontsize=9,
verticalalignment='bottom', horizontalalignment='center', wrap=False)
# im.set_clim([0, 1])
if self.params.interactive_video:
fig.set_size_inches([9., 5.])
else:
fig.set_size_inches([9./2, 5./2])
plt.tight_layout()
if SAVE_VIDEO and len(self.frame_traj_data) > 0:
ani = animation.FuncAnimation(fig, self.video_update, len(self.frame_traj_data) * VIDEO_FRAME_SKIP + 21, interval=100) # time between frames in ms. overwritten by fps below
writer = animation.writers['ffmpeg'](fps=VIDEO_FPS) # default h264 is lossless, but could not be found in docker
if last_success is not None:
outcome_str = '_S' if last_success else '_F'
else:
outcome_str = ''
outcome_str = outcome_str + '_' + self.frame_traj_data[-1]['outcome']
video_filename = os.path.join(self.logdir, '%s_%d%s%s.mp4'%(self.get_scene_name(), self.episode_i, outcome_str, self.filename_addition))
ani.save(video_filename, writer=writer, dpi=200)
print ("Video saved to "+video_filename)
self.num_videos_saved += 1
self.frame_traj_data = []
def visualize_agent(self, visibility_mask, images, global_map_pred, global_map_for_planning, global_map_label,
global_map_true_partial, local_map_pred, local_map_label, planned_path, sim_rgb=None,
local_obj_map_pred=None, xy=None, yaw=None, true_xy=None, true_yaw=None, target_xy=None):
# Coordinate systems don't match the ones assumed in these plot functions, but everything cancels out except for yaw
yaw = yaw - np.pi/2
if true_yaw is not None:
true_yaw = true_yaw - np.pi/2
status_msg = "step %d" % (self.step_i,)
if global_map_label is not None:
# assert global_map_label.shape[-1] == 3
global_map_label = np.concatenate(
[global_map_label, np.zeros_like(global_map_label), np.zeros_like(global_map_label)], axis=-1)
plt.figure("Global map label")
plt.imshow(global_map_label)
plot_viewpoints(xy[0], xy[1], yaw)
if true_xy is not None and true_yaw is not None:
plot_viewpoints(true_xy[0], true_xy[1], true_yaw, color='green')
plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
plt.title(status_msg)
plt.savefig('./temp/global-map-label.png')
plt.figure("Global map (%d)" % self.step_i)
map_to_plot = global_map_pred[..., :1]
map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3-map_to_plot.shape[-1]]])
plt.imshow(map_to_plot)
plot_viewpoints(xy[0], xy[1], yaw)
plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
# plot_target_and_path(target_xy=target_xy_vel, path=np.array(self.hist2)[:, :2])
plt.title(status_msg)
plt.savefig('./temp/global-map-pred.png')
if global_map_pred.shape[-1] == 2:
map_to_plot = global_map_pred[..., 1:2]
map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3-map_to_plot.shape[-1]]])
plt.imshow(map_to_plot)
plot_viewpoints(xy[0], xy[1], yaw)
plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
plt.title(status_msg)
plt.savefig('./temp/global-obj-map-pred.png')
# if global_map_true_partial is not None:
# plt.figure("Global map true (%d)" % self.step_i)
# map_to_plot = global_map_true_partial
# map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3-map_to_plot.shape[-1]]])
# plt.imshow(map_to_plot)
# plot_viewpoints(xy[0], xy[1], yaw)
# plot_target_and_path(target_xy=self.target_xy, path=planned_path)
# # plot_target_and_path(target_xy=self.target_xy, path=np.array(self.hist1)[:, :2])
# # plot_target_and_path(target_xy=self.target_xy_vel, path=np.array(self.hist2)[:, :2])
# plt.title(status_msg)
# plt.savefig('./temp/global-map-true.png')
# plt.figure("Global map plan (%d)" % self.step_i)
map_to_plot = global_map_for_planning
map_to_plot = np.pad(map_to_plot, [[0, 0], [0, 0], [0, 3-map_to_plot.shape[-1]]])
plt.imshow(map_to_plot)
plot_viewpoints(xy[0], xy[1], yaw)
plot_target_and_path(target_xy=target_xy, path=planned_path, every_n=1)
plt.title(status_msg)
plt.savefig('./temp/global-map-plan.png')
depth, rgb = mapping_visualizer.recover_depth_and_rgb(images)
if self.params.mode == 'depth' and sim_rgb is not None:
rgb = sim_rgb
rgb[:5, :5, :] = 0 # indicate this is not observed
images_fig, images_axarr = plt.subplots(2, 2, squeeze=True)
plt.title(status_msg)
plt.axes(images_axarr[0, 0])
plt.imshow(depth)
plt.axes(images_axarr[0, 1])
plt.imshow(rgb)
plt.axes(images_axarr[1, 0])
if local_map_pred is not None:
plt.imshow(local_map_pred * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0., vmax=1.)
plt.axes(images_axarr[1, 1])
if local_obj_map_pred is not None:
plt.imshow(local_obj_map_pred * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0, vmax=1.)
elif local_map_label is not None:
plt.imshow(local_map_label * visibility_mask + (1 - visibility_mask) * 0.5, vmin=0., vmax=1.)
plt.savefig('./temp/inputs.png')
#
if INTERACTIVE_PLOT:
plt.figure('step')
plt.show()
# pdb.set_trace()
button_res = plt.waitforbuttonpress(0.01) # True for keyboard, False for mouse, None for timeout
if button_res:
print ('pause')
pdb.set_trace()
else:
plt.close('all')
# def main():
# params = parse_args(default_files=('./gibson_submission.conf', ))
# is_submission = (params.gibson_mode == 'submission')
#
# parser = argparse.ArgumentParser()
# parser.add_argument("--evaluation", type=str, required=True, choices=["local", "remote"])
# args = parser.parse_args()
#
# config_paths = os.environ["CHALLENGE_CONFIG_FILE"]
# config = habitat.get_config(config_paths)
#
# # agent = RandomAgent(task_config=config)
#
# if args.evaluation == "local":
# challenge = habitat.Challenge(eval_remote=False)
# else:
# challenge = habitat.Challenge(eval_remote=True)
#
# env = challenge._env
# agent = DSLAMAgent(task_config=config, env=env)
#
# challenge.submit(agent)
#
#
# if __name__ == "__main__":
# main()
# --- examples/nightlight/nightlight.py (pimoroni/breakout-garden, MIT license) ---
#!/usr/bin/env python3
import time
from ltr559 import LTR559
from rgbmatrix5x5 import RGBMatrix5x5
print("""This Pimoroni Breakout Garden example requires an
LTR-559 Light and Proximity Breakout and a 5x5 RGB Matrix Breakout.
This example creates a little nightlight that can be toggled on or
off by tapping the proximity sensor with your finger, or triggered
automatically when it's dark.
Press Ctrl+C to exit.
""")
# Set up the LTR-559 sensor
ltr559 = LTR559()
# Set up the 5x5 RGB matrix
rgbmatrix5x5 = RGBMatrix5x5()
rgbmatrix5x5.set_clear_on_exit()
rgbmatrix5x5.set_brightness(0.8)
# Initial variables to keep track of state of light
state = False
last_state = False
toggled = False
light_threshold = 100 # Low-light trigger level
prox_threshold = 1000 # Proximity trigger level
colour = (255, 165, 0) # Orange-ish
# Function to toggle the RGB matrix on or off depending on state
def toggle_matrix():
global state, last_state
if state is True and last_state is False:
rgbmatrix5x5.set_all(*colour)
rgbmatrix5x5.show()
elif state is False and last_state is True:
rgbmatrix5x5.clear()
rgbmatrix5x5.show()
last_state = state
# Read the sensor once, as the first values are always squiffy
ltr559.update_sensor()
lux = ltr559.get_lux()
prox = ltr559.get_proximity()
time.sleep(1)
try:
while True:
# Read the light and proximity sensor
ltr559.update_sensor()
lux = ltr559.get_lux()
prox = ltr559.get_proximity()
# If it's dark and the light isn't toggled on, turn on
if lux < light_threshold and not toggled:
state = True
if state != last_state:
print("It's dark! Turning light ON")
toggle_matrix()
# If it's light and the light isn't on, turn off
elif lux >= light_threshold and not toggled:
state = False
if state != last_state:
print("It's light! Turning light OFF")
toggle_matrix()
# If there's a tap on the sensor
if prox > prox_threshold:
# Toggle it off if it's currently on
if toggled:
state = False
toggled = False
if state != last_state:
print("Toggling light OFF")
toggle_matrix()
# Toggle it on if it's currently off
else:
state = True
toggled = True
if state != last_state:
print("Toggling light ON")
toggle_matrix()
# Wait a short while to prevent the on/off switch
# from immediately re-triggering
time.sleep(0.5)
elif prox < prox_threshold and lux >= light_threshold:
state = False
time.sleep(0.05)
except KeyboardInterrupt:
pass
| 27.788462 | 67 | 0.623529 | 386 | 2,890 | 4.585492 | 0.331606 | 0.045763 | 0.039548 | 0.036158 | 0.19435 | 0.177401 | 0.167232 | 0.062147 | 0.062147 | 0.062147 | 0 | 0.041667 | 0.310727 | 2,890 | 103 | 68 | 28.058252 | 0.846888 | 0.215225 | 0 | 0.358209 | 0 | 0 | 0.175922 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0.014925 | 0.044776 | 0 | 0.059701 | 0.074627 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
# python_for_everybody/for_in_range.py (timothyyu/p4e-prac, BSD-3-Clause)
x = 0
for y in range(5):
    print(x, y)
    x = x + y
# x = 0 + 0
# x = 0 + 1 ==> x = 1
# x = 1 + 2 ==> x = 3
# x = 3 + 3 ==> x = 6
# x = 6 + 4 ==> x = 10
print(x)
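The trace in the comments is the triangular sum 0+1+2+3+4, so the built-in `sum` gives the same total directly:

```python
x = 0
for y in range(5):
    x = x + y
# The accumulation is equivalent to summing the range in one call.
assert x == sum(range(5)) == 10
```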
# leaguedirector/sequence/sequenceTrackView.py (santutu/league-director, Apache-2.0)
import copy
import statistics
from operator import attrgetter
from PySide2.QtCore import Signal, Qt, QEvent
from PySide2.QtGui import QPen, QMouseEvent
from PySide2.QtWidgets import QGraphicsView, QGraphicsScene, QAbstractScrollArea, QApplication, QGraphicsItem
from leaguedirector.libs.memoryCache import MemoryCache
from leaguedirector.sequence.constant import PRECISION, ADJACENT
from leaguedirector.sequence.sequenceKeyframe import SequenceKeyframe
from leaguedirector.sequence.sequenceTime import SequenceTime
from leaguedirector.sequence.sequenceTrack import SequenceTrack
from leaguedirector.widgets import schedule

class SequenceTrackView(QGraphicsView):
    selectionChanged = Signal()

    def __init__(self, api, headers):
        self.api = api
        self.scene = QGraphicsScene()
        QGraphicsView.__init__(self, self.scene)
        self.tracks = {}
        self.timer = schedule(10, self.animate)
        self.scale(1.0 / PRECISION, 1.0)
        self.setDragMode(QGraphicsView.NoDrag)
        self.setAlignment(Qt.AlignLeft | Qt.AlignTop)
        self.setTransformationAnchor(QGraphicsView.AnchorUnderMouse)
        self.setSizeAdjustPolicy(QAbstractScrollArea.AdjustToContents)
        for index, name in enumerate(self.api.sequence.keys()):
            track = SequenceTrack(self.api, name, index)
            self.scene.addItem(track)
            self.tracks[name] = track
        self.time = SequenceTime(0, 1, 0, self.scene.height() - 2)
        self.time.setPen(QPen(QApplication.palette().highlight(), 1))
        self.time.setFlags(QGraphicsItem.ItemIgnoresTransformations)
        self.scene.addItem(self.time)
        self.api.playback.updated.connect(self.update)
        self.api.sequence.updated.connect(self.update)
        self.api.sequence.dataLoaded.connect(self.reload)
        headers.addKeyframe.connect(self.addKeyframe)
        headers.verticalScrollBar().valueChanged.connect(lambda value: self.verticalScrollBar().setValue(value))
        self.verticalScrollBar().valueChanged.connect(lambda value: headers.verticalScrollBar().setValue(value))
        self.scene.selectionChanged.connect(self.selectionChanged.emit)
        self.clipboard = MemoryCache()
        self.clipboard.set('copied_key_frames', [])

    def copyKeyframes(self):
        self.clipboard.set('copied_key_frames',
                           [(keyframe.track.name, copy.deepcopy(keyframe.item)) for keyframe in
                            self.selectedKeyframes()])
        return self

    def pasteKeyframes(self):
        keyframes = self.clipboard.get('copied_key_frames')
        for keyframe in keyframes:
            [name, item] = keyframe
            item = copy.deepcopy(item)
            self.api.sequence.appendKeyframe(name, item)
            SequenceKeyframe(self.api, item, self.tracks[name])

    def reload(self):
        for track in self.tracks.values():
            track.reload()

    def selectedKeyframes(self):
        return [key for key in self.scene.selectedItems() if isinstance(key, SequenceKeyframe)]

    def allKeyframes(self):
        return [key for key in self.scene.items() if isinstance(key, SequenceKeyframe)]

    def addKeyframe(self, name):
        self.tracks[name].addKeyframe()

    def clearKeyframes(self):
        for track in self.tracks.values():
            track.clearKeyframes()

    def deleteSelectedKeyframes(self):
        for selected in self.selectedKeyframes():
            selected.delete()

    def selectAllKeyframes(self):
        for child in self.allKeyframes():
            child.setSelected(True)

    def selectAdjacentKeyframes(self):
        for selected in self.selectedKeyframes():
            for child in self.allKeyframes():
                if abs(child.time - selected.time) < ADJACENT:
                    child.setSelected(True)

    def selectNextKeyframe(self):
        selectionSorted = sorted(self.selectedKeyframes(), key=attrgetter('time'))
        trackSelection = {key.track: key for key in selectionSorted}
        for track, selected in trackSelection.items():
            for child in sorted(track.childItems(), key=attrgetter('time')):
                if child.time > selected.time:
                    trackSelection[track] = child
                    break
        self.scene.clearSelection()
        for item in trackSelection.values():
            item.setSelected(True)

    def selectPrevKeyframe(self):
        selectionSorted = sorted(self.selectedKeyframes(), key=attrgetter('time'), reverse=True)
        trackSelection = {key.track: key for key in selectionSorted}
        for track, selected in trackSelection.items():
            for child in sorted(track.childItems(), key=attrgetter('time'), reverse=True):
                if child.time < selected.time:
                    trackSelection[track] = child
                    break
        self.scene.clearSelection()
        for item in trackSelection.values():
            item.setSelected(True)

    def seekSelectedKeyframe(self):
        selected = [key.time for key in self.selectedKeyframes()]
        if selected:
            self.api.playback.pause(statistics.mean(selected))

    def update(self):
        for track in self.tracks.values():
            track.update()

    def mousePressEvent(self, event):
        if event.button() == Qt.RightButton:
            self.setDragMode(QGraphicsView.ScrollHandDrag)
            QGraphicsView.mousePressEvent(self, QMouseEvent(
                QEvent.GraphicsSceneMousePress,
                event.pos(),
                Qt.MouseButton.LeftButton,
                Qt.MouseButton.LeftButton,
                Qt.KeyboardModifier.NoModifier
            ))
        elif event.button() == Qt.LeftButton:
            if event.modifiers() == Qt.ShiftModifier:
                self.setDragMode(QGraphicsView.RubberBandDrag)
                QGraphicsView.mousePressEvent(self, event)
            QGraphicsView.mousePressEvent(self, event)

    def mouseDoubleClickEvent(self, event):
        QGraphicsView.mouseDoubleClickEvent(self, event)
        if not self.scene.selectedItems() and not event.isAccepted():
            self.api.playback.pause(self.mapToScene(event.pos()).x() / PRECISION)

    def mouseReleaseEvent(self, event):
        QGraphicsView.mouseReleaseEvent(self, event)
        self.setDragMode(QGraphicsView.NoDrag)

    def wheelEvent(self, event):
        if event.angleDelta().y() > 0:
            self.scale(1.1, 1.0)
        else:
            self.scale(0.9, 1.0)

    def animate(self):
        self.time.setPos(self.api.playback.currentTime * PRECISION, 0)
# annealed_flow_transport/flows.py (deepmind/annealed_flow_transport, Apache-2.0)
# Copyright 2020 DeepMind Technologies Limited.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Code for normalizing flows.
For a review of normalizing flows see: https://arxiv.org/abs/1912.02762
The abstract base class ConfigurableFlow demonstrates our minimal interface.
Although the standard change of variables formula requires that
normalizing flows are invertible, none of the algorithms in train.py
require evaluating that inverse explicitly so inverses are not implemented.
"""
import abc
from typing import Callable, List, Tuple
import annealed_flow_transport.aft_types as tp
import chex
import haiku as hk
import jax
import jax.numpy as jnp
import numpy as np
Array = tp.Array
ConfigDict = tp.ConfigDict

class ConfigurableFlow(hk.Module, abc.ABC):
  """Abstract base class for configurable normalizing flows.

  This is the interface expected by all flow based algorithms called in train.py
  """

  def __init__(self, config: ConfigDict):
    super().__init__()
    self._check_configuration(config)
    self._config = config

  def _check_input(self, x: Array) -> Array:
    chex.assert_rank(x, 1)

  def _check_outputs(self, x: Array, transformed_x: Array,
                     log_abs_det_jac: Array) -> Array:
    chex.assert_rank(x, 1)
    chex.assert_equal_shape([x, transformed_x])
    chex.assert_shape(log_abs_det_jac, ())

  def _check_members_types(self, config: ConfigDict, expected_members_types):
    for elem, elem_type in expected_members_types:
      if elem not in config:
        raise ValueError('Flow config element not found: ', elem)
      if not isinstance(config[elem], elem_type):
        msg = 'Flow config element ' + elem + ' is not of type ' + str(elem_type)
        raise TypeError(msg)

  def __call__(self, x: Array) -> Tuple[Array, Array]:
    """Call transform_and_log_abs_det_jac with automatic shape checking.

    This calls transform_and_log_abs_det_jac which needs to be implemented
    in derived classes.

    Args:
      x: Array size (num_dim,) containing input to flow.
    Returns:
      Array size (num_dim,) containing output and Scalar log abs det Jacobian.
    """
    self._check_input(x)
    output, log_abs_det_jac = self.transform_and_log_abs_det_jac(x)
    self._check_outputs(x, output, log_abs_det_jac)
    return output, log_abs_det_jac

  @abc.abstractmethod
  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    """Transform x through the flow and compute log abs determinant of Jacobian.

    Args:
      x: (num_dim,) input to the flow.
    Returns:
      Array size (num_dim,) containing output and Scalar log abs det Jacobian.
    """

  @abc.abstractmethod
  def _check_configuration(self, config: ConfigDict):
    """Check the configuration includes the necessary fields.

    Will typically raise Assertion like errors.

    Args:
      config: A ConfigDict including the fields required by the flow.
    """


class DiagonalAffine(ConfigurableFlow):
  """An affine transformation with a positive diagonal matrix."""

  def _check_configuration(self, unused_config: ConfigDict):
    pass

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    num_elem = x.shape[0]
    unconst_diag_init = hk.initializers.Constant(jnp.zeros((num_elem,)))
    bias_init = hk.initializers.Constant(jnp.zeros((num_elem,)))
    unconst_diag = hk.get_parameter('unconst_diag',
                                    shape=[num_elem],
                                    dtype=x.dtype,
                                    init=unconst_diag_init)
    bias = hk.get_parameter('bias',
                            shape=[num_elem],
                            dtype=x.dtype,
                            init=bias_init)
    output = jnp.exp(unconst_diag) * x + bias
    log_abs_det = jnp.sum(unconst_diag)
    return output, log_abs_det


def rational_quadratic_spline(x: Array,
                              bin_positions: Array,
                              bin_heights: Array,
                              derivatives: Array) -> Tuple[Array, Array]:
  """Compute a rational quadratic spline.

  See https://arxiv.org/abs/1906.04032

  Args:
    x: A single real number.
    bin_positions: A sorted array of bin positions of length num_bins+1.
    bin_heights: An array of bin heights of length num_bins+1.
    derivatives: An array of derivatives at bin positions of length num_bins+1.
  Returns:
    Value of the rational quadratic spline at x.
    Derivative with respect to x of rational quadratic spline at x.
  """
  bin_index = jnp.searchsorted(bin_positions, x)
  array_index = bin_index % len(bin_positions)
  lower_x = bin_positions[array_index - 1]
  upper_x = bin_positions[array_index]
  lower_y = bin_heights[array_index - 1]
  upper_y = bin_heights[array_index]
  lower_deriv = derivatives[array_index - 1]
  upper_deriv = derivatives[array_index]
  delta_x = upper_x - lower_x
  delta_y = upper_y - lower_y
  slope = delta_y / delta_x
  alpha = (x - lower_x) / delta_x
  alpha_squared = jnp.square(alpha)
  beta = alpha * (1. - alpha)
  gamma = jnp.square(1. - alpha)
  epsilon = upper_deriv + lower_deriv - 2. * slope
  numerator_quadratic = delta_y * (slope * alpha_squared + lower_deriv * beta)
  denominator_quadratic = slope + epsilon * beta
  interp_x = lower_y + numerator_quadratic / denominator_quadratic
  # now compute derivative
  numerator_deriv = jnp.square(slope) * (
      upper_deriv * alpha_squared + 2. * slope * beta + lower_deriv * gamma)
  sqrt_denominator_deriv = slope + epsilon * beta
  denominator_deriv = jnp.square(sqrt_denominator_deriv)
  deriv = numerator_deriv / denominator_deriv
  return interp_x, deriv
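A useful sanity check on the formula above: when both edge derivatives equal the bin's slope, epsilon vanishes, the quadratic terms cancel, and the spline reduces to linear interpolation. A scalar pure-Python transcription of the single-bin case (no JAX; `rq_bin` is an illustrative helper, assuming x lies inside [lower_x, upper_x]):

```python
def rq_bin(x, lower_x, upper_x, lower_y, upper_y, lower_deriv, upper_deriv):
    """One bin of the rational quadratic spline, scalar and pure Python."""
    delta_x, delta_y = upper_x - lower_x, upper_y - lower_y
    slope = delta_y / delta_x
    alpha = (x - lower_x) / delta_x
    beta = alpha * (1.0 - alpha)
    eps = upper_deriv + lower_deriv - 2.0 * slope
    val = lower_y + delta_y * (slope * alpha**2 + lower_deriv * beta) / (slope + eps * beta)
    deriv = slope**2 * (upper_deriv * alpha**2 + 2.0 * slope * beta
                        + lower_deriv * (1.0 - alpha)**2) / (slope + eps * beta)**2
    return val, deriv

# With both derivatives equal to the slope, the bin is exactly linear.
val, deriv = rq_bin(0.25, 0.0, 1.0, 0.0, 2.0, 2.0, 2.0)
assert abs(val - 0.5) < 1e-12 and abs(deriv - 2.0) < 1e-12
```

With unequal derivatives the map is curved but stays monotonic, which is what makes the flow invertible.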

def identity_padded_rational_quadratic_spline(
    x: Array, bin_positions: Array, bin_heights: Array,
    derivatives: Array) -> Tuple[Array, Array]:
  """An identity padded rational quadratic spline.

  Args:
    x: the value to evaluate the spline at.
    bin_positions: sorted values of bin x positions of length num_bins+1.
    bin_heights: absolute height of bin of length num_bins-1.
    derivatives: derivatives at internal bin edge of length num_bins-1.
  Returns:
    The value of the spline at x.
    The derivative with respect to x of the spline at x.
  """
  lower_limit = bin_positions[0]
  upper_limit = bin_positions[-1]
  bin_height_sequence = (jnp.atleast_1d(jnp.array(lower_limit)),
                         bin_heights,
                         jnp.atleast_1d(jnp.array(upper_limit)))
  full_bin_heights = jnp.concatenate(bin_height_sequence)
  derivative_sequence = (jnp.ones((1,)),
                         derivatives,
                         jnp.ones((1,)))
  full_derivatives = jnp.concatenate(derivative_sequence)
  in_range = jnp.logical_and(jnp.greater(x, lower_limit),
                             jnp.less(x, upper_limit))
  multiplier = in_range * 1.
  multiplier_complement = jnp.logical_not(in_range) * 1.
  spline_val, spline_deriv = rational_quadratic_spline(x,
                                                       bin_positions,
                                                       full_bin_heights,
                                                       full_derivatives)
  identity_val = x
  identity_deriv = 1.
  val = spline_val * multiplier + multiplier_complement * identity_val
  deriv = spline_deriv * multiplier + multiplier_complement * identity_deriv
  return val, deriv


class AutoregressiveMLP(hk.Module):
  """An MLP which is constrained to have autoregressive dependency."""

  def __init__(self,
               num_hiddens_per_input_dim: List[int],
               include_self_links: bool,
               non_linearity,
               zero_final: bool,
               bias_last: bool,
               name=None):
    super().__init__(name=name)
    self._num_hiddens_per_input_dim = num_hiddens_per_input_dim
    self._include_self_links = include_self_links
    self._non_linearity = non_linearity
    self._zero_final = zero_final
    self._bias_last = bias_last

  def __call__(self, x: Array) -> Array:
    input_dim = x.shape[0]
    hidden_representation = jnp.atleast_2d(x).T
    prev_hid_per_dim = 1
    num_hidden_layers = len(self._num_hiddens_per_input_dim)
    final_index = num_hidden_layers - 1
    for layer_index in range(num_hidden_layers):
      is_last_layer = (final_index == layer_index)
      hid_per_dim = self._num_hiddens_per_input_dim[layer_index]
      name_stub = '_' + str(layer_index)
      layer_shape = (input_dim,
                     prev_hid_per_dim,
                     input_dim,
                     hid_per_dim)
      in_degree = prev_hid_per_dim * input_dim
      if is_last_layer and self._zero_final:
        w_init = jnp.zeros
      else:
        w_init = hk.initializers.TruncatedNormal(1. / np.sqrt(in_degree))
      bias_init = hk.initializers.Constant(jnp.zeros((input_dim, hid_per_dim,)))
      weights = hk.get_parameter(name='weights' + name_stub,
                                 shape=layer_shape,
                                 dtype=x.dtype,
                                 init=w_init)
      if is_last_layer and not self._bias_last:
        biases = jnp.zeros((input_dim, hid_per_dim,))
      else:
        biases = hk.get_parameter(name='biases' + name_stub,
                                  shape=(input_dim, hid_per_dim),
                                  dtype=x.dtype,
                                  init=bias_init)
      if not self._include_self_links and is_last_layer:
        k = -1
      else:
        k = 0
      mask = jnp.tril(jnp.ones((input_dim, input_dim)), k=k)
      masked_weights = mask[:, None, :, None] * weights
      new_hidden_representation = jnp.einsum('ijkl,ij->kl',
                                             masked_weights,
                                             hidden_representation) + biases
      prev_hid_per_dim = hid_per_dim
      if not is_last_layer:
        hidden_representation = self._non_linearity(new_hidden_representation)
      else:
        hidden_representation = new_hidden_representation
    return hidden_representation
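The causal structure comes entirely from the `jnp.tril` mask: with `k=-1` on the last layer (no self-links), output dimension `i` can only see inputs `j < i`. A mask-only illustration in plain Python, independent of the Haiku module above:

```python
n = 3
# tril with k=-1 keeps entry (i, j) only when j < i (strictly lower triangle).
mask = [[1.0 if j < i else 0.0 for j in range(n)] for i in range(n)]
x = [10.0, 20.0, 30.0]
out = [sum(mask[i][j] * x[j] for j in range(n)) for i in range(n)]
# Output 0 sees nothing, output 1 sees x[0], output 2 sees x[0] and x[1].
assert out == [0.0, 10.0, 30.0]
```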

class InverseAutogressiveFlow(object):
  """A generic inverse autoregressive flow.

  See https://arxiv.org/abs/1606.04934

  Takes two functions as input.
  1) autoregressive_func takes an array of (num_dim,)
  and returns an array (num_dim, num_features);
  it is autoregressive in the sense that the output[i, :]
  depends only on the input[:i]. This is not checked.
  2) transform_func takes an array of (num_dim, num_features) and
  an array of (num_dim,) and returns output of shape (num_dim,)
  and a single log_det_jacobian value. This represents the transformation
  acting on the inputs with the given parameters.
  """

  def __init__(self,
               autoregressive_func: Callable[[Array], Array],
               transform_func: Callable[[Array, Array], Tuple[Array, Array]]):
    self._autoregressive_func = autoregressive_func
    self._transform_func = transform_func

  def __call__(self, x: Array) -> Tuple[Array, Array]:
    """x is of shape (num_dim,)."""
    transform_features = self._autoregressive_func(x)
    output, log_abs_det = self._transform_func(transform_features, x)
    return output, log_abs_det
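The contract can be exercised without JAX: any feature function where output `i` depends only on `x[:i]`, paired with an elementwise transform, gives a triangular Jacobian whose log abs determinant comes from the diagonal alone. A pure-Python sketch (function names are illustrative):

```python
def autoregressive_features(x):
    # Feature for dimension i depends only on x[:i] (the autoregressive property).
    return [sum(x[:i]) for i in range(len(x))]

def shift_transform(features, x):
    # Shift-only transform: the Jacobian is the identity, so log|det| = 0.
    return [xi + fi for xi, fi in zip(x, features)], 0.0

x = [1.0, 2.0, 3.0]
out, log_abs_det = shift_transform(autoregressive_features(x), x)
assert out == [1.0, 3.0, 6.0] and log_abs_det == 0.0
```

The spline and affine transformers below follow the same shape contract but contribute a nonzero log determinant per dimension.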

class SplineInverseAutoregressiveFlow(ConfigurableFlow):
  """An inverse autoregressive flow with spline transformer.

  config must contain the following fields:
    num_spline_bins: Number of bins for rational quadratic spline.
    intermediate_hids_per_dim: See AutoregressiveMLP.
    num_layers: Number of layers for AutoregressiveMLP.
    identity_init: Whether to initialize the flow to the identity.
    bias_last: Whether to include biases on the last layer of AutoregressiveMLP.
    lower_lim: Lower limit of active region for rational quadratic spline.
    upper_lim: Upper limit of active region for rational quadratic spline.
    min_bin_size: Minimum bin size for rational quadratic spline.
    min_derivative: Minimum derivative for rational quadratic spline.
  """

  def __init__(self,
               config: ConfigDict):
    super().__init__(config)
    self._num_spline_bins = config.num_spline_bins
    num_spline_parameters = 3 * config.num_spline_bins - 1
    num_hids_per_input_dim = [config.intermediate_hids_per_dim
                             ] * config.num_layers + [
                                 num_spline_parameters
                             ]
    self._autoregressive_mlp = AutoregressiveMLP(
        num_hids_per_input_dim,
        include_self_links=False,
        non_linearity=jax.nn.leaky_relu,
        zero_final=config.identity_init,
        bias_last=config.bias_last)
    self._lower_lim = config.lower_lim
    self._upper_lim = config.upper_lim
    self._min_bin_size = config.min_bin_size
    self._min_derivative = config.min_derivative

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [
        ('num_spline_bins', int),
        ('intermediate_hids_per_dim', int),
        ('num_layers', int),
        ('identity_init', bool),
        ('bias_last', bool),
        ('lower_lim', float),
        ('upper_lim', float),
        ('min_bin_size', float),
        ('min_derivative', float)
    ]
    self._check_members_types(config, expected_members_types)

  def _unpack_spline_params(self, raw_param_vec) -> Tuple[Array, Array, Array]:
    unconst_bin_size_x = raw_param_vec[:self._num_spline_bins]
    unconst_bin_size_y = raw_param_vec[self._num_spline_bins:2 *
                                       self._num_spline_bins]
    unconst_derivs = raw_param_vec[2 * self._num_spline_bins:(
        3 * self._num_spline_bins - 1)]
    return unconst_bin_size_x, unconst_bin_size_y, unconst_derivs

  def _transform_raw_to_spline_params(
      self, raw_param_vec: Array) -> Tuple[Array, Array, Array]:
    unconst_bin_size_x, unconst_bin_size_y, unconst_derivs = self._unpack_spline_params(
        raw_param_vec)

    def normalize_bin_sizes(unconst_bin_sizes: Array) -> Array:
      bin_range = self._upper_lim - self._lower_lim
      reduced_bin_range = (
          bin_range - self._num_spline_bins * self._min_bin_size)
      return jax.nn.softmax(
          unconst_bin_sizes) * reduced_bin_range + self._min_bin_size

    bin_size_x = normalize_bin_sizes(unconst_bin_size_x)
    bin_size_y = normalize_bin_sizes(unconst_bin_size_y)
    # get the x bin positions.
    array_sequence = (jnp.ones((1,)) * self._lower_lim, bin_size_x)
    x_bin_pos = jnp.cumsum(jnp.concatenate(array_sequence))
    # get the y bin positions, ignoring redundant terms.
    stripped_y_bin_pos = self._lower_lim + jnp.cumsum(bin_size_y[:-1])

    def forward_positive_transform(unconst_value: Array,
                                   min_value: Array) -> Array:
      return jax.nn.softplus(unconst_value) + min_value

    def inverse_positive_transform(const_value: Array,
                                   min_value: Array) -> Array:
      return jnp.log(jnp.expm1(const_value - min_value))

    inverted_one = inverse_positive_transform(1., self._min_derivative)
    derivatives = forward_positive_transform(unconst_derivs + inverted_one,
                                             self._min_derivative)
    return x_bin_pos, stripped_y_bin_pos, derivatives

  def _get_spline_values(self,
                         raw_parameters: Array,
                         x: Array) -> Tuple[Array, Array]:
    bat_get_parameters = jax.vmap(self._transform_raw_to_spline_params)
    bat_x_bin_pos, bat_stripped_y, bat_derivatives = bat_get_parameters(
        raw_parameters)
    # Vectorize spline over data and parameters.
    bat_get_spline_vals = jax.vmap(identity_padded_rational_quadratic_spline,
                                   in_axes=[0, 0, 0, 0])
    spline_vals, derivs = bat_get_spline_vals(x, bat_x_bin_pos, bat_stripped_y,
                                              bat_derivatives)
    log_abs_det = jnp.sum(jnp.log(jnp.abs(derivs)))
    return spline_vals, log_abs_det

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    iaf = InverseAutogressiveFlow(self._autoregressive_mlp,
                                  self._get_spline_values)
    return iaf(x)


class AffineInverseAutoregressiveFlow(ConfigurableFlow):
  """An inverse autoregressive flow with affine transformer.

  config must contain the following fields:
    intermediate_hids_per_dim: See AutoregressiveMLP.
    num_layers: Number of layers for AutoregressiveMLP.
    identity_init: Whether to initialize the flow to the identity.
    bias_last: Whether to include biases on the last layer of AutoregressiveMLP.
  """

  def __init__(self,
               config: ConfigDict):
    super().__init__(config)
    num_affine_params = 2
    num_hids_per_input_dim = [config.intermediate_hids_per_dim
                             ] * config.num_layers + [num_affine_params]
    self._autoregressive_mlp = AutoregressiveMLP(
        num_hids_per_input_dim,
        include_self_links=False,
        non_linearity=jax.nn.leaky_relu,
        zero_final=config.identity_init,
        bias_last=config.bias_last)

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [('intermediate_hids_per_dim', int),
                              ('num_layers', int),
                              ('identity_init', bool),
                              ('bias_last', bool)
                              ]
    self._check_members_types(config, expected_members_types)

  def _get_affine_transformation(self,
                                 raw_parameters: Array,
                                 x: Array) -> Tuple[Array, Array]:
    shifts = raw_parameters[:, 0]
    scales = raw_parameters[:, 1] + jnp.ones_like(raw_parameters[:, 1])
    log_abs_det = jnp.sum(jnp.log(jnp.abs(scales)))
    output = x * scales + shifts
    return output, log_abs_det

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    iaf = InverseAutogressiveFlow(self._autoregressive_mlp,
                                  self._get_affine_transformation)
    return iaf(x)


def affine_transformation(params: Array,
                          x: Array) -> Tuple[Array, Array]:
  shift = params[0]
  # Assuming params start as zero, adding 1 to the scale gives the identity transform.
  scale = params[1] + 1.
  output = x * scale + shift
  return output, jnp.log(jnp.abs(scale))
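The `+ 1.` convention means freshly zero-initialized parameters give the identity map with zero log determinant. A scalar pure-Python check of the same arithmetic:

```python
import math

def affine_transformation(params, x):
    shift, scale = params[0], params[1] + 1.0   # zero params -> identity map
    return x * scale + shift, math.log(abs(scale))

y, log_det = affine_transformation([0.0, 0.0], 3.5)
assert y == 3.5 and log_det == 0.0              # identity, log|1| = 0
y, log_det = affine_transformation([1.0, 1.0], 3.5)
assert y == 8.0 and abs(log_det - math.log(2.0)) < 1e-12   # scale 2, shift 1
```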

class RationalQuadraticSpline(ConfigurableFlow):
  """A learnt monotonic rational quadratic spline with identity padding.

  Each input dimension is operated on by a separate spline.
  The spline is initialized to the identity.

  config must contain the following fields:
    num_bins: Number of bins for rational quadratic spline.
    lower_lim: Lower limit of active region for rational quadratic spline.
    upper_lim: Upper limit of active region for rational quadratic spline.
    min_bin_size: Minimum bin size for rational quadratic spline.
    min_derivative: Minimum derivative for rational quadratic spline.
  """

  def __init__(self,
               config: ConfigDict):
    super().__init__(config)
    self._num_bins = config.num_bins
    self._lower_lim = config.lower_lim
    self._upper_lim = config.upper_lim
    self._min_bin_size = config.min_bin_size
    self._min_derivative = config.min_derivative

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [
        ('num_bins', int),
        ('lower_lim', float),
        ('upper_lim', float),
        ('min_bin_size', float),
        ('min_derivative', float)
    ]
    self._check_members_types(config, expected_members_types)

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    """Apply the spline transformation.

    Args:
      x: (num_dim,) DeviceArray representing flow input.
    Returns:
      output: (num_dim,) transformed sample through flow.
      log_prob_out: new Scalar representing log_probability of output.
    """
    num_dim = x.shape[0]
    bin_parameter_shape = (num_dim, self._num_bins)
    # Setup the bin position and height parameters.
    bin_init = hk.initializers.Constant(jnp.ones(bin_parameter_shape))
    unconst_bin_size_x = hk.get_parameter(
        'unconst_bin_size_x',
        shape=bin_parameter_shape,
        dtype=x.dtype,
        init=bin_init)
    unconst_bin_size_y = hk.get_parameter(
        'unconst_bin_size_y',
        shape=bin_parameter_shape,
        dtype=x.dtype,
        init=bin_init)

    def normalize_bin_sizes(unconst_bin_sizes):
      bin_range = self._upper_lim - self._lower_lim
      reduced_bin_range = (bin_range - self._num_bins * self._min_bin_size)
      return jax.nn.softmax(
          unconst_bin_sizes) * reduced_bin_range + self._min_bin_size

    batched_normalize = jax.vmap(normalize_bin_sizes)
    bin_size_x = batched_normalize(unconst_bin_size_x)
    bin_size_y = batched_normalize(unconst_bin_size_y)
    array_sequence = (jnp.ones((num_dim, 1)) * self._lower_lim, bin_size_x)
    bin_positions = jnp.cumsum(jnp.concatenate(array_sequence, axis=1), axis=1)
    # Don't include the redundant bin heights.
    stripped_bin_heights = self._lower_lim + jnp.cumsum(
        bin_size_y[:, :-1], axis=1)

    # Setup the derivative parameters.
    def forward_positive_transform(unconst_value, min_value):
      return jax.nn.softplus(unconst_value) + min_value

    def inverse_positive_transform(const_value, min_value):
      return jnp.log(jnp.expm1(const_value - min_value))

    deriv_parameter_shape = (num_dim, self._num_bins - 1)
    inverted_one = inverse_positive_transform(1., self._min_derivative)
    deriv_init = hk.initializers.Constant(
        jnp.ones(deriv_parameter_shape) * inverted_one)
    unconst_deriv = hk.get_parameter(
        'unconst_deriv',
        shape=deriv_parameter_shape,
        dtype=x.dtype,
        init=deriv_init)
    batched_positive_transform = jax.vmap(
        forward_positive_transform, in_axes=[0, None])
    deriv = batched_positive_transform(unconst_deriv, self._min_derivative)
    # Setup batching then apply the spline.
    batch_padded_rq_spline = jax.vmap(
        identity_padded_rational_quadratic_spline, in_axes=[0, 0, 0, 0])
    output, jac_terms = batch_padded_rq_spline(x, bin_positions,
                                               stripped_bin_heights, deriv)
    log_abs_det_jac = jnp.sum(jnp.log(jac_terms))
    return output, log_abs_det_jac


class ComposedFlows(ConfigurableFlow):
  """Class to compose flows based on a list of configs.

  config should contain flow_configs, a list of flow configs to compose.
  """

  def __init__(self, config: ConfigDict):
    super().__init__(config)
    self._flows = []
    for flow_config in self._config.flow_configs:
      base_flow_class = globals()[flow_config.type]
      flow = base_flow_class(flow_config)
      self._flows.append(flow)

  def _check_configuration(self, config: ConfigDict):
    expected_members_types = [
        ('flow_configs', list),
    ]
    self._check_members_types(config, expected_members_types)

  def transform_and_log_abs_det_jac(self, x: Array) -> Tuple[Array, Array]:
    log_abs_det = 0.
    progress = x
    for flow in self._flows:
      progress, log_abs_det_increment = flow(progress)
      log_abs_det += log_abs_det_increment
    return progress, log_abs_det
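Because each flow returns its own log abs det Jacobian, composing flows only requires threading the sample through the chain and summing the increments; for pure scalings the accumulated value matches log|s1 * s2|. A minimal stand-alone sketch of that accumulation:

```python
import math

def scale_flow(s):
    # Each "flow" returns (output, log|det Jacobian|), mirroring the interface above.
    return lambda x: (s * x, math.log(abs(s)))

flows = [scale_flow(2.0), scale_flow(3.0)]
progress, log_abs_det = 1.0, 0.0
for flow in flows:
    progress, increment = flow(progress)
    log_abs_det += increment

assert progress == 6.0
assert abs(log_abs_det - math.log(6.0)) < 1e-12   # log|2| + log|3| = log|6|
```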
# EMD_ensembled_data_prepare_Test_server.py
# (NahianHasan/Cardiovascular_Disease_Classification_Employing_EMD, MIT)
import pyhht
from pyhht.emd import EMD
from pyhht.visualization import plot_imfs
import numpy as np
import math
from pyhht.utils import extr
from pyhht.utils import get_envelops
import matplotlib.pyplot as plt
from pyhht.utils import inst_freq
import wfdb
import os
import sys
import glob
import argparse
import Config as C  # project configuration module supplying the defaults below (module name assumed)


def EMD_data_preparation(csv_folder, samplenumber, test_list):
    ########### Training and Test Data Splitting ######################
    Ensembled_test = open(csv_folder + 'Ensembled_test.csv', 'w')
    Total_data = 0
    # Testing data preparation
    G = open(test_list, 'r')
    line = G.readline()
    while line:
        Original_signal = []
        splitted = line.split(',')
        for h in range(0, samplenumber):
            Original_signal.append(float(splitted[h]))
        disease = splitted[-1][:-1]
        Original_signal = np.asarray(Original_signal)
        try:
            decomposer = EMD(Original_signal, n_imfs=3, maxiter=3000)
            imfs = decomposer.decompose()
            ensembled_data = []
            # Ensemble: elementwise sum of the first three IMFs.
            for h in range(0, samplenumber):
                ensembled_data.append(imfs[0][h] + imfs[1][h] + imfs[2][h])
            Total_data = Total_data + 1
            string = str(float("{0:.8f}".format(ensembled_data[0])))
            for h in range(1, samplenumber):
                string = string + ',' + str(float("{0:.8f}".format(ensembled_data[h])))
            string = string + ',' + disease + '\n'
            Ensembled_test.write(string)
            print('Test Data = ', Total_data, '---Disease = ', disease)
            line = G.readline()
        except Exception:
            print('Could not write')
            line = G.readline()
    # Ensembled_train.close()
    Ensembled_test.close()
    # F.close()
    G.close()
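The row-writing logic above (format each sample to 8 decimal places, pass it back through `float` to normalize trailing zeros, then append the disease label) can be checked in isolation; `'CHF'` is just an illustrative label:

```python
samples = [0.123456789, -1.0]
row = ",".join(str(float("{0:.8f}".format(s))) for s in samples) + ",CHF\n"
# 0.123456789 rounds to 8 decimals; -1.00000000 collapses back to -1.0.
assert row == "0.12345679,-1.0,CHF\n"
```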

def Main():
    parser = argparse.ArgumentParser(description='ECG data training using EMD Data with separate threading',
                                     usage='Classifying EMD Data',
                                     epilog='Give proper arguments')
    parser.add_argument('-p', "--data_path", metavar='', help="Path to the main database", default=C.data_path)
    parser.add_argument('-c', "--csv_path", metavar='', help="Path to the CSV Folder of EMD Data", default=C.IMF_csv_path)
    # This argument line was truncated in the source; the flag name is reconstructed from its default.
    parser.add_argument('-ie', "--initial_epoch", metavar='', help="Initial epoch of Training", default=C.initial_epoch)
    parser.add_argument('-rc', "--patient_data_path", metavar='', help="Path to the Patient file RECORD.txt", default=C.patient_data_path)
    parser.add_argument('-pd', "--problem_data_path", metavar='', help="Path to the text file where problematic data to be stored", default=C.preoblem_data_path)
    parser.add_argument('-s', "--sample_number", metavar='', help="Number of samples to be taken by each record", type=int, default=C.samplenumber)
    parser.add_argument('-imf', "--number_of_IMFs", metavar='', help="Number of IMFs to be extracted", default=C.number_of_IMFs, type=int, choices=[2, 3, 4, 5, 6])
    parser.add_argument('-spl', "--split_perc", metavar='', help="Splitting percentage of train and test (upper limit)", type=float, default=C.split_perc)
    parser.add_argument('-tel', "--te_list", metavar='', help="A csv file containing the list of test files")
    args = parser.parse_args()

    file_path = args.data_path
    csv_path = args.csv_path
    test_list = args.te_list
    patient_data = args.patient_data_path
    problem_data_file = args.problem_data_path
    samplenumber = int(args.sample_number)
    number_of_IMFs = int(args.number_of_IMFs)
    EMD_data_preparation(csv_path, samplenumber, test_list)


if __name__ == '__main__':
    Main()
# Source: CartPole-PPO/cartpole_ppo.py from link-kut/deeplink_public (MIT)
# Initial framework taken from https://github.com/OctThe16th/PPO-Keras/blob/master/Main.py
import numpy as np
import gym
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
import tensorflow as tf
import matplotlib.pyplot as plt

print(tf.__version__)

ENV = 'CartPole-v0'  # assumed: ENV was undefined in the original; the 195-reward solved threshold below matches CartPole-v0
env = gym.make(ENV)
CONTINUOUS = False
# num_states = env.observation_space.shape[0]
LOSS_CLIPPING = 0.2 # Only implemented clipping for the surrogate loss, paper said it was best
NOISE = 1.0 # Exploration noise
GAMMA = 0.99
BUFFER_SIZE = 256
BATCH_SIZE = 64
NUM_ACTIONS = 2
NUM_STATE = 4
HIDDEN_SIZE = 128
NUM_LAYERS = 2
ENTROPY_LOSS = 1e-3
LR = 1e-4 # Lower lr stabilises training greatly
'''def exponential_average(old, new, b1):
    return old * b1 + (1 - b1) * new'''


def proximal_policy_optimization_loss(advantage, old_prediction):
    def loss(y_true, y_pred):
        prob = y_true * y_pred
        old_prob = y_true * old_prediction
        r = prob / (old_prob + 1e-10)
        return -K.mean(
            K.minimum(
                r * advantage,
                K.clip(r, min_value=1 - LOSS_CLIPPING, max_value=1 + LOSS_CLIPPING) * advantage
            ) + ENTROPY_LOSS * (prob * K.log(prob + 1e-10))
        )
    return loss
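The clipped surrogate above can be checked numerically outside Keras. A plain-Python sketch of the per-sample term min(r·A, clip(r, 1−ε, 1+ε)·A), with hypothetical probability and advantage values:

```python
# Sketch of PPO's clipped-surrogate term for one sample (values hypothetical).
LOSS_CLIPPING = 0.2  # same epsilon as above

def surrogate(prob, old_prob, advantage):
    r = prob / (old_prob + 1e-10)                   # probability ratio
    clipped = min(max(r, 1 - LOSS_CLIPPING), 1 + LOSS_CLIPPING)
    return min(r * advantage, clipped * advantage)  # pessimistic bound

# Ratio near 1: both terms agree, gain is ~A.
print(surrogate(0.5, 0.5, 2.0))
# Ratio ~3 with positive advantage: gain capped at (1 + eps) * A = 2.4.
print(surrogate(0.9, 0.3, 2.0))
```

The cap is what keeps a single policy update from moving the action probabilities too far from the behavior policy that collected the batch.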
class Agent:
    def __init__(self):
        self.critic = self.build_critic()
        self.actor = self.build_actor()
        self.env = gym.make(ENV)
        print(self.env.action_space, 'action_space', self.env.observation_space, 'observation_space')
        self.episode = 0
        self.observation = self.env.reset()
        self.val = False
        self.reward = []
        self.reward_over_time = []
        self.name = self.get_name()
        self.scores = []
        self.episode_reward = 0

    def get_name(self):
        name = 'AllRuns/'
        name += 'discrete/'
        name += ENV
        return name
    def build_actor(self):
        state_input = Input(shape=(NUM_STATE,))
        advantage = Input(shape=(1,))
        old_prediction = Input(shape=(NUM_ACTIONS,))

        x = Dense(units=HIDDEN_SIZE, activation='tanh')(state_input)
        for _ in range(NUM_LAYERS - 1):
            x = Dense(HIDDEN_SIZE, activation='tanh')(x)

        out_actions = Dense(units=NUM_ACTIONS, activation='softmax', name='output')(x)

        model = Model(
            inputs=[state_input, advantage, old_prediction],
            outputs=[out_actions],
            name="actor_model"
        )
        model.compile(
            optimizer=Adam(lr=LR),
            loss=[proximal_policy_optimization_loss(advantage=advantage, old_prediction=old_prediction)]
        )
        model.summary()
        return model

    def build_critic(self):
        state_input = Input(shape=(NUM_STATE,))
        x = Dense(units=HIDDEN_SIZE, activation='tanh')(state_input)
        for _ in range(NUM_LAYERS - 1):
            x = Dense(units=HIDDEN_SIZE, activation='tanh')(x)

        out_value = Dense(units=1)(x)

        model = Model(
            inputs=[state_input],
            outputs=[out_value],
            name="critic_model"
        )
        model.compile(
            optimizer=Adam(lr=LR),
            loss='mse'
        )
        model.summary()
        return model
    def reset_env(self):
        self.episode += 1
        if self.episode % 100 == 0:
            self.val = True
        else:
            self.val = False
        self.observation = self.env.reset()
        self.reward = []
        self.episode_reward = 0

    def get_action(self):
        DUMMY_VALUE = np.zeros((1, 1))
        DUMMY_ACTION = np.zeros((1, NUM_ACTIONS))
        p = self.actor.predict([self.observation.reshape(1, NUM_STATE), DUMMY_VALUE, DUMMY_ACTION])
        if self.val is False:
            action = np.random.choice(NUM_ACTIONS, p=np.nan_to_num(p[0]))
        else:
            action = np.argmax(p[0])
        action_matrix = np.zeros(NUM_ACTIONS)
        action_matrix[action] = 1
        return action, action_matrix, p
    def transform_reward(self):
        # Turn per-step rewards into discounted returns, walking backwards
        for j in range(len(self.reward) - 2, -1, -1):
            self.reward[j] += self.reward[j + 1] * GAMMA
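transform_reward rewrites the reward list in place so that entry j holds the discounted return from step j onward. A standalone sketch with hypothetical rewards:

```python
# Backward discounted-return pass, as in transform_reward above.
GAMMA = 0.99

def discounted_returns(rewards):
    out = list(rewards)
    for j in range(len(out) - 2, -1, -1):
        out[j] += out[j + 1] * GAMMA
    return out

print(discounted_returns([1.0, 1.0, 1.0]))
# index 2: 1.0; index 1: 1 + 0.99*1.0 = 1.99; index 0: 1 + 0.99*1.99 = 2.9701
```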
    def get_batch(self):
        batch = [[], [], [], []]
        tmp_batch = [[], [], []]
        while len(batch[0]) < BUFFER_SIZE:
            action, action_matrix, actor_p = self.get_action()
            observation, reward, done, info = self.env.step(action)
            self.reward.append(reward)
            self.episode_reward = self.episode_reward + reward
            tmp_batch[0].append(self.observation)
            tmp_batch[1].append(action_matrix)
            tmp_batch[2].append(actor_p)
            self.observation = observation
            if done:
                self.transform_reward()
                if self.val is False:
                    for i in range(len(tmp_batch[0])):
                        obs, action, pred = tmp_batch[0][i], tmp_batch[1][i], tmp_batch[2][i]
                        r = self.reward[i]
                        batch[0].append(obs)
                        batch[1].append(action)
                        batch[2].append(pred)
                        batch[3].append(r)
                tmp_batch = [[], [], []]
                #print("EPISODE REWARD ", self.episode_reward)
                self.scores.append(self.episode_reward)
                self.reset_env()
        obs = np.array(batch[0])
        action = np.array(batch[1])
        pred = np.array(batch[2])
        pred = np.reshape(pred, (pred.shape[0], pred.shape[2]))
        reward = np.reshape(np.array(batch[3]), (len(batch[3]), 1))
        return obs[:BUFFER_SIZE], action[:BUFFER_SIZE], pred[:BUFFER_SIZE], reward[:BUFFER_SIZE]
    def run(self):
        total_episodes = 100000
        epochs = 10
        while self.episode < total_episodes:
            if len(self.scores) > 1:
                print("EPISODE ", self.episode, self.scores[-1])
            obs, action, pred, reward = self.get_batch()
            old_prediction = pred
            pred_values = self.critic.predict(obs)
            advantage = reward - pred_values
            # advantage = (advantage - advantage.mean()) / advantage.std()
            actor_loss = self.actor.fit(
                x=[obs, advantage, old_prediction],
                y=[action],
                batch_size=BATCH_SIZE,
                shuffle=True,
                epochs=epochs,
                verbose=0
            )
            critic_loss = self.critic.fit(
                x=[obs],
                y=[reward],
                batch_size=BATCH_SIZE,
                shuffle=True,
                epochs=epochs,
                verbose=0
            )
            if self.episode % 10 == 0:
                print('(episode, score) = ' + str((self.episode, self.episode_reward)))
            # Solved condition
            if len(self.scores) >= 110:
                if np.mean(self.scores[-100:]) >= 195.0:
                    print('Solved after ' + str(self.episode - 100) + ' episodes')
                    break
        plt.plot(self.scores)


if __name__ == '__main__':
    ag = Agent()
    ag.run()
# Source: models/san_lowrank.py from LegionChang/CoTNet (Apache-2.0)
import math
import numpy as np
import torch
from torch import nn as nn
from config import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from .helpers import build_model_with_cfg
from .layers import SelectiveKernelConv, ConvBnAct, create_attn
from .registry import register_model
from .resnet import ResNet
from .layers import Shiftlution
from cupy_layers.aggregation_zeropad import LocalConvolution
def _cfg(url='', **kwargs):
    return {
        'url': url,
        'num_classes': 1000, 'input_size': (3, 224, 224),
        'crop_pct': 0.875, 'interpolation': 'bicubic',
        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
        'classifier': 'fc',
        **kwargs
    }


default_cfgs = {
    'san19': _cfg(url=''),
}


def conv1x1(in_planes, out_planes, stride=1):
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
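conv1x1 is a pointwise convolution: each output pixel is a linear map over the input channels at that location, with no spatial mixing. A minimal plain-Python sketch (hypothetical 2-channel, 2x2 input) makes that explicit:

```python
# A 1x1 convolution is a per-pixel matrix multiply over channels.
# Hypothetical weights: 1 output channel, 2 input channels.
weights = [[2.0, 3.0]]           # out_channels x in_channels
x = [                            # in_channels x H x W
    [[1.0, 0.0], [0.0, 1.0]],
    [[1.0, 1.0], [0.0, 0.0]],
]

def conv1x1_ref(w, inp):
    c_out, c_in = len(w), len(w[0])
    h, wd = len(inp[0]), len(inp[0][0])
    return [[[sum(w[o][i] * inp[i][r][c] for i in range(c_in))
              for c in range(wd)] for r in range(h)] for o in range(c_out)]

print(conv1x1_ref(weights, x))   # [[[5.0, 3.0], [0.0, 2.0]]]
```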
class SAM(nn.Module):
    def __init__(self, in_planes, rel_planes, out_planes, share_planes, kernel_size=3, stride=1, dilation=1):
        super(SAM, self).__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.conv1 = nn.Conv2d(in_planes, rel_planes, kernel_size=1)
        self.conv2 = nn.Conv2d(in_planes, rel_planes, kernel_size=1)
        self.conv3 = nn.Conv2d(in_planes, out_planes, kernel_size=1)
        self.conv_w = nn.Sequential(
            nn.BatchNorm2d(rel_planes * (pow(kernel_size, 2) + 1)), nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * (pow(kernel_size, 2) + 1), out_planes // share_planes, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_planes // share_planes), nn.ReLU(inplace=True),
            nn.Conv2d(out_planes // share_planes, pow(kernel_size, 2) * out_planes // share_planes, kernel_size=1))
        self.unfold_j = nn.Unfold(kernel_size=kernel_size, dilation=dilation, padding=0, stride=stride)
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        #self.aggregation = Aggregation(kernel_size, stride, (dilation * (kernel_size - 1) + 1) // 2, dilation, pad_mode=1)
        self.local_conv = LocalConvolution(out_planes, out_planes, kernel_size=self.kernel_size, stride=1, padding=(self.kernel_size - 1) // 2, dilation=1)

    def forward(self, x):
        x1, x2, x3 = self.conv1(x), self.conv2(x), self.conv3(x)
        x2 = self.unfold_j(self.pad(x2)).view(x.shape[0], -1, x1.shape[2], x1.shape[3])
        w = self.conv_w(torch.cat([x1, x2], 1))
        w = w.view(x1.shape[0], -1, self.kernel_size * self.kernel_size, x1.shape[2], x1.shape[3])
        w = w.unsqueeze(1)
        #x = self.aggregation(x3, w)
        x = self.local_conv(x3, w)
        return x
class SAM_lowRank(nn.Module):
    def __init__(self, in_planes, rel_planes, out_planes, share_planes, kernel_size=3, stride=1, dilation=1):
        super(SAM_lowRank, self).__init__()
        self.rel_planes = rel_planes
        self.out_planes = out_planes
        self.kernel_size, self.stride = kernel_size, stride
        self.pool_size = min(512 // out_planes, 4)
        self.down = nn.AvgPool2d(self.pool_size, self.pool_size, padding=0) if self.pool_size > 1 else None
        self.unfold_j = nn.Unfold(kernel_size=kernel_size, dilation=dilation, padding=0, stride=stride)
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_planes, out_planes + 2 * rel_planes, kernel_size=1, bias=False),
            #nn.BatchNorm2d(out_planes + rel_planes),
            #nn.ReLU(inplace=True),
        )
        self.key_embed = nn.Sequential(
            nn.BatchNorm2d(rel_planes * self.kernel_size * self.kernel_size),
            nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * self.kernel_size * self.kernel_size, rel_planes, 1, bias=False),
        )
        self.conv_w = nn.Sequential(
            nn.BatchNorm2d(rel_planes * 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(rel_planes * 2, out_planes * self.kernel_size * 2, kernel_size=1, bias=False)
        )
        self.local_conv = LocalConvolution(out_planes, out_planes, kernel_size=self.kernel_size, stride=1, padding=(self.kernel_size - 1) // 2, dilation=1)

    def forward(self, x):
        x = self.conv(x)
        q, k, x = torch.split(x, [self.rel_planes, self.rel_planes, self.out_planes], 1)
        x2 = self.unfold_j(self.pad(k))
        x2 = x2.view(x.shape[0], -1, x.shape[2], x.shape[3])
        x2 = self.key_embed(x2)
        qk = torch.cat([q, x2], 1)
        if self.pool_size > 1:
            qk = self.down(qk)
        b, c, qk_hh, qk_ww = qk.size()
        embed = self.conv_w(qk)
        embed_h, embed_w = torch.split(embed, embed.shape[1] // 2, dim=1)
        embed_h = embed_h.view(b, -1, self.kernel_size, 1, qk_hh, qk_ww)
        embed_w = embed_w.view(b, -1, 1, self.kernel_size, qk_hh, qk_ww)
        w = embed_h * embed_w
        w = w.view(x.shape[0], -1, self.kernel_size * self.kernel_size, qk_hh, qk_ww)
        if self.pool_size > 1:
            w = w.view(b, -1, self.kernel_size * self.kernel_size, qk_hh, 1, qk_ww, 1)
            w = w.expand(b, -1, self.kernel_size * self.kernel_size, qk_hh, self.pool_size, qk_ww, self.pool_size).contiguous()
            w = w.view(b, -1, self.kernel_size * self.kernel_size, x.shape[2], x.shape[3])
        w = w.unsqueeze(1)
        x = self.local_conv(x, w)
        return x
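SAM_lowRank predicts a k-by-1 factor (embed_h) and a 1-by-k factor (embed_w) and multiplies them, so each k-by-k aggregation kernel is rank-1 and needs only 2k predicted values instead of k². A plain-Python sketch of that outer-product reconstruction, with hypothetical factor values:

```python
# Rank-1 factorization of a k x k kernel, as in w = embed_h * embed_w above.
k = 3
embed_h = [1.0, 2.0, 3.0]   # column factor (k x 1), hypothetical values
embed_w = [0.5, 1.0, 2.0]   # row factor (1 x k), hypothetical values

# Outer product: kernel[i][j] = embed_h[i] * embed_w[j]
kernel = [[h * w for w in embed_w] for h in embed_h]

print(kernel[1])            # [1.0, 2.0, 4.0]
# 2k = 6 predicted values reconstruct all k*k = 9 kernel weights.
```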
class Bottleneck(nn.Module):
    def __init__(self, in_planes, rel_planes, mid_planes, out_planes, share_planes=8, kernel_size=7, stride=1):
        super(Bottleneck, self).__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.sam = SAM(in_planes, rel_planes, mid_planes, share_planes, kernel_size, stride)
        self.bn2 = nn.BatchNorm2d(mid_planes)
        self.conv = nn.Conv2d(mid_planes, out_planes, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.stride = stride

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(x))
        out = self.relu(self.bn2(self.sam(out)))
        out = self.conv(out)
        out += identity
        return out
class SAN(nn.Module):
    def __init__(self, in_chans, block, layers, kernels, num_classes, **kwargs):
        super(SAN, self).__init__()
        c = 64
        self.conv_in, self.bn_in = conv1x1(3, c), nn.BatchNorm2d(c)
        self.conv0, self.bn0 = conv1x1(c, c), nn.BatchNorm2d(c)
        self.layer0 = self._make_layer(block, c, layers[0], kernels[0])

        c *= 4
        self.conv1, self.bn1 = conv1x1(c // 4, c), nn.BatchNorm2d(c)
        self.layer1 = self._make_layer(block, c, layers[1], kernels[1])

        c *= 2
        self.conv2, self.bn2 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer2 = self._make_layer(block, c, layers[2], kernels[2])

        c *= 2
        self.conv3, self.bn3 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer3 = self._make_layer(block, c, layers[3], kernels[3])

        c *= 2
        self.conv4, self.bn4 = conv1x1(c // 2, c), nn.BatchNorm2d(c)
        self.layer4 = self._make_layer(block, c, layers[4], kernels[4])

        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(c, num_classes)

    def _make_layer(self, block, planes, blocks, kernel_size=7, stride=1):
        layers = []
        for _ in range(0, blocks):
            layers.append(block(planes, planes // 16, planes // 4, planes, 8, kernel_size, stride))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.relu(self.bn_in(self.conv_in(x)))
        x = self.relu(self.bn0(self.layer0(self.conv0(self.pool(x)))))
        x = self.relu(self.bn1(self.layer1(self.conv1(self.pool(x)))))
        x = self.relu(self.bn2(self.layer2(self.conv2(self.pool(x)))))
        x = self.relu(self.bn3(self.layer3(self.conv3(self.pool(x)))))
        x = self.relu(self.bn4(self.layer4(self.conv4(self.pool(x)))))
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
def _create_san(variant, pretrained=False, **kwargs):
    return build_model_with_cfg(
        SAN, variant, default_cfg=default_cfgs[variant], pretrained=pretrained, **kwargs)


@register_model
def san19(pretrained=False, **kwargs):
    #model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 3, 3, 3, 3], **kwargs)
    #model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 5, 5, 5, 5], **kwargs)
    model_args = dict(block=Bottleneck, layers=[3, 3, 4, 6, 3], kernels=[3, 7, 7, 7, 7], **kwargs)
    return _create_san('san19', pretrained, **model_args)
# Source: d1lod/__init__.py from DataONEorg/slinky (Apache-2.0)
from . import people
from . import dataone
from . import util
from . import validator
from . import metadata
from .graph import Graph
from .interface import Interface
# Set default logging handler to avoid "No handler found" warnings.
import logging
try:  # Python 2.7+
    from logging import NullHandler
except ImportError:
    class NullHandler(logging.Handler):
        def emit(self, record):
            pass

logging.getLogger(__name__).addHandler(NullHandler())
# Source: qa/admin.py from thebenwaters/openclickio (MIT)
from django.contrib import admin
from .models import Answer, AnswerOption, AnswerInstance, Question,\
    OpenEndedResponse, ClosedEndedQuestion

# Register your models here.


@admin.register(AnswerOption)
class AnswerOptionAdmin(admin.ModelAdmin):
    list_display = ('id', 'text')


@admin.register(Answer)
class AnswerAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'owner', 'correct_answer')


@admin.register(AnswerInstance)
class AnswerInstanceAdmin(admin.ModelAdmin):
    list_display = ('id', 'created', 'student', 'question', 'answer_option', 'was_correct')

    def was_correct(self, obj):
        my_question = ClosedEndedQuestion.objects.get(pk=obj.question.pk)
        if obj.answer_option == my_question.answer.correct_answer:
            return True
        return False


def activate(modeladmin, request, queryset):
    for obj in queryset:
        obj.activate()


def deactivate(modeladmin, request, queryset):
    for obj in queryset:
        obj.deactivate()


@admin.register(Question)
class QuestionAdmin(admin.ModelAdmin):
    list_display = ('id', 'owner', 'text', 'active')
    actions = [activate, deactivate]


@admin.register(ClosedEndedQuestion)
class ClosedEndedQuestionAdmin(admin.ModelAdmin):
    list_display = ('id', 'owner', 'text', 'answer', 'active')
    actions = [activate, deactivate]
# Source: Bot/Cogs/jisho.py from No767/Rin-Bot (Apache-2.0)
import re
import discord
import requests
import ujson
from discord.ext import commands
from dotenv import load_dotenv
from jamdict import Jamdict

load_dotenv()
jam = Jamdict()


# Use Array Loop Instead
def kanjiv2(search):
    res = jam.lookup(search.replace("\n", " "))
    for c in res.chars:
        return str(c).replace("\n", " ")


def hiragana(search):
    result = jam.lookup(search)
    for word in result.entries:
        m = re.findall("[ぁ-ん]", str(word))
        r = str(m).replace("'", "").replace(",", "").replace(" ", "")
        return str(r)


def katakana(search):
    result = jam.lookup(search.replace("\n", " "))
    for entry in result.entries:
        m = re.findall("[ァ-ン]", str(entry))
        r = (
            str(m)
            .replace("[", " ")
            .replace("]", " ")
            .replace("'", " ")
            .replace(",", "")
            .replace(" ", "")
        )
        return str(r)
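hiragana() and katakana() rely on Unicode-range character classes: [ぁ-ん] spans the hiragana block and [ァ-ン] the katakana block. A standalone sketch of that filtering on a hypothetical mixed Japanese/ASCII string:

```python
import re

# Unicode-range classes used by hiragana()/katakana() above.
text = "すしはSUSHI、テスト"
print(re.findall("[ぁ-ん]", text))   # ['す', 'し', 'は']
print(re.findall("[ァ-ン]", text))   # ['テ', 'ス', 'ト']
```

ASCII letters and punctuation like 、 fall outside both ranges, so each findall keeps only one script.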
# old kanji lookup system. use the function kanjiv2 instead
def kanji(search):
    result = jam.lookup(search)
    result_search = result.text(separator=" | ", with_chars=False, no_id=True)
    m = re.findall(".[一-龯]", result_search)
    all_kanji = str(m).replace(",", "")[1:-1]
    all_kanjiv2 = all_kanji.replace("'", "").replace(" ", "").replace("", ", ")
    return all_kanjiv2


def searcher(search):
    result = jam.lookup(search)
    for word in result.entries:
        return str(word[4:10])


def better_hiragana(search):
    return searcher(search)


def tag(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_tag = str(jisho["data"][0]["tags"])
    return jisho_tag.replace("[", " ").replace("]", " ").replace("'", " ")


def jlpt(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    # read the "jlpt" field (the original copy-pasted "tags" here)
    jisho_jlpt = str(jisho["data"][0]["jlpt"])
    return jisho_jlpt.replace("[", " ").replace("]", " ").replace("'", " ")


def is_common(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jishov1 = str(jisho["data"][0]["is_common"])
    return jishov1.replace("[", " ").replace("]", " ")


def pos(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["parts_of_speech"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def see_also(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["see_also"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def antonyms(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["antonyms"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")


def links(search):
    search = search.replace(" ", "%20")
    link = f"https://jisho.org/api/v1/search/words?keyword={search}"
    r = requests.get(link)
    jisho = ujson.loads(r.text)
    jisho_sorted = jisho["data"][0]["senses"][0]["links"]
    return str(jisho_sorted).replace("[", "").replace("]", "").replace("'", "")
class jisho_dict(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command(name="jisho")
    async def jisho(self, ctx, search: str):
        try:
            result = jam.lookup(search)
            link = f"https://jisho.org/api/v1/search/words?keyword={search}"
            r = requests.get(link)
            jisho = ujson.loads(r.text)
            res = jam.lookup(search.replace("\n", " "))
            embedVar = discord.Embed()
            embedVar.add_field(
                name="Kanji",
                value=[str(c).replace("'", "") for c in res.chars],
                inline=False,
            )
            embedVar.add_field(
                name="Position of Speech (POS)", value=pos(search), inline=False
            )
            embedVar.add_field(name="Is Common?",
                               value=is_common(search), inline=False)
            embedVar.add_field(
                name="Other Info",
                value=f"Tags >> {tag(search)}\nJLPT >> {jlpt(search)}\nAntonyms >> {antonyms(search)}\nSee Also >> {see_also(search)}\nLinks >> {links(search)}",
                inline=False,
            )
            embedVar.add_field(
                name="Attributions",
                value=f"JMDict >> {jisho['data'][0]['attribution']['jmdict']}\nJMNEDict >> {jisho['data'][0]['attribution']['jmnedict']}\nDBPedia >> {jisho['data'][0]['attribution']['dbpedia']}",
                inline=False,
            )
            embedVar.add_field(
                name="HTTP Status (Jisho API)",
                value=f"{jisho['meta']['status']}",
                inline=False,
            )
            embedVar.description = str([str(word[0])
                                        for word in result.entries])
            await ctx.send(embed=embedVar)
        except Exception as e:
            embed_discord = discord.Embed()
            embed_discord.description = (
                f"An error has occurred. Please try again\nReason: {e}"
            )
            await ctx.send(embed=embed_discord)

    @jisho.error
    async def on_message_error(
        self, ctx: commands.Context, error: commands.CommandError
    ):
        if isinstance(error, commands.MissingRequiredArgument):
            embed_discord = discord.Embed()
            embed_discord.description = f"Missing a required argument: {error.param}"
            await ctx.send(embed=embed_discord)


def setup(bot):
    bot.add_cog(jisho_dict(bot))
# Source: cs-224n/assn2/utils/__init__.py from PranjalGupta2199/nlp-dl (MIT)
from . import gradcheck, treebank, utils
# Source: simple_import/forms.py from smcoll/django-simple-import (BSD-3-Clause)
from django import forms
from django.contrib.contenttypes.models import ContentType

from simple_import.models import ImportLog, ColumnMatch, RelationalMatch


class ImportForm(forms.ModelForm):
    class Meta:
        model = ImportLog
        fields = ('name', 'import_file', 'import_type')
    model = forms.ModelChoiceField(ContentType.objects.all())


class MatchForm(forms.ModelForm):
    class Meta:
        model = ColumnMatch
        exclude = ['header_position']


class MatchRelationForm(forms.ModelForm):
    class Meta:
        model = RelationalMatch
        widgets = {
            'related_field_name': forms.Select(choices=(('', '---------'),))
        }
# Source: src/main/python/utils/handle_null.py from meowpunch/bobsim-research (MIT)
import sys
from functools import reduce

import pandas as pd

from utils.logging import init_logger
from utils.s3_manager.manage import S3Manager


class NullHandler:
    def __init__(self, strategy=None, df=None):
        self.logger = init_logger()

        self.input_df = df
        self.strategy = strategy
        self.fillna_method = {
            "drop": self.fillna_with_drop,
            "zero": self.fillna_with_zero,
            "linear": self.fillna_with_linear,
        }

    @staticmethod
    def fillna_with_linear(df: pd.DataFrame):
        # fill nan by linear formula.
        return df.interpolate(method='linear', limit_direction='both')

    @staticmethod
    def fillna_with_zero(df: pd.DataFrame):
        return df.fillna(value=0)

    @staticmethod
    def fillna_with_drop(df: pd.DataFrame):
        return df.dropna(axis=0)

    @staticmethod
    def missing_values(df: pd.DataFrame):
        # TODO: specify
        df_null = df.isna().sum()
        if df_null.sum() > 0:
            filtered = df_null[df_null.map(lambda x: x > 0)]
            # self.logger.info("missing values: \n {}".format(filtered))
            return filtered
        else:
            # self.logger.info("no missing value at raw material price")
            return None

    def get_columns_list(self):
        # TODO: in order not to scan df twice, combine this method with fill_nan
        if len(self.strategy.values()) == 1:
            return list(self.strategy.values())[0]
        else:
            return reduce(
                lambda x, y: x + y,
                self.strategy.values()
            )
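get_columns_list flattens every column list in the strategy dict into a single list via reduce. A standalone sketch with a hypothetical strategy mapping:

```python
from functools import reduce

# Hypothetical fill strategy: method name -> columns it applies to.
strategy = {"linear": ["temp", "wind"], "drop": ["humidity"]}

# Concatenate all column lists, as get_columns_list does above.
columns = reduce(lambda x, y: x + y, strategy.values())
print(columns)   # ['temp', 'wind', 'humidity']
```

Because dicts preserve insertion order, the flattened list keeps the columns grouped by strategy.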
    @staticmethod
    def fill_by_method(method, df):
        filled = method(df)
        if isinstance(filled, pd.Series):
            return filled.to_frame()
        return filled

    def fill_nan(self):
        """
        :return: list of pd DataFrame filled nan by each method
        """
        method_list = self.strategy.keys()
        return list(map(
            lambda name: self.fill_by_method(
                self.fillna_method[name], self.input_df.filter(items=self.strategy[name])
            ), method_list
        ))

    def process(self):
        """
        by strategy, fill nan in df at once
        :return: pd DataFrame after fill nan
        """
        df_list = self.fill_nan()
        return pd.concat(
            df_list + [self.input_df.drop(columns=self.get_columns_list(), axis=1)],
            axis=1, join="inner"
        )


def load(filename="2014-2020"):
    """
    fetch DataFrame and astype and filter by columns
    :return: pd DataFrame
    """
    manager = S3Manager(bucket_name="production-bobsim")
    df = manager.fetch_df_from_csv(
        key="public_data/open_data_terrestrial_weather/origin/csv/(unknown).csv".format(filename=filename)
    )
    # TODO: no use index to get first element.
    return df[0]


def main():
    """
    df, key = build_origin_weather(date="201908")
    print(df.info())
    t = NullHandler(
        strategy={
            "linear": ["최대 풍속(m/s)"],
            "drop": ["최저기온(°C)"]
        }, df=df
    )
    print(t.process())
    """
    pass


if __name__ == '__main__':
    main()
d8e67ae78b6e8735abac8eb28c78858b399f444d | 1,207 | py | Python | scripts/executor_action.py | rezhajulio/azkaban | 974e2e45f4e2f1cd14a3e160f9326aa067b606c2 | [
"Apache-2.0"
] | 3 | 2019-12-19T00:04:36.000Z | 2020-05-07T02:54:56.000Z | scripts/executor_action.py | rezhajulio/azkaban | 974e2e45f4e2f1cd14a3e160f9326aa067b606c2 | [
"Apache-2.0"
] | null | null | null | scripts/executor_action.py | rezhajulio/azkaban | 974e2e45f4e2f1cd14a3e160f9326aa067b606c2 | [
"Apache-2.0"
] | 3 | 2018-03-15T04:54:50.000Z | 2019-07-15T06:33:58.000Z | #!/usr/bin/python3
import requests
import sys
import time
from wait_for_port_ready import wait_for_port_ready
import traceback
import json
action = sys.argv[1]
assert action in ('activate', 'deactivate', 'getStatus', 'shutdown')
url = 'http://localhost:12321/executor?action={action}'.format(action=action)
if action == 'getStatus':
r = requests.post(url, timeout=5)
assert r.status_code == 200
assert json.loads(r.text)['isActive'] == 'true'
else:
wait_for_port_ready(12321, 15)
retries = 0
retry_count = 15
success = False
while not success:
try:
r = requests.post(url, timeout=5)
print(r.status_code)
print(r.text)
if r.json()['status'] == 'success':
success = True
if not success:
raise Exception('Attempt to ' + action + ' executor failed')
except Exception as ex:
print(traceback.format_exc())
sys.stdout.flush()
retries += 1
if retries > retry_count:
raise Exception('Attempt to ' + action + ' executor failed')
print('waiting for 1 second...')
time.sleep(1)
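The try/except/sleep loop above is a common shape; factored into a generic helper it reads as below. This is a hedged sketch for illustration only — the `retry` helper and its signature are mine, not part of the original script:

```python
import time


def retry(fn, attempts=15, delay=1):
    """Call fn() until it returns; re-raise the last error after `attempts` tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as ex:  # broad catch, mirroring the script above
            last_error = ex
            time.sleep(delay)
    raise last_error


# Demo: a callable that fails twice, then succeeds on the third call.
calls = {"count": 0}


def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise Exception("executor not ready yet")
    return "success"


print(retry(flaky, attempts=5, delay=0))  # success
```

With `attempts=15` and `delay=1` this roughly matches the retry budget the script above allows itself.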
| 24.632653 | 77 | 0.591549 | 145 | 1,207 | 4.827586 | 0.462069 | 0.03 | 0.047143 | 0.068571 | 0.254286 | 0.191429 | 0.122857 | 0 | 0 | 0 | 0 | 0.029308 | 0.293289 | 1,207 | 48 | 78 | 25.145833 | 0.791325 | 0.014085 | 0 | 0.114286 | 0 | 0 | 0.163162 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 1 | 0 | false | 0 | 0.171429 | 0 | 0.171429 | 0.114286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8e730bb375940f9f3582e34ab7d4585a0799ed3 | 80 | py | Python | T26-04/program.py | tiagomendes7/SSof-Project1920 | 9b026763a492683ab982aa769f2a381ed02930aa | [
"MIT"
] | 2 | 2019-11-20T19:26:07.000Z | 2019-11-22T00:42:23.000Z | T26-04/program.py | tiagomendes7/SSof-Project1920 | 9b026763a492683ab982aa769f2a381ed02930aa | [
"MIT"
] | 2 | 2019-11-28T05:21:24.000Z | 2019-11-28T05:21:58.000Z | T26-04/program.py | tiagomendes7/SSof-Project1920 | 9b026763a492683ab982aa769f2a381ed02930aa | [
"MIT"
] | 25 | 2019-11-27T01:40:56.000Z | 2019-12-04T23:38:59.000Z | src = source()
src2 = src
src3 = src2
src5 = src2
src4 = src3
src4 = sink(src4)
| 11.428571 | 17 | 0.65 | 13 | 80 | 4 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.145161 | 0.225 | 80 | 6 | 18 | 13.333333 | 0.693548 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
d8e84bbf2e79e843bed9f808b75020d51a48d30f | 3,855 | py | Python | faker/providers/barcode/__init__.py | StabbarN/faker | 57882ff73255cb248d8f995b2abfce5cfee45ab3 | [
"MIT"
] | 4 | 2020-09-23T15:48:00.000Z | 2021-02-25T07:55:23.000Z | faker/providers/barcode/__init__.py | StabbarN/faker | 57882ff73255cb248d8f995b2abfce5cfee45ab3 | [
"MIT"
] | 7 | 2021-03-05T23:08:02.000Z | 2022-03-12T00:47:19.000Z | faker/providers/barcode/__init__.py | StabbarN/faker | 57882ff73255cb248d8f995b2abfce5cfee45ab3 | [
"MIT"
] | 1 | 2022-03-10T16:25:33.000Z | 2022-03-10T16:25:33.000Z | from .. import BaseProvider
localized = True
class Provider(BaseProvider):
# Source of GS1 country codes: https://gs1.org/standards/id-keys/company-prefix
local_prefixes = ()
def _ean(self, length=13, prefixes=()):
if length not in (8, 13):
raise AssertionError("length can only be 8 or 13")
code = [self.random_digit() for _ in range(length - 1)]
if prefixes:
prefix = self.random_element(prefixes)
code[:len(prefix)] = map(int, prefix)
if length == 8:
weights = [3, 1, 3, 1, 3, 1, 3]
elif length == 13:
weights = [1, 3, 1, 3, 1, 3, 1, 3, 1, 3, 1, 3]
weighted_sum = sum(x * y for x, y in zip(code, weights))
check_digit = (10 - weighted_sum % 10) % 10
code.append(check_digit)
return ''.join(str(x) for x in code)
def ean(self, length=13, prefixes=()):
"""Generate an EAN barcode of the specified ``length``.
The value of ``length`` can only be ``8`` or ``13`` (default) which will
create an EAN-8 or an EAN-13 barcode respectively.
If ``prefixes`` are specified, the result will begin with one of the sequences in ``prefixes``.
:sample: length=13
:sample: length=8
:sample: prefixes=('00',)
:sample: prefixes=('45', '49')
"""
return self._ean(length, prefixes=prefixes)
def ean8(self, prefixes=()):
"""Generate an EAN-8 barcode.
This method uses :meth:`ean() <faker.providers.barcode.Provider.ean>` under the
hood with the ``length`` argument explicitly set to ``8``.
If ``prefixes`` are specified, the result will begin with one of the sequences in ``prefixes``.
:sample:
:sample: prefixes=('00',)
:sample: prefixes=('45', '49')
"""
return self._ean(8, prefixes=prefixes)
def ean13(self, prefixes=()):
"""Generate an EAN-13 barcode.
If ``prefixes`` are specified, the result will begin with one of the sequences in ``prefixes``.
This method uses :meth:`ean() <faker.providers.barcode.Provider.ean>` under the
hood with the ``length`` argument explicitly set to ``13``.
.. note::
Codes starting with a leading zero are treated specially in some barcode readers.
For more information about compatibility with UPC-A codes, see
:meth:`en_US.ean13() <faker.providers.barcode.en_US.Provider.ean13>`
:sample:
:sample: prefixes=('00',)
:sample: prefixes=('45', '49')
"""
return self._ean(13, prefixes=prefixes)
def localized_ean(self, length=13):
"""Generate a localized EAN barcode of the specified ``length``.
The value of ``length`` can only be ``8`` or ``13`` (default) which will
create an EAN-8 or an EAN-13 barcode respectively.
This method uses :meth:`ean() <faker.providers.barcode.Provider.ean>` under the
hood with the ``prefixes`` argument explicitly set to ``self.local_prefixes``.
:sample:
:sample: length=13
:sample: length=8
"""
return self._ean(length, prefixes=self.local_prefixes)
def localized_ean8(self):
"""Generate a localized EAN-8 barcode.
This method uses :meth:`localized_ean() <faker.providers.barcode.Provider.ean>` under the
hood with the ``length`` argument explicitly set to ``8``.
:sample:
"""
return self.localized_ean(8)
def localized_ean13(self):
"""Generate a localized EAN-13 barcode.
This method uses :meth:`localized_ean() <faker.providers.barcode.Provider.ean>` under the
hood with the ``length`` argument explicitly set to ``13``.
:sample:
"""
return self.localized_ean(13)
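The weighting `_ean()` implements above (alternating 1,3 weights over the first twelve digits, check digit = complement of the weighted sum mod 10) can be verified in isolation against published EAN-13 codes. A minimal standalone sketch — the `ean13_check_digit` helper is mine, not part of the provider:

```python
def ean13_check_digit(first_12_digits):
    """Check digit for an EAN-13, using the same 1,3,1,3,... weights as _ean()."""
    weights = [1, 3] * 6
    weighted_sum = sum(int(d) * w for d, w in zip(first_12_digits, weights))
    return (10 - weighted_sum % 10) % 10


# 4006381333931 is a commonly cited valid EAN-13; its check digit is 1.
print(ean13_check_digit("400638133393"))  # 1
```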
| 33.232759 | 102 | 0.597925 | 500 | 3,855 | 4.562 | 0.224 | 0.007891 | 0.010522 | 0.012275 | 0.64007 | 0.556335 | 0.512495 | 0.494082 | 0.494082 | 0.494082 | 0 | 0.038242 | 0.274189 | 3,855 | 115 | 103 | 33.521739 | 0.776984 | 0.554086 | 0 | 0 | 0 | 0 | 0.019316 | 0 | 0 | 0 | 0 | 0 | 0.032258 | 1 | 0.225806 | false | 0 | 0.032258 | 0 | 0.548387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 |
d8e8a2245da2f5f3c3aaee9fd554b9ee96a551e9 | 22,494 | py | Python | axonius_api_client/http.py | geransmith/axonius_api_client | 09fd564d62f0ddf7aa44db14a509eaafaf0c930f | [
"MIT"
] | null | null | null | axonius_api_client/http.py | geransmith/axonius_api_client | 09fd564d62f0ddf7aa44db14a509eaafaf0c930f | [
"MIT"
] | null | null | null | axonius_api_client/http.py | geransmith/axonius_api_client | 09fd564d62f0ddf7aa44db14a509eaafaf0c930f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""HTTP client."""
import logging
import warnings
from urllib.parse import urlparse, urlunparse
import requests
from .constants import (
LOG_LEVEL_HTTP,
MAX_BODY_LEN,
REQUEST_ATTR_MAP,
RESPONSE_ATTR_MAP,
TIMEOUT_CONNECT,
TIMEOUT_RESPONSE,
)
from .exceptions import HttpError
from .logs import get_obj_log, set_log_level
from .tools import join_url, json_reload, listify, path_read
from .version import __version__
InsecureRequestWarning = requests.urllib3.exceptions.InsecureRequestWarning
class Http:
"""HTTP client wrapper around :obj:`requests.Session`."""
def __init__(
self,
url,
connect_timeout=TIMEOUT_CONNECT,
response_timeout=TIMEOUT_RESPONSE,
certpath=None,
certwarn=True,
certverify=False,
cert_client_both=None,
cert_client_cert=None,
cert_client_key=None,
http_proxy=None,
https_proxy=None,
save_last=True,
save_history=False,
log_level=LOG_LEVEL_HTTP,
log_level_urllib="warning",
log_request_attrs=None,
log_response_attrs=None,
log_request_body=False,
log_response_body=False,
):
"""HTTP client wrapper around :obj:`requests.Session`.
Notes:
* If certpath is supplied, certverify is ignored
* private key supplied to cert_client_key or cert_client_both
can **NOT** be password encrypted
Args:
url (:obj:`str` or :obj:`ParserUrl`): URL to connect to
connect_timeout (:obj:`int`, optional):
default :data:`TIMEOUT_CONNECT` - seconds to
wait for connections to open to :attr:`url`
response_timeout (:obj:`int`, optional):
default :data:`TIMEOUT_RESPONSE` - seconds to
wait for responses from :attr:`url`
certpath (:obj:`str` or :obj:`pathlib.Path`, optional): default ``None`` -
path to CA bundle file to use when verifying certs offered by :attr:`url`
instead of the system CA bundle
certwarn (:obj:`bool`, optional): default ``True`` - show warnings from
requests about certs offered by :attr:`url` that are self signed:
* if ``True`` show warning only the first time it happens
* if ``False`` never show warning
* if ``None`` show warning every time it happens
certverify (:obj:`bool`, optional): default ``False`` - control
validation of certs offered by :attr:`url`:
* if ``True`` raise exception if cert is invalid/self-signed
* if ``False`` only raise exception if cert is invalid
cert_client_both (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to cert file containing both the private key and
cert to offer to :attr:`url`
cert_client_cert (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to cert file to offer to :attr:`url`
(*must also supply cert_client_key*)
cert_client_key (:obj:`str` or :obj:`pathlib.Path`, optional):
default ``None`` - path to private key file for cert_client_cert to
offer to :attr:`url` (*must also supply cert_client_cert*)
http_proxy (:obj:`str`, optional): default ``None`` - proxy to use
when making http requests to :attr:`url`
https_proxy (:obj:`str`, optional): default ``None`` - proxy to use
when making https requests to :attr:`url`
save_last (:obj:`bool`, optional): default ``True`` -
* if ``True`` save request to :attr:`LAST_REQUEST` and
response to :attr:`LAST_RESPONSE`
* if ``False`` do not save request to :attr:`LAST_REQUEST` and
response to :attr:`LAST_RESPONSE`
save_history (:obj:`bool`, optional): default ``False`` -
* if ``True`` append responses to :attr:`HISTORY`
* if ``False`` do not append responses to :attr:`HISTORY`
log_level (:obj:`str`):
default :data:`axonius_api_client.LOG_LEVEL_HTTP` -
logging level to use for this objects logger
log_level_urllib (:obj:`str`): default ``"warning"`` -
logging level to use for this urllib logger
log_request_attrs (:obj:`bool`): default ``None`` - control logging
of request attributes:
* if ``True``, log request attributes defined in
:data:`axonius_api_client.LOG_REQUEST_ATTRS_VERBOSE`
* if ``False``, log request attributes defined in
:data:`axonius_api_client.LOG_REQUEST_ATTRS_BRIEF`
* if ``None``, do not log any request attributes
log_response_attrs (:obj:`bool`): default ``None`` - control logging
of response attributes:
* if ``True``, log response attributes defined in
:data:`axonius_api_client.LOG_RESPONSE_ATTRS_VERBOSE`
* if ``False``, log response attributes defined in
:data:`axonius_api_client.LOG_RESPONSE_ATTRS_BRIEF`
* if ``None``, do not log any response attributes
log_request_body (:obj:`bool`): default ``False`` - control logging
of request bodies:
* if ``True``, log request bodies
* if ``False``, do not log request bodies
log_response_body (:obj:`bool`): default ``False`` - control logging
of response bodies:
* if ``True``, log response bodies
* if ``False``, do not log response bodies
Raises:
:exc:`HttpError`: if either cert_client_cert or cert_client_key
are supplied, and the other is not supplied
:exc:`HttpError`: if any of cert_path, cert_client_cert,
cert_client_key, or cert_client_both are supplied and the file does
not exist
"""
self.LOG = get_obj_log(obj=self, level=log_level)
""":obj:`logging.Logger`: Logger for this object."""
if isinstance(url, ParserUrl):
self.URLPARSED = url
else:
self.URLPARSED = ParserUrl(url=url, default_scheme="https")
self.url = self.URLPARSED.url
""":obj:`str`: URL to connect to"""
self.LAST_REQUEST = None
""":obj:`requests.PreparedRequest`: last request sent"""
self.LAST_RESPONSE = None
""":obj:`requests.Response`: last response received"""
self.HISTORY = []
""":obj:`list` of :obj:`requests.Response`: all responses received."""
self.SAVE_LAST = save_last
""":obj:`bool`: save requests to :attr:`LAST_REQUEST` and responses
to :attr:`LAST_RESPONSE`"""
self.SAVEHISTORY = save_history
""":obj:`bool`: Append all responses to :attr:`HISTORY`"""
self.CONNECT_TIMEOUT = connect_timeout
""":obj:`int`: seconds to wait for connections to open to :attr:`url`"""
self.RESPONSE_TIMEOUT = response_timeout
""":obj:`int`: seconds to wait for responses from :attr:`url`"""
self.session = requests.Session()
""":obj:`requests.Session`: session object to use"""
self.LOG_REQUEST_BODY = log_request_body
""":obj:`bool`: Log the full request body."""
self.LOG_RESPONSE_BODY = log_response_body
""":obj:`bool`: Log the full response body."""
self.log_request_attrs = log_request_attrs
self.log_response_attrs = log_response_attrs
self.session.proxies = {}
self.session.proxies["https"] = https_proxy
self.session.proxies["http"] = http_proxy
if certpath:
path_read(obj=certpath, binary=True)
self.session.verify = certpath
else:
self.session.verify = certverify
if cert_client_both:
path_read(obj=cert_client_both, binary=True)
self.session.cert = str(cert_client_both)
elif cert_client_cert or cert_client_key:
if not all([cert_client_cert, cert_client_key]):
error = (
"You must supply both a 'cert_client_cert' and 'cert_client_key'"
" or use 'cert_client_both'!"
)
raise HttpError(error)
path_read(obj=cert_client_cert, binary=True)
path_read(obj=cert_client_key, binary=True)
self.session.cert = (str(cert_client_cert), str(cert_client_key))
if certwarn is True:
warnings.simplefilter("once", InsecureRequestWarning)
elif certwarn is False:
warnings.simplefilter("ignore", InsecureRequestWarning)
urllog = logging.getLogger("urllib3.connectionpool")
set_log_level(obj=urllog, level=log_level_urllib)
def __call__(
self,
path=None,
route=None,
method="get",
data=None,
params=None,
headers=None,
json=None,
files=None,
# fmt: off
**kwargs
# fmt: on
):
"""Create, prepare, and then send a request using :attr:`session`.
Args:
path (:obj:`str`, optional): default ``None`` - path to append to
:attr:`url`
route (:obj:`str`, optional): default ``None`` - route to append to
:attr:`url`
method (:obj:`str`, optional): default ``"get"`` - method to use
data (:obj:`str`, optional): default ``None`` - body to send
params (:obj:`dict`, optional): default ``None`` - parameters to url encode
headers (:obj:`dict`, optional): default ``None`` - headers to send
json (:obj:`dict`, optional): default ``None`` - obj to encode as json
files (:obj:`tuple` of :obj:`tuple`, optional): default ``None`` - files to
send
**kwargs:
overrides for object attributes
* connect_timeout (:obj:`int`): default :attr:`CONNECT_TIMEOUT` -
seconds to wait for connection to open to :attr:`url`
* response_timeout (:obj:`int`): default :attr:`RESPONSE_TIMEOUT` -
seconds to wait for a response from :attr:`url`
* proxies (:obj:`dict`): default ``None`` -
use custom proxies instead of proxies defined in :attr:`session`
* verify (:obj:`bool` or :obj:`str`): default ``None`` - use custom
verification of cert offered by :attr:`url` instead of verification
defined in :attr:`session`
* cert (:obj:`str`): default ``None`` - use custom
client cert to offer to :attr:`url` cert defined in :attr:`session`
Returns:
:obj:`requests.Response`: raw response object
"""
url = join_url(self.url, path, route)
headers = headers or {}
headers.setdefault("User-Agent", self.user_agent)
request = requests.Request(
url=url,
method=method,
data=data,
headers=headers,
params=params,
json=json,
files=files or [],
)
prepped_request = self.session.prepare_request(request=request)
prepped_request.body_size = len(prepped_request.body or "")
if self.SAVE_LAST:
self.LAST_REQUEST = prepped_request
if self.log_request_attrs:
lattrs = ", ".join(self.log_request_attrs).format(request=prepped_request)
self.LOG.debug(f"REQUEST ATTRS: {lattrs}")
send_args = self.session.merge_environment_settings(
url=prepped_request.url,
proxies=kwargs.get("proxies", {}),
stream=kwargs.get("stream", None),
verify=kwargs.get("verify", None),
cert=kwargs.get("cert", None),
)
send_args["request"] = prepped_request
send_args["timeout"] = (
kwargs.get("connect_timeout", self.CONNECT_TIMEOUT),
kwargs.get("response_timeout", self.RESPONSE_TIMEOUT),
)
if self.LOG_REQUEST_BODY:
self.log_body(body=prepped_request.body, body_type="REQUEST")
response = self.session.send(**send_args)
response.body_size = len(response.text or "")
if self.SAVE_LAST:
self.LAST_RESPONSE = response
if self.SAVEHISTORY:
self.HISTORY.append(response)
if self.log_response_attrs:
lattrs = ", ".join(self.log_response_attrs).format(response=response)
self.LOG.debug(f"RESPONSE ATTRS: {lattrs}")
if self.LOG_RESPONSE_BODY:
self.log_body(body=response.text, body_type="RESPONSE")
return response
def __str__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return "{c.__module__}.{c.__name__}(url={url!r})".format(
c=self.__class__, url=self.url
)
def __repr__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return self.__str__()
@property
def user_agent(self):
"""Value to use in User-Agent header.
Returns:
:obj:`str`: user agent string
"""
return f"{__name__}.{self.__class__.__name__}/{__version__}"
@property
def log_request_attrs(self):
"""Get the request attributes that should be logged."""
return self._get_log_attrs("request")
@log_request_attrs.setter
def log_request_attrs(self, value):
"""Set the request attributes that should be logged."""
attr_map = REQUEST_ATTR_MAP
attr_type = "request"
self._set_log_attrs(attr_map=attr_map, attr_type=attr_type, value=value)
@property
def log_response_attrs(self):
"""Get the response attributes that should be logged."""
return self._get_log_attrs("response")
@log_response_attrs.setter
def log_response_attrs(self, value):
"""Set the response attributes that should be logged."""
attr_map = RESPONSE_ATTR_MAP
attr_type = "response"
self._set_log_attrs(attr_map=attr_map, attr_type=attr_type, value=value)
def _get_log_attrs(self, attr_type):
return getattr(self, "_LOG_ATTRS", {}).get(attr_type, [])
def _set_log_attrs(self, attr_map, attr_type, value):
if not hasattr(self, "_LOG_ATTRS"):
self._LOG_ATTRS = {"response": [], "request": []}
value = [x.lower().strip() for x in listify(value)]
if not value:
self._LOG_ATTRS[attr_type] = []
return
log_attrs = self._LOG_ATTRS[attr_type]
if "all" in value:
for k, v in attr_map.items():
entry = f"{k}={v}"
if entry not in log_attrs:
log_attrs.append(entry)
return
for item in value:
if item in attr_map:
value = attr_map[item]
entry = f"{item}={value}"
if entry not in log_attrs:
log_attrs.append(entry)
def log_body(self, body, body_type):
"""Log a request or response body."""
body = body or ""
body = json_reload(obj=body, error=False, trim=MAX_BODY_LEN)
self.LOG.debug(f"{body_type} BODY:\n{body}")
class ParserUrl:
"""Parse a URL and ensure it has the necessary bits."""
def __init__(self, url, default_scheme="https"):
"""Parse a URL and ensure it has the necessary bits.
Args:
url (:obj:`str`): URL to parse
default_scheme (:obj:`str`, optional): default ``"https"`` - default
scheme to use if url does not contain a scheme
Raises:
:exc:`HttpError`:
if parsed URL winds up without a hostname, port, or scheme.
"""
self._init_url = url
""":obj:`str`: initial URL provided"""
self._init_scheme = default_scheme
""":obj:`str`: default scheme provided"""
self._init_parsed = urlparse(url)
""":obj:`urllib.parse.ParseResult`: first pass of parsing URL"""
self.parsed = self.reparse(
parsed=self._init_parsed, default_scheme=default_scheme
)
""":obj:`urllib.parse.ParseResult`: second pass of parsing URL"""
for part in ["hostname", "port", "scheme"]:
if not getattr(self.parsed, part, None):
error = (
f"Parsed URL into {self.parsed_str!r} and no {part!r} provided"
f" in URL {url!r}"
)
raise HttpError(error)
def __str__(self):
"""Show object info.
Returns:
:obj:`str`
"""
cls = self.__class__
return f"{cls.__module__}.{cls.__name__}({self.parsed_str})"
def __repr__(self):
"""Show object info.
Returns:
:obj:`str`
"""
return self.__str__()
@property
def hostname(self):
"""Hostname part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: hostname value
"""
return self.parsed.hostname
@property
def port(self):
"""Port part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`int`: port value
"""
return int(self.parsed.port)
@property
def scheme(self):
"""Scheme part from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: scheme value
"""
return self.parsed.scheme
@property
def url(self):
"""Get scheme, hostname, and port from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: schema, hostname, and port unparsed values
"""
return self.unparse_base(parsed_result=self.parsed)
@property
def url_full(self):
"""Get full URL from :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: full unparsed url
"""
return self.unparse_all(parsed_result=self.parsed)
@property
def parsed_str(self):
"""Get a str value of :attr:`ParserUrl.parsed`.
Returns:
:obj:`str`: str value of :attr:`ParserUrl.parsed`
"""
parsed = getattr(self, "parsed", None)
attrs = [
"scheme",
"netloc",
"hostname",
"port",
"path",
"params",
"query",
"fragment",
]
atmpl = "{a}={v!r}".format
attrs = [atmpl(a=a, v="{}".format(getattr(parsed, a, "")) or "") for a in attrs]
return ", ".join(attrs)
def make_netloc(self, host, port):
"""Create netloc from host and port.
Args:
host (:obj:`str`): host part to use in netloc
port (:obj:`str`): port part to use in netloc
Returns:
:obj:`str`: host and port values joined by :
"""
return ":".join([x for x in [host, port] if x])
def reparse(self, parsed, default_scheme=""):
"""Reparse a parsed URL into a parsed URL with values fixed.
Args:
parsed (:obj:`urllib.parse.ParseResult`): parsed URL to reparse
default_scheme (:obj:`str`, optional): default ``""`` -
default scheme to use if URL does not contain a scheme
Returns:
:obj:`urllib.parse.ParseResult`: reparsed result
"""
scheme, netloc, path, params, query, fragment = parsed
host = parsed.hostname
port = format(parsed.port or "")
if not netloc and scheme and path and path.split("/")[0].isdigit():
"""For case:
>>> urllib.parse.urlparse('host:443/')
ParseResult(
scheme='host', netloc='', path='443/', params='', query='', fragment=''
)
"""
host = scheme # switch host from scheme to host
port = path.split("/")[0] # remove / from path and assign to port
path = "" # empty out path
scheme = default_scheme
netloc = ":".join([host, port])
if not netloc and path:
"""For cases:
>>> urllib.parse.urlparse('host:443')
ParseResult(
scheme='', netloc='', path='host:443', params='', query='', fragment=''
)
>>> urllib.parse.urlparse('host')
ParseResult(
scheme='', netloc='', path='host', params='', query='', fragment=''
)
"""
netloc, path = path, netloc
if ":" in netloc: # pragma: no cover
# can't get this to trigger anymore, ignore test coverage
host, port = netloc.split(":", 1)
netloc = ":".join([host, port]) if port else host
else:
host = netloc
scheme = scheme or default_scheme
if not scheme and port:
if format(port) == "443":
scheme = "https"
elif format(port) == "80":
scheme = "http"
if not port:
if scheme == "https":
netloc = self.make_netloc(host, "443")
elif scheme == "http":
netloc = self.make_netloc(host, "80")
pass2 = urlunparse((scheme, netloc, path, params, query, fragment))
return urlparse(pass2)
def unparse_base(self, parsed_result):
"""Unparse a parsed URL into just the scheme, hostname, and port parts.
Args:
parsed_result (:obj:`urllib.parse.ParseResult`): parsed URL to unparse
Returns:
:obj:`str`: unparsed url
"""
# only unparse self.parsed into url with scheme and netloc
bits = (parsed_result.scheme, parsed_result.netloc, "", "", "", "")
return urlunparse(bits)
def unparse_all(self, parsed_result):
"""Unparse a parsed URL with all the parts.
Args:
parsed_result (:obj:`urllib.parse.ParseResult`): parsed URL to unparse
Returns:
:obj:`str`: unparsed url
"""
return urlunparse(parsed_result)
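`reparse()` above exists because `urllib.parse.urlparse` only fills `netloc` when the input has a scheme or a leading `//`; a bare host lands in `path` instead. A small stdlib-only demonstration of the quirk being compensated for:

```python
from urllib.parse import urlparse

# A bare host with no scheme: urlparse leaves netloc empty and puts the
# value in path -- exactly the case reparse() detects and swaps.
p = urlparse("host")
print(repr(p.netloc), repr(p.path))  # '' 'host'

# With an explicit "//" prefix, netloc (and hostname/port) are populated.
p = urlparse("//host:443")
print(p.hostname, p.port)  # host 443
```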
| 35.479495 | 88 | 0.563973 | 2,584 | 22,494 | 4.744582 | 0.117647 | 0.017618 | 0.020147 | 0.013703 | 0.38385 | 0.296982 | 0.24739 | 0.179038 | 0.143638 | 0.129201 | 0 | 0.001963 | 0.32053 | 22,494 | 633 | 89 | 35.535545 | 0.800183 | 0.375834 | 0 | 0.128114 | 0 | 0 | 0.067447 | 0.014706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088968 | false | 0.007117 | 0.032028 | 0.003559 | 0.202847 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8edf7cbcf7cedddc71ad9cf461c4f588b745f8c | 427 | py | Python | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | tests/test.py | alex-panda/PDFCompiler | 3454ee01a6e5ebb2d2bccdcdc32678bf1def895d | [
"MIT"
] | null | null | null | from fpdf import FPDF
import os
pdf = FPDF()
pdf.add_page()
#pdf.add_font('CMUSerif-UprightItalic', fname=os.path.abspath('./src/Fonts/Computer Modern/cmunui.ttf'), uni=True)
#pdf.set_font('CMUSerif-UprightItalic', size=16)
pdf.add_font('BerlinSansFB-Bold', fname='C:\\Windows\\Fonts\\VINERITC.TTF', uni=True)
pdf.set_font('BerlinSansFB-Bold')
pdf.cell(40, 10, "Hello World! (It's a great day today!)")
pdf.output("test.pdf")
| 35.583333 | 114 | 0.735363 | 69 | 427 | 4.478261 | 0.608696 | 0.058252 | 0.064725 | 0.084142 | 0.12945 | 0.12945 | 0 | 0 | 0 | 0 | 0 | 0.015152 | 0.0726 | 427 | 11 | 115 | 38.818182 | 0.765152 | 0.374707 | 0 | 0 | 0 | 0 | 0.422642 | 0.120755 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8ee816a339e470ff77545c7b6f55ec86374feb8 | 362 | py | Python | examinations/serializers.py | CASDON-MYSTERY/studentapp | 0fd942e963a10a02a6c9f358dd362cfd646eecc3 | [
"MIT"
] | null | null | null | examinations/serializers.py | CASDON-MYSTERY/studentapp | 0fd942e963a10a02a6c9f358dd362cfd646eecc3 | [
"MIT"
] | null | null | null | examinations/serializers.py | CASDON-MYSTERY/studentapp | 0fd942e963a10a02a6c9f358dd362cfd646eecc3 | [
"MIT"
] | null | null | null | from rest_framework import serializers
from .models import Examination, Exam_day_and_venue
class Examination_Serializer(serializers.ModelSerializer):
class Meta:
model = Examination
fields = "__all__"
class Exam_day_and_venue_Serializer(serializers.ModelSerializer):
class Meta:
model = Exam_day_and_venue
fields = "__all__"
d8f1c63452f9e657da2dce69409e74cc04262524 | 2,008 | py | Python | server/settings.py | codingismycraft/pinta | bbeb721f5d365cf2643d01e1f3196e90e2658d05 | [
"MIT"
] | null | null | null | server/settings.py | codingismycraft/pinta | bbeb721f5d365cf2643d01e1f3196e90e2658d05 | [
"MIT"
] | null | null | null | server/settings.py | codingismycraft/pinta | bbeb721f5d365cf2643d01e1f3196e90e2658d05 | [
"MIT"
] | null | null | null | """Exposes the configuration settings."""
import os
import json
class _Settings:
"""Exposes the configuration settings."""
_project_root = None
_include_root = None
_dependencies_filename = None
_pinta_executable = None
_history_db = None
def __init__(self):
"""Initializer.
Loads the settings using the pinta configuration file.
"""
home_dir = os.path.expanduser("~")
filename = os.path.join(home_dir, ".pinta", "pinta_conf.json")
with open(filename) as json_file:
data = json.load(json_file)
self._project_root = data["project_root"]
self._include_root = data["include_root"]
self._dependencies_filename = data["dependencies_filename"]
self._history_db = data["history_db"]
self._pinta_executable = os.path.join(home_dir, ".pinta", "pinta")
@property
def project_root(self):
"""Returns the project root.
:return: The project root.
:rtype: str.
"""
return self._project_root
@property
def include_root(self):
"""Returns the include root for all python imports.
:return: The include root.
:rtype: str.
"""
return self._include_root
@property
def dependencies_filename(self):
"""Returns the filename that contains the dependencies as CSV.
:return: The filename that contains the dependencies as CSV.
:rtype: str.
"""
return self._dependencies_filename
@property
def pinta_executable(self):
"""Returns the path to the pinta executable.
:return: The path to the pinta executable.
:rtype: str.
"""
return self._pinta_executable
@property
def history_db(self):
"""Returns the path to the history database.
:return: The path to the history database.
:rtype: str.
"""
return self._history_db
settings = _Settings()
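`_Settings.__init__` above expects `~/.pinta/pinta_conf.json` to define four keys. A hypothetical example of that file's shape — the key names are taken from the code, but every path value here is invented for illustration:

```python
import json

# Hypothetical contents of ~/.pinta/pinta_conf.json; all values invented.
example_conf = {
    "project_root": "/home/user/myproject",
    "include_root": "/home/user/myproject/src",
    "dependencies_filename": "/home/user/myproject/deps.csv",
    "history_db": "/home/user/.pinta/history.db",
}

# Round-trip through JSON, as __init__ would read it from disk.
data = json.loads(json.dumps(example_conf, indent=4))
print(sorted(data))  # ['dependencies_filename', 'history_db', 'include_root', 'project_root']
```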
| 25.417722 | 74 | 0.616036 | 224 | 2,008 | 5.294643 | 0.223214 | 0.064924 | 0.059022 | 0.075885 | 0.274874 | 0.227656 | 0.118044 | 0.072513 | 0 | 0 | 0 | 0 | 0.289841 | 2,008 | 78 | 75 | 25.74359 | 0.831697 | 0.313745 | 0 | 0.147059 | 0 | 0 | 0.073456 | 0.017529 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.058824 | 0 | 0.558824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
d8f2c334a4aedf8eab7e4c82a7180f26920b4989 | 2,533 | py | Python | pages/header_page.py | Morjus/kinoposk_ui_tests | 0e5c0660593731529db7df5912e10a81e80a55ea | [
"MIT"
] | null | null | null | pages/header_page.py | Morjus/kinoposk_ui_tests | 0e5c0660593731529db7df5912e10a81e80a55ea | [
"MIT"
] | null | null | null | pages/header_page.py | Morjus/kinoposk_ui_tests | 0e5c0660593731529db7df5912e10a81e80a55ea | [
"MIT"
] | null | null | null | import allure
import logging
from selenium.webdriver.common.by import By
from pages.base_page import BasePage
class HeaderPage(BasePage):
LOGIN_BUTTON = (By.XPATH, '//button[contains(text(), "Войти")]')
EMAIL_FIELD = (By.CSS_SELECTOR, '#passp-field-login')
PASSW_FIELD = (By.CSS_SELECTOR, '#passp-field-passwd')
SUBMIT_BUTTON = (By.XPATH, '//button[@type="submit"]')
SKIP_PHONE_BUTTON = (By.XPATH, '//button[@type="button"]')
SEARCH_FIELD = (By.CSS_SELECTOR, "input[type='text'][name='kp_query']")
SEARCH_BUTTON = (By.CSS_SELECTOR, 'button[type="submit"]')
HD_LINK = (By.XPATH, '//a[contains(text(), "Онлайн-кинотеатр")]')
MY_BUYS = (By.XPATH, '//a[contains(text(), "Мои покупки")]')
def __init__(self, driver, url):
super().__init__(driver, url)
self.driver = driver
self.base_url = url
self.logger = logging.getLogger(type(self).__name__)
    def _set_email(self, email):
        with allure.step(f"Sending {email} to the {self.EMAIL_FIELD} field"):
            self.find(locator=self.EMAIL_FIELD).send_keys(email)
    def _set_passw(self, passw):
        with allure.step(f"Sending {passw} to the {self.PASSW_FIELD} field"):
            self.find(locator=self.PASSW_FIELD).send_keys(passw)
    def login(self, email, passw):
        with allure.step("Clicking the 'Войти' (log in) button on the main page"):
            self.find(locator=self.LOGIN_BUTTON).click()
        self._set_email(email)
        with allure.step(f"Clicking the {self.SUBMIT_BUTTON} button"):
            self.find(locator=self.SUBMIT_BUTTON).click()
        self._set_passw(passw)
        with allure.step(f"Clicking the {self.SUBMIT_BUTTON} button"):
            self.find(locator=self.SUBMIT_BUTTON).click()
        with allure.step("Waiting for the search field to appear"):
            self.find(locator=self.SEARCH_FIELD, time=60)
        # with allure.step("Skipping the prompt to link a phone number"):
        #     self.find(locator=self.SKIP_PHONE_BUTTON).click()
    def go_to_hd(self):
        with allure.step("Navigating to the online cinema"):
            self.find(locator=self.HD_LINK).click()
        with allure.step("Checking that the 'Мои покупки' (My purchases) tab is on the page"):
            self.find(locator=self.MY_BUYS)
    def search_movie(self, movie_name):
        with allure.step(f"Typing movie name into search: {movie_name}"):
            self.find(locator=self.SEARCH_FIELD).send_keys(movie_name)
        with allure.step(f"Starting search for movie: {movie_name}"):
            self.find(locator=self.SEARCH_BUTTON).click()
# ---- modules/data/warehouse/tools/import_record.py (Shokoofeh/apollo, Apache-2.0) ----
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
###############################################################################
# Copyright 2018 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
"""
Import Apollo record into a MongoDB.
Use as command tool: import_record.py <record>
Use as util lib: RecordImporter.Import(<record>)
"""
import os
import sys
import gflags
import glog
from mongo_util import Mongo
from parse_record import RecordParser
gflags.DEFINE_string('mongo_collection_name', 'records',
'MongoDB collection name.')
class RecordImporter(object):
"""Import Apollo record files."""
@staticmethod
def Import(record_file):
"""Import one record."""
parser = RecordParser(record_file)
if not parser.ParseMeta():
glog.error('Fail to parse record {}'.format(record_file))
return
parser.ParseMessages()
doc = Mongo.pb_to_doc(parser.record)
collection = Mongo.collection(gflags.FLAGS.mongo_collection_name)
collection.replace_one({'path': parser.record.path}, doc, upsert=True)
glog.info('Imported record {}'.format(record_file))
if __name__ == '__main__':
gflags.FLAGS(sys.argv)
    if len(sys.argv) > 1:  # argv[0] is the script name; require a record argument
RecordImporter.Import(sys.argv[-1])
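`Import` keys each document on the record's `path` and calls `replace_one(..., upsert=True)`, so re-importing the same record overwrites its document rather than duplicating it. A dict-backed sketch of that upsert-by-key behavior (a stand-in for illustration only, not the real pymongo collection):

```python
class FakeCollection:
    """Minimal stand-in mimicking replace_one(filter, doc, upsert=True)
    for a single-key filter. Illustrative only; not the pymongo API surface."""
    def __init__(self):
        self.docs = []

    def replace_one(self, flt, doc, upsert=False):
        (key, value), = flt.items()
        for i, existing in enumerate(self.docs):
            if existing.get(key) == value:
                self.docs[i] = doc          # matched: replace in place
                return
        if upsert:
            self.docs.append(doc)           # no match: insert when upsert=True

col = FakeCollection()
col.replace_one({"path": "/a.record"}, {"path": "/a.record", "rev": 1}, upsert=True)
col.replace_one({"path": "/a.record"}, {"path": "/a.record", "rev": 2}, upsert=True)
assert col.docs == [{"path": "/a.record", "rev": 2}]
```

The second call replaces the first document instead of appending, which is exactly why repeated imports of the same record stay idempotent.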
# ---- enthought/mayavi/core/ui/mayavi_scene.py (enthought/etsproxy, BSD-3-Clause) ----
# proxy module
from __future__ import absolute_import
from mayavi.core.ui.mayavi_scene import *
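The etsproxy file above is a pure re-export shim: the old import path keeps working while the code lives at the new location. A sketch of the same pattern using in-memory stand-in modules (the names `old_scene`/`new_scene` are illustrative, not the real mayavi paths); real shims often add a `DeprecationWarning`:

```python
import sys
import types
import warnings

# Stand-in for the relocated module (plays the role of mayavi.core.ui.mayavi_scene).
new_mod = types.ModuleType("new_scene")
new_mod.MayaviScene = type("MayaviScene", (), {})
sys.modules["new_scene"] = new_mod

# The proxy: warn, then re-export every public name from the new module,
# which is what `from ... import *` does in the shim above.
proxy = types.ModuleType("old_scene")
warnings.warn("old_scene is deprecated; import new_scene instead",
              DeprecationWarning, stacklevel=2)
proxy.__dict__.update(
    {k: v for k, v in new_mod.__dict__.items() if not k.startswith("_")}
)
sys.modules["old_scene"] = proxy

# The class is now reachable through both paths and is the same object.
assert sys.modules["old_scene"].MayaviScene is new_mod.MayaviScene
```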
# ---- docs/talks/xdc2016/compare_cairo.py (juhapekka/ezbench_work, BSD-3-Clause) ----
#!/usr/bin/env python3
"""
Copyright (c) 2015, Intel Corporation
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
# Import ezbench from the utils/ folder
ezbench_dir = os.path.abspath(sys.path[0]+'/../')
sys.path.append(ezbench_dir+'/utils/')
sys.path.append(ezbench_dir+'/utils/env_dump')
from ezbench import *
from env_dump_parser import *
if __name__ == "__main__":
import argparse
# parse the options
parser = argparse.ArgumentParser()
parser.add_argument("log_folder")
args = parser.parse_args()
report = Report(args.log_folder, silentMode=True)
report.enhance_report([])
print("Test name, cairo image perf, xlib perf, cairo image energy, xlib energy")
for result in report.commits[0].results:
test_name = result.test.full_name
if not test_name.startswith("x11:cairo:xlib:"):
continue
img_res = report.find_result_by_name(report.commits[0], test_name.replace("x11:cairo:xlib:", "x11:cairo:image:"))
if img_res is None:
img_res = report.find_result_by_name(report.commits[0], test_name.replace("x11:cairo:xlib:", "x11:cairo:ximage:"))
test_name = test_name.replace(":xlib:", ':')
        if img_res is None:
            print("could not find the cpu result for test '{}'".format(test_name))
            continue
perf_cpu = img_res.result().mean()
perf_gpu = result.result().mean()
pwr_cpu = img_res.result("metric_rapl0.package-0:energy").mean()
pwr_gpu = result.result("metric_rapl0.package-0:energy").mean()
print("{},{},{},{},{}".format(test_name, perf_cpu, perf_gpu, pwr_cpu, pwr_gpu))
# ---- generator/blocks/write_back/base/memory_block_base.py (biarmic/OpenCache, BSD-3-Clause) ----
# See LICENSE for licensing information.
#
# Copyright (c) 2021 Regents of the University of California and The Board
# of Regents for the Oklahoma Agricultural and Mechanical College
# (acting for and on behalf of Oklahoma State University)
# All rights reserved.
#
from block_base import block_base
from amaranth import Cat, C
from state import state
class memory_block_base(block_base):
"""
This is the base class of memory controller always block modules.
Methods of this class can be overridden for specific implementation of
different cache designs.
In this block, cache communicates with memory components such as tag array,
data array, use array, and DRAM.
"""
def __init__(self):
super().__init__()
def add_reset(self, c, m):
""" Add statements for the RESET state. """
# In the RESET state, cache sends write request to the tag array to reset
# the current set.
# set register is incremented by the Request Block.
# When set register reaches the end, state switches to IDLE.
with m.Case(state.RESET):
c.tag_array.write(c.set, 0)
c.data_array.write(c.set, 0)
def add_flush(self, c, m):
""" Add statements for the FLUSH state. """
# In the FLUSH state, cache sends write request to DRAM.
# set register is incremented by the Request Block.
# way register is incremented by the Replacement Block.
# When set and way registers reach the end, state switches to IDLE.
with m.Case(state.FLUSH):
c.tag_array.read(c.set)
c.data_array.read(c.set)
with m.Switch(c.way):
for i in range(c.num_ways):
with m.Case(i):
# Check if current set is clean or DRAM is available,
# and all ways of the set are checked
if i == c.num_ways - 1:
with m.If(~c.tag_array.output().dirty(i) | ~c.dram.stall()):
# Request the next tag and data lines from SRAMs
c.tag_array.read(c.set + 1)
c.data_array.read(c.set + 1)
# Check if current set is dirty and DRAM is available
with m.If(c.tag_array.output().dirty(i) & ~c.dram.stall()):
# Update dirty bits in the tag line
c.tag_array.write(c.set, Cat(c.tag_array.output().tag(i), C(2, 2)), i)
# Send the write request to DRAM
c.dram.write(Cat(c.set, c.tag_array.output().tag(i)), c.data_array.output(i))
def add_idle(self, c, m):
""" Add statements for the IDLE state. """
# In the IDLE state, cache waits for CPU to send a new request.
# Until there is a new request from the cache, stall is low.
# When there is a new request from the cache stall is asserted, request
# is decoded and corresponding tag, data, and use array lines are read
# from internal SRAMs.
with m.Case(state.IDLE):
# Read next lines from SRAMs even though CPU is not sending a new
# request since read is non-destructive.
c.tag_array.read(c.addr.parse_set())
c.data_array.read(c.addr.parse_set())
def add_compare(self, c, m):
""" Add statements for the COMPARE state. """
# In the COMPARE state, cache compares tags.
with m.Case(state.COMPARE):
c.tag_array.read(c.set)
c.data_array.read(c.set)
# Assuming that current request is miss, check if it is dirty miss
with c.check_dirty_miss(m):
# If DRAM is available, switch to WAIT_WRITE and wait for DRAM to
# complete writing
with m.If(~c.dram.stall()):
c.dram.write(Cat(c.set, c.tag_array.output().tag()), c.data_array.output())
# Else, assume that current request is clean miss
with c.check_clean_miss(m):
# If DRAM is busy, switch to READ and wait for DRAM to be available
# If DRAM is available, switch to WAIT_READ and wait for DRAM to
# complete reading
with m.If(~c.dram.stall()):
c.dram.read(Cat(c.set, c.tag))
# Check if current request is hit
with c.check_hit(m):
# Set DRAM's csb to 1 again since it could be set 0 above
c.dram.disable()
# Perform the write request
with m.If(~c.web_reg):
# Update dirty bit
c.tag_array.write(c.set, Cat(c.tag, C(3, 2)))
# Perform write request
c.data_array.write(c.set, c.data_array.output())
c.data_array.write_input(0, c.offset, c.din_reg, c.wmask_reg if c.num_masks else None)
# Read next lines from SRAMs even though the CPU is not sending
# a new request since read is non-destructive.
c.tag_array.read(c.addr.parse_set())
c.data_array.read(c.addr.parse_set())
def add_write(self, c, m):
""" Add statements for the WRITE state. """
# In the WRITE state, cache waits for DRAM to be available.
# When DRAM is available, write request is sent.
with m.Case(state.WRITE):
c.tag_array.read(c.set)
c.data_array.read(c.set)
# If DRAM is busy, wait in this state.
# If DRAM is available, switch to WAIT_WRITE and wait for DRAM to
# complete writing.
with m.If(~c.dram.stall()):
with m.Switch(c.way):
for i in range(c.num_ways):
with m.Case(i):
c.dram.write(Cat(c.set, c.tag_array.output().tag(c.way)), c.data_array.output(i))
def add_wait_write(self, c, m):
""" Add statements for the WAIT_WRITE state. """
# In the WAIT_WRITE state, cache waits for DRAM to complete writing.
# When DRAM completes writing, read request is sent.
with m.Case(state.WAIT_WRITE):
c.tag_array.read(c.set)
c.data_array.read(c.set)
# If DRAM is busy, wait in this state.
# If DRAM completes writing, switch to WAIT_READ and wait for DRAM to
# complete reading.
with m.If(~c.dram.stall()):
c.dram.read(Cat(c.set, c.tag))
def add_read(self, c, m):
""" Add statements for the READ state. """
# In the READ state, cache waits for DRAM to be available.
# When DRAM is available, read request is sent.
# TODO: Is this state really necessary? WAIT_WRITE state may be used instead
with m.Case(state.READ):
c.tag_array.read(c.set)
c.data_array.read(c.set)
# If DRAM is busy, wait in this state.
# If DRAM completes writing, switch to WAIT_READ and wait for DRAM to
# complete reading.
with m.If(~c.dram.stall()):
c.dram.read(Cat(c.set, c.tag))
def add_wait_read(self, c, m):
""" Add statements for the WAIT_READ state. """
# In the WAIT_READ state, cache waits for DRAM to complete reading
# When DRAM completes reading, request is completed.
with m.Case(state.WAIT_READ):
c.tag_array.read(c.set)
c.data_array.read(c.set)
# If DRAM is busy, cache waits in this state.
# If DRAM completes reading, cache switches to:
# IDLE if CPU isn't sending a new request
# COMPARE if CPU is sending a new request
with m.If(~c.dram.stall()):
# Update tag line
c.tag_array.write(c.set, Cat(c.tag, ~c.web_reg, C(1, 1)), c.way)
# Update data line
c.data_array.write(c.set, c.dram.output(), c.way)
# Perform the write request
with m.If(~c.web_reg):
c.data_array.write_input(c.way, c.offset, c.din_reg, c.wmask_reg if c.num_masks else None)
# Read next lines from SRAMs even though the CPU is not sending
# a new request since read is non-destructive
c.tag_array.read(c.addr.parse_set())
c.data_array.read(c.addr.parse_set())
def add_flush_hazard(self, c, m):
""" Add statements for the FLUSH_HAZARD state. """
# In the FLUSH_HAZARD state, cache waits in this state for 1 cycle.
# Read requests are sent to tag and data arrays.
with m.Case(state.FLUSH_HAZARD):
c.tag_array.read(0)
c.data_array.read(0)
def add_wait_hazard(self, c, m):
""" Add statements for the WAIT_HAZARD state. """
# In the WAIT_HAZARD state, cache waits in this state for 1 cycle.
# Read requests are sent to tag and data arrays.
with m.Case(state.WAIT_HAZARD):
c.tag_array.read(c.set)
c.data_array.read(c.set)
def add_flush_sig(self, c, m):
""" Add flush signal control. """
# If flush is high, state switches to FLUSH.
# In the FLUSH state, cache will write all data lines back to DRAM.
with m.If(c.flush):
c.tag_array.read(0)
            c.data_array.read(0)
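The FLUSH walk described above visits every (set, way) pair, writes each dirty line back to DRAM, and clears its dirty bit before moving on. A minimal software model of that walk (plain Python with a dict standing in for DRAM; the field names here are illustrative, not the RTL's):

```python
def flush(cache, dram):
    """Write every valid dirty line back to DRAM and clear its dirty bit.

    cache: list of sets; each set is a list of way dicts with
    'valid', 'dirty', 'tag', and 'data' keys.
    dram: dict keyed by (tag, set index), standing in for main memory.
    """
    for set_idx, ways in enumerate(cache):
        for line in ways:
            if line["valid"] and line["dirty"]:
                dram[(line["tag"], set_idx)] = line["data"]  # write back
                line["dirty"] = False                        # now clean

# Two sets, two ways each; two valid dirty lines should be written back.
cache = [
    [{"valid": True, "dirty": True, "tag": 3, "data": "A"},
     {"valid": True, "dirty": False, "tag": 7, "data": "B"}],
    [{"valid": False, "dirty": True, "tag": 0, "data": "-"},
     {"valid": True, "dirty": True, "tag": 1, "data": "C"}],
]
dram = {}
flush(cache, dram)
assert dram == {(3, 0): "A", (1, 1): "C"}
```

The RTL version spreads the same traversal across clock cycles (one set per tag/data array read, one way per DRAM transaction, stalling while DRAM is busy), but the invariant is identical: after the walk, no valid line is dirty.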
# ---- concordia/migrations/0002_auto_20181004_1848.py (ptrourke/concordia, CC0-1.0) ----
# Generated by Django 2.0.9 on 2018-10-04 18:48
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [("concordia", "0001_squashed_0040_remove_campaign_is_active")]
operations = [
migrations.AlterField(
model_name="pageinuse",
name="created_on",
field=models.DateTimeField(auto_now_add=True),
),
migrations.AlterField(
model_name="pageinuse",
name="page_url",
field=models.URLField(max_length=768),
),
migrations.AlterField(
model_name="pageinuse",
name="updated_on",
field=models.DateTimeField(auto_now=True),
),
]
# ---- combinatorics/p11069.py (sajjadt/competitive-programming, MIT) ----
# f(n) = number of valid sequences with n items
# f(n) = {"attaching n to"} f(n-2) + {"attaching n-1 to "} f(n-3)
LIMIT = 76 + 1
f_table = [0, 1, 2, 2]
for i in range(LIMIT):
f_table.append(f_table[-2] + f_table[-3])
while True:
try:
n = int(input())
print(f_table[n])
except(EOFError):
break
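One reading of this recurrence — an assumption here, but consistent with the seeds f(1)=1, f(2)=2, f(3)=2 and with UVa 11069 being a path-graph problem — is that f(n) counts the maximal independent sets of the path 1–2–…–n. A brute-force cross-check against the table for small n:

```python
from itertools import combinations

def maximal_independent_sets(n):
    """Brute-force count of maximal independent sets in the path 0-1-...-(n-1)."""
    count = 0
    vertices = range(n)
    for r in range(n + 1):
        for chosen in combinations(vertices, r):
            s = set(chosen)
            if any(v + 1 in s for v in s):        # two adjacent vertices chosen
                continue
            if all(v - 1 in s or v + 1 in s       # every excluded vertex blocked
                   for v in vertices if v not in s):
                count += 1
    return count

# Rebuild the table exactly as the solution does
f_table = [0, 1, 2, 2]
for _ in range(20):
    f_table.append(f_table[-2] + f_table[-3])

assert all(maximal_independent_sets(n) == f_table[n] for n in range(1, 10))
```

The recurrence itself then reads naturally: a maximal set either contains vertex n (attach n to a solution on the first n-2 vertices) or contains n-1 (attach n-1 to a solution on the first n-3).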
# ---- kartverket_tide_api/exceptions/__init__.py (matsjp/kartverket_tide_api, MIT) ----
from .apierrorexception import NoTideDataErrorException, UnknownApiErrorException,\
ApiErrorException, InvalidStationTypeErrorException
from .cannotfindelementexception import CannotFindElementException
# ---- aita/api/course.py (ze-lin/AITA, MIT) ----
import datetime, time, os
from flask import Blueprint, jsonify, request, g
from aita.auth import login_required
from aita.db import get_collection
from werkzeug.utils import secure_filename
bp = Blueprint('course', __name__, url_prefix='/course')
@bp.route('/getall', methods=['GET'])
def get_all_course():
COURSE = get_collection('course')
result = COURSE.find()
json_body = {}
for i, document in enumerate(result):
json_body[i] = document
json_body[i].pop('_id')
return jsonify(json_body)
@bp.route('/getall-teacher', methods=['GET'])
@login_required
def get_all_course_teacher():
COURSE = get_collection('course')
if g.usr['role'] == 'student':
return 'student'
result = COURSE.find({ 'teacher': g.usr['usr'] })
json_body = {}
for i, document in enumerate(result):
json_body[i] = document
json_body[i].pop('_id')
return jsonify(json_body)
@bp.route('/create', methods=['GET'])
@login_required
def create_course():
COURSE = get_collection('course')
document = {
'genre': request.args.get('genre'),
'title': request.args.get('title'),
'exam': request.args.get('exam'),
'time': request.args.get('time'),
'teacher': g.usr['usr'],
'video': secure_filename(request.args.get('video')),
'article': secure_filename(request.args.get('article')),
'date': str(datetime.date.today()),
'id': str(time.time()),
'view': 0
}
COURSE.insert_one(document)
return 'Success!'
@bp.route('/delete', methods=['GET'])
@login_required
def delete_course():
    # Cascade delete: remove the course and every collection entry that references it
COURSE = get_collection('course')
COLLECTION = get_collection('collection') # delete all
course_id = request.args.get('id')
COLLECTION.delete_many({ 'id': course_id })
COURSE.delete_one({'id': course_id})
return 'Success!'
@bp.route('/uploadfile', methods=['POST'])
@login_required
def upload():
"""
    Saving the record to the database is left to submit_class.
"""
file = request.files['file']
file_name = secure_filename(file.filename)
if not os.path.exists(os.path.join('aita/static', file_name)):
file.save(os.path.join('aita/static', file_name))
return 'Success!'
@bp.route('/getreading', methods=['GET'])
@login_required
def get_reading():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
file_name = result['article']
file_path = os.path.join('aita/static', file_name)
content = ''
with open(file_path, 'r') as f:
for line in f:
content += line
return content
@bp.route('/getvideo', methods=['GET'])
@login_required
def get_video():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
file_name = result['video']
return file_name
@bp.route('/getexam', methods=['GET'])
@login_required
def get_exam():
COURSE = get_collection('course')
result = COURSE.find_one({'id': request.args.get('id')})
return result['exam']
@bp.route('/view', methods=['GET'])
@login_required
def view():
COURSE = get_collection('course')
course_id = request.args.get('id')
result = COURSE.find_one({'id': course_id })
result['view'] += 1
COURSE.replace_one({'id': course_id }, result)
COLLECTION = get_collection('collection')
document = {
'id': course_id,
'usr': g.usr['usr']
}
result = COLLECTION.find_one(document)
if not result:
COLLECTION.insert_one(document)
return 'Success!'
# ---- apps/dash-baseball-statistics/layouts.py (JeroenvdSande/dash-sample-apps, MIT) ----
# Dash components, html, and dash tables
import dash_core_components as dcc
import dash_html_components as html
import dash_table
# Import Bootstrap components
import dash_bootstrap_components as dbc
# Import custom data.py
import data
# Import data from data.py file
teams_df = data.teams
# Hardcoded list that contain era names and marks
era_list = data.era_list
era_marks = data.era_marks
# Main applicaiton menu
appMenu = html.Div(
[
dbc.Row(
[
dbc.Col(
html.H4(style={"text-align": "center"}, children="Select Era:"),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 3},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
),
dbc.Col(
dcc.Dropdown(
style={
"text-align": "center",
"font-size": "18px",
"width": "210px",
},
id="era-dropdown",
options=era_list,
value=era_list[0]["value"],
clearable=False,
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
),
dbc.Col(
html.H4(
style={"text-align": "center", "justify-self": "right"},
children="Select Team:",
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 3},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 1},
),
dbc.Col(
dcc.Dropdown(
style={
"text-align": "center",
"font-size": "18px",
"width": "210px",
},
id="team-dropdown",
clearable=False,
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
),
],
form=True,
),
dbc.Row(
dbc.Col(
html.P(
style={"font-size": "16px", "opacity": "70%"},
                    children="""For continuity, some teams' historical names were changed to match """
                    """their modern counterparts. Available teams are updated based on Era selection.""",
)
)
),
],
className="menu",
)
# Menu slider used, NOT independent, MUST be used with main menu
menuSlider = html.Div(
[
dbc.Row(
dbc.Col(
dcc.RangeSlider(
id="era-slider",
min=1903,
max=teams_df["year"].max(),
marks=era_marks,
tooltip={"always_visible": False, "placement": "bottom"},
)
)
),
dbc.Row(
dbc.Col(
html.P(
style={"font-size": "16px", "opacity": "70%"},
children="Adjust slider to desired range.",
)
)
),
],
className="era-slider",
)
# Layout for Team Analysis page
teamLayout = html.Div(
[
dbc.Row(dbc.Col(html.H3(children="Team Accolades"))),
# Display Championship titles in datatable
dbc.Row(
dbc.Col(
html.Div(id="team-data"),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": 7, "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
),
justify="center",
),
### Graphs of Historical Team statistics ###
dbc.Row(dbc.Col(html.H3(children="Team Analysis"))),
# Bar Chart of Wins and Losses
dbc.Row(
dbc.Col(
dcc.Graph(id="wl-bar", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
)
),
# Line Chart of Batting Average, BABIP, and Strikeout Rate
dbc.Row(
dbc.Col(
dcc.Graph(id="batting-line", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
)
),
# Bar Char of Errors and Double Plays
dbc.Row(
dbc.Col(
dcc.Graph(id="feild-line", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
)
),
dbc.Row(dbc.Col(html.H4(children="Pitching Performance"))),
dbc.Row(
[
# Line graph of K/BB ratio with ERA bubbles
dbc.Col(
dcc.Graph(id="pitch-bubble", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 6, "offset": 0},
),
# Pie Chart, % of Completed Games, Shutouts, and Saves of Total Games played
dbc.Col(
dcc.Graph(id="pitch-pie", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 6, "offset": 0},
),
],
no_gutters=True,
),
],
className="app-page",
)
# Player menu used to select players after era and team are set
playerMenu = html.Div(
[
dbc.Row(dbc.Col(html.H3(children="Player Profile and Statistics"))),
dbc.Row(
dbc.Col(
html.P(
style={"font-size": "16px", "opacity": "70%"},
children="Available players are updated based on team selection.",
)
)
),
dbc.Row(
[
dbc.Row(
dbc.Col(
html.H4(
style={"text-align": "center"}, children="Select Player:"
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
)
),
dbc.Row(
dbc.Col(
dcc.Dropdown(
style={
"margin-left": "2%",
"text-align": "center",
"font-size": "18px",
"width": "218px",
},
id="player-dropdown",
clearable=False,
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
)
),
],
no_gutters=True,
),
html.Br(),
dbc.Row(
dbc.Col(
dash_table.DataTable(
id="playerProfile",
style_as_list_view=True,
editable=False,
style_table={
"overflowY": "scroll",
"width": "100%",
"minWidth": "100%",
},
style_header={"backgroundColor": "#f8f5f0", "fontWeight": "bold"},
style_cell={"textAlign": "center", "padding": "8px"},
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": 8, "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
),
justify="center",
),
html.Br(),
],
className="app-page",
)
# Batting statistics
battingLayout = html.Div(
[
# Batting datatable
dbc.Row(
dbc.Col(
dash_table.DataTable(
id="batterTable",
style_as_list_view=True,
editable=False,
style_table={
"overflowY": "scroll",
"width": "100%",
"minWidth": "100%",
},
style_header={"backgroundColor": "#f8f5f0", "fontWeight": "bold"},
style_cell={"textAlign": "center", "padding": "8px"},
),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 10, "offset": 0},
lg={"size": 10, "offset": 0},
xl={"size": 10, "offset": 0},
),
justify="center",
),
dbc.Row(
dbc.Col(
html.H3(
style={"margin-top": "1%", "margin-bottom": "1%"},
children="Player Analysis",
)
)
),
dbc.Row(
dbc.Col(
html.P(
style={"font-size": "16px", "opacity": "70%"},
children="Some statistics were not tracked until the 1950s, so certain plots may be missing from the graphs.",
)
)
),
dbc.Row(
[
# Line/Bar Chart of On-Base Percentage, features: H, BB, HBP, SF
dbc.Col(
dcc.Graph(id="obp-line", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 6, "offset": 0},
),
# Line/Bar Chart of Slugging Average, features: 2B, 3B, HR
dbc.Col(
dcc.Graph(id="slg-line", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 6, "offset": 0},
),
],
no_gutters=True,
),
# Line Chart of OPS, features: OBP, SLG
dbc.Row(
dbc.Col(
dcc.Graph(id="ops-line", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
)
),
],
className="app-page",
)
# Fielding Statistics
fieldingLayout = html.Div(
[
dbc.Row(dbc.Col(html.H3(style={"margin-bottom": "1%"}, children="Fielding"))),
# Fielding Datatable
dbc.Row(
dbc.Col(
dash_table.DataTable(
id="fieldTable",
style_as_list_view=True,
editable=False,
style_table={
"overflowY": "scroll",
"width": "100%",
"minWidth": "100%",
},
style_header={"backgroundColor": "#f8f5f0", "fontWeight": "bold"},
style_cell={"textAlign": "center", "padding": "8px"},
),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 8, "offset": 0},
lg={"size": 8, "offset": 0},
xl={"size": 8, "offset": 0},
),
justify="center",
),
html.Br(),
dbc.Row(dbc.Col(html.H3(style={"margin-bottom": "1%"}, children="Pitching"))),
dbc.Row(
dbc.Col(
html.Div(id="pitch-data"),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 10, "offset": 0},
lg={"size": 10, "offset": 0},
xl={"size": 10, "offset": 0},
),
justify="center",
),
html.Br(),
dbc.Row(dbc.Col(html.H3(children="Player Analysis"))),
# Player dropdown menu
dbc.Row(
[
dbc.Row(
dbc.Col(
html.H4(
style={"text-align": "center"}, children="Select Position:"
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
)
),
dbc.Row(
dbc.Col(
dcc.Dropdown(
style={
"margin-left": "5px",
"text-align": "center",
"font-size": "18px",
"width": "100px",
},
id="pos-dropdown",
clearable=False,
),
xs={"size": "auto", "offset": 0},
sm={"size": "auto", "offset": 0},
md={"size": "auto", "offset": 0},
lg={"size": "auto", "offset": 0},
xl={"size": "auto", "offset": 0},
)
),
],
no_gutters=True,
),
dbc.Row(dbc.Col(html.H4(children="Pitching Performance"))),
# Pitching and fielding graphs; the pitching graphs are combined in a subplot rendered into the div below
dbc.Row(
dbc.Col(
html.Div(id="pitch-graphs"),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
xl={"size": 12, "offset": 0},
)
),
dbc.Row(
dbc.Col(
dcc.Graph(id="field-graph", config={"displayModeBar": False}),
xs={"size": 12, "offset": 0},
sm={"size": 12, "offset": 0},
md={"size": 12, "offset": 0},
lg={"size": 12, "offset": 0},
xl={"size": 12, "offset": 0},
)
),
],
className="app-page",
)
# parquet_metadata/test_parquet_metadata.py (dzamo/parquet-metadata, Apache-2.0)
from . import parquet_metadata
def test_smoke_test():
parquet_metadata.dump('parquets/types.parquet')
# src/pylightnix/repl.py (stagedml/pylightnix, Apache-2.0)
# Copyright 2020, Sergey Mironov
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Repl module defines variants of `instantiate` and `realize1` functions, which
are suitable for REPL shells. Repl-friendly wrappers (see `repl_realize`) could
pause the computation, save the Pylightnix state into a variable and return to
the REPL's main loop. At this point user could alter the state of the whole
system. Finally, `repl_continue` or `repl_cancel` could be called to either
continue or cancel the realization.
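
The pause/resume mechanism is built on Python generators: the realization
sequence yields control back to the caller, who later resumes it with
`send` and collects the final value from `StopIteration`. A minimal
self-contained sketch of the same pattern (independent of Pylightnix):

```python
def realize_seq():
    # Pause once, handing a state marker to the caller; the caller
    # resumes us with `send`, and our return value travels back to
    # the caller inside StopIteration.
    answer = yield "paused"
    return "done:" + answer

def drive():
    gen = realize_seq()
    state = next(gen)          # run until the first pause
    try:
        gen.send("ok")         # resume with data from the caller
    except StopIteration as e:
        return state, e.value

state, value = drive()
```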
"""
from pylightnix.utils import ( dirrm, timestring, concat )
from pylightnix.types import (Dict, Closure, Context, Derivation, RRef, DRef,
List, Tuple, Optional, Generator, Path, Build,
Union, Any, BuildArgs, RealizeArg, SPath,
StorageSettings)
from pylightnix.core import (realizeSeq, RealizeSeqGen, mkrealization)
class ReplHelper:
def __init__(self, gen:RealizeSeqGen)->None:
self.gen:Optional[RealizeSeqGen]=gen
self.S:Optional[StorageSettings]=None
self.dref:Optional[DRef]=None
self.context:Optional[Context]=None
self.drv:Optional[Derivation]=None
self.result:Optional[Context]=None
self.rarg:Optional[RealizeArg]=None
ERR_INVALID_RH="Neither global, nor user-defined ReplHelper is valid"
ERR_INACTIVE_RH="REPL session is not paused or was already unpaused"
PYLIGHTNIX_REPL_HELPER:Optional[ReplHelper]=None
def repl_continueAll(out_paths:Optional[List[Path]]=None,
out_rrefs:Optional[List[RRef]]=None,
rh:Optional[ReplHelper]=None)->Optional[Context]:
global PYLIGHTNIX_REPL_HELPER
if rh is None:
rh=PYLIGHTNIX_REPL_HELPER
assert rh is not None, ERR_INVALID_RH
assert rh.gen is not None, ERR_INACTIVE_RH
assert rh.dref is not None, ERR_INACTIVE_RH
assert rh.context is not None, ERR_INACTIVE_RH
assert rh.drv is not None, ERR_INACTIVE_RH
assert rh.S is not None, ERR_INACTIVE_RH
try:
rrefs:Optional[List[RRef]]
if out_paths is not None:
assert out_rrefs is None
rrefs=[mkrealization(rh.dref,rh.context,p,rh.S)
for p in out_paths]
elif out_rrefs is not None:
assert out_paths is None
rrefs=out_rrefs
else:
rrefs=None
rh.S,rh.dref,rh.context,rh.drv,rh.rarg=rh.gen.send((rrefs,False))
except StopIteration as e:
rh.gen=None
rh.result=e.value
return repl_result(rh)
def repl_continueMany(out_paths:Optional[List[Path]]=None,
out_rrefs:Optional[List[RRef]]=None,
rh:Optional[ReplHelper]=None)->Optional[List[RRef]]:
ctx=repl_continueAll(out_paths,out_rrefs,rh)
if ctx is None:
return None
assert len(ctx)==1, f"Expected a single-targeted closure"
return ctx[list(ctx.keys())[0]]
def repl_continue(out_paths:Optional[List[Path]]=None,
out_rrefs:Optional[List[RRef]]=None,
rh:Optional[ReplHelper]=None)->Optional[RRef]:
rrefs=repl_continueMany(out_paths,out_rrefs,rh)
if rrefs is None:
return None
assert len(rrefs)==1, f"Expected a single-result derivation"
return rrefs[0]
def repl_realize(closure:Union[Closure,Tuple[Any,Closure]],
force_interrupt:Union[List[DRef],bool]=True,
realize_args:Dict[DRef,RealizeArg]={})->ReplHelper:
"""
Realize the given closure in REPL mode: realization is paused before
building the derivations selected by `force_interrupt`, so that the user
can inspect or modify the build state from the shell.
Example:
```python
rh=repl_realize(instantiate(mystage), force_interrupt=True)
# ^^^ `repl_realize` returns the `ReplHelper` object which holds the state of
# incomplete realization
b:Build=repl_build()
# ^^^ Access its build object. Now we may think that we are inside the
# realization function. Let's do some hacks.
with open(join(build_outpath(b),'artifact.txt'), 'w') as f:
f.write("Fooo")
repl_continueBuild(b)
rref=repl_rref(rh)
# ^^^ Since we didn't program any other pauses, we should get the usual RRef
# holding the result of our hacks.
```
"""
# FIXME: define a Closure as a datatype and simplify the below check
closure_:Closure
if isinstance(closure,tuple) and len(closure)==2:
closure_=closure[1] # type:ignore
else:
closure_=closure # type:ignore
global PYLIGHTNIX_REPL_HELPER
force_interrupt_:List[DRef]=[]
if isinstance(force_interrupt,bool):
if force_interrupt:
force_interrupt_=closure_.targets
elif isinstance(force_interrupt,list):
force_interrupt_=force_interrupt
else:
assert False, "Invalid argument"
rh=ReplHelper(realizeSeq(closure_,force_interrupt_,realize_args=realize_args))
PYLIGHTNIX_REPL_HELPER=rh
assert rh.gen is not None, ERR_INACTIVE_RH
try:
rh.S,rh.dref,rh.context,rh.drv,rh.rarg=next(rh.gen)
except StopIteration as e:
rh.gen=None
rh.result=e.value
return rh
def repl_result(rh:ReplHelper)->Optional[Context]:
return rh.result
def repl_rrefs(rh:ReplHelper)->Optional[List[RRef]]:
ctx=repl_result(rh)
if ctx is None:
return None
assert len(ctx)==1
return ctx[list(ctx.keys())[0]]
def repl_rref(rh:ReplHelper)->Optional[RRef]:
rrefs=repl_rrefs(rh)
if rrefs is None:
return None
assert len(rrefs)==1
return rrefs[0]
def repl_cancel(rh:Optional[ReplHelper]=None)->None:
global PYLIGHTNIX_REPL_HELPER
if rh is None:
rh=PYLIGHTNIX_REPL_HELPER
assert rh is not None, ERR_INVALID_RH
try:
assert rh.gen is not None
while True:
rh.gen.send((None,True))
except StopIteration as e:
rh.gen=None
# main.py (AuroraBTH/minecraft-modpack-randomizer, MIT)
from bs4 import BeautifulSoup
from requests import get
import json
def get_amount_of_pages(minecraft_version):
initial_site_response = get("https://www.curseforge.com/minecraft/mc-mods?filter-game-version=" + minecraft_version + "&filter-sort=5&")
soup = BeautifulSoup(initial_site_response.text, "html.parser")
amount_of_pages = soup.find('li', class_="dots").find_next_sibling().text
return int(amount_of_pages)
def get_project_and_author_id(mod):
mod_id = mod.find("a", class_="button--download").get("data-nurture-data")
author_id = mod.find("a", class_="button--download").get("data-nurture-data")
if mod_id is None:
mod_id = json.loads(mod.find("a", class_="button--download").get("data-exp-nurture"))["ProjectID"]
author_id = json.loads(mod.find("a", class_="button--download").get("data-exp-nurture"))["AuthorID"]
else:
mod_id = json.loads(mod_id)["ProjectID"]
author_id = json.loads(author_id)["AuthorID"]
return [mod_id, author_id]
def write_mods_to_json(minecraft_version, file_name):
domain = "https://www.curseforge.com"
page_number = 1
amount_of_pages = get_amount_of_pages(minecraft_version)
mod_list = []
while page_number <= amount_of_pages:
url = "https://www.curseforge.com/minecraft/mc-mods?filter-game-version=" + minecraft_version + "&filter-sort=5&page=" + str(page_number)
response = get(url)
data = BeautifulSoup(response.text, "html.parser")
list_of_mods = data.find_all("li", class_="project-list-item")
for mod in list_of_mods:
id_list = get_project_and_author_id(mod)
project_name = mod.find("h2", class_="list-item__title").text.strip()
project_id = id_list[0]
project_author_id = id_list[1]
project_category = mod.find("a", class_="category__item")["title"].strip()
project_description = mod.find("div", class_="list-item__description").p.text.strip()
project_downloads = int(mod.find("span", class_="count--download").text.strip().replace(",", ""))
project_link = mod.find("div", class_="list-item__details").a["href"]
mod_data = {}
mod_data["id"] = project_id
mod_data["name"] = project_name
mod_data["author_id"] = project_author_id
mod_data["category"] = project_category
mod_data["description"] = project_description
mod_data["downloads"] = project_downloads
mod_data["link"] = domain + project_link
mod_list.append(mod_data)
progress_percent = round((page_number / amount_of_pages) * 100, 2)
print("Done with page " + str(page_number) + "/" + str(amount_of_pages) + " (" + str(progress_percent) + "%)")
page_number = page_number + 1
with open(file_name, "w") as f:
amount_of_mods = len(mod_list)
pretty_json = json.loads(json.JSONEncoder().encode(mod_list))
f.write(json.dumps(pretty_json, indent=4))
print("Done indexing " + str(amount_of_mods) + " mods, see " + file_name + " for more details.")
user_version = input("0: 1.7.10\n1: 1.12.2\n_________\n->")
file_name = input("Name of output file (default is data.json):\n->")
if file_name == "":
file_name = "data.json"
if user_version == "0":
minecraft_1_7_10 = "2020709689%3A4449"
write_mods_to_json(minecraft_1_7_10, file_name)
elif user_version == "1":
minecraft_1_12_2 = "2020709689%3A6756"
write_mods_to_json(minecraft_1_12_2, file_name)
# product/views/brand_details.py (Rafeen/Inventory-Management-and-POS, MIT)
from django.shortcuts import render, redirect, get_object_or_404
from product.models.brand_model import Brand
from django.contrib.auth.decorators import login_required
@login_required(login_url='/login/')
def brand_detail_view(request, id):
"""
This view renders the Brand Detail page with details of the selected brand
"""
brand_obj = get_object_or_404(Brand, brand_id=id)
context = {
"brand": brand_obj,
"title": "Category Details"
}
return render(request, "brand_details.html", context)
# 1-100q/40.py (rampup01/Leetcode, MIT)
'''
Given a collection of candidate numbers (candidates) and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.
Each number in candidates may only be used once in the combination.
Note:
All numbers (including target) will be positive integers.
The solution set must not contain duplicate combinations.
Example 1:
Input: candidates = [10,1,2,7,6,1,5], target = 8,
A solution set is:
[
[1, 7],
[1, 2, 5],
[2, 6],
[1, 1, 6]
]
'''
class Solution(object):
def combinationSum2(self, candidates, target):
"""
:type candidates: List[int]
:type target: int
:rtype: List[List[int]]
"""
result = []
candidates.sort()
def recursive(candidates, target, currList, index):
if target < 0:
return
if target == 0:
result.append(currList)
return
previous = -1
for start in range(index, len(candidates)):
if previous != candidates[start]:
recursive(candidates, target - candidates[start], currList + [candidates[start]], start+1)
previous = candidates[start]
recursive(candidates, target, [], 0)
return result
# src/clustar_project/clustarray.py (jz5jx/Test_Repo, MIT)
import warnings
import numpy as np
import itertools
import matplotlib.pyplot as plt
class ClustArray:
''' Class for working with data from FITS images
Initialized from a numpy array from an image
Methods for denoising images
'''
def __init__(self, np_array):
self.im_array = np_array
self.noise_est = None
self.denoised_arr = None
def circle_crop(self, rad_factor = 1.0):
'''Function to crop square images to a circle
Params
------
rad_factor: float multiple allowing change to size of circle_crop
default is 1
value equal to 0.7 crops to a circle with radius that is 70% as large as the max image radius
values < 0 not allowed
values >= sqrt(2) will return original image
Outputs
-------
new_imdata: np array of same size as image data array, but with values outside radius set to nan;
sets self.denoised_arr to equal this array
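Example
-------
A standalone sketch of the masking step on a hypothetical 6x6 array
(mirrors the loop below; not tied to ClustArray):

```python
import numpy as np

# Mask every pixel whose index lies outside the inscribed circle.
arr = np.ones((6, 6))
rad = arr.shape[0] / 2
for ix, iy in np.ndindex(arr.shape):
    if (ix - rad) ** 2 + (iy - rad) ** 2 > rad ** 2:
        arr[ix, iy] = np.nan
# Corners now fall outside the circle and become NaN; the centre keeps its value.
```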
'''
if rad_factor < 0:
raise ValueError('rad_factor must be >= 0')
if self.denoised_arr is None:
new_imdata = self.im_array.copy()
else:
new_imdata = self.denoised_arr.copy()
rad = (new_imdata.shape[0]/2)
rad_sq = (rad*rad_factor)**2
for ix,iy in np.ndindex(new_imdata.shape):
if (ix - rad)**2 + (iy - rad)**2 > rad_sq:
new_imdata[ix, iy] = np.nan
self.denoised_arr = new_imdata
return new_imdata
def pb_multiply(self, pb_array):
'''Function to multiply a FITS image by a .pb file to deemphasize edges
Inputs
------
pb_array: numpy array from a .pb file
Outputs
-------
new_imdata: np array of same size as image data array
consisting of elementwise multiplication of image and pb file;
sets self.denoised_arr to equal this array
'''
if self.denoised_arr is None:
imdata = self.im_array.copy()
else:
imdata = self.denoised_arr.copy()
new_imdata = np.multiply(imdata, pb_array)
self.denoised_arr = new_imdata
return new_imdata
def get_noise_level(self, nchunks = 3, rms_quantile = 0):
'''Calculates estimated noise level in image intensity
Stores value in the ClustArray object's noise_est attribute
Arguments
---------
nchunks: int number of chunks to use in grid, must be odd
rms_quantile: float in range [0, 1] indicating quantile of chunk RMS to use for noise level (0 = min RMS, 0.5 = median, etc)
Returns
-------
noise: float estimated noise in image intensity values;
sets self.noise_est to this value
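Example
-------
A standalone sketch of the chunked-RMS idea on a hypothetical 6x6 image
split into a 3x3 grid (not tied to ClustArray):

```python
import numpy as np

# One chunk holds a bright "source"; the rest are empty, so the
# minimum per-chunk RMS (rms_quantile = 0) gives the noise floor.
img = np.zeros((6, 6))
img[0:2, 0:2] = 10.0
rms = [np.sqrt(np.nanmean(img[i:i + 2, j:j + 2] ** 2))
       for i in range(0, 6, 2) for j in range(0, 6, 2)]
noise = np.quantile(rms, q=0)
```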
'''
if self.denoised_arr is None:
imdata = self.im_array.copy()
warnings.warn('Calculating noise level from uncleaned image')
else:
imdata = self.denoised_arr.copy()
# Break the image into chunks and analyse each one; at least one chunk
# should contain no signal and therefore gives an estimate of the noise (= RMS).
# An odd number of chunks in each direction is used so that the centre of the
# image does not correspond to the edge of chunks; when you ask for
# observations with ALMA, you usually specify that the object of interest
# be in the center of your image.
size = [i//nchunks for i in imdata.shape]
remain = [i % nchunks for i in imdata.shape]
chunks = dict()
k = 0
for j,i in itertools.product(range(nchunks),range(nchunks)):
chunks[k] = size.copy()
k += 1
# Next, account for when the image dimensions are not evenly divisible by `nchunks`.
row_remain, column_remain = 0, 0
for k in chunks:
if k % nchunks < remain[0]:
row_remain = 1
if k // nchunks < remain[1]:
column_remain = 1
if row_remain > 0:
chunks[k][0] += 1
row_remain -= 1
if column_remain > 0:
chunks[k][1] += 1
column_remain -= 1
# With that in hand, calculate the lower left corner indices of each chunk.
indices = dict()
for k in chunks:
indices[k] = chunks[k].copy()
if k % nchunks == 0:
indices[k][0] = 0
elif k % nchunks != 0:
indices[k][0] = indices[k-1][0] + chunks[k][0]
if k >= nchunks:
indices[k][1] = indices[k-nchunks][1] + chunks[k][1]
else:
indices[k][1] = 0
stddev_chunk = dict()
rms_chunk = dict()
for k in chunks:
i,j = indices[k]
di,dj = chunks[k]
x = imdata[i:i+di,j:j+dj]
stddev_this = np.nanstd(x)
rms_this = np.sqrt(np.nanmean(x**2))
stddev_chunk[k] = stddev_this
rms_chunk[k] = rms_this
noise = np.quantile(list(rms_chunk.values()), q = rms_quantile)
self.noise_est = noise
return(noise)
def denoise(self, pb_array = None, rad_factor = 1.0, rms_quantile = 0, grid_chunks = 3):
'''Wrapper function to perform entire denoising process
Crops image to a circle, multiplies by a pb file (if desired), and calculates RMS noise level
Inputs
------
pb_array: optional numpy array from a .pb file
(the image itself is taken from self.im_array)
Params
------
rad_factor: float multiple allowing change to size of circle_crop
default is 1
value equal to 0.7 crops to a circle with radius that is 70% as large as the max image radius
values < 0 not allowed
values >= sqrt(2) will return original image
grid_chunks: int number of chunks to use in grid, must be odd
rms_quantile: float in range [0, 1] indicating quantile of chunk RMS to use for noise level (0 = min RMS, 0.5 = median, etc)
Outputs
-------
noise_lvl: float estimated noise level of the denoised image;
also sets self.denoised_arr and self.noise_est as side effects
'''
self.circle_crop(rad_factor)
if pb_array is not None:
self.pb_multiply(pb_array)
noise_lvl = self.get_noise_level(nchunks = grid_chunks, rms_quantile = rms_quantile)
return(noise_lvl)
def extract_subgroup(self, group_indices, square = True, buffer = 0.0):
'''Function for extracting a subgroup of an image
Inputs
------
group_indices: list containing indices of subgroup [row_min, row_max, col_min, col_max]
Params
------
square: if True, widen shorter axis range to make subgroup a square
buffer: fraction to add to each dimension
(e.g. if subgroup is 200x200 pixels, buffer = 0.1 will return 220x220 pixels)
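Example
-------
A standalone sketch of the squaring arithmetic on a hypothetical 10x10
image (mirrors the diff logic below; not tied to ClustArray):

```python
import numpy as np

# A 2-row by 4-column region is widened to 4x4 before slicing.
im = np.arange(100).reshape(10, 10)
row_min, row_max, col_min, col_max = 4, 6, 2, 6
diff = (row_max - row_min) - (col_max - col_min)   # -2: row range too short
row_min += int(np.floor(diff / 2))                 # 4 -> 3
row_max -= int(np.ceil(diff / 2))                  # 6 -> 7
sub = im[row_min:row_max, col_min:col_max]
```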
'''
row_min = group_indices[0]
row_max = group_indices[1]
col_min = group_indices[2]
col_max = group_indices[3]
if square:
diff = (row_max - row_min) - (col_max - col_min)
if diff == 0:
#already square
pass
elif diff < 0:
#adjust row min/max
row_min += int(np.floor(diff/2))
row_max -= int(np.ceil(diff/2))
else:
#adjust col min/max
col_min -= int(np.floor(diff/2))
col_max += int(np.ceil(diff/2))
buffer_width = int(buffer*(col_max - col_min)/2)
buffer_height = int(buffer*(row_max - row_min)/2)
row_min -= buffer_height
row_max += buffer_height
col_min -= buffer_width
col_max += buffer_width
subgroup = self.im_array[row_min:row_max, col_min:col_max]
return subgroup
def plot_subgroup(self, group_indices, square = True, buffer = 0.0, colorbar = True):
'''Function for plotting a subgroup of an image
Inputs
------
group_indices: list containing indices of subgroup [row_min, row_max, col_min, col_max]
Params
------
square: if True, widen shorter axis range to make subgroup a square
buffer: fraction to add to each dimension
(e.g. if subgroup is 200x200 pixels, buffer = 0.1 will return 220x220 pixels)
colorbar: boolean indicating whether or not to include a colorbar with the plot
'''
subgroup = self.extract_subgroup(group_indices, square, buffer)
plt.imshow(subgroup, origin='lower')
if colorbar:
plt.colorbar()
# rpa_logger/task.py (kangasta/rpa_logger, MIT)
'''Constants and helpers for describing RPA tasks and their status.
'''
from collections import Counter
from dataclasses import dataclass
from typing import Any, Dict, Hashable, List, Union
from uuid import uuid4
from .utils import timestamp
from .utils.output import OutputText
STARTED = 'STARTED'
SUCCESS = 'SUCCESS'
IGNORED = 'IGNORED'
FAILURE = 'FAILURE'
ERROR = 'ERROR'
SKIPPED = 'SKIPPED'
STATUSES = (STARTED, SUCCESS, IGNORED, FAILURE, ERROR, SKIPPED,)
@dataclass
class BaseTask:
'''Base class to define common functionality of `rpa_logger.task.Task` and
`rpa_logger.task.TaskSuite`
'''
type: str
'''Used to identify task type, when task is presented as dict'''
name: Union[str, None]
'''Human-readable name of the task.'''
status: str
'''Describes state of the task. For example `SUCCESS` or `ERROR`.'''
started: str
'''UTC ISO-8601 timestamp that stores the start time of the task.
Defined automatically when instance is created.
'''
finished: Union[str, None]
'''UTC ISO-8601 timestamp that stores the finish time of the task.
Defined automatically when `rpa_logger.task.BaseTask.finish` method is
called.
'''
metadata: Dict[str, Any]
'''Container for any other data stored in the task. Could, for example,
contain information about the execution environment or data that was
processed in the task.
'''
def __init__(self, name: Union[str, None], status: str = STARTED) -> None:
'''
Args:
name: Name of the task.
status: Status to use for the started task.
'''
self.status = status
self.name = name
self.started = timestamp()
self.finished = None
self.metadata = dict()
def finish(self, status) -> None:
'''Set finished timestamp and end status of the task
Args:
status: Status to use for the finished task.
'''
self.status = status
self.finished = timestamp()
def log_metadata(self, key: str, value: Any) -> None:
'''Log metadata for the task.
Args:
key: Key for the metadata item.
value: Value for the metadata item. If task data is saved as json
or yaml, this value must be serializable.
'''
self.metadata[key] = value
@dataclass
class Task(BaseTask):
'''Defines single task and stores its output and metadata
'''
output: List[OutputText]
def __init__(self, name: str, status: str = STARTED):
'''
Args:
name: Name of the task.
status: Status to use for the started task.
'''
super().__init__(name, status)
self.output = list()
@property
def type(self):
return 'TASK'
def log_output(self, text: str, stream: str = 'stdout') -> None:
'''Append new `rpa_logger.utils.output.OutputText` to task output.
Args:
text: Output text content.
stream: Output stream. Defaults to `stdout`.
'''
self.output.append(OutputText(text, stream))
@dataclass
class TaskSuite(BaseTask):
'''Defines task suite and stores its tasks and metadata
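
A standalone sketch of the suite's bookkeeping pattern, with plain dicts
standing in for `Task` objects: tasks are keyed by an id, ordered by
start time, and stay active until a finish timestamp is set.

```python
# Hypothetical timestamps; real tasks use UTC ISO-8601 strings, which
# sort chronologically as plain strings.
tasks = {
    "a": {"started": "2021-01-01T00:00:01", "finished": None},
    "b": {"started": "2021-01-01T00:00:00", "finished": "2021-01-01T00:00:05"},
}
ordered = sorted(tasks.values(), key=lambda t: t["started"])
active = [t for t in ordered if t["finished"] is None]
```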
'''
description: Union[str, None]
tasks: List[Task]
def __init__(
self,
name: Union[str, None],
description: str = None,
status: str = STARTED):
'''
Args:
name: Name of the task suite.
description: Description of the task suite.
status: Status to use for the started task suite.
'''
super().__init__(name, status)
self.description = description
self._tasks: Dict[Hashable, Task] = dict()
@property
def type(self):
return 'SUITE'
@property
def tasks(self) -> List[Task]: # pylint: disable=function-redefined
'''Return suites tasks as list sorted by the started time.
'''
tasks = list(self._tasks.values())
tasks.sort(key=lambda i: i.started)
return tasks
@property
def active_tasks(self) -> List[Task]:
'''Return the suite's active tasks as a list sorted by start time.
A task is active until it is finished, i.e. while its finished
attribute is None.
'''
return [i for i in self.tasks if i.finished is None]
@property
def task_status_counter(self) -> Counter:
'''Return a `Counter` instance initialized with the suite's task statuses.
'''
return Counter(i.status for i in self._tasks.values())
def create_task(
self,
name: str,
key: Hashable = None,
status: str = STARTED) -> Hashable:
'''Create new task and store it in the suite tasks.
Args:
name: Name of the task.
key: Key to identify the created task with.
status: Status to use for the started task.
Returns:
Key of the created task.
'''
if not key:
key = uuid4()
self._tasks[key] = Task(name, status)
return key
def log_task(self, status: str, name: str) -> Hashable:
'''Create and finish a new task.
Args:
name: Name of the task.
status: Status to use for the finished task.
Returns:
Key of the created task.
'''
key = self.create_task(name)
self.finish_task(key, status)
return key
def finish_task(self, key: Hashable, status: str) -> None:
'''Set finished timestamp and end status of the task
Args:
key: Key of the task to finish
status: Status to use for the finished task.
'''
return self._tasks[key].finish(status)
def get_task(self, key: Hashable) -> Task:
'''Get `rpa_logger.task.Task` with given key.
Args:
key: Key to try to find from suite.
Returns:
Task with matching key.
'''
return self._tasks.get(key)
def log_metadata(
self,
key: str,
value: Any,
task_key: Hashable = None) -> None:
'''Log metadata into the task suite or any of its tasks.
Args:
key: Key for the metadata item.
value: Value for the metadata item. If task data is saved as json
or yaml, this value must be serializable.
task_key: Key of a task to log metadata into. If None, metadata
is logged to the suite.
'''
if task_key:
self._tasks[task_key].log_metadata(key, value)
return
super().log_metadata(key, value)
def log_output(self, key: Hashable, text: str,
stream: str = 'stdout') -> None:
'''Append new `rpa_logger.utils.output.OutputText` to task output.
Args:
key: Key of the task to log output to.
text: Output text content.
stream: Output stream. Defaults to `stdout`.
'''
self._tasks[key].log_output(text, stream)
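The suite above tracks tasks in a key-indexed dict and summarizes their statuses with `collections.Counter`. A minimal standalone sketch of that create/finish pattern (names and statuses here are illustrative; rpa_logger itself is not imported):

```python
from collections import Counter
from uuid import uuid4

# Hypothetical miniature of TaskSuite's key-based task bookkeeping.
tasks = {}

def create_task(name, key=None, status='STARTED'):
    key = key if key is not None else uuid4()
    tasks[key] = {'name': name, 'status': status}
    return key

def finish_task(key, status):
    tasks[key]['status'] = status

key = create_task('demo task')
finish_task(key, 'SUCCESS')
# Mirrors the task_status_counter property: count tasks per status.
status_counter = Counter(t['status'] for t in tasks.values())
```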
| 29.549587 | 78 | 0.586212 | 889 | 7,151 | 4.654668 | 0.173228 | 0.030449 | 0.030449 | 0.028758 | 0.389319 | 0.337361 | 0.332286 | 0.278154 | 0.195505 | 0.182697 | 0 | 0.002063 | 0.322193 | 7,151 | 241 | 79 | 29.672199 | 0.851661 | 0.351419 | 0 | 0.214286 | 0 | 0 | 0.018021 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.173469 | false | 0 | 0.061224 | 0.020408 | 0.459184 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b0d8b35ff7e943b202f21481e50e5769f2ff2f4 | 13,760 | py | Python | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 16 | 2021-07-01T16:18:26.000Z | 2022-02-21T05:19:39.000Z | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 1 | 2022-02-22T22:46:37.000Z | 2022-02-22T22:46:37.000Z | src/graph_construction.py | chrisdxie/rice | c3e42822226af9ac28d95d434cd582386122b679 | [
"MIT"
] | 1 | 2021-11-08T19:52:40.000Z | 2021-11-08T19:52:40.000Z | import sys, os
import numpy as np
import cv2
import torch
import torch.nn.functional as F
from torch_geometric.data import Data, Batch
import torchvision.transforms as transforms
from . import constants
from .util import utilities as util_
def get_resnet50_fpn_model(pretrained=True, trainable_layer_names=[]):
"""Load ResNet50 + FPN model, pre-trained on COCO 2017."""
import torchvision.models.detection.backbone_utils as backbone_utils
from torch.utils.model_zoo import load_url
pretrained_backbone=False
rn50_fpn = backbone_utils.resnet_fpn_backbone('resnet50', pretrained_backbone)
# This is an instance of BackboneWithFPN: https://github.com/pytorch/vision/blob/master/torchvision/models/detection/backbone_utils.py#L11
if pretrained:
model_urls = {
'maskrcnn_resnet50_fpn_coco':
'https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth',
}
pretrained_state_dict = load_url(model_urls['maskrcnn_resnet50_fpn_coco'],
progress=True)
# Hack to load only the backbone weights to the model, instead of all of MaskRCNN
rn50_fpn_dict = rn50_fpn.state_dict()
pretrained_dict = {k : pretrained_state_dict['backbone.' + k] for k in rn50_fpn_dict.keys()}
rn50_fpn_dict.update(pretrained_dict)
rn50_fpn.load_state_dict(rn50_fpn_dict)
rn50_fpn = rn50_fpn.to(constants.DEVICE)
# Freeze layers unless specified
for name, parameter in rn50_fpn.named_parameters():
parameter.requires_grad_(False)
for layer_name in trainable_layer_names:
if layer_name in name:
parameter.requires_grad_(True)
return rn50_fpn
def extract_rgb_img_features(model, img):
"""Run model (COCO2017 pre-trained ResNet50+FPN) on image.
Args:
model: output from get_resnet50_fpn_model()
img: a [3 x H x W] torch.FloatTensor. Should have been standardized already
Returns:
an OrderedDict of torch.FloatTensors of shape [1, 256, H, W].
"""
H,W = img.shape[1:]
features = model(img.unsqueeze(0).to(constants.DEVICE))
for key in list(features.keys()): # copy the keys so 'pool' can be deleted mid-loop
if key == 'pool':
del features[key]
continue
features[key] = F.interpolate(features[key], size=(H,W), mode='bilinear')
return features
def FPN_feature_key(mask):
"""Compute which FPN layer to use.
Args:
mask: a [H x W] torch tensor with values in {0,1}
Returns:
a string
"""
x_min, y_min, x_max, y_max = util_.mask_to_tight_box(mask)
roi_w = (x_max - x_min + 1).float()
roi_h = (y_max - y_min + 1).float()
k = torch.floor(4 + torch.log2(torch.sqrt(roi_w*roi_h)/224)) # Taken from FPN paper
k = min(max(int(k), 2), 5)
features_key = str(k-2) # P2 -> '0', P3 -> '1', P4 -> '2', P5 -> '3'
return features_key
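The level selection above implements the RoI-to-FPN-level heuristic from the FPN paper; the same arithmetic can be checked without torch:

```python
import math

# Pure-Python restatement of FPN_feature_key's level computation:
# k = floor(4 + log2(sqrt(w * h) / 224)), clamped to [2, 5].
def fpn_level(roi_w, roi_h):
    k = math.floor(4 + math.log2(math.sqrt(roi_w * roi_h) / 224))
    return min(max(int(k), 2), 5)

# A 224x224 RoI lands on P4 (k=4), i.e. features key str(4 - 2) == '2';
# very small RoIs clamp to P2 and very large ones to P5.
```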
def crop_tensor_to_nchw(tensor,
x_min, y_min, x_max, y_max,
img_size=(64,64),
mode='bilinear'):
"""Crop a tensor and reshape.
Args:
tensor: a torch.Tensor of shape [H x W], [C x H x W], or [N x C x H x W]
x_min: int
y_min: int
x_max: int
y_max: int
x_axis: int
y_axis: int
img_size: tuple of (H, W)
Returns:
a torch.Tensor of shape [N x C x img_size[0] x img_size[1]]
"""
y_axis = tensor.ndim - 2
x_axis = tensor.ndim - 1
crop = torch.narrow(tensor, y_axis, y_min, y_max - y_min + 1)
crop = torch.narrow(crop, x_axis, x_min, x_max - x_min + 1)
while crop.ndim < 4: # NCHW
crop.unsqueeze_(0)
crop = F.interpolate(crop, img_size, mode=mode)
return crop
def construct_segmentation_graph(rgb_img_features,
xyz_img,
masks,
create_edge_indices=True,
compute_bg_node=True,
neighbor_dist=10,
padding_config=None,
device=None):
"""Construct Graph from img + masks.
Args:
rgb_img_features: an OrderedDict of image features. Output of extract_rgb_img_features()
xyz_img: a [3 x H x W] torch.FloatTensor. 3D point cloud from camera frame of reference
masks: a [H x W] torch.FloatTensor of masks in {0, 1, ..., K-1}. HW
OR
a [N x H x W] torch.FloatTensor of masks in {0,1}. NHW
compute_bg_node: bool.
create_edge_indices: bool.
neighbor_dist: int. Used to create edge indices.
padding_config: a Python dictionary with padding parameters.
Returns:
graph: a torch_geometric.data.Data instance with keys:
- rgb: a [N, 256, h, w] torch.FloatTensor of ResNet50+FPN rgb image features
- depth: a [N, 3, h, w] torch.FloatTensor. XYZ image
- mask: a [N, h, w] torch.FloatTensor of values in {0, 1}
- orig_masks: a [N, H, W] torch.FloatTensor of values in {0, 1}. Original image size.
- crop_indices: a [N, 4] torch.LongTensor. xmin, ymin, xmax, ymax.
"""
if device is None:
device = constants.DEVICE
H, W = xyz_img.shape[1:]
if padding_config is None:
padding_config = {
'inference' : True,
'padding_percentage' : 0.25,
'new_H' : 64,
'new_W' : 64,
}
new_H = padding_config['new_H']
new_W = padding_config['new_W']
# Get relevant masks
if masks.ndim == 2:
orig_masks = util_.convert_mask_HW_to_NHW(masks, to_ignore=range(0,constants.OBJECTS_LABEL)) # [N x H x W]
elif masks.ndim == 3:
orig_masks = masks
masks = util_.convert_mask_NHW_to_HW(orig_masks, start_label=constants.OBJECTS_LABEL)
else:
raise Exception(f"<masks> MUST be in HW or NHW format. Got shape: {masks.shape}...")
N = orig_masks.shape[0] # Number of objects, and nodes in graph
# Crop/Resize Masks/Depth
rgb_channels_dim = 256 # hard-coded based on ResNet50+FPN output
rgb_cr = torch.zeros((N, rgb_channels_dim, new_H, new_W), dtype=torch.float32, device=device) # background node is appended later if compute_bg_node
depth_cr = torch.zeros((N, 3, new_H, new_W), dtype=torch.float32, device=device)
mask_cr = torch.zeros((N, 1, new_H, new_W), dtype=torch.float32, device=device)
crop_indices = torch.zeros((N, 4), dtype=torch.long, device=device)
for i, mask in enumerate(orig_masks):
x_min, y_min, x_max, y_max = util_.crop_indices_with_padding(mask, padding_config, inference=padding_config['inference'])
crop_indices[i] = torch.stack([x_min, y_min, x_max, y_max])
features_key = FPN_feature_key(mask)
layer_features = rgb_img_features[features_key] # Shape: [1 x C x h x w]. C = 256
rgb_cr[i] = crop_tensor_to_nchw(layer_features, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W))[0]
depth_cr[i] = crop_tensor_to_nchw(xyz_img, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W), mode='nearest')[0]
mask_cr[i] = crop_tensor_to_nchw(mask, x_min, y_min, x_max, y_max,
img_size=(new_H, new_W), mode='nearest')[0]
# Background node
if compute_bg_node:
crop_indices = torch.cat([torch.LongTensor([[0, 0, W-1, H-1]]).to(device),
crop_indices], axis=0)
rgb_cr = torch.cat([crop_tensor_to_nchw(rgb_img_features['3'], *crop_indices[0]), # deepest layer. Semantic features
rgb_cr], axis=0)
depth_cr = torch.cat([crop_tensor_to_nchw(xyz_img, *crop_indices[0]).to(device),
depth_cr], axis=0)
bg_orig_mask = (masks == 0).float().unsqueeze(0) # [1, H, W]
orig_masks = torch.cat([bg_orig_mask,
orig_masks], axis=0)
mask_cr = torch.cat([crop_tensor_to_nchw(bg_orig_mask, *crop_indices[0]),
mask_cr], axis=0)
N += 1
# Check to make sure no masks are 0
valid_indices = []
for i in range(N):
if torch.sum(mask_cr[i]) > 0:
valid_indices.append(i)
valid_indices = np.array(valid_indices)
N = len(valid_indices)
graph = Data(rgb=rgb_cr[valid_indices],
depth=depth_cr[valid_indices],
mask=mask_cr[valid_indices],
orig_masks=orig_masks[valid_indices],
crop_indices=crop_indices[valid_indices],
)
if create_edge_indices:
build_edge_index(graph, neighbor_dist=neighbor_dist)
graph = graph.to(device)
return graph
def build_edge_index(graph, neighbor_dist):
edge_index = util_.neighboring_mask_indices(graph.orig_masks, reduction_factor=1,
neighbor_dist=neighbor_dist)
edge_index = torch.cat([edge_index, edge_index.flip([1])], dim=0).T # Shape: [2 x E]
graph.edge_index = edge_index.to(graph.mask.device)
def remove_bg_node(data_list):
"""Return a list of new graphs with background node removed.
Note: the RGB/Depth/Mask is not copied over, but assigned. Thus, losses can be applied to
the new graphs (w/out BG nodes) and gradients will still flow through the old graphs.
Args:
data_list: Can be a torch_geometric.Data instance, a torch_geometric.Batch instance,
or a list of torch_geometric.Data instances.
Returns:
Same data type as the input: a copy of the graphs, but without background nodes and with updated edge_indices.
"""
if isinstance(data_list, Data):
input_type = 'Data'
data_list = [data_list]
elif isinstance(data_list, Batch):
data_list = Batch.to_data_list(data_list)
input_type = 'Batch'
elif isinstance(data_list, list):
input_type = 'list'
else:
raise NotImplementedError()
# Note: data_list is now of type list
new_data_list = []
for graph in data_list:
# Double check to make sure background node hasn't already been removed
if 'background_removed' in graph:
raise Exception("Cannot remove background node if it has already been removed...")
new_graph = Data()
new_graph.rgb = graph.rgb[1:]
new_graph.depth = graph.depth[1:]
new_graph.mask = graph.mask[1:]
new_graph.orig_masks = graph.orig_masks[1:]
new_graph.crop_indices = graph.crop_indices[1:]
new_graph.background_removed = True
# Special cases
if 'edge_index' in graph:
edge_mask = torch.all(graph.edge_index != 0, dim=0) # [E]
new_graph.edge_index = graph.edge_index[:, edge_mask] - 1 # -1 since we removed background
if 'paths' in graph and 'split' in graph: # Splitting is stored
new_graph.paths = {k - 1: graph.paths[k] for k in graph.paths.keys()}
new_graph.split = graph.split[1:]
new_data_list.append(new_graph)
if input_type == 'Data':
return new_data_list[0]
elif input_type == 'Batch':
return convert_list_to_batch(new_data_list)
elif input_type == 'list':
return new_data_list
def convert_list_to_batch(graph_list, external_key='crop_indices'):
"""Convert list of graphs into a Batch(Data) instance.
Args:
graph_list: a Python list of torch_geometric.data.Data instances
Returns:
a torch_geometric.data.Batch instance
"""
for graph in graph_list:
if 'x' not in graph.keys: # Batch.from_data_list needs 'x' to run correctly (to compute graph.num_nodes)
graph.x = graph[external_key]
return Batch.from_data_list(graph_list)
def convert_batch_to_list(batch_graph):
"""Convert Batch(Data) instance into a list of Data instances.
Undoes the convert_list_to_batch() function.
Args:
batch_graph: a torch_geometric.Batch instance
Returns:
a Python list of torch_geometric.data.Data instances.
"""
return Batch.to_data_list(batch_graph)
def get_edge_graph(graph, rgb_img_features, xyz_img, padding_config=None):
"""Compute graph where each node is an edge of original graph.
Creates a new graph such that each node in the new graph corresponds
to an edge in the original graph. The new graph is constructed
in the same way, but the crop_indices cover the union of the
masks. This graph has no edges.
Args:
graph: a torch_geometric.data.Data instance
rgb_img_features: an OrderedDict of image features. Output of extract_rgb_img_features()
xyz_img: a [3 x H x W] torch.FloatTensor. 3D point cloud from camera frame of reference
padding_config: a Python dictionary.
Returns:
a torch_geometric.Data instance
"""
union_orig_masks = torch.clamp(graph.orig_masks[graph.edge_index[0]] + \
graph.orig_masks[graph.edge_index[1]], max=1) # Shape: [E x H x W]
return construct_segmentation_graph(
rgb_img_features,
xyz_img,
union_orig_masks,
compute_bg_node=False,
create_edge_indices=False,
padding_config=padding_config
)
def add_zero_channel_to_masks(graph):
"""Add an empty channel of 0's to graph.mask."""
graph.mask = torch.cat([graph.mask, torch.zeros_like(graph.mask)], dim=1)
| 38.328691 | 142 | 0.622892 | 1,975 | 13,760 | 4.11038 | 0.170633 | 0.018724 | 0.004435 | 0.004435 | 0.215447 | 0.166174 | 0.130574 | 0.112097 | 0.098793 | 0.066272 | 0 | 0.018933 | 0.282195 | 13,760 | 358 | 143 | 38.435754 | 0.802977 | 0.308067 | 0 | 0.030928 | 0 | 0 | 0.048015 | 0.005687 | 0 | 0 | 0 | 0 | 0 | 1 | 0.056701 | false | 0 | 0.056701 | 0 | 0.170103 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b0eda6bd56eca2e8afd3f6bc6ca556e0abd44cd | 374 | py | Python | Learning+Misc/Quantum-Circuits/deutsch_oracle.py | Gregory-Eales/QA-Reimplementations | bef0b3e67397a73c468e539c426c6629d398433b | [
"MIT"
] | 1 | 2019-05-03T21:48:29.000Z | 2019-05-03T21:48:29.000Z | Learning+Misc/Quantum-Circuits/deutsch_oracle.py | Gregory-Eales/QA-Reimplementations | bef0b3e67397a73c468e539c426c6629d398433b | [
"MIT"
] | null | null | null | Learning+Misc/Quantum-Circuits/deutsch_oracle.py | Gregory-Eales/QA-Reimplementations | bef0b3e67397a73c468e539c426c6629d398433b | [
"MIT"
] | null | null | null | import cirq
import numpy as np
class DeutschOracle(object):
def __init__(self):
self.circuit = cirq.Circuit()
self.x_gate = cirq.X
self.hadmard_gate = cirq.H
self.measure_gate = cirq.MeasurementGate
def init_qubits(self, length=3):
q = [cirq.GridQubit(i, j) for i in range(length) for j in range(length)]
return q
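The Hadamard gate stored above is its own inverse (H·H = I), which is easy to verify with plain nested lists, no cirq required:

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # the Hadamard matrix

def matmul2(a, b):
    # 2x2 matrix product with plain lists
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

HH = matmul2(H, H)  # should equal the 2x2 identity, up to float error
```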
| 23.375 | 80 | 0.639037 | 54 | 374 | 4.277778 | 0.537037 | 0.103896 | 0.112554 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003636 | 0.264706 | 374 | 15 | 81 | 24.933333 | 0.836364 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.545455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
2b0f54ec033255faf4b2fe939082f385da0b5246 | 263 | py | Python | example.py | pnsn/python_template | 1e09b87149407ee8e5f5ed78f57244470cb1415d | [
"MIT"
] | null | null | null | example.py | pnsn/python_template | 1e09b87149407ee8e5f5ed78f57244470cb1415d | [
"MIT"
] | null | null | null | example.py | pnsn/python_template | 1e09b87149407ee8e5f5ed78f57244470cb1415d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
'''
All projects should use python 3
virtualenv symlinks python, python3, python3.x to environments python
envoke with either python example.py
or ./example.py
'''
def main():
print("Hello PNSN")
if __name__ == "__main__":
main() | 18.785714 | 69 | 0.711027 | 37 | 263 | 4.837838 | 0.756757 | 0.100559 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018265 | 0.1673 | 263 | 14 | 70 | 18.785714 | 0.799087 | 0.673004 | 0 | 0 | 0 | 0 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0 | 0 | 0.25 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
2b1231e92f21f35984d0911fe8b686f6f0ee27b3 | 4,272 | py | Python | python/tests/wind_settings_test.py | anth-dj/geog5003m_project | 51caa4255a04cc7043dde9ff94e654c41fc1620c | [
"MIT"
] | null | null | null | python/tests/wind_settings_test.py | anth-dj/geog5003m_project | 51caa4255a04cc7043dde9ff94e654c41fc1620c | [
"MIT"
] | null | null | null | python/tests/wind_settings_test.py | anth-dj/geog5003m_project | 51caa4255a04cc7043dde9ff94e654c41fc1620c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Wind Settings unit tests
@author: Anthony Jarrett
"""
import unittest
from python.src.simulation import particleframework
class WindSettingsTestCase(unittest.TestCase):
def test_init(self):
# Initialize parameters
north_percentage = 5
east_percentage = 10
south_percentage = 20
west_percentage = 65
# Create wind settings
wind_settings = particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
# Verify the wind settings
self.assertIsNotNone(wind_settings)
self.assertEqual(wind_settings.north_percentage, north_percentage)
self.assertEqual(wind_settings.east_percentage, east_percentage)
self.assertEqual(wind_settings.south_percentage, south_percentage)
self.assertEqual(wind_settings.west_percentage, west_percentage)
def test_sum_too_large(self):
# Initialize parameters
north_percentage = 5
east_percentage = 10
south_percentage = 20
west_percentage = 75
try:
# Create wind settings
particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
self.fail()
except Exception as e:
self.assertIsNotNone(e)
def test_sum_too_small(self):
# Initialize parameters
north_percentage = 5
east_percentage = 10
south_percentage = 20
west_percentage = 15
try:
# Create wind settings
particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
self.fail()
except Exception as e:
self.assertIsNotNone(e)
def test_get_next_north(self):
# Initialize parameters
north_percentage = 100
east_percentage = 0
south_percentage = 0
west_percentage = 0
# Create wind settings
wind_settings = particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
# Verify the next direction
next_direction = wind_settings.get_next()
self.assertEqual(next_direction, particleframework.Direction.NORTH)
def test_get_next_east(self):
# Initialize parameters
north_percentage = 0
east_percentage = 100
south_percentage = 0
west_percentage = 0
# Create wind settings
wind_settings = particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
# Verify the next direction
next_direction = wind_settings.get_next()
self.assertEqual(next_direction, particleframework.Direction.EAST)
def test_get_next_south(self):
# Initialize parameters
north_percentage = 0
east_percentage = 0
south_percentage = 100
west_percentage = 0
# Create wind settings
wind_settings = particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
# Verify the next direction
next_direction = wind_settings.get_next()
self.assertEqual(next_direction, particleframework.Direction.SOUTH)
def test_get_next_west(self):
# Initialize parameters
north_percentage = 0
east_percentage = 0
south_percentage = 0
west_percentage = 100
# Create wind settings
wind_settings = particleframework.WindSettings(
north_percentage,
east_percentage,
south_percentage,
west_percentage
)
# Verify the next direction
next_direction = wind_settings.get_next()
self.assertEqual(next_direction, particleframework.Direction.WEST)
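These tests assume WindSettings.get_next honors the configured percentages; a hedged sketch of one way such a picker could work (the actual particleframework implementation may differ):

```python
import random

# Hypothetical weighted picker; the direction names mirror the Direction
# enum exercised by the tests above.
def next_direction(north, east, south, west, rng=random):
    directions = ['NORTH', 'EAST', 'SOUTH', 'WEST']
    return rng.choices(directions, weights=[north, east, south, west])[0]

# With a single 100% weight the result is deterministic, matching the
# degenerate cases the tests check.
```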
| 27.037975 | 75 | 0.617041 | 393 | 4,272 | 6.430025 | 0.157761 | 0.10922 | 0.075979 | 0.080332 | 0.815196 | 0.755837 | 0.755837 | 0.755837 | 0.722596 | 0.722596 | 0 | 0.016445 | 0.330993 | 4,272 | 157 | 76 | 27.210191 | 0.86774 | 0.122659 | 0 | 0.673267 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108911 | 1 | 0.069307 | false | 0 | 0.019802 | 0 | 0.09901 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2b12bcc09d43893147348ccc3696625e690b010c | 3,817 | py | Python | src/views/botones/informacion/boton_informacion.py | julianVelandia/UI_RETEDECON | 87b707f5c1553446fc92265db9da50f292e2f2d1 | [
"MIT"
] | 3 | 2022-02-27T02:15:52.000Z | 2022-02-28T15:16:40.000Z | src/views/botones/informacion/boton_informacion.py | julianVelandia/UI_RETEDECON | 87b707f5c1553446fc92265db9da50f292e2f2d1 | [
"MIT"
] | null | null | null | src/views/botones/informacion/boton_informacion.py | julianVelandia/UI_RETEDECON | 87b707f5c1553446fc92265db9da50f292e2f2d1 | [
"MIT"
] | null | null | null | from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *
# local imports
from .funciones_informacion import Funcion_informacion
from src.views.botones.inicio.funciones import *
class Boton_informacion(Funcion_informacion):
def boton_informacion_manual(self, widget):
self.informacion_manual = QToolButton(widget)
self.informacion_manual.setText('Manual de Usuario')
self.informacion_manual.setObjectName("button") # object name linked to the CSS
self.informacion_manual.setIcon(QIcon('src/views/static/icons/icono_manual_usuario')) # icon
self.informacion_manual.setIconSize(QSize(self.height/11, self.height/11))
self.informacion_manual.setToolButtonStyle(Qt.ToolButtonTextUnderIcon)
self.informacion_manual.setGeometry(self.width/4.5, self.height/2.8,
self.width/4, self.height/3.9)
self.informacion_manual.clicked.connect(self.InformacionManual)
self.informacion_manual.setVisible(False)
def boton_informacion_fabricante(self, widget):
self.informacion_fabricante = QToolButton(widget)
self.informacion_fabricante.setText('Información del\nFabricante')
self.informacion_fabricante.setObjectName("button") # object name linked to the CSS
self.informacion_fabricante.setIcon(QIcon('src/views/static/icons/favicon3')) # icon
self.informacion_fabricante.setIconSize(QSize(self.height/11, self.height/11))
self.informacion_fabricante.setToolButtonStyle(Qt.ToolButtonTextUnderIcon)
self.informacion_fabricante.clicked.connect(self.InformacionFabricante)
self.informacion_fabricante.setGeometry(self.width/1.9, self.height/2.8,
self.width/4, self.height/3.9)
self.informacion_fabricante.setVisible(False)
def qr_informacion_qr(self, widget):
self.informacion_qr = QToolButton(widget)
self.informacion_qr.setObjectName("button_trasnparente") # object name linked to the CSS
self.informacion_qr.setIcon(QIcon('src/views/static/icons/QRDRIVE.png')) # icon
self.informacion_qr.setIconSize(QSize(self.height/5, self.height/5))
self.informacion_qr.setGeometry((self.width/2) - (self.height/7), (self.height/2) - (self.height/7),
self.height/5, self.height/5)
self.informacion_qr.setVisible(False)
def label_informacion_label(self, widget):
self.informacion_label = QLabel(widget)
self.informacion_label.setObjectName("FabInfo") # object name linked to the CSS
self.informacion_label.setText("GRACIAS POR USAR RETEDECON\n"
"\n"
"RETEDECON es fabricado por:\n"
" - Julián C. Velandia\n"
" - Sebastian Cubides\n"
" - Brayan Guevara\n"
" - Jhon B. Muñoz\n"
"Con la coolaboración de: \n"
" - Diego A. Tibaduiza\n"
"Bajo la supervición y sustento de la Unidad De Gestion De La Innovación,\n"
"Facultad De Ingeniería (Ingnova), de La Universidad Nacional De Colombia.\n\n"
"Si desea contactarse con nosotros puede hacerlo a través de los siguientes medios:\n"
" - Celular/Whatsapp: +57 313 8244012\n"
" - E-Mail: scubidest@unal.edu.co\n\n"
"Versión del Software: 1.0")
self.informacion_label.setGeometry((self.width / 6), (self.height/9),
self.width / 1.2, self.height/1.2)
self.informacion_label.setVisible(False) | 59.640625 | 113 | 0.632172 | 420 | 3,817 | 5.640476 | 0.309524 | 0.183622 | 0.079781 | 0.042212 | 0.295061 | 0.246095 | 0.192486 | 0.164626 | 0.164626 | 0.087801 | 0 | 0.019445 | 0.272465 | 3,817 | 64 | 114 | 59.640625 | 0.833633 | 0.03039 | 0 | 0.035088 | 0 | 0 | 0.193557 | 0.036004 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070175 | false | 0 | 0.087719 | 0 | 0.175439 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b13c9bdd22e18cff242d5292bbf3eb9e6c0efa1 | 263 | py | Python | 1030 Brick Layout.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | null | null | null | 1030 Brick Layout.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | null | null | null | 1030 Brick Layout.py | ansabgillani/binarysearchcomproblems | 12fe8632f8cbb5058c91a55bae53afa813a3247e | [
"MIT"
] | null | null | null | class Solution:
def solve(self, bricks, width, height):
dp = [0]*(width+1)
dp[0] = 1
for i in range(len(dp)):
for brick in bricks:
dp[i] += dp[i-brick] if i-brick >= 0 else 0
return dp[-1]**height
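A standalone restatement of the DP above, for a quick sanity check: with bricks of length 1 and 2 the per-row counts follow the Fibonacci sequence, and the solution counts rows independently (hence the `** height`).

```python
def count_layouts(bricks, width, height):
    # dp[i] = number of ways to tile a 1 x i row with the given brick lengths
    dp = [0] * (width + 1)
    dp[0] = 1
    for i in range(1, width + 1):
        for brick in bricks:
            if i - brick >= 0:
                dp[i] += dp[i - brick]
    return dp[width] ** height
```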
| 23.909091 | 59 | 0.48289 | 40 | 263 | 3.175 | 0.5 | 0.047244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.042424 | 0.372624 | 263 | 10 | 60 | 26.3 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b14e1f101cb9c3f4df93b0023935c69cc41ca60 | 6,801 | py | Python | Malt/SourceTranspiler.py | panthavma/Malt | 918bce21a131e472db7a136a3cb851bda9237a7f | [
"MIT-0",
"MIT"
] | null | null | null | Malt/SourceTranspiler.py | panthavma/Malt | 918bce21a131e472db7a136a3cb851bda9237a7f | [
"MIT-0",
"MIT"
] | null | null | null | Malt/SourceTranspiler.py | panthavma/Malt | 918bce21a131e472db7a136a3cb851bda9237a7f | [
"MIT-0",
"MIT"
] | null | null | null | import textwrap
#TODO: Send transpiler along graph types
class SourceTranspiler():
@classmethod
def get_source_name(self, name):
name = name.replace('.','_').replace(' ', '_')
name = '_' + ''.join(char for char in name if char.isalnum() or char == '_')
while '__' in name:
name = name.replace('__','_')
return name
@classmethod
def asignment(self, name, asignment):
pass
@classmethod
def declaration(self, type, size, name, initialization=None):
pass
@classmethod
def global_reference(self, node_name, parameter_name):
pass
@classmethod
def global_declaration(self, type, size, name, initialization=None):
pass
@classmethod
def custom_io_reference(self, io, graph_io_type, name):
pass
@classmethod
def preprocessor_wrap(self, define, content):
return content
@classmethod
def custom_output_declaration(self, type, name, index, graph_io_type):
pass
@classmethod
def parameter_reference(self, node_name, parameter_name, io_type):
pass
@classmethod
def io_parameter_reference(self, parameter_name, io_type):
return parameter_name
@classmethod
def is_instantiable_type(self, type):
return True
@classmethod
def call(self, name, parameters=[], full_statement=False):
pass
@classmethod
def result(self, result):
pass
@classmethod
def scoped(self, code):
pass
class GLSLTranspiler(SourceTranspiler):
@classmethod
def asignment(self, name, asignment):
return f'{name} = {asignment};\n'
@classmethod
def declaration(self, type, size, name, initialization=None):
array = '' if size == 0 else f'[{size}]'
asignment = f' = {initialization}' if initialization else ''
return f'{type} {name}{array}{asignment};\n'
@classmethod
def global_reference(self, node_name, parameter_name):
return f"U_0{node_name}_0_{self.get_source_name(parameter_name)}".replace('__','_')
@classmethod
def global_declaration(self, type, size, name, initialization=None):
return 'uniform ' + self.declaration(type, size, name, initialization)
@classmethod
def custom_io_reference(self, io, graph_io_type, name):
return f"{io.upper()}_{graph_io_type.upper()}_{''.join(char.upper() for char in name if char.isalnum())}"
@classmethod
def preprocessor_wrap(self, define, content):
if define is None:
return content
return textwrap.dedent('''\
#ifdef {}
{}
#endif //{}
''').format(define, content.strip(), define)
@classmethod
def custom_output_declaration(self, type, name, index, graph_io_type):
return f"layout (location = {index}) out {type} {self.custom_io_reference('OUT', graph_io_type, name)};\n"
@classmethod
def parameter_reference(self, node_name, parameter_name, io_type):
return f'{node_name}_0_{parameter_name}'
@classmethod
def is_instantiable_type(self, type):
return not type.startswith('sampler')
@classmethod
def call(self, function, name, parameters=[], post_parameter_initialization = ''):
src = ''
for i, parameter in enumerate(function['parameters']):
if parameter['io'] in ['out','inout']:
initialization = parameters[i]
src_reference = self.parameter_reference(name, parameter['name'], parameter['io'])
src += self.declaration(parameter['type'], parameter['size'], src_reference, initialization)
parameters[i] = src_reference
src += post_parameter_initialization
initialization = f'{function["name"]}({",".join(parameters)})'
if function['type'] != 'void' and self.is_instantiable_type(function['type']):
src += self.declaration(function['type'], 0, self.parameter_reference(name, 'result', 'out'), initialization)
else:
src += initialization + ';\n'
return src
@classmethod
def result(self, result):
return f'return {result};\n'
@classmethod
def scoped(self, code):
import textwrap
code = textwrap.indent(code, '\t')
return f'{{\n{code}}}\n'
class PythonTranspiler(SourceTranspiler):
@classmethod
def asignment(self, name, asignment):
return f'{name} = {asignment}\n'
@classmethod
def declaration(self, type, size, name, initialization=None):
if initialization is None: initialization = 'None'
return self.asignment(name, initialization)
@classmethod
def global_reference(self, node_name, parameter_name):
return f'PARAMETERS["{node_name}"]["{parameter_name}"]'
@classmethod
def global_declaration(self, type, size, name, initialization=None):
return ''
    @classmethod
    def custom_io_reference(self, io, graph_io_type, name):
        return self.io_parameter_reference(name, io)

    @classmethod
    def custom_output_declaration(self, type, name, index, graph_io_type):
        return self.declaration(type, 0, self.io_parameter_reference(name, 'out'))

    @classmethod
    def parameter_reference(self, node_name, parameter_name, io_type):
        if io_type:
            return f'{node_name}_parameters["{io_type.upper()}"]["{parameter_name}"]'
        else:
            return f'{node_name}_parameters["{parameter_name}"]'

    @classmethod
    def io_parameter_reference(self, parameter_name, io_type):
        return f'{io_type.upper()}["{parameter_name}"]'

    @classmethod
    def call(self, function, name, parameters=[], post_parameter_initialization=''):
        import textwrap
        src = ''
        src += textwrap.dedent(f'''
            {name}_parameters = {{
                'IN' : {{}},
                'OUT' : {{}},
            }}
        ''')
        for i, parameter in enumerate(function['parameters']):
            initialization = parameters[i]
            if initialization is None:
                initialization = 'None'
            parameter_reference = self.parameter_reference(name, parameter['name'], parameter['io'])
            src += f'{parameter_reference} = {initialization}\n'
        src += post_parameter_initialization
        src += f'run_node("{name}", "{function["name"]}", {name}_parameters)\n'
        return src

    @classmethod
    def result(self, result):
        return f'return {result}\n'

    @classmethod
    def scoped(self, code):
        import textwrap
        code = textwrap.indent(code, '\t')
        return f'if True:\n{code}'
| 32.385714 | 121 | 0.624761 | 737 | 6,801 | 5.583446 | 0.118046 | 0.125881 | 0.041312 | 0.050547 | 0.680194 | 0.614581 | 0.576428 | 0.520535 | 0.520535 | 0.507412 | 0 | 0.001179 | 0.251728 | 6,801 | 209 | 122 | 32.54067 | 0.807428 | 0.005734 | 0 | 0.618182 | 0 | 0.012121 | 0.157077 | 0.06996 | 0.006061 | 0 | 0 | 0.004785 | 0 | 1 | 0.224242 | false | 0.060606 | 0.024242 | 0.10303 | 0.448485 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 3 |
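Both transpilers end with a `scoped` helper that wraps a generated code block via `textwrap.indent`. A standalone sketch of that pattern (the function names here are illustrative, not part of the original classes):

```python
import textwrap


def glsl_scoped(code: str) -> str:
    # Wrap generated GLSL in braces, indenting the body with a tab.
    body = textwrap.indent(code, '\t')
    return '{\n' + body + '}\n'


def python_scoped(code: str) -> str:
    # Python has no braces, so an always-true conditional opens a scope
    # that accepts the indented body.
    body = textwrap.indent(code, '\t')
    return 'if True:\n' + body
```

The `if True:` trick keeps the emitted Python syntactically valid while preserving the block structure that the GLSL target expresses with braces.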
2b157c7b02563c277a3b074729ba1a44f8eb0df3 | 113 | py | Python | endpoints/projects.py | FLUX-SE/TrackedHQ_python_wrapper | d35f868698d0ba0cb2fdb820317f7460b154a6d0 | [
"MIT"
] | null | null | null | endpoints/projects.py | FLUX-SE/TrackedHQ_python_wrapper | d35f868698d0ba0cb2fdb820317f7460b154a6d0 | [
"MIT"
] | null | null | null | endpoints/projects.py | FLUX-SE/TrackedHQ_python_wrapper | d35f868698d0ba0cb2fdb820317f7460b154a6d0 | [
"MIT"
] | null | null | null | from .base import Resource
class Projects(Resource):
    def list(self):
        return self._get("/projects")
| 16.142857 | 37 | 0.672566 | 14 | 113 | 5.357143 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.212389 | 113 | 6 | 38 | 18.833333 | 0.842697 | 0 | 0 | 0 | 0 | 0 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 5 |
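The `Projects` endpoint above delegates to a `Resource._get` helper from `endpoints/base.py`, which is not part of this record. A hypothetical minimal base class consistent with that usage (the constructor and attribute names are assumptions):

```python
class Resource:
    def __init__(self, base_url, session):
        self.base_url = base_url.rstrip('/')
        self.session = session  # any object exposing .get(url)

    def _get(self, path):
        # Issue a GET against the API root plus the endpoint path.
        return self.session.get(self.base_url + path)


class Projects(Resource):
    def list(self):
        return self._get("/projects")
```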
2b15e166bdadb8379f269e4e1a5eb613b13e1d82 | 3,173 | py | Python | src/feature_creation.py | aswain571/m5_forecasting | 3b7fccd56a4c14c38bbcff6b11f82cd440132730 | [
"MIT"
] | null | null | null | src/feature_creation.py | aswain571/m5_forecasting | 3b7fccd56a4c14c38bbcff6b11f82cd440132730 | [
"MIT"
] | null | null | null | src/feature_creation.py | aswain571/m5_forecasting | 3b7fccd56a4c14c38bbcff6b11f82cd440132730 | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import pickle
from preprocess import process_ds
from sklearn.preprocessing import LabelEncoder
def transform_cat_feats(df):
    """Label encode the categorical columns. (Filling of null
    event columns with 'unknown' is currently commented out.)

    Args:
        df (pd.DataFrame): Dataframe with the sales data.

    Returns:
        Dataframe with the sales data with categorical columns
        label encoded.
    """
    # nan_features = [
    #     'event_name_1',
    #     'event_type_1',
    #     'event_name_2',
    #     'event_type_2',
    # ]
    # for feature in nan_features:
    #     df[feature].fillna('unknown', inplace=True)
    cat = [
        "item_id",
        "dept_id",
        "cat_id",
        "store_id",
        "state_id",
        "event_name_1",
        "event_type_1",
        "event_name_2",
        "event_type_2",
    ]
    for feature in cat:
        encoder = LabelEncoder()
        df[feature] = encoder.fit_transform(df[feature])
    return df
def calculate_time_features(df):
    """Calculate lagged and rolling mean features
    of the sales data.

    Args:
        df (pd.DataFrame): Dataframe with the sales data.

    Returns:
        Dataframe with the sales data including lag and rolling
        features.
    """
    dayLags = [28]
    lagSalesCols = [f"lag_{dayLag}" for dayLag in dayLags]
    for dayLag, lagSalesCol in zip(dayLags, lagSalesCols):
        df[lagSalesCol] = (
            df[["id", "item_sales"]].groupby("id")["item_sales"].shift(dayLag)
        )

    windows = [7, 28]
    for window in windows:
        for dayLag, lagSalesCol in zip(dayLags, lagSalesCols):
            df[f"rmean_{dayLag}_{window}"] = (
                df[["id", lagSalesCol]]
                .groupby("id")[lagSalesCol]
                .transform(lambda x: x.rolling(window).mean())
            )
    return df
def cat_ts_feats(df):
    """Build categorical and time series feats.

    Args:
        df (pd.DataFrame): Dataframe with sales data

    Returns:
        Dataframe with sales data including categorical
        features and lag/rolling mean features
    """
    df = transform_cat_feats(df)
    df = calculate_time_features(df)
    return df
def get_test_train_data():
    """Build train and test dataset. Test is
    used for inference.

    Args:
        None

    Returns:
        None; the train and test dataframes are pickled to disk.
    """
    df = process_ds()
    df = cat_ts_feats(df)
    df = df.reset_index().set_index("date")

    # remove unused columns
    cols_not_used = ["id", "weekday", "d", "index"]
    df.drop(columns=cols_not_used, inplace=True)
    df.dropna(inplace=True)

    # convert T/F to boolean - lightgbm throws error otherwise
    df["is_weekend"] = df["is_weekend"].astype(int)
    df["no_sell_price"] = df["no_sell_price"].astype(int)
    print(df)

    train_start_date = "2014-04-24"
    train_end_date = "2016-04-23"
    test_start_date = "2016-04-24"
    test_end_date = "2016-05-23"
    df_train = df.loc[train_start_date:train_end_date]
    df_test = df.loc[test_start_date:test_end_date]

    # save train and test dataframes for later use
    df_train.to_pickle("../data/df_train.pkl")
    df_test.to_pickle("../data/df_test.pkl")


if __name__ == "__main__":
    get_test_train_data()
| 25.58871 | 78 | 0.632524 | 423 | 3,173 | 4.524823 | 0.316785 | 0.032915 | 0.031348 | 0.043887 | 0.241902 | 0.22675 | 0.211076 | 0.211076 | 0.163009 | 0.163009 | 0 | 0.019027 | 0.254649 | 3,173 | 123 | 79 | 25.796748 | 0.790275 | 0.300347 | 0 | 0.081967 | 0 | 0 | 0.143612 | 0.011047 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065574 | false | 0 | 0.081967 | 0 | 0.196721 | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
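The lag and rolling-mean construction in `calculate_time_features` can be illustrated without pandas; a pure-Python sketch of the same two operations on a single sales series (pandas performs these per `id` group, and `None` stands in for `NaN`):

```python
def lag(series, k):
    # Shift values forward by k positions (like groupby().shift(k)),
    # padding the start with None.
    if k == 0:
        return list(series)
    return [None] * k + list(series[:-k])


def rolling_mean(series, window):
    # Mean over a trailing window; incomplete or None-containing
    # windows yield None, mirroring pandas' NaN propagation.
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
            continue
        win = series[i - window + 1:i + 1]
        out.append(sum(win) / window if None not in win else None)
    return out
```

Applying `rolling_mean` to a lagged series reproduces the `rmean_{dayLag}_{window}` columns built above.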
2b175fc4f0131469f592b2c384a3aa753931cdfb | 294 | py | Python | setup.py | bzdvdn/sipuni-api-wrapper | 44c799244388e65e9f455ed21dd16481be36f5f6 | [
"MIT"
] | 1 | 2019-12-05T14:28:31.000Z | 2019-12-05T14:28:31.000Z | setup.py | bzdvdn/sipuni-api-wrapper | 44c799244388e65e9f455ed21dd16481be36f5f6 | [
"MIT"
] | 3 | 2020-03-24T17:55:25.000Z | 2021-02-02T22:22:17.000Z | setup.py | bzdvdn/sipuni-api-wrapper | 44c799244388e65e9f455ed21dd16481be36f5f6 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
    name='sipuni-api',
    version='0.0.1.1',
    packages=find_packages(),
    install_requires=[
        'requests',
    ],
    author='bzdvdn',
    author_email='bzdv.dn@gmail.com',
    url='https://github.com/bzdvdn/sipuni-api-wrapper',
)
| 21 | 55 | 0.642857 | 37 | 294 | 5 | 0.702703 | 0.12973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016878 | 0.193878 | 294 | 13 | 56 | 22.615385 | 0.763713 | 0 | 0 | 0 | 0 | 0 | 0.312925 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2b178eeae032ec25548f56cb6c96df9b289d22b5 | 6,545 | py | Python | cosmosis/output/fits_output.py | annis/cosmosis | 55efc1bc2260ca39298c584ae809fa2a8e72a38e | [
"BSD-2-Clause"
] | 2 | 2021-06-18T14:11:59.000Z | 2022-02-23T19:19:36.000Z | cosmosis/output/fits_output.py | annis/cosmosis | 55efc1bc2260ca39298c584ae809fa2a8e72a38e | [
"BSD-2-Clause"
] | 2 | 2021-11-02T12:44:24.000Z | 2022-03-30T15:09:48.000Z | cosmosis/output/fits_output.py | annis/cosmosis | 55efc1bc2260ca39298c584ae809fa2a8e72a38e | [
"BSD-2-Clause"
] | 2 | 2022-03-25T21:26:27.000Z | 2022-03-29T06:37:46.000Z | from .output_base import OutputBase
from . import utils
import numpy as np
import os
from glob import glob
from collections import OrderedDict
try:
import fitsio
except ImportError:
fitsio = None
comment_indicator = "_cosmosis_comment_indicator_"
final_metadata_indicator = "FINALMETA"
unreserve_indicator = "UNRES"
reserved_keys = [
    "XTENSION",
    "BITPIX",
    "NAXIS",
    "NAXIS1",
    "NAXIS2",
    "PCOUNT",
    "GCOUNT",
    "TFIELDS",
    "TTYPE1",
    "COMMENT",
]
def check_fitsio():
    if fitsio is None:
        raise RuntimeError("You need to have the fitsio library installed to output FITS files. Try running: pip install --install-option=\"--use-system-fitsio\" git+git://github.com/joezuntz/fitsio")
class FitsOutput(OutputBase):
    FILE_EXTENSION = ".fits"
    _aliases = ["fits"]

    def __init__(self, filename, rank=0, nchain=1, clobber=True):
        super(FitsOutput, self).__init__()

        # If filename already ends in .fits then remove it for a moment
        if filename.endswith(self.FILE_EXTENSION):
            filename = filename[:-len(self.FILE_EXTENSION)]

        if nchain > 1:
            filename = filename + "_{}".format(rank + 1)

        self._filename = filename + self.FILE_EXTENSION
        self.filename_base = filename

        check_fitsio()

        self._fits = fitsio.FITS(self._filename, "rw", clobber=clobber)
        self._hdu = None

        # also used to store comments:
        self._metadata = OrderedDict()
        self._final_metadata = OrderedDict()

    def _close(self):
        self._flush_metadata(self._final_metadata)
        self._final_metadata = {}
        self._fits.close()

    def _flush_metadata(self, metadata):
        for (key, (value, comment)) in list(metadata.items()):
            if key.startswith(comment_indicator):
                self._hdu.write_comment(value)
            elif comment:
                self._hdu.write_key(key, value, comment)
            else:
                self._hdu.write_key(key, value)

    def _begun_sampling(self, params):
        # write the name line
        self._fits.create_table_hdu(data=params, names=[c[0] for c in self.columns])
        self._hdu = self._fits[-1]
        self._dtype = self._hdu.get_rec_dtype()[0]
        self._flush_metadata(self._metadata)
        self._metadata = {}

    @staticmethod
    def is_reserved_fits_keyword(key):
        for k in reserved_keys:
            if key.upper().startswith(k):
                return True
        return False
    def _write_metadata(self, key, value, comment=''):
        # We save the metadata until we get the first
        # parameters, since up till then the columns can
        # be changed
        if self.is_reserved_fits_keyword(key):
            key = unreserve_indicator + key
        self._metadata[key] = (value, comment)

    def _write_comment(self, comment):
        # save comments along with the metadata - nice as it
        # preserves order
        self._metadata[comment_indicator +
                       "_%d" % (len(self._metadata))] = (comment, None)

    def _write_parameters(self, params):
        row = np.core.records.fromarrays(params, dtype=self._dtype)
        row = np.atleast_1d(row)
        self._hdu.append(row)

    def _write_final(self, key, value, comment=''):
        # I suppose we can put this at the end - why not?
        if self.is_reserved_fits_keyword(key):
            key = unreserve_indicator + key
        self._final_metadata[key] = (value, final_metadata_indicator + comment)

    def name_for_sampler_resume_info(self):
        return self.filename_base + '.sampler_status'
    @classmethod
    def from_options(cls, options, resume=False):
        # look up required parameters in the ini file;
        # how this looks will depend on the ini
        if resume:
            raise ValueError("Cannot resume from FITS output")
        filename = options['filename']
        delimiter = options.get('delimiter', '\t')
        rank = options.get('rank', 0)
        nchain = options.get('parallel', 1)
        clobber = utils.boolean_string(options.get('clobber', True))
        return cls(filename, rank, nchain, clobber=clobber)
    @classmethod
    def load_from_options(cls, options):
        check_fitsio()
        filename = options['filename']

        cut = False
        if filename.endswith(cls.FILE_EXTENSION):
            filename = filename[:-len(cls.FILE_EXTENSION)]
            cut = True

        # first look for serial file
        if os.path.exists(filename + cls.FILE_EXTENSION):
            datafiles = [filename + cls.FILE_EXTENSION]
        elif os.path.exists(filename) and not cut:
            datafiles = [filename]
        else:
            datafiles = glob(filename + "_[0-9]*" + cls.FILE_EXTENSION)
        if not datafiles:
            raise RuntimeError("No datafiles found starting with %s!" % filename)

        # Read the metadata
        metadata = []
        final_metadata = []
        data = []
        comments = []
        column_names = None

        for datafile in datafiles:
            print('LOADING CHAIN FROM FILE: ', datafile)
            chain = []
            chain_metadata = {}
            chain_final_metadata = {}
            chain_comments = []

            f = fitsio.FITS(datafile, "r")
            hdu = f[1]
            chain = f[1].read()
            # convert to unstructured format
            chain = chain.view((chain.dtype[0], len(chain.dtype.names)))
            column_names = hdu.get_colnames()
            hdr = hdu.read_header()
            chain_comments = [r['comment'] for r in hdr.records() if r['name'].lower() == "comment"]
            for r in hdr.records():
                key = r['name']
                if key == 'COMMENT':
                    continue
                if key.startswith(unreserve_indicator):
                    key = key[len(unreserve_indicator):]
                value = r['value']
                key = key.lower()
                if r['comment'].startswith(final_metadata_indicator):
                    chain_final_metadata[key] = value
                else:
                    chain_metadata[key] = value

            data.append(np.array(chain))
            metadata.append(chain_metadata)
            final_metadata.append(chain_final_metadata)
            comments.append(chain_comments)

        if column_names is None:
            raise ValueError("Could not find column names header in file starting %s" % filename)

        return column_names, data, metadata, comments, final_metadata
| 33.055556 | 201 | 0.603209 | 751 | 6,545 | 5.070573 | 0.28229 | 0.04438 | 0.019695 | 0.016544 | 0.089811 | 0.054622 | 0.030462 | 0.030462 | 0.030462 | 0.030462 | 0 | 0.003902 | 0.295187 | 6,545 | 197 | 202 | 33.22335 | 0.821591 | 0.073644 | 0 | 0.087838 | 0 | 0.006757 | 0.086957 | 0.01058 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087838 | false | 0 | 0.054054 | 0.006757 | 0.195946 | 0.006757 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
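The round trip of reserved FITS keywords in `FitsOutput` (prefix on write, strip on read) can be isolated into two small helpers. This is a sketch of the logic only, not the class's actual API:

```python
RESERVED_KEYS = ("XTENSION", "BITPIX", "NAXIS", "PCOUNT", "GCOUNT",
                 "TFIELDS", "TTYPE1", "COMMENT")
UNRESERVE = "UNRES"


def safe_key(key):
    # Mirror is_reserved_fits_keyword plus the prefixing in _write_metadata:
    # any key colliding with a reserved FITS header keyword gets a prefix.
    if any(key.upper().startswith(k) for k in RESERVED_KEYS):
        return UNRESERVE + key
    return key


def restore_key(key):
    # Mirror the stripping done when reading the chain back in
    # load_from_options.
    if key.startswith(UNRESERVE):
        return key[len(UNRESERVE):]
    return key
```

Because the prefix check uses `startswith`, even derived names like `naxis_total` are rewritten, which avoids fitsio silently mangling them.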
2b17cbd8a9488937054f8a24306f05f58eb7a8b6 | 2,757 | py | Python | micromath/fibonacci/test_fibonacci.py | hedrox/micromath | 0f300da914c844e5ff0775f25119909f748de635 | [
"MIT"
] | null | null | null | micromath/fibonacci/test_fibonacci.py | hedrox/micromath | 0f300da914c844e5ff0775f25119909f748de635 | [
"MIT"
] | null | null | null | micromath/fibonacci/test_fibonacci.py | hedrox/micromath | 0f300da914c844e5ff0775f25119909f748de635 | [
"MIT"
] | null | null | null | import logging
import json
from server import app
app.testing = True
logging.disable(logging.ERROR)
class TestFibonacci:
    def test_correct_fibonacci(self):
        with app.test_client() as client:
            body = {'number': 16}
            result = client.post('/api/v1/fibonacci', json=body)
            assert result.status_code == 200
            data = json.loads(result.data)
            assert int(data['result']) == 610
            assert data['error'] is None

    def test_invalid_attribute_type(self):
        with app.test_client() as client:
            body = {'number': None}
            result = client.post('/api/v1/fibonacci', json=body)
            assert result.status_code == 500
            data = json.loads(result.data)
            assert data['result'] is None
            assert 'name' in data['error']
            assert data['error']['name'] == 'ValidationError'

            body = {'number': '2'}
            result = client.post('/api/v1/fibonacci', json=body)
            assert result.status_code == 500
            data = json.loads(result.data)
            assert data['result'] is None
            assert 'name' in data['error']
            assert data['error']['name'] == 'ValidationError'

    def test_empty_body(self):
        with app.test_client() as client:
            body = {}
            result = client.post('/api/v1/fibonacci', json=body)
            assert result.status_code == 500
            data = json.loads(result.data)
            assert data['result'] is None
            assert 'name' in data['error']
            assert data['error']['name'] == 'ValidationError'

    def test_extra_attribute(self):
        with app.test_client() as client:
            body = {'number': 32, 'extra_key': 32}
            result = client.post('/api/v1/fibonacci', json=body)
            assert result.status_code == 500
            data = json.loads(result.data)
            assert data['result'] is None
            assert 'name' in data['error']
            assert data['error']['name'] == 'ValidationError'

    def test_no_body(self):
        with app.test_client() as client:
            result = client.post('/api/v1/fibonacci')
            assert result.status_code == 400
            data = json.loads(result.data)
            assert data['result'] is None
            assert data['error'] == 'Data not provided'

    def test_invalid_api_version(self):
        with app.test_client() as client:
            body = {'number': 32}
            result = client.post('/api/v2/fibonacci', json=body)
            assert result.status_code == 404
            data = json.loads(result.data)
            assert data['result'] is None
            assert data['error'] == 'API version v2 not found'
| 33.621951 | 64 | 0.564019 | 322 | 2,757 | 4.742236 | 0.177019 | 0.085134 | 0.073346 | 0.087099 | 0.787164 | 0.773412 | 0.734774 | 0.709234 | 0.663392 | 0.612312 | 0 | 0.021602 | 0.311571 | 2,757 | 81 | 65 | 34.037037 | 0.782929 | 0 | 0 | 0.571429 | 0 | 0 | 0.141095 | 0 | 0 | 0 | 0 | 0 | 0.396825 | 1 | 0.095238 | false | 0 | 0.047619 | 0 | 0.15873 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4 |
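The happy-path test above expects 610 for an input of 16; 610 is the 15th Fibonacci number counting from F(0)=0, so the service's exact indexing convention is an assumption here. An iterative generator for cross-checking expected values:

```python
def fib_sequence(n):
    # First n Fibonacci numbers: 0, 1, 1, 2, 3, 5, ...
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq
```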
2b1c1953cad2c24ae38087460d540f5ab88ef710 | 278 | py | Python | app.py | M3nin0/selectToTex | 423cfdafdd0bd391c30cbbf70386f74e93844c2f | [
"BSD-2-Clause"
] | 4 | 2018-06-06T15:35:51.000Z | 2020-01-19T15:47:23.000Z | app.py | M3nin0/selectToTex | 423cfdafdd0bd391c30cbbf70386f74e93844c2f | [
"BSD-2-Clause"
] | null | null | null | app.py | M3nin0/selectToTex | 423cfdafdd0bd391c30cbbf70386f74e93844c2f | [
"BSD-2-Clause"
] | null | null | null | from selecttotex.totex import Totex
# Create the SelectToTex instance
tt = Totex()

# SQL commands to be converted
commands = ['SELECT * FROM aluno;', 'SELECT * FROM materia;', 'SELECT * FROM matricula;']

# Call the function to perform the conversion
tt.to_tex(commands, 'tabelas.txt')
| 25.272727 | 89 | 0.733813 | 37 | 278 | 5.486486 | 0.702703 | 0.147783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154676 | 278 | 10 | 90 | 27.8 | 0.86383 | 0.33813 | 0 | 0 | 0 | 0 | 0.427778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b1d50c991de2a9641935aabe9ff1cac8d06aaa8 | 506 | py | Python | demo/blog/migrations/0003_auto_20200424_2258.py | erdem/django-marshmallow | 3b9f3cefd70d98e9348f5a69fa61837ca28be7a6 | [
"MIT"
] | 3 | 2020-05-05T16:17:34.000Z | 2021-05-13T19:05:12.000Z | demo/blog/migrations/0003_auto_20200424_2258.py | erdem/django-marshmallow | 3b9f3cefd70d98e9348f5a69fa61837ca28be7a6 | [
"MIT"
] | 3 | 2021-09-08T02:12:25.000Z | 2022-03-12T00:36:59.000Z | demo/blog/migrations/0003_auto_20200424_2258.py | erdem/django-marshmallow | 3b9f3cefd70d98e9348f5a69fa61837ca28be7a6 | [
"MIT"
] | null | null | null | # Generated by Django 2.2.11 on 2020-04-24 22:58
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('blog', '0002_tag_related_tags'),
]
operations = [
migrations.RemoveField(
model_name='tag',
name='related_tags',
),
migrations.AddField(
model_name='tag',
name='related_tags',
field=models.ManyToManyField(null=True, to='blog.Tag'),
),
]
| 22 | 67 | 0.571146 | 53 | 506 | 5.320755 | 0.641509 | 0.117021 | 0.085106 | 0.113475 | 0.191489 | 0.191489 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 0.3083 | 506 | 22 | 68 | 23 | 0.748571 | 0.090909 | 0 | 0.375 | 1 | 0 | 0.137555 | 0.045852 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
2b1da2d3ed1a52018f6ec06f4c582bd00a0d9184 | 6,682 | py | Python | python/vtool/maya_lib/ui.py | louisVottero/vtool | 4e2592df5841829e790251dc6923e45c8d013091 | [
"MIT"
] | 3 | 2022-02-22T01:00:59.000Z | 2022-03-07T16:19:27.000Z | python/vtool/maya_lib/ui.py | louisVottero/vtool | 4e2592df5841829e790251dc6923e45c8d013091 | [
"MIT"
] | 4 | 2022-03-04T05:25:44.000Z | 2022-03-11T04:51:35.000Z | python/vtool/maya_lib/ui.py | louisVottero/vtool | 4e2592df5841829e790251dc6923e45c8d013091 | [
"MIT"
] | 1 | 2022-03-31T23:07:09.000Z | 2022-03-31T23:07:09.000Z | # Copyright (C) 2022 Louis Vottero louis.vot@gmail.com All rights reserved.
from __future__ import absolute_import
import maya.cmds as cmds
import maya.utils
import maya.mel as mel
from maya.app.general.mayaMixin import MayaQWidgetBaseMixin, MayaQWidgetDockableMixin
from maya import OpenMayaUI as omui
from .. import qt_ui, qt
from .. import util, util_file
from .ui_lib import ui_fx, ui_shape_combo, ui_corrective
from .ui_lib import ui_rig
from .ui_lib import ui_anim
from .ui_lib import ui_model
from . import ui_core
from ..process_manager import process
from . import core
from . import attr
from . import space
from . import geo
from . import deform
from . import rigs_util
def load_into_tool_manager(window):
    if ToolManager._last_instance:
        parent_name = ToolManager._last_instance.parent().objectName()
        if parent_name.find('WorkspaceControl') > -1:
            window.show()
            window_name = window.parent().objectName()
            cmds.workspaceControl(window_name, e=True, tabToControl=(parent_name, -1))  # , uiScript = command, li = False, retain = False)

    if not ToolManager._last_instance:
        window.show()
        # window_name = window.parent().objectName()
        # cmds.workspaceControl(window_name, e=True)  # , tabToControl = (parent_name,-1))  # , uiScript = command, li = False, retain = False)

    if hasattr(window, 'initialize_settings'):
        window.show()
        window.initialize_settings()
def pose_manager(shot_sculpt_only=False):
    window = ui_rig.pose_manager(shot_sculpt_only)
    load_into_tool_manager(window)


def shape_combo():
    window = ui_rig.shape_combo()
    load_into_tool_manager(window)


def picker():
    window = ui_rig.picker()
    if ToolManager._last_instance:
        ToolManager._last_instance.add_tab(window, window.title)
def tool_manager(name=None, directory=None):
    workspace_name = ToolManager.title + 'WorkspaceControl'
    ui_core.delete_workspace_control(workspace_name)

    manager = ToolManager(name)
    workspace_control = manager.title + 'WorkspaceControl'

    if not ui_core.was_floating(manager.title):
        tab_name = ui_core.get_stored_tab(manager.title)
        manager.show()
        ui_core.add_tab(workspace_control, tab_name)
    else:
        manager.show()

    if directory:
        manager.set_directory(directory)

    return manager
def process_manager(directory=None):
    ui_core.delete_workspace_control(ui_rig.ProcessMayaWindow.title + 'WorkspaceControl')

    window = ui_rig.ProcessMayaWindow()

    if directory:
        window.set_directory(directory)

    window.show()
    return window


def ramen():
    ui_core.delete_workspace_control(ui_rig.RamenMayaWindow.title + 'WorkspaceControl')

    window = ui_rig.RamenMayaWindow()
    window.show()
    return window


def script_manager(directory):
    ui_core.delete_workspace_control(ui_rig.ScriptMayaWindow.title + 'WorkspaceControl')

    window = ui_rig.ScriptMayaWindow()
    window.set_directory(directory)
    window.show()
    return window
class ToolManager(ui_core.MayaDirectoryWindowMixin):
    # class ToolManager(ui_core.MayaDockMixin, qt_ui.BasicWidget):
    # class ToolManager(ui_core.MayaDockMixin, qt.QWidget):

    title = (util.get_custom('vetala_name', 'VETALA') + ' HUB')
    # _last_instance = None

    def __init__(self, name=None):
        if name:
            self.title = name

        self.default_docks = []
        self.docks = []

        super(ToolManager, self).__init__()
        self.setWindowTitle(self.title)

        ui_core.new_tool_signal.signal.connect(load_into_tool_manager)

    def _build_widgets(self):
        self.main_layout.setAlignment(qt.QtCore.Qt.AlignTop)

        header_layout = qt.QHBoxLayout()
        version = qt.QLabel('%s' % util_file.get_vetala_version())
        version.setMaximumHeight(30)
        header_layout.addWidget(version)
        self.main_layout.addLayout(header_layout)

        self.rigging_widget = ui_rig.RigManager()
        self.main_layout.addWidget(self.rigging_widget)

    def add_tab(self, widget, name):
        self.add_dock(widget, name)

    def add_dock(self, widget, name):
        self.dock_window.add_dock(widget, name)

    def set_directory(self, directory):
        super(ToolManager, self).set_directory(directory)
        self.rigging_widget.set_directory(directory)
class Dock(ui_core.MayaBasicMixin, qt_ui.BasicWindow):

    def __init__(self, name=None):
        self.docks = []
        super(Dock, self).__init__()

    def _get_dock_widgets(self):
        children = self.children()
        found = []
        for child in children:
            if isinstance(child, qt.QDockWidget):
                found.append(child)
        return found

    def _build_widgets(self):
        self.main_widget.setSizePolicy(qt.QSizePolicy.Minimum, qt.QSizePolicy.Minimum)
        self.centralWidget().hide()
        self.setTabPosition(qt.QtCore.Qt.TopDockWidgetArea, qt.QTabWidget.West)
        self.setDockOptions(self.AllowTabbedDocks)

    def add_dock(self, widget, name):
        docks = self._get_dock_widgets()

        for dock in docks:
            if dock.windowTitle() == name:
                dock.deleteLater()
                dock.close()

        old_parent = widget.parent()
        old_parent_name = None
        if old_parent:
            old_parent_name = old_parent.objectName()

        dock_widget = ui_core.MayaDockWidget(self)
        dock_widget.setWindowTitle(name)
        dock_widget.setWidget(widget)

        if old_parent_name and old_parent_name.find('Mixin') > -1:
            old_parent.close()
            cmds.deleteUI(old_parent_name)

        self.addDockWidget(qt.QtCore.Qt.TopDockWidgetArea, dock_widget)

        if docks:
            self.tabifyDockWidget(docks[-1], dock_widget)

        dock_widget.show()
        dock_widget.raise_()

        return dock_widget
| 28.678112 | 140 | 0.611344 | 713 | 6,682 | 5.464236 | 0.238429 | 0.021561 | 0.01694 | 0.0154 | 0.274384 | 0.179415 | 0.119867 | 0.094456 | 0.069302 | 0.069302 | 0 | 0.002365 | 0.303951 | 6,682 | 233 | 141 | 28.678112 | 0.835304 | 0.063903 | 0 | 0.195652 | 0 | 0 | 0.023778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.123188 | false | 0 | 0.144928 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b1e286ea315966366a86b5a9f5142b3ebdb896b | 4,748 | py | Python | xtbservice/models.py | cheminfo-py/xtbservice | d9227ea9e4647fe302cc3c1e9d57838fff938cd4 | [
"MIT"
] | 2 | 2022-01-28T02:59:28.000Z | 2022-01-31T15:47:30.000Z | xtbservice/models.py | cheminfo-py/xtbservice | d9227ea9e4647fe302cc3c1e9d57838fff938cd4 | [
"MIT"
] | 17 | 2021-09-13T12:26:57.000Z | 2022-01-31T22:35:49.000Z | xtbservice/models.py | cheminfo-py/xtbservice | d9227ea9e4647fe302cc3c1e9d57838fff938cd4 | [
"MIT"
] | 1 | 2022-01-26T08:17:50.000Z | 2022-01-26T08:17:50.000Z | # -*- coding: utf-8 -*-
from dataclasses import dataclass
from typing import Dict, List, Optional
import numpy as np
from ase import Atoms
from pydantic import BaseModel, Field, validator
ALLOWED_METHODS = ("GFNFF", "GFN2xTB", "GFN1xTB")
ALLOWED_FF = ("uff", "mmff94", "mmff94s")
@dataclass
class OptimizationResult:
    atoms: Atoms
    forces: np.ndarray
    energy: float
class IRResult(BaseModel):
    wavenumbers: List[float] = Field(None, description="List of wavenumbers in cm^-1")
    intensities: List[float] = Field(
        None, description="List of IR intensities in (D/Å)^2 amu^-1"
    )
    ramanIntensities: List[float] = Field(
        None,
        description="List of Raman intensities in (D/Å)^2 amu^-1, computed using Placzek and Bond Polarization (using values from Lippincott/Stuttman) approximation",
    )
    zeroPointEnergy: float = Field(None, description="Zero point energy in a.u.")
    modes: Optional[List[dict]] = Field(
        None,
        description="List of dictionaries with the keys `number` - number of the mode (zero indexed), `displacements` - xyz file with the displacement vectors, `intensity` - IR intensity of the mode in (D/Å)^2 amu^-1, `ramanIntensity` - Raman intensity of mode, `imaginary` - true if mode is imaginary, `mostDisplaceAtoms` - sorted list of atom indices (zero indexed) according to their displacement (Euclidean norm), `mostContributingAtoms` - most contributing atoms according to a distance criterion.",
    )
    mostRelevantModesOfAtoms: Optional[Dict[int, List[int]]] = Field(
        None,
        description="Dictionary indexed with atom indices (zero indexed), with the mode indices (zero indexed) that are most relevant for a given atom as values",
    )
    mostRelevantModesOfBonds: Optional[List[dict]] = Field(
        None,
        description="List of dictionaries with the keys `startAtom`, `endAtom` and `mode`",
    )
    hasImaginaryFrequency: bool = Field(
        None, description="True if there is any mode with imaginary frequency"
    )
    isLinear: bool = Field(None, description="True if the molecule is linear.")
    momentsOfInertia: List[float] = Field(
        None,
        description="Moments of inertia around principal axes. For a linear molecule one only expects two non-zero components.",
    )
    hasLargeImaginaryFrequency: bool = Field(
        None,
        description="True if there is a large imaginary frequency, indicating a failed geometry optimization.",
    )
class IRRequest(BaseModel):
    smiles: Optional[str] = Field(
        None,
        description="SMILES string of input molecule. The service will add implicit hydrogens",
    )
    molFile: Optional[str] = Field(
        None,
        description="String with molfile with expanded hydrogens. The service will not attempt to add implicit hydrogens to ensure that the atom ordering is preserved.",
    )
    method: Optional[str] = Field(
        "GFNFF",
        description="String with method that is used for geometry optimization and calculation of the vibrational frequencies. Allowed values are `GFNFF`, `GFN2xTB`, and `GFN1xTB`. `GFNFF` is the computationally most inexpensive method, but can be less accurate than the xTB methods",
    )

    @validator("method")
    def method_match(cls, v):
        if v not in ALLOWED_METHODS:
            raise ValueError(f"method must be in {ALLOWED_METHODS}")
        return v
class ConformerRequest(BaseModel):
    smiles: Optional[str] = Field(
        None,
        description="SMILES string of input molecule. The service will add implicit hydrogens",
    )
    molFile: Optional[str] = Field(
        None,
        description="String with molfile with expanded hydrogens. The service will not attempt to add implicit hydrogens to ensure that the atom ordering is preserved.",
    )
    forceField: Optional[str] = Field(
        "uff",
        description="String with force field that is used for energy minimization. Options are 'uff', 'mmff94', and 'mmff94s'",
    )
    rmsdThreshold: Optional[float] = Field(
        0.5, description="RMSD threshold that is used to prune the conformer library."
    )
    maxConformers: Optional[int] = Field(
        1,
        description="Maximum number of conformers that are generated (after pruning).",
    )

    @validator("forceField")
    def method_match(cls, v):
        if v not in ALLOWED_FF:
            raise ValueError(f"forceField must be in {ALLOWED_FF}")
        return v
class Conformer(BaseModel):
    molFile: str = Field(
        None, description="String with molfile.",
    )
    energy: str = Field(
        None, description="Final energy after energy minimization.",
    )


class ConformerLibrary(BaseModel):
    conformers: List[Conformer]
| 40.931034 | 502 | 0.68829 | 574 | 4,748 | 5.679443 | 0.34669 | 0.046933 | 0.104294 | 0.042331 | 0.321779 | 0.312883 | 0.30092 | 0.244172 | 0.221472 | 0.221472 | 0 | 0.006223 | 0.221567 | 4,748 | 115 | 503 | 41.286957 | 0.875812 | 0.004423 | 0 | 0.22449 | 0 | 0.071429 | 0.486138 | 0.004868 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.05102 | 0 | 0.408163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
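The two pydantic validators above share one pattern: a membership check against a tuple of allowed names. A dependency-free sketch of that check (written outside pydantic, so the function name is illustrative):

```python
ALLOWED_METHODS = ("GFNFF", "GFN2xTB", "GFN1xTB")
ALLOWED_FF = ("uff", "mmff94", "mmff94s")


def validate_choice(value, allowed, field_name):
    # Same rule the @validator methods enforce: anything outside the
    # whitelist raises a ValueError naming the allowed options.
    if value not in allowed:
        raise ValueError(f"{field_name} must be in {allowed}")
    return value
```

In the pydantic models this runs automatically on construction, so an `IRRequest(method="PM6")` would be rejected before the service ever sees it.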
2b1f7bce8113efac8c0084ee01c17879a8efac3c | 4,590 | py | Python | heat/tests/test_common_param_utils.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 265 | 2015-01-02T09:33:22.000Z | 2022-03-26T23:19:54.000Z | heat/tests/test_common_param_utils.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 8 | 2015-09-01T15:43:19.000Z | 2021-12-14T05:18:23.000Z | heat/tests/test_common_param_utils.py | noironetworks/heat | 7cdadf1155f4d94cf8f967635b98e4012a7acfb7 | [
"Apache-2.0"
] | 295 | 2015-01-06T07:00:40.000Z | 2021-09-06T08:05:06.000Z | #
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from heat.common import param_utils
from heat.tests import common
class TestExtractBool(common.HeatTestCase):
def test_extract_bool(self):
for value in ('True', 'true', 'TRUE', True):
self.assertTrue(param_utils.extract_bool('bool', value))
for value in ('False', 'false', 'FALSE', False):
self.assertFalse(param_utils.extract_bool('bool', value))
for value in ('foo', 't', 'f', 'yes', 'no', 'y', 'n', '1', '0', None):
self.assertRaises(ValueError, param_utils.extract_bool,
'bool', value)
class TestExtractInt(common.HeatTestCase):
def test_extract_int(self):
# None case
self.assertIsNone(param_utils.extract_int('num', None))
# 0 case
self.assertEqual(0, param_utils.extract_int('num', 0))
self.assertEqual(0, param_utils.extract_int('num', 0, allow_zero=True))
self.assertEqual(0, param_utils.extract_int('num', '0'))
self.assertEqual(0, param_utils.extract_int('num', '0',
allow_zero=True))
self.assertRaises(ValueError,
param_utils.extract_int,
'num', 0, allow_zero=False)
self.assertRaises(ValueError,
param_utils.extract_int,
'num', '0', allow_zero=False)
# positive values
self.assertEqual(1, param_utils.extract_int('num', 1))
self.assertEqual(1, param_utils.extract_int('num', '1'))
self.assertRaises(ValueError, param_utils.extract_int, 'num', '1.1')
self.assertRaises(ValueError, param_utils.extract_int, 'num', 1.1)
# negative values
self.assertEqual(-1, param_utils.extract_int('num', -1,
allow_negative=True))
self.assertEqual(-1, param_utils.extract_int('num', '-1',
allow_negative=True))
self.assertRaises(ValueError,
param_utils.extract_int, 'num', '-1.1',
allow_negative=True)
self.assertRaises(ValueError,
param_utils.extract_int, 'num', -1.1,
allow_negative=True)
self.assertRaises(ValueError, param_utils.extract_int, 'num', -1)
self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1')
self.assertRaises(ValueError, param_utils.extract_int, 'num', '-1.1')
self.assertRaises(ValueError, param_utils.extract_int, 'num', -1.1)
self.assertRaises(ValueError,
param_utils.extract_int, 'num', -1,
allow_negative=False)
self.assertRaises(ValueError,
param_utils.extract_int, 'num', '-1',
allow_negative=False)
self.assertRaises(ValueError,
param_utils.extract_int, 'num', '-1.1',
allow_negative=False)
self.assertRaises(ValueError,
param_utils.extract_int, 'num', -1.1,
allow_negative=False)
# Non-int value
self.assertRaises(ValueError,
param_utils.extract_int, 'num', 'abc')
self.assertRaises(ValueError,
param_utils.extract_int, 'num', '')
self.assertRaises(ValueError,
param_utils.extract_int, 'num', 'true')
self.assertRaises(ValueError,
param_utils.extract_int, 'num', True)
class TestExtractTags(common.HeatTestCase):
def test_extract_tags(self):
self.assertRaises(ValueError, param_utils.extract_tags, "aaaaaaaaaaaaa"
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
"aaaaaaaaaaaaaaaaa,a")
self.assertEqual(["foo", "bar"], param_utils.extract_tags('foo,bar'))
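A hedged sketch of the helper these tests exercise (not the actual `heat.common.param_utils` source): a minimal `extract_bool` matching the behavior asserted above, where only the spellings `'true'`/`'false'` (any case) and real booleans are accepted.

```python
# Hedged sketch, for illustration only: the real heat implementation may differ.
def extract_bool(name, value):
    underlying = str(value).lower()
    if underlying not in ('true', 'false'):
        raise ValueError('Unrecognized value "%s" for "%s", '
                         'acceptable values are: true, false.' % (value, name))
    return underlying == 'true'

print(extract_bool('bool', 'TRUE'))   # True
print(extract_bool('bool', False))    # False
```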
# --- File: commander/thirdparty/covertutils/shells/multi/shell.py (repo: how2how/ToyHome, license: Apache-2.0) ---
import cmd
import sys
import argparse
import threading
from covertutils.handlers.multi import MultiHandler
#
# The idea is to pass a list of handlers to start with, and create a MultiHandler object from this list.
#
# The initial arguments for this should be a list of Shells. From shells, the handlers will be retrieved and from there, the MultiHandler object will be created
#
try:
raw_input # Python 2
except NameError:
raw_input = input # Python 3
class CLIArgumentParser( argparse.ArgumentParser ) :
def exit(self, ex_code = 1, message = "Unrecognised") :
# print message
# print '[!] [EXIT] - ', ex_code, message
return
def error(self, message) : print (message)
class MultiShell( cmd.Cmd ) :
	def __init__( self, shells = None, output = None ) :
		cmd.Cmd.__init__(self)
		self.shells = {}
		# avoid a mutable default argument; fall back to an empty list
		for shell in (shells or []) :
			self.__add_handler_shell( shell )
self.prompt = 'covertpreter> '
def __add_handler_shell( self, shell ) :
self.shells[shell.handler.getOrchestrator().getIdentity()] = shell
def list_sessions( self, verbose = False ) :
handlers = [shell.handler for shell in self.shells.values()]
to_show = []
		for orch_id, shell in self.shells.items() :
# print shell.sysinfo
try :
sysinfo = shell.sysinfo
sysinfo = ' - '.join([sysinfo[i] for i in (0,4,3,8)]) # hostname, distro, locale, user
except Exception as e:
# print e
sysinfo = None # Could dispatch selectively a 'SI' command
handler = shell.handler
if verbose :
streams = handler.getOrchestrator().getStreams()
else :
streams = []
to_show.append( {'row' : (orch_id, handler.__class__) , 'streams' : streams, 'info' : sysinfo } )
print ("\tCurrent Sessions:")
for i, sess_dict in enumerate(to_show) :
# print row
row = sess_dict['row']
streams = sess_dict['streams']
num_row = "%d) {:16} - {}" % (i)
print ( num_row.format(*row) )
			if sess_dict['info'] :
				print (sess_dict['info'])
			else :
				print ("System Info: N/A")
for stream in streams :
print ("\t-> {}".format(stream))
print ('\n')
def default(self, line) :
if not line :
return
arg_parser = CLIArgumentParser(prog = "\n%s" % self.prompt)
arg_parser.add_argument("SESSIONS", nargs = '*', help = 'The SESSIONS IDs that the MESSAGE must be sent to. If not provided, it defaults to ALL SESSIONS', default = None)
arg_parser.add_argument("STREAM", type = str, help = 'The STREAM to send the MESSAGE. If a SESSION does not support the provided STREAM, it will be omitted')
arg_parser.add_argument("MESSAGE", type = str, default = None, help = "The MESSAGE to send to the selected SESSIONS")
try :
args = arg_parser.parse_args( line.split() )
except Exception as e :
			print (e)
return
		if args.MESSAGE is None :
# print arg_parser.print_help()
return
# print args
if not args.SESSIONS :
			print ("No sessions selected, ALL sessions will be commanded") # Warning
# [y/n] thing
resp = raw_input("Are you sure? [y/N]: ")
if resp.lower() != 'y' :
return
else :
				args.SESSIONS = list(self.shells.keys())
for session_id in args.SESSIONS :
if session_id in self.shells.keys() :
shell = self.shells[session_id]
command = "{stream_char}{stream} {message}".format(
stream_char = shell.stream_preamp_char,
stream = args.STREAM,
message = args.MESSAGE,
)
print ( "'%s' -> <%s>" % (command, session_id) )
shell.onecmd( command )
def do_session( self, line ) :
arg_parser = CLIArgumentParser(prog = "session")
arg_parser.add_argument("-i", "--session_num", help = "Jumps to the shell designated by the SESSION_NUM", type = int, required = False)
arg_parser.add_argument("-s", "--session_id", help = "Jumps to the shell designated by the SESSIONS_ID", type = str, required = False)
arg_parser.add_argument("-l", "--list", help = "Lists all current Session Shells", action = 'store_true' )
arg_parser.add_argument("-v", help = "Verbose output of '-l'", action = 'store_true' )
try :
args = arg_parser.parse_args( line.split() )
except Exception as e :
			print (e)
return
# print args
if args.list :
return self.list_sessions( args.v )
try :
i = args.session_num
# print list(self.shells)[i]
			shell = list(self.shells.values())[i]
return shell.start(False)
except Exception as e:
# print e
pass
if args.session_id in self.shells.keys() :
shell = self.shells[args.session_id]
return shell.start(False)
def do_handler( self, line ) :
arg_parser = CLIArgumentParser(prog='handler')
subparsers = arg_parser.add_subparsers(help='command for the handler', dest="command")
parser_add = subparsers.add_parser('add', help='Add a Handler in a new Thread and start a session')
parser_add.add_argument("SCRIPT", help = "The file that contains the Handler in Python 'covertutils' code", type = str)
parser_add.add_argument("ARGUMENTS", help = "The arguments passed to the Python 'covertutils' handler script", type = str, default = '', nargs = '*')
parser_add.add_argument("--shell", '-s', help = "The argument in the Python code that contains the 'covertutils.shell.baseshell.BaseShell' implementation",
type = str, default = 'shell')
parser_del = subparsers.add_parser('del', help='Delete a Handler')
parser_del.add_argument("SESSION_ID", help = "The ID of the SESSION to purge", type = str)
parser_del.add_argument("--kill", '-k', help = "Send 'KILL' command to the corresponding Agent [TODO]", action = 'store_true', default = False)
args = arg_parser.parse_args(line.split())
# print args
if args.command == 'add' :
			if args.SCRIPT is None :
				arg_parser.print_help()
				return
filename = args.SCRIPT
arguments = args.ARGUMENTS
shell_var = args.shell
mount_thread = threading.Thread( target = self.mount_new_handler, args = ( filename, arguments, shell_var ) )
mount_thread.daemon = True
mount_thread.start()
elif args.command == 'del' :
			if args.SESSION_ID is None :
				arg_parser.print_help()
				return
self.unmount_handler(args.SESSION_ID, args.kill)
def unmount_handler( self, orch_id, kill = False ) :
if orch_id in self.shells.keys() :
if kill :
self.shells[orch_id].onecmd("!control kill")
self.shells[orch_id].handler.stop()
# self.shells[orch_id].handler.receive_function = None
del self.shells[orch_id]
def mount_new_handler( self, filename, arguments, shell_var = 'shell' ) : # handler add examples/tcp_reverse_handler.py 8080 Pa55phra531
		variables = ['handler_filename.py'] + arguments
sys.argv = variables
		print (variables)
with open(filename, 'r') as handler_codefile :
handler_code = handler_codefile.read()
# namespace_dict = locals()
namespace_dict = {}
handler_code = handler_code.replace("%s.start()" % shell_var, 'pass') # Replace the blocking command of the shell
# exec( handler_code )
exec( handler_code, namespace_dict )
		print (namespace_dict[shell_var])
self.__add_handler_shell( namespace_dict[shell_var] )
		print ("Added Session!")
def emptyline( self ) : return
def do_EOF( self, *args ) : return
def do_exit( self, *args ) : return self.do_q( *args )
def do_quit( self, *args ) : return self.do_q( *args )
def do_q( self, *args ) : return self.quitPrompt()
def quitPrompt( self, *args ) :
# print( args )
exit_input = raw_input("[!]\tQuit shell? [y/N] ")
if exit_input.lower() == 'y' :
print( "Aborted by the user..." )
# sys.exit(0)
return True
return False
def start( self, warn = True ) :
# try :
while True :
ret = None
try :
ret = self.cmdloop()
# if ret :
break
except KeyboardInterrupt :
print ("\n[!] For exiting use [q|quit|exit]")
# --- File: Aprior.py (repo: zhangmingming-chb/Aprior, license: MIT) ---
#-*-coding:utf-8-*-
from typing import List
from itertools import chain
class Aprior():
def __init__(self, support, confidence):
self.support = support
self.confidence = confidence
def set_transactions(self, transactions: List[List[str]]) -> None:
self.transactions = transactions
def get_I(self) -> List[str]:
return sorted(set(chain(*self.transactions)))
def F(self, items: List[List[str]] or List[str]) -> List[List[str]]:
        # count how many transactions contain each candidate itemset
records = {}
query_table = {}
for i in items:
for j in self.transactions:
query_table[id(i)] = i
if set(i).issubset(set(j)):
if id(i) not in records:
records[id(i)] = 1
else:
records[id(i)] += 1
        # keep candidates whose support meets the threshold (the k-frequent itemsets)
item = []
for k, v in records.items():
if v / len(self.transactions) >= self.support:
item.append(query_table[k])
item = list(map(lambda x: sorted(list(x)), item))
return item
def generate_1_items(self, I: List[str]) -> List[List[str]]:
return self.F(I)
def generate_k_items(self, k_last_items: List[List[str]]) -> List[List[str]]:
prefix = []
for i in k_last_items:
prefix.append(",".join(i[:-1]))
records = {}
for i in prefix:
records[i] = []
for i in k_last_items:
            # group itemsets that differ only in their last element
current_prefix = ",".join(i[:-1])
if current_prefix in records.keys():
records[current_prefix].append(i)
items = []
for v in records.values():
for i in range(len(v)):
for j in range(i + 1, len(v)):
temp = sorted(list(set(v[i]).union(v[j])))
                    # check that the candidate's (k-1)-suffix is itself a frequent itemset
if temp[1:] in k_last_items:
items.append(temp)
return self.F(items)
def k_items_result(self) -> List[List[str]]:
I = self.get_I()
items = self.generate_1_items(I)
item_max_length = len(sorted(self.transactions,key=lambda x:len(x))[-1])
while True:
if len(items) == 1 or len(items[0]) > item_max_length:
break
last_items = items[::]
items = self.generate_k_items(items)
            # no qualifying frequent itemsets; return the previous round's result
if len(items) == 0:
return last_items
return items
tran = [
['1','2','3'],
['1','2','4'],
['1','3','4'],
['1','2','3','5'],
['1','3','5'],
['2','4','5'],
['1','2','3','4']
]
# tran = [
# ['apple','banana','orange'],
# ['apple','banana','peer'],
# ['apple','orange','peer'],
# ['apple','banana','orange','mongo'],
# ['apple','orange','mongo'],
# ['banana','peer','mongo'],
# ['apple','banana','orange','peer']
# ]
ap = Aprior(support=3/7,confidence=5/7)
ap.set_transactions(tran)
print(ap.k_items_result()) # [['1', '2', '3']]
# --- File: dlex/tf/models/base_v2.py (repo: dvtrung/dl-torch, license: MIT) ---
import tensorflow as tf
from dlex import Params
from dlex.datasets.tf import Dataset
class BaseModel(tf.keras.Model):
def __init__(self, params: Params, dataset: Dataset):
super().__init__()
self.params = params
self.dataset = dataset
self._optimizer = None
self._loss = None
@property
def model(self):
        raise NotImplementedError
def compile(self):
super().compile(
optimizer=self.optimizer,
loss=self.loss,
metrics=self.metrics)
return self.model
@property
def optimizer(self):
return tf.keras.optimizers.SGD(learning_rate=0.02) | 25.407407 | 58 | 0.603499 | 76 | 686 | 5.302632 | 0.421053 | 0.039702 | 0.069479 | 0.099256 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006342 | 0.310496 | 686 | 27 | 59 | 25.407407 | 0.845666 | 0 | 0 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.136364 | 0.045455 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b240ebab74f79c8c466f67dcad71acc3ab5e267 | 67 | py | Python | slack_bolt/context/respond/__init__.py | hirosassa/bolt-python | befc3a1463f3ac8dbb780d66decc304e2bdf3e7a | [
"MIT"
] | 504 | 2020-08-07T05:02:57.000Z | 2022-03-31T14:32:46.000Z | slack_bolt/context/respond/__init__.py | hirosassa/bolt-python | befc3a1463f3ac8dbb780d66decc304e2bdf3e7a | [
"MIT"
] | 560 | 2020-08-07T01:16:06.000Z | 2022-03-30T00:40:56.000Z | slack_bolt/context/respond/__init__.py | hirosassa/bolt-python | befc3a1463f3ac8dbb780d66decc304e2bdf3e7a | [
"MIT"
] | 150 | 2020-08-07T09:41:14.000Z | 2022-03-30T04:54:51.000Z | # Don't add async module imports here
from .respond import Respond
# --- File: pi/commands/token/reset.py (repo: pan-net-security/pi-bundle, license: MIT) ---
from pi.commands.token.base import TokenBase
import json
import re
class Reset(TokenBase):
def __init__(self):
super().__init__()
def run(self):
handler = self.parse_subcommand_
handler()
def reset(self):
results = []
# currently supporting just one argument
arg_user = self.request.args[0]
# options not yet supported
# future implementation: serial - to reset one specific token failcounter
# self.request.options)
user = {'name': arg_user}
try:
reset_tokens = self.reset_tokens(user=arg_user)
if reset_tokens:
reset_tokens = json.loads(reset_tokens.content)
#print(json.dumps(reset_tokens, indent=4, sort_keys=True))
user['result'] = reset_tokens['result']['status']
else:
user['result'] = False
except Exception as e:
self.fail(e)
results.append(user)
self.response.content(results, template='token_reset').send()
@property
def parse_subcommand_(self):
if self.request.args:
return self.reset
self.fail("This command requires at least one argument and none was passed.") | 26.787234 | 85 | 0.599682 | 144 | 1,259 | 5.076389 | 0.534722 | 0.105335 | 0.04104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002286 | 0.305004 | 1,259 | 47 | 85 | 26.787234 | 0.833143 | 0.17077 | 0 | 0 | 0 | 0 | 0.099134 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0.034483 | 0.103448 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b25709c41a264855b79fbdf3a37d395af6fdc3b | 4,500 | py | Python | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | canaries/canaries.py | wyatt-howe/canaries | 0bd0783e388dcee21fd3addd09a9299940627536 | [
"MIT"
] | null | null | null | """Library for loading dynamic library files.
Python library for choosing and loading dynamic library
files compatible with the operating environment.
"""
import doctest
import sys
import os.path
import platform
from ctypes import cdll, create_string_buffer
from multiprocessing import Pool
class canaries():
"""
Wrapper class for static methods.
"""
@staticmethod
def _xdll(path):
"""
Load a library using the appropriate method.
"""
system = platform.system()
xdll = cdll
if system == 'Windows':
# pylint: disable=import-outside-toplevel
from ctypes import windll as xdll # pragma: no cover
return xdll.LoadLibrary(path)
@staticmethod
def _probe(lib):
"""
Probe whether a library has a correctly implemented
verification method.
"""
# Build input and output buffers.
treat = create_string_buffer(5)
for (i, c) in enumerate('treat'):
try:
treat[i] = c
except:
treat[i] = ord(c)
chirp = create_string_buffer(5)
# Attempt to invoke the canary method.
r = lib.canary(chirp, treat)
# Decode results.
chirp = chirp.raw
if isinstance(chirp, bytes):
chirp = chirp.decode()
# Check that results are correct.
return r == 0 and chirp == 'chirp'
@staticmethod
def _isolated(path):
"""
Method to be used by isolated probe process.
"""
return canaries._probe(canaries._xdll(path))
@staticmethod
def canary(system, path):
"""
Single-path wrapper method for convenience.
"""
paths = {}
paths[system] = [path]
obj = canaries(paths)
return obj.lib if hasattr(obj, 'lib') else None
@staticmethod
def load(paths):
"""
Wrapper method for backwards compatibility.
"""
obj = canaries(paths)
return obj.lib if hasattr(obj, 'lib') else None
def __init__(self, paths):
"""
Attempt to load a library at one of the supplied
paths based on the platform. Retains state in order
to record all exceptions and incorrect outputs.
"""
if not isinstance(paths, (str, list, dict)):
raise TypeError(
"input must be a string, list, or dictionary"
)
if isinstance(paths, dict) and\
not all(isinstance(p, (str, list)) for p in paths.values()):
raise TypeError(
"path values in dictionary must be strings or lists of strings"
)
self.lib = None
self.exceptions = []
self.outputs = []
system = platform.system()
if isinstance(paths, str):
self.lib = self._canary(system, paths)
elif isinstance(paths, list):
for path in paths:
self.lib = self._canary(system, path)
if self.lib is not None:
break
elif isinstance(paths, dict):
if system in paths:
ps = paths[system]
for path in [ps] if isinstance(ps, str) else ps:
self.lib = self._canary(system, path)
if self.lib is not None:
break
def _canary(self, system, path):
"""
Attempt to load a library file at the supplied path
and verify that its exported functions work.
"""
lib = None
# Only attempt to load object files that exist.
if os.path.exists(path):
# Confirm that the library's exported functions work.
try:
# Invoke compatibility validation method.
with Pool(1) as p:
task = p.imap(canaries._isolated, [path])
                    if task.next(5):  # Process has five seconds to succeed.
lib = canaries._xdll(path)
except:
self.exceptions.append((
(system, path),
(
sys.exc_info()[0], sys.exc_info()[1],
sys.exc_info()[2].tb_lineno
)
))
return lib
# Provide direct access to static methods.
canary = canaries.canary
load = canaries.load
if __name__ == "__main__":
doctest.testmod() # pragma: no cover
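A hedged, standalone sketch of the core step this module wraps: locating a shared library for the current platform with `ctypes.util.find_library` and loading it with `ctypes.CDLL` (the `'m'` math library is just an illustrative choice, and `find_library` may return `None` on some systems).

```python
# Illustration only; the canaries class above adds probing and process isolation.
import ctypes
import ctypes.util

name = ctypes.util.find_library('m')      # may be None on some platforms
lib = ctypes.CDLL(name) if name else None  # only load when a path/name was found
print(name, lib is not None)
```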
# --- File: examples/wsgi.py (repo: eliziario/django-websocket-redis, license: MIT) ---
# Django with Websocket for Redis under a single uWSGI server
# (for sites with low traffic)
#
# uwsgi --virtualenv /path/to/virtualenv --http :9090 --gevent 100 --http-websockets --module wsgi
#
# See: http://django-websocket-redis.readthedocs.io/en/latest/running.html#django-with-websockets-for-redis-as-a-stand-alone-uwsgi-server
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'chatserver.settings')
from django.core.wsgi import get_wsgi_application
from django.conf import settings
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
_django_app = get_wsgi_application()
_websocket_app = uWSGIWebsocketServer()
def application(environ, start_response):
if environ.get('PATH_INFO').startswith(settings.WEBSOCKET_URL):
return _websocket_app(environ, start_response)
return _django_app(environ, start_response)
# --- File: src/py/twitter_credentials.py (repo: franciscomoura/social-media-analysis, license: Apache-2.0) ---
consumer_key = '<your key>'
consumer_secret = '<your secret>'
access_token = '<your token>'
access_token_secret = '<your token secret>'
| 27.2 | 43 | 0.727941 | 18 | 136 | 5.222222 | 0.333333 | 0.212766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 136 | 4 | 44 | 34 | 0.789916 | 0 | 0 | 0 | 0 | 0 | 0.397059 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
2b275e20cb5d8a0781ea95c53cddfe783077cf6e | 1,820 | py | Python | pmaf/pipe/agents/miners/_metakit.py | mmtechslv/PhyloMAF | bab43dd4a4d2812951b1fdf4f1abb83edb79ea88 | [
"BSD-3-Clause"
] | 1 | 2021-07-02T06:24:17.000Z | 2021-07-02T06:24:17.000Z | pmaf/pipe/agents/miners/_metakit.py | mmtechslv/PhyloMAF | bab43dd4a4d2812951b1fdf4f1abb83edb79ea88 | [
"BSD-3-Clause"
] | 1 | 2021-06-28T12:02:46.000Z | 2021-06-28T12:02:46.000Z | pmaf/pipe/agents/miners/_metakit.py | mmtechslv/PhyloMAF | bab43dd4a4d2812951b1fdf4f1abb83edb79ea88 | [
"BSD-3-Clause"
] | null | null | null | from abc import ABC, abstractmethod
class MinerBackboneMetabase(ABC):
""" """
@abstractmethod
def verify_docker(self,docker):
"""
Parameters
----------
docker :
Returns
-------
"""
pass
@abstractmethod
def yield_accession_by_identifier(self, docker, **kwargs):
"""
Parameters
----------
docker :
**kwargs :
Returns
-------
"""
pass
@abstractmethod
def yield_sequence_by_identifier(self, docker, **kwargs):
"""
Parameters
----------
docker :
**kwargs :
Returns
-------
"""
pass
@abstractmethod
def yield_phylogeny_by_identifier(self, docker, **kwargs):
"""
Parameters
----------
docker :
**kwargs :
Returns
-------
"""
pass
@abstractmethod
def yield_taxonomy_by_identifier(self, docker, **kwargs):
"""
Parameters
----------
docker :
**kwargs :
Returns
-------
"""
pass
@abstractmethod
def yield_identifier_by_docker(self, docker, **kwargs):
"""
Parameters
----------
docker :
**kwargs :
Returns
-------
"""
pass
@property
@abstractmethod
def factor(self):
""" """
pass
@property
@abstractmethod
def mediator(self):
""" """
pass
@property
@abstractmethod
def state(self):
""" """
pass
# --- File: 01_hello/hello02_comment.py (repo: rebeckaflynn/tiny_python_projects, license: MIT) ---
# Purpose: Say hello
print('Hello, World!')
# --- File: docs/end-to-end/library/GeocontribOnCoordinatesLibrary.py (repo: hcharp/geocontrib, license: Apache-2.0) ---
# Copyright (c) 2017-2021 Neogeo-Technologies.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from selenium.webdriver.common.action_chains import ActionChains
from utils import get_driver
def geocontrib_click_at_coordinates(pos_x, pos_y):
actions = ActionChains(get_driver())
my_map = get_driver().find_element_by_xpath("//html/body/main/div/div/form/div[3]/div/div/div[1]/div[4]/div")
actions.move_to_element_with_offset(my_map, pos_x, pos_y).click().perform()
get_driver().find_element_by_xpath("//button[@type='submit']").click()
# --- File: torch/quantization/fx/qconfig_utils.py (repo: deltabravozulu/pytorch) ---
import torch
from collections import OrderedDict
from typing import Union, Callable, Any, Dict
import re
from .utils import _parent_name
QConfigAny = Union[torch.quantization.QConfig,
torch.quantization.QConfigDynamic, None]
def get_flattened_qconfig_dict(qconfig_dict):
    """ Flatten the global, object_type and module_name qconfigs
    into a single qconfig_dict so that it can be used by the
    propagate_qconfig_ function.
    "module_name_regex" is ignored for now since it is not supported
    in propagate_qconfig_, but that can be fixed later.
For example:
Input: {
"": qconfig,
"object_type": [
(torch.add, qconfig)
],
"module_name": [
("conv", qconfig)
]
}
Output: {
"": qconfig,
torch.add: qconfig,
"conv": qconfig
}
"""
    flattened = {}
if '' in qconfig_dict:
flattened[''] = qconfig_dict['']
def flatten_key(key):
if key in qconfig_dict:
for (obj, qconfig) in qconfig_dict[key].items():
flattened[obj] = qconfig
flatten_key('object_type')
flatten_key('module_name')
return flattened
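The flattening behaviour documented above can be sketched standalone (a minimal reimplementation with a placeholder qconfig object and string keys, not importing torch):

```python
def flatten_qconfig_dict(qconfig_dict):
    """Collapse the '', 'object_type' and 'module_name' sections into one dict."""
    flattened = {}
    if '' in qconfig_dict:
        flattened[''] = qconfig_dict['']
    for key in ('object_type', 'module_name'):
        # each section is a list of (object, qconfig) pairs
        for obj, qconfig in qconfig_dict.get(key, []):
            flattened[obj] = qconfig
    return flattened

qconfig = object()  # stand-in for a real QConfig
flat = flatten_qconfig_dict({'': qconfig,
                             'object_type': [('torch.add', qconfig)],
                             'module_name': [('conv', qconfig)]})
print(sorted(flat))  # -> ['', 'conv', 'torch.add']
```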
def convert_dict_to_ordered_dict(qconfig_dict: Any) -> Dict[str, Dict[Any, Any]]:
""" Convert dict in qconfig_dict to ordered dict
"""
# convert a qconfig list for a type to OrderedDict
def _convert_to_ordered_dict(key, qconfig_dict):
qconfig_dict[key] = OrderedDict(qconfig_dict.get(key, []))
_convert_to_ordered_dict('object_type', qconfig_dict)
_convert_to_ordered_dict('module_name_regex', qconfig_dict)
_convert_to_ordered_dict('module_name', qconfig_dict)
return qconfig_dict
def get_object_type_qconfig(
qconfig_dict: Any,
object_type: Union[Callable, str],
fallback_qconfig: QConfigAny) -> QConfigAny:
# object_type can be
# 1. module type (call_module)
# 2. function (call_function)
# 3. string (call_method)
return qconfig_dict['object_type'].get(
object_type, fallback_qconfig)
def get_module_name_regex_qconfig(qconfig_dict, module_name, fallback_qconfig):
for regex_pattern, qconfig in \
qconfig_dict['module_name_regex'].items():
if re.match(regex_pattern, module_name):
# first match wins
return qconfig
return fallback_qconfig
def get_module_name_qconfig(qconfig_dict, module_name, fallback_qconfig):
if module_name == '':
# module name qconfig not found
return fallback_qconfig
if module_name in qconfig_dict['module_name']:
return qconfig_dict['module_name'][module_name]
else:
parent, _ = _parent_name(module_name)
return get_module_name_qconfig(qconfig_dict, parent, fallback_qconfig)
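The recursive parent fallback above can be illustrated with a plain-dict sketch (the `parent_name` helper here is a stand-in for `_parent_name`, splitting off the last dotted component):

```python
def parent_name(name):
    # 'features.0.conv' -> ('features.0', 'conv'); top level -> ('', name)
    head, _, tail = name.rpartition('.')
    return head, tail

def module_name_qconfig(qconfig_by_name, module_name, fallback):
    if module_name == '':
        return fallback                      # no (grand)parent matched
    if module_name in qconfig_by_name:
        return qconfig_by_name[module_name]  # exact match wins
    parent, _ = parent_name(module_name)
    return module_name_qconfig(qconfig_by_name, parent, fallback)

table = {'features': 'qconfig_features'}
print(module_name_qconfig(table, 'features.0.conv', 'global'))  # -> qconfig_features
print(module_name_qconfig(table, 'classifier.1', 'global'))     # -> global
```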
# get qconfig for module_name,
# fallback to module_name_regex_qconfig, module_type_qconfig,
# global_qconfig if necessary
def get_qconfig(qconfig_dict, module_type, module_name, global_qconfig):
module_type_qconfig = get_object_type_qconfig(
qconfig_dict, module_type, global_qconfig)
module_name_regex_qconfig = get_module_name_regex_qconfig(
qconfig_dict, module_name, module_type_qconfig)
module_name_qconfig = get_module_name_qconfig(
qconfig_dict, module_name, module_name_regex_qconfig)
return module_name_qconfig
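The lookup chain in `get_qconfig` resolves the most specific match: module name first, then module-name regex, then object type, then the global qconfig. A compact sketch of that precedence, using plain strings as stand-in qconfigs (without the parent-name recursion above):

```python
import re

def resolve_qconfig(qconfig_dict, module_type, module_name, global_qconfig):
    # least specific first: each lookup uses the previous result as fallback
    qconfig = qconfig_dict.get('object_type', {}).get(module_type, global_qconfig)
    for pattern, regex_qconfig in qconfig_dict.get('module_name_regex', {}).items():
        if re.match(pattern, module_name):
            qconfig = regex_qconfig   # first matching pattern wins
            break
    return qconfig_dict.get('module_name', {}).get(module_name, qconfig)

cfg = {'object_type': {'Linear': 'per_type'},
       'module_name_regex': {r'encoder\..*': 'per_regex'},
       'module_name': {'encoder.0': 'per_name'}}
print(resolve_qconfig(cfg, 'Linear', 'encoder.0', 'global'))   # -> per_name
print(resolve_qconfig(cfg, 'Linear', 'decoder.0', 'global'))   # -> per_type
```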
| 33.41 | 81 | 0.701287 | 427 | 3,341 | 5.128806 | 0.192037 | 0.141553 | 0.057534 | 0.067123 | 0.267123 | 0.210959 | 0.130594 | 0.116895 | 0.042009 | 0 | 0 | 0.001148 | 0.2176 | 3,341 | 99 | 82 | 33.747475 | 0.836649 | 0.243939 | 0 | 0.037736 | 0 | 0 | 0.045811 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150943 | false | 0 | 0.09434 | 0.018868 | 0.415094 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2a9c9a4b580fa5f2d5bbe38b2d93f37f8e19c1 | 3,062 | py | Python | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | researchmap/wrapper.py | RTa-technology/researchmap.py | 6aa427e1564644b20ba2001dfecf63457ef40463 | [
"MIT"
] | null | null | null | from typing import List
import urllib.parse
from .adapter import Adapter
__all__ = ['Wrapper']
class Wrapper:
"""Wrapper class for the Adapter class.
This class is used to wrap the Adapter class and provide a more
convenient interface for the user.
"""
def __init__(self, adapter: Adapter) -> None:
self._adapter = adapter
def get_bulk(self, params=None) -> dict:
"""Get a list of researchers from the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
Returns
-------
:class:`dict`
"""
return self._adapter.get_bulk(params=params)
def set_bulk(self, jsondata=None, params=None) -> dict:
    """Send bulk data to the API and return the processing results.
Parameters
----------
jsondata : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
if jsondata is None:
jsondata = {}
    data = self._adapter.set_bulk(params=params, jsondata=jsondata)
    bulk_data = {}
    bulk_data['id'] = urllib.parse.parse_qs(urllib.parse.urlparse(data['url']).query)['id'][0]
    self._adapter.get_bulk_results(bulk_data)  # first fetch reports any errors
    bulk_data['display_type'] = "success"
    return self._adapter.get_bulk_results(bulk_data)
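`set_bulk` pulls the job id out of the URL returned by the API with `urlparse`/`parse_qs`; standalone, that extraction looks like this (the URL is a made-up example):

```python
import urllib.parse

def extract_id(url):
    # parse_qs maps each query parameter to a list of values
    query = urllib.parse.urlparse(url).query
    return urllib.parse.parse_qs(query)['id'][0]

print(extract_id('https://api.example.org/bulk/results?id=12345&page=1'))  # -> 12345
```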
def set_bulk_apply(self, params=None) -> dict:
    """Apply a pending bulk operation via the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
return self._adapter.set_bulk_apply(params=params)
def get_bulk_results(self, params=None) -> dict:
    """Get the results of a bulk operation from the API.
Parameters
----------
params : :class:`dict`
A dictionary containing the parameters to be passed to the API.
Returns
-------
:class:`dict`
"""
if params is None:
params = {}
return self._adapter.get_bulk_results(params=params)
def search_researcher(self, payload=None) -> dict:
"""Search for a researcher in the API.
Parameters
----------
payload : :class:`dict`
A dictionary containing the parameters to be passed to the API.
The payload to send to the API. Defaults to None.
Returns
-------
:class:`dict`
"""
if payload is None:
payload = {}
return self._adapter.search_researcher(payload)
def usage(self) -> dict:
return self._adapter.get_usage()
| 26.17094 | 95 | 0.612998 | 387 | 3,062 | 4.726098 | 0.173127 | 0.045927 | 0.039366 | 0.06561 | 0.588846 | 0.562603 | 0.551668 | 0.49754 | 0.49754 | 0.49754 | 0 | 0.000448 | 0.270738 | 3,062 | 116 | 96 | 26.396552 | 0.81863 | 0.423253 | 0 | 0.153846 | 0 | 0 | 0.023125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.179487 | false | 0 | 0.076923 | 0.025641 | 0.435897 | 0.102564 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2dc91a67e56678c390a41ec58ff7af3ed3237a | 2,888 | py | Python | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | demo/MagicMind/python/calibrator_custom_data.py | huismiling/YOLOX | d9d1c1e8c6362c71703d34e25765a2dfe8618e4a | [
"Apache-2.0"
] | null | null | null | from typing import List
import cv2
import numpy
import magicmind.python.runtime as mm
from magicmind.python.common.types import get_numpy_dtype_by_datatype
import os
import sys
def preprocess(img, input_size, swap=(2, 0, 1)):
if len(img.shape) == 3:
padded_img = numpy.ones((input_size[0], input_size[1], 3), dtype=numpy.uint8) * 114
else:
padded_img = numpy.ones(input_size, dtype=numpy.uint8) * 114
r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])
resized_img = cv2.resize(
img,
(int(img.shape[1] * r), int(img.shape[0] * r)),
interpolation=cv2.INTER_LINEAR,
).astype(numpy.uint8)
padded_img[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized_img
padded_img = padded_img.transpose(swap)
padded_img = numpy.ascontiguousarray(padded_img, dtype=numpy.float32)
return padded_img, r
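`preprocess` letterboxes the image: it scales by the smaller of the two height/width ratios so the image fits inside the target size, then pastes it onto a grey (value 114) canvas. The scale computation on its own (numpy-only, no OpenCV needed):

```python
import numpy

def letterbox_scale(img_hw, input_hw):
    """Return the resize ratio and the resized (h, w) that fit inside input_hw."""
    r = min(input_hw[0] / img_hw[0], input_hw[1] / img_hw[1])
    return r, (int(img_hw[0] * r), int(img_hw[1] * r))

r, resized = letterbox_scale((480, 640), (416, 416))
print(r, resized)  # -> 0.65 (312, 416)
canvas = numpy.ones((416, 416, 3), dtype=numpy.uint8) * 114  # grey padding canvas
```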
def load_multi_image(data_paths: List[str], input_wh: List[int], target_dtype: mm.DataType = mm.DataType.FLOAT32) -> numpy.ndarray:
# Load multiple pre-processed image into a NCHW style ndarray
images = []
for path in data_paths:
img = cv2.imread(path)
images.append(preprocess(img, input_wh)[0][numpy.newaxis, :])
ret = numpy.concatenate(tuple(images), axis = 0)
return numpy.ascontiguousarray(
ret.astype(dtype = get_numpy_dtype_by_datatype(target_dtype)))
class FixedCalibData(mm.CalibDataInterface):
def __init__(self, shape: mm.Dims, data_type: mm.DataType, max_samples: int, data_paths: str):
super().__init__()
self.shape_ = shape
self.data_type_ = data_type
self.batch_size_ = shape.GetDimValue(0)
self.input_wh = [shape.GetDimValue(3), shape.GetDimValue(2)]
data_lines = [itd.strip() for itd in open(data_paths).readlines() if os.path.isfile(itd.strip())]
self.max_samples_ = min(max_samples, len(data_lines))
self.data_paths_ = data_lines
self.current_sample_ = None
self.outputed_sample_count = 0
def get_shape(self):
return self.shape_
def get_data_type(self):
return self.data_type_
def get_sample(self):
return self.current_sample_
def next(self):
beg_ind = self.outputed_sample_count
end_ind = self.outputed_sample_count + self.batch_size_
if end_ind > self.max_samples_:
return mm.Status(mm.Code.OUT_OF_RANGE, "End reached")
self.current_sample_ = load_multi_image(self.data_paths_[beg_ind:end_ind],
input_wh = self.input_wh,
target_dtype = self.data_type_)
self.outputed_sample_count = end_ind
return mm.Status.OK()
def reset(self):
self.current_sample_ = None
self.outputed_sample_count = 0
return mm.Status.OK()
| 35.219512 | 132 | 0.655125 | 400 | 2,888 | 4.465 | 0.28 | 0.040314 | 0.050392 | 0.06439 | 0.182531 | 0.113102 | 0.050392 | 0.050392 | 0.050392 | 0 | 0 | 0.017671 | 0.235803 | 2,888 | 81 | 133 | 35.654321 | 0.791572 | 0.020429 | 0 | 0.096774 | 0 | 0 | 0.003891 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.129032 | false | 0 | 0.112903 | 0.048387 | 0.387097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b2eb592acc995c4132c3288aeaefe49afa5e490 | 66,478 | py | Python | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | null | null | null | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | null | null | null | probreg/main.py | albertvisser/probreg | 5f685616221e3261afe0d8ae8506cad9a719fa82 | [
"MIT"
] | null | null | null | #! usr/bin/env python
"""Actie (was: problemen) Registratie, GUI toolkit onafhankelijke code
"""
import os
# import sys
import pathlib
import functools
import probreg.gui as gui
import probreg.shared as shared # import DataError, et_projnames
import probreg.dml_django as dmls
import probreg.dml_xml as dmlx
LIN = os.name == 'posix'
class Page():
"base class for notebook page"
def __init__(self, parent, pageno, standard=True):
self.parent = parent
self.pageno = pageno
self.is_text_page = standard
if standard:
self.gui = gui.PageGui(parent, self)
    def get_toolbar_data(self, textfield):
        "return texts, shortcuts and picture names for setting up the toolbar"
return (('&Bold', 'Ctrl+B', 'icons/sc_bold', 'Toggle Bold', textfield.text_bold,
textfield.update_bold),
('&Italic', 'Ctrl+I', 'icons/sc_italic', 'Toggle Italic', textfield.text_italic,
textfield.update_italic),
('&Underline', 'Ctrl+U', 'icons/sc_underline', 'Toggle Underline',
textfield.text_underline, textfield.update_underline),
('Strike&through', 'Ctrl+~', 'icons/sc_strikethrough', 'Toggle Strikethrough',
textfield.text_strikethrough),
# ("Toggle &Monospace", 'Shift+Ctrl+M', 'icons/text',
# 'Switch using proportional font off/on', textfield.toggle_monospace),
(),
("&Enlarge text", 'Ctrl+Up', 'icons/sc_grow', 'Use bigger letters',
textfield.enlarge_text),
("&Shrink text", 'Ctrl+Down', 'icons/sc_shrink', 'Use smaller letters',
textfield.shrink_text),
(),
('To &Lower Case', 'Shift+Ctrl+L', 'icons/sc_changecasetolower',
'Use lower case letters', textfield.case_lower),
('To &Upper Case', 'Shift+Ctrl+U', 'icons/sc_changecasetoupper',
'Use upper case letters', textfield.case_upper),
(),
("Indent &More", 'Ctrl+]', 'icons/sc_incrementindent', 'Increase indentation',
textfield.indent_more),
("Indent &Less", 'Ctrl+[', 'icons/sc_decrementindent', 'Decrease indentation',
textfield.indent_less),
(),
# ("Normal Line Spacing", '', 'icons/sc_spacepara1',
# 'Set line spacing to 1', textfield.linespacing_1),
# ("1.5 Line Spacing", '', 'icons/sc_spacepara15',
# 'Set line spacing to 1.5', textfield.linespacing_15),
# ("Double Line Spacing", '', 'icons/sc_spacepara2',
# 'Set line spacing to 2', textfield.linespacing_2),
# (),
("Increase Paragraph &Spacing", '', 'icons/sc_paraspaceincrease',
'Increase spacing between paragraphs', textfield.increase_paragraph_spacing),
("Decrease &Paragraph Spacing", '', 'icons/sc_paraspacedecrease',
'Decrease spacing between paragraphs', textfield.decrease_paragraph_spacing))
    def vulp(self):
        """fill the fields with the data to be shown, plus other initialisations
        method to be called before showing the page"""
self.initializing = True
self.parent.parent.enable_settingsmenu()
if self.parent.current_tab == 0:
text = self.seltitel
else:
            state = self.parent.current_tab == 1 and self.parent.newitem
self.enable_buttons(state)
text = self.parent.tabs[self.parent.current_tab].split(None, 1)
if self.parent.pagedata:
text = str(self.parent.pagedata.id) + ' ' + self.parent.pagedata.titel
self.parent.parent.set_windowtitle("{} | {}".format(self.parent.parent.title, text))
self.parent.parent.set_statusmessage()
if 1 < self.parent.current_tab < 6:
self.oldbuf = ''
is_readonly = False
if self.parent.pagedata is not None:
if self.parent.current_tab == 2 and self.parent.pagedata.melding:
self.oldbuf = self.parent.pagedata.melding
if self.parent.current_tab == 3 and self.parent.pagedata.oorzaak:
self.oldbuf = self.parent.pagedata.oorzaak
if self.parent.current_tab == 4 and self.parent.pagedata.oplossing:
self.oldbuf = self.parent.pagedata.oplossing
if self.parent.current_tab == 5 and self.parent.pagedata.vervolg:
self.oldbuf = self.parent.pagedata.vervolg
# self.text1.setReadOnly(self.parent.pagedata.arch)
is_readonly = self.parent.pagedata.arch
# print('in Page.vulp, setting text:', self.oldbuf)
self.gui.set_textarea_contents(self.oldbuf)
# print('in Page.vulp, set text')
if not is_readonly:
is_readonly = not self.parent.parent.is_user
self.gui.set_text_readonly(is_readonly)
self.gui.enable_toolbar(self.parent.parent.is_user)
# print('in Page.vulp, getting text')
self.oldbuf = self.gui.get_textarea_contents() # make sure it's rich text
# print('in Page.vulp, got text:', self.oldbuf)
self.gui.move_cursor_to_end()
# print(' set cursor to end')
self.initializing = False
        # self.parent.checked_for_leaving = True - wx version only, belongs in the gui
# print('end of Page.vulp')
    def readp(self, pid):
        "read one action"
        if self.parent.pagedata:  # clean up leftovers from the previous action
self.parent.pagedata.clear()
self.parent.pagedata = shared.Actie[self.parent.parent.datatype](self.parent.fnaam, pid,
self.parent.parent.user)
self.parent.parent.imagelist = self.parent.pagedata.imagelist
self.parent.old_id = self.parent.pagedata.id
self.parent.newitem = False
    def nieuwp(self, *args):
        """prepare for entering a new action"""
shared.log('opvoeren nieuwe actie')
self.parent.newitem = True
if self.leavep():
if self.parent.current_tab == 0:
self.parent.parent.gui.enable_book_tabs(True, tabfrom=1)
self.parent.pagedata = shared.Actie[self.parent.parent.datatype](self.parent.fnaam, 0,
self.parent.parent.user)
self.parent.pagedata.events.append((shared.get_dts(), 'Actie opgevoerd'))
self.parent.parent.imagelist = self.parent.pagedata.imagelist
if self.parent.current_tab == 1:
                self.vulp()  # to clear the fields
self.gui.set_focus()
else:
self.goto_page(1, check=False)
else:
self.parent.newitem = False
shared.log("leavep() geeft False: nog niet klaar met huidige pagina")
    def leavep(self):
        "wrap-up actions to perform before leaving the page"
newbuf = []
if self.parent.current_tab > 0:
newbuf = self.oldbuf
newbuf = self.gui.build_newbuf()
ok_to_leave = True
if self.parent.current_tab == 0:
pass
elif self.parent.changed_item:
message = "\n".join(("De gegevens op de pagina zijn gewijzigd, ",
"wilt u de wijzigingen opslaan voordat u verder gaat?"))
ok, cancel = gui.ask_cancel_question(self.gui, message)
if ok:
ok_to_leave = self.savep()
elif cancel:
# self.parent.checked_for_leaving = ok_to_leave = False
ok_to_leave = False
if not cancel:
self.parent.parent.gui.enable_all_other_tabs(True)
return ok_to_leave
    def savep(self, *args):
        "save an action's data, depending on the page"
if not self.gui.can_save:
return False
self.enable_buttons(False)
if self.parent.current_tab <= 1 or self.parent.current_tab == 6:
return False
text = self.gui.get_textarea_contents()
event_text = ''
if self.parent.current_tab == 2 and text != self.parent.pagedata.melding:
self.oldbuf = self.parent.pagedata.melding = text
event_text = "Meldingtekst aangepast"
if self.parent.current_tab == 3 and text != self.parent.pagedata.oorzaak:
self.oldbuf = self.parent.pagedata.oorzaak = text
event_text = "Beschrijving oorzaak aangepast"
if self.parent.current_tab == 4 and text != self.parent.pagedata.oplossing:
self.oldbuf = self.parent.pagedata.oplossing = text
event_text = "Beschrijving oplossing aangepast"
if self.parent.current_tab == 5 and text != self.parent.pagedata.vervolg:
self.oldbuf = self.parent.pagedata.vervolg = text
event_text = "Tekst vervolgactie aangepast"
if event_text:
self.parent.pagedata.events.append((shared.get_dts(), event_text))
self.update_actie()
self.parent.pages[0].gui.set_item_text(self.parent.pages[0].gui.get_selection(), 3,
self.parent.pagedata.updated)
return True
    def savepgo(self, *args):
        "save and go to the next page"
if not self.gui.can_saveandgo():
return
if self.savep():
self.goto_next()
else:
self.enable_buttons()
    def restorep(self, *args):
        "restore the original (last saved) contents of the page"
# reset font - are these also needed: case? indent? linespacing? paragraphspacing?
if self.parent.current_tab > 1:
self.gui.reset_font()
self.vulp()
    def on_text(self, *args):
        """callback for EVT_TEXT and similar events
        the initializing flag is checked because this event also fires
        during vulp() and during vul_combos"""
if not self.initializing:
newbuf = self.gui.build_newbuf()
changed = newbuf != self.oldbuf
self.enable_buttons(changed)
    def on_choice(self):
        "callback for the combobox (? isn't on_text simply used for this?)"
self.enable_buttons()
def update_actie(self):
"""pass page data from the GUI to the internal storage
"""
self.parent.pagedata.imagecount = self.parent.parent.imagecount
self.parent.pagedata.imagelist = self.parent.parent.imagelist
if self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.write(self.parent.parent.user)
else:
self.parent.pagedata.write()
        self.parent.pagedata.read()  # to fetch the "updated" attribute
        if self.parent.newitem:
            # create a new entry in the table for panel 0
            newindex = len(self.parent.data)  # + 1
pagegui = self.parent.pages[1].gui
itemdata = (pagegui.get_text('date'),
" - ".join((pagegui.get_text('proc'),
pagegui.get_text('desc'))),
pagegui.get_choice_data('stat')[0],
pagegui.get_choice_data('cat')[0],
pagegui.get_text('id'))
            self.parent.data[newindex] = itemdata  # why not append?
            # also create a new entry in the visual tree
page = self.parent.pages[0]
self.parent.current_item = page.gui.add_listitem(itemdata[0].split(' ')[0])
page.gui.set_selection()
self.parent.newitem = False
self.parent.rereadlist = True
    def enable_buttons(self, state=True):
        "enable or disable the buttons"
self.gui.enable_buttons(state)
self.parent.changed_item = state
if self.parent.current_tab > 0:
self.parent.parent.gui.enable_all_other_tabs(not state)
    def goto_actie(self, *args):
        "go to the action's start page"
self.goto_page(1)
    def goto_next(self, *args):
        "go to the next page"
        if not self.leavep():
            return
        nxt = self.parent.current_tab + 1
        if nxt >= len(self.parent.pages):
            nxt = 0
        self.parent.parent.gui.set_page(nxt)
    def goto_prev(self, *args):
        "go to the previous page"
        if not self.leavep():
            return
        nxt = self.parent.current_tab - 1
        if nxt < 0:
            nxt = len(self.parent.pages) - 1
        self.parent.parent.gui.set_page(nxt)
    def goto_page(self, page_num, check=True):
        "go to the indicated page"
if check and not self.leavep():
return
if 0 <= page_num <= len(self.parent.pages):
self.parent.parent.gui.set_page(page_num)
def get_textarea_contents(self):
"get the page text"
return self.gui.get_textarea_contents()
class Page0(Page):
    "page 0: overview of actions"
def __init__(self, parent):
self.parent = parent
super().__init__(parent, pageno=0, standard=False)
self.selection = 'excl. gearchiveerde'
self.sel_args = {}
self.sorted = (0, "A")
widths = [94, 24, 146, 90, 400] if LIN else [64, 24, 114, 72, 292]
if self.parent.parent.datatype == shared.DataType.SQL.name:
widths[4] = 90 if LIN else 72
extra = 310 if LIN else 220
widths.append(extra)
self.gui = gui.Page0Gui(parent, self, widths)
self.gui.enable_buttons()
self.sort_via_options = False
    def vulp(self):
        """fill the fields with the data to be shown, plus other initialisations
        method to be called before showing the page
        """
# print('in Page0.vulp')
self.saved_sortopts = None
if (self.parent.parent.datatype == shared.DataType.SQL.name
and self.parent.parent.filename):
if self.parent.parent.is_user:
self.saved_sortopts = dmls.SortOptions(self.parent.parent.filename)
test = self.saved_sortopts.load_options()
test = bool(test)
self.sort_via_options = test
value = not test
else:
value = False
self.gui.enable_sorting(value)
self.seltitel = 'alle meldingen ' + self.selection
super().vulp()
msg = ''
if self.parent.rereadlist:
self.parent.data = {}
select = self.sel_args.copy()
arch = "" # "alles"
if "arch" in select:
arch = select.pop("arch")
data = shared.get_acties[self.parent.parent.datatype](self.parent.fnaam, select,
arch, self.parent.parent.user)
for idx, item in enumerate(data):
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.data[idx] = (item[0],
item[1],
".".join((item[3][1], item[3][0])),
".".join((item[2][1], item[2][0])),
item[5],
item[4],
                                             item[6] == 'arch')
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.data[idx] = (item[0],
item[1],
".".join((item[5], item[4])),
".".join((str(item[3]), item[2])),
item[8],
item[6],
item[7],
item[9])
msg = self.populate_list()
            # needed for sorting? No idea, but if it serves any purpose it should
            # move to the gui module, because sortItems is a qt method
            # if self.parent.parent.datatype == shared.DataType.XML.name:
            #     self.gui.p0list.sortItems(self.sorted[0], sortorder[self.sorted[1]])  # , True)
            #
self.parent.current_item = self.gui.get_first_item()
self.parent.parent.enable_all_book_tabs(False)
self.gui.enable_buttons()
if self.gui.has_selection():
self.parent.parent.enable_all_book_tabs(True)
self.gui.set_selection()
self.gui.ensure_visible(self.parent.current_item)
self.parent.parent.set_statusmessage(msg)
    def populate_list(self):
        "fill the list control"
self.gui.clear_list()
self.parent.rereadlist = False
items = self.parent.data.items()
if items is None:
self.parent.parent.set_statusmessage('Selection is None?')
if not items:
return
for _, data in items:
new_item = self.gui.add_listitem(data[0])
self.gui.set_listitem_values(new_item, [data[0]] + list(data[2:]))
    def change_selected(self, item_n):
        """callback for a change of the selected item, e.g. by moving the
        cursor or by clicking
        """
self.parent.current_item = item_n
self.gui.set_selection()
if not self.parent.newitem:
selindx = self.gui.get_selected_action()
self.readp(selindx)
hlp = "&Herleef" if self.parent.pagedata.arch else "&Archiveer"
self.gui.set_archive_button_text(hlp)
    def activate_item(self):
        """callback for activating an item, by double-click or enter
        """
self.goto_actie()
    def select_items(self, event=None):
        """show the selection dialog
        select not only on (part of) the text but also on status, type, etc.
        """
args = self.sel_args, None
if self.parent.parent.datatype == shared.DataType.SQL.name:
data = dmls.SelectOptions(self.parent.fnaam, self.parent.parent.user)
args, sel_args = data.load_options(), {}
for key, value in args.items():
if key == 'nummer':
                    for item in value:  # split into idgt, id and idlt
if len(item) == 1:
sel_args['id'] = 'and' if item[0] == 'en' else 'or'
elif item[1] == 'GT':
sel_args['idgt'] = item[0]
elif item[1] == 'LT':
sel_args['idlt'] = item[0]
# elif key == 'arch':
# sel_args[key] = {0: 'narch', 1: 'arch', 2: 'alles'}[value]
elif value:
sel_args[key] = value
args = sel_args, data
while True:
test = gui.show_dialog(self.gui, gui.SelectOptionsDialog, args)
if not test:
break
self.parent.rereadlist = True
try:
self.vulp()
except (dmlx.DataError, dmls.DataError) as msg:
self.parent.rereadlist = False
gui.show_message(self, str(msg))
else:
break
    def sort_items(self, *args):
        """show the sort options dialog
        sorting is possible on date/time, type, title and status via a small
        dialog with 2x4 comboboxes in which you can indicate the order of the
        columns and the sort direction per column"""
sortopts, sortlist = {}, []
if self.parent.parent.datatype == shared.DataType.XML.name:
gui.show_message(self.gui, 'Sorry, multi-column sorteren werkt nog niet')
return
if self.parent.parent.datatype == shared.DataType.SQL.name:
sortopts = self.saved_sortopts.load_options()
try:
sortlist = [x[0] for x in dmls.SORTFIELDS]
except AttributeError:
pass
if not sortlist:
sortlist = [x for x in self.parent.ctitels]
sortlist[1] = "Soort"
sortlist.insert(0, "(geen)")
args = sortopts, sortlist
test = gui.show_dialog(self.gui, gui.SortOptionsDialog, args)
if not test:
return
if self.sort_via_options:
self.gui.enable_sorting(False)
self.parent.rereadlist = True
try:
self.vulp()
            # should the actual sorting perhaps go in between here (for XML)?
except (dmlx.DataError, dmls.DataError) as msg:
self.parent.rereadlist = False
gui.show_message(self, str(msg))
else:
self.gui.enable_sorting(True)
    def archiveer(self, *args):
        "archive or revive the selected item"
selindx = self.gui.get_selected_action()
if self.parent.parent.datatype == shared.DataType.XML.name:
selindx = shared.data2str(selindx)
else:
selindx = shared.data2int(selindx)
self.readp(selindx)
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.pagedata.arch = not self.parent.pagedata.arch
hlp = "gearchiveerd" if self.parent.pagedata.arch else "herleefd"
self.parent.pagedata.events.append((shared.get_dts(), "Actie {0}".format(hlp)))
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.set_arch(not self.parent.pagedata.arch)
self.update_actie() # self.parent.pagedata.write()
self.parent.rereadlist = True
self.vulp()
self.parent.parent.gui.set_tabfocus(0)
        # the following only applies to the "archived and active" selection
if self.sel_args.get("arch", "") == "alles":
self.gui.ensure_visible(self.parent.current_item)
hlp = "&Herleef" if self.parent.pagedata.arch else "&Archiveer"
self.gui.set_archive_button_text(hlp)
    def enable_buttons(self, value=None):
        "enable or disable the buttons"
if value is not None:
self.gui.enable_buttons(value)
else:
self.gui.enable_buttons()
def get_items(self):
"retrieve all listitems"
return self.gui.get_items()
def get_item_text(self, item_or_index, column):
"get the item's text for a specified column"
return self.gui.get_item_text(item_or_index, column)
def clear_selection(self):
"initialize selection criteria"
self.sel_args = {}
class Page1(Page):
    "page 1: action start screen"
def __init__(self, parent):
self.parent = parent
super().__init__(parent, pageno=1, standard=False)
self.gui = gui.Page1Gui(parent, self)
    def vulp(self):
        """fill the fields with the data to be shown, plus other initialisations
        method to be called before showing the page"""
super().vulp()
self.initializing = True
self.gui.init_fields()
self.parch = False
if self.parent.pagedata is not None: # and not self.parent.newitem:
self.gui.set_text('id', str(self.parent.pagedata.id))
self.gui.set_text('date', self.parent.pagedata.datum)
self.parch = self.parent.pagedata.arch
if self.parent.parent.datatype == shared.DataType.XML.name:
if self.parent.pagedata.titel is not None:
if " - " in self.parent.pagedata.titel:
hlp = self.parent.pagedata.titel.split(" - ", 1)
else:
hlp = self.parent.pagedata.titel.split(": ", 1)
self.gui.set_text('proc', hlp[0])
if len(hlp) > 1:
self.gui.set_text('desc', hlp[1])
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.gui.set_text('proc', self.parent.pagedata.over)
self.gui.set_text('desc', self.parent.pagedata.titel)
self.gui.set_choice('stat', self.parent.pagedata.status)
self.gui.set_choice('cat', self.parent.pagedata.soort)
self.oldbuf = self.gui.set_oldbuf()
if self.parch:
aanuit = False
if self.parent.parent.datatype == shared.DataType.XML.name:
if self.parent.pagedata.titel is not None:
if " - " in self.parent.pagedata.titel:
hlp = self.parent.pagedata.titel.split(" - ", 1)
else:
hlp = self.parent.pagedata.titel.split(": ", 1)
self.gui.set_text('proc', hlp[0])
if len(hlp) > 1:
self.gui.set_text('desc', hlp[1])
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.gui.set_text('proc', self.parent.pagedata.over)
self.gui.set_text('desc', self.parent.pagedata.titel)
self.gui.set_text('arch', "Deze actie is gearchiveerd")
self.gui.set_archive_button_text("Herleven")
else:
aanuit = True
self.gui.set_text('arch', '')
self.gui.set_archive_button_text("Archiveren")
if not self.parent.parent.is_user:
aanuit = False
self.gui.enable_fields(aanuit)
self.initializing = False
    def savep(self, *args):
        "save the page data"
super().savep()
proc = self.gui.get_text('proc')
self.gui.set_text('proc', proc.capitalize())
self.enable_buttons(False)
desc = self.gui.get_text('desc')
if proc == "" or desc == "":
gui.show_message(self.gui, "Beide tekstrubrieken moeten worden ingevuld")
return False
wijzig = False
procdesc = " - ".join((proc, desc))
if procdesc != self.parent.pagedata.titel:
if self.parent.parent.datatype == shared.DataType.XML.name:
self.parent.pagedata.titel = procdesc
elif self.parent.parent.datatype == shared.DataType.SQL.name:
self.parent.pagedata.over = proc
self.parent.pagedata.events.append(
(shared.get_dts(), 'Onderwerp gewijzigd in "{0}"'.format(proc)))
self.parent.pagedata.titel = procdesc = desc
self.parent.pagedata.events.append(
(shared.get_dts(), 'Titel gewijzigd in "{0}"'.format(procdesc)))
wijzig = True
newstat, sel = self.gui.get_choice_data('stat')
if newstat != self.parent.pagedata.status:
self.parent.pagedata.status = newstat
self.parent.pagedata.events.append(
(shared.get_dts(), 'Status gewijzigd in "{0}"'.format(sel)))
wijzig = True
newcat, sel = self.gui.get_choice_data('cat')
if newcat != self.parent.pagedata.soort:
self.parent.pagedata.soort = newcat
self.parent.pagedata.events.append(
(shared.get_dts(), 'Categorie gewijzigd in "{0}"'.format(sel)))
wijzig = True
if self.parch != self.parent.pagedata.arch:
self.parent.pagedata.set_arch(self.parch)
hlp = "gearchiveerd" if self.parch else "herleefd"
self.parent.pagedata.events.append(
(shared.get_dts(), "Actie {0}".format(hlp)))
wijzig = True
if wijzig:
self.update_actie()
# teksten op panel 0 bijwerken
pagegui = self.parent.pages[0].gui
item = pagegui.get_selection()
pagegui.set_item_text(item, 1, self.parent.pagedata.get_soorttext()[0].upper())
pagegui.set_item_text(item, 2, self.parent.pagedata.get_statustext())
pagegui.set_item_text(item, 3, self.parent.pagedata.updated)
if self.parent.parent.datatype == shared.DataType.XML.name:
pagegui.set_item_text(item, 4, self.parent.pagedata.titel)
elif self.parent.parent.datatype == shared.DataType.SQL.name:
pagegui.set_item_text(item, 4, self.parent.pagedata.over)
pagegui.set_item_text(item, 5, self.parent.pagedata.titel)
self.oldbuf = self.gui.set_oldbuf()
return True
def archiveer(self, *args):
"archiveren/herleven"
self.parch = not self.parch
self.savep()
self.parent.rereadlist = True
self.vulp()
def vul_combos(self):
"vullen comboboxen"
self.initializing = True
self.gui.clear_stats()
self.gui.clear_cats()
for key in sorted(self.parent.stats.keys()):
text, value = self.parent.stats[key][:2]
self.gui.add_stat_choice(text, value)
for key in sorted(self.parent.cats.keys()):
text, value = self.parent.cats[key][:2]
self.gui.add_cat_choice(text, value)
self.initializing = False
def get_field_text(self, entry_type):
"return a screen field's text"
return self.gui.get_field_text(entry_type)
class Page6(Page):
"pagina 6: voortgang"
def __init__(self, parent):
super().__init__(parent, pageno=6, standard=False)
self.current_item = 0
self.oldtext = ""
self.event_list, self.event_data, self.old_list, self.old_data = [], [], [], []
self.gui = gui.Page6Gui(parent, self)
def vulp(self):
"""te tonen gegevens invullen in velden e.a. initialisaties
methode aan te roepen voorafgaand aan het tonen van de pagina"""
super().vulp()
self.initializing = True
self.gui.init_textfield()
# self.progress_text.clear()
# self.progress_text.setReadOnly(True)
if self.parent.pagedata:
self.event_list = [x[0] for x in self.parent.pagedata.events]
self.event_list.reverse()
self.old_list = self.event_list[:]
self.event_data = [x[1] for x in self.parent.pagedata.events]
self.event_data.reverse()
self.old_data = self.event_data[:]
if self.parent.parent.is_user:
text = '-- doubleclick or press Shift-Ctrl-N to add new item --'
else:
text = '-- adding new items is disabled --'
self.gui.init_list(text)
for idx, datum in enumerate(self.event_list):
self.gui.add_item_to_list(idx, datum)
if self.parent.parent.datatype == shared.DataType.SQL.name:
self.gui.set_list_callback()
# self.gui.clear_textfield() - zit al in init_textfield
self.oldbuf = (self.old_list, self.old_data)
self.oldtext = ''
self.initializing = False
def savep(self, *args):
"opslaan van de paginagegevens"
super().savep()
# voor het geval er na het aanpassen van een tekst direkt "sla op" gekozen is
# nog even kijken of de tekst al in self.event_data is aangepast.
idx = self.current_item
hlp = self.gui.get_textfield_contents()
if idx > 0:
idx -= 1
if self.event_data[idx] != hlp:
self.event_data[idx] = hlp
self.oldtext = hlp
short_text = hlp.split("\n")[0]
if len(short_text) > 80:
short_text = short_text[:80] + "..."
if self.parent.parent.datatype == shared.DataType.XML.name:
short_text = short_text.encode('latin-1')
self.gui.set_listitem_text(idx + 1, "{} - {}".format(self.event_list[idx], short_text))
self.gui.set_listitem_data(idx + 1)
wijzig = False
if self.event_list != self.old_list or self.event_data != self.old_data:
wijzig = True
hlp = len(self.event_list) - 1
for idx, data in enumerate(self.parent.pagedata.events):
if data != (self.event_list[hlp - idx], self.event_data[hlp - idx]):
self.parent.pagedata.events[idx] = (self.event_list[hlp - idx],
self.event_data[hlp - idx])
for idx in range(len(self.parent.pagedata.events), hlp + 1):
if self.event_data[hlp - idx]:
self.parent.pagedata.events.append((self.event_list[hlp - idx],
self.event_data[hlp - idx]))
if wijzig:
self.update_actie()
# waar is deze voor (self.book.current_item.setText) ?
# self.parent.current_item = self.parent.page0.p0list.topLevelItem(x)
# self.parent.current_item.setText(4, self.parent.pagedata.updated)
self.parent.pages[0].gui.set_item_text(self.parent.current_item, 3,
self.parent.pagedata.updated)
# dit was self.parent.page0.p0list.currentItem().setText( -- is dat niet hetzelfde?
self.old_list = self.event_list[:]
self.old_data = self.event_data[:]
self.oldbuf = (self.old_list, self.old_data)
return True
def goto_prev(self, *args):
"set the selection to the previous row, if possible"
test = self.gui.get_list_row() - 1
if test > 0:
self.gui.set_list_row(test)
def goto_next(self, *args):
"set the selection to the next row, if possible"
test = self.gui.get_list_row() + 1
if test < self.gui.get_list_rowcount():
self.gui.set_list_row(test)
def on_text(self, *args):
"""callback voor wanneer de tekst gewijzigd is
de initializing flag wordt uitgevraagd omdat deze event ook tijdens vulp()
en wijzigen van list positie plaatsvindt
"""
if self.initializing:
return
# lees de inhoud van het tekstveld en vergelijk deze met de buffer
tekst = self.gui.get_textfield_contents()
# str(self.progress_text.get_contents()) # self.progress_list.GetItemText(ix)
if tekst != self.oldtext:
# stel de buffer in op de nieuwe tekst
self.oldtext = tekst
# maak er platte tekst van om straks in de listbox bij te werken
tekst_plat = self.gui.convert_text(self.oldtext, to='plain')
# stel in dat we niet van dit scherm af kunnen zonder te updaten
if self.parent.parent.is_user:
self.enable_buttons()
self.current_item = self.gui.get_list_row()
if self.current_item > 0:
indx = self.current_item - 1
self.event_data[indx] = tekst
# item = self.progress_list.currentItem()
# datum = str(item.text()).split(' - ')[0]
datum = self.gui.get_listitem_text(self.current_item).split(' - ')[0]
short_text = ' - '.join((datum, tekst_plat.split("\n")[0]))
if len(short_text) >= 80:
short_text = short_text[:80] + "..."
# item.setText(short_text)
self.gui.set_listitem_text(self.current_item, short_text)
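# Illustrative sketch (hypothetical helper, not called by the pages above):
# Page1.vulp() and MainWindow.print_actie() both split an actie title on
# " - " and fall back to ": ". That shared convention can be captured in
# one place like this:
def split_title(titel):
    "split a title into (onderwerp, omschrijving); empty string if no separator"
    if " - " in titel:
        parts = titel.split(" - ", 1)
    else:
        parts = titel.split(": ", 1)
    return parts[0], parts[1] if len(parts) > 1 else ""
# e.g. split_title("Subject - details") -> ("Subject", "details")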
class TabOptions:
"hulp klasse bij dialoog voor mogelijke tab headers"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Tab titels"
self.data = []
for key in sorted(parent.master.book.tabs.keys()):
tab_text = parent.master.book.tabs[key].split(" ", 1)[1]
self.data.append(tab_text)
self.tekst = ["De tab titels worden getoond in de volgorde",
"zoals ze van links naar rechts staan.",
"Er kunnen geen tabs worden verwijderd of toegevoegd."]
self.editable = False
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newtabs = {}
for idx, item in enumerate(optionslist):
self.newtabs[str(idx)] = str(item)
parent.master.save_settings("tab", self.newtabs)
class StatOptions:
"hulp klasse bij dialoog voor de mogelijke statussen"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Status codes en waarden"
self.data = []
for key in sorted(parent.master.book.stats.keys()):
if parent.master.datatype == shared.DataType.XML.name:
item_text, item_value = parent.master.book.stats[key]
self.data.append(": ".join((item_value, item_text)))
elif parent.master.datatype == shared.DataType.SQL.name:
item_text, item_value, row_id = parent.master.book.stats[key]
self.data.append(": ".join((item_value, item_text, row_id)))
self.tekst = ["De waarden voor de status worden getoond in dezelfde volgorde",
"als waarin ze in de combobox staan.",
"Vóór de dubbele punt staat de code, erachter de waarde.",
"Denk erom dat als je codes wijzigt of statussen verwijdert, deze",
"ook niet meer getoond en gebruikt kunnen worden in de registratie.",
"Omschrijvingen kun je rustig aanpassen"]
self.editable = True
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newstats = {}
for sortkey, item in enumerate(optionslist):
try:
value, text = str(item).split(": ")
except ValueError:
return 'Foutieve waarde: bevat geen dubbele punt'
self.newstats[value] = (text, sortkey)
parent.master.save_settings("stat", self.newstats)
return ''
class CatOptions:
"hulp klasse bij dialoog voor de mogelijke categorieen"
def initstuff(self, parent):
"aanvullende initialisatie"
self.titel = "Soort codes en waarden"
self.data = []
for key in sorted(parent.master.book.cats.keys()):
if parent.master.datatype == shared.DataType.XML.name:
item_value, item_text = parent.master.book.cats[key]
self.data.append(": ".join((item_text, item_value)))
elif parent.master.datatype == shared.DataType.SQL.name:
item_value, item_text, row_id = parent.master.book.cats[key]
self.data.append(": ".join((item_text, item_value, str(row_id))))
self.tekst = ["De waarden voor de soorten worden getoond in dezelfde volgorde",
"als waarin ze in de combobox staan.",
"Vóór de dubbele punt staat de code, erachter de waarde.",
"Denk erom dat als je codes wijzigt of soorten verwijdert, deze",
"ook niet meer getoond en gebruikt kunnen worden in de registratie.",
"Omschrijvingen kun je rustig aanpassen"]
self.editable = True
def leesuit(self, parent, optionslist):
"wijzigingen doorvoeren"
self.newcats = {}
for sortkey, item in enumerate(optionslist):
try:
value, text = str(item).split(": ")
except ValueError:
return 'Foutieve waarde: bevat geen dubbele punt'
self.newcats[value] = (text, sortkey)
parent.master.save_settings("cat", self.newcats)
return ''
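# Illustrative sketch (hypothetical helper, not wired into the classes above):
# StatOptions.leesuit() and CatOptions.leesuit() parse "code: omschrijving"
# lines the same way; the shared parsing step amounts to:
def parse_option_line(item):
    "return (code, text) for a 'code: text' line, or None if it does not parse"
    try:
        value, text = str(item).split(": ")
    except ValueError:
        # no separator, or more than one ": " on the line
        return None
    return value, text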
class MainWindow():
"""Hoofdscherm met menu, statusbalk, notebook en een "quit" button"""
def __init__(self, parent, fnaam="", version=None):
# if not version:
# raise ValueError('No data method specified')
self.parent = parent
self.datatype = version
self.dirname, self.filename = '', ''
self.title = 'Actieregistratie'
self.initializing = True
self.exiting = False
self.helptext = ''
# self.pagedata = None
# self.oldbuf = None
self.is_newfile = False
self.oldsort = -1
self.idlist, self.actlist, self.alist = [], [], []  # separate lists, not three names for one list
shared.log('fnaam is %s', fnaam)
self.projnames = dmls.get_projnames()
if fnaam:
if fnaam == 'xml' or os.path.exists(fnaam):
self.datatype = shared.DataType.XML.name
if fnaam != 'xml':
test = pathlib.Path(fnaam)
self.dirname, self.filename = test.parent, test.name
shared.log('XML: %s %s', self.dirname, self.filename)
elif fnaam == 'sql' or fnaam.lower() in [x[0] for x in self.projnames]:
self.datatype = shared.DataType.SQL.name
if fnaam == 'basic':
self.filename = '_basic'
elif fnaam != 'sql':
self.filename = fnaam.lower()
shared.log('SQL: %s', self.filename)
else:
fnaam = ''
self.gui = gui.MainGui(self)
if not self.datatype:
self.filename = ''
choice = gui.get_choice_item(None, 'Select Mode', ['XML', 'SQL'])
if choice == 'XML':
self.datatype = shared.DataType.XML.name
elif choice == 'SQL':
self.datatype = shared.DataType.SQL.name
else:
raise SystemExit('No datatype selected')
self.user = None # start without user
self.is_user = self.is_admin = False
if self.datatype == shared.DataType.XML.name:
self.user = 1 # pretend user
self.is_user = self.is_admin = True # force editability for XML mode
self.create_book()
self.gui.create_menu()
self.gui.create_actions()
self.create_book_pages()
if self.datatype == shared.DataType.XML.name:
if self.filename == "":
self.open_xml()
else:
self.startfile()
elif self.datatype == shared.DataType.SQL.name:
if self.filename:
self.open_sql(do_sel=False)
else:
self.open_sql()
self.initializing = False
def get_menu_data(self):
"""Define application menu
"""
data = [("&File", [("&Open", self.open_xml, 'Ctrl+O', " Open a new file"),
("&New", self.new_file, 'Ctrl+N', " Create a new file"),
('',),
("&Print", (("Dit &Scherm", self.print_scherm, 'Shift+Ctrl+P',
"Print the contents of the current screen"),
("Deze &Actie", self.print_actie, 'Alt+Ctrl+P',
"Print the contents of the current issue"))),
('',),
("&Quit", self.exit_app, 'Ctrl+Q', " Terminate the program")]),
("&Login", [("&Go", self.sign_in, 'Ctrl+L', " Sign in to the database")]),
("&Settings", (("&Applicatie", (("&Lettertype", self.font_settings, '',
" Change the size and font of the text"),
("&Kleuren", self.colour_settings, '',
" Change the colours of various items"))),
("&Data", (("&Tabs", self.tab_settings, '',
" Change the titles of the tabs"),
("&Soorten", self.cat_settings, '',
" Add/change type categories"),
("St&atussen", self.stat_settings, '',
" Add/change status categories"))),
("&Het leven", self.silly_menu, '',
" Change the way you look at life"))),
("&View", []),
("&Help", (("&About", self.about_help, 'F1', " Information about this program"),
("&Keys", self.hotkey_help, 'Ctrl+H', " List of shortcut keys")))]
for tabnum, tabtitle in self.book.tabs.items():
data[3][1].append(('&{}'.format(tabtitle),
functools.partial(self.gui.go_to, int(tabnum)),
'Alt+{}'.format(tabnum), "switch to tab"))
if self.datatype == shared.DataType.XML.name:
data.pop(1)
elif self.datatype == shared.DataType.SQL.name:
data[0][1][0] = ("&Other project", self.open_sql, 'Ctrl+O', " Select a project")
data[0][1][1] = ("&New", self.new_file, 'Ctrl+N', " Create a new project")
return data
def create_book(self):
"""define the tabbed interface and its subclasses
"""
self.book = self.gui.get_bookwidget()
self.book.parent = self
self.book.fnaam = ""
if self.filename and self.datatype == shared.DataType.SQL.name:
self.book.fnaam = self.filename
self.book.current_item = None
self.book.data = {}
self.book.rereadlist = True
self.lees_settings()
# print('in create book na lees_settings: book.tabs is', self.book.tabs)
self.book.ctitels = ["actie", " ", "status", "L.wijz."]
if self.datatype == shared.DataType.XML.name:
self.book.ctitels.append("titel")
elif self.datatype == shared.DataType.SQL.name:
self.book.ctitels.extend(("betreft", "omschrijving"))
self.book.current_tab = -1
self.book.pages = []
self.book.newitem = False
self.book.changed_item = True
self.book.pagedata = None
def create_book_pages(self):
"add the pages to the tabbed widget"
self.book.pages.append(Page0(self.book))
self.book.pages.append(Page1(self.book))
self.book.pages.append(Page(self.book, 2))
self.book.pages.append(Page(self.book, 3))
self.book.pages.append(Page(self.book, 4))
self.book.pages.append(Page(self.book, 5))
self.book.pages.append(Page6(self.book))
# print('in create_book_pages: book.tabs is', self.book.tabs)
for i, page in enumerate(self.book.pages):
self.gui.add_book_tab(page, "&" + self.book.tabs[i])
self.enable_all_book_tabs(False)
def not_implemented_message(self):
"information"
gui.show_message(self.gui, "Sorry, werkt nog niet")
def new_file(self, event=None):
"Menukeuze: nieuw file"
if self.datatype == shared.DataType.SQL.name:
self.not_implemented_message()
return
self.is_newfile = False
# self.dirname = str(self.dirname) # defaults to '.' so no need for `or os.getcwd()`
fname = gui.get_save_filename(self.gui, start=self.dirname)
if fname:
test = pathlib.Path(fname)
if test.suffix != '.xml':
gui.show_message(self.gui, 'Naam voor nieuw file moet wel extensie .xml hebben')
return
self.dirname, self.filename = test.parent, test.name
self.is_newfile = True
self.startfile()
self.is_newfile = False
self.enable_all_book_tabs(False)
def open_xml(self, event=None):
"Menukeuze: open file"
shared.log('in open_xml: %s', self.filename)
self.dirname = self.dirname or os.getcwd()
fname = gui.get_open_filename(self.gui, start=self.dirname)
if fname:
test = pathlib.Path(fname)
self.dirname, self.filename = test.parent, test.name
self.startfile()
def open_sql(self, event=None, do_sel=True):
"Menukeuze: open project"
shared.log('in open_sql: %s', self.filename)
current = choice = 0
data = self.projnames
names = [h[0] for h in data]
if self.filename in names:
current = names.index(self.filename)
if do_sel:
choice = gui.get_choice_item(self.gui, 'Kies een project om te openen',
[": ".join((h[0], h[2])) for h in data], current)
else:
for h in data:
shared.log(h)
if h[0] == self.filename or (h[0] == 'basic' and self.filename == "_basic"):
choice = h[0]
break
if choice:
self.filename = choice.split(': ')[0]
if self.filename in ("Demo", 'basic'):
self.filename = "_basic"
self.startfile()
def print_something(self, event=None):
"""callback voor ctrl-P(rint)
vraag om printen scherm of actie, bv. met een InputDialog
"""
choices = ['huidig scherm', 'huidige actie']
choice = gui.get_choice_item(self, 'Wat wil je afdrukken?', choices)
if choice == choices[0]:
self.print_scherm()
elif choice == choices[1]:
self.print_actie()
def print_scherm(self, event=None):
"Menukeuze: print dit scherm"
self.printdict = {'lijst': [], 'actie': [], 'sections': [], 'events': []}
self.hdr = "Actie: {} {}".format(self.book.pagedata.id,
self.book.pagedata.titel)
if self.book.current_tab == 0:
self.hdr = "Overzicht acties uit " + self.filename
lijst = []
page = self.book.pages[0]
for item in page.get_items():
actie = page.get_item_text(item, 0)
started = ''
soort = page.get_item_text(item, 1)
for x in self.book.cats.values():
oms, code = x[0], x[1]
if code == soort:
soort = oms
break
status = page.get_item_text(item, 2)
l_wijz = page.get_item_text(item, 3)
titel = page.get_item_text(item, 4)
if self.datatype == shared.DataType.SQL.name:
over = titel
titel = page.get_item_text(item, 5)
l_wijz = l_wijz[:19]
actie = actie + " - " + over
started = started[:19]
if status != self.book.stats[0][0]:
if l_wijz:
l_wijz = ", laatst behandeld op " + l_wijz
l_wijz = "status: {}{}".format(status, l_wijz)
else:
hlp = "status: {}".format(status)
if l_wijz and not started:
hlp += ' op {}'.format(l_wijz)
l_wijz = hlp
lijst.append((actie, titel, soort, started, l_wijz))
self.printdict['lijst'] = lijst
elif self.book.current_tab == 1:
data = {x: self.book.pages[1].get_field_text(x) for x in ('actie', 'datum', 'oms',
'tekst', 'soort', 'status')}
self.hdr = "Informatie over actie {}: samenvatting".format(data["actie"])
self.printdict.update(data)
elif 2 <= self.book.current_tab <= 5:
title = self.book.tabs[self.book.current_tab].split(None, 1)[1]
# if self.book.current_tab == 2:
text = self.book.pages[self.book.current_tab].get_textarea_contents()
# elif self.book.current_tab == 3:
# text = self.book.page3.get_textarea_contents()
# elif self.book.current_tab == 4:
# text = self.book.page4.get_textarea_contents()
# elif self.book.current_tab == 5:
# text = self.book.page5.get_textarea_contents()
self.printdict['sections'] = [(title, text)]
elif self.book.current_tab == 6:
events = []
for idx, data in enumerate(self.book.pages[6].event_list):
if self.datatype == shared.DataType.SQL.name:
data = data[:19]
events.append((data, self.book.pages[6].event_data[idx]))
self.printdict['events'] = events
self.gui.preview()
def print_actie(self, event=None):
"Menukeuze: print deze actie"
if self.book.pagedata is None: # or self.book.newitem:
gui.show_message(self.gui, "Wel eerst een actie kiezen om te printen")
return
self.hdr = ("Actie: {} {}".format(self.book.pagedata.id, self.book.pagedata.titel))
tekst = self.book.pagedata.titel
try:
oms, tekst = tekst.split(" - ", 1)
except ValueError:
try:
oms, tekst = tekst.split(": ", 1)
except ValueError:
oms = ''
srt = "(onbekende soort)"
for srtoms, srtcode in self.book.cats.values():
if srtcode == self.book.pagedata.soort:
srt = srtoms
break
stat = "(onbekende status)"
for statoms, statcode in self.book.stats.values():
if statcode == self.book.pagedata.status:
stat = statoms
break
self.printdict = {'lijst': [],
'actie': self.book.pagedata.id,
'datum': self.book.pagedata.datum,
'oms': oms,
'tekst': tekst,
'soort': srt,
'status': stat}
empty = "(nog niet beschreven)"
sections = [[title.split(None, 1)[1], ''] for key, title in
self.book.tabs.items() if 1 < key < 6]
sections[0][1] = self.book.pagedata.melding or empty
sections[1][1] = self.book.pagedata.oorzaak or empty
sections[2][1] = self.book.pagedata.oplossing or empty
sections[3][1] = self.book.pagedata.vervolg or ''
if not sections[3][1]:
sections.pop()
self.printdict['sections'] = sections
self.printdict['events'] = list(self.book.pagedata.events)
self.gui.preview()
def exit_app(self, event=None):
"Menukeuze: exit applicatie"
self.exiting = True
ok_to_leave = True # while we don't have pages yet
if self.book.current_tab > -1:
ok_to_leave = self.book.pages[self.book.current_tab].leavep()
if ok_to_leave:
self.gui.exit()
def sign_in(self, *args):
"""aanloggen in SQL/Django mode
"""
logged_in = False
while not logged_in:
ok = gui.show_dialog(self.gui, gui.LoginBox)
if not ok:
break
test = dmls.validate_user(*self.gui.dialog_data)
if test:
text = 'Login accepted'
logged_in = True
else:
text = 'Login failed'
gui.show_message(self.gui, text)
if logged_in:
self.user, self.is_user, self.is_admin = test
# print('in signin:', self.user, self.is_user)
self.book.rereadlist = True
self.gui.refresh_page()
def tab_settings(self, event=None):
"Menukeuze: settings - data - tab titels"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(TabOptions, "Wijzigen tab titels"))
def stat_settings(self, event=None):
"Menukeuze: settings - data - statussen"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(StatOptions, "Wijzigen statussen"))
def cat_settings(self, event=None):
"Menukeuze: settings - data - soorten"
gui.show_dialog(self.gui, gui.SettOptionsDialog, args=(CatOptions, "Wijzigen categorieën"))
def font_settings(self, event=None):
"Menukeuze: settings - applicatie - lettertype"
self.not_implemented_message()
def colour_settings(self, event=None):
"Menukeuze: settings - applicatie - kleuren"
self.not_implemented_message()
def hotkey_settings(self, event=None):
"Menukeuze: settings - applicatie- hotkeys (niet geactiveerd)"
self.not_implemented_message()
def about_help(self, event=None):
"Menukeuze: help - about"
gui.show_message(self.gui, "PyQt versie van mijn actiebox")
def hotkey_help(self, event=None):
"menukeuze: help - keys"
if not self.helptext:
lines = ["=== Albert's actiebox ===\n",
"Keyboard shortcuts:",
" Alt left/right: verder - terug",
" Alt-0 t/m Alt-6: naar betreffende pagina",
" Alt-O op tab 1: S_o_rteren",
" Alt-I op tab 1: F_i_lteren",
" Alt-G of Enter op tab 1: _G_a naar aangegeven actie",
" Alt-N op elke tab: _N_ieuwe actie opvoeren",
" Ctrl-P: _p_rinten (scherm of actie)",
" Shift-Ctrl-P: print scherm",
" Alt-Ctrl-P: print actie",
" Ctrl-Q: _q_uit actiebox",
" Ctrl-H: _h_elp (dit scherm)",
" Ctrl-S: gegevens in het scherm op_s_laan",
" Ctrl-G: oplaan en _g_a door naar volgende tab",
" Ctrl-Z in een tekstveld: undo",
" Shift-Ctrl-Z in een tekstveld: redo",
" Alt-Ctrl-Z overal: wijzigingen ongedaan maken",
" Shift-Ctrl-N op tab 6: nieuwe regel opvoeren",
" Ctrl-up/down op tab 6: move in list"]
if self.datatype == shared.DataType.XML.name:
lines.insert(8, " Ctrl-O: _o_pen een (ander) actiebestand")
lines.insert(8, " Ctrl-N: maak een _n_ieuw actiebestand")
elif self.datatype == shared.DataType.SQL.name:
lines.insert(8, " Ctrl-O: selecteer een (ander) pr_o_ject")
self.helptext = "\n".join(lines)
gui.show_message(self.gui, self.helptext)
def silly_menu(self, event=None):
"Menukeuze: settings - het leven"
gui.show_message(self.gui, "Yeah you wish...\nHet leven is niet in te stellen helaas")
def startfile(self):
"initialisatie t.b.v. nieuw bestand"
if self.datatype == shared.DataType.XML.name:
fullname = self.dirname / self.filename
retval = dmlx.checkfile(fullname, self.is_newfile)
if retval != '':
gui.show_message(self.gui, retval)
return retval
self.book.fnaam = fullname
self.title = self.filename
elif self.datatype == shared.DataType.SQL.name:
self.book.fnaam = self.title = self.filename
self.book.rereadlist = True
self.book.sorter = None
self.lees_settings()
self.gui.set_tab_titles(self.book.tabs)
self.book.pages[0].clear_selection()
self.book.pages[1].vul_combos()
if self.book.current_tab == 0:
self.book.pages[0].vulp()
else:
self.gui.select_first_tab()
self.book.changed_item = True
return ''
def lees_settings(self):
"""instellingen (tabnamen, actiesoorten en actiestatussen) inlezen"""
self.book.stats = {0: ('dummy', 0, 0)}
self.book.cats = {0: ('dummy', ' ', 0)}
self.book.tabs = {0: '0 start'}
data = shared.Settings[self.datatype](self.book.fnaam)
## print(data.meld) # "Standaard waarden opgehaald"
self.imagecount = data.imagecount
self.book.stats = {}
self.book.cats = {}
self.book.tabs = {}
self.book.pagehelp = ["Overzicht van alle acties",
"Identificerende gegevens van de actie",
"Beschrijving van het probleem of wens",
"Analyse van het probleem of wens",
"Voorgestelde oplossing",
"Eventuele vervolgactie(s)",
"Overzicht stand van zaken"]
for item_value, item in data.stat.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.stats[int(sortkey)] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.stats[int(sortkey)] = (item_text, item_value, row_id)
for item_value, item in data.cat.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.cats[int(sortkey)] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.cats[int(sortkey)] = (item_text, item_value, row_id)
for tab_num, tab_text in data.kop.items():
if self.datatype == shared.DataType.XML.name:
self.book.tabs[int(tab_num)] = " ".join((tab_num, tab_text))
elif self.datatype == shared.DataType.SQL.name:
tab_text = tab_text[0] # , tab_adr = tab_text
self.book.tabs[int(tab_num)] = " ".join((tab_num, tab_text.title()))
# print('in lees_settings voor', self.book.fnaam, 'book.tabs is', self.book.tabs)
def save_settings(self, srt, data):
"""instellingen (tabnamen, actiesoorten of actiestatussen) terugschrijven
argumenten: soort, data
data is een dictionary die in een van de dialogen TabOptions, CatOptions
of StatOptions wordt opgebouwd"""
settings = shared.Settings[self.datatype](self.book.fnaam)
if srt == "tab":
settings.kop = data
settings.write()
self.book.tabs = {}
for item_value, item_text in data.items():
item = " ".join((item_value, item_text))
self.book.tabs[int(item_value)] = item
self.gui.set_page_title(int(item_value), item)
elif srt == "stat":
settings.stat = data
settings.write()
self.book.stats = {}
for item_value, item in data.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.stats[sortkey] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.stats[sortkey] = (item_text, item_value, row_id)
elif srt == "cat":
settings.cat = data
settings.write()
self.book.cats = {}
for item_value, item in data.items():
if self.datatype == shared.DataType.XML.name:
item_text, sortkey = item
self.book.cats[sortkey] = (item_text, item_value)
elif self.datatype == shared.DataType.SQL.name:
item_text, sortkey, row_id = item
self.book.cats[sortkey] = (item_text, item_value, row_id)
self.book.pages[1].vul_combos()
def goto_next(self, *args):
"""redirect to the method of the current page
"""
Page.goto_next(self.book.pages[self.book.current_tab])
def goto_prev(self, *args):
"""redirect to the method of the current page
"""
Page.goto_prev(self.book.pages[self.book.current_tab])
def goto_page(self, page):
"""redirect to the method of the current page
"""
# print('in MainWindow.goto_page naar page', page, 'van page', self.book.current_tab)
Page.goto_page(self.book.pages[self.book.current_tab], page)
def enable_settingsmenu(self):
"instellen of gebruik van settingsmenu mogelijk is"
self.gui.enable_settingsmenu()
def set_windowtitle(self, text):
"build title for window"
self.gui.set_window_title(text)
def set_statusmessage(self, msg=''):
"""stel tekst in statusbar in
"""
if not msg:
msg = self.book.pagehelp[self.book.current_tab]
if self.book.current_tab == 0:
msg += ' - {} items'.format(len(self.book.data))
self.gui.set_statusmessage(msg)
if self.datatype == shared.DataType.SQL.name:
if self.user:
msg = 'Aangemeld als {}'.format(self.user.username)
else:
msg = 'Niet aangemeld'
self.gui.show_username(msg)
def get_focus_widget_for_tab(self, tabno):
"determine field to set focus on"
return (self.book.pages[0].gui.p0list,
self.book.pages[1].gui.proc_entry,
self.book.pages[2].gui.text1,
self.book.pages[3].gui.text1,
self.book.pages[4].gui.text1,
self.book.pages[5].gui.text1,
self.book.pages[6].gui.progress_list)[tabno]
def enable_all_book_tabs(self, state):
"make all tabs (in)accessible"
self.gui.enable_book_tabs(state, tabfrom=1)
def main(arg=None):
"opstart routine"
# if arg is None:
# version = shared.DataType.SQL.name
# else:
# version = shared.DataType.XML.name
# try:
frame = MainWindow(None, arg) # , version)
frame.gui.go()
# except ValueError as err:
# print(err)
# --- back-end/python-scripts/add_vm.py ---
import json
import sys, os
import virtualbox
from os.path import expanduser
from pprint import pprint
from subprocess import call
#create virtual machine from the repository
# arg1 = vmType
# arg2 = vmName
# return 1|0 success or failure
# list of all VMs
gateway = '155.42.97.220'
#gateway = '155.42.13.162'
vms_repo_path = expanduser("~") + '/cluster/VMs'
vm_repo = {
    'nodejs': {
        'folder': '/nodejs/nodejs-13.0-i386/',
        'file': 'nodejs.ovf',
        'vm_name': 'nodejs-13.0-i386',
        'ports': {
            'http':     {'rule': '10.0.2.15,8000',  'note': 'http://{{the-gatway}}'},
            'https':    {'rule': '10.0.2.15,443',   'note': 'https://{{the-gatway}}'},
            'webshell': {'rule': '10.0.2.15,12320', 'note': 'https://{{the-gatway}}'},
            'webmin':   {'rule': '10.0.2.15,12321', 'note': 'https://{{the-gatway}}'},
            'ssh':      {'rule': '10.0.2.15,22',    'note': 'root@{{the-gatway}} port'}
        },
        'notes': [
            'Webmin, SSH: username:root',
            'default password:root (remember to change!)'
        ]
    },
    'postgresql': {
        'folder': '/postgresql/postgresql-i386/',
        'file': 'postgresql-i386.ovf',
        'vm_name': 'postgres',
        'ports': {
            'PHPPgAdmin': {'rule': '10.0.2.15,443',   'note': 'https://{{the-gatway}}'},
            'webshell':   {'rule': '10.0.2.15,12320', 'note': 'https://{{the-gatway}}'},
            'webmin':     {'rule': '10.0.2.15,12321', 'note': 'https://{{the-gatway}}'},
            'ssh':        {'rule': '10.0.2.15,22',    'note': 'root@{{the-gatway}} port'}
        },
        'notes': [
            'Webmin, SSH: username:root, password:root(remember to change!)',
            'postgresql, phppgadmin: username:postgres, password:postgresroot(remember to change!)',
            'PostgreSQL: psql -U postgres -h 10.0.2.15'
        ]
    },
    'lamp': {
        'folder': '/lamp/lamp-13.0-i386/',
        'file': 'lamp-13.0.ovf',
        'vm_name': 'lamp-13.0',
        'ports': {
            'http':       {'rule': '10.0.2.15,80',    'note': 'http://{{the-gatway}}'},
            'https':      {'rule': '10.0.2.15,443',   'note': 'https://{{the-gatway}}'},
            'webshell':   {'rule': '10.0.2.15,12320', 'note': 'https://{{the-gatway}}'},
            'webmin':     {'rule': '10.0.2.15,12321', 'note': 'https://{{the-gatway}}'},
            'ssh':        {'rule': '10.0.2.15,22',    'note': 'root@{{the-gatway}} port'},
            'PHPMyAdmin': {'rule': '10.0.2.15,12322', 'note': 'https://{{the-gatway}}'},
        },
        'notes': [
            'Webmin, SSH: username:root, password:root(remember to change!)',
            'MySQL, phpMyAdmin: username:root, password:mysql(remember to change!)'
        ]
    },
    'rails': {
        'folder': '/rails/lamp-13.0-i386/',
        'file': 'lamp-13.0.ovf',
        'vm_name': 'lamp-13.0',
        'ports': {
            'http':       {'rule': '10.0.2.15,80',    'note': 'http://{{the-gatway}}'},
            'https':      {'rule': '10.0.2.15,443',   'note': 'https://{{the-gatway}}'},
            'webshell':   {'rule': '10.0.2.15,12320', 'note': 'https://{{the-gatway}}'},
            'webmin':     {'rule': '10.0.2.15,12321', 'note': 'https://{{the-gatway}}'},
            'ssh':        {'rule': '10.0.2.15,22',    'note': 'root@{{the-gatway}} port'},
            'PHPMyAdmin': {'rule': '10.0.2.15,12322', 'note': 'https://{{the-gatway}}'},
        },
        'notes': [
            'Webmin, SSH: username:root, password:root(remember to change!)',
            'MySQL: username:root, password:mysql(remember to change!)'
        ]
    }
}

port_ranges = {
    'http':       8000,
    'https':      4000,
    'ssh':        22000,
    'webshell':   12000,
    'webmin':     13000,
    'PHPPgAdmin': 14000,
    'PHPMyAdmin': 15000,
}
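The base-port scheme above (each service owns a 1000-wide window above its base, and the first unused host port wins) can be exercised in isolation. A minimal sketch; the helper name `allocate_port` is hypothetical and not part of the script:

```python
def allocate_port(base, used_ports):
    """Return the first free host port in (base, base + 1000), or None."""
    for candidate in range(base + 1, base + 1000):
        # The ledger is JSON-backed, so keys are strings.
        if str(candidate) not in used_ports:
            return candidate
    return None

used = {"8001": "web01:lamp"}
print(allocate_port(8000, used))  # 8002, since 8001 is taken
```

Because each service's base differs, two rules never compete for the same window within one run.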
vm_type = sys.argv[1]
vm_name = sys.argv[2]
vbox = virtualbox.VirtualBox()
notes = vm_repo[vm_type]['notes']

# Path of the exported appliance (.ovf) to import.
path = os.path.normpath(vms_repo_path + vm_repo[vm_type]['folder'] + vm_repo[vm_type]['file'])

# Load the ledger of host ports already used for forwarding.
if os.stat("usedports.json").st_size == 0:
    used_ports = {}
else:
    used_ports = json.load(open("usedports.json"))

# Build the port-forwarding rules: for each service, pick the first free
# host port in its range and record a human-readable note for it.
ports_to_forward = []
for port_forw_rule in vm_repo[vm_type]['ports']:
    for the_port in range(port_ranges[port_forw_rule] + 1, port_ranges[port_forw_rule] + 1000):
        if not str(the_port) in used_ports:
            note = port_forw_rule + ": " + vm_repo[vm_type]['ports'][port_forw_rule]['note'] + ":" + str(the_port)
            note = note.replace("{{the-gatway}}", gateway)
            notes.append(note)
            ports_to_forward.append(["VBoxManage", 'modifyvm', vm_name, '--natpf1',
                                     port_forw_rule + ",tcp," + gateway + "," + str(the_port) + ","
                                     + vm_repo[vm_type]['ports'][port_forw_rule]['rule']])
            # Store the key as a string so it matches the JSON-roundtripped keys.
            used_ports[str(the_port)] = vm_name + ":" + vm_type
            break
json.dump(used_ports, open("usedports.json", 'w'))
appliance = vbox.create_appliance()
appliance.read(path)

# Extract the IVirtualSystemDescription and rename the machine.
# appliance.interpret() returns an empty result here, so look the
# description up by the VM's name from the repository instead.
desc = appliance.find_description(vm_repo[vm_type]['vm_name'])
desc.set_name(vm_name)

my_desc = ''
for n in notes:
    my_desc += n + ',\n'
desc.set_final_value(9, my_desc)  # field 9 is the description
try:
    # Perform the import, then apply the queued port forwardings.
    progress = appliance.import_machines()
    progress.wait_for_completion()
    new_vm = vbox.find_machine(vm_name)
    for pf in ports_to_forward:
        call(pf)
    result = {'status': 1, 'note': notes}
    pprint(result)
except Exception as error:
    pprint({'status': 0, 'note': str(error)})
| 34.798851 | 193 | 0.605945 | 850 | 6,055 | 4.184706 | 0.215294 | 0.018555 | 0.02474 | 0.03711 | 0.467529 | 0.444195 | 0.4307 | 0.402305 | 0.394152 | 0.394152 | 0 | 0.06442 | 0.166804 | 6,055 | 173 | 194 | 35 | 0.640634 | 0.196367 | 0 | 0.278689 | 0 | 0 | 0.435728 | 0.049265 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.057377 | 0.057377 | 0 | 0.057377 | 0.02459 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
2b30462f4e15d1e7277002cce72ced8525343755 | 982 | py | Python | dailyblink/media.py | ptrstn/dailyblink | 16fe482552b101d83412bfbb662b8754682ba7d2 | [
"MIT"
] | 25 | 2020-05-01T16:34:11.000Z | 2022-02-19T09:39:20.000Z | dailyblink/media.py | ptrstn/dailyblink | 16fe482552b101d83412bfbb662b8754682ba7d2 | [
"MIT"
] | 24 | 2020-12-07T21:07:11.000Z | 2022-03-15T18:18:00.000Z | dailyblink/media.py | ptrstn/dailyblink | 16fe482552b101d83412bfbb662b8754682ba7d2 | [
"MIT"
] | 6 | 2021-03-05T09:19:37.000Z | 2022-01-01T08:25:14.000Z |
import pathlib

from mutagen.mp4 import MP4


def create_file(content, path, mode):
    pathlib.Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, mode) as file:
        file.write(content)


def save_media(media, file_path):
    create_file(content=media, path=file_path, mode="wb")


def save_text(text, file_path):
    create_file(content=text, path=file_path, mode="w+")


def set_m4a_meta_data(
    filename,
    artist=None,
    title=None,
    album=None,
    track_number=None,
    total_track_number=None,
    genre=None,
):
    mp4_file = MP4(filename)
    if not mp4_file.tags:
        mp4_file.add_tags()
    tags = mp4_file.tags
    if artist:
        tags["\xa9ART"] = artist
    # Note: the original assigned album under "if title" and vice versa;
    # \xa9nam is the title atom and \xa9alb the album atom.
    if title:
        tags["\xa9nam"] = title
    if album:
        tags["\xa9alb"] = album
    if track_number and total_track_number:
        tags["trkn"] = [(track_number, total_track_number)]
    if genre:
        tags["\xa9gen"] = genre
    tags.save(filename)
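The point of `create_file()` is that parent directories are created on demand, so nested paths work without prior `mkdir` calls. A self-contained roundtrip sketch (the function is reproduced inline so the example runs on its own):

```python
import pathlib
import tempfile

def create_file(content, path, mode):
    # Same behaviour as the helper above: make parents, then write.
    pathlib.Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, mode) as file:
        file.write(content)

with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp) / "a" / "b" / "note.txt"
    create_file("hello", target, "w+")
    print(target.read_text())  # hello
```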
| 20.458333 | 64 | 0.647658 | 137 | 982 | 4.445255 | 0.357664 | 0.108374 | 0.083744 | 0.059113 | 0.082102 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015915 | 0.232179 | 982 | 47 | 65 | 20.893617 | 0.791777 | 0 | 0 | 0 | 0 | 0 | 0.03666 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b307d66d8ef01ec6b560b96a1fec0c928cc9a2d | 22,878 | py | Python | src/server/server.py | HanseMerkur/cassh | 947023ad7971a0922d56aaaee5afcdf9294334e3 | [
"Apache-2.0"
] | null | null | null | src/server/server.py | HanseMerkur/cassh | 947023ad7971a0922d56aaaee5afcdf9294334e3 | [
"Apache-2.0"
] | null | null | null | src/server/server.py | HanseMerkur/cassh | 947023ad7971a0922d56aaaee5afcdf9294334e3 | [
"Apache-2.0"
] | null | null | null |
#!/usr/bin/env python
"""
Sign a user's SSH public key.
"""
from argparse import ArgumentParser
from json import dumps
from os import remove
from re import compile as re_compile, IGNORECASE
from tempfile import NamedTemporaryFile
from urllib.parse import unquote_plus

# Third party library imports
from configparser import ConfigParser, NoOptionError
from ldap import initialize, SCOPE_SUBTREE
from web import application, config, data, httpserver
from web.wsgiserver import CherryPyWSGIServer

# Own library
from ssh_utils import get_fingerprint
from tools import get_principals, get_pubkey, random_string, response_render, timestamp, unquote_custom, Tools

STATES = {
    0: 'ACTIVE',
    1: 'REVOKED',
    2: 'PENDING',
}

URLS = (
    '/admin/([a-z]+)', 'Admin',
    '/ca', 'Ca',
    '/client', 'Client',
    '/client/status', 'ClientStatus',
    '/cluster/status', 'ClusterStatus',
    '/health', 'Health',
    '/krl', 'Krl',
    '/ping', 'Ping',
    '/test_auth', 'TestAuth',
)

VERSION = '1.9.2'

PARSER = ArgumentParser()
PARSER.add_argument('-c', '--config', action='store', help='Configuration file')
PARSER.add_argument('-v', '--verbose', action='store_true', default=False, help='Add verbosity')
ARGS = PARSER.parse_args()

if not ARGS.config:
    PARSER.error('--config argument is required !')

CONFIG = ConfigParser()
CONFIG.read(ARGS.config)
SERVER_OPTS = {}
SERVER_OPTS['ca'] = CONFIG.get('main', 'ca')
SERVER_OPTS['krl'] = CONFIG.get('main', 'krl')
SERVER_OPTS['port'] = CONFIG.get('main', 'port')

try:
    SERVER_OPTS['admin_db_failover'] = CONFIG.get('main', 'admin_db_failover')
except NoOptionError:
    SERVER_OPTS['admin_db_failover'] = False
SERVER_OPTS['ldap'] = False
SERVER_OPTS['ssl'] = False

if CONFIG.has_section('postgres'):
    try:
        SERVER_OPTS['db_host'] = CONFIG.get('postgres', 'host')
        SERVER_OPTS['db_name'] = CONFIG.get('postgres', 'dbname')
        SERVER_OPTS['db_user'] = CONFIG.get('postgres', 'user')
        SERVER_OPTS['db_password'] = CONFIG.get('postgres', 'password')
    except NoOptionError:
        if ARGS.verbose:
            print('Option reading error (postgres).')
        exit(1)

if CONFIG.has_section('ldap'):
    try:
        SERVER_OPTS['ldap'] = True
        SERVER_OPTS['ldap_host'] = CONFIG.get('ldap', 'host')
        SERVER_OPTS['ldap_bind_dn'] = CONFIG.get('ldap', 'bind_dn')
        SERVER_OPTS['ldap_admin_cn'] = CONFIG.get('ldap', 'admin_cn')
        SERVER_OPTS['filterstr'] = CONFIG.get('ldap', 'filterstr')
    except NoOptionError:
        if ARGS.verbose:
            print('Option reading error (ldap).')
        exit(1)

if CONFIG.has_section('ssl'):
    try:
        SERVER_OPTS['ssl'] = True
        SERVER_OPTS['ssl_private_key'] = CONFIG.get('ssl', 'private_key')
        SERVER_OPTS['ssl_public_key'] = CONFIG.get('ssl', 'public_key')
    except NoOptionError:
        if ARGS.verbose:
            print('Option reading error (ssl).')
        exit(1)

# Cluster mode is used for revocation
try:
    SERVER_OPTS['cluster'] = CONFIG.get('main', 'cluster').split(',')
except NoOptionError:
    # Standalone mode
    PROTO = 'http'
    if SERVER_OPTS['ssl']:
        PROTO = 'https'
    SERVER_OPTS['cluster'] = ['%s://localhost:%s' % (PROTO, SERVER_OPTS['port'])]

try:
    SERVER_OPTS['clustersecret'] = CONFIG.get('main', 'clustersecret')
except NoOptionError:
    # Standalone mode
    SERVER_OPTS['clustersecret'] = random_string(32)

try:
    SERVER_OPTS['debug'] = bool(CONFIG.get('main', 'debug') != 'False')
except NoOptionError:
    SERVER_OPTS['debug'] = False

TOOLS = Tools(SERVER_OPTS, STATES, VERSION)
def data2map():
    """
    Returns a map built from the urlencoded POST body.
    """
    data_map = {}
    data_str = data().decode('utf-8')
    if data_str == '':
        return data_map
    for key in data_str.split('&'):
        data_map[key.split('=')[0]] = '='.join(key.split('=')[1:])
    return data_map
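The body parsing above splits on `&`, then only on the *first* `=`, so values containing `=` (such as base64-padded public keys) survive intact. A standalone sketch of the same logic:

```python
from urllib.parse import unquote_plus

def parse_body(data_str):
    """Same split-then-rejoin logic as data2map(), minus the web.py dependency."""
    data_map = {}
    if data_str == '':
        return data_map
    for key in data_str.split('&'):
        data_map[key.split('=')[0]] = '='.join(key.split('=')[1:])
    return data_map

body = "realname=jane.doe%40example.org&pubkey=ssh-rsa AAAA=="
parsed = parse_body(body)
print(unquote_plus(parsed["realname"]))  # jane.doe@example.org
print(parsed["pubkey"])                  # ssh-rsa AAAA==
```

Note that, as in the server, percent-decoding is left to the caller (`unquote_plus`).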
def ldap_authentification(admin=False):
    """
    Return True if the user authenticates against LDAP.
    Expects POST fields:
        realname=xxxxx@domain.fr
        password=xxxxx
    """
    if SERVER_OPTS['ldap']:
        credentials = data2map()
        if 'realname' in credentials:
            realname = unquote_plus(credentials['realname'])
        else:
            return False, 'Error: No realname option given.'
        if 'password' in credentials:
            password = unquote_plus(credentials['password'])
        else:
            return False, 'Error: No password option given.'
        if password == '':
            return False, 'Error: password is empty.'
        ldap_conn = initialize("ldap://" + SERVER_OPTS['ldap_host'])
        try:
            ldap_conn.bind_s(realname, password)
        except Exception as error:
            return False, 'Error: %s' % error
        if admin:
            memberof_admin_list = ldap_conn.search_s(
                SERVER_OPTS['ldap_bind_dn'],
                SCOPE_SUBTREE,
                filterstr='(&(%s=%s)(memberOf=%s))' % (
                    SERVER_OPTS['filterstr'],
                    realname,
                    SERVER_OPTS['ldap_admin_cn']))
            if not memberof_admin_list:
                return False, 'Error: user %s is not an admin.' % realname
    return True, 'OK'
class Admin():
    """
    Class admin to activate or revoke keys.
    """
    def POST(self, username):
        """
        Revoke or activate keys.
        /admin/<username>
            revoke=true/false => Revoke user
            status=true/false => Display status
        """
        # LDAP authentication
        is_admin_auth, message = ldap_authentification(admin=True)
        if not is_admin_auth:
            return response_render(message, http_code='401 Unauthorized')
        payload = data2map()
        if 'revoke' in payload:
            do_revoke = payload['revoke'].lower() == 'true'
        else:
            do_revoke = False
        if 'status' in payload:
            do_status = payload['status'].lower() == 'true'
        else:
            do_status = False
        pg_conn, message = TOOLS.pg_connection()
        if pg_conn is None:
            return response_render(message, http_code='503 Service Unavailable')
        cur = pg_conn.cursor()
        if username == 'all' and do_status:
            return response_render(
                TOOLS.list_keys(),
                content_type='application/json')
        # Search if the key already exists
        cur.execute('SELECT * FROM USERS WHERE NAME=(%s)', (username,))
        user = cur.fetchone()
        # If the user doesn't exist
        if user is None:
            cur.close()
            pg_conn.close()
            message = 'User does not exist.'
        elif do_revoke:
            cur.execute('UPDATE USERS SET STATE=1 WHERE NAME=(%s)', (username,))
            pg_conn.commit()
            pubkey = get_pubkey(username, pg_conn)
            cur.execute('INSERT INTO REVOCATION VALUES \
                ((%s), (%s), (%s))', \
                (pubkey, timestamp(), username))
            pg_conn.commit()
            message = 'Revoke user=%s.' % username
            cur.close()
            pg_conn.close()
        # Display status
        elif do_status:
            return response_render(
                TOOLS.list_keys(username=username),
                content_type='application/json')
        # If the user is in REVOKED or PENDING state, reactivate
        elif user[2] in (1, 2):
            cur.execute('UPDATE USERS SET STATE=0 WHERE NAME=(%s)', (username,))
            pg_conn.commit()
            cur.close()
            pg_conn.close()
            message = 'Active user=%s. SSH Key active but need to be signed.' % username
        else:
            cur.close()
            pg_conn.close()
            message = 'user=%s already active. Nothing done.' % username
        return response_render(message)

    def PATCH(self, username):
        """
        Set the first matching key to the given value.
        /admin/<username>
            key=value => Set the key value. Keys are listed in the status output.
        """
        # LDAP authentication
        is_admin_auth, message = ldap_authentification(admin=True)
        if not is_admin_auth:
            return response_render(message, http_code='401 Unauthorized')
        pg_conn, message = TOOLS.pg_connection()
        if pg_conn is None:
            return response_render(message, http_code='503 Service Unavailable')
        cur = pg_conn.cursor()
        payload = data2map()
        for key, value in payload.items():
            if key == 'expiry':
                pattern = re_compile('^\\+([0-9]+)+[dh]$')
                if pattern.match(value) is None:
                    return response_render(
                        'ERROR: Value %s is malformed. Should match pattern ^\\+([0-9]+)+[dh]$' \
                        % value,
                        http_code='400 Bad Request')
                cur.execute('UPDATE USERS SET EXPIRY=(%s) WHERE NAME=(%s)', (value, username))
                pg_conn.commit()
                cur.close()
                pg_conn.close()
                return response_render('OK: %s=%s for %s' % (key, value, username))
            elif key == 'principals':
                value = unquote_plus(value)
                pattern = re_compile("^([a-zA-Z-]+)$")
                for principal in value.split(','):
                    if pattern.match(principal) is None:
                        return response_render(
                            'ERROR: Value %s is malformed. Should match pattern ^([a-zA-Z-]+)$' \
                            % principal,
                            http_code='400 Bad Request')
                cur.execute('UPDATE USERS SET PRINCIPALS=(%s) WHERE NAME=(%s)', (value, username))
                pg_conn.commit()
                cur.close()
                pg_conn.close()
                return response_render('OK: %s=%s for %s' % (key, value, username))
        return response_render('WARNING: No key found...')

    def DELETE(self, username):
        """
        Delete keys (but DOESN'T REVOKE).
        /admin/<username>
        """
        # LDAP authentication
        is_admin_auth, message = ldap_authentification(admin=True)
        if not is_admin_auth:
            return response_render(message, http_code='401 Unauthorized')
        pg_conn, message = TOOLS.pg_connection()
        if pg_conn is None:
            return response_render(message, http_code='503 Service Unavailable')
        cur = pg_conn.cursor()
        cur.execute('DELETE FROM USERS WHERE NAME=(%s)', (username,))
        pg_conn.commit()
        cur.close()
        pg_conn.close()
        return response_render('OK')
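The two validation patterns used in `Admin.PATCH` above can be checked in isolation. A sketch, not part of the server: expiry values look like `+12h` or `+30d`, and principal lists are comma-separated names of letters and dashes.

```python
import re

expiry_pattern = re.compile(r'^\+([0-9]+)+[dh]$')
principal_pattern = re.compile(r'^([a-zA-Z-]+)$')

print(bool(expiry_pattern.match('+12h')))  # True
print(bool(expiry_pattern.match('12h')))   # False (missing leading '+')
# Each principal is validated individually after splitting on commas.
print(all(principal_pattern.match(p) for p in 'web-admin,backup'.split(',')))  # True
```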
class Ca():
    """
    Class CA.
    """
    def GET(self):
        """
        Return the CA public key.
        """
        return response_render(
            open(SERVER_OPTS['ca'] + '.pub', 'rb'),
            content_type='application/octet-stream')


class ClientStatus():
    """
    ClientStatus main class.
    """
    def POST(self):
        """
        Get client key status.
        /client/status
        """
        # LDAP authentication
        is_auth, message = ldap_authentification()
        if not is_auth:
            return response_render(message, http_code='401 Unauthorized')
        payload = data2map()
        if 'realname' in payload:
            realname = unquote_plus(payload['realname'])
        else:
            return response_render(
                'Error: No realname option given.',
                http_code='400 Bad Request')
        return response_render(
            TOOLS.list_keys(realname=realname),
            content_type='application/json')
class Client():
    """
    Client main class.
    """
    def POST(self):
        """
        Ask to sign a public key.
        /client
            username=xxxxxx => Unique username. Used by default to connect on server.
            realname=xxxxx@domain.fr => This LDAP/AD user.
            # Optional
            admin_force=true|false
        """
        # LDAP authentication
        is_auth, message = ldap_authentification()
        if not is_auth:
            return response_render(message, http_code='401 Unauthorized')
        # Check if the user is an admin and wants to force signature when the db fails
        force_sign = False
        # LDAP ADMIN authentication
        is_admin_auth, _ = ldap_authentification(admin=True)
        payload = data2map()
        if is_admin_auth and SERVER_OPTS['admin_db_failover'] \
                and 'admin_force' in payload and payload['admin_force'].lower() == 'true':
            force_sign = True
        # Get username
        if 'username' in payload:
            username = payload['username']
        else:
            return response_render(
                'Error: No username option given. Update your CASSH >= 1.3.0',
                http_code='400 Bad Request')
        username_pattern = re_compile("^([a-z]+)$")
        if username_pattern.match(username) is None or username == 'all':
            return response_render(
                "Error: Username doesn't match pattern %s" \
                % username_pattern.pattern,
                http_code='400 Bad Request')
        # Get realname
        if 'realname' in payload:
            realname = unquote_plus(payload['realname'])
        else:
            return response_render(
                'Error: No realname option given.',
                http_code='400 Bad Request')
        # Get public key
        if 'pubkey' in payload:
            pubkey = unquote_custom(payload['pubkey'])
        else:
            return response_render(
                'Error: No pubkey given.',
                http_code='400 Bad Request')
        tmp_pubkey = NamedTemporaryFile(delete=False)
        tmp_pubkey.write(bytes(pubkey, 'utf-8'))
        tmp_pubkey.close()
        pubkey_fingerprint = get_fingerprint(tmp_pubkey.name)
        if pubkey_fingerprint == 'Unknown':
            remove(tmp_pubkey.name)
            return response_render(
                'Error : Public key unprocessable',
                http_code='422 Unprocessable Entity')
        pg_conn, message = TOOLS.pg_connection()
        # Admin force signature case
        if pg_conn is None and force_sign:
            cert_contents = TOOLS.sign_key(tmp_pubkey.name, username, '+12h', username)
            remove(tmp_pubkey.name)
            return response_render(cert_contents, content_type='application/octet-stream')
        # Else, if the db is down, it fails.
        elif pg_conn is None:
            remove(tmp_pubkey.name)
            return response_render(message, http_code='503 Service Unavailable')
        cur = pg_conn.cursor()
        # Search if the key already exists
        cur.execute('SELECT * FROM USERS WHERE SSH_KEY=(%s) AND NAME=lower(%s)', (pubkey, username))
        user = cur.fetchone()
        if user is None:
            cur.close()
            pg_conn.close()
            remove(tmp_pubkey.name)
            return response_render(
                'Error : User or Key absent, add your key again.',
                http_code='400 Bad Request')
        if username != user[0] or realname != user[1]:
            cur.close()
            pg_conn.close()
            remove(tmp_pubkey.name)
            return response_render(
                'Error : (username, realname) couple mismatch.',
                http_code='401 Unauthorized')
        status = user[2]
        expiry = user[6]
        principals = get_principals(user[7], username, shell=True)
        if status > 0:
            cur.close()
            pg_conn.close()
            remove(tmp_pubkey.name)
            return response_render("Status: %s" % STATES[user[2]])
        cert_contents = TOOLS.sign_key(tmp_pubkey.name, username, expiry, principals, db_cursor=cur)
        remove(tmp_pubkey.name)
        pg_conn.commit()
        cur.close()
        pg_conn.close()
        return response_render(
            cert_contents,
            content_type='application/octet-stream')

    def PUT(self):
        """
        Add or update an SSH public key.
        /client
            username=xxxxxx => Unique username. Used by default to connect on server.
            realname=xxxxx@domain.fr => This LDAP/AD user.
        """
        # LDAP authentication
        is_auth, message = ldap_authentification()
        if not is_auth:
            return response_render(message, http_code='401 Unauthorized')
        payload = data2map()
        if 'username' in payload:
            username = payload['username']
        else:
            return response_render(
                'Error: No username option given.',
                http_code='400 Bad Request')
        username_pattern = re_compile("^([a-z]+)$")
        if username_pattern.match(username) is None or username == 'all':
            return response_render(
                "Error: Username doesn't match pattern %s" \
                % username_pattern.pattern,
                http_code='400 Bad Request')
        if 'realname' in payload:
            realname = unquote_plus(payload['realname'])
        else:
            return response_render(
                'Error: No realname option given.',
                http_code='400 Bad Request')
        realname_pattern = re_compile(
            r"(^[-!#$%&'*+/=?^_`{}|~0-9A-Z]+(\.[-!#$%&'*+/=?^_`{}|~0-9A-Z]+)*"
            r'|^"([\001-\010\013\014\016-\037!#-\[\]-\177]|\\[\001-\011\013\014\016-\177])*"'
            r')@(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?$', IGNORECASE)
        if realname_pattern.match(realname) is None:
            return response_render(
                "Error: Realname doesn't match pattern",
                http_code='400 Bad Request')
        # Get public key
        if 'pubkey' in payload:
            pubkey = unquote_custom(payload['pubkey'])
        else:
            return response_render(
                'Error: No pubkey given.',
                http_code='400 Bad Request')
        tmp_pubkey = NamedTemporaryFile(delete=False)
        tmp_pubkey.write(bytes(pubkey, 'utf-8'))
        tmp_pubkey.close()
        pubkey_fingerprint = get_fingerprint(tmp_pubkey.name)
        if pubkey_fingerprint == 'Unknown':
            remove(tmp_pubkey.name)
            return response_render(
                'Error : Public key unprocessable',
                http_code='422 Unprocessable Entity')
        pg_conn, message = TOOLS.pg_connection()
        if pg_conn is None:
            remove(tmp_pubkey.name)
            return response_render(message, http_code='503 Service Unavailable')
        cur = pg_conn.cursor()
        # Search if the key already exists
        cur.execute('SELECT * FROM USERS WHERE NAME=(%s)', (username,))
        user = cur.fetchone()
        # CREATE NEW USER
        if user is None:
            cur.execute('INSERT INTO USERS VALUES \
                ((%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s))', \
                (username, realname, 2, 0, pubkey_fingerprint, pubkey, '+12h', ''))
            pg_conn.commit()
            cur.close()
            pg_conn.close()
            remove(tmp_pubkey.name)
            return response_render(
                'Create user=%s. Pending request.' % username,
                http_code='201 Created')
        else:
            # Check if realname is the same
            cur.execute('SELECT * FROM USERS WHERE NAME=(%s) AND REALNAME=lower((%s))', \
                (username, realname))
            if cur.fetchone() is None:
                pg_conn.commit()
                cur.close()
                pg_conn.close()
                remove(tmp_pubkey.name)
                return response_render(
                    'Error : (username, realname) couple mismatch.',
                    http_code='401 Unauthorized')
            # Update the entry in the database
            cur.execute('UPDATE USERS SET SSH_KEY=(%s), SSH_KEY_HASH=(%s), STATE=2, EXPIRATION=0 \
                WHERE NAME=(%s)', (pubkey, pubkey_fingerprint, username))
            pg_conn.commit()
            cur.close()
            pg_conn.close()
            remove(tmp_pubkey.name)
            return response_render('Update user=%s. Pending request.' % username)
class ClusterStatus():
    """
    ClusterStatus main class.
    """
    def GET(self):
        """
        /cluster/status
        """
        message = dict()
        alive_nodes, dead_nodes = TOOLS.cluster_alived()
        for node in alive_nodes:
            message.update({node: {'status': 'OK'}})
        for node in dead_nodes:
            message.update({node: {'status': 'KO'}})
        return response_render(
            dumps(message),
            content_type='application/json')


class Health():
    """
    Class Health.
    """
    def GET(self):
        """
        Return a health check.
        """
        health = {}
        health['name'] = 'cassh'
        health['version'] = VERSION
        return response_render(
            dumps(health, indent=4, sort_keys=True),
            content_type='application/json')


class Krl():
    """
    Class KRL.
    """
    def GET(self):
        """
        Return the krl.
        """
        return TOOLS.get_last_krl()


class Ping():
    """
    Class Ping.
    """
    def GET(self):
        """
        Return a pong.
        """
        return response_render('pong')


class TestAuth():
    """
    Test authentication.
    """
    def POST(self):
        """
        Test authentication.
        """
        # LDAP authentication
        is_auth, message = ldap_authentification()
        if not is_auth:
            return response_render(message, http_code='401 Unauthorized')
        return response_render('OK')


class MyApplication(application):
    """
    web.py application that allows overriding the listening port.
    """
    def run(self, port=int(SERVER_OPTS['port']), *middleware):
        func = self.wsgifunc(*middleware)
        return httpserver.runsimple(func, ('0.0.0.0', port))
if __name__ == "__main__":
    if SERVER_OPTS['ssl']:
        CherryPyWSGIServer.ssl_certificate = SERVER_OPTS['ssl_public_key']
        CherryPyWSGIServer.ssl_private_key = SERVER_OPTS['ssl_private_key']
    if ARGS.verbose:
        print('SSL: %s' % SERVER_OPTS['ssl'])
        print('LDAP: %s' % SERVER_OPTS['ldap'])
        print('Admin DB Failover: %s' % SERVER_OPTS['admin_db_failover'])
    APP = MyApplication(URLS, globals())
    config.debug = SERVER_OPTS['debug']
    if SERVER_OPTS['debug']:
        print('Debug mode on')
    APP.run()
| 33.447368 | 110 | 0.563992 | 2,549 | 22,878 | 4.910553 | 0.137309 | 0.053687 | 0.075098 | 0.033954 | 0.526244 | 0.468483 | 0.44835 | 0.439722 | 0.430375 | 0.405928 | 0 | 0.01281 | 0.314145 | 22,878 | 683 | 111 | 33.49634 | 0.784909 | 0.085978 | 0 | 0.528421 | 0 | 0.006316 | 0.188702 | 0.014493 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031579 | false | 0.014737 | 0.025263 | 0 | 0.197895 | 0.029474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2b31dd8ff5a66ac8bf0442f51e45b1fcb61fee3b | 23,469 | py | Python | src/test/test_data.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 14 | 2020-04-28T12:52:50.000Z | 2021-08-25T00:36:51.000Z | src/test/test_data.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 21 | 2016-10-24T20:16:39.000Z | 2020-02-11T21:30:50.000Z | src/test/test_data.py | opploans/cbc-syslog | 72a203b1dbe6ddd97f02dc87f36631d758564022 | [
"MIT"
] | 15 | 2016-12-19T20:39:24.000Z | 2020-01-02T16:26:34.000Z |
# -*- coding: utf-8 -*-
null = ""
true = "true"
false = "false"
raw_notifications = {
"notifications": [{
"threatInfo": {
"incidentId": "Z7NG6",
"score": 7,
"summary": "A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.",
"indicators": [{
"indicatorName": "PACKED_CALL",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "TARGET_MALWARE_APP",
"applicationName": "explorer.exe",
"sha256Hash": "1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455"
},
{
"indicatorName": "HAS_PACKED_CODE",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "KNOWN_DOWNLOADER",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "ENUMERATE_PROCESSES",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "SET_SYSTEM_SECURITY",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "MODIFY_MEMORY_PROTECTION",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "KNOWN_PASSWORD_STEALER",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "RUN_MALWARE_APP",
"applicationName": "explorer.exe",
"sha256Hash": "1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455"
},
{
"indicatorName": "MODIFY_PROCESS",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
},
{
"indicatorName": "MALWARE_APP",
"applicationName": "ShippingInvoice.pdf.exe",
"sha256Hash": "cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc"
}
],
"time": 1460703240678
},
"url": "https://testserver.company.net/ui#investigate/events/device/2004118/incident/Z7NG6",
"eventTime": 1460703240678,
"eventId": "f279d0e6035211e6be8701df2c083974",
"eventDescription": "[syslog alert] [Cb Defense has detected a threat against your company.] [https://testserver.company.net/ui#device/2004118/incident/Z7NG6] [A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.] [Incident id: Z7NG6] [Threat score: 7] [Group: default] [Email: FirstName.LastName@company.net.demo] [Name: Demo_CaretoPC] [Type and OS: WINDOWS XP x86 SP: 0]\n",
"deviceInfo": {
"email": "COMPANY\\FirstName.LastName",
"groupName": "default",
"internalIpAddress": null,
"externalIpAddress": null,
"deviceType": "WINDOWS",
"deviceVersion": "XP x86 SP: 0",
"targetPriorityType": "MEDIUM",
"deviceId": 2004118,
"deviceName": "COMPANY\\Demo_CaretoPC",
"deviceHostName": null,
"targetPriorityCode": 0
},
"ruleName": "syslog alert",
"type": "THREAT"
},
{
"policyAction": {
"sha256Hash": "2552332222112552332222112552332222112552332222112552332222112552",
"action": "TERMINATE",
"reputation": "KNOWN_MALWARE",
"applicationName": "firefox.exe"
},
"type": "POLICY_ACTION",
"eventTime": 1423163263482,
"eventId": "EV1",
"url": "http://carbonblack.com/ui#device/100/hash/2552332222112552332222112552332222112552332222112552332222112552/app/firefox.exe/keyword/terminate policy action",
"deviceInfo": {
"deviceType": "WINDOWS",
"email": "tester@carbonblack.com",
"deviceId": 100,
"deviceName": "testers-pc",
"deviceHostName": null,
"deviceVersion": "7 SP1",
"targetPriorityType": "HIGH",
"targetPriorityCode": 0,
"internalIpAddress": "55.33.22.11",
"groupName": "Executives",
"externalIpAddress": "255.233.222.211"
},
"eventDescription": "Policy action 1",
"ruleName": "Alert Rule 1"
},
{
"threatHunterInfo": {
"incidentId": "WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660",
"score": 1,
"summary": "PowerShell - File and Directory Discovery Enumeration",
"time": 1554652050250,
"indicators": [
{
"applicationName": "powershell.exe",
"sha256Hash": "ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436",
"indicatorName": "565660-0"
}
],
"watchLists": [
{
"id": "a3xW2ZiaRyAqRtuVES8Q",
"name": "ATT&CK Framework",
"alert": true
}
],
"iocId": "565660-0",
"count": 0,
"dismissed": false,
"documentGuid": "7a9fQEsTRfuFmXcogI8CMQ",
"firstActivityTime": 1554651811577,
"md5": "097ce5761c89434367598b34fe32893b",
"policyId": 9815,
"processGuid": "WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf",
"processPath": "c:\\windows\\system32\\windowspowershell\\v1.0\\powershell.exe",
"reportName": "PowerShell - File and Directory Discovery Enumeration",
"reportId": "j0MkcneCQXy1fIbhber6rw-565660",
"reputation": "TRUSTED_WHITE_LIST",
"responseAlarmId": "WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660",
"responseSeverity": 1,
"runState": "RAN",
"sha256": "ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436",
"status": "UNRESOLVED",
"tags": null,
"targetPriority": "MEDIUM",
"threatCause": {
"reputation": "TRUSTED_WHITE_LIST",
"actor": "ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436",
"actorName": "powershell.exe",
"reason": "Process powershell.exe was detected by the report \"PowerShell - File and Directory Discovery Enumeration\" in watchlist \"ATT&CK Framework\"",
"actorType": null,
"threatCategory": "RESPONSE_WATCHLIST",
"actorProcessPPid": null,
"causeEventId": null,
"originSourceType": "UNKNOWN"
},
"threatId": "a2b724aa094af97c06c758d325240460",
"lastUpdatedTime": 0,
"orgId": 428
},
"eventDescription": "[sm-sentinel-notification] [Carbon Black has detected a threat against your company.] [https://defense-eap01.conferdeploy.net#device/18900/incident/WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660] [PowerShell - File and Directory Discovery Enumeration] [Incident id: WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660] [Threat score: 1] [Group: sm-detection] [Email: smultani@carbonblack.com] [Name: win-559j1nqvfgj] [Type and OS: WINDOWS pscr-sensor] [Severity: 1]\n",
"eventTime": 1554651811577,
"deviceInfo": {
"deviceId": 18900,
"targetPriorityCode": 0,
"groupName": "sm-detection",
"deviceName": "win-559j1nqvfgj",
"deviceType": "WINDOWS",
"email": "smultani@carbonblack.com",
"deviceHostName": null,
"deviceVersion": "pscr-sensor",
"targetPriorityType": "MEDIUM",
"uemId": null,
"internalIpAddress": "192.168.81.148",
"externalIpAddress": "73.69.152.214"
},
"url": "https://defense-eap01.conferdeploy.net/investigate?s[searchWindow]=ALL&s[c][DEVICE_ID][0]=18900&s[c][INCIDENT_ID][0]=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660",
"ruleName": "sm-sentinel-notification",
"type": "THREAT_HUNTER"
}],
"success": true,
"message": "Success"
}
cef_notifications = ['test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Active_Threat|A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.|7|rt="Apr 15 2016 06:54:00" sntdom=COMPANY dvchost=Demo_CaretoPC duser=FirstName.LastName dvc= cs3Label="Link" cs3="https://testserver.company.net/ui#investigate/events/device/2004118/incident/Z7NG6" cs4Label="Threat_ID" cs4="Z7NG6" act=Alert', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Policy_Action|Confer Sensor Policy Action|1|rt="Feb 05 2015 19:07:43" dvchost=testers-pc duser=tester@carbonblack.com dvc=55.33.22.11 cs3Label="Link" cs3="http://carbonblack.com/ui#device/100/hash/2552332222112552332222112552332222112552332222112552332222112552/app/firefox.exe/keyword/terminate policy action" act=TERMINATE hash=2552332222112552332222112552332222112552332222112552332222112552 deviceprocessname=firefox.exe', 'test CEF:0|CarbonBlack|CbDefense_Syslog_Connector|2.0|Threat_Hunter|PowerShell - File and Directory Discovery Enumeration|1|rt="Apr 07 2019 15:43:31" dvchost=win-559j1nqvfgj duser=smultani@carbonblack.com dvc=192.168.81.148 cs3Label="Link" cs3="https://defense-eap01.conferdeploy.net/investigate?s[searchWindow]=ALL&s[c][DEVICE_ID][0]=18900&s[c][INCIDENT_ID][0]=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660" cs4Label="Threat_ID" cs4="WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660" hash=ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436']
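Each entry in `cef_notifications` above follows the CEF wire format: a `CEF:` header of seven pipe-delimited fields (version, vendor, product, device version, signature id, name, severity) followed by a space-separated `key=value` extension whose values may themselves contain spaces. As a minimal sketch of how such a line can be decomposed (the `parse_cef` helper and its field names are hypothetical; it assumes the syslog prefix before `CEF:` and the unescaped pipes seen in these samples):

```python
import re


def parse_cef(line):
    """Split a CEF syslog line into its 7 header fields and extension pairs.

    Hypothetical helper for illustration only; assumes the shape of the
    samples above (a prefix before "CEF:", no escaped "|" in the header).
    """
    # Drop anything before the "CEF:" marker (e.g. the "test" syslog prefix).
    body = line[line.index("CEF:"):]
    # The header is the first 7 pipe-delimited fields; the rest is the extension.
    parts = body.split("|", 7)
    header = dict(zip(
        ["version", "vendor", "product", "device_version",
         "signature_id", "name", "severity"],
        [parts[0].split(":", 1)[1]] + parts[1:7],
    ))
    # Extension values may contain spaces (e.g. rt="Apr 15 2016 06:54:00"),
    # so split only at spaces that begin a new "key=" token.
    extension = {
        k: v.strip('"')
        for k, v in (kv.split("=", 1)
                     for kv in re.split(r" (?=\w+=)", parts[7]))
    }
    return header, extension
```

For example, parsing the first fixture string yields `header["severity"] == "7"` and `extension["act"] == "Alert"`.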
leef_notifications = ['LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=PACKED_CALL\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=explorer.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=TARGET_MALWARE_APP\tsev=Z7NG6\tsha256Hash=1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=HAS_PACKED_CODE\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=KNOWN_DOWNLOADER\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=ENUMERATE_PROCESSES\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=SET_SYSTEM_SECURITY\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=MODIFY_MEMORY_PROTECTION\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=KNOWN_PASSWORD_STEALER\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 
'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=explorer.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=RUN_MALWARE_APP\tsev=Z7NG6\tsha256Hash=1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=MODIFY_PROCESS\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=ShippingInvoice.pdf.exe\tcat=INDICATOR\tincidentId=Z7NG6\tindicatorName=MALWARE_APP\tsev=Z7NG6\tsha256Hash=cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc', 'LEEF:2.0|CarbonBlack|Cloud|1.0|THREAT|x09|cat=THREAT\tdevTime=Apr-15-2016 06:54:00 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\tdeviceId=2004118\tdeviceType=WINDOWS\teventId=f279d0e6035211e6be8701df2c083974\tidentHostName=\tidentSrc=\tincidentId=Z7NG6\trealm=default\tresource=COMPANY\\Demo_CaretoPC\truleName=syslog alert\tsev=7\tsummary=A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.\ttargetPriorityType=MEDIUM\turl=https://testserver.company.net/ui#investigate/events/device/2004118/incident/Z7NG6', 'LEEF:2.0|CarbonBlack|Cloud|1.0|POLICY_ACTION|x09|action=TERMINATE\tapplicationName=firefox.exe\tcat=POLICY_ACTION\tdevTime=Feb-05-2015 19:07:43 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\tdeviceId=100\tdeviceType=WINDOWS\teventId=EV1\tidentHostName=\tidentSrc=55.33.22.11\trealm=Executives\treputation=KNOWN_MALWARE\tresource=testers-pc\truleName=Alert Rule 1\tsev=1\tsha256=2552332222112552332222112552332222112552332222112552332222112552\tsummary=\ttargetPriorityType=HIGH\turl=http://carbonblack.com/ui#device/100/hash/2552332222112552332222112552332222112552332222112552332222112552/app/firefox.exe/keyword/terminate policy action', 
'LEEF:2.0|CarbonBlack|Cloud|1.0|INDICATOR|x09|applicationName=powershell.exe\tcat=INDICATOR\tincidentId=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660\tindicatorName=565660-0\tsev=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660\tsha256Hash=ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436', 'LEEF:2.0|CarbonBlack|Cloud|1.0|THREAT_HUNTER|x09|cat=THREAT_HUNTER\tdevTime=Apr-07-2019 15:43:31 GMT\tdevTimeFormat=MMM dd yyyy HH:mm:ss z\tdeviceId=18900\tdeviceType=WINDOWS\teventId=None\tidentHostName=\tidentSrc=192.168.81.148\tincidentId=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660\tprocessGuid=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf\tprocessPath=c:\\windows\\system32\\windowspowershell\\v1.0\\powershell.exe\trealm=sm-detection\treportName=PowerShell - File and Directory Discovery Enumeration\treputation=TRUSTED_WHITE_LIST\tresource=win-559j1nqvfgj\truleName=sm-sentinel-notification\trunState=RAN\tsev=1\tsha256=ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436\tsummary=PowerShell - File and Directory Discovery Enumeration\ttargetPriorityType=MEDIUM\turl=https://defense-eap01.conferdeploy.net/investigate?s[searchWindow]=ALL&s[c][DEVICE_ID][0]=18900&s[c][INCIDENT_ID][0]=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660\twatchlists=ATT&CK Framework']
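The `leef_notifications` entries use LEEF 2.0: six pipe-delimited header fields, where the sixth (`x09` in these samples) names the attribute delimiter as a hex code, followed by delimiter-separated `key=value` attributes. A minimal sketch under that `x09` convention (the `parse_leef` helper and its field names are hypothetical):

```python
def parse_leef(line):
    """Split a LEEF 2.0 line into header fields and its attribute pairs.

    Hypothetical helper for illustration only; assumes the samples above,
    where the delimiter field is a hex code such as "x09" (tab).
    """
    parts = line.split("|", 6)
    header = dict(zip(
        ["version", "vendor", "product", "product_version", "event_id"],
        [parts[0].split(":", 1)[1]] + parts[1:5],
    ))
    # parts[5] names the attribute delimiter as a hex code, e.g. "x09" -> "\t".
    delim = chr(int(parts[5].lstrip("x"), 16))
    # Attribute values may contain "=" (URLs), so split each pair only once.
    attrs = dict(kv.split("=", 1) for kv in parts[6].split(delim) if "=" in kv)
    return header, attrs
```

Applied to an INDICATOR sample, this yields `header["event_id"] == "INDICATOR"` and attributes such as `incidentId` and `sha256Hash`.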
json_notifications = [{'threatInfo': {'incidentId': 'Z7NG6', 'score': 7, 'summary': 'A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.', 'indicators': [{'indicatorName': 'PACKED_CALL', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'TARGET_MALWARE_APP', 'applicationName': 'explorer.exe', 'sha256Hash': '1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455'}, {'indicatorName': 'HAS_PACKED_CODE', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'KNOWN_DOWNLOADER', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'ENUMERATE_PROCESSES', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'SET_SYSTEM_SECURITY', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'MODIFY_MEMORY_PROTECTION', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'KNOWN_PASSWORD_STEALER', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'RUN_MALWARE_APP', 'applicationName': 'explorer.exe', 'sha256Hash': '1e675cb7df214172f7eb0497f7275556038a0d09c6e5a3e6862c5e26885ef455'}, {'indicatorName': 'MODIFY_PROCESS', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}, {'indicatorName': 'MALWARE_APP', 'applicationName': 'ShippingInvoice.pdf.exe', 'sha256Hash': 
'cfe0ae57f314a9f747a7cec605907cdaf1984b3cdea74ee8d5893d00ae0886cc'}], 'time': 1460703240678}, 'url': 'https://testserver.company.net/ui#investigate/events/device/2004118/incident/Z7NG6', 'eventTime': 1460703240678, 'eventId': 'f279d0e6035211e6be8701df2c083974', 'eventDescription': '[syslog alert] [Cb Defense has detected a threat against your company.] [https://testserver.company.net/ui#device/2004118/incident/Z7NG6] [A known virus (Sality: Keylogger, Password or Data stealer, Backdoor) was detected running.] [Incident id: Z7NG6] [Threat score: 7] [Group: default] [Email: FirstName.LastName@company.net.demo] [Name: Demo_CaretoPC] [Type and OS: WINDOWS XP x86 SP: 0]\n', 'deviceInfo': {'email': 'COMPANY\\FirstName.LastName', 'groupName': 'default', 'internalIpAddress': '', 'externalIpAddress': '', 'deviceType': 'WINDOWS', 'deviceVersion': 'XP x86 SP: 0', 'targetPriorityType': 'MEDIUM', 'deviceId': 2004118, 'deviceName': 'COMPANY\\Demo_CaretoPC', 'deviceHostName': '', 'targetPriorityCode': 0}, 'ruleName': 'syslog alert', 'type': 'THREAT', 'source': 'test'}, {'policyAction': {'sha256Hash': '2552332222112552332222112552332222112552332222112552332222112552', 'action': 'TERMINATE', 'reputation': 'KNOWN_MALWARE', 'applicationName': 'firefox.exe'}, 'type': 'POLICY_ACTION', 'eventTime': 1423163263482, 'eventId': 'EV1', 'url': 'http://carbonblack.com/ui#device/100/hash/2552332222112552332222112552332222112552332222112552332222112552/app/firefox.exe/keyword/terminate policy action', 'deviceInfo': {'deviceType': 'WINDOWS', 'email': 'tester@carbonblack.com', 'deviceId': 100, 'deviceName': 'testers-pc', 'deviceHostName': '', 'deviceVersion': '7 SP1', 'targetPriorityType': 'HIGH', 'targetPriorityCode': 0, 'internalIpAddress': '55.33.22.11', 'groupName': 'Executives', 'externalIpAddress': '255.233.222.211'}, 'eventDescription': 'Policy action 1', 'ruleName': 'Alert Rule 1', 'source': 'test'}, {'threatHunterInfo': {'incidentId': 
'WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660', 'score': 1, 'summary': 'PowerShell - File and Directory Discovery Enumeration', 'time': 1554652050250, 'indicators': [{'applicationName': 'powershell.exe', 'sha256Hash': 'ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436', 'indicatorName': '565660-0'}], 'watchLists': [{'id': 'a3xW2ZiaRyAqRtuVES8Q', 'name': 'ATT&CK Framework', 'alert': 'true'}], 'iocId': '565660-0', 'count': 0, 'dismissed': 'false', 'documentGuid': '7a9fQEsTRfuFmXcogI8CMQ', 'firstActivityTime': 1554651811577, 'md5': '097ce5761c89434367598b34fe32893b', 'policyId': 9815, 'processGuid': 'WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf', 'processPath': 'c:\\windows\\system32\\windowspowershell\\v1.0\\powershell.exe', 'reportName': 'PowerShell - File and Directory Discovery Enumeration', 'reportId': 'j0MkcneCQXy1fIbhber6rw-565660', 'reputation': 'TRUSTED_WHITE_LIST', 'responseAlarmId': 'WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660', 'responseSeverity': 1, 'runState': 'RAN', 'sha256': 'ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436', 'status': 'UNRESOLVED', 'tags': '', 'targetPriority': 'MEDIUM', 'threatCause': {'reputation': 'TRUSTED_WHITE_LIST', 'actor': 'ba4038fd20e474c047be8aad5bfacdb1bfc1ddbe12f803f473b7918d8d819436', 'actorName': 'powershell.exe', 'reason': 'Process powershell.exe was detected by the report "PowerShell - File and Directory Discovery Enumeration" in watchlist "ATT&CK Framework"', 'actorType': '', 'threatCategory': 'RESPONSE_WATCHLIST', 'actorProcessPPid': '', 'causeEventId': '', 'originSourceType': 'UNKNOWN'}, 'threatId': 'a2b724aa094af97c06c758d325240460', 'lastUpdatedTime': 0, 'orgId': 428}, 'eventDescription': '[sm-sentinel-notification] [Carbon Black has detected a threat against your company.] 
[https://defense-eap01.conferdeploy.net#device/18900/incident/WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660] [PowerShell - File and Directory Discovery Enumeration] [Incident id: WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660] [Threat score: 1] [Group: sm-detection] [Email: smultani@carbonblack.com] [Name: win-559j1nqvfgj] [Type and OS: WINDOWS pscr-sensor] [Severity: 1]\n', 'eventTime': 1554651811577, 'deviceInfo': {'deviceId': 18900, 'targetPriorityCode': 0, 'groupName': 'sm-detection', 'deviceName': 'win-559j1nqvfgj', 'deviceType': 'WINDOWS', 'email': 'smultani@carbonblack.com', 'deviceHostName': '', 'deviceVersion': 'pscr-sensor', 'targetPriorityType': 'MEDIUM', 'uemId': '', 'internalIpAddress': '192.168.81.148', 'externalIpAddress': '73.69.152.214'}, 'url': 'https://defense-eap01.conferdeploy.net/investigate?s[searchWindow]=ALL&s[c][DEVICE_ID][0]=18900&s[c][INCIDENT_ID][0]=WNEXFKQ7-000049d4-00001ef0-00000000-1d4ed58a5f07dbf-j0MkcneCQXy1fIbhber6rw-565660', 'ruleName': 'sm-sentinel-notification', 'type': 'THREAT_HUNTER', 'source': 'test'}]
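The three `json_notifications` entries nest their details under different keys depending on `type`: `threatInfo` for `THREAT`, `policyAction` for `POLICY_ACTION`, and `threatHunterInfo` for `THREAT_HUNTER`, so a consumer has to branch on that field. A hypothetical dispatcher sketch (the `summarize` helper is illustrative, not part of the fixture):

```python
def summarize(notification):
    """Return a (type, identifier, detail) triple from one parsed notification.

    Illustrative only: mirrors the three shapes seen in json_notifications,
    which keep their payloads under type-specific keys.
    """
    ntype = notification["type"]
    if ntype == "THREAT":
        info = notification["threatInfo"]
        return ntype, info["incidentId"], info["score"]
    if ntype == "THREAT_HUNTER":
        info = notification["threatHunterInfo"]
        return ntype, info["incidentId"], info["score"]
    if ntype == "POLICY_ACTION":
        # Policy actions carry no incident id or score; report the action taken.
        return ntype, notification["eventId"], notification["policyAction"]["action"]
    raise ValueError(f"unknown notification type: {ntype}")
```

For the fixtures above this yields, e.g., `("THREAT", "Z7NG6", 7)` for the first entry and `("POLICY_ACTION", "EV1", "TERMINATE")` for the second.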