# File: tests/test_asdf_schema.py (repo: WilliamJamieson/asdf-standard, license: BSD-3-Clause)
import pytest
from common import SCHEMAS_PATH, assert_yaml_header_and_footer, load_yaml
from jsonschema import ValidationError
@pytest.mark.parametrize("path", SCHEMAS_PATH.glob("asdf-schema-*.yaml"))
def test_asdf_schema(path):
assert_yaml_header_and_footer(path)
# Asserting no exceptions here
load_yaml(path)
@pytest.mark.parametrize("path", SCHEMAS_PATH.glob("asdf-schema-*.yaml"))
def test_nested_object_validation(path, create_validator):
"""
Test that the validations are applied to nested objects.
"""
metaschema = load_yaml(path)
validator = create_validator(metaschema)
schema = {"$schema": metaschema["id"], "type": "object", "properties": {"foo": {"datatype": "float32"}}}
# No error here
validator.validate(schema)
schema = {"$schema": metaschema["id"], "type": "object", "properties": {"foo": {"datatype": "banana"}}}
with pytest.raises(ValidationError, match="'banana' is not valid"):
validator.validate(schema)
schema = {
"$schema": metaschema["id"],
"type": "array",
"items": {"type": "object", "properties": {"foo": {"ndim": "twelve"}}},
}
with pytest.raises(ValidationError):
validator.validate(schema)

# File: retinanet/dataloader/label_encoder.py (repo: lchen-wyze/retinanet-tensorflow2.x, license: Apache-2.0)
import tensorflow as tf
from retinanet.dataloader.anchor_generator import AnchorBoxGenerator
from retinanet.dataloader.preprocessing_pipeline import PreprocessingPipeline
from retinanet.dataloader.utils import compute_iou
class LabelEncoder:
def __init__(self, params):
self.input_shape = params.input.input_shape
self.encoder_params = params.encoder_params
self.anchors = AnchorBoxGenerator(
*self.input_shape,
params.architecture.feature_fusion.min_level,
params.architecture.feature_fusion.max_level,
params.anchor_params)
self.preprocessing_pipeline = PreprocessingPipeline(
self.input_shape, params.dataloader_params)
self._all_unmatched = -1 * tf.ones(
[self.anchors.boxes.get_shape().as_list()[0]], dtype=tf.int32)
self._min_level = params.architecture.feature_fusion.min_level
self._max_level = params.architecture.feature_fusion.max_level
self._params = params
def _match_anchor_boxes(self, anchor_boxes, gt_boxes):
if tf.shape(gt_boxes)[0] == 0:
return self._all_unmatched
iou_matrix = compute_iou(gt_boxes, anchor_boxes, pair_wise=True)
max_ious = tf.reduce_max(iou_matrix, axis=0)
matched_gt_idx = tf.argmax(iou_matrix, axis=0, output_type=tf.int32)
matches = tf.where(tf.greater(max_ious, self.encoder_params.match_iou),
matched_gt_idx, -1)
matches = tf.where(
tf.logical_and(
tf.greater_equal(max_ious, self.encoder_params.ignore_iou),
tf.greater(self.encoder_params.match_iou, max_ious)), -2,
matches)
best_matched_anchors = tf.argmax(iou_matrix,
axis=-1,
output_type=tf.int32)
best_matched_anchors_one_hot = tf.one_hot(
best_matched_anchors, depth=tf.shape(iou_matrix)[-1])
matched_anchors = tf.reduce_max(best_matched_anchors_one_hot, axis=0)
matched_anchors_gt_idx = tf.argmax(best_matched_anchors_one_hot,
axis=0,
output_type=tf.int32)
matches = tf.where(tf.cast(matched_anchors, dtype=tf.bool),
matched_anchors_gt_idx, matches)
return matches
def _compute_box_target(self, matched_gt_boxes, matches, eps=1e-8):
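        # Standard centre-size box regression targets (a reading of the math
        # below): centre offsets scaled by anchor width/height, plus log-ratios
        # of the sizes; boxes are assumed to be in [cx, cy, w, h] order.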
matched_gt_boxes = tf.maximum(matched_gt_boxes, eps)
box_target = tf.concat(
[
(matched_gt_boxes[:, :2] - self.anchors.boxes[:, :2]) /
self.anchors.boxes[:, 2:],
tf.math.log(
matched_gt_boxes[:, 2:] / self.anchors.boxes[:, 2:]),
],
axis=-1,
)
positive_mask = tf.expand_dims(tf.greater_equal(matches, 0), axis=-1)
positive_mask = tf.broadcast_to(positive_mask, tf.shape(box_target))
box_target = tf.where(positive_mask, box_target, 0.0)
if self.encoder_params.scale_box_targets:
box_target = box_target / tf.convert_to_tensor(
self.encoder_params.box_variance, dtype=tf.float32)
return box_target
@staticmethod
def _pad_labels(gt_boxes, cls_ids):
gt_boxes = tf.concat([tf.stack([tf.zeros(4), tf.zeros(4)]), gt_boxes],
axis=0)
cls_ids = tf.concat([
tf.squeeze(tf.stack([-2 * tf.ones(1), -1 * tf.ones(1)])), cls_ids
],
axis=0)
return gt_boxes, cls_ids
def encode_sample(self, sample):
image, gt_boxes, cls_ids = self.preprocessing_pipeline(sample)
matches = self._match_anchor_boxes(self.anchors.boxes, gt_boxes)
cls_ids = tf.cast(cls_ids, dtype=tf.float32)
gt_boxes, cls_ids = LabelEncoder._pad_labels(gt_boxes, cls_ids)
gt_boxes = tf.gather(gt_boxes, matches + 2)
cls_target = tf.gather(cls_ids, matches + 2)
box_target = self._compute_box_target(gt_boxes, matches)
iou_target = compute_iou(self.anchors.boxes, gt_boxes, pair_wise=False)
iou_target = tf.where(tf.greater(matches, -1), iou_target, -1.0)
boundaries = self.anchors.anchor_boundaries
targets = {'class-targets': {}, 'box-targets': {}}
if self._params.architecture.auxillary_head.use_auxillary_head:
targets['iou-targets'] = {}
# TODO(srihari): use pyramid levels for indexing
for level in range(self._min_level, self._max_level + 1):
i = level - 3
fh = tf.math.ceil(self.input_shape[0] / (2**(i + 3)))
fw = tf.math.ceil(self.input_shape[1] / (2**(i + 3)))
targets['class-targets'][str(i + 3)] = tf.reshape(
cls_target[boundaries[i]:boundaries[i + 1]],
shape=[fh, fw, self.anchors._num_anchors])
targets['box-targets'][str(i + 3)] = tf.reshape(
box_target[boundaries[i]:boundaries[i + 1]],
shape=[fh, fw, 4 * self.anchors._num_anchors])
if 'iou-targets' in targets:
targets['iou-targets'][str(i + 3)] = tf.reshape(
iou_target[boundaries[i]:boundaries[i + 1]],
shape=[fh, fw, self.anchors._num_anchors])
num_positives = tf.reduce_sum(
tf.cast(tf.greater(matches, -1), dtype=tf.float32))
targets['num-positives'] = num_positives
return image, targets

# File: ex4.py (repo: JasperStfun/OOP, license: Apache-2.0)
class DefenerVector:
def __init__(self, v):
self.__v = v
def __enter__(self):
self.__temp = self.__v[:]
return self.__temp
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_type is None:
self.__v[:] = self.__temp
return False
v1 = [1, 2, 3]
v2 = [1, 2]
try:
with DefenerVector(v1) as dv:
for i in range(len(dv)):
dv[i] += v2[i]
except Exception as e:
print(e)
print(v1)
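
# Expected behaviour (a sketch): v2 is shorter than v1, so the in-place update
# raises IndexError inside the `with` block; __exit__ then skips the copy-back,
# the except clause prints "list index out of range", and v1 is printed
# unchanged as [1, 2, 3].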

# File: tamcolors/tests/utils_tests/identifier_tests.py (repo: cmcmarrow/tamcolors, license: Apache-2.0)
# built-in libraries
import unittest.mock
from tempfile import TemporaryDirectory
from os.path import join
# tamcolors libraries
from tamcolors.utils import identifier
class IdentifierTests(unittest.TestCase):
def test_globals(self):
self.assertIsInstance(identifier.IDENTIFIER_FILE_NAME, str)
self.assertIsInstance(identifier.IDENTIFIER_SIZE, int)
def test_generate_identifier(self):
with TemporaryDirectory() as tmp_dir_name:
tmp_name = join(tmp_dir_name, "temp.id")
self.assertIsInstance(identifier.generate_identifier_bytes(tmp_name), bytes)
self.assertIsInstance(identifier.generate_identifier_bytes(tmp_name), bytes)
self.assertIsInstance(identifier.generate_identifier_bytes(tmp_name, 1000), bytes)
self.assertIsInstance(identifier.generate_identifier_bytes(tmp_name, 9999), bytes)
def test_get_identifier_bytes(self):
with TemporaryDirectory() as tmp_dir_name:
tmp_name = join(tmp_dir_name, "temp2.id")
tmp_id = identifier.get_identifier_bytes(tmp_name)
self.assertIsInstance(tmp_id, bytes)
self.assertEqual(len(tmp_id), identifier.IDENTIFIER_SIZE)
for _ in range(10):
self.assertEqual(tmp_id, identifier.get_identifier_bytes(tmp_name))
self.assertNotEqual(identifier.generate_identifier_bytes(tmp_name, identifier.IDENTIFIER_SIZE + 1000),
tmp_id)
| 41.333333 | 114 | 0.716398 | 169 | 1,488 | 6.023669 | 0.272189 | 0.061886 | 0.123772 | 0.151277 | 0.492141 | 0.492141 | 0.452849 | 0.452849 | 0.452849 | 0.302554 | 0 | 0.012723 | 0.207661 | 1,488 | 35 | 115 | 42.514286 | 0.850721 | 0.025538 | 0 | 0.16 | 0 | 0 | 0.010366 | 0 | 0 | 0 | 0 | 0 | 0.4 | 1 | 0.12 | false | 0 | 0.16 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |

# File: source.py (repo: FahimFBA/Invisible-Cloak-Using-Python, license: MIT)
# start importing some modules
# importing OpenCV
import cv2
# Using this module, we can process images and videos to identify objects, faces, or even human handwriting.
# importing NumPy
import numpy as np
# NumPy is usually imported under the np alias. NumPy is a Python library for working with arrays; it also has functions for linear algebra, Fourier transforms, and matrices.
# importing another essential module named time
import time
# The Python time module provides many ways of representing time in code, such as objects, numbers, and strings. It also provides functionality other than representing time, like waiting during code execution and measuring the efficiency of our code.
# I'll use a print function here. It's optional.
print("Hey! Have you ever heard about invisible cloak?")
print("What is an invisible cloak?")
print("""
You have watched invisible cloak in "Harry Potter" a lot, haven't you?
It's the same thing. How would I provide you that cloak?
Grab a red cloth first! I'll convert that cloth into an invisible cloak with my project!!!
""")
# starting the initial part
cap = cv2.VideoCapture(0) # It lets you create a video capture object which is helpful to capture videos through webcam and then you may perform desired operations on that video.
# I need to suspend execution for 1 second now; this pause is used to capture the still background image.
time.sleep(1)
background = 0 # background plot
# capturing the live frame
for i in range(30):
ret,background = cap.read()
# flipping the image
background = np.flip(background,axis=1)
while(cap.isOpened()):
ret, img = cap.read() # reading from the ongoing video
img = np.flip(img,axis=1)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) # Converting the image : from BGR to HSV
value = (35, 35)
    blurred = cv2.GaussianBlur(hsv, value, 0)  # note: `blurred` is computed but never used; the masks below are built from `hsv`
# configuration for the mask1
lower_red = np.array([0,120,70])
upper_red = np.array([10,255,255])
mask1 = cv2.inRange(hsv,lower_red,upper_red)
# configuration for the mask2
lower_red = np.array([170,120,70])
upper_red = np.array([180,255,255])
mask2 = cv2.inRange(hsv,lower_red,upper_red)
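    # Two ranges are needed because red wraps around OpenCV's hue axis
    # (H runs 0-179, and red sits near both ends).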
# The upper blocks of code (mask1 and mask2) can be replaced with some other code depending the color of your cloth which you would use as the invisible cloak
mask = mask1+mask2
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5,5),np.uint8)) # Morphological Transformations
img[np.where(mask==255)] = background[np.where(mask==255)]
cv2.imshow('Display',img) # display the image in the specified window
k = cv2.waitKey(10) # cv2. waitKey() is a keyboard binding function. The function waits for specified milliseconds for any keyboard event.
if k == 27:
break
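
# Cleanup sketch: release the webcam and close the preview window once the
# loop ends (the Esc key, code 27, breaks out of the loop above).
cap.release()
cv2.destroyAllWindows()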

# File: src/user/role_service.py (repo: fugwenna/bunkbot, license: MIT)
from typing import List
from discord import Role, Color
from ..bunkbot import BunkBot
from ..channel.channel_service import ChannelService
from ..core.bunk_user import BunkUser
from ..core.service import Service
from ..db.database_service import DatabaseService
class RoleService(Service):
"""
Service responsible for handling role references
and removing/adding new roles
Parameters
-----------
bot: Bunkbot
Super class instance of the bot
database: DatabaseService
Super class instance of the database service
channels: ChannelService
Access to the server channels and other channel functions
"""
def __init__(self, bot: BunkBot, database: DatabaseService, channels: ChannelService):
super().__init__(bot, database)
self.admin: Role = None
self.channels: ChannelService = channels
def get_role(self, role_name: str) -> Role:
"""
Get a role directly from the server by name
Parameters
-----------
role_name: str
Name of the role to retrieve from the server
"""
return next((role for role in self.server.roles if role.name == role_name), None)
def get_role_by_pattern(self, pattern: str, roles: List[Role] = None) -> Role:
"""
Get a role directly from the server with a pattern "contains"
Parameters
-----------
pattern: str
Pattern which will be used to fuzzy search a role name
roles: List[Role] (optional)
Optional list of roles to search if the default server is not used
"""
if roles is None:
roles = self.server.roles
return next((role for role in roles if pattern in role.name), None)
async def rm_role(self, role_name: str, user: BunkUser = None) -> None:
"""
Non-event driven - directly remove a role when another service has deemed appropriate
Parameters
-----------
role_name: str
Name of the role to remove
user: Bunkuser (optional)
When supplied, the role will be removed from a user rather than the server list
"""
if user is not None:
            roles = [r for r in user.member.roles if r.name != role_name]
await user.set_roles(roles)
else:
roles: List[Role] = [r for r in self.bot.server.roles.copy() if r.name == role_name]
for role in roles:
ref: Role = role
await ref.delete()
async def rm_roles_from_user(self, role_names: List[str], user: BunkUser) -> None:
"""
Non-event driven - directly remove a role when another service has deemed appropriate
Parameters
-----------
role_names: List[str]
List of the roles to remove
user: Bunkuser
User from which the roles will be removed from a user
"""
roles: List[Role] = user.member.roles.copy()
new_roles: List[Role] = [r for r in roles if r.name not in role_names]
await user.set_roles(new_roles)
async def add_role_to_user(self, role_name: str, user: BunkUser, color: Color = None) -> Role:
"""
Non-event driven - directly add a role when another service has deemed appropriate
Parameters
-----------
role_name: str
Name of the role to add
user: BunkUser
User which to add the role
color: Color (optional)
Optionally add a color to the role
Returns
--------
Role added to the user
"""
roles: List[Role] = await self._get_user_roles_to_set(user.member.roles.copy(), role_name, user, color)
await user.set_roles(roles)
return self.get_role(role_name)
async def add_roles_to_user(self, role_names: List[str], user: BunkUser, color: Color = None) -> List[Role]:
"""
Non-event driven - directly add multiple roles when another service has deemed appropriate
Parameters
-----------
role_names: List[str]
List of roles to add to the user
user: BunkUser
User which to add the roles
color: Color (optional)
Optionally add a color to the roles
Returns
--------
Roles added to the user
"""
roles = user.member.roles.copy()
for role_name in role_names:
roles = await self._get_user_roles_to_set(roles, role_name, user, color)
await user.set_roles(roles)
return roles
async def _get_user_roles_to_set(self, current_roles: List[Role], role_name: str, user: BunkUser, color: Color = None) -> List[Role]:
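        # Ensure the named role exists on the server, creating it (optionally
        # colored) on demand, then append it to the user's working copy of
        # roles so they can be set in one call.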
if not user.has_role(role_name):
role = self.get_role(role_name)
if role is None:
if color is None:
role: Role = await self.bot.server.create_role(name=role_name)
else:
role: Role = await self.bot.server.create_role(name=role_name, color=color)
current_roles.append(role)
return current_roles
async def prune_orphaned_roles(self, pattern: str = None) -> None:
"""
When updating users/roles check for roles which are no longer being used
Parameters
-----------
pattern: str (optional)
Only pruned orphaned roles that contain a specific pattern in the name
"""
        if self.bot.server is not None:
empty_color_roles: List[str] = []
if pattern is None:
empty_color_roles = [r.name for r in self.bot.server.roles if len(r.members) == 0]
else:
empty_color_roles = [r.name for r in self.bot.server.roles if pattern in r.name and len(r.members) == 0]
for orphan_role in empty_color_roles:
await self.channels.log_info("Removing role `{0}`".format(orphan_role))
await self.rm_role(orphan_role)
async def get_role_containing(self, pattern: str, user: BunkUser) -> Role:
"""
Get a user role that contains a given pattern in the name
Parameters
-----------
pattern: str
Pattern which the role name must contain
user: BunkUser
User which to find the role
"""
role = next((r for r in user.member.roles if pattern in r.name.lower()), None)
return role
async def get_lowest_index_for(self, pattern: str) -> int:
"""
Get the server role index of a given role name (pattern)
Parameters
-----------
pattern: str
            Pattern used to locate a role by its index
"""
roles: List[int] = [r.position for r in self.bot.server.roles if pattern in r.name]
roles.sort()
if len(roles) == 0:
return 1
return roles[:1][0]

# File: doc/buildbot/sample_slave.py (repo: elhigu/pocl, license: MIT)
from buildbot.buildslave import BuildSlave
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes import filter
from buildbot.config import BuilderConfig
from buildbot.schedulers.forcesched import *
from poclfactory import createPoclFactory
# override the 'sample_slave' name with a descriptive function name
# Note: when finished renaming, the string "sample" should not appear anywhere in this file!
#
# c - the global buildbot configuration data structure
# common_branch - this is the branch that the slave should build.
# typically 'master', but during release it will be changed
# to the release branch
def sample_slave( c, common_branch ):
#create a new slave in the master's database
c['slaves'].append(
BuildSlave(
"sample_slave_name",
"password" ))
    # launch the builders listed in "builderNames" whenever the change poller notices a change to github pocl
c['schedulers'].append(
SingleBranchScheduler(name="name for scheduler, not sure where this is used",
change_filter=filter.ChangeFilter(branch=common_branch),
treeStableTimer=60,
builderNames=[
"sample_builder_name - this is the name that appears on the webpage"] ))
#create one set of steps to build pocl. See poclfactory.py for details
# on how to configure it
sample_factory = createPoclFactory()
#register your build to the master
c['builders'].append(
BuilderConfig(
name = "sample_builder_name - this is the name that appears on the webpage",
slavenames=["sample_slave_name"],
factory = sample_factory ))
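
# Typical wiring from master.cfg (a sketch): after renaming, call the function
# with the global config and the branch to track, e.g. sample_slave(c, 'master').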

# File: pqr/__init__.py (repo: pittquantum/PittQuantumRepository, license: MIT)
from flask import Flask, url_for, request, session, abort
import os
import re
import base64
pqr = Flask(__name__)
# Determines the destination of the build. Only usefull if you're using
# Frozen-Flask
pqr.config['FREEZER_DESTINATION'] = os.path.dirname(os.path.abspath(__file__)) + '/../build'
# Function to easily find your assets
# In your template use <link rel=stylesheet href="{{ static('filename') }}">
pqr.jinja_env.globals['static'] = (
lambda filename: url_for('static', filename=filename)
)
##########################################################################
# Form CSRF protection functions
@pqr.before_request
def csrf_protect():
if request.method == "POST":
token = session.pop('_csrf_token', None)
if not token or token != request.form.get('_csrf_token'):
abort(403)
def generate_csrf_token():
if '_csrf_token' not in session:
session['_csrf_token'] = some_random_string()
return session['_csrf_token']
def some_random_string():
return base64.urlsafe_b64encode(os.urandom(32))
pqr.jinja_env.globals['csrf_token'] = generate_csrf_token
##########################################################################
##########################################################################
# Custom Filters
# Auto Subscript any sequence of digits
def subnumbers_filter(input):
return re.sub("\d+", lambda val: "<sub>" + val.group(0) + "</sub>", input)
# Superscript text between ~ characters, replacing the ~ characters with spaces
def supnumbers_iupac_filter(input):
return re.sub("~(.*?)~", lambda val: "<sup>" + val.group(0).replace('~', ' ') + "</sup>", input)
# Greek String Replacement
def replace_greek_filter(input):
choice = ""
try:
choice = re.findall(r"(Alpha|Beta|Gamma)", input)[0]
except IndexError:
pass
if len(re.findall("(Alpha|Beta|Gamma)[^\w\s]", input)) > 0:
return input.replace(choice, '&{};'.format(choice.lower()))
else:
return input
#return re.sub("(Alpha|Beta|Gamma)[^\w\s]", lambda val: "&{};{}".format(choice.lower(), val.group(0)[-1]), input, flags=re.I)
# Adding the filters to the environment
pqr.jinja_env.filters['subnumbers'] = subnumbers_filter
pqr.jinja_env.filters['supnumbersiupac'] = supnumbers_iupac_filter
pqr.jinja_env.filters['replacegreek'] = replace_greek_filter
assert pqr.jinja_env.filters['subnumbers']
assert pqr.jinja_env.filters['supnumbersiupac']
assert pqr.jinja_env.filters['replacegreek']
##########################################################################
from pqr import views

# File: application/core/common_utils.py (repo: solomonxie/lambda-application-demo, license: MIT)
import json
from urllib import request
def get_ip():
info = None
try:
        with request.urlopen("http://ip-api.com/json/") as resp:
            info = json.loads(resp.read())
except Exception as e:
print(e)
return info
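
if __name__ == "__main__":
    # Minimal usage sketch: ip-api.com returns a JSON document with fields
    # such as "query" (the caller's IP) and "country"; get_ip() returns None
    # when the request fails.
    print(get_ip())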

# File: src/boxes/datatypes/calc.py (repo: Peilonrayz/alphabet_learner, license: MIT)
import collections.abc
from typing import Union, Sequence
from .primitives import Number
from .units import Unit, UnitTypes
_Value = Union[Unit, Number, float, int]
class Calc:
type: UnitTypes
@classmethod
def build(
cls,
values: Union[_Value, Sequence[_Value]],
operators: Sequence[str] = [],
):
_values: Sequence[_Value] = (
values
if isinstance(values, collections.abc.Sequence) else
[values]
)
if len(_values) != len(operators) + 1:
raise ValueError("There must be one less operator than values.")
calc = CalcOperators(
[
CalcValue(value)
if not isinstance(value, (float, int)) else
CalcValue(Number(value))
for value in _values
],
operators[:],
)
if len(operators) == 0:
return calc._values[0]
return calc
class CalcValue(Calc):
_value: Union[Unit, Number]
def __init__(self, value: Union[Unit, Number]):
self._value = value
if isinstance(value, Unit):
self.type = value.TYPE
else:
self.type = UnitTypes.NONE
def __str__(self):
return str(self._value)
def __repr__(self):
return f"CalcValue({self._value!r})"
class CalcOperators(Calc):
_values: Sequence[Calc]
_operators: Sequence[str]
def __init__(self, values: Sequence[Calc], operators: Sequence[str]):
if len(values) != len(operators) + 1:
raise ValueError("There must be one less operator than values.")
types = {value.type for value in values if value.type is not UnitTypes.NONE}
if 1 < len(types):
raise ValueError(f"Cannot mix types {types}")
self._values = values
self._operators = operators
def __str__(self):
values = [None] * (len(self._values) * 2 - 1)
values[0::2] = self._values
values[1::2] = self._operators
return " ".join(str(v) for v in values)
def __repr__(self):
return f"CalcOperators({self._values!r}, {self._operators!r})"

# File: dfrus/machine_code_match.py (repo: dfint/dfrus, license: MIT)
from .binio import from_dword
from .opcodes import Reg, mov_reg_imm, mov_acc_mem, mov_rm_reg, x0f_movups, Prefix
def match_mov_reg_imm32(b: bytes, reg: Reg, imm: int) -> bool:
assert len(b) == 5, b
    return b[0] == (mov_reg_imm | 8 | int(reg)) and from_dword(b[1:]) == imm
def get_start(s):
i = None
if s[-1] & 0xfe == mov_acc_mem:
i = 1
elif s[-2] & 0xf8 == mov_rm_reg and s[-1] & 0xc7 == 0x05:
i = 2
elif s[-3] == 0x0f and s[-2] & 0xfe == x0f_movups and s[-1] & 0xc7 == 0x05:
i = 3
return i # prefix is not allowed here
assert i is not None
if s[-1 - i] == Prefix.operand_size:
i += 1
return i
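
# Worked example (a sketch): `mov eax, 0x12345678` assembles to the five bytes
# b8 78 56 34 12. Assuming the opcodes module defines mov_reg_imm == 0xB0 and
# Reg.eax == 0, match_mov_reg_imm32(bytes.fromhex("b878563412"), Reg.eax,
# 0x12345678) returns True: 0xB0 | 8 | 0 == 0xB8, and the trailing dword is
# the little-endian immediate.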

# File: tests/test_base_api.py (repo: yhegen/cumulocity-python-api, license: Apache-2.0)
# Copyright (c) 2020 Software AG,
# Darmstadt, Germany and/or Software AG USA Inc., Reston, VA, USA,
# and/or its subsidiaries and/or its affiliates and/or their licensors.
# Use, reproduction, transfer, publication or disclosure is prohibited except
# as specifically provided for in your License Agreement with Software AG.
# pylint: disable=protected-access, redefined-outer-name
import base64
from unittest.mock import patch
import json
import pytest
import requests
import responses
from c8y_api._base_api import CumulocityRestApi # noqa (protected-access)
@pytest.fixture(scope='function')
def mock_c8y() -> CumulocityRestApi:
"""Provide mock CumulocityRestApi instance."""
return CumulocityRestApi(
base_url='http://base.com',
tenant_id='t12345',
username='username',
password='password',
application_key='application_key')
@pytest.fixture(scope='module')
def httpbin_basic() -> CumulocityRestApi:
"""Provide mock CumulocityRestApi instance for httpbin with basic auth."""
return CumulocityRestApi(
base_url='https://httpbin.org',
tenant_id='t12345',
username='username',
password='password'
)
def assert_auth_header(c8y, headers):
"""Assert that the given auth header is correctly formatted."""
    auth_header = headers['Authorization'][len('Basic '):]  # slice off the prefix; str.lstrip('Basic ') would strip a character set, not the prefix
expected = f'{c8y.tenant_id}/{c8y.username}:{c8y.password}'
assert base64.b64decode(auth_header) == expected.encode('utf-8')
def assert_accept_header(headers, accept='application/json'):
"""Assert that the accept header matches the expectation."""
assert headers['Accept'] == accept
def assert_content_header(headers, content_type='application/json'):
"""Assert that the content-type header matches the expectation."""
assert headers['Content-Type'] == content_type
def assert_application_key_header(c8y, headers):
"""Assert that the application key header matches the expectation."""
assert headers[c8y.HEADER_APPLICATION_KEY] == c8y.application_key
@pytest.mark.parametrize('args, expected', [
({'accept': 'application/json'}, {'Accept': 'application/json'}),
({'content_tYPe': 'content/TYPE'}, {'Content-Type': 'content/TYPE'}),
({'some': 'thing', 'mORE_Of_this': 'same'}, {'Some': 'thing', 'More-Of-This': 'same'}),
({'empty': None, 'accept': 'accepted'}, {'Accept': 'accepted'}),
({'empty1': None, 'empty2': None}, None),
({'accept': ''}, {'Accept': None}),
])
def test_prepare_headers(args, expected):
"""Verify header preparation."""
assert CumulocityRestApi._prepare_headers(**args) == expected
@pytest.mark.parametrize('method', ['get', 'post', 'put'])
def test_remove_accept_header(mock_c8y: CumulocityRestApi, method):
"""Verify that the default accept header can be unset/removed."""
with responses.RequestsMock() as rsps:
rsps.add(method=method.upper(),
url=mock_c8y.base_url + '/resource',
status=200,
json={})
kwargs = {'resource': '/resource', 'accept': ''}
if method.startswith('p'):
kwargs['json'] = {}
func = getattr(mock_c8y, method)
func(**kwargs)
assert 'Accept' not in rsps.calls[0].request.headers
@pytest.mark.online
@pytest.mark.parametrize('method', ['get', 'post', 'put'])
def test_remove_accept_header_online(httpbin_basic: CumulocityRestApi, method):
"""Verify that the unset accept header are actually not sent."""
kwargs = {'resource': '/anything', 'accept': ''}
if method.startswith('p'):
kwargs['json'] = {}
func = getattr(httpbin_basic, method)
response = func(**kwargs)
assert 'Accept' not in response['headers']
@pytest.mark.parametrize('method', ['get', 'post', 'put', 'delete'])
def test_no_application_key_header(mock_c8y: CumulocityRestApi, method):
"""Verify that the application key header is not present by default."""
    c8y = CumulocityRestApi(mock_c8y.base_url, mock_c8y.tenant_id, mock_c8y.username, mock_c8y.password)
with responses.RequestsMock() as rsps:
rsps.add(method=method.upper(),
url=mock_c8y.base_url + '/resource',
status=200,
json={'result': True})
        kwargs = {'resource': '/resource'}
        if method.startswith('p'):
            kwargs['json'] = {}
        func = getattr(c8y, method)
        func(**kwargs)
request_headers = rsps.calls[0].request.headers
assert CumulocityRestApi.HEADER_APPLICATION_KEY not in request_headers
@pytest.mark.online
def test_basic_auth_get(httpbin_basic: CumulocityRestApi):
"""Verify that the basic auth headers are added for the REST requests."""
c8y = httpbin_basic
# first we verify that the auth is there for GET requests
response = c8y.get('/anything')
assert_auth_header(c8y, response['headers'])
def test_post_defaults(mock_c8y: CumulocityRestApi):
"""Verify the basic functionality of the POST requests."""
with responses.RequestsMock() as rsps:
rsps.add(method=responses.POST,
url=mock_c8y.base_url + '/resource',
status=201,
json={'result': True})
response = mock_c8y.post('/resource', json={'request': True})
request_body = rsps.calls[0].request.body
request_headers = rsps.calls[0].request.headers
assert json.loads(request_body)['request']
assert_auth_header(mock_c8y, request_headers)
assert_accept_header(request_headers)
assert_content_header(request_headers)
assert_application_key_header(mock_c8y, request_headers)
assert response['result']
def test_post_explicits(mock_c8y: CumulocityRestApi):
"""Verify the basic functionality of the POST requests."""
with responses.RequestsMock() as rsps:
rsps.add(method=responses.POST,
url=mock_c8y.base_url + '/resource',
status=201,
json={'result': True})
response = mock_c8y.post('/resource', accept='custom/accept',
content_type='custom/content', json={'request': True})
request_body = rsps.calls[0].request.body
request_headers = rsps.calls[0].request.headers
assert json.loads(request_body)['request']
assert_auth_header(mock_c8y, request_headers)
assert_accept_header(request_headers, 'custom/accept')
assert_content_header(request_headers, 'custom/content')
assert_application_key_header(mock_c8y, request_headers)
assert response['result']
@pytest.mark.online
def test_get_default(httpbin_basic: CumulocityRestApi):
"""Verify that the get function with default parameters works as expected."""
c8y = httpbin_basic
# (1) with implicit parameters given and all default
response = c8y.get(resource='/anything/resource?p1=v1&p2=v2')
# auth header must always be present
assert response['headers']['Authorization']
# by default we accept JSON
assert response['headers']['Accept'] == 'application/json'
# inline parameters recognized
assert response['args']['p1']
assert response['args']['p2']
@pytest.mark.online
def test_get_explicit(httpbin_basic: CumulocityRestApi):
"""Verify that the get function with explicit parameters works as expected."""
c8y = httpbin_basic
response = c8y.get(resource='/anything/resource', params={'p1': 'v1', 'p2': 3}, accept='something/custom')
# auth header must always be present
assert response['headers']['Authorization']
# expecting our custom accept header
assert response['headers']['Accept'] == 'something/custom'
# explicit parameters recognized
assert response['args']['p1']
assert response['args']['p2']
def test_get_ordered_response():
"""Verify that the response JSON can be ordered on request."""
c8y = CumulocityRestApi(base_url='', tenant_id='', username='', password='')
with patch('requests.Session.get') as get_mock:
mock_response = requests.Response()
mock_response.status_code = 200
mock_response._content = b'{"list": [1, 2, 3, 4, 5], "x": "xxx", "m": "mmm", "c": "ccc"}'
get_mock.return_value = mock_response
response = c8y.get('any', ordered=True)
elements = list(response.items())
# first element is a list
assert elements[0][0] == 'list'
assert elements[0][1] == [1, 2, 3, 4, 5]
# 2nd to 4th are some elements in order
assert (elements[1][0], elements[2][0], elements[3][0]) == ('x', 'm', 'c')
def test_get_404():
"""Verify that a 404 results in a KeyError and a message naming the missing resource."""
c8y = CumulocityRestApi(base_url='', tenant_id='', username='', password='')
with patch('requests.Session.get') as get_mock:
mock_response = requests.Response()
mock_response.status_code = 404
get_mock.return_value = mock_response
with pytest.raises(KeyError) as error:
c8y.get('some/key')
assert 'some/key' in str(error)
def test_delete_defaults(mock_c8y: CumulocityRestApi):
"""Verify the basic funtionality of the DELETE requests."""
with responses.RequestsMock() as rsps:
rsps.add(method=responses.DELETE,
url=mock_c8y.base_url + '/resource',
status=204)
mock_c8y.delete('/resource')
request_headers = rsps.calls[0].request.headers
assert_auth_header(mock_c8y, request_headers)
assert_application_key_header(mock_c8y, request_headers)
def test_empty_response(mock_c8y: CumulocityRestApi):
"""Verify that an empty GET/POST/PUT responses doesn't break the code."""
with responses.RequestsMock() as rsps:
rsps.add(method=responses.GET,
url=mock_c8y.base_url + '/resource',
status=200)
mock_c8y.get('/resource')
with responses.RequestsMock() as rsps:
rsps.add(method=responses.POST,
url=mock_c8y.base_url + '/resource',
status=201)
mock_c8y.post('/resource', json={})
with responses.RequestsMock() as rsps:
rsps.add(method=responses.PUT,
url=mock_c8y.base_url + '/resource',
status=200)
mock_c8y.put('/resource', json={})

# File: src/xsdtools/jsonschema_generator.py (repo: pietrodelugas/xsdtools, license: BSD-3-Clause)
#
# Copyright (c) 2020, Quantum Espresso Foundation and SISSA.
# Internazionale Superiore di Studi Avanzati). All rights reserved.
# This file is distributed under the terms of the BSD 3-Clause license.
# See the file 'LICENSE' in the root directory of the present distribution,
# or https://opensource.org/licenses/BSD-3-Clause
#
from .abstract_generator import AbstractGenerator
class JSONSchemaGenerator(AbstractGenerator):
"""
JSON Schema generic generator for XSD schemas.
"""
formal_language = 'JSON Schema'
default_paths = ['templates/json-schema/']
builtin_types = {
'string': 'string',
'boolean': 'boolean',
'float': 'number',
'double': 'number',
'integer': 'integer',
'unsignedByte': 'integer',
'nonNegativeInteger': 'integer',
'positiveInteger': 'integer',
}

# File: tests/unittest_db.py (repo: zaanposni/umfrageBot, license: MIT)
import unittest
from pathlib import Path
import os
import shutil
import time
from src.bt_utils.handle_sqlite import DatabaseHandler
from src.bt_utils.get_content import content_dir
from sqlite3 import IntegrityError
class TestClass(unittest.TestCase):
def testDB(self):
if os.path.exists(content_dir):
shutil.rmtree(content_dir, ignore_errors=True)
if not os.path.exists(content_dir):
os.makedirs(content_dir)
else:
try:
os.remove(os.path.join(content_dir, "bundestag.db"))
except OSError:
pass
self.db = DatabaseHandler()
self.roles = ["role1", "role2"]
# creates basic table structures if not already present
print("Create database and test if creation was successful")
self.db.create_structure(self.roles)
db_path = Path(os.path.join(content_dir, "bundestag.db"))
self.assertTrue(db_path.is_file())
print("Check if database is empty")
users = self.db.get_all_users()
self.assertEqual(users, [])
print("Add user to database and check if he exists.")
self.db.add_user(123, self.roles)
user = self.db.get_specific_user(123)
self.assertEqual(user, (123, 0, 0))
print("Add reaction to user and check if it exists.")
self.db.add_reaction(123, "role1")
user = self.db.get_specific_user(123)
self.assertEqual(user, (123, 1, 0))
print("Remove reaction and check if it does not exist anymore.")
self.db.remove_reaction(123, "role1")
user = self.db.get_specific_user(123)
self.assertEqual(user, (123, 0, 0))
print("Add another user and check if select all users works.")
self.db.add_user(124, self.roles)
users = self.db.get_all_users()
self.assertEqual(users, [(123, 0, 0), (124, 0, 0)])
print("Add another user with invalid id and check if it still get created.")
with self.assertRaises(IntegrityError):
self.db.add_user(124, self.roles)
users = self.db.get_all_users()
self.assertEqual(users, [(123, 0, 0), (124, 0, 0)])
print("Add another column and check if it gets applied correctly")
self.roles = ["role1", "role2", "role3"]
self.db.update_columns(self.roles)
users = self.db.get_all_users()
self.assertEqual(users, [(123, 0, 0, 0), (124, 0, 0, 0)])
print("Closing connection")
del self.db
if __name__ == '__main__':
unittest.main()

# File: mxnet_load_model.py (repo: whn09/mxnet-ssd, license: MIT)
# load model and predict
import mxnet as mx
import numpy as np
# define test data
batch_size = 1
num_batch = 1
filepath = 'frame-1.jpg'
DEFAULT_INPUT_SHAPE = 300
# load model
sym, arg_params, aux_params = mx.model.load_checkpoint("model/deploy_model_algo_1", 0) # load with net name and epoch num
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), data_names=["data"], label_names=["cls_prob"])
print('data_names:', mod.data_names)
print('output_names:', mod.output_names)
#print('data_shapes:', mod.data_shapes)
#print('label_shapes:', mod.label_shapes)
#print('output_shapes:', mod.output_shapes)
mod.bind(data_shapes=[("data", (1, 3, DEFAULT_INPUT_SHAPE, DEFAULT_INPUT_SHAPE))], for_training=False)
mod.set_params(arg_params, aux_params) # , allow_missing=True
import cv2
img = cv2.cvtColor(cv2.imread(filepath), cv2.COLOR_BGR2RGB)
print(img.shape)
img = cv2.resize(img, (DEFAULT_INPUT_SHAPE, DEFAULT_INPUT_SHAPE))
img = np.swapaxes(img, 0, 2)
img = np.swapaxes(img, 1, 2)
img = img[np.newaxis, :]
print(img.shape)
# # predict
# eval_data = np.array([img])
# eval_label = np.zeros(len(eval_data)) # just need to be the same length, empty is ok
# eval_iter = mx.io.NDArrayIter(eval_data, eval_label, batch_size, shuffle=False)
# print('eval_iter.provide_data:', eval_iter.provide_data)
# print('eval_iter.provide_label:', eval_iter.provide_label)
# predict_stress = mod.predict(eval_iter, num_batch)
# print(predict_stress) # you can transfer to numpy array
# forward
from collections import namedtuple
Batch = namedtuple('Batch', ['data'])
mod.forward(Batch([mx.nd.array(img)]))
prob = mod.get_outputs()[0].asnumpy()
prob = np.squeeze(prob)
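# For the standard mxnet-ssd deploy symbol, each row of `prob` is assumed to be
# [class_id, confidence, xmin, ymin, xmax, ymax] with normalized corner
# coordinates and class_id == -1 marking padded slots (not verified against
# this particular model).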
# Grab the first four detections and convert them to a plain Python list of lists
results = [prob[i].tolist() for i in range(4)]
print(results)

# File: dataMapper.py (repo: cbrandl/csv_costanalyser, license: MIT)
class dataMapper:
def __init__(self, data):
self.__data = data
self.__structure = self.getDataStructure()
def getDataStructure(self):
headings = self.__data[0]
structure = {}
for key in headings:
structure[key.lower()] = ''
return structure
def map(self):
dataSet = []
for dataRecord in self.__data[1:]:
item = {}
for index, key in enumerate(self.__structure):
item[key] = dataRecord[index]
dataSet.append(item)
return dataSet
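# Added usage sketch (not in the original module); the sample rows below are
# made up purely for illustration.
if __name__ == '__main__':
	rows = [['Category', 'Cost'], ['travel', '120.50'], ['food', '33.10']]
	print(dataMapper(rows).map())
	# -> [{'category': 'travel', 'cost': '120.50'}, {'category': 'food', 'cost': '33.10'}]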
| 27.47619 | 58 | 0.551127 | 57 | 577 | 5.333333 | 0.403509 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005319 | 0.348354 | 577 | 20 | 59 | 28.85 | 0.803191 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6337ba6bb736b172e7ae3a784113684d1641780 | 5,320 | py | Python | STResNet/plots/error_plots.py | vtsuperdarn/deep_leaning_on_GSP_TEC | f5989d1742be9c02edbcab37433f468cb2c5f771 | [
"MIT"
] | 9 | 2018-09-17T02:11:26.000Z | 2020-12-16T12:28:35.000Z | STResNet/plots/error_plots.py | vtsuperdarn/deep_leaning_on_GSP_TEC | f5989d1742be9c02edbcab37433f468cb2c5f771 | [
"MIT"
] | null | null | null | STResNet/plots/error_plots.py | vtsuperdarn/deep_leaning_on_GSP_TEC | f5989d1742be9c02edbcab37433f468cb2c5f771 | [
"MIT"
] | 6 | 2018-07-23T13:37:10.000Z | 2022-01-19T17:51:19.000Z | import datetime
import pandas
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter  # added: DateFormatter is used below but was never imported
import os
import re
import glob
import numpy as np  # added: np is used throughout but was never imported
amean_err = []
astddev_err = []
amin_err = []
amax_err = []
rmean_err = []
rstddev_err = []
rmin_err = []
rmax_err = []
# load the true and predicted TEC maps and compute the min/max, mean and stddev errors for both the absolute and relative differences
for i in range(32):
#print i
path = "predicted_tec_files/{}_pred_*.npy".format(i)
for fnm in glob.glob(path):
pred = np.load(fnm).tolist()
pred = np.array(pred)
#print pred.shape
path = "predicted_tec_files/{}_y_*.npy".format(i)
for fnm in glob.glob(path):
truth = np.load(fnm).tolist()
truth = np.array(truth)
#print truth.shape
pred = np.squeeze(pred)
truth = np.squeeze(truth)
diff_absolute = abs(pred - truth)
diff_relative = abs((pred - truth)/truth)
#print diff.shape
    # flatten: reshape each difference map to one row per timestep
diff_absolute = np.reshape(diff_absolute, (32,-1))
diff_relative = np.reshape(diff_relative, (32,-1))
#print diff.shape
amean_err += np.mean(diff_absolute, axis=1).tolist()
astddev_err += np.std(diff_absolute, axis=1).tolist()
amin_err += np.min(diff_absolute, axis=1).tolist()
amax_err += np.max(diff_absolute,axis=1).tolist()
rmean_err += np.mean(diff_relative, axis=1).tolist()
rstddev_err += np.std(diff_relative, axis=1).tolist()
rmin_err += np.min(diff_relative, axis=1).tolist()
rmax_err += np.max(diff_relative,axis=1).tolist()
# drop the first 168 entries so the plots cover one full daily cycle
amean_err = amean_err[168:]
astddev_err = astddev_err[168:]
amin_err = amin_err[168:]
amax_err = amax_err[168:]
rmean_err = rmean_err[168:]
rstddev_err = rstddev_err[168:]
rmin_err = rmin_err[168:]
rmax_err = rmax_err[168:]
amean_err = np.array(amean_err)
astddev_err = np.array(astddev_err)
amin_err = np.array(amin_err)
amax_err = np.array(amax_err)
print(amean_err.shape)
print(astddev_err.shape)
print(amin_err.shape)
print(amax_err.shape)
rmean_err = np.array(rmean_err)
rstddev_err = np.array(rstddev_err)
rmin_err = np.array(rmin_err)
rmax_err = np.array(rmax_err)
print(rmean_err.shape)
print(rstddev_err.shape)
print(rmin_err.shape)
print(rmax_err.shape)
#plotting the absolute error plots
sns.set_style("whitegrid")
sns.set_context("poster")
f, axArr = plt.subplots(5, sharex=True, figsize=(20, 20))
xlim1 = amean_err.shape[0]
dates = []
stdate = datetime.datetime(2015, 1, 12, 0, 5)
dummy = datetime.datetime(2015, 1, 12, 0, 10)
tec_resolution = (dummy - stdate)
dates.append(stdate)
for i in range(1, 856):
dates.append(dates[i-1]+tec_resolution)
x_val = dates
print(len(x_val))
cl = sns.color_palette('bright', 4)
axArr[0].plot(x_val, amean_err, color=cl[0])
axArr[1].plot(x_val, astddev_err, color=cl[1])
axArr[2].plot(x_val, amin_err, color=cl[2])
axArr[3].plot(x_val, amax_err, color=cl[3])
axArr[4].plot(x_val, amean_err, color=cl[0], label='mean')
axArr[4].plot(x_val, astddev_err, color=cl[1], label='stddev')
axArr[0].set_ylabel("Mean", fontsize=14)
axArr[1].set_ylabel("Stddev", fontsize=14)
axArr[2].set_ylabel("Min", fontsize=14)
axArr[3].set_ylabel("Max", fontsize=14)
axArr[4].set_ylabel("Mean/Stddev", fontsize=14)
axArr[-1].set_xlabel("TIME", fontsize=14)
axArr[0].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[1].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[2].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[3].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[4].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[4].legend( bbox_to_anchor=(0., 1.02, 1., .102), loc=1, ncol=2, borderaxespad=0.1 )
f.savefig('error_plot_absolute.png', dpi=f.dpi, bbox_inches='tight')
#plotting the relative error plots
sns.set_style("whitegrid")
sns.set_context("poster")
f, axArr = plt.subplots(5, sharex=True, figsize=(20, 20))
xlim1 = rmean_err.shape[0]
dates = []
stdate = datetime.datetime(2015, 1, 12, 0, 5)
dummy = datetime.datetime(2015, 1, 12, 0, 10)
tec_resolution = (dummy - stdate)
dates.append(stdate)
for i in range(1, 856):
dates.append(dates[i-1]+tec_resolution)
x_val = dates
print(len(x_val))
cl = sns.color_palette('bright', 4)
axArr[0].plot(x_val, rmean_err, color=cl[0])
axArr[1].plot(x_val, rstddev_err, color=cl[1])
axArr[2].plot(x_val, rmin_err, color=cl[2])
axArr[3].plot(x_val, rmax_err, color=cl[3])
axArr[4].plot(x_val, rmean_err, color=cl[0], label='mean')
axArr[4].plot(x_val, rstddev_err, color=cl[1], label='stddev')
axArr[0].set_ylabel("Mean", fontsize=14)
axArr[1].set_ylabel("Stddev", fontsize=14)
axArr[2].set_ylabel("Min", fontsize=14)
axArr[3].set_ylabel("Max", fontsize=14)
axArr[4].set_ylabel("Mean/Stddev", fontsize=14)
axArr[-1].set_xlabel("TIME", fontsize=14)
axArr[0].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[1].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[2].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[3].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[4].get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
axArr[4].legend( bbox_to_anchor=(0., 1.02, 1., .102), loc=1, ncol=2, borderaxespad=0.1 )
f.savefig('error_plot_relative.png', dpi=f.dpi, bbox_inches='tight')
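# Added sketch (not in the original script): the absolute and relative blocks
# above are identical except for the input arrays, so a helper like this could
# remove the duplication; it assumes the same imports and data defined above.
def plot_error_stats(x_val, mean_err, stddev_err, min_err, max_err, fname):
    f, axArr = plt.subplots(5, sharex=True, figsize=(20, 20))
    cl = sns.color_palette('bright', 4)
    for ax, series, color in zip(axArr[:4], (mean_err, stddev_err, min_err, max_err), cl):
        ax.plot(x_val, series, color=color)
    axArr[4].plot(x_val, mean_err, color=cl[0], label='mean')
    axArr[4].plot(x_val, stddev_err, color=cl[1], label='stddev')
    for ax, label in zip(axArr, ('Mean', 'Stddev', 'Min', 'Max', 'Mean/Stddev')):
        ax.set_ylabel(label, fontsize=14)
        ax.get_xaxis().set_major_formatter(DateFormatter('%H:%M'))
    axArr[-1].set_xlabel('TIME', fontsize=14)
    axArr[4].legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=1, ncol=2, borderaxespad=0.1)
    f.savefig(fname, dpi=f.dpi, bbox_inches='tight')
# e.g. plot_error_stats(x_val, rmean_err, rstddev_err, rmin_err, rmax_err, 'error_plot_relative.png')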
| 31.111111 | 140 | 0.71015 | 892 | 5,320 | 4.044843 | 0.154709 | 0.022173 | 0.026608 | 0.044346 | 0.623614 | 0.572616 | 0.572616 | 0.558758 | 0.545455 | 0.473392 | 0 | 0.043219 | 0.117105 | 5,320 | 170 | 141 | 31.294118 | 0.724931 | 0.065977 | 0 | 0.40625 | 0 | 0 | 0.059084 | 0.02198 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.054688 | 0 | 0.054688 | 0.078125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6382fdd07fdfdca681e712305a912e00b66a929 | 1,262 | py | Python | src/ipdasite.services/ipdasite/services/interfaces/curator.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | null | null | null | src/ipdasite.services/ipdasite/services/interfaces/curator.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | 5 | 2021-03-19T21:41:19.000Z | 2022-02-11T14:55:14.000Z | src/ipdasite.services/ipdasite/services/interfaces/curator.py | NASA-PDS/planetarydata.org | 16731a251c22408b433117f7f01e29d004f11467 | [
"Apache-2.0"
] | null | null | null | # encoding: utf-8
# Copyright 2011 California Institute of Technology. ALL RIGHTS
# RESERVED. U.S. Government Sponsorship acknowledged.
'''Curator: interface'''
from zope.interface import Interface
from zope import schema
from ipdasite.services import ProjectMessageFactory as _
class ICurator(Interface):
'''A person and agency that is responsible for a service.'''
title = schema.TextLine(
title=_(u'Name'),
description=_(u'Name of this curator.'),
required=True,
)
description = schema.Text(
title=_(u'Description'),
description=_(u'A short summary of this curator, used in free-text searches.'),
required=False,
)
contactName = schema.TextLine(
title=_(u'Contact Name'),
description=_(u'Name of a person who curates one or more services.'),
required=False,
)
emailAddress = schema.TextLine(
title=_(u'Email Address'),
description=_(u'Contact address for a person or workgroup that curates services.'),
required=False,
)
telephone = schema.TextLine(
title=_(u'Telephone'),
description=_(u'Public telephone number in international format in order to contact this curator.'),
required=False,
)
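# Added sketch, not part of the original interface module: a minimal,
# hypothetical class declaring that it provides ICurator; it assumes only the
# standard zope.interface API and the schema fields defined above.
from zope.interface import implementer
@implementer(ICurator)
class ExampleCurator(object):
    '''Hypothetical curator used purely for illustration.'''
    title = u'Example Curator'
    description = u'Illustrates the ICurator schema.'
    contactName = u'Jane Doe'
    emailAddress = u'curator@example.com'
    telephone = u'+1 555 0100'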
| 33.210526 | 108 | 0.669572 | 146 | 1,262 | 5.712329 | 0.486301 | 0.035971 | 0.091127 | 0.095923 | 0.052758 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005149 | 0.230586 | 1,262 | 37 | 109 | 34.108108 | 0.853759 | 0.161648 | 0 | 0.137931 | 0 | 0 | 0.311005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.103448 | 0 | 0.310345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c63e00866b579ae084343dd771e2b18a8af736d6 | 867 | py | Python | TelloStuff/Tests/Tello.py | svg94/Drone_Prototype-dirty-implementation- | 53ea429714beff6966c2b9c82e0c96d53baca66c | [
"MIT"
] | null | null | null | TelloStuff/Tests/Tello.py | svg94/Drone_Prototype-dirty-implementation- | 53ea429714beff6966c2b9c82e0c96d53baca66c | [
"MIT"
] | null | null | null | TelloStuff/Tests/Tello.py | svg94/Drone_Prototype-dirty-implementation- | 53ea429714beff6966c2b9c82e0c96d53baca66c | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from TelloSDKPy.djitellopy.tello import Tello
import cv2
import pygame
import numpy as np
import time
def main():
#Controller Init
pygame.init()
joysticks = []
for i in range(0,pygame.joystick.get_count()):
joysticks.append(pygame.joystick.Joystick(i))
joysticks[-1].init()
print(joysticks[-1].get_name())
	#Tello Init
	drone = Tello()  # added: `drone` is used below but was never created
	drone.connect()  # assumption: djitellopy's connect() puts the drone in SDK mode
while True:
for event in pygame.event.get():
if(event.type == pygame.JOYBUTTONDOWN):
b = event.button
if (b == 0):
print("takeoff")
drone.takeoff()
elif (b == 1):
print("land")
drone.land()
elif (b == 2):
print("quit")
return 0
if __name__== "__main__":
main()
| 24.083333 | 53 | 0.49827 | 93 | 867 | 4.537634 | 0.505376 | 0.052133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016854 | 0.384083 | 867 | 35 | 54 | 24.771429 | 0.773408 | 0.053057 | 0 | 0 | 0 | 0 | 0.028186 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.185185 | 0 | 0.259259 | 0.148148 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c63ed36ee241e548d81bdd20f997dcd995f3ec78 | 11,761 | py | Python | models/SPR.py | fresh-professor/DiverseCont | 4be198f5531a7efe2cb91b17066322a38d219127 | [
"MIT"
] | 21 | 2021-09-08T14:37:06.000Z | 2022-02-28T02:58:35.000Z | models/SPR.py | fresh-professor/DiverseCont | 4be198f5531a7efe2cb91b17066322a38d219127 | [
"MIT"
] | 1 | 2021-12-28T09:17:38.000Z | 2021-12-28T11:49:30.000Z | models/SPR.py | fresh-professor/DiverseCont | 4be198f5531a7efe2cb91b17066322a38d219127 | [
"MIT"
] | null | null | null | import os
from copy import deepcopy
import tqdm
import torch
import torch.nn.functional as F
import colorful
import numpy as np
import networkx as nx
from tensorboardX import SummaryWriter
from .reservoir import reservoir
from components import Net
from utils import BetaMixture1D
class SPR(torch.nn.Module):
""" Train Continual Model self-supervisedly
Freeze when required to eval and finetune supervisedly using Purified Buffer.
"""
def __init__(self, config, writer: SummaryWriter):
super().__init__()
self.config = config
self.device = config['device']
self.writer = writer
self.purified_buffer = reservoir['purified'](config, config['purified_buffer_size'], config['purified_buffer_q_poa'])
self.delay_buffer = reservoir['delay'](config, config['delayed_buffer_size'], config['delayed_buffer_q_poa'])
self.E_max = config['E_max']
self.expert_step = 0
self.base_step = 0
self.base_ft_step = 0
self.expert_number = 0
self.base = self.get_init_base(config)
self.expert = self.get_init_expert(config)
self.ssl_dir = os.path.join(os.path.dirname(os.path.dirname(self.config['log_dir'])),
'noiserate_{}'.format(config['corruption_percent']),
'expt_{}'.format(config['expert_train_epochs']),
'randomseed_{}'.format(config['random_seed']))
if os.path.exists(self.ssl_dir):
with open(os.path.join(self.ssl_dir, 'idx_sets.npy'), 'rb') as f:
self.debug_idxs = np.load(f, allow_pickle=True)
def get_init_base(self, config):
"""get initialized base model"""
base = Net[config['net']](config)
optim_config = config['optimizer']
lr_scheduler_config = deepcopy(config['lr_scheduler'])
lr_scheduler_config['options'].update({'T_max': config['base_train_epochs']})
base.setup_optimizer(optim_config)
base.setup_lr_scheduler(lr_scheduler_config)
return base
def get_init_expert(self, config):
"""get initialized expert model"""
expert = Net[config['net']](config)
optim_config = config['optimizer']
lr_scheduler_config = deepcopy(config['lr_scheduler'])
lr_scheduler_config['options'].update({'T_max': config['expert_train_epochs']})
expert.setup_optimizer(optim_config)
expert.setup_lr_scheduler(lr_scheduler_config)
return expert
def get_init_base_ft(self, config):
"""get initialized eval model"""
base_ft = Net[config['net'] + '_ft'](config)
optim_config = config['optimizer_ft']
lr_scheduler_config = config['lr_scheduler_ft']
base_ft.setup_optimizer(optim_config)
base_ft.setup_lr_scheduler(lr_scheduler_config)
return base_ft
def learn(self, x, y, corrupt, idx, step=None):
x, y = x.cuda(), y.cuda()
for i in range(len(x)):
self.delay_buffer.update(imgs=x[i: i + 1], cats=y[i: i + 1], corrupts=corrupt[i: i + 1], idxs=idx[i: i + 1])
if self.delay_buffer.is_full():
if not os.path.exists(os.path.join(self.ssl_dir, 'model{}.ckpt'.format(self.expert_number))):
self.expert = self.get_init_expert(self.config)
self.train_self_expert()
else:
self.expert.load_state_dict(
torch.load(os.path.join(self.ssl_dir, 'model{}.ckpt'.format(self.expert_number)),
map_location=self.device))
################### data consistency check ######################
if torch.sum(self.delay_buffer.get('idxs') != torch.Tensor(self.debug_idxs[self.expert_number])) != 0:
raise Exception("it seems there is a data consistency problem: exp_num {}".format(self.expert_number))
################### data consistency check ######################
self.train_self_base()
clean_idx, clean_p = self.cluster_and_sample()
self.update_purified_buffer(clean_idx, clean_p, step)
self.expert_number += 1
def update_purified_buffer(self, clean_idx, clean_p, step):
"""update purified buffer with the filtered samples"""
self.purified_buffer.update(
imgs=self.delay_buffer.get('imgs')[clean_idx],
cats=self.delay_buffer.get('cats')[clean_idx],
corrupts=self.delay_buffer.get('corrupts')[clean_idx],
idxs=self.delay_buffer.get('idxs')[clean_idx],
clean_ps=clean_p)
self.delay_buffer.reset()
print(colorful.bold_yellow(self.purified_buffer.state('corrupts')).styled_string)
self.writer.add_scalar(
'buffer_corrupts', torch.sum(self.purified_buffer.get('corrupts')), step)
def cluster_and_sample(self):
"""filter samples in delay buffer"""
self.expert.eval()
with torch.no_grad():
xs = self.delay_buffer.get('imgs')
ys = self.delay_buffer.get('cats')
corrs = self.delay_buffer.get('corrupts')
features = self.expert(xs)
features = F.normalize(features, dim=1)
clean_p = list()
clean_idx = list()
print("***********************************************")
for u_y in torch.unique(ys).tolist():
y_mask = ys == u_y
corr = corrs[y_mask]
feature = features[y_mask]
            # ignore negative similarities
_similarity_matrix = torch.relu(F.cosine_similarity(feature.unsqueeze(1), feature.unsqueeze(0), dim=-1))
# stochastic ensemble
_clean_ps = torch.zeros((self.E_max, len(feature)), dtype=torch.double)
for _i in range(self.E_max):
similarity_matrix = (_similarity_matrix > torch.rand_like(_similarity_matrix)).type(torch.float32)
similarity_matrix[similarity_matrix == 0] = 1e-5 # add small num for ensuring positive matrix
g = nx.from_numpy_matrix(similarity_matrix.cpu().numpy())
info = nx.eigenvector_centrality(g, max_iter=6000, weight='weight') # index: value
centrality = [info[i] for i in range(len(info))]
bmm_model = BetaMixture1D(max_iters=10)
# fit beta mixture model
c = np.asarray(centrality)
c, c_min, c_max = bmm_model.outlier_remove(c)
c = bmm_model.normalize(c, c_min, c_max)
bmm_model.fit(c)
bmm_model.create_lookup(1) # 0: noisy, 1: clean
# get posterior
c = np.asarray(centrality)
c = bmm_model.normalize(c, c_min, c_max)
p = bmm_model.look_lookup(c)
_clean_ps[_i] = torch.from_numpy(p)
_clean_ps = torch.mean(_clean_ps, dim=0)
m = _clean_ps > torch.rand_like(_clean_ps)
clean_idx.extend(torch.nonzero(y_mask)[:, -1][m].tolist())
clean_p.extend(_clean_ps[m].tolist())
print("class: {}".format(u_y))
print("--- num of selected samples: {}".format(torch.sum(m).item()))
print("--- num of selected corrupt samples: {}".format(torch.sum(corr[m]).item()))
print("***********************************************")
return clean_idx, torch.Tensor(clean_p)
def train_self_base(self):
"""Self Replay. train base model with samples from delay and purified buffer"""
bs = self.config['base_batch_size']
# If purified buffer is full, train using it also
db_bs = (bs // 2) if self.purified_buffer.is_full() else bs
db_bs = min(db_bs, len(self.delay_buffer))
pb_bs = min(bs - db_bs, len(self.purified_buffer))
self.base.train()
self.base.init_ntxent(self.config, batch_size=db_bs + pb_bs)
dataloader = self.delay_buffer.get_dataloader(batch_size=db_bs, shuffle=True, drop_last=True)
for epoch_i in tqdm.trange(self.config['base_train_epochs'], desc="base training", leave=False):
for inner_step, data in enumerate(dataloader):
x = data['imgs']
self.base.zero_grad()
# sample data from purified buffer and merge
if pb_bs > 0:
replay_data = self.purified_buffer.sample(num=pb_bs)
x = torch.cat([replay_data['imgs'], x], dim=0)
loss = self.base.get_selfsup_loss(x)
loss.backward()
self.base.optimizer.step()
self.writer.add_scalar(
'continual_base_train_loss', loss,
self.base_step + inner_step + epoch_i * len(dataloader))
# warmup for the first 10 epochs
if epoch_i >= 10:
self.base.lr_scheduler.step()
self.writer.flush()
self.base_step += self.config['base_train_epochs'] * len(dataloader)
def train_self_expert(self):
"""train expert model with samples from delay"""
        batch_size = min(self.config['expert_batch_size'], len(self.delay_buffer))
self.expert.train()
self.expert.init_ntxent(self.config, batch_size=batch_size)
dataloader = self.delay_buffer.get_dataloader(batch_size=batch_size, shuffle=True, drop_last=True)
for epoch_i in tqdm.trange(self.config['expert_train_epochs'], desc='expert training', leave=False):
for inner_step, data in enumerate(dataloader):
x = data['imgs']
self.expert.zero_grad()
loss = self.expert.get_selfsup_loss(x)
loss.backward()
self.expert.optimizer.step()
self.writer.add_scalar(
'expert_train_loss', loss,
self.expert_step + inner_step + len(dataloader) * epoch_i)
# warmup for the first 10 epochs
if epoch_i >= 10:
self.expert.lr_scheduler.step()
self.writer.flush()
self.expert_step += self.config['expert_train_epochs'] * len(dataloader)
def get_finetuned_model(self):
"""copy the base and fine-tune for evaluation"""
base_ft = self.get_init_base_ft(self.config)
# overwrite entries in the state dict
ft_dict = base_ft.state_dict()
ft_dict.update({k: v for k, v in self.base.state_dict().items() if k in ft_dict})
base_ft.load_state_dict(ft_dict)
base_ft.train()
dataloader = self.purified_buffer.get_dataloader(batch_size=self.config['ft_batch_size'], shuffle=True, drop_last=True)
for epoch_i in tqdm.trange(self.config['ft_epochs'], desc='finetuning', leave=False):
for inner_step, data in enumerate(dataloader):
x, y = data['imgs'], data['cats']
base_ft.zero_grad()
loss = base_ft.get_sup_loss(x, y).mean()
loss.backward()
base_ft.clip_grad()
base_ft.optimizer.step()
base_ft.lr_scheduler.step()
self.writer.add_scalar(
'ft_train_loss', loss,
self.base_ft_step + inner_step + epoch_i * len(dataloader))
self.writer.flush()
self.base_ft_step += self.config['ft_epochs'] * len(dataloader)
base_ft.eval()
return base_ft
def forward(self, x):
pass
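# Added usage sketch (not part of the original module): the shape of `stream`
# and the contents of `config` are assumptions based only on how SPR reads
# them above.
def run_stream(config, writer, stream):
    """Drive SPR over a labelled stream of (x, y, corrupt, idx) batches and
    return the fine-tuned evaluation model."""
    model = SPR(config, writer)
    for step, (x, y, corrupt, idx) in enumerate(stream):
        model.learn(x, y, corrupt, idx, step=step)
    return model.get_finetuned_model()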
| 43.884328 | 127 | 0.585239 | 1,452 | 11,761 | 4.5 | 0.184573 | 0.03367 | 0.036731 | 0.027548 | 0.348944 | 0.240588 | 0.197123 | 0.157637 | 0.130089 | 0.121824 | 0 | 0.005139 | 0.288581 | 11,761 | 267 | 128 | 44.048689 | 0.775786 | 0.071337 | 0 | 0.147208 | 0 | 0 | 0.085802 | 0.013 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055838 | false | 0.005076 | 0.060914 | 0 | 0.147208 | 0.030457 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c640ef3189a49dcfa1947c8d0c9f7d5961226015 | 6,602 | py | Python | tests/pyunity/testScene/testScene.py | rayzchen/PyUnity | 8ed436eca7a84f05190c1fa275c58da5c6059926 | [
"MIT"
] | null | null | null | tests/pyunity/testScene/testScene.py | rayzchen/PyUnity | 8ed436eca7a84f05190c1fa275c58da5c6059926 | [
"MIT"
] | null | null | null | tests/pyunity/testScene/testScene.py | rayzchen/PyUnity | 8ed436eca7a84f05190c1fa275c58da5c6059926 | [
"MIT"
] | null | null | null | # Copyright (c) 2020-2022 The PyUnity Team
# This file is licensed under the MIT License.
# See https://docs.pyunity.x10.bz/en/latest/license.html
from pyunity import (
SceneManager, Component, Camera, AudioListener, Light,
GameObject, Tag, Transform, GameObjectException,
ComponentException, Canvas, PyUnityException,
Behaviour, ShowInInspector, RenderTarget, Logger,
Vector3, MeshRenderer, Mesh)
from . import SceneTestCase
class TestScene(SceneTestCase):
def testInit(self):
scene = SceneManager.AddScene("Scene")
assert scene.name == "Scene"
assert len(scene.gameObjects) == 2
for gameObject in scene.gameObjects:
assert gameObject.scene is scene
for component in gameObject.components:
assert component.gameObject is gameObject
assert component.transform is gameObject.transform
assert isinstance(component, Component)
assert scene.gameObjects[0].name == "Main Camera"
assert scene.gameObjects[1].name == "Light"
assert scene.mainCamera is scene.gameObjects[0].components[1]
assert len(scene.gameObjects[0].components) == 3
assert len(scene.gameObjects[1].components) == 2
assert scene.gameObjects[0].GetComponent(Camera) is not None
assert scene.gameObjects[0].GetComponent(AudioListener) is not None
assert scene.gameObjects[1].GetComponent(Light) is not None
def testFind(self):
scene = SceneManager.AddScene("Scene")
a = GameObject("A")
b = GameObject("B", a)
c = GameObject("C", a)
d = GameObject("B", c)
scene.AddMultiple(a, b, c, d)
tagnum = Tag.AddTag("Custom Tag")
a.tag = Tag(tagnum)
c.tag = Tag("Custom Tag")
assert len(scene.FindGameObjectsByName("B")) == 2
assert scene.FindGameObjectsByName("B") == [b, d]
assert scene.FindGameObjectsByTagName("Custom Tag") == [a, c]
assert scene.FindGameObjectsByTagNumber(tagnum) == [a, c]
assert isinstance(scene.FindComponent(Transform), Transform)
assert scene.FindComponents(Transform) == [
scene.mainCamera.transform, scene.gameObjects[1].transform,
a.transform, b.transform, c.transform, d.transform]
with self.assertRaises(GameObjectException) as exc:
scene.FindGameObjectsByTagName("Invalid")
assert exc.value == "No tag named Invalid; create a new tag with Tag.AddTag"
with self.assertRaises(GameObjectException) as exc:
scene.FindGameObjectsByTagNumber(-1)
assert exc.value == "No tag at index -1; create a new tag with Tag.AddTag"
with self.assertRaises(ComponentException) as exc:
scene.FindComponent(Canvas)
assert exc.value == "Cannot find component Canvas in scene"
def testRootGameObjects(self):
scene = SceneManager.AddScene("Scene")
a = GameObject("A")
b = GameObject("B", a)
c = GameObject("C", a)
d = GameObject("B", c)
scene.AddMultiple(a, b, c, d)
assert len(scene.rootGameObjects) == 3
assert scene.rootGameObjects[2] is a
def testAddError(self):
scene = SceneManager.AddScene("Scene")
gameObject = GameObject("GameObject")
scene.Add(gameObject)
with self.assertRaises(PyUnityException) as exc:
scene.Add(gameObject)
assert exc.value == "GameObject \"GameObject\" is already in Scene \"Scene\""
def testBare(self):
from pyunity.scenes import Scene
scene = Scene.Bare("Scene")
assert scene.name == "Scene"
assert len(scene.gameObjects) == 0
assert scene.mainCamera is None
def testDestroy(self):
class Test(Behaviour):
other = ShowInInspector(GameObject)
scene = SceneManager.AddScene("Scene")
# Exception
fake = GameObject("Not in scene")
with self.assertRaises(PyUnityException) as exc:
scene.Destroy(fake)
assert exc.value == "The provided GameObject is not part of the Scene"
# Correct
a = GameObject("A")
b = GameObject("B", a)
c = GameObject("C", a)
scene.AddMultiple(a, b, c)
assert c.scene is scene
assert c in scene.gameObjects
scene.Destroy(c)
assert c.scene is None
assert c not in scene.gameObjects
# Multiple
scene.Destroy(a)
assert b.scene is None
assert b not in scene.gameObjects
assert c.scene is None
assert c not in scene.gameObjects
# Components
cam = GameObject("Camera")
camera = cam.AddComponent(Camera)
test = GameObject("Test")
test.AddComponent(Test).other = cam
target = GameObject("Target")
target.AddComponent(RenderTarget).source = camera
scene.AddMultiple(cam, test, target)
scene.Destroy(cam)
assert b.scene is None
assert cam not in scene.gameObjects
assert test.GetComponent(Test).other is None
assert target.GetComponent(RenderTarget).source is None
# Main Camera
with Logger.TempRedirect(silent=True) as r:
scene.Destroy(scene.mainCamera.gameObject)
assert r.get() == "Warning: Removing Main Camera from scene 'Scene'\n"
def testHas(self):
scene = SceneManager.AddScene("Scene")
gameObject = GameObject("GameObject")
gameObject2 = GameObject("GameObject 2")
scene.Add(gameObject)
assert scene.Has(gameObject)
assert not scene.Has(gameObject2)
def testList(self):
scene = SceneManager.AddScene("Scene")
a = GameObject("A")
b = GameObject("B", a)
c = GameObject("C", a)
d = GameObject("B", c)
scene.AddMultiple(b, d, c, a)
with Logger.TempRedirect(silent=True) as r:
scene.List()
assert r.get() == "\n".join([
"/A", "/A/B", "/A/C", "/A/C/B", "/Light", "/Main Camera\n"])
def testInsideFrustrum(self):
scene = SceneManager.AddScene("Scene")
gameObject = GameObject("Cube")
gameObject.transform.position = Vector3(0, 0, 5)
renderer = gameObject.AddComponent(MeshRenderer)
scene.Add(gameObject)
assert not scene.insideFrustrum(renderer)
renderer.mesh = Mesh.cube(2)
        # assert scene.insideFrustrum(renderer)
gameObject.transform.position = Vector3(0, 0, -5)
# assert not scene.insideFrustrum(renderer)
| 36.882682 | 85 | 0.628294 | 735 | 6,602 | 5.643537 | 0.187755 | 0.065574 | 0.048216 | 0.057859 | 0.372469 | 0.301109 | 0.27459 | 0.202748 | 0.152604 | 0.128496 | 0 | 0.008656 | 0.265071 | 6,602 | 178 | 86 | 37.089888 | 0.846249 | 0.041048 | 0 | 0.323741 | 0 | 0 | 0.07943 | 0 | 0 | 0 | 0 | 0 | 0.374101 | 1 | 0.064748 | false | 0 | 0.021583 | 0 | 0.100719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c64201468eb9a627a5893c74a3ccfcc9bf284d58 | 1,806 | py | Python | alembic_migration/alembic_handler.py | NASA-IMPACT/hls-sentinel2-downloader-serverless | e3e4f542fc805c6259f20a6dd932c98cccd4144c | [
"Apache-2.0"
] | null | null | null | alembic_migration/alembic_handler.py | NASA-IMPACT/hls-sentinel2-downloader-serverless | e3e4f542fc805c6259f20a6dd932c98cccd4144c | [
"Apache-2.0"
] | 2 | 2021-07-23T00:49:42.000Z | 2021-07-23T00:51:25.000Z | alembic_migration/alembic_handler.py | NASA-IMPACT/hls-sentinel2-downloader-serverless | e3e4f542fc805c6259f20a6dd932c98cccd4144c | [
"Apache-2.0"
] | null | null | null | import logging
import os
import alembic.command
import alembic.config
import cfnresponse
from db.session import get_session, get_session_maker
from retry import retry
from sqlalchemy.exc import OperationalError
def log(log_statement: str):
"""
Gets a Logger for the Lambda function with level logging.INFO and logs
`log_statement`. This is used multiple times as Alembic takes over the logging
configuration so we have to re-take control when we want to log
:param log_statement: str to log
"""
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info(log_statement)
@retry(OperationalError, tries=30, delay=10)
def check_rds_connection():
session_maker = get_session_maker()
with get_session(session_maker) as db:
db.execute("SELECT * FROM pg_catalog.pg_tables;")
def handler(event, context):
if event["RequestType"] == "Delete":
log("Received a Delete Request")
cfnresponse.send(
event, context, cfnresponse.SUCCESS, {"Response": "Nothing run on deletes"}
)
return
try:
log("Checking connection to RDS")
check_rds_connection()
log("Connected to RDS")
log("Running Alembic Migrations")
alembic_config = alembic.config.Config(os.path.join(".", "alembic.ini"))
alembic_config.set_main_option("script_location", ".")
alembic.command.upgrade(alembic_config, "head")
log("Migrations run successfully")
cfnresponse.send(
event,
context,
cfnresponse.SUCCESS,
{"Response": "Migrations run successfully"},
)
except Exception as ex:
log(str(ex))
cfnresponse.send(event, context, cfnresponse.FAILED, {"Response": str(ex)})
raise ex
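# Added note (not in the original module): the handler above expects a
# standard CloudFormation custom-resource event, roughly of this shape;
# cfnresponse posts the result back to the event's pre-signed ResponseURL.
#   {
#       "RequestType": "Create" | "Update" | "Delete",
#       "ResponseURL": "https://...",
#       "StackId": "...", "RequestId": "...", "LogicalResourceId": "..."
#   }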
| 30.1 | 87 | 0.668328 | 218 | 1,806 | 5.426606 | 0.46789 | 0.054945 | 0.050719 | 0.06847 | 0.121724 | 0.089603 | 0.089603 | 0 | 0 | 0 | 0 | 0.002899 | 0.23588 | 1,806 | 59 | 88 | 30.610169 | 0.854348 | 0.136213 | 0 | 0.046512 | 0 | 0 | 0.180809 | 0.013708 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.186047 | 0 | 0.27907 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c642eb175ecb18dbaa59adf93fe2e5472ccf50d1 | 1,229 | py | Python | ch2/q25solution.py | kylepw/ctci | 7e2fcc6775db3789d0e425f4fb969acf6c44aad5 | [
"MIT"
] | null | null | null | ch2/q25solution.py | kylepw/ctci | 7e2fcc6775db3789d0e425f4fb969acf6c44aad5 | [
"MIT"
] | null | null | null | ch2/q25solution.py | kylepw/ctci | 7e2fcc6775db3789d0e425f4fb969acf6c44aad5 | [
"MIT"
] | null | null | null | from LinkedList import LinkedList
def sum_lists(ll_a, ll_b):
n1, n2 = ll_a.head, ll_b.head
ll = LinkedList()
carry = 0
while n1 or n2:
result = carry
if n1:
result += n1.value
n1 = n1.next
if n2:
result += n2.value
n2 = n2.next
ll.add(result % 10)
carry = result // 10
if carry:
ll.add(carry)
return ll
def sum_lists_followup(ll_a, ll_b):
# Pad the shorter list with zeros
if len(ll_a) < len(ll_b):
for i in range(len(ll_b) - len(ll_a)):
ll_a.add_to_beginning(0)
else:
for i in range(len(ll_a) - len(ll_b)):
ll_b.add_to_beginning(0)
# Find sum
n1, n2 = ll_a.head, ll_b.head
result = 0
while n1 and n2:
result = (result * 10) + n1.value + n2.value
n1 = n1.next
n2 = n2.next
# Create new linked list
ll = LinkedList()
ll.add_multiple([int(i) for i in str(result)])
return ll
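def sum_lists_recursive(ll_a, ll_b):
    # Added sketch (not in the original file): a recursive variant of sum_lists,
    # assuming the same LinkedList API used above and digits stored in reverse
    # order; this is the function called at the bottom of the script.
    ll = LinkedList()
    def helper(n1, n2, carry):
        if n1 is None and n2 is None and carry == 0:
            return
        total = carry + (n1.value if n1 else 0) + (n2.value if n2 else 0)
        ll.add(total % 10)
        helper(n1.next if n1 else None, n2.next if n2 else None, total // 10)
    helper(ll_a.head, ll_b.head, 0)
    return ll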
ll_a = LinkedList()
ll_a.generate(4, 0, 9)
ll_b = LinkedList()
ll_b.generate(3, 0, 9)
print(ll_a)
print(ll_b)
#print(sum_lists(ll_a, ll_b))
print(sum_lists_recursive(ll_a, ll_b))
#print(sum_lists_followup(ll_a, ll_b)) | 21.561404 | 52 | 0.570382 | 207 | 1,229 | 3.188406 | 0.251208 | 0.063636 | 0.045455 | 0.045455 | 0.289394 | 0.277273 | 0.166667 | 0.054545 | 0 | 0 | 0 | 0.045077 | 0.314076 | 1,229 | 57 | 53 | 21.561404 | 0.737841 | 0.10415 | 0 | 0.243902 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.02439 | 0 | 0.121951 | 0.073171 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c646039bec76cea06e642add68741d31531aa8e2 | 5,147 | py | Python | cmdb-compliance/libs/server/server_common.py | zjj1002/aws-cloud-cmdb-system | 47982007688e5db1272435891cb654ab11d0d60a | [
"Apache-2.0"
] | null | null | null | cmdb-compliance/libs/server/server_common.py | zjj1002/aws-cloud-cmdb-system | 47982007688e5db1272435891cb654ab11d0d60a | [
"Apache-2.0"
] | 1 | 2022-01-04T13:53:16.000Z | 2022-01-04T13:53:16.000Z | cmdb-optimization/libs/server/server_common.py | zjj1002/aws-cloud-cmdb-system | 47982007688e5db1272435891cb654ab11d0d60a | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2019/5/15 14:44
# @Author : Fred Yangxiaofei
# @File : server_common.py
# @Role : Common server helpers: log errors, update assets, push public keys; mainly used by manual asset updates
from models.server import Server, AssetErrorLog, ServerDetail
from libs.db_context import DBContext
from libs.web_logs import ins_log
from libs.server.sync_public_key import RsyncPublicKey, start_rsync
import sqlalchemy
def write_error_log(error_list):
with DBContext('w') as session:
for i in error_list:
ip = i.get('ip')
msg = i.get('msg')
            error_log = 'Failed to push the public key, error: {}'.format(msg)
ins_log.read_log('error', error_log)
session.query(Server).filter(Server.ip == ip).update({Server.state: 'false'})
exist_ip = session.query(AssetErrorLog).filter(AssetErrorLog.ip == ip).first()
if exist_ip:
session.query(AssetErrorLog).filter(AssetErrorLog.ip == ip).update(
{AssetErrorLog.error_log: error_log})
else:
new_error_log = AssetErrorLog(ip=ip, error_log=error_log)
session.add(new_error_log)
session.commit()
def update_asset(asset_data):
"""
更新资产到数据库
:param host_data: 主机返回的资产采集基础数据
:return:
"""
with DBContext('w') as session:
for k, v in asset_data.items():
try:
if asset_data[k].get('status'):
_sn = v.get('sn', None)
_hostname = v.get('host_name', None)
_cpu = v.get('cpu', None)
_cpu_cores = v.get('cpu_cores', None)
_memory = v.get('memory', None)
_disk = v.get('disk', None)
_os_type = v.get('os_type', None)
_os_kernel = v.get('os_kernel', None)
# _instance_id = v.get('instance_id', None)
# _instance_type = v.get('instance_type', None)
# _instance_state = v.get('instance_state', None)
exist_detail = session.query(ServerDetail).filter(ServerDetail.ip == k).first()
if not exist_detail:
                        # create a new record if none exists
new_server_detail = ServerDetail(ip=k, sn=_sn, cpu=_cpu, cpu_cores=_cpu_cores,
memory=_memory, disk=_disk,
os_type=_os_type, os_kernel=_os_kernel)
session.add(new_server_detail)
session.commit()
session.query(Server).filter(Server.ip == k).update(
{Server.hostname: _hostname, Server.state: 'true'})
session.commit()
else:
                        # otherwise update the existing record
session.query(ServerDetail).filter(ServerDetail.ip == k).update({
ServerDetail.sn: _sn, ServerDetail.ip: k,
ServerDetail.cpu: _cpu, ServerDetail.cpu_cores: _cpu_cores,
ServerDetail.disk: _disk, ServerDetail.memory: _memory,
ServerDetail.os_type: _os_type, ServerDetail.os_kernel: _os_kernel,
})
session.query(Server).filter(Server.ip == k).update(
{Server.hostname: _hostname, Server.state: 'true'})
session.commit()
except sqlalchemy.exc.IntegrityError as e:
ins_log.read_log('error', e)
                # set the state to false -> delete the host detail -> record the error message
session.query(Server).filter(Server.ip == k).update({Server.state: 'false'})
session.query(ServerDetail).filter(ServerDetail.ip == k).delete(
synchronize_session=False)
exist_ip = session.query(AssetErrorLog).filter(AssetErrorLog.ip == k).first()
error_log = str(e)
if exist_ip:
session.query(AssetErrorLog).filter(AssetErrorLog.ip == k).update(
{AssetErrorLog.error_log: error_log})
else:
new_error_log = AssetErrorLog(ip=k, error_log=error_log)
session.add(new_error_log)
session.commit()
return False
def rsync_public_key(server_list):
"""
推送PublicKey
:return: 只返回推送成功的,失败的直接写错误日志
"""
# server_list = [('47.100.231.147', 22, 'root', '-----BEGIN RSA PRIVATE KEYxxxxxEND RSA PRIVATE KEY-----', 'false')]
ins_log.read_log('info', 'rsync public key to server')
rsync_error_list = []
rsync_sucess_list = []
sync_key_obj = RsyncPublicKey()
check = sync_key_obj.check_rsa()
if check:
res_data = start_rsync(server_list)
if not res_data.get('status'):
rsync_error_list.append(res_data)
else:
rsync_sucess_list.append(res_data)
if rsync_error_list:
write_error_log(rsync_error_list)
return rsync_sucess_list
if __name__ == '__main__':
pass
| 41.508065 | 120 | 0.541092 | 552 | 5,147 | 4.786232 | 0.25 | 0.051476 | 0.024603 | 0.036336 | 0.370553 | 0.342165 | 0.310371 | 0.259273 | 0.259273 | 0.154428 | 0 | 0.007476 | 0.350301 | 5,147 | 123 | 121 | 41.845528 | 0.782596 | 0.108413 | 0 | 0.244186 | 0 | 0 | 0.032863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034884 | false | 0.011628 | 0.05814 | 0 | 0.116279 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6463fc79c5c1fa3ed6d7d6dc133cda7182d1756 | 1,607 | py | Python | src/test/serialization/codec/object/test_string_codec.py | typingtanuki/pyserialization | f4a0d9cff08b3a6ce8f83f3a258c4dce1367d151 | [
"Apache-2.0"
] | null | null | null | src/test/serialization/codec/object/test_string_codec.py | typingtanuki/pyserialization | f4a0d9cff08b3a6ce8f83f3a258c4dce1367d151 | [
"Apache-2.0"
] | null | null | null | src/test/serialization/codec/object/test_string_codec.py | typingtanuki/pyserialization | f4a0d9cff08b3a6ce8f83f3a258c4dce1367d151 | [
"Apache-2.0"
] | null | null | null | import unittest
from src.main.serialization.codec.codec import Codec
from src.main.serialization.codec.object.stringCodec import StringCodec
from src.main.serialization.codec.primitive.shortCodec import ShortCodec
from src.main.serialization.codec.utils.byteIo import ByteIo
from src.main.serialization.codec.utils.bytes import to_byte
from src.test.serialization.codec.test_codec import TestCodec
class TestStringCodec(TestCodec):
def test_wide_range(self):
self.string_seria(None)
self.string_seria("abc")
self.string_seria("123")
self.string_seria("ほげほげ")
self.string_seria("漢字漢字")
self.string_seria(""" % Total\t\t\t\t % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 162 0 0 0 0 0 \t\t\t 0 --:--:-- --:--:-- --:--:-- 0
100 6 0 6 0 \r\n\0\t\t\t 0 0 0 --:--:-- 0:00:09 --:--:-- 1 漢字漢字漢字漢字漢字漢字漢字漢字
漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字
漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字
漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字漢字""")
def string_seria(self, value: None or str):
codec: Codec[str] = StringCodec(to_byte(12), 0)
writer: ByteIo = self.writer()
codec.write(writer, value)
writer.close()
reader: ByteIo = self.reader()
pim: int = codec.read(reader)
self.assertEqual(value, pim)
reader.close()
if __name__ == '__main__':
unittest.main()
| 40.175 | 116 | 0.654636 | 186 | 1,607 | 5.548387 | 0.365591 | 0.015504 | 0.087209 | 0.116279 | 0.161822 | 0.077519 | 0 | 0 | 0 | 0 | 0 | 0.02771 | 0.236465 | 1,607 | 39 | 117 | 41.205128 | 0.813366 | 0 | 0 | 0.0625 | 0 | 0.03125 | 0.357187 | 0.11201 | 0 | 0 | 0 | 0 | 0.03125 | 1 | 0.0625 | false | 0 | 0.21875 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c647d169764fd3968e368f07e9481ecd112d4727 | 539 | py | Python | dataviz/ex1.py | jonaslindemann/compute-course-public | b8f55595ebbd790d79b525efdff17b8517154796 | [
"MIT"
] | 4 | 2021-09-12T12:07:01.000Z | 2021-09-29T17:38:34.000Z | dataviz/ex1.py | jonaslindemann/compute-course-public | b8f55595ebbd790d79b525efdff17b8517154796 | [
"MIT"
] | null | null | null | dataviz/ex1.py | jonaslindemann/compute-course-public | b8f55595ebbd790d79b525efdff17b8517154796 | [
"MIT"
] | 5 | 2020-10-24T16:02:31.000Z | 2021-09-28T20:57:46.000Z | # -*- coding: utf-8 -*-
"""
Created on Wed Jun 7 14:58:44 2017
@author: Jonas Lindemann
"""
import numpy as np
import pyvtk as vtk
print("Reading from uvw.dat...")
xyzuvw = np.loadtxt('uvw.dat', skiprows=2)
print("Converting to points and vectors")
points = xyzuvw[:, 0:3].tolist()
vectors = xyzuvw[:, 3:].tolist()
pointdata = vtk.PointData(vtk.Vectors(vectors, name="vec1"), vtk.Vectors(vectors, name="vec2"))
data = vtk.VtkData(vtk.StructuredGrid([96, 65, 48], points), pointdata)
data.tofile('uvw','ascii')
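# Added note (not in the original script): pyvtk can also write the same
# dataset in binary form, which is much smaller for large grids.
data.tofile('uvw_binary', 'binary')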
| 24.5 | 96 | 0.654917 | 77 | 539 | 4.584416 | 0.649351 | 0.033994 | 0.096317 | 0.11898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.053215 | 0.163265 | 539 | 21 | 97 | 25.666667 | 0.72949 | 0.155844 | 0 | 0 | 0 | 0 | 0.183529 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6487ab8b368f34287f785fc89a730b8d8fe1f9f | 2,237 | py | Python | nginc/__init__.py | FlorianLudwig/nginc | 489546d1b0190047150bf3134071aa88c64f8c3d | [
"Apache-2.0"
] | 1 | 2015-11-01T12:16:17.000Z | 2015-11-01T12:16:17.000Z | nginc/__init__.py | FlorianLudwig/nginc | 489546d1b0190047150bf3134071aa88c64f8c3d | [
"Apache-2.0"
] | null | null | null | nginc/__init__.py | FlorianLudwig/nginc | 489546d1b0190047150bf3134071aa88c64f8c3d | [
"Apache-2.0"
] | null | null | null | # Copyright 2014 Florian Ludwig
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import tempfile
import subprocess
import atexit
import shutil
import argparse
import pkg_resources
def start(root, address='127.0.0.1', port=8000):
conf_template = pkg_resources.resource_string('nginc', 'nginx.conf')
conf_template = conf_template.decode('utf-8')
tmp = tempfile.mkdtemp(prefix='nginc')
@atexit.register
def cleanup_tmp():
shutil.rmtree(tmp)
root = os.path.abspath(root)
root = root.replace('"', '\\"')
config = conf_template.format(tmp=tmp, root=root, port=port, address=address)
conf_path = tmp + '/nginx.conf'
conf_file = open(conf_path, 'w')
conf_file.write(config)
conf_file.close()
proc = subprocess.Popen(['nginx', '-c', conf_path])
@atexit.register
def cleanup_proc():
try:
proc.kill()
except OSError:
pass
return proc
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--port', type=int, default=8000,
help='port to bind to')
parser.add_argument('-r', '--root', type=str, default='.',
help='directory to serve, defaults to current working directory')
parser.add_argument('-a', '--address', type=str, default='127.0.0.1',
help='address to bind to')
parser.add_argument('-A', action='store_true',
help='shortcut for --address 0.0.0.0')
args = parser.parse_args()
address = args.address
if args.A:
address = '0.0.0.0'
proc = start(args.root, address, args.port)
try:
proc.wait()
except KeyboardInterrupt:
proc.kill()
| 30.22973 | 89 | 0.646848 | 299 | 2,237 | 4.769231 | 0.464883 | 0.01122 | 0.047686 | 0.02244 | 0.050491 | 0.035063 | 0 | 0 | 0 | 0 | 0 | 0.021524 | 0.23156 | 2,237 | 73 | 90 | 30.643836 | 0.808028 | 0.246312 | 0 | 0.125 | 0 | 0 | 0.139354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0.020833 | 0.145833 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c649c2159d59b04bdc795f0bcf96424017779542 | 1,417 | py | Python | Part 2 - Regression/Section 4 - Simple Linear Regression/practice_linear_regression.py | aditya30394/Machine-Learning-A-Z | 8caaf1f94f800fcc7bd594569593c4d713c32d9e | [
"MIT"
] | null | null | null | Part 2 - Regression/Section 4 - Simple Linear Regression/practice_linear_regression.py | aditya30394/Machine-Learning-A-Z | 8caaf1f94f800fcc7bd594569593c4d713c32d9e | [
"MIT"
] | null | null | null | Part 2 - Regression/Section 4 - Simple Linear Regression/practice_linear_regression.py | aditya30394/Machine-Learning-A-Z | 8caaf1f94f800fcc7bd594569593c4d713c32d9e | [
"MIT"
] | null | null | null | # Import important libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Read the data set
dataset = pd.read_csv('Salary_Data.csv')
X = dataset.iloc[:,:-1].values
y = dataset.iloc[:, 1].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0)
# Feature scaling is not needed here: ordinary least-squares regression is
# scale-equivariant, so the fitted coefficients simply absorb each feature's units
# Fitting Simple linear regression to the training set
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# Predicting the test set results
y_pred = regressor.predict(X_test)
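# Added illustration (not in the original script): inspect the fitted model;
# coef_/intercept_ give salary = intercept + slope * years_of_experience, and
# score() reports R^2 on the held-out data.
print('Slope:', regressor.coef_)
print('Intercept:', regressor.intercept_)
print('R^2 on test set:', regressor.score(X_test, y_test))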
""" Now we will visualize the results that we achieved so far """
# Visualising the Training set results
plt.scatter(X_train, y_train, color='red')
plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.title("Salary VS Experience (Training Set)")
plt.xlabel("Years of Experience")
plt.ylabel("Salary")
plt.show()
# Visualising the Test set results
plt.scatter(X_test, y_test, color='red')
# This is the same line as that of plt.plot(X_train, regressor.predict(X_train), color='blue')
plt.plot(X_test, y_pred, color='blue')
plt.title("Salary VS Experience (Test Set)")
plt.xlabel("Years of Experience")
plt.ylabel("Salary")
plt.show() | 32.953488 | 94 | 0.762879 | 234 | 1,417 | 4.5 | 0.371795 | 0.039886 | 0.039886 | 0.034188 | 0.281102 | 0.241216 | 0.241216 | 0.186135 | 0.186135 | 0.186135 | 0 | 0.004058 | 0.130558 | 1,417 | 43 | 95 | 32.953488 | 0.850649 | 0.314749 | 0 | 0.25 | 0 | 0 | 0.16183 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.208333 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c64c0a450e0399b2acbc4deba8555735fd48b6da | 1,853 | py | Python | source/source_test.py | mengwangk/myinvestor-toolkit | 3dca9e1accfccf1583dcdbec80d1a0fe9dae2e81 | [
"MIT"
] | 7 | 2019-10-13T18:58:33.000Z | 2021-08-07T12:46:22.000Z | source/source_test.py | mengwangk/myinvestor-toolkit | 3dca9e1accfccf1583dcdbec80d1a0fe9dae2e81 | [
"MIT"
] | 7 | 2019-12-16T21:25:34.000Z | 2022-02-10T00:11:22.000Z | source/source_test.py | mengwangk/myinvestor-toolkit | 3dca9e1accfccf1583dcdbec80d1a0fe9dae2e81 | [
"MIT"
] | 4 | 2020-02-01T11:23:51.000Z | 2021-12-13T12:27:18.000Z | # -*- coding: utf-8 -*-
"""Test for various sources
Supported sources
- Yahoo Finance
- I3Investor - KLSe
"""
import datetime as dt
import string
import unittest
from source import YahooFinanceSource, GoogleFinanceSource
class SourceTest(unittest.TestCase):
_TEST_YAHOO_FINANCE_SYMBOL = '6742.KL'
_YAHOO_FINANCE_SOURCE = YahooFinanceSource(_TEST_YAHOO_FINANCE_SYMBOL)
_TEST_GOOGLE_FINANCE_SYMBOL = "ytlpowr"
_GOOGLE_FINANCE_SOURCE = GoogleFinanceSource(_TEST_GOOGLE_FINANCE_SYMBOL)
_TODAY = dt.datetime.today().strftime('%Y-%m-%d')
@unittest.skip
def test_yahoo_get_stock_prices(self):
print("Getting historical prices")
# Get historical stock data
historical_data = self._YAHOO_FINANCE_SOURCE.get_historical_stock_data('2016-05-15', self._TODAY, 'daily')
print(historical_data)
# prices = historical_data[self._TEST_SYMBOL]['prices']
# print(prices)
# for price in prices:
# print(price.get('close', None))
# Get current price
# current_price = yahoo_finance_source.get_current_price()
# print(current_price)
@unittest.skip
def test_yahoo_get_dividend_history(self):
print("Getting historical dividends")
dividend_data = self._YAHOO_FINANCE_SOURCE.get_historical_stock_dividend_data('2010-05-15', self._TODAY,
'daily')
print(dividend_data)
@unittest.skip
    def test_generate_a_to_z(self):
for c in string.ascii_uppercase:
print(c)
def test_google_finance_get_stock_prices(self):
print("Getting historical prices")
historical_prices = self._GOOGLE_FINANCE_SOURCE.get_stock_historical_prices("2010-05-15", self._TODAY)
print(historical_prices)
| 30.377049 | 114 | 0.679439 | 213 | 1,853 | 5.544601 | 0.305164 | 0.071126 | 0.060965 | 0.048264 | 0.252329 | 0.234547 | 0.152413 | 0.152413 | 0 | 0 | 0 | 0.021097 | 0.232596 | 1,853 | 60 | 115 | 30.883333 | 0.809423 | 0.188343 | 0 | 0.172414 | 0 | 0 | 0.094086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0 | 0.137931 | 0 | 0.482759 | 0.241379 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c64e4b280de8cd0b21024951fd1499e577dd81d6 | 2,197 | py | Python | solver.py | n8henrie/knapsack | c52179e43a833d57f0df185d5d225444d1725204 | [
"MIT"
] | null | null | null | solver.py | n8henrie/knapsack | c52179e43a833d57f0df185d5d225444d1725204 | [
"MIT"
] | null | null | null | solver.py | n8henrie/knapsack | c52179e43a833d57f0df185d5d225444d1725204 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from collections import namedtuple
from itertools import combinations
import sys
import knapsack
def solve_it(input_data, language="rust"):
if language == "python":
return solve_it_python(input_data)
return solve_it_rust(input_data)
def solve_it_rust(input_data):
return knapsack.solve(input_data)
Item = namedtuple("Item", ["index", "value", "weight"])
def solve_it_python(input_data):
print("running in python", file=sys.stderr)
# parse the input
lines = input_data.split("\n")
firstLine = lines[0].split()
item_count = int(firstLine[0])
capacity = int(firstLine[1])
items = []
for i in range(1, item_count + 1):
line = lines[i]
parts = line.split()
items.append(Item(i - 1, int(parts[0]), int(parts[1])))
# a trivial algorithm for filling the knapsack
# it takes items in-order until the knapsack is full
value = 0
taken = [0] * len(items)
all_combinations = (
comb
for n in range(1, len(items) + 1)
for comb in combinations(items, n)
)
small_enough = (
comb
for comb in all_combinations
if sum(item.weight for item in comb) <= capacity
)
winner = max(small_enough, key=lambda items: sum(i.value for i in items))
value = sum(i.value for i in winner)
for idx, item in enumerate(items):
if item in winner:
taken[idx] = 1
# prepare the solution in the specified output format
output_data = str(value) + " " + str(1) + "\n"
output_data += " ".join(map(str, taken))
return output_data
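def solve_it_dp(input_data):
    # Added sketch (not part of the original solver): a standard O(n * K)
    # dynamic-programming alternative to the brute-force search above, using
    # the same input and output formats.
    lines = input_data.split("\n")
    item_count, capacity = (int(x) for x in lines[0].split())
    items = [Item(i - 1, *map(int, lines[i].split())) for i in range(1, item_count + 1)]
    best = [0] * (capacity + 1)  # best[w] = max value achievable with capacity w
    keep = [[False] * (capacity + 1) for _ in items]  # records which items were taken
    for i, item in enumerate(items):
        for w in range(capacity, item.weight - 1, -1):
            candidate = best[w - item.weight] + item.value
            if candidate > best[w]:
                best[w] = candidate
                keep[i][w] = True
    taken = [0] * item_count
    w = capacity
    for i in range(item_count - 1, -1, -1):  # backtrack to recover the choices
        if keep[i][w]:
            taken[items[i].index] = 1
            w -= items[i].weight
    return str(best[capacity]) + " 1\n" + " ".join(map(str, taken))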
if __name__ == "__main__":
if len(sys.argv) > 1:
file_location = sys.argv[1].strip()
with open(file_location, "r") as input_data_file:
input_data = input_data_file.read()
if len(sys.argv) > 2:
language = sys.argv[2].lower().strip()
print(solve_it(input_data, language=language))
else:
print(solve_it(input_data))
else:
print(
"This test requires an input file. Please select one from the data directory. (i.e. python solver.py ./data/ks_4_0)"
)
| 26.46988 | 129 | 0.61766 | 308 | 2,197 | 4.256494 | 0.340909 | 0.08238 | 0.022883 | 0.036613 | 0.143402 | 0.022883 | 0 | 0 | 0 | 0 | 0 | 0.01306 | 0.268093 | 2,197 | 82 | 130 | 26.792683 | 0.802239 | 0.084206 | 0 | 0.071429 | 0 | 0.017857 | 0.088191 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053571 | false | 0 | 0.071429 | 0.017857 | 0.196429 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c65170e65e760d40c99c948a36c0e972a977e113 | 3,765 | py | Python | api/mon/utils.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 97 | 2016-11-15T14:44:23.000Z | 2022-03-13T18:09:15.000Z | api/mon/utils.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 334 | 2016-11-17T19:56:57.000Z | 2022-03-18T10:45:53.000Z | api/mon/utils.py | klebed/esdc-ce | 2c9e4591f344247d345a83880ba86777bb794460 | [
"Apache-2.0"
] | 33 | 2017-01-02T16:04:13.000Z | 2022-02-07T19:20:24.000Z | from django.conf import settings
from api.task.internal import InternalTask
from api.task.response import mgmt_task_response
from vms.utils import AttrDict
from vms.models import Vm
from que import TG_DC_UNBOUND, TG_DC_BOUND
class MonitoringGraph(AttrDict):
"""
Monitoring graph configuration.
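    Example (added illustration; the parameter names are hypothetical)::
        graph = MonitoringGraph('cpu-usage', period=3600)
        graph['name']    # -> 'cpu-usage'
        graph['params']  # -> {'period': 3600}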
"""
def __init__(self, name, **params):
dict.__init__(self)
self['name'] = name
self['params'] = params
# noinspection PyAbstractClass
class MonInternalTask(InternalTask):
"""
Internal zabbix tasks.
"""
abstract = True
def call(self, *args, **kwargs):
# Monitoring is completely disabled
if not settings.MON_ZABBIX_ENABLED:
return None
# Remove unused/useless parameters
kwargs.pop('old_json_active', None)
return super(MonInternalTask, self).call(*args, **kwargs)
def get_mon_vms(sr=('dc',), order_by=('hostname',), **filters):
"""Return iterator of Vm objects which are monitoring by an internal Zabbix"""
filters['slavevm__isnull'] = True
vms = Vm.objects.select_related(*sr).filter(**filters)\
.exclude(status=Vm.NOTCREATED)\
.order_by(*order_by)
return (vm for vm in vms
if vm.dc.settings.MON_ZABBIX_ENABLED and vm.is_zabbix_sync_active() and not vm.is_deploying())
def call_mon_history_task(request, task_function, view_fun_name, obj, dc_bound,
serializer, data, graph, graph_settings):
"""Function that calls task_function callback and returns output mgmt_task_response()"""
_apiview_ = {
'view': view_fun_name,
'method': request.method,
'hostname': obj.hostname,
'graph': graph,
'graph_params': serializer.object.copy(),
}
result = serializer.object.copy()
result['desc'] = graph_settings.get('desc', '')
result['hostname'] = obj.hostname
result['graph'] = graph
result['options'] = graph_settings.get('options', {})
result['update_interval'] = graph_settings.get('update_interval', None)
result['add_host_name'] = graph_settings.get('add_host_name', False)
tidlock = '%s obj:%s graph:%s item_id:%s since:%d until:%d' % (task_function.__name__,
obj.uuid, graph, serializer.item_id,
round(serializer.object['since'], -2),
round(serializer.object['until'], -2))
item_id = serializer.item_id
if item_id is None:
items = graph_settings['items']
else:
item_dict = {'id': item_id}
items = [i % item_dict for i in graph_settings['items']]
if 'items_search_fun' in graph_settings:
# noinspection PyCallingNonCallable
items_search = graph_settings['items_search_fun'](graph_settings, item_id)
else:
items_search = None
history = graph_settings['history']
    # choose the task group according to whether the object is DC-bound (VMs are)
if dc_bound:
tg = TG_DC_BOUND
else:
tg = TG_DC_UNBOUND
ter = task_function.call(request, obj.owner.id, (obj.uuid, items, history, result, items_search),
tg=tg, meta={'apiview': _apiview_}, tidlock=tidlock)
# NOTE: cache_result=tidlock, cache_timeout=60)
    # Caching is disabled here because it makes no real sense:
    # the latest graphs must be fetched from zabbix, and the older ones are requested only seldom.
return mgmt_task_response(request, *ter, obj=obj, api_view=_apiview_,
dc_bound=dc_bound, data=data)
| 37.277228 | 106 | 0.619124 | 449 | 3,765 | 4.962138 | 0.345212 | 0.064183 | 0.028725 | 0.021544 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001468 | 0.276228 | 3,765 | 100 | 107 | 37.65 | 0.816147 | 0.162815 | 0 | 0.046875 | 0 | 0 | 0.09375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.09375 | 0 | 0.265625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
c6555d7ff4abb61b8057f24c0663ec8a607ba4a5 | 714 | py | Python | Pillow-4.3.0/Tests/test_image_toqpixmap.py | leorzz/simplemooc | 8b1c5e939d534b1fd729596df4c59fc69708b896 | [
"MIT"
] | null | null | null | Pillow-4.3.0/Tests/test_image_toqpixmap.py | leorzz/simplemooc | 8b1c5e939d534b1fd729596df4c59fc69708b896 | [
"MIT"
] | null | null | null | Pillow-4.3.0/Tests/test_image_toqpixmap.py | leorzz/simplemooc | 8b1c5e939d534b1fd729596df4c59fc69708b896 | [
"MIT"
] | null | null | null | from helper import unittest, PillowTestCase, hopper
from test_imageqt import PillowQtTestCase, PillowQPixmapTestCase
from PIL import ImageQt
if ImageQt.qt_is_installed:
from PIL.ImageQt import QPixmap
class TestToQPixmap(PillowQPixmapTestCase, PillowTestCase):
def test_sanity(self):
PillowQtTestCase.setUp(self)
for mode in ('1', 'RGB', 'RGBA', 'L', 'P'):
data = ImageQt.toqpixmap(hopper(mode))
self.assertIsInstance(data, QPixmap)
self.assertFalse(data.isNull())
# Test saving the file
tempfile = self.tempfile('temp_{}.png'.format(mode))
data.save(tempfile)
if __name__ == '__main__':
unittest.main()
| 25.5 | 64 | 0.666667 | 77 | 714 | 6.012987 | 0.584416 | 0.056156 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001821 | 0.231092 | 714 | 27 | 65 | 26.444444 | 0.84153 | 0.028011 | 0 | 0 | 0 | 0 | 0.041908 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.0625 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d65ef125e43cf3342be46e1656a8eaaba3ec76c9 | 6,483 | py | Python | hw-10/rainwater-hw-10.py | rainwaterone/stat656 | c582fc8c6a55c377e2b57d1f7b10471d625d79db | [
"MIT"
] | null | null | null | hw-10/rainwater-hw-10.py | rainwaterone/stat656 | c582fc8c6a55c377e2b57d1f7b10471d625d79db | [
"MIT"
] | null | null | null | hw-10/rainwater-hw-10.py | rainwaterone/stat656 | c582fc8c6a55c377e2b57d1f7b10471d625d79db | [
"MIT"
] | null | null | null | """
STAT 656 HW-10
@author: Lee Rainwater
@heavy_lifting_by: Dr. Edward Jones
@date: 2020-07-29
"""
import pandas as pd
# Classes provided from AdvancedAnalytics ver 1.25
from AdvancedAnalytics.Text import text_analysis
from AdvancedAnalytics.Text import sentiment_analysis
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
from AdvancedAnalytics.Text import text_plot
def heading(headerstring):
"""
    Centers headerstring on the page, for formatting to stdout.
Parameters
----------
headerstring : string
String that you wish to center.
Returns
-------
    None.
"""
tw = 70 # text width
lead = int(tw/2)-(int(len(headerstring)/2))-1
tail = tw-lead-len(headerstring)-2
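    # worked example (illustrative numbers): tw=70 with a 10-char header gives
    # lead = 35 - 5 - 1 = 29 and tail = 70 - 29 - 10 - 2 = 29, so
    # 29 stars + ' ' + header + ' ' + 29 stars fill the 70-column width exactly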
print('\n' + ('*'*tw))
print(('*'*lead) + ' ' + headerstring + ' ' + ('*'*tail))
print(('*'*tw))
return
heading("READING DATA SOURCE...")
# Set Pandas Columns Width for Excel Columns
pd.set_option('max_colwidth', 32000)
df = pd.read_excel("hotels.xlsx")
text_col = 'Review' #Identify the Data Frame Text Target Column Name
# Check if any text was truncated
pd_width = pd.get_option('max_colwidth')
maxsize = df[text_col].map(len).max() # Maps text_col onto len() and finds max()
n_truncated = (df[text_col].map(len) > pd_width).sum()
print("\nTEXT LENGTH:")
print("{:<17s}{:>6d}".format(" Max. Accepted", pd_width))
print("{:<17s}{:>6d}".format(" Max. Observed", maxsize))
print("{:<17s}{:>6d}".format(" Truncated", n_truncated))
# Initialize TextAnalytics and Sentiment Analysis.
ta = text_analysis(synonyms=None, stop_words=None, pos=False, stem=False)
# n_terms=2 only displays text containing 2 or more sentiment words for
# the list of the highest and lowest sentiment strings
sa = sentiment_analysis(n_terms=2)
heading("CREATING TOKEN COUNT MATRIX...")
# Create Word Frequency by Review Matrix using Custom Sentiment
cv = CountVectorizer(max_df=1.0, min_df=1, max_features=None, \
ngram_range=(1,2), analyzer=sa.analyzer, \
vocabulary=sa.sentiment_word_dic)
stf = cv.fit_transform(df[text_col]) # Return document-term matrix
sterms = cv.get_feature_names() # Map feature indices to feature names
heading("CALCULATE AND STORE SENTIMENT SCORES...")
# Calculate and Store Sentiment Scores into DataFrame "s_score"
s_score = sa.scores(stf, sterms)
n_reviews = s_score.shape[0]
n_sterms = s_score['n_words'].sum()
max_length = df['Review'].apply(len).max()
if n_sterms == 0 or n_reviews == 0:
print("No sentiment terms found.")
p = s_score['n_words'].sum() / n_reviews if n_reviews else 0  # guard against an empty corpus
print('{:-<24s}{:>6d}'.format("\nMaximum Text Length", max_length))
print('{:-<23s}{:>6d}'.format("Total Reviews", n_reviews))
print('{:-<23s}{:>6d}'.format("Total Sentiment Terms", n_sterms))
print('{:-<23s}{:>6.2f}'.format("Avg. Sentiment Terms", p))
# s_score['sentiment'] = s_score['sentiment'].map("{:,.2f}".format)
df = df.join(s_score)
print("\n", df[['hotel', 'sentiment', 'n_words']], "\n")
print(df.groupby(['hotel']).mean())
heading("GENERATING TOTAL WORD CLOUD FOR CORPUS...")
tcv = CountVectorizer(max_df=1.0, min_df=1, max_features=None, \
ngram_range=(1,2), analyzer=ta.analyzer)
tf = tcv.fit_transform(df[text_col])
terms = tcv.get_feature_names()
td = text_plot.term_dic(tf, terms)
text_plot.word_cloud_dic(td, max_words=200)
heading("GENERATING SENTIMENT WORD CLOUD FOR CORPUS...")
corpus_sentiment = {}
n_sw = 0
for i in range(n_reviews):
# Iterate over the terms with nonzero scores."stf" is a sparse matrix
term_list = stf[i].nonzero()[1]
if len(term_list)>0:
for t in np.nditer(term_list):
score = sa.sentiment_dic.get(sterms[t])
            if score is not None:
                n_sw += stf[i,t]
                current_count = corpus_sentiment.get(sterms[t])
                if current_count is None:
                    corpus_sentiment[sterms[t]] = stf[i,t]
                else:
                    corpus_sentiment[sterms[t]] += stf[i,t]
# Word cloud for the Sentiment Words found in the Corpus
text_plot.word_cloud_dic(corpus_sentiment, max_words=200)
n_usw = len(corpus_sentiment)
print("\nSENTIMENT TERMS")
print("------------------")
print("{:.<10s}{:>8d}".format("Unique",n_usw))
print("{:.<10s}{:>8d}".format("Total", n_sw ))
print("------------------")
heading("GENERATING TOTAL WORD CLOUD FOR BELLAGIO...")
tcv = CountVectorizer(max_df=1.0, min_df=1, max_features=None, \
ngram_range=(1,2), analyzer=ta.analyzer)
tf = tcv.fit_transform(df[df['hotel']=='Bellagio'][text_col])
terms = tcv.get_feature_names()
td = text_plot.term_dic(tf, terms)
text_plot.word_cloud_dic(td, max_words=200)
heading("GENERATING SENTIMENT WORD CLOUD FOR BELLAGIO...")
bcv = CountVectorizer(max_df=1.0, min_df=1, max_features=None, \
ngram_range=(1,2), analyzer=sa.analyzer, \
vocabulary=sa.sentiment_word_dic)
bstf = bcv.fit_transform(df[df['hotel']=='Bellagio'][text_col]) # Return document-term matrix
bsterms = bcv.get_feature_names() # Map feature indices to feature names
heading("CALCULATE AND STORE SENTIMENT SCORES FOR BELLAGIO...")
# Calculate and Store Sentiment Scores into DataFrame "s_score"
bs_score = sa.scores(bstf, bsterms)
bn_reviews = bs_score.shape[0]
bn_sterms = bs_score['n_words'].sum()
max_length = df['Review'].apply(len).max()
if bn_sterms == 0 or bn_reviews == 0:
print("No sentiment terms found.")
corpus_sentiment = {}
n_sw = 0
for i in range(bn_reviews):
# Iterate over the terms with nonzero scores."stf" is a sparse matrix
term_list = bstf[i].nonzero()[1]
if len(term_list)>0:
for t in np.nditer(term_list):
score = sa.sentiment_dic.get(bsterms[t])
            if score is not None:
                n_sw += bstf[i,t]
                current_count = corpus_sentiment.get(bsterms[t])
                if current_count is None:
                    corpus_sentiment[bsterms[t]] = bstf[i,t]
                else:
                    corpus_sentiment[bsterms[t]] += bstf[i,t]
# Word cloud for the Sentiment Words found in the Corpus
text_plot.word_cloud_dic(corpus_sentiment, max_words=200)
n_usw = len(corpus_sentiment)
print("\nBELLAGIO SENTIMENT TERMS")
print("------------------")
print("{:.<10s}{:>8d}".format("Unique",n_usw))
print("{:.<10s}{:>8d}".format("Total", n_sw ))
print("------------------")
| 37.912281 | 100 | 0.653555 | 919 | 6,483 | 4.45049 | 0.231774 | 0.04401 | 0.017604 | 0.020538 | 0.587775 | 0.534474 | 0.497066 | 0.429095 | 0.416626 | 0.377017 | 0 | 0.020691 | 0.187413 | 6,483 | 170 | 101 | 38.135294 | 0.755695 | 0.197902 | 0 | 0.372881 | 0 | 0 | 0.181393 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008475 | false | 0 | 0.050847 | 0 | 0.067797 | 0.211864 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d661b7759b3dc688b6b6db70c85b0949bed0d166 | 10,520 | py | Python | dist/geoapi/data/queries.py | tinyperegrine/geoapi | 63d50427adef7b8db727f2942b39791bdae32a4c | [
"MIT"
] | 2 | 2021-05-24T22:00:30.000Z | 2021-07-26T07:39:23.000Z | src/geoapi/data/queries.py | tinyperegrine/geoapi | 63d50427adef7b8db727f2942b39791bdae32a4c | [
"MIT"
] | 5 | 2021-03-19T03:42:09.000Z | 2022-03-11T23:59:20.000Z | src/geoapi/data/queries.py | tinyperegrine/geoapi | 63d50427adef7b8db727f2942b39791bdae32a4c | [
"MIT"
] | null | null | null | """Query Object for all read-only queries to the Real Property table
"""
import os
import logging
from time import time
from typing import List
import asyncio
import aiohttp
import aiofiles
import databases
from PIL import Image
import sqlalchemy
from sqlalchemy.sql import select, func
import geoapi.common.spatial_utils as spatial_utils
import geoapi.common.decorators as decorators
from geoapi.common.exceptions import ResourceNotFoundError, ResourceMissingDataError
from geoapi.common.json_models import RealPropertyOut, GeometryAndDistanceIn, StatisticsOut
class RealPropertyQueries():
"""Repository for all DB Query Operations.
Different from repository for all transaction operations."""
def __init__(self, connection: databases.Database,
real_property_table: sqlalchemy.Table):
self._connection = connection
self._real_property_table = real_property_table
self.logger = logging.getLogger(__name__)
async def get_all(self) -> List[RealPropertyOut]:
"""Gets all the records
TODO: add paging
Raises:
ResourceNotFoundError: if the table is empty
Returns:
List[RealPropertyOut]: List of outgoing geojson based objects
"""
select_query = self._real_property_table.select()
db_rows = await self._connection.fetch_all(select_query)
if not db_rows:
msg = "No Properties found!"
self.logger.error(msg)
raise ResourceNotFoundError(msg)
out_list = [RealPropertyOut.from_db(db_row) for db_row in db_rows]
return out_list
async def get(self, property_id: str) -> RealPropertyOut:
"""Gets a single record
Args:
property_id (str): property id to search for
Raises:
ResourceNotFoundError: if property id not found
Returns:
RealPropertyOut: Outgoing geojson based object
"""
select_query = self._real_property_table.select().where(
self._real_property_table.c.id == property_id)
db_row = await self._connection.fetch_one(select_query)
if not db_row:
msg = "Property not found - id: {}".format(property_id)
self.logger.error(msg)
raise ResourceNotFoundError(msg)
return RealPropertyOut.from_db(db_row)
async def find(self, geometry_distance: GeometryAndDistanceIn) -> List[str]:
"""Searches for properties within a given distance of a geometry
Args:
geometry_distance (GeometryAndDistanceIn): geojson based geometry and distance in object
Raises:
ResourceNotFoundError: if no properties found
Returns:
List[str]: list of property ids
"""
geoalchemy_element_buffered = spatial_utils.buffer(
geometry_distance.location_geo, geometry_distance.distance)
select_query = select([self._real_property_table.c.id]).where(
self._real_property_table.c.geocode_geo.ST_Intersects(
geoalchemy_element_buffered))
db_rows = await self._connection.fetch_all(select_query)
if not db_rows:
msg = "No Properties found!"
self.logger.error(msg)
raise ResourceNotFoundError(msg)
out_list = [db_row["id"] for db_row in db_rows]
return out_list
# helpers for parallel running of queries
async def _query_parcels(self, select_query_parcels):
parcel_area = await self._connection.fetch_val(select_query_parcels)
return parcel_area
async def _query_buildings(self, select_query_buildings):
db_rows = await self._connection.fetch_all(select_query_buildings)
return db_rows
async def statistics(self, property_id: str, distance: int) -> StatisticsOut:
"""Gets statistics for data near a property
TODO: replace the property geocode with a redis geocode cache
and maintain db sync with postgres with a redis queue. Also, refactor
to reduce 'too many locals'
Args:
property_id (str): property id
distance (int): search radius in meters
Raises:
ResourceNotFoundError: if no property found for the given property id
ResourceMissingDataError: if given property does not have geometry info to locate itself
Returns:
StatisticsOut: A summary statistics outgoing object
"""
# get property geocode
select_query = select([
self._real_property_table.c.geocode_geo
]).where(self._real_property_table.c.id == property_id)
db_row = await self._connection.fetch_one(select_query)
if db_row is None:
msg = "Property not found - id: {}".format(property_id)
self.logger.error(msg)
raise ResourceNotFoundError(msg)
if db_row["geocode_geo"] is None:
msg = "Property missing geocode_geo data - id: {}".format(
property_id)
self.logger.error(msg)
raise ResourceMissingDataError(msg)
# get zone - buffer around property
geojson_obj = spatial_utils.to_geo_json(db_row["geocode_geo"])
geoalchemy_element_buffered = spatial_utils.buffer(
geojson_obj, distance)
area_distance = spatial_utils.area_distance(geoalchemy_element_buffered,
None)
zone_area = area_distance['area']
# get parcel area
select_query_parcels = select(
[func.sum(self._real_property_table.c.parcel_geo.ST_Area())]).where(
self._real_property_table.c.parcel_geo.ST_Intersects(
geoalchemy_element_buffered))
# get buildings
select_query_buildings = select(
[self._real_property_table.c.building_geo]).where(
self._real_property_table.c.building_geo.ST_Intersects(
geoalchemy_element_buffered))
# run queries in parallel
parcel_area, db_rows = await asyncio.gather(
self._query_parcels(select_query_parcels),
self._query_buildings(select_query_buildings),
)
# get parcel area result
if not parcel_area:
parcel_area = 0
parcel_area = round(parcel_area)
# get distance and area for buildings
if db_rows:
area_distance_list = [
spatial_utils.area_distance(db_row["building_geo"], geojson_obj)
for db_row in db_rows
]
building_area = sum(
[area_distance['area'] for area_distance in area_distance_list])
else:
area_distance_list = []
building_area = 0
buildings_area_distance = area_distance_list
# get final zone density
zone_density_percentage = 100 * building_area / zone_area
if zone_density_percentage > 100.00:
zone_density_percentage = 100.00
zone_density = round(zone_density_percentage, 2)
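        # e.g. (illustrative numbers): 500 m^2 of buildings in a 2000 m^2 zone -> 25.0 % density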
statistics_out = StatisticsOut(
parcel_area=parcel_area,
buildings_area_distance=buildings_area_distance,
zone_area=zone_area,
zone_density=zone_density)
return statistics_out
@decorators.logtime_async(1)
async def get_image(self, property_id) -> str:
"""Gets an image based on url from the database
Args:
property_id (str): property id
Raises:
ResourceNotFoundError: if property id not found
ResourceMissingDataError: if property does not have a url for image
Returns:
str: image file name/path
"""
# get property image url
select_query = select([
self._real_property_table.c.image_url
]).where(self._real_property_table.c.id == property_id)
db_row = await self._connection.fetch_one(select_query)
if db_row is None:
msg = "Property not found - id: {}".format(property_id)
self.logger.error(msg)
raise ResourceNotFoundError(msg)
if db_row["image_url"] is None:
msg = "Property missing image url - id: {}".format(property_id)
self.logger.error(msg)
raise ResourceMissingDataError(msg)
# get image
        # temporary placeholder for progress reporting; proper logging to be added
        # timeouts on URL not found, badly formed URLs, etc. are not handled
total_size = 0
start = time()
print_size = 0.0
file_name = os.path.join('geoapi/static/tmp',
os.path.basename(db_row["image_url"]))
timeout = aiohttp.ClientTimeout(
total=5 * 60, connect=30) # could put in config eventually
try:
async with aiohttp.ClientSession(timeout=timeout) as session:
async with session.get(db_row["image_url"]) as r:
async with aiofiles.open(file_name, 'wb') as fd:
self.logger.info('file download started: %s', db_row["image_url"])
while True:
chunk = await r.content.read(16144)
if not chunk:
break
await fd.write(chunk)
total_size += len(chunk)
print_size += len(chunk)
if (print_size / (1024 * 1024)
) > 100: # print every 100MB download
msg = f'{time() - start:0.2f}s, downloaded: {total_size / (1024 * 1024):0.0f}MB'
self.logger.info(msg)
print_size = (print_size / (1024 * 1024)) - 100
self.logger.info('file downloaded: %s', file_name)
log_msg = f'total time: {time() - start:0.2f}s, total size: {total_size / (1024 * 1024):0.0f}MB'
self.logger.info(log_msg)
# convert to jpeg
file_name_jpg = os.path.splitext(file_name)[0] + ".jpg"
img = Image.open(file_name)
img.save(file_name_jpg, "JPEG", quality=100)
except aiohttp.client_exceptions.ServerTimeoutError as ste:
self.logger.error('Time out: %s', str(ste))
raise
return file_name_jpg
| 39.400749 | 120 | 0.616825 | 1,206 | 10,520 | 5.150912 | 0.201493 | 0.032196 | 0.046523 | 0.047328 | 0.341919 | 0.314392 | 0.264488 | 0.208789 | 0.17933 | 0.162911 | 0 | 0.011437 | 0.310171 | 10,520 | 266 | 121 | 39.548872 | 0.844564 | 0.060171 | 0 | 0.240741 | 0 | 0.012346 | 0.063832 | 0 | 0 | 0 | 0 | 0.007519 | 0 | 1 | 0.006173 | false | 0 | 0.092593 | 0 | 0.148148 | 0.024691 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6640fe2dcd11b0460e228747652381b73af179f | 1,685 | py | Python | src/api/proxy/proxy.py | HaoJiangGuo/fp-server | 9c00b8f0ee64049eb9f214c3efe1fdee977542a6 | [
"MIT"
] | 2 | 2018-08-17T06:56:21.000Z | 2019-01-08T03:10:32.000Z | src/api/proxy/proxy.py | HaoJiangGuo/fp-server | 9c00b8f0ee64049eb9f214c3efe1fdee977542a6 | [
"MIT"
] | null | null | null | src/api/proxy/proxy.py | HaoJiangGuo/fp-server | 9c00b8f0ee64049eb9f214c3efe1fdee977542a6 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
API for proxy
"""
from core import exceptions
from core.web import WebHandler
from service.proxy.serializers import ProxySerializer
from service.proxy.proxy import proxy_srv
from utils import log as logger
from utils.routes import route
def return_developing():
raise exceptions.NotFound(msg=exceptions.ERR_MSG_IS_DEVELOPING)
@route(r'/api/proxy/$')
class GetProxyHandler(WebHandler):
"""
proxy api
"""
async def get(self, *args, **kwargs):
"""
get proxies
"""
count = int(self.get_param('count', 1))
scheme = self.get_param('scheme')
if scheme:
scheme = scheme.lower()
anonymity = self.get_param('anonymity')
spec = dict(count=count, scheme=scheme, anonymity=anonymity)
_items = await proxy_srv.query(spec)
items = []
for i in _items:
s = ProxySerializer(i)
items.append(s.to_representation())
data = {
"count": len(items),
"detail": items,
}
# sort_by_speed = self.get_param('sort_by_speed', 0)
self.do_success(data)
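        # example round trip (hypothetical values):
        #   GET /api/proxy/?count=2&scheme=http&anonymity=high
        #   -> data == {"count": 2, "detail": [<two serialized proxies>]}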
async def post(self, *args, **kwargs):
""" create proxies
"""
datas = self.get_body()
logger.debug('datas:', datas, caller=self)
self.do_success({'ok': 1}, 'todo')
async def delete(self, *args, **kwargs):
""" delete proxies
"""
self.do_success({'ok': 1}, 'todo')
@route(r'/api/proxy/report/$')
class ReportProxyHandler(WebHandler):
async def post(self, *args, **kwargs):
self.do_success({'ok': 1}, 'developing..')
| 23.402778 | 68 | 0.591691 | 199 | 1,685 | 4.899497 | 0.41206 | 0.035897 | 0.057436 | 0.046154 | 0.110769 | 0.094359 | 0 | 0 | 0 | 0 | 0 | 0.004858 | 0.267062 | 1,685 | 71 | 69 | 23.732394 | 0.784615 | 0.069436 | 0 | 0.108108 | 0 | 0 | 0.064917 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.162162 | 0 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d66723dcef2bb7193246b5983ca2285df66c7c28 | 2,058 | py | Python | projects/crawler_for_prodect_category/category_output/to_html.py | ice-melt/python-lib | 345e34fff7386d91acbb03a01fd4127c5dfed037 | [
"MIT"
] | null | null | null | projects/crawler_for_prodect_category/category_output/to_html.py | ice-melt/python-lib | 345e34fff7386d91acbb03a01fd4127c5dfed037 | [
"MIT"
] | null | null | null | projects/crawler_for_prodect_category/category_output/to_html.py | ice-melt/python-lib | 345e34fff7386d91acbb03a01fd4127c5dfed037 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from projects.crawler_for_prodect_category.category_output import output_utils
import codecs
Logger = output_utils.Logger
def output(filename, datas):
"""
    Export the crawled data to an HTML file
:return:
"""
Logger.info('Output to html file, please wait ...')
# object_serialize('object.pkl',self.datas)
# categories , description,url
with codecs.open(output_utils.get_filename(filename, 'html'), 'w', 'utf-8') as file:
file.write('<html>\n')
file.write('<head>\n')
file.write('<meta charset="utf-8"/>\n')
file.write('<style>\n')
file.write('table{font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;'
'width:100%;border-collapse:collapse;}\n')
file.write('table th,table td{font-size:1em;border:1px solid #98bf21;padding:3px 7px 2px 7px;}\n')
file.write('table th{font-size:1.1em;background-color:#A7C942;color:#ffffff;'
'padding:5px 7px 4px 7px;text-align:left;}\n')
file.write('table tr.alt td{background-color:#EAF2D3;color:#000000;}\n')
file.write('a:link{text-decoration: none;}\n')
file.write('a:visited{text-decoration: none;}\n')
file.write('a:hover{text-decoration: underline;}\n')
file.write('</style>\n')
file.write('</head>\n')
file.write('<body>\n')
file.write('<table>\n')
        # write the header row
file.write('<tr><th>Sequence</th><th>Product Categories</th>'
'<th>Product SubCategories</th><th>Description</th></tr>\n')
for i in range(len(datas)):
key = datas[i]
clazz = '' if i % 2 == 0 else ' class="alt" '
file.write('<tr %s><td>%05d</td><td>%s</td><td>%s</td>'
'<td><a target="_blank" href="%s">%s</a></td></tr>\n'
% (clazz, i + 1, key['categories'], key['subcategories'], key['url'], key['description']))
file.write('</table>\n')
file.write('</body>\n')
file.write('</html>\n')
Logger.info(' Save completed !')
| 43.787234 | 113 | 0.573372 | 274 | 2,058 | 4.270073 | 0.412409 | 0.153846 | 0.136752 | 0.064103 | 0.198291 | 0.157265 | 0.157265 | 0 | 0 | 0 | 0 | 0.024528 | 0.227405 | 2,058 | 46 | 114 | 44.73913 | 0.711321 | 0.056365 | 0 | 0 | 0 | 0.114286 | 0.459093 | 0.227202 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.057143 | 0 | 0.085714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d667529945c63e9ee84e1cddf5e8de3b084ac4c0 | 5,567 | py | Python | hsv.py | FarinaMatteo/siv-project | fbac7c7c114db51d9fdcf90aba296906abdf91af | [
"MIT"
] | null | null | null | hsv.py | FarinaMatteo/siv-project | fbac7c7c114db51d9fdcf90aba296906abdf91af | [
"MIT"
] | null | null | null | hsv.py | FarinaMatteo/siv-project | fbac7c7c114db51d9fdcf90aba296906abdf91af | [
"MIT"
] | 1 | 2021-04-13T11:22:06.000Z | 2021-04-13T11:22:06.000Z | """
Background vs Foreground Image segmentation. The goal is to produce a segmentation map that imitates
video-call tools like the ones implemented in Google Meet or Zoom, without using Deep Learning- or Machine Learning-
based techniques.
This script does the following:
- builds a background model using the first 3s of the video, acting on the HSV colorspace;
- performs frame differencing in the HSV domain;
- runs LP filtering (median-filter) on the Saturation difference;
- uses Otsu's technique to threshold the saturation and the brightness difference;
- combines the saturation and the brightness masks to produce the foreground mask;
- runs morphological operators on the mask (closing and dilation) with a 3x5 ellipse (resembles the shape of a human face);
- uses the foreground mask, the current video stream and a pre-defined background picture to produce the final output.
Authors: M. Farina, F. Diprima - University of Trento
Last Update (dd/mm/yyyy): 09/04/2021
"""
import os
import cv2
import time
import numpy as np
from helpers.variables import *
from helpers.utils import build_argparser, codec_from_ext, make_folder, recursive_clean
def run(**kwargs):
"""
Main loop for background removal.
"""
time_lst = [0]
# setup an image for the background
bg_pic_path = kwargs['background']
bg_pic = cv2.imread(bg_pic_path)
bg_pic = cv2.resize(bg_pic, dst_size)
# setup the video writer if needed
writer = None
if kwargs["output_video"]:
codec = codec_from_ext(kwargs["output_video"])
writer = cv2.VideoWriter(kwargs["output_video"], codec, fps, frameSize=(width, height))
# create the output frame folder if needed
if kwargs["frame_folder"]:
if kwargs["refresh"]: recursive_clean(kwargs["frame_folder"])
make_folder(kwargs["frame_folder"])
# initialize background
hsv_bg = np.zeros(dst_shape_multi, dtype='uint16')
# start looping through frames
frame_count = 0
if cap.isOpened():
while cap.isOpened():
# retrieve the current frame and exit if needed
ret, frame = cap.read()
if not ret:
break
# otherwise, perform basic operations on the current frame
frame = cv2.resize(frame, dst_size)
hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hsv_frame_blurred = cv2.GaussianBlur(hsv_frame, gauss_kernel, sigmaX=2, sigmaY=2)
# build a model for the background during the first frames
if frame_count < bg_frame_limit:
hsv_bg = hsv_bg.copy() + hsv_frame_blurred
if frame_count == bg_frame_limit-1:
hsv_bg = np.uint8(hsv_bg.copy() / bg_frame_limit)
# when the bg has been modeled, segment the fg
else:
time_in = time.perf_counter()
diff = cv2.absdiff(hsv_frame_blurred, hsv_bg)
h_diff, s_diff, v_diff = cv2.split(diff)
# automatic global thresholding with Otsu's technique
r1, h_diff_thresh = cv2.threshold(h_diff, 1, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
r2, s_diff_thresh = cv2.threshold(s_diff, 1, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
r3, v_diff_thresh = cv2.threshold(v_diff, 1, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# take into account contribution of saturation and value (aka 'brightness')
# clean the saturation mask beforehand, it usually is more unstable
s_diff_thresh_median = cv2.medianBlur(s_diff_thresh, ksize=median_ksize)
fg_mask = s_diff_thresh_median + v_diff_thresh
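                # note: both masks hold 0/255 uint8 values; numpy '+' wraps (255+255 -> 254),
                # but any nonzero pixel still counts as foreground for bitwise_and below,
                # so the sum effectively behaves like a bitwise OR of the two masks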
fg_mask_closed = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel=kernel, iterations=10)
fg_mask_dilated = cv2.dilate(fg_mask_closed, kernel=kernel)
# compute the actual foreground and background
foreground = cv2.bitwise_and(frame, frame, mask=fg_mask_dilated)
background = bg_pic - cv2.bitwise_and(bg_pic, bg_pic, mask=fg_mask_dilated)
# ... and add them to generate the output image
out = cv2.add(foreground, background)
# display the output and the masks
cv2.imshow("Output", out)
# save frames on the fs if the user requested it
if kwargs["frame_folder"] and frame_count % kwargs["throttle"] == 0:
cv2.imwrite(os.path.join(kwargs["frame_folder"], "{}.jpg".format(frame_count - bg_frame_limit + 1)), out)
# write the video on the fs if the user requested it
if writer:
writer.write(cv2.resize(out, dsize=(width, height)))
# quit if needed
if cv2.waitKey(ms) & 0xFF==ord('q'):
break
# keep track of time
time_out = time.perf_counter()
time_diff = time_out - time_in
time_lst.append(time_diff)
frame_count += 1
print("Average Time x Frame: ", round(np.sum(np.array(time_lst))/len(time_lst), 2))
cv2.destroyAllWindows()
cap.release()
if writer:
writer.release()
if __name__ == "__main__":
parser = build_argparser()
kwargs = vars(parser.parse_args())
run(**kwargs) | 43.492188 | 126 | 0.623496 | 718 | 5,567 | 4.657382 | 0.380223 | 0.011962 | 0.025419 | 0.015251 | 0.088517 | 0.071172 | 0.049641 | 0.049641 | 0.049641 | 0 | 0 | 0.018665 | 0.297467 | 5,567 | 128 | 127 | 43.492188 | 0.836359 | 0.331238 | 0 | 0.060606 | 0 | 0 | 0.046208 | 0 | 0 | 0 | 0.001087 | 0 | 0 | 1 | 0.015152 | false | 0 | 0.090909 | 0 | 0.106061 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d66b1cd60139d9cc3f50ee9f63aec7859add227e | 6,967 | py | Python | cuchem/cuchem/wf/cluster/gpurandomprojection.py | dorukozturk/cheminformatics | c0fa66dd4f4e6650d7286ae2be533c66b7a2b270 | [
"Apache-2.0"
] | null | null | null | cuchem/cuchem/wf/cluster/gpurandomprojection.py | dorukozturk/cheminformatics | c0fa66dd4f4e6650d7286ae2be533c66b7a2b270 | [
"Apache-2.0"
] | null | null | null | cuchem/cuchem/wf/cluster/gpurandomprojection.py | dorukozturk/cheminformatics | c0fa66dd4f4e6650d7286ae2be533c66b7a2b270 | [
"Apache-2.0"
] | null | null | null | #!/opt/conda/envs/rapids/bin/python3
#
# Copyright (c) 2020, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from functools import singledispatch
from typing import List
import cudf
import cupy
import dask
import dask_cudf
import pandas
from cuchemcommon.context import Context
from cuchemcommon.data import ClusterWfDAO
from cuchemcommon.data.cluster_wf import ChemblClusterWfDao
from cuchemcommon.fingerprint import MorganFingerprint
from cuchemcommon.utils.logger import MetricsLogger
from cuchemcommon.utils.singleton import Singleton
from cuml import SparseRandomProjection, KMeans
from cuchem.utils.metrics import batched_silhouette_scores
from cuchem.wf.cluster import BaseClusterWorkflow
logger = logging.getLogger(__name__)
@singledispatch
def _gpu_random_proj_wrapper(embedding, self):
return NotImplemented
@_gpu_random_proj_wrapper.register(dask.dataframe.core.DataFrame)
def _(embedding, self):
logger.info('Converting from dask.dataframe.core.DataFrame...')
embedding = embedding.compute()
return _gpu_random_proj_wrapper(embedding, self)
@_gpu_random_proj_wrapper.register(dask_cudf.core.DataFrame)
def _(embedding, self):
logger.info('Converting from dask_cudf.core.DataFrame...')
embedding = embedding.compute()
return _gpu_random_proj_wrapper(embedding, self)
@_gpu_random_proj_wrapper.register(pandas.DataFrame)
def _(embedding, self):
logger.info('Converting from pandas.DataFrame...')
embedding = cudf.from_pandas(embedding)
return _gpu_random_proj_wrapper(embedding, self)
@_gpu_random_proj_wrapper.register(cudf.DataFrame)
def _(embedding, self):
return self._cluster(embedding)
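# Dispatch summary (a reading of the registrations above, not new behavior):
# whatever frame type arrives (dask, dask_cudf, pandas) is converted down to a
# cudf.DataFrame and re-dispatched until the cudf branch runs self._cluster;
# supporting another frame type only needs one more
# @_gpu_random_proj_wrapper.register(...) function.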
class GpuWorkflowRandomProjection(BaseClusterWorkflow, metaclass=Singleton):
def __init__(self,
n_molecules: int = None,
dao: ClusterWfDAO = ChemblClusterWfDao(MorganFingerprint),
n_clusters=7,
seed=0):
super(GpuWorkflowRandomProjection, self).__init__()
self.dao = dao
self.n_molecules = n_molecules
self.n_clusters = n_clusters
self.pca = None
self.seed = seed
self.n_silhouette = 500000
self.context = Context()
self.srp_embedding = SparseRandomProjection(n_components=2)
def rand_jitter(self, arr):
"""
Introduces random displacements to spread the points
"""
stdev = .023 * cupy.subtract(cupy.max(arr), cupy.min(arr))
for i in range(arr.shape[1]):
rnd = cupy.multiply(cupy.random.randn(len(arr)), stdev)
arr[:, i] = cupy.add(arr[:, i], rnd)
return arr
def _cluster(self, embedding):
logger.info('Computing cluster...')
embedding = embedding.reset_index()
n_molecules = embedding.shape[0]
# Before reclustering remove all columns that may interfere
embedding, prop_series = self._remove_non_numerics(embedding)
with MetricsLogger('random_proj', n_molecules) as ml:
srp = self.srp_embedding.fit_transform(embedding.values)
ml.metric_name = 'spearman_rho'
ml.metric_func = self._compute_spearman_rho
ml.metric_func_args = (embedding, embedding, srp)
with MetricsLogger('kmeans', n_molecules) as ml:
kmeans_cuml = KMeans(n_clusters=self.n_clusters)
kmeans_cuml.fit(srp)
kmeans_labels = kmeans_cuml.predict(srp)
ml.metric_name = 'silhouette_score'
ml.metric_func = batched_silhouette_scores
ml.metric_func_kwargs = {}
ml.metric_func_args = (None, None)
if self.context.is_benchmark:
(srp_sample, kmeans_labels_sample), _ = self._random_sample_from_arrays(
srp, kmeans_labels, n_samples=self.n_silhouette)
ml.metric_func_args = (srp_sample, kmeans_labels_sample)
        # Add back the columns required for plotting and for correlating data
        # between re-clusterings
srp = self.rand_jitter(srp)
embedding['cluster'] = kmeans_labels
embedding['x'] = srp[:, 0]
embedding['y'] = srp[:, 1]
# Add back the prop columns
for col in prop_series.keys():
embedding[col] = prop_series[col]
return embedding
def cluster(self, df_mol_embedding=None):
logger.info("Executing GPU workflow...")
if df_mol_embedding is None:
self.n_molecules = self.context.n_molecule
df_mol_embedding = self.dao.fetch_molecular_embedding(
self.n_molecules,
cache_directory=self.context.cache_directory)
df_mol_embedding = df_mol_embedding.persist()
self.df_embedding = _gpu_random_proj_wrapper(df_mol_embedding, self)
return self.df_embedding
def recluster(self,
filter_column=None,
filter_values=None,
n_clusters=None):
if filter_values is not None:
self.df_embedding['filter_col'] = self.df_embedding[filter_column].isin(filter_values)
self.df_embedding = self.df_embedding.query('filter_col == True')
if n_clusters is not None:
self.n_clusters = n_clusters
self.df_embedding = _gpu_random_proj_wrapper(self.df_embedding, self)
return self.df_embedding
def add_molecules(self, chemblids: List):
chem_mol_map = {row[0]: row[1] for row in self.dao.fetch_id_from_chembl(chemblids)}
molregnos = list(chem_mol_map.keys())
self.df_embedding['id_exists'] = self.df_embedding['id'].isin(molregnos)
ldf = self.df_embedding.query('id_exists == True')
if hasattr(ldf, 'compute'):
ldf = ldf.compute()
self.df_embedding = self.df_embedding.drop(['id_exists'], axis=1)
missing_mol = set(molregnos).difference(ldf['id'].to_array())
chem_mol_map = {id: chem_mol_map[id] for id in missing_mol}
missing_molregno = chem_mol_map.keys()
if len(missing_molregno) > 0:
new_fingerprints = self.dao.fetch_molecular_embedding_by_id(missing_molregno)
new_fingerprints = new_fingerprints.compute()
self.df_embedding = self._remove_ui_columns(self.df_embedding)
self.df_embedding = self.df_embedding.append(new_fingerprints)
return chem_mol_map, molregnos, self.df_embedding
| 35.912371 | 98 | 0.68724 | 860 | 6,967 | 5.317442 | 0.275581 | 0.026241 | 0.062322 | 0.043735 | 0.218456 | 0.176252 | 0.127706 | 0.096217 | 0.085502 | 0.085502 | 0 | 0.005373 | 0.225348 | 6,967 | 193 | 99 | 36.098446 | 0.841949 | 0.117985 | 0 | 0.102362 | 0 | 0 | 0.048992 | 0.009667 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086614 | false | 0 | 0.133858 | 0.015748 | 0.307087 | 0.03937 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d66c0ed5c84f564a500b3ce340b852c977ab112f | 2,176 | py | Python | pkg/azure/resource_group.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | 10 | 2021-01-02T11:59:46.000Z | 2021-06-14T04:38:45.000Z | pkg/azure/resource_group.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | 12 | 2021-01-06T07:02:22.000Z | 2021-03-11T06:34:07.000Z | pkg/azure/resource_group.py | NihilBabu/xmigrate | c33d0b506a86a0ebef22df8ce299cd84f560d034 | [
"Apache-2.0"
] | 3 | 2021-01-10T12:33:52.000Z | 2021-04-12T14:29:13.000Z | # Import the needed management objects from the libraries. The azure.common library
# is installed automatically with the other libraries.
from azure.common.client_factory import get_client_from_cli_profile
from azure.mgmt.resource import ResourceManagementClient
from utils.dbconn import *
from utils.logger import *
from model.project import Project
import string, random
from azure.common.credentials import ServicePrincipalCredentials
# Provision the resource group.
async def create_rg(project):
con = create_db_con()
try:
if Project.objects(name=project)[0]['resource_group']:
if Project.objects(name=project)[0]['resource_group_created']:
                con.close()  # close the connection before the early return
                return True
except Exception as e:
print("Reaching Project document failed: "+repr(e))
logger("Reaching Project document failed: "+repr(e),"warning")
else:
rg_location = Project.objects(name=project)[0]['location']
rg_name = Project.objects(name=project)[0]['resource_group']
try:
client_id = Project.objects(name=project)[0]['client_id']
secret = Project.objects(name=project)[0]['secret']
tenant_id = Project.objects(name=project)[0]['tenant_id']
subscription_id = Project.objects(name=project)[0]['subscription_id']
creds = ServicePrincipalCredentials(client_id=client_id, secret=secret, tenant=tenant_id)
resource_client = ResourceManagementClient(creds,subscription_id)
print("Provisioning a resource group...some operations might take a minute or two.")
rg_result = resource_client.resource_groups.create_or_update(
rg_name, {"location": rg_location})
print(
"Provisioned resource group"+ rg_result.name+" in the "+rg_result.location+" region")
Project.objects(name=project).update(resource_group=rg_result.name, resource_group_created=True)
con.close()
return True
        except Exception as e:
            print("Resource group creation failed "+str(e))
            logger("Resource group creation failed: "+repr(e), "warning")
            con.close()  # close the DB connection on the failure path too
            return False
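# Hypothetical usage sketch (not part of the original module): the coroutine is
# driven from the caller's event loop, e.g.
#   import asyncio
#   asyncio.run(create_rg("my-project"))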
| 50.604651 | 108 | 0.68704 | 259 | 2,176 | 5.625483 | 0.320463 | 0.089224 | 0.111187 | 0.154427 | 0.302677 | 0.23267 | 0.128346 | 0.05628 | 0 | 0 | 0 | 0.004681 | 0.214614 | 2,176 | 42 | 109 | 51.809524 | 0.847864 | 0.075368 | 0 | 0.157895 | 0 | 0 | 0.182271 | 0.010956 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.184211 | 0 | 0.263158 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d670c4735ca674e296208c80467697f931ec147e | 2,020 | py | Python | docs/examples/arch/full_rhino.py | GeneKao/compas_assembly | 92fde9cd3948c1b9bb41b4ea7fc866392905182d | [
"MIT"
] | null | null | null | docs/examples/arch/full_rhino.py | GeneKao/compas_assembly | 92fde9cd3948c1b9bb41b4ea7fc866392905182d | [
"MIT"
] | null | null | null | docs/examples/arch/full_rhino.py | GeneKao/compas_assembly | 92fde9cd3948c1b9bb41b4ea7fc866392905182d | [
"MIT"
] | null | null | null | import os
from compas_assembly.datastructures import Assembly
from compas_assembly.geometry import Arch
from compas_assembly.rhino import AssemblyArtist
from compas.rpc import Proxy
proxy = Proxy()
proxy.restart_server()
try:
HERE = os.path.dirname(__file__)
except NameError:
HERE = os.getcwd()
DATA = os.path.join(HERE, '../../../data')
FILE = os.path.join(DATA, 'arch.json')
# ==============================================================================
# Assembly
# ==============================================================================
rise = 5
span = 10
depth = 0.5
thickness = 0.7
n = 40
arch = Arch(rise, span, thickness, depth, n)
assembly = Assembly.from_geometry(arch)
assembly.node_attribute(0, 'is_support', True)
assembly.node_attribute(n - 1, 'is_support', True)
# ==============================================================================
# Identify the interfaces
# ==============================================================================
proxy.package = 'compas_assembly.datastructures'
# make proxy methods into configurable objects
# with __call__ for execution
# store the method objects in a dict of callables
assembly = proxy.assembly_interfaces_numpy(assembly, tmax=0.02)
# ==============================================================================
# Compute interface forces
# ==============================================================================
proxy.package = 'compas_rbe.equilibrium'
assembly = proxy.compute_interface_forces_cvx(assembly, solver='CPLEX')
# ==============================================================================
# Visualize
# ==============================================================================
artist = AssemblyArtist(assembly, layer="Arch")
artist.clear_layer()
artist.draw_nodes(color={key: (255, 0, 0) for key in assembly.nodes_where({'is_support': True})})
artist.draw_edges()
artist.draw_blocks()
artist.draw_interfaces()
artist.draw_resultants(scale=0.1)
# artist.color_interfaces(mode=1)
| 29.275362 | 97 | 0.517327 | 193 | 2,020 | 5.243523 | 0.450777 | 0.049407 | 0.05336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011976 | 0.090594 | 2,020 | 68 | 98 | 29.705882 | 0.538922 | 0.421782 | 0 | 0 | 0 | 0 | 0.098176 | 0.045178 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.151515 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d67c213b9a36706b9d0346fd0b72cfdc78942fe2 | 855 | py | Python | palabox/processing/text/properly_cut_text.py | marcoboucas/palabox | d6e937db909daac0f9d3c5dff2309c29b5b68ea8 | [
"MIT"
] | null | null | null | palabox/processing/text/properly_cut_text.py | marcoboucas/palabox | d6e937db909daac0f9d3c5dff2309c29b5b68ea8 | [
"MIT"
] | null | null | null | palabox/processing/text/properly_cut_text.py | marcoboucas/palabox | d6e937db909daac0f9d3c5dff2309c29b5b68ea8 | [
"MIT"
] | null | null | null | """Cut properly some text."""
import re
END_OF_SENTENCE_CHARACTERS = {".", ";", "!", "?"}
def properly_cut_text(
text: str, start_idx: int, end_idx: int, nbr_before: int = 30, nbr_after: int = 30
) -> str:
"""Properly cut a text around some interval."""
str_length = len(text)
start_idx = max(0, start_idx - nbr_before)
end_idx = end_idx + nbr_after
# Change the end depending on the value
match = re.search(r"\.[^\d]|\?|\!", text[end_idx:], flags=re.IGNORECASE)
if match:
end_idx = match.end() + end_idx
else:
end_idx = str_length
# Change the beginning depending on the value
match = re.search(r"(\.|\?|\!)(?!.*\1)", text[: start_idx - 1], flags=re.IGNORECASE)
if match:
start_idx = match.end() + 1
else:
start_idx = 0
return text[start_idx:end_idx].strip()
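

# Quick illustration (not part of the original module; the sample string and
# "keyword" span are assumptions): the cut is widened towards the nearest
# sentence boundaries on both sides.
if __name__ == "__main__":
    sample = "First sentence. Second one holds a keyword inside! Third sentence?"
    idx = sample.index("keyword")
    print(properly_cut_text(sample, idx, idx + len("keyword")))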
| 27.580645 | 88 | 0.604678 | 124 | 855 | 3.959677 | 0.346774 | 0.09776 | 0.07332 | 0.077393 | 0.232179 | 0.13442 | 0.13442 | 0.13442 | 0 | 0 | 0 | 0.01374 | 0.233918 | 855 | 30 | 89 | 28.5 | 0.735878 | 0.173099 | 0 | 0.210526 | 0 | 0 | 0.05036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.052632 | 0 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d67f25d3516ae26ca8b76017de005e34113a7d6e | 7,293 | py | Python | process/triplifier.py | biocodellc/ontology-data-pipeline | f89dc159ab710368b3054bf8e8d70fb4c967527c | [
"BSD-3-Clause"
] | 13 | 2020-06-27T18:37:12.000Z | 2022-03-07T16:19:14.000Z | process/triplifier.py | biocodellc/ontology-data-pipeline | f89dc159ab710368b3054bf8e8d70fb4c967527c | [
"BSD-3-Clause"
] | 31 | 2019-01-05T18:39:37.000Z | 2021-12-13T19:43:40.000Z | process/triplifier.py | biocodellc/ontology-data-pipeline | f89dc159ab710368b3054bf8e8d70fb4c967527c | [
"BSD-3-Clause"
] | 1 | 2021-11-17T19:04:31.000Z | 2021-11-17T19:04:31.000Z | # -*- coding: utf-8 -*-
import re
import pandas as pd
import multiprocessing
from multiprocessing.dummy import Pool as ThreadPool
import logging
from .utils import isNull
class Triplifier(object):
def __init__(self, config):
self.config = config
self.integer_columns = []
for rule in self.config.rules:
if rule['rule'].lower() == 'integer':
self.integer_columns.extend(rule['columns'])
def triplify(self, data_frame):
"""
Generate triples using the given data_frame and the config mappings
:param data_frame: pandas DataFrame
:return: list of triples for the given data_frame data
"""
triples = []
data_frame = data_frame.fillna('')
for index, row in data_frame.iterrows():
triples.extend(self._generate_triples_for_row(row))
triples.extend(self._generate_triples_for_relation_predicates())
triples.extend(self._generate_triples_for_entities())
triples.append(self._generate_ontology_import_triple())
return triples
def _generate_triples_for_chunk(self, chunk):
triples = []
for index, row in chunk.iterrows():
triples.extend(self._generate_triples_for_row(row))
return triples
def _generate_triples_for_row(self, row):
row_triples = []
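        # each appended item is one N-Triples statement, e.g. (illustrative URIs only):
        #   <http://example.org/rec/1> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://example.org/SomeClass>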
for entity in self.config.entities:
s = "<{}{}>".format(entity['identifier_root'], self._get_value(row, entity['unique_key']))
if entity['concept_uri'] != 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type':
o = "<{}>".format(entity['concept_uri'])
row_triples.append("{} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> {}".format(s, o))
for column, uri in entity['columns']:
val = self._get_value(row, column)
list_for_column = self.config.get_list(column)
# if there is a specified list for this column & the field contains a defined_by, substitute the
# defined_by value for the list field
literal_val = True
if list_for_column and "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" in uri:
for i in list_for_column:
if i['field'] == val and i['defined_by']:
val = i['defined_by']
literal_val = False
break
                # if this is not a list but the URI specified for the mapping column is rdf:type,
                # we assume this is an object property and attempt to convert the label to a URI
elif "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" in uri:
val = self.config._get_uri_from_label(val)
literal_val = False
# format and print all of the instance data triples
                if not isNull(val):
                    p = "<{}>".format(uri)
                    if literal_val:
                        val_type = self._get_type(val)  # avoid shadowing the builtin 'type'
                        o = "\"{}\"^^<http://www.w3.org/2001/XMLSchema#{}>".format(val, val_type)
else:
o = "<{}>".format(str(val))
row_triples.append("{} {} {}".format(s, p, o))
# format and print all triples describing relations
for relation in self.config.relations:
try:
subject_entity = self.config.get_entity(relation['subject_entity_alias'])
object_entity = self.config.get_entity(relation['object_entity_alias'])
s = "<{}{}>".format(subject_entity['identifier_root'], self._get_value(row, subject_entity['unique_key']))
p = "<{}>".format(relation['predicate'])
o = "<{}{}>".format(object_entity['identifier_root'], self._get_value(row, object_entity['unique_key']))
row_triples.append("{} {} {}".format(s, p, o))
except Exception as err:
raise RuntimeError("Error assigning relations between a subject and an object. "
"Check to be sure each relation maps to an entity alias")
return row_triples
def _generate_triples_for_relation_predicates(self):
predicate_triples = []
for relation in self.config.relations:
s = "<{}>".format(relation['predicate'])
p = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
o = "<http://www.w3.org/2002/07/owl#ObjectProperty>"
predicate_triples.append("{} {} {}".format(s, p, o))
return predicate_triples
def _generate_triples_for_entities(self):
entity_triples = []
for entity in self.config.entities:
entity_triples.extend(self._generate_property_triples(entity['columns']))
if entity['concept_uri'] != 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type':
entity_triples.append(self._generate_class_triple(entity['concept_uri']))
return entity_triples
def _generate_ontology_import_triple(self):
s = "<urn:importInstance>"
p = "<http://www.w3.org/2002/07/owl#imports>"
o = "<{}>".format(self.config.ontology)
return "{} {} {}".format(s, p, o)
@staticmethod
def _generate_class_triple(concept_uri):
s = "<{}>".format(concept_uri)
p = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
o = "<http://www.w3.org/2000/01/rdf-schema#Class>"
return "{} {} {}".format(s, p, o)
@staticmethod
def _generate_property_triples(properties):
"""
generate triples for the properties of each entity
"""
property_triples = []
for column, uri in properties:
s = "<{}>".format(uri)
p = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
o = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#Property>"
property_triples.append("{} {} {}".format(s, p, o))
o2 = "<http://www.w3.org/2002/07/owl#DatatypeProperty>"
property_triples.append("{} {} {}".format(s, p, o2))
p2 = "<http://www.w3.org/2000/01/rdf-schema#isDefinedBy>"
property_triples.append("{} {} {}".format(s, p2, s))
return property_triples
def _get_value(self, row_data, column):
coerce_integer = False
if column in self.integer_columns:
coerce_integer = True
# TODO: This line breaks in certain situations. Workaround for now: return an empty string on exception
try:
val = str(row_data[column])
except:
return ''
# need to perform coercion here as pandas can't store ints along floats and strings. The only way to coerce
# to ints is to drop all strings and null values. We don't want to do this in the case of a warning.
if coerce_integer:
return int(float(val)) if re.fullmatch(r"[+-]?\d+(\.0+)?", str(val)) else val
return val
@staticmethod
def _get_type(val):
if re.fullmatch(r"[+-]?\d+", str(val)):
return 'integer'
elif re.fullmatch(r"[+-]?\d+\.\d+", str(val)):
return 'float'
else:
return 'string'
| 39.209677 | 127 | 0.574935 | 894 | 7,293 | 4.525727 | 0.214765 | 0.037074 | 0.033366 | 0.044488 | 0.347751 | 0.314632 | 0.219476 | 0.150766 | 0.121602 | 0.09738 | 0 | 0.024718 | 0.295489 | 7,293 | 185 | 128 | 39.421622 | 0.762748 | 0.124092 | 0 | 0.224 | 0 | 0.072 | 0.197118 | 0 | 0 | 0 | 0 | 0.005405 | 0 | 1 | 0.088 | false | 0 | 0.08 | 0 | 0.288 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d680bcdee688777add2842ce0bdbc8ac9c241004 | 455 | py | Python | core/urls.py | tyronedamasceno/coffe-api | 8cbf48c35c5dbd9ddfbeb921140be1d96a48698f | [
"MIT"
] | null | null | null | core/urls.py | tyronedamasceno/coffe-api | 8cbf48c35c5dbd9ddfbeb921140be1d96a48698f | [
"MIT"
] | 8 | 2020-02-12T02:59:28.000Z | 2022-02-10T14:02:04.000Z | core/urls.py | tyronedamasceno/coffe-api | 8cbf48c35c5dbd9ddfbeb921140be1d96a48698f | [
"MIT"
] | null | null | null | from django.urls import path, include
from rest_framework import routers
from core import views
router = routers.DefaultRouter()
router.register('coffe_types', views.CoffeTypeViewSet, base_name='coffe_types')
router.register('harvests', views.HarvestViewSet, base_name='harvests')
router.register(
'storage_report', views.StorageReportViewSet, base_name='storage_report'
)
app_name = 'core'
urlpatterns = [
path('', include(router.urls)),
]
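# For reference, DefaultRouter generates the standard viewset routes, e.g.:
#   GET/POST               coffe_types/        -> CoffeTypeViewSet list/create
#   GET/PUT/PATCH/DELETE   coffe_types/{pk}/   -> retrieve/update/destroy
# (and likewise for harvests and storage_report)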
| 22.75 | 79 | 0.771429 | 54 | 455 | 6.333333 | 0.481481 | 0.122807 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10989 | 455 | 19 | 80 | 23.947368 | 0.844444 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.230769 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d681b7a1e2748c8e44788cf4efd924a9d9b41944 | 5,876 | py | Python | ros/src/waypoint_updater/waypoint_updater.py | ryan-jonesford/CarND-Capstone | f8095bb2b7370b0825a89d419c19884431dfb754 | [
"MIT"
] | null | null | null | ros/src/waypoint_updater/waypoint_updater.py | ryan-jonesford/CarND-Capstone | f8095bb2b7370b0825a89d419c19884431dfb754 | [
"MIT"
] | 3 | 2018-11-04T23:54:56.000Z | 2018-11-18T19:37:11.000Z | ros/src/waypoint_updater/waypoint_updater.py | ryan-jonesford/CarND-Capstone | f8095bb2b7370b0825a89d419c19884431dfb754 | [
"MIT"
] | 2 | 2018-10-29T23:45:15.000Z | 2018-11-04T21:43:16.000Z | #!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane, Waypoint
from scipy.spatial import KDTree
import numpy as np
from std_msgs.msg import Int32
import math
'''
This node will publish waypoints from the car's current position to some `x` distance ahead.
As mentioned in the doc, you should ideally first implement a version which does not care
about traffic lights or obstacles.
Once you have created dbw_node, you will update this node to use the status of traffic lights too.
Please note that our simulator also provides the exact location of traffic lights and their
current status in `/vehicle/traffic_lights` message. You can use this message to build this node
as well as to verify your TL classifier.
'''
LOOKAHEAD_WPS = 50 # Number of waypoints we will publish. You can change this number
UPDATE_RATE = 30 #hz
NO_WP = -1
DECEL_RATE = 1.5 # m/s^2
STOPLINE = 3 # waypoints behind stopline to stop
DELAY = 20. # rate difference between this node and the twist_controller, in Hz
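# Worked example of the deceleration profile used below (numbers are
# illustrative): with DECEL_RATE = 1.5 m/s^2, a waypoint 12 m before the stop
# point gets a target speed of sqrt(2 * 1.5 * 12) = 6 m/s; anything below
# 1 m/s is clamped to 0 so the car actually stops at the line.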
class WaypointUpdater(object):
def __init__(self, rate_hz=UPDATE_RATE):
rospy.init_node('waypoint_updater')
self.pose = None
self.base_waypoints = None
self.waypoints_2d = None
self.waypoint_ktree = None
self.freq = rate_hz
self.nearest_wp_idx = NO_WP
self.stop_wp = NO_WP
rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
rospy.Subscriber('/traffic_waypoint', Int32, self.traffic_cb)
self.final_waypoints_pub = rospy.Publisher('final_waypoints', Lane, queue_size=1)
self.loop()
def loop(self):
rate = rospy.Rate(self.freq)
while not rospy.is_shutdown():
if (self.pose != None) and \
(self.base_waypoints != None) and \
(self.waypoint_ktree != None):
self.nearest_wp_idx = self.get_nearest_wp_indx()
self.publish_waypoints()
# don't update unless we get new positional data
self.pose = None
rate.sleep()
def publish_waypoints(self):
lane = self.generate_lane()
self.final_waypoints_pub.publish(lane)
def generate_lane(self):
lane = Lane()
lane.header = self.base_waypoints.header
look_ahead_wp_max = self.nearest_wp_idx + LOOKAHEAD_WPS
base_wpts = self.base_waypoints.waypoints[self.nearest_wp_idx:look_ahead_wp_max]
if self.stop_wp == NO_WP or (self.stop_wp >= look_ahead_wp_max):
lane.waypoints = base_wpts
else:
temp_waypoints = []
stop_idx = max(self.stop_wp - self.nearest_wp_idx - STOPLINE, 0)
for i, wp in enumerate(base_wpts):
temp_wp = Waypoint()
temp_wp.pose = wp.pose
if stop_idx >= STOPLINE:
dist = self.distance(base_wpts, i, stop_idx)
# account for system lag
if DELAY > 0:
delay_s = 1./DELAY
else:
delay_s = 0
# x = xo + vot + .5at^2, xo = 0
dist += self.get_waypoint_velocity(base_wpts[i])*delay_s+.5*DECEL_RATE*delay_s*delay_s
# v^2 = vo^2 + 2*a*(x-xo)
# v^2 = 0 + 2*a*(dist)
# v = sqrt(2*a*dist)
vel = math.sqrt(2*DECEL_RATE*dist)
if vel < 1.0:
vel = 0.0
else:
vel = 0.0
temp_wp.twist.twist.linear.x = min(vel, self.get_waypoint_velocity(base_wpts[0]))
temp_waypoints.append(temp_wp)
lane.waypoints = temp_waypoints
return lane
def get_nearest_wp_indx(self):
ptx = self.pose.pose.position.x
pty = self.pose.pose.position.y
nearest_indx = self.waypoint_ktree.query([ptx,pty],1)[1]
nearest_coord = self.waypoints_2d[nearest_indx]
prev_coord = self.waypoints_2d[nearest_indx - 1]
neareset_vect = np.array(nearest_coord)
prev_vect = np.array(prev_coord)
positive_vect = np.array([ptx,pty])
        # check whether nearest_coord is in front of or behind the car
val = np.dot(neareset_vect-prev_vect, positive_vect-neareset_vect)
if val > 0.0:
# works for waypoints that are in a loop
nearest_indx = (nearest_indx + 1) % len(self.waypoints_2d)
return nearest_indx
def pose_cb(self, msg):
self.pose = msg
def waypoints_cb(self, lane):
self.base_waypoints = lane
if not self.waypoints_2d:
self.waypoints_2d = [ [ waypoint.pose.pose.position.x, waypoint.pose.pose.position.y ] for waypoint in lane.waypoints ]
self.waypoint_ktree = KDTree(self.waypoints_2d)
def traffic_cb(self, msg):
self.stop_wp = msg.data
def obstacle_cb(self, msg):
# TODO: Callback for /obstacle_waypoint message. We will implement it later
pass
def get_waypoint_velocity(self, waypoint):
return waypoint.twist.twist.linear.x
def set_waypoint_velocity(self, waypoints, waypoint, velocity):
waypoints[waypoint].twist.twist.linear.x = velocity
def distance(self, waypoints, wp1, wp2):
dist = 0
dl = lambda a, b: math.sqrt((a.x-b.x)**2 + (a.y-b.y)**2 + (a.z-b.z)**2)
for i in range(wp1, wp2+1):
dist += dl(waypoints[wp1].pose.pose.position, waypoints[i].pose.pose.position)
wp1 = i
return dist
if __name__ == '__main__':
try:
WaypointUpdater()
except rospy.ROSInterruptException:
rospy.logerr('Could not start waypoint updater node.')
| 36.955975 | 131 | 0.615895 | 802 | 5,876 | 4.32793 | 0.266833 | 0.037453 | 0.030251 | 0.023048 | 0.084126 | 0.035725 | 0 | 0 | 0 | 0 | 0 | 0.015188 | 0.294078 | 5,876 | 158 | 132 | 37.189873 | 0.821601 | 0.088836 | 0 | 0.063636 | 0 | 0 | 0.025512 | 0 | 0 | 0 | 0 | 0.006329 | 0 | 1 | 0.109091 | false | 0.009091 | 0.063636 | 0.009091 | 0.218182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d681e5845812f47217df144a4c421bd1734a615c | 1,633 | py | Python | tools/configure.py | corrodedHash/sigmarsGarden | c6070005d9e01523c0b0deb2efbbfa5ffef0ce6f | [
"MIT"
] | null | null | null | tools/configure.py | corrodedHash/sigmarsGarden | c6070005d9e01523c0b0deb2efbbfa5ffef0ce6f | [
"MIT"
] | null | null | null | tools/configure.py | corrodedHash/sigmarsGarden | c6070005d9e01523c0b0deb2efbbfa5ffef0ce6f | [
"MIT"
] | null | null | null | from typing import Any
import cv2
import numpy as np
from sigmarsGarden.config import Configuration
from sigmarsGarden.parse import circle_coords
def configure(img: Any) -> Configuration:
    cv2.namedWindow("configureDisplay")

    # def click_and_crop(event, x, y, flags, param) -> None:
    #     print(event, x, y, flags, param)
    # cv2.setMouseCallback("configureDisplay", click_and_crop)

    cv2.imshow("configureDisplay", img)
    result = Configuration()
    result.down_distance = 114
    result.right_distance = 66
    result.start_coord = (1371, 400)
    result.radius = 28
    circle_color = [0, 0, 0]
    while True:
        keycode = cv2.waitKey(0)
        print(keycode)
        # arrow-key codes, immediately shadowed by the letter-key bindings
        # (h=104, t=116, n=110, s=115) below
        left = 81
        up = 82
        down = 84
        right = 83
        left = 104
        up = 116
        down = 110
        right = 115
        esc = 27
        start_coord = list(result.start_coord)
        if keycode == left:
            start_coord[0] -= 1
        elif keycode == right:
            start_coord[0] += 1
        elif keycode == up:
            start_coord[1] -= 1
        elif keycode == down:
            start_coord[1] += 1
        elif keycode == esc:
            break
        result.start_coord = (start_coord[0], start_coord[1])
        # redraw the calibration circles at the updated grid position
        new_img = np.copy(img)
        for coord in circle_coords(result):
            new_img = cv2.circle(new_img, coord, result.radius, circle_color)
        cv2.imshow("configureDisplay", new_img)
        print(start_coord)
    return result
def main() -> None:
    x = cv2.imread("testboards/1.jpg")
    print(configure(x))


if __name__ == "__main__":
    main()
| 24.014706 | 77 | 0.590325 | 199 | 1,633 | 4.678392 | 0.38191 | 0.118153 | 0.051557 | 0.025779 | 0.135338 | 0.098818 | 0 | 0 | 0 | 0 | 0 | 0.052074 | 0.306185 | 1,633 | 67 | 78 | 24.373134 | 0.769638 | 0.090631 | 0 | 0 | 0 | 0 | 0.048616 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.102041 | 0 | 0.163265 | 0.061224 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d682adb11768d513c2f5a3a5ae14d06fb88db0b8 | 4,847 | py | Python | gpt2_generate.py | LindgeW/PreLM | 39a6b1c2fc0ccff7e8143f14d113cdfa79f63d79 | [
"Apache-2.0"
] | 1 | 2022-03-09T14:40:24.000Z | 2022-03-09T14:40:24.000Z | gpt2_generate.py | LindgeW/PreLM | 39a6b1c2fc0ccff7e8143f14d113cdfa79f63d79 | [
"Apache-2.0"
] | null | null | null | gpt2_generate.py | LindgeW/PreLM | 39a6b1c2fc0ccff7e8143f14d113cdfa79f63d79 | [
"Apache-2.0"
] | null | null | null | import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from torch.utils.data import TensorDataset, DataLoader
# reference: \transformers\generation_utils.py
def select_greedy(logits):
    next_token_logits = logits[:, -1, :]
    # Greedy decoding: pick the highest-probability token
    next_token = torch.argmax(next_token_logits, dim=-1)
    return next_token
def select_topk(logits, k=10):
    # next_token = random.choice(logits[0, -1, :].sort(descending=True)[1][:k]).item()
    next_token_logits = logits[:, -1, :]
    top_k = min(max(k, 1), next_token_logits.size(-1))
    # Remove all tokens with a probability less than the last token of the top-k
    indices_to_remove = next_token_logits < torch.topk(next_token_logits, top_k)[0][..., -1, None]
    next_token_logits[indices_to_remove] = -float("Inf")
    probs = torch.nn.functional.softmax(next_token_logits, dim=-1)
    # multinomial samples indices from the remaining tokens according to their weights
    next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
    return next_token
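# Illustrative top-k example (made-up logits, not from the model): with
# logits [4.0, 3.0, 1.0, 0.5] and k=2, the threshold is 3.0, so the last two
# entries become -inf and sampling only ever returns index 0 or 1.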
def select_topp(logits, p=0.75):
    next_token_logits = logits[:, -1, :]  # (batch_size, vocab_size)
    sorted_logits, sorted_indices = torch.sort(next_token_logits, descending=True)
    cum_probs = torch.cumsum(torch.nn.functional.softmax(sorted_logits, dim=-1), dim=-1)
    # Remove tokens with cumulative probability above the threshold (tokens with 0 are kept)
    sorted_indices_to_remove = cum_probs > p
    # Shift the indices to the right to keep also the first token above the threshold
    sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
    sorted_indices_to_remove[..., 0] = 0
    # scatter sorted tensors back to the original indexing
    indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
    next_token_logits[indices_to_remove] = -float("Inf")
    probs = torch.nn.functional.softmax(next_token_logits, dim=-1)
    # multinomial samples indices from the remaining tokens according to their weights
    next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
    return next_token
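# Illustrative top-p example (made-up probabilities): with sorted probs
# [0.5, 0.3, 0.15, 0.05] and p=0.75, the cumulative sums are
# [0.5, 0.8, 0.95, 1.0]; after the one-position shift the mask keeps the
# first two tokens, so sampling is restricted to the 0.8 nucleus.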
def read_data(path='./romeo_and_juliet.txt'):
    with open(path, 'r', encoding='utf-8') as fin:
        ds = fin.read()
    return ds
def data_processor(dataset, tokenizer, max_len=32):
    indexed_text = tokenizer.encode(dataset)
    ds_cut = []
    for i in range(0, len(indexed_text) - max_len, max_len):
        # cut the token sequence into chunks of length max_len
        ds_cut.append(indexed_text[i: i + max_len])
    ds_tensor = torch.tensor(ds_cut)
    train_set = TensorDataset(ds_tensor, ds_tensor)  # inputs and labels are identical for LM training
    return DataLoader(dataset=train_set, batch_size=8, shuffle=False)
def train(train_loader, model, ep=30, device=torch.device('cpu')):
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, eps=1e-8)
    print(next(model.parameters()).device)
    model.train()
    model.to(device)
    for i in range(ep):
        total_loss = 0.
        for bi, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            loss, logits, _ = model(data, labels=target)
            print('loss:', loss.data.item())
            total_loss += loss
            loss.backward()
            optimizer.step()
        print('train loss:', total_loss / len(train_loader))
    return model
def inference(model, tokenizer, prefix=None, max_len=100, top_k=20, top_p=0.75, temperature=1.):
    print('inference ... ')
    print(next(model.parameters()).device)
    model.eval()
    indexed_tokens = tokenizer.encode(prefix)
    tokens_tensor = torch.tensor([indexed_tokens])
    final_pred_text = prefix
    cur_len = tokens_tensor.size(-1)
    for _ in range(max_len):
        with torch.no_grad():
            output = model(tokens_tensor)
        logits = output[0]  # (batch_size, cur_len, vocab_size)
        if temperature != 1:
            logits /= temperature
        next_idx = select_topk(logits, k=top_k)
        # next_idx = select_topp(logits, p=0.75)
        final_pred_text += tokenizer.decode(next_idx)
        if tokenizer.eos_token in final_pred_text:
            break
        # indexed_tokens += [next_idx]
        # tokens_tensor = torch.tensor([indexed_tokens])
        tokens_tensor = torch.cat([tokens_tensor, next_idx.unsqueeze(-1)], dim=-1)
        cur_len += 1
    print(cur_len)
    return final_pred_text
tokenizer = GPT2Tokenizer.from_pretrained('gpt2/en')
model = GPT2LMHeadModel.from_pretrained('gpt2/en')
# ds = read_data('./romeo_and_juliet.txt')
# train_loader = data_processor(ds, tokenizer)
# model = train(train_loader, model, ep=3, device=torch.device('cuda', 0))
pred_text = inference(model.to('cpu'), tokenizer,
                      'Yesterday, Jack said he saw an alien,',
                      top_k=20,
                      top_p=0.8,
                      temperature=0.5)
print(pred_text)
| 39.08871 | 101 | 0.676295 | 663 | 4,847 | 4.713424 | 0.268477 | 0.05472 | 0.0576 | 0.04032 | 0.26752 | 0.21184 | 0.12416 | 0.12416 | 0.12416 | 0.12416 | 0 | 0.019156 | 0.203012 | 4,847 | 123 | 102 | 39.406504 | 0.789801 | 0.17908 | 0 | 0.162791 | 0 | 0 | 0.030571 | 0.005558 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081395 | false | 0 | 0.034884 | 0 | 0.197674 | 0.081395 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d68824df1e94960138084688a7d3f88b19a19dff | 8,044 | py | Python | gradient_chaser.py | RobertOpitz/Gradient_Chaser | ca2011342d28798808769831655b74d9adfc6d26 | [
"MIT"
] | null | null | null | gradient_chaser.py | RobertOpitz/Gradient_Chaser | ca2011342d28798808769831655b74d9adfc6d26 | [
"MIT"
] | null | null | null | gradient_chaser.py | RobertOpitz/Gradient_Chaser | ca2011342d28798808769831655b74d9adfc6d26 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed May 13 17:37:31 2020
@author: robertopitz
"""
import numpy as np
from random import randrange
from math import isnan
import pygame as pg
def get_new_prey_pos(pos, board):
    # draw random free cells until one differs from the bot's position
    while True:
        c = randrange(1, len(board) - 1)
        r = randrange(1, len(board[0]) - 1)
        if c != pos[0] or r != pos[1]:
            if board[c][r] == 0:
                return np.array([c, r])
def get_next_move(pos, board):
    c = pos[0]
    r = pos[1]
    # penalty values of the four neighbouring cells
    gradient = np.array([board[c + 1][r], board[c - 1][r],
                         board[c][r - 1], board[c][r + 1]])
    i = np.argmin(gradient)
    move = ["RIGHT", "LEFT", "UP", "DOWN"]
    return move[i]
def move_bot(bot_pos, prey_pos, board, penalty_board):
    c = bot_pos[0]
    r = bot_pos[1]
    move = get_next_move(bot_pos, penalty_board)
    step_size = 1
    if move == "UP":
        if board[c][r - 1] == 0:
            bot_pos[1] -= step_size
    elif move == "DOWN":
        if board[c][r + 1] == 0:
            bot_pos[1] += step_size
    elif move == "LEFT":
        if board[c - 1][r] == 0:
            bot_pos[0] -= step_size
    elif move == "RIGHT":
        if board[c + 1][r] == 0:
            bot_pos[0] += step_size
def convert_board(board):
    new_board = np.zeros(board.shape).astype(float)
    new_board[board == 0.] = np.nan
    new_board[board == 1.] = float('inf')
    return new_board
def convert_to_draw_board(board):
    new_board = np.zeros(board.shape)
    for c in range(np.size(board, 0)):
        for r in range(np.size(board, 1)):
            b = board[c][r]
            if b == "o" or b == "O" or b == " ":
                new_board[c, r] = 0
            else:
                new_board[c, r] = 1
    return new_board
def create_gradient(board):
    # border is Inf
    # empty field is NaN
    step_penalty = 1
    nans_present = True
    border = float('inf')
    while nans_present:
        nans_present = False
        for c in range(1, len(board) - 1):
            for r in range(1, len(board[0]) - 1):
                if isnan(board[c][r]):
                    nans_present = True
                    if isnan(board[c + 1][r]) and isnan(board[c][r + 1]):
                        pass
                    elif isnan(board[c + 1][r]) and not isnan(board[c][r + 1]):
                        if board[c][r + 1] != border:
                            board[c][r] = board[c][r + 1] + step_penalty
                    elif not isnan(board[c + 1][r]) and isnan(board[c][r + 1]):
                        if board[c + 1][r] != border:
                            board[c][r] = board[c + 1][r] + step_penalty
                    else:
                        if board[c + 1][r] != border and \
                           board[c][r + 1] != border:
                            board[c][r] = int(0.5 * (board[c + 1][r] +
                                                     board[c][r + 1]) + step_penalty)
                        elif board[c + 1][r] == border and \
                             board[c][r + 1] != border:
                            board[c][r] = board[c][r + 1] + step_penalty
                        elif board[c + 1][r] != border and \
                             board[c][r + 1] == border:
                            board[c][r] = board[c + 1][r] + step_penalty
                else:
                    if board[c][r] != border:
                        if isnan(board[c + 1][r]):
                            board[c + 1][r] = board[c][r] + step_penalty
                        if isnan(board[c][r + 1]):
                            board[c][r + 1] = board[c][r] + step_penalty
    return board
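# Illustrative end state of the gradient on a tiny 1-D corridor
# (hand-computed): with the target cell set to 0 and walls at inf,
# repeated sweeps converge to roughly the step distance to the target,
#   inf  3  2  1  0  1  2  inf
# so following argmin over a cell's neighbours always walks towards it.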
def nint(f):
    return int(round(f))
def get_penalty_board(board, prey_pos):
    new_board = np.copy(board)
    c = nint(prey_pos[0])
    r = nint(prey_pos[1])
    new_board[c, r] = 0.0  # the prey cell is the gradient's origin
    penalty_board = create_gradient(new_board)
    return penalty_board
def draw_board(screen, board, rs):
    for c in range(np.size(board, 0)):
        for r in range(np.size(board, 1)):
            if board[c, r] == 1:
                pg.draw.rect(screen,
                             pg.Color("blue"),
                             pg.Rect(c * rs, r * rs, rs, rs))


def draw_bot(screen, pos, rs):
    pg.draw.rect(screen,
                 pg.Color("red"),
                 pg.Rect(pos[0] * rs, pos[1] * rs, rs, rs))


def draw_prey(screen, pos, rs):
    pg.draw.rect(screen,
                 pg.Color("yellow"),
                 pg.Rect(pos[0] * rs, pos[1] * rs, rs, rs))
def play_game(bot_pos_start, board_extern):
    board = convert_to_draw_board(board_extern)
    penalty_board_blue_print = convert_board(board)
    rect_size = 15
    bot_pos = np.copy(bot_pos_start)
    pg.init()
    screen_color = pg.Color("black")
    screen = pg.display.set_mode((np.size(board, 0) * rect_size,
                                  np.size(board, 1) * rect_size))
    clock = pg.time.Clock()
    pg.display.set_caption("Clean Bot AI")
    running = True
    prey_pos = get_new_prey_pos(bot_pos, board)
    penalty_board = get_penalty_board(penalty_board_blue_print, prey_pos)
    while running:
        move_bot(bot_pos, prey_pos, board, penalty_board)
        if bot_pos[0] == prey_pos[0] and bot_pos[1] == prey_pos[1]:
            # prey caught: spawn a new one and rebuild the gradient
            prey_pos = get_new_prey_pos(bot_pos, board)
            penalty_board = get_penalty_board(penalty_board_blue_print,
                                              prey_pos)
        screen.fill(screen_color)
        for event in pg.event.get():
            if event.type == pg.QUIT:
                running = False
        draw_board(screen, board, rect_size)
        draw_prey(screen, prey_pos, rect_size)
        draw_bot(screen, bot_pos, rect_size)
        clock.tick(60)
        pg.display.flip()
    pg.quit()
#==MAIN CODE==================================================================
# each row is 31 characters wide so np.array(board) yields a regular 2-D grid
board = [list("x--------x---|-|---x----xx----x"),   # 1
         list("|ooOooooo|---| |---|oooO||oooo|"),   # 2
         list("|ox-xo--o|---| |---|o--o--o--o|"),   # 3
         list("|o|-|o||o|---| |---|o||oooo||o|"),   # 4
         list("|o|-|o||o|---| |---|o|x--|o||o|"),   # 5
         list("|ox-xo--ox---x x---xo----|o||o|"),   # 6
         list("|oooooooooooooooooooooooooo||o|"),   # 7
         list("|ox-xo|------| |---|o--o|--x|o|"),   # 8
         list("|o|-|o|--xx--| |---|o||o|--x|o|"),   # 9
         list("|o|-|oooo||         o||oooo||o|"),   # 10
         list("|o|-|o--o|| x---x --o||o--o||o|"),   # 11
         list("|ox-xo||o-- |x-x| ||o--o||o--o|"),   # 12
         list("|ooooo||o   ||-|| ||oooo||oooo|"),   # 13
         list("x---|o|x--| |--|| |x--|o|x--|o|"),   # 14
         list("x---|o|x--| |--|| |x--|o|x--|o|"),   # 15
         list("|ooooo||o   ||-|| ||oooo||oooo|"),   # 16
         list("|ox-xo||o-- |x-x| ||o--o||o--o|"),   # 17
         list("|o|-|o--o|| x---x --o||o--o||o|"),   # 18
         list("|o|-|oooo||         o||oooo||o|"),   # 19
         list("|o|-|o|--xx--| |---|o||o|--x|o|"),   # 20
         list("|ox-xo|------| |---|o--o|--x|o|"),   # 21
         list("|oooooooooooooooooooooooooo||o|"),   # 22
         list("|ox-xo--ox---x x---xox---|o||o|"),   # 23
         list("|o|-|o||o|---| |---|o|x--|o||o|"),   # 24
         list("|o|-|o||o|---| |---|o||oooo||o|"),   # 25
         list("|ox-xo--o|---| |---|o--o--o--o|"),   # 26
         list("|ooOooooo|---| |---|oooO||oooo|"),   # 27
         list("x--------x---|-|---x----xx----x")]  # 28
# board = [[1,1,1,1,1,1,1,1,1],
#          [1,0,0,0,1,0,0,0,1],
#          [1,0,0,0,1,0,1,0,1],
#          [1,0,1,1,1,0,1,0,1],
#          [1,0,1,0,1,1,1,0,1],
#          [1,0,0,0,0,0,0,0,1],
#          [1,0,0,0,1,1,1,0,1],
#          [1,0,1,0,1,0,1,0,1],
#          [1,0,1,1,1,0,1,0,1],
#          [1,0,0,0,1,0,1,0,1],
#          [1,0,0,0,1,0,0,0,1],
#          [1,1,1,1,1,1,1,1,1]]
board = np.array(board)
bot_pos_start = np.array([1,1])
play_game(bot_pos_start, board)
| 34.084746 | 80 | 0.441074 | 1,182 | 8,044 | 2.890017 | 0.132826 | 0.087822 | 0.063525 | 0.042155 | 0.523126 | 0.46897 | 0.407787 | 0.390515 | 0.321721 | 0.277518 | 0 | 0.049211 | 0.345724 | 8,044 | 235 | 81 | 34.229787 | 0.599848 | 0.076827 | 0 | 0.305556 | 0 | 0.011111 | 0.127068 | 0.016816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.005556 | 0.022222 | 0.005556 | 0.127778 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d689694bd6143417bf16953605dd1dede7dec316 | 1,375 | py | Python | prior_config.py | ZENGXH/NPDRAW | 339d1d9b4880cce891cafe7c20198ef7c121a29e | [
"MIT"
] | 21 | 2021-06-28T18:29:28.000Z | 2022-03-13T09:12:07.000Z | prior_config.py | ZENGXH/NPDRAW | 339d1d9b4880cce891cafe7c20198ef7c121a29e | [
"MIT"
] | null | null | null | prior_config.py | ZENGXH/NPDRAW | 339d1d9b4880cce891cafe7c20198ef7c121a29e | [
"MIT"
] | 2 | 2021-07-05T02:29:32.000Z | 2021-11-02T08:25:14.000Z | from utils.yacs_config import CfgNode as CN
__C = CN()
cfg = __C
# cfg.canvas_init=0
cfg.use_vit=0
cfg.use_fast_vit=0
cfg.img_mean=-1
cfg.vit_mlp_dim=2048
cfg.vit_depth=8
cfg.vit_dropout=1
cfg.concat_one_hot=0
cfg.mask_out_prevloc_samples=0
#cfg.input_id_canvas=0
cfg.register_deprecated_key('input_id_canvas')
cfg.use_cnn_process=0
cfg.input_id_only=0
cfg.cond_on_loc=0
cfg.gt_file=''
cfg.img_size=28
cfg.pw=10
cfg.register_renamed_key('ps', 'pw')
cfg.register_deprecated_key('steps')
cfg.register_deprecated_key('canvas_init')
cfg.register_deprecated_key('lw')
cfg.register_deprecated_key('anchor_dependent')
cfg.hid=256
cfg.batch_size=128
cfg.num_epochs=50
cfg.lr=3e-4
## cfg.lw=1.0
cfg.k=50
cfg.loc_loss_weight=1.0
cfg.cls_loss_weight=1.0
cfg.stp_loss_weight=1.0
cfg.output_folder='./exp/prior'
cfg.single_sample=0
cfg.dataset='mnist'
cfg.add_empty=0
cfg.add_stop=0
cfg.inputd=2
cfg.model_name='cnn_prior'
cfg.hidden_size_prior=64
cfg.hidden_size_vae=256
cfg.use_scheduler=0
cfg.early_stopping=0
cfg.loc_map=1
cfg.nloc=-1
cfg.num_layers=8 #15
cfg.loc_dist='Gaussian'
cfg.loc_stride=1
cfg.exp_key=''
cfg.device='cuda'
cfg.exp_dir='./exp/' # root of all experiments
cfg.mhead=0
cfg.kernel_size=7 # for picnn's kernel
cfg.permute_order=0 # for picnn's kernel
cfg.geometric=0
#cfg.anchor_dependent=0
cfg.start_time=''
cfg.pos_encode=0
cfg.use_emb_enc=0
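# Minimal usage sketch (assumes utils.yacs_config mirrors the standard yacs
# CfgNode API; the override values here are made up):
#   from prior_config import cfg
#   cfg.merge_from_list(['vit_depth', 12, 'lr', 1e-4])
#   cfg.freeze()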
| 22.177419 | 48 | 0.786909 | 276 | 1,375 | 3.637681 | 0.434783 | 0.087649 | 0.104582 | 0.119522 | 0.080677 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050553 | 0.079273 | 1,375 | 61 | 49 | 22.540984 | 0.742496 | 0.100364 | 0 | 0 | 0 | 0 | 0.07824 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.017544 | 0 | 0.017544 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d689def2b69b86b6725aa76fbe9f83cda3ccc692 | 1,769 | py | Python | 2021/09/main2.py | chirsz-ever/aoc | dbdc2e32fbef108752db87f3747ce5898a0775ce | [
"BSL-1.0"
] | null | null | null | 2021/09/main2.py | chirsz-ever/aoc | dbdc2e32fbef108752db87f3747ce5898a0775ce | [
"BSL-1.0"
] | null | null | null | 2021/09/main2.py | chirsz-ever/aoc | dbdc2e32fbef108752db87f3747ce5898a0775ce | [
"BSL-1.0"
] | null | null | null | #!/usr/bin/env python3
import sys
from itertools import product
from operator import mul
from functools import reduce
inputFile = 'input'
if len(sys.argv) >= 2:
    inputFile = sys.argv[1]

heightmap : list[list[int]] = []
with open(inputFile) as fin:
    for line in fin:
        heightmap.append([int(c) for c in line.strip()])

width = len(heightmap[0])
height = len(heightmap)
def isLowPoint(i, j):
    h = heightmap[i][j]
    if i != 0 and heightmap[i - 1][j] <= h:
        return False
    if i != height - 1 and heightmap[i + 1][j] <= h:
        return False
    if j != 0 and heightmap[i][j - 1] <= h:
        return False
    if j != width - 1 and heightmap[i][j + 1] <= h:
        return False
    return True
lowpoints : list[tuple[int, int]] = []
for i, j in product(range(height), range(width)):
    if isLowPoint(i, j):
        lowpoints.append((i, j))

basinlog = [[0 for _ in range(width)] for _ in range(height)]
for i, j in product(range(height), range(width)):
    if heightmap[i][j] == 9:
        basinlog[i][j] = -1
def findbasin(i, j, t) -> int:
    # recursive flood fill: mark every reachable non-9 cell with token t
    if basinlog[i][j] != 0:
        return 0
    basinlog[i][j] = t
    size = 1
    if i != 0 and heightmap[i - 1][j] != 9:
        size += findbasin(i - 1, j, t)
    if i != height - 1 and heightmap[i + 1][j] != 9:
        size += findbasin(i + 1, j, t)
    if j != 0 and heightmap[i][j - 1] != 9:
        size += findbasin(i, j - 1, t)
    if j != width - 1 and heightmap[i][j + 1] != 9:
        size += findbasin(i, j + 1, t)
    return size
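# Sanity check against the AoC 2021 day 9 sample input: the three largest
# basins have sizes 14, 9 and 9, so the expected product is 14 * 9 * 9 = 1134.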
basinsizes : list[int] = []
basintoken = 1
for i, j in lowpoints:
    if (size := findbasin(i, j, basintoken)) != 0:
        basinsizes.append(size)
        basintoken += 1
print(f'{reduce(mul, sorted(basinsizes, reverse=True)[:3]) = }')
 | 26.80303 | 64 | 0.569248 | 280 | 1,769 | 3.589286 | 0.217857 | 0.037811 | 0.103483 | 0.055721 | 0.366169 | 0.366169 | 0.366169 | 0.366169 | 0.315423 | 0.208955 | 0 | 0.029253 | 0.265687 | 1,769 | 66 | 64 | 26.80303 | 0.744419 | 0.011871 | 0 | 0.113208 | 0 | 0 | 0.033753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0 | 0.075472 | 0 | 0.245283 | 0.018868 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d689f1e24c703d9de5c7460fe0778d147ec02403 | 974 | py | Python | tpDcc/libs/qt/core/traymessage.py | tpDcc/tpQtLib | 26b6e893395633a1b189a1b73654891b7688648d | [
"MIT"
] | 3 | 2019-08-26T05:56:12.000Z | 2019-10-03T11:35:53.000Z | tpDcc/libs/qt/core/traymessage.py | tpDcc/tpQtLib | 26b6e893395633a1b189a1b73654891b7688648d | [
"MIT"
] | null | null | null | tpDcc/libs/qt/core/traymessage.py | tpDcc/tpQtLib | 26b6e893395633a1b189a1b73654891b7688648d | [
"MIT"
] | 1 | 2021-03-03T21:01:50.000Z | 2021-03-03T21:01:50.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""
Module that contains custom tray balloon
"""
from __future__ import print_function, division, absolute_import
from Qt.QtWidgets import QWidget, QSystemTrayIcon, QMenu
class TrayMessage(QWidget, object):
    def __init__(self, parent=None):
        super(TrayMessage, self).__init__(parent=parent)

        self._tools_icon = None

        self.tray_icon_menu = QMenu(self)
        self.tray_icon = QSystemTrayIcon(self)
        # self.tray_icon.setIcon(self._tools_icon)
        self.tray_icon.setToolTip('Tray')
        self.tray_icon.setContextMenu(self.tray_icon_menu)
        if not QSystemTrayIcon.isSystemTrayAvailable():
            raise OSError('Tray Icon is not available!')
        self.tray_icon.show()

    def show_message(self, title, msg):
        try:
            self.tray_icon.showMessage(title, msg, self._tools_icon)
        except Exception:
            # fall back to the icon-less overload if the icon is unset/invalid
            self.tray_icon.showMessage(title, msg)
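# Minimal usage sketch (hypothetical, assumes a running Qt application):
#   app = QApplication(sys.argv)
#   tray = TrayMessage()
#   tray.show_message('Tools', 'Export finished')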
| 27.055556 | 68 | 0.677618 | 117 | 974 | 5.367521 | 0.470085 | 0.127389 | 0.171975 | 0.050955 | 0.098726 | 0.098726 | 0 | 0 | 0 | 0 | 0 | 0.001316 | 0.219713 | 974 | 35 | 69 | 27.828571 | 0.825 | 0.128337 | 0 | 0 | 0 | 0 | 0.036949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.277778 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d68d945342e1ae0d7e7a7a1d0a9e54406e6ceb70 | 18,034 | py | Python | booltest/battery.py | sobuch/polynomial-distinguishers | 5a007abd222d00cbf99f1083c3b537343d2fff56 | [
"MIT"
] | 5 | 2017-03-03T13:53:51.000Z | 2019-05-09T09:47:28.000Z | booltest/battery.py | sobuch/polynomial-distinguishers | 5a007abd222d00cbf99f1083c3b537343d2fff56 | [
"MIT"
] | 5 | 2017-10-07T11:15:09.000Z | 2021-01-25T17:03:59.000Z | booltest/battery.py | sobuch/polynomial-distinguishers | 5a007abd222d00cbf99f1083c3b537343d2fff56 | [
"MIT"
] | 6 | 2017-03-26T17:06:20.000Z | 2021-11-15T22:22:33.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import argparse
import coloredlogs
import logging
import json
import itertools
import shlex
import time
import queue
import sys
import os
import collections
import tempfile
from jsonpath_ng import parse
from .runner import AsyncRunner
from .common import merge_pvals, booltest_pval
from . import common
logger = logging.getLogger(__name__)
coloredlogs.install(level=logging.INFO)
"""
Config can look like this:
{
"default-cli": "--no-summary --json-out --log-prints --top 128 --no-comb-and --only-top-comb --only-top-deg --no-term-map --topterm-heap --topterm-heap-k 256 --best-x-combs 512",
"strategies": [
{
"name": "v1",
"cli": "",
"variations": [
{
"bl": [128, 256, 384, 512],
"deg": [1],
"cdeg": [1],
"exclusions": []
}
]
},
{
"name": "halving",
"cli": "--halving",
"variations": [
{
"bl": [128, 256, 384, 512],
"deg": [1, 2, 3],
"cdeg": [1, 2, 3],
"exclusions": []
}
]
}
]
}
"""
def jsonpath(path, obj, allow_none=False):
    r = [m.value for m in parse(path).find(obj)]
    return r[0] if not allow_none else (r[0] if r else None)


def listize(obj):
    return obj if (obj is None or isinstance(obj, list)) else [obj]


def get_runner(cli, cwd=None, rtt_env=None):
    async_runner = AsyncRunner(cli, cwd=cwd, shell=False, env=rtt_env)
    async_runner.log_out_after = False
    async_runner.preexec_setgrp = True
    return async_runner
class BoolParamGen:
    def __init__(self, cli, vals):
        self.cli = cli
        self.vals = vals if isinstance(vals, list) else [vals]


class BoolJob:
    def __init__(self, cli, name, vinfo='', idx=None):
        self.cli = cli
        self.name = name
        self.vinfo = vinfo
        self.idx = idx

    def is_halving(self):
        return '--halving' in self.cli


class BoolRes:
    def __init__(self, job, ret_code, js_res, is_halving, rejects=False, pval=None, alpha=None, stderr=None):
        self.job = job  # type: BoolJob
        self.ret_code = ret_code
        self.js_res = js_res
        self.is_halving = is_halving
        self.rejects = rejects
        self.alpha = alpha
        self.pval = pval
        self.stderr = stderr
class BoolRunner:
    def __init__(self):
        self.args = None
        self.bool_config = None
        self.parallel_tasks = None
        self.bool_wrapper = None
        self.job_queue = queue.Queue(maxsize=0)
        self.runners = []  # type: List[Optional[AsyncRunner]]
        self.comp_jobs = []  # type: List[Optional[BoolJob]]
        self.results = []

    def init_config(self):
        self.parallel_tasks = self.args.threads or 1
        self.bool_wrapper = self.args.booltest_bin
        try:
            if self.args.config:
                with open(self.args.config) as fh:
                    self.bool_config = json.load(fh)
                if not self.bool_wrapper:
                    self.bool_wrapper = jsonpath("$.wrapper", self.bool_config, True)
                if not self.args.threads:
                    self.parallel_tasks = jsonpath("$.threads", self.bool_config, True) or self.args.threads or 1
        except Exception as e:
            logger.error("Could not load the config %s" % (e,), exc_info=e)

        if not self.bool_wrapper:
            self.bool_wrapper = "\"%s\" -m booltest.booltest_main" % sys.executable
    def norm_methods(self, methods):
        res = set()
        for m in methods:
            if m == 'v1':
                res.add('1')
            elif m == '1':
                res.add(m)
            elif m == 'halving':
                res.add('2')
            elif m == 'v2':
                res.add('2')
            elif m == '2':
                res.add(m)
            else:
                raise ValueError("Unknown method %s" % m)
        return sorted(list(res))

    def norm_params(self, params, default):
        if params is None or len(params) == 0:
            return default
        return [int(x) for x in params]
    def generate_jobs(self):
        dcli = self.args.cli
        if dcli is None:
            dcli = jsonpath('$.default-cli', self.bool_config, True)
        if dcli is None:
            dcli = '--no-summary --json-out --log-prints --top 128 --no-comb-and --only-top-comb --only-top-deg ' \
                   '--no-term-map --topterm-heap --topterm-heap-k 256 --best-x-combs 512'
        if '--no-summary' not in dcli:
            dcli += ' --no-summary'
        if '--json-out' not in dcli:
            dcli += ' --json-out'
        if '--log-prints' not in dcli:
            dcli += ' --log-prints'

        strategies = jsonpath('$.strategies', self.bool_config, True)
        if strategies is None:
            # no strategies in the config: build defaults from the CLI options
            strategies = []
            methods = self.norm_methods(self.args.methods or ["1", "2"])
            for mt in methods:
                strat = collections.OrderedDict()
                strat['name'] = "v%s" % mt
                strat['cli'] = "--halving" if mt == '2' else ''
                strat['variations'] = [collections.OrderedDict([
                    ('bl', self.norm_params(self.args.block, [128, 256, 384, 512])),
                    ('deg', self.norm_params(self.args.deg, [1, 2])),
                    ('cdeg', self.norm_params(self.args.comb_deg, [1, 2])),
                    ('exclusions', []),
                ])]
                strategies.append(strat)

        for st in strategies:
            name = st['name']
            st_cli = jsonpath('$.cli', st, True) or ''
            st_vars = jsonpath('$.variations', st, True) or []
            ccli = ('%s %s' % (dcli, st_cli)).strip()
            if not st_vars:
                yield BoolJob(ccli, name)
                continue

            for cvar in st_vars:
                blocks = listize(jsonpath('$.bl', cvar, True)) or [None]
                degs = listize(jsonpath('$.deg', cvar, True)) or [None]
                cdegs = listize(jsonpath('$.cdeg', cvar, True)) or [None]
                pcli = ['--block', '--degree', '--combine-deg']
                vinfo = ['', '', '']

                iterator = itertools.product(blocks, degs, cdegs)
                for el in iterator:
                    c = ' '.join([(('%s %s') % (pcli[ix], dt)) for (ix, dt) in enumerate(el) if dt is not None])
                    vi = '-'.join([(('%s%s') % (vinfo[ix], dt)).strip() for (ix, dt) in enumerate(el) if dt is not None])
                    ccli0 = ('%s %s' % (ccli, c)).strip()
                    yield BoolJob(ccli0, name, vi)
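    # Example of one generated job (illustrative, exact defaults depend on the
    # config): cli = "--no-summary --json-out --log-prints ... --halving
    # --block 256 --degree 2 --combine-deg 1", name = "v2", vinfo = "256-2-1".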
    def run_job(self, cli):
        async_runner = get_runner(shlex.split(cli))
        logger.info("Starting async command %s" % cli)
        async_runner.start()

        while async_runner.is_running:
            time.sleep(1)
        logger.info("Async command finished")
    def on_finished(self, job, runner, idx):
        if runner.ret_code != 0:
            logger.warning("Return code of job %s is %s" % (idx, runner.ret_code))
            stderr = ("\n".join(runner.err_acc)).strip()
            br = BoolRes(job, runner.ret_code, None, job.is_halving(), stderr=stderr)
            self.results.append(br)
            return

        results = runner.out_acc
        buff = (''.join(results)).strip()
        try:
            js = json.loads(buff)
            is_halving = js['halving']
            br = BoolRes(job, 0, js, is_halving)
            if not is_halving:
                br.rejects = [m.value for m in parse('$.inputs[0].res[0].rejects').find(js)][0]
                br.alpha = [m.value for m in parse('$.inputs[0].res[0].ref_alpha').find(js)][0]
                logger.info('rejects: %s, at alpha %.5e' % (br.rejects, br.alpha))
            else:
                br.pval = [m.value for m in parse('$.inputs[0].res[1].halvings[0].pval').find(js)][0]
                logger.info('halving pval: %5e' % br.pval)
            self.results.append(br)
        except Exception as e:
            logger.error("Exception processing results: %s" % (e,), exc_info=e)
            logger.warning("[[[%s]]]" % buff)
    def on_results_ready(self):
        try:
            logger.info("=" * 80)
            logger.info("Results")
            ok_results = [r for r in self.results if r.ret_code == 0]
            nok_results = [r for r in self.results if r.ret_code != 0]
            bat_errors = ['Job %d (%s-%s), ret_code %d' % (r.job.idx, r.job.name, r.job.vinfo, r.ret_code)
                          for r in self.results if r.ret_code != 0]
            if nok_results:
                logger.warning("Some jobs failed with error: \n%s" % ("\n".join(bat_errors)))
                for r in nok_results:
                    logger.info("Job %s, (%s-%s)" % (r.job.idx, r.job.name, r.job.vinfo))
                    logger.info("Stderr: %s" % r.stderr)

            v1_jobs = [r for r in ok_results if not r.is_halving]
            v2_jobs = [r for r in ok_results if r.is_halving]
            v1_sum = collections.OrderedDict()
            v2_sum = collections.OrderedDict()

            if v1_jobs:
                rejects = [r for r in v1_jobs if r.rejects]
                v1_sum['alpha'] = max([x.alpha for x in v1_jobs])
                v1_sum['pvalue'] = booltest_pval(nfails=len(rejects), ntests=len(v1_jobs), alpha=v1_sum['alpha'])
                v1_sum['npassed'] = sum([1 for r in v1_jobs if not r.rejects])

            if v2_jobs:
                pvals = [r.pval for r in v2_jobs]
                v2_sum['npassed'] = sum([1 for r in v2_jobs if r.pval >= self.args.alpha])
                v2_sum['pvalue'] = merge_pvals(pvals)[0] if len(pvals) > 1 else -1

            if v1_jobs:
                logger.info("V1 results:")
                self.print_test_res(v1_jobs)
            if v2_jobs:
                logger.info("V2 results:")
                self.print_test_res(v2_jobs)

            logger.info("=" * 80)
            logger.info("Summary: ")
            if v1_jobs:
                logger.info("v1 tests: %s, #passed: %s, pvalue: %s"
                            % (len(v1_jobs), v1_sum['npassed'], v1_sum['pvalue']))
            if v2_jobs:
                logger.info("v2 tests: %s, #passed: %s, pvalue: %s"
                            % (len(v2_jobs), v2_sum['npassed'], v2_sum['pvalue']))

            if not self.args.json_out and not self.args.json_out_file:
                return

            jsout = collections.OrderedDict()
            jsout["nfailed_jobs"] = len(nok_results)
            jsout["failed_jobs_stderr"] = [r.stderr for r in nok_results]
            jsout["results"] = common.noindent_poly([r.js_res for r in ok_results])

            kwargs = {'indent': 2} if self.args.json_nice else {}
            if self.args.json_out:
                print(common.json_dumps(jsout, **kwargs))

            if self.args.json_out_file:
                with open(self.args.json_out_file, 'w+') as fh:
                    common.json_dump(jsout, fh, **kwargs)

            jsout = common.jsunwrap(jsout)
            return jsout

        except Exception as e:
            logger.warning("Exception in results processing: %s" % (e,), exc_info=e)
    def print_test_res(self, res):
        for rs in res:  # type: BoolRes
            passed = (rs.pval >= self.args.alpha if rs.is_halving else not rs.rejects) if rs.ret_code == 0 else None
            desc_str = ""
            if rs.is_halving:
                desc_str = "pvalue: %5e" % (rs.pval,)
            else:
                desc_str = "alpha: %5e" % (rs.alpha,)

            # local name so the loop iterable `res` is not shadowed
            rjs = rs.js_res["inputs"][0]["res"]
            dist_poly = jsonpath('$[0].dists[0].poly', rjs, True)
            time_elapsed = jsonpath('$.time_elapsed', rs.js_res, True)
            best_dist_zscore = jsonpath('$[0].dists[0].zscore', rjs, True) or -1
            ref_zscore_min = jsonpath('$[0].ref_minmax[0]', rjs, True) or -1
            ref_zscore_max = jsonpath('$[0].ref_minmax[1]', rjs, True) or -1

            aux_str = ""
            if rs.is_halving:
                best_dist_zscore_halving = jsonpath('$[1].dists[0].zscore', rjs, True)
                aux_str = "Learn: (z-score: %.5f, acc. zscores: [%.5f, %.5f]), Eval: (z-score: %.5f)" \
                          % (best_dist_zscore, ref_zscore_min, ref_zscore_max, best_dist_zscore_halving)
            else:
                aux_str = "z-score: %.5f, acc. zscores: [%.5f, %.5f]" \
                          % (best_dist_zscore, ref_zscore_min, ref_zscore_max)

            logger.info(" - %s %s: passed: %s, %s, dist: %s\n elapsed time: %6.2f s, %s"
                        % (rs.job.name, rs.job.vinfo, passed, desc_str, dist_poly,
                           time_elapsed, aux_str))
    def work(self):
        if len(self.args.files) != 1:
            raise ValueError("Provide exactly one file to test")

        ifile = self.args.files[0]
        if ifile != '-' and not os.path.exists(ifile):
            raise ValueError("Provided input file not found")

        tmp_file = None
        if ifile == '-':
            # spool stdin to a temporary file so each job can re-read it
            tmp_file = tempfile.NamedTemporaryFile(prefix="booltest-bat-inp", delete=True)
            while True:
                data = sys.stdin.read(4096) if sys.version_info < (3,) else sys.stdin.buffer.read(4096)
                if data is None or len(data) == 0:
                    break
                tmp_file.write(data)
            tmp_file.flush()  # make the spooled data visible to the subprocesses
            ifile = tmp_file.name

        jobs = [x for x in self.generate_jobs()]
        for i, j in enumerate(jobs):
            j.idx = i

        self.runners = [None] * self.parallel_tasks
        self.comp_jobs = [None] * self.parallel_tasks
        for j in jobs:
            self.job_queue.put_nowait(j)

        while not self.job_queue.empty() or sum([1 for x in self.runners if x is not None]) > 0:
            time.sleep(0.1)

            # Realloc work
            for i in range(len(self.runners)):
                if self.runners[i] is not None and self.runners[i].is_running:
                    continue

                was_empty = self.runners[i] is None
                if not was_empty:
                    self.job_queue.task_done()
                    logger.info("Task %d done, job queue size: %d, running: %s"
                                % (i, self.job_queue.qsize(), sum([1 for x in self.runners if x])))
                    self.on_finished(self.comp_jobs[i], self.runners[i], i)

                # Start a new task, if any
                try:
                    job = self.job_queue.get_nowait()  # type: BoolJob
                except queue.Empty:
                    self.runners[i] = None
                    continue

                cli = '%s %s "%s"' % (self.bool_wrapper, job.cli, ifile)
                self.comp_jobs[i] = job
                self.runners[i] = get_runner(shlex.split(cli))

                logger.info("Starting async command %s %s, %s" % (job.name, job.vinfo, cli))
                self.runners[i].start()

        return self.on_results_ready()
    def main(self):
        parser = self.argparser()
        self.args = parser.parse_args()
        self.init_config()
        return self.work()
    def argparser(self):
        parser = argparse.ArgumentParser(description='BoolTest Battery Runner')
        parser.add_argument('--debug', dest='debug', action='store_const', const=True,
                            help='enables debug mode')
        parser.add_argument('-c', '--config', default=None,
                            help='Test config')
        parser.add_argument('--alpha', dest='alpha', type=float, default=1e-4,
                            help='Alpha value for pass/fail')
        parser.add_argument('-t', dest='threads', type=int, default=1,
                            help='Maximum parallel threads')
        parser.add_argument('--block', dest='block', nargs=argparse.ZERO_OR_MORE,
                            default=None, type=int,
                            help='List of block sizes to test')
        parser.add_argument('--deg', dest='deg', nargs=argparse.ZERO_OR_MORE,
                            default=None, type=int,
                            help='List of degree to test')
        parser.add_argument('--comb-deg', dest='comb_deg', nargs=argparse.ZERO_OR_MORE,
                            default=None, type=int,
                            help='List of degree of combinations to test')
        parser.add_argument('--methods', dest='methods', nargs=argparse.ZERO_OR_MORE,
                            default=None,
                            help='List of methods to test, supported: 1, 2, halving')
        parser.add_argument('files', nargs=argparse.ONE_OR_MORE, default=[],
                            help='files to process')
        parser.add_argument('--stdin', dest='stdin', action='store_const', const=True, default=False,
                            help='Read from the stdin')
        parser.add_argument('--booltest-bin', dest='booltest_bin',
                            help='Specify BoolTest binary launcher. If not specified, autodetected.')
        parser.add_argument('--cli', dest='cli',
                            help='Specify common BoolTest CLI options')
        parser.add_argument('--json-out', dest='json_out', action='store_const', const=True, default=False,
                            help='Produce json result')
        parser.add_argument('--json-out-file', dest='json_out_file', default=None,
                            help='Produce json result to a file')
        parser.add_argument('--json-nice', dest='json_nice', action='store_const', const=True, default=False,
                            help='Nicely formatted json output')
        return parser
def main():
    br = BoolRunner()
    return br.main()


if __name__ == '__main__':
    main()
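# Example invocation (illustrative file name; assumes the package is installed):
#   python -m booltest.battery -t 4 --block 128 256 --deg 1 2 --json-out data.bin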
| 37.886555 | 182 | 0.523345 | 2,256 | 18,034 | 4.055408 | 0.155142 | 0.020111 | 0.027872 | 0.003826 | 0.260794 | 0.173789 | 0.15029 | 0.132583 | 0.100557 | 0.087113 | 0 | 0.016541 | 0.342963 | 18,034 | 475 | 183 | 37.966316 | 0.755591 | 0.010314 | 0 | 0.118497 | 0 | 0.011561 | 0.135592 | 0.006541 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057803 | false | 0.028902 | 0.046243 | 0.00578 | 0.156069 | 0.020231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d68e21dba61d5cbeb398194248f4e63acb8aae21 | 8,346 | py | Python | traincifar224.py | iitm-sysdl/FuSeConv | 04cdf54abfdbf359235d1b4c0848f188b1abbf2d | [
"Apache-2.0"
] | 8 | 2021-02-08T22:12:53.000Z | 2022-02-20T16:33:11.000Z | traincifar224.py | iitm-sysdl/FuSeConv | 04cdf54abfdbf359235d1b4c0848f188b1abbf2d | [
"Apache-2.0"
] | null | null | null | traincifar224.py | iitm-sysdl/FuSeConv | 04cdf54abfdbf359235d1b4c0848f188b1abbf2d | [
"Apache-2.0"
] | 4 | 2021-03-04T11:21:42.000Z | 2022-02-15T07:47:19.000Z | '''
FuSeConv: Fully Separable Convolutions for Fast Inference on Systolic Arrays
Authors: Surya Selvam, Vinod Ganesan, Pratyush Kumar
Email ID: selvams@purdue.edu, vinodg@cse.iitm.ac.in, pratyush@cse.iitm.ac.in
'''
import os
import torch
import wandb
import random
import argparse
import torchvision
import torch.nn as nn
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from utils import *
from models import *
def dumpData(flag, string):
    if flag == 'train':
        meta = open(args.name + '/metadataTrain.txt', "a")
        meta.write(string)
        meta.close()
    else:
        meta = open(args.name + '/metadataTest.txt', "a")
        meta.write(string)
        meta.close()
def train(net, trainloader, criterion, optimizer, epoch):
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs = inputs.cuda()
        targets = targets.cuda()
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()

        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()

        progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                     % (train_loss / (batch_idx + 1), 100. * correct / total, correct, total))

    string = str(epoch) + ',' + str(train_loss) + ',' + str(correct * 1.0 / total) + '\n'
    dumpData('train', string)
    wandb.log({
        "Train Loss": train_loss,
        "Train Accuracy": 100 * correct / total}, step=epoch)
def test(net, testloader, criterion, epoch):
    net.eval()
    test_loss = 0
    correct = 0
    total = 0
    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(testloader):
            inputs, targets = inputs.cuda(), targets.cuda()
            outputs = net(inputs)
            loss = criterion(outputs, targets)

            test_loss += loss.item()
            _, predicted = outputs.max(1)
            total += targets.size(0)
            correct += predicted.eq(targets).sum().item()

            progress_bar(batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                         % (test_loss / (batch_idx + 1), 100. * correct / total, correct, total))

    string = str(epoch) + ',' + str(test_loss) + ',' + str(correct * 1.0 / total) + '\n'
    dumpData('test', string)
    wandb.log({
        "Test Loss": test_loss,
        "Test Accuracy": 100 * correct / total}, step=epoch)
    return correct * 1.0 / total
def main():
    wandb.init(name=args.name, project="cifar-224-full-variant")

    transform_train = transforms.Compose([
        transforms.Resize(224),
        transforms.RandomCrop(224, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])

    transform_test = transforms.Compose([
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])

    if args.Dataset == 'CIFAR10':
        trainset = torchvision.datasets.CIFAR10(root='data', train=True, download=True, transform=transform_train)
        testset = torchvision.datasets.CIFAR10(root='data', train=False, download=True, transform=transform_test)
        numClasses = 10
    elif args.Dataset == 'CIFAR100':
        trainset = torchvision.datasets.CIFAR100(root='data', train=True, download=True, transform=transform_train)
        testset = torchvision.datasets.CIFAR100(root='data', train=False, download=True, transform=transform_test)
        numClasses = 100

    trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=4)
    testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=4)

    if args.variant == 'baseline':
        if args.Network == 'ResNet':
            net = ResNet50(numClasses)
        elif args.Network == 'MobileNetV1':
            net = MobileNetV1(numClasses)
        elif args.Network == 'MobileNetV2':
            net = MobileNetV2(numClasses)
        elif args.Network == 'MobileNetV3S':
            net = MobileNetV3('small', numClasses)
        elif args.Network == 'MobileNetV3L':
            net = MobileNetV3('large', numClasses)
        elif args.Network == 'MnasNet':
            net = MnasNet(numClasses)
    elif args.variant == 'half':
        if args.Network == 'ResNet':
            net = ResNet50FuSeHalf(numClasses)
        elif args.Network == 'MobileNetV1':
            net = MobileNetV1FuSeHalf(numClasses)
        elif args.Network == 'MobileNetV2':
            net = MobileNetV2FuSeHalf(numClasses)
        elif args.Network == 'MobileNetV3S':
            net = MobileNetV3FuSeHalf('small', numClasses)
        elif args.Network == 'MobileNetV3L':
            net = MobileNetV3FuSeHalf('large', numClasses)
        elif args.Network == 'MnasNet':
            net = MnasNetFuSeHalf(numClasses)
    elif args.variant == 'full':
        if args.Network == 'ResNet':
            net = ResNet50FuSeFull(numClasses)
        elif args.Network == 'MobileNetV1':
            net = MobileNetV1FuSeFull(numClasses)
        elif args.Network == 'MobileNetV2':
            net = MobileNetV2FuSeFull(numClasses)
        elif args.Network == 'MobileNetV3S':
            net = MobileNetV3FuSeFull('small', numClasses)
        elif args.Network == 'MobileNetV3L':
            net = MobileNetV3FuSeFull('large', numClasses)
        elif args.Network == 'MnasNet':
            net = MnasNetFuSeFull(numClasses)
    else:
        print("Provide a valid variant")
        exit(0)

    criterion = nn.CrossEntropyLoss().cuda()
    optimizer = torch.optim.SGD(net.parameters(), 0.1, momentum=0.9, weight_decay=5e-4)
    net.cuda()
    wandb.watch(net, log="all")

    bestAcc = 0
    startEpoch = 0
    if args.resume:
        assert os.path.isdir(args.name), 'Error: no checkpoint directory found!'
        checkpoint = torch.load(args.name + '/BestModel.t7')
        net.load_state_dict(checkpoint['net'])
        bestAcc = checkpoint['acc']
        startEpoch = checkpoint['epoch']
        optimizer.load_state_dict(checkpoint['optimizer'])

    lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                        milestones=[20, 40, 60, 70, 80, 90],
                                                        gamma=0.1, last_epoch=startEpoch - 1)

    for epoch in range(startEpoch, 60):
        train(net, trainloader, criterion, optimizer, epoch)
        lr_scheduler.step()
        acc = test(net, testloader, criterion, epoch)
        state = {'net': net.state_dict(),
                 'acc': acc,
                 'epoch': epoch + 1,
                 'optimizer': optimizer.state_dict()
                 }
        if acc > bestAcc:
            torch.save(state, args.name + '/BestModel.t7')
            bestAcc = acc
            wandb.save('BestModel.h5')
        else:
            torch.save(state, args.name + '/LastEpoch.t7')

    meta = open(args.name + '/stats.txt', "a")
    s = args.variant
    meta.write(args.Dataset + ' , ' + args.Network + ' , ' + s + ' , ' + str(bestAcc) + '\n')
    meta.close()
if __name__ == '__main__':
    random.seed(42)
    torch.manual_seed(42)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

    parser = argparse.ArgumentParser(description="Train CIFAR Models")
    parser.add_argument("--Dataset", "-D", type=str, help='CIFAR10, CIFAR100', required=True)
    parser.add_argument("--Network", "-N", type=str, help='ResNet, MobileNetV1, MobileNetV2, MobileNetV3S, MobileNetV3L, MnasNet', required=True)
    parser.add_argument("--name", "-n", type=str, help='Name of the run', required=True)
    parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
    parser.add_argument('--variant', '-v', type=str, help='baseline or half or full', required=True)
    args = parser.parse_args()

    if not os.path.isdir(args.name):
        os.mkdir(args.name)
    main()
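# Example invocation (illustrative run name):
#   python traincifar224.py -D CIFAR10 -N MobileNetV2 -n mbv2_full_run -v full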
| 39.367925 | 149 | 0.606278 | 920 | 8,346 | 5.43587 | 0.26087 | 0.041792 | 0.061188 | 0.074985 | 0.454709 | 0.395321 | 0.25215 | 0.158768 | 0.146771 | 0.146771 | 0 | 0.033742 | 0.257848 | 8,346 | 211 | 150 | 39.554502 | 0.773652 | 0.024682 | 0 | 0.274725 | 0 | 0 | 0.106493 | 0.002705 | 0 | 0 | 0 | 0 | 0.005495 | 1 | 0.021978 | false | 0 | 0.06044 | 0 | 0.087912 | 0.010989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69171373efa977663e506a9e0cd4ffbf706ae5a | 2,590 | py | Python | fabfile.py | prezi/snakebasket | 8e2e91ef2c7d034fa45c8005e5217fec333808ee | [
"MIT"
] | 24 | 2015-02-03T00:04:06.000Z | 2021-09-14T06:50:01.000Z | fabfile.py | prezi/snakebasket | 8e2e91ef2c7d034fa45c8005e5217fec333808ee | [
"MIT"
] | 1 | 2021-03-23T10:44:18.000Z | 2021-03-23T15:38:38.000Z | fabfile.py | prezi/snakebasket | 8e2e91ef2c7d034fa45c8005e5217fec333808ee | [
"MIT"
] | 5 | 2015-08-16T11:31:09.000Z | 2021-12-27T13:31:33.000Z | import os.path
from fabric.api import local, env
from fabric.utils import fastprint
from prezi.fabric.s3 import CommonTasks, S3Deploy, NoopServiceManager
env.forward_agent = True
env.user = 'publisher'
env.roledefs = {'production': [], 'stage': [], 'local': []}
class SingleVirtualenvS3Deploy(S3Deploy):
    def __init__(self, app_name, buckets, revno):
        super(SingleVirtualenvS3Deploy, self).__init__(app_name, buckets, revno)
        self.service = NoopServiceManager(self)
        self.virtualenv = SingleVirtualenvService(self)


class SingleVirtualenvService(object):
    def __init__(self, deployer):
        self.deployer = deployer
        self.tarball_path = self.deployer.build_dir + '.tar'
        self.tarbz_path = self.tarball_path + '.bz2'
        self.tarbz_name = os.path.basename(self.tarbz_path)
    def build_tarbz(self):
        self.build_venv()
        self.compress_venv()

    def cleanup(self):
        local('rm -rf %s %s' % (self.tarbz_path, self.deployer.build_dir))

    def build_venv(self):
        fastprint('Building single virtualenv service in %s\n' % self.deployer.build_dir)
        # init + update pip submodule
        local('git submodule init; git submodule update')
        # builds venv
        self.run_virtualenv_cmd("--distribute --no-site-packages -p python2.7 %s" % self.deployer.build_dir)
        # installs app + dependencies
        local(' && '.join(
            ['. %s/bin/activate' % self.deployer.build_dir,
             'pip install --exists-action=s -e `pwd`/pip#egg=pip -e `pwd`@master#egg=snakebasket -r requirements-development.txt']
        ))
        # makes venv relocatable
        self.run_virtualenv_cmd("--relocatable -p python2.7 %s" % self.deployer.build_dir)

    def compress_venv(self):
        fastprint('Compressing virtualenv')
        local('tar -C %(build_dir)s/.. -cjf %(tarbz_path)s %(dirname)s' % {
            'build_dir': self.deployer.build_dir,
            'tarbz_path': self.tarbz_path,
            'dirname': os.path.basename(self.deployer.build_dir)
        })

    def run_virtualenv_cmd(self, args):
        if not isinstance(args, list):
            args = args.split()
        fastprint('Running virtualenv with args %s\n' % args)
        local("env VERSIONER_PYTHON_VERSION='' virtualenv %s" % ' '.join(args))

    @property
    def upload_source(self):
        return self.tarbz_path

    @property
    def upload_target(self):
        return self.tarbz_name
tasks = CommonTasks(SingleVirtualenvS3Deploy, 'snakebasket', None)
snakebasket_build = tasks.build
cleanup = tasks.cleanup
| 35.479452 | 130 | 0.660232 | 310 | 2,590 | 5.348387 | 0.354839 | 0.072376 | 0.082027 | 0.096502 | 0.08263 | 0.036188 | 0.036188 | 0.036188 | 0 | 0 | 0 | 0.005435 | 0.218533 | 2,590 | 72 | 131 | 35.972222 | 0.813735 | 0.034749 | 0 | 0.037736 | 0 | 0.018868 | 0.214429 | 0.033267 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169811 | false | 0 | 0.075472 | 0.037736 | 0.320755 | 0.075472 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69196f51793e1153bef57beef63e6af53ecc91a | 706 | py | Python | djstripe/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | null | null | null | djstripe/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | null | null | null | djstripe/__init__.py | TigerDX/dj-stripe | 2fd4897abaedf2d9faa3dd5af86402dae3ab86a3 | [
"BSD-3-Clause"
] | null | null | null | from __future__ import unicode_literals
import warnings
from django import get_version as get_django_version
__title__ = "dj-stripe"
__summary__ = "Django + Stripe Made Easy"
__uri__ = "https://github.com/pydanny/dj-stripe/"
__version__ = "0.5.0"
__author__ = "Daniel Greenfeld"
__email__ = "pydanny@gmail.com"
__license__ = "BSD"
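# overridden immediately below by the trove-classifier form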
__license__ = "License :: OSI Approved :: BSD License"
__copyright__ = "Copyright 2015 Daniel Greenfeld"
# note: lexicographic string comparison, so versions such as '1.10' also
# compare below '1.6.x' and would be flagged as deprecated
if get_django_version() <= '1.6.x':
    msg = "dj-stripe deprecation notice: Django 1.6 and lower are deprecated\n" \
          "and will be removed in dj-stripe 0.6.0.\n" \
          "Reference: https://github.com/pydanny/dj-stripe/issues/173"
    warnings.warn(msg)
| 29.416667 | 81 | 0.723796 | 98 | 706 | 4.744898 | 0.55102 | 0.086022 | 0.068817 | 0.090323 | 0.124731 | 0.124731 | 0 | 0 | 0 | 0 | 0 | 0.028523 | 0.155807 | 706 | 23 | 82 | 30.695652 | 0.751678 | 0 | 0 | 0 | 0 | 0 | 0.498584 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6946eb298801b23fec7b4b5e6de31aae00f1e3a | 10,283 | py | Python | autolamella/milling.py | Chlanda-Lab/autolamella | ab135eefd56770f326f90747ef4dafebff4e8f71 | [
"MIT"
] | null | null | null | autolamella/milling.py | Chlanda-Lab/autolamella | ab135eefd56770f326f90747ef4dafebff4e8f71 | [
"MIT"
] | null | null | null | autolamella/milling.py | Chlanda-Lab/autolamella | ab135eefd56770f326f90747ef4dafebff4e8f71 | [
"MIT"
] | null | null | null | import logging
import os
import time
import numpy as np
from autolamella.acquire import (
    grab_images,
    save_reference_images,
    save_final_images,
)
from autolamella.align import realign
from autolamella.autoscript import reset_state
def milling(
    microscope,
    settings,
    stage_settings,
    my_lamella,
    pattern,  # "upper", "lower", "both"
    filename_prefix="",
    demo_mode=False,
):
    from autoscript_core.common import ApplicationServerException
    from autoscript_sdb_microscope_client.structures import StagePosition

    # Sanity-check pattern parameter
    if pattern not in ("upper", "lower", "both"):
        raise ValueError(f"Invalid pattern type:\n"
                         f"Should be \"upper\", \"lower\" or \"both\", not \"{pattern}\"")

    # Setup and realign to fiducial marker
    setup_milling(microscope, settings, stage_settings, my_lamella)
    tilt_in_radians = np.deg2rad(stage_settings["overtilt_degrees"])
    if pattern == "upper":
        microscope.specimen.stage.relative_move(StagePosition(t=-tilt_in_radians))
    elif pattern == "lower":
        microscope.specimen.stage.relative_move(StagePosition(t=+tilt_in_radians))

    # Realign three times
    for abc in "abc":
        image_unaligned = grab_images(
            microscope,
            settings,
            my_lamella,
            prefix="IB_" + filename_prefix,
            suffix=f"_0{abc}-unaligned",
        )
        realign(microscope, image_unaligned, my_lamella.fiducial_image)

    # Save the refined position to prevent gradual stage-drift
    my_lamella.fibsem_position.ion_beam.update_beam_shift()

    # Save the newly aligned image for the next alignment stage
    my_lamella.fiducial_image = grab_images(
        microscope,
        settings,
        my_lamella,  # can remove
        prefix="IB_" + filename_prefix,
        suffix="_1-aligned",
    )

    # Create and mill patterns
    if pattern == "upper" or pattern == "both":
        _milling_coords(microscope, stage_settings, my_lamella, "upper")
    if pattern == "lower" or pattern == "both":
        _milling_coords(microscope, stage_settings, my_lamella, "lower")
    # Create microexpansion joints (if applicable)
    _microexpansion_coords(microscope, stage_settings, my_lamella)

    if 'patterning_mode' in stage_settings:
        microscope.patterning.mode = stage_settings['patterning_mode']
    if not demo_mode:
        microscope.imaging.set_active_view(2)  # the ion beam view
        print("Milling pattern...")
        try:
            microscope.patterning.run()
        except ApplicationServerException:
            logging.error("ApplicationServerException: could not mill!")
    microscope.patterning.clear_patterns()
    grab_images(
        microscope,
        settings,
        my_lamella,  # can remove
        prefix="IB_" + filename_prefix,
        suffix=f"_2-after-{pattern}-milling",
    )
    return microscope
def _milling_coords(microscope, stage_settings, my_lamella, pattern):
    """Create milling pattern for lamella position."""
    # Sanity-check pattern parameter
    if pattern not in ("upper", "lower"):
        raise ValueError(f"Invalid pattern type for milling coords generation:\n"
                         f"Should be \"upper\" or \"lower\", not \"{pattern}\"")
    microscope.imaging.set_active_view(2)  # the ion beam view
    lamella_center_x, lamella_center_y = my_lamella.center_coord_realspace
    if my_lamella.custom_milling_depth is not None:
        milling_depth = my_lamella.custom_milling_depth
    else:
        milling_depth = stage_settings["milling_depth"]
    height = float(
        stage_settings["total_cut_height"] * stage_settings.get(f"percentage_roi_height_{pattern}",
                                                                stage_settings["percentage_roi_height"])
    )
    center_offset = (
        (0.5 * stage_settings["lamella_height"])
        + (stage_settings["total_cut_height"] * stage_settings["percentage_from_lamella_surface"])
        + (0.5 * height)
    )
    center_y = lamella_center_y + center_offset \
        if pattern == "upper" \
        else lamella_center_y - center_offset
    # milling_roi = microscope.patterning.create_cleaning_cross_section(
    milling_roi = microscope.patterning.create_rectangle(
        lamella_center_x,
        center_y,
        stage_settings.get(f'lamella_width_{pattern}', stage_settings["lamella_width"]),
        height,
        milling_depth,
    )
    if pattern == "upper":
        milling_roi.scan_direction = "TopToBottom"
    elif pattern == "lower":
        milling_roi.scan_direction = "BottomToTop"
    return milling_roi
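# Worked center_offset example for _milling_coords (made-up settings values):
# with lamella_height=1e-6, total_cut_height=6e-6,
# percentage_from_lamella_surface=0.1 and height=3e-6,
#   center_offset = 0.5e-6 + 0.6e-6 + 1.5e-6 = 2.6e-6 m
# above (upper pattern) or below (lower pattern) the lamella centre line.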
def _microexpansion_coords(microscope, stage_settings, my_lamella):
    """Mill microexpansion joints (TODO: add reference)"""
    if not ("microexpansion_width" in stage_settings
            and "microexpansion_distance_from_lamella" in stage_settings
            and "microexpansion_percentage_height" in stage_settings):
        return None
    microscope.imaging.set_active_view(2)  # the ion beam view
    lamella_center_x, lamella_center_y = my_lamella.center_coord_realspace
    if my_lamella.custom_milling_depth is not None:
        milling_depth = my_lamella.custom_milling_depth
    else:
        milling_depth = stage_settings["milling_depth"]
    height = float(
        (
            2 * stage_settings["total_cut_height"]
            * (stage_settings["percentage_roi_height"] + stage_settings["percentage_from_lamella_surface"])
            + stage_settings["lamella_height"]
        ) * stage_settings["microexpansion_percentage_height"]
    )
    offset_x = (stage_settings["lamella_width"] + stage_settings["microexpansion_width"]) / 2 \
        + stage_settings["microexpansion_distance_from_lamella"]

    milling_rois = []
    for scan_direction, offset_x in (("LeftToRight", -offset_x), ("RightToLeft", offset_x)):
        milling_roi = microscope.patterning.create_rectangle(
            lamella_center_x + offset_x,
            lamella_center_y,
            stage_settings["microexpansion_width"],
            height,
            milling_depth,
        )
        milling_roi.scan_direction = scan_direction
        milling_rois.append(milling_roi)
    return milling_rois
def setup_milling(microscope, settings, stage_settings, my_lamella):
    """Setup the ion beam system ready for milling."""
    system_settings = settings["system"]
    ccs_file = system_settings["application_file_cleaning_cross_section"]
    microscope = reset_state(microscope, settings, application_file=ccs_file)
    my_lamella.fibsem_position.restore_state(microscope)
    microscope.beams.ion_beam.beam_current.value = stage_settings["milling_current"]
    return microscope
def run_drift_corrected_milling(microscope, correction_interval,
                                reduced_area=None):
    """
    Parameters
    ----------
    microscope : Autoscript microscope object
    correction_interval : Time in seconds between drift correction realignment
    reduced_area : Autoscript Rectangle() object
        Describes the reduced area view in relative coordinates, with the
        origin in the top left corner.
        Default value is None, which will create a Rectangle(0, 0, 1, 1),
        which means the imaging will use the whole field of view.
    """
    from autoscript_core.common import ApplicationServerException
    from autoscript_sdb_microscope_client.structures import (GrabFrameSettings,
                                                             Rectangle)

    if reduced_area is None:
        reduced_area = Rectangle(0, 0, 1, 1)
    s = GrabFrameSettings(reduced_area=reduced_area)
    reference_image = microscope.imaging.grab_frame(s)
    # start drift corrected patterning (is a blocking function, not asynchronous)
    microscope.patterning.start()
    while microscope.patterning.state == "Running":
        time.sleep(correction_interval)
        try:
            microscope.patterning.pause()
        except ApplicationServerException:
            continue
        else:
            new_image = microscope.imaging.grab_frame(s)
            realign(microscope, new_image, reference_image)
            microscope.patterning.resume()
def mill_single_stage(
    microscope, settings, stage_settings, stage_number, my_lamella, lamella_number
):
    """Run ion beam milling for a single milling stage in the protocol."""
    filename_prefix = f"lamella{lamella_number + 1}_stage{stage_number + 1}"
    demo_mode = settings["demo_mode"]
    milling(
        microscope,
        settings,
        stage_settings,
        my_lamella,
        pattern="both",
        filename_prefix=filename_prefix,
        demo_mode=demo_mode,
    )
def mill_all_stages(
    microscope, protocol_stages, lamella_list, settings, output_dir="output_images"
):
    """Run all milling stages in the protocol."""
    if lamella_list == []:
        logging.info("Lamella sample list is empty, nothing to mill here.")
        return
    if not os.path.isdir(output_dir):
        os.mkdir(output_dir)
    for stage_number, stage_settings in enumerate(protocol_stages):
        logging.info(
            f"Protocol stage {stage_number + 1} of {len(protocol_stages)}"
        )
        for lamella_number, my_lamella in enumerate(lamella_list):
            logging.info(
                f"Lamella number {lamella_number + 1} of {len(lamella_list)}"
            )
            # save all the reference images you took creating the fiducial
            if stage_number == 0:
                save_reference_images(settings, my_lamella, lamella_number)
            mill_single_stage(
                microscope,
                settings,
                stage_settings,
                stage_number,
                my_lamella,
                lamella_number,
            )
            # If this is the very last stage, take an image
            if stage_number + 1 == len(protocol_stages):
                save_final_images(microscope, settings, lamella_number)
            reset_state(microscope, settings)
    # Return ion beam current to imaging current (20 pico-Amps)
    microscope.beams.ion_beam.beam_current.value = 20e-12
| 38.950758 | 111 | 0.665565 | 1,148 | 10,283 | 5.684669 | 0.208188 | 0.075697 | 0.033865 | 0.03034 | 0.404382 | 0.366074 | 0.325621 | 0.275054 | 0.227245 | 0.209163 | 0 | 0.004285 | 0.251094 | 10,283 | 263 | 112 | 39.098859 | 0.843137 | 0.134008 | 0 | 0.302885 | 0 | 0 | 0.137509 | 0.051169 | 0 | 0 | 0 | 0.003802 | 0 | 1 | 0.033654 | false | 0 | 0.052885 | 0 | 0.115385 | 0.004808 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6979012b22ac73cacf5e578b3aa216da2c78397 | 2,210 | py | Python | robots/SlowRobot.py | theGitRory/RoboWars | 6121f13e3569c4699a93900a8a6f45301f01a98c | [
"MIT"
] | null | null | null | robots/SlowRobot.py | theGitRory/RoboWars | 6121f13e3569c4699a93900a8a6f45301f01a98c | [
"MIT"
] | null | null | null | robots/SlowRobot.py | theGitRory/RoboWars | 6121f13e3569c4699a93900a8a6f45301f01a98c | [
"MIT"
] | 1 | 2021-12-16T22:49:29.000Z | 2021-12-16T22:49:29.000Z | import pygame
from Robot import Robot
class SlowRobot(Robot):
moveState = -15
shootState = 0
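    # Note: moveState and shootState are class attributes, shared by every
    # SlowRobot instance. With moveState starting at -15, the robot moves on
    # every update while the counter is still negative, then only on every
    # 25th update thereafter.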
def __init__(self, image, name):
super().__init__(image, name)
self.movingLeft = False
self.movingRight = True
self.movingUp = False
self.movingDown = True
def update(self):
super().update()
        SlowRobot.moveState += 1
        if SlowRobot.moveState % 25 == 0 or SlowRobot.moveState < 0:
preX = self.getRect().centerx
preY = self.getRect().centery
if self.movingLeft:
self.movingLeft = self.moveLeft()
if not self.movingLeft:
self.movingRight = True
if self.movingUp:
self.movingUp = self.moveUp()
if not self.movingUp:
self.movingDown = True
else:
self.movingDown = self.moveDown()
if not self.movingDown:
self.movingUp = True
else:
self.movingRight = self.moveRight()
if not self.movingRight:
self.movingLeft = True
if self.movingDown:
self.movingDown = self.moveDown()
if not self.movingDown:
self.movingUp = True
else:
self.movingUp = self.moveUp()
if not self.movingUp:
self.movingDown = True
if self.movingLeft and self.movingUp:
self.turnTowardsAngle(135)
elif self.movingLeft and self.movingDown:
self.turnTowardsAngle(-135)
elif self.movingRight and self.movingUp:
self.turnTowardsAngle(45)
else:
self.turnTowardsAngle(-45)
        SlowRobot.shootState += 1
        if SlowRobot.shootState % 10 == 0:
self.shoot() | 32.5 | 70 | 0.466516 | 182 | 2,210 | 5.620879 | 0.247253 | 0.117302 | 0.109482 | 0.043011 | 0.351906 | 0.242424 | 0.242424 | 0.242424 | 0.242424 | 0.242424 | 0 | 0.018456 | 0.460633 | 2,210 | 68 | 71 | 32.5 | 0.839765 | 0 | 0 | 0.365385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.038462 | 0 | 0.134615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6997f85637504050677de593bfdc5dfb24a288e | 934 | py | Python | istype/__init__.py | Cologler/typing-instancecheck-python | b4dcea88468b1ee43ebb36413b099e3e8508b3ce | [
"MIT"
] | 6 | 2018-07-08T09:38:35.000Z | 2020-06-25T13:15:02.000Z | istype/__init__.py | Cologler/typing-instancecheck-python | b4dcea88468b1ee43ebb36413b099e3e8508b3ce | [
"MIT"
] | 1 | 2018-07-08T10:12:49.000Z | 2018-07-08T11:31:18.000Z | istype/__init__.py | Cologler/istype-python | b4dcea88468b1ee43ebb36413b099e3e8508b3ce | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2017~2999 - cologler <skyoflw@gmail.com>
# ----------
#
# ----------
from typing import Iterable
from itertools import zip_longest
from .internal import TypeMatcher
from .g import isinstanceof, issubclassof
def match(args: (list, tuple), types: Iterable[type]) -> bool:
'''
check whether args match types.
example:
``` py
    match(('', 1), (str, int)) # True
```
'''
try:
if len(args) != len(types):
return False
except TypeError:
# object of type 'types' has no len()
pass
empty = object()
for item, typ in zip_longest(args, types, fillvalue=empty):
if item is empty or typ is empty:
return False
if not isinstanceof(item, typ):
return False
return True
__all__ = [
'TypeMatcher',
'isinstanceof',
'issubclassof',
'match'
]
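
# Usage sketch (extends the docstring example; assumes isinstanceof handles the
# given types as in the rest of the package):
# match((1, 'a'), (int, str))   # True
# match((1, 'a'), (int, int))   # False
# match((1,), (int, str))       # False: length mismatch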
| 19.87234 | 63 | 0.574946 | 108 | 934 | 4.916667 | 0.601852 | 0.062147 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014925 | 0.282655 | 934 | 46 | 64 | 20.304348 | 0.777612 | 0.260171 | 0 | 0.130435 | 0 | 0 | 0.06135 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0.043478 | 0.173913 | 0 | 0.391304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69b51ad69fd05add04bf2431b9d7c7e537e89f6 | 3,669 | py | Python | aetherling/space_time/ram_st.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 10 | 2018-04-03T01:51:16.000Z | 2022-02-07T04:27:26.000Z | aetherling/space_time/ram_st.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 19 | 2018-05-20T00:43:31.000Z | 2021-03-18T20:36:52.000Z | aetherling/space_time/ram_st.py | David-Durst/aetherling | 91bcf0579608ccbf7d42a7bddf90ccd4257d6571 | [
"MIT"
] | 1 | 2018-07-11T23:36:43.000Z | 2018-07-11T23:36:43.000Z | from aetherling.space_time.space_time_types import *
from aetherling.space_time.nested_counters import *
from aetherling.modules.ram_any_type import *
from aetherling.modules.term_any_type import TermAnyType
from aetherling.modules.mux_any_type import DefineMuxAnyType
from aetherling.modules.map_fully_parallel_sequential import DefineNativeMapParallel
from aetherling.helpers.nameCleanup import cleanName
from mantle.coreir.memory import getRAMAddrWidth
from mantle.common.countermod import Decode
from aetherling.modules.ram_any_type import *
from magma import *
from magma.circuit import DefineCircuitKind, Circuit
__all__ = ['DefineRAM_ST', 'RAM_ST']
@cache_definition
def DefineRAM_ST(t: ST_Type, n: int, has_reset = False, read_latency = 0) -> DefineCircuitKind:
"""
    Generate a RAM that stores n objects, each of type t.
    WE, RE and RESET control where within a t is currently being written/read.
This is different from normal magma RAMs that don't have values that take multiple clocks.
    RADDR : In(Array[log_2(n), Bit]),
    RDATA : Out(t.magma_repr()),
    WADDR : In(Array[log_2(n), Bit]),
WDATA : In(t.magma_repr()),
WE: In(Bit)
RE: In(Bit)
if has_reset:
RESET : In(Bit)
"""
class _RAM_ST(Circuit):
name = 'RAM_ST_{}_hasReset{}'.format(cleanName(str(t)), str(has_reset))
addr_width = getRAMAddrWidth(n)
IO = ['RADDR', In(Bits[addr_width]),
'RDATA', Out(t.magma_repr()),
'WADDR', In(Bits[addr_width]),
'WDATA', In(t.magma_repr()),
'WE', In(Bit),
'RE', In(Bit)
] + ClockInterface(has_ce=False, has_reset=has_reset)
@classmethod
def definition(cls):
# each valid clock, going to get a magma_repr in
# read or write each one of those to a location
rams = [DefineRAMAnyType(t.magma_repr(), t.valid_clocks(), read_latency=read_latency)() for _ in range(n)]
read_time_position_counter = DefineNestedCounters(t, has_cur_valid=True, has_ce=True, has_reset=has_reset)()
read_valid_term = TermAnyType(Bit)
read_last_term = TermAnyType(Bit)
write_time_position_counter = DefineNestedCounters(t, has_cur_valid=True, has_ce=True, has_reset=has_reset)()
write_valid_term = TermAnyType(Bit)
write_last_term = TermAnyType(Bit)
read_selector = DefineMuxAnyType(t.magma_repr(), n)()
for i in range(n):
wire(cls.WDATA, rams[i].WDATA)
wire(write_time_position_counter.cur_valid, rams[i].WADDR)
wire(read_selector.data[i], rams[i].RDATA)
wire(read_time_position_counter.cur_valid, rams[i].RADDR)
write_cur_ram = Decode(i, cls.WADDR.N)(cls.WADDR)
wire(write_cur_ram & write_time_position_counter.valid, rams[i].WE)
wire(cls.RADDR, read_selector.sel)
wire(cls.RDATA, read_selector.out)
wire(cls.WE, write_time_position_counter.CE)
wire(cls.RE, read_time_position_counter.CE)
wire(read_time_position_counter.valid, read_valid_term.I)
wire(read_time_position_counter.last, read_last_term.I)
wire(write_time_position_counter.valid, write_valid_term.I)
wire(write_time_position_counter.last, write_last_term.I)
if has_reset:
wire(cls.RESET, write_time_position_counter.RESET)
wire(cls.RESET, read_time_position_counter.RESET)
return _RAM_ST
def RAM_ST(t: ST_Type, n: int, has_reset: bool = False) -> Circuit:
    return DefineRAM_ST(t, n, has_reset)()
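
# Usage sketch (the type is hypothetical; ST_SSeq/ST_Int are the space-time
# types used elsewhere in aetherling):
# ram_def = DefineRAM_ST(ST_SSeq(4, ST_Int()), n=8, has_reset=True)
# ram = ram_def()  # instantiate inside a circuit definition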
| 42.662791 | 121 | 0.67021 | 505 | 3,669 | 4.6 | 0.247525 | 0.067155 | 0.106328 | 0.07232 | 0.303487 | 0.238485 | 0.226431 | 0.148945 | 0.095566 | 0.095566 | 0 | 0.001063 | 0.230853 | 3,669 | 85 | 122 | 43.164706 | 0.822112 | 0.131098 | 0 | 0.035714 | 0 | 0 | 0.019802 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053571 | false | 0 | 0.214286 | 0 | 0.357143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69d1551c5ade2888af4e328fc5206b21287212d | 1,227 | py | Python | yhteenlasku/Python/rose_images_plus.py | samuntiede/valokuvamatikka | adab47a93534bf0f83f39603a8744bf6e5923da4 | [
"Apache-2.0"
] | null | null | null | yhteenlasku/Python/rose_images_plus.py | samuntiede/valokuvamatikka | adab47a93534bf0f83f39603a8744bf6e5923da4 | [
"Apache-2.0"
] | null | null | null | yhteenlasku/Python/rose_images_plus.py | samuntiede/valokuvamatikka | adab47a93534bf0f83f39603a8744bf6e5923da4 | [
"Apache-2.0"
] | null | null | null | # Process two rose images by summing them together
#
# Samuli Siltanen April 2021
# Python translation: Ville Tilvis 2021
import numpy as np
import matplotlib.pyplot as plt
# Read in the images from disk
im1 = plt.imread('../_kuvat/ruusu1.png')
im2 = plt.imread('../_kuvat/ruusu2.png')
print('Images read')
# Normalize image elements to the range [0, 1]
MAX = np.max([np.max(im1),np.max(im2)])
im1 = im1/MAX
im2 = im2/MAX
print('Images normalized')
# Gamma correction and thresholds for brightening the image
gammacorrB = .6
blackthr = .03
whitethr = .95
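
# These three parameters drive the enhancement applied after averaging below:
# blackthr clips the darkest values to black, whitethr saturates the brightest
# values to white, and gammacorrB applies out = in**gamma (gamma < 1 brightens
# the midtones).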
# Compute the average of the two images
im3 = (im1+im2)/2
# Enhance the image: rescale, threshold and gamma-correct
im3 = im3 - np.min(im3)
im3 = im3 / np.max(im3)
blackthrarray = blackthr * np.ones(im3.shape)
im3 = np.maximum(im3, blackthrarray) - blackthrarray
im3 = im3 / (whitethr * np.max(im3))
im3 = np.minimum(im3, np.ones(im3.shape))
im3 = np.power(im3, gammacorrB)
print('New image ready')
# Save the result to disk
plt.imsave('../_kuvat/ruusu12.png', im3)
print('Wrote new image to file')
# Check what the image looks like
plt.figure(1)
plt.clf()
plt.axis('off')
plt.gcf().set_dpi(600)
plt.imshow(im3)
| 22.309091 | 55 | 0.727791 | 194 | 1,227 | 4.582474 | 0.515464 | 0.033746 | 0.026997 | 0.031496 | 0.042745 | 0.042745 | 0 | 0 | 0 | 0 | 0 | 0.049242 | 0.139364 | 1,227 | 54 | 56 | 22.722222 | 0.792614 | 0.365118 | 0 | 0 | 0 | 0 | 0.170604 | 0.027559 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69d82f810cb6e55ebc41352a4d82679e4b15e3c | 5,712 | py | Python | likebee/core/admin.py | klebercode/likebee | 0a0dd6368ef43b53fb8315eb5eb14663067ef07c | [
"MIT"
] | 1 | 2019-11-05T15:00:51.000Z | 2019-11-05T15:00:51.000Z | likebee/core/admin.py | klebercode/likebee | 0a0dd6368ef43b53fb8315eb5eb14663067ef07c | [
"MIT"
] | null | null | null | likebee/core/admin.py | klebercode/likebee | 0a0dd6368ef43b53fb8315eb5eb14663067ef07c | [
"MIT"
] | null | null | null | from django.contrib import admin
from django.db.models import Q
from django.utils.translation import ugettext_lazy as _
from django.utils.html import format_html
from datetime import date, datetime
from django_summernote.admin import SummernoteModelAdmin
from mptt.admin import MPTTModelAdmin, DraggableMPTTAdmin
from .models import Priority, Status, Sprint, Project, Task, TaskType
from ..accounts.models import Profile
def make_done(modeladmin, request, queryset):
status = Status.objects.filter(done=True).first()
queryset.update(status=status, done=True, done_on=datetime.now())
make_done.short_description = "Mark selected tasks as done"
def make_archive(modeladmin, request, queryset):
if request.user.is_superuser:
queryset.update(archived=True, archived_on=datetime.now())
make_archive.short_description = "Mark selected tasks as archived"
@admin.register(Task)
class TaskAdmin(SummernoteModelAdmin, DraggableMPTTAdmin):
# change_list_template = 'admin/task_change_list.html'
mptt_indent_field = 'title'
list_per_page = 100
list_display = [
'tree_actions', 'indented_title',
'owner_thumb', 'colored_priority', 'colored_status',
'colored_task_type', 'formatted_finish', 'project', 'sprint'
]
list_display_links = [
'indented_title',
]
list_filter = [
('sprint', admin.RelatedFieldListFilter),
('owner', admin.RelatedFieldListFilter),
('project', admin.RelatedFieldListFilter),
('status', admin.RelatedFieldListFilter),
'archived',
]
search_fields = ['title', 'description']
summernote_fields = ['description']
actions = [make_done, make_archive]
def get_exclude(self, request, obj=None):
excluded = super().get_exclude(request, obj) or []
if not request.user.is_superuser:
return excluded + ['done', 'done_on', 'archived', 'archived_on']
return excluded
def get_queryset(self, request):
qs = super().get_queryset(request)
if request.user.is_superuser:
return qs
# return qs.filter(Q(status=None) | Q(status__archive=False))
return qs.filter(archived=False)
def formatted_finish(self, obj):
if not obj.finish_on:
return ''
color = '#373A3C'
status_done = None
status = None
if obj.status:
status_done = obj.status.done
status = obj.status
if (obj.finish_on.date() < date.today()) and (
not status_done or not status):
color = '#E0465E'
return format_html(
'<span style="color: {}; font-weight: bold;">{}</span>'.format(
color, obj.finish_on.strftime('%b %-d')))
formatted_finish.allow_tags = True
formatted_finish.admin_order_field = 'finish_on'
    formatted_finish.short_description = _('Date')
def colored_priority(self, obj):
if obj.priority:
name = obj.priority.name
color = obj.priority.color
color_text = obj.priority.color_text
else:
name = '-'
color = '#C4C4C4'
color_text = '#FFFFFF'
return format_html(
'<div style="background:{}; color:{}; '
'text-align:center; padding: 4px;">{}</div>'.format(
color, color_text, name))
colored_priority.allow_tags = True
colored_priority.admin_order_field = 'priority'
    colored_priority.short_description = _('Priority')
def colored_status(self, obj):
if obj.status:
name = obj.status.name
color = obj.status.color
color_text = obj.status.color_text
else:
name = '-'
color = '#C4C4C4'
color_text = '#FFFFFF'
return format_html(
'<div style="background:{}; color:{}; '
'text-align:center; padding: 4px;">{}</div>'.format(
color, color_text, name))
colored_status.allow_tags = True
colored_status.admin_order_field = 'status'
colored_status.short_description = _('Status')
def colored_task_type(self, obj):
if obj.task_type:
name = obj.task_type.name
color = obj.task_type.color
color_text = obj.task_type.color_text
else:
name = '-'
color = '#C4C4C4'
color_text = '#FFFFFF'
return format_html(
'<div style="background:{}; color:{}; '
'text-align:center; padding: 4px;">{}</div>'.format(
color, color_text, name))
colored_task_type.allow_tags = True
colored_task_type.admin_order_field = 'task_type'
    colored_task_type.short_description = _('Type')
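    # The three colored_* methods above share the same HTML cell; a common
    # helper such as this sketch (illustrative, not wired in) would remove the
    # duplication:
    #
    # def _colored_cell(self, name, color, color_text):
    #     return format_html(
    #         '<div style="background:{}; color:{}; '
    #         'text-align:center; padding: 4px;">{}</div>'.format(
    #             color, color_text, name))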
def owner_thumb(self, obj):
if obj.owner:
profile = Profile.objects.filter(user=obj.owner)
for item in profile:
if item.photo:
img = item.photo_thumbnail.url
else:
img = None
if img:
return format_html(
'<img src="{0}" width="35" />'.format(img)
)
owner = obj.owner
else:
owner = ''
return '{}'.format(owner)
owner_thumb.allow_tags = True
owner_thumb.admin_order_field = 'owner'
    owner_thumb.short_description = _('Owner')
class Media:
css = {
'all': ('css/likebee.css',)
}
admin.site.register(Priority)
admin.site.register(Status)
admin.site.register(Sprint)
admin.site.register(Project)
admin.site.register(TaskType)
| 31.558011 | 76 | 0.608193 | 624 | 5,712 | 5.363782 | 0.227564 | 0.040335 | 0.025097 | 0.014341 | 0.177771 | 0.153272 | 0.126382 | 0.126382 | 0.126382 | 0.126382 | 0 | 0.006308 | 0.278361 | 5,712 | 180 | 77 | 31.733333 | 0.805677 | 0.019608 | 0 | 0.22069 | 0 | 0 | 0.140075 | 0.011256 | 0 | 0 | 0 | 0 | 0 | 1 | 0.062069 | false | 0 | 0.062069 | 0 | 0.268966 | 0.027586 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d69f54878d575fc34023843ff11b4582ac0a48da | 1,703 | py | Python | shop/coreapp/admin.py | bsperezb/Django-Ecomerce | f061798fd6528997ec7c1874ab0a5bdec03137c6 | [
"MIT"
] | 1 | 2021-09-02T03:48:44.000Z | 2021-09-02T03:48:44.000Z | shop/coreapp/admin.py | bsperezb/Django-Ecomerce | f061798fd6528997ec7c1874ab0a5bdec03137c6 | [
"MIT"
] | null | null | null | shop/coreapp/admin.py | bsperezb/Django-Ecomerce | f061798fd6528997ec7c1874ab0a5bdec03137c6 | [
"MIT"
] | null | null | null | from django.contrib import admin
from .models import Address, Coupon, Item, Order, OrderItem, Payment, Session
def make_refund_accepted(modeladmin, request, queryset):
queryset.update(refund_requested=False, refund_granted=True)
make_refund_accepted.short_description = "Update orders to refund granted"
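
# Actions like the one above appear in the admin change-list dropdown; a
# symmetric action could revoke a refund (sketch only, not registered below):
# def make_refund_revoked(modeladmin, request, queryset):
#     queryset.update(refund_requested=True, refund_granted=False)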
class OrderAdmin(admin.ModelAdmin):
list_display = [
"session",
"user",
"ordered",
"being_delivered",
"received",
"refund_requested",
"refund_granted",
"billing_address",
"shipping_address",
"payment",
"coupon",
]
list_filter = [
"ordered",
"being_delivered",
"received",
"refund_requested",
"refund_granted",
]
list_display_links = [
"session",
"billing_address",
"payment",
"coupon",
"shipping_address",
]
search_fields = ["user__username", "reference", "session__session_number"]
actions = [make_refund_accepted]
class AddressAdmin(admin.ModelAdmin):
list_display = [
"user",
"street_address",
"apartment_address",
"country",
"zip",
"address_type",
"default",
]
list_filter = ["address_type", "default", "country"]
search_fields = ["user", "street_address", "apartment_address", "zip"]
class SessionAdmin(admin.ModelAdmin):
readonly_fields = ("start_date",)
admin.site.register(Item)
admin.site.register(Order, OrderAdmin)
admin.site.register(OrderItem)
admin.site.register(Payment)
admin.site.register(Address, AddressAdmin)
admin.site.register(Coupon)
admin.site.register(Session, SessionAdmin)
| 22.706667 | 78 | 0.63946 | 163 | 1,703 | 6.435583 | 0.380368 | 0.060057 | 0.113441 | 0.049571 | 0.171592 | 0.108675 | 0.108675 | 0.108675 | 0 | 0 | 0 | 0 | 0.23899 | 1,703 | 74 | 79 | 23.013514 | 0.809414 | 0 | 0 | 0.428571 | 0 | 0 | 0.259542 | 0.013506 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017857 | false | 0 | 0.035714 | 0 | 0.267857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6a34d6160e9405d0d8eb8de229e4504ff3e1406 | 18,624 | py | Python | slugdetection/Data_Engineering.py | dapolak/acse-9-independent-research-project-dapolak | 5ae2cfa7f63c739d419b1362c4aede451ae83eb1 | [
"MIT"
] | null | null | null | slugdetection/Data_Engineering.py | dapolak/acse-9-independent-research-project-dapolak | 5ae2cfa7f63c739d419b1362c4aede451ae83eb1 | [
"MIT"
] | null | null | null | slugdetection/Data_Engineering.py | dapolak/acse-9-independent-research-project-dapolak | 5ae2cfa7f63c739d419b1362c4aede451ae83eb1 | [
"MIT"
] | 2 | 2019-08-29T16:14:37.000Z | 2019-08-30T08:52:03.000Z | # -*- coding: utf-8 -*-
"""
Part of slugdetection package
@author: Deirdree A Polak
github: dapolak
"""
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
from pyspark.sql import functions as F
from pyspark.sql.window import Window
class Data_Engineering:
"""
Tools to crop and select the raw well data. Converts data from a Spark dataframe to Pandas.
Parameters
----------
well : Spark data frame
data frame containing the pressure, temperature and choke data from a well.
Attributes
----------
well_df : Spark data frame
data frame containing all of the pressure, temperature and choke data from a well. None values have
been dropped
well_og : Spark data frame
original data frame copy, with None values
features : list of strings
List of the features of the well, default "WH_P", "DH_P", "WH_T", "DH_T" and "WH_choke"
thresholds : dictionary
Dictionary with important features as keys, and their lower and upper thresholds as values. This is
used for cropping out of range values. The set_thresholds method allows user to change or add values.
"""
def __init__(self, well):
self.well_df = well.na.drop()
self.well_og = well
self.features = ["WH_P", "DH_P", "WH_T", "DH_T", "WH_choke"]
self.thresholds = {"WH_P": [0, 100],
"DH_P": [90, 150],
"WH_T": [0, 100],
"DH_T": [75, 95],
"WH_choke": [-1000, 1000]}
def stats(self):
"""
Describes the data in terms of the most common statistics, such as mean, std, max, min and count
Returns
-------
stats : Spark DataFrame
Stats of data frame attribute well_df
"""
return self.well_df.describe()
def shape(self):
"""
Describes the shape of the Spark data frame well_df, with number of rows and number of columns
Returns
-------
shape : int, int
number of rows, number of columns
"""
return self.well_df.count(), len(self.well_df.columns)
def reset_well_df(self):
"""
Resets Spark data frame attribute well_df to original state by overriding the well_df attribute
"""
self.well_df = self.well_og.na.drop()
def timeframe(self, start="01-JAN-01 00:01", end="01-JUL-19 00:01", date_format="dd-MMM-yy HH:mm",
datetime_format='%d-%b-%y %H:%M'):
"""
For Spark DataFrame well_df attribute, crops the data to the inputted start and end date
Parameters
----------
start : str (optional)
Wanted start date of cropped data frame (default is "01-JAN-01 00:01")
end : str (optional)
            Wanted end date of cropped data frame (default is "01-JUL-19 00:01")
date_format : str (optional)
String format of inputted dates (default is "dd-MMM-yy HH:mm")
datetime_format : str (optional)
C standard data format for datetime (default is '%d-%b-%y %H:%M')
"""
d1 = datetime.strptime(start, datetime_format)
d2 = datetime.strptime(end, datetime_format)
        assert max((d1, d2)) == d2, "End date must be later than start date"
# Crop to start date
self.well_df = self.well_df.filter(
F.col("ts") > F.to_timestamp(F.lit(start), format=date_format).cast('timestamp'))
# Crop to end date
self.well_df = self.well_df.filter(
F.col("ts") < F.to_timestamp(F.lit(end), format=date_format).cast('timestamp'))
return
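    # Example (dates are illustrative):
    #   de = Data_Engineering(well)
    #   de.timeframe(start="01-FEB-18 00:00", end="01-MAR-18 00:00")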
def set_thresholds(self, variable, max_, min_):
"""
Sets the thresholds value of a variable
Parameters
----------
variable : str
Name of variable, for example "WH_P"
max_ : float
Upper threshold of variable
min_ : float
Lower threshold of variable
"""
assert isinstance(min_, float), "Minimum threshold must be a number"
assert isinstance(max_, float), "Maximum threshold must be a number"
assert max(min_, max_) == max_, "Maximum value must be larger than min"
self.thresholds[variable] = [min_, max_]
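
    # Example (values are illustrative):
    #   de.set_thresholds("WH_P", max_=120.0, min_=0.0)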
def data_range(self, verbose=True):
"""
Ensures variables within the dataframe well_df are within range, as set by the attribute thresholds. The out of
range values are replaced by the previous in range value
Parameters
----------
verbose : bool (optional)
whether to allow for verbose (default is True)
"""
window = Window.orderBy("ts") # Spark Window ordering data frames by time
lag_names = [] # Empty list to store column names
for well_columns in self.well_df.schema.names: # loop through all components (columns) of data
if well_columns != "ts": # no tresholding for timestamp
if well_columns in self.thresholds.keys():
tresh = self.thresholds[well_columns] # set thresholds values for parameter from dictionary
else:
tresh = [-1000, 1000] # if feature not in thresholds attribute, set large thresholds
if verbose:
print(well_columns, "treshold is", tresh)
                for i in range(1, 10):  # naive approach: create a large number of lagged feature columns
lag_col = well_columns + "_lag_" + str(i)
lag_names.append(lag_col)
self.well_df = self.well_df.withColumn(lag_col, F.lag(well_columns, i, 0).over(window))
for i in range(8, 0, -1):
lag_col = well_columns + "_lag_" + str(i)
prev_lag = well_columns + "_lag_" + str(i + 1)
# apply minimum and maximum threshold to column, and replace out of range values with previous value
self.well_df = self.well_df.withColumn(lag_col,
F.when(F.col(lag_col) < tresh[0],
F.col(prev_lag))
.otherwise(F.col(lag_col)))
self.well_df = self.well_df.withColumn(lag_col,
F.when(F.col(lag_col) > tresh[1],
F.col(prev_lag)).otherwise(F.col(lag_col)))
# apply minimum and maximum threshold to column, and replace out of range values with previous value
lag_col = well_columns + "_lag_1"
self.well_df = self.well_df.withColumn(well_columns,
F.when(F.col(well_columns) < tresh[0],
F.col(lag_col))
.otherwise(F.col(well_columns)))
self.well_df = self.well_df.withColumn(well_columns,
F.when(F.col(well_columns) > tresh[1],
F.col(lag_col))
.otherwise(F.col(well_columns)))
self.well_df = self.well_df.drop(*lag_names)
return
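    # Note: data_range keeps timestamps intact and only replaces out-of-range
    # sensor values with the most recent in-range reading (up to 9 samples back).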
def clean_choke(self, method="99"):
"""
Method to clean WH_choke variables values from the well_df Spark data frame attribute
Parameters
----------
method : str (optional)
            Method to clean out WH_choke values. "99" suppresses all the data rows where the choke is lower
            than 99%. "no_choke" sets to None all the rows where the WH_choke value is 0 or where it is
            non-constant, i.e. the differential is larger than 1 or the second differential is larger than 3
            (default is "99").
"""
        assert ("WH_choke" in self.well_df.schema.names), 'In order to clean out WH choke data, WH choke column ' \
                                                          'in well_df must exist'
if method == "99":
self.well_df = self.well_df.where("WH_choke > 99") # Select well_df only where WH is larger than 99%
elif method == "no_choke":
# Select well_df only where WH choke is constant
window = Window.orderBy("ts") # Window ordering by time
# Create differential and second differential columns for WH choke
self.well_df = self.well_df.withColumn("WH_choke_lag", F.lag("WH_choke", 1, 0).over(window))
self.well_df = self.well_df.withColumn("WH_choke_diff", F.abs(F.col("WH_choke") - F.col("WH_choke_lag")))
self.well_df = self.well_df.withColumn("WH_choke_lag2", F.lag("WH_choke_lag", 1, 0).over(window))
self.well_df = self.well_df.withColumn("WH_choke_diff2", F.abs(F.col("WH_choke") - F.col("WH_choke_lag2")))
for col in self.well_df.schema.names:
# Set all rows with WH choke less than 10 to 0
self.well_df = self.well_df.withColumn(col, F.when(F.col("WH_choke") < 10, None).
otherwise(F.col(col)))
# Select well_df where WH choke gradient is less than 1, set rows with high gradient to None
self.well_df = self.well_df.withColumn(col,
F.when(F.col("WH_choke_diff") > 1, None).
otherwise(F.col(col)))
# Select well_df where WH choke curvature is less than 3, set rows with higher values to None
self.well_df = self.well_df.withColumn(col,
F.when(F.col("WH_choke_diff2") > 3, None).
otherwise(F.col(col)))
else:
            print("Clean choke method inputted is not known. Try '99' or 'no_choke'")
return
def df_toPandas(self, stats=True, **kwargs):
"""
Creates a copy of Spark data frame attribute well_df in Pandas format. Also calculates and stores the
mean and standard deviations of each column in the Pandas data frame in the class attributes means and stds.
Parameters
----------
stats : bool (optional)
Bool asserting whether or not to calculate means and standard deviations of each columns/variable (default
is True)
kwargs :
features : list of str
feature names/ column headers to include in pandas data frame pd_df attribute
Returns
-------
pd_df : Pandas data frame
Pandas data frame of original well_df Spark data frame
"""
if "features" in kwargs.keys(): # if features specified in kwargs, update feature attribute
self.features = kwargs["features"]
cols = self.features.copy()
cols.append("ts")
print("Converting Spark data frame to Pandas")
self.pd_df = self.well_df.select(cols).toPandas() # convert selected columns of data frame to Pandas
print("Converted")
if stats: # If stats is true, calculate and store mean and std as attributes
self.means = pd.DataFrame([[0 for i in range(len(self.features))]], columns=self.features)
self.stds = pd.DataFrame([[0 for i in range(len(self.features))]], columns=self.features)
for f in self.features:
self.means[f] = self.pd_df[f].mean() # Compute and store mean of column in means attribute
self.stds[f] = self.pd_df[f].std() # Compute and store std of column in stds attribute
return self.pd_df
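
    # Example (feature list is illustrative):
    #   pd_df = de.df_toPandas(features=["WH_P", "DH_P"])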
def standardise(self, df):
"""
Standardises the data based on the attributes means and stds as calculated when the original dataframe was
converted to Pandas.
Parameters
----------
df : Pandas data frame
Input data frame to be standardised
Returns
-------
df : Pandas data frame
Input data frame standardised
"""
for feature_ in self.means.columns: # For all features
if (feature_ != 'ts') & (feature_ in df.columns):
avg = self.means[feature_][0] # get mean for feature from means attribute
                std = self.stds[feature_][0]  # get std for feature from stds attribute
df[feature_] -= avg # Standardise column
df[feature_] /= std
return df
def plot(self, start=0, end=None, datetime_format="%d-%b-%y %H:%M",
title="Well Pressure and Temperature over time", ax2_label="Temperature in C // Choke %", **kwargs):
"""
Simple plot function to plot the pd_df pandas data frame class attribute.
Parameters
----------
start : int or str (optional)
Index or date at which to start plotting the data (default is 0)
end : int or str (optional)
Index or date at which to stop plotting the data (default is None)
datetime_format : str (optional)
C standard data format for datetime (default is '%d-%b-%y %H:%M')
title : str (optional)
Plot title (default is "Well Pressure and Temperature over time")
ax2_label : str (optional)
Label for second axis, for non pressure features (default is "Temperature in C // Choke %")
kwargs :
features: list of str
List of features to include in the plot
Returns
-------
: Figure
data plot figure
"""
assert hasattr(self, "pd_df"), "Pandas data frame pd_df attribute must exist"
assert not self.pd_df.empty, "Pandas data frame cannot be empty"
# If features has been specified in kwargs passed
if "features" in kwargs.keys(): # if only selected features
self.features = kwargs["features"]
for f in self.features: # Check features exist
assert (f in self.pd_df.columns), f + "must be contained in pd_df"
if isinstance(start, int): # If start date inputted as an index
assert start >= 0, "Start index must be positive"
assert start <= len(self.pd_df), "Start index must be less than the last index of pd_df attribute"
if isinstance(end, int): # If start date inputted as an index
assert end >= 0, "End index must be positive"
if isinstance(start, str): # If a string has been passed for the start date
date = datetime.strptime(start, datetime_format)
assert np.any(self.pd_df.isin([date])), "Start time must exist in pandas data frame"
start = self.pd_df['ts'][self.pd_df['ts'].isin([date])].index.tolist()[0] # Get start date as in index
if isinstance(end, str): # If a string has been passed for the end date
date = datetime.strptime(end, datetime_format)
assert np.any(self.pd_df.isin([date])), "End time must exist in pandas data frame"
end = self.pd_df['ts'][self.pd_df['ts'].isin([date])].index.tolist()[0] # Get end date as in index
if end is not None: # If end index/date has been specified
            assert max((start, end)) == end, "End date must be later than start date"
fig, ax = plt.subplots(1, 1, figsize=(30, 12)) # Create subplot
ax2 = ax.twinx() # Instantiate secondary axis that shares the same x-axis
lines = [] # Create empty list to store lines and corresponding labels
colours = ['C' + str(i) for i in range(len(self.features))] # Create list of colour for plots lines
for col, c in zip(self.features, colours):
if col[-1] == "P": # If pressure, plot on main axis
a, = ax.plot(self.pd_df["ts"][start:end], self.pd_df[col][start:end], str(c) + ".", label=col)
ax.set_ylabel("Pressure in BarG")
lines.append(a)
else: # For other features, like Temperature and Choke, plot on secondary axis
a, = ax2.plot(self.pd_df["ts"][start:end], self.pd_df[col][start:end], c + '.', label=col)
ax2.set_ylabel(ax2_label)
lines.append(a)
ax.legend(lines, [l.get_label() for l in lines])
ax.set_xlabel("Time")
ax.grid(True, which='both')
ax.set_title(title)
return fig
def confusion_mat(cm, labels, title='Confusion Matrix', cmap='RdYlGn', **kwargs):
"""
    Simple confusion matrix plotting method. Inspired by the Scikit-Learn confusion matrix plot example.
Parameters
----------
cm : numpy array or list
Confusion matrix as outputted by Scikit Learn Confusion Matrix method.
labels : list of str
Labels to use on the plot of the Confusion Matrix. Must match number of rows in the confusion matrix.
title : str (optional)
Title that will be printed above confusion matrix plot
cmap : str (optional)
Colour Map of confusion matrix
kwargs :
figsize : tuple of int or int
Matplotlib key word to set size of plot
Returns
-------
: Figure
confusion matrix figure
"""
    assert (len(labels) == len(cm[0])), "There must be the same number of columns in the confusion matrix as " \
                                        "there are labels available"
    # Plot confusion matrix
    if "figsize" in kwargs.keys():
        fig, ax = plt.subplots(figsize=kwargs["figsize"])
    else:
        fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]), xticklabels=labels, yticklabels=labels,
title=title, ylabel='True label', xlabel='Predicted label')
# Loop over data dimensions and create text annotations.
fmt = '.2f'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return fig
| 44.342857 | 120 | 0.574152 | 2,452 | 18,624 | 4.26876 | 0.165171 | 0.032674 | 0.039171 | 0.022738 | 0.348524 | 0.289099 | 0.236171 | 0.211713 | 0.195089 | 0.154772 | 0 | 0.012026 | 0.330273 | 18,624 | 419 | 121 | 44.448687 | 0.827147 | 0.381873 | 0 | 0.195531 | 0 | 0 | 0.130135 | 0 | 0 | 0 | 0 | 0 | 0.083799 | 1 | 0.067039 | false | 0 | 0.03352 | 0 | 0.156425 | 0.022346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6a35d2bfc2590a217cfcb8cbdd8bde2a6488383 | 2,632 | py | Python | vietocr/predict.py | anhtv26062000/vietocr | b14a7d14cc37a969f73b27b2946b8680672c0fe5 | [
"Apache-2.0"
] | null | null | null | vietocr/predict.py | anhtv26062000/vietocr | b14a7d14cc37a969f73b27b2946b8680672c0fe5 | [
"Apache-2.0"
] | null | null | null | vietocr/predict.py | anhtv26062000/vietocr | b14a7d14cc37a969f73b27b2946b8680672c0fe5 | [
"Apache-2.0"
] | null | null | null | import os
import time
import yaml
import argparse
from PIL import Image
import matplotlib.pyplot as plt
from vietocr.tool.predictor import Predictor
from vietocr.tool.config import Cfg
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--img", required=True, help="foo help")
parser.add_argument("--config", required=True, help="foo help")
args = parser.parse_args()
config = Cfg.load_config_from_file(args.config)
config[
"vocab"
] = " !\"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\xB0\
\xB2\xC0\xC1\xC2\xC3\xC8\xC9\xCA\xCC\xCD\xD0\xD2\xD3\xD4\xD5\xD6\xD9\xDA\xDC\xDD\
\xE0\xE1\xE2\xE3\xE8\xE9\xEA\xEC\xED\xF0\xF2\xF3\xF4\xF5\xF6\xF9\xFA\xFC\xFD\u0100\
\u0101\u0102\u0103\u0110\u0111\u0128\u0129\u014C\u014D\u0168\u0169\u016A\u016B\u01A0\
\u01A1\u01AF\u01B0\u1EA0\u1EA1\u1EA2\u1EA3\u1EA4\u1EA5\u1EA6\u1EA7\u1EA8\u1EA9\u1EAA\
\u1EAB\u1EAC\u1EAD\u1EAE\u1EAF\u1EB0\u1EB1\u1EB2\u1EB3\u1EB4\u1EB5\u1EB6\u1EB7\u1EB8\
\u1EB9\u1EBA\u1EBB\u1EBC\u1EBD\u1EBE\u1EBF\u1EC0\u1EC1\u1EC2\u1EC3\u1EC4\u1EC5\u1EC6\
\u1EC7\u1EC8\u1EC9\u1ECA\u1ECB\u1ECC\u1ECD\u1ECE\u1ECF\u1ED0\u1ED1\u1ED2\u1ED3\u1ED4\
\u1ED5\u1ED6\u1ED7\u1ED8\u1ED9\u1EDA\u1EDB\u1EDC\u1EDD\u1EDE\u1EDF\u1EE0\u1EE1\u1EE2\
\u1EE3\u1EE4\u1EE5\u1EE6\u1EE7\u1EE8\u1EE9\u1EEA\u1EEB\u1EEC\u1EED\u1EEE\u1EEF\u1EF0\
\u1EF1\u1EF2\u1EF3\u1EF4\u1EF5\u1EF6\u1EF7\u1EF8\u1EF9\u2013\u2014\u2019\u201C\u201D\
\u2026\u20AC\u2122\u2212"
print(config)
detector = Predictor(config)
# Option for predicting folder images
img_list = os.listdir(args.img)
img_list = sorted(img_list)
f_pre = open("./test_seq.txt", "w+")
# new output <name>\t<gtruth>\t<predict>
# f_gt = open("./gt_word.txt", "r")
# lines = [line.strip("\n") for line in f_gt if line != "\n"]
# start_time = time.time()
# for img in lines:
# name, gt = img.split("\t")
# img_path = args.img + name
# image = Image.open(img_path)
# s, prob = detector.predict(image, return_prob=True)
# res = name + "\t" + gt + "\t" + s + "\t" + str(prob) + "\n"
# f_pre.write(res)
# runtime = time.time() - start_time
# print("FPS:", len(img_list) / runtime)
start_time = time.time()
for img in img_list:
        img_path = os.path.join(args.img, img)  # robust whether or not args.img has a trailing separator
image = Image.open(img_path)
s = detector.predict(image)
print(img_path, "-----", s)
res = img + "\t" + s + "\n"
f_pre.write(res)
    runtime = time.time() - start_time
    print("FPS:", len(img_list) / runtime)
    f_pre.close()  # release the results file handle
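
# Invocation sketch (paths are placeholders):
#   python predict.py --img ./test_images --config ./config/base.yml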
if __name__ == "__main__":
main()
| 34.631579 | 111 | 0.665274 | 387 | 2,632 | 4.426357 | 0.609819 | 0.024518 | 0.014011 | 0.022183 | 0.154116 | 0.127262 | 0.101576 | 0.072388 | 0.072388 | 0.072388 | 0 | 0.128345 | 0.162234 | 2,632 | 75 | 112 | 35.093333 | 0.648526 | 0.197948 | 0 | 0 | 0 | 0 | 0.035305 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.177778 | 0 | 0.2 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6a7a396afccca6f90d97a49c348647708bf30b9 | 9,790 | py | Python | MultiAtlasSegmenter/MultiAtlasSegmentation/EvaluateSegmentation.py | mabelzunce/MuscleSegmentation | 390737ca1853e3c142a4fb7e186bc8b33bc4ade4 | [
"MIT"
] | null | null | null | MultiAtlasSegmenter/MultiAtlasSegmentation/EvaluateSegmentation.py | mabelzunce/MuscleSegmentation | 390737ca1853e3c142a4fb7e186bc8b33bc4ade4 | [
"MIT"
] | null | null | null | MultiAtlasSegmenter/MultiAtlasSegmentation/EvaluateSegmentation.py | mabelzunce/MuscleSegmentation | 390737ca1853e3c142a4fb7e186bc8b33bc4ade4 | [
"MIT"
] | null | null | null | #! python3
# Multi-atlas segmentation scheme trying to give a platform to do tests before translating them to the plugin.
from __future__ import print_function
from GetMetricFromElastixRegistration import GetFinalMetricFromElastixLogFile
from MultiAtlasSegmentation import MultiAtlasSegmentation
from ApplyBiasCorrection import ApplyBiasCorrection
import SimpleITK as sitk
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import SitkImageManipulation as sitkIm
import winshell
import numpy as np
import matplotlib.pyplot as plt
import sys
import os
# DATA FOLDERS:
case = "107"
basePath = "D:\Martin\ImplantMigrationStudy\\" + case + "\\"
postopImageNames = basePath + case + '_Migration_ContralateralPostopHemiPelvis.mhd'
followupImageNames = basePath + case + '_Migration_ContralateralFollowupHemiPelvis.mhd'
#postopImageNames = basePath + case + '_Migration_PostopPelvis.mhd'
#followupImageNames = basePath + case + '_Migration_FollowupPelvis.mhd'
#postopImageNames = basePath + case + '_Migration_PostopBone.mhd'
#followupImageNames = basePath + case + '_Migration_FollowupBone.mhd'
# READ DATA
postopImage = sitk.ReadImage(postopImageNames) # This will be the reference
followupImage = sitk.ReadImage(followupImageNames) # This will be the segmented
# BINARIZE THE IMAGES:
postopImage = sitk.Greater(postopImage, 0)
followupImage = sitk.Greater(followupImage, 0)
# HOW OVERLAP IMAGES
slice_number = round(postopImage.GetSize()[1]/2)
#DisplayWithOverlay(image, segmented, slice_number, window_min, window_max)
sitkIm.DisplayWithOverlay(postopImage[:,slice_number,:], followupImage[:,slice_number,:], 0, 1)
#interact(sitkIm.DisplayWithOverlay, slice_number = (5), image = fixed(postopImage), segmented = fixed(followupImage),
# window_min = fixed(0), window_max=fixed(1));
# Get the image constrained by both bounding boxes:
#labelStatisticFilter = sitk.LabelShapeStatisticsImageFilter()
#labelStatisticFilter.Execute(postopImage)
#postopBoundingBox = np.array(labelStatisticFilter.GetBoundingBox(1))
#labelStatisticFilter.Execute(followupImage)
#followupBoundingBox = np.array(labelStatisticFilter.GetBoundingBox(1))
#minimumStart = np.minimum(postopBoundingBox[0:3], followupBoundingBox[0:3]+ 20) # 50 is to give an extra margin
#minimumStop = np.minimum(postopBoundingBox[0:3]+postopBoundingBox[3:6], followupBoundingBox[0:3]+followupBoundingBox[3:6]- 20)
#minimumBoxSize = minimumStop - minimumStart
#postopImage = postopImage[minimumStart[0]:minimumStop[0], minimumStart[1]:minimumStop[1], minimumStart[2]:minimumStop[2]]
#followupImage = followupImage[minimumStart[0]:minimumStop[0], minimumStart[1]:minimumStop[1], minimumStart[2]:minimumStop[2]]
# Another approach is to get the bounding box of the intersection:
postopAndFollowupImage = sitk.And(postopImage, followupImage)
labelStatisticFilter = sitk.LabelShapeStatisticsImageFilter()
labelStatisticFilter.Execute(postopAndFollowupImage)
bothBoundingBox = np.array(labelStatisticFilter.GetBoundingBox(1))
postopImage = postopImage[bothBoundingBox[0]:bothBoundingBox[0]+bothBoundingBox[3],
                          bothBoundingBox[1]:bothBoundingBox[1]+bothBoundingBox[4],
                          bothBoundingBox[2]+20:bothBoundingBox[2]+bothBoundingBox[5]-20]
followupImage = followupImage[bothBoundingBox[0]:bothBoundingBox[0]+bothBoundingBox[3],
                              bothBoundingBox[1]:bothBoundingBox[1]+bothBoundingBox[4],
                              bothBoundingBox[2]+20:bothBoundingBox[2]+bothBoundingBox[5]-20]
#Display reduced image:
slice_number = round(postopImage.GetSize()[1]*1/3)
sitkIm.DisplayWithOverlay(postopImage[:,slice_number,:], followupImage[:,slice_number,:], 0, 1)
#sitk.Get
#postopZ = permute(sum(sum(postopImage))>0, [3 1 2]);
#followupZ = permute(sum(sum(followupImage))>0, [3 1 2]);
#bothZ = find(postopZ&followupZ > 0);
#% Remove 10 slices each side:
#bothZ(1:10) = []; bothZ(end-10:end) = [];
# GET SEGMENTATION PERFORMANCE BASED ON SURFACES:
# init signed mauerer distance as reference metrics
reference_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(postopImage, squaredDistance=False, useImageSpacing=True))
# Get the reference surface:
reference_surface = sitk.LabelContour(postopImage)
statistics_image_filter = sitk.StatisticsImageFilter()
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(reference_surface)
num_reference_surface_pixels = int(statistics_image_filter.GetSum())
# Get the surface (contour) of the segmented image:
segmented_distance_map = sitk.Abs(sitk.SignedMaurerDistanceMap(followupImage, squaredDistance=False, useImageSpacing=True))
segmented_surface = sitk.LabelContour(followupImage)
# Get the number of pixels in the reference surface by counting all pixels that are 1.
statistics_image_filter.Execute(segmented_surface)
num_segmented_surface_pixels = int(statistics_image_filter.GetSum())
label_intensity_statistics_filter = sitk.LabelIntensityStatisticsImageFilter()
label_intensity_statistics_filter.Execute(segmented_surface, reference_distance_map)
# Hausdorff distance:
hausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()
hausdorff_distance_filter.Execute(postopImage, followupImage)
#All the other metrics:
# Multiply the binary surface segmentations with the distance maps. The resulting distance
# maps contain non-zero values only on the surface (they can also contain zero on the surface)
seg2ref_distance_map = reference_distance_map * sitk.Cast(segmented_surface, sitk.sitkFloat32)
ref2seg_distance_map = segmented_distance_map * sitk.Cast(reference_surface, sitk.sitkFloat32)
# Get all non-zero distances and then add zero distances if required.
seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)
seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr != 0])
seg2ref_distances = seg2ref_distances + \
list(np.zeros(num_segmented_surface_pixels - len(seg2ref_distances)))
ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)
ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr != 0])
ref2seg_distances = ref2seg_distances + \
list(np.zeros(num_reference_surface_pixels - len(ref2seg_distances)))
all_surface_distances = seg2ref_distances + ref2seg_distances
# The maximum of the symmetric surface distances is the Hausdorff distance between the surfaces. In
# general, it is not equal to the Hausdorff distance between all voxel/pixel points of the two
# segmentations, though in our case it is. More on this below.
#hausdorff_distance = hausdorff_distance_filter.GetHausdorffDistance()
#max_surface_distance = label_intensity_statistics_filter.GetMaximum(1)
#avg_surface_distance = label_intensity_statistics_filter.GetMean(1)
#median_surface_distance = label_intensity_statistics_filter.GetMedian(1)
#std_surface_distance = label_intensity_statistics_filter.GetStandardDeviation(1)
hausdorff_distance = hausdorff_distance_filter.GetHausdorffDistance()
avg_surface_distance = np.mean(all_surface_distances)
max_surface_distance = np.max(all_surface_distances)
median_surface_distance = np.median(all_surface_distances)
std_surface_distance = np.std(all_surface_distances)
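
# Caution: SignedMaurerDistanceMap was run with useImageSpacing=True, so the
# surface distances above should already be in physical units; the spacing[0]
# scaling below is only meaningful for voxel-unit distances and additionally
# assumes isotropic voxels.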
# Now in mm:
hausdorff_distance_mm = hausdorff_distance * postopImage.GetSpacing()[0]
avg_surface_distance_mm = avg_surface_distance * postopImage.GetSpacing()[0]
max_surface_distance_mm = max_surface_distance * postopImage.GetSpacing()[0]
median_surface_distance_mm = median_surface_distance * postopImage.GetSpacing()[0]
std_surface_distance_mm = std_surface_distance * postopImage.GetSpacing()[0]
print("Surface based metrics [voxels]: MEAN_SD={0}, STDSD={1}, MEDIAN_SD={2}, HD={3}, MAX_SD={4}\n".format(avg_surface_distance, std_surface_distance, median_surface_distance, hausdorff_distance, max_surface_distance))
print("Surface based metrics [mm]: MEAN_SD={0}, STDSD={1}, MEDIAN_SD={2}, HD={3}, MAX_SD={4}\n".format(avg_surface_distance_mm, std_surface_distance_mm, median_surface_distance_mm, hausdorff_distance_mm, max_surface_distance_mm))
# GET SEGMENTATION PERFORMANCE BASED ON OVERLAP METRICS:
overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()
overlap_measures_filter.Execute(postopImage, followupImage)
jaccard_value = overlap_measures_filter.GetJaccardCoefficient()
dice_value = overlap_measures_filter.GetDiceCoefficient()
volume_similarity_value = overlap_measures_filter.GetVolumeSimilarity()
false_negative_value = overlap_measures_filter.GetFalseNegativeError()
false_positive_value = overlap_measures_filter.GetFalsePositiveError()
print("Overlap based metrics: Jaccard={0}, Dice={1}, VolumeSimilarity={2}, FN={3}, FP={4}\n".format(jaccard_value, dice_value, volume_similarity_value, false_negative_value, false_positive_value))
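
# For reference: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / (|A∪B|),
# so Dice = 2J / (1 + J) for the same pair of segmentations.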
# Create a log file:
logFilename = basePath + 'RegistrationPerformance_python.txt'
log = open(logFilename, 'w')
log.write("Mean Surface Distance, STD Surface Distance, Median Surface Distance, Hausdorff Distance, Max Surface Distance\n")
log.write("{0}, {1}, {2}, {3}, {4}\n".format(avg_surface_distance, std_surface_distance, median_surface_distance, hausdorff_distance, max_surface_distance))
log.write("Mean Surface Distance, STD Surface Distance [mm], Median Surface Distance [mm], Hausdorff Distance [mm], Max Surface Distance [mm]\n")
log.write("{0}, {1}, {2}, {3}, {4}\n".format(avg_surface_distance_mm, std_surface_distance_mm, median_surface_distance_mm, hausdorff_distance_mm, max_surface_distance_mm))
log.write("Jaccard, Dice, Volume Similarity, False Negative, False Positive\n")
log.write("{0}, {1}, {2}, {3}, {4}\n".format(jaccard_value, dice_value, volume_similarity_value, false_negative_value, false_positive_value))
log.close()
plt.show() | 60.432099 | 229 | 0.81236 | 1,188 | 9,790 | 6.464646 | 0.223064 | 0.078125 | 0.033203 | 0.023438 | 0.41862 | 0.29974 | 0.230339 | 0.219141 | 0.214063 | 0.214063 | 0 | 0.017984 | 0.091216 | 9,790 | 162 | 230 | 60.432099 | 0.845229 | 0.331869 | 0 | 0.044944 | 0 | 0.044944 | 0.124904 | 0.027448 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.146067 | 0 | 0.146067 | 0.044944 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6a8326296ab6af0cd67cc06c386c30e1cff6c27 | 1,591 | py | Python | jes/jes-v5.020-linux/jes/python/jes/gui/dialogs/intro.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | jes/jes-v5.020-linux/jes/python/jes/gui/dialogs/intro.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | jes/jes-v5.020-linux/jes/python/jes/gui/dialogs/intro.py | utv-teaching/foundations-computer-science | 568e19fd83a3355dab2814229f335abf31bfd7e9 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
jes.gui.dialogs.intro
=====================
The "intro" dialog, which displays the JESIntroduction.txt file.
:copyright: (C) 2014 Matthew Frazier and Mark Guzdial
:license: GNU GPL v2 or later, see jes/help/JESCopyright.txt for details
"""
from __future__ import with_statement
import JESResources
import JESVersion
from java.awt import BorderLayout
from javax.swing import JTextPane, JScrollPane, JButton
from jes.gui.components.actions import methodAction
from .controller import BasicDialog, DialogController
class IntroDialog(BasicDialog):
INFO_FILE = JESResources.getPathTo("help/JESIntroduction.txt")
WINDOW_TITLE = "Welcome to %s!" % JESVersion.TITLE
WINDOW_SIZE = (400, 300)
def __init__(self):
super(IntroDialog, self).__init__()
# Open the text file and make a text pane
textPane = JTextPane()
textPane.editable = False
scrollPane = JScrollPane(textPane)
scrollPane.preferredSize = (32767, 32767) # just a large number
with open(self.INFO_FILE, 'r') as fd:
infoText = fd.read().decode('utf8').replace(
"@version@", JESVersion.VERSION
)
textPane.text = infoText
# Load the scroll pane into the layout
self.add(scrollPane, BorderLayout.CENTER)
# Make an OK button
self.okButton = JButton(self.ok)
self.buttonPanel.add(self.okButton)
@methodAction(name="OK")
def ok(self):
self.visible = False
introController = DialogController("Introduction", IntroDialog)
| 28.927273 | 74 | 0.672533 | 182 | 1,591 | 5.785714 | 0.593407 | 0.011396 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018548 | 0.220616 | 1,591 | 54 | 75 | 29.462963 | 0.830645 | 0.236329 | 0 | 0 | 0 | 0 | 0.054908 | 0.019967 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.241379 | 0 | 0.448276 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6ac72939accd121f9983c21fee472c5729238d1 | 899 | py | Python | 36-valid-sudoku/36-valid-sudoku.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | 36-valid-sudoku/36-valid-sudoku.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | 36-valid-sudoku/36-valid-sudoku.py | MayaScarlet/leetcode-python | 8ef0c5cadf2e975957085c0ef84a8c3d90a64b6a | [
"MIT"
] | null | null | null | import collections
class Solution:
def isValidSudoku(self, board: List[List[str]]) -> bool:
cols = collections.defaultdict(set)
rows = collections.defaultdict(set)
grid = collections.defaultdict(set)
for r in range(len(board)):
for c in range(len(board)):
#Ignore empty cells
if board[r][c] == ".":
continue
#If element exist in any of the three sets, return False
if board[r][c] in rows[r] or board[r][c] in cols[c] or board[r][c] in grid[r//3, c//3]:
return False
#Add element if it doesn't exist
rows[r].add(board[r][c])
cols[c].add(board[r][c])
grid[(r//3, c//3)].add(board[r][c])
return True
| 34.576923 | 103 | 0.463849 | 111 | 899 | 3.756757 | 0.387387 | 0.100719 | 0.117506 | 0.064748 | 0.091127 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007634 | 0.41713 | 899 | 26 | 104 | 34.576923 | 0.788168 | 0.115684 | 0 | 0 | 0 | 0 | 0.001261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6b10c402152179bd5b2001946c3c77a518e7627 | 9,375 | py | Python | Crop Predictions With GUI.py | KRITGYA2001/Crop-Prediction-Model | 81c2f0701c89ae6ed1f3b8ae48252c94670ee413 | [
"Apache-2.0"
] | null | null | null | Crop Predictions With GUI.py | KRITGYA2001/Crop-Prediction-Model | 81c2f0701c89ae6ed1f3b8ae48252c94670ee413 | [
"Apache-2.0"
] | null | null | null | Crop Predictions With GUI.py | KRITGYA2001/Crop-Prediction-Model | 81c2f0701c89ae6ed1f3b8ae48252c94670ee413 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
import pandas as pd
import seaborn as sns
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
# In[2]:
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.linear_model import LinearRegression, LogisticRegression, Ridge, Lasso
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, RandomForestRegressor, GradientBoostingRegressor, ExtraTreesRegressor
from sklearn.svm import LinearSVC, SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, r2_score, classification_report
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
# In[32]:
data=pd.read_csv("Crop_prediction.csv")
# In[33]:
data.head()
# * (Data set taken from the Indian Chamber of Food and Agriculture)
# **Data fields**
# * N - ratio of Nitrogen content in soil
# * P - ratio of Phosphorous content in soil
# * K - ratio of Potassium content in soil
# * temperature - temperature in degree Celsius
# * humidity - relative humidity in %
# * ph - ph value of the soil
# * rainfall - rainfall in mm
# In[34]:
data.tail()
# In[35]:
data.info()
# In[36]:
data.describe()
# In[37]:
data.isnull().sum()
# In[38]:
data.nunique()
# In[39]:
data.columns
# In[40]:
#Visualization
plt.figure(figsize=(8,8))
plt.title("Correlation between features")
corr=data.corr()
sns.heatmap(corr,annot=True)
# In[41]:
data['label'].unique()
# In[42]:
plt.figure(figsize=(6,8))
plt.title("Temperature relation with crops")
sns.barplot(y="label", x="temperature", data=data,palette="hot")
plt.ylabel("crops")
# Temperature has a very high relation with blackgram
# In[43]:
plt.figure(figsize=(6,8))
plt.title("Humidity relation with crops")
sns.barplot(y="label", x="humidity", data=data,palette='brg')
plt.ylabel("crops")
# Humidity has a very high relation with rice
# In[44]:
plt.figure(figsize=(6,8))
plt.title("pH relation with crops")
sns.barplot(y="label", x="ph", data=data,palette='hot')
plt.ylabel("crops")
# pH has a very high relationship with crops
# In[45]:
plt.figure(figsize=(6,8))
plt.title("Rainfall relation with crops")
sns.barplot(y="label", x="rainfall", data=data,palette='brg')
plt.ylabel("crops")
# Rice needs a lot of rainfall
# Lentil needs very little rainfall
# In[46]:
plt.figure(figsize=(8,6))
plt.title("Temperature and pH effect values for crops")
sns.scatterplot(data=data, x="temperature", y="label", hue="ph",palette='brg')
plt.ylabel("Crops")
# In[47]:
plt.figure(figsize=(8,6))
plt.title("Temperature and humidity effect values for crops")
sns.scatterplot(data=data, x="temperature", y="label", hue="humidity",palette='brg')
plt.ylabel("Crops")
# In[48]:
plt.figure(figsize=(8,6))
plt.title("Temperature and Rainfall effect values for crops")
sns.scatterplot(data=data, x="temperature", y="label", hue="rainfall",palette='brg')
plt.ylabel("Crops")
# In[49]:
#from pandas_profiling import ProfileReport
# In[50]:
#Predictions
encoder=LabelEncoder()
data.label=encoder.fit_transform(data.label)
# In[51]:
features=data.drop("label",axis=1)
target=data.label
# In[52]:
features
# In[53]:
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=42)
# In[54]:
#Linear Regression
lr = LinearRegression().fit(X_train, y_train)
lr_pred= lr.score(X_test, y_test)
print("Training score: {:.3f}".format(lr.score(X_train, y_train)))
print("Test score: {:.3f}".format(lr.score(X_test, y_test)))
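# Note: the target is a label-encoded categorical variable, so the R^2 score
# reported by LinearRegression is not directly comparable with the classifier
# accuracies below.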
# In[55]:
#Decision Tree Classifier
tree = DecisionTreeClassifier(max_depth=15,random_state=0).fit(X_train, y_train)
tree_pred= tree.score(X_test, y_test)
print("Training score: {:.3f}".format(tree.score(X_train, y_train)))
print("Test score: {:.3f}".format(tree.score(X_test, y_test)))
# In[56]:
#Random Forests
rf = RandomForestClassifier(n_estimators=10, max_features=3, random_state=0).fit(X_train, y_train)
rf_pred= rf.score(X_test, y_test)
print("Training score: {:.3f}".format(rf.score(X_train, y_train)))
print("Test score: {:.3f}".format(rf.score(X_test, y_test)))
# In[57]:
#GradientBoostingClassifier
gbr = GradientBoostingClassifier(n_estimators=20, max_depth=4, max_features=2, random_state=0).fit(X_train, y_train)
gbr_pred= gbr.score(X_test, y_test)
print("Training score: {:.3f}".format(gbr.score(X_train, y_train)))
print("Test score: {:.3f}".format(gbr.score(X_test, y_test)))
# In[58]:
#Support Vector Classifier
svm = SVC(C=100, gamma=0.001).fit(X_train, y_train)
svm_pred= svm.score(X_test, y_test)
print("Training score: {:.3f}".format(svm.score(X_train, y_train)))
print("Test score: {:.3f}".format(svm.score(X_test, y_test)))
# In[59]:
#Logistic regression
log_reg = LogisticRegression(C=0.1, max_iter=100000).fit(X_train, y_train)
log_reg_pred= log_reg.score(X_test, y_test)
print("Training score: {:.3f}".format(log_reg.score(X_train, y_train)))
print("Test score: {:.3f}".format(log_reg.score(X_test, y_test)))
# In[60]:
predictions_acc = { "Model": ['Decision Tree', 'Random Forest', 'Gradient Boosting', 'SVC', 'Logistic Regression'],
"Accuracy": [tree_pred, rf_pred, gbr_pred, svm_pred, log_reg_pred]}
# In[61]:
model_acc = pd.DataFrame(predictions_acc, columns=["Model", "Accuracy"])
# In[62]:
model_acc
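# In[ ]:
# Optional: rank the fitted models by test accuracy (a sketch; it uses only
# the model_acc frame built above)
model_acc.sort_values(by="Accuracy", ascending=False)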
# In[3]:
import tkinter as tk
from tkinter.font import BOLD
from tkinter import messagebox
from tkinter import scrolledtext
from tkinter.constants import RIGHT, Y
from tkinter import filedialog
from tkinter import *
# In[8]:
def mainscreen():
global window
window = tk.Tk()
window.geometry("1530x795+0+0")
window.configure(bg="#FFE4B5")
window.title("Prediction model")
head = tk.Label(window, text="\nEnter Details\n", font=("rockwell extra bold",45),fg="dark blue",bg="#FFE4B5").pack()
def back3() :
window.destroy()
def values():
        # cast the Entry strings to floats before prediction
        n = float(n_tk.get())
        p = float(p_tk.get())
        k = float(k_tk.get())
        temp = float(temp_tk.get())
        humidity = float(humidity_tk.get())
        ph = float(ph_tk.get())
        rainfall = float(rainfall_tk.get())
def predictfunc(n,p,k,temp,humidity,ph,rainfall):
#Predicting Model
data=pd.read_csv("Crop_prediction.csv")
x=data.loc[:,"N":"rainfall"]
y=data.loc[:,'label']
Knn=KNeighborsClassifier()
Knn.fit(x,y)
test_data=[[n,p,k,temp,humidity,ph,rainfall]]
predict=Knn.predict(test_data)
#print(predict[0])
output1 = tk.Label(window, text="The prediction is: ",font=("Arial", 20),bg="#FFE4B5").place(x=600, y=570)
            output2 = tk.Label(window, text=predict[0], font=("Arial", 20),bg="#FFE4B5").place(x=820, y=570)
predictfunc(n,p,k,temp,humidity,ph,rainfall)
n1 = tk.Label(window, text="Ratio of Nitrogen content in soil: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=200)
n_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
n_tk.place(x=800, y=200)
p2 = tk.Label(window, text="Ratio of Phosphorous content in soil: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=250)
p_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
p_tk.place(x=800, y=250)
k3 = tk.Label(window, text="Ratio of Potassium content in soil: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=300)
k_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
k_tk.place(x=800, y=300)
temp4= tk.Label(window, text="Temperature in degree Celsius: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=350)
temp_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
temp_tk.place(x=800, y=350)
humidity5= tk.Label(window, text="Relative humidity in %: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=400)
humidity_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
humidity_tk.place(x=800, y=400)
ph6= tk.Label(window, text="pH value of the soil: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=450)
ph_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
ph_tk.place(x=800, y=450)
rainfall7= tk.Label(window, text="Rainfall in mm: ",font=("Arial", 20),bg="#FFE4B5").place(x=320, y=500)
rainfall_tk = tk.Entry(window, fg='blue', bg='white',borderwidth=5,font=("Arial", 18), width=30)
rainfall_tk.place(x=800, y=500)
back3_button = tk.Button(text="Exit", bg="blue", fg="white", height=1, width=10, borderwidth=8, cursor="hand2",font=("Arial", 12), command=back3)
back3_button.place(x=530,y=680)
submit_button = tk.Button(text="Submit", bg="green", fg="white", height=1, width=10, borderwidth=8, cursor="hand2",font=("Arial", 12), command=values)
submit_button.place(x=830,y=680)
# start the GUI
window.mainloop()
mainscreen()
# In[ ]:
| 22.865854 | 154 | 0.68192 | 1,414 | 9,375 | 4.428571 | 0.219236 | 0.017247 | 0.012456 | 0.022996 | 0.4481 | 0.423028 | 0.359949 | 0.296072 | 0.24992 | 0.216864 | 0 | 0.043593 | 0.14848 | 9,375 | 409 | 155 | 22.92176 | 0.740824 | 0.11552 | 0 | 0.103226 | 0 | 0 | 0.172422 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025806 | false | 0 | 0.154839 | 0 | 0.180645 | 0.077419 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6b123a5d8064301b36365941fb80c50a959742e | 3,950 | py | Python | alipay/aop/api/domain/ZhimaCreditOrderRepaymentApplyModel.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | alipay/aop/api/domain/ZhimaCreditOrderRepaymentApplyModel.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | alipay/aop/api/domain/ZhimaCreditOrderRepaymentApplyModel.py | articuly/alipay-sdk-python-all | 0259cd28eca0f219b97dac7f41c2458441d5e7a6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import simplejson as json
from alipay.aop.api.constant.ParamConstants import *
class ZhimaCreditOrderRepaymentApplyModel(object):
def __init__(self):
self._action_type = None
self._category = None
self._order_info = None
self._out_order_no = None
self._repay_amount = None
self._repay_proof = None
self._user_id = None
@property
def action_type(self):
return self._action_type
@action_type.setter
def action_type(self, value):
self._action_type = value
@property
def category(self):
return self._category
@category.setter
def category(self, value):
self._category = value
@property
def order_info(self):
return self._order_info
@order_info.setter
def order_info(self, value):
self._order_info = value
@property
def out_order_no(self):
return self._out_order_no
@out_order_no.setter
def out_order_no(self, value):
self._out_order_no = value
@property
def repay_amount(self):
return self._repay_amount
@repay_amount.setter
def repay_amount(self, value):
self._repay_amount = value
@property
def repay_proof(self):
return self._repay_proof
@repay_proof.setter
def repay_proof(self, value):
self._repay_proof = value
@property
def user_id(self):
return self._user_id
@user_id.setter
def user_id(self, value):
self._user_id = value
def to_alipay_dict(self):
params = dict()
if self.action_type:
if hasattr(self.action_type, 'to_alipay_dict'):
params['action_type'] = self.action_type.to_alipay_dict()
else:
params['action_type'] = self.action_type
if self.category:
if hasattr(self.category, 'to_alipay_dict'):
params['category'] = self.category.to_alipay_dict()
else:
params['category'] = self.category
if self.order_info:
if hasattr(self.order_info, 'to_alipay_dict'):
params['order_info'] = self.order_info.to_alipay_dict()
else:
params['order_info'] = self.order_info
if self.out_order_no:
if hasattr(self.out_order_no, 'to_alipay_dict'):
params['out_order_no'] = self.out_order_no.to_alipay_dict()
else:
params['out_order_no'] = self.out_order_no
if self.repay_amount:
if hasattr(self.repay_amount, 'to_alipay_dict'):
params['repay_amount'] = self.repay_amount.to_alipay_dict()
else:
params['repay_amount'] = self.repay_amount
if self.repay_proof:
if hasattr(self.repay_proof, 'to_alipay_dict'):
params['repay_proof'] = self.repay_proof.to_alipay_dict()
else:
params['repay_proof'] = self.repay_proof
if self.user_id:
if hasattr(self.user_id, 'to_alipay_dict'):
params['user_id'] = self.user_id.to_alipay_dict()
else:
params['user_id'] = self.user_id
return params
@staticmethod
def from_alipay_dict(d):
if not d:
return None
o = ZhimaCreditOrderRepaymentApplyModel()
if 'action_type' in d:
o.action_type = d['action_type']
if 'category' in d:
o.category = d['category']
if 'order_info' in d:
o.order_info = d['order_info']
if 'out_order_no' in d:
o.out_order_no = d['out_order_no']
if 'repay_amount' in d:
o.repay_amount = d['repay_amount']
if 'repay_proof' in d:
o.repay_proof = d['repay_proof']
if 'user_id' in d:
o.user_id = d['user_id']
return o
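# Round-trip sketch (the field values below are hypothetical, not taken from
# the SDK documentation):
#   m = ZhimaCreditOrderRepaymentApplyModel.from_alipay_dict(
#       {'out_order_no': 'OUT20180101', 'repay_amount': '10.00'})
#   m.to_alipay_dict()  # -> {'out_order_no': 'OUT20180101', 'repay_amount': '10.00'}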
| 30.152672 | 75 | 0.591899 | 496 | 3,950 | 4.387097 | 0.102823 | 0.073529 | 0.068934 | 0.045037 | 0.32261 | 0.264706 | 0.045037 | 0.027574 | 0 | 0 | 0 | 0.000368 | 0.311392 | 3,950 | 130 | 76 | 30.384615 | 0.799632 | 0.010633 | 0 | 0.126126 | 0 | 0 | 0.097848 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153153 | false | 0 | 0.018018 | 0.063063 | 0.27027 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6b2ed2144802e9da93811adc368ed32fd611400 | 1,503 | py | Python | chconsole/storage/json_storage.py | mincode/chconsole | ab8ca8a38bd47ecb1aa7ff90225f57e042aaad6e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | chconsole/storage/json_storage.py | mincode/chconsole | ab8ca8a38bd47ecb1aa7ff90225f57e042aaad6e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | chconsole/storage/json_storage.py | mincode/chconsole | ab8ca8a38bd47ecb1aa7ff90225f57e042aaad6e | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import os, json
__author__ = 'Manfred Minimair <manfred@minimair.org>'
class JSONStorage:
"""
File storage for a dictionary.
"""
file = '' # file name of storage file
data = None # data dict
indent = ' ' # indent prefix for pretty printing json files
def __init__(self, path, name):
"""
        Initialize.
        :param path: path to the storage file; empty means the current directory.
:param name: file name, json file
"""
if path:
os.makedirs(path, exist_ok=True)
self.file = os.path.normpath(os.path.join(path, name))
try:
with open(self.file) as data_file:
self.data = json.load(data_file)
except FileNotFoundError:
self.data = dict()
self.dump()
def dump(self):
"""
Dump data into storage file.
"""
with open(self.file, 'w') as out_file:
json.dump(self.data, out_file, indent=self.indent)
def get(self, item):
"""
Get stored item.
:param item: name, string, of item to get.
:return: stored item; raises a KeyError if item does not exist.
"""
return self.data[item]
def set(self, item, value):
"""
Set item's value; causes the data to be dumped into the storage file.
:param item: name, string of item to set.
:param value: value to set.
"""
self.data[item] = value
self.dump()
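# Minimal usage sketch (assumes the working directory is writable;
# 'settings.json' is a hypothetical file name):
if __name__ == '__main__':
    store = JSONStorage('', 'settings.json')
    store.set('theme', 'dark')    # written to settings.json immediately
    print(store.get('theme'))    # -> 'dark'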
| 28.358491 | 80 | 0.558217 | 190 | 1,503 | 4.347368 | 0.363158 | 0.048426 | 0.033898 | 0.038741 | 0.065375 | 0.065375 | 0.065375 | 0 | 0 | 0 | 0 | 0 | 0.337991 | 1,503 | 52 | 81 | 28.903846 | 0.830151 | 0.348636 | 0 | 0.083333 | 0 | 0 | 0.051157 | 0.026797 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.041667 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6b3ca6a1ac3ed90a5486b776195623690a10478 | 2,219 | py | Python | Trabalho_02/Trabalho/ELM_Q01.py | gabriel-rc-201/Trabalhos-Inteligencia-Computacional | c334f2fd66a31bacab0470b390782ce38dc2a100 | [
"MIT"
] | 1 | 2021-07-09T19:32:16.000Z | 2021-07-09T19:32:16.000Z | Trabalho_02/Trabalho/ELM_Q01.py | gabriel-rc-201/Trabalhos-Inteligencia-Computacional | c334f2fd66a31bacab0470b390782ce38dc2a100 | [
"MIT"
] | null | null | null | Trabalho_02/Trabalho/ELM_Q01.py | gabriel-rc-201/Trabalhos-Inteligencia-Computacional | c334f2fd66a31bacab0470b390782ce38dc2a100 | [
"MIT"
] | null | null | null |
#!-*- conding: utf8 -*-
#coding: utf-8
"""
Student: Gabriel Ribeiro Camelo
Registration number: 401091
"""
import matplotlib.pyplot as pplt  # plots
import math  # math functions
import re  # regular expressions
import numpy as np  # matrices
from statistics import pstdev  # standard deviation
from scipy import stats  # contains zscore
# Functions for computing R2
subxy = lambda x,y: x-y
multxy = lambda x,y: x*y
def somaYy(y):
    # builds the sum of squared deviations of y from its mean
acumulador = 0
y_media = np.sum(y)/len(y)
for k in range(len(y)):
acumulador += (y[k] - y_media)**2
return acumulador
# Data collection
arq = open("aerogerador.dat", "r")  # open the file containing the data
x = []  # inputs
y = []  # targets
for line in arq:  # split x from y
    line = line.strip()  # drop the trailing \n
    line = re.sub('\s+', ',', line)  # replace runs of whitespace with a comma
    X, Y = line.split(",")  # split on the comma into 2 values
    x.append(float(X))
    y.append(float(Y))
arq.close()  # close the file containing the data
# Z-score normalization
xn = stats.zscore(x)
# add the constant input that is weighted by the bias
xb = []
for i in range(2250):
    xb.append(-1)
X = np.matrix([xb, xn])  # data matrix including the bias input
# Random weight matrix
def matPesos(qtdNeuronios, qtdAtributos):
    # returns a matrix of random numbers drawn from a normal distribution
    w = np.random.randn(qtdNeuronios, qtdAtributos + 1)
    return w
Neuronios = int(input("Number of neurons: "))
W = matPesos(Neuronios, 1)
# Activation function
phi = lambda u: (1 - math.exp(u))/(1 + math.exp(u))  # note: equals -tanh(u/2), not the logistic sigmoid
# Activation of the hidden neurons
U = np.array(W@X)
Z = list(map(phi, [valor for linha in U for valor in linha]))
Z = np.array(Z)
Z = Z.reshape(Neuronios, 2250)
# Weight matrix of the output-layer neurons (least-squares fit via pseudo-inverse)
M = (y@Z.T) @ np.linalg.inv(Z@Z.T)
# Activation of the output neurons
D = M@Z
# Computing R2
somaQe = sum(map(multxy, list(map(subxy, y, D)), list(map(subxy, y, D))))
R2 = 1 - (somaQe/somaYy(y))
# Results
print("R2: ", R2)
# plot
pplt.plot(x, D, color ='red')
pplt.scatter(x, y, marker = "*")
pplt.show()
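# Optional cross-check of the manual R2 computation above (a sketch; it
# assumes scikit-learn is installed, which the assignment does not require):
# from sklearn.metrics import r2_score
# print("R2 (sklearn):", r2_score(y, D))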
| 24.384615 | 75 | 0.633168 | 344 | 2,219 | 4.078488 | 0.456395 | 0.009979 | 0.015681 | 0.01283 | 0.081967 | 0.034212 | 0 | 0 | 0 | 0 | 0 | 0.017741 | 0.237945 | 2,219 | 90 | 76 | 24.655556 | 0.811946 | 0.343398 | 0 | 0 | 0 | 0 | 0.038325 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.12766 | 0 | 0.212766 | 0.021277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6b7e37b488f062925a4fc8dd11c3dc8a00b2e5d | 4,424 | py | Python | tools/weixin.py | GingerWWW/news_spider | 51e1437cf9a58071cc5bd2c12f854ae84f96a5d5 | [
"MIT"
] | 208 | 2018-02-11T01:58:08.000Z | 2022-03-28T07:15:12.000Z | tools/weixin.py | DeteMin/news_spider | 9e29525a8bcb2310fca3bb4f9ca4b99b39ecfc9c | [
"MIT"
] | 19 | 2018-04-17T11:03:28.000Z | 2022-03-17T00:02:20.000Z | tools/weixin.py | DeteMin/news_spider | 9e29525a8bcb2310fca3bb4f9ca4b99b39ecfc9c | [
"MIT"
] | 44 | 2018-02-26T09:47:41.000Z | 2022-03-22T13:34:27.000Z | #!/usr/bin/env python
# encoding: utf-8
"""
@author: zhanghe
@software: PyCharm
@file: weixin.py
@time: 2018-02-10 17:55
"""
import re
import time
import hashlib
# from urlparse import urljoin # PY2
# from urllib.parse import urljoin # PY3
from future.moves.urllib.parse import urljoin
import execjs
from tools.char import un_escape
from config import current_config
from models.news import FetchResult
from news.items import FetchResultItem
from apps.client_db import db_session_mysql
from maps.platform import WEIXIN, WEIBO
BASE_DIR = current_config.BASE_DIR
def get_finger(content_str):
"""
:param content_str:
:return:
"""
m = hashlib.md5()
m.update(content_str.encode('utf-8') if isinstance(content_str, unicode) else content_str)
finger = m.hexdigest()
return finger
def parse_weixin_js_body(html_body, url=''):
"""
    Parse the msgList JavaScript embedded in the profile page HTML
:param html_body:
:param url:
:return:
"""
rule = r'<script type="text/javascript">.*?(var msgList.*?)seajs.use\("sougou/profile.js"\);.*?</script>'
js_list = re.compile(rule, re.S).findall(html_body)
if not js_list:
print('parse error url: %s' % url)
return ''.join(js_list)
def parse_weixin_article_id(html_body):
rule = r'<script nonce="(\d+)" type="text\/javascript">'
article_id_list = re.compile(rule, re.I).findall(html_body)
return article_id_list[0]
def add_img_src(html_body):
rule = r'data-src="(.*?)"'
img_data_src_list = re.compile(rule, re.I).findall(html_body)
print(img_data_src_list)
for img_src in img_data_src_list:
print(img_src)
html_body = html_body.replace(img_src, '%(img_src)s" src="%(img_src)s' % {'img_src': img_src})
return html_body
def get_img_src_list(html_body, host_name='/', limit=None):
rule = r'src="(%s.*?)"' % host_name
img_data_src_list = re.compile(rule, re.I).findall(html_body)
if limit:
return img_data_src_list[:limit]
return img_data_src_list
def check_article_title_duplicate(article_title):
"""
    Check whether an article with the same title has already been fetched
:param article_title:
:return:
"""
session = db_session_mysql()
article_id_count = session.query(FetchResult) \
.filter(FetchResult.platform_id == WEIXIN,
FetchResult.article_id == get_finger(article_title)) \
.count()
return article_id_count
class ParseJsWc(object):
"""
    Parse the WeChat feed (msgList) data
"""
def __init__(self, js_body):
self.js_body = js_body
self._add_js_msg_list_fn()
self.ctx = execjs.compile(self.js_body)
# print(self.ctx)
def _add_js_msg_list_fn(self):
js_msg_list_fn = """
function r_msg_list() {
return msgList.list;
};
"""
self.js_body += js_msg_list_fn
def parse_js_msg_list(self):
msg_list = self.ctx.call('r_msg_list')
app_msg_ext_info_list = [i['app_msg_ext_info'] for i in msg_list]
comm_msg_info_date_time_list = [time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(i['comm_msg_info']['datetime'])) for i in msg_list]
# msg_id_list = [i['comm_msg_info']['id'] for i in msg_list]
msg_data_list = [
{
# 'article_id': '%s_000' % msg_id_list[index],
'article_id': get_finger(i['title']),
'article_url': urljoin('https://mp.weixin.qq.com', un_escape(i['content_url'])),
'article_title': i['title'],
'article_abstract': i['digest'],
'article_pub_time': comm_msg_info_date_time_list[index],
} for index, i in enumerate(app_msg_ext_info_list)
]
msg_ext_list = [i['multi_app_msg_item_list'] for i in app_msg_ext_info_list]
for index_j, j in enumerate(msg_ext_list):
for index_i, i in enumerate(j):
msg_data_list.append(
{
# 'article_id': '%s_%03d' % (msg_id_list[index_j], index_i + 1),
'article_id': get_finger(i['title']),
'article_url': urljoin('https://mp.weixin.qq.com', un_escape(i['content_url'])),
'article_title': i['title'],
'article_abstract': i['digest'],
'article_pub_time': comm_msg_info_date_time_list[index_j],
}
)
return msg_data_list
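if __name__ == '__main__':
    # Minimal usage sketch (Python 3; 'profile.html' is a hypothetical saved
    # copy of a Sogou WeChat profile page):
    with open('profile.html', encoding='utf-8') as f:
        js_body = parse_weixin_js_body(f.read())
    for msg in ParseJsWc(js_body).parse_js_msg_list():
        print(msg['article_pub_time'], msg['article_title'])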
| 30.937063 | 141 | 0.614828 | 615 | 4,424 | 4.092683 | 0.253659 | 0.038141 | 0.023838 | 0.033373 | 0.265793 | 0.232817 | 0.176798 | 0.176798 | 0.176798 | 0.162892 | 0 | 0.00729 | 0.255877 | 4,424 | 142 | 142 | 31.15493 | 0.75729 | 0.117089 | 0 | 0.116279 | 0 | 0.011628 | 0.167411 | 0.034954 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104651 | false | 0 | 0.127907 | 0 | 0.348837 | 0.034884 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6ca0b78e18a4bf98def2fc3af39ef75294bf852 | 14,129 | py | Python | tweezers/io/TxtMpiSource.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | tweezers/io/TxtMpiSource.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | tweezers/io/TxtMpiSource.py | DollSimon/tweezers | 7c9b3d781c53f7728526a8242aa9e1d671f15688 | [
"BSD-2-Clause"
] | null | null | null | from pathlib import Path
import json
import re
import numpy as np
import os
from collections import OrderedDict
from .TxtMpiFile import TxtMpiFile
from .BaseSource import BaseSource
from tweezers.meta import MetaDict, UnitDict
class TxtMpiSource(BaseSource):
"""
Data source for \*.txt files from the MPI with the old style header or the new JSON format.
"""
data = None
psd = None
ts = None
def __init__(self, data=None, psd=None, ts=None):
"""
        Args:
            data (str or :class:`pathlib.Path`): path to the data file
            psd (str or :class:`pathlib.Path`): path to the PSD file
            ts (str or :class:`pathlib.Path`): path to the time series file
"""
super().__init__()
# go through input
if data:
self.data = TxtMpiFile(data)
if psd:
self.psd = TxtMpiFile(psd)
if ts:
self.ts = TxtMpiFile(ts)
@staticmethod
def isDataFile(path):
"""
Checks if a given file is a valid data file and returns its ID and type.
Args:
path (:class:`pathlib.Path`): file to check
Returns:
:class:`dict` with `id` and `type`
"""
pPath = Path(path)
m = re.match('^((?P<type>[A-Z]+)_)?(?P<id>(?P<trial>[0-9]{1,3})_Date_[0-9_]{19})\.txt$',
pPath.name)
if m:
tipe = 'data'
if m.group('type'):
tipe = m.group('type').lower()
res = {'id': m.group('id'),
'trial': m.group('trial'),
'type': tipe,
'path': pPath}
return res
else:
return False
@classmethod
def getAllSources(cls, path):
"""
Get a list of all IDs and their files that are at the given path and its subfolders.
Args:
path (:class:`pathlib.Path`): root path for searching
Returns:
`dir`
"""
_path = Path(path)
# get a list of all files and their properties
files = cls.getAllFiles(_path)
sources = OrderedDict()
# sort files that belong to the same id
for el in files:
if el['id'] not in sources.keys():
sources[el['id']] = cls()
setattr(sources[el['id']], el['type'], TxtMpiFile(el['path']))
return sources
def getMetadata(self):
"""
Return the metadata of the experiment.
Returns:
:class:`tweezers.MetaDict` and :class:`tweezers.UnitDict`
"""
# keep variables local so they are not stored in memory
meta, units = self.getDefaultMeta()
# check each available file for header information
# sequence is important since later calls overwrite earlier ones so if a header is present in "psd" and
# "data", the value from "data" will be returned
if self.ts:
# get header data from file
metaTmp, unitsTmp = self.ts.getMetadata()
# make sure we don't override important stuff that by accident has the same name
self.renameKey('nSamples', 'psdNSamples', meta=metaTmp, units=unitsTmp)
self.renameKey('dt', 'psdDt', meta=metaTmp, units=unitsTmp)
# set time series unit
unitsTmp['timeseries'] = 'V'
# update the dictionaries with newly found values
meta.update(metaTmp)
units.update(unitsTmp)
if self.psd:
metaTmp, unitsTmp = self.psd.getMetadata()
# make sure we don't override important stuff that by accident has the same name
# also, 'nSamples' and 'samplingRate' in reality refer to the underlying timeseries data
self.renameKey('nSamples', 'psdNSamples', meta=metaTmp, units=unitsTmp)
self.renameKey('dt', 'psdDt', meta=metaTmp, units=unitsTmp)
# set psd unit
unitsTmp['psd'] = 'V^2 / Hz'
meta.update(metaTmp)
units.update(unitsTmp)
if self.data:
metaTmp, unitsTmp = self.data.getMetadata()
# rename variables for the sake of consistency and compatibility with Matlab and because the naming is
# confusing: samplingRate is actually the acquisition rate since the DAQ card averages the data already
# the sampling rate should describe the actual time step between data points not something else
if 'recordingRate' in metaTmp:
self.renameKey('samplingRate', 'acquisitionRate', meta=metaTmp, units=unitsTmp)
self.renameKey('recordingRate', 'samplingRate', meta=metaTmp, units=unitsTmp)
self.renameKey('nSamples', 'nAcquisitionsPerSample', meta=metaTmp)
# add trial number
metaTmp['trial'] = self.data.getTrialNumber()
# update dictionaries
meta.update(metaTmp)
units.update(unitsTmp)
# add title string to metadata, used for plots
self.setTitle(meta)
# make sure all axes have the beadDiameter
meta['pmY']['beadDiameter'] = meta['pmX']['beadDiameter']
units['pmY']['beadDiameter'] = units['pmX']['beadDiameter']
meta['aodY']['beadDiameter'] = meta['aodX']['beadDiameter']
units['aodY']['beadDiameter'] = units['aodX']['beadDiameter']
# add trap names
meta['traps'] = meta.subDictKeys()
return meta, units
def getData(self):
"""
Return the experiment data.
Returns:
:class:`pandas.DataFrame`
"""
if not self.data:
raise ValueError('No data file given.')
return self.data.getData()
def getDataSegment(self, tmin, tmax, chunkN=10000):
"""
Returns the data between ``tmin`` and ``tmax``.
Args:
tmin (float): minimum data timestamp
tmax (float): maximum data timestamp
chunkN (int): number of rows to read per chunk
Returns:
:class:`pandas.DataFrame`
"""
meta, units = self.getMetadata()
nstart = int(meta.samplingRate * tmin)
nrows = int(meta.samplingRate * (tmax - tmin))
return self.data.getDataSegment(nstart, nrows)
def getPsd(self):
"""
Return the PSD of the thermal calibration of the experiment as computed by LabView.
Returns:
:class:`pandas.DataFrame`
"""
if not self.psd:
raise ValueError('No PSD file given.')
# read psd file which also contains the fitting
data = self.psd.getData()
# ignore the fitting
titles = [title for title, column in data.iteritems() if not title.endswith('Fit')]
return data[titles]
def getPsdFit(self):
"""
Return the LabView fit of the Lorentzian to the PSD.
Returns:
:class:`pandas.DataFrame`
"""
if not self.psd:
raise ValueError('No PSD file given.')
# the fit is in the psd file
data = self.psd.getData()
# only choose frequency and fit columns
titles = [title for title, column in data.iteritems() if title.endswith('Fit') or title == 'f']
return data[titles]
def getTs(self):
"""
Return the time series recorded for thermal calibration.
Returns:
:class:`pandas.DataFrame`
"""
if not self.ts:
raise ValueError('No time series file given.')
data = self.ts.getData()
# remove "Diff" from column headers
columnHeader = [title.split('Diff')[0] for title in data.columns]
data.columns = columnHeader
return data
@staticmethod
def calculateForce(meta, units, data):
"""
Calculate forces from Diff signal and calibration values.
Args:
meta (:class:`.MetaDict`): metadata
units (:class:`.UnitDict`): unit metadata
data (:class:`pandas.DataFrame`): data
Returns:
Updated versions of the input parameters
* meta (:class:`.MetaDict`)
* units (:class:`.UnitDict`)
* data (:class:`pandas.DataFrame`)
"""
# calculate force per trap and axis
for trap in meta['traps']:
m = meta[trap]
data[trap + 'Force'] = (data[trap + 'Diff'] - m['zeroOffset']) \
/ m['displacementSensitivity'] \
* m['stiffness']
units[trap + 'Force'] = 'pN'
# invert PM force, is not as expected in the raw data
# data.pmYForce = -data.pmYForce
# calculate mean force per axis, only meaningful for two traps
data['xForce'] = (data.pmXForce + data.aodXForce) / 2
data['yForce'] = (data.pmYForce - data.aodYForce) / 2
units['xForce'] = 'pN'
units['yForce'] = 'pN'
return meta, units, data
@staticmethod
def postprocessData(meta, units, data):
"""
Create time array, calculate forces etc.
Args:
meta (:class:`tweezers.MetaDict`): meta dictionary
units (:class:`tweezers.UnitDict`): units dictionary
data (:class:`pandas.DataFrame`): data
Returns:
Updated versions of the input parameters
* meta (:class:`.MetaDict`)
* units (:class:`.UnitDict`)
* data (:class:`pandas.DataFrame`)
"""
data['time'] = np.arange(0, meta['dt'] * len(data), meta['dt'])
units['time'] = 's'
        # staticmethod context: call via the class, not via an (undefined) self
        meta, units, data = TxtMpiSource.calculateForce(meta, units, data)
data['distance'] = np.sqrt(data.xDist**2 + data.yDist**2)
units['distance'] = 'nm'
return meta, units, data
def setTitle(self, meta):
"""
Set the 'title' key in the metadata dictionary based on date and trial number if they are available. This
string is e.g. used for plots.
Args:
meta
Returns:
:class:`tweezers.MetaDict`
"""
title = ''
        for key in ('date', 'time', 'trial'):
            try:
                title += meta[key] + ' '
            except KeyError:
                pass
meta['title'] = title.strip()
def save(self, container, path=None):
"""
Writes the data of a :class:`tweezers.TweezersData` to disk. This preservers the `data` and`thermalCalibration`
folder structure. `path` should be the folder that holds these subfolders. If it is empty, the original files
will be overwritten.
Args:
container (:class:`tweezers.TweezersData`): data to write
path (:class:`pathlib.Path`): path to a folder for the dataset, if not set, the original data will be
overwritten
"""
        if path is not None and not isinstance(path, Path):
            path = Path(path)
data = ['ts', 'psd', 'data']
# list of input files and their data from the container, these are the ones we're writing back
# this is also important for the laziness of the TweezerData object
files = [[getattr(self, file), getattr(container, file)] for file in data if getattr(self, file)]
if not files:
return
# get root path if not given
if not path:
path = files[0][0].path.parents[1]
meta = container.meta
meta['units'] = container.units
# now write all of it
for file in files:
filePath = path / file[0].path.parent.name / file[0].path.name
self.writeData(meta, file[1], filePath)
def writeData(self, meta, data, path):
"""
Write experiment data back to a target file. Note that this writes the data in an `UTF-8` encoding.
Implementing this is not required for a data source but used here to convert the header to JSON.
Args:
meta (:class:`tweezers.MetaDict`): meta data to store
data (:class:`pandas.DataFrame`): data to write back
path (:class:`pathlib.Path`): path where to write the file
"""
# ensure directory exists
try:
os.makedirs(str(path.parent))
except FileExistsError:
pass
# write the data
with path.open(mode='w', encoding='utf-8') as f:
f.write(json.dumps(meta,
indent=4,
ensure_ascii=False,
sort_keys=True))
f.write("\n\n#### DATA ####\n\n")
data.to_csv(path_or_buf=str(path), sep='\t', mode='a', index=False)
def getDefaultMeta(self):
"""
Set default values for metadata and units. This will be overwritten by values in the data files if they exist.
Returns:
:class:`tweezers.MetaDict` and :class:`tweezers.UnitDict`
"""
meta = MetaDict()
units = UnitDict()
# meta[self.getStandardIdentifier('tsSamplingRate')] = 80000
#
# units[self.getStandardIdentifier('tsSamplingRate')] = 'Hz'
return meta, units
def renameKey(self, oldKey, newKey, meta=None, units=None):
"""
Rename a key in the meta- and units-dictionaries. Does not work for nested dictionaries.
Args:
meta (:class:`tweezers.MetaDict`): meta dictionary
units (:class:`tweezers.UnitDict`): units dictionary (can be an empty one if not required)
oldKey (str): key to be renamed
newKey (str): new key name
"""
if meta:
if oldKey not in meta:
return
meta.replaceKey(oldKey, newKey)
if units:
if oldKey not in units:
return
units.replaceKey(oldKey, newKey)
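# Minimal usage sketch (the file name below is hypothetical, but it follows
# the pattern matched by isDataFile):
#   source = TxtMpiSource(data='5_Date_2016_06_07_10_00_00.txt')
#   meta, units = source.getMetadata()
#   df = source.getData()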
| 32.038549 | 119 | 0.55793 | 1,608 | 14,129 | 4.890547 | 0.246269 | 0.019837 | 0.025432 | 0.018311 | 0.212233 | 0.195956 | 0.160478 | 0.151322 | 0.127416 | 0.116734 | 0 | 0.003633 | 0.337674 | 14,129 | 440 | 120 | 32.111364 | 0.836717 | 0.386439 | 0 | 0.214286 | 0 | 0.005495 | 0.091051 | 0.015175 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087912 | false | 0.021978 | 0.049451 | 0 | 0.241758 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6ca4d4ba42af57fc033e9368f2ae8ef9b4d183f | 6,482 | py | Python | shexter_client/shexterd.py | tetchel/shexter-client | b1db3ac072fc9a53403a15b1f41188e0a09220f4 | [
"MIT"
] | 3 | 2017-12-18T06:37:50.000Z | 2018-02-23T08:31:25.000Z | shexter_client/shexterd.py | tetchel/shexter-client | b1db3ac072fc9a53403a15b1f41188e0a09220f4 | [
"MIT"
] | null | null | null | shexter_client/shexterd.py | tetchel/shexter-client | b1db3ac072fc9a53403a15b1f41188e0a09220f4 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from threading import Thread
from time import sleep
import logging
import shexter.requester
import shexter.platform as platform
import shexter.config
"""
This file is for the shexter daemon, which runs persistantly. Every 5 seconds, it polls the phone to see if there are
unread messages. If there are, it displays a notification to the user.
This file is meant to be run directly; not to be imported by any other file.
"""
def notify(msg: str, title=shexter.config.APP_NAME):
print(title + ': ' + msg)
if notifier:
# Note swap of msg, title order
notify_function(title, msg)
def _parse_contact_name(line: str):
# print('parsing contact name from "{}"'.format(line))
# The contact name is the first word after the first ']'
try:
return line.split(']')[1].strip().split()[0].rstrip(':')
except Exception as e:
print(e)
print('Error parsing contact name from "{}"'.format(line))
def notify_unread(unread: str) -> None:
unread_lines = unread.splitlines()
# Remove the first line, which is just "Unread Messages:"
unread_lines = unread_lines[1:]
if len(unread_lines) > 1:
notify_title = str(len(unread_lines)) + ' New Messages'
notify_msg = 'Messages from '
contact_names = []
for line in unread_lines:
contact_name = _parse_contact_name(line)
# Don't repeat contacts
if contact_name not in contact_names:
notify_msg += contact_name + ', '
contact_names.append(contact_name)
# Remove last ', '
notify_msg = notify_msg[:-2]
elif len(unread_lines) == 0:
# At this time, if the unread response was originally exactly one line,
# it was because the phone rejected the request.
notify_title = 'Approval Required'
notify_msg = 'Approve this computer on your phone'
else:
contact_name = _parse_contact_name(unread_lines[0] )
notify_title = 'New Message'
notify_msg = 'Message from ' + contact_name
# A cool title would be the phone's hostname.
notify(notify_msg, title=notify_title)
def init_notifier_win():
try:
import win10toast
toaster = win10toast.ToastNotifier()
toaster.show_toast(shexter.config.APP_NAME, 'Notifications enabled', duration=3, threaded=True)
return toaster
except ImportError as e:
print(e)
print('***** To use the ' + shexter.config.APP_NAME + ' daemon on Windows you must install win10toast'
' with "[pip | pip3] install win10toast"')
NOTIFY_LEN_S = 10
def notify_win(title: str, msg: str) -> None:
# Notifier is a win10toast.ToastNotifier
notifier.show_toast(title, msg, duration=NOTIFY_LEN_S, threaded=True)
"""
def build_notifier_macos():
    # macOS support is deferred for now
try:
import gntp.notifier
except ImportError:
print('To use the ' + shexter.config.APP_NAME + ' daemon on OSX you must install Growl (see http://growl.info)
and its python library with "pip3 install gntp"')
quit()
"""
import subprocess
NOTIFY_SEND = 'notify-send'
def init_notifier_nix():
try:
subprocess.check_call([NOTIFY_SEND, shexter.config.APP_NAME, 'Notifications enabled',
'-t', '3000'])
return True
except Exception as e:
print(e)
        print('***** To use the ' + shexter.config.APP_NAME + ' daemon on Linux you must install notify-send, eg "sudo apt-get install libnotify-bin"')
def notify_nix(title: str, msg: str):
# print('notify_nix {} {}'.format(title, msg))
    result = subprocess.getstatusoutput('{} "{}" "{}" -t {}'
                                        .format(NOTIFY_SEND, title, msg, NOTIFY_LEN_S * 1000))
if result[0] != 0:
print('Error running notify-send:')
print(result[1])
def init_notifier():
"""
Initializes the 'notifier' and 'notify_function' globals, which are later called by notify
The notifier is an object for the notify_platform functions to use
"""
platf = platform.get_platform()
global notifier, notify_function
if platf == platform.Platform.WIN:
notifier = init_notifier_win()
notify_function = notify_win
elif platf == platform.Platform.LINUX:
notifier = init_notifier_nix()
notify_function = notify_nix
else:
print('Sorry, notifications are not supported on your platform, which appears to be ' + platf)
return None
# Must match response from phone in the case of no msgs.
NO_UNREAD_RESPONSE = 'No unread messages.'
def main(connectinfo: tuple):
running = True
logging.basicConfig(filename=shexter.config.APP_NAME.lower() + 'd.log', level=logging.DEBUG,
format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(shexter.config.APP_NAME)
launched_msg = shexter.config.APP_NAME + ' daemon launched'
logger.info(launched_msg)
logger.info('ConnectInfo: ' + str(connectinfo))
print(launched_msg + ' - CTRL + C to quit')
try:
while running:
unread_result = shexter.requester.unread_command(connectinfo, silent=True)
# print('result: ' + str(type(unread_result)) + ' ' + unread_result)
if not unread_result:
logger.info('Failed to connect to phone')
elif unread_result != NO_UNREAD_RESPONSE:
# new messages
Thread(target=notify_unread, args=(unread_result,)).start()
logger.info('Got at least 1 msg')
else:
logger.debug('No unread')
# print('no unread')
for i in range(5):
# Shorter sleep to afford interrupting...
# https://stackoverflow.com/questions/5114292/break-interrupt-a-time-sleep-in-python
sleep(1)
except (KeyboardInterrupt, EOFError):
print('Exiting')
quit(0)
_connectinfo = shexter.config.configure(False)
if not _connectinfo:
print('Please run ' + shexter.config.APP_NAME + ' config first, so the daemon knows how to find your phone.')
quit()
# Initialize globals
notifier = None
notify_function = None
init_notifier()
if not notifier:
notify_function = print
# Call the main loop
main(_connectinfo)
| 32.248756 | 149 | 0.631749 | 805 | 6,482 | 4.959006 | 0.300621 | 0.039078 | 0.04008 | 0.0501 | 0.102455 | 0.082415 | 0.046343 | 0.035321 | 0.035321 | 0.035321 | 0 | 0.009705 | 0.268744 | 6,482 | 200 | 150 | 32.41 | 0.832489 | 0.149954 | 0 | 0.106195 | 0 | 0.00885 | 0.161117 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079646 | false | 0 | 0.079646 | 0 | 0.19469 | 0.123894 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6caa6b944d454c9401a57591872b8b05ba06ad2 | 7,218 | py | Python | info_epd/mixins/salah.py | ibnadam/info_epd | 3838cc30187dc10fd2c79fba2d51357d3dbbf47a | [
"MIT"
] | null | null | null | info_epd/mixins/salah.py | ibnadam/info_epd | 3838cc30187dc10fd2c79fba2d51357d3dbbf47a | [
"MIT"
] | null | null | null | info_epd/mixins/salah.py | ibnadam/info_epd | 3838cc30187dc10fd2c79fba2d51357d3dbbf47a | [
"MIT"
] | null | null | null | import sys
import os
import logging
import time
import datetime
from PIL import (
Image,
ImageDraw,
ImageFont
)
from config import *
from util import *
from info_epd import praytimes
DEBUG_PRAYTIMES = False
class SalahMixin:
"""Uses praytimes library for calculating prayer times.
Make sure settings are correct for your location, Madhab, etc.
Especially check:
    * Calculation method
* Settings for Maghrib & midnight
"""
calc_method = 'ISNA'
time_fmt = '12h'
def __init__(self):
self._funcs['setup'].append(self.setup_praytimes)
self._funcs['update'].append(self.update_praytimes)
self._funcs['redraw'].append(self.redraw_praytimes)
def setup_praytimes(self):
logging.info("Setup praytimes...")
self.pt = praytimes.PrayTimes()
self.pt.setMethod(self.calc_method)
params = dict(
fajr=15,
maghrib='0 min',
isha=15,
            midnight='Jafari'  # double-check: appears more correct here than non-Jafari
)
self.pt.adjust(params)
self.update_info['praytimes'] = {}
self.update_info['praytimes']['pt'] = None
self.update_info['praytimes']['curr'] = None
self.update_info['praytimes']['curr_end'] = None
self.update_info['praytimes']['next_time'] = None
def update_praytimes(self):
logging.info("Update praytimes...")
today = get_today()
tomorrow = get_tomorrow()
now = get_now()
coords = COORDS['Culver City']
timezone = TIMEZONES['Los Angeles']
dst = time.localtime().tm_isdst
pt = self.pt.getTimes(today, coords,
timezone, dst, self.time_fmt)
fmt = '%I:%M%p'
def to_time_obj(p1):
p2 = datetime.datetime.strptime(pt[p1], fmt)
def to_date_obj():
return datetime.datetime(year=now.year, month=now.month, day=now.day,
hour=p2.hour, minute=p2.minute)
return to_date_obj
fajr = to_time_obj('fajr')()
sunrise = to_time_obj('sunrise')()
dhuhr = to_time_obj('dhuhr')()
asr = to_time_obj('asr')()
maghrib = to_time_obj('maghrib')()
isha = to_time_obj('isha')()
midnight = to_time_obj('midnight')()
# Assume maghrib lasts for 45 mins
maghrib_end = maghrib + datetime.timedelta(minutes=45)
# Figure out what applies to current time
curr = {}
curr['fajr'] = fajr <= now < sunrise
after_fajr = sunrise <= now < dhuhr
curr['dhuhr'] = dhuhr <= now < asr
curr['asr'] = asr <= now < maghrib
curr['maghrib'] = maghrib <= now < maghrib_end
after_maghrib = maghrib_end <= now < isha
# Check isha time (could be past 00:00)
is_isha = False
if not any((curr['fajr'], curr['dhuhr'], curr['asr'], curr['maghrib'],
after_fajr, after_maghrib)):
# Either we are before fajr, or after isha
after_isha = now >= isha
if after_isha:
m_hr = midnight.hour
if m_hr < fajr.hour:
m_hr += 24
m_min = midnight.minute
n_hr = now.hour
n_min = now.minute
if n_hr < m_hr:
is_isha = True
elif n_hr == m_hr:
if n_min < m_min:
is_isha = True
curr['isha'] = is_isha
# Figure out what comes next
next_secs, next_time = secs_til_midnight(), 'midnight'
next_prayer, pt['next_fajr'] = 'next_fajr', None
curr_end = None
if curr['fajr']:
next_secs, next_time = (sunrise - now).seconds, pt['sunrise']
curr_end = pt['sunrise']
next_prayer = 'dhuhr'
elif after_fajr:
next_secs, next_time = (dhuhr - now).seconds, pt['dhuhr']
next_prayer = 'dhuhr'
elif curr['dhuhr']:
next_secs, next_time = (asr - now).seconds, pt['asr']
curr_end = pt['asr']
next_prayer = 'asr'
elif curr['asr']:
next_secs, next_time = (maghrib - now).seconds, pt['maghrib']
curr_end = pt['maghrib']
next_prayer = 'maghrib'
elif curr['maghrib']:
maghrib_end_t = maghrib_end.strftime('%I:%M%p')
next_secs, next_time = (maghrib_end - now).seconds, maghrib_end_t
curr_end = maghrib_end_t
next_prayer = 'isha'
elif after_maghrib:
next_secs, next_time = (isha - now).seconds, pt['isha']
next_prayer = 'isha'
elif curr['isha']:
curr_end = pt['midnight']
elif now < fajr:
next_secs, next_time = (fajr - now).seconds, pt['fajr']
next_prayer = 'fajr'
# Need to get next day's times
if next_prayer == 'next_fajr':
next_pt = self.pt.getTimes(tomorrow, coords,
timezone, dst, self.time_fmt)
pt['next_fajr'] = next_pt['fajr']
# Save info
self.update_info['next_secs'] = next_secs
self.update_info['praytimes']['pt'] = pt
self.update_info['praytimes']['curr'] = curr
self.update_info['praytimes']['curr_end'] = curr_end
self.update_info['praytimes']['next_time'] = next_time
self.update_info['praytimes']['next_prayer'] = next_prayer
def redraw_praytimes(self):
logging.info("Redraw praytimes...")
if EPD_USED == EPD2in13:
self.redraw_praytimes_partial()
else:
self.redraw_praytimes_full()
def redraw_praytimes_partial(self):
pinfo = self.update_info['praytimes']
pt, curr, curr_end = pinfo['pt'], pinfo['curr'], pinfo['curr_end']
next_upd, next_prayer = pinfo['next_time'], pinfo['next_prayer']
h, w = self.epd.height, self.epd.width
bmp = Image.open(os.path.join(imgdir, 'masjid.bmp'))
self.image.paste(bmp, (2,w//2+25))
# If have current end time then we have a current prayer time as well
font = font24
if curr_end:
for p in curr:
if curr[p]:
txt = f"{p.capitalize()}: {pt[p]}"
x, y = font.getsize(txt)
self.draw.rectangle([(5,5),(x+15,y+15)], fill='black')
self.draw.text((10,10), txt, font=font, fill='white')
self.draw.text((10, y+15), f'Ends {curr_end}', font=font18, fill=0)
else:
txt = f'Next update: {next_upd}'
self.draw.text((10, 10), txt, font=font, fill=0)
# Next prayer
self.draw.text((55,w//2+10), 'Upcoming:', font=font18, fill=0)
p = next_prayer
n = 'fajr' if p=='next_fajr' else p
txt = f"{n.capitalize()}: {pt[p]}"
self.draw.text((55,w//2+30), txt, font=font, fill=0)
def redraw_praytimes_full(self):
"""To be implemented"""
| 34.371429 | 87 | 0.541563 | 876 | 7,218 | 4.27968 | 0.221461 | 0.040011 | 0.044812 | 0.067485 | 0.150173 | 0.081889 | 0.016538 | 0.016538 | 0.016538 | 0 | 0 | 0.013944 | 0.334303 | 7,218 | 209 | 88 | 34.535885 | 0.766285 | 0.077445 | 0 | 0.063291 | 0 | 0 | 0.104866 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.056962 | 0.006329 | 0.139241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6d00e0815efbd0fa8cd9dea45c88de5c7194783 | 2,689 | py | Python | src/web/sistema/templatetags/number_to_text.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 5 | 2018-03-08T17:22:27.000Z | 2018-03-11T14:20:53.000Z | src/web/sistema/templatetags/number_to_text.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 263 | 2018-03-08T18:05:12.000Z | 2022-03-11T23:26:20.000Z | src/web/sistema/templatetags/number_to_text.py | fossabot/SIStema | 1427dda2082688a9482c117d0e24ad380fdc26a6 | [
"MIT"
] | 6 | 2018-03-12T19:48:19.000Z | 2022-01-14T04:58:52.000Z | from django.template import Library
register = Library()
hundreds = [
'',
'сто',
'двести',
'триста',
'четыреста',
'пятьсот',
'шестьсот',
'семьсот',
'восемьсот',
'девятьсот'
]
first_decade = [
'',
('одна', 'один'),
('две', 'два'),
'три',
'четыре',
'пять',
'шесть',
'семь',
'восемь',
'девять'
]
second_decade = [
'десять',
'одиннадцать',
'двенадцать',
'тринадцать',
'четырнадцать',
'пятнадцать',
'шестнадцать',
'семнадцать',
'восемнадцать',
'девятнадцать'
]
decades = [
'',
'десять',
'двадцать',
'тридцать',
'сорок',
'пятьдесят',
'шестьдесят',
'семьдесят',
'восемьдесят',
'девяносто'
]
def pluralize(number, one, two, five):
last_digit = number % 10
prelast_digit = (number // 10) % 10
if last_digit == 1 and prelast_digit != 1:
return one
if 2 <= last_digit <= 4 and prelast_digit != 1:
return two
return five
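# Example (sketch): with the forms ('тысяча', 'тысячи', 'тысяч'),
# pluralize(21, ...) -> 'тысяча', pluralize(3, ...) -> 'тысячи',
# pluralize(12, ...) -> 'тысяч'.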
@register.filter(is_safe=False)
def russian_pluralize(value, arg='s'):
if ',' not in arg:
arg = ',' + arg
bits = arg.split(',')
if len(bits) > 3:
return ''
one, two, five = bits[:3]
return pluralize(value, one, two, five)
@register.filter
def number_to_text(number, gender='male', return_text_for_zero=True):
""" Supports numbers less than 1 000 000 000 """
if number is None or number == 0:
return 'ноль' if return_text_for_zero else ''
text = []
    if number >= 1000000:
        millions = number // 1000000
        text.extend([number_to_text(millions, gender='male', return_text_for_zero=False),
                     'миллион' + pluralize(millions, '', 'а', 'ов')])
        number %= 1000000
if number >= 1000:
thousands = number // 1000
text.extend([number_to_text(thousands, gender='female', return_text_for_zero=False),
'тысяч' + pluralize(thousands, 'а', 'и', '')])
number %= 1000
if number >= 100:
text.append(hundreds[number // 100])
number %= 100
if number == 0:
pass
elif number < 10:
number_text = first_decade[number]
if isinstance(number_text, (tuple, list)):
number_text = number_text[1 if gender == 'male' else 0]
text.append(number_text)
elif number < 20:
text.append(second_decade[number - 10])
else:
number_text = first_decade[number % 10]
if isinstance(number_text, (tuple, list)):
number_text = number_text[1 if gender == 'male' else 0]
text.extend([decades[number // 10], number_text])
return ' '.join(text)
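# Quick sanity checks (a sketch; expected values worked out by hand):
#   number_to_text(21)                  -> 'двадцать один'
#   number_to_text(21, gender='female') -> 'двадцать одна'
#   number_to_text(2000)                -> 'две тысячи'
#   number_to_text(105)                 -> 'сто пять'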
| 22.040984 | 92 | 0.564522 | 300 | 2,689 | 4.923333 | 0.39 | 0.067705 | 0.035207 | 0.046039 | 0.249831 | 0.13541 | 0.098849 | 0.098849 | 0.098849 | 0.098849 | 0 | 0.041907 | 0.290071 | 2,689 | 121 | 93 | 22.223141 | 0.731797 | 0.014875 | 0 | 0.09 | 0 | 0 | 0.128458 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03 | false | 0.01 | 0.01 | 0 | 0.11 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6d5022d562b8e05c89e7d039fdd54b153bdd856 | 10,444 | py | Python | resdk/tests/unit/test_utils.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | resdk/tests/unit/test_utils.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | resdk/tests/unit/test_utils.py | tristanbrown/resolwe-bio-py | c911defde8a5e7e902ad1adf4f9e480f17002c18 | [
"Apache-2.0"
] | null | null | null | """
Unit tests for resdk/resources/utils.py file.
"""
# pylint: disable=missing-docstring, protected-access
import unittest
import six
from mock import MagicMock, call, patch
from resdk.resources import Collection, Data, Process, Relation, Sample
from resdk.resources.utils import (
_print_input_line, endswith_colon, fill_spaces, find_field, get_collection_id, get_data_id,
get_process_id, get_relation_id, get_resolwe, get_resource_collection, get_sample_id,
get_samples, iterate_fields, iterate_schema,
)
PROCESS_OUTPUT_SCHEMA = [
{'name': "fastq", 'type': "basic:file:", 'label': "Reads file"},
{'name': "bases", 'type': "basic:string:", 'label': "Number of bases"},
{'name': "options", 'label': "Options", 'group': [
{'name': "id", 'type': "basic:string:", 'label': "ID"},
{'name': "k", 'type': "basic:integer:", 'label': "k-mer size"}
]}
]
OUTPUT = {
'fastq': {'file': "example.fastq.gz"},
'bases': "75",
'options': {
'id': 'abc',
'k': 123}
}
class TestUtils(unittest.TestCase):
def test_iterate_fields(self):
result = list(iterate_fields(OUTPUT, PROCESS_OUTPUT_SCHEMA))
# result object is iterator - we use lists to pull all elements
expected = [
({
'type': 'basic:string:',
'name': 'id',
'label': 'ID'
}, {
'k': 123,
'id': 'abc'
}), ({
'type': 'basic:string:',
'name': 'bases',
'label': 'Number of bases'
}, {
'options': {
'k': 123,
'id': 'abc'
},
'bases': '75',
'fastq': {
'file': 'example.fastq.gz'
}
}), ({
'type': 'basic:file:',
'name': 'fastq',
'label': 'Reads file'
}, {
'options': {
'k': 123,
'id': 'abc'
},
'bases': '75',
'fastq': {
'file': 'example.fastq.gz'
}
}), ({
'type': 'basic:integer:',
'name': 'k',
'label': 'k-mer size'
}, {
'k': 123,
'id': 'abc'
})
]
six.assertCountEqual(self, result, expected)
def test_iterate_fields_modif(self):
"""
Ensure that changing ``values`` inside iteration loop also changes ``OUTPUT`` values.
"""
for schema, values in iterate_fields(OUTPUT, PROCESS_OUTPUT_SCHEMA):
field_name = schema['name']
if field_name == "bases":
values[field_name] = str(int(values[field_name]) + 1)
self.assertEqual(OUTPUT['bases'], "76")
# Fix the OUTPUT to previous state:
OUTPUT['bases'] = "75"
def test_find_field(self):
result = find_field(PROCESS_OUTPUT_SCHEMA, 'fastq')
expected = {'type': 'basic:file:', 'name': 'fastq', 'label': 'Reads file'}
self.assertEqual(result, expected)
def test_iterate_schema(self):
result1 = list(iterate_schema(OUTPUT, PROCESS_OUTPUT_SCHEMA, 'my_path'))
result2 = list(iterate_schema(OUTPUT, PROCESS_OUTPUT_SCHEMA))
expected1 = [
({'name': 'fastq', 'label': 'Reads file', 'type': 'basic:file:'},
{'fastq': {'file': 'example.fastq.gz'}, 'options': {'k': 123, 'id': 'abc'},
'bases': '75'}, 'my_path.fastq'),
({'name': 'bases', 'label': 'Number of bases', 'type': 'basic:string:'},
{'fastq': {'file': 'example.fastq.gz'}, 'options': {'k': 123, 'id': 'abc'},
'bases': '75'}, 'my_path.bases'),
({'name': 'id', 'label': 'ID', 'type': 'basic:string:'}, {'k': 123, 'id': 'abc'},
'my_path.options.id'),
({'name': 'k', 'label': 'k-mer size', 'type': 'basic:integer:'},
{'k': 123, 'id': 'abc'}, 'my_path.options.k')]
expected2 = [
({'type': 'basic:file:', 'name': 'fastq', 'label': 'Reads file'},
{'fastq': {'file': 'example.fastq.gz'}, 'bases': '75',
'options': {'k': 123, 'id': 'abc'}}),
({'type': 'basic:string:', 'name': 'bases', 'label': 'Number of bases'},
{'fastq': {'file': 'example.fastq.gz'}, 'bases': '75',
'options': {'k': 123, 'id': 'abc'}}),
({'type': 'basic:string:', 'name': 'id', 'label': 'ID'}, {'k': 123, 'id': 'abc'}),
({'type': 'basic:integer:', 'name': 'k', 'label': 'k-mer size'},
{'k': 123, 'id': 'abc'})]
self.assertEqual(result1, expected1)
self.assertEqual(result2, expected2)
def test_fill_spaces(self):
result = fill_spaces("one_word", 12)
self.assertEqual(result, "one_word ")
@patch('resdk.resources.utils.print')
def test_print_input_line(self, print_mock):
_print_input_line(PROCESS_OUTPUT_SCHEMA, 0)
calls = [
call(u'- fastq [basic:file:] - Reads file'),
call(u'- bases [basic:string:] - Number of bases'),
call(u'- options - Options'),
call(u' - id [basic:string:] - ID'),
call(u' - k [basic:integer:] - k-mer size')]
self.assertEqual(print_mock.mock_calls, calls)
def test_endswith_colon(self):
schema = {'process_type': 'data:reads:fastq:single'}
endswith_colon(schema, 'process_type')
self.assertEqual(schema, {'process_type': u'data:reads:fastq:single:'})
def test_get_collection_id(self):
collection = Collection(id=1, resolwe=MagicMock())
collection.id = 1 # this is overriden when initialized
self.assertEqual(get_collection_id(collection), 1)
self.assertEqual(get_collection_id(2), 2)
def test_get_sample_id(self):
sample = Sample(id=1, resolwe=MagicMock())
sample.id = 1 # this is overriden when initialized
self.assertEqual(get_sample_id(sample), 1)
self.assertEqual(get_sample_id(2), 2)
def test_get_data_id(self):
data = Data(id=1, resolwe=MagicMock())
data.id = 1 # this is overriden when initialized
self.assertEqual(get_data_id(data), 1)
self.assertEqual(get_data_id(2), 2)
def test_get_process_id(self):
process = Process(id=1, resolwe=MagicMock())
process.id = 1 # this is overriden when initialized
self.assertEqual(get_process_id(process), 1)
self.assertEqual(get_process_id(2), 2)
def test_get_relation_id(self):
relation = Relation(id=1, resolwe=MagicMock())
relation.id = 1 # this is overriden when initialized
self.assertEqual(get_relation_id(relation), 1)
self.assertEqual(get_relation_id(2), 2)
def test_get_samples(self):
collection = Collection(id=1, resolwe=MagicMock())
collection._samples = ['sample_1', 'sample_2']
self.assertEqual(get_samples(collection), ['sample_1', 'sample_2'])
collection_1 = Collection(id=1, resolwe=MagicMock())
collection_1._samples = ['sample_1']
collection_2 = Collection(id=2, resolwe=MagicMock())
collection_2._samples = ['sample_2']
self.assertEqual(get_samples([collection_1, collection_2]), ['sample_1', 'sample_2'])
data = Data(id=1, resolwe=MagicMock())
data._sample = 'sample_1'
self.assertEqual(get_samples(data), ['sample_1'])
data1 = Data(id=1, resolwe=MagicMock())
data1._sample = 'sample1'
data2 = Data(id=2, resolwe=MagicMock())
data2._sample = 'sample2'
self.assertEqual(get_samples([data1, data2]), ['sample1', 'sample2'])
data = Data(id=1, resolwe=MagicMock(**{'sample.filter.return_value': None}))
data._sample = None
with self.assertRaises(TypeError):
get_samples(data)
sample = Sample(id=1, resolwe=MagicMock())
self.assertEqual(get_samples(sample), [sample])
sample_1 = Sample(id=1, resolwe=MagicMock())
sample_2 = Sample(id=3, resolwe=MagicMock())
self.assertEqual(get_samples([sample_1, sample_2]), [sample_1, sample_2])
def test_get_resource_collection(self):
collection = Collection(id=1, resolwe=MagicMock())
collection.id = 1 # this is overriden when initialized
self.assertEqual(get_resource_collection(collection), 1)
relation = Relation(id=1, resolwe=MagicMock())
relation._hydrated_collection = Collection(id=2, resolwe=MagicMock())
relation._hydrated_collection.id = 2 # this is overriden when initialized
self.assertEqual(get_resource_collection(relation), 2)
data = Data(id=1, resolwe=MagicMock())
data._collections = [Collection(id=3, resolwe=MagicMock())]
data._collections[0].id = 3 # this is overriden when initialized
self.assertEqual(get_resource_collection(data), 3)
sample = Sample(id=1, resolwe=MagicMock())
sample._collections = [Collection(id=4, resolwe=MagicMock())]
sample._collections[0].id = 4 # this is overriden when initialized
self.assertEqual(get_resource_collection(sample), 4)
sample = Sample(id=1, resolwe=MagicMock())
sample._collections = [
Collection(id=5, resolwe=MagicMock()),
Collection(id=6, resolwe=MagicMock())
]
sample._collections[0].id = 5 # this is overriden when initialized
sample._collections[1].id = 6 # this is overriden when initialized
self.assertEqual(get_resource_collection(sample), None)
with self.assertRaises(LookupError):
get_resource_collection(sample, fail_silently=False)
def test_get_resolwe(self):
# same resolwe object
resolwe_mock = MagicMock()
relation = Relation(id=1, resolwe=resolwe_mock)
sample = Sample(id=1, resolwe=resolwe_mock)
self.assertEqual(get_resolwe(relation, sample), resolwe_mock)
relation = Relation(id=1, resolwe=MagicMock())
sample = Sample(id=1, resolwe=MagicMock())
with self.assertRaises(TypeError):
get_resolwe(relation, sample)
if __name__ == '__main__':
unittest.main()
| 37.035461 | 95 | 0.567503 | 1,156 | 10,444 | 4.957612 | 0.128028 | 0.075903 | 0.069098 | 0.062991 | 0.555749 | 0.440586 | 0.362938 | 0.282673 | 0.238527 | 0.238527 | 0 | 0.021751 | 0.278054 | 10,444 | 281 | 96 | 37.16726 | 0.738329 | 0.065588 | 0 | 0.234742 | 0 | 0 | 0.162034 | 0.010294 | 0 | 0 | 0 | 0 | 0.15493 | 1 | 0.070423 | false | 0 | 0.023474 | 0 | 0.098592 | 0.023474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6d7581ab54e0bc20bcf26b8c40b0005e45bc25c | 1,136 | py | Python | pa1-skeleton/submission/parse_block.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | pa1-skeleton/submission/parse_block.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | pa1-skeleton/submission/parse_block.py | yzhong94/cs276-spring-2019 | a4780a9f88b8c535146040fe11bb513c91c5693b | [
"MIT"
] | null | null | null | class BSBIIndex(BSBIIndex):
def parse_block(self, block_dir_relative):
"""Parses a tokenized text file into termID-docID pairs
Parameters
----------
block_dir_relative : str
Relative Path to the directory that contains the files for the block
Returns
-------
List[Tuple[Int, Int]]
Returns all the td_pairs extracted from the block
Should use self.term_id_map and self.doc_id_map to get termIDs and docIDs.
These persist across calls to parse_block
"""
        ### Begin your code
        import os  # local import: this skeleton fragment has no module-level imports
        td_pairs = []
        block_dir = os.path.join(self.data_dir, block_dir_relative)
        for filename in os.listdir(block_dir):
            with open(os.path.join(block_dir, filename), 'r',
                      encoding='utf8', errors='ignore') as f:
                # plain indexing resolves (or lazily assigns) the integer id
                # for the key in the skeleton's id maps
                doc_id = self.doc_id_map[filename]
                for token in f.read().split():
                    term_id = self.term_id_map[token]
                    td_pairs.append((term_id, doc_id))
        return td_pairs
        ### End your code
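        # Illustrative usage (sketch -- the constructor arguments, data_dir
        # and the two id maps are assumed to be set up by the rest of the
        # BSBI skeleton, which is not part of this file):
        #   index = BSBIIndex(...)
        #   td_pairs = index.parse_block('0')  # -> [(term_id, doc_id), ...]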
| 39.172414 | 120 | 0.564261 | 140 | 1,136 | 4.321429 | 0.521429 | 0.052893 | 0.105785 | 0.042975 | 0.089256 | 0.089256 | 0 | 0 | 0 | 0 | 0 | 0.001337 | 0.341549 | 1,136 | 28 | 121 | 40.571429 | 0.807487 | 0.365317 | 0 | 0 | 0 | 0 | 0.023451 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6dd795a3bd40d81aa9b1e2a17911949cba95e34 | 14,321 | py | Python | inventory/nova.py | forgeservicelab/ansible.account-cleanup | 14a855f57c5d06c1114f18f561199f9a1534f707 | [
"MIT"
] | null | null | null | inventory/nova.py | forgeservicelab/ansible.account-cleanup | 14a855f57c5d06c1114f18f561199f9a1534f707 | [
"MIT"
] | null | null | null | inventory/nova.py | forgeservicelab/ansible.account-cleanup | 14a855f57c5d06c1114f18f561199f9a1534f707 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# (c) 2012, Marco Vito Moscaritolo <marco@agavee.com>
# modified by Tomas Karasek <tomas.karasek@digile.fi>
#
# This file is part of Ansible,
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import sys
import re
import os
import argparse
import subprocess
import yaml
import time
import md5
import itertools
import novaclient.client
import ansible.module_utils.openstack
try:
import json
except ImportError:
import simplejson as json
# This is a script that serves a dynamic Ansible inventory from Nova. Features:
# - you can refer to instances by their nova name in ansible{-playbook} calls
# - you can refer to single tenants, regions and openstack environments in
# ansible{-playbook} calls
# - you can refer to a hostgroup when you pass the arbitrary --meta group=
# in "nova boot"
# - it caches the state of the cloud
# - it tries to guess ansible_ssh_user based on the name of the image
#   (image names containing 'ubuntu' -> 'ubuntu', 'centos' -> 'cloud-user', ...)
# - allows accessing machines by their private ip *
# - it will work with no additional configuration, just handling single tenant
# from set OS_* environment variables (just like python-novaclient).
# - you can choose to heavy-configure it for multiple environments
# - it's configured from simple YAML (I dislike ConfigParser). See nova.yml
# - Nodes can be listed in inventory either by DNS name or IP address based
# on setting.
#
# * I took few ideas and some code from other pull requests
# - https://github.com/ansible/ansible/pull/8657 by Monty Taylor
# - https://github.com/ansible/ansible/pull/7444 by Carson Gee
#
# If Ansible fails to parse JSON, please run this with --list and observe.
#
# HOW CACHING WORKS:
# Cache of list of servers is kept per combination of (auth_url, region_name,
# project_id). Default max age is 300 seconds. You can set the age per section
# (openstack environment) in config.
#
# If you want to build the cache from cron, consider:
# */5 * * * * . /home/tomk/os/openrc.sh && \
# ANSIBLE_NOVA_CONFIG=/home/tomk/.nova.yml \
# /home/tomk/ansible/plugins/inventory/nova.py --refresh-cache
#
# HOW IS NOVA INVENTORY CONFIGURED:
# (Note: if you have env vars set from openrc.sh, you can run this without
# writing the config file. Defaults are sane. The values in the config file
# will override the defaults.)
#
# To load configuration from a file, you must have the config file path in
# environment variable ANSIBLE_NOVA_CONFIG.
#
# IN THE CONFIG FILE:
# The keys in the top level dict are names for different OS environments.
# The keys in a dict for OS environment can be:
# - auth_url
# - region_name (can be a list)
# - project_id (can be a list)
# - username
# - api_key
# - service_type
# - auth_system
# - prefer_private (connect using private IPs)
# - cache_max_age (how long to consider cached data. In seconds)
# - resolve_ips (translate IP addresses to domain names)
#
# If you have a list in region and/or project, all the combinations of
# will be listed.
#
# If you don't have configfile, there will be one cloud section created called
# 'openstack'.
#
# WHAT IS AVAILABLE AS A GROUP FOR ANSIBLE CALLS (how are nodes grouped):
# tenants, regions, clouds (top config section), groups by metadata key (nova
# boot --meta group=<name>).
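#
# EXAMPLE CONFIG FILE (illustrative values only; every key is one of the
# options documented above -- adapt names, URLs and credentials to your cloud):
#
# openstack:
#   auth_url: https://keystone.example.com:5000/v2.0
#   username: deployer
#   api_key: s3cret
#   project_id: [tenant1, tenant2]
#   region_name: [regionOne, regionTwo]
#   prefer_private: false
#   cache_max_age: 300
#   resolve_ips: true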
CONFIG_ENV_VAR_NAME = 'ANSIBLE_NOVA_CONFIG'
NOVA_DEFAULTS = {
'auth_system': os.environ.get('OS_AUTH_SYSTEM'),
'service_type': 'compute',
'username': os.environ.get('OS_USERNAME'),
'api_key': os.environ.get('OS_PASSWORD'),
'auth_url': os.environ.get('OS_AUTH_URL'),
'project_id': os.environ.get('OS_TENANT_NAME'),
'region_name': os.environ.get('OS_REGION_NAME'),
'prefer_private': False,
'version': '2',
'cache_max_age': 300,
'resolve_ips': True,
}
DEFAULT_CONFIG_KEY = 'openstack'
CACHE_DIR = '~/.ansible/tmp'
CONFIG = {}
def load_config():
global CONFIG
_config_file = os.environ.get(CONFIG_ENV_VAR_NAME)
if _config_file:
with open(_config_file) as f:
            CONFIG = yaml.safe_load(f)  # safe_load: the config needs no arbitrary Python objects
if not CONFIG:
CONFIG = {DEFAULT_CONFIG_KEY: {}}
for section in CONFIG.values():
for key in NOVA_DEFAULTS:
if (key not in section):
section[key] = NOVA_DEFAULTS[key]
def push(data, key, element):
''' Assist in items to a dictionary of lists '''
if (not element) or (not key):
return
if key in data:
data[key].append(element)
else:
data[key] = [element]
def to_safe(word):
'''
Converts 'bad' characters in a string to underscores so they can
be used as Ansible groups
'''
return re.sub(r"[^A-Za-z0-9\-]", "_", word)
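# For example, to_safe('my group/1') returns 'my_group_1'.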
def get_access_ip(server, prefer_private):
''' Find an IP for Ansible SSH for a host. '''
private = ansible.module_utils.openstack.openstack_find_nova_addresses(
getattr(server, 'addresses'), 'fixed', 'private')
public = ansible.module_utils.openstack.openstack_find_nova_addresses(
getattr(server, 'addresses'), 'floating', 'public')
if prefer_private:
return private[0]
if server.accessIPv4:
return server.accessIPv4
if public:
return public[0]
else:
return private[0]
def get_metadata(server):
''' Returns dictionary of all host metadata '''
results = {}
for key in vars(server):
# Extract value
value = getattr(server, key)
# Generate sanitized key
key = 'os_' + re.sub(r"[^A-Za-z0-9\-]", "_", key).lower()
        # Add value to instance result (exclude manager class)
#TODO: maybe use value.__class__ or similar inside of key_name
if key != 'os_manager':
results[key] = value
return results
def get_ssh_user(server, nova_client):
''' Try to guess ansible_ssh_user based on image name. '''
try:
image_name = nova_client.images.get(server.image['id']).name
if 'ubuntu' in image_name.lower():
return 'ubuntu'
if 'centos' in image_name.lower():
return 'cloud-user'
if 'debian' in image_name.lower():
return 'debian'
if 'coreos' in image_name.lower():
return 'coreos'
    except Exception:
        # the image may be missing or inaccessible; fall through (no guess)
        pass
def get_nova_client(combination):
'''
There is a bit more info in the combination than we need for nova client,
so we need to create a copy and delete keys that are not relevant.
'''
kwargs = dict(combination)
del kwargs['name']
del kwargs['prefer_private']
del kwargs['cache_max_age']
del kwargs['resolve_ips']
return novaclient.client.Client(**kwargs)
def merge_update_to_result(result, update):
'''
This will merge data from a nova servers.list call (in update) into
aggregating dict (in result)
'''
for host, specs in update['_meta']['hostvars'].items():
        # Can same host be in two different listings? I hope not.
result['_meta']['hostvars'][host] = dict(specs)
# groups must be copied if not present, otherwise merged
for group in update:
if group == '_meta':
continue
if group not in result:
# copy the list over
result[group] = update[group][:]
else:
result[group] = list(set(update[group]) | set(result[group]))
def get_name(ip):
''' Gets the shortest domain name for IP address'''
# I first did this with gethostbyaddr but that did not return all the names
    # Also, this won't work on Windows. But it can be turned off by setting
    # resolve_ips to false
command = "host %s" % ip
p = subprocess.Popen(command.split(), stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, _ = p.communicate()
if p.returncode != 0:
return None
names = []
for l in stdout.split('\n'):
if 'domain name pointer' not in l:
continue
names.append(l.split()[-1])
return min(names, key=len)
def get_update(call_params):
'''
Fetch host dicts and groups from single nova_client.servers.list call.
    This is called for each element in the "cartesian product" of openstack
    environments, tenants and regions.
'''
update = {'_meta': {'hostvars': {}}}
# Cycle on servers
nova_client = get_nova_client(call_params)
for server in nova_client.servers.list():
access_ip = get_access_ip(server, call_params['prefer_private'])
access_identifier = access_ip
if call_params['resolve_ips']:
dns_name = get_name(access_ip)
if dns_name:
access_identifier = dns_name
# Push to a group for its name. This way we can use the nova name as
# a target for ansible{-playbook}
push(update, server.name, access_identifier)
# Run through each metadata item and add instance to it
for key, value in server.metadata.iteritems():
composed_key = to_safe('tag_{0}_{1}'.format(key, value))
push(update, composed_key, access_identifier)
# Do special handling of group for backwards compat
# inventory update
group = 'undefined'
if 'group' in server.metadata:
group = server.metadata['group']
push(update, group, access_identifier)
# Add vars to _meta key for performance optimization in
# Ansible 1.3+
update['_meta']['hostvars'][access_identifier] = get_metadata(server)
# guess username based on image name
ssh_user = get_ssh_user(server, nova_client)
if ssh_user:
host_record = update['_meta']['hostvars'][access_identifier]
host_record['ansible_ssh_user'] = ssh_user
push(update, call_params['name'], access_identifier)
push(update, call_params['project_id'], access_identifier)
if call_params['region_name']:
push(update, call_params['region_name'], access_identifier)
return update
def expand_to_product(d):
'''
this will transform
{1: [2, 3, 4], 5: [6, 7]}
to
[{1: 2, 5: 6}, {1: 2, 5: 7}, {1: 3, 5: 6}, {1: 3, 5: 7}, {1: 4, 5: 6},
{1: 4, 5: 7}]
'''
return (dict(itertools.izip(d, x)) for x in
itertools.product(*d.itervalues()))
def get_list_of_kwarg_combinations():
'''
    This will transform
CONFIG = {'openstack':{version:'2', project_id:['tenant1', tenant2'],...},
'openstack_dev':{version:'2', project_id:'tenant3',...},
into
[{'name':'openstack', version:'2', project_id: 'tenant1', ...},
{'name':'openstack', version:'2', project_id: 'tenant2', ...},
{'name':'openstack_dev', version:'2', project_id: 'tenant3', ...}]
The elements in the returned list can be (with little customization) used
as **kwargs for nova client.
'''
l = []
for section in CONFIG:
d = dict(CONFIG[section])
d['name'] = section
for key in d:
            # all single elements must become lists for the product to work
if type(d[key]) is not list:
d[key] = [d[key]]
for one_call_kwargs in expand_to_product(d):
l.append(one_call_kwargs)
return l
def get_cache_filename(call_params):
'''
cache filename is
~/.ansible/tmp/<md5(auth_url,project_id,region_name)>.nova.json
'''
id_to_hash = ("region_name: %(region_name)s, auth_url:%(auth_url)s,"
"project_id: %(project_id)s, resolve_ips: %(resolve_ips)s"
% call_params)
return os.path.join(os.path.expanduser(CACHE_DIR),
md5.new(id_to_hash).hexdigest() + ".nova.json")
def cache_valid(call_params):
''' cache file is specific for (auth_url, project_id, region_name) '''
cache_path = get_cache_filename(call_params)
if os.path.isfile(cache_path):
mod_time = os.path.getmtime(cache_path)
current_time = time.time()
if (mod_time + call_params['cache_max_age']) > current_time:
return True
return False
def update_cache(call_params):
fn = get_cache_filename(call_params)
content = get_update(call_params)
with open(fn, 'w') as f:
f.write(json.dumps(content, sort_keys=True, indent=2))
def load_from_cache(call_params):
fn = get_cache_filename(call_params)
with open(fn) as f:
return json.loads(f.read())
def get_args(args_list):
parser = argparse.ArgumentParser(
description='Nova dynamic inventory for Ansible')
g = parser.add_mutually_exclusive_group()
g.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
g.add_argument('--host', action='store',
help='Get all the variables about a specific instance')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help=('Force refresh of cache by making API requests to'
'Nova (default: False - use cache files)'))
return parser.parse_args(args_list)
def main(args_list):
load_config()
args = get_args(args_list)
if args.host:
print(json.dumps({}))
return 0
if args.list:
output = {'_meta': {'hostvars': {}}}
# we have to deal with every combination of # (cloud, region, project).
for c in get_list_of_kwarg_combinations():
if args.refresh_cache or (not cache_valid(c)):
update_cache(c)
update = load_from_cache(c)
merge_update_to_result(output, update)
print(json.dumps(output, sort_keys=True, indent=2))
return 0
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
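# Example invocation (the config path is a placeholder):
#   ANSIBLE_NOVA_CONFIG=~/.nova.yml ./nova.py --refresh-cache
#   ./nova.py --list            # prints the grouped inventory as JSON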
| 33.46028 | 79 | 0.649047 | 1,994 | 14,321 | 4.521063 | 0.245236 | 0.019967 | 0.009318 | 0.009318 | 0.130006 | 0.086744 | 0.050804 | 0.033943 | 0.026179 | 0.016639 | 0 | 0.007915 | 0.241254 | 14,321 | 427 | 80 | 33.538642 | 0.821738 | 0.407863 | 0 | 0.061611 | 0 | 0 | 0.126424 | 0.002695 | 0 | 0 | 0 | 0.002342 | 0 | 1 | 0.085308 | false | 0.009479 | 0.066351 | 0 | 0.265403 | 0.009479 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6de6584aa755891b400cead530c2e7d4744bf03 | 7,003 | py | Python | mltc/evaluator.py | BonnierNews/lukas-ner-model | 1f7f688f9b0f1e7b7cb66c42f188358d27a0be09 | [
"MIT"
] | null | null | null | mltc/evaluator.py | BonnierNews/lukas-ner-model | 1f7f688f9b0f1e7b7cb66c42f188358d27a0be09 | [
"MIT"
] | null | null | null | mltc/evaluator.py | BonnierNews/lukas-ner-model | 1f7f688f9b0f1e7b7cb66c42f188358d27a0be09 | [
"MIT"
] | null | null | null | from datetime import date
import numpy as np
from sklearn.metrics import (
roc_curve,
auc,
)
import torch
from torch.utils.data import DataLoader
from .metrics import accuracy_thresh, fbeta, pairwise_confusion_matrix
import pandas as pd
from tqdm import tqdm
class ModelEvaluator:
"""Class for evaluating and testing the text classification models.
Evaluation is done with labeled data whilst testing/prediction is done
with unlabeled data.
"""
def __init__(self, args, processor, model, logger):
self.args = args
self.processor = processor
self.model = model
self.logger = logger
self.device = "cpu"
self.eval_dataloader: DataLoader
def prepare_eval_data(self, file_name, parent_labels=None):
"""Creates a PyTorch Dataloader from a CSV file, which is used
as input to the classifiers.
"""
eval_examples = self.processor.get_examples(file_name, "eval", parent_labels)
eval_features = self.processor.convert_examples_to_features(
eval_examples, self.args["max_seq_length"]
)
self.eval_dataloader = self.processor.pack_features_in_dataloader(
eval_features, self.args["eval_batch_size"], "eval"
)
def evaluate(self):
"""Evaluates a classifier using labeled data.
        Calculates and returns accuracy, precision, recall, F1 score and ROC AUC.
"""
all_logits = None
all_labels = None
self.model.eval()
eval_loss, eval_accuracy, eval_f1, eval_prec, eval_rec = 0, 0, 0, 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
for batch in self.eval_dataloader:
batch = tuple(t.to(self.device) for t in batch)
input_ids, input_mask, segment_ids, label_ids, parent_labels = batch
with torch.no_grad():
# parent_labels is of boolean type if there are no parent labels
if parent_labels.dtype != torch.bool:
outputs = self.model(
input_ids,
segment_ids,
input_mask,
label_ids,
parent_labels=parent_labels,
)
else:
outputs = self.model(input_ids, segment_ids, input_mask, label_ids)
tmp_eval_loss, logits = outputs[:2]
tmp_eval_accuracy = accuracy_thresh(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
f1, prec, rec = fbeta(logits, label_ids)
eval_f1 += f1
eval_prec += prec
eval_rec += rec
if all_logits is None:
all_logits = logits.detach().cpu().numpy()
else:
all_logits = np.concatenate(
(all_logits, logits.detach().cpu().numpy()), axis=0
)
if all_labels is None:
all_labels = label_ids.detach().cpu().numpy()
else:
all_labels = np.concatenate(
(all_labels, label_ids.detach().cpu().numpy()), axis=0
)
nb_eval_examples += input_ids.size(0)
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
eval_accuracy = eval_accuracy / nb_eval_examples
eval_f1 = eval_f1 / nb_eval_steps
eval_prec = eval_prec / nb_eval_steps
eval_rec = eval_rec / nb_eval_steps
        # ROC-AUC calculation
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
confusion_matrices = []
for i in range(len(self.processor.labels)):
fpr[i], tpr[i], _ = roc_curve(all_labels[:, i], all_logits[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
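            # NOTE: column 13 is used as a hard-coded reference class for the
            # pairwise confusion matrices (assumed to be dataset-specific)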
confusion_matrices += [
pairwise_confusion_matrix(
all_logits[:, [13, i]], all_labels[:, [13, i]]
)
]
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(
all_labels.ravel(), all_logits.ravel()
)
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
result = {
"eval_loss": eval_loss,
"eval_accuracy": eval_accuracy,
"roc_auc": roc_auc,
"eval_f1": eval_f1,
"eval_prec": eval_prec,
"eval_rec": eval_rec,
# "confusion_matrices": confusion_matrices,
}
self.save_result(result)
return result
def save_result(self, result):
"""Saves the evaluation results as a text file."""
d = date.today().strftime("%Y-%m-%d")
output_eval_file = f"mltc/data/results/eval_results_{d}.txt"
with open(output_eval_file, "w") as writer:
self.logger.info("***** Eval results *****")
for key in sorted(result.keys()):
self.logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
def predict(self, file_name):
"""Makes class predicitons for unlabeled data.
Returns the estimated probabilities for each of the labels.
"""
test_examples = self.processor.get_examples(file_name, "test")
test_features = self.processor.convert_examples_to_features(
test_examples, self.args["max_seq_length"]
)
test_dataloader = self.processor.pack_features_in_dataloader(
test_features, self.args["eval_batch_size"], "test"
)
# Hold input data for returning it
input_data = [
{"id": input_example.guid, "text": input_example.text_a}
for input_example in test_examples
]
self.logger.info("***** Running prediction *****")
self.logger.info(" Num examples = %d", len(test_examples))
self.logger.info(" Batch size = %d", self.args["eval_batch_size"])
all_logits = None
self.model.eval()
for step, batch in enumerate(
tqdm(test_dataloader, desc="Prediction Iteration")
):
batch = tuple(t.to(self.device) for t in batch)
input_ids, input_mask, segment_ids = batch
with torch.no_grad():
outputs = self.model(input_ids, segment_ids, input_mask)
logits = outputs[0]
logits = logits.sigmoid()
if all_logits is None:
all_logits = logits.detach().cpu().numpy()
else:
all_logits = np.concatenate(
(all_logits, logits.detach().cpu().numpy()), axis=0
)
return pd.merge(
pd.DataFrame(input_data),
pd.DataFrame(all_logits),
left_index=True,
right_index=True,
)
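# Illustrative wiring (sketch -- args/processor/model/logger and the CSV file
# names are assumed to be provided elsewhere in the package):
#   evaluator = ModelEvaluator(args, processor, model, logger)
#   evaluator.prepare_eval_data('eval.csv')
#   metrics = evaluator.evaluate()        # loss/accuracy/F1/ROC-AUC dict
#   preds = evaluator.predict('test.csv') # DataFrame: id, text, label scores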
| 34.497537 | 87 | 0.565472 | 821 | 7,003 | 4.585871 | 0.228989 | 0.033466 | 0.01753 | 0.022311 | 0.308632 | 0.244622 | 0.214343 | 0.127224 | 0.127224 | 0.115803 | 0 | 0.005798 | 0.334999 | 7,003 | 202 | 88 | 34.668317 | 0.802663 | 0.109096 | 0 | 0.136986 | 0 | 0 | 0.057129 | 0.006185 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034247 | false | 0 | 0.054795 | 0 | 0.109589 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6df220a36694618699e66a95c74ab11a133b077 | 7,381 | py | Python | synapse/lib/datapath.py | larrycameron80/synapse | 24bf21c40b4a467e5dc28c8204aecaf502d5cddf | [
"Apache-2.0"
] | null | null | null | synapse/lib/datapath.py | larrycameron80/synapse | 24bf21c40b4a467e5dc28c8204aecaf502d5cddf | [
"Apache-2.0"
] | null | null | null | synapse/lib/datapath.py | larrycameron80/synapse | 24bf21c40b4a467e5dc28c8204aecaf502d5cddf | [
"Apache-2.0"
] | null | null | null | import collections
import xml.etree.ElementTree as x_etree
import synapse.common as s_common
import synapse.lib.syntax as s_syntax
class DataElem:
def __init__(self, item, name=None, parent=None):
self._d_name = name
self._d_item = item
self._d_parent = parent
self._d_special = {'..': parent, '.': self}
def _elem_valu(self):
return self._d_item
def _elem_step(self, step):
try:
item = self._d_item[step]
except Exception as e:
return None
return initelem(item, name=step, parent=self)
def name(self):
return self._d_name
def _elem_kids(self, step):
# Most primitives only have 1 child at a given step...
# However, we must handle the case of nested children
# during this form of iteration to account for constructs
# like XML/HTML ( See XmlDataElem )
try:
item = self._d_item[step]
except Exception as e:
return
yield initelem(item, name=step, parent=self)
def step(self, path):
'''
Step to the given DataElem within the tree.
'''
base = self
for step in self._parse_path(path):
spec = base._d_special.get(step)
if spec is not None:
base = spec
continue
base = base._elem_step(step)
if base is None:
return None
return base
def valu(self, path):
'''
Return the value of the element at the given path.
'''
if not path:
return self._elem_valu()
elem = self.step(path)
if elem is None:
return None
return elem._elem_valu()
def vals(self, path):
'''
Iterate the given path elements and yield values.
Example:
data = { 'foo':[ {'bar':'lol'}, {'bar':'heh'} ] }
root = s_datapath.initelem(data)
for elem in root.iter('foo/*/bar'):
dostuff(elem) # elem is at value "lol" and "heh"
'''
for elem in self.iter(path):
yield elem._elem_valu()
def _elem_iter(self):
# special case for dictionaries
# to iterate children and keep track
# of their names...
if type(self._d_item) == dict:
for name, item in self._d_item.items():
yield initelem((name, item), name=self.name(), parent=self)
return
if isinstance(self._d_item, int):
return
if isinstance(self._d_item, str):
return
for i, item in enumerate(self._d_item):
yield initelem(item, name=str(i), parent=self)
def _elem_search(self, step):
subs = self._elem_iter()
todo = collections.deque(subs)
while todo:
elem = todo.popleft()
#print('SEARCH: %r' % (elem.name(),))
if elem.name() == step:
yield elem
for sube in elem._elem_iter():
todo.append(sube)
def iter(self, path):
'''
Iterate sub elements using the given path.
Example:
data = { 'foo':[ {'bar':'lol'}, {'bar':'heh'} ] }
root = s_datapath.initelem(data)
for elem in root.iter('foo/*/bar'):
dostuff(elem) # elem is at value "lol" and "heh"
'''
steps = self._parse_path(path)
if not steps:
return
omax = len(steps) - 1
todo = collections.deque([(self, 0)])
while todo:
base, off = todo.popleft()
step = steps[off]
# some special syntax for "all kids" / iterables
if step == '*':
for elem in base._elem_iter():
if off == omax:
yield elem
else:
todo.append((elem, off + 1))
continue
# special "all kids with name" syntax ~foo
# (including recursive kids within kids)
# this syntax is mostly useful XML like
# hierarchical data structures.
if step[0] == '~':
for elem in base._elem_search(step[1:]):
if off == omax:
yield elem
else:
todo.append((elem, off + 1))
continue
for elem in base._elem_kids(step):
if off == omax:
yield elem
else:
todo.append((elem, off + 1))
def _parse_path(self, path):
off = 0
steps = []
plen = len(path)
while off < plen:
# eat the next (or possibly a first) slash
_, off = s_syntax.nom(path, off, ('/',))
if off >= plen:
break
if s_syntax.is_literal(path, off):
elem, off = s_syntax.parse_literal(path, off)
steps.append(elem)
continue
# eat until the next /
elem, off = s_syntax.meh(path, off, ('/',))
if not elem:
continue
steps.append(elem)
return steps
class XmlDataElem(DataElem):
def __init__(self, item, name=None, parent=None):
DataElem.__init__(self, item, name=name, parent=parent)
def _elem_kids(self, step):
#TODO possibly make step fnmatch compat?
# special case for iterating <tag> which recurses
# to find all instances of that element.
#if step[0] == '<' and step[-1] == '>':
#allstep = step[1:-1]
#todo = collections.deque(self._d_item)
#while todo:
#elem = todo.popleft()
for xmli in self._d_item:
if xmli.tag == step:
yield XmlDataElem(xmli, name=step, parent=self)
def _elem_tree(self):
todo = collections.deque([self._d_item])
while todo:
elem = todo.popleft()
yield elem
for sube in elem:
todo.append(sube)
def _elem_step(self, step):
# optional explicit syntax for dealing with colliding
# attributes and sub elements.
if step.startswith('$'):
item = self._d_item.attrib.get(step[1:])
if item is None:
return None
return initelem(item, name=step, parent=self)
for xmli in self._d_item:
if xmli.tag == step:
return XmlDataElem(xmli, name=step, parent=self)
item = self._d_item.attrib.get(step)
if item is not None:
return initelem(item, name=step, parent=self)
def _elem_valu(self):
return self._d_item.text
def _elem_iter(self):
for item in self._d_item:
yield initelem(item, name=item.tag, parent=self)
# Special Element Handler Classes
elemcls = {
x_etree.Element: XmlDataElem,
}
def initelem(item, name=None, parent=None):
'''
Construct a new DataElem from the given item using
    whichever DataElem class is most correct for the type.

    Example:

        data = {'foo': [{'bar': 'lol'}, {'bar': 'heh'}]}
        elem = initelem(data)

    '''
ecls = elemcls.get(type(item), DataElem)
return ecls(item, name=name, parent=parent)
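# Illustrative traversal of an XML tree (a minimal sketch):
#   import xml.etree.ElementTree as x_etree
#   root = initelem(x_etree.fromstring('<a><b>hi</b><b>ho</b></a>'))
#   list(root.vals('*'))     # -> ['hi', 'ho']
#   list(root.vals('~b'))    # same result via the recursive "~tag" search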
| 25.898246 | 75 | 0.520661 | 873 | 7,381 | 4.272623 | 0.200458 | 0.02815 | 0.041019 | 0.028954 | 0.407775 | 0.32252 | 0.276676 | 0.237802 | 0.237802 | 0.183914 | 0 | 0.003063 | 0.380843 | 7,381 | 284 | 76 | 25.989437 | 0.813129 | 0.22585 | 0 | 0.402685 | 0 | 0 | 0.001458 | 0 | 0 | 0 | 0 | 0.003521 | 0 | 1 | 0.127517 | false | 0 | 0.026846 | 0.020134 | 0.308725 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6e3ab9d5ea800c6d75075d9a0c1f51418a5e6e8 | 1,851 | py | Python | addon/blenderAddon/test.py | kazulagi/plantFEM | 20bf2df8202d41aa5dc25d1cc820385dabbc604f | [
"MIT"
] | 21 | 2020-06-21T08:21:44.000Z | 2022-01-13T04:28:30.000Z | addon/blenderAddon/test.py | kazulagi/plantFEM_binary | 32acf059a6d778307211718c2a512ff796b81c52 | [
"MIT"
] | 5 | 2021-05-08T05:20:06.000Z | 2022-03-25T05:39:29.000Z | addon/blenderAddon/test.py | kazulagi/plantFEM_binary | 32acf059a6d778307211718c2a512ff796b81c52 | [
"MIT"
] | 4 | 2020-10-20T18:28:59.000Z | 2021-12-15T08:35:25.000Z |
bl_info = {
"name": "plantFEM (Seed)",
"author": "Haruka Tomobe",
"version": (1, 0),
"blender": (2, 80, 0),
"location": "View3D > Add > Mesh > plantFEM Object",
"description": "Adds a new plantFEM Object",
"warning": "",
"wiki_url": "",
"category": "Add Mesh",
}
import bpy
from bpy.types import Operator
from bpy.props import FloatVectorProperty
from bpy_extras.object_utils import AddObjectHelper, object_data_add
from mathutils import Vector
class SAMPLE21_OT_CreateICOSphere(bpy.types.Operator):
bl_idname = "object.sample21_create_icosphere"
bl_label = "ICO Sphere"
bl_description = "Add ICO Sphere."
bl_options = {'REGISTER' , 'UNDO'}
def execute(self, context):
bpy.ops.mesh.primitive_ico_sphere_add()
print("Sample : Add ICO Sphere.")
return {'FINISHED'}
class SAMPLE21_OT_CreateCube(bpy.types.Operator):
bl_idname = "object.sample21_create_cube"
bl_label = "Cube"
bl_description = "Add Cube."
bl_options = {'REGISTER' , 'UNDO'}
def execute(self, context):
bpy.ops.mesh.primitive_cube_add()
print("Sample : Add Cube")
        return {'FINISHED'}
def menu_fn(self, context):
self.layout.separator()
self.layout.operator(SAMPLE21_OT_CreateICOSphere.bl_idname)
self.layout.operator(SAMPLE21_OT_CreateCube.bl_idname)
classes = [
SAMPLE21_OT_CreateICOSphere,
SAMPLE21_OT_CreateCube,
]
def register():
for c in classes:
bpy.utils.register_class(c)
bpy.types.VIEW3D_MT_mesh_add.append(menu_fn)
print("クラスを二つ使用するサンプルアドオンが有効化されました。")
def unregister():
bpy.types.VIEW3D_MT_mesh_add.remove(menu_fn)
for c in classes:
bpy.utils.unregister_class(c)
print("クラスを二つ使用するサンプルアドオンが無効化されました。")
if __name__ == "__main__":
register()
| 26.070423 | 68 | 0.670989 | 222 | 1,851 | 5.342342 | 0.364865 | 0.05059 | 0.063238 | 0.030354 | 0.298482 | 0.251265 | 0.177066 | 0.177066 | 0.102867 | 0.102867 | 0 | 0.0171 | 0.210157 | 1,851 | 70 | 69 | 26.442857 | 0.794118 | 0 | 0 | 0.148148 | 0 | 0 | 0.22 | 0.062162 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092593 | false | 0 | 0.092593 | 0 | 0.388889 | 0.074074 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6e6226b3673f32b61f24bf4e754297739bd3b2c | 1,295 | py | Python | gui1.py | aseemk11/Attendance-system-using-Face-recognization- | b7ca08a868c8b41d776eaa8e0c6ada7ab23df040 | [
"Apache-2.0"
] | null | null | null | gui1.py | aseemk11/Attendance-system-using-Face-recognization- | b7ca08a868c8b41d776eaa8e0c6ada7ab23df040 | [
"Apache-2.0"
] | null | null | null | gui1.py | aseemk11/Attendance-system-using-Face-recognization- | b7ca08a868c8b41d776eaa8e0c6ada7ab23df040 | [
"Apache-2.0"
] | null | null | null | from tkinter import *
import os
main = Tk()
main.geometry('{}x{}'.format(550, 550))
main.wm_title("Welcome to Face Recognition Based Attendance System")
svalue3 = StringVar()  # defines the widget state as string
svalue2 = StringVar()
#imagePath = PhotoImage(file="facerec.png")
#widgetf = Label(main, image=imagePath).pack(side="bottom")
#imagePath1 = PhotoImage(file="efylogo.png")
#widgetf = Label(main, image=imagePath1).pack(side="top")
comments = """ Developed and Design by Aseem Kanungo"""
widgets = Label(main,
justify=CENTER,
padx = 10,
text=comments).pack(side="bottom")
w = Entry(main,textvariable=svalue3) # adds a textarea widget
w.pack()
w.place(x=200,y=75)
def fisher_dataset_button_fn():
scholarid= svalue3.get()
os.system('python 01_face_dataset.py {0}'.format(scholarid))
def camera(*args):
camerano= svalue2.get()
os.system('python 01_face_dataset.py {0}'.format(camerano))
train_database_button = Button(main,text="Scholar ID", command=fisher_dataset_button_fn, justify=CENTER, padx = 10)
train_database_button.pack()
train_database_button.place(x=200, y=110)
a=[0,1]
popupMenu = OptionMenu(main, svalue2, *a)
Label(main, text="Choose a Camera").place(x=250, y=150)
popupMenu.place(x=250,y=160)
main.mainloop()
| 26.428571 | 115 | 0.713514 | 185 | 1,295 | 4.902703 | 0.502703 | 0.039691 | 0.062845 | 0.041896 | 0.13892 | 0.085998 | 0.085998 | 0.085998 | 0.085998 | 0.085998 | 0 | 0.04375 | 0.135135 | 1,295 | 48 | 116 | 26.979167 | 0.766071 | 0.197683 | 0 | 0 | 0 | 0 | 0.178295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.068966 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6e750c2e34d4d5c52e114e9bd8b810a1a5e2430 | 2,586 | py | Python | src/tiden/apps/zookeeper/zookeeper_utils.py | mshonichev/example_pkg | 556a703fe8ea4a7737b8cae9c5d4d19c1397a70b | [
"Apache-2.0"
] | 14 | 2020-06-05T09:30:42.000Z | 2022-01-19T00:26:48.000Z | src/tiden/apps/zookeeper/zookeeper_utils.py | mshonichev/example_pkg | 556a703fe8ea4a7737b8cae9c5d4d19c1397a70b | [
"Apache-2.0"
] | 6 | 2020-06-09T14:05:21.000Z | 2021-03-18T13:55:15.000Z | src/tiden/apps/zookeeper/zookeeper_utils.py | mshonichev/example_pkg | 556a703fe8ea4a7737b8cae9c5d4d19c1397a70b | [
"Apache-2.0"
] | 1 | 2020-06-09T13:53:15.000Z | 2020-06-09T13:53:15.000Z | #!/usr/bin/env python3
#
# Copyright 2017-2020 GridGain Systems.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from threading import Thread
# from random import choices
from ...util import log_print, util_sleep
from .zookeeper import Zookeeper
class ZkNodesRestart(Thread):
def __init__(self, zk, nodes_amount):
super().__init__()
self.setDaemon(True)
# self.zk: Zookeeper = zk
self.zk = zk
self.nodes_amount = nodes_amount
self.running = True
self.order = 'seq'
self.restart_timeout = 5
def stop(self):
log_print('Interrupting ZK nodes restarting thread', color='red')
self.running = False
def run(self):
log_print('Starting ZK nodes restarts', color='green')
while self.running:
for node_id in self.__get_nodes_to_restart():
log_print('Killing ZK node {}'.format(node_id), color='debug')
self.zk.kill_node(node_id)
util_sleep(self.restart_timeout)
log_print('Starting ZK node {}'.format(node_id), color='debug')
self.zk.start_node(node_id)
def set_params(self, **kwargs):
self.order = kwargs.get('order', self.order)
self.restart_timeout = kwargs.get('restart_timeout', self.restart_timeout)
self.nodes_amount = kwargs.get('nodes_amount', self.nodes_amount)
log_print('Params set to:\norder={}\nrestart_timeout={}\nnodes_amount={}'
.format(self.order, self.restart_timeout, self.nodes_amount))
def __get_nodes_to_restart(self):
zk_nodes = list(self.zk.nodes.keys())
zk_nodes = zk_nodes[:self.nodes_amount]
        # uncomment this when Python >= 3.6 is in use (random.choices was
        # added in 3.6):
        # if self.order == 'rand':
        #     zk_nodes = choices(zk_nodes[:self.nodes_amount])
return zk_nodes
def __enter__(self):
self.start()
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
self.join()
if exc_type and exc_val and exc_tb:
raise Exception(exc_tb)
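# Illustrative use as a context manager (``zk`` is an assumed Zookeeper app
# instance and ``run_test_scenario`` a hypothetical workload):
#   restarter = ZkNodesRestart(zk, nodes_amount=1)
#   restarter.set_params(order='seq', restart_timeout=10)
#   with restarter:
#       run_test_scenario()  # ZK nodes keep being killed/restarted meanwhile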
| 34.026316 | 82 | 0.656999 | 351 | 2,586 | 4.635328 | 0.407407 | 0.043024 | 0.055317 | 0.019668 | 0.13153 | 0.08236 | 0.041795 | 0.041795 | 0.041795 | 0 | 0 | 0.008155 | 0.241299 | 2,586 | 75 | 83 | 34.48 | 0.821101 | 0.29041 | 0 | 0 | 0 | 0 | 0.119074 | 0.027563 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0 | 0.073171 | 0 | 0.292683 | 0.146341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6e87832836db722cb4f55a3e89485349b743a75 | 859 | py | Python | LaneTracking-master/main.py | cmflannery/openvision | 884ae95325dba1db0d428179796efe03964d5f5b | [
"MIT"
] | null | null | null | LaneTracking-master/main.py | cmflannery/openvision | 884ae95325dba1db0d428179796efe03964d5f5b | [
"MIT"
] | null | null | null | LaneTracking-master/main.py | cmflannery/openvision | 884ae95325dba1db0d428179796efe03964d5f5b | [
"MIT"
] | null | null | null | from __future__ import division
import cv2
import track
import detect
def main(video_path):
cap = cv2.VideoCapture(video_path)
ticks = 0
lt = track.LaneTracker(2, 0.1, 500)
ld = detect.LaneDetector(180)
while cap.isOpened():
precTick = ticks
ticks = cv2.getTickCount()
dt = (ticks - precTick) / cv2.getTickFrequency()
ret, frame = cap.read()
predicted = lt.predict(dt)
lanes = ld.detect(frame)
if predicted is not None:
cv2.line(frame, (predicted[0][0], predicted[0][1]), (predicted[0][2], predicted[0][3]), (0, 0, 255), 5)
cv2.line(frame, (predicted[1][0], predicted[1][1]), (predicted[1][2], predicted[1][3]), (0, 0, 255), 5)
lt.update(lanes)
cv2.imshow('', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # release the capture and close any OpenCV windows once the loop ends
    cap.release()
    cv2.destroyAllWindows()
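# Example entry point (the video path is a placeholder):
#   if __name__ == '__main__':
#       main('lane_video.mp4')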
| 23.861111 | 115 | 0.571595 | 114 | 859 | 4.254386 | 0.438596 | 0.082474 | 0.049485 | 0.086598 | 0.028866 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076555 | 0.270081 | 859 | 35 | 116 | 24.542857 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0.001164 | 0 | 0 | 0 | 0.004657 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6e8a2cccf8950f95f736755d08af27a34195960 | 368 | py | Python | app_server/config.py | brdavs/faker-trader | 6681bb65bab6d024e2f9dffec62a7decf64c76cd | [
"MIT"
] | null | null | null | app_server/config.py | brdavs/faker-trader | 6681bb65bab6d024e2f9dffec62a7decf64c76cd | [
"MIT"
] | null | null | null | app_server/config.py | brdavs/faker-trader | 6681bb65bab6d024e2f9dffec62a7decf64c76cd | [
"MIT"
] | null | null | null | SEECRET = 'Extremely secretive seecret. You could not guess this one if your life depended on it.'
DATABASE = 'db/data.db'
DATABASE_PRICES = 'db/prices.db'
SESSION_TTL = 240
WEBSOCKETS_PORT= 7334
WEBSOCKETS_URI = 'ws://localhost:' + str(WEBSOCKETS_PORT)
DEFAULT_LEDGER = {
'value': 10000,
'asset_id': 1
}
DEFAULT_COIN_PRICE = 500
RECORDS_FOR_TIMEFRAME = 260 | 24.533333 | 98 | 0.73913 | 53 | 368 | 4.924528 | 0.811321 | 0.061303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061093 | 0.154891 | 368 | 15 | 99 | 24.533333 | 0.778135 | 0 | 0 | 0 | 0 | 0 | 0.368564 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6ec2af6465219bbbd1c415984736e30d5b7e368 | 2,160 | py | Python | flarmfuncs.py | acasadoalonso/RealTimeScoring | 843fd151a9a963851f79549b8c9117ac37779578 | [
"MIT"
] | null | null | null | flarmfuncs.py | acasadoalonso/RealTimeScoring | 843fd151a9a963851f79549b8c9117ac37779578 | [
"MIT"
] | null | null | null | flarmfuncs.py | acasadoalonso/RealTimeScoring | 843fd151a9a963851f79549b8c9117ac37779578 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import MySQLdb
from ognddbfuncs import getognchk
unkglider = []
def getflarmid(conn, registration): # get the FLARMID from the GLIDERS table on the database
cursG = conn.cursor() # set the cursor for searching the devices
try:
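        # NOTE: the registration is interpolated straight into the SQL text;
        # a parameterized query would be safer against SQL injection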
cursG.execute("select idglider, flarmtype from GLIDERS where registration = '"+registration+"' ;")
except MySQLdb.Error as e:
try:
print(">>>MySQL Error [%d]: %s" % (e.args[0], e.args[1]))
except IndexError:
print(">>>MySQL Error: %s" % str(e))
print(">>>MySQL error:", "select idglider, flarmtype from GLIDERS where registration = '"+registration+"' ;")
print(">>>MySQL data :", registration)
return("NOREG")
rowg = cursG.fetchone() # look for that registration on the OGN database
if rowg is None:
return("NOREG")
idglider = rowg[0] # flarmid to report
flarmtype = rowg[1] # flarmtype flarm/ica/ogn
if not getognchk(idglider): # check that the registration is on the table - sanity check
if idglider not in unkglider:
print("Warning: FLARM ID=", idglider, "not on OGN DDB")
unkglider.append(idglider)
if flarmtype == 'F':
flarmid = "FLR"+idglider # flarm
elif flarmtype == 'I':
flarmid = "ICA"+idglider # ICA
elif flarmtype == 'O':
flarmid = "OGN"+idglider # ogn tracker
else:
flarmid = "RND"+idglider # undefined
#print "GGG:", registration, rowg, flarmid
return (flarmid)
# -----------------------------------------------------------
def chkflarmid(idglider):  # check if the FLARM ID exists; if not, add it to the unkglider table
    glider = idglider[3:9]  # only the last 6 chars of the ID (skip the 3-char FLR/ICA/OGN prefix)
if not getognchk(glider): # check that the registration is on the table - sanity check
if idglider not in unkglider:
print("Warning: FLARM ID=", idglider, "not on OGN DDB")
unkglider.append(idglider)
return (False)
return (True)
# -----------------------------------------------------------
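# Illustrative usage (connection parameters and the registration are
# placeholders):
#   conn = MySQLdb.connect(host='localhost', user='ogn', passwd='...',
#                          db='flarm')
#   getflarmid(conn, 'D-KCFB')   # -> e.g. 'FLR123456', or 'NOREG' if unknown
#   chkflarmid('FLR123456')      # -> True when the ID is in the OGN DDB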
| 44.081633 | 121 | 0.572685 | 246 | 2,160 | 5.028455 | 0.373984 | 0.016168 | 0.036378 | 0.043654 | 0.315279 | 0.315279 | 0.315279 | 0.315279 | 0.21342 | 0.21342 | 0 | 0.00507 | 0.269444 | 2,160 | 48 | 122 | 45 | 0.778834 | 0.28287 | 0 | 0.243902 | 0 | 0 | 0.189295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04878 | false | 0 | 0.04878 | 0 | 0.170732 | 0.146341 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6ec4b20c2057ab10ad8e413862f2137de3b2c8f | 1,916 | py | Python | sf2_webapp/__main__.py | EdinburghGenomics/sf2-webapp | 7deab0a6513f8586646e16edacbc19bcfc3b18ad | [
"MIT"
] | null | null | null | sf2_webapp/__main__.py | EdinburghGenomics/sf2-webapp | 7deab0a6513f8586646e16edacbc19bcfc3b18ad | [
"MIT"
] | null | null | null | sf2_webapp/__main__.py | EdinburghGenomics/sf2-webapp | 7deab0a6513f8586646e16edacbc19bcfc3b18ad | [
"MIT"
] | null | null | null | """Edinburgh Genomics Online SF2 web application.
examples:
To start the tornado server:
$ start_sf2_webapp
More information is available at:
- http://gitlab.genepool.private/hdunnda/sf2-webapp
"""
__version__="0.0.1"
import os
import tornado.options
from tornado.options import define, options
import sf2_webapp.controller
import sf2_webapp.config
import sf2_webapp.database
define("dbconfig", default=None, help="Path to the database configuration file", type=str)
define("webconfig", default=None, help="Path to the web configuration file", type=str)
define("emailconfig", default=None, help="Path to the email configuration file", type=str)
define("loggingconfig", default=None, help="Path to the logging configuration file", type=str)
define("enable_cors", default=False, help="Flag to indicate that CORS should be enabled", type=bool)
def main(): # type: () -> None
"""Command line entry point for the web application"""
tornado.options.parse_command_line()
assert (options.dbconfig is None or os.path.exists(options.dbconfig)), 'Error: database configuration file ' + str(options.dbconfig) + ' not found.'
assert (options.webconfig is None or os.path.exists(options.webconfig)), 'Error: web configuration file ' + str(options.webconfig) + ' not found.'
assert (options.emailconfig is None or os.path.exists(options.emailconfig)), 'Error: email configuration file ' + str(options.emailconfig) + ' not found.'
assert (options.loggingconfig is None or os.path.exists(options.loggingconfig)), 'Error: logging configuration file ' + str(options.loggingconfig) + ' not found.'
sf2_webapp.controller.run(
enable_cors=options.enable_cors,
db_config_fp=options.dbconfig,
web_config_fp=options.webconfig,
email_config_fp=options.emailconfig,
logging_config_fp=options.loggingconfig
)
if __name__ == "__main__":
main()
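# Example (config paths are placeholders):
#   start_sf2_webapp --dbconfig=db.yaml --webconfig=web.yaml --enable_cors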
| 34.214286 | 166 | 0.740605 | 252 | 1,916 | 5.507937 | 0.305556 | 0.097983 | 0.043228 | 0.054755 | 0.233429 | 0.146974 | 0.07781 | 0 | 0 | 0 | 0 | 0.00612 | 0.147182 | 1,916 | 55 | 167 | 34.836364 | 0.843329 | 0.138831 | 0 | 0 | 0 | 0 | 0.263126 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 1 | 0.037037 | false | 0 | 0.222222 | 0 | 0.259259 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6f068d6c5eae56035d6b920c42bb8ee0fe1c621 | 1,705 | py | Python | ppcd/utils/loss_compute.py | geoyee/PdRSCD | 4a1a7256320f006c15e3e5b5b238fdfba8198853 | [
"Apache-2.0"
] | 44 | 2021-04-21T02:41:55.000Z | 2022-03-09T03:01:16.000Z | ppcd/utils/loss_compute.py | MinZHANG-WHU/PdRSCD | 612976225201d78adc7ff99529ada17b41fedc5d | [
"Apache-2.0"
] | 2 | 2021-09-30T07:52:47.000Z | 2022-02-12T09:05:35.000Z | ppcd/utils/loss_compute.py | MinZHANG-WHU/PdRSCD | 612976225201d78adc7ff99529ada17b41fedc5d | [
"Apache-2.0"
] | 6 | 2021-07-23T02:18:39.000Z | 2022-01-14T01:15:50.000Z | # import paddle
def check_logits_losses(logits_list, losses):
# 自动权重和衰减
if 'ceof' not in losses.keys():
losses['ceof'] = [1] * len(losses['type'])
if 'decay' not in losses.keys():
losses['decay'] = [1] * len(losses['type'])
if len(losses['type']) == len(losses['ceof']) and \
len(losses['type']) == len(losses['decay']):
len_logits = len(logits_list)
len_losses = len(losses['type'])
if len_logits != len_losses:
            raise RuntimeError(
                'The length of logits_list should equal the number of '
                'configured loss types: {} != {}.'.format(len_logits, len_losses))
    else:
        raise RuntimeError(
            "The losses' type/ceof/decay lists must have equal lengths.")
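# Example of the expected ``losses`` dict (illustrative -- any callables of
# the form ``loss(logits, label)`` work; note the key really is spelled
# 'ceof' in this codebase):
#   losses = {
#       'type': [bce_loss, dice_loss],
#       'ceof': [1.0, 0.5],
#       'decay': [1.0, 0.9],
#   }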
def loss_computation(logits_list, labels, losses, epoch=None, batch=None):
check_logits_losses(logits_list, losses)
loss_list = []
lab_m = False
if len(labels) > 1:
lab_m = True
if len(labels) != len(logits_list):
raise RuntimeError(
                'The length of logits_list should equal the number of labels: '
                '{} != {}.'.format(len(logits_list), len(labels)))
for i in range(len(logits_list)):
logits = logits_list[i]
coef_i = losses['ceof'][i]
loss_i = losses['type'][i]
        label_i = labels[i] if lab_m else labels[0]  # multi-label loss
if epoch != None and (epoch != 0 and batch == 0):
decay_i = losses['decay'][i] ** epoch
# print(decay_i)
loss_list.append(decay_i * coef_i * loss_i(logits, label_i))
else:
loss_list.append(coef_i * loss_i(logits, label_i))
return loss_list | 39.651163 | 96 | 0.567155 | 222 | 1,705 | 4.171171 | 0.225225 | 0.118791 | 0.070194 | 0.048596 | 0.37581 | 0.228942 | 0.157667 | 0.110151 | 0.110151 | 0.110151 | 0 | 0.005029 | 0.300293 | 1,705 | 43 | 97 | 39.651163 | 0.771165 | 0.024633 | 0 | 0.111111 | 0 | 0 | 0.149041 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6f151a961ded81e75f0e5f337aec384c9bf1232 | 28,150 | py | Python | aiida_bigdft/PyBigDFT/BigDFT/LRTDDFT.py | adegomme/aiida-bigdft-plugin | dfd17f166a8cd547d3e581c7c3c9f4eb32bd2aab | [
"MIT"
] | 2 | 2020-06-10T02:45:59.000Z | 2020-08-05T18:55:05.000Z | aiida_bigdft/PyBigDFT/BigDFT/LRTDDFT.py | mikiec84/aiida-bigdft-plugin | ce6ddc69def97977fe0209861ea7f1637090b60f | [
"MIT"
] | 6 | 2019-12-15T19:35:34.000Z | 2021-05-07T15:32:18.000Z | aiida_bigdft/PyBigDFT/BigDFT/LRTDDFT.py | mikiec84/aiida-bigdft-plugin | ce6ddc69def97977fe0209861ea7f1637090b60f | [
"MIT"
] | 1 | 2020-08-05T18:55:21.000Z | 2020-08-05T18:55:21.000Z |
import numpy as np
from futile.Utils import write
HaeV = 27.21138386
def _occ_and_virt(log):
"""
Extract the number of occupied and empty orbitals from a logfile
"""
norb = log.log['Total Number of Orbitals']
if log.log['Spin treatment'] == 'Averaged':
norbv = log.evals[0].info[0]-norb
return (norb,), (norbv,)
elif log.log['Spin treatment'] == 'Collinear':
mpol = log.log['dft']['mpol']
norbu = int((norb+mpol)/2)
norbd = norb-norbu
norbvu = log.evals[0].info[0]-norbu
norbvd = log.evals[0].info[0]-norbd
return (norbu, norbd), (norbvu, norbvd)
else:
        raise ValueError('Orbital information for this spin treatment is not yet implemented')
def transition_indexes(np, nalpha, indexes):
"""
Returns the list of the indices in the bigdft convention that correspond
to the couple iorb-ialpha with given spin.
Args:
np (tuple): (norbu,norbd) occupied orbitals: when of length 1 assumed
spin averaged
        nalpha (tuple): (norbu, norbd) virtual orbitals: when of length 1
            assumed spin averaged
indexes (list): list of tuples of (iorb,ialpha,ispin) desired indices
in python convention (start from 0)
"""
nspin = len(np)
inds = []
for iorb, ialpha, ispin in indexes:
jspin = ispin if nspin == 2 else 0
ind = ialpha+iorb*nalpha[jspin] # local index of the spin subspace
if ispin == 1:
ind += np[0]*nalpha[0] # spin 2 comes after spin one
inds.append(ind)
return inds
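# Worked example (spin-averaged case: np=(2,) occupied, nalpha=(3,) virtual):
#   transition_indexes((2,), (3,), [(0, 0, 0), (1, 2, 1)])  ->  [0, 11]
# (the second couple sits in the spin-1 block, shifted by np[0]*nalpha[0] = 6,
#  i.e. 2 + 1*3 + 6 = 11)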
def _collection_indexes(np, nvirt_small):
harvest = []
for ispin in [0, 1]:
jspin = ispin if len(np) == 2 else 0
for ip in range(np[jspin]):
for ialpha in range(nvirt_small[jspin]):
harvest.append([ip, ialpha, ispin])
return harvest
def _collection_indexes_iocc(iocc, nvirt, spin=None):
"""
    For the occupied orbital ``iocc`` and the selected spin, provide the list
    of (iocc, ialpha, ispin) couples up to nvirt.
    If spin is None, provide the list for both values of the spin.
"""
harvest = []
for ispin in [0, 1]:
jspin = ispin if len(nvirt) == 2 else 0
if spin is not None and ispin != spin:
continue
for ialpha in range(nvirt[jspin]):
harvest.append([iocc, ialpha, ispin])
return harvest
class TransitionMatrix(np.matrix):
"""
Matrix of Transition Quantities, might be either :class:`CouplingMatrix`
or :class:`TransitionMultipoles`
Args:
matrix (matrix-like): data of the coupling matrix. If present also
the number of orbitals should be provided.
        norb_occ (tuple): number of occupied orbitals per spin channel.
            Compulsory if ``matrix`` is specified.
        norb_virt (tuple): number of empty orbitals per spin channel.
Compulsory if ``matrix`` is specified.
log (Logfile): Instance of the logfile from which the coupling matrix
calculation is performed. Automatically retrieves the ``norb_occ``
and `norb_virt`` parameters. When ``log`` parameter is present the
parameter ``matrix`` is ignored.
Raises:
ValueError: if the file of the coupling matrix indicated by ``log``
does not exists
"""
def __new__(cls, matrix=None, norb_occ=None, norb_virt=None, log=None):
"""
Create the object from the arguments and return the ``self`` instance
"""
import os
if log is not None:
datadir = log.log.get('radical', '')
datadir = 'data-'+datadir if len(datadir) > 0 else 'data'
cmfile = os.path.join(log.srcdir, datadir, cls._filename)
if not os.path.isfile(cmfile):
raise ValueError('The file "'+cmfile+'" does not exist')
norb, norbv = _occ_and_virt(log)
write('Loading data with ', norb, ' occupied and ',
norbv, ' empty states, from file "', cmfile, '"')
try:
import pandas as pd
write('Using pandas:')
mat = pd.read_csv(cmfile, delim_whitespace=True, header=None)
except ImportError:
write('Using numpy:')
mat = np.loadtxt(cmfile)
write('done')
else:
mat = matrix
return super(TransitionMatrix, cls).__new__(cls, mat)
def __init__(self, *args, **kwargs):
"""
Perform sanity checks on the loaded matrix
"""
log = kwargs.get('log')
if log is not None:
self.norb_occ, self.norb_virt = _occ_and_virt(log)
else:
self.norb_occ = kwargs.get('norb_occ')
self.norb_virt = kwargs.get('norb_virt')
assert(self.shape[0] == self._total_transitions())
write("Shape is conformal with the number of orbitals")
self._sanity_check()
def _total_transitions(self):
ntot = 0
for no, nv in zip(self.norb_occ, self.norb_virt):
ntot += no*nv
if len(self.norb_occ) == 1:
ntot *= 2
return ntot
def _subindices(self, norb_occ, norb_virt):
for i, (no, nv) in enumerate(zip(norb_occ, norb_virt)):
assert(no <= self.norb_occ[i] and nv <= self.norb_virt[i])
harvest = _collection_indexes(norb_occ, norb_virt)
return np.array(transition_indexes(norb_occ, self.norb_virt, harvest))
def _sanity_check(self):
pass
class CouplingMatrix(TransitionMatrix):
"""
Casida Coupling Matrix, extracted from the calculation performed by BigDFT
"""
_filename = 'coupling_matrix.txt'
def _sanity_check(self):
write('Casida Matrix is symmetric',
np.allclose(self, self.T, atol=1.e-12))
def subportion(self, norb_occ, norb_virt):
"""Extract a subportion of the coupling matrix.
Returns a Coupling Matrix which is made by only considering the first
``norb_occ`` and ``norb_virt`` orbitals
Args:
norb_occ (tuple): new number of occupied orbitals. Must be lower
that the instance value
norb_virt (tuple): new number of virtual orbitals. Must be lower
that the instance value
"""
inds = self._subindices(norb_occ, norb_virt)
mat = np.array([row[0, inds] for row in self[inds]])
return CouplingMatrix(matrix=mat, norb_occ=norb_occ,
norb_virt=norb_virt)
def diagonalize(self):
"""
Diagonalize the Coupling Matrix
Returns:
(np.matrix, np.matrix):
            tuple of the eigenvalues and eigenvectors of the coupling matrix,
as returned by :meth:`numpy.linalg.eigh`. We perform the
transpose of the matrix with eigenvectors to have them sorted as
row vectors
"""
write('Diagonalizing Coupling matrix of shape', self.shape)
E2, C_E2 = np.linalg.eigh(self)
write('Eigensystem solved')
C_E2 = C_E2.T
return E2, C_E2
class TransitionMultipoles(TransitionMatrix):
"""
Transition dipoles, extracted from the calculation performed by BigDFT
"""
_filename = 'transition_quantities.txt'
def subportion(self, norb_occ, norb_virt):
"""Extract a subportion of the Transition Multipoles.
Returns a set of transition multipoles which is made by only
considering the first ``norb_occ`` and ``norb_virt`` orbitals
Args:
norb_occ (tuple): new number of occupied orbitals. Must be lower
that the instance value
norb_virt (tuple): new number of virtual orbitals. Must be lower
that the instance value
Returns:
TransitionMultipoles: reduced transition multipoles
"""
inds = self._subindices(norb_occ, norb_virt)
mat = np.array(self[inds])
return TransitionMultipoles(matrix=mat, norb_occ=norb_occ,
norb_virt=norb_virt)
def get_transitions(self):
"""
Get the transition quantities as the dimensional objects which should
contribute to the oscillator strengths.
Returns:
numpy.array: Transition quantities multiplied by the square root of
the unperturbed transition energy
"""
newdipole = []
for line in self:
newdipole.append(np.ravel(line[0, 0]*line[0, 1:]))
return np.array(newdipole)
class TransitionDipoles(TransitionMultipoles):
"""
Transition dipoles as provided in the version of the code < 1.8.0.
Deprecated, to be used in some particular cases
"""
_filename = 'transition_dipoles.txt'
def get_transitions(self):
return self
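# Illustrative construction of the excited states (``log`` is an assumed
# BigDFT Logfile instance from a completed Casida calculation):
#   cm = CouplingMatrix(log=log)
#   tm = TransitionMultipoles(log=log)
#   exc = Excitations(cm, tm)
#   energies_eV = HaeV * np.sqrt(exc.eigenvalues)  # excitation energies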
class Excitations():
"""LR Excited states of a system
Definition of the excited states in the Casida Formalism
Args:
cm (CouplingMatrix): the matrix of coupling among transitions
tm (TransitionMultipoles): scalar product of multipoles among
transitions
"""
def __init__(self, cm, tm):
self.cm = cm
self.tm = tm
self.eigenvalues, self.eigenvectors = cm.diagonalize()
        #: array: transition quantities coming from the multipoles
self.transitions = tm.get_transitions()
scpr = np.array(np.dot(self.eigenvectors, self.transitions))
        #: array: oscillator strengths components of the transitions, defined
        #  as the square of $\int w_a(\mathbf r)\, r_i\, \mathrm{d}\mathbf r$
self.oscillator_strenghts = np.array([t**2 for t in scpr[:, 0:3]])
        #: array: average of all the components of the OS
self.avg_os = np.average(self.oscillator_strenghts, axis=1)
self.alpha_prime = 2.0*self.oscillator_strenghts / \
self.eigenvalues[:, np.newaxis]
""" array: elements of the integrand of the statical polarizability in
the space of the excitations """
self._indices_for_spin_comparison = \
self._get_indices_for_spin_comparison()
self.identify_singlet_and_triplets(1.e-5)
def _get_indices_for_spin_comparison(self):
inds = [[], []]
inds0 = []
# get the indices for comparison, take the minimum between the spins
if len(self.cm.norb_occ) == 1:
nocc = self.cm.norb_occ[0]
nvirt = self.cm.norb_virt[0]
nos = [nocc, nocc]
nvs = [nvirt, nvirt]
else:
nocc = min(self.cm.norb_occ)
nvirt = min(self.cm.norb_virt)
nos = self.cm.norb_occ
nvs = self.cm.norb_virt
for ispin in [0, 1]:
for a in range(nvirt):
for p in range(nocc):
inds[ispin].append([p, a, ispin])
for a in range(nvirt, nvs[ispin]):
for p in range(nocc, nos[ispin]):
inds0.append([p, a, ispin])
transA = transition_indexes(
self.cm.norb_occ, self.cm.norb_virt, inds[0])
transB = transition_indexes(
self.cm.norb_occ, self.cm.norb_virt, inds[1])
trans0 = transition_indexes(self.cm.norb_occ, self.cm.norb_virt, inds0)
return transA, transB, trans0
def spectrum_curves(self, omega, slice=None, weights=None):
"""Calculate spectrum curves.
Provide the set of the curves associated to the weights. The resulting
curves might then be used to draw the excitation spectra.
Args:
omega (array): the linspace used for the plotting, of shape
``(n,)``. Must be provided in Atomic Units
            slice (array): the lookup array that has to be considered. If not
                provided, the entire range is assumed
            weights (array): the set of arrays used to weight the spectra. Must
                have shape ``(rank,m)``, where ``rank`` is equal to the number
                of eigenvalues. If ``m`` is absent it is assumed to be 1. When
                not specified, it defaults to the average oscillator strengths.
Returns:
array: a set of spectrum curves, of shape equal to ``(n,m)``,
where ``n`` is the shape of ``omega`` and ``m`` the size of the
second dimension of ``weights``.
"""
if slice is None:
oo = self.eigenvalues[:, np.newaxis] - omega**2
wgts = weights if weights is not None else self.avg_os
else:
oo = self.eigenvalues[slice, np.newaxis] - omega**2
oo = oo[0]
wgts = weights if weights is not None else self.avg_os[slice]
return np.dot(2.0/oo.T, wgts)
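    # A minimal usage sketch (hypothetical variables; ``ex`` is an Excitations
    # instance): the broadened absorption profile is the imaginary part of the
    # curves evaluated on a complex frequency grid, as plot_alpha does below.
    #     >>> omega = np.linspace(0.0, 1.0, 2000) + 2.0e-2j
    #     >>> curve = ex.spectrum_curves(omega)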
def identify_singlet_and_triplets(self, tol=1.e-5):
"""
Find the lookup tables that select the singlets and the triplets among
the excitations
Args:
tol (float): tolerance to be applied to recognise the spin character
"""
sings = []
trips = []
for exc in range(len(self.eigenvalues)):
sing, trip = self.project_on_spin(exc, tol)
if sing:
sings.append(exc)
if trip:
trips.append(exc)
if len(sings) > 0:
self.singlets = (np.array(sings),)
"""array: lookup table of the singlet excitations"""
if len(trips) > 0:
self.triplets = (np.array(trips),)
"""array: lookup table of the triplet excitations"""
def _project_on_occ(self, exc):
"""
Project a given eigenvector on the occupied orbitals.
In the spin averaged case consider all the spin indices nonetheless
"""
norb_occ = self.cm.norb_occ
norb_virt = self.cm.norb_virt
pProj_spin = []
for ispin, norb in enumerate(norb_occ):
pProj = np.zeros(norb)
for iorb in range(norb):
harvest = _collection_indexes_iocc(
iorb, self.cm.norb_virt, spin=None if len(norb_occ) == 1
else ispin)
inds = np.array(transition_indexes(
norb_occ, norb_virt, harvest))
pProj[iorb] = np.sum(np.ravel(self.eigenvectors[exc, inds])**2)
pProj_spin.append(pProj)
return pProj_spin
def project_on_spin(self, exc, tol=1.e-8):
"""
        Check whether an excitation has a singlet or triplet character
        Args:
            exc (int): index of the excitation to be checked
            tol (float): tolerance on the norms used to assign the character
Returns:
tuple (bool,bool): couple of booleans indicating if the excitation
is a singlet or a triplet respectively
"""
A, B, zero = [np.ravel(self.eigenvectors[exc, ind])
for ind in self._indices_for_spin_comparison]
issinglet = np.linalg.norm(A-B) < tol
istriplet = np.linalg.norm(A+B) < tol
return issinglet, istriplet
# print (self.eigenvalues[exc], np.linalg.norm(A), np.linalg.norm(B),
# A-B,A+B, np.linalg.norm(zero))
def _get_threshold(self, pProj_spin, th_energies, tol):
"""
        Identify the energy associated with the threshold of a given
        excitation. The tolerance discriminates the occupied components that
        actually contribute.
"""
ths = -1.e100
for proj, en in zip(pProj_spin, th_energies):
norb = len(en)
pProj = proj.tolist()
pProj.reverse()
imax = norb-1
for val in pProj:
if val > tol:
break
imax -= 1
ths = max(ths, en[imax])
return ths
def split_excitations(self, evals, tol, nexc='all'):
"""Separate the excitations in channels.
        This method classifies the excitations according to the channel they
        belong to, and determines whether a given excitation can be considered
        as belonging to the discrete part of the spectrum or not.
Args:
evals (BandArray): the eigenvalues as they are provided
(for instance) from a `Logfile` class instance.
tol (float): tolerance for determining the threshold
nexc (int,str): number of excitations to be analyzed.
If ``'all'`` then the entire set of excitations are analyzed.
"""
self.determine_occ_energies(evals)
self.identify_thresholds(self.occ_energies, tol, len(
self.eigenvalues) if nexc == 'all' else nexc)
def identify_thresholds(self, occ_energies, tol, nexc):
"""Identify the thresholds per excitation.
For each of the first ``nexc`` excitations, identify the energy value
of its corresponding threshold. This value is determined by projecting
the excitation components on the occupied states and verifying that
their norm for the highest energy level is below a given tolerance.
Args:
occ_energies (tuple of array-like): contains the list of the
energies of the occupied states per spin channel
tol (float): tolerance for determining the threshold
nexc (int): number of excitations to be analyzed
"""
# : Norm of the $w_p^a$ states associated to each excitation
self.wp_norms = []
threshold_energies = []
for exc in range(nexc):
proj = self._project_on_occ(exc)
self.wp_norms.append(proj)
threshold_energies.append(
self._get_threshold(proj, occ_energies, tol))
# : list: identified threshold for inspected excitations
self.threshold_energies = np.array(threshold_energies)
self.excitations_below_threshold = np.where(
np.abs(self.threshold_energies) >= np.sqrt(
self.eigenvalues[0:len(self.threshold_energies)]))
""" array: Indices of the excitations which lie below their
corresponding threshold """
self.excitations_above_threshold = np.where(
np.abs(self.threshold_energies) <
np.sqrt(self.eigenvalues[0:len(self.threshold_energies)]))
""" array: Indices of the excitations which lie above their
corresponding threshold """
def determine_occ_energies(self, evals):
"""
Extract the occupied energy levels from a Logfile BandArray structure,
provided the tuple of the number of occupied states
Args:
evals (BandArray): the eigenvalues as they are provided
(for instance) from a `Logfile` class instance.
"""
norb_occ = self.cm.norb_occ
occ_energies = []
# istart=0
for ispin, norb in enumerate(norb_occ): # range(len(norb_occ)):
# istart:istart+norb_occ[ispin]]))
occ_energies.append(np.array(evals[0][ispin][0:norb]))
# istart+=norb_tot[ispin]
# : array: energies of the occupied states out of the logfile
self.occ_energies = occ_energies
# : float: lowest threshold of the excitations. All excitations are
# discrete below this level
self.first_threshold = abs(
max(np.max(self.occ_energies[0]), np.max(self.occ_energies[-1])))
def plot_alpha(self, **kwargs):
"""Plot the imaginary part.
Plot the real or imaginary part of the dynamical polarizability.
Keyword Arguments:
real (bool): True if real part has to be plotted. The imaginary
part is plotted otherwise
eta (float): Value of the complex imaginary part. Defaults to 1.e-2.
group (str): see :meth:`lookup`
**kwargs:
other arguments that might be passed to the :meth:`plot` method
of the :mod:`matplotlib.pyplot` module.
Returns:
:mod:`matplotlib.pyplot`: the reference to
:mod:`matplotlib.pyplot` module.
"""
import matplotlib.pyplot as plt
from futile.Utils import kw_pop
emax = np.max(np.sqrt(self.eigenvalues))*HaeV
kwargs, real = kw_pop('real', False, **kwargs)
        plt.xlim(right=emax)
if real:
plt.ylabel(r'$\mathrm{Re} \alpha$ (AU)', size=14)
else:
plt.ylabel(r'$\mathrm{Im} \alpha$', size=14)
plt.yticks([])
plt.xlabel(r'$\omega$ (eV)', size=14)
if hasattr(self, 'first_threshold'):
eps_h = self.first_threshold*HaeV
plt.axvline(x=eps_h, color='black', linestyle='--')
kwargs, eta = kw_pop('eta', 1.e-2, **kwargs)
omega = np.linspace(0.0, emax, 5000)+2.0*eta*1j
kwargs, group = kw_pop('group', 'all', **kwargs)
slice = self.lookup(group)
spectrum = self.spectrum_curves(omega, slice=slice)
toplt = spectrum.real if real else spectrum.imag
pltkwargs = dict(c='black', linewidth=1.5)
pltkwargs.update(kwargs)
plt.plot(omega*HaeV, toplt, **pltkwargs)
return plt
def lookup(self, group):
"""
Identify the group of the excitations according to the argument
Args:
group (str):
A string chosen between
* ``"all"`` : provides the entire set of excitations
(:py:class:`None` instead of the lookup array)
* ``"bt"`` : provides only the excitations below threshold
* ``"at"`` : provides only the excitations above threshold
* ``"singlets"`` : provides the index of the excitations that
have a singlet character
* ``"triplets"`` : provides the index of the excitations that
have a triplet character
"""
slice = None
if group == 'bt':
slice = self.excitations_below_threshold
if group == 'at':
slice = self.excitations_above_threshold
if group == 'singlets':
slice = self.singlets
if group == 'triplets':
slice = self.triplets
return slice
def plot_excitation_landscape(self, **kwargs):
"""
        Represent the excitation landscape as split by the excitations class
Args:
**kwargs:
keyword arguments to be passed to the `pyplot` instance.
The ``xlabel``, ``ylabel`` as well as ``xlim`` are already set.
Returns:
:mod:`matplotlib.pyplot`: the reference to :mod:`matplotlib.pyplot`
module.
Example:
>>> ex=Excitations(cm,tm)
>>> ex.split_excitations(evals=...,tol=1.e-4,nexc=...)
>>> ex.plot_excitation_landscape(title='Excitation landscape')
"""
import matplotlib.pyplot as plt
Emin = 0.0
Emax = np.max(np.sqrt(self.eigenvalues))*HaeV
for level in self.occ_energies[0]:
eng_th = level*HaeV
plt.plot((Emin, eng_th), (level, level),
'--', c='red', linewidth=1)
plt.plot((eng_th, Emax), (level, level), '-', c='red', linewidth=1)
plt.scatter(abs(eng_th), level, marker='x', c='red')
ind_bt = self.excitations_below_threshold
exc_bt = np.sqrt(self.eigenvalues)[ind_bt]
lev_bt = self.threshold_energies[ind_bt]
plt.scatter(HaeV*exc_bt, lev_bt, s=16, marker='o', c='black')
ind_at = self.excitations_above_threshold
exc_at = np.sqrt(self.eigenvalues)[ind_at]
lev_at = self.threshold_energies[ind_at]
plt.scatter(HaeV*exc_at, lev_at, s=14, marker='s', c='blue')
plt.xlabel('energy (eV)')
plt.ylabel('Threshold energy (Ha)')
        plt.xlim(left=Emin-1, right=Emax)
for attr, val in kwargs.items():
            if isinstance(val, dict):
getattr(plt, attr)(**val)
else:
getattr(plt, attr)(val)
return plt
def dos_dict(self, group='all'):
"""Dictionary for DoS creation.
Creates the keyword arguments that have to be passed to the
        :meth:`BigDFT.DoS.append` method of the `DoS` class
Args:
group (str): see :meth:`lookup`
Returns:
            :py:class:`dict`: keyword arguments that can be passed to the
            :meth:`BigDFT.DoS.append` method of the :class:`DoS.DoS` class
"""
ev = np.sqrt(self.eigenvalues)
slice = self.lookup(group)
if slice is not None:
ev = ev[slice]
return dict(energies=np.array([np.ravel(ev)]), units='AU')
def dos(self, group='all', **kwargs):
"""Density of States of the Excitations.
Provides an instance of the :class:`~BigDFT.DoS.DoS` class,
corresponding to the Excitations instance.
Args:
group (str): see :meth:`lookup`
**kwargs: other arguments that might be passed to the
:class:`DoS.DoS` instantiation
Returns:
:class:`DoS.DoS`: instance of the Density of States class
"""
from BigDFT.DoS import DoS
kwa = self.dos_dict(group=group)
kwa['energies'] = kwa['energies'][0]
if hasattr(self, 'first_threshold'):
kwa['fermi_level'] = self.first_threshold
else:
kwa['fermi_level'] = 0.0
kwa.update(kwargs)
return DoS(**kwa)
def plot_Sminustwo(self, coord, alpha_ref=None, group='all'):
"""Inspect S-2 sum rule.
        Provides a handle to the plotting of the $S^{-2}$ sum rule, which
should provide reference values for the static polarizability tensor.
Args:
coord (str): the coordinate used for inspection. May be ``'x'``,
``'y'`` or ``'z'``.
alpha_ref (list): diagonal of the reference static polarizability
tensor (for instance calculated via finite differences).
If present the repartition of the contribution of the various
groups of excitations is plotted.
group (str): see :meth:`lookup`
Returns:
reference to :mod:`matplotlib.pyplot` module.
"""
import matplotlib.pyplot as plt
idir = ['x', 'y', 'z'].index(coord)
fig, ax1 = plt.subplots()
ax1.set_xlabel('energy (eV)', size=14)
plt.ylabel(r'$\alpha_{'+coord+coord+r'}$ (AU)', size=14)
if alpha_ref is not None:
plt.axhline(y=alpha_ref[idir], color='r', linestyle='--')
if hasattr(self, 'first_threshold'):
eps_h = abs(HaeV*self.first_threshold)
plt.axvline(x=eps_h, color='black', linestyle='--')
e = np.sqrt(self.eigenvalues)*HaeV
w_ii = self.alpha_prime[:, idir]
slice = self.lookup(group)
if slice is not None:
e = e[slice]
w_ii = w_ii[slice]
ax1.plot(e, np.cumsum(w_ii))
ax2 = ax1.twinx()
ax2.plot(e, w_ii, color='grey', linestyle='-')
plt.ylabel(r'$w_{'+coord+coord+r'}$ (AU)', size=14)
return plt
def get_alpha_energy(log, norb, nalpha):
return log.evals[0][0][norb+nalpha-1]
def identify_contributions(numOrb, na, exc, C_E2):
pProj = np.zeros(numOrb*2)
for p in range(numOrb):
for spin in [0, 1]:
# sum over all the virtual orbital and spin
for alpha in range(na):
# extract the value of the index of C_E2
elements = transition_indexes(
[numOrb], [na], [[p, alpha, spin]])
for el in elements:
pProj[p+numOrb*spin] += C_E2[exc][el]**2
pProj = pProj[0:numOrb]+pProj[numOrb:2*numOrb] # halves the components
return pProj
def get_p_energy(log, norb):
return log.evals[0][0][0:norb]
def get_threshold(pProj, th_energies, th_levels, tol):
norb = len(th_energies)
pProj = pProj.tolist()
pProj.reverse()
imax = norb-1
for val in pProj:
if val > tol:
break
imax -= 1
return [th_levels[imax], th_energies[imax]]
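# A minimal end-to-end sketch (hypothetical variable names: ``cm`` and ``tm``
# are CouplingMatrix/TransitionMultipoles instances and ``log`` a parsed
# Logfile; not part of the original module):
#
#     >>> ex = Excitations(cm, tm)
#     >>> ex.split_excitations(evals=log.evals, tol=1.e-4)
#     >>> ex.plot_alpha(group='bt').show()   # discrete (below-threshold) part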
| 37.383798 | 79 | 0.590515 | 3,554 | 28,150 | 4.587226 | 0.164041 | 0.015948 | 0.010428 | 0.010121 | 0.25161 | 0.201435 | 0.171993 | 0.161197 | 0.141569 | 0.11961 | 0 | 0.008755 | 0.310231 | 28,150 | 752 | 80 | 37.433511 | 0.83087 | 0.357052 | 0 | 0.158602 | 0 | 0 | 0.049097 | 0.002947 | 0 | 0 | 0 | 0 | 0.005376 | 1 | 0.094086 | false | 0.002688 | 0.026882 | 0.008065 | 0.217742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6f34e314e7e198246f5d865ff9cf3535ca580d5 | 1,290 | py | Python | pi-pytorch/tutorials/sensor/eval.py | tongni1975/stackup-workshops | d83f1d5adcc0b133b10e22d1db295020af967bac | [
"MIT"
] | 12 | 2018-07-21T14:38:55.000Z | 2020-08-18T07:27:39.000Z | pi-pytorch/tutorials/sensor/eval.py | tongni1975/stackup-workshops | d83f1d5adcc0b133b10e22d1db295020af967bac | [
"MIT"
] | 1 | 2019-07-28T03:17:44.000Z | 2019-12-14T09:01:18.000Z | pi-pytorch/tutorials/sensor/eval.py | tongni1975/stackup-workshops | d83f1d5adcc0b133b10e22d1db295020af967bac | [
"MIT"
] | 10 | 2018-06-12T07:54:07.000Z | 2020-08-18T07:31:47.000Z | import torch
from torch.autograd import Variable
from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
import time
# import our model and data
from rnn import RNN
from data import get_data
hidden_size = 10
learning_rate = 0.01
num_layers = 2
num_epochs = 1000
sequence_length = 10
batch_size = 32
def load_model(input_size):
model = RNN(input_size, hidden_size, num_layers)
# load on CPU only
checkpoint = torch.load('checkpoint.pt', map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
print(model)
print('model training loss', checkpoint['loss'])
print('model training epoch', checkpoint['epoch'])
return model
if __name__ == '__main__':
X_train, X_test, y_train, y_test = get_data(sequence_length)
input_size = X_train.shape[2] # batch, seq_len, input_size
model = load_model(input_size)
inputs = Variable(X_test.float())
tick = time.time()
outputs = model(inputs)
tock = time.time()
# convert probabilities => 0 or 1
    y_pred = (outputs.detach().numpy() > 0.5).astype(int)
print('prediction time: %.3fs' % (tock - tick))
print(confusion_matrix(y_test.values, y_pred))
print(classification_report(y_test.values, y_pred))
| 24.807692 | 67 | 0.712403 | 187 | 1,290 | 4.663102 | 0.438503 | 0.051606 | 0.03211 | 0.041284 | 0.036697 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018886 | 0.17907 | 1,290 | 51 | 68 | 25.294118 | 0.804533 | 0.078295 | 0 | 0 | 0 | 0 | 0.092905 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.205882 | 0 | 0.264706 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6f3a4cb8ec73baf05b168d49488b8ba1e024e77 | 357 | py | Python | semesterIII/FADP/Exp2.py | ShreerajRaul/ghriet | 4ff8372ced0c871f019a61b735c60725b6d373d4 | [
"MIT"
] | null | null | null | semesterIII/FADP/Exp2.py | ShreerajRaul/ghriet | 4ff8372ced0c871f019a61b735c60725b6d373d4 | [
"MIT"
] | null | null | null | semesterIII/FADP/Exp2.py | ShreerajRaul/ghriet | 4ff8372ced0c871f019a61b735c60725b6d373d4 | [
"MIT"
] | null | null | null | # Solve the quadratic equation ax**2 + bx + c = 0
# import complex math module
import cmath
a=int(input("Enter a:"))
b=int(input("Enter b:"))
c=int(input("Enter c:"))
# calculate the discriminant
d = (b**2) - (4*a*c)
# find two solutions
sol1 = (-b-cmath.sqrt(d))/(2*a)
sol2 = (-b+cmath.sqrt(d))/(2*a)
print('The solutions are {0} and {1}'.format(sol1,sol2))
| 29.75 | 55 | 0.641457 | 66 | 357 | 3.469697 | 0.530303 | 0.104803 | 0.170306 | 0.09607 | 0.113537 | 0.113537 | 0 | 0 | 0 | 0 | 0 | 0.038961 | 0.137255 | 357 | 12 | 55 | 29.75 | 0.704545 | 0.336134 | 0 | 0 | 0 | 0 | 0.223176 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
d6f5c7c92dd0905f3aec94fff3ac8b43d06bc4f8 | 2,084 | py | Python | pynars/utils/tools.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/utils/tools.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | pynars/utils/tools.py | AIxer/PyNARS | 443b6a5e1c9779a1b861df1ca51ce5a190998d2e | [
"MIT"
] | null | null | null | import sys
from typing import Callable, List
try:
sys.getsizeof(0)
getsizeof = lambda x: sys.getsizeof(x)
except:
# import resource
    getsizeof = lambda _: 1  # resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
def get_size(obj, seen=None):
"""Recursively finds size of objects"""
size = getsizeof(obj)
if seen is None:
seen = set()
obj_id = id(obj)
if obj_id in seen:
return 0
# Important mark as seen *before* entering recursion to gracefully handle
# self-referential objects
seen.add(obj_id)
if isinstance(obj, dict):
size += sum([get_size(v, seen) for v in obj.values()])
size += sum([get_size(k, seen) for k in obj.keys()])
elif hasattr(obj, '__dict__'):
size += get_size(obj.__dict__, seen)
elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
size += sum([get_size(i, seen) for i in obj])
return size
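# Usage note: containers, ``__dict__`` attributes and iterables are walked
# recursively (``seen`` guards against reference cycles), so for a nested
# object the result exceeds the shallow ``sys.getsizeof`` of the outermost
# container alone, e.g.:
#     >>> get_size({'a': [1, 2, 3], 'b': 'xy'})   # doctest: +SKIP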
def list_contains(base_list, obj_list):
    '''Return True if ``obj_list`` occurs as a contiguous sublist of
    ``base_list``, e.g. list_contains([1, 2, 3, 4], [2, 3]) -> True.'''
if len(base_list) < len(obj_list): return False
obj0 = obj_list[0]
for i, base in enumerate(base_list[:len(base_list)+1 - len(obj_list)]):
if base == obj0:
if base_list[i: i+len(obj_list)] == obj_list:
return True
return False
def rand_seed(x: int):
import random
random.seed(x)
import numpy as np
np.random.seed(x)
# if using pytorch, set its seed!
# # import torch
# # torch.manual_seed(x)
# # torch.cuda.manual_seed(x)
# # torch.cuda.manual_seed_all(x)
find_var_with_pos: Callable[[list, list, List[list]], list] = lambda pos_search, variables, positions: [var for var, pos in zip(variables, positions) if pos[:len(pos_search)] == pos_search] # find those variables with a common head of position. e.g. pos_search=[0], variables=[1, 1, 2, 2], and positions=[[0, 2, 0, 0], [0, 2, 1, 0], [0, 3, 0], [1, 0]], then return [1, 1, 2]
find_pos_with_pos: Callable[[list, List[list]], list] = lambda pos_search, positions: [pos for pos in positions if pos[:len(pos_search)] == pos_search]
| 33.079365 | 374 | 0.640595 | 323 | 2,084 | 3.96904 | 0.312694 | 0.043682 | 0.046802 | 0.032761 | 0.168487 | 0.168487 | 0.168487 | 0.054602 | 0 | 0 | 0 | 0.017337 | 0.225048 | 2,084 | 63 | 375 | 33.079365 | 0.776471 | 0.242802 | 0 | 0 | 0 | 0 | 0.010296 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078947 | false | 0 | 0.105263 | 0 | 0.289474 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
d6f7adc0b1077f898d2e6f6848ac5efb072221fd | 7,331 | py | Python | funwalk/hyphae.py | cameronmartino/funwalk | 24d4bcf6790322f53c0e066492303b4fd2e0980e | [
"MIT"
] | null | null | null | funwalk/hyphae.py | cameronmartino/funwalk | 24d4bcf6790322f53c0e066492303b4fd2e0980e | [
"MIT"
] | null | null | null | funwalk/hyphae.py | cameronmartino/funwalk | 24d4bcf6790322f53c0e066492303b4fd2e0980e | [
"MIT"
] | 1 | 2019-12-16T19:45:03.000Z | 2019-12-16T19:45:03.000Z | from __future__ import division
import numpy as np
import pandas as pd
import math
from tqdm import tqdm
from matplotlib.colors import rgb2hex
def MonodExt(k1,k2,kt,l,Stmp,ks=200):
# source: Lejeune et al 1995, Morphology of Trichoderma reesei QM 9414 in Submerged Cultures
return (k1+k2*(l/(l+kt)))*(Stmp/(Stmp+ks))
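# Worked example (hypothetical values): with k1=50, k2=50, kt=5, hyphal length
# l=95 and substrate Stmp=1e5 (ks=200), the tip extension rate is
# (50 + 50*95/100) * (1e5/(1e5 + 200)) ≈ 97.3 µm/h, i.e. close to the maximum
# k1 + k2 once the branch is long and substrate is abundant.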
class hyphal_walk(object):
def __init__(self,minTheta=10,maxTheta=100,avgRate=28,H=1440,N=1200,M=1e10,tstep=.0005,
q=0.004,S0=5e5,k1=50,maxktip=None,k2=None,kt=5,init_n=20,width=100,
set_start_center=True,use_monod=True,normal_theta=True):
"""
minTheta = 10 #*pi/180 - minimum angle a branch can occur
maxTheta = 100 #*pi/180 - maximum angle a branch can occur
H = 1440 # number of hours
N = 1200 # max simulation rounds
M = 1e10 # max hyphae (carrying capacity)
tstep = .0005 # time step (hours/step)
q = 0.004 # branching frequency (maybe need to be scaled of what the time step is)
        S0 = 5e5 # initial conc. of substrate mg in whole grid (evenly dist.)
k1 = 50 # (µm/h) initial tip extension rate, value estimated from Spohr et al 1998 figure 5
maxktip = 2*k1 # (µm/h) maximum tip extension rate, value estimated from Spohr et al 1998 figure 5
k2 = maxktip - k1 # (µm/h) difference between k1 and maxktip
kt = 5 # saturation constant
init_n = 20 # starting spores
width = 100 # view window (um) (this is just 1 cm)
set_start_center = True # if you want the model to start all spores at (0,0)
"""
if maxktip is None:
            maxktip = 2*k1
if k2 is None:
k2 = maxktip - k1
self.minTheta = minTheta
self.maxTheta = maxTheta
#self.avgRate = avgRate
self.H = H
self.N = N
self.M = M
self.tstep = tstep
self.q = q
self.S0 = S0
self.k1 = k1
self.maxktip = maxktip
self.k2 = k2
self.kt = kt
self.init_n = init_n
self.width = width
self.set_start_center = set_start_center
self.use_monod = use_monod
self.normal_theta = normal_theta
self.hyphae = self.intialize_hyphae()
self.Sgrid = self.intialize_subtrate()
def intialize_hyphae(self):
hyphae = {}
centers = np.array([0,0]).reshape(1, 2)
        # for each spore make an initial random walk direction (no movement yet)
if self.normal_theta==True:
theta_init = {i:angle_ for i,angle_ in enumerate(np.linspace(0,360,self.init_n))}
for spore_i in range(0,self.init_n):
if self.set_start_center==False:
rxy = np.random.uniform(0,round(self.width),2) + centers
else:
rxy = centers
if self.normal_theta==True:
iTheta = theta_init[spore_i]
else:
iTheta = np.around(np.random.uniform(0,360),1)
hyphae[spore_i] = {'x0':rxy[:,0], 'y0':rxy[:,1], 'x':rxy[:,0],
'y':rxy[:,1], 'angle':iTheta, 'biomass':0, 't':0, 'l':0}
return hyphae
def intialize_subtrate(self,block_div=2):
# make a substrate grid
Sgrid = []
# make a substrate grid
size_of_block = round(self.width/block_div)
grid_min = list(np.linspace(-self.width,self.width,
size_of_block)[:-1])
grid_max = list(np.linspace(-self.width,self.width,
size_of_block)[1:])
for i in range(len(grid_max)):
Sgrid.append(pd.DataFrame([[self.S0/len(grid_max)]*len(grid_max),grid_min,grid_max,
[grid_min[i]]*len(grid_max),[grid_max[i]]*len(grid_max)],
index=['S','X_Gmin','X_Gmax','Y_Gmin','Y_Gmax']).T)
Sgrid = pd.concat(Sgrid,axis=0).reset_index()
return Sgrid
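    # A minimal driving sketch (hypothetical, deliberately small parameters so
    # that it finishes quickly); ``run_simulation`` below returns per-step
    # snapshots of the hyphae and of the substrate grid:
    #     >>> walk = hyphal_walk(N=50, init_n=4, width=50)
    #     >>> hy_t, sub_t = walk.run_simulation()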
def run_simulation(self):
# run until i exceeds limits
time_snapshot_hy = {}
time_snapshot_sub = {}
for i in tqdm(range(0,self.N)):
bio_mass = 0
if len(self.hyphae)>=self.M:
# hit carrying cpacity of the system
print('broke capacity first')
break
# otherwise continue to model
for j in range(0,len(self.hyphae)):
# find tip in substrate grid
grid_index = self.Sgrid[((self.Sgrid['Y_Gmin']<=self.hyphae[j]['y'][0])&\
(self.Sgrid['X_Gmin']<=self.hyphae[j]['x'][0]))==True].index.max()
if np.isnan(grid_index):
# left view space
continue
if round(self.Sgrid.loc[grid_index,'S'])!=0:
# get current extention
if self.use_monod==True:
ext = MonodExt(self.k1,self.k2,self.kt,
self.hyphae[j]['l'],
self.Sgrid.loc[grid_index,'S'])
else:
ext = self.maxktip
# extend in x and y
dx = ext * self.tstep * np.cos(self.hyphae[j]['angle']*np.pi/180) # new coordinate in x-axis
dy = ext * self.tstep * np.sin(self.hyphae[j]['angle']*np.pi/180) # new coordinate in y-axis
# biomass created for hyphae j
dl_c = np.sqrt(dx**2 + dy**2)
# (constant to scale biomass density)
dl_c *= 1
bio_mass += dl_c
# subtract used substrate
if self.use_monod==True:
self.Sgrid.loc[grid_index,'S'] = self.Sgrid.loc[grid_index,'S'] - dl_c
# update location
self.hyphae[j]['x'] = self.hyphae[j]['x']+dx
self.hyphae[j]['y'] = self.hyphae[j]['y']+dy
self.hyphae[j]['l'] = np.sqrt((self.hyphae[j]['x'][0]-self.hyphae[j]['x0'][0])**2 \
+(self.hyphae[j]['y'][0]-self.hyphae[j]['y0'][0])**2 )
self.hyphae[j]['biomass'] = self.hyphae[j]['biomass'] + dl_c
# randomly split
if np.random.uniform(0,1) < self.q:
direction = [-1,1][round(np.random.uniform(0,1))]
newangle = direction*round(np.random.uniform(self.minTheta,self.maxTheta))
newangle += self.hyphae[j]['angle']
self.hyphae[len(self.hyphae)] = {'x0':self.hyphae[j]['x'], 'y0':self.hyphae[j]['y'],
'x':self.hyphae[j]['x'], 'y':self.hyphae[j]['y'],
'angle':newangle, 'biomass':0, 't':i, 'l':0}
time_snapshot_hy[i] = pd.DataFrame(self.hyphae.copy()).copy()
time_snapshot_sub[i] = pd.DataFrame(self.Sgrid.copy()).copy()
        return time_snapshot_hy, time_snapshot_sub
| 47.915033 | 116 | 0.506343 | 939 | 7,331 | 3.848775 | 0.257721 | 0.077476 | 0.063918 | 0.019923 | 0.192861 | 0.125069 | 0.075263 | 0.075263 | 0.075263 | 0.075263 | 0 | 0.04117 | 0.370482 | 7,331 | 153 | 117 | 47.915033 | 0.741928 | 0.214568 | 0 | 0.065421 | 0 | 0 | 0.026414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046729 | false | 0 | 0.056075 | 0.009346 | 0.149533 | 0.009346 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
d6f9290567678db771e6a8c50fd70341df8c77a3 | 1,636 | py | Python | Tree/(CODECHEF) Binary_Operations.py | XitizVerma/Data-Structures-and-Algorithms-Advanced | 610225eeb7e0b4ade229ec86355901ad1ca38784 | [
"MIT"
] | 1 | 2020-08-27T06:59:52.000Z | 2020-08-27T06:59:52.000Z | Tree/(CODECHEF) Binary_Operations.py | XitizVerma/Data-Structures-and-Algorithms-Advanced | 610225eeb7e0b4ade229ec86355901ad1ca38784 | [
"MIT"
] | null | null | null | Tree/(CODECHEF) Binary_Operations.py | XitizVerma/Data-Structures-and-Algorithms-Advanced | 610225eeb7e0b4ade229ec86355901ad1ca38784 | [
"MIT"
] | null | null | null | import sys
sys.setrecursionlimit(10**6)
class Node:
def __init__(self, val, pos):
self.left = None
self.right = None
self.pos = pos
self.val = val
def insert(node, val, pos):
if node is None:
print(pos)
return Node(val, pos)
if val < node.val: # move to left child
node.left = insert(node.left, val, 2*pos)
else: # move to right child
node.right = insert(node.right, val, 2*pos+1)
return node
def minValueNode(node):
current = node
while current.left is not None:
current = current.left
return current
def delete(node,val, case=True):
if node is None:
return node
# search
if val < node.val: # move to left child
node.left = delete(node.left, val, case)
elif val > node.val: # move to right child
node.right = delete(node.right, val, case)
else: # here found
if case:
print(node.pos)
# Now delete node and replacement
if node.left is None and node.right is None: # check left child, if None
node = None
elif node.left is None:
node = node.right
elif node.right is None:
node = node.left
else:
temp = minValueNode(node.right)
node.val = temp.val
node.right = delete(node.right, temp.val, False)
return node
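# Input format (hypothetical session; ``main`` below reads from stdin): the
# first line holds the number of queries, then one "<op> <value>" pair per
# line. Op 'i' inserts into the BST and prints the heap-style position of the
# new node (root=1, children of node p at 2p and 2p+1); any other op deletes
# the value and prints the position of the node where it was found:
#     3
#     i 5     # prints 1 (root)
#     i 3     # prints 2 (left child of root)
#     d 5     # prints 1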
root = None
def main(q):
global root
oper, elem = input().split()
if oper == 'i':
root = insert(root, int(elem), 1)
else:
root = delete(root, int(elem), True)
if q>1:
main(q-1)
main(int(input()))
| 27.266667 | 60 | 0.56846 | 232 | 1,636 | 3.991379 | 0.228448 | 0.097192 | 0.032397 | 0.045356 | 0.182505 | 0.12959 | 0.075594 | 0.075594 | 0.075594 | 0.075594 | 0 | 0.008182 | 0.327628 | 1,636 | 60 | 82 | 27.266667 | 0.833636 | 0.094743 | 0 | 0.203704 | 0 | 0 | 0.000679 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092593 | false | 0 | 0.018519 | 0 | 0.222222 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0
d6fa1a37daf66eeb44908ffa65e9185aef668bb0 | 814 | py | Python | kubernetes/generate.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | kubernetes/generate.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | kubernetes/generate.py | gottaegbert/penter | 8cbb6be3c4bf67c7c69fa70e597bfbc3be4f0a2d | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | from kubernetes import client, config
import json
# Generate the Pod manifest (printed as JSON)
def main():
pod = create_pod("dev")
print(json.dumps(client.ApiClient().sanitize_for_serialization(pod)))
def create_pod(environment):
return client.V1Pod(
api_version="v1",
kind="Pod",
metadata=client.V1ObjectMeta(
name="test-pod",
),
spec=client.V1PodSpec(
containers=[
client.V1Container(
name="test-container",
image="nginx",
env=[
client.V1EnvVar(
name="ENV",
value=environment,
)
]
)
]
)
)
if __name__ == '__main__':
main()
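# Running this module prints the serialized Pod manifest; the output is
# approximately the following JSON (key order may differ):
# {"apiVersion": "v1", "kind": "Pod", "metadata": {"name": "test-pod"},
#  "spec": {"containers": [{"env": [{"name": "ENV", "value": "dev"}],
#           "image": "nginx", "name": "test-container"}]}}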
| 22 | 73 | 0.443489 | 63 | 814 | 5.52381 | 0.634921 | 0.051724 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013514 | 0.454545 | 814 | 36 | 74 | 22.611111 | 0.77027 | 0.006143 | 0 | 0 | 0 | 0 | 0.057001 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0.068966 | 0.034483 | 0.172414 | 0.034483 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6fb376cc1a2cdba245a1630f216a970be324900 | 1,680 | py | Python | manim/animation/update.py | philschatz/manim | e3359a571d9a02a08979b3e037ddada3e874eb7c | [
"MIT"
] | 5 | 2019-02-22T14:10:08.000Z | 2022-03-13T01:03:49.000Z | manim/animation/update.py | philschatz/manim | e3359a571d9a02a08979b3e037ddada3e874eb7c | [
"MIT"
] | 17 | 2021-04-10T13:47:17.000Z | 2021-05-17T21:25:30.000Z | manim/animation/update.py | philschatz/manim | e3359a571d9a02a08979b3e037ddada3e874eb7c | [
"MIT"
] | 1 | 2021-03-31T20:46:51.000Z | 2021-03-31T20:46:51.000Z | """Animations that update mobjects."""
__all__ = ["UpdateFromFunc", "UpdateFromAlphaFunc", "MaintainPositionRelativeTo"]
import operator as op
import typing
from ..animation.animation import Animation
if typing.TYPE_CHECKING:
from ..mobject.mobject import Mobject
class UpdateFromFunc(Animation):
"""
update_function of the form func(mobject), presumably
to be used when the state of one mobject is dependent
on another simultaneously animated mobject
"""
def __init__(
self,
mobject: "Mobject",
update_function: typing.Callable[["Mobject"], typing.Any],
suspend_mobject_updating: bool = False,
**kwargs
) -> None:
self.update_function = update_function
super().__init__(
mobject, suspend_mobject_updating=suspend_mobject_updating, **kwargs
)
def interpolate_mobject(self, alpha: float) -> None:
self.update_function(self.mobject)
class UpdateFromAlphaFunc(UpdateFromFunc):
def interpolate_mobject(self, alpha: float) -> None:
self.update_function(self.mobject, alpha)
class MaintainPositionRelativeTo(Animation):
def __init__(
self, mobject: "Mobject", tracked_mobject: "Mobject", **kwargs
) -> None:
self.tracked_mobject = tracked_mobject
self.diff = op.sub(
mobject.get_center(),
tracked_mobject.get_center(),
)
super().__init__(mobject, **kwargs)
def interpolate_mobject(self, alpha: float) -> None:
target = self.tracked_mobject.get_center()
location = self.mobject.get_center()
self.mobject.shift(target - location + self.diff)
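# A minimal usage sketch (hypothetical scene code, not part of this module):
# UpdateFromAlphaFunc drives a mobject attribute directly from the animation's
# alpha, here the opacity of a square.
#
#     class FadeWithAlpha(Scene):
#         def construct(self):
#             square = Square()
#             self.play(UpdateFromAlphaFunc(square, lambda m, a: m.set_opacity(a)))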
| 28.965517 | 81 | 0.674405 | 175 | 1,680 | 6.217143 | 0.348571 | 0.077206 | 0.058824 | 0.060662 | 0.217831 | 0.171875 | 0.171875 | 0.171875 | 0.125 | 0.125 | 0 | 0 | 0.225595 | 1,680 | 57 | 82 | 29.473684 | 0.83628 | 0.108929 | 0 | 0.189189 | 0 | 0 | 0.059264 | 0.017711 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135135 | false | 0 | 0.108108 | 0 | 0.324324 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6fb8c0182b842a2a3c6c84fa4f6e7466f61a46b | 5,583 | py | Python | nets/mobilenet.py | DanielWong0623/Dog-Face-Recognition-PyTorch | 70920a65617d9b6de59919d8920e4a1a133d58d3 | [
"MIT"
] | null | null | null | nets/mobilenet.py | DanielWong0623/Dog-Face-Recognition-PyTorch | 70920a65617d9b6de59919d8920e4a1a133d58d3 | [
"MIT"
] | null | null | null | nets/mobilenet.py | DanielWong0623/Dog-Face-Recognition-PyTorch | 70920a65617d9b6de59919d8920e4a1a133d58d3 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
def _make_divisible(ch, divisor=8, min_ch=None):
if min_ch is None:
min_ch = divisor
new_ch = max(min_ch, int(ch + divisor / 2) // divisor * divisor)
if new_ch < 0.9 * ch:
new_ch += divisor
return new_ch
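# Worked example: _make_divisible(32 * 0.75) returns 24 (already a multiple of
# 8), while _make_divisible(50) returns 48, since int(50 + 4) // 8 * 8 = 48 and
# 48 >= 0.9 * 50, so no extra divisor is added.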
# ----------------------
# MobileNetV1
# ----------------------
# Standard convolution + BN + ReLU6
def conv_bn(inp, oup, stride=1):
return nn.Sequential(
nn.Conv2d(inp, oup, 3, stride, 1, bias=False),
nn.BatchNorm2d(oup),
nn.ReLU6()
)
# Depthwise (DW) convolution + BN + ReLU6
def conv_dw(inp, oup, stride=1):
return nn.Sequential(
nn.Conv2d(inp, inp, 3, stride, 1, groups=inp, bias=False),
nn.BatchNorm2d(inp),
nn.ReLU6(),
# PW
nn.Conv2d(inp, oup, 1, 1, 0, bias=False),
nn.BatchNorm2d(oup),
nn.ReLU6(),
)
class MobileNetV1(nn.Module):
def __init__(self):
super(MobileNetV1, self).__init__()
self.stage1 = nn.Sequential(
# H, W, C
# 224, 224, 3 -> 112, 112, 32
conv_bn(3, 32, 2),
# 112, 112, 32 -> 112, 112, 64
conv_dw(32, 64, 1),
# 112, 112, 64 -> 56, 56, 128
conv_dw(64, 128, 2),
conv_dw(128, 128, 1),
# 56 ,56 ,128 -> 28, 28, 256
conv_dw(128, 256, 2),
conv_dw(256, 256, 1),
)
self.stage2 = nn.Sequential(
# 28, 28, 256 -> 14, 14, 512
conv_dw(256, 512, 2),
conv_dw(512, 512, 1),
conv_dw(512, 512, 1),
conv_dw(512, 512, 1),
conv_dw(512, 512, 1),
conv_dw(512, 512, 1),
)
self.stage3 = nn.Sequential(
# 14, 14, 512 -> 7, 7, 1024
conv_dw(512, 1024, 2),
conv_dw(1024, 1024, 1),
)
# 7, 7, 1024 -> 1, 1, 1024
self.avg = nn.AdaptiveAvgPool2d((1, 1))
# 1, 1, 1024 -> 1, 1, 1000
self.fc = nn.Linear(1024, 1000)
def forward(self, x):
x = self.stage1(x)
x = self.stage2(x)
x = self.stage3(x)
x = self.avg(x)
x = x.view(-1, 1024)
x = self.fc(x)
return x
# ----------------------
# MobileNet V2
# ----------------------
class ConvBNReLU(nn.Sequential):
def __init__(self, in_channel, out_channel, kernel_size=3, stride=1, groups=1):
padding = (kernel_size - 1) // 2
super(ConvBNReLU, self).__init__(
nn.Conv2d(in_channel, out_channel, kernel_size, stride, padding, groups=groups, bias=False),
nn.BatchNorm2d(out_channel),
nn.ReLU6(inplace=True)
)
class InvertedResidual(nn.Module):
def __init__(self, in_channel, out_channel, stride, expand_ratio):
super(InvertedResidual, self).__init__()
hidden_channel = in_channel * expand_ratio
self.use_shortcut = stride == 1 and in_channel == out_channel
layers = []
if expand_ratio != 1:
# 1x1 pointwise conv
layers.append(ConvBNReLU(in_channel, hidden_channel, kernel_size=1))
layers.extend([
# 3x3 depthwise conv
ConvBNReLU(hidden_channel, hidden_channel, stride=stride, groups=hidden_channel),
# 1x1 pointwise conv(linear)
nn.Conv2d(hidden_channel, out_channel, kernel_size=1, bias=False),
nn.BatchNorm2d(out_channel),
])
self.conv = nn.Sequential(*layers)
def forward(self, x):
if self.use_shortcut:
return x + self.conv(x)
else:
return self.conv(x)
class MobileNetV2(nn.Module):
def __init__(self, num_classes=1000, alpha=1.0, round_nearest=8):
super(MobileNetV2, self).__init__()
block = InvertedResidual
input_channel = _make_divisible(32 * alpha, round_nearest)
last_channel = _make_divisible(1280 * alpha, round_nearest)
inverted_residual_setting = [
# t, c, n, s
[1, 16, 1, 1],
[6, 24, 2, 2],
[6, 32, 3, 2],
[6, 64, 4, 2],
[6, 96, 3, 1],
[6, 160, 3, 2],
[6, 320, 1, 1],
]
features = []
# conv1 layer
features.append(ConvBNReLU(3, input_channel, stride=2))
# building inverted residual residual blockes
for t, c, n, s in inverted_residual_setting:
output_channel = _make_divisible(c * alpha, round_nearest)
for i in range(n):
stride = s if i == 0 else 1
features.append(block(input_channel, output_channel, stride, expand_ratio=t))
input_channel = output_channel
# building last several layers
features.append(ConvBNReLU(input_channel, last_channel, 1))
# combine feature layers
self.features = nn.Sequential(*features)
# building classifier
self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
self.classifier = nn.Sequential(
nn.Dropout(0.2),
nn.Linear(last_channel, num_classes)
)
def forward(self, x):
x = self.features(x)
x = self.avgpool(x)
x = torch.flatten(x, 1)
x = self.classifier(x)
return x
if __name__ == '__main__':
model = MobileNetV2()
# model = MobileNetV1()
input = torch.randn(1, 3, 224, 224)
out = model(input)
print(out.shape)
| 30.675824 | 105 | 0.51818 | 684 | 5,583 | 4.049708 | 0.209064 | 0.030325 | 0.019495 | 0.039711 | 0.177256 | 0.150181 | 0.098556 | 0.053791 | 0.053791 | 0.053791 | 0 | 0.095736 | 0.348916 | 5,583 | 181 | 106 | 30.845304 | 0.6663 | 0.106574 | 0 | 0.144 | 0 | 0 | 0.001675 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.016 | 0.016 | 0.184 | 0.008 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6fc1bb2c513a94df13f8c3502baea0cf10d6404 | 538 | py | Python | xbrr/edinet/reader/aspects/stock.py | 5laps2go/xbrr | 4c0824b53bfe971111d60e6c1ff4e36f4f4845a3 | [
"MIT"
] | null | null | null | xbrr/edinet/reader/aspects/stock.py | 5laps2go/xbrr | 4c0824b53bfe971111d60e6c1ff4e36f4f4845a3 | [
"MIT"
] | null | null | null | xbrr/edinet/reader/aspects/stock.py | 5laps2go/xbrr | 4c0824b53bfe971111d60e6c1ff4e36f4f4845a3 | [
"MIT"
] | null | null | null | from xbrr.base.reader.base_parser import BaseParser
from xbrr.edinet.reader.element_value import ElementValue
class Stock(BaseParser):
def __init__(self, reader):
tags = {
"dividend_paid": "jpcrp_cor:DividendPaidPerShareSummaryOfBusinessResults", # 一株配当
"dividends_surplus": "jppfs_cor:DividendsFromSurplus", # 剰余金の配当
"purchase_treasury_stock": "jppfs_cor:PurchaseOfTreasuryStock", # 自社株買い
}
super().__init__(reader, ElementValue, tags)
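# A minimal usage sketch (hypothetical; ``reader`` is an xbrr document reader
# for a parsed EDINET filing, and attribute-style access is an assumption
# about the shared BaseParser machinery):
#     >>> stock = Stock(reader)
#     >>> stock.dividend_paid   # doctest: +SKIP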
| 38.428571 | 96 | 0.665428 | 48 | 538 | 7.104167 | 0.6875 | 0.046921 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.247212 | 538 | 13 | 97 | 41.384615 | 0.841975 | 0.031599 | 0 | 0 | 0 | 0 | 0.32882 | 0.270793 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6fc289ed39bfadb67662fbe1b9dc0d8849d8f8c | 2,031 | py | Python | ClusterAuthors.py | wjs018/JitaChat | 0b289f7f6e65c1a8b0627e7c3573307580042d1c | [
"MIT"
] | null | null | null | ClusterAuthors.py | wjs018/JitaChat | 0b289f7f6e65c1a8b0627e7c3573307580042d1c | [
"MIT"
] | null | null | null | ClusterAuthors.py | wjs018/JitaChat | 0b289f7f6e65c1a8b0627e7c3573307580042d1c | [
"MIT"
] | null | null | null | """This program first reads in the sqlite database made by ParseAuthors.py.
Then, after just a little data cleaning, it undergoes PCA decomposition.
After being decomposed via PCA, the author data is then clustered by way of a
K-means clustering algorithm. The number of clusters can be set by changing
the value of n_clusters."""
import sqlite3
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
if __name__ == '__main__':
# Filepath of sqlite database made by ParseAuthors.py
db_path = '/media/sf_G_DRIVE/jita1407/authors.sqlite'
# Load this into a dataframe
conn = sqlite3.connect(db_path, detect_types=sqlite3.PARSE_DECLTYPES)
dataframe = pd.read_sql_query("SELECT * FROM Authors", conn)
conn.close()
# Get rid of some redundant data to make analysis cleaner and more straightforward
dataframe = dataframe.drop(['int_skew', 'unique_messages'], axis=1)
# Separate out our list of Authors from the data about them
    authors = dataframe.iloc[:, 1].copy()
    data = dataframe.iloc[:, 2:7].copy()
# Set up our PCA decomposition
pca = PCA()
    pca.fit(data.to_numpy())
# Transform our data into features calculated by PCA
    transformed = pca.transform(data.to_numpy())
# Cluster our data according to K-means
    n_clusters = 2 # number of clusters to organize data into
    n_init = 20 # number of times to replicate clustering
    kmeans = KMeans(n_clusters=n_clusters, n_init=n_init).fit(transformed)
# Get the results of the clustering
centers = kmeans.cluster_centers_
labels = kmeans.labels_
# Make some plots
# Plot explained variance for each PCA component
#plt.bar(np.arange(len(pca.explained_variance_)), pca.explained_variance_)
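    # A minimal plotting sketch (commented out like the line above; uses only
    # variables computed earlier in this block):
    # plt.scatter(transformed[:, 0], transformed[:, 1], c=labels, s=5)
    # plt.scatter(centers[:, 0], centers[:, 1], c='red', marker='x')
    # plt.show()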
| 31.734375 | 89 | 0.690793 | 286 | 2,031 | 4.776224 | 0.493007 | 0.023426 | 0.026354 | 0.029283 | 0.04978 | 0.04978 | 0 | 0 | 0 | 0 | 0 | 0.010336 | 0.237814 | 2,031 | 64 | 90 | 31.734375 | 0.872093 | 0.476613 | 0 | 0 | 0 | 0 | 0.088995 | 0.039234 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.26087 | 0 | 0.26087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d6fcc73481c40c481a1a9aa8423bcb41c20526de | 718 | py | Python | Leetcode/304. Range Sum Query 2D - Immutable/solution1.py | asanoviskhak/Outtalent | c500e8ad498f76d57eb87a9776a04af7bdda913d | [
"MIT"
] | 51 | 2020-07-12T21:27:47.000Z | 2022-02-11T19:25:36.000Z | Leetcode/304. Range Sum Query 2D - Immutable/solution1.py | CrazySquirrel/Outtalent | 8a10b23335d8e9f080e5c39715b38bcc2916ff00 | [
"MIT"
] | null | null | null | Leetcode/304. Range Sum Query 2D - Immutable/solution1.py | CrazySquirrel/Outtalent | 8a10b23335d8e9f080e5c39715b38bcc2916ff00 | [
"MIT"
] | 32 | 2020-07-27T13:54:24.000Z | 2021-12-25T18:12:50.000Z | class NumMatrix:
def __init__(self, matrix: List[List[int]]):
if not matrix or not matrix[0]: return None
m, n = len(matrix), len(matrix[0])
self.dp = [[0] * (n + 1) for _ in range(m + 1)]
for r in range(m):
for c in range(n):
self.dp[r + 1][c + 1] = self.dp[r + 1][c] + self.dp[r][c + 1] + matrix[r][c] - self.dp[r][c]
def sumRegion(self, row1: int, col1: int, row2: int, col2: int) -> int:
return self.dp[row2 + 1][col2 + 1] - self.dp[row1][col2 + 1] - self.dp[row2 + 1][col1] + self.dp[row1][col1]
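# Worked example: for matrix [[3, 0], [5, 6]] the dp table stores prefix sums
# with a zero border, so sumRegion(0, 0, 1, 1) = dp[2][2] - dp[0][2] - dp[2][0]
# + dp[0][0] = 14 - 0 - 0 + 0 = 14, the sum of all four cells.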
# Your NumMatrix object will be instantiated and called as such:
# obj = NumMatrix(matrix)
# param_1 = obj.sumRegion(row1,col1,row2,col2)
| 42.235294 | 116 | 0.568245 | 122 | 718 | 3.295082 | 0.327869 | 0.134328 | 0.069652 | 0.039801 | 0.087065 | 0 | 0 | 0 | 0 | 0 | 0 | 0.056075 | 0.254875 | 718 | 16 | 117 | 44.875 | 0.695327 | 0.182451 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0 | 0.1 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |