hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1ed2ac907fbfedb11ec7c3978dc0e38575801a04 | 476 | py | Python | libalgs-py/sorting/insertion_sort.py | tdudz/libalgs-py | 9b2610de66217c0564193096702c47478de5db5e | [
"MIT"
] | null | null | null | libalgs-py/sorting/insertion_sort.py | tdudz/libalgs-py | 9b2610de66217c0564193096702c47478de5db5e | [
"MIT"
] | null | null | null | libalgs-py/sorting/insertion_sort.py | tdudz/libalgs-py | 9b2610de66217c0564193096702c47478de5db5e | [
"MIT"
] | null | null | null | """
Insertion Sort
--------------
Builds the sorted array by inserting one element at a time.
Time Complexity: O(n^2)
Space Complexity: O(1)
Stable: Yes
"""

def insertion_sort(A):
    """
    Takes a list of integers and sorts them in ascending order, then returns it.
    Args:
        A (list): A list of integers
    Returns:
        list: A list of sorted integers
    """
    for i in range(1, len(A)):  # range: the original's xrange is Python 2 only
        x = A[i]
        j = i - 1
        while j >= 0 and A[j] > x:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x
    return A
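
# Editor's sketch (hedged, not part of the original file): a quick sanity
# check of insertion_sort on a small hypothetical input.
if __name__ == "__main__":
    assert insertion_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]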
| 14.875 | 77 | 0.596639 | 88 | 476 | 3.215909 | 0.511364 | 0.028269 | 0.074205 | 0.106007 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019499 | 0.245798 | 476 | 31 | 78 | 15.354839 | 0.768802 | 0.655462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1ed6fb3058696a7fab8ff068f87872a50fb687c5 | 2,001 | py | Python | vespa/simulation/dialog_experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | null | null | null | vespa/simulation/dialog_experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | 4 | 2021-04-17T13:58:31.000Z | 2022-01-20T14:19:57.000Z | vespa/simulation/dialog_experiment_list.py | vespa-mrs/vespa | 6d3e84a206ec427ac1304e70c7fadf817432956b | [
"BSD-3-Clause"
] | 3 | 2021-06-05T16:34:57.000Z | 2022-01-19T16:13:22.000Z | # Python modules
# 3rd party modules
import wx
# Our modules
import vespa.simulation.auto_gui.experiment_list as gui_experiment_list
import vespa.common.wx_gravy.common_dialogs as common_dialogs
import vespa.common.wx_gravy.util as wx_util
#------------------------------------------------------------------------------
# Note. GUI Architecture/Style
#
# Many of the GUI components in Vespa are designed using the WxGlade
# application to speed up development times. The GUI components are designed
# interactively and users can preview the resultant window/panel/dialog, but
# while event functions can be specified, only stub functions with those
# names are created. The WxGlade files (with *.wxg extensions) are stored in
# the 'wxglade' subdirectory. The output of their code generation is stored
# in the 'auto_gui' subdirectory.
#
# To use these GUI classes, each one is inherited into a unique 'vespa'
# class, where program specific initialization and other functionality are
# written. Also, the original stub functions for widget event handlers are
# overloaded to provide program specific event handling.
#------------------------------------------------------------------------------
class DialogExperimentList(gui_experiment_list.MyDialog):

    def __init__(self, parent, metabolite):
        if not parent:
            parent = wx.GetApp().GetTopWindow()
        gui_experiment_list.MyDialog.__init__(self, parent)
        self.metabolite = metabolite
        self.SetTitle("Experiments Using %s" % metabolite.name)
        names = [name for name in metabolite.experiment_names]
        self.ListExperiments.SetItems(names)
        self.Layout()
        self.Center()
        self.ListExperiments.SetFocus()

    def on_copy(self, event):
        s = "Simulation %s:\n" % self.GetTitle()
        s += "\n".join([name for name in self.metabolite.experiment_names])
        wx_util.copy_to_clipboard(s)

    def on_close(self, event):
        self.Close()
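
# Editor's sketch (hypothetical usage, not part of the original file):
# `a_metabolite` is assumed to expose .name and .experiment_names, as the
# dialog above requires.
#
#     dialog = DialogExperimentList(parent_frame, a_metabolite)
#     dialog.ShowModal()
#     dialog.Destroy()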
| 32.274194 | 79 | 0.668666 | 245 | 2,001 | 5.342857 | 0.485714 | 0.039725 | 0.051948 | 0.02903 | 0.036669 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000612 | 0.183908 | 2,001 | 61 | 80 | 32.803279 | 0.80098 | 0.487256 | 0 | 0 | 0 | 0 | 0.037849 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.181818 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1ed76eae61cc1ae67ed7991861d56ea74fb9b976 | 696 | py | Python | Practice/Problem Solving/CutTheSticks.py | avantikasharma/HackerRank-Solutions | a980859ac352688853fcbcf3c7ec6d95685f99ea | [
"MIT"
] | 1 | 2018-07-08T15:44:15.000Z | 2018-07-08T15:44:15.000Z | Practice/Problem Solving/CutTheSticks.py | avantikasharma/HackerRank-Solutions | a980859ac352688853fcbcf3c7ec6d95685f99ea | [
"MIT"
] | null | null | null | Practice/Problem Solving/CutTheSticks.py | avantikasharma/HackerRank-Solutions | a980859ac352688853fcbcf3c7ec6d95685f99ea | [
"MIT"
] | 2 | 2018-08-10T06:49:34.000Z | 2020-10-01T04:50:59.000Z | #!/bin/python3
import math
import os
import random
import re
import sys
# Complete the cutTheSticks function below.
def cutTheSticks(arr):
    l = len(arr)
    result = []
    referer = list(set(arr))
    # print('array is', arr)
    refer = sorted(referer, reverse=True)
    # print('reference is', refer)
    lr = len(refer)
    for i in range(lr):
        value = refer[i]
        result.append(arr.count(value))
        # print(result)
    for i in range(1, lr):
        result[i] = result[i] + result[i - 1]
    res = result[::-1]
    for i in res:
        print(i)

if __name__ == '__main__':
    n = int(input())
    arr = list(map(int, input().rstrip().split()))
    cutTheSticks(arr)
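
# Editor's note (worked example, not part of the original file): for the
# sticks [5, 4, 4, 2, 2, 8], each round cuts every remaining stick by the
# current minimum, so the counts printed are 6, 4, 2, 1.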
| 19.333333 | 50 | 0.586207 | 96 | 696 | 4.166667 | 0.5 | 0.03 | 0.045 | 0.055 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007828 | 0.265805 | 696 | 35 | 51 | 19.885714 | 0.774951 | 0.186782 | 0 | 0 | 0 | 0 | 0.01426 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.217391 | 0 | 0.26087 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1eda3ffb7cd59c6c60967f92603c29ba38922e8a | 5,054 | py | Python | ODS - Staging - Illuminate/loadTables.py | achievementfirst/af-edfi-shared | 996a201a23b56d498eee75251bb2455201c17308 | [
"Apache-2.0"
] | 2 | 2020-10-06T22:06:52.000Z | 2020-12-24T17:27:48.000Z | ODS - Staging - Illuminate/loadTables.py | achievementfirst/af-edfi-shared | 996a201a23b56d498eee75251bb2455201c17308 | [
"Apache-2.0"
] | null | null | null | ODS - Staging - Illuminate/loadTables.py | achievementfirst/af-edfi-shared | 996a201a23b56d498eee75251bb2455201c17308 | [
"Apache-2.0"
] | 1 | 2020-10-07T18:43:17.000Z | 2020-10-07T18:43:17.000Z | """
Name: loadTables.py
Description: Script extracts data (specific to Illuminate assessments) from the edfi ODS and loads
it into the AWS Aurora db
Note:
The following code can be run locally or as a lambda function by converting the main function
to a lambda handler.
"""
import argparse
import configparser
import os
import pymssql
import psycopg2
from psycopg2 import extras
import re
from parseRows import *
config = configparser.ConfigParser()
def main():
    parser = argparse.ArgumentParser(
        description='Copy data from Source tables to Staging.')
    parser.add_argument('configuration file', type=str,
                        help='configuration file')
    parser.add_argument('--test', '-t', action='store_true', default=False,
                        help='print queries only.')
    args = parser.parse_args()
    config.read(getattr(args, 'configuration file'))
    tables = config['tables']
    for table in tables:
        rows = extract_table(
            table,
            print_only=args.test,
            local=True)
        parsed_rows = parse_rows(rows, table)
        load_to_staging(
            parsed_rows,
            table,
            print_only=args.test,
            local=True)


def extract_table(table, print_only=False, local=True):
    """ Loads queried rows from the ODS
        returns matrix of query results """
    cred = config['source']
    print('Connecting to source db...')
    conn = pymssql.connect(
        server=cred['host'],
        database=cred['dbname'],
        user=cred['username'],
        password=cred['password'],
        port=cred['port'])
    print("Extract table: " + table)
    with open(os.path.join(config['sql']['source'], f'{table}.sql'), 'r',
              encoding='utf') as f:
        query = f.read()
    if print_only:
        print(query)
        return []
    with conn.cursor() as cursor:
        cursor.execute(query)
        rows = cursor.fetchall()
    conn.close()
    print("Done query table: " + table)
    return rows


def parse_rows(rows, table):
    """ parses rows from the ODS for insertion in AFStaged, based on the input table
        returns parsed matrix of query results """
    if table == 'Assessment':
        parsed_rows = parse_assessment(rows)
    elif table == 'AssessmentAcademicSubject':
        parsed_rows = parse_assessmentacademicsubject(rows)
    elif table == 'AssessmentAssessedGradeLevel':
        parsed_rows = parse_assessmentassessedgradelevel(rows)
    elif table == 'AssessmentItem':
        parsed_rows = parse_assessmentitem(rows)
    elif table == 'AssessmentItemCategoryDescriptor':
        parsed_rows = parse_assessmentitemcategorydescriptor(rows)
    elif table == 'AssessmentItemLearningStandard':
        parsed_rows = parse_assessmentitemlearningstandard(rows)
    elif table == 'AssessmentItemResultDescriptor':
        parsed_rows = assessmentitemresultdescriptor(rows)
    elif table == 'AssessmentPerformanceLevel':
        parsed_rows = parse_assessmentperformancelevel(rows)
    elif table == 'AssessmentReportingMethodType':
        parsed_rows = parse_assessmentreportingmethodtype(rows)
    elif table == 'AssessmentScore':
        parsed_rows = parse_assessmentscore(rows)
    elif table == 'Descriptor':
        parsed_rows = parse_descriptor(rows)
    elif table == 'LearningStandard':
        parsed_rows = parse_learningstandard(rows)
    elif table == 'StudentAssessment':
        parsed_rows = parse_studentassessment(rows)
    elif table == 'StudentAssessmentItem':
        parsed_rows = parse_studentassessmentitem(rows)
    elif table == 'StudentAssessmentPerformanceLevel':
        parsed_rows = parse_studentassessmentperformancelevel(rows)
    elif table == 'StudentAssessmentScoreResult':
        parsed_rows = parse_studentassessmentscoreresult(rows)
    else:
        print('Invalid table name')
        raise Exception('PythonCatchException')
    return parsed_rows


def load_to_staging(parsed_rows, table, print_only=False, local=True):
    """ loads parsed rows from ODS into Aurora db """
    with open(os.path.join(config["sql"]["staged"], f'{table}.sql'), 'r',
              encoding='utf') as f:
        query = f.read()
    if print_only:
        print(query)
        return
    cred = config['destination']
    conn = psycopg2.connect(
        dbname=cred['dbname'],
        user=cred['username'],
        password=cred['password'], host=cred['host'],
        port=cred['port'])
    schema = config['sql']['schema']
    print(f'Inserting data into {schema}.{table}')
    with conn.cursor() as cursor:
        delete_command = f'DELETE FROM {schema}."{table}"'
        cursor.execute(delete_command)
        try:
            extras.execute_batch(cursor, query, parsed_rows)
        except:
            print(cursor.query)
            raise
    print(f'Done inserting into {schema}.{table}')
    conn.commit()
    conn.close()


if __name__ == "__main__":
    main()
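
# Editor's note (hedged sketch, not part of the original file). Two asides:
# (1) the if/elif chain in parse_rows could be collapsed into a dispatch table,
#     e.g. PARSERS = {'Assessment': parse_assessment, ...};
#          parsed_rows = PARSERS[table](rows)
# (2) per the argparse setup in main(), a typical invocation looks like
#         python loadTables.py config.ini --test
#     where config.ini is a hypothetical configuration file name.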
| 31.197531 | 98 | 0.641472 | 527 | 5,054 | 6.028463 | 0.296015 | 0.069248 | 0.075543 | 0.01385 | 0.160529 | 0.139125 | 0.139125 | 0.090652 | 0.038401 | 0.038401 | 0 | 0.000797 | 0.255046 | 5,054 | 161 | 99 | 31.391304 | 0.843028 | 0.096953 | 0 | 0.188034 | 0 | 0 | 0.186118 | 0.062334 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034188 | false | 0.017094 | 0.068376 | 0 | 0.136752 | 0.136752 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1eda6198db3d13f656d2b6dfb7c8f3b811602a4d | 6,788 | py | Python | tst/schedulers/bayesopt/gpautograd/test_gp.py | awslabs/syne-tune | 1dd8e157477b86db01047a9a7821780ea04389bc | [
"ECL-2.0",
"Apache-2.0"
] | 97 | 2021-11-18T17:14:30.000Z | 2022-03-29T00:33:12.000Z | tst/schedulers/bayesopt/gpautograd/test_gp.py | awslabs/syne-tune | 1dd8e157477b86db01047a9a7821780ea04389bc | [
"ECL-2.0",
"Apache-2.0"
] | 54 | 2021-11-18T17:14:12.000Z | 2022-03-22T08:11:48.000Z | tst/schedulers/bayesopt/gpautograd/test_gp.py | awslabs/syne-tune | 1dd8e157477b86db01047a9a7821780ea04389bc | [
"ECL-2.0",
"Apache-2.0"
] | 9 | 2021-11-29T11:47:32.000Z | 2022-02-24T15:28:11.000Z | # Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
import numpy
import autograd.numpy as anp
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.mean import (
    ScalarMeanFunction,
)
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.kernel import Matern52
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.likelihood import (
    MarginalLikelihood,
)
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gp_regression import (
    GaussianProcessRegression,
)
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.constants import (
    NOISE_VARIANCE_LOWER_BOUND,
    INVERSE_BANDWIDTHS_LOWER_BOUND,
)
from syne_tune.optimizer.schedulers.searchers.bayesopt.gpautograd.gluon_blocks_helpers import (
    LogarithmScalarEncoding,
    PositiveScalarEncoding,
)

def test_likelihood_encoding():
    mean = ScalarMeanFunction()
    kernel = Matern52(dimension=1)
    likelihood = MarginalLikelihood(mean=mean, kernel=kernel)
    assert isinstance(likelihood.encoding, LogarithmScalarEncoding)
    likelihood = MarginalLikelihood(mean=mean, kernel=kernel, encoding_type="positive")
    assert isinstance(likelihood.encoding, PositiveScalarEncoding)


def test_gp_regression_no_noise():
    def f(x):
        return anp.sin(x) / x

    x_train = anp.arange(-5, 5, 0.2)  # [-5,-4.8,-4.6,...,4.8]
    x_test = anp.arange(
        -4.9, 5, 0.2
    )  # [-4.9, -4.7, -4.5,...,4.9], note that train and test points do not overlap
    y_train = f(x_train)
    y_test = f(x_test)

    # to np.ndarray
    y_train_np_ndarray = anp.array(y_train)
    x_train_np_ndarray = anp.array(x_train)
    x_test_np_ndarray = anp.array(x_test)
    model = GaussianProcessRegression(kernel=Matern52(dimension=1))
    model.fit(x_train_np_ndarray, y_train_np_ndarray)

    # Check that the value of the residual noise variance learned by empirical Bayes is in the same order
    # as the smallest allowed value (since there is no noise)
    noise_variance = model.likelihood.get_noise_variance()
    numpy.testing.assert_almost_equal(noise_variance, NOISE_VARIANCE_LOWER_BOUND)
    mu_train, var_train = model.predict(x_train_np_ndarray)[0]
    mu_test, var_test = model.predict(x_test_np_ndarray)[0]
    numpy.testing.assert_almost_equal(mu_train, y_train, decimal=4)
    numpy.testing.assert_almost_equal(var_train, [0.0] * len(var_train), decimal=4)
    # Fewer decimals imposed for the test points
    numpy.testing.assert_almost_equal(mu_test, y_test, decimal=3)

    # If we wish to plot:
    # import matplotlib.pyplot as plt
    # plt.plot(x_train, y_train, "r")
    # plt.errorbar(x=x_train,
    #              y=mu_train,
    #              yerr=var_train)
    # plt.plot(x_test, y_test, "b")
    # plt.errorbar(x=x_test,
    #              y=mu_test,
    #              yerr=var_test)
    # plt.show()


def test_gp_regression_with_noise():
    def f(x):
        return anp.sin(x) / x

    anp.random.seed(7)
    x_train = anp.arange(-5, 5, 0.2)  # [-5, -4.8, -4.6,..., 4.8]
    x_test = anp.arange(
        -4.9, 5, 0.2
    )  # [-4.9, -4.7, -4.5,..., 4.9], note that train and test points do not overlap
    y_train = f(x_train)
    y_test = f(x_test)
    std_noise = 0.01
    noise_train = anp.random.normal(0.0, std_noise, size=y_train.shape)

    # to anp.ndarray
    y_train_np_ndarray = anp.array(y_train)
    noise_train_np_ndarray = anp.array(noise_train)
    x_train_np_ndarray = anp.array(x_train)
    x_test_np_ndarray = anp.array(x_test)
    model = GaussianProcessRegression(kernel=Matern52(dimension=1))
    model.fit(x_train_np_ndarray, y_train_np_ndarray + noise_train_np_ndarray)

    # Check that the value of the residual noise variance learned by empirical Bayes is in the same order as std_noise^2
    noise_variance = model.likelihood.get_noise_variance()
    numpy.testing.assert_almost_equal(noise_variance, std_noise**2, decimal=4)
    mu_train, _ = model.predict(x_train_np_ndarray)[0]
    mu_test, _ = model.predict(x_test_np_ndarray)[0]
    numpy.testing.assert_almost_equal(mu_train, y_train, decimal=2)
    numpy.testing.assert_almost_equal(mu_test, y_test, decimal=2)


def test_gp_regression_2d_with_ard():
    def f(x):
        # Only dependent on the first column of x
        return anp.sin(x[:, 0]) / x[:, 0]

    anp.random.seed(7)
    dimension = 3

    # 30 train and test points in R^3
    x_train = anp.random.uniform(-5, 5, size=(30, dimension))
    x_test = anp.random.uniform(-5, 5, size=(30, dimension))
    y_train = f(x_train)
    y_test = f(x_test)

    # to np.ndarray
    y_train_np_ndarray = anp.array(y_train)
    x_train_np_ndarray = anp.array(x_train)
    x_test_np_ndarray = anp.array(x_test)
    model = GaussianProcessRegression(kernel=Matern52(dimension=dimension, ARD=True))
    model.fit(x_train_np_ndarray, y_train_np_ndarray)

    # Check that the value of the residual noise variance learned by empirical Bayes is in the same order as the smallest allowed value (since there is no noise)
    noise_variance = model.likelihood.get_noise_variance()
    numpy.testing.assert_almost_equal(noise_variance, NOISE_VARIANCE_LOWER_BOUND)

    # Check that the bandwidths learned by empirical Bayes reflect the fact that only the first column is useful
    # In particular, for the useless dimensions indexed by {1,2}, the inverse bandwidths should be close to INVERSE_BANDWIDTHS_LOWER_BOUND
    # (or conversely, bandwidths should be close to their highest allowed values)
    sqd = model.likelihood.kernel.squared_distance
    inverse_bandwidths = sqd.encoding.get(sqd.inverse_bandwidths_internal.data())
    assert (
        inverse_bandwidths[0] > inverse_bandwidths[1]
        and inverse_bandwidths[0] > inverse_bandwidths[2]
    )
    numpy.testing.assert_almost_equal(
        inverse_bandwidths[1], INVERSE_BANDWIDTHS_LOWER_BOUND
    )
    numpy.testing.assert_almost_equal(
        inverse_bandwidths[2], INVERSE_BANDWIDTHS_LOWER_BOUND
    )
    mu_train, _ = model.predict(x_train_np_ndarray)[0]
    mu_test, _ = model.predict(x_test_np_ndarray)[0]
    numpy.testing.assert_almost_equal(mu_train, y_train, decimal=2)
    # Fewer decimals imposed for the test points
    numpy.testing.assert_almost_equal(mu_test, y_test, decimal=1)
| 38.568182 | 161 | 0.725692 | 1,009 | 6,788 | 4.653122 | 0.199207 | 0.047923 | 0.050692 | 0.061342 | 0.612567 | 0.573163 | 0.552503 | 0.532907 | 0.444728 | 0.426837 | 0 | 0.02096 | 0.177666 | 6,788 | 175 | 162 | 38.788571 | 0.820136 | 0.289923 | 0 | 0.432692 | 0 | 0 | 0.001674 | 0 | 0 | 0 | 0 | 0 | 0.144231 | 1 | 0.067308 | false | 0 | 0.076923 | 0.028846 | 0.173077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1edfdd935af453f5e2a4d99084ce76fa471248d6 | 975 | py | Python | demystifying/feature_extraction/random_feature_extractor.py | delemottelab/demystifying | e8527b52d5fbe0570cd391921ecda5aefceb797a | [
"MIT"
] | 16 | 2020-01-04T14:46:03.000Z | 2021-07-10T05:54:05.000Z | demystifying/feature_extraction/random_feature_extractor.py | delemottelab/demystifying | e8527b52d5fbe0570cd391921ecda5aefceb797a | [
"MIT"
] | 11 | 2020-01-10T16:18:17.000Z | 2022-03-20T09:53:33.000Z | demystifying/feature_extraction/random_feature_extractor.py | delemottelab/demystifying | e8527b52d5fbe0570cd391921ecda5aefceb797a | [
"MIT"
] | 3 | 2020-03-16T04:35:01.000Z | 2022-02-10T12:39:01.000Z | from __future__ import absolute_import, division, print_function
import logging
import sys
logging.basicConfig(
    stream=sys.stdout,
    format='%(asctime)s %(name)s-%(levelname)s: %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S')
import numpy as np
from .feature_extractor import FeatureExtractor
logger = logging.getLogger("KL divergence")
class RandomFeatureExtractor(FeatureExtractor):
    """Class which randomly assigns importance to features"""

    def __init__(self,
                 name="RAND",
                 **kwargs):
        FeatureExtractor.__init__(self,
                                  name=name,
                                  supervised=True,
                                  **kwargs)

    def train(self, data, labels):
        pass

    def get_feature_importance(self, model, data, labels):
        """
        returns random values per feature between 0 and 1
        """
        return np.random.random((data.shape[1], labels.shape[1]))
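
# Editor's sketch (hedged, not part of the original file): the importance
# matrix returned above has shape (n_features, n_labels), e.g. for the
# hypothetical inputs
#
#     data = np.zeros((100, 8))    # 100 samples, 8 features
#     labels = np.zeros((100, 3))  # one-hot over 3 states
#
# get_feature_importance(None, data, labels) returns a random 8x3 matrix.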
| 27.857143 | 65 | 0.594872 | 104 | 975 | 5.413462 | 0.615385 | 0.028419 | 0.042629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005839 | 0.297436 | 975 | 34 | 66 | 28.676471 | 0.816058 | 0.10359 | 0 | 0 | 0 | 0 | 0.095858 | 0.027219 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0.045455 | 0.272727 | 0 | 0.5 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1ee382a0125a89f5fdc7cff849261c133ebb7a31 | 692 | py | Python | config.py | bruce30262/ghidra-dark | ee955a8afa3c4d28bdefe3db7970b762254b3411 | [
"MIT"
] | null | null | null | config.py | bruce30262/ghidra-dark | ee955a8afa3c4d28bdefe3db7970b762254b3411 | [
"MIT"
] | null | null | null | config.py | bruce30262/ghidra-dark | ee955a8afa3c4d28bdefe3db7970b762254b3411 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import yaml
""" Get flatlaf config """
flatlaf_version, flatlaf_style = None, None
font_decomp, font_ld = {}, {}
try:
    with open("config.yml", "r") as f:
        config = yaml.load(f, Loader=yaml.FullLoader)
        flatlaf_version = config['flatlaf']['version']
        flatlaf_style = config['flatlaf']['style']
        font_decomp = config['font']['Decompiler']
        font_ld = config['font']['Listing Display']
except FileNotFoundError:
    print("Cannot find \"config.yml\" in the directory. Please make sure such a file exists in the same folder (check config_example.yml).")
    exit(1)
except:
    import traceback
    traceback.print_exc()
    exit(1)
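
# Editor's sketch (hedged, not part of the original file): a config.yml shape
# matching the keys read above; the concrete values are illustrative assumptions.
#
#     flatlaf:
#       version: "1.0"
#       style: "dark"
#     font:
#       Decompiler: { name: "Monospaced", size: 12 }
#       Listing Display: { name: "Monospaced", size: 12 }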
| 30.086957 | 140 | 0.66185 | 90 | 692 | 4.977778 | 0.566667 | 0.087054 | 0.089286 | 0.120536 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005405 | 0.197977 | 692 | 22 | 141 | 31.454545 | 0.801802 | 0.030347 | 0 | 0.117647 | 0 | 0.058824 | 0.286159 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.117647 | 0 | 0.117647 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1ee51c35bc679d160d7d39ce6956fbb88b6ed8ad | 1,531 | py | Python | 2015/solutions/20.py | adtok/advent-of-code | df1f61759bd8f3bfd7995b7e2a124d7f6e97ba01 | [
"MIT"
] | null | null | null | 2015/solutions/20.py | adtok/advent-of-code | df1f61759bd8f3bfd7995b7e2a124d7f6e97ba01 | [
"MIT"
] | null | null | null | 2015/solutions/20.py | adtok/advent-of-code | df1f61759bd8f3bfd7995b7e2a124d7f6e97ba01 | [
"MIT"
] | null | null | null | """Advent of Code 2015: Day 20"""
from itertools import count
from typing import Callable, List
def get_factors(n: int) -> List[int]:
    divisors = []
    sqrt = int(n ** 0.5) + 1
    for i in range(1, sqrt):
        if n % i == 0:
            divisors.append(i)
            if i != n // i:  # fix: avoid double-counting the root of perfect squares
                divisors.append(n // i)
    return divisors
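
# Editor's note (worked example, not part of the original file): get_factors
# pairs each divisor i <= sqrt(n) with its cofactor n // i, so
# get_factors(12) == [1, 12, 2, 6, 3, 4] (unsorted; only sum() is used below).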

def calculate_presents(house: int, presents_per_elf: int, upper_limit: int = 0):
    factors = get_factors(house)
    if upper_limit:
        factors = list(filter(lambda f: f <= upper_limit, factors))
    # print(factors)
    return presents_per_elf * sum(factors)


def find_lowest_house(value: int, ppe: int, upper_limit: int = 0):
    step = 6
    for i in count(step, step):
        if calculate_presents(i, ppe, upper_limit=upper_limit) >= value:
            lowest = i
            for j in count(1):
                if calculate_presents(i - j, ppe, upper_limit=upper_limit) >= value:
                    lowest = i - j
                else:
                    break
            return lowest
    return -1


def part_one(input_value: int) -> int:
    result = find_lowest_house(input_value, 10)
    return result


def part_two(input_value: int) -> int:
    result = find_lowest_house(input_value, 11, upper_limit=50)
    return result


def solve(func: Callable[[int], int]):
    input_value = 34_000_000
    result = func(input_value)
    print(f"The solution for {func.__name__!r} is {result}")


def main():
    solve(part_one)
    solve(part_two)


if __name__ == "__main__":
    main()
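
# Editor's note (not part of the original file): this solves Advent of Code
# 2015 day 20, where the presents a house receives are proportional to the sum
# of its divisors (hence get_factors). find_lowest_house scans multiples of 6
# first, then walks backwards from the first hit to find a lower qualifying house.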
| 24.301587 | 84 | 0.606793 | 214 | 1,531 | 4.116822 | 0.32243 | 0.102157 | 0.051078 | 0.036322 | 0.224745 | 0.186152 | 0.186152 | 0.186152 | 0.106697 | 0.106697 | 0 | 0.027347 | 0.283475 | 1,531 | 62 | 85 | 24.693548 | 0.775752 | 0.028086 | 0 | 0.047619 | 0 | 0 | 0.036437 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.047619 | 0 | 0.357143 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
1ee540097c32429348fbeb504278fb986bd3a9e7 | 1,864 | py | Python | python/examples/bert/test_multi_fetch_client.py | hysunflower/Serving | 50d0c2900f3385b049f76b91e38cc69d8e8a102d | [
"Apache-2.0"
] | 789 | 2019-04-05T09:20:46.000Z | 2022-03-31T13:43:54.000Z | python/examples/bert/test_multi_fetch_client.py | hysunflower/Serving | 50d0c2900f3385b049f76b91e38cc69d8e8a102d | [
"Apache-2.0"
] | 1,195 | 2019-04-08T10:05:28.000Z | 2022-03-31T03:43:42.000Z | python/examples/bert/test_multi_fetch_client.py | hysunflower/Serving | 50d0c2900f3385b049f76b91e38cc69d8e8a102d | [
"Apache-2.0"
] | 229 | 2019-04-05T09:20:57.000Z | 2022-03-30T06:21:22.000Z | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddle_serving_client import Client
from paddle_serving_app.reader import ChineseBertReader
import sys
import numpy as np
client = Client()
client.load_client_config("./bert_seq32_client/serving_client_conf.prototxt")
client.connect(["127.0.0.1:9292"])
reader = ChineseBertReader({"max_seq_len": 32})
fetch = ["sequence_10", "sequence_12", "pooled_output"]
expected_shape = {
    "sequence_10": (4, 32, 768),
    "sequence_12": (4, 32, 768),
    "pooled_output": (4, 768)
}
batch_size = 4
feed_batch = {}
batch_len = 0
for line in sys.stdin:
    feed = reader.process(line)
    if batch_len == 0:
        for key in feed.keys():
            val_len = len(feed[key])
            feed_batch[key] = np.array(feed[key]).reshape((1, val_len, 1))
        batch_len += 1  # fix: the original never incremented, so a batch was never built
        continue
    if batch_len < batch_size:  # fix: len(feed_batch) counted keys, not samples
        for key in feed.keys():
            # fix: keep the concatenated batch (the original discarded np.concatenate's result)
            feed_batch[key] = np.concatenate([
                feed_batch[key], np.array(feed[key]).reshape((1, val_len, 1))
            ])
        batch_len += 1
    else:
        fetch_map = client.predict(feed=feed_batch, fetch=fetch)
        feed_batch = {}  # fix: reset to a dict (the original reset to a list)
        batch_len = 0
        for var_name in fetch:
            if fetch_map[var_name].shape != expected_shape[var_name]:
                print("fetch var {} shape error.".format(var_name))
                sys.exit(1)
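
# Editor's note (hedged, not part of the original file): the script reads one
# raw text line per sample from stdin, e.g.
#     python test_multi_fetch_client.py < sentences.txt
# where sentences.txt is a hypothetical input file.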
| 34.518519 | 77 | 0.67221 | 269 | 1,864 | 4.513011 | 0.464684 | 0.049423 | 0.021417 | 0.026359 | 0.093904 | 0.067545 | 0.067545 | 0.067545 | 0.067545 | 0.067545 | 0 | 0.036961 | 0.216202 | 1,864 | 53 | 78 | 35.169811 | 0.793977 | 0.312768 | 0 | 0.055556 | 0 | 0 | 0.132597 | 0.037885 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 0.111111 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
948fa3226659c8d5d08bc6395c25891bde4c906c | 23,434 | py | Python | gs_quant/markets/core.py | alexanu/gs-quant | fbb8d88d570aee545ed3a8601d9052c281ecca19 | [
"Apache-2.0"
] | 1 | 2020-05-18T02:09:39.000Z | 2020-05-18T02:09:39.000Z | gs_quant/markets/core.py | atefar2/gs-quant | d31ae3204d5421861897bac49383bc213d5497a2 | [
"Apache-2.0"
] | null | null | null | gs_quant/markets/core.py | atefar2/gs-quant | d31ae3204d5421861897bac49383bc213d5497a2 | [
"Apache-2.0"
] | null | null | null | """
Copyright 2019 Goldman Sachs.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
"""
import copy
import datetime as dt
import logging
import weakref
from abc import ABCMeta
from concurrent.futures import Future, ThreadPoolExecutor
from threading import Lock
from typing import Iterable, Optional, Mapping, Tuple, Union
from gs_quant.api.risk import RiskApi
from gs_quant.base import Priceable, PricingKey, Scenario
from gs_quant.context_base import ContextBaseWithDefault, nullcontext
from gs_quant.datetime.date import business_day_offset, is_business_day
from gs_quant.risk import DataFrameWithInfo, ErrorValue, FloatWithInfo, MarketDataScenario, \
    PricingDateAndMarketDataAsOf, \
    ResolvedInstrumentValues, RiskMeasure, RiskPosition, RiskRequest, \
    RiskRequestParameters, SeriesWithInfo
from gs_quant.risk.results import MultipleRiskMeasureFuture
from gs_quant.risk import CompositeScenario, StringWithInfo
from gs_quant.session import GsSession
from gs_quant.target.data import MarketDataCoordinate as __MarketDataCoordinate
_logger = logging.getLogger(__name__)

class PricingFuture(Future):

    def __init__(self, pricing_context):
        super().__init__()
        self.__pricing_context = pricing_context

    def result(self, timeout=None):
        """Return the result of the call that the future represents.

        :param timeout: The number of seconds to wait for the result if the future isn't done.
            If None, then there is no limit on the wait time.

        Returns:
            The result of the call that the future represents.

        Raises:
            CancelledError: If the future was cancelled.
            TimeoutError: If the future didn't finish executing before the given timeout.
            Exception: If the call raised then that exception will be raised.
        """
        if not self.done() and PricingContext.current == self.__pricing_context and self.__pricing_context.is_entered:
            raise RuntimeError('Cannot evaluate results under the same pricing context being used to produce them')

        return super().result(timeout=timeout)

class MarketDataCoordinate(__MarketDataCoordinate):

    def __str__(self):
        return "|".join(f or '' for f in (self.mkt_type, self.mkt_asset, self.mkt_class,
                                          '_'.join(self.mkt_point or ()), self.mkt_quoting_style))
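
# Editor's note (hedged example, not part of the original file): with
# hypothetical fields mkt_type='IR', mkt_asset='USD', mkt_class='SWAP',
# mkt_point=('2Y',) and no quoting style, str(coordinate) yields "IR|USD|SWAP|2Y|".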

class PricingCache(metaclass=ABCMeta):
    """
    Weakref cache for instrument calcs
    """

    __cache = weakref.WeakKeyDictionary()

    @classmethod
    def clear(cls):
        cls.__cache = weakref.WeakKeyDictionary()  # fix: the original rebound a local name, leaving the cache intact

    @classmethod
    def missing_pricing_keys(cls,
                             priceable: Priceable,
                             risk_measure: RiskMeasure,
                             pricing_key: Optional[PricingKey] = None) -> Tuple[PricingKey, ...]:
        pricing_key = pricing_key or PricingContext.current.pricing_key
        if priceable in cls.__cache and risk_measure in cls.__cache[priceable]:
            cached = cls.__cache[priceable][risk_measure]
            return tuple(k for k in pricing_key if k not in cached)
        else:
            return pricing_key

    @classmethod
    def get(cls,
            priceable: Priceable,
            risk_measure: RiskMeasure,
            pricing_key: Optional[PricingKey] = None,
            return_partial: bool = False) -> Optional[Union[DataFrameWithInfo, FloatWithInfo, SeriesWithInfo]]:
        if priceable not in cls.__cache or risk_measure not in cls.__cache[priceable]:
            return

        pricing_key = pricing_key or PricingContext.current.pricing_key
        cached = cls.__cache[priceable][risk_measure]

        if len(pricing_key.pricing_market_data_as_of) > 1:
            values = [cached[k] for k in pricing_key if k in cached]
            if values and (return_partial or len(values) == len(pricing_key.pricing_market_data_as_of)):
                return values[0].compose(values, pricing_key)
        else:
            return cached.get(pricing_key)

    @classmethod
    def put(cls,
            priceable: Priceable,
            risk_measure: RiskMeasure,
            result: Union[DataFrameWithInfo, FloatWithInfo, SeriesWithInfo],
            pricing_key: Optional[PricingKey] = None):
        pricing_key = pricing_key or PricingContext.current.pricing_key
        if isinstance(result, (DataFrameWithInfo, FloatWithInfo, SeriesWithInfo)):
            cls.__cache.setdefault(priceable, {}).setdefault(risk_measure, {}).update(
                {k: result.for_pricing_key(k) for k in pricing_key})

    @classmethod
    def drop(cls, priceable: Priceable):
        if priceable in cls.__cache:
            cls.__cache.pop(priceable)

class PricingContext(ContextBaseWithDefault):
    """
    A context for controlling pricing and market data behaviour
    """

    def __init__(self,
                 pricing_date: Optional[dt.date] = None,
                 market_data_as_of: Optional[Union[dt.date, dt.datetime]] = None,
                 market_data_location: Optional[str] = None,
                 is_async: bool = False,
                 is_batch: bool = False,
                 use_cache: bool = False,
                 visible_to_gs: bool = False,
                 csa_term: Optional[str] = None,
                 poll_for_batch_results: Optional[bool] = False,
                 batch_results_timeout: Optional[int] = None
                 ):
        """
        The methods on this class should not be called directly. Instead, use the methods on the instruments,
        as per the examples

        :param pricing_date: the date for pricing calculations. Default is today
        :param market_data_as_of: the date/datetime for sourcing market data (defaults to 1 business day before
            pricing_date)
        :param market_data_location: the location for sourcing market data ('NYC', 'LDN' or 'HKG'; defaults to 'LDN')
        :param is_async: if True, return (a future) immediately. If False, block (defaults to False)
        :param is_batch: use for calculations expected to run longer than 3 mins, to avoid timeouts.
            It can be used with is_async=True|False (defaults to False)
        :param use_cache: store results in the pricing cache (defaults to False)
        :param visible_to_gs: are the contents of risk requests visible to GS (defaults to False)
        :param csa_term: the csa under which the calculations are made. Default is local ccy ois index

        **Examples**

        To change the market data location of the default context:

        >>> from gs_quant.markets import PricingContext
        >>> import datetime as dt
        >>>
        >>> PricingContext.current = PricingContext(market_data_location='LDN')

        For a blocking, synchronous request:

        >>> from gs_quant.instrument import IRCap
        >>> cap = IRCap('5y', 'GBP')
        >>>
        >>> with PricingContext():
        >>>     price_f = cap.dollar_price()
        >>>
        >>> price = price_f.result()

        For an asynchronous request:

        >>> with PricingContext(is_async=True):
        >>>     price_f = cap.dollar_price()
        >>>
        >>> while not price_f.done:
        >>>     ...
        """
        super().__init__()

        if pricing_date is None:
            pricing_date = dt.date.today()
            while not is_business_day(pricing_date):
                pricing_date -= dt.timedelta(days=1)

        self.__pricing_date = pricing_date
        self.__csa_term = csa_term
        self.__market_data_as_of = market_data_as_of
        # Do not use self.__class__.current - it will cause a cycle
        self.__market_data_location = market_data_location or (
            self.__class__.path[0].market_data_location if self.__class__.path else 'LDN')
        self.__is_async = is_async
        self.__is_batch = is_batch
        self.__poll_for_batch_results = poll_for_batch_results
        self.__batch_results_timeout = batch_results_timeout
        self.__risk_measures_in_scenario_by_provider_and_position = {}
        self.__futures = {}
        self.__use_cache = use_cache
        self.__visible_to_gs = visible_to_gs
        self.__positions_by_provider = {}
        self.__lock = Lock()
    def _on_exit(self, exc_type, exc_val, exc_tb):
        if exc_val:
            raise exc_val
        else:
            self._calc()
    def _calc(self):
        positions_by_provider = self.__active_context.__positions_by_provider
        session = GsSession.current
        batch_result = Future() if self.__is_batch else None
        batch_providers = set()
        batch_lock = Lock() if self.__is_batch else nullcontext()

        def handle_results(requests_to_results: Mapping[RiskRequest, dict]):
            for request_, result in requests_to_results.items():
                try:
                    self._handle_results(result, request_)
                except Exception as e:
                    try:
                        self._handle_results(e, request_)
                    except Exception as he:
                        _logger.error('Error setting error result: ' + str(he))

        def run_requests(requests_: Iterable[RiskRequest], provider_: RiskApi):
            try:
                with session:
                    results = provider_.calc_multi(requests_)
                    if self.__is_batch:
                        get_batch_results(dict(zip(results, requests_)), provider_)
                    else:
                        handle_results(dict(zip(requests_, results)))
            except Exception as e:
                handle_results({r: e for r in requests_})

        def get_batch_results(ids_to_requests: Mapping[str, RiskRequest], provider_: RiskApi):
            def get_results():
                try:
                    with session:
                        return provider_.get_results(ids_to_requests,
                                                     self.__poll_for_batch_results,
                                                     timeout=self.__batch_results_timeout)
                except Exception as be:
                    return {r: be for r in ids_to_requests.values()}

            def set_results(results: Mapping[RiskRequest, Union[Exception, dict]]):
                handle_results(results)

                with batch_lock:
                    # Check if we're the last provider and signal done if so
                    batch_providers.remove(provider_)
                    if not batch_providers:
                        batch_result.set_result(True)

            if self.__is_async:
                batch_result_pool = ThreadPoolExecutor(1)
                batch_result_pool.submit(get_results).add_done_callback(lambda f: set_results(f.result()))
                batch_result_pool.shutdown(wait=False)
            else:
                set_results(get_results())

        with self.__lock:
            # Group requests by risk_measures, positions, scenario - so we can create unique RiskRequest objects
            # Determine how many we will need
            while self.__risk_measures_in_scenario_by_provider_and_position:
                provider, risk_measures_by_scenario =\
                    self.__risk_measures_in_scenario_by_provider_and_position.popitem()
                for position, scenario_to_risk_measures in risk_measures_by_scenario.items():
                    for scenario, risk_measures in scenario_to_risk_measures.items():
                        risk_measures = tuple(sorted(risk_measures, key=lambda m: m.name or m.measure_type.value))
                        positions_by_provider.setdefault(provider, {}).setdefault((scenario, risk_measures), [])\
                            .append(position)

        if self.__positions_by_provider:
            num_providers = len(self.__positions_by_provider)
            request_pool = ThreadPoolExecutor(num_providers) if num_providers > 1 or self.__is_async else None
            batch_providers = set(self.__positions_by_provider.keys())

            while self.__positions_by_provider:
                provider, positions_by_scenario_and_risk_measures = self.__positions_by_provider.popitem()
                requests = [
                    RiskRequest(
                        tuple(positions),
                        risk_measures,
                        parameters=self.__parameters,
                        wait_for_results=not self.__is_batch,
                        pricing_location=self.__market_data_location,
                        scenario=scenario,
                        pricing_and_market_data_as_of=self._pricing_market_data_as_of,
                        request_visible_to_gs=self.__visible_to_gs
                    )
                    for (scenario, risk_measures), positions in positions_by_scenario_and_risk_measures.items()
                ]

                if request_pool:
                    request_pool.submit(run_requests, requests, provider)
                else:
                    run_requests(requests, provider)

            if request_pool:
                request_pool.shutdown(wait=not self.__is_async)

            if batch_result and not self.__is_async:
                batch_result.result()
    def _handle_results(self, results: Union[Exception, dict], request: RiskRequest):
        error = None
        if isinstance(results, Exception):
            error = str(results)
            results = {}
            _logger.error('Error while handling results: ' + error)

        with self.__lock:
            for risk_measure in request.measures:
                # Get each risk measure from the request and the corresponding positions --> futures dict
                positions_for_measure = self.__futures[(request.scenario, risk_measure)]

                # Get the results for this measure
                position_results = results.pop(risk_measure, {})

                for position in request.positions:
                    # Set the result for this position to the returned value or an error if missing
                    result = position_results.get(position, ErrorValue(self.pricing_key, error=error))

                    if self.__use_cache and not isinstance(result, ErrorValue):
                        # Populate the cache
                        PricingCache.put(position.instrument, risk_measure, result)

                        # Retrieve from the cache - this is used by HistoricalPricingContext. We ensure the cache has
                        # all values (in case some had already been computed) then populate the result as the final step
                        result = PricingCache.get(position.instrument, risk_measure)

                    # Set the result for the future
                    positions_for_measure.pop(position).set_result(result)

                if not positions_for_measure:
                    self.__futures.pop((request.scenario, risk_measure))
    @property
    def __active_context(self):
        return next((c for c in reversed(PricingContext.path) if c.is_entered), self)

    @property
    def __parameters(self) -> RiskRequestParameters:
        return RiskRequestParameters(csa_term=self.__csa_term, raw_results=True)

    @property
    def __scenario(self) -> Optional[MarketDataScenario]:
        scenarios = Scenario.path
        if not scenarios:
            return None

        return MarketDataScenario(scenario=scenarios[0] if len(scenarios) == 1 else
                                  CompositeScenario(scenarios=tuple(reversed(scenarios))))

    @property
    def _pricing_market_data_as_of(self) -> Tuple[PricingDateAndMarketDataAsOf, ...]:
        return PricingDateAndMarketDataAsOf(self.pricing_date, self.market_data_as_of),

    @property
    def pricing_date(self) -> dt.date:
        """Pricing date"""
        return self.__pricing_date

    @property
    def market_data_as_of(self) -> Union[dt.date, dt.datetime]:
        """Market data as of"""
        if self.__market_data_as_of:
            return self.__market_data_as_of
        elif self.pricing_date == dt.date.today():
            return business_day_offset(self.pricing_date, -1, roll='preceding')
        else:
            return self.pricing_date

    @property
    def market_data_location(self) -> str:
        """Market data location"""
        return self.__market_data_location

    @property
    def use_cache(self) -> bool:
        """Cache results"""
        return self.__use_cache

    @property
    def visible_to_gs(self) -> bool:
        """Request contents visible to GS"""
        return self.__visible_to_gs

    @property
    def pricing_key(self) -> PricingKey:
        """A key representing information about the pricing environment"""
        return PricingKey(
            self._pricing_market_data_as_of,
            self.__market_data_location,
            self.__parameters,
            self.__scenario)
    def calc(self, priceable: Priceable, risk_measure: Union[RiskMeasure, Iterable[RiskMeasure]])\
            -> Union[list, DataFrameWithInfo, ErrorValue, FloatWithInfo, Future, MultipleRiskMeasureFuture,
                     SeriesWithInfo]:
        """
        Calculate the risk measure for the priceable instrument. Do not use directly, use via instruments

        :param priceable: The priceable (e.g. instrument)
        :param risk_measure: The measure we wish to calculate
        :return: A float, Dataframe, Series or Future (depending on is_async or whether the context is entered)

        **Examples**

        >>> from gs_quant.instrument import IRSwap
        >>> from gs_quant.risk import IRDelta
        >>>
        >>> swap = IRSwap('Pay', '10y', 'USD', fixed_rate=0.01)
        >>> delta = swap.calc(IRDelta)
        """
        position = RiskPosition(priceable, priceable.get_quantity())
        multiple_measures = not isinstance(risk_measure, RiskMeasure)
        futures = {}
        active_context_lock = self.__active_context.__lock if self.__active_context != self else nullcontext()

        with self.__lock, active_context_lock:
            for measure in risk_measure if multiple_measures else (risk_measure,):
                scenario = self.__scenario
                measure_future = self.__active_context.__futures.get((scenario, measure), {}).get(position)

                if measure_future is None:
                    measure_future = PricingFuture(self.__active_context)
                    if self.__use_cache:
                        cached_result = PricingCache.get(priceable, risk_measure)
                        if cached_result:
                            measure_future.set_result(cached_result)

                    if not measure_future.done():
                        self.__risk_measures_in_scenario_by_provider_and_position.setdefault(
                            priceable.provider(), {}).setdefault(
                            position, {}).setdefault(scenario, set()).add(measure)
                        self.__active_context.__futures.setdefault((scenario, measure), {})[position] = measure_future

                futures[measure] = measure_future

        future = MultipleRiskMeasureFuture(futures, result_future=PricingFuture(self.__active_context))\
            if multiple_measures else futures[risk_measure]

        if not (self.is_entered or self.__is_async):
            if not future.done():
                self._calc()
            return future.result()
        else:
            return future
    def resolve_fields(self, priceable: Priceable, in_place: bool) -> Optional[Union[Priceable, Future]]:
        """
        Resolve fields on the priceable which were not supplied. Do not use directly, use via instruments

        :param priceable: The priceable (e.g. instrument)
        :param in_place: Resolve in place or return a new Priceable

        **Examples**

        >>> from gs_quant.instrument import IRSwap
        >>>
        >>> swap = IRSwap('Pay', '10y', 'USD')
        >>> rate = swap.fixed_rate

        fixed_rate is None

        >>> swap.resolve()
        >>> rate = swap.fixed_rate

        fixed_rate is now the solved value
        """
        resolution_key = self.pricing_key

        if priceable.resolution_key:
            if in_place:
                if resolution_key != priceable.resolution_key:
                    _logger.warning(
                        'Calling resolve() on an instrument already resolved under a different PricingContext')
                return
            elif resolution_key == priceable.resolution_key:
                return copy.copy(priceable)

        def check_valid(result_):
            if isinstance(result_, StringWithInfo):
                _logger.error('Failed to resolve instrument fields: ' + result_)
                return priceable
            if isinstance(result_, ErrorValue):
                _logger.error('Failed to resolve instrument fields: ' + result_.error)
                return priceable
            else:
                return result_

        result = self.calc(priceable, ResolvedInstrumentValues)

        if in_place:
            def handle_result(result_):
                result_ = check_valid(result_)
                if result_ is not priceable:
                    priceable.unresolved = copy.copy(priceable)
                    priceable.from_instance(result_)
                    priceable.resolution_key = result_.resolution_key

            if isinstance(result, Future):
                result.add_done_callback(lambda f: handle_result(f.result()))
            else:
                handle_result(result)
        else:
            if isinstance(result, Future):
                result.add_done_callback(lambda f: check_valid(f.result()))
                return result
            else:
                return check_valid(result)

class LivePricingContext(PricingContext):

    def __init__(self,
                 market_data_location: Optional[str] = None,
                 is_async: bool = False,
                 is_batch: bool = False,
                 visible_to_gs: bool = False,
                 csa_term: Optional[str] = None,
                 poll_for_batch_results: Optional[bool] = False,
                 batch_results_timeout: Optional[int] = None
                 ):
        # TODO we use 23:59:59.999999 as a sentinel value to indicate live pricing for now. Fix this
        d = business_day_offset(dt.date.today(), -1, roll='preceding')
        super().__init__(
            pricing_date=dt.date.today(),
            market_data_as_of=dt.datetime(d.year, d.month, d.day, 23, 59, 59, 999999),
            market_data_location=market_data_location,
            is_async=is_async,
            is_batch=is_batch,
            use_cache=False,
            visible_to_gs=visible_to_gs,
            csa_term=csa_term,
            poll_for_batch_results=poll_for_batch_results,
            batch_results_timeout=batch_results_timeout
        )
| 41.996416 | 120 | 0.622856 | 2,605 | 23,434 | 5.312476 | 0.161612 | 0.024568 | 0.013874 | 0.016186 | 0.226606 | 0.158321 | 0.119084 | 0.113158 | 0.086856 | 0.063516 | 0 | 0.003184 | 0.303064 | 23,434 | 557 | 121 | 42.071813 | 0.844171 | 0.192242 | 0 | 0.233618 | 0 | 0 | 0.017491 | 0 | 0 | 0 | 0 | 0.001795 | 0 | 1 | 0.091168 | false | 0 | 0.048433 | 0.011396 | 0.245014 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
949348fb8fb9c95b02cdfcf92957d20bc4c5574a | 898 | py | Python | tests/test_c_evluater.py | jiamo/pcc | fde1173e3ba81a41b4f901780711cffb7934107e | [
"Unlicense"
] | 4 | 2018-06-20T16:32:30.000Z | 2018-09-15T15:12:38.000Z | tests/test_c_evluater.py | jiamo/pcc | fde1173e3ba81a41b4f901780711cffb7934107e | [
"Unlicense"
] | 5 | 2018-05-19T08:35:06.000Z | 2021-10-13T05:23:57.000Z | tests/test_c_evluater.py | jiamo/pcc | fde1173e3ba81a41b4f901780711cffb7934107e | [
"Unlicense"
] | null | null | null | import sys
import os
this_dir = os.path.dirname(__file__)
parent_dir = os.path.dirname(this_dir)
sys.path.insert(0, parent_dir)
from pcc.evaluater.c_evaluator import CEvaluator
import unittest


class TestCevluatar(unittest.TestCase):

    def test_simple(self):
        pcc = CEvaluator()
        # kalei.evaluate('def binary: 1 (x y) y')
        ret = pcc.evaluate('''
        int add(int x, int y){
            return x + y;
        }

        int main(){
            int a = 3;
            int b = 4;
            return add(a, b);
        }
        ''', llvmdump=True)
        print("The answer is {}".format(ret))
        assert (ret == 7)
        # This is a good point to self start main
        # print(pcc.evaluate('main()'))


if __name__ == '__main__':
    # Evaluate some code
    unittest.main()
| 21.902439 | 49 | 0.544543 | 111 | 898 | 4.171171 | 0.513514 | 0.030238 | 0.038877 | 0.069114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008403 | 0.337416 | 898 | 40 | 50 | 22.45 | 0.769748 | 0.172606 | 0 | 0.08 | 0 | 0 | 0.328358 | 0 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.04 | false | 0 | 0.2 | 0 | 0.36 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9499935e046ef24294dc0f8b1d160b07bb4cedd9 | 15,621 | py | Python | tests/datasets/test_mnist.py | dbcollection/dbcollection | a36f57a11bc2636992e26bba4406914162773dd9 | [
"MIT"
] | 23 | 2017-09-20T19:23:26.000Z | 2022-01-09T16:18:11.000Z | tests/datasets/test_mnist.py | dbcollection/dbcollection | a36f57a11bc2636992e26bba4406914162773dd9 | [
"MIT"
] | 148 | 2017-07-23T14:28:28.000Z | 2022-01-13T00:35:17.000Z | tests/datasets/test_mnist.py | dbcollection/dbcollection | a36f57a11bc2636992e26bba4406914162773dd9 | [
"MIT"
] | 6 | 2018-01-12T15:47:57.000Z | 2021-02-09T06:32:39.000Z | """
Test the base classes for managing datasets and tasks.
"""
import os
import sys
import pytest
import numpy as np
from numpy.testing import assert_array_equal
from dbcollection.utils.string_ascii import convert_str_to_ascii as str2ascii
from dbcollection.utils.pad import pad_list
from dbcollection.datasets.mnist.classification import (
    Classification,
    DatasetAnnotationLoader,
    ClassLabelField,
    ImageField,
    LabelIdField,
    ObjectFieldNamesField,
    ObjectIdsField,
    ImagesPerClassList
)

@pytest.fixture()
def mock_classification_class():
    return Classification(data_path='/some/path/data', cache_path='/some/path/cache')


@pytest.fixture()
def classes_classification():
    return ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']

class TestClassificationTask:
    """Unit tests for the mnist Classification task."""

    def test_task_attributes(self, mocker, mock_classification_class, classes_classification):
        assert mock_classification_class.filename_h5 == 'classification'
        assert mock_classification_class.classes == classes_classification

    def test_load_data(self, mocker, mock_classification_class):
        dummy_data = ['some_data']
        mock_load_train = mocker.patch.object(DatasetAnnotationLoader, "load_train_data", return_value=dummy_data)
        mock_load_test = mocker.patch.object(DatasetAnnotationLoader, "load_test_data", return_value=dummy_data)

        load_data_generator = mock_classification_class.load_data()

        if sys.version[0] == '3':
            train_data = load_data_generator.__next__()
            test_data = load_data_generator.__next__()
        else:
            train_data = load_data_generator.next()
            test_data = load_data_generator.next()

        mock_load_train.assert_called_once_with()
        mock_load_test.assert_called_once_with()
        assert train_data == {"train": ['some_data']}
        assert test_data == {"test": ['some_data']}

    def test_process_set_metadata(self, mocker, mock_classification_class):
        dummy_ids = [0, 1, 2, 3, 4, 5]
        mock_class_field = mocker.patch.object(ClassLabelField, "process")
        mock_image_field = mocker.patch.object(ImageField, "process", return_value=dummy_ids)
        mock_label_field = mocker.patch.object(LabelIdField, "process", return_value=dummy_ids)
        mock_objfield_field = mocker.patch.object(ObjectFieldNamesField, "process")
        mock_objids_field = mocker.patch.object(ObjectIdsField, "process")
        mock_images_per_class_list = mocker.patch.object(ImagesPerClassList, "process")
        data = {"classes": 1, "images": 1, "labels": 1,
                "object_fields": 1, "object_ids": 1, "list_images_per_class": 1}

        mock_classification_class.process_set_metadata(data, 'train')

        mock_class_field.assert_called_once_with()
        mock_image_field.assert_called_once_with()
        mock_label_field.assert_called_once_with()
        mock_objfield_field.assert_called_once_with()
        mock_objids_field.assert_called_once_with(dummy_ids, dummy_ids)
        mock_images_per_class_list.assert_called_once_with()
class TestDatasetAnnotationLoader:
"""Unit tests for the DatasetAnnotationLoader class."""
@staticmethod
@pytest.fixture()
def mock_loader_class():
return DatasetAnnotationLoader(
classes=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'],
data_path='/some/path/data',
cache_path='/some/path/cache',
verbose=True
)
def test_task_attributes(self, mocker, mock_loader_class):
assert mock_loader_class.classes == ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
assert mock_loader_class.data_path == '/some/path/data'
assert mock_loader_class.cache_path == '/some/path/cache'
        assert mock_loader_class.verbose is True
def test_load_train_data(self, mocker, mock_loader_class):
dummy_data = {"dummy": 'data'}
mock_load_data = mocker.patch.object(DatasetAnnotationLoader, 'load_data_set', return_value=dummy_data)
data = mock_loader_class.load_train_data()
mock_load_data.assert_called_once_with(is_test=False)
assert data == dummy_data
def test_load_test_data(self, mocker, mock_loader_class):
dummy_data = {"dummy": 'data'}
mock_load_data = mocker.patch.object(DatasetAnnotationLoader, 'load_data_set', return_value=dummy_data)
data = mock_loader_class.load_test_data()
mock_load_data.assert_called_once_with(is_test=True)
assert data == dummy_data
def test_load_data_set(self, mocker, mock_loader_class):
dummy_images = np.random.rand(10,28,28)
dummy_labels = np.random.randint(0, 9, 10)
mock_load_data = mocker.patch.object(DatasetAnnotationLoader, "load_data_annotations", return_value=(dummy_images, dummy_labels))
set_data = mock_loader_class.load_data_set(is_test=True)
mock_load_data.assert_called_once_with(True)
assert set_data['classes'] == ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
assert_array_equal(set_data['images'], dummy_images)
assert_array_equal(set_data['labels'], dummy_labels)
@pytest.mark.parametrize('is_test', [False, True])
def test_load_data_annotations(self, mocker, mock_loader_class, is_test):
dummy_images = np.random.rand(10*28*28)
dummy_labels = np.random.randint(0,9,10)
dummy_set_size = 10
mock_get_data_test = mocker.patch.object(DatasetAnnotationLoader, "get_data_test", return_value=(dummy_images, dummy_labels, dummy_set_size))
mock_get_data_train = mocker.patch.object(DatasetAnnotationLoader, "get_data_train", return_value=(dummy_images, dummy_labels, dummy_set_size))
images, labels = mock_loader_class.load_data_annotations(is_test=is_test)
if is_test:
mock_get_data_test.assert_called_once_with()
mock_get_data_train.assert_not_called()
else:
mock_get_data_test.assert_not_called()
mock_get_data_train.assert_called_once_with()
assert_array_equal(images, dummy_images.reshape(dummy_set_size, 28, 28))
assert_array_equal(labels, dummy_labels)
def test_get_data_test(self, mocker, mock_loader_class):
dummy_images = np.zeros((5,28*28))
dummy_labels = np.random.randint(0,9,5)
mock_load_images = mocker.patch.object(DatasetAnnotationLoader, "load_images_numpy", return_value=dummy_images)
mock_load_labels = mocker.patch.object(DatasetAnnotationLoader, "load_labels_numpy", return_value=dummy_labels)
test_images, test_labels, size_test = mock_loader_class.get_data_test()
mock_load_images.assert_called_once_with(os.path.join('/some/path/data', 't10k-images.idx3-ubyte'))
mock_load_labels.assert_called_once_with(os.path.join('/some/path/data', 't10k-labels.idx1-ubyte'))
assert_array_equal(test_images, dummy_images)
assert_array_equal(test_labels, dummy_labels)
assert size_test == 10000
def test_get_data_train(self, mocker, mock_loader_class):
dummy_images = np.zeros((5,28*28))
dummy_labels = np.random.randint(0,9,5)
mock_load_images = mocker.patch.object(DatasetAnnotationLoader, "load_images_numpy", return_value=dummy_images)
mock_load_labels = mocker.patch.object(DatasetAnnotationLoader, "load_labels_numpy", return_value=dummy_labels)
train_images, train_labels, size_train = mock_loader_class.get_data_train()
mock_load_images.assert_called_once_with(os.path.join('/some/path/data', 'train-images.idx3-ubyte'))
mock_load_labels.assert_called_once_with(os.path.join('/some/path/data', 'train-labels.idx1-ubyte'))
assert_array_equal(train_images, dummy_images)
assert_array_equal(train_labels, dummy_labels)
assert size_train == 60000
@pytest.fixture()
def test_data_loaded():
classes = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
images = np.random.rand(10,28, 28)
labels = np.array(range(10))
return {
"classes": classes,
"images": images,
"labels": labels
}
@pytest.fixture()
def field_kwargs(test_data_loaded):
return {
"data": test_data_loaded,
"set_name": 'train',
"hdf5_manager": {'dummy': 'object'},
"verbose": True
}
class TestClassLabelField:
"""Unit tests for the ClassLabelField class."""
@staticmethod
@pytest.fixture()
def mock_classlabel_class(field_kwargs):
return ClassLabelField(**field_kwargs)
def test_process(self, mocker, mock_classlabel_class):
dummy_names = ['car']*10
mock_get_class = mocker.patch.object(ClassLabelField, "get_class_names", return_value=dummy_names)
mock_save_hdf5 = mocker.patch.object(ClassLabelField, "save_field_to_hdf5")
mock_classlabel_class.process()
mock_get_class.assert_called_once_with()
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='classes',
# data=str2ascii(dummy_names),
# dtype=np.uint8,
# fillvalue=-1
# )
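        # One possible fix (sketch): unpack the recorded call and compare the
        # array payload with numpy's own helper. This assumes the production
        # code passes keyword arguments, as the disabled assertion above
        # suggests; the same pattern applies to the other disabled checks below.
        call_kwargs = mock_save_hdf5.call_args[1]
        if call_kwargs:
            assert call_kwargs['set_name'] == 'train'
            assert call_kwargs['field'] == 'classes'
            assert_array_equal(call_kwargs['data'], str2ascii(dummy_names))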
def test_get_class_names(self, mocker, mock_classlabel_class, test_data_loaded):
class_names = mock_classlabel_class.get_class_names()
assert class_names == test_data_loaded['classes']
class TestImageField:
"""Unit tests for the ImageField class."""
@staticmethod
@pytest.fixture()
def mock_image_class(field_kwargs):
return ImageField(**field_kwargs)
def test_process(self, mocker, mock_image_class):
dummy_images = np.random.rand(5, 28, 28)
dummy_ids = list(range(5))
mock_get_images = mocker.patch.object(ImageField, "get_images", return_value=(dummy_images, dummy_ids))
mock_save_hdf5 = mocker.patch.object(ImageField, "save_field_to_hdf5")
image_ids = mock_image_class.process()
assert image_ids == dummy_ids
mock_get_images.assert_called_once_with()
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='images',
# data=dummy_images,
# dtype=np.uint8,
# fillvalue=-1
# )
def test_get_images(self, mocker, mock_image_class, test_data_loaded):
images, image_ids = mock_image_class.get_images()
assert_array_equal(images, test_data_loaded['images'])
assert image_ids == list(range(len(images)))
class TestLabelIdField:
"""Unit tests for the LabelIdField class."""
@staticmethod
@pytest.fixture()
def mock_label_class(field_kwargs):
return LabelIdField(**field_kwargs)
def test_process(self, mocker, mock_label_class):
dummy_labels = np.array(range(10))
dummy_ids = list(range(10))
mock_get_labels = mocker.patch.object(LabelIdField, "get_labels", return_value=(dummy_labels, dummy_ids))
mock_save_hdf5 = mocker.patch.object(LabelIdField, "save_field_to_hdf5")
label_ids = mock_label_class.process()
assert label_ids == dummy_ids
mock_get_labels.assert_called_once_with()
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='labels',
# data=dummy_labels,
# dtype=np.uint8,
# fillvalue=0
# )
    def test_get_labels(self, mocker, mock_label_class, test_data_loaded):
labels, label_ids = mock_label_class.get_labels()
assert_array_equal(labels, test_data_loaded['labels'])
assert label_ids == list(range(len(labels)))
class TestObjectFieldNamesField:
"""Unit tests for the ObjectFieldNamesField class."""
@staticmethod
@pytest.fixture()
def mock_objfields_class(field_kwargs):
return ObjectFieldNamesField(**field_kwargs)
def test_process(self, mocker, mock_objfields_class):
mock_save_hdf5 = mocker.patch.object(ObjectFieldNamesField, "save_field_to_hdf5")
mock_objfields_class.process()
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='object_fields',
# data=str2ascii(['images', 'labels']),
# dtype=np.uint8,
# fillvalue=0
# )
class TestObjectIdsField:
"""Unit tests for the ObjectIdsField class."""
@staticmethod
@pytest.fixture()
def mock_objfids_class(field_kwargs):
return ObjectIdsField(**field_kwargs)
def test_process(self, mocker, mock_objfids_class):
mock_save_hdf5 = mocker.patch.object(ObjectIdsField, "save_field_to_hdf5")
image_ids = [0, 1, 2, 3, 4, 5]
label_ids = [1, 5, 9, 8, 3, 5]
object_ids = mock_objfids_class.process(
image_ids=image_ids,
label_ids=label_ids
)
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='object_ids',
# data=np.array([[0, 1], [1, 5], [2, 9], [3, 8], [4, 3], [5, 5]], dtype=np.int32),
# dtype=np.int32,
# fillvalue=-1
# )
class TestImagesPerClassList:
"""Unit tests for the ImagesPerClassList class."""
@staticmethod
@pytest.fixture()
def mock_img_per_class_list(field_kwargs):
return ImagesPerClassList(**field_kwargs)
def test_process(self, mocker, mock_img_per_class_list):
dummy_ids = [[0], [2, 3], [4, 5]]
dummy_array = np.array([[0, -1], [2, 3], [4, 5]])
mock_get_ids = mocker.patch.object(ImagesPerClassList, "get_image_ids_per_class", return_value=dummy_ids)
mock_convert_array = mocker.patch.object(ImagesPerClassList, "convert_list_to_array", return_value=dummy_array)
mock_save_hdf5 = mocker.patch.object(ImagesPerClassList, "save_field_to_hdf5")
mock_img_per_class_list.process()
mock_get_ids.assert_called_once_with()
mock_convert_array.assert_called_once_with(dummy_ids)
assert mock_save_hdf5.called
# **disabled until I find a way to do assert calls with numpy arrays**
# mock_save_hdf5.assert_called_once_with(
# set_name='train',
# field='list_images_per_class',
# data=dummy_array,
# dtype=np.int32,
# fillvalue=-1
# )
def test_get_image_ids_per_class(self, mocker, mock_img_per_class_list):
images_per_class_ids = mock_img_per_class_list.get_image_ids_per_class()
assert images_per_class_ids == [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]
def test_convert_list_to_array(self, mocker, mock_img_per_class_list):
list_ids = [[0], [2, 3], [4, 5, 6]]
images_per_class_array = mock_img_per_class_list.convert_list_to_array(list_ids)
expected = np.array(pad_list([[0, -1, -1], [2, 3, -1], [4, 5, 6]], -1), dtype=np.int32)
assert_array_equal(images_per_class_array, expected)
| 39.150376 | 151 | 0.679534 | 2,014 | 15,621 | 4.90715 | 0.077458 | 0.031165 | 0.048164 | 0.056663 | 0.57948 | 0.449256 | 0.34271 | 0.301022 | 0.260852 | 0.245674 | 0 | 0.02042 | 0.209974 | 15,621 | 398 | 152 | 39.248744 | 0.780407 | 0.115101 | 0 | 0.177419 | 0 | 0 | 0.06877 | 0.012821 | 0 | 0 | 0 | 0 | 0.245968 | 1 | 0.129032 | false | 0 | 0.032258 | 0.040323 | 0.237903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
9499bf696b293047856ed9f2ec46e452e8a7ad7f | 669 | py | Python | apps/stringbyte/stringbyte.py | jeonghoonkang/BerePi | e04283a94a6a0487ab0049dc3e514d6c5dda39cc | [
"BSD-2-Clause"
] | 22 | 2015-06-03T06:28:27.000Z | 2022-03-18T08:02:45.000Z | apps/stringbyte/stringbyte.py | jeonghoonkang/BerePi | e04283a94a6a0487ab0049dc3e514d6c5dda39cc | [
"BSD-2-Clause"
] | 14 | 2015-06-08T01:31:53.000Z | 2020-08-30T02:19:15.000Z | apps/stringbyte/stringbyte.py | jeonghoonkang/BerePi | e04283a94a6a0487ab0049dc3e514d6c5dda39cc | [
"BSD-2-Clause"
] | 26 | 2015-05-12T09:33:55.000Z | 2021-08-30T05:41:00.000Z | # -*- coding: utf-8 -*-
# Author : Jeonghoonkang, github.com/jeonghoonkang
import re

hexstr = '123456'
# 1) list comprehension: [ for ... ]
hlist = [hexstr[i:i+2] for i in range(0, len(hexstr), 2)]
# 2) re.findall
hlist = re.findall(r'..', hexstr)
# 3) using map, zip, iter (list() keeps the result a list on Python 3)
hlist = list(map(''.join, zip(*[iter(hexstr)]*2)))
'''
iter(hexstr) is an iterator that yields one character at a time.
[ 'a' ] * 2 is [ 'a', 'a' ], and
calling f(['a','a']) passes the whole list as one argument, whereas
f(*['a','a']) is the same as f('a','a'); the * unpacks the list.
So the expression is effectively zip(iter_result, iter_result),
which draws one value from the first argument, then one from the second.
Because both arguments are the same iterator, successive characters are
consumed one by one, giving (('1','2'), ('3','4'), ('5','6')).
Applying ''.join (i.e. concatenation) to each pair yields
['12','34','56']
'''
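# 4) Another option (not in the original script; shown for comparison):
#    textwrap.wrap splits a string into fixed-width chunks.
import textwrap
hlist = textwrap.wrap(hexstr, 2)   # -> ['12', '34', '56']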
| 23.068966 | 55 | 0.566517 | 129 | 669 | 2.937985 | 0.635659 | 0.021108 | 0.023747 | 0.021108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049091 | 0.177877 | 669 | 28 | 56 | 23.892857 | 0.64 | 0.176383 | 0 | 0 | 0 | 0 | 0.04878 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
949b48cef09179339869e0e0e398444e8cad808f | 3,383 | py | Python | coronastat/utils.py | theNocturnalGuy/coronastat | a5f6ee1ad59a709c48f18d09413ba89b39572b97 | [
"MIT"
] | 1 | 2020-07-29T21:05:20.000Z | 2020-07-29T21:05:20.000Z | coronastat/utils.py | thenocturnalguy/coronastat | a5f6ee1ad59a709c48f18d09413ba89b39572b97 | [
"MIT"
] | null | null | null | coronastat/utils.py | thenocturnalguy/coronastat | a5f6ee1ad59a709c48f18d09413ba89b39572b97 | [
"MIT"
] | null | null | null | ########### IMPORTING THE REQURIED LIBRARIES ###########
from __future__ import print_function
from terminaltables import AsciiTable
import sys, time
########## DECLARING THE GLOBAL VARIABLES #############
WORLD_URL = "https://www.worldometers.info/coronavirus"
INDIA_URL = "https://www.mohfw.gov.in/"
MAX_TIMEOUT = 3
PROXY_TIMEOUT = 3
########## DECLARING STYLES ##########
class style:
PURPLE = "\033[95m"
CYAN = "\033[96m"
DARKCYAN = "\033[36m"
BLUE = "\033[94m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
BOLD = "\033[1m"
UNDERLINE = "\033[4m"
ITALIC = "\033[3m"
END = "\033[0m"
######### EXTRACTING NUMBER FROM STRING #########
def extractNumbers( string ):
dig = ""
for c in string:
if c.isdigit():
dig += c
return dig
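# e.g. extractNumbers( "1,234 cases" ) returns "1234" (digits only, as a string)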
######### DISPLAYING THE COUNTRYWISE STATISTICS #########
def displayWorldInfo( corona ):
print( "\nFetching data. Please wait...\n" );
page = corona.getPageResponse( WORLD_URL )
if not page:
print( "\nSorry, couldn't fetch any information for you." )
print( "\nMaybe you don't have a working internet connection or\nthe source are blocking the application\n" )
exit()
counts = corona.extractCounts( page, "w" )
table = corona.extractTableData( page, "w" )
print( style.RED + style.BOLD + counts.table + style.END + "\n" )
print( table.table )
######### DISPLAYING THE STATEWISE STATISTICS #########
def displayCountryInfo( corona ):
print( "\nFetching data. Please wait...\n" );
page = corona.getPageResponse( INDIA_URL )
if not page:
print( "\nSorry, couldn't fetch any information for you." )
print( "\nMaybe you don't have a working internet connection or\nthe source are blocking the application\n" )
exit()
counts = corona.extractCounts( page, "c" )
table = corona.extractTableData( page, "c" )
print( style.RED + style.BOLD + counts.table + style.END + "\n" )
print( table.table )
######### DISPLAYING THE HELP #########
def displayHelp():
print( "\nUsage : coronastat [ OPTIONS ]\n" );
print( "Commands : " );
table = [
[ "Options", "Functions" ],
[ "-h, --help", "Opens the help for this CLI tool." ],
[ "-c, --country", "Opens statewise COVID-19 statistics ( only India's data is possible till now )." ],
[ "-w, --world", "Opens countrywise COVID-19 statistics." ]
]
table = AsciiTable( table )
print( table.table )
######### DISPLAYING THE ASCII ART #########
def displayASCIIArt():
print(
style.CYAN + style.ITALIC + style.BOLD +
'''\n
██████╗ ██████╗ ██████╗ ██████╗ ███╗ ██╗ █████╗ ███████╗████████╗ █████╗ ████████╗
██╔════╝██╔═══██╗██╔══██╗██╔═══██╗████╗ ██║██╔══██╗██╔════╝╚══██╔══╝██╔══██╗╚══██╔══╝
██║ ██║ ██║██████╔╝██║ ██║██╔██╗ ██║███████║███████╗ ██║ ███████║ ██║
██║ ██║ ██║██╔══██╗██║ ██║██║╚██╗██║██╔══██║╚════██║ ██║ ██╔══██║ ██║
╚██████╗╚██████╔╝██║ ██║╚██████╔╝██║ ╚████║██║ ██║███████║ ██║ ██║ ██║ ██║
╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝╚═╝ ╚═╝╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝
Developed by: Rahul Gupta
\n'''
+ style.END
    )
######### DISPLAYING THE LOADING ANIMATION #########
def displayLoadingAnim():
for _ in range( 3 ):
loader="\\|/-\\|/-"
for l in loader:
sys.stdout.write( l )
sys.stdout.flush()
sys.stdout.write( '\b' )
time.sleep( 0.2 ) | 31.324074 | 111 | 0.514336 | 390 | 3,383 | 5.482051 | 0.446154 | 0.016838 | 0.021048 | 0.03508 | 0.297474 | 0.284378 | 0.284378 | 0.284378 | 0.284378 | 0.284378 | 0 | 0.022313 | 0.205143 | 3,383 | 108 | 112 | 31.324074 | 0.620305 | 0.080106 | 0 | 0.211268 | 0 | 0.028169 | 0.339482 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084507 | false | 0 | 0.042254 | 0 | 0.309859 | 0.211268 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
949c67ffc18b5f1e8255608fad78da04a4deef69 | 4,744 | py | Python | host/function/send_files.py | zadjii/nebula | 50c4ec019c9f7eb15fe105a6c53a8a12880e281c | [
"MIT"
] | 2 | 2020-04-15T11:20:59.000Z | 2021-05-12T13:01:36.000Z | host/function/send_files.py | zadjii/nebula | 50c4ec019c9f7eb15fe105a6c53a8a12880e281c | [
"MIT"
] | 1 | 2018-06-05T04:48:56.000Z | 2018-06-05T04:48:56.000Z | host/function/send_files.py | zadjii/nebula | 50c4ec019c9f7eb15fe105a6c53a8a12880e281c | [
"MIT"
] | 1 | 2018-08-15T06:45:46.000Z | 2018-08-15T06:45:46.000Z | import os
import shutil
from stat import S_ISDIR
from datetime import datetime
from common_util import Error, Success
from host.util import mylog
from messages import HostFileTransferMessage
__author__ = 'Mike'
def send_tree(db, other_id, cloud, requested_root, connection):
"""
Note: This can't be used to send a tree of files over the network
to a mirror on the same host process. It blocks and is bad.
    Fortunately, this is only used by `nebs mirror` at the moment,
so we don't need to worry.
:param db:
:param other_id:
:param cloud:
:param requested_root:
:param connection:
:return:
"""
mylog('They requested the file {}'.format(requested_root))
    # find the file on the system, get its size.
requesting_all = requested_root == '/'
filepath = None
# if the root is '/', send all of the children of the root
if requesting_all:
filepath = cloud.root_directory
else:
filepath = os.path.join(cloud.root_directory, requested_root)
mylog('The translated request path was {}'.format(filepath))
send_file_to_other(other_id, cloud, filepath, connection)
complete_sending_files(other_id, cloud, filepath, connection)
connection.close()
def send_file_to_local(db, src_mirror, tgt_mirror, relative_pathname):
# type: (SimpleDB, Cloud, Cloud, str) -> ResultAndData
rd = Error()
full_src_path = os.path.join(src_mirror.root_directory, relative_pathname)
full_tgt_path = os.path.join(tgt_mirror.root_directory, relative_pathname)
src_file_stat = os.stat(full_src_path)
src_file_is_dir = S_ISDIR(src_file_stat.st_mode)
rd = Success()
try:
if src_file_is_dir and not os.path.exists(full_tgt_path):
os.mkdir(full_tgt_path)
else:
shutil.copy2(full_src_path, full_tgt_path)
except IOError as e:
rd = Error(e)
if rd.success:
updated_node = tgt_mirror.create_or_update_node(relative_pathname, db)
if updated_node is not None:
old_modified_on = updated_node.last_modified
updated_node.last_modified = datetime.utcfromtimestamp(os.path.getmtime(full_tgt_path))
mylog('update mtime {}=>{}'.format(old_modified_on, updated_node.last_modified))
db.session.commit()
else:
mylog('ERROR: Failed to create a FileNode for the new file {}'.format(full_tgt_path))
return rd
def send_file_to_other(other_id, cloud, filepath, socket_conn, recurse=True):
"""
Assumes that the other host was already verified, and the cloud is non-null
"""
req_file_stat = os.stat(filepath)
relative_pathname = os.path.relpath(filepath, cloud.root_directory)
# print 'relpath({}) in \'{}\' is <{}>'.format(filepath, cloud.name, relative_pathname)
req_file_is_dir = S_ISDIR(req_file_stat.st_mode)
# mylog('filepath<{}> is_dir={}'.format(filepath, req_file_is_dir))
if req_file_is_dir:
if relative_pathname != '.':
msg = HostFileTransferMessage(
other_id
, cloud.uname()
, cloud.cname()
, relative_pathname
, 0
, req_file_is_dir
)
socket_conn.send_obj(msg)
# TODO#23: The other host should reply with FileTransferSuccessMessage
if recurse:
subdirectories = os.listdir(filepath)
# mylog('Sending children of <{}>={}'.format(filepath, subdirectories))
for f in subdirectories:
send_file_to_other(other_id, cloud, os.path.join(filepath, f), socket_conn)
else:
req_file_size = req_file_stat.st_size
requested_file = open(filepath, 'rb')
msg = HostFileTransferMessage(
other_id
, cloud.uname()
, cloud.cname()
, relative_pathname
, req_file_size
, req_file_is_dir
)
socket_conn.send_obj(msg)
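        # Stream the file in 1024-byte chunks; read() returns b'' at EOF,
        # which makes l falsy and ends the loop (assuming send_next_data
        # returns the number of bytes handed off, as the loop condition implies).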
l = 1
while l:
new_data = requested_file.read(1024)
l = socket_conn.send_next_data(new_data)
# mylog(
# '[{}]Sent {}B of file<{}> data'
# .format(cloud.my_id_from_remote, l, filepath)
# )
mylog(
'[{}]Sent <{}> data to [{}]'
.format(cloud.my_id_from_remote, filepath, other_id)
)
requested_file.close()
def complete_sending_files(other_id, cloud, filepath, socket_conn):
msg = HostFileTransferMessage(other_id, cloud.uname(), cloud.cname(), None, None, None)
socket_conn.send_obj(msg)
mylog('[{}] completed sending files to [{}]'
.format(cloud.my_id_from_remote, other_id))
| 34.376812 | 99 | 0.638702 | 614 | 4,744 | 4.672638 | 0.275244 | 0.029278 | 0.037644 | 0.020913 | 0.278843 | 0.210178 | 0.194493 | 0.11328 | 0.070408 | 0.0481 | 0 | 0.002577 | 0.263912 | 4,744 | 137 | 100 | 34.627737 | 0.819015 | 0.194983 | 0 | 0.213483 | 0 | 0 | 0.054293 | 0 | 0 | 0 | 0 | 0.007299 | 0 | 1 | 0.044944 | false | 0 | 0.078652 | 0 | 0.134831 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
949ea06e98739eef69309f8bebfeb873270bebd7 | 795 | py | Python | mrobpy/tests/point_cloud_test.py | miloserdova-l/mrob | 48bef772ba3158d2122991069196d6efd4a39f8c | [
"Apache-2.0"
] | 12 | 2020-09-22T15:33:48.000Z | 2022-03-02T17:27:39.000Z | mrobpy/tests/point_cloud_test.py | MobileRoboticsSkoltech/mrob | 7668a3ee35345c4878aa86fff082cc017992d205 | [
"Apache-2.0"
] | 46 | 2020-09-22T15:47:08.000Z | 2022-01-22T10:56:44.000Z | mrobpy/tests/point_cloud_test.py | MobileRoboticsSkoltech/mrob | 7668a3ee35345c4878aa86fff082cc017992d205 | [
"Apache-2.0"
] | 9 | 2020-09-22T15:59:33.000Z | 2021-12-20T20:15:16.000Z | import numpy as np
import mrob
import pytest
class TestPointCloud:
def test_point_cloud_registration(self):
# example equal to ./PC_alignment/examples/example_align.cpp
# generate random data
N = 500
X = np.random.rand(N,3)
T = mrob.geometry.SE3(np.random.rand(6))
Y = T.transform_array(X)
print('X = \n', X,'\n T = \n', T.T(),'\n Y =\n', Y)
# solve the problem
T_arun = mrob.registration.arun(X,Y)
print('Arun solution =\n', T_arun.T())
        assert np.allclose(T.T(), T_arun.T())
W = np.ones(N)
T_wp = mrob.registration.weighted(X,Y,W)
print('Weighted point optimization solution =\n', T_wp.T())
        assert np.allclose(T.T(), T_wp.T())
| 28.392857 | 68 | 0.583648 | 122 | 795 | 3.704918 | 0.42623 | 0.022124 | 0.053097 | 0.070796 | 0.137168 | 0.137168 | 0.137168 | 0.137168 | 0.137168 | 0.137168 | 0 | 0.010187 | 0.25912 | 795 | 27 | 69 | 29.444444 | 0.757216 | 0.122013 | 0 | 0 | 0 | 0 | 0.115274 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.294118 | 0.176471 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
949f8b0b268ad53be3b2e03a7886ed9475f6d1c0 | 1,698 | py | Python | src/pyams_utils/attr.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | src/pyams_utils/attr.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | src/pyams_utils/attr.py | Py-AMS/pyams-utils | 65b166596a8b9f66fb092a69ce5d53ac6675685e | [
"ZPL-2.1"
] | null | null | null | #
# Copyright (c) 2008-2015 Thierry Florac <tflorac AT ulthar.net>
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
"""PyAMS_utils.attr module
This module provides an :ref:`ITraversable` adapter which can be used to get access to an object's
attribute from a browser URL.
This adapter is actually used to get access to 'file' attributes in PyAMS_file package.
"""
from pyramid.exceptions import NotFound
from zope.interface import Interface
from zope.traversing.interfaces import ITraversable
from pyams_utils.adapter import ContextAdapter, adapter_config
__docformat__ = 'restructuredtext'
@adapter_config(name='attr', required=Interface, provides=ITraversable)
class AttributeTraverser(ContextAdapter):
"""++attr++ namespace traverser
This custom traversing adapter can be used to access an object attribute directly from
an URL by using a path like this::
/path/to/object/++attr++name
Where *name* is the name of the requested attribute.
"""
def traverse(self, name, furtherpath=None): # pylint: disable=unused-argument
"""Traverse from current context to given attribute"""
if '.' in name:
name = name.split('.', 1)[0]
try:
return getattr(self.context, name)
except AttributeError as exc:
raise NotFound from exc
| 33.96 | 98 | 0.732627 | 230 | 1,698 | 5.369565 | 0.556522 | 0.012146 | 0.022672 | 0.017814 | 0.02753 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008734 | 0.190813 | 1,698 | 49 | 99 | 34.653061 | 0.890102 | 0.603651 | 0 | 0 | 0 | 0 | 0.035256 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.285714 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a23063115af6a9b0dee607fd43ac955b2fab32 | 794 | py | Python | examples/tutorials/tutorial_1.py | tgolsson/appJar | 5e2f8bff44e927e7c2bae17fccddc6dbf79952f0 | [
"Apache-2.0"
] | 666 | 2016-11-14T18:17:40.000Z | 2022-03-29T03:53:22.000Z | examples/tutorials/tutorial_1.py | tgolsson/appJar | 5e2f8bff44e927e7c2bae17fccddc6dbf79952f0 | [
"Apache-2.0"
] | 598 | 2016-10-20T21:04:09.000Z | 2022-03-15T22:44:49.000Z | examples/tutorials/tutorial_1.py | tgolsson/appJar | 5e2f8bff44e927e7c2bae17fccddc6dbf79952f0 | [
"Apache-2.0"
] | 95 | 2017-01-19T12:23:58.000Z | 2022-03-06T18:16:21.000Z | import sys
sys.path.append("../../")
from appJar import gui
count = 0
def press(btn):
global count
count += 1
app.setLabel("title", "Count=" + str(count))
if btn == "PRESS":
app.setFg("white")
app.setBg("green")
elif btn == "PRESS ME TOO":
app.setFg("green")
app.setBg("pink")
elif btn == "PRESS ME AS WELL":
app.setFg("orange")
app.setBg("red")
if count >= 10:
app.infoBox("You win", "You got the counter to 10")
app = gui("Demo GUI")
app.setBg("yellow")
app.setFg("red")
app.setFont(15)
app.addLabel("title", "Hello World")
app.addMessage("info", "This is a simple demo of appJar")
app.addButton("PRESS", press)
app.addButton("PRESS ME TOO", press)
app.addButton("PRESS ME AS WELL", press)
app.go()
| 20.358974 | 59 | 0.596977 | 116 | 794 | 4.086207 | 0.474138 | 0.067511 | 0.107595 | 0.059072 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013008 | 0.225441 | 794 | 38 | 60 | 20.894737 | 0.757724 | 0 | 0 | 0 | 0 | 0 | 0.265743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.068966 | 0 | 0.103448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a3721f1a6e647e0a04fb6c974da5b5a1d53263 | 2,024 | py | Python | Hash Table/811. Subdomain Visit Count.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | 138 | 2020-02-08T05:25:26.000Z | 2021-11-04T11:59:28.000Z | Hash Table/811. Subdomain Visit Count.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | null | null | null | Hash Table/811. Subdomain Visit Count.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | 24 | 2021-01-02T07:18:43.000Z | 2022-03-20T08:17:54.000Z | """
811. Subdomain Visit Count
Example 1:
Input:
["9001 discuss.leetcode.com"]
Output:
["9001 discuss.leetcode.com", "9001 leetcode.com", "9001 com"]
Explanation:
We only have one website domain: "discuss.leetcode.com". As discussed above, the subdomain "leetcode.com" and "com" will also be visited. So they will all be visited 9001 times.
Example 2:
Input:
["900 google.mail.com", "50 yahoo.com", "1 intel.mail.com", "5 wiki.org"]
Output:
["901 mail.com","50 yahoo.com","900 google.mail.com","5 wiki.org","5 org","1 intel.mail.com","951 com"]
Explanation:
We will visit "google.mail.com" 900 times, "yahoo.com" 50 times, "intel.mail.com" once and "wiki.org" 5 times. For the subdomains, we will visit "mail.com" 900 + 1 = 901 times, "com" 900 + 50 + 1 = 951 times, and "org" 5 times.
"""
import collections
from collections import defaultdict


class Solution:
def subdomainVisits(self, cpdomains):
"""
:type cpdomains: List[str]
:rtype: List[str]
"""
dic = collections.defaultdict(int)
for domain in cpdomains:
count = int(domain[:domain.find(" ")])
ip = domain[domain.find(" "): ]
while '.' in ip:
dic[ip[1:]] += count
ip = ip[ip.find('.',1):]
res = []
for k ,v in dic.items():
res.append(str(v)+" "+k) # or use : list(map(lambda kv: str(kv[1])+" "+kv[0], dic.items()))
return res
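# Quick check (illustrative; dictionary iteration order may vary):
#   Solution().subdomainVisits(["9001 discuss.leetcode.com"])
#   -> ["9001 discuss.leetcode.com", "9001 leetcode.com", "9001 com"]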
class Solution:
def subdomainVisits(self, cpdomains):
"""
:type cpdomains: List[str]
:rtype: List[str]
"""
result = defaultdict(int)
for domain in cpdomains:
count, domain = domain.split()
count = int(count)
frags = domain.split('.')
curr = []
for i in reversed(range(len(frags))):
curr.append(frags[i])
result[".".join(reversed(curr))] += count
return ["{} {}".format(count, domain) for domain, count in result.item()] | 36.142857 | 228 | 0.552372 | 261 | 2,024 | 4.283525 | 0.337165 | 0.050089 | 0.048301 | 0.039356 | 0.26297 | 0.205725 | 0.205725 | 0.135957 | 0.135957 | 0.135957 | 0 | 0.050978 | 0.29249 | 2,024 | 56 | 229 | 36.142857 | 0.729749 | 0.467391 | 0 | 0.24 | 0 | 0 | 0.012592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a6826e247a30c899f1dfbebb5660a35c3394dc | 1,523 | py | Python | src/control_single.py | RidgeX/zenwheels-python | 83804281f09bb1b3947715f11bd61611283cfc14 | [
"MIT"
] | 2 | 2017-08-11T05:17:59.000Z | 2018-02-19T06:29:25.000Z | src/control_single.py | RidgeX/zenwheels-python | 83804281f09bb1b3947715f11bd61611283cfc14 | [
"MIT"
] | null | null | null | src/control_single.py | RidgeX/zenwheels-python | 83804281f09bb1b3947715f11bd61611283cfc14 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
from bluetooth import *
from protocol import *
import sys
import threading
def chunks(l, n):
"""
Yield successive n-sized chunks from l.
"""
for i in range(0, len(l), n):
yield l[i:i + n]
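# e.g. list(chunks(b'\x01\x02\x03\x04', 2)) -> [b'\x01\x02', b'\x03\x04'];
# the protocol messages handled below are framed as 2-byte chunks this way.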
# MAC address to connect to
address = '00:06:66:61:AC:9E' # MicroCar-58
# Whether the processing thread is running
running = True
def process(socket):
while running:
data = socket.recv(1024)
for msg in chunks(data, 2):
if msg[0] == HALL_SENSOR and msg[1] == HALL_SENSOR_ON:
print('Magnet detected')
elif msg[0] == BATTERY:
print('{0:.1f}V'.format(msg[1] / 10.0))
def main():
global running
# Open connection
socket = BluetoothSocket(RFCOMM)
socket.connect((address, 1))
# Start processing thread
t_process = threading.Thread(target=process, args=(socket,))
t_process.daemon = True
t_process.start()
# Main loop
try:
while True:
c = sys.stdin.read(1) # This waits until Enter is pressed
if c == 'z':
socket.send(bytes([THROTTLE, 0x10]))
if c == 'x':
socket.send(bytes([THROTTLE, 0x70]))
if c == 'c':
socket.send(bytes([THROTTLE, 0x0]))
if c == 'n':
socket.send(bytes([HEADLIGHT, HEADLIGHT_BRIGHT]))
if c == 'm':
socket.send(bytes([HEADLIGHT, HEADLIGHT_OFF]))
except KeyboardInterrupt:
# Ctrl-C pressed
pass
# Cleanup socket
print('Shutting down...')
running = False
t_process.join()
socket.close()
if __name__ == '__main__':
main()
| 22.397059 | 64 | 0.621799 | 212 | 1,523 | 4.386792 | 0.509434 | 0.016129 | 0.080645 | 0.074194 | 0.070968 | 0 | 0 | 0 | 0 | 0 | 0 | 0.031814 | 0.236376 | 1,523 | 67 | 65 | 22.731343 | 0.767842 | 0.16415 | 0 | 0 | 0 | 0 | 0.055112 | 0 | 0 | 0 | 0.008786 | 0 | 0 | 1 | 0.066667 | false | 0.022222 | 0.088889 | 0 | 0.155556 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a709bea76252538506a491da154f22aa286fe8 | 15,383 | py | Python | analysis/202102--dither_pattern_ghost/13_removal_test.py | rsiverd/ultracool | cbeb2e0e4aee0acc9f8ed2bde7ecdf8be5fa85a1 | [
"BSD-2-Clause"
] | null | null | null | analysis/202102--dither_pattern_ghost/13_removal_test.py | rsiverd/ultracool | cbeb2e0e4aee0acc9f8ed2bde7ecdf8be5fa85a1 | [
"BSD-2-Clause"
] | null | null | null | analysis/202102--dither_pattern_ghost/13_removal_test.py | rsiverd/ultracool | cbeb2e0e4aee0acc9f8ed2bde7ecdf8be5fa85a1 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
# vim: set fileencoding=utf-8 ts=4 sts=4 sw=4 et tw=80 :
#
# Attempt removal of fading residual ghost image from image set.
#
# Rob Siverd
# Created: 2021-02-16
# Last modified: 2021-02-16
#--------------------------------------------------------------------------
#**************************************************************************
#--------------------------------------------------------------------------
## Logging setup:
import logging
#logging.basicConfig(level=logging.DEBUG)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
#logger.setLevel(logging.DEBUG)
logger.setLevel(logging.INFO)
## Current version:
__version__ = "0.0.1"
## Optional matplotlib control:
#from matplotlib import use, rc, rcParams
#from matplotlib import use
#from matplotlib import rc
#from matplotlib import rcParams
#use('GTKAgg') # use GTK with Anti-Grain Geometry engine
#use('agg') # use Anti-Grain Geometry engine (file only)
#use('ps') # use PostScript engine for graphics (file only)
#use('cairo') # use Cairo (pretty, file only)
#rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
#rc('font',**{'family':'serif','serif':['Palatino']})
#rc('font',**{'sans-serif':'Arial','family':'sans-serif'})
#rc('text', usetex=True) # enables text rendering with LaTeX (slow!)
#rcParams['axes.formatter.useoffset'] = False # v. 1.4 and later
#rcParams['agg.path.chunksize'] = 10000
#rcParams['font.size'] = 10
## Python version-agnostic module reloading:
try:
reload # Python 2.7
except NameError:
try:
from importlib import reload # Python 3.4+
except ImportError:
from imp import reload # Python 3.0 - 3.3
## Modules:
#import argparse
import shutil
#import resource
#import signal
import glob
import os
import sys
import time
import numpy as np
#from numpy.lib.recfunctions import append_fields
#import datetime as dt
#from dateutil import parser as dtp
#import scipy.linalg as sla
#import scipy.signal as ssig
#import scipy.ndimage as ndi
#import scipy.optimize as opti
#import scipy.interpolate as stp
#import scipy.spatial.distance as ssd
import matplotlib.pyplot as plt
#import matplotlib.cm as cm
#import matplotlib.ticker as mt
#import matplotlib._pylab_helpers as hlp
#from matplotlib.colors import LogNorm
#import matplotlib.colors as mplcolors
#import matplotlib.collections as mcoll
#import matplotlib.gridspec as gridspec
#from functools import partial
#from collections import OrderedDict
#from collections.abc import Iterable
#import multiprocessing as mp
#np.set_printoptions(suppress=True, linewidth=160)
#import pandas as pd
#import statsmodels.api as sm
#import statsmodels.formula.api as smf
#from statsmodels.regression.quantile_regression import QuantReg
#import PIL.Image as pli
#import seaborn as sns
#import cmocean
#import theil_sen as ts
#import window_filter as wf
#import itertools as itt
_have_np_vers = float('.'.join(np.__version__.split('.')[:2]))
##--------------------------------------------------------------------------##
## Home-brew robust statistics:
try:
import robust_stats
reload(robust_stats)
rs = robust_stats
except ImportError:
logger.error("module robust_stats not found! Install and retry.")
sys.stderr.write("\nError! robust_stats module not found!\n"
"Please install and try again ...\n\n")
sys.exit(1)
## Fast FITS I/O:
#try:
# import fitsio
#except ImportError:
# logger.error("fitsio module not found! Install and retry.")
# sys.stderr.write("\nError: fitsio module not found!\n")
# sys.exit(1)
## Various from astropy:
try:
# import astropy.io.ascii as aia
import astropy.io.fits as pf
# import astropy.io.votable as av
# import astropy.table as apt
import astropy.time as astt
# import astropy.wcs as awcs
# from astropy import constants as aconst
# from astropy import coordinates as coord
# from astropy import units as uu
except ImportError:
logger.error("astropy module not found! Install and retry.")
sys.stderr.write("\nError: astropy module not found!\n")
sys.exit(1)
##--------------------------------------------------------------------------##
## Save FITS image with clobber (astropy / pyfits):
def qsave(iname, idata, header=None, padkeys=1000, **kwargs):
this_func = sys._getframe().f_code.co_name
parent_func = sys._getframe(1).f_code.co_name
sys.stderr.write("Writing to '%s' ... " % iname)
if header:
while (len(header) < padkeys):
header.append() # pad header
if os.path.isfile(iname):
os.remove(iname)
pf.writeto(iname, idata, header=header, **kwargs)
sys.stderr.write("done.\n")
##--------------------------------------------------------------------------##
## Save FITS image with clobber (fitsio):
#def qsave(iname, idata, header=None, **kwargs):
# this_func = sys._getframe().f_code.co_name
# parent_func = sys._getframe(1).f_code.co_name
# sys.stderr.write("Writing to '%s' ... " % iname)
# #if os.path.isfile(iname):
# # os.remove(iname)
# fitsio.write(iname, idata, clobber=True, header=header, **kwargs)
# sys.stderr.write("done.\n")
##--------------------------------------------------------------------------##
##------------------ Parse Command Line ----------------##
##--------------------------------------------------------------------------##
## Parse arguments and run script:
#class MyParser(argparse.ArgumentParser):
# def error(self, message):
# sys.stderr.write('error: %s\n' % message)
# self.print_help()
# sys.exit(2)
#
### Enable raw text AND display of defaults:
#class CustomFormatter(argparse.ArgumentDefaultsHelpFormatter,
# argparse.RawDescriptionHelpFormatter):
# pass
#
### Parse the command line:
#if __name__ == '__main__':
#
# # ------------------------------------------------------------------
# prog_name = os.path.basename(__file__)
# descr_txt = """
# PUT DESCRIPTION HERE.
#
# Version: %s
# """ % __version__
# parser = argparse.ArgumentParser(
# prog='PROGRAM_NAME_HERE',
# prog=os.path.basename(__file__),
# #formatter_class=argparse.RawTextHelpFormatter)
# description='PUT DESCRIPTION HERE.')
# #description=descr_txt)
# parser = MyParser(prog=prog_name, description=descr_txt)
# #formatter_class=argparse.RawTextHelpFormatter)
# # ------------------------------------------------------------------
# parser.set_defaults(thing1='value1', thing2='value2')
# # ------------------------------------------------------------------
# parser.add_argument('firstpos', help='first positional argument')
# parser.add_argument('-w', '--whatever', required=False, default=5.0,
# help='some option with default [def: %(default)s]', type=float)
# parser.add_argument('-s', '--site',
# help='Site to retrieve data for', required=True)
# parser.add_argument('-n', '--number_of_days', default=1,
# help='Number of days of data to retrieve.')
# parser.add_argument('-o', '--output_file',
# default='observations.csv', help='Output filename.')
# parser.add_argument('--start', type=str, default=None,
# help="Start time for date range query.")
# parser.add_argument('--end', type=str, default=None,
# help="End time for date range query.")
# parser.add_argument('-d', '--dayshift', required=False, default=0,
# help='Switch between days (1=tom, 0=today, -1=yest', type=int)
# parser.add_argument('-e', '--encl', nargs=1, required=False,
# help='Encl to make URL for', choices=all_encls, default=all_encls)
# parser.add_argument('-s', '--site', nargs=1, required=False,
# help='Site to make URL for', choices=all_sites, default=all_sites)
# parser.add_argument('remainder', help='other stuff', nargs='*')
# # ------------------------------------------------------------------
# # ------------------------------------------------------------------
# #iogroup = parser.add_argument_group('File I/O')
# #iogroup.add_argument('-o', '--output_file', default=None, required=True,
# # help='Output filename', type=str)
# #iogroup.add_argument('-R', '--ref_image', default=None, required=True,
# # help='KELT image with WCS')
# # ------------------------------------------------------------------
# # ------------------------------------------------------------------
# ofgroup = parser.add_argument_group('Output format')
# fmtparse = ofgroup.add_mutually_exclusive_group()
# fmtparse.add_argument('--python', required=False, dest='output_mode',
# help='Return Python dictionary with results [default]',
# default='pydict', action='store_const', const='pydict')
# bash_var = 'ARRAY_NAME'
# bash_msg = 'output Bash code snippet (use with eval) to declare '
# bash_msg += 'an associative array %s containing results' % bash_var
# fmtparse.add_argument('--bash', required=False, default=None,
# help=bash_msg, dest='bash_array', metavar=bash_var)
# fmtparse.set_defaults(output_mode='pydict')
# # ------------------------------------------------------------------
# # Miscellany:
# miscgroup = parser.add_argument_group('Miscellany')
# miscgroup.add_argument('--debug', dest='debug', default=False,
# help='Enable extra debugging messages', action='store_true')
# miscgroup.add_argument('-q', '--quiet', action='count', default=0,
# help='less progress/status reporting')
# miscgroup.add_argument('-v', '--verbose', action='count', default=0,
# help='more progress/status reporting')
# # ------------------------------------------------------------------
#
# context = parser.parse_args()
# context.vlevel = 99 if context.debug else (context.verbose-context.quiet)
# context.prog_name = prog_name
#
##--------------------------------------------------------------------------##
##--------------------------------------------------------------------------##
##--------------------------------------------------------------------------##
## Where to save results:
out_dir = 'fix_test'
if os.path.isdir(out_dir):
shutil.rmtree(out_dir)
os.mkdir(out_dir)
## Images to process:
img_dir = 'r17577216'
#img_list = sorted(glob.glob('%s/SPITZER_I1_*_hcfix.fits' % img_dir))
img_list = sorted(glob.glob('%s/SPITZER_I1_*_clean.fits' % img_dir))
## Stacked frame:
stack_file = 'zmed_i1_clip30.fits'
sdata, shdrs = pf.getdata(stack_file, header=True)
sdata -= np.median(sdata)
smed, siqrn = rs.calc_ls_med_IQR(sdata)
## Select bright pixels:
s_sigcut = 13.0
sbright = sdata > s_sigcut * siqrn
#smasked = sdata.copy()
#smasked[~sbright] = np.nan
#qsave('lookie.fits', smasked, overwrite=True)
##--------------------------------------------------------------------------##
## Loop over target images and attempt to fix:
i_sigcut = 75.0
data_list = []
hdrs_list = []
ratio_list = []
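## Per-frame model (sketch): frame ~ clean + r * stack. Estimate r as the
## robust median of frame/stack over pixels that are bright in the stack
## but not bright in the frame, then subtract r * stack from the frame.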
for ipath in img_list:
sys.stderr.write("Examining %s ... \n" % ipath)
ibase = os.path.basename(ipath)
fpath = os.path.join(out_dir, ibase)
idata, ihdrs = pf.getdata(ipath, header=True)
data_list.append(idata)
hdrs_list.append(ihdrs)
# identify bright pixels on target frame:
ibright = idata > i_sigcut * siqrn
rbright = sbright & ~ibright # safe to scale with
ghost_ratio = idata[rbright] / sdata[rbright]
rmed, rsig = rs.calc_ls_med_IQR(ghost_ratio)
ratio_list.append(rmed)
sys.stderr.write("rmed: %10.5f\n" % rmed)
sys.stderr.write("rsig: %10.5f\n" % rsig)
sys.stderr.write("\n")
ifixed = idata - rmed * sdata
qsave(fpath, ifixed)
tstamps = astt.Time([h['DATE_OBS'] for h in hdrs_list], scale='utc')
##--------------------------------------------------------------------------##
##--------------------------------------------------------------------------##
##--------------------------------------------------------------------------##
#plt.style.use('bmh') # Bayesian Methods for Hackers style
fig_dims = (10, 8)
fig = plt.figure(1, figsize=fig_dims)
plt.gcf().clf()
#fig, axs = plt.subplots(2, 2, sharex=True, figsize=fig_dims, num=1)
# sharex='col' | sharex='row'
#fig.frameon = False # disable figure frame drawing
#fig.subplots_adjust(left=0.07, right=0.95)
#ax1 = plt.subplot(gs[0, 0])
ax1 = fig.add_subplot(111)
#ax1 = fig.add_axes([0, 0, 1, 1])
#ax1.patch.set_facecolor((0.8, 0.8, 0.8))
ax1.grid(True)
#ax1.axis('off')
imtime_sec = 86400.0 * (tstamps.jd - tstamps.jd.min())
ax1.scatter(imtime_sec, ratio_list)
ax1.set_xlabel("Time (sec)")
ax1.set_ylabel("Ghost ratio (unitless)")
## Disable axis offsets:
#ax1.xaxis.get_major_formatter().set_useOffset(False)
#ax1.yaxis.get_major_formatter().set_useOffset(False)
#ax1.plot(kde_pnts, kde_vals)
#blurb = "some text"
#ax1.text(0.5, 0.5, blurb, transform=ax1.transAxes)
#ax1.text(0.5, 0.5, blurb, transform=ax1.transAxes,
# va='top', ha='left', bbox=dict(facecolor='white', pad=10.0))
# fontdict={'family':'monospace'}) # fixed-width
#colors = cm.rainbow(np.linspace(0, 1, len(plot_list)))
#for camid, c in zip(plot_list, colors):
# cam_data = subsets[camid]
# xvalue = cam_data['CCDATEMP']
# yvalue = cam_data['PIX_MED']
# yvalue = cam_data['IMEAN']
# ax1.scatter(xvalue, yvalue, color=c, lw=0, label=camid)
#mtickpos = [2,5,7]
#ndecades = 1.0 # for symlog, set width of linear portion in units of dex
#nonposx='mask' | nonposx='clip' | nonposy='mask' | nonposy='clip'
#ax1.set_xscale('log', basex=10, nonposx='mask', subsx=mtickpos)
#ax1.set_xscale('log', nonposx='clip', subsx=[3])
#ax1.set_yscale('symlog', basey=10, linthreshy=0.1, linscaley=ndecades)
#ax1.xaxis.set_major_formatter(formatter) # re-format x ticks
#ax1.set_ylim(ax1.get_ylim()[::-1])
#ax1.set_xlabel('whatever', labelpad=30) # push X label down
#ax1.set_xticks([1.0, 3.0, 10.0, 30.0, 100.0])
#ax1.set_xticks([1, 2, 3], ['Jan', 'Feb', 'Mar'])
#for label in ax1.get_xticklabels():
# label.set_rotation(30)
# label.set_fontsize(14)
#ax1.xaxis.label.set_fontsize(18)
#ax1.yaxis.label.set_fontsize(18)
#ax1.set_xlim(nice_limits(xvec, pctiles=[1,99], pad=1.2))
#ax1.set_ylim(nice_limits(yvec, pctiles=[1,99], pad=1.2))
#spts = ax1.scatter(x, y, lw=0, s=5)
##cbar = fig.colorbar(spts, orientation='vertical') # old way
#cbnorm = mplcolors.Normalize(*spts.get_clim())
#scm = plt.cm.ScalarMappable(norm=cbnorm, cmap=spts.cmap)
#scm.set_array([])
#cbar = fig.colorbar(scm, orientation='vertical')
#cbar = fig.colorbar(scm, ticks=cs.levels, orientation='vertical') # contours
#cbar.formatter.set_useOffset(False)
#cbar.update_ticks()
plot_name = 'ratio_decay.png'
fig.tight_layout() # adjust boundaries sensibly, matplotlib v1.1+
plt.draw()
fig.savefig(plot_name, bbox_inches='tight')
# cyclical colormap ... cmocean.cm.phase
# cmocean: https://matplotlib.org/cmocean/
######################################################################
# CHANGELOG (13_removal_test.py):
#---------------------------------------------------------------------
#
# 2021-02-16:
# -- Increased __version__ to 0.0.1.
# -- First created 13_removal_test.py.
#
| 37.703431 | 79 | 0.593512 | 1,882 | 15,383 | 4.72848 | 0.333688 | 0.025958 | 0.026745 | 0.004944 | 0.150803 | 0.112035 | 0.090909 | 0.077424 | 0.046634 | 0.041802 | 0 | 0.018887 | 0.149841 | 15,383 | 407 | 80 | 37.796069 | 0.661569 | 0.755249 | 0 | 0.095745 | 0 | 0 | 0.125263 | 0.00781 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010638 | false | 0 | 0.170213 | 0 | 0.180851 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a7c591d47067f6549e87406e4215a9225d03db | 647 | py | Python | fsu/__main__.py | jamesabel/fsu | 652e9f37a83e4df545dc785ea807c818ee04104a | [
"MIT"
] | null | null | null | fsu/__main__.py | jamesabel/fsu | 652e9f37a83e4df545dc785ea807c818ee04104a | [
"MIT"
] | null | null | null | fsu/__main__.py | jamesabel/fsu | 652e9f37a83e4df545dc785ea807c818ee04104a | [
"MIT"
] | null | null | null | import os
import argparse
from fsu import __application_name__, longest_path, longest_path_command
def fsu():
parser = argparse.ArgumentParser(prog=__application_name__)
parser.add_argument("-c", "--command", required=True, help="commands: {longest_path_command}")
parser.add_argument("-p", "--path", help="path")
args = parser.parse_args()
if args.command == longest_path_command:
lp = longest_path(args.path)
for p in [lp, os.path.abspath(lp)]:
print(f"{p} ({len(p)})")
else:
print(f"{args.command} is not a valid command (use -h for help)")
if __name__ == "__main__":
fsu()
| 28.130435 | 98 | 0.658423 | 87 | 647 | 4.563218 | 0.45977 | 0.138539 | 0.13602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.194745 | 647 | 22 | 99 | 29.409091 | 0.761996 | 0 | 0 | 0 | 0 | 0 | 0.204019 | 0.034003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.1875 | 0 | 0.25 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94a92c7220f442afdf93bb1104fa4c59061bf49f | 3,914 | py | Python | authors/apps/follows/views.py | andela/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 1 | 2019-12-25T18:59:34.000Z | 2019-12-25T18:59:34.000Z | authors/apps/follows/views.py | katherine95/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 26 | 2019-04-23T11:20:35.000Z | 2022-03-11T23:45:54.000Z | authors/apps/follows/views.py | katherine95/ah-the-jedi-backend | ba429dfcec577bd6d52052673c1c413835f65988 | [
"BSD-3-Clause"
] | 8 | 2019-05-21T06:54:34.000Z | 2019-11-18T19:45:22.000Z | from django.shortcuts import render
from rest_framework import status
from rest_framework.generics import (
RetrieveAPIView, CreateAPIView, DestroyAPIView)
from rest_framework.response import Response
from .models import UserFollow
from .utils import Utilities
from .serializers import FollowsSerializer, FollowersSerializer
from ..profiles.permissions import IsGetOrIsAuthenticated
from ..profiles.renderers import ProfileJSONRenderer
from ..profiles.serializers import ProfileSerializer
class UserFollowingRetrieveView(RetrieveAPIView):
"""
get:
Get users that the user follows.
"""
permission_classes = (IsGetOrIsAuthenticated,)
serializer_class = FollowsSerializer
def retrieve(self, request):
"""
get:
        Get profiles that the user follows
"""
user = Utilities.get_user(self, request.user.username)
following = Utilities.get_followers(
self, user=user, follower=request.user.id)
context = {"user": user}
serializer = self.serializer_class(
following, many=True, context=context)
response = {
"users": serializer.data,
"following": user.following.all().count()
}
return Response({"data": response},
status=status.HTTP_200_OK)
class UserFollowersRetrieveView(RetrieveAPIView):
"""
get:
Get users that follow the user's profile.
"""
permission_classes = (IsGetOrIsAuthenticated,)
serializer_class = FollowersSerializer
def retrieve(self, request):
"""
get:
        Get users that follow the user's profile
"""
user = Utilities.get_user(self, request.user.username)
follower = Utilities.get_followers(
self, user=user, following=request.user.id)
context = {"user": user}
serializer = self.serializer_class(
follower, many=True, context=context)
response = {
"users": serializer.data,
"followers": user.followers.all().count()
}
return Response({"data": response},
status=status.HTTP_200_OK)
class FollowUserView(CreateAPIView):
permission_classes = (IsGetOrIsAuthenticated,)
renderer_classes = (ProfileJSONRenderer,)
serializer_class = ProfileSerializer
def post(self, request, username):
"""
post:
Follow users
"""
user = Utilities.get_profile(self, username)
current_user = Utilities.get_user(self, username)
request_user = Utilities.get_user(self, request.user.username)
Utilities.validate_user(self, request, username)
context = {
"request": request.user.username,
"username": username
}
Utilities.follow_user(self, follower=request_user,
followee=current_user)
serializer = self.serializer_class(user, context=context)
return Response(serializer.data, status=status.HTTP_200_OK)
class UnfollowUserView(DestroyAPIView):
"""
delete:
Unfollow users
"""
permission_classes = (IsGetOrIsAuthenticated,)
renderer_classes = (ProfileJSONRenderer,)
serializer_class = ProfileSerializer
def delete(self, request, username):
"""
delete:
Unfollow users
"""
user = Utilities.get_profile(self, username)
current_user = Utilities.get_user(self, username)
request_user = Utilities.get_user(self, request.user.username)
context = {
"request": request.user.username,
"username": username
}
Utilities.unfollow_user(
self, follower=request_user, following=current_user)
serializer = self.serializer_class(user, context=context)
return Response(serializer.data, status=status.HTTP_200_OK)
| 26.268456 | 70 | 0.645631 | 365 | 3,914 | 6.79726 | 0.189041 | 0.053204 | 0.051592 | 0.048368 | 0.653366 | 0.546957 | 0.495768 | 0.495768 | 0.421604 | 0.371624 | 0 | 0.004171 | 0.264946 | 3,914 | 148 | 71 | 26.445946 | 0.858186 | 0.057486 | 0 | 0.526316 | 0 | 0 | 0.021101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.131579 | 0 | 0.421053 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ad6a21c0e36974f4e1b9968bd669337deebb18 | 3,987 | py | Python | server.py | Chase-Warwick/CMPUT404-Assignment-1 | b224c13309879d4894616267f396494ad37d1a9d | [
"Apache-2.0"
] | null | null | null | server.py | Chase-Warwick/CMPUT404-Assignment-1 | b224c13309879d4894616267f396494ad37d1a9d | [
"Apache-2.0"
] | null | null | null | server.py | Chase-Warwick/CMPUT404-Assignment-1 | b224c13309879d4894616267f396494ad37d1a9d | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
import socketserver
import os
from HTTP_Parser import HTTP_Parser
# Copyright 2013 Abram Hindle, Eddie Antonio Santos, Chase Warwick
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# Furthermore it is derived from the Python documentation examples thus
# some of the code is Copyright © 2001-2013 Python Software
# Foundation; All Rights Reserved
#
# http://docs.python.org/2/library/socketserver.html
#
# run: python freetests.py
# try: curl -v -X GET http://127.0.0.1:8080/
class MyWebServer(socketserver.BaseRequestHandler):
def handle(self):
"""Handles requests from client"""
self.redirect = {
"./www/deep" : "/deep/"
}
self.FORBIDDEN_PATHS = ['../']
data = self.request.recv(8192).strip().decode('utf-8')
print ("Got a request of: %s\n" % data)
self.HTTP_parser = HTTP_Parser(data)
if self.should_redirect():
path = self.HTTP_parser.get_path()
print("Client should be redirected to a new directory: " + path + " :Sending 301 status code\n")
response = self.HTTP_parser.construct_HTTP_response(301, self.redirect[path])
self.request.sendall(bytearray(response,'utf-8'))
return
if self.is_405_error():
print("Client made an invalid request: " + self.HTTP_parser.get_request_method() + ":Sending 405 status code\n")
response = self.HTTP_parser.construct_HTTP_response(405)
self.request.sendall(bytearray(response, 'utf-8'))
return
if self.is_404_error():
print("Client attempted to access nonexistant path: " + self.HTTP_parser.get_path() + ":Sending 404 status code\n")
response = self.HTTP_parser.construct_HTTP_response(404)
self.request.sendall(bytearray(response, 'utf-8'))
return
response = self.HTTP_parser.construct_HTTP_response(200)
self.request.sendall(bytearray(response,'utf-8'))
def should_redirect(self):
"""
This function checks against a variety of predefined URLs to see if there is a match
in the case that there is it returns True
"""
path = self.HTTP_parser.get_path()
return path in self.redirect
def is_404_error(self):
"""
This function checks if a 404 error should be thrown which occurs
in two cases, the path does not exist or the path given is outside of
www directory
"""
path = self.HTTP_parser.get_path()
if not os.path.exists(path):
return True
for forbidden_path in self.FORBIDDEN_PATHS:
if forbidden_path in path:
return True
return False
def is_405_error(self):
"""
This function returns true if a 405 error should be thrown
which occurs when the client uses a method which is not permitted
"""
method = self.HTTP_parser.get_request_method()
        return method != 'GET'
if __name__ == "__main__":
HOST, PORT = "localhost", 8080
socketserver.TCPServer.allow_reuse_address = True
# Create the server, binding to localhost on port 8080
server = socketserver.TCPServer((HOST, PORT), MyWebServer)
# Activate the server; this will keep running until you
# interrupt the program with Ctrl-C
server.serve_forever()
| 34.669565 | 127 | 0.651618 | 526 | 3,987 | 4.836502 | 0.38403 | 0.055031 | 0.060535 | 0.040094 | 0.241745 | 0.241745 | 0.155267 | 0.123035 | 0.105346 | 0.105346 | 0 | 0.028513 | 0.261099 | 3,987 | 114 | 128 | 34.973684 | 0.834691 | 0.365438 | 0 | 0.294118 | 0 | 0 | 0.120985 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.078431 | false | 0 | 0.058824 | 0 | 0.333333 | 0.078431 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ae2a18c46e0824f96fd46435cacfc8f3b0d962 | 3,542 | py | Python | polire/interpolate/idw/idw.py | patel-zeel/ALdeploy | 58212209c4b495db471ada0fd695eab38e2acfe8 | [
"BSD-3-Clause"
] | null | null | null | polire/interpolate/idw/idw.py | patel-zeel/ALdeploy | 58212209c4b495db471ada0fd695eab38e2acfe8 | [
"BSD-3-Clause"
] | null | null | null | polire/interpolate/idw/idw.py | patel-zeel/ALdeploy | 58212209c4b495db471ada0fd695eab38e2acfe8 | [
"BSD-3-Clause"
] | null | null | null | r"""
This is a module for IDW Spatial Interpolation
"""
import numpy as np
from ...utils.distance import haversine, euclidean
from ..base import Base
from copy import deepcopy
def is_row_in_array(row, arr):
return list(row) in arr.tolist()
def get_index(row, arr):
t1 = np.where(arr[:, 0] == row[0])
t2 = np.where(arr[:, 1] == row[1])
index = np.intersect1d(t1, t2)[0]
    # Coordinates are assumed to be unique, so the intersection contains exactly one index
return index
class Idw(Base):
"""A class that is declared for performing IDW Interpolation.
For more information on how this method works, kindly refer to
https://en.wikipedia.org/wiki/Inverse_distance_weighting
Parameters
----------
exponent : positive float, optional
        The rate at which the influence of source data points falls off
        with distance. The higher the exponent, the faster interpolated
        values decay as we move away from the data. Default value is 2.
Attributes
----------
Interpolated Values : {array-like, 2D matrix}, shape(resolution, resolution)
This contains all the interpolated values when the interpolation is performed
over a grid, instead of interpolation over a set of points.
X : {array-like, 2D matrix}, shape(n_samples, 2)
Set of all the coordinates available for interpolation.
y : array-like, shape(n_samples,)
Set of all the available values at the specified X coordinates.
result : array_like, shape(n_to_predict, )
Set of all the interpolated values when interpolating over a given
set of data points.
"""
def __init__(self, exponent=2, resolution="standard", coordinate_type="Euclidean"):
super().__init__(resolution, coordinate_type)
self.exponent = exponent
self.interpolated_values = None
self.X = None
self.y = None
self.result = None
if self.coordinate_type == 'Geographic':
self.distance = haversine
elif self.coordinate_type == 'Euclidean':
self.distance = euclidean
else:
raise NotImplementedError("Only Geographic and Euclidean Coordinates are available")
def _fit(self, X, y):
"""This function is for the IDW Class.
This is not expected to be called directly
"""
self.X = X
self.y = y
return self
def _predict_grid(self, x1lim, x2lim):
""" Gridded interpolation for natural neighbors interpolation. This function should not
be called directly.
"""
lims = (*x1lim, *x2lim)
x1min, x1max, x2min, x2max = lims
x1 = np.linspace(x1min, x1max, self.resolution)
x2 = np.linspace(x2min, x2max, self.resolution)
X1, X2 = np.meshgrid(x1, x2)
return self._predict(np.array([X1.ravel(), X2.ravel()]).T)
def _predict(self, X):
"""The function call to predict using the interpolated data
in IDW interpolation. This should not be called directly.
"""
result = np.zeros(X.shape[0])
for i in range(len(X)):
point = X[i]
# Preserve point estimates. This is mandatory in IDW
flag = is_row_in_array(point, self.X)
if flag:
index = get_index(point, self.X)
result[i] = self.y[index]
else:
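                # Standard IDW: weight each known point by inverse distance raised to
                # the exponent, then take the weighted mean of the known values.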
weights = 1 / (self.distance(point, self.X) ** self.exponent)
result[i] = np.multiply(self.y.reshape(self.y.shape[0],), weights).sum() / (weights.sum())
self.result = result
return self.result
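
# Example usage (a sketch): the Base class presumably exposes public
# fit/predict wrappers around the _fit/_predict hooks above.
# idw = Idw(exponent=2, coordinate_type="Euclidean")
# idw.fit(X_train, y_train)    # X_train: (n_samples, 2), y_train: (n_samples,)
# y_hat = idw.predict(X_query)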
| 34.057692 | 106 | 0.627329 | 464 | 3,542 | 4.717672 | 0.357759 | 0.015989 | 0.010964 | 0.015075 | 0.068524 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014769 | 0.273574 | 3,542 | 103 | 107 | 34.38835 | 0.835989 | 0.398645 | 0 | 0.04 | 0 | 0 | 0.046452 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.08 | 0.02 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ae725011a08e4c8517cf30bebcb0c980693e6f | 306 | py | Python | rucken_todo/helpers/jwt_utils.py | site15/rucken-todo-django | bf30d4ad43be22bd8383447e07b151d6dc99da72 | [
"MIT"
] | 3 | 2018-06-04T07:36:59.000Z | 2019-10-07T05:33:56.000Z | rucken_todo/helpers/jwt_utils.py | site15/rucken-todo-django-example | bf30d4ad43be22bd8383447e07b151d6dc99da72 | [
"MIT"
] | 428 | 2017-11-24T20:19:39.000Z | 2022-03-26T04:13:25.000Z | rucken_todo/helpers/jwt_utils.py | rucken/todo-django | bf30d4ad43be22bd8383447e07b151d6dc99da72 | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from rucken_todo.serializers import AccountSerializer
def jwt_response_payload_handler(token, user=None, request=None):
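    """Build the JWT login response payload: the token plus the serialized account of the authenticated user."""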
user_info = AccountSerializer(user, context={'request': request})
return {
'token': token,
'user': user_info.data
}
| 27.818182 | 69 | 0.732026 | 35 | 306 | 6.085714 | 0.628571 | 0.084507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 306 | 10 | 70 | 30.6 | 0.845238 | 0 | 0 | 0 | 0 | 0 | 0.052288 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ae9dc93f1afd0836d68bb47efd279b23b7bf97 | 4,464 | py | Python | Bitrate/bitrate_api.py | AnirudhJanagam/Applied-Network-Management | 95ceac895976ef55dd1889660920de70d1f53012 | [
"Apache-2.0"
] | 1 | 2021-01-21T06:15:29.000Z | 2021-01-21T06:15:29.000Z | Bitrate/bitrate_api.py | AnirudhJanagam/Applied-Network-Management | 95ceac895976ef55dd1889660920de70d1f53012 | [
"Apache-2.0"
] | null | null | null | Bitrate/bitrate_api.py | AnirudhJanagam/Applied-Network-Management | 95ceac895976ef55dd1889660920de70d1f53012 | [
"Apache-2.0"
] | null | null | null | import os, sys
import subprocess
from subprocess import Popen
from bitrate_db import influx
from flask import Flask
from flask import request, Response, make_response
from functools import wraps
import threading
from threading import Thread
dpmi = Flask(__name__)
interface="ens4"
directory="/home/ajheartnett/consumer-bitrate"
password="aj_heartnett001"
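# NOTE: hardcoding a sudo password in source is risky; reading it from an environment variable would be safer.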
global mainstream
mainstream=[]
global streams
streams=[]
def auth(w):
@wraps(w)
def q(*args, **kwargs):
r = request.authorization
if r and r.username == 'dpmi' and r.password == 'dpmi':
return w(*args, **kwargs)
return make_response('\n...Could not verify...\nPlease check the credentials\n', 401, {'WWW-Authenticate' : 'Basic realm = "login required"'})
return q
def pkill():
sudo_Password = password
stop_command = 'sudo pkill bitrate'
kill = os.system('echo %s|sudo -S %s' % (sudo_Password, stop_command))
return
def bitrate(str):
os.chdir(directory)
bitrate=subprocess.Popen(["unbuffer","./bitrate","-i",interface,str],stdout=subprocess.PIPE)
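    # "unbuffer" (from the expect package) keeps the child's stdout line-buffered
    # so the influx thread can stream readings as they arrive.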
influx_thread=threading.Thread(target=influx,args=(bitrate.stdout,str,))
influx_thread.start()
@dpmi.route('/startstream/<stream>', methods=['GET'])
@auth
def main(stream):
global mainstream
mul=stream.split(',')
if len(mul)>1:
return '\n...start with only one stream, use "addstream" for adding multiple streams...\n\n'
else:
if not streams:
if stream in mainstream:
return '\n... bitrate stream %s is already running...\n\n' %stream
else:
streams.append(stream)
mainstream=mainstream+streams
bitrate_thread=threading.Thread(target=bitrate,args=(stream,))
                bitrate_thread.daemon = True
bitrate_thread.start()
return '\n...bitrate stream %s started...\n\n' %stream
elif stream in mainstream:
return '\n... bitrate stream %s is already running...\n\n' %stream
else:
return '\n...a stream has already been started, use "addstream" to add the streams...\n\n'
@dpmi.route('/showstream', methods=['GET'])
@auth
def show():
if not mainstream:
return '\n...No streams available...\n\n'
else:
show=" ".join(str(S) for S in mainstream)
return '\n...running bitrate streams %s...\n\n' %show
@dpmi.route('/addstream/<add>', methods=['GET'])
@auth
def add(add):
global mainstream
addstream=add.split(',')
b=",".join(addstream)
already=[]
already=list(set(addstream).intersection(mainstream))
stralready=" ".join(str(j) for j in already)
new=[]
new=list(set(addstream)-set(already))
strnew=" ".join(str(i) for i in new)
mainstream=mainstream+new
for s in new:
bitrate_add_thread=threading.Thread(target=bitrate,args=(s,))
        bitrate_add_thread.daemon = True
bitrate_add_thread.start()
if not already:
return '\n...adding bitrate streams %s...\n\n' %strnew
else:
return '\n...stream %s already running...\n...streams %s added...\n\n' %(stralready,strnew)
@dpmi.route('/deletestream/<delet>', methods=['GET'])
@auth
def delete(delet):
global mainstream
delet=delet.split(',')
suredel=[]
suredel=list(set(delet).intersection(mainstream))
strsuredel=",".join(str(l) for l in suredel)
cantdel=[]
cantdel=list(set(delet)-set(suredel))
strcantdel=" ".join(str(m) for m in cantdel)
mainstream=list(set(mainstream)-set(suredel))
strmainstream=",".join(str(k) for k in mainstream)
if not suredel:
return '\n...stream(s) not available to delete...\n\n'
else:
pkill()
for h in mainstream:
bitrate_add_thread=threading.Thread(target=bitrate,args=(h,))
            bitrate_add_thread.daemon = True
bitrate_add_thread.start()
        if set(suredel).intersection(streams):
del streams[:]
if not cantdel:
return "\n...bitrate stream %s deleted...\n\n" %(strsuredel)
else:
return "\n...bitrate stream %s deleted...\n...bitrate stream %s not available to delete...\n\n" %(strsuredel,strcantdel)
@dpmi.route('/changestream/<stream>', methods=['GET'])
@auth
def change(stream):
global ch
global mainstream
ch=stream
if ch in mainstream:
return '\n...bitrate stream %s already running, change to another stream...\n\n' %ch
else:
stop()
del streams[:]
mainstream=list(set(mainstream)-set(streams))
main(ch)
return '\n...bitrate stream changed to %s...\n\n' %ch
@dpmi.route('/stop', methods=['GET'])
@auth
def stop():
pkill()
del (mainstream[:],streams[:])
return "\n...bitrate stream killed...\n\n"
if __name__ == "__main__":
dpmi.run(host='localhost', port=5000, debug=True)
| 25.363636 | 144 | 0.691084 | 627 | 4,464 | 4.861244 | 0.236045 | 0.034449 | 0.041339 | 0.052493 | 0.221785 | 0.168963 | 0.156496 | 0.12664 | 0.076115 | 0.076115 | 0 | 0.0034 | 0.143593 | 4,464 | 175 | 145 | 25.508571 | 0.793879 | 0 | 0 | 0.214815 | 0 | 0.022222 | 0.255152 | 0.03181 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0.02963 | 0.066667 | 0 | 0.281481 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94af70908d2b3dd85709ae398d03903d5cfe364f | 3,438 | py | Python | tutorials/ot-oti-find-tree.py | mtholder/peyotl | bbbe0e828c0cd706a11cd1b25a739d4603cbbe63 | [
"BSD-2-Clause"
] | 6 | 2015-01-11T06:24:38.000Z | 2017-11-23T17:18:12.000Z | tutorials/ot-oti-find-tree.py | OpenTreeOfLife/peyotl | bbbe0e828c0cd706a11cd1b25a739d4603cbbe63 | [
"BSD-2-Clause"
] | 60 | 2015-01-09T10:30:58.000Z | 2020-05-20T17:08:36.000Z | tutorials/ot-oti-find-tree.py | OpenTreeOfLife/peyotl | bbbe0e828c0cd706a11cd1b25a739d4603cbbe63 | [
"BSD-2-Clause"
] | 3 | 2015-09-03T15:29:00.000Z | 2015-10-08T20:03:30.000Z | #!/usr/bin/env python
from __future__ import print_function
'''Simple command-line tool that wraps OTI to get trees for an argument which is a property value pair
e.g. python ot-oti-find-tree.py '{"ot:ottTaxonName": "Bos"}'
which is described at https://github.com/OpenTreeOfLife/opentree/wiki/Open-Tree-of-Life-APIs#find_trees
'''
import sys
import json
def ot_find_tree(arg_dict, exact=True, verbose=False, oti_wrapper=None):
"""Uses a peyotl wrapper around an Open Tree web service to get a list of trees including values `value` for a given property to be searched on `porperty`.
The oti_wrapper can be None (in which case the default wrapper from peyotl.sugar will be used.
All other arguments correspond to the arguments of the web-service call.
"""
if oti_wrapper is None:
from peyotl.sugar import oti
oti_wrapper = oti
return oti_wrapper.find_trees(arg_dict,
exact=exact,
verbose=verbose,
wrap_response=True)
def print_matching_trees(arg_dict, tree_format, exact, verbose):
"""The `TreeRef` instance returned by the oti.find_trees(... wrap_response=True)
can be used as an argument to the phylesystem_api.get call.
If you pass in a string (instead of a TreeRef), the string will be interpreted as a study ID
"""
from peyotl.sugar import phylesystem_api
tree_list = ot_find_tree(arg_dict, exact=exact, verbose=verbose)
for tree_ref in tree_list:
print(tree_ref)
print(phylesystem_api.get(tree_ref, format=tree_format))
def main(argv):
"""This function sets up a command-line option parser and then calls print_matching_trees
to do all of the real work.
"""
import argparse
description = 'Uses Open Tree of Life web services to try to find a tree with the value property pair specified. ' \
'setting --fuzzy will allow fuzzy matching'
parser = argparse.ArgumentParser(prog='ot-get-tree', description=description)
parser.add_argument('arg_dict', type=json.loads, help='name(s) for which we will try to find OTT IDs')
parser.add_argument('--property', default=None, type=str, required=False)
parser.add_argument('--fuzzy', action='store_true', default=False,
required=False) # exact matching and verbose not working atm...
parser.add_argument('--verbose', action='store_true', default=False, required=False)
parser.add_argument('-f', '--format', type=str, default='newick',
help='Format of the tree. Should be "newick", "nexson", "nexml", or "nexus"')
try:
args = parser.parse_args(argv)
arg_dict = args.arg_dict
exact = not args.fuzzy
verbose = args.verbose
tree_format = args.format.lower()
except:
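        # The bare except deliberately also catches SystemExit raised by argparse
        # on bad arguments, falling back to a demonstration query.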
arg_dict = {'ot:ottTaxonName': 'Chamaedorea frondosa'}
sys.stderr.write('Running a demonstration query with {}\n'.format(arg_dict))
exact = True
verbose = False
tree_format = 'newick'
if tree_format not in ('newick', 'nexson', 'nexml', 'nexus'):
raise ValueError('Unrecognized format "{}"'.format(tree_format))
print_matching_trees(arg_dict, tree_format, exact=exact, verbose=verbose)
if __name__ == '__main__':
try:
main(sys.argv[1:])
except Exception as x:
sys.exit('{}\n'.format(str(x)))
| 45.236842 | 159 | 0.670157 | 482 | 3,438 | 4.643154 | 0.356846 | 0.031278 | 0.02681 | 0.032172 | 0.154156 | 0.133155 | 0.071492 | 0.035746 | 0 | 0 | 0 | 0.000377 | 0.228621 | 3,438 | 75 | 160 | 45.84 | 0.843514 | 0.213496 | 0 | 0.040816 | 0 | 0 | 0.198821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.122449 | 0 | 0.204082 | 0.102041 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94af9687241b62e54f2ba2743065ae7f38faf540 | 5,817 | py | Python | negmgan/model_Imeans.py | junzhuang-code/NEGMGAN | aae8912cf03e3590abc81f66a255b891cee4443d | [
"MIT"
] | null | null | null | negmgan/model_Imeans.py | junzhuang-code/NEGMGAN | aae8912cf03e3590abc81f66a255b891cee4443d | [
"MIT"
] | null | null | null | negmgan/model_Imeans.py | junzhuang-code/NEGMGAN | aae8912cf03e3590abc81f66a255b891cee4443d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@title: Non-Exhaustive Gaussian Mixture Generative Adversarial Networks (NE-GM-GAN)
@topic: I-means Model
@author: Jun Zhuang, Mohammad Al Hasan
@ref:
https://math.stackexchange.com/questions/20593/calculate-variance-from-a-stream-of-sample-values
"""
import numpy as np
from utils import compute_loss
class Imeans():
def denoising(self, alist, ts=0.5):
"""
@topic: Denoising Activation Function
@input: 1D list, threshold(float); @output: 1D list.
"""
        if ts > 1 or ts < 0:
            raise ValueError("Given threshold should be in the range of [0, 1].")
list_dn = []
#list_max, list_min = max(alist), min(alist)
list_max, list_min = max(alist), 0
for i in range(len(alist)):
# normalize the data
i_nm = (alist[i] - list_min) / (list_max - list_min)
# filter the data with given threshold
if i_nm > ts:
list_dn.append(1)
else:
list_dn.append(0)
return list_dn
def testing(self, x, i, n_round):
"""Output the information in n_round"""
        if n_round <= 0:
            raise ValueError("n_round must be larger than zero.")
if i % n_round == 0:
print(x)
def imeans(self, X, mu, cov, N, Z=3, WS=100, verbose=True):
"""
@topic: I-means algorithm: detect the number of new cluster.
@input: X: a batch of testing points (array);
mu: mean of original clusters (list of list); e.g. mu = [[mu0], [mu1], [mu2], ...]
cov: covariance of original clusters (list of list); e.g. cov = [[cov0], [cov1], [cov2], ...]
N: the number of samples in original clusters (list of int); N = [n0, n1, n2, ...]
Z: the value for "Z sigma rule" based on the test of confidence interval (int);
WS: the number of epochs in the warm-up stage for learning beta prior knowledge (int).
@output: k_new: the number of new cluster (int).
"""
        # Initialize parameters
mu = list(mu)
        sigma = [np.sqrt(np.diag(cov[i])) for i in range(len(cov))]  # per-cluster std-dev vectors from the covariance diagonals
ts = [compute_loss(mu[i]+Z*sigma[i], mu[i]) for i in range(len(mu)) if len(mu)==len(sigma)] # threshold list (3,)
N_test = list(np.zeros_like(N)) # empty list for storing the number of testing clusters
N_loss = [[] for i in range(len(N))] # collect the historical loss_{min} of existing clusters
N_sp = [[1, 1] for i in range(len(N))] # store the shape parameters [alpha, beta]
for i in range(len(X)): # for each testing point in a batch
if verbose:
self.testing("Round {0}: ".format(i), i, 100)
# Compute the loss to each cluster and find out the loss_{min}.
loss_k_list = []
for k in range(len(mu)):
loss_k = compute_loss(X[i], mu[k])
loss_k_list.append(loss_k)
if verbose:
self.testing("The loss to {0} clusters: \n {1}".format(len(loss_k_list), loss_k_list), i, 100)
loss_min = min(loss_k_list) # select the min value from loss_k_list.
nidx = loss_k_list.index(loss_min) # return the index of loss_min.
# Select the threshold TS
if len(N_loss[nidx]) <= WS:
TS = ts[nidx] # select TS based on "Z sigma rule" (Z=3).
ts[nidx] = compute_loss(mu[nidx]+Z*sigma[nidx], mu[nidx]) # Update TS
else:
# Compute the theta_MAP for "nidx" cluster: theta_MAP = alpha / (alpha + beta)
theta_MAP = N_sp[nidx][0] / (N_sp[nidx][0] + N_sp[nidx][1])
ts_idx = int(len(N_loss[nidx])*(1 - theta_MAP)) # compute the threshold TS index based on theta_MAP.
TS = N_loss[nidx][ts_idx] # select the "ts_idx"-th norm in "N_loss" as threshold.
# Make a decision
if loss_min <= TS: # if loss_min < TS: Xi belongs to cluster[nidx].
# Update mu and sigma in streaming data
mu_old = mu[nidx]
# mu_{n+1} = mu_{n} + (x_{n+1} - mu_{n})/(n+1)
mu[nidx] = mu_old + (X[i] - mu[nidx])/(N[nidx]+1)
# v_{n+1} = v_{n} + (x_{n+1} - mu_{n})*(x_{n+1} - mu_{n+1}); sigma_{n+1} = √[v_{n+1}/n]
sigma[nidx] = np.sqrt(((sigma[nidx]**2)*N[nidx] + (X[i] - mu_old)*(X[i] - mu[nidx]))/N[nidx])
N[nidx] = N[nidx] + 1
N_test[nidx] = N_test[nidx] + 1
N_loss[nidx].append(loss_min) # store the loss_min to corresponding clusters.
N_loss[nidx].sort() # sort the list of loss_min.
N_sp[nidx][1] = N_sp[nidx][1] + 1 # beta+1
if verbose:
self.testing("The number of samples in cluster {0}: {1}.".format(nidx, N[nidx]), i, 50)
else: # if loss_min > TS: Xi belongs to new cluster.
mu.append(X[i]) # assign current Xi as new mean vector
sigma.append(np.zeros_like(X[i])) # the sigma is 0 for only one point
ts.append(np.mean(ts)) # use the mean of ts list as the initial threshold of new point
N.append(1)
N_test.append(1)
N_loss.append([loss_min]) # store loss_min to new entry
N_sp.append([1,1]) # initialize a beta distribution for new cluster
N_sp[nidx][0] = N_sp[nidx][0] + 1 # alpha+1
# Filter the noise inside predicted result
if verbose:
print("Predicted clusters and corresponding numbers: \n", N_test)
N_test_dn = self.denoising(N_test, 0.3)
k_new = sum(N_test_dn)
return k_new
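
# Example usage (a sketch): mu, cov and N would come from a Gaussian mixture
# fitted on the known classes; X is a batch of test points.
# detector = Imeans()
# k_new = detector.imeans(X_test, mu, cov, N, Z=3, WS=100, verbose=False)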
| 48.882353 | 121 | 0.552347 | 882 | 5,817 | 3.52381 | 0.240363 | 0.029279 | 0.022523 | 0.021236 | 0.151223 | 0.098777 | 0.064672 | 0.048263 | 0.007079 | 0 | 0 | 0.022424 | 0.31769 | 5,817 | 118 | 122 | 49.29661 | 0.760393 | 0.413959 | 0 | 0.1 | 0 | 0 | 0.066154 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.028571 | 0 | 0.142857 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94b2972116bf954df018e058f59806dfa116f612 | 9,733 | py | Python | board.py | jimages/Surakarta-AI-Core | 6b6e5ddde7337676212cff0ee9f5c7ac2cb01f06 | [
"MIT"
] | 1 | 2021-01-04T12:08:29.000Z | 2021-01-04T12:08:29.000Z | board.py | jimages/Surakarta-AI-Core | 6b6e5ddde7337676212cff0ee9f5c7ac2cb01f06 | [
"MIT"
] | null | null | null | board.py | jimages/Surakarta-AI-Core | 6b6e5ddde7337676212cff0ee9f5c7ac2cb01f06 | [
"MIT"
] | null | null | null | from status import Chess, GameStatus, Direction
class Action(object):
def __init__(self, x, y, eat_pos=None, direction=None):
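        # Exactly one of eat_pos and direction is expected:
        # a directional move, or a capture at eat_pos.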
self.x = x
self.y = y
if direction is not None:
self.is_move = True
self.direction = direction
self.to_x = None
self.to_y = None
else:
self.is_move = False
self.direction = None
self.to_x = eat_pos[0]
self.to_y = eat_pos[1]
def __str__(self):
if self.direction is not None:
s = 'm %d %d ' % (self.x, self.y)
dir_dict = {
Direction.Up: 'u',
Direction.Down: 'd',
Direction.Left: 'l',
Direction.Right: 'r',
Direction.LeftUp: 'lu',
Direction.LeftDown: 'ld',
Direction.RightUp: 'ru',
Direction.RightDown: 'rd'
}
s += dir_dict[self.direction]
else:
s = 'e %d %d %d %d' % (self.x, self.y, self.to_x, self.to_y)
return s
def __eq__(self, obj):
if obj is None:
return False
else:
return self.x == obj.x and self.y == obj.y \
and self.is_move == obj.is_move \
and self.direction == obj.direction \
and self.to_x == obj.to_x and self.to_y == obj.to_y
class Board(object):
'''
A class of chess board for Surakarta game.
'''
def __init__(self):
self.__board = [([Chess.Null] * 6) for i in range(6)]
self.__status = GameStatus.RedMoving
self.new_game()
def __str__(self):
s = ''
for i in self.__board:
for j in i:
if j == Chess.Null:
s += '- '
elif j == Chess.Red:
s += 'R '
else:
s += 'B '
s += '\n'
return s.rstrip('\n')
def __eq__(self, other):
for i in range(6):
for j in range(6):
if self.__board[i][j] != other.__board[i][j]:
return False
return True
def __check_movable(self, x, y):
        '''
        A piece cannot move if the game is already decided
        or if it is currently the opponent's turn.
        '''
chess = self.get_chess(x, y)
if chess == Chess.Null:
return False
if self.__status in [GameStatus.RedWon, GameStatus.BlackWon]:
return False
if chess == Chess.Black and self.__status == GameStatus.RedMoving:
return False
if chess == Chess.Red and self.__status == GameStatus.BlackMoving:
return False
return True
def __get_eat_pos(self, x, y, direction, chess, arc_count, original_x, original_y):
        '''Find the square that the piece at (x, y) can capture along the given direction.'''
if chess == Chess.Null:
return None, None
        # Pieces in the four corners can never capture
if (x, y) in [(0, 0), (0, 5), (5, 0), (5, 5)]:
return None, None
success, x, y = self.__get_target_pos(x, y, direction)
if not success:
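            # The step left the board: follow the arc. pos_list maps each
            # off-board exit coordinate back to the square where the arc
            # re-enters, and arc_count records that an arc was traversed.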
pos_list = [
(1, -1), (2, -1), (2, 6), (1, 6),
(4, -1), (3, -1), (3, 6), (4, 6),
(-1, 1), (-1, 2), (6, 2), (6, 1),
(-1, 4), (-1, 3), (6, 3), (6, 4)
]
x_dir = Direction.Down if y <= 2 else Direction.Up
y_dir = Direction.Right if x <= 2 else Direction.Left
if x == -1:
return self.__get_eat_pos(pos_list[y - 1][0],
pos_list[y - 1][1], x_dir, chess, arc_count + 1, original_x, original_y)
elif x == 6:
return self.__get_eat_pos(pos_list[y + 3][0],
pos_list[y + 3][1], x_dir, chess, arc_count + 1, original_x, original_y)
elif y == -1:
return self.__get_eat_pos(pos_list[x + 7][0],
pos_list[x + 7][1], y_dir, chess, arc_count + 1, original_x, original_y)
else: # y == 6
return self.__get_eat_pos(pos_list[x + 11][0],
pos_list[x + 11][1], y_dir, chess, arc_count + 1, original_x, original_y)
else:
new_chess = self.get_chess(x, y)
            # Note the special case: the piece may travel the arc back to its own starting square.
if new_chess == chess and (x != original_x or y != original_y):
return None, None
elif new_chess == Chess.Null:
return self.__get_eat_pos(x, y, direction, chess, arc_count, original_x, original_y)
else:
return (x, y) if arc_count else (None, None)
def __update_status(self):
'''
Update the status of current game.
'''
red, black = 0, 0
for i in self.__board:
for j in i:
if j == Chess.Red:
red += 1
elif j == Chess.Black:
black += 1
if red == 0:
self.__status = GameStatus.BlackWon
elif black == 0:
self.__status = GameStatus.RedWon
elif self.__status == GameStatus.RedMoving:
self.__status = GameStatus.BlackMoving
elif self.__status == GameStatus.BlackMoving:
self.__status = GameStatus.RedMoving
@staticmethod
def __get_target_pos(x, y, direction):
'''
Get the target position of giving position move along the direction.
'''
if direction & Direction.Up:
y -= 1
elif direction & Direction.Down:
y += 1
if direction & Direction.Left:
x -= 1
elif direction & Direction.Right:
x += 1
success = x in range(6) and y in range(6)
return success, x, y
@property
def status(self):
'''
Return the status of current game.
'''
return self.__status
@property
def won(self):
'''
Return whether the red or black has already won.
'''
return self.__status == GameStatus.RedWon \
or self.__status == GameStatus.BlackWon
@property
def board_size(self):
'''
Return the size of board.
'''
return len(self.__board)
def new_game(self):
'''
Reset the whole board and start a new game.
'''
for i in range(6):
if i < 2:
for j in range(6):
self.__board[i][j] = Chess.Black
elif i < 4:
for j in range(6):
self.__board[i][j] = Chess.Null
else:
for j in range(6):
self.__board[i][j] = Chess.Red
self.__status = GameStatus.RedMoving
def get_chess(self, x, y):
'''
Get the status of specific chess on board.
'''
if x not in range(6) or y not in range(6):
return Chess.Null
return self.__board[y][x]
def can_move(self, x, y, direction):
'''
Check if chess on (x, y) can move with giving direction.
'''
if not self.__check_movable(x, y):
return False
success, x, y = self.__get_target_pos(x, y, direction)
if not success:
return False
if self.get_chess(x, y) != Chess.Null:
return False
return True
def get_can_move(self, x, y):
'''
        Get all directions in which the piece at (x, y) can move.
'''
dir_list = []
for i in Direction:
if self.can_move(x, y, i):
dir_list.append(i)
return dir_list
def get_can_eat(self, x, y):
        '''Get all positions that the piece at (x, y) can capture.'''
if not self.__check_movable(x, y):
return []
chess_list = []
chess = self.get_chess(x, y)
left = self.__get_eat_pos(x, y, Direction.Left, chess, 0, x, y)
right = self.__get_eat_pos(x, y, Direction.Right, chess, 0, x, y)
up = self.__get_eat_pos(x, y, Direction.Up, chess, 0, x, y)
down = self.__get_eat_pos(x, y, Direction.Down, chess, 0, x, y)
if left[0] is not None:
chess_list.append(left)
if right[0] is not None:
chess_list.append(right)
if up[0] is not None:
chess_list.append(up)
if down[0] is not None:
chess_list.append(down)
return chess_list
def player_move(self, x, y, direction):
'''
Let chess on (x, y) move along the direction.
'''
if not self.__check_movable(x, y):
return False
success, nx, ny = self.__get_target_pos(x, y, direction)
if not success:
return False
if self.get_chess(nx, ny) != Chess.Null:
return False
self.__board[ny][nx] = self.__board[y][x]
self.__board[y][x] = Chess.Null
self.__update_status()
return True
def player_eat(self, x, y, eat_x, eat_y):
chess_list = self.get_can_eat(x, y)
if (eat_x, eat_y) not in chess_list:
return False
chess = self.get_chess(x, y)
self.__board[eat_y][eat_x] = chess
self.__board[y][x] = Chess.Null
self.__update_status()
return True
def apply_action(self, action):
'''
Apply an action to board.
'''
if action.is_move:
return self.player_move(action.x, action.y, action.direction)
else:
return self.player_eat(action.x, action.y,
action.to_x, action.to_y)
# some test
if __name__ == '__main__':
board = Board()
print('current board')
print(board)
print('current status:', board.status)
| 32.228477 | 115 | 0.494606 | 1,248 | 9,733 | 3.642628 | 0.107372 | 0.017598 | 0.052794 | 0.027717 | 0.375715 | 0.284866 | 0.24593 | 0.199956 | 0.16894 | 0.16894 | 0 | 0.017279 | 0.393507 | 9,733 | 301 | 116 | 32.335548 | 0.752838 | 0.059797 | 0 | 0.285088 | 0 | 0 | 0.008931 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092105 | false | 0 | 0.004386 | 0 | 0.285088 | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94b31dcd01a9c5b2acc1e8e7cd78ecaf4e1b79f0 | 1,422 | py | Python | plotting/plot_seed_variation/prepare_data_for_r.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | 4 | 2020-10-12T14:26:36.000Z | 2021-08-19T17:26:00.000Z | plotting/plot_seed_variation/prepare_data_for_r.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | null | null | null | plotting/plot_seed_variation/prepare_data_for_r.py | kienpt/site_discovery_public | 61440a8400bcc018c4dd9f8d2a810971ef548d8e | [
"Apache-1.1"
] | null | null | null | import argparse
def get_precision_values(input_file):
seednumbs = []
precs = []
with open(input_file) as lines:
for line in lines:
if "RESULTS_SEEDNUMB" in line:
values = line.strip().split()
seednumbs.append(str(values[1]))
if "RESULTS_AGGREGATION" in line:
tokens = line.strip().split(',')
mean, median, prec = float(tokens[2]), float(tokens[3]), float(tokens[4])
precs.append(prec)
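    # The log appears to list results from the largest seed count downward;
    # reverse both lists so they run in ascending order.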
seednumbs.reverse()
precs.reverse()
return seednumbs, precs
def prepare_data(infile, domain, outputdir):
seednumbs, precs = get_precision_values(infile)
# P@K
fname = outputdir + "/prec_" + domain + ".csv"
with open(fname, 'w') as f:
f.write(','.join(seednumbs) + '\n')
f.write(','.join([str(p) for p in precs]) + '\n')
def main():
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputfile", help="input file with data to plot", type=str)
parser.add_argument("-o", "--outputdir", help="output directory", type=str)
#parser.add_argument("-t", "--plottype", help="plot type: ['prec', 'median', 'mean']", type=str)
parser.add_argument("-d", "--domain", help="domain name", type=str)
args = parser.parse_args()
prepare_data(args.inputfile, args.domain, args.outputdir)
if __name__=='__main__':
main()
| 33.069767 | 100 | 0.600563 | 175 | 1,422 | 4.742857 | 0.405714 | 0.043373 | 0.081928 | 0.057831 | 0.086747 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003683 | 0.236287 | 1,422 | 42 | 101 | 33.857143 | 0.760589 | 0.068917 | 0 | 0 | 0 | 0 | 0.115064 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.0625 | 0 | 0.1875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94b49b9436654c2b1de48f28ac3c2c9ba73a4771 | 722 | py | Python | python/baekjoon/step/19-divide-and-conquer/quad-tree.py | bum12ark/algorithm | b6e262b0c29a8b5fb551db5a177a40feebc411b4 | [
"MIT"
] | 1 | 2022-03-06T03:49:31.000Z | 2022-03-06T03:49:31.000Z | python/baekjoon/step/19-divide-and-conquer/quad-tree.py | bum12ark/algorithm | b6e262b0c29a8b5fb551db5a177a40feebc411b4 | [
"MIT"
] | null | null | null | python/baekjoon/step/19-divide-and-conquer/quad-tree.py | bum12ark/algorithm | b6e262b0c29a8b5fb551db5a177a40feebc411b4 | [
"MIT"
] | null | null | null | """
Source: https://yabmoons.tistory.com/450
"""
import sys
N = int(sys.stdin.readline())
M = [
list(map(int, sys.stdin.readline().rstrip())) for _ in range(N)
]
def DFS(x, y, size):
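    # Quad-tree compression: if the size-by-size block at (x, y) is a single
    # colour, print that colour; otherwise wrap the four quadrants' results
    # in parentheses and recurse.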
color = M[x][y]
for i in range(x, x + size):
for j in range(y, y + size):
if color != M[i][j]:
print('(', end='')
DFS(x, y, size // 2)
DFS(x, y + size // 2, size // 2)
DFS(x + size // 2, y, size // 2)
DFS(x + size // 2, y + size // 2, size // 2)
print(')', end='')
return
if color == 0:
print(0, end='')
return
else:
print(1, end='')
return
DFS(0, 0, N)
| 21.878788 | 67 | 0.409972 | 102 | 722 | 2.892157 | 0.343137 | 0.135593 | 0.081356 | 0.091525 | 0.20339 | 0.118644 | 0.118644 | 0.118644 | 0.118644 | 0 | 0 | 0.036866 | 0.398892 | 722 | 32 | 68 | 22.5625 | 0.642857 | 0.049862 | 0 | 0.125 | 0 | 0 | 0.00295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.208333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94b81133f944f81c4a396de00a4f4cef01d9f74a | 3,760 | py | Python | prep/make_database.py | radinformatics/wordfish-standard | 678235806d97b0c4cec6637900fa636541d3b730 | [
"MIT"
] | 1 | 2017-05-24T15:14:39.000Z | 2017-05-24T15:14:39.000Z | prep/make_database.py | vsoch/wordfish-standard | 678235806d97b0c4cec6637900fa636541d3b730 | [
"MIT"
] | null | null | null | prep/make_database.py | vsoch/wordfish-standard | 678235806d97b0c4cec6637900fa636541d3b730 | [
"MIT"
] | null | null | null | from glob import glob
import os
import shutil
import simplejson
import xmltodict
pwd = os.getcwd()
output_dir = "%s/collection" %(pwd)
if not os.path.exists(output_dir):
os.mkdir(output_dir)
cookies = glob("%s/matched/img/*.png" %(pwd))
def read_xml(xml_file):
'''read_xml reads an xml file and returns a dict
'''
with open(xml_file) as filey:
doc = xmltodict.parse(filey.read())
return doc
def write_json(json_obj,filename,mode="w",print_pretty=True):
'''write_json will (optionally,pretty print) a json object to file
:param json_obj: the dict to print to json
:param filename: the output file to write to
    :param print_pretty: if True, will use nicer formatting
'''
with open(filename,mode) as filey:
if print_pretty == True:
filey.writelines(simplejson.dumps(json_obj, indent=4, separators=(',', ': ')))
else:
filey.writelines(simplejson.dumps(json_obj))
return filename
def make_kv(key,value):
'''make_kv will return a dict
with a key and value pair
'''
return {'key':key,'value':value}
def extract_features(marks):
    features = []
    # A single mark arrives as a dict rather than a list; normalize it
    if not isinstance(marks, list):
        marks = [marks]
    for mark in marks:
        mark_features = []
        mark_features.append(make_kv('name', mark['@name']))
        mark_features.append(make_kv('type', mark['@variableType']))
        mark_features.append(make_kv('value', mark['#text']))
        if "@units" in mark:
            mark_features.append(make_kv('units', mark['@units']))
        features.append(mark_features)
    return features
def parse_xml(xml_file,root=None):
'''parse_xml will iterate over an xml file
and return as a dictionary
'''
    if root is None:
root = "CandyTumor"
doc = read_xml(xml_file)
metadata = []
metadata.append(make_kv('id',doc[root]['@uniqueIdentifier']))
metadata.append(make_kv('rx',doc[root]['ClassAnnotation']['@class']))
marks = doc[root]['ClassAnnotation']['TumorAnnotation']
if isinstance(marks,dict):
features = extract_features(marks['TumorFeature'])
else:
features = []
for mark in marks:
new_features = extract_features(mark['TumorFeature'])
features = features + new_features
metadata.append(make_kv('features',features))
return metadata
# Main function is here, parsing the cookies!
for cookie in cookies:
# First find cookie images based on image files
cookie_image = os.path.basename(cookie)
cookie_id = os.path.splitext(cookie_image)[0]
cookie_dir = "%s/%s" %(output_dir,cookie_id)
cookie_images = "%s/images" %(cookie_dir)
if not os.path.exists(cookie_dir):
os.mkdir(cookie_dir)
os.mkdir("%s/images" %(cookie_dir))
os.mkdir("%s/images/image1" %(cookie_dir))
shutil.copyfile(cookie,"%s/image1/image1.png" %(cookie_images))
# Is there a matching overlay (mask?)
cookie_overlay = "%s/matched/mask/%s" %(pwd,cookie_image)
if os.path.exists(cookie_overlay):
shutil.copyfile(cookie_overlay,"%s/image1/overlay1.png" %(cookie_images))
# Is there metadata?
cookie_xml = "%s/matched/%s.xml" %(pwd,cookie_id)
if os.path.exists(cookie_xml):
cookie_metadata = parse_xml(cookie_xml)
write_json(cookie_metadata,"%s/image1/overlay1.json" %(cookie_images))
| 31.333333 | 90 | 0.647606 | 495 | 3,760 | 4.761616 | 0.242424 | 0.033093 | 0.056003 | 0.074671 | 0.247773 | 0.171404 | 0.095885 | 0.033941 | 0 | 0 | 0 | 0.003053 | 0.215957 | 3,760 | 119 | 91 | 31.596639 | 0.796472 | 0.14016 | 0 | 0.142857 | 0 | 0 | 0.127364 | 0.014187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064935 | false | 0 | 0.064935 | 0 | 0.194805 | 0.025974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94bd39f7b280d5e41bc80e6c9d2c2a8db8614c19 | 4,103 | py | Python | jabble/dataset.py | mdd423/wobble_jax | 8923eac67529d92d670d38a02d409cf16dad4d7a | [
"BSD-3-Clause"
] | null | null | null | jabble/dataset.py | mdd423/wobble_jax | 8923eac67529d92d670d38a02d409cf16dad4d7a | [
"BSD-3-Clause"
] | null | null | null | jabble/dataset.py | mdd423/wobble_jax | 8923eac67529d92d670d38a02d409cf16dad4d7a | [
"BSD-3-Clause"
] | null | null | null | import numpy as np
# import matplotlib.pyplot as plt
import astropy.table as at
import astropy.units as u
import astropy.coordinates as coord
import astropy.constants as const
import astropy.time as atime
import scipy.ndimage as ndimage
import numpy.polynomial as polynomial
# import jabble.model as wobble_model
import jax.numpy as jnp
def find_nearest(array,value):
array = np.asarray(array)
idx = (np.abs(array-value)).argmin()
return idx
def velocityfromshift(shifts):
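    # Inverts shifts(): v = c * (e^(2s) - 1) / (e^(2s) + 1)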
expon = np.exp(2*shifts)
vel = const.c * (expon-1)/(1 + expon)
return vel
def get_loss_array(shift_grid,model,xs,ys,yerr,loss,*args):
    # Ensure xs is 2D (epochs x pixels); shift_grid may be shared (1D) or per-epoch (2D)
if len(xs.shape) == 1:
xs = np.expand_dims(xs,axis=0)
if len(shift_grid.shape) == 1:
loss_arr = np.empty((xs.shape[0],shift_grid.shape[0]))
for i in range(xs.shape[0]):
for j,shift in enumerate(shift_grid):
loss_arr[i,j] = loss(model.p,xs[i,:]+shift,ys[i,:],yerr[i,:],i,model,*args)
if len(shift_grid.shape) == 2:
loss_arr = np.empty((xs.shape[0],shift_grid.shape[1]))
for i in range(xs.shape[0]):
for j,shift in enumerate(shift_grid[i,:]):
loss_arr[i,j] = loss(model.p,xs[i,:]+shift,ys[i,:],yerr[i,:],i,model,*args)
return loss_arr
def get_parabolic_min(loss_array,grid,return_all=False):
epoches = loss_array.shape[0]
grid_min = np.empty(epoches)
xss = np.empty((epoches,3))
yss = np.empty((epoches,3))
polys = []
for n in range(epoches):
idx = loss_array[n,:].argmin()
print("epch {}: min {}".format(n,idx))
if idx == 0:
print("minimum likely out of range")
idx = 1
if idx == grid.shape[1]-1:
print("minimum likely out of range")
idx -= 1
xs = grid[n,idx-1:idx+2]
xss[n,:] = xs
ys = loss_array[n,idx-1:idx+2]
yss[n,:] = ys
poly = np.polyfit(xs,ys,deg=2)
polys.append(poly)
deriv = np.polyder(poly)
x_min = np.roots(deriv)
x_min = x_min[x_min.imag==0].real
y_min = np.polyval(poly,x_min)
grid_min[n] = x_min
if (return_all):
return grid_min, xss, yss, polys
else:
return grid_min
def zplusone(vel):
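    # Relativistic Doppler factor: 1 + z = sqrt((1 + v/c) / (1 - v/c))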
return np.sqrt((1 + vel/(const.c))/(1 - vel/(const.c)))
def shifts(vel):
return np.log(zplusone(vel))
def get_star_velocity(BJD,star_name,observatory_name,parse=False):
hatp20_c = coord.SkyCoord.from_name(star_name,parse=parse)
loc = coord.EarthLocation.of_site(observatory_name)
ts = atime.Time(BJD, format='jd', scale='tdb')
bc = hatp20_c.radial_velocity_correction(obstime=ts, location=loc).to(u.km/u.s)
return bc
def interpolate_mask(flux,mask):
new_flux = np.zeros(flux.shape)
new_flux = flux
for j,mask_row in enumerate(mask):
cnt = 0
for i, mask_ele in enumerate(mask_row):
if mask_ele != 0:
cnt += 1
if mask_ele == 0 and cnt != 0:
new_flux[j,i-cnt:i] = np.linspace(flux[j,i-cnt-1],flux[j,i],cnt+2)[1:-1]
cnt = 0
return new_flux
def gauss_filter(flux,sigma):
filtered_flux = ndimage.gaussian_filter1d(flux,sigma)
return filtered_flux
def normalize_flux(flux,sigma):
return flux/gauss_filter(flux,sigma)
def convert_xy(lamb,flux,ferr):
y = np.log(flux)
x = np.log(lamb)
yerr = ferr/flux
return x, y, yerr
def set_masked(y,yerr,mask,y_const=0.0,err_const=10.0):
y[mask] = y_const
yerr[mask] = err_const
return y, yerr
class WobbleDataset:
def __init__(self,wave,flux,flux_error,mask,sigma=80):
self.mask = mask
self.flux = flux
self.wave = wave
flux = interpolate_mask(flux,mask)
flux_norm = normalize_flux(flux,sigma)
self.xs, self.ys, self.yerr = np.log(wave/u.Angstrom), np.log(flux_norm), flux_error/flux
self.ys, self.yerr = set_masked(self.ys,self.yerr,mask)
self.epoches = self.ys.shape[0]
| 30.169118 | 97 | 0.61126 | 647 | 4,103 | 3.751159 | 0.248841 | 0.025958 | 0.023074 | 0.011125 | 0.155748 | 0.132674 | 0.132674 | 0.132674 | 0.106304 | 0.106304 | 0 | 0.016922 | 0.251036 | 4,103 | 135 | 98 | 30.392593 | 0.77286 | 0.021204 | 0 | 0.074766 | 0 | 0 | 0.018449 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121495 | false | 0 | 0.084112 | 0.028037 | 0.336449 | 0.028037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94c1134d670b16674cc4abd8161470411f0363bf | 660 | py | Python | ietf/name/migrations/0009_add_verified_errata_to_doctagname.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 25 | 2022-03-05T08:26:52.000Z | 2022-03-30T15:45:42.000Z | ietf/name/migrations/0009_add_verified_errata_to_doctagname.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 219 | 2022-03-04T17:29:12.000Z | 2022-03-31T21:16:14.000Z | ietf/name/migrations/0009_add_verified_errata_to_doctagname.py | hassanakbar4/ietfdb | cabee059092ae776015410640226064331c293b7 | [
"BSD-3-Clause"
] | 22 | 2022-03-04T15:34:34.000Z | 2022-03-28T13:30:59.000Z | # Copyright The IETF Trust 2020, All Rights Reserved
# -*- coding: utf-8 -*-
from django.db import migrations
def forward(apps, schema_editor):
DocTagName = apps.get_model('name','DocTagName')
DocTagName.objects.get_or_create(slug='verified-errata', name='Has verified errata', desc='', used=True, order=0)
def reverse(apps, schema_editor):
DocTagName = apps.get_model('name','DocTagName')
DocTagName.objects.filter(slug='verified-errata').delete()
class Migration(migrations.Migration):
dependencies = [
('name', '0008_reviewerqueuepolicyname'),
]
operations = [
migrations.RunPython(forward, reverse)
]
| 27.5 | 117 | 0.7 | 75 | 660 | 6.066667 | 0.626667 | 0.092308 | 0.07033 | 0.114286 | 0.303297 | 0.303297 | 0.303297 | 0.303297 | 0.303297 | 0.303297 | 0 | 0.018083 | 0.162121 | 660 | 23 | 118 | 28.695652 | 0.804702 | 0.109091 | 0 | 0.142857 | 0 | 0 | 0.186325 | 0.047863 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.071429 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94c275106d2338f72c37c61390adc86cd0cf7bba | 3,635 | py | Python | src/morle/modules/compile.py | maciejjan/morle | a56a3371e403568b6b7abc63e08f6a111e0b5fee | [
"MIT"
] | 1 | 2021-03-04T16:23:02.000Z | 2021-03-04T16:23:02.000Z | src/morle/modules/compile.py | maciejjan/morle | a56a3371e403568b6b7abc63e08f6a111e0b5fee | [
"MIT"
] | null | null | null | src/morle/modules/compile.py | maciejjan/morle | a56a3371e403568b6b7abc63e08f6a111e0b5fee | [
"MIT"
] | null | null | null | import morle.algorithms.fst as FST
from morle.datastruct.lexicon import LexiconEntry
from morle.datastruct.rules import Rule
from morle.utils.files import file_exists, read_tsv_file
import morle.shared as shared
import hfst
import logging
import math
from operator import itemgetter
from typing import List, Tuple
# TODO use RuleSet instead!
def load_rules() -> List[Tuple[Rule, float]]:
rules_filename = None
if shared.config['compile'].getboolean('weighted'):
if shared.config['Models'].get('edge_model') == 'simple':
rules_filename = shared.filenames['edge-model']
max_cost = None \
if shared.config['compile'].get('max_cost') == 'none' \
else shared.config['compile'].getfloat('max_cost')
rules = [(Rule.from_string(rule), -math.log(prod))\
for rule, prod in\
read_tsv_file(rules_filename, (str, float))\
if max_cost is None or -math.log(prod) < max_cost ] +\
[(Rule.from_string(':/:___:'), 0.0)]
return rules
else:
raise Exception('Compiling a weighted analyzer is only possible'
' for the Bernoulli edge model.')
else:
rules_filename = shared.filenames['rules-modsel']
if not file_exists(rules_filename):
rules_filename = shared.filenames['rules']
return [(Rule.from_string(rule), 0.0)\
for (rule,) in read_tsv_file(rules_filename, (str,))] +\
[(Rule.from_string(':/:___:'), 0.0)]
# TODO use Lexicon instead!
def load_roots() -> List[LexiconEntry]:
def root_reader():
col = 0
for row in read_tsv_file(shared.filenames['wordlist']):
if col < len(row) and row[col]:
yield row[col]
roots = []
for root_str in root_reader():
try:
roots.append(LexiconEntry(root_str))
except Exception as ex:
logging.getLogger('main').warning(str(ex))
return roots
def build_rule_transducer(rules :List[Tuple[Rule, float]]) \
-> hfst.HfstTransducer:
transducers = []
for rule, weight in rules:
rule_tr = rule.to_fst(weight=weight)
transducers.append(rule_tr)
result = FST.binary_disjunct(transducers, print_progress=True)
return result
def build_root_transducer(roots :List[LexiconEntry]) -> hfst.HfstTransducer:
transducers = []
for root in roots:
seq = root.word + root.tag
transducers.append(FST.seq_to_transducer(zip(seq, seq)))
result = FST.binary_disjunct(transducers, print_progress=True)
return result
# def build_rootgen_transducer(roots :List[LexiconEntry]) -> hfst.HfstTransducer:
# alergia = AlergiaStringFeature()
# alergia.fit(roots)
# return alergia.automaton
def run() -> None:
rules = load_rules()
roots = load_roots()
logging.getLogger('main').info('Building the rule transducer...')
rules_tr = build_rule_transducer(rules)
FST.save_transducer(rules_tr, shared.filenames['rules-tr'])
if shared.config['General'].getboolean('supervised'):
logging.getLogger('main').info('Building the root transducer...')
roots_tr = build_root_transducer(roots)
FST.save_transducer(roots_tr, shared.filenames['roots-tr'])
# logging.getLogger('main').info('Building the root generator transducer...')
# rootgen_tr = algorithms.fst.load_transducer(shared.filenames['root-model'])
# algorithms.fst.save_transducer(rootgen_tr, shared.filenames['rootgen-tr'])
| 35.990099 | 81 | 0.638514 | 432 | 3,635 | 5.210648 | 0.268519 | 0.05331 | 0.019547 | 0.037317 | 0.268769 | 0.182586 | 0.123501 | 0.063083 | 0.063083 | 0.063083 | 0 | 0.002537 | 0.24099 | 3,635 | 100 | 82 | 36.35 | 0.813338 | 0.126272 | 0 | 0.138889 | 0 | 0 | 0.095735 | 0 | 0 | 0 | 0 | 0.01 | 0 | 1 | 0.083333 | false | 0 | 0.138889 | 0 | 0.291667 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94c29179afdca3ff89eaa0d1a46d602e67dacea5 | 1,214 | py | Python | deck_berry_py/repl_db.py | curtjen/deck-berry-py | e0fbf79ca1b0e3b6a1455093abee207bba6b3561 | [
"MIT"
] | null | null | null | deck_berry_py/repl_db.py | curtjen/deck-berry-py | e0fbf79ca1b0e3b6a1455093abee207bba6b3561 | [
"MIT"
] | null | null | null | deck_berry_py/repl_db.py | curtjen/deck-berry-py | e0fbf79ca1b0e3b6a1455093abee207bba6b3561 | [
"MIT"
] | null | null | null | import requests
import os
import json
db_url = os.getenv('REPLIT_DB_URL')
def set(key, val, type='string'):
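    # 'type' selects serialization: 'string' posts the raw value, 'json'
    # JSON-encodes it (note: the parameter shadows the builtin type()).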
try:
if (type == 'string'):
return requests.post(db_url, data = { key: val })
if (type == 'json'):
return requests.post(db_url, data = { key: json.dumps(val)})
except:
return({'error': { 'message': 'There was an issue with writing to the database'}})
def get(key):
resp = None
try:
resp = requests.get("{0}/{1}".format(db_url, key))
# Return JSON dict by default
return json.loads(resp.text)
except:
try:
# Return as string if not JSON
return resp.text
except:
return({'error': { 'message': 'There was an issue with getting data from the database'}})
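
# Example usage (a sketch; requires REPLIT_DB_URL to be set, e.g. on Replit):
# set("greeting", "hello")
# print(get("greeting"))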
def delete(key):
try:
return requests.delete("{0}/{1}".format(db_url, key))
except:
return({'error': { 'message': 'There was an issue with deleting data from the database'}})
def list(key):
"List db entries that start with [key]"
try:
resp = requests.get('{0}?prefix={1}'.format(db_url, key))
resp_list = resp.text.split('\n')
return resp_list
except:
return({'error': { 'message': 'There was an issue with listing data from the database'}}) | 28.232558 | 96 | 0.632619 | 178 | 1,214 | 4.258427 | 0.320225 | 0.046174 | 0.08971 | 0.126649 | 0.474934 | 0.348285 | 0.306069 | 0.226913 | 0.226913 | 0 | 0 | 0.006283 | 0.213344 | 1,214 | 43 | 97 | 28.232558 | 0.787435 | 0.078254 | 0 | 0.285714 | 0 | 0 | 0.306228 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.085714 | 0 | 0.371429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ce46151c1716825bf8cd7802cd6c9c77ce147e | 1,640 | py | Python | user/views.py | RadySonabu/ThesisAPI | dc986623ff8d831ced17396a1121d256fa1f66d6 | [
"MIT"
] | null | null | null | user/views.py | RadySonabu/ThesisAPI | dc986623ff8d831ced17396a1121d256fa1f66d6 | [
"MIT"
] | null | null | null | user/views.py | RadySonabu/ThesisAPI | dc986623ff8d831ced17396a1121d256fa1f66d6 | [
"MIT"
] | null | null | null | from django.shortcuts import render, redirect, get_object_or_404
from rest_framework import generics
from .models import MyUser
from .forms.forms import UserRegisterForm, UpdateUserForm
def register(request):
if request.method == 'POST':
form = UserRegisterForm(request.POST)
if form.is_valid():
form.save()
username = form.cleaned_data.get('username')
# messages.success(request, f'Your account has been created! You are now able to log in')
return redirect('login')
else:
form = UserRegisterForm()
return render(request, 'user/registration.html', {'form': form})
def update_user(request):
if request.method == 'POST':
u_form = UpdateUserForm(request.POST, instance=request.user)
if u_form.is_valid():
u_form.save()
return redirect('profile')
else:
u_form = UpdateUserForm(instance=request.user)
context = {
'form': u_form,
}
return render(request, 'dashboard/profile-update.html', context)
def delete_user(request, id):
# dictionary for initial data with
# field names as keys
    context = {}
    # fetch the object related to the passed id
    obj = get_object_or_404(MyUser, id=id)
    if request.method == "POST":
# delete object
obj.delete()
# after deleting redirect to
# home page
return redirect("login")
return render(request, "dashboard/delete_user.html", context)
def profile(request):
context = {}
return render(request, 'user/profile.html', context) | 27.333333 | 101 | 0.629268 | 190 | 1,640 | 5.336842 | 0.405263 | 0.024655 | 0.074951 | 0.056213 | 0.051282 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005013 | 0.270122 | 1,640 | 60 | 102 | 27.333333 | 0.842105 | 0.143902 | 0 | 0.166667 | 0 | 0 | 0.09957 | 0.055158 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.416667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94d29e2296f9233cdb585fa571d29d93761c5b3a | 626 | py | Python | src/smtv_api/setup_test_environment.py | AdamDomagalsky/smtv-micro-scpr | 87195f4c0f738a687648b3218a0976c3450d0b6c | [
"Apache-2.0"
] | null | null | null | src/smtv_api/setup_test_environment.py | AdamDomagalsky/smtv-micro-scpr | 87195f4c0f738a687648b3218a0976c3450d0b6c | [
"Apache-2.0"
] | null | null | null | src/smtv_api/setup_test_environment.py | AdamDomagalsky/smtv-micro-scpr | 87195f4c0f738a687648b3218a0976c3450d0b6c | [
"Apache-2.0"
] | null | null | null | import os
def pytest_configure(config: dict) -> None:
setup()
def setup() -> None:
os.environ['AIRFLOW_API_ROOT_URL'] = 'http://test-airflow/api/experimental/'
os.environ['API_DATABASE_URL'] = 'postgres://postgres:postgres@postgres:5432/postgres'
os.environ['S3_ENDPOINT_URL'] = 'http://localhost:5000/'
os.environ['S3_BUCKET'] = 'test-bucket'
os.environ['S3_ACCESS_KEY_ID'] = 'inner-access-key'
os.environ['S3_SECRET_ACCESS_KEY'] = 'inner-secret-access-key'
os.environ['CELERY_BROKER_URL'] = 'pyamqp://rabbitmq:rabbitmq@rabbit:5672//'
os.environ['CELERY_RESULT_BACKEND'] = 'rpc://'
| 31.3 | 90 | 0.693291 | 83 | 626 | 5 | 0.46988 | 0.173494 | 0.106024 | 0.086747 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02925 | 0.126198 | 626 | 19 | 91 | 32.947368 | 0.729433 | 0 | 0 | 0 | 0 | 0 | 0.543131 | 0.215655 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.083333 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94d7d35515c333dfd9a0474a1d86a98db0e415bf | 3,259 | py | Python | wikionary/wikionary.py | DedInc/wikionary | 45d595044aa980d129479b6aff6c31c1b329e9d8 | [
"MIT"
] | null | null | null | wikionary/wikionary.py | DedInc/wikionary | 45d595044aa980d129479b6aff6c31c1b329e9d8 | [
"MIT"
] | null | null | null | wikionary/wikionary.py | DedInc/wikionary | 45d595044aa980d129479b6aff6c31c1b329e9d8 | [
"MIT"
] | null | null | null | from lxml import html, etree
from requests import get
from urllib.parse import unquote
agent = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'
def getSynonims(word):
syns = []
headers = {'User-Agent': agent}
tree = html.fromstring(get(
'https://jeck.ru/tools/SynonymsDictionary/{}'.format(word), headers=headers).content)
urls = tree.xpath('//td/a/@href')
for url in urls:
        if '+' not in url:
dem = url.split('Dictionary/')
if len(dem) > 1:
word = unquote(dem[1])
syns.append(word)
return syns
def getAntonyms(word):
antonyms = []
headers = {'User-Agent': agent}
tree = html.fromstring(
get('https://ru.wiktionary.org/wiki/{}'.format(word), headers=headers).content)
for k in range(1, 100):
for i in range(1, 100):
try:
anto = tree.xpath(
'/html/body/div[3]/div[3]/div[5]/div[1]/ol[3]/li[{}]/a[{}]/text()'.format(i, k))
antonyms.append(anto[0])
except:
break
return list(dict.fromkeys(antonyms))
def getPhraseologs(word):
phraseols = []
headers = {'User-Agent': agent}
tree = html.fromstring(
get('https://ru.wiktionary.org/wiki/{}'.format(word), headers=headers).content)
for k in range(1, 100):
for i in range(1, 100):
try:
phrase = tree.xpath(
'/html/body/div[3]/div[3]/div[5]/div[1]/ul[1]/li[{}]/a[{}]/text()'.format(k, i))
                if 'МФА' not in phrase:
phraseols.append(phrase[0])
except:
break
return list(dict.fromkeys(phraseols))
def getRandomWord():
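    # Fetches https://ru.wiktionary.org/wiki/Служебная:Случайная_страница
    # (the "Special:Random page" URL, percent-encoded below) and reads the page title.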
headers = {'User-Agent': agent}
tree = html.fromstring(
get('https://ru.wiktionary.org/wiki/%D0%A1%D0%BB%D1%83%D0%B6%D0%B5%D0%B1%D0%BD%D0%B0%D1%8F:%D0%A1%D0%BB%D1%83%D1%87%D0%B0%D0%B9%D0%BD%D0%B0%D1%8F_%D1%81%D1%82%D1%80%D0%B0%D0%BD%D0%B8%D1%86%D0%B0', headers=headers).content)
return tree.xpath('/html/body/div[3]/h1/text()')[0]
def getAssociations(word):
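    # Queries wordassociations.net; the percent-encoded path segment decodes to
    # "ассоциации-к-слову" ("associations for the word").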
assocs = []
headers = {'User-Agent': agent}
tree = html.fromstring(get(
'https://wordassociations.net/ru/%D0%B0%D1%81%D1%81%D0%BE%D1%86%D0%B8%D0%B0%D1%86%D0%B8%D0%B8-%D0%BA-%D1%81%D0%BB%D0%BE%D0%B2%D1%83/{}'.format(word), headers=headers).content)
urls = tree.xpath('//li/a/@href')
for url in urls:
if 'D1%83/' in url:
assocs.append(unquote(url.split('D1%83/')[1]).lower())
return assocs
def getHyperonims(word):
headers = {'User-Agent': agent}
tree = html.fromstring(
get('https://ru.wiktionary.org/wiki/{}'.format(word), headers=headers).content)
phraseols = []
for k in range(1, 100):
for i in range(1, 100):
try:
phrase = tree.xpath(
'/html/body/div[3]/div[3]/div[5]/div[1]/ol[4]/li[{}]/a[{}]/text()'.format(k, i))
phraseols.append(phrase[0])
            except IndexError:
                break
return list(dict.fromkeys(phraseols))
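# Minimal usage sketch (assumes network access and that the scraped pages'
# layouts are unchanged; the Russian words below are only examples):
#
#   if __name__ == '__main__':
#       print(getSynonims('хороший'))
#       print(getAssociations('дом'))
#       print(getRandomWord())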
| 36.617978 | 230 | 0.567659 | 474 | 3,259 | 3.896624 | 0.251055 | 0.01516 | 0.051976 | 0.068219 | 0.656199 | 0.645371 | 0.59177 | 0.552788 | 0.505143 | 0.45425 | 0 | 0.080816 | 0.248236 | 3,259 | 88 | 231 | 37.034091 | 0.673061 | 0 | 0 | 0.533333 | 0 | 0.08 | 0.30807 | 0.074563 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.04 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94dac3b4eafe4e1f3cf44a39e7110d561f9e7e30 | 17,728 | py | Python | scripts/analysis_wfc3.py | Lachimax/COMP777 | 9bf73cc451e1353763fa47ae124062b53bf09ded | [
"CC0-1.0"
] | null | null | null | scripts/analysis_wfc3.py | Lachimax/COMP777 | 9bf73cc451e1353763fa47ae124062b53bf09ded | [
"CC0-1.0"
] | null | null | null | scripts/analysis_wfc3.py | Lachimax/COMP777 | 9bf73cc451e1353763fa47ae124062b53bf09ded | [
"CC0-1.0"
] | null | null | null | # Code by Lachlan Marnoch, 2018
import numpy as np
import matplotlib.pyplot as plt
# import math as m
from astropy import wcs
from astropy.coordinates import SkyCoord
from astropy.io import fits
import random as r
def sub_stars(condition):
subtract = np.zeros(len(B), dtype=bool)
# Set seed for replicability
r.seed(729626)
for i in range(4):
for j in range(20):
            # This gives the number of stars to remove from the corresponding cell in the cluster CMD
n = sum((B_I_cell == i) & (B_cell == j) & condition)
# This gives the indices of stars in the corresponding cell in the cluster CMD
ind_cell_cluster = np.nonzero((B_I_cell == i) & (B_cell == j) & in_cluster)[0]
            # If the number of stars to be removed from a cell is greater than the number of stars in the cell, just
            # remove them all
if n >= len(ind_cell_cluster):
for k in ind_cell_cluster:
subtract[k] = True
else:
# This while loop randomly selects stars in the respective cluster CMD cell and removes them until the
# same number has been removed as is in the background CMD, or else all have been.
k = 0
while k < n:
# Pick a random index from the indices of stars in the same cell in the cluster CMD
rr = r.randint(0, len(ind_cell_cluster) - 1)
ind = ind_cell_cluster[rr]
# If that star has not been subtracted, do so. If it has, pick a new random index.
if not subtract[ind]:
subtract[ind] = True
k += 1
print('Subtracted', sum(subtract), 'stars')
print('Stars after decontamination:', sum(in_cluster & (subtract == False)))
return subtract, condition
def draw_cells(ax):
for z in B_I_grid:
ax.plot([z, z], [B_min, B_max], c='blue')
for z in B_grid:
ax.plot([B_I_max, B_I_min], [z, z], c='blue')
def draw_sgb_box(ax):
ax.plot([left_sgb, left_sgb], [top_sgb, bot_sgb], c='red')
ax.plot([left_sgb, right_sgb], [bot_sgb, bot_sgb], c='red')
ax.plot([right_sgb, right_sgb], [top_sgb, bot_sgb], c='red')
ax.plot([left_sgb, right_sgb], [top_sgb, top_sgb], c='red')
def draw_isochrones():
plt.plot(B_I_iso_min, B_iso_min, c='purple', label='1 Gyrs')
plt.plot(B_I_iso_max, B_iso_max, c='violet', label='3.09 Gyrs')
plt.plot(B_I_138, B_138, c='blue', label='1.38 Gyrs')
plt.plot(B_I_218, B_218, c='red', label='2.18 Gyrs')
plt.plot(B_I_best, B_best, c='green', label='1.58 Gyrs')
def draw_all_isochrones():
for iso in iso_list:
plt.plot(iso[:, 29] - iso[:, 34] + B_I_offset, iso[:, 29] + DM)
def get_nearest(xx, arr):
return np.abs(xx - arr).argmin()
def mse(yy, y_dash):
return sum((yy - y_dash) ** 2) / len(yy)
# Distance modulus (correction from absolute magnitudes, which are the values we have in the isochrone data, to apparent
# magnitudes, the values in the DOLPHOT data) to the SMC
DM = 19.35
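# For reference (standard convention, stated here as an assumption): apparent
# magnitude m relates to absolute magnitude M via m = M + DM, which is why the
# isochrone magnitudes below are shifted by +DM.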
B_I_offset = 0.14
# B_I_offset = 0
# Pixel Scale, from FITS header
pixel_scale = 0.03962000086903572 # arcsec
# Limits of the frame in equatorial coordinates, to correct orientation
top = -71.785
bottom = -71.737
left = 16.904
right = 17.051
# Define CMD region of interest
B_I_max = 3
B_I_min = -1
B_max = 23
B_min = 17
# Define CMD region of subgiant branch
top_sgb = 21.3
bot_sgb = 20.25
left_sgb = 0.8
right_sgb = 1.25
# Importing data
print('Importing Data, thank you for your patience')
data_dir = "..//data//"
# The following two lines provide a means of converting between image pixel coordinates and sky coordinates
# (ie Right Ascension and Declination)
hdulist = fits.open(data_dir + "ibhy12050_drz.fits")
w = wcs.WCS(hdulist[2].header)
data = np.genfromtxt(data_dir + "wfc3_attempt_2")
print('Number of Stars:', len(data))
# Include bright stars only; this excludes various artifacts, galaxies, and some background stars
print('Excluding bad stars')
data = data[data[:, 10] == 1]
print('Number of Stars:', len(data))
# Trim out objects with sharpness not in the range -0.5 < sharpness < 0.5
print('Excluding stars with |sharpness| > 0.5')
data = data[data[:, 6] < 0.5]
data = data[data[:, 6] > -0.5]
print('Number of Stars:', len(data))
# Cut any stars outside of the CMD region of interest (Main Sequence Turnoff and subgiant branch).
print('Excluding stars outside CMD region of interest')
# data = data[data[:, 15] - data[:, 28] < B_I_max]
# data = data[data[:, 15] - data[:, 28] > B_I_min]
# data = data[data[:, 15] < B_max]
print('Number of Stars:', len(data))
print()
print('Calculating')
# F475W magnitude
B = data[:, 15]
# F814 magnitude
I = data[:, 28]
# B-I color
B_I = B - I
x_pix = data[:, 2]
y_pix = data[:, 3]
x = pixel_scale * x_pix
y = pixel_scale * y_pix
# Convert x_pix and y_pix (pixel coordinates) to world coordinates.
pixel_coords = np.array([x_pix, y_pix]).transpose()
world_coords = w.all_pix2world(pixel_coords, 1)
ra = world_coords[:, 0]
dec = world_coords[:, 1]
# In the FITS files, Right Ascension is treated as the horizontal coordinate, and Declination as the vertical. We will
# continue to do so here for consistency.
# The centre of the cluster is at RA = 01h 07m 56.22s, Dec = -71deg 46' 04.40'', according to Li et al
# Convert these to degrees (because the sky coordinate system is clunky as hell)
c = SkyCoord(ra='01h07m56.22s', dec='-71d46m04.40s')
centre_ra = c.ra.deg
centre_dec = c.dec.deg
print()
print('Centre position: ')
print('RA:', centre_ra)
print('DEC:', centre_dec)
# Find the centre of the cluster in pixel coordinates.
centre = w.all_world2pix([[centre_ra, centre_dec]], 1)
centre_x_pix = centre[0, 0]
centre_y_pix = centre[0, 1]
centre_x = centre_x_pix * pixel_scale
centre_y = centre_y_pix * pixel_scale
print('x (pixels):', centre_x_pix)
print('y (pixels):', centre_y_pix)
# Calculate angular distance of each star from centre of cluster.
print()
print('Finding stars within 50 arcsec of cluster centre')
pix_dist = np.sqrt((x_pix - centre_x_pix) ** 2 + (y_pix - centre_y_pix) ** 2)
equ_dist = np.sqrt((ra - centre_ra) ** 2 + (dec - centre_dec) ** 2)
dist = pix_dist * pixel_scale
# Find the stars that are within 50 arcsec of the cluster centre (in accordance with Li et al)
in_cluster = dist < 50
in_cluster_equ = np.array(equ_dist < 50. / 3600.)
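# Note (suggested refinement, not part of the original analysis): at this
# declination the RA spacing is compressed by cos(dec), so a more accurate
# small-angle separation would be:
#
#   equ_dist = np.sqrt(((ra - centre_ra) * np.cos(np.radians(dec))) ** 2
#                      + (dec - centre_dec) ** 2)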
print('Stars in cluster:', sum(in_cluster == True))
# Find stars in the SGB region
in_sgb = (B < top_sgb) & (B > bot_sgb) & (B_I < right_sgb) & (B_I > left_sgb)
print('SGB: ', sum(in_sgb))
# Decontamination
# Method from Hu et al
print('Decontaminating')
# Divide the CMD field into cells of 0.5 x 0.25 mag^2
B_I_grid = np.arange(B_I_min, B_I_max, step=0.5)
B_grid = np.arange(B_min, B_max, step=0.25)
B_I_cell = np.floor(B_I / 0.5)
B_cell = np.floor((B - 18.) / 0.25)
remove1, field1 = sub_stars(condition=y_pix <= 800)
remove2, field2 = sub_stars(condition=x_pix <= 800)
remove3, field3 = sub_stars(condition=(80 <= dist) & (dist <= 100))
remove4, field4 = sub_stars(condition=dist >= 80)
# Import isochrone data
isos = np.genfromtxt(data_dir + "isochrones3.dat")
# Get the list of ages used in the isochrones
ages = np.unique(isos[:, 1])
# Separate the combined isochrone file into a list of individual isochrones, one per age
iso_list = []
for i, a in enumerate(ages):
iso = isos[isos[:, 1] == a]
iso_list.append(iso)
select_sgb = in_sgb & in_cluster & (remove3 == False)
# Find the points on the isochrone with the nearest x-values to our SGB stars
mses = np.zeros(len(ages))
for j, iso in enumerate(iso_list):
y_dash = np.zeros(sum(select_sgb))
for i, xx in enumerate(B_I[select_sgb]):
nearest = get_nearest(xx, iso[:, 29] - iso[:, 34] + B_I_offset)
y_dash[i] = iso[nearest, 29] + DM
mses[j] = (mse(B[select_sgb], y_dash))
print('Selected stars in SGB:', sum(select_sgb))
# Extract some useful individual isochrones
# Our youngest isochrone
iso_min = iso_list[0]
B_iso_min = iso_min[:, 29] + DM
I_iso_min = iso_min[:, 34] + DM
B_I_iso_min = B_iso_min - I_iso_min + B_I_offset
# 1.38 Gyrs
iso_138 = iso_list[np.abs(ages - 1.38e9).argmin()]
B_138 = iso_138[:, 29] + DM
I_138 = iso_138[:, 34] + DM
B_I_138 = B_138 - I_138 + B_I_offset
# 2.18 Gyrs
iso_218 = iso_list[np.abs(ages - 2.18e9).argmin()]
B_218 = iso_218[:, 29] + DM
I_218 = iso_218[:, 34] + DM
B_I_218 = B_218 - I_218 + B_I_offset
# Our best-fitting isochrone:
iso_best = iso_list[mses.argmin()]
B_best = iso_best[:, 29] + DM
I_best = iso_best[:, 34] + DM
B_I_best = B_best - I_best + B_I_offset
# Our oldest isochrone:
iso_max = iso_list[-1]
B_iso_max = iso_max[:, 29] + DM
I_iso_max = iso_max[:, 34] + DM
B_I_iso_max = B_iso_max - I_iso_max + B_I_offset
# PLOTS
print('Plotting')
# Sky maps
# Pixel Coordinates, showing cluster centre
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.set_title('Star Pixel Coordinates')
ax1.scatter(x_pix, y_pix, c='black', marker=',', s=1)
# ax1.scatter(x_pix[in_cluster], y_pix[in_cluster], c='blue', marker=',', s=1)
# ax1.scatter(centre_x_pix, centre_y_pix, c='green')
ax1.axis('equal')
ax1.set_title('')
ax1.set_xlabel('x (pixels)')
ax1.set_ylabel('y (pixels)')
ax1.set_xlim(0, 4000)
ax1.set_ylim(0, 4500)
plt.show()
# Equatorial Coordinates, showing cluster centre
fig2 = plt.figure()
ax2 = fig2.add_subplot(111)
ax2.set_title('Star Equatorial Coordinates')
ax2.scatter(ra, dec, c='black', marker=',', s=1)
ax2.scatter(centre_ra, centre_dec, c='green')
# ax2.set_xlim(left, right)
# ax2.set_ylim(bottom, top)
# ax2.axis('equal')
ax2.set_xlabel('Right Ascension (deg)')
ax2.set_ylabel('Declination (deg)')
plt.show()
# Histogram of angular distance from cluster centre
plt.title('Angular Distance from Cluster Centre')
plt.xlabel('Angular distance (arcsec)')
plt.ylabel('Number of stars')
plt.hist(dist, bins=50)
plt.show()
# Plot of both cluster determination methods (pixel and WCS coordinates), in equatorial coordinates
fig4 = plt.figure()
ax4 = fig4.add_subplot(111)
ax4.axis('equal')
ax4.set_title('Equatorial Coordinates of Stars')
ax4.scatter(ra, dec, c='black', marker=',', s=1)
ax4.scatter(ra[in_cluster], dec[in_cluster], c='blue', marker=',', s=1)
ax4.scatter(ra[in_cluster_equ], dec[in_cluster_equ], c='red', marker=',', s=1)
ax4.scatter(centre_ra, centre_dec, c='green')
ax4.set_xlim(left, right)
ax4.set_ylim(bottom, top)
ax4.set_xlabel('Right Ascension (deg)')
ax4.set_ylabel('Declination (deg)')
plt.show()
# The same again, but in image (pixel*pixel scale) coordinates
fig5 = plt.figure()
ax5 = fig5.add_subplot(111)
ax5.axis('equal')
ax5.set_title('Image Coordinates of Stars')
ax5.scatter(x, y, c='black', marker=',', s=1)
ax5.scatter(x[in_cluster], y[in_cluster], c='blue', marker=',', s=1)
# ax5.scatter(x[in_cluster_equ], y[in_cluster_equ], c='red', marker=',', s=1)
ax5.scatter(centre_x, centre_y, c='green')
# ax5.set_xlim(left, right)
# ax5.set_ylim(bottom, top)
ax5.set_xlabel('x (arcsec)')
ax5.set_ylabel('y (arcsec)')
plt.show()
# Hertzsprung-Russell / Colour-Magnitude Diagrams
# Raw HR Diagram
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.set_title('Unprocessed Colour-Magnitude Diagram for stars in image')
ax1.scatter(B_I, B, c='black', marker=',', s=1)
ax1.set_xlim(0, 2)
ax1.set_ylim(23, 18)
ax1.set_xlabel('B - I Colour Index')
ax1.set_ylabel('B magnitude')
plt.show()
# HR Diagram limited to cluster stars
plt.title('Cluster CMD Before Statistical Subtraction')
plt.scatter(B_I[in_cluster], B[in_cluster], c='black', marker=',', s=1)
plt.xlim(0, 2)
plt.ylim(23, 18)
plt.xlabel('B - I Colour Index')
plt.ylabel('B magnitude')
plt.show()
# HR Diagram with stars labelled not in cluster
plt.title('Background Star Colour-Magnitude Diagram')
plt.scatter(B_I[in_cluster == False], B[in_cluster == False], c='black', marker=',', s=1)
plt.xlim(0, 2)
plt.ylim(23, 18)
plt.xlabel('B - I Colour Index')
plt.ylabel('B magnitude')
plt.show()
# Demonstration of Decontamination Technique
fig7, ax7 = plt.subplots(2, 2)
ax7[0, 0].set_title('Background CMD (field stars with dist >= 80 arcsec)')
draw_cells(ax7[0, 0])
ax7[0, 0].scatter(B_I[field4], B[field4], c='black', marker=',', s=1)
ax7[0, 0].set_xlabel('B - I Colour Index')
ax7[0, 0].set_ylabel('B magnitude')
ax7[0, 0].set_xlim(0, 2)
ax7[0, 0].set_ylim(23, 18)
draw_sgb_box(ax7[0, 0])
ax7[0, 1].set_title('Cluster CMD')
draw_cells(ax7[0, 1])
ax7[0, 1].scatter(B_I[in_cluster], B[in_cluster], c='black', marker=',', s=1)
ax7[0, 1].set_xlabel('B - I Colour Index')
ax7[0, 1].set_ylabel('B magnitude')
ax7[0, 1].set_xlim(0, 2)
ax7[0, 1].set_ylim(23, 18)
draw_sgb_box(ax7[0, 1])
ax7[1, 0].set_title('Cluster CMD, subtracted stars in red')
draw_cells(ax7[1, 0])
ax7[1, 0].scatter(B_I[in_cluster & (remove4 == False)], B[in_cluster & (remove4 == False)], c='black', marker=',', s=1)
ax7[1, 0].scatter(B_I[remove4], B[remove4], c='red', marker=',', s=1)
ax7[1, 0].set_xlabel('B - I Colour Index')
ax7[1, 0].set_ylabel('B magnitude')
ax7[1, 0].set_xlim(0, 2)
ax7[1, 0].set_ylim(23, 18)
draw_sgb_box(ax7[1, 0])
ax7[1, 1].set_title('Cluster CMD after subtraction')
draw_cells(ax7[1, 1])
ax7[1, 1].scatter(B_I[in_cluster & (remove4 == False)], B[in_cluster & (remove4 == False)], c='black', marker=',', s=1)
ax7[1, 1].set_xlabel('B - I Colour Index')
ax7[1, 1].set_ylabel('B magnitude')
ax7[1, 1].set_xlim(0, 2)
ax7[1, 1].set_ylim(23, 18)
draw_sgb_box(ax7[1, 1])
plt.show()
# Recreate Figure 1 in Li et al
fig8, ax8 = plt.subplots(3, 4)
ax8[0, 0].scatter(B_I[in_cluster & (remove1 == False)], B[in_cluster & (remove1 == False)], c='black', marker=',', s=1)
ax8[0, 0].set_xlabel('B - I Colour Index')
ax8[0, 0].set_ylabel('B magnitude')
ax8[0, 0].set_xlim(0, 2)
ax8[0, 0].set_ylim(23, 18)
draw_sgb_box(ax8[0, 0])
ax8[1, 0].scatter(B_I[field1], B[field1], c='black', marker=',', s=1)
ax8[1, 0].set_xlabel('B - I Colour Index')
ax8[1, 0].set_ylabel('B magnitude')
ax8[1, 0].set_xlim(0, 2)
ax8[1, 0].set_ylim(23, 18)
ax8[2, 0].axis('equal')
ax8[2, 0].scatter(x, y, c='black', marker=',', s=1)
ax8[2, 0].scatter(x[in_cluster], y[in_cluster], c='blue', marker=',', s=1)
ax8[2, 0].scatter(x[field1], y[field1], c='red', marker=',', s=1)
ax8[2, 0].set_xlabel('x (arcsec)')
ax8[2, 0].set_ylabel('y (arcsec)')
ax8[0, 1].scatter(B_I[in_cluster & (remove2 == False)], B[in_cluster & (remove2 == False)], c='black', marker=',', s=1)
ax8[0, 1].set_xlabel('B - I Colour Index')
ax8[0, 1].set_ylabel('B magnitude')
ax8[0, 1].set_xlim(0, 2)
ax8[0, 1].set_ylim(23, 18)
draw_sgb_box(ax8[0, 1])
ax8[1, 1].scatter(B_I[field2], B[field2], c='black', marker=',', s=1)
ax8[1, 1].set_xlabel('B - I Colour Index')
ax8[1, 1].set_ylabel('B magnitude')
ax8[1, 1].set_xlim(0, 2)
ax8[1, 1].set_ylim(23, 18)
ax8[2, 1].axis('equal')
ax8[2, 1].scatter(x, y, c='black', marker=',', s=1)
ax8[2, 1].scatter(x[in_cluster], y[in_cluster], c='blue', marker=',', s=1)
ax8[2, 1].scatter(x[field2], y[field2], c='red', marker=',', s=1)
ax8[2, 1].set_xlabel('x (arcsec)')
ax8[2, 1].set_ylabel('y (arcsec)')
ax8[0, 2].scatter(B_I[in_cluster & (remove3 == False)], B[in_cluster & (remove3 == False)], c='black', marker=',', s=1)
ax8[0, 2].set_xlabel('B - I Colour Index')
ax8[0, 2].set_ylabel('B magnitude')
ax8[0, 2].set_xlim(0, 2)
ax8[0, 2].set_ylim(23, 18)
draw_sgb_box(ax8[0, 2])
ax8[1, 2].scatter(B_I[field3], B[field3], c='black', marker=',', s=1)
ax8[1, 2].set_xlabel('B - I Colour Index')
ax8[1, 2].set_ylabel('B magnitude')
ax8[1, 2].set_xlim(0, 2)
ax8[1, 2].set_ylim(23, 18)
ax8[2, 2].axis('equal')
ax8[2, 2].scatter(x, y, c='black', marker=',', s=1)
ax8[2, 2].scatter(x[in_cluster], y[in_cluster], c='blue', marker=',', s=1)
ax8[2, 2].scatter(x[field3], y[field3], c='red', marker=',', s=1)
ax8[2, 2].set_xlabel('x (arcsec)')
ax8[2, 2].set_ylabel('y (arcsec)')
ax8[0, 3].scatter(B_I[in_cluster & (remove4 == False)], B[in_cluster & (remove4 == False)], c='black', marker=',', s=1)
ax8[0, 3].set_xlabel('B - I Colour Index')
ax8[0, 3].set_ylabel('B magnitude')
ax8[0, 3].set_xlim(0, 2)
ax8[0, 3].set_ylim(23, 18)
draw_sgb_box(ax8[0, 3])
ax8[1, 3].scatter(B_I[field4], B[field4], c='black', marker=',', s=1)
ax8[1, 3].set_xlabel('B - I Colour Index')
ax8[1, 3].set_ylabel('B magnitude')
ax8[1, 3].set_xlim(0, 2)
ax8[1, 3].set_ylim(23, 18)
ax8[2, 3].axis('equal')
ax8[2, 3].scatter(x, y, c='black', marker=',', s=1)
ax8[2, 3].scatter(x[in_cluster], y[in_cluster], c='blue', marker=',', s=1)
ax8[2, 3].scatter(x[field4], y[field4], c='red', marker=',', s=1)
ax8[2, 3].set_xlabel('x (arcsec)')
ax8[2, 3].set_ylabel('y (arcsec)')
plt.show()
# Show isochrones
plt.title('Colour-Magnitude Diagram with isochrones')
plt.scatter(B_I[in_cluster & (remove3 == False)], B[in_cluster & (remove3 == False)], c='black', marker=',', s=1)
plt.scatter(B_I[select_sgb], B[select_sgb], c='violet',
marker=',', label='SGB stars')
draw_isochrones()
draw_sgb_box(plt)
plt.xlim(0.1, 1.5)
plt.ylim(22.5, 19.5)
plt.xlabel('B - I Colour Index')
plt.ylabel('B magnitude')
plt.legend()
plt.show()
plt.title('Colour-Magnitude Diagram with isochrones')
plt.xlabel('B - I Colour Index')
plt.ylabel('B magnitude')
plt.scatter(B_I[in_cluster & (remove3 == False)], B[in_cluster & (remove3 == False)], c='black', marker=',', s=1)
draw_all_isochrones()
draw_sgb_box(plt)
plt.scatter(B_I[select_sgb], B[select_sgb], c='violet',
marker=',', label='SGB stars')
plt.xlim(0.1, 1.5)
plt.ylim(22.5, 19.5)
plt.legend()
plt.show()
# Plot MSEs of isochrones
plt.title("Mean Squared Error of Isochrones to the Subgiant Branch")
plt.plot(ages, mses)
plt.xlabel('Age (yr)')
plt.ylabel('Mean Squared Error')
plt.show()
print('Optimum age:', ages[mses.argmin()])
print(len(ages))
print(min(ages))
print(max(ages))
| 32.52844 | 119 | 0.668152 | 3,189 | 17,728 | 3.566008 | 0.133898 | 0.013366 | 0.027436 | 0.028579 | 0.402656 | 0.360359 | 0.259585 | 0.215529 | 0.146676 | 0.12821 | 0 | 0.06304 | 0.157096 | 17,728 | 544 | 120 | 32.588235 | 0.697986 | 0.203012 | 0 | 0.132432 | 0 | 0 | 0.153644 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018919 | false | 0 | 0.018919 | 0.005405 | 0.045946 | 0.078378 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ddbc269422cd58ff07684a2795facf93d242e3 | 449 | py | Python | rpi_object_detection_opencv_python/picamera/camera.py | altanai/Ramudroid | 7594ccad895871dc52ed636b23ab18d7030ec679 | [
"MIT"
] | 15 | 2019-07-23T01:47:56.000Z | 2021-12-14T22:51:33.000Z | webrtc_stream_objectdetection/rpi_object_detection_opencv_python/picamera/camera.py | Ramudroid/Ramudroid | 10425397b68763e0a8e89e2a949df737d7e945ee | [
"MIT"
] | 4 | 2019-11-17T12:32:45.000Z | 2021-12-13T20:19:00.000Z | webrtc_stream_objectdetection/rpi_object_detection_opencv_python/picamera/camera.py | Ramudroid/Ramudroid | 10425397b68763e0a8e89e2a949df737d7e945ee | [
"MIT"
] | 6 | 2019-07-23T01:48:03.000Z | 2022-03-03T03:28:24.000Z | from picamera import PiCamera
from time import sleep
camera = PiCamera()
# preview
# camera.start_preview()
# sleep(5)
# camera.stop_preview()
# take 5 pics
camera.start_preview()
for i in range(5):
sleep(5)
camera.capture('/home/pi/Desktop/image%s.jpg' % i)
camera.stop_preview()
# video record
# camera.start_preview()
# camera.start_recording('/home/pi/Desktop/video.h264')
# sleep(5)
# camera.stop_recording()
# camera.stop_preview() | 19.521739 | 55 | 0.726058 | 65 | 449 | 4.892308 | 0.415385 | 0.138365 | 0.169811 | 0.100629 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020408 | 0.126949 | 449 | 23 | 56 | 19.521739 | 0.790816 | 0.485523 | 0 | 0 | 0 | 0 | 0.127273 | 0.127273 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94e12cd9710f6c79b7d5368f138b150067c7b158 | 6,314 | py | Python | Bot/cogs/search.py | andrewlee91/DiscordBot | b20cfa95f90e0a7299a9f117eca48fc2c59adf9f | [
"MIT"
] | null | null | null | Bot/cogs/search.py | andrewlee91/DiscordBot | b20cfa95f90e0a7299a9f117eca48fc2c59adf9f | [
"MIT"
] | null | null | null | Bot/cogs/search.py | andrewlee91/DiscordBot | b20cfa95f90e0a7299a9f117eca48fc2c59adf9f | [
"MIT"
] | null | null | null | import json
import logging
import discord
import requests
from discord.ext import commands
logger = logging.getLogger(__name__)
class search(commands.Cog):
"""Search commands for various websites"""
def __init__(self, bot):
self.bot = bot
@commands.command()
async def urban(self, ctx, *, term: str):
"""Search Urban Dictionary"""
req = requests.get(
"http://api.urbandictionary.com/v0/define?term={}".format(term)
)
        # Get JSON data for the first result (guard against empty results)
        results = req.json()["list"]
        if not results:
            await ctx.send("No Urban Dictionary results found for '{}'".format(term))
            return
        dictTerm = results[0]
word = dictTerm["word"]
definition = dictTerm["definition"]
example = dictTerm["example"]
message = "{} \n\n *{}*".format(definition, example)
# Get rid of any square brackets
message = message.replace("[", "")
message = message.replace("]", "")
embed = discord.Embed()
embed.add_field(name=word, value=message, inline=False)
await ctx.send(embed=embed)
@commands.command()
async def anime(self, ctx, *, text: str):
"""Search AniList for anime"""
query = """
query ($search: String) {
Media (search: $search, type: ANIME) {
id
title {
romaji
english
native
}
description
episodes
duration
status
genres
averageScore
coverImage {
large
}
}
}
"""
variables = {"search": text}
response = requests.post(
"https://graphql.anilist.co", json={"query": query, "variables": variables}
)
rateLimitRemaining = int(response.headers["X-RateLimit-Remaining"])
# Rate limiting is currently set to 90 requests per minute
# If you go over the rate limit you'll receive a 1-minute timeout
# https://anilist.gitbook.io/anilist-apiv2-docs/overview/rate-limiting
if rateLimitRemaining > 0:
animeJSON = response.json()["data"]["Media"]
description = animeJSON["description"]
description = description.replace("<br>", "")
genres = ""
for g in animeJSON["genres"]:
genres += "{}, ".format(g)
embed = discord.Embed(
title="{} / {}".format(
animeJSON["title"]["romaji"], animeJSON["title"]["native"]
),
url="https://anilist.co/anime/{}".format(animeJSON["id"]),
description=description,
)
embed.set_thumbnail(url=animeJSON["coverImage"]["large"])
embed.add_field(
name="Episode Count", value=animeJSON["episodes"], inline=True
)
embed.add_field(
name="Duration",
value="{} minutes per episode".format(animeJSON["duration"]),
inline=True,
)
embed.add_field(name="Status", value=animeJSON["status"], inline=True)
embed.add_field(name="Genres", value=genres[:-2], inline=True)
embed.add_field(
name="Average Score", value=animeJSON["averageScore"], inline=True
)
embed.set_footer(text="Powered by anilist.co")
await ctx.send(embed=embed)
else:
await ctx.send(
"The bot is currently being rate limited :( Try again in {} seconds".format(
response.headers["Retry-After"]
)
)
@commands.command()
async def manga(self, ctx, *, text: str):
"""Search AniList for manga"""
query = """
query ($search: String) {
Media (search: $search, type: MANGA) {
id
title {
romaji
english
native
}
description
chapters
volumes
status
genres
averageScore
coverImage {
large
}
}
}
"""
variables = {"search": text}
response = requests.post(
"https://graphql.anilist.co", json={"query": query, "variables": variables}
)
rateLimitRemaining = int(response.headers["X-RateLimit-Remaining"])
# Rate limiting is currently set to 90 requests per minute
# If you go over the rate limit you'll receive a 1-minute timeout
# https://anilist.gitbook.io/anilist-apiv2-docs/overview/rate-limiting
if rateLimitRemaining > 0:
mangaJSON = response.json()["data"]["Media"]
description = mangaJSON["description"]
description = description.replace("<br>", "")
genres = ""
for g in mangaJSON["genres"]:
genres += "{}, ".format(g)
embed = discord.Embed(
title="{} / {}".format(
mangaJSON["title"]["romaji"], mangaJSON["title"]["native"]
),
url="https://anilist.co/manga/{}".format(mangaJSON["id"]),
description=description,
)
embed.set_thumbnail(url=mangaJSON["coverImage"]["large"])
embed.add_field(name="Chapters", value=mangaJSON["chapters"], inline=True)
embed.add_field(name="Volumes", value=mangaJSON["volumes"], inline=True)
embed.add_field(name="Status", value=mangaJSON["status"], inline=True)
embed.add_field(name="Genres", value=genres[:-2], inline=True)
embed.add_field(
name="Average Score", value=mangaJSON["averageScore"], inline=True
)
embed.set_footer(text="Powered by anilist.co")
await ctx.send(embed=embed)
else:
await ctx.send(
"The bot is currently being rate limited :( Try again in {} seconds".format(
response.headers["Retry-After"]
)
)
def setup(bot):
bot.add_cog(search(bot))
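# Loading sketch (assumes a discord.py 1.x-style bot; the extension path below
# is hypothetical and depends on where this file lives in the project):
#
#   bot.load_extension("cogs.search")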
| 32.885417 | 92 | 0.509819 | 582 | 6,314 | 5.489691 | 0.254296 | 0.027543 | 0.044757 | 0.058529 | 0.661033 | 0.634116 | 0.556495 | 0.510172 | 0.459468 | 0.396244 | 0 | 0.003504 | 0.367279 | 6,314 | 191 | 93 | 33.057592 | 0.796245 | 0.076497 | 0 | 0.496644 | 0 | 0 | 0.310525 | 0.007331 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013423 | false | 0 | 0.033557 | 0 | 0.053691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94e46dab588b249b888b14d393509caa3e2c7213 | 1,042 | py | Python | deque.py | iliankostadinov/hackerrank-python | 4082eb5a6da7ab11ead9b687fdc2d9a846f59f99 | [
"Apache-2.0"
] | null | null | null | deque.py | iliankostadinov/hackerrank-python | 4082eb5a6da7ab11ead9b687fdc2d9a846f59f99 | [
"Apache-2.0"
] | null | null | null | deque.py | iliankostadinov/hackerrank-python | 4082eb5a6da7ab11ead9b687fdc2d9a846f59f99 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
from collections import deque
def pilingUp(d):
if len(d) > 1:
value = "Yes"
currentEl = 0
if d[0] > d[-1]:
currentEl = d[0]
d.popleft()
else:
currentEl = d[-1]
d.pop()
for _ in range(len(d) - 1):
if d[0] > d[-1] and d[0] <= currentEl:
d.popleft()
else:
if d[-1] <= currentEl:
d.pop()
else:
value = "No"
break
else:
value = "Yes"
return value
if __name__ == "__main__":
# number of testcases
t = int(input())
results = []
for _ in range(t):
d = deque()
# number of cubes
n = int(input())
    # side length of each cube
row_of_cubes = map(int, input().split())
# put elements in deque
for e in row_of_cubes:
d.append(e)
results.append(pilingUp(d))
for i in results:
print(i)
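# Example session (assumes HackerRank's "Piling Up!" input format):
#
#   input:              output:
#   2
#   6
#   4 3 2 1 3 4         Yes
#   3
#   1 3 2               No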
| 22.170213 | 50 | 0.432821 | 125 | 1,042 | 3.496 | 0.424 | 0.02746 | 0.020595 | 0.022883 | 0.02746 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02087 | 0.448177 | 1,042 | 46 | 51 | 22.652174 | 0.73913 | 0.099808 | 0 | 0.285714 | 0 | 0 | 0.017167 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.028571 | 0 | 0.085714 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94e47510d912939867b8bf4137069b8df856ea63 | 12,864 | py | Python | Psience/DVR/BaseDVR.py | McCoyGroup/Coordinerds | 058a4f5b29f157e499cec3c8f2da8b216f0210ef | [
"MIT"
] | null | null | null | Psience/DVR/BaseDVR.py | McCoyGroup/Coordinerds | 058a4f5b29f157e499cec3c8f2da8b216f0210ef | [
"MIT"
] | null | null | null | Psience/DVR/BaseDVR.py | McCoyGroup/Coordinerds | 058a4f5b29f157e499cec3c8f2da8b216f0210ef | [
"MIT"
] | null | null | null | """
Redoes what was originally PyDVR but in the _right_ way using proper subclassing and abstract properties
"""
import abc, numpy as np, scipy.sparse as sp, scipy.interpolate as interp
from McUtils.Data import UnitsData
__all__ = ["BaseDVR", "DVRResults", "DVRException"]
class BaseDVR(metaclass=abc.ABCMeta):
"""
Provides the abstract interface for creating a
convenient runnable DVR that can be cleanly subclassed to provide
extensions
"""
def __init__(self,
domain=None,
divs=None,
potential_function=None,
**base_opts
):
"""
:param base_opts: base opts to use when running
:type base_opts:
"""
self.domain = domain
base_opts['domain'] = domain
self.divs = divs
base_opts['divs'] = divs
self.potential_function = potential_function
base_opts['potential_function'] = potential_function
self.opts = base_opts
    def __repr__(self):
        return "{}({}, pts={}, pot={})".format(
            type(self).__name__,
            self.domain,
            self.divs,
            self.potential_function
        )
@abc.abstractmethod
def get_grid(self, domain=None, divs=None, **kwargs):
raise NotImplementedError("abstract interface")
def grid(self, domain=None, divs=None, **kwargs):
if domain is None:
domain = self.domain
if divs is None:
divs = self.divs
if domain is None:
raise ValueError("need a value for `domain`")
if divs is None:
raise ValueError("need a value for `divs`")
return self.get_grid(domain=domain, divs=divs, **kwargs)
@abc.abstractmethod
def get_kinetic_energy(self, grid=None, mass=None, hb=1, **kwargs):
raise NotImplementedError("abstract interface")
def kinetic_energy(self, grid=None, mass=None, hb=1, g=None, g_deriv=None, **kwargs):
if grid is None:
grid = self.grid()
if g is not None:
mass = 1
if mass is None:
raise ValueError("need a value for the mass")
ke_1D = self.get_kinetic_energy(grid=grid, mass=mass, hb=hb, **kwargs)
if g is not None:
if g_deriv is None:
raise ValueError(
"if a function for `g` is supplied, also need a function, `g_deriv` for the second derivative of `g`")
# add the average value of `g` across the grid points
try:
iter(g)
except TypeError:
g_vals = g(grid)
            else:
                g_vals = np.asanyarray(g)
try:
iter(g_deriv)
except TypeError:
g_deriv_vals = g_deriv(grid)
else:
g_deriv_vals = np.asanyarray(g_deriv)
g_vals = 1 / 2 * (g_vals[:, np.newaxis] + g_vals[np.newaxis, :])
g_deriv_vals = (hb ** 2) / 2 * np.diag(g_deriv_vals)
ke_1D = ke_1D * g_vals + g_deriv_vals
return ke_1D
def real_momentum(self, grid=None, mass=None, hb=1, **kwargs):
raise NotImplementedError("real momentum needs to be implemented")
def potential_energy(self, grid=None,
potential_function=None,
potential_values=None,
potential_grid=None,
**pars
):
"""
Calculates the potential energy at the grid points based
on dispatching on the input form of the potential
:param grid: the grid of points built earlier in the DVR
:type grid:
:param potential_function: a function to evaluate the potential energy at the points
:type potential_function:
:param potential_values: the values of the potential at the DVR points
:type potential_values:
:param potential_grid: a grid of points and values to be interpolated
:type potential_grid:
:param pars: ignored keyword arguments
:type pars:
:return:
:rtype:
"""
if grid is None:
grid = self.grid()
if potential_function is None and potential_grid is None and potential_values is None:
potential_function = self.potential_function
if potential_function is not None:
# explicit potential function passed; map over coords
pf=potential_function
dim = len(grid.shape)
if dim > 1:
npts = np.prod(grid.shape[:-1], dtype=int)
grid = np.reshape(grid, (npts, grid.shape[-1]))
pot = sp.diags([pf(grid)], [0])
else:
pot = np.diag(pf(grid))
elif potential_values is not None:
# array of potential values at coords passed
dim = len(grid.shape)
if dim > 1:
pot = sp.diags([potential_values], [0])
else:
pot = np.diag(potential_values)
elif potential_grid is not None:
# TODO: extend to include ND, scipy.griddata
dim = len(grid.shape)
if dim > 1:
dim -= 1
                npts = np.prod(grid.shape[:-1], dtype=int)
grid = np.reshape(grid, (npts, grid.shape[-1]))
if dim == 1:
# use a cubic spline interpolation
interpolator = lambda g1, g2: interp.interp1d(g1[:, 0], g1[:, 1], kind='cubic')(g2)
else:
# use griddata to do a general purpose interpolation
def interpolator(g, g2):
# g is an np.ndarray of potential points and values
# g2 is the set of grid points to interpolate them over
shape_dim = len(g.shape)
if shape_dim == 2:
points = g[:, :-1]
vals = g[:, -1]
return interp.griddata(points, vals, g2)
else:
# assuming regular structured grid
mesh = np.moveaxis(g, 0, shape_dim)
points = tuple(np.unique(x) for x in mesh[:-1])
vals = mesh[-1]
return interp.interpn(points, vals, g2)
            interpolated = np.nan_to_num(interpolator(potential_grid, grid))
            pot = sp.diags([interpolated], [0])
else:
raise DVRException("couldn't construct potential matrix")
return pot
def hamiltonian(self, kinetic_energy=None, potential_energy=None, potential_threshold=None, **pars):
"""
Calculates the total Hamiltonian from the kinetic and potential matrices
:param kinetic_energy:
:type kinetic_energy:
:param potential_energy:
:type potential_energy: np.ndarray | sp.spmatrix
:param potential_threshold:
:type potential_threshold:
:param pars:
:type pars:
:return:
:rtype:
"""
if potential_threshold is not None:
diag = potential_energy.diagonal()
chops = np.where(diag > 0)
if len(chops) == 0:
return kinetic_energy + potential_energy
chops = chops[0]
ham = kinetic_energy + potential_energy
ham[chops, :] = 0
ham[:, chops] = 0
return ham
else:
return kinetic_energy + potential_energy
def wavefunctions(self, hamiltonian=None, num_wfns=25, nodeless_ground_state=False, diag_mode=None, **pars):
"""
Calculates the wavefunctions for the given Hamiltonian.
Doesn't support any kind of pruning based on potential values although that might be a good feature
to support explicitly in the future
:param hamiltonian:
:type hamiltonian:
:param num_wfns:
:type num_wfns:
:param nodeless_ground_state:
:type nodeless_ground_state:
:param diag_mode:
:type diag_mode:
:param pars:
:type pars:
:return:
:rtype:
"""
if isinstance(hamiltonian, sp.spmatrix) and diag_mode == 'dense':
hamiltonian = hamiltonian.toarray()
if isinstance(hamiltonian, sp.spmatrix):
import scipy.sparse.linalg as la
engs, wfns = la.eigsh(hamiltonian, num_wfns, which='SM')
else:
engs, wfns = np.linalg.eigh(hamiltonian)
if num_wfns is not None:
engs = engs[:num_wfns]
wfns = wfns[:, :num_wfns]
if nodeless_ground_state:
s = np.sign(wfns[:, 0])
wfns *= s[:, np.newaxis]
return engs, wfns
def run(self, result='wavefunctions', **opts):
"""
:return:
:rtype: DVRResults
"""
from .Wavefunctions import DVRWavefunctions
opts = dict(self.opts, **opts)
res = DVRResults(parent=self, **opts)
grid = self.grid(**opts)
res.grid = grid
if result == 'grid':
return res
pe = self.potential_energy(grid=res.grid, **opts)
res.potential_energy = pe
if result == 'potential_energy':
return res
ke = self.kinetic_energy(grid=res.grid, **opts)
res.kinetic_energy = ke
if result == 'kinetic_energy':
return res
h = self.hamiltonian(
kinetic_energy=res.kinetic_energy,
potential_energy=res.potential_energy,
**opts
)
res.hamiltonian = h
if result == 'hamiltonian':
return res
energies, wfn_data = self.wavefunctions(
hamiltonian=res.hamiltonian,
**opts
)
wfns = DVRWavefunctions(energies=energies, wavefunctions=wfn_data, results=res, **opts)
res.wavefunctions = wfns
return res
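# Minimal usage sketch (assumes a concrete subclass implementing get_grid and
# get_kinetic_energy; `MySineDVR` and the harmonic potential are hypothetical):
#
#   dvr = MySineDVR(domain=(-5, 5), divs=101,
#                   potential_function=lambda x: 0.5 * x ** 2)
#   res = dvr.run(mass=1.0, num_wfns=10)
#   print(res.wavefunctions)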
class DVRException(Exception):
"""
Base exception class for working with DVRs
"""
class DVRResults:
"""
A subclass that can wrap all of the DVR run parameters and results into a clean interface for reuse and extension
"""
def __init__(self,
grid=None,
kinetic_energy=None,
potential_energy=None,
hamiltonian=None,
wavefunctions=None,
parent=None,
**opts
):
self.grid=grid
self.kinetic_energy=kinetic_energy
self.potential_energy=potential_energy
self.parent=parent
self.wavefunctions=wavefunctions
self.hamiltonian=hamiltonian
self.opts = opts
@property
def dimension(self):
dim = len(self.grid.shape)
if dim > 1:
dim -= 1
return dim
def plot_potential(self, plot_class=None, figure=None, plot_units=None, energy_threshold=None, zero_shift=False, **opts):
"""
Simple plotting function for the potential.
Should be updated to deal with higher dimensional cases
:param plot_class: the graphics class to use for the plot
:type plot_class: McUtils.Plots.Graphics
:param opts: plot styling options
:type opts:
:return:
:rtype: McUtils.Plots.Graphics
"""
from McUtils.Plots import Plot, ContourPlot
# get the grid for plotting
MEHSH = self.grid
dim = self.dimension
if dim == 1:
mesh = [MEHSH]
else:
mesh = np.moveaxis(MEHSH, dim, 0)
if plot_class is None:
if dim == 1:
plot_class = Plot
elif dim == 2:
plot_class = ContourPlot
else:
raise DVRException("{}.{}: don't know how to plot {} dimensional potential".format(
type(self).__name__,
'plot',
dim
))
pot = self.potential_energy.diagonal()
if isinstance(plot_units, str) and plot_units == 'wavenumbers':
pot = pot * UnitsData.convert("Hartrees", "Wavenumbers")
if zero_shift:
pot = pot - np.min(pot)
if energy_threshold:
pot[pot > energy_threshold] = energy_threshold
return plot_class(*mesh, pot.reshape(mesh[0].shape), figure=figure, **opts)
| 32.900256 | 125 | 0.54773 | 1,438 | 12,864 | 4.776078 | 0.196106 | 0.041497 | 0.010483 | 0.012231 | 0.194962 | 0.139633 | 0.112842 | 0.082266 | 0.059843 | 0.052708 | 0 | 0.006977 | 0.364894 | 12,864 | 390 | 126 | 32.984615 | 0.83366 | 0.189366 | 0 | 0.3 | 0 | 0.004167 | 0.054721 | 0 | 0 | 0 | 0 | 0.002564 | 0 | 1 | 0.0625 | false | 0 | 0.020833 | 0 | 0.170833 | 0.004167 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94e54ce08093f7eeec968430ff2c16c5e73a50b8 | 3,802 | py | Python | Weight-Converter-GUI/weight_con_gui.py | avinashkranjan/PraticalPythonProjects | 12c1f7cedae57a843ceb6aba68cca48df505f341 | [
"MIT"
] | 930 | 2020-09-05T22:07:28.000Z | 2022-03-30T07:56:18.000Z | Weight-Converter-GUI/weight_con_gui.py | maheshdbabar9340/Amazing-Python-Scripts | e2272048cbe49b4bda5072bbdd8479739bb6c18d | [
"MIT"
] | 893 | 2020-09-04T07:57:24.000Z | 2022-02-08T02:12:26.000Z | Weight-Converter-GUI/weight_con_gui.py | maheshdbabar9340/Amazing-Python-Scripts | e2272048cbe49b4bda5072bbdd8479739bb6c18d | [
"MIT"
] | 497 | 2020-09-05T08:16:24.000Z | 2022-03-31T00:55:57.000Z | # import libraries
from tkinter import *
# initialized window
root = Tk()
root.geometry('480x350')
root.resizable(0, 0)
root.title('Weight Converter')
# defining the function for converting weights
def WeightConv():
    # make the textboxes editable so the new output can be inserted
t1.configure(state='normal')
t1.delete("1.0", END)
t2.configure(state='normal')
t2.delete("1.0", END)
t3.configure(state='normal')
t3.delete("1.0", END)
t4.configure(state='normal')
t4.delete("1.0", END)
t5.configure(state='normal')
t5.delete("1.0", END)
t6.configure(state='normal')
t6.delete("1.0", END)
# exception handling
try:
kilograms = float(e1.get())
        # insert the output into the textboxes, rounded to 2 decimal places
t1.insert(END, "%.2f" % (kilograms * 5000))
t2.insert(END, "%.2f" % (kilograms * 1000))
t3.insert(END, "%.2f" % (kilograms * 35.274))
t4.insert(END, "%.2f" % (kilograms * 2.20462))
t5.insert(END, "%.2f" % (kilograms * 0.01))
t6.insert(END, "%.2f" % (kilograms * 0.001))
    # if a blank or invalid input is given, a ValueError is raised and caught here
except ValueError:
t1.insert(END, " ~ Invalid input ~ ")
t2.insert(END, " ~ Invalid input ~ ")
t3.insert(END, " ~ Invalid input ~ ")
t4.insert(END, " ~ Invalid input ~ ")
t5.insert(END, " ~ Invalid input ~ ")
t6.insert(END, " ~ Invalid input ~ ")
# making textbox uneditable
t1.configure(state='disabled')
t2.configure(state='disabled')
t3.configure(state='disabled')
t4.configure(state='disabled')
t5.configure(state='disabled')
t6.configure(state='disabled')
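# Conversion factors used above (per kilogram): 5000 ct, 1000 g, 35.274 oz,
# 2.20462 lb, 0.01 q (quintal), 0.001 t (tonne).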
# creating a label to display
l1 = Label(root, text="Enter the weight in kilograms (kg) : ")
l1.grid(row=1, column=1, columnspan=2)
value = StringVar()
# creating a entry box for input
e1 = Entry(root, textvariable=value)
e1.grid(row=1, column=3, columnspan=2)
# create a button for conversion
button = Button(root, text="Convert", command=WeightConv)
button.grid(row=2, column=2, columnspan=2, rowspan=2)
# make labels for textbox
t1l1 = Label(root, text="kg to ct : ")
t1l1.grid(row=4, column=1, columnspan=1)
t2l2 = Label(root, text="kg to g : ")
t2l2.grid(row=5, column=1, columnspan=1)
t3l3 = Label(root, text="kg to oz : ")
t3l3.grid(row=6, column=1, columnspan=1)
t4l4 = Label(root, text="kg to lb : ")
t4l4.grid(row=7, column=1, columnspan=1)
t5l5 = Label(root, text="kg to q : ")
t5l5.grid(row=8, column=1, columnspan=1)
t6l6 = Label(root, text="kg to t : ")
t6l6.grid(row=9, column=1, columnspan=1)
t1r1 = Label(root, text="Carat")
t1r1.grid(row=4, column=4, columnspan=1)
t2r2 = Label(root, text="Gram")
t2r2.grid(row=5, column=4, columnspan=1)
t3r3 = Label(root, text="Ounce")
t3r3.grid(row=6, column=4, columnspan=1)
t4r4 = Label(root, text="Pound")
t4r4.grid(row=7, column=4, columnspan=1)
t5r5 = Label(root, text="Quintal")
t5r5.grid(row=8, column=4, columnspan=1)
t6r6 = Label(root, text="Tonne")
t6r6.grid(row=9, column=4, columnspan=1)
# creating textbox and defining grid to show output
t1 = Text(root, height=1, width=20)
t1.grid(row=4, column=2, columnspan=2)
t2 = Text(root, height=1, width=20)
t2.grid(row=5, column=2, columnspan=2)
t3 = Text(root, height=1, width=20)
t3.grid(row=6, column=2, columnspan=2)
t4 = Text(root, height=1, width=20)
t4.grid(row=7, column=2, columnspan=2)
t5 = Text(root, height=1, width=20)
t5.grid(row=8, column=2, columnspan=2)
t6 = Text(root, height=1, width=20)
t6.grid(row=9, column=2, columnspan=2)
# making blank spaces in GUI
for r in range(10):
root.grid_rowconfigure(r, minsize=30)
for c in range(6):
root.grid_columnconfigure(c, minsize=50)
# infinite loop to run program
root.mainloop()
| 26.964539 | 76 | 0.653603 | 587 | 3,802 | 4.229983 | 0.267462 | 0.059203 | 0.068063 | 0.050745 | 0.111156 | 0.053162 | 0 | 0 | 0 | 0 | 0 | 0.076059 | 0.180431 | 3,802 | 140 | 77 | 27.157143 | 0.720796 | 0.136244 | 0 | 0 | 0 | 0 | 0.126377 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.011494 | false | 0 | 0.011494 | 0 | 0.022989 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94e8d2a632eeca6338d3e4d6466fdec1e8bb6e14 | 3,789 | py | Python | tests/common/test_op/ascend/pooling.py | tianjiashuo/akg | a9cbf642063fb1086a93e8bc6be6feb145689817 | [
"Apache-2.0"
] | 286 | 2020-06-23T06:40:44.000Z | 2022-03-30T01:27:49.000Z | tests/common/test_op/ascend/pooling.py | tianjiashuo/akg | a9cbf642063fb1086a93e8bc6be6feb145689817 | [
"Apache-2.0"
] | 10 | 2020-07-31T03:26:59.000Z | 2021-12-27T15:00:54.000Z | tests/common/test_op/ascend/pooling.py | tianjiashuo/akg | a9cbf642063fb1086a93e8bc6be6feb145689817 | [
"Apache-2.0"
] | 30 | 2020-07-17T01:04:14.000Z | 2021-12-27T14:05:19.000Z | # Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""operator dsl function: pooling"""
import akg
import akg.utils as utils
from akg.utils.format_transform import get_shape
from akg.ops.nn.ascend import MaxPool, Avgpool
def _pooling_compute(x, window, stride,
mode=0, pad_mode=5, pad=(0, 0, 0, 0)):
"""compute for pooling"""
# convert mode&pad_mode to str
if mode == 0:
mode = "MAX"
elif mode == 1:
mode = "AVG"
else:
raise RuntimeError("Invalid mode parameters, mode must set 0 or 1.")
if pad_mode == 5:
pad_mode = "VALID"
elif pad_mode == 6:
pad_mode = "SAME"
else:
raise RuntimeError("Invalid pad_mode parameters, pad_mode must set 5 or 6.")
# check pad
if pad not in ((0, 0, 0, 0), [0, 0, 0, 0]):
raise RuntimeError("Not support pad now!")
in_size_h = x.shape[2].value
in_size_w = x.shape[3].value
window = list(window)
if window[0] >= in_size_h and window[1] >= in_size_w:
window[0] = in_size_h
window[1] = in_size_w
pad_mode = "VALID"
stride = [1, 1]
if mode == "MAX":
res = MaxPool(x, window, stride, pad_mode)
else:
# AVG
res = Avgpool(x, window, stride, pad_mode)
return res
@utils.check_input_type(akg.tvm.tensor.Tensor,
(list, tuple), (list, tuple), (int, type(None)),
(int, type(None)), (list, tuple, type(None)),
(bool, type(None)), (int, type(None)))
def pooling(x, window, stride,
mode=0, pad_mode=5, pad=(0, 0, 0, 0),
global_pooling=False, ceil_mode=0):
"""
Pooling operation, including MaxPool and AvgPool.
Args:
x (tvm.tensor.Tensor): Input tensor, only support float16
dtype, and NC1HWC0 format.
window (Union[list, tuple]): Pooling window, only support pooling
in H or W.
stride (Union[list, tuple]): Pooling stride, only support pooling
in H or W.
mode (int): Mode of pooling, support MaxPool and AvgPool. 0 for MaxPool,
1 for AvgPool.
pad_mode (int): Mode of padding, 5 for VALID, 6 for SAME.
pad (Union[list, tuple]): Implicit padding size to up/down/left/right.
global_pooling (bool): Global pooling flag, invalid now, should be False.
ceil_mode (int): Round_mode params, invalid now, should be 0.
Returns:
A tvm.tensor.Tensor with same dtype as input.
"""
utils.check_shape(get_shape(x))
utils.ops_dtype_check(x.dtype, utils.DtypeForDavinci.FLOAT16)
if len(window) != 2:
raise RuntimeError("Invalid shape params, window shape must be 2 dims, "
"including window_h and window_w.")
if len(stride) != 2:
raise RuntimeError("Invalid shape params, stride shape must be 2 dims, "
"including stride_h and stride_w.")
if global_pooling or ceil_mode != 0:
raise RuntimeError("Not support global_pooling and ceil_mode for now.")
return _pooling_compute(x, window, stride, mode, pad_mode, pad)
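# Usage sketch (assumes a working akg/TVM environment; the NC1HWC0 shape below
# is illustrative only):
#
#   data = akg.tvm.placeholder((1, 1, 16, 16, 16), name="data", dtype="float16")
#   out = pooling(data, window=(2, 2), stride=(2, 2), mode=0, pad_mode=5)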
| 36.432692 | 84 | 0.611771 | 532 | 3,789 | 4.261278 | 0.287594 | 0.043229 | 0.013233 | 0.012351 | 0.210851 | 0.127481 | 0.053816 | 0.029113 | 0.029113 | 0.029113 | 0 | 0.024065 | 0.287147 | 3,789 | 103 | 85 | 36.786408 | 0.815254 | 0.398786 | 0 | 0.098039 | 0 | 0 | 0.164446 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0 | 0.078431 | 0 | 0.156863 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ee81700cccc92d145a91273dcbf30469d045b5 | 2,130 | py | Python | MoinMoin/datastruct/backends/composite_groups.py | RealTimeWeb/wikisite | 66a22c68c172f0ebb3c88a9885ccd33e2d59c3c5 | [
"Apache-2.0"
] | 1 | 2016-04-01T04:02:28.000Z | 2016-04-01T04:02:28.000Z | Documentation/ManualSource/wikicmd/MoinMoin/datastruct/backends/composite_groups.py | sleyzerzon/soar | 74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71 | [
"Unlicense"
] | 3 | 2020-06-26T21:21:32.000Z | 2020-06-26T21:21:36.000Z | Documentation/ManualSource/wikicmd/MoinMoin/datastruct/backends/composite_groups.py | sleyzerzon/soar | 74a6f32ba1be3a7b3ed4eac0b44b0f4b2e981f71 | [
"Unlicense"
] | 2 | 2017-01-25T20:06:44.000Z | 2021-03-25T18:39:55.000Z | # -*- coding: iso-8859-1 -*-
"""
MoinMoin - group access via various backends.
The composite_groups is a backend that does not have direct storage,
but composes other backends to a new one, so group definitions are
retrieved from several backends. This allows to mix different
backends.
@copyright: 2009 DmitrijsMilajevs
@license: GPL, see COPYING for details
"""
from MoinMoin.datastruct.backends import BaseGroupsBackend, GroupDoesNotExistError
class CompositeGroups(BaseGroupsBackend):
"""
Manage several group backends.
"""
def __init__(self, request, *backends):
"""
@param backends: list of group backends which are used to get
access to the group definitions.
"""
super(CompositeGroups, self).__init__(request)
self._backends = backends
def __getitem__(self, group_name):
"""
Get a group by its name. First match counts.
"""
for backend in self._backends:
try:
return backend[group_name]
except GroupDoesNotExistError:
pass
raise GroupDoesNotExistError(group_name)
def __iter__(self):
"""
Iterate over group names in all backends (filtering duplicates).
If a group with same name is defined in several backends, the
composite_groups backend yields only backend which is listed
earlier in self._backends.
"""
yielded_groups = set()
for backend in self._backends:
for group_name in backend:
if group_name not in yielded_groups:
yield group_name
yielded_groups.add(group_name)
def __contains__(self, group_name):
"""
Check if a group called group_name is available in any of the backends.
@param group_name: name of the group [unicode]
"""
for backend in self._backends:
if group_name in backend:
return True
return False
def __repr__(self):
return "<%s backends=%s>" % (self.__class__, self._backends)
| 30 | 82 | 0.630516 | 244 | 2,130 | 5.29918 | 0.438525 | 0.076566 | 0.04331 | 0.037123 | 0.055684 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006061 | 0.302817 | 2,130 | 70 | 83 | 30.428571 | 0.864646 | 0.413146 | 0 | 0.115385 | 0 | 0 | 0.014625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192308 | false | 0.038462 | 0.038462 | 0.038462 | 0.423077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94ef7d0044d15bba93e634290af3a6fddf02e79a | 15,192 | py | Python | experiments/distributed/detection/main_fedavg_fasterrcnn.py | RicardLake/FedDetection | a4004593a4d07126447db73b98b53a7e56e0472b | [
"MIT"
] | null | null | null | experiments/distributed/detection/main_fedavg_fasterrcnn.py | RicardLake/FedDetection | a4004593a4d07126447db73b98b53a7e56e0472b | [
"MIT"
] | null | null | null | experiments/distributed/detection/main_fedavg_fasterrcnn.py | RicardLake/FedDetection | a4004593a4d07126447db73b98b53a7e56e0472b | [
"MIT"
] | null | null | null | import argparse
import logging
import os
import random
import socket
import sys
import datetime
import numpy as np
import psutil
import setproctitle
import torch
import torchvision.models
#import wandb
# add the FedML root directory to the python path
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), "../../../")))
from FedML.fedml_api.distributed.utils.gpu_mapping import mapping_processes_to_gpu_device_from_yaml_file
from FedML.fedml_api.distributed.feddetec.FedDetecAPI import FedML_init, FedML_FedDetec_distributed
from FedML.fedml_api.distributed.feddetec.utils import count_parameters
from data_preprocessing.coco.coco_detection.data_loader import load_partition_data_coco,load_partition_data_electric
# from data_preprocessing.coco.segmentation.data_loader.py import load_partition_data_distributed_coco_segmentation, load_partition_data_coco_segmentation
from data_preprocessing.pascal_voc_augmented.data_loader import load_partition_data_distributed_pascal_voc, \
load_partition_data_pascal_voc
from data_preprocessing.coco.coco_detection.datasets import create_dataloader
from data_preprocessing.cityscapes.data_loader import load_partition_data_distributed_cityscapes, \
load_partition_data_cityscapes
#from model.segmentation.deeplabV3_plus import DeepLabV3_plus
#from model.segmentation.unet import UNet
from training.detection_trainer import DetectionTrainer
#from training.segmentation_trainer import SegmentationTrainer
def str2bool(v):
if isinstance(v, bool):
return v
if v.lower() in ('yes', 'true', 't', 'y', '1'):
return True
elif v.lower() in ('no', 'false', 'f', 'n', '0'):
return False
else:
raise argparse.ArgumentTypeError('Boolean value expected.')
def add_args(parser):
"""
parser : argparse.ArgumentParser
    return a parser updated with the arguments required for training
"""
# Training settings
parser.add_argument('--process_name', type=str, default='FedDetec-distributed:',
help='Machine process names')
parser.add_argument('--model', type=str, default='fasterrcnn_resnet50_rpn', metavar='N',
help='neural network used in training')
parser.add_argument('--device', default='',
help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--backbone', type=str, default='resnet',
help='employ with backbone (default: xception)')
parser.add_argument('--backbone_pretrained', type=str2bool, nargs='?', const=True, default=True,
help='pretrained backbone (default: True)')
parser.add_argument('--backbone_freezed', type=str2bool, nargs='?', const=True, default=False,
help='Freeze backbone to extract features only once (default: False)')
parser.add_argument('--extract_feat', type=str2bool, nargs='?', const=True, default=False,
help='Extract Feature Maps of (default: False) NOTE: --backbone_freezed has to be True for this argument to be considered')
parser.add_argument('--outstride', type=int, default=16,
help='network output stride (default: 16)')
parser.add_argument('--dataset', type=str, default='pascal_voc', metavar='N',
choices=['coco','electric', 'pascal_voc', 'cityscapes'],
help='dataset used for training')
parser.add_argument('--data_dir', type=str, default='/home/chaoyanghe/BruteForce/FedML/data/pascal_voc',
help='data directory (default = /home/chaoyanghe/BruteForce/FedML/data/pascal_voc)')
parser.add_argument('--log_dir', type=str, default='./runs_test/',
help='data directory')
parser.add_argument('--checkname', type=str, default='deeplab-resnet-finetune-hetero',
help='set the checkpoint name')
parser.add_argument('--partition_method', type=str, default='hetero', metavar='N',
help='how to partition the dataset on local workers')
parser.add_argument('--partition_alpha', type=float, default=0.5, metavar='PA',
help='partition alpha (default: 0.5)')
parser.add_argument('--client_num_in_total', type=int, default=3, metavar='NN',
help='number of workers in a distributed cluster')
parser.add_argument('--client_num_per_round', type=int, default=3, metavar='NN',
help='number of workers')
parser.add_argument('--save_client_model', type=str2bool, nargs='?', const=True, default=False,
help='whether to save locally trained model by clients (default: False')
parser.add_argument('--save_model', type=str2bool, nargs='?', const=True, default=False,
help='whether to save best averaged model (default: False')
parser.add_argument('--load_model', type=str2bool, nargs='?', const=True, default=False,
help='whether to load pre-trained model weights (default: False')
parser.add_argument('--model_path', type=str, default=None,
help='Pre-trained saved model path NOTE: --load has to be True for this argument to be considered')
parser.add_argument('--batch_size', type=int, default=10, metavar='N',
help='input batch size for training (default: 32)')
parser.add_argument('--sync_bn', type=str2bool, nargs='?', const=True, default=False,
help='whether to use sync bn (default: False)')
parser.add_argument('--freeze_bn', type=str2bool, nargs='?', const=True, default=False,
help='whether to freeze bn parameters (default: False)')
parser.add_argument('--client_optimizer', type=str, default='sgd',
help='adam')
parser.add_argument('--lr', type=float, default=0.001, metavar='LR',
help='learning rate (default: 0.001)')
parser.add_argument('--lr_scheduler', type=str, default='poly',
choices=['poly', 'step', 'cos'],
help='lr scheduler mode: (default: poly)')
parser.add_argument('--momentum', type=float, default=0.9,
metavar='M', help='momentum (default: 0.9)')
parser.add_argument('--weight_decay', type=float, default=5e-4,
metavar='M', help='w-decay (default: 5e-4)')
parser.add_argument('--nesterov', action='store_true', default=False,
help='whether use nesterov (default: False)')
parser.add_argument('--loss_type', type=str, default='ce',
choices=['ce', 'focal'],
help='loss func type (default: ce)')
parser.add_argument('--epochs', type=int, default=2, metavar='EP',
help='how many epochs will be trained locally')
parser.add_argument('--comm_round', type=int, default=200,
                        help='how many rounds of communication we should use')
parser.add_argument('--is_mobile', type=int, default=0,
help='whether the program is running on the FedML-Mobile server side')
parser.add_argument('--evaluation_frequency', type=int, default=5,
help='Frequency of model evaluation on training dataset (Default: every 5th round)')
    parser.add_argument('--gpu_server_num', type=int, default=1,
                        help='number of GPU servers')
    parser.add_argument('--gpu_num_per_server', type=int, default=4,
                        help='number of GPUs per server')
parser.add_argument('--gpu_mapping_file', type=str, default="gpu_mapping.yaml",
                        help='the gpu utilization file for servers and clients; if no gpu mapping file is given, gpu will not be used.')
parser.add_argument('--gpu_mapping_key', type=str, default="mapping_config1_5",
help='the key in gpu utilization file')
parser.add_argument('--image_size', type=int, default=512,
help='Specifies the input size of the model (transformations are applied to scale or crop the image)')
    parser.add_argument('--ci', type=int, default=0,
                        help='set to 1 when running under continuous integration (CI)')
args = parser.parse_args()
return args
def load_data(process_id, args, dataset_name):
data_loader = None
if dataset_name == "coco":
data_loader = load_partition_data_coco
elif dataset_name == "pascal_voc":
data_loader = load_partition_data_pascal_voc
elif dataset_name == 'cityscapes':
data_loader = load_partition_data_cityscapes
    elif dataset_name == 'electric':
        data_loader = load_partition_data_electric
    else:
        raise ValueError("Unsupported dataset: {}".format(dataset_name))
train_data_num, test_data_num, train_data_global, test_data_global, data_local_num_dict, \
train_data_local_dict, test_data_local_dict, class_num = data_loader(args)
dataset = [train_data_num, test_data_num, train_data_global, test_data_global, data_local_num_dict,
train_data_local_dict, test_data_local_dict, class_num]
return dataset
def create_model(args, model_name, output_dim, img_size):
print("Creating model")
kwargs = {
"trainable_backbone_layers": 5
}
    # 91 is torchvision's COCO default label space (80 used classes plus reserved ids and background)
    num_classes = 91
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        num_classes=num_classes, pretrained=False,
        **kwargs
    )
# model = DeepLabV3_plus(backbone=args.backbone,
# image_size=img_size,
# n_classes=output_dim,
# output_stride=args.outstride,
# pretrained=args.backbone_pretrained,
# freeze_bn=args.freeze_bn,
# sync_bn=args.sync_bn)
num_params = count_parameters(model)
logging.info("Fasterrcnn_resnet50_fpn Model Size : {}".format(num_params))
return model
def init_training_device(process_ID, fl_worker_num, gpu_num_per_machine, gpu_server_num):
# initialize the mapping from process ID to GPU ID: <process ID, GPU ID>
if process_ID == 0:
device = torch.device("cuda:" + str(gpu_server_num) if torch.cuda.is_available() else "cpu")
return device
process_gpu_dict = dict()
for client_index in range(fl_worker_num):
gpu_index = (client_index % gpu_num_per_machine)
process_gpu_dict[client_index] = gpu_index + gpu_server_num
device = torch.device("cuda:" + str(process_gpu_dict[process_ID - 1]) if torch.cuda.is_available() else "cpu")
logging.info('GPU process allocation {0}'.format(process_gpu_dict))
logging.info('GPU device available {0}'.format(device))
return device
if __name__ == "__main__":
# initialize distributed computing (MPI)
comm, process_id, worker_number = FedML_init()
# customize the log format
logging.basicConfig(filename='info.log',
level=logging.INFO,
format=str(
process_id) + ' - %(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
datefmt='%a, %d %b %Y %H:%M:%S')
now = datetime.datetime.now()
time_start = now.strftime("%Y-%m-%d %H:%M:%S")
logging.info("Executing Image Detection at time: {0}".format(time_start))
# parse python script input parameters
parser = argparse.ArgumentParser()
args = add_args(parser)
logging.info('Given arguments {0}'.format(args))
# customize the process name
str_process_name = args.process_name + str(process_id)
setproctitle.setproctitle(str_process_name)
hostname = socket.gethostname()
logging.info("Host and process details")
    logging.info(
        "MPI rank: {0}, host name: {1}, OS process ID: {2}, process name: {3}, worker number: {4}".format(
            process_id, hostname, os.getpid(), psutil.Process(os.getpid()), worker_number))
# initialize the wandb machine learning experimental tracking platform (https://www.wandb.com/).
#if process_id == 0:
# wandb.init(
# project="fedcv-detection",
# name=args.process_name + str(args.partition_method) + "r" + str(args.comm_round) + "-e" + str(
# args.epochs) + "-lr" + str(
# args.lr),
# config=args
# )
# Set the random seed. The np.random seed determines the dataset partition.
# The torch_manual_seed determines the initial weight.
# We fix these two, so that we can reproduce the result.
random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
    # GPU arrangement: please customize this function according to your own topology.
    # The GPU server list is configured in "mpi_host_file".
    # Suppose we have 4 machines with two GPUs each, and the FL network has 8 workers plus a central worker.
    # The 4 machines will be assigned as follows:
    # machine 1: worker0, worker4, worker8;
    # machine 2: worker1, worker5;
    # machine 3: worker2, worker6;
    # machine 4: worker3, worker7;
    # Therefore, workers are assigned round-robin according to the order of the machine list.
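    # In general the rule sketched above is a simple round-robin
    # (illustrative formula, assuming N machines): machine_of(worker_id) = worker_id % N.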
    device = mapping_processes_to_gpu_device_from_yaml_file(process_id, worker_number, args.gpu_mapping_file, args.gpu_mapping_key)
#device = init_training_device(process_id, worker_number - 1, args.gpu_num_per_server, args.gpu_server_num)
# load data
dataset = load_data(process_id, args, args.dataset)
[train_data_num, test_data_num, train_data_global, test_data_global, data_local_num_dict,
train_data_local_dict, test_data_local_dict, class_num] = dataset
# create model.
# Note if the model is DNN (e.g., ResNet), the training will be very slow.
# In this case, please use our FedML distributed version (./fedml_experiments/distributed_fedavg)
model = create_model(args, model_name=args.model, output_dim=class_num,
img_size=torch.Size([args.image_size, args.image_size]))
    if args.load_model:
        try:
            checkpoint = torch.load(args.model_path)
            model.load_state_dict(checkpoint['state_dict'])
        except Exception as e:
            # `raise "..."` is invalid in Python 3; raise a proper exception instead
            raise RuntimeError("Failed to load pre-trained model") from e
# define my own trainer
model_trainer = DetectionTrainer(model, args)
logging.info("Calling FedML_FedSeg_distributed")
FedML_FedDetec_distributed(process_id, worker_number, device, comm, model, train_data_num, data_local_num_dict,
train_data_local_dict, test_data_local_dict, args, model_trainer)
| 45.349254 | 154 | 0.635729 | 1,858 | 15,192 | 4.983315 | 0.213132 | 0.038881 | 0.073442 | 0.019009 | 0.255211 | 0.162869 | 0.139756 | 0.123987 | 0.095907 | 0.095907 | 0 | 0.009486 | 0.257504 | 15,192 | 334 | 155 | 45.48503 | 0.811348 | 0.155016 | 0 | 0.009852 | 0 | 0.014778 | 0.235437 | 0.028187 | 0 | 0 | 0 | 0 | 0 | 1 | 0.024631 | false | 0 | 0.098522 | 0 | 0.162562 | 0.004926 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94f1a97363b5f6d9cb9e80535531efb052a4d58d | 1,678 | py | Python | src/ship_detector/main.py | dzubke/ship_detector | 28212be681914ad739544f6b849152f502289ff3 | [
"MIT"
] | 1 | 2019-11-28T02:39:31.000Z | 2019-11-28T02:39:31.000Z | src/ship_detector/main.py | dzubke/ship_detector | 28212be681914ad739544f6b849152f502289ff3 | [
"MIT"
] | null | null | null | src/ship_detector/main.py | dzubke/ship_detector | 28212be681914ad739544f6b849152f502289ff3 | [
"MIT"
] | null | null | null | # standard libraries
import time
# non-standard libraries
from sklearn.linear_model import LogisticRegression
# all of the local modules
from prep_data import read_images, dataset_split
from explore_data import array_info, image_info
from models import run_model
from assess_model import count_time, roc_assess, F1score_assess
def main():
"""The place where I put all the 'glue-code' that calls all the various functions together
"""
dir_path =r'/Users/dustin/CS/projects/ship_detector/data/ships-in-satellite-imagery/shipsnet/'
data_array, label_array = read_images(dir_path)
array_info(data_array, label_array)
image_info(data_array[0,:], plot_image=False)
split_ratios = [0.8, 0.1, 0.1] #splitting the dataset into 80% train, 10% dev, 10% test
Xtrain, Xdev, Xtest, ytrain, ydev, ytest = dataset_split(data_array, label_array, split_ratios)
print(f"xtrain, xdev, xtest, ytrain, ydev, ytest shapes: {Xtrain.shape}, {Xdev.shape}, {Xtest.shape}, {ytrain.shape}, {ydev.shape} {ytest.shape} ")
print(type(LogisticRegression()))
model = LogisticRegression(solver='lbfgs')
model_fit = model.fit(Xtrain, ytrain)
    train_acc = model_fit.score(Xtrain, ytrain)
    test_acc = model_fit.score(Xtest, ytest)
print("Training Data Accuracy: %0.2f" %(train_acc))
print("Test Data Accuracy: %0.2f" %(test_acc))
roc_assess(model_fit, Xtest, ytest, print_values=True)
F1score_assess(model_fit, Xtest, ytest, print_values=True)
# run_model(logreg, Xtrain, Xdev, ytrain, ydev)
# output = count_time(run_model(Xtrain, Xdev, ytrain, ydev))
# print(output[0])
if __name__ == "__main__": main() | 29.438596 | 151 | 0.722884 | 241 | 1,678 | 4.813278 | 0.40249 | 0.041379 | 0.036207 | 0.049138 | 0.118966 | 0.118966 | 0.067241 | 0.067241 | 0 | 0 | 0 | 0.014215 | 0.161502 | 1,678 | 57 | 152 | 29.438596 | 0.810235 | 0.200834 | 0 | 0 | 0 | 0.083333 | 0.217293 | 0.060902 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.25 | 0 | 0.291667 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94f38c4de85964984884d82f381aa9b84ed0adc1 | 521 | py | Python | Leetcoding-Actions/Explore-Monthly-Challenges/2020-08/29-pancake-sorting.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | 1 | 2020-11-14T18:28:13.000Z | 2020-11-14T18:28:13.000Z | Leetcoding-Actions/Explore-Monthly-Challenges/2020-08/29-pancake-sorting.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | null | null | null | Leetcoding-Actions/Explore-Monthly-Challenges/2020-08/29-pancake-sorting.py | shoaibur/SWE | 1e114a2750f2df5d6c50b48c8e439224894d65da | [
"MIT"
] | null | null | null | class Solution:
def pancakeSort(self, A: List[int]) -> List[int]:
result = []
        def flip(idx):
            # Reverse the prefix A[0..idx] in place.
            for i in range(0, idx // 2 + 1):
                A[i], A[idx - i] = A[idx - i], A[i]

        # Repeatedly place the largest unsorted value (i + 1) at index i:
        # flip it to the front if needed, then flip it into position.
        for i in range(len(A) - 1, 0, -1):
            for j in range(1, i + 1):
                if A[j] == i + 1:
                    flip(j)
                    result.append(j + 1)
                    break
            flip(i)
            result.append(i + 1)
return result
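
# Hedged worked example (traced by hand; the problem accepts any valid flip sequence):
#   Solution().pancakeSort([3, 2, 4, 1]) returns [3, 4, 2, 3, 2], i.e.
#   [3,2,4,1] -k=3-> [4,2,3,1] -k=4-> [1,3,2,4] -k=2-> [3,1,2,4] -k=3-> [2,1,3,4] -k=2-> [1,2,3,4]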
| 28.944444 | 53 | 0.372361 | 70 | 521 | 2.771429 | 0.357143 | 0.108247 | 0.061856 | 0.113402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 0.493282 | 521 | 17 | 54 | 30.647059 | 0.693182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94f4ddecf6b729fadb8142b50b2738a3b687fb16 | 6,816 | py | Python | index_server/containers/bgsplit_mapper/handler.py | jeremyephron/forager | 6db1590686e0e34b2e42ff5deb70f62fcee73d7d | [
"MIT"
] | 1 | 2020-12-01T23:25:58.000Z | 2020-12-01T23:25:58.000Z | index_server/containers/bgsplit_mapper/handler.py | jeremyephron/forager | 6db1590686e0e34b2e42ff5deb70f62fcee73d7d | [
"MIT"
] | 2 | 2020-10-07T01:03:06.000Z | 2020-10-12T19:08:55.000Z | index_server/containers/bgsplit_mapper/handler.py | jeremyephron/forager | 6db1590686e0e34b2e42ff5deb70f62fcee73d7d | [
"MIT"
] | null | null | null | import numpy as np
import aiohttp
import asyncio
import os.path
from pathlib import Path
import torch
import torch.nn.functional as F
from torchvision import transforms, utils, io
from typing import Dict, List, Optional, Tuple, Union, Any
from enum import Enum
from knn import utils
from knn.mappers import Mapper
from knn.utils import JSONType
import config
from model import Model
class BGSplittingMapper(Mapper):
class ReturnType(Enum):
SAVE = 0
SERIALIZE = 1
def initialize_container(self):
# Create connection pool
self.session = aiohttp.ClientSession()
self.use_cuda = False
async def initialize_job(self, job_args):
return_type = job_args.get("return_type", "serialize")
if return_type == "save":
job_args["return_type"] = BGSplittingMapper.ReturnType.SAVE
elif return_type == "serialize":
job_args["return_type"] = BGSplittingMapper.ReturnType.SERIALIZE
else:
raise ValueError(f"Unknown return type: {return_type}")
# Get checkpoint data
if job_args["checkpoint_path"] == 'TEST':
model = Model(num_main_classes=2, num_aux_classes=1)
else:
map_location = torch.device('cuda') if self.use_cuda else torch.device('cpu')
checkpoint_state = torch.load(job_args["checkpoint_path"],
map_location=map_location)
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in checkpoint_state['state_dict'].items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
if 'model_kwargs' in checkpoint_state:
kwargs = checkpoint_state['model_kwargs']
num_aux_classes = kwargs['num_aux_classes']
else:
num_aux_classes = 1
# Create model
model = Model(num_main_classes=2, num_aux_classes=num_aux_classes)
# Load model weights
model.load_state_dict(new_state_dict)
model.eval()
if self.use_cuda:
model = model.cuda()
job_args["model"] = model
job_args["transform"] = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ConvertImageDtype(torch.float32),
            # standard ImageNet channel means / stds
            transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])
])
job_args["n_chunks_saved"] = 0
return job_args
@utils.log_exception_from_coro_but_return_none
async def process_chunk(
self, chunk: List[JSONType], job_id: str, job_args: Any, request_id: str
) -> Tuple[np.ndarray, np.ndarray]:
image_paths = [c["path"] for c in chunk]
# Download images
if "http" not in image_paths[0]:
image_bucket = job_args["input_bucket"]
image_paths = [
os.path.join(config.GCS_URL_PREFIX, image_bucket, image_path)
for image_path in image_paths]
transform = job_args["transform"]
async def download_transform(image_path):
return await self.transform_image(
await self.download_image(image_path),
transform=transform)
with self.profiler(request_id, "download_time"):
input_images = await asyncio.gather(
*[
download_transform(image_path)
for image_path in image_paths
])
# Run inference
model = job_args["model"]
with self.profiler(request_id, "inference_time"):
image_batch = torch.stack(input_images)
if self.use_cuda:
image_batch = image_batch.cuda()
embeddings = model.forward_backbone(image_batch)
scores = F.softmax(model.main_head(embeddings), dim=1)[:, 1]
return (embeddings.detach().cpu().numpy(),
scores.detach().cpu().numpy())
async def download_image(
self, image_path: str, num_retries: int = config.DOWNLOAD_NUM_RETRIES
) -> bytes:
for i in range(num_retries + 1):
try:
async with self.session.get(image_path) as response:
assert response.status == 200
return await response.read()
except Exception:
if i < num_retries:
await asyncio.sleep(2 ** i)
else:
raise
assert False # unreachable
async def transform_image(
self, image_bytes: bytes, transform,
) -> torch.Tensor:
data = torch.tensor(
list(image_bytes),
dtype=torch.uint8)
image = io.decode_image(data, mode=io.image.ImageReadMode.RGB)
return transform(image)
async def postprocess_chunk(
self,
inputs,
outputs: Tuple[np.ndarray, np.ndarray],
job_id,
job_args,
request_id,
) -> Union[Tuple[str, List[Optional[int]]],
Tuple[None, List[Optional[str]]]]:
if job_args["return_type"] == BGSplittingMapper.ReturnType.SAVE:
with self.profiler(request_id, "save_time"):
data_path_tmpl = config.DATA_FILE_TMPL.format(
job_id, self.worker_id, job_args["n_chunks_saved"]
)
job_args["n_chunks_saved"] += 1
Path(data_path_tmpl).parent.mkdir(parents=True, exist_ok=True)
                # np.int is a removed NumPy alias; use an explicit integer dtype
                data = {'ids': np.array([inp['id'] for inp in inputs], dtype=np.int64),
'embeddings': outputs[0],
'scores': outputs[1]}
np.save(data_path_tmpl.format(None), data)
return data_path_tmpl.format(None), [
len(output) if output is not None else None for output in outputs[0]
]
else:
with self.profiler(request_id, "reduce_time"):
reduce_fn = config.REDUCTIONS[job_args.get("reduction")]
reduced_outputs = [
reduce_fn(output) if output is not None else None
for output in outputs
]
with self.profiler(request_id, "serialize_time"):
serialized_outputs = [
utils.numpy_to_base64(output) if output is not None else None
for output in reduced_outputs
]
return None, serialized_outputs
async def process_element(
self,
input: JSONType,
job_id: str,
job_args: Any,
request_id: str,
element_index: int,
) -> Any:
pass
app = BGSplittingMapper().server
| 36.063492 | 89 | 0.577171 | 775 | 6,816 | 4.865806 | 0.261935 | 0.037125 | 0.020684 | 0.030496 | 0.20419 | 0.13206 | 0.120392 | 0.094935 | 0.077433 | 0.057279 | 0 | 0.012093 | 0.332746 | 6,816 | 188 | 90 | 36.255319 | 0.817062 | 0.019513 | 0 | 0.069182 | 0 | 0 | 0.055589 | 0 | 0 | 0 | 0 | 0 | 0.012579 | 1 | 0.006289 | false | 0.006289 | 0.100629 | 0 | 0.163522 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94f52a5e553ca733b3138d1b081bb226e35c66cc | 17,794 | py | Python | generated-libraries/python/netapp/volume/volume_attributes.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | 2 | 2017-03-28T15:31:26.000Z | 2018-08-16T22:15:18.000Z | generated-libraries/python/netapp/volume/volume_attributes.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | generated-libraries/python/netapp/volume/volume_attributes.py | radekg/netapp-ontap-lib-get | 6445ebb071ec147ea82a486fbe9f094c56c5c40d | [
"MIT"
] | null | null | null | from netapp.volume.volume_hybrid_cache_attributes import VolumeHybridCacheAttributes
from netapp.volume.volume_mirror_attributes import VolumeMirrorAttributes
from netapp.volume.volume_space_attributes import VolumeSpaceAttributes
from netapp.volume.volume_directory_attributes import VolumeDirectoryAttributes
from netapp.volume.volume_state_attributes import VolumeStateAttributes
from netapp.volume.volume_autosize_attributes import VolumeAutosizeAttributes
from netapp.volume.volume_flexcache_attributes import VolumeFlexcacheAttributes
from netapp.volume.volume_id_attributes import VolumeIdAttributes
from netapp.volume.volume_antivirus_attributes import VolumeAntivirusAttributes
from netapp.volume.volume_qos_attributes import VolumeQosAttributes
from netapp.volume.volume_transition_attributes import VolumeTransitionAttributes
from netapp.volume.volume_snapshot_attributes import VolumeSnapshotAttributes
from netapp.volume.volume_language_attributes import VolumeLanguageAttributes
from netapp.volume.volume_security_attributes import VolumeSecurityAttributes
from netapp.volume.volume_sis_attributes import VolumeSisAttributes
from netapp.volume.volume_performance_attributes import VolumePerformanceAttributes
from netapp.volume.volume_inode_attributes import VolumeInodeAttributes
from netapp.volume.volume_snapshot_autodelete_attributes import VolumeSnapshotAutodeleteAttributes
from netapp.volume.volume_vm_align_attributes import VolumeVmAlignAttributes
from netapp.volume.volume_64bit_upgrade_attributes import Volume64BitUpgradeAttributes
from netapp.volume.volume_clone_attributes import VolumeCloneAttributes
from netapp.volume.volume_infinitevol_attributes import VolumeInfinitevolAttributes
from netapp.volume.volume_export_attributes import VolumeExportAttributes
from netapp.netapp_object import NetAppObject
class VolumeAttributes(NetAppObject):
"""
Attributes of a volume.
When returned as part of the output, all elements of this typedef
are reported, unless limited by a set of desired attributes
specified by the caller.
<p>
When used as input to specify desired attributes to return,
omitting a given element indicates that it shall not be returned
in the output. In contrast, by providing an element (even with
no value) the caller ensures that a value for that element will
be returned, given that the value can be retrieved.
<p>
When used as input to specify queries, any element can be omitted
in which case the resulting set of objects is not constrained by
any specific value of that attribute.
"""
_volume_hybrid_cache_attributes = None
@property
def volume_hybrid_cache_attributes(self):
"""
This field contains information on Flash Pool caching
attributes on a volume
"""
return self._volume_hybrid_cache_attributes
@volume_hybrid_cache_attributes.setter
def volume_hybrid_cache_attributes(self, val):
if val != None:
self.validate('volume_hybrid_cache_attributes', val)
self._volume_hybrid_cache_attributes = val
_volume_mirror_attributes = None
@property
def volume_mirror_attributes(self):
"""
This field contains information applying exclusive to
volume mirror.
"""
return self._volume_mirror_attributes
@volume_mirror_attributes.setter
def volume_mirror_attributes(self, val):
if val != None:
self.validate('volume_mirror_attributes', val)
self._volume_mirror_attributes = val
_volume_space_attributes = None
@property
def volume_space_attributes(self):
"""
This field contains information related to volume disk
space management including on-disk layout.
"""
return self._volume_space_attributes
@volume_space_attributes.setter
def volume_space_attributes(self, val):
if val != None:
self.validate('volume_space_attributes', val)
self._volume_space_attributes = val
_volume_directory_attributes = None
@property
def volume_directory_attributes(self):
"""
This field contains information related to directories in
a volume.
"""
return self._volume_directory_attributes
@volume_directory_attributes.setter
def volume_directory_attributes(self, val):
if val != None:
self.validate('volume_directory_attributes', val)
self._volume_directory_attributes = val
_volume_state_attributes = None
@property
def volume_state_attributes(self):
"""
This field contains information about the state or status
of a volume or its features.
"""
return self._volume_state_attributes
@volume_state_attributes.setter
def volume_state_attributes(self, val):
if val != None:
self.validate('volume_state_attributes', val)
self._volume_state_attributes = val
_volume_autosize_attributes = None
@property
def volume_autosize_attributes(self):
"""
This field contains information about the autosize
settings of the volume.
"""
return self._volume_autosize_attributes
@volume_autosize_attributes.setter
def volume_autosize_attributes(self, val):
if val != None:
self.validate('volume_autosize_attributes', val)
self._volume_autosize_attributes = val
_volume_flexcache_attributes = None
@property
def volume_flexcache_attributes(self):
"""
This field contains information applying exclusively to
flexcache volumes.
"""
return self._volume_flexcache_attributes
@volume_flexcache_attributes.setter
def volume_flexcache_attributes(self, val):
if val != None:
self.validate('volume_flexcache_attributes', val)
self._volume_flexcache_attributes = val
_volume_id_attributes = None
@property
def volume_id_attributes(self):
"""
This field contains identification information about the
volume.
"""
return self._volume_id_attributes
@volume_id_attributes.setter
def volume_id_attributes(self, val):
if val != None:
self.validate('volume_id_attributes', val)
self._volume_id_attributes = val
_volume_antivirus_attributes = None
@property
def volume_antivirus_attributes(self):
"""
This field contains information about Antivirus On-Access
settings for the volume.
"""
return self._volume_antivirus_attributes
@volume_antivirus_attributes.setter
def volume_antivirus_attributes(self, val):
if val != None:
self.validate('volume_antivirus_attributes', val)
self._volume_antivirus_attributes = val
_volume_qos_attributes = None
@property
def volume_qos_attributes(self):
"""
This field contains the information that relates to QoS.
"""
return self._volume_qos_attributes
@volume_qos_attributes.setter
def volume_qos_attributes(self, val):
if val != None:
self.validate('volume_qos_attributes', val)
self._volume_qos_attributes = val
_volume_transition_attributes = None
@property
def volume_transition_attributes(self):
"""
This field contains information applying exclusively to
transitioned or transitioning volumes.
"""
return self._volume_transition_attributes
@volume_transition_attributes.setter
def volume_transition_attributes(self, val):
if val != None:
self.validate('volume_transition_attributes', val)
self._volume_transition_attributes = val
_volume_snapshot_attributes = None
@property
def volume_snapshot_attributes(self):
"""
This field contains information applying exclusively to
all the snapshots in the volume. Volume disk
space-related settings are excluded.
"""
return self._volume_snapshot_attributes
@volume_snapshot_attributes.setter
def volume_snapshot_attributes(self, val):
if val != None:
self.validate('volume_snapshot_attributes', val)
self._volume_snapshot_attributes = val
_volume_language_attributes = None
@property
def volume_language_attributes(self):
"""
This field contains information about volume
language-related settings.
"""
return self._volume_language_attributes
@volume_language_attributes.setter
def volume_language_attributes(self, val):
if val != None:
self.validate('volume_language_attributes', val)
self._volume_language_attributes = val
_volume_security_attributes = None
@property
def volume_security_attributes(self):
"""
This field contains information about volume security
settings.
"""
return self._volume_security_attributes
@volume_security_attributes.setter
def volume_security_attributes(self, val):
if val != None:
self.validate('volume_security_attributes', val)
self._volume_security_attributes = val
_volume_sis_attributes = None
@property
def volume_sis_attributes(self):
"""
This field contains information about Deduplication, file
clone, compression, etc.
"""
return self._volume_sis_attributes
@volume_sis_attributes.setter
def volume_sis_attributes(self, val):
if val != None:
self.validate('volume_sis_attributes', val)
self._volume_sis_attributes = val
_volume_performance_attributes = None
@property
def volume_performance_attributes(self):
"""
This field contains information that relates to the
performance of the volume.
"""
return self._volume_performance_attributes
@volume_performance_attributes.setter
def volume_performance_attributes(self, val):
if val != None:
self.validate('volume_performance_attributes', val)
self._volume_performance_attributes = val
_volume_inode_attributes = None
@property
def volume_inode_attributes(self):
"""
This field contains information about inodes in a
volume.
"""
return self._volume_inode_attributes
@volume_inode_attributes.setter
def volume_inode_attributes(self, val):
if val != None:
self.validate('volume_inode_attributes', val)
self._volume_inode_attributes = val
_volume_snapshot_autodelete_attributes = None
@property
def volume_snapshot_autodelete_attributes(self):
"""
This field contains information about snapshot autodelete
policy settings.
"""
return self._volume_snapshot_autodelete_attributes
@volume_snapshot_autodelete_attributes.setter
def volume_snapshot_autodelete_attributes(self, val):
if val != None:
self.validate('volume_snapshot_autodelete_attributes', val)
self._volume_snapshot_autodelete_attributes = val
_volume_vm_align_attributes = None
@property
def volume_vm_align_attributes(self):
"""
This field contains information related to the Virtual
Machine alignment settings on a volume
"""
return self._volume_vm_align_attributes
@volume_vm_align_attributes.setter
def volume_vm_align_attributes(self, val):
if val != None:
self.validate('volume_vm_align_attributes', val)
self._volume_vm_align_attributes = val
_volume_64bit_upgrade_attributes = None
@property
def volume_64bit_upgrade_attributes(self):
"""
Information related to 64-bit upgrade. After 64-bit
upgrade completes, this information is no longer
available.
"""
return self._volume_64bit_upgrade_attributes
@volume_64bit_upgrade_attributes.setter
def volume_64bit_upgrade_attributes(self, val):
if val != None:
self.validate('volume_64bit_upgrade_attributes', val)
self._volume_64bit_upgrade_attributes = val
_volume_clone_attributes = None
@property
def volume_clone_attributes(self):
"""
This field contains information applying exclusively to
clone volumes.
"""
return self._volume_clone_attributes
@volume_clone_attributes.setter
def volume_clone_attributes(self, val):
if val != None:
self.validate('volume_clone_attributes', val)
self._volume_clone_attributes = val
_volume_infinitevol_attributes = None
@property
def volume_infinitevol_attributes(self):
"""
This field contains information about the state of an
Infinite Volume.
"""
return self._volume_infinitevol_attributes
@volume_infinitevol_attributes.setter
def volume_infinitevol_attributes(self, val):
if val != None:
self.validate('volume_infinitevol_attributes', val)
self._volume_infinitevol_attributes = val
_volume_export_attributes = None
@property
def volume_export_attributes(self):
"""
This field contains information about export settings of
the volume.
"""
return self._volume_export_attributes
@volume_export_attributes.setter
def volume_export_attributes(self, val):
if val != None:
self.validate('volume_export_attributes', val)
self._volume_export_attributes = val
@staticmethod
def get_api_name():
return "volume-attributes"
@staticmethod
def get_desired_attrs():
return [
'volume-hybrid-cache-attributes',
'volume-mirror-attributes',
'volume-space-attributes',
'volume-directory-attributes',
'volume-state-attributes',
'volume-autosize-attributes',
'volume-flexcache-attributes',
'volume-id-attributes',
'volume-antivirus-attributes',
'volume-qos-attributes',
'volume-transition-attributes',
'volume-snapshot-attributes',
'volume-language-attributes',
'volume-security-attributes',
'volume-sis-attributes',
'volume-performance-attributes',
'volume-inode-attributes',
'volume-snapshot-autodelete-attributes',
'volume-vm-align-attributes',
'volume-64bit-upgrade-attributes',
'volume-clone-attributes',
'volume-infinitevol-attributes',
'volume-export-attributes',
]
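
    # Hedged usage sketch (illustrative, not part of the generated code): to request
    # only space information for volumes, populate just that child on the typedef,
    #   desired = VolumeAttributes()
    #   desired.volume_space_attributes = VolumeSpaceAttributes()
    # and pass it as the desired-attributes argument of a volume-get-iter style call.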
def describe_properties(self):
return {
'volume_hybrid_cache_attributes': { 'class': VolumeHybridCacheAttributes, 'is_list': False, 'required': 'optional' },
'volume_mirror_attributes': { 'class': VolumeMirrorAttributes, 'is_list': False, 'required': 'optional' },
'volume_space_attributes': { 'class': VolumeSpaceAttributes, 'is_list': False, 'required': 'optional' },
'volume_directory_attributes': { 'class': VolumeDirectoryAttributes, 'is_list': False, 'required': 'optional' },
'volume_state_attributes': { 'class': VolumeStateAttributes, 'is_list': False, 'required': 'optional' },
'volume_autosize_attributes': { 'class': VolumeAutosizeAttributes, 'is_list': False, 'required': 'optional' },
'volume_flexcache_attributes': { 'class': VolumeFlexcacheAttributes, 'is_list': False, 'required': 'optional' },
'volume_id_attributes': { 'class': VolumeIdAttributes, 'is_list': False, 'required': 'optional' },
'volume_antivirus_attributes': { 'class': VolumeAntivirusAttributes, 'is_list': False, 'required': 'optional' },
'volume_qos_attributes': { 'class': VolumeQosAttributes, 'is_list': False, 'required': 'optional' },
'volume_transition_attributes': { 'class': VolumeTransitionAttributes, 'is_list': False, 'required': 'optional' },
'volume_snapshot_attributes': { 'class': VolumeSnapshotAttributes, 'is_list': False, 'required': 'optional' },
'volume_language_attributes': { 'class': VolumeLanguageAttributes, 'is_list': False, 'required': 'optional' },
'volume_security_attributes': { 'class': VolumeSecurityAttributes, 'is_list': False, 'required': 'optional' },
'volume_sis_attributes': { 'class': VolumeSisAttributes, 'is_list': False, 'required': 'optional' },
'volume_performance_attributes': { 'class': VolumePerformanceAttributes, 'is_list': False, 'required': 'optional' },
'volume_inode_attributes': { 'class': VolumeInodeAttributes, 'is_list': False, 'required': 'optional' },
'volume_snapshot_autodelete_attributes': { 'class': VolumeSnapshotAutodeleteAttributes, 'is_list': False, 'required': 'optional' },
'volume_vm_align_attributes': { 'class': VolumeVmAlignAttributes, 'is_list': False, 'required': 'optional' },
'volume_64bit_upgrade_attributes': { 'class': Volume64BitUpgradeAttributes, 'is_list': False, 'required': 'optional' },
'volume_clone_attributes': { 'class': VolumeCloneAttributes, 'is_list': False, 'required': 'optional' },
'volume_infinitevol_attributes': { 'class': VolumeInfinitevolAttributes, 'is_list': False, 'required': 'optional' },
'volume_export_attributes': { 'class': VolumeExportAttributes, 'is_list': False, 'required': 'optional' },
}
| 41.868235 | 143 | 0.694785 | 1,821 | 17,794 | 6.488193 | 0.100494 | 0.03504 | 0.031147 | 0.042827 | 0.440542 | 0.279306 | 0.185019 | 0.1438 | 0.117647 | 0.008802 | 0 | 0.002042 | 0.229516 | 17,794 | 424 | 144 | 41.966981 | 0.859737 | 0.139991 | 0 | 0.166667 | 0 | 0 | 0.16894 | 0.119264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170139 | false | 0 | 0.083333 | 0.010417 | 0.427083 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94f9d7fbfdb6b3a2de5dcc04f68b6e728b2be5ff | 4,517 | py | Python | bin/plot_matches.py | hidden-ar/OpenSfM | 3ea1216d4dedc94b93ea9f7aa51cd8efd7377922 | [
"BSD-2-Clause"
] | null | null | null | bin/plot_matches.py | hidden-ar/OpenSfM | 3ea1216d4dedc94b93ea9f7aa51cd8efd7377922 | [
"BSD-2-Clause"
] | null | null | null | bin/plot_matches.py | hidden-ar/OpenSfM | 3ea1216d4dedc94b93ea9f7aa51cd8efd7377922 | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python3
import argparse
import os.path
from itertools import combinations
import matplotlib.cm as cm
import matplotlib.pyplot as pl
import numpy as np
from opensfm import dataset
from opensfm import features
from opensfm import io
from numpy import ndarray
from typing import List
def plot_matches(im1, im2, p1: ndarray, p2: ndarray) -> None:
h1, w1, c = im1.shape
h2, w2, c = im2.shape
image = np.zeros((max(h1, h2), w1 + w2, 3), dtype=im1.dtype)
image[0:h1, 0:w1, :] = im1
image[0:h2, w1 : (w1 + w2), :] = im2
p1 = features.denormalized_image_coordinates(p1, w1, h1)
p2 = features.denormalized_image_coordinates(p2, w2, h2)
pl.imshow(image)
for a, b in zip(p1, p2):
pl.plot([a[0], b[0] + w1], [a[1], b[1]], "c")
pl.plot(p1[:, 0], p1[:, 1], "ob")
pl.plot(p2[:, 0] + w1, p2[:, 1], "ob")
def plot_graph(data) -> None:
cmap = cm.get_cmap("viridis")
connectivity = {}
    # iterate over all images in the dataset (avoids relying on the module-level `images`)
    for im1 in data.images():
for im2, matches in data.load_matches(im1).items():
if len(matches) == 0:
continue
connectivity[tuple(sorted([im1, im2]))] = len(matches)
all_values = connectivity.values()
lowest = np.percentile(list(all_values), 5)
highest = np.percentile(list(all_values), 95)
exifs = {im: data.load_exif(im) for im in data.images()}
reference = data.load_reference()
for (node1, node2), edge in sorted(connectivity.items(), key=lambda x: x[1]):
gps1 = exifs[node1]["gps"]
o1 = np.array(
reference.to_topocentric(gps1["latitude"], gps1["longitude"], 0)[:2]
)
gps2 = exifs[node2]["gps"]
o2 = np.array(
reference.to_topocentric(gps2["latitude"], gps2["longitude"], 0)[:2]
)
c = max(0, min(1.0, 1 - (edge - lowest) / (highest - lowest)))
pl.plot([o1[0], o2[0]], [o1[1], o2[1]], linestyle="-", color=cmap(c))
for node in data.images():
gps = exifs[node]["gps"]
o = np.array(reference.to_topocentric(gps["latitude"], gps["longitude"], 0)[:2])
c = 0
pl.plot(o[0], o[1], linestyle="", marker="o", color=cmap(c))
pl.xticks([])
pl.yticks([])
ax = pl.gca()
for b in ["top", "bottom", "left", "right"]:
ax.spines[b].set_visible(False)
pl.savefig(os.path.join(data.data_path, "matchgraph.png"))
def plot_matches_for_images(data, image, images) -> None:
    # Note: `images` is the raw --images CLI string; this function also reads the
    # module-level `args`, so it is meant to be called from the __main__ block below.
    if image:
        pairs = [(image, o) for o in data.images() if o != image]
    elif images:
        subset = images.split(",")
        pairs = combinations(subset, 2)
    else:
        pairs = combinations(data.images(), 2)
i = 0
for im1, im2 in pairs:
matches = data.find_matches(im1, im2)
if len(matches) == 0:
continue
print("plotting {} matches between {} {}".format(len(matches), im1, im2))
features_data1 = data.load_features(im1)
features_data2 = data.load_features(im2)
assert features_data1
assert features_data2
p1 = features_data1.points[matches[:, 0]]
p2 = features_data2.points[matches[:, 1]]
pl.figure(figsize=(20, 10))
pl.title("Images: " + im1 + " - " + im2 + ", matches: " + str(matches.shape[0]))
plot_matches(data.load_image(im1), data.load_image(im2), p1, p2)
i += 1
if args.save_figs:
p = os.path.join(args.dataset, "plot_tracks")
io.mkdir_p(p)
pl.savefig(os.path.join(p, "{}_{}.jpg".format(im1, im2)), dpi=100)
pl.close()
else:
if i >= 10:
i = 0
pl.show()
if not args.save_figs and i > 0:
pl.show()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Plot matches between images")
parser.add_argument("dataset", help="path to the dataset to be processed")
parser.add_argument("--image", help="show tracks for a specific")
parser.add_argument(
"--images", help="show tracks between a subset of images (separated by commas)"
)
parser.add_argument("--graph", help="display image graph", action="store_true")
parser.add_argument(
"--save_figs", help="save figures instead of showing them", action="store_true"
)
args: argparse.Namespace = parser.parse_args()
data = dataset.DataSet(args.dataset)
images: List[str] = data.images()
if args.graph:
plot_graph(data)
else:
plot_matches_for_images(data, args.image, args.images)
| 33.213235 | 88 | 0.593314 | 624 | 4,517 | 4.200321 | 0.286859 | 0.016024 | 0.03243 | 0.020603 | 0.108737 | 0 | 0 | 0 | 0 | 0 | 0 | 0.040816 | 0.251494 | 4,517 | 135 | 89 | 33.459259 | 0.734398 | 0.004649 | 0 | 0.116071 | 0 | 0 | 0.100779 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 1 | 0.026786 | false | 0 | 0.098214 | 0 | 0.125 | 0.008929 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94fa2681b04226b78da641bf14ffbff25cef7b0f | 3,738 | py | Python | sarabande/.ipynb_checkpoints/main-checkpoint.py | James11222/sarabande | 387cafede311be8c7069e5ae6fe3cf42c198ccda | [
"MIT"
] | null | null | null | sarabande/.ipynb_checkpoints/main-checkpoint.py | James11222/sarabande | 387cafede311be8c7069e5ae6fe3cf42c198ccda | [
"MIT"
] | null | null | null | sarabande/.ipynb_checkpoints/main-checkpoint.py | James11222/sarabande | 387cafede311be8c7069e5ae6fe3cf42c198ccda | [
"MIT"
] | null | null | null | import numpy as np
from subprocess import call
import astropy.io.fits as pyf
import time
from .utils import *
class measure:
def __init__(self, nPCF=4, projected=False, m_max=None, density_field_data = None, save_dir=None, save_name=None, ell_max=5,
nbins=4, bin_spacing='LIN',bin_min=1, physical_boxsize = None, rmin = None, rmax = None):
"""
This class allows us to measure the 3/4pcf from some input data field
"""
self.ell_max = ell_max
self.eps = 1e-15
self.nbins = nbins
self.projected = projected
self.ld_one_d = np.shape(density_field_data)[0]
self.bin_min = bin_min-1e-5
self.bin_max = (self.ld_one_d // 2) + 1e-5
####################################
# Initialization Case Handling
####################################
if nPCF == 3 or nPCF == 4:
self.nPCF = nPCF
if self.projected:
if m_max is None:
raise ValueError("You need to provide an m_max you would like to compute up to.")
else:
self.m_max = m_max
else:
raise ValueError("Sarabande only calculates 3 or 4 point correlation functions. Please give an integer 3 or 4.")
        # Compare each argument to None explicitly: `x or y is not None` binds as
        # `x or (y is not None)` and silently misfires for falsy values like 0.
        if (physical_boxsize is not None) or (rmin is not None) or (rmax is not None):
            if (physical_boxsize is not None) and (rmin is not None) and (rmax is not None):
                self.bin_min = (rmin / physical_boxsize) * self.ld_one_d - 1e-5
                self.bin_max = (rmax / physical_boxsize) * self.ld_one_d + 1e-5
            else:
                raise AssertionError("""If you want to use physical scales, you need to give physical_boxsize, rmin, and rmax""")
if bin_spacing == 'LIN' or bin_spacing == 'INV' or bin_spacing == 'LOG':
#We can toggle what binning we want to use using the bin_spacing argument
switch = {
'LIN' : np.linspace(self.bin_min, self.bin_max, self.nbins+1),
'INV' : 1./np.linspace(1./self.bin_min, 1./self.bin_max, self.nbins+1),
'LOG' : np.exp(np.linspace(np.log(self.bin_min), np.log(self.bin_max), self.nbins+1))}
else:
raise ValueError("""Please put a valid bin_spacing argument, acceptable options are: \n LIN \n INV \n LOG \n in string format.""")
self.bin_edges = switch[bin_spacing]
# self.ld_one_d = ld_one_d
if density_field_data is not None:
if len(np.shape(density_field_data)) == 3 and self.projected == True:
raise AssertionError("""Projected 3/4 PCFs can only be computed on a 2D data set, use full 3/4 PCFs for 3D data sets.""")
elif len(np.shape(density_field_data)) == 2 and self.projected == False:
raise AssertionError("""Projected 3/4 PCFs can only be computed on a 2D data set, use full 3/4 PCFs for 3D data sets.""")
else:
self.density_field_data = density_field_data
else:
if self.projected == True:
raise ValueError("Please include a density_field_data argument. Should be a density sheet in the form of a numpy array")
else:
raise ValueError("Please include a density_field_data argument. Should be a density cube in the form of a numpy array")
if save_name is not None:
self.save_name = save_name
else:
raise ValueError("Please include a save_name argument")
if save_dir is not None:
self.save_dir = save_dir
else:
raise ValueError("Please include a save_dir argument")
| 46.148148 | 142 | 0.578384 | 522 | 3,738 | 3.992337 | 0.260536 | 0.036948 | 0.069098 | 0.023992 | 0.329175 | 0.290787 | 0.235125 | 0.178503 | 0.151631 | 0.151631 | 0 | 0.017724 | 0.32076 | 3,738 | 81 | 143 | 46.148148 | 0.803072 | 0.05297 | 0 | 0.189655 | 0 | 0.051724 | 0.237874 | 0 | 0 | 0 | 0 | 0 | 0.051724 | 1 | 0.017241 | false | 0 | 0.086207 | 0 | 0.12069 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
94fcd28600d0a6e91aed228789715089ef0fff56 | 3,624 | py | Python | PRM_Django_REST_API/projects/models.py | maciejKusy/Project_Risk_Management_Django_REST_API | f654a3dedbe395d5ad80a3572e22bc0502326a48 | [
"MIT"
] | null | null | null | PRM_Django_REST_API/projects/models.py | maciejKusy/Project_Risk_Management_Django_REST_API | f654a3dedbe395d5ad80a3572e22bc0502326a48 | [
"MIT"
] | null | null | null | PRM_Django_REST_API/projects/models.py | maciejKusy/Project_Risk_Management_Django_REST_API | f654a3dedbe395d5ad80a3572e22bc0502326a48 | [
"MIT"
] | null | null | null | from django.db import models
from django.utils.datetime_safe import datetime
from simple_history.models import HistoricalRecords
class Project(models.Model):
"""
Used to house information pertaining to a particular project. The below fields are the bare minimum - additional
relevant fields can be added.
"""
name = models.CharField(max_length=100, blank=False)
description = models.TextField(max_length=300, blank=False)
users_assigned = models.ManyToManyField(
"users.Profile", related_name="projects", blank=True
)
def __str__(self):
return self.name
class Risk(models.Model):
"""
Used to create records describing particular Project Risks.
"""
class Background(models.TextChoices):
FINANCE = "1", "Finance"
OPERATIONS = "2", "Operations"
STAFFING = "3", "Staffing"
class Priority(models.TextChoices):
LOW = "1", "Low"
MEDIUM = "2", "Medium"
HIGH = "3", "High"
class Probability(models.TextChoices):
ZERO_PERCENT = "0", "0%"
TEN_PERCENT = "1", "10%"
TWENTY_PERCENT = "2", "20%"
THIRTY_PERCENT = "3", "30%"
FORTY_PERCENT = "4", "40%"
FIFTY_PERCENT = "5", "50%"
SIXTY_PERCENT = "6", "60%"
SEVENTY_PERCENT = "7", "70%"
EIGHTY_PERCENT = "8", "80%"
NINETY_PERCENT = "9", "90%"
HUNDRED_PERCENT = "10", "100%"
name = models.CharField(max_length=100, blank=False)
project = models.ForeignKey(
Project, on_delete=models.CASCADE, related_name="risks", blank=False
)
background = models.CharField(
max_length=50, choices=Background.choices, blank=False
)
priority = models.CharField(max_length=2, choices=Priority.choices, blank=False)
probability_percentage = models.CharField(
max_length=2, choices=Probability.choices, blank=False
)
resolvers_assigned = models.ManyToManyField(
"users.Profile", related_name="resolvers_assigned", blank=True
)
change_history = HistoricalRecords()
def __str__(self):
return self.name
@property
def get_change_history(self) -> list:
"""
Retrieves information from the historical records of a Model and presents them in the form of
:return: a list of changes applied to a Risk object
"""
history = self.change_history.all().values()
changes_list = list(history)
irrelevant_changes = ["history_id", "history_date", "history_type"]
changes_descriptions = list()
        for index, change in enumerate(changes_list):
            if index == 0:
                continue
            for key, value in change.items():
                if key in irrelevant_changes:
                    continue
                if changes_list[index - 1][key] != changes_list[index][key]:
                    new_value = changes_list[index - 1][key]
                    old_value = changes_list[index][key]
                    timestamp = datetime.strftime(
                        changes_list[index]["history_date"], "%d-%m-%Y, %H:%M:%S"
                    )
                    changes_descriptions.append({
                        "change": {
                            "field_changed": key,
                            "old_value": old_value,
                            "new_value": new_value,
                            "changed_on": timestamp,
                        }
                    })
return changes_descriptions
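
# Hedged usage sketch (illustrative; field values are hypothetical):
#   risk = Risk.objects.first()
#   risk.get_change_history
#   # -> [{"change": {"field_changed": "priority", "old_value": "3",
#   #                 "new_value": "2", "changed_on": "01-01-2024, 12:00:00"}}, ...]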
| 37.360825 | 116 | 0.564018 | 373 | 3,624 | 5.310992 | 0.396783 | 0.035336 | 0.045432 | 0.060575 | 0.170621 | 0.150429 | 0.093892 | 0.041393 | 0 | 0 | 0 | 0.023112 | 0.331402 | 3,624 | 96 | 117 | 37.75 | 0.79447 | 0.096026 | 0 | 0.08 | 0 | 0 | 0.080062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.04 | 0.026667 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a20244ac84e009282501e8057bf8b16d29cf894f | 2,490 | py | Python | aries_cloudagent/vc/tests/document_loader.py | rbeltranmontijo/aries-python | 77f6e0dd2f98cb70c2a17a1c72b729f0766f8f61 | [
"Apache-2.0"
] | null | null | null | aries_cloudagent/vc/tests/document_loader.py | rbeltranmontijo/aries-python | 77f6e0dd2f98cb70c2a17a1c72b729f0766f8f61 | [
"Apache-2.0"
] | 8 | 2021-07-27T01:13:56.000Z | 2022-03-15T01:12:40.000Z | aries_cloudagent/vc/tests/document_loader.py | rbeltranmontijo/aries-python | 77f6e0dd2f98cb70c2a17a1c72b729f0766f8f61 | [
"Apache-2.0"
] | 1 | 2022-02-02T17:05:27.000Z | 2022-02-02T17:05:27.000Z | from .contexts import (
DID_V1,
SECURITY_V1,
SECURITY_V2,
SECURITY_V3_UNSTABLE,
CREDENTIALS_V1,
EXAMPLES_V1,
BBS_V1,
CITIZENSHIP_V1,
ODRL,
SCHEMA_ORG,
)
from ..ld_proofs.constants import (
SECURITY_CONTEXT_V2_URL,
SECURITY_CONTEXT_V1_URL,
DID_V1_CONTEXT_URL,
SECURITY_CONTEXT_BBS_URL,
CREDENTIALS_CONTEXT_V1_URL,
SECURITY_CONTEXT_V3_URL,
)
from .dids import (
DID_z6Mkgg342Ycpuk263R9d8Aq6MUaxPn1DDeHyGo38EefXmgDL,
DID_zUC72Q7XD4PE4CrMiDVXuvZng3sBvMmaGgNeTUJuzavH2BS7ThbHL9FhsZM9QYY5fqAQ4MB8M9oudz3tfuaX36Ajr97QRW7LBt6WWmrtESe6Bs5NYzFtLWEmeVtvRYVAgjFcJSa,
DID_EXAMPLE_48939859,
DID_SOV_QqEfJxe752NCmWqR5TssZ5,
)
DOCUMENTS = {
DID_z6Mkgg342Ycpuk263R9d8Aq6MUaxPn1DDeHyGo38EefXmgDL.get(
"id"
): DID_z6Mkgg342Ycpuk263R9d8Aq6MUaxPn1DDeHyGo38EefXmgDL,
DID_zUC72Q7XD4PE4CrMiDVXuvZng3sBvMmaGgNeTUJuzavH2BS7ThbHL9FhsZM9QYY5fqAQ4MB8M9oudz3tfuaX36Ajr97QRW7LBt6WWmrtESe6Bs5NYzFtLWEmeVtvRYVAgjFcJSa.get(
"id"
): DID_zUC72Q7XD4PE4CrMiDVXuvZng3sBvMmaGgNeTUJuzavH2BS7ThbHL9FhsZM9QYY5fqAQ4MB8M9oudz3tfuaX36Ajr97QRW7LBt6WWmrtESe6Bs5NYzFtLWEmeVtvRYVAgjFcJSa,
DID_EXAMPLE_48939859.get("id"): DID_EXAMPLE_48939859,
DID_SOV_QqEfJxe752NCmWqR5TssZ5.get("id"): DID_SOV_QqEfJxe752NCmWqR5TssZ5,
SECURITY_CONTEXT_V1_URL: SECURITY_V1,
SECURITY_CONTEXT_V2_URL: SECURITY_V2,
SECURITY_CONTEXT_V3_URL: SECURITY_V3_UNSTABLE,
DID_V1_CONTEXT_URL: DID_V1,
CREDENTIALS_CONTEXT_V1_URL: CREDENTIALS_V1,
SECURITY_CONTEXT_BBS_URL: BBS_V1,
"https://www.w3.org/2018/credentials/examples/v1": EXAMPLES_V1,
"https://w3id.org/citizenship/v1": CITIZENSHIP_V1,
"https://www.w3.org/ns/odrl.jsonld": ODRL,
"http://schema.org/": SCHEMA_ORG,
}
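
# Hedged usage sketch (illustrative) for the loader defined below:
#   doc = custom_document_loader(SECURITY_CONTEXT_V2_URL, {})
#   doc["document"] is SECURITY_V2  # True; fragment URLs are retried with "#..." stripped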
def custom_document_loader(url: str, options: dict):
# Check if full url (with fragments is in document map)
if url in DOCUMENTS:
return {
"contentType": "application/ld+json",
"contextUrl": None,
"document": DOCUMENTS[url],
"documentUrl": url,
}
# Otherwise look if it is present without fragment
without_fragment = url.split("#")[0]
if without_fragment in DOCUMENTS:
return {
"contentType": "application/ld+json",
"contextUrl": None,
"document": DOCUMENTS[without_fragment],
"documentUrl": url,
}
print("Could not find")
raise Exception(f"No custom context support for {url}")
| 34.583333 | 148 | 0.736145 | 239 | 2,490 | 7.334728 | 0.330544 | 0.068454 | 0.027382 | 0.022818 | 0.355961 | 0.13919 | 0.086709 | 0.086709 | 0.086709 | 0.086709 | 0 | 0.090148 | 0.184739 | 2,490 | 71 | 149 | 35.070423 | 0.773399 | 0.040964 | 0 | 0.15625 | 0 | 0 | 0.127883 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015625 | false | 0 | 0.046875 | 0 | 0.09375 | 0.015625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a203cf5cfdc6a8501d530dc5088e0a158e658e1d | 288 | py | Python | pandoc/filters/codeblocks_common.py | jasonchoimtt/dotfiles | 3064785ddc4f5fd13118e15167ee38409eac5bc9 | [
"MIT"
] | 13 | 2016-09-24T02:20:59.000Z | 2017-04-27T09:15:02.000Z | pandoc/filters/codeblocks_common.py | jasonchoimtt/dotfiles | 3064785ddc4f5fd13118e15167ee38409eac5bc9 | [
"MIT"
] | null | null | null | pandoc/filters/codeblocks_common.py | jasonchoimtt/dotfiles | 3064785ddc4f5fd13118e15167ee38409eac5bc9 | [
"MIT"
] | 1 | 2019-01-28T06:17:15.000Z | 2019-01-28T06:17:15.000Z | from collections import OrderedDict
def parse_codeblock_args(elem):
syntax = elem.classes[0] if elem.classes else ''
args = OrderedDict(elem.attributes)
for k, v in args.items():
if v.lower() in ('false', 'no'):
args[k] = False
return syntax, args
| 22.153846 | 52 | 0.631944 | 39 | 288 | 4.615385 | 0.615385 | 0.122222 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00463 | 0.25 | 288 | 12 | 53 | 24 | 0.828704 | 0 | 0 | 0 | 0 | 0 | 0.024306 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2069227330d2f68ac9581598475c67af9bc9bb2 | 361 | py | Python | python/libfuzzer/python-atheris/src/mayhemit.py | ForAllSecure/mayhem-examples | bae123373c7e760c8b0ea0d9fb182f6acfbe5381 | [
"MIT"
] | null | null | null | python/libfuzzer/python-atheris/src/mayhemit.py | ForAllSecure/mayhem-examples | bae123373c7e760c8b0ea0d9fb182f6acfbe5381 | [
"MIT"
] | null | null | null | python/libfuzzer/python-atheris/src/mayhemit.py | ForAllSecure/mayhem-examples | bae123373c7e760c8b0ea0d9fb182f6acfbe5381 | [
"MIT"
] | null | null | null | #!/usr/local/bin/python3
import atheris
import sys
import os
def TestOneInput(data):
    if len(data) >= 3:
if data[0] == ord('b'):
if data[1] == ord('u'):
if data[2] == ord('g'):
raise Exception("Made it to the bug!")
atheris.instrument_all()
atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
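
# Typical invocation (atheris forwards libFuzzer-style flags), e.g.:
#   python3 mayhemit.py -runs=100000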
| 21.235294 | 58 | 0.556787 | 49 | 361 | 4.081633 | 0.673469 | 0.09 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01938 | 0.285319 | 361 | 16 | 59 | 22.5625 | 0.755814 | 0.063712 | 0 | 0 | 0 | 0 | 0.065282 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.25 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2075dd6f8477f9a928e6816e1109df1a8072a85 | 496 | py | Python | djfw/tinymce/urls.py | kozzztik/tulius | 81b8f6484eefdc453047f62173a08f5e6f640cd6 | [
"MIT"
] | 1 | 2020-04-21T15:09:18.000Z | 2020-04-21T15:09:18.000Z | djfw/tinymce/urls.py | kozzztik/tulius | 81b8f6484eefdc453047f62173a08f5e6f640cd6 | [
"MIT"
] | 70 | 2019-04-10T22:32:32.000Z | 2022-03-11T23:12:54.000Z | djfw/tinymce/urls.py | kozzztik/tulius | 81b8f6484eefdc453047f62173a08f5e6f640cd6 | [
"MIT"
] | 1 | 2019-04-12T14:55:39.000Z | 2019-04-12T14:55:39.000Z | from django.conf import urls
from djfw.tinymce import views
app_name = 'djfw.tinymce'
urlpatterns = [
urls.re_path(r'^$', views.Smiles.as_view(), name='index'),
urls.re_path(
r'^emotions/emotions.htm$', views.Smiles.as_view(), name='smiles'),
urls.re_path(
r'^uploaded_files/$',
views.Uploaded_files.as_view(),
name='uploaded_files'),
urls.re_path(
r'^upload_file/$', views.Upload_file.as_view(), name='upload_file'),
]
| 27.555556 | 77 | 0.622984 | 66 | 496 | 4.454545 | 0.363636 | 0.081633 | 0.136054 | 0.14966 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.21371 | 496 | 17 | 78 | 29.176471 | 0.753846 | 0 | 0 | 0.214286 | 0 | 0 | 0.217119 | 0.048017 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a20bc93fee8de2b40c212fc499e721856e4ab411 | 661 | py | Python | config/settings/test.py | andrewtcrooks/taskorganizer | 661b1ae790d06f351871b5ef5f3f820055ddb704 | [
"MIT"
] | null | null | null | config/settings/test.py | andrewtcrooks/taskorganizer | 661b1ae790d06f351871b5ef5f3f820055ddb704 | [
"MIT"
] | null | null | null | config/settings/test.py | andrewtcrooks/taskorganizer | 661b1ae790d06f351871b5ef5f3f820055ddb704 | [
"MIT"
] | null | null | null | """taskorganizer.config.settings.test ."""
from .base import *
import json

# Imported explicitly so get_secret() does not depend on base's star import exposing it
from django.core.exceptions import ImproperlyConfigured
# JSON-based secrets module
with open('test_secrets.json') as f:
secrets = json.loads(f.read())
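
# Expected shape of test_secrets.json (hypothetical value shown):
#   {"SECRET_KEY": "dev-only-not-a-real-secret"}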
def get_secret(setting, secrets=secrets):
"""Get the secret variable or return explicit exception."""
try:
return secrets[setting]
except KeyError:
error_msg = 'Set the {0} environment variable'.format(setting)
raise ImproperlyConfigured(error_msg)
SECRET_KEY = get_secret('SECRET_KEY')
DEBUG = True
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'andrew',
'HOST': 'localhost',
    }
}
| 23.607143 | 70 | 0.655068 | 77 | 661 | 5.532468 | 0.675325 | 0.051643 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001919 | 0.2118 | 661 | 27 | 71 | 24.481481 | 0.815739 | 0.177005 | 0 | 0 | 0 | 0 | 0.232645 | 0.054409 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a20e3bed418b96fae7ddcec8794e7d98aa58b03f | 7,727 | py | Python | salina_examples/rl/LoP/agents.py | jbgaya/salina | 9f957d4c74974fb4db188c291b4cad039582e4e2 | [
"MIT"
] | null | null | null | salina_examples/rl/LoP/agents.py | jbgaya/salina | 9f957d4c74974fb4db188c291b4cad039582e4e2 | [
"MIT"
] | null | null | null | salina_examples/rl/LoP/agents.py | jbgaya/salina | 9f957d4c74974fb4db188c291b4cad039582e4e2 | [
"MIT"
] | null | null | null | import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions.normal import Normal
from salina import Agent, TAgent
from brax.envs.to_torch import JaxToTorchWrapper
from salina_examples.rl.LoP.envs import create_gym_env
from salina_examples.rl.LoP.subspace import Linear, Sequential
from torch.distributions.dirichlet import Dirichlet
from torch.distributions.categorical import Categorical
from torch.distributions.uniform import Uniform
from salina.agents.brax import BraxAgent
class CustomBraxAgent(BraxAgent):
def _initialize_envs(self, n_envs):
        assert self._seed is not None, "[BraxAgent] seed must be specified"
self.gym_env = create_gym_env(
self.brax_env_name, batch_size=n_envs, seed=self._seed, **self.args
)
self.gym_env = JaxToTorchWrapper(self.gym_env)
class AlphaAgent(TAgent):
def __init__(self, device, n_dim = 2, geometry = "simplex", dist = "flat"):
super().__init__()
self.n_dim = n_dim
self.geometry = geometry
self.device = device
assert geometry in ["simplex","bezier"], "geometry must be 'simplex' or 'bezier'"
if geometry == "bezier":
assert n_dim == 3, "number of dimensions must be equal to 3 for Bezier subspaces"
assert dist in ["flat","categorical"], "distribution must be 'flat' or 'categorical'"
if dist == "flat":
self.dist = Dirichlet(torch.ones(n_dim)) if geometry == "simplex" else Uniform(0,1)
else:
self.dist = Categorical(torch.ones(n_dim))
def forward(self, t, replay = False, **args):
B = self.workspace.batch_size()
alphas = self.dist.sample(torch.Size([B])).to(self.device)
if isinstance(self.dist,Categorical):
alphas = F.one_hot(alphas,num_classes = self.n_dim).float()
elif self.geometry == "bezier":
alphas = torch.stack([(1 - alphas) ** 2, 2 * alphas * (1 - alphas), alphas ** 2],dim = 1)
if (t > 0) and (not replay):
done = self.get(("env/done", t)).float().unsqueeze(-1)
alphas_old = self.get(("alphas", t-1))
alphas = alphas * done + alphas_old * (1 - done)
self.set(("alphas", t), alphas)
class LoPAgent(TAgent):
def __init__(self, **args):
super().__init__()
env = JaxToTorchWrapper(create_gym_env(args["env"].env_name))
input_size = env.observation_space.shape[0]
num_outputs = env.action_space.shape[0]
hs = args["hidden_size"]
self.n_models = args["n_models"]
n_layers = args["n_layers"]
hidden_layers = [Linear(self.n_models,hs,hs) if i%2==0 else nn.ReLU() for i in range(2*(n_layers - 1))] if n_layers >1 else [nn.Identity()]
self.model = Sequential(
Linear(self.n_models, input_size, hs),
nn.ReLU(),
*hidden_layers,
Linear(self.n_models, hs, num_outputs),
)
def cosine_similarity(self,i,j):
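        """Mean over parameter tensors of the squared cosine similarity
        between the weights of anchor models i and j."""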
assert (i < self.n_models) and (j < self.n_models), "index higher than n_models"
cos_sim = torch.Tensor([0.]).to(self.model[0].weight.device)
n = 0
for w in self.parameters():
p = ((w[i] * w[j]).sum() / max(((w[i] ** 2).sum().sqrt() * (w[j] ** 2).sum().sqrt()),1e-8)) ** 2
cos_sim += p
n += 1
return cos_sim / n
def L2_norm(self,i,j):
assert (i < self.n_models) and (j < self.n_models), "index higher than n_models"
L2_norm = torch.Tensor([0.]).to(self.model[0].weight.device)
n = 0
for w in self.parameters():
L2_norm += torch.linalg.norm(w[i] - w[j])
n += 1
return L2_norm / n
def forward(self, t, replay, action_std, **args):
if replay:
input = self.get("env/transformed_obs")
alphas = self.get("alphas")
mean = self.model(input,alphas)
std = torch.ones_like(mean) * action_std + 0.000001
dist = Normal(mean, std)
action = self.get("real_action")
logp_pi = dist.log_prob(action).sum(axis=-1)
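            # tanh change-of-variables correction, using the numerically stable
            # identity log(1 - tanh(a)^2) = 2 * (log 2 - a - softplus(-2a))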
logp_pi -= (2 * (np.log(2) - action - F.softplus(-2 * action))).sum(axis=-1)
self.set("action_logprobs", logp_pi)
else:
input = self.get(("env/transformed_obs", t))
alphas = self.get(("alphas",t))
with torch.no_grad():
mean = self.model(input,alphas)
std = torch.ones_like(mean) * action_std + 0.000001
dist = Normal(mean, std)
action = dist.sample() if action_std > 0 else dist.mean
self.set(("real_action", t), action)
logp_pi = dist.log_prob(action).sum(axis=-1)
logp_pi -= (2 * (np.log(2) - action - F.softplus(-2 * action))).sum(axis=-1)
self.set(("old_action_logprobs", t), logp_pi)
action = torch.tanh(action)
self.set(("action", t), action)
def seed(self,seed):
pass
class CriticAgent(Agent):
def __init__(self, **args):
super().__init__()
env = JaxToTorchWrapper(create_gym_env(args["env"].env_name))
input_size = env.observation_space.shape[0]
alpha_size = args["alpha_size"]
hs = args["hidden_size"]
n_layers = args["n_layers"]
hidden_layers = [nn.Linear(hs,hs) if i%2==0 else nn.ReLU() for i in range(2*(n_layers - 1))] if n_layers >1 else [nn.Identity()]
self.model_critic = nn.Sequential(
nn.Linear(input_size + alpha_size, hs),
nn.ReLU(),
*hidden_layers,
nn.Linear(hs, 1),
)
def forward(self, t = None, **args):
        if t is None:
input = self.get("env/transformed_obs")
alphas = self.get("alphas")
x = torch.cat([input,alphas], dim=-1)
critic = self.model_critic(x).squeeze(-1)
self.set("critic", critic)
else:
input = self.get(("env/transformed_obs",t))
alphas = self.get(("alphas",t))
x = torch.cat([input,alphas], dim=-1)
critic = self.model_critic(x).squeeze(-1)
self.set(("critic",t), critic)
class Normalizer(TAgent):
def __init__(self, env):
super().__init__()
env = JaxToTorchWrapper(create_gym_env(env.env_name))
self.n_features = env.observation_space.shape[0]
self.n = None
self.mean = nn.Parameter(torch.zeros(self.n_features), requires_grad = False)
self.mean_diff = torch.zeros(self.n_features)
self.var = nn.Parameter(torch.ones(self.n_features), requires_grad = False)
def forward(self, t, update_normalizer=True, **kwargs):
input = self.get(("env/env_obs", t))
if update_normalizer:
self.update(input)
input = self.normalize(input)
self.set(("env/transformed_obs", t), input)
def update(self, x):
if self.n is None:
device = x.device
self.n = torch.zeros(self.n_features).to(device)
self.mean = self.mean.to(device)
self.mean_diff = self.mean_diff.to(device)
self.var = self.var.to(device)
self.n += 1.0
last_mean = self.mean.clone()
self.mean += (x - self.mean).mean(dim=0) / self.n
self.mean_diff += (x - last_mean).mean(dim=0) * (x - self.mean).mean(dim=0)
self.var = nn.Parameter(torch.clamp(self.mean_diff / self.n, min=1e-2), requires_grad = False).to(x.device)
def normalize(self, inputs):
obs_std = torch.sqrt(self.var)
return (inputs - self.mean) / obs_std
def seed(self, seed):
torch.manual_seed(seed) | 41.543011 | 147 | 0.589362 | 1,061 | 7,727 | 4.134779 | 0.169651 | 0.025074 | 0.020059 | 0.017096 | 0.406656 | 0.354456 | 0.332574 | 0.289036 | 0.289036 | 0.289036 | 0 | 0.014701 | 0.269315 | 7,727 | 186 | 148 | 41.543011 | 0.76231 | 0 | 0 | 0.319277 | 0 | 0 | 0.074534 | 0 | 0 | 0 | 0 | 0 | 0.036145 | 1 | 0.090361 | false | 0.006024 | 0.078313 | 0 | 0.216867 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a211435c327ed70ae065e1343ef2dfb4cbf26da9 | 1,541 | py | Python | setup.py | Zincr0/pyscrap | 69b4c2bd42dbec125444ad68a1f76168fab7607e | [
"Apache-2.0"
] | 1 | 2015-04-28T22:54:05.000Z | 2015-04-28T22:54:05.000Z | setup.py | Zincr0/pyscrap | 69b4c2bd42dbec125444ad68a1f76168fab7607e | [
"Apache-2.0"
] | null | null | null | setup.py | Zincr0/pyscrap | 69b4c2bd42dbec125444ad68a1f76168fab7607e | [
"Apache-2.0"
] | null | null | null | # -*- coding=utf-8 -*-
#Copyright 2012 Daniel Osvaldo Mondaca Seguel
#
#Licensed under the Apache License, Version 2.0 (the "License");
#you may not use this file except in compliance with the License.
#You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#Unless required by applicable law or agreed to in writing, software
#distributed under the License is distributed on an "AS IS" BASIS,
#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#See the License for the specific language governing permissions and
#limitations under the License.
import sys
import os
from setuptools import setup
if sys.version_info[0] < 3:
    # Python 2 only: force a utf-8 default str encoding
    reload(sys)  # noqa: F821 -- `reload` is a builtin on Python 2
    sys.setdefaultencoding('utf-8')
files = ["pyscrap/*"]
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(
scripts=["bin/wscrap"],
name="pyscrap",
version="0.0.9",
author="Daniel Mondaca",
author_email="daniel@analitic.cl",
description=("micro framework for web scraping"),
license = "Apache 2.0 License",
keywords = "web scraping",
url = "http://github.com/Nievous/pyscrap",
packages=["pyscrap"],
install_requires = ["lxml", "simplejson"],
long_description=read("README.txt"),
package_data = {"package": files},
classifiers=[
"Development Status :: 4 - Beta",
"Topic :: Software Development",
"License :: OSI Approved :: Apache Software License",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python",
],
)
| 30.215686 | 73 | 0.686567 | 199 | 1,541 | 5.276382 | 0.648241 | 0.057143 | 0.024762 | 0.030476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012769 | 0.186892 | 1,541 | 50 | 74 | 30.82 | 0.825219 | 0.374432 | 0 | 0 | 0 | 0 | 0.394099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.1 | 0.033333 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a211ebfcbf069a00aa69ab75205d588df2ae8176 | 10,615 | py | Python | densenet.py | BingoH/ReinventingWheel | 5232d0ab697ad57a039c766355545bbde3b2a200 | [
"MIT"
] | 4 | 2017-08-18T11:49:54.000Z | 2019-04-02T11:35:28.000Z | densenet.py | BingoH/ReinventingWheel | 5232d0ab697ad57a039c766355545bbde3b2a200 | [
"MIT"
] | null | null | null | densenet.py | BingoH/ReinventingWheel | 5232d0ab697ad57a039c766355545bbde3b2a200 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
#####
# simple implementation of "Densely Connected Convolutional Networks, CVPR 2017"
#####
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import numpy as np
class DenseBlock(nn.Module):
""" Dense blocks introduced in paper
"""
def __init__(self, k_0, k, L, use_dropout = False):
"""
Args:
k_0 (int): number of input channels to dense block
k (int): growth rate described in paper
L (int): number of layers in dense block
            use_dropout (bool) : whether to use dropout after conv layer
"""
super(DenseBlock, self).__init__()
self.L = L
self.use_dropout = use_dropout
for i in range(L):
# TODO: per-dimension batchnormalization is provided in torch.nn
# however, in the original batchnormalization paper,
# per feature map batchnormalization is applied for convolutional layer
n_in = k_0 if i == 0 else (k_0 + i * k) # number of input channels
self.add_module('bn' + str(i), nn.BatchNorm2d(n_in))
self.add_module('conv' + str(i), nn.Conv2d(n_in, k, 3, padding = 1))
def forward(self, x):
children = self.children()
for i in range(self.L):
            bn = next(children)
            y = F.relu(bn(x))
            conv = next(children)
            y = conv(y)
if self.use_dropout:
y = F.dropout(y, p = 0.2, training = self.training)
if (i + 1) == self.L:
x = y # return last conv layer output
else:
x = torch.cat((x, y), 1)
return x
class TransitionLayer(nn.Module):
""" TransitionLayer between dense blocks
"""
def __init__(self, n_in, n_out, use_dropout = False):
"""
Args:
n_in (int) : number of input channels
n_out (int) : number of output channels
            use_dropout (bool) : whether to use dropout after conv layer
"""
super(TransitionLayer, self).__init__()
self.conv1x1 = nn.Conv2d(n_in, n_out, 1) # 1x1 conv layer
self.use_dropout = use_dropout
def forward(self, x):
x = self.conv1x1(x)
if self.use_dropout:
x = F.dropout(x, p = 0.2, training = self.training)
x = F.avg_pool2d(x, 2)
return x
def init_weights(m):
"""
TODO: initialization
"""
pass
class DenseNet(nn.Module):
""" Whole framework of dense net for 32 x 32 color image
"""
def __init__(self, k, L, C, use_dropout = False):
"""
Args:
k (int): growth rate for denseblocks
L (int): number of layers for denseblocks
C (int) : number of classes
            use_dropout (bool) : whether to use dropout after conv layer
"""
super(DenseNet, self).__init__()
self.conv1 = nn.Conv2d(3, 16, 3, padding = 1) # first conv layer applied on input images
# dense blocks, connected by transition layer
self.db1 = DenseBlock(16, k, L, use_dropout)
self.trl1 = TransitionLayer(k, k, use_dropout)
self.db2 = DenseBlock(k, k, L, use_dropout)
self.trl2 = TransitionLayer(k, k, use_dropout)
self.db3 = DenseBlock(k, k, L, use_dropout)
# linear layer
self.fc = nn.Linear(k, C)
def forward(self, x):
x = self.conv1(x)
x = self.db1(x)
x = self.trl1(x)
x = self.db2(x)
x = self.trl2(x)
x = self.db3(x)
# global average pooling
x = F.avg_pool2d(x, 8)
x = x.view(-1, self.num_flat_features(x))
x = self.fc(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimension except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
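# Illustrative shape check (assuming CIFAR-sized inputs): DenseNet(12, 25, 10)
# maps a batch of (N, 3, 32, 32) images to (N, 10) logits: conv1 keeps 32x32,
# the two transition layers halve it to 16 then 8, and the final 8x8 average
# pool leaves a k-channel vector for the linear classifier.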
def split_train(dataset, valid_percentage):
"""
Utility function to split training set into training and validation
return sample index of both training and validation
Args:
dataset (Dataset) : original training dataset
valid_percentage (float) : percentage of validation
Returns:
train_idx (list) : training split index
valid_idx (list) : validataion split index
"""
n = len(dataset)
n_valid = int(np.floor(n * valid_percentage))
perm = np.random.permutation(n)
train_idx = perm[:(n - n_valid)]
valid_idx = perm[(n - n_valid):]
return (train_idx, valid_idx)
if __name__ == '__main__':
#######
# test on CIFAR10 dataset
#######
# compute CIFAR10 mean and variance
#data_dir = '../data/cifar10/'
data_dir = './cifar10/'
data = torchvision.datasets.CIFAR10(root = data_dir, train = True, download = True).train_data
data = data.astype(np.float32) / 255.
cifar_mean = np.mean(data, axis = (0, 1, 2))
cifar_std = np.std(data, axis = (0, 1, 2))
cifar_mean = torch.from_numpy(cifar_mean).float()
cifar_std = torch.from_numpy(cifar_std).float()
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(cifar_mean, cifar_std)])
augment = None # no data augmentation
#augment = transforms.Compose([]) # data augmentation
valid_transform = transform
if augment is None:
train_transform = transform
else:
train_transform = transforms.Compose([augment, transform])
use_cuda = True
use_cuda = use_cuda and torch.cuda.is_available()
kwargs = {'num_workers' : 1, 'pin_memory': True} if use_cuda else {}
# simply duplicate dataset
cifar10_train = torchvision.datasets.CIFAR10(root = data_dir, train = True,
download = True, transform = train_transform)
train_idx, valid_idx = split_train(cifar10_train, 0.1) # 5000 validation samples
cifar10_valid = torchvision.datasets.CIFAR10(root = data_dir, train = True,
download = True, transform = valid_transform)
batch_sz = 64
train_loader = torch.utils.data.DataLoader(cifar10_train, batch_size = batch_sz,
sampler = torch.utils.data.sampler.SubsetRandomSampler(train_idx),
**kwargs)
valid_loader = torch.utils.data.DataLoader(cifar10_valid, batch_size = batch_sz,
sampler = torch.utils.data.sampler.SubsetRandomSampler(valid_idx),
**kwargs)
# last epoch use the whole training dataset
whole_train_loader = torch.utils.data.DataLoader(cifar10_train, batch_size = batch_sz, **kwargs)
#######
# training stage
######
#net = DenseNet(5, 3, 10, True) # just for CPU test
cuda_device = 2 # avoid Tensors on different GPU
net = DenseNet(12, 25, 10, use_dropout = True)
if use_cuda:
net.cuda(cuda_device)
criterion = nn.CrossEntropyLoss()
init_lr = 0.1
weight_decay = 1e-4
momentum = 0.9
optimizer = optim.SGD(net.parameters(), lr = init_lr, weight_decay = weight_decay,
momentum = momentum, nesterov = True)
n_epoch = 300
valid_freq = 50 # validation frequency
net.train() # training mode for dropout and batch normalization layer
for epoch in range(n_epoch):
running_loss = 0.0
if epoch + 1 == n_epoch: ## use the whole training in last epoch
data_loader = whole_train_loader
else:
data_loader = train_loader
# divide lr by 10 after 50% and 75% epochs
if epoch + 1 == .5 * n_epoch:
for param_group in optimizer.param_groups:
param_group['lr'] = init_lr * 0.1
if epoch + 1 == .75 * n_epoch:
for param_group in optimizer.param_groups:
param_group['lr'] = init_lr * 0.01
for i, data in enumerate(data_loader, 0):
inputs, labels = data
# wrap in variable
if use_cuda:
inputs, labels = Variable(inputs.cuda(cuda_device)), Variable(labels.cuda(cuda_device))
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
            running_loss += loss.item()
# print statistics
if (i + 1) % valid_freq == 0:
print('[%d, %d] training loss: %.4f' %(epoch + 1,
i + 1, running_loss / valid_freq))
running_loss = 0.
# validation loss
valid_loss_val = 0.0
net.eval() # eval mode
for inputs, labels in valid_loader:
if use_cuda:
inputs = Variable(inputs.cuda(cuda_device), volatile = True)
labels = Variable(labels.cuda(cuda_device))
else:
inputs, labels = Variable(inputs, volatile = True), Variable(labels)
outputs = net(inputs)
valid_loss = criterion(outputs, labels)
                    valid_loss_val += valid_loss.item()
print('\t\t validation loss: %.4f' %(valid_loss_val / len(valid_loader)))
net.train()
print('Finished Training')
#######
# testing stage
#######
test_dataset = torchvision.datasets.CIFAR10(root = data_dir, train = False,
download = True, transform = valid_transform)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size = batch_sz,
shuffle = False, **kwargs)
net.eval()
correct = 0
total = 0
for inputs, labels in test_loader:
if use_cuda:
inputs = Variable(inputs.cuda(cuda_device), volatile = True)
#labels = Variable(labels.cuda(cuda_device))
else:
#inputs, labels = Variable(inputs, volatile = True), Variable(labels)
inputs = Variable(inputs, volatile = True)
outputs = net(inputs)
_, predicted = torch.max(outputs.data, 1)
correct += (predicted.cpu() == labels).squeeze().sum()
total += labels.size(0)
print ("Test accuracy : %f %% " % (correct * 100.0 / total))
| 34.576547 | 103 | 0.581912 | 1,304 | 10,615 | 4.582822 | 0.220859 | 0.035141 | 0.008032 | 0.008032 | 0.294846 | 0.228079 | 0.18407 | 0.177042 | 0.177042 | 0.177042 | 0 | 0.022981 | 0.315403 | 10,615 | 306 | 104 | 34.689542 | 0.799367 | 0.21903 | 0 | 0.192308 | 0 | 0 | 0.017826 | 0 | 0 | 0 | 0 | 0.006536 | 0 | 1 | 0.049451 | false | 0.005495 | 0.043956 | 0 | 0.137363 | 0.021978 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2157079ef9ffaffcc108a55445091fe2efede1e | 13,871 | py | Python | centpy/solver2d.py | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | 2 | 2021-06-23T17:23:21.000Z | 2022-01-14T01:28:57.000Z | centpy/solver2d.py | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | null | null | null | centpy/solver2d.py | olekravchenko/centpy | e10d1b92c0ee5520110496595b6875b749fa4451 | [
"MIT"
] | null | null | null | from .solver1d import *
# p coefficient functions
class Solver2d(Solver1d):
def __init__(self, equation):
super().__init__(equation)
# Equation dependent functions
self.flux_y = equation.flux_y
self.spectral_radius_y = equation.spectral_radius_y
def fd2(self, u):
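        """One step of a fully discrete second-order central scheme on
        staggered grids (Nessyahu-Tadmor family); `self.odd` alternates
        between the two staggered grids on successive calls."""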
u_star = np.ones(u.shape)
un_half = np.ones(u.shape)
u_prime_x = np.ones(u.shape)
u_prime_y = np.ones(u.shape)
u_prime_x[1:-1, 1:-1] = limiter_x(u)
u_prime_y[1:-1, 1:-1] = limiter_y(u)
if self.odd:
un_half[1:-2, 1:-2] = 0.25 * (
(u[1:-2, 1:-2] + u[2:-1, 1:-2] + u[1:-2, 2:-1] + u[2:-1, 2:-1])
+ 0.25
* (
(u_prime_x[1:-2, 1:-2] - u_prime_x[2:-1, 1:-2])
+ (u_prime_x[1:-2, 2:-1] - u_prime_x[2:-1, 2:-1])
+ (u_prime_y[1:-2, 1:-2] - u_prime_y[1:-2, 2:-1])
+ (u_prime_y[2:-1, 1:-2] - u_prime_y[2:-1, 2:-1])
)
)
else:
un_half[2:-1, 2:-1] = 0.25 * (
(u[1:-2, 1:-2] + u[2:-1, 1:-2] + u[1:-2, 2:-1] + u[2:-1, 2:-1])
+ 0.25
* (
(u_prime_x[1:-2, 1:-2] - u_prime_x[2:-1, 1:-2])
+ (u_prime_x[1:-2, 2:-1] - u_prime_x[2:-1, 2:-1])
+ (u_prime_y[1:-2, 1:-2] - u_prime_y[1:-2, 2:-1])
+ (u_prime_y[2:-1, 1:-2] - u_prime_y[2:-1, 2:-1])
)
)
f = self.flux_x(u)
g = self.flux_y(u)
f_prime_x = limiter_x(f)
g_prime_y = limiter_y(g)
u_star[1:-1, 1:-1] = u[1:-1, 1:-1] - 0.5 * self.dt * (
f_prime_x / self.dx + g_prime_y / self.dy
)
self.boundary_conditions(u_star)
f_star = self.flux_x(u_star)
g_star = self.flux_y(u_star)
if self.odd:
u[1:-2, 1:-2] = (
un_half[1:-2, 1:-2]
- 0.5
* self.dt
/ self.dx
* (
(f_star[2:-1, 1:-2] - f_star[1:-2, 1:-2])
+ (f_star[2:-1, 2:-1] - f_star[1:-2, 2:-1])
)
- 0.5
* self.dt
/ self.dy
* (
(g_star[1:-2, 2:-1] - g_star[1:-2, 1:-2])
+ (g_star[2:-1, 2:-1] - g_star[2:-1, 1:-2])
)
)
else:
u[2:-1, 2:-1] = (
un_half[2:-1, 2:-1]
- 0.5
* self.dt
/ self.dx
* (
(f_star[2:-1, 1:-2] - f_star[1:-2, 1:-2])
+ (f_star[2:-1, 2:-1] - f_star[1:-2, 2:-1])
)
- 0.5
* self.dt
/ self.dy
* (
(g_star[1:-2, 2:-1] - g_star[1:-2, 1:-2])
+ (g_star[2:-1, 2:-1] - g_star[2:-1, 1:-2])
)
)
self.boundary_conditions(u)
self.odd = not self.odd
return u
#################
# SD2
#################
def reconstruction_sd2(self, u):
u_N, u_S, u_E, u_W = np.ones((4,) + u.shape)
ux = limiter_x(u[1:-1, 1:-1])
uy = limiter_y(u[1:-1, 1:-1])
u_N[j0, j0], u_S[j0, j0], u_E[j0, j0], u_W[j0, j0] = u[None, j0, j0] + np.array(
[0.5 * uy, -0.5 * uy, 0.5 * ux, -0.5 * ux]
)
list(map(self.boundary_conditions, [u_N, u_S, u_E, u_W]))
return u_N, u_S, u_E, u_W
def Hx_flux_sd2(self, u_E, u_W):
a = np.maximum(self.spectral_radius_x(u_E), self.spectral_radius_x(u_W))
f_E = self.flux_x(u_E)
f_W = self.flux_x(u_W)
if u_W.shape == a.shape:
return 0.5 * (f_W + f_E) - 0.5 * a * (u_W - u_E) # scalar
else:
return 0.5 * (f_W + f_E) - 0.5 * np.multiply(
a[:, :, None], (u_W - u_E)
) # systems
def Hy_flux_sd2(self, u_E, u_W):
a = np.maximum(self.spectral_radius_y(u_E), self.spectral_radius_y(u_W))
f_E = self.flux_y(u_E)
f_W = self.flux_y(u_W)
if u_W.shape == a.shape:
return 0.5 * (f_W + f_E) - 0.5 * a * (u_W - u_E) # scalar
else:
return 0.5 * (f_W + f_E) - 0.5 * np.multiply(
a[:, :, None], (u_W - u_E)
) # systems
def c_flux_sd2(self, u_N, u_S, u_E, u_W):
Hx_halfm = self.Hx_flux_sd2(u_E[jm, j0], u_W[j0, j0])
Hx_halfp = self.Hx_flux_sd2(u_E[j0, j0], u_W[jp, j0])
Hy_halfm = self.Hy_flux_sd2(u_N[j0, jm], u_S[j0, j0])
Hy_halfp = self.Hy_flux_sd2(u_N[j0, j0], u_S[j0, jp])
return -self.dt / self.dx * (Hx_halfp - Hx_halfm) - self.dt / self.dy * (
Hy_halfp - Hy_halfm
)
def sd2(self, u):
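        """Semi-discrete second-order central scheme; the two-stage update
        u <- u + (C0 + C1) / 2 below is Heun's (SSP-RK2) time integration."""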
self.boundary_conditions(u)
u_N, u_S, u_E, u_W = self.reconstruction_sd2(u)
C0 = self.c_flux_sd2(u_N, u_S, u_E, u_W)
u[j0, j0] += C0
self.boundary_conditions(u)
u_N, u_S, u_E, u_W = self.reconstruction_sd2(u)
C1 = self.c_flux_sd2(u_N, u_S, u_E, u_W)
u[j0, j0] += 0.5 * (C1 - C0)
self.boundary_conditions(u)
return u
#################
# SD3
#################
# indicators: indicators_2d_sd3, indicators_diag_2d_sd3
def indicators_sd3(self, u):
u_norm = np.sqrt(self.dx * self.dy) * np.linalg.norm(u[j0, j0])
pl0, pl1, pr0, pr1, pcx0, pcx1, pcx2 = px_coefs(u)
ISl = pl1 ** 2 / (u_norm + eps)
IScx = 1.0 / (u_norm + eps) * ((13.0 / 3.0) * pcx2 ** 2 + pcx1 ** 2)
ISr = pr1 ** 2 / (u_norm + eps)
pb0, pb1, pt0, pt1, pcy0, pcy1, pcy2 = py_coefs(u)
ISb = pb1 ** 2 / (u_norm + eps)
IScy = 1.0 / (u_norm + eps) * ((13.0 / 3.0) * pcy2 ** 2 + pcy1 ** 2)
ISt = pt1 ** 2 / (u_norm + eps)
return ISl, IScx, ISr, ISb, IScy, ISt
def indicators_diag_sd3(self, u):
u_norm = np.sqrt(self.dx * self.dy) * np.linalg.norm(u[j0, j0])
pl0, pl1, pr0, pr1, pcx0, pcx1, pcx2 = pdx_coefs(u)
dISl = pl1 ** 2 / (u_norm + eps)
dIScx = 1.0 / (u_norm + eps) * ((13.0 / 3.0) * pcx2 ** 2 + pcx1 ** 2)
dISr = pr1 ** 2 / (u_norm + eps)
pb0, pb1, pt0, pt1, pcy0, pcy1, pcy2 = pdy_coefs(u)
dISb = pb1 ** 2 / (u_norm + eps)
dIScy = 1.0 / (u_norm + eps) * ((13.0 / 3.0) * pcy2 ** 2 + pcy1 ** 2)
dISt = pt1 ** 2 / (u_norm + eps)
return dISl, dIScx, dISr, dISb, dIScy, dISt
# reconstruction: reconstruction_2d_sd3, reconstruction_diag_2d_sd3
def reconstruction_sd3(self, u, ISl, IScx, ISr, ISb, IScy, ISt):
u_N, u_S, u_E, u_W = np.ones((4,) + u.shape)
cl = 0.25
ccx = 0.5
cr = 0.25
cb = 0.25
ccy = 0.5
ct = 0.25
pl0, pl1, pr0, pr1, pcx0, pcx1, pcx2 = px_coefs(u)
alpl = cl / ((eps + ISl) ** 2)
alpcx = ccx / ((eps + IScx) ** 2)
alpr = cr / ((eps + ISr) ** 2)
alp_sum = alpl + alpcx + alpr
wl = alpl / alp_sum
wcx = alpcx / alp_sum
wr = alpr / alp_sum
pb0, pb1, pt0, pt1, pcy0, pcy1, pcy2 = py_coefs(u)
alpb = cb / ((eps + ISb) ** 2)
alpcy = ccy / ((eps + IScy) ** 2)
alpt = ct / ((eps + ISt) ** 2)
alp_sum = alpb + alpcy + alpt
wb = alpb / alp_sum
wcy = alpcy / alp_sum
wt = alpt / alp_sum
u_N[j0, j0] = (
wb * (pb0 + 0.5 * pb1)
+ wcy * (pcy0 + 0.5 * pcy1 + 0.25 * pcy2)
+ wt * (pt0 + 0.5 * pt1)
)
u_S[j0, j0] = (
wb * (pb0 - 0.5 * pb1)
+ wcy * (pcy0 - 0.5 * pcy1 + 0.25 * pcy2)
+ wt * (pt0 - 0.5 * pt1)
)
u_E[j0, j0] = (
wl * (pl0 + 0.5 * pl1)
+ wcx * (pcx0 + 0.5 * pcx1 + 0.25 * pcx2)
+ wr * (pr0 + 0.5 * pr1)
)
u_W[j0, j0] = (
wl * (pl0 - 0.5 * pl1)
+ wcx * (pcx0 - 0.5 * pcx1 + 0.25 * pcx2)
+ wr * (pr0 - 0.5 * pr1)
)
return u_N, u_S, u_E, u_W
def reconstruction_diag_sd3(self, u, dISl, dIScx, dISr, dISb, dIScy, dISt):
u_NE, u_SE, u_NW, u_SW = np.ones((4,) + u.shape)
cl = 0.25
ccx = 0.5
cr = 0.25
cb = 0.25
ccy = 0.5
ct = 0.25
pl0, pl1, pr0, pr1, pcx0, pcx1, pcx2 = pdx_coefs(u)
alpl = cl / (eps + dISl) ** 2
alpcx = ccx / (eps + dIScx) ** 2
alpr = cr / (eps + dISr) ** 2
alp_sum = alpl + alpcx + alpr
wl = alpl / alp_sum
wcx = alpcx / alp_sum
wr = alpr / alp_sum
pb0, pb1, pt0, pt1, pcy0, pcy1, pcy2 = pdy_coefs(u)
alpb = cb / (eps + dISb) ** 2
alpcy = ccy / (eps + dIScy) ** 2
alpt = ct / (eps + dISt) ** 2
alp_sum = alpb + alpcy + alpt
wb = alpb / alp_sum
wcy = alpcy / alp_sum
wt = alpt / alp_sum
u_NW[j0, j0] = (
wb * (pb0 + 0.5 * pb1)
+ wcy * (pcy0 + 0.5 * pcy1 + 0.25 * pcy2)
+ wt * (pt0 + 0.5 * pt1)
)
u_SE[j0, j0] = (
wb * (pb0 - 0.5 * pb1)
+ wcy * (pcy0 - 0.5 * pcy1 + 0.25 * pcy2)
+ wt * (pt0 - 0.5 * pt1)
)
u_NE[j0, j0] = (
wl * (pl0 + 0.5 * pl1)
+ wcx * (pcx0 + 0.5 * pcx1 + 0.25 * pcx2)
+ wr * (pr0 + 0.5 * pr1)
)
u_SW[j0, j0] = (
wl * (pl0 - 0.5 * pl1)
+ wcx * (pcx0 - 0.5 * pcx1 + 0.25 * pcx2)
+ wr * (pr0 - 0.5 * pr1)
)
return u_NW, u_SE, u_NE, u_SW
# numerical fluxes: Hx_flux_2d_sd3, Hy_flux_2d_sd3, c_flux_2d_sd3
def Hx_flux_sd3(self, u_NW, u_W, u_SW, u_NE, u_E, u_SE):
a = np.maximum(self.spectral_radius_x(u_E), self.spectral_radius_x(u_W))
f_E = self.flux_x(u_E)
f_W = self.flux_x(u_W)
f_NE = self.flux_x(u_NE)
f_NW = self.flux_x(u_NW)
f_SE = self.flux_x(u_SE)
f_SW = self.flux_x(u_SW)
Hx = (
1.0
/ 12.0
* (
(f_NW + f_NE + 4.0 * (f_W + f_E) + f_SW + f_SE)
- a * (u_NW - u_NE + 4.0 * (u_W - u_E) + u_SW - u_SE)
)
)
return Hx
def Hy_flux_sd3(self, u_SW, u_S, u_SE, u_NE, u_N, u_NW):
b = np.maximum(self.spectral_radius_y(u_N), self.spectral_radius_y(u_S))
g_N = self.flux_y(u_N)
g_S = self.flux_y(u_S)
g_NE = self.flux_y(u_NE)
g_NW = self.flux_y(u_NW)
g_SE = self.flux_y(u_SE)
g_SW = self.flux_y(u_SW)
Hy = (
1.0
/ 12.0
* (
(g_SW + g_NW + 4.0 * (g_S + g_N) + g_SE + g_NE)
- b * (u_SW - u_NW + 4.0 * (u_S - u_N) + u_SE - u_NE)
)
)
return Hy
def c_flux_sd3(self, u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW):
Hx_fluxm = self.Hx_flux_sd3(
u_NW[j0, j0],
u_W[j0, j0],
u_SW[j0, j0],
u_NE[jm, j0],
u_E[jm, j0],
u_SE[jm, j0],
)
Hx_fluxp = self.Hx_flux_sd3(
u_NW[jp, j0],
u_W[jp, j0],
u_SW[jp, j0],
u_NE[j0, j0],
u_E[j0, j0],
u_SE[j0, j0],
)
Hy_fluxm = self.Hy_flux_sd3(
u_SW[j0, j0],
u_S[j0, j0],
u_SE[j0, j0],
u_NE[j0, jm],
u_N[j0, jm],
u_NW[j0, jm],
)
Hy_fluxp = self.Hy_flux_sd3(
u_SW[j0, jp],
u_S[j0, jp],
u_SE[j0, jp],
u_NE[j0, j0],
u_N[j0, j0],
u_NW[j0, j0],
)
return -self.dt / self.dx * (Hx_fluxp - Hx_fluxm) - self.dt / self.dy * (
Hy_fluxp - Hy_fluxm
)
# final scheme sd3_2d
def sd3(self, u):
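        """Semi-discrete third-order central scheme with CWENO-style
        reconstruction; the three-stage update below is the Shu-Osher
        SSP-RK3 integrator written in incremental form."""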
self.boundary_conditions(u)
ISl, IScx, ISr, ISb, IScy, ISt = self.indicators_sd3(u)
u_N, u_S, u_E, u_W = self.reconstruction_sd3(u, ISl, IScx, ISr, ISb, IScy, ISt)
dISl, dIScx, dISr, dISb, dIScy, dISt = self.indicators_diag_sd3(u)
u_NW, u_SE, u_NE, u_SW = self.reconstruction_diag_sd3(
u, dISl, dIScx, dISr, dISb, dIScy, dISt
)
list(
map(self.boundary_conditions, [u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW])
)
C0 = self.c_flux_sd3(u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW)
u[j0, j0] += C0
self.boundary_conditions(u)
u_N, u_S, u_E, u_W = self.reconstruction_sd3(u, ISl, IScx, ISr, ISb, IScy, ISt)
u_NW, u_SE, u_NE, u_SW = self.reconstruction_diag_sd3(
u, dISl, dIScx, dISr, dISb, dIScy, dISt
)
list(
map(self.boundary_conditions, [u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW])
)
C1 = self.c_flux_sd3(u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW)
u[j0, j0] += 0.25 * (C1 - 3.0 * C0)
self.boundary_conditions(u)
u_N, u_S, u_E, u_W = self.reconstruction_sd3(u, ISl, IScx, ISr, ISb, IScy, ISt)
u_NW, u_SE, u_NE, u_SW = self.reconstruction_diag_sd3(
u, dISl, dIScx, dISr, dISb, dIScy, dISt
)
list(
map(self.boundary_conditions, [u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW])
)
C2 = self.c_flux_sd3(u_N, u_S, u_E, u_W, u_NE, u_SE, u_SW, u_NW)
u[j0, j0] += +1.0 / 12.0 * (8.0 * C2 - C1 - C0)
self.boundary_conditions(u)
return u
def set_dt(self):
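        """Time step from the CFL condition using the maximal local speeds:
        dt = cfl / sqrt((max_rx / dx)^2 + (max_ry / dy)^2)."""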
r_max_x = np.max(self.spectral_radius_x(self.u))
r_max_y = np.max(self.spectral_radius_y(self.u))
dt = self.cfl / np.sqrt((r_max_x / self.dx) ** 2 + (r_max_y / self.dy) ** 2)
return dt
| 32.791962 | 88 | 0.443443 | 2,301 | 13,871 | 2.429379 | 0.066493 | 0.021109 | 0.013953 | 0.015742 | 0.709302 | 0.647585 | 0.59356 | 0.54186 | 0.523971 | 0.52093 | 0 | 0.087934 | 0.394132 | 13,871 | 422 | 89 | 32.869668 | 0.577225 | 0.021195 | 0 | 0.474576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045198 | false | 0 | 0.002825 | 0 | 0.09887 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a21a620a2dc472c716f0c4a791cee689f1e97aaa | 942 | py | Python | tests/unit/cluster/test_cluster.py | jwillis0720/sadie | d289ae68f06f5698ee40ffc1757e1b8aa85f1175 | [
"MIT"
] | 9 | 2020-12-22T19:14:01.000Z | 2022-03-17T04:34:06.000Z | tests/unit/cluster/test_cluster.py | jwillis0720/sadie | d289ae68f06f5698ee40ffc1757e1b8aa85f1175 | [
"MIT"
] | 32 | 2020-12-28T07:46:44.000Z | 2022-03-31T01:25:01.000Z | tests/unit/cluster/test_cluster.py | jwillis0720/sadie | d289ae68f06f5698ee40ffc1757e1b8aa85f1175 | [
"MIT"
] | 2 | 2021-07-30T16:44:46.000Z | 2022-01-12T20:15:17.000Z | from sadie.cluster import Cluster
from sadie.airr import AirrTable, LinkedAirrTable
def test_cluster(heavy_catnap_airrtable, light_catnap_airrtable):
for table in [heavy_catnap_airrtable, light_catnap_airrtable]:
cluster = Cluster(table)
clustered_df = cluster.cluster(10)
assert "cluster" in clustered_df.columns
assert isinstance(clustered_df, AirrTable)
linked = LinkedAirrTable(
heavy_catnap_airrtable.merge(light_catnap_airrtable, on="cellid", suffixes=["_heavy", "_light"]),
key_column="cellid",
)
cluster = Cluster(
linked,
groupby=["v_call_top_heavy", "v_call_top_light"],
lookup=["cdr1_aa_heavy", "cdr2_aa_heavy", "cdr3_aa_heavy", "cdr1_aa_light", "cdr2_aa_light", "cdr3_aa_light"],
)
cluster_df_linked = cluster.cluster(10)
assert isinstance(cluster_df_linked, LinkedAirrTable)
assert "cluster" in cluster_df_linked.columns
| 37.68 | 118 | 0.721868 | 115 | 942 | 5.53913 | 0.321739 | 0.141287 | 0.094192 | 0.078493 | 0.125589 | 0.125589 | 0 | 0 | 0 | 0 | 0 | 0.012953 | 0.180467 | 942 | 24 | 119 | 39.25 | 0.812176 | 0 | 0 | 0 | 0 | 0 | 0.157113 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.05 | false | 0 | 0.1 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a21d4928f175025df7eb53ecd52295a8549f7c02 | 3,313 | py | Python | datasets/import_yelp.py | dallascard/proportions | f01502428333e45310654a36d26503612fe45234 | [
"Apache-2.0"
] | 1 | 2019-08-05T14:45:24.000Z | 2019-08-05T14:45:24.000Z | datasets/import_yelp.py | dallascard/proportions | f01502428333e45310654a36d26503612fe45234 | [
"Apache-2.0"
] | null | null | null | datasets/import_yelp.py | dallascard/proportions | f01502428333e45310654a36d26503612fe45234 | [
"Apache-2.0"
] | null | null | null | import os
import re
import sys
from collections import Counter
from optparse import OptionParser
import numpy as np
import pandas as pd
from util import file_handling as fh
def main():
usage = "%prog input_dir output_dir"
parser = OptionParser(usage=usage)
#parser.add_option('--keyword', dest='key', default=None,
# help='Keyword argument: default=%default')
#parser.add_option('--boolarg', action="store_true", dest="boolarg", default=False,
# help='Keyword argument: default=%default')
(options, args) = parser.parse_args()
input_dir = args[0]
output_dir = args[1]
if not os.path.exists(output_dir):
sys.exit("Error: Output directory does not exist")
city_lookup = dict()
print("Reading in business data")
lines = fh.read_jsonlist(os.path.join(input_dir, 'business.json'))
for line in lines:
city = line['city']
business_id = line['business_id']
city_lookup[business_id] = city
city_counts = Counter()
print("Reading in review data")
lines = fh.read_jsonlist(os.path.join(input_dir, 'review.json'))
pairs = [('Las Vegas', 'Phoenix'), ('Toronto', 'Scottsdale'), ('Charlotte', 'Pittsburgh'), ('Tempe', 'Henderson')]
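    # each city pair defines a binary classification task: reviews from the
    # first city get the one-hot label [1, 0], from the second city [0, 1]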
for pair in pairs:
text_lines = []
labels = []
years = []
year_counts = Counter()
count = 0
city1, city2 = pair
for i, line in enumerate(lines):
if i % 100000 == 0:
print(i, count)
review_id = line['review_id']
text = line['text']
date = line['date']
year = date.split('-')[0]
funny = int(line['funny'])
useful = int(line['useful'])
cool = int(line['cool'])
business_id = line['business_id']
if business_id in city_lookup:
city = city_lookup[business_id]
city_counts.update([city])
label = None
if city == city1:
label = [1, 0]
elif city == city2:
label = [0, 1]
if label is not None:
text_lines.append({'text': text, 'city': city, 'year': year, 'id': count, 'review_id': review_id, 'label': label, 'funny': funny, 'useful': useful, 'cool': cool})
labels.append(label)
years.append(year)
year_counts.update([year])
count += 1
n_reviews = len(text_lines)
print(pair)
print("Found {:d} reviews".format(n_reviews))
        name = '_'.join([re.sub(r'\s', '_', city) for city in pair])
fh.write_jsonlist(text_lines, os.path.join(output_dir, name + '.jsonlist'))
labels_df = pd.DataFrame(np.vstack(labels), index=np.arange(n_reviews), columns=[city1, city2])
labels_df.to_csv(os.path.join(output_dir, name + '.labels.csv'))
years_df = pd.DataFrame(years, index=np.arange(n_reviews), columns=['year'])
years_df.to_csv(os.path.join(output_dir, name + '.years.csv'))
print("Year counts")
keys = list(year_counts.keys())
keys.sort()
for k in keys:
print(k, year_counts[k])
if __name__ == '__main__':
main()
| 33.806122 | 182 | 0.561425 | 403 | 3,313 | 4.459057 | 0.312655 | 0.038954 | 0.027824 | 0.026711 | 0.213133 | 0.122983 | 0.079021 | 0.079021 | 0.079021 | 0.045632 | 0 | 0.009507 | 0.301539 | 3,313 | 97 | 183 | 34.154639 | 0.76707 | 0.077875 | 0 | 0.026667 | 0 | 0 | 0.123566 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013333 | false | 0 | 0.106667 | 0 | 0.12 | 0.093333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a2288fb94c080ace0de40f0b975efab9344f4528 | 1,777 | py | Python | content.py | bguenthe/mpwebswitch | d598fc0ff74a5fef0535fa8ccf45029add2fbb16 | [
"MIT"
] | null | null | null | content.py | bguenthe/mpwebswitch | d598fc0ff74a5fef0535fa8ccf45029add2fbb16 | [
"MIT"
] | null | null | null | content.py | bguenthe/mpwebswitch | d598fc0ff74a5fef0535fa8ccf45029add2fbb16 | [
"MIT"
] | null | null | null | # Micropython Http Server
# Erni Tron ernitron@gmail.com
# Copyright (c) 2016
# Content Callback functions.
# They should receive parameters and return an HTML-formatted string
# By convention they start with cb_
import gc
import time
from config import save_config, set_config, get_config
# Content Functions
def cb_index(title):
    try:
        with open('index.txt', 'r') as f:
            return f.readlines()
    except OSError:
        return []
def cb_status():
uptime = time.time()
import os
filesystem = os.listdir()
chipid = get_config('chipid')
macaddr = get_config('mac')
address = get_config('address')
return '<h2>Device %s</h2>' \
'<p>MacAddr: %s' \
'<p>Address: %s' \
'<p>Free Mem: %d (alloc %d)' \
'<p>Files: %s' \
'<p>Uptime: %d"</div>' % (chipid, macaddr, address, gc.mem_free(), gc.mem_alloc(), filesystem, uptime)
def cb_help():
    try:
        with open('help.txt', 'r') as f:
            return f.readlines()
    except OSError:
        return []
def cb_setplace(place):
set_config('place', place)
save_config()
return 'Place set to %s' % place
def cb_setparam(param, value):
if param == None:
return '<p>Set configuration parameter<form action="/conf">' \
'Param <input type="text" name="param"> ' \
'Value <input type="text" name="value"> ' \
'<input type="submit" value="Submit">' \
'</form></p></div>'
else:
set_config(param, value)
save_config()
return 'Param set to %s' % value
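# Illustrative request mapping (the actual routing lives in the server code):
# GET /conf?param=place&value=kitchen -> cb_setparam('place', 'kitchen')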
def cb_setwifi(ssid, pwd):
if len(ssid) < 3 or len(pwd) < 8:
return '<h2>WiFi too short, try again</h2>'
set_config('ssid', ssid)
set_config('pwd', pwd)
save_config()
return '<h2>WiFi set to %s %s</h2>' % (ssid, pwd)
| 25.753623 | 113 | 0.585819 | 238 | 1,777 | 4.281513 | 0.407563 | 0.029441 | 0.047105 | 0.013739 | 0.066732 | 0.066732 | 0.066732 | 0.066732 | 0.066732 | 0.066732 | 0 | 0.00916 | 0.262802 | 1,777 | 68 | 114 | 26.132353 | 0.768702 | 0.122116 | 0 | 0.152174 | 0 | 0 | 0.272552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.434783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf3e58c7c4f6441c3bd403bedd53cbdc604438db | 4,512 | py | Python | comparator/__init__.py | dchaplinsky/comparator | 664eee047debe654b0b9d74e725af787a5306516 | [
"MIT"
] | 5 | 2019-09-27T17:34:21.000Z | 2020-07-28T01:42:31.000Z | comparator/__init__.py | dchaplinsky/comparator | 664eee047debe654b0b9d74e725af787a5306516 | [
"MIT"
] | null | null | null | comparator/__init__.py | dchaplinsky/comparator | 664eee047debe654b0b9d74e725af787a5306516 | [
"MIT"
] | null | null | null | __version__ = "1.0.0"
__all__ = ["full_compare"]
import re
from functools import reduce
from itertools import permutations, islice, zip_longest
from operator import mul
from Levenshtein import jaro, jaro_winkler
def _smart_jaro(a, b, func=jaro):
if func(a[1:], b[1:]) > 0.99:
return True
if func(a, b[1:]) > 0.99:
return True
if func(a[1:], b) > 0.99:
return True
    chunk_distance = func(a, b)
if abs(len(a) - len(b)) >= 3:
chunk_distance -= 0.2
return chunk_distance
def _compare_two_names(
name1, name2, max_splits=7, straight_limit=0.70, smart_limit=0.96
):
straight_similarity = jaro(name1, name2)
if straight_similarity > smart_limit:
return True
if straight_similarity > straight_limit:
min_pair_distance = 1
for a, b in zip_longest(name1.split(" "), name2.split(" ")):
if a is not None and b is not None:
chunk_distance = _smart_jaro(a, b, func=jaro_winkler)
min_pair_distance = min(chunk_distance, min_pair_distance)
if min_pair_distance > 0.88:
return True
return False
def _normalize_name(s):
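    """Fold common spelling variants of Ukrainian/Russian names into one
    form: strip punctuation and apostrophes, collapse whitespace/hyphens,
    and map lookalike letters (Latin/Ukrainian i -> Cyrillic и, є -> е,
    drop the soft sign ь)."""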
return (
re.sub(r"\s+", " ", s.strip().replace("-", " "))
.replace(".", "")
.replace(",", "")
.replace('"', "")
.replace("'", "")
.replace("’", "")
.replace("є", "е")
.replace("i", "и")
.replace("і", "и")
.replace("ь", "")
.replace("'", "")
.replace('"', "")
.replace("`", "")
.replace("конст", "кост")
.replace("’", "")
.replace("ʼ", "")
)
def _slugify_name(s):
s = s.replace(" ", "")
return re.sub(r"\d+", "", s)
def _thorough_compare(name1, name2, max_splits=7):
splits = name2.split(" ")
limit = reduce(mul, range(1, max_splits + 1))
for opt in islice(permutations(splits), limit):
if _compare_two_names(name1, " ".join(opt)):
return True
return False
def full_compare(name1, name2):
name1 = _normalize_name(name1)
name2 = _normalize_name(name2)
slugified_name1 = _slugify_name(name1)
slugified_name2 = _slugify_name(name2)
if slugified_name1 == slugified_name2:
return True
if slugified_name1.startswith(slugified_name2) and len(slugified_name2) >= 10:
return True
if slugified_name2.startswith(slugified_name1) and len(slugified_name1) >= 10:
return True
if slugified_name1.endswith(slugified_name2) and len(slugified_name2) >= 10:
return True
if slugified_name2.endswith(slugified_name1) and len(slugified_name1) >= 10:
return True
if jaro(slugified_name1, slugified_name2) > 0.95:
return True
if jaro(slugified_name2, slugified_name1) > 0.95:
return True
if _compare_two_names(name1, name2):
return True
if _compare_two_names(name2, name1):
return True
return _thorough_compare(name1, name2) or _thorough_compare(name2, name1)
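# Example: full_compare("Иван Петров", "Петров Иван") is True, because
# _thorough_compare also tries word-order permutations of the second name.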
def test_file(csv_file, debug):
import csv
from veryprettytable import VeryPrettyTable
pt = VeryPrettyTable([" ", "Positive", "Negative"])
with open(csv_file, "r") as fp:
r = csv.DictReader(fp)
res = {True: {True: 0, False: 0}, False: {True: 0, False: 0}}
for l in r:
expected = l["ground truth"].lower() in ["true", "1", "on"]
predicted = full_compare(l["name1"], l["name2"])
if predicted != expected and debug:
print(predicted, expected, l["name1"], l["name2"])
res[predicted][expected] += 1
for predicted in [True, False]:
pt.add_row(
[
"Predicted positive" if predicted else "Predicted negative",
res[predicted][True],
res[predicted][False],
]
)
precision = res[True][True] / (res[True][True] + res[True][False])
recall = res[True][True] / (res[True][True] + res[False][True])
f1 = 2 * precision * recall / (precision + recall)
print(pt)
print("Precision: {:5.2f}".format(precision))
print("Recall: {:5.2f}".format(recall))
print("F1 score: {:5.2f}".format(f1))
if __name__ == "__main__":
import sys
if len(sys.argv) > 1:
test_file(sys.argv[1], len(sys.argv) > 2)
else:
print(
"Supply .csv file with ground truth data to calculate precision/recall/f1 metrics"
)
| 26.080925 | 94 | 0.585993 | 563 | 4,512 | 4.509769 | 0.227353 | 0.059078 | 0.051989 | 0.05514 | 0.270973 | 0.179204 | 0.123671 | 0.103978 | 0.103978 | 0.086648 | 0 | 0.035627 | 0.272163 | 4,512 | 172 | 95 | 26.232558 | 0.737515 | 0 | 0 | 0.188525 | 0 | 0 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057377 | false | 0 | 0.065574 | 0.008197 | 0.295082 | 0.04918 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf3e74e6f6ce3ba1a29bca66409ff4b07faabce7 | 6,134 | py | Python | matt/day4/2hap.py | brianfay/aoc-2021 | 08894c6b9e62777f93d2252896b8909b174b04e6 | [
"Unlicense"
] | 1 | 2021-12-08T21:35:55.000Z | 2021-12-08T21:35:55.000Z | matt/day4/2hap.py | brianfay/aoc-2021 | 08894c6b9e62777f93d2252896b8909b174b04e6 | [
"Unlicense"
] | 1 | 2022-01-08T02:45:09.000Z | 2022-01-08T02:45:09.000Z | matt/day4/2hap.py | brianfay/aoc-2021 | 08894c6b9e62777f93d2252896b8909b174b04e6 | [
"Unlicense"
] | null | null | null | # Parse input:
# line 1 = call_numbers separated by ,
# Matrices separated by blank row
# Store input:
# Bingo Cards: Dictionary of Dictionaries of lists (card - column - numbers)
# call_numbers: list
# function to mark numbers with 'x' as they are called
# function to check if card(s) won
# function to add all unmarked numbers on winning board (if card won)
# store unmarked numbers as dictionary of lists (card - numbers)
# sum unmarked numbers in each list
# multiply sum of each list by called number and determine highest score
import collections
number_list = []
cards = {}
numbers_called = 0
card_winners = []
with open('data.txt') as data:
# numbers to call
numbers = data.readline().strip().split(',')
for number in numbers:
number = int(number)
number_list.append(number)
# bingo cards
card_number = 0
for line in data:
if line == "\n":
card_number += 1
elif line != "\n":
# parse line
y = line.rstrip().split(" ")
while("" in y):
y.remove("")
if card_number not in cards:
cards[card_number] = {}
cards[card_number]['B'] = [int(y[0])]
cards[card_number]['I'] = [int(y[1])]
cards[card_number]['N'] = [int(y[2])]
cards[card_number]['G'] = [int(y[3])]
cards[card_number]['O'] = [int(y[4])]
elif card_number in cards:
cards[card_number]['B'].append(int(y[0]))
cards[card_number]['I'].append(int(y[1]))
cards[card_number]['N'].append(int(y[2]))
cards[card_number]['G'].append(int(y[3]))
cards[card_number]['O'].append(int(y[4]))
else:
"crymost"
# print(f'{cards[1]}\n{cards[2]}\n{cards[3]}')
# sample data testing
# number_list = [7,4,9,5,11,17,23,2,0,14,21,24,10,16,13,6,15,25,12,22,18,20,8,19,3,26,1]
# cardz = {
# 1: {
# 'B': [22,8,21,6,1],
# 'I': [13,2,9,10,12],
# 'N': [17,23,14,3,20],
# 'G': [11,4,16,18,15],
# 'O': [0,24,7,5,19]
# },
# 2: {
# 'B': [3,9,19,20,14],
# 'I': [15,18,8,11,21],
# 'N': [0,13,7,10,16],
# 'G': [2,17,25,24,12],
# 'O': [22,5,23,4,6]
# },
# 3: {
# 'B': [14,10,18,22,2],
# 'I': [21,16,8,11,0],
# 'N': [17,15,23,13,12],
# 'G': [24,9,26,6,3],
# 'O': [4,19,20,5,7]
# }
# }
# print(f'{cardz[1]}\n{cardz[2]}\n{cardz[3]}')
# exit()
def check_number(last_called):
for card in cards:
for x in range(5):
if cards[card]['B'][x] == last_called:
cards[card]['B'][x] = 'x'
elif cards[card]['I'][x] == last_called:
cards[card]['I'][x] = 'x'
elif cards[card]['N'][x] == last_called:
cards[card]['N'][x] = 'x'
elif cards[card]['G'][x] == last_called:
cards[card]['G'][x] = 'x'
elif cards[card]['O'][x] == last_called:
cards[card]['O'][x] = 'x'
def check_winner(last_called):
winning_scores = []
winners = False
for card in cards:
for y in range(5):
if card in card_winners:
break
elif card not in card_winners:
# check horizontal & vertical win for all cards
if ((cards[card]['B'][y] == "x" and cards[card]['I'][y] == 'x' and cards[card]['N'][y] == 'x' and cards[card]['G'][y] == 'x' and cards[card]['O'][y] == 'x')
or cards[card]['B'][0] == 'x' and cards[card]['B'][1] == 'x' and cards[card]['B'][2] == 'x' and cards[card]['B'][3] == 'x' and cards[card]['B'][4] == 'x'
or cards[card]['I'][0] == 'x' and cards[card]['I'][1] == 'x' and cards[card]['I'][2] == 'x' and cards[card]['I'][3] == 'x' and cards[card]['I'][4] == 'x'
or cards[card]['N'][0] == 'x' and cards[card]['N'][1] == 'x' and cards[card]['N'][2] == 'x' and cards[card]['N'][3] == 'x' and cards[card]['N'][4] == 'x'
or cards[card]['G'][0] == 'x' and cards[card]['G'][1] == 'x' and cards[card]['G'][2] == 'x' and cards[card]['G'][3] == 'x' and cards[card]['G'][4] == 'x'
or cards[card]['O'][0] == 'x' and cards[card]['O'][1] == 'x' and cards[card]['O'][2] == 'x' and cards[card]['O'][3] == 'x' and cards[card]['O'][4] == 'x'):
winners = True
# add all unmarked numbers for this card
add_number = 0
for z in range(5):
if cards[card]['B'][z] != 'x':
add_number += cards[card]['B'][z]
if cards[card]['I'][z] != 'x':
add_number += cards[card]['I'][z]
if cards[card]['N'][z] != 'x':
add_number += cards[card]['N'][z]
if cards[card]['G'][z] != 'x':
add_number += cards[card]['G'][z]
if cards[card]['O'][z] != 'x':
add_number += cards[card]['O'][z]
winning_scores.append(add_number)
card_winners.append(card)
# After all cards checked
if winners:
final = compare_scores(winning_scores, last_called)
return final
def compare_scores(winning_scores, last_called):
    # keep the lowest unmarked-number sum among boards that won on this call
    final_winning_number = winning_scores[0]
    for score in winning_scores[1:]:
        if score < final_winning_number:
            final_winning_number = score
    return final_winning_number * last_called
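# call the numbers in order; a card needs at least five marks to win, so
# checking starts after five calls, and each new winner's score is printed,
# making the last line printed the score of the last card to win (part 2)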
for last_called in number_list:
numbers_called += 1
check_number(last_called)
if numbers_called > 4:
winning = check_winner(last_called)
if winning:
print(winning)
| 34.077778 | 175 | 0.469188 | 847 | 6,134 | 3.312869 | 0.144038 | 0.19886 | 0.076978 | 0.11119 | 0.367071 | 0.135424 | 0.074127 | 0 | 0 | 0 | 0 | 0.054703 | 0.341376 | 6,134 | 179 | 176 | 34.268156 | 0.639851 | 0.217639 | 0 | 0.020833 | 0 | 0 | 0.025463 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.010417 | 0 | 0.0625 | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf3eb6b868a5e43fd91faea2606c647531478b26 | 3,120 | py | Python | dliplib/graphs/lodopab_200_performance.py | oterobaguer/ct-dip-benchmark | 0539c284c94089ed86421ea0892cd68aa1d0575a | [
"Apache-2.0"
] | null | null | null | dliplib/graphs/lodopab_200_performance.py | oterobaguer/ct-dip-benchmark | 0539c284c94089ed86421ea0892cd68aa1d0575a | [
"Apache-2.0"
] | null | null | null | dliplib/graphs/lodopab_200_performance.py | oterobaguer/ct-dip-benchmark | 0539c284c94089ed86421ea0892cd68aa1d0575a | [
"Apache-2.0"
] | null | null | null | import matplotlib.pyplot as plt
import numpy as np
from dliplib.utils.helper import set_use_latex
plt.style.use('seaborn-whitegrid')
set_use_latex()
sizes = [0.0001, 0.001, 0.01, 0.10, 1.00]
ticks = range(len(sizes))
# performance on the different data sizes
learnedgd = {'PSNR [db]': [29.87, 31.28, 31.83, 32.7, 32.7],
'SSIM': [0.7151, 0.7473, 0.7602, 0.7802, 0.7802]}
learnedpd = {'PSNR [db]': [29.65, 32.48, 33.21, 33.53, 33.64],
'SSIM': [0.7343, 0.7771, 0.7929, 0.799, 0.8020]}
fbpunet = {'PSNR [db]': [29.33, 31.58, 32.6, 33.19, 33.55],
'SSIM': [0.7143, 0.7616, 0.7818, 0.7931, 0.7994]}
iradonmap = {'PSNR [db]': [14.61, 18.77, 24.63, 31.27, 32.45],
'SSIM': [0.3529, 0.4492, 0.6031, 0.7569, 0.7781]}
tv = {'PSNR [db]': 30.89,
'SSIM': 0.7563}
fbp = {'PSNR [db]': 28.38,
'SSIM': 0.6492}
diptv = {'PSNR [db]': 32.51,
'SSIM': 0.7803}
learnedpd_dip = {'PSNR [db]': [32.52, 32.78, 33.21],
'SSIM': [0.7822, 0.7821, 0.7929]}
fig, ax = plt.subplots(1, 2, figsize=(8, 4.0))
for i, measure in enumerate(['PSNR [db]', 'SSIM']):
ax[i].axhline(fbp[measure], ticks[0], ticks[-1], label='FBP', color='tab:gray',
linestyle=':', linewidth=1.5)
ax[i].axhline(tv[measure], ticks[0], ticks[-1], label='TV', color='tab:orange',
linestyle='--', linewidth=1.5)
ax[i].axhline(diptv[measure], ticks[0], ticks[-1], label='DIP+TV', color='tab:brown',
linestyle='-.', linewidth=1.5)
ax[i].plot(ticks, iradonmap[measure], label='iRadonMap', color='tab:green',
linewidth=1.5, marker='o')
ax[i].plot(ticks, fbpunet[measure], label='FBP+U-Net', color='tab:blue',
linewidth=1.5, marker='o')
ax[i].plot(ticks, learnedgd[measure], label='LearnedGD', color='tab:red',
linewidth=1.5, marker='o')
ax[i].plot(ticks, learnedpd[measure], label='LearnedPD', color='tab:purple',
linewidth=1.5, marker='o')
ax[i].plot(ticks[:3], learnedpd_dip[measure], label='LearnedPD + DIP', color='tab:purple',
linewidth=1.5, marker='o', markerfacecolor='white')
ax[i].set_xticks(ticks)
ax[i].set_xticklabels(np.array(sizes) * 100, rotation=45)
    ax[i].set_xlabel(r'Data size [$\%$]')
ax[i].set_ylabel(measure)
ax[i].set_title('LoDoPaB (200) - Test error')
ax[0].set_ylim([24.0, 35.0])
ax[1].set_ylim([0.58, 0.82])
for i in range(2):
box = ax[i].get_position()
ax[i].set_position([box.x0, box.y0, box.width, box.height * 0.6])
h, l = ax[0].get_legend_handles_labels()
ax[0].legend([h[3], h[4], h[5], h[6]], [l[3], l[4], l[5], l[6]], bbox_to_anchor=(0.0, -0.45, 1., 0.5), loc=3,
ncol=2, mode="expand", frameon=False)
h, l = ax[1].get_legend_handles_labels()
ax[1].legend([h[2], h[7], h[0], h[1]], [l[2], l[7], l[0], l[1]], bbox_to_anchor=(0.0, -0.45, 1., 0.5), loc=3,
ncol=2, mode="expand", frameon=False)
plt.tight_layout()
plt.savefig('lodopab-200-performance.pdf')
plt.show()
| 33.548387 | 109 | 0.575 | 530 | 3,120 | 3.332075 | 0.330189 | 0.025481 | 0.04983 | 0.033975 | 0.306908 | 0.265006 | 0.211212 | 0.151755 | 0.125708 | 0.057758 | 0 | 0.141097 | 0.19359 | 3,120 | 92 | 110 | 33.913043 | 0.560811 | 0.020833 | 0 | 0.145161 | 0 | 0 | 0.118938 | 0.008847 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.048387 | 0 | 0.048387 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf3ef1247bc316b19c823cb48904b6ec5fce2e57 | 1,448 | py | Python | 16/Part2.py | iammanish17/AOC2020 | afbed26492b8cd0025e6c6b196e1a944e0a84109 | [
"MIT"
] | 1 | 2020-12-02T11:34:42.000Z | 2020-12-02T11:34:42.000Z | 16/Part2.py | iammanish17/AOC2020 | afbed26492b8cd0025e6c6b196e1a944e0a84109 | [
"MIT"
] | null | null | null | 16/Part2.py | iammanish17/AOC2020 | afbed26492b8cd0025e6c6b196e1a944e0a84109 | [
"MIT"
] | 1 | 2020-12-23T06:56:51.000Z | 2020-12-23T06:56:51.000Z | s = open('input.txt','r').read()
s = [k for k in s.split("\n")]
di = {}
ok = False
ans = 0
f = [set() for i in range(20)]
nums = set()
cnt = 0
for line in s:
for each in line.split(" "):
if "-" in each:
x, y = each.split("-")
x, y = int(x), int(y)
if line.split(":")[0] not in di:
di[line.split(":")[0]] = set()
key = line.split(":")[0]
for i in range(x,y+1):
nums.add(i)
di[key].add(i)
for i in range(20):
f[i].add(key)
if "nearby tickets" in line:
ok = True
elif ok:
x = list(map(int, line.split(",")))
pos=True
for i in x:
if i not in nums:
pos=False
break
if not pos:continue
for i in range(20):
st = set()
for key in di:
if x[i] in di[key]:
st.add(key)
#print(st)
f[i] = f[i].intersection(st)
elif not ok and len(line.split(",")) == 20:
my = list(map(int, line.split(",")))
st = set()
ind = {}
for j in range(20):
for i in range(20):
f[i] = f[i].difference(st)
if len(f[i]) == 1:
for k in f[i]:
ind[k] = i
st.add(k)
break
ans = 1
for title in ind:
if "departure" in title:
ans *= my[ind[title]]
print(ans)
| 23.737705 | 47 | 0.410912 | 218 | 1,448 | 2.729358 | 0.243119 | 0.035294 | 0.060504 | 0.092437 | 0.157983 | 0.05042 | 0.05042 | 0 | 0 | 0 | 0 | 0.024096 | 0.426796 | 1,448 | 60 | 48 | 24.133333 | 0.692771 | 0.006215 | 0 | 0.12963 | 0 | 0 | 0.030598 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf42ac1b125502bb6adf579b40a034bc662330a7 | 1,640 | py | Python | classes/utils.py | sicko7947/Nike | aa028412977f4a730e4b1ad97c29e157a9304d02 | [
"MIT"
] | 1 | 2022-02-07T17:29:42.000Z | 2022-02-07T17:29:42.000Z | classes/utils.py | sicko7947/Nike | aa028412977f4a730e4b1ad97c29e157a9304d02 | [
"MIT"
] | null | null | null | classes/utils.py | sicko7947/Nike | aa028412977f4a730e4b1ad97c29e157a9304d02 | [
"MIT"
] | 1 | 2021-08-07T04:39:55.000Z | 2021-08-07T04:39:55.000Z | import datetime,threading,random
from termcolor import cprint, colored
import colorama
class Logger():
def __init__(self):
colorama.init()
def __timestamp(self):
now = str(datetime.datetime.now())
now = now.split(' ')[1]
threadname = threading.currentThread().getName()
threadname = str(threadname).replace('Thread', 'Task')
now = '[' + str(now) + ']' + '[' + str(threadname) + ']'
return now
def log(self, text):
print("{} {}".format(self.__timestamp(), text))
return
def success(self, text):
print("{} {}".format(self.__timestamp(), colored(text, "green")))
return
def warn(self, text):
print("{} {}".format(self.__timestamp(), colored(text, "yellow")))
return
def error(self, text):
print("{} {}".format(self.__timestamp(), colored(text, "red")))
return
def status(self, text):
print("{} {}".format(self.__timestamp(), colored(text, "magenta")))
return
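# Example usage (illustrative): Logger().success("Checkout complete") prints
# a timestamped, task-tagged line in green.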
class ProxyManager():
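    """Loads proxies from proxy.txt, one per line, formatted either as
    host:port or host:port:user:pass."""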
def __init__(self):
self.proxies = []
with open('proxy.txt') as f:
for item in f.read().splitlines():
if not item == '':
item = item.split(":")
if len(item) == 4:
proxyDict = {
'http': 'http://{}:{}@{}:{}'.format(item[2], item[3], item[0], item[1]),
'https': 'https://{}:{}@{}:{}'.format(item[2], item[3], item[0], item[1])
}
self.proxies.append(proxyDict)
elif len(item) == 2:
proxyDict = {
'http': 'http://{}:{}'.format(item[0], item[1]),
'https': 'https://{}:{}'.format(item[0], item[1])
}
self.proxies.append(proxyDict)
else:
pass
def get_proxy(self):
return random.choice(self.proxies)
| 25.230769 | 80 | 0.589024 | 197 | 1,640 | 4.796954 | 0.345178 | 0.042328 | 0.068783 | 0.100529 | 0.416931 | 0.374603 | 0.340741 | 0.340741 | 0.055026 | 0 | 0 | 0.01127 | 0.188415 | 1,640 | 64 | 81 | 25.625 | 0.698723 | 0 | 0 | 0.211538 | 0 | 0 | 0.092073 | 0 | 0.019231 | 0 | 0 | 0 | 0 | 1 | 0.173077 | false | 0.019231 | 0.057692 | 0.019231 | 0.403846 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf43755e187af099cccae06e2ad0a561bdda3853 | 7,486 | py | Python | old_code/soil_heat_model.py | miksch/atm233-midterm-winter2020 | 0b7f33deb398f268efdcb77c7e7ddbc2ab2b2704 | [
"MIT"
] | null | null | null | old_code/soil_heat_model.py | miksch/atm233-midterm-winter2020 | 0b7f33deb398f268efdcb77c7e7ddbc2ab2b2704 | [
"MIT"
] | null | null | null | old_code/soil_heat_model.py | miksch/atm233-midterm-winter2020 | 0b7f33deb398f268efdcb77c7e7ddbc2ab2b2704 | [
"MIT"
] | 1 | 2020-03-03T18:28:21.000Z | 2020-03-03T18:28:21.000Z | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
from scipy import signal
import timeit
from numba import jit
import datashader as ds
import xarray as xr
from datashader import transfer_functions as tf
#1. Define the boundary conditions
# Needed: surface temperature forcing (sine wave at the surface), temperature profile (initial conditions), bottom boundary
# condition, time step, grid size, thermal conductivity, n (number of vertical grid cells)
"""
# Define set up of the parameters of the model
n = 1500 # number of vertical grids (includes top and bottom)
n_coeffs = n-2 # number of coefficients for the tridiag solver
dz = 0.001266667 # vertical grid spacing (in meters)
dt = 1 # time step in seconds
depth = dz * n # the depth of the soil modeled
kap = 8e-7 # soil diffusivity (m2 s-1)
la = (dt*kap)/(dz**2) # la as defined with dt*kappa/dz^2 (unitless)
time_steps = 84600*7 # number of time steps to calculate
T_bar = 20. # Average temperature of bottom layer
A = 10. # Amplitude of sine wave for surface layer
"""
"""
## Set of parameters we used with a decent looking output
## (uncomment by taking away triple quotes)
# Define set up of the parameters of the model
n = 30 # number of vertical grids (includes top and bottom)
n_coeffs = n-2 # number of coefficients for the tridiag solver
dz = 0.05 # vertical grid spacing (in meters)
dt = 3600 # time step in seconds
depth = dz * n # the depth of the soil modeled
kap = 8e-7 # soil diffusivity (m2 s-1)
la = (dt*kap)/(dz**2) # la as defined with dt*kappa/dz^2 (unitless)
time_steps = 200 # number of time steps to calculate
T_bar = 20. # Average temperature of bottom layer
A = 10. # Amplitude of sine wave for surface layer
"""
## Set of parameters we used with a decent looking output
## (uncomment by taking away triple quotes)
# Define set up of the parameters of the model
n = 150 # number of vertical grids (includes top and bottom)
n_coeffs = n-2 # number of coefficients for the tridiag solver
dz = 0.01 # vertical grid spacing (in meters)
dt = 1800 # time step in seconds
depth = dz * n # the depth of the soil modeled
kap = 8e-7 # soil diffusivity (m2 s-1)
la = (dt*kap)/(dz**2) # la as defined with dt*kappa/dz^2 (unitless)
time_steps = 400 # number of time steps to calculate
T_bar = 20. # Average temperature of bottom layer
A = 10. # Amplitude of sine wave for surface layer
print(f"la: {la}")
print(f"dt/(dz^2): {dt / (dz**2)}")
## Tri-Diagonal Matrix Algorithm (a.k.a. Thomas algorithm) solver
# https://gist.github.com/cbellei/8ab3ab8551b8dfc8b081c518ccd9ada9
# Modified to take in coefficient array
def TDMAsolver_no_vec(coeffs):
    """
    TDMA solver, a b c d can be NumPy array type or Python list type.
    refer to http://en.wikipedia.org/wiki/Tridiagonal_matrix_algorithm
    and to http://www.cfd-online.com/Wiki/Tridiagonal_matrix_algorithm_-_TDMA_(Thomas_algorithm)
    """
    a = coeffs[1:, 0]
    b = coeffs[:, 1]
    c = coeffs[:-1, 2]
    d = coeffs[:, 3]
    nf = len(d)  # number of equations
    ac, bc, cc, dc = map(np.array, (a, b, c, d))  # copy arrays
    # forward elimination
    for it in range(1, nf):
        mc = ac[it-1]/bc[it-1]
        bc[it] = bc[it] - mc*cc[it-1]
        dc[it] = dc[it] - mc*dc[it-1]
    xc = bc
    xc[-1] = dc[-1]/bc[-1]
    # back substitution must run all the way down to row 0
    # (the original `range(nf-2, 1, -1)` left rows 0 and 1 unsolved)
    for il in range(nf-2, -1, -1):
        xc[il] = (dc[il]-cc[il]*xc[il+1])/bc[il]
    return xc
# https://stackoverflow.com/questions/8733015/tridiagonal-matrix-algorithm-tdma-aka-thomas-algorithm-using-python-with-nump
@jit
def TDMAsolver(coeff):
    # Set up diagonal coefficients
    a = coeff[1:, 0]
    b = coeff[:, 1]
    c = coeff[:-1, 2]
    d = coeff[:, 3]
    n = len(d)
    w = np.zeros(n-1)
    g = np.zeros(n)
    p = np.zeros(n)
    w[0] = c[0]/b[0]
    g[0] = d[0]/b[0]
    for i in range(1, n-1):
        w[i] = c[i]/(b[i] - a[i-1]*w[i-1])
    for i in range(1, n):
        g[i] = (d[i] - a[i-1]*g[i-1])/(b[i] - a[i-1]*w[i-1])
    p[n-1] = g[n-1]
    for i in range(n-1, 0, -1):
        p[i-1] = g[i-1] - w[i-1]*p[i]
    return p
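# Quick sanity check (a sketch, not part of the model run): the solver should
# match a dense solve for a small tridiagonal system. Uncomment to verify:
# _coeffs = np.array([[0, 2, -.5, 35], [-.5, 2, -.5, 20], [-.5, 2, 0, 30]])
# _A = np.array([[2, -.5, 0], [-.5, 2, -.5], [0, -.5, 2]])
# assert np.allclose(TDMAsolver(_coeffs), np.linalg.solve(_A, _coeffs[:, 3]))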
## Define boundary conditions
# Initialize temperature, time, and depth arrays
Temps = np.full((n, time_steps), np.nan)
tao = np.array([t * dt for t in np.arange(time_steps)])
depths = np.array([-d * dz for d in np.arange(n)])
def temp_surface(tao, T_bar, A):
    """
    Calculate surface temperature for a set of times (tao)
    """
    omega = (2 * np.pi) / 86400
    T = T_bar + A * np.sin(omega * tao)  # + np.pi/2 for time offset
    return T
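# Worked check on the forcing (numbers follow from the active parameter set
# above): omega = 2*pi/86400 s^-1 for a diurnal wave, and the analytic
# damping depth is D = sqrt(2*kap/omega) = sqrt(2*8e-7/7.27e-5) ~= 0.148 m,
# so the 1.5 m domain (n=150, dz=0.01) comfortably contains the signal.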
# Initialize boundary conditions in Temps array
Temps[0, :] = temp_surface(tao, T_bar, A) # Surface temperature
Temps[-1, :] = T_bar # Temperature at lower boundary
Temps[:, 0] = T_bar # Temperature at tau=0
print(Temps)
# Some initial tries of the tao=0 boundary condition
# Linear
# Temps[:, 0] = np.linspace(T_bar+10, T_bar, n) # Lowest depth = T_bar
# Diverging from tmax in center
# gauss = signal.gaussian(n, std=3)
# Temps[:,0] = gauss
# Coefficient matrix for tridiagonal solver
coeffs = np.full((n_coeffs, 4), 0.)
## 2. Finding the coefficients for a, b, c, d
for i, t in enumerate(tao[1:-1]):
    # Index in temperature array
    Temp_idx = i + 1
    # depth = 1
    coeffs[0, 1] = 1 + 2 * la
    coeffs[0, 2] = - la
    coeffs[0, 3] = Temps[1, Temp_idx - 1] + la * Temps[0, Temp_idx]
    # depth = bottom
    coeffs[-1, 0] = -la
    coeffs[-1, 1] = 1 + 2 * la
    coeffs[-1, 3] = Temps[-2, Temp_idx - 1] + la * Temps[-1, Temp_idx]
    # Loop through the interior nodes; coeffs row `depth` maps to Temps row
    # `depth + 1` (the original indexed Temps[depth, ...], one row off,
    # inconsistent with the boundary rows above)
    for depth in np.arange(coeffs.shape[0])[1:-1]:
        coeffs[depth, 0] = -la
        coeffs[depth, 1] = 1 + 2 * la
        coeffs[depth, 2] = -la
        coeffs[depth, 3] = Temps[depth + 1, Temp_idx - 1]
    # print(coeffs)
    Temps[1:-1, Temp_idx] = TDMAsolver(coeffs)
## Some initial tests to make sure the tridiag solver was working
# Tridiag solver from github
def test_tridiag(test_coeff, tridiag_func, print_output=False):
    v = tridiag_func(test_coeff)
    if print_output:
        print(f"Function: {str(tridiag_func)},\n Solution: {v}")


test_coeff = np.array([[0, 2, -.5, 35],
                       [-.5, 2, -.5, 20],
                       [-.5, 2, 0, 30]])
# number=10000 keeps the benchmark quick; timeit's default of 1,000,000
# repetitions would take minutes here
time_novec = timeit.timeit('test_tridiag(test_coeff, TDMAsolver_no_vec)',
                           'from __main__ import test_tridiag, test_coeff, TDMAsolver_no_vec',
                           number=10000)
time_vec = timeit.timeit('test_tridiag(test_coeff, TDMAsolver)',
                         'from __main__ import test_tridiag, test_coeff, TDMAsolver',
                         number=10000)
print(f"No vectorization: {time_novec},\n Vectorized: {time_vec}")
## Save output (in case of a large file)
# Create grid to plot on (time is in hours)
x, y = np.meshgrid(tao, depths)
print(x, y)
da = xr.DataArray(Temps, coords=[('depth', depths), ('tau', tao)]).to_dataset(name='temp')
da.to_netcdf(f'data/dt_{dt}_dz_{dz}_data.nc')
## Sample output plot
# NOTE: does not work for large (e.g. 1 billion) points, need to use
# a different plotting package like datashader
fig, (ax1, ax2) = plt.subplots(nrows=2, figsize=(10, 10))
# Plot temperatures
try:
    # temp_plt = ax.pcolormesh(x, y, Temps)
    # temp_plt = ax.contourf(x, y, Temps) # Contour plot
    temp_plt = ax1.pcolormesh(Temps[:30, :])
    # plt2 = ax2.pcolormesh(x, y, Temps)
    ax1.set_xlabel('Time [hr]')
    ax1.set_ylabel('Depth [m]')
    # fig.colorbar(temp_plt)
    plt.savefig(f"figures/dt_{dt}_{dz}_output_xy_subset_2.png", dpi=300)
except Exception as e:
    print(e)
# the dataset variable was saved as 'temp' above, so index it by that name
# (the original `da['Temps']` would raise a KeyError)
tf.shade(ds.Canvas(plot_height=400, plot_width=1200).raster(da['temp']))
| 31.720339 | 123 | 0.64841 | 1,260 | 7,486 | 3.778571 | 0.249206 | 0.018484 | 0.009452 | 0.021004 | 0.358328 | 0.332493 | 0.299307 | 0.279563 | 0.257719 | 0.257719 | 0 | 0.042792 | 0.213332 | 7,486 | 236 | 124 | 31.720339 | 0.765665 | 0.323137 | 0 | 0.018349 | 0 | 0 | 0.120268 | 0.037852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036697 | false | 0 | 0.110092 | 0 | 0.174312 | 0.082569 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf462c8fa1454512e16f250d30d5dc8b3cd688cc | 2,803 | py | Python | extensions/mute.py | meisme-dev/Gardenbot | a8c7b30b28aa4ad2dcba367eedfbd856f63f5eca | [
"MIT"
] | 5 | 2020-10-26T13:14:51.000Z | 2021-01-13T16:52:49.000Z | extensions/mute.py | meisme-dev/Gardenbot | a8c7b30b28aa4ad2dcba367eedfbd856f63f5eca | [
"MIT"
] | 1 | 2021-02-13T00:45:32.000Z | 2021-02-13T08:21:31.000Z | extensions/mute.py | meisme-dev/Gardenbot | a8c7b30b28aa4ad2dcba367eedfbd856f63f5eca | [
"MIT"
] | 5 | 2020-10-26T02:21:36.000Z | 2020-11-28T04:10:24.000Z | # MIT License
# Copyright (c) 2020 me is me
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#Define dependencies of the bot and import them
import discord
from discord.ext import commands
from discord.ext.commands import has_permissions
#Define the class and initialize the bot
class Moderation(commands.Cog):
    # the constructor must be named __init__; the original `_init_` is never
    # called by Python
    def __init__(self, client):
        self.client = client

    # Define the commands
    @commands.command()
    @has_permissions(manage_roles=True)
    async def mute(self, ctx, member: discord.Member, *, reason=None):
        role = discord.utils.get(ctx.guild.roles, name="Muted")
        perms = discord.PermissionOverwrite()
        perms.send_messages = False
        perms.read_messages = True
        if discord.utils.get(ctx.guild.roles, name="Muted"):
            await member.add_roles(role)
        else:
            role = await ctx.guild.create_role(name='Muted', permissions=discord.Permissions(0))
            for channel in ctx.guild.channels:
                await channel.set_permissions(role, overwrite=perms)
            await member.add_roles(role)
        embedVar = discord.Embed(title="Muted", description=f"{member.mention} was muted for {reason}.", color=0x35a64f)
        await ctx.message.delete()
        await ctx.send(embed=embedVar)

    @commands.command()
    @has_permissions(administrator=True)
    async def unmute(self, ctx, member: discord.Member):
        role = discord.utils.get(ctx.guild.roles, name="Muted")
        await member.remove_roles(role)
        embedVar = discord.Embed(title="Unmuted", description=f"{member.mention} was unmuted.", color=0x35a64f)
        await ctx.message.delete()
        await ctx.send(embed=embedVar)
#Connect the cog to the main bot
def setup(client):
client.add_cog(Moderation(client)) | 40.623188 | 120 | 0.717802 | 385 | 2,803 | 5.18961 | 0.425974 | 0.044044 | 0.022523 | 0.027027 | 0.230731 | 0.163664 | 0.12963 | 0.12963 | 0.12963 | 0.107107 | 0 | 0.006676 | 0.198359 | 2,803 | 69 | 121 | 40.623188 | 0.88251 | 0.426329 | 0 | 0.294118 | 0 | 0 | 0.063642 | 0 | 0 | 0 | 0.010082 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.088235 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf487798f8e7882fa1a70476d00ee5b869483959 | 15,839 | py | Python | app/sova/views.py | ivoras/sova | bb555e53a48f0e92874ab90e8651b314e2df0e8d | [
"BSD-2-Clause"
] | 1 | 2017-02-20T13:49:48.000Z | 2017-02-20T13:49:48.000Z | app/sova/views.py | ivoras/sova | bb555e53a48f0e92874ab90e8651b314e2df0e8d | [
"BSD-2-Clause"
] | null | null | null | app/sova/views.py | ivoras/sova | bb555e53a48f0e92874ab90e8651b314e2df0e8d | [
"BSD-2-Clause"
] | null | null | null | import base64
from datetime import datetime, timedelta
import re
import os
from django.core.mail import EmailMultiAlternatives
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect, Http404, HttpResponseBadRequest
from django.urls import reverse
from django.core.validators import validate_email
from django import forms
from django.utils import timezone
from django.utils.html import strip_tags
from django.utils.safestring import mark_safe
from django.conf import settings
from django.contrib import messages
from django.template.loader import get_template
from django.contrib.auth import authenticate, login, logout
from .models import Person, Group, Event, Participation, EmailSchedule, Token, EventOption
RE_EMAIL = re.compile(r'^[^@]+@[^@]+\.[^@.]+')
TS_FORMAT = '%Y-%m-%dT%H:%M'
def index(req):
    if not req.user.is_authenticated:
        return HttpResponseRedirect(reverse('login'))
    organiser = Person.objects.get(email=req.user.email)
    ctx = {'events': [{'id': e.id, 'name': e.name, 'date': e.date}
                      for e in Event.objects.filter(organiser=organiser).order_by('-date')]}
    return render(req, 'sova/index.html', ctx)


def vlogin(req):
    ctx = {}
    if req.method == 'GET':
        return render(req, 'sova/login.html', ctx)
    user = authenticate(username=req.POST['username'], password=req.POST['password'])
    if user is None:
        ctx['error'] = 'Invalid login'
        return render(req, 'sova/login.html', ctx)
    if not user.is_staff:
        ctx['error'] = 'Access denied'
        return render(req, 'sova/login.html', ctx)
    login(req, user)
    return HttpResponseRedirect(reverse('index'))


def vlogout(req):
    logout(req)
    return HttpResponseRedirect(reverse('index'))


def about(req):
    ctx = {}
    return render(req, 'sova/about.html', ctx)


def newevent(req):
    if not (req.user.is_authenticated and req.user.is_staff):
        return HttpResponseRedirect(reverse('index'))
    organiser = Person.objects.get(email=req.user.email)
    if req.method == 'GET':
        now = datetime.now()
        ctx = {'groups': [{"id": g.id, "name": g.name} for g in Group.objects.order_by('name')],
               'eventDate': (now + timedelta(days=7)).strftime(TS_FORMAT),
               'invitationDate': (now + timedelta(days=3)).strftime(TS_FORMAT),
               'acceptDate': (now + timedelta(days=6, hours=12)).strftime(TS_FORMAT),
               'reminderDate': (now + timedelta(days=5)).strftime(TS_FORMAT),
               'detailsDate': (now + timedelta(days=6)).strftime(TS_FORMAT),
               'thanksDate': (now + timedelta(days=8)).strftime(TS_FORMAT)}
        return render(req, 'sova/newevent.html', ctx)
    dateEvent = datetime.strptime(req.POST['dateEvent'], TS_FORMAT)
    dateAccept = datetime.strptime(req.POST['dateAccept'], TS_FORMAT)
    maxPeople = int(req.POST['maxPeople']) if req.POST['maxPeople'] != '' else None
    group = Group.objects.get(id=int(req.POST['group']))
    e = Event(name=req.POST['name'], hype_text=req.POST['hypeText'], mail_prefix=req.POST['slug'],
              organiser=organiser, header=mark_safe(req.POST['header']), footer=mark_safe(req.POST['footer']),
              date=dateEvent, deadline_for_joining=dateAccept, max_people=maxPeople)
    e.save()
    # subject strings are Croatian: 'Pozivnica' = invitation, 'podsjetnik' = reminder,
    # 'Detalji' = details, 'Zahvalnica' = thank-you note
    invitationDate = datetime.strptime(req.POST['invitationDate'], TS_FORMAT)
    invitationES = EmailSchedule(name='Pozivnica: %s' % e.name, group=group,
                                 target=EmailSchedule.SEND_EVERYONE, event=e,
                                 type=EmailSchedule.TYPE_INVITATION, date=invitationDate,
                                 subject='Pozivnica: %s' % e.name,
                                 message=mark_safe(req.POST['invitationText']))
    invitationES.save()
    reminderDate = datetime.strptime(req.POST['reminderDate'], TS_FORMAT)
    reminderES = EmailSchedule(name='Pozivnica: %s (podsjetnik)' % e.name, group=group,
                               target=EmailSchedule.SEND_NOT_ACCEPTED, event=e,
                               type=EmailSchedule.TYPE_INVITATION, date=reminderDate,
                               subject='Pozivnica: %s (podsjetnik)' % e.name,
                               message=mark_safe(req.POST['invitationText']))
    reminderES.save()
    detailsDate = datetime.strptime(req.POST['detailsDate'], TS_FORMAT)
    detailsES = EmailSchedule(name='Detalji: %s' % e.name, group=group,
                              target=EmailSchedule.SEND_ACCEPTED, event=e,
                              type=EmailSchedule.TYPE_MESSAGE, date=detailsDate,
                              subject='%s' % e.name, message=mark_safe(req.POST['detailsText']))
    detailsES.save()
    thanksDate = datetime.strptime(req.POST['thanksDate'], TS_FORMAT)
    thanksES = EmailSchedule(name='Zahvalnica: %s' % e.name, group=group,
                             target=EmailSchedule.SEND_ACCEPTED, event=e,
                             type=EmailSchedule.TYPE_EXIT_POLL, date=thanksDate,
                             subject='Zahvalnica: %s' % e.name, message=mark_safe(req.POST['thanksText']))
    thanksES.save()
    return HttpResponseRedirect(reverse('index'))


def join(req, event, person):
    person = get_object_or_404(Person, pk=int(person))
    event = get_object_or_404(Event, pk=int(event))
    try:
        participation = Participation.objects.get(person=person, event=event)
    except Participation.DoesNotExist:
        participation = None
    context = {
        'person': person,
        'event': event,
        'participation': participation
    }
    return render(req, 'sova/join.html', context)


def vote(req, event, person):
    person = get_object_or_404(Person, pk=int(person))
    event = get_object_or_404(Event, pk=int(event))
    accepted = req.POST['choice']
    if not (accepted == 'True' or accepted == 'False'):
        # redisplay the form
        return render(req, 'sova/join.html', {
            'person': person,
            'event': event,
            'error_message': "You didn't select a choice.",
        })
    participation = Participation(person=person, event=event, accepted=accepted)
    participation.save()
    # join() takes (event, person) in that order; the original passed the pks swapped
    return HttpResponseRedirect(reverse('join', args=(event.pk, person.pk,)))


def accept(req, schedule, person):
    """
    Shows event info and allows the user to accept.
    """
    schedule = get_object_or_404(EmailSchedule, pk=int(schedule))
    person = get_object_or_404(Person, pk=int(person))
    people_count = Participation.objects.filter(event=schedule.event, accepted=True).count()
    people_percent = int((people_count / schedule.event.max_people) * 100) if schedule.event.max_people else 0
    try:
        participation = Participation.objects.get(person=person, event=schedule.event)
        if participation.accepted:
            return render(req, 'sova/unaccept.html', {'person': person, 'schedule': schedule,
                                                      'people_count': people_count,
                                                      'people_percent': people_percent})
        options = EventOption.objects.filter(event_id=schedule.event_id)
    except Participation.DoesNotExist:
        options = []
    if schedule.event.max_people and people_count >= schedule.event.max_people:
        return render(req, 'sova/noroom.html', {'person': person, 'schedule': schedule})
    if timezone.now() > schedule.event.date or \
            (schedule.event.deadline_for_joining and timezone.now() > schedule.event.deadline_for_joining):
        return render(req, 'sova/toolate.html', {'person': person, 'schedule': schedule})
    return render(req, 'sova/accept.html', {'person': person, 'schedule': schedule,
                                            'people_count': people_count,
                                            'people_percent': people_percent, 'options': options})


def confirm(req, schedule, person):
    """
    Notifies the user he/she has confirmed attendance.
    """
    schedule = get_object_or_404(EmailSchedule, pk=int(schedule))
    person = get_object_or_404(Person, pk=int(person))
    try:
        participation = Participation.objects.get(person=person, event=schedule.event)
        participation.accepted = True
    except Participation.DoesNotExist:
        participation = Participation(person=person, event=schedule.event, accepted=True)
    participation.save()
    tpl = get_template('sova/confirmemail.html')
    html = tpl.render({'person': person, 'schedule': schedule, 'participation': participation,
                       'email_admin': settings.EMAIL_ADMIN})
    subject = "[%s] %s - potvrda!" % (schedule.event.mail_prefix, schedule.name)
    plain_text = strip_tags(html)
    msg = EmailMultiAlternatives(subject, plain_text, settings.EMAIL_FROM, [person.email])
    msg.attach_alternative(html, "text/html")
    msg.send()
    return render(req, 'sova/confirm.html', {'person': person, 'schedule': schedule,
                                             'participation': participation})


def unaccept(req, schedule, person):
    """
    Notifies the user he/she has canceled attendance.
    """
    schedule = get_object_or_404(EmailSchedule, pk=int(schedule))
    person = get_object_or_404(Person, pk=int(person))
    try:
        participation = Participation.objects.get(person=person, event=schedule.event)
        participation.accepted = False
    except Participation.DoesNotExist:
        participation = Participation(person=person, event=schedule.event, accepted=False)
    participation.save()
    tpl = get_template('sova/unacceptemail.html')
    html = tpl.render({'person': person, 'schedule': schedule, 'participation': participation,
                       'email_admin': settings.EMAIL_ADMIN})
    subject = "[%s] %s - otkazivanje" % (schedule.event.mail_prefix, schedule.name)
    plain_text = strip_tags(html)
    msg = EmailMultiAlternatives(subject, plain_text, settings.EMAIL_FROM, [person.email])
    msg.attach_alternative(html, "text/html")
    msg.send()
    return render(req, 'sova/unacceptconfirm.html', {'person': person, 'schedule': schedule,
                                                     'participation': participation})


def exitpoll(req, schedule, person):
    """
    Shows event exit poll.
    """
    schedule = get_object_or_404(EmailSchedule, pk=int(schedule))
    person = get_object_or_404(Person, pk=int(person))
    people_count = Participation.objects.filter(event=schedule.event, accepted=True, participated=True).count()
    people_percent = int((people_count / schedule.event.max_people) * 100) if schedule.event.max_people else 0
    participation = get_object_or_404(Participation, person=person, event=schedule.event)
    return render(req, 'sova/exitpoll.html', {'person': person, 'schedule': schedule,
                                              'people_count': people_count,
                                              'people_percent': people_percent,
                                              'participation': participation})


def exitpollsave(req, schedule, person):
    """
    Saves the exit poll results.
    """
    schedule = get_object_or_404(EmailSchedule, pk=int(schedule))
    person = get_object_or_404(Person, pk=int(person))
    participation = get_object_or_404(Participation, person=person, event=schedule.event)
    participation.poll_grade = int(req.POST['grade'])
    participation.poll_best = req.POST['best']
    participation.poll_worst = req.POST['worst']
    participation.poll_futureorg = True if 'futureorg' in req.POST and req.POST['futureorg'] == '1' else False
    participation.poll_change = req.POST['change']
    participation.poll_note = req.POST['note']
    participation.save()
    return render(req, 'sova/exitpollthanks.html', {'person': person, 'schedule': schedule})


def unsubscribe(req, person):
    """
    Shows the unsubscribe form to the user.
    """
    person = get_object_or_404(Person, pk=int(person))
    return render(req, 'sova/unsubscribe.html', {'person': person})


def unsubscribesave(req, person):
    # resolve the Person up front; the original only did so in the "yes" branch,
    # so the "no" template received the raw pk instead of a Person object
    person = get_object_or_404(Person, pk=int(person))
    if req.POST['unsubscribe'] == '1':
        person.email_enabled = False
        person.save()
        return render(req, 'sova/unsubscribe_yes.html', {'person': person})
    else:
        return render(req, 'sova/unsubscribe_no.html', {'person': person})


def subscribe(req):
    if settings.SUBSCRIBE_ENABLED:
        return render(req, 'sova/subscribe.html', {'org_title': settings.ORG_TITLE, 'cfg': settings.CFG})
    else:
        raise Http404("Subscribing not enabled")


def subscribesave(req):
    if not settings.SUBSCRIBE_ENABLED:
        raise Http404("Subscribing not enabled")
    m = RE_EMAIL.match(req.POST['email'])
    if m is None:
        return subscribe(req)
    # The reCAPTCHA verification sketched below was never implemented; as written
    # this view falls through and returns None, which Django reports as an error.
    """
    When your users submit the form where you integrated reCAPTCHA, you'll get as part of the payload a string with the name "g-recaptcha-response". In order to check whether Google has verified that user, send a POST request with these parameters:
    URL: https://www.google.com/recaptcha/api/siteverify
    secret (required)	6LdfeFsUAAAAAIJpr3bug3TF3BQzNGN_MIAQ1QR5
    response (required)	The value of 'g-recaptcha-response'.
    remoteip	The end user's ip address.
    The reCAPTCHA documentation site describes more details and advanced configurations.
    """


def subscribeconfirm(req):
    if not settings.SUBSCRIBE_ENABLED:
        raise Http404("Subscribing not enabled")
    # (left unfinished in the source: no confirmation logic or response yet)


def contact(req):
    return render(req, 'sova/contact.html', {})


def get_profile_token(req, person=0):
    try:
        person = Person.objects.get(pk=int(person))
        token = Token.objects.filter(person=person.id, date_created__gte=timezone.now() - timezone.timedelta(
            minutes=settings.TOKEN_EXPIRY_TIME)).order_by('-id')[0]
    except Person.DoesNotExist:
        person = None
        token = None
    except (Token.DoesNotExist, IndexError):
        # indexing an empty queryset raises IndexError, which the original never caught
        token = None
    return render(req, 'sova/getprofiletoken.html', {
        'person': person,
        'token': token,
    })


def send_profile_token(req):
    profile = req.POST.get('profile_email', False)
    # otherwise, validate it and retrieve the Person
    try:
        validate_email(profile)
        person = get_object_or_404(Person, email=str(profile))
        token = Token(token=base64.urlsafe_b64encode(os.urandom(12)), person=person)
        token.save()
        # send email, something along these lines
        # subject, from_email, to = 'Token', settings.EMAIL_FROM, person.email
        # text_content = 'Your token is ' + token.token
        # html_content = '<a href="{% url edituserprofile token.token %} ">'
        # msg = EmailMultiAlternatives(subject, text_content, from_email, [to])
        # msg.attach_alternative(html_content, "text/html")
        # msg.send()
        return HttpResponseRedirect(reverse('getprofiletoken', args=(person.id,)))
    except forms.ValidationError:
        messages.error(req, "You've entered an invalid e-mail address")
        return render(req, 'sova/getprofiletoken.html')


def edit_user_profile(req, token=''):
    try:
        token = Token.objects.get(token=token, date_created__gte=timezone.now() - timezone.timedelta(
            minutes=settings.TOKEN_EXPIRY_TIME))
    # either no token or token has expired
    except Token.DoesNotExist:
        token = None
    return render(req, 'sova/edituserprofile.html', {
        'token': token,
    })


def save_user_profile(req, token=''):
    try:
        retrieved_token = Token.objects.get(token=token, date_created__gte=timezone.now() - timezone.timedelta(
            minutes=settings.TOKEN_EXPIRY_TIME))
        retrieved_token.person.name = req.POST.get('username', retrieved_token.person.name)
        if req.POST.get('email_enabled', False):
            retrieved_token.person.email_enabled = True
        else:
            retrieved_token.person.email_enabled = False
        retrieved_token.person.phone = req.POST.get('phone', retrieved_token.person.phone)
        if req.POST.get('phone_enabled', False):
            retrieved_token.person.phone_enabled = True
        else:
            retrieved_token.person.phone_enabled = False
        retrieved_token.person.save()
        messages.success(req, 'Successful edit')
    except Token.DoesNotExist:
        token = None
        messages.error(req, 'Token expired :(')
        return render(req, 'sova/edituserprofile.html', {
            'token': token,
        })
    except forms.ValidationError:
        messages.error(req, 'Form validation problem :(')
        return render(req, 'sova/edituserprofile.html', {
            'token': token,
        })
    return HttpResponseRedirect(reverse('edituserprofile', args=(retrieved_token.token,)))
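# A minimal urls.py sketch consistent with the reverse() names used above
# (an illustration only -- the project's real URLconf is not shown here):
# from django.urls import path
# from . import views
# urlpatterns = [
#     path('', views.index, name='index'),
#     path('login/', views.vlogin, name='login'),
#     path('join/<int:event>/<int:person>/', views.join, name='join'),
#     path('token/<int:person>/', views.get_profile_token, name='getprofiletoken'),
#     path('profile/<str:token>/', views.edit_user_profile, name='edituserprofile'),
# ]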
| 44.366947 | 480 | 0.696067 | 1,931 | 15,839 | 5.593993 | 0.163128 | 0.022681 | 0.036104 | 0.045732 | 0.490187 | 0.426032 | 0.369376 | 0.344103 | 0.291798 | 0.268099 | 0 | 0.008091 | 0.172864 | 15,839 | 356 | 481 | 44.491573 | 0.816426 | 0.044005 | 0 | 0.382239 | 0 | 0 | 0.130012 | 0.019943 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084942 | false | 0.003861 | 0.069498 | 0.003861 | 0.289575 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf50573fc8c7f1473140c678218ca5ff24e7755c | 514 | py | Python | hello.py | charlieallatson/hello_world | 1a759534f9ae4ae27f845dc7b4d5015c9aa91dbc | [
"MIT"
] | null | null | null | hello.py | charlieallatson/hello_world | 1a759534f9ae4ae27f845dc7b4d5015c9aa91dbc | [
"MIT"
] | null | null | null | hello.py | charlieallatson/hello_world | 1a759534f9ae4ae27f845dc7b4d5015c9aa91dbc | [
"MIT"
] | null | null | null | """
A new python package to learn how to create python packages
"""
import sys
__version__ = '0.0.22'
def hello(name='world'):
    """
    Return a greeting for the given name
    """
    return 'Hello, {}'.format(name)
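# quick examples: hello('PyPI') -> 'Hello, PyPI'; hello() -> 'Hello, world'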
def main():
    """
    Reads input from the args passed into the script and prints the
    output to stdout.
    """
    args = sys.argv[1:]
    name = ' '.join(args)
    if name:
        print(hello(name))
    else:
        print(hello())


if __name__ == '__main__':
    main()
| 16.580645 | 67 | 0.571984 | 68 | 514 | 4.147059 | 0.617647 | 0.06383 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013774 | 0.293774 | 514 | 30 | 68 | 17.133333 | 0.763085 | 0.346304 | 0 | 0 | 0 | 0 | 0.098639 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0 | 0.076923 | 0 | 0.307692 | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf562b96a69896118fd044a0894e367ce47403f6 | 4,666 | py | Python | Artificial Algae Algorithm/AAA.py | shahind/Nature-Inspired-Algorithms | 88cfc37f903bfb3840566257f2d818ac5534affe | [
"MIT"
] | 17 | 2021-01-08T10:35:04.000Z | 2022-03-23T07:13:54.000Z | Artificial Algae Algorithm/AAA.py | shahind/Nature-Inspired-Algorithms | 88cfc37f903bfb3840566257f2d818ac5534affe | [
"MIT"
] | null | null | null | Artificial Algae Algorithm/AAA.py | shahind/Nature-Inspired-Algorithms | 88cfc37f903bfb3840566257f2d818ac5534affe | [
"MIT"
] | 5 | 2021-01-11T08:31:07.000Z | 2022-01-11T06:28:04.000Z | import numpy as np
from Sphere import ObjVal
from CalculateGreatness import CalculateGreatness
from GreatnessOrder import GreatnessOrder
from FrictionSurface import FrictionSurface
from tournement_selection import tournement_selection
def AAA(MaxFEVs, N, D, LB, UB, K, le, Ap):
    Algae = np.zeros((N, D))
    # Xij = Ximin + (Ximax - Ximin) * random(0, 1)
    Algae = LB + (UB - LB) * np.random.rand(N, D)
    Starve = np.zeros((1, N))
    Big_Algae = np.ones((1, N))
    Best_Algae = np.zeros((1, N))
    # evaluate the initial population (one call suffices; the original re-ran
    # this N-1 times with identical results)
    Obj_Algae = ObjVal(Algae)
    # [value, indices] = np.min(Obj_Algae)
    min_Obj_Alg = np.min(Obj_Algae)
    for ii, vi in enumerate(Obj_Algae, start=0):
        if vi == min_Obj_Alg:
            indices = ii
            value = vi
    Best_Algae = Algae[indices, :]
    Obj_Best_Algae = value
    Big_Algae = CalculateGreatness(Big_Algae, Obj_Algae)
    counter = 0
    c = N
    while c < MaxFEVs:
        Cloro_ALG = GreatnessOrder(Big_Algae)  # Calculate energy values
        Big_Algae_Surface = FrictionSurface(Big_Algae)  # Sorting by descending size and normalize between [0, 1]
        for i in range(0, c):
            starve = 0
            # each alga keeps moving while it still has energy; the original
            # hard-coded column 39 here, which is only meaningful for N == 40
            while Cloro_ALG[:, i] >= 0 and c < MaxFEVs:
                Neighbor = tournement_selection(Obj_Algae)
                while Neighbor == i:
                    Neighbor = tournement_selection(Obj_Algae)
                parameters = np.random.permutation(D)
                # copy, otherwise the helical-movement writes below would
                # mutate Algae[i] in place before the greedy acceptance test
                New_Algae = Algae[i, :].copy()
                parameter0 = int(parameters[0])
                parameter1 = int(parameters[1])
                parameter2 = int(parameters[2])
                # np.int/np.float were removed from NumPy, so the builtins are
                # used; the original also computed all three differences with
                # parameter0 (a copy-paste slip)
                Subtr_Eq0 = float(Algae[Neighbor, parameter0] - New_Algae[parameter0])
                Subtr_Eq1 = float(Algae[Neighbor, parameter1] - New_Algae[parameter1])
                Subtr_Eq2 = float(Algae[Neighbor, parameter2] - New_Algae[parameter2])
                K_Big_Algae = K - float(Big_Algae_Surface[:, i])
                rand_value = np.random.random() - 0.5
                cosine_value = np.cos(np.random.random() * 360)  # note: np.cos/np.sin expect radians
                sine_value = np.sin(np.random.random() * 360)
                New_Algae[parameter0] = Subtr_Eq0 * K_Big_Algae * (rand_value * 2)
                New_Algae[parameter1] = Subtr_Eq1 * K_Big_Algae * cosine_value
                New_Algae[parameter2] = Subtr_Eq2 * K_Big_Algae * sine_value
                ##########################################
                # clamp all three moved parameters to the bounds
                # (the original range(1, 3) skipped parameter0)
                for p in range(0, 3):
                    if New_Algae[parameters[p]] > UB:
                        New_Algae[parameters[p]] = UB
                    if New_Algae[parameters[p]] < LB:
                        New_Algae[parameters[p]] = LB
                Obj_New_Algae = ObjVal(New_Algae)
                c = c + 1
                counter = c
                Cloro_ALG[:, i] = Cloro_ALG[:, i] - (le / 2)
                if Obj_New_Algae <= Obj_Algae[i]:
                    Algae[i, :] = New_Algae
                    Obj_Algae[i] = Obj_New_Algae
                    starve = 1
                else:
                    Cloro_ALG[:, i] = Cloro_ALG[:, i] - (le / 2)
            if starve == 0:
                Starve[:, i] = Starve[:, i] + 1
        # [val, ind] = np.min(Obj_Algae)
        min_Obj_Alg1 = np.min(Obj_Algae)
        for ju, valuee in enumerate(Obj_Algae, start=0):
            if valuee == min_Obj_Alg1:
                ind = ju
                valki = valuee
        if valki < Obj_Best_Algae:
            Best_Algae = Algae[ind, :]
            Obj_Best_Algae = valki
        Big_Algae = CalculateGreatness(Big_Algae, Obj_Algae)
        m = int(np.fix(np.random.random() * D) + 1)
        imax = np.max(Big_Algae)
        imin = np.min(Big_Algae)
        big_algae_to_1_arr = np.array(Big_Algae).reshape(N)
        # Algae[imin, m] = Algae[imax, m]
        for ind_max, max_value in enumerate(big_algae_to_1_arr, start=0):
            if max_value == imax:
                index_max = ind_max
                maxi_value = max_value
        for ind_min, min_value in enumerate(big_algae_to_1_arr, start=0):
            if min_value == imin:
                index_min = ind_min
                mini_value = min_value
        if m >= D:  # m runs 1..D, so clamp to the last valid column (the original hard-coded 40)
            m = m - 1
        Algae[index_min, m] = Algae[index_max, m]
        # adaptation: move the most-starved alga; np.argmax gives its index
        # (the original used np.max, i.e. the starvation count, as a row index)
        starve = int(np.argmax(Starve))
        if np.random.random() < Ap:
            for m in range(0, D):
                Algae[starve, m] = Algae[starve, m] + (Algae[index_max, m] - Algae[starve, m]) * np.random.random()
        print('Run = %d error = %1.8e\n' % (counter, Obj_Best_Algae))
    return Obj_Best_Algae
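# Minimal usage sketch (the parameter values below are illustrative
# assumptions, not settings taken from this repository):
# best = AAA(MaxFEVs=50000, N=40, D=40, LB=-100, UB=100, K=2, le=0.3, Ap=0.5)
# print('best objective value:', best)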
| 32.17931 | 115 | 0.541149 | 594 | 4,666 | 4.037037 | 0.188552 | 0.063386 | 0.035029 | 0.021685 | 0.30025 | 0.18849 | 0.172644 | 0.115096 | 0.095079 | 0.031693 | 0 | 0.022143 | 0.341835 | 4,666 | 144 | 116 | 32.402778 | 0.758711 | 0.04715 | 0 | 0.0625 | 0 | 0 | 0.005458 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010417 | false | 0 | 0.0625 | 0 | 0.083333 | 0.010417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf56a4ec5c9c9f5034ff21cf5feff393693d48db | 22,712 | py | Python | RL/PPO_ZFlt_stable_2019_1123.py | GuanShiTing/DL_RL_Zoo | 520cd92c1a28f64006d51444a0940cc645b95c6d | [
"Apache-2.0"
] | 1 | 2021-06-08T08:20:31.000Z | 2021-06-08T08:20:31.000Z | RL/PPO_ZFlt_stable_2019_1123.py | GuanShiTing/DL_RL_Zoo | 520cd92c1a28f64006d51444a0940cc645b95c6d | [
"Apache-2.0"
] | null | null | null | RL/PPO_ZFlt_stable_2019_1123.py | GuanShiTing/DL_RL_Zoo | 520cd92c1a28f64006d51444a0940cc645b95c6d | [
"Apache-2.0"
] | null | null | null | import os
import sys
from time import time as timer
import gym
import numpy as np
import numpy.random as rd
import torch
import torch.nn as nn
import torch.nn.functional as F
"""
beta2 PPO ZFlt stable, running state mean std, def run_eval()
beta1 GPU, def get_eva_reward()
"""
class Arguments:
    env_name = "LunarLanderContinuous-v2"
    max_step = 2000  # max steps in one epoch
    max_epoch = 1000  # max num of train_epoch

    '''device'''
    gpu_id = sys.argv[0][-4]
    mod_dir = 'DDPG_%s' % gpu_id
    is_remove = True  # remove the pre-training data? (True, False, None:ask me)
    random_seed = 1943

    '''training'''
    actor_dim = 2 ** 8  # the network width of actor_net
    critic_dim = int(actor_dim * 1.25)  # the network width of critic_net
    memories_size = int(2 ** 16)  # memories capacity (memories: replay buffer)
    batch_size = 2 ** 8  # num of transitions sampled from replay buffer.
    update_gap = 2 ** 7  # update the target_net, delay update
    soft_update_tau = 1  # could be 0.005
    gamma = 0.99  # discount for future rewards
    explore_noise = 0.4  # action = select_action(state) + noise, 'explore_noise': sigma of noise
    policy_noise = 0.8  # actor_target(next_state) + noise, 'policy_noise': sigma of noise

    '''plot'''
    show_gap = 2 ** 5  # print the Reward, actor_loss, critic_loss
    eval_epoch = 4  # reload and evaluate the target policy network(actor)
    smooth_kernel = 2 ** 4  # smooth the reward/loss curves

    def __init__(self):
        self.env_name = "BipedalWalker-v2"  # 17837s 124e
        # self.env_name = "LunarLanderContinuous-v2"  # 14554s 132e
class RunningStat(object):
    def __init__(self, shape):
        self._n = 0
        self._M = np.zeros(shape)
        self._S = np.zeros(shape)

    def push(self, x):
        # Welford's online algorithm for a running mean and variance
        x = np.asarray(x)
        assert x.shape == self._M.shape
        self._n += 1
        if self._n == 1:
            self._M[...] = x
        else:
            oldM = self._M.copy()
            self._M[...] = oldM + (x - oldM) / self._n
            self._S[...] = self._S + (x - oldM) * (x - self._M)

    @property
    def n(self):
        return self._n

    @property
    def mean(self):
        return self._M

    @property
    def var(self):
        return self._S / (self._n - 1) if self._n > 1 else np.square(self._M)

    @property
    def std(self):
        return np.sqrt(self.var)

    @property
    def shape(self):
        return self._M.shape
class ZFilter:
    """
    y = (x-mean)/std
    using running estimates of mean,std
    """

    def __init__(self, shape, demean=True, destd=True, clip=10.0):
        self.demean = demean
        self.destd = destd
        self.clip = clip
        self.rs = RunningStat(shape)

    def __call__(self, x, update=True):
        if update:
            self.rs.push(x)
        if self.demean:
            x = x - self.rs.mean
        if self.destd:
            x = x / (self.rs.std + 1e-8)
        if self.clip:
            x = np.clip(x, -self.clip, self.clip)
        return x

    def output_shape(self, input_space):
        return input_space.shape
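# Usage sketch (illustrative): normalize observations before they reach the
# network, exactly as run_train() does below.
# zf = ZFilter((8,), clip=5.0)
# state = zf(env.reset())               # update running stats and normalize
# state = zf(next_state, update=False)  # normalize without updating the stats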
from collections import namedtuple

Transition = namedtuple('Transition', ('state', 'value', 'action', 'logproba', 'mask', 'next_state', 'reward'))


class Memory(object):
    def __init__(self):
        self.memory = []

    def push(self, *args):
        self.memory.append(Transition(*args))

    def sample(self):
        return Transition(*zip(*self.memory))

    def __len__(self):
        return len(self.memory)
class ActorCritic(nn.Module):
    def __init__(self, num_inputs, num_outputs, layer_norm=True):
        super(ActorCritic, self).__init__()
        mid_dim = 96

        self.actor_fc1 = nn.Linear(num_inputs, mid_dim)
        self.actor_fc2 = nn.Linear(mid_dim, mid_dim)
        self.actor_fc3 = nn.Linear(mid_dim, num_outputs)
        self.actor_logstd = nn.Parameter(torch.zeros(1, num_outputs))

        self.critic_fc1 = nn.Linear(num_inputs, mid_dim)
        self.critic_fc2 = nn.Linear(mid_dim, mid_dim)
        self.critic_fc3 = nn.Linear(mid_dim, 1)

        if layer_norm:
            self.layer_norm(self.actor_fc1, std=1.0)
            self.layer_norm(self.actor_fc2, std=1.0)
            self.layer_norm(self.actor_fc3, std=0.01)

            self.layer_norm(self.critic_fc1, std=1.0)
            self.layer_norm(self.critic_fc2, std=1.0)
            self.layer_norm(self.critic_fc3, std=1.0)

    @staticmethod
    def layer_norm(layer, std=1.0, bias_const=0.0):
        torch.nn.init.orthogonal_(layer.weight, std)
        torch.nn.init.constant_(layer.bias, bias_const)

    def forward(self, states):
        """
        run policy network (actor) as well as value network (critic)
        :param states: a Tensor2 represents states
        :return: 3 Tensor2
        """
        action_mean, action_logstd = self._forward_actor(states)
        critic_value = self._forward_critic(states)
        return action_mean, action_logstd, critic_value

    def _forward_actor(self, states):
        x = f_hard_swish(self.actor_fc1(states))
        x = f_hard_swish(self.actor_fc2(x))
        action_mean = self.actor_fc3(x)
        action_logstd = self.actor_logstd.expand_as(action_mean)
        return action_mean, action_logstd

    def _forward_critic(self, states):
        x = f_hard_swish(self.critic_fc1(states))
        x = f_hard_swish(self.critic_fc2(x))
        critic_value = self.critic_fc3(x)
        return critic_value

    def select_action(self, action_mean, action_logstd, return_logproba=True):
        """
        given mean and std, sample an action from normal(mean, std)
        also returns probability of the given chosen
        """
        action_std = torch.exp(action_logstd)
        action = torch.normal(action_mean, action_std)
        logproba = None  # avoids a NameError when return_logproba is False
        if return_logproba:
            logproba = self._normal_logproba(action, action_mean, action_logstd, action_std)
        return action, logproba

    @staticmethod
    def _normal_logproba(x, mean, logstd, std=None):
        if std is None:
            std = torch.exp(logstd)
        std_sq = std.pow(2)
        logproba = - 0.5 * np.log(2 * np.pi) - logstd - (x - mean).pow(2) / (2 * std_sq)
        return logproba.sum(1)

    def get_logproba(self, states, actions):
        """
        return probability of chosen the given actions under corresponding states of current network
        :param states: Tensor
        :param actions: Tensor
        """
        action_mean, action_logstd = self._forward_actor(states)
        logproba = self._normal_logproba(actions, action_mean, action_logstd)
        return logproba
def f_hard_swish(x):
    return F.relu6(x + 3) / 6 * x
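# hard-swish(x) = x * relu6(x + 3) / 6, a cheap piecewise approximation of
# swish (x * sigmoid(x)); e.g. for inputs -4, 0, 4 it returns 0, 0, 4, since
# relu6 saturates at 0 below x = -3 and at 6 above x = 3.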
"""train"""
def run_train():
    args = Arguments()

    gpu_id = args.gpu_id
    env_name = args.env_name
    mod_dir = args.mod_dir

    memories_size = args.memories_size
    batch_size = args.batch_size
    update_gap = args.update_gap
    soft_update_tau = args.soft_update_tau
    actor_dim = args.actor_dim
    critic_dim = args.critic_dim

    show_gap = args.show_gap
    max_step = args.max_step
    max_epoch = args.max_epoch

    gamma = args.gamma
    explore_noise = args.explore_noise
    policy_noise = args.policy_noise
    random_seed = args.random_seed
    smooth_kernel = args.smooth_kernel
    is_remove = args.is_remove

    '''PPO'''
    num_episode = 500
    batch_size = 2048
    max_step_per_round = 2000
    gamma = 0.995
    lamda = 0.97
    log_num_episode = 1
    num_epoch = 10
    minibatch_size = 256
    clip = 0.2
    loss_coeff_value = 0.5
    loss_coeff_entropy = 0.02  # 0.01
    lr = 3e-4
    num_parallel_run = 5
    layer_norm = True
    state_norm = True
    advantage_norm = True
    lossvalue_norm = True
    schedule_adam = 'linear'
    schedule_clip = 'linear'
    clip_now = clip
    # whether_remove_history(remove=is_remove, mod_dir=mod_dir)

    '''env init'''
    env = gym.make(env_name)
    state_dim, action_dim, action_max, target_reward = get_env_info(env)

    '''mod init'''
    os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    network = ActorCritic(state_dim, action_dim, layer_norm=True).to(device)
    running_state = ZFilter((state_dim,), clip=5.0)

    from torch.optim import Adam
    optimizer = Adam(network.parameters(), lr=lr)

    torch.set_num_threads(8)
    torch.manual_seed(random_seed)
    np.random.seed(random_seed)

    '''train loop'''
    rd_normal = np.random.normal
    recorders = list()
    rewards = list()
    start_time = show_time = timer()
    EPS = 1e-10
    reward_record = []
    global_steps = 0

    from torch import Tensor
    try:
        for i_episode in range(num_episode):
            # step1: perform current policy to collect trajectories
            # this is an on-policy method!
            memory = Memory()
            num_steps = 0
            reward_list = []
            len_list = []
            while num_steps < batch_size:
                state = env.reset()
                reward_sum = 0
                t = 0
                if state_norm:
                    state = running_state(state)
                for t in range(max_step_per_round):
                    state_ten = torch.tensor((state,), dtype=torch.float32, device=device)
                    action_mean, action_logstd, value = network(state_ten)
                    action, logproba = network.select_action(action_mean, action_logstd)
                    action = action.cpu().data.numpy()[0]
                    logproba = logproba.cpu().data.numpy()[0]

                    next_state, reward, done, _ = env.step(action)
                    reward_sum += reward
                    if state_norm:
                        next_state = running_state(next_state)
                    mask = 0 if done else 1

                    memory.push(state, value, action, logproba, mask, next_state, reward)
                    if done:
                        break
                    state = next_state
                num_steps += (t + 1)
                global_steps += (t + 1)
                reward_list.append(reward_sum)
                len_list.append(t + 1)
            reward_record.append({
                'episode': i_episode,
                'steps': global_steps,
                'meanepreward': np.mean(reward_list),
                'meaneplen': np.mean(len_list)})

            batch = memory.sample()
            batch_size = len(memory)

            rewards = torch.tensor(batch.reward, dtype=torch.float32, device=device)
            values = torch.tensor(batch.value, dtype=torch.float32, device=device)
            masks = torch.tensor(batch.mask, dtype=torch.float32, device=device)
            actions = torch.tensor(batch.action, dtype=torch.float32, device=device)
            states = torch.tensor(batch.state, dtype=torch.float32, device=device)
            oldlogproba = torch.tensor(batch.logproba, dtype=torch.float32, device=device)

            prev_return = 0
            prev_value = 0
            prev_advantage = 0
            returns = torch.empty(batch_size, dtype=torch.float32, device=device)
            deltas = torch.empty(batch_size, dtype=torch.float32, device=device)
            advantages = torch.empty(batch_size, dtype=torch.float32, device=device)
            for i in reversed(range(batch_size)):
                returns[i] = rewards[i] + gamma * prev_return * masks[i]
                deltas[i] = rewards[i] + gamma * prev_value * masks[i] - values[i]
                # ref: https://arxiv.org/pdf/1506.02438.pdf (generalization advantage estimate)
                advantages[i] = deltas[i] + gamma * lamda * prev_advantage * masks[i]

                prev_return = returns[i]
                prev_value = values[i]
                prev_advantage = advantages[i]
            if advantage_norm:
                advantages = (advantages - advantages.mean()) / (advantages.std() + EPS)

            for i_epoch in range(int(num_epoch * batch_size / minibatch_size)):
                # sample from current batch
                minibatch_ind = np.random.choice(batch_size, minibatch_size, replace=False)
                minibatch_states = states[minibatch_ind]
                minibatch_actions = actions[minibatch_ind]
                minibatch_oldlogproba = oldlogproba[minibatch_ind]
                minibatch_newlogproba = network.get_logproba(minibatch_states, minibatch_actions)
                minibatch_advantages = advantages[minibatch_ind]
                minibatch_returns = returns[minibatch_ind]
                minibatch_newvalues = network._forward_critic(minibatch_states).flatten()

                ratio = torch.exp(minibatch_newlogproba - minibatch_oldlogproba)
                surr1 = ratio * minibatch_advantages
                surr2 = ratio.clamp(1 - clip_now, 1 + clip_now) * minibatch_advantages
                loss_surr = - torch.mean(torch.min(surr1, surr2))

                # not sure the value loss should be clipped as well
                # clip example: https://github.com/Jiankai-Sun/Proximal-Policy-Optimization-in-Pytorch/blob/master/ppo.py
                # however, it does not make sense to clip score-like value by a dimensionless clipping parameter
                # moreover, original paper does not mention clipped value
                if lossvalue_norm:
                    minibatch_return_6std = 6 * minibatch_returns.std()
                    loss_value = torch.mean((minibatch_newvalues - minibatch_returns).pow(2)) / minibatch_return_6std
                else:
                    loss_value = torch.mean((minibatch_newvalues - minibatch_returns).pow(2))

                loss_entropy = torch.mean(torch.exp(minibatch_newlogproba) * minibatch_newlogproba)

                total_loss = loss_surr + loss_coeff_value * loss_value + loss_coeff_entropy * loss_entropy
                optimizer.zero_grad()
                total_loss.backward()
                optimizer.step()

            if schedule_clip == 'linear':
                ep_ratio = 1 - (i_episode / num_episode)
                clip_now = clip * ep_ratio

            if schedule_adam == 'linear':
                ep_ratio = 1 - (i_episode / num_episode)
                lr_now = lr * ep_ratio
                # set learning rate
                # ref: https://stackoverflow.com/questions/48324152/
                for g in optimizer.param_groups:
                    g['lr'] = lr_now

            eva_reward = get_eva_reward(env, network, state_norm, running_state, max_step,
                                        target_reward, device)
            if i_episode % log_num_episode == 0:
                print('E: {:4} |R: {:8.3f} EvaR: {:8.2f} |L: {:6.3f} = {:6.3f} + {} * {:6.3f} + {} * {:6.3f}'.format(
                    i_episode, reward_record[-1]['meanepreward'], eva_reward,
                    total_loss.data, loss_surr.data,
                    loss_coeff_value, loss_value.data,
                    loss_coeff_entropy, loss_entropy.data,
                ))

            if eva_reward > target_reward:
                print("########## Solved! ###########")
                print('E: {:4} |R: {:8.3f} EvaR: {:8.2f}'.format(
                    i_episode, reward_record[-1]['meanepreward'], eva_reward, ))
                break
    except KeyboardInterrupt:
        print("KeyboardInterrupt")

    print('TimeUsed:', int(timer() - start_time))
    rs = running_state.rs
    print("State.mean", repr(rs.mean))
    print("State.std ", repr(rs.std))
    torch.save(network.state_dict(), '%s/PPO.pth' % (mod_dir,))
    np.save('{}/reward_record.npy'.format(mod_dir), reward_record)
    print("Save in Mod_dir:", mod_dir)

    reward_record = np.load('{}/reward_record.npy'.format(args.mod_dir), allow_pickle=True)
    recorders = np.array([(i['episode'], i['meanepreward'], i['meaneplen'])
                          for i in reward_record])
    draw_plot_ppo(recorders, args.smooth_kernel, args.mod_dir)
def run_eval():
    args = Arguments()

    env = gym.make(args.env_name)
    state_dim, action_dim, action_max, target_reward = get_env_info(env)

    '''mod init'''
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_id)
    # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    network = ActorCritic(state_dim, action_dim, layer_norm=True)
    network.load_state_dict(torch.load('%s/PPO.pth' % (args.mod_dir,), map_location=lambda storage, loc: storage))
    network.eval()

    # note: these saved statistics are 8-dimensional (LunarLanderContinuous-v2);
    # they will not broadcast against BipedalWalker-v2's 24-dim state, so this
    # evaluation only works for the LunarLander run they came from
    state_mean = np.array([6.37162155e-02, 2.92069533e-01, 1.34650579e-02, -1.37428364e-01,
                           5.30449211e-05, 7.67869142e-04, 3.98111940e-01, 4.12266648e-01])
    state_std = np.array([0.28471291, 0.50380399, 0.24356069, 0.23674863,
                          0.15911274, 0.15998845, 0.48951016, 0.4922441, ])

    def noise_filter(s):
        return (s - state_mean) / state_std

    state_norm = True

    # import cv2
    for epoch in range(args.eval_epoch):
        epoch_reward = 0
        state = env.reset()
        for t in range(args.max_step):
            if state_norm:
                state = noise_filter(state)
            state_tensor = torch.tensor((state,), dtype=torch.float32)
            action_mean, action_logstd, value = network(state_tensor)
            # action, logproba = network.select_action(action_mean, action_logstd)
            # action = action.cpu().data.numpy()[0]
            action = action_mean.cpu().data.numpy()[0]
            next_state, reward, done, _ = env.step(action)
            epoch_reward += reward
            env.render()
            if done:
                break
            state = next_state
        print("%3i\tEpiR %3i" % (epoch, epoch_reward))
    env.close()
def run_test():  # todo test
    args = Arguments()
    env = gym.make(args.env_name)
    state_dim, action_dim, action_max, target_reward = get_env_info(env)

    '''mod init'''
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu_id)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    network = ActorCritic(state_dim, action_dim, layer_norm=True)  # todo gpu
    network = network.to(device)

    rs = network(torch.randn(2, state_dim, dtype=torch.float32, device=device))
    print([r.size() for r in rs])
"""utils"""
def get_eva_reward(env, network, state_norm, running_state, max_step, target_reward,
                   device):  # 2019-11-20
    network.eval()

    eva_rewards = list()
    eva_epoch = 100
    for eval_epoch in range(eva_epoch):
        state = env.reset()
        eva_reward = 0
        for _ in range(max_step):
            if state_norm:
                state = running_state(state)
            state_ten = torch.tensor((state,), dtype=torch.float32, device=device)
            action_mean, action_logstd, value = network(state_ten)
            action = action_mean.cpu().data.numpy()[0]
            state, reward, done, _ = env.step(action)
            eva_reward += reward
            # env.render()
            if done:
                break
        eva_rewards.append(eva_reward)

        temp_target_reward = target_reward * (len(eva_rewards) / eva_epoch)
        if np.average(eva_rewards) < temp_target_reward:
            break  # break the evaluating loop ahead of time.
        if eval_epoch == 0 and eva_reward < target_reward:
            break

    network.train()

    eva_reward = np.average(eva_rewards)
    eva_r_std = float(np.std(eva_rewards))
    if eva_reward > target_reward:
        print("Eval| avg: %.2f std: %.2f" % (eva_reward, eva_r_std))
    return eva_reward
def get_env_info(env):
    state_dim = env.observation_space.shape[0]
    if isinstance(env.action_space, gym.spaces.Discrete):
        action_dim = env.action_space.n  # Discrete
        action_max = None
        print('action_space: Discrete:', action_dim)
    elif isinstance(env.action_space, gym.spaces.Box):
        action_dim = env.action_space.shape[0]  # Continuous
        action_max = float(env.action_space.high[0])
    else:
        action_dim = None
        action_max = None
        print('Error: env.action_space in get_env_info(env)')
        exit()

    target_reward = env.spec.reward_threshold
    return state_dim, action_dim, action_max, target_reward
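# Example return values (quoted from gym's registry, for orientation only):
# "BipedalWalker-v2"         -> (24, 4, 1.0, 300)  # 24 state dims, 4 continuous actions
# "LunarLanderContinuous-v2" -> (8, 2, 1.0, 200)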
def whether_remove_history(mod_dir, remove=None):
    if remove is None:
        remove = bool(input("  'y' to REMOVE: %s? " % mod_dir) == 'y')
    if remove:
        import shutil
        shutil.rmtree(mod_dir, ignore_errors=True)
        print("| Remove")
        del shutil
    if not os.path.exists(mod_dir):
        os.mkdir(mod_dir)
def draw_plot_ppo(recorders, smooth_kernel, mod_dir, save_name=None):  # 2019-11-08 16
    load_path = '%s/recorders.npy' % mod_dir
    if recorders is None:
        recorders = np.load(load_path)
        print(recorders.shape)
    else:
        np.save(load_path, recorders)

    if len(recorders) == 0:
        return print('Record is empty')
    else:
        print("Matplotlib Plot:", save_name)

    if save_name is None:
        save_name = "%s_plot.png" % (mod_dir,)

    import matplotlib.pyplot as plt
    # plt.style.use('ggplot')

    x_epoch = np.array(recorders[:, 0])
    fig, axs = plt.subplots(2)
    plt.title(save_name, y=2.3)

    r_avg, r_std = calculate_avg_std(recorders[:, 1], smooth_kernel)
    ax11 = axs[0]
    ax11_color = 'darkcyan'
    ax11_label = 'Eval R'
    ax11.plot(x_epoch, r_avg, label=ax11_label, color=ax11_color)
    ax11.set_ylabel(ylabel=ax11_label, color=ax11_color)
    ax11.fill_between(x_epoch, r_avg - r_std, r_avg + r_std, facecolor=ax11_color, alpha=0.1, )
    ax11.tick_params(axis='y', labelcolor=ax11_color)
    # ax11.legend(loc='best')
    # ax11.set_facecolor('#f0f0f0')
    # ax11.grid(color='white', linewidth=1.5)

    ax21 = axs[1]
    ax21_color = 'darkcyan'
    ax21_label = 'mean e len'
    ax21.set_ylabel(ax21_label, color=ax21_color)
    # column 2 holds the mean episode length; the original plotted the negated
    # reward column here, which did not match the label
    ax21.plot(x_epoch, recorders[:, 2], label=ax21_label, color=ax21_color)
    ax21.tick_params(axis='y', labelcolor=ax21_color)

    plt.savefig("%s/%s" % (mod_dir, save_name))
    plt.show()
def calculate_avg_std(y_reward, smooth_kernel):
    r_avg = list()
    r_std = list()
    for i in range(len(y_reward)):
        i_beg = i - smooth_kernel // 2
        i_end = i_beg + smooth_kernel
        i_beg = 0 if i_beg < 0 else i_beg
        rewards = y_reward[i_beg:i_end]
        r_avg.append(np.average(rewards))
        r_std.append(np.std(rewards))
    r_avg = np.array(r_avg)
    r_std = np.array(r_std)
    return r_avg, r_std
if __name__ == '__main__':
    run_train()
    # run_eval()
    # run_test()
| 33.949178 | 121 | 0.608401 | 2,969 | 22,712 | 4.422364 | 0.164702 | 0.009596 | 0.015842 | 0.020107 | 0.276999 | 0.224829 | 0.189947 | 0.162833 | 0.123839 | 0.102818 | 0 | 0.032428 | 0.281745 | 22,712 | 668 | 122 | 34 | 0.772451 | 0.095544 | 0 | 0.145299 | 0 | 0.002137 | 0.039604 | 0.001193 | 0 | 0 | 0 | 0.001497 | 0.002137 | 1 | 0.070513 | false | 0 | 0.029915 | 0.021368 | 0.196581 | 0.036325 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf582559e0911ceac34a3c025dbda5c3487ce0a3 | 759 | py | Python | ansible/ansiblelints/rules/WeakCryptographyalgo.py | KalmanMeth/defect-prediction | 0a4b549b8af5259241f41eaee77dd841e98e0064 | [
"Apache-2.0"
] | null | null | null | ansible/ansiblelints/rules/WeakCryptographyalgo.py | KalmanMeth/defect-prediction | 0a4b549b8af5259241f41eaee77dd841e98e0064 | [
"Apache-2.0"
] | null | null | null | ansible/ansiblelints/rules/WeakCryptographyalgo.py | KalmanMeth/defect-prediction | 0a4b549b8af5259241f41eaee77dd841e98e0064 | [
"Apache-2.0"
] | null | null | null | import ruamel.yaml
from ansiblelint import AnsibleLintRule


class WeakCryptographyalgo(AnsibleLintRule):
    id = 'ANSIBLE0010'
    description = 'check if weak algorithms such as MD5 are used or not'
    severity = 'HIGH'
    tags = {'weak algo'}
    version_added = 'v1.0.0'
    shortdesc = 'WeakCryptographyalgo'
    # _commands = ['shell']
    _modules = ['add_host']

    def matchtask(self, file, task):
        with open(file['path']) as fp:
            data = ruamel.yaml.round_trip_load(fp)
        for line in ruamel.yaml.round_trip_dump(data, indent=2, block_seq_indent=3).splitlines(True):
            if "md5" in line.lower():
                return True
            # "sha1" fixes the original "sh1", which matches neither "sha1"
            # nor any common algorithm name
            if "sha1" in line.lower():
                return True
        return False
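# Hypothetical task this rule would flag (illustrative only -- the rendered
# YAML contains the substring "md5", and the rule targets add_host tasks):
#   - name: register a host keyed by a weak hash
#     add_host:
#       name: "{{ inventory_hostname | hash('md5') }}"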
| 29.192308 | 101 | 0.621871 | 92 | 759 | 5.021739 | 0.695652 | 0.064935 | 0.064935 | 0.082251 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021818 | 0.275362 | 759 | 25 | 102 | 30.36 | 0.818182 | 0.027668 | 0 | 0.105263 | 0 | 0 | 0.164402 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.105263 | 0 | 0.736842 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf588f3851b0b3222aeb9120dab2bdad11f953db | 2,966 | py | Python | ApproachV4/src/GetAccountPropertiesV4.py | kanishk2509/TwitterBotDetection | 26355410a43c27fff9d58f71ca0d87ff6e707b6a | [
"Unlicense"
] | 2 | 2021-06-09T20:55:17.000Z | 2021-11-03T03:07:37.000Z | ApproachV4/src/GetAccountPropertiesV4.py | kanishk2509/TwitterBotDetection | 26355410a43c27fff9d58f71ca0d87ff6e707b6a | [
"Unlicense"
] | null | null | null | ApproachV4/src/GetAccountPropertiesV4.py | kanishk2509/TwitterBotDetection | 26355410a43c27fff9d58f71ca0d87ff6e707b6a | [
"Unlicense"
] | 1 | 2020-07-26T02:31:38.000Z | 2020-07-26T02:31:38.000Z | import tweepy
import datetime as dt
import requests
import re
from GetTweetProperties import get_tweet_properties, get_tweet_semantics
dow_ratios = {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0}
'''
Step 3
Calculating Twitter User Account Properties Component
'''
def get_data(user_id, api):
    tbl = []
    try:
        tbl = mine_data(user_id, api)
        return tbl
    except tweepy.TweepError as e:
        print(e)
        return tbl


def mine_data(user_id, api):
    tbl = []
    user = api.get_user(user_id)
    print('User Screen Name :: ', user.screen_name)
    age = dt.datetime.today().timestamp() - user.created_at.timestamp()
    print("User Age :: ", age, " seconds")
    in_out_ratio = 1
    if user.friends_count != 0:
        in_out_ratio = user.followers_count / user.friends_count
    favourites_ratio = 86400 * user.favourites_count / age
    print("Favourites Ratio :: ", favourites_ratio)
    status_ratio = 86400 * user.statuses_count / age
    print("Status Ratio :: ", status_ratio)
    acct_rep = 0
    if user.followers_count + user.friends_count != 0:
        acct_rep = user.followers_count / (user.followers_count + user.friends_count)
    print("Account Reputation :: ", acct_rep)
    symbols = r'_|%|"| '
    # screen_name_binary = user.screen_name.contains(symbols, case=False, na=False)
    tbl.append(user_id)
    # tbl.append(screen_name_binary)
    tbl.append(age)
    tbl.append(in_out_ratio)
    tbl.append(favourites_ratio)
    tbl.append(status_ratio)
    tbl.append(acct_rep)
    tbl2 = get_tweet_properties(user_id, api, user)
    for i in tbl2:
        tbl.append(i)
    tbl3 = get_tweet_semantics(user_id, api)
    tbl.append(tbl3[1])
    tbl.append(tbl3[2])
    tbl.append(tbl3[3])
    tbl.append(tbl3[4])
    tbl.append(tbl3[5])
    tbl.append(tbl3[6])
    tbl.append(tbl3[7])
    tbl.append(tbl3[8])
    return tbl


# Send all the urls out to Google's SafeBrowsing API to check for
# malicious urls, and return the number found
def num_malicious_urls(urls):
    key = 'AIzaSyAAPunMDPhArqLnE_zH9ZK91VDGWxka8K8'
    lookup_url = 'https://safebrowsing.googleapis.com/v4/threatMatches:find?key={}'.format(key)
    url_list = ''
    for url in urls:
        url_list += '{{"url": "{}"}},\n'.format(url)
    # drop the trailing comma, which would otherwise make the JSON body invalid
    url_list = url_list.rstrip(',\n')
    payload = ('{{"client" : '
               '{{"clientId" : "csci455", "clientVersion" : "0.0.1"}}, '
               '"threatInfo" : '
               '{{"threatTypes" : ["MALWARE","SOCIAL_ENGINEERING","UNWANTED_SOFTWARE","MALICIOUS_BINARY"], '
               '"platformTypes" : ["ANY_PLATFORM"], '
               '"threatEntryTypes" : ["URL"], '
               '"threatEntries": [ {} ] '
               '}} '
               '}}').format(url_list)
    r = requests.post(lookup_url, data=payload)
    if r.status_code == 200 and len(r.json()) > 0:
        return len(r.json()['matches'])
    return 0


def update_dow_ratios(weekday):
    dow_ratios[weekday] += 1
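# Usage sketch for the Safe Browsing check (needs a valid API key and network
# access; the URL below is Google's documented test address):
# num_malicious_urls(['http://malware.testing.google.test/testing/malware/'])
# should return >= 1, while clean URLs yield 0.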
| 27.719626 | 117 | 0.618678 | 381 | 2,966 | 4.632546 | 0.338583 | 0.081586 | 0.058924 | 0.049858 | 0.087819 | 0.05779 | 0 | 0 | 0 | 0 | 0 | 0.029101 | 0.235334 | 2,966 | 106 | 118 | 27.981132 | 0.749118 | 0.072825 | 0 | 0.069444 | 0 | 0 | 0.15988 | 0.014569 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.069444 | 0 | 0.194444 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf58f80c1eddf33be62a39e2715221f40903b812 | 2,712 | py | Python | VTPtools/vtp-skew.py | moyogo/tirotools | 9cf6876fcc50448e6d45d046c10293d8ba103c31 | [
"MIT"
] | 17 | 2019-04-18T15:28:55.000Z | 2020-06-11T15:49:19.000Z | VTPtools/vtp-skew.py | moyogo/tirotools | 9cf6876fcc50448e6d45d046c10293d8ba103c31 | [
"MIT"
] | 4 | 2020-06-27T04:43:49.000Z | 2022-03-23T11:38:25.000Z | VTPtools/vtp-skew.py | moyogo/tirotools | 9cf6876fcc50448e6d45d046c10293d8ba103c31 | [
"MIT"
] | 1 | 2021-08-07T22:55:03.000Z | 2021-08-07T22:55:03.000Z | import argparse
import logging
import math
import re
import sys
from fontTools.misc.fixedTools import otRound
from fontTools.misc.transform import Identity
from fontTools.ttLib import TTFont, TTLibError
from fontTools.voltLib.parser import Parser
from io import StringIO
log = logging.getLogger()
anchor_re = re.compile(r"DEF_ANCHOR.*.END_ANCHOR")
def replace(match, transform):
volt = Parser(StringIO(match.group(0))).parse()
anchor = volt.statements[0]
adv, dx, dy, adv_adjust_by, dx_adjust_by, dy_adjust_by = anchor.pos
if dy:
dx, dy = transform.transformPoint((dx or 0, dy))
pos = ""
if adv is not None:
pos += " ADV %g" % otRound(adv)
for at, adjust_by in adv_adjust_by.items():
pos += f" ADJUST_BY {adjust_by} AT {at}"
if dx is not None:
pos += " DX %g" % otRound(dx)
for at, adjust_by in dx_adjust_by.items():
pos += f" ADJUST_BY {adjust_by} AT {at}"
if dy is not None:
pos += " DY %g" % otRound(dy)
for at, adjust_by in dy_adjust_by.items():
pos += f" ADJUST_BY {adjust_by} AT {at}"
return (
f'DEF_ANCHOR "{anchor.name}" '
f"ON {anchor.gid} "
f"GLYPH {anchor.glyph_name} "
f"COMPONENT {anchor.component} "
f'{anchor.locked and "LOCKED " or ""}'
f"AT "
f"POS{pos} END_POS "
f"END_ANCHOR"
)
return match.group(0)
def main(args=None):
parser = argparse.ArgumentParser(
description="Transform X anchor positions in VOLT/VTP files."
)
parser.add_argument("input", metavar="INPUT", help="input font/VTP file to process")
parser.add_argument("output", metavar="OUTPUT", help="output font/VTP file")
parser.add_argument(
"-a", "--angle", type=float, required=True, help="the slant angle (in degrees)"
)
options = parser.parse_args(args)
font = None
try:
font = TTFont(options.input)
if "TSIV" in font:
indata = font["TSIV"].data.decode("utf-8")
else:
log.error('"TSIV" table is missing, font was not saved from VOLT?')
return 1
except TTLibError:
with open(options.input) as f:
indata = f.read()
transform = Identity.skew(options.angle * math.pi / 180)
outdata = anchor_re.sub(lambda m: replace(m, transform), indata)
if font is not None:
font["TSIV"].data = outdata.encode("utf-8")
font.save(options.output)
else:
with open(options.output, "w") as f:
f.write(outdata)
if __name__ == "__main__":
sys.exit(main())
| 30.818182 | 88 | 0.592183 | 365 | 2,712 | 4.287671 | 0.326027 | 0.076677 | 0.023003 | 0.023003 | 0.102236 | 0.073482 | 0.073482 | 0.073482 | 0.073482 | 0.073482 | 0 | 0.005165 | 0.286136 | 2,712 | 87 | 89 | 31.172414 | 0.803202 | 0 | 0 | 0.068493 | 0 | 0 | 0.198009 | 0.008481 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027397 | false | 0 | 0.136986 | 0 | 0.205479 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf597e9b79cff3a9a7086f9c848f3046668b7b67 | 1,376 | py | Python | bnb/utils/general_utils.py | elanmart/bnb-full | 6d86964d6580436816a84d1dd822ede02f4bc56f | [
"MIT"
] | 4 | 2018-04-28T14:29:11.000Z | 2019-02-08T19:09:04.000Z | bnb/utils/general_utils.py | elanmart/bnb | e7556abc9b76193f1de555315e14785f47b7e99a | [
"BSD-3-Clause"
] | null | null | null | bnb/utils/general_utils.py | elanmart/bnb | e7556abc9b76193f1de555315e14785f47b7e99a | [
"BSD-3-Clause"
] | null | null | null | import inspect
import os
from collections import namedtuple
from contextlib import contextmanager
from pathlib import Path
from typing import *
import git
@contextmanager
def chdir(path):
old = os.getcwd()
os.chdir(path)
yield
os.chdir(old)
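# Usage sketch: `with chdir('/tmp'): ...` runs the block in /tmp and restores the previous cwd afterwards.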
def normalize_path(path: Union[Path, str]) -> Path:
if not isinstance(path, Path):
path = Path(path)
return path.expanduser().absolute().resolve()
def get_caller_globals():
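# Walk two frames up (past this helper and its immediate caller) to reach the original call site's globals.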
return inspect.stack()[1][0].f_back.f_globals
def caller_git_info(filename=None):
retval = namedtuple('retval', ('root', 'hash', 'dirty'))
if filename is None:
previous_frame = inspect.currentframe().f_back.f_back
filename, *_ = inspect.getframeinfo(previous_frame)
filename = filename or os.path.dirname(normalize_path(filename))
filename = str(filename)
if os.path.basename(filename).startswith('<ipython'):
filename = os.path.dirname(filename)
try:
git_repo = git.Repo(filename, search_parent_directories=True)
git_root = git_repo.git.rev_parse("--show-toplevel")
git_hash = git_repo.head.commit.hexsha[:7]
is_dirty = git_repo.is_dirty()
except git.InvalidGitRepositoryError:
git_root = filename
git_hash = None
is_dirty = False
return retval(normalize_path(git_root), git_hash, is_dirty)
| 24.140351 | 69 | 0.686773 | 177 | 1,376 | 5.163842 | 0.389831 | 0.043764 | 0.039387 | 0.035011 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00274 | 0.204215 | 1,376 | 56 | 70 | 24.571429 | 0.831963 | 0 | 0 | 0 | 0 | 0 | 0.030523 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.184211 | 0.026316 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf5a9cb5d3e9f2267ebde9027febc965b300c8da | 8,379 | py | Python | source/aws/services/transit_gateway_peering_attachments.py | josete89/serverless-transit-network-orchestrator | 6a69e1f7ebcbcba20100f80bc2040f0d2ae86cf8 | [
"Apache-2.0"
] | 42 | 2019-11-16T18:00:32.000Z | 2021-09-16T01:10:53.000Z | source/aws/services/transit_gateway_peering_attachments.py | sukenshah/serverless-transit-network-orchestrator | 947edac276f56357859f2d2b8434b9d28fa1c6c1 | [
"Apache-2.0"
] | 38 | 2020-01-31T03:31:29.000Z | 2021-09-16T03:20:23.000Z | source/aws/services/transit_gateway_peering_attachments.py | sukenshah/serverless-transit-network-orchestrator | 947edac276f56357859f2d2b8434b9d28fa1c6c1 | [
"Apache-2.0"
] | 31 | 2019-12-09T17:20:03.000Z | 2021-03-30T06:52:02.000Z | ###############################################################################
# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). #
# You may not use this file except in compliance #
# with the License. A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the "license" file accompanying this file. This file is distributed #
# on an "AS IS" BASIS, WITHOUT WARRANTIES #
# OR CONDITIONS OF ANY KIND, express or implied. See the License for the #
# specific language governing permissions #
# and limitations under the License. #
###############################################################################
# !/bin/python
from botocore.exceptions import ClientError
from lib.decorator import try_except_retry
from aws.utils.boto3_session import Boto3Session
class TgwPeeringAttachmentAPIHandler(Boto3Session):
def __init__(self, logger, region, **kwargs):
self.logger = logger
self.__service_name = 'ec2'
self.region = region
kwargs.update({'region': self.region})
super().__init__(self.logger, self.__service_name, **kwargs)
self.ec2_client = super().get_client()
@try_except_retry()
def describe_transit_gateway_peering_attachments(self,
tgw_id: str,
states: list) -> list:
"""
Describe the tgw peering attachments for the tagged tgw id
:param tgw_id: tgw id of the tagged transit gateway
:param states: use the state to limit the returned response
:return: list of transit gateway peering attachments
"""
try:
response = self.ec2_client\
.describe_transit_gateway_peering_attachments(
Filters=[
{
'Name': 'transit-gateway-id',
'Values': [tgw_id]
},
{
'Name': 'state',
'Values': states
}
]
)
transit_gateway_peering_attachments_list = response.get(
'TransitGatewayPeeringAttachments', [])
next_token = response.get('NextToken', None)
while next_token is not None:
self.logger.info("Handling Next Token: {}".format(next_token))
response = self.ec2_client\
.describe_transit_gateway_peering_attachments(
Filters=[
{
'Name': 'transit-gateway-id',
'Values': [tgw_id]
},
{
'Name': 'state',
'Values': states
}
],
NextToken=next_token)
self.logger.info("Extending TGW Peering Attachment List")
transit_gateway_peering_attachments_list \
.extend(response.get('TransitGatewayPeeringAttachments',
[]))
next_token = response.get('NextToken', None)
return transit_gateway_peering_attachments_list
except ClientError as error:
self.logger.log_unhandled_exception(error)
raise
def create_transit_gateway_peering_attachment(self,
tgw_id: str,
peer_tgw_id: str,
peer_account_id,
peer_region) -> dict:
"""
Create tgw peering attachment
:param tgw_id: REQUIRED - transit gateway id of the local region
:param peer_tgw_id: REQUIRED - id for peer transit gateway hosted in
the peer region
:param peer_account_id: REQUIRED - current account id
:param peer_region: peer region where peer transit gateway is hosted
:return: details for the tgw peering attachment
"""
try:
response = self.ec2_client\
.create_transit_gateway_peering_attachment(
TransitGatewayId=tgw_id,
PeerTransitGatewayId=peer_tgw_id,
PeerAccountId=peer_account_id,
PeerRegion=peer_region,
)
return response.get('TransitGatewayPeeringAttachment')
except ClientError as error:
self.logger.log_unhandled_exception(error)
raise
def delete_transit_gateway_peering_attachment(self,
tgw_attach_id: str) -> str:
"""
Delete tgw peering attachment
:param tgw_attach_id: REQUIRED - transit gateway peering attachment id
:return: current state of the peering attachment
"""
try:
response = self.ec2_client\
.delete_transit_gateway_peering_attachment(
TransitGatewayAttachmentId=tgw_attach_id
)
return response.get('TransitGatewayPeeringAttachment').get('State')
except ClientError as error:
self.logger.log_unhandled_exception(error)
raise
def accept_transit_gateway_peering_attachment(self,
tgw_attach_id: str) -> str:
"""
Accept tgw peering attachment
:param tgw_attach_id: REQUIRED - transit gateway peering attachment id
:return: current state of the peering attachment
"""
try:
response = self.ec2_client\
.accept_transit_gateway_peering_attachment(
TransitGatewayAttachmentId=tgw_attach_id
)
return response.get('TransitGatewayPeeringAttachment').get('State')
except ClientError as error:
self.logger.log_unhandled_exception(error)
raise
def get_transit_gateway_peering_attachment_state(self,
tgw_attachment_id) -> str:
"""
Describe the tgw peering attachment and return its current state
:param tgw_attachment_id: id of the transit gateway peering attachment
:return: current state of the peering attachment
"""
try:
response = self.ec2_client\
.describe_transit_gateway_peering_attachments(
TransitGatewayAttachmentIds=[tgw_attachment_id])
transit_gateway_peering_attachments_list = response.get(
'TransitGatewayPeeringAttachments', [])
next_token = response.get('NextToken', None)
while next_token is not None:
self.logger.info(
"Handling Next Token: {}".format(next_token))
response = self.ec2_client \
.describe_transit_gateway_peering_attachments(
TransitGatewayAttachmentIds=[tgw_attachment_id],
NextToken=next_token)
self.logger.info("Extending TGW Peering Attachment List")
transit_gateway_peering_attachments_list \
.extend(response.get('TransitGatewayPeeringAttachments',
[]))
next_token = response.get('NextToken', None)
state = transit_gateway_peering_attachments_list[0].get('State')
return state
except ClientError as error:
self.logger.log_unhandled_exception(error)
raise
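# Usage sketch (names are illustrative):
#   handler = TgwPeeringAttachmentAPIHandler(logger, 'us-east-1')
#   attachments = handler.describe_transit_gateway_peering_attachments(tgw_id, ['available', 'pending'])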
| 46.038462 | 80 | 0.509488 | 704 | 8,379 | 5.832386 | 0.210227 | 0.09888 | 0.112518 | 0.101315 | 0.661471 | 0.611057 | 0.601802 | 0.591817 | 0.57623 | 0.57623 | 0 | 0.004254 | 0.410789 | 8,379 | 181 | 81 | 46.292818 | 0.827426 | 0.247285 | 0 | 0.570248 | 0 | 0 | 0.082781 | 0.037566 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049587 | false | 0 | 0.024793 | 0 | 0.123967 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf5eb68165231d8bd2e30af9eab24faf05a5f7d7 | 675 | py | Python | ui/AutoSelectSettingDialog.py | Zhehua-Hu/Enchain | 94bb21f8ff627fab7d28ca15b575ba01710fb579 | [
"Apache-2.0"
] | 12 | 2017-02-20T05:54:12.000Z | 2020-02-13T18:26:29.000Z | ui/AutoSelectSettingDialog.py | Zhehua-Hu/-nchain | 94bb21f8ff627fab7d28ca15b575ba01710fb579 | [
"Apache-2.0"
] | null | null | null | ui/AutoSelectSettingDialog.py | Zhehua-Hu/-nchain | 94bb21f8ff627fab7d28ca15b575ba01710fb579 | [
"Apache-2.0"
] | 3 | 2017-02-23T06:35:13.000Z | 2020-06-18T07:06:17.000Z | #!/usr/bin/env python
# coding=utf-8
"""
Provide AutoSelectSettingDialog Class
"""
from PyQt5.QtWidgets import QDialog, QSpinBox
from AutoSelectSetting import Ui_AutoSelectSetting
class AutoSelectSettingDialog(QDialog, Ui_AutoSelectSetting):
def __init__(self, parent=None):
QDialog.__init__(self, parent)
self.setupUi(self)
self.buttonBoxQuery.accepted.connect(self.set_value)
self.buttonBoxQuery.rejected.connect(self.close)
self.value = 1
self.value_has_set = False
def set_value(self):
self.value_has_set = True
self.value = self.spinBox.value()
def get_value(self):
return self.value
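# Usage sketch: dlg = AutoSelectSettingDialog(); dlg.exec_(); value = dlg.get_value() if dlg.value_has_set else None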
| 21.774194 | 61 | 0.700741 | 79 | 675 | 5.772152 | 0.481013 | 0.098684 | 0.052632 | 0.065789 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005607 | 0.207407 | 675 | 30 | 62 | 22.5 | 0.846729 | 0.105185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.133333 | 0.066667 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf5f7e78f69402cb12269c29faa4e2ec83813aa8 | 13,409 | py | Python | aiida_gollum/parsers/gollum.py | garsua/aiida-gollu2 | 4e41753c885c1e5aa955578a0e4973d57ccd9a96 | [
"MIT"
] | null | null | null | aiida_gollum/parsers/gollum.py | garsua/aiida-gollu2 | 4e41753c885c1e5aa955578a0e4973d57ccd9a96 | [
"MIT"
] | 2 | 2020-03-19T20:04:11.000Z | 2021-12-07T16:38:11.000Z | aiida_gollum/parsers/gollum.py | garsua/aiida-gollu2 | 4e41753c885c1e5aa955578a0e4973d57ccd9a96 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import numpy as np
from aiida.orm.data.parameter import ParameterData
from aiida.parsers.parser import Parser
from aiida.parsers.exceptions import OutputParsingError
from aiida_gollum.calculations.gollum import GollumCalculation
__copyright__ = u"Copyright (c), 2015, ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (Theory and Simulation of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (NCCR MARVEL)), Switzerland and ROBERT BOSCH LLC, USA. All rights reserved."
__license__ = "MIT license, see LICENSE.txt file"
__version__ = "0.12.0"
__contributors__ = "Victor M. Garcia-Suarez"
# Based on the 0.9.0 version of the STM workflow developed by Alberto
# Garcia for the aiida_siesta plugin
class GollumOutputParsingError(OutputParsingError):
pass
class GollumParser(Parser):
"""
Parser for the output of a Gollum calculation.
"""
def __init__(self,calc):
"""
Initialize the instance of GollumParser
"""
# check for valid input
self._check_calc_compatibility(calc)
super(GollumParser, self).__init__(calc)
def _check_calc_compatibility(self,calc):
if not isinstance(calc,GollumCalculation):
raise GollumOutputParsingError("Input calc must be a GollumCalculation")
def _get_output_nodes(self, output_path, messages_path, oc_path, ou_path, od_path, tt_path, tu_path, td_path):
"""
Extracts output nodes from the standard output and standard error
files. (And XML and JSON files)
"""
from aiida.orm.data.array.trajectory import TrajectoryData
import re
result_list = []
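# result_list collects (link_name, data_node) pairs that are attached as outputs of the calculation.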
# Add errors
successful = True
if messages_path is None:
errors_list = ['WARNING: No aiida.out file...']
else:
successful, errors_list = self.get_errors_from_file(messages_path)
result_dict = {}
result_dict["errors"] = errors_list
# Add warnings
warnings_list = self.get_warnings_from_file(messages_path)
result_dict["warnings"] = warnings_list
# Add outuput data
output_dict = self.get_output_from_file(output_path)
result_dict.update(output_dict)
# Add open channels and transmission data
if successful:
if oc_path is not None:
oc_dict = self.get_ndata_from_file(oc_path,'oc')
result_dict.update(oc_dict)
oc_data = self.get_transport_data(oc_path)
if oc_data is not None:
result_list.append(('oc_array',oc_data))
if ou_path is not None:
ou_dict = self.get_ndata_from_file(ou_path,'ou')
result_dict.update(ou_dict)
ou_data = self.get_transport_data(ou_path)
if ou_data is not None:
result_list.append(('ou_array',ou_data))
if od_path is not None:
od_dict = self.get_ndata_from_file(od_path,'od')
result_dict.update(od_dict)
od_data = self.get_transport_data(od_path)
if od_data is not None:
result_list.append(('od_array',od_data))
if tt_path is not None:
tt_dict = self.get_ndata_from_file(tt_path,'tt')
result_dict.update(tt_dict)
tt_data = self.get_transport_data(tt_path)
if tt_data is not None:
result_list.append(('tt_array',tt_data))
if tu_path is not None:
tu_dict = self.get_ndata_from_file(tu_path,'tu')
result_dict.update(tu_dict)
tu_data = self.get_transport_data(tu_path)
if tu_data is not None:
result_list.append(('tu_array',tu_data))
if td_path is not None:
td_dict = self.get_ndata_from_file(td_path,'td')
result_dict.update(td_dict)
td_data = self.get_transport_data(td_path)
if td_data is not None:
result_list.append(('td_array',td_data))
# Add parser info dictionary
parser_info = {}
parser_version = 'aiida-0.12.0--gollum-2.1.0'
parser_info['parser_info'] =\
'AiiDA Gollum Parser V. {}'.format(parser_version)
parser_info['parser_warnings'] = []
parsed_dict = dict(result_dict.items() + parser_info.items())
output_data = ParameterData(dict=parsed_dict)
link_name = self.get_linkname_outparams()
result_list.append((link_name,output_data))
return successful, result_list
def parse_with_retrieved(self,retrieved):
"""
Receives in input a dictionary of retrieved nodes.
Does all the logic here.
"""
from aiida.common.exceptions import InvalidOperation
import os
output_path = None
messages_path = None
oc_path = None
ou_path = None
od_path = None
tt_path = None
tu_path = None
td_path = None
try:
output_path, messages_path, oc_path, ou_path, od_path, tt_path, tu_path, td_path =\
self._fetch_output_files(retrieved)
except InvalidOperation:
raise
except IOError as e:
self.logger.error(e.message)
return False, ()
if all(p is None for p in (output_path, messages_path, oc_path, ou_path, od_path, tt_path, tu_path, td_path)):
self.logger.error("No output files found")
return False, ()
successful, out_nodes = self._get_output_nodes(output_path, messages_path, oc_path, ou_path, od_path, tt_path, tu_path, td_path)
return successful, out_nodes
def _fetch_output_files(self, retrieved):
"""
Checks the output folder for standard output and standard error
files, returns their absolute paths on success.
:param retrieved: A dictionary of retrieved nodes, as obtained from the
parser.
"""
from aiida.common.datastructures import calc_states
from aiida.common.exceptions import InvalidOperation
import os
# Check that the retrieved folder is there
try:
out_folder = retrieved[self._calc._get_linkname_retrieved()]
except KeyError:
raise IOError("No retrieved folder found")
list_of_files = out_folder.get_folder_list()
output_path = None
messages_path = None
oc_path = None
ou_path = None
od_path = None
tt_path = None
tu_path = None
td_path = None
if self._calc._DEFAULT_OUTPUT_FILE in list_of_files:
output_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_OUTPUT_FILE )
if self._calc._DEFAULT_MESSAGES_FILE in list_of_files:
messages_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_MESSAGES_FILE )
if self._calc._DEFAULT_OC_FILE in list_of_files:
oc_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_OC_FILE )
if self._calc._DEFAULT_OU_FILE in list_of_files:
ou_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_OU_FILE )
if self._calc._DEFAULT_OD_FILE in list_of_files:
od_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_OD_FILE )
if self._calc._DEFAULT_TT_FILE in list_of_files:
tt_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_TT_FILE )
if self._calc._DEFAULT_TU_FILE in list_of_files:
tu_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_TU_FILE )
if self._calc._DEFAULT_TD_FILE in list_of_files:
td_path = os.path.join( out_folder.get_abs_path('.'),
self._calc._DEFAULT_TD_FILE )
return output_path, messages_path, oc_path, ou_path, od_path, tt_path, tu_path, td_path
def get_errors_from_file(self,messages_path):
"""
Generates a list of errors from the 'aiida.out' file.
:param messages_path:
Returns a boolean indicating success (True) or failure (False)
and a list of strings.
"""
f = open(messages_path)
lines = f.read().split('\n') # There will be a final '' element
import re
# Search for 'Error' messages, log them, and return immediately
lineerror = []
there_are_fatals = False
for line in lines:
if re.match('^.*Error.*$',line):
self.logger.error(line)
lineerror.append(line)
there_are_fatals = True
if there_are_fatals:
lineerror.append(lines[-1])
return False, lineerror
# Make sure that the job did finish (and was not interrupted
# externally)
normal_end = False
for line in lines:
if re.match('^.*THE END.*$',line):
normal_end = True
if normal_end == False:
lines[-1] = 'FATAL: ABNORMAL_EXTERNAL_TERMINATION'
self.logger.error("Calculation interrupted externally")
return False, lines[-2:] # Return also last line of the file
return True, lineerror
def get_warnings_from_file(self,messages_path):
"""
Generates a list of warnings from the 'aiida.out' file.
:param messages_path:
Returns a list of strings.
"""
f = open(messages_path)
lines = f.read().split('\n') # There will be a final '' element
import re
# Find warnings
linewarning = []
for line in lines:
if re.match('^.*in =/.*$',line):
linewarning.append(line)
return linewarning
def get_output_from_file(self,output_path):
"""
Generates a list of variables from the 'aiida.out' file.
:param output_path:
Returns a list of strings.
"""
f = open(output_path)
lines = f.read().split('\n') # There will be a final '' element
import re
# Find data
output_dict = {}
for line in lines:
if re.match('^.*Version.*$',line):
output_dict['gollum_version'] = line.strip()
if re.match('^.*LD_LIBRARY_PATH.*$',line):
output_dict['ld_library_path'] = line.split()[2]
if re.match('^.*Start of run.*$',line):
output_dict['start_of_run'] = ' '.join(line.split()[-2:])
if re.match('^.*End of run.*$',line):
output_dict['end_of_run'] = ' '.join(line.split()[-2:])
if re.match('^.*Elapsed time.*$',line):
output_dict['total_time'] = float(line.split()[-2])
return output_dict
def get_ndata_from_file(self,nd_path,nd_prefix):
"""
Generates a list of variables from the 'aiida.out' file.
:param nd_path:
Returns a list of strings.
"""
f = open(nd_path)
lines = f.readlines()
import re
# Find data
nd_dict = {}
linenew = []
not_ef = True
cef = 'unknown'
c3 = cef  # previous data value, initialised so the first data line is handled safely
for line in lines:
try:
c1 = float(line.split()[0])
c2 = float(line.split()[1])
linenew.append(c2)
if c1 > 0 and not_ef:
cef = c3
not_ef = False
c3 = c2
except (IndexError, ValueError):
pass  # skip header or blank lines that do not hold two numeric columns
nd_ef = nd_prefix + '_ef'
nd_dict[nd_ef] = cef
nd_M = nd_prefix + '_M'
nd_dict[nd_M] = max(linenew)
nd_m = nd_prefix + '_m'
nd_dict[nd_m] = min(linenew)
return nd_dict
def get_linkname_outarray(self):
"""
Returns the name of the link to the output_array
"""
return 'output_array'
def get_transport_data(self,nd_path):
"""
Parses the open channels and transmission
files to get ArrayData objects that can
be stored in the database
"""
import numpy as np
from aiida.orm.data.array import ArrayData
f = open(nd_path)
lines = f.readlines()
x = []
y = []
for line in lines:
try:
c1 = float(line.split()[0])
x.append(c1)
c2 = float(line.split()[1])
y.append(c2)
except (IndexError, ValueError):
pass  # skip lines that do not contain two numeric columns
X = np.array(x,dtype=float)
Y = np.array(y,dtype=float)
arraydata = ArrayData()
arraydata.set_array('X', X)
arraydata.set_array('Y', Y)
return arraydata
| 35.757333 | 278 | 0.572899 | 1,657 | 13,409 | 4.364514 | 0.161738 | 0.021018 | 0.033186 | 0.018805 | 0.407494 | 0.317063 | 0.276549 | 0.239629 | 0.199806 | 0.175194 | 0 | 0.004753 | 0.340965 | 13,409 | 374 | 279 | 35.852941 | 0.813625 | 0.127974 | 0 | 0.233607 | 0 | 0.004098 | 0.078444 | 0.006767 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045082 | false | 0.012295 | 0.07377 | 0 | 0.180328 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf642152676e936f1e33bce340262b6207d887ce | 746 | py | Python | manage.py | stephen22otieno/oneMinutePitch | 485e5a94b64b87c0967668f7204cd3c702a189e6 | [
"MIT"
] | null | null | null | manage.py | stephen22otieno/oneMinutePitch | 485e5a94b64b87c0967668f7204cd3c702a189e6 | [
"MIT"
] | null | null | null | manage.py | stephen22otieno/oneMinutePitch | 485e5a94b64b87c0967668f7204cd3c702a189e6 | [
"MIT"
] | null | null | null | from flask_script import Manager,Server
from app import create_app,db
from app.models import User,Category,Peptalk,Comments
from flask_migrate import Migrate, MigrateCommand
app = create_app('development')
manager =Manager(app)
migrate = Migrate(app,db)
manager.add_command('server',Server)
manager.add_command('db', MigrateCommand)
@manager.command
def test():
"""Run the unit tests."""
import unittest
tests = unittest.TestLoader().discover('test')
unittest.TextTestRunner(verbosity=2).run(tests)
@manager.shell
def make_shell_context():
return dict(app = app, db = db, Category = Category, User = User, Peptalk = Peptalk, Comments = Comments)
if __name__ == '__main__':
manager.run()
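# Invoke with e.g. `python manage.py server` or `python manage.py test`; Flask-Script dispatches on the command name.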
| 25.724138 | 113 | 0.737265 | 97 | 746 | 5.494845 | 0.402062 | 0.050657 | 0.086304 | 0.097561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001563 | 0.142091 | 746 | 28 | 114 | 26.642857 | 0.83125 | 0.025469 | 0 | 0 | 0 | 0 | 0.048544 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.25 | 0.05 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf654202b335a3ba2dfa9de6adedf93c5fc38356 | 1,668 | py | Python | sitecheck.py | ghostleyjim/DiscordCoronaBotNL | 8ac1e68abcf33d35a473c2b047ddbd5dcc840aa3 | [
"MIT"
] | null | null | null | sitecheck.py | ghostleyjim/DiscordCoronaBotNL | 8ac1e68abcf33d35a473c2b047ddbd5dcc840aa3 | [
"MIT"
] | null | null | null | sitecheck.py | ghostleyjim/DiscordCoronaBotNL | 8ac1e68abcf33d35a473c2b047ddbd5dcc840aa3 | [
"MIT"
] | null | null | null | #testfile to scrape the RIVM website all is copied to bot
import requests
from bs4 import BeautifulSoup
import csv
import time
import ntplib
date_format = "%A %d %B %Y"
f = open('updatetime.txt', 'r')
old_time = f.readline()
old_time = old_time.rstrip('\n')
f.close()
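# The current date comes from an NTP server (below), presumably so a wrong local clock cannot mask a new RIVM update.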
with open('updatetime.txt', "r+") as timefile:
ntp_client = ntplib.NTPClient()
response = ntp_client.request('pool.ntp.org')
current_time = time.strftime(date_format, time.localtime(response.tx_time))
if current_time != old_time:
print(current_time, file=timefile, end='')
timefile.truncate()  # the file was opened 'r+', so drop any leftover characters when the new date string is shorter
page = requests.get('https://www.rivm.nl/coronavirus-kaart-van-nederland-per-gemeente')
print('accessing RIVM...')
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(id="csvData")
RIVM = results.get_text()
RIVM = RIVM.lower()
with open("database.txt", "w") as text_file:
print(RIVM[1:], file=text_file, end='')
while True:
with open('database.txt') as csv_file:
csv_reader = (csv.reader(csv_file, delimiter=';'))
plaats = []
besmettingen = []
for row in csv_reader:
gemeente = row[1]
aantal = row[2]
plaats.append(gemeente)
besmettingen.append(aantal)
plaatsnaam = input('welke plaatsnaam?').lower()
try:
plaatsex = plaats.index(plaatsnaam)
gevallen = besmettingen[plaatsex]
print('aantal bekende besmettingen in', plaatsnaam, 'is: ', gevallen)
except ValueError:  # list.index raises ValueError when the place name is not found
print(plaatsnaam, 'plaats onbekend')
| 29.263158 | 96 | 0.60012 | 196 | 1,668 | 5.010204 | 0.505102 | 0.028513 | 0.034623 | 0.03666 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003303 | 0.273981 | 1,668 | 56 | 97 | 29.785714 | 0.807597 | 0.033573 | 0 | 0 | 0 | 0 | 0.158842 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.121951 | 0 | 0.121951 | 0.121951 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf65b6b01d6ff04b856a4ec4aff1e18a2ea3bdcb | 1,067 | py | Python | source/engine/detect/step_setup_camera.py | Borrk/DeepLearning-Engine | 54f6cdb8a76e76d9f439f8562652f545e4dbc02e | [
"MIT"
] | null | null | null | source/engine/detect/step_setup_camera.py | Borrk/DeepLearning-Engine | 54f6cdb8a76e76d9f439f8562652f545e4dbc02e | [
"MIT"
] | null | null | null | source/engine/detect/step_setup_camera.py | Borrk/DeepLearning-Engine | 54f6cdb8a76e76d9f439f8562652f545e4dbc02e | [
"MIT"
] | null | null | null | from engine.steps.IStep import IStep
from imutils.video import VideoStream
import imutils
class step_setup_camera(IStep):
""" setup camera"""
options = {}
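# Expected keys, filled in by IParseConfig: 'framerate', 'resolution' (as [width, height]) and 'usePiCamera'.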
def __init__(self, output_channel, name=None):
super().__init__(output_channel, name)
self.usePiCamera = True
def IRun(self):
use_picamera = self.options.get('usePiCamera', self.usePiCamera)  # prefer the parsed config value, fall back to the default
self.camera = VideoStream(usePiCamera=use_picamera,
resolution=self.options['resolution'], framerate=self.options['framerate']).start()
self.output_channel['camera'] = self.camera
self.output_channel['resolution'] = self.options['resolution']
def IParseConfig( self, config_json ):
self.options['framerate'] = config_json['framerate']
self.options['resolution'] = (config_json['resolution'][0],config_json['resolution'][1])
self.options['usePiCamera'] = config_json['usePiCamera']
def IDispose(self):
try:
self.camera.stop()  # stop the stream that IRun started
except Exception as e:
print("close camera exception:", e)
bf65f199ebe8974254f6145acc0b95d4220672c6 | 1,428 | py | Python | tota11y/middleware.py | hiisi13/django-tota11y | bf91c6a6e8a2997a6e088af41d46ef0830a4df32 | [
"MIT"
] | null | null | null | tota11y/middleware.py | hiisi13/django-tota11y | bf91c6a6e8a2997a6e088af41d46ef0830a4df32 | [
"MIT"
] | null | null | null | tota11y/middleware.py | hiisi13/django-tota11y | bf91c6a6e8a2997a6e088af41d46ef0830a4df32 | [
"MIT"
] | null | null | null | import re
from django.conf import settings
from django.utils.encoding import force_text
from django.template.loader import render_to_string
_HTML_TYPES = ('text/html', 'application/xhtml+xml')
class Tota11yMiddleware(object):
def process_response(self, request, response):
content_encoding = response.get('Content-Encoding', '')
content_type = response.get('Content-Type', '').split(';')[0]
if any((getattr(response, 'streaming', False),
'gzip' in content_encoding,
content_type not in _HTML_TYPES)):
return response
content = force_text(response.content, encoding=settings.DEFAULT_CHARSET)
insert_before = '</body>'
try:
pattern = re.escape(insert_before)
bits = re.split(pattern, content, flags=re.IGNORECASE)
except:
pattern = '(.+?)(%s|$)' % re.escape(insert_before)
matches = re.findall(pattern, content, flags=re.DOTALL | re.IGNORECASE)
bits = [m[0] for m in matches if m[1] == insert_before]
bits.append(''.join(m[0] for m in matches if m[1] == ''))
if len(bits) > 1:
bits[-2] += render_to_string('tota11y/base.html')
response.content = insert_before.join(bits)
if response.get('Content-Length', None):
response['Content-Length'] = len(response.content)
return response
| 38.594595 | 83 | 0.617647 | 169 | 1,428 | 5.094675 | 0.408284 | 0.10453 | 0.062718 | 0.060395 | 0.044135 | 0.044135 | 0.044135 | 0.044135 | 0.044135 | 0 | 0 | 0.010329 | 0.254202 | 1,428 | 36 | 84 | 39.666667 | 0.798122 | 0 | 0 | 0.068966 | 0 | 0 | 0.094538 | 0.014706 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.137931 | 0 | 0.275862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf6617694b418ef663952f93af37250bef039317 | 2,487 | py | Python | cilantro_audit/home_page.py | CapstoneCilantro/CilantroAuditNewFeature | 10b4548b4d6178bbbb5ba1d5fd46f5b7c51a527d | [
"MIT"
] | null | null | null | cilantro_audit/home_page.py | CapstoneCilantro/CilantroAuditNewFeature | 10b4548b4d6178bbbb5ba1d5fd46f5b7c51a527d | [
"MIT"
] | null | null | null | cilantro_audit/home_page.py | CapstoneCilantro/CilantroAuditNewFeature | 10b4548b4d6178bbbb5ba1d5fd46f5b7c51a527d | [
"MIT"
] | null | null | null | import kivy
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.popup import Popup
from kivy.uix.screenmanager import Screen
from kivy.uix.screenmanager import ScreenManager
from cilantro_audit.admin_page import AdminPage
from cilantro_audit.auditor_page import AuditorPage
from cilantro_audit.create_audit_template_page import CreateAuditTemplatePage
from cilantro_audit.completed_audits_list_page import CompletedAuditsListPage
from cilantro_audit.auditor_completed_audits_list_page import AuditorCompletedAuditsListPage
from cilantro_audit.view_audit_templates import ViewAuditTemplates
from cilantro_audit.constants import KIVY_REQUIRED_VERSION, ADMIN_SCREEN, HOME_SCREEN, AUDITOR_SCREEN, \
CREATE_AUDIT_TEMPLATE_PAGE, COMPLETED_AUDITS_LIST_PAGE, AUDITOR_COMPLETED_AUDITS_LIST_PAGE, VIEW_AUDIT_TEMPLATES, \
CREATE_COMPLETED_AUDIT_PAGE
from create_completed_audit_page import CreateCompletedAuditPage
kivy.require(KIVY_REQUIRED_VERSION)
Builder.load_file('./widgets/home_page.kv')
Builder.load_file('./widgets/admin_page.kv')
# Create the screen manager
sm = ScreenManager()
class HomePage(Screen):
pass
class AdminLoginPopup(Popup):
def validate_password(self, value):
if value == '12345':
sm.current = ADMIN_SCREEN
self.dismiss()
class CilantroAudit(App):
# Initialize screen manager and other necessary fields
def build(self):
sm.add_widget(HomePage(name=HOME_SCREEN))
sm.add_widget(AdminPage(name=ADMIN_SCREEN))
sm.add_widget(AuditorPage(name=AUDITOR_SCREEN))
sm.add_widget(CreateAuditTemplatePage(name=CREATE_AUDIT_TEMPLATE_PAGE))
sm.add_widget(CompletedAuditsListPage(name=COMPLETED_AUDITS_LIST_PAGE))
sm.add_widget(AuditorCompletedAuditsListPage(name=AUDITOR_COMPLETED_AUDITS_LIST_PAGE))
sm.add_widget(ViewAuditTemplates(name=VIEW_AUDIT_TEMPLATES))
sm.add_widget(CreateCompletedAuditPage(name=CREATE_COMPLETED_AUDIT_PAGE))
self.title = 'CilantroAudit'
return sm
# Set the text field inside of the popup to be focused
def on_popup_parent(self, popup):
if popup:
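# children[1] is assumed to be the password TextInput declared in home_page.kv.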
popup.content.children[1].focus = True
# Show the admin login, and focus onto the text field
def open_admin_login_popup(self):
t = AdminLoginPopup()
t.bind(on_open=self.on_popup_parent)
t.open()
def exit(self):
exit(1)
if __name__ == '__main__':
CilantroAudit().run()
| 33.608108 | 119 | 0.772819 | 314 | 2,487 | 5.821656 | 0.286624 | 0.021882 | 0.04814 | 0.075492 | 0.125821 | 0.037199 | 0.037199 | 0 | 0 | 0 | 0 | 0.003343 | 0.158022 | 2,487 | 73 | 120 | 34.068493 | 0.869628 | 0.073583 | 0 | 0 | 0 | 0 | 0.030883 | 0.019574 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0.04 | 0.28 | 0 | 0.46 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf68c5f9f6dbda4bc4e365b63fe8c8768421ddab | 11,476 | py | Python | appengine/monorail/sitewide/userprofile.py | xswz8015/infra | f956b78ce4c39cc76acdda47601b86794ae0c1ba | [
"BSD-3-Clause"
] | null | null | null | appengine/monorail/sitewide/userprofile.py | xswz8015/infra | f956b78ce4c39cc76acdda47601b86794ae0c1ba | [
"BSD-3-Clause"
] | 7 | 2022-02-15T01:11:37.000Z | 2022-03-02T12:46:13.000Z | appengine/monorail/sitewide/userprofile.py | NDevTK/chromium-infra | d38e088e158d81f7f2065a38aa1ea1894f735ec4 | [
"BSD-3-Clause"
] | null | null | null | # Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file or at
# https://developers.google.com/open-source/licenses/bsd
"""Classes for the user profile page ("my page")."""
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
import logging
import time
import json
import ezt
import settings
from businesslogic import work_env
from framework import framework_helpers
from framework import framework_views
from framework import permissions
from framework import servlet
from framework import timestr
from framework import xsrf
from project import project_views
from sitewide import sitewide_helpers
class UserProfile(servlet.Servlet):
"""Shows a page of information about a user."""
_PAGE_TEMPLATE = 'sitewide/user-profile-page.ezt'
def GatherPageData(self, mr):
"""Build up a dictionary of data values to use when rendering the page."""
viewed_user = mr.viewed_user_auth.user_pb
if self.services.usergroup.GetGroupSettings(
mr.cnxn, mr.viewed_user_auth.user_id):
url = framework_helpers.FormatAbsoluteURL(
mr, '/g/%s/' % viewed_user.email, include_project=False)
self.redirect(url, abort=True) # Show group page instead.
with work_env.WorkEnv(mr, self.services) as we:
project_lists = we.GetUserProjects(mr.viewed_user_auth.effective_ids)
(visible_ownership, visible_archived, visible_membership,
visible_contrib) = project_lists
with mr.profiler.Phase('Getting user groups'):
group_settings = self.services.usergroup.GetAllGroupSettings(
mr.cnxn, mr.viewed_user_auth.effective_ids)
member_ids, owner_ids = self.services.usergroup.LookupAllMembers(
mr.cnxn, list(group_settings.keys()))
friend_project_ids = [] # TODO(issue 4202): implement this.
visible_group_ids = []
for group_id in group_settings:
if permissions.CanViewGroupMembers(
mr.perms, mr.auth.effective_ids, group_settings[group_id],
member_ids[group_id], owner_ids[group_id], friend_project_ids):
visible_group_ids.append(group_id)
user_group_views = framework_views.MakeAllUserViews(
mr.cnxn, self.services.user, visible_group_ids)
user_group_views = sorted(
list(user_group_views.values()), key=lambda ugv: ugv.email)
with mr.profiler.Phase('Getting linked accounts'):
linked_parent = None
linked_children = []
linked_views = framework_views.MakeAllUserViews(
mr.cnxn, self.services.user,
[viewed_user.linked_parent_id],
viewed_user.linked_child_ids)
if viewed_user.linked_parent_id:
linked_parent = linked_views[viewed_user.linked_parent_id]
if viewed_user.linked_child_ids:
linked_children = [
linked_views[child_id] for child_id in viewed_user.linked_child_ids]
offer_unlink = (mr.auth.user_id == viewed_user.user_id or
mr.auth.user_id in linked_views)
incoming_invite_users = []
outgoing_invite_users = []
possible_parent_accounts = []
can_edit_invites = mr.auth.user_id == mr.viewed_user_auth.user_id
display_link_invites = can_edit_invites or mr.auth.user_pb.is_site_admin
# TODO(jrobbins): allow site admin to edit invites for other users.
if display_link_invites:
with work_env.WorkEnv(mr, self.services, phase='Getting link invites'):
incoming_invite_ids, outgoing_invite_ids = we.GetPendingLinkedInvites(
user_id=viewed_user.user_id)
invite_views = framework_views.MakeAllUserViews(
mr.cnxn, self.services.user, incoming_invite_ids, outgoing_invite_ids)
incoming_invite_users = [
invite_views[uid] for uid in incoming_invite_ids]
outgoing_invite_users = [
invite_views[uid] for uid in outgoing_invite_ids]
possible_parent_accounts = _ComputePossibleParentAccounts(
we, mr.viewed_user_auth.user_view, linked_parent, linked_children)
viewed_user_display_name = framework_views.GetViewedUserDisplayName(mr)
with work_env.WorkEnv(mr, self.services) as we:
starred_projects = we.ListStarredProjects(
viewed_user_id=mr.viewed_user_auth.user_id)
logged_in_starred = we.ListStarredProjects()
logged_in_starred_pids = {p.project_id for p in logged_in_starred}
starred_user_ids = self.services.user_star.LookupStarredItemIDs(
mr.cnxn, mr.viewed_user_auth.user_id)
starred_user_dict = framework_views.MakeAllUserViews(
mr.cnxn, self.services.user, starred_user_ids)
starred_users = list(starred_user_dict.values())
starred_users_json = json.dumps(
[uv.display_name for uv in starred_users])
is_user_starred = self._IsUserStarred(
mr.cnxn, mr.auth.user_id, mr.viewed_user_auth.user_id)
if viewed_user.last_visit_timestamp:
last_visit_str = timestr.FormatRelativeDate(
viewed_user.last_visit_timestamp, days_only=True)
last_visit_str = last_visit_str or 'Less than 2 days ago'
else:
last_visit_str = 'Never'
if viewed_user.email_bounce_timestamp:
last_bounce_str = timestr.FormatRelativeDate(
viewed_user.email_bounce_timestamp, days_only=True)
last_bounce_str = last_bounce_str or 'Less than 2 days ago'
else:
last_bounce_str = None
can_ban = permissions.CanBan(mr, self.services)
viewed_user_is_spammer = viewed_user.banned.lower() == 'spam'
viewed_user_may_be_spammer = not viewed_user_is_spammer
all_projects = self.services.project.GetAllProjects(mr.cnxn)
for project_id in all_projects:
project = all_projects[project_id]
viewed_user_perms = permissions.GetPermissions(viewed_user,
mr.viewed_user_auth.effective_ids, project)
if (viewed_user_perms != permissions.EMPTY_PERMISSIONSET and
viewed_user_perms != permissions.USER_PERMISSIONSET):
viewed_user_may_be_spammer = False
ban_token = None
ban_spammer_token = None
if mr.auth.user_id and can_ban:
form_token_path = mr.request.path + 'ban.do'
ban_token = xsrf.GenerateToken(mr.auth.user_id, form_token_path)
form_token_path = mr.request.path + 'banSpammer.do'
ban_spammer_token = xsrf.GenerateToken(mr.auth.user_id, form_token_path)
can_delete_user = permissions.CanExpungeUsers(mr)
page_data = {
'user_tab_mode': 'st2',
'viewed_user_display_name': viewed_user_display_name,
'viewed_user_may_be_spammer': ezt.boolean(viewed_user_may_be_spammer),
'viewed_user_is_spammer': ezt.boolean(viewed_user_is_spammer),
'viewed_user_is_banned': ezt.boolean(viewed_user.banned),
'owner_of_projects': [
project_views.ProjectView(
p, starred=p.project_id in logged_in_starred_pids)
for p in visible_ownership],
'committer_of_projects': [
project_views.ProjectView(
p, starred=p.project_id in logged_in_starred_pids)
for p in visible_membership],
'contributor_to_projects': [
project_views.ProjectView(
p, starred=p.project_id in logged_in_starred_pids)
for p in visible_contrib],
'owner_of_archived_projects': [
project_views.ProjectView(p) for p in visible_archived],
'starred_projects': [
project_views.ProjectView(
p, starred=p.project_id in logged_in_starred_pids)
for p in starred_projects],
'starred_users': starred_users,
'starred_users_json': starred_users_json,
'is_user_starred': ezt.boolean(is_user_starred),
'viewing_user_page': ezt.boolean(True),
'last_visit_str': last_visit_str,
'last_bounce_str': last_bounce_str,
'vacation_message': viewed_user.vacation_message,
'can_ban': ezt.boolean(can_ban),
'ban_token': ban_token,
'ban_spammer_token': ban_spammer_token,
'user_groups': user_group_views,
'linked_parent': linked_parent,
'linked_children': linked_children,
'incoming_invite_users': incoming_invite_users,
'outgoing_invite_users': outgoing_invite_users,
'possible_parent_accounts': possible_parent_accounts,
'can_edit_invites': ezt.boolean(can_edit_invites),
'offer_unlink': ezt.boolean(offer_unlink),
'can_delete_user': ezt.boolean(can_delete_user),
}
viewed_user_prefs = None
if mr.perms.HasPerm(permissions.EDIT_OTHER_USERS, None, None):
with work_env.WorkEnv(mr, self.services) as we:
viewed_user_prefs = we.GetUserPrefs(mr.viewed_user_auth.user_id)
user_settings = (
framework_helpers.UserSettings.GatherUnifiedSettingsPageData(
mr.auth.user_id, mr.viewed_user_auth.user_view, viewed_user,
viewed_user_prefs))
page_data.update(user_settings)
return page_data
def _IsUserStarred(self, cnxn, logged_in_user_id, viewed_user_id):
"""Return whether the logged in user starred the viewed user."""
if logged_in_user_id:
return self.services.user_star.IsItemStarredBy(
cnxn, viewed_user_id, logged_in_user_id)
return False
def ProcessFormData(self, mr, post_data):
"""Process the posted form."""
has_admin_perm = mr.perms.HasPerm(permissions.EDIT_OTHER_USERS, None, None)
with work_env.WorkEnv(mr, self.services) as we:
framework_helpers.UserSettings.ProcessSettingsForm(
we, post_data, mr.viewed_user_auth.user_pb, admin=has_admin_perm)
# TODO(jrobbins): Check all calls to FormatAbsoluteURL for include_project.
return framework_helpers.FormatAbsoluteURL(
mr, mr.viewed_user_auth.user_view.profile_url, include_project=False,
saved=1, ts=int(time.time()))
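# ts is presumably a cache-busting timestamp so the redirected page reflects the just-saved settings.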
def _ComputePossibleParentAccounts(
we, user_view, linked_parent, linked_children):
"""Return a list of email addresses of possible parent accounts."""
if not user_view:
return [] # Anon user cannot link to any account.
if linked_parent or linked_children:
return [] # If account is already linked in any way, don't offer.
possible_domains = settings.linkable_domains.get(user_view.domain, [])
possible_emails = ['%s@%s' % (user_view.username, domain)
for domain in possible_domains]
found_users, _ = we.ListReferencedUsers(possible_emails)
found_emails = [user.email for user in found_users]
return found_emails
class UserProfilePolymer(UserProfile):
"""New Polymer version of user profiles in Monorail."""
_PAGE_TEMPLATE = 'sitewide/user-profile-page-polymer.ezt'
class BanUser(servlet.Servlet):
"""Bans or un-bans a user."""
def ProcessFormData(self, mr, post_data):
"""Process the posted form."""
if not permissions.CanBan(mr, self.services):
raise permissions.PermissionException(
"You do not have permission to ban users.")
framework_helpers.UserSettings.ProcessBanForm(
mr.cnxn, self.services.user, post_data, mr.viewed_user_auth.user_id,
mr.viewed_user_auth.user_pb)
# TODO(jrobbins): Check all calls to FormatAbsoluteURL for include_project.
return framework_helpers.FormatAbsoluteURL(
mr, mr.viewed_user_auth.user_view.profile_url, include_project=False,
saved=1, ts=int(time.time()))
| 42.191176 | 80 | 0.720634 | 1,510 | 11,476 | 5.141722 | 0.184768 | 0.074704 | 0.026275 | 0.035033 | 0.426198 | 0.339387 | 0.246136 | 0.229006 | 0.185729 | 0.142839 | 0 | 0.00141 | 0.196846 | 11,476 | 271 | 81 | 42.346863 | 0.840946 | 0.085744 | 0 | 0.111628 | 0 | 0 | 0.071867 | 0.028459 | 0 | 0 | 0 | 0.00369 | 0 | 1 | 0.023256 | false | 0 | 0.07907 | 0 | 0.162791 | 0.004651 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf6b7fa8ef56c8e91159fea3e2c469042648b18d | 12,766 | py | Python | bel/nanopub/pubmed.py | belbio/bel | 14ff8e543a679e7dfff3f38f31c0f91ffd55e4d8 | [
"Apache-2.0"
] | 6 | 2018-01-31T21:25:40.000Z | 2020-11-18T16:43:56.000Z | bel/nanopub/pubmed.py | belbio/bel | 14ff8e543a679e7dfff3f38f31c0f91ffd55e4d8 | [
"Apache-2.0"
] | 83 | 2018-01-03T17:31:49.000Z | 2021-12-13T19:50:17.000Z | bel/nanopub/pubmed.py | belbio/bel | 14ff8e543a679e7dfff3f38f31c0f91ffd55e4d8 | [
"Apache-2.0"
] | 2 | 2019-04-12T20:42:06.000Z | 2020-07-17T02:49:03.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Pubmed related utilities
Given PMID - collect Pubmed data and Pubtator Bioconcepts used for the BELMgr
or enhancing BEL Nanopubs
"""
# Standard Library
import asyncio
import copy
import datetime
import re
from typing import Any, Mapping, MutableMapping
# Third Party
import cachetools
import httpx
from loguru import logger
from lxml import etree
# Local
import bel.core.settings as settings
import bel.terms.terms
from bel.core.utils import http_client, url_path_param_quoting
# Replace PMID
PUBMED_TMPL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pubmed&retmode=xml&id="
# https://www.ncbi.nlm.nih.gov/research/pubtator-api/publications/export/biocjson?pmids=28483577,28483578,28483579
PUBTATOR_URL = (
"https://www.ncbi.nlm.nih.gov/research/pubtator-api/publications/export/biocjson?pmids="
)
pubtator_ns_convert = {
"CHEBI": "CHEBI",
"Species": "TAX",
"Gene": "EG",
"Chemical": "MESH",
"Disease": "MESH",
}
pubtator_entity_convert = {"Chemical": "Abundance", "Gene": "Gene", "Disease": "Pathology"}
pubtator_annotation_convert = {"Disease": "Pathology"}
pubtator_known_types = [key for key in pubtator_ns_convert.keys()]
def node_text(node):
"""Needed for things like abstracts which have internal tags (see PMID:27822475)"""
if node.text:
result = node.text
else:
result = ""
for child in node:
if child.tail is not None:
result += child.tail
return result
@cachetools.cached(cachetools.TTLCache(maxsize=200, ttl=3600))
def get_pubtator_url(pmid):
"""Get pubtator content from url"""
pubtator = None
url = f"{PUBTATOR_URL}{pmid}"
r = http_client.get(url, timeout=10)
if r and r.status_code == 200:
pubtator = r.json()
else:
logger.error(f"Cannot access Pubtator, status: {r.status_code} url: {url}")
return pubtator
def pubtator_convert_to_key(annotation: dict) -> str:
"""Convert pubtator annotation info to key (NS:ID)"""
ns = pubtator_ns_convert.get(annotation["infons"]["type"], None)
id_ = annotation["infons"]["identifier"]
id_ = id_.replace("MESH:", "")
if ns is None:
logger.warning(f"Unknown Pubtator annotation type: {annotation['infons']['type']}")
return f"{ns}:{id_}"
def get_pubtator(pmid):
"""Get Pubtator Bioconcepts from Pubmed Abstract
Re-configure the denotations into an annotation dictionary format
and collapse duplicate terms so that their spans are in a list.
"""
annotations = []
pubtator = get_pubtator_url(pmid)
if pubtator is None:
return annotations
known_types = ["CHEBI", "Chemical", "Disease", "Gene", "Species"]
for passage in pubtator["passages"]:
for annotation in passage["annotations"]:
if annotation["infons"]["type"] not in known_types:
continue
key = pubtator_convert_to_key(annotation)
annotations.append(
{
"key": key,
"text": annotation["text"],
"locations": copy.copy(annotation["locations"]),
}
)
return annotations
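# Each entry has the shape {'key': 'NS:ID', 'text': <surface text>, 'locations': [...]}; e.g. key 'MESH:D003920' (illustrative).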
def process_pub_date(year, mon, day, medline_date):
"""Create pub_date from what Pubmed provides in Journal PubDate entry"""
if medline_date:
year = "0000"
match = re.search(r"\d{4,4}", medline_date)
if match:
year = match.group(0)
if year and re.match("[a-zA-Z]+", mon):
pub_date = datetime.datetime.strptime(f"{year}-{mon}-{day}", "%Y-%b-%d").strftime(
"%Y-%m-%d"
)
elif year:
pub_date = f"{year}-{mon}-{day}"
else:
pub_date = None
return pub_date
def parse_book_record(doc: dict, root) -> dict:
"""Parse Pubmed Book entry"""
doc["title"] = next(iter(root.xpath("//BookTitle/text()")))
doc["authors"] = []
for author in root.xpath("//Author"):
last_name = next(iter(author.xpath("LastName/text()")), "")
first_name = next(iter(author.xpath("ForeName/text()")), "")
initials = next(iter(author.xpath("Initials/text()")), "")
if not first_name and initials:
first_name = initials
doc["authors"].append(f"{last_name}, {first_name}")
pub_year = next(iter(root.xpath("//Book/PubDate/Year/text()")), None)
pub_mon = next(iter(root.xpath("//Book/PubDate/Month/text()")), "Jan")
pub_day = next(iter(root.xpath("//Book/PubDate/Day/text()")), "01")
medline_date = next(iter(root.xpath("//Journal/JournalIssue/PubDate/MedlineDate/text()")), None)
pub_date = process_pub_date(pub_year, pub_mon, pub_day, medline_date)
doc["pub_date"] = pub_date
for abstracttext in root.xpath("//Abstract/AbstractText"):
abstext = node_text(abstracttext)
label = abstracttext.get("Label", None)
if label:
doc["abstract"] += f"{label}: {abstext}\n"
else:
doc["abstract"] += f"{abstext}\n"
doc["abstract"] = doc["abstract"].rstrip()
return doc
def parse_journal_article_record(doc: dict, root) -> dict:
"""Parse Pubmed Journal Article record"""
doc["title"] = next(iter(root.xpath("//ArticleTitle/text()")), "")
# TODO https://stackoverflow.com/questions/4770191/lxml-etree-element-text-doesnt-return-the-entire-text-from-an-element
atext = next(iter(root.xpath("//Abstract/AbstractText/text()")), "")
for abstracttext in root.xpath("//Abstract/AbstractText"):
abstext = node_text(abstracttext)
label = abstracttext.get("Label", None)
if label:
doc["abstract"] += f"{label}: {abstext}\n"
else:
doc["abstract"] += f"{abstext}\n"
doc["abstract"] = doc["abstract"].rstrip()
doc["authors"] = []
for author in root.xpath("//Author"):
last_name = next(iter(author.xpath("LastName/text()")), "")
first_name = next(iter(author.xpath("ForeName/text()")), "")
initials = next(iter(author.xpath("Initials/text()")), "")
if not first_name and initials:
first_name = initials
doc["authors"].append(f"{last_name}, {first_name}")
pub_year = next(iter(root.xpath("//Journal/JournalIssue/PubDate/Year/text()")), None)
pub_mon = next(iter(root.xpath("//Journal/JournalIssue/PubDate/Month/text()")), "Jan")
pub_day = next(iter(root.xpath("//Journal/JournalIssue/PubDate/Day/text()")), "01")
medline_date = next(iter(root.xpath("//Journal/JournalIssue/PubDate/MedlineDate/text()")), None)
pub_date = process_pub_date(pub_year, pub_mon, pub_day, medline_date)
doc["pub_date"] = pub_date
doc["journal_title"] = next(iter(root.xpath("//Journal/Title/text()")), "")
doc["joural_iso_title"] = next(iter(root.xpath("//Journal/ISOAbbreviation/text()")), "")
doc["doi"] = next(iter(root.xpath('//ArticleId[@IdType="doi"]/text()')), None)
doc["compounds"] = []
for chem in root.xpath("//ChemicalList/Chemical/NameOfSubstance"):
chem_id = chem.get("UI")
doc["compounds"].append({"key": f"MESH:{chem_id}", "label": chem.text})
compounds = [cmpd["key"] for cmpd in doc["compounds"]]
doc["mesh"] = []
for mesh in root.xpath("//MeshHeading/DescriptorName"):
mesh_id = f"MESH:{mesh.get('UI')}"
if mesh_id in compounds:
continue
doc["mesh"].append({"key": mesh_id, "label": mesh.text})
return doc
@cachetools.cached(cachetools.TTLCache(maxsize=200, ttl=3600))
def get_pubmed_url(pmid):
"""Get pubmed url"""
root = None
try:
pubmed_url = f"{PUBMED_TMPL}{str(pmid)}"
r = http_client.get(pubmed_url)
logger.info(f"Status {r.status_code} URL: {pubmed_url}")
if r.status_code == 200:
content = r.content
root = etree.fromstring(content)
else:
logger.warning(f"Could not download pubmed url: {pubmed_url}")
except Exception as e:
logger.warning(
f"Bad Pubmed request, error: {str(e)}",
url=f'{PUBMED_TMPL.replace("PMID", pmid)}',
)
return root
def get_pubmed(pmid: str) -> Mapping[str, Any]:
"""Get pubmed xml for pmid and convert to JSON
Remove MESH terms if they are duplicated in the compound term set
ArticleDate vs PubDate gets complicated: https://www.nlm.nih.gov/bsd/licensee/elements_descriptions.html see <ArticleDate> and <PubDate>
Only getting pub_year at this point from the <PubDate> element.
Args:
pmid: pubmed id number as a string
Returns:
pubmed json
"""
doc = {
"abstract": "",
"pmid": pmid,
"title": "",
"authors": [],
"pub_date": "",
"journal_iso_title": "",
"journal_title": "",
"doi": "",
"compounds": [],
"mesh": [],
}
root = get_pubmed_url(pmid)
if root is None:
return None
try:
doc["pmid"] = root.xpath("//PMID/text()")[0]
except Exception as e:
return None
if doc["pmid"] != pmid:
logger.error(f"Requested PMID {doc['pmid']}doesn't match record PMID {pmid}")
if root.find("PubmedArticle") is not None:
doc = parse_journal_article_record(doc, root)
elif root.find("PubmedBookArticle") is not None:
doc = parse_book_record(doc, root)
return doc
async def async_get_normalized_terms_for_annotations(term_keys):
"""Async collection of normalized terms for annotations"""
normalized = await asyncio.gather(
*[bel.terms.terms.async_get_normalized_terms(term_key) for term_key in term_keys]
)
return normalized
def get_normalized_terms_for_annotations(term_keys):
return [bel.terms.terms.get_normalized_terms(term_key) for term_key in term_keys]
def add_annotations(pubmed):
"""Add nanopub annotations to pubmed doc
Enhance MESH terms etc as full-fledged nanopub annotations for use by the BEL Nanopub editor
"""
term_keys = (
[entry["key"] for entry in pubmed.get("compounds", [])]
+ [entry["key"] for entry in pubmed.get("mesh", [])]
+ [entry["key"] for entry in pubmed.get("pubtator", [])]
)
term_keys = list(set(term_keys))
terms = {}
for entry in pubmed.get("pubtator", []):
terms[entry["key"]] = {"key": entry["key"], "label": entry["text"]}
for entry in pubmed.get("compounds", []):
terms[entry["key"]] = {"key": entry["key"], "label": entry["label"]}
for entry in pubmed.get("mesh", []):
terms[entry["key"]] = {"key": entry["key"], "label": entry["label"]}
# loop = asyncio.get_event_loop()
# normalized = loop.run_until_complete(async_get_normalized_terms_for_annotations(term_keys))
normalized = get_normalized_terms_for_annotations(terms.keys())
normalized = sorted(normalized, key=lambda x: x["annotation_types"], reverse=True)
pubmed["annotations"] = []
for annotation in normalized:
# HACK - only show first annotation type
if len(annotation["annotation_types"]) > 0:
annotation_type = annotation["annotation_types"][0]
else:
annotation_type = ""
if annotation.get("label", False):
terms[annotation["original"]]["key"] = annotation["decanonical"]
terms[annotation["original"]]["label"] = annotation["label"]
terms[annotation["original"]]["annotation_types"] = [annotation_type]
pubmed["annotations"] = copy.deepcopy(
sorted(terms.values(), key=lambda x: x.get("annotation_types", []), reverse=True)
)
# Add missing
for idx, annotation in enumerate(pubmed["annotations"]):
if annotation["label"] == "":
pubmed["annotations"][idx]["label"] = annotation["key"]
return pubmed
def get_pubmed_for_beleditor(pmid: str, pubmed_only: bool = False) -> Mapping[str, Any]:
    """Get fully annotated pubmed doc with Pubtator and full entity/annotation_types

    Args:
        pmid: Pubmed PMID
        pubmed_only: skip the Pubtator annotations if True

    Returns:
        Mapping[str, Any]: pubmed dictionary
    """

    pubmed = get_pubmed(pmid)
    if pubmed is None:
        return pubmed

    if not pubmed_only:
        pubmed["pubtator"] = get_pubtator(pmid)

    # Add entity types and annotation types to annotations
    pubmed = add_annotations(pubmed)

    return pubmed
def main():
    pmid = "19894120"
    pubmed = get_pubmed_for_beleditor(pmid)


if __name__ == "__main__":
    main()
| 29.550926 | 140 | 0.622121 | 1,586 | 12,766 | 4.882724 | 0.209962 | 0.024406 | 0.021694 | 0.030733 | 0.371255 | 0.33329 | 0.30501 | 0.27376 | 0.257748 | 0.247676 | 0 | 0.008505 | 0.226304 | 12,766 | 431 | 141 | 29.61949 | 0.775539 | 0.14335 | 0 | 0.286275 | 0 | 0.007843 | 0.219446 | 0.058357 | 0 | 0 | 0 | 0.00232 | 0 | 1 | 0.05098 | false | 0.007843 | 0.047059 | 0.003922 | 0.164706 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf6c41df7444eeb8c502ae2cb7c1337b9779518c | 1,879 | py | Python | code/_archive/009_userCheckin.py | LadyMiss88/recommendation-engine-using-social-graph | 4ae9978b6251bffdde6793ba394dfa77a7d4a6f4 | [
"MIT"
] | null | null | null | code/_archive/009_userCheckin.py | LadyMiss88/recommendation-engine-using-social-graph | 4ae9978b6251bffdde6793ba394dfa77a7d4a6f4 | [
"MIT"
] | null | null | null | code/_archive/009_userCheckin.py | LadyMiss88/recommendation-engine-using-social-graph | 4ae9978b6251bffdde6793ba394dfa77a7d4a6f4 | [
"MIT"
] | null | null | null | import requests
from bs4 import BeautifulSoup
import simplejson as json
import config
import pymysql
# Module-level DB connection shared by the crawl loop below
database_conn = pymysql.connect(
    host=config.db_host,
    user=config.db_user,
    passwd=config.db_pass,
    db=config.db_database,
    use_unicode=True,
    charset="utf8",
)
database_cursor = database_conn.cursor()

param = {
    'client_id': config.client_id,
    'client_secret': config.client_secret,
    'oauth_token': config.access_token,
    'limit': '250',
    'v': '20170625'
}

# All known user ids
sql = "select distinct uid from user;"
database_cursor.execute(sql)
results = database_cursor.fetchall()
uids = [uid[0] for uid in results]

# All known restaurant ids
sql = "select distinct rid from restaurant;"
database_cursor.execute(sql)
results = database_cursor.fetchall()
rids = [rid[0] for rid in results]
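

# A minimal sketch (not part of the original script) of a parameterized bulk
# insert; the loop below keeps the original string-built INSERT statement, but
# executemany with placeholders avoids quoting and injection problems:
def insert_checkins(cursor, records):
    cursor.executemany('insert into checkin (uid, rid) values (%s, %s)', records)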
for uid in uids:
    offset = 0
    count = -1
    # The endpoint returns at most 250 checkins per request, so keep paging
    # until a short page comes back
    while count == -1 or count == 250:
        checkin_record = []
        param_str = '&'.join(['='.join(i) for i in param.items()])
        req = requests.get('https://api.foursquare.com/v2/users/' + str(uid) + '/checkins?' + param_str + '&offset=' + str(offset))
        soup = BeautifulSoup(req.content, 'html.parser')
        try:
            jdata = json.loads(str(soup))
        except Exception:
            print('Error!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
            continue
        count = jdata['response']['checkins']['count']
        print(count)
        if count != 0:
            for i in jdata['response']['checkins']['items']:
                rid = i['venue']['id']
                checkin_record.append((uid, rid))
            lineCheckin = 'insert into checkin (uid, rid) values ' + str(checkin_record)[1:-1] + ';'
            print(lineCheckin)
            database_cursor.execute(lineCheckin)
            database_conn.commit()
        offset += count  # advance the paging offset to fetch the next page
database_conn.close() | 34.163636 | 162 | 0.60298 | 221 | 1,879 | 5 | 0.411765 | 0.088688 | 0.057014 | 0.047059 | 0.095928 | 0.095928 | 0.095928 | 0.095928 | 0 | 0 | 0 | 0.017819 | 0.253326 | 1,879 | 55 | 163 | 34.163636 | 0.769779 | 0 | 0 | 0.083333 | 0 | 0 | 0.168675 | 0.018072 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.020833 | 0.104167 | 0 | 0.104167 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf6c77a4579d9ca212f8e013e1dcddd3b2cbae17 | 2,255 | py | Python | pastebin_mirror/storage.py | imnotjames/pastebin_mirror | 8b90d13c7590e12942045da781a081110e7897c1 | [
"MIT"
] | 5 | 2017-10-22T21:52:47.000Z | 2020-10-24T11:09:25.000Z | pastebin_mirror/storage.py | imnotjames/pastebin_mirror | 8b90d13c7590e12942045da781a081110e7897c1 | [
"MIT"
] | 2 | 2017-07-26T21:24:47.000Z | 2018-05-06T18:05:55.000Z | pastebin_mirror/storage.py | imnotjames/pastebin-mirror | 8b90d13c7590e12942045da781a081110e7897c1 | [
"MIT"
] | 1 | 2017-07-08T00:45:59.000Z | 2017-07-08T00:45:59.000Z | import sqlite3
import logging
logger = logging.getLogger(__name__)
class SQLite3Storage:
    """SQLite-backed storage for mirrored pastes and their content."""

    def __init__(self, location='pastebin.db'):
        self.connection = sqlite3.connect(location)

    def initialize_tables(self):
        logger.info('creating table `paste` if it doesn\'t exist')
        self.connection.execute(
            '''
            CREATE TABLE IF NOT EXISTS paste (
                paste_key CHAR(8) PRIMARY KEY,
                timestamp TIMESTAMP,
                size INT,
                expires TIMESTAMP,
                title TEXT,
                syntax TEXT,
                user TEXT NULL
            );
            '''
        )

        logger.info('creating table `paste_content` if it doesn\'t exist')
        self.connection.execute(
            '''
            CREATE TABLE IF NOT EXISTS paste_content (
                paste_key CHAR(8) PRIMARY KEY,
                raw_content TEXT
            );
            '''
        )

    def has_paste_content(self, key):
        cursor = self.connection.cursor()
        cursor.execute('SELECT COUNT(*) FROM paste_content WHERE paste_key = ?', (key,))
        paste_content_count = cursor.fetchone()[0]
        return paste_content_count > 0
    def save_paste_reference(self, key, size, timestamp, expires, title, syntax, user):
        self.connection.execute(
            '''
            INSERT OR REPLACE INTO paste
                (paste_key, timestamp, size, expires, title, syntax, user)
            VALUES
                (?, ?, ?, ?, ?, ?, ?)
            ''',
            (
                key,
                timestamp,
                size,
                expires,
                title,
                syntax,
                user,
            )
        )
        logger.debug('persisted paste reference for paste `%s`', key)
        self.connection.commit()
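
    # A minimal retrieval helper -- a sketch that is not part of the original
    # module; it only reads the schema created in initialize_tables above:
    def get_paste_content(self, key):
        cursor = self.connection.cursor()
        cursor.execute('SELECT raw_content FROM paste_content WHERE paste_key = ?', (key,))
        row = cursor.fetchone()
        return row[0] if row else None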
    def save_paste_content(self, key, content):
        self.connection.execute(
            '''
            INSERT OR REPLACE INTO paste_content
                (paste_key, raw_content)
            VALUES
                (?, ?)
            ''',
            (
                key,
                content,
            )
        )
logger.debug('persisted paste content for paste `%s`', key) | 26.529412 | 88 | 0.489579 | 205 | 2,255 | 5.239024 | 0.326829 | 0.100559 | 0.078212 | 0.061453 | 0.366853 | 0.314711 | 0.271881 | 0.201117 | 0.117318 | 0.117318 | 0 | 0.005331 | 0.417738 | 2,255 | 85 | 89 | 26.529412 | 0.812643 | 0 | 0 | 0.142857 | 0 | 0 | 0.153686 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119048 | false | 0 | 0.047619 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf71de2744fe5d1242aa731546dcbfd8e15f512b | 3,381 | py | Python | batch/batch/driver/resource_manager.py | vrautela/hail | 7db6189b5b1feafa88452b8470e497d9505d9a46 | [
"MIT"
] | null | null | null | batch/batch/driver/resource_manager.py | vrautela/hail | 7db6189b5b1feafa88452b8470e497d9505d9a46 | [
"MIT"
] | null | null | null | batch/batch/driver/resource_manager.py | vrautela/hail | 7db6189b5b1feafa88452b8470e497d9505d9a46 | [
"MIT"
] | null | null | null | from typing import List, Any, Tuple
import abc
import logging
from hailtop.utils import time_msecs
from .instance import Instance
from ..file_store import FileStore
from ..instance_config import InstanceConfig, QuantifiedResource
log = logging.getLogger('compute_manager')
class VMDoesNotExist(Exception):
    pass


class VMState:
    pass
class NoTimestampVMState(VMState):
    def __init__(self, state: str, full_spec: Any):
        self.state = state
        self.full_spec = full_spec

    def __str__(self):
        return f'state={self.state} full_spec={self.full_spec}'


class UnknownVMState(NoTimestampVMState):
    def __init__(self, full_spec: Any):
        super().__init__('Unknown', full_spec)


class VMStateTerminated(NoTimestampVMState):
    def __init__(self, full_spec: Any):
        super().__init__('Terminated', full_spec)


class TimestampedVMState(VMState):
    def __init__(self, state: str, full_spec: Any, last_state_change_timestamp_msecs: int):
        assert last_state_change_timestamp_msecs is not None
        self.state = state
        self.full_spec = full_spec
        self.last_state_change_timestamp_msecs = last_state_change_timestamp_msecs

    def time_since_last_state_change(self) -> int:
        return time_msecs() - self.last_state_change_timestamp_msecs

    def __str__(self):
        return f'state={self.state} full_spec={self.full_spec} last_state_change_timestamp_msecs={self.last_state_change_timestamp_msecs}'
class VMStateCreating(TimestampedVMState):
    def __init__(self, full_spec: Any, last_state_change_timestamp_msecs: int):
        super().__init__('Creating', full_spec, last_state_change_timestamp_msecs)


class VMStateRunning(TimestampedVMState):
    def __init__(self, full_spec: Any, last_state_change_timestamp_msecs: int):
        super().__init__('Running', full_spec, last_state_change_timestamp_msecs)
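

# A minimal sketch (not part of this module) of how a caller might act on the
# state classes above; the five-minute threshold is an illustrative value:
def is_stuck_creating(state: VMState, timeout_msecs: int = 5 * 60 * 1000) -> bool:
    return isinstance(state, VMStateCreating) and state.time_since_last_state_change() > timeout_msecs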
class CloudResourceManager:
    @abc.abstractmethod
    def machine_type(self, cores: int, worker_type: str, local_ssd: bool) -> str:
        raise NotImplementedError

    @abc.abstractmethod
    def worker_type_and_cores(self, machine_type: str) -> Tuple[str, int]:
        raise NotImplementedError

    @abc.abstractmethod
    def instance_config(
        self,
        machine_type: str,
        preemptible: bool,
        local_ssd_data_disk: bool,
        data_disk_size_gb: int,
        boot_disk_size_gb: int,
        job_private: bool,
        location: str,
    ) -> InstanceConfig:
        raise NotImplementedError

    @abc.abstractmethod
    def instance_config_from_dict(self, data: dict) -> InstanceConfig:
        raise NotImplementedError

    @abc.abstractmethod
    async def create_vm(
        self,
        file_store: FileStore,
        machine_name: str,
        activation_token: str,
        max_idle_time_msecs: int,
        local_ssd_data_disk: bool,
        data_disk_size_gb: int,
        boot_disk_size_gb: int,
        preemptible: bool,
        job_private: bool,
        location: str,
        machine_type: str,
        instance_config: InstanceConfig,
    ) -> List[QuantifiedResource]:
        raise NotImplementedError

    @abc.abstractmethod
    async def delete_vm(self, instance: Instance):
        raise NotImplementedError

    @abc.abstractmethod
    async def get_vm_state(self, instance: Instance) -> VMState:
        raise NotImplementedError
| 28.897436 | 138 | 0.71192 | 398 | 3,381 | 5.638191 | 0.223618 | 0.064171 | 0.080214 | 0.117647 | 0.58467 | 0.51738 | 0.420677 | 0.324421 | 0.256684 | 0.165775 | 0 | 0 | 0.20911 | 3,381 | 116 | 139 | 29.146552 | 0.839192 | 0 | 0 | 0.494118 | 0 | 0 | 0.062703 | 0.037267 | 0 | 0 | 0 | 0 | 0.011765 | 1 | 0.152941 | false | 0.023529 | 0.082353 | 0.035294 | 0.376471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf722d5a372f87aff15987a7ee497c8e0a060410 | 6,570 | py | Python | lib/URLutils.py | ecartierlipn/neoveille2020 | 6dd91c39f1210dd1829b5f62637c7c6f81e14386 | [
"Apache-2.0"
] | null | null | null | lib/URLutils.py | ecartierlipn/neoveille2020 | 6dd91c39f1210dd1829b5f62637c7c6f81e14386 | [
"Apache-2.0"
] | null | null | null | lib/URLutils.py | ecartierlipn/neoveille2020 | 6dd91c39f1210dd1829b5f62637c7c6f81e14386 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
'''
URL utils
'''
import re, justext, chardet
#import urllib2
#from urllib2 import urlopen, URLError, HTTPError, Request
from bs4 import BeautifulSoup as bs
import unicodedata
import requests, os
#import settings
import logging
log = logging.getLogger(__name__)
### utils
def detect_file_encoding(url):
    log.info(url + "\n")
    r = requests.get(url)
    if r.encoding is not None:
        log.info(r.encoding)
        return r.encoding
    else:
        return 'unable to get encoding from url'
def remove_control_chars(str):
    stripped = lambda s: "".join(i for i in s if ord(i) not in list(range(8)) + list(range(11, 31)) + [127])
    return stripped(str)

# print("DIAPO. Quand les Toques Blanchés font le bonheur \tdes producteurs\n")
# print(remove_control_chars("DIAPO. Quand les Toques Blanchés font le bonheur \tdes producteurs\n"))
# exit()
def remove_control_characters(s):
    return "".join(ch for ch in s if unicodedata.category(ch)[0] != "Cc")


def strip_html_tags(str):
    html = bs(str, "html.parser")
    str2 = html.get_text()
    str3 = re.sub(r"^[<>]+$", "", str2)
    str4 = re.sub(r"(https?:\/\/)?(\w+)?(\.\w+){2,4}(\/[\.\w]+){0,4}", "", str3)
    str5 = re.sub(r"\w+\.\w+", "", str4)
    #log.info(str5)
    #str6 = remove_control_chars(str5)
    #log.info(str6)
    return str5


def remove_links(text):
    res = re.sub(r"<.+?>", "", text)
    #log.info(res)
    return res
def get_url_article(link, lang):
    '''
    Legacy Python 2 variant kept for reference (it needs the commented-out
    urllib2 import above); get_url_article2 below is the requests-based version.
    TO BE DONE : error handling : http://www.voidspace.org.uk/python/articles/urllib2.shtml#handling-exceptions
    '''
    ### encoding bug
    if len(link) < 5:
        return False
    try:
        l = link.decode("utf-8", errors='ignore')
        log.info("Retrieving : " + l)
        #hdr = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
        hdr = 'Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0'
        headers = {'User-Agent': hdr, 'method': 'get'}
        req = urllib2.Request(l)
        req.add_header('User-Agent', hdr)
        p = urllib2.urlopen(req, timeout=20)
        page = p.read()
        contents = ''
        paragraphs = justext.justext(page, justext.get_stoplist(lang))
        for paragraph in paragraphs:
            if paragraph.class_type == 'good':
                #and re.search(r'Facebook connect|cliquez|Envoyer cet article par email|D.couvrez tous nos packs|d.j.un|recevoirnos|nosoffres|acc.dezà|cliquez ici|En poursuivant votre navigation sur ce site|accédezà|pasencore|Veuillez cliquer|créez gratuitement votre compte]',paragraph.text)== None:
                contents = contents + "\n" + paragraph.text
        cts = remove_control_characters(contents)
        return cts
    except HTTPError as e:
        log.warning("HTTP Error : " + str(e))
        return False
    except URLError as e:
        log.warning("URL Error : " + str(e))
        return False
    except UnicodeEncodeError as e:
        log.warning("UnicodeEncode Exception : " + str(e))
        return False
    except UnicodeDecodeError as e:
        log.warning("UnicodeDecode Exception : " + str(e))
        return False
    except Exception as e:
        log.warning("Exception : " + str(e))
        return False
    except:
        return False
def get_url_article2(link, lang):
    '''
    TO BE DONE : error handling : http://www.voidspace.org.uk/python/articles/urllib2.shtml#handling-exceptions
    '''
    ### encoding bug
    if len(link) < 5:
        return False
    try:
        #l = link.decode("utf-8", errors='ignore')
        log.info("Retrieving : " + link)
        #hdr = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
        hdr = 'Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:10.0) Gecko/20100101 Firefox/10.0'
        headers = {'User-Agent': hdr}
        resp = requests.get(link, headers=headers)
        resp.raise_for_status()
        page = resp.text
        #log.info(page)
        contents = ''
        #print(justext.get_stoplist())
        paragraphs = justext.justext(page, justext.get_stoplist(lang))
        for paragraph in paragraphs:
            if paragraph.class_type == 'good':
                #and re.search(r'Facebook connect|cliquez|Envoyer cet article par email|D.couvrez tous nos packs|d.j.un|recevoirnos|nosoffres|acc.dezà|cliquez ici|En poursuivant votre navigation sur ce site|accédezà|pasencore|Veuillez cliquer|créez gratuitement votre compte]',paragraph.text)== None:
                contents = contents + "\n" + paragraph.text
        cts = remove_control_characters(contents)
        if len(cts) == 0:
            log.warning("No contents for : " + link)  # + " " + page
        return cts
    except requests.exceptions.RequestException as e:
        log.warning("Exception : " + str(e))
        return False
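

# A minimal convenience sketch (not in the original module) that batches
# get_url_article2 over several links; failed fetches map to False:
def get_url_articles(links, lang):
    return {link: get_url_article2(link, lang) for link in links}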
def get_boilerplate_text(text, lang):
    '''
    Extract the main (non-boilerplate) text from an HTML string with justext.
    TO BE DONE : error handling : http://www.voidspace.org.uk/python/articles/urllib2.shtml#handling-exceptions
    '''
    contents = ''
    #print(justext.get_stoplist())
    paragraphs = justext.justext(text, justext.get_stoplist(lang))
    for paragraph in paragraphs:
        if paragraph.class_type == 'good':
            #and re.search(r'Facebook connect|cliquez|Envoyer cet article par email|D.couvrez tous nos packs|d.j.un|recevoirnos|nosoffres|acc.dezà|cliquez ici|En poursuivant votre navigation sur ce site|accédezà|pasencore|Veuillez cliquer|créez gratuitement votre compte]',paragraph.text)== None:
            contents = contents + "\n" + paragraph.text
    cts = remove_control_characters(contents)
    if len(cts) == 0:
        log.warning("No contents extracted from the given text")
    return cts
def find_rssfeeds(url):
    '''utility to find rss feeds from webpage'''
    page = requests.get(url).content
    soup = bs(page, 'html.parser')
    links = soup.find_all('link', type='application/rss+xml')
    if len(links) > 0:
        for l in links:
            print(l['href'], l['title'])
        return links
    else:
        print("No RSS feeds on this site")
        return False
# main method
def main():
    FORMAT = "%(levelname)s:%(asctime)s:%(message)s[%(filename)s:%(lineno)s - %(funcName)s()]"
    logging.basicConfig(format=FORMAT, datefmt='%m/%d/%Y %I:%M:%S %p', filename="./log/URLutils.log", level=logging.INFO)
    log = logging.getLogger(__name__)
    find_rssfeeds("http://www.lemonde.fr/rss/index.html")


# main
if __name__ == '__main__':
    main()
else:
    log = logging.getLogger(__name__)
| 36.5 | 300 | 0.633333 | 891 | 6,570 | 4.602694 | 0.271605 | 0.017069 | 0.008778 | 0.01902 | 0.582297 | 0.582297 | 0.553524 | 0.553524 | 0.553524 | 0.535479 | 0 | 0.02337 | 0.224962 | 6,570 | 179 | 301 | 36.703911 | 0.782011 | 0.310198 | 0 | 0.401709 | 0 | 0.025641 | 0.162424 | 0.024555 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08547 | false | 0 | 0.042735 | 0.008547 | 0.299145 | 0.017094 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf7319f7bbdd5e51be9c062b9af0fbc76ec7aa71 | 855 | py | Python | problems/test_0849.py | chrisxue815/leetcode_python | dec3c160d411a5c19dc8e9d96e7843f0e4c36820 | [
"Unlicense"
] | 1 | 2017-06-17T23:47:17.000Z | 2017-06-17T23:47:17.000Z | problems/test_0849.py | chrisxue815/leetcode_python | dec3c160d411a5c19dc8e9d96e7843f0e4c36820 | [
"Unlicense"
] | null | null | null | problems/test_0849.py | chrisxue815/leetcode_python | dec3c160d411a5c19dc8e9d96e7843f0e4c36820 | [
"Unlicense"
] | null | null | null | import unittest
from typing import List
import utils
# O(n) time. O(1) space. Iteration.
class Solution:
    def maxDistToClosest(self, seats: List[int]) -> int:
        # Skip the leading run of empty seats
        lo = 0
        while lo < len(seats) and seats[lo] == 0:
            lo += 1

        result = lo

        # Interior gaps: sitting midway between two people halves the distance
        for hi in range(lo + 1, len(seats)):
            if seats[hi] == 1:
                result = max(result, (hi - lo) >> 1)
                lo = hi

        # Trailing run of empty seats
        result = max(result, len(seats) - 1 - lo)
        return result
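

# Worked example (LeetCode's sample input): seats = [1, 0, 0, 0, 1, 0, 1].
# The leading run contributes lo = 0; the gap between the 1s at indices 0 and 4
# contributes (4 - 0) >> 1 = 2; the gap between indices 4 and 6 contributes 1;
# there is no trailing run, so the answer is 2.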
class Test(unittest.TestCase):
    def test(self):
        cases = utils.load_test_json(__file__).test_cases

        for case in cases:
            args = str(case.args)
            actual = Solution().maxDistToClosest(**case.args.__dict__)
            self.assertEqual(case.expected, actual, msg=args)


if __name__ == '__main__':
    unittest.main()
| 21.923077 | 70 | 0.556725 | 108 | 855 | 4.231481 | 0.444444 | 0.052516 | 0.065646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013841 | 0.323977 | 855 | 38 | 71 | 22.5 | 0.776817 | 0.038596 | 0 | 0 | 0 | 0 | 0.009756 | 0 | 0 | 0 | 0 | 0 | 0.041667 | 1 | 0.083333 | false | 0 | 0.125 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf73761720d08a914d996df2769ba78971749eee | 2,792 | py | Python | Gerador de Amostra/gerador_amostra.py | Paulognunes/Ajuste-de-Parametros-do-modelo-SIR | 99607db943e87658303eea8cb59e1c776073b3e4 | [
"MIT"
] | 1 | 2022-01-14T12:44:14.000Z | 2022-01-14T12:44:14.000Z | Gerador de Amostra/gerador_amostra.py | Paulognunes/Ajuste-de-Parametros-do-modelo-SIR | 99607db943e87658303eea8cb59e1c776073b3e4 | [
"MIT"
] | null | null | null | Gerador de Amostra/gerador_amostra.py | Paulognunes/Ajuste-de-Parametros-do-modelo-SIR | 99607db943e87658303eea8cb59e1c776073b3e4 | [
"MIT"
] | null | null | null | def gera_amostra(data_inicio_corte, data_fim_corte, arquivo_confirmados, arquivo_recuperados, arquivo_obitos,
                 arquivo_datas):
    confirmados = open(arquivo_confirmados, 'r')
    recuperados = open(arquivo_recuperados, 'r')
    obitos = open(arquivo_obitos, 'r')
    data = open(arquivo_datas, 'r')

    n = 90497  # total population (N in the SIR model)
    c = []
    r = []
    o = []
    d = []
    rt = []
    infec = []
    qnt_infec = 205  # cumulative confirmed cases before the series starts
    d1 = []

    # Newly confirmed cases
    for i in confirmados:
        c.append(i.replace('\n', ''))

    # Cumulative confirmed cases
    for i in c:
        qnt_infec += int(i)
        infec.append(qnt_infec)

    # Cumulative recovered
    for i in recuperados:
        r.append(i.replace('\n', ''))

    # Number of deaths
    for i in obitos:
        o.append(i.replace('\n', ''))

    # Vector of dates
    for i in data:
        aux = i.replace('/', '_')
        d.append(aux.replace('\n', ''))
        d1.append(i)

    # Since 01/07/2020
    # Add the deaths to the recovered (the removed compartment)
    x = 5  # deaths before the series starts
    for i in range(len(r)):
        if int(o[i]) == 0:
            aux = x + int(r[i])
            rt.append(aux)
        else:
            x += int(o[i])
            aux = x + int(r[i])
            rt.append(aux)

    data_inicio_corte = data_inicio_corte.replace('/', '_')
    data_fim_corte = data_fim_corte.replace('/', '_')

    corte = 0
    tam_amostra = 0
    for i in range(len(d)):
        if d[i] == data_inicio_corte:
            corte = i

    print(corte)
    print(c[corte])
    print(infec[corte])
    print(rt[corte])

    for i in range(len(d)):
        if d[i] == data_fim_corte:
            tam_amostra = i - corte

    if corte > (442 - tam_amostra):  # 442 = length of the full series
        print("Invalid date or sample size")
        return

    a = 'dados_sjdr_' + str(d[corte]) + '__' + str(d[corte + tam_amostra]) + '.csv'
    dados = open(a, 'w')
    for i in range(corte, corte + tam_amostra):
        suc = n - int(infec[i])  # susceptible
        if i != corte + tam_amostra - 1:
            inf_ativos = int(infec[i]) - int(rt[i])  # active infected
            dados.write(str(suc) + ',' + str(inf_ativos) + ',' + str(rt[i]) + '\n')
        else:
            inf_ativos = int(infec[i]) - int(rt[i])
            dados.write(str(suc) + ',' + str(inf_ativos) + ',' + str(rt[i]))

    for i in range(len(infec)):
        inf_ativos = int(infec[i]) - int(rt[i])
        print(f' {inf_ativos}, {d1[i]}\n')

    confirmados.close()
    recuperados.close()
    obitos.close()
    data.close()
    dados.close()
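

# Output format of gera_amostra: each CSV row is "S,I,R" for one day --
# susceptible = n - cumulative confirmed, active infected = cumulative
# confirmed - removed, and removed (rt) = recovered + deaths.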
if __name__ == '__main__':
    gera_amostra('15/07/2021', '15/09/2021', 'confirmados.txt', 'recuperados.txt', 'obitos.txt', 'datas.txt')
| 28.20202 | 110 | 0.511819 | 360 | 2,792 | 3.813889 | 0.219444 | 0.029133 | 0.0437 | 0.040058 | 0.2185 | 0.198106 | 0.164603 | 0.164603 | 0.11799 | 0.11799 | 0 | 0.023168 | 0.335244 | 2,792 | 98 | 111 | 28.489796 | 0.716595 | 0.062679 | 0 | 0.148649 | 0 | 0 | 0.071286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013514 | false | 0 | 0 | 0 | 0.027027 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
bf7570cf463efe1bbbf754068fceecada315ee88 | 5,040 | py | Python | Machine_Learning/use_stocker.py | vhn0912/Finance | 39cf49d4d778d322537531cee4ce3981cc9951f9 | [
"MIT"
] | 441 | 2020-04-22T02:21:19.000Z | 2022-03-29T15:00:24.000Z | Machine_Learning/use_stocker.py | happydasch/Finance | 4f6c5ea8f60fb0dc3b965ffb9628df83c2ecef35 | [
"MIT"
] | 5 | 2020-07-06T15:19:58.000Z | 2021-07-23T18:32:29.000Z | Machine_Learning/use_stocker.py | happydasch/Finance | 4f6c5ea8f60fb0dc3b965ffb9628df83c2ecef35 | [
"MIT"
] | 111 | 2020-04-21T11:40:39.000Z | 2022-03-20T07:26:17.000Z | from stocker import Stocker
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style
import matplotlib as mpl
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20, 10
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from fastai.structured import add_datepart
import tensorflow as tf
from tensorflow.keras import layers
from sklearn import neighbors
from sklearn.model_selection import GridSearchCV
from pandas.util.testing import assert_frame_equal
goog = Stocker('GOOGL')
goog.plot_stock()
# Create model
model, model_data = goog.create_prophet_model(days=90)
goog.evaluate_prediction()
# Optimize the model
goog.changepoint_prior_analysis(changepoint_priors=[0.001, 0.05, 0.1, 0.2])
goog.changepoint_prior_validation(start_date='2016-01-04', end_date='2017-01-03', changepoint_priors=[0.001, 0.05, 0.1, 0.2])
# Evaluate the new model
goog.evaluate_prediction()
print(goog.evaluate_prediction(nshares=1000))
# Getting the dataframe of the data
goog_data = goog.make_df('2004-08-19', '2018-03-27')
print(goog_data.head(50))
goog_data = goog_data[['Date', 'Open', 'High', 'Low', 'Close', 'Adj. Close', 'Volume']]
print(goog_data.head(50))
# Moving Average
scaler = MinMaxScaler(feature_range=(0, 1))
df = goog_data
print(df.head())
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
df.index = df['Date']
print(df.head(50))
plt.figure(figsize=(16,8))
plt.plot(df['Date'], df['Adj. Close'], label='Close Price history')
# Creating dataframe with date and the target variable
data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0, len(df)), columns=['Date', 'Adj. Close'])
for i in range(0, len(data)):
    new_data['Date'][i] = data['Date'][i]
    new_data['Adj. Close'][i] = data['Adj. Close'][i]

# Train-test split
train = new_data[:2600]
test = new_data[2600:]
print(new_data.shape, train.shape, test.shape)
num = test.shape[0]
print(train['Date'].min(), train['Date'].max(), test['Date'].min(), test['Date'].max())
# Making predictions: each test-day forecast is the mean of the preceding `num`
# values (the tail of the training series plus the forecasts made so far)
preds = []
for i in range(0, num):
    a = train['Adj. Close'][len(train) - num + i:].sum() + sum(preds)
    b = a / num
    preds.append(b)
print(len(preds))

# Measure accuracy with rmse (Root Mean Squared Error)
rms = np.sqrt(np.mean(np.power((np.array(test['Adj. Close']) - preds), 2)))
print(rms)
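
# The same RMSE expression recurs for each model below; an equivalent helper --
# a small sketch, not part of the original script:
def rmse(y_true, y_pred):
    return np.sqrt(np.mean(np.power(np.array(y_true) - np.array(y_pred), 2)))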
test['Predictions'] = 0
test['Predictions'] = preds
plt.plot(train['Adj. Close'])
plt.plot(test[['Adj. Close', 'Predictions']])
# Simple Linear Regression
lr_data = goog_data
print(lr_data.head(50))
lr_data['Date'] = pd.to_datetime(lr_data.Date, format='%Y-%m-%d')
lr_data.index = lr_data['Date']
lr_data = lr_data.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0, len(lr_data)), columns=['Date', 'Adj. Close'])
for i in range(0, len(lr_data)):
    new_data['Date'][i] = lr_data['Date'][i]
    new_data['Adj. Close'][i] = lr_data['Adj. Close'][i]
print(new_data.head(50))
add_datepart(new_data, 'Date')
new_data.drop('Elapsed', axis=1, inplace=True)
# Train-test split
train = new_data[:2600]
test = new_data[2600:]
x_train = train.drop('Adj. Close', axis=1)
y_train = train['Adj. Close']
x_test = test.drop('Adj. Close', axis=1)
y_test = test['Adj. Close']
# Implementing linear regression
model = LinearRegression()
model.fit(x_train, y_train)
# Predictions
preds = model.predict(x_test)
lr_rms = np.sqrt(np.mean(np.power((np.array(y_test)-np.array(preds)),2)))
print(lr_rms)
# Plot
test['Predictions'] = 0
test['Predictions'] = preds
plt.plot(train['Adj. Close'])
plt.plot(test[['Adj. Close', 'Predictions']])
# k-Nearest Neighbours
scaler = MinMaxScaler(feature_range=(0, 1))
# scaling the data
x_train_scaled = scaler.fit_transform(x_train)
x_train = pd.DataFrame(x_train_scaled)
# use transform (not fit_transform) so the test set is scaled with the
# parameters learned from the training data
x_test_scaled = scaler.transform(x_test)
x_test = pd.DataFrame(x_test_scaled)
# using gridsearch to find the best value of k
params = {'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]}
knn = neighbors.KNeighborsRegressor()
model = GridSearchCV(knn, params, cv=5)
# fitting the model and predicting
model.fit(x_train, y_train)
new_preds = model.predict(x_test)
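
# After fitting, GridSearchCV exposes the selected neighbourhood size -- a
# minimal inspection sketch using its standard best_params_ attribute:
print(model.best_params_)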
# Results
k_rms = np.sqrt(np.mean(np.power((np.array(y_test) - np.array(new_preds)), 2)))
print(k_rms)
test['Predictions'] = 0
test['Predictions'] = new_preds
plt.plot(train['Adj. Close'])
plt.plot(test[['Adj. Close', 'Predictions']])
# Multilayer Perceptron
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(100, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))
model.compile(optimizer='adam', loss='mean_squared_error')
X_train = np.array(x_train)
Y_train = np.array(y_train)
model.fit(X_train, Y_train, epochs=500)
preds = model.predict(x_test)
# Results
mlp_rms = np.sqrt(np.mean(np.power((np.array(y_test)-np.array(preds)),2)))
print(mlp_rms)
test['Predictions'] = 0
test['Predictions'] = preds
plt.plot(train['Adj. Close'])
plt.plot(test[['Adj. Close', 'Predictions']]) | 28 | 125 | 0.721825 | 826 | 5,040 | 4.27724 | 0.251816 | 0.049816 | 0.022078 | 0.014718 | 0.41353 | 0.365129 | 0.293801 | 0.293801 | 0.279649 | 0.271441 | 0 | 0.033511 | 0.105952 | 5,040 | 180 | 126 | 28 | 0.750555 | 0.097222 | 0 | 0.284483 | 0 | 0 | 0.12718 | 0 | 0 | 0 | 0 | 0 | 0.008621 | 1 | 0 | false | 0 | 0.12931 | 0 | 0.12931 | 0.086207 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |