# ----- File: muk_utils/tests/test_search_parents.py (repo: juazisco/gestion_rifa, license: MIT) -----
##########################################################################
#
# Copyright (C) 2017 MuK IT GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##########################################################################
import os
import base64
import logging
from odoo import exceptions
from odoo.tests import common
_path = os.path.dirname(os.path.dirname(__file__))
_logger = logging.getLogger(__name__)
class SearchParentTestCase(common.TransactionCase):

    def setUp(self):
        super(SearchParentTestCase, self).setUp()
        self.model = self.env['res.partner.category']

    def tearDown(self):
        super(SearchParentTestCase, self).tearDown()

    def _evaluate_parent_result(self, parents, records):
        for parent in parents:
            self.assertTrue(
                not parent.parent_id or
                parent.parent_id.id not in records.ids
            )

    def test_search_parents(self):
        records = self.model.search([])
        parents = self.model.search_parents([])
        self._evaluate_parent_result(parents, records)

    def test_search_parents_domain(self):
        records = self.model.search([('id', '!=', 1)])
        parents = self.model.search_parents([('id', '!=', 1)])
        self._evaluate_parent_result(parents, records)

    def test_search_read_parents(self):
        parents = self.model.search_parents([])
        read_names = parents.read(['name'])
        search_names = self.model.search_read_parents([], ['name'])
        self.assertTrue(read_names == search_names)
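
# The class above tests muk_utils' search_parents/search_read_parents model
# helpers. A sketch of a typical invocation (hypothetical database name; the
# flags are Odoo's standard test-runner options):
#
#   odoo-bin -d testdb -i muk_utils --test-enable --stop-after-init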

# ----- File: daal4py/df_regr.py (repo: PivovarA/scikit-learn_bench, license: MIT) -----
# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
import argparse
from bench import (
    parse_args, measure_function_time, load_data, print_output, rmse_score,
    float_or_int, getFPType
)
from daal4py import (
    decision_forest_regression_training,
    decision_forest_regression_prediction,
    engines_mt2203
)


def df_regr_fit(X, y, n_trees=100, seed=12345, n_features_per_node=0,
                max_depth=0, min_impurity=0, bootstrap=True):
    fptype = getFPType(X)

    features_per_node = X.shape[1]
    if n_features_per_node > 0 and n_features_per_node <= features_per_node:
        features_per_node = n_features_per_node

    engine = engines_mt2203(seed=seed, fptype=fptype)

    algorithm = decision_forest_regression_training(
        fptype=fptype,
        method='defaultDense',
        nTrees=n_trees,
        observationsPerTreeFraction=1.,
        featuresPerNode=features_per_node,
        maxTreeDepth=max_depth,
        minObservationsInLeafNode=1,
        engine=engine,
        impurityThreshold=min_impurity,
        varImportance='MDI',
        resultsToCompute='',
        memorySavingMode=False,
        bootstrap=bootstrap
    )

    df_regr_result = algorithm.compute(X, y)
    return df_regr_result


def df_regr_predict(X, training_result):
    algorithm = decision_forest_regression_prediction(
        fptype='float'
    )
    result = algorithm.compute(X, training_result.model)
    return result.prediction


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='daal4py random forest '
                                                 'regression benchmark')
    parser.add_argument('--criterion', type=str, default='mse',
                        choices=('mse',),
                        help='The function to measure the quality of a split')
    parser.add_argument('--num-trees', type=int, default=100,
                        help='Number of trees in the forest')
    parser.add_argument('--max-features', type=float_or_int, default=0,
                        help='Upper bound on features used at each split')
    parser.add_argument('--max-depth', type=int, default=0,
                        help='Upper bound on depth of constructed trees')
    parser.add_argument('--min-samples-split', type=float_or_int, default=2,
                        help='Minimum samples number for node splitting')
    parser.add_argument('--max-leaf-nodes', type=int, default=None,
                        help='Grow trees with max_leaf_nodes in best-first fashion '
                             'if it is not None')
    parser.add_argument('--min-impurity-decrease', type=float, default=0.,
                        help='Needed impurity decrease for node splitting')
    parser.add_argument('--no-bootstrap', dest='bootstrap', default=True,
                        action='store_false', help='Disable bootstrapping')
    parser.add_argument('--use-sklearn-class', action='store_true',
                        help='Force use of '
                             'sklearn.ensemble.RandomForestRegressor')
    params = parse_args(parser, prefix='daal4py')

    # Load data
    X_train, X_test, y_train, y_test = load_data(
        params, add_dtype=True, label_2d=True)

    columns = ('batch', 'arch', 'prefix', 'function', 'threads', 'dtype',
               'size', 'num_trees', 'time')

    if isinstance(params.max_features, float):
        params.max_features = int(X_train.shape[1] * params.max_features)

    # Time fit and predict
    fit_time, res = measure_function_time(
        df_regr_fit, X_train, y_train,
        n_trees=params.num_trees,
        n_features_per_node=params.max_features,
        max_depth=params.max_depth,
        min_impurity=params.min_impurity_decrease,
        bootstrap=params.bootstrap,
        seed=params.seed,
        params=params)

    yp = df_regr_predict(X_train, res)
    train_rmse = rmse_score(yp, y_train)

    predict_time, yp = measure_function_time(
        df_regr_predict, X_test, res, params=params)
    test_rmse = rmse_score(yp, y_test)

    print_output(library='daal4py', algorithm='decision_forest_regression',
                 stages=['training', 'prediction'], columns=columns,
                 params=params, functions=['df_regr.fit', 'df_regr.predict'],
                 times=[fit_time, predict_time], accuracy_type='rmse',
                 accuracies=[train_rmse, test_rmse], data=[X_train, X_test])
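
# Example invocation (a sketch: bench.parse_args registers additional shared
# dataset/precision flags defined in bench.py that are not shown above):
#
#   python df_regr.py --num-trees 100 --max-depth 8 --max-features 0.5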

# ----- File: src/feature/eme_data_loader.py (repo: 0shimax/Pytorch-DRN, license: MIT) -----
from pathlib import Path
import pandas as pd
import numpy as np
import random
import torch
from torch.utils.data import Dataset
from sklearn.model_selection import train_test_split
def get_id_columns(df):
    user_and_target_id_columns = ["user_id", "target_user_id"]
    return df[user_and_target_id_columns]


def extranct_interacted_user_rows(df):
    tmp = df[["user_id", "label"]].groupby('user_id').sum()
    interacted_user_id = tmp[tmp.label > 0].reset_index()
    return df[df.user_id.isin(interacted_user_id.user_id)]


def get_ethnicity_columns(df):
    ethnicity_user = df.ethnicity_user
    ethnicity_target = df.ethnicity_target
    ethnicity_columns = [c for c in df.columns if "ethnicity_" in c]
    df.drop(ethnicity_columns, axis=1, inplace=True)
    df = df.assign(ethnicity_user=ethnicity_user,
                   ethnicity_target=ethnicity_target)
    return df


def calculate_user_features(df):
    c_id = 'user_id'
    user_feature_columns = [c for c in df.columns
                            if '_user' in c and 'target_user_id' != c]
    user_features = df.groupby(c_id)[user_feature_columns].head(1)
    user_features[c_id] = df.loc[user_features.index].user_id
    return user_features


def calculate_target_features(df):
    c_id = 'target_user_id'
    target_feature_columns = \
        [c for c in df.columns.values if '_target' in c]
    target_features = df[[c_id] + target_feature_columns]
    return target_features


def calcurate_target_clicked(df):
    result = df[['target_user_id', 'label']]\
        .groupby('target_user_id')\
        .agg(['sum', 'count'])\
        .reset_index()
    result.columns = ['target_user_id', 'label_sum', 'label_cnt']
    result = result.assign(label_rate=result.label_sum/result.label_cnt)
    result.index = df.groupby('target_user_id').head(1).index
    return result
def get_target_ids_for_train_input(squewed_user_target_labels,
                                   valued_target_idxs, n_high, n_low):
    # Return every candidate id (note: this early return makes the high/low
    # sampling logic below unreachable).
    return squewed_user_target_labels.index.values

    n_total = n_high + n_low
    high_rate_flag = squewed_user_target_labels.label > 0
    if len(valued_target_idxs) >= n_total:
        idxs = np.random.permutation(len(valued_target_idxs))[:n_total]
        return valued_target_idxs[idxs]

    query = ~squewed_user_target_labels.index.isin(valued_target_idxs)
    query &= high_rate_flag
    n_rest = n_total - len(valued_target_idxs)
    if n_rest == 1:
        hight = squewed_user_target_labels[query].sample(n_rest).index.values
        return np.concatenate([valued_target_idxs, hight])

    m_n_high = int(n_rest * n_high / n_total)
    m_n_low = n_rest - m_n_high
    hight = squewed_user_target_labels[query].sample(m_n_high, replace=True).index.values
    low = squewed_user_target_labels[
        squewed_user_target_labels.label == 0].sample(m_n_low, replace=True).index.values
    idxs = np.concatenate([valued_target_idxs, hight, low])
    return idxs


def get_target_ids_for_test_input(squewed_user_target_labels, n_high, n_low):
    # Return every candidate id (same short-circuit as above; the sampling
    # below is currently unreachable).
    return squewed_user_target_labels.index.values

    n_total = n_high + n_low
    high_rate_flag = squewed_user_target_labels.label > 0
    if sum(high_rate_flag) < n_high:
        hight = squewed_user_target_labels[high_rate_flag].index.values
        n_low = n_total - sum(high_rate_flag)
    else:
        hight = squewed_user_target_labels[high_rate_flag].sample(n_high).index.values
    low = squewed_user_target_labels[
        squewed_user_target_labels.label == 0].sample(n_low, replace=True).index.values
    idxs = np.concatenate([hight, low])
    return idxs


def get_target_ids_for_input(squewed_user_target_labels,
                             valued_target_idxs, n_high, n_low, train=True):
    if train:
        return get_target_ids_for_train_input(squewed_user_target_labels, valued_target_idxs, n_high, n_low)
    else:
        return get_target_ids_for_test_input(squewed_user_target_labels, n_high, n_low)
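
# Illustrative call (a sketch; the argument values are made up): with the
# early returns removed, get_target_ids_for_input(labels, valued_idxs,
# n_high=5, n_low=45, train=True) would yield 50 candidate indices biased
# toward targets the user actually interacted with; as written, it simply
# returns every candidate index.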

class OwnDataset(Dataset):
    def __init__(self, file_name, root_dir, n_high, n_low,
                 subset=False, transform=None, train=True, split_seed=555):
        super().__init__()
        print("Train:", train)
        self.file_name = file_name
        self.root_dir = root_dir
        self.transform = transform
        self.n_high = n_high
        self.n_low = n_low
        self._train = train
        self.split_seed = split_seed
        self.prepare_data()
        self.user_features_orig = self.user_features

    def __len__(self):
        return len(self.user_and_target_ids)

    def reset(self):
        self.user_features = self.user_features_orig

    def prepare_data(self):
        data_path = Path(self.root_dir, self.file_name)
        eme_data = pd.read_csv(data_path)
        extracted_interacted_rows = extranct_interacted_user_rows(eme_data)
        unique_user_ids = extracted_interacted_rows.user_id.unique()
        train_user_ids, test_user_ids = train_test_split(unique_user_ids,
                                                         random_state=self.split_seed,
                                                         shuffle=True,
                                                         test_size=0.2)
        if self._train:
            _data = eme_data[eme_data.user_id.isin(train_user_ids)]
            self.user_features = calculate_user_features(_data)
            self.user_and_target_ids = get_id_columns(_data)
            self.rewards = eme_data[["user_id", "target_user_id", "label"]]
            self.target_features_all = calculate_target_features(eme_data)  # _data
        else:
            _data = eme_data[eme_data.user_id.isin(test_user_ids)]
            self.user_and_target_ids = get_id_columns(_data)
            self.user_features = calculate_user_features(_data)
            self.rewards = eme_data[["user_id", "target_user_id", "label"]]
            self.target_features_all = calculate_target_features(eme_data)
        print("user", self.user_features.shape)
        print("target", len(self.target_features_all.target_user_id.unique()))

    def __getitem__(self, idx):
        ids = self.user_and_target_ids.iloc[idx].values
        current_user_id = ids[0]
        user_feature = self.user_features[self.user_features.user_id == current_user_id]
        user_feature = user_feature.copy().drop("user_id", axis=1)
        user_feature = user_feature.astype(np.float32).values
        user_feature = user_feature.reshape(-1)

        query = (self.rewards.user_id == current_user_id)
        query &= (self.rewards.label == 1)
        valued_target_idxs = self.rewards[query].index.values

        # TODO: rename this variable later
        squewed_user_target_labels = \
            self.rewards.groupby("target_user_id").head(1)
        target_idxs = get_target_ids_for_input(
            squewed_user_target_labels, valued_target_idxs,
            self.n_high, self.n_low, self._train)

        target_features = self.target_features_all.loc[target_idxs].copy().reindex()
        target_ids = target_features.target_user_id.values
        target_features = \
            target_features.copy().drop("target_user_id", axis=1)
        target_features = target_features.astype(np.float32).values

        eliminate_teacher = self.target_features_all.loc[valued_target_idxs].copy().reindex()
        eliminate_teacher_ids = eliminate_teacher.target_user_id.values
        eliminate_teacher_val = target_ids == eliminate_teacher_ids[0]
        for v in eliminate_teacher_ids[1:]:
            eliminate_teacher_val += target_ids == v
        eliminate_teacher_val = eliminate_teacher_val.astype(np.float32)

        return (torch.FloatTensor(user_feature),
                torch.FloatTensor(target_features),
                current_user_id,
                target_ids,
                eliminate_teacher_val)

    def get_reward(self, current_user_id, target_ids):
        query_user = self.rewards.user_id == current_user_id
        query_target = self.rewards.target_user_id.isin(target_ids)
        query = (query_user) & (query_target)
        reward = self.rewards[query].label.values
        if len(reward) == 0:
            return 0.
        else:
            return float(reward.max())

def loader(dataset, batch_size, shuffle=True):
    loader = torch.utils.data.DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=shuffle,
        num_workers=0)
    return loader
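
# Minimal usage sketch (file name, directory, and batch size are assumptions;
# any CSV with the expected user_*/target_* feature columns plus user_id,
# target_user_id, and label columns would work):
#
#   dataset = OwnDataset("eme.csv", "data", n_high=5, n_low=45, train=True)
#   train_loader = loader(dataset, batch_size=1)
#   for user_feat, target_feats, user_id, target_ids, teacher in train_loader:
#       pass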

# ----- File: mp/plot_process.py (repo: RawPikachu/valor, license: MIT) -----
from sql import ValorSQL
from util import guild_name_from_tag
import matplotlib.pyplot as plt
import matplotlib.dates as md
from scipy.interpolate import make_interp_spline
from matplotlib.ticker import MaxNLocator
import numpy as np
from datetime import datetime
import time
def plot_process(lock, opt, query):
    a = []
    b = []
    xfmt = md.DateFormatter('%Y-%m-%d %H:%M:%S')
    fig = plt.figure()
    fig.set_figwidth(20)
    fig.set_figheight(10)
    ax = plt.gca()
    ax.xaxis.set_major_formatter(xfmt)
    plt.xticks(rotation=25)

    data_pts = 0
    for name in opt.guild:
        with lock:
            res = ValorSQL.execute_sync(query % name)
        if opt.split:
            b = np.array([x[2] for x in res])
            a = np.array([x[1] for x in res])
            if opt.moving_average > 1:
                a = np.convolve(a, np.ones(opt.moving_average)/opt.moving_average, mode="valid")
                b = b[:len(b)-opt.moving_average+1]
            if opt.smooth:
                spline = make_interp_spline(b, a)
                b = np.linspace(b.min(), b.max(), 500)
                a = spline(b)
            plt.plot([datetime.fromtimestamp(x) for x in b], a, label=name)
            plt.legend(loc="upper left")
        else:
            for i in range(len(res)):
                if i >= len(a):
                    a.append(0)
                    b.append(res[i][2])
                a[i] += res[i][1]
            a = np.array(a)
            b = np.array(b)
        data_pts += len(res)

    content = "Plot"
    if opt.split:
        content = "Split graph"
    else:
        content = f"""```
Mean: {sum(a)/len(a):.7}
Max: {max(a)}
Min: {min(a)}```"""
        if opt.moving_average > 1:
            a = np.convolve(a, np.ones(opt.moving_average)/opt.moving_average, mode="valid")
            b = b[:len(b)-opt.moving_average+1]
        if opt.smooth:
            spline = make_interp_spline(b, a)
            b = np.linspace(b.min(), b.max(), 500)
            a = spline(b)
        plt.plot([datetime.fromtimestamp(x) for x in b], a)

    ax.xaxis.set_major_locator(MaxNLocator(30))
    plt.title("Online Player Activity")
    plt.ylabel("Player Count")
    plt.xlabel("Date Y-m-d H:M:S")
    fig.savefig("/tmp/valor_guild_plot.png")
    return data_pts, content
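
# Usage sketch (the option object and SQL are hypothetical; the real query and
# options come from the Discord command that spawns this process, and the query
# is expected to yield rows whose second column is a count and third a Unix
# timestamp):
#
#   from argparse import Namespace
#   from multiprocessing import Lock
#   opt = Namespace(guild=["TAG"], split=False, moving_average=1, smooth=False)
#   q = "SELECT guild, count, time FROM activity WHERE guild='%s'"
#   print(plot_process(Lock(), opt, q))  # -> (data_pts, content)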

# ----- File: bot.py (repo: SHIA1204/kyaru, license: Apache-2.0) -----
import os
import shutil
from os import system
import discord
import asyncio
import os.path
import linecache
import datetime
import urllib
import requests
from bs4 import BeautifulSoup
from discord.utils import get
from discord.ext import commands
from discord.ext.commands import CommandNotFound
import logging
import itertools
import sys
import traceback
import random
import itertools
import math
from async_timeout import timeout
from functools import partial
import functools
from youtube_dl import YoutubeDL
import youtube_dl
from io import StringIO
import time
import urllib.request
from gtts import gTTS
from urllib.request import URLError
from urllib.request import HTTPError
from urllib.request import urlopen
from urllib.request import Request, urlopen
from urllib.parse import quote
import re
import warnings
import unicodedata
import json
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client.tools import argparser
##################### Logging ###########################
log_stream = StringIO()
logging.basicConfig(stream=log_stream, level=logging.WARNING)
#ilsanglog = logging.getLogger('discord')
#ilsanglog.setLevel(level = logging.WARNING)
#handler = logging.StreamHandler()
#handler.setFormatter(logging.Formatter('%(asctime)s:%(levelname)s:%(name)s: %(message)s'))
#ilsanglog.addHandler(handler)
#####################################################
def init():
    global command
    command = []
    fc = []

    command_inidata = open('command.ini', 'r', encoding = 'utf-8')
    command_inputData = command_inidata.readlines()

    ############## Music bot command list #####################
    for i in range(len(command_inputData)):
        tmp_command = command_inputData[i][12:].rstrip('\n')
        fc = tmp_command.split(', ')
        command.append(fc)
        fc = []
    del command[0]
    command_inidata.close()
    #print (command)

init()
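
# command.ini format, inferred from the [12:] slice and ', ' split above (the
# exact label text is an assumption): each line carries 12 characters of
# prefix followed by a comma-separated alias list, e.g.
#
#   command 0 = ==들어와, ==소환
#
# and the first parsed entry is discarded by `del command[0]`.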
# Sound file creation function (uses gTTS; the disabled block below used
# Naver Clova's male voice instead)
async def MakeSound(saveSTR, filename):
    tts = gTTS(saveSTR, lang = 'ko')
    tts.save('./' + filename + '.wav')
    '''
    try:
        encText = urllib.parse.quote(saveSTR)
        urllib.request.urlretrieve("https://clova.ai/proxy/voice/api/tts?text=" + encText + "%0A&voicefont=1&format=wav",filename + '.wav')
    except Exception as e:
        print (e)
        tts = gTTS(saveSTR, lang = 'ko')
        tts.save('./' + filename + '.wav')
        pass
    '''
# Sound file playback function
async def PlaySound(voiceclient, filename):
    source = discord.FFmpegPCMAudio(filename)
    try:
        voiceclient.play(source)
    except discord.errors.ClientException:
        while voiceclient.is_playing():
            await asyncio.sleep(1)
    while voiceclient.is_playing():
        await asyncio.sleep(1)
    voiceclient.stop()
    source.cleanup()
# Silence useless bug reports messages
youtube_dl.utils.bug_reports_message = lambda: ''
class VoiceError(Exception):
    pass


class YTDLError(Exception):
    pass

class YTDLSource(discord.PCMVolumeTransformer):
    YTDL_OPTIONS = {
        'format': 'bestaudio/best',
        'extractaudio': True,
        'audioformat': 'mp3',
        'outtmpl': '%(extractor)s-%(id)s-%(title)s.%(ext)s',
        'restrictfilenames': True,
        'noplaylist': False,
        'nocheckcertificate': True,
        'ignoreerrors': False,
        'logtostderr': False,
        'quiet': True,
        'no_warnings': True,
        'default_search': 'auto',
        'source_address': '0.0.0.0',
        'force-ipv4' : True,
        '-4': True
    }

    FFMPEG_OPTIONS = {
        'before_options': '-reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 5',
        'options': '-vn',
    }

    ytdl = youtube_dl.YoutubeDL(YTDL_OPTIONS)

    def __init__(self, ctx: commands.Context, source: discord.FFmpegPCMAudio, *, data: dict, volume: float = 0.5):
        super().__init__(source, volume)

        self.requester = ctx.author
        self.channel = ctx.channel
        self.data = data

        self.uploader = data.get('uploader')
        self.uploader_url = data.get('uploader_url')
        date = data.get('upload_date')
        self.upload_date = date[6:8] + '.' + date[4:6] + '.' + date[0:4]
        self.title = data.get('title')
        self.thumbnail = data.get('thumbnail')
        self.description = data.get('description')
        self.duration = self.parse_duration(int(data.get('duration')))
        self.tags = data.get('tags')
        self.url = data.get('webpage_url')
        self.views = data.get('view_count')
        self.likes = data.get('like_count')
        self.dislikes = data.get('dislike_count')
        self.stream_url = data.get('url')

    def __str__(self):
        return '**{0.title}** by **{0.uploader}**'.format(self)

    @classmethod
    async def create_source(cls, bot, ctx: commands.Context, search: str, *, loop: asyncio.BaseEventLoop = None):
        loop = loop or asyncio.get_event_loop()

        if "http" not in search:
            partial = functools.partial(cls.ytdl.extract_info, f"ytsearch5:{search}", download=False, process=False)
            data = await loop.run_in_executor(None, partial)

            if data is None:
                raise YTDLError('Couldn\'t find anything that matches `{}`'.format(search))

            emoji_list : list = ["1️⃣", "2️⃣", "3️⃣", "4️⃣", "5️⃣", "🚫"]
            song_list_str : str = ""
            cnt : int = 0
            song_index : int = 0
            for data_info in data["entries"]:
                cnt += 1
                if 'title' not in data_info:
                    data_info['title'] = f"{search} - 제목 정보 없음"
                song_list_str += f"`{cnt}.` [**{data_info['title']}**](https://www.youtube.com/watch?v={data_info['url']})\n"

            embed = discord.Embed(description= song_list_str)
            embed.set_footer(text=f"10초 안에 미선택시 취소됩니다.")
            song_list_message = await ctx.send(embed = embed)
            for emoji in emoji_list:
                await song_list_message.add_reaction(emoji)

            def reaction_check(reaction, user):
                return (reaction.message.id == song_list_message.id) and (user.id == ctx.author.id) and (str(reaction) in emoji_list)

            try:
                reaction, user = await bot.wait_for('reaction_add', check = reaction_check, timeout = 10)
            except asyncio.TimeoutError:
                reaction = "🚫"

            for emoji in emoji_list:
                await song_list_message.remove_reaction(emoji, bot.user)
            await song_list_message.delete(delay = 10)

            if str(reaction) == "1️⃣":
                song_index = 0
            elif str(reaction) == "2️⃣":
                song_index = 1
            elif str(reaction) == "3️⃣":
                song_index = 2
            elif str(reaction) == "4️⃣":
                song_index = 3
            elif str(reaction) == "5️⃣":
                song_index = 4
            else:
                return False

            result_url = f"https://www.youtube.com/watch?v={data['entries'][song_index]['url']}"
        else:
            result_url = search

        webpage_url = result_url
        partial = functools.partial(cls.ytdl.extract_info, webpage_url, download=False)
        processed_info = await loop.run_in_executor(None, partial)

        if processed_info is None:
            raise YTDLError('Couldn\'t fetch `{}`'.format(webpage_url))

        if 'entries' not in processed_info:
            info = processed_info
        else:
            info = None
            while info is None:
                try:
                    info = processed_info['entries'].pop(0)
                except IndexError:
                    raise YTDLError('Couldn\'t retrieve any matches for `{}`'.format(webpage_url))

        return cls(ctx, discord.FFmpegPCMAudio(info['url'], **cls.FFMPEG_OPTIONS), data=info)

    @staticmethod
    def parse_duration(duration: int):
        return time.strftime('%H:%M:%S', time.gmtime(duration))
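
    # parse_duration formats a track length in seconds as HH:MM:SS, e.g.
    # parse_duration(3725) -> '01:02:05'. Note that time.gmtime folds the
    # value into a single day, so durations of 24 hours or more wrap around.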

class Song:
    __slots__ = ('source', 'requester')

    def __init__(self, source: YTDLSource):
        self.source = source
        self.requester = source.requester

    def create_embed(self):
        embed = (discord.Embed(title='Now playing',
                               description='**```fix\n{0.source.title}\n```**'.format(self),
                               color=discord.Color.blurple())
                 .add_field(name='Duration', value=self.source.duration)
                 .add_field(name='Requested by', value=self.requester.mention)
                 .add_field(name='Uploader', value='[{0.source.uploader}]({0.source.uploader_url})'.format(self))
                 .add_field(name='URL', value='[Click]({0.source.url})'.format(self))
                 .set_thumbnail(url=self.source.thumbnail))

        return embed

class SongQueue(asyncio.Queue):
    def __getitem__(self, item):
        if isinstance(item, slice):
            return list(itertools.islice(self._queue, item.start, item.stop, item.step))
        else:
            return self._queue[item]

    def __iter__(self):
        return self._queue.__iter__()

    def __len__(self):
        return self.qsize()

    def clear(self):
        self._queue.clear()

    def shuffle(self):
        random.shuffle(self._queue)

    def select(self, index : int, loop : bool = False):
        for i in range(index - 1):
            if not loop:
                del self._queue[0]
            else:
                self._queue.append(self._queue[0])
                del self._queue[0]

    def remove(self, index: int):
        del self._queue[index]

class VoiceState:
    def __init__(self, bot: commands.Bot, ctx: commands.Context):
        self.bot = bot
        self._ctx = ctx
        self._cog = ctx.cog

        self.current = None
        self.voice = None
        self.next = asyncio.Event()
        self.songs = SongQueue()

        self._loop = False
        self._volume = 0.5
        self.skip_votes = set()

        self.audio_player = bot.loop.create_task(self.audio_player_task())

    def __del__(self):
        self.audio_player.cancel()

    @property
    def loop(self):
        return self._loop

    @loop.setter
    def loop(self, value: bool):
        self._loop = value

    @property
    def volume(self):
        return self._volume

    @volume.setter
    def volume(self, value: float):
        self._volume = value

    @property
    def is_playing(self):
        return self.voice and self.current

    async def audio_player_task(self):
        while True:
            self.next.clear()

            if self.loop and self.current is not None:
                source1 = await YTDLSource.create_source(self.bot, self._ctx, self.current.source.url, loop=self.bot.loop)
                song1 = Song(source1)
                await self.songs.put(song1)
            else:
                pass

            try:
                async with timeout(180):  # 3 minutes
                    self.current = await self.songs.get()
            except asyncio.TimeoutError:
                self.bot.loop.create_task(self.stop())
                return

            self.current.source.volume = self._volume
            self.voice.play(self.current.source, after=self.play_next_song)
            play_info_msg = await self.current.source.channel.send(embed=self.current.create_embed())
            # await play_info_msg.delete(delay = 20)

            await self.next.wait()

    def play_next_song(self, error=None):
        if error:
            raise VoiceError(str(error))

        self.next.set()

    def skip(self):
        self.skip_votes.clear()

        if self.is_playing:
            self.voice.stop()

    async def stop(self):
        self.songs.clear()

        if self.voice:
            await self.voice.disconnect()
            self.voice = None
        self.bot.loop.create_task(self._cog.cleanup(self._ctx))

class Music(commands.Cog):
    def __init__(self, bot: commands.Bot):
        self.bot = bot
        self.voice_states = {}

    def get_voice_state(self, ctx: commands.Context):
        state = self.voice_states.get(ctx.guild.id)
        if not state:
            state = VoiceState(self.bot, ctx)
            self.voice_states[ctx.guild.id] = state

        return state

    def cog_unload(self):
        for state in self.voice_states.values():
            self.bot.loop.create_task(state.stop())

    def cog_check(self, ctx: commands.Context):
        if not ctx.guild:
            raise commands.NoPrivateMessage('This command can\'t be used in DM channels.')

        return True

    async def cog_before_invoke(self, ctx: commands.Context):
        ctx.voice_state = self.get_voice_state(ctx)

    async def cog_command_error(self, ctx: commands.Context, error: commands.CommandError):
        await ctx.send('에러 : {}'.format(str(error)))

    '''
    @commands.command(name='join', invoke_without_subcommand=True)
    async def _join(self, ctx: commands.Context):
        destination = ctx.author.voice.channel
        if ctx.voice_state.voice:
            await ctx.voice_state.voice.move_to(destination)
            return

        ctx.voice_state.voice = await destination.connect()
    '''

    async def cleanup(self, ctx: commands.Context):
        del self.voice_states[ctx.guild.id]
    @commands.command(name=command[0][0], aliases=command[0][1:])  # join voice channel
    #@commands.has_permissions(manage_guild=True)
    async def _summon(self, ctx: commands.Context, *, channel: discord.VoiceChannel = None):
        channel = ctx.message.author.voice.channel
        if not channel and not ctx.author.voice:
            raise VoiceError(':no_entry_sign: 현재 접속중인 음악채널이 없습니다.')

        destination = channel or ctx.author.voice.channel
        if ctx.voice_state.voice:
            await ctx.voice_state.voice.move_to(destination)
            return

        ctx.voice_state.voice = await destination.connect()

    @commands.command(name=command[1][0], aliases=command[1][1:])  # leave voice channel
    #@commands.has_permissions(manage_guild=True)
    async def _leave(self, ctx: commands.Context):
        if not ctx.voice_state.voice:
            return await ctx.send(embed=discord.Embed(title=":no_entry_sign: 현재 접속중인 음악채널이 없습니다.",colour = 0x2EFEF7))

        await ctx.voice_state.stop()
        del self.voice_states[ctx.guild.id]

    @commands.command(name=command[8][0], aliases=command[8][1:])  # adjust volume
    async def _volume(self, ctx: commands.Context, *, volume: int):
        vc = ctx.voice_client

        if not ctx.voice_state.is_playing:
            return await ctx.send(embed=discord.Embed(title=":mute: 현재 재생중인 음악이 없습니다.",colour = 0x2EFEF7))

        if not 0 < volume < 101:
            return await ctx.send(embed=discord.Embed(title=":no_entry_sign: 볼륨은 1 ~ 100 사이로 입력 해주세요.",colour = 0x2EFEF7))

        if vc.source:
            vc.source.volume = volume / 100
        ctx.voice_state.volume = volume / 100
        await ctx.send(embed=discord.Embed(title=f":loud_sound: 볼륨을 {volume}%로 조정하였습니다.",colour = 0x2EFEF7))

    @commands.command(name=command[7][0], aliases=command[7][1:])  # now-playing info
    async def _now(self, ctx: commands.Context):
        await ctx.send(embed=ctx.voice_state.current.create_embed())

    @commands.command(name=command[3][0], aliases=command[3][1:])  # pause playback
    #@commands.has_permissions(manage_guild=True)
    async def _pause(self, ctx: commands.Context):
        if ctx.voice_state.is_playing and ctx.voice_state.voice.is_playing():
            ctx.voice_state.voice.pause()
            await ctx.message.add_reaction('⏸')

    @commands.command(name=command[4][0], aliases=command[4][1:])  # resume playback
    #@commands.has_permissions(manage_guild=True)
    async def _resume(self, ctx: commands.Context):
        if ctx.voice_state.is_playing and ctx.voice_state.voice.is_paused():
            ctx.voice_state.voice.resume()
            await ctx.message.add_reaction('⏯')

    @commands.command(name=command[9][0], aliases=command[9][1:])  # stop playback
    #@commands.has_permissions(manage_guild=True)
    async def _stop(self, ctx: commands.Context):
        ctx.voice_state.songs.clear()

        if ctx.voice_state.is_playing:
            ctx.voice_state.voice.stop()
            await ctx.message.add_reaction('⏹')
    @commands.command(name=command[5][0], aliases=command[5][1:])  # skip the current track
    async def _skip(self, ctx: commands.Context, *, args: int = 1):
        if not ctx.voice_state.is_playing:
            return await ctx.send(embed=discord.Embed(title=':mute: 현재 재생중인 음악이 없습니다.',colour = 0x2EFEF7))

        await ctx.message.add_reaction('⏭')
        if args != 1:
            ctx.voice_state.songs.select(args, ctx.voice_state.loop)
        ctx.voice_state.skip()

        '''
        voter = ctx.message.author
        if voter == ctx.voice_state.current.requester:
            await ctx.message.add_reaction('⏭')
            ctx.voice_state.skip()
        elif voter.id not in ctx.voice_state.skip_votes:
            ctx.voice_state.skip_votes.add(voter.id)
            total_votes = len(ctx.voice_state.skip_votes)

            if total_votes >= 3:
                await ctx.message.add_reaction('⏭')
                ctx.voice_state.skip()
            else:
                await ctx.send('Skip vote added, currently at **{}/3**'.format(total_votes))
        else:
            await ctx.send('```이미 투표하셨습니다.```')
        '''

    @commands.command(name=command[6][0], aliases=command[6][1:])  # show the queue
    async def _queue(self, ctx: commands.Context, *, page: int = 1):
        if len(ctx.voice_state.songs) == 0:
            return await ctx.send(embed=discord.Embed(title=':mute: 재생목록이 없습니다.',colour = 0x2EFEF7))

        items_per_page = 10
        pages = math.ceil(len(ctx.voice_state.songs) / items_per_page)

        start = (page - 1) * items_per_page
        end = start + items_per_page

        queue = ''
        for i, song in enumerate(ctx.voice_state.songs[start:end], start=start):
            queue += '`{0}.` [**{1.source.title}**]({1.source.url})\n'.format(i + 1, song)

        if ctx.voice_state.loop:
            embed = discord.Embed(title = '🔁 Now playing', description='**```fix\n{0.source.title}\n```**'.format(ctx.voice_state.current))
        else:
            embed = discord.Embed(title = 'Now playing', description='**```fix\n{0.source.title}\n```**'.format(ctx.voice_state.current))
        embed.add_field(name ='\u200B\n**{} tracks:**\n'.format(len(ctx.voice_state.songs)), value = f"\u200B\n{queue}")
        embed.set_thumbnail(url=ctx.voice_state.current.source.thumbnail)
        embed.set_footer(text='Viewing page {}/{}'.format(page, pages))
        await ctx.send(embed=embed)
    @commands.command(name=command[11][0], aliases=command[11][1:])  # shuffle the queue
    async def _shuffle(self, ctx: commands.Context):
        if len(ctx.voice_state.songs) == 0:
            return await ctx.send(embed=discord.Embed(title=':mute: 재생목록이 없습니다.',colour = 0x2EFEF7))

        ctx.voice_state.songs.shuffle()
        result = await ctx.send(embed=discord.Embed(title=':twisted_rightwards_arrows: 셔플 완료!',colour = 0x2EFEF7))
        await result.add_reaction('🔀')

    @commands.command(name=command[10][0], aliases=command[10][1:])  # remove a track
    async def _remove(self, ctx: commands.Context, index: int):
        if len(ctx.voice_state.songs) == 0:
            return ctx.send(embed=discord.Embed(title=':mute: 재생목록이 없습니다.',colour = 0x2EFEF7))

        # remove_result = '`{0}.` [**{1.source.title}**] 삭제 완료!\n'.format(index, ctx.voice_state.songs[index - 1])
        result = await ctx.send(embed=discord.Embed(title='`{0}.` [**{1.source.title}**] 삭제 완료!\n'.format(index, ctx.voice_state.songs[index - 1]),colour = 0x2EFEF7))
        ctx.voice_state.songs.remove(index - 1)
        await result.add_reaction('✅')

    @commands.command(name=command[14][0], aliases=command[14][1:])  # toggle loop
    async def _loop(self, ctx: commands.Context):
        if not ctx.voice_state.is_playing:
            return await ctx.send(embed=discord.Embed(title=':mute: 현재 재생중인 음악이 없습니다.',colour = 0x2EFEF7))

        # Inverse boolean value to loop and unloop.
        ctx.voice_state.loop = not ctx.voice_state.loop
        if ctx.voice_state.loop:
            result = await ctx.send(embed=discord.Embed(title=':repeat: 반복재생이 설정되었습니다!',colour = 0x2EFEF7))
        else:
            result = await ctx.send(embed=discord.Embed(title=':repeat_one: 반복재생이 취소되었습니다!',colour = 0x2EFEF7))
        await result.add_reaction('🔁')
    @commands.command(name=command[2][0], aliases=command[2][1:])  # play / enqueue a track
    async def _play(self, ctx: commands.Context, *, search: str):
        if not ctx.voice_state.voice:
            await ctx.invoke(self._summon)

        async with ctx.typing():
            try:
                source = await YTDLSource.create_source(self.bot, ctx, search, loop=self.bot.loop)
                if not source:
                    return await ctx.send(f"노래 재생/예약이 취소 되었습니다.")
            except YTDLError as e:
                await ctx.send('에러가 발생했습니다 : {}'.format(str(e)))
            else:
                song = Song(source)
                await ctx.channel.purge(limit=1)
                await ctx.voice_state.songs.put(song)
                await ctx.send(embed=discord.Embed(title=f':musical_note: 재생목록 추가 : {str(source)}',colour = 0x2EFEF7))

    # @commands.command(name=command[13][0], aliases=command[13][1:])  # purge messages (disabled)
    # async def clear_channel_(self, ctx: commands.Context, *, msg: int = 1):
    #     try:
    #         msg = int(msg)
    #     except:
    #         await ctx.send(f"```지우고 싶은 줄수는 [숫자]로 입력해주세요!```")
    #     await ctx.channel.purge(limit = msg)

    @_summon.before_invoke
    @_play.before_invoke
    async def ensure_voice_state(self, ctx: commands.Context):
        if not ctx.author.voice or not ctx.author.voice.channel:
            raise commands.CommandError('음성채널에 접속 후 사용해주십시오.')

        if ctx.voice_client:
            if ctx.voice_client.channel != ctx.author.voice.channel:
                raise commands.CommandError('봇이 이미 음성채널에 접속해 있습니다.')
    # @commands.command(name=command[12][0], aliases=command[12][1:])  # help menu (disabled)
    # async def menu_(self, ctx):
    #     command_list = ''
    #     command_list += '!인중 : 봇상태가 안좋을 때 쓰세요!' #!
    #     command_list += ','.join(command[0]) + '\n' #!들어가자
    #     command_list += ','.join(command[1]) + '\n' #!나가자
    #     command_list += ','.join(command[2]) + ' [검색어] or [url]\n' #!재생
    #     command_list += ','.join(command[3]) + '\n' #!일시정지
    #     command_list += ','.join(command[4]) + '\n' #!다시재생
    #     command_list += ','.join(command[5]) + ' (숫자)\n' #!스킵
    #     command_list += ','.join(command[6]) + ' 혹은 [명령어] + [숫자]\n' #!목록
    #     command_list += ','.join(command[7]) + '\n' #!현재재생
    #     command_list += ','.join(command[8]) + ' [숫자 1~100]\n' #!볼륨
    #     command_list += ','.join(command[9]) + '\n' #!정지
    #     command_list += ','.join(command[10]) + '\n' #!삭제
    #     command_list += ','.join(command[11]) + '\n' #!섞기
    #     command_list += ','.join(command[14]) + '\n' #!
    #     command_list += ','.join(command[13]) + ' [숫자]\n' #!경주
    #     embed = discord.Embed(
    #         title = "----- 명령어 -----",
    #         description= '```' + command_list + '```',
    #         color=0xff00ff
    #     )
    #     await ctx.send( embed=embed, tts=False)
    ################ Generate a sound file, then play it ################
    @commands.command(name="==인중")
    async def playText_(self, ctx):
        #msg = ctx.message.content[len(ctx.invoked_with)+1:]
        #sayMessage = msg
        await MakeSound('뮤직봇이 많이 아파요. 잠시 후 사용해주세요.', './say' + str(ctx.guild.id))
        await ctx.send("```뮤직봇이 많이 아파요. 잠시 후 사용해주세요.```", tts=False)
        if not ctx.voice_state.voice:
            await ctx.invoke(self._summon)
        if ctx.voice_state.is_playing:
            ctx.voice_state.voice.stop()
        await PlaySound(ctx.voice_state.voice, './say' + str(ctx.guild.id) + '.wav')
        await ctx.voice_state.stop()
        del self.voice_states[ctx.guild.id]
#client = commands.Bot(command_prefix='==', help_command = None)
client = commands.Bot('', help_command = None)
client.add_cog(Music(client))
access_client_id = os.environ["client_id"]
access_client_secret = os.environ["client_secret"]
client_id = access_client_id
client_secret = access_client_secret
def create_soup(url, headers):
    res = requests.get(url, headers=headers)
    res.raise_for_status()
    soup = BeautifulSoup(res.text, 'lxml')
    return soup
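
# Example (sketch): fetch a page and query it with BeautifulSoup selectors:
#
#   soup = create_soup("https://stats.truckersmp.com/",
#                      {'User-Agent': 'Mozilla/5.0'})
#   game_time = soup.find("span", attrs={"id": "game_time"}).get_text()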
@client.event
async def on_ready():
    print(f'로그인 성공: {client.user.name}!')
    game = discord.Game("==명령어")
    await client.change_presence(status=discord.Status.online, activity=game)


@client.event
async def on_command_error(ctx, error):
    if isinstance(error, CommandNotFound):
        return
    elif isinstance(error, discord.ext.commands.MissingRequiredArgument):
        return
    raise error
@client.command(pass_context = True, aliases=['==명령어'])
async def cmd_cmd_abc(ctx):
    await ctx.channel.purge(limit=1)
    emoji_list : list = ["🅰️", "1️⃣", "2️⃣", "3️⃣", "🚫"]
    embed = discord.Embed(title = "캬루봇 명령어 목록", colour = 0x30e08b)
    embed.add_field(name = ':a: 전체', value = '전체 명령어 보기', inline = False)
    embed.add_field(name = ':one: 일반', value = '일반 명령어 보기', inline = False)
    embed.add_field(name = ':two: TruckersMP', value = 'TruckersMP 관련 명령어 보기', inline = False)
    embed.add_field(name = ':three: 음악', value = '음악 재생 관련 명령어 보기', inline = False)
    embed.add_field(name = ':no_entry_sign: 취소', value = '실행 취소', inline = False)
    cmd_message = await ctx.send(embed = embed)
    for emoji in emoji_list:
        await cmd_message.add_reaction(emoji)

    def reaction_check(reaction, user):
        return (reaction.message.id == cmd_message.id) and (user.id == ctx.author.id) and (str(reaction) in emoji_list)

    try:
        reaction, user = await client.wait_for('reaction_add', check = reaction_check, timeout = 10)
    except asyncio.TimeoutError:
        reaction = "🚫"
        for emoji in emoji_list:
            # await cmd_message.remove_reaction(emoji, client.user)
            await cmd_message.delete(delay = 0)
    await cmd_message.delete(delay = 10)

    if str(reaction) == "1️⃣":
        embed1 = discord.Embed(title = "캬루봇 명령어 목록 [일반 명령어]", colour = 0x30e08b)
        embed1.add_field(name = '==지우기 <숫자>', value = '최근 1~99개의 메세지를 삭제합니다.', inline = False)
        embed1.add_field(name = '==내정보', value = '자신의 디스코드 정보를 보여줍니다.', inline = False)
        embed1.add_field(name = '==실검', value = '네이버의 급상승 검색어 TOP10을 보여줍니다.', inline = False)
        embed1.add_field(name = '==날씨 <지역>', value = '<지역>의 날씨를 알려줍니다.', inline = False)
        embed1.add_field(name = '==말해 <text>', value = '<text>를 말합니다.', inline = False)
        embed1.add_field(name = '==번역 <언어> <text>', value = '<text>를 번역합니다.', inline = False)
        embed1.add_field(name = '==유튜브 <text>', value = '유튜브에서 <text>를 검색합니다.', inline = False)
        embed1.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
        await ctx.channel.send(embed = embed1)
    elif str(reaction) == "2️⃣":
        embed2 = discord.Embed(title = "캬루봇 명령어 목록 [TruckersMP]", colour = 0x30e08b)
        embed2.add_field(name = '==T정보, ==ts', value = 'TruckersMP의 정보를 보여줍니다.', inline = False)
        embed2.add_field(name = '==T프로필 <TMPID>, ==tp', value = '해당 TMPID 아이디를 가진 사람의 프로필을 보여줍니다.', inline = False)
        embed2.add_field(name = '==T트래픽순위, ==ttr', value = 'TruckersMP의 트래픽 순위 TOP5를 보여줍니다.', inline = False)
        embed2.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
        await ctx.channel.send(embed = embed2)
    elif str(reaction) == "3️⃣":
        embed3 = discord.Embed(title = "캬루봇 명령어 목록 [음악 재생]", colour = 0x30e08b)
        embed3.add_field(name = '==들어와', value = '봇이 음성 통화방에 들어옵니다.', inline = False)
        embed3.add_field(name = '==나가', value = '봇이 음성 통화방에서 나갑니다.', inline = False)
        embed3.add_field(name = '==재생', value = '봇이 음악을 재생합니다.', inline = False)
        embed3.add_field(name = '==일시정지', value = '현재 재생 중인 음악을 일시 정지합니다.', inline = False)
        embed3.add_field(name = '==다시재생', value = '일시 정지한 음악을 다시 재생합니다.', inline = False)
        embed3.add_field(name = '==스킵', value = '현재 재생 중인 음악을 스킵합니다.', inline = False)
        embed3.add_field(name = '==목록', value = '재생 목록을 보여줍니다.', inline = False)
        embed3.add_field(name = '==현재재생', value = '현재 재생 중인 음악을 보여줍니다.', inline = False)
        embed3.add_field(name = '==볼륨', value = '봇의 볼륨을 조절합니다.', inline = False)
        embed3.add_field(name = '==정지', value = '현재 재생 중인 음악을 정지합니다.', inline = False)
        embed3.add_field(name = '==삭제 <트랙 번호>', value = '재생 목록에 있는 특정 음악을 삭제합니다.', inline = False)
        embed3.add_field(name = '==섞기', value = '재생 목록을 섞습니다.', inline = False)
        embed3.add_field(name = '==반복', value = '현재 재생 중인 음악을 반복 재생합니다.', inline = False)
        embed3.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
        await ctx.channel.send(embed = embed3)
    elif str(reaction) == "🅰️":
        embed6 = discord.Embed(title = "캬루봇 명령어 목록 [전체 명령어]", colour = 0x30e08b)
        embed6.add_field(name = '==지우기 <숫자>', value = '최근 1~99개의 메세지를 삭제합니다.', inline = False)
        embed6.add_field(name = '==내정보', value = '자신의 디스코드 정보를 보여줍니다.', inline = False)
        embed6.add_field(name = '==실검', value = '네이버의 급상승 검색어 TOP10을 보여줍니다.', inline = False)
        embed6.add_field(name = '==날씨 <지역>', value = '<지역>의 날씨를 알려줍니다.', inline = False)
        embed6.add_field(name = '==말해 <내용>', value = '<내용>을 말합니다.', inline = False)
        embed6.add_field(name = '==번역 <언어> <내용>', value = '<내용>을 번역합니다.', inline = False)
        embed6.add_field(name = '==유튜브 <text>', value = '유튜브에서 <text>를 검색합니다.', inline = False)
        embed6.add_field(name = '==T정보, ==ts', value = 'TruckersMP의 서버 정보를 보여줍니다.', inline = False)
        embed6.add_field(name = '==T프로필 <TMPID>, ==tp', value = '해당 TMPID 아이디를 가진 사람의 프로필을 보여줍니다.', inline = False)
        embed6.add_field(name = '==T트래픽순위, ==ttr', value = 'TruckersMP의 트래픽 순위 TOP5를 보여줍니다.', inline = False)
        embed6.add_field(name = '==들어와', value = '봇이 음성 통화방에 들어옵니다.', inline = False)
        embed6.add_field(name = '==나가', value = '봇이 음성 통화방에서 나갑니다.', inline = False)
        embed6.add_field(name = '==재생', value = '봇이 음악을 재생합니다.', inline = False)
        embed6.add_field(name = '==일시정지', value = '현재 재생 중인 음악을 일시 정지합니다.', inline = False)
        embed6.add_field(name = '==다시재생', value = '일시 정지한 음악을 다시 재생합니다.', inline = False)
        embed6.add_field(name = '==스킵', value = '현재 재생 중인 음악을 스킵합니다.', inline = False)
        embed6.add_field(name = '==목록', value = '재생 목록을 보여줍니다.', inline = False)
        embed6.add_field(name = '==현재재생', value = '현재 재생 중인 음악을 보여줍니다.', inline = False)
        embed6.add_field(name = '==볼륨', value = '봇의 볼륨을 조절합니다.', inline = False)
        embed6.add_field(name = '==정지', value = '현재 재생 중인 음악을 정지합니다.', inline = False)
        embed6.add_field(name = '==삭제 <트랙 번호>', value = '재생 목록에 있는 특정 음악을 삭제합니다.', inline = False)
        embed6.add_field(name = '==섞기', value = '재생 목록을 섞습니다.', inline = False)
        embed6.add_field(name = '==반복', value = '현재 재생 중인 음악을 반복 재생합니다.', inline = False)
        embed6.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
        await ctx.channel.send(embed = embed6)
    elif str(reaction) == "🚫":
        await cmd_message.delete(delay = 0)
    else:
        return False
@client.command(pass_context = True, aliases=['==지우기'])
@commands.has_permissions(administrator=True)
async def claer_clear_abc(ctx, amount):
    amount = int(amount)
    if amount < 100:
        await ctx.channel.purge(limit=amount)
        embed = discord.Embed(title=f":put_litter_in_its_place: {amount}개의 채팅을 삭제했어요.",colour = 0x2EFEF7)
        embed.set_footer(text = 'Service provided by RyuZU')
        await ctx.channel.send(embed=embed)
    else:
        await ctx.channel.purge(limit=1)
        await ctx.channel.send(embed=discord.Embed(title=f":no_entry_sign: 숫자를 99 이하로 입력해 주세요.",colour = 0x2EFEF7))
        embed = discord.Embed(title=f":put_litter_in_its_place: {amount}개의 채팅을 삭제했어요.",colour = 0x2EFEF7)
        embed.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
        await ctx.channel.send(embed=embed)
@client.command(aliases=['==핑'])
async def ping_ping_abc(ctx):
    await ctx.channel.send('퐁! `{}ms`'.format(round(client.latency * 1000)))
@client.command(pass_context = True, aliases=['==내정보'])
async def my_my_abc_profile(ctx):
    date = datetime.datetime.utcfromtimestamp(((int(ctx.author.id) >> 22) + 1420070400000) / 1000)
    embed = discord.Embed(title = ctx.author.display_name + "님의 정보", colour = 0x2EFEF7)
    embed.add_field(name = '사용자명', value = ctx.author.name, inline = False)
    embed.add_field(name = '가입일', value = str(date.year) + "년" + str(date.month) + "월" + str(date.day) + "일", inline = False)
    embed.add_field(name = '아이디', value = ctx.author.id, inline = False)
    embed.set_thumbnail(url = ctx.author.avatar_url)
    embed.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
    await ctx.channel.send(embed = embed)
@client.command(pass_context = True, aliases=['==카페'])
async def cafe_cafe_abc(ctx):
    embed = discord.Embed(title = "KCTG 공식 카페", colour = 0x2EFEF7)
    embed.add_field(name = 'https://cafe.naver.com/kctgofficial', value = "\n\u200b", inline = False)
    embed.set_thumbnail(url = "https://cdn.discordapp.com/attachments/740877681209507880/744451389396353106/KCTG_Wolf_1.png")
    embed.set_footer(text = 'Service provided by RyuZU', icon_url="https://cdn.discordapp.com/attachments/740877681209507880/755440825667813497/20200817_184231.jpg")
    await ctx.channel.send(embed = embed)
@client.command(pass_context = True, aliases=['==실검'])
async def search_search_abc_rank(ctx):
    headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Whale/2.8.105.22 Safari/537.36'}
    url = "https://datalab.naver.com/keyword/realtimeList.naver?where=main"
    soup = create_soup(url, headers)
    rank_list = soup.find("ul", attrs={"class":"ranking_list"})
    one = rank_list.find_all("span", attrs={"class":"item_title"})[0].get_text().strip().replace("1", "")  # trending searches, ranks 1-10 in order
    two = rank_list.find_all("span", attrs={"class":"item_title"})[1].get_text().strip().replace("2", "")
    three = rank_list.find_all("span", attrs={"class":"item_title"})[2].get_text().strip().replace("3", "")
    four = rank_list.find_all("span", attrs={"class":"item_title"})[3].get_text().strip().replace("4", "")
    five = rank_list.find_all("span", attrs={"class":"item_title"})[4].get_text().strip().replace("5", "")
    six = rank_list.find_all("span", attrs={"class":"item_title"})[5].get_text().strip().replace("6", "")
    seven = rank_list.find_all("span", attrs={"class":"item_title"})[6].get_text().strip().replace("7", "")
    eight = rank_list.find_all("span", attrs={"class":"item_title"})[7].get_text().strip().replace("8", "")
    nine = rank_list.find_all("span", attrs={"class":"item_title"})[8].get_text().strip().replace("9", "")
    ten = rank_list.find_all("span", attrs={"class":"item_title"})[9].get_text().strip().replace("10", "")
    time = soup.find("span", attrs={"class":"time_txt _title_hms"}).get_text()  # current time
    await ctx.channel.send(f'Ⅰ ``{one}``\nⅡ ``{two}``\nⅢ ``{three}``\nⅣ ``{four}``\nⅤ ``{five}``\nⅥ ``{six}``\nⅦ ``{seven}``\nⅧ ``{eight}``\nⅨ ``{nine}``\nⅩ ``{ten}``\n\n``Time[{time}]``')
@client.command(pass_context = True, aliases=['==날씨'])
async def weather_weather_abc(ctx, arg1):
    headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Whale/2.8.105.22 Safari/537.36'}
    url = f"https://search.naver.com/search.naver?sm=tab_hty.top&where=nexearch&query={arg1}+날씨&oquery=날씨&tqi=U1NQ%2FsprvmsssUNA1MVssssssPN-224813"
    soup = create_soup(url, headers)
    rotate = soup.find("span", attrs={"class":"btn_select"}).get_text()  # region
    cast = soup.find("p", attrs={"class":"cast_txt"}).get_text()  # conditions, e.g. clear / cloudy
    curr_temp = soup.find("p", attrs={"class":"info_temperature"}).get_text().replace("도씨", "")  # current temperature
    sen_temp = soup.find("span", attrs={"class":"sensible"}).get_text().replace("체감온도", "체감")  # feels-like temperature
    min_temp = soup.find("span", attrs={"class":"min"}).get_text()  # minimum temperature
    max_temp = soup.find("span", attrs={"class":"max"}).get_text()  # maximum temperature

    # AM / PM precipitation probability
    morning_rain_rate = soup.find("span", attrs={"class":"point_time morning"}).get_text().strip()  # AM
    afternoon_rain_rate = soup.find("span", attrs={"class":"point_time afternoon"}).get_text().strip()  # PM

    # Fine dust and ultrafine dust
    dust = soup.find("dl", attrs={"class":"indicator"})
    pm10 = dust.find_all("dd")[0].get_text()  # fine dust (PM10)
    pm25 = dust.find_all("dd")[1].get_text()  # ultrafine dust (PM2.5)

    daylist = soup.find("ul", attrs={"class":"list_area _pageList"})
    tomorrow = daylist.find_all("li")[1]

    # Tomorrow's temperature
    to_min_temp = tomorrow.find_all("span")[12].get_text()  # min
    to_max_temp = tomorrow.find_all("span")[14].get_text()  # max

    # Tomorrow's precipitation
    to_morning_rain_rate = daylist.find_all("span", attrs={"class":"point_time morning"})[1].get_text().strip()  # AM
    to_afternoon_rain_rate = daylist.find_all("span", attrs={"class":"point_time afternoon"})[1].get_text().strip()  # PM

    await ctx.channel.send((rotate) + f'\n오늘의 날씨 ``' + (cast) + f'``\n__기온__ ``현재 {curr_temp}({sen_temp}) 최저 {min_temp} 최고 {max_temp}``\n__강수__ ``오전 {morning_rain_rate}`` ``오후 {afternoon_rain_rate}``\n__대기__ ``미세먼지 {pm10}`` ``초미세먼지 {pm25}``\n\n내일의 날씨\n__기온__ ``최저 {to_min_temp}˚`` ``최고 {to_max_temp}˚``\n__강수__ ``오전 {to_morning_rain_rate}`` ``오후 {to_afternoon_rain_rate}``')
@client.command(pass_context = True, aliases=['==말해'])
async def tell_tell_abc(ctx, *, arg):
tell = str(arg)
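# delete the invoking message, then repeat the text as the bot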
await ctx.channel.purge(limit=1)
await ctx.channel.send(tell)
@client.command(pass_context = True, aliases=['==T정보', '==TS', '==t정보', '==ts'])
async def tmp_tmp_abc_server_status(ctx):
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Whale/2.8.105.22 Safari/537.36'}
url = "https://stats.truckersmp.com/"
soup = create_soup(url, headers)
# players currently connected, per server
curr_status = soup.find("div", attrs={"class":"container-fluid"})
counts = curr_status.find_all("div", attrs={"class":"server-count"})
sim1, sim2, sim_us, sim_sgp, arc, pro, pro_arc = [c.get_text().strip() for c in counts[:7]]
# server on/off status ("ONLINE"/"OFFLINE" shortened to "ON"/"OFF")
statuses = curr_status.find_all("div", attrs={"class":"server-status ONLINE"})
sim1_sta, sim2_sta, sim_us_sta, sim_sgp_sta, arc_sta, pro_sta, pro_arc_sta = [
    s.get_text().strip().replace("LINE", "") for s in statuses[:7]
]
# in-game server time
curr_game_time = soup.find("span", attrs={"id":"game_time"}).get_text().strip()
embed = discord.Embed(title = "[ETS2] TruckersMP 서버 현황", colour = 0x2EFEF7)
embed.add_field(name = f'`[{sim1_sta}]` Simulation 1', value = f"{sim1}", inline = False)
embed.add_field(name = f'`[{sim2_sta}]` Simulation 2', value = f"{sim2}", inline = False)
embed.add_field(name = f'`[{sim_us_sta}]` [US] Simulation', value = f"{sim_us}", inline = False)
embed.add_field(name = f'`[{sim_sgp_sta}]` [SGP] Simulation', value = f"{sim_sgp}", inline = False)
embed.add_field(name = f'`[{arc_sta}]` Arcade', value = f"{arc}", inline = False)
embed.add_field(name = f'`[{pro_sta}]` ProMods', value = f"{pro}", inline = False)
embed.add_field(name = f'`[{pro_arc_sta}]` ProMods Arcade', value = f"{pro_arc}", inline = False)
embed.set_footer(text=f"서버 시간: {curr_game_time}")
await ctx.channel.send(embed = embed)
@client.command(pass_context = True, aliases=['==T트래픽순위', '==TTR', '==t트래픽순위', '==ttr'])
async def tmp_tmp_abc_traffic(ctx):
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Whale/2.8.105.22 Safari/537.36'}
url = "https://traffic.krashnz.com/"
soup = create_soup(url, headers)
# real-time traffic top 5
traffic_top = soup.find("ul", attrs={"class":"list-group mb-3"})
rank1 = traffic_top.find_all("div")[1].get_text().strip()
rank2 = traffic_top.find_all("div")[2].get_text().strip()
rank3 = traffic_top.find_all("div")[3].get_text().strip()
rank4 = traffic_top.find_all("div")[4].get_text().strip()
rank5 = traffic_top.find_all("div")[5].get_text().strip()
g_set = soup.find("div", attrs={"class":"row text-center mb-2"})
g_player = g_set.find_all("span", attrs={"class":"stats-number"})[0].get_text().strip()
g_time = g_set.find_all("span", attrs={"class":"stats-number"})[1].get_text().strip()
embed = discord.Embed(title = "[ETS2] TruckersMP 실시간 트래픽 TOP5", colour = 0x2EFEF7)
embed.add_field(name = f'{rank1}', value = "\n\u200b", inline = False)
embed.add_field(name = f'{rank2}', value = "\n\u200b", inline = False)
embed.add_field(name = f'{rank3}', value = "\n\u200b", inline = False)
embed.add_field(name = f'{rank4}', value = "\n\u200b", inline = False)
embed.add_field(name = f'{rank5}', value = f"\n{g_player} players tracked / {g_time} in-game time", inline = False)
await ctx.channel.send(embed = embed)
@client.command(pass_context = True, aliases=['==T프로필', '==TP', '==t프로필', '==tp'])
async def tmp_tmp_abc_user_profile(ctx, arg):
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Whale/2.8.105.22 Safari/537.36'}
url = f"https://truckersmp.com/user/{arg}"
soup = create_soup(url, headers)
# player info
user_status = soup.find("div", attrs={"class":"profile-bio"})
name = user_status.find_all("span")[0].get_text().strip()
check = user_status.find_all("strong")[0].get_text()
if check == "Also known as":
steam = user_status.find_all("span")[3].get_text().strip().replace("Steam ID:", "")
birt = user_status.find_all("span")[5].get_text().strip().replace("Member since:", "")
bans = user_status.find_all("span")[6].get_text().strip().replace("Active bans:", "")
else:
steam = user_status.find_all("span")[2].get_text().strip().replace("Steam ID:", "")
birt = user_status.find_all("span")[4].get_text().strip().replace("Member since:", "")
bans = user_status.find_all("span")[5].get_text().strip().replace("Active bans:", "")
vtc_check = soup.find_all("h2", attrs={"class":"panel-title heading-sm pull-left"})[2].get_text()
if vtc_check == " VTC":
vtc_find = soup.find_all("div", attrs={"class":"panel panel-profile"})[2]
vtc_name = vtc_find.find("h5", attrs={"class":"text-center break-all"}).get_text().strip()
else:
vtc_name = "없음"
# profile image
img = soup.find_all("div", attrs={"class": "col-md-3 md-margin-bottom-40"})[0]
imgs = img.find("img", attrs={"class": "img-responsive profile-img margin-bottom-20 shadow-effect-1"})
prof_image = imgs.get("src")
embed = discord.Embed(title = f"[TruckersMP] {arg}'s 프로필", colour = 0x2EFEF7)
embed.add_field(name = 'Name', value = f"{name}", inline = False)
embed.add_field(name = 'Steam ID', value = f"{steam}", inline = False)
embed.add_field(name = 'Member since', value = f"{birt}", inline = False)
embed.add_field(name = 'Active bans', value = f"{bans}", inline = False)
embed.add_field(name = 'VTC', value = f"{vtc_name}", inline = False)
embed.set_thumbnail(url=prof_image)
await ctx.channel.send(embed = embed)
@client.command(aliases=['==번역'])
async def _translator_abc(ctx, arg, *, content):
content = str(content)
if arg[0] == '한':
langso = "Korean"
so = "ko"
elif arg[0] == '영':
langso = "English"
so = "en"
elif arg[0] == '일':
langso = "Japanese"
so = "ja"
elif arg[0] == '중':
langso = "Chinese"
so = "zh-CN"
else:
    # `pass` left langso/so unbound and crashed later with NameError; reject unknown codes instead
    await ctx.channel.send("지원하지 않는 언어입니다.")
    return
if arg[1] == '한':
langta = "Korean"
ta = "ko"
elif arg[1] == '영':
langta = "English"
ta = "en"
elif arg[1] == '일':
langta = "Japanese"
ta = "ja"
elif arg[1] == '중':
langta = "Chinese"
ta = "zh-CN"
else:
    # `pass` left langta/ta unbound and crashed later with NameError; reject unknown codes instead
    await ctx.channel.send("지원하지 않는 언어입니다.")
    return
url = "https://openapi.naver.com/v1/papago/n2mt"
# spacing: (original note) split the text, then re-join the [1:] tail with a for loop
trsText = str(content)
try:
if len(trsText) == 1:
await ctx.channel.send("단어 혹은 문장을 입력해주세요.")
else:
# the original rebuilt the string character by character, which was a no-op; a plain strip is equivalent
sourcetext = trsText.strip()
combineword = quote(sourcetext)
dataParmas = f"source={so}&target={ta}&text=" + combineword
request = Request(url)
request.add_header("X-Naver-Client-Id", client_id)
request.add_header("X-Naver-Client-Secret", client_secret)
response = urlopen(request, data=dataParmas.encode("utf-8"))
responsedCode = response.getcode()
if (responsedCode == 200):
response_body = response.read()
# response_body -> byte string : decode to utf-8
api_callResult = response_body.decode('utf-8')
# JSON data will be printed as string type. So need to make it back to type JSON(like dictionary)
api_callResult = json.loads(api_callResult)
#번역 결과
translatedText = api_callResult['message']['result']["translatedText"]
embed = discord.Embed(title=f"번역 ┃ {langso} → {langta}", description="", color=0x2e9fff)
embed.add_field(name=f"{langso}", value=sourcetext, inline=False)
embed.add_field(name=f"{langta}", value=translatedText, inline=False)
embed.set_thumbnail(url="https://cdn.discordapp.com/attachments/740877681209507880/755471340227526706/papago_og.png")
embed.set_footer(text="Provided by Naver Open API",
icon_url='https://cdn.discordapp.com/attachments/740877681209507880/755471340227526706/papago_og.png')
await ctx.channel.send(embed=embed)
else:
await ctx.channel.send("Error Code : " + responsedCode)
except HTTPError as e:
await ctx.channel.send("번역 실패. HTTP에러 발생.")
@client.command(pass_context = True, aliases=['==유튜브'])
async def _youtube_abc_search(ctx, * , arg):
arg_title = str(arg)
arg = str(arg).replace(" ", "%20")
DEVELOPER_KEY = os.environ["DEVELOPER_KEY"]
YOUTUBE_API_SERVICE_NAME="youtube"
YOUTUBE_API_VERSION="v3"
youtube = build(YOUTUBE_API_SERVICE_NAME,YOUTUBE_API_VERSION,developerKey=DEVELOPER_KEY)
search_response = youtube.search().list(
q = f"{arg_title}",
order = "relevance",
part = "snippet",
maxResults = 6
).execute()
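# NOTE: the indices below start at 1, so items[0] (the top search result) is never shown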
thumbnail_img = search_response['items'][1]['snippet']['thumbnails']['high']['url']
# YouTube titles come back HTML-escaped; the original replace('"', '"') calls were
# no-ops, presumably mangled from '&quot;'/'&#39;' entity unescaping
title1 = search_response['items'][1]['snippet']['title'].replace('&quot;', '"').replace('&#39;', "'")
title2 = search_response['items'][2]['snippet']['title'].replace('&quot;', '"').replace('&#39;', "'")
title3 = search_response['items'][3]['snippet']['title'].replace('&quot;', '"').replace('&#39;', "'")
title4 = search_response['items'][4]['snippet']['title'].replace('&quot;', '"').replace('&#39;', "'")
title5 = search_response['items'][5]['snippet']['title'].replace('&quot;', '"').replace('&#39;', "'")
link = "https://www.youtube.com/watch?v="
link1 = link + search_response['items'][1]['id']['videoId']
link2 = link + search_response['items'][2]['id']['videoId']
link3 = link + search_response['items'][3]['id']['videoId']
link4 = link + search_response['items'][4]['id']['videoId']
link5 = link + search_response['items'][5]['id']['videoId']
url = f"https://www.youtube.com/results?search_query={arg}"
embed = discord.Embed(title = f":movie_camera: {arg_title} 검색 결과", colour = 0xb30e11)
embed.set_author(name = '더보기', url = url)
embed.add_field(name = "\n\u200b", value = f'**1. [{title1}]({link1})**', inline = False)
embed.add_field(name = "\n\u200b", value = f'**2. [{title2}]({link2})**', inline = False)
embed.add_field(name = "\n\u200b", value = f'**3. [{title3}]({link3})**', inline = False)
embed.add_field(name = "\n\u200b", value = f'**4. [{title4}]({link4})**', inline = False)
embed.add_field(name = "\n\u200b", value = f'**5. [{title5}]({link5})**\n\u200b', inline = False)
embed.set_thumbnail(url=thumbnail_img)
embed.set_footer(text='Provided by Youtube API')
await ctx.channel.send(embed = embed)
access_token = os.environ["BOT_TOKEN"]
client.run(access_token)
| 43.140399 | 374 | 0.659752 | 6,782 | 47,627 | 4.506488 | 0.143763 | 0.021987 | 0.032981 | 0.018912 | 0.491608 | 0.43749 | 0.358538 | 0.331577 | 0.30478 | 0.259759 | 0 | 0.032297 | 0.161358 | 47,627 | 1,103 | 375 | 43.17951 | 0.730985 | 0.058538 | 0 | 0.154501 | 0 | 0.014599 | 0.21004 | 0.015797 | 0 | 0 | 0.005343 | 0 | 0 | 1 | 0.03528 | false | 0.019465 | 0.051095 | 0.010949 | 0.13747 | 0.001217 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf0ef43a8bc52fd3f88dd05dc9e8f4a26b23551b | 760 | py | Python | deploy.py | ksksksks-dev/Solidity-Demo | 572f26efdfcaeb8721cf9f98c08205dd344848b3 | [
"MIT"
] | null | null | null | deploy.py | ksksksks-dev/Solidity-Demo | 572f26efdfcaeb8721cf9f98c08205dd344848b3 | [
"MIT"
] | null | null | null | deploy.py | ksksksks-dev/Solidity-Demo | 572f26efdfcaeb8721cf9f98c08205dd344848b3 | [
"MIT"
] | 1 | 2021-10-02T07:23:28.000Z | 2021-10-02T07:23:28.000Z | import json
import solcx
from solcx import compile_standard

# solcx.install_solc()

with open("./SimpleStorage.sol", "r") as file:
    simple_storage_file = file.read()

compiled_sol = compile_standard(
    {
        "language": "Solidity",
        "sources": {"SimpleStorage.sol": {"content": simple_storage_file}},
        "settings": {
            "outputSelection": {
                "*": {"*": ["abi", "metadata", "evm.bytecode", "evm.sourceMap"]}
            }
        },
    },
)

with open("./compiled_code.json", "w") as file:
    json.dump(compiled_sol, file)

bytecode = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["evm"][
    "bytecode"
]["object"]
abi = compiled_sol["contracts"]["SimpleStorage.sol"]["SimpleStorage"]["abi"]
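
# --- Added sketch (not part of the original file): deploying the compiled
# contract with web3.py (v5+). Assumes a local dev node such as Ganache at
# http://127.0.0.1:8545 with unlocked, funded accounts.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))
SimpleStorage = w3.eth.contract(abi=abi, bytecode=bytecode)
tx_hash = SimpleStorage.constructor().transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Deployed at:", receipt.contractAddress)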
| 25.333333 | 82 | 0.610526 | 76 | 760 | 5.947368 | 0.460526 | 0.141593 | 0.075221 | 0.146018 | 0.216814 | 0.216814 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203947 | 760 | 29 | 83 | 26.206897 | 0.747107 | 0.026316 | 0 | 0 | 0 | 0 | 0.334688 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.136364 | 0 | 0.136364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf10b831b724d9102e64bddbd11566c602b17ffc | 2,185 | py | Python | test_clouddb/test_instance.py | adregner/python-clouddb | 6c77261a0e9cda221980c9240c7fffc93a78f7f7 | [
"X11"
] | 1 | 2018-05-21T23:09:36.000Z | 2018-05-21T23:09:36.000Z | test_clouddb/test_instance.py | adregner/python-clouddb | 6c77261a0e9cda221980c9240c7fffc93a78f7f7 | [
"X11"
] | null | null | null | test_clouddb/test_instance.py | adregner/python-clouddb | 6c77261a0e9cda221980c9240c7fffc93a78f7f7 | [
"X11"
] | null | null | null |
"""Primary testing suite for clouddb.models.instance.
This code is licensed under the MIT license. See COPYING for more details."""
import time
import unittest
import clouddb
import test_clouddb
CLOUDDB_TEST_INSTANCE_OBJECT = None
CLOUDDB_TEST_BASELINE_INSTANCE_COUNT = None
CLOUDDB_TEST_INSTANCE_NAME = "testsuite-ci-%d" % time.time()
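# module-level state shared across the ordered test cases below (see suite())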
class InstanceBaseline(test_clouddb.BaseTestCase):
def test_instance_list_baseline(self):
instances = self.raxdb.instances()
self.assertIsInstance(instances, list)
test_clouddb.test_instance.CLOUDDB_TEST_BASELINE_INSTANCE_COUNT = len(instances)
class InstanceCreate(test_clouddb.BaseTestCase):
def test_create_instance(self):
test_clouddb.test_instance.CLOUDDB_TEST_INSTANCE_OBJECT = \
self.raxdb.create_instance(CLOUDDB_TEST_INSTANCE_NAME, 1, 1, wait=True)
self.assertIsInstance(test_clouddb.test_instance.CLOUDDB_TEST_INSTANCE_OBJECT,
clouddb.models.instance.Instance)
class InstanceListGet(test_clouddb.BaseTestCase):
def test_instance_list(self):
instances = self.raxdb.instances()
self.assertIsInstance(instances, list)
self.assertEqual(len(instances),
test_clouddb.test_instance.CLOUDDB_TEST_BASELINE_INSTANCE_COUNT + 1)
self.assertIsInstance(instances[-1], clouddb.models.instance.Instance)
class InstanceDestroy(test_clouddb.BaseTestCase):
def test_instance_remove(self):
test_clouddb.test_instance.CLOUDDB_TEST_INSTANCE_OBJECT.delete(wait=True)
class InstanceListFinal(test_clouddb.BaseTestCase):
def test_instance_list_baseline_again(self):
instances = self.raxdb.instances()
self.assertEqual(len(instances),
test_clouddb.test_instance.CLOUDDB_TEST_BASELINE_INSTANCE_COUNT)
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(InstanceBaseline))
suite.addTest(unittest.makeSuite(InstanceCreate))
suite.addTest(unittest.makeSuite(InstanceListGet))
suite.addTest(unittest.makeSuite(InstanceDestroy))
suite.addTest(unittest.makeSuite(InstanceListFinal))
return suite
if __name__ == "__main__":
unittest.main()
| 37.672414 | 88 | 0.769794 | 250 | 2,185 | 6.432 | 0.232 | 0.109453 | 0.141791 | 0.085821 | 0.522388 | 0.441542 | 0.398632 | 0.372512 | 0.280473 | 0.10199 | 0 | 0.002142 | 0.145538 | 2,185 | 57 | 89 | 38.333333 | 0.859132 | 0.058124 | 0 | 0.162791 | 0 | 0 | 0.011214 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 1 | 0.139535 | false | 0 | 0.093023 | 0 | 0.372093 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf12d221f01553d46a5821f0b5720d8d94341b9e | 3,327 | py | Python | examples/tensorflow/nlp/bert_large_squad/tune_squad.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 100 | 2020-12-01T02:40:12.000Z | 2021-09-09T08:14:22.000Z | examples/tensorflow/nlp/bert_large_squad/tune_squad.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 25 | 2021-01-05T00:16:17.000Z | 2021-09-10T03:24:01.000Z | examples/tensorflow/nlp/bert_large_squad/tune_squad.py | kevinintel/neural-compressor | b57645566aeff8d3c18dc49d2739a583c072f940 | [
"Apache-2.0"
] | 25 | 2020-12-01T19:07:08.000Z | 2021-08-30T14:20:07.000Z | #!/usr/bin/env python
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Run BERT on SQuAD 1.1 and SQuAD 2.0."""
import tensorflow as tf
import numpy as np
flags = tf.compat.v1.flags
FLAGS = flags.FLAGS
## Required parameters
flags.DEFINE_string(
'input_model', None, 'Run inference with specified pb graph.')
flags.DEFINE_string(
'output_model', None, 'The output model of the quantized model.')
flags.DEFINE_string(
'mode', 'performance', 'define benchmark mode for accuracy or performance')
flags.DEFINE_bool(
'tune', False, 'whether to tune the model')
flags.DEFINE_bool(
'benchmark', False, 'whether to benchmark the model')
flags.DEFINE_string(
'config', 'bert.yaml', 'yaml configuration of the model')
flags.DEFINE_bool(
'strip_iterator', False, 'whether to strip the iterator of the model')
def strip_iterator(graph_def):
from neural_compressor.adaptor.tf_utils.util import strip_unused_nodes
input_node_names = ['input_ids', 'input_mask', 'segment_ids']
output_node_names = ['unstack']
# create the placeholder and merge with the graph
with tf.compat.v1.Graph().as_default() as g:
input_ids = tf.compat.v1.placeholder(tf.int32, shape=(None,384), name="input_ids")
input_mask = tf.compat.v1.placeholder(tf.int32, shape=(None,384), name="input_mask")
segment_ids = tf.compat.v1.placeholder(tf.int32, shape=(None,384), name="segment_ids")
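# the three placeholders above are registered in graph g by side effect; after
# re-importing graph_def they can stand in for the stripped iterator inputs by name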
tf.import_graph_def(graph_def, name='')
graph_def = g.as_graph_def()
# change the input from iterator to placeholder
for node in graph_def.node:
for idx, in_tensor in enumerate(node.input):
if 'IteratorGetNext:0' == in_tensor or 'IteratorGetNext' == in_tensor:
node.input[idx] = 'input_ids'
if 'IteratorGetNext:1' in in_tensor:
node.input[idx] = 'input_mask'
if 'IteratorGetNext:2' in in_tensor:
node.input[idx] = 'segment_ids'
graph_def = strip_unused_nodes(graph_def, input_node_names, output_node_names)
return graph_def
def main(_):
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
if FLAGS.benchmark:
from neural_compressor.experimental import Benchmark
evaluator = Benchmark(FLAGS.config)
evaluator.model = FLAGS.input_model
evaluator(FLAGS.mode)
elif FLAGS.tune:
from neural_compressor.experimental import Quantization
quantizer = Quantization(FLAGS.config)
quantizer.model = FLAGS.input_model
q_model = quantizer()
if FLAGS.strip_iterator:
q_model.graph_def = strip_iterator(q_model.graph_def)
q_model.save(FLAGS.output_model)
if __name__ == "__main__":
tf.compat.v1.app.run()
| 36.56044 | 94 | 0.703937 | 468 | 3,327 | 4.839744 | 0.326923 | 0.038852 | 0.03532 | 0.025166 | 0.175717 | 0.121854 | 0.065342 | 0.065342 | 0.065342 | 0.065342 | 0 | 0.014574 | 0.195672 | 3,327 | 90 | 95 | 36.966667 | 0.831839 | 0.227833 | 0 | 0.127273 | 0 | 0 | 0.198821 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.109091 | 0 | 0.163636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf18350dca3c8a011e1f04f49243469e79dd2045 | 1,484 | py | Python | run.py | wallarelvo/SmallCartography | 007e621386eb86d904fefef3f518b1d5f1dc7fe6 | [
"Apache-2.0"
] | null | null | null | run.py | wallarelvo/SmallCartography | 007e621386eb86d904fefef3f518b1d5f1dc7fe6 | [
"Apache-2.0"
] | null | null | null | run.py | wallarelvo/SmallCartography | 007e621386eb86d904fefef3f518b1d5f1dc7fe6 | [
"Apache-2.0"
] | null | null | null |
import carto
import argparse
def main():
parser = argparse.ArgumentParser(
description="Runs programs for the carto MapReduce library"
)
parser.add_argument(
"--host", dest="host", type=str, default="localhost",
help="Host of the program"
)
parser.add_argument(
"--port", dest="port", type=int, default=8000,
help="Port of the program"
)
parser.add_argument(
"--name", dest="name", type=str,
help="Name used by the worker"
)
parser.add_argument(
"--program", dest="program", type=str, default="client",
help="Used to determine what program will run"
)
parser.add_argument(
"--ns-host", dest="ns_host", type=str, default="localhost",
help="Host of the name server"
)
parser.add_argument(
"--ns-port", dest="ns_port", type=int, default="8080",
help="Port used by the name server"
)
args = parser.parse_args()
if args.program == carto.master.worker.WorkerType.MASTER:
carto.master.run(args.host, args.port)
elif args.program == carto.master.worker.WorkerType.MAPPER:
carto.mapper.run(args.host, args.port, args.ns_host,
args.ns_port, args.name)
elif args.program == carto.master.worker.WorkerType.REDUCER:
carto.reducer.run(args.host, args.port, args.ns_host,
args.ns_port, args.name)
if __name__ == "__main__":
main()
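
# Example invocations (hypothetical; assumes WorkerType values are the strings
# "master", "mapper" and "reducer"):
#   python run.py --program master --host localhost --port 8000
#   python run.py --program mapper --name m1 --ns-host localhost --ns-port 8080
# NB: the default --program value "client" matches no branch, so nothing runs.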
| 26.981818 | 67 | 0.607143 | 185 | 1,484 | 4.756757 | 0.264865 | 0.061364 | 0.115909 | 0.075 | 0.418182 | 0.396591 | 0.293182 | 0.197727 | 0.197727 | 0.106818 | 0 | 0.007266 | 0.258086 | 1,484 | 54 | 68 | 27.481481 | 0.792007 | 0 | 0 | 0.195122 | 0 | 0 | 0.209036 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02439 | false | 0 | 0.04878 | 0 | 0.073171 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1bdfaeda3c9d3dd53a3e8c1108702ddef142c8 | 3,623 | py | Python | microservices_miner/control/issue_mgr.py | IBM/microservices-miner | b7befa1c97930b1e7347c9e386a4bb5c5f2d2198 | [
"MIT"
] | null | null | null | microservices_miner/control/issue_mgr.py | IBM/microservices-miner | b7befa1c97930b1e7347c9e386a4bb5c5f2d2198 | [
"MIT"
] | 4 | 2021-06-08T22:11:29.000Z | 2022-01-14T21:21:04.000Z | microservices_miner/control/issue_mgr.py | IBM/microservices-miner | b7befa1c97930b1e7347c9e386a4bb5c5f2d2198 | [
"MIT"
] | 1 | 2020-08-06T14:53:05.000Z | 2020-08-06T14:53:05.000Z | # (C) Copyright IBM Corporation 2017, 2018, 2019
# U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted
# by GSA ADP Schedule Contract with IBM Corp.
#
# Author: Leonardo P. Tizzei <ltizzei@br.ibm.com>
from microservices_miner.control.database_conn import IssueConn, UserConn, RepositoryConn
from microservices_miner.model.repository import Repository
import logging
logging.basicConfig(filename='github_miner.log', level=logging.DEBUG, format='%(asctime)s %(message)s')
class IssueMgr:
def __init__(self, path_to_db):
self.db_path = path_to_db
self.issue_conn = IssueConn(path_to_db)
self.user_conn = UserConn(path_to_db)
self.repo_conn = RepositoryConn(path_to_db)
def insert_issue_into_db(self, repo):
"""
Parameters
----------
repo: Repository
Returns
-------
"""
for issue in repo.issues:
updated_at = issue.updated_at
if updated_at is not None:
updated_at_str = updated_at.isoformat()
else:
updated_at_str = None
if issue.closed_at is None:
closed_at_str = None
else:
closed_at_str = issue.closed_at.isoformat()
user_id = issue.user.commit_id
issue_id = self.issue_conn.insert_issue(title=issue.title, body=issue.body, repository_id=repo.repository_id,
closed_at=closed_at_str, updated_at=updated_at_str,
created_at=issue.created_at.isoformat(),
user_id=user_id, state=issue.state)
for assignee in issue.assignees:
assignee_id = self.issue_conn.insert_assignee(assignee)
self.issue_conn.insert_issue_assignee(assignee_id=assignee_id, issue_id=issue_id)
for label in issue.labels:
label_id = self.issue_conn.insert_label(label)
self.issue_conn.insert_issue_label(issue_id=issue_id, label_id=label_id)
def get_issues_by_label(self, repository_id: int):
"""
Parameters
----------
repository_id: int
Returns
-------
List[Issue]
"""
issues = self.issue_conn.get_issues(repository_id=repository_id)
return issues
def get_label(self, name):
"""
Parameters
----------
name
Returns
-------
Label
"""
labels = self.issue_conn.get_labels(name=name)
if len(labels) == 0:
return None
else:
label = labels.pop()
return label
def get_assignee(self, login):
"""
Parameters
----------
login
Returns
-------
Assignee
"""
assignees = self.issue_conn.get_assignee(login)
if len(assignees) == 0:
return None
else:
assignee = assignees.pop()
return assignee
def insert_assignee(self, assignee):
"""
Parameters
----------
assignee: Assignee
Returns
-------
int
"""
rowid = self.issue_conn.insert_assignee(assignee)
return rowid
def insert_label(self, label):
"""
Parameters
----------
label: Label
Returns
-------
int
"""
row_id = self.issue_conn.insert_label(label)
return row_id
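
# Minimal usage sketch (hypothetical paths/objects, not part of the original module):
#   mgr = IssueMgr(path_to_db="miner.db")
#   mgr.insert_issue_into_db(repo)                    # persist a Repository's issues
#   issues = mgr.get_issues_by_label(repository_id=1) # read them back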
| 27.44697 | 121 | 0.548993 | 384 | 3,623 | 4.921875 | 0.265625 | 0.052381 | 0.075661 | 0.07037 | 0.110053 | 0.069841 | 0.032804 | 0 | 0 | 0 | 0 | 0.005957 | 0.351366 | 3,623 | 131 | 122 | 27.656489 | 0.798298 | 0.157604 | 0 | 0.109091 | 0 | 0 | 0.014607 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.127273 | false | 0 | 0.054545 | 0 | 0.327273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1d3c9ee4fa3f3a46513695b9bd7c1714c7aef5 | 10,893 | py | Python | custom_components/skyq/config_flow.py | TomBrien/Home_Assistant_SkyQ_MediaPlayer | 50f9ad0d3b7a3bc2acc652415ff59740bf3ace10 | [
"MIT"
] | null | null | null | custom_components/skyq/config_flow.py | TomBrien/Home_Assistant_SkyQ_MediaPlayer | 50f9ad0d3b7a3bc2acc652415ff59740bf3ace10 | [
"MIT"
] | null | null | null | custom_components/skyq/config_flow.py | TomBrien/Home_Assistant_SkyQ_MediaPlayer | 50f9ad0d3b7a3bc2acc652415ff59740bf3ace10 | [
"MIT"
] | null | null | null | """Configuration flow for the skyq platform."""
import ipaddress
import json
import logging
import re
from operator import attrgetter
import homeassistant.helpers.config_validation as cv
import pycountry
import voluptuous as vol
from homeassistant import config_entries, exceptions
from homeassistant.const import CONF_HOST, CONF_NAME
from homeassistant.core import callback
from pyskyqremote.const import KNOWN_COUNTRIES
from pyskyqremote.skyq_remote import SkyQRemote
from .const import (
CHANNEL_DISPLAY,
CHANNEL_SOURCES_DISPLAY,
CONF_CHANNEL_SOURCES,
CONF_COUNTRY,
CONF_EPG_CACHE_LEN,
CONF_GEN_SWITCH,
CONF_LIVE_TV,
CONF_OUTPUT_PROGRAMME_IMAGE,
CONF_ROOM,
CONF_SOURCES,
CONF_VOLUME_ENTITY,
CONST_DEFAULT,
CONST_DEFAULT_EPGCACHELEN,
DOMAIN,
LIST_EPGCACHELEN,
SKYQREMOTE,
)
from .schema import DATA_SCHEMA
from .utils import convert_sources_JSON
SORT_CHANNELS = False
_LOGGER = logging.getLogger(__name__)
def host_valid(host):
"""Return True if hostname or IP address is valid."""
try:
if ipaddress.ip_address(host).version in (4, 6):
    # `== (4 or 6)` evaluated to `== 4`, so valid IPv6 hosts fell through and returned None
    return True
except ValueError:
disallowed = re.compile(r"[^a-zA-Z\d\-]")
return all(x and not disallowed.search(x) for x in host.split("."))
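# e.g. host_valid("192.168.0.10") -> True, host_valid("sky-q.local") -> True,
# host_valid("not a host!") -> False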
class SkyqConfigFlow(config_entries.ConfigFlow, domain=DOMAIN):
"""Example config flow."""
VERSION = 1
CONNECTION_CLASS = config_entries.CONN_CLASS_LOCAL_POLL
def __init__(self):
"""Initiliase the configuration flow."""
@staticmethod
@callback
def async_get_options_flow(config_entry):
"""Sky Q options callback."""
return SkyQOptionsFlowHandler(config_entry)
async def async_step_user(self, user_input=None):
"""Handle the initial step."""
errors = {}
if user_input:
if host_valid(user_input[CONF_HOST]):
host = user_input[CONF_HOST]
name = user_input[CONF_NAME]
try:
await self._async_setUniqueID(host)
except CannotConnect:
errors["base"] = "cannot_connect"
else:
return self.async_create_entry(title=name, data=user_input)
errors[CONF_HOST] = "invalid_host"
return self.async_show_form(
step_id="user", data_schema=vol.Schema(DATA_SCHEMA), errors=errors
)
async def _async_setUniqueID(self, host):
remote = await self.hass.async_add_executor_job(SkyQRemote, host)
if not remote.deviceSetup:
raise CannotConnect()
deviceInfo = await self.hass.async_add_executor_job(remote.getDeviceInformation)
await self.async_set_unique_id(
deviceInfo.countryCode
+ "".join(e for e in deviceInfo.serialNumber.casefold() if e.isalnum())
)
self._abort_if_unique_id_configured()
class SkyQOptionsFlowHandler(config_entries.OptionsFlow):
"""Config flow options for Sky Q."""
def __init__(self, config_entry):
"""Initialize Sky Q options flow."""
self._name = config_entry.title
self._config_entry = config_entry
self._remote = None
self._channel_sources = config_entry.options.get(CONF_CHANNEL_SOURCES, [])
self._sources = convert_sources_JSON(
sources_list=config_entry.options.get(CONF_SOURCES)
)
self._room = config_entry.options.get(CONF_ROOM)
self._volume_entity = config_entry.options.get(CONF_VOLUME_ENTITY)
self._gen_switch = config_entry.options.get(CONF_GEN_SWITCH, False)
self._live_tv = config_entry.options.get(CONF_LIVE_TV, True)
self._country = config_entry.options.get(CONF_COUNTRY, CONST_DEFAULT)
if self._country != CONST_DEFAULT:
self._country = self._convertCountry(alpha_3=self._country)
self._output_programme_image = config_entry.options.get(
CONF_OUTPUT_PROGRAMME_IMAGE, True
)
self._epg_cache_len = config_entry.options.get(
CONF_EPG_CACHE_LEN, CONST_DEFAULT_EPGCACHELEN
)
self._channelDisplay = []
self._channel_list = []
async def async_step_init(self, user_input=None):
"""Set up the option flow."""
self._remote = self.hass.data[DOMAIN][self._config_entry.entry_id][SKYQREMOTE]
alpha3_codes = set(KNOWN_COUNTRIES.values())
countryNames = [self._convertCountry(alpha_3=alpha3) for alpha3 in alpha3_codes]
self._country_list = [CONST_DEFAULT] + sorted(countryNames)
if self._remote.deviceSetup:
channelData = await self.hass.async_add_executor_job(
self._remote.getChannelList
)
self._channel_list = channelData.channels
for channel in self._channel_list:
self._channelDisplay.append(
CHANNEL_DISPLAY.format(channel.channelno, channel.channelname)
)
self._channel_sources_display = []
for channel in self._channel_sources:
try:
channelData = next(
c for c in self._channel_list if c.channelname == channel
)
self._channel_sources_display.append(
CHANNEL_DISPLAY.format(
channelData.channelno, channelData.channelname
)
)
except StopIteration:
pass
return await self.async_step_user()
return await self.async_step_retry()
async def async_step_user(self, user_input=None):
"""Handle a flow initialized by the user."""
errors = {}
if user_input:
self._channel_sources_display = user_input[CHANNEL_SOURCES_DISPLAY]
user_input.pop(CHANNEL_SOURCES_DISPLAY)
if len(self._channel_sources_display) > 0:
channelitems = []
for channel in self._channel_sources_display:
channelData = next(
c
for c in self._channel_list
if channel == CHANNEL_DISPLAY.format(c.channelno, c.channelname)
)
channelitems.append(channelData)
if SORT_CHANNELS:
channelnosorted = sorted(channelitems, key=attrgetter("channelno"))
channelsorted = sorted(
channelnosorted, key=attrgetter("channeltype"), reverse=True
)
channel_sources = []
for c in channelsorted:
channel_sources.append(c.channelname)
else:
channel_sources = []
for c in channelitems:
channel_sources.append(c.channelname)
user_input[CONF_CHANNEL_SOURCES] = channel_sources
self._gen_switch = user_input.get(CONF_GEN_SWITCH)
self._live_tv = user_input.get(CONF_LIVE_TV)
self._output_programme_image = user_input.get(CONF_OUTPUT_PROGRAMME_IMAGE)
self._room = user_input.get(CONF_ROOM)
self._volume_entity = user_input.get(CONF_VOLUME_ENTITY)
self._country = user_input.get(CONF_COUNTRY)
if self._country == CONST_DEFAULT:
user_input.pop(CONF_COUNTRY)
else:
user_input[CONF_COUNTRY] = self._convertCountry(name=self._country)
self._epg_cache_len = user_input.get(CONF_EPG_CACHE_LEN)
try:
self._sources = user_input.get(CONF_SOURCES)
if self._sources:
user_input[CONF_SOURCES] = convert_sources_JSON(
sources_json=self._sources
)
for source in user_input[CONF_SOURCES]:
self._validate_commands(source)
return self.async_create_entry(title="", data=user_input)
except json.decoder.JSONDecodeError:
errors["base"] = "invalid_sources"
except InvalidCommand:
errors["base"] = "invalid_command"
return self.async_show_form(
step_id="user",
description_placeholders={CONF_NAME: self._name},
data_schema=vol.Schema(
{
vol.Optional(
CHANNEL_SOURCES_DISPLAY, default=self._channel_sources_display
): cv.multi_select(self._channelDisplay),
vol.Optional(
CONF_OUTPUT_PROGRAMME_IMAGE,
default=self._output_programme_image,
): bool,
vol.Optional(CONF_LIVE_TV, default=self._live_tv): bool,
vol.Optional(CONF_GEN_SWITCH, default=self._gen_switch): bool,
vol.Optional(
CONF_ROOM, description={"suggested_value": self._room}
): str,
vol.Optional(CONF_COUNTRY, default=self._country): vol.In(
self._country_list
),
vol.Optional(
CONF_VOLUME_ENTITY,
description={"suggested_value": self._volume_entity},
): str,
vol.Optional(
CONF_EPG_CACHE_LEN, default=self._epg_cache_len
): vol.In(LIST_EPGCACHELEN),
vol.Optional(
CONF_SOURCES, description={"suggested_value": self._sources}
): str,
}
),
errors=errors,
)
async def async_step_retry(self, user_input=None):
"""Handle a failed connection."""
errors = {}
errors["base"] = "cannot_connect"
return self.async_show_form(
step_id="retry",
data_schema=vol.Schema({}),
errors=errors,
)
def _convertCountry(self, alpha_3=None, name=None):
if name:
return pycountry.countries.get(name=name).alpha_3
if alpha_3:
return pycountry.countries.get(alpha_3=alpha_3).name
def _validate_commands(self, source):
commands = source[1].split(",")
for command in commands:
if command not in SkyQRemote.commands:
raise InvalidCommand()
class CannotConnect(exceptions.HomeAssistantError):
"""Error to indicate we cannot connect."""
class InvalidCommand(exceptions.HomeAssistantError):
"""Error to indicate we cannot connect."""
| 36.431438 | 88 | 0.597815 | 1,133 | 10,893 | 5.425419 | 0.185349 | 0.038067 | 0.034163 | 0.030747 | 0.228729 | 0.109647 | 0.076948 | 0.056613 | 0.027005 | 0.027005 | 0 | 0.0019 | 0.323602 | 10,893 | 298 | 89 | 36.553691 | 0.832383 | 0.028 | 0 | 0.177215 | 0 | 0 | 0.017202 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025316 | false | 0.004219 | 0.067511 | 0 | 0.168776 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1e1fc048aed029497d762bdbe8c8befabdb682 | 2,045 | py | Python | tradssat/out/soilni.py | shreyayadav/traDSSAT | cc9650f896910c0d0a7a382aff36bef89aba70f2 | [
"MIT"
] | null | null | null | tradssat/out/soilni.py | shreyayadav/traDSSAT | cc9650f896910c0d0a7a382aff36bef89aba70f2 | [
"MIT"
] | null | null | null | tradssat/out/soilni.py | shreyayadav/traDSSAT | cc9650f896910c0d0a7a382aff36bef89aba70f2 | [
"MIT"
] | null | null | null | from tradssat.tmpl.output import OutFile
from tradssat.tmpl.var import FloatVar, IntegerVar
class SoilNiOut(OutFile):
"""
Reader for DSSAT soil nitrogen (SOILNI.OUT) files.
"""
filename = 'SoilNi.Out'
def _get_var_info(self):
return vars_
vars_ = {
IntegerVar('YEAR', 4, info='Year'),
IntegerVar('DOY', 3, info='Day of year starting on Jan 1.'),
IntegerVar('DAS', 5, info='Day after start'),
IntegerVar('NAPC', 5, info='Cumulative inorganic N applied, kg/ha'),
IntegerVar('NI#M', 5, info='N application numbers'),
FloatVar('NIAD', 7, 1, info='Inorganic N in soil, kg/ha'),
FloatVar('NITD', 6, 1, info='Amount of total NO3, kg/ha'),
FloatVar('NHTD', 6, 1, info='Amount of total NH4, kg/ha'),
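# per-layer NO3 (NI1D-NI7D) and NH4 (NH1D-NH7D) concentrations, ppm, for the
# seven soil layers from 0-5 cm down to 90-110 cm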
FloatVar('NI1D', 7, 2, info='NO3 at 0-5 cm soil depth, ppm'),
FloatVar('NI2D', 7, 2, info='NO3 at 5-15 cm soil depth, ppm'),
FloatVar('NI3D', 7, 2, info='NO3 at 15-30 cm soil depth, ppm'),
FloatVar('NI4D', 7, 2, info='NO3 at 30-45 cm soil depth, ppm'),
FloatVar('NI5D', 7, 2, info='NO3 at 45-60 cm soil depth, ppm'),
FloatVar('NI6D', 7, 2, info='NO3 at 60-90 cm soil depth, ppm'),
FloatVar('NI7D', 7, 2, info='NO3 at 90-110 cm soil depth, ppm'),
FloatVar('NH1D', 7, 2, info='NH4 at 0-5 cm soil depth, ppm'),
FloatVar('NH2D', 7, 2, info='NH4 at 5-15 cm soil depth, ppm'),
FloatVar('NH3D', 7, 2, info='NH4 at 15-30 cm soil depth, ppm'),
FloatVar('NH4D', 7, 2, info='NH4 at 30-45 cm soil depth, ppm'),
FloatVar('NH5D', 7, 2, info='NH4 at 45-60 cm soil depth, ppm'),
FloatVar('NH6D', 7, 2, info='NH4 at 60-90 cm soil depth, ppm'),
FloatVar('NH7D', 7, 2, info='NH4 at 90-110 cm soil depth, ppm'),
FloatVar('NMNC', 7, 0, info=''),
FloatVar('NITC', 7, 0, info=''),
FloatVar('NDNC', 7, 0, info=''),
FloatVar('NIMC', 7, 0, info=''),
FloatVar('AMLC', 7, 0, info=''),
FloatVar('NNMNC', 7, 0, info=''),
FloatVar('NUCM', 7, 0, info='N uptake, kg/ha'),
FloatVar('NLCC', 7, 0, info='Cumulative N leached, kg/ha'),
}
| 43.510638 | 72 | 0.604401 | 340 | 2,045 | 3.620588 | 0.285294 | 0.022746 | 0.068237 | 0.15922 | 0.448416 | 0.34606 | 0.315191 | 0.315191 | 0 | 0 | 0 | 0.084611 | 0.202445 | 2,045 | 46 | 73 | 44.456522 | 0.670141 | 0.02445 | 0 | 0 | 0 | 0 | 0.39717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.052632 | 0.026316 | 0.157895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1edbd7a30a852f3ca1224c69d6e47997c186c3 | 4,888 | py | Python | project/scripts/run-cooja.py | nfi/multitrace | 7a043f4c3f580ca87c39f23337322b98594f3a51 | [
"BSD-3-Clause"
] | 4 | 2021-12-20T12:25:56.000Z | 2022-03-23T20:39:16.000Z | project/scripts/run-cooja.py | nfi/multitrace | 7a043f4c3f580ca87c39f23337322b98594f3a51 | [
"BSD-3-Clause"
] | null | null | null | project/scripts/run-cooja.py | nfi/multitrace | 7a043f4c3f580ca87c39f23337322b98594f3a51 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
import argparse
import sys
import os
import time
import traceback
import subprocess
from subprocess import PIPE, STDOUT, CalledProcessError
# Find path to this script
SELF_PATH = os.path.dirname(os.path.abspath(__file__))
# Find path to Contiki-NG relative to this script
CONTIKI_PATH = os.path.dirname(os.path.dirname(SELF_PATH))
cooja_jar = os.path.normpath(os.path.join(CONTIKI_PATH, "tools", "cooja", "dist", "cooja.jar"))
cooja_output = 'COOJA.testlog'
cooja_log = 'COOJA.log'
#######################################################
# Run a child process and get its output
def _run_command(command):
try:
proc = subprocess.run(command, stdout=PIPE, stderr=STDOUT, shell=True, universal_newlines=True)
return proc.returncode, proc.stdout if proc.stdout else ''
except CalledProcessError as e:
print(f"Command failed: {e}", file=sys.stderr)
return e.returncode, e.stdout if e.stdout else ''
except (OSError, Exception) as e:
traceback.print_exc()
return -1, str(e)
def _remove_file(filename):
try:
os.remove(filename)
except FileNotFoundError:
pass
#############################################################
# Run a single instance of Cooja on a given simulation script
def run_simulation(cooja_file, output_path=None):
# Remove any old simulation logs
_remove_file(cooja_output)
_remove_file(cooja_log)
target_basename = cooja_file
if target_basename.endswith('.csc.gz'):
target_basename = target_basename[:-7]
elif target_basename.endswith('.csc'):
target_basename = target_basename[:-4]
simulation_id = str(round(time.time() * 1000))
if output_path is not None:
target_basename = os.path.join(output_path, target_basename)
target_basename += '-dt-' + simulation_id
target_basename_fail = target_basename + '-fail'
target_output = target_basename + '/cooja.testlog'
target_log_output = target_basename + '/cooja.log'
# filename = os.path.join(SELF_PATH, cooja_file)
command = (f"java -Djava.awt.headless=true -jar {cooja_jar} -nogui={cooja_file} -contiki={CONTIKI_PATH}"
f" -datatrace={target_basename}")
sys.stdout.write(f" Running Cooja:\n {command}\n")
start_time = time.perf_counter_ns()
(return_code, output) = _run_command(command)
end_time = time.perf_counter_ns()
with open(cooja_log, 'a') as f:
f.write(f'\nSimulation execution time: {end_time - start_time} ns.\n')
if not os.path.isdir(target_basename):
os.mkdir(target_basename)
has_cooja_output = os.path.isfile(cooja_output)
if has_cooja_output:
os.rename(cooja_output, target_output)
os.rename(cooja_log, target_log_output)
if return_code != 0 or not has_cooja_output:
print(f"Failed, ret code={return_code}, output:", file=sys.stderr)
print("-----", file=sys.stderr)
print(output, file=sys.stderr, end='')
print("-----", file=sys.stderr)
if not has_cooja_output:
print("No Cooja simulation script output!", file=sys.stderr)
os.rename(target_basename, target_basename_fail)
return False
print(" Checking for output...")
is_done = False
with open(target_output, "r") as f:
for line in f.readlines():
line = line.strip()
if line == "TEST OK":
is_done = True
continue
if not is_done:
print(" test failed.")
os.rename(target_basename, target_basename_fail)
return False
print(f" test done in {round((end_time - start_time) / 1000000)} milliseconds.")
return True
#######################################################
# Run the application
def main(parser=None):
if not os.access(cooja_jar, os.R_OK):
sys.exit(f'The file "{cooja_jar}" does not exist, did you build Cooja?')
if not parser:
parser = argparse.ArgumentParser()
parser.add_argument('-o', dest='output_path')
parser.add_argument('input', nargs='+')
try:
conopts = parser.parse_args(sys.argv[1:])
except Exception as e:
sys.exit(f"Illegal arguments: {e}")
if conopts.output_path and not os.path.isdir(conopts.output_path):
os.mkdir(conopts.output_path)
for simulation_file in conopts.input:
if not os.access(simulation_file, os.R_OK):
print(f'Can not read simulation script "{simulation_file}"', file=sys.stderr)
sys.exit(1)
print(f'Running simulation "{simulation_file}"')
if not run_simulation(simulation_file, conopts.output_path):
sys.exit(f'Failed to run simulation "{simulation_file}"')
print('Done. No more simulation files specified.')
#######################################################
if __name__ == '__main__':
main()
| 33.479452 | 108 | 0.63748 | 635 | 4,888 | 4.711811 | 0.264567 | 0.098262 | 0.030414 | 0.046791 | 0.081551 | 0.052807 | 0.037433 | 0.037433 | 0.037433 | 0.037433 | 0 | 0.004639 | 0.206219 | 4,888 | 145 | 109 | 33.710345 | 0.766495 | 0.059534 | 0 | 0.088235 | 0 | 0.009804 | 0.18336 | 0.01719 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039216 | false | 0.009804 | 0.068627 | 0 | 0.166667 | 0.127451 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1f3cd2308c871fbca4d806dda3a4b0a43ddbe0 | 711 | py | Python | Recent Excel Documents.lbaction/Contents/Scripts/default.py | nriley/LBOfficeMRU | e2df583cdb32a066f3ab002d4182fa40759839a6 | [
"Apache-2.0"
] | 13 | 2016-08-21T12:18:42.000Z | 2022-02-01T22:03:45.000Z | Recent Excel Documents.lbaction/Contents/Scripts/default.py | nriley/LBOfficeMRU | e2df583cdb32a066f3ab002d4182fa40759839a6 | [
"Apache-2.0"
] | 1 | 2017-02-11T10:46:12.000Z | 2017-03-31T04:20:01.000Z | Recent Excel Documents.lbaction/Contents/Scripts/default.py | nriley/LBOfficeMRU | e2df583cdb32a066f3ab002d4182fa40759839a6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
import json, operator
import mruservice, mruuserdata
APP_NAME = 'Excel'
APP_BUNDLE_ID = 'com.microsoft.Excel'
APP_URL_PREFIX = 'ms-excel:ofe|u|'
EXTENSION_TO_ICON_NAME = dict(
slk='XLS8', dif='XLS8', ods='ODS', xls='XLS8', xlsx='XLSX', xltx='XLTX', xlsm='XLSM',
xltm='XLTM', xlsb='XLSB', xlam='XLAM', xlw='XLW8', xla='XLA8', xlb='XLB8', xlt='XLT',
xld='XLD5', xlm='XLM4', xll='XLL', csv='CSV', txt='TEXT', xml='XMLS', tlb='OTLB', _='TEXT')
items = mruuserdata.items_for_app(APP_NAME)
items += mruservice.items_for_app(APP_NAME, APP_BUNDLE_ID, APP_URL_PREFIX, EXTENSION_TO_ICON_NAME)
items.sort(key=operator.itemgetter('Timestamp'), reverse=True)
print(json.dumps(items))
| 35.55 | 98 | 0.703235 | 110 | 711 | 4.345455 | 0.590909 | 0.043933 | 0.046025 | 0.079498 | 0.075314 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014063 | 0.099859 | 711 | 19 | 99 | 37.421053 | 0.732813 | 0.029536 | 0 | 0 | 0 | 0 | 0.191582 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf1f9a3ecc8549d804dcf2f5aef38297dc7945b8 | 2,458 | py | Python | 3sum_medium.py | victorsemenov1980/LeetCodeDailyFun | f66273a9868ede5e2337f586e21eaf9e771b9b48 | [
"MIT"
] | null | null | null | 3sum_medium.py | victorsemenov1980/LeetCodeDailyFun | f66273a9868ede5e2337f586e21eaf9e771b9b48 | [
"MIT"
] | null | null | null | 3sum_medium.py | victorsemenov1980/LeetCodeDailyFun | f66273a9868ede5e2337f586e21eaf9e771b9b48 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat May 22 12:03:16 2021
@author: user
"""
'''
Given an integer array nums, return all the triplets [nums[i], nums[j], nums[k]] such that i != j, i != k, and j != k, and nums[i] + nums[j] + nums[k] == 0.
Notice that the solution set must not contain duplicate triplets.
Example 1:
Input: nums = [-1,0,1,2,-1,-4]
Output: [[-1,-1,2],[-1,0,1]]
Example 2:
Input: nums = []
Output: []
Example 3:
Input: nums = [0]
Output: []
Constraints:
0 <= nums.length <= 3000
-10^5 <= nums[i] <= 10^5
'''
'''
Slow brute force: checks every 3-element combination, roughly O(n^3) work
'''
class Solution:
    def threeSum(self, nums):
        if len(nums) < 3:
            return []
        out = []
        import itertools
        indices = [x for x in range(0, len(nums))]
        combs = list(itertools.combinations(indices, 3))
        for i in combs:
            summ = []
            for j in i:
                summ.append(nums[j])
            if sum(summ) == 0 and sorted(summ) not in out:
                out.append(sorted(summ))
        return out
y=Solution()
nums = [-1,0,1,2,-1,-4]
print(y.threeSum(nums))
nums = [0,0,0]
print(y.threeSum(nums))
# nums = [0]
# print(y.threeSum(nums))
'''
Faster
'''
class Solution:
    def threeSum(self, nums):
        if len(nums) < 3:
            return []
        out = []
        indices = {}
        nums = sorted(nums)
        for key, value in enumerate(nums):
            indices[value] = key  # value -> index of its last occurrence (nums is sorted)
        for first_ind, first_num in enumerate(nums):
            if first_num > 0:  # sorted, so no triplet summing to 0 can start here
                break
            for second_ind, second_num in enumerate(nums[first_ind + 1:]):
                zero = -(first_num + second_num)
                if zero in indices.keys() and indices[zero] > first_ind + second_ind + 1:
                    temp = sorted([zero, first_num, second_num])
                    if temp not in out:
                        out.append(temp)
        return out
y=Solution()
nums = [-1,0,1,2,-1,-4]
print(y.threeSum(nums))
# nums = [0,0,0]
# print(y.threeSum(nums))
# nums = [0]
# print(y.threeSum(nums))
| 22.550459 | 156 | 0.47559 | 311 | 2,458 | 3.720257 | 0.321543 | 0.031115 | 0.072602 | 0.093345 | 0.361279 | 0.331893 | 0.266206 | 0.257563 | 0.257563 | 0.257563 | 0 | 0.055814 | 0.387714 | 2,458 | 108 | 157 | 22.759259 | 0.712957 | 0.091131 | 0 | 0.488889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044444 | false | 0 | 0.022222 | 0 | 0.2 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf20204aba78a60c893f6561c24e36c3ce30077f | 651 | py | Python | tests/test_loss.py | MartinXPN/abcde | 13192c5f7dfb32a461b9205aed4b0b21e79d8285 | [
"MIT"
] | 4 | 2021-01-20T09:15:37.000Z | 2022-03-03T13:58:18.000Z | tests/test_loss.py | MartinXPN/abcde | 13192c5f7dfb32a461b9205aed4b0b21e79d8285 | [
"MIT"
] | null | null | null | tests/test_loss.py | MartinXPN/abcde | 13192c5f7dfb32a461b9205aed4b0b21e79d8285 | [
"MIT"
] | null | null | null | from unittest import TestCase
from torch import Tensor
from abcde.loss import PairwiseRankingCrossEntropyLoss
class TestPairwiseRankingLoss(TestCase):
def test_simple_case(self):
loss = PairwiseRankingCrossEntropyLoss()
res = loss(pred_betweenness=Tensor([[0.5], [0.7], [3]]), target_betweenness=Tensor([[0.2], [1], [2]]),
src_ids=Tensor([0, 1, 2, 2, 1, 0, 1, 2, 2, 1, 0, 1, 2, 2, 1, ]).long(),
targ_ids=Tensor([1, 0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0, 0, 1, 2, ]).long())
# This number is taken from the tensorflow implementation
self.assertAlmostEqual(res, 0.636405362070762)
| 40.6875 | 110 | 0.623656 | 91 | 651 | 4.395604 | 0.428571 | 0.035 | 0.045 | 0.03 | 0.075 | 0.075 | 0.075 | 0.075 | 0.075 | 0.075 | 0 | 0.109344 | 0.227343 | 651 | 15 | 111 | 43.4 | 0.685885 | 0.084485 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf27715d55b617221f21406b94ab34e0ac04baac | 5,981 | py | Python | lib/surface/compute/instance_groups/managed/abandon_instances.py | eyalev/gcloud | 421ee63a0a6d90a097e8530d53a6df5b905a0205 | [
"Apache-2.0"
] | null | null | null | lib/surface/compute/instance_groups/managed/abandon_instances.py | eyalev/gcloud | 421ee63a0a6d90a097e8530d53a6df5b905a0205 | [
"Apache-2.0"
] | null | null | null | lib/surface/compute/instance_groups/managed/abandon_instances.py | eyalev/gcloud | 421ee63a0a6d90a097e8530d53a6df5b905a0205 | [
"Apache-2.0"
] | 2 | 2020-11-04T03:08:21.000Z | 2020-11-05T08:14:41.000Z | # Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Command for abandoning instances owned by a managed instance group."""
from googlecloudsdk.api_lib.compute import base_classes
from googlecloudsdk.api_lib.compute import instance_groups_utils
from googlecloudsdk.calliope import arg_parsers
from googlecloudsdk.calliope import base
from googlecloudsdk.command_lib.compute import flags
def _AddArgs(parser, multizonal):
"""Adds args."""
parser.add_argument('name',
help='The managed instance group name.')
parser.add_argument(
'--instances',
type=arg_parsers.ArgList(min_length=1),
action=arg_parsers.FloatingListValuesCatcher(),
metavar='INSTANCE',
required=True,
help='Names of instances to abandon.')
if multizonal:
scope_parser = parser.add_mutually_exclusive_group()
flags.AddRegionFlag(
scope_parser,
resource_type='instance group',
operation_type='abandon instances',
explanation=flags.REGION_PROPERTY_EXPLANATION_NO_DEFAULT)
flags.AddZoneFlag(
scope_parser,
resource_type='instance group manager',
operation_type='abandon instances',
explanation=flags.ZONE_PROPERTY_EXPLANATION_NO_DEFAULT)
else:
flags.AddZoneFlag(
parser,
resource_type='instance group manager',
operation_type='abandon instances')
@base.ReleaseTracks(base.ReleaseTrack.GA, base.ReleaseTrack.BETA)
class AbandonInstances(base_classes.BaseAsyncMutator):
"""Abandon instances owned by a managed instance group."""
@staticmethod
def Args(parser):
_AddArgs(parser=parser, multizonal=False)
@property
def method(self):
return 'AbandonInstances'
@property
def service(self):
return self.compute.instanceGroupManagers
@property
def resource_type(self):
return 'instanceGroupManagers'
def CreateRequests(self, args):
zone_ref = self.CreateZonalReference(args.name, args.zone)
instance_refs = self.CreateZonalReferences(
args.instances,
zone_ref.zone,
resource_type='instances')
instances = [instance_ref.SelfLink() for instance_ref in instance_refs]
return [(self.method,
self.messages.ComputeInstanceGroupManagersAbandonInstancesRequest(
instanceGroupManager=zone_ref.Name(),
instanceGroupManagersAbandonInstancesRequest=(
self.messages.InstanceGroupManagersAbandonInstancesRequest(
instances=instances,
)
),
project=self.project,
zone=zone_ref.zone,
),),]
@base.ReleaseTracks(base.ReleaseTrack.ALPHA)
class AbandonInstancesAlpha(base_classes.BaseAsyncMutator,
instance_groups_utils.InstancesReferenceMixin):
"""Abandon instances owned by a managed instance group."""
@staticmethod
def Args(parser):
_AddArgs(parser=parser, multizonal=True)
@property
def method(self):
return 'AbandonInstances'
@property
def service(self):
return self.compute.instanceGroupManagers
@property
def resource_type(self):
return 'instanceGroupManagers'
def CreateRequests(self, args):
errors = []
group_ref = instance_groups_utils.CreateInstanceGroupReference(
scope_prompter=self, compute=self.compute, resources=self.resources,
name=args.name, region=args.region, zone=args.zone)
instances = self.CreateInstanceReferences(
group_ref, args.instances, errors)
if group_ref.Collection() == 'compute.instanceGroupManagers':
service = self.compute.instanceGroupManagers
request = (
self.messages.
ComputeInstanceGroupManagersAbandonInstancesRequest(
instanceGroupManager=group_ref.Name(),
instanceGroupManagersAbandonInstancesRequest=(
self.messages.InstanceGroupManagersAbandonInstancesRequest(
instances=instances,
)
),
project=self.project,
zone=group_ref.zone,
))
else:
service = self.compute.regionInstanceGroupManagers
request = (
self.messages.
ComputeRegionInstanceGroupManagersAbandonInstancesRequest(
instanceGroupManager=group_ref.Name(),
regionInstanceGroupManagersAbandonInstancesRequest=(
self.messages.
RegionInstanceGroupManagersAbandonInstancesRequest(
instances=instances,
)
),
project=self.project,
region=group_ref.region,
))
return [(service, self.method, request)]
AbandonInstances.detailed_help = {
'brief': 'Abandon instances owned by a managed instance group.',
'DESCRIPTION': """
*{command}* abandons one or more instances from a managed instance
group, thereby reducing the targetSize of the group. Once instances have been
abandoned, the currentSize of the group is automatically reduced as well to
reflect the change.
Abandoning an instance does not delete the underlying virtual machine instances,
but just removes the instances from the instance group. If you would like the
delete the underlying instances, use the delete-instances command instead.
""",
}
AbandonInstancesAlpha.detailed_help = AbandonInstances.detailed_help
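
# Example (hypothetical group/instance/zone names):
#   gcloud compute instance-groups managed abandon-instances example-group \
#       --instances example-instance-1,example-instance-2 --zone us-central1-a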
| 35.390533 | 80 | 0.69587 | 590 | 5,981 | 6.947458 | 0.325424 | 0.031715 | 0.029275 | 0.025616 | 0.312515 | 0.303733 | 0.260795 | 0.251769 | 0.241034 | 0.241034 | 0 | 0.00195 | 0.22839 | 5,981 | 168 | 81 | 35.60119 | 0.886241 | 0.126066 | 0 | 0.434109 | 0 | 0 | 0.165383 | 0.01367 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085271 | false | 0 | 0.03876 | 0.046512 | 0.20155 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf289c374ee47f4952cddf28f571e3c1c464ba43 | 1,185 | py | Python | day:22/isBinaryTreeSymmetric.py | hawaijar/FireLeetcode | e981e96f6a38a3b08e9b7ef59aec65f6e0e5728a | [
"MIT"
] | 1 | 2020-10-21T12:28:23.000Z | 2020-10-21T12:28:23.000Z | day:22/isBinaryTreeSymmetric.py | hawaijar/FireLeetcode | e981e96f6a38a3b08e9b7ef59aec65f6e0e5728a | [
"MIT"
] | null | null | null | day:22/isBinaryTreeSymmetric.py | hawaijar/FireLeetcode | e981e96f6a38a3b08e9b7ef59aec65f6e0e5728a | [
"MIT"
] | 1 | 2020-10-21T12:28:24.000Z | 2020-10-21T12:28:24.000Z | # Definition for a binary tree node.
# class TreeNode:
# def __init__(self, val=0, left=None, right=None):
# self.val = val
# self.left = left
# self.right = right
class Solution:
    def isSymmetric(self, root: TreeNode) -> bool:
        # base case(s)
        if root is None:
            return True
        if root.left is None and root.right is None:
            return True
        # BFS level by level; the tree is symmetric iff every level, with
        # 'null' placeholders for missing children, reads as a palindrome
        q = [root]
        while len(q) > 0:
            level = []
            qq = []
            while len(q) > 0:
                temp = q.pop(0)
                if temp is None:
                    level.append('null')
                else:
                    level.append(temp.val)
                    qq.append(temp.left)
                    qq.append(temp.right)
            if self.isPalindrome(level) is False:
                return False
            q = qq
        return True

    def isPalindrome(self, values):
        # two-pointer palindrome check
        i, j = 0, len(values) - 1
        while i < j:
            if values[i] == values[j]:
                i += 1
                j -= 1
            else:
                return False
        return True
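
# Hypothetical usage (assumes a TreeNode tree has been built first):
#   root = TreeNode(1, TreeNode(2, None, TreeNode(3)), TreeNode(2, TreeNode(3)))
#   Solution().isSymmetric(root)  # -> True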
| 28.902439 | 66 | 0.420253 | 131 | 1,185 | 3.770992 | 0.335878 | 0.048583 | 0.048583 | 0.064777 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012422 | 0.45654 | 1,185 | 40 | 67 | 29.625 | 0.754658 | 0.161181 | 0 | 0.206897 | 0 | 0 | 0.004057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068966 | false | 0 | 0 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf296e88d03c596024b49c000c2c21fe1354248f | 3,991 | py | Python | main.py | prjavidi/C- | 76e7c7720a921e48726ad652cfc0f1000f9a2b3e | [
"MIT"
] | null | null | null | main.py | prjavidi/C- | 76e7c7720a921e48726ad652cfc0f1000f9a2b3e | [
"MIT"
] | null | null | null | main.py | prjavidi/C- | 76e7c7720a921e48726ad652cfc0f1000f9a2b3e | [
"MIT"
] | null | null | null | '''chane the below arguments to check different tasks'''
TRAINSIZE = 5000
TESTSIZE = 500
'''To check TASK 3 put Normalize=1 otherwise 0'''
Normalize = 1
learningRate = 0.01
threshold = 85
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
@np.vectorize
def sigmoid(x):
return 1 / (1 + np.e ** -x)
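# Example: sigmoid(0) == 0.5; thanks to @np.vectorize it also maps elementwise
# over arrays, e.g. sigmoid(np.array([-2., 0., 2.])) is approx [0.119, 0.5, 0.881].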
def normalize(data):
for i in range(len(data)):
data[i] = data[i] / 255
return data
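# Example: normalize(np.asfarray([[0., 127.5, 255.]])) -> [[0., 0.5, 1.]]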
'''Change the below numbers to pick how many samples you need'''
trainData = np.loadtxt("mnist_train.csv", delimiter=",", max_rows=TRAINSIZE)
testData = np.loadtxt("mnist_test.csv", delimiter=",", max_rows=TESTSIZE)
print(trainData.shape)
print(testData.shape)
# Step 0: Normalization to have 0 and 1
trainImg = np.asfarray(trainData[:, 1:])
testImg = np.asfarray(testData[:, 1:])
# to normalize dataset with binary function
if Normalize == 0:
trainImg[trainImg < threshold] = 0
trainImg[trainImg >= threshold] = 1
testImg[testImg < threshold] = 0
testImg[testImg >= threshold] = 1
else:
# to normalize dataset in range [0,1]
trainImg = normalize(trainImg)
testImg = normalize(testImg)
train_labels = np.asfarray(trainData[:, :1])
test_labels = np.asfarray(testData[:, :1])
no_of_different_labels = 10
lr = np.arange(10)
train_labels_one_hot = (lr == train_labels).astype(float)  # np.float was removed in NumPy 1.24
test_labels_one_hot = (lr == test_labels).astype(float)
# Step 1: Initialize parameters and weights
inputNodes = 784
outputNodes = 10
epoch = 1
w = np.zeros((outputNodes, inputNodes + 1))
w[:, :] = 0.1
# Step 2: Apply input x from training set
MSE = []
while epoch < 50:
mse = []
for idx in range(len(trainImg)):
x = trainImg[idx]
d = train_labels_one_hot[idx]
V = np.dot(w[:, 1:], x) + w[:, 0]
Y = np.zeros(outputNodes)
# step 4: applying activation function
for i in range(outputNodes):
            if Normalize == 0:
if V[i] >= 0:
Y[i] = 1
else:
Y[i] = 0
else:
Y[i] = sigmoid(V[i])
e = d - Y
# e= np.array([e])
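        # perceptron delta rule: w <- w + lr * e * x^T; the bias column w[:, 0]
        # sees a constant input of 1, so it is updated with lr * e alone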
w[:, 1:] += (learningRate * (e[:,None] * x[None,:]))
w[:, 0] += learningRate * e
# print("MSE: ", float(MSE))
mse.append(np.sum((d - Y) ** 2))
MSE.append(np.sum(mse) / 2)
epoch += 1
if MSE[-1] < 0.001:
break
# print("epoch: ", epoch,", MSE:", MSE)
fig, ax = plt.subplots()
numberArrayTestIncorrect = np.zeros(10)
numberArrayTest = np.zeros(10)
ax.plot(MSE)
ax.set(xlabel='Iteration', ylabel='MSE', title='Learning curve for learning rate=' + str(learningRate))
ax.grid()
plt.show()
# testing process:
correct = []
incorrect = []
for idx in range(len(testImg)):
x = testImg[idx][np.newaxis]
x = x.T
checkIdx = int(test_labels[idx][0])
d = test_labels_one_hot[idx]
V = np.dot(w[:, 1:], x) + w[:, 0][np.newaxis].T
Y = np.zeros(outputNodes)
for i in range(outputNodes):
if V[i] >= 0:
Y[i] = 1
else:
Y[i] = 0
if np.array_equal(d, Y):
correct.append(1)
numberArrayTest[checkIdx] += 1
else:
incorrect.append(1)
numberArrayTestIncorrect[checkIdx] += 1
print("Correct=", sum(correct), ", incorrect: ", sum(incorrect), ", accuracy: ", sum(correct)/TESTSIZE)
print(numberArrayTest)
print(numberArrayTestIncorrect)
N = 10
fig, ax = plt.subplots()
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars: can also be len(x) sequence
p1 = ax.bar(ind, numberArrayTest, width, bottom=0, yerr=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
p2 = ax.bar(ind + width, numberArrayTestIncorrect, width, bottom=0, yerr=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
ax.set_title('Correct VS incorrect identification')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(('0', '1', '2', '3', '4', '5', '6', '7', '8', '9'))
ax.legend((p1[0], p2[0]), ('Correct', 'Incorrect'))
ax.autoscale_view()
plt.show()
| 27.14966 | 104 | 0.606865 | 576 | 3,991 | 4.154514 | 0.300347 | 0.015044 | 0.020059 | 0.023402 | 0.088592 | 0.075219 | 0.055997 | 0.055997 | 0.055997 | 0.055997 | 0 | 0.041476 | 0.232774 | 3,991 | 146 | 105 | 27.335616 | 0.740039 | 0.117013 | 0 | 0.198113 | 0 | 0 | 0.050088 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018868 | false | 0 | 0.028302 | 0.009434 | 0.066038 | 0.04717 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf2c4d8068a5e81799ce759db7c058c410706010 | 6,269 | py | Python | polyaxon/scheduler/spawners/tensorboard_spawner.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/scheduler/spawners/tensorboard_spawner.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | polyaxon/scheduler/spawners/tensorboard_spawner.py | elyase/polyaxon | 1c19f059a010a6889e2b7ea340715b2bcfa382a0 | [
"MIT"
] | null | null | null | import json
import random
from django.conf import settings
from polyaxon_k8s.exceptions import PolyaxonK8SError
from scheduler.spawners.project_job_spawner import ProjectJobSpawner
from scheduler.spawners.templates import constants, ingresses, services
from scheduler.spawners.templates.pod_environment import (
get_affinity,
get_node_selector,
get_tolerations
)
from scheduler.spawners.templates.project_jobs import deployments
from scheduler.spawners.templates.volumes import (
get_pod_outputs_volume,
get_pod_refs_outputs_volumes
)
class TensorboardSpawner(ProjectJobSpawner):
TENSORBOARD_JOB_NAME = 'tensorboard'
PORT = 6006
def get_tensorboard_url(self):
return self._get_service_url(self.TENSORBOARD_JOB_NAME)
def request_tensorboard_port(self):
if not self._use_ingress():
return self.PORT
labels = 'app={},role={}'.format(settings.APP_LABELS_TENSORBOARD,
settings.ROLE_LABELS_DASHBOARD)
ports = [service.spec.ports[0].port for service in self.list_services(labels)]
port = random.randint(*settings.TENSORBOARD_PORT_RANGE)
while port in ports:
port = random.randint(*settings.TENSORBOARD_PORT_RANGE)
return port
def start_tensorboard(self,
image,
outputs_path,
persistence_outputs,
outputs_refs_jobs=None,
outputs_refs_experiments=None,
resources=None,
node_selector=None,
affinity=None,
tolerations=None):
ports = [self.request_tensorboard_port()]
target_ports = [self.PORT]
volumes, volume_mounts = get_pod_outputs_volume(persistence_outputs)
refs_volumes, refs_volume_mounts = get_pod_refs_outputs_volumes(
outputs_refs=outputs_refs_jobs,
persistence_outputs=persistence_outputs)
volumes += refs_volumes
volume_mounts += refs_volume_mounts
refs_volumes, refs_volume_mounts = get_pod_refs_outputs_volumes(
outputs_refs=outputs_refs_experiments,
persistence_outputs=persistence_outputs)
volumes += refs_volumes
volume_mounts += refs_volume_mounts
node_selector = get_node_selector(
node_selector=node_selector,
default_node_selector=settings.NODE_SELECTOR_EXPERIMENTS)
affinity = get_affinity(
affinity=affinity,
default_affinity=settings.AFFINITY_EXPERIMENTS)
tolerations = get_tolerations(
tolerations=tolerations,
default_tolerations=settings.TOLERATIONS_EXPERIMENTS)
deployment = deployments.get_deployment(
namespace=self.namespace,
app=settings.APP_LABELS_TENSORBOARD,
name=self.TENSORBOARD_JOB_NAME,
project_name=self.project_name,
project_uuid=self.project_uuid,
job_name=self.job_name,
job_uuid=self.job_uuid,
volume_mounts=volume_mounts,
volumes=volumes,
image=image,
command=["/bin/sh", "-c"],
args=["tensorboard --logdir={} --port={}".format(outputs_path, self.PORT)],
ports=target_ports,
container_name=settings.CONTAINER_NAME_PLUGIN_JOB,
resources=resources,
node_selector=node_selector,
affinity=affinity,
tolerations=tolerations,
role=settings.ROLE_LABELS_DASHBOARD,
type=settings.TYPE_LABELS_RUNNER)
deployment_name = constants.JOB_NAME.format(name=self.TENSORBOARD_JOB_NAME,
job_uuid=self.job_uuid)
deployment_labels = deployments.get_labels(app=settings.APP_LABELS_TENSORBOARD,
project_name=self.project_name,
project_uuid=self.project_uuid,
job_name=self.job_name,
job_uuid=self.job_uuid,
role=settings.ROLE_LABELS_DASHBOARD,
type=settings.TYPE_LABELS_RUNNER)
dep_resp, _ = self.create_or_update_deployment(name=deployment_name, data=deployment)
service = services.get_service(
namespace=self.namespace,
name=deployment_name,
labels=deployment_labels,
ports=ports,
target_ports=target_ports,
service_type=self._get_service_type())
service_resp, _ = self.create_or_update_service(name=deployment_name, data=service)
results = {'deployment': dep_resp.to_dict(), 'service': service_resp.to_dict()}
if self._use_ingress():
annotations = json.loads(settings.K8S_INGRESS_ANNOTATIONS)
paths = [{
'path': '/tensorboard/{}'.format(self.project_name.replace('.', '/')),
'backend': {
'serviceName': deployment_name,
'servicePort': ports[0]
}
}]
ingress = ingresses.get_ingress(namespace=self.namespace,
name=deployment_name,
labels=deployment_labels,
annotations=annotations,
paths=paths)
self.create_or_update_ingress(name=deployment_name, data=ingress)
return results
def stop_tensorboard(self):
deployment_name = constants.JOB_NAME.format(name=self.TENSORBOARD_JOB_NAME,
job_uuid=self.job_uuid)
try:
self.delete_deployment(name=deployment_name)
self.delete_service(name=deployment_name)
if self._use_ingress():
self.delete_ingress(name=deployment_name)
return True
except PolyaxonK8SError:
return False
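
# Minimal usage sketch (the constructor arguments come from ProjectJobSpawner
# and are assumptions here, not defined in this module):
#   spawner = TensorboardSpawner(namespace=..., project_name=..., project_uuid=...,
#                                job_name=..., job_uuid=..., k8s_config=...)
#   spawner.start_tensorboard(image=..., outputs_path=..., persistence_outputs=...)
#   ...
#   spawner.stop_tensorboard()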
| 43.534722 | 93 | 0.595948 | 591 | 6,269 | 5.978003 | 0.184433 | 0.051514 | 0.040759 | 0.033965 | 0.320974 | 0.276819 | 0.276819 | 0.251344 | 0.251344 | 0.216247 | 0 | 0.0024 | 0.335301 | 6,269 | 143 | 94 | 43.839161 | 0.845452 | 0 | 0 | 0.307692 | 0 | 0 | 0.021375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030769 | false | 0 | 0.069231 | 0.007692 | 0.169231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf2cdf5265503bfa5f46413c8c8ff1d4149197dd | 4,651 | py | Python | iot/rooms/__init__.py | joh90/iot | 4a571be7e0760445dd2d5be858ecb4372b5d59b4 | [
"MIT"
] | 6 | 2018-11-06T02:07:21.000Z | 2021-12-15T07:56:14.000Z | iot/rooms/__init__.py | joh90/iot | 4a571be7e0760445dd2d5be858ecb4372b5d59b4 | [
"MIT"
] | 7 | 2019-06-17T15:50:22.000Z | 2021-03-14T19:24:16.000Z | iot/rooms/__init__.py | joh90/iot | 4a571be7e0760445dd2d5be858ecb4372b5d59b4 | [
"MIT"
] | 1 | 2020-05-26T09:32:56.000Z | 2020-05-26T09:32:56.000Z | import logging
from iot.constants import ROOM_LIST_MESSAGE
from iot.utils import return_mac
from iot.devices import DeviceType
from iot.devices.broadlink import (
BroadlinkDeviceFactory,
BroadlinkDeviceTypes
)
from iot.devices.errors import (
DeviceTypeNotFound, BrandNotFound,
SendCommandError
)
from iot.devices.factory import DeviceFactory
logger = logging.getLogger(__name__)
d_factory = DeviceFactory()
bl_d_factory = BroadlinkDeviceFactory()
# We assume one RM3 RM per room for now
# Supports multiple Broadlink devices
# eg. Smart Plug, Multi Plugs
class Room:
__slots__ = (
"name",
"rm",
"DEVICES",
"BL_DEVICES",
"last_action"
)
def __init__(self, name, rm):
self.name = name
self.rm = rm
self.DEVICES = {}
self.BL_DEVICES = {}
self.last_action = None
def room_info(self):
return {
"name": self.name,
"rm_host": self.rm.host[0] if self.rm else None,
"rm_mac": return_mac(self.rm.mac) if self.rm else None,
"type": self.rm.type if self.rm else None,
"devices": self.DEVICES
}
def format_room_devices(self):
room_devices = [
"*{}* | Type: {}".format(d.id, DeviceType(d.device_type).name) \
for d in self.DEVICES.values()
]
return room_devices
def format_room_bl_devices(self):
room_bl_devices = [
"*{}* | Type: {} | IP: {} | Mac: {}".format(
d.id, d.device_type, d.ip, d.mac_address) \
for d in self.BL_DEVICES.values()
]
return room_bl_devices
def room_list_info(self):
info = self.room_info()
room_devices = self.format_room_devices()
room_broadlink_devices = self.format_room_bl_devices()
return ROOM_LIST_MESSAGE.format(
info["name"],
"Type: {}, IP: {}, Mac: {}".format(
info["type"], info["rm_host"], info["rm_mac"]),
"\n".join(room_devices),
"\n".join(room_broadlink_devices)
)
def populate_devices(self, devices):
populated = []
for d in devices:
if d["id"] not in self.DEVICES:
try:
dev = d_factory.create_device(
d["type"], self, d["id"], d["brand"], d["model"]
)
self.add_device(dev)
populated.append(dev)
except DeviceTypeNotFound:
continue
except BrandNotFound:
logger.error(
"Room: %s, Unable to populate device %s, " \
"Brand %s not found for Device Type %s",
self.name, d["id"], d["brand"], d["type"]
)
continue
return populated
def add_device(self, device):
self.DEVICES[device.id] = device
def get_device(self, device_id):
pass
def populate_broadlink_devices(self, devices):
from iot.server import iot_server
for d in devices:
if d["id"] not in self.BL_DEVICES:
bl_device = iot_server.find_broadlink_device(
d["mac_address"], d["broadlink_type"].upper()
)
if bl_device is None:
logger.error(
"Room: %s, Unable to populate Broadlink device %s, " \
"Broadlink device %s not found with Device Type %s",
self.name, d["id"], d["mac_address"], d["broadlink_type"]
)
continue
try:
dev = bl_d_factory.create_device(
d["broadlink_type"], self, d["id"], bl_device
)
self.add_broadlink_devices(dev.id, dev)
iot_server.devices[dev.id] = dev
except DeviceTypeNotFound:
continue
def add_broadlink_devices(self, id, bl_device):
self.BL_DEVICES[id] = bl_device
def convert_to_bytearray(self, data):
return bytearray.fromhex("".join(data))
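    # Example: convert_to_bytearray(["26", "00", "0a"]) -> bytearray(b'&\x00\n')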
def send(self, data):
# Check device type
if self.rm and self.rm.type == "RMMINI":
self.send_rm_data(data)
def send_rm_data(self, data):
try:
self.rm.send_data(
self.convert_to_bytearray(data)
)
except Exception as e:
raise SendCommandError("{}: {}".format(e.__class__, e))
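
# Hypothetical usage sketch (the rm object and the device dicts are
# placeholders; the dict keys mirror the ones read above):
#   room = Room("living_room", rm)
#   room.populate_devices([
#       {"id": "tv", "type": "TV", "brand": "samsung", "model": "generic"},
#   ])
#   room.send(["26", "00", ...])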
| 29.436709 | 81 | 0.529779 | 517 | 4,651 | 4.562863 | 0.195358 | 0.025435 | 0.023739 | 0.015261 | 0.135227 | 0.090293 | 0.069521 | 0.042391 | 0.022891 | 0.022891 | 0 | 0.000678 | 0.365513 | 4,651 | 157 | 82 | 29.624204 | 0.798712 | 0.025586 | 0 | 0.106557 | 0 | 0 | 0.098962 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106557 | false | 0.008197 | 0.065574 | 0.016393 | 0.237705 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf2d6649ae78a91eff025de10e3d668a7dec13c5 | 2,919 | py | Python | start.py | mutageneral/fossdiscord | 54111e6e6ff8ee64f54241a11b9da52db4776223 | [
"MIT"
] | null | null | null | start.py | mutageneral/fossdiscord | 54111e6e6ff8ee64f54241a11b9da52db4776223 | [
"MIT"
] | null | null | null | start.py | mutageneral/fossdiscord | 54111e6e6ff8ee64f54241a11b9da52db4776223 | [
"MIT"
] | null | null | null | import os, ctypes, sys, subprocess, config, globalconfig, shutil
from git import Repo
from shutil import copyfile
commands = ["--help", "--updatebot", "--start", "--credits"]
def startbot():
print("Attempting to start the bot...")
print("REMEMBER: YOU MUST RUN THE COMMAND '" + config.prefix + "shutdownbot' TO SHUTDOWN THE BOT!!!!")
dir_path = os.getcwd()
subprocess.Popen(['python', dir_path + '/bot.py'])
sys.exit()
def botupdate():
if sys.platform == "linux" or sys.platform == "linux2":
        try:
            os.mkdir('/tmp/freeupdate')
        except OSError:
            # directory already exists (possibly non-empty); recreate it cleanly
            shutil.rmtree('/tmp/freeupdate')
            os.mkdir('/tmp/freeupdate')
HTTPS_REMOTE_URL = globalconfig.github_login_url
DEST_NAME = '/tmp/freeupdate'
Repo.clone_from(HTTPS_REMOTE_URL, DEST_NAME)
dir_path = os.getcwd()
shutil.rmtree(dir_path + "/cogs/")
#path = dir_path
src = '/tmp/freeupdate/cogs'
dest = dir_path + "/cogs"
shutil.copytree(src, dest)
copyfile('/tmp/freeupdate/bot.py', dir_path + '/bot.py')
copyfile('/tmp/freeupdate/setup.py', dir_path + '/setup.py')
copyfile('/tmp/freeupdate/README.md', dir_path + '/README.md')
copyfile('/tmp/freeupdate/globalconfig.py', dir_path + '/globalconfig.py')
shutil.rmtree('/tmp/freeupdate')
print("Done! Restart the bot to apply the changes!")
        print("FreeDiscord updated! No error reported. Check your console to confirm this.")
elif sys.platform == "win32":
print("'updatebot' is not yet available for Windows.")
elif sys.platform == "darwin":
print("'updatebot' is not yet available for macOS.")
if len(sys.argv) < 2:
    startbot()
if sys.argv[1] not in commands:
    sys.exit(sys.argv[1] + " is not a command. To get a command list, run 'python3 start.py --help'.")
if "--help" in sys.argv[1]:
try:
bool(sys.argv[2])
except:
sys.exit("FreeDiscord Start Script\nCommand List:\n\t--help - This message\n\t--start (or no argument) - Starts this FreeDiscord instance.\n\t--credits - Shows the credits of FreeDiscord.\n\t--updatebot - Updates this FreeDiscord instance.")
if sys.argv[2] == "gui":
sys.exit("FreeDiscord Start Script\npython3 start.py --start\nStarts the bot.")
elif sys.argv[2] == "help":
sys.exit("FreeDiscord Start Script\npython3 start.py --help\nShows the command list.")
elif sys.argv[2] == "crash":
sys.exit("FreeDiscord Start Script\npython3 start.py --updatebot\nUpdates the FreeDiscord instance.")
elif sys.argv[2] == "credits":
sys.exit("redev's CrashDash\npython3 start.py --credits\nShows the credits of FreeDiscord.")
if "--updatebot" in sys.argv[1]:
botupdate()
if "--start" in sys.argv[1]:
startbot()
| 42.304348 | 249 | 0.64063 | 385 | 2,919 | 4.807792 | 0.322078 | 0.041599 | 0.025932 | 0.049703 | 0.123717 | 0.10805 | 0.10805 | 0.071313 | 0 | 0 | 0 | 0.008243 | 0.210346 | 2,919 | 68 | 250 | 42.926471 | 0.794794 | 0.005139 | 0 | 0.183333 | 0 | 0.016667 | 0.443679 | 0.052015 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.05 | 0 | 0.083333 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf2ebd0be605b85c733e5e7a385de095a11ecc48 | 932 | py | Python | QTM/MixQC/1.0.0/plt.py | binggu56/qmd | e2628710de15f8a8b9a1280fcf92f9e87559414c | [
"MIT"
] | null | null | null | QTM/MixQC/1.0.0/plt.py | binggu56/qmd | e2628710de15f8a8b9a1280fcf92f9e87559414c | [
"MIT"
] | null | null | null | QTM/MixQC/1.0.0/plt.py | binggu56/qmd | e2628710de15f8a8b9a1280fcf92f9e87559414c | [
"MIT"
] | null | null | null | ##!/usr/bin/python
import numpy as np
import pylab as pl
#with open("traj.dat") as f:
# data = f.read()
#
# data = data.split('\n')
#
# x = [row.split(' ')[0] for row in data]
# y = [row.split(' ')[1] for row in data]
#
# fig = plt.figure()
#
# ax1 = fig.add_subplot(111)
#
# ax1.set_title("Plot title...")
# ax1.set_xlabel('your x label..')
# ax1.set_ylabel('your y label...')
#
# ax1.plot(x,y, c='r', label='the data')
#
# leg = ax1.legend()
#fig = plt.figure()
font = {'family' : 'Times New Roman',
# 'weight' : 'bold',
'size' : 20}
pl.rc('font', **font)
data = np.genfromtxt(fname='xoutput')
#data = np.loadtxt('traj.dat')
for x in range(1,20):
pl.plot(data[:,0],data[:,x],'k-',linewidth=1)
#plt.figure(1)
#plt.plot(x,y1,'-')
#plt.plot(x,y2,'g-')
#pl.ylim(0,1)
pl.xlabel('Time [a.u.]')
pl.ylabel('Positions')
#pl.title('')
pl.savefig('traj.pdf')
pl.show()
| 19.416667 | 49 | 0.549356 | 150 | 932 | 3.386667 | 0.48 | 0.05315 | 0.031496 | 0.047244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030831 | 0.199571 | 932 | 47 | 50 | 19.829787 | 0.650134 | 0.590129 | 0 | 0 | 0 | 0 | 0.1875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf2ee0d6951dff87d2cc119417466bb9ccb36246 | 2,753 | py | Python | generator/generator.py | zbelateche/ee272_cgra | 4cf2e3cf4a4bdf585d87a9209a5bf252666bc6a2 | [
"BSD-3-Clause"
] | 1 | 2020-07-23T02:57:12.000Z | 2020-07-23T02:57:12.000Z | generator/generator.py | zbelateche/ee272_cgra | 4cf2e3cf4a4bdf585d87a9209a5bf252666bc6a2 | [
"BSD-3-Clause"
] | null | null | null | generator/generator.py | zbelateche/ee272_cgra | 4cf2e3cf4a4bdf585d87a9209a5bf252666bc6a2 | [
"BSD-3-Clause"
] | 1 | 2021-04-27T23:13:43.000Z | 2021-04-27T23:13:43.000Z | from abc import ABC, abstractmethod
from ordered_set import OrderedSet
import magma
from common.collections import DotDict
from generator.port_reference import PortReference, PortReferenceBase
import warnings
class Generator(ABC):
def __init__(self):
self.ports = DotDict()
self.wires = []
@abstractmethod
def name(self):
pass
def add_port(self, name, T):
if name in self.ports:
raise ValueError(f"{name} is already a port")
self.ports[name] = PortReference(self, name, T)
def add_ports(self, **kwargs):
for name, T in kwargs.items():
self.add_port(name, T)
def wire(self, port0, port1):
assert isinstance(port0, PortReferenceBase)
assert isinstance(port1, PortReferenceBase)
connection = self.__sort_ports(port0, port1)
if connection not in self.wires:
self.wires.append(connection)
else:
warnings.warn(f"skipping duplicate connection: "
f"{port0.qualified_name()}, "
f"{port1.qualified_name()}")
def remove_wire(self, port0, port1):
assert isinstance(port0, PortReferenceBase)
assert isinstance(port1, PortReferenceBase)
connection = self.__sort_ports(port0, port1)
if connection in self.wires:
self.wires.remove(connection)
def decl(self):
io = []
for name, port in self.ports.items():
io += [name, port.base_type()]
return io
def children(self):
children = OrderedSet()
for ports in self.wires:
for port in ports:
if port.owner() == self:
continue
children.add(port.owner())
return children
def circuit(self):
children = self.children()
circuits = {}
for child in children:
circuits[child] = child.circuit()
class _Circ(magma.Circuit):
name = self.name()
IO = self.decl()
@classmethod
def definition(io):
instances = {}
for child in children:
instances[child] = circuits[child]()
instances[self] = io
for port0, port1 in self.wires:
inst0 = instances[port0.owner()]
inst1 = instances[port1.owner()]
wire0 = port0.get_port(inst0)
wire1 = port1.get_port(inst1)
magma.wire(wire0, wire1)
return _Circ
def __sort_ports(self, port0, port1):
if id(port0) < id(port1):
return (port0, port1)
else:
return (port1, port0)
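
# Hypothetical usage sketch (Passthrough and its port types are made up for
# illustration; real subclasses must implement name() and declare ports):
#   class Passthrough(Generator):
#       def __init__(self):
#           super().__init__()
#           self.add_ports(inp=magma.In(magma.Bit), out=magma.Out(magma.Bit))
#           self.wire(self.ports.inp, self.ports.out)
#       def name(self):
#           return "Passthrough"
#   circ = Passthrough().circuit()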
| 30.588889 | 69 | 0.55721 | 289 | 2,753 | 5.217993 | 0.256055 | 0.041777 | 0.029178 | 0.023873 | 0.210875 | 0.18435 | 0.18435 | 0.18435 | 0.18435 | 0.18435 | 0 | 0.020101 | 0.349437 | 2,753 | 89 | 70 | 30.932584 | 0.821887 | 0 | 0 | 0.133333 | 0 | 0 | 0.03814 | 0.017799 | 0 | 0 | 0 | 0 | 0.053333 | 1 | 0.146667 | false | 0.013333 | 0.08 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf33c0f359af61ed23f396ff759a9bbdc5a2e5ec | 7,118 | py | Python | app/gws/web/wrappers.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | app/gws/web/wrappers.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | app/gws/web/wrappers.py | ewie/gbd-websuite | 6f2814c7bb64d11cb5a0deec712df751718fb3e1 | [
"Apache-2.0"
] | null | null | null | import os
import gzip
import io
import werkzeug.utils
import werkzeug.wrappers
import werkzeug.wsgi
from werkzeug.utils import cached_property
import gws
import gws.tools.date
import gws.tools.json2
import gws.tools.net
import gws.tools.vendor.umsgpack as umsgpack
import gws.web.error
import gws.types as t
_JSON = 1
_MSGPACK = 2
_struct_mime = {
_JSON: 'application/json',
_MSGPACK: 'application/msgpack',
}
#:export IResponse
class BaseResponse(t.IResponse):
def __init__(self, **kwargs):
if 'wz' in kwargs:
self._wz = kwargs['wz']
else:
self._wz = werkzeug.wrappers.Response(**kwargs)
def __call__(self, environ, start_response):
return self._wz(environ, start_response)
def set_cookie(self, key, **kwargs):
self._wz.set_cookie(key, **kwargs)
def delete_cookie(self, key, **kwargs):
self._wz.delete_cookie(key, **kwargs)
def add_header(self, key, value):
self._wz.headers.add(key, value)
#:export IBaseRequest
class BaseRequest(t.IBaseRequest):
def __init__(self, root: t.IRootObject, environ: dict, site: t.IWebSite):
self._wz = werkzeug.wrappers.Request(environ)
# this is also set in nginx (see server/ini), but we need this for unzipping (see data() below)
self._wz.max_content_length = root.var('server.web.maxRequestLength') * 1024 * 1024
self.params = {}
self._lower_params = {}
self.root: t.IRootObject = root
self.site: t.IWebSite = site
self.method: str = self._wz.method
def init(self):
self.params = self._parse_params() or {}
self._lower_params = {k.lower(): v for k, v in self.params.items()}
@property
def environ(self) -> dict:
return self._wz.environ
@cached_property
def input_struct_type(self) -> int:
if self.method == 'POST':
ct = self.header('content-type', '').lower()
if ct.startswith(_struct_mime[_JSON]):
return _JSON
if ct.startswith(_struct_mime[_MSGPACK]):
return _MSGPACK
return 0
@cached_property
def output_struct_type(self) -> int:
h = self.header('accept', '').lower()
if _struct_mime[_MSGPACK] in h:
return _MSGPACK
if _struct_mime[_JSON] in h:
return _JSON
return self.input_struct_type
@property
def data(self) -> t.Optional[bytes]:
if self.method != 'POST':
return None
data = self._wz.get_data(as_text=False, parse_form_data=False)
if self.root.application.developer_option('request.log_all'):
gws.write_file_b(f'{gws.VAR_DIR}/debug_request_{gws.tools.date.timestamp_msec()}', data)
if self.header('content-encoding') == 'gzip':
with gzip.GzipFile(fileobj=io.BytesIO(data)) as fp:
return fp.read(self._wz.max_content_length)
return data
@property
def text(self) -> t.Optional[str]:
if self.method != 'POST':
return None
charset = self.header('charset', 'utf-8')
try:
return self.data.decode(encoding=charset, errors='strict')
except UnicodeDecodeError as e:
gws.log.error('post data decoding error')
raise gws.web.error.BadRequest() from e
@property
def is_secure(self) -> bool:
return self._wz.is_secure
def env(self, key: str, default: str = None) -> str:
return self._wz.environ.get(key, default)
def param(self, key: str, default: str = None) -> str:
return self._lower_params.get(key.lower(), default)
def has_param(self, key: str) -> bool:
return key.lower() in self._lower_params
def header(self, key: str, default: str = None) -> str:
return self._wz.headers.get(key, default)
def cookie(self, key: str, default: str = None) -> str:
return self._wz.cookies.get(key, default)
def url_for(self, url: t.Url) -> t.Url:
u = self.site.url_for(self, url)
# gws.log.debug(f'url_for: {url!r}=>{u!r}')
return u
def response(self, content: str, mimetype: str, status: int = 200) -> t.IResponse:
return BaseResponse(
response=content,
mimetype=mimetype,
status=status
)
def redirect_response(self, location, status=302):
return werkzeug.utils.redirect(location, status)
def file_response(self, path: str, mimetype: str, status: int = 200, attachment_name: str = None) -> t.IResponse:
headers = {
'Content-Length': os.path.getsize(path)
}
if attachment_name:
headers['Content-Disposition'] = f'attachment; filename="{attachment_name}"'
fp = werkzeug.wsgi.wrap_file(self.environ, open(path, 'rb'))
return BaseResponse(
response=fp,
mimetype=mimetype,
status=status,
headers=headers,
direct_passthrough=True
)
def struct_response(self, data: t.Response, status: int = 200) -> t.IResponse:
typ = self.output_struct_type or _JSON
return self.response(self._encode_struct(data, typ), _struct_mime[typ], status)
def error_response(self, err) -> t.IResponse:
return BaseResponse(wz=err.get_response(self._wz.environ))
def _parse_params(self):
if self.input_struct_type:
return self._decode_struct(self.input_struct_type)
args = {k: v for k, v in self._wz.args.items()}
path = self._wz.path
# the server only understands requests to /_/...
# the params can be given as query string or encoded in the path
# like _/cmd/command/layer/la/x/12/y/34 etc
if path == gws.SERVER_ENDPOINT:
return args
if path.startswith(gws.SERVER_ENDPOINT + '/'):
p = path.split('/')
for n in range(3, len(p), 2):
args[p[n - 1]] = p[n]
return args
gws.log.error(f'invalid request path: {path!r}')
raise gws.web.error.NotFound()
def _encode_struct(self, data, typ):
if typ == _JSON:
return gws.tools.json2.to_string(data, pretty=True)
if typ == _MSGPACK:
return umsgpack.dumps(data, default=gws.as_dict)
raise ValueError('invalid struct type')
def _decode_struct(self, typ):
if typ == _JSON:
try:
s = self.data.decode(encoding='utf-8', errors='strict')
return gws.tools.json2.from_string(s)
except (UnicodeDecodeError, gws.tools.json2.Error):
gws.log.error('malformed json request')
raise gws.web.error.BadRequest()
if typ == _MSGPACK:
try:
return umsgpack.loads(self.data)
except (TypeError, umsgpack.UnpackException):
gws.log.error('malformed msgpack request')
raise gws.web.error.BadRequest()
gws.log.error('invalid struct type')
raise gws.web.error.BadRequest()
| 31.635556 | 117 | 0.609722 | 906 | 7,118 | 4.634658 | 0.232892 | 0.027149 | 0.015718 | 0.019052 | 0.136699 | 0.094784 | 0.036675 | 0.036675 | 0.036675 | 0.027864 | 0 | 0.006958 | 0.27311 | 7,118 | 224 | 118 | 31.776786 | 0.8046 | 0.045659 | 0 | 0.180723 | 0 | 0 | 0.064397 | 0.017094 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162651 | false | 0.006024 | 0.084337 | 0.066265 | 0.451807 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf34a3f0197c3f6dc8a1f65c74ae293fb179d4ac | 3,299 | py | Python | mozinor/example/toto_stack_model_script.py | Jwuthri/Mozinor | 5a2cd4f0447a96425d899a8e063668741a091a8b | [
"MIT"
] | 3 | 2017-08-17T21:32:05.000Z | 2018-07-30T11:30:09.000Z | mozinor/example/toto_stack_model_script.py | Jwuthri/Mozinor | 5a2cd4f0447a96425d899a8e063668741a091a8b | [
"MIT"
] | null | null | null | mozinor/example/toto_stack_model_script.py | Jwuthri/Mozinor | 5a2cd4f0447a96425d899a8e063668741a091a8b | [
"MIT"
] | null | null | null |
# -*- coding: utf-8 -*-
"""
Created on July 2017
@author: JulienWuthrich
"""
import pandas as pd
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_absolute_error, accuracy_score, r2_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import ElasticNetCV, LassoLarsCV, RidgeCV
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import KNeighborsRegressor
from xgboost import XGBRegressor, XGBClassifier
from vecstack import stacking
# Read the csv file
data = pd.read_csv("toto.csv")
regression = False
if regression:
metric = r2_score
else:
metric = accuracy_score
# Split dependants and independant variables
y = data[["predict"]]
X = data.drop("predict", axis=1)
# Split into training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
# Apply Some Featuring
poly_reg = PolynomialFeatures(degree=1)
# Transform into numpy object
x_train = poly_reg.fit_transform(X_train)
x_test = poly_reg.fit_transform(X_test)
y_test = np.array(y_test.ix[:,0])
y_train = np.array(y_train.ix[:,0])
# define lmodels
lmodels = [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
max_depth=None, max_features=0.6, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=4, min_weight_fraction_leaf=0.0,
n_estimators=100, n_jobs=1, oob_score=False, random_state=None,
verbose=0, warm_start=False), XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=1,
gamma=0, learning_rate=0.5, max_delta_step=0, max_depth=8,
min_child_weight=6, missing=None, n_estimators=50, nthread=-1,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=0, silent=True, subsample=0.9), KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=17, p=2,
weights='distance')]
# build the stack level 1
S_train, S_test = stacking(
lmodels, x_train, y_train, x_test,
regression=regression, metric=metric,
n_folds=3, shuffle=True, random_state=0, verbose=1
)
# build model lvel 2
model = DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=10,
max_features=None, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=2,
min_samples_split=5, min_weight_fraction_leaf=0.0,
presort=False, random_state=None, splitter='best')
# Fit the model
model.fit(S_train, y_train)
# Predict
y_pred = model.predict(S_test)
# Scoring
if regression:
print('Score on test set:', mean_absolute_error(y_test, y_pred))
else:
print('Score on test set:', accuracy_score(y_test, y_pred))
print(metric(y_test, y_pred))
| 35.095745 | 136 | 0.76114 | 469 | 3,299 | 5.130064 | 0.381663 | 0.054863 | 0.012469 | 0.012469 | 0.148795 | 0.093101 | 0.073982 | 0.041563 | 0.041563 | 0.041563 | 0 | 0.023734 | 0.144286 | 3,299 | 93 | 137 | 35.473118 | 0.828551 | 0.090634 | 0 | 0.067797 | 0 | 0 | 0.037261 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.271186 | 0 | 0.271186 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf36336bd222b8046304d99fe89eeed7d9b73ede | 4,330 | py | Python | Detection and Tracking/main.py | Jay-Nehra/Object-Detection | f91085ecf709d21bf7ffd3b2e370fc36ae5e88f2 | [
"BSD-3-Clause"
] | 1 | 2021-01-23T09:11:59.000Z | 2021-01-23T09:11:59.000Z | Detection and Tracking/main.py | Jay-Nehra/Object-Detection | f91085ecf709d21bf7ffd3b2e370fc36ae5e88f2 | [
"BSD-3-Clause"
] | null | null | null | Detection and Tracking/main.py | Jay-Nehra/Object-Detection | f91085ecf709d21bf7ffd3b2e370fc36ae5e88f2 | [
"BSD-3-Clause"
] | null | null | null | """
this program takes in a checkerboard image from a camera and calibrates the
image to remove camera radial and tangential distortion.
"""
import cv2
import YOLO as odYOLO # object detection using YOLO
import HOG as odHOG # object detection using an svm and HOG features
import data
import numpy as np
""" Uncomment below if adding project 4 - advanced lane detection """
#from driveline import Lane
#from camera import CameraImage
#from lane import lane_pipeline
use_yolo = False
def adjust_channel_gamma(channel, gamma=1.):
# adjusts the brightness of an image channel
# channel : 2D source channel
# gamma : brightness correction factor, gamma < 1 => darker image
# returns : gamma corrected image
# build a lookup table mapping the pixel values [0, 255] to
# their adjusted gamma values
# http://www.pyimagesearch.com/2015/10/05/opencv-gamma-correction/
invGamma = 1.0 / np.absolute(gamma)
table = (np.array([((i / 255.0) ** invGamma) * 255
for i in np.arange(0, 256)]).astype("uint8"))
# apply gamma correction using the lookup table
return cv2.LUT(channel, table)
def adjust_image_gamma(img, gamma=1.):
# adjusts the brightness of an image
# img : source image
# gamma : brightness correction factor, gamma < 1 => darker image
# returns : gamma corrected image
# convert to HSV to adjust gamma by V
img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
img[:, :, 2] = adjust_channel_gamma(img[:, :, 2], gamma=gamma)
return cv2.cvtColor(img, cv2.COLOR_HSV2BGR)
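# Example: per the lookup table above, gamma < 1 darkens and gamma > 1
# brightens, e.g. adjust_image_gamma(img, gamma=0.5) returns a darker copy.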
# Define the codec and create VideoWriter object
if data.isVideo:
# setup video recording when using a video
    fourcc = cv2.VideoWriter_fourcc(*'WMV2')  # alternatively 'MJPG'
filename = 'output_images/YOLO_projectvideo.wmv' # + data.video
out = cv2.VideoWriter(filename, fourcc, 20.0, (1280, 720))
# initalise the video capture
cam = cv2.VideoCapture(data.img_add)
# setup which object detection method to use Yolo or SVM & HOG
if use_yolo is True:
# define the yolo classifier
# this calls the python wrapper implemented by darkflow
# https://github.com/thtrieu/darkflow
# this is an implementation of the yolo object detection method outlined in papers
# You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640 [cs.CV],
# YOLO9000: Better, Faster, Stronger, arXiv:1612.08242 [cs.CV]
yolo = odYOLO.yolo(model="cfg/tiny-yolo-voc.cfg", chkpt="bin/tiny-yolo-voc.weights", threshold=0.12)
else:
# define a SVM and HOG classifier
car_object = odHOG.object(spatial_size=(12,12), hist_bins=34, pix_per_cell=13, hog_channel='ALL', cspace='HLS')
# location of the training data for the SVM
car_object.train_svm("data/vehicles_smallset/", "data/non-vehicles_smallset/")
while(1):
# continually loop if the input is a video until it ends of the user presses 'q'
# if an image execute once and wait till the user presses a key
if data.isVideo:
ret, image = cam.read()
        if not ret:
break
else:
# read in the image to the program
image = cv2.imread(data.img_add, -1)
""" object detection """
if use_yolo is True:
# YOLO classifier
gamma_img = adjust_image_gamma(image.copy(), 2)
objs = yolo.find_object(gamma_img) # find the objects
image = yolo.draw_box(image, objs, show_label=True) # add the detected objects to the window
else:
h, w = image.shape[:2]
# SVM and HOG classifier
gamma_img = adjust_image_gamma(image.copy(), 2)
obj_pos = car_object.locate_objects(gamma_img, h // 2, h-80, 0, w, scale=2, show_obj=False,
show_boxes=False, heat_thresh=6, show_heat=False)
image = car_object.draw_labeled_bboxes(image, obj_pos, color=(0, 0, 255), thick=6)
cv2.imshow('final', image)
# wait for a user key interrupt then close all windows
if data.isVideo:
out.write(image) # save image to video
if cv2.waitKey(1) & 0xFF == ord('q'):
break
else:
# save the new image
cv2.imwrite('output_images/objects_' + data.image, image)
cv2.waitKey(0)
break
if data.isVideo:
out.release()
cam.release()
cv2.destroyAllWindows()
| 37.982456 | 115 | 0.671132 | 624 | 4,330 | 4.575321 | 0.399038 | 0.031524 | 0.018214 | 0.011208 | 0.1331 | 0.10718 | 0.10718 | 0.10718 | 0.082662 | 0.051839 | 0 | 0.034555 | 0.231409 | 4,330 | 113 | 116 | 38.318584 | 0.823317 | 0.419861 | 0 | 0.272727 | 0 | 0 | 0.073418 | 0.064557 | 0 | 0 | 0.001688 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.090909 | 0 | 0.163636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf39abdd7b9db220323875a0a137611f84fce21d | 1,646 | py | Python | functions/07.py | luan-gomes/python-basic-exercises | 213844b421b27ab3e9c09be24d4efb37cc6fce08 | [
"MIT"
] | null | null | null | functions/07.py | luan-gomes/python-basic-exercises | 213844b421b27ab3e9c09be24d4efb37cc6fce08 | [
"MIT"
] | null | null | null | functions/07.py | luan-gomes/python-basic-exercises | 213844b421b27ab3e9c09be24d4efb37cc6fce08 | [
"MIT"
] | null | null | null | """
1) Write a program that uses the function valorPagamento to determine the
amount to be paid for one installment of a bill.
2) The program must ask the user for the installment amount and the number
of days overdue, and pass these values to the function valorPagamento,
which will compute the amount to be paid and return this value to the
calling program. The program must then display the amount to be paid.
3) After that, the program must go back to asking for another installment
amount, and keep going until an amount equal to zero is entered for the
installment. At that point the program must end, printing the daily report,
which contains the number and the total value of the installments paid
that day.
4) The amount to be paid is computed as follows. For payments with no
delay, charge the installment amount. When there is a delay, charge a 3%
fine plus 0.1% interest per day overdue.
"""
def valorPagamento(valorPrestacao, diasAtraso):
if diasAtraso == 0:
return valorPrestacao
else:
multa = 0.03 * valorPrestacao
jurosAoDia = (0.001*diasAtraso) * valorPrestacao
valorAPagar = valorPrestacao + multa + jurosAoDia
return valorAPagar
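# Example: valorPagamento(100.0, 10) -> 100 + 3.0 (3% fine)
#          + 1.0 (0.1% * 10 days of interest) = 104.0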
montanteDoDia = 0
quantidade = 0
while True:
    prestacao = float(input("Enter the installment amount: "))
    dias = int(input("Enter how many days overdue: "))
    if prestacao == 0:
        print("-" * 5 + " DAILY REPORT " + "-" * 5)
        print(f"Number of installments paid: {quantidade}")
        print(f"Total amount: {montanteDoDia}")
        break
    else:
        valor = valorPagamento(prestacao, dias)
        print(f"Amount to pay: {valor}")
quantidade += 1
montanteDoDia += valor
| 35.021277 | 74 | 0.744228 | 253 | 1,646 | 4.841897 | 0.426877 | 0.034286 | 0.036735 | 0.053061 | 0.034286 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015683 | 0.186513 | 1,646 | 46 | 75 | 35.782609 | 0.899178 | 0.556501 | 0 | 0.086957 | 0 | 0 | 0.246537 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.130435 | 0.173913 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf3a73df976f6a84385fb7762c36292debe844b3 | 1,814 | py | Python | common/login.py | zhaopiandehuiyiforsang/python_test | 7a6ef77afd3b436f798ca68c77b9ac8669e00094 | [
"MIT"
] | null | null | null | common/login.py | zhaopiandehuiyiforsang/python_test | 7a6ef77afd3b436f798ca68c77b9ac8669e00094 | [
"MIT"
] | null | null | null | common/login.py | zhaopiandehuiyiforsang/python_test | 7a6ef77afd3b436f798ca68c77b9ac8669e00094 | [
"MIT"
] | null | null | null | # -*- conding:utf-8 -*-
from init_env import BASE_DIR
from common.HttpUtils import HttpUtils
from common.env_config import ServerCC
from common.DateUtils import currentTimeMillis, DateTime
import json
import os
token_json_path = BASE_DIR + '/resources/token.json'
"""
Utility for obtaining the API access token.
"""
URL_AUTH = 'https://rasdev9.zhixueyun.com/oauth/api/v1/auth'
def login(url=URL_AUTH, data=None):
if data is None:
return None
r = HttpUtils()
result = r.post(url, data=data)
if result.status_code != 200:
        print('Failed to obtain token')
os._exit(0)
token_file = open(token_json_path, 'w')
jsonObj = json.loads(result.text)
expires_in = jsonObj['expires_in']
    # expiration time
out_of_time = currentTimeMillis()+expires_in
jsonObj['out_of_time'] = out_of_time
jsonObj['expires_time'] = DateTime(out_of_time)
jsonObj['create_time'] = DateTime()
jsonStr = json.dumps(jsonObj)
token_file.write(jsonStr)
token_file.close()
r.logJson(jsonStr)
return jsonStr
def getToken(url=URL_AUTH, data=None, content=None):
    if not content:
token_file = open(token_json_path, 'r')
content = token_file.read()
token_file.close()
    if not content:
        content = login(url, data)
        return getToken(url, data, content)
jsonObj = json.loads(content)
access_token = jsonObj['access_token']
token_type = jsonObj['token_type']
out_of_time = jsonObj['out_of_time']
    if out_of_time < currentTimeMillis() + 5:
        # token expired (or is about to): log in again and retry
        content = login(url, data)
        return getToken(url, data, content)
token = token_type+'__'+access_token
return token
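
# Hypothetical usage (the payload keys depend on the auth endpoint and are
# placeholders here):
#   token = getToken(data={"client_id": "...", "client_secret": "...",
#                          "grant_type": "client_credentials"})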
if __name__ == "__main__":
server = ServerCC()
URL_AUTH = server.getEnv(ServerCC.DEV)[1]
# print(getToken(''))
login(URL_AUTH)
| 25.914286 | 60 | 0.669239 | 238 | 1,814 | 4.857143 | 0.331933 | 0.030277 | 0.054498 | 0.041522 | 0.188581 | 0.119377 | 0.074394 | 0.074394 | 0 | 0 | 0 | 0.006276 | 0.209482 | 1,814 | 69 | 61 | 26.289855 | 0.799861 | 0.025358 | 0 | 0.163265 | 0 | 0 | 0.095348 | 0.012062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.122449 | 0 | 0.265306 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf3bbae06f3088b31cf43074001976c60e15c3b8 | 262 | py | Python | wbb/utils/filter_groups.py | Imran95942/userbotisl | 1614af1d1ba904dfd5e28dfd5b3e21d5e24bb55c | [
"MIT"
] | 1 | 2021-11-17T13:25:25.000Z | 2021-11-17T13:25:25.000Z | wbb/utils/filter_groups.py | Imran95942/userbotisl | 1614af1d1ba904dfd5e28dfd5b3e21d5e24bb55c | [
"MIT"
] | null | null | null | wbb/utils/filter_groups.py | Imran95942/userbotisl | 1614af1d1ba904dfd5e28dfd5b3e21d5e24bb55c | [
"MIT"
] | null | null | null | chat_filters_group = 1
chatbot_group = 2
karma_positive_group = 3
karma_negative_group = 4
regex_group = 5
welcome_captcha_group = 6
antiflood_group = 7
blacklist_filters_group = 8
taglog_group = 9
chat_watcher_group = 10
flood_group = 11
autocorrect_group = 12
| 20.153846 | 27 | 0.816794 | 42 | 262 | 4.666667 | 0.666667 | 0.122449 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066372 | 0.137405 | 262 | 12 | 28 | 21.833333 | 0.800885 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf3e620c460aed9e0fba7d56f5f6161f6fb1dbd6 | 3,162 | py | Python | my_pilz_sandbox/scripts/pause.py | ct2034/my_pilz_sandbox | 40400c6469918f56d384580d41f61b2cca3b49c9 | [
"BSD-3-Clause"
] | null | null | null | my_pilz_sandbox/scripts/pause.py | ct2034/my_pilz_sandbox | 40400c6469918f56d384580d41f61b2cca3b49c9 | [
"BSD-3-Clause"
] | null | null | null | my_pilz_sandbox/scripts/pause.py | ct2034/my_pilz_sandbox | 40400c6469918f56d384580d41f61b2cca3b49c9 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
from geometry_msgs.msg import Pose, Point, PoseArray, Quaternion
import math
import numpy as np
from pilz_robot_programming import *
import random
import rospy
import time
__REQUIRED_API_VERSION__ = "1" # API version
SLOW_VEL_SCALE = .1
ACC_SCALE = .1
GRIPPER_POSE_CLOSED = 0.001
GRIPPER_POSE_OPEN = 0.029
class MoveThread(threading.Thread):
def __init__(self, robot, cmd):
threading.Thread.__init__(self)
self._robot = robot
self._cmd = cmd
self.exception_thrown = False
def run(self):
rospy.logdebug("Start motion...")
try:
self._robot.move(self._cmd)
except RobotMoveFailed:
rospy.loginfo("Caught expected exception.")
self.exception_thrown = True
# trying to pause a seq command
def pausing_a_sequence(r):
r.move(Ptp(goal=Pose(position=Point(0.0, 0.0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE))
r.move(Ptp(goal=Pose(position=Point(0.0, 0.0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE))
print("prepared.")
seq = Sequence()
seq.append(Ptp(goal=Pose(position=Point(0.0, 0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE))
seq.append(Ptp(goal=Pose(position=Point(0.2, 0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE),
blend_radius=0.099)
seq.append(Ptp(goal=Pose(position=Point(0.2, 0.2, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE),
blend_radius=0.099)
seq.append(Ptp(goal=Pose(position=Point(0, 0.2, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE))
move_thread = MoveThread(r, seq)
move_thread.start()
for i in range(10):
rospy.sleep(1)
try:
r.pause()
except Exception as e:
rospy.loginfo(e)
rospy.sleep(.2)
r.resume()
move_thread.join()
# trying to pause a ptp command
def pausing_a_ptp(r):
r.move(Ptp(goal=Pose(position=Point(-0.2, 0.0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE))
print("prepared.")
ptp = Ptp(goal=Pose(position=Point(0.2, 0, .9), orientation=Quaternion(0,0,0,1)),
vel_scale=SLOW_VEL_SCALE,
acc_scale=ACC_SCALE)
move_thread = MoveThread(r, ptp)
move_thread.start()
for i in range(10):
rospy.sleep(1)
r.pause()
rospy.sleep(.2)
r.resume()
move_thread.join()
if __name__ == "__main__":
# init a rosnode
rospy.init_node('robot_program_node')
# initialisation
r = Robot(__REQUIRED_API_VERSION__) # instance of the robot
# start the main program
pausing_a_sequence(r)
# pausing_a_ptp(r)
| 30.403846 | 92 | 0.606262 | 441 | 3,162 | 4.113379 | 0.226757 | 0.028666 | 0.114664 | 0.083793 | 0.556229 | 0.556229 | 0.556229 | 0.556229 | 0.520397 | 0.484565 | 0 | 0.041286 | 0.272296 | 3,162 | 103 | 93 | 30.699029 | 0.747066 | 0.058191 | 0 | 0.468354 | 0 | 0 | 0.028966 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.088608 | 0 | 0.151899 | 0.025316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf3f13905a5ccf5bc9884a2805ccfdf8e0e29624 | 822 | py | Python | feed-runner.py | quandram/podcatcher | b1d14b10b3e1afd1947e09ddf2006dac37c6fae7 | [
"MIT"
] | null | null | null | feed-runner.py | quandram/podcatcher | b1d14b10b3e1afd1947e09ddf2006dac37c6fae7 | [
"MIT"
] | null | null | null | feed-runner.py | quandram/podcatcher | b1d14b10b3e1afd1947e09ddf2006dac37c6fae7 | [
"MIT"
] | null | null | null | import configparser
import os
from podcatcher import podcatcher
import configKeys
def update_last_processed_date(config, configSection, lastDownloadedDate):
config.set(configSection, configKeys.LAST_DOWNLOADED_DATE, lastDownloadedDate.strftime("%Y-%m-%d %H:%M:%S %Z"))
with open(os.path.join(os.path.dirname(__file__), "config.ini"), "w") as configFile:
config.write(configFile)
def main():
config = configparser.ConfigParser()
config.read(os.path.join(os.path.dirname(__file__), "config.ini"))
for configSection in config.sections():
if configSection != configKeys.SETTINGS_NAME:
            update_last_processed_date(config, configSection, podcatcher(config[configKeys.SETTINGS_NAME], configSection, config[configSection]).get_new_pods())
if __name__ == "__main__":
main()
| 37.363636 | 161 | 0.744526 | 98 | 822 | 5.959184 | 0.459184 | 0.041096 | 0.065068 | 0.078767 | 0.267123 | 0.267123 | 0.123288 | 0.123288 | 0.123288 | 0 | 0 | 0 | 0.131387 | 822 | 21 | 162 | 39.142857 | 0.817927 | 0 | 0 | 0 | 0 | 0 | 0.059611 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf47b256b9183a754f0c9560868b735c8181e6d5 | 9,250 | py | Python | cli/train.py | breid1313/nlp_hw3_text_fcn_pytorch | a4234e90d37e94a3043d9715c90bac7543f4b0ae | [
"Apache-2.0"
] | null | null | null | cli/train.py | breid1313/nlp_hw3_text_fcn_pytorch | a4234e90d37e94a3043d9715c90bac7543f4b0ae | [
"Apache-2.0"
] | null | null | null | cli/train.py | breid1313/nlp_hw3_text_fcn_pytorch | a4234e90d37e94a3043d9715c90bac7543f4b0ae | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Vladislav Lialin and Skillfactory LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Train a neural network classifier."""
import argparse
import logging
import os
import sys
import torch
import torch.nn.functional as F
import datasets
import toml
import wandb
from tqdm.auto import tqdm
from nn_classifier import utils, data_utils
from nn_classifier.modelling import FcnBinaryClassifier
logging.basicConfig(
format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO,
stream=sys.stdout,
)
logger = logging.getLogger(os.path.basename(__file__))
def parse_args(args=None):
parser = argparse.ArgumentParser()
# fmt: off
# preprocessing
parser.add_argument("--max_vocab_size", default=50_000, type=int,
help="maximum size of the vocabulary")
# model
parser.add_argument("--hidden_size", default=32, type=int,
help="size of the intermediate layer in the network")
# note that we can't use action='store_true' here or this won't work with wandb sweeps
parser.add_argument("--use_batch_norm", default=False, type=lambda s: s.lower() == 'true')
parser.add_argument("--dropout", default=0.5, type=float)
parser.add_argument("--weight_decay", default=0, type=float,
help="L2 regularization parameter.")
parser.add_argument("--lr", default=1e-3, type=float,
help="Learning rate")
# training
parser.add_argument("--batch_size", default=64, type=int,
help="number of examples in a single batch")
parser.add_argument("--max_epochs", default=5, type=int,
help="number of passes through the dataset during training")
parser.add_argument("--early_stopping", default=1, type=int,
help="Stop training if the model does not improve the results after this many epochs")
# misc
parser.add_argument("--device", default=None, type=str,
help="device to train on, use GPU if available by default")
parser.add_argument("--output_dir", default=None, type=str,
help="a directory to save the model and config, do not save the model by default")
parser.add_argument("--wandb_project", default="nlp_module_3_assignment",
help="wandb project name to log metrics to")
# fmt: on
args = parser.parse_args(args)
return args
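
# Example invocation (values are illustrative):
#   python train.py --hidden_size 64 --dropout 0.3 --use_batch_norm true \
#       --max_epochs 10 --output_dir runs/exp1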
def main(args):
"""Train tokenizer, model and save them to a directory
args should __only__ be used in this function or passed to a hyperparameter logger.
Never propagate args further into your code - it causes complicated and tightly connected interfaces
that are easy to modify, but impossible to read and use outside the main file.
"""
if args.output_dir is not None and os.path.exists(args.output_dir):
raise ValueError(f"output_dir {args.output_dir} already exists")
# Initialize wandb as soon as possible to log all stdout to the cloud
wandb.init(config=args)
device = args.device
# TASK 2.1: if device is not specified, set it to "cuda" if torch.cuda.is_available()
# if cuda is not available, set device to "cpu"
# Our implementation is 2 lines
# YOUR CODE STARTS
if not device:
device = "cuda" if torch.cuda.is_available() else "cpu"
# YOUR CODE ENDS
_device_description = "CPU" if device == "cpu" else "GPU"
logger.info(f"Using {_device_description} for training")
# Create dataset objects
logger.info("Loading dataset")
text_dataset = datasets.load_dataset("imdb")
train_texts = text_dataset["train"]["text"]
train_labels = text_dataset["train"]["label"]
tokenizer = utils.make_whitespace_tokenizer(
train_texts, max_vocab_size=args.max_vocab_size
)
train_dataset = data_utils.CountDataset(
train_texts,
tokenizer=tokenizer,
labels=train_labels,
)
test_dataset = data_utils.CountDataset(
text_dataset["test"]["text"], tokenizer, text_dataset["test"]["label"]
)
# It is very important to shuffle the training set
dataloader = torch.utils.data.DataLoader(
train_dataset, batch_size=args.batch_size, shuffle=True
)
test_dataloader = torch.utils.data.DataLoader(
test_dataset, batch_size=args.batch_size, shuffle=False
)
# Create model and optimizer
input_size = tokenizer.get_vocab_size()
model = FcnBinaryClassifier(
input_size=input_size,
hidden_size=args.hidden_size,
dropout_prob=args.dropout,
use_batch_norm=args.use_batch_norm,
)
model = model.to(device)
wandb.watch(model)
# TASK 2.2: Create AdamW optimizer (not Adam)
# and provide learning rate and weight decay parameters to it
# Our implementation is 1 line
# YOUR CODE STARTS
optimizer = torch.optim.AdamW(
model.parameters(), lr=args.lr, weight_decay=args.weight_decay
)
# YOUR CODE ENDS
# Initialize current best accuracy as 0 for early stopping
best_acc = 0
epochs_without_improvement = (
0 # training stops when this is larger than args.early_stopping
)
# if args.output_dir is specified, create it and save args as a toml file
# toml is a more flexible, readable and error-prone alternative to yaml and json
if args.output_dir is not None:
os.makedirs(args.output_dir)
with open(os.path.join(args.output_dir, "args.toml"), "w") as f:
toml.dump(vars(args), f)
tokenizer.save(os.path.join(args.output_dir, "tokenizer.json"))
logger.info("Starting training")
for _ in tqdm(range(args.max_epochs), desc="Epochs"):
for x, y in dataloader:
# TASK 2.3a: Define the training loop
            # 1. Move x and y to the device you are using for training
            # 2. Get class probabilities using model
            # 3. Calculate loss using F.binary_cross_entropy
            # 4. Zero out the cached gradients from the previous iteration
            # 5. Backpropagate the loss
            # 6. Update the parameters
# Our implementation is 7 lines
# YOUR CODE STARTS
x = x.to(device)
y = y.to(device)
probs = model(x)
loss = F.binary_cross_entropy(probs, y)
            # Gradients accumulate across backward() calls in PyTorch, so clear
            # them before computing this batch's gradients and stepping.
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
# YOUR CODE ENDS
wandb.log(
{
"train_acc": utils.accuracy(probs, y),
"train_loss": loss,
}
)
# Task 2.3b: Evaluate the model on the test set
# Use utils.evaluate_model to get it and wandb.log to log it as "test_acc"
# Our implementation is 2 lines
# YOUR CODE STARTS
        test_acc = utils.evaluate_model(model, test_dataloader, device=device)
wandb.log({"test_acc": test_acc})
# YOUR CODE ENDS
# TASK 2.4: if output_dir is provided and test accuracy is better than the current best accuracy
# save the model to output_dir/model_checkpoint.pt
# use os.path.join to write code transferable between Linux/Mac and Windows
# extract save model.state_dict() using torch.save
# set epochs_without_improvement to zero.
# Remember to update best_acc even if output_dir is not provided.
# Stop training (use break) if epochs_without_improvement > early_stopping
# Before that use the logger.info to indicate that the training stopped early.
# Our implementation is 12 lines
# YOUR CODE STARTS
if test_acc >= best_acc:
if args.output_dir:
torch.save(
model.state_dict(),
os.path.join(args.output_dir, "model_checkpoint.pt"),
)
best_acc = test_acc
epochs_without_improvement = 0
else:
epochs_without_improvement += 1
if epochs_without_improvement > args.early_stopping:
logger.info(
f"Stopping training early. {epochs_without_improvement} have passed without improvement, which has crossed the threshold of {args.early_stopping}"
)
break
# YOUR CODE ENDS
# Log the best accuracy as a summary so that wandb would use it instead of the final value
wandb.run.summary["test_acc"] = best_acc
logger.info("Training is finished!")
if __name__ == "__main__":
args = parse_args()
main(args)
| 36.27451 | 162 | 0.647027 | 1,231 | 9,250 | 4.734362 | 0.300569 | 0.023164 | 0.035003 | 0.010295 | 0.103638 | 0.054736 | 0.033974 | 0.013384 | 0 | 0 | 0 | 0.007746 | 0.260324 | 9,250 | 254 | 163 | 36.417323 | 0.844051 | 0.339676 | 0 | 0 | 0 | 0.007463 | 0.185037 | 0.015461 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014925 | false | 0.014925 | 0.089552 | 0 | 0.11194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf483c36d559d50ef56df32e2b8c8288a4ddb79b | 7,436 | py | Python | src/profile.py | SimonPerche/PersonalitiesWars | 495803a5be5e9fde572c3f39086d8a3510c75f58 | [
"MIT"
] | null | null | null | src/profile.py | SimonPerche/PersonalitiesWars | 495803a5be5e9fde572c3f39086d8a3510c75f58 | [
"MIT"
] | null | null | null | src/profile.py | SimonPerche/PersonalitiesWars | 495803a5be5e9fde572c3f39086d8a3510c75f58 | [
"MIT"
] | 1 | 2022-03-08T22:07:50.000Z | 2022-03-08T22:07:50.000Z | from datetime import datetime, timedelta
import asyncio
import math
from collections import defaultdict
import discord
from discord.ext import commands, pages
from discord.commands import slash_command, Option
from database import DatabaseDeck, DatabasePersonality
from roll import min_until_next_claim
import utils
class Profile(commands.Cog):
def __init__(self, bot):
"""Initial the cog with the bot."""
self.bot = bot
#### Commands ####
@slash_command(aliases=['pr'], description='Show the user profile or yours if no user given.',
guild_ids=utils.get_authorized_guild_ids())
async def profile(self, ctx, member: Option(discord.Member, required=False, default=None)):
profile_owner = member or ctx.author
id_perso_profile = DatabaseDeck.get().get_id_perso_profile(ctx.guild.id, profile_owner.id)
image = profile_owner.avatar.url if profile_owner.avatar else None
if id_perso_profile:
current_image = DatabaseDeck.get().get_perso_current_image(ctx.guild.id, id_perso_profile)
perso = DatabasePersonality.get().get_perso_information(id_perso_profile)
# Show profile's perso only if user owns the personality (might not be the case with trade, give, discard)
owner = DatabaseDeck.get().perso_belongs_to(ctx.guild.id, perso['id'])
if owner and owner == profile_owner.id and current_image:
image = current_image
ids_deck = DatabaseDeck.get().get_user_deck(ctx.guild.id, profile_owner.id)
groups_count = defaultdict(int) # Default value of 0
personalities = DatabasePersonality.get().get_multiple_perso_information(ids_deck)
if personalities:
for perso in personalities:
groups_count[perso["group"]] += 1
# Keep only the 10 most popular groups
groups = sorted(groups_count.items(), key=lambda item: item[1], reverse=True)[:10]
# Badges
owned_badges = []
badges = DatabaseDeck.get().get_all_badges_with_perso(ctx.guild.id)
for badge_name in badges:
if all(id_perso in ids_deck for id_perso in badges[badge_name]):
owned_badges.append(badge_name)
        badges_embed_msg = 'You don\'t own any badges...'
if owned_badges:
badges_embed_msg = '\n'.join(owned_badges)
embed = discord.Embed(
title=f'Profile of {profile_owner.name if profile_owner.nick is None else profile_owner.nick}', type='rich')
embed.description = f'You own {len(ids_deck)} personalit{"ies" if len(ids_deck) > 1 else "y"}!'
embed.add_field(name='Badges', value=badges_embed_msg)
if groups:
embed.add_field(name='Most owned groups',
value='\n'.join([f'*{group[0].capitalize()}* ({group[1]})' for group in groups]))
if image:
embed.set_thumbnail(url=image)
await ctx.respond(embed=embed)
@slash_command(description='Show the user deck or yours if no user given.',
guild_ids=utils.get_authorized_guild_ids())
async def deck(self, ctx, member: Option(discord.Member, required=False, default=None)):
deck_owner = member or ctx.author
ids_deck = DatabaseDeck.get().get_user_deck(ctx.guild.id, deck_owner.id)
persos_text = []
personalities = DatabasePersonality.get().get_multiple_perso_information(ids_deck)
if personalities:
for perso in personalities:
persos_text.append(f'**{perso["name"]}** *{perso["group"]}*')
persos_text.sort()
nb_per_page = 20
persos_pages = []
for i in range(0, len(persos_text), nb_per_page):
embed = discord.Embed(title=deck_owner.name if deck_owner.nick is None else deck_owner.nick,
description='\n'.join([perso for perso in persos_text[i:i + nb_per_page]]))
if deck_owner.avatar:
embed.set_thumbnail(url=deck_owner.avatar.url)
persos_pages.append(embed)
paginator = pages.Paginator(pages=persos_pages, show_disabled=True, show_indicator=True)
await paginator.send(ctx)
    @slash_command(description='Set the personality displayed on your profile.\n'
                               'You can leave name blank to remove the current personality.',
guild_ids=utils.get_authorized_guild_ids())
    async def set_perso_profile(self, ctx, name: Option(str, 'Pick a name or write yours',
                                                        autocomplete=utils.deck_name_searcher,
                                                        required=False, default=None),
group: Option(str, 'Pick a group or write yours',
autocomplete=utils.personalities_group_searcher, required=False,
default=None)):
if name is None:
DatabaseDeck.get().set_id_perso_profile(ctx.guild.id, ctx.author.id, None)
await ctx.respond('I removed your profile\'s personality.')
return
name = name.strip()
if group:
group = group.strip()
if group:
id_perso = DatabasePersonality.get().get_perso_group_id(name, group)
else:
id_perso = DatabasePersonality.get().get_perso_id(name)
if not id_perso:
await ctx.respond(f'Personality **{name}**{" from *" + group + "* " if group else ""} not found.')
return
owner = DatabaseDeck.get().perso_belongs_to(ctx.guild.id, id_perso)
if not owner or owner != ctx.author.id:
await ctx.respond(f'You don\'t own **{name}**{" from *" + group + "* " if group else ""}...')
return None
DatabaseDeck.get().set_id_perso_profile(ctx.guild.id, ctx.author.id, id_perso)
await ctx.respond(f'Set your perso profile to {name} {group if group else ""}')
@slash_command(description='Show time before next rolls and claim reset.',
guild_ids=utils.get_authorized_guild_ids())
async def time(self, ctx):
next_claim = min_until_next_claim(ctx.guild.id, ctx.author.id)
username = ctx.author.name if ctx.author.nick is None else ctx.author.nick
msg = f'{username}, you '
if next_claim == 0:
msg += 'can claim right now!'
else:
time = divmod(next_claim, 60)
msg += f'can\'t claim yet. ' \
f'Ready **<t:{int((datetime.now() + timedelta(minutes=next_claim)).timestamp())}:R>**.'
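            # <t:UNIX:R> is Discord's timestamp markup; clients render it as a
            # relative time such as "in 42 minutes".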
user_nb_rolls = DatabaseDeck.get().get_nb_rolls(ctx.guild.id, ctx.author.id)
max_rolls = DatabaseDeck.get().get_rolls_per_hour(ctx.guild.id)
last_roll = DatabaseDeck.get().get_last_roll(ctx.guild.id, ctx.author.id)
if not last_roll:
user_nb_rolls = 0
else:
last_roll = datetime.strptime(last_roll, '%Y-%m-%d %H:%M:%S')
now = datetime.now()
# If a new hour began
if now.date() != last_roll.date() or (now.date() == last_roll.date() and now.hour != last_roll.hour):
user_nb_rolls = 0
msg += f'\nYou have **{max_rolls - user_nb_rolls}** rolls left.\n' \
f'Next rolls reset **<t:{int((datetime.now().replace(minute=0) + timedelta(hours=1)).timestamp())}:R>**.'
await ctx.respond(msg)
| 44.261905 | 120 | 0.620764 | 966 | 7,436 | 4.593168 | 0.200828 | 0.023665 | 0.029299 | 0.01465 | 0.302457 | 0.249042 | 0.18526 | 0.18526 | 0.18526 | 0.146495 | 0 | 0.003676 | 0.268424 | 7,436 | 167 | 121 | 44.526946 | 0.811949 | 0.030662 | 0 | 0.153226 | 0 | 0.040323 | 0.145202 | 0.025313 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008065 | false | 0 | 0.080645 | 0 | 0.120968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf4a971868a5db584bf5e20d4c62c91c74f32e96 | 271 | py | Python | aids/strings/is_palindrome.py | ueg1990/aids | bb543c6f53983d59edbc6a522ca10d64efd9c42e | [
"MIT"
] | null | null | null | aids/strings/is_palindrome.py | ueg1990/aids | bb543c6f53983d59edbc6a522ca10d64efd9c42e | [
"MIT"
] | null | null | null | aids/strings/is_palindrome.py | ueg1990/aids | bb543c6f53983d59edbc6a522ca10d64efd9c42e | [
"MIT"
] | null | null | null | '''
In this module, we determine if a given string is a palindrome
'''
def is_palindrome(string):
'''
Return True if given string is a palindrome
'''
if len(string) < 2:
return True
if string[0] == string[-1]:
return is_palindrome(string[1:-1])
return False
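

# A minimal usage sketch (hypothetical driver, not part of the original module):
if __name__ == '__main__':
    assert is_palindrome('racecar')
    assert is_palindrome('abba')
    assert not is_palindrome('python')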
| 16.9375 | 62 | 0.678967 | 43 | 271 | 4.232558 | 0.44186 | 0.120879 | 0.142857 | 0.153846 | 0.263736 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022936 | 0.195572 | 271 | 15 | 63 | 18.066667 | 0.811927 | 0.391144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0 | 0.666667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf4dc6fb0422c61d631abfb411ae82187b6217d2 | 3,433 | py | Python | Chapter08/python/ab-env/lib/python3.8/site-packages/numpy-1.16.4-py3.8-macosx-10.16-x86_64.egg/numpy/core/_dtype_ctypes.py | PacktPublishing/Supercharge-Your-Applications-with-GraalVM | bfb068e445f0325be9c7d526b6e07324dff9d1d2 | [
"MIT"
] | 9 | 2021-06-27T07:22:14.000Z | 2022-02-25T18:05:01.000Z | Chapter08/python/ab-env/lib/python3.8/site-packages/numpy-1.16.4-py3.8-macosx-10.16-x86_64.egg/numpy/core/_dtype_ctypes.py | PacktPublishing/Supercharge-Your-Applications-with-GraalVM | bfb068e445f0325be9c7d526b6e07324dff9d1d2 | [
"MIT"
] | null | null | null | Chapter08/python/ab-env/lib/python3.8/site-packages/numpy-1.16.4-py3.8-macosx-10.16-x86_64.egg/numpy/core/_dtype_ctypes.py | PacktPublishing/Supercharge-Your-Applications-with-GraalVM | bfb068e445f0325be9c7d526b6e07324dff9d1d2 | [
"MIT"
] | 8 | 2021-05-28T15:45:12.000Z | 2022-02-01T10:21:37.000Z | """
Conversion from ctypes to dtype.
In an ideal world, we could achieve this through the PEP3118 buffer protocol,
something like::
def dtype_from_ctypes_type(t):
# needed to ensure that the shape of `t` is within memoryview.format
class DummyStruct(ctypes.Structure):
_fields_ = [('a', t)]
# empty to avoid memory allocation
ctype_0 = (DummyStruct * 0)()
mv = memoryview(ctype_0)
# convert the struct, and slice back out the field
return _dtype_from_pep3118(mv.format)['a']
Unfortunately, this fails because:
* ctypes cannot handle length-0 arrays with PEP3118 (bpo-32782)
* PEP3118 cannot represent unions, but both numpy and ctypes can
* ctypes cannot handle big-endian structs with PEP3118 (bpo-32780)
"""
import _ctypes
import ctypes
import numpy as np
def _from_ctypes_array(t):
return np.dtype((dtype_from_ctypes_type(t._type_), (t._length_,)))
def _from_ctypes_structure(t):
for item in t._fields_:
if len(item) > 2:
raise TypeError(
"ctypes bitfields have no dtype equivalent")
if hasattr(t, "_pack_"):
formats = []
offsets = []
names = []
current_offset = 0
for fname, ftyp in t._fields_:
names.append(fname)
formats.append(dtype_from_ctypes_type(ftyp))
            # Each type has a default offset; this is platform dependent for some types.
effective_pack = min(t._pack_, ctypes.alignment(ftyp))
current_offset = ((current_offset + effective_pack - 1) // effective_pack) * effective_pack
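            # e.g., with current_offset=5 and effective_pack=4 this rounds the
            # offset up to the next multiple of the alignment: ((5 + 3) // 4) * 4 == 8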
offsets.append(current_offset)
current_offset += ctypes.sizeof(ftyp)
return np.dtype(dict(
formats=formats,
offsets=offsets,
names=names,
itemsize=ctypes.sizeof(t)))
else:
fields = []
for fname, ftyp in t._fields_:
fields.append((fname, dtype_from_ctypes_type(ftyp)))
# by default, ctypes structs are aligned
return np.dtype(fields, align=True)
def _from_ctypes_scalar(t):
"""
Return the dtype type with endianness included if it's the case
"""
if getattr(t, '__ctype_be__', None) is t:
return np.dtype('>' + t._type_)
elif getattr(t, '__ctype_le__', None) is t:
return np.dtype('<' + t._type_)
else:
return np.dtype(t._type_)
def _from_ctypes_union(t):
formats = []
offsets = []
names = []
for fname, ftyp in t._fields_:
names.append(fname)
formats.append(dtype_from_ctypes_type(ftyp))
offsets.append(0) # Union fields are offset to 0
return np.dtype(dict(
formats=formats,
offsets=offsets,
names=names,
itemsize=ctypes.sizeof(t)))
def dtype_from_ctypes_type(t):
"""
Construct a dtype object from a ctypes type
"""
if issubclass(t, _ctypes.Array):
return _from_ctypes_array(t)
elif issubclass(t, _ctypes._Pointer):
raise TypeError("ctypes pointers have no dtype equivalent")
elif issubclass(t, _ctypes.Structure):
return _from_ctypes_structure(t)
elif issubclass(t, _ctypes.Union):
return _from_ctypes_union(t)
elif isinstance(getattr(t, '_type_', None), str):
return _from_ctypes_scalar(t)
else:
raise NotImplementedError(
"Unknown ctypes type {}".format(t.__name__))
| 30.380531 | 103 | 0.633848 | 435 | 3,433 | 4.770115 | 0.31954 | 0.072289 | 0.043855 | 0.05494 | 0.250602 | 0.2 | 0.167711 | 0.167711 | 0.143614 | 0.143614 | 0 | 0.015538 | 0.268861 | 3,433 | 112 | 104 | 30.651786 | 0.811155 | 0.30032 | 0 | 0.4 | 0 | 0 | 0.059695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.030769 | 0.015385 | 0.276923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf500d8b74ed4e30cef6a56fa9722244906f9406 | 2,202 | py | Python | tests/test_micromagnetic_zeeman.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 53 | 2016-02-27T09:40:21.000Z | 2022-01-19T21:37:44.000Z | tests/test_micromagnetic_zeeman.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 132 | 2016-02-26T13:18:58.000Z | 2021-12-01T21:52:42.000Z | tests/test_micromagnetic_zeeman.py | computationalmodelling/fidimag | 07a275c897a44ad1e0d7e8ef563f10345fdc2a6e | [
"BSD-2-Clause"
] | 32 | 2016-02-26T13:21:40.000Z | 2022-03-08T08:54:51.000Z | from fidimag.micro import Zeeman
from fidimag.common import CuboidMesh
from fidimag.micro import Sim
import numpy as np
def varying_field(pos):
return (1.2 * pos[0], 2.3 * pos[1], 0)
def test_H0_is_indexable_or_callable():
"""
Test that an exception is raised if H0 is not indexable, and that an
exception is not raised if H0 is indexable.
"""
# Test for some different accepted types.
inputSuccess = ([0., 0., 1.],
np.array([0., 0., 1.]),
lambda x: x + 0.1)
for zS in inputSuccess:
Zeeman(zS)
# Test for different failing types. Should perhaps use a unittest.TestCase
# for testing to make this more elegant, but there's probably a reason why
# it's not used elsewhere.
inputFailures = [5., -7]
for zS in inputFailures:
try:
Zeeman(zS)
except ValueError:
pass
else:
raise Exception("Zeeman argument \"{}\" was expected to raise an "
"exception, but did not!."
.format(zS))
def test_zeeman():
mesh = CuboidMesh(nx=5, ny=2, nz=1)
sim = Sim(mesh)
sim.set_m((1, 0, 0))
zeeman = Zeeman(varying_field)
sim.add(zeeman)
field = zeeman.compute_field()
assert field[6] == 1.2 * (2 + 0.5)
assert field[7] == 2.3 * 0.5
def test_zeeman_energy():
mu0 = 4 * np.pi * 1e-7
# A system of 8 cells ( not using nm units)
mesh = CuboidMesh(dx=2, dy=2, dz=2,
nx=2, ny=2, nz=2
)
sim = Sim(mesh)
Ms = 1e5
sim.set_Ms(Ms)
sim.set_m((0, 0, 1))
H = 0.1 / mu0
zeeman = Zeeman((0, 0, H))
sim.add(zeeman)
field = zeeman.compute_field()
zf = sim.get_interaction('Zeeman')
# -> ->
# Expected energy: Int ( -mu0 M * H ) dV
# Since we have 8 cells with the same M, we just sum their contrib
exp_energy = 8 * (-mu0 * H * Ms * mesh.dx * mesh.dy * mesh.dz)
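    # Sanity check of the arithmetic: mu0 * H = 0.1 by construction, each cell
    # has volume dx * dy * dz = 8, so E = -0.1 * 1e5 * 8 = -8e4 per cell and
    # -6.4e5 in total over the 8 cells.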
assert np.abs(zf.compute_energy() - exp_energy) < 1e-10
if __name__ == "__main__":
test_zeeman()
test_H0_is_indexable_or_callable()
test_zeeman_energy()
| 25.604651 | 78 | 0.560854 | 319 | 2,202 | 3.761755 | 0.401254 | 0.008333 | 0.0325 | 0.036667 | 0.11 | 0.11 | 0.11 | 0 | 0 | 0 | 0 | 0.044325 | 0.323797 | 2,202 | 85 | 79 | 25.905882 | 0.761585 | 0.232062 | 0 | 0.156863 | 0 | 0 | 0.048707 | 0 | 0 | 0 | 0 | 0 | 0.058824 | 1 | 0.078431 | false | 0.019608 | 0.078431 | 0.019608 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf50585745c7b40989b43625db650caccd9e042a | 13,058 | py | Python | rule_learner_both_classes.py | mgbarsky/classification_rules | 699969b87bd7a9080a7e937025fd26398c11a60d | [
"MIT"
] | null | null | null | rule_learner_both_classes.py | mgbarsky/classification_rules | 699969b87bd7a9080a7e937025fd26398c11a60d | [
"MIT"
] | null | null | null | rule_learner_both_classes.py | mgbarsky/classification_rules | 699969b87bd7a9080a7e937025fd26398c11a60d | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
class Rule:
def __init__(self, class_label):
self.conditions = [] # list of conditions
self.class_label = class_label # rule class
def add_condition(self, condition):
self.conditions.append(condition)
def set_params(self, accuracy, coverage):
self.accuracy = accuracy
self.coverage = coverage
def to_filter(self):
result = ""
for cond in self.conditions:
result += cond.to_filter() + " & "
result += "(current_data[columns[-1]] == class_label)"
return result
def to_filter_no_class(self):
result = ""
for cond in self.conditions:
result += cond.to_filter() + " & "
result += "True"
return result
def __repr__(self):
return "If {} then {}. Coverage:{}, accuracy: {}".format(self.conditions, self.class_label,
self.coverage, self.accuracy)
class Condition:
def __init__(self, attribute, value, true_false = None):
self.attribute = attribute
self.value = value
self.true_false = true_false
def to_filter(self):
result = ""
if self is None:
return result
if self.true_false is None:
result += '(current_data["' + self.attribute + '"]' + "==" + '"' + self.value + '")'
elif self.true_false:
result += '(current_data["' + self.attribute + '"]' + ">=" + str(self.value) + ")"
else:
result += '(current_data["' + self.attribute + '"]' + "<" + str(self.value) + ")"
return result
def __repr__(self):
if self.true_false is None:
return "{}={}".format(self.attribute, self.value)
else:
if self.true_false:
return "{}>={}".format(self.attribute, self.value)
else:
return "{}<{}".format(self.attribute, self.value)
def filter_for_list(condition_list):
result = ""
for cond in condition_list:
result += cond.to_filter() + " & "
result += "True"
return result
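
# For illustration (hypothetical values): a list holding the conditions
# Sex=female and Age>=26.0 yields the string
#   '(current_data["Sex"]=="female") & (current_data["Age"]>=26.0) & True'
# which is later evaluated with eval() against a DataFrame named current_data.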
def get_best_condition(columns, current_data, prev_conditions, class_labels, min_coverage=30, prev_best_accuracy=0):
used_attributes = [x.attribute for x in prev_conditions]
best_accuracy = prev_best_accuracy
best_coverage = None
best_col = None
best_val = None
best_true_false = None
best_class_label = None
for class_label in class_labels:
# we iterate over all attributes except the class - which is in the last column
for col in columns[:-1]:
# we do not use the same column in one rule
if col in used_attributes:
continue
# Extract unique values from the column
unique_vals = current_data[col].unique().tolist()
# Consider each unique value in turn
# The treatment is different for numeric and categorical attributes
for val in unique_vals:
if isinstance(val, int) or isinstance(val, float):
# Here we construct 2 conditions:
# if actual value >= val or if actual value < val
# First if actual value >= val
# construct new set of conditions by adding a new condition
new_conditions = prev_conditions.copy()
current_cond = Condition(col, val, True)
new_conditions.append(current_cond)
# create a filtering condition
filter = filter_for_list(new_conditions)
# total covered by current condition
total_covered = len(current_data[eval(filter)])
if total_covered >= min_coverage:
# total with this condition and a given class
total_correct = len(current_data[(current_data[columns[-1]] == class_label) & eval(filter)])
acc = total_correct/total_covered
if acc > best_accuracy or (acc == best_accuracy and
(best_coverage is None or total_covered > best_coverage)):
best_accuracy = acc
best_coverage = total_covered
best_col = col
best_val = val
best_true_false = True
best_class_label = class_label
# now repeat the same for the case - if actual value < val
# construct new set of conditions by adding a new condition
new_conditions = prev_conditions.copy()
current_cond = Condition(col, val, False)
new_conditions.append(current_cond)
# create a filtering condition
filter = filter_for_list(new_conditions)
# total covered by current condition
total_covered = len(current_data[eval(filter)])
if total_covered >= min_coverage:
# total with this condition and a given class
total_correct = len(current_data[(current_data[columns[-1]] == class_label) & eval(filter)])
acc = total_correct / total_covered
if acc > best_accuracy or (acc == best_accuracy and
(best_coverage is None or total_covered > best_coverage)):
best_accuracy = acc
best_coverage = total_covered
best_col = col
best_val = val
best_true_false = False
best_class_label = class_label
else: # categorical attribute
# For categorical attributes - this is just single condition if actual value == val
new_conditions = prev_conditions.copy()
current_cond = Condition(col, val)
new_conditions.append(current_cond)
# create a filtering condition
filter = filter_for_list(new_conditions)
# total covered by current condition
total_covered = len(current_data[eval(filter)])
if total_covered >= min_coverage:
# total with this condition and a given class
total_correct = len(current_data[(current_data[columns[-1]] == class_label) & eval(filter)])
acc = total_correct / total_covered
if acc > best_accuracy or (acc == best_accuracy and
(best_coverage is None or total_covered > best_coverage)):
best_accuracy = acc
best_coverage = total_covered
best_col = col
best_val = val
best_true_false = None
best_class_label = class_label
if best_col is None:
return None
    return (best_class_label, Condition(best_col, best_val, best_true_false))
def learn_one_rule(columns, current_data, class_labels,
min_coverage=30):
tuple = get_best_condition(columns, current_data, [], class_labels, min_coverage)
if tuple is None:
return None
class_label, best_condition = tuple
# start with creating a new Rule with a single best condition
current_rule = Rule(class_label)
current_rule.add_condition(best_condition)
# create a filtering condition
filter = current_rule.to_filter_no_class()
# total covered by current condition
total_covered = len(current_data[eval(filter)])
# total with this condition and a given class
total_correct = len(current_data[(current_data[columns[-1]] == class_label) & eval(filter)])
current_accuracy = total_correct / total_covered
current_rule.set_params(current_accuracy, total_covered )
if total_covered < min_coverage:
return None
if current_accuracy == 1.0:
return current_rule
# repeatedly try to improve Rule's accuracy as long as coverage remains sufficient
while True:
tuple = get_best_condition(columns, current_data, current_rule.conditions,
class_labels, min_coverage, current_accuracy)
if tuple is None:
return current_rule
class_label, best_condition = tuple
new_rule = Rule(class_label)
for cond in current_rule.conditions:
new_rule.add_condition(cond)
new_rule.add_condition(best_condition)
# create a filtering condition
filter = new_rule.to_filter_no_class()
# total covered by current condition
total_covered = len(current_data[eval(filter)])
if total_covered < min_coverage:
return current_rule # return previous rule
# total with this condition and a given class
total_correct = len(current_data[(current_data[columns[-1]] == class_label) & eval(filter)])
new_accuracy = total_correct / total_covered
new_rule.set_params(new_accuracy, total_covered)
if new_accuracy == 1:
return new_rule
        # Tighten the improvement threshold for the next iteration; otherwise
        # later conditions could be accepted against a stale, lower accuracy.
        current_accuracy = new_accuracy
        current_rule = new_rule
return current_rule
def learn_rules(columns, data, classes=None,
min_coverage=30, min_accuracy=0.6):
# List of final rules
rules = []
# If list of classes of interest is not provided - it is extracted from the last column of data
if classes is not None:
class_labels = classes
else:
class_labels = data[columns[-1]].unique().tolist()
current_data = data.copy()
    # The original PRISM algorithm processes each class in turn. Because
    # high-accuracy rules are disjoint with respect to class label, the order
    # is not a problem when we are only interested in the rules themselves.
    # For classification, however, the order in which rules are discovered
    # matters, so this variant considers all classes at the same time when
    # searching for the best condition, as shown in the lecture examples.
done = False
while len(current_data) >= min_coverage and not done:
# Learn a rule with a single condition
rule = learn_one_rule(columns, current_data, class_labels, min_coverage)
# The best rule does not pass the coverage threshold - we are done with this class
if rule is None:
break
# If we get the rule with coverage above threshold
# We check if it passes accuracy threshold
if rule.accuracy >= min_accuracy:
rules.append(rule)
# remove rows covered by this rule
# we have to remove the rows where all of the conditions hold
# create a filtering condition
filter = rule.to_filter_no_class()
current_data = current_data.drop(current_data[eval(filter)].index)
else:
done = True
return rules
if __name__ == "__main__":
data_file = "titanic.csv"
data = pd.read_csv(data_file)
# take a subset of attributes
data = data[['Pclass', 'Sex', 'Age', 'Survived']]
# drop all columns and rows with missing values
data = data.dropna(how="any")
print("Total rows", len(data))
column_list = data.columns.to_numpy().tolist()
print("Columns:", column_list)
# we can set different accuracy thresholds
# here we can reorder class labels - to first learn the rules with class label "survived".
rules = learn_rules(column_list, data, [1, 0], 30, 0.6)
from operator import attrgetter
    # sort rules by accuracy, then coverage, descending
rules.sort(key=attrgetter('accuracy', 'coverage'), reverse=True)
for rule in rules[:10]:
print(rule)
'''
Total rows 714
Columns: ['Pclass', 'Sex', 'Age', 'Survived']
If [Pclass<2, Sex=female, Age>=26.0] then 1. Coverage:38, accuracy: 1.0
If [Age<25.0, Pclass<3, Sex=female] then 1. Coverage:48, accuracy: 0.9791666666666666
If [Sex=male, Pclass>=3, Age>=33.0] then 0. Coverage:59, accuracy: 0.9491525423728814
If [Sex=male, Pclass>=2, Age>=32.5] then 0. Coverage:31, accuracy: 0.9354838709677419
If [Sex=male, Age>=54.0, Pclass>=1] then 0. Coverage:37, accuracy: 0.8918918918918919
If [Sex=male, Pclass>=2, Age<29.0] then 0. Coverage:52, accuracy: 0.8653846153846154
If [Sex=male, Age<25.0, Pclass>=1] then 0. Coverage:33, accuracy: 0.8484848484848485
If [Sex=male, Pclass>=3, Age<25.0] then 0. Coverage:118, accuracy: 0.847457627118644
If [Age<6.0, Pclass>=1] then 1. Coverage:31, accuracy: 0.8387096774193549
If [Age>=48.0, Pclass<3] then 1. Coverage:39, accuracy: 0.8205128205128205''' | 38.519174 | 116 | 0.593736 | 1,555 | 13,058 | 4.796141 | 0.151768 | 0.044248 | 0.020649 | 0.015286 | 0.469697 | 0.412443 | 0.373022 | 0.342853 | 0.33159 | 0.323277 | 0 | 0.029144 | 0.327309 | 13,058 | 339 | 117 | 38.519174 | 0.8199 | 0.189998 | 0 | 0.458763 | 0 | 0 | 0.02616 | 0.002699 | 0.015464 | 0 | 0 | 0 | 0 | 1 | 0.06701 | false | 0 | 0.015464 | 0.005155 | 0.190722 | 0.015464 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf5251ba997fd509524b5ed305550da937b3de70 | 5,314 | py | Python | packager/rpm/build.py | csdms/packagebuilder | a72f1d264d9219acfb422864fbcd57dfd6cfd51b | [
"MIT"
] | null | null | null | packager/rpm/build.py | csdms/packagebuilder | a72f1d264d9219acfb422864fbcd57dfd6cfd51b | [
"MIT"
] | null | null | null | packager/rpm/build.py | csdms/packagebuilder | a72f1d264d9219acfb422864fbcd57dfd6cfd51b | [
"MIT"
] | null | null | null | #! /usr/bin/env python
#
# Builds binary and source RPMs for a CSDMS model or tool.
#
# Create the executable script `build_rpm` with:
# $ cd path/to/packagebuilder
# $ sudo python setup.py install
#
# Examples:
# $ build_rpm --help
# $ build_rpm --version
# $ build_rpm hydrotrend
# $ build_rpm babel --tag 1.4.0
# $ build_rpm cem --tag 0.2 --quiet
# $ build_rpm hydrotrend --local $HOME/rpm_models
# $ build_rpm babel --prefix /usr/local/csdms
#
# Mark Piper (mark.piper@colorado.edu)
import sys, os, shutil
from subprocess import call
import glob
import shlex
from packager.core.module import Module
from packager.core.flavor import debian_check
class BuildRPM(object):
'''
Uses `rpmbuild` to build a CSDMS model or tool into an RPM.
'''
def __init__(self, name, version, local_dir, prefix, quiet):
self.is_debian = debian_check()
self.is_quiet = " --quiet " if quiet else " "
self.install_prefix = "/usr/local" if prefix is None else prefix
# Get the model or tool and its spec file.
self.module = Module(name, version, local_dir)
self.spec_file = os.path.join(self.module.location, \
self.module.name + ".spec")
# Set up the local rpmbuild directory.
self.rpmbuild = os.path.join(os.getenv("HOME"), "rpmbuild", "")
self.prep_directory()
# Download the module's source code and make a tarball.
self.tarball = self.module.get_source()
# Copy module files to the rpmbuild directory.
self.prep_files()
# Build the binary and source RPMs.
self.build()
self.cleanup()
print("Success!")
def prep_directory(self):
'''
Prepares the RPM build directory `~/rpmbuild`. Sets up member
variables for paths in the build directory.
'''
print("Setting up rpmbuild directory structure.")
if os.path.isdir(self.rpmbuild):
shutil.rmtree(self.rpmbuild)
subdirectories = ["BUILD","BUILDROOT","RPMS","SOURCES","SPECS","SRPMS"]
for dname in subdirectories:
os.makedirs(os.path.join(self.rpmbuild, dname))
self.sources_dir = os.path.join(self.rpmbuild, "SOURCES", "")
self.specs_dir = os.path.join(self.rpmbuild, "SPECS", "")
def prep_files(self):
'''
Copies source tarball, spec file, patches (if any) and scripts
(if any) for the build process. Patches must use the extension
".patch", scripts must use the extension ".sh" or ".py".
'''
print("Copying module files.")
shutil.copy(self.spec_file, self.specs_dir)
shutil.copy(self.tarball, self.sources_dir)
for patch in glob.glob(os.path.join(self.module.location, "*.patch")):
shutil.copy(patch, self.sources_dir)
for script in glob.glob(os.path.join(self.module.location, "*.sh")):
shutil.copy(script, self.sources_dir)
for script in glob.glob(os.path.join(self.module.location, "*.py")):
shutil.copy(script, self.sources_dir)
def build(self):
'''
Builds binary and source RPMS for the module.
'''
print("Building RPMs.")
cmd = "rpmbuild -ba" + self.is_quiet \
+ os.path.join(self.specs_dir, os.path.basename(self.spec_file)) \
+ " --define '_prefix " + self.install_prefix + "'" \
+ " --define '_version " + self.module.version + "'"
if not self.is_debian:
cmd += " --define '_buildrequires " + self.module.dependencies + "'"
print(cmd)
ret = call(shlex.split(cmd))
if ret != 0:
print("Error in building module RPM.")
sys.exit(2) # can't build RPM
def cleanup(self):
'''
Deletes the directory used to store the downloaded archives from
the rpm_models and rpm_tools repos.
'''
self.module.cleanup()
#-----------------------------------------------------------------------------
def main():
'''
Accepts command-line arguments and passes them to an instance of BuildRPM.
'''
import argparse
from packager import __version__
# Allow only Linuxen.
if not sys.platform.startswith('linux'):
print("Error: this OS is not supported.")
sys.exit(1) # not Linux
parser = argparse.ArgumentParser(
description="Builds a CSDMS model or tool into an RPM.")
parser.add_argument("module_name",
help="the name of the model or tool to build")
parser.add_argument("--local",
help="use LOCAL path to the module files")
parser.add_argument("--prefix",
help="use PREFIX as install path for RPM [/usr/local]")
parser.add_argument("--tag",
help="build TAG version of the module [head]")
parser.add_argument("--quiet", action="store_true",
help="provide less detailed output [verbose]")
parser.add_argument('--version', action='version',
version='build_rpm ' + __version__)
args = parser.parse_args()
BuildRPM(args.module_name, args.tag, args.local, args.prefix, args.quiet)
if __name__ == "__main__":
main()
| 36.902778 | 80 | 0.59936 | 661 | 5,314 | 4.711044 | 0.279879 | 0.021195 | 0.028902 | 0.035967 | 0.147078 | 0.125883 | 0.06808 | 0.06808 | 0.051381 | 0.039178 | 0 | 0.002057 | 0.26816 | 5,314 | 143 | 81 | 37.160839 | 0.798663 | 0.258939 | 0 | 0.025316 | 0 | 0 | 0.171946 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075949 | false | 0 | 0.101266 | 0 | 0.189873 | 0.088608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf528e1ce597b280628a646ef42b416b3143745b | 1,094 | py | Python | setup.py | dwhall/sx127x_ahsm | 71605ddb218636cb86f628441c2f1aee904bd271 | [
"MIT"
] | 1 | 2019-09-07T08:59:41.000Z | 2019-09-07T08:59:41.000Z | setup.py | dwhall/sx127x_ahsm | 71605ddb218636cb86f628441c2f1aee904bd271 | [
"MIT"
] | 1 | 2020-06-15T14:25:28.000Z | 2020-06-15T22:55:40.000Z | setup.py | dwhall/sx127x_ahsm | 71605ddb218636cb86f628441c2f1aee904bd271 | [
"MIT"
] | 1 | 2020-06-14T16:35:47.000Z | 2020-06-14T16:35:47.000Z | import setuptools
with open("README.md", "r") as fh:
long_description = fh.read()
setuptools.setup(
name="sx127x_ahsm",
version="0.1.0",
author="Dean Hall",
author_email="dwhall256@gmail.com",
description="A driver for the Semtech SX127X radio data modem.",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/dwhall/sx127x_ahsm",
packages=setuptools.find_packages(),
classifiers=[
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"License :: OSI Approved :: MIT License",
# This project is deprected
"Development Status :: 7 - Inactive",
# This project is designed to run on a Raspberry Pi
# with a SX127X LoRa radio attached via the SPI bus
"Operating System :: POSIX :: Linux",
"Topic :: System :: Hardware :: Hardware Drivers",
"Topic :: Communications :: Ham Radio",
],
)
| 33.151515 | 68 | 0.632541 | 128 | 1,094 | 5.328125 | 0.65625 | 0.087977 | 0.146628 | 0.152493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03253 | 0.241316 | 1,094 | 32 | 69 | 34.1875 | 0.789157 | 0.11426 | 0 | 0 | 0 | 0 | 0.507772 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.04 | 0 | 0.04 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf545cb8f22abd776b690122d22917eb5c3778ef | 5,756 | py | Python | Preprocessing/reversegeo.py | salathegroup/Semester_Project | 2de38eef4ae6b3c350f8b742021ff098ecb376c4 | [
"MIT"
] | null | null | null | Preprocessing/reversegeo.py | salathegroup/Semester_Project | 2de38eef4ae6b3c350f8b742021ff098ecb376c4 | [
"MIT"
] | 1 | 2018-02-20T15:25:22.000Z | 2018-02-20T15:25:22.000Z | Preprocessing/reversegeo.py | salathegroup/Semester_Project | 2de38eef4ae6b3c350f8b742021ff098ecb376c4 | [
"MIT"
] | 2 | 2017-11-07T09:12:11.000Z | 2019-04-12T16:07:40.000Z | import reverse_geocoder as rg
import csv
import multiprocessing as mp
import multiprocessing.pool
import glob
import re
mx_ca_us_state_abbrev = {
'Alabama': '1',
'Alaska': '2',
'Arizona': '3',
'Arkansas': '4',
'California': '5',
'Colorado': '6',
'Connecticut': '7',
'Delaware': '8',
'Florida': '9',
'Georgia': '10',
'Hawaii': '11',
'Idaho': '12',
'Illinois': '13',
'Indiana': '14',
'Iowa': '15',
'Kansas': '16',
'Kentucky': '17',
'Louisiana': '18',
'Maine': '19',
'Maryland': '20',
'Massachusetts': '21',
'Michigan': '22',
'Minnesota': '23',
'Mississippi': '24',
'Missouri': '25',
'Montana': '26',
'Nebraska': '27',
'Nevada': '28',
'New Hampshire': '29',
'New Jersey': '30',
'New Mexico': '31',
'New York': '32',
'North Carolina': '33',
'North Dakota': '34',
'Ohio': '35',
'Oklahoma': '36',
'Oregon': '37',
'Pennsylvania': '38',
'Rhode Island': '39',
'South Carolina': '40',
'South Dakota': '41',
'Tennessee': '42',
'Texas': '43',
'Utah': '44',
'Vermont': '45',
'Virginia': '46',
'Washington': '47',
'West Virginia': '48',
'Wisconsin': '49',
'Wyoming': '50',
'Ontario': '51',
'Quebec': '52',
'Nova Scotia': '53',
'New Brunswick': '54',
'Manitoba': '55',
'British Columbia': '56',
'Prince Edward': '57',
'Saskatchewan': '58',
'Alberta': '59',
'Newfoundland and Labrador': '60',
'Washington, D.C.': '61',
'Chihuahua': '62',
'Baja California': '63',
'Freeport': '64',
'Nuevo Leon': '65',
}
# coordinates = (30.5029812,-84.2449241)
#
# results = rg.search(coordinates) # default mode = 2
#
# print(results)
NUM_OF_PROCESSES = 4
def ensure_output_paths_exist():
"""Maybe we will not use this since we will be editing the files directly"""
# ensure OUTPUT_DIRECTORY exists
try:
os.mkdir(OUTPUT_DIRECTORY)
except:
#TODO: Use the correct exception here
pass
##############################################################################
############### Run through all folders ######################################
##############################################################################
def run_all(path):
"""This will allow to run all the directories from a path"""
file_paths = glob.glob(path+"/*.csv")
# Based on the current tweet storage mechanism (from Todd's code)
# ensure_output_paths_exist()
# If NUM_OF_PROCESSES is False, use mp.cpu_count
pool = multiprocessing.pool.ThreadPool(NUM_OF_PROCESSES or mp.cpu_count())
pool.map(gzworker, file_paths, chunksize=1)
pool.close()
##############################################################################
###################### Worker Function #######################################
##############################################################################
# def gzworker(fullpath):
# """Worker opens one .gz file"""
# print('Processing {}'.format(fullpath))
# tweet_buffer = []
# try:
# with open(fullpath, 'r+') as f:
# reader = csv.reader(f)
# #TODO: location = ???
# location = blob
# out_lines = [row + [lstName[i]] for i, row in enumerate(reader)]
# # f.seek(0) # set file position to the beginning of the file
# csv.writer(f, delimiter=',').writerows(out_lines)
#
#
# with csv.open(str(fullpath), 'rb') as infile:
# decoded = io.TextIOWrapper(infile, encoding='utf8')
# for _line in decoded:
# if _line.strip() != "":
# json_data = _line.split('|', 1)[1][:-1]
#
# result = tweet_select(json.loads(json_data))
# if result:
# tweet_buffer.append(result)
#
# except:
# print("Error in {}".format(fullpath))
# pass
#
# #Write to OUTPUT_DIRECTORY (if _buffer has contents)
# if tweet_buffer != None:
# print("going to save")
# OUTPUT_PATH = "%s/%s.csv" % (OUTPUT_DIRECTORY, fullpath[5:-3])
#
# with open(OUTPUT_PATH, "w", errors='ignore') as csvfile:
# writer = csv.writer(csvfile)
# for row in tweet_buffer:
# writer.writerow(row)
#
# print('Finished {}'.format(fullpath))
def gzworker(fullpath):
"""Worker will open the .csv file and process the information inside"""
print('Processing {}'.format(fullpath))
# try:
with open(fullpath, 'r+') as f:
reader = csv.reader(f)
for row in reader:
geoloc = row[3]
geoloc = geoloc.split(',')
lon = geoloc[0].replace('[', '')
lat = geoloc[1].replace(']', '').replace(' ', '')
# print('Longitude: {} \nLatitude: {}'.format(lon, lat))
# m_obj = re.search(r"(\d+)", geoloc)
# print(m_obj)
coordinates = (lat,lon)
results = rg.search(coordinates) # default mode = 2
print(results)
state_num = mx_ca_us_state_abbrev.get(results[0].get('admin1'))
print(state_num)
# state_num = us_state_abbrev.results['admin1']
# print(state_num)
# [('lat', '29.23329'), ('lon', '-98.79641'), ('name', 'Lytle'), ('admin1', 'Texas'), ('admin2', 'Atascosa County'), ('cc', 'US')]
# except:
# print("Error in {}".format(fullpath))
# pass
print('Finished {}'.format(fullpath))
#TODO: Get .csv file loaded
#TODO: extract long-lat from tweet
#TODO: invert long-lat
#TODO: use reverse_geocoder to get the information
#TODO: save the information on the same line in the same .csv file
| 30.455026 | 130 | 0.509034 | 620 | 5,756 | 4.63871 | 0.508065 | 0.029207 | 0.013561 | 0.00765 | 0.098748 | 0.086926 | 0.086926 | 0.061892 | 0.061892 | 0.027121 | 0 | 0.040131 | 0.255386 | 5,756 | 188 | 131 | 30.617021 | 0.630891 | 0.430334 | 0 | 0 | 0 | 0 | 0.274928 | 0 | 0 | 0 | 0 | 0.005319 | 0 | 1 | 0.030303 | false | 0.010101 | 0.060606 | 0 | 0.090909 | 0.040404 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf54c232a75d4a7341295831e0d07ef22dddb9f7 | 12,143 | py | Python | Trainer.py | Gorilla-Lab-SCUT/OrthDNNs | 7391b1751334c485feea212a80abc4dc8430dc1e | [
"BSD-3-Clause"
] | 4 | 2021-07-15T07:34:30.000Z | 2022-03-30T08:23:46.000Z | Trainer.py | Gorilla-Lab-SCUT/OrthDNNs | 7391b1751334c485feea212a80abc4dc8430dc1e | [
"BSD-3-Clause"
] | 1 | 2020-02-11T10:55:46.000Z | 2020-02-11T10:55:46.000Z | Trainer.py | Yuxin-Wen/OrthDNNs | 7391b1751334c485feea212a80abc4dc8430dc1e | [
"BSD-3-Clause"
] | 1 | 2021-11-23T03:31:09.000Z | 2021-11-23T03:31:09.000Z | from __future__ import division
import time
import numpy as np
import math
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
import torchvision
from Utility import Average_meter
from Utility import Training_aux
#from Utility import progress_bar
class Trainer(object):
"""a method that packaging dataloader and model and optim_methods"""
"""the model are trained here"""
"""the mixup operation and data_agu operation are perform here"""
def __init__(self, train_loader, val_loader, model, criterion,
optimizer, nEpoch, lr_base = 0.1, lr_end = 0.001, lr_decay_method = 'exp',
is_soft_regu=False, is_SRIP=False, soft_lambda = 1e-4,
svb_flag = False, iter_svb_flag=False, svb_factor = 0.5,
bbn_flag = False, bbn_factor = 0.2, bbn_type = 'rel',
fsave = './Save', print_freq = 10, is_evaluate = False, dataset = 'CIFAR10'):
self.train_loader = train_loader
self.val_loader = val_loader
self.model = model
self.criterion = criterion
self.optimizer = optimizer
self.nEpoch = nEpoch
self.lr_base = lr_base
self.lr_end = lr_end
self.lr_decay_method = lr_decay_method
self.is_soft_regu = is_soft_regu
self.is_SRIP = is_SRIP
self.soft_lambda = soft_lambda
self.svb_flag = svb_flag
self.iter_svb_flag = iter_svb_flag
self.svb_factor = svb_factor
self.bbn_flag = bbn_flag
self.bbn_factor = bbn_factor
self.bbn_type = bbn_type
self.training_aux = Training_aux(fsave)
self.is_evaluate = is_evaluate
self.print_freq = print_freq
self.best_prec1 = 0
def train(self, epoch):
"""Train for one epoch on the training set"""
batch_time = Average_meter()
data_time = Average_meter()
losses = Average_meter()
top1 = Average_meter()
top5 = Average_meter()
# switch to train mode
self.model.train()
begin = time.time()
for i, (image, target) in enumerate(self.train_loader):
batch_size= image.size(0)
# measure data loading time
data_time.update(time.time() - begin)
image = image.cuda()
input_var = Variable(image)
target = target.cuda()
target_var = Variable(target)
output = self.model(input_var)
if self.is_soft_regu or self.is_SRIP:
loss = self.criterion(output, target_var, self.model, self.soft_lambda)
else:
loss = self.criterion(output, target_var)
# measure accuracy and record loss
prec1, prec5 = self.training_aux.accuracy(output.data, target, topk=(1, 5))
losses.update(loss.data.item(), batch_size)
top1.update(prec1.item(), batch_size)
top5.update(prec5.item(), batch_size)
# compute gradient and do SGD step
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# measure elapsed time
batch_time.update(time.time() - begin)
if i % self.print_freq == 0:
#progress_bar(i, len(self.train_loader), 'Loss: {loss.avg:.4f} | Prec@1 {top1.avg:.3f} | Prec@5 {top5.avg:.3f}'.format(loss=losses, top1=top1, top5=top5))
print('Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.avg:.3f}\t'
'Data {data_time.avg:.3f}\t'
'Loss {loss.avg:.4f}\t'
'Prec@1 {top1.avg:.3f}\t'
'Prec@5 {top5.avg:.3f}'.format(
epoch, i, len(self.train_loader), batch_time=batch_time,
data_time=data_time, loss=losses, top1=top1, top5=top5))
begin = time.time()
if (self.iter_svb_flag) and epoch != (self.nEpoch -1) and i != (self.train_loader.__len__() -1):
self.fcConvWeightReguViaSVB()
self.training_aux.write_err_to_file(epoch = epoch, top1 = top1, top5 = top5, trn_loss = losses, mode = 'train')
return
def validate(self, epoch, img_size=320):
"""Perform validation on the validation set"""
batch_time = Average_meter()
losses = Average_meter()
top1 = Average_meter()
top5 = Average_meter()
self.model.eval()
begin = time.time()
with torch.no_grad():
for i, (raw_img, raw_label) in enumerate(self.val_loader):
raw_label = raw_label.cuda()
raw_img = raw_img.cuda()
input_var = Variable(raw_img)
target_var = Variable(raw_label)
# compute output
output = self.model(input_var)
# measure accuracy and record loss
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target_var)
# measure accuracy and record loss
prec1, prec5 = self.training_aux.accuracy(output.data, raw_label, topk=(1, 5))
top1.update(prec1.item(), raw_img.size(0))
top5.update(prec5.item(), raw_img.size(0))
losses.update(loss.data.item(), raw_img.size(0))
# measure elapsed time
batch_time.update(time.time() - begin)
if i % self.print_freq == 0:
#progress_bar(i, len(self.train_loader), 'Loss: {loss.avg:.4f} | Prec@1 {top1.avg:.3f} | Prec@5 {top5.avg:.3f}'.format(loss=losses, top1=top1, top5=top5))
print('Test: [{0}/{1}]\t'
'Time {batch_time.avg:.3f}\t'
'Loss {loss.avg:.4f}\t'
'{top1.avg:.3f}\t'
'{top5.avg:.3f}'.format(
i, len(self.val_loader), batch_time=batch_time,
loss=losses, top1=top1, top5=top5))
begin = time.time()
print(' * Loss {loss.avg:.4f} Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f}'
.format(loss=losses, top1=top1, top5=top5))
self.is_best = top1.avg > self.best_prec1
self.best_prec1 = max(top1.avg, self.best_prec1)
if self.is_evaluate:
return top1.avg
else:
self.training_aux.write_err_to_file(epoch = epoch, top1 = top1, top5 = top5, mode = 'val')
return top1.avg
def adjust_learning_rate(self, epoch, warm_up_epoch = 0,scheduler=None):
"""Sets the learning rate to the initial LR decayed by 10 after 0.5 and 0.75 epochs"""
if self.lr_decay_method == 'exp':
lr = self.lr_base
if epoch < warm_up_epoch:
lr = 0.001 + (self.lr_base - 0.001) * epoch / warm_up_epoch
if epoch >= warm_up_epoch:
lr_series = torch.logspace(math.log(self.lr_base, 10), math.log(self.lr_end, 10), int(self.nEpoch/2))
lr = lr_series[int(math.floor((epoch-warm_up_epoch)/2))]
for param_group in self.optimizer.param_groups:
param_group['lr'] = lr
elif self.lr_decay_method == 'noDecay':
lr = self.lr_base
for param_group in self.optimizer.param_groups:
param_group['lr'] = lr
print('lr:{0}'.format(self.optimizer.param_groups[-1]['lr']))
return
def save_checkpoint(self, epoch, save_flag = 'learning', filename = False):
if save_flag == 'standard':
model = self.standard_model
optimizer = self.standard_optimizer
elif save_flag == 'learning':
model = self.model
optimizer = self.optimizer
else:
raise Exception('save_flag should be one of standard or learning')
state = {
'epoch': epoch,
'state_dict': model.state_dict(),
'best_prec1': self.best_prec1,
'optimizer' : optimizer.state_dict(),
}
fname = filename or 'checkpoint' + '.pth.tar'
self.training_aux.save_checkpoint(state = state, is_best = self.is_best, filename=fname)
return
def fcConvWeightReguViaSVB(self):
for m in self.model.modules():
#svb
if self.svb_flag == True:
if isinstance(m,nn.Conv2d):
tmpbatchM = m.weight.data.view(m.weight.data.size(0), -1).t().clone()
try:
tmpU, tmpS, tmpV = torch.svd(tmpbatchM)
except:
tmpbatchM = tmpbatchM[np.logical_not(np.isnan(tmpbatchM))]
tmpbatchM = tmpbatchM.view(m.weight.data.size(0), -1).t()
tmpU, tmpS, tmpV = np.linalg.svd(tmpbatchM.cpu().numpy())
tmpU = torch.from_numpy(tmpU).cuda()
tmpS = torch.from_numpy(tmpS).cuda()
tmpV = torch.from_numpy(tmpV).cuda()
for idx in range(0, tmpS.size(0)):
if tmpS[idx] > (1+self.svb_factor):
tmpS[idx] = 1+self.svb_factor
elif tmpS[idx] < 1/(1+self.svb_factor):
tmpS[idx] = 1/(1+self.svb_factor)
tmpbatchM = torch.mm(torch.mm(tmpU, torch.diag(tmpS.cuda())), tmpV.t()).t().contiguous()
m.weight.data.copy_(tmpbatchM.view_as(m.weight.data))
elif isinstance(m, nn.Linear):
tmpbatchM = m.weight.data.t().clone()
tmpU, tmpS, tmpV = torch.svd(tmpbatchM)
for idx in range(0, tmpS.size(0)):
if tmpS[idx] > (1+self.svb_factor):
tmpS[idx] = 1+self.svb_factor
elif tmpS[idx] < 1/(1+self.svb_factor):
tmpS[idx] = 1/(1+self.svb_factor)
tmpbatchM = torch.mm(torch.mm(tmpU, torch.diag(tmpS.cuda())), tmpV.t()).t().contiguous()
m.weight.data.copy_(tmpbatchM.view_as(m.weight.data))
# bbn
if self.bbn_flag == True:
if isinstance(m, nn.BatchNorm2d):
tmpbatchM = m.weight.data
if self.bbn_type == 'abs':
for idx in range(0, tmpbatchM.size(0)):
if tmpbatchM[idx] > (1+self.bbn_factor):
tmpbatchM[idx] = (1+self.bbn_factor)
elif tmpbatchM[idx] < 1/(1+self.bbn_factor):
tmpbatchM[idx] = 1/(1+self.bbn_factor)
elif self.bbn_type == 'rel':
mean = torch.mean(tmpbatchM)
relVec = torch.div(tmpbatchM, mean)
for idx in range(0, tmpbatchM.size(0)):
if relVec[idx] > (1+self.bbn_factor):
tmpbatchM[idx] = mean * (1+self.bbn_factor)
elif relVec[idx] < 1/(1+self.bbn_factor):
tmpbatchM[idx] = mean/(1+self.bbn_factor)
elif self.bbn_type == 'bbn':
running_var = m.running_var
eps = m.eps
running_std = torch.sqrt(torch.add(running_var, eps))
mean = torch.mean(tmpbatchM/running_std)
for idx in range(0, tmpbatchM.size(0)):
if tmpbatchM[idx]/(running_std[idx]*mean) > 1+self.bbn_factor:
tmpbatchM[idx] = running_std[idx] * mean * (1+self.bbn_factor)
elif tmpbatchM[idx]/(running_std[idx]*mean) < 1/(1+self.bbn_factor):
tmpbatchM[idx] = running_std[idx] * mean / (1+self.bbn_factor)
m.weight.data.copy_(tmpbatchM)
| 43.679856 | 174 | 0.534464 | 1,472 | 12,143 | 4.238451 | 0.158288 | 0.01683 | 0.027088 | 0.026927 | 0.431319 | 0.374098 | 0.336432 | 0.317038 | 0.298125 | 0.285623 | 0 | 0.024778 | 0.35189 | 12,143 | 277 | 175 | 43.837545 | 0.76798 | 0.066376 | 0 | 0.253456 | 0 | 0.004608 | 0.043106 | 0.005801 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02765 | false | 0 | 0.064516 | 0 | 0.119816 | 0.036866 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf555654bbc3d88a367ec4273df655fffb2396cc | 952 | py | Python | src/utils/login_to_spotify.py | SecondThundeR/spotichecker | 05787bae85cb0d9c5832939c72bad526eb419705 | [
"MIT"
] | null | null | null | src/utils/login_to_spotify.py | SecondThundeR/spotichecker | 05787bae85cb0d9c5832939c72bad526eb419705 | [
"MIT"
] | null | null | null | src/utils/login_to_spotify.py | SecondThundeR/spotichecker | 05787bae85cb0d9c5832939c72bad526eb419705 | [
"MIT"
] | null | null | null | """Utils for logging to Spotify.
This module contains functions for connecting to Spotify API.
This file can also be imported as a module and contains the following functions:
* login_to_spotify - connect to Spotify and return OAuth object
"""
import spotipy
from spotipy.oauth2 import SpotifyOAuth
SCOPES = "user-library-read, playlist-read-private, playlist-read-collaborative"
def login_to_spotify(credentials: dict) -> SpotifyOAuth:
"""Trigger Spotify authentication and return current token.
Args:
credentials (dict): Credentials data (CLIENT_ID and CLIENT_SECRET).
Returns:
spotipy.oauth2.SpotifyOAuth: Spotify OAuth object.
"""
sp = spotipy.Spotify(
auth_manager=SpotifyOAuth(
client_id=credentials["CLIENT_ID"],
client_secret=credentials["CLIENT_SECRET"],
redirect_uri="http://localhost:8080",
scope=SCOPES,
)
)
return sp
| 27.2 | 80 | 0.698529 | 111 | 952 | 5.882883 | 0.540541 | 0.068913 | 0.042879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.008086 | 0.220588 | 952 | 34 | 81 | 28 | 0.871968 | 0.465336 | 0 | 0 | 0 | 0 | 0.235294 | 0.102941 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf556fe0579840dc64ac6b121230f3d881ae21c9 | 17,516 | py | Python | prostate_cancer_nomograms/statistical_analysis/nomograms_performance_evaluation/decision_curve_analysis/__init__.py | MaxenceLarose/ProstateCancerNomograms | 4ff15dccd1f2dbde58d3a21a2e680e909e2e408a | [
"Apache-2.0"
] | 1 | 2021-10-04T18:03:10.000Z | 2021-10-04T18:03:10.000Z | prostate_cancer_nomograms/statistical_analysis/nomograms_performance_evaluation/decision_curve_analysis/__init__.py | MaxenceLarose/ProstateCancerNomograms | 4ff15dccd1f2dbde58d3a21a2e680e909e2e408a | [
"Apache-2.0"
] | null | null | null | prostate_cancer_nomograms/statistical_analysis/nomograms_performance_evaluation/decision_curve_analysis/__init__.py | MaxenceLarose/ProstateCancerNomograms | 4ff15dccd1f2dbde58d3a21a2e680e909e2e408a | [
"Apache-2.0"
] | null | null | null | import pandas as pd
from .algo import *
from .validate import *
from .validate import DCAError
__all__ = ['DecisionCurveAnalysis'] # only public member should be the class
class DecisionCurveAnalysis:
"""DecisionCurveAnalysis(...)
DecisionCurveAnalysis(algorithm='dca', **kwargs)
Create an object of class DecisionCurveAnalysis for generating
and plotting "net benefit" and "interventions avoided" curves
Parameters
----------
algorithm : str
the type of analysis to run
valid values are 'dca' (decision curve) or 'stdca' (survival time decision curve)
**kwargs : object
keyword arguments that are used in the analysis
Attributes
----------
data : pd.DataFrame
The data set to analyze, with observations in each row, and
outcomes/predictors in the columns
outcome : str
The column in `data` to use as the outcome for the analysis
All observations in this column must be coded 0/1
predictors : list(str)
The column(s) in `data` to use as predictors during the analysis
All observations, 'x', in this column must be in the range 0 <= x <= 1
Methods
-------
run : runs the analysis
smooth_results : use local regression (LOWESS) to smooth the
results of the analysis, using the specified fraction
plot_net_benefit : TODO
plot_interv_avoid : TODO
Examples
--------
TODO
"""
#universal parameters for dca
_common_args = {'data' : None,
'outcome' : None,
'predictors' : None,
'thresh_lo' : 0.01,
'thresh_hi' : 0.99,
'thresh_step' : 0.01,
'probabilities' : None,
'harms' : None,
'intervention_per' : 100}
#stdca-specific attributes
_stdca_args = {'tt_outcome' : None,
'time_point' : None,
'cmp_risk' : False}
def __init__(self, algorithm='dca', **kwargs):
"""Initializes the DecisionCurveAnalysis object
Arguments for the analysis may be passed in as keywords upon object initialization
Parameters
----------
algorithm : str
the algorithm to use, valid options are 'dca' or 'stdca'
**kwargs :
keyword arguments to populate instance attributes that will be used in analysis
Raises
------
ValueError
if user doesn't specify a valid algorithm; valid values are 'dca' or 'stdca'
if the user specifies an invalid keyword
"""
if algorithm not in ['dca', 'stdca']:
raise ValueError("did not specify a valid algorithm, only 'dca' and 'stdca' are valid")
self.algorithm = algorithm
#set args based on keywords passed in
        #this naively assigns values passed in -- validation occurs afterwards
for kw in kwargs:
if kw in self._common_args:
self._common_args[kw] = kwargs[kw] #assign
continue
elif kw in self._stdca_args:
self._stdca_args[kw] = kwargs[kw]
else:
raise ValueError("{kw} is not a valid decision_curve_analysis keyword"
.format(kw=repr(kw)))
#do validation on all args, make sure we still have a valid analysis
self.data = data_validate(self.data)
self.outcome = outcome_validate(self.data, self.outcome)
self.predictors = predictors_validate(self.predictors, self.data)
#validate bounds
new_bounds = []
curr_bounds = [self._common_args['thresh_lo'], self._common_args['thresh_hi'],
self._common_args['thresh_step']]
for i, bound in enumerate(['lower', 'upper', 'step']):
new_bounds.append(threshold_validate(bound, self.threshold_bound(bound),
curr_bounds))
self.set_threshold_bounds(new_bounds[0], new_bounds[1], new_bounds[2])
#validate predictor-reliant probs/harms
self.probabilities = probabilities_validate(self.probabilities,
self.predictors)
self.harms = harms_validate(self.harms, self.predictors)
#validate the data in each predictor column
self.data = validate_data_predictors(self.data, self.outcome, self.predictors,
self.probabilities)
def _args_dict(self):
"""Forms the arguments to pass to the analysis algorithm
Returns
-------
dict(str, object)
A dictionary that can be unpacked and passed to the algorithm for the
analysis
"""
if self.algorithm == 'dca':
return self._common_args
        else:
            # merge the common and stdca-specific argument dicts
            merged = dict(self._common_args)
            merged.update(self._stdca_args)
            return merged
def _algo(self):
"""The algorithm to use for this analysis
"""
return dca if self.algorithm == 'dca' else stdca
def run(self, return_results=False):
"""Performs the analysis
Parameters
----------
return_results : bool
            if `True`, the function returns the results as a tuple
            if `False` (default), the results are stored in the instance attribute `results`
Returns
-------
tuple(pd.DataFrame, pd.DataFrame)
Returns net_benefit, interventions_avoided if `return_results=True`
"""
nb, ia = self._algo()(**(self._args_dict()))
if return_results:
return nb, ia
else:
self.results = {'net benefit' : nb, 'interventions avoided' : ia}
def smooth_results(self, lowess_frac, return_results=False):
"""Smooths the results using a LOWESS smoother
Parameters
----------
lowess_frac : float
the fraction of the endog value to use when smoothing
return_results : bool
            if `True`, the function returns the results as a tuple
            if `False` (default), the results are stored in the instance attribute `results`
Returns
-------
tuple(pd.DataFrame, pd.DataFrame)
smoothed predictor dataFrames for results if `return_results=True`
"""
from dcapy.calc import lowess_smooth_results
_nb = _ia = None
for predictor in self.predictors:
nb, ia = lowess_smooth_results(predictor, self.results['net benefit'],
self.results['interventions avoided'],
lowess_frac)
#concatenate results
_nb = pd.concat([_nb, nb], axis=1)
_ia = pd.concat([_ia, ia], axis=1)
if return_results:
return _nb, _ia
else:
self.results['net benefit'] = pd.concat(
[self.results['net benefit'], _nb], axis=1)
self.results['interventions avoided'] = pd.concat(
[self.results['interventions avoided'], _ia], axis=1)
def plot_net_benefit(self, custom_axes=None, make_legend=True):
"""Plots the net benefit from the analysis
Parameters
----------
custom_axes : list(float)
a length-4 list of dimensions for the plot, `[x_min, x_max, y_min, y_max]`
make_legend : bool
whether to include a legend in the plot
Returns
-------
        None
            the plot is drawn onto the current matplotlib figure
"""
try:
import matplotlib.pyplot as plt
except ImportError as e:
e.args += ("plotting the analysis requires matplotlib")
raise
try:
net_benefit = getattr(self, 'results')['net benefit']
except AttributeError:
raise DCAError("must run analysis before plotting!")
plt.plot(net_benefit)
plt.ylabel("Net Benefit")
plt.xlabel("Threshold Probability")
#prettify the graph
if custom_axes:
plt.axis(custom_axes)
else: #use default
plt.axis([0, self.threshold_bound('upper')*100,
-0.05, 0.20])
def plot_interventions_avoided(self, custom_axes=None, make_legend=True):
"""Plots the interventions avoided per `interventions_per` patients
Notes
-----
Generated plots are 'interventions avoided per `intervention_per` patients' vs. threshold
Parameters
----------
custom_axes : list(float)
a length-4 list of dimensions for the plot, `[x_min, x_max, y_min, y_max]`
make_legend : bool
whether to include a legend in the plot
Returns
-------
        list of matplotlib.lines.Line2D
            the lines of the newly-created plot
"""
try:
import matplotlib.pyplot as plt
except ImportError as e:
e.args += ("plotting the analysis requires matplotlib")
raise
try:
interv_avoid = getattr(self, 'results')['interventions avoided']
except AttributeError:
raise DCAError("must run analysis before plotting!")
iaplot = plt.plot(interv_avoid)
#TODO: graph prettying/customization
return iaplot
@property
def data(self):
"""The data set to analyze
Returns
-------
pd.DataFrame
"""
return self._common_args['data']
@data.setter
def data(self, value):
"""Set the data for the analysis
Parameters
----------
value : pd.DataFrame
the data to analyze
"""
value = data_validate(value) # validate
self._common_args['data'] = value
@property
def outcome(self):
"""The outcome to use for the analysis
"""
return self._common_args['outcome']
@outcome.setter
def outcome(self, value):
"""Sets the column in the dataset to use as the outcome for the analysis
Parameters
----------
value : str
the name of the column in `data` to set as `outcome`
"""
value = outcome_validate(self.data, value) # validate
self._common_args['outcome'] = value
@property
def predictors(self):
"""The predictors to use
Returns
-------
list(str)
A list of all predictors for the analysis
"""
return self._common_args['predictors']
@predictors.setter
def predictors(self, value):
"""Sets the predictors to use for the analysis
Parameters
----------
value : list(str)
the list of predictors to use
"""
value = predictors_validate(value, self.data)
self._common_args['predictors'] = value
def threshold_bound(self, bound):
"""Gets the specified threshold boundary
Parameters
----------
bound : str
the boundary to get; valid values are "lower", "upper", or "step"
Returns
-------
float
the current value of that boundary
"""
mapping = {'lower' : 'thresh_lo',
'upper' : 'thresh_hi',
'step' : 'thresh_step'}
try:
return self._common_args[mapping[bound]]
except KeyError:
raise ValueError("did not specify a valid boundary")
def set_threshold_bounds(self, lower, upper, step=None):
"""Sets the threshold boundaries (thresh_*) for the analysis
Notes
-----
Passing `None` for any of the parameters will skip that parameter
The analysis will be run over all steps, x, lower <= x <= upper
Parameters
----------
lower : float
the lower boundary
upper : float
the upper boundary
step : float
the increment between calculations
"""
_step = step if step else self._common_args['thresh_step']
bounds_to_test = [lower, upper, _step]
if lower is not None:
lower = threshold_validate('lower', lower, bounds_to_test)
self._common_args['thresh_lo'] = lower
if upper is not None:
upper = threshold_validate('upper', upper, bounds_to_test)
self._common_args['thresh_hi'] = upper
if step is not None:
step = threshold_validate('step', step, bounds_to_test)
self._common_args['thresh_step'] = step
@property
def probabilities(self):
"""The list of probability values for each predictor
Returns
-------
list(bool)
the probability list
"""
return self._common_args['probabilities']
@probabilities.setter
def probabilities(self, value):
"""Sets the probabilities list for the analysis
Notes
-----
The length of the parameter `value` must match that of the predictors
Parameters
----------
value : list(bool)
a list of probabilities to assign, one for each predictor
"""
value = probabilities_validate(value, self.predictors)
self._common_args['probabilities'] = value
def set_probability_for_predictor(self, predictor, probability):
"""Sets the probability value for the given predictor
Parameters
----------
predictor : str
the predictor to set the probability value for
probability : bool
the probability value
"""
try: # make sure we're setting a valid predictor
ind = self._common_args['predictors'].index(predictor)
except ValueError as e:
e.args += ("did not specify a valid predictor")
raise
self._common_args['probabilities'][ind] = probability
@property
def harms(self):
"""The list of harm values for the predictors
Returns
-------
list(float)
"""
return self._common_args['harms']
@harms.setter
def harms(self, value):
"""Sets the list of harm values to be used
Notes
-----
The length of the parameter `value` must match that of the predictors
Parameters
----------
value : list(float)
a list of floats to assign, one for each predictor
"""
value = harms_validate(value, self.predictors) # validate
self._common_args['harms'] = value
def set_harm_for_predictor(self, predictor, harm):
"""Sets the harm value for the given predictor
Parameters
----------
predictor : str
the predictor to set the harm value for
harm : float
the harm value (must be between 0 and 1)
"""
        try:  # make sure we're setting a valid predictor
            ind = self._common_args['predictors'].index(predictor)
        except ValueError as e:
            e.args += ("did not specify a valid predictor",)
            raise
        self._common_args['harms'][ind] = harm
@property
def intervention_per(self):
"""The number of patients per intervention
Returns
-------
int
"""
return self._common_args['intervention_per']
@intervention_per.setter
def intervention_per(self, value):
"""Sets the value of the number of patients to assume per intervention
Parameters
----------
value : int
"""
self._common_args['intervention_per'] = value
@property
def time_to_outcome(self):
"""The column in the data used to specify the time taken to reach the outcome
Returns
-------
str
"""
return self._common_args['tt_outcome']
@time_to_outcome.setter
def time_to_outcome(self, value):
"""Sets the column to use as the `tt_outcome` for the analysis
Parameters
----------
value : str
"""
        if value in self.data.columns:
self._stdca_args['tt_outcome'] = value
else:
raise ValueError("time to outcome must be a valid column in the data set")
@property
def time_point(self):
"""The time point of interest
Returns
-------
float
"""
return self._stdca_args['time_point']
@time_point.setter
def time_point(self, value):
"""Sets the time point of interest
Parameters
----------
value : float
"""
self._stdca_args['time_point'] = value
@property
def competing_risk(self):
"""Run competing risk analysis
Returns
-------
bool
"""
return self._stdca_args['cmp_risk']
@competing_risk.setter
def competing_risk(self, value):
"""Sets whether to run a competing risk analysis
Parameters
----------
value : bool
"""
if not isinstance(value, bool):
raise TypeError("competing risk must be a boolean value")
self._stdca_args['cmp_risk'] = value | 32.13945 | 99 | 0.565711 | 1,925 | 17,516 | 5.02026 | 0.152727 | 0.031043 | 0.042012 | 0.018626 | 0.292529 | 0.224855 | 0.212024 | 0.168874 | 0.162459 | 0.141349 | 0 | 0.003206 | 0.341117 | 17,516 | 545 | 100 | 32.13945 | 0.834156 | 0.375885 | 0 | 0.205882 | 0 | 0 | 0.12688 | 0.004829 | 0 | 0 | 0 | 0.007339 | 0 | 1 | 0.142157 | false | 0 | 0.04902 | 0 | 0.284314 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf565b37008bf14878731348b0d414b055945931 | 1,493 | py | Python | pyxmp/xmp.py | jeslyvarghese/pyxmp | 94e9f97574230f04b47fbcc7ed2caaa26e125ec4 | [
"MIT"
] | null | null | null | pyxmp/xmp.py | jeslyvarghese/pyxmp | 94e9f97574230f04b47fbcc7ed2caaa26e125ec4 | [
"MIT"
] | null | null | null | pyxmp/xmp.py | jeslyvarghese/pyxmp | 94e9f97574230f04b47fbcc7ed2caaa26e125ec4 | [
"MIT"
] | null | null | null | import xml.etree.ElementTree as ET
from .__keysearch import keysearch
from .__attribute import Attribute
class XMP(object):
def __init__(self, filepath, **namespaces):
self.filepath = filepath
with open(self.filepath, 'rb') as f:
data = f.read()
xmp_start = data.find(b'<x:xmpmeta')
xmp_end = data.find(b'</x:xmpmeta')
self.__namespaces = namespaces
self.__xmp_string = data[xmp_start:xmp_end+12]
try:
self.__root = ET.fromstring(self.__xmp_string)
self.__rdf_el = self.__root[0][0]
self.__attrib_dict = self.__rdf_el.attrib
except ET.ParseError:
self.__attrib_dict = {}
self.__namespaced_dict = {}
self.__update_namespaced_dict()
self.__create_namespace_attributes()
def __update_namespaced_dict(self):
for k, v in self.__attrib_dict.items():
nk = k
for ns, url in self.__namespaces.items():
nk = k.replace('{'+ url +'}', ns+':')
if k != nk:
break
self.__namespaced_dict[nk] = v
def __create_namespace_attributes(self):
for k in self.__namespaces.keys():
setattr(self, k, Attribute())
obj = getattr(self, k)
for key in keysearch(self.__namespaced_dict, k):
attr_name = key.replace(k+':', '')
setattr(obj, attr_name, self.__namespaced_dict[key])
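# A minimal usage sketch (file name and namespace mapping are hypothetical;
# which sub-attributes exist depends on the tags embedded in the file):
#
#   xmp = XMP('photo.jpg', dc='http://purl.org/dc/elements/1.1/')
#   print(xmp.dc.format)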
| 37.325 | 68 | 0.578701 | 176 | 1,493 | 4.494318 | 0.363636 | 0.106195 | 0.091024 | 0.025284 | 0.042984 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00388 | 0.309444 | 1,493 | 39 | 69 | 38.282051 | 0.763337 | 0 | 0 | 0 | 0 | 0 | 0.018084 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf58a225d1a16173cd170707ce55c8de870dc56f | 568 | py | Python | sparse/utils.py | ContinuumIO/sparse | 10da2d31f0228f192b3064ab253bc828b3cf1a50 | [
"BSD-3-Clause"
] | 2 | 2017-09-17T21:22:21.000Z | 2019-08-26T02:28:10.000Z | sparse/utils.py | ContinuumIO/sparse | 10da2d31f0228f192b3064ab253bc828b3cf1a50 | [
"BSD-3-Clause"
] | null | null | null | sparse/utils.py | ContinuumIO/sparse | 10da2d31f0228f192b3064ab253bc828b3cf1a50 | [
"BSD-3-Clause"
] | 4 | 2019-03-21T05:38:06.000Z | 2021-02-23T06:26:48.000Z | import numpy as np
from .core import COO
def assert_eq(x, y):
assert x.shape == y.shape
assert x.dtype == y.dtype
if isinstance(x, COO):
if x.sorted:
assert is_lexsorted(x)
if isinstance(y, COO):
if y.sorted:
assert is_lexsorted(y)
if hasattr(x, 'todense'):
xx = x.todense()
else:
xx = x
if hasattr(y, 'todense'):
yy = y.todense()
else:
yy = y
assert np.allclose(xx, yy)
def is_lexsorted(x):
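    # linear_loc() is assumed to flatten each stored coordinate into a single
    # index; strictly increasing differences mean the coords are lexsorted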
return not x.shape or (np.diff(x.linear_loc()) > 0).all()
| 19.586207 | 61 | 0.549296 | 86 | 568 | 3.569767 | 0.383721 | 0.107492 | 0.091205 | 0.149837 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002597 | 0.322183 | 568 | 28 | 62 | 20.285714 | 0.794805 | 0 | 0 | 0.090909 | 0 | 0 | 0.024648 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 1 | 0.090909 | false | 0 | 0.090909 | 0.045455 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf6581484116a18845484669a17d5f8076cfe782 | 2,612 | py | Python | baseline/xray.py | RoliKhanna/Anchor-Free | e3d599b7cbdc988ad7720c1e8324cabe87917d59 | [
"MIT"
] | null | null | null | baseline/xray.py | RoliKhanna/Anchor-Free | e3d599b7cbdc988ad7720c1e8324cabe87917d59 | [
"MIT"
] | null | null | null | baseline/xray.py | RoliKhanna/Anchor-Free | e3d599b7cbdc988ad7720c1e8324cabe87917d59 | [
"MIT"
] | 1 | 2019-11-25T22:08:19.000Z | 2019-11-25T22:08:19.000Z |
from nltk.corpus import reuters
import sys
import numpy as np
from scipy import optimize
# Loading data here
train_documents, train_categories = zip(*[(reuters.raw(i), reuters.categories(i)) for i in reuters.fileids() if i.startswith('training/')])
test_documents, test_categories = zip(*[(reuters.raw(i), reuters.categories(i)) for i in reuters.fileids() if i.startswith('test/')])
def col2norm(X):
return np.sum(np.abs(X) ** 2,axis=0)
def xray(X, r):
cols = []
R = np.copy(X)
while len(cols) < r:
i = np.argmax(col2norm(X))
# Loop until we choose a column that has not been selected.
while True:
p = np.random.random((X.shape[0], 1))
scores = col2norm(np.dot(R.T, X)) / col2norm(X)
scores[cols] = -1 # IMPORTANT
best_col = np.argmax(scores)
if best_col in cols:
# Re-try
continue
else:
cols.append(best_col)
H, rel_res = NNLSFrob(X, cols)
R = X - np.dot(X[:, cols] , H)
break
return cols
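# A minimal sketch of running the greedy column selection on synthetic data
# (the shapes and nonnegative matrix are illustrative only):
#
#   X = np.abs(np.random.randn(50, 30))
#   anchor_cols = xray(X, 4)              # indices of 4 extreme columns
#   H, rel_res = NNLSFrob(X, anchor_cols)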
def GP_cols(data, r):
votes = {}
for row in data:
min_ind = np.argmin(row)
max_ind = np.argmax(row)
for ind in [min_ind, max_ind]:
if ind not in votes:
votes[ind] = 1
else:
votes[ind] += 1
votes = sorted(votes.items(), key=lambda x: x[1], reverse=True)
return [x[0] for x in votes][0:r]
def NNLSFrob(X, cols):
ncols = X.shape[1]
H = np.zeros((len(cols), ncols))
    for i in range(ncols):
sol, res = optimize.nnls(X[:, cols], X[:, i])
H[:, i] = sol
rel_res = np.linalg.norm(X - np.dot(X[:, cols], H), 'fro')
rel_res /= np.linalg.norm(X, 'fro')
return H, rel_res
def ComputeNMF(data, colnorms, r):
data = np.copy(data)
colinv = np.linalg.pinv(np.diag(colnorms))
_, S, Vt = np.linalg.svd(data)
A = np.dot(np.diag(S), Vt)
cols = xray(data, r)
H, rel_res = NNLSFrob(data, cols)
return cols, H, rel_res
def ParseMatrix(matpath):
matrix = []
with open(matpath, 'r') as f:
for row in f:
matrix.append([float(v) for v in row.split()[1:]])
return np.array(matrix)
def ParseColnorms(colpath):
norms = []
with open(colpath, 'r') as f:
for line in f:
norms.append(float(line.split()[-1]))
return norms
data = ParseMatrix(train_documents)
colnorms = ParseColnorms(train_categories)
r = 4
cols, H, rel_res = ComputeNMF(data, colnorms, r)
cols.sort()
print("Final result: ", rel_res)
| 28.086022 | 139 | 0.568147 | 387 | 2,612 | 3.775194 | 0.315245 | 0.032854 | 0.023956 | 0.031485 | 0.144422 | 0.144422 | 0.102669 | 0.102669 | 0.102669 | 0.102669 | 0 | 0.009677 | 0.287902 | 2,612 | 92 | 140 | 28.391304 | 0.775806 | 0.035222 | 0 | 0.027397 | 0 | 0 | 0.01432 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09589 | false | 0 | 0.054795 | 0.013699 | 0.246575 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf664ab43e12cf24ecd3e41b3708349ac277b2fd | 2,487 | py | Python | models/deepset.py | sgvdan/OCTransformer | 4bc6861406ea75afd23bdf1608a088dcba99ff14 | [
"Apache-2.0"
] | null | null | null | models/deepset.py | sgvdan/OCTransformer | 4bc6861406ea75afd23bdf1608a088dcba99ff14 | [
"Apache-2.0"
] | null | null | null | models/deepset.py | sgvdan/OCTransformer | 4bc6861406ea75afd23bdf1608a088dcba99ff14 | [
"Apache-2.0"
] | null | null | null | import torch
from torch import nn
# Obtained from: https://github.com/manzilzaheer/DeepSets/blob/master/PointClouds/classifier.py#L58
class PermEqui1_mean(nn.Module):
def __init__(self, in_dim, out_dim):
super().__init__()
self.Gamma = nn.Linear(in_dim, out_dim)
def forward(self, x):
xm = x.mean(1, keepdim=True)
x = self.Gamma(x-xm)
return x
class DeepSet(nn.Module):
def __init__(self, backbone, x_dim, d_dim, num_classes):
"""
:param backbone:
:param x_dim: backbone's output dim
:param d_dim: the intermediate dim
:param num_classes: number of classes to classify for
"""
super().__init__()
self.backbone = backbone
        self.phi = nn.Sequential(
PermEqui1_mean(x_dim, d_dim),
nn.ELU(inplace=True),
PermEqui1_mean(d_dim, d_dim),
nn.ELU(inplace=True),
PermEqui1_mean(d_dim, d_dim),
nn.ELU(inplace=True),
)
self.ro = nn.Sequential(
nn.Dropout(p=0.5),
nn.Linear(d_dim, d_dim),
nn.ELU(inplace=True),
nn.Dropout(p=0.5),
nn.Linear(d_dim, num_classes),
)
# Taken from SliverNet
def nonadaptiveconcatpool2d(self, x, k):
# concatenating average and max pool, with kernel and stride the same
ap = torch.nn.functional.avg_pool2d(x, kernel_size=k, stride=k)
mp = torch.nn.functional.max_pool2d(x, kernel_size=k, stride=k)
return torch.cat([mp, ap], 1)
def forward(self, x):
batch_size, slices_num, channels, height, width = x.shape
x = x.view(batch_size * slices_num, channels, height, width)
if x.shape[0] > 100: # Cuda & ResNet are having trouble with long vectors, so split
split = torch.split(x, 100)
temp_features = []
for chunk in split:
temp_features.append(self.backbone(chunk))
features = torch.cat(temp_features)
else:
            features = self.backbone(x)  # (B*M) x C x h x w feature maps, C=backbone channels
kernel_size = (features.shape[-2], features.shape[-1])
features = self.nonadaptiveconcatpool2d(features, kernel_size).view(batch_size, slices_num, -1)
phi_output = self.phi(features)
sum_output = phi_output.mean(1)
ro_output = self.ro(sum_output)
return ro_output
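# A minimal usage sketch (the truncated ResNet backbone is an assumption; any
# module producing C-channel feature maps works, with x_dim = 2 * C because
# nonadaptiveconcatpool2d concatenates average- and max-pooled features):
#
#   import torchvision
#   resnet = torchvision.models.resnet18(pretrained=False)
#   backbone = nn.Sequential(*list(resnet.children())[:-2])  # 512-ch maps
#   model = DeepSet(backbone, x_dim=1024, d_dim=256, num_classes=4)
#   volumes = torch.randn(2, 19, 3, 224, 224)  # B x slices x C x H x W
#   logits = model(volumes)                    # -> shape (2, 4)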
| 34.068493 | 112 | 0.602734 | 339 | 2,487 | 4.235988 | 0.327434 | 0.027855 | 0.024373 | 0.02507 | 0.245822 | 0.201253 | 0.201253 | 0.114903 | 0.100975 | 0.067549 | 0 | 0.015194 | 0.285485 | 2,487 | 72 | 113 | 34.541667 | 0.792909 | 0.184158 | 0 | 0.24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.04 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf677d8bfffcaf593d5e10ff7108b260a1cb5b41 | 2,478 | py | Python | pandoc-wrapfig.py | nsheff/pandoc-wrapfig | d4523cf43ebab47024d7efde27d7ccddfd983d2f | [
"MIT"
] | null | null | null | pandoc-wrapfig.py | nsheff/pandoc-wrapfig | d4523cf43ebab47024d7efde27d7ccddfd983d2f | [
"MIT"
] | null | null | null | pandoc-wrapfig.py | nsheff/pandoc-wrapfig | d4523cf43ebab47024d7efde27d7ccddfd983d2f | [
"MIT"
] | 1 | 2020-08-11T18:35:53.000Z | 2020-08-11T18:35:53.000Z | #! /usr/bin/env python
# -*- coding: utf-8 -*-
"""Pandoc filter to allow variable wrapping of LaTeX/pdf documents
through the wrapfig package.
Simply add a " {?}" tag to the end of the caption for the figure, where
? is a number giving the width of the wrap in inches, with up to one
decimal place (e.g. "{3}" or "{3.5}"). 0 will cause the width of the
figure to be used.
"""
from pandocfilters import toJSONFilter, Image, RawInline, stringify, Div, RawBlock
import re, sys
FLAG_PAT = re.compile('.*\{(\d+\.?\d?)\}')
def html(x):
return RawBlock('html', x)
def wrapfig(key, val, fmt, meta):
# if key == "Div":
# sys.stderr.write(key)
# # join(str(x) for x in caption)
# [[ident, classes, kvs], contents] = val
# newcontents = [html('<dt>Theorem ' + str("hello") + '</dt>'),
# html('<dd>')] + contents + [html('</dd>')]
# return Div([ident, classes, kvs], newcontents)
if key == 'Latex':
sys.stderr.write(key)
if key == 'Image':
attrs, caption, target = val
if fmt == 'markdown' or fmt == 'html':
return [Image(attrs, caption, target)] + \
[RawInline(fmt, "<span class='caption'>")] + caption + [RawInline(fmt, "</span>")]
if FLAG_PAT.match(stringify(caption)):
# Strip tag
size = FLAG_PAT.match(caption[-1]['c']).group(1)
stripped_caption = caption[:-2]
# sys.stderr.write(caption[:-2])
if fmt == 'latex':
latex_begin = r'\setlength{\intextsep}{2pt}\setlength{\columnsep}{8pt}\begin{wrapfigure}{R}{' + size + 'in}'
if len(stripped_caption) > 0:
latex_fig = r'\centering\includegraphics{' + target[0] \
+ '}\caption{'
latex_end = r'}\vspace{-5pt}\end{wrapfigure}'
return [RawInline(fmt, latex_begin + latex_fig)] \
+ stripped_caption + [RawInline(fmt, latex_end)]
else:
latex_fig = r'\centering\includegraphics{' + target[0] \
+ '}'
latex_end = r'\end{wrapfigure}'
return [RawInline(fmt, latex_begin + latex_fig)] \
+ [RawInline(fmt, latex_end)]
else:
return Image(attrs, stripped_caption, target)
if __name__ == '__main__':
toJSONFilter(wrapfig)
sys.stdout.flush() # Should fix issue #1 (pipe error)
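# Typical invocation (file names are examples):
#   pandoc input.md --filter ./pandoc-wrapfig.py -o output.pdf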
| 38.71875 | 124 | 0.536723 | 280 | 2,478 | 4.657143 | 0.421429 | 0.055215 | 0.052147 | 0.019939 | 0.173313 | 0.136503 | 0.136503 | 0.075153 | 0.075153 | 0 | 0 | 0.007652 | 0.314366 | 2,478 | 63 | 125 | 39.333333 | 0.759859 | 0.274415 | 0 | 0.171429 | 0 | 0 | 0.155318 | 0.090039 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.057143 | 0.028571 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf697a286088c58c3db9ead0e8a7c5dfcff5c956 | 3,999 | py | Python | las2vola.py | moloned/volumetric_accelerator_toolkit | 8f5cf226a7d788e4dd4215c181db49d9568c6240 | [
"Apache-2.0"
] | 6 | 2019-02-11T14:32:23.000Z | 2021-12-07T09:49:41.000Z | las2vola.py | moloned/volumetric_accelerator_toolkit | 8f5cf226a7d788e4dd4215c181db49d9568c6240 | [
"Apache-2.0"
] | null | null | null | las2vola.py | moloned/volumetric_accelerator_toolkit | 8f5cf226a7d788e4dd4215c181db49d9568c6240 | [
"Apache-2.0"
] | 2 | 2018-10-11T17:29:37.000Z | 2021-09-08T12:01:40.000Z | #!/usr/bin/env python3
"""
Las2vola: Converts Las files into VOLA format.
The ISPRS las format is the standard for LIDAR devices and stores information
on the points obtained. This parser uses the las information
for the nbit per voxel representation. The data stored is: color, height,
number of returns, intensity and classification
@author Jonathan Byrne & Anton Shmatov
@copyright 2018 Intel Ltd (see LICENSE file).
"""
from __future__ import print_function
import glob
import os
import numpy as np
import binutils as bu
from laspy import file as lasfile
from laspy.util import LaspyException
from volatree import VolaTree
def main():
"""Read the file, build the tree. Write a Binary."""
start_time = bu.timer()
parser = bu.parser_args("*.las / *.laz")
args = parser.parse_args()
# Parse directories or filenames, whichever you want!
if os.path.isdir(args.input):
filenames = glob.glob(os.path.join(args.input, '*.laz'))
filenames.extend(glob.glob(os.path.join(args.input, '*.las')))
else:
filenames = glob.glob(args.input)
print("processing: ", ' '.join(filenames))
for filename in filenames:
if args.dense:
outfilename = bu.sub(filename, "dvol")
else:
outfilename = bu.sub(filename, "vol")
if os.path.isfile(outfilename):
print("File already exists!")
continue
print("converting", filename, "to", outfilename)
bbox, points, pointsdata = parse_las(filename, args.nbits)
# work out how many chunks are required for the data
if args.nbits:
print("nbits set, adding metadata to occupancy grid")
div, mod = divmod(len(pointsdata[0]), 8)
if mod > 0:
nbits = div + 1
else:
nbits = div
else:
print("Only occupancy data being set! Use -n flag to add metadata")
nbits = 0
if len(points) > 0:
volatree = VolaTree(args.depth, bbox, args.crs,
args.dense, nbits)
volatree.cubify(points, pointsdata)
volatree.writebin(outfilename)
bu.print_ratio(filename, outfilename)
else:
print("The las file is empty!")
bu.timer(start_time)
def parse_las(filename, nbits):
"""Read las format point data and return header and points."""
pointfile = lasfile.File(filename, mode='r')
header = pointfile.header
maxheight = header.max[2]
points = np.array((pointfile.x, pointfile.y, pointfile.z)).transpose() # get all points, change matrix orientation
    pointsdata = np.zeros((len(pointfile), 7), dtype=np.int64)
if nbits > 0: # if want to set other data, find in matrices
try:
red = pointfile.red
except LaspyException:
red = [0] * len(points)
try:
green = pointfile.green
except LaspyException:
green = [0] * len(points)
try:
blue = pointfile.blue
except LaspyException:
blue = [0] * len(points)
coldata = np.int64(np.array([red, green, blue]).transpose() / 256)
scaleddata = np.array([pointfile.get_z(), pointfile.get_num_returns(),
pointfile.intensity, pointfile.raw_classification], dtype='int64').transpose()
min = np.array([0, 1, 0, 0])
max = np.array([maxheight, 7, 1000, 31])
normdata = np.int64(bu.normalize_np(scaleddata, min, max) * 255)
coldata[(coldata[:, 0] == 0) & (coldata[:, 1] == 0) &
(coldata[:, 2] == 0)] = 200 # if all three colours are 0, set to 200
pointsdata = np.concatenate([coldata, normdata], axis=1)
if len(points) == 0:
return [], [], None
bbox = [points.min(axis=0).tolist(), points.max(axis=0).tolist()]
if nbits:
return bbox, points, pointsdata
else:
return bbox, points, None
if __name__ == '__main__':
main()
| 32.778689 | 118 | 0.607652 | 499 | 3,999 | 4.819639 | 0.390782 | 0.018711 | 0.012474 | 0.011642 | 0.022453 | 0.022453 | 0.022453 | 0 | 0 | 0 | 0 | 0.020083 | 0.277819 | 3,999 | 121 | 119 | 33.049587 | 0.812673 | 0.186797 | 0 | 0.142857 | 0 | 0 | 0.065965 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02381 | false | 0 | 0.095238 | 0 | 0.154762 | 0.095238 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf6a926cdf026b6807d2fbef9356b946cbf88279 | 2,871 | py | Python | pipeline/test_users.py | streamsets/datacollector-tests-external | 6f255b5e7496deeef333b57a5e9df4911ba3ef00 | [
"Apache-2.0"
] | 1 | 2020-04-14T03:01:51.000Z | 2020-04-14T03:01:51.000Z | pipeline/test_users.py | streamsets/test | 1ead70179ee92a4acd9cfaa33c56a5a9e233bf3d | [
"Apache-2.0"
] | 1 | 2019-04-24T11:06:38.000Z | 2019-04-24T11:06:38.000Z | pipeline/test_users.py | anubandhan/datacollector-tests | 301c024c66d68353735256b262b681dd05ba16cc | [
"Apache-2.0"
] | 2 | 2019-05-24T06:34:37.000Z | 2020-03-30T11:48:18.000Z | # Copyright 2017 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pytest
from streamsets.testframework import sdc
logger = logging.getLogger(__name__)
@pytest.fixture(scope='module')
def sdc_common_hook():
def hook(data_collector):
data_collector.add_user('jarcec', roles=['admin'], groups=['jarcec', 'employee'])
data_collector.add_user('dima', roles=['admin'], groups=['dima', 'employee'])
data_collector.add_user('bryan', roles=['manager', 'creator'], groups=['bryan', 'contractor'])
data_collector.add_user('arvind', roles=['guest'], groups=['arvind', 'guests'])
return hook
@pytest.fixture(scope='module')
def pipeline(sdc_executor):
builder = sdc_executor.get_pipeline_builder()
dev_data_generator = builder.add_stage('Dev Data Generator')
trash = builder.add_stage('Trash')
dev_data_generator >> trash
pipeline = builder.build()
sdc_executor.set_user('admin')
sdc_executor.add_pipeline(pipeline)
yield pipeline
# Validate "current" user switching and getting the proper groups and roles.
def test_current_user(sdc_executor):
sdc_executor.set_user('admin')
user = sdc_executor.current_user
assert user.name == 'admin'
sdc_executor.set_user('jarcec')
user = sdc_executor.current_user
assert user.name == 'jarcec'
assert user.groups == ['all', 'jarcec', 'employee']
assert user.roles == ['admin']
# Ensure that the operations are indeed executed by the current user.
def test_pipeline_history(sdc_executor, pipeline):
sdc_executor.set_user('jarcec')
sdc_executor.start_pipeline(pipeline)
sdc_executor.set_user('dima')
sdc_executor.stop_pipeline(pipeline)
history = sdc_executor.get_pipeline_history(pipeline)
# History is in descending order.
entry = history.entries[0]
assert entry['user'] == 'dima'
assert entry['status'] == 'STOPPED'
entry = history.entries[1]
assert entry['user'] == 'dima'
assert entry['status'] == 'STOPPING'
entry = history.entries[2]
assert entry['user'] == 'jarcec'
assert entry['status'] == 'RUNNING'
entry = history.entries[3]
assert entry['user'] == 'jarcec'
assert entry['status'] == 'STARTING'
entry = history.entries[4]
assert entry['user'] == 'admin'
assert entry['status'] == 'EDITED'
| 30.542553 | 102 | 0.703239 | 370 | 2,871 | 5.310811 | 0.37027 | 0.083969 | 0.035623 | 0.045802 | 0.23715 | 0.116031 | 0.116031 | 0.040712 | 0 | 0 | 0 | 0.005469 | 0.172065 | 2,871 | 93 | 103 | 30.870968 | 0.821203 | 0.253222 | 0 | 0.230769 | 0 | 0 | 0.144805 | 0 | 0 | 0 | 0 | 0 | 0.269231 | 1 | 0.096154 | false | 0 | 0.057692 | 0 | 0.173077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf6af0cf676fc11ed879ddf07c27b61f75d1ae0d | 1,107 | py | Python | email_client/email_send.py | geeksLabTech/email-client | 0f533f7b33c38d74aec8663ccc6d8116e0a2489d | [
"MIT"
] | 1 | 2021-09-06T16:43:37.000Z | 2021-09-06T16:43:37.000Z | email_client/email_send.py | geeksLabTech/email-client | 0f533f7b33c38d74aec8663ccc6d8116e0a2489d | [
"MIT"
] | null | null | null | email_client/email_send.py | geeksLabTech/email-client | 0f533f7b33c38d74aec8663ccc6d8116e0a2489d | [
"MIT"
] | 2 | 2020-09-13T02:25:50.000Z | 2021-01-06T17:25:38.000Z | import smtplib
from tools.errors import LoginException
from tools.read_config import read_config
def send_mail(sender: str, pwd: str, to: str, subject: str, text: str):
# Read the email config file
config = read_config('./config/config_email.json')
# create connection with the smtp server
smtpserver = smtplib.SMTP_SSL(host=config['smtp_host'], port=config['smtp_port'])
# send enhaced HELO to the server to identify with the server
smtpserver.ehlo()
# login in the server with the credentials given
    try:
        smtpserver.login(sender, pwd)
    except smtplib.SMTPAuthenticationError as e:
        # surface SMTP authentication failures as the project's LoginException
        raise LoginException(str(e))
else:
# create the email
msg = 'Subject:'+subject+'\n\n'+text
# send the email
smtpserver.sendmail(sender, to, msg)
# close connection
smtpserver.close()
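# A minimal usage sketch (addresses and password are placeholders; SMTP host
# and port are read from ./config/config_email.json):
#
#   send_mail(sender='me@example.com', pwd='app-password',
#             to='you@example.com', subject='Hello', text='Hi there!')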
| 42.576923 | 91 | 0.515808 | 109 | 1,107 | 5.165138 | 0.422018 | 0.053286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.417344 | 1,107 | 25 | 92 | 44.28 | 0.872868 | 0.200542 | 0 | 0 | 0 | 0 | 0.063854 | 0.029647 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.2 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf70a281c3c891880251c2d76efe8ac3eb44248a | 1,860 | py | Python | spongeauth/api/tests/test_delete_user.py | felixoi/SpongeAuth | d44ee52d0b35b2e1909c7bf6bad29aa7b4835b26 | [
"MIT"
] | 10 | 2016-11-18T12:37:24.000Z | 2022-03-04T09:25:25.000Z | spongeauth/api/tests/test_delete_user.py | felixoi/SpongeAuth | d44ee52d0b35b2e1909c7bf6bad29aa7b4835b26 | [
"MIT"
] | 794 | 2016-11-19T18:34:37.000Z | 2022-03-31T16:49:11.000Z | spongeauth/api/tests/test_delete_user.py | PowerNukkit/OreAuth | 96a2926c9601fce6fac471bdb997077f07e8bf9a | [
"MIT"
] | 11 | 2016-11-26T22:30:17.000Z | 2022-03-16T17:20:14.000Z | import urllib.parse
import django.shortcuts
import pytest
import faker
import accounts.tests.factories
import api.models
@pytest.fixture
def fake():
return faker.Faker()
def _make_path(data):
return "{}?{}".format(django.shortcuts.reverse("api:users-list"), urllib.parse.urlencode(data))
@pytest.mark.django_db
def test_invalid_api_key(client, fake):
assert not api.models.APIKey.objects.exists()
resp = client.delete(_make_path({"apiKey": "foobar", "username": fake.user_name()}))
assert resp.status_code == 403
@pytest.mark.django_db
def test_works(client):
api.models.APIKey.objects.create(key="foobar")
assert not accounts.models.User.objects.exists()
user = accounts.tests.factories.UserFactory.create()
assert user.deleted_at is None
assert user.is_active
resp = client.delete(_make_path({"apiKey": "foobar", "username": user.username}))
assert resp.status_code == 200
# check database
user = accounts.models.User.objects.get(id=user.id)
assert user.deleted_at is not None
assert not user.is_active
# check response
data = resp.json()
assert data["id"] == user.id
assert data["username"] == user.username
assert data["email"] == user.email
assert "avatar_url" in data
@pytest.mark.django_db
def test_not_existing(client, fake):
api.models.APIKey.objects.create(key="foobar")
resp = client.delete(_make_path({"apiKey": "foobar", "username": fake.user_name()}))
assert resp.status_code == 404
@pytest.mark.django_db
def test_deleted(client, fake):
api.models.APIKey.objects.create(key="foobar")
user = accounts.tests.factories.UserFactory.create(deleted_at=fake.date_time_this_century(), is_active=False)
resp = client.delete(_make_path({"apiKey": "foobar", "username": user.username}))
assert resp.status_code == 404
| 26.956522 | 113 | 0.716129 | 255 | 1,860 | 5.078431 | 0.27451 | 0.034749 | 0.049421 | 0.055598 | 0.52278 | 0.485714 | 0.380695 | 0.307336 | 0.307336 | 0.234749 | 0 | 0.007561 | 0.146774 | 1,860 | 68 | 114 | 27.352941 | 0.808444 | 0.015591 | 0 | 0.295455 | 0 | 0 | 0.077681 | 0 | 0 | 0 | 0 | 0 | 0.318182 | 1 | 0.136364 | false | 0 | 0.136364 | 0.045455 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf70d57cf63af1b7800f864d1cbbd1296009fe92 | 2,091 | py | Python | tests/rw_all.py | clayne/retrowrite | 117dad525114bca695317e14affffd4e3de13cce | [
"MIT"
] | 478 | 2019-06-19T09:33:50.000Z | 2022-03-25T09:34:24.000Z | tests/rw_all.py | clayne/retrowrite | 117dad525114bca695317e14affffd4e3de13cce | [
"MIT"
] | 30 | 2019-07-12T09:38:43.000Z | 2022-03-28T04:53:31.000Z | tests/rw_all.py | clayne/retrowrite | 117dad525114bca695317e14affffd4e3de13cce | [
"MIT"
] | 62 | 2019-06-25T16:41:04.000Z | 2022-02-22T15:47:35.000Z | import argparse
import json
import subprocess
import os
from multiprocessing import Pool
def do_test(cmd):
print("[!] Running on {}".format(cmd))
try:
subprocess.check_call(cmd, shell=True)
except subprocess.CalledProcessError:
print("[x] Failed {}".format(cmd))
def do_tests(tests, filter, args, outdir):
assert not (args.ddbg and args.parallel)
pool = Pool()
for test in tests:
if not filter(test):
continue
path = test["path"]
binp = os.path.join(path, test["name"])
outp = os.path.join(outdir, test["name"] + ".s")
if args.ddbg:
outp = os.path.join(outdir, test["name"] + "_asan")
cmd = "python -m debug.ddbg {} {}".format(binp, outp)
elif args.asan:
outp = os.path.join(outdir, test["name"] + "_asan")
cmd = "retrowrite --asan {} {}".format(binp, outp)
else:
cmd = "python -m librw.rw {} {}".format(binp, outp)
if args.parallel:
pool.apply_async(do_test, args=(cmd, ))
else:
do_test(cmd)
pool.close()
pool.join()
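# The test file is assumed to be a JSON list of objects with "name" and
# "path" keys (inferred from how `do_tests` reads each entry), e.g.:
#
#   [
#     {"name": "bzip2", "path": "/path/to/binaries"},
#     {"name": "gzip",  "path": "/path/to/binaries"}
#   ]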
if __name__ == "__main__":
argp = argparse.ArgumentParser()
argp.add_argument("test_file", type=str, help="JSON file containing tests")
argp.add_argument(
"--targets",
type=str,
help="Only test build target, comma separated string of names")
argp.add_argument(
"--asan",
action='store_true',
help="Instrument with asan")
argp.add_argument(
"--ddbg",
action='store_true',
help="Do delta debugging")
argp.add_argument(
"--parallel",
action='store_true',
help="Do multiple tests in parallel")
args = argp.parse_args()
filter = lambda x: True
if args.targets:
filter = lambda x: x["name"] in args.targets.split(",")
args.testfile = os.path.abspath(args.test_file)
outdir = os.path.dirname(args.test_file)
with open(args.test_file) as tfd:
do_tests(json.load(tfd), filter, args, outdir)
| 27.155844 | 79 | 0.583931 | 261 | 2,091 | 4.563218 | 0.35249 | 0.030227 | 0.062972 | 0.035264 | 0.117548 | 0.082284 | 0.082284 | 0.058774 | 0.058774 | 0 | 0 | 0 | 0.275466 | 2,091 | 76 | 80 | 27.513158 | 0.786139 | 0 | 0 | 0.180328 | 0 | 0 | 0.175036 | 0 | 0 | 0 | 0 | 0 | 0.016393 | 1 | 0.032787 | false | 0 | 0.081967 | 0 | 0.114754 | 0.032787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf73010efaaefc559ce2e5d857ca0b89c2eb9c35 | 2,753 | py | Python | tests/conftest.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | tests/conftest.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | tests/conftest.py | Nonse/monkeys | 93681edf18126cc49858992f80df25a7cff931e8 | [
"MIT"
] | null | null | null | import os
import pytest
import random
import config
from monkeygod import create_app, models
from monkeygod.models import db as _db
TEST_DATABASE_URI = 'postgresql://postgres:postgres@localhost/test_monkeydb'
# Adapted from http://goo.gl/KXDq2p
@pytest.fixture(scope='session')
def app(request):
"""Session-wide test `Flask` application."""
config.TESTING = True
config.SQLALCHEMY_DATABASE_URI = TEST_DATABASE_URI
config.CSRF_ENABLED = False
config.WTF_CSRF_ENABLED = False
app = create_app(config)
# Establish an application context before running the tests.
context = app.app_context()
context.push()
def teardown():
context.pop()
request.addfinalizer(teardown)
return app
@pytest.fixture(scope='session')
def db(app, request):
"""Session-wide test database."""
def teardown():
_db.drop_all()
_db.app = app
_db.create_all()
request.addfinalizer(teardown)
return _db
@pytest.fixture(scope='function')
def session(db, request):
"""Creates a new database session for a test."""
connection = db.engine.connect()
transaction = connection.begin()
options = dict(bind=connection, binds={})
session = db.create_scoped_session(options=options)
db.session = session
def teardown():
transaction.rollback()
connection.close()
session.remove()
request.addfinalizer(teardown)
return session
@pytest.fixture(scope='function')
def testdata(session, request):
monkeys = []
for i in range(20):
monkeys.append(
models.Monkey(
name='monkey{}'.format(i+1),
age=random.randint(0, 20),
email='monkey{}@example.com'.format(i+1)
)
)
session.add_all(monkeys)
session.commit()
def teardown():
for monkey in monkeys:
session.delete(monkey)
session.commit()
request.addfinalizer(teardown)
@pytest.fixture(scope='function')
def testdata_with_friends(session, testdata, request):
monkeys = models.Monkey.query.all()
for monkey in monkeys:
friends = random.sample(monkeys, random.randint(0, 20))
for friend in friends:
if random.randint(0, 5) == 0:
monkey.add_best_friend(friend)
else:
monkey.add_friend(friend)
session.add_all(monkeys)
session.commit()
@pytest.fixture(scope='function')
def testdata_with_many_friends(session, testdata, request):
monkeys = models.Monkey.query.all()
for monkey in monkeys:
friends = random.sample(monkeys, 20)
for friend in friends:
monkey.add_friend(friend)
session.add_all(monkeys)
session.commit()
| 24.149123 | 76 | 0.65129 | 322 | 2,753 | 5.462733 | 0.313665 | 0.044343 | 0.061399 | 0.059125 | 0.361001 | 0.261512 | 0.221717 | 0.175099 | 0.175099 | 0.175099 | 0 | 0.007615 | 0.236833 | 2,753 | 113 | 77 | 24.362832 | 0.829605 | 0.073738 | 0 | 0.375 | 0 | 0 | 0.050533 | 0.021319 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.075 | 0 | 0.2375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf73290c5bcbebb20fd5e98add009b993c971061 | 8,610 | py | Python | src/classifier.py | WattSocialBot/ijcnlp2017-customer-feedback | 2dccdcfaf26df832343dbb76b1e31a094c578c0e | [
"MIT"
] | 17 | 2017-10-27T20:48:38.000Z | 2020-03-16T15:05:47.000Z | src/classifier.py | WattSocialBot/ijcnlp2017-customer-feedback | 2dccdcfaf26df832343dbb76b1e31a094c578c0e | [
"MIT"
] | null | null | null | src/classifier.py | WattSocialBot/ijcnlp2017-customer-feedback | 2dccdcfaf26df832343dbb76b1e31a094c578c0e | [
"MIT"
] | 3 | 2017-10-28T15:34:26.000Z | 2020-03-09T13:56:40.000Z | __author__ = "bplank"
import argparse
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, f1_score
from sklearn.preprocessing import LabelEncoder, StandardScaler, MinMaxScaler
import numpy as np
import random
import seaborn as sn
import matplotlib.pyplot as plt
import pandas as pd
import os
from myutils import ItemSelector, DateStats, MeanEmbedding
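# `ItemSelector` is assumed to follow the common scikit-learn pattern of a
# transformer that picks one column out of a DataFrame, roughly:
#
#   class ItemSelector(BaseEstimator, TransformerMixin):
#       def __init__(self, key): self.key = key
#       def fit(self, x, y=None): return self
#       def transform(self, data): return data[self.key]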
seed=103
random.seed(seed)
np.random.seed(seed)
# parse command line options
parser = argparse.ArgumentParser(description="""Simple SVM classifier using various kinds of features (cf. Plank, 2017)""")
parser.add_argument("train", help="train model on a file")
parser.add_argument("test", help="test model on a file")
parser.add_argument("--lang", help="language", default="en")
parser.add_argument("--output", help="output predictions", required=False,action="store_true")
parser.add_argument("--C", help="parameter C for regularization (higher: regularize less)", required=False, default=10, type=float)
parser.add_argument("--num-components", help="svd components", default=40, type=int)
parser.add_argument("--print-confusion-matrix", help="show confusion matrix", action="store_true", default=False)
parser.add_argument("--features", help="feature set", choices=("words","chars","words+chars","embeds", "chars+embeds", "all","all+pos", "chars+embeds+pos"), default="chars+embeds")
args = parser.parse_args()
## read input data
print("load data..")
# using pandas dataframe
df_train = pd.read_csv(args.train)
df_dev = pd.read_csv(args.test)
X_train, y_train = df_train['texts'], df_train['labels']
X_dev, y_dev = df_dev['texts'], df_dev['labels']
labEnc = LabelEncoder()
y_train = labEnc.fit_transform(y_train)
y_dev = labEnc.transform(y_dev)
print("#train instances: {} #dev: {}".format(len(X_train),len(X_dev)))
print("Labels:", labEnc.classes_)
print("vectorize data..")
#algo = LogisticRegression(solver='lbfgs', C=args.C)
algo = LinearSVC(C=args.C)
# tfidf was slightly better than countvectorizer
vectorizerChars = TfidfVectorizer(analyzer='char', ngram_range=(3, 10), binary=True)
vectorizerWords = TfidfVectorizer(ngram_range=(1,2), analyzer='word', binary=True)
vectorizerPos = TfidfVectorizer(ngram_range=(1,3), analyzer='word', binary=True)
if "+" in args.lang:
embSelector = ItemSelector(key='textsPrefix')
else:
embSelector = ItemSelector(key='texts')
if args.features == "words":
features = FeatureUnion([
('words', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerWords),
]))
])
elif args.features == "chars":
features = FeatureUnion([
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
])
elif args.features == "words+chars":
features = FeatureUnion([
# ('words', vectorizerWords),
#('chars', vectorizerChars),
('words', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerWords),
]))
,
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
])
elif args.features == "embeds":
features = FeatureUnion([
('embeds', Pipeline([
('selector', embSelector),
('mean_emb', MeanEmbedding(args.lang)),
('scaler', MinMaxScaler()),
# ('standardscaler', StandardScaler()),
]))
])
elif args.features == "chars+embeds": # is the all-in-1 model
features = FeatureUnion([
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
,
('embeds', Pipeline([
('selector', embSelector),
('mean_emb', MeanEmbedding(args.lang)),
('scaler', MinMaxScaler()),
]))
])
elif args.features == "all":
features = FeatureUnion([
('words', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerWords),
]))
,
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
,
('embeds', Pipeline([
('selector', embSelector),
('mean_emb', MeanEmbedding(args.lang)),
('scaler', MinMaxScaler()),
]))
])
elif args.features == "all+pos":
features = FeatureUnion([
('words', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerWords),
]))
,
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
,
('pos', Pipeline([
('selector', ItemSelector(key='pos')),
('tfidf', vectorizerPos),
]))
,
('embeds', Pipeline([
('selector', embSelector),
('mean_emb', MeanEmbedding(args.lang)),
('scaler', MinMaxScaler()),
]))
])
elif args.features == "chars+embeds+pos":
features = FeatureUnion([
('chars', Pipeline([
('selector', ItemSelector(key='texts')),
('tfidf', vectorizerChars),
]))
,
('pos', Pipeline([
('selector', ItemSelector(key='pos')),
('tfidf', vectorizerPos),
]))
,
('embeds', Pipeline([
('selector', embSelector),
('mean_emb', MeanEmbedding(args.lang)),
('scaler', MinMaxScaler()),
]))
])
classifier = Pipeline([
('features', features),
('clf', algo)])
print("train model..")
tune=0
debug=0
if tune:
from sklearn.model_selection import GridSearchCV
    param_grid = {'clf__C': [0.01, 0.02, 0.05, 0.1, 0.5, 1, 2, 5, 10, 100, 1000]}
grid_search = GridSearchCV(classifier, param_grid, cv=5)
grid_search.fit(X_train, y_train)
y_predicted_dev = grid_search.predict(X_dev)
y_predicted_train = grid_search.predict(X_train)
print("dev: ", accuracy_score(y_dev, y_predicted_dev))
print("train: ", accuracy_score(y_train, y_predicted_train))
print("best:", grid_search.best_params_)
print("best score:", grid_search.best_score_)
else:
y_train = df_train['labels']
y_dev = df_dev['labels']
classifier.fit(df_train, y_train)
y_predicted_dev = classifier.predict(df_dev)
y_predicted_train = classifier.predict(df_train)
if debug:
from scipy import stats
# access weight vectors
for weights in classifier.named_steps['clf'].coef_:
print(weights.shape)
print(stats.describe(weights))
if args.output:
# write output
OUT = open("predictions2/"+os.path.basename(args.test)+"."+os.path.basename(args.train)+"pred.out","w")
sentence_ids = df_dev['sentence_ids'].values
org_dev = df_dev['original_texts'].values
for i, y_pred in enumerate(y_predicted_dev):
sent_id = sentence_ids[i]
text = org_dev[i]
OUT.write("{}\t{}\t{}\n".format(sent_id, text, y_pred))
OUT.close()
###
accuracy_dev = accuracy_score(y_dev, y_predicted_dev)
accuracy_train = accuracy_score(y_train, y_predicted_train)
print("Classifier accuracy train: {0:.2f}".format(accuracy_train*100))
print("===== dev set ====")
print("Classifier: {0:.2f}".format(accuracy_dev*100))
mat = confusion_matrix(y_dev, y_predicted_dev)
if args.print_confusion_matrix:
sn.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=labEnc.classes_, yticklabels=labEnc.classes_)
plt.xlabel('true label')
plt.ylabel('predicted label')
plt.show()
print(classification_report(y_dev, y_predicted_dev, target_names=labEnc.classes_, digits=3))
f1_dev = f1_score(y_dev, y_predicted_dev, average="weighted")
print("weighted f1: {0:.1f}".format(f1_dev*100))
## end
| 30.316901 | 180 | 0.589315 | 901 | 8,610 | 5.477248 | 0.256382 | 0.055117 | 0.068085 | 0.07538 | 0.339412 | 0.324823 | 0.310638 | 0.298886 | 0.285512 | 0.26768 | 0 | 0.010678 | 0.260395 | 8,610 | 283 | 181 | 30.424028 | 0.76429 | 0.03856 | 0 | 0.446078 | 0 | 0 | 0.15339 | 0.002906 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.073529 | 0 | 0.073529 | 0.088235 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf73e7f195ff23cb66846fa6c6da7d28660538de | 20,029 | py | Python | scripts/parser/oldslavdep.py | npedrazzini/jPTDPEarlySlavic | de9d3fa720fb86acadafc923d85473ae3371903f | [
"MIT"
] | 6 | 2021-08-20T20:00:31.000Z | 2022-01-03T15:43:50.000Z | scripts/parser/oldslavdep.py | npedrazzini/jPTDPEarlySlavic | de9d3fa720fb86acadafc923d85473ae3371903f | [
"MIT"
] | 1 | 2021-07-30T13:07:36.000Z | 2021-07-30T13:07:36.000Z | scripts/parser/oldslavdep.py | npedrazzini/jPTDPEarlySlavic | de9d3fa720fb86acadafc923d85473ae3371903f | [
"MIT"
] | 1 | 2021-01-23T20:00:25.000Z | 2021-01-23T20:00:25.000Z | # coding=utf-8
from __future__ import absolute_import, division, print_function, unicode_literals
from builtins import str
from io import open
from dynet import *
import dynet
from utils import read_conll, read_conll_predict, write_conll, load_embeddings_file
from operator import itemgetter
import utils, time, random, decoder
import numpy as np
from mnnl import FFSequencePredictor, Layer, RNNSequencePredictor, BiRNNSequencePredictor
class OldSlavDep:
def __init__(self, vocab, pos, rels, w2i, c2i, options):
self.model = ParameterCollection()
random.seed(1)
self.trainer = RMSPropTrainer(self.model)
#if options.learning_rate is not None: #Uncomment if model is used to train new parser or update OldSlavNet
# self.trainer = RMSPropTrainer(self.model, options.learning_rate)
#print("RMSPropTrainer initial learning rate:", options.learning_rate)
self.activations = {'tanh': tanh,
'sigmoid': logistic,
'relu': rectify,
'tanh3': (lambda x: tanh(cwise_multiply(cwise_multiply(x, x), x)))
}
self.activation = self.activations[options.activation]
self.blstmFlag = options.blstmFlag
self.labelsFlag = options.labelsFlag
self.costaugFlag = options.costaugFlag
self.bibiFlag = options.bibiFlag
self.ldims = options.lstm_dims #because it is a bi-lstm (NP)
self.wdims = options.wembedding_dims
self.cdims = options.cembedding_dims
self.layers = options.lstm_layers
self.wordsCount = vocab
self.vocab = {word: ind + 3 for word, ind in w2i.items()}
self.pos = {word: ind for ind, word in enumerate(pos)}
self.id2pos = {ind: word for ind, word in enumerate(pos)}
self.c2i = c2i
self.rels = {word: ind for ind, word in enumerate(rels)}
self.irels = rels
self.pdims = options.pembedding_dims
self.vocab['*PAD*'] = 1
self.vocab['*INITIAL*'] = 2
self.wlookup = self.model.add_lookup_parameters((len(vocab) + 3, self.wdims))
self.clookup = self.model.add_lookup_parameters((len(c2i), self.cdims))
self.plookup = self.model.add_lookup_parameters((len(pos), self.pdims))
if options.external_embedding is not None:
ext_embeddings, ext_emb_dim = load_embeddings_file(options.external_embedding, lower=True)
assert (ext_emb_dim == self.wdims)
print("Initializing word embeddings by pre-trained vectors")
count = 0
for word in self.vocab:
_word = str(word, "utf-8")
if _word in ext_embeddings:
count += 1
self.wlookup.init_row(self.vocab[word], ext_embeddings[_word])
print(("Vocab size: %d; #words having pretrained vectors: %d" % (len(self.vocab), count)))
self.pos_builders = [VanillaLSTMBuilder(1, self.wdims + self.cdims * 2, self.ldims, self.model),
VanillaLSTMBuilder(1, self.wdims + self.cdims * 2, self.ldims, self.model)]
self.pos_bbuilders = [VanillaLSTMBuilder(1, self.ldims * 2, self.ldims, self.model),
VanillaLSTMBuilder(1, self.ldims * 2, self.ldims, self.model)]
if self.bibiFlag:
self.builders = [VanillaLSTMBuilder(1, self.wdims + self.cdims * 2 + self.pdims, self.ldims, self.model),
VanillaLSTMBuilder(1, self.wdims + self.cdims * 2 + self.pdims, self.ldims, self.model)]
self.bbuilders = [VanillaLSTMBuilder(1, self.ldims * 2, self.ldims, self.model),
VanillaLSTMBuilder(1, self.ldims * 2, self.ldims, self.model)]
elif self.layers > 0:
self.builders = [VanillaLSTMBuilder(self.layers, self.wdims + self.cdims * 2 + self.pdims, self.ldims, self.model),
VanillaLSTMBuilder(self.layers, self.wdims + self.cdims * 2 + self.pdims, self.ldims, self.model)]
else:
self.builders = [SimpleRNNBuilder(1, self.wdims + self.cdims * 2, self.ldims, self.model),
SimpleRNNBuilder(1, self.wdims + self.cdims * 2, self.ldims, self.model)]
self.ffSeqPredictor = FFSequencePredictor(Layer(self.model, self.ldims * 2, len(self.pos), softmax))
self.hidden_units = options.hidden_units
self.hidBias = self.model.add_parameters((self.ldims * 8))
self.hidLayer = self.model.add_parameters((self.hidden_units, self.ldims * 8))
self.hid2Bias = self.model.add_parameters((self.hidden_units))
self.outLayer = self.model.add_parameters((1, self.hidden_units if self.hidden_units > 0 else self.ldims * 8))
if self.labelsFlag:
self.rhidBias = self.model.add_parameters((self.ldims * 8))
self.rhidLayer = self.model.add_parameters((self.hidden_units, self.ldims * 8))
self.rhid2Bias = self.model.add_parameters((self.hidden_units))
self.routLayer = self.model.add_parameters(
(len(self.irels), self.hidden_units if self.hidden_units > 0 else self.ldims * 8))
self.routBias = self.model.add_parameters((len(self.irels)))
self.ffRelPredictor = FFSequencePredictor(
Layer(self.model, self.hidden_units if self.hidden_units > 0 else self.ldims * 8, len(self.irels),
softmax))
self.char_rnn = RNNSequencePredictor(LSTMBuilder(1, self.cdims, self.cdims, self.model))
def __getExpr(self, sentence, i, j):
if sentence[i].headfov is None:
sentence[i].headfov = concatenate([sentence[i].lstms[0], sentence[i].lstms[1]])
if sentence[j].modfov is None:
sentence[j].modfov = concatenate([sentence[j].lstms[0], sentence[j].lstms[1]])
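        # pair features for (head i, modifier j): both BiLSTM states plus
        # their absolute difference and elementwise product, concatenated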
_inputVector = concatenate(
[sentence[i].headfov, sentence[j].modfov, dynet.abs(sentence[i].headfov - sentence[j].modfov),
dynet.cmult(sentence[i].headfov, sentence[j].modfov)])
if self.hidden_units > 0:
output = self.outLayer.expr() * self.activation(
self.hid2Bias.expr() + self.hidLayer.expr() * self.activation(
_inputVector + self.hidBias.expr()))
else:
output = self.outLayer.expr() * self.activation(_inputVector + self.hidBias.expr())
return output
def __evaluate(self, sentence):
exprs = [[self.__getExpr(sentence, i, j) for j in range(len(sentence))] for i in range(len(sentence))]
scores = np.array([[output.scalar_value() for output in exprsRow] for exprsRow in exprs])
return scores, exprs
def pick_neg_log(self, pred, gold):
return -dynet.log(dynet.pick(pred, gold))
def __getRelVector(self, sentence, i, j):
if sentence[i].rheadfov is None:
sentence[i].rheadfov = concatenate([sentence[i].lstms[0], sentence[i].lstms[1]])
if sentence[j].rmodfov is None:
sentence[j].rmodfov = concatenate([sentence[j].lstms[0], sentence[j].lstms[1]])
_outputVector = concatenate(
[sentence[i].rheadfov, sentence[j].rmodfov, abs(sentence[i].rheadfov - sentence[j].rmodfov),
cmult(sentence[i].rheadfov, sentence[j].rmodfov)])
if self.hidden_units > 0:
return self.rhid2Bias.expr() + self.rhidLayer.expr() * self.activation(
_outputVector + self.rhidBias.expr())
else:
return _outputVector
def Save(self, filename):
self.model.save(filename)
def Load(self, filename):
self.model.populate(filename)
def Predict(self, conll_path):
with open(conll_path) as conllFP:
for iSentence, sentence in enumerate(read_conll_predict(conllFP, self.c2i, self.wordsCount)):
conll_sentence = [entry for entry in sentence if isinstance(entry, utils.ConllEntry)]
for entry in conll_sentence:
wordvec = self.wlookup[int(self.vocab.get(entry.norm, 0))] if self.wdims > 0 else None
last_state = self.char_rnn.predict_sequence([self.clookup[c] for c in entry.idChars])[-1]
rev_last_state = self.char_rnn.predict_sequence([self.clookup[c] for c in reversed(entry.idChars)])[
-1]
                    entry.vec = concatenate([_f for _f in [wordvec, last_state, rev_last_state] if _f is not None])
entry.pos_lstms = [entry.vec, entry.vec]
entry.headfov = None
entry.modfov = None
entry.rheadfov = None
entry.rmodfov = None
                # Predicted POS tags
lstm_forward = self.pos_builders[0].initial_state()
lstm_backward = self.pos_builders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
lstm_forward = lstm_forward.add_input(entry.vec)
lstm_backward = lstm_backward.add_input(rentry.vec)
entry.pos_lstms[1] = lstm_forward.output()
rentry.pos_lstms[0] = lstm_backward.output()
for entry in conll_sentence:
entry.pos_vec = concatenate(entry.pos_lstms)
blstm_forward = self.pos_bbuilders[0].initial_state()
blstm_backward = self.pos_bbuilders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
blstm_forward = blstm_forward.add_input(entry.pos_vec)
blstm_backward = blstm_backward.add_input(rentry.pos_vec)
entry.pos_lstms[1] = blstm_forward.output()
rentry.pos_lstms[0] = blstm_backward.output()
concat_layer = [concatenate(entry.pos_lstms) for entry in conll_sentence]
outputFFlayer = self.ffSeqPredictor.predict_sequence(concat_layer)
predicted_pos_indices = [np.argmax(o.value()) for o in outputFFlayer]
predicted_postags = [self.id2pos[idx] for idx in predicted_pos_indices]
# Add predicted pos tags for parsing prediction
for entry, posid in zip(conll_sentence, predicted_pos_indices):
entry.vec = concatenate([entry.vec, self.plookup[posid]])
entry.lstms = [entry.vec, entry.vec]
if self.blstmFlag:
lstm_forward = self.builders[0].initial_state()
lstm_backward = self.builders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
lstm_forward = lstm_forward.add_input(entry.vec)
lstm_backward = lstm_backward.add_input(rentry.vec)
entry.lstms[1] = lstm_forward.output()
rentry.lstms[0] = lstm_backward.output()
if self.bibiFlag:
for entry in conll_sentence:
entry.vec = concatenate(entry.lstms)
blstm_forward = self.bbuilders[0].initial_state()
blstm_backward = self.bbuilders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
blstm_forward = blstm_forward.add_input(entry.vec)
blstm_backward = blstm_backward.add_input(rentry.vec)
entry.lstms[1] = blstm_forward.output()
rentry.lstms[0] = blstm_backward.output()
scores, exprs = self.__evaluate(conll_sentence)
heads = decoder.parse_proj(scores)
                # If the decoder predicts multiple roots, attach each extra
                # root to the previous one so only the first remains a root.
rootCount = 0
rootWid = -1
for index, head in enumerate(heads):
if head == 0:
rootCount += 1
if rootCount == 1:
rootWid = index
if rootCount > 1:
heads[index] = rootWid
rootWid = index
for entry, head, pos in zip(conll_sentence, heads, predicted_postags):
entry.pred_parent_id = head
entry.pred_relation = '_'
entry.pred_pos = pos
dump = False
if self.labelsFlag:
concat_layer = [self.__getRelVector(conll_sentence, head, modifier + 1) for modifier, head in
enumerate(heads[1:])]
outputFFlayer = self.ffRelPredictor.predict_sequence(concat_layer)
predicted_rel_indices = [np.argmax(o.value()) for o in outputFFlayer]
predicted_rels = [self.irels[idx] for idx in predicted_rel_indices]
for modifier, head in enumerate(heads[1:]):
conll_sentence[modifier + 1].pred_relation = predicted_rels[modifier]
renew_cg()
if not dump:
yield sentence
def Train(self, conll_path):
eloss = 0.0
mloss = 0.0
eerrors = 0
etotal = 0
start = time.time()
with open(conll_path) as conllFP:
shuffledData = list(read_conll(conllFP, self.c2i))
random.shuffle(shuffledData)
errs = []
lerrs = []
posErrs = []
for iSentence, sentence in enumerate(shuffledData):
if iSentence % 500 == 0 and iSentence != 0:
print("Processing sentence number: %d" % iSentence, ", Loss: %.4f" % (
eloss / etotal), ", Time: %.2f" % (time.time() - start))
start = time.time()
eerrors = 0
eloss = 0.0
etotal = 0
conll_sentence = [entry for entry in sentence if isinstance(entry, utils.ConllEntry)]
for entry in conll_sentence:
c = float(self.wordsCount.get(entry.norm, 0))
dropFlag = (random.random() < (c / (0.25 + c)))
wordvec = self.wlookup[
int(self.vocab.get(entry.norm, 0)) if dropFlag else 0] if self.wdims > 0 else None
last_state = self.char_rnn.predict_sequence([self.clookup[c] for c in entry.idChars])[-1]
rev_last_state = self.char_rnn.predict_sequence([self.clookup[c] for c in reversed(entry.idChars)])[
-1]
                    entry.vec = dynet.dropout(concatenate([_f for _f in [wordvec, last_state, rev_last_state] if _f is not None]), 0.33)
entry.pos_lstms = [entry.vec, entry.vec]
entry.headfov = None
entry.modfov = None
entry.rheadfov = None
entry.rmodfov = None
                # POS tagging loss
lstm_forward = self.pos_builders[0].initial_state()
lstm_backward = self.pos_builders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
lstm_forward = lstm_forward.add_input(entry.vec)
lstm_backward = lstm_backward.add_input(rentry.vec)
entry.pos_lstms[1] = lstm_forward.output()
rentry.pos_lstms[0] = lstm_backward.output()
for entry in conll_sentence:
entry.pos_vec = concatenate(entry.pos_lstms)
blstm_forward = self.pos_bbuilders[0].initial_state()
blstm_backward = self.pos_bbuilders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
blstm_forward = blstm_forward.add_input(entry.pos_vec)
blstm_backward = blstm_backward.add_input(rentry.pos_vec)
entry.pos_lstms[1] = blstm_forward.output()
rentry.pos_lstms[0] = blstm_backward.output()
concat_layer = [dynet.dropout(concatenate(entry.pos_lstms), 0.33) for entry in conll_sentence]
outputFFlayer = self.ffSeqPredictor.predict_sequence(concat_layer)
posIDs = [self.pos.get(entry.pos) for entry in conll_sentence]
for pred, gold in zip(outputFFlayer, posIDs):
posErrs.append(self.pick_neg_log(pred, gold))
# Add predicted pos tags
for entry, poses in zip(conll_sentence, outputFFlayer):
entry.vec = concatenate([entry.vec, dynet.dropout(self.plookup[np.argmax(poses.value())], 0.33)])
entry.lstms = [entry.vec, entry.vec]
                # Parsing losses
if self.blstmFlag:
lstm_forward = self.builders[0].initial_state()
lstm_backward = self.builders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
lstm_forward = lstm_forward.add_input(entry.vec)
lstm_backward = lstm_backward.add_input(rentry.vec)
entry.lstms[1] = lstm_forward.output()
rentry.lstms[0] = lstm_backward.output()
if self.bibiFlag:
for entry in conll_sentence:
entry.vec = concatenate(entry.lstms)
blstm_forward = self.bbuilders[0].initial_state()
blstm_backward = self.bbuilders[1].initial_state()
for entry, rentry in zip(conll_sentence, reversed(conll_sentence)):
blstm_forward = blstm_forward.add_input(entry.vec)
blstm_backward = blstm_backward.add_input(rentry.vec)
entry.lstms[1] = blstm_forward.output()
rentry.lstms[0] = blstm_backward.output()
scores, exprs = self.__evaluate(conll_sentence)
gold = [entry.parent_id for entry in conll_sentence]
heads = decoder.parse_proj(scores, gold if self.costaugFlag else None)
if self.labelsFlag:
concat_layer = [dynet.dropout(self.__getRelVector(conll_sentence, head, modifier + 1), 0.33) for
modifier, head in enumerate(gold[1:])]
outputFFlayer = self.ffRelPredictor.predict_sequence(concat_layer)
relIDs = [self.rels[conll_sentence[modifier + 1].relation] for modifier, _ in enumerate(gold[1:])]
for pred, goldid in zip(outputFFlayer, relIDs):
lerrs.append(self.pick_neg_log(pred, goldid))
e = sum(1 for h, g in zip(heads[1:], gold[1:]) if h != g)
eerrors += e
if e > 0:
loss = [(exprs[h][i] - exprs[g][i]) for i, (h, g) in enumerate(zip(heads, gold)) if h != g] # * (1.0/e)
eloss += (e)
mloss += (e)
errs.extend(loss)
etotal += len(conll_sentence)
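                # `iSentence % 1 == 0` is always true, so the parameters are
                # updated after every sentence (effectively batch size 1).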
if iSentence % 1 == 0:
if len(errs) > 0 or len(lerrs) > 0 or len(posErrs) > 0:
eerrs = (esum(errs + lerrs + posErrs))
eerrs.scalar_value()
eerrs.backward()
self.trainer.update()
errs = []
lerrs = []
posErrs = []
renew_cg()
print("Loss: %.4f" % (mloss / iSentence))
| 48.379227 | 127 | 0.567277 | 2,230 | 20,029 | 4.943946 | 0.133184 | 0.044807 | 0.017687 | 0.019592 | 0.606168 | 0.56 | 0.521088 | 0.474376 | 0.443356 | 0.434467 | 0 | 0.012726 | 0.333017 | 20,029 | 413 | 128 | 48.496368 | 0.812561 | 0.023017 | 0 | 0.405751 | 0 | 0 | 0.010587 | 0 | 0 | 0 | 0 | 0 | 0.003195 | 1 | 0.028754 | false | 0 | 0.031949 | 0.003195 | 0.079872 | 0.015974 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf7bf89fc30751bcda78ce1d1f53a0da0361b74d | 1,509 | py | Python | dashdaemon/keys.py | rGunti/CarPi-DashDaemon | b8b340d35125b6f7fe5bb9647760d37301b07cac | [
"MIT"
] | null | null | null | dashdaemon/keys.py | rGunti/CarPi-DashDaemon | b8b340d35125b6f7fe5bb9647760d37301b07cac | [
"MIT"
] | null | null | null | dashdaemon/keys.py | rGunti/CarPi-DashDaemon | b8b340d35125b6f7fe5bb9647760d37301b07cac | [
"MIT"
] | null | null | null | """
CARPI DASH DAEMON
(C) 2018, Raphael "rGunti" Guntersweiler
Licensed under MIT
"""
from redisdatabus.bus import TypedBusListener as Types
import gpsdaemon.keys as gpskeys
import obddaemon.keys as obdkeys
SETTINGS_KEY_BASE = 'carpi.settings.'
DASH_KEY_BASE = 'carpi.dashboard.'
def _build_key(type_prefix, key_base, name):
    return "{}{}{}".format(type_prefix if type_prefix else "", key_base, name)
CONFIG_KEYS = {
'engine_vol': _build_key(Types.TYPE_PREFIX_INT, SETTINGS_KEY_BASE, 'car.enginevolume'),
'vol_efficency': _build_key(Types.TYPE_PREFIX_INT, SETTINGS_KEY_BASE, 'car.efficency'),
'fuel_density': _build_key(Types.TYPE_PREFIX_INT, SETTINGS_KEY_BASE, 'car.fueldensity')
}
CONFIG_DEFAULT_VALUES = {
CONFIG_KEYS['engine_vol']: 1000,
CONFIG_KEYS['vol_efficency']: 85,
CONFIG_KEYS['fuel_density']: 745
}
LIVE_INPUT_DATA_KEYS = {
'car_rpm': obdkeys.KEY_RPM,
'car_map': obdkeys.KEY_INTAKE_PRESSURE,
'car_tmp': obdkeys.KEY_INTAKE_TEMP,
'car_spd': obdkeys.KEY_SPEED,
'gps_spd': gpskeys.KEY_SPEED,
'gps_acc_lng': gpskeys.KEY_EPX,
'gps_acc_lat': gpskeys.KEY_EPY,
'gps_acc_spd': gpskeys.KEY_EPS
}
LIVE_OUTPUT_DATA_KEYS = {
'speed': _build_key(Types.TYPE_PREFIX_INT, DASH_KEY_BASE, 'speed'),
'fuel_usage': _build_key(Types.TYPE_PREFIX_FLOAT, DASH_KEY_BASE, 'fuelusage'),
'fuel_efficiency': _build_key(Types.TYPE_PREFIX_FLOAT, DASH_KEY_BASE, 'fuelefficiency'),
'fuel_fail_flag': _build_key(Types.TYPE_PREFIX_BOOL, DASH_KEY_BASE, 'fuelfailflag')
}
| 30.795918 | 92 | 0.743539 | 214 | 1,509 | 4.808411 | 0.369159 | 0.07483 | 0.088435 | 0.115646 | 0.251701 | 0.229349 | 0.204082 | 0.204082 | 0.204082 | 0.12828 | 0 | 0.009909 | 0.13055 | 1,509 | 48 | 93 | 31.4375 | 0.77439 | 0.051027 | 0 | 0 | 0 | 0 | 0.212781 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.030303 | false | 0 | 0.090909 | 0.030303 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf7e16d1f4e90c037eb66831eeffade73df69683 | 261 | py | Python | imdb_movie_review_sentiment_prediction/training_and_evaluation.py | slaily/deep-learning-bits | cb9ce7ec539efbdfcaa023d141466f919bd31b71 | [
"MIT"
] | null | null | null | imdb_movie_review_sentiment_prediction/training_and_evaluation.py | slaily/deep-learning-bits | cb9ce7ec539efbdfcaa023d141466f919bd31b71 | [
"MIT"
] | null | null | null | imdb_movie_review_sentiment_prediction/training_and_evaluation.py | slaily/deep-learning-bits | cb9ce7ec539efbdfcaa023d141466f919bd31b71 | [
"MIT"
] | null | null | null | model.compile(
optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc']
)
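# The calls below assume x_train/y_train and x_val/y_val already hold the
# padded IMDB sequences and labels prepared earlier in the pipeline.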
history = model.fit(
x_train,
y_train,
epochs=10,
batch_size=32,
validation_data=(x_val, y_val)
)
model.save_weights('pre_trained_glove_model.h5')
| 18.642857 | 48 | 0.678161 | 35 | 261 | 4.742857 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023474 | 0.183908 | 261 | 13 | 49 | 20.076923 | 0.755869 | 0 | 0 | 0 | 0 | 0 | 0.210728 | 0.099617 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf831543b480d5861c0d351648dc6dd8a55ea5de | 460 | py | Python | python/controls/choicegroup/choicegroup_with_change_event.py | pglet/pglet-samples | ab47e797a4daccfa4779daa3d1fd1cc27d92e7f9 | [
"MIT"
] | null | null | null | python/controls/choicegroup/choicegroup_with_change_event.py | pglet/pglet-samples | ab47e797a4daccfa4779daa3d1fd1cc27d92e7f9 | [
"MIT"
] | null | null | null | python/controls/choicegroup/choicegroup_with_change_event.py | pglet/pglet-samples | ab47e797a4daccfa4779daa3d1fd1cc27d92e7f9 | [
"MIT"
] | null | null | null | import pglet
from pglet import ChoiceGroup, choicegroup, Text
with pglet.page("choicegroup-with-change-event") as page:
def choicegroup_changed(e):
t.value = f"ChoiceGroup value changed to {cg.value}"
t.update()
cg = ChoiceGroup(label='Select color', on_change=choicegroup_changed, options=[
choicegroup.Option('Red'),
choicegroup.Option('Green'),
choicegroup.Option('Blue')
])
t = Text()
page.add(cg, t)
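    # Block until the user presses Enter so the page stays open.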
input() | 24.210526 | 81 | 0.680435 | 58 | 460 | 5.344828 | 0.517241 | 0.164516 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184783 | 460 | 19 | 82 | 24.210526 | 0.826667 | 0 | 0 | 0 | 0 | 0 | 0.199566 | 0.062907 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf84fe1671965d8bf607c4db0b1fce05cc370700 | 910 | py | Python | raspberrypi/sound1.py | Shadowsith/python | b8878c822e55528e663de16bd1029d330862c8dc | [
"MIT"
] | null | null | null | raspberrypi/sound1.py | Shadowsith/python | b8878c822e55528e663de16bd1029d330862c8dc | [
"MIT"
] | null | null | null | raspberrypi/sound1.py | Shadowsith/python | b8878c822e55528e663de16bd1029d330862c8dc | [
"MIT"
] | 1 | 2020-05-19T11:32:25.000Z | 2020-05-19T11:32:25.000Z | #!/usr/bin/python
# Double-clap detector
import time
import RPi.GPIO as GPIO
import mysql.connector

gpioPort = 40
# MySQL connection
statement = "UPDATE Flags SET wert=0 WHERE name='bewegung';"
# Use the physical BOARD pin layout
GPIO.setmode(GPIO.BOARD)
GPIO.setup(gpioPort, GPIO.IN)
lastSound = 0
def mysqlConnect(statement):
cnx = mysql.connector.connect(user='pi', password='raspberry', host='localhost', database='EIT11C')
cursor = cnx.cursor()
cursor.execute(statement)
cnx.commit()
cursor.close()
cnx.close()
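# Poll the sound sensor: a second trigger within 500 ms of the first counts
# as a double clap and resets the motion flag in the database.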
while 1:
if GPIO.input(gpioPort) == GPIO.HIGH:
if lastSound == 0 or (lastSound + 500) < int(round(time.time()*1000)):
lastSound = int(round(time.time()*1000))
time.sleep(0.1)
print("Klatchen1")
else:
print("Klatschen2")
lastSound = 0
time.sleep(0.1)
mysqlConnect(statement)
| 23.947368 | 103 | 0.631868 | 110 | 910 | 5.227273 | 0.554545 | 0.052174 | 0.041739 | 0.055652 | 0.069565 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03741 | 0.236264 | 910 | 37 | 104 | 24.594595 | 0.789928 | 0.074725 | 0 | 0.153846 | 0 | 0 | 0.108592 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.038462 | 0.115385 | 0 | 0.153846 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf85325b7b5d658e0a68da64304ce7b4f2588e9a | 7,466 | py | Python | apted/all_possible_mappings_ted.py | JoaoFelipe/apted | 828b3e3f4c053f7d35f0b55b0d5597e8041719ac | [
"MIT"
] | 52 | 2017-11-14T06:45:45.000Z | 2022-03-01T01:14:45.000Z | apted/all_possible_mappings_ted.py | JoaoFelipe/apted | 828b3e3f4c053f7d35f0b55b0d5597e8041719ac | [
"MIT"
] | 7 | 2018-11-21T17:21:14.000Z | 2021-09-04T09:23:53.000Z | apted/all_possible_mappings_ted.py | JoaoFelipe/apted | 828b3e3f4c053f7d35f0b55b0d5597e8041719ac | [
"MIT"
] | 7 | 2017-12-17T16:49:45.000Z | 2020-07-16T18:49:44.000Z | #
# The MIT License
#
# Copyright 2017 Joao Felipe Pimentel
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
"""Implements an exponential algorithm for the tree edit distance. It
computes all possible TED mappings between two trees and calculated their
minimal cost."""
from __future__ import (absolute_import, division)
from copy import copy
from .config import Config
from .node_indexer import NodeIndexer
class AllPossibleMappingsTED(object):
"""Implements an exponential algorithm for the tree edit distance. It
computes all possible TED mappings between two trees and calculated their
minimal cost."""
def __init__(self, tree1, tree2, config=None):
self.config = config or Config()
"""Config object that specifies how to calculate the edit distance"""
self.it1 = NodeIndexer(tree1, 0, self.config)
"""Stores the indexes of the first input tree"""
self.it2 = NodeIndexer(tree2, 1, self.config)
"""Stores the indexes of the second input tree"""
def compute_edit_distance(self):
"""Computes the tree edit distance between two trees by trying all
possible TED mappings. It uses the specified cost model."""
mappings = [
mapping for mapping in self.generate_all_one_to_one_mappins()
if self.is_ted_mapping(mapping)
]
return self.get_min_cost(mappings)
def generate_all_one_to_one_mappins(self):
"""Generate all possible 1-1 mappings.
These mappings do not conform to TED conditions (sibling-order and
ancestor-descendant).
        A mapping is a list of (node1, node2) pairs of node-info objects;
        None on either side marks an inserted or deleted node.

        Returns a list of all 1-1 mappings.
        """
mappings = [
[(node1, None) for node1 in self.it1.pre_ltr_info] +
[(None, node2) for node2 in self.it2.pre_ltr_info]
]
# For each node in the source tree
for node1 in self.it1.pre_ltr_info:
# Duplicate all mappings and store in mappings_copy
mappings_copy = [
copy(x) for x in mappings
]
# For each node in the destination tree
for node2 in self.it2.pre_ltr_info:
# For each mapping (produced for all n1 values smaller than
# current n1)
for mapping in mappings_copy:
# Produce new mappings with the pair (n1, n2) by adding this
# pair to all mappings where it is valid to add
element_add = True
# Verify if (n1, n2) can be added to mapping m.
# All elements in m are checked with (n1, n2) for possible
# violation
# One-to-one condition
for ele1, ele2 in mapping:
# n1 is not in any of previous mappings
if ele1 and ele2 and ele2 is node2:
element_add = False
break
                    # New mappings must be produced by duplicating a previous
# mapping and extending it by (n1, n2)
if element_add:
m_copy = copy(mapping)
m_copy.append((node1, node2))
m_copy.remove((node1, None))
m_copy.remove((None, node2))
mappings.append(m_copy)
return mappings
def is_ted_mapping(self, mapping):
"""Test if a 1-1 mapping is a TED mapping"""
# pylint: disable=no-self-use, invalid-name
        # Validate each pair of pairs of mapped nodes in the mapping
for node_a1, node_a2 in mapping:
# Use only pairs of mapped nodes for validation.
if node_a1 is None or node_a2 is None:
continue
for node_b1, node_b2 in mapping:
# Use only pairs of mapped nodes for validation.
if node_b1 is None or node_b2 is None:
continue
# If any of the conditions below doesn't hold, discard m.
# Validate ancestor-descendant condition.
n1 = (
node_a1.pre_ltr < node_b1.pre_ltr and
node_a1.pre_rtl < node_b1.pre_rtl
)
n2 = (
node_a2.pre_ltr < node_b2.pre_ltr and
node_a2.pre_rtl < node_b2.pre_rtl
)
if (n1 and not n2) or (not n1 and n2):
# Discard the mapping.
# If this condition doesn't hold, the next condition
# doesn't have to be verified any more and any other
# pair doesn't have to be verified any more.
return False
                # Validate sibling-order condition
n1 = (
node_a1.pre_ltr < node_b1.pre_ltr and
node_a1.pre_rtl > node_b1.pre_rtl
)
n2 = (
node_a2.pre_ltr < node_b2.pre_ltr and
node_a2.pre_rtl > node_b2.pre_rtl
)
if (n1 and not n2) or (not n1 and n2):
# Discard the mapping.
return False
return True
def get_min_cost(self, mappings):
"""Given list of all TED mappings, calculate the cost of the
minimal-cost mapping."""
insert, delete = self.config.insert, self.config.delete
rename = self.config.rename
# Initialize min_cost to the upper bound
min_cost = float('inf')
# verify cost of each mapping
for mapping in mappings:
m_cost = 0
# Sum up edit costs for all elements in the mapping m.
for node1, node2 in mapping:
if node1 and node2:
m_cost += rename(node1.node, node2.node)
elif node1:
m_cost += delete(node1.node)
else:
m_cost += insert(node2.node)
# Break as soon as the current min_cost is exceeded.
# Only for early loop break.
if m_cost > min_cost:
break
# Store the minimal cost - compare m_cost and min_cost
min_cost = min(min_cost, m_cost)
return min_cost
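
# Usage sketch (illustrative): computes the exact TED between two tiny trees.
# Assumes the bracket-notation parser from apted.helpers, as shown in the
# project README; this brute-force class is exponential, so keep trees small.
if __name__ == "__main__":
    from apted.helpers import Tree

    tree1 = Tree.from_text("{a{b}{c}}")
    tree2 = Tree.from_text("{a{b{d}}}")
    print(AllPossibleMappingsTED(tree1, tree2).compute_edit_distance())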
| 42.420455 | 80 | 0.583311 | 964 | 7,466 | 4.409751 | 0.274896 | 0.016937 | 0.00941 | 0.012232 | 0.234533 | 0.228652 | 0.21642 | 0.201835 | 0.175488 | 0.175488 | 0 | 0.019895 | 0.360434 | 7,466 | 175 | 81 | 42.662857 | 0.870366 | 0.425931 | 0 | 0.216867 | 0 | 0 | 0.000757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060241 | false | 0 | 0.048193 | 0 | 0.192771 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
cf8a7c68901bef8af36175c6396dc707d25c27e2 | 4,429 | py | Python | Antics/AI/AIPlayer.py | sundercode/AI-Homework | 423f703685852313bc127338f9cf6b4e862b898e | [
"MIT"
] | null | null | null | Antics/AI/AIPlayer.py | sundercode/AI-Homework | 423f703685852313bc127338f9cf6b4e862b898e | [
"MIT"
] | null | null | null | Antics/AI/AIPlayer.py | sundercode/AI-Homework | 423f703685852313bc127338f9cf6b4e862b898e | [
"MIT"
] | null | null | null | import random
import sys
sys.path.append("..") #so other modules can be found in parent dir
from Player import *
from Constants import *
from Construction import CONSTR_STATS
from Ant import UNIT_STATS
from Move import Move
from GameState import *
from AIPlayerUtils import *
##
#AIPlayer
#Description: The responsibility of this class is to interact with the game by
#deciding a valid move based on a given game state. This class has methods that
#will be implemented by students in Dr. Nuxoll's AI course.
#
#Variables:
# playerId - The id of the player.
##
class AIPlayer(Player):
#__init__
#Description: Creates a new Player
#
#Parameters:
# inputPlayerId - The id to give the new player (int)
##
def __init__(self, inputPlayerId):
super(AIPlayer,self).__init__(inputPlayerId, "Random")
##
#getPlacement
#
#Description: called during setup phase for each Construction that
# must be placed by the player. These items are: 1 Anthill on
# the player's side; 1 tunnel on player's side; 9 grass on the
# player's side; and 2 food on the enemy's side.
#
#Parameters:
# construction - the Construction to be placed.
# currentState - the state of the game at this point in time.
#
#Return: The coordinates of where the construction is to be placed
##
def getPlacement(self, currentState):
numToPlace = 0
#implemented by students to return their next move
if currentState.phase == SETUP_PHASE_1: #stuff on my side
numToPlace = 11
moves = []
for i in range(0, numToPlace):
move = None
while move == None:
#Choose any x location
x = random.randint(0, 9)
#Choose any y location on your side of the board
y = random.randint(0, 3)
#Set the move if this space is empty
if currentState.board[x][y].constr == None and (x, y) not in moves:
move = (x, y)
                        #Just need to make the space non-empty so it is not picked again.
                        currentState.board[x][y].constr = True
moves.append(move)
return moves
elif currentState.phase == SETUP_PHASE_2: #stuff on foe's side
numToPlace = 2
moves = []
for i in range(0, numToPlace):
move = None
while move == None:
#Choose any x location
x = random.randint(0, 9)
#Choose any y location on enemy side of the board
y = random.randint(6, 9)
#Set the move if this space is empty
if currentState.board[x][y].constr == None and (x, y) not in moves:
move = (x, y)
                        #Just need to make the space non-empty so it is not picked again.
                        currentState.board[x][y].constr = True
moves.append(move)
return moves
else:
return [(0, 0)]
##
#getMove
#Description: Gets the next move from the Player.
#
#Parameters:
# currentState - The state of the current game waiting for the player's move (GameState)
#
#Return: The Move to be made
##
def getMove(self, currentState):
moves = listAllLegalMoves(currentState)
        selectedMove = random.choice(moves)
#don't do a build move if there are already 3+ ants
numAnts = len(currentState.inventories[currentState.whoseTurn].ants)
while (selectedMove.moveType == BUILD and numAnts >= 3):
            selectedMove = random.choice(moves)
return selectedMove
##
#getAttack
#Description: Gets the attack to be made from the Player
#
#Parameters:
# currentState - A clone of the current state (GameState)
# attackingAnt - The ant currently making the attack (Ant)
# enemyLocation - The Locations of the Enemies that can be attacked (Location[])
##
def getAttack(self, currentState, attackingAnt, enemyLocations):
#Attack a random enemy.
return enemyLocations[random.randint(0, len(enemyLocations) - 1)]
| 37.533898 | 105 | 0.589298 | 551 | 4,429 | 4.704174 | 0.303085 | 0.006173 | 0.032407 | 0.029321 | 0.337191 | 0.283179 | 0.283179 | 0.261574 | 0.23071 | 0.23071 | 0 | 0.010183 | 0.334839 | 4,429 | 117 | 106 | 37.854701 | 0.869654 | 0.413186 | 0 | 0.423077 | 0 | 0 | 0.003163 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.173077 | 0.019231 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8415d3e67ce2c47d7251854165bcf91208abf86 | 22,718 | py | Python | pysnptools/util/mapreduce1/runner/hpc.py | fastlmm/PySnpTools | ce2ecaa5548e82b64c8ed6a205dbf419701b66b6 | [
"Apache-2.0"
] | 13 | 2019-12-23T06:51:08.000Z | 2022-01-07T18:14:55.000Z | pysnptools/util/mapreduce1/runner/hpc.py | fastlmm/PySnpTools | ce2ecaa5548e82b64c8ed6a205dbf419701b66b6 | [
"Apache-2.0"
] | 3 | 2020-07-30T16:07:43.000Z | 2021-07-14T09:00:42.000Z | pysnptools/util/mapreduce1/runner/hpc.py | fastlmm/PySnpTools | ce2ecaa5548e82b64c8ed6a205dbf419701b66b6 | [
"Apache-2.0"
] | 3 | 2020-05-22T09:46:16.000Z | 2021-01-26T13:27:36.000Z |
from pysnptools.util.mapreduce1.runner import *
import os
import subprocess, sys, os.path
import multiprocessing
import pysnptools.util as pstutil
import pdb
import logging
try:
import dill as pickle
except:
logging.warning("Can't import dill, so won't be able to clusterize lambda expressions. If you try, you'll get this error 'Can't pickle <type 'function'>: attribute lookup __builtin__.function failed'")
import cPickle as pickle
class HPC(Runner):
'''
    Old code to run on a Microsoft Windows HPC Cluster. Not currently supported.
'''
#!!LATER make it (and Hadoop) work from root directories -- or give a clear error message
def __init__(self, taskcount, clustername, fileshare, priority="Normal", unit="core", mkl_num_threads=None, runtime="infinite", remote_python_parent=None,
update_remote_python_parent=False, min=None, max=None, excluded_nodes=[], template=None, nodegroups=None, skipinputcopy=False, node_local=True,clean_up=True,preemptable=True,FailOnTaskFailure=False,logging_handler=logging.StreamHandler(sys.stdout)):
logger = logging.getLogger()
if not logger.handlers:
logger.setLevel(logging.INFO)
for h in list(logger.handlers):
logger.removeHandler(h)
logger.addHandler(logging_handler)
if logger.level == logging.NOTSET:
logger.setLevel(logging.INFO)
self.taskcount = taskcount
self.clustername = clustername
self.fileshare = fileshare
self.priority = priority
self.runtime = runtime
self.unit = unit
self.excluded_nodes = excluded_nodes
self.min = min
self.max = max
self.remote_python_parent = remote_python_parent
self.update_remote_python_parent = update_remote_python_parent
self.CheckUnitAndMKLNumThreads(mkl_num_threads, unit)
self.skipinputcopy=skipinputcopy
self.template = template
self.nodegroups = nodegroups
self.node_local = node_local
self.clean_up = clean_up
self.preemptable = preemptable
self.FailOnTaskFailure = FailOnTaskFailure
def run(self, distributable):
# Check that the local machine has python path set
localpythonpath = os.environ.get("PYTHONPATH")#!!should it be able to work without pythonpath being set (e.g. if there was just one file)? Also, is None really the return or is it an exception.
if localpythonpath is None: raise Exception("Expect local machine to have 'pythonpath' set")
remotepythoninstall = self.check_remote_pythoninstall()
remotewd, run_dir_abs, run_dir_rel, nodelocalwd = self.create_run_dir()
pstutil.create_directory_if_necessary(os.path.join(remotewd, distributable.tempdirectory), isfile=False) #create temp directory now so that cluster tasks won't try to create it many times at once
result_remote = os.path.join(run_dir_abs,"result.p")
self.copy_python_settings(run_dir_abs)
inputOutputCopier = HPCCopier(remotewd,skipinput=self.skipinputcopy) #Create the object that copies input and output files to where they are needed
inputOutputCopier.input(distributable) # copy of the input files to where they are needed (i.e. the cluster)
remotepythonpath = self.FindOrCreateRemotePythonPath(localpythonpath, run_dir_abs)
batfilename_rel = self.create_bat_file(distributable, remotepythoninstall, remotepythonpath, remotewd, run_dir_abs, run_dir_rel, result_remote, nodelocalwd, distributable)
self.submit_to_cluster(batfilename_rel, distributable, remotewd, run_dir_abs, run_dir_rel, nodelocalwd)
inputOutputCopier.output(distributable) # copy the output file from where they were created (i.e. the cluster) to the local computer
assert os.path.exists(result_remote), "The HPC job produced no result (and, thus, likely failed)"
with open(result_remote, mode='rb') as f:
result = pickle.load(f)
#logging.info('Done: HPC runner is running a distributable. Returns {0}'.format(result))
return result
def CheckUnitAndMKLNumThreads(self, mkl_num_threads, unit):
if unit.lower() == "core":
if mkl_num_threads is not None and mkl_num_threads!=1 : raise Exception("When 'unit' is 'core', mkl_num_threads must be unspecified or 1")
self.mkl_num_threads = 1
elif unit.lower() == "socket":
if mkl_num_threads is None : raise Exception("When 'unit' is 'socket', mkl_num_threads must be specified")
self.mkl_num_threads = mkl_num_threads
elif unit.lower() == "node":
self.mkl_num_threads = mkl_num_threads
else :
raise Exception("Expect 'unit' to be 'core', 'socket', or 'node'")
def copy_python_settings(self, run_dir_abs):
#localuserprofile = os.environ.get("USERPROFILE")
user_python_settings=".continuum"
python_settings=os.path.join(self.fileshare,user_python_settings)
if os.path.exists(python_settings):
import shutil
remote_user_python_settings=os.path.join(run_dir_abs,user_python_settings)
shutil.copytree(python_settings,remote_user_python_settings)
def FindOrCreateRemotePythonPath(self, localpythonpath, run_dir_abs):
if self.remote_python_parent is None:
remotepythonpath = self.CopySource(localpythonpath, run_dir_abs)
else:
pstutil.create_directory_if_necessary(self.remote_python_parent,isfile=False)
            path_list = []
            for rel in os.listdir(self.remote_python_parent):
                path_list.append(os.path.join(self.remote_python_parent, rel))
            remotepythonpath = ";".join(path_list)
if self.update_remote_python_parent:
remotepythonpath = self.CopySource(localpythonpath, run_dir_abs)
return remotepythonpath
def numString(self):
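        # Builds the HPC -Num<Unit> argument, e.g. unit="core", min=2, max=8
        # yields " -NumCore 2-8"; with no bounds at all, " -NumCore *-*".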
if self.min is None and self.max is None:
return " -Num{0} *-*".format(self.unit.capitalize())
if self.min is None:
return " -Num{0} {1}".format(self.unit.capitalize(), self.max)
if self.max is None:
return " -Num{0} {1}-*".format(self.unit.capitalize(), self.min)
return " -Num{0} {1}-{2}".format(self.unit.capitalize(), self.min, self.max)
def submit_to_cluster(self, batfilename_rel, distributable, remotewd, run_dir_abs, run_dir_rel, nodelocalwd):
stdout_dir_rel = os.path.join(run_dir_rel,"stdout")
stdout_dir_abs = os.path.join(run_dir_abs,"stdout")
pstutil.create_directory_if_necessary(stdout_dir_abs, isfile=False)
stderr_dir_rel = os.path.join(run_dir_rel,"stderr")
stderr_dir_abs = os.path.join(run_dir_abs,"stderr")
pstutil.create_directory_if_necessary(stderr_dir_abs, isfile=False)
if len(self.excluded_nodes) > 0:
excluded_nodes = "Set-HpcJob -Id $r.Id -addExcludedNodes {0}".format(", ".join(self.excluded_nodes))
else:
excluded_nodes = ""
#create the Powershell file
psfilename_rel = os.path.join(run_dir_rel,"dist.ps1")
psfilename_abs = os.path.join(run_dir_abs,"dist.ps1")
pstutil.create_directory_if_necessary(psfilename_abs, isfile=True)
with open(psfilename_abs, "w") as psfile:
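            # The generated script creates the HPC job, adds the Parametric
            # (map) and Reduce tasks (plus NodePrep/NodeRelease in node-local
            # mode), submits it, and polls the job state until completion.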
psfile.write(r"""Add-PsSnapin Microsoft.HPC
Set-Content Env:CCP_SCHEDULER {0}
$r = New-HpcJob -Name "{7}" -Priority {8}{12}{14}{16} -RunTime {15} -FailOnTaskFailure {23} #-Preemptable {22}
$r.Id
if ({20})
{10}
$from = "{4}"
$to = "{17}"
Add-HpcTask -Name NodePrep -JobId $r.Id -Type NodePrep -CommandLine "${{from}}\{18}" -StdOut "${{from}}\{2}\nodeprep.txt" -StdErr "${{from}}\{3}\nodeprep.txt" -WorkDir .
Add-HpcTask -Name Parametric -JobId $r.Id -Parametric -Start 0 -End {1} -CommandLine "${{from}}\{6} * {5}" -StdOut "${{from}}\{2}\*.txt" -StdErr "${{from}}\{3}\*.txt" -WorkDir $to
Add-HpcTask -Name Reduce -JobId $r.Id -Depend Parametric -CommandLine "${{from}}\{6} {5} {5}" -StdOut "${{from}}\{2}\reduce.txt" -StdErr "${{from}}\{3}\reduce.txt" -WorkDir $to
{21}Add-HpcTask -Name NodeRelease -JobId $r.Id -Type NodeRelease -CommandLine "${{from}}\{19}" -StdOut "${{from}}\{2}\noderelease.txt" -StdErr "${{from}}\{3}\noderelease.txt" -WorkDir .
{11}
else
{10}
Add-HpcTask -Name Parametric -JobId $r.Id -Parametric -Start 0 -End {1} -CommandLine "{6} * {5}" -StdOut "{2}\*.txt" -StdErr "{3}\*.txt" -WorkDir {4}
Add-HpcTask -Name Reduce -JobId $r.Id -Depend Parametric -CommandLine "{6} {5} {5}" -StdOut "{2}\reduce.txt" -StdErr "{3}\reduce.txt" -WorkDir {4}
{11}
{13}
Submit-HpcJob -Id $r.Id
$j = Get-HpcJob -Id $r.Id
$i = $r.id
$s = 10
while(($j.State -ne "Finished") -and ($j.State -ne "Failed") -and ($j.State -ne "Canceled"))
{10}
$x = $j.State
Write-Host "${10}x{11}. Job# ${10}i{11} sleeping for ${10}s{11}"
Start-Sleep -s $s
if ($s -ge 60)
{10}
$s = 60
{11}
else
{10}
$s = $s * 1.1
{11}
$j.Refresh()
{11}
""" .format(
self.clustername, #0
self.taskcount-1, #1
stdout_dir_rel, #2
stderr_dir_rel, #3
remotewd, #4 fileshare wd
self.taskcount, #5
batfilename_rel, #6
self.maxlen(str(distributable),50), #7
self.priority, #8
self.unit, #9 -- not used anymore,. Instead #12 sets unit
"{", #10
"}", #11
self.numString(), #12
excluded_nodes, #13
' -templateName "{0}"'.format(self.template) if self.template is not None else "", #14
self.runtime, #15 RuntimeSeconds
' -NodeGroups "{0}"'.format(self.nodegroups) if self.nodegroups is not None else "", #16
nodelocalwd, #17 the node-local wd
batfilename_rel[0:-8]+"nodeprep.bat", #18
batfilename_rel[0:-8]+"noderelease.bat", #19
1 if self.node_local else 0, #20
"", #21 always run release task
self.preemptable, #22
'$true' if self.FailOnTaskFailure else '$false', #23
))
assert batfilename_rel[-8:] == "dist.bat", "real assert"
import subprocess
proc = subprocess.Popen(["powershell.exe", "-ExecutionPolicy", "Unrestricted", psfilename_abs], cwd=os.getcwd())
if not 0 == proc.wait(): raise Exception("Running powershell cluster submit script results in non-zero return code")
#move to utils?
@staticmethod
def maxlen(s,max):
'''
Truncate cluster job name if longer than max.
'''
if len(s) <= max:
return s
else:
#return s[0:max-1]
return s[-max:] #JL: I prefer the end of the name rather than the start
def create_distributablep(self, distributable, run_dir_abs, run_dir_rel):
distributablep_filename_rel = os.path.join(run_dir_rel, "distributable.p")
distributablep_filename_abs = os.path.join(run_dir_abs, "distributable.p")
with open(distributablep_filename_abs, mode='wb') as f:
pickle.dump(distributable, f, pickle.HIGHEST_PROTOCOL)
return distributablep_filename_rel, distributablep_filename_abs
@staticmethod
def FindDirectoriesToExclude(localpythonpathdir):
logging.info("Looking in '{0}' for directories to skip".format(localpythonpathdir))
xd_string = " /XD $TF /XD .git"
for root, dir, files in os.walk(localpythonpathdir):
for file in files:
if file.lower() == ".ignoretgzchange":
xd_string += " /XD {0}".format(root)
return xd_string
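    # CopySource mirrors each local PYTHONPATH entry to the file share with
    # robocopy, skipping $TF/.git and any directory carrying a
    # ".ignoretgzchange" marker, and returns the semicolon-joined remote path.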
def CopySource(self,localpythonpath, run_dir_abs):
if self.update_remote_python_parent:
remote_python_parent = self.remote_python_parent
else:
remote_python_parent = run_dir_abs + os.path.sep + "pythonpath"
pstutil.create_directory_if_necessary(remote_python_parent, isfile=False)
remotepythonpath_list = []
for i, localpythonpathdir in enumerate(localpythonpath.split(';')):
remotepythonpathdir = os.path.join(remote_python_parent, str(i))
remotepythonpath_list.append(remotepythonpathdir)
xd_string = HPC.FindDirectoriesToExclude(localpythonpathdir)
xcopycommand = 'robocopy /s {0} {1}{2}'.format(localpythonpathdir,remotepythonpathdir,xd_string)
logging.info(xcopycommand)
os.system(xcopycommand)
remotepythonpath = ";".join(remotepythonpath_list)
return remotepythonpath
def create_bat_file(self, distributable, remotepythoninstall, remotepythonpath, remotewd, run_dir_abs, run_dir_rel, result_remote, nodelocalwd, create_bat_file):
path_share_list = [r"",r"Scripts"]
remotepath_list = []
for path_share in path_share_list:
path_share_abs = os.path.join(remotepythoninstall,path_share)
if not os.path.isdir(path_share_abs): raise Exception("Expect path directory at '{0}'".format(path_share_abs))
remotepath_list.append(path_share_abs)
remotepath = ";".join(remotepath_list)
distributablep_filename_rel, distributablep_filename_abs = self.create_distributablep(distributable, run_dir_abs, run_dir_rel)
distributable_py_file = os.path.join(os.path.dirname(__file__),"..","distributable.py")
if not os.path.exists(distributable_py_file): raise Exception("Expect file at " + distributable_py_file + ", but it doesn't exist.")
localfilepath, file = os.path.split(distributable_py_file)
for remote_path_part in remotepythonpath.split(';'):
remoteexe = os.path.join(remote_path_part,"fastlmm","util",file)
if os.path.exists(remoteexe):
break #not continue
remoteexe = None
assert remoteexe is not None, "Could not find '{0}' on remote python path. Is fastlmm on your local python path?".format(file)
#run_dir_rel + os.path.sep + "pythonpath" + os.path.sep + os.path.splitdrive(localfilepath)[1]
#result_remote2 = result_remote.encode("string-escape")
command_string = remoteexe + r""" "{0}" """.format(distributablep_filename_abs) + r""" "LocalInParts(%1,{0},mkl_num_threads={1},result_file=""{2}"",run_dir=""{3}"") " """.format(
self.taskcount,
self.mkl_num_threads,
"result.p",
            run_dir_abs.encode("string-escape"))  # "string-escape" is a Python 2-only codec; this legacy runner targets Python 2
batfilename_rel = os.path.join(run_dir_rel,"dist.bat")
batfilename_abs = os.path.join(run_dir_abs,"dist.bat")
pstutil.create_directory_if_necessary(batfilename_abs, isfile=True)
matplotlibfilename_rel = os.path.join(run_dir_rel,".matplotlib")
matplotlibfilename_abs = os.path.join(run_dir_abs,".matplotlib")
pstutil.create_directory_if_necessary(matplotlibfilename_abs, isfile=False)
pstutil.create_directory_if_necessary(matplotlibfilename_abs + "/tex.cache", isfile=False)
ipythondir_rel = os.path.join(run_dir_rel,".ipython")
ipythondir_abs = os.path.join(run_dir_abs,".ipython")
pstutil.create_directory_if_necessary(ipythondir_abs, isfile=False)
with open(batfilename_abs, "w") as batfile:
batfile.write("set path={0};%path%\n".format(remotepath))
batfile.write("set PYTHONPATH={0}\n".format(remotepythonpath))
batfile.write("set USERPROFILE={0}\n".format(run_dir_abs))
batfile.write("set MPLCONFIGDIR={0}\n".format(matplotlibfilename_abs))
batfile.write("set IPYTHONDIR={0}\n".format(ipythondir_abs))
batfile.write("python {0}\n".format(command_string))
if (self.node_local):
with open( os.path.join(run_dir_abs,"nodeprep.bat"), "w") as prepfile:
prepfile.write(r"""set f="{0}"{1}""".format(remotewd,'\n'))
prepfile.write(r"""set t="{0}"{1}""".format(nodelocalwd,'\n'))
prepfile.write("if not exist %t% mkdir %t%\n")
with open( os.path.join(run_dir_abs,"noderelease.bat"), "w") as releasefile:
releasefile.write(r"""set f="{0}"{1}""".format(remotewd,'\n'))
releasefile.write(r"""set t="{0}"{1}""".format(nodelocalwd,'\n'))
inputOutputCopier = HPCCopierNodeLocal(prepfile,releasefile,self.clean_up) #Create the object that copies input and output files to where they are needed
inputOutputCopier.input(distributable) # copy of the input files to where they are needed (i.e. to the cluster)
inputOutputCopier.output(distributable) # copy of the output files to where they are needed (i.e. off the cluster)
releasefile.write("rmdir /s %t%\n")
releasefile.write("exit /b 0\n")
return batfilename_rel
def check_remote_pythoninstall(self):
remotepythoninstall = r"\\GCR\Scratch\RR1\escience\pythonInstallD" #!!! don't hardwire this
if not os.path.isdir(remotepythoninstall): raise Exception("Expect Python and related directories at '{0}'".format(remotepythoninstall))
return remotepythoninstall
def create_run_dir(self):
username = os.environ["USERNAME"]
localwd = os.getcwd()
#!!make an option to specify the full remote WD. Also what is the "\\\\" case for?
if localwd.startswith("\\\\"):
remotewd = self.fileshare + os.path.sep + username +os.path.sep + "\\".join(localwd.split('\\')[4:])
nodelocalwd = "d:\scratch\escience" + os.path.sep + username +os.path.sep + "\\".join(localwd.split('\\')[4:]) #!!!const
else:
remotewd = self.fileshare + os.path.sep + username + os.path.splitdrive(localwd)[1] #using '+' because 'os.path.join' isn't work with shares
nodelocalwd = "d:\scratch\escience" + os.path.sep + username + os.path.splitdrive(localwd)[1] #!!! const
import datetime
now = datetime.datetime.now()
run_dir_rel = os.path.join("runs",pstutil._datestamp(appendrandom=True))
run_dir_abs = os.path.join(remotewd,run_dir_rel)
pstutil.create_directory_if_necessary(run_dir_abs,isfile=False)
return remotewd, run_dir_abs, run_dir_rel, nodelocalwd
class HPCCopier(object): #Implements ICopier
def __init__(self, remotewd, skipinput=False):
self.remotewd = remotewd
self.skipinput=skipinput
def input(self,item):
if self.skipinput:
return
if isinstance(item, str):
itemnorm = os.path.normpath(item)
remote_file_name = os.path.join(self.remotewd,itemnorm)
remote_dir_name,ignore = os.path.split(remote_file_name)
pstutil.create_directory_if_necessary(remote_file_name)
xcopycommand = "xcopy /d /e /s /c /h /y {0} {1}".format(itemnorm, remote_dir_name)
logging.info(xcopycommand)
rc = os.system(xcopycommand)
print("rc=" +str(rc))
if rc!=0: raise Exception("xcopy cmd failed with return value={0}, from cmd {1}".format(rc,xcopycommand))
elif hasattr(item,"copyinputs"):
item.copyinputs(self)
# else -- do nothing
def output(self,item):
if isinstance(item, str):
itemnorm = os.path.normpath(item)
pstutil.create_directory_if_necessary(itemnorm)
remote_file_name = os.path.join(self.remotewd,itemnorm)
local_dir_name,ignore = os.path.split(itemnorm)
assert os.path.exists(remote_file_name), "Don't see expected file '{0}'. Did the HPC job fail?".format(remote_file_name)
#xcopycommand = "xcopy /d /e /s /c /h /y {0} {1}".format(remote_file_name, local_dir_name) # we copy to the local dir instead of the local file so that xcopy won't ask 'file or dir?'
xcopycommand = "xcopy /d /c /y {0} {1}".format(remote_file_name, local_dir_name) # we copy to the local
logging.info(xcopycommand)
rc = os.system(xcopycommand)
if rc!=0: logging.info("xcopy cmd failed with return value={0}, from cmd {1}".format(rc,xcopycommand))
elif hasattr(item,"copyoutputs"):
item.copyoutputs(self)
# else -- do nothing
class HPCCopierNodeLocal(object): #Implements ICopier
def __init__(self, fileprep, filerelease, clean_up):
self.fileprep = fileprep
self.filerelease = filerelease
self.clean_up = clean_up
def input(self,item):
if isinstance(item, str):
itemnorm = os.path.normpath(item)
dirname = os.path.dirname(itemnorm)
self.fileprep.write("if not exist %t%\{0} mkdir %t%\{0}\n".format(dirname))
self.fileprep.write("xcopy /d /e /s /c /h /y %f%\{0} %t%\{1}\\\n".format(itemnorm,dirname))
if self.clean_up:
self.filerelease.write("del %t%\{0}\n".format(itemnorm))
elif hasattr(item,"copyinputs"):
item.copyinputs(self)
# else -- do nothing
def output(self,item):
if isinstance(item, str):
itemnorm = os.path.normpath(item)
dirname = os.path.dirname(itemnorm)
self.filerelease.write("xcopy /d /e /s /c /h /y %t%\{0} %f%\{1}\\\n".format(itemnorm,dirname))
if self.clean_up:
self.filerelease.write("del %t%\{0}\n".format(itemnorm))
elif hasattr(item,"copyoutputs"):
item.copyoutputs(self)
# else -- do nothing
| 52.832558 | 265 | 0.61832 | 2,752 | 22,718 | 4.950218 | 0.165334 | 0.026426 | 0.02048 | 0.017177 | 0.340527 | 0.285547 | 0.256478 | 0.201864 | 0.173163 | 0.147618 | 0 | 0.013471 | 0.264812 | 22,718 | 429 | 266 | 52.955711 | 0.802179 | 0.087904 | 0 | 0.204611 | 0 | 0.034582 | 0.20704 | 0.016001 | 0 | 0 | 0 | 0 | 0.011527 | 1 | 0.057637 | false | 0 | 0.037464 | 0 | 0.146974 | 0.002882 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8424bf36382d1072f3fbfbb2e4fabd3526822c8 | 496 | py | Python | tests/formatters_test.py | MiraGeoscience/mirageoscience-apps | 8c445ec8f2391349aa4cac6c705426301b3c31ca | [
"MIT"
] | 1 | 2022-02-18T16:28:22.000Z | 2022-02-18T16:28:22.000Z | tests/formatters_test.py | nwilliams-kobold/geoapps | eb972321316a33628d8ae04613cc403a27d942ee | [
"MIT"
] | null | null | null | tests/formatters_test.py | nwilliams-kobold/geoapps | eb972321316a33628d8ae04613cc403a27d942ee | [
"MIT"
] | null | null | null | # Copyright (c) 2022 Mira Geoscience Ltd.
#
# This file is part of geoapps.
#
# geoapps is distributed under the terms and conditions of the MIT License
# (see LICENSE file at the root of this source code package).
import pytest
from geoapps.utils.formatters import string_name
def test_string_name():
chars = "!@#$%^&*().,"
value = "H!e(l@l#o.W$o%r^l&d*"
assert (
string_name(value, characters=chars) == "H_e_l_l_o_W_o_r_l_d_"
), "string_name validator failed"
| 23.619048 | 75 | 0.681452 | 80 | 496 | 4.0375 | 0.6 | 0.123839 | 0.018576 | 0.024768 | 0.06192 | 0.06192 | 0.06192 | 0.06192 | 0.06192 | 0.06192 | 0 | 0.010076 | 0.199597 | 496 | 20 | 76 | 24.8 | 0.803526 | 0.413306 | 0 | 0 | 0 | 0 | 0.282686 | 0 | 0 | 0 | 0 | 0 | 0.125 | 1 | 0.125 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8455d466c10af2e80eabb1b98ebf27274580915 | 5,992 | py | Python | midonet/neutron/services/l2gateway/plugin.py | NeCTAR-RC/networking-midonet | 7a69af3eab25f57e77738fd8398b6f4854346fd9 | [
"Apache-2.0"
] | null | null | null | midonet/neutron/services/l2gateway/plugin.py | NeCTAR-RC/networking-midonet | 7a69af3eab25f57e77738fd8398b6f4854346fd9 | [
"Apache-2.0"
] | null | null | null | midonet/neutron/services/l2gateway/plugin.py | NeCTAR-RC/networking-midonet | 7a69af3eab25f57e77738fd8398b6f4854346fd9 | [
"Apache-2.0"
] | null | null | null | # Copyright (C) 2015 Midokura SARL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from neutron_lib.api import validators
from oslo_log import helpers as log_helpers
from oslo_log import log as logging
from oslo_utils import excutils
from networking_l2gw import extensions as l2gateway_ext
from networking_l2gw.services.l2gateway.common import l2gw_validators
from networking_l2gw.services.l2gateway import plugin as l2gw_plugin
from neutron.api import extensions as neutron_extensions
from midonet.neutron.common import constants as mido_const
from midonet.neutron.db import l2gateway_midonet as l2gw_db
from midonet.neutron.services.l2gateway.common import l2gw_midonet_validators
LOG = logging.getLogger(__name__)
class MidonetL2GatewayPlugin(l2gw_plugin.L2GatewayPlugin,
l2gw_db.MidonetL2GatewayMixin):
"""Implementation of the Neutron l2 gateway Service Plugin.
This class manages the workflow of Midonet l2 Gateway request/response.
The base plugin methods are overridden because the MidoNet driver requires
specific ordering of events. For creation, the Neutron data must be
created first, with the resource UUID generated. Also, for both creation
and deletion, by invoking the Neutron DB methods first, all the
validations, such as 'check_admin()' are executed prior to attempting to
modify the MidoNet data, preventing potential data inconsistency.
"""
def __init__(self):
# Dynamically change the validators so that they are applicable to
# the MidoNet implementation of L2GW.
# REVISIT(yamamoto): These validator modifications should not
# have been here in the first place. We should either put them
# in upstream or remove them.
l2gw_validators.validate_gwdevice_list = (l2gw_midonet_validators.
validate_gwdevice_list)
val_type = validators._to_validation_type('l2gwdevice_list')
validators.validators.pop(val_type, None)
validators.add_validator(
val_type,
l2gw_midonet_validators.validate_gwdevice_list)
l2gw_validators.validate_network_mapping_list = (
l2gw_midonet_validators.
validate_network_mapping_list_without_seg_id_validation)
neutron_extensions.append_api_extensions_path(l2gateway_ext.__path__)
super(MidonetL2GatewayPlugin, self).__init__()
def add_port_mac(self, context, port_dict):
# This function is not implemented now in MidoNet plugin.
# We block this function in plugin level to prevent from loading
# l2gw driver in upstream.
self._get_driver_for_provider(mido_const.MIDONET_L2GW_PROVIDER
).add_port_mac(context, port_dict)
def delete_port_mac(self, context, port):
# This function is not implemented now in MidoNet plugin.
# We block this function in plugin level to prevent from loading
# l2gw driver in upstream.
self._get_driver_for_provider(mido_const.MIDONET_L2GW_PROVIDER
).delete_port_mac(context, port)
def create_l2_gateway(self, context, l2_gateway):
# Gateway Device Management Service must be enabled
# when Midonet L2 Gateway is used.
self._check_and_get_gw_dev_service()
self.validate_l2_gateway_for_create(context, l2_gateway)
return l2gw_db.MidonetL2GatewayMixin.create_l2_gateway(
self, context, l2_gateway)
@log_helpers.log_method_call
def create_l2_gateway_connection(self, context, l2_gateway_connection):
self.validate_l2_gateway_connection_for_create(
context, l2_gateway_connection)
l2_gw_conn = (l2gw_db.MidonetL2GatewayMixin.
create_l2_gateway_connection(
self, context, l2_gateway_connection))
# Copy over the ID so that the MidoNet driver knows about it. ID is
# necessary for MidoNet to process its translation.
gw_connection = l2_gateway_connection[self.connection_resource]
gw_connection["id"] = l2_gw_conn["id"]
try:
self._get_driver_for_provider(mido_const.MIDONET_L2GW_PROVIDER
).create_l2_gateway_connection(
context, l2_gateway_connection)
except Exception as ex:
with excutils.save_and_reraise_exception():
LOG.error("Failed to create a l2 gateway connection "
"%(gw_conn_id)s in Midonet:%(err)s",
{"gw_conn_id": l2_gw_conn["id"], "err": ex})
try:
l2gw_db.MidonetL2GatewayMixin.delete_l2_gateway_connection(
self, context, l2_gw_conn["id"])
except Exception:
LOG.exception("Failed to delete a l2 gateway conn %s",
l2_gw_conn["id"])
return l2_gw_conn
@log_helpers.log_method_call
def delete_l2_gateway_connection(self, context, l2_gateway_connection):
l2gw_db.MidonetL2GatewayMixin.delete_l2_gateway_connection(
self, context, l2_gateway_connection)
self._get_driver_for_provider(mido_const.MIDONET_L2GW_PROVIDER
).delete_l2_gateway_connection(
context, l2_gateway_connection)
| 47.555556 | 79 | 0.691255 | 739 | 5,992 | 5.331529 | 0.301759 | 0.061675 | 0.08198 | 0.046701 | 0.374112 | 0.284264 | 0.236548 | 0.195939 | 0.195939 | 0.148731 | 0 | 0.018157 | 0.255507 | 5,992 | 125 | 80 | 47.936 | 0.865053 | 0.315921 | 0 | 0.185714 | 0 | 0 | 0.036954 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0 | 0.157143 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d848f8dd8085e1bf86cb047117735a5685ffbd13 | 1,781 | py | Python | setup.py | mcrowson/wunderpy2 | a3a959d1a3569ccb0869adba10e671978609a697 | [
"MIT"
] | null | null | null | setup.py | mcrowson/wunderpy2 | a3a959d1a3569ccb0869adba10e671978609a697 | [
"MIT"
] | null | null | null | setup.py | mcrowson/wunderpy2 | a3a959d1a3569ccb0869adba10e671978609a697 | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
from codecs import open
import os.path
import sys
script_dir = os.path.abspath(os.path.dirname(__file__))
def read(*paths):
"""Build a file path from *paths* and return the contents."""
with open(os.path.join(*paths), 'r') as f:
return f.read()
# argparse is only a builtin in 2.7
# I don't plan to support 2.6, but just in case I do in the future
install_requires = ['requests', 'six']
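# sys.hexversion packs the version as 0xMMmmPPss, so 0x02070000 == Python 2.7.0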
if sys.hexversion < 0x02070000:
install_requires.append('argparse')
setup(
name='wunderpy2',
version='0.1.4',
description='A Python library for the Wunderlist 2 REST API',
# Idea credit of https://hynek.me/articles/sharing-your-labor-of-love-pypi-quick-and-dirty/
long_description=(read('README.rst') + '\n\n' +
read('HISTORY.rst') + '\n\n' +
read('AUTHORS.rst')),
url='https://github.com/mieubrisse/wunderpy2',
author='mieubrisse',
author_email='mieubrisse@gmail.com',
license='MIT',
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Utilities',
'Natural Language :: English',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
],
keywords='wunderpy wunderpy2 wunderlist api cli',
packages=find_packages(exclude=['contrib', 'docs', 'tests*']),
install_requires=install_requires,
)
| 36.346939 | 95 | 0.632229 | 218 | 1,781 | 5.105505 | 0.573395 | 0.085355 | 0.112309 | 0.093441 | 0.048518 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021708 | 0.224031 | 1,781 | 48 | 96 | 37.104167 | 0.783647 | 0.137563 | 0 | 0 | 0 | 0 | 0.442408 | 0 | 0 | 0 | 0.006545 | 0 | 0 | 1 | 0.025 | false | 0 | 0.1 | 0 | 0.15 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8491385a7cb1fe2a3fcabf28f8d930e00a5e6f3 | 612 | py | Python | mpos/web/manager.py | cackharot/ngen-milk-pos | 4814bdbc6bddf02530ff10e1ec842fb316b0fa91 | [
"Apache-2.0"
] | null | null | null | mpos/web/manager.py | cackharot/ngen-milk-pos | 4814bdbc6bddf02530ff10e1ec842fb316b0fa91 | [
"Apache-2.0"
] | null | null | null | mpos/web/manager.py | cackharot/ngen-milk-pos | 4814bdbc6bddf02530ff10e1ec842fb316b0fa91 | [
"Apache-2.0"
] | 1 | 2019-04-24T06:11:47.000Z | 2019-04-24T06:11:47.000Z | # Set the path
import os
import sys

from flask_script import Manager, Server

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

from web import app

manager = Manager(app)

# Development server: debugger and reloader turned on by default
manager.add_command("run", Server(
    use_debugger=True,
    use_reloader=True,
    host='0.0.0.0',
    # processes=3,
    threaded=True,
    port=4000)
)

# Production server: debugger and reloader turned off
manager.add_command("prod", Server(
    use_debugger=False,
    use_reloader=False,
    host='127.0.0.1',
    port=80)
)

if __name__ == "__main__":
    manager.run() | 19.741935 | 79 | 0.691176 | 92 | 612 | 4.391304 | 0.48913 | 0.019802 | 0.069307 | 0.079208 | 0.252475 | 0.252475 | 0.252475 | 0.252475 | 0.252475 | 0.252475 | 0 | 0.03373 | 0.176471 | 612 | 31 | 80 | 19.741935 | 0.767857 | 0.173203 | 0 | 0 | 0 | 0 | 0.065737 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.190476 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d84b963aacb5fb2dab3e77cf74727cfedec95c03 | 323 | py | Python | setup.py | khsk/Python-App-Capture | a0b893765558f144399ec31f1f11fb0b30025cc7 | [
"MIT"
] | null | null | null | setup.py | khsk/Python-App-Capture | a0b893765558f144399ec31f1f11fb0b30025cc7 | [
"MIT"
] | null | null | null | setup.py | khsk/Python-App-Capture | a0b893765558f144399ec31f1f11fb0b30025cc7 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Tue Oct 03 15:54:20 2017
@author: y-takeuchi
"""
from cx_Freeze import setup, Executable
exe = Executable(script = 'capture.py', base = 'Win32Gui')
setup(name='AppCapture',
      version='0.1',
      description='Save Screen',
executables = [exe]) | 17.944444 | 60 | 0.585139 | 39 | 323 | 4.820513 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07113 | 0.260062 | 323 | 18 | 61 | 17.944444 | 0.715481 | 0.244582 | 0 | 0 | 0 | 0 | 0.190909 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d84bc5b6f7292dc9f40fd92ef12317fa084962da | 2,731 | py | Python | mosquitto-1.5.4/test/broker/08-ssl-bridge.py | RainaWLK/mqtt-test | cb4175c8bd1e35deed45941ca61c88fdcc6ddeba | [
"MIT"
] | null | null | null | mosquitto-1.5.4/test/broker/08-ssl-bridge.py | RainaWLK/mqtt-test | cb4175c8bd1e35deed45941ca61c88fdcc6ddeba | [
"MIT"
] | null | null | null | mosquitto-1.5.4/test/broker/08-ssl-bridge.py | RainaWLK/mqtt-test | cb4175c8bd1e35deed45941ca61c88fdcc6ddeba | [
"MIT"
] | 1 | 2021-06-19T17:17:41.000Z | 2021-06-19T17:17:41.000Z | #!/usr/bin/env python
import subprocess
import socket
import ssl
import inspect, os, sys

# From http://stackoverflow.com/questions/279237/python-import-a-module-from-a-folder
cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile( inspect.currentframe() ))[0],"..")))
if cmd_subfolder not in sys.path:
    sys.path.insert(0, cmd_subfolder)

import mosq_test


def write_config(filename, port1, port2):
    with open(filename, 'w') as f:
        f.write("port %d\n" % (port2))
        f.write("\n")
        f.write("connection bridge_test\n")
        f.write("address 127.0.0.1:%d\n" % (port1))
        f.write("topic bridge/# both 0\n")
        f.write("notifications false\n")
        f.write("restart_timeout 2\n")
        f.write("\n")
        f.write("bridge_cafile ../ssl/all-ca.crt\n")
        f.write("bridge_insecure true\n")
(port1, port2) = mosq_test.get_port(2)
conf_file = os.path.basename(__file__).replace('.py', '.conf')
write_config(conf_file, port1, port2)

rc = 1
keepalive = 60
client_id = socket.gethostname() + ".bridge_test"

connect_packet = mosq_test.gen_connect(client_id, keepalive=keepalive, clean_session=False, proto_ver=128+4)
connack_packet = mosq_test.gen_connack(rc=0)

mid = 1
subscribe_packet = mosq_test.gen_subscribe(mid, "bridge/#", 0)
suback_packet = mosq_test.gen_suback(mid, 0)

publish_packet = mosq_test.gen_publish("bridge/ssl/test", qos=0, payload="message")

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ssock = ssl.wrap_socket(sock, ca_certs="../ssl/all-ca.crt", keyfile="../ssl/server.key", certfile="../ssl/server.crt", server_side=True, ssl_version=ssl.PROTOCOL_TLSv1)
ssock.settimeout(20)
ssock.bind(('', port1))
ssock.listen(5)

broker = mosq_test.start_broker(filename=os.path.basename(__file__), port=port2, use_conf=True)

try:
    (bridge, address) = ssock.accept()
    bridge.settimeout(20)

    if mosq_test.expect_packet(bridge, "connect", connect_packet):
        bridge.send(connack_packet)

        if mosq_test.expect_packet(bridge, "subscribe", subscribe_packet):
            bridge.send(suback_packet)

            pub = subprocess.Popen(['./08-ssl-bridge-helper.py', str(port2)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            pub.wait()
            (stdo, stde) = pub.communicate()

            if mosq_test.expect_packet(bridge, "publish", publish_packet):
                rc = 0

    bridge.close()
finally:
    os.remove(conf_file)
    try:
        bridge.close()
    except NameError:
        pass

    broker.terminate()
    broker.wait()
    (stdo, stde) = broker.communicate()
    if rc:
        print(stde)
    ssock.close()

exit(rc)
| 31.390805 | 168 | 0.680337 | 389 | 2,731 | 4.606684 | 0.383033 | 0.049107 | 0.027344 | 0.047433 | 0.061384 | 0.046875 | 0 | 0 | 0 | 0 | 0 | 0.021978 | 0.166972 | 2,731 | 86 | 169 | 31.755814 | 0.765714 | 0.038081 | 0 | 0.092308 | 0 | 0 | 0.125381 | 0.009527 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0.015385 | 0.076923 | 0 | 0.092308 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d84c3bb5b6974c0f95b489673269ce950a277333 | 8,658 | py | Python | models/architecture/vaegan/trainer.py | EmmaNguyen/feature_adversarial_with_topology_signatures | efa7db6d0fdf5b2505d67d4341dcdb2ab05a97a7 | [
"MIT"
] | 1 | 2018-10-08T09:29:51.000Z | 2018-10-08T09:29:51.000Z | models/architecture/vaegan/trainer.py | EmmaNguyen/feature_adversarial_with_topology_signatures | efa7db6d0fdf5b2505d67d4341dcdb2ab05a97a7 | [
"MIT"
] | 4 | 2018-06-30T18:06:47.000Z | 2018-08-16T02:01:59.000Z | models/architecture/vaegan/trainer.py | EmmaNguyen/feature_adversarial_with_topology_signatures | efa7db6d0fdf5b2505d67d4341dcdb2ab05a97a7 | [
"MIT"
] | null | null | null | import numpy as np
import torch
import torch.nn.functional as F
from torch.autograd import Variable

from .distributions import rand_circle2d
from ot import gromov_wasserstein2, unif


def rand_projections(embedding_dim, num_samples=50):
    """Generate `num_samples` random directions on the latent space's unit sphere.

    Args:
        embedding_dim (int): embedding dimension size
        num_samples (int): number of random projection samples

    Return:
        torch.Tensor
    """
    theta = [w / np.sqrt((w**2).sum()) for w in np.random.normal(size=(num_samples, embedding_dim))]
    theta = np.asarray(theta)
    return torch.from_numpy(theta).type(torch.FloatTensor)
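# Example (illustrative, not part of the original file): draw 5 random unit
# directions in a 2-D latent space.  Each row of `theta` has unit L2 norm, so
# codes.matmul(theta.transpose(0, 1)) projects a batch of codes onto 5
# random 1-D "slices".
#   theta = rand_projections(embedding_dim=2, num_samples=5)
#   assert theta.shape == (5, 2)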
def _sliced_wasserstein_distance(encoded_samples, distribution_samples, num_projections=50, p=2):
    """Sliced Wasserstein distance between encoded samples and drawn distribution samples.

    Args:
        encoded_samples (torch.Tensor): embedded training tensor samples
        distribution_samples (torch.Tensor): distribution training tensor samples
        num_projections (int): number of projections used to approximate the sliced Wasserstein distance
        p (int): power of distance metric

    Return:
        torch.Tensor
    """
    # derive latent space dimension size from random samples drawn from a distribution in it
    embedding_dim = distribution_samples.size(1)
    # generate random projections in latent space
    projections = rand_projections(embedding_dim, num_projections)
    # calculate projection of the encoded samples
    encoded_projections = encoded_samples.matmul(projections.transpose(0, 1))
    # calculate projection of the random distribution samples
    distribution_projections = distribution_samples.matmul(projections.transpose(0, 1))
    # approximate the sliced Wasserstein distance by sorting the samples per
    # projection and taking the difference between the sorted encoded and
    # drawn samples per projection
    wasserstein_distance = torch.sort(encoded_projections.transpose(0, 1), dim=1)[0] - torch.sort(distribution_projections.transpose(0, 1), dim=1)[0]
    # distance between them (L2 by default for Wasserstein-2)
    wasserstein_distance_p = torch.pow(wasserstein_distance, p)
    # averaging over projections approximates the sliced Wasserstein distance
    return wasserstein_distance_p.mean()
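# Example (illustrative): the sliced distance between two batches of 2-D codes
# is a non-negative scalar tensor, and it is exactly 0 for identical batches.
#   a = torch.randn(64, 2)
#   d = _sliced_wasserstein_distance(a, a.clone(), num_projections=50, p=2)
#   assert float(d) == 0.0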
def sliced_wasserstein_distance(encoded_samples, distribution_fn=rand_circle2d, num_projections=50, p=2):
    """Sliced Wasserstein distance between encoded samples and samples drawn from a prior.

    Args:
        encoded_samples (torch.Tensor): embedded training tensor samples
        distribution_fn (callable): callable to draw random samples
        num_projections (int): number of projections used to approximate the sliced Wasserstein distance
        p (int): power of distance metric

    Return:
        torch.Tensor
    """
    # derive batch size from encoded samples
    batch_size = encoded_samples.size(0)
    # draw samples from latent space prior distribution
    z = distribution_fn(batch_size)
    # approximate the Wasserstein distance between encoded and prior
    # distributions, averaged over the projections
    swd = _sliced_wasserstein_distance(encoded_samples, z, num_projections, p)
    return swd
def _topology_persistence(encoded_samples, distribution_samples, num_projections=50, p=2):
    prior_subscripted_views = distribution_samples
    posterior_subscripted_views = encoded_samples
    # NOTE: AdversariallearnerBatchTrainer is not defined or imported in this
    # module, so this adversarial preamble only runs if it is supplied
    # elsewhere.  `bce` is never used below, and F.binary_cross_entropy also
    # expects a target tensor in addition to the prediction.
    adversarial_learner = AdversariallearnerBatchTrainer()
    adversarial_learner.train_on_batch(prior_subscripted_views)
    posterior_pred = adversarial_learner.eval_on_batch(posterior_subscripted_views)
    bce = F.binary_cross_entropy(posterior_pred)
    # derive latent space dimension size from random samples drawn from a distribution in it
    embedding_dim = distribution_samples.size(1)
    # generate random projections in latent space
    projections = rand_projections(embedding_dim, num_projections)
    # calculate projection of the encoded samples
    #import pdb; pdb.set_trace()
    Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
    encoded_projections = encoded_samples.matmul(projections.transpose(0, 1).cuda())
    # calculate projection of the random distribution samples
    distribution_projections = distribution_samples.matmul(projections.transpose(0, 1))
    # approximate the sliced Wasserstein distance by sorting the samples per
    # projection and taking the per-projection differences between the sorted
    # encoded samples and the sorted drawn samples
    wasserstein_distance = torch.sort(encoded_projections.transpose(0, 1).cuda(), dim=1)[0] - torch.sort(distribution_projections.transpose(0, 1).cuda(), dim=1)[0]
    # distance between them (L2 by default for Wasserstein-2)
    wasserstein_distance_p = torch.pow(wasserstein_distance, p)
    # average over projections
    return wasserstein_distance_p.mean()
def topology_persistence(encoded_samples, distribution_fn=rand_circle2d, num_projections=50, p=2):
    batch_size = encoded_samples.size(0)
    z = distribution_fn(batch_size)
    return _topology_persistence(encoded_samples, z, num_projections, p)
def gromov_wasserstein_distance(X, Y, device):
    import scipy as sp
    import scipy.spatial  # provides sp.spatial.distance.cdist
    # import pdb; pdb.set_trace()
    mb_size = X.size(0)
    gw_dist = np.zeros(mb_size)
    Tensor = torch.FloatTensor
    # compute the per-sample Gromov-Wasserstein distances serially
    for i in range(mb_size):
        # Reshape each flattened sample back to a 28x28 image and build the
        # intra-image pairwise distance matrices.
        C1 = sp.spatial.distance.cdist(X[i, :].reshape(28, 28).data.cpu().numpy(), X[i, :].reshape(28, 28).data.cpu().numpy())
        C2 = sp.spatial.distance.cdist(Y[i, :].reshape(28, 28).data.cpu().numpy(), Y[i, :].reshape(28, 28).data.cpu().numpy())
        C1 /= C1.max()
        C2 /= C2.max()
        p = unif(28)
        q = unif(28)
        gw_dist[i] = gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss', epsilon=5e-4)
    print("*" * 100)
    return Variable(Tensor(gw_dist), requires_grad=True).sum()
class SWAEBatchTrainer:
    """Sliced Wasserstein Autoencoder batch trainer.

    Args:
        autoencoder (torch.nn.Module): module which implements the autoencoder framework
        optimizer (torch.optim.Optimizer): torch optimizer
        distribution_fn (callable): callable to draw random samples
        num_projections (int): number of projections used to approximate the sliced Wasserstein distance
        p (int): power of distance metric
        weight_swd (float): weight of the divergence metric relative to reconstruction in the loss
        device (torch.Device): torch device
    """
    def __init__(self, autoencoder, optimizer, distribution_fn,
                 num_projections=50, p=2, weight_swd=10.0, device=None):
        self.model_ = autoencoder
        self.optimizer = optimizer
        self._distribution_fn = distribution_fn
        self.embedding_dim_ = self.model_.encoder.embedding_dim_
        self.num_projections_ = num_projections
        self.p_ = p
        self.weight_swd = weight_swd
        self._device = device if device else torch.device('cpu')

    def __call__(self, x):
        return self.eval_on_batch(x)

    def train_on_batch(self, x):
        # reset gradients
        self.optimizer.zero_grad()
        # autoencoder forward pass and loss
        evals = self.eval_on_batch(x)
        # backpropagate loss
        evals['loss'].backward()
        # update encoder and decoder parameters
        self.optimizer.step()
        return evals

    def test_on_batch(self, x):
        # reset gradients
        self.optimizer.zero_grad()
        # autoencoder forward pass and loss
        evals = self.eval_on_batch(x)
        return evals

    def eval_on_batch(self, x):
        x = x.to(self._device)
        recon_x, z = self.model_(x)
        # Equation 4 - reconstruction term (Gromov-Wasserstein); this only works for 1D
        gw = gromov_wasserstein_distance(recon_x, x, self._device)
        # Equation 15 - divergence term; this only works for 2D
        entropy = float(self.weight_swd) * topology_persistence(z, self._distribution_fn, self.num_projections_, self.p_)
        # Equation 16: the original Keras implementation uses (bce + l1) as the
        # first term and w2 as the second; here the loss is gw + entropy.
        loss = gw + entropy
        return {'loss': loss, 'gw': gw, 'entropy': entropy, 'encode': z, 'decode': recon_x}
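# --- Usage sketch (added for illustration; `autoencoder`, `optimizer` and
# `data_loader` are placeholders, not objects defined in this repository) ---
#   trainer = SWAEBatchTrainer(autoencoder, optimizer, rand_circle2d,
#                              num_projections=50, p=2, weight_swd=10.0)
#   for batch, _ in data_loader:
#       evals = trainer.train_on_batch(batch)
#       print(evals['loss'].item())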
| 46.299465 | 195 | 0.71125 | 1,096 | 8,658 | 5.443431 | 0.211679 | 0.073248 | 0.041904 | 0.029501 | 0.563024 | 0.521958 | 0.501341 | 0.488099 | 0.456252 | 0.422058 | 0 | 0.015604 | 0.208016 | 8,658 | 186 | 196 | 46.548387 | 0.854455 | 0.377916 | 0 | 0.227273 | 0 | 0 | 0.008641 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.079545 | 0.011364 | 0.329545 | 0.011364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d84cdf2cbca845f67fc205a391078d2af1f1badc | 475 | py | Python | image_action.py | abhishekchetani/ML_18june | 4a6465259c7d0de0cbdc12c1c9f10dd6f925883d | [
"Apache-2.0"
] | null | null | null | image_action.py | abhishekchetani/ML_18june | 4a6465259c7d0de0cbdc12c1c9f10dd6f925883d | [
"Apache-2.0"
] | null | null | null | image_action.py | abhishekchetani/ML_18june | 4a6465259c7d0de0cbdc12c1c9f10dd6f925883d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
import cv2
img = cv2.imread("/home/abhishek/Desktop/tracks.jpeg")
cv2.line(img,(0,0),(236,236),(100,54,255),3)
cv2.rectangle(img,(199,112),(325,238),(0,0,255),2)
cv2.circle(img,(262,175),60,(255,200,0),3)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(img,'TRAIN',(210,270),font,1,(90,200,140),cv2.LINE_4)
cv2.imshow("actions",img)
cv2.imwrite("/home/abhishek/Desktop/lines.jpeg",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
| 23.75 | 65 | 0.661053 | 81 | 475 | 3.839506 | 0.580247 | 0.057878 | 0.122187 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189573 | 0.111579 | 475 | 19 | 66 | 25 | 0.547393 | 0.033684 | 0 | 0 | 0 | 0 | 0.172489 | 0.146288 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.090909 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d84e2e63426049e55a4ce07d524f85ba7b495330 | 14,662 | py | Python | GMM_nDim3.py | Sharut/My-Hybrid-GMM-SVM-Model | 68f0ab9b86dbb0ca3d1e63f2df0dcc4c7066e424 | [
"MIT"
] | 1 | 2019-06-07T13:22:57.000Z | 2019-06-07T13:22:57.000Z | GMM_nDim3.py | Sharut/My-Hybrid-GMM-SVM-Model | 68f0ab9b86dbb0ca3d1e63f2df0dcc4c7066e424 | [
"MIT"
] | null | null | null | GMM_nDim3.py | Sharut/My-Hybrid-GMM-SVM-Model | 68f0ab9b86dbb0ca3d1e63f2df0dcc4c7066e424 | [
"MIT"
] | 1 | 2020-08-30T06:49:25.000Z | 2020-08-30T06:49:25.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri May 24 09:08:48 2019

@author: uiet_mac1
"""
import numpy as np
import random as rd
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
#import hungarian as hg


def random_parameters(data, K):
    """K is the number of Gaussians; if the dimension is d, each mean is d x 1.
    Initialize the means, covariances and mixing coefficients."""
    cols = (data.shape)[1]
    #print(len(data))
    mu = np.zeros((K, cols))  # means of the K clusters, K x D
    for k in range(K):
        idx = np.floor(rd.random()*len(data))
        for col in range(cols):
            mu[k][col] += (data[int(idx)][col])
    sigma = []
    for k in range(K):
        sigma.append(np.cov(data.T))
    pi = np.ones(K)*1.0/K
    print(mu)
    print(sigma)
    return mu, sigma, pi


def e_step(data, K, mu, sigma, pi):
    idvs = (data.shape)[0]
    #cols = (data.shape)[1]
    #print("idvs is " + str(idvs))
    resp = np.zeros((idvs, K))
    for i in range(idvs):
        for k in range(K):
            resp[i][k] = pi[k]*gaussian(data[i], mu[k], sigma[k])/likelihood(data[i], K, mu, sigma, pi)
    #print("responsibilities are")
    #print(resp)
    return resp


def log_likelihood(data, K, mu, sigma, pi):
    """Log of the marginal over X."""
    log_likelihood = 0.0
    for n in range(len(data)):
        log_likelihood += np.log(likelihood(data[n], K, mu, sigma, pi))
    return log_likelihood


def likelihood(x, K, mu, sigma, pi):
    rs = 0.0
    for k in range(K):
        rs += pi[k]*gaussian(x, mu[k], sigma[k])
    return rs


def m_step(data, K, resp):
    """Find the parameters that maximize the log-likelihood given the current responsibilities."""
    idvs = (data.shape)[0]
    cols = (data.shape)[1]
    mu = np.zeros((K, cols))
    sigma = np.zeros((K, cols, cols))
    pi = np.zeros(K)
    marg_resp = np.zeros(K)
    for k in range(K):
        for i in range(idvs):
            marg_resp[k] += resp[i][k]
            mu[k] += (resp[i][k])*data[i]
        mu[k] /= marg_resp[k]
        for i in range(idvs):
            #x_i = (np.zeros((1,cols))+data[k])
            x_mu = np.zeros((1, cols)) + data[i] - mu[k]
            sigma[k] += (resp[i][k]/marg_resp[k])*x_mu*x_mu.T
        pi[k] = marg_resp[k]/idvs
    return mu, sigma, pi


def gaussian(x, mu, sigma):
    """Compute the pdf of the multivariate Gaussian."""
    idvs = len(x)
    norm_factor = (2*np.pi)**idvs
    norm_factor *= np.linalg.det(sigma)
    norm_factor = 1.0/np.sqrt(norm_factor)
    x_mu = np.matrix(x - mu)
    rs = norm_factor*np.exp(-0.5*x_mu*np.linalg.inv(sigma)*x_mu.T)
    return rs
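# Example (illustrative): the 2-D standard normal density at the origin is
# 1 / (2*pi), approximately 0.1592.
#   print(gaussian(np.zeros(2), np.zeros(2), np.eye(2)))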
def EM(data, rst, K, threshold):
    converged = False
    mu, sigma, pi = random_parameters(data, K)
    likelihood_list = []
    current_log_likelihood = log_likelihood(data, K, mu, sigma, pi)
    max_iter = 100
    for it in range(max_iter):
        likelihood_list.append(float(current_log_likelihood[0][0]))
        print(rst, " | ", it, " | ", current_log_likelihood[0][0])
        #print("Mixing proportion is ", pi)
        resp = e_step(data, K, mu, sigma, pi)
        mu, sigma, pi = m_step(data, K, resp)
        new_log_likelihood = log_likelihood(data, K, mu, sigma, pi)
        if (abs(new_log_likelihood - current_log_likelihood) < threshold):
            converged = True
            break
        current_log_likelihood = new_log_likelihood
    print(converged)
    plt.plot(likelihood_list)
    plt.ylabel('log likelihood')
    plt.show()
    return current_log_likelihood, mu, sigma, pi, resp
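# Example (illustrative): fit a 2-component mixture to synthetic 2-D data;
# `rst` only labels the progress printout.
#   pts = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
#   ll, mu, sigma, pi, resp = EM(pts, rst=0, K=2, threshold=0.001)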
#######################################################################
def assign_clusters(K, resp):
    idvs = len(resp)
    clusters = np.zeros(idvs, dtype=int)
    for i in range(idvs):
        #clusters[i][k] = 0
        clss = 0
        for k in range(K):
            if resp[i][k] > resp[i][clss]:
                clss = k
                resp[i][clss] = resp[i][k]
        clusters[i] = clss
    return clusters
'''
def compute_statistics(clusters, ref_clusters, K):
mat = make_ce_matrix(clusters, ref_clusters, K)
#hung_solver = hg.Hungarian()
rs = hung_solver.compute(mat, False)
tmp_clusters = np.array(clusters)
for old, new in rs:
clusters[np.where(tmp_clusters == old)] = new
#print old, new
#print clusters, ref_clusters
nbrIts = 0
for k in range(K):
ref = np.where(ref_clusters == k)[0]
clust = np.where(clusters == k)[0]
nbrIts += len(np.intersect1d(ref, clust))
print(len(np.intersect1d(ref, clust)))
return nbrIts
def make_ce_matrix(clusters, ref_clusters, K):
mat = np.zeros((K, K), dtype=int)
for i in range(K):
for j in xrange(K):
ref_i = np.where(ref_clusters == i)[0]
clust_j = np.where(clusters == j)[0]
its = np.intersect1d(ref_i, clust_j)
mat[i,j] = len(ref_i) + len(clust_j) -2*len(its)
return mat
'''
########################################################################
def read_data(file_name):
    """Read the data from file_name as a numpy array."""
    with open(file_name) as f:
        data = np.loadtxt(f, delimiter=",", dtype="float",
                          skiprows=0, usecols=(0, 1, 2, 3))
    with open(file_name) as f:
        ref_classes = np.loadtxt(f, delimiter=",", dtype="str",
                                 skiprows=0, usecols=[4])
    unique_ref_classes = np.unique(ref_classes)
    ref_clusters = np.argmax(ref_classes[np.newaxis, :] == unique_ref_classes[:, np.newaxis], axis=0)
    return data, ref_clusters
def f(t):
    return t
def plot_ellipse(ax, mu, sigma, color="k"):
    """
    Based on
    http://stackoverflow.com/questions/17952171/not-sure-how-to-fit-data-with-a-gaussian-python.
    """
    # Compute eigenvalues and associated eigenvectors
    vals, vecs = np.linalg.eigh(sigma)

    # Compute "tilt" of ellipse using first eigenvector
    x, y = vecs[:, 0]
    theta = np.degrees(np.arctan2(y, x))

    # Eigenvalues give the length of the ellipse along each eigenvector
    w, h = 2 * np.sqrt(vals)
    ax.tick_params(axis='both', which='major', labelsize=20)
    ellipse = Ellipse(mu, w, h, theta, color=color)  # color="k"
    ellipse.set_clip_box(ax.bbox)
    ellipse.set_alpha(0.2)
    ax.add_artist(ellipse)
def error_ellipse(mu, cov, ax=None, factor=1.0, **kwargs):
    """
    Plot the error ellipse at a point given its covariance matrix.
    """
    # some sane defaults
    facecolor = kwargs.pop('facecolor', 'none')
    edgecolor = kwargs.pop('edgecolor', 'k')

    x, y = mu
    U, S, V = np.linalg.svd(cov)
    theta = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
    ellipsePlot = Ellipse(xy=[x, y],
                          width=2 * np.sqrt(S[0]) * factor,
                          height=2 * np.sqrt(S[1]) * factor,
                          angle=theta,
                          facecolor=facecolor, edgecolor=edgecolor, **kwargs)

    if ax is None:
        ax = plt.gca()
    ax.add_patch(ellipsePlot)
    return ellipsePlot
def _plot_gaussian(mean, covariance, color, zorder=0):
    """Plots the mean and 2-std ellipse of a given Gaussian."""
    plt.plot(mean[0], mean[1], color[0] + ".", zorder=zorder)

    if covariance.ndim == 1:
        covariance = np.diag(covariance)

    radius = np.sqrt(5.991)
    eigvals, eigvecs = np.linalg.eig(covariance)
    axis = np.sqrt(eigvals) * radius
    slope = eigvecs[1][0] / eigvecs[1][1]
    angle = 180.0 * np.arctan(slope) / np.pi

    plt.axes().add_artist(Ellipse(
        mean, 2 * axis[0], 2 * axis[1], angle=angle,
        fill=False, color=color, linewidth=1, zorder=zorder
    ))
    plt.show()
def _plot_cov_ellipse(cov, pos, nstd=2, ax=None, **kwargs):
    """
    Plots an `nstd` sigma error ellipse based on the specified covariance
    matrix (`cov`).  Additional keyword arguments are passed on to the
    ellipse patch artist.

    Parameters
    ----------
    cov : The 2x2 covariance matrix to base the ellipse on
    pos : The location of the center of the ellipse. Expects a 2-element
        sequence of [x0, y0].
    nstd : The radius of the ellipse in numbers of standard deviations.
        Defaults to 2 standard deviations.
    ax : The axis that the ellipse will be plotted on. Defaults to the
        current axis.
    Additional keyword arguments are passed on to the ellipse patch.

    Returns
    -------
    A matplotlib ellipse artist
    """
    from matplotlib import pyplot as plt
    from matplotlib.patches import Ellipse

    def eigsorted(cov):
        vals, vecs = np.linalg.eigh(cov)
        order = vals.argsort()[::-1]
        return vals[order], vecs[:, order]

    if ax is None:
        ax = plt.gca()

    vals, vecs = eigsorted(cov)
    theta = np.degrees(np.arctan2(*vecs[:, 0][::-1]))

    # Width and height are "full" widths, not radii
    width, height = 2 * nstd * np.sqrt(vals)
    ellip = Ellipse(xy=pos, width=width, height=height, angle=theta,
                    **kwargs)

    ax.add_artist(ellip)
    plt.show()
    return ellip
def main():
    print("beginning...")
    file_name = "iris.data"
    nbr_restarts = 5
    threshold = 0.001
    K = 3
    data, ref_clusters = read_data(file_name)
    print("#restart | EM iteration | log likelihood")
    print("----------------------------------------")
    max_likelihood_score = float("-inf")
    for rst in range(nbr_restarts):
        log_likelihood, mu, sigma, pi, resp = EM(data, rst, K, threshold)
        if log_likelihood > max_likelihood_score:
            max_likelihood_score = log_likelihood
            max_mu, max_sigma, max_pi, max_resp = mu, sigma, pi, resp
        #print("Iteration is " + str(rst))
        #print("mixing is ")
        #print(max_pi)
        #print("mean is ")
        #print(max_mu)
        #print("sigma is ")
        #print(max_sigma)
    #print(max_mu, max_sigma, max_pi)
    print("mean matrix is ")
    print(max_mu)
    clusters = assign_clusters(K, max_resp)
    #cost = compute_statistics(clusters, ref_clusters, K)
    print(clusters)
    print(ref_clusters)
    #print(cost*1.0/len(data))

    from mpl_toolkits.mplot3d import Axes3D
    # 3-D scatter with the first three variables on the axes and the fourth as color:
    import matplotlib.pyplot as plt
    fig = plt.figure(figsize=(15, 12))
    ax = fig.add_subplot(111, projection='3d')
    sp = ax.scatter(data[:, 0], data[:, 1], data[:, 2], s=20, c=data[:, 3])
    fig.colorbar(sp)
    plt.show()

    from sklearn.manifold import TSNE
    data = np.concatenate((data, mu), axis=0)
    print(data)
    X = np.array(data)
    #means = np.array(mu)
    '''
    X_embedded = TSNE(n_components=1).fit_transform(X)
    print("!!!!")
    figs = plt.figure(figsize=(15, 12))
    plt.plot(X_embedded, 'ro')
    plt.plot(X_embedded[150:153], 'g^')
    t1 = np.linspace(0, 140, 100)
    plt.plot(t1, [X_embedded[150]]*100, 'g^')
    plt.plot(t1, [X_embedded[151]]*100, 'g^')
    plt.plot(t1, [X_embedded[152]]*100, 'g^')
    plt.ylabel('some numbers')
    plt.show()
    '''
    X_embedded = TSNE(n_components=2).fit_transform(X)
    print(X_embedded)
    print("!!!!")
    figs = plt.figure(figsize=(15, 12))
    plt.plot(X_embedded[0:150, 0], X_embedded[0:150, 1], 'ro')
    plt.plot(X_embedded[150:153, 0], X_embedded[150:153, 1], 'g^')
    plt.ylabel('some numbers')

    # Pool each 4x4 covariance down to 2x2 by summing its 2x2 blocks.
    A = np.matrix(max_sigma[0])
    N, M = A.shape
    assert N % 2 == 0
    assert M % 2 == 0
    A0 = np.empty((N//2, M//2))
    for i in range(N//2):
        for j in range(M//2):
            A0[i, j] = A[2*i:2*i+2, 2*j:2*j+2].sum()
    A = np.matrix(max_sigma[1])
    N, M = A.shape
    assert N % 2 == 0
    assert M % 2 == 0
    A1 = np.empty((N//2, M//2))
    for i in range(N//2):
        for j in range(M//2):
            A1[i, j] = A[2*i:2*i+2, 2*j:2*j+2].sum()
    A = np.matrix(max_sigma[2])
    N, M = A.shape
    assert N % 2 == 0
    assert M % 2 == 0
    A2 = np.empty((N//2, M//2))
    for i in range(N//2):
        for j in range(M//2):
            A2[i, j] = A[2*i:2*i+2, 2*j:2*j+2].sum()
    print(A0)
    print(A1)
    print(A2)
    print(X_embedded[150, :])
    #_plot_cov_ellipse(A0, X_embedded[150, :])
    mean = X_embedded[150, :]
    covariance = A0
    plt.plot(mean[0], mean[1], 'g' + ".", zorder=0)
    if covariance.ndim == 1:
        covariance = np.diag(covariance)
    radius = np.sqrt(5.991)
    eigvals, eigvecs = np.linalg.eig(covariance)
    axis = np.sqrt(eigvals) * radius
    slope = eigvecs[1][0] / eigvecs[1][1]
    angle = 180.0 * np.arctan(slope) / np.pi
    plt.axes().add_artist(Ellipse(
        mean, 2 * axis[0], 2 * axis[1], angle=angle,
        fill=False, color='g', linewidth=1, zorder=0
    ))
    mean = X_embedded[151, :]
    covariance = A1
    plt.plot(mean[0], mean[1], 'g' + ".", zorder=0)
    if covariance.ndim == 1:
        covariance = np.diag(covariance)
    radius = np.sqrt(5.991)
    eigvals, eigvecs = np.linalg.eig(covariance)
    axis = np.sqrt(eigvals) * radius
    slope = eigvecs[1][0] / eigvecs[1][1]
    angle = 180.0 * np.arctan(slope) / np.pi
    plt.axes().add_artist(Ellipse(
        mean, 2 * axis[0], 2 * axis[1], angle=angle,
        fill=False, color='g', linewidth=1, zorder=0
    ))
    mean = X_embedded[152, :]
    covariance = A2
    plt.plot(mean[0], mean[1], 'g' + ".", zorder=0)
    if covariance.ndim == 1:
        covariance = np.diag(covariance)
    radius = np.sqrt(5.991)
    eigvals, eigvecs = np.linalg.eig(covariance)
    axis = np.sqrt(eigvals) * radius
    slope = eigvecs[1][0] / eigvecs[1][1]
    angle = 180.0 * np.arctan(slope) / np.pi
    plt.axes().add_artist(Ellipse(
        mean, 2 * axis[0], 2 * axis[1], angle=angle,
        fill=False, color='g', linewidth=1, zorder=0
    ))
    plt.show()
    #_plot_gaussian(X_embedded[150, :], A0, 'r')
    #error_ellipse(X_embedded[150, :], A0)
    #plot_ellipse(plt, X_embedded[150, :], A0)
    #np.savetxt("mu.txt", max_mu)
    return max_mu
if __name__ == '__main__':
    main() | 29.033663 | 103 | 0.559132 | 2,096 | 14,662 | 3.81584 | 0.166985 | 0.019255 | 0.016879 | 0.010003 | 0.384471 | 0.311453 | 0.245436 | 0.213303 | 0.195049 | 0.185046 | 0 | 0.034831 | 0.283317 | 14,662 | 505 | 104 | 29.033663 | 0.726304 | 0.149366 | 0 | 0.352941 | 0 | 0 | 0.022873 | 0.003781 | 0 | 0 | 0 | 0 | 0.022059 | 1 | 0.058824 | false | 0 | 0.033088 | 0.003676 | 0.143382 | 0.066176 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d854e1572c3ce2b3c51dea839fbb388e61fd565b | 535 | py | Python | li_hang/test/test_knn.py | LucienShui/HelloMachineLearning | b00a4b3791808ace3b1e45112350c2b3c539995e | [
"Apache-2.0"
] | 2 | 2019-07-28T08:25:40.000Z | 2019-07-29T05:29:10.000Z | li_hang/test/test_knn.py | LucienShui/HelloMachineLearning | b00a4b3791808ace3b1e45112350c2b3c539995e | [
"Apache-2.0"
] | null | null | null | li_hang/test/test_knn.py | LucienShui/HelloMachineLearning | b00a4b3791808ace3b1e45112350c2b3c539995e | [
"Apache-2.0"
] | null | null | null | import unittest
import logging
import numpy

from knn import KNN


class MyTestCase(unittest.TestCase):
    def test_something(self):
        logging.basicConfig()
        dataset = numpy.array([
            [[5, 4], 1],
            [[9, 6], 1],
            [[4, 7], 1],
            [[2, 3], -1],
            [[8, 1], -1],
            [[7, 2], -1]
        ])
        knn = KNN(dataset, 1)
        test_point = numpy.array([5, 3])
        self.assertEqual(knn.predict(test_point), 1)


if __name__ == '__main__':
    unittest.main()
| 18.448276 | 52 | 0.48785 | 63 | 535 | 3.968254 | 0.492063 | 0.08 | 0.088 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063401 | 0.351402 | 535 | 28 | 53 | 19.107143 | 0.657061 | 0 | 0 | 0 | 0 | 0 | 0.014953 | 0 | 0 | 0 | 0 | 0 | 0.05 | 1 | 0.05 | false | 0 | 0.2 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8560e6218ec99112b9cb038f1f87fe00535d31f | 2,130 | py | Python | src/taming.py | dwaybright/g729a_python | a9c78d9a6b2934c9742f63e3ade225fe4aee245e | [
"Unlicense"
] | null | null | null | src/taming.py | dwaybright/g729a_python | a9c78d9a6b2934c9742f63e3ade225fe4aee245e | [
"Unlicense"
] | null | null | null | src/taming.py | dwaybright/g729a_python | a9c78d9a6b2934c9742f63e3ade225fe4aee245e | [
"Unlicense"
] | null | null | null | from basic_op import *
from ld8a import *
from tab_ld8a import *

L_exc_err = [0] * 4


def Init_exc_err() -> None:
    global L_exc_err
    for i in range(0, 4):
        L_exc_err[i] = MAX_INT_14  # Q14
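# Illustrative note (assuming MAX_INT_14 == 0x4000 == 2**14, matching the
# ITU-T G.729 reference): a Q14 value v represents the real number v / 2**14,
# so resetting every history slot to 0x4000 means "excitation error = 1.0".
#   Init_exc_err()
#   # L_exc_err == [16384, 16384, 16384, 16384]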
def test_err(T0: int, T0_frac: int) -> int:
    """
    # (o) flag set to 1 if taming is necessary
    # (i) T0 - integer part of pitch delay
    # (i) T0_frac - fractional part of pitch delay
    """
    if T0_frac > 0:
        t1 = add(T0, 1)
    else:
        t1 = T0

    i = sub(t1, (L_SUBFR + L_INTER10))
    if i < 0:
        i = 0
    zone1 = tab_tab_zone[i]

    i = add(t1, (L_INTER10 - 2))
    zone2 = tab_tab_zone[i]

    L_maxloc = -1
    flag = 0
    # scan the error history from zone2 down to zone1, as in the ITU-T
    # G.729 reference implementation
    for i in range(zone2, zone1 - 1, -1):
        L_acc = L_sub(L_exc_err[i], L_maxloc)
        if L_acc > 0:
            L_maxloc = L_exc_err[i]

    L_acc = L_sub(L_maxloc, L_THRESH_ERR)
    if L_acc > 0:
        flag = 1

    return flag
def update_exc_err(gain_pit: int, T0: int) -> None:
    """
    # (i) pitch gain
    # (i) integer part of pitch delay
    """
    L_worst = -1
    n = sub(T0, L_SUBFR)

    if n < 0:
        hi, lo = L_Extract(L_exc_err[0])
        L_temp = Mpy_32_16(hi, lo, gain_pit)
        L_temp = L_shl(L_temp, 1)
        L_temp = L_add(MAX_INT_14, L_temp)
        L_acc = L_sub(L_temp, L_worst)
        if L_acc > 0:
            L_worst = L_temp

        hi, lo = L_Extract(L_temp)
        L_temp = Mpy_32_16(hi, lo, gain_pit)
        L_temp = L_shl(L_temp, 1)
        L_temp = L_add(MAX_INT_14, L_temp)
        L_acc = L_sub(L_temp, L_worst)
        if L_acc > 0:
            L_worst = L_temp
    else:
        zone1 = tab_tab_zone[n]
        i = sub(T0, 1)
        zone2 = tab_tab_zone[i]
        for i in range(zone1, zone2 + 1):
            hi, lo = L_Extract(L_exc_err[i])
            L_temp = Mpy_32_16(hi, lo, gain_pit)
            L_temp = L_shl(L_temp, 1)
            L_temp = L_add(MAX_INT_14, L_temp)
            L_acc = L_sub(L_temp, L_worst)
            if L_acc > 0:
                L_worst = L_temp

    # shift the error history and store the new worst-case value
    for i in range(3, 0, -1):
        L_exc_err[i] = L_exc_err[i-1]
    L_exc_err[0] = L_worst | 21.958763 | 51 | 0.530047 | 380 | 2,130 | 2.655263 | 0.181579 | 0.109019 | 0.077304 | 0.047572 | 0.477701 | 0.326065 | 0.326065 | 0.288404 | 0.288404 | 0.288404 | 0 | 0.059942 | 0.357746 | 2,130 | 96 | 52 | 22.1875 | 0.677632 | 0.088732 | 0 | 0.387097 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048387 | false | 0 | 0.048387 | 0 | 0.112903 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d85ca52402346be7dfaf6277ede793e7a996a2e4 | 1,176 | py | Python | db_create.py | abmorton/stockhawk | b5f4d188a8f9420898f2390b01741c87a17ebbbd | [
"MIT"
] | 7 | 2015-11-11T22:55:49.000Z | 2021-06-03T17:23:59.000Z | db_create.py | abmorton/stockhawk | b5f4d188a8f9420898f2390b01741c87a17ebbbd | [
"MIT"
] | null | null | null | db_create.py | abmorton/stockhawk | b5f4d188a8f9420898f2390b01741c87a17ebbbd | [
"MIT"
] | 3 | 2016-01-19T02:23:14.000Z | 2018-08-03T12:20:07.000Z | from app import db
from models import *
import datetime
# create the db and tables
db.create_all()
# prepare data to insert
year = 1982
month = 4
day = 3
birthday = datetime.date(year, month, day)
now = datetime.datetime.now()
today = datetime.date(now.year, now.month, now.day)
yesterday = datetime.date(now.year, now.month, 13)
# insert data
adam = User("adam", "abmorton@gmail.com", "testpw", yesterday)
# db.session.add(User("admin", "admin@admin.com", "adminpw", today))
db.session.add(adam)
db.session.commit()
# make a Portfolio
port = Portfolio(adam.id)
db.session.add(port)
db.session.commit()
# add a stock
db.session.add(Stock("XOMA", "XOMA Corporation", "NGM", "0.9929", None, None, None, "117.74M", 1))
db.session.commit()
# get a stock instance for later use creating other records
stock = Stock.query.get(1)
# make some trades
db.session.add(Trade(stock.symbol, 1, 10, yesterday, None, None, None))
db.session.add(Trade(stock.symbol, 1.20, -5, today, None, None, None))
# make a Position
# pos = Position(port.id, )
# position = Position(1)
# insert the data requiring ForeignKeys & relationship()
# commit changes
db.session.commit() | 21.381818 | 98 | 0.706633 | 182 | 1,176 | 4.56044 | 0.412088 | 0.108434 | 0.086747 | 0.045783 | 0.13494 | 0.13494 | 0.06988 | 0 | 0 | 0 | 0 | 0.027723 | 0.141156 | 1,176 | 55 | 99 | 21.381818 | 0.794059 | 0.311224 | 0 | 0.173913 | 0 | 0 | 0.080301 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d85fa73be967336630b8bccd9bd0353e0af7dd9d | 879 | py | Python | test/libraryData_BulkUpdates.py | masqu3rad3/tik_manager | 59821670e87a2af753a59cc70924c5f0aad8ad51 | [
"BSD-3-Clause"
] | 26 | 2019-05-05T04:52:38.000Z | 2022-01-27T19:25:27.000Z | test/libraryData_BulkUpdates.py | masqu3rad3/tik_manager | 59821670e87a2af753a59cc70924c5f0aad8ad51 | [
"BSD-3-Clause"
] | null | null | null | test/libraryData_BulkUpdates.py | masqu3rad3/tik_manager | 59821670e87a2af753a59cc70924c5f0aad8ad51 | [
"BSD-3-Clause"
] | 5 | 2020-02-14T06:43:07.000Z | 2021-08-13T09:58:44.000Z | from tik_manager import assetLibrary
reload(assetLibrary)  # Python 2 builtin; use importlib.reload on Python 3
import pprint
import time

pathList = ["E:\\backup\\_CharactersLibrary", "E:\\backup\\_BalikKrakerAssetLibrary", "E:\\backup\\_AssetLibrary", "M:\\Projects\\_CharactersLibrary", "M:\\Projects\\_BalikKrakerAssetLibrary", "M:\\Projects\\_AssetLibrary"]

for path in pathList:
    lib = assetLibrary.AssetLibrary(path)
    lib.scanAssets()
    for item in lib.assetsList:
        data = lib._getData(item)
        # data["sourceProject"] = "Maya(ma)"
        # data["notes"] = "N/A"
        # data["version"] = "N/A"
        # if data["Faces/Triangles"] == "Nothing counted : no polygonal object is selected./Nothing counted : no polygonal object is selected.":
        #     data["Faces/Triangles"] = "N/A"
        data["notes"] = ""
        # data["Faces/Triangles"] = data["Faces/Trianges"]
        lib._setData(item, data)
| 43.95 | 223 | 0.651877 | 96 | 879 | 5.875 | 0.447917 | 0.06383 | 0.095745 | 0.088652 | 0.14539 | 0.14539 | 0.14539 | 0 | 0 | 0 | 0 | 0 | 0.185438 | 879 | 19 | 224 | 46.263158 | 0.78771 | 0.341297 | 0 | 0 | 0 | 0 | 0.337413 | 0.328671 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d862e5191af1e26ac32d9cdf7c011969df1241d6 | 997 | py | Python | video_reader.py | evgenevolkov/Automated-car-tracker-and-plates-reader | 5cee11b654bb8cfd20d081198af43b56811d2107 | [
"MIT"
] | 3 | 2020-10-15T14:32:36.000Z | 2022-03-08T20:56:58.000Z | video_reader.py | evgenevolkov/Automated-car-tracker-and-plates-reader | 5cee11b654bb8cfd20d081198af43b56811d2107 | [
"MIT"
] | 2 | 2022-02-09T23:51:20.000Z | 2022-02-10T02:25:10.000Z | video_reader.py | evgenevolkov/Automated-car-tracker-and-plates-reader | 5cee11b654bb8cfd20d081198af43b56811d2107 | [
"MIT"
] | 2 | 2021-04-07T11:56:20.000Z | 2022-01-28T22:25:36.000Z | # import necessary packages
import cv2
import config

DEBUG = config.DEBUG


class Reader:
    def __init__(self, source):
        if DEBUG:
            print('[INFO, reader]: reader module loaded')
        # if source:
        self.vs = None
        self.set_source(source)
        # else:
        #     print('[INFO, reader]: videosource not defined, using camera')
        #     self.vs = cv2.VideoCapture(0)
        self.start_frame_number = 0

    def set_start_frame_no(self, frame_no):
        self.vs.set(cv2.CAP_PROP_POS_FRAMES, frame_no)

    def set_source(self, source):
        # use the SOURCE_VID file as source, otherwise fall back to the camera
        if source:
            self.vs = cv2.VideoCapture(source)
            if DEBUG:
                print('[INFO, reader]: videofile ' + source + ' successfully opened')
        else:
            print('[ERR, reader]: no source file provided, using camera as source')

    def read(self):
        ret, frame = self.vs.read()
        return ret, frame | 29.323529 | 84 | 0.594784 | 123 | 997 | 4.682927 | 0.398374 | 0.052083 | 0.078125 | 0.0625 | 0.097222 | 0.097222 | 0 | 0 | 0 | 0 | 0 | 0.008683 | 0.306921 | 997 | 34 | 85 | 29.323529 | 0.824891 | 0.194584 | 0 | 0.090909 | 0 | 0 | 0.179423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.363636 | 0.136364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d863b2417d20fc0b71005243f57ab636233f418d | 2,605 | py | Python | Harry_Poter_Cloak/harry_potter_cloak.py | SusovanGithub/SusovanGithub-OpenCV_projects | bff292a976e0e48c8b4094607878133e70395029 | [
"MIT"
] | 1 | 2021-05-18T15:49:54.000Z | 2021-05-18T15:49:54.000Z | Harry_Poter_Cloak/harry_potter_cloak.py | SusovanGithub/SusovanGithub-OpenCV_projects | bff292a976e0e48c8b4094607878133e70395029 | [
"MIT"
] | null | null | null | Harry_Poter_Cloak/harry_potter_cloak.py | SusovanGithub/SusovanGithub-OpenCV_projects | bff292a976e0e48c8b4094607878133e70395029 | [
"MIT"
] | null | null | null | import cv2
import numpy as np


# no-op callback required by the trackbars
def empty(a):
    pass


# * creating the window
windowName = 'Color Detection in HSV Space'  # window name
cv2.namedWindow(windowName)  # window creation

# * adding the trackbars
cv2.createTrackbar('HUE min', windowName, 0, 179, empty)
cv2.createTrackbar('HUE max', windowName, 179, 179, empty)
cv2.createTrackbar('SAT min', windowName, 0, 255, empty)
cv2.createTrackbar('SAT max', windowName, 255, 255, empty)
cv2.createTrackbar('Value min', windowName, 0, 255, empty)
cv2.createTrackbar('Value max', windowName, 255, 255, empty)
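# Note: for 8-bit images OpenCV stores Hue in [0, 179] (degrees divided by 2),
# while Saturation and Value use the full [0, 255] range - hence the different
# maximums on the sliders above.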
# * creating the webcam instance
cam = cv2.VideoCapture(0)

# grab one clean background frame before the cloak enters the scene
while True:
    cv2.waitKey(1000)
    isTrue, initial_frame = cam.read()
    if isTrue:
        break

# * start video rolling
while True:
    isTrue, frame = cam.read()  # reading the frames
    # * converting the frame to the HSV color space
    framehsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # * getting the trackbar values
    h_min = cv2.getTrackbarPos('HUE min', windowName)
    h_max = cv2.getTrackbarPos('HUE max', windowName)
    s_min = cv2.getTrackbarPos('SAT min', windowName)
    s_max = cv2.getTrackbarPos('SAT max', windowName)
    v_min = cv2.getTrackbarPos('Value min', windowName)
    v_max = cv2.getTrackbarPos('Value max', windowName)
    # creating the lower and upper range
    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    # creating the mask (255 where the pixel falls inside the HSV range)
    mask = cv2.inRange(framehsv, lower, upper)
    mask = cv2.medianBlur(mask, 3)
    mask_inv = 255 - mask  # inverted mask: 0 on the cloak, 255 everywhere else
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=5)
    # blank out the cloak area (black) in the live frame
    b = frame[:, :, 0]
    g = frame[:, :, 1]
    r = frame[:, :, 2]
    b = cv2.bitwise_and(b, mask_inv)
    g = cv2.bitwise_and(g, mask_inv)
    r = cv2.bitwise_and(r, mask_inv)
    black_blanket_frame = cv2.merge([b, g, r])
    # cut the cloak area out of the initial (background) frame
    b = initial_frame[:, :, 0]
    g = initial_frame[:, :, 1]
    r = initial_frame[:, :, 2]
    b = cv2.bitwise_and(b, mask)
    g = cv2.bitwise_and(g, mask)
    r = cv2.bitwise_and(r, mask)
    initial_blanket_frame = cv2.merge([b, g, r])
    # result output: the background shows through wherever the cloak was
    result = cv2.bitwise_or(black_blanket_frame, initial_blanket_frame)
    # stacking the output
    stackimgs = np.hstack([frame, result])
    # * display
    cv2.imshow(windowName, stackimgs)
    # * exit on ESC
    if cv2.waitKey(1) & 0xFF == 27:
        break

cam.release()  # releasing the camera instance
cv2.destroyAllWindows() # Destroing the windows | 28.010753 | 70 | 0.664491 | 365 | 2,605 | 4.643836 | 0.30411 | 0.041298 | 0.046018 | 0.044248 | 0.19174 | 0.147493 | 0.102655 | 0.029499 | 0 | 0 | 0 | 0.042418 | 0.212668 | 2,605 | 93 | 71 | 28.010753 | 0.784008 | 0.187716 | 0 | 0.072727 | 0 | 0 | 0.057252 | 0 | 0 | 0 | 0.001908 | 0 | 0 | 1 | 0.018182 | false | 0.018182 | 0.036364 | 0 | 0.054545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8644fc985adc50f63489cffe3bfe8417550597e | 5,575 | py | Python | autosrc.py | pwnwikiorg/AutoSRC | 4cee92b2ae0e4f024059840a0b84d49f5e125e94 | [
"MIT"
] | 44 | 2021-07-12T05:45:47.000Z | 2021-09-24T13:49:39.000Z | autosrc.py | mama2100/AutoSRC | 4cee92b2ae0e4f024059840a0b84d49f5e125e94 | [
"MIT"
] | null | null | null | autosrc.py | mama2100/AutoSRC | 4cee92b2ae0e4f024059840a0b84d49f5e125e94 | [
"MIT"
] | 15 | 2021-07-12T05:48:25.000Z | 2021-09-10T07:56:55.000Z | #!/usr/bin/python
# -*- coding: utf-8 -*-
import os
import subprocess
import requests
import argparse
import base64
import sys
import json
import codecs


def dec_data(byte_data: bytes):
    try:
        return byte_data.decode('UTF-8')
    except UnicodeDecodeError:
        return byte_data.decode('GB18030')


def get_files(path):
    all_files = []
    for root, dirs, files in os.walk(path):
        all_files = files
    return all_files
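# Example (illustrative): list the payload scripts shipped with the tool.
#   print(get_files("./payload/"))   # e.g. ['some_payload.py', ...]
# Note: because `all_files` is overwritten on every os.walk() iteration, the
# function returns the file list of the last directory visited; for a flat
# directory such as ./payload/ that is the top level itself.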
def automation():
    get_payload_dir = get_files("./payload/")
    get_result_dir = get_files("./fofa_file/")
    for i in get_payload_dir:
        print("\033[1;32m ================================================================\033[0m")
        print("\033[1;32m Starting vulnerability check for %s\033[0m" % (i))
        print("\033[1;32m Checking, please wait......\033[0m")
        print("\033[1;32m ================================================================\033[0m")
        for j in get_result_dir:
            if j == i + ".txt":
                p = subprocess.Popen('python3 "./payload/%s" -f "./fofa_file/%s"' % (i, j), shell=True,
                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
                while p.poll() is None:
                    line = p.stdout.readline().strip()
                    if line:
                        line = dec_data(line)
                        # the payload scripts print '不' ("not") when a target
                        # is NOT vulnerable; keep only the vulnerable hits
                        x = line.find('不', 0, len(line))
                        if x == -1:
                            # strip the ANSI color codes from the output line
                            result = (line.replace("\033[1;36m", "")
                                          .replace("\033[1;32m", " ")
                                          .replace("\033[36m[o] ", " ")
                                          .replace("\033[0m", " "))
                            print(result)
                            f = open("./results/" + i + "_OK.txt", 'a', encoding='utf-8')
                            f.write(result + "\n")
                            f.close()
def banner():
    print("""
\033[1;36m ___ \033[0m
\033[1;36m ,--.'|_ \033[0m
\033[1;36m ,--, | | :,' ,---. __ ,-. \033[0m
\033[1;36m ,'_ /| : : ' : ' ,'\ .--.--. ,' ,'/ /| \033[0m
\033[1;36m ,--.--. .--. | | :.;__,' / / / | / / ' ' | |' | ,---. \033[0m
\033[1;36m / \ ,'_ /| : . || | | . ; ,. :| : /`./ | | ,'/ \ \033[0m
\033[1;36m .--. .-. | | ' | | . .:__,'| : ' | |: :| : ;_ ' : / / / ' \033[0m
\033[1;36m \__\/: . . | | ' | | | ' : |__' | .; : \ \ `. | | ' . ' / \033[0m
\033[1;36m ," .--.; | : | : ; ; | | | '.'| : | `----. \; : | ' ; :__ \033[0m
\033[1;36m / / ,. | ' : `--' \ ; : ;\ \ / / /`--' /| , ; ' | '.'| \033[0m
\033[1;36m; : .' \: , .-./ | , / `----' '--'. / ---' | : : \033[0m
\033[1;36m| , .-./ `--`----' ---`-' `--'---' \ \ / \033[0m
\033[1;36m `--`---' `----' \033[0m
""")
    print('\033[1;36m Usage\033[0m')
    print('\033[1;36m python3 autosrc.py -e/--email email -k/--key key\033[0m')
    print('\033[1;36m python3 autosrc.py -h/--help\033[0m')
if len(sys.argv) == 1:
    banner()
    sys.exit()

parser = argparse.ArgumentParser(description='autosrcfofaapi help')
parser.add_argument('-e', '--email', help='Please input an email!', default='')
parser.add_argument('-k', '--key', help='Please input a key!', default='')
args = parser.parse_args()
email = args.email
key = args.key

url = "https://fofa.so/api/v1/info/my?email=" + email + "&key=" + key
header = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded"
}
response = requests.get(url, headers=header)
if 'errmsg' not in response.text:
    print("\033[1;32mBoth email and key are valid\033[0m")
    get_payload_dir = get_files("./payload/")
    print(get_payload_dir)
    for i in get_payload_dir:
        f = codecs.open("./payload/" + i, mode='r', encoding='utf-8')
        line = f.readline()
        sentence = line.strip("#")
        print(sentence)
        print("\033[1;36mFOFA query >>>\033[0m" + sentence)
        sentence = base64.b64encode(sentence.encode('utf-8')).decode("utf-8")
        url = "https://fofa.so/api/v1/search/all?email=" + email + "&key=" + key + "&qbase64=" + sentence
        response = requests.get(url, headers=header)
        if 'errmsg' not in response.text:
            print("\033[1;36mSaved to the \033[0m\033[1;32mfofa_file directory\033[0m")
            r1 = json.loads(response.text)
            for k in r1['results']:
                s = k[0]
                print(s)
                f = open("./fofa_file/" + i + ".txt", 'a', encoding='utf-8')
                f.write(s + "\n")
        else:
            print("\033[1;31mInvalid FOFA query\033[0m")
else:
    print("\033[1;31mInvalid email or key\033[0m")

print("\033[1;34m[INFO]\033[0m Success")
print("\033[1;32m ================================================================\033[0m")
print("\033[1;32m FOFA collection finished; starting vulnerability checks\033[0m")
print("\033[1;32m ================================================================\033[0m")
| 45.696721 | 136 | 0.411121 | 581 | 5,575 | 3.851979 | 0.292599 | 0.073727 | 0.068365 | 0.058088 | 0.316354 | 0.290438 | 0.226095 | 0.206434 | 0.186774 | 0.157283 | 0 | 0.106799 | 0.348341 | 5,575 | 121 | 137 | 46.07438 | 0.509221 | 0.006816 | 0 | 0.132075 | 0 | 0.09434 | 0.49372 | 0.087551 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037736 | false | 0 | 0.075472 | 0 | 0.141509 | 0.198113 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8681174b4934ada560118e7c8363f5ba24fcfa0 | 4,263 | py | Python | gym-kinova-gripper/Old Code/stuff.py | OSUrobotics/KinovaGrasping | f22af60d3683fdc4ffecf49ccff179fbc6750748 | [
"Linux-OpenIB"
] | 16 | 2020-05-16T00:40:31.000Z | 2022-02-22T11:59:03.000Z | gym-kinova-gripper/Old Code/stuff.py | OSUrobotics/KinovaGrasping | f22af60d3683fdc4ffecf49ccff179fbc6750748 | [
"Linux-OpenIB"
] | 9 | 2020-08-10T08:33:55.000Z | 2021-08-17T02:10:50.000Z | gym-kinova-gripper/Old Code/stuff.py | OSUrobotics/KinovaGrasping | f22af60d3683fdc4ffecf49ccff179fbc6750748 | [
"Linux-OpenIB"
] | 7 | 2020-07-27T09:45:05.000Z | 2021-06-21T21:42:50.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 13 09:59:13 2019
@author: orochi
"""
import numpy as np
import csv
from classifier_network import LinearNetwork
from classifier_network import ReducedLinearNetwork
import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
def calc_velocity(start, end):
    delta_t = 0.05
    #print(type(start), type(end))
    velocity = (end - start)/delta_t
    return velocity
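# Example (illustrative): with the fixed 0.05 s sample period, a joint value
# that changes from 0.0 to 0.1 between frames has velocity 2.0 units/s.
#   calc_velocity(np.array([0.0]), np.array([0.1]))   # -> array([2.])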
def normalize_vector(vector):
    #print(vector - np.min(vector))
    #print(np.max(vector) - np.min(vector))
    if (np.max(vector) - np.min(vector)) == 0:
        n_vector = np.ones(np.shape(vector))*0.5
    else:
        n_vector = (vector - np.min(vector))/(np.max(vector) - np.min(vector))
    return n_vector
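# Example (illustrative): min-max scaling maps each column onto [0, 1];
# constant columns are mapped to 0.5 to avoid dividing by zero.
#   normalize_vector(np.array([2.0, 4.0, 6.0]))   # -> array([0. , 0.5, 1. ])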
filenames=['Classifier_Data_Big_Cube.csv','Classifier_Data_Med_Cube.csv','Classifier_Data_Small_Cube.csv', \
'Classifier_Data_Big_Cylinder.csv','Classifier_Data_Med_Cylinder.csv','Classifier_Data_Small_Cylinder.csv']
a = []
column_names = []

# load the data into one big matrix called a
for k in range(6):
    with open('Classifier_Data/' + filenames[k]) as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            if line_count == 0:
                column_names.append(row)
                #print(f'Column names are {", ".join(row)}')
                #print(row[6], row[48])
                line_count += 1
            else:
                a.append(row)
                line_count += 1
        #print('here')
        #print(f'Processed {line_count} lines.')
#print(np.shape(a))

network = ReducedLinearNetwork()
network.zero_grad()
network.double()
b = np.shape(a)
print(b)
a = np.array(a, dtype='f')

# list of column indices to be removed; this arrangement removes the
# roll, pitch and yaw columns from the matrix a
c = np.arange(9, 42, 6)
d = np.arange(10, 42, 6)
e = np.arange(11, 42, 6)
f = np.arange(51, 87, 6)
g = np.arange(52, 87, 6)
h = np.arange(53, 87, 6)
#obj_pose = np.array([84, 85, 86])
c = np.concatenate((c, d, e, f, g, h))

# calculate the velocity of the fingers
for i in range(36):
    velocity = calc_velocity(a[:, i+6], a[:, i+48])
    a[:, i+6] = velocity

# normalize the entire table so that all inputs and outputs lie in [0, 1]
for i in range(b[1]):
    a[:, i] = normalize_vector(a[:, i])

# remove the unwanted columns listed in the array c
new_a = np.zeros([b[0], 69])
for i in range(b[0]):
    new_a[i, :] = np.delete(a[i, :], c)

# check that the right columns got deleted
column_names = np.delete(column_names, c)
print(column_names[0])
a = new_a
#print(a[:, -1])

running_loss = 0
learning_rate = 0.1
total_loss = []
total_time = []
num_epocs = 100
network = network.float()
for j in range(num_epocs):
    print(j)
    learning_rate = 0.1 - j/num_epocs*0.09  # decay the learning rate each epoch
    np.random.shuffle(a)
    running_loss = 0
    for i in range(b[0]):
        #network = network.float()
        #state = ego.convert_world_state_to_front()
        #ctrl_delta, ctrl_vel, err, interr, differr = controller.calc_steer_control(t[i], state, x_true, y_true, vel, network)
        input1 = a[i, :-1]
        #print(input1)
        network_input = torch.tensor(input1)
        #print(network_input)
        #print(a[i, -1])
        network_target = torch.tensor(a[i, -1])
        #network_target.reshape(1)
        network_input = network_input.float()
        #print(network_input)
        out = network(network_input)
        out.reshape(1)
        network.zero_grad()
        criterion = nn.MSELoss()
        loss = criterion(out, network_target)
        loss.backward()
        running_loss += loss.item()
        #print(out.data, network_target.data, out.data - network_target.data)
        #print(loss.item())
        # manual SGD parameter update
        for f in network.parameters():
            f.data.sub_(f.grad.data * learning_rate)
        if i % 1000 == 999:  # keep a tally of the loss and time so the training can be plotted
            print(running_loss)
            #print(loss.item(), out[0])
            total_loss.append(running_loss)
            total_time.append((i+1)/1000 + j*b[0]/1000)
            running_loss = 0

plt.plot(total_time, total_loss)
plt.show()
torch.save(network.state_dict(),'./full_trained_classifier_no_rpw_obj_pose.pth') | 31.577778 | 138 | 0.655407 | 676 | 4,263 | 3.989645 | 0.323965 | 0.007416 | 0.020393 | 0.031517 | 0.068224 | 0.034112 | 0 | 0 | 0 | 0 | 0 | 0.034808 | 0.204785 | 4,263 | 135 | 139 | 31.577778 | 0.760767 | 0.289937 | 0 | 0.122222 | 0 | 0 | 0.082581 | 0.076563 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022222 | false | 0 | 0.088889 | 0 | 0.133333 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8693362b05650b4c1b31dbc4438c95cc27c7e7b | 3,052 | py | Python | src/app.py | davidkowalk/Kalaha | 2b00fce97f5559c0527ec1c8addf3c488c46fccf | [
"MIT"
] | 1 | 2021-06-19T16:08:52.000Z | 2021-06-19T16:08:52.000Z | src/app.py | davidkowalk/Kalaha | 2b00fce97f5559c0527ec1c8addf3c488c46fccf | [
"MIT"
] | null | null | null | src/app.py | davidkowalk/Kalaha | 2b00fce97f5559c0527ec1c8addf3c488c46fccf | [
"MIT"
] | null | null | null | from Board import Board, code_to_list
from sys import argv
def print_layout():
    print("╔══╦══╦══╦══╦══╦══╦══╦══╗")
    print("║ ║ 6║ 5║ 4║ 3║ 2║ 1║ ║ <- Player 2")
    print("║ ╠══╬══╬══╬══╬══╬══╣ ║")
    print("║ ║ 1║ 2║ 3║ 4║ 5║ 6║ ║ <- Player 1")
    print("╚══╩══╩══╩══╩══╩══╩══╩══╝")


def lpad(s, length=2):
    num = len(s)
    return " "*(length - num) + s


def render(field):
    print("""
╔══╦══╦══╦══╦══╦══╦══╦══╗
║ ║{N}║{M}║{L}║{K}║{J}║{I}║ ║
║{A}╠══╬══╬══╬══╬══╬══╣{H}║
║ ║{B}║{C}║{D}║{E}║{F}║{G}║ ║
╚══╩══╩══╩══╩══╩══╩══╩══╝
""".format(
        A=lpad(str(field[0])),
        B=lpad(str(field[1])),
        C=lpad(str(field[2])),
        D=lpad(str(field[3])),
        E=lpad(str(field[4])),
        F=lpad(str(field[5])),
        G=lpad(str(field[6])),
        H=lpad(str(field[7])),
        I=lpad(str(field[8])),
        J=lpad(str(field[9])),
        K=lpad(str(field[10])),
        L=lpad(str(field[11])),
        M=lpad(str(field[12])),
        N=lpad(str(field[13]))
    )
    )
def get_index(board):
    if board.game_ended():
        return 0
    while True:
        i = input(f"Player {board.current_player+1}:")
        if i == "exit":
            # Print board representation
            print(f"Continue with code \"{board.get_code()}\"")
            print("> python3 ./app.py <code>")
            exit()
        elif i.isdigit() and 0 < int(i) < 7:
            return int(i) + board.current_player*7
        else:
            #print("Please select number from 1 to 6 or exit via \"exit\"\r\033[A\033[A")
            print("Please select a number from 1 to 6 or exit via \"exit\"")
def game_loop(b):
    while not b.ended:
        render(b.state)
        i = get_index(b)
        code = b.play(i)
        if code == 1:
            print("You can only play your own side.", end="")
        elif code == 2:
            print("You cannot play your Mancala.", end="")
        elif code == 3:
            print("The position you want to play must have a stone count higher than 0!", end="")
        elif code == 4:
            print("You ended in your Mancala. You may play again.", end="")
        elif code == 5:
            print(f"Player {(1-b.current_player)+1} took.", end="")
        elif code == -1:
            print(f"ERROR: Index {i} not on board....", end="")
        elif code == 6:
            print("Game Ended\n\n")
            winner = b.finalize()
            print(f"Player {winner+1} won!")
            render(b.state)
            break
        else:
            print(" "*90, end="")
        print(" "*30)
        #print("\r\033[A\033[A\033[A\033[A\033[A\033[A\033[A\033[A\033[A\033[A")  # Return to start
def main():
    #import colorama
    #colorama.init()
    if len(argv) > 1:
        state = code_to_list(argv[1])
        b = Board(state)
    else:
        b = Board()
    print("Layout")
    print_layout()
    print("\nGAME")
    game_loop(b)
if __name__ == '__main__':
    main()
| 28.259259 | 97 | 0.449541 | 450 | 3,052 | 3.393333 | 0.264444 | 0.068762 | 0.11002 | 0.05239 | 0.090373 | 0.083824 | 0.083824 | 0.083824 | 0.083824 | 0.083824 | 0 | 0.047059 | 0.331586 | 3,052 | 107 | 98 | 28.523364 | 0.614216 | 0.071756 | 0 | 0.05814 | 0 | 0 | 0.265039 | 0.079972 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069767 | false | 0 | 0.023256 | 0 | 0.127907 | 0.267442 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8709d89acee40ccaac332ee9c01a0773827a0af | 499 | py | Python | python/arrays/0048-rotate-image.py | karolinyoliveira/leetcode-ebbinghaus-practice | 5149e06f1c187b87e280fd58541c11d8ab8626d3 | [
"MIT"
] | 2 | 2021-05-28T03:41:39.000Z | 2021-10-19T16:53:16.000Z | python/arrays/0048-rotate-image.py | karolinyoliveira/leetcode-ebbinghaus-practice | 5149e06f1c187b87e280fd58541c11d8ab8626d3 | [
"MIT"
] | null | null | null | python/arrays/0048-rotate-image.py | karolinyoliveira/leetcode-ebbinghaus-practice | 5149e06f1c187b87e280fd58541c11d8ab8626d3 | [
"MIT"
] | null | null | null | from typing import List
def rotate(matrix: List[List[int]]) -> None:
for layer in range(len(matrix) // 2):
first = layer
last = len(matrix) - layer - 1
for i in range(first, last):
offset = i - first
top = matrix[first][i]
matrix[first][i] = matrix[last - offset][first]
matrix[last - offset][first] = matrix[last][last - offset]
matrix[last][last - offset] = matrix[i][last]
matrix[i][last] = top
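# Illustrative use (not part of the original file): rotate() rotates the
# matrix 90 degrees clockwise in place, cycling four cells per step, layer
# by layer.
#   m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
#   rotate(m)   # m is now [[7, 4, 1], [8, 5, 2], [9, 6, 3]]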
| 35.642857 | 70 | 0.541082 | 64 | 499 | 4.21875 | 0.328125 | 0.185185 | 0.088889 | 0.133333 | 0.325926 | 0.192593 | 0 | 0 | 0 | 0 | 0 | 0.005882 | 0.318637 | 499 | 13 | 71 | 38.384615 | 0.788235 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87268d377955c1f6efb88d6ef67f9df1b77d9d4 | 6,279 | py | Python | py-src/helper/img_transform.py | gabeoh/CarND-P01-LaneLines | 5a35a7698f5a2efeff70d5537fedae366c1e51a0 | [
"MIT"
] | null | null | null | py-src/helper/img_transform.py | gabeoh/CarND-P01-LaneLines | 5a35a7698f5a2efeff70d5537fedae366c1e51a0 | [
"MIT"
] | null | null | null | py-src/helper/img_transform.py | gabeoh/CarND-P01-LaneLines | 5a35a7698f5a2efeff70d5537fedae366c1e51a0 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
import cv2
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
# defining a blank mask to start with
mask = np.zeros_like(img)
# defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
# filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
# returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def find_aggregated_line(lines_x, lines_y, y_bottom, y_top):
"""
Find two end-points (bottom and top) of aggregated line for given line collection.
The endpoints are determined by given y coordinate range
:param lines_x: x coordinates of lines
:param lines_y: y coordinates of lines
:param y_bottom: bottom end y coordinate of aggregated line segment
:param y_top: top end y coordinate of aggregated line segment
:return: (x, y) coordinates of two end-points of aggregated line segment
"""
# First, make sure that lines_x and lines_y are non-empty same size arrays
assert(len(lines_x) > 0 and len(lines_x) == len(lines_y))
# Compute straight lines that fit line endpoints for left and right line segments
line_fit = np.polyfit(lines_x, lines_y, 1)
# Find start and end points for aggregated lines
x_bottom = int(round((y_bottom - line_fit[1]) / line_fit[0]))
x_top = int(round((y_top - line_fit[1]) / line_fit[0]))
return [(x_bottom, y_bottom), (x_top, y_top)]
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# Identify line end points of each line (separate into left and right lines)
lines_left_x = []
lines_left_y = []
lines_right_x = []
lines_right_y = []
xsize = img.shape[1]
x_middle = int(round(xsize / 2))
for line in lines:
for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # guard against vertical segments (division by zero)
            slope = (y2 - y1) / (x2 - x1)
if (slope > -0.9 and slope < -0.5) and (x1 < x_middle and x2 < x_middle):
lines_left_x.extend([x1, x2])
lines_left_y.extend([y1, y2])
elif (slope > 0.4 and slope < 0.8) and (x1 > x_middle and x2 > x_middle):
lines_right_x.extend([x1, x2])
lines_right_y.extend([y1, y2])
else:
#print('Ignore outlier lines - slope: %f, (%d, %d), (%d, %d)' % (slope, x1, y1, x2, y2))
pass
# Determine Y range for aggregated lines
ysize = img.shape[0]
y_bottom, y_top = ysize - 1, min(lines_left_y + lines_right_y)
# Find and draw aggregated lines for left and right line collections respectively
if (len(lines_left_x) > 0):
point_bottom, point_top = find_aggregated_line(lines_left_x, lines_left_y, y_bottom, y_top)
cv2.line(img, point_bottom, point_top, color, thickness)
if (len(lines_right_x) > 0):
point_bottom, point_top = find_aggregated_line(lines_right_x, lines_right_y, y_bottom, y_top)
cv2.line(img, point_bottom, point_top, color, thickness)
def draw_raw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
The original draw_lines function provided in the project
"""
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len,
maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
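# Illustrative end-to-end pipeline (an assumption; the parameter values are
# made up, but the helpers above form the standard grayscale/Canny/Hough lane
# detection chain):
#   gray = grayscale(image)
#   blurred = gaussian_blur(gray, kernel_size=5)
#   edges = canny(blurred, low_threshold=50, high_threshold=150)
#   masked = region_of_interest(edges, vertices)
#   lines = hough_lines(masked, rho=2, theta=np.pi / 180, threshold=15,
#                       min_line_len=40, max_line_gap=20)
#   result = weighted_img(lines, image)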
| 38.521472 | 104 | 0.672241 | 992 | 6,279 | 4.118952 | 0.25504 | 0.017621 | 0.010768 | 0.010768 | 0.21488 | 0.174743 | 0.111601 | 0.09349 | 0.077827 | 0.063632 | 0 | 0.023372 | 0.236821 | 6,279 | 162 | 105 | 38.759259 | 0.829299 | 0.47842 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 1 | 0.140625 | false | 0.015625 | 0.0625 | 0 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8794c7745220a34124f93774d760cb2a2e49b5f | 1,581 | py | Python | src/prepare_train_valid.py | partham16/demo_classification | d756ab150a1913c220f1048eda552483e88c01c1 | [
"MIT"
] | null | null | null | src/prepare_train_valid.py | partham16/demo_classification | d756ab150a1913c220f1048eda552483e88c01c1 | [
"MIT"
] | null | null | null | src/prepare_train_valid.py | partham16/demo_classification | d756ab150a1913c220f1048eda552483e88c01c1 | [
"MIT"
] | null | null | null | from typing import List, Tuple
import h2o
import pandas as pd
from sklearn.model_selection import train_test_split
from .config import Config
def get_train_valid(df: pd.DataFrame) -> Tuple[pd.DataFrame, ...]:
"""Get train - valid - test"""
full_train_df, test_df = train_test_split(
df,
test_size=Config.test_percent,
random_state=Config.test_seed,
stratify=df[Config.stratify_col].values,
)
train_df, valid_df = train_test_split(
full_train_df,
test_size=Config.valid_percent,
random_state=Config.valid_seed,
stratify=full_train_df[Config.stratify_col].values,
)
return full_train_df, train_df, valid_df, test_df
def get_h2o_train_valid(dfs: Tuple[pd.DataFrame, ...]) -> Tuple[h2o.H2OFrame, ...]:
"""Convert DataFrames to H2OFrames"""
full_train_df, train_df, valid_df, test_df = dfs
if not Config.use_full_train:
train = h2o.H2OFrame(train_df)
else:
train = h2o.H2OFrame(full_train_df)
valid = h2o.H2OFrame(valid_df)
test = h2o.H2OFrame(test_df)
return train, valid, test
def treat_categorical_cols(
    dfs: Tuple[h2o.H2OFrame, ...], cat_cols: List[str]
) -> Tuple[h2o.H2OFrame, h2o.H2OFrame, h2o.H2OFrame, List[str], str]:
"""Set categorical columns as factor"""
train, valid, test = dfs
x = train.columns
y = Config.target_col
x.remove(y)
train[y] = train[y].asfactor()
for col in cat_cols:
train[col] = train[col].asfactor()
valid[col] = valid[col].asfactor()
test[col] = test[col].asfactor()
return train, valid, test, x, y
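# Illustrative chaining of the helpers above (assumption: `df` is a pandas
# DataFrame containing Config.target_col and Config.stratify_col):
#   dfs = get_train_valid(df)
#   frames = get_h2o_train_valid(dfs)
#   train, valid, test, x, y = treat_categorical_cols(frames, cat_cols=["state"])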
| 28.745455 | 73 | 0.674889 | 226 | 1,581 | 4.486726 | 0.247788 | 0.069034 | 0.065089 | 0.04142 | 0.110454 | 0.061144 | 0.061144 | 0.061144 | 0.061144 | 0 | 0 | 0.013699 | 0.215054 | 1,581 | 54 | 74 | 29.277778 | 0.803384 | 0.056926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.121951 | 0 | 0.268293 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87c366bf70803b5e5a62ba14bdd8953959d7029 | 419 | py | Python | generate-text-replacements.py | clrcrl/tech-name-fixer | 4d5ab36aa28a1e2912e02c5ea33a3f8af8d0e77b | [
"Apache-2.0"
] | null | null | null | generate-text-replacements.py | clrcrl/tech-name-fixer | 4d5ab36aa28a1e2912e02c5ea33a3f8af8d0e77b | [
"Apache-2.0"
] | null | null | null | generate-text-replacements.py | clrcrl/tech-name-fixer | 4d5ab36aa28a1e2912e02c5ea33a3f8af8d0e77b | [
"Apache-2.0"
] | null | null | null | import csv
import plistlib as plist
SOURCE_FILE = "tech-names.csv"
snippets_array = []
with open(SOURCE_FILE, "rt") as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
snippets_array.append(
{"phrase": row["correct_spelling"], "shortcut": row["common_misspelling"]}
)
with open("tech-names.plist", "wb") as fp:
plist.dump(snippets_array, fp)
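# Illustrative tech-names.csv rows consumed above (column names taken from the
# DictReader accesses; the values are made up):
#   correct_spelling,common_misspelling
#   PostgreSQL,PostgresSQL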
| 23.277778 | 86 | 0.661098 | 54 | 419 | 5 | 0.592593 | 0.144444 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210024 | 419 | 17 | 87 | 24.647059 | 0.81571 | 0 | 0 | 0 | 0 | 0 | 0.195704 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87dbe03e3d17cde827cd192191f97d4763ebc9a | 4,524 | py | Python | cjax/continuation/_perturbed_arc_len_continuation.py | harsh306/continuation-jax | c1452604558764df9cd4770130b60035eea5c5b3 | [
"MIT"
] | 2 | 2022-01-26T18:02:51.000Z | 2022-02-15T01:36:39.000Z | cjax/continuation/_perturbed_arc_len_continuation.py | harsh306/continuation-jax | c1452604558764df9cd4770130b60035eea5c5b3 | [
"MIT"
] | null | null | null | cjax/continuation/_perturbed_arc_len_continuation.py | harsh306/continuation-jax | c1452604558764df9cd4770130b60035eea5c5b3 | [
"MIT"
] | 1 | 2022-02-15T01:37:50.000Z | 2022-02-15T01:37:50.000Z | from cjax.continuation._arc_len_continuation import PseudoArcLenContinuation
from cjax.continuation.states.state_variables import StateWriter
from cjax.continuation.methods.predictor.secant_predictor import SecantPredictor
from jax.experimental.optimizers import l2_norm
from cjax.continuation.methods.corrector.perturbed_constrained_corrector import (
PerturbedCorrecter,
)
import copy
from cjax.utils.profiler import profile
import gc
from cjax.utils.math_trees import pytree_relative_error
# TODO: make **kwargs available
class PerturbedPseudoArcLenContinuation(PseudoArcLenContinuation):
"""Noisy Pseudo Arc-length Continuation strategy.
Composed of secant predictor and noisy constrained corrector"""
def __init__(
self,
state,
bparam,
state_0,
bparam_0,
counter,
objective,
dual_objective,
hparams,
key_state,
):
super().__init__(
state,
bparam,
state_0,
bparam_0,
counter,
objective,
dual_objective,
hparams,
)
self.key_state = key_state
@profile(sort_by="cumulative", lines_to_print=10, strip_dirs=True)
def run(self):
"""Runs the continuation strategy.
A continuation strategy that defines how predictor and corrector components of the algorithm
interact with the states of the mathematical system.
"""
self.sw = StateWriter(f"{self.output_file}/version_{self.key_state}.json")
for i in range(self.continuation_steps):
print(self._value_wrap.get_record(), self._bparam_wrap.get_record())
self._state_wrap.counter = i
self._bparam_wrap.counter = i
self._value_wrap.counter = i
self.sw.write(
[
self._state_wrap.get_record(),
self._bparam_wrap.get_record(),
self._value_wrap.get_record(),
]
)
concat_states = [
(self._state_wrap.state, self._bparam_wrap.state),
(self._prev_state, self._prev_bparam),
self.prev_secant_direction,
]
predictor = SecantPredictor(
concat_states=concat_states,
delta_s=self._delta_s,
omega=self._omega,
net_spacing_param=self.hparams["net_spacing_param"],
net_spacing_bparam=self.hparams["net_spacing_bparam"],
hparams=self.hparams,
)
predictor.prediction_step()
self.prev_secant_direction = predictor.secant_direction
self.hparams["sphere_radius"] = (
0.005 * self.hparams["omega"] * l2_norm(predictor.secant_direction)
)
concat_states = [
predictor.state,
predictor.bparam,
predictor.secant_direction,
predictor.get_secant_concat(),
]
del predictor
gc.collect()
corrector = PerturbedCorrecter(
optimizer=self.opt,
objective=self.objective,
dual_objective=self.dual_objective,
lagrange_multiplier=self._lagrange_multiplier,
concat_states=concat_states,
delta_s=self._delta_s,
ascent_opt=self.ascent_opt,
key_state=self.key_state,
compute_min_grad_fn=self.compute_min_grad_fn,
compute_max_grad_fn=self.compute_max_grad_fn,
compute_grad_fn=self.compute_grad_fn,
hparams=self.hparams,
pred_state=[self._state_wrap.state, self._bparam_wrap.state],
pred_prev_state=[self._state_wrap.state, self._bparam_wrap.state],
counter=self.continuation_steps,
)
self._prev_state = copy.deepcopy(self._state_wrap.state)
self._prev_bparam = copy.deepcopy(self._bparam_wrap.state)
state, bparam, quality = corrector.correction_step()
value = self.value_func(state, bparam)
print(
"How far ....", pytree_relative_error(self._bparam_wrap.state, bparam)
)
self._state_wrap.state = state
self._bparam_wrap.state = bparam
self._value_wrap.state = value
del corrector
gc.collect()
| 36.192 | 100 | 0.600133 | 462 | 4,524 | 5.541126 | 0.279221 | 0.042188 | 0.049219 | 0.044531 | 0.225391 | 0.179297 | 0.156641 | 0.156641 | 0.142188 | 0.046875 | 0 | 0.003924 | 0.32405 | 4,524 | 124 | 101 | 36.483871 | 0.833224 | 0.070292 | 0 | 0.247619 | 0 | 0 | 0.029525 | 0.011522 | 0 | 0 | 0 | 0.008065 | 0 | 1 | 0.019048 | false | 0 | 0.085714 | 0 | 0.114286 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87e5d6a6c3a210e859a21073e4fe4f95aee7c09 | 1,345 | py | Python | dependencytrack/bom.py | dmuse89/dependency-track-python | 462d4a2b7ba5b1b1b0d0ea9066057872f5bd74bb | [
"CNRI-Python"
] | null | null | null | dependencytrack/bom.py | dmuse89/dependency-track-python | 462d4a2b7ba5b1b1b0d0ea9066057872f5bd74bb | [
"CNRI-Python"
] | null | null | null | dependencytrack/bom.py | dmuse89/dependency-track-python | 462d4a2b7ba5b1b1b0d0ea9066057872f5bd74bb | [
"CNRI-Python"
] | null | null | null | # SPDX-License-Identifier: GPL-2.0+
from .exceptions import DependencyTrackApiError
class Bom:
"""Class dedicated to all "bom" related endpoints"""
def upload_bom(
self,
file_name,
project_id=None,
project_name=None,
project_version=None,
auto_create=False,
):
"""Upload a supported bill of material format document
API Endpoint: POST /bom
:return: UUID-Token
:rtype: string
:raises DependencyTrackApiError: if the REST call failed
"""
multipart_form_data = {}
multipart_form_data["bom"] = ("bom", open(file_name, "r"))
if project_id:
multipart_form_data["project"] = project_id
if project_name:
multipart_form_data["projectName"] = project_name
if project_version:
multipart_form_data["projectVersion"] = project_version
multipart_form_data["autoCreate"] = auto_create
response = self.session.post(
self.api + "/bom",
params=self.paginated_param_payload,
files=multipart_form_data,
)
if response.status_code == 200:
return response.json()
else:
description = f"Unable to upload BOM file"
raise DependencyTrackApiError(description, response)
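    # Illustrative call (assumption: this mixin is composed into a client that
    # provides `session`, `api` and `paginated_param_payload`):
    #   token = client.upload_bom("bom.xml", project_name="demo",
    #                             project_version="1.0", auto_create=True)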
| 29.888889 | 67 | 0.613383 | 142 | 1,345 | 5.591549 | 0.521127 | 0.11461 | 0.149874 | 0.06801 | 0.078086 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005314 | 0.300372 | 1,345 | 44 | 68 | 30.568182 | 0.83847 | 0.186617 | 0 | 0 | 0 | 0 | 0.074856 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034483 | false | 0 | 0.034483 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87f974916d2df6ce93e6643e73f56fff02c54aa | 1,895 | py | Python | Contributors/IanDavis/ValidSentence.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 33 | 2019-12-02T23:29:47.000Z | 2022-03-24T02:40:36.000Z | Contributors/IanDavis/ValidSentence.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 39 | 2020-01-15T19:28:12.000Z | 2021-11-26T05:13:29.000Z | Contributors/IanDavis/ValidSentence.py | FergusDevelopmentLLC/Coders-Workshop | 3513bd5f79eaa85b4d2a648c5f343a224842325d | [
"MIT"
] | 49 | 2019-12-02T23:29:53.000Z | 2022-03-03T01:11:37.000Z | """By Ian Davis for Bootcampers Collective Coders Workshop on 2/19/20"""
""" This program evaluates a string and determines if it its a real sentence """
validString = 'This is a valid sentence.'
twoSpaces = "This isn't valid"
firstCharacterNotCapitalized = 'not capitalized'
containsProperNoun = 'Only the first character can be capitalized Colorado'
lastCharNotTerminator = 'last not terminator'
validCharacters = [',', ';', ':', '.', '?', '!', "'", ' ']
def loopSentence(sentence):
for i, char in enumerate(sentence[1:]):
if (char.isalpha() or char in validCharacters):
print(f'char {char} is valid')
if sentence[i] == ' ':
if sentence[i+1] == ' ':
print(f'two spaces in a row')
return False
if char.isupper():
print('no propper nouns!')
return False
else:
print(f'char {char} is not valid')
return False
return True
def checkLastLetterTerminator(sentence):
if sentence[-1] not in ['.', '!', '?']:
print(f'last character is not a sentence terminator')
return False
else:
return True
def checkFirstLetterUppercase(sentence):
if sentence[0].isupper():
print('First letter of the sentence is Uppercase')
return True
else:
print('First letter of the sentence is NOT Uppercase')
return False
def combineTests(sentence):
    if not (checkFirstLetterUppercase(sentence) and loopSentence(sentence)
            and checkLastLetterTerminator(sentence)):
print(f'TESTS FAILED on {sentence}\n')
else:
print(f'TESTS PASSED on {sentence}\n')
def main():
combineTests(validString)
combineTests(twoSpaces)
combineTests(firstCharacterNotCapitalized)
combineTests(containsProperNoun)
combineTests(lastCharNotTerminator)
if __name__ == "__main__":
main()
| 31.065574 | 80 | 0.624802 | 206 | 1,895 | 5.708738 | 0.393204 | 0.030612 | 0.017007 | 0.02381 | 0.079932 | 0.052721 | 0.052721 | 0 | 0 | 0 | 0 | 0.006461 | 0.264908 | 1,895 | 60 | 81 | 31.583333 | 0.83776 | 0.034829 | 0 | 0.255319 | 0 | 0 | 0.236948 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.106383 | false | 0.021277 | 0 | 0 | 0.276596 | 0.191489 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d87fb2cf3d6c3c5eaead08973e95a1d7f892f80b | 1,269 | py | Python | MirrorMirror/theme/TextBox.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | 2 | 2021-07-07T13:21:11.000Z | 2021-09-24T06:57:16.000Z | MirrorMirror/theme/TextBox.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | null | null | null | MirrorMirror/theme/TextBox.py | RubanSeven/MirrorMirror | 47c7a1f458f87c536d068fcf249625f426920cc3 | [
"Apache-2.0"
] | null | null | null | # -*- coding:utf-8 -*-
"""
@author: RubanSeven
@project: MirrorMirror
"""
from PyQt5.QtWidgets import *
class CodeTextEdit(QTextEdit):
def __init__(self, *__args):
super().__init__(*__args)
self.setStyleSheet(
"""
QTextEdit {
background-color: rgb(83, 83, 83);
border:0px;
font-size: 15px;
color: rgb(214, 214, 214);
}
"""
)
class ParamLineEdit(QLineEdit):
def __init__(self, *__args):
super().__init__(*__args)
self.setStyleSheet(
"""
QLineEdit {
background-color: rgb(46, 46, 46);
border:1px rgb(62, 62, 62);
font-size: 15px;
color: rgb(205, 205, 205);
height: 30px;
}
"""
)
class LabelText(QLabel):
def __init__(self, *__args):
super().__init__(*__args)
self.setStyleSheet(
"""
QLabel {
border: none;
font-size: 13px;
color: rgb(153, 153, 153);
}
"""
)
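# Illustrative use (an assumption, not in the original file): these themed
# widgets drop into any PyQt5 layout, e.g.
#   layout.addWidget(LabelText("Name")); layout.addWidget(ParamLineEdit())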
| 24.403846 | 51 | 0.408983 | 98 | 1,269 | 4.928571 | 0.438776 | 0.082816 | 0.068323 | 0.093168 | 0.362319 | 0.279503 | 0.279503 | 0.279503 | 0.279503 | 0 | 0 | 0.085329 | 0.473601 | 1,269 | 51 | 52 | 24.882353 | 0.637725 | 0.050433 | 0 | 0.5625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.0625 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8819ce394b5003e6d8376f1810f650835d534ec | 1,429 | py | Python | 012getLncRNA_PMID.py | qiufengdiewu/LPInsider | 92fcc2ad9e05cb634c4e3f1accd1220b984a027d | [
"Apache-2.0"
] | null | null | null | 012getLncRNA_PMID.py | qiufengdiewu/LPInsider | 92fcc2ad9e05cb634c4e3f1accd1220b984a027d | [
"Apache-2.0"
] | null | null | null | 012getLncRNA_PMID.py | qiufengdiewu/LPInsider | 92fcc2ad9e05cb634c4e3f1accd1220b984a027d | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# -*- coding: UTF-8 -*-
from Bio import Entrez
import MySQLdb as mySQLDB
Entrez.email="A.N.Other@example.com"
def savePID():
    returnCount=100000  # at most 100,000 records can be returned per request
handle=Entrez.esearch(db="pubmed",term="lncRNA",RetMax=returnCount)
    '''
    These parameter values are sufficient for now, but that is not guaranteed
    in the future. If the call fails, adjust the parameters according to the
    official Entrez documentation.
    '''
record=Entrez.read(handle)
print(record)
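    # Illustrative shape of the parsed esearch result used below (values are
    # made up): {"Count": "15374", "RetMax": "100000", "IdList": ["31234567", ...]}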
idList=record["IdList"]
count=record["Count"]
print("Count"+count)
    # open the database connection
db = mySQLDB.connect(host='127.0.0.1',user='root',passwd='11223366',db='ncrna',charset='utf8')
print()
    # use cursor() to obtain an operation cursor
cursor=db.cursor()
for i in range(0,int(count)):
sql = "insert into lncrna_pmid (pmid) values(" + idList[i] + ")"
try:
            # execute the SQL statement
cursor.execute(sql)
            # commit to the database
db.commit()
#print(sql)
            # Executing these inserts alone does not guarantee PMID uniqueness;
            # the workaround is to declare the pid column UNIQUE in the database.
            '''The SQL is as follows:
            CREATE TABLE `lncrna_pid` ( `Id` int(11) NOT NULL AUTO_INCREMENT, `pid` int(11) NOT NULL DEFAULT '0', PRIMARY KEY (`Id`), UNIQUE KEY `pid` (`pid`)) ENGINE=InnoDB AUTO_INCREMENT=15374 DEFAULT CHARSET=utf8;'''
except:
# Rollback in case there is any error
db.rollback()
print("Error,can't insert data. "+str(sql))
db.close()
if __name__ == "__main__":
savePID() | 34.02381 | 222 | 0.590623 | 163 | 1,429 | 5.104294 | 0.638037 | 0.028846 | 0.019231 | 0.028846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032474 | 0.26732 | 1,429 | 42 | 223 | 34.02381 | 0.762178 | 0.13366 | 0 | 0 | 0 | 0 | 0.171364 | 0.023675 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.04 | 0.08 | 0 | 0.12 | 0.16 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d8836803dce845ee19f76d28736903e2e7b8d35b | 11,867 | py | Python | scripts/generate_schema/worldbank/generate_worldbank_schema.py | liangmuxin/datamart | 495a21588db39c9ad239409208bec701dca07f30 | [
"MIT"
] | 7 | 2018-10-02T01:32:23.000Z | 2020-10-08T00:42:35.000Z | scripts/generate_schema/worldbank/generate_worldbank_schema.py | liangmuxin/datamart | 495a21588db39c9ad239409208bec701dca07f30 | [
"MIT"
] | 47 | 2018-10-02T05:41:13.000Z | 2021-02-02T21:50:31.000Z | scripts/generate_schema/worldbank/generate_worldbank_schema.py | liangmuxin/datamart | 495a21588db39c9ad239409208bec701dca07f30 | [
"MIT"
] | 19 | 2018-10-01T22:27:20.000Z | 2019-02-28T18:59:53.000Z | import os
from argparse import ArgumentParser
import requests
import json
import traceback
LOCATIONS = [
"Aruba",
"Afghanistan",
"Africa",
"Angola",
"Albania",
"Andorra",
"Andean Region",
"Arab World",
"United Arab Emirates",
"Argentina",
"Armenia",
"American Samoa",
"Antigua and Barbuda",
"Australia",
"Austria",
"Azerbaijan",
"Burundi",
"East Asia & Pacific (IBRD-only countries)",
"Europe & Central Asia (IBRD-only countries)",
"Belgium",
"Benin",
"Burkina Faso",
"Bangladesh",
"Bulgaria",
"IBRD countries classified as high income",
"Bahrain",
"Bahamas, The",
"Bosnia and Herzegovina",
"Latin America & the Caribbean (IBRD-only countries)",
"Belarus",
"Belize",
"Middle East & North Africa (IBRD-only countries)",
"Bermuda",
"Bolivia",
"Brazil",
"Barbados",
"Brunei Darussalam",
"Sub-Saharan Africa (IBRD-only countries)",
"Bhutan",
"Botswana",
"Sub-Saharan Africa (IFC classification)",
"Central African Republic",
"Canada",
"East Asia and the Pacific (IFC classification)",
"Central Europe and the Baltics",
"Europe and Central Asia (IFC classification)",
"Switzerland",
"Channel Islands",
"Chile",
"China",
"Cote d'Ivoire",
"Latin America and the Caribbean (IFC classification)",
"Middle East and North Africa (IFC classification)",
"Cameroon",
"Congo, Dem. Rep.",
"Congo, Rep.",
"Colombia",
"Comoros",
"Cabo Verde",
"Costa Rica",
"South Asia (IFC classification)",
"Caribbean small states",
"Cuba",
"Curacao",
"Cayman Islands",
"Cyprus",
"Czech Republic",
"East Asia & Pacific (IDA-eligible countries)",
"Europe & Central Asia (IDA-eligible countries)",
"Germany",
"IDA countries classified as Fragile Situations",
"Djibouti",
"Latin America & the Caribbean (IDA-eligible countries)",
"Dominica",
"Middle East & North Africa (IDA-eligible countries)",
"IDA countries not classified as Fragile Situations",
"Denmark",
"IDA countries in Sub-Saharan Africa not classified as fragile situations ",
"Dominican Republic",
"South Asia (IDA-eligible countries)",
"IDA countries in Sub-Saharan Africa classified as fragile situations ",
"Sub-Saharan Africa (IDA-eligible countries)",
"IDA total, excluding Sub-Saharan Africa",
"Algeria",
"East Asia & Pacific (excluding high income)",
"Early-demographic dividend",
"East Asia & Pacific",
"Europe & Central Asia (excluding high income)",
"Europe & Central Asia",
"Ecuador",
"Egypt, Arab Rep.",
"Euro area",
"Eritrea",
"Spain",
"Estonia",
"Ethiopia",
"European Union",
"Fragile and conflict affected situations",
"Finland",
"Fiji",
"France",
"Faroe Islands",
"Micronesia, Fed. Sts.",
"IDA countries classified as fragile situations, excluding Sub-Saharan Africa",
"Gabon",
"United Kingdom",
"Georgia",
"Ghana",
"Gibraltar",
"Guinea",
"Gambia, The",
"Guinea-Bissau",
"Equatorial Guinea",
"Greece",
"Grenada",
"Greenland",
"Guatemala",
"Guam",
"Guyana",
"High income",
"Hong Kong SAR, China",
"Honduras",
"Heavily indebted poor countries (HIPC)",
"Croatia",
"Haiti",
"Hungary",
"IBRD, including blend",
"IBRD only",
"IDA & IBRD total",
"IDA total",
"IDA blend",
"Indonesia",
"IDA only",
"Isle of Man",
"India",
"Not classified",
"Ireland",
"Iran, Islamic Rep.",
"Iraq",
"Iceland",
"Israel",
"Italy",
"Jamaica",
"Jordan",
"Japan",
"Kazakhstan",
"Kenya",
"Kyrgyz Republic",
"Cambodia",
"Kiribati",
"St. Kitts and Nevis",
"Korea, Rep.",
"Kuwait",
"Latin America & Caribbean (excluding high income)",
"Lao PDR",
"Lebanon",
"Liberia",
"Libya",
"St. Lucia",
"Latin America & Caribbean ",
"Latin America and the Caribbean",
"Least developed countries,ssification",
"Low income",
"Liechtenstein",
"Sri Lanka",
"Lower middle income",
"Low & middle income",
"Lesotho",
"Late-demographic dividend",
"Lithuania",
"Luxembourg",
"Latvia",
"Macao SAR, China",
"St. Martin (French part)",
"Morocco",
"Central America",
"Monaco",
"Moldova",
"Middle East (developing only)",
"Madagascar",
"Maldives",
"Middle East & North Africa",
"Mexico",
"Marshall Islands",
"Middle income",
"Macedonia, FYR",
"Mali",
"Malta",
"Myanmar",
"Middle East & North Africa (excluding high income)",
"Montenegro",
"Mongolia",
"Northern Mariana Islands",
"Mozambique",
"Mauritania",
"Mauritius",
"Malawi",
"Malaysia",
"North America",
"North Africa",
"Namibia",
"New Caledonia",
"Niger",
"Nigeria",
"Nicaragua",
"Netherlands",
"Non-resource rich Sub-Saharan Africa countries, of which landlocked",
"Norway",
"Nepal",
"Non-resource rich Sub-Saharan Africa countries",
"Nauru",
"IDA countries not classified as fragile situations, excluding Sub-Saharan Africa",
"New Zealand",
"OECD members",
"Oman",
"Other small states",
"Pakistan",
"Panama",
"Peru",
"Philippines",
"Palau",
"Papua New Guinea",
"Poland",
"Pre-demographic dividend",
"Puerto Rico",
"Korea, Dem. People’s Rep.",
"Portugal",
"Paraguay",
"West Bank and Gaza",
"Pacific island small states",
"Post-demographic dividend",
"French Polynesia",
"Qatar",
"Romania",
"Resource rich Sub-Saharan Africa countries",
"Resource rich Sub-Saharan Africa countries, of which oil exporters",
"Russian Federation",
"Rwanda",
"South Asia",
"Saudi Arabia",
"Southern Cone",
"Sudan",
"Senegal",
"Singapore",
"Solomon Islands",
"Sierra Leone",
"El Salvador",
"San Marino",
"Somalia",
"Serbia",
"Sub-Saharan Africa (excluding high income)",
"South Sudan",
"Sub-Saharan Africa ",
"Small states",
"Sao Tome and Principe",
"Suriname",
"Slovak Republic",
"Slovenia",
"Sweden",
"Eswatini",
"Sint Maarten (Dutch part)",
"Sub-Saharan Africa excluding South Africa",
"Seychelles",
"Syrian Arab Republic",
"Turks and Caicos Islands",
"Chad",
"East Asia & Pacific (IDA & IBRD countries)",
"Europe & Central Asia (IDA & IBRD countries)",
"Togo",
"Thailand",
"Tajikistan",
"Turkmenistan",
"Latin America & the Caribbean (IDA & IBRD countries)",
"Timor-Leste",
"Middle East & North Africa (IDA & IBRD countries)",
"Tonga",
"South Asia (IDA & IBRD)",
"Sub-Saharan Africa (IDA & IBRD countries)",
"Trinidad and Tobago",
"Tunisia",
"Turkey",
"Tuvalu",
"Taiwan, China",
"Tanzania",
"Uganda",
"Ukraine",
"Upper middle income",
"Uruguay",
"United States",
"Uzbekistan",
"St. Vincent and the Grenadines",
"Venezuela, RB",
"British Virgin Islands",
"Virgin Islands (U.S.)",
"Vietnam",
"Vanuatu",
"World",
"Samoa",
"Kosovo",
"Sub-Saharan Africa excluding South Africa and Nigeria",
"Yemen, Rep.",
"South Africa",
"Zambia",
"Zimbabwe"
]
def getAllIndicatorList():
url = "https://api.worldbank.org/v2/indicators?format=json&page=1"
res = requests.get(url)
data = res.json()
total = data[0]['total']
url2 = "https://api.worldbank.org/v2/indicators?format=json&page=1&per_page=" + str(total)
res2 = requests.get(url2)
data2 = res2.json()
return data2[1]
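# Illustrative shape of one indicator record consumed below (keys taken from
# the accesses in generate_json_schema; the values are made up):
#   {"id": "SP.POP.TOTL", "name": "Population, total",
#    "sourceNote": "...", "sourceOrganization": "..."}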
def generate_json_schema(dst_path):
unique_urls_str = getAllIndicatorList()
for commondata in unique_urls_str:
try:
urldata = "https://api.worldbank.org/v2/countries/indicators/" + commondata['id'] + "?format=json"
resdata = requests.get(urldata)
data_ind = resdata.json()
print("Generating schema for Trading economics", commondata['name'])
schema = {}
schema["title"] = commondata['name']
schema["description"] = commondata['sourceNote']
schema["url"] = "https://api.worldbank.org/v2/indicators/" + commondata['id'] + "?format=json"
schema["keywords"] = [i for i in commondata['name'].split()]
schema["date_updated"] = data_ind[0]["lastupdated"] if data_ind else None
schema["license"] = None
schema["provenance"] = {"source": "http://worldbank.org"}
schema["original_identifier"] = commondata['id']
schema["materialization"] = {
"python_path": "worldbank_materializer",
"arguments": {
"url": "https://api.worldbank.org/v2/indicators/" + commondata['id'] + "?format=json"
}
}
schema['variables'] = []
first_col = {
"name": "indicator_id",
"description": "id is identifier of an indicator in worldbank datasets",
"semantic_type": ["https://metadata.datadrivendiscovery.org/types/CategoricalData"]
}
second_col = {
"name": "indicator_value",
"description": "name of an indicator in worldbank datasets",
"semantic_type": ["http://schema.org/Text"]
}
third_col = {
"name": "unit",
"description": "unit of value returned by this indicator for a particular country",
"semantic_type": ["https://metadata.datadrivendiscovery.org/types/CategoricalData"]
}
fourth_col = {
"name": "sourceNote",
"description": "Long description of the indicator",
"semantic_type": ["http://schema.org/Text"]
}
fifth_col = {
"name": "sourceOrganization",
"description": "Source organization from where Worldbank acquired this data",
"semantic_type": ["http://schema.org/Text"]
}
sixth_col = {
"name": "country_value",
"description": "Country for which idicator value is returned",
"semantic_type": ["https://metadata.datadrivendiscovery.org/types/Location"],
"named_entity": LOCATIONS
}
seventh_col = {
"name": "countryiso3code",
"description": "Country iso code for which idicator value is returned",
"semantic_type": ["https://metadata.datadrivendiscovery.org/types/Location"]
}
eighth_col = {
"name": "date",
"description": "date for which indictor value is returned for a particular country",
"semantic_type": ["https://metadata.datadrivendiscovery.org/types/Time"],
"temporal_coverage": {"start": "1950", "end": "2100"}
}
schema['variables'].append(first_col)
schema['variables'].append(second_col)
schema['variables'].append(third_col)
schema['variables'].append(fourth_col)
schema['variables'].append(fifth_col)
schema['variables'].append(sixth_col)
schema['variables'].append(seventh_col)
schema['variables'].append(eighth_col)
if dst_path:
os.makedirs(dst_path + '/worldbank_schema', exist_ok=True)
file = os.path.join(dst_path, 'worldbank_schema',
"{}_description.json".format(commondata['id']))
else:
                os.makedirs('worldbank_schema', exist_ok=True)
file = os.path.join('worldbank_schema',
"{}_description.json".format(commondata['id']))
with open(file, "w") as fp:
json.dump(schema, fp, indent=2)
        except Exception:
            traceback.print_exc()
if __name__ == '__main__':
parser = ArgumentParser()
parser.add_argument("-o", "--dst", action="store", type=str, dest="dst_path")
args, _ = parser.parse_known_args()
generate_json_schema(args.dst_path)
| 27.469907 | 110 | 0.605292 | 1,219 | 11,867 | 5.83347 | 0.400328 | 0.023907 | 0.038251 | 0.023625 | 0.259598 | 0.208128 | 0.166362 | 0.138096 | 0.088314 | 0.077064 | 0 | 0.002908 | 0.24665 | 11,867 | 431 | 111 | 27.533643 | 0.792506 | 0 | 0 | 0.01699 | 0 | 0.002427 | 0.574703 | 0.003623 | 0 | 0 | 0 | 0 | 0 | 1 | 0.004854 | false | 0.002427 | 0.012136 | 0 | 0.019417 | 0.004854 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88413a8c6025245623ff27f30f5b74590dab51a | 1,546 | py | Python | examples/lineplots/devol.py | aengelke/z-plot | 63e4e6656355b608487a3e4df5da13b7fad9b108 | [
"BSD-3-Clause"
] | 22 | 2016-10-19T15:02:22.000Z | 2021-12-23T12:40:37.000Z | examples/lineplots/devol.py | aengelke/z-plot | 63e4e6656355b608487a3e4df5da13b7fad9b108 | [
"BSD-3-Clause"
] | 4 | 2017-04-16T03:15:48.000Z | 2020-10-28T11:36:35.000Z | examples/lineplots/devol.py | aengelke/z-plot | 63e4e6656355b608487a3e4df5da13b7fad9b108 | [
"BSD-3-Clause"
] | 11 | 2017-01-18T02:41:57.000Z | 2021-12-28T02:21:30.000Z | #! /usr/bin/env python
from zplot import *
import sys
ctype = 'eps' if len(sys.argv) < 2 else sys.argv[1]
c = canvas(ctype, title='devol', dimensions=['400','340'])
t = table(file='devol.data')
t.addcolumns(['month','year'])
t.update(set='month = substr(date, 1, 2)')
t.update(set='year = substr(date, 4, 2)')
d = drawable(canvas=c, xrange=[-1,t.getmax(column='rownumber') + 1],
yrange=[0,2000], coord=[40,40], dimensions=[350,270])
grid(drawable=d, ystep=200, xstep=1, linecolor='lightgrey')
axis(drawable=d, style='y', yauto=['','',200])
axis(drawable=d, style='x', xmanual=t.getaxislabels(column='month'),
xlabelrotate=90, xlabelanchor='r,c', xlabelfontsize=7,
title='Number of Inquiries Per Month', titlesize=8,
titlefont='Courier-Bold', xtitle='Year and Month',
xtitleshift=[0,-15])
# Just pick out the unique years that show up and use them to label the axis
years, xlabels = [], []
for label in t.getaxislabels(column='year'):
if label[0] not in years:
years.append(label[0])
xlabels.append(label)
axis(drawable=d, style='x', xmanual=xlabels, linewidth=0, xlabelshift=[5,-15],
xlabelrotate=0, xlabelanchor='r,c', xlabelfontsize=7, xlabelformat='\'%s')
p = plotter()
p.line(drawable=d, table=t, xfield='rownumber', yfield='value', stairstep=True,
linecolor='purple', labelfield='value', labelsize=7, labelcolor='purple',
labelshift=[6,0], labelrotate=90, labelanchor='l,c')
c.circle(coord=d.map([10.5,463]), radius=20, linecolor='red')
c.render()
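# Illustrative invocation (from the argv handling at the top): `python
# devol.py` renders an EPS canvas, while e.g. `python devol.py pdf`
# presumably switches the output type, assuming zplot supports it.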
| 35.136364 | 80 | 0.666882 | 229 | 1,546 | 4.502183 | 0.558952 | 0.043647 | 0.037827 | 0.052376 | 0.106693 | 0.050436 | 0 | 0 | 0 | 0 | 0 | 0.048084 | 0.139069 | 1,546 | 43 | 81 | 35.953488 | 0.726521 | 0.062096 | 0 | 0.064516 | 0 | 0.032258 | 0.147099 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.096774 | 0 | 0.096774 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88462214366d2d59a4aa76c3e990d20a7d331bd | 4,208 | py | Python | DLplatform/aggregating/geometric_median.py | chelseajohn/dlplatform | 429e42c598039d1e9fd1df3da4247f391915a31b | [
"Apache-2.0"
] | 5 | 2020-05-05T08:54:26.000Z | 2021-02-20T07:36:28.000Z | DLplatform/aggregating/geometric_median.py | zagazao/dlplatform | ab32af8f89cfec4b478203bd5d13ce2d30e89ba7 | [
"Apache-2.0"
] | 1 | 2020-11-16T14:15:53.000Z | 2020-11-16T14:15:53.000Z | DLplatform/aggregating/geometric_median.py | zagazao/dlplatform | ab32af8f89cfec4b478203bd5d13ce2d30e89ba7 | [
"Apache-2.0"
] | 4 | 2020-05-05T08:56:57.000Z | 2020-07-22T11:28:52.000Z | from DLplatform.aggregating import Aggregator
from DLplatform.parameters import Parameters
from typing import List
import numpy as np
from scipy.spatial.distance import cdist, euclidean
class GeometricMedian(Aggregator):
'''
    Provides a method to calculate an aggregated model from n individual models (using the geometric median)
'''
def __init__(self, name="Geometric median"):
'''
Returns
-------
None
'''
Aggregator.__init__(self, name=name)
def calculateDivergence(self, param1, param2):
if type(param1) is np.ndarray:
return np.linalg.norm(param1 - param2)
else:
return param1.distance(param2)
def __call__(self, params: List[Parameters]) -> Parameters:
'''
        This aggregator takes n lists of model parameters and returns their geometric median, computed via Weiszfeld's algorithm.
Parameters
----------
params A list of Paramters objects. These objects support addition and scalar multiplication.
Returns
-------
        A new parameter object that is the geometric median of params.
'''
Z = []
for param in params:
Z_i = param.toVector()
Z.append(Z_i)
Z = np.array(Z) #TODO: check that the shape is correct (that is, that no transpose is required)
gm = self.calcGeometricMedian(Z) #computes the GM for a numpy array
newParam = params[0].getCopy()#by copying the parameters object, we ensure that the shape information is preserved
newParam.fromVector(gm)
return newParam
def calcGeometricMedian(self, X, eps=1e-5, mat_iter = 10e6):
y = np.mean(X, 0)
iterCount = 0
while iterCount <= mat_iter:
D = cdist(X, [y])
nonzeros = (D != 0)[:, 0]
Dinv = 1 / D[nonzeros]
Dinvs = np.sum(Dinv)
W = Dinv / Dinvs
T = np.sum(W * X[nonzeros], 0)
num_zeros = len(X) - np.sum(nonzeros)
if num_zeros == 0:
y1 = T
elif num_zeros == len(X):
return y
else:
R = (T - y) * Dinvs
r = np.linalg.norm(R)
rinv = 0 if r == 0 else num_zeros/r
y1 = max(0, 1-rinv)*T + min(1, rinv)*y
if euclidean(y, y1) < eps:
return y1
y = y1
iterCount += 1
def __str__(self):
return "Geometric median"
# def setToGeometricMedian(self, params : List):
# models = params
#
# shapes = []
# b = []
# once = True
# newWeightsList = []
# try:
# for i, model in enumerate(models):
# w2 = model.get()
# c = []
# c = np.array(c)
# for i in range(len(w2)):
# z = np.array(w2[i])
#
# if len(shapes) < 8:
# shapes.append(z.shape)
# d = np.array(w2[i].flatten()).squeeze()
# c = np.concatenate([c, d])
# if (once):
# b = np.zeros_like(c)
# b[:] = c[:]
# once = False
# else:
# once = False
# b = np.concatenate([b.reshape((-1, 1)), c.reshape((-1, 1))], axis=1)
# median_val = np.array(b[0]) #hd.geomedian(b))
# sizes = []
# for j in shapes:
# size = 1
# for k in j:
# size *= k
# sizes.append(size)
# newWeightsList = []
#
# chunks = []
# count = 0
# for size in sizes:
# chunks.append([median_val[i + count] for i in range(size)])
# count += size
# for chunk, i in zip(chunks, range(len(shapes))):
# newWeightsList.append(np.array(chunk).reshape(shapes[i]))
#
# except Exception as e:
# print("Error happened! Message is ", e)
# self.set(newWeightsList)
| 31.402985 | 122 | 0.481939 | 466 | 4,208 | 4.293991 | 0.362661 | 0.02099 | 0.011994 | 0.011994 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017642 | 0.407319 | 4,208 | 133 | 123 | 31.639098 | 0.784683 | 0.512595 | 0 | 0.040816 | 0 | 0 | 0.017094 | 0 | 0 | 0 | 0 | 0.007519 | 0 | 1 | 0.102041 | false | 0 | 0.102041 | 0.020408 | 0.346939 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88610b31c7b5f25ebda49d4c2f961d36945c83b | 4,670 | py | Python | stock_price_predictor/app.py | abdullahtk/Stock-Market-Predictor | 1e97d5d2c647912447b9db8eb548e52c0ad1fe8a | [
"MIT"
] | 3 | 2019-07-25T22:41:38.000Z | 2021-04-06T04:37:05.000Z | stock_price_predictor/app.py | abdullahtk/Stock-Market-Predictor | 1e97d5d2c647912447b9db8eb548e52c0ad1fe8a | [
"MIT"
] | 2 | 2019-07-13T15:36:06.000Z | 2021-06-01T23:56:50.000Z | stock_price_predictor/app.py | abdullahtk/Stock-Market-Predictor | 1e97d5d2c647912447b9db8eb548e52c0ad1fe8a | [
"MIT"
] | 1 | 2019-07-25T22:42:03.000Z | 2019-07-25T22:42:03.000Z | from flask import Flask
from flask import render_template, request, jsonify
from source import StockPredictor as sp
from source import ModelsParametersTunning as mpt
from datetime import datetime
import json
from plotly.graph_objs import Scatter
from pandas.tseries.offsets import BDay
app = Flask(__name__)
def install_and_import(package):
import importlib
try:
importlib.import_module(package)
except ImportError:
        import subprocess
        import sys
        # pip.main() was removed in pip 10; invoke pip as a module instead.
        subprocess.check_call([sys.executable, '-m', 'pip', 'install', package])
finally:
globals()[package] = importlib.import_module(package)
install_and_import('plotly')
@app.route('/')
def index():
print('index')
return render_template('master.html')
@app.route('/go', methods=['GET', 'POST'])
def go():
# save user input in query
query = request.values
print('go')
tickers = []
ticker_of_interest = request.values.get('ticker')
tickers.append(ticker_of_interest)
tickers.append('SPY')
start_date_str = request.values.get('start_date')
start_date = datetime.strptime(start_date_str, '%Y-%m-%d').date()
end_date_str = request.values.get('end_date')
end_date = datetime.strptime(end_date_str, '%Y-%m-%d').date()
prediction_date_str = request.values.get('prediction_date')
prediction_date = datetime.strptime(prediction_date_str, '%Y-%m-%d').date()
number_of_days = request.values.get('number_of_days')
if (number_of_days == ""):
number_of_days = 5
df = sp.getData(tickers , start_date.strftime("%Y-%m-%d"), '2019-07-08')
more_features = sp.introduce_features(df, ticker_of_interest,tickers,number_of_days)
data_dict = sp.split_data(more_features, ticker_of_interest, end_date)
all_features_normed = data_dict['all_features_normed']
all_target = data_dict['all_target']
training_features_normed = data_dict["training_features_normed"]
training_target = data_dict["training_target"]
small_features_normed = data_dict["small_features_normed"]
small_target = data_dict["small_target"]
features_validation_normed = data_dict["features_validation_normed"]
future_price_validation = data_dict["future_price_validation"]
price_validation = data_dict["price_validation"]
highest_model, highest_score = sp.pick_best_regressor(small_features_normed, small_target, features_validation_normed, future_price_validation)
    tuned_model = mpt.tune_parameters(highest_model.__class__.__name__, small_features_normed, small_target, features_validation_normed, future_price_validation)
    model = tuned_model.fit(all_features_normed,all_target)
predictions = sp.predict_n_days(model, all_features_normed, prediction_date, number_of_days)
real_data = df[predictions['Date'][0]:predictions['Date'][0]+BDay(int(number_of_days)-1)][ticker_of_interest]
pct = [abs(float(r)-float(p))/float(r)*100 for r,p in zip(real_data,predictions['Predicted Price'])]
# Plot closing prices
graphs = [
{
'data': [
Scatter(
x=df[ticker_of_interest].index,
y=df[ticker_of_interest],
)
],
'layout': {
'title': 'Adjusted Close Price' ,
'yaxis': {
'title': "Price"
},
'xaxis': {
'title': "Date"
}
}
},
{
'data': [
Scatter(
x=predictions['Date'],
y=predictions['Predicted Price'],
name= 'Predicted Price',
),
Scatter(
x=predictions['Date'],
y=real_data,
name= 'Actual Price',
),
Scatter(
x=predictions['Date'],
y=pct,
name= 'PCT',
yaxis= 'y2',
line = dict(
width = 1,
dash = 'dash')
)
],
'layout': {
'title': 'Predicted Adjusted Close Price' ,
'xaxis': {
'title': "Date"
},
'yaxis': {
'title': "Price"
},
'yaxis2': {
'title': 'Actual vs. Predicted',
'overlaying': 'y',
'side': 'right'
}
}
}]
ids = ["graph-{}".format(i) for i, _ in enumerate(graphs)]
graphJSON = json.dumps(graphs, cls=plotly.utils.PlotlyJSONEncoder)
return render_template('go.html', query=query , df=data_dict, ids=ids, graphJSON=graphJSON)
if __name__ == '__main__':
app.run(debug=True)
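# Illustrative local run (assumption): `python app.py` starts Flask's
# development server, by default on http://127.0.0.1:5000/.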
| 33.357143 | 162 | 0.601927 | 520 | 4,670 | 5.107692 | 0.294231 | 0.033133 | 0.042169 | 0.02259 | 0.190136 | 0.111069 | 0.056476 | 0.056476 | 0.056476 | 0.056476 | 0 | 0.005341 | 0.278373 | 4,670 | 139 | 163 | 33.597122 | 0.782789 | 0.009422 | 0 | 0.194915 | 0 | 0 | 0.130651 | 0.020333 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025424 | false | 0 | 0.127119 | 0 | 0.169492 | 0.016949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88917a3681f54f58e6f0236e05d538383e5fe13 | 1,100 | py | Python | src/unittest/python/modules/processing/filter_lt_tests.py | FHNW-CyberCaptain/CyberCaptain | 07c989190e997353fbf57eb7a386947d6ab8ffd5 | [
"MIT"
] | 1 | 2018-10-01T10:59:55.000Z | 2018-10-01T10:59:55.000Z | src/unittest/python/modules/processing/filter_lt_tests.py | FHNW-CyberCaptain/CyberCaptain | 07c989190e997353fbf57eb7a386947d6ab8ffd5 | [
"MIT"
] | null | null | null | src/unittest/python/modules/processing/filter_lt_tests.py | FHNW-CyberCaptain/CyberCaptain | 07c989190e997353fbf57eb7a386947d6ab8ffd5 | [
"MIT"
] | 1 | 2021-11-01T00:09:00.000Z | 2021-11-01T00:09:00.000Z | import unittest
from cybercaptain.processing.filter import processing_filter
class ProcessingFilterLTTest(unittest.TestCase):
"""
Test the filters for LT
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
arguments = {'src': '.',
'filterby': 'LT',
'rule': 'LT 500',
'target': '.'}
self.processing = processing_filter(**arguments)
def test_LT_positive(self):
"""
Test if the filter passes LT correctly.
"""
# border line test
self.assertTrue(self.processing.filter({"LT":499}), 'should not be filtered')
# deep test
self.assertTrue(self.processing.filter({"LT":400}), 'should not be filtered')
def test_LT_negative(self):
"""
Test if the filter fails LT correctly.
"""
# border line test
self.assertFalse(self.processing.filter({"LT":500}), 'should be filtered')
# deep test
self.assertFalse(self.processing.filter({"LT":600}), 'should be filtered')
| 32.352941 | 85 | 0.579091 | 116 | 1,100 | 5.37069 | 0.37931 | 0.179775 | 0.128411 | 0.141252 | 0.433387 | 0.327448 | 0.260032 | 0 | 0 | 0 | 0 | 0.019133 | 0.287273 | 1,100 | 33 | 86 | 33.333333 | 0.77551 | 0.142727 | 0 | 0 | 0 | 0 | 0.135535 | 0 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.1875 | false | 0 | 0.125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d889b29c8ab7230fa4821ccc373f7fe3a359f78f | 6,129 | py | Python | src/data/clean.py | samatix/ml-asset-managers | 27c9c0b3f67fd0350e80c5fb2729e64a13dccbb8 | [
"Apache-2.0"
] | 2 | 2022-01-01T11:06:22.000Z | 2022-02-19T03:19:18.000Z | src/data/clean.py | samatix/ml-asset-managers | 27c9c0b3f67fd0350e80c5fb2729e64a13dccbb8 | [
"Apache-2.0"
] | null | null | null | src/data/clean.py | samatix/ml-asset-managers | 27c9c0b3f67fd0350e80c5fb2729e64a13dccbb8 | [
"Apache-2.0"
] | 2 | 2020-08-15T05:38:49.000Z | 2022-03-05T07:31:11.000Z | import logging
import numpy as np
import pandas as pd
from sklearn.neighbors import KernelDensity
from scipy.optimize import minimize
from src.utils import cov2corr
class MarcenkoPastur:
def __init__(self, points=1000):
"""
Marcenko-Pastur
:param points:
:type points: int
:return:The Marcenko-Pastur probability density function
:rtype: pd.Series
"""
self.points = points
self.eigen_max = None
def pdf(self, var, q):
"""
:param var: The variance
:type var: float
:param q: N/T number of observations on the number of dates
:type q: float
        :return: the Marcenko-Pastur pdf evaluated on its eigenvalue support
        :rtype: pd.Series
"""
if isinstance(var, np.ndarray):
var = var.item()
eigen_min = var * (1 - (1. / q) ** .5) ** 2
eigen_max = var * (1 + (1. / q) ** .5) ** 2
eigen_values = np.linspace(eigen_min,
eigen_max,
self.points)
pdf = q / (2 * np.pi * var * eigen_values) * (
(eigen_max - eigen_values) * (
eigen_values - eigen_min)) ** .5
pdf = pd.Series(pdf, index=eigen_values)
return pdf
def err_pdfs(self, var, eigenvalues, q, bandwidth):
pdf0 = self.pdf(var, q)
pdf1 = fit_kde(
eigenvalues, bandwidth,
x=pdf0.index.values.reshape(-1, 1)
)
sse = np.sum((pdf1 - pdf0) ** 2)
return sse
def fit(self, eigenvalues, q, bandwidth):
func = lambda *x: self.err_pdfs(*x)
x0 = 0.5
out = minimize(func, x0,
args=(eigenvalues, q, bandwidth),
bounds=((1E-5, 1 - 1E-5),))
if out['success']:
var = out['x'][0]
else:
var = 1
eigen_max = var * (1 + (1. / q) ** 0.5) ** 2
self.eigen_max = eigen_max
return eigen_max, var
def facts_number(self, eigenvalues):
if self.eigen_max is not None:
return eigenvalues.shape[0] - \
np.diag(eigenvalues)[::-1].searchsorted(self.eigen_max)
else:
raise ValueError(f"Eigen max is not calculated. Please "
f"run the fit method before calculating the "
f"facts number")
def _denoise_constant_residual(self, eigenvalues, eigenvectors):
facts_number = self.facts_number(eigenvalues)
eigenvalues_ = eigenvalues.diagonal().copy()
# Denoising by making constant the eigen values past facts_number
eigenvalues_[facts_number:] = eigenvalues_[
facts_number:].sum() / float(
eigenvalues_.shape[0] - facts_number)
eigenvalues_ = np.diag(eigenvalues_)
cov = eigenvectors @ eigenvalues_ @ eigenvectors.T
# Rescaling
return cov2corr(cov)
def _denoise_shrink(self, eigenvalues, eigenvectors, alpha=0):
# Eigenvalues and eigenvectors corresponding
# to the eigenvalues less than the max value
facts_number = self.facts_number(eigenvalues)
eigenvalues_l = eigenvalues[:facts_number, :facts_number]
eigenvectors_l = eigenvectors[:, :facts_number]
# Eigenvalues and eigenvectors corresponding
# to the eigenvalues more than the max value
eigenvalues_r = eigenvalues[facts_number:, facts_number:]
eigenvectors_r = eigenvectors[:, facts_number:]
corr_l = eigenvectors_l @ eigenvalues_l @ eigenvectors_l.T
corr_r = eigenvectors_r @ eigenvalues_r @ eigenvectors_r.T
return corr_l + alpha * corr_r + (1 - alpha) * np.diag(
corr_r.diagonal())
def denoise(self, eigenvalues, eigenvectors, method="constant", alpha=0):
"""
Remove noise from corr by fixing random eigenvalues
        :param eigenvalues: diagonal matrix of eigenvalues, sorted descending
        :type eigenvalues: np.ndarray
        :param eigenvectors: matrix whose columns are the eigenvectors
        :type eigenvectors: np.ndarray
        :param method: "constant" or "shrink"
        :type method: str
        :param alpha: shrinkage weight used by the "shrink" method
        :type alpha: float
        :return: the denoised correlation matrix
        :rtype: np.ndarray
"""
if method == "constant":
return self._denoise_constant_residual(eigenvalues, eigenvectors)
elif method == "shrink":
return self._denoise_shrink(eigenvalues, eigenvectors, alpha=alpha)
else:
raise ValueError(f"The only available denoising methods are "
f"'constant' or 'shrink'. The method provided is "
f"{method}")
def detone(self, eigenvalues, eigenvectors):
# Test if the correlation matrix has a market component
        eigenvalues_m = eigenvalues[0, 0]
        eigenvectors_m = eigenvectors[:, 0].reshape(-1, 1)
        # Subtract the rank-1 market component (matmul does not accept the
        # scalar eigenvalue, so scale the outer product instead).
        cov = (eigenvectors @ eigenvalues @ eigenvectors.T) - \
              eigenvalues_m * (eigenvectors_m @ eigenvectors_m.T)
return cov2corr(cov)
def get_pca(matrix):
"""
Function to retrieve the eigenvalues and eigenvector from a Hermitian
matrix
:param matrix: Hermitian matrix
:type matrix: np.matrix or np.ndarray
    :return: eigenvalues as a diagonal matrix and eigenvectors as columns, sorted by descending eigenvalue
    :rtype: Tuple[np.ndarray, np.ndarray]
"""
eigenvalues, eigenvectors = np.linalg.eigh(matrix)
indices = eigenvalues.argsort()[::-1]
eigenvalues, eigenvectors = eigenvalues[indices], eigenvectors[:, indices]
eigenvalues = np.diagflat(eigenvalues)
return eigenvalues, eigenvectors
def fit_kde(obs, bandwidth=0.25, kernel='gaussian', x=None):
"""
Fit kernel to a series of observations and derive the probability of obs
    :param obs: the observations to fit
    :type obs: np.ndarray
    :param bandwidth: kernel bandwidth
    :type bandwidth: float
    :param kernel: kernel name passed to sklearn's KernelDensity
    :type kernel: str
:param x: The array of values on which the fit KDE will be evaluated
:type x: array like
    :return: the fitted empirical pdf evaluated at x
    :rtype: pd.Series
"""
if len(obs.shape) == 1:
obs = obs.reshape(-1, 1)
kde = KernelDensity(kernel=kernel, bandwidth=bandwidth).fit(obs)
if x is None:
x = np.unique(obs).reshape(-1, 1)
log_prob = kde.score_samples(x)
pdf = pd.Series(np.exp(log_prob), index=x.flatten())
return pdf
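# Illustrative denoising sketch (an assumption, not part of the original
# module):
#   T, N = 1000, 100
#   x = np.random.normal(size=(T, N))
#   corr = np.corrcoef(x, rowvar=False)
#   eigenvalues, eigenvectors = get_pca(corr)
#   mp = MarcenkoPastur()
#   mp.fit(np.diag(eigenvalues), q=T / N, bandwidth=0.25)
#   corr_denoised = mp.denoise(eigenvalues, eigenvectors, method="constant")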
| 33.309783 | 79 | 0.586229 | 697 | 6,129 | 5.037303 | 0.249641 | 0.050128 | 0.037596 | 0.005127 | 0.134435 | 0.096554 | 0.066078 | 0 | 0 | 0 | 0 | 0.014068 | 0.315712 | 6,129 | 183 | 80 | 33.491803 | 0.823081 | 0.195791 | 0 | 0.089109 | 0 | 0 | 0.048579 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108911 | false | 0 | 0.059406 | 0 | 0.287129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88d295ec1717480689aae0ec07ffd4cad0afd39 | 530 | py | Python | saleor/core/emails.py | cleobuck/krolocosmetics | 4ae97601a18461323606d6e22673bb38cbaa6272 | [
"CC-BY-4.0"
] | 2 | 2019-12-04T19:43:51.000Z | 2020-07-06T09:56:04.000Z | saleor/core/emails.py | cleobuck/krolocosmetics | 4ae97601a18461323606d6e22673bb38cbaa6272 | [
"CC-BY-4.0"
] | 11 | 2021-02-02T22:34:37.000Z | 2022-02-10T20:20:50.000Z | saleor/core/emails.py | cleobuck/krolocosmetics | 4ae97601a18461323606d6e22673bb38cbaa6272 | [
"CC-BY-4.0"
] | null | null | null | from django.contrib.sites.models import Site
from django.templatetags.static import static
from ..core.utils import build_absolute_uri
def get_email_context():
site: Site = Site.objects.get_current()
logo_url = build_absolute_uri(static("images/logo-light.jpg"))
send_email_kwargs = {"from_email": site.settings.default_from_email}
email_template_context = {
"domain": site.domain,
"logo_url": logo_url,
"site_name": site.name,
}
return send_email_kwargs, email_template_context
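# Illustrative use (an assumption): callers merge both return values into a
# Django mail call, e.g.
#   send_email_kwargs, email_template_context = get_email_context()
#   send_mail(subject, body, recipient_list=[to], **send_email_kwargs)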
| 31.176471 | 72 | 0.732075 | 71 | 530 | 5.15493 | 0.464789 | 0.057377 | 0.087432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167925 | 530 | 16 | 73 | 33.125 | 0.829932 | 0 | 0 | 0 | 0 | 0 | 0.101887 | 0.039623 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.230769 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d88fb1b1d25ac7b6d5954cd96c458c9d471fb3b6 | 4,109 | py | Python | inb/tests/test_linkedin/test_driver.py | JoshiAyush/LinkedIn-Automator | 6341867fb9bb974ecfe388d90d1860e9c85a3b3c | [
"MIT"
] | 1 | 2021-01-05T17:29:02.000Z | 2021-01-05T17:29:02.000Z | inb/tests/test_linkedin/test_driver.py | JoshiAyush/LinkedIn-Automator | 6341867fb9bb974ecfe388d90d1860e9c85a3b3c | [
"MIT"
] | null | null | null | inb/tests/test_linkedin/test_driver.py | JoshiAyush/LinkedIn-Automator | 6341867fb9bb974ecfe388d90d1860e9c85a3b3c | [
"MIT"
] | null | null | null | # MIT License
#
# Copyright (c) 2019 Creative Commons
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

# from __future__ imports must occur at the beginning of the file. DO NOT CHANGE!
from __future__ import annotations

import os
import stat
import unittest
from unittest.mock import call
from unittest.mock import Mock
from unittest.mock import patch

from linkedin import Driver
from lib import DRIVER_PATH
from errors import WebDriverPathNotGivenException
from errors import WebDriverNotExecutableException


class TestDriverClass(unittest.TestCase):

    @unittest.skipIf(not os.getuid() == 0, "Requires root privileges!")
    def test_constructor_method_with_invalid_executable_path(
            self: TestDriverClass) -> None:
        # Non-string driver paths must be rejected outright.
        paths = [1, (1, 2, 3), [1, 2, 3], {1: 1, 2: 2}]
        for path in paths:
            with self.assertRaises(WebDriverPathNotGivenException):
                driver = Driver(path)

        original_file_permissions = stat.S_IMODE(
            os.lstat(DRIVER_PATH).st_mode)

        def remove_execute_permissions(path):
            """Remove execute permissions from this path, while keeping all
            other permissions intact.

            Params:
                path: The path whose permissions to alter.
            """
            NO_USER_EXECUTE = ~stat.S_IXUSR
            NO_GROUP_EXECUTE = ~stat.S_IXGRP
            NO_OTHER_EXECUTE = ~stat.S_IXOTH
            NO_EXECUTE = NO_USER_EXECUTE & NO_GROUP_EXECUTE & NO_OTHER_EXECUTE
            current_permissions = stat.S_IMODE(os.lstat(path).st_mode)
            os.chmod(path, current_permissions & NO_EXECUTE)

        # A driver binary without the execute bit must also be rejected.
        remove_execute_permissions(DRIVER_PATH)
        with self.assertRaises(WebDriverNotExecutableException):
            driver = Driver(driver_path=DRIVER_PATH)
        # Place the original file permissions back.
        os.chmod(DRIVER_PATH, original_file_permissions)
@patch("linkedin.Driver.enable_webdriver_chrome")
def test_constructor_method_with_valid_chromedriver_path(self: TestDriverClass, mock_enable_webdriver_chrome: Mock) -> None:
driver = Driver(driver_path=DRIVER_PATH)
mock_enable_webdriver_chrome.assert_called()
@patch("selenium.webdriver.ChromeOptions.add_argument")
def test_constructor_method_add_argument_internal_calls(
self: TestDriverClass, mock_add_argument: Mock) -> None:
calls = [
call(Driver.HEADLESS),
call(Driver.INCOGNITO),
call(Driver.NO_SANDBOX),
call(Driver.DISABLE_GPU),
call(Driver.START_MAXIMIZED),
call(Driver.DISABLE_INFOBARS),
call(Driver.ENABLE_AUTOMATION),
call(Driver.DISABLE_EXTENSIONS),
call(Driver.DISABLE_NOTIFICATIONS),
call(Driver.DISABLE_SETUID_SANDBOX),
call(Driver.IGNORE_CERTIFICATE_ERRORS)]
driver = Driver(driver_path=DRIVER_PATH, options=[
Driver.HEADLESS, Driver.INCOGNITO, Driver.NO_SANDBOX, Driver.DISABLE_GPU, Driver.START_MAXIMIZED,
Driver.DISABLE_INFOBARS, Driver.ENABLE_AUTOMATION, Driver.DISABLE_EXTENSIONS, Driver.DISABLE_NOTIFICATIONS,
Driver.DISABLE_SETUID_SANDBOX, Driver.IGNORE_CERTIFICATE_ERRORS])
mock_add_argument.assert_has_calls(calls)
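
# To run this suite (a sketch; the exact module path depends on how the
# package is laid out, and the first test is skipped unless it runs as root
# because it chmods the bundled chromedriver):
#
#     sudo python -m unittest inb.tests.test_linkedin.test_driver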
| 40.683168 | 127 | 0.748601 | 531 | 4,109 | 5.595104 | 0.369115 | 0.037025 | 0.02861 | 0.022215 | 0.088522 | 0.051161 | 0 | 0 | 0 | 0 | 0 | 0.004755 | 0.181066 | 4,109 | 100 | 128 | 41.09 | 0.878158 | 0.324166 | 0 | 0.035714 | 0 | 0 | 0.040015 | 0.030837 | 0 | 0 | 0 | 0 | 0.071429 | 1 | 0.071429 | false | 0 | 0.196429 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d895cb920f81b7b33316df8ee7c07eb1ad364352 | 3,037 | py | Python | example-tests/example_Grid.py | Indomerun/pyHiChi | fdceb238dfed6433ee350d5c593ca5e2cd4fbd2b | [
"MIT"
] | 11 | 2019-08-22T12:47:40.000Z | 2022-01-28T16:07:29.000Z | example-tests/example_Grid.py | Indomerun/pyHiChi | fdceb238dfed6433ee350d5c593ca5e2cd4fbd2b | [
"MIT"
] | 14 | 2019-09-02T08:24:55.000Z | 2022-02-14T11:40:43.000Z | example-tests/example_Grid.py | Indomerun/pyHiChi | fdceb238dfed6433ee350d5c593ca5e2cd4fbd2b | [
"MIT"
] | 9 | 2019-07-31T13:25:20.000Z | 2022-01-28T16:07:45.000Z | import sys
sys.path.append("../bin/")
import pyHiChi as hichi
import numpy as np


# Plane-wave initial condition along z: E = (0, cos z, 0), B = (-cos z, 0, 0).
# The vector callbacks (valueE, valueB) and the per-component callbacks
# (valueEx..valueBz) describe the same field.
def valueE(x, y, z):
    E = hichi.Vector3d(0, np.cos(z), 0)
    return E

def valueEx(x, y, z):
    Ex = 0
    return Ex

def valueEy(x, y, z):
    Ey = np.cos(z)
    return Ey

def valueEz(x, y, z):
    Ez = 0
    return Ez

def valueB(x, y, z):
    B = hichi.Vector3d(-np.cos(z), 0, 0)
    return B

def valueBx(x, y, z):
    Bx = -np.cos(z)
    return Bx

def valueBy(x, y, z):
    By = 0
    return By

def valueBz(x, y, z):
    Bz = 0
    return Bz

# Build two identical Yee grids; field1 is initialized with the vector
# callbacks, field2 with the per-component ones.
field_size = hichi.Vector3d(5, 10, 11)
min_coords = hichi.Vector3d(0.0, 1.0, 0.0)
max_coords = hichi.Vector3d(3.5, 7.0, 2*np.pi)
field_step = (max_coords - min_coords) / field_size
time_step = 1e-16

field1 = hichi.YeeField(field_size, min_coords, field_step, time_step)
field2 = hichi.YeeField(field_size, min_coords, field_step, time_step)

field1.set_E(valueE)
field1.set_B(valueB)
field2.set_E(valueEx, valueEy, valueEz)
field2.set_B(valueBx, valueBy, valueBz)
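
# (A sketch, assuming a pyHiChi field-stepping method exists on the grid
# objects) the fields could be advanced in time before sampling, e.g.:
#
#     for _ in range(100):
#         field1.update_fields()  # one FDTD step of time_step seconds
#
# update_fields() is an assumption here, not confirmed by this file; the
# example below only inspects the initial condition.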

# Show: sample both grids in the (x, z) plane and plot each component.
import matplotlib.pyplot as plt

N = 37
x = np.arange(0, 3.5, 3.5/N)
z = np.arange(0, 2*np.pi, 2*np.pi/N)

Ex1 = np.zeros(shape=(N, N))
Ex2 = np.zeros(shape=(N, N))
Ey1 = np.zeros(shape=(N, N))
Ey2 = np.zeros(shape=(N, N))
Bx1 = np.zeros(shape=(N, N))
Bx2 = np.zeros(shape=(N, N))

for ix in range(N):
    for iy in range(N):
        coord_xz = hichi.Vector3d(x[ix], 0.0, z[iy])
        E1 = field1.get_E(coord_xz)
        Ex1[ix, iy] = E1.x
        Ey1[ix, iy] = E1.y
        Bx1[ix, iy] = field1.get_B(coord_xz).x
        E2 = field2.get_E(coord_xz)
        Ex2[ix, iy] = E2.x
        Ey2[ix, iy] = E2.y
        Bx2[ix, iy] = field2.get_B(coord_xz).x

# The two rows should be visually identical if both initialization paths
# agree. In each panel the horizontal axis spans z (0..2*pi) and the vertical
# axis spans x (0..3.5), matching the imshow extent.
fig, axes = plt.subplots(ncols=3, nrows=2)
panels = [
    (Ex1, "Ex1", axes[0, 0]), (Ey1, "Ey1", axes[0, 1]), (Bx1, "Bx1", axes[0, 2]),
    (Ex2, "Ex2", axes[1, 0]), (Ey2, "Ey2", axes[1, 1]), (Bx2, "Bx2", axes[1, 2]),
]
for data, title, ax in panels:
    im = ax.imshow(data, cmap='RdBu', interpolation='none',
                   extent=(0, 2*np.pi, 0, 3.5))
    fig.colorbar(im, ax=ax)
    ax.set_title(title)
    ax.set_xlabel("z")
    ax.set_ylabel("x")

plt.tight_layout()
plt.show()
| 24.103175 | 94 | 0.622654 | 592 | 3,037 | 3.113176 | 0.179054 | 0.040695 | 0.024417 | 0.026044 | 0.322843 | 0.212154 | 0.212154 | 0.212154 | 0.212154 | 0.212154 | 0 | 0.079541 | 0.167929 | 3,037 | 125 | 95 | 24.296 | 0.649782 | 0.003293 | 0 | 0 | 0 | 0 | 0.028118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086022 | false | 0 | 0.043011 | 0 | 0.215054 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |