hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5c165b43b1f198ab8d8542c8ff00784873352544 | 4,911 | py | Python | simulations/exp_schapiro.py | nicktfranklin/EventSegmentation | aa1ab809918ee3a10fa0bd4ae1757b3febc573d1 | [
"MIT"
] | 14 | 2019-04-05T14:06:39.000Z | 2022-03-26T19:11:42.000Z | simulations/exp_schapiro.py | nicktfranklin/SEM_paper_simulations | aa1ab809918ee3a10fa0bd4ae1757b3febc573d1 | [
"MIT"
] | null | null | null | simulations/exp_schapiro.py | nicktfranklin/SEM_paper_simulations | aa1ab809918ee3a10fa0bd4ae1757b3febc573d1 | [
"MIT"
] | 2 | 2020-07-07T17:12:09.000Z | 2021-01-15T23:30:16.000Z | import numpy as np
from models import SEM, clear_sem
from sklearn import metrics
import pandas as pd
from scipy.special import logsumexp
def logsumexp_mean(x):
return logsumexp(x) - np.log(len(x))
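# Illustrative note (added): logsumexp_mean returns the log of the arithmetic
# mean of exp(x); for example logsumexp_mean(np.log([1.0, 2.0, 3.0])) equals
# np.log(2.0), since log((1 + 2 + 3) / 3) = log(2).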
def batch_experiment(sem_kwargs, n_train=1400, n_test=600, progress_bar=True):
# define the graph structure for the experiment
g = np.array([
[0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
], dtype=float)
# define the random vectors
d = 25
items = np.random.randn(15, d) / np.sqrt(d)
# draw random walks on the graph
def sample_pmf(pmf):
return np.sum(np.cumsum(pmf) < np.random.uniform(0, 1))
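# Worked example (added for clarity): sample_pmf performs inverse-CDF sampling.
# For pmf = [0.2, 0.5, 0.3] the cumulative sums are [0.2, 0.7, 1.0]; a uniform
# draw of 0.6 exceeds only the first entry, so the function returns index 1.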
train_nodes = [np.random.randint(15)]
for _ in range(n_train-1):
train_nodes.append(sample_pmf(g[train_nodes[-1]] / g[train_nodes[-1]].sum()))
# draw hamiltonian paths from the graph
# this mask marks which nodes are preferred when sampling the path;
# nodes marked 1 are visited first, with a fallback to any unvisited neighbour
preferred_nodes = np.array([
[1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0],
], dtype=float)
def sample_hamilton(node0):
is_visited = np.zeros(15, dtype=bool)
counter = 0
nodes = []
while counter < (len(is_visited)):
p = g[node0] * ~is_visited * preferred_nodes
if np.sum(p) == 0:
p = g[node0] * ~is_visited
node0 = sample_pmf(p / np.sum(p))
nodes.append(node0)
is_visited[node0] = True
counter += 1
return nodes
test_nodes = []
node0 = np.random.randint(15)
for _ in range(n_test // 15):  # integer division: each Hamiltonian pass covers all 15 nodes
test_nodes += sample_hamilton(node0)
node0 = test_nodes[-1]
# embed the vectors
all_nodes = train_nodes + test_nodes
x = []
for node in all_nodes:
x.append(items[node])
x = np.array(x)
sem_model = SEM(**sem_kwargs)
sem_model.run(x, progress_bar=progress_bar)
# prepared diagnostic measures
clusters = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
node_cluster = []
for node in test_nodes:
node_cluster.append(clusters[node])
node_cluster = np.array(node_cluster)
all_node_cluster = []
for node in all_nodes:
all_node_cluster.append(clusters[node])
all_node_cluster = np.array(all_node_cluster)
all_boundaries_true = np.concatenate([[False], (all_node_cluster[1:] != all_node_cluster[:-1])])
test_boundaries = sem_model.results.e_hat[n_train-1:-1] != sem_model.results.e_hat[n_train:]
boundaries = sem_model.results.e_hat[:n_train-1] != sem_model.results.e_hat[1:n_train]
test_bound_prob = sem_model.results.log_boundary_probability[n_train:]
bound_prob = sem_model.results.log_boundary_probability[1:n_train]
# pull the prediction error (Bayesian Surprise)
test_pe = sem_model.results.surprise[n_train:]
bound_pe = sem_model.results.surprise[1:n_train]
# cache the correlation between log boundary probability and log surprise
r = np.corrcoef(
sem_model.results.log_boundary_probability, sem_model.results.surprise
)[0][1]
output = {
'Community Transitions (Hamilton)': np.exp(logsumexp_mean(test_bound_prob[all_boundaries_true[n_train:]])),
'Other Parse (Hamilton)': np.exp(logsumexp_mean(test_bound_prob[all_boundaries_true[n_train:]==False])),
'Community Transitions (All Other Trials)': np.exp(logsumexp_mean(bound_prob[all_boundaries_true[1:n_train]])),
'Other Parse (All Other Trials)': np.exp(logsumexp_mean(bound_prob[all_boundaries_true[1:n_train]==False])),
'PE Community Transitions (Hamilton)': logsumexp_mean(test_pe[all_boundaries_true[n_train:]]),
'PE Other Parse (Hamilton)': logsumexp_mean(test_pe[all_boundaries_true[n_train:]==False]),
'PE Community Transitions (All Other Trials)': logsumexp_mean(bound_pe[all_boundaries_true[1:n_train]]),
'PE Other Parse (All Other Trials)': logsumexp_mean(bound_pe[all_boundaries_true[1:n_train]==False]),
'r':r
}
# clear the SEM model to free memory between runs
clear_sem(sem_model)
sem_model = None
return output
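# Usage sketch (added; the SEM keyword arguments are an assumption and depend
# on the models.SEM implementation):
#   results = batch_experiment(sem_kwargs={...}, n_train=1400, n_test=600)
#   df = pd.DataFrame([results])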
| 37.48855 | 119 | 0.591326 | 817 | 4,911 | 3.388005 | 0.161567 | 0.102601 | 0.131142 | 0.148844 | 0.476156 | 0.375723 | 0.354046 | 0.34104 | 0.27565 | 0.211344 | 0 | 0.088437 | 0.258603 | 4,911 | 130 | 120 | 37.776923 | 0.671793 | 0.082672 | 0 | 0.043011 | 0 | 0 | 0.05809 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043011 | false | 0 | 0.053763 | 0.021505 | 0.139785 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c17c5b7aabeab82c94531a8db0f717695069f5e | 28,818 | py | Python | src/mlshell/producers/dataset.py | nizaevka/mlshell | 36893067f598f6b071b61604423d0fd15c2a7c62 | [
"Apache-2.0"
] | 8 | 2020-10-04T15:33:58.000Z | 2020-11-24T15:10:18.000Z | src/mlshell/producers/dataset.py | nizaevka/mlshell | 36893067f598f6b071b61604423d0fd15c2a7c62 | [
"Apache-2.0"
] | 5 | 2020-03-06T18:13:10.000Z | 2022-03-12T00:52:48.000Z | src/mlshell/producers/dataset.py | nizaevka/mlshell | 36893067f598f6b071b61604423d0fd15c2a7c62 | [
"Apache-2.0"
] | null | null | null | """
The :mod:`mlshell.producers.dataset` module contains an example `Dataset` class
for creating an empty data object and a `DataProducer` class for filling it.
:class:`mlshell.Dataset` provides a unified interface to interact with the
underlying data. It is intended to be used in :class:`mlshell.Workflow`. For new
data formats there is no need to edit the `Workflow` class; adapt `Dataset` to
comply with the interface. The current implementation is based on a dictionary.
:class:`mlshell.DataProducer` specifies methods divided for convenience into:
* :class:`mlshell.DataIO`, defining IO-related methods.
Currently reading from a csv file is implemented.
* :class:`mlshell.DataPreprocessor`, preprocessing data to its final state.
Implements data transformation in compliance with the `Dataset` class; common
exploration techniques are also available.
"""
import copy
import os
import jsbeautifier
import numpy as np
import pandas as pd
import pycnfg
import sklearn
import tabulate
__all__ = ['Dataset', 'DataIO', 'DataPreprocessor', 'DatasetProducer']
class Dataset(dict):
"""Unified data interface.
Implements interface to access arbitrary data.
Interface: x, y, data, meta, subset, dump_pred and whole dict api.
Parameters
----------
*args : list
Passed to parent class constructor.
**kwrags : dict
Passed to parent class constructor.
Attributes
----------
data : :class:`pandas.DataFrame`
Underlying data.
subsets : dict
{'subset_id' : array-like subset indices, ..}.
meta : dict
Extracted auxiliary information from data: {
'index': list
List of index column label(s).
'features': list
List of feature column label(s).
'categoric_features': list
List of categorical feature column label(s).
'targets': list
List of target column label(s),
'indices': list
List of rows indices.
'classes': list of :class:`numpy.ndarray`
List of sorted unique labels for each target(s) (n_outputs,
n_classes).
'pos_labels': list
List of "positive" label(s) for target(s) (n_outputs,).
'pos_labels_ind': list
List of "positive" label(s) index in :func:`numpy.unique`
for target(s) (n_outputs).
categoric_ind_name : dict
Dictionary with categorical feature indices as key, and
tuple ('feature_name', categories) as value:
{'column_index': ('feature_name', ['cat1', 'cat2'])}.
numeric_ind_name : dict
Dictionary with numeric features indices as key, and tuple
('feature_name', ) as value: {'columns_index':('feature_name',)}.
}
Notes
-----
Inherited from dict class, so attributes section describes keys.
"""
_required_parameters = []
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def __hash__(self):
return hash(pd.util.hash_pandas_object(self['data']).sum())
@property
def oid(self):
"""str: Dataset identifier."""
return self['_oid']
@oid.setter
def oid(self, value):
self['_oid'] = value
@property
def x(self):
""":class:`pandas.DataFrame` : Extracted features columns."""
df = self['data']
meta = self['meta']
return df.loc[:, meta['features']]
@property
def y(self):
""":class:`pandas.DataFrame` : Extracted targets columns."""
df = self['data']
meta = self['meta']
# return df[meta['targets']].values
res = df.loc[:, meta['targets']].values.ravel() \
if len(meta['targets']) == 1 else df.loc[:, meta['targets']].values
return res
@property
def meta(self):
"""dict: Access meta."""
return self['meta']
@property
def data(self):
""":class:`pandas.DataFrame` : Access data."""
return self['data']
def subset(self, subset_id):
""":class:`mlshell.Dataset` : Access subset. """
if subset_id == '':
return self
df = self['data']
index = self['subsets'][subset_id]  # subset of meta['indices']
# Inherit only meta (except indices).
# dict(self) will inherit by ref.
dataset = Dataset(**{
'meta': copy.deepcopy(self.meta),
'data': df.loc[index],
'subsets': {},
'_oid': f"{self['_oid']}__{subset_id}"})
# Update indices in meta.
dataset.meta['indices'] = index
# if reset_index: np.array(dataset.meta['indices'])[index].tolist()
return dataset
def dump_pred(self, filepath, y_pred, **kwargs):
"""Dump columns to disk.
Parameters
----------
filepath: str
File path without extension.
y_pred: array-like
pipeline.predict() result.
**kwargs: dict
Additional kwargs to pass to .to_csv().
Returns
-------
fullpath : str
Full filepath.
"""
meta = self.meta
# Recover original index and names.
dic = dict(zip(
meta['targets'],
[y_pred] if len(meta['targets']) == 1 else np.array(y_pred).T
))
obj = pd.DataFrame(index=meta['indices'],
data=dic).rename_axis(meta['index'], axis=0)
fullpath = f"{filepath}_pred.csv"
if "PYTEST_CURRENT_TEST" in os.environ:
if 'float_format' not in kwargs:
kwargs['float_format'] = '%.8f'
with open(fullpath, 'w', newline='') as f:
obj.to_csv(f, mode='w', header=True, index=True, sep=',',
line_terminator='\n', **kwargs)
return fullpath
class DataIO(object):
"""Get raw data from database.
Interface: load.
Parameters
----------
project_path: str.
Absolute path to current project dir.
logger : :class:`logging.Logger`
Logger.
"""
_required_parameters = ['project_path', 'logger']
def __init__(self, project_path, logger):
self.logger = logger
self.project_path = project_path
def load(self, dataset, filepath, key='data',
random_skip=False, random_state=None, **kwargs):
"""Load data from csv-file.
Parameters
----------
dataset : :class:`mlshell.Dataset`
Template for dataset.
filepath : str
Absolute path to csv file or relative to 'project__path' started
with './'.
key : str, optional (default='data')
Identifier under which the loaded data is added to the dataset
dictionary. Useful when loading multiple files and combining them in a
separate step under 'data'.
random_skip : bool, optional (default=False)
If True, randomly skip rows while reading the file, keeping 'nrows'
lines. Overrides the `skiprows` kwarg.
random_state : int, optional (default=None).
Fix random state for `random_skip`.
**kwargs : dict
Additional parameters passed to :func:`pandas.read_csv`.
Returns
-------
dataset : :class:`mlshell.Dataset`
Key added: {'data': :class:`pandas.DataFrame` ,}.
Notes
-----
If `nrows` exceeds the number of lines in the file, it is automatically
set to None.
"""
if filepath.startswith('./'):
filepath = "{}/{}".format(self.project_path, filepath[2:])
# Count lines.
with open(filepath, 'r') as f:
lines = sum(1 for _ in f)
if 'skiprows' in kwargs and random_skip:
self.logger.warning("random_skip rewrite skiprows kwarg.")
nrows = kwargs.get('nrows', None)
skiprows = kwargs.get('skiprows', None)
if nrows:
if nrows > lines:
nrows = None
elif random_skip:
# skiprows index start from 0.
# If no headers, returns nrows+1.
random_state = sklearn.utils.check_random_state(random_state)
skiprows = random_state.choice(range(1, lines),
size=lines - nrows - 1,
replace=False, p=None)
kwargs['skiprows'] = skiprows
kwargs['nrows'] = nrows
with open(filepath, 'r') as f:
raw = pd.read_csv(f, **kwargs)
self.logger.info("Data loaded from:\n {}".format(filepath))
dataset[key] = raw
return dataset
class DataPreprocessor(object):
"""Transform raw data in compliance with `Dataset` class.
Interface: preprocess, info, split.
Parameters
----------
project_path: str.
Absolute path to current project dir.
logger : :class:`logging.Logger`
Logger.
"""
_required_parameters = ['project_path', 'logger']
def __init__(self, project_path, logger):
self.logger = logger
self.project_path = project_path
def preprocess(self, dataset, targets_names, features_names=None,
categor_names=None, pos_labels=None, **kwargs):
"""Preprocess raw data.
Parameters
----------
dataset : :class:`mlshell.Dataset`
Raw dataset: {'data': :class:`pandas.DataFrame` }.
targets_names: list
List of target column names in the raw dataset. Even if they do not
exist, they will be used to name predictions in ``dataset.dump_pred``.
features_names: list, optional (default=None)
List of feature column names in the raw dataset. If None, all columns
except targets are used.
categor_names: list, optional (default=None)
List of categorical feature (including binary) identifiers in the raw
dataset. If None, an empty list is used.
pos_labels: list, optional (default=None)
Classification only, list of "positive" label(s) in target(s).
Could be used in :func:`sklearn.metrics.roc_curve` for
threshold analysis and metrics evaluation if the classifier supports
``predict_proba``. If None, for each target the last label in
:func:`numpy.unique` is used. For regression, set [] to prevent
evaluation.
**kwargs : dict
Additional parameters to add in dataset.
Returns
-------
dataset : :class:`mlshell.Dataset`
Resulted dataset. Key updated: 'data'. Keys added:
'subsets': dict
Storage for data subset(s) indices (filled in split method)
{'subset_id': indices}.
'meta' : dict
Extracted auxiliary information from data:
{
'index': list
List of index column label(s).
'features': list
List of feature column label(s).
'categoric_features': list
List of categorical feature column label(s).
'targets': list
List of target column label(s),
'indices': list
List of rows indices.
'classes': list of :class:`numpy.ndarray`
List of sorted unique labels for each target(s) (n_outputs,
n_classes).
'pos_labels': list
List of "positive" label(s) for target(s) (n_outputs,).
'pos_labels_ind': list
List of "positive" label(s) index in :func:`numpy.unique`
for target(s) (n_outputs).
categoric_ind_name : dict
Dictionary with categorical feature indices as key, and
tuple ('feature_name', categories) as value:
{'column_index': ('feature_name', ['cat1', 'cat2'])}.
numeric_ind_name : dict
Dictionary with numeric features indices as key, and tuple
('feature_name', ) as value: {'columns_index':
('feature_name',)}.
}
Notes
-----
Don't change the dataframe shape or index/column names after ``meta``
has been generated.
Features columns unified:
* Fill gaps.
* If gap in categorical => set 'unknown'.
* If gap in non-categorical => set np.nan.
* Cast categorical features to str dtype, and apply Ordinal encoder.
* Cast values to np.float64.
"""
raw = dataset['data']
if categor_names is None:
categor_names = []
if features_names is None:
features_names = [c for c in raw.columns if c not in targets_names]
for i in (targets_names, features_names, categor_names):
if not isinstance(i, list):
raise TypeError(f"{i} should be a list.")
index = raw.index
targets_df, raw_info_targets =\
self._process_targets(raw, targets_names, pos_labels)
features_df, raw_info_features =\
self._process_features(raw, features_names, categor_names)
data = self._combine(index, targets_df, features_df)
meta = {
'index': index.name,
'indices': list(index),
'targets': targets_names,
'features': list(features_names),
'categoric_features': categor_names,
**raw_info_features,
**raw_info_targets,
}
self.logger.debug(f"Dataset meta:\n {meta}")
dataset.update({'data': data,
'meta': meta,
'subsets': {},
**kwargs})
return dataset
def info(self, dataset, **kwargs):
"""Log dataset info.
Check:
* duplicates.
* gaps.
Parameters
----------
dataset : :class:`mlshell.Dataset`
Dataset to explore.
**kwargs : dict
Additional parameters to pass in low-level functions.
Returns
-------
dataset : :class:`mlshell.Dataset`
For compliance with producer logic.
"""
self._check_duplicates(dataset['data'], **kwargs)
self._check_gaps(dataset['data'], **kwargs)
return dataset
def split(self, dataset, **kwargs):
"""Split dataset on train, test.
Parameters
----------
dataset : :class:`mlshell.Dataset`
Dataset to unify.
**kwargs : dict
Additional parameters to pass in:
:func:`sklearn.model_selection.train_test_split` .
Returns
-------
dataset : :class:`mlshell.Dataset`
Resulted dataset. 'subset' value updated:
{'train': array-like train rows indices,
'test': array-like test rows indices,}
Notes
-----
If split ``train_size==1.0`` or ``test_size==0``: ``test=train`` ,
other kwargs ignored.
No copy takes place.
"""
if 'test_size' not in kwargs:
kwargs['test_size'] = None
if 'train_size' not in kwargs:
kwargs['train_size'] = None
data = dataset['data']
if (kwargs['train_size'] == 1.0 and kwargs['test_size'] is None
or kwargs['train_size'] is None and kwargs['test_size'] == 0):
# train = test = data
train_index = test_index = data.index
else:
train, test, train_index, test_index = \
sklearn.model_selection.train_test_split(
data, data.index.values, **kwargs)
# Add to dataset.
dataset['subsets'].update({'train': train_index,
'test': test_index})
return dataset
# ============================== preprocess ===============================
def _process_targets(self, raw, target_names, pos_labels):
"""Targets preprocessing."""
try:
targets_df = raw[target_names]
except KeyError:
self.logger.warning(f"No target column(s) found in df:\n"
f" {target_names}")
targets_df = pd.DataFrame()
targets_df, classes, pos_labels, pos_labels_ind =\
self._unify_targets(targets_df, pos_labels)
# targets = targets_df.values
raw_info_targets = {
'classes': classes,
'pos_labels': pos_labels,
'pos_labels_ind': pos_labels_ind,
}
return targets_df, raw_info_targets
def _process_features(self, raw, features_names, categor_names):
"""Features preprocessing."""
features_df = raw[features_names]
features_df, categoric_ind_name, numeric_ind_name \
= self._unify_features(features_df, categor_names)
# features = features_df.values
raw_info_features = {
'categoric_ind_name': categoric_ind_name,
'numeric_ind_name': numeric_ind_name, }
return features_df, raw_info_features
def _combine(self, index, targets_df, features_df):
"""Combine preprocessed sub-data."""
# targets_df empty dataframe or None is possible
return pd.concat(
[targets_df, features_df],
axis=1,
)
def _unify_targets(self, targets, pos_labels=None):
"""Unify input targets.
Extract classes and positive label index (classification only).
Parameters
----------
targets : :class:`pandas.DataFrame`
Data to unify.
pos_labels: list, optional (default=None)
Classification only, list of "positive" labels for targets.
Could be used for threshold analysis (roc_curve) and metrics
evaluation if the classifier supports predict_proba. If None, the last
label in :func:`numpy.unique` for each target is used. For regression,
set [] to prevent evaluation.
Returns
-------
targets: :class:`pandas.DataFrame`
Unchanged input.
classes: list of :class:`numpy.ndarray`
List of sorted unique labels for target(s) (n_outputs, n_classes).
pos_labels: list
List of "positive" label(s) for target(s) (n_outputs,).
pos_labels_ind: list
List of "positive" label(s) index in :func:`numpy.unique`
for target(s) (n_outputs,).
"""
# Regression.
if isinstance(pos_labels, list) and not pos_labels:
classes = []
pos_labels_ind = []
return targets, classes, pos_labels, pos_labels_ind
# Classification.
# Find classes, example: [array([1]), array([2, 7])].
classes = [np.unique(j) for i, j in targets.iteritems()]
if pos_labels is None:
n_targets = len(classes)
pos_labels_ind = [len(classes[i]) - 1 for i in range(n_targets)]
pos_labels = [classes[i][pos_labels_ind[i]]
for i in range(n_targets)] # [2,4]
else:
# Find where pos_labels in sorted labels, example: [1, 0].
pos_labels_ind = [np.where(classes[i] == pos_labels[i])[0][0]
for i in range(len(classes))]
# Could be no target columns in new data.
self.logger.debug(
f"Labels {pos_labels} identified as positive for target(s):\n"
f" when classifier supports predict_proba: prediction="
f"pos_label on sample, if P(pos_label) > classification "
f"threshold.")
return targets, classes, pos_labels, pos_labels_ind
def _unify_features(self, features, categor_names):
"""Unify input features.
Parameters
----------
features : :class:`pandas.DataFrame`
Data to unify.
categor_names: list
List of categorical features (and binary) column names in features.
Returns
-------
features: :class:`pandas.DataFrame`
Input updates:
* fill gaps.
if gap in categorical => fill 'unknown'
if gap in non-categorical => np.nan
* cast categorical features to str dtype, and apply OrdinalEncoder.
* cast the whole features frame to np.float64.
categoric_ind_name : dict
{'column_index': ('feature_name', ['cat1', 'cat2'])}
Dictionary with categorical feature indices as key, and tuple
('feature_name', categories) as value.
numeric_ind_name : dict {'columns_index':('feature_name',)}
Dictionary with numeric features indices as key, and tuple
('feature_name', ) as value.
"""
categoric_ind_name = {}
numeric_ind_name = {}
# Turn off: SettingWithCopy, excessive.
pd.options.mode.chained_assignment = None
for ind, column_name in enumerate(features):
if column_name in categor_names:
# Fill gaps with 'unknown', inplace unreliable (copy!).
features.loc[:, column_name] = features[column_name]\
.fillna(value='unknown', method=None, axis=None,
inplace=False, limit=None, downcast=None)
# Cast dtype to str (copy!).
features.loc[:, column_name] = features[column_name].astype(str)
# Encode
encoder = sklearn.preprocessing.\
OrdinalEncoder(categories='auto')
features.loc[:, column_name] = encoder\
.fit_transform(features[column_name]
.values.reshape(-1, 1))
# Generate {index: ('feature_id', ['B','A','C'])}.
# tolist need for 'hr' cache dump.
categoric_ind_name[ind] = (column_name,
encoder.categories_[0].tolist())
else:
# Fill gaps with np.nan, inplace unreliable (copy!).
# Could work with no copy on slice or single col even inplace.
features.loc[:, column_name] = features.loc[:, column_name]\
.fillna(value=np.nan, method=None, axis=None,
inplace=False, downcast=None)
# Generate {'index': ('feature_id',)}.
numeric_ind_name[ind] = (column_name,)
# Turn on: SettingWithCopy.
pd.options.mode.chained_assignment = 'warn'
# Cast to np.float64 without copy.
# python float = np.float = C double =
# np.float64 = np.double(64 bit processor)).
# [alternative] sklearn.utils.as_float_array / assert_all_finite
features = features.astype(np.float64, copy=False, errors='ignore')
# Additional check.
self._check_numeric_types(features, categor_names)
return features, categoric_ind_name, numeric_ind_name
def _check_numeric_types(self, data, categor_names):
"""Check that all non-categorical features are of numeric type."""
dtypes = data.dtypes
misstype = []
for ind, column_name in enumerate(data):
if column_name not in categor_names:
if not np.issubdtype(dtypes[column_name], np.number):
misstype.append(column_name)
if misstype:
raise ValueError(f"Input data non-categoric columns should be "
f"subtype of np.number, check:\n"
f" {misstype}")
return None
# ================================ info ===================================
def _check_duplicates(self, data, del_duplicates=False):
"""Check duplicates rows in dataframe.
Parameters
----------
data : :class:`pandas.DataFrame`
Dataframe to check.
del_duplicates : bool
If True, delete rows with duplicated.
If False, do nothing.
Notes
-----
Use del_duplicates=True only before generating dataset `meta`.
"""
# Duplicate rows index mask.
mask = data.duplicated(subset=None, keep='first')
dupl_n = np.sum(mask)
if dupl_n:
self.logger.warning(f"Warning: {dupl_n} duplicates rows found,\n"
" see debug.log for details.")
# Count unique duplicated rows.
rows_count = data[mask].groupby(data.columns.tolist())\
.size().reset_index().rename(columns={0: 'count'})
rows_count.sort_values(by=['count'], axis=0,
ascending=False, inplace=True)
with pd.option_context('display.max_rows', None,
'display.max_columns', None):
pprint = tabulate.tabulate(rows_count, headers='keys',
tablefmt='psql')
self.logger.debug(f"Duplicates found\n{pprint}")
if del_duplicates:
# Delete duplicates (without index reset).
size_before = data.size
data.drop_duplicates(keep='first', inplace=True)
size_after = data.size
if size_before - size_after != 0:
self.logger.warning(f"Warning: delete duplicates rows "
f"({size_before - size_after} values).")
return None
def _check_gaps(self, data, del_gaps=False, nogap_columns=None):
"""Check gaps in dataframe.
Parameters
----------
data : :class:`pandas.DataFrame`
Dataframe to check.
del_gaps : bool, optional (default=False)
If True, delete rows with gaps from `nongap_columns` list.
If False, raise Exception when `nongap_columns` contain gaps.
nogap_columns : list, optional (default=None)
Columns where gaps are forbidden: ['column_1', ..]. If None, [].
Notes
-----
Use del_gaps=True only before generating dataset `meta` (preprocess).
"""
if nogap_columns is None:
nogap_columns = []
gaps_number = data.size - data.count().sum()
columns_with_gaps_dic = {}
if gaps_number > 0:
for column_name in data:
column_gaps_number = data[column_name].size \
- data[column_name].count()
if column_gaps_number > 0:
columns_with_gaps_dic[column_name] = column_gaps_number
self.logger.warning('Warning: gaps found: {} {:.3f}%,\n'
' see debug.log for details.'
.format(gaps_number, 100 * gaps_number / data.size))  # report the gap share as a percentage
pprint = jsbeautifier.beautify(str(columns_with_gaps_dic))
self.logger.debug(f"Gaps per column:\n{pprint}")
subset = [column_name for column_name in nogap_columns
if column_name in columns_with_gaps_dic]
if del_gaps and subset:
# Delete rows with gaps in specified columns.
data.dropna(axis=0, how='any', thresh=None,
subset=subset, inplace=True)  # subset is already a list of column names
elif subset:
raise ValueError(f"Gaps in {subset}.")
return None
class DatasetProducer(pycnfg.Producer, DataIO, DataPreprocessor):
"""Factory to produce dataset.
Parameters
----------
objects : dict
Dictionary with objects from previous executed producers:
{'section_id__config__id', object,}.
oid : str
Unique identifier of produced object.
path_id : str, optional (default='default')
Project path identifier in `objects`.
logger_id : str, optional (default='default')
Logger identifier in `objects`.
Attributes
----------
objects : dict
Dictionary with objects from previous executed producers:
{'section_id__config__id', object,}.
oid : str
Unique identifier of produced object.
logger : :class:`logging.Logger`
Logger.
project_path: str
Absolute path to project dir.
"""
_required_parameters = ['objects', 'oid', 'path_id', 'logger_id']
def __init__(self, objects, oid, path_id='path__default',
logger_id='logger__default'):
pycnfg.Producer.__init__(self, objects, oid, path_id=path_id,
logger_id=logger_id)
DataIO.__init__(self, self.project_path, self.logger)
DataPreprocessor.__init__(self, self.project_path, self.logger)
if __name__ == '__main__':
pass
| 36.804598 | 80 | 0.561906 | 3,144 | 28,818 | 4.997774 | 0.151399 | 0.019474 | 0.011455 | 0.008592 | 0.347419 | 0.276459 | 0.222364 | 0.188634 | 0.177942 | 0.173232 | 0 | 0.002892 | 0.328024 | 28,818 | 782 | 81 | 36.851662 | 0.808521 | 0.436949 | 0 | 0.118243 | 0 | 0 | 0.102163 | 0.001928 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084459 | false | 0.003378 | 0.027027 | 0.003378 | 0.212838 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c1ae1443f7fa73080c8c0a769854cb87fc44def | 21,356 | py | Python | examples/utils/discrete_space.py | beark007/smarts_ppo | 8f6aa33a6fcfb74dc0b8e92951d6b70d6e2874de | [
"MIT"
] | null | null | null | examples/utils/discrete_space.py | beark007/smarts_ppo | 8f6aa33a6fcfb74dc0b8e92951d6b70d6e2874de | [
"MIT"
] | null | null | null | examples/utils/discrete_space.py | beark007/smarts_ppo | 8f6aa33a6fcfb74dc0b8e92951d6b70d6e2874de | [
"MIT"
] | null | null | null | """
This file contains tuned observation and reward functions.
It also fixes the TTC (time-to-collision) calculation.
"""
import math
import gym
import numpy as np
from smarts.core.agent import AgentSpec
from smarts.core.agent_interface import AgentInterface
from smarts.core.agent_interface import OGM, NeighborhoodVehicles
from smarts.core.controllers import ActionSpaceType, DiscreteAction
MAX_LANES = 5 # The maximum number of lanes we expect to see in any scenario.
lane_crash_flag = False # used for training to signal a flipped car
intersection_crash_flag = False # used for training to signal intersect crash
# ==================================================
# Discrete Action Space
# "keep_lane", "slow_down", "change_lane_left", "change_lane_right"
# ==================================================
ACTION_SPACE = gym.spaces.Discrete(4)
ACTION_CHOICE = [
DiscreteAction.keep_lane,
DiscreteAction.slow_down,
DiscreteAction.change_lane_left,
DiscreteAction.change_lane_right,
]
# ==================================================
# Observation Space
# This observation space should match the output of observation(..) below
# ==================================================
OBSERVATION_SPACE = gym.spaces.Dict(
{
# To make car follow the waypoints
# distance from lane center
"distance_from_center": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# relative heading angle from 10 waypoints in 50 forehead waypoints
"heading_errors": gym.spaces.Box(low=-1.0, high=1.0, shape=(10,)),
# Car attributes
# ego speed
"speed": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# ego steering
"steering": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# To make car learn to slow down, overtake or dodge
# distance to the closest car in each lane
"lane_dist": gym.spaces.Box(low=-1e10, high=1e10, shape=(5,)),
# time to collide to the closest car in each lane
"lane_ttc": gym.spaces.Box(low=-1e10, high=1e10, shape=(5,)),
# ego lane closest social vehicle relative speed
"closest_lane_nv_rel_speed": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# distance to the closest car in possible intersection direction
"intersection_ttc": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# time to collide to the closest car in possible intersection direction
"intersection_distance": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# intersection closest social vehicle relative speed
"closest_its_nv_rel_speed": gym.spaces.Box(low=-1e10, high=1e10, shape=(1,)),
# intersection closest social vehicle relative position in vehicle heading coordinate
"closest_its_nv_rel_pos": gym.spaces.Box(low=-1e10, high=1e10, shape=(2,)),
}
)
def heading_to_degree(heading):
# +y = 0 rad. Note the 0 means up direction
return np.degrees((heading + math.pi) % (2 * math.pi))
def heading_to_vec(heading):
# axis x: right, y:up
angle = (heading + math.pi * 0.5) % (2 * math.pi)
return np.array([math.cos(angle), math.sin(angle)])
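# Worked examples (added for clarity): with these conventions a heading of
# 0 rad points "up", so heading_to_degree(0.0) == 180.0 and
# heading_to_vec(0.0) is approximately [0.0, 1.0].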
def ttc_by_path(ego, wp_paths, neighborhood_vehicle_states, ego_closest_wp):
global lane_crash_flag
global intersection_crash_flag
# init flag, dist, ttc, headings
lane_crash_flag = False
intersection_crash_flag = False
# default 10s
lane_ttc = np.array([1] * 5, dtype=float)
# default 100m
lane_dist = np.array([1] * 5, dtype=float)
# default 120km/h
closest_lane_nv_rel_speed = 1
intersection_ttc = 1
intersection_distance = 1
closest_its_nv_rel_speed = 1
# default 100m
closest_its_nv_rel_pos = np.array([1, 1])
# here to set invalid value to 0
wp_paths_num = len(wp_paths)
lane_ttc[wp_paths_num:] = 0
lane_dist[wp_paths_num:] = 0
# return if no neighbour vehicle or off the routes(no waypoint paths)
if not neighborhood_vehicle_states or not wp_paths_num:
return (
lane_ttc,
lane_dist,
closest_lane_nv_rel_speed,
intersection_ttc,
intersection_distance,
closest_its_nv_rel_speed,
closest_its_nv_rel_pos,
)
# merge waypoint paths (consider might not the same length)
merge_waypoint_paths = []
for wp_path in wp_paths:
merge_waypoint_paths += wp_path
wp_poses = np.array([wp.pos for wp in merge_waypoint_paths])
# compute neighbour vehicle closest wp
nv_poses = np.array([nv.position for nv in neighborhood_vehicle_states])
nv_wp_distance = np.linalg.norm(nv_poses[:, :2][:, np.newaxis] - wp_poses, axis=2)
nv_closest_wp_index = np.argmin(nv_wp_distance, axis=1)
nv_closest_distance = np.min(nv_wp_distance, axis=1)
# get not in same lane id social vehicles(intersect vehicles and behind vehicles)
wp_lane_ids = np.array([wp.lane_id for wp in merge_waypoint_paths])
nv_lane_ids = np.array([nv.lane_id for nv in neighborhood_vehicle_states])
not_in_same_lane_id = nv_lane_ids[:, np.newaxis] != wp_lane_ids
not_in_same_lane_id = np.all(not_in_same_lane_id, axis=1)
ego_edge_id = ego.lane_id[1:-2] if ego.lane_id[0] == "-" else ego.lane_id[:-2]
nv_edge_ids = np.array(
[
nv.lane_id[1:-2] if nv.lane_id[0] == "-" else nv.lane_id[:-2]
for nv in neighborhood_vehicle_states
]
)
not_in_ego_edge_id = nv_edge_ids[:, np.newaxis] != ego_edge_id
not_in_ego_edge_id = np.squeeze(not_in_ego_edge_id, axis=1)
is_not_closed_nv = not_in_same_lane_id & not_in_ego_edge_id
not_closed_nv_index = np.where(is_not_closed_nv)[0]
# keep only social vehicles close to the waypoints; this excludes vehicles behind the ego or past the end of the waypoint paths
close_nv_index = np.where(nv_closest_distance < 2)[0]
if not close_nv_index.size:
pass
else:
close_nv = [neighborhood_vehicle_states[i] for i in close_nv_index]
# calculate waypoints distance to ego car along the routes
wps_with_lane_dist_list = []
for wp_path in wp_paths:
path_wp_poses = np.array([wp.pos for wp in wp_path])
wp_poses_shift = np.roll(path_wp_poses, 1, axis=0)
wps_with_lane_dist = np.linalg.norm(path_wp_poses - wp_poses_shift, axis=1)
wps_with_lane_dist[0] = 0
wps_with_lane_dist = np.cumsum(wps_with_lane_dist)
wps_with_lane_dist_list += wps_with_lane_dist.tolist()
wps_with_lane_dist_list = np.array(wps_with_lane_dist_list)
# get neighbour vehicle closest waypoints index
nv_closest_wp_index = nv_closest_wp_index[close_nv_index]
# ego car and neighbour car distance; not very accurate since it uses the closest wp
ego_nv_distance = wps_with_lane_dist_list[nv_closest_wp_index]
# get neighbour vehicle lane index
nv_lane_index = np.array(
[merge_waypoint_paths[i].lane_index for i in nv_closest_wp_index]
)
# get wp path lane index
lane_index_list = [wp_path[0].lane_index for wp_path in wp_paths]
for i, lane_index in enumerate(lane_index_list):
# get same lane vehicle
same_lane_nv_index = np.where(nv_lane_index == lane_index)[0]
if not same_lane_nv_index.size:
continue
same_lane_nv_distance = ego_nv_distance[same_lane_nv_index]
closest_nv_index = same_lane_nv_index[np.argmin(same_lane_nv_distance)]
closest_nv = close_nv[closest_nv_index]
closest_nv_speed = closest_nv.speed
closest_nv_heading = closest_nv.heading
# radians to degrees
closest_nv_heading = heading_to_degree(closest_nv_heading)
closest_nv_pos = closest_nv.position[:2]
bounding_box = closest_nv.bounding_box
# map the heading to make it consistent with the position coordination
map_heading = (closest_nv_heading + 90) % 360
map_heading_radius = np.radians(map_heading)
nv_heading_vec = np.array(
[np.cos(map_heading_radius), np.sin(map_heading_radius)]
)
nv_heading_vertical_vec = np.array([-nv_heading_vec[1], nv_heading_vec[0]])
# get the four edge-center positions (a vehicle may occupy two lanes while changing lane)
# maybe not necessary
closest_nv_front = closest_nv_pos + bounding_box.length * nv_heading_vec
closest_nv_behind = closest_nv_pos - bounding_box.length * nv_heading_vec
closest_nv_left = (
closest_nv_pos + bounding_box.width * nv_heading_vertical_vec
)
closest_nv_right = (
closest_nv_pos - bounding_box.width * nv_heading_vertical_vec
)
edge_points = np.array(
[closest_nv_front, closest_nv_behind, closest_nv_left, closest_nv_right]
)
ep_wp_distance = np.linalg.norm(
edge_points[:, np.newaxis] - wp_poses, axis=2
)
ep_closed_wp_index = np.argmin(ep_wp_distance, axis=1)
ep_closed_wp_lane_index = set(
[merge_waypoint_paths[i].lane_index for i in ep_closed_wp_index]
+ [lane_index]
)
min_distance = np.min(same_lane_nv_distance)
if ego_closest_wp.lane_index in ep_closed_wp_lane_index:
if min_distance < 6:
lane_crash_flag = True
nv_wp_heading = (
closest_nv_heading
- heading_to_degree(
merge_waypoint_paths[
nv_closest_wp_index[closest_nv_index]
].heading
)
) % 360
# find those cars that just merged from an intersection lane into the ego lane
if nv_wp_heading > 30 and nv_wp_heading < 330:
relative_close_nv_heading = closest_nv_heading - heading_to_degree(
ego.heading
)
# map nv speed to ego car heading
map_close_nv_speed = closest_nv_speed * np.cos(
np.radians(relative_close_nv_heading)
)
closest_lane_nv_rel_speed = min(
closest_lane_nv_rel_speed,
(map_close_nv_speed - ego.speed) * 3.6 / 120,
)
else:
closest_lane_nv_rel_speed = min(
closest_lane_nv_rel_speed,
(closest_nv_speed - ego.speed) * 3.6 / 120,
)
relative_speed_m_per_s = ego.speed - closest_nv_speed
if abs(relative_speed_m_per_s) < 1e-5:
relative_speed_m_per_s = 1e-5
ttc = min_distance / relative_speed_m_per_s
# normalized into 10s
ttc /= 10
for j in ep_closed_wp_lane_index:
if min_distance / 100 < lane_dist[j]:
# normalize into 100m
lane_dist[j] = min_distance / 100
if ttc <= 0:
continue
if j == ego_closest_wp.lane_index:
if ttc < 0.1:
lane_crash_flag = True
if ttc < lane_ttc[j]:
lane_ttc[j] = ttc
# get vehicles not in the waypoints lane
if not not_closed_nv_index.size:
pass
else:
filter_nv = [neighborhood_vehicle_states[i] for i in not_closed_nv_index]
nv_pos = np.array([nv.position for nv in filter_nv])[:, :2]
nv_heading = heading_to_degree(np.array([nv.heading for nv in filter_nv]))
nv_speed = np.array([nv.speed for nv in filter_nv])
ego_pos = ego.position[:2]
ego_heading = heading_to_degree(ego.heading)
ego_speed = ego.speed
nv_to_ego_vec = nv_pos - ego_pos
line_heading = (
(np.arctan2(nv_to_ego_vec[:, 1], nv_to_ego_vec[:, 0]) * 180 / np.pi) - 90
) % 360
nv_to_line_heading = (nv_heading - line_heading) % 360
ego_to_line_heading = (ego_heading - line_heading) % 360
# judge whether the two headings will intersect
same_region = (nv_to_line_heading - 180) * (
ego_to_line_heading - 180
) > 0 # both right of line or left of line
ego_to_nv_heading = ego_to_line_heading - nv_to_line_heading
valid_relative_angle = (
(nv_to_line_heading - 180 > 0) & (ego_to_nv_heading > 0)
) | ((nv_to_line_heading - 180 < 0) & (ego_to_nv_heading < 0))
# omit vehicles behind the ego
valid_intersect_angle = np.abs(line_heading - ego_heading) < 90
# omit patient vehicles that stay in the intersection
not_patient_nv = nv_speed > 0.01
# get valid intersection sv
intersect_sv_index = np.where(
same_region & valid_relative_angle & valid_intersect_angle & not_patient_nv
)[0]
if not intersect_sv_index.size:
pass
else:
its_nv_pos = nv_pos[intersect_sv_index][:, :2]
its_nv_speed = nv_speed[intersect_sv_index]
its_nv_to_line_heading = nv_to_line_heading[intersect_sv_index]
line_heading = line_heading[intersect_sv_index]
# ego_to_line_heading = ego_to_line_heading[intersect_sv_index]
# get intersection closest vehicle
ego_nv_distance = np.linalg.norm(its_nv_pos - ego_pos, axis=1)
ego_closest_its_nv_index = np.argmin(ego_nv_distance)
ego_closest_its_nv_distance = ego_nv_distance[ego_closest_its_nv_index]
line_heading = line_heading[ego_closest_its_nv_index]
ego_to_line_heading = (
heading_to_degree(ego_closest_wp.heading) - line_heading
) % 360
ego_closest_its_nv_speed = its_nv_speed[ego_closest_its_nv_index]
its_closest_nv_to_line_heading = its_nv_to_line_heading[
ego_closest_its_nv_index
]
# rel speed along ego-nv line
closest_nv_rel_speed = ego_speed * np.cos(
np.radians(ego_to_line_heading)
) - ego_closest_its_nv_speed * np.cos(
np.radians(its_closest_nv_to_line_heading)
)
closest_nv_rel_speed_m_s = closest_nv_rel_speed
if abs(closest_nv_rel_speed_m_s) < 1e-5:
closest_nv_rel_speed_m_s = 1e-5
ttc = ego_closest_its_nv_distance / closest_nv_rel_speed_m_s
intersection_ttc = min(intersection_ttc, ttc / 10)
intersection_distance = min(
intersection_distance, ego_closest_its_nv_distance / 100
)
# transform relative pos to ego car heading coordinate
rotate_axis_angle = np.radians(90 - ego_to_line_heading)
closest_its_nv_rel_pos = (
np.array(
[
ego_closest_its_nv_distance * np.cos(rotate_axis_angle),
ego_closest_its_nv_distance * np.sin(rotate_axis_angle),
]
)
/ 100
)
closest_its_nv_rel_speed = min(
closest_its_nv_rel_speed, -closest_nv_rel_speed * 3.6 / 120
)
if ttc < 0:
pass
else:
intersection_ttc = min(intersection_ttc, ttc / 10)
intersection_distance = min(
intersection_distance, ego_closest_its_nv_distance / 100
)
# if about to collide (ttc below 2 s) or closer than 6 m, flag a crash so the agent slows down
if ttc < 2 or ego_closest_its_nv_distance < 6:
intersection_crash_flag = True
return (
lane_ttc,
lane_dist,
closest_lane_nv_rel_speed,
intersection_ttc,
intersection_distance,
closest_its_nv_rel_speed,
closest_its_nv_rel_pos,
)
def ego_ttc_calc(ego_lane_index, ttc_by_path, lane_dist):
# transform lane ttc and dist so the ego lane sits at the array center
# slots for lanes that do not exist need to be set to zero
# ego_lane_index -> indices to zero: 4: [0,1], 3: [0], 2: [], 1: [4], 0: [3,4]
zero_index = [[3, 4], [4], [], [0], [0, 1]]
zero_index = zero_index[ego_lane_index]
ttc_by_path[zero_index] = 0
lane_ttc = np.roll(ttc_by_path, 2 - ego_lane_index)
lane_dist[zero_index] = 0
ego_lane_dist = np.roll(lane_dist, 2 - ego_lane_index)
return lane_ttc, ego_lane_dist
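# Worked example (added for clarity): with ego_lane_index = 1 the shift is
# 2 - 1 = 1, so np.roll moves every entry one slot to the right and the ego
# lane's ttc/dist values land at the centre index 2 of the 5-element arrays.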
def get_distance_from_center(env_obs):
ego_state = env_obs.ego_vehicle_state
wp_paths = env_obs.waypoint_paths
closest_wps = [path[0] for path in wp_paths]
# distance of vehicle from center of lane
closest_wp = min(closest_wps, key=lambda wp: wp.dist_to(ego_state.position))
signed_dist_from_center = closest_wp.signed_lateral_error(ego_state.position)
lane_hwidth = closest_wp.lane_width * 0.5
norm_dist_from_center = signed_dist_from_center / lane_hwidth
return norm_dist_from_center
# ==================================================
# obs function
# ==================================================
def observation_adapter(env_obs):
"""
Transform the environment's observation into something more suited for your model
"""
ego_state = env_obs.ego_vehicle_state
wp_paths = env_obs.waypoint_paths
closest_wps = [path[0] for path in wp_paths]
# distance of vehicle from center of lane
closest_wp = min(closest_wps, key=lambda wp: wp.dist_to(ego_state.position))
signed_dist_from_center = closest_wp.signed_lateral_error(ego_state.position)
lane_hwidth = closest_wp.lane_width * 0.5
norm_dist_from_center = signed_dist_from_center / lane_hwidth
# wp heading errors in current lane in front of vehicle
indices = np.array([0, 1, 2, 3, 5, 8, 13, 21, 34, 50])
# handle the case where there are not enough waypoints: assume the remaining headings equal the last valid one.
wps_len = [len(path) for path in wp_paths]
max_len_lane_index = np.argmax(wps_len)
max_len = np.max(wps_len)
last_wp_index = 0
for i, wp_index in enumerate(indices):
if wp_index > max_len - 1:
indices[i:] = last_wp_index
break
last_wp_index = wp_index
sample_wp_path = [wp_paths[max_len_lane_index][i] for i in indices]
heading_errors = [
math.sin(wp.relative_heading(ego_state.heading)) for wp in sample_wp_path
]
ego_lane_index = closest_wp.lane_index
(
lane_ttc,
lane_dist,
closest_lane_nv_rel_speed,
intersection_ttc,
intersection_distance,
closest_its_nv_rel_speed,
closest_its_nv_rel_pos,
) = ttc_by_path(
ego_state, wp_paths, env_obs.neighborhood_vehicle_states, closest_wp
)
lane_ttc, lane_dist = ego_ttc_calc(ego_lane_index, lane_ttc, lane_dist)
return {
"distance_from_center": np.array([norm_dist_from_center]),
"heading_errors": np.array(heading_errors),
"speed": np.array([ego_state.speed * 3.6 / 120]),
"steering": np.array([ego_state.steering / (0.5 * math.pi)]),
"lane_ttc": np.array(lane_ttc),
"lane_dist": np.array(lane_dist),
"closest_lane_nv_rel_speed": np.array([closest_lane_nv_rel_speed]),
"intersection_ttc": np.array([intersection_ttc]),
"intersection_distance": np.array([intersection_distance]),
"closest_its_nv_rel_speed": np.array([closest_its_nv_rel_speed]),
"closest_its_nv_rel_pos": np.array(closest_its_nv_rel_pos),
}
# ==================================================
# reward function
# ==================================================
def reward_adapter(env_obs, env_reward):
"""
Here you can perform your reward shaping.
The default reward provided by the environment is the increment in
distance travelled. Your model will likely require a more
sophisticated reward function
"""
global lane_crash_flag
distance_from_center = get_distance_from_center(env_obs)
center_penalty = -np.abs(distance_from_center)
# penalise close proximity to lane cars
if lane_crash_flag:
crash_penalty = -5
else:
crash_penalty = 0
# penalise close proximity to intersection cars
if intersection_crash_flag:
crash_penalty -= 5
total_reward = np.sum([1.0 * env_reward])
total_penalty = np.sum([0.1 * center_penalty, 1 * crash_penalty])
return (total_reward + total_penalty) / 200.0
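# Worked example (added for clarity; numbers are illustrative): with
# env_reward = 2.0, a normalized centre offset of 0.3 and only the lane crash
# flag set, the shaped reward is (2.0 + (0.1 * -0.3 + 1 * -5)) / 200 ≈ -0.0152.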
def action_adapter(model_action):
assert model_action in [0, 1, 2, 3]
return ACTION_CHOICE[model_action]
def info_adapter(reward, info):
return info
agent_interface = AgentInterface(
max_episode_steps=None,
waypoints=True,
# neighborhood < 60m
neighborhood_vehicles=NeighborhoodVehicles(radius=60),
# OGM within 64 * 0.25 = 16
ogm=OGM(64, 64, 0.25),
action=ActionSpaceType.Lane,
)
agent_spec = AgentSpec(
interface=agent_interface,
observation_adapter=observation_adapter,
reward_adapter=reward_adapter,
action_adapter=action_adapter,
info_adapter=info_adapter,
)
| 38.135714 | 106 | 0.631111 | 2,933 | 21,356 | 4.232526 | 0.113877 | 0.029 | 0.029966 | 0.020541 | 0.45497 | 0.34308 | 0.265748 | 0.215483 | 0.185194 | 0.15112 | 0 | 0.022196 | 0.274302 | 21,356 | 559 | 107 | 38.203936 | 0.77881 | 0.176484 | 0 | 0.193548 | 0 | 0 | 0.019819 | 0.01054 | 0 | 0 | 0 | 0 | 0.002688 | 1 | 0.024194 | false | 0.010753 | 0.018817 | 0.005376 | 0.069892 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c1c78f65b55a91c212dbfb6fa80d719220b3e5d | 1,156 | py | Python | tests/optimizers/test_constant_optimizer.py | kevinconway/pycc | 69ba99d78a859ba61c2ce7ee35766e21c789db21 | [
"Apache-2.0"
] | 17 | 2015-04-01T13:51:25.000Z | 2021-12-15T21:07:09.000Z | tests/optimizers/test_constant_optimizer.py | kevinconway/pycc | 69ba99d78a859ba61c2ce7ee35766e21c789db21 | [
"Apache-2.0"
] | 3 | 2018-09-05T04:34:24.000Z | 2019-05-27T00:44:33.000Z | tests/optimizers/test_constant_optimizer.py | kevinconway/pycc | 69ba99d78a859ba61c2ce7ee35766e21c789db21 | [
"Apache-2.0"
] | 5 | 2018-05-19T23:50:44.000Z | 2021-08-05T08:39:57.000Z | """Test suite for optimizers.constant."""
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import ast
import pytest
from pycc.asttools import parse
from pycc.optimizers import constant
source = """
ONE = 1
TWO = 2
THREE = ONE + TWO
FOUR = THREE + ONE
FIVE = THREE + TWO
def return_const():
return FOUR
def return_var():
return FIVE
FIVE = FIVE + ONE
FIVE -= ONE
"""
@pytest.fixture
def node():
"""Get as AST node from the source."""
return parse.parse(source)
def test_constant_inliner(node):
"""Test that constant values are inlined."""
constant.optimize(node)
# Check assignment values using constants.
assert node.body[2].value.n == 3
assert node.body[3].value.n == 4
assert node.body[4].value.n == 5
# Check return val of const function.
assert isinstance(node.body[5].body[0].value, ast.Num)
assert node.body[5].body[0].value.n == 4
# Check return val of var function.
assert isinstance(node.body[6].body[0].value, ast.Name)
assert node.body[6].body[0].value.id == 'FIVE'
| 21.018182 | 59 | 0.694637 | 172 | 1,156 | 4.534884 | 0.354651 | 0.071795 | 0.089744 | 0.041026 | 0.158974 | 0.097436 | 0 | 0 | 0 | 0 | 0 | 0.018143 | 0.189446 | 1,156 | 54 | 60 | 21.407407 | 0.814301 | 0.189446 | 0 | 0 | 0 | 0 | 0.195865 | 0 | 0 | 0 | 0 | 0 | 0.212121 | 1 | 0.060606 | false | 0 | 0.242424 | 0 | 0.393939 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c1ea40702594b065154a2ff625e0441b3a8faf9 | 2,079 | py | Python | vnc_viewer/engine/service/proxy.py | alsbi/vnc | 1dd89ffed17a0f5a47cf08516b5757b4481ae6b4 | [
"MIT"
] | null | null | null | vnc_viewer/engine/service/proxy.py | alsbi/vnc | 1dd89ffed17a0f5a47cf08516b5757b4481ae6b4 | [
"MIT"
] | null | null | null | vnc_viewer/engine/service/proxy.py | alsbi/vnc | 1dd89ffed17a0f5a47cf08516b5757b4481ae6b4 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
__author__ = 'alsbi'
from multiprocessing import Process
from websockify import WebSocketProxy
class Proxy():
port = {}
@classmethod
def get_port(cls,
uuid):
if uuid in Proxy.port:
return Proxy.port[uuid]
port = min(set(range(50000, 63000)) - set(Proxy.port.values()))  # lowest unused port; set iteration order is not guaranteed, so take min explicitly
Proxy.port[uuid] = port
return port
def __init__(self,
target_host=None,
target_port=None,
listen_host=None,
uuid=None):
self.target_host = target_host
self.target_port = target_port
self.listen_host = listen_host
self.listen_port = Proxy.get_port(uuid = uuid)
self.uuid = uuid
def start_proxy(self):
def run():
cert = '/home/vnc/vnc_service/bin/server.crt'
key = '/home/vnc/vnc_service/bin/server.key'
params = {'ssl_only': True,
'cert': cert,
'key': key,
'target_port': self.target_port}
server = WebSocketProxy(**params)
server.start_server()
proc = Process(target = run)
proc.start()
return proc
class ProxyManager(object):
def __init__(self,
listen_host=None,
target_host=None):
self.listen_host = listen_host
self.target_host = target_host
self.list_proxy = {}
def create(self,
uuid=None,
port=None):
if not self.list_proxy.get(uuid):
proxy = Proxy(target_port = port,
target_host = self.target_host,
listen_host = self.listen_host,
uuid = uuid)
self.list_proxy[uuid] = proxy
proxy.start_proxy()
return proxy
else:
return self.list_proxy[uuid]
def delete(self, uuid):
if self.list_proxy.get(uuid):
del self.list_proxy[uuid]
| 28.479452 | 94 | 0.526215 | 228 | 2,079 | 4.583333 | 0.254386 | 0.076555 | 0.074641 | 0.051675 | 0.220096 | 0.15311 | 0 | 0 | 0 | 0 | 0 | 0.009281 | 0.378066 | 2,079 | 72 | 95 | 28.875 | 0.798917 | 0.010101 | 0 | 0.135593 | 0 | 0 | 0.050097 | 0.035019 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118644 | false | 0 | 0.033898 | 0 | 0.288136 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c1f238b6bcc670b2b5ec473a5f107f816b2ed4c | 318 | py | Python | massthings/__init__.py | FPVogel/Fixator10-Cogs | 002a90e06952b7bf7a0ffdbd93c9d423f238f124 | [
"MIT"
] | 76 | 2018-07-21T21:09:00.000Z | 2022-03-17T06:56:03.000Z | massthings/__init__.py | FPVogel/Fixator10-Cogs | 002a90e06952b7bf7a0ffdbd93c9d423f238f124 | [
"MIT"
] | 59 | 2019-01-23T08:13:13.000Z | 2022-03-13T16:39:05.000Z | massthings/__init__.py | FPVogel/Fixator10-Cogs | 002a90e06952b7bf7a0ffdbd93c9d423f238f124 | [
"MIT"
] | 63 | 2019-03-06T01:43:45.000Z | 2022-02-14T20:16:19.000Z | from .massthings import MassThings
__red_end_user_data_statement__ = (
"This cog does not persistently store data or metadata about users."
# "<s>If you are using this cog, user data storage will probably be much less significant thing then API abuse</s>"
)
def setup(bot):
bot.add_cog(MassThings(bot))
| 28.909091 | 119 | 0.745283 | 49 | 318 | 4.653061 | 0.77551 | 0.070175 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179245 | 318 | 10 | 120 | 31.8 | 0.873563 | 0.355346 | 0 | 0 | 0 | 0 | 0.325123 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c200dbfad013cbf8dd5260a6b61041a4ae56d9e | 1,528 | py | Python | observatory/middleware/CssSmasher.py | natestedman/Observatory | 6e810b22d844416b2a3057e99ef23baa0d122ab4 | [
"0BSD"
] | 1 | 2015-01-16T04:17:54.000Z | 2015-01-16T04:17:54.000Z | observatory/middleware/CssSmasher.py | natestedman/Observatory | 6e810b22d844416b2a3057e99ef23baa0d122ab4 | [
"0BSD"
] | null | null | null | observatory/middleware/CssSmasher.py | natestedman/Observatory | 6e810b22d844416b2a3057e99ef23baa0d122ab4 | [
"0BSD"
] | null | null | null | # Copyright (c) 2010, individual contributors (see AUTHORS file)
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import os
import re
from django.core.exceptions import MiddlewareNotUsed
from observatory.settings import MEDIA_ROOT, JS_FILES, CSS_FILES
class CssSmasher(object):
def __init__(self):
cssdir = os.path.join(MEDIA_ROOT, 'css')
with open(os.path.join(MEDIA_ROOT, 'style.css'), 'w') as stylecss:
for file in os.listdir(cssdir):
with open(os.path.join(cssdir, file), 'r') as cssfile:
stylecss.write(re.sub(r"\s+", ' ', cssfile.read()))
with open(os.path.join(MEDIA_ROOT, "observatory.js"), 'w') as js:
for jsfile in [os.path.join(MEDIA_ROOT, path) for path in JS_FILES]:
with open(jsfile, "r") as jsdata:
js.write(re.sub(r"\s+", " ", jsdata.read()))
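    # Raising MiddlewareNotUsed below is a Django idiom: Django removes this
    # middleware from the chain after __init__, so the one-time CSS/JS
    # smashing above runs only at startup.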
raise MiddlewareNotUsed
| 44.941176 | 74 | 0.725785 | 231 | 1,528 | 4.748918 | 0.47619 | 0.041021 | 0.045579 | 0.054695 | 0.122151 | 0.049225 | 0.049225 | 0 | 0 | 0 | 0 | 0.00319 | 0.179319 | 1,528 | 33 | 75 | 46.30303 | 0.871611 | 0.496073 | 0 | 0 | 0 | 0 | 0.050265 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.25 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c21f13b79aff05e74d7a6471705742eb1ef0d68 | 2,101 | py | Python | okta/models/access_policy_rule_application_sign_on.py | ander501/okta-sdk-python | 0927dc6a2f6d5ebf7cd1ea806d81065094c92471 | [
"Apache-2.0"
] | null | null | null | okta/models/access_policy_rule_application_sign_on.py | ander501/okta-sdk-python | 0927dc6a2f6d5ebf7cd1ea806d81065094c92471 | [
"Apache-2.0"
] | null | null | null | okta/models/access_policy_rule_application_sign_on.py | ander501/okta-sdk-python | 0927dc6a2f6d5ebf7cd1ea806d81065094c92471 | [
"Apache-2.0"
] | null | null | null | # flake8: noqa
"""
Copyright 2021 - Present Okta, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# AUTO-GENERATED! DO NOT EDIT FILE DIRECTLY
# SEE CONTRIBUTOR DOCUMENTATION
from okta.okta_object import OktaObject
from okta.models import verification_method\
as verification_method
class AccessPolicyRuleApplicationSignOn(
OktaObject
):
"""
A class for AccessPolicyRuleApplicationSignOn objects.
"""
def __init__(self, config=None):
super().__init__(config)
if config:
self.access = config["access"]\
if "access" in config else None
if "verificationMethod" in config:
if isinstance(config["verificationMethod"],
verification_method.VerificationMethod):
self.verification_method = config["verificationMethod"]
elif config["verificationMethod"] is not None:
self.verification_method = verification_method.VerificationMethod(
config["verificationMethod"]
)
else:
self.verification_method = None
else:
self.verification_method = None
else:
self.access = None
self.verification_method = None
def request_format(self):
parent_req_format = super().request_format()
current_obj_format = {
"access": self.access,
"verificationMethod": self.verification_method
}
parent_req_format.update(current_obj_format)
return parent_req_format
| 33.887097 | 86 | 0.654926 | 225 | 2,101 | 5.977778 | 0.457778 | 0.133829 | 0.098141 | 0.057993 | 0.050558 | 0.050558 | 0.050558 | 0 | 0 | 0 | 0 | 0.005941 | 0.278915 | 2,101 | 61 | 87 | 34.442623 | 0.881848 | 0.331747 | 0 | 0.176471 | 0 | 0 | 0.091371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.058824 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c24da598237de3c027b9acde2ff873da368e571 | 3,203 | py | Python | bot/player_commands/missing.py | XIIIsiren/CommunityAPI | e665638d2800b71b3d32d49c6897901f4c49a9c5 | [
"Apache-2.0"
] | null | null | null | bot/player_commands/missing.py | XIIIsiren/CommunityAPI | e665638d2800b71b3d32d49c6897901f4c49a9c5 | [
"Apache-2.0"
] | null | null | null | bot/player_commands/missing.py | XIIIsiren/CommunityAPI | e665638d2800b71b3d32d49c6897901f4c49a9c5 | [
"Apache-2.0"
] | null | null | null | import discord
from discord.ext import commands
import json
from utils import error, RARITY_DICT
from parse_profile import get_profile_data
from extract_ids import extract_internal_names
# Create the master list!
from text_files.accessory_list import talisman_upgrades
# Get a list of all accessories
ACCESSORIES = []
with open("text_files/MASTER_ITEM_DICT.json", "r", encoding="utf-8") as file:
item_dict = json.load(file)
for item in item_dict:
if item_dict[item].get("rarity", False) and item_dict[item]["rarity"] != "UNKNOWN":
ACCESSORIES.append(item_dict[item])
# Now remove all the low tier ones
MASTER_ACCESSORIES = []
for accessory in ACCESSORIES:
if accessory["internal_name"] not in talisman_upgrades.keys():
MASTER_ACCESSORIES.append(accessory)
class missing_cog(commands.Cog):
def __init__(self, bot):
self.client = bot
@commands.command(aliases=['missing_accessories', 'accessories', 'miss', 'm'])
async def missing(self, ctx, username=None):
player_data = await get_profile_data(ctx, username)
if player_data is None:
return
username = player_data["username"]
accessory_bag = player_data.get("talisman_bag", None)
inv_content = player_data.get("inv_contents", {"data": []})
if not accessory_bag:
return await error(ctx, "Error, could not find this person's accessory bag", "Do they have their API disabled for this command?")
accessory_bag = extract_internal_names(accessory_bag["data"])
inventory = extract_internal_names(inv_content["data"])
missing = [x for x in MASTER_ACCESSORIES if x["internal_name"] not in accessory_bag+inventory]
if not missing:
return await error(ctx, f"Completion!", f"{username} already has all accessories!")
sorted_accessories = sorted(missing, key=lambda x: x["name"])[:42]
extra = "" if len(missing) <= 36 else f", showing the first {len(sorted_accessories)}"
embed = discord.Embed(title=f"Missing {len(missing)} accessories for {username}{extra}", colour=0x3498DB)
def make_embed(embed, acc_list):
text = ""
for item in acc_list:
internal_name, name, rarity, wiki_link = item.values()
wiki_link = "<Doesn't exist>" if not wiki_link else f"[wiki]({wiki_link})"
text += f"{RARITY_DICT[rarity]} {name}\nLink: {wiki_link}\n"
embed.add_field(name=f"{acc_list[0]['name'][0]}-{acc_list[-1]['name'][0]}", value=text, inline=True)
if len(sorted_accessories) < 6: # For people with only a few missing
make_embed(embed, sorted_accessories)
else:
list_length = int(len(sorted_accessories)/6)
for row in range(6):
row_accessories = sorted_accessories[row*list_length:(row+1)*list_length] # Get the first group out of 6
make_embed(embed, row_accessories)
embed.set_footer(text=f"Command executed by {ctx.author.display_name} | Community Bot. By the community, for the community.")
await ctx.send(embed=embed)
| 42.144737 | 141 | 0.658445 | 427 | 3,203 | 4.75644 | 0.34192 | 0.023634 | 0.029542 | 0.016741 | 0.023634 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007727 | 0.232282 | 3,203 | 75 | 142 | 42.706667 | 0.818219 | 0.046831 | 0 | 0 | 0 | 0.018519 | 0.21241 | 0.05023 | 0 | 0 | 0.002626 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.12963 | 0 | 0.240741 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c256fbd08baa69454ac110542d3949899a3e62f | 725 | py | Python | extract-faces.py | rgooding/face-recogniser | 3c14bf2faf3cd815c43a537e8a6d86258e5c52c7 | [
"MIT"
] | null | null | null | extract-faces.py | rgooding/face-recogniser | 3c14bf2faf3cd815c43a537e8a6d86258e5c52c7 | [
"MIT"
] | null | null | null | extract-faces.py | rgooding/face-recogniser | 3c14bf2faf3cd815c43a537e8a6d86258e5c52c7 | [
"MIT"
] | null | null | null | import os
import cv2
from lib import lib
images_dir = "images"
faces_dir = "images/faces"
def main():
files = os.listdir(images_dir)
for file_name in files:
full_path = images_dir + "/" + file_name
if not os.path.isfile(full_path):
continue
print("Processing " + full_path)
img = cv2.imread(full_path, cv2.IMREAD_UNCHANGED)
_, _, faces = lib.detect_faces(img, return_colour=True)
n = 1
        for face in faces:
            (x, y, w, h) = face
            imgfile = "%s/%s__%d.jpg" % (faces_dir, file_name, n)
            # crop rows by height and columns by width (they were swapped),
            # and bump n so multiple faces in one image don't overwrite each other
            cv2.imwrite(imgfile, img[y: y + h, x: x + w])
            n += 1
main()
| 23.387097 | 71 | 0.56 | 100 | 725 | 3.83 | 0.43 | 0.083551 | 0.086162 | 0.083551 | 0.088773 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01 | 0.310345 | 725 | 30 | 72 | 24.166667 | 0.756 | 0.078621 | 0 | 0 | 0 | 0 | 0.064565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.15 | 0 | 0.2 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c271332264e94d543256ede8fdcc6d30fbdd9a2 | 5,142 | py | Python | models/train_classifier.py | julie-data/disaster-responses-pipeline | 3539747e18f98c301ae9f8e2e4661c985e29dfbb | [
"FTL"
] | null | null | null | models/train_classifier.py | julie-data/disaster-responses-pipeline | 3539747e18f98c301ae9f8e2e4661c985e29dfbb | [
"FTL"
] | null | null | null | models/train_classifier.py | julie-data/disaster-responses-pipeline | 3539747e18f98c301ae9f8e2e4661c985e29dfbb | [
"FTL"
] | null | null | null | import sys
import pandas as pd
from sqlalchemy import create_engine
import re
import numpy as np
import nltk
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('words')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
import pickle
def load_data(database_filepath):
'''
INPUT
database_filepath - the path where the database has been created
OUTPUT
X - the messages
Y - the categories (= labels)
Y.columns - the names of the categories
    This function loads the data and prepares it as X and Y arrays to be used by an ML model.
'''
# Get data
engine = create_engine('sqlite:///'+database_filepath)
df = pd.read_sql_table('DisasterMessages',engine)
# Split features from labels
X = df['message']
Y = df.drop(['id','message','original','genre'],axis=1)
return X, Y, Y.columns
def tokenize(text):
'''
INPUT
text - text to tokenize
OUTPUT
words_lemmed - tokenized text
    This function transforms a text into tokens that can be read by an ML model:
1. Set to lower case
2. Remove punctuation
3. Split the text into words
4. Remove English stop words
5. Lemmatize words (reduce them according to the dictionary)
'''
# Normalize
text_normalized = re.sub(r"[^a-zA-Z0-9]", " ", text.lower())
# Tokenize text
text_tokenized = word_tokenize(text_normalized)
# Remove stop words
words_no_stopwords = [w for w in text_tokenized if w not in stopwords.words("english")]
# Lemmatize
words_lemmed = [WordNetLemmatizer().lemmatize(w) for w in words_no_stopwords]
return words_lemmed
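# Hedged usage sketch (illustrative only; the exact tokens depend on the NLTK
# data installed). A made-up message runs through the five steps above roughly as:
#   tokenize("Weather update - a cold front from Cuba!")
#   -> ['weather', 'update', 'cold', 'front', 'cuba']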
def build_model():
'''
INPUT
OUTPUT
cv - final model
This function creates a pipeline with a model to run on the data in order to categorize the messages automatically.
'''
# Build pipeline
randomforest = RandomForestClassifier()
pipeline = Pipeline([
('vect',CountVectorizer(tokenizer=tokenize)),
('tfidf',TfidfTransformer()),
('clf',MultiOutputClassifier(randomforest))
])
# Get best parameters with gridsearch
parameters = {
'clf__estimator__max_features': ['auto','log2'],
'clf__estimator__min_samples_leaf': [1,2]
}
cv = GridSearchCV(pipeline,parameters)
return cv
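# Hedged usage sketch (assumes X_train/Y_train produced by load_data() and
# train_test_split, as in main() below):
#   model = build_model()
#   model.fit(X_train, Y_train)   # GridSearchCV fits the whole pipeline
#   print(model.best_params_)     # best max_features / min_samples_leaf combination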
def evaluate_model(model, X_test, Y_test, category_names):
'''
INPUT
model - model we want to evaluate (result from build_model() function)
X_test - test split of X
Y_test - test split of Y
category_names - the names of the possible categories
OUTPUT
This function scores the model for each category separately.
'''
# Get predictions
y_pred = model.predict(X_test)
# Get scores for each category
for i in range(len(category_names)):
print(category_names[i])
print(classification_report(Y_test.iloc[:,i], y_pred[:,i]))
def save_model(model, model_filepath):
'''
INPUT
model - model we want to evaluate (result from build_model() function)
model_filepath - where to save the pickle file
OUTPUT
This function outputs the model in a pickle file.
'''
pkl_filename = model_filepath
with open(pkl_filename, 'wb') as file:
pickle.dump(model, file)
def main():
if len(sys.argv) == 3:
database_filepath, model_filepath = sys.argv[1:]
print('Loading data...\n DATABASE: {}'.format(database_filepath))
X, Y, category_names = load_data(database_filepath)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print('Building model...')
model = build_model()
print('Training model...')
model.fit(X_train, Y_train)
print('Evaluating model...')
evaluate_model(model, X_test, Y_test, category_names)
print('Saving model...\n MODEL: {}'.format(model_filepath))
save_model(model, model_filepath)
print('Trained model saved!')
else:
print('Please provide the filepath of the disaster messages database '\
'as the first argument and the filepath of the pickle file to '\
'save the model to as the second argument. \n\nExample: python '\
'train_classifier.py ../data/DisasterResponse.db classifier.pkl')
if __name__ == '__main__':
main() | 27.945652 | 119 | 0.669584 | 656 | 5,142 | 5.096037 | 0.332317 | 0.029913 | 0.005384 | 0.014358 | 0.094526 | 0.059827 | 0.059827 | 0.059827 | 0.059827 | 0.035298 | 0 | 0.003855 | 0.243291 | 5,142 | 184 | 120 | 27.945652 | 0.855307 | 0.259627 | 0 | 0 | 0 | 0 | 0.172781 | 0.031644 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.234568 | 0 | 0.345679 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c275f2ec32045f61bf64ac2e62550759a730fd3 | 976 | py | Python | plot_normballs.py | bkoyuncu/notes | 0e660f46b7d17fdfddc2cad1bb60dcf847f5d1e4 | [
"MIT"
] | 191 | 2016-01-21T19:44:23.000Z | 2022-03-25T20:50:50.000Z | plot_normballs.py | onurboyar/notes | 2ec14820af044c2cfbc99bc989338346572a5e24 | [
"MIT"
] | 2 | 2018-02-18T03:41:04.000Z | 2018-11-21T11:08:49.000Z | plot_normballs.py | onurboyar/notes | 2ec14820af044c2cfbc99bc989338346572a5e24 | [
"MIT"
] | 138 | 2015-10-04T21:57:21.000Z | 2021-06-15T19:35:55.000Z |
import numpy as np
import matplotlib as mpl
import matplotlib.pylab as plt
def norm_ball(p):
step = np.pi/128
THETA = np.arange(0, 2*np.pi+step, step)
X = np.mat(np.zeros((2,len(THETA))))
for i, theta in enumerate(THETA):
x = (np.cos(theta), np.sin(theta))
a = (1/(np.abs(x[0])**p + np.abs(x[1])**p ))**(1/p)
X[:, i] = a*np.mat(x).T
return X
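# Why the scaling works: for a point x on the unit circle, the factor
# a = (|x0|**p + |x1|**p)**(-1/p) satisfies
# (|a*x0|**p + |a*x1|**p)**(1/p) = a * (|x0|**p + |x1|**p)**(1/p) = 1,
# so every column of X lies exactly on the boundary of the p-"norm" ball
# (for p < 1 this is only a quasi-norm, hence the non-convex shapes).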
P = np.arange(0.25,5.25,0.25)
#print(X)
fig = plt.figure(figsize=(10,10))
NumPlotRows = 5
NumPlotCols = 4
for i,p in enumerate(P):
X = norm_ball(p=p)
plt.subplot(NumPlotRows, NumPlotCols, i+1)
plt.plot(X[0,:].T, X[1,:].T,'-',clip_on=False)
ax = fig.gca()
ax.set_xlim((-2,2))
ax.set_ylim((-2,2))
ax.axis('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
#plt.plot(X[0,:].tolist())
for loc, spine in ax.spines.items():
spine.set_color('none') # don't draw spine
plt.title(p)
plt.show() | 22.697674 | 59 | 0.57582 | 176 | 976 | 3.136364 | 0.403409 | 0.01087 | 0.032609 | 0.032609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043364 | 0.220287 | 976 | 43 | 60 | 22.697674 | 0.681997 | 0.05123 | 0 | 0 | 0 | 0 | 0.010834 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.1 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c29a79bfc65dc7f4164e897bed8d0b6f6931bee | 6,281 | py | Python | reviewboard/manage.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | reviewboard/manage.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | reviewboard/manage.py | BarracudaPff/code-golf-data-pythpn | 42e8858c2ebc6a061012bcadb167d29cebb85c5e | [
"MIT"
] | null | null | null | def check_dependencies(settings):
pyver = sys.version_info[:2]
if pyver < PYTHON_2_MIN_VERSION or (3, 0) <= pyver < PYTHON_3_MIN_VERSION:
dependency_error("Python %s or %s+ is required." % (PYTHON_2_MIN_VERSION_STR, PYTHON_3_MIN_VERSION_STR))
if not is_exe_in_path("node"):
dependency_error("node (from NodeJS) was not found. It must be " "installed from your package manager or from " "https://nodejs.org/")
if not os.path.exists("node_modules"):
dependency_error("The node_modules directory is missing. Please " "re-run `./setup.py develop` to install all NodeJS " "dependencies.")
for key in ("UGLIFYJS_BINARY", "LESS_BINARY", "BABEL_BINARY"):
path = settings.PIPELINE[key]
if not os.path.exists(path):
dependency_error("%s is missing. Please re-run `./setup.py " "develop` to install all NodeJS dependencies." % os.path.abspath(path))
if not has_module("pysvn") and not has_module("subvertpy"):
dependency_warning("Neither the subvertpy nor pysvn Python modules " "were found. Subversion integration will not work. " "For pysvn, see your package manager for the " "module or download from " "http://pysvn.tigris.org/project_downloads.html. " "For subvertpy, run `pip install subvertpy`. We " "recommend pysvn for better compatibility.")
if has_module("P4"):
try:
subprocess.call(["p4", "-h"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
except OSError:
dependency_warning("The p4 command not found. Perforce " "integration will not work. To enable support, " "download p4 from " "http://cdist2.perforce.com/perforce/ and " "place it in your PATH.")
else:
dependency_warning("The p4python module was not found. Perforce " "integration will not work. To enable support, " "run `pip install p4python`")
if not is_exe_in_path("hg"):
dependency_warning("The hg command was not found. Mercurial " "integration will not work. To enable support, " "run `pip install mercurial`")
if not is_exe_in_path("bzr"):
dependency_warning("The bzr command was not found. Bazaar integration " "will not work. To enable support, run " "`pip install bzr`")
if not is_exe_in_path("cvs"):
dependency_warning("The cvs command was not found. CVS integration " "will not work. To enable support, install cvs " "from your package manager or from " "http://www.nongnu.org/cvs/")
if not is_exe_in_path("git"):
dependency_warning("The git command not found. Git integration " "will not work. To enable support, install git " "from your package manager or from " "https://git-scm.com/downloads")
fail_if_missing_dependencies()
def upgrade_database():
"""Perform an upgrade of the database.
This will prompt the user for confirmation, with instructions on what
will happen. If the database is using SQLite3, it will be backed up
automatically, making a copy that contains the current timestamp.
Otherwise, the user will be prompted to back it up instead.
Returns:
bool:
``True`` if the user has confirmed the upgrade. ``False`` if they
have not.
"""
database = settings.DATABASES["default"]
db_name = database["NAME"]
backup_db_name = None
if "--no-backup" not in sys.argv and database["ENGINE"] == "django.db.backends.sqlite3" and os.path.exists(db_name):
backup_db_name = "%s.%s" % (db_name, datetime.now().strftime("%Y%m%d.%H%M%S"))
try:
shutil.copy(db_name, backup_db_name)
except Exception as e:
sys.stderr.write("Unable to make a backup of your database at " "%s: %s\n\n" % (db_name, e))
backup_db_name = None
if "--noinput" in sys.argv:
if backup_db_name:
print("Your existing database has been backed up to\n" "%s\n" % backup_db_name)
perform_upgrade = True
else:
message = "You are about to upgrade your database, which cannot be undone." "\n\n"
if backup_db_name:
message += "Your existing database has been backed up to\n" "%s" % backup_db_name
else:
message += "PLEASE MAKE A BACKUP BEFORE YOU CONTINUE!"
message += '\n\nType "yes" to continue or "no" to cancel: '
perform_upgrade = input(message).lower() in ("yes", "y")
print("\n")
if perform_upgrade:
print("===========================================================\n" 'Performing the database upgrade. Any "unapplied evolutions"\n' "will be handled automatically.\n" "===========================================================\n")
commands = [["evolve", "--noinput", "--execute"]]
for command in commands:
execute_from_command_line([sys.argv[0]] + command)
else:
print("The upgrade has been cancelled.\n")
sys.exit(1)
def main(settings, in_subprocess):
if dirname(settings.__file__) == os.getcwd():
sys.stderr.write("manage.py should not be run from within the " "'reviewboard' Python package directory.\n")
sys.stderr.write("Make sure to run this from the top of the " "Review Board source tree.\n")
sys.exit(1)
try:
command_name = sys.argv[1]
except IndexError:
command_name = None
if command_name in ("runserver", "test"):
if settings.DEBUG and not in_subprocess:
sys.stderr.write("Running dependency checks (set DEBUG=False " "to turn this off)...\n")
check_dependencies(settings)
if command_name == "runserver":
simple_server.ServerHandler.http_version = "1.1"
elif command_name not in ("evolve", "syncdb", "migrate"):
initialize()
if command_name == "upgrade":
upgrade_database()
return
execute_from_command_line(sys.argv)
def run():
sys.path.insert(0, dirname(dirname(abspath(__file__))))
try:
sys.path.remove(dirname(abspath(__file__)))
except ValueError:
pass
if str("DJANGO_SETTINGS_MODULE") not in os.environ:
in_subprocess = False
os.environ[str("DJANGO_SETTINGS_MODULE")] = str("reviewboard.settings")
else:
in_subprocess = True
if len(sys.argv) > 1 and sys.argv[1] == "test":
os.environ[str("RB_RUNNING_TESTS")] = str("1")
try:
        # assumed restored import: main() below needs `settings`, and the
        # ImportError handler only makes sense if this import can fail
        from reviewboard import settings
except ImportError as e:
sys.stderr.write("Error: Can't find the file 'settings.py' in the " "directory containing %r. It appears you've " "customized things.\n" "You'll have to run django-admin.py, passing it your " "settings module.\n" "(If the file settings.py does indeed exist, it's " "causing an ImportError somehow.)\n" % __file__)
sys.stderr.write("The error we got was: %s\n" % e)
sys.exit(1)
main(settings, in_subprocess)
if __name__ == "__main__":
run() | 54.617391 | 343 | 0.704824 | 940 | 6,281 | 4.571277 | 0.285106 | 0.018152 | 0.022341 | 0.035839 | 0.218059 | 0.173377 | 0.134745 | 0.119386 | 0.098906 | 0.098906 | 0 | 0.005066 | 0.151409 | 6,281 | 115 | 344 | 54.617391 | 0.801126 | 0.064003 | 0 | 0.180952 | 0 | 0 | 0.485695 | 0.032894 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038095 | false | 0.028571 | 0.019048 | 0 | 0.066667 | 0.038095 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c2d351ff240445b749489807879c9a82ce4996e | 1,528 | py | Python | train/train.py | sahandilshan/PruneLM | 318c90af6d2f0802ddcef39a0ade62f41926fda0 | [
"Apache-2.0"
] | null | null | null | train/train.py | sahandilshan/PruneLM | 318c90af6d2f0802ddcef39a0ade62f41926fda0 | [
"Apache-2.0"
] | null | null | null | train/train.py | sahandilshan/PruneLM | 318c90af6d2f0802ddcef39a0ade62f41926fda0 | [
"Apache-2.0"
] | null | null | null | from torch import nn
from tqdm import tqdm
import math
from train.utils import get_batch, repackage_hidden
def train(model, criterion, optimizer, num_tokens, train_data, epoch_no, epochs,
batch_size=256, sequence_length=6):
# Turn on training mode which enables dropout.
assert num_tokens is not None
model.train()
total_loss = 0.
loop = tqdm(enumerate(range(0, train_data.size(0) - 1, sequence_length)), total=len(train_data) // sequence_length,
position=0, leave=True)
counter = 0
for batch, i in loop:
data, targets = get_batch(train_data, i)
hidden = model.init_hidden(batch_size)
# Starting each batch, we detach the hidden state from how it was previously produced.
# If we didn't, the model would try backpropagating all the way to start of the dataset.
model.zero_grad()
hidden = repackage_hidden(hidden)
# print('data:', data.shape)
# print('target:', targets.shape)
output, hidden = model(data, hidden)
loss = criterion(output.view(-1, num_tokens), targets)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(model.parameters(), 3)
optimizer.step()
total_loss += loss.item()
counter += 1
loop.set_description(f"Epoch: [{epoch_no}/{epochs}]")
loop.set_postfix(loss=loss.item(), ppl=math.exp(loss.item()))
return total_loss / len(train_data)
| 39.179487 | 119 | 0.662304 | 210 | 1,528 | 4.671429 | 0.504762 | 0.045872 | 0.026504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011121 | 0.234948 | 1,528 | 38 | 120 | 40.210526 | 0.828058 | 0.231675 | 0 | 0 | 0 | 0 | 0.023993 | 0.017995 | 0 | 0 | 0 | 0 | 0.037037 | 1 | 0.037037 | false | 0 | 0.148148 | 0 | 0.222222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c2fa77cba3144e55f33531814d2f04bc1677219 | 4,961 | py | Python | day24.py | drewbrew/advent-of-code-2020 | 543635d0fd71a85c6a957dcff4e878af7415469c | [
"Apache-2.0"
] | null | null | null | day24.py | drewbrew/advent-of-code-2020 | 543635d0fd71a85c6a957dcff4e878af7415469c | [
"Apache-2.0"
] | null | null | null | day24.py | drewbrew/advent-of-code-2020 | 543635d0fd71a85c6a957dcff4e878af7415469c | [
"Apache-2.0"
] | null | null | null | """Day 24: Lobby layout
Coordinate system taken from
https://www.redblobgames.com/grids/hexagons/#coordinates
"""
from collections import defaultdict
from day23 import REAL_INPUT
from typing import Dict, List, Tuple
MOVES = {
"e": [1, -1, 0],
"w": [-1, 1, 0],
"se": [0, -1, 1],
"sw": [-1, 0, 1],
"nw": [0, 1, -1],
"ne": [1, 0, -1],
}
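# Sanity check on the cube-coordinate move vectors: every unit step keeps
# x + y + z == 0, the invariant that the `if x + y + z == 0` filter in
# take_turn() relies on.
assert all(sum(step) == 0 for step in MOVES.values())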
TEST_INPUT = """sesenwnenenewseeswwswswwnenewsewsw
neeenesenwnwwswnenewnwwsewnenwseswesw
seswneswswsenwwnwse
nwnwneseeswswnenewneswwnewseswneseene
swweswneswnenwsewnwneneseenw
eesenwseswswnenwswnwnwsewwnwsene
sewnenenenesenwsewnenwwwse
wenwwweseeeweswwwnwwe
wsweesenenewnwwnwsenewsenwwsesesenwne
neeswseenwwswnwswswnw
nenwswwsewswnenenewsenwsenwnesesenew
enewnwewneswsewnwswenweswnenwsenwsw
sweneswneswneneenwnewenewwneswswnese
swwesenesewenwneswnwwneseswwne
enesenwswwswneneswsenwnewswseenwsese
wnwnesenesenenwwnenwsewesewsesesew
nenewswnwewswnenesenwnesewesw
eneswnwswnwsenenwnwnwwseeswneewsenese
neswnwewnwnwseenwseesewsenwsweewe
wseweeenwnesenwwwswnew""".splitlines()
EXPECTED_PART_TWO_RESULTS = {
2: 12,
3: 25,
4: 14,
5: 23,
6: 28,
7: 41,
8: 37,
9: 49,
10: 37,
20: 132,
30: 259,
40: 406,
50: 566,
60: 788,
70: 1106,
80: 1373,
90: 1844,
100: 2208,
}
with open("day24.txt") as infile:
REAL_INPUT = [line.strip() for line in infile]
def parse_step(line: str) -> Tuple[int, int, int]:
"""Follow a step from start to finish"""
index = 0
start = (0, 0, 0)
while index < len(line):
if line[index] in MOVES:
x, y, z = start
x1, y1, z1 = MOVES[line[index]]
start = (x + x1, y + y1, z + z1)
index += 1
elif line[index : index + 2] in MOVES:
x, y, z = start
x1, y1, z1 = MOVES[line[index : index + 2]]
start = (x + x1, y + y1, z + z1)
index += 2
else:
raise ValueError(f"Unknown step {line[index:index + 2]}")
return start
def part_one(puzzle: List[str]) -> int:
grid: Dict[Tuple[int], bool] = defaultdict(lambda: False)
for line in puzzle:
coordinates = parse_step(line)
grid[coordinates] = not grid[coordinates]
return sum(grid.values())
def get_neighbor_coordinates(pos: Tuple[int, int, int]) -> List[Tuple[int, int, int]]:
result = []
x, y, z = pos
for x1, y1, z1 in MOVES.values():
result.append((x + x1, y + y1, z + z1))
return result
def next_state(
pos: Tuple[int, int, int], grid: Dict[Tuple[int, int, int], bool]
) -> bool:
neighbors = get_neighbor_coordinates(pos)
active_neighbors = sum(grid[neighbor] for neighbor in neighbors)
state = grid[pos]
if state:
if active_neighbors not in {1, 2}:
# if the tile is black and there are 0 or more than two active neighbors,
# flip it
state = False
else:
if active_neighbors == 2:
# if it's white and there are exactly two black tiles next to it, flip it
state = True
return state
def take_turn(
grid: Dict[Tuple[int, int, int], bool]
) -> Dict[Tuple[int, int, int], bool]:
x_values = sorted(grid)
y_values = sorted(grid, key=lambda k: k[1])
z_values = sorted(grid, key=lambda k: k[2])
min_x = x_values[0][0] - 1
max_x = x_values[-1][0] + 2
min_y = y_values[0][1] - 1
max_y = y_values[-1][1] + 2
min_z = z_values[0][2] - 1
max_z = z_values[-1][2] + 2
new_grid = defaultdict(lambda: False)
new_grid.update(
{
(x, y, z): next_state((x, y, z), grid)
for x in range(min_x, max_x)
for y in range(min_y, max_y)
for z in range(min_z, max_z)
# this conditional is crucial
if x + y + z == 0
}
)
return new_grid
def part_two(puzzle: List[str], days: int = 100) -> int:
grid: Dict[Tuple[int], bool] = defaultdict(lambda: False)
for line in puzzle:
coordinates = parse_step(line)
grid[coordinates] = not grid[coordinates]
testing = puzzle == TEST_INPUT
for day in range(days):
grid = take_turn(grid)
if testing:
try:
expected_result = EXPECTED_PART_TWO_RESULTS[day + 1]
except KeyError:
pass
else:
assert expected_result == sum(grid.values()), (
day + 1,
sum(grid.values()),
grid,
)
if not day % 10:
# this code gets slower as the days go on
print(f"in progress: day {day}")
return sum(grid.values())
def main():
assert part_one(TEST_INPUT) == 10, part_one(TEST_INPUT)
print("part 1 result:", part_one(REAL_INPUT))
assert part_two(TEST_INPUT) == 2208, part_two(TEST_INPUT)
print("part 2 result", part_two(REAL_INPUT))
if __name__ == "__main__":
main()
| 27.258242 | 86 | 0.599274 | 645 | 4,961 | 4.497674 | 0.303876 | 0.028956 | 0.026543 | 0.033781 | 0.192003 | 0.165115 | 0.15443 | 0.11789 | 0.104791 | 0.104791 | 0 | 0.047659 | 0.285225 | 4,961 | 181 | 87 | 27.40884 | 0.770446 | 0.073171 | 0 | 0.115646 | 0 | 0 | 0.163902 | 0.131165 | 0 | 0 | 0 | 0 | 0.020408 | 1 | 0.047619 | false | 0.006803 | 0.020408 | 0 | 0.108844 | 0.020408 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c3309eff3593cc991ca6b9ae1c54999360fde55 | 840 | py | Python | holmes/migrations/versions/37881a97d680_create_user_table.py | scorphus/holmes-api | 6b3c76d4299fecf2d8799d7b5c3c6a6442cacd59 | [
"MIT"
] | null | null | null | holmes/migrations/versions/37881a97d680_create_user_table.py | scorphus/holmes-api | 6b3c76d4299fecf2d8799d7b5c3c6a6442cacd59 | [
"MIT"
] | null | null | null | holmes/migrations/versions/37881a97d680_create_user_table.py | scorphus/holmes-api | 6b3c76d4299fecf2d8799d7b5c3c6a6442cacd59 | [
"MIT"
] | null | null | null | """create user table
Revision ID: 37881a97d680
Revises: d8f500d9168
Create Date: 2014-02-10 15:52:50.366173
"""
# revision identifiers, used by Alembic.
revision = '37881a97d680'
down_revision = 'd8f500d9168'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'users',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('fullname', sa.String(200), nullable=False),
sa.Column('email', sa.String(100), nullable=False),
sa.Column('last_login', sa.DateTime, nullable=True),
sa.Column(
'is_superuser',
sa.Boolean,
nullable=False,
server_default='0'
)
)
op.create_index('idx_email', 'users', ['email'])
def downgrade():
op.drop_index('idx_email', 'users')
op.drop_table('users')
| 22.105263 | 62 | 0.625 | 103 | 840 | 4.990291 | 0.533981 | 0.077821 | 0.046693 | 0.081712 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097978 | 0.234524 | 840 | 37 | 63 | 22.702703 | 0.7014 | 0.172619 | 0 | 0 | 0 | 0 | 0.151383 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c36603505642a938a1b3d00b8392601cbdd9272 | 2,235 | py | Python | src/data/make_dataset.py | AlanGanem/fastai-flow | f5b873fd3bdf917be0bd958b144214d0568df15c | [
"MIT"
] | null | null | null | src/data/make_dataset.py | AlanGanem/fastai-flow | f5b873fd3bdf917be0bd958b144214d0568df15c | [
"MIT"
] | null | null | null | src/data/make_dataset.py | AlanGanem/fastai-flow | f5b873fd3bdf917be0bd958b144214d0568df15c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import click
import logging
from pathlib import Path
#from dotenv import find_dotenv, load_dotenv
import pandas as pd
import tqdm
import numpy as np
#@click.command()
#@click.argument('input_filepath', type=click.Path(exists=True))
#@click.argument('output_filepath', type=click.Path())
def main(
input_filepath,
output_filepath,
):
""" Runs data processing scripts to turn raw data from (../raw) into
cleaned data ready to be analyzed (saved in ../processed).
"""
logger = logging.getLogger(__name__)
logger.info('making final data set from raw data')
data = pd.read_csv(input_filepath, sep = ',', encoding = 'utf-8')
data = data.astype(str)
data = data.drop_duplicates(subset=['Material', 'Nºdopedido', 'IVAMIRO'])
data = data.astype(str)
strip_columns = [
'Material',
'Filial',
'IVAPC',
'PEP',
'Fornecedor',
'Contrato',
'UF',
'TpImposto',
'IVAMIRO',
'Nºdopedido'
]
for col in tqdm.tqdm(strip_columns):
try:
data[col] = data[col].str.strip(' ').str.lstrip('0')
except:
#change to warn in the future
print('{} not in data.columns'.format(col))
#data = data.replace({'': np.nan})
#data = data.replace({'nan': np.nan})
data['OrderType'] = data['Nºdopedido'].str[0:2]
data = data.replace(to_replace = {'nan': ''})
data.to_csv(output_filepath)
if __name__ == '__main__':
log_fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(level=logging.INFO, format=log_fmt)
# not used in this stub but often useful for finding various files
#project_dir = Path(__file__).resolve().parents[2]
# find .env automagically by walking up directories until it's found, then
# load up the .env entries as environment variables
#load_dotenv(find_dotenv())
today = int(pd.to_datetime('today').timestamp())
main(
r'C:\Users\User Ambev\Desktop\Célula de analytics\Projetos\iva-apfj\data\external\history.csv',
r'C:\Users\User Ambev\Desktop\Célula de analytics\Projetos\iva-apfj\data\external\hist_prep.csv'#.format(today)
)
| 32.391304 | 119 | 0.635347 | 290 | 2,235 | 4.768966 | 0.496552 | 0.040492 | 0.032538 | 0.030369 | 0.096891 | 0.096891 | 0.096891 | 0.096891 | 0.096891 | 0.096891 | 0 | 0.003442 | 0.220134 | 2,235 | 68 | 120 | 32.867647 | 0.790017 | 0.310962 | 0 | 0.045455 | 0 | 0.045455 | 0.284672 | 0.071666 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022727 | false | 0 | 0.136364 | 0 | 0.159091 | 0.022727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c370945df28f172815f20a5fdc6617f23cc8f02 | 1,989 | py | Python | server/core/server.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | null | null | null | server/core/server.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | 2 | 2021-06-08T20:59:31.000Z | 2021-09-08T01:49:50.000Z | server/core/server.py | Den4200/immortals | 2c3e3316f498ade2f301f43748fc95f5fbe9daf2 | [
"MIT"
] | null | null | null | import asyncio
import time
import zmq
import zmq.asyncio
from pymunk import Vec2d
from zmq import Socket
from .constants import SERVER_TICK
from .events.events import PlayerEvent
from .events.movement import apply_movement
from .events.states import GameState, PlayerState
async def main():
future = asyncio.Future()
game_state = GameState(
player_states=[PlayerState()]
)
ctx = zmq.asyncio.Context()
sock_b = ctx.socket(zmq.PULL)
sock_b.bind('tcp://*:25001')
task_b = asyncio.create_task(
update_from_client(game_state, sock_b)
)
sock_c = ctx.socket(zmq.PUB)
sock_c.bind('tcp://*:25000')
task_c = asyncio.create_task(
push_game_state(game_state, sock_c)
)
try:
await asyncio.wait(
[task_b, task_c, future],
return_when=asyncio.FIRST_COMPLETED
)
except asyncio.CancelledError:
print('Cancelled')
finally:
sock_b.close()
sock_c.close()
ctx.destroy(linger=1)
async def update_from_client(game_state: GameState, sock: Socket) -> None:
try:
while True:
msg = dict(await sock.recv_json())
event_dict = msg['event']
# print(event_dict)
event = PlayerEvent(**event_dict)
update_game_state(game_state, event)
except asyncio.CancelledError:
print("Cancelled")
pass
def update_game_state(game_state: GameState, event: PlayerEvent) -> None:
for ps in game_state.player_states:
pos = Vec2d(ps.x, ps.y)
dt = time.time() - ps.updated
new_pos = apply_movement(ps.speed, dt, pos, event)
ps.x, ps.y = new_pos.x, new_pos.y
ps.updated = time.time()
async def push_game_state(game_state: GameState, sock: Socket) -> None:
try:
while True:
sock.send_string(game_state.to_json())
await asyncio.sleep(1 / SERVER_TICK)
except asyncio.CancelledError:
pass
| 24.555556 | 74 | 0.635998 | 256 | 1,989 | 4.738281 | 0.3125 | 0.096455 | 0.059357 | 0.059357 | 0.249794 | 0.072547 | 0.072547 | 0.072547 | 0.072547 | 0 | 0 | 0.009543 | 0.262443 | 1,989 | 80 | 75 | 24.8625 | 0.817314 | 0.008547 | 0 | 0.163934 | 0 | 0 | 0.024873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016393 | false | 0.032787 | 0.163934 | 0 | 0.180328 | 0.032787 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c37e14295bfcedd72a3433670777db43f247942 | 1,945 | py | Python | cerberus/tests/test_rule_schema.py | pykler/cerberus | 8765b317442c002a84e556bd5d9677b868e6deb2 | [
"0BSD"
] | 2,020 | 2017-03-08T13:24:00.000Z | 2022-03-30T19:46:02.000Z | cerberus/tests/test_rule_schema.py | pykler/cerberus | 8765b317442c002a84e556bd5d9677b868e6deb2 | [
"0BSD"
] | 281 | 2017-03-08T23:05:10.000Z | 2022-03-25T01:37:04.000Z | cerberus/tests/test_rule_schema.py | pykler/cerberus | 8765b317442c002a84e556bd5d9677b868e6deb2 | [
"0BSD"
] | 171 | 2017-03-10T17:27:41.000Z | 2022-03-16T06:43:34.000Z | from cerberus import errors
from cerberus.tests import assert_fail, assert_success
def test_schema(validator):
field = 'a_dict'
subschema_field = 'address'
assert_success({field: {subschema_field: 'i live here', 'city': 'in my own town'}})
assert_fail(
schema={
field: {
'type': 'dict',
'schema': {
subschema_field: {'type': 'string'},
'city': {'type': 'string', 'required': True},
},
}
},
document={field: {subschema_field: 34}},
validator=validator,
error=(
field,
(field, 'schema'),
errors.SCHEMA,
validator.schema['a_dict']['schema'],
),
child_errors=[
(
(field, subschema_field),
(field, 'schema', subschema_field, 'type'),
errors.TYPE,
('string',),
),
(
(field, 'city'),
(field, 'schema', 'city', 'required'),
errors.REQUIRED_FIELD,
True,
),
],
)
assert field in validator.errors
assert subschema_field in validator.errors[field][-1]
assert (
errors.BasicErrorHandler.messages[errors.TYPE.code].format(
constraint=('string',)
)
in validator.errors[field][-1][subschema_field]
)
assert 'city' in validator.errors[field][-1]
assert (
errors.BasicErrorHandler.messages[errors.REQUIRED_FIELD.code]
in validator.errors[field][-1]['city']
)
def test_options_passed_to_nested_validators(validator):
validator.allow_unknown = True
assert_success(
schema={'sub_dict': {'type': 'dict', 'schema': {'foo': {'type': 'string'}}}},
document={'sub_dict': {'foo': 'bar', 'unknown': True}},
validator=validator,
)
| 29.029851 | 87 | 0.515681 | 175 | 1,945 | 5.577143 | 0.28 | 0.114754 | 0.08709 | 0.090164 | 0.182377 | 0.135246 | 0.135246 | 0.135246 | 0.135246 | 0.135246 | 0 | 0.004702 | 0.343959 | 1,945 | 66 | 88 | 29.469697 | 0.760188 | 0 | 0 | 0.118644 | 0 | 0 | 0.110026 | 0 | 0 | 0 | 0 | 0 | 0.152542 | 1 | 0.033898 | false | 0.016949 | 0.033898 | 0 | 0.067797 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c3901f50ecda0c64af94d368538bfb5042b9562 | 4,216 | py | Python | facetool/classify.py | hay/facetool | 3e296f7b177ebbcceb4b25f12f3327c3f6612f14 | [
"MIT"
] | 29 | 2018-12-10T22:40:07.000Z | 2022-03-30T02:56:28.000Z | facetool/classify.py | hay/facetool | 3e296f7b177ebbcceb4b25f12f3327c3f6612f14 | [
"MIT"
] | 2 | 2020-02-21T09:48:37.000Z | 2021-03-06T22:33:45.000Z | facetool/classify.py | hay/facetool | 3e296f7b177ebbcceb4b25f12f3327c3f6612f14 | [
"MIT"
] | 7 | 2019-08-09T09:19:12.000Z | 2022-03-30T02:56:27.000Z | import logging
logger = logging.getLogger(__name__)
from .profiler import Profiler
from . import config, resnet
profiler = Profiler("classify.py")
import os
import cv2
import dlib
import numpy as np
import tensorflow as tf
from imutils.face_utils import FaceAligner
from imutils.face_utils import rect_to_bb
profiler.tick("Libraries imported")
def get_gender(gender):
if gender == 0:
return "female"
elif gender == 1:
return "male"
else:
return "unknown"
class Classify:
def __init__(self, model_path, predictor_path, use_cuda = False):
self.model_path = model_path
self.predictor_path = predictor_path
self.use_cuda = use_cuda
if not use_cuda:
os.environ['CUDA_VISIBLE_DEVICES'] = ''
self._create_session()
def classify(self, path):
aligned_image, image, rect_nums, XY = self._load_image(path)
profiler.tick("Loaded image")
ages, genders = self._evaluate(aligned_image)
profiler.tick("Evaluated image")
if config.PROFILE:
profiler.dump_events()
return {
"ages" : ages.tolist(),
"genders" : [get_gender(g) for g in genders.tolist()]
}
def _create_session(self):
logger.debug("Creating session")
with tf.Graph().as_default():
self.session = tf.Session()
images_pl = tf.placeholder(tf.float32, shape=[None, 160, 160, 3], name='input_image')
images = tf.map_fn(lambda frame: tf.reverse_v2(frame, [-1]), images_pl) #BGR TO RGB
images_norm = tf.map_fn(lambda frame: tf.image.per_image_standardization(frame), images)
train_mode = tf.placeholder(tf.bool)
age_logits, gender_logits, _ = resnet.inference(images_norm, keep_probability=0.8,
phase_train=train_mode,
weight_decay=1e-5)
gender = tf.argmax(tf.nn.softmax(gender_logits), 1)
age_ = tf.cast(tf.constant([i for i in range(0, 101)]), tf.float32)
age = tf.reduce_sum(tf.multiply(tf.nn.softmax(age_logits), age_), axis=1)
init_op = tf.group(
tf.global_variables_initializer(),
tf.local_variables_initializer()
)
self.session.run(init_op)
saver = tf.train.Saver()
ckpt = tf.train.get_checkpoint_state(self.model_path)
if ckpt and ckpt.model_checkpoint_path:
saver.restore(self.session, ckpt.model_checkpoint_path)
self._age = age
self._gender = gender
self._images_pl = images_pl
self._train_mode = train_mode
logger.debug("Session restorted")
else:
logger.debug("Could not create session")
profiler.tick("Created session")
def _evaluate(self, aligned_images):
return self.session.run(
[self._age, self._gender],
feed_dict = {
self._images_pl: aligned_images,
self._train_mode: False
}
)
def _load_image(self, image_path):
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(self.predictor_path)
fa = FaceAligner(predictor, desiredFaceWidth=160)
image = cv2.imread(image_path, cv2.IMREAD_COLOR)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
rects = detector(gray, 2)
rect_nums = len(rects)
XY, aligned_images = [], []
if rect_nums == 0:
aligned_images.append(image)
return aligned_images, image, rect_nums, XY
else:
for i in range(rect_nums):
aligned_image = fa.align(image, gray, rects[i])
aligned_images.append(aligned_image)
(x, y, w, h) = rect_to_bb(rects[i])
image = cv2.rectangle(image, (x, y), (x + w, y + h), color=(255, 0, 0), thickness=2)
XY.append((x, y))
return np.array(aligned_images), image, rect_nums, XY | 35.133333 | 100 | 0.58278 | 500 | 4,216 | 4.68 | 0.328 | 0.038889 | 0.016667 | 0.019231 | 0.063248 | 0.041026 | 0 | 0 | 0 | 0 | 0 | 0.014946 | 0.3176 | 4,216 | 120 | 101 | 35.133333 | 0.798401 | 0.002372 | 0 | 0.030612 | 0 | 0 | 0.04446 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061224 | false | 0 | 0.112245 | 0.010204 | 0.255102 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c3c1abdd5938c1500009922bfaa382eec74fe54 | 6,318 | py | Python | pycalc/data_sr.py | Hirico/supic | 3cb5f3edc59d433fd65a2b638f796cad7126431b | [
"MIT"
] | 4 | 2017-07-18T10:24:11.000Z | 2018-02-19T12:43:47.000Z | pycalc/data_sr.py | Hirico/supic | 3cb5f3edc59d433fd65a2b638f796cad7126431b | [
"MIT"
] | null | null | null | pycalc/data_sr.py | Hirico/supic | 3cb5f3edc59d433fd65a2b638f796cad7126431b | [
"MIT"
] | 1 | 2017-07-16T01:40:20.000Z | 2017-07-16T01:40:20.000Z | import tensorflow as tf
import argument_sr
from os.path import join
from PIL import Image
import os
from numpy import array
"""
Flickr image data, loaded via PIL
"""
def pil_batch_queue():
lrs ,hr2s , hr4s = argument_sr.options.get_pil_file_list()
lrs = array(lrs)
hr2s = array(hr2s)
hr4s = array(hr4s)
lrs = tf.convert_to_tensor(lrs,dtype=tf.float32)
hr2s = tf.convert_to_tensor(hr2s, dtype=tf.float32)
hr4s = tf.convert_to_tensor(hr4s, dtype=tf.float32)
lrs = tf.expand_dims(lrs,3)
hr2s = tf.expand_dims(hr2s, 3)
hr4s = tf.expand_dims(hr4s, 3)
return lrs,hr2s,hr4s,None
"""
SET5 test
"""
def pil_single_test_SET5(path):
a,b,c = argument_sr.options.get_set5(path)
lrs = array(a)
hr2s = array(b)
hr4s = array(c)
lrs = tf.convert_to_tensor(lrs, dtype=tf.float32)
hr2s = tf.convert_to_tensor(hr2s, dtype=tf.float32)
hr4s = tf.convert_to_tensor(hr4s, dtype=tf.float32)
lrs = tf.expand_dims(lrs, 0)
hr2s = tf.expand_dims(hr2s, 0)
hr4s = tf.expand_dims(hr4s, 0)
lrs = tf.expand_dims(lrs, 3)
hr2s = tf.expand_dims(hr2s, 3)
hr4s = tf.expand_dims(hr4s, 3)
return lrs, hr2s, hr4s, None
def RGB_to_Tcrbr_Y(tensor):
"""
    Args:
        tensor: the RGB image tensor to convert
    Returns:
        the luma (Y) channel of the image, with shape [H, W, 1]
"""
with tf.name_scope("rgb_to_tcrbr"):
print(tensor)
R = tensor[:, : ,0]
G = tensor[:, :, 1]
B = tensor[:, :, 2]
L = 0.299*R+0.587*G+0.114*B
# print(L)
return tf.expand_dims(L,2)
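# Note: the 0.299/0.587/0.114 weights above are the standard BT.601 luma
# coefficients, so this returns the Y (luma) channel of an RGB image.
# Hedged shape sketch (values depend on the input):
#   RGB_to_Tcrbr_Y(tf.zeros([64, 64, 3]))  # -> tensor of shape [64, 64, 1]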
def get_all_file(path, endFormat, withPath=True):
"""
    Find the files under `path` whose names end with one of the formats in endFormat.
    Args:
        path: the directory to search
        endFormat: list of file extensions/formats to accept
        withPath: whether the returned file names include the directory path
    Returns:
        all matching file names
"""
dir = []
for root, dirs, files in os.walk(path):
for file in files:
if True in [file.endswith(x) for x in endFormat]:
                # join with the walked root so files in subdirectories resolve correctly
                filename = join(root, file) if withPath else file
dir.append(filename)
return dir
"""
Plain JPG training set
"""
def batch_queue_for_training_normal(data_path):
num_channel = argument_sr.options.input_channel
image_height = argument_sr.options.height
image_width = argument_sr.options.width
batch_size = argument_sr.options.batch_size
threads_num = argument_sr.options.num_threads
min_queue_examples = argument_sr.options.min_after_dequeue
filename_queue = tf.train.string_input_producer(get_all_file(path=data_path, endFormat=['jpg']))
file_reader = tf.WholeFileReader()
_, image_file = file_reader.read(filename_queue)
patch = tf.image.decode_jpeg(image_file, 3)
patch = tf.image.convert_image_dtype(patch, dtype=tf.float32)
# patch = RGB_to_Tcrbr_Y(patch)
image_HR8 = tf.random_crop(patch, [image_height, image_width, num_channel])
image_HR4 = tf.image.resize_images(image_HR8, [int(image_height / 2), int(image_width / 2)],
method=tf.image.ResizeMethod.BICUBIC)
image_HR2 = tf.image.resize_images(image_HR8, [int(image_height / 4), int(image_width / 4)],
method=tf.image.ResizeMethod.BICUBIC)
image_LR = tf.image.resize_images(image_HR8, [int(image_height / 8), int(image_width / 8)],
method=tf.image.ResizeMethod.BICUBIC)
low_res_batch, high2_res_batch, high4_res_batch, high8_res_batch = tf.train.shuffle_batch(
[image_LR, image_HR2, image_HR4, image_HR8],
batch_size=batch_size,
num_threads=threads_num,
capacity=min_queue_examples + 3 * batch_size,
min_after_dequeue=min_queue_examples)
return low_res_batch, high2_res_batch, high4_res_batch, high8_res_batch
def save_image(image, path):
with open(path, "wb") as file:
file.write(image)
"""
Use the special, pre-generated large training set
"""
def batch_queue_for_training_mkdir():
num_channel = argument_sr.options.input_channel
image_height = argument_sr.options.height
image_width = argument_sr.options.width
batch_size = argument_sr.options.batch_size
threads_num = argument_sr.options.num_threads
filename_queue = tf.train.string_input_producer(argument_sr.options.get_file_list())
file_reader = tf.WholeFileReader()
_, image_file = file_reader.read(filename_queue)
patch = tf.image.decode_jpeg(image_file, 3)
patch = tf.image.convert_image_dtype(patch, dtype=tf.float32)
patch = RGB_to_Tcrbr_Y(patch)
image_HR8 = tf.random_crop(patch, [image_height, image_width, num_channel])
image_HR4 = tf.image.resize_images(image_HR8, [int(image_height / 2), int(image_width / 2)],
method=tf.image.ResizeMethod.BICUBIC)
image_HR2 = tf.image.resize_images(image_HR8, [int(image_height / 4), int(image_width / 4)],
method=tf.image.ResizeMethod.BICUBIC)
image_LR = tf.image.resize_images(image_HR8, [int(image_height / 8), int(image_width / 8)],
method=tf.image.ResizeMethod.BICUBIC)
low_res_batch, high2_res_batch, high4_res_batch, high8_res_batch = tf.train.batch(
[image_LR, image_HR2, image_HR4, image_HR8],
batch_size=batch_size,
num_threads=threads_num,
capacity=3 * batch_size)
filename_queue.close()
return low_res_batch, high2_res_batch, high4_res_batch, high8_res_batch
def dataTest():
    low_res_batch, high2_res_batch, high4_res_batch, high8_res_batch = batch_queue_for_training_normal(argument_sr.options.test_data_path)
images = []
for i in range(16):
temp = tf.image.convert_image_dtype(high2_res_batch[i], dtype=tf.uint8)
temp = tf.image.encode_jpeg(temp)
images.append(temp)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
tf.train.start_queue_runners(sess=sess)
low = sess.run(images)
for i in range(16):
save_image(low[i], './test/low' + str(i) + '.jpg')
def get_image_info():
paths = argument_sr.options.get_file_list()
for path in paths:
im = Image.open(path)
height = min(im.size[1], im.size[0])
width = max(im.size[1], im.size[0])
print(width, height)
if __name__ == '__main__':
print(pil_single_test_SET5('./SET5/baby.png'))
| 33.252632 | 135 | 0.662235 | 897 | 6,318 | 4.377926 | 0.181717 | 0.042781 | 0.064935 | 0.025974 | 0.654698 | 0.617774 | 0.586707 | 0.566845 | 0.566845 | 0.566845 | 0 | 0.028062 | 0.227287 | 6,318 | 189 | 136 | 33.428571 | 0.776321 | 0.034821 | 0 | 0.444444 | 0 | 0 | 0.009097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.047619 | 0 | 0.166667 | 0.02381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c3f4f20d10436d040d5804e29572c8cb06bf2d7 | 15,927 | py | Python | dipper/sources/BioGrid.py | putmantime/dipper | 583d76207877096a84a98a379c904ea9c960c400 | [
"BSD-3-Clause"
] | null | null | null | dipper/sources/BioGrid.py | putmantime/dipper | 583d76207877096a84a98a379c904ea9c960c400 | [
"BSD-3-Clause"
] | null | null | null | dipper/sources/BioGrid.py | putmantime/dipper | 583d76207877096a84a98a379c904ea9c960c400 | [
"BSD-3-Clause"
] | 1 | 2022-01-04T14:34:33.000Z | 2022-01-04T14:34:33.000Z | import os
import logging
import re
from datetime import datetime
from stat import ST_CTIME
from zipfile import ZipFile
from dipper import config
from dipper.sources.Source import Source
from dipper.models.Model import Model
from dipper.models.assoc.InteractionAssoc import InteractionAssoc
from dipper.models.Dataset import Dataset
__author__ = 'nicole'
logger = logging.getLogger(__name__)
BGDL = 'http://thebiogrid.org/downloads/archives/Latest%20Release'
class BioGrid(Source):
"""
Biogrid interaction data
"""
# TODO write up class summary for docstring
files = {
'interactions': {
'file': 'interactions.mitab.zip',
'url': BGDL + '/BIOGRID-ALL-LATEST.mitab.zip'},
'identifiers': {
'file': 'identifiers.tab.zip',
'url': BGDL + '/BIOGRID-IDENTIFIERS-LATEST.tab.zip'}
}
# biogrid-specific identifiers for use in subsetting identifier mapping
biogrid_ids = [
106638, 107308, 107506, 107674, 107675, 108277, 108506, 108767, 108814,
108899, 110308, 110364, 110678, 111642, 112300, 112365, 112771, 112898,
199832, 203220, 247276, 120150, 120160, 124085]
def __init__(self, graph_type, are_bnodes_skolemized, tax_ids=None):
super().__init__(graph_type, are_bnodes_skolemized, 'biogrid')
self.tax_ids = tax_ids
self.dataset = Dataset(
'biogrid', 'The BioGrid', 'http://thebiogrid.org/', None,
'http://wiki.thebiogrid.org/doku.php/terms_and_conditions')
# Defaults
# our favorite animals
# taxids = [9606,10090,10116,7227,7955,6239,8355]
if self.tax_ids is None:
self.tax_ids = [9606, 10090, 7955]
if 'test_ids' not in config.get_config() or \
'gene' not in config.get_config()['test_ids']:
logger.warning("not configured with gene test ids.")
else:
self.test_ids = config.get_config()['test_ids']['gene']
# data-source specific warnings
# (will be removed when issues are cleared)
logger.warning(
"several MI experimental codes do not exactly map to ECO; "
"using approximations.")
return
def fetch(self, is_dl_forced=False):
"""
:param is_dl_forced:
:return: None
"""
self.get_files(is_dl_forced)
# the version number is encoded in the filename in the zip.
# for example, the interactions file may unzip to
# BIOGRID-ALL-3.2.119.mitab.txt, where the version number is 3.2.119
f = '/'.join((self.rawdir, self.files['interactions']['file']))
st = os.stat(f)
filedate = datetime.utcfromtimestamp(st[ST_CTIME]).strftime("%Y-%m-%d")
with ZipFile(f, 'r') as myzip:
flist = myzip.namelist()
# assume that the first entry is the item
fname = flist[0]
# get the version from the filename
version = \
re.match(r'BIOGRID-ALL-(\d+\.\d+\.\d+)\.mitab.txt', fname)
myzip.close()
self.dataset.setVersion(filedate, str(version.groups()[0]))
return
def parse(self, limit=None):
"""
:param limit:
:return:
"""
if self.testOnly:
self.testMode = True
self._get_interactions(limit)
self._get_identifiers(limit)
logger.info("Loaded %d test graph nodes", len(self.testgraph))
logger.info("Loaded %d full graph nodes", len(self.graph))
return
def _get_interactions(self, limit):
logger.info("getting interactions")
line_counter = 0
f = '/'.join((self.rawdir, self.files['interactions']['file']))
myzip = ZipFile(f, 'r')
# assume that the first entry is the item
fname = myzip.namelist()[0]
matchcounter = 0
with myzip.open(fname, 'r') as csvfile:
for line in csvfile:
# skip comment lines
if re.match(r'^#', line.decode()):
logger.debug("Skipping header line")
continue
line_counter += 1
line = line.decode().strip()
# print(line)
(interactor_a, interactor_b, alt_ids_a, alt_ids_b, aliases_a,
aliases_b, detection_method, pub_author, pub_id, taxid_a,
taxid_b, interaction_type, source_db, interaction_id,
confidence_val) = line.split('\t')
# get the actual gene ids,
# typically formatted like: gene/locuslink:351|BIOGRID:106848
gene_a_num = re.search(
r'locuslink\:(\d+)\|?', interactor_a).groups()[0]
gene_b_num = re.search(
r'locuslink\:(\d+)\|?', interactor_b).groups()[0]
if self.testMode:
g = self.testgraph
# skip any genes that don't match our test set
if (int(gene_a_num) not in self.test_ids) or\
(int(gene_b_num) not in self.test_ids):
continue
else:
g = self.graph
# when not in test mode, filter by taxon
if int(re.sub(r'taxid:', '', taxid_a.rstrip())) not in\
self.tax_ids or\
int(re.sub(
r'taxid:', '', taxid_b.rstrip())) not in\
self.tax_ids:
continue
else:
matchcounter += 1
gene_a = 'NCBIGene:'+gene_a_num
gene_b = 'NCBIGene:'+gene_b_num
# get the interaction type
# psi-mi:"MI:0407"(direct interaction)
int_type = re.search(r'MI:\d+', interaction_type).group()
rel = self._map_MI_to_RO(int_type)
# scrub pubmed-->PMID prefix
pub_id = re.sub(r'pubmed', 'PMID', pub_id)
# remove bogus whitespace
pub_id = pub_id.strip()
# get the method, and convert to evidence code
det_code = re.search(r'MI:\d+', detection_method).group()
evidence = self._map_MI_to_ECO(det_code)
# note that the interaction_id is some kind of internal biogrid
# identifier that does not map to a public URI.
# we will construct a monarch identifier from this
assoc = InteractionAssoc(g, self.name, gene_a, gene_b, rel)
assoc.add_evidence(evidence)
assoc.add_source(pub_id)
assoc.add_association_to_graph()
if not self.testMode and (
limit is not None and line_counter > limit):
break
myzip.close()
return
def _get_identifiers(self, limit):
"""
This will process the id mapping file provided by Biogrid.
The file has a very large header, which we scan past,
then pull the identifiers, and make equivalence axioms
:param limit:
:return:
"""
logger.info("getting identifier mapping")
line_counter = 0
f = '/'.join((self.rawdir, self.files['identifiers']['file']))
myzip = ZipFile(f, 'r')
# assume that the first entry is the item
fname = myzip.namelist()[0]
foundheader = False
# TODO align this species filter with the one above
# speciesfilters = 'Homo sapiens,Mus musculus,Drosophila melanogaster,
# Danio rerio, Caenorhabditis elegans,Xenopus laevis'.split(',')
speciesfilters = 'Homo sapiens,Mus musculus'.split(',')
with myzip.open(fname, 'r') as csvfile:
for line in csvfile:
# skip header lines
if not foundheader:
if re.match(r'BIOGRID_ID', line.decode()):
foundheader = True
continue
line = line.decode().strip()
# BIOGRID_ID
# IDENTIFIER_VALUE
# IDENTIFIER_TYPE
# ORGANISM_OFFICIAL_NAME
# 1 814566 ENTREZ_GENE Arabidopsis thaliana
(biogrid_num, id_num, id_type,
organism_label) = line.split('\t')
if self.testMode:
g = self.testgraph
# skip any genes that don't match our test set
if int(biogrid_num) not in self.biogrid_ids:
continue
else:
g = self.graph
model = Model(g)
# for each one of these,
# create the node and add equivalent classes
biogrid_id = 'BIOGRID:'+biogrid_num
prefix = self._map_idtype_to_prefix(id_type)
# TODO make these filters available as commandline options
# geneidtypefilters='NCBIGene,OMIM,MGI,FlyBase,ZFIN,MGI,HGNC,
# WormBase,XenBase,ENSEMBL,miRBase'.split(',')
geneidtypefilters = 'NCBIGene,MGI,ENSEMBL,ZFIN,HGNC'.split(',')
# proteinidtypefilters='HPRD,Swiss-Prot,NCBIProtein'
if (speciesfilters is not None) \
and (organism_label.strip() in speciesfilters):
line_counter += 1
if (geneidtypefilters is not None) \
and (prefix in geneidtypefilters):
mapped_id = ':'.join((prefix, id_num))
model.addEquivalentClass(biogrid_id, mapped_id)
# this symbol will only get attached to the biogrid class
elif id_type == 'OFFICIAL_SYMBOL':
model.addClassToGraph(biogrid_id, id_num)
# elif (id_type == 'SYNONYM'):
# FIXME - i am not sure these are synonyms, altids?
# gu.addSynonym(g,biogrid_id,id_num)
if not self.testMode and limit is not None \
and line_counter > limit:
break
myzip.close()
return
@staticmethod
def _map_MI_to_RO(mi_id):
rel = InteractionAssoc.interaction_object_properties
mi_ro_map = {
# colocalization
'MI:0403': rel['colocalizes_with'],
# direct interaction
'MI:0407': rel['interacts_with'],
# synthetic genetic interaction defined by inequality
'MI:0794': rel['genetically_interacts_with'],
# suppressive genetic interaction defined by inequality
'MI:0796': rel['genetically_interacts_with'],
# additive genetic interaction defined by inequality
'MI:0799': rel['genetically_interacts_with'],
# association
'MI:0914': rel['interacts_with'],
# physical association
'MI:0915': rel['interacts_with']
}
ro_id = rel['interacts_with'] # default
if mi_id in mi_ro_map:
ro_id = mi_ro_map.get(mi_id)
return ro_id
@staticmethod
def _map_MI_to_ECO(mi_id):
eco_id = 'ECO:0000006' # default to experimental evidence
mi_to_eco_map = {
'MI:0018': 'ECO:0000068', # yeast two-hybrid
'MI:0004': 'ECO:0000079', # affinity chromatography
'MI:0047': 'ECO:0000076', # far western blotting
'MI:0055': 'ECO:0000021', # should be FRET, but using physical_interaction FIXME
'MI:0090': 'ECO:0000012', # desired: protein complementation, using: functional complementation
'MI:0096': 'ECO:0000085', # desired: pull down, using: immunoprecipitation
'MI:0114': 'ECO:0000324', # desired: x-ray crystallography, using: imaging assay
'MI:0254': 'ECO:0000011', # desired: genetic interference, using: genetic interaction evidence
'MI:0401': 'ECO:0000172', # desired: biochemical, using: biochemical trait evidence
'MI:0415': 'ECO:0000005', # desired: enzymatic study, using: enzyme assay evidence
'MI:0428': 'ECO:0000324', # imaging
'MI:0686': 'ECO:0000006', # desired: unspecified, using: experimental evidence
'MI:1313': 'ECO:0000006' # None?
}
if mi_id in mi_to_eco_map:
eco_id = mi_to_eco_map.get(mi_id)
else:
logger.warning(
"unmapped code %s. Defaulting to experimental_evidence", mi_id)
return eco_id
@staticmethod
def _map_idtype_to_prefix(idtype):
"""
Here we need to reformat the BioGrid source prefixes
to standard ones used in our curie-map.
:param idtype:
:return:
"""
prefix = idtype
idtype_to_prefix_map = {
'XENBASE': 'XenBase',
'TREMBL': 'TrEMBL',
'MGI': 'MGI',
'REFSEQ_DNA_ACCESSION': 'RefSeqNA',
'MAIZEGDB': 'MaizeGDB',
'BEEBASE': 'BeeBase',
'ENSEMBL': 'ENSEMBL',
'TAIR': 'TAIR',
'GENBANK_DNA_GI': 'NCBIgi',
'CGNC': 'CGNC',
'RGD': 'RGD',
'GENBANK_GENOMIC_DNA_GI': 'NCBIgi',
'SWISSPROT': 'Swiss-Prot',
'MIM': 'OMIM',
'FLYBASE': 'FlyBase',
'VEGA': 'VEGA',
'ANIMALQTLDB': 'AQTLDB',
'ENTREZ_GENE_ETG': 'ETG',
'HPRD': 'HPRD',
'APHIDBASE': 'APHIDBASE',
'GENBANK_PROTEIN_ACCESSION': 'NCBIProtein',
'ENTREZ_GENE': 'NCBIGene',
'SGD': 'SGD',
'GENBANK_GENOMIC_DNA_ACCESSION': 'NCBIGenome',
'BGD': 'BGD',
'WORMBASE': 'WormBase',
'ZFIN': 'ZFIN',
'DICTYBASE': 'dictyBase',
'ECOGENE': 'ECOGENE',
'BIOGRID': 'BIOGRID',
'GENBANK_DNA_ACCESSION': 'NCBILocus',
'VECTORBASE': 'VectorBase',
'MIRBASE': 'miRBase',
'IMGT/GENE-DB': 'IGMT',
'HGNC': 'HGNC',
'SYSTEMATIC_NAME': None,
'OFFICIAL_SYMBOL': None,
'REFSEQ_GENOMIC_DNA_ACCESSION': 'NCBILocus',
'GENBANK_PROTEIN_GI': 'NCBIgi',
'REFSEQ_PROTEIN_ACCESSION': 'RefSeqProt',
'SYNONYM': None,
'GRID_LEGACY': None,
# the following showed up in 3.3.124
'UNIPROT-ACCESSION': 'UniprotKB',
'SWISS-PROT': 'Swiss-Prot',
'OFFICIAL SYMBOL': None,
'ENSEMBL RNA': None,
'GRID LEGACY': None,
'ENSEMBL PROTEIN': None,
'REFSEQ-RNA-GI': None,
'REFSEQ-RNA-ACCESSION': None,
'REFSEQ-PROTEIN-GI': None,
'REFSEQ-PROTEIN-ACCESSION-VERSIONED': None,
'REFSEQ-PROTEIN-ACCESSION': None,
'REFSEQ-LEGACY': None,
'SYSTEMATIC NAME': None,
'ORDERED LOCUS': None,
'UNIPROT-ISOFORM': 'UniprotKB',
'ENSEMBL GENE': 'ENSEMBL',
'CGD': None, # Not sure what this is?
'WORMBASE-OLD': 'WormBase'
}
if idtype in idtype_to_prefix_map:
prefix = idtype_to_prefix_map.get(idtype)
else:
logger.warning("unmapped prefix %s", prefix)
return prefix
def getTestSuite(self):
import unittest
from tests.test_biogrid import BioGridTestCase
# TODO add InteractionAssoc tests
# TODO add test about if all prefixes are mapped?
test_suite = \
unittest.TestLoader().loadTestsFromTestCase(BioGridTestCase)
return test_suite
| 37.475294 | 108 | 0.54348 | 1,697 | 15,927 | 4.952269 | 0.296405 | 0.00476 | 0.00595 | 0.005712 | 0.155878 | 0.121966 | 0.08841 | 0.080795 | 0.074607 | 0.061637 | 0 | 0.040186 | 0.353174 | 15,927 | 424 | 109 | 37.563679 | 0.775578 | 0.204056 | 0 | 0.173913 | 0 | 0 | 0.191335 | 0.037043 | 0 | 0 | 0 | 0.007075 | 0 | 1 | 0.032609 | false | 0 | 0.047101 | 0 | 0.123188 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c42f7d286d116445c758adc9f9639297784a6e6 | 10,084 | py | Python | test_mimecat.py | xuzheliang135/mimecat | 4502e49e86951b02d61bfa7a52392f2ea74190f5 | [
"MIT"
] | null | null | null | test_mimecat.py | xuzheliang135/mimecat | 4502e49e86951b02d61bfa7a52392f2ea74190f5 | [
"MIT"
] | 1 | 2021-06-25T15:19:49.000Z | 2021-06-25T15:19:49.000Z | test_mimecat.py | xuzheliang135/mimecat | 4502e49e86951b02d61bfa7a52392f2ea74190f5 | [
"MIT"
] | 1 | 2020-02-13T11:43:11.000Z | 2020-02-13T11:43:11.000Z | # -*- coding: utf-8 -*-
import os
import unittest
from StringIO import StringIO
from mimecat import (Catalogue, _canonicalize_extension,
_parse_file, _parse_line)
TEST_MIME_TYPES = """
# This file maps Internet media types to unique file extension(s).
# Although created for httpd, this file is used by many software systems
# and has been placed in the public domain for unlimited redisribution.
#
# The table below contains both registered and (common) unregistered types.
# A type that has no unique extension can be ignored -- they are listed
# here to guide configurations toward known types and to make it easier to
# identify "new" types. File extensions are also commonly used to indicate
# content languages and encodings, so choose them carefully.
#
# Internet media types should be registered as described in RFC 4288.
# The registry is at <http://www.iana.org/assignments/media-types/>.
#
# MIME type (lowercased) Extensions
# ============================================ ==========
# application/activemessage
application/andrew-inset ez
application/json json
# application/kpml-request+xml
# audio/amr
audio/midi mid midi kar rmi
# audio/mobile-xmf
audio/mp4 mp4a
audio/mp4a-latm m4a m4p
audio/ogg oga ogg spx
image/jpeg jpeg jpg jpe
# image/jpm
# message/cpim
# message/delivery-status
message/rfc822 eml mime
text/css css
text/plain txt text conf def list log in
# text/xml
video/3gpp 3gp
video/3gpp2 3g2
video/ogg ogv
"""
class CatalogueTests(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.test_filename = "test.mime.types"
cls.test_filename_shibboleth = "test-shibboleth.mime.types"
with open(cls.test_filename, "w") as filep:
filep.write(TEST_MIME_TYPES)
with open(cls.test_filename_shibboleth, "w") as filep:
filep.write("text/plain2 txt\n")
filep.write("text/plain txt2\n")
@classmethod
def tearDownClass(cls):
os.unlink(cls.test_filename)
os.unlink(cls.test_filename_shibboleth)
def setUp(self):
self.catalogue = Catalogue(self.test_filename)
self.empty_catalogue = Catalogue(self.test_filename)
self.empty_catalogue.clear()
def test_init(self):
cat = Catalogue(self.test_filename)
self.assertIn("message/rfc822",
cat._known_mimetypes)
def test_init_with_filep(self):
with open(self.test_filename, "r") as filep:
cat = Catalogue(filep = filep)
self.assertIn("message/rfc822",
cat._known_mimetypes)
def test_init_with_order(self):
with open(self.test_filename, "r") as filep:
cat = Catalogue(self.test_filename_shibboleth, filep)
# test_filename should've been used first, so text/plain2 should
# come after text/plain in the extensions to type map
type_list = cat._exts_to_types[".txt"]
self.assertGreater(type_list.index("text/plain2"),
type_list.index("text/plain"))
def test_init_fails(self):
cat = None
with self.assertRaises(IOError):
cat = Catalogue(["BOGUS_FILE"])
self.assertIsNone(cat)
def test_clear(self):
self.catalogue.clear()
self.assertEqual( {}, self.catalogue._types_to_exts)
self.assertEqual( {}, self.catalogue._exts_to_types)
self.assertEqual(set(), self.catalogue._known_mediatypes)
self.assertEqual(set(), self.catalogue._known_mimetypes)
self.assertEqual(set(), self.catalogue._known_extensions)
def test_load_filenames_stops(self):
self.empty_catalogue.load_filenames([self.test_filename_shibboleth,
self.test_filename],
True)
self.assertEqual(len(self.empty_catalogue._known_mediatypes), 1)
self.assertEqual(len(self.empty_catalogue._known_mimetypes), 2)
self.assertEqual(len(self.empty_catalogue._known_extensions), 2)
def test_load_filenames_does_not_stop(self):
self.empty_catalogue.load_filenames([self.test_filename_shibboleth,
self.test_filename], False)
self.assertGreater(len(self.empty_catalogue._known_mediatypes), 1)
self.assertGreater(len(self.empty_catalogue._known_mimetypes), 2)
self.assertGreater(len(self.empty_catalogue._known_extensions), 2)
def test_load_filenames_fail(self):
with self.assertRaises(IOError):
self.empty_catalogue.load_filenames(["BOGUS_FILE", "BOGUS_FILE2"])
def test_load_filename(self):
self.empty_catalogue.load_filename(self.test_filename_shibboleth)
self.assertEqual(len(self.empty_catalogue._known_mediatypes), 1)
self.assertEqual(len(self.empty_catalogue._known_mimetypes), 2)
self.assertEqual(len(self.empty_catalogue._known_extensions), 2)
def test_load_filename_fails(self):
with self.assertRaises(IOError):
self.empty_catalogue.load_filename("BOGUS_FILE")
def test_load_file(self):
with open(self.test_filename_shibboleth) as filep:
self.empty_catalogue.load_file(filep)
self.assertEqual(len(self.empty_catalogue._known_mediatypes), 1)
self.assertEqual(len(self.empty_catalogue._known_mimetypes), 2)
self.assertEqual(len(self.empty_catalogue._known_extensions), 2)
def test_parse_file(self):
with open(self.test_filename_shibboleth) as filep:
items = [item for item in _parse_file(filep) if item is not None]
self.assertEqual(len(items), 2)
with open(self.test_filename) as filep:
items = [item for item in _parse_file(filep) if item is not None]
self.assertEqual(len(items), 13)
def test_parse_line(self):
result = _parse_line("#")
self.assertIsNone(result)
result = _parse_line("# more")
self.assertIsNone(result)
result = _parse_line("text/plain")
self.assertEqual(("text/plain", []), result)
result = _parse_line("text/plain ext1 ext2 ext3")
self.assertEqual(("text/plain", [".ext1", ".ext2", ".ext3"]), result)
result = _parse_line("text/plain ext1 ext2 ext3 # with comment")
self.assertEqual(("text/plain", [".ext1", ".ext2", ".ext3"]), result)
result = _parse_line("# text/plain ext1 ext2 ext3")
self.assertIsNone(result)
result = _parse_line("# text/plain ext1 ext2 ext3 # with comment")
self.assertIsNone(result)
def test_parse_line_fails(self):
with self.assertRaises(ValueError):
_ = _parse_line("invalid exts")
def test_known_mediatypes(self):
self.assertIn("application", self.catalogue.known_mediatypes)
self.assertIn("text", self.catalogue.known_mediatypes)
def test_known_mimetypes(self):
self.assertIn("application/json", self.catalogue.known_mimetypes)
self.assertIn("audio/mp4", self.catalogue.known_mimetypes)
def test_known_extensions(self):
self.assertIn(".ez", self.catalogue.known_extensions)
self.assertIn(".m4a", self.catalogue.known_extensions)
def test_get_extensions(self):
exts = self.catalogue.get_extensions("audio/midi")
self.assertEqual(len(exts), 4)
def test_get_extensions_fails(self):
with self.assertRaises(KeyError):
self.catalogue.get_extensions("bad/type")
def test_get_types(self):
types = self.catalogue.get_types(".txt")
self.assertEqual(len(types), 1)
types = self.catalogue.get_types("txt")
self.assertEqual(len(types), 1)
def test_get_types_with_duplicate(self):
self.catalogue.add_type("text/plain2", ".txt")
types = self.catalogue.get_types("txt")
self.assertIn("text/plain", types)
self.assertIn("text/plain2", types)
def test_get_types_fails(self):
with self.assertRaises(KeyError):
self.catalogue.get_types("asdf")
def test_add_type(self):
self.empty_catalogue.add_type("text/plain", "txt")
self.assertIn("text", self.empty_catalogue._known_mediatypes)
self.assertIn("text/plain", self.empty_catalogue._known_mimetypes)
self.assertIn(".txt", self.empty_catalogue._known_extensions)
self.empty_catalogue.clear()
self.empty_catalogue.add_type("text/plain", ".txt")
self.assertIn("text", self.empty_catalogue._known_mediatypes)
self.assertIn("text/plain", self.empty_catalogue._known_mimetypes)
self.assertIn(".txt", self.empty_catalogue._known_extensions)
self.empty_catalogue.clear()
self.empty_catalogue.add_type("text/plain", [".txt"])
self.assertIn("text", self.empty_catalogue._known_mediatypes)
self.assertIn("text/plain", self.empty_catalogue._known_mimetypes)
self.assertIn(".txt", self.empty_catalogue._known_extensions)
def test_add_types_with_duplicate_extensions(self):
self.empty_catalogue.add_type("text/plain", "txt")
self.empty_catalogue.add_type("text/doc", "txt")
self.assertIn("text/plain", self.empty_catalogue._exts_to_types[".txt"])
self.assertIn("text/doc", self.empty_catalogue._exts_to_types[".txt"])
self.assertIn(".txt", self.empty_catalogue._types_to_exts["text/plain"])
self.assertIn(".txt", self.empty_catalogue._types_to_exts["text/doc"])
def test_add_type_fails(self):
with self.assertRaises(ValueError):
self.empty_catalogue.add_type("textplain", ".txt")
def test_canonicalize_extension(self):
ret = _canonicalize_extension("test")
self.assertEqual(ret, ".test")
ret = _canonicalize_extension(".test")
self.assertEqual(ret, ".test")
ret = _canonicalize_extension("")
self.assertEqual(ret, "")
ret = _canonicalize_extension(None)
self.assertIsNone(ret)
| 38.636015 | 80 | 0.671559 | 1,236 | 10,084 | 5.250809 | 0.182848 | 0.056857 | 0.113713 | 0.074422 | 0.587365 | 0.530663 | 0.478274 | 0.453467 | 0.417411 | 0.373035 | 0 | 0.008834 | 0.214201 | 10,084 | 260 | 81 | 38.784615 | 0.810197 | 0.013487 | 0 | 0.273171 | 0 | 0 | 0.204847 | 0.017096 | 0 | 0 | 0 | 0 | 0.317073 | 1 | 0.141463 | false | 0 | 0.019512 | 0 | 0.165854 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c45f38fe38fa77cbec0c1601f2b98424a73d9e6 | 17,404 | py | Python | query.py | emmanuelsalawu/tp-db | 5f495876cf68bf2f1158e3781c79118072e097be | [
"Apache-2.0"
] | null | null | null | query.py | emmanuelsalawu/tp-db | 5f495876cf68bf2f1158e3781c79118072e097be | [
"Apache-2.0"
] | null | null | null | query.py | emmanuelsalawu/tp-db | 5f495876cf68bf2f1158e3781c79118072e097be | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function
"""
Author: Emmanuel Salawu
Email: dr.emmanuel.salawu@gmail.com
Copyright 2016 Emmanuel Salawu
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys,os
import cPickle as cP, math
import subprocess as sbp
import numpy as np
import time
import argparse
# from Bio import SeqIO, Blast
# from Bio.Blast.Applications import NcbiblastnCommandline, NcbiblastpCommandline
# import Bio.Blast.NCBIXML as NCBIXML
d3_ = {'A': (2649921, 0.109), 'C': (284780, 0.012), 'E': (2037420, 0.084), 'D': (1266155, 0.052), 'G': (1127806, 0.046), 'F': (927803, 0.038), 'I': (1418628, 0.058), 'H': (535711, 0.022), 'K': (1547402, 0.064), 'M': (642753, 0.026), 'L': (2794992, 0.115), 'N': (916269, 0.038), 'Q': (1098361, 0.045), 'P': (509355, 0.021), 'S': (1298566, 0.053), 'R': (1427259, 0.059), 'T': (1180051, 0.049), 'W': (337622, 0.014), 'V': (1505317, 0.062), 'Y': (795511, 0.033)}
#Source http://web.expasy.org/docs/relnotes/relstat.html
rel_ab = {'A': 0.08259999999999999, 'C': 0.0137, 'E': 0.0674, 'D': 0.0546, 'G': 0.0708, 'F': 0.038599999999999995, 'I': 0.0593, 'H': 0.0227, 'K': 0.0582, 'M': 0.0241, 'L': 0.0965, 'N': 0.0406, 'Q': 0.0393, 'P': 0.0472, 'S': 0.0659, 'R': 0.0553, 'T': 0.053399999999999996, 'W': 0.0109, 'V': 0.0687, 'Y': 0.0292}
#Based on log_e math.log
d4_ = {key:(value[0], value[1], rel_ab[key], value[1]/rel_ab[key], math.log (value[1]/rel_ab[key])) for (key, value) in d3_.items()}
d4_rounded = {i: (d4_[i][0], d4_[i][1], round(d4_[i][2], 3), round(d4_[i][3], 3), round(d4_[i][4], 3),) for i in d4_}
needed_contacts = None
needed_dir = 'contacts'
compressed_hh_contacts = None
def scoreMatchedSeq (matchedSeq, contacts = [0], aaScore = d4_, divisor = 1.0):
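'Score a matched sequence: returns (normalised score, raw score, 100 - mean contact), where the raw score sums the per-residue aaScore values (helical propensity by default) and the normalised score divides that sum by divisor.'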
score = sum ([aaScore.get (aa, (0., 0.))[-1] for aa in matchedSeq])
normalizsedScore = score / (divisor or 1.0)
return round (normalizsedScore, 3), round (score, 3), 100.0 - np.mean (contacts)
def parseLine (line, lenOfPattern, matchedPattern):
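'Parse one raw match line (e.g. "4xr7_J_1_544_561 TTLLTDLGYLFDMMERSH 10 208869") into a result record: score tuple, (PDB id, chain, helix bounds), helix sequence, match offset, positions in the PDB, database index, matched pattern, matched subsequence and an HTML-highlighted sequence.'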
global needed_contacts
#print matchedPattern
split_line = line.split ()
split_0_line = split_line [0].split ('_')
start_of_full_helix, end_of_full_helix = int (split_0_line [3]), int (split_0_line [4])
seq = split_line [1]
start = int (split_line [2])
stop = start + lenOfPattern
index_in_db = int (split_line [3])
try:
if not needed_contacts:
needed_contacts = cP.load (open ('needed_contacts_%s.cP' % needed_dir, 'rb'))
contacts = needed_contacts [index_in_db][1] [start : stop]
except:
contacts = [0]
start_in_pdb = start_of_full_helix + start
stop_in_pdb = start_of_full_helix + stop
matchedSeq = seq [start : stop]
score = scoreMatchedSeq (matchedSeq, contacts)
return [score,
(split_0_line [0], split_0_line [1], 1, #int (split_0_line [2]),
start_of_full_helix, end_of_full_helix, ),
split_line [1], start, (start_in_pdb, stop_in_pdb - 1),
int (split_line [3]),
matchedPattern, matchedSeq,
highlight1 (split_line [1], start, stop, matchedPattern),
#, ,
]
def parseRawOutput (raw_output):
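'Split the raw search output into per-query segments on ">" headers, keeping only the non-empty, stripped lines of each segment.'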
raw_output_seg = [l for l in
[[k for k in [j.strip() for j in i.splitlines()] if k] for i in
raw_output.split ('>')] if l]
return raw_output_seg
def parseKey (key):
global split_key_list
key_proper = ''
split_key_1 = key.split ('_')
split_key_list = []
for index_1, item_1 in enumerate ( split_key_1 ):
index_1p = index_1 - 1
if item_1.isdigit():
split_key_2 = [item_1]
else:
split_key_2 = list (item_1)
for item_2 in split_key_2:
try:
split_key_list.append (int (item_2))
except:
split_key_list.append (item_2)
if len (item_1) > 1:
if not index_1: #len (split_key_list) < 6:
start = 0
else:
if (split_key_list [(index_1p * 2) + 1] + split_key_list [(index_1p * 2) + 3]) < split_key_list [(index_1p * 2) + 5]:
start = 1
else:
start = 3
key_proper += item_1 [start :]
return key_proper, split_key_list, effectiveLenOfKeyProper (key_proper)
#E4D3E_5_D3E3H
#04589 589
# 1 3
# 1 3 5 7 9 11
def parseRawOutputIntoNativeTypes (raw_output_seg):
raw_output_seg_native = []
for item in raw_output_seg:
raw_output_seg_native.append ([])
for lineId, line in enumerate (item):
if lineId == 0:
raw_output_seg_native [-1].append ( parseKey (line) )
else:
raw_output_seg_native [-1].append ( parseLine (line,
raw_output_seg_native [-1][0][2],
raw_output_seg_native [-1][0][0] ) )
return raw_output_seg_native
def effectiveLenOfKeyProper (key):
'F0D2E2H'
return sum ([int(i) if i.isdigit () else 1 for i in list (key)])
def simpleHighlight1 (seq, start, stop, tagBeg = '<span class="sh_1">', tagEnd = '</span>'):
return seq [:start] + tagBeg + seq [start : stop] + tagEnd + seq [stop:]
def simpleHighlight2 (seq, start, stop, tagBeg = '<span class="sh_1">', tagEnd = '</span>'):
return seq [:start] + tagBeg + seq [start : stop] + tagEnd + seq [stop:]
def breakSeq (seq, currentOffset=0, segLen=30):
pos = 0
lenSeq = len (seq)
if (lenSeq + currentOffset) < segLen:
return seq
output = ''
positions = range (-currentOffset, lenSeq, segLen)
#print positions
for index, pos in enumerate (positions [1:]):
output += seq [max (0, positions [index]): pos] + '<br/>'
output += seq [pos :]
return output
def detailedHighlightShortSeq (shortSeq, matchedPattern, tagBeg = '<span class="dh_1">', tagEnd = '</span>', start=0):
'4xr7_J_1_544_561 TTLLTDLGYLFDMMERSH 10 208869'
#global possitionsProcessed
#print shortSeq
#F0D2E2H FDMMERSH
# 01234567
neededStr = ''
aaAndNumbers = [int(i) if i.isdigit () else i for i in matchedPattern]
#['F', 0, 'D', 2, 'E', 2, 'H']
#<span >FD</span>MM<span >E</span>RS<span >H</span>
possitionsProcessed = 0; numOfBrAdded = (start // 30) + 1
for index, aaOrNum in enumerate (aaAndNumbers):
#print possitionsProcessed
if shortSeq and (index % 2):
if ((start + possitionsProcessed) > 30 * numOfBrAdded):
neededStr += '<br/>'
numOfBrAdded += 1
if (possitionsProcessed == 0) and shortSeq:
print (shortSeq)
print (possitionsProcessed)
neededStr += tagBeg + shortSeq [possitionsProcessed] + tagEnd
# elif ((start + possitionsProcessed) > 30 * numOfBrAdded):
# neededStr += '<br/>'
# numOfBrAdded += 1
neededStr += shortSeq [possitionsProcessed + 1 : possitionsProcessed + 1 + aaOrNum]
neededStr += tagBeg + shortSeq [possitionsProcessed + 1 + aaOrNum] + tagEnd
possitionsProcessed += 1 + aaOrNum
# if index % 2:
# if aaOrNum:
# neededStr += tagBeg + shortSeq [possitionsProcessed] + tagEnd
# neededStr += shortSeq [possitionsProcessed + 1 : possitionsProcessed + 1 + aaOrNum]
# neededStr += tagBeg + shortSeq [possitionsProcessed + 1 + aaOrNum] + tagEnd
# possitionsProcessed += 1 + aaOrNum
# else:
# neededStr += tagBeg + shortSeq [possitionsProcessed : possitionsProcessed + 2] + tagEnd
# possitionsProcessed += 2
neededStr = neededStr.replace (tagEnd + tagBeg, '')
return neededStr
def highlight1 (seq, start, stop, matchedPattern,
tagBegSimple = '<span class="sh_1">', tagEndSimple = '</span>',
tagBegDetailed = '<span class="dh_1">', tagEndDetailed = '</span>'):
return breakSeq (seq [:start], currentOffset=0, segLen=30) + tagBegSimple + \
detailedHighlightShortSeq (seq [start : stop], matchedPattern, tagBegDetailed, tagEndDetailed, start=start) \
+ tagEndSimple + breakSeq (seq [stop:], currentOffset=stop % 30, segLen=30)
def generateSortedResults (raw_output_seg_native):
results = []
for each_result in raw_output_seg_native:
results.extend (each_result [1:])
sorted_results = sorted (results, reverse=True)
return sorted_results
def process_output (raw_output, jobId):
#raw_output = sampleOutput
raw_output_seg = parseRawOutput (raw_output)
raw_output_seg_native = parseRawOutputIntoNativeTypes (raw_output_seg)
cP.dump (raw_output_seg_native, open ('%s.raw_output_seg_native' % jobId, 'wb'), -1)
sorted_results = generateSortedResults (raw_output_seg_native)
cP.dump (sorted_results, open ('%s.sorted_results' % jobId, 'wb'), -1)
return sorted_results, raw_output_seg_native
def genHtmlTable (sorted_results):
global compressed_hh_contacts
table_headers = ['#', 'U#', 'Matched<br/>Sequence (MS)', 'Matched<br/>Pattern', 'Full Helix (FH)', 'PDB ID: Chain',
' Positions in PDB <br/>(MS), (FH)', 'Helical<br/>Propensity', 'Contact', 'Interacting<br>Partners',
] #'View Helix in 3D', ]
output = '<tr>%s</tr>' % (''.join (['<th>%s</th>' % i for i in table_headers]), )
uniqueness = set ([])
for index, entry in enumerate (sorted_results):
seq = entry [7] # entry [2]
if seq in uniqueness:
unique = 'not_unique'
else:
uniqueness.add (seq)
unique = 'unique'
contact_ = '%.3f' % (100.0 - entry [0][2],) if (entry [0][2] != 100.0) else ''
helical_contact = ''
try:
if not compressed_hh_contacts:
compressed_hh_contacts = cP.load (open ('compressed_hh_contacts.cP' , 'rb'))
# PDB ID CHAIN start_end_pair
helical_contact_info = compressed_hh_contacts [entry [1][0]] [entry [1][1]] [(entry [1][3], entry [1][4])]
for hc_info in helical_contact_info:
helical_contact += "%(pdbid)s:%(chain)s (%(start)s, %(end)s)</br>" % \
{"pdbid": entry [1][0], "chain": hc_info [1], "start": hc_info [2], "end": hc_info [3]}
except:
pass
table_row = [str (index + 1), str (len (uniqueness)), entry [7], entry [6], entry [8],
'%s:%s'% (entry [1][0], entry [1][1]),
'%s, (%s, %s)' % (str (entry [4]), entry [1][3], entry [1][4], ),
'%.3f' % (entry [0][0],), contact_, helical_contact, # 'View Helix'
]
output += '<tr class="%s">%s</tr>' % (unique, ''.join (['<td>%s</td>' % i for i in table_row]), )
table_css = '''
.sh_1 {color: blue;}
.dh_1 {font-weight: 900;}
.monospace {font-family: "Courier New", Courier, monospace; font-size: 80%; }
td {font-size: 80%; }
tr:nth-child(2n) {
background-color:#F4F4F8;
}
tr:nth-child(2n+1) {
background-color:#EFF1F1;
}
'''
return '<style>%s</style><span class="monospace"><table id="results1">%s</table></span>' % (table_css,output)
def genViableNumbers (string):
if string.find ('/') != -1:
parts = string.split ('/')
firstPart = parts [0]
secPart = parts [1]
firstPart = [int (i) for i in firstPart.split (',')]
secPart = [int (i) for i in secPart.split (',')]
if len (firstPart) > 1:
firstPart [1] = firstPart [1] + 1
firstPart = range (*firstPart)
if len (secPart) > 1:
secPart [1] = secPart [1] + 1
secPart = range (*secPart)
needed_numbers = firstPart + secPart
else:
firstPart = [int (i) for i in string.split (',')]
if len (firstPart) > 1:
firstPart [1] = firstPart [1] + 1
firstPart = range (*firstPart)
needed_numbers = firstPart
return needed_numbers
def genViableAlphabets (string):
return string.split ('/')
def genSubQueries (list_of_lists): # [[A, D], [1, 2, 3], [E, f]]
sub_queries = []
for itemI in list_of_lists [0]:
for itemJ in list_of_lists [1]:
for itemK in list_of_lists [2]:
sub_queries.append ('%s%s%s' % (itemI, itemJ, itemK))
return sub_queries
def genViableQueries (needed_sub_queries): # [['A0S', 'A2S', 'A4S'], ['S2L', 'S3L'], ['A3A']]
num_levels = len (needed_sub_queries)
viable_queries = []
for itemsL0 in needed_sub_queries [0]:
if num_levels > 1:
for itemsL1 in needed_sub_queries [1]:
if num_levels > 2:
for itemsL2 in needed_sub_queries [2]:
if num_levels > 3:
for itemsL3 in needed_sub_queries [3]:
#viable_queries.append (itemsL0 + '_' + itemsL1 + '_' + itemsL2 + '_' + itemsL3)
viable_queries.append (itemsL0 + itemsL1 [1:] + '_' + str (1 + int (itemsL0 [1:-1]) + 1 + int (itemsL1 [1:-1])) + '_' + itemsL2 [:-1] + itemsL3)
else:
#viable_queries.append (itemsL0 + '_' + itemsL1 + '_' + itemsL2)
#viable_queries.append (itemsL0 + itemsL1 [1:-1] + itemsL2)
viable_queries.append (itemsL0 + itemsL1 [1:] + '_' + str (1 + int (itemsL0 [1:-1])) + '_' + itemsL1 [:-1] + itemsL2)
else:
#viable_queries.append (itemsL0 + '_' + itemsL1)
viable_queries.append (itemsL0 + itemsL1 [1:])
else:
viable_queries.append (itemsL0)
return viable_queries
def processQuery (query):
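'Expand a whitespace-separated query of alternating residue and spacing fields ("/" separates alternatives, "a,b" is an inclusive range) into the list of all concrete sub-query patterns.'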
query_list = query.split ()
expanded_subset = []
for index, subset in enumerate (query_list):
if index % 2 == 0:
expanded_subset.append (genViableAlphabets (subset))
else:
expanded_subset.append (genViableNumbers (subset))
needed_sub_queries = []
for index in range (0, len (expanded_subset) - 2, 2):
needed_sub_queries.append (genSubQueries (expanded_subset [index : index + 3]))
viable_queries = genViableQueries (needed_sub_queries)
return viable_queries
def genFastaForQuery (viable_queries, outputFileName):
fasta = ''
for queryIndex, viable_query in enumerate (viable_queries):
if queryIndex == 0:
fasta += '>%s %s\n%s\n' % (viable_query.replace ('_', ' '), outputFileName, viable_query)
else:
fasta += '>%s\n%s\n' % (viable_query.replace ('_', ' '), viable_query)
return fasta
def mainQueryDb (query, outputFileName = 'outputFileName'):
with open ('%s.submitted' % (outputFileName,), 'w') as submittedQuery:
submittedQuery.write (query)
viable_queries = processQuery (query)
fasta_query = genFastaForQuery (viable_queries, outputFileName)
with open ('query.fasta', 'w') as query_file: query_file.write (fasta_query)
sbp.call ('cp query.fasta pending_jobs', shell=True)
def query(example_query,outputFileName):
raw_result_file = '%s.output' % outputFileName
while os.path.exists(raw_result_file):
os.remove(raw_result_file)
mainQueryDb(example_query,outputFileName = outputFileName)
i = 0
while not os.path.exists(raw_result_file):
i +=1
status = '.' * (i%4)
print( 'running%-3s'% status, end='\r')
sys.stdout.flush()
time.sleep(0.3)
raw_result = open (raw_result_file).read ()
sorted_results, raw_output_seg_native = process_output(raw_result,outputFileName)
html_table = genHtmlTable (sorted_results)
with open('%s_table.html'%outputFileName,'w') as f:
f.write(html_table)
print('The output file is %s_table.html'%outputFileName)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--query', help='The query string of TP-DB')
parser.add_argument('--output', help='The output prefix')
args = parser.parse_args()
query(args.query,args.output)
# from query import query
# query('asdfasdfasdf','sdfsdsa')
# query('asdfasdfasdf1','sdfsdsa1')
# query('asdfasdfasdf2','sdfsdsa2')
# query('asdfasdfasdf3','sdfsdsa3')
| 38 | 458 | 0.579924 | 2,099 | 17,404 | 4.635541 | 0.213911 | 0.024049 | 0.025899 | 0.027749 | 0.20853 | 0.173279 | 0.106269 | 0.090647 | 0.071737 | 0.071737 | 0 | 0.058569 | 0.283843 | 17,404 | 457 | 459 | 38.083151 | 0.72208 | 0.114169 | 0 | 0.09375 | 0 | 0.010417 | 0.085542 | 0.016403 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076389 | false | 0.003472 | 0.024306 | 0.013889 | 0.173611 | 0.017361 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c46184c6fec988857f84fb1a16cfb0fcdf021e8 | 1,745 | py | Python | tunepy2/optimizers/optimizer_basic.py | efortner/tunepy | 28ab7aa0b851d42cf2a81a5573fb24b261daba89 | [
"MIT"
] | null | null | null | tunepy2/optimizers/optimizer_basic.py | efortner/tunepy | 28ab7aa0b851d42cf2a81a5573fb24b261daba89 | [
"MIT"
] | null | null | null | tunepy2/optimizers/optimizer_basic.py | efortner/tunepy | 28ab7aa0b851d42cf2a81a5573fb24b261daba89 | [
"MIT"
] | null | null | null | from tunepy2 import Genome
from tunepy2.interfaces import AbstractOptimizer, AbstractGenomeFactory, AbstractConvergenceCriterion
class BasicOptimizer(AbstractOptimizer):
"""
A very simple optimizer that builds new Genomes until convergence is satisfied.
"""
def __init__(
self,
initial_candidate: Genome,
genome_factory: AbstractGenomeFactory,
convergence_criterion: AbstractConvergenceCriterion):
"""
Creates a new BasicOptimizer.
:param initial_candidate: seed Genome object
:param genome_factory: creates new Genome objects
:param convergence_criterion: will declare convergence once criterion is satisfied
"""
self._candidate = initial_candidate
self._genome_factory = genome_factory
self._convergence_criterion = convergence_criterion
self._converged = False
def next(self):
"""
Performs the next iteration of optimization.
"""
old_candidate = self._candidate
new_candidate = self._genome_factory.build([old_candidate])
new_candidate.run()
if new_candidate.fitness > old_candidate.fitness:
self._candidate = new_candidate
self._converged = self._convergence_criterion.converged(old_candidate, new_candidate)
@property
def converged(self) -> bool:
"""
Whether or not this algorithm has converged
:return: true when this algorithm has converged or false if not
"""
return self._converged
@property
def best_genome(self) -> Genome:
"""
The best genome so far
:return: a Genome instance
"""
return self._candidate
| 31.160714 | 101 | 0.66533 | 171 | 1,745 | 6.567251 | 0.380117 | 0.057881 | 0.0748 | 0.046305 | 0.051647 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001576 | 0.272779 | 1,745 | 55 | 102 | 31.727273 | 0.883373 | 0.282521 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16 | false | 0 | 0.08 | 0 | 0.36 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c499791997ea36839195aeb1485030e5ba2db60 | 4,759 | py | Python | pygrn/grns/gpu.py | nico1as/pyGRN | 115d9d42dfbd374fc64393cabefb2a8e245aa6b7 | [
"Apache-2.0"
] | 7 | 2018-07-18T16:08:51.000Z | 2020-12-09T07:18:35.000Z | pygrn/grns/gpu.py | nico1as/pyGRN | 115d9d42dfbd374fc64393cabefb2a8e245aa6b7 | [
"Apache-2.0"
] | 3 | 2018-04-13T11:44:59.000Z | 2018-04-19T13:58:06.000Z | pygrn/grns/gpu.py | nico1as/pyGRN | 115d9d42dfbd374fc64393cabefb2a8e245aa6b7 | [
"Apache-2.0"
] | 6 | 2018-07-22T01:54:14.000Z | 2021-08-04T16:01:38.000Z | from copy import deepcopy
from .classic import ClassicGRN
import numpy as np
import tensorflow as tf
class GPUGRN(ClassicGRN):
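"""A GRN that mirrors ClassicGRN but stores the input/output/regulatory concentrations as TensorFlow tensors and performs the concentration-update step with TensorFlow ops (e.g. on a GPU)."""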
def __init__(self):
pass
def reset(self):
self.concentration = np.ones(
len(self.identifiers)) * (1.0/len(self.identifiers))
self.tf_input_conc = tf.convert_to_tensor(
self.concentration[0:self.num_input], dtype=tf.float32)
self.tf_output_conc = tf.convert_to_tensor(
self.concentration[self.num_input:(self.num_input +
self.num_output)],
dtype=tf.float32)
self.tf_regulatory_conc = tf.convert_to_tensor(
self.concentration[self.num_input+self.num_output:],
dtype=tf.float32)
return self
def warmup(self, nsteps):
self.concentration[0:self.num_input] = np.zeros(self.num_input)
for i in range(nsteps):
super(GPUGRN, self).step()
self.tf_input_conc = tf.convert_to_tensor(
self.concentration[0:self.num_input], dtype=tf.float32)
self.tf_output_conc = tf.convert_to_tensor(
self.concentration[self.num_input:(self.num_input +
self.num_output)],
dtype=tf.float32)
self.tf_regulatory_conc = tf.convert_to_tensor(
self.concentration[self.num_input+self.num_output:],
dtype=tf.float32)
def setup(self):
super(GPUGRN, self).setup()
self.length = self.num_input + self.num_output + self.num_regulatory
self.tf_input_conc = tf.convert_to_tensor(
self.concentration[0:self.num_input], dtype=tf.float32)
self.tf_output_conc = tf.convert_to_tensor(
self.concentration[self.num_input:(self.num_input +
self.num_output)],
dtype=tf.float32)
self.tf_regulatory_conc = tf.convert_to_tensor(
self.concentration[self.num_input+self.num_output:],
dtype=tf.float32)
self.tf_sigs = tf.convert_to_tensor(self.enhance_match -
self.inhibit_match,
dtype=tf.float32)
self.tf_beta = tf.convert_to_tensor(self.beta, dtype=tf.float32)
self.tf_delta_n = tf.convert_to_tensor(self.delta/self.length,
dtype=tf.float32)
self.tf_output_mask = tf.convert_to_tensor(
np.concatenate((np.ones(self.num_input),
np.zeros(self.num_output),
np.ones(self.num_regulatory))),
dtype=tf.float32)
def get_signatures(self):
with tf.Session() as s:
return s.run(self.tf_sigs)
def get_concentrations(self):
with tf.Session() as s:
return s.run(tf.concat([self.tf_input_conc,
self.tf_output_conc,
self.tf_regulatory_conc], 0))
def set_input(self, input_t):
inp_concs = tf.convert_to_tensor(input_t, dtype=tf.float32)
self.tf_input_conc = inp_concs
def step(self):
concs = tf.concat([self.tf_input_conc, self.tf_output_conc,
self.tf_regulatory_conc], 0)
conc_diff = tf.multiply(concs, self.tf_output_mask)
conc_diff = tf.reshape(conc_diff, [1, self.length])
conc_diff = tf.matmul(conc_diff, self.tf_sigs)
conc_diff = tf.multiply(self.tf_delta_n, conc_diff)
concs = tf.add(concs, conc_diff)
concs = tf.maximum(0.0, concs)
concs = tf.reshape(concs, [self.length])
_, regs = tf.split(concs, [self.num_input,
self.num_regulatory+self.num_output])
sumconcs = tf.reduce_sum(regs)
concs = tf.cond(tf.greater(sumconcs, 0),
lambda: tf.div(concs, sumconcs), lambda: concs)
_, self.tf_output_conc, self.tf_regulatory_conc = tf.split(
concs, [self.num_input, self.num_output, self.num_regulatory])
def get_output_tensor(self):
return self.tf_output_conc
def get_output(self):
with tf.Session() as s:
return s.run(self.get_output_tensor())
def clone(self):
g = GPUGRN()
g.identifiers = deepcopy(self.identifiers)
g.enhancers = deepcopy(self.enhancers)
g.inhibitors = deepcopy(self.inhibitors)
g.beta = deepcopy(self.beta)
g.delta = deepcopy(self.delta)
g.num_input = deepcopy(self.num_input)
g.num_output = deepcopy(self.num_output)
g.num_regulatory = deepcopy(self.num_regulatory)
return g
| 41.382609 | 76 | 0.59046 | 597 | 4,759 | 4.463987 | 0.144054 | 0.091932 | 0.085553 | 0.089306 | 0.531332 | 0.485178 | 0.468668 | 0.449156 | 0.419512 | 0.376735 | 0 | 0.012162 | 0.308888 | 4,759 | 114 | 77 | 41.745614 | 0.798115 | 0 | 0 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11 | false | 0.01 | 0.04 | 0.01 | 0.22 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
5c49b9a646a94f624602a019444beba6bd5601aa | 10,892 | py | Python | experiments/classification_experiments.py | utkarsh512/Ad-hominem-fallacies | 7234173726a359e80492b2919e40ea1a9a0119d1 | [
"Apache-2.0"
] | null | null | null | experiments/classification_experiments.py | utkarsh512/Ad-hominem-fallacies | 7234173726a359e80492b2919e40ea1a9a0119d1 | [
"Apache-2.0"
] | null | null | null | experiments/classification_experiments.py | utkarsh512/Ad-hominem-fallacies | 7234173726a359e80492b2919e40ea1a9a0119d1 | [
"Apache-2.0"
] | null | null | null | # modified for custom training and testing on GPU by Utkarsh Patel
from classifiers import AbstractTokenizedDocumentClassifier
from embeddings import WordEmbeddings
from nnclassifiers import StackedLSTMTokenizedDocumentClassifier, CNNTokenizedDocumentClassifier
from nnclassifiers_experimental import StructuredSelfAttentiveSentenceEmbedding
from readers import JSONPerLineDocumentReader, AHVersusDeltaThreadReader
from tcframework import LabeledTokenizedDocumentReader, AbstractEvaluator, Fold, TokenizedDocumentReader, \
TokenizedDocument, ClassificationEvaluator
from comment import Comment
from vocabulary import Vocabulary
import argparse, os
import numpy as np
import pickle
class ClassificationExperiment:
def __init__(self, labeled_document_reader: LabeledTokenizedDocumentReader,
classifier: AbstractTokenizedDocumentClassifier, evaluator: AbstractEvaluator):
self.reader = labeled_document_reader
self.classifier = classifier
self.evaluator = evaluator
def run(self) -> None:
__folds = self.reader.get_folds()
for i, fold in enumerate(__folds, start=1):
assert isinstance(fold, Fold)
assert fold.train and fold.test
print("Running fold %d/%d" % (i, len(__folds)))
self.classifier.train(fold.train)
predicted_labels = self.classifier.test(fold.test, fold_no=i)
self.evaluate_fold(fold.test, predicted_labels)
print("Evaluating after %d folds" % i)
self.evaluator.evaluate()
print("Final evaluation; reader.input_path_train was %s" % self.reader.input_path_train)
self.evaluator.evaluate()
def evaluate_fold(self, labeled_document_instances: list, predicted_labels: list):
assert labeled_document_instances
assert len(predicted_labels)
assert len(labeled_document_instances) == len(predicted_labels), "Prediction size mismatch"
assert isinstance(labeled_document_instances[0].label, type(predicted_labels[0]))
# collect the gold labels (still as strings)
all_gold_labels = [doc.label for doc in labeled_document_instances]
# collect IDs
ids = [doc.id for doc in labeled_document_instances]
self.evaluator.add_single_fold_results(all_gold_labels, predicted_labels, ids)
def label_external(self, document_reader: TokenizedDocumentReader) -> dict:
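'''Train the classifier on the reader's training split (without validation) and label
the documents from document_reader.
:return: dict mapping document id to (predicted label, class probabilities)'''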
self.classifier.train(self.reader.train, validation=False)
instances = document_reader.instances
predictions, probs = self.classifier.test(instances)
probs = list(probs)
result = dict()
for instance, prediction, prob in zip(instances, predictions, probs):
assert isinstance(instance, TokenizedDocument)
# assert isinstance(prediction, float)
# get id and put the label to the resulting dictionary
cur_text = ' '.join(instance.tokens)
result[instance.id] = (prediction, prob)
return result
def cross_validation_ah(model_type):
# classification without context
import random
random.seed(1234567)
import tensorflow as tf
if tf.test.is_gpu_available():
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('CPU not recommended.')
with strategy.scope():
vocabulary = Vocabulary.deserialize('en-top100k.vocabulary.pkl.gz')
embeddings = WordEmbeddings.deserialize('en-top100k.embeddings.pkl.gz')
reader = JSONPerLineDocumentReader('data/experiments/ah-classification1/exported-3621-sampled-positive-negative-ah-no-context.json', True)
e = None
if model_type == 'cnn':
e = ClassificationExperiment(reader, CNNTokenizedDocumentClassifier(vocabulary, embeddings), ClassificationEvaluator())
else:
e = ClassificationExperiment(reader, StackedLSTMTokenizedDocumentClassifier(vocabulary, embeddings), ClassificationEvaluator())
e.run()
def cross_validation_thread_ah_delta_context3():
# classification with context
import random
random.seed(1234567)
import tensorflow as tf
if tf.test.is_gpu_available():
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('CPU not recommended.')
with strategy.scope():
vocabulary = Vocabulary.deserialize('en-top100k.vocabulary.pkl.gz')
embeddings = WordEmbeddings.deserialize('en-top100k.embeddings.pkl.gz')
reader = AHVersusDeltaThreadReader('data/sampled-threads-ah-delta-context3', True)
e = ClassificationExperiment(reader, StructuredSelfAttentiveSentenceEmbedding(vocabulary, embeddings, '/tmp/visualization-context3'), ClassificationEvaluator())
e.run()
def train_test_model_with_context(train_dir, indir, outdir):
'''Custom training and testing SSAE model
:param train_dir: Path to JSON file containing training examples
:param indir: Path to LOG file containing examples as Comment() object (which has already been classified by Bert)
:param outdir: Path to LOG file to be created by adding prediction of this model as well'''
import random
random.seed(1234567)
import tensorflow as tf
if tf.test.is_gpu_available():
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('CPU not recommended.')
with strategy.scope():
vocabulary = Vocabulary.deserialize('en-top100k.vocabulary.pkl.gz')
embeddings = WordEmbeddings.deserialize('en-top100k.embeddings.pkl.gz')
reader = JSONPerLineDocumentReader(train_dir, True)
e = ClassificationExperiment(reader, StructuredSelfAttentiveSentenceEmbedding(vocabulary, embeddings), ClassificationEvaluator())
test_comments = TokenizedDocumentReader(indir)
result = e.label_external(test_comments)
for k in result.keys():
print(f'{k}: {result[k]}')
instances = dict()
e = Comment(-1, 'lol', 'ah')
f = open(indir, 'rb')
try:
while True:
e = pickle.load(f)
print(e)
instances[str(e.id)] = e
except EOFError:
f.close()
f = open(outdir, 'wb')
for key in result.keys():
model_label, model_score = result[key]
model_label = model_label.lower()
score = model_score[1]
if model_label == 'none':
score = model_score[0]
instances[key].add_model('ssase', model_label, score, None)  # this function always uses the SSAE ('ssase') context model
e = instances[key]
print(e)
print(e.labels)
print(e.scores)
print('=' * 20)
pickle.dump(instances[key], f)
f.close()
def train_test_model_no_context(model_type, train_dir, indir, outdir):
# Training and testing CNN / BiLSTM model on custom data
# :param train_dir: Path to JSON file containing training examples
# :param indir: Path to LOG file containing examples as Comment() object (which has already been classified by Bert)
# :param outdir: Path to LOG file to be created by adding prediction of this model as well
import random
random.seed(1234567)
import tensorflow as tf
if tf.test.is_gpu_available():
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('CPU not recommended.')
with strategy.scope():
vocabulary = Vocabulary.deserialize('en-top100k.vocabulary.pkl.gz')
embeddings = WordEmbeddings.deserialize('en-top100k.embeddings.pkl.gz')
reader = JSONPerLineDocumentReader(train_dir, True)
e = None
if model_type == 'cnn':
e = ClassificationExperiment(reader, CNNTokenizedDocumentClassifier(vocabulary, embeddings), ClassificationEvaluator())
else:
e = ClassificationExperiment(reader, StackedLSTMTokenizedDocumentClassifier(vocabulary, embeddings), ClassificationEvaluator())
# e.run()
test_comments = TokenizedDocumentReader(indir)
result = e.label_external(test_comments)
for k in result.keys():
print(f'{k}: {result[k]}')
instances = dict()
e = Comment(-1, 'lol', 'ah')
f = open(indir, 'rb')
try:
while True:
e = pickle.load(f)
print(e)
instances[str(e.id)] = e
except EOFError:
f.close()
f = open(outdir, 'wb')
for key in result.keys():
model_label, model_score = result[key]
model_label = model_label.lower()
score = model_score[1]
if model_label == 'none':
score = model_score[0]
instances[key].add_model(model_type, model_label, score, None)
e = instances[key]
print(e)
print(e.labels)
print(e.scores)
print('=' * 20)
pickle.dump(instances[key], f)
f.close()
def main3():
# Custom training and testing for context-model (SSAE)
parser = argparse.ArgumentParser()
parser.add_argument("--train_dir", default=None, type=str, required=True, help="Path to JSON file containing training examples")
parser.add_argument("--indir", default=None, type=str, required=True, help="Path to LOG file containing examples as Comment() object (which has already been classified by Bert)")
parser.add_argument("--outdir", default=None, type=str, required=True, help="Path to LOG file to be created by adding prediction of this model as well")
args = parser.parse_args()
train_test_model_with_context(args.train_dir, args.indir, args.outdir)
def main2():
# Custom training and testing for no-context models
parser = argparse.ArgumentParser()
parser.add_argument("--model", default=None, type=str, required=True, help="Model used for classification")
parser.add_argument("--train_dir", default=None, type=str, required=True, help="Path to JSON file containing training examples")
parser.add_argument("--indir", default=None, type=str, required=True, help="Path to LOG file containing examples as Comment() object (which has already been classified by Bert)")
parser.add_argument("--outdir", default=None, type=str, required=True, help="Path to LOG file to be created by adding prediction of this model as well")
args = parser.parse_args()
train_test_model_no_context(args.model, args.train_dir, args.indir, args.outdir)
def main():
# For supervised learning task (with or without context) as described in the paper
parser = argparse.ArgumentParser()
parser.add_argument("--model", default=None, type=str, required=True, help="Model used for classification")
args = parser.parse_args()
if args.model == 'ssase':
cross_validation_thread_ah_delta_context3()
else:
cross_validation_ah(args.model)
if __name__ == '__main__':
# main()
main2()
| 40.340741 | 182 | 0.689772 | 1,249 | 10,892 | 5.895116 | 0.18735 | 0.009779 | 0.02173 | 0.014125 | 0.63466 | 0.61809 | 0.595817 | 0.570691 | 0.561456 | 0.561456 | 0 | 0.009006 | 0.21502 | 10,892 | 269 | 183 | 40.490706 | 0.852164 | 0.099431 | 0 | 0.668342 | 0 | 0.005025 | 0.128478 | 0.04153 | 0 | 0 | 0 | 0 | 0.035176 | 1 | 0.055276 | false | 0 | 0.095477 | 0 | 0.160804 | 0.095477 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3083adebeba13826315956aee8985f1e8da7705d | 1,550 | py | Python | book_to_wordforms.py | timokoola/suomicollectorscripts | b2e52047fcc14116058e4673052f142a5861e00a | [
"Apache-2.0"
] | null | null | null | book_to_wordforms.py | timokoola/suomicollectorscripts | b2e52047fcc14116058e4673052f142a5861e00a | [
"Apache-2.0"
] | null | null | null | book_to_wordforms.py | timokoola/suomicollectorscripts | b2e52047fcc14116058e4673052f142a5861e00a | [
"Apache-2.0"
] | null | null | null | import requests, libvoikko, json, collections, sys
if len(sys.argv) > 1:
filename = sys.argv[1]
else:
print("Need a url that points to a text file")
sys.exit(1)
r = requests.get(filename)
normalized = r.text.split()
v = libvoikko.Voikko("fi")
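# morphologically analyse every whitespace-separated token with Voikko; keep only tokens Voikko recognises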
word_forms = [
(word, v.analyze(word)) for word in normalized if len(v.analyze(word)) > 0
]
flat_words = []
for item in word_forms:
word = item[0]
for i in item[1]:
flat_words.append({"BOOKWORD": word.lower(), **i})
f = open("kotus_all.json")
kotus = json.loads(f.read())
f.close()
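# cross-reference the book's baseforms against the Kotus word list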
book_bw = set([w["BASEFORM"] for w in flat_words])
kotus_w = set([w["word"] for w in kotus])
kotus_dict = dict([(x["word"], x) for x in kotus])
results = []
for bw in flat_words:
baseform = bw["BASEFORM"]
if baseform in kotus_dict:
results.append({**kotus_dict[baseform], **bw})
summary = collections.Counter(
[
(w["tn"], w["av"], w["SIJAMUOTO"], w["NUMBER"])
for w in results
if "SIJAMUOTO" in w and "NUMBER" in w
]
)
# utils
def query(tn, av, list_):
return sorted(
list(set([x["BOOKWORD"] for x in list_ if x["tn"] == tn and x["av"] == av]))
)
def queryform(form, list_):
return sorted(
list(
set(
[
x["BOOKWORD"]
for x in list_
if "SIJAMUOTO" in x and x["SIJAMUOTO"] == form
]
)
)
)
f = open("output.json", "w+")
f.write(json.dumps(results))
f.close()
print(summary) | 20.666667 | 84 | 0.556129 | 223 | 1,550 | 3.793722 | 0.336323 | 0.042553 | 0.021277 | 0.047281 | 0.104019 | 0.104019 | 0.104019 | 0.104019 | 0.104019 | 0.104019 | 0 | 0.005401 | 0.283226 | 1,550 | 75 | 85 | 20.666667 | 0.756076 | 0.003226 | 0 | 0.072727 | 0 | 0 | 0.110104 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036364 | false | 0 | 0.018182 | 0.036364 | 0.090909 | 0.036364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3084f8f882e9fb444232a3f9dc40223a7c80dd37 | 1,648 | py | Python | ellipticcurve/signature.py | zohaibd4l/ecdsa-python | 7e4d78a8d1d90bed2ae974f7ba90f98d29f87cac | [
"MIT"
] | 76 | 2018-09-02T17:04:41.000Z | 2022-03-23T08:06:57.000Z | ellipticcurve/signature.py | zohaibd4l/ecdsa-python | 7e4d78a8d1d90bed2ae974f7ba90f98d29f87cac | [
"MIT"
] | 11 | 2018-12-28T16:30:05.000Z | 2022-01-15T23:32:31.000Z | ellipticcurve/signature.py | zohaibd4l/ecdsa-python | 7e4d78a8d1d90bed2ae974f7ba90f98d29f87cac | [
"MIT"
] | 21 | 2019-01-15T23:08:35.000Z | 2022-01-04T15:41:10.000Z | from .utils.compatibility import *
from .utils.der import parse, encodeConstructed, encodePrimitive, DerFieldType
from .utils.binary import hexFromByteString, byteStringFromHex, base64FromByteString, byteStringFromBase64
class Signature:
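"""An ECDSA signature: the (r, s) pair plus an optional recovery id, with helpers for
DER and base64 encoding/decoding."""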
def __init__(self, r, s, recoveryId=None):
self.r = r
self.s = s
self.recoveryId = recoveryId
def toDer(self, withRecoveryId=False):
hexadecimal = self._toString()
encodedSequence = byteStringFromHex(hexadecimal)
if not withRecoveryId:
return encodedSequence
return toBytes(chr(27 + self.recoveryId)) + encodedSequence
def toBase64(self, withRecoveryId=False):
return base64FromByteString(self.toDer(withRecoveryId))
@classmethod
def fromDer(cls, string, recoveryByte=False):
recoveryId = None
if recoveryByte:
recoveryId = string[0] if isinstance(string[0], intTypes) else ord(string[0])
recoveryId -= 27
string = string[1:]
hexadecimal = hexFromByteString(string)
return cls._fromString(string=hexadecimal, recoveryId=recoveryId)
@classmethod
def fromBase64(cls, string, recoveryByte=False):
der = byteStringFromBase64(string)
return cls.fromDer(der, recoveryByte)
def _toString(self):
return encodeConstructed(
encodePrimitive(DerFieldType.integer, self.r),
encodePrimitive(DerFieldType.integer, self.s),
)
@classmethod
def _fromString(cls, string, recoveryId=None):
r, s = parse(string)[0]
return Signature(r=r, s=s, recoveryId=recoveryId)
| 33.632653 | 106 | 0.677184 | 159 | 1,648 | 6.968553 | 0.314465 | 0.025271 | 0.079422 | 0.046931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016667 | 0.235437 | 1,648 | 48 | 107 | 34.333333 | 0.862698 | 0 | 0 | 0.078947 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.184211 | false | 0 | 0.078947 | 0.052632 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3086472fb9e3853923fb7e30229a986a449fd0f4 | 1,691 | py | Python | showroompodcast/showroom_podcast.py | road-master/showroom-podcast | 71221b361cabd40e70aa4e8710292b20398e0ab5 | [
"MIT"
] | null | null | null | showroompodcast/showroom_podcast.py | road-master/showroom-podcast | 71221b361cabd40e70aa4e8710292b20398e0ab5 | [
"MIT"
] | 1 | 2021-08-22T07:35:13.000Z | 2021-08-22T07:35:13.000Z | showroompodcast/showroom_podcast.py | road-master/showroom-podcast | 71221b361cabd40e70aa4e8710292b20398e0ab5 | [
"MIT"
] | null | null | null | """Main module."""
import asyncio
import logging
from pathlib import Path
from asynccpu import ProcessTaskPoolExecutor
from showroompodcast import CONFIG
from showroompodcast.archiving_task_manager import ArchivingTaskManager
from showroompodcast.showroom_archiver import TIME_TO_FORCE_TARMINATION, ShowroomArchiver
from showroompodcast.showroom_poller import ShowroomPoller
from showroompodcast.slack.slack_client import SlackNotification
class ShowroomPodcast:
"""Main class."""
def __init__(
self, *, path_to_configuraion: Path = None, time_to_force_termination: int = TIME_TO_FORCE_TARMINATION
) -> None:
logging.basicConfig(level=logging.DEBUG)
CONFIG.load(path_to_configuraion)
self.showroom_archiver = ShowroomArchiver(time_to_force_termination=time_to_force_termination)
self.archiving_task_manager = ArchivingTaskManager(CONFIG.list_room_id)
self.logger = logging.getLogger(__name__)
def run(self):
"""Runs"""
try:
asyncio.run(self.archive_repeatedly())
except Exception as error:
self.logger.exception(error)
if CONFIG.slack.bot_token is not None and CONFIG.slack.channel is not None:
SlackNotification(CONFIG.slack.bot_token, CONFIG.slack.channel).post_error(error)
raise error
async def archive_repeatedly(self):
with ProcessTaskPoolExecutor(max_workers=CONFIG.number_process, cancel_tasks_when_shutdown=True) as executor:
showroom_poller = ShowroomPoller(self.showroom_archiver, executor)
while True:
await self.archiving_task_manager.poll_all_rooms(showroom_poller)
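# Hypothetical entry point: the configuration path is made up, and the keyword
# intentionally matches the (misspelled) parameter name used in __init__ above.
if __name__ == "__main__":
    ShowroomPodcast(path_to_configuraion=Path("config.yml")).run()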
| 40.261905 | 117 | 0.746304 | 190 | 1,691 | 6.357895 | 0.426316 | 0.078642 | 0.04553 | 0.054636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186872 | 1,691 | 41 | 118 | 41.243902 | 0.878545 | 0.01715 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0 | 0.290323 | 0 | 0.387097 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
308a170bee23a8817d6391c877b12dec983c3913 | 3,239 | py | Python | utils/HttpTools.py | atChenAn/SimpleJmeter | 34c11c3d325a4633adb6324758abb2023e05fac1 | [
"MIT"
] | null | null | null | utils/HttpTools.py | atChenAn/SimpleJmeter | 34c11c3d325a4633adb6324758abb2023e05fac1 | [
"MIT"
] | null | null | null | utils/HttpTools.py | atChenAn/SimpleJmeter | 34c11c3d325a4633adb6324758abb2023e05fac1 | [
"MIT"
] | null | null | null | # coding=utf-8
import requests
from PyQt5.QtCore import QThread, pyqtSignal
from urllib import request, parse
from utils import LogTools
sysLog = LogTools.SysLogs()
class Http:
signal = None # the parentheses hold the parameter type carried by the signal
# init: initialization section
def __init__(self):
self.signal = pyqtSignal(object)
# ============================================ Interceptor-related code ============================================ #
# Add an interceptor
def __addInterceptor(self, interceptor, fn):
'''
Private method: add an interceptor callback.
:param interceptor: the interceptor collection to operate on
:param fn: the interceptor callback to add
:return: None
'''
interceptor.append(fn)
return interceptor.__len__() - 1
# Remove an interceptor
def __removeInterceptor(self, interceptor, index):
'''
Remove an existing interceptor.
:param interceptor: the interceptor collection to operate on
:param index: either the index returned when the interceptor was added, or the interceptor callback itself
:return: None
'''
try:
if isinstance(index, int):
if index < interceptor.__len__():
interceptor.remove(interceptor[index])
else:
interceptor.remove(index)
except Exception:
print('Failed to remove interceptor because it could not be found')
# Add a request interceptor
def addRequestInterceptor(self, fn):
'''
Add a request interceptor; it will be called before a request is sent.
:param fn: the request interceptor callback
:return:
'''
return self.__addInterceptor(self.requestInterceptor, fn)
# Remove a request interceptor
def removeRequestInterceptor(self, index):
'''
Remove a request interceptor.
:param index: the index or the interceptor callback itself
:return:
'''
self.__removeInterceptor(self.requestInterceptor, index)
# Add a response interceptor
def addResponseInterceptor(self, fn):
'''
Add a response interceptor; it will be called after a response is received.
:param fn: the response interceptor callback
:return:
'''
return self.__addInterceptor(self.responseInterceptor, fn)
# Remove a response interceptor
def removeResponseInterceptor(self, index):
'''
Remove a response interceptor.
:param index: the index or the interceptor callback itself
:return:
'''
self.__removeInterceptor(self.responseInterceptor, index)
# ============================================ Interceptor-related code ============================================ #
def http_get(self, path, callback, params={}):
# create a thread
# start the thread
queryStr = '?%s' % parse.urlencode(params)
# if there are no query parameters, clear queryStr
if queryStr == '?':
queryStr = ''
try:
response = requests.get('https://' + path + queryStr)
response.encoding = 'utf-8'
buffer = response.text
return buffer
except Exception as e:
sysLog.warn('Failed to fetch data: ' + e.__str__())
# request interceptors (static member)
Http.requestInterceptor = []
# response interceptors (static member)
Http.responseInterceptor = []
def http_get(path, params={}):
queryStr = '?%s' % parse.urlencode(params)
# if there are no query parameters, clear queryStr
if queryStr == '?':
queryStr = ''
try:
response = requests.get('http://' + path + queryStr)
response.encoding = 'utf-8'
buffer = response.text
return buffer
except Exception as e:
sysLog.warn('Request failed: ' + str(e))
if __name__ == '__main__':
data = http_get('https://www.baidu.com')
print(data)
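# Illustrative interceptor registration; the lambda is a stand-in callback.
# Note that the methods above only store interceptors in the class-level lists --
# nothing in this module actually invokes them around a request yet.
client = Http()
interceptor_index = client.addRequestInterceptor(lambda info: print('about to send:', info))
client.removeRequestInterceptor(interceptor_index)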
| 25.912 | 108 | 0.543069 | 271 | 3,239 | 6.346863 | 0.398524 | 0.023256 | 0.030233 | 0.036047 | 0.246512 | 0.206977 | 0.206977 | 0.206977 | 0.206977 | 0.206977 | 0 | 0.002213 | 0.302563 | 3,239 | 124 | 109 | 26.120968 | 0.759185 | 0.221673 | 0 | 0.303571 | 0 | 0 | 0.054618 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.160714 | false | 0 | 0.071429 | 0 | 0.357143 | 0.035714 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
308b99133cba1b420b98e1026e82294464c485e7 | 6,293 | py | Python | keyboard.py | doitintl/mac-keymappings | e2a78213a9c383e31e9e8f4817f46767f693256a | [
"MIT"
] | null | null | null | keyboard.py | doitintl/mac-keymappings | e2a78213a9c383e31e9e8f4817f46767f693256a | [
"MIT"
] | null | null | null | keyboard.py | doitintl/mac-keymappings | e2a78213a9c383e31e9e8f4817f46767f693256a | [
"MIT"
] | null | null | null | import re
import typing
import xml.etree.ElementTree as ET
from unicodedata import name
from typing import *
from jinja2 import Template
keylayout_file = 'keylayout_file'
keylayout_xml = keylayout_file + '.xml'
keylayout_html = keylayout_file + '.html'
TOFU = '\ufffd'
def mapindex_by_modifier(map_to_modifier: Dict[int, str], modifier: str) -> int:
'''Get the index (in the XML list) of the keyboard map, given the modifiers that produce that map.'''
return [k for k, v in map_to_modifier.items() if v == modifier][0]
def map_by_index(tree: ET.Element) -> Dict[int, Dict[int, str]]:
'''Get the keyboard map given the index n its XML list, given the modifiers that produce that map.'''
def to_chr(v: str) -> str:
return chr(int(v, 16)) if len(v) == 6 else v
keyMapSets = tree.findall('./keyMapSet')
assert len(keyMapSets) == 1, 'For now, only supports a single KeyMapSet in the file, found %d' % len(keyMapSets)
theOnlykeyMapSet = keyMapSets[0]
key_maps = {int(keyMap.attrib['index']): {
int(oneKey.attrib['code']): to_chr(oneKey.attrib['output'])
for oneKey in keyMap} for keyMap in theOnlykeyMapSet}
return key_maps
def modifier_by_mapindex(tree: ET.Element) -> Dict[int, str]:
'''Get the modifiers that produce a keyboard map, given its index in the XML.'''
def shorten_modifier_descriptions(s: str) -> str:
'''Abbreviate the names of the modifiers, using Mac modifier icons. Separate with semicolons.'''
conversions = {'Shift': '⇧', 'Option': '⌥', 'Command': '⌘', 'Control': '⌃',  # note: '⌘' for Command (the original reused the Shift symbol)
' ': '; '}
for in_, out in conversions.items():
s = re.sub(in_, out, s, flags=re.IGNORECASE)
return s
keyMapSelects = tree.find("./modifierMap").findall('./keyMapSelect')
return {
int(keyMapSelect.attrib['mapIndex']):
shorten_modifier_descriptions(keyMapSelect.find('./modifier').attrib['keys'])
for keyMapSelect in keyMapSelects}
def tweaked_xml() -> str:
'''Read the XML and fix entity markers.'''
def remove_entity_markers(xml_s):
'''
Reformat entities like Ne7 as 0x78e7. This is necessary because
the XML parser can choke on inability to resolve entities, which
we do not want to do anyway.
'''
return re.sub(r'&#(x[\dA-F]{4});', r'0\g<1>', xml_s)
with open(keylayout_xml, 'r') as f:
return remove_entity_markers(f.read())
def build_table(ascii_keyboard, unmodified_nonasciii_keyboard, map_by_index_,
modifier_by_mapindex_: Dict[int, str])-> List[Dict[str, str]]:
def sort_by_asciifirst_and_moddescription_length(modifier_by_mapindex_, ascii_keyboard_):
modifier_by_map_index_items = list(modifier_by_mapindex_.items())
'''
We are using length of modifiers to sort the keyboards, since a single
modifier is more "common" than multiple modifiers, and NO modifiers
is most common of all. However, we put the ASCII keyboard first.
'''
modifier_by_map_index_items.sort(key=lambda item: (item[1].count(';'), item[1]))
ascii_keyboard_dict = {0: ascii_keyboard_} # Put it first
modifier_by_mapindex_ = dict(modifier_by_map_index_items)
modifier_by_mapindex_ = {**ascii_keyboard_dict, **modifier_by_mapindex_}
return modifier_by_mapindex_
def unicode_name(s: str) -> str:
'''Get the official unicode name for a character.'''
if not s:
return ""
names = []
for ch in s:
try:
names.append(name(ch))
except ValueError: # codepoints like 4,12,16,127
# Code points including 1...31 and 127 have no name.
names.append(TOFU) # tofu
return ' & '.join(names)
modifier_by_mapindex_ = sort_by_asciifirst_and_moddescription_length(modifier_by_mapindex_, ascii_keyboard)
rows = []
for idx, modifier in modifier_by_mapindex_.items():
modified_keyboard = map_by_index_[idx]
for key_idx in modified_keyboard:
modified_key = modified_keyboard[key_idx]
if unicode_name(modified_key) not in [TOFU, '']:
if not modifier.strip():
modifier = '<NONE>'
rows.append({
'modifier': modifier,
'ascii': ascii_keyboard[key_idx],
'unmodified_non_ascii_key': unmodified_nonasciii_keyboard[key_idx],
'modified_key': modified_key,
'unicode_name': unicode_name(modified_key)})
return rows
def render(title, rows:List[Dict[str,str]]):
template = """
<!DOCTYPE html>
<html>
<head>
<title>{{ title|escape }}</title>
<meta charset="UTF-8">
<style>
th {
background: #ACA
}
tr:nth-child(even) {background: #CAC}
tr:nth-child(odd) {background: #EEE}
table {
border: 1px solid black;
}
</style>
</head>
<body>
<table>
<tr><th>{%- for k in item_list[0]%}{{k.replace('_',' ').title()|escape}}{%- if not loop.last %}</th><th>{%- endif %}{%- endfor %}</th></tr>
{%- for item in item_list %}
<tr><td>{%- for v in item.values() %}{{v|escape}}{%- if not loop.last %}</td><td>{%- endif %}{%- endfor %}</td></tr>
{%- endfor %}
</table>
</body>
</html>"""
rendered = Template(template).render(title=title, item_list=rows)
with open(keylayout_html, 'w') as f:
f.write(rendered)
def main():
tree = ET.fromstring(tweaked_xml())
map_by_index_ = map_by_index(tree)
modifier_by_mapindex_ = modifier_by_mapindex(tree)
unmodified_nonasciii_keyboard = map_by_index_[mapindex_by_modifier(modifier_by_mapindex_, '')]
ascii_keyboard = map_by_index_[0]
item_list = build_table(ascii_keyboard, unmodified_nonasciii_keyboard, map_by_index_, modifier_by_mapindex_)
title = tree.attrib['name']
render(title, item_list)
if __name__ == '__main__':
main()
pass
| 38.607362 | 152 | 0.608613 | 788 | 6,293 | 4.640863 | 0.282995 | 0.049221 | 0.073831 | 0.02461 | 0.18321 | 0.1102 | 0.100082 | 0.080941 | 0.080941 | 0.080941 | 0 | 0.008266 | 0.269506 | 6,293 | 162 | 153 | 38.845679 | 0.786382 | 0.111553 | 0 | 0 | 0 | 0.017241 | 0.23545 | 0.014597 | 0 | 0 | 0 | 0 | 0.008621 | 1 | 0.103448 | false | 0.008621 | 0.051724 | 0.008621 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30901a41bb25b4bd3ce2983a3e73c05998cafcb0 | 1,394 | py | Python | tests/test_gln.py | jodal/biip | ea56a53a8999f3ded04be881c3348b1b9a373d22 | [
"Apache-2.0"
] | 17 | 2020-06-30T08:07:31.000Z | 2022-03-26T08:14:24.000Z | tests/test_gln.py | jodal/biip | ea56a53a8999f3ded04be881c3348b1b9a373d22 | [
"Apache-2.0"
] | 117 | 2020-08-12T14:32:11.000Z | 2022-03-28T04:07:48.000Z | tests/test_gln.py | jodal/biip | ea56a53a8999f3ded04be881c3348b1b9a373d22 | [
"Apache-2.0"
] | 1 | 2020-11-23T23:15:58.000Z | 2020-11-23T23:15:58.000Z | """Tests of parsing GLNs."""
import pytest
from biip import ParseError
from biip.gln import Gln
from biip.gs1 import GS1Prefix
def test_parse() -> None:
gln = Gln.parse("1234567890128")
assert gln == Gln(
value="1234567890128",
prefix=GS1Prefix(value="123", usage="GS1 US"),
payload="123456789012",
check_digit=8,
)
def test_parse_strips_surrounding_whitespace() -> None:
gln = Gln.parse(" \t 1234567890128 \n ")
assert gln.value == "1234567890128"
def test_parse_value_with_invalid_length() -> None:
with pytest.raises(ParseError) as exc_info:
Gln.parse("123")
assert (
str(exc_info.value)
== "Failed to parse '123' as GLN: Expected 13 digits, got 3."
)
def test_parse_nonnumeric_value() -> None:
with pytest.raises(ParseError) as exc_info:
Gln.parse("123456789o128")
assert (
str(exc_info.value)
== "Failed to parse '123456789o128' as GLN: Expected a numerical value."
)
def test_parse_with_invalid_check_digit() -> None:
with pytest.raises(ParseError) as exc_info:
Gln.parse("1234567890127")
assert (
str(exc_info.value)
== "Invalid GLN check digit for '1234567890127': Expected 8, got 7."
)
def test_as_gln() -> None:
gln = Gln.parse(" \t 1234567890128 \n ")
assert gln.as_gln() == "1234567890128"
| 22.852459 | 80 | 0.643472 | 176 | 1,394 | 4.9375 | 0.3125 | 0.048331 | 0.069045 | 0.051784 | 0.35443 | 0.330265 | 0.330265 | 0.330265 | 0.252014 | 0.162255 | 0 | 0.149577 | 0.237446 | 1,394 | 60 | 81 | 23.233333 | 0.667921 | 0.015782 | 0 | 0.282051 | 0 | 0 | 0.24451 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0.153846 | false | 0 | 0.102564 | 0 | 0.25641 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30909c4ad0b0995022dfe80c45b1417b387a7f7d | 3,523 | py | Python | web/views.py | Chaixi/My-Django-Demo | 43cea290f4a4a1c1a3b981eae3be11cc1188b103 | [
"MIT"
] | null | null | null | web/views.py | Chaixi/My-Django-Demo | 43cea290f4a4a1c1a3b981eae3be11cc1188b103 | [
"MIT"
] | null | null | null | web/views.py | Chaixi/My-Django-Demo | 43cea290f4a4a1c1a3b981eae3be11cc1188b103 | [
"MIT"
] | null | null | null | from django.shortcuts import render
from django.core.paginator import EmptyPage, PageNotAnInteger, Paginator
from django.views.decorators.csrf import csrf_exempt
import json
from django.core import serializers
# Create your views here.
from django.http import HttpResponse
from django.http.response import JsonResponse
from web import models
def hello(request):
# return HttpResponse("hello world!")
context = {}
# bind the data
context['hello'] = 'Hello world!'
# pass the bound data to the front end
return render(request, 'hello.html', context)
@csrf_exempt
def index(request):
if request.method == 'POST':
pass
else:
xmu_news_list = models.News.objects.order_by('-release_time').all()
jwc_news_list = models.jwc_News.objects.order_by('-release_time').all()
xsc_news_list = models.xsc_News.objects.order_by('-release_time').all()
xmu_news_paginator = Paginator(xmu_news_list, 10)  # show 10 news items per page
jwc_news_paginator = Paginator(jwc_news_list, 10)
xsc_news_paginator = Paginator(xsc_news_list, 10)
# print("p.count:"+str(paginator.count))
# print("p.num_pages:"+str(paginator.num_pages))
# print("p.page_range:"+str(paginator.page_range))
source = request.GET.get("source")
page = request.GET.get('page')
xmu_news = xmu_news_paginator.get_page(1)
jwc_news = jwc_news_paginator.get_page(1)
xsc_news = xsc_news_paginator.get_page(1)
if(source == 'xmu'):
xmu_news = xmu_news_paginator.get_page(page)
elif(source == 'jwc'):
jwc_news = jwc_news_paginator.get_page(page)
elif (source == 'xsc'):
xsc_news = xsc_news_paginator.get_page(page)
# print("p.number:"+str(news.number))
return render(request, 'index.html', {'li':xmu_news, 'jwc_li':jwc_news, 'xsc_li': xsc_news})
@csrf_exempt
def get_page(request):
if request.method == 'GET':
source = request.GET.get('source')
num_page = request.GET.get('page')
all_news_list = models.News.objects.order_by('-release_time').all()
paginator = Paginator(all_news_list, 20) # show 20 news per page
# print("p.count:"+str(paginator.count))
# print("p.num_pages:"+str(paginator.num_pages))
# print("p.page_range:"+str(paginator.page_range))
# page = request.GET.get('page')
news = paginator.get_page(num_page)
return HttpResponse()
@csrf_exempt
def get_detail(request):
if request.method == 'POST':
id = request.POST.get("id")
source = request.POST.get("source")
print("source: {0}, news_id: {1}".format(source, str(id)))
# Using get() here would raise an error; filter() is more convenient
if (source == 'xmu'):
item = models.News.objects.filter(id=id)
models.News.objects.filter(id=id).update(read_status=2)
elif (source == 'jwc'):
item = models.jwc_News.objects.filter(id=id)
models.jwc_News.objects.filter(id=id).update(read_status=2)
elif (source == 'xsc'):
item = models.xsc_News.objects.filter(id=id)
models.xsc_News.objects.filter(id=id).update(read_status=2)
item = item
# print(item)
# print(item[0])
#
json_item = serializers.serialize("json", item)
# print(json_item)
#
json_item = json.loads(json_item)
print("Data from get_detail(). 数据从get_detail()获取")
return JsonResponse(json_item[0]['fields'], safe=False)
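# Hypothetical urls.py wiring for the views above (shown as comments because it
# belongs in a separate module); route paths are assumptions, not part of this file.
# from django.urls import path
# from web import views
# urlpatterns = [
#     path('hello/', views.hello),
#     path('', views.index),
#     path('page/', views.get_page),
#     path('detail/', views.get_detail),
# ]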
| 35.94898 | 100 | 0.640931 | 465 | 3,523 | 4.655914 | 0.189247 | 0.035566 | 0.051732 | 0.064665 | 0.466975 | 0.371824 | 0.356582 | 0.194919 | 0.194919 | 0.177367 | 0 | 0.008056 | 0.224808 | 3,523 | 97 | 101 | 36.319588 | 0.784694 | 0.147885 | 0 | 0.171875 | 0 | 0 | 0.079168 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0.015625 | 0.125 | 0 | 0.25 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3094ceba4f2b42b31ff34703aec38ad0bb23b36e | 658 | py | Python | tests/unit/utils/test_beta.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 36 | 2020-03-17T11:56:51.000Z | 2022-01-19T16:03:32.000Z | tests/unit/utils/test_beta.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 60 | 2020-03-02T23:13:29.000Z | 2021-05-19T15:05:42.000Z | tests/unit/utils/test_beta.py | cwegrzyn/records-mover | e3b71d6c09d99d0bcd6a956b9d09d20f8abe98d2 | [
"Apache-2.0"
] | 4 | 2020-08-11T13:17:37.000Z | 2021-11-05T21:11:52.000Z | from records_mover.utils import beta, BetaWarning
import unittest
from unittest.mock import patch
@patch('records_mover.utils.warnings')
class TestBeta(unittest.TestCase):
def test_beta(self, mock_warnings):
@beta
def my_crazy_function():
return 123
out = my_crazy_function()
self.assertEqual(out, 123)
mock_warnings.warn.assert_called_with('Call to beta function my_crazy_function - '
'interface may change in the future!',
category=BetaWarning,
stacklevel=2)
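# A minimal sketch of a decorator that would satisfy the assertion above; the real
# records_mover implementation may differ, so treat the name and body as assumptions.
import functools
import warnings

def beta_sketch(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn('Call to beta function %s - interface may change in the future!' % func.__name__,
                      category=BetaWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper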
| 34.631579 | 90 | 0.571429 | 67 | 658 | 5.41791 | 0.58209 | 0.057851 | 0.123967 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016706 | 0.363222 | 658 | 18 | 91 | 36.555556 | 0.849642 | 0 | 0 | 0 | 0 | 0 | 0.159574 | 0.042553 | 0 | 0 | 0 | 0 | 0.133333 | 1 | 0.133333 | false | 0 | 0.2 | 0.066667 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30959754a6e2c8e563632f08b50d266c0db6795c | 4,835 | py | Python | msqure.py | Levi-Huynh/JS-INTERVIEW | 768e5577fd8f3f26c244154be9d9fd5a348f6171 | [
"MIT"
] | null | null | null | msqure.py | Levi-Huynh/JS-INTERVIEW | 768e5577fd8f3f26c244154be9d9fd5a348f6171 | [
"MIT"
] | null | null | null | msqure.py | Levi-Huynh/JS-INTERVIEW | 768e5577fd8f3f26c244154be9d9fd5a348f6171 | [
"MIT"
] | null | null | null | """
Mag Square
#always do Detective work nm what
UNDERSTAND
-nxn matrix of distinctive pos INT from 1 to n^2
-Sum of any row, column, or diagonal of length n is always equal to the same
number: "Mag" constant
-Given: 3x3 matrix s of integers in the inclusive range [1,9]
we can convert any digit a to any other digit b in the range [1,9]
at cost of |a-b|
-Given s, convert it into a magic square at minimal cost. Print this cost to a new line
-constraints: resulting magic square must contain distinct integers on inclusive range [1,9]
input= array (3x3 matrix) of s intes, in range[1,9] (convert any digit a to any digit b in range [1,9] at cost |a-b| abs value a-b)
output= minimal cost |a-b|, 1 integer
input:
5* 3 4
1 5 8*
6 4* 2
OUTPUT:
8* 3 4
1 5 9*
6 7* 2
input [[5,3,4],[1,5,8],[6,4,2]]
12 14 12
output [[8,3,4],[1,5,9],[6,7,2]]
15 15 15
if [i][i] + [i+1][i] + [i+2][i] //C
[i][i] + [i][i+1] + [i][i+2] && //R
[i][i] + [i+1][i+2] + [i+2][i+2] //D
oputput took 3 replacements at cost of |5-8| + |8-9| + |4-7| = 7
3 1 3
row, column, diagonal
row1 = s[0][0] [0][1] [0][2]
2 = s[1][0], [1][1], [2][2]
3 [2][0] [2][1] [2][2]
#i do it the way you want it done, when you want it done lord
#when i say i dont have money for something you ask me for, lack of faith
# do you help me to your own hurt? put me first
#my strength is in you your laws ty, consistently give all the time
input:
4 8* 2 |9-8| 1 // 14R, 15D, 14C
4* 5 7 |3-4| 1 // 16RM, 14MC
6* 1 6 |8-6| 2 // 13R, 13D, 15C
input [[4,8,2],[4,5,7],[6,1,6]]
14 16 13
output [[4,9,2],[3,5,7],[8,1,6]]
15 15 15
totla min =4
PLAN
EXECUTE
REFLECT
*/
/*
input:
5* 3 4
1 5 8*
6 4* 2
OUTPUT:
8* 3 4
1 5 9*
6 7* 2
input [[5,3,4],[1,5,8],[6,4,2]]
12 14 12
output [[8,3,4],[1,5,9],[6,7,2]]
15 15 15
if [i][i] + [i+1][i] + [i+2][i] //C
[i][i] + [i][i+1] + [i][i+2] && //R
[i][i] + [i+1][i+2] + [i+2][i+2] //D
*/
function Sq(arr) {
let i = 0;
let col1 = arr[i][i] + arr[i+1][i] + arr[i+2][i]
let col2 = arr[i][i+1] + arr[i+1][i+1] + arr[i+2][i+1]
let col3 = arr[i][i+2] + arr[i+1][i+2] + arr[i+2][i+2]
let row1 = arr[i][i] + arr[i][i+1] + arr[i][i+2]
let row2 = arr[i+1][i] + arr[i+1][i+1]+ arr[i+1][i+2]
let row3=arr[i+2][i] + arr[i+2][i+1]+ arr[i+2][i+2]
let dia = arr[i][i] + arr[i+1][i+1] + arr[i+2][i+2]
let dia2 = arr[i][i+2] + arr[i+1][i+1] + arr[i+2][i]
//console.log("col1:", col1, "col2:", col2, "col3:", col3+ "\n" + "row1:", row1, "row2:", row2, "row3:", row3 + "\n" + "diag:", dia, "diag2:", dia2)
console.log(col1 - col3)
}
Sq([[5,3,4],[1,5,8],[6,4,2]])
//
//1=? what makes the col/row/diag thats not the primary sums, equal to primary sum
//2== how to find the col/row/diag thats diff than main prim sum
// store all sums of each col/rows/diags in list
// find most freq sum
// if col/row/diag != mostfreqsum, then store in tracker array diffSum=[]
//diffSum 14, 14
// mostfreqsum - each tracker:colX/rowX/diagX
// store ^ all difference in track variable
col1: 12 col2: 12 col3: 14
row1: 12 row2: 14 row3: 12
diag: 12 diag2: 15
"""
import statistics
def Sq(arr):
i=0
store=[]
dict1 = {}
keys = ["col1", "col2", "col3", "row1", "row2", "row3", "diag", "diag2"]
col1 = arr[i][i] + arr[i+1][i] + arr[i+2][i]
col2 = arr[i][i+1] + arr[i+1][i+1] + arr[i+2][i+1]
col3 = arr[i][i+2] + arr[i+1][i+2] + arr[i+2][i+2]
row1 = arr[i][i] + arr[i][i+1] + arr[i][i+2]
row2 = arr[i+1][i] + arr[i+1][i+1]+ arr[i+1][i+2]
row3=arr[i+2][i] + arr[i+2][i+1]+ arr[i+2][i+2]
dia = arr[i][i] + arr[i+1][i+1] + arr[i+2][i+2]
dia2 = arr[i][i+2] + arr[i+1][i+1] + arr[i+2][i]
print("col1:", col1, "col2:", col2, "col3:", col3, "\n" + "row1:", row1, "row2:", row2, "row3:", row3, "\n" + "diag:", dia, "diag2:", dia2)
store.append([col1, col2, col3, row1,row2, row3, dia, dia2])
# pair each label with its sum; the original nested loop overwrote every key with the last sum
dict1 = dict(zip(keys, store[0]))
t= dict1["col1"]
#print("dict", t)
counter =0
num = store[0][0]
track={}
for key,value in dict1.items():
if value not in track:
track[value]=0
else:
track[value]+=1
mostfreq= max(dict1, key=lambda k:dict1[k])
maxtra= max(track, key=track.get)
print("h",mostfreq,maxtra)
# if store.count(num) < 3, get average from list
# FIND mean of store
"""
mything= sorted(store[0])
print(mything)
mymean= statistics.median(mything)
rmean= round(mymean)
print(mymean, rmean)
myl= [rmean-x for x in store[0] if x != rmean]
mysum= abs(sum(myl))
print(myl, mysum)
"""
#return mysum
Sq([[4,9,2],[3,5,7],[8,1,5]]) #1 | 24.668367 | 150 | 0.535677 | 976 | 4,835 | 2.653689 | 0.217213 | 0.075676 | 0.025483 | 0.037066 | 0.351351 | 0.318147 | 0.282239 | 0.282239 | 0.273745 | 0.27027 | 0 | 0.114207 | 0.250259 | 4,835 | 196 | 151 | 24.668367 | 0.600276 | 0.705895 | 0 | 0 | 0 | 0 | 0.071001 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0 | 0.03125 | 0 | 0.0625 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
3095b3e9337fe55ac442949b05abdd6196442a62 | 1,371 | py | Python | scripts/common/rename.py | andrewsanchez/GenBankQC-Workflow | 8e630ca89c3f1a3cd9d6b2c4987100e3552d831e | [
"MIT"
] | 1 | 2020-03-19T13:00:30.000Z | 2020-03-19T13:00:30.000Z | scripts/common/rename.py | andrewsanchez/GenBankQC-Workflow | 8e630ca89c3f1a3cd9d6b2c4987100e3552d831e | [
"MIT"
] | null | null | null | scripts/common/rename.py | andrewsanchez/GenBankQC-Workflow | 8e630ca89c3f1a3cd9d6b2c4987100e3552d831e | [
"MIT"
] | null | null | null | import re
def parse_genome_id(genome):
genome_id = re.search("GCA_[0-9]*.[0-9]", genome).group()
return genome_id
def rm_duplicates(seq):
"""Remove duplicate strings during renaming
"""
seen = set()
seen_add = seen.add
return [x for x in seq if not (x in seen or seen_add(x))]
def clean_up_name(name):
rm_words = re.compile(r"((?<=_)(sp|sub|substr|subsp|str|strain)(?=_))")
name = re.sub(" +", "_", name)
name = rm_words.sub("_", name)
name = re.sub("_+", "_", name)
name = re.sub("[\W]+", "_", name)
name = rm_duplicates(filter(None, name.split("_")))
name = "_".join(name)
return name
def rename_genome(genome, assembly_summary):
"""Rename FASTAs based on info in the assembly summary
"""
genome_id = parse_genome_id(genome)
infraspecific_name = assembly_summary.at[genome_id, "infraspecific_name"]
organism_name = assembly_summary.at[genome_id, "organism_name"]
if type(infraspecific_name) == float:
infraspecific_name = ""
isolate = assembly_summary.at[genome_id, "isolate"]
if type(isolate) == float:
isolate = ""
assembly_level = assembly_summary.at[genome_id, "assembly_level"]
name = "_".join(
[genome_id, organism_name, infraspecific_name, isolate, assembly_level]
)
name = clean_up_name(name) + ".fna.gz"
return name
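# Illustrative call with a tiny, fabricated assembly summary; only the column
# names mirror the ones accessed above, the values are made up.
import pandas as pd

example_summary = pd.DataFrame(
    {
        "organism_name": ["Escherichia coli"],
        "infraspecific_name": ["strain=K-12"],
        "isolate": [""],
        "assembly_level": ["Complete Genome"],
    },
    index=["GCA_000005845.2"],
)
print(rename_genome("GCA_000005845.2_ASM584v2_genomic.fna.gz", example_summary))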
| 30.466667 | 79 | 0.648432 | 184 | 1,371 | 4.559783 | 0.342391 | 0.095352 | 0.081049 | 0.109654 | 0.183552 | 0.06913 | 0 | 0 | 0 | 0 | 0 | 0.003656 | 0.202042 | 1,371 | 44 | 80 | 31.159091 | 0.763254 | 0.070751 | 0 | 0.0625 | 0 | 0 | 0.108108 | 0.035771 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.03125 | 0 | 0.28125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30963bad8971717768000852380d252200fb7adb | 1,598 | py | Python | example/fetch.py | bh2smith/duneapi | 835d69fc0b62876b1ac313ac5c21faedd80a8350 | [
"Apache-2.0"
] | 1 | 2022-03-15T18:58:33.000Z | 2022-03-15T18:58:33.000Z | example/fetch.py | bh2smith/duneapi | 835d69fc0b62876b1ac313ac5c21faedd80a8350 | [
"Apache-2.0"
] | 6 | 2022-03-25T08:18:52.000Z | 2022-03-28T13:52:28.000Z | example/fetch.py | bh2smith/duneapi | 835d69fc0b62876b1ac313ac5c21faedd80a8350 | [
"Apache-2.0"
] | null | null | null | """Sample Fetch script from DuneAnalytics"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime
from src.duneapi.api import DuneAPI
from src.duneapi.types import Network, QueryParameter, DuneQuery
from src.duneapi.util import open_query
@dataclass
class Record:
"""Arbitrary record with a few different data types"""
string: str
integer: int
decimal: float
time: datetime
@classmethod
def from_dict(cls, obj: dict[str, str]) -> Record:
"""Constructs Record from Dune Data as string dict"""
return cls(
string=obj["block_hash"],
integer=int(obj["number"]),
decimal=float(obj["tx_fees"]),
# Dune timestamps are UTC!
time=datetime.strptime(obj["time"], "%Y-%m-%dT%H:%M:%S+00:00"),
)
def fetch_records(dune: DuneAPI) -> list[Record]:
"""Initiates and executes Dune query, returning results as Python Objects"""
sample_query = DuneQuery.from_environment(
raw_sql=open_query("./example/query.sql"),
name="Sample Query",
network=Network.MAINNET,
parameters=[
QueryParameter.number_type("IntParam", 10),
QueryParameter.date_type("DateParam", datetime(2022, 3, 10, 12, 30, 30)),
QueryParameter.text_type("TextParam", "aba"),
],
)
results = dune.fetch(sample_query)
return [Record.from_dict(row) for row in results]
if __name__ == "__main__":
records = fetch_records(DuneAPI.new_from_environment())
print("First result:", records[0])
| 30.730769 | 85 | 0.655194 | 193 | 1,598 | 5.274611 | 0.507772 | 0.020629 | 0.041257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016155 | 0.225282 | 1,598 | 51 | 86 | 31.333333 | 0.806139 | 0.145181 | 0 | 0 | 0 | 0 | 0.097398 | 0.0171 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.166667 | 0 | 0.416667 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
309e29bd5c11a93e22fbf0abbb2d7d78bdce1323 | 1,321 | py | Python | constants.py | tabilab-dip/sentiment-embeddings | 8859a19d19cb96ee0b0da5053396a1c54bab5da6 | [
"MIT"
] | 7 | 2020-11-18T10:02:22.000Z | 2022-01-06T03:24:37.000Z | constants.py | tabilab-dip/sentiment-embeddings | 8859a19d19cb96ee0b0da5053396a1c54bab5da6 | [
"MIT"
] | 1 | 2020-10-26T18:56:23.000Z | 2020-10-26T18:56:23.000Z | constants.py | tabilab-dip/sentiment-embeddings | 8859a19d19cb96ee0b0da5053396a1c54bab5da6 | [
"MIT"
] | 1 | 2020-12-04T13:51:46.000Z | 2020-12-04T13:51:46.000Z | #!/usr/bin/py
# -*- coding: utf-8 -*-
"""
@author: Cem Rıfkı Aydın
Constant parameters to be leveraged across the program.
"""
import os
COMMAND = "cross_validate"
# Dimension size of embeddings
EMBEDDING_SIZE = 100
#Language can be either "turkish" or "english"
LANG = "turkish"
DATASET_PATH = os.path.join("input", "Sentiment_dataset_turk.csv") # Sentiment_dataset_tr.csv; test_tr.csv
CONTEXT_WINDOW_SIZE = 5
""" Word embedding type can be one of the following:
corpus_svd: SVD - U vector
lexical_svd: Dictionary, SVD - U vector
supervised: Four context delta idf score vector
ensemble: Combination of the above three embeddings
clustering: Clustering vector
word2vec: word2vec
"""
EMBEDDING_TYPE = "ensemble"
# Number of cross-validation
CV_NUMBER = 10
# Concatenation of the maximum, average, and minimum delta-idf polarity scores with the average document vector.
USE_3_REV_POL_SCORES = True
# The below variables are used only if the command "train_and_test_separately" is chosen.
TRAINING_FILE_PATH = os.path.join("input", "Turkish_twitter_train.csv")
TEST_FILE_PATH = os.path.join("input", "Turkish_twitter_test.csv")
# The below file name for the model trained could also be used given that
# model parameters (e.g. embedding size and type) are the same.
MODEL_FILE_NAME = 'finalized_model.sav'
| 26.42 | 112 | 0.7676 | 199 | 1,321 | 4.934673 | 0.567839 | 0.01833 | 0.03055 | 0.04277 | 0.094705 | 0.075356 | 0.075356 | 0.075356 | 0 | 0 | 0 | 0.008842 | 0.14383 | 1,321 | 49 | 113 | 26.959184 | 0.859416 | 0.445117 | 0 | 0 | 0 | 0 | 0.312925 | 0.170068 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a37bf131fb4b3429d304093f34cb95cda25236 | 9,285 | py | Python | torpido/io.py | AP-Atul/Torpido | a646b4d6de7f2e2c96de4c64ce3113f53e3931c2 | [
"Unlicense"
] | 21 | 2020-12-23T07:13:10.000Z | 2022-01-12T10:32:22.000Z | torpido/io.py | AP-Atul/Torpido | a646b4d6de7f2e2c96de4c64ce3113f53e3931c2 | [
"Unlicense"
] | 2 | 2020-12-30T10:45:42.000Z | 2021-09-25T09:52:00.000Z | torpido/io.py | AP-Atul/Torpido | a646b4d6de7f2e2c96de4c64ce3113f53e3931c2 | [
"Unlicense"
] | 1 | 2021-02-06T21:39:41.000Z | 2021-02-06T21:39:41.000Z | """
This file contains functions to separate out video and audio using ffmpeg.
It consists of two functions to split and merge video and audio
using ffmpeg.
"""
import os
from torpido.config.constants import (CACHE_DIR, CACHE_NAME,
IN_AUDIO_FILE, OUT_AUDIO_FILE,
OUT_VIDEO_FILE, THUMBNAIL_FILE)
from torpido.exceptions import AudioStreamMissingException, FFmpegProcessException
from torpido.ffpbar import Progress
from torpido.tools.ffmpeg import split, merge, thumbnail
from torpido.tools.logger import Log
class FFMPEG:
"""
FFMPEG helper function to make use of the subprocess module to run command
to trim and split video files. The std out logs are parsed to generate a progress
bar to show the progress of the command that is being executed.
Attributes
----------
__input_file_name : str
input video file name
__input_file_path : str
input video file name
__output_video_file_name : str
output video file name
__input_audio_file_name : str
original splatted audio file name
__output_audio_file_name : str
de-noised audio file name
__output_file_path : str
same as the input file path
__intro : str
name of the intro video file
__outro : str
name of the outro video file
__extension : str
name of the extension of the input file
__progress_bar : tqdm
object of tqdm to display a progress bar
"""
def __init__(self):
self.__input_file_name = self.__input_file_path = self.__output_video_file_name = self.__input_audio_file_name = None
self.__output_audio_file_name = self.__output_file_path = self.__thumbnail_file = self.__intro = None
self.__outro = self.__extension = self.__progress_bar = None
def set_intro_video(self, intro):
""" Sets the intro video file """
self.__intro = intro
def set_outro_video(self, outro):
""" Sets the outro video file """
self.__outro = outro
def get_input_file_name_path(self):
""" Returns file name that was used for processing """
if self.__input_file_name is not None:
return os.path.join(self.__input_file_path, self.__input_file_name)
def get_output_file_name_path(self):
""" Returns output file name generated from input file name """
if self.__output_video_file_name is not None:
return os.path.join(self.__output_file_path, self.__output_video_file_name)
def get_input_audio_file_name_path(self):
""" Returns name and path of the input audio file that is split from the input video file """
if self.__input_audio_file_name is not None:
return os.path.join(self.__output_file_path, self.__input_audio_file_name)
def get_output_audio_file_name_path(self):
""" Returns the output audio file name and path that is de-noised """
if self.__output_audio_file_name is not None:
return os.path.join(self.__output_file_path, self.__output_audio_file_name)
def split_video_audio(self, input_file):
"""
Function to split the input video file into audio file using FFmpeg. Progress bar is
updated as the command is run
Note : No new video file is created, only audio file is created
Parameters
----------
input_file : str
input video file
Returns
-------
bool
returns True if success else Error
"""
if os.path.isfile(input_file) is False:
Log.e("File does not exists")
return False
# storing all the references
self.__input_file_path = os.path.dirname(input_file)
self.__output_file_path = self.__input_file_path
self.__input_file_name = os.path.basename(input_file)
# get the base name without the extension
base_name, self.__extension = os.path.splitext(self.__input_file_name)
self.__output_video_file_name = "".join([base_name, OUT_VIDEO_FILE, self.__extension])
self.__input_audio_file_name = base_name + IN_AUDIO_FILE
self.__output_audio_file_name = base_name + OUT_AUDIO_FILE
self.__thumbnail_file = base_name + THUMBNAIL_FILE
# call ffmpeg tool to do the splitting
try:
Log.i("Splitting the video file.")
self.__progress_bar = Progress()
for log in split(input_file,
os.path.join(self.__output_file_path, self.__input_audio_file_name)):
self.__progress_bar.display(log)
if not os.path.isfile(os.path.join(self.__output_file_path, self.__input_audio_file_name)):
raise AudioStreamMissingException
self.__progress_bar.complete()
print("----------------------------------------------------------")
return True
# no audio in the video
except AudioStreamMissingException:
Log.e(AudioStreamMissingException.cause)
self.__progress_bar.clear()
return False
def merge_video_audio(self, timestamps):
"""
Function to merge the processed files using FFmpeg. The timestamps are used the trim
the original video file and the audio stream is replaced with the de-noised audio
file created by `Auditory`
Parameters
----------
timestamps : list
list of start and end timestamps
Returns
-------
bool
True id success else error is raised
"""
if self.__input_file_name is None or self.__output_audio_file_name is None:
Log.e("Files not found for merging")
return False
# call ffmpeg tool to merge the files
try:
self.__progress_bar = Progress()
Log.i("Writing the output video file.")
for log in merge(os.path.join(self.__output_file_path, self.__input_file_name),
os.path.join(self.__output_file_path, self.__output_audio_file_name),
os.path.join(self.__output_file_path, self.__output_video_file_name),
timestamps,
intro=self.__intro,
outro=self.__outro):
self.__progress_bar.display(log)
if not os.path.isfile(os.path.join(self.__output_file_path, self.__output_video_file_name)):
raise FFmpegProcessException
self.__progress_bar.complete()
print("----------------------------------------------------------")
return True
except FFmpegProcessException:
Log.e(FFmpegProcessException.cause)
self.__progress_bar.clear()
return False
def gen_thumbnail(self, time):
"""
Generates the thumbnail from the timestamps. Since the timestamps cover the best
parts of the video, a random second is picked from the video and that frame is saved
as the thumbnail. Later, some kind of peak rank from the input data could be used instead.
Notes
-----
To pick the max rank we can use the sum rank and easily get the max value and use
it to generate the thumbnail for the video.
"""
# call ffmpeg tool to merge the files
try:
self.__progress_bar = Progress()
Log.i("Writing the output video file.")
for log in thumbnail(os.path.join(self.__output_file_path, self.__input_file_name),
os.path.join(self.__output_file_path, self.__thumbnail_file),
time):
self.__progress_bar.display(log)
if not os.path.isfile(os.path.join(self.__output_file_path, self.__thumbnail_file)):
raise FFmpegProcessException
self.__progress_bar.complete()
print("----------------------------------------------------------")
return True
except FFmpegProcessException:
Log.e(FFmpegProcessException.cause)
self.__progress_bar.clear()
return False
def clean_up(self):
"""
Deletes extra files created while processing, deletes the ranking files
cache, etc.
"""
# processing has not started yet, or something went wrong
if self.__input_file_name is not None:
# original audio split from the video file
if os.path.isfile(os.path.join(self.__output_file_path, self.__input_audio_file_name)):
os.unlink(os.path.join(self.__output_file_path, self.__input_audio_file_name))
# de-noised audio file output of Auditory
if os.path.isfile(os.path.join(self.__output_file_path, self.__output_audio_file_name)):
os.unlink(os.path.join(self.__output_file_path, self.__output_audio_file_name))
# cache storage file
if os.path.isfile(os.path.join(CACHE_DIR, CACHE_NAME)):
os.unlink(os.path.join(CACHE_DIR, CACHE_NAME))
del self.__progress_bar
Log.d("Clean up completed.")
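# Hypothetical end-to-end sketch: the input path and timestamps are invented, the
# de-noising step that normally produces the output audio file is elided, and a
# working ffmpeg install is assumed.
ffmpeg_io = FFMPEG()
if ffmpeg_io.split_video_audio("/tmp/example_input.mp4"):
    ffmpeg_io.merge_video_audio(timestamps=[[0, 10], [30, 45]])
    ffmpeg_io.clean_up()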
| 39.510638 | 125 | 0.627464 | 1,179 | 9,285 | 4.597964 | 0.164546 | 0.070836 | 0.055156 | 0.059768 | 0.4418 | 0.37613 | 0.353994 | 0.323372 | 0.290168 | 0.290168 | 0 | 0 | 0.287561 | 9,285 | 234 | 126 | 39.679487 | 0.819501 | 0.294561 | 0 | 0.339623 | 0 | 0 | 0.053498 | 0.028642 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103774 | false | 0 | 0.056604 | 0 | 0.283019 | 0.028302 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a54810ed9eb56db883cc95109f0ab931d09a5e | 605 | py | Python | COS120/testPlanet.py | thejayhaykid/Python | 641c33b94762f0cace203dcf4cc121571625ab02 | [
"MIT"
] | null | null | null | COS120/testPlanet.py | thejayhaykid/Python | 641c33b94762f0cace203dcf4cc121571625ab02 | [
"MIT"
] | null | null | null | COS120/testPlanet.py | thejayhaykid/Python | 641c33b94762f0cace203dcf4cc121571625ab02 | [
"MIT"
] | null | null | null | import planetClass
def createSomePlanets():
aPlanet=planetClass.Planet("Zorks",2000,30000,100000,5)
bPlanet=planetClass.Planet("Zapps",1000,20000,200000,17)
print(aPlanet.getName() + " has a radius of " + str(aPlanet.getRadius()))
planetList=[aPlanet,bPlanet]
for planet in planetList:
print(planet.getName() + " has a mass of " + str(planet.getMass()))
for planet in planetList:
print(planet.getName() + " has " + str(planet.getMoons()) + " moons.")
print(bPlanet.getName() + " has a circumference of " + str(bPlanet.getCircumference()))
createSomePlanets()
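# planetClass is not shown in this snippet; the sketch below is only a guess at the
# interface implied by the calls above (the constructor argument order in particular
# is an assumption).
class PlanetSketch:
    def __init__(self, name, radius, mass, circumference, moons):
        self.name, self.radius, self.mass = name, radius, mass
        self.circumference, self.moons = circumference, moons
    def getName(self): return self.name
    def getRadius(self): return self.radius
    def getMass(self): return self.mass
    def getCircumference(self): return self.circumference
    def getMoons(self): return self.moons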
| 40.333333 | 91 | 0.680992 | 70 | 605 | 5.885714 | 0.485714 | 0.097087 | 0.080097 | 0.101942 | 0.203884 | 0.203884 | 0.203884 | 0.203884 | 0 | 0 | 0 | 0.065606 | 0.168595 | 605 | 14 | 92 | 43.214286 | 0.753479 | 0 | 0 | 0.166667 | 0 | 0 | 0.128926 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.166667 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a7222ecfe5a24f86695b0a4cf26331038a5731 | 1,410 | py | Python | Source/mincost.py | aarsheem/696-ds | 2d74b1e3f430e369202982d7ad8c56f362b00f76 | [
"MIT"
] | 2 | 2020-02-12T22:56:40.000Z | 2020-02-17T16:59:05.000Z | Source/mincost.py | aarsheem/696-ds | 2d74b1e3f430e369202982d7ad8c56f362b00f76 | [
"MIT"
] | null | null | null | Source/mincost.py | aarsheem/696-ds | 2d74b1e3f430e369202982d7ad8c56f362b00f76 | [
"MIT"
] | 2 | 2020-02-12T17:25:33.000Z | 2021-02-01T20:29:17.000Z | import numpy as np
from systemrl.agents.q_learning import QLearning
import matplotlib.pyplot as plt
from helper import decaying_epsilon, evaluate_interventions
from tqdm import tqdm
#This is not converging
def mincost(env, human_policy, min_performance, agent, num_episodes=1000, max_steps=1000):
print("mincost")
episode_returns_ = []
for episodes in tqdm(range(num_episodes)):
env.reset()
state = env.state
is_end = False
returns = 0
returns_ = 0
count = 0
epsilon = decaying_epsilon(episodes, num_episodes)
while not is_end and count < max_steps:
count += 1
if np.random.random() < epsilon:
action = np.random.randint(agent.num_actions)
else:
action = agent.get_action(state)
next_state, reward, is_end = env.step(action)
returns += reward
if is_end == True or count >= max_steps:
if returns < min_performance:
reward_ = -100
elif action == human_policy[state]:
reward_ = 0
else:
reward_ = -1
agent.train(state, action, reward_, next_state)
state = next_state
returns_ += reward_
episode_returns_.append(returns_)
return evaluate_interventions(env, human_policy, agent)
| 32.790698 | 90 | 0.595745 | 161 | 1,410 | 4.987578 | 0.42236 | 0.024907 | 0.034869 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018028 | 0.331206 | 1,410 | 42 | 91 | 33.571429 | 0.83351 | 0.015603 | 0 | 0.055556 | 0 | 0 | 0.005047 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.138889 | 0 | 0.194444 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a791a26136890491f10b4931b42e8e670355a5 | 2,076 | py | Python | server/algorithms.py | sebastianfrey/geoio-server | 2f127244adda96d54e7a082f8514f9bdc9c8db97 | [
"MIT"
] | null | null | null | server/algorithms.py | sebastianfrey/geoio-server | 2f127244adda96d54e7a082f8514f9bdc9c8db97 | [
"MIT"
] | null | null | null | server/algorithms.py | sebastianfrey/geoio-server | 2f127244adda96d54e7a082f8514f9bdc9c8db97 | [
"MIT"
] | null | null | null | """All algorithms used by geoio-server"""
import math
import functools
import geojson
def angle(point_1, point_2):
"""calculates the angle between two points in radians"""
return math.atan2(point_2[1] - point_1[1], point_2[0] - point_1[0])
def convex_hull(collection):
"""Calculates the convex hull of an geojson feature collection."""
features_or_geometries = []
if collection["type"] == "GeometryCollection":
features_or_geometries = collection["geometries"]
elif collection["type"] == "FeatureCollection":
features_or_geometries = collection["features"]
if features_or_geometries is None or len(features_or_geometries) == 0:
raise Exception("No features or geometries where found")
features_or_geometries_mapper = lambda feature_or_geometry: list(geojson.utils.coords(feature_or_geometry))
coordinates_reducer = lambda a, b: a + b
coordinates_list = list(map(features_or_geometries_mapper, features_or_geometries))
points = list(functools.reduce(coordinates_reducer, coordinates_list))
if len(points) < 3:
raise Exception("Can not calculate convex hull of less then 3 input points.")
points = list(set(points))
points = sorted(points, key=lambda point: (point[1], point[0]))
point0 = points.pop(0)
points = list(map(lambda point: (point[0], point[1], angle(point0, point)), points))
points = list(sorted(points, key=lambda point: point[2]))
points.insert(0, point0)
hull = [points[0], (points[1][0], points[1][1])]
i = 2
points_count = len(points)
while i < points_count:
stack_length = len(hull)
point_1 = hull[stack_length-1]
point_2 = hull[stack_length-2]
point_i = points[i]
discrimnante = (point_2[0] - point_1[0]) * (point_i[1] - point_1[1]) - (point_i[0] - point_1[0]) * (point_2[1] - point_1[1])
if discrimnante < 0 or stack_length == 2:
hull.append((point_i[0], point_i[1]))
i = i+1
else:
hull.pop()
return geojson.Polygon(hull)
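# Example input: a GeometryCollection holding one MultiPoint; the coordinates are
# made up purely for illustration, and the call prints the resulting GeoJSON Polygon.
example_collection = {
    "type": "GeometryCollection",
    "geometries": [
        {"type": "MultiPoint", "coordinates": [[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]]}
    ],
}
print(convex_hull(example_collection))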
| 32.952381 | 132 | 0.66474 | 286 | 2,076 | 4.643357 | 0.283217 | 0.045181 | 0.135542 | 0.018072 | 0.108434 | 0.088855 | 0 | 0 | 0 | 0 | 0 | 0.032297 | 0.209538 | 2,076 | 62 | 133 | 33.483871 | 0.776965 | 0.070809 | 0 | 0 | 0 | 0 | 0.081547 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.075 | 0 | 0.175 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a8e19c8a3340ed914debb399aef2df50e28dac | 2,495 | py | Python | First Project/Exercise2/modified_bisection.py | TasosOperatingInBinary/Numerical-Analysis-Projects | 61a8014f2b853a646145cea5a4d3655e100be854 | [
"MIT"
] | null | null | null | First Project/Exercise2/modified_bisection.py | TasosOperatingInBinary/Numerical-Analysis-Projects | 61a8014f2b853a646145cea5a4d3655e100be854 | [
"MIT"
] | null | null | null | First Project/Exercise2/modified_bisection.py | TasosOperatingInBinary/Numerical-Analysis-Projects | 61a8014f2b853a646145cea5a4d3655e100be854 | [
"MIT"
] | null | null | null | import numpy as np
def modified_bisection(f, a, b, eps=5e-6):
"""
Function that finds a root using modified Bisection method for a given function f(x).
On each iteration instead of the middle of the interval, a random number is chosen as the next guess for the
root.
The function finds the root of f(x) with a predefined absolute accuracy epsilon.
The function expects an interval [a, b] in which the function f is known to have a root.
If f has multiple roots in this interval, the Bisection method converges randomly to one of them.
If f(x) does not change sign on [a, b] (so Bolzano's theorem cannot be applied), the function returns nan
as the root and -1 as the number of iterations. The function also checks whether a or b is a root of f(x); if both
are, it returns the value of a.
Parameters
----------
f : callable
The function to find a root of.
a : float
The start of the initial interval in which the function will find the root of f(x).
b : float
The end of the initial interval in which the function will find the root of f(x).
eps : float
The target accuracy.
The iteration stops when the length of the current interval divided by 2 to the power of n+1 is below eps.
Default value is 5e-6.
Returns
-------
root : float
The estimated value for the root.
iterations_num : int
The number of iterations.
"""
# check if Bolzano theorem can not be applied or a is larger than b
if f(a) * f(b) > 0 or a > b:
return np.nan, -1
elif f(a) == 0: # check if a is root of f
return a, 0
elif f(b) == 0: # or b is root of f
return b, 0
# Modified Bisection algorithm
iterations_num = 0
while abs(b - a) >= eps:
current_root = np.random.uniform(a, b) # each iteration root approximation is a random number from a uniform
# distribution
if f(a) * f(current_root) < 0: # find out where Bolzano theorem still can be applied and update the interval
# [a,b]
b = current_root
else:
a = current_root
iterations_num += 1
current_root = np.random.uniform(a, b)
return current_root, iterations_num
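# Quick sanity check: f(x) = x**2 - 2 changes sign on [0, 2], so the routine should
# return an approximation of sqrt(2) together with the number of iterations used.
root, iterations = modified_bisection(lambda x: x ** 2 - 2, 0, 2)
print(root, iterations)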
| 42.288136 | 119 | 0.600401 | 382 | 2,495 | 3.89267 | 0.314136 | 0.073974 | 0.028245 | 0.02152 | 0.180901 | 0.153329 | 0.114324 | 0.076664 | 0.076664 | 0.076664 | 0 | 0.009744 | 0.341884 | 2,495 | 58 | 120 | 43.017241 | 0.895859 | 0.676553 | 0 | 0.111111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.055556 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30a910ea0718b3060e358f1839e61f48a5b25649 | 8,454 | py | Python | dynamic_forms/models.py | juandisay/django-dynamic-forms | 761bfea5332fbd4d247a0fa55a19cfc0dd36e3c8 | [
"BSD-3-Clause"
] | 135 | 2015-01-16T08:14:23.000Z | 2021-12-22T07:21:37.000Z | dynamic_forms/models.py | ayoub-root/django-dynamic-forms | 614b9a06f6edfeb3349b7e64cc8820e56535ad59 | [
"BSD-3-Clause"
] | 33 | 2015-01-18T13:24:00.000Z | 2019-05-03T12:26:17.000Z | dynamic_forms/models.py | ayoub-root/django-dynamic-forms | 614b9a06f6edfeb3349b7e64cc8820e56535ad59 | [
"BSD-3-Clause"
] | 48 | 2015-01-20T20:04:20.000Z | 2022-01-06T10:02:57.000Z | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import json
from collections import OrderedDict
from django.core.urlresolvers import reverse
from django.db import models
from django.db.transaction import atomic
from django.template.defaultfilters import slugify
from django.utils.crypto import get_random_string
from django.utils.encoding import force_text, python_2_unicode_compatible
from django.utils.html import format_html, format_html_join
from django.utils.translation import ugettext_lazy as _
from dynamic_forms.actions import action_registry
from dynamic_forms.conf import settings
from dynamic_forms.fields import TextMultiSelectField
from dynamic_forms.formfields import formfield_registry
@python_2_unicode_compatible
class FormModel(models.Model):
name = models.CharField(_('Name'), max_length=50, unique=True)
submit_url = models.CharField(_('Submit URL'), max_length=100, unique=True,
help_text=_('The full URL path to the form. It should start '
'and end with a forward slash (<code>/</code>).'))
success_url = models.CharField(_('Success URL'), max_length=100,
help_text=_('The full URL path where the user will be '
'redirected after successfully sending the form. It should start '
'and end with a forward slash (<code>/</code>). If empty, the '
'success URL is generated by appending <code>done/</code> to the '
'“Submit URL”.'), blank=True, default='')
actions = TextMultiSelectField(_('Actions'), default='',
choices=action_registry.get_as_choices())
form_template = models.CharField(_('Form template path'), max_length=100,
default='dynamic_forms/form.html',
choices=settings.DYNAMIC_FORMS_FORM_TEMPLATES)
success_template = models.CharField(_('Success template path'),
max_length=100, default='dynamic_forms/form_success.html',
choices=settings.DYNAMIC_FORMS_SUCCESS_TEMPLATES)
allow_display = models.BooleanField(_('Allow display'), default=False,
help_text=_('Allow a user to view the input at a later time. This '
'requires the “Store in database” action to be active. The sender '
'will be given a unique URL to recall the data.'))
recipient_email = models.EmailField(_('Recipient email'), blank=True,
null=True, help_text=_('Email address to send form data.'))
class Meta:
ordering = ['name']
verbose_name = _('Dynamic form')
verbose_name_plural = _('Dynamic forms')
def __str__(self):
return self.name
def get_fields_as_dict(self):
"""
Returns an ``OrderedDict`` (``SortedDict`` when ``OrderedDict is not
available) with all fields associated with this form where their name
is the key and their label is the value.
"""
return OrderedDict(self.fields.values_list('name', 'label').all())
def save(self, *args, **kwargs):
"""
Makes sure that the ``submit_url`` and -- if defined the
``success_url`` -- end with a forward slash (``'/'``).
"""
if not self.submit_url.endswith('/'):
self.submit_url = self.submit_url + '/'
if self.success_url:
if not self.success_url.endswith('/'):
self.success_url = self.success_url + '/'
else:
self.success_url = self.submit_url + 'done/'
super(FormModel, self).save(*args, **kwargs)
@python_2_unicode_compatible
class FormFieldModel(models.Model):
parent_form = models.ForeignKey(FormModel, on_delete=models.CASCADE,
related_name='fields')
field_type = models.CharField(_('Type'), max_length=255,
choices=formfield_registry.get_as_choices())
label = models.CharField(_('Label'), max_length=255)
name = models.SlugField(_('Name'), max_length=50, blank=True)
_options = models.TextField(_('Options'), blank=True, null=True)
position = models.SmallIntegerField(_('Position'), blank=True, default=0)
class Meta:
ordering = ['parent_form', 'position']
unique_together = ("parent_form", "name",)
verbose_name = _('Form field')
verbose_name_plural = _('Form fields')
def __str__(self):
return _('Field “%(field_name)s” in form “%(form_name)s”') % {
'field_name': self.label,
'form_name': self.parent_form.name,
}
def generate_form_field(self, form):
field_type_cls = formfield_registry.get(self.field_type)
field = field_type_cls(**self.get_form_field_kwargs())
field.contribute_to_form(form)
return field
def get_form_field_kwargs(self):
kwargs = self.options
kwargs.update({
'name': self.name,
'label': self.label,
})
return kwargs
@property
def options(self):
"""Options passed to the form field during construction."""
if not hasattr(self, '_options_cached'):
self._options_cached = {}
if self._options:
try:
self._options_cached = json.loads(self._options)
except ValueError:
pass
return self._options_cached
@options.setter
def options(self, opts):
if hasattr(self, '_options_cached'):
del self._options_cached
self._options = json.dumps(opts)
def save(self, *args, **kwargs):
if not self.name:
self.name = slugify(self.label)
given_options = self.options
field_type_cls = formfield_registry.get(self.field_type)
invalid = set(self.options.keys()) - set(field_type_cls._meta.keys())
if invalid:
for key in invalid:
del given_options[key]
self.options = given_options
super(FormFieldModel, self).save(*args, **kwargs)
@python_2_unicode_compatible
class FormModelData(models.Model):
form = models.ForeignKey(FormModel, on_delete=models.SET_NULL,
related_name='data', null=True)
value = models.TextField(_('Form data'), blank=True, default='')
submitted = models.DateTimeField(_('Submitted on'), auto_now_add=True)
display_key = models.CharField(_('Display key'), max_length=24, null=True,
blank=True, db_index=True, default=None, unique=True,
help_text=_('A unique identifier that is used to allow users to view '
'their sent data. Unique over all stored data sets.'))
class Meta:
verbose_name = _('Form data')
verbose_name_plural = _('Form data')
def __str__(self):
return _('Form: “%(form)s” on %(date)s') % {
'form': self.form,
'date': self.submitted,
}
def save(self, *args, **kwargs):
with atomic():
if self.form.allow_display and not self.display_key:
dk = get_random_string(24)
while FormModelData.objects.filter(display_key=dk).exists():
dk = get_random_string(24)
self.display_key = dk
super(FormModelData, self).save(*args, **kwargs)
@property
def json_value(self):
return OrderedDict(sorted(json.loads(self.value).items()))
def pretty_value(self):
try:
value = format_html_join('',
'<dt>{0}</dt><dd>{1}</dd>',
(
(force_text(k), force_text(v))
for k, v in self.json_value.items()
)
)
return format_html('<dl>{0}</dl>', value)
except ValueError:
return self.value
pretty_value.allow_tags = True
@property
def show_url(self):
"""
If the form this data set belongs to has
:attr:`~FormModel.allow_display` ``== True``, return the permanent URL.
If displaying is not allowed, return an empty string.
"""
if self.form.allow_display:
return reverse('dynamic_forms:data-set-detail',
kwargs={'display_key': self.display_key})
return ''
@property
def show_url_link(self):
"""
Similar to :attr:`show_url` but wraps the display key in an `<a>`-tag
linking to the permanent URL.
"""
if self.form.allow_display:
return format_html('<a href="{0}">{1}</a>', self.show_url, self.display_key)
return ''
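# A minimal usage sketch for FormFieldModel above, assuming a registered
# single-line text field type; the field_type string and the option names are
# illustrative and the ORM calls are abbreviated:
#
#   field = FormFieldModel(parent_form=form,
#                          field_type='dynamic_forms.formfields.SingleLineTextField',
#                          label='Your name')
#   field.options = {'required': True}   # stored as JSON in _options
#   field.save()                         # name is slugified from the label -> 'your-name'
#   field.get_form_field_kwargs()
#   # -> {'required': True, 'name': 'your-name', 'label': 'Your name'}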
| 38.427273 | 88 | 0.62917 | 1,028 | 8,454 | 4.957198 | 0.2393 | 0.03022 | 0.020016 | 0.018838 | 0.167975 | 0.111068 | 0.091444 | 0.074568 | 0.074568 | 0.020016 | 0 | 0.006209 | 0.257038 | 8,454 | 219 | 89 | 38.60274 | 0.805127 | 0.075112 | 0 | 0.16568 | 0 | 0 | 0.158738 | 0.013945 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088757 | false | 0.005917 | 0.088757 | 0.023669 | 0.402367 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30abf6ce8fba41eb376313eba17a397685a314e9 | 29,178 | py | Python | src/pumpwood_djangoviews/views.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | src/pumpwood_djangoviews/views.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | src/pumpwood_djangoviews/views.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | """Create views using Pumpwood pattern."""
import os
import pandas as pd
import simplejson as json
from io import BytesIO
from django.conf import settings
from django.http import HttpResponse
from rest_framework.parsers import JSONParser
from rest_framework import viewsets, status
from rest_framework.response import Response
from werkzeug.utils import secure_filename
from pumpwood_communication import exceptions
from pumpwood_communication.serializers import PumpWoodJSONEncoder
from django.db.models.fields import NOT_PROVIDED
from pumpwood_djangoviews.renderer import PumpwoodJSONRenderer
from pumpwood_djangoviews.query import filter_by_dict
from pumpwood_djangoviews.action import load_action_parameters
from pumpwood_djangoviews.aux.map_django_types import django_map
from django.db.models.fields.files import FieldFile
def save_serializer_instance(serializer_instance):
is_valid = serializer_instance.is_valid()
if is_valid:
return serializer_instance.save()
else:
raise exceptions.PumpWoodException(serializer_instance.errors)
class PumpWoodRestService(viewsets.ViewSet):
"""Basic View-Set for pumpwood rest end-points."""
_view_type = "simple"
renderer_classes = [PumpwoodJSONRenderer]
#####################
# Route information #
endpoint_description = None
dimentions = {}
icon = None
#####################
service_model = None
storage_object = None
microservice = None
trigger = False
# List fields
serializer = None
list_fields = None
foreign_keys = {}
file_fields = {}
# Front-end uses 50 as limit to check if all data have been fetched,
# if change this parameter, be sure to update front-end list component.
list_paginate_limit = 50
@staticmethod
def _allowed_extension(filename, allowed_extensions):
extension = 'none'
if '.' in filename:
extension = filename.rsplit('.', 1)[1].lower()
if "*" not in allowed_extensions:
if extension not in allowed_extensions:
return [(
"File {filename} with extension {extension} not " +
"allowed.\n Allowed extensions: {allowed_extensions}"
).format(filename=filename, extension=extension,
allowed_extensions=str(allowed_extensions))]
return []
def list(self, request):
"""
View function to list objects with pagination.
The number of objects is limited by
settings.REST_FRAMEWORK['PAGINATE_BY']. To get the next page, use
exclude_dict={'pk__in': [list of the received pks]} to fetch more
objects.
The query itself is built with the .query.filter_by_dict function.
:param request.data['filter_dict']: Dictionary passed as
objects.filter(**filter_dict)
:type request.data['filter_dict']: dict
:param request.data['exclude_dict']: Dictionary passed as
objects.exclude(**exclude_dict)
:type request.data['exclude_dict']: dict
:param request.data['order_by']: List passed as
objects.order_by(*order_by)
:type request.data['order_by']: list
:return: A list of objects using list_serializer
"""
try:
request_data = request.data
limit = request_data.pop("limit", None)
list_paginate_limit = limit or self.list_paginate_limit
fields = request_data.pop("fields", None)
list_fields = fields or self.list_fields
arg_dict = {'query_set': self.service_model.objects.all()}
arg_dict.update(request_data)
query_set = filter_by_dict(**arg_dict)[:list_paginate_limit]
return Response(self.serializer(
query_set, many=True, fields=list_fields).data)
except TypeError as e:
raise exceptions.PumpWoodQueryException(message=str(e))
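# A minimal example payload for list(), assuming a model with 'status' and
# 'description' fields (field names are illustrative); the keys map directly
# to the queryset calls described in the docstring above:
#
#   {
#       "filter_dict": {"status__in": ["active"]},
#       "exclude_dict": {"pk__in": [1, 2, 3]},
#       "order_by": ["-pk"],
#       "limit": 50,
#       "fields": ["pk", "description"]
#   }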
def list_without_pag(self, request):
"""List data without pagination.
View function to list objects. Basically the same as list, but without
limitation by settings.REST_FRAMEWORK['PAGINATE_BY'].
:param request.data['filter_dict']: Dictionary passed as
objects.filter(**filter_dict)
:type request.data['filter_dict']: dict
:param request.data['exclude_dict']: Dictionary passed as
objects.exclude(**exclude_dict)
:type request.data['exclude_dict']: dict
:param request.data['order_by']: List passed as
objects.order_by(*order_by)
:type request.data['order_by']: list
:return: A list of objects using list_serializer
.. note::
Be careful with the number of the objects that will be retrieved
"""
try:
request_data = request.data
fields = request_data.pop("fields", None)
list_fields = fields or self.list_fields
arg_dict = {'query_set': self.service_model.objects.all()}
arg_dict.update(request_data)
query_set = filter_by_dict(**arg_dict)
return Response(self.serializer(
query_set, many=True, fields=list_fields).data)
except TypeError as e:
raise exceptions.PumpWoodQueryException(
message=str(e))
def retrieve(self, request, pk=None):
"""
Retrieve view; uses the serializer to return the object with the given pk.
:param int pk: Object pk to be retrieved
:return: The representation of the object passed by
self.serializer
:rtype: dict
"""
obj = self.service_model.objects.get(pk=pk)
return Response(self.serializer(obj, many=False).data)
def retrieve_file(self, request, pk: int):
"""
Read file without streaming.
Args:
pk (int): Pk of the object that holds the file field.
file_field (str): File field to be read.
Returns:
The file content as bytes.
"""
if self.storage_object is None:
raise exceptions.PumpWoodForbidden(
"storage_object not set")
file_field = request.query_params.get('file-field', None)
if file_field not in self.file_fields.keys():
msg = (
"'{file_field}' must be set on file_fields "
"dictionary.").format(file_field=file_field)
raise exceptions.PumpWoodForbidden(msg)
obj = self.service_model.objects.get(id=pk)
file_path = getattr(obj, file_field)
if isinstance(file_path, FieldFile):
file_path = file_path.name
if file_path is None:
raise exceptions.PumpWoodObjectDoesNotExist(
"field [{}] not found at object".format(file_field))
file_data = self.storage_object.read_file(file_path)
file_name = os.path.basename(file_path)
response = HttpResponse(content=BytesIO(file_data["data"]))
response['Content-Type'] = file_data["content_type"]
response['Content-Disposition'] = \
'attachment; filename=%s' % file_name
return response
def delete(self, request, pk=None):
"""
Delete view.
:param int pk: Object pk to be retrieved
"""
obj = self.service_model.objects.get(pk=pk)
return_data = self.serializer(obj, many=False).data
obj.delete()
return Response(return_data, status=200)
def delete_many(self, request):
"""
Delete many data using filter.
:param request.data['filter_dict']: Dictionary passed as
objects.filter(**filter_dict)
:type request.data['filter_dict']: dict
:param request.data['exclude_dict']: Dictionary passed as
objects.exclude(**exclude_dict)
:type request.data['exclude_dict']: dict
:return: True if delete is ok
"""
try:
arg_dict = {'query_set': self.service_model.objects.all()}
arg_dict.update(request.data)
query_set = filter_by_dict(**arg_dict)
query_set.delete()
return Response(True, status=200)
except TypeError as e:
raise exceptions.PumpWoodQueryException(
message=str(e))
def remove_file_field(self, request, pk: int) -> bool:
"""
Remove file field.
Args:
pk (int): pk of the object.
Kwargs:
No kwargs for this function.
Raises:
PumpWoodForbidden: If file_file is not in file_fields keys of the
view.
PumpWoodException: Propagates exceptions from storage_objects.
"""
file_field = request.query_params.get('file_field', None)
if file_field not in self.file_fields.keys():
raise exceptions.PumpWoodForbidden(
"file_field must be set on self.file_fields dictionary.")
obj = self.service_model.objects.get(id=pk)
file = getattr(obj, file_field)
if file is None:
raise exceptions.PumpWoodObjectDoesNotExist(
"field [{}] not found at object".format(file_field))
else:
file_path = file.name
setattr(obj, file_field, None)
obj.save()
try:
self.storage_object.delete_file(file_path)
return Response(True)
except Exception as e:
raise exceptions.PumpWoodException(str(e))
def save(self, request):
"""
Save or update an object according to request.data.
Object will be updated if request.data['pk'] is not None.
:param dict request.data: Object representation as
self.serializer
:raise PumpWoodException: 'Object model class different from
{service_model} : {service_model}' if request.data['service_model']
is not the same as self.service_model.__name__
"""
request_data: dict = None
if "application/json" in request.content_type.lower():
request_data = request.data
else:
request_data = request.data.dict()
for k in request_data.keys():
if k not in self.file_fields.keys():
request_data[k] = json.loads(request_data[k])
data_pk = request_data.get('pk')
saved_obj = None
for field in self.file_fields.keys():
request_data.pop(field, None)
# update
if data_pk:
data_to_update = self.service_model.objects.get(pk=data_pk)
serializer = self.serializer(
data_to_update, data=request_data,
context={'request': request})
saved_obj = save_serializer_instance(serializer)
response_status = status.HTTP_200_OK
# save
else:
serializer = self.serializer(
data=request_data, context={'request': request})
saved_obj = save_serializer_instance(serializer)
response_status = status.HTTP_201_CREATED
# Uploading files
object_errors = {}
for field in self.file_fields.keys():
field_errors = []
if field in request.FILES:
file = request.FILES[field]
file_name = secure_filename(file.name)
field_errors.extend(self._allowed_extension(
filename=file_name,
allowed_extensions=self.file_fields[field]))
filename = "{}___{}".format(saved_obj.id, file_name)
if len(field_errors) != 0:
object_errors[field] = field_errors
else:
model_class = self.service_model.__name__.lower()
file_path = '{model_class}__{field}/'.format(
model_class=model_class, field=field)
storage_filepath = self.storage_object.write_file(
file_path=file_path, file_name=filename,
data=file.read(),
content_type=file.content_type,
if_exists='overide')
setattr(saved_obj, field, storage_filepath)
if object_errors != {}:
message = "error when saving object: " \
if data_pk is None else "error when updating object: "
payload = object_errors
message_to_append = []
for key, value in object_errors.items():
message_to_append.append(key + ", " + str(value))
message = message + "; ".join(message_to_append)
raise exceptions.PumpWoodObjectSavingException(
message=message, payload=payload)
saved_obj.save()
if self.microservice is not None and self.trigger:
# Process ETLTrigger for the model class
self.microservice.login()
if data_pk is None:
self.microservice.execute_action(
"ETLTrigger", action="process_triggers", parameters={
"model_class": self.service_model.__name__.lower(),
"type": "create",
"pk": None,
"action_name": None})
else:
self.microservice.execute_action(
"ETLTrigger", action="process_triggers", parameters={
"model_class": self.service_model.__name__.lower(),
"type": "update",
"pk": saved_obj.pk,
"action_name": None})
# Overhead: serializing and deserializing the object
return Response(
self.serializer(saved_obj).data, status=response_status)
def get_actions(self):
"""Get all actions with action decorator."""
# this import works here only
import inspect
function_dict = dict(inspect.getmembers(
self.service_model, predicate=inspect.isfunction))
method_dict = dict(inspect.getmembers(
self.service_model, predicate=inspect.ismethod))
method_dict.update(function_dict)
actions = {
name: func for name,
func in method_dict.items()
if getattr(func, 'is_action', False)}
return actions
def list_actions(self, request):
"""List model exposed actions."""
actions = self.get_actions()
action_descriptions = [
action.action_object.to_dict()
for name, action in actions.items()]
return Response(action_descriptions)
def list_actions_with_objects(self, request):
"""List model exposed actions acording to selected objects."""
actions = self.get_actions()
action_descriptions = [
action.action_object.description
for name, action in actions.items()]
return Response(action_descriptions)
def execute_action(self, request, action_name, pk=None):
"""Execute action over object or class using parameters."""
parameters = request.data
actions = self.get_actions()
rest_action_names = list(actions.keys())
if action_name not in rest_action_names:
message = (
"There is no method {action} in rest actions "
"for {class_name}").format(
action=action_name,
class_name=self.service_model.__name__)
raise exceptions.PumpWoodForbidden(
message=message, payload={"action_name": action_name})
action = getattr(self.service_model, action_name)
if pk is None and not action.action_object.is_static_function:
msg_template = (
"Action [{action}] at model [{class_name}] is not "
"a classmethod and not pk provided.")
message = msg_template.format(
action=action_name,
class_name=self.service_model.__name__)
raise exceptions.PumpWoodActionArgsException(
message=message, payload={"action_name": action_name})
if pk is not None and action.action_object.is_static_function:
msg_template = (
"Action [{action}] at model [{class_name}] is a"
"classmethod and pk provided.")
message = msg_template.format(
action=action_name,
class_name=self.service_model.__name__)
raise exceptions.PumpWoodActionArgsException(
message=message, payload={"action_name": action_name})
object_dict = None
action = None
if pk is not None:
model_object = self.service_model.objects.filter(pk=pk).first()
if model_object is None:
message_template = (
"Requested object {service_model}[{pk}] not found.")
temp_service_model = \
self.service_model.__name__
message = message_template.format(
service_model=temp_service_model, pk=pk)
raise exceptions.PumpWoodObjectDoesNotExist(
message=message, payload={
"service_model": temp_service_model, "pk": pk})
action = getattr(model_object, action_name)
object_dict = self.serializer(
model_object, many=False, fields=self.list_fields).data
else:
action = getattr(self.service_model, action_name)
loaded_parameters = load_action_parameters(action, parameters, request)
result = action(**loaded_parameters)
if self.microservice is not None and self.trigger:
self.microservice.login()
self.microservice.execute_action(
"ETLTrigger", action="process_triggers", parameters={
"model_class": self.service_model.__name__.lower(),
"type": "action", "pk": pk, "action_name": action_name})
return Response({
'result': result, 'action': action_name,
'parameters': parameters, 'object': object_dict})
@classmethod
def cls_search_options(cls):
fields = cls.service_model._meta.get_fields()
all_info = {}
# f = fields[10]
for f in fields:
column_info = {}
################################################################
# Do not create relations between models in search description #
is_relation = getattr(f, "is_relation", False)
if is_relation:
continue
################################################################
# Getting correspondent simple type
column_type = f.get_internal_type()
python_type = django_map.get(column_type)
if python_type is None:
msg = (
"Type [{column_type}] not implemented in map dictionary "
"django type -> python type")
raise NotImplementedError(
msg.format(column_type=column_type))
# Getting default value
default = None
f_default = getattr(f, 'default', None)
if f_default != NOT_PROVIDED and f_default is not None:
if callable(f_default):
default = f_default()
else:
default = f_default
primary_key = getattr(f, "primary_key", False)
help_text = str(getattr(f, "help_text", ""))
db_index = getattr(f, "db_index", False)
unique = getattr(f, "unique", False)
column = None
if primary_key:
column = "pk"
else:
column = f.column
column_info = {
'primary_key': primary_key,
"column": column,
"doc_string": help_text,
"type": python_type,
"nullable": f.null,
"default": default,
"indexed": db_index or primary_key,
"unique": unique}
# Get choice options if available
choices = getattr(f, "choices", None)
if choices is not None:
column_info["type"] = "options"
column_info["in"] = [
{"value": choice[0], "description": choice[1]}
for choice in choices]
# Set autoincrement for primary keys
if primary_key:
column_info["column"] = "pk"
column_info["default"] = "#autoincrement#"
column_info["doc_string"] = "object primary key"
# Adjust type if the column is a file field
file_field = cls.file_fields.get(column)
if file_field is not None:
column_info["type"] = "file"
column_info["permited_file_types"] = file_field
all_info[column] = column_info
# Adding description for foreign keys
for key, item in cls.foreign_keys.items():
column = getattr(cls.service_model, key, None)
if column is None:
msg = (
"Foreign Key incorrectly configured at Pumpwood View. "
"[{key}] not found on [{model}]").format(
key=key, model=str(cls.service_model))
raise Exception(msg)
primary_key = getattr(column.field, "primary_key", False)
help_text = str(getattr(column.field, "help_text", ""))
db_index = getattr(column.field, "db_index", False)
unique = getattr(column.field, "unique", False)
null = getattr(column.field, "null", False)
# Getting default value
default = None
f_default = getattr(column.field, 'default', None)
if f_default != NOT_PROVIDED and f_default is not None:
if callable(f_default):
default = f_default()
else:
default = f_default
column_info = {
'primary_key': primary_key,
"column": key,
"doc_string": help_text,
"type": "foreign_key",
"nullable": null,
"default": default,
"indexed": db_index or primary_key,
"unique": unique}
if isinstance(item, dict):
column_info["model_class"] = item["model_class"]
column_info["many"] = item["many"]
elif isinstance(item, str):
column_info["model_class"] = item
column_info["many"] = False
else:
msg = (
"foreign_key not correctly defined, check column"
"[{key}] from model [{model}]").format(
key=key, model=str(cls.service_model))
raise Exception(msg)
all_info[key] = column_info
return all_info
def search_options(self, request):
"""
Return options to be used in the list function.
:return: Dictionary with options for list parameters
:rtype: dict
.. note::
Must be implemented
"""
return Response(self.cls_search_options())
def fill_options(self, request):
"""
Return options for object update according to its partial data.
:param dict request.data: Partial object data.
:return: A dictionary with options for different object values
:rtype: dict
.. note::
Must be implemented
"""
return Response(self.cls_search_options())
class PumpWoodDataBaseRestService(PumpWoodRestService):
"""This view extends PumpWoodRestService, including pivot function."""
_view_type = "data"
model_variables = []
"""Specify which model variables will be returned in pivot. Line index are
the model_variables - columns (function pivot parameter) itens."""
expected_cols_bulk_save = []
"""Set the collumns needed at bulk_save."""
def pivot(self, request):
"""
Pivot QuerySet data according to the columns selected and the filters passed.
:param request.data['filter_dict']: Dictionary passed as
objects.filter(**filter_dict)
:type request.data['filter_dict']: dict
:param request.data['exclude_dict']: Dictionary passed as
objects.exclude(**exclude_dict)
:type request.data['exclude_dict']: dict
:param request.data['order_by']: List passed as
objects.order_by(*order_by)
:type request.data['order_by']: list
:param request.data['columns']: Variables to be used as pivot columns
:type request.data['columns']: list
:param request.data['format']: Format used in
pandas.DataFrame().to_dict()
:type request.data['columns']: str
:return: Return database data pivoted according to the columns parameter
:rtype: pandas.DataFrame converted to dictionary
"""
columns = request.data.get('columns', [])
format = request.data.get('format', 'list')
model_variables = request.data.get('variables')
show_deleted = request.data.get('show_deleted', False)
model_variables = model_variables or self.model_variables
if type(columns) != list:
raise exceptions.PumpWoodException(
'Columns must be a list of elements.')
if len(set(columns) - set(model_variables)) != 0:
raise exceptions.PumpWoodException(
'Column chosen as pivot is not at model variables')
index = list(set(model_variables) - set(columns))
filter_dict = request.data.get('filter_dict', {})
exclude_dict = request.data.get('exclude_dict', {})
order_by = request.data.get('order_by', {})
if hasattr(self.service_model, 'deleted'):
if not show_deleted:
filter_dict["deleted"] = False
arg_dict = {'query_set': self.service_model.objects.all(),
'filter_dict': filter_dict,
'exclude_dict': exclude_dict,
'order_by': order_by}
query_set = filter_by_dict(**arg_dict)
try:
filtered_objects_as_list = list(
query_set.values_list(*(model_variables)))
except TypeError as e:
raise exceptions.PumpWoodQueryException(message=str(e))
melted_data = pd.DataFrame(
filtered_objects_as_list, columns=model_variables)
if len(columns) == 0:
return Response(melted_data.to_dict(format))
if melted_data.shape[0] == 0:
return Response({})
else:
if "value" not in melted_data.columns:
raise exceptions.PumpWoodException(
"'value' column not at melted data, it is not possible"
" to pivot dataframe.")
pivoted_table = pd.pivot_table(
melted_data, values='value', index=index,
columns=columns, aggfunc=lambda x: tuple(x)[0])
return Response(
pivoted_table.reset_index().to_dict(format))
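# A worked example for pivot(), with made-up data. With
# model_variables == ['time', 'modeling_unit_id', 'value'] and an empty
# 'columns' list, the response is the melted data itself, e.g. for
# format='list':
#
#   {'time': [1, 1, 2],
#    'modeling_unit_id': ['a', 'b', 'a'],
#    'value': [10.0, 20.0, 30.0]}
#
# Passing columns=['modeling_unit_id'] instead spreads the 'value' column over
# one column per modeling unit, keeping the remaining variables as row index.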
def bulk_save(self, request):
"""
Bulk save data.
Args:
data_to_save(list): List of dictionaries which must have
self.expected_cols_bulk_save.
Return:
dict: ['saved_count']: total of saved objects.
"""
data_to_save = request.data
if data_to_save is None:
raise exceptions.PumpWoodException(
'Post payload must have data_to_save key.')
if len(self.expected_cols_bulk_save) == 0:
raise exceptions.PumpWoodException('Bulk save not available.')
pd_data_to_save = pd.DataFrame(data_to_save)
pd_data_cols = set(list(pd_data_to_save.columns))
if len(set(self.expected_cols_bulk_save) - pd_data_cols) == 0:
objects_to_load = []
for d in data_to_save:
new_obj = self.service_model(**d)
objects_to_load.append(new_obj)
self.service_model.objects.bulk_create(objects_to_load)
return Response({'saved_count': len(objects_to_load)})
else:
template = 'Expected columns and data columns do not match:' + \
'\nExpected columns:{expected}' + \
'\nData columns:{data_cols}'
raise exceptions.PumpWoodException(template.format(
expected=set(self.expected_cols_bulk_save),
data_cols=pd_data_cols,))
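# A minimal bulk_save payload, assuming expected_cols_bulk_save is set to
# ['time', 'modeling_unit_id', 'value'] on the concrete view (names are
# illustrative):
#
#   [
#       {"time": "2021-01-01T00:00:00", "modeling_unit_id": 1, "value": 10.0},
#       {"time": "2021-01-01T00:00:00", "modeling_unit_id": 2, "value": 20.0}
#   ]
#
# The response is {'saved_count': 2} when every expected column is present.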
| 38.955941 | 79 | 0.579512 | 3,136 | 29,178 | 5.185906 | 0.122449 | 0.041936 | 0.02558 | 0.015557 | 0.376437 | 0.339851 | 0.317408 | 0.285126 | 0.281436 | 0.250507 | 0 | 0.001527 | 0.326479 | 29,178 | 748 | 80 | 39.008021 | 0.826023 | 0.180753 | 0 | 0.314465 | 0 | 0 | 0.097511 | 0.001956 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039832 | false | 0 | 0.039832 | 0 | 0.165618 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30b35825288ae55d3b4e9f3fda30cd87266a239e | 4,030 | py | Python | test_contest/solutions/ladders.py | tbuzzelli/Veris | b2e9bd5f944a60365de8c18f17e041fa65f9e74a | [
"Apache-2.0"
] | 7 | 2018-09-26T17:17:01.000Z | 2020-12-20T17:23:33.000Z | test_contest/solutions/ladders.py | tbuzzelli/Veris | b2e9bd5f944a60365de8c18f17e041fa65f9e74a | [
"Apache-2.0"
] | 4 | 2018-09-26T17:49:24.000Z | 2020-12-20T17:15:37.000Z | test_contest/solutions/ladders.py | tbuzzelli/Veris | b2e9bd5f944a60365de8c18f17e041fa65f9e74a | [
"Apache-2.0"
] | 1 | 2021-12-03T17:49:50.000Z | 2021-12-03T17:49:50.000Z | # Written by Will Cromar
# Python 3.6
from collections import deque
# DX/DY array. In order of precedence, we'll move
# left, right, up, and down
DX = [-1, 1, 0, 0]
DY = [0, 0, -1, 1]
# Constants for types of spaces
EMPTY = '.'
LADDER = '#'
CHUTE = '*'
START = 'S'
EXIT = 'E'
# Primary business functions. Includes several helpers
def solve(grid, l, w, h):
# Generator that gives all adjacent spots to the given one
def adjacent(x, y, z):
# print("call to adj w/", (x, y, z))
for dx, dy in zip(DX, DY):
# print(x + dx, y + dy, z)
yield x + dx, y + dy, z
# True if the result is in bounds, false otherwise
def inBounds(x, y, z):
return x >= 0 and y >= 0 and z >= 0 and x < w and y < l and z < h
# Gives all adjacent nodes that are in bounds
def moves(x, y, z):
# Get all adjacent nodes and then filter them
return filter(lambda triplet: inBounds(*triplet), adjacent(x, y, z))
# Follow a chute to the bottom
def chuteBottom(x, y, z):
# While the spot below us is still a chute, keep moving down levels
while z > 0 and grid[z - 1][y][x] == CHUTE:
z -= 1
return x, y, z
# Gives vertically adjacent spots that have ladders
def ladderSpots(x, y, z):
# If there's a ladder above us, yield it
if z < h - 1 and grid[z + 1][y][x] == LADDER:
yield x, y, z + 1
# Likewise, yield ladders below us
if z > 0 and grid[z - 1][y][x] == LADDER:
yield x, y, z - 1
# Searches for the start symbol and returns an ordered triplet
# or None
def findStart():
for z, level in enumerate(grid):
for y, row in enumerate(level):
if START in row:
return row.find(START), y, z
# Finally, we can move on to our breadth-first search
# This will store which spots we've seen before
seen = set()
# Will store an ordered triplet and a number of steps
q = deque()
# Find the starting position and add it to the queue w/ 0 steps
start = findStart()
seen.add(start)
q.append(start)
# Run BFS until the queue empties
while len(q) > 0:
# Unpack the data from the queue
triplet = q.popleft()
x, y, z = triplet
# If we've found the exit, we're done!
if grid[z][y][x] == EXIT:
return True
# If we stepped on a chute, fall
if grid[z][y][x] == CHUTE:
next = chuteBottom(x, y, z)
if next not in seen:
seen.add(next)
q.appendleft(next)
# If we're not at the bottom of the chute, then we
# can't move anywhere else
if next != triplet:
continue
# If we stepped on a ladder, see where we can go
if grid[z][y][x] == LADDER:
for next in ladderSpots(x, y, z):
if next not in seen:
seen.add(next)
q.append(next)
# Otherwise, explore our "slice" of the map
for next in moves(x, y, z):
if next not in seen:
seen.add(next)
q.append(next)
return False
# Number of test cases
n = int(input())
# Process each test case
for case in range(n):
# Get the dimensions of the grid
l, w, h = map(int, input().strip().split())
# Read it in
grid = []
for _ in range(h):
slice = []
for _ in range(l):
slice.append(input().strip())
grid.append(slice)
# I reverse the list to make z = 0 correspond to the lowest
# slice on the map
grid.reverse()
# Determine if there is a path from start to end
ans = solve(grid, l, w, h)
# If there is, print "Yes", otherwise, "No"
print("Map #%d: %s" % (case + 1, "Yes" if ans else "No"))
| 29.416058 | 77 | 0.525806 | 606 | 4,030 | 3.493399 | 0.320132 | 0.014171 | 0.019839 | 0.009447 | 0.140293 | 0.083609 | 0.083609 | 0.083609 | 0.076523 | 0.076523 | 0 | 0.010285 | 0.372705 | 4,030 | 136 | 78 | 29.632353 | 0.827136 | 0.368983 | 0 | 0.115942 | 0 | 0 | 0.008872 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.101449 | false | 0 | 0.014493 | 0.028986 | 0.202899 | 0.014493 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30b3709aef46a9041782b934920c7987f957d49e | 1,789 | py | Python | fits2fshr.py | rainwoodman/fsfits | e052c159915e7a67f10972c7851d2924afeaa302 | [
"MIT"
] | null | null | null | fits2fshr.py | rainwoodman/fsfits | e052c159915e7a67f10972c7851d2924afeaa302 | [
"MIT"
] | null | null | null | fits2fshr.py | rainwoodman/fsfits | e052c159915e7a67f10972c7851d2924afeaa302 | [
"MIT"
] | null | null | null | import fitsio
import fsfits
from argparse import ArgumentParser
import json
import numpy
ap = ArgumentParser()
ap.add_argument('--check', action='store_true', default=False,
help="Test if output contains identical information to input")
ap.add_argument('input')
ap.add_argument('output')
ns = ap.parse_args()
def main():
fin = fitsio.FITS(ns.input)
if ns.check:
fout = fsfits.FSHR.open(ns.output)
else:
fout = fsfits.FSHR.create(ns.output)
with fout:
for hdui, hdu in enumerate(fin):
header = hdu.read_header()
header = dict(header)
type = hdu.get_exttype()
if hdu.has_data():
if type in (
'BINARY_TBL',
'ASCII_TBL'):
data = hdu[:]
elif type in ('IMAGE_HDU'):
data = hdu[:, :]
else:
data = None
if ns.check:
block = fout["HDU-%04d" % hdui]
with block:
for key in header:
assert header[key] == block.metadata[key]
if data is not None:
assert (block[...] == data).all()
else:
assert block[...] is None
else:
if data is not None:
block = fout.create_block("HDU-%04d" % hdui,
data.shape, data.dtype)
else:
block = fout.create_block("HDU-%04d" % hdui,
(0,), None)
with block:
block.metadata.update(header)
if data is not None:
block[...] = data
main()
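# Example invocations (file names are hypothetical):
#
#   python fits2fshr.py catalog.fits catalog.fshr          # convert FITS HDUs
#   python fits2fshr.py --check catalog.fits catalog.fshr  # verify an existing output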
| 28.854839 | 70 | 0.455562 | 184 | 1,789 | 4.358696 | 0.380435 | 0.018703 | 0.048628 | 0.041147 | 0.137157 | 0.118454 | 0.074813 | 0 | 0 | 0 | 0 | 0.007007 | 0.441587 | 1,789 | 61 | 71 | 29.327869 | 0.795796 | 0 | 0 | 0.269231 | 0 | 0 | 0.074902 | 0 | 0 | 0 | 0 | 0 | 0.057692 | 1 | 0.019231 | false | 0 | 0.096154 | 0 | 0.115385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30b43d3abb832538af7887238ea688d9f984e472 | 4,086 | py | Python | window_switcher/window.py | include4eto/linux-appfinder | 23b00d7e18760409e53aad7deb0544d7a4850bc3 | [
"MIT"
] | 2 | 2018-04-24T22:05:08.000Z | 2018-04-25T09:50:37.000Z | window_switcher/window.py | include4eto/linux-appfinder | 23b00d7e18760409e53aad7deb0544d7a4850bc3 | [
"MIT"
] | 1 | 2018-04-25T09:26:59.000Z | 2018-04-25T09:36:43.000Z | window_switcher/window.py | include4eto/window-switcher | 23b00d7e18760409e53aad7deb0544d7a4850bc3 | [
"MIT"
] | 2 | 2019-11-07T03:43:53.000Z | 2021-03-25T12:14:01.000Z | from tkinter import Entry, Listbox, StringVar
import sys, tkinter, subprocess
from window_switcher.aux import get_windows
class Window:
FONT = ('Monospace', 11)
ITEM_HEIGHT = 22
MAX_FOUND = 10
BG_COLOR = '#202b3a'
FG_COLOR = '#ced0db'
def resize(self, items):
if self.resized:
return
self.root.geometry('{0}x{1}'.format(self.width, self.height + items * Window.ITEM_HEIGHT))
self.resized = True
def __init__(self, root, width, height):
self.root = root
self.width = width
self.height = height
self.all_windows = []
self.resized = False
# master.geometry(500)
root.title("window switcher")
root.resizable(width=False, height=False)
root.configure(background=Window.BG_COLOR)
# ugly tkinter code below
sv = StringVar()
sv.trace("w", lambda name, index, mode, sv=sv: self.on_entry(sv))
self.main_entry = Entry(
root,
font=Window.FONT,
width=1000,
textvariable=sv,
bg=Window.BG_COLOR,
fg=Window.FG_COLOR,
insertbackground=Window.FG_COLOR,
bd=0
)
self.main_entry.grid(row=0, column=0, padx=10)
self.main_entry.focus_set()
self.listbox = Listbox(
root,
height=Window.ITEM_HEIGHT,
font=Window.FONT,
highlightthickness=0,
borderwidth=0,
bg=Window.BG_COLOR,
fg=Window.FG_COLOR,
selectbackground='#2c3c51',
selectforeground='#cedaed'
)
self.listbox.grid(row=1, column=0, sticky='we', padx=10, pady=10)
# key bindings
self.main_entry.bind('<Control-a>', self.select_all)
self.main_entry.bind('<Up>', self.select_prev)
self.main_entry.bind('<Down>', self.select_next)
self.main_entry.bind('<Return>', self.select_window)
self.root.bind('<Escape>', lambda e: sys.exit())
# self.resize(Window.MAX_FOUND)
self.initial_get(None)
def initial_get(self, event):
self.all_windows = get_windows()
self.find_windows('')
def select_all(self, event):
# select text
self.main_entry.select_clear()
self.main_entry.select_range(0, 'end')
# move cursor to the end
self.main_entry.icursor('end')
return 'break'
def find_windows(self, text):
text = text.lower()
found = [window for window in self.all_windows if window['name'].find(text) != -1]
# print(found)
self.found = found
self.listbox.delete(0, 'end')
for i, item in enumerate(found):
if i >= Window.MAX_FOUND:
break
self.listbox.insert('end', item['name'])
self.resize(min(len(found), Window.MAX_FOUND))
# select first element
self.listbox.selection_set(0)
def select_next(self, event):
if len(self.found) == 0:
return
idx = self.listbox.curselection()[0]
max = self.listbox.size()
idx += 1
if idx >= max:
idx = 0
self.listbox.selection_clear(0, 'end')
self.listbox.selection_set(idx)
def select_prev(self, event):
if len(self.found) == 0:
return
idx = self.listbox.curselection()[0]
max = self.listbox.size()
idx -= 1
if idx < 0:
idx = max - 1
self.listbox.selection_clear(0, 'end')
self.listbox.selection_set(idx)
def select_window(self, event):
idx = self.listbox.curselection()[0]
id = self.found[idx]['id']
# switch to window and exit
# wmctrl -ia <id>
subprocess.call(['wmctrl', '-ia', id])
# print(subprocess.check_output(['wmctrl', '-ia', id]).decode('utf-8'))
sys.exit(0)
def on_entry(self, newtext):
search_test = newtext.get()
self.find_windows(search_test)
return True
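# A minimal launcher sketch, assuming this module is importable as
# window_switcher.window; the width/height values are arbitrary choices:
#
#   import tkinter
#   from window_switcher.window import Window
#
#   root = tkinter.Tk()
#   Window(root, width=500, height=40)
#   root.mainloop()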
| 27.986301 | 98 | 0.564366 | 496 | 4,086 | 4.52621 | 0.282258 | 0.068597 | 0.057906 | 0.03029 | 0.170601 | 0.158575 | 0.158575 | 0.158575 | 0.131849 | 0.131849 | 0 | 0.01958 | 0.312531 | 4,086 | 145 | 99 | 28.17931 | 0.779637 | 0.065835 | 0 | 0.213592 | 0 | 0 | 0.037057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.087379 | false | 0 | 0.029126 | 0 | 0.223301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30b4986ed423fc0d856c04b8b25dc611fff18369 | 3,394 | py | Python | imcsdk/mometa/bios/BiosVfSgxLePubKeyHash.py | ecoen66/imcsdk | b10eaa926a5ee57cea7182ae0adc8dd1c818b0ab | [
"Apache-2.0"
] | 31 | 2016-06-14T07:23:59.000Z | 2021-09-12T17:17:26.000Z | imcsdk/mometa/bios/BiosVfSgxLePubKeyHash.py | sthagen/imcsdk | 1831eaecb5960ca03a8624b1579521749762b932 | [
"Apache-2.0"
] | 109 | 2016-05-25T03:56:56.000Z | 2021-10-18T02:58:12.000Z | imcsdk/mometa/bios/BiosVfSgxLePubKeyHash.py | sthagen/imcsdk | 1831eaecb5960ca03a8624b1579521749762b932 | [
"Apache-2.0"
] | 67 | 2016-05-17T05:53:56.000Z | 2022-03-24T15:52:53.000Z | """This module contains the general information for BiosVfSgxLePubKeyHash ManagedObject."""
from ...imcmo import ManagedObject
from ...imccoremeta import MoPropertyMeta, MoMeta
from ...imcmeta import VersionMeta
class BiosVfSgxLePubKeyHashConsts:
VP_SGX_LE_PUB_KEY_HASH0_PLATFORM_DEFAULT = "platform-default"
VP_SGX_LE_PUB_KEY_HASH1_PLATFORM_DEFAULT = "platform-default"
VP_SGX_LE_PUB_KEY_HASH2_PLATFORM_DEFAULT = "platform-default"
VP_SGX_LE_PUB_KEY_HASH3_PLATFORM_DEFAULT = "platform-default"
class BiosVfSgxLePubKeyHash(ManagedObject):
"""This is BiosVfSgxLePubKeyHash class."""
consts = BiosVfSgxLePubKeyHashConsts()
naming_props = set([])
mo_meta = {
"classic": MoMeta("BiosVfSgxLePubKeyHash", "biosVfSgxLePubKeyHash", "Sgx-Le-PubKeyHash", VersionMeta.Version421a, "InputOutput", 0xff, [], ["admin"], ['biosPlatformDefaults', 'biosSettings'], [], [None]),
}
prop_meta = {
"classic": {
"child_action": MoPropertyMeta("child_action", "childAction", "string", VersionMeta.Version421a, MoPropertyMeta.INTERNAL, None, None, None, None, [], []),
"dn": MoPropertyMeta("dn", "dn", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x2, 0, 255, None, [], []),
"rn": MoPropertyMeta("rn", "rn", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x4, 0, 255, None, [], []),
"status": MoPropertyMeta("status", "status", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x8, None, None, None, ["", "created", "deleted", "modified", "removed"], []),
"vp_sgx_le_pub_key_hash0": MoPropertyMeta("vp_sgx_le_pub_key_hash0", "vpSgxLePubKeyHash0", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x10, None, None, r"""[0-9a-fA-F]{1,16}""", ["platform-default"], []),
"vp_sgx_le_pub_key_hash1": MoPropertyMeta("vp_sgx_le_pub_key_hash1", "vpSgxLePubKeyHash1", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x20, None, None, r"""[0-9a-fA-F]{1,16}""", ["platform-default"], []),
"vp_sgx_le_pub_key_hash2": MoPropertyMeta("vp_sgx_le_pub_key_hash2", "vpSgxLePubKeyHash2", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x40, None, None, r"""[0-9a-fA-F]{1,16}""", ["platform-default"], []),
"vp_sgx_le_pub_key_hash3": MoPropertyMeta("vp_sgx_le_pub_key_hash3", "vpSgxLePubKeyHash3", "string", VersionMeta.Version421a, MoPropertyMeta.READ_WRITE, 0x80, None, None, r"""[0-9a-fA-F]{1,16}""", ["platform-default"], []),
},
}
prop_map = {
"classic": {
"childAction": "child_action",
"dn": "dn",
"rn": "rn",
"status": "status",
"vpSgxLePubKeyHash0": "vp_sgx_le_pub_key_hash0",
"vpSgxLePubKeyHash1": "vp_sgx_le_pub_key_hash1",
"vpSgxLePubKeyHash2": "vp_sgx_le_pub_key_hash2",
"vpSgxLePubKeyHash3": "vp_sgx_le_pub_key_hash3",
},
}
def __init__(self, parent_mo_or_dn, **kwargs):
self._dirty_mask = 0
self.child_action = None
self.status = None
self.vp_sgx_le_pub_key_hash0 = None
self.vp_sgx_le_pub_key_hash1 = None
self.vp_sgx_le_pub_key_hash2 = None
self.vp_sgx_le_pub_key_hash3 = None
ManagedObject.__init__(self, "BiosVfSgxLePubKeyHash", parent_mo_or_dn, **kwargs)
| 50.656716 | 235 | 0.675015 | 385 | 3,394 | 5.58961 | 0.231169 | 0.048792 | 0.065056 | 0.092937 | 0.493959 | 0.47723 | 0.192379 | 0.153346 | 0.139405 | 0.079461 | 0 | 0.036878 | 0.177077 | 3,394 | 66 | 236 | 51.424242 | 0.73362 | 0.035946 | 0 | 0.042553 | 0 | 0 | 0.291411 | 0.103988 | 0 | 0 | 0.008896 | 0 | 0 | 1 | 0.021277 | false | 0 | 0.06383 | 0 | 0.319149 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30babde668ddbdfd3b25264b810e1027b167156a | 1,791 | py | Python | setup.py | naritotakizawa/getsize | 25989cdb07e343c6684586768e60363f366f8c71 | [
"MIT"
] | 1 | 2017-08-07T18:15:37.000Z | 2017-08-07T18:15:37.000Z | setup.py | naritotakizawa/getsize | 25989cdb07e343c6684586768e60363f366f8c71 | [
"MIT"
] | null | null | null | setup.py | naritotakizawa/getsize | 25989cdb07e343c6684586768e60363f366f8c71 | [
"MIT"
] | null | null | null | import os
import sys
from setuptools import find_packages, setup
from setuptools.command.test import test as TestCommand
class PyTest(TestCommand):
user_options = [('pytest-args=', 'a', "Arguments to pass to py.test")]
def initialize_options(self):
TestCommand.initialize_options(self)
self.pytest_args = []
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = []
self.test_suite = True
def run_tests(self):
#import here, cause outside the eggs aren't loaded
import pytest
errno = pytest.main(self.pytest_args)
sys.exit(errno)
with open(os.path.join(os.path.dirname(__file__), 'README.rst'), 'rb') as readme:
README = readme.read()
setup(
name='getsize',
version='0.1',
packages=find_packages(exclude=('tests',)),
include_package_data=True,
license='MIT License',
description='Command Line Tools For Get File and Directory Size',
long_description=README.decode('utf-8'),
url='https://github.com/naritotakizawa/getsize',
author='Narito Takizawa',
author_email='toritoritorina@gmail.com',
classifiers=[
"Environment :: Console",
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
"Programming Language :: Python :: 3",
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
"Topic :: Software Development :: Libraries :: Python Modules",
],
tests_require = ['pytest', 'pytest-cov'],
cmdclass = {'test': PyTest},
entry_points={'console_scripts': [
'pysize = getsize.main:main',
]},
) | 32.563636 | 82 | 0.624232 | 199 | 1,791 | 5.502513 | 0.547739 | 0.086758 | 0.114155 | 0.094977 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007407 | 0.246231 | 1,791 | 55 | 83 | 32.563636 | 0.803704 | 0.027359 | 0 | 0 | 0 | 0 | 0.33827 | 0.014218 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0.021739 | 0.108696 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
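# Example usage of the packaging above (commands are illustrative; the exact
# pysize arguments depend on getsize.main):
#
#   pip install .            # installs the 'pysize' console script
#   pysize some_directory
#   python setup.py test     # runs pytest through the PyTest command class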
30bb85a4d211b4c8e0c05cca0ae92e39839b9906 | 1,194 | py | Python | dockwidgetpluginbase.py | sandroklippel/qgis-dockable-plugin | 9410d20646155d1fe90ec44caa7d15a607d1e72d | [
"MIT"
] | null | null | null | dockwidgetpluginbase.py | sandroklippel/qgis-dockable-plugin | 9410d20646155d1fe90ec44caa7d15a607d1e72d | [
"MIT"
] | null | null | null | dockwidgetpluginbase.py | sandroklippel/qgis-dockable-plugin | 9410d20646155d1fe90ec44caa7d15a607d1e72d | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_DockWidgetPluginBase(object):
def setupUi(self, DockWidgetPluginBase):
DockWidgetPluginBase.setObjectName("DockWidgetPluginBase")
DockWidgetPluginBase.resize(232, 141)
self.dockWidgetContents = QtWidgets.QWidget()
self.dockWidgetContents.setObjectName("dockWidgetContents")
self.gridLayout = QtWidgets.QGridLayout(self.dockWidgetContents)
self.gridLayout.setObjectName("gridLayout")
self.label = QtWidgets.QLabel(self.dockWidgetContents)
self.label.setObjectName("label")
self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
# DockWidgetPluginBase.setWidget(self.dockWidgetContents)
self.retranslateUi(DockWidgetPluginBase)
QtCore.QMetaObject.connectSlotsByName(DockWidgetPluginBase)
def retranslateUi(self, DockWidgetPluginBase):
_translate = QtCore.QCoreApplication.translate
DockWidgetPluginBase.setWindowTitle(_translate("DockWidgetPluginBase", "Test DockWidget"))
self.label.setText(_translate("DockWidgetPluginBase", "Replace this QLabel\n"
"with the desired\n"
"plugin content."))
| 42.642857 | 98 | 0.741206 | 101 | 1,194 | 8.722772 | 0.465347 | 0.124858 | 0.088536 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011964 | 0.159967 | 1,194 | 27 | 99 | 44.222222 | 0.866401 | 0.064489 | 0 | 0 | 0 | 0 | 0.145553 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.05 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30bc0d5caa2ba95a5a11a4f13508d048a4057cd0 | 684 | py | Python | setup.py | KirstensGitHub/attention-memory-task | c3d4b13e43134d4cc25c277e5f49220ac48ab931 | [
"MIT"
] | 4 | 2017-10-30T20:46:25.000Z | 2020-10-16T16:28:29.000Z | setup.py | KirstensGitHub/attention-memory-task | c3d4b13e43134d4cc25c277e5f49220ac48ab931 | [
"MIT"
] | 26 | 2017-10-30T20:44:07.000Z | 2019-10-11T20:07:33.000Z | setup.py | KirstensGitHub/attention-memory-task | c3d4b13e43134d4cc25c277e5f49220ac48ab931 | [
"MIT"
] | 7 | 2019-08-21T13:20:41.000Z | 2022-03-01T03:25:26.000Z |
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
with open('README.md') as f:
readme = f.read()
with open('LICENSE') as f:
license = f.read()
setup(
name='attention-memory-task',
version='0.1.0',
description='Attention and Memory Experiment',
long_description='This repository contains the Psychopy code for a psychology experiment exploring the relationship between covert attention and recognition memory.',
author='Contextual Dynamics Laboratory',
author_email='contextualdynamics@gmail.com',
url='https://github.com/ContextLab/attention-memory-task',
license=license,
packages=find_packages(exclude=('tests'))
)
| 29.73913 | 170 | 0.72076 | 85 | 684 | 5.752941 | 0.658824 | 0.04908 | 0.07771 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006932 | 0.156433 | 684 | 22 | 171 | 31.090909 | 0.840555 | 0.030702 | 0 | 0 | 0 | 0 | 0.504545 | 0.074242 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30bc925cd0ed0b7b60bfef9a34e4c38e0b3fa974 | 3,659 | py | Python | blue_print/testLogReg.py | yimingq/COMP90051-SML | 3e1d93d9b1a8cd23f4c05eeb18d615e87d4d6369 | [
"Apache-2.0"
] | 1 | 2020-09-16T04:58:49.000Z | 2020-09-16T04:58:49.000Z | blue_print/testLogReg.py | yimingq/COMP90051-SML | 3e1d93d9b1a8cd23f4c05eeb18d615e87d4d6369 | [
"Apache-2.0"
] | null | null | null | blue_print/testLogReg.py | yimingq/COMP90051-SML | 3e1d93d9b1a8cd23f4c05eeb18d615e87d4d6369 | [
"Apache-2.0"
] | 2 | 2020-03-31T08:55:45.000Z | 2020-09-05T14:02:16.000Z | from numpy import *
import matplotlib.pyplot as plt
import time
from LogReg import trainLogRegres, showLogRegres, predicTestData
# from sklearn.datasets import make_circles
# from sklearn.model_selection import train_test_split
# from sklearn.metrics import accuracy_score
import csv
def loadTrainData():
train_x = []
train_y = []
# fileIn = open('/Users/martinzhang/Desktop/testSet.txt')
# for line in fileIn.readlines():
# lineArr = line.strip().split()
# train_x.append([1, float(lineArr[0]), float(lineArr[1])])
# train_y.append(float(lineArr[2]))
# # return mat(train_x), mat(train_y).transpose()
# for userA, relation in range(relation_dic):
# for userB in range(userlist):
# if user not in relation:
# label = 0
# else:
# label = 1
# train_x.append([1, float(lineArr[0]), float(lineArr[1])])
# train_y.append(label)
# return mat(train_x), mat(train_y).transpose()
with load('/Users/martinzhang/Desktop/compressed_test_7features.npz') as fd:
temp_matrix = fd["temptest"]
# d = [0, 1, 2, 3, 4, 5, 6]
length =10336114
a = ones(length)
temp_matrix_x = temp_matrix[0:length, 0:7]
result = insert(temp_matrix_x, 0, values=a, axis=1)
temp_matrix_y = temp_matrix[0:length, 7]
return mat(result), mat(temp_matrix_y).transpose()
def loadTestData():
# test_list = []
# test_id = []
test_x = []
# test_y = []
file_path = open('/Users/martinzhang/Desktop/ml_data/test-public.txt')
index = -1
for line in file_path.readlines():
index += 1
if index == 0:
pass
else:
line_content = line.strip().split()
data_id = line_content[0]
data_source = line_content[1]
data_sink = line_content[2]
# test_list.append([data_id, data_source, data_sink])
# test_id.append(data_id)
test_x.append([data_id, data_source, data_sink])
# test_x
if index == 100:
return mat(test_x)
#
#
## step 1: load data
print("step 1: load data...")
train_x, train_y = loadTrainData()
# for item in train_x:
# print item
# test_list = loadTestData()
# for test_data in test_list:
# test_x = train_x
# test_y = train_y
# test_x = loadTestData()
## step 2: training...
print("step 2: training...")
opts = {'alpha': 0.01, 'maxIter': 1, 'optimizeType': 'gradDescent'}
optimalWeights = trainLogRegres(train_x, train_y, opts)
# print(optimalWeights.transpose().tolist()[0])
# for line in optimalWeights:
# print(line[0])
print(optimalWeights.transpose().tolist()[0][1])
print(shape(optimalWeights))
## step 3: predicting
# print("step 3: predicting...")
# accuracy = predicTestData(optimalWeights, test_x, test_y)
# ## step 4: show the result
# print("step 4: show the result...")
# print('The classify accuracy is: %.3f%%' % (accuracy * 100))
# showLogRegres(optimalWeights, train_x, train_y)
# test = loadTestData()
# for item in test:
# print(item)
# def testCSV():
# test_y = []
# # csvfile = open('/Users/martinzhang/Desktop/ml_data/result.csv', 'wb')
# with open('/Users/martinzhang/Desktop/ml_data/result.csv', 'w') as csvfile:
# writer = csv.writer(csvfile)
# for i in range(0, 100):
# # predict = float(sigmoid(test_x[i, 1:] * weights)[0, 0])
# wwwww='ggg'
# # bytes(wwwww, encoding="utf8")
# writer.writerow((i, wwwww))
# csvfile.close()
# print('finishing!')
#
# testCSV()
| 28.364341 | 81 | 0.608363 | 471 | 3,659 | 4.564756 | 0.284501 | 0.027907 | 0.053488 | 0.022326 | 0.21814 | 0.185581 | 0.148837 | 0.148837 | 0.047442 | 0.047442 | 0 | 0.0251 | 0.248702 | 3,659 | 128 | 82 | 28.585938 | 0.757003 | 0.558896 | 0 | 0 | 0 | 0 | 0.089669 | 0.036387 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0.026316 | 0.131579 | 0 | 0.236842 | 0.105263 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30bf63f85cca01c1bcc1223cd6e0ff4f4c12199f | 7,652 | py | Python | tests/exporter/test_metadata.py | HumanCellAtlas/ingest-common | 6a230f9606f64cd787b67c143854db36e012a2b7 | [
"Apache-2.0"
] | null | null | null | tests/exporter/test_metadata.py | HumanCellAtlas/ingest-common | 6a230f9606f64cd787b67c143854db36e012a2b7 | [
"Apache-2.0"
] | null | null | null | tests/exporter/test_metadata.py | HumanCellAtlas/ingest-common | 6a230f9606f64cd787b67c143854db36e012a2b7 | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
from mock import Mock
from ingest.exporter.metadata import MetadataResource, MetadataService, MetadataParseException, MetadataProvenance
class MetadataResourceTest(TestCase):
def test_provenance_from_dict(self):
# given:
uuid_value = '3f3212da-d5d0-4e55-b31d-83243fa02e0d'
data = {
'uuid': {'uuid': uuid_value},
'submissionDate': 'a submission date',
'updateDate': 'an update date',
'dcpVersion': '2019-12-02T13:40:50.520Z',
'content': {
'describedBy': 'https://some-schema/1.2.3'
}
}
# when:
metadata_provenance = MetadataResource.provenance_from_dict(data)
# then:
self.assertIsNotNone(metadata_provenance)
self.assertEqual(uuid_value, metadata_provenance.document_id)
self.assertEqual('a submission date', metadata_provenance.submission_date)
self.assertEqual('2019-12-02T13:40:50.520Z', metadata_provenance.update_date)
def test_provenance_from_dict_fail_fast(self):
# given:
uuid_value = '3f3212da-d5d0-4e55-b31d-83243fa02e0d'
data = {'uuid': uuid_value, # unexpected structure structure
'submissionDate': 'a submission date',
'updateDate': 'an update date'}
# then:
with self.assertRaises(MetadataParseException):
# when
MetadataResource.provenance_from_dict(data)
def test_from_dict(self):
# given:
uuid_value = '3f3212da-d5d0-4e55-b31d-83243fa02e0d'
data = self._create_test_data(uuid_value)
# when:
metadata = MetadataResource.from_dict(data)
# then:
self.assertIsNotNone(metadata)
self.assertEqual('biomaterial', metadata.metadata_type)
self.assertEqual(data['content'], metadata.metadata_json)
self.assertEqual(data['dcpVersion'], metadata.dcp_version)
# and:
self.assertEqual(uuid_value, metadata.uuid)
def test_from_dict_provenance_optional(self):
# given:
uuid = '566be204-a684-4896-bda7-8dbb3e4fc65c'
data_no_provenance = self._create_test_data(uuid)
del data_no_provenance['submissionDate']
# and:
data = self._create_test_data(uuid)
# when:
metadata_no_provenance = MetadataResource.from_dict(data_no_provenance, require_provenance=False)
metadata = MetadataResource.from_dict(data, require_provenance=False)
# then:
self.assertIsNotNone(metadata_no_provenance)
self.assertEqual(uuid, metadata_no_provenance.uuid)
self.assertIsNone(metadata_no_provenance.provenance)
# and:
self.assertIsNotNone(metadata.provenance)
def test_from_dict_fail_fast_with_missing_info(self):
# given:
data = {}
# then:
with self.assertRaises(MetadataParseException):
# when
MetadataResource.from_dict(data)
def test_to_bundle_metadata(self):
# given:
uuid_value = '3f3212da-d5d0-4e55-b31d-83243fa02e0d'
data = self._create_test_data(uuid_value)
metadata = MetadataResource.from_dict(data)
# and:
data_no_provenance = self._create_test_data(uuid_value)
del data_no_provenance['submissionDate']
metadata_no_provenance = MetadataResource.from_dict(data_no_provenance, require_provenance=False)
self.assertIsNone(metadata_no_provenance.provenance)
# when
bundle_metadata = metadata.to_bundle_metadata()
bundle_metadata_no_provenance = metadata_no_provenance.to_bundle_metadata()
# then:
self.assertTrue('provenance' in bundle_metadata)
self.assertTrue(bundle_metadata['provenance'] == metadata.provenance.to_dict())
self.assertTrue(set(data['content'].keys()) <= set(
bundle_metadata.keys())) # <= operator checks if a dict is subset of another dict
# and:
self.assertIsNotNone(bundle_metadata_no_provenance)
self.assertEqual(metadata_no_provenance.metadata_json['describedBy'],
bundle_metadata_no_provenance['describedBy'])
@staticmethod
def _create_test_data(uuid_value):
return {'type': 'Biomaterial',
'uuid': {'uuid': uuid_value},
'content': {'describedBy': "http://some-schema/1.2.3",
'some': {'content': ['we', 'are', 'agnostic', 'of']}},
'dcpVersion': '6.9.1',
'submissionDate': 'a date',
'updateDate': 'another date'}
def test_get_staging_file_name(self):
# given:
metadata_resource_1 = MetadataResource(metadata_type='specimen',
uuid='9b159cae-a1fe-4cce-94bc-146e4aa20553',
metadata_json={'description': 'test'},
dcp_version='5.1.0',
provenance=MetadataProvenance('9b159cae-a1fe-4cce-94bc-146e4aa20553',
'some date', 'some other date', 1, 1))
metadata_resource_2 = MetadataResource(metadata_type='donor_organism',
uuid='38e0ee7c-90dc-438a-a0ed-071f9231f590',
metadata_json={'text': 'sample'},
dcp_version='1.0.7',
provenance=MetadataProvenance('38e0ee7c-90dc-438a-a0ed-071f9231f590',
'some date', 'some other date', '2', '2'))
# expect:
self.assertEqual('specimen_9b159cae-a1fe-4cce-94bc-146e4aa20553.json',
metadata_resource_1.get_staging_file_name())
self.assertEqual('donor_organism_38e0ee7c-90dc-438a-a0ed-071f9231f590.json',
metadata_resource_2.get_staging_file_name())
class MetadataServiceTest(TestCase):
def test_fetch_resource(self):
# given:
ingest_client = Mock(name='ingest_client')
uuid = '301636f7-f97b-4379-bf77-c5dcd9f17bcb'
raw_metadata = {'type': 'Biomaterial',
'uuid': {'uuid': uuid},
'content': {'describedBy': "http://some-schema/1.2.3",
'some': {'content': ['we', 'are', 'agnostic', 'of']}},
'dcpVersion': '2019-12-02T13:40:50.520Z',
'submissionDate': 'a submission date',
'updateDate': 'an update date'
}
ingest_client.get_entity_by_callback_link = Mock(return_value=raw_metadata)
# and:
metadata_service = MetadataService(ingest_client)
# when:
metadata_resource = metadata_service.fetch_resource(
'hca.domain.com/api/cellsuspensions/301636f7-f97b-4379-bf77-c5dcd9f17bcb')
# then:
self.assertEqual('biomaterial', metadata_resource.metadata_type)
self.assertEqual(uuid, metadata_resource.uuid)
self.assertEqual(raw_metadata['content'], metadata_resource.metadata_json)
self.assertEqual(raw_metadata['dcpVersion'], metadata_resource.dcp_version)
self.assertEqual(raw_metadata['submissionDate'], metadata_resource.provenance.submission_date)
self.assertEqual(raw_metadata['dcpVersion'], metadata_resource.provenance.update_date)
| 42.748603 | 119 | 0.604809 | 733 | 7,652 | 6.060027 | 0.20191 | 0.057407 | 0.049527 | 0.024313 | 0.507879 | 0.303692 | 0.271724 | 0.188429 | 0.136875 | 0.136875 | 0 | 0.060018 | 0.292342 | 7,652 | 178 | 120 | 42.988764 | 0.760295 | 0.033978 | 0 | 0.232759 | 0 | 0 | 0.187772 | 0.082745 | 0 | 0 | 0 | 0 | 0.25 | 1 | 0.077586 | false | 0 | 0.025862 | 0.008621 | 0.12931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c056f49843fec67503aa5c21f43521e60897d9 | 5,708 | py | Python | src/delivery/delivery.py | ska-telescope/sdp-workflows-procfunc | ef6e7be9584a006e936139ae653902a41af4d906 | [
"BSD-3-Clause"
] | null | null | null | src/delivery/delivery.py | ska-telescope/sdp-workflows-procfunc | ef6e7be9584a006e936139ae653902a41af4d906 | [
"BSD-3-Clause"
] | null | null | null | src/delivery/delivery.py | ska-telescope/sdp-workflows-procfunc | ef6e7be9584a006e936139ae653902a41af4d906 | [
"BSD-3-Clause"
] | null | null | null | """
Prototype Delivery Workflow.
"""
import os
import sys
import glob
import logging
import ska_sdp_config
import dask
import distributed
from google.oauth2 import service_account
from google.cloud import storage
# Initialise logging
logging.basicConfig()
LOG = logging.getLogger("delivery")
LOG.setLevel(logging.INFO)
def ee_dask_deploy(config, pb_id, image, n_workers=1, buffers=[], secrets=[]):
"""Deploy Dask execution engine.
:param config: configuration DB handle
:param pb_id: processing block ID
:param image: Docker image to deploy
:param n_workers: number of Dask workers
:param buffers: list of buffers to mount on Dask workers
:param secrets: list of secrets to mount on Dask workers
:return: deployment ID and Dask client handle
"""
# Make deployment
deploy_id = "proc-{}-dask".format(pb_id)
values = {"image": image, "worker.replicas": n_workers}
for i, b in enumerate(buffers):
values["buffers[{}]".format(i)] = b
for i, s in enumerate(secrets):
values["secrets[{}]".format(i)] = s
deploy = ska_sdp_config.Deployment(
deploy_id, "helm", {"chart": "dask", "values": values}
)
for txn in config.txn():
txn.create_deployment(deploy)
# Wait for scheduler to become available
scheduler = deploy_id + "-scheduler." + os.environ["SDP_HELM_NAMESPACE"] + ":8786"
client = None
while client is None:
try:
client = distributed.Client(scheduler, timeout=1)
except:
pass
return deploy_id, client
def ee_dask_remove(config, deploy_id):
"""Remove Dask EE deployment.
:param config: configuration DB handle
:param deploy_id: deployment ID
"""
for txn in config.txn():
deploy = txn.get_deployment(deploy_id)
txn.delete_deployment(deploy)
def upload_directory_to_gcp(sa_file, bucket_name, src_dir, dst_dir):
"""Recursively upload contents of directory to Google Cloud Platform
:param sa_file: service account JSON file
:param bucket_name: name of the bucket on GCP
:param src_dir: local source directory
:param dst_dir: destination directory in GCP bucket
"""
# Recursively list everything in directory
src_list = sorted(glob.glob(src_dir + "/**", recursive=True))
# Get connection to GCP bucket
credentials = service_account.Credentials.from_service_account_file(sa_file)
client = storage.Client(credentials=credentials, project=credentials.project_id)
bucket = client.bucket(bucket_name)
# Upload files (ignoring directories)
for src_name in src_list:
if not os.path.isdir(src_name):
dst_name = dst_dir + "/" + src_name[len(src_dir) + 1 :]
blob = bucket.blob(dst_name)
blob.upload_from_filename(src_name)
def deliver(client, parameters):
"""Delivery function
:param client: Dask distributed client
:param parameters: parameters
"""
sa = parameters.get("service_account")
bucket = parameters.get("bucket")
buffers = parameters.get("buffers", [])
if sa is None or bucket is None:
return
sa_file = "/secret/{}/{}".format(sa.get("secret"), sa.get("file"))
tasks = []
for b in buffers:
src_dir = "/buffer/" + b.get("name")
dst_dir = b.get("destination")
tasks.append(
dask.delayed(upload_directory_to_gcp)(sa_file, bucket, src_dir, dst_dir)
)
client.compute(tasks, sync=True)
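# Illustrative sketch (not part of the workflow): the shape of the processing block
# parameters that deliver() and main() read. The concrete values are made up; only the
# keys follow the .get() calls used in this module.
#
# EXAMPLE_PARAMETERS = {
#     "n_workers": 2,
#     "service_account": {"secret": "gcp-sa", "file": "key.json"},
#     "bucket": "example-delivery-bucket",
#     "buffers": [{"name": "buff-a", "destination": "results/run1"}],
# }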
def main(argv):
"""Workflow main function."""
pb_id = argv[0]
config = ska_sdp_config.Config()
for txn in config.txn():
txn.take_processing_block(pb_id, config.client_lease)
pb = txn.get_processing_block(pb_id)
LOG.info("Claimed processing block %s", pb_id)
# Parse parameters
n_workers = pb.parameters.get("n_workers", 1)
buffers = [b.get("name") for b in pb.parameters.get("buffers", [])]
secrets = [pb.parameters.get("service_account", {}).get("secret")]
# Set state to indicate workflow is waiting for resources
LOG.info("Setting status to WAITING")
for txn in config.txn():
state = txn.get_processing_block_state(pb_id)
state["status"] = "WAITING"
txn.update_processing_block_state(pb_id, state)
# Wait for resources_available to be true
LOG.info("Waiting for resources to be available")
for txn in config.txn():
state = txn.get_processing_block_state(pb_id)
ra = state.get("resources_available")
if ra is not None and ra:
LOG.info("Resources are available")
break
txn.loop(wait=True)
# Set state to indicate workflow is running
for txn in config.txn():
state = txn.get_processing_block_state(pb_id)
state["status"] = "RUNNING"
txn.update_processing_block_state(pb_id, state)
# Deploy Dask EE
LOG.info("Deploying Dask EE")
image = "artefact.skao.int/ska-sdp-wflow-delivery:{}".format(
pb.workflow.get("version")
)
deploy_id, client = ee_dask_deploy(
config, pb.id, image, n_workers=n_workers, buffers=buffers, secrets=secrets
)
# Run delivery function
LOG.info("Starting delivery")
deliver(client, pb.parameters)
LOG.info("Finished delivery")
# Remove Dask EE deployment
LOG.info("Removing Dask EE deployment")
ee_dask_remove(config, deploy_id)
# Set state to indicate processing is finished
for txn in config.txn():
state = txn.get_processing_block_state(pb_id)
state["status"] = "FINISHED"
txn.update_processing_block_state(pb_id, state)
config.close()
if __name__ == "__main__":
main(sys.argv[1:])
| 30.201058 | 86 | 0.666608 | 758 | 5,708 | 4.850923 | 0.220317 | 0.016318 | 0.01523 | 0.026652 | 0.208866 | 0.193364 | 0.132989 | 0.115583 | 0.08458 | 0.065543 | 0 | 0.002484 | 0.224071 | 5,708 | 188 | 87 | 30.361702 | 0.827726 | 0.22267 | 0 | 0.130841 | 0 | 0 | 0.125522 | 0.009977 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046729 | false | 0.009346 | 0.084112 | 0 | 0.149533 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c246c819da13e7b7505537f7f56ed38dc95334 | 7,516 | py | Python | ands/ds/BinaryHeap.py | bssrdf/ands | 504d91abfe12d316119424ddcb0ad3df3207ee73 | [
"MIT"
] | 50 | 2016-12-14T15:10:39.000Z | 2022-03-05T23:32:19.000Z | ands/ds/BinaryHeap.py | bssrdf/ands | 504d91abfe12d316119424ddcb0ad3df3207ee73 | [
"MIT"
] | 58 | 2016-11-17T23:27:52.000Z | 2020-12-30T13:55:46.000Z | ands/ds/BinaryHeap.py | bssrdf/ands | 504d91abfe12d316119424ddcb0ad3df3207ee73 | [
"MIT"
] | 15 | 2016-12-11T12:43:18.000Z | 2020-12-17T12:44:42.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
# Meta-info
Author: Nelson Brochado
Created: 01/07/2015
Updated: 06/04/2018
# Description
Contains the abstract class BinaryHeap.
# References
- Slides by prof. A. Carzaniga
- Chapter 13 of Introduction to Algorithms (3rd ed.)
- http://www.math.clemson.edu/~warner/M865/HeapDelete.html
- https://docs.python.org/3/library/exceptions.html#NotImplementedError
- http://effbot.org/pyfaq/how-do-i-check-if-an-object-is-an-instance-of-a-given-class-or-of-a-subclass-of-it.htm
- https://en.wikipedia.org/wiki/Heap_(data_structure)
- https://arxiv.org/pdf/1012.0956.pdf
- http://pymotw.com/2/heapq/
- http://stackoverflow.com/a/29197855/3924118
"""
import io
import math
from abc import ABC, abstractmethod
__all__ = ["BinaryHeap", "build_pretty_binary_heap"]
class BinaryHeap(ABC):
"""Abstract class to represent binary heaps.
This binary heap allows duplicates.
It's the responsibility of the client to ensure that inserted elements are
comparable among them.
Their order also defines their priority.
Public interface:
- size
- is_empty
- clear
- add
- contains
- delete
- merge
MinHeap, MaxHeap and MinMaxHeap all derive from this class."""
def __init__(self, ls=None):
self.heap = [] if not isinstance(ls, list) else ls
self._build_heap()
@property
def size(self) -> int:
"""Returns the number of elements in this heap.
Time complexity: O(1)."""
return len(self.heap)
def is_empty(self) -> bool:
"""Returns true if this heap is empty, false otherwise.
Time complexity: O(1)."""
return self.size == 0
def clear(self) -> None:
"""Removes all elements from this heap.
Time complexity: O(1)."""
self.heap.clear()
def add(self, x: object) -> None:
"""Adds object x to this heap.
This algorithm proceeds by placing x at an available leaf of this heap,
then bubbles up from there, in order to maintain the heap property.
Time complexity: O(log n)."""
if x is None:
raise ValueError("x cannot be None")
self.heap.append(x)
if self.size > 1:
self._push_up(self.size - 1)
def contains(self, x: object) -> bool:
"""Returns true if x is in this heap, false otherwise.
Time complexity: O(n)."""
if x is None:
raise ValueError("x cannot be None")
return self._index(x) != -1
def delete(self, x: object) -> None:
"""Removes the first found x from this heap.
If x is not in this heap, LookupError is raised.
Time complexity: O(n)."""
if x is None:
raise ValueError("x cannot be None")
i = self._index(x)
if i == -1:
raise LookupError("x not found")
# self has at least one element.
if i == self.size - 1:
self.heap.pop()
else:
self._swap(i, self.size - 1)
self.heap.pop()
self._push_down(i)
self._push_up(i)
def merge(self, o: "Heap") -> None:
"""Merges this heap with the o heap.
Time complexity: O(n + m)."""
self.heap += o.heap
self._build_heap()
@abstractmethod
def _push_down(self, i: int) -> None:
"""Classical "heapify" operation for heaps."""
pass
@abstractmethod
def _push_up(self, i: int) -> None:
"""Classical reverse-heapify operation for heaps."""
pass
def _build_heap(self) -> list:
"""Builds the heap data structure using Robert Floyd's heap construction
algorithm.
Floyd's algorithm is optimal as long as complexity is expressed in terms
of sets of functions described via the asymptotic symbols O, Θ and Ω.
Indeed, its linear complexity Θ(n), both in the worst and best case,
cannot be improved as each object must be examined at least once.
Floyd's algorithm was invented in 1964 as an improvement of the
construction phase of the classical heap-sort algorithm introduced
earlier that year by Williams J.W.J.
Time complexity: Θ(n)."""
if self.heap:
for index in range(len(self.heap) // 2, -1, -1):
self._push_down(index)
def _index(self, x: object) -> int:
"""Returns the index of x in this heap if x is in this heap, otherwise
it returns -1.
Time complexity: O(n)."""
for i, node in enumerate(self.heap):
if node == x:
return i
return -1
def _swap(self, i: int, j: int) -> None:
"""Swaps elements at indexes i and j.
Time complexity: O(1)."""
assert self._is_good_index(i) and self._is_good_index(j)
self.heap[i], self.heap[j] = self.heap[j], self.heap[i]
def _left_index(self, i: int) -> int:
"""Returns the left child's index of the node at index i, if it exists,
otherwise this function returns -1.
Time complexity: O(1)."""
assert self._is_good_index(i)
left = i * 2 + 1
return left if self._is_good_index(left) else -1
def _right_index(self, i: int) -> int:
"""Returns the right child's index of the node at index i, if it exists,
otherwise this function returns -1.
Time complexity: O(1)."""
assert self._is_good_index(i)
right = i * 2 + 2
return right if self._is_good_index(right) else -1
def _parent_index(self, i: int) -> int:
"""Returns the parent's index of the node at index i.
If i = 0, then -1 is returned, because the root has no parent.
Time complexity: O(1)."""
assert self._is_good_index(i)
return -1 if i == 0 else (i - 1) // 2
def _is_good_index(self, i: int) -> bool:
"""Returns true if i is in the bounds of elf.heap, false otherwise.
Time complexity: O(1)."""
return False if (i < 0 or i >= self.size) else True
def __str__(self):
return str(self.heap)
def __repr__(self):
return build_pretty_binary_heap(self.heap)
def build_pretty_binary_heap(heap: list, total_width=36, fill=" ") -> str:
"""Returns a string (which can be printed) representing heap as a tree.
To increase/decrease the horizontal space between nodes, just
increase/decrease the float number h_space.
To increase/decrease the vertical space between nodes, just
increase/decrease the integer number v_space.
Note: v_space must be an integer.
To change the length of the line under the heap, you can simply change the
line_length variable."""
if not isinstance(heap, list):
raise TypeError("heap must be an list object")
if len(heap) == 0:
return "Nothing to print: heap is empty."
output = io.StringIO()
last_row = -1
h_space = 3.0
v_space = 2
for i, heap_node in enumerate(heap):
if i != 0:
row = int(math.floor(math.log(i + 1, 2)))
else:
row = 0
if row != last_row:
output.write("\n" * v_space)
columns = 2 ** row
column_width = int(math.floor((total_width * h_space) / columns))
output.write(str(heap_node).center(column_width, fill))
last_row = row
s = output.getvalue() + "\n"
line_length = total_width + 15
s += ('-' * line_length + "\n")
return s
| 29.245136 | 112 | 0.608302 | 1,090 | 7,516 | 4.100917 | 0.282569 | 0.028635 | 0.043624 | 0.028635 | 0.230649 | 0.184116 | 0.16264 | 0.10179 | 0.10179 | 0.096197 | 0 | 0.018934 | 0.283262 | 7,516 | 256 | 113 | 29.359375 | 0.810841 | 0.458622 | 0 | 0.19 | 0 | 0 | 0.044846 | 0.006563 | 0 | 0 | 0 | 0 | 0.04 | 1 | 0.2 | false | 0.02 | 0.03 | 0.02 | 0.37 | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c2fc61b7992bd51c445c7ef21cc3452bd93e85 | 2,190 | py | Python | product.py | trytonus/trytond-customs-value | ce2097fefab714131fae77ec1f49322141051110 | [
"BSD-3-Clause"
] | null | null | null | product.py | trytonus/trytond-customs-value | ce2097fefab714131fae77ec1f49322141051110 | [
"BSD-3-Clause"
] | 1 | 2016-04-12T18:10:19.000Z | 2016-04-12T18:10:19.000Z | product.py | fulfilio/trytond-customs-value | ce2097fefab714131fae77ec1f49322141051110 | [
"BSD-3-Clause"
] | 6 | 2015-08-24T12:44:43.000Z | 2016-04-12T10:04:08.000Z | # -*- coding: utf-8 -*-
from trytond.pool import PoolMeta
from trytond.model import fields
from trytond.pyson import Eval, Bool, Not
__all__ = ['Product']
__metaclass__ = PoolMeta
class Product:
"Product"
__name__ = 'product.product'
country_of_origin = fields.Many2One(
'country.country', 'Country of Origin',
)
customs_value = fields.Numeric(
"Customs Value",
states={
'invisible': Bool(Eval('use_list_price_as_customs_value')),
'required': Not(Bool(Eval('use_list_price_as_customs_value')))
}, depends=['use_list_price_as_customs_value'],
)
use_list_price_as_customs_value = fields.Boolean(
"Use List Price As Customs Value ?"
)
customs_value_used = fields.Function(
fields.Numeric("Customs Value Used"),
'get_customs_value_used'
)
customs_description = fields.Text(
"Customs Description",
states={
'invisible': Bool(Eval("use_name_as_customs_description")),
'required': Not(Bool(Eval("use_name_as_customs_description")))
},
depends=["use_name_as_customs_description"]
)
use_name_as_customs_description = fields.Boolean(
"Use Name as Customs Description ?"
)
customs_description_used = fields.Function(
fields.Text("Customs Description Used"),
"get_customs_description_used"
)
@classmethod
def get_customs_description_used(cls, products, name):
return {
product.id: (
product.name if product.use_name_as_customs_description else
product.customs_description
)
for product in products
}
@staticmethod
def default_use_name_as_customs_description():
return True
@staticmethod
def default_use_list_price_as_customs_value():
return True
@classmethod
def get_customs_value_used(cls, products, name):
return {
product.id: (
product.list_price if product.use_list_price_as_customs_value
else product.customs_value
)
for product in products
}
| 27.375 | 77 | 0.636986 | 238 | 2,190 | 5.478992 | 0.222689 | 0.128834 | 0.064417 | 0.075153 | 0.41181 | 0.268405 | 0.168712 | 0.115031 | 0 | 0 | 0 | 0.001263 | 0.276712 | 2,190 | 79 | 78 | 27.721519 | 0.82197 | 0.013699 | 0 | 0.21875 | 0 | 0 | 0.217351 | 0.108906 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.046875 | 0.0625 | 0.3125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c67023520b45683449e683c5e3f1a8e0df06b5 | 523 | py | Python | samples/indicator/rsi_test.py | gsamarakoon/ParadoxTrading | 2c4024e60b14bf630fd141ccd4c77f197b7c901a | [
"MIT"
] | 95 | 2018-01-14T14:35:35.000Z | 2021-03-17T02:10:24.000Z | samples/indicator/rsi_test.py | yutiansut/ParadoxTrading | b915d1491663443bedbb048017abeed3f7dcd4e2 | [
"MIT"
] | 2 | 2018-01-14T14:35:51.000Z | 2018-07-06T02:57:49.000Z | samples/indicator/rsi_test.py | yutiansut/ParadoxTrading | b915d1491663443bedbb048017abeed3f7dcd4e2 | [
"MIT"
] | 25 | 2018-01-14T14:38:08.000Z | 2020-07-15T16:03:04.000Z | from ParadoxTrading.Chart import Wizard
from ParadoxTrading.Fetch.ChineseFutures import FetchDominantIndex
from ParadoxTrading.Indicator import RSI
fetcher = FetchDominantIndex()
market = fetcher.fetchDayData('20100701', '20170101', 'rb')
rsi = RSI(14).addMany(market).getAllData()
wizard = Wizard()
price_view = wizard.addView('price', _view_stretch=3)
price_view.addLine('market', market.index(), market['closeprice'])
sub_view = wizard.addView('sub')
sub_view.addLine('rsi', rsi.index(), rsi['rsi'])
wizard.show()
| 27.526316 | 66 | 0.76673 | 63 | 523 | 6.269841 | 0.460317 | 0.136709 | 0.086076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.039832 | 0.087954 | 523 | 18 | 67 | 29.055556 | 0.78826 | 0 | 0 | 0 | 0 | 0 | 0.091778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c7a9362a7e00f4dea1c4a3028e5f5f01134985 | 406 | py | Python | Multipliers/mult.py | ThoAppelsin/pure-music | ab1a8604cc24b1dbf329a556154d5f0cc7f2236b | [
"MIT"
] | null | null | null | Multipliers/mult.py | ThoAppelsin/pure-music | ab1a8604cc24b1dbf329a556154d5f0cc7f2236b | [
"MIT"
] | null | null | null | Multipliers/mult.py | ThoAppelsin/pure-music | ab1a8604cc24b1dbf329a556154d5f0cc7f2236b | [
"MIT"
] | null | null | null | import fractions
from math import gcd  # fractions.gcd was removed in Python 3.9; math.gcd is the replacement
from pprint import pprint
lcm_limit = 32
octave_limit = 4
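# Enumerate frequency ratios x/y: keep only coprime pairs so each ratio appears once in
# lowest terms, require x*y (the least common multiple of coprime x and y) to stay within
# lcm_limit, and keep the ratio within a factor of octave_limit of unison in either direction.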
candidates = [(x, y) for x in range(1, lcm_limit + 1) for y in range(1, lcm_limit + 1)]
candidates = [c for c in candidates
              if gcd(c[0], c[1]) == 1
              and c[0] * c[1] <= lcm_limit
              and 1 / octave_limit <= c[0] / c[1] <= octave_limit]
candidates.sort(key=lambda c: c[0] / c[1])
pprint(candidates)
print(len(candidates)) | 25.375 | 87 | 0.672414 | 76 | 406 | 3.5 | 0.342105 | 0.120301 | 0.045113 | 0.06015 | 0.12782 | 0.12782 | 0 | 0 | 0 | 0 | 0 | 0.050746 | 0.174877 | 406 | 16 | 88 | 25.375 | 0.743284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30c828d9ab280febb16bf5ef11feff5b3f99b8e8 | 573 | py | Python | Python3/24.swap-nodes-in-pairs.py | canhetingsky/LeetCode | 67f4eaeb5746d361056d08df828c653f89dd9fdd | [
"MIT"
] | 1 | 2019-09-23T13:25:21.000Z | 2019-09-23T13:25:21.000Z | Python3/24.swap-nodes-in-pairs.py | canhetingsky/LeetCode | 67f4eaeb5746d361056d08df828c653f89dd9fdd | [
"MIT"
] | 6 | 2019-10-25T10:17:50.000Z | 2019-11-17T05:07:19.000Z | Python3/24.swap-nodes-in-pairs.py | canhetingsky/LeetCode | 67f4eaeb5746d361056d08df828c653f89dd9fdd | [
"MIT"
] | null | null | null | #
# @lc app=leetcode id=24 lang=python3
#
# [24] Swap Nodes in Pairs
#
# @lc code=start
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
class Solution:
def swapPairs(self, head: ListNode) -> ListNode:
curr = dummyHead = ListNode(0)
curr.next = head
while curr.next and curr.next.next:
l = curr.next
r = l.next
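            # Relink in one step: curr now points to r, r points back to l, and l points
            # to the node that followed the swapped pair.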
curr.next, r.next, l.next = r, l, r.next
curr = l
return dummyHead.next
# @lc code=end
| 19.758621 | 52 | 0.556719 | 78 | 573 | 4.038462 | 0.5 | 0.126984 | 0.057143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015584 | 0.328098 | 573 | 28 | 53 | 20.464286 | 0.802597 | 0.369983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30cc83890770a4003caacdcf071dce2fdf40b1e0 | 1,102 | py | Python | lesson_01/005.py | amindmobile/geekbrains-python-002 | 4bc2f7af755d00e73ddc48f1138830cb78e87034 | [
"MIT"
] | null | null | null | lesson_01/005.py | amindmobile/geekbrains-python-002 | 4bc2f7af755d00e73ddc48f1138830cb78e87034 | [
"MIT"
] | null | null | null | lesson_01/005.py | amindmobile/geekbrains-python-002 | 4bc2f7af755d00e73ddc48f1138830cb78e87034 | [
"MIT"
] | null | null | null | # Ask the user for the company's revenue and expenses. Determine the financial result the company
# operates with (profit - revenue exceeds expenses, or loss - expenses exceed revenue). Print the
# corresponding message. If the company made a profit, compute the revenue profitability (ratio of
# profit to revenue). Then ask for the number of employees and determine the profit per employee.
earnings = float(input("Enter the revenue amount: "))
expenses = float(input("Enter the expenses amount: "))
if earnings > expenses:  # revenue exceeds expenses
    employee = int(input("How many employees does your organization have: "))
    global_Profit = earnings - expenses  # profit
    employee_Profit = global_Profit / employee  # how much money one employee earns for you
    print(f"The company is in profit {global_Profit:.2f}¥, with revenue profitability {global_Profit / earnings * 100:.2f}%")
    print(f"Each employee earns roughly {employee_Profit:.2f}¥ for you.")
elif earnings == expenses:
    print("Your organization breaks even - it works 'at zero'.")
else:
    print("You are running at a loss.")
| 55.1 | 119 | 0.777677 | 140 | 1,102 | 6.114286 | 0.564286 | 0.074766 | 0.03972 | 0.051402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003155 | 0.137024 | 1,102 | 19 | 120 | 58 | 0.892744 | 0.46461 | 0 | 0 | 0 | 0 | 0.519793 | 0.07401 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30d0d4d6a16170d4deabeabcc9a6af64c6ea52e2 | 5,498 | py | Python | src/pumpwood_djangoviews/action.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | src/pumpwood_djangoviews/action.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | src/pumpwood_djangoviews/action.py | Murabei-OpenSource-Codes/pumpwood-djangoviews | 792fb825a5e924d1b7307c0bc3b40d798b68946d | [
"BSD-3-Clause"
] | null | null | null | """Define actions decorator."""
import inspect
import pandas as pd
from typing import cast
from datetime import date, datetime
from typing import Callable
from pumpwood_communication.exceptions import PumpWoodActionArgsException
class Action:
"""Define a Action class to be used in decorator action."""
def __init__(self, func: Callable, info: str, auth_header: str = None):
"""."""
signature = inspect.signature(func)
function_parameters = signature.parameters
parameters = {}
is_static_function = True
for key in function_parameters.keys():
if key in ['self', 'cls', auth_header]:
if key == "self":
is_static_function = False
continue
param = function_parameters[key]
if param.annotation == inspect.Parameter.empty:
if inspect.Parameter.empty:
param_type = "Any"
else:
param_type = type(param.default).__name__
else:
param_type = param.annotation \
if type(param.annotation) == str \
else param.annotation.__name__
parameters[key] = {
"required": param.default is inspect.Parameter.empty,
"type": param_type
}
if param.default is not inspect.Parameter.empty:
parameters[key]['default_value'] = param.default
self.action_name = func.__name__
self.is_static_function = is_static_function
self.parameters = parameters
self.info = info
self.auth_header = auth_header
def to_dict(self):
"""Return dict representation of the action."""
return {
"action_name": self.action_name,
"is_static_function": self.is_static_function,
"info": self.info,
"parameters": self.parameters}
def action(info: str = "", auth_header: str = None):
"""
    Define a decorator that exposes the decorated function as a REST action.
    Args:
        info: A short description of the decorated function; it is returned in
            GET /rest/<model_class>/actions/.
        auth_header: Name of the function argument that will receive the
            caller's Authorization header when the action is run.
    Returns:
        func: Action decorator.
"""
def action_decorator(func):
func.is_action = True
func.action_object = Action(
func=func, info=info, auth_header=auth_header)
return func
return action_decorator
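# A minimal usage sketch (hypothetical model class and method): the decorator only tags
# the function and attaches an Action object describing its parameters, which
# load_action_parameters() below uses to cast the incoming request payload.
#
# class ExampleModel:
#     @action(info="Recompute totals for this record.", auth_header="auth_header")
#     def recompute(self, auth_header: dict, reference_date: date, dry_run: bool = False):
#         ...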
def load_action_parameters(func: Callable, parameters: dict, request):
"""Cast arguments to its original types."""
signature = inspect.signature(func)
function_parameters = signature.parameters
# Loaded parameters for action run
return_parameters = {}
# Errors found when processing the parameters
errors = {}
# Unused parameters, passed but not in function
unused_params = set(parameters.keys()) - set(function_parameters.keys())
# The request user parameter, set the logged user
auth_header = func.action_object.auth_header
if len(unused_params) != 0:
errors["unused args"] = {
"type": "unused args",
"message": list(unused_params)}
for key in function_parameters.keys():
# pass if arguments are self and cls for classmethods
if key in ['self', 'cls']:
continue
# If arguent is the request user one, set with the logged user
if key == auth_header:
token = request.headers.get('Authorization')
return_parameters[key] = {'Authorization': token}
continue
param_type = function_parameters[key]
par_value = parameters.get(key)
if par_value is not None:
try:
if param_type.annotation == date:
return_parameters[key] = pd.to_datetime(par_value).date()
elif param_type.annotation == datetime:
return_parameters[key] = \
pd.to_datetime(par_value).to_pydatetime()
else:
return_parameters[key] = param_type.annotation(par_value)
# If any error ocorrur then still try to cast data from python
# typing
except Exception:
try:
                    return_parameters[key] = cast(
                        param_type.annotation, par_value)
except Exception as e:
errors[key] = {
"type": "unserialize",
"message": str(e)}
# If parameter is not passed and required return error
elif param_type.default is inspect.Parameter.empty:
errors[key] = {
"type": "nodefault",
"message": "not set and no default"}
# Raise if there is any error in serilization
if len(errors.keys()) != 0:
template = "[{key}]: {message}"
error_msg = "error when unserializing function arguments:\n"
error_list = []
for key in errors.keys():
error_list.append(template.format(
key=key, message=errors[key]["message"]))
error_msg = error_msg + "\n".join(error_list)
raise PumpWoodActionArgsException(
status_code=400, message=error_msg, payload={
"arg_errors": errors})
return return_parameters
| 35.934641 | 77 | 0.591124 | 595 | 5,498 | 5.295798 | 0.247059 | 0.031736 | 0.030467 | 0.019042 | 0.160584 | 0.10092 | 0.066646 | 0.066646 | 0 | 0 | 0 | 0.001342 | 0.322117 | 5,498 | 152 | 78 | 36.171053 | 0.844111 | 0.166788 | 0 | 0.153846 | 0 | 0 | 0.065526 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.048077 | false | 0 | 0.057692 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30d36f714ffcf578c5b755784494e079e2271651 | 5,426 | py | Python | logistic-regression/mnist_binary_classifier.py | eliben/deep-learning-samples | d5ca86c5db664fabfb302cbbc231c50ec3d6a103 | [
"Unlicense"
] | 183 | 2015-12-29T07:21:24.000Z | 2022-01-18T01:19:23.000Z | logistic-regression/mnist_binary_classifier.py | eliben/deep-learning-samples | d5ca86c5db664fabfb302cbbc231c50ec3d6a103 | [
"Unlicense"
] | null | null | null | logistic-regression/mnist_binary_classifier.py | eliben/deep-learning-samples | d5ca86c5db664fabfb302cbbc231c50ec3d6a103 | [
"Unlicense"
] | 68 | 2016-06-02T15:31:51.000Z | 2021-09-08T19:58:10.000Z | # A binary linear classifier for MNIST digits.
#
# Poses a binary classification problem - is this image showing digit D (for
# some D, for example "4"); trains a linear classifier to solve the problem.
#
# Eli Bendersky (http://eli.thegreenplace.net)
# This code is in the public domain
from __future__ import print_function
import argparse
import numpy as np
import sys
from mnist_dataset import *
from regression_lib import *
if __name__ == '__main__':
argparser = argparse.ArgumentParser()
argparser.add_argument('--type',
choices=['binary', 'logistic'],
default='logistic',
help='Type of classification: binary (yes/no result)'
'or logistic (probability of "yes" result).')
argparser.add_argument('--set-seed', default=-1, type=int,
help='Set random seed to this number (if > 0).')
argparser.add_argument('--nsteps', default=150, type=int,
help='Number of steps for gradient descent.')
argparser.add_argument('--recognize-digit', default=4, type=int,
help='Digit to recognize in training.')
argparser.add_argument('--display-test', default=-1, type=int,
help='Display this image from the test data '
'set and exit.')
argparser.add_argument('--normalize', action='store_true', default=False,
help='Normalize data: (x-mu)/sigma.')
argparser.add_argument('--report-mistakes', action='store_true',
default=False,
help='Report all mistakes made in classification.')
args = argparser.parse_args()
if args.set_seed > 0:
np.random.seed(args.set_seed)
# Load MNIST data into memory; this may download the MNIST dataset from
# the web if not already on disk.
(X_train, y_train), (X_valid, y_valid), (X_test, y_test) = get_mnist_data()
if args.display_test > -1:
display_mnist_image(X_test[args.display_test],
y_test[args.display_test])
sys.exit(1)
if args.normalize:
print('Normalizing data...')
X_train_normalized, mu, sigma = feature_normalize(X_train)
X_train_augmented = augment_1s_column(X_train_normalized)
X_valid_augmented = augment_1s_column((X_valid - mu) / sigma)
X_test_augmented = augment_1s_column((X_test - mu) / sigma)
else:
X_train_augmented = augment_1s_column(X_train)
X_valid_augmented = augment_1s_column(X_valid)
X_test_augmented = augment_1s_column(X_test)
# Convert y_train to binary "is this a the digit D", with +1 for D, -1
# otherwise. Also reshape it into a column vector as regression_lib expects.
D = args.recognize_digit
print('Training for digit', D)
y_train_binary = convert_y_to_binary(y_train, D)
y_valid_binary = convert_y_to_binary(y_valid, D)
y_test_binary = convert_y_to_binary(y_test, D)
# Hyperparameters.
LEARNING_RATE = 0.08
REG_BETA=0.03
if args.type == 'binary':
print('Training binary classifier with hinge loss...')
lossfunc = lambda X, y, theta: hinge_loss(X, y,
theta, reg_beta=REG_BETA)
else:
print('Training logistic classifier with cross-entropy loss...')
lossfunc = lambda X, y, theta: cross_entropy_loss_binary(
X, y, theta, reg_beta=REG_BETA)
n = X_train_augmented.shape[1]
gi = gradient_descent(X_train_augmented,
y_train_binary,
init_theta=np.random.randn(n, 1),
lossfunc=lossfunc,
batch_size=256,
nsteps=args.nsteps,
learning_rate=LEARNING_RATE)
for i, (theta, loss) in enumerate(gi):
if i % 50 == 0 and i > 0:
print(i, loss)
# We use predict_binary for both binary and logistic classification.
# See comment on predict_binary to understand why it works for
# logistic as well.
yhat = predict_binary(X_train_augmented, theta)
yhat_valid = predict_binary(X_valid_augmented, theta)
print('train accuracy =', np.mean(yhat == y_train_binary))
print('valid accuracy =', np.mean(yhat_valid == y_valid_binary))
print('After {0} training steps...'.format(args.nsteps))
print('loss =', loss)
yhat_valid = predict_binary(X_valid_augmented, theta)
yhat_test = predict_binary(X_test_augmented, theta)
print('valid accuracy =', np.mean(yhat_valid == y_valid_binary))
print('test accuracy =', np.mean(yhat_test == y_test_binary))
# For logistic, get predicted probabilities as well.
if args.type == 'logistic':
yhat_test_prob = predict_logistic_probability(X_test_augmented, theta)
if args.report_mistakes:
for i in range(yhat_test.size):
if yhat_test[i][0] != y_test_binary[i][0]:
print('@ {0}: predict {1}, actual {2}'.format(
i, yhat_test[i][0], y_test_binary[i][0]), end='')
if args.type == 'logistic':
print('; prob={0}'.format(yhat_test_prob[i][0]))
else:
print('')
| 44.113821 | 80 | 0.605418 | 691 | 5,426 | 4.51809 | 0.250362 | 0.019218 | 0.044843 | 0.046124 | 0.225176 | 0.213004 | 0.155029 | 0.141576 | 0.046765 | 0.032031 | 0 | 0.011449 | 0.291743 | 5,426 | 122 | 81 | 44.47541 | 0.800937 | 0.13509 | 0 | 0.097826 | 0 | 0 | 0.159752 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.065217 | 0 | 0.065217 | 0.163043 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30d65ba961b4012c418911d00c91a40240451543 | 5,409 | py | Python | IEEE-ASHRAE_kernel/01_data_minify.py | Daniel1586/Initiative_Kaggle | 945e0a2ebe94aa7ee3ed59dd0de53d9a1b82aa05 | [
"MIT"
] | 1 | 2019-08-12T14:28:22.000Z | 2019-08-12T14:28:22.000Z | IEEE-ASHRAE_kernel/01_data_minify.py | Daniel1586/Initiative_Kaggle | 945e0a2ebe94aa7ee3ed59dd0de53d9a1b82aa05 | [
"MIT"
] | null | null | null | IEEE-ASHRAE_kernel/01_data_minify.py | Daniel1586/Initiative_Kaggle | 945e0a2ebe94aa7ee3ed59dd0de53d9a1b82aa05 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
(https://www.kaggle.com/c/ashrae-energy-prediction).
Train shape:(590540,394),identity(144233,41)--isFraud 3.5%
Test shape:(506691,393),identity(141907,41)
############### TF Version: 1.13.1/Python Version: 3.7 ###############
"""
import os
import math
import random
import warnings
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
warnings.filterwarnings('ignore')
# Make all processes deterministic by fixing the seed of the random number generators.
# os.environ is a mapping of the process environment; PYTHONHASHSEED is one of its variables.
# By default Python uses a random seed when hashing str/bytes/datetime objects;
# if this environment variable is set to a number, it is used as a fixed seed for those hashes.
def set_seed(seed=0):
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
# Reduce dataframe memory usage by downcasting numeric columns to the smallest safe dtype.
def reduce_mem_usage(df, verbose=True):
numerics = ["int16", "int32", "int64", "float16", "float32", "float64"]
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
reduction = 100*(start_mem-end_mem)/start_mem
if verbose:
print("Default Mem. {:.2f} Mb, Optimized Mem. {:.2f} Mb, Reduction {:.1f}%".
format(start_mem, end_mem, reduction))
return df
if __name__ == "__main__":
print("========== 1.Set random seed ...")
SEED = 42
set_seed(SEED)
LOCAL_TEST = False
print("========== 2.Load csv data ...")
dir_data_csv = os.getcwd() + "\\great-energy-predictor\\"
train_df = pd.read_csv(dir_data_csv + "\\train.csv")
infer_df = pd.read_csv(dir_data_csv + "\\test.csv")
build_df = pd.read_csv(dir_data_csv + "\\building_metadata.csv")
train_weat_df = pd.read_csv(dir_data_csv + "\\weather_train.csv")
infer_weat_df = pd.read_csv(dir_data_csv + "\\weather_test.csv")
print('#' * 30)
print('Main data:', list(train_df), train_df.info())
print('#' * 30)
print('Buildings data:', list(build_df), build_df.info())
print('#' * 30)
print('Weather data:', list(train_weat_df), train_weat_df.info())
print('#' * 30)
for df in [train_df, infer_df, train_weat_df, infer_weat_df]:
df["timestamp"] = pd.to_datetime(df["timestamp"])
for df in [train_df, infer_df]:
df["DT_M"] = df["timestamp"].dt.month.astype(np.int8)
df["DT_W"] = df["timestamp"].dt.weekofyear.astype(np.int8)
df["DT_D"] = df["timestamp"].dt.dayofyear.astype(np.int16)
df["DT_hour"] = df["timestamp"].dt.hour.astype(np.int8)
df["DT_day_week"] = df["timestamp"].dt.dayofweek.astype(np.int8)
df["DT_day_month"] = df["timestamp"].dt.day.astype(np.int8)
df["DT_week_month"] = df["timestamp"].dt.day / 7
df["DT_week_month"] = df["DT_week_month"].apply(lambda x: math.ceil(x)).astype(np.int8)
print("========== 3.ETL tran categorical feature [String] ...")
build_df['primary_use'] = build_df['primary_use'].astype('category')
build_df['floor_count'] = build_df['floor_count'].fillna(0).astype(np.int8)
build_df['year_built'] = build_df['year_built'].fillna(-999).astype(np.int16)
le = LabelEncoder()
build_df['primary_use'] = build_df['primary_use'].astype(str)
build_df['primary_use'] = le.fit_transform(build_df['primary_use']).astype(np.int8)
do_not_convert = ['category', 'datetime64[ns]', 'object']
for df in [train_df, infer_df, build_df, train_weat_df, infer_weat_df]:
original = df.copy()
df = reduce_mem_usage(df)
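        # Round-trip check: if a downcast column no longer matches the original copy
        # exactly, the conversion was lossy, so restore the original column and log it.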
for col in list(df):
if df[col].dtype.name not in do_not_convert:
if (df[col] - original[col]).sum() != 0:
df[col] = original[col]
print('Bad transformation', col)
print('#' * 30)
print('Main data:', list(train_df), train_df.info())
print('#' * 30)
print('Buildings data:', list(build_df), build_df.info())
print('#' * 30)
print('Weather data:', list(train_weat_df), train_weat_df.info())
print('#' * 30)
print("========== 4.Save pkl ...")
train_df.to_pickle("train.pkl")
infer_df.to_pickle("infer.pkl")
build_df.to_pickle("build.pkl")
train_weat_df.to_pickle("weather_train.pkl")
infer_weat_df.to_pickle("weather_infer.pkl")
| 41.290076 | 95 | 0.612683 | 778 | 5,409 | 4.062982 | 0.244216 | 0.031636 | 0.034166 | 0.022145 | 0.388168 | 0.319203 | 0.285353 | 0.139829 | 0.139829 | 0.094274 | 0 | 0.035145 | 0.210945 | 5,409 | 130 | 96 | 41.607692 | 0.705483 | 0.094842 | 0 | 0.158416 | 0 | 0 | 0.174314 | 0.010037 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019802 | false | 0 | 0.069307 | 0 | 0.09901 | 0.19802 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30d759a40bbb943fa4065bfc1dc4dff7a8c4a0fa | 8,774 | py | Python | src/optim/DeepSAD_trainer.py | wych1005/Deep-SAD-PyTorch | af93186a38ed30985dc155d1b00b90aa181cfe0b | [
"MIT"
] | null | null | null | src/optim/DeepSAD_trainer.py | wych1005/Deep-SAD-PyTorch | af93186a38ed30985dc155d1b00b90aa181cfe0b | [
"MIT"
] | null | null | null | src/optim/DeepSAD_trainer.py | wych1005/Deep-SAD-PyTorch | af93186a38ed30985dc155d1b00b90aa181cfe0b | [
"MIT"
] | null | null | null | import matplotlib.pyplot as plt
from base.base_trainer import BaseTrainer
from base.base_dataset import BaseADDataset
from base.base_net import BaseNet
import seaborn as sns
from torch.utils.data.dataloader import DataLoader
from sklearn.metrics import roc_auc_score, confusion_matrix, average_precision_score, roc_curve, precision_recall_curve
import logging
import time
import torch
import torch.optim as optim
import numpy as np
class DeepSADTrainer(BaseTrainer):
def __init__(self, c, eta: float, optimizer_name: str = 'adam', lr: float = 0.001, n_epochs: int = 150,
lr_milestones: tuple = (), batch_size: int = 128, weight_decay: float = 1e-6, device: str = 'cuda',
n_jobs_dataloader: int = 0):
super().__init__(optimizer_name, lr, n_epochs, lr_milestones, batch_size, weight_decay, device,
n_jobs_dataloader)
# Deep SAD parameters
self.c = torch.tensor(c, device=self.device) if c is not None else None
self.eta = eta
# Optimization parameters
self.eps = 1e-6
# Results
self.train_time = None
self.test_auc = None
self.test_auprc = None
self.test_confusion_matrix = None
self.test_time = None
self.test_scores = None
self.roc_x = None
self.roc_y = None
self.prec = None
self.rec = None
def train(self, dataset: BaseADDataset, net: BaseNet):
logger = logging.getLogger()
# Get train data loader
train_loader, _ = dataset.loaders(batch_size=self.batch_size, num_workers=self.n_jobs_dataloader)
# Set device for network
net = net.to(self.device)
# Set optimizer (Adam optimizer for now)
optimizer = optim.Adam(net.parameters(), lr=self.lr, weight_decay=self.weight_decay)
# Set learning rate scheduler
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=self.lr_milestones, gamma=0.1)
# Initialize hypersphere center c (if c not loaded)
if self.c is None:
logger.info('Initializing center c...')
self.c = self.init_center_c(train_loader, net)
logger.info('Center c initialized.')
# Training
logger.info('Starting training...')
start_time = time.time()
net.train()
for epoch in range(self.n_epochs):
scheduler.step()
if epoch in self.lr_milestones:
logger.info(' LR scheduler: new learning rate is %g' % float(scheduler.get_lr()[0]))
epoch_loss = 0.0
n_batches = 0
epoch_start_time = time.time()
for data in train_loader:
inputs, _, semi_targets, _ = data
inputs, semi_targets = inputs.to(self.device), semi_targets.to(self.device)
# Zero the network parameter gradients
optimizer.zero_grad()
# Update network parameters via backpropagation: forward + backward + optimize
outputs = net(inputs)
dist = torch.sum((outputs - self.c) ** 2, dim=1)
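                # Deep SAD loss: unlabeled points (semi_targets == 0) contribute their squared
                # distance to the center c, while labeled points are weighted by eta and raised
                # to the power of their label, so known anomalies (semi_targets == -1) are
                # penalised by 1 / (dist + eps), pushing them away from the center.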
losses = torch.where(semi_targets == 0, dist, self.eta * ((dist + self.eps) ** semi_targets.float()))
loss = torch.mean(losses)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
n_batches += 1
# log epoch statistics
epoch_train_time = time.time() - epoch_start_time
logger.info(f'| Epoch: {epoch + 1:03}/{self.n_epochs:03} | Train Time: {epoch_train_time:.3f}s '
f'| Train Loss: {epoch_loss / n_batches:.6f} |')
self.train_time = time.time() - start_time
logger.info('Training Time: {:.3f}s'.format(self.train_time))
logger.info('Finished training.')
return net
def test(self, dataset: BaseADDataset, net: BaseNet):
logger = logging.getLogger()
# Get test data loader
_, test_loader = dataset.loaders(batch_size=self.batch_size, num_workers=self.n_jobs_dataloader)
# Set device for network
net = net.to(self.device)
# Testing
logger.info('Starting testing...')
epoch_loss = 0.0
n_batches = 0
start_time = time.time()
idx_label_score = []
net.eval()
with torch.no_grad():
for data in test_loader:
inputs, labels, semi_targets, idx = data
inputs = inputs.to(self.device)
labels = labels.to(self.device)
semi_targets = semi_targets.to(self.device)
idx = idx.to(self.device)
outputs = net(inputs)
dist = torch.sum((outputs - self.c) ** 2, dim=1)
losses = torch.where(semi_targets == 0, dist, self.eta * ((dist + self.eps) ** semi_targets.float()))
loss = torch.mean(losses)
scores = dist
# Save triples of (idx, label, score) in a list
idx_label_score += list(zip(idx.cpu().data.numpy().tolist(),
labels.cpu().data.numpy().tolist(),
scores.cpu().data.numpy().tolist()))
epoch_loss += loss.item()
n_batches += 1
self.test_time = time.time() - start_time
self.test_scores = idx_label_score
score_array = np.array(self.test_scores, dtype=np.int32)
# Get predictions
prediction_array = np.array([int(abs(p - 1) < abs(p)) for p in list(score_array[:, 2])], dtype=np.int32)
gt_array = score_array[:, 1]
# Compute AUC
_, labels, scores = zip(*idx_label_score)
labels = np.array(labels)
scores = np.array(scores)
self.test_auc = roc_auc_score(labels, scores)
self.test_auprc = average_precision_score(labels, scores)
self.roc_x, self.roc_y, _ = roc_curve(labels, scores)
self.prec, self.rec, _ = precision_recall_curve(labels, scores)
self.test_confusion_matrix = confusion_matrix(gt_array, prediction_array)
sns.heatmap(self.test_confusion_matrix, annot=True, fmt='.2%')
plt.savefig("cm_deepsad.png")
plt.close()
# Log results
logger.info('Test Loss: {:.6f}'.format(epoch_loss / n_batches))
logger.info('Test AUROC: {:.2f}%'.format(100. * self.test_auc))
logger.info('Test AUPRC: {:.2f}%'.format(100. * self.test_auprc))
logger.info('Test Time: {:.3f}s'.format(self.test_time))
logger.info('Finished testing.')
####### ROC and AUC ############
lw = 2
plt.figure()
####### ROC ############
plt.plot(self.roc_x, self.roc_y, color='darkorange',
lw=lw, label='ROC curve (AUC = %0.2f)' % self.test_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--', label="No-skill classifier")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate ')
plt.ylabel('True Positive Rate ')
plt.title('ROC (Receiver Operating Characteristic curve)')
plt.legend(loc="lower right")
plt.show()
plt.savefig('roc_auc.png')
plt.close()
####### PR ############
plt.figure()
plt.plot(self.prec, self.rec, color='darkorange',
lw=lw, label='PR curve (AUC = %0.2f)' % self.test_auprc)
plt.plot([0, 1], [0.0526, 0.0526], color='navy', lw=lw, linestyle='--', label="No-skill classifier")
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall ')
plt.ylabel('Precision ')
plt.title('PRC (Precision-Recall curve)')
plt.legend(loc="lower right")
plt.show()
plt.savefig('pr_auc.png')
plt.close()
def init_center_c(self, train_loader: DataLoader, net: BaseNet, eps=0.1):
"""Initialize hypersphere center c as the mean from an initial forward pass on the data."""
n_samples = 0
c = torch.zeros(net.rep_dim, device=self.device)
net.eval()
with torch.no_grad():
for data in train_loader:
# get the inputs of the batch
inputs, _, _, _ = data
inputs = inputs.to(self.device)
outputs = net(inputs)
n_samples += outputs.shape[0]
c += torch.sum(outputs, dim=0)
c /= n_samples
# If c_i is too close to 0, set to +-eps. Reason: a zero unit can be trivially matched with zero weights.
c[(abs(c) < eps) & (c < 0)] = -eps
c[(abs(c) < eps) & (c > 0)] = eps
return c
| 39.345291 | 119 | 0.574652 | 1,110 | 8,774 | 4.39009 | 0.214414 | 0.027909 | 0.022163 | 0.01416 | 0.312333 | 0.252001 | 0.208085 | 0.18387 | 0.18387 | 0.146932 | 0 | 0.01714 | 0.301801 | 8,774 | 222 | 120 | 39.522523 | 0.778322 | 0.083884 | 0 | 0.275 | 0 | 0.00625 | 0.086738 | 0.005908 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025 | false | 0 | 0.075 | 0 | 0.11875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30de8aa4ae6150e3acdc0398071ec1e1b7a6910b | 1,577 | py | Python | server.py | rikuru-to865/Voyeur-with-python | 104d61e189912134f38463842a0ee0834dd31129 | [
"MIT"
] | null | null | null | server.py | rikuru-to865/Voyeur-with-python | 104d61e189912134f38463842a0ee0834dd31129 | [
"MIT"
] | null | null | null | server.py | rikuru-to865/Voyeur-with-python | 104d61e189912134f38463842a0ee0834dd31129 | [
"MIT"
] | null | null | null | import socket
import numpy
import cv2
import threading
import os
BUFFER_SIZE = 4096*10
currentPort = 50001
buf = b''
class capture():
def __init__(self,port):
self.port = port
    def recive(self):
        # Receive one JPEG-encoded frame sent as a burst of UDP datagrams; an empty
        # datagram marks the end of the frame. Bind once per frame and let the
        # "with" block close the socket afterwards.
        global buf
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("0.0.0.0", self.port))
            while True:
                data, address = s.recvfrom(BUFFER_SIZE)
                if not data:
                    break
                buf = buf + data
        narray = numpy.frombuffer(buf, dtype='uint8')
        buf = b""
        return cv2.imdecode(narray, 1)
def show(self):
while True:
img = self.recive()
cv2.imshow('Capture',img)
if cv2.waitKey(100) & 0xFF == ord('q'):
break
img = ''
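# What follows (inferred from the code below): a client connects over TCP on port 50000
# and sends b"OpenPortRequest"; the server answers with a UDP port number as two
# big-endian bytes and starts a capture thread that reassembles and displays the frames
# the client then streams to that UDP port.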
with socket.socket(socket.AF_INET,socket.SOCK_STREAM) as portInfo:
portInfo.bind(("0.0.0.0",50000))
portInfo.listen()
print("aitayo")
while True:
(connection, client) = portInfo.accept()
print("kita!")
try:
data = connection.recv(800)
if data == b"OpenPortRequest":
print('recive')
connection.send(currentPort.to_bytes(2,"big"))
cap = capture(currentPort)
th = threading.Thread(target=cap.show)
th.daemon = True
th.start()
finally:
connection.close() | 26.728814 | 67 | 0.496512 | 169 | 1,577 | 4.568047 | 0.47929 | 0.015544 | 0.015544 | 0.056995 | 0.119171 | 0.098446 | 0.098446 | 0.098446 | 0 | 0 | 0 | 0.039749 | 0.393786 | 1,577 | 59 | 68 | 26.728814 | 0.767782 | 0 | 0 | 0.176471 | 0 | 0 | 0.03929 | 0 | 0 | 0 | 0.002535 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.098039 | 0 | 0.196078 | 0.058824 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30e0d9aba329cb35b870099915a7b244906f7302 | 1,978 | py | Python | lcm/lcm/nf/serializers/lccn_filter_data.py | onap/vfc-gvnfm-vnflcm | e3127fee0fdb5bf193fddc74a69312363a6d20eb | [
"Apache-2.0"
] | 1 | 2019-04-02T03:15:20.000Z | 2019-04-02T03:15:20.000Z | lcm/lcm/nf/serializers/lccn_filter_data.py | onap/vfc-gvnfm-vnflcm | e3127fee0fdb5bf193fddc74a69312363a6d20eb | [
"Apache-2.0"
] | null | null | null | lcm/lcm/nf/serializers/lccn_filter_data.py | onap/vfc-gvnfm-vnflcm | e3127fee0fdb5bf193fddc74a69312363a6d20eb | [
"Apache-2.0"
] | 1 | 2021-10-15T15:26:47.000Z | 2021-10-15T15:26:47.000Z | # Copyright (C) 2018 Verizon. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from rest_framework import serializers
from .vnf_instance_subscription_filter import VnfInstanceSubscriptionFilter
from lcm.nf.const import NOTIFICATION_TYPES, LCM_OPERATION_TYPES, LCM_OPERATION_STATE_TYPES
class LifeCycleChangeNotificationsFilter(serializers.Serializer):
notificationTypes = serializers.ListField(
child=serializers.ChoiceField(required=True, choices=NOTIFICATION_TYPES),
help_text="Match particular notification types",
allow_null=False,
required=False)
operationTypes = serializers.ListField(
child=serializers.ChoiceField(required=True, choices=LCM_OPERATION_TYPES),
help_text="Match particular VNF lifecycle operation types for the " +
"notification of type VnfLcmOperationOccurrenceNotification.",
allow_null=False,
required=False)
operationStates = serializers.ListField(
child=serializers.ChoiceField(required=True, choices=LCM_OPERATION_STATE_TYPES),
help_text="Match particular LCM operation state values as reported " +
"in notifications of type VnfLcmOperationOccurrenceNotification.",
allow_null=False,
required=False)
vnfInstanceSubscriptionFilter = VnfInstanceSubscriptionFilter(
help_text="Filter criteria to select VNF instances about which to notify.",
required=False,
allow_null=False)
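# Illustrative filter payload this serializer is meant to validate (the field values
# below are examples only; the accepted choices come from lcm.nf.const):
# {
#     "notificationTypes": ["VnfLcmOperationOccurrenceNotification"],
#     "operationTypes": ["INSTANTIATE"],
#     "operationStates": ["COMPLETED"],
#     "vnfInstanceSubscriptionFilter": { ... }
# }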
| 46 | 91 | 0.761375 | 228 | 1,978 | 6.5 | 0.495614 | 0.040486 | 0.037787 | 0.072874 | 0.319163 | 0.244265 | 0.244265 | 0.244265 | 0.105263 | 0.105263 | 0 | 0.004905 | 0.17543 | 1,978 | 42 | 92 | 47.095238 | 0.90374 | 0.289687 | 0 | 0.24 | 0 | 0 | 0.237239 | 0.054637 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30e38abbdfe5ba2eb3dbe185a3fb1b8a9ce8a9b5 | 2,213 | py | Python | treeNodeTravelThrewTreeNodes.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | 1 | 2021-04-08T11:23:46.000Z | 2021-04-08T11:23:46.000Z | treeNodeTravelThrewTreeNodes.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | null | null | null | treeNodeTravelThrewTreeNodes.py | GehartM/german-text-watermarking | f9765702225e0cfdce868eec816ed6e6fd4ffc63 | [
"MIT"
] | null | null | null | def TravelThrewTreeNodes(currentTreeNode, leftDirections, encodedSign):
    # Check whether the last position has been reached. If not, keep navigating
    # through the tree.
    if not len(leftDirections) == 1:
        # Store the next direction
        nextDirection = leftDirections[0]
        # Remove the current direction from the remaining directions
        leftDirections = leftDirections[1:]
        # Check whether to navigate from the current TreeNode in direction 0
        if nextDirection == '0':
            if not CheckIfChildrenTreeNodeAlreadyExists(currentTreeNode.zero):
                # The requested direction does not exist yet, so a new TreeNode is
                # created for this direction
                currentTreeNode.AddTreeNode(True)
            TravelThrewTreeNodes(currentTreeNode.zero, leftDirections, encodedSign)
        elif nextDirection == '1':
            if not CheckIfChildrenTreeNodeAlreadyExists(currentTreeNode.one):
                # The requested direction does not exist yet, so a new TreeNode is
                # created for this direction
                currentTreeNode.AddTreeNode(False)
            TravelThrewTreeNodes(currentTreeNode.one, leftDirections, encodedSign)
    else:
        # The last position has been reached, so the encoded sign can be stored at
        # its designated place.
        # Check whether to navigate from the current TreeNode in direction 0
        if leftDirections == '0':
            # Create a new TreeNode for this direction and store the encoded sign as its value
            currentTreeNode.AddTreeNode(True, encodedSign)
        elif leftDirections == '1':
            # Create a new TreeNode for this direction and store the encoded sign as its value
            currentTreeNode.AddTreeNode(False, encodedSign)
def CheckIfChildrenTreeNodeAlreadyExists(childrenTreeNode):
    # If the queried child object already exists, True is returned
if childrenTreeNode is not None:
return True
else:
return False
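# Minimal usage sketch (hypothetical; the TreeNode class is defined elsewhere in this
# project). From the calls above it is assumed to expose .zero and .one child links and
# an AddTreeNode(create_zero_child, value=None) method:
#
# root = TreeNode()
# TravelThrewTreeNodes(root, "010", encoded_sign)  # stores encoded_sign at path 0 -> 1 -> 0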
| 49.177778 | 119 | 0.685043 | 215 | 2,213 | 7.051163 | 0.404651 | 0.029024 | 0.042216 | 0.063325 | 0.37467 | 0.37467 | 0.37467 | 0.37467 | 0.37467 | 0.37467 | 0 | 0.005573 | 0.270221 | 2,213 | 44 | 120 | 50.295455 | 0.933127 | 0.428378 | 0 | 0.090909 | 0 | 0 | 0.00332 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.181818 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30e55d7e04ab513adf991395e8e91f97eaca5c02 | 5,633 | py | Python | jarvis/resume/skillset.py | Anubhav722/blahblah | 160698e06a02e671ac40de3113cd37d642e72e96 | [
"MIT"
] | 1 | 2019-01-03T06:10:04.000Z | 2019-01-03T06:10:04.000Z | jarvis/resume/skillset.py | Anubhav722/blahblah | 160698e06a02e671ac40de3113cd37d642e72e96 | [
"MIT"
] | 1 | 2021-03-31T19:11:52.000Z | 2021-03-31T19:11:52.000Z | jarvis/resume/skillset.py | Anubhav722/blahblah | 160698e06a02e671ac40de3113cd37d642e72e96 | [
"MIT"
] | null | null | null | skills_list = [
"actionscript",
"ado.net",
"ajax",
"akka",
"algorithm",
"amazon-ec2",
"amazon-s3",
"amazon-web-services",
"android",
"angular",
"angularjs",
"ansible",
"ant",
"apache",
"apache-camel",
"apache-kafka",
"apache-poi",
"apache-spark",
"apache-spark-sql",
"apache2",
"applet",
"arduino",
"asp.net",
"asp.net-core",
"asp.net-core-mvc",
"asp.net-mvc",
"assembly",
"asynchronous",
"automation",
"awk",
"azure",
"backbone.js",
"beautifulsoup",
"bigdata",
"bootstrap",
"c#",
"c++",
"cakephp",
"cassandra",
"chef",
"ckeditor",
"clang",
"clojure",
"cocoa",
"coffeescript",
"coldfusion",
"concurrency",
"content-management-system",
"continuous-integration",
"cordova",
"cryptography",
"css",
"cucumber",
"cuda",
"cursor",
"cygwin",
"cypher",
"d3.js",
"dart",
"data-binding",
"data-structures",
"database",
"database-design",
"datatables",
"deep-learning",
"delphi",
"dependency-injection",
"deployment",
"design",
"design-patterns",
"django",
"dns",
"docker",
"dplyr",
"drupal",
"ejb",
"elasticsearch",
"eloquent",
"emacs",
"ember.js",
"entity-framework",
"erlang",
"event-handling",
"excel",
"excel-vba",
"express",
"express.js",
"expressjs",
"extjs",
"f#",
"facebook-graph-api",
"firebase",
"flask",
"flex",
"flexbox",
"fortran",
"frameworks",
"functional-programming",
"git",
"glassfish",
"go",
"golang",
"google-analytics",
"google-api",
"google-app-engine",
"google-apps-script",
"google-bigquery",
"google-chrome",
"google-chrome-extension",
"google-cloud-datastore",
"google-cloud-messaging",
"google-cloud-platform",
"google-drive-sdk",
"google-maps",
"gps",
"gradle",
"grails",
"groovy",
"gruntjs",
"gson",
"gtk",
"gulp",
"gwt",
"hadoop",
"handlebars.js",
"haskell",
"hbase",
"hdfs",
"heroku",
"hibernate",
"highcharts",
"hive",
"html",
"html5",
"hyperlink",
"ibm-mobilefirst",
"iis",
"image-processing",
"imagemagick",
"ionic-framework",
"ionic2",
"itext",
"jackson",
"jasmine",
"java",
"java-8",
"java-ee",
"java-me",
"javafx",
"javascript",
"jax-rs",
"jaxb",
"jboss",
"jdbc",
"jenkins",
"jersey",
"jetty",
"jframe",
"jmeter",
"jms",
"jni",
"join",
"joomla",
"jpa",
"jpanel",
"jqgrid",
"jquery",
"jsf",
"jsf-2",
"json",
"json.net",
"jsoup",
"jsp",
"jtable",
"junit",
"jupyter-notebook",
"jvm",
"kendo-grid",
"kendo-ui",
"keras",
"knockout.js",
"kotlin",
"kubernetes",
"laravel",
"linq",
"log4j",
"lua",
"lucene",
"machine-learning",
"macros",
"magento",
"mapreduce",
"matlab",
"matplotlib",
"matrix",
"maven",
"memory-management",
"mercurial",
"meteor",
"mockito",
"model-view-controller",
"mongodb",
"mongoose",
"multiprocessing",
"multithreading",
"mysql",
"neo4j",
"network-programming",
"networking",
"neural-network",
"nginx",
"nhibernate",
"nlp",
"node.js",
"nosql",
"numpy",
"oauth",
"object",
"objective-c",
"odbc",
"odoo",
"oop",
"opencv",
"openssl",
"oracle",
"pandas",
"perl",
"phantomjs",
"php",
"playframework",
"plsql",
"postgresql",
"powershell",
"prolog",
"pthreads",
"pyqt",
"pyspark",
"python",
"qml",
"qt",
"rabbitmq",
"raspberry-pi",
"razor",
"react-native",
"react-redux",
"react-router",
"reactjs",
"realm",
"recursion",
"redirect",
"redis",
"redux",
"refactoring",
"reference",
"reflection",
"regex",
"registry",
"requirejs",
"responsive-design",
"rest",
"rss",
"ruby",
"ruby-on-rails",
"rust",
"rxjs",
"sails.js",
"salesforce",
"sas",
"sass",
"scala",
"scikit-learn",
"scipy",
"scrapy",
"scripting",
"security",
"selenium",
"selenium-webdriver",
"seo",
"servlets",
"sharepoint",
"shell",
"silverlight",
"smtp",
"soap",
"socket.io",
"sockets",
"solr",
"sonarqube",
"speech-recognition",
"spring",
"sql",
"sql-server",
"sqlalchemy",
"sqlite",
"sqlite3",
"svn",
"swift",
"swing",
"swt",
"symfony",
"synchronization",
"telerik",
"tensorflow",
"three.js",
"titanium",
"tkinter",
"tomcat",
"twitter-bootstrap",
"typescript",
"vagrant",
"vb.net",
"vb6",
"vba",
"vbscript",
"version-control",
"vim",
"visual-c++",
"vue.js",
"vuejs2",
"wamp",
"wcf",
"web",
"web-applications",
"web-crawler",
"web-scraping",
"web-services",
"webdriver",
"websocket",
"websphere",
"woocommerce",
"wordpress",
"wpf",
"wsdl",
"wso2",
"wxpython",
"x86",
"xamarin",
"xaml",
"xampp",
"xcode",
"xml",
"xml-parsing",
"xmlhttprequest",
"xmpp",
"xna",
"xsd",
"xslt",
"yii",
"yii2",
"zend-framework",
]
skillset = set(skills_list)
| 15.867606 | 32 | 0.480916 | 460 | 5,633 | 5.884783 | 0.817391 | 0.008866 | 0.007388 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004387 | 0.312089 | 5,633 | 354 | 33 | 15.912429 | 0.694194 | 0 | 0 | 0 | 0 | 0 | 0.494585 | 0.0316 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30eaa9dd3051eaa994ef4cd89b26f3deca2ee553 | 6,054 | py | Python | startup/users/30-user-chen_xpcs.py | NSLS-II-SMI/profile_collection | c1e2236a7520f605ac85e7591f05682add06357c | [
"BSD-3-Clause"
] | null | null | null | startup/users/30-user-chen_xpcs.py | NSLS-II-SMI/profile_collection | c1e2236a7520f605ac85e7591f05682add06357c | [
"BSD-3-Clause"
] | 13 | 2018-09-25T19:35:08.000Z | 2021-01-15T20:42:26.000Z | startup/users/30-user-chen_xpcs.py | NSLS-II-SMI/profile_collection | c1e2236a7520f605ac85e7591f05682add06357c | [
"BSD-3-Clause"
] | 3 | 2019-09-06T01:40:59.000Z | 2020-07-01T20:27:39.000Z | def grid_scan_xpcs():
folder = "301000_Chen34"
xs = np.linspace(-9350, -9150, 2)
ys = np.linspace(1220, 1420, 2)
names=['PSBMA5_200um_grid']
energies = [2450, 2472, 2476, 2490]
x_off = [0, 60, 0, 60]
y_off = [0, 0, 60, 60]
xxs, yys = np.meshgrid(xs, ys)
dets = [pil1M]
for name in names:
for ener, xof, yof in zip(energies, x_off, y_off):
yield from bps.mv(energy, ener)
yield from bps.sleep(10)
for i, (x, y) in enumerate(zip(xxs.ravel(), yys.ravel())):
pil1M.cam.file_path.put(f"/ramdisk/images/users/2019_3/%s/1M/%s_pos%s"%(folder, name, i))
yield from bps.mv(piezo.x, x+xof)
yield from bps.mv(piezo.y, y+yof)
name_fmt = '{sample}_{energy}eV_pos{pos}'
sample_name = name_fmt.format(sample=name, energy=ener, pos = '%2.2d'%i)
sample_id(user_name='Chen', sample_name=sample_name)
yield from bps.sleep(5)
det_exposure_time(0.03, 30)
print(f'\n\t=== Sample: {sample_name} ===\n')
pil1M.cam.acquire.put(1)
yield from bps.sleep(5)
pv = EpicsSignal('XF:12IDC-ES:2{Det:1M}cam1:Acquire', name="pv")
while pv.get() == 1:
yield from bps.sleep(5)
yield from bps.mv(energy, 2475)
yield from bps.mv(energy, 2450)
def NEXAFS_SAXS_S_edge(t=1):
dets = [pil300KW]
name = 'sample_thick_waxs'
energies = [2450, 2480, 2483, 2484, 2485, 2486, 2500]
det_exposure_time(t,t)
name_fmt = '{sample}_{energy}eV_wa{wa}'
waxs_an = np.linspace(0, 26, 5)
yss = np.linspace(1075, 1575, 5)
for wax in waxs_an:
yield from bps.mv(waxs, wax)
for e, ys in zip(energies, yss):
yield from bps.mv(energy, e)
yield from bps.mv(piezo.y, ys)
sample_name = name_fmt.format(sample=name, energy=e, wa = '%3.1f'%wax)
sample_id(user_name='Chen', sample_name=sample_name)
print(f'\n\t=== Sample: {sample_name} ===\n')
yield from bp.count(dets, num=1)
yield from bps.mv(energy, 2475)
yield from bps.mv(energy, 2450)
def grid_scan_static():
names=['PSBMA30_10um_static']
x_off = -36860+np.asarray([-200, 200])
y_off = 1220+np.asarray([-100, 0, 100])
energies = np.linspace(2500, 2450, 51)
xxs, yys = np.meshgrid(x_off, y_off)
dets = [pil300KW, pil1M]
for name in names:
for i, (x, y) in enumerate(zip(xxs.ravel(), yys.ravel())):
yield from bps.mv(piezo.x, x)
yield from bps.mv(piezo.y, y)
energies = energies[::-1]
yield from bps.sleep(2)
for ener in energies:
yield from bps.mv(energy, ener)
yield from bps.sleep(0.1)
name_fmt = '{sample}_{energy}eV_pos{pos}_xbpm{xbpm}'
sample_name = name_fmt.format(sample=name, energy=ener, pos = '%2.2d'%i, xbpm='%3.1f'%xbpm3.sumY.value)
sample_id(user_name='Chen', sample_name=sample_name)
det_exposure_time(0.1, 0.1)
yield from bp.count(dets, num=1)
def nexafs_S_edge_chen(t=1):
dets = [pil300KW]
det_exposure_time(t,t)
waxs_arc = [45.0]
name_fmt = 'nexafs_sampletest1_4_{energy}eV_wa{wax}_bpm{xbpm}'
for wa in waxs_arc:
for e in energies:
yield from bps.mv(energy, e)
yield from bps.sleep(1)
bpm = xbpm2.sumX.value
sample_name = name_fmt.format(energy='%6.2f'%e, wax = wa, xbpm = '%4.3f'%bpm)
sample_id(user_name='WC', sample_name=sample_name)
print(f'\n\t=== Sample: {sample_name} ===\n')
yield from bp.count(dets, num=1)
yield from bps.mv(energy, 2490)
yield from bps.mv(energy, 2470)
yield from bps.mv(energy, 2450)
def waxs_S_edge_chen_2020_3(t=1):
dets = [pil300KW, pil1M]
names = ['sampleA1', 'sampleB1', 'sampleB2', 'sampleB3', 'sampleC4', 'sampleC5', 'sampleE8', 'sampleD1', 'sampleD2', 'sampleD3', 'sampleD4',
'sampleD5', 'sampleC8', 'sampleF1', 'sampleF2', 'sampleE1', 'sampleE3', 'sampleE4', 'sampleE5', 'sampleD8', 'sampleF8']
x = [43800, 28250, 20750, 13350, 5150, -5660, -10900, -18400, -26600, -34800, -42800, 42300, 34400, 26700, 18800, 11200, 2900, -5000, -12000,
-20300, -27800, ]
y = [-4900, -5440, -5960, -5660, -5660, -5660, -5880, -4750, -5450, -5000, -4450, 6950, 6950, 6950, 7450, 7200, 7400, 8250, 8250,
8250, 7750]
energies = [2450.0, 2474.0, 2475.0, 2476.0, 2477.0, 2478.0, 2479.0, 2482.0, 2483.0, 2484.0, 2485.0, 2486.0, 2487.0, 2490.0, 2500.0]
waxs_arc = np.linspace(0, 13, 3)
for name, xs, ys in zip(names, x, y):
yield from bps.mv(piezo.x, xs)
yield from bps.mv(piezo.y, ys+30)
for wa in waxs_arc:
yield from bps.mv(waxs, wa)
det_exposure_time(t,t)
name_fmt = '{sample}_rev_{energy}eV_wa{wax}_bpm{xbpm}'
for e in energies[::-1]:
yield from bps.mv(energy, e)
yield from bps.sleep(1)
bpm = xbpm2.sumX.value
sample_name = name_fmt.format(sample=name, energy='%6.2f'%e, wax = wa, xbpm = '%4.3f'%bpm)
sample_id(user_name='GF', sample_name=sample_name)
print(f'\n\t=== Sample: {sample_name} ===\n')
yield from bp.count(dets, num=1)
yield from bps.mv(energy, 2480)
yield from bps.mv(energy, 2460) | 33.633333 | 146 | 0.521143 | 830 | 6,054 | 3.675904 | 0.249398 | 0.103245 | 0.121927 | 0.105539 | 0.549 | 0.490659 | 0.457883 | 0.363815 | 0.319895 | 0.276303 | 0 | 0.136216 | 0.339115 | 6,054 | 180 | 147 | 33.633333 | 0.626343 | 0 | 0 | 0.356522 | 0 | 0 | 0.114121 | 0.042775 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0 | 0 | 0.043478 | 0.034783 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f29e0855bb09fb75ab5994617d326c1d54e876 | 9,298 | py | Python | hl7tools.py | cdgramos/Sublime-Text-3---HL7-Plug-In | b4b821f272460fa970c019c261ded552c724fee7 | [
"MIT"
] | 8 | 2018-03-01T14:38:01.000Z | 2020-02-28T22:41:34.000Z | hl7tools.py | cdgramos/Sublime-Text-3---HL7-Plug-In | b4b821f272460fa970c019c261ded552c724fee7 | [
"MIT"
] | 19 | 2018-04-13T21:13:08.000Z | 2021-09-02T22:48:53.000Z | hl7tools.py | cdgramos/Sublime-Text-3---HL7-Plug-In | b4b821f272460fa970c019c261ded552c724fee7 | [
"MIT"
] | 1 | 2020-05-18T21:43:46.000Z | 2020-05-18T21:43:46.000Z | import sublime
import sublime_plugin
import re
import webbrowser
from .lib.hl7Event import *
from .lib.hl7Segment import *
from .lib.hl7TextUtils import *
hl7EventList = hl7Event("","")
hl7EventList = hl7EventList.loadEventList()
hl7SegmentList = hl7Segment("","")
hl7SegmentList = hl7SegmentList.loadSegmentList()
STATUS_BAR_HL7 = 'StatusBarHL7'
# On selection modified it will update the status bar
class selectionModifiedListener(sublime_plugin.EventListener):
def on_selection_modified(self, view):
line = getLineTextBeforeCursorPosition(self, view, sublime)
fullLine = getLineAtCursorPosition(self, view)
# Get the first 3 letters of the line
segment = fullLine[:3]
if hl7Segment.getSegmentByCode(self, segment, hl7SegmentList) != None:
statusBarText = '[ ' + segment + ' '
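# Split on '|' field separators that are not escaped with a backslash,
# so escaped delimiters inside HL7 values are not treated as field breaks.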
fieldList = re.split(r'(?<!\\)(?:\\\\)*\|', line)
fieldCounter = len(fieldList)
if segment != 'MSH':
statusBarText += str(fieldCounter-1)
else:
statusBarText += str(fieldCounter)
isComponentRequired = False
isSubComponentRequired = False
fullField = getFieldAtCursorPosition(self, view, sublime)
# Level of detail required
if fieldHasComponents(self, fullField) == True:
isComponentRequired = True
if fieldHasSubComponents(self, fullField) == True:
isComponentRequired = True
isSubComponentRequired = True
if isComponentRequired == True:
field = fieldList[-1]
componentList = re.split(r'(?<!\\)(?:\\\\)*\^', field)
componentCounter = len(componentList)
statusBarText += '.' + str(componentCounter)
if isSubComponentRequired == True:
subComponent = componentList[-1]
subComponentList = re.split(r'(?<!\\)(?:\\\\)*\&', subComponent)
subComponentCounter = len(subComponentList)
statusBarText += '.' + str(subComponentCounter)
statusBarText += ' ]'
#sublime.status_message('\t' + statusBarText + ' '*20)
view.set_status(STATUS_BAR_HL7, statusBarText)
else:
#sublime.status_message('')
view.erase_status(STATUS_BAR_HL7)
# Double click on keywords (segments / events)
class doubleClickKeywordListener(sublime_plugin.EventListener):
def on_text_command(self, view, cmd, args):
if cmd == 'drag_select' and 'event' in args:
event = args['event']
isEvent = True
pt = view.window_to_text((event['x'], event['y']))
text = []
if view.sel():
for region in view.sel():
print(view.substr(view.word(region)))
if region.empty():
text.append(view.substr(view.word(region)))
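# Look up the double-clicked token in the HL7 event/segment tables and,
# if a match is found, show its description in an async popup at the click position.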
def asyncMessagePopup():
desc = ""
for eventItem in hl7EventList:
regex = "(\^)"
filler = "_"
codeWithoutCircunflex = re.sub(regex, filler, text[0])
if (eventItem.code == codeWithoutCircunflex):
desc = eventItem.description
for segmentItem in hl7SegmentList:
regex = "(\^)"
filler = "_"
codeWithoutCircunflex = re.sub(regex, filler, text[0])
if (segmentItem.code == codeWithoutCircunflex):
desc = segmentItem.description
if (len(desc) > 0):
if(getComponentAtCursorPosition(self, view, sublime) == text[0]):
view.show_popup('<b style="color:#33ccff;">' + desc + '</b>', location=pt)
sublime.set_timeout_async(asyncMessagePopup)
else:
text.append(view.substr(region))
# Searches an event or segment on the Caristix website
class hl7searchCommand(sublime_plugin.WindowCommand):
def run(self):
window = self.window
view = window.active_view()
sel = view.sel()
region1 = sel[0]
selectionText = view.substr(region1)
isValid = 0
URL = "http://hl7-definition.caristix.com:9010/HL7%20v2.5.1/Default.aspx?version=HL7 v2.5.1&"
for eventItem in hl7EventList:
regex = "(\^)"
filler = "_"
codeWithoutCircunflex = re.sub(regex, filler, selectionText)
if (eventItem.code == codeWithoutCircunflex):
URL = URL + "triggerEvent=" + eventItem.code
isValid = 1
for segmentItem in hl7SegmentList:
if (segmentItem.code == selectionText):
URL = URL + "segment=" + segmentItem.code
isValid = 1
if (isValid == 1):
webbrowser.open_new(URL)
# Inspects an entire line
class hl7inspectorCommand(sublime_plugin.TextCommand):
def run(self, edit):
#Popup layout
header = ""
body = ""
segmentCode = ""
#Segment
selectedSegment = self.view.substr(self.view.line(self.view.sel()[0]))
fields = selectedSegment.split('|')
fields = re.split(r'(?<!\\)(?:\\\\)*\|', selectedSegment)
fieldId = 0
componentId = 1
subComponentId = 1
for segmentItem in hl7SegmentList:
if (segmentItem.code == fields[0]):
header = segmentItem.code + " - " + segmentItem.description
segmentCode = segmentItem.code
header = '<b style="color:#33ccff;">' + header + '</b>'
for field in fields:
if (field != ""):
if(field != "^~\&"):
components = re.compile(r'(?<!\\)(?:\\\\)*\^').split(field)
totalCircunflex = field.count("^")
for component in components:
if(component != ""):
subComponents = re.compile(r'(?<!\\)(?:\\\\)*&').split(component)
if(len(subComponents) > 1):
for subComponent in subComponents:
if(subComponent != ""):
regex = "(<)"
filler = "<"
subComponent = re.sub(regex, filler, subComponent)
regex = "(>)"
filler = ">"
subComponent = re.sub(regex, filler, subComponent)
body = body + '<br>' + str(fieldId) + "." + str(componentId) + "."+ str(subComponentId) + " - " + subComponent
subComponentId = subComponentId + 1
subComponentId = 1
else:
regex = "(<)"
filler = "<"
component = re.sub(regex, filler, component)
regex = "(>)"
filler = ">"
component = re.sub(regex, filler, component)
till = re.compile(r'(?<!\\)(?:\\\\)*~').split(component)
if segmentCode == 'MSH' and fieldId > 1:
fieldCounter = fieldId + 1
else:
fieldCounter = fieldId
if(totalCircunflex > 0):
for tillItem in till:
body = body + '<br>' + str(fieldCounter) + "." + str(componentId) + " - " + tillItem
else:
for tillItem in till:
body = body + '<br>' + str(fieldCounter) + " - " + tillItem
componentId = componentId + 1
componentId = 1
else:
if len(selectedSegment) > 3:
if selectedSegment[3] == '|':
body = body + '<br>' + str(1) + " - " + selectedSegment[3] + "\n"
if len(fields) > 0:
body = body + '<br>' + str(2) + " - " + fields[1] + "\n"
fieldId = fieldId + 1
message = header + body
message = message.replace("\&", "\&amp;")
self.view.show_popup(message, on_navigate=print)
# Cleans an HL7 message of redundant information and indents it
class hl7cleanerCommand(sublime_plugin.TextCommand):
def run(self, edit):
content = self.view.substr(sublime.Region(0, self.view.size()))
for segmentItem in hl7SegmentList:
regex = "(\^M)" + segmentItem.code
filler = "\n" + segmentItem.code
content = re.sub(regex, filler, content)
for segmentItem in hl7SegmentList:
regex = "(\^K)" + segmentItem.code
filler = "\n" + segmentItem.code
content = re.sub(regex, filler, content)
#remove any empty space before each segment
for segmentItem in hl7SegmentList:
regex = "\ {1,}" + segmentItem.code
filler = "" + segmentItem.code
content = re.sub(regex, filler, content)
#when there is no space before
for segmentItem in hl7SegmentList:
regex = "(\|){1,}(?<=[a-zA-Z0-9|])" + segmentItem.code + "(\|){1,}"
filler = "|\n" + segmentItem.code + "|"
content = re.sub(regex, filler, content)
#last two ^M at the end of content followed by new line
content = re.sub("(\^M\^\\\\\^M)\n", "\n", content)
#last two ^M at the end of content followed by end of content
content = re.sub("(\^M\^\\\\\^M)$", "\n", content)
#last ^M with new line
regex = "(\^M)\n"
filler = "\n"
content = re.sub(regex, filler, content)
#last ^M with end of content
regex = "(\^M)$"
filler = "\n"
content = re.sub(regex, filler, content)
#last two ^M at the end of content followed by new line with empty space before
content = re.sub("(\^M\^\\\\\^M)\ {1,}\n", "\n", content)
#last two ^M at the end of content followed by end of content with empty space before
content = re.sub("(\^M\^\\\\\^M)\ {1,}$", "\n", content)
#last ^M with new line with empty space before
regex = "(\^M)\ {1,}\n"
filler = "\n"
content = re.sub(regex, filler, content)
#last ^M with end of content with empty space before
regex = "(\^M)\ {1,}$"
filler = "\n"
content = re.sub(regex, filler, content)
#extra circumflex ^
content = re.sub("(\^{1,})[|]", "|", content)
#extra pipes | followed by new lines
content = re.sub("\|{2,}\n", "|\n", content)
#extra pipes | followed by end of content
content = re.sub("\|{2,}$", "|\n", content)
#empty lines at the beginning of the text
content = re.sub("^(\n){1,}", "", content)
#blank spaces at the beginning of the text
content = re.sub("^ {1,}", "", content)
self.view.insert(edit, 0, content + "\n\n\n")
| 26.490028 | 120 | 0.61766 | 1,036 | 9,298 | 5.511583 | 0.212355 | 0.021016 | 0.035727 | 0.042032 | 0.355867 | 0.290893 | 0.240455 | 0.209457 | 0.170228 | 0.142032 | 0 | 0.014436 | 0.22521 | 9,298 | 350 | 121 | 26.565714 | 0.778179 | 0.11368 | 0 | 0.307692 | 0 | 0.004808 | 0.082907 | 0.008644 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028846 | false | 0 | 0.033654 | 0 | 0.086538 | 0.009615 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f2e9dc143ed93952cacd555538c812b12a36a2 | 8,220 | py | Python | gen_data/yolo_label.py | KevinLADLee/CARLA_INVS | b23249500ffbafdb312a71d17b0dfef4191672f0 | [
"MIT"
] | null | null | null | gen_data/yolo_label.py | KevinLADLee/CARLA_INVS | b23249500ffbafdb312a71d17b0dfef4191672f0 | [
"MIT"
] | null | null | null | gen_data/yolo_label.py | KevinLADLee/CARLA_INVS | b23249500ffbafdb312a71d17b0dfef4191672f0 | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import argparse
import time
import cv2
import glob
import sys
from pathlib import Path
import numpy as np
import pandas as pd
import yaml
from multiprocessing.dummy import Pool as ThreadPool
sys.path.append(Path(__file__).resolve().parent.parent.as_posix()) # repo path
sys.path.append(Path(__file__).resolve().parent.as_posix()) # file path
from params import *
LABEL_DATAFRAME = pd.DataFrame(columns=['raw_value', 'color', 'coco_names_index'],
data=[
# [ 4, (220, 20, 60), 0],
[18, (250, 170, 30), 9],
[12, (220, 220, 0), 80]])
LABEL_COLORS = np.array([
# (220, 20, 60), # Pedestrian
# (0, 0, 142), # Vehicle
(220, 220, 0), # TrafficSign -> COCO INDEX
(250, 170, 30), # TrafficLight
])
COCO_NAMES = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat',
'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella',
'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite',
'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana',
'apple',
'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard',
'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
'teddy bear',
'hair drier', 'toothbrush', 'traffic sign']
class YoloLabel:
def __init__(self, data_path, debug=False):
self.data_path = data_path
self.data_output_path = Path(data_path) / "../yolo_dataset"
self.data_output_path = Path(os.path.abspath(self.data_output_path.as_posix()))
print(self.data_output_path)
self.image_out_path = self.data_output_path / "images" / "train"
self.label_out_path = self.data_output_path / "labels" / "train"
self.image_rgb = None
self.image_seg = None
self.preview_img = None
self.rec_pixels_min = 150
self.debug = debug
self.thread_pool = ThreadPool()
def process(self):
img_path_list = sorted(glob.glob(self.data_path + '/*.png'))
img_seg_path_list = sorted(glob.glob(self.data_path + '/seg' + '/*.png'))
start = time.time()
self.thread_pool.starmap(self.label_img, zip(img_path_list, img_seg_path_list))
# for rgb_img, seg_img in zip(img_path_list, img_seg_path_list):
# self.label_img(rgb_img, seg_img)
self.thread_pool.close()
self.thread_pool.join()
print("cost: {}s".format(time.time()-start))
def label_img(self, rgb_img_path, seg_img_path):
success = self.check_id(rgb_img_path, seg_img_path)
if not success:
return
image_rgb = None
image_seg = None
image_rgb = cv2.imread(rgb_img_path, cv2.IMREAD_COLOR)
image_seg = cv2.imread(seg_img_path, cv2.IMREAD_UNCHANGED)
if image_rgb is None or image_seg is None:
return
image_rgb = cv2.cvtColor(image_rgb, cv2.COLOR_RGBA2RGB)
image_seg = cv2.cvtColor(image_seg, cv2.COLOR_BGRA2RGB)
img_name = os.path.basename(rgb_img_path)
height, width, _ = image_rgb.shape
labels_all = []
for index, label_info in LABEL_DATAFRAME.iterrows():
seg_color = label_info['color']
coco_id = label_info['coco_names_index']
mask = (image_seg == seg_color)
tmp_mask = (mask.sum(axis=2, dtype=np.uint8) == 3)
mono_img = np.array(tmp_mask * 255, dtype=np.uint8)
preview_img = image_rgb
# self.preview_img = self.image_seg
# cv2.imshow("seg", self.preview_img)
# cv2.imshow("mono", mono_img)
# cv2.waitKey()
contours, _ = cv2.findContours(mono_img, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
labels = []
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
if w * h < self.rec_pixels_min:
continue
# cv2.rectangle(self.preview_img, (x, y), (x + w, y + h), (0, 255, 0), 1)
# cv2.imshow("rect", self.preview_img)
# cv2.waitKey()
max_y, max_x, _ = image_rgb.shape
if y + h >= max_y or x + w >= max_x:
continue
# Draw label info to image
cv2.putText(preview_img, COCO_NAMES[coco_id], (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (36, 255, 12), 1)
cv2.rectangle(image_rgb, (x, y), (x + w, y + h), (0, 255, 0), 1)
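# YOLO label format: class_id x_center y_center width height, all
# normalized to the image dimensions.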
label_info = "{} {} {} {} {}".format(coco_id,
float(x + (w / 2.0)) / width,
float(y + (h / 2.0)) / height,
float(w) / width,
float(h) / height)
labels.append(label_info)
# cv2.imshow("result", self.preview_img)
# cv2.imshow("test", self.preview_img[y:y+h, x:x+w, :])
# cv2.waitKey()
if len(labels) > 0:
labels_all += labels
if len(labels_all) > 0:
os.makedirs(self.label_out_path, exist_ok=True)
os.makedirs(self.image_out_path, exist_ok=True)
# print("image\t\t\twidth\theight\n{}\t{}\t{}".format(img_name, width, height))
# print("Got {} labels".format(len(labels_all)))
cv2.imwrite(self.image_out_path.as_posix() + '/' + os.path.splitext(img_name)[0] + '.jpg', image_rgb)
# print(self.image_rgb.shape)
with open(self.label_out_path.as_posix() + '/' + os.path.splitext(img_name)[0] + '.txt', "w") as f:
for label in labels_all:
f.write(label)
f.write('\n')
# print("Label output path: {}".format(self.label_out_path))
self.dump_yaml(self.data_output_path.as_posix())
# print("******")
return
def check_id(self, rgb_img_path, seg_img_path):
img_name = os.path.splitext(os.path.basename(rgb_img_path))[0]
seg_name = os.path.splitext(os.path.basename(seg_img_path))[0]
if img_name != seg_name:
print("Img name error: {} {}".format(img_name, seg_name))
return False
else:
return True
def dump_yaml(self, dataset_path):
dict_file = {
'path': dataset_path,
'train': 'images/train',
'val': 'images/train',
'test': '',
'nc': len(COCO_NAMES),
'names': COCO_NAMES
}
with open(dataset_path + '/../yolo_coco_carla.yaml', 'w') as file:
yaml.dump(dict_file, file)
def main():
argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument(
'--record_id',
default='record2021_1106_0049',
help='record_id of raw data')
argparser.add_argument(
'--debug',
default=False
)
args = argparser.parse_args()
debug = args.debug
data_path = RAW_DATA_PATH / Path(args.record_id)
print(data_path)
vehicle_data_list = glob.glob(data_path.as_posix() + '/vehicle*' + '/vehicle*')
for vehicle_data_path in vehicle_data_list:
yolo_label_manager = YoloLabel(vehicle_data_path, debug)
yolo_label_manager.process()
return
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print('\nCancelled by user. Bye!')
except RuntimeError as e:
print(e)
| 40.895522 | 123 | 0.550243 | 1,008 | 8,220 | 4.246032 | 0.301587 | 0.022897 | 0.022897 | 0.029439 | 0.168925 | 0.131776 | 0.107477 | 0.051402 | 0.02243 | 0.02243 | 0 | 0.025234 | 0.310584 | 8,220 | 200 | 124 | 41.1 | 0.730016 | 0.103528 | 0 | 0.052632 | 0 | 0 | 0.122054 | 0.003269 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039474 | false | 0 | 0.072368 | 0 | 0.157895 | 0.039474 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f35ff68fe1eac3b585ba8e3b6bf7e40042a3c9 | 3,004 | py | Python | tr/acs_config_test.py | DentonGentry/gfiber-catawampus | b01e4444f3c7f12b1af7837203b37060fd443bb7 | [
"Apache-2.0"
] | 2 | 2017-10-03T16:06:29.000Z | 2020-09-08T13:03:13.000Z | tr/acs_config_test.py | DentonGentry/gfiber-catawampus | b01e4444f3c7f12b1af7837203b37060fd443bb7 | [
"Apache-2.0"
] | null | null | null | tr/acs_config_test.py | DentonGentry/gfiber-catawampus | b01e4444f3c7f12b1af7837203b37060fd443bb7 | [
"Apache-2.0"
] | 1 | 2017-05-07T17:39:02.000Z | 2017-05-07T17:39:02.000Z | #!/usr/bin/python
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# unittest requires method names starting in 'test'
# pylint:disable=invalid-name
#
# Refactored, originally from: platform/gfmedia/device_test.py
"""Unit tests for device.py."""
__author__ = 'jnewlin@google.com (John Newlin)'
import os
import shutil
import tempfile
import google3
import tornado.testing
import acs_config
from wvtest import unittest
class AcsConfigTest(tornado.testing.AsyncTestCase, unittest.TestCase):
"""Tests for acs_config.py."""
def setUp(self):
super(AcsConfigTest, self).setUp()
self.old_ACSCONTACT = acs_config.ACSCONTACT
self.old_ACSCONNECTED = acs_config.ACSCONNECTED
self.old_SET_ACS = acs_config.SET_ACS
acs_config.SET_ACS = 'testdata/acs_config/set-acs'
self.scriptout = tempfile.NamedTemporaryFile()
os.environ['TESTOUTPUT'] = self.scriptout.name
def tearDown(self):
super(AcsConfigTest, self).tearDown()
acs_config.ACSCONTACT = self.old_ACSCONTACT
acs_config.ACSCONNECTED = self.old_ACSCONNECTED
acs_config.SET_ACS = self.old_SET_ACS
self.scriptout = None # File will delete itself
def testSetAcs(self):
ac = acs_config.AcsConfig()
self.assertEqual(ac.GetAcsUrl(), 'bar')
ac.SetAcsUrl('foo')
self.assertEqual(self.scriptout.read().strip(), 'cwmp foo')
def testClearAcs(self):
ac = acs_config.AcsConfig()
ac.SetAcsUrl('')
self.assertEqual(self.scriptout.read().strip(), 'cwmp clear')
def testAcsAccess(self):
tmpdir = tempfile.mkdtemp()
acscontact = os.path.join(tmpdir, 'acscontact')
self.assertRaises(OSError, os.stat, acscontact) # File does not exist yet
acs_config.ACSCONTACT = acscontact
acsconnected = os.path.join(tmpdir, 'acsconnected')
self.assertRaises(OSError, os.stat, acsconnected)
acs_config.ACSCONNECTED = acsconnected
ac = acs_config.AcsConfig()
acsurl = 'this is the acs url'
# Simulate ACS connection attempt
ac.AcsAccessAttempt(acsurl)
self.assertTrue(os.stat(acscontact))
self.assertEqual(open(acscontact, 'r').read(), acsurl)
self.assertRaises(OSError, os.stat, acsconnected)
# Simulate ACS connection success
ac.AcsAccessSuccess(acsurl)
self.assertTrue(os.stat(acscontact))
self.assertTrue(os.stat(acsconnected))
self.assertEqual(open(acsconnected, 'r').read(), acsurl)
# cleanup
shutil.rmtree(tmpdir)
if __name__ == '__main__':
unittest.main()
| 31.957447 | 78 | 0.734687 | 387 | 3,004 | 5.602067 | 0.423773 | 0.062269 | 0.02214 | 0.027675 | 0.255535 | 0.129151 | 0.074723 | 0 | 0 | 0 | 0 | 0.003559 | 0.158123 | 3,004 | 93 | 79 | 32.301075 | 0.853697 | 0.296937 | 0 | 0.132075 | 0 | 0 | 0.069264 | 0.012987 | 0 | 0 | 0 | 0 | 0.207547 | 1 | 0.09434 | false | 0 | 0.132075 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f3fff33de30496fe028fef08f8b083e40facf8 | 2,941 | py | Python | src/tests/integration/api_test_client.py | doitintl/elastic-event-store | ad00f1fbecf430432c306d5917984d9f9ff522f4 | [
"MIT"
] | 22 | 2021-02-02T17:11:55.000Z | 2021-12-19T15:00:26.000Z | src/tests/integration/api_test_client.py | vladikk/elastic-event-store | ad00f1fbecf430432c306d5917984d9f9ff522f4 | [
"MIT"
] | 8 | 2021-02-04T19:21:25.000Z | 2021-02-08T07:48:06.000Z | src/tests/integration/api_test_client.py | vladikk/elastic-event-store | ad00f1fbecf430432c306d5917984d9f9ff522f4 | [
"MIT"
] | 4 | 2021-02-02T16:58:26.000Z | 2021-07-17T04:17:43.000Z | import boto3
import os
import requests
class ApiTestClient():
api_endpoint: str
some_metadata = {
'timestamp': '123123',
'command_id': '456346234',
'issued_by': 'test@test.com'
}
some_events = [
{ "type": "init", "foo": "bar" },
{ "type": "update", "foo": "baz" },
]
def __init__(self, sam_stack_name=None):
if not sam_stack_name:
sam_stack_name = os.environ.get("AWS_SAM_STACK_NAME")
if not sam_stack_name:
raise Exception(
"Cannot find env var AWS_SAM_STACK_NAME. \n"
"Please setup this environment variable with the stack name where we are running integration tests."
)
client = boto3.client("cloudformation")
try:
response = client.describe_stacks(StackName=sam_stack_name)
except Exception as e:
raise Exception(
f"Cannot find stack {sam_stack_name}. \n" f'Please make sure stack with the name "{sam_stack_name}" exists.'
) from e
stacks = response["Stacks"]
stack_outputs = stacks[0]["Outputs"]
api_outputs = [output for output in stack_outputs if output["OutputKey"] == "ApiEndpoint"]
if not api_outputs:
raise Exception(f"Cannot find output ApiEndpoint in stack {sam_stack_name}")
self.api_endpoint = api_outputs[0]["OutputValue"] + '/'
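# Illustrative usage sketch (assumes the SAM stack is deployed and AWS
# credentials are available; the stream id below is a placeholder):
#   client = ApiTestClient()
#   resp = client.commit('stream-1', events=client.some_events, metadata=client.some_metadata)
#   changesets = client.query_changesets('stream-1')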
def commit(self, stream_id, events=None, metadata=None, last_changeset_id=None, last_event_id=None):
expected_last_changeset = ""
expected_last_event = ""
if last_changeset_id is not None:
try:
expected_last_changeset = int(last_changeset_id)
except ValueError:
expected_last_changeset = last_changeset_id
if last_event_id is not None:
try:
expected_last_event = int(last_event_id)
except ValueError:
expected_last_event = last_event_id
url = self.api_endpoint + f'streams/{stream_id}?expected_last_changeset={expected_last_changeset}&expected_last_event={expected_last_event}'
payload = { }
if events:
payload["events"] = events
if metadata:
payload["metadata"] = metadata
return requests.post(url, json=payload)
def query_changesets(self, stream_id, from_changeset=None, to_changeset=None):
url = self.api_endpoint + f'streams/{stream_id}/changesets?&from={from_changeset or ""}&to={to_changeset or ""}'
return requests.get(url)
def query_events(self, stream_id, from_event=None, to_event=None):
url = self.api_endpoint + f'streams/{stream_id}/events?&from={from_event or ""}&to={to_event or ""}'
return requests.get(url)
def version(self):
return requests.get(self.api_endpoint + "version") | 35.865854 | 148 | 0.615097 | 352 | 2,941 | 4.872159 | 0.286932 | 0.057726 | 0.069971 | 0.050729 | 0.26414 | 0.16793 | 0.094461 | 0.06414 | 0.044315 | 0 | 0 | 0.009035 | 0.284937 | 2,941 | 82 | 149 | 35.865854 | 0.806467 | 0 | 0 | 0.174603 | 0 | 0 | 0.253569 | 0.07036 | 0 | 0 | 0 | 0 | 0 | 1 | 0.079365 | false | 0 | 0.047619 | 0.015873 | 0.253968 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f6d4afaaa493162b41239409c7176bc51a45d0 | 2,785 | py | Python | validation/synthetic_promoters.py | umarov90/DeepREFind | c3b24b760f3829ea8ed6ba6ed7db76cf90c45a9e | [
"Apache-2.0"
] | 1 | 2022-03-16T07:39:10.000Z | 2022-03-16T07:39:10.000Z | validation/synthetic_promoters.py | umarov90/DeepREFind | c3b24b760f3829ea8ed6ba6ed7db76cf90c45a9e | [
"Apache-2.0"
] | 2 | 2021-11-04T14:58:52.000Z | 2022-02-11T04:28:32.000Z | validation/synthetic_promoters.py | umarov90/ReFeaFi | c3b24b760f3829ea8ed6ba6ed7db76cf90c45a9e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import common as cm
import tensorflow as tf
import numpy as np
import math
from scipy import stats
from Bio.Seq import Seq
half_size = 500
batch_size = 128
scan_step = 1
seq_len = 1001
out = []
os.chdir(open("../data_dir").read().strip())
fasta = cm.parse_genome("data/genomes/hg19.fa")
# fasta = pickle.load(open("fasta.p", "rb"))
background = {}
background["RPLP0_CE_bg"] = ["chr12", "-", 120638861 - 1, 120639013 - 1]
background["ACTB_CE_bg"] = ["chr7", "-", 5570183 - 1, 5570335 - 1]
background["C14orf166_CE_bg"] = ["chr14", "+", 52456090 - 1, 52456242 - 1]
our_scores = []
real_scores = []
new_graph = tf.Graph()
with tf.Session(graph=new_graph) as sess:
tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], "models/model_predict")
saver = tf.train.Saver()
saver.restore(sess, "models/model_predict/variables/variables")
input_x = tf.get_default_graph().get_tensor_by_name("input_prom:0")
y = tf.get_default_graph().get_tensor_by_name("output_prom:0")
kr = tf.get_default_graph().get_tensor_by_name("kr:0")
in_training_mode = tf.get_default_graph().get_tensor_by_name("in_training_mode:0")
with open('data/Supplemental_Table_S7.tsv') as file:
next(file)
for line in file:
vals = line.split("\t")
fa = vals[25].strip()
if(vals[3] in background.keys()):
bg = background[vals[3]]
strand = bg[1]
real_score = float(vals[23])
if math.isnan(real_score):
continue
real_scores.append(real_score)
if (strand == "+"):
fa_bg = fasta[bg[0]][bg[2] - (half_size - 114): bg[2] - (half_size - 114) + seq_len]
faf = fa_bg[:half_size - 49] + fa + fa_bg[half_size + 115:]
else:
fa_bg = fasta[bg[0]][bg[2] - (half_size - 49): bg[2] - (half_size - 49) + seq_len]
fa_bg = str(Seq(fa_bg).reverse_complement())
faf = fa_bg[:half_size - 114] + fa + fa_bg[half_size + 50:]
predict = sess.run(y,
feed_dict={input_x: [cm.encode_seq(faf)], kr: 1.0, in_training_mode: False})
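# Collapse the two network outputs into a single score and squash it with a
# log transform; the 1.0001 keeps the denominator positive as the score nears 1.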
score = predict[0][0] - predict[0][1]
score = math.log(1 + score / (1.0001 - score))
our_scores.append(score)
out.append(str(real_score) + "," + str(score))
real_scores = np.asarray(real_scores)
corr = stats.pearsonr(np.asarray(our_scores), np.asarray(real_scores))[0]
print(corr)
with open("figures_data/synth.csv", 'w+') as f:
f.write('\n'.join(out)) | 39.785714 | 111 | 0.596409 | 400 | 2,785 | 3.925 | 0.39 | 0.04586 | 0.030573 | 0.043312 | 0.2 | 0.110828 | 0.110828 | 0.110828 | 0.029299 | 0 | 0 | 0.062171 | 0.249192 | 2,785 | 70 | 112 | 39.785714 | 0.688666 | 0.022621 | 0 | 0 | 0 | 0 | 0.109886 | 0.033811 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.114754 | 0 | 0.114754 | 0.016393 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f85e513a4285e63e608e0444128184121a5ac0 | 865 | py | Python | dojos/lendico/sinais.py | ramalho/tdd-com-pytest | edae805ddf2267d8e08eea6ab344242217f52043 | [
"BSD-3-Clause"
] | 15 | 2018-05-25T16:17:08.000Z | 2020-04-02T21:39:41.000Z | dojos/lendico/sinais.py | ramalho/tdd-com-pytest | edae805ddf2267d8e08eea6ab344242217f52043 | [
"BSD-3-Clause"
] | null | null | null | dojos/lendico/sinais.py | ramalho/tdd-com-pytest | edae805ddf2267d8e08eea6ab344242217f52043 | [
"BSD-3-Clause"
] | 1 | 2021-04-19T21:27:17.000Z | 2021-04-19T21:27:17.000Z | #!/usr/bin/env python3
import sys
def search(query, data):
query = query.replace('-',' ')
words = set(query.upper().split())
for code, char, name in data:
name = name.replace('-',' ')
if words <= set(name.split()):
yield f'{code}\t{char}\t{name}'
def reader():
with open('UnicodeData.txt') as _file:
for line in _file:
code, name = line.split(';')[:2]
char = chr(int(code, 16))
yield f'U+{code}', char, name
def main(*words):
if len(words) < 1:
print("Please provide one word or more", file=sys.stderr)
return
query = ' '.join(words)
index = -1
for index, line in enumerate(search(query, reader())):
print(line)
if index == -1:
print("No results", file=sys.stderr)
if __name__ == "__main__":
main(*sys.argv[1:])
| 23.378378 | 65 | 0.543353 | 116 | 865 | 3.965517 | 0.474138 | 0.047826 | 0.052174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012903 | 0.283237 | 865 | 36 | 66 | 24.027778 | 0.729032 | 0.024277 | 0 | 0 | 0 | 0 | 0.118624 | 0.026097 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.038462 | 0 | 0.192308 | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30f86263cd9609adb5d7644c7670d39fe1c0be15 | 6,154 | py | Python | bms_2/app01/views.py | luyl1017713252/python | 3b30cffa85b625e512415fa882b4bc7708a5e0b8 | [
"MulanPSL-1.0"
] | null | null | null | bms_2/app01/views.py | luyl1017713252/python | 3b30cffa85b625e512415fa882b4bc7708a5e0b8 | [
"MulanPSL-1.0"
] | null | null | null | bms_2/app01/views.py | luyl1017713252/python | 3b30cffa85b625e512415fa882b4bc7708a5e0b8 | [
"MulanPSL-1.0"
] | null | null | null | import json
from django.shortcuts import render, redirect, HttpResponse
# Create your views here.
from django.urls import reverse
from app01 import models
from app01.models import Book, Publish, Author, AuthorDetail
def books(request):
book_list = Book.objects.all()
publish_list = Publish.objects.all()
author_list = Author.objects.all()
return render(request, 'books.html', {'book_list': book_list, 'publish_list': publish_list, 'author_list': author_list})
def add_book(request):
if request.method == 'GET':
publish_list = Publish.objects.all()
author_list = Author.objects.all()
return render(request, 'book_add.html', {'publish_list': publish_list, 'author_list': author_list})
else:
title = request.POST.get('title')
price = request.POST.get('price')
pub_date = request.POST.get('pub_date')
publish = request.POST.get('publish')
authors = request.POST.getlist('authors')
book = Book.objects.create(title=title, price=price, pub_date=pub_date, publish_id=publish)
book.authors.add(*authors)
return redirect('/books/')
def del_book(request):
book_id = request.POST.get('id_book')
print(book_id)
ret = Book.objects.filter(id=book_id).delete()
if ret:
return HttpResponse('True')
else:
return HttpResponse('False')
def up_book(request, book_id):
if request.method == 'GET':
book_up_list = Book.objects.filter(id=book_id)
publish_list = Publish.objects.all()
author_list = Author.objects.all()
book = Book.objects.filter(id=book_id).values('publish')[0]
author = Book.objects.filter(id=book_id).values('authors')[0]
publish_name = Publish.objects.filter(id=book['publish']).first()
author_name = Author.objects.filter(id=author['authors']).first()
return render(request, 'up_book.html',
{'book_up_list': book_up_list, 'publish_list': publish_list, 'author_list': author_list,
'publish_name': publish_name, 'author_name': author_name})
else:
title = request.POST.get('title')
price = request.POST.get('price')
pub_date = request.POST.get('pub_date')
publish = request.POST.get('publish')
authors = request.POST.getlist('authors')
# a_list = {'author_id': authors}
print(authors)
date_list = {'title': title, 'price': price, 'pub_date': pub_date, 'publish_id': publish}
Book.objects.filter(id=book_id).update(**date_list)
book = Book.objects.filter(id=book_id).first()
book.authors.set(authors)
return redirect(reverse('books'))
def publishs(request):
publish_list = Publish.objects.all()
return render(request, 'publishs.html', {'publish_list': publish_list})
def add_publish(request):
if request.method == 'GET':
return render(request, 'publish_add.html')
else:
name = request.POST.get('name')
email = request.POST.get('email')
Publish.objects.create(name=name, email=email)
return redirect('/publishs/')
def del_publish(request, publish_id):
Publish.objects.filter(id=publish_id).delete()
return redirect('/publishs/')
def up_publish(request, publish_id):
if request.method == 'GET':
publish = Publish.objects.filter(id=publish_id)
return render(request, 'up_publish.html', {'publish': publish})
else:
publish = request.POST.dict()
del publish['csrfmiddlewaretoken']
Publish.objects.filter(id=publish_id).update(**publish)
return redirect('/publishs/')
def authors(request):
author_list = Author.objects.all()
return render(request, 'authors.html', {'author_list': author_list})
def add_author(request):
if request.method == 'GET':
return render(request, 'author_add.html')
else:
name = request.POST.get('name')
age = request.POST.get('age')
email = request.POST.get('email')
tel = request.POST.get('tel')
tels = AuthorDetail.objects.create(tel=tel)
author = Author.objects.create(name=name, age=age, email=email, ad=tels)
return redirect('/authors/')
def del_author(request, author_id):
author = Author.objects.filter(id=author_id)
ad_id = author.values('ad_id')[0]['ad_id']
author.delete()
AuthorDetail.objects.filter(id=ad_id).delete()
return redirect('/authors/')
def up_author(request, author_id):
if request.method == 'GET':
author = Author.objects.filter(id=author_id)
return render(request, 'up_author.html', {'author': author})
else:
name = request.POST.get('name')
age = request.POST.get('age')
email = request.POST.get('email')
tel = request.POST.get('tel')
author = Author.objects.filter(id=author_id)
author.update(name=name, age=age, email=email)
AuthorDetail.objects.filter(id=author.values('ad_id')[0]['ad_id']).update(tel=tel)
return redirect('/authors/')
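# AJAX endpoint: creates a Book from the posted form fields and returns the
# data needed to render the new table row (and the updated book count) as JSON.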
def ajax_add_book(request):
authors_list = []
title = request.POST.get('title')
price = str(format(int(request.POST.get('price')), '.2f'))
pub_date = request.POST.get('pub_date')
publish = request.POST.get('publish')
authors = request.POST.getlist('authors')
book = Book.objects.create(title=title, price=price, pub_date=pub_date, publish_id=publish)
print(price)
book.authors.add(*authors)
count_book = Book.objects.all().count()
publish_name = Publish.objects.filter(id=publish).first()
authors_name = Author.objects.filter(id__in=authors)
for i in authors_name:
authors_list.append(i.name)
data_dict = {
'pd': 'true',
'book_id': book.id,
'count_book': count_book,
'title': title,
'price': price,
'pub_date': pub_date,
'publish_name': publish_name.name,
'authors': authors_list
}
# print(authors, publish)
if book:
data_dict['pd'] = 'true'
else:
data_dict['pd'] = 'false'
return HttpResponse(json.dumps(data_dict)) | 35.165714 | 124 | 0.647221 | 782 | 6,154 | 4.938619 | 0.098465 | 0.076903 | 0.083376 | 0.034438 | 0.564215 | 0.487053 | 0.399275 | 0.347488 | 0.279907 | 0.245469 | 0 | 0.001847 | 0.207995 | 6,154 | 175 | 125 | 35.165714 | 0.790521 | 0.012837 | 0 | 0.415493 | 0 | 0 | 0.11166 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.091549 | false | 0 | 0.035211 | 0 | 0.267606 | 0.021127 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30fcb43dd0f69d332ab19924f865fff8aca3d2e2 | 3,642 | py | Python | chatbot/scripts/interactive_slack.py | check-spelling/learning | a3b031cf8fe766ad97e023ec1f3177389778853b | [
"Apache-2.0"
] | 2 | 2021-08-24T05:20:48.000Z | 2021-09-30T18:03:46.000Z | chatbot/scripts/interactive_slack.py | check-spelling/learning | a3b031cf8fe766ad97e023ec1f3177389778853b | [
"Apache-2.0"
] | null | null | null | chatbot/scripts/interactive_slack.py | check-spelling/learning | a3b031cf8fe766ad97e023ec1f3177389778853b | [
"Apache-2.0"
] | 3 | 2021-07-24T23:19:45.000Z | 2021-12-27T04:20:45.000Z | #!/usr/bin/env python3
# Copyright (c) Facebook, Inc. and its affiliates.
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
"""
Talk with a model using a Slack channel.
# Examples
```shell
parlai interactive_slack --token xoxb-... --task blended_skill_talk:all -mf zoo:blenderbot2/blenderbot2_400M/model --search-server http://localhost:5000```
"""
from os import getenv
from pprint import pformat
from parlai.scripts.interactive import setup_args
from parlai.core.agents import create_agent
from parlai.core.worlds import create_task
from parlai.core.script import ParlaiScript, register_script
import parlai.utils.logging as logging
from parlai.agents.local_human.local_human import LocalHumanAgent
try:
from slack import RTMClient
except ImportError:
raise ImportError('The slackclient package must be installed to run this script')
SHARED = {}
SLACK_BOT_TOKEN = getenv('SLACK_BOT_TOKEN')
def setup_slack_args(shared):
"""
Build and parse CLI opts.
"""
parser = setup_args()
parser.description = 'Interactive chat with a model in a Slack channel'
parser.add_argument(
'--token',
default=SLACK_BOT_TOKEN,
metavar='SLACK_BOT_TOKEN',
help='A legacy Slack bot token to use for RTM messaging',
)
return parser
@RTMClient.run_on(event='message')
async def rtm_handler(rtm_client, web_client, data, **kwargs):
"""
Handles new chat messages from Slack.
Because generating a response can take a while, it does the following to let the user know immediately that the agent is working:
Adds the :eyes: reaction to the incoming message
Sets the status of the bot to typing
Runs the agent on the given text
Returns the model_response text to the channel where the message came from.
Removes the :eyes: reaction
Only works on user messages, not messages from other bots
"""
global SHARED
if 'bot_profile' in data:
# Don't respond to bot messages (e.g. the one this app generates)
return
logging.info(f'Got new message {pformat(data)}')
channel = data['channel']
web_client.reactions_add(channel=channel, name='eyes', timestamp=data['ts'])
reply = {'episode_done': False, 'text': data['text']}
SHARED['agent'].observe(reply)
logging.info('Agent observed')
await rtm_client.typing(channel=channel)
model_response = SHARED['agent'].act()
logging.info('Agent acted')
web_client.chat_postMessage(channel=channel, text=model_response['text'])
logging.info(f'Sent response: {model_response["text"]}')
web_client.reactions_remove(channel=channel, name='eyes', timestamp=data['ts'])
def interactive_slack(opt):
global SHARED
if not opt.get('token'):
raise RuntimeError(
'A Slack bot token must be specified. Must be a legacy bot app token for RTM messaging'
)
human_agent = LocalHumanAgent(opt)
agent = create_agent(opt, requireModelExists=True)
agent.opt.log()
agent.opt['verbose'] = True
SHARED['opt'] = agent.opt
SHARED['agent'] = agent
SHARED['world'] = create_task(SHARED.get('opt'), [human_agent, SHARED['agent']])
SHARED['client'] = client = RTMClient(token=opt['token'])
logging.info('Slack client is starting')
client.start()
@register_script('interactive_slack', aliases=['slack'], hidden=True)
class InteractiveSlack(ParlaiScript):
@classmethod
def setup_args(cls):
return setup_slack_args(SHARED)
def run(self):
return interactive_slack(self.opt)
if __name__ == '__main__':
InteractiveSlack.main()
| 33.412844 | 155 | 0.7095 | 496 | 3,642 | 5.092742 | 0.40121 | 0.019002 | 0.030879 | 0.015835 | 0.029295 | 0.029295 | 0.029295 | 0 | 0 | 0 | 0 | 0.003383 | 0.188358 | 3,642 | 108 | 156 | 33.722222 | 0.85115 | 0.136189 | 0 | 0.030303 | 0 | 0 | 0.204443 | 0.009036 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.166667 | 0.030303 | 0.30303 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
30fd7f4d4cd4681834c31425f77246c6aabccc27 | 2,542 | py | Python | plywood/plugins/include.py | colinta/plywood | 49cf98f198da302ec66e11338320c6b72b642ffa | [
"BSD-2-Clause-FreeBSD"
] | 1 | 2015-06-11T06:17:42.000Z | 2015-06-11T06:17:42.000Z | plywood/plugins/include.py | colinta/plywood | 49cf98f198da302ec66e11338320c6b72b642ffa | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | plywood/plugins/include.py | colinta/plywood | 49cf98f198da302ec66e11338320c6b72b642ffa | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | '''
Includes content from another template. If you assign any keyword arguments,
those will be available in the scope of that template.
'''
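# Illustrative usage sketch inside a .ply template (the template name and
# keyword below are hypothetical; exact keyword syntax follows the host
# template's conventions):
#   include 'sidebar', user=current_user
# The template is looked up as <path>/sidebar.ply and `user` is added to its scope.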
import os
from plywood import Plywood
from plywood.values import PlywoodValue
from plywood.exceptions import InvalidArguments
from plywood.env import PlywoodEnv
@PlywoodEnv.register_runtime()
def include(states, scope, arguments, block):
if len(arguments.args) != 1:
raise InvalidArguments('`include` only accepts one argument')
if len(block.lines):
raise InvalidArguments('`include` does not accept a block')
restore_scope = {}
delete_scope = []
if isinstance(scope['self'], PlywoodValue):
context = scope['self'].python_value(scope)
else:
context = scope['self']
if len(arguments.kwargs):
kwargs = dict(
(item.key.get_name(), item.value)
for item in arguments.kwargs
)
for key, value in kwargs.items():
if key in context:
restore_scope[key] = context[key]
else:
delete_scope.append(key)
context[key] = value
if '__env' in scope:
env = scope['__env']
else:
env = PlywoodEnv({'separator': ' '})
template_name = arguments.args[0].python_value(scope)
plywood = None
for path in scope['__paths']:
template_path = os.path.join(path, template_name) + '.ply'
if template_path in env.includes:
plywood = env.includes[template_path]
elif os.path.exists(template_path):
retval = ''
with open(template_path) as f:
input = f.read()
plywood = Plywood(input)
env.includes[template_path] = plywood
# scope is not pushed/popped - `include` adds its variables to the local scope.
if plywood:
break
if not plywood:
raise Exception('Could not find template: {0!r}'.format(template_name))
retval = plywood.run(context, env)
if len(arguments.kwargs):
for key, value in restore_scope.items():
context[key] = value
for key in delete_scope:
del context[key]
return states, retval
@PlywoodEnv.register_startup()
def startup(plywood, scope):
if plywood.options.get('paths'):
scope['__paths'] = plywood.options.get('paths')
elif plywood.options.get('path'):
scope['__paths'] = [plywood.options.get('path')]
else:
scope['__paths'] = [os.getcwd()]
| 31 | 95 | 0.611723 | 298 | 2,542 | 5.110738 | 0.338926 | 0.047275 | 0.044649 | 0.026264 | 0.072226 | 0.03677 | 0 | 0 | 0 | 0 | 0 | 0.001641 | 0.280881 | 2,542 | 81 | 96 | 31.382716 | 0.83151 | 0.083006 | 0 | 0.126984 | 0 | 0 | 0.077486 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031746 | false | 0 | 0.079365 | 0 | 0.126984 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a5011c8dc68549aa930d5e1f66b7f91d841322f1 | 388 | py | Python | segundos.py | rafabentoss/courses | ced3f5447dea38220b904678c601ed2bf7f2b0f1 | [
"CC0-1.0"
] | null | null | null | segundos.py | rafabentoss/courses | ced3f5447dea38220b904678c601ed2bf7f2b0f1 | [
"CC0-1.0"
] | null | null | null | segundos.py | rafabentoss/courses | ced3f5447dea38220b904678c601ed2bf7f2b0f1 | [
"CC0-1.0"
] | null | null | null | segundos_str = int(input("Por favor, entre com o número de segundos que deseja converter:"))
dias=segundos_str//86400
segs_restantes1=segundos_str%86400
horas=segs_restantes1//3600
segs_restantes2=segs_restantes1%3600
minutos=segs_restantes2//60
segs_restantes_final=segs_restantes2%60
print(dias,"dias,",horas, "horas,",minutos, "minutos e", segs_restantes_final, "segundos.") | 38.8 | 93 | 0.796392 | 55 | 388 | 5.381818 | 0.490909 | 0.111486 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.07932 | 0.090206 | 388 | 10 | 94 | 38.8 | 0.759207 | 0 | 0 | 0 | 0 | 0 | 0.242105 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a501d89e850c922d43a99061a0daa6321c93954a | 870 | py | Python | src/balloons.py | cloudzfy/pychallenge | 1af98a632021532e136721d282b0e7c2cbc519a3 | [
"MIT"
] | 3 | 2016-07-23T03:31:46.000Z | 2019-08-22T01:23:07.000Z | src/balloons.py | cloudzfy/pychallenge | 1af98a632021532e136721d282b0e7c2cbc519a3 | [
"MIT"
] | null | null | null | src/balloons.py | cloudzfy/pychallenge | 1af98a632021532e136721d282b0e7c2cbc519a3 | [
"MIT"
] | 3 | 2017-05-22T09:41:20.000Z | 2018-09-06T02:05:19.000Z | import urllib
import StringIO
import gzip
from difflib import Differ
from binascii import unhexlify
import Image
src = urllib.urlopen('http://huge:file@www.pythonchallenge.com/pc/return/deltas.gz').read()
file = gzip.GzipFile(fileobj=StringIO.StringIO(src))
left = []
right = []
for line in file.readlines():
left.append(line[0:53])
right.append(line[56:109])
file.close()
d = Differ()
result = list(d.compare(left, right))
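# Differ prefixes: lines common to both halves (' ') build pic1, lines only
# in the right half ('+') build pic2, lines only in the left half ('-') build pic3.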
pic1 = ''
pic2 = ''
pic3 = ''
for line in result:
if line[0] == ' ':
pic1 += line[1:].replace(' ', '').replace('\n', '')
elif line[0] == '+':
pic2 += line[1:].replace(' ', '').replace('\n', '')
elif line[0] == '-':
pic3 += line[1:].replace(' ', '').replace('\n', '')
Image.open(StringIO.StringIO(unhexlify(pic1))).show()
Image.open(StringIO.StringIO(unhexlify(pic2))).show()
Image.open(StringIO.StringIO(unhexlify(pic3))).show()
| 24.166667 | 91 | 0.654023 | 119 | 870 | 4.781513 | 0.428571 | 0.112478 | 0.063269 | 0.100176 | 0.330404 | 0.235501 | 0.101933 | 0.101933 | 0 | 0 | 0 | 0.030105 | 0.121839 | 870 | 35 | 92 | 24.857143 | 0.71466 | 0 | 0 | 0 | 0 | 0 | 0.082759 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.206897 | 0 | 0.206897 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a506b736c248be3b528126847b39f395938649f3 | 11,437 | py | Python | workload_auto/vsphere_helper.py | CiscoDevNet/dcnm-workload-automation | 36d439f0351b88492d21160c534e1260b9d43ca8 | [
"Apache-2.0"
] | null | null | null | workload_auto/vsphere_helper.py | CiscoDevNet/dcnm-workload-automation | 36d439f0351b88492d21160c534e1260b9d43ca8 | [
"Apache-2.0"
] | null | null | null | workload_auto/vsphere_helper.py | CiscoDevNet/dcnm-workload-automation | 36d439f0351b88492d21160c534e1260b9d43ca8 | [
"Apache-2.0"
] | 1 | 2020-07-07T14:53:14.000Z | 2020-07-07T14:53:14.000Z | # Copyright 2020 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''
vSphere/vCenter helper module. This module interacts with vSphere using pyVmomi.
'''
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim
from workload_auto import logger
LOG = logger.get_logging(__name__)
class VsphereHelper:
'''
VSphere Helper class
'''
def __init__(self, **kwargs):
'''
Init routine that connects with the specified vCenter
'''
self.ip_addr = kwargs.get('ip')
self.user = kwargs.get('user')
self.pwd = kwargs.get('pwd')
self.si_obj = SmartConnectNoSSL(host=self.ip_addr, user=self.user,
pwd=self.pwd)
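# Illustrative usage sketch (host, credentials and names below are placeholders):
#   helper = VsphereHelper(ip='vcenter.example.com', user='admin', pwd='secret')
#   vlan = helper.get_vlan(True, 'dvs-1', 'dv-pg-1')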
def get_all_objs(self, vimtype):
'''
Retrieve all the objects of a specific type
'''
obj = {}
container = self.si_obj.content.viewManager.CreateContainerView(
self.si_obj.content.rootFolder, vimtype, True)
for managed_object_ref in container.view:
obj.update({managed_object_ref: managed_object_ref.name})
return obj
def get_specific_obj(self, vimtype, name):
'''
Retrieve the object of a specific type matching the name.
'''
container = self.si_obj.content.viewManager.CreateContainerView(
self.si_obj.content.rootFolder, vimtype, True)
for managed_object_ref in container.view:
if managed_object_ref.name == name:
return managed_object_ref
return None
def get_dvs_pg_obj(self, dvs_name, dvs_pg):
'''
Returns the DVS and DVS PG objects.
'''
dvs_obj = self.get_specific_obj([vim.DistributedVirtualSwitch],
dvs_name)
if dvs_obj is None or dvs_obj.name != dvs_name:
LOG.error("DVS %s does not exist", dvs_name)
return None, None
dvs_pgs = dvs_obj.portgroup
for dvpg in dvs_pgs:
if dvpg.name == dvs_pg:
return dvs_obj, dvpg
LOG.error("DVS PG %s does not exist", dvs_pg)
return None, None
def is_dvs_dvspg_exist(self, dvs_name, dvs_pg):
'''
Checks if the DVS and DVS PG exist in vSphere.
'''
dvs_obj, dvs_pg_obj = self.get_dvs_pg_obj(dvs_name, dvs_pg)
if dvs_obj is None or dvs_pg_obj is None:
LOG.error("get_dvs_pg_obj returns false for DVS %s dvs PG %s",
dvs_name, dvs_pg)
return False
return True
def get_host_pg_obj(self, host_name, host_pg):
'''
Returns the Host and PG object
'''
host_obj = self.get_specific_obj([vim.HostSystem], host_name)
if host_obj is None or host_obj.name != host_name:
LOG.error("Host %s does not exist", host_name)
return None, None
for pg_obj in host_obj.config.network.portgroup:
if pg_obj.spec.name == host_pg:
return host_obj, pg_obj
LOG.error("Host PG %s does not exist", host_pg)
return None, None
def is_host_hostpg_exist(self, host_name, host_pg):
'''
Checks if the Host and Host PG exist in vSphere.
'''
host_obj, host_pg_obj = self.get_host_pg_obj(host_name, host_pg)
if host_obj is None or host_pg_obj is None:
LOG.error("get_host_pg_obj returns false for host %s host PG %s",
host_name, host_pg)
return False
return True
def get_vlan_dvs(self, dvs_name, dvs_pg):
'''
Get the VLAN associated with the DV-PG in a DVS.
First get all the DVS objects and pick the DVS object matching the
argument. For all the DV-PGs in that DVS object, find the DV-PG that
matches the requested DV-PG, then read the VLAN from that DV-PG object.
A faster method that fetched the DV-PG info directly, without going
through the loop, would be preferable if one exists.
'''
dvs_obj, dvs_pg_obj = self.get_dvs_pg_obj(dvs_name, dvs_pg)
if dvs_obj is None or dvs_pg_obj is None:
LOG.error("DVS %s or PG %s does not exist, cannot obtain VLAN",
dvs_name, dvs_pg)
return ""
vlan_info = dvs_pg_obj.config.defaultPortConfig.vlan
cl_obj = vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec
if isinstance(vlan_info, cl_obj):
# Trunk DV-PGs carry a VLAN range rather than a single ID; trunk handling is untested here, so return an empty string for now.
return ""
vlan_id = str(vlan_info.vlanId)
return vlan_id
def get_vlan_host_pg(self, host_name, pg_name):
'''
Get the VLAN associated with the PG in a host.
First get the network object associated with the PG.
Second, match the host, in case the same PG name is configured in
multiple hosts. There are other ways to write the loop, but this one is
slightly better, since a large host loop is rare unless the same PG
name is configured in multiple hosts.
Next, for all the PGs configured in the host, find the matching PG
object and return the VLAN.
'''
#hosts = self.get_all_objs([vim.HostSystem])
host_obj, host_pg_obj = self.get_host_pg_obj(host_name, pg_name)
if host_obj is None or host_pg_obj is None:
LOG.error("Host %s or PG %s does not exist, cannot obtain VLAN",
host_name, pg_name)
return ""
vlan_id = str(host_pg_obj.spec.vlanId)
return vlan_id
def get_vlan(self, is_dvs, dvs_or_host_name, cmn_pg):
'''
Top level get vlan function.
'''
if is_dvs:
return self.get_vlan_dvs(dvs_or_host_name, cmn_pg)
else:
return self.get_vlan_host_pg(dvs_or_host_name, cmn_pg)
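# Illustrative usage sketch (the vCenter address, credentials, DVS name and
# port-group name below are placeholders, not part of the original module):
#
#   helper = VsphereHelper(ip='vcenter.example.com', user='admin', pwd='secret')
#   vlan = helper.get_vlan(is_dvs=True, dvs_or_host_name='dvs-prod', cmn_pg='pg-web')
#
# get_vlan() simply dispatches to the DVS lookup or the standalone-host lookup above.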
def get_host_neighbour(self, host_name, pnic_list):
'''
Retrieve the neighbour information for a host, given a list of pnics.
'''
host = self.get_specific_obj([vim.HostSystem], host_name)
neighbour = []
query = host.configManager.networkSystem.QueryNetworkHint()
for nic in query:
if not hasattr(nic, 'device'):
LOG.error('device attribute not present')
continue
pnic = nic.device
if pnic not in pnic_list:
continue
dct = {}
conn_sw_port = nic.connectedSwitchPort
if not hasattr(conn_sw_port, 'portId') or (
not hasattr(conn_sw_port, 'devId')):
LOG.error("Port Id or devId attribute not present")
continue
sw_port = conn_sw_port.portId
dev_id = conn_sw_port.devId
snum_str = dev_id.split('(')
if len(snum_str) > 1:
    snum = snum_str[1].split(')')[0]
else:
    # Skip this nic: without a serial number the dict below would
    # reference an unbound name.
    LOG.error("snum not present for the switch")
    continue
# Default the optional fields so they are always defined.
sw_ip = conn_sw_port.mgmtAddr if hasattr(conn_sw_port, 'mgmtAddr') else ''
sw_name = conn_sw_port.systemName if hasattr(conn_sw_port, 'systemName') else ''
dct.update({'ip': sw_ip, 'snum': snum, 'pnic': pnic,
'sw_port': sw_port, 'name': sw_name})
neighbour.append(dct)
return neighbour
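# Each entry appended above is a dict describing one discovered neighbour,
# roughly of this shape (values illustrative only):
#   {'ip': '10.0.0.1', 'snum': 'FDO1234ABCD', 'pnic': 'vmnic0',
#    'sw_port': 'Ethernet1/1', 'name': 'leaf-switch-1'}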
def get_pnic_dvs(self, host_name, key_dvs):
'''
Return the list of pnics of a host that is part of a DVS.
The pnic list is taken from the host entry of the DVS object.
'''
pnic_list = []
for host in key_dvs.config.host:
if host.config.host.name == host_name:
for pnic_elem in host.config.backing.pnicSpec:
pnic_list.append(pnic_elem.pnicDevice)
return pnic_list
def get_dvs_pnic_info(self, dvs, dvs_pg):
'''
Return the list of neighbours for every host associated with the DVS.
First get the matching DVS object and DV-PG object from the complete
list.
For every host that is part of the DVS, get the pnic and neighbour info.
'''
dvs_obj, dvs_pg_obj = self.get_dvs_pg_obj(dvs, dvs_pg)
if dvs_obj is None or dvs_pg_obj is None:
LOG.error("DVS %s or PG %s does not exist, cannot obtain pnic",
dvs, dvs_pg)
return ""
host_dict = {}
for host_obj in dvs_pg_obj.host:
pnic_list = self.get_pnic_dvs(host_obj.name, dvs_obj)
nei_list = self.get_host_neighbour(host_obj.name, pnic_list)
host_dict.update({host_obj.name: nei_list})
return host_dict
def _get_pnic_from_key(self, pnic_obj, pnic_comp_key):
'''
Return the device from pnic object for matching object name.
'''
for pnic_elem in pnic_obj:
if pnic_elem.key == pnic_comp_key:
return pnic_elem.device
return None
def get_host_pnic_info(self, host_name, pg_name):
'''
Get the pnic info for a specific host.
First get the list of all network objects for the PG.
Then, filter the objects based on the passed host. This set is assumed
to be small unless the same PG name is configured in multiple hosts.
Get the vSwitch name that this PG is a part of.
Then, for all the vswitches in the host, filter out the specific
vswitch and get the vswitch object.
Get the pnic list from the vswitch; this gives a list of pnic keys.
Then call _get_pnic_from_key, which returns the device associated
with each pnic key.
Finally, retrieve the neighbours for the host and pnics.
'''
#hosts = self.get_all_objs([vim.HostSystem])
host_dict = {}
host_obj, host_pg_obj = self.get_host_pg_obj(host_name, pg_name)
if host_obj is None or host_pg_obj is None:
LOG.error("Host %s or PG %s does not exist, cannot obtain pnic",
host_name, pg_name)
return ""
vsw_name = host_pg_obj.spec.vswitchName
for vsw in host_obj.config.network.vswitch:
if vsw.name != vsw_name:
continue
pnic_list = vsw.pnic
pnic_dev_list = []
for pnic_elem in pnic_list:
dev = self._get_pnic_from_key(host_obj.config.network.pnic,
pnic_elem)
pnic_dev_list.append(dev)
nei_list = self.get_host_neighbour(host_name, pnic_dev_list)
host_dict.update({host_name: nei_list})
return host_dict
def get_pnic(self, is_dvs, dvs_host_name, cmn_pg):
'''
Top level function to retrieve the information associated with the pnic.
'''
if is_dvs:
return self.get_dvs_pnic_info(dvs_host_name, cmn_pg)
else:
return self.get_host_pnic_info(dvs_host_name, cmn_pg)
| 40.129825 | 80 | 0.607677 | 1,635 | 11,437 | 4.036697 | 0.160856 | 0.02197 | 0.019091 | 0.013333 | 0.345 | 0.262576 | 0.226818 | 0.179394 | 0.145 | 0.134394 | 0 | 0.001414 | 0.319927 | 11,437 | 284 | 81 | 40.271127 | 0.847133 | 0.26117 | 0 | 0.299401 | 0 | 0 | 0.071419 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095808 | false | 0 | 0.017964 | 0 | 0.299401 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a5083f8e81dde75a4b99510e59a13a9684a3d9d3 | 1,042 | py | Python | TextureConversion/Scripts/InvertYellowNormal.py | Beatbox309/Unity-Texture-Conversion | 19fd59c34b8d6c1e528db8f220700d5b1ccef5a1 | [
"MIT"
] | null | null | null | TextureConversion/Scripts/InvertYellowNormal.py | Beatbox309/Unity-Texture-Conversion | 19fd59c34b8d6c1e528db8f220700d5b1ccef5a1 | [
"MIT"
] | null | null | null | TextureConversion/Scripts/InvertYellowNormal.py | Beatbox309/Unity-Texture-Conversion | 19fd59c34b8d6c1e528db8f220700d5b1ccef5a1 | [
"MIT"
] | null | null | null | from PIL import Image
from PIL import ImageChops
import pathlib
import sys
sys.path.append(str(pathlib.Path(__file__).parent.absolute()))
import TextureConversionMain as tcm
def Convert(normPath):
# setup
ogNorm = Image.open(normPath)
normTuple = tcm.SplitImg(ogNorm)
print("Images Loaded")
# Invert Blue Channel
invB = ImageChops.invert(normTuple[2])
normTuple = (normTuple[0],normTuple[1],invB)
print("Image Inverted")
# Normal Map
nrmWhite = tcm.CreateBWImg(ogNorm.size,255)
normal = Image.merge("RGBA",(normTuple[0],normTuple[1],normTuple[2],nrmWhite))
normal.save(workingDir + "NormalMap.png")
print("Images Saved!")
ogNormalPath = ""
with open(str(pathlib.Path(__file__).parent.absolute()) + '\\' + "Temp.txt") as file:
line = file.readlines()
ogNormalPath = line[0]
ogNormalPath = ogNormalPath.rstrip("\n")
workingDir = str(pathlib.Path(ogNormalPath).parent.absolute()) + '\\'
Convert(ogNormalPath)
tcm.RemoveTempFiles()
| 26.05 | 86 | 0.673704 | 118 | 1,042 | 5.881356 | 0.483051 | 0.043228 | 0.060519 | 0.051873 | 0.092219 | 0.092219 | 0 | 0 | 0 | 0 | 0 | 0.01182 | 0.1881 | 1,042 | 39 | 87 | 26.717949 | 0.808511 | 0.034549 | 0 | 0 | 0 | 0 | 0.073728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.2 | 0 | 0.24 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a508b46bb0fbec56198e8d643e6c23a47a292553 | 1,695 | py | Python | mininet_scripts/simple_net.py | kulawczukmarcin/mypox | b6a0a3cbfc911f94d0ed2ba5c968025879691eab | [
"Apache-2.0"
] | null | null | null | mininet_scripts/simple_net.py | kulawczukmarcin/mypox | b6a0a3cbfc911f94d0ed2ba5c968025879691eab | [
"Apache-2.0"
] | null | null | null | mininet_scripts/simple_net.py | kulawczukmarcin/mypox | b6a0a3cbfc911f94d0ed2ba5c968025879691eab | [
"Apache-2.0"
] | null | null | null | __author__ = 'Ehsan'
from mininet.node import CPULimitedHost
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.log import setLogLevel, info
from mininet.node import RemoteController
from mininet.cli import CLI
"""
Instructions to run the topo:
1. Go to the directory where this file is.
2. Run: sudo -E python Simple_Pkt_Topo.py
The topo has 4 switches and 4 hosts. They are connected in a star shape.
"""
class SimplePktSwitch(Topo):
"""Simple topology example."""
def __init__(self, **opts):
"""Create custom topo."""
# Initialize topology
# It uses the constructor of the Topo class
super(SimplePktSwitch, self).__init__(**opts)
# Add hosts and switches
h1 = self.addHost('h1')
h2 = self.addHost('h2')
h3 = self.addHost('h3')
h4 = self.addHost('h4')
# Adding switches
s1 = self.addSwitch('s1', dpid="0000000000000001")
s2 = self.addSwitch('s2', dpid="0000000000000002")
s3 = self.addSwitch('s3', dpid="0000000000000003")
s4 = self.addSwitch('s4', dpid="0000000000000004")
# Add links
self.addLink(h1, s1)
self.addLink(h2, s2)
self.addLink(h3, s3)
self.addLink(h4, s4)
self.addLink(s1, s2)
self.addLink(s1, s3)
self.addLink(s1, s4)
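# Resulting layout (derived from the links above): s1 is the hub switch
# connected to s2, s3 and s4, and each switch has exactly one host
# attached (h1-s1, h2-s2, h3-s3, h4-s4), giving the star shape described
# in the module docstring.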
def run():
c = RemoteController('c', '192.168.56.1', 6633)
net = Mininet(topo=SimplePktSwitch(), host=CPULimitedHost, controller=None)
net.addController(c)
net.start()
CLI(net)
net.stop()
# if the script is run directly (sudo custom/optical.py):
if __name__ == '__main__':
setLogLevel('info')
run()
| 26.904762 | 79 | 0.635988 | 218 | 1,695 | 4.844037 | 0.444954 | 0.072917 | 0.036932 | 0.039773 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.086381 | 0.241888 | 1,695 | 62 | 80 | 27.33871 | 0.735409 | 0.126254 | 0 | 0 | 0 | 0 | 0.087094 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.171429 | 0 | 0.257143 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a509787b777a426526c085c8fcec56b5e9a290ec | 5,806 | py | Python | .ci/watchdog/MSTeamsCommunicator.py | NervanaSystems/ngraph-onnx | 7fdb4e503a796ef3871927c550d1245e37cb6ffa | [
"Apache-2.0",
"MIT"
] | 47 | 2018-03-17T00:30:59.000Z | 2021-11-14T17:18:57.000Z | .ci/watchdog/MSTeamsCommunicator.py | NervanaSystems/ngraph-onnx | 7fdb4e503a796ef3871927c550d1245e37cb6ffa | [
"Apache-2.0",
"MIT"
] | 272 | 2018-03-22T14:31:09.000Z | 2022-03-24T20:51:04.000Z | .ci/watchdog/MSTeamsCommunicator.py | NervanaSystems/ngraph-onnx | 7fdb4e503a796ef3871927c550d1245e37cb6ffa | [
"Apache-2.0",
"MIT"
] | 20 | 2018-04-03T14:53:58.000Z | 2021-03-13T01:21:26.000Z | #!/usr/bin/python3
# INTEL CONFIDENTIAL
# Copyright 2018-2020 Intel Corporation
# The source code contained or described herein and all documents related to the
# source code ("Material") are owned by Intel Corporation or its suppliers or
# licensors. Title to the Material remains with Intel Corporation or its
# suppliers and licensors. The Material may contain trade secrets and proprietary
# and confidential information of Intel Corporation and its suppliers and
# licensors, and is protected by worldwide copyright and trade secret laws and
# treaty provisions. No part of the Material may be used, copied, reproduced,
# modified, published, uploaded, posted, transmitted, distributed, or disclosed
# in any way without Intel's prior express written permission.
# No license under any patent, copyright, trade secret or other intellectual
# property right is granted to or conferred upon you by disclosure or delivery of
# the Materials, either expressly, by implication, inducement, estoppel or
# otherwise. Any license under such intellectual property rights must be express
# and approved by Intel in writing.
# Include any supplier copyright notices as supplier requires Intel to use.
# Include supplier trademarks or logos as supplier requires Intel to use,
# preceded by an asterisk. An asterisked footnote can be added as follows:
# *Third Party trademarks are the property of their respective owners.
# Unless otherwise agreed by Intel in writing, you may not remove or alter
# this notice or any other notice embedded in Materials by Intel or Intel's
# suppliers or licensors in any way.
import requests
class MSTeamsCommunicator:
"""Class communicating with MSTeams using Incoming Webhook.
The purpose of this class is to use the MS Teams API to send messages.
Docs for the API used, including the wrapped methods, can be found at:
https://docs.microsoft.com/en-us/outlook/actionable-messages/send-via-connectors
"""
def __init__(self, _ci_alerts_channel_url):
self._ci_alerts_channel_url = _ci_alerts_channel_url
self._queued_messages = {
self._ci_alerts_channel_url: [],
}
@property
def messages(self):
"""
Get list of queued messages.
:return: List of queued messages
:return type: List[String]
"""
return self._queued_messages.values()
def queue_message(self, message):
"""
Queue message to be sent later.
:param message: Message content
:type message: String
"""
self._queued_messages[self._ci_alerts_channel_url].append(message)
def _parse_text(self, message):
"""
Parse text to display as alert.
:param message: Unparsed message content
:type message: String
"""
message_split = message.split('\n')
title = message_split[2]
log_url = message_split[-1]
text = message_split[3]
header = message_split[0].split(' - ')
header_formatted = '{} - [Watchdog Log]({})'.format(header[0], header[1])
text_formatted = '{}: ***{}***'.format(text.split(':', 1)[0], text.split(':', 1)[1])
return title, log_url, '{}\n\n{}'.format(header_formatted, text_formatted)
def _json_request_content(self, title, log_url, text_formatted):
"""
Create final json request to send message to MS Teams channel.
:param title: Title of alert
:param log_url: URL to Watchdog log
:param text_formatted: General content of alert - finally formatted
:type title: String
:type log_url: String
:type text_formatted: String
"""
data = {
'@context': 'https://schema.org/extensions',
'@type': 'MessageCard',
'themeColor': '0072C6',
'title': title,
'text': text_formatted,
'potentialAction':
[
{
'@type': 'OpenUri',
'name': 'Open PR',
'targets':
[
{
'os': 'default',
'uri': log_url,
},
],
},
],
}
return data
def _send_to_channel(self, message, channel_url):
"""
Send MSTeams message to specified channel.
:param message: Message content
:type message: String
:param channel_url: Channel url
:type channel_url: String
"""
title, log_url, text_formatted = self._parse_text(message)
data = self._json_request_content(title, log_url, text_formatted)
try:
requests.post(url=channel_url, json=data)
except Exception as ex:
raise Exception('!!CRITICAL!! MSTeamsCommunicator: Could not send message '
'due to {}'.format(ex))
def send_message(self, message, quiet=False):
"""
Send queued messages as single communication.
:param message: Final message's content
:param quiet: Flag for disabling sending report through MS Teams
:type message: String
:type quiet: Boolean
"""
for channel, message_queue in self._queued_messages.items():
final_message = message + '\n\n' + '\n'.join(message_queue)
if not quiet and message_queue:
self._send_to_channel(final_message, channel)
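# Illustrative usage sketch (the webhook URL is a placeholder, and the queued
# message must follow the multi-line Watchdog format expected by _parse_text):
#
#   communicator = MSTeamsCommunicator('https://outlook.office.com/webhook/...')
#   communicator.queue_message(watchdog_alert_text)
#   communicator.send_message('Nightly Watchdog report')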
| 40.319444 | 92 | 0.59783 | 651 | 5,806 | 5.208909 | 0.360983 | 0.02949 | 0.022117 | 0.026541 | 0.160425 | 0.078738 | 0.048953 | 0.023592 | 0 | 0 | 0 | 0.006101 | 0.322425 | 5,806 | 143 | 93 | 40.601399 | 0.855872 | 0.505167 | 0 | 0.035088 | 0 | 0 | 0.101621 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.122807 | false | 0 | 0.017544 | 0 | 0.210526 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a50994e7a70e911cc583b9d0550f81e22ad4842b | 4,301 | py | Python | script/create_buy_point_data_window.py | AtlantixJJ/vnpy | 28992c7d5391f6dd42a14b481d01ceafde048b5f | [
"MIT"
] | null | null | null | script/create_buy_point_data_window.py | AtlantixJJ/vnpy | 28992c7d5391f6dd42a14b481d01ceafde048b5f | [
"MIT"
] | null | null | null | script/create_buy_point_data_window.py | AtlantixJJ/vnpy | 28992c7d5391f6dd42a14b481d01ceafde048b5f | [
"MIT"
] | null | null | null | """Create wave training data.
Automatically annotate data by identifying waves. The beginning,
middle and ending are obtained. Windows around these points are
collected as training data. They are organized in years.
"""
import sys, glob, os
path = os.getcwd()
sys.path.insert(0, ".")
from datetime import datetime
from vnpy.trader.database import database_manager
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
import time
from lib import utils
from lib.alg import get_waves
def normalize(x):
#return x / (1e-9 + x.mean(2, keepdims=True))
return x[:, :, 1:] / (1e-9 + x[:, :, :-1]) - 1
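# normalize() converts raw window values into step-over-step relative changes
# along the time axis: element t becomes x[t+1]/x[t] - 1, with 1e-9 guarding
# against division by zero. The commented-out variant divided by the window
# mean instead.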
os.chdir(path)
WIN_SIZE = 20 # four weeks
PAD = 0 # the minimum distance between two different labels
NUM_SEGMENTS = 3
FEATURE_KEYS = ['open_price', 'close_price', 'high_price', 'low_price', 'volume']
binfos = utils.fast_index().values
binfos = [b for b in binfos if b[3] == 'd'] # day line only
data_keys = ["buy", "sell", "hold", "empty"]
key2color = {
"buy": "red",
"sell": "green",
"hold": "orange",
"empty": "blue"}
dic = {k: {} for k in data_keys}
buy_count = sell_count = hold_count = 0
for idx, binfo in enumerate(tqdm(binfos)):
_, symbol, exchange, interval, _ = binfo
vt_symbol = f"{symbol}.{exchange}"
for key in data_keys:
dic[key][vt_symbol] = {}
start = datetime.strptime(f"2000-01-01", "%Y-%m-%d")
end = datetime.strptime(f"2021-01-01", "%Y-%m-%d")
bars = database_manager.load_bar_data_s(
symbol=symbol, exchange=exchange, interval="d",
start=start, end=end)
if len(bars) < 100:
continue
df = utils.bars_to_df(bars)
N = df.shape[0]
# get waves
prices = df['close_price'].values
waves = get_waves(prices, T1=0.30, T2=0.20)
points = {k: {} for k in data_keys}
for year in range(2000, 2022):
for key in points.keys():
points[key][str(year)] = []
plot_flag = idx < 5
plot_waves = 10
if plot_flag:
st = 0
ed = waves[plot_waves - 1][2]
x = np.arange(st, ed)
y = df['close_price'].values[st:ed]
fig = plt.figure(figsize=(18, 6))
plt.plot(x, y)
lx = np.arange(st, ed)
for wave_id, (x1, y1, x2, y2, t) in enumerate(waves):
if t == -1: # decrease wave
offset = 0.8
start_key, middle_key, end_key = "sell", "hold", "hold"
elif t == 0: # null wave
offset = 1.0
start_key, middle_key, end_key = "empty", "empty", "empty"
elif t == 1: # increase wave
offset = 1.2
start_key, middle_key, end_key = "buy", "hold", "hold"
# segment length
S = (x2 - x1) // NUM_SEGMENTS
if plot_flag and t != 0:
ly = (y2 - y1) * offset / (x2 - x1) * (lx - x1) + y1 * offset
if plot_flag and t == 0:
ly = np.zeros_like(lx) + y1 * offset
def _work(ckey, win_st, win_ed):
if win_st >= win_ed:
return None
if plot_flag and wave_id < plot_waves and win_ed > win_st:
plt.plot(lx[win_st:win_ed], ly[win_st:win_ed],
color=key2color[ckey], linestyle='-')
for i in range(win_st, win_ed):
d = np.array([df[key][i - WIN_SIZE : i] \
for key in FEATURE_KEYS])
year = str(df.index[i - WIN_SIZE].year)
points[ckey][year].append(d)
_work(start_key, max(x1 + PAD + 1, WIN_SIZE), x1 + S + 1)
_work(middle_key, max(x1 + S + PAD + 1, WIN_SIZE), x2 - S + 1)
_work(end_key, max(x2 - S + PAD + 1, WIN_SIZE), x2 + 1)
for key in points.keys():
for year in points[key]:
if len(points[key][year]) == 0:
continue
x = normalize(np.array(points[key][year]))
if np.abs(x).max() < 100: # filter error data
dic[key][vt_symbol][year] = x
if plot_flag:
plt.savefig(f"results/buy_point_viz_{idx}.png")
plt.close()
if (idx + 1) % 100 == 0:
I = (idx + 1) // 100
np.save(f"data/buy_point/share_{I:02d}.npy", dic)
del dic
dic = {k: {} for k in data_keys}
# Save the final (possibly partial) shard. Using len(binfos) avoids a
# NameError when fewer than 100 symbols were processed and I was never set.
final_shard = len(binfos) // 100 + 1
np.save(f"data/buy_point/share_{final_shard:02d}.npy", dic)
a50cc4e2779045bd523e797e37d15a965dda09d9 | 5,784 | py | Python | {{cookiecutter.dir_name}}/script.py | TAMU-CPT/cc_automated_drf_template | c538f01ea90bae98ee051a6a2fd977d3dd595cde | [
"BSD-3-Clause"
] | 14 | 2016-10-07T21:59:03.000Z | 2020-03-03T17:08:49.000Z | {{cookiecutter.dir_name}}/script.py | TAMU-CPT/cc_automated_drf_template | c538f01ea90bae98ee051a6a2fd977d3dd595cde | [
"BSD-3-Clause"
] | 6 | 2016-10-07T19:48:21.000Z | 2019-03-14T12:41:50.000Z | {{cookiecutter.dir_name}}/script.py | TAMU-CPT/drf_template | c538f01ea90bae98ee051a6a2fd977d3dd595cde | [
"BSD-3-Clause"
] | 3 | 2017-03-15T08:32:04.000Z | 2018-02-05T22:09:50.000Z | import ast, _ast, subprocess, os, argparse
def write_files(app_name):
models = {}
# parse models.py
with open('%s/models.py' % app_name) as models_file:
m = ast.parse(models_file.read())
for i in m.body:
if type(i) == _ast.ClassDef:
models[i.name] = {}
for x in i.body:
if type(x) == _ast.Assign:
models[i.name][x.targets[0].id] = x.value.func.attr
models[i.name]['id'] = "Intrinsic"
serializer_names = [model+'Serializer' for model in models]
# serializers.py
with open('%s/serializers.py' % app_name, 'w') as ser_file:
def ser_class(model):
s = "class %sSerializer(serializers.HyperlinkedModelSerializer):\n" % model
s += " class Meta:\n"
s += " model = %s\n" % model
if len(models[model]) > 0:
s += " "*8 + "fields = (" + ', '.join(["'%s'" % x for x in models[model]]) + ',)\n'
ser_file.write('\n')
ser_file.write(s)
ser_file.write('\n'.join(["from django.contrib.auth.models import User, Group",
"from rest_framework import serializers",
"from %s.models import " % app_name + ', '.join([model for model in models]) + '\n']))
ser_file.write('\n'.join(["\nclass GroupSerializer(serializers.ModelSerializer):",
" "*4 + "class Meta:",
" "*8 + "model = Group",
" "*8 + "fields = ('id', 'name',)\n"]))
ser_file.write('\n'.join(["\nclass UserSerializer(serializers.ModelSerializer):",
" "*4 + "groups = GroupSerializer(many=True)",
" "*4 + "class Meta:",
" "*8 + "model = User",
" "*8 + "fields = ('id', 'username', 'email', 'groups',)\n"]))
for model in models:
ser_class(model)
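# For a hypothetical `Post` model with `title` and `body` fields, the block
# above would emit roughly the following into serializers.py (illustrative):
#
#   class PostSerializer(serializers.HyperlinkedModelSerializer):
#       class Meta:
#           model = Post
#           fields = ('title', 'body', 'id',)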
# views.py
with open('%s/views.py' % app_name, 'w') as view_file:
def viewset_class(model):
v = "class %sViewSet(viewsets.ModelViewSet):\n" % model
v += " queryset = %s.objects.all()\n" % model
v += " serializer_class = %sSerializer\n" % model
view_file.write('\n')
view_file.write(v)
view_file.write('\n'.join(["from rest_framework import viewsets",
"from django.contrib.auth.models import User, Group",
"from %s.serializers import UserSerializer, GroupSerializer, " % app_name + ', '.join([name for name in serializer_names]),
"from %s.models import " % app_name + ', '.join([model for model in models]) + '\n']))
viewset_class("User")
viewset_class("Group")
for model in models:
viewset_class(model)
# admin.py
with open('%s/admin.py' % app_name, 'w') as admin_file:
def admin_class(models):
for model in models:
a = "class %sAdmin(admin.ModelAdmin):\n" % model
a += " queryset = %s.objects.all()\n" % model
z = ["'%s'" % x for x in models[model] if models[model][x] != 'ManyToManyField']
if len(z) > 0:
a += " " + "list_display = (" + ', '.join(z) + ',)\n'
admin_file.write('\n')
admin_file.write(a)
admin_file.write('from django.contrib import admin\n')
admin_file.write('from .models import ' + ', '.join([model for model in models]) + '\n')
admin_class(models)
admin_file.write('\n')
for model in models:
admin_file.write("admin.site.register(%(0)s, %(0)sAdmin)\n" % {'0':model})
# urls.py
with open('%s/urls.py' % app_name, 'w') as url_file:
url_file.write('\n'.join(["from django.conf.urls import url, include",
"from rest_framework import routers",
"from %s import views\n" % app_name,
"router = routers.DefaultRouter()",
"router.register(r'users', views.UserViewSet)",
"router.register(r'groups', views.GroupViewSet)\n"]))
for model in models:
plural = 's'
if model.endswith('s'):
plural = 'es'
url_file.write("router.register(r'%(0)s', views.%(1)sViewSet)\n" % {'0':model.lower() + plural, '1':model})
url_file.write('\n')
u = '\n'.join(["urlpatterns = [",
" "*4 + "url(r'^%s/', include(router.urls))," % app_name,
"]"])
url_file.write(u)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Populate serializers, views, urls, and admin based on models.py')
parser.add_argument('--disable_venv', help='Disable creation of virtual environment', action="store_true")
parser.add_argument("--app_name", help='App name on which to perform script', default="{{cookiecutter.app_name}}")
args = parser.parse_args()
write_files(args.app_name)
my_env = os.environ.copy()
if not args.disable_venv:
subprocess.check_call(["virtualenv", "venv"])
my_env["PATH"] = os.getcwd() + '/venv/bin:' + my_env["PATH"]
subprocess.check_call(["pip", "install", "-r", "requirements.txt"], env=my_env)
subprocess.check_call(["python", "manage.py", "makemigrations", "%s" % args.app_name], env=my_env)
subprocess.check_call(["python", "manage.py", "migrate"], env=my_env)
| 45.1875 | 158 | 0.510028 | 658 | 5,784 | 4.357143 | 0.231003 | 0.056505 | 0.03488 | 0.050227 | 0.222881 | 0.164981 | 0.134287 | 0.09557 | 0.09557 | 0.03488 | 0 | 0.004916 | 0.331777 | 5,784 | 127 | 159 | 45.543307 | 0.736869 | 0.009682 | 0 | 0.113402 | 0 | 0 | 0.30706 | 0.087207 | 0.020619 | 0 | 0 | 0 | 0 | 1 | 0.041237 | false | 0 | 0.134021 | 0 | 0.175258 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a50f9f33df8927431578a0e644348f580030945d | 975 | py | Python | BackendAPI/customuser/utils.py | silvioramalho/django-startup-rest-api | b984ee6be27990d29ab3df7cdd446bb63ee3ee34 | [
"MIT"
] | 9 | 2020-05-23T14:42:00.000Z | 2022-03-04T12:21:00.000Z | BackendAPI/customuser/utils.py | silvioramalho/django-startup-rest-api | b984ee6be27990d29ab3df7cdd446bb63ee3ee34 | [
"MIT"
] | 6 | 2020-05-14T21:34:09.000Z | 2021-09-22T19:01:15.000Z | BackendAPI/customuser/utils.py | silvioramalho/django-startup-rest-api | b984ee6be27990d29ab3df7cdd446bb63ee3ee34 | [
"MIT"
] | 1 | 2022-03-04T12:20:52.000Z | 2022-03-04T12:20:52.000Z | from datetime import datetime
from calendar import timegm
from rest_framework_jwt.compat import get_username, get_username_field
from rest_framework_jwt.settings import api_settings
def jwt_otp_payload(user, device = None):
"""
Optionally include the OTP device in the JWT payload.
"""
username_field = get_username_field()
username = get_username(user)
payload = {
'user_id': user.pk,
'username': username,
'exp': datetime.utcnow() + api_settings.JWT_EXPIRATION_DELTA
}
# Include original issued at time for a brand new token,
# to allow token refresh
if api_settings.JWT_ALLOW_REFRESH:
payload['orig_iat'] = timegm(
datetime.utcnow().utctimetuple()
)
if api_settings.JWT_AUDIENCE is not None:
payload['aud'] = api_settings.JWT_AUDIENCE
if api_settings.JWT_ISSUER is not None:
payload['iss'] = api_settings.JWT_ISSUER
return payload | 28.676471 | 70 | 0.683077 | 124 | 975 | 5.129032 | 0.451613 | 0.121069 | 0.132075 | 0.075472 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241026 | 975 | 34 | 71 | 28.676471 | 0.859459 | 0.133333 | 0 | 0 | 0 | 0 | 0.038601 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.190476 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
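# The resulting payload is a plain dict, roughly (values illustrative):
#   {'user_id': 1, 'username': 'alice', 'exp': utcnow plus JWT_EXPIRATION_DELTA,
#    'orig_iat': issued-at timestamp, 'aud': ..., 'iss': ...}
# where 'orig_iat', 'aud' and 'iss' only appear when the corresponding
# JWT settings are enabled.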
a511699f0080c5e2bd9b9726e7f19dac525f2d43 | 3,781 | py | Python | extras/tally_results_ucf24.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null | extras/tally_results_ucf24.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null | extras/tally_results_ucf24.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null |
"""
This script load and tally the result of UCF24 dataset in latex format.
"""
import os
import json
def run_exp(cmd):
return os.system(cmd)
if __name__ == '__main__':
base = '/mnt/mercury-alpha/ucf24/cache/resnet50'
modes = [['frames',['frame_actions', 'action_ness', 'action']],
['video',['action',]]]
logger = open('results_ucf24.tex','w')
for result_mode, label_types in modes:
logger.write('\n\nRESULTS FOR '+result_mode+'\n\n')
atable = 'Model'
for l in label_types: # e.g. ['0.2', '0.5', '0.75', 'Avg-mAP']
atable += ' & ' + l.replace('_','-').capitalize()
atable += '\\\\ \n\\midrule\n'
subsets = ['train']
for net,d in [('C2D',1), ('I3D',1),('RCN',1), ('RCLSTM',1)]:
for seq, bs, tseqs in [(8,4,[8,32])]:
for tseq in tseqs:
if result_mode == 'video':
trims = ['none','indiv']
eval_ths_all = [[20], [50], [75], [a for a in range(50,95,5)]]
else:
trims = ['none']
eval_ths_all = [[50]]
for trim in trims:
if result_mode != 'video':
atable += '{:s}-{:02d} '.format(net, tseq).ljust(15)
else:
atable += '{:s}-{:02d}-{:s} '.format(net, tseq, trim).ljust(20)
for train_subset in subsets:
splitn = train_subset[-1]
for eval_ths in eval_ths_all:
# logger.write(eval_ths)
anums = [[0,0] for _ in label_types]
for eval_th in eval_ths:
if result_mode == 'frames':
result_file = '{:s}{:s}512-Pkinetics-b{:d}s{:d}x1x1-ucf24t{:s}-h3x3x3/frame-ap-results-10-{:02d}-50.json'.format(base, net, bs, seq, splitn, tseq)
else:
result_file = '{:s}{:s}512-Pkinetics-b{:d}s{:d}x1x1-ucf24t{:s}-h3x3x3/tubes-10-{:02d}-80-20-score-25-4/video-ap-results-{:s}-0-{:d}-stiou.json'.format(base, net, bs, seq, splitn, tseq, trim, int(eval_th))
if os.path.isfile(result_file):
with open(result_file, 'r') as f:
results = json.load(f)
else:
results = None
for nlt, label_type in enumerate(label_types):
cc = 0
for subset, pp in [('test','&')]: #,
tag = subset + ' & ' + label_type
if results is not None and tag in results:
num = results[tag]['mAP']
anums[nlt][cc] += num
cc += 1
for nlt, label_type in enumerate(label_types):
cc = 0
for subset, pp in [('test','&')]: #,
num = anums[nlt][cc]/len(eval_ths)
atable += '{:s} {:0.01f} '.format(pp, num)
cc += 1
atable += '\\\\ \n'
logger.write(atable)
| 47.860759 | 244 | 0.370008 | 371 | 3,781 | 3.649596 | 0.358491 | 0.036189 | 0.026588 | 0.025111 | 0.196455 | 0.196455 | 0.196455 | 0.196455 | 0.149188 | 0.149188 | 0 | 0.049947 | 0.496958 | 3,781 | 79 | 245 | 47.860759 | 0.661935 | 0.034118 | 0 | 0.2 | 0 | 0.033333 | 0.135165 | 0.070055 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016667 | false | 0 | 0.033333 | 0.016667 | 0.066667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a5140c77514670cc1d68ea51faa3667ecc7cdf73 | 10,325 | py | Python | src/acquisition/wiki/wiki_download.py | eujing/delphi-epidata | 7281a7525b20e48147049229a9faa0cb97340427 | [
"MIT"
] | 1 | 2021-12-29T04:00:21.000Z | 2021-12-29T04:00:21.000Z | src/acquisition/wiki/wiki_download.py | eujing/delphi-epidata | 7281a7525b20e48147049229a9faa0cb97340427 | [
"MIT"
] | 1 | 2020-07-14T15:35:15.000Z | 2020-07-16T17:56:30.000Z | src/acquisition/wiki/wiki_download.py | eujing/delphi-epidata | 7281a7525b20e48147049229a9faa0cb97340427 | [
"MIT"
] | 4 | 2020-09-17T13:47:02.000Z | 2020-10-27T19:40:11.000Z | """
===============
=== Purpose ===
===============
Downloads wiki access logs and stores unprocessed article counts
See also: wiki.py
Note: for maximum portability, this program is compatible with both Python2 and
Python3 and has no external dependencies (e.g. running on AWS)
=================
=== Changelog ===
=================
2017-02-24 v10
+ compute hmac over returned data
2016-08-14: v9
* use pageviews instead of pagecounts-raw
2015-08-12: v8
* Corrected `Influenzalike_illness` to `Influenza-like_illness`
2015-05-21: v7
* Updated for Python3 and to be directly callable by wiki.py
2015-05-??: v1-v6
* Original versions
"""
# python 2 and 3
from __future__ import print_function
import sys
if sys.version_info.major == 2:
# python 2 libraries
from urllib import urlencode
from urllib2 import urlopen
else:
# python 3 libraries
from urllib.parse import urlencode
from urllib.request import urlopen
# common libraries
import argparse
import datetime
import hashlib
import hmac
import json
import subprocess
import time
import os
from sys import platform
from . import wiki_util
VERSION = 10
MASTER_URL = 'https://delphi.cmu.edu/~automation/public/wiki/master.php'
def text(data_string):
return str(data_string.decode('utf-8'))
def data(text_string):
if sys.version_info.major == 2:
return text_string
else:
return bytes(text_string, 'utf-8')
def get_hmac_sha256(key, msg):
key_bytes, msg_bytes = key.encode('utf-8'), msg.encode('utf-8')
return hmac.new(key_bytes, msg_bytes, hashlib.sha256).hexdigest()
def extract_article_counts(filename, language, articles, debug_mode):
"""
Support multiple languages ('en' | 'es' | 'pt')
Running time is optimized to O(M), meaning the whole file only needs to be scanned once.
:param filename:
:param language: Different languages such as 'en', 'es', and 'pt'
:param articles:
:param debug_mode:
:return:
"""
counts = {}
articles_set = set(map(lambda x: x.lower(), articles))
total = 0
with open(filename, "r", encoding="utf8") as f:
for line in f:
content = line.strip().split()
if len(content) != 4:
print('unexpected article format: {0}'.format(line))
continue
article_title = content[1].lower()
article_count = int(content[2])
if content[0] == language:
total += article_count
if content[0] == language and article_title in articles_set:
if debug_mode:
print("Find article {0}: {1}".format(article_title, line))
counts[article_title] = article_count
if debug_mode:
print("Total number of counts for language {0} is {1}".format(language, total))
counts['total'] = total
return counts
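# Each pageviews line parsed above has four space-separated fields, e.g.
# (counts illustrative): "en Influenza 42 0", i.e. project/language code,
# article title, view count, and a trailing fourth field that this
# script ignores.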
def extract_article_counts_orig(articles, debug_mode):
"""
The original method, which extracts article counts with the shell command grep (English articles only).
As it is difficult to deal with other languages (utf-8 encoding) this way, we chose to read the files in Python instead.
Another issue is that it is slower to go over the whole file again and again; the time complexity is O(NM),
where N is the number of articles and M is the number of lines in the file.
In our new implementation, extract_article_counts(), the time complexity is O(M), and it copes with utf-8 encoding.
:param articles:
:param debug_mode:
:return:
"""
counts = {}
for article in articles:
if debug_mode:
print(' %s' % (article))
out = text(
subprocess.check_output('LC_ALL=C grep -a -i "^en %s " raw2 | cat' % (article.lower()), shell=True)).strip()
count = 0
if len(out) > 0:
for line in out.split('\n'):
fields = line.split()
if len(fields) != 4:
print('unexpected article format: [%s]' % (line))
else:
count += int(fields[2])
# print ' %4d %s'%(count, article)
counts[article.lower()] = count
if debug_mode:
print(' %d' % (count))
print('getting total count...')
out = text(subprocess.check_output(
'cat raw2 | LC_ALL=C grep -a -i "^en " | cut -d" " -f 3 | awk \'{s+=$1} END {printf "%.0f", s}\'', shell=True))
total = int(out)
if debug_mode:
print(total)
counts['total'] = total
return counts
def run(secret, download_limit=None, job_limit=None, sleep_time=1, job_type=0, debug_mode=False):
worker = text(subprocess.check_output("echo `whoami`@`hostname`", shell=True)).strip()
print('this is [%s]'%(worker))
if debug_mode:
print('*** running in debug mode ***')
total_download = 0
passed_jobs = 0
failed_jobs = 0
while (download_limit is None or total_download < download_limit) and (job_limit is None or (passed_jobs + failed_jobs) < job_limit):
try:
time_start = datetime.datetime.now()
req = urlopen(MASTER_URL + '?get=x&type=%s'%(job_type))
code = req.getcode()
if code != 200:
if code == 201:
print('no jobs available')
if download_limit is None and job_limit is None:
time.sleep(60)
continue
else:
print('nothing to do, exiting')
return
else:
raise Exception('server response code (get) was %d'%(code))
# Make the code compatible with mac os system
if platform == "darwin":
job_content = text(req.readlines()[1])
else:
job_content = text(req.readlines()[0])
if job_content == 'no jobs':
print('no jobs available')
if download_limit is None and job_limit is None:
time.sleep(60)
continue
else:
print('nothing to do, exiting')
return
job = json.loads(job_content)
print('received job [%d|%s]'%(job['id'], job['name']))
# updated parsing for pageviews - maybe use a regex in the future
#year, month = int(job['name'][11:15]), int(job['name'][15:17])
year, month = int(job['name'][10:14]), int(job['name'][14:16])
#print 'year=%d | month=%d'%(year, month)
url = 'https://dumps.wikimedia.org/other/pageviews/%d/%d-%02d/%s'%(year, year, month, job['name'])
print('downloading file [%s]...'%(url))
subprocess.check_call('curl -s %s > raw.gz'%(url), shell=True)
print('checking file size...')
# Make the code cross-platfrom, so use python to get the size of the file
# size = int(text(subprocess.check_output('ls -l raw.gz | cut -d" " -f 5', shell=True)))
size = os.stat("raw.gz").st_size
if debug_mode:
print(size)
total_download += size
if job['hash'] != '00000000000000000000000000000000':
print('checking hash...')
out = text(subprocess.check_output('md5sum raw.gz', shell=True))
result = out[0:32]
if result != job['hash']:
raise Exception('wrong hash [expected %s, got %s]'%(job['hash'], result))
if debug_mode:
print(result)
print('decompressing...')
subprocess.check_call('gunzip -f raw.gz', shell=True)
#print 'converting case...'
#subprocess.check_call('cat raw | tr "[:upper:]" "[:lower:]" > raw2', shell=True)
#subprocess.check_call('rm raw', shell=True)
subprocess.check_call('mv raw raw2', shell=True)
print('extracting article counts...')
# Use python to read the file and extract counts, if you want to use the original shell method, please use
counts = {}
for language in wiki_util.Articles.available_languages:
lang2articles = {'en': wiki_util.Articles.en_articles, 'es': wiki_util.Articles.es_articles, 'pt': wiki_util.Articles.pt_articles}
articles = lang2articles[language]
articles = sorted(articles)
if debug_mode:
print("Language is {0} and target articles are {1}".format(language, articles))
temp_counts = extract_article_counts("raw2", language, articles, debug_mode)
counts[language] = temp_counts
if not debug_mode:
print('deleting files...')
subprocess.check_call('rm raw2', shell=True)
print('saving results...')
time_stop = datetime.datetime.now()
result = {
'id': job['id'],
'size': size,
'data': json.dumps(counts),
'worker': worker,
'elapsed': (time_stop - time_start).total_seconds(),
}
payload = json.dumps(result)
hmac_str = get_hmac_sha256(secret, payload)
if debug_mode:
print(' hmac: %s' % hmac_str)
post_data = urlencode({'put': payload, 'hmac': hmac_str})
req = urlopen(MASTER_URL, data=data(post_data))
code = req.getcode()
if code != 200:
raise Exception('server response code (put) was %d'%(code))
print('done! (dl=%d)'%(total_download))
passed_jobs += 1
except Exception as ex:
print('***** Caught Exception: %s *****'%(str(ex)))
failed_jobs += 1
time.sleep(30)
print('passed=%d | failed=%d | total=%d'%(passed_jobs, failed_jobs, passed_jobs + failed_jobs))
time.sleep(sleep_time)
if download_limit is not None and total_download >= download_limit:
print('download limit has been reached [%d >= %d]'%(total_download, download_limit))
if job_limit is not None and (passed_jobs + failed_jobs) >= job_limit:
print('job limit has been reached [%d >= %d]'%(passed_jobs + failed_jobs, job_limit))
def main():
# version info
print('version', VERSION)
# args and usage
parser = argparse.ArgumentParser()
parser.add_argument('secret', type=str, help='hmac secret key')
parser.add_argument('-b', '--blimit', action='store', type=int, default=None, help='download limit, in bytes')
parser.add_argument('-j', '--jlimit', action='store', type=int, default=None, help='job limit')
parser.add_argument('-s', '--sleep', action='store', type=int, default=1, help='seconds to sleep between each job')
parser.add_argument('-t', '--type', action='store', type=int, default=0, help='type of job')
parser.add_argument('-d', '--debug', action='store_const', const=True, default=False, help='enable debug mode')
args = parser.parse_args()
# runtime options
secret, download_limit, job_limit, sleep_time, job_type, debug_mode = args.secret, args.blimit, args.jlimit, args.sleep, args.type, args.debug
# run
run(secret, download_limit, job_limit, sleep_time, job_type, debug_mode)
if __name__ == '__main__':
main()
| 35.97561 | 144 | 0.644262 | 1,435 | 10,325 | 4.520557 | 0.261324 | 0.029135 | 0.02374 | 0.024665 | 0.217204 | 0.120857 | 0.08756 | 0.049946 | 0.049946 | 0.049946 | 0 | 0.022551 | 0.214044 | 10,325 | 286 | 145 | 36.101399 | 0.776833 | 0.214625 | 0 | 0.217617 | 0 | 0.005181 | 0.17822 | 0.003983 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036269 | false | 0.031088 | 0.082902 | 0.005181 | 0.160622 | 0.176166 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a51500997a9858263339a466d1f8dbc998e0d17e | 851 | py | Python | app/main.py | BloomTech-Labs/family-promise-service-tracker-ds-a | 1190c1f5fce55ec265a8a42b968aa1f8aea52ac6 | [
"MIT"
] | 4 | 2021-08-02T20:45:37.000Z | 2021-09-03T19:42:55.000Z | app/main.py | BloomTech-Labs/family-promise-service-tracker-ds-a | 1190c1f5fce55ec265a8a42b968aa1f8aea52ac6 | [
"MIT"
] | 22 | 2021-06-02T19:28:21.000Z | 2021-09-15T15:09:48.000Z | app/main.py | Lambda-School-Labs/family-promise-service-tracker-ds-a | 1190c1f5fce55ec265a8a42b968aa1f8aea52ac6 | [
"MIT"
] | 12 | 2021-06-02T09:30:14.000Z | 2021-09-27T20:21:00.000Z | from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from app import db, viz, eligibility, metrics, geocode
app = FastAPI(
title='DS API - Family Promise',
docs_url='/',
version='0.39.6',
)
app.include_router(db.router, tags=['Database'])
app.include_router(viz.router, tags=['Visualizations'])
app.include_router(eligibility.router, tags=['Eligibility'])
app.include_router(metrics.router, tags=['Metrics'])
app.include_router(geocode.router, tags=['Geocode'])
app.add_middleware(
CORSMiddleware,
allow_origins=['*'],
allow_credentials=True,
allow_methods=['*'],
allow_headers=['*'],
)
if __name__ == '__main__':
""" To run this API locally use the following commands
cd family-promise-service-tracker-ds-a
python -m app.main
"""
import uvicorn
uvicorn.run(app)
| 25.029412 | 60 | 0.703878 | 107 | 851 | 5.420561 | 0.495327 | 0.086207 | 0.137931 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00554 | 0.151586 | 851 | 33 | 61 | 25.787879 | 0.797784 | 0 | 0 | 0 | 0 | 0 | 0.121715 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.173913 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a515480d246a94b3a03edbf99bee0fcd4f9d88f2 | 2,033 | py | Python | scripts/make_binarized_masks.py | GUIEEN/acres | ed2ab521bbec866d1588216abfce8532ce28a5e9 | [
"MIT"
] | 10 | 2019-01-03T15:31:03.000Z | 2020-03-19T06:14:50.000Z | scripts/make_binarized_masks.py | GUIEEN/acres | ed2ab521bbec866d1588216abfce8532ce28a5e9 | [
"MIT"
] | 3 | 2020-05-11T12:13:15.000Z | 2021-01-03T02:13:10.000Z | scripts/make_binarized_masks.py | GUIEEN/acres | ed2ab521bbec866d1588216abfce8532ce28a5e9 | [
"MIT"
] | 3 | 2020-07-26T12:58:18.000Z | 2021-04-09T10:56:23.000Z | """
Take a directory of images and their segmentation masks (which only contain two classes - inside and outside)
and split the inside class into black and white. Save the resulting masks.
"""
import argparse
import os
import numpy as np
import cv2
def show(img):
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
cv2.imshow("image", img)
cv2.waitKey(0)
# cv2.destroyAllWindows()
def is_vertical(img):
# Are the _bars_ vertical?
horiz = np.array([[-1, -1, 4, -1, -1]])
vert = np.array([[-1], [-1], [4], [-1], [-1]])
hc = cv2.filter2D(img, -1, horiz) # Convolve
vc = cv2.filter2D(img, -1, vert) # Convolve
res = np.mean(hc) > np.mean(vc)
print(res, np.mean(hc) - np.mean(vc))
return res
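# The horizontal kernel responds to intensity changes along a row (vertical
# edges/bars) and the vertical kernel to changes down a column, so a larger
# mean horizontal response is taken to mean the bars run vertically.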
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--images-dir", type=str, help="Path to images")
parser.add_argument("--masks-dir", type=str, help="Path to masks")
parser.add_argument("--output-dir", type=str, help="Where to store output")
args = parser.parse_args()
names = [x[:-len(".png")] for x in os.listdir(args.masks_dir)]
cv2.namedWindow("image", cv2.WINDOW_NORMAL)
for name in names:
image = cv2.imread(os.path.join(args.images_dir, name + ".jpg"), cv2.IMREAD_GRAYSCALE)
mask = cv2.imread(os.path.join(args.masks_dir, name + ".png"), cv2.IMREAD_GRAYSCALE)
_, mask_bw = cv2.threshold(mask, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
image_binary = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY, 21, 1)
image_rgb = cv2.cvtColor(image_binary, cv2.COLOR_GRAY2RGB)
image_rgb[:, :, 0] = 255 # cv2.imwrite treats the array as BGR, so channel 0 renders as blue.
image_rgb = image_rgb * np.expand_dims(mask_bw != 0, 2)
if is_vertical(image_rgb):
# Filter out barcodes with the incorrect orientation
cv2.imwrite(os.path.join(args.output_dir, name + ".png"), image_rgb)
if __name__ == '__main__':
main()
| 34.457627 | 109 | 0.638465 | 292 | 2,033 | 4.297945 | 0.407534 | 0.038247 | 0.040637 | 0.033466 | 0.172112 | 0.172112 | 0.049402 | 0 | 0 | 0 | 0 | 0.035849 | 0.217905 | 2,033 | 58 | 110 | 35.051724 | 0.753459 | 0.160354 | 0 | 0.054054 | 0 | 0 | 0.072019 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.108108 | 0 | 0.216216 | 0.027027 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a51637bfb757b0164488ef03723963eb419dd5ee | 6,293 | py | Python | initialize.py | WiseDoge/Text-Classification-PyTorch | 9371eeed6bd7ecf1d529c8f2a6c997fcde67a559 | [
"MIT"
] | 6 | 2019-08-04T13:24:24.000Z | 2020-09-28T12:12:21.000Z | initialize.py | WiseDoge/Text-Classification-PyTorch | 9371eeed6bd7ecf1d529c8f2a6c997fcde67a559 | [
"MIT"
] | null | null | null | initialize.py | WiseDoge/Text-Classification-PyTorch | 9371eeed6bd7ecf1d529c8f2a6c997fcde67a559 | [
"MIT"
] | null | null | null | from typing import List, Dict
from collections import defaultdict
from pathlib import Path
from util import save_dataset, save_word_dict, save_embedding
import torch
import argparse
import nltk
import re
import logging
logging.basicConfig(level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
class Corpus(object):
def __init__(self, input_dir):
train_neg_dir = f'{input_dir}/train/neg'
train_pos_dir = f'{input_dir}/train/pos'
test_neg_dir = f'{input_dir}/test/neg'
test_pos_dir = f'{input_dir}/test/pos'
self.train_neg_tokens = self.load_data(train_neg_dir)
self.train_pos_tokens = self.load_data(train_pos_dir)
self.test_neg_tokens = self.load_data(test_neg_dir)
self.test_pos_tokens = self.load_data(test_pos_dir)
@staticmethod
def load_data(dir):
total_tokens = []
filenames = Path(dir).glob('*.txt')
for filename in filenames:
with open(filename, 'r', encoding='utf-8') as f:
tokens = Corpus.tokenize(f.read())
total_tokens.append(tokens)
return total_tokens
@staticmethod
def tokenize(sent):
sent = sent.lower().strip()
sent = re.sub(r"<br />", r" ", sent)
tokens = nltk.word_tokenize(sent)
return tokens
def stat_word_freq(c:Corpus):
"""Count the frequency of every word."""
freq_dict = defaultdict(int)
for data in (c.train_neg_tokens, c.train_pos_tokens, c.test_neg_tokens, c.test_pos_tokens):
for tokens in data:
for token in tokens:
freq_dict[token] += 1
return freq_dict
def add_to_vocab(word, word_dict_ref):
"""Add a word to word dict."""
if word not in word_dict_ref:
word_dict_ref[word] = len(word_dict_ref)
def build_vocab(freq_dict:Dict[str, int], max_size:int):
"""Build word dict based on the frequency of every word."""
word_dict = {'[PAD]': 0, '[UNK]': 1}
sorted_items = sorted(freq_dict.items(), key=lambda t: t[1], reverse=True)[
:max_size]
for word, _ in sorted_items:
add_to_vocab(word, word_dict)
return word_dict
@torch.jit.script
def convert_tokens_to_ids(datas: List[List[str]], word_dict: Dict[str, int], cls: int, max_seq_len: int):
"""Use @torch.jit.script to speed up."""
total = len(datas)
token_ids = torch.full((total, max_seq_len),
word_dict['[PAD]'], dtype=torch.long)
labels = torch.full((total,), cls, dtype=torch.long)
for i in range(total):
seq_len = len(datas[i])
for j in range(min(seq_len, max_seq_len)):
token_ids[i, j] = word_dict.get(datas[i][j], word_dict['[UNK]'])
return token_ids, labels
def create_dataset(neg, pos, word_dict, max_seq_len):
neg_tokens, neg_labels = convert_tokens_to_ids(
neg, word_dict, 0, max_seq_len)
pos_tokens, pos_labels = convert_tokens_to_ids(
pos, word_dict, 1, max_seq_len)
tokens = torch.cat([neg_tokens, pos_tokens], 0)
labels = torch.cat([neg_labels, pos_labels], 0)
return tokens, labels
def load_pretrained_glove(path, freq_dict, max_size):
word_dict = {'[PAD]': 0, '[UNK]': 1}
embedding = []
sorted_items = sorted(freq_dict.items(), key=lambda t: t[1], reverse=True)[:max_size]
freq_word_set = {word for word, _ in sorted_items}
with open(path, 'r', encoding='utf-8') as f:
vecs = f.readlines()
for line in vecs:
line = line.strip().split()
word, *vec = line
if word in freq_word_set:
add_to_vocab(word, word_dict)
vec = [float(num) for num in vec]
embedding.append(vec)
embedding = torch.tensor(embedding, dtype=torch.float)
embedding_dim = embedding.size(1)
pad = torch.randn(1, embedding_dim)
unk = torch.randn(1, embedding_dim)
embedding = torch.cat([pad, unk, embedding], 0)
return word_dict, embedding
if __name__ == "__main__":
nltk.download('punkt')
logger = logging.getLogger(__name__)
parser = argparse.ArgumentParser()
parser.add_argument("-i", "--input_dir", type=str, default='aclImdb', help='Folder of original dataset.')
parser.add_argument("-o", "--output_dir", type=str, default='data',
help='Folder to save the tensor format of dataset.')
parser.add_argument("--max_seq_len", type=int, default=256, help='Max sequence length.')
parser.add_argument("--max_vocab_size", type=int, default=30000, help='Max vocab size.')
parser.add_argument("--glove_path", type=str, default=None, help='Pre-trained word embedding path.')
args = parser.parse_args()
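# Example invocation (the paths and GloVe file name are placeholders):
#   python initialize.py -i aclImdb -o data --glove_path glove.6B.100d.txt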
logger.info(
f"[input]: {args.input_dir} [output]: {args.output_dir} [max seq len]: {args.max_seq_len} [max vocab size]: {args.max_vocab_size}")
logger.info("Loading and tokenizing...")
c = Corpus(args.input_dir)
logger.info("Counting word frequency...")
freq_dict = stat_word_freq(c)
logger.info(f"Total number of words: {len(freq_dict)}")
logger.info("Building vocab...")
if args.glove_path is not None:
glove_path = Path(args.glove_path)
word_dict, embedding = load_pretrained_glove(glove_path, freq_dict, args.max_vocab_size)
logger.info(f"Embedding dim: {embedding.shape[1]}")
else:
word_dict = build_vocab(freq_dict, args.max_vocab_size)
logger.info(f"Vocab size: {len(word_dict)}")
logger.info("Creating train dataset...")
train_tokens, train_labels = create_dataset(
c.train_neg_tokens, c.train_pos_tokens, word_dict, args.max_seq_len)
logger.info("Creating test dataset...")
test_tokens, test_labels = create_dataset(
c.test_neg_tokens, c.test_pos_tokens, word_dict, args.max_seq_len)
saved_dir = Path(args.output_dir)
saved_dir.mkdir(parents=True, exist_ok=True)
logger.info("Saving dataset and word dict[and embedding]...")
save_word_dict(word_dict, saved_dir)
save_dataset(train_tokens, train_labels, saved_dir, 'train')
save_dataset(test_tokens, test_labels, saved_dir, 'test')
if args.glove_path is not None:
save_embedding(embedding, saved_dir)
logger.info("All done!")
| 37.458333 | 139 | 0.658033 | 914 | 6,293 | 4.272429 | 0.198031 | 0.057362 | 0.025352 | 0.012292 | 0.262228 | 0.141357 | 0.101408 | 0.089117 | 0.048656 | 0.03073 | 0 | 0.005264 | 0.21516 | 6,293 | 167 | 140 | 37.682635 | 0.785382 | 0.023518 | 0 | 0.06015 | 0 | 0.007519 | 0.135206 | 0.010287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067669 | false | 0 | 0.067669 | 0 | 0.195489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a517e42fabf9ac9063a6459d9056fdf167f23154 | 9,422 | py | Python | src/BERT_NER/utils_preprocess/anntoconll.py | philippeitis/StackOverflowNER | 2a1efd8d88356de4a04e510a5ccfe85992fc8d8c | [
"MIT"
] | null | null | null | src/BERT_NER/utils_preprocess/anntoconll.py | philippeitis/StackOverflowNER | 2a1efd8d88356de4a04e510a5ccfe85992fc8d8c | [
"MIT"
] | null | null | null | src/BERT_NER/utils_preprocess/anntoconll.py | philippeitis/StackOverflowNER | 2a1efd8d88356de4a04e510a5ccfe85992fc8d8c | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Convert text and standoff annotations into CoNLL format.
import re
import sys
from pathlib import Path
# assume script in brat tools/ directory, extend path to find sentencesplit.py
sys.path.append(str(Path(__file__).parent))
sys.path.append('.')
from sentencesplit import sentencebreaks_to_newlines
EMPTY_LINE_RE = re.compile(r'^\s*$')
CONLL_LINE_RE = re.compile(r'^\S+\t\d+\t\d+.')
# Needed by the sentence-splitting path in conll_from_path(); this definition
# is assumed to match the one used in the upstream brat tools.
NEWLINE_TERM_REGEX = re.compile(r'(.*?\n)')
import stokenizer # JT: Dec 6
from map_text_to_char import map_text_to_char # JT: Dec 6
NO_SPLIT = True
SINGLE_CLASS = None
ANN_SUFFIX = ".ann"
OUT_SUFFIX = "conll"
VERBOSE = False
def argparser():
import argparse
ap = argparse.ArgumentParser(
description='Convert text and standoff annotations into CoNLL format.'
)
ap.add_argument('-a', '--annsuffix', default=ANN_SUFFIX,
help='Standoff annotation file suffix (default "ann")')
ap.add_argument('-c', '--singleclass', default=SINGLE_CLASS,
help='Use given single class for annotations')
ap.add_argument('-n', '--nosplit', default=NO_SPLIT, action='store_true',
help='No sentence splitting')
ap.add_argument('-o', '--outsuffix', default=OUT_SUFFIX,
help='Suffix to add to output files (default "conll")')
ap.add_argument('-v', '--verbose', default=VERBOSE, action='store_true',
help='Verbose output')
# ap.add_argument('text', metavar='TEXT', nargs='+',
# help='Text files ("-" for STDIN)')
return ap
def init_globals():
global NO_SPLIT, SINGLE_CLASS, ANN_SUFFIX, OUT_SUFFIX, VERBOSE
ap = argparser()
args = ap.parse_args(sys.argv[1:])
NO_SPLIT = args.nosplit
SINGLE_CLASS = args.singleclass
ANN_SUFFIX = args.annsuffix
OUT_SUFFIX = args.outsuffix
VERBOSE = args.verbose
def read_sentence(f):
"""Return lines for one sentence from the CoNLL-formatted file.
Sentences are delimited by empty lines.
"""
lines = []
for l in f:
lines.append(l)
if EMPTY_LINE_RE.match(l):
break
if not CONLL_LINE_RE.search(l):
raise ValueError(
'Line not in CoNLL format: "%s"' %
l.rstrip('\n'))
return lines
def strip_labels(lines):
"""Given CoNLL-format lines, strip the label (first TAB-separated field)
from each non-empty line.
Return list of labels and list of lines without labels. Returned
list of labels contains None for each empty line in the input.
"""
labels, stripped = [], []
for l in lines:
if EMPTY_LINE_RE.match(l):
labels.append(None)
stripped.append(l)
else:
fields = l.split('\t')
labels.append(fields[0])
stripped.append('\t'.join(fields[1:]))
return labels, stripped
def attach_labels(labels, lines):
"""Given a list of labels and CoNLL-format lines, affix TAB-separated label
to each non-empty line.
Returns list of lines with attached labels.
"""
assert len(labels) == len(
lines), "Number of labels (%d) does not match number of lines (%d)" % (len(labels), len(lines))
attached = []
for label, line in zip(labels, lines):
empty = EMPTY_LINE_RE.match(line)
assert (label is None and empty) or (label is not None and not empty)
if empty:
attached.append(line)
else:
attached.append('%s\t%s' % (label, line))
return attached
def conll_from_path(path):
"""Convert plain text into CoNLL format."""
lines = path.read_text().splitlines()
if NO_SPLIT:
sentences = lines
else:
sentences = []
for line in lines:
line = sentencebreaks_to_newlines(line)
sentences.extend([s for s in NEWLINE_TERM_REGEX.split(line) if s])
if ANN_SUFFIX:
annotations = get_annotations(path)
else:
annotations = None
return conll_from_sentences(sentences, annotations)
def conll_from_sentences(sentences, annotations=None):
"""Convert plain text into CoNLL format."""
lines = []
offset = 0
# print(sentences)
# JT: Feb 19: added it for resolving char encoding issues
fixed_sentences = []
for s in sentences:
# print(s)
# fixed_s = ftfy.fix_text(s)
# # print(fixed_s)
# fixed_sentences.append(fixed_s)
fixed_sentences.append(s)
# for s in sentences:
for s in fixed_sentences:
tokens = stokenizer.tokenize(s)
# Possibly apply timeout?
# try:
# tokens = stokenizer.tokenize(s)
# except stokenizer.TimedOutExc as e:
# try:
# print("***********using ark tokenizer")
# tokens = ark_twokenize.tokenizeRawTweetText(s)
# except Exception as e:
# print(e)
token_w_pos = map_text_to_char(s, tokens, offset)
for t, pos in token_w_pos:
if not t.isspace():
lines.append(('O', pos, pos + len(t), t))
lines.append(tuple())
offset += len(s)
# add labels (other than 'O') from standoff annotation if specified
if annotations:
lines = relabel(lines, annotations)
# lines = [[l[0], str(l[1]), str(l[2]), l[3]] if l else l for l in lines] #JT: Dec 6
return [(line[3], line[0]) if line else line for line in lines]
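# relabel() overlays the standoff annotation labels onto the 'O'-tagged token
# lines, emitting 'B-<type>' for a token that starts an annotated span and
# 'I-<type>' when the most recently labeled token in the sentence carried the
# same label.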
def relabel(lines, annotations):
# TODO: this could be done more neatly/efficiently
offset_label = {}
for tb in annotations:
for i in range(tb.start, tb.end):
if i in offset_label:
print("Warning: overlapping annotations in ", file=sys.stderr)
offset_label[i] = tb
prev_label = None
for i, l in enumerate(lines):
if not l:
prev_label = None
continue
tag, start, end, token = l
# TODO: warn for multiple, detailed info for non-initial
label = None
for o in range(start, end):
if o in offset_label:
if o != start:
print('Warning: annotation-token boundary mismatch: "%s" --- "%s"' % (
token, offset_label[o].text), file=sys.stderr)
label = offset_label[o].type
break
if label is not None:
if label == prev_label:
tag = 'I-' + label
else:
tag = 'B-' + label
prev_label = label
lines[i] = [tag, start, end, token]
# optional single-classing
if SINGLE_CLASS:
for l in lines:
if l and l[0] != 'O':
l[0] = l[0][:2] + SINGLE_CLASS
return lines
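# process_files() writes one "<stem>_<outsuffix>_<phase>.txt" file per input
# text file, containing one "token<TAB>tag" pair per line with blank lines
# between sentences.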
def process_files(files, output_directory, phase_name=""):
suffix = OUT_SUFFIX.replace(".", "") + "_" + phase_name.replace("/", "")
for path in files:
try:
lines = '\n'.join(
'\t'.join(line) for line in
conll_from_path(path)
)
except Exception as e:
print(e)
continue
# TODO: better error handling
if lines is None:
print(f"file at {path} could not be tokenized")
continue
file_name = output_directory / Path(f"{path.stem}_{suffix}.txt")
file_name.write_text(lines)
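# brat standoff textbound lines have the form "T<id><TAB><type> <start> <end><TAB><text>";
# the regex below matches the "T<id>" prefix used by parse_textbounds().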
TEXTBOUND_LINE_RE = re.compile(r'^T\d+\t')
def parse_textbounds(f):
"""Parse textbound annotations in input, returning a list of Textbound."""
from .format_markdown import Annotation
textbounds = []
for line in f:
line = line.rstrip('\n')
if not TEXTBOUND_LINE_RE.search(line):
continue
id_, type_offsets, text = line.split('\t')
type_, start, end = type_offsets.split()
start, end = int(start), int(end)
textbounds.append(Annotation(None, type_, start, end, text))
return textbounds
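# When two textbound annotations overlap, the shorter span is dropped (reported
# on stderr) and the longer one is kept.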
def eliminate_overlaps(textbounds):
eliminate = {}
# TODO: avoid O(n^2) overlap check
for t1 in textbounds:
for t2 in textbounds:
if t1 is t2:
continue
if t2.start >= t1.end or t2.end <= t1.start:
continue
# eliminate shorter
if t1.end - t1.start > t2.end - t2.start:
print("Eliminate %s due to overlap with %s" % (
t2, t1), file=sys.stderr)
eliminate[t2] = True
else:
print("Eliminate %s due to overlap with %s" % (
t1, t2), file=sys.stderr)
eliminate[t1] = True
return [t for t in textbounds if t not in eliminate]
def get_annotations(path: Path):
path = path.with_suffix(ANN_SUFFIX)
textbounds = parse_textbounds(path.read_text().splitlines())
return eliminate_overlaps(textbounds)
def convert_standoff_to_conll(source_directory_ann, output_directory_conll):
init_globals()
files = [f for f in source_directory_ann.iterdir() if f.suffix == ".txt" and f.is_file()]
process_files(files, output_directory_conll)
if __name__ == '__main__':
    # Path(__file__) points at this script itself; joining '..' onto a file path
    # does not traverse correctly on disk, so resolve relative to the script's
    # directory instead (equivalent to the lexical meaning of the original paths).
    own_path = Path(__file__).parent
    source_directory_ann = own_path / "temp_files/standoff_files"
    output_directory_conll = own_path / "temp_files/conll_files"
convert_standoff_to_conll(source_directory_ann, output_directory_conll)
| 29.628931 | 103 | 0.594884 | 1,209 | 9,422 | 4.495451 | 0.214227 | 0.008832 | 0.014351 | 0.00828 | 0.136891 | 0.086845 | 0.064765 | 0.051518 | 0.022079 | 0.022079 | 0 | 0.005723 | 0.295266 | 9,422 | 317 | 104 | 29.722397 | 0.812801 | 0.178306 | 0 | 0.131313 | 0 | 0 | 0.098298 | 0.01034 | 0 | 0 | 0 | 0.003155 | 0.010101 | 1 | 0.065657 | false | 0 | 0.040404 | 0 | 0.156566 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a51b1c0fe48c6f6a0c51d7e391fb9492bd28a19a | 5,509 | py | Python | EfficientObjectDetection/dataset/dataloader_ete.py | JunHyungKang/SAROD_ICIP | 71585951f64dc1cc22ed72900eff81f747edec77 | [
"MIT"
] | null | null | null | EfficientObjectDetection/dataset/dataloader_ete.py | JunHyungKang/SAROD_ICIP | 71585951f64dc1cc22ed72900eff81f747edec77 | [
"MIT"
] | null | null | null | EfficientObjectDetection/dataset/dataloader_ete.py | JunHyungKang/SAROD_ICIP | 71585951f64dc1cc22ed72900eff81f747edec77 | [
"MIT"
] | 1 | 2020-12-27T05:24:19.000Z | 2020-12-27T05:24:19.000Z | import pandas as pd
import numpy as np
import warnings
import os
from torch.utils.data.dataset import Dataset
from PIL import Image
Image.MAX_IMAGE_PIXELS = None
warnings.simplefilter('ignore', Image.DecompressionBombWarning)
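# Dataset that pairs each source image with the fine / coarse detector
# evaluation results of its four patches; __getitem__ returns the transformed
# source image and a dict of per-patch metrics (precision, recall, AP, loss,
# object counts, stats).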
class CustomDatasetFromImages(Dataset):
def __init__(self, fine_data, coarse_data, transform):
"""
Args:
fine_data (list): list of fine evaluation results [source img name, patch path, precision, recall,
average precision, mean of loss, counts of object]
coarse_data (list): list of coarse evaluation results [source img name, patch path, precision, recall,
average precision, mean of loss, counts of object]
transform: pytorch transforms for transforms and tensor conversion
"""
# Transforms
self.transforms = transform
# [source img name, patch path, precision, recall, average precision, mean of loss, counts of object]
self.fine_data = fine_data
self.fine_data.sort(key=lambda element: element[1])
self.coarse_data = coarse_data
self.coarse_data.sort(key=lambda element: element[1])
# Calculate len
        if len(self.fine_data) == len(self.coarse_data):
            self.data_len = len(self.fine_data) / 4
        else:
            raise ValueError("fine_data and coarse_data must have the same length")
# Second column is the image paths
# self.image_arr = np.asarray(data_info.iloc[:, 1])
# First column is the image IDs
# self.label_arr = np.asarray(data_info.iloc[:, 0])
def __getitem__(self, index):
index = index * 4
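        # fine_data/coarse_data were sorted by patch path, so every source
        # image owns a consecutive block of four patch records.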
source_path = os.sep.join(self.fine_data[index][1].split(os.sep)[:-4])
source_path = os.path.join(source_path, self.fine_data[index][1].split(os.sep)[-3], 'images', self.fine_data[index][0]) + '.jpg'
# print('\ncomplete_source_path', source_path)
img_as_img = Image.open(source_path)
# Transform the image
img_as_tensor = self.transforms(img_as_img)
# Get label(class) of the image based on the cropped pandas column
# single_image_label = self.label_arr[index]
f_p, c_p = [], []
f_r, c_r = [], []
f_ap, c_ap = [], []
f_loss, c_loss = [], []
f_ob, c_ob = [], []
f_stats, c_stats = [], []
target_dict = dict()
for i in range(4):
f_p.append(self.fine_data[index+i][2])
c_p.append(self.coarse_data[index + i][2])
f_r.append(self.fine_data[index + i][3])
c_r.append(self.coarse_data[index + i][3])
f_ap.append(self.fine_data[index + i][4])
c_ap.append(self.coarse_data[index + i][4])
f_loss.append(self.fine_data[index + i][5])
c_loss.append(self.coarse_data[index + i][5])
f_ob.append(self.fine_data[index + i][6])
c_ob.append(self.coarse_data[index + i][6])
f_stats.append(self.fine_data[index+i][7])
c_stats.append(self.coarse_data[index+i][7])
target_dict['f_p'] = f_p
target_dict['c_p'] = c_p
target_dict['f_r'] = f_r
target_dict['c_r'] = c_r
target_dict['f_ap'] = f_ap
target_dict['c_ap'] = c_ap
target_dict['f_loss'] = f_loss
target_dict['c_loss'] = c_loss
target_dict['f_ob'] = f_ob
target_dict['c_ob'] = c_ob
target_dict['f_stats'] = f_stats
target_dict['c_stats'] = c_stats
return img_as_tensor, target_dict
def __len__(self):
return int(self.data_len)
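# Evaluation dataset: iterates over all images in a directory and pairs each
# transformed image with the path of its label file (images/ -> labels/,
# .jpg -> .txt).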
class CustomDatasetFromImages_test(Dataset):
def __init__(self, img_path, transform):
# Transforms
self.transforms = transform
# img list
self.img_path = img_path
self.img_list = os.listdir(img_path)
def __len__(self):
return len(self.img_list)
def __getitem__(self, index):
img_as_img = Image.open(os.path.join(self.img_path, self.img_list[index]))
# Transform the image
img_as_tensor = self.transforms(img_as_img)
# Get label
label_path = os.path.join(self.img_path.replace('images', 'labels'), self.img_list[index].replace('.jpg', '.txt'))
return img_as_tensor, label_path
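# Timing-test dataset: image paths and IDs are read from a CSV file
# (column 0 = image ID, column 1 = image path).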
class CustomDatasetFromImages_timetest(Dataset):
def __init__(self, csv_path, transform):
"""
Args:
csv_path (string): path to csv file
img_path (string): path to the folder where images are
transform: pytorch transforms for transforms and tensor conversion
"""
# Transforms
self.transforms = transform
# Read the csv file
data_info = pd.read_csv(csv_path, header=None)
# Second column is the image paths
self.image_arr = np.asarray(data_info.iloc[:, 1])
# First column is the image IDs
self.label_arr = np.asarray(data_info.iloc[:, 0])
# Calculate len
self.data_len = len(data_info)
def __getitem__(self, index):
# Get image name from the pandas df
single_image_name = self.image_arr[index] + '.jpg'
# Open image
img_as_img = Image.open(single_image_name.replace('/media/data2/dataset', '/home'))
# Transform the image
img_as_tensor = self.transforms(img_as_img)
# Get label(class) of the image based on the cropped pandas column
single_image_label = self.label_arr[index]
return (img_as_tensor, single_image_name)
def __len__(self):
return self.data_len | 34.217391 | 136 | 0.622254 | 769 | 5,509 | 4.191157 | 0.16645 | 0.039715 | 0.052125 | 0.047471 | 0.48247 | 0.457648 | 0.351536 | 0.331679 | 0.314303 | 0.314303 | 0 | 0.006686 | 0.267018 | 5,509 | 161 | 137 | 34.217391 | 0.791481 | 0.239608 | 0 | 0.141176 | 0 | 0 | 0.029339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105882 | false | 0 | 0.070588 | 0.035294 | 0.282353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a51c924df2a2764415545da28cdbfc63cde5829d | 6,236 | py | Python | tests/test.py | angstwad/datemike | da84df48022c4067a28182e9bb53434c8f5010bc | [
"Apache-2.0"
] | 8 | 2015-01-30T01:42:02.000Z | 2019-04-05T10:50:42.000Z | tests/test.py | angstwad/datemike | da84df48022c4067a28182e9bb53434c8f5010bc | [
"Apache-2.0"
] | null | null | null | tests/test.py | angstwad/datemike | da84df48022c4067a28182e9bb53434c8f5010bc | [
"Apache-2.0"
] | 3 | 2016-06-27T11:23:38.000Z | 2018-03-05T21:37:39.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from collections import OrderedDict
import unittest
import yaml
from datemike import ansible, base, utils
from datemike.providers import rackspace
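# Expected YAML strings and ordered-dict fixtures that the serialization tests
# below compare against.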
desired_task_yaml = """name: Create Cloud Server(s)
rax:
exact_count: true
flavor: performance1-1
image: image-ubuntu-1204
name: servername
"""
desired_play_yaml = """name: TestPlay
tasks:
- name: Create Cloud Server(s)
rax:
exact_count: true
flavor: performance1-1
image: image-ubuntu-1204
name: servername
"""
desired_playbook_yaml = """- name: TestPlay
tasks:
- name: Create Cloud Server(s)
rax:
exact_count: true
flavor: performance1-1
image: image-ubuntu-1204
name: servername
"""
desired_task_obj = OrderedDict(
[
('name', 'Create Cloud Server(s)'),
(
'rax', {
'image': 'image-ubuntu-1204',
'name': 'servername',
'flavor': 'performance1-1',
'exact_count': True
}
)
]
)
class TestAnsible(unittest.TestCase):
def setUp(self):
self.server = rackspace.CloudServer(
'servername', 'performance1-1', 'image-ubuntu-1204'
)
self.task = ansible.Task(self.server)
def tearDown(self):
pass
def setup_play(self):
play = ansible.Play('TestPlay')
play.add_task(self.task)
return play
def setup_playbook(self):
play = self.setup_play()
book = ansible.Playbook()
return play, book
def test_task_localaction(self):
task = ansible.Task(self.server, local_action=True)
yamlobj = yaml.load(task.to_yaml())
self.assertIn('local_action', yamlobj.keys(),
'local_action not in the parsed YAML object!')
def test_task_localaction_module(self):
task = ansible.Task(self.server, local_action=True)
yamlobj = yaml.load(task.to_yaml())
module = yamlobj.get('local_action')
self.assertEqual(module.get('module'), 'rax',
'value of module not rax')
self.assertEqual(module.get('image'), 'image-ubuntu-1204',
'value of image not image-ubuntu-1204')
self.assertEqual(module.get('flavor'), 'performance1-1',
'value of flavor not performance1-1')
def test_task(self):
yamlobj = yaml.load(self.task.to_yaml())
self.assertNotIn('local_action', yamlobj.keys())
def test_task_module(self):
module = yaml.load(self.task.to_yaml())
self.assertIn('rax', module.keys())
rax = module.get('rax')
self.assertEqual(rax.get('image'), 'image-ubuntu-1204',
'value of image not image-ubuntu-1204')
self.assertEqual(rax.get('flavor'), 'performance1-1',
'value of flavor not performance1-1')
def test_task_to_yaml(self):
task_yaml = self.task.to_yaml()
self.assertEqual(desired_task_yaml, task_yaml,
'Task YAML and expected YAML are not equal.')
def test_task_as_str(self):
task_yaml = self.task.to_yaml()
self.assertEqual(desired_task_yaml, str(task_yaml))
def test_task_as_obj(self):
task_obj = self.task.as_obj()
self.assertEqual(desired_task_obj, task_obj,
'Task object and expected object are not the equal.')
def test_play(self):
play = ansible.Play('TestPlay')
self.assertEqual(play.play.get('name'), 'TestPlay',
'Play name not equal to TestPlay')
def test_play_add_task(self):
play = self.setup_play()
self.assertEqual(play.play.get('tasks')[0], self.task.as_obj(),
'Task not at expected index in play object')
def test_play_add_tasks(self):
play = self.setup_play()
task = ansible.Task(self.server, local_action=True)
play.add_task([self.task, task])
self.assertEqual(play.play.get('tasks', [])[0], self.task.as_obj(),
'Play task index 0 does not match self.task')
self.assertNotEqual(play.play.get('tasks')[1], task.as_obj(),
                            'Play task index 1 shouldn\'t match local task')
def test_play_add_host(self):
play = ansible.Play('TestPlay')
play.add_host('testhost')
self.assertIn('testhost', play.as_obj().get('hosts'),
'testhosts not in play hosts')
def test_play_add_role(self):
play = ansible.Play('TestPlay')
play.add_role('testrole')
self.assertIn('testrole', play.as_obj().get('roles'),
'testrole not in play roles')
def test_play_yaml(self):
play = self.setup_play()
self.assertEqual(desired_play_yaml, play.to_yaml(),
'Play YAML does not equal expected YAML')
def test_play_as_str(self):
play = self.setup_play()
self.assertEqual(desired_play_yaml, str(play),
'Play YAML does not equal expected YAML')
def test_playbook_add_play(self):
play, book = self.setup_playbook()
book.add_play(play)
        self.assertEqual(book.playbook[0], play.as_obj(),
'play does not equal playbook play at index 0')
def test_playbook_add_plays(self):
play, book = self.setup_playbook()
play2 = self.setup_play()
book.add_play([play, play2])
self.assertEqual(len(book.playbook), 2,
'length of plays in playbook is not equal to 2')
def test_playbook_yaml(self):
play, book = self.setup_playbook()
book.add_play(play)
self.assertEqual(desired_playbook_yaml, book.to_yaml(),
'Playbook YAML output does not match intended YAML')
def test_playbook_as_str(self):
play, book = self.setup_playbook()
book.add_play(play)
self.assertEqual(desired_playbook_yaml, str(book),
'Playbook YAML output does not match intended YAML')
def main():
    # TestAnsible.run() cannot be called on the class itself; use unittest's
    # test runner to discover and run the TestCase.
    unittest.main()
if __name__ == '__main__':
main()
| 32.821053 | 78 | 0.59814 | 761 | 6,236 | 4.733246 | 0.13929 | 0.034981 | 0.037479 | 0.033315 | 0.536924 | 0.503887 | 0.451971 | 0.403387 | 0.392282 | 0.345641 | 0 | 0.014821 | 0.28592 | 6,236 | 189 | 79 | 32.994709 | 0.794071 | 0.006735 | 0 | 0.341935 | 0 | 0 | 0.257429 | 0 | 0 | 0 | 0 | 0 | 0.148387 | 1 | 0.148387 | false | 0.006452 | 0.032258 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
a51d4b9577b8109ecc4fe05e8b1f90f8ad9d6edf | 1,100 | py | Python | update_others.py | XiaogangHe/cv | 814fabeb93c6302526ad0fca79587bf3fbd2a0ea | [
"CC-BY-4.0"
] | null | null | null | update_others.py | XiaogangHe/cv | 814fabeb93c6302526ad0fca79587bf3fbd2a0ea | [
"CC-BY-4.0"
] | null | null | null | update_others.py | XiaogangHe/cv | 814fabeb93c6302526ad0fca79587bf3fbd2a0ea | [
"CC-BY-4.0"
] | 1 | 2020-12-20T08:02:39.000Z | 2020-12-20T08:02:39.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import division, print_function
import re
import json
import requests
def get_number_of_citations(url):
r = requests.get(url)
try:
r.raise_for_status()
except Exception as e:
print(e)
return None
results = re.findall("([0-9]+) results", r.text)
if not len(results):
print("no results found")
return None
try:
return int(results[0])
except Exception as e:
print(e)
return None
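# update_others() refreshes the citation counts in other_pubs.json by scraping
# the "N results" string from each publication's Google Scholar URL, keeping
# the stored count when scraping fails or returns a smaller number.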
def update_others():
with open("other_pubs.json", "r") as f:
pubs = json.load(f)
for i, pub in enumerate(pubs):
if not pub["url"].startswith("https://scholar.google.com"):
continue
n = get_number_of_citations(pub["url"])
if n is None or n < pub["citations"]:
continue
pubs[i]["citations"] = n
with open("other_pubs.json", "w") as f:
json.dump(pubs, f, sort_keys=True, indent=2, separators=(",", ": "))
if __name__ == "__main__":
update_others()
| 23.913043 | 76 | 0.571818 | 147 | 1,100 | 4.102041 | 0.510204 | 0.049751 | 0.036484 | 0.066335 | 0.182421 | 0.112769 | 0.112769 | 0.112769 | 0 | 0 | 0 | 0.006435 | 0.293636 | 1,100 | 45 | 77 | 24.444444 | 0.769627 | 0.038182 | 0 | 0.323529 | 0 | 0 | 0.118371 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.117647 | 0 | 0.294118 | 0.117647 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eb47b43bc4e1a81b0b221c391bba44d51529910a | 1,451 | py | Python | haminfo/db/models/request.py | hemna/haminfo | 86db93536075999afa086fda84f10c1911af0375 | [
"Apache-2.0"
] | null | null | null | haminfo/db/models/request.py | hemna/haminfo | 86db93536075999afa086fda84f10c1911af0375 | [
"Apache-2.0"
] | null | null | null | haminfo/db/models/request.py | hemna/haminfo | 86db93536075999afa086fda84f10c1911af0375 | [
"Apache-2.0"
] | null | null | null | import sqlalchemy as sa
import datetime
from haminfo.db.models.modelbase import ModelBase
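# ORM model recording a single lookup request: when it was made, the
# requester's callsign and location, the band and filters used, and the
# stations returned.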
class Request(ModelBase):
__tablename__ = 'request'
id = sa.Column(sa.Integer, sa.Sequence('request_id_seq'), primary_key=True)
created = sa.Column(sa.Date)
latitude = sa.Column(sa.Float)
longitude = sa.Column(sa.Float)
band = sa.Column(sa.String)
filters = sa.Column(sa.String)
count = sa.Column(sa.Integer)
callsign = sa.Column(sa.String)
stations = sa.Column(sa.String)
def __repr__(self):
return (f"<Request(callsign='{self.callsign}', created='{self.created}'"
f", latitude='{self.latitude}', longitude='{self.longitude}'), "
f"count='{self.count}' filters='{self.filters}' "
f"stations='{self.stations}'>")
def to_dict(self):
dict_ = {}
for key in self.__mapper__.c.keys():
# LOG.debug("KEY {}".format(key))
dict_[key] = getattr(self, key)
return dict_
@staticmethod
def from_json(r_json):
r = Request(
latitude=r_json["lat"],
longitude=r_json["lon"],
band=r_json["band"],
callsign=r_json.get("Callsign", "None"),
count=r_json.get("count", 1),
filters=r_json.get("filters", "None"),
stations=r_json.get("stations", "None"),
created=datetime.datetime.now()
)
return r
| 30.87234 | 80 | 0.582357 | 175 | 1,451 | 4.668571 | 0.325714 | 0.088127 | 0.110159 | 0.078335 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000937 | 0.264645 | 1,451 | 46 | 81 | 31.543478 | 0.764761 | 0.021365 | 0 | 0 | 0 | 0 | 0.187588 | 0.118477 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081081 | false | 0 | 0.081081 | 0.027027 | 0.540541 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
eb49b2cbd9ad1ebc523230ba0859acde197afa9b | 1,166 | py | Python | Box_Plot.py | johndunne2019/pands-project | 4a544be7a4074dc8a277775981b3619239c45872 | [
"Apache-2.0"
] | null | null | null | Box_Plot.py | johndunne2019/pands-project | 4a544be7a4074dc8a277775981b3619239c45872 | [
"Apache-2.0"
] | null | null | null | Box_Plot.py | johndunne2019/pands-project | 4a544be7a4074dc8a277775981b3619239c45872 | [
"Apache-2.0"
] | null | null | null | # 2019-04-24
# John Dunne
# Box plot of the data set
# pandas box plot documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.boxplot.html
print("The box plot will appear on the screen momentarily")
import matplotlib.pyplot as pl
import pandas as pd
# imported the libraries needed and gave them shortened names
data = "https://raw.githubusercontent.com/johndunne2019/pands-project/master/Fishers_Iris_data_set.csv"
# URL of the data set to be read by pandas.read_csv
dataset = pd.read_csv(data, header=0)
# pandas.read_csv used to read in the data set from my repository; header=0 treats the first row as the column names
dataset.boxplot(by='species', grid=True)
# pandas DataFrame.boxplot used to plot box plots of the data set: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.boxplot.html
# Added grid and set the box plot to group by column 5 of the data set - species
# So in this case 4 plots: petal width, petal length, sepal width and sepal length
# Adapted from : http://cmdlinetips.com/2018/03/how-to-make-boxplots-in-python-with-pandas-and-seaborn/
pl.show()
# pyplot.show() command shows the box plot on the screen