hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d741d0669fd00ba25351122795792c2771bb54c4 | 928 | py | Python | ocdsapi/views/swagger.py | openprocurement/ocdsapi | 137d979c1f674b251436b3f68da8164921218bd6 | [
"Apache-2.0"
] | null | null | null | ocdsapi/views/swagger.py | openprocurement/ocdsapi | 137d979c1f674b251436b3f68da8164921218bd6 | [
"Apache-2.0"
] | 4 | 2019-12-26T17:18:39.000Z | 2022-03-21T22:16:51.000Z | ocdsapi/views/swagger.py | openprocurement/ocdsapi | 137d979c1f674b251436b3f68da8164921218bd6 | [
"Apache-2.0"
] | 2 | 2018-05-11T12:07:28.000Z | 2018-07-27T16:19:30.000Z | import cornice
import cornice_swagger
from pyramid.view import view_config
from ocdsapi.constants import DESCRIPTIONS, RESPONSES, RECORD
from deep_merge import merge
@view_config(
renderer='simplejson',
route_name='cornice_swagger.open_api_path',
request_method='GET'
)
def swagger_json(request):
services = cornice.service.get_services()
swagger = cornice_swagger.CorniceSwagger(
services,
pyramid_registry=request.registry
)
swagger.base_path = '/api'
swagger.summary_docstrings = True
info = request.registry.settings['api_specs']
base = swagger.generate(**info, info=info)
for path in base['paths'].keys():
for doc in (DESCRIPTIONS, RESPONSES):
merge(base['paths'][path], doc[path.lstrip('/')])
base['definitions'] = request.registry.models
    # This is a private endpoint, so remove it from the public spec.
    del base['paths']['/releases.json']['post']
return base
| 29.935484 | 61 | 0.701509 | 110 | 928 | 5.772727 | 0.490909 | 0.066142 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181034 | 928 | 30 | 62 | 30.933333 | 0.835526 | 0.025862 | 0 | 0 | 0 | 0 | 0.110865 | 0.032151 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0 | 0.192308 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7426df2bfaf4cfb1c328559d0df6414d05b4a45 | 2,246 | py | Python | Tasks/GenerateThreadsSimulationsTask.py | edifym/Edifym | 928bb74332abe09dccaee3a94a49b00fa97c7c8c | [
"MIT"
] | 1 | 2018-09-10T14:28:25.000Z | 2018-09-10T14:28:25.000Z | Tasks/GenerateThreadsSimulationsTask.py | edifym/Edifym | 928bb74332abe09dccaee3a94a49b00fa97c7c8c | [
"MIT"
] | null | null | null | Tasks/GenerateThreadsSimulationsTask.py | edifym/Edifym | 928bb74332abe09dccaee3a94a49b00fa97c7c8c | [
"MIT"
] | null | null | null | import itertools
import datetime
import MainConfig
from Tasks.ITask import ITask
from typing import List, Iterator
from BenchmarkConfig import Benchmark, Task
from Tasks.RunSingleSimulationTask import RunSingleSimulationTask
from Tasks.ValidateSingleSimulationTask import ValidateSingleSimulationTask
class GenerateThreadsSimulationsTask(ITask):
main_config: MainConfig
benchmark: Benchmark
skip: float
rank: int
def __init__(self, main_config: MainConfig, benchmark: Benchmark, rank: int, skip: float):
self.main_config = main_config
self.benchmark = benchmark
self.skip = skip
self.rank = rank
def produce_task_permutations(self, tasks: List[Task]) -> Iterator[List[Task]]:
for workloads in itertools.islice(itertools.permutations(tasks, len(tasks)), int(self.rank * self.skip), int((self.rank + 1) * self.skip)):
yield workloads
def get_run_args(self, tasks: List[Task]) -> str:
args = f"{self.main_config.executable} {len(tasks)} "
for task in tasks:
args += task.name + ";"
args = args[:-1]
args += " "
for task in tasks:
if task.values:
for value in task.values:
args += str(value.values[0]) + ";"
args = args[:-1]
args += " "
else:
args += "0 "
return args
def execute(self):
start = datetime.datetime.now()
print(f'node {self.rank} starting GenerateThreadsSimulationsTask {len(self.benchmark.tasks)} {start}')
run_id = 1
for task_permutation in self.produce_task_permutations(self.benchmark.tasks):
for x in range(len(self.benchmark.tasks) + 1):
core_one = task_permutation[:x]
core_two = task_permutation[x:]
run_args: List[str] = [self.get_run_args(core_one), self.get_run_args(core_two)]
ValidateSingleSimulationTask(self.main_config, run_args, self.rank, run_id, self.main_config.num_cpus, 6000).execute()
run_id += 1
end = datetime.datetime.now()
print(f'node {self.rank} GenerateThreadsSimulationsTask done {end - start}')
| 33.522388 | 147 | 0.632235 | 258 | 2,246 | 5.372093 | 0.255814 | 0.050505 | 0.050505 | 0.041847 | 0.134199 | 0.053391 | 0.053391 | 0.053391 | 0 | 0 | 0 | 0.00729 | 0.267142 | 2,246 | 66 | 148 | 34.030303 | 0.834751 | 0 | 0 | 0.122449 | 0 | 0 | 0.092164 | 0.051647 | 0 | 0 | 0 | 0 | 0 | 1 | 0.081633 | false | 0 | 0.163265 | 0 | 0.367347 | 0.040816 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74535ec87e43cf550d39fbea9aff10509052010 | 1,181 | py | Python | code/gen_gae_dataset.py | ucsd-hep-ex/GraphAE | 0ccf7dee48ee8096e2b186fc03ba911732b7a687 | [
"Apache-2.0"
] | 2 | 2021-08-05T00:26:14.000Z | 2021-12-06T08:25:30.000Z | code/gen_gae_dataset.py | ucsd-hep-ex/GraphAE | 0ccf7dee48ee8096e2b186fc03ba911732b7a687 | [
"Apache-2.0"
] | 1 | 2021-09-17T17:34:26.000Z | 2021-09-17T17:40:39.000Z | code/gen_gae_dataset.py | ucsd-hep-ex/GraphAE | 0ccf7dee48ee8096e2b186fc03ba911732b7a687 | [
"Apache-2.0"
] | 4 | 2021-08-13T07:35:18.000Z | 2022-02-08T12:40:26.000Z | from datagen.graph_data_gae import GraphDataset
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--dataset", type=str, help="dataset path", required=True)
parser.add_argument("--n-proc", type=int, default=1, help="number of concurrent processes")
parser.add_argument("--n-events", type=int, default=-1, help="number of events (-1 means all)")
parser.add_argument("--n-particles", type=int, default=-1, help="max number of particles per jet with zero-padding (-1 means all)")
parser.add_argument("--bb", type=int, default=0, help="black box number (0 is background, -1 is the mixed rnd set)")
parser.add_argument("--n-events-merge", type=int, default=100, help="number of events to merge")
parser.add_argument("--features", choices=['xyz','relptetaphi'], help="Generate (px,py,pz) or relative (pt,eta,phi)", required=True)
args = parser.parse_args()
gdata = GraphDataset(root=args.dataset, bb=args.bb, n_proc=args.n_proc,
n_events=args.n_events, n_particles=args.n_particles,
n_events_merge=args.n_events_merge, features=args.features)
| 65.611111 | 136 | 0.69348 | 171 | 1,181 | 4.625731 | 0.415205 | 0.079646 | 0.150442 | 0.091024 | 0.21871 | 0.134008 | 0.068268 | 0 | 0 | 0 | 0 | 0.011066 | 0.15834 | 1,181 | 17 | 137 | 69.470588 | 0.784708 | 0 | 0 | 0 | 0 | 0 | 0.302286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.133333 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7455ed8b21a3d5aa4fe9118a564dff225930a31 | 2,233 | py | Python | utils.py | princeton-vl/attach-juxtapose-parser | 20e0ebb1bf43fc69b6a4c46a54bb0362fc062eed | [
"BSD-2-Clause"
] | 23 | 2020-10-29T01:53:54.000Z | 2022-03-14T07:39:02.000Z | utils.py | princeton-vl/attach-juxtapose-parser | 20e0ebb1bf43fc69b6a4c46a54bb0362fc062eed | [
"BSD-2-Clause"
] | 1 | 2021-06-11T08:29:33.000Z | 2021-06-11T12:28:59.000Z | utils.py | princeton-vl/attach-juxtapose-parser | 20e0ebb1bf43fc69b6a4c46a54bb0362fc062eed | [
"BSD-2-Clause"
] | 5 | 2020-12-10T01:46:27.000Z | 2022-03-30T14:39:11.000Z | """
Some utility functions
"""
import torch
import torch.nn as nn
import numpy as np
from omegaconf import DictConfig
from omegaconf.listconfig import ListConfig
from typing import Tuple, Any, List, Union, Dict
def get_device() -> Tuple[torch.device, bool]:
"Get GPU if available"
use_gpu = torch.cuda.is_available()
return torch.device("cuda" if use_gpu else "cpu"), use_gpu
def load_model(model_path: str) -> Dict[str, Any]:
"Load a model checkpoint"
if torch.cuda.is_available():
return torch.load(model_path) # type: ignore
else:
return torch.load(model_path, map_location=torch.device("cpu")) # type: ignore
def count_params(model: nn.Module) -> int:
"The number of parameters in a PyTorch model"
return sum([p.numel() for p in model.parameters()])
def count_actions(
pred: Union[List[Any], List[List[Any]]], gt: Union[List[Any], List[List[Any]]]
) -> Tuple[int, int]:
"Count the number of correct actions and the number of total actions"
if isinstance(pred[0], list):
num_correct = np.sum(
np.sum(x == y for x, y in zip(pred_seq, gt_seq))
for pred_seq, gt_seq in zip(pred, gt)
)
num_total = np.sum(len(pred_seq) for pred_seq in pred)
else:
num_correct = np.sum([x == y for x, y in zip(pred, gt)])
num_total = len(pred)
return num_correct, num_total
def conf2list(cfg: ListConfig) -> List[Any]:
cfg_list: List[Any] = []
for v in cfg:
if isinstance(v, ListConfig):
cfg_list.append(conf2list(v))
elif isinstance(v, DictConfig):
cfg_list.append(conf2dict(v))
else:
assert v is None or isinstance(v, (str, int, float, bool))
cfg_list.append(v)
return cfg_list
def conf2dict(cfg: DictConfig) -> Dict[str, Any]:
cfg_dict: Dict[str, Any] = {}
for k, v in cfg.items():
assert isinstance(k, str)
if isinstance(v, ListConfig):
cfg_dict[k] = conf2list(v)
elif isinstance(v, DictConfig):
cfg_dict[k] = conf2dict(v)
else:
assert v is None or isinstance(v, (str, int, float, bool))
cfg_dict[k] = v
return cfg_dict
| 28.628205 | 87 | 0.624272 | 329 | 2,233 | 4.130699 | 0.24924 | 0.030905 | 0.022075 | 0.029433 | 0.338484 | 0.272995 | 0.172185 | 0.116262 | 0.116262 | 0.116262 | 0 | 0.004219 | 0.257053 | 2,233 | 77 | 88 | 29 | 0.814949 | 0.092253 | 0 | 0.178571 | 0 | 0 | 0.074943 | 0 | 0 | 0 | 0 | 0 | 0.053571 | 1 | 0.107143 | false | 0 | 0.107143 | 0 | 0.339286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d745a1b1c2c3a110727607755d03ea2f15540c29 | 4,664 | py | Python | distsup/results.py | petropusz/DistSup | 4f9dc50fb6c96f1e4348bb6b79a0b22d1078161a | [
"Apache-2.0"
] | 28 | 2020-01-11T08:17:54.000Z | 2022-01-07T14:12:28.000Z | distsup/results.py | petropusz/DistSup | 4f9dc50fb6c96f1e4348bb6b79a0b22d1078161a | [
"Apache-2.0"
] | 1 | 2021-05-03T15:24:26.000Z | 2021-05-04T08:27:45.000Z | distsup/results.py | petropusz/DistSup | 4f9dc50fb6c96f1e4348bb6b79a0b22d1078161a | [
"Apache-2.0"
] | 11 | 2020-02-27T15:34:04.000Z | 2021-09-12T20:22:46.000Z | # -*- coding: utf8 -*-
# Copyright 2019 JSALT2019 Distant Supervision Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
import pandas as pd
def read_single_cvs(fpath, exp_id):
fpath = fpath.replace(os.path.sep * 2, os.path.sep)
df = pd.read_csv(
fpath, header=0,
dtype={'step': 'int32', 'name': 'category', 'value': 'float32'})
(exp_id, subset, fname) = exp_id.rsplit(os.path.sep, 2)
df['tstamp'] = int(fname.split('.')[1])
df['exp_id'] = exp_id
df['subset'] = subset
cols = ['exp_id', 'subset', 'step', 'name', 'tstamp']
df = df.sort_values(cols)
df = df[cols + ['value']]
meta = {'exp_path': fpath.rsplit(os.path.sep, 2)[0],
'exp_id': exp_id}
return df, meta
def read_csvs(root_dir):
data_files = glob.glob(f'{root_dir}/**/*.csv', recursive=True)
common = os.path.commonprefix(data_files)
data_frames, metas = zip(*[
read_single_cvs(fpath, fpath[len(common):])
for fpath in data_files])
df = pd.concat(data_frames, ignore_index=True)
df = df.loc[df.groupby(['exp_id', 'subset', 'name', 'step']).tstamp.idxmax()]
del df['tstamp']
df = df.reset_index(drop=True)
meta = pd.DataFrame(metas).drop_duplicates()
paths = meta.exp_path.str.rsplit(os.path.sep, 2, expand=True)
meta['exp_tag'] = paths[1]
meta['exp_name'] = paths[2]
cols = ['exp_id', 'exp_path', 'exp_tag', 'exp_name']
meta = meta.sort_values(cols)
meta = meta[cols]
meta = meta.reset_index(drop=True)
df = meta[['exp_id', 'exp_tag', 'exp_name']
].merge(df, on='exp_id')
return df, meta
def _like_clauses(like_patterns):
where_conds = []
for field_name, field_like in like_patterns.items():
if field_like:
where_conds.append(f'{field_name} LIKE "{field_like}"')
return where_conds
def _in_clause(name, vals):
return f"{name} IN ({', '.join(repr(v)for v in vals)})"
def _where_clause(where_conds):
if where_conds:
where_clause = "WHERE " + " AND ".join(where_conds)
else:
where_clause = ""
return where_clause
def _bq_query(query):
# print(query)
from google.cloud import bigquery
client = bigquery.Client()
query_job = client.query(query)
return query_job.to_dataframe()
def read_bq_experiments(exp_name_like=None, exp_tag_like=None, yaml_like=None,
user_like=None,
cluster_like=None, host_like=None,
exp_ids_in=None
):
like_clauses = _like_clauses({
'exp_name': exp_name_like,
'exp_tag': exp_tag_like,
'yaml': yaml_like,
'user': user_like,
'cluster': cluster_like,
'host': host_like,
})
if exp_ids_in:
uuid_clauses = [_in_clause('uuid', exp_ids_in)]
else:
uuid_clauses = []
query = f"""
SELECT * from results.meta
{_where_clause(like_clauses + uuid_clauses)}
"""
df = _bq_query(query)
df = df.rename(columns={'uuid': 'exp_id'})
return df
def read_bq_results(exp_ids_or_meta_df, subset_like=None, name_like=None):
if isinstance(exp_ids_or_meta_df, pd.DataFrame):
uuids = exp_ids_or_meta_df.exp_id.unique()
meta_df = exp_ids_or_meta_df
else:
uuids = exp_ids_or_meta_df
meta_df = read_bq_experiments(exp_ids_in=uuids)
uuid_clauses = [_in_clause('uuid', uuids)]
like_clauses = _like_clauses({
'subset': subset_like,
'name': name_like
})
log_df = _bq_query(f"""
select * from results.log
{_where_clause(like_clauses + uuid_clauses)}
""")
log_df = log_df.rename(columns={'uuid': 'exp_id'})
log_df = log_df.loc[
log_df.groupby(['exp_id', 'subset', 'name', 'step']).date_utc.idxmax()]
log_df = meta_df[['exp_id', 'exp_tag', 'exp_name']
].merge(log_df, on='exp_id')
log_df = log_df.reset_index(drop=True)
return log_df, meta_df
| 33.314286 | 82 | 0.61235 | 651 | 4,664 | 4.121352 | 0.270353 | 0.033545 | 0.016772 | 0.022363 | 0.200895 | 0.118897 | 0.038017 | 0 | 0 | 0 | 0 | 0.007493 | 0.256003 | 4,664 | 139 | 83 | 33.553957 | 0.765706 | 0.134005 | 0 | 0.104762 | 0 | 0 | 0.153373 | 0.01385 | 0.009524 | 0 | 0 | 0 | 0 | 1 | 0.07619 | false | 0 | 0.038095 | 0.009524 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7468116e9fe990867c9a23f95edad4c08b92d8e | 5,705 | py | Python | test.py | affromero/SMILE | 931510d69b2e33f2fe633563833c50a7408f89ef | [
"MIT"
] | 20 | 2020-10-07T15:44:38.000Z | 2022-02-25T10:08:22.000Z | test.py | affromero/SMILE | 931510d69b2e33f2fe633563833c50a7408f89ef | [
"MIT"
] | 1 | 2022-03-11T03:19:14.000Z | 2022-03-11T03:53:59.000Z | test.py | affromero/SMILE | 931510d69b2e33f2fe633563833c50a7408f89ef | [
"MIT"
] | 1 | 2021-11-03T11:15:52.000Z | 2021-11-03T11:15:52.000Z | from solver import Solver
import warnings
from misc.utils import mean_std_tensor
from misc.scores import Scores
from termcolor import colored
import os
from misc.utils import TimeNow_str
from data_loader import get_loader
from misc.visualization import debug_image_multidomain
from misc.utils import save_json
import math
warnings.filterwarnings('ignore')
class Test(Solver):
def __init__(self, args, data_loader):
self.args = args
super().__init__(args, data_loader)
# ==================================================================#
# ==================================================================#
def sample(self, dataset='', load=False):
last_name = self.resume_name()
save_folder = os.path.join(self.args.sample_path,
'{}_test'.format(last_name))
os.makedirs(save_folder, exist_ok=True)
# max(16, self.args.batch_size)
batch_size = self.args.batch_sample
data_loader = get_loader(self.args,
batch_size=batch_size,
shuffling=True)
string = TimeNow_str()
name = os.path.join(save_folder, string)
self.PRINT(
'Translated test images and saved into "{}"..!'.format(name))
if not self.args.REENACTMENT:
debug_image_multidomain(self.nets_ema,
self.args,
data_loader,
name,
training=False,
translate_all=True,
fill_rgb=self.args.FILL_RGB)
debug_image_multidomain(self.nets_ema,
self.args,
data_loader,
name,
training=False,
fill_rgb=self.args.FILL_RGB)
# if not self.args.REENACTMENT:
# debug_image_multidomain(self.nets_ema,
# self.args,
# data_loader,
# name,
# training=False,
# translate_all=True,
# fill_rgb=self.args.FILL_RGB)
def print_metric(self, dict_metric, _str='', metric='FID', mode='TEST'):
assert _str in ['Latent', 'Reference']
_metric = {}
for key, value in dict_metric.items():
_metric[key] = {}
if isinstance(value, dict):
for kk, vv in value.items():
vv, std = mean_std_tensor(vv)
_metric[key][kk] = {}
_metric[key][kk]['mean'] = '{:.3f}'.format(
vv) if not isinstance(vv, str) else vv
_metric[key][kk]['std'] = '{:.3f}'.format(std)
else:
value, std = mean_std_tensor(value)
_metric[key]['mean'] = '{:.3f}'.format(value)
_metric[key]['std'] = '{:.3f}'.format(std)
# _metric[key] = '{:.3f}'.format(value)
log = "{0} - {2} - {1}\n ->\n{2}\n<-".format(metric, mode, '{}')
log = log.format(
_str,
"\n".join("\t{}: {}".format(k, v) for k, v in _metric.items()))
log = colored(log, 'yellow')
return log, _metric
def Eval(self):
if self.args.FAN:
FAN = self.nets_ema.FAN
else:
FAN = None
scores = Scores(self.args,
generator=self.nets_ema.G,
style_model=self.nets_ema.S,
mapping=self.nets_ema.F,
verbose=True,
FAN=FAN,
mode='test')
results_json = {}
# for _str in ['Reference', 'Latent']:
for _str in ['Latent', 'Reference']:
results_json[_str] = {}
results = scores.Eval(latent_guided=_str == 'Latent',
image_guided=_str == 'Reference')
for keys in results.keys():
if 'files' in keys:
continue
if 'Female' in results[keys].keys():
log, _metric = self.print_metric(results[keys],
_str=_str,
metric=keys.upper())
results_json[_str][keys.upper()] = _metric
scores.PRINT(log)
else:
for key, values in results[keys].items():
if key in ['P', 'R']:
continue # not interested in mean/sd of precision and recall
_name = '{}_{}'.format(keys.upper(), key.upper())
log, _metric = self.print_metric(values,
_str=_str,
metric=_name)
results_json[_str][_name] = _metric
scores.PRINT(log)
json_file = '{}/{}_test_{}'.format(self.args.sample_path,
self.args.pretrained_model,
self.args.json_file)
save_json(results_json, json_file)
# python main.py --batch_size=4 --GPU=NO_CUDA --FAN --EYEGLASSES --GENDER
# --HAT --EARRINGS --HAIR --BANGS --ORG_DS --TRAIN_MASK --STYLE_SEMANTICS
# --lambda_ds=20 --MOD --SPLIT_STYLE
| 41.948529 | 89 | 0.445399 | 544 | 5,705 | 4.444853 | 0.272059 | 0.062862 | 0.031845 | 0.029777 | 0.184864 | 0.165012 | 0.134409 | 0.134409 | 0.134409 | 0.134409 | 0 | 0.004281 | 0.426819 | 5,705 | 135 | 90 | 42.259259 | 0.735168 | 0.140754 | 0 | 0.2 | 0 | 0 | 0.047112 | 0 | 0 | 0 | 0 | 0 | 0.009524 | 1 | 0.038095 | false | 0 | 0.104762 | 0 | 0.161905 | 0.028571 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74703463ad0cc4855f67c2727d69d01259cc832 | 1,382 | py | Python | src/muzero/logger.py | ipsec/model-based-rl | c285b0234a077c343b197b4fffb5803501c6ac13 | [
"MIT"
] | 12 | 2020-11-09T05:43:04.000Z | 2022-02-09T04:28:18.000Z | src/muzero/logger.py | ipsec/model-based-rl | c285b0234a077c343b197b4fffb5803501c6ac13 | [
"MIT"
] | 9 | 2021-07-07T19:00:57.000Z | 2021-11-16T11:09:54.000Z | src/muzero/logger.py | ipsec/model-based-rl | c285b0234a077c343b197b4fffb5803501c6ac13 | [
"MIT"
] | 10 | 2021-03-18T00:23:48.000Z | 2022-02-09T04:28:20.000Z | from torch.utils.tensorboard import SummaryWriter
import torch
import pytz
import json
import os
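
# Note: this class reads `self.config`, `self.run_tag`, `self.group_tag` and
# `self.worker_id` below but never assigns them, so it appears to be intended
# for use alongside another class (e.g. via multiple inheritance) that
# provides those attributes.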
class Logger(object):
def __init__(self):
self.dirs = self.make_dirs()
self.save_config()
self.writer = SummaryWriter(self.dirs['worker'])
def log_scalar(self, value, tag, i):
self.writer.add_scalar(tag, value, i)
def log_scalars(self, value_dict, group_tag, i):
self.writer.add_scalars(group_tag, value_dict, i)
def log_histogram(self, values, tag, i):
self.writer.add_histogram(tag, values, i)
def log_image(self, image, tag):
self.writer.add_image(tag, image)
def save_config(self):
path = os.path.join(self.dirs['config'], 'config.json')
if not os.path.isfile(path):
json.dump(self.config.__dict__, open(path, 'w'), indent=2)
def make_dirs(self):
base_dir = os.path.join('runs', self.config.environment)
if self.group_tag is not None:
base_dir = os.path.join(base_dir, self.group_tag)
base_dir = os.path.join(base_dir, self.run_tag)
dirs = {'base': base_dir,
'worker': os.path.join(base_dir, self.worker_id),
'saves': os.path.join(base_dir, 'saves'),
'config': os.path.join(base_dir, 'config')}
os.makedirs(dirs['saves'], exist_ok=True)
os.makedirs(dirs['config'], exist_ok=True)
os.makedirs(dirs['worker'], exist_ok=True)
return dirs
| 26.576923 | 64 | 0.672938 | 211 | 1,382 | 4.218009 | 0.270142 | 0.070787 | 0.078652 | 0.078652 | 0.257303 | 0.142697 | 0.062921 | 0.062921 | 0 | 0 | 0 | 0.000885 | 0.182344 | 1,382 | 51 | 65 | 27.098039 | 0.786726 | 0 | 0 | 0 | 0 | 0 | 0.055757 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.142857 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74933e21172d57789df4d55dcc4ed1a58aa98ca | 372 | py | Python | sequencing/abi_to_fasta.py | olgatsiouri1996/biomisc | b4fdaf3dd49816b7ca9da1d200ab4443455ab784 | [
"MIT"
] | 2 | 2020-06-18T23:43:15.000Z | 2020-10-02T12:32:21.000Z | sequencing/abi_to_fasta.py | olgatsiouri1996/biomisc | b4fdaf3dd49816b7ca9da1d200ab4443455ab784 | [
"MIT"
] | 1 | 2021-04-18T00:15:24.000Z | 2021-08-01T20:46:02.000Z | sequencing/abi_to_fasta.py | olgatsiouri1996/biomisc | b4fdaf3dd49816b7ca9da1d200ab4443455ab784 | [
"MIT"
] | null | null | null | # python3
import argparse
from Bio import SeqIO
# input parameters
ap = argparse.ArgumentParser()
ap.add_argument("-in", "--input_file", required=True, help="input abi file")
ap.add_argument("-out", "--output_file", required=True, help="output fasta file")
args = vars(ap.parse_args())
# main
count = SeqIO.convert(args['input_file'], "abi", args['output_file'], "fasta")
| 33.818182 | 81 | 0.723118 | 53 | 372 | 4.943396 | 0.509434 | 0.038168 | 0.099237 | 0.152672 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002985 | 0.099462 | 372 | 10 | 82 | 37.2 | 0.779104 | 0.077957 | 0 | 0 | 0 | 0 | 0.271386 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74d34a2082b15d2f8e88708beb58e06b18f9160 | 1,054 | py | Python | lambda/functions/virtualmail/api.py | virtualmail/virtualmail | c960cda1131848cc34dfd7f153e1d586afce930a | [
"MIT"
] | null | null | null | lambda/functions/virtualmail/api.py | virtualmail/virtualmail | c960cda1131848cc34dfd7f153e1d586afce930a | [
"MIT"
] | null | null | null | lambda/functions/virtualmail/api.py | virtualmail/virtualmail | c960cda1131848cc34dfd7f153e1d586afce930a | [
"MIT"
] | null | null | null | from .vmapi.actions import Actions
from anslapi import APIHandler
from anlogger import Logger
_logger = Logger("virtualmail-api", 'INFO')
logger = _logger.get()
from anenvconf import Config, ConfigValueType
config_schema = {
'email_domains': {
'type': ConfigValueType.JSON
},
'ddb_tablename': {},
'sns_admin': {},
'owner_domains': {
'type': ConfigValueType.JSON
},
'recipient_domains': {
'type': ConfigValueType.JSON
},
'restricted_access_keys': {
'type': ConfigValueType.JSON,
'default': '{}'
}
}
config = Config(config_schema)
def lambda_handler(event, context):
apikeyid = event['requestContext']['identity']['apiKeyId']
logger.info("apikeyid={}".format(apikeyid))
ah = APIHandler()
ac = Actions(config, apikeyid, logger)
ah.add_handler('/get', 'POST', ac.get)
ah.add_handler('/add', 'POST', ac.add)
ah.add_handler('/delete', 'POST', ac.delete)
ah.add_handler('/modify', 'POST', ac.modify)
response = ah.handle(event)
logger.info(response)
return response
| 22.425532 | 60 | 0.665085 | 117 | 1,054 | 5.854701 | 0.410256 | 0.110949 | 0.134307 | 0.131387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.175522 | 1,054 | 46 | 61 | 22.913043 | 0.788262 | 0 | 0 | 0.083333 | 0 | 0 | 0.199241 | 0.020873 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.111111 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74eb6a02e5e7b74956573d997e01c4afec4190e | 26,780 | py | Python | grapevine.py | IceyMint/dumserver | c2f10eb602c9890080d72672872a9255aa508c8f | [
"MIT"
] | 66 | 2018-12-06T05:57:34.000Z | 2022-03-02T15:45:22.000Z | grapevine.py | IceyMint/dumserver | c2f10eb602c9890080d72672872a9255aa508c8f | [
"MIT"
] | 41 | 2018-12-11T14:50:33.000Z | 2021-11-26T11:18:36.000Z | grapevine.py | IceyMint/dumserver | c2f10eb602c9890080d72672872a9255aa508c8f | [
"MIT"
] | 16 | 2019-02-08T02:09:27.000Z | 2021-01-26T19:10:21.000Z | __filename__ = "grapevine.py"
__author__ = "Jubelo"
__credits__ = ["Jubelo", "Bartek Radwanski"]
__license__ = "MIT"
__version__ = "0.7.1"
__maintainer__ = "Bartek Radwanski"
__email__ = "bartek.radwanski@gmail.com"
__status__ = "Stable"
#!/usr/bin/env python3
# Project: Akrios
# Filename: grapevine.py
#
# File Description: A module to allow connection to Grapevine chat network.
# Visit https://www.grapevine.haus/
#
# Dependencies: You will need to 'pip3 install websocket-client' to use this module.
#
#
# Implemented features:
#    Authentication to the grapevine network.
# Registration to the Gossip Channel(default) or other channels.
# Restart messages from the grapevine network.
# Sending and receiving messages to the Gossip(default) or other channel.
# Sending and receiving Player sign-in/sign-out messages.
# Player sending and receiving Tells.
# Sending and receiving player status requests.
# Sending single game requests.
# Game connect and disconnect messages.
# Sending and receiving game status requests.
# Game Status (all connected games, and single game)
#
#
# Example usage would be to import this module into your main game server. During server startup
# create grapevine.gsocket = grapevine.GrapevineSocket(). During instance init
# is when the connection to grapevine.haus happens. PLEASE PUT YOUR CLIENT ID AND CLIENT SECRET
# into the appropriate instance attributes of GrapevineSocket below. Please note the instance
# attribute in GrapevineSocket of debug, set to True if you would like to print to stdout various
# things that happen to help with debugging.
#
# You will need to periodically call the gsocket.handle_read() and gsocket.handle_write() as
# required by your configuration. Please see the examples in the repo of how this might look
# for you.
#
# The below two functions are being passed in the grapevine.gsocket as a variable named event_.
#
#@reoccuring_event
#def event_grapevine_send_message(event_):
# if len(event_.owner.outbound_frame_buffer) > 0:
# event_.owner.handle_write()
#
#@reoccuring_event
#def event_grapevine_player_query_status(event_):
# event_.owner.msg_gen_player_status_query()
#
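# For servers that do not use an event/scheduler system, a plain polling loop
# also works. The sketch below is illustrative only (the loop condition is an
# assumption), but the attributes and methods it calls all exist on
# GrapevineSocket:
#
#while game_is_running:
#    gsocket.handle_read()
#    if len(gsocket.inbound_frame_buffer) > 0:
#        gsocket.receive_message().parse_frame()
#    if len(gsocket.outbound_frame_buffer) > 0:
#        gsocket.handle_write()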
#
#
# Please see additional code examples of commands, events, etc in the repo.
# https://github.com/oestrich/gossip-clients
#
# Or visit the latest version of the live client at:
# https://github.com/bdubyapee/akriosmud
#
# By: Jubelo, Creator of AkriosMUD
# At: akriosmud.funcity.org:4000
# jubelo@akriosmud.funcity.org
#
'''
Module used to communicate with the Grapevine.haus chat+ network.
https://grapevine.haus
https://vineyard.haus
Classes:
GrapevineReceivedMessage is used to parse incoming JSON messages from the network.
__init__(self, message, gsock)
message is the JSON from the grapevine network
gsock is the instance of GrapevineSocket for tracking foreign players locally
GrapevineSocket is used to authentcate to and send messages to the grapevine network.
__init__(self)
Module Variables of Note:
gsocket is an instance of GrapevineSocket, when this module is imported the authentication
portion is completed and working with grapevine is done through the gsocket.
'''
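
# A minimal, hypothetical wiring for a game server, following the notes in the
# header comment above (the error handling shown here is illustrative only):
#
#   import grapevine
#   grapevine.gsocket = grapevine.GrapevineSocket()
#   if not grapevine.gsocket.gsocket_connect():
#       print("Unable to reach grapevine.haus; continuing without chat.")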
import json
import socket
import datetime
import uuid
import time
# import config parser
import configparser
from websocket import WebSocket
from functions import log
# load the configuration file
Config = configparser.ConfigParser()
Config.read('config.ini')
# example of config file usage
# print(str(Config.get('Database', 'Hostname')))
# The below imports are for Akrios. PLEASE LOOK BELOW FOR COMMENTS WITH XXX
# in them to see how I tied in my side. You can safely ignore some of them
# being commented, but others you will need to implement (like heartbeat player list).
#import comm
#import event
#from keys import CLIENT_ID, SECRET_KEY
#import player
#import world
class GrapevineReceivedMessage():
def __init__(self, message, gsock):
# Short hand to convert JSON data to instance attributes.
# Not secure at all. If you're worreid about it feel free to modify
# to your needs.
for eachkey, eachvalue in json.loads(message).items():
setattr(self, eachkey, eachvalue)
# Point an instance attribute to the module level grapevine socket.
# Used for adding to and removing refs as well as keeping the foreign player
# cache in the gsocket up to date.
self.gsock = gsock
        # When we receive a websocket frame it will always have an event type.
self.rcvr_func = {"heartbeat": (self.gsock.msg_gen_heartbeat, None),
"authenticate": (self.is_received_auth, None),
"restart": (self.is_received_restart, None),
"channels/broadcast": (self.received_broadcast_message, None),
"channels/subscribe": (self.received_chan_sub, gsock.sent_refs),
"channels/unsubscribe": (self.received_chan_unsub, gsock.sent_refs),
"players/sign-out": (self.received_player_logout, gsock.sent_refs),
"players/sign-in": (self.received_player_login, gsock.sent_refs),
"games/connect": (self.received_games_connected, None),
"games/disconnect": (self.received_games_disconnected, None),
"games/status": (self.received_games_status, gsock.sent_refs),
"players/status": (self.received_player_status, gsock.sent_refs),
"tells/send": (self.received_tells_status, gsock.sent_refs),
"tells/receive": (self.received_tells_message, None),
"channels/send": (self.received_message_confirm, gsock.sent_refs)}
self.restart_downtime = 0
def parse_frame(self):
'''
Parse any received JSON from the Grapevine network.
Verify we have an attribute from the JSON that is 'event'. If we have a key
in the rcvr_func that matches we will execute.
return whatever is returned by the method, or None.
'''
if hasattr(self, "event") and self.event in self.rcvr_func:
exec_func, args = self.rcvr_func[self.event]
            if args is None:
retvalue = exec_func()
else:
retvalue = exec_func(args)
if retvalue:
return retvalue
def is_event_status(self, status):
'''
        A helper method to determine whether the event we received has the given status.
return True/False
'''
if hasattr(self, "event") and hasattr(self, "status"):
if self.status == status:
return True
else:
return False
def is_received_auth(self):
'''
        We received an authentication event.
        Determine if we are already authenticated; if so, subscribe to the channels
        as determined in msg_gen_chan_subscribe in the GrapevineSocket object.
Otherwise, if we are not authenticated yet we send another authentication attempt
via msg_gen_authenticate(). This is in place for path hiccups or restart events.
return None
'''
if self.is_event_status("success"):
self.gsock.state["authenticated"] = True
self.gsock.msg_gen_chan_subscribe()
self.gsock.msg_gen_player_status_query()
elif self.gsock.state["authenticated"] == False:
self.gsock.msg_gen_authenticate()
def is_received_restart(self):
'''
        We received a restart event. We'll assign the value to the restart_downtime
attribute for access by the calling code.
return None
'''
if hasattr(self, "payload"):
self.restart_downtime = int(self.payload["downtime"])
def received_chan_sub(self, sent_refs):
'''
We have attempted to subscribe to a channel. This is a response message from Grapevine.
If failure, we make sure we show unsubbed in our local list.
if success, we make sure we show subscribed in our local list.
return None
'''
if hasattr(self, "ref") and self.ref in sent_refs:
orig_req = sent_refs.pop(self.ref)
if self.is_event_status("failure"):
channel = orig_req["payload"]["channel"]
self.gsock.subscribed[channel] = False
if self.gsock.debug:
print(f"Failed to subscribe to channel {channel}")
elif self.is_event_status("success"):
channel = orig_req["payload"]["channel"]
self.gsock.subscribed[channel] = True
def received_chan_unsub(self, sent_refs):
'''
We at some point sent a channel unsubscribe. This is verifying Grapevine
received that. We unsub in our local list.
return None
'''
if hasattr(self, "ref") and self.ref in sent_refs:
orig_req = sent_refs.pop(self.ref)
channel = orig_req["payload"]["channel"]
self.gsock.subscribed[channel] = False
def received_player_logout(self, sent_refs):
'''
We have received a "player/sign-out" message from Grapevine.
Determine if it is a success message, which is an indication to us that Grapevine
received a player logout from us and is acknowledging, or if it is a message from
another game on the Grapevine network.
return None if it's an ack from grapevine, return player info if it's foreign.
'''
if hasattr(self, "ref"):
# We are a success message from Grapevine returned from our notification.
if self.ref in sent_refs and self.is_event_status("success"):
orig_req = sent_refs.pop(self.ref)
return
# We are receiving a player logout from another game.
if "game" in self.payload:
game = self.payload["game"].capitalize()
player = self.payload["name"].capitalize()
if game in self.gsock.other_games_players:
if player in self.gsock.other_games_players[game]:
self.gsock.other_games_players[game].remove(player)
if len(self.gsock.other_games_players[game]) <= 0:
self.gsock.other_games_players.pop(game)
return (player, "signed out of", game)
def received_player_login(self, sent_refs):
'''
We have received a "player/sign-in" message from Grapevine.
Determine if it is a success message, which is an indication to us that Grapevine
received a player login from us and is acknowledging, or if it is a message from
another game on the Grapevine Network.
return None if it's an ack from grapevine, return player info if it's foreign
'''
if hasattr(self, "ref"):
# We are a success message from Grapevine returned from our notification.
if self.ref in sent_refs and self.is_event_status("success"):
orig_req = sent_refs.pop(self.ref)
return
# We are a player login notification from Grapevine.
if "game" in self.payload:
game = self.payload["game"].capitalize()
player = self.payload["name"].capitalize()
if game in self.gsock.other_games_players:
if player not in self.gsock.other_games_players[game]:
self.gsock.other_games_players[game].append(player)
else:
self.gsock.other_games_players[game] = []
self.gsock.other_games_players[game].append(player)
return (player, "signed into", game)
def received_player_status(self, sent_refs):
'''
We have requested a multi-game or single game status update.
This is the response. We pop the valid Ref from our local list
and add them to the local cache.
return None
'''
if hasattr(self, "ref") and hasattr(self, "payload"):
# On first receive we pop the ref just so it's gone from the queue
if self.ref in sent_refs:
orig_req = sent_refs.pop(self.ref)
game = self.payload["game"].capitalize()
if len(self.payload["players"]) == 1 and self.payload["players"] == "":
self.gsock.other_games_players[game] = []
return
if len(self.payload["players"]) == 1:
player = self.payload["players"][0].capitalize()
self.gsock.other_games_players[game] = []
self.gsock.other_games_players[game].append(player)
return
if len(self.payload["players"]) > 1:
player = [player.capitalize() for player in self.payload["players"]]
self.gsock.other_games_players[game] = []
self.gsock.other_games_players[game] = player
return
def received_tells_status(self, sent_refs):
'''
        One of the local players has sent a tell. This is the specific response
        when an error occurred. Provide the error and other pertinent info to the
        local game for handling as required.
'''
if hasattr(self, "ref"):
if self.ref in sent_refs and hasattr(self, "error"):
orig_req = sent_refs.pop(self.ref)
if self.is_event_status("failure"):
caller = orig_req["payload"]['from_name'].capitalize()
target = orig_req["payload"]['to_name'].capitalize()
game = orig_req["payload"]['to_game'].capitalize()
return (caller, target, game, self.error)
def received_tells_message(self):
'''
We have received a tell message destined for a player in our game.
Grab the details and return to the local game to handle as required.
'''
if hasattr(self, "ref") and hasattr(self, "payload"):
sender = self.payload['from_name']
target = self.payload['to_name']
game = self.payload['from_game']
sent = self.payload['sent_at']
message = self.payload['message']
return (sender, target, game, sent, message)
def received_games_status(self, sent_refs):
'''
Received a game status response. Return the received info to the local
game to handle as required. Not using this in Akrios at the moment.
'''
if hasattr(self, "ref") and hasattr(self, "payload") and self.is_event_status("success"):
orig_req = sent_refs.pop(self.ref)
if self.ref in sent_refs:
game = self.payload['game']
display_name = self.payload['display_name']
description = self.payload['description']
homepage = self.payload['homepage_url']
user_agent = self.payload['user_agent']
user_agent_repo = self.payload['user_agent_repo_url']
connections = self.payload['connections']
supports = self.payload['supports']
num_players = self.payload['players_online_count']
return(game, display_name, description, homepage, user_agent,
user_agent_repo, connections, supports, num_players)
if hasattr(self, "ref") and hasattr(self, "error") and self.is_event_status("failure"):
orig_req = sent_refs.pop(self.ref)
if self.ref in sent_refs:
game = orig_req["payload"]["game"]
error_code = self.error
return (game, error_code)
def received_message_confirm(self, sent_refs):
'''
        We received a confirmation that Grapevine received an outbound broadcast message
from us. Nothing to see here other than removing from our sent references list.
'''
if hasattr(self, "ref"):
if self.ref in sent_refs and self.is_event_status("success"):
orig_req = sent_refs.pop(self.ref)
def is_other_game_player_update(self):
'''
A helper method to determine if this is a player update from another game.
'''
if hasattr(self, "event"):
if self.event == "players/sign-in" or self.event == "players/sign-out":
if hasattr(self, "payload") and 'game' in self.payload:
return True
else:
return False
def received_games_connected(self):
'''
A foreign game has connected to the network, add the game to our local
cache of games/players and send a request for player list.
'''
if hasattr(self, "payload"):
# Clear what we knew about this game and request an update.
# Requesting updates from all games at this point, might as well refresh
# as I'm sure some games don't implement all features like player sign-in
# and sign-outs.
self.gsock.other_games_players[self.payload["game"]] = []
self.gsock.msg_gen_player_status_query()
return self.payload["game"]
def received_games_disconnected(self):
'''
A foreign game has disconnected, remove it from our local cache and return
details to local game to handle as required.
'''
if hasattr(self, "payload"):
if self.payload["game"] in self.gsock.other_games_players:
self.gsock.other_games_players.pop(self.payload["game"])
return self.payload["game"]
def received_broadcast_message(self):
'''
We received a broadcast message from another game. Return the pertinent
info so the local game can handle as required. See examples above.
'''
if hasattr(self, "payload"):
#return (self.payload['name'], self.payload['game'], self.payload['message'])
return(self.payload)
class GrapevineSocket(WebSocket):
def __init__(self):
super().__init__(sockopt=((socket.IPPROTO_TCP, socket.TCP_NODELAY,1),))
self.debug = False
self.lastHeartbeat = 0
if int(Config.get('Grapevine', 'Debug')) != 0:
self.debug = True
self.players = []
self.inbound_frame_buffer = []
self.outbound_frame_buffer = []
# This event attribute is specific to AkriosMUD. Replace with your event
# requirements, or comment/delete the below line.
#self.events = event.Queue(self, "grapevine")
# Replace the below with your specific information
# XXX
self.client_id = Config.get('Grapevine', 'ClientID')
self.client_secret = Config.get('Grapevine', 'ClientSecret')
self.supports = ["channels"]
# Populate the channels attribute if you want to subscribe to a specific
# channel or channels during authentication.
# self.channels = ["gossip", "testing", "announcements"]
self.channels = Config.get('Grapevine', 'Channels').split(';')
#print(type(Config.get('Grapevine', 'Channels').split(';')))
#print(Config.get('Grapevine', 'Channels').split(';'))
self.version = "0.1.9"
self.user_agent = Config.get('Grapevine', 'UserAgent')
self.state = {"connected": False,
"authenticated": False}
self.subscribed = {}
for each_channel in self.channels:
self.subscribed[each_channel] = False
# This event initialization is specific to AkriosMUD. This would be a good
# spot to initialize in your event system if required. Otherwise comment/delete this line.
#event.init_events_grapevine(self)
self.sent_refs = {}
# The below is a cache of players we know about from other games.
# Right now I just use this to populate additional fields in our in-game 'who' command
# to also show players logged into other Grapevine connected games.
self.other_games_players = {}
def gsocket_connect(self):
try:
result = self.connect("wss://grapevine.haus/socket")
except:
return False
# We need to set the below on the socket as websockets.WebSocket is
# blocking by default. :(
self.sock.setblocking(0)
self.state["connected"] = True
self.outbound_frame_buffer.append(self.msg_gen_authenticate())
# The below is a log specific to Akrios. Leave commented or replace.
# XXX
#comm.log(world.serverlog, "Sending Auth to Grapevine Network.")
#print("Sending Auth")
return True
def gsocket_disconnect(self):
self.state["connected"] = False
# self.events.clear()
self.subscribed.clear()
self.other_games_players.clear()
self.close()
def send_out(self, frame):
'''
A generic to make writing out cleaner, nothing more.
'''
self.outbound_frame_buffer.append(frame)
def read_in(self):
'''
A generic to make reading in cleaner, nothing more.
'''
return self.inbound_frame_buffer.pop(0)
def import_players(self, playerList):
'''
Custom method for importing a list of players into the object
'''
self.players = playerList
#print(self.players)
def msg_gen_authenticate(self):
'''
Need to authenticate to the Grapevine.haus network to participate.
This creates and sends that authentication as well as defaults us to
an authenticated state unless we get an error back indicating otherwise.
'''
payload = {"client_id": self.client_id,
"client_secret": self.client_secret,
"supports": self.supports,
"channels": self.channels,
"version": self.version,
"user_agent": self.user_agent}
        # If we haven't assigned any channels, let's pull that key out of our
        # auth payload so we aren't trying to auth to an empty string, which
        # also causes Grapevine to send us an error back.
        if len(self.channels) == 0:
payload.pop("channels")
msg = {"event": "authenticate",
"payload": payload}
self.state["authenticated"] = True
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_heartbeat(self):
'''
Once registered to Grapevine we will receive regular heartbeats. The
        docs indicate we should respond with the below heartbeat response, which
        also provides an updated list of logged-in players to the network.
'''
# The below line builds a list of player names logged into Akrios for sending
# in response to a grapevine heartbeat. Uncomment/replace with your functionality.
# XXX
#player_list = [player.name.capitalize() for player in player.playerlist]
# print("Heartbeat!")
self.lastHeartbeat = int(time.time())
payload = {"players": self.players}
#payload = {}
msg = {"event": "heartbeat",
"payload": payload}
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
#return("generated heartbeat!")
def msg_gen_lastheartbeat_timestamp(self):
return self.lastHeartbeat
def msg_gen_chan_subscribe(self, chan=None):
'''
Subscribe to a specific channel, or Gossip by default.
'''
ref = str(uuid.uuid4())
if not chan:
payload = {"channel": "gossip"}
else:
payload = {"channel": chan}
if payload["channel"] in self.subscribed:
return
msg = {"event": "channels/subscribe",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_chan_unsubscribe(self, chan=None):
'''
        Unsubscribe from a specific channel, defaulting to the Gossip channel
        if none is given.
'''
ref = str(uuid.uuid4())
if not chan:
payload = {"channel": "gossip"}
else:
payload = {"channel": chan}
msg = {"event": "channels/unsubscribe",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_player_login(self, player_name):
'''
Notify the Grapevine network of a player login.
'''
ref = str(uuid.uuid4())
payload = {"name": player_name.capitalize()}
msg = {"event": "players/sign-in",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_player_logout(self, player_name):
'''
Notify the Grapevine network of a player logout.
'''
ref = str(uuid.uuid4())
payload = {"name": player_name.capitalize()}
msg = {"event": "players/sign-out",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_message_channel_send(self, caller, channel, message):
'''
Sends a channel message to the Grapevine network. If we're not showing
as subscribed on our end, we bail out.
'''
if channel not in self.subscribed:
return
ref = str(uuid.uuid4())
payload = {"channel": channel,
#"name": caller.name.capitalize(),
"name": caller,
#"message": message[:290]}
"message": message}
msg = {"event": "channels/send",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_game_all_status_query(self):
'''
Request for each game to send full status update. You will receive in
return from each game quite a bit of detailed information. See the
grapevine.haus Documentation or review the receiver code above.
'''
ref = str(uuid.uuid4())
msg = {"events": "games/status",
"ref": ref}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_game_single_status_query(self, game):
'''
Request for a single game to send full status update. You will receive in
return from each game quite a bit of detailed information. See the
grapevine.haus Documentation or review the receiver code above.
'''
ref = str(uuid.uuid4())
msg = {"events": "games/status",
"ref": ref,
"payload": {"game": game}}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_player_status_query(self):
'''
This requests a player list status update from all connected games.
'''
ref = str(uuid.uuid4())
msg = {"event": "players/status",
"ref": ref}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_player_single_status_query(self, game):
'''
Request a player list status update from a single connected game.
'''
ref = str(uuid.uuid4())
msg = {"events": "players/status",
"ref": ref,
"payload": {"game": game}}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def msg_gen_player_tells(self, caller_name, game, target, msg):
'''
Send a tell message to a player on the Grapevine network.
'''
game = game.capitalize()
target = target.capitalize()
ref = str(uuid.uuid4())
time_now = f"{datetime.datetime.utcnow().replace(microsecond=0).isoformat()}Z"
payload = {"from_name": caller_name,
"to_game": game,
"to_name": target,
"sent_at": time_now,
"message": msg[:290]}
msg = {"event": "tells/send",
"ref": ref,
"payload": payload}
self.sent_refs[ref] = msg
self.send_out(json.dumps(msg, sort_keys=True, indent=4))
def handle_read(self):
'''
Perform the actual socket read attempt. Append anything received to the inbound
buffer.
'''
try:
self.inbound_frame_buffer.append(self.recv())
if self.debug:
#print(f"Grapevine In: {self.inbound_frame_buffer[-1]}")
#print("")
log(f"\nGrapevine In: {self.inbound_frame_buffer[-1]}", "debug")
except:
pass
def handle_write(self):
'''
Perform a write out to Grapevine from the outbound buffer.
'''
try:
# wowpin
outdata = None
# /wowpin
outdata = self.outbound_frame_buffer.pop(0)
if outdata != None:
self.send(outdata)
if self.debug:
#print(f"Grapevine Out: {outdata}")
log(f"\nGrapevine Out: {outdata}", "debug")
#print("")
except:
if self.debug:
# wowpin
if outdata != None:
# /wowpin
#print(f"Error sending data frame: {outdata}")
log(f"\nError sending data frame: {outdata}", "debug")
def receive_message(self):
return GrapevineReceivedMessage(self.read_in(), self)
| 33.685535 | 98 | 0.68637 | 3,700 | 26,780 | 4.848108 | 0.142703 | 0.028208 | 0.018954 | 0.019066 | 0.327629 | 0.283699 | 0.244676 | 0.237094 | 0.225889 | 0.210614 | 0 | 0.002686 | 0.207506 | 26,780 | 794 | 99 | 33.72796 | 0.842529 | 0.383308 | 0 | 0.39782 | 0 | 0 | 0.127887 | 0.010264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.108992 | false | 0.002725 | 0.024523 | 0.00545 | 0.20436 | 0.002725 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d74f24122f74ae380750a08746ca75657efd102e | 3,966 | py | Python | examples/study.cases/RBCRelax/run-cmaes.py | JonathanLehner/korali | 90f97d8e2fed2311f988f39cfe014f23ba7dd6cf | [
"MIT"
] | 43 | 2018-07-26T07:20:42.000Z | 2022-03-02T10:23:12.000Z | examples/study.cases/RBCRelax/run-cmaes.py | JonathanLehner/korali | 90f97d8e2fed2311f988f39cfe014f23ba7dd6cf | [
"MIT"
] | 212 | 2018-09-21T10:44:07.000Z | 2022-03-22T14:33:05.000Z | examples/study.cases/RBCRelax/run-cmaes.py | JonathanLehner/korali | 90f97d8e2fed2311f988f39cfe014f23ba7dd6cf | [
"MIT"
] | 16 | 2018-07-25T15:00:36.000Z | 2022-03-22T14:19:46.000Z | #!/usr/bin/env python3
import sys
import argparse
# Importing the computational model
import sys
sys.path.append('./model')
from model import *
# Creating new experiment
import korali
e = korali.Experiment()
procId = 0
procCount = 1
jobId = 0
if ('SLURM_PROCID' in os.environ): procId = os.environ['SLURM_PROCID']
if ('SLURM_NTASKS' in os.environ): procCount = os.environ['SLURM_NTASKS']
if ('SLURM_JOBID' in os.environ): jobId = os.environ['SLURM_JOBID']
parser = argparse.ArgumentParser(prog='RBC Relaxation Sampling', description='Samples the viscosity parameter of an RBC inferred from actual experiments.')
parser.add_argument('--exp', help='Experiment name. Reference data will be taken from the {name}.txt file in the data folder.', default='henon')
parser.add_argument('--lower', help='Lower bound for gammaC uniform prior', default=8000)
parser.add_argument('--upper', help='Upper bound for gammaC uniform prior', default=32000)
parser.add_argument('--tend', help='Value for tend parameter', default=0.4)
parser.add_argument('--inimesh_fname', help='Mesh file at init', default='stretch_Hen1999_d01.off')
args = parser.parse_args()
resFolder = "./results/cmaes_" + args.exp
profFile = resFolder + '/profiling.' + str(jobId) + '.json'
refData = getReferenceData(args.exp)
refPoints = getReferencePoints(args.exp)
ini_mesh_fname = "./data/off_files/{0}".format(args.inimesh_fname)
lowerBound = float(args.lower)
upperBound = float(args.upper)
expTend = float(args.tend)
expName = args.exp
if (int(procId) == int(procCount) - 1):
print("[Korali] --------------------------------------------")
print("[Korali] Running experiment: " + expName + ".txt ...")
print("[Korali] Lower Gamma C Prior Bound: " + str(lowerBound))
print("[Korali] Upper Gamma C Prior Bound: " + str(upperBound))
print("[Korali] Tend Parameter: " + str(expTend))
print("[Korali] Result Folder: " + resFolder)
print("[Korali] Profiling File: " + profFile)
print("[Korali] Job Id: " + str(jobId))
print("[Korali] Rank Count: " + str(procCount))
sys.stdout.flush()
# Setting up the reference likelihood for the Bayesian Problem
e["Problem"]["Type"] = "Bayesian/Reference"
e["Problem"]["Likelihood Model"] = "Normal"
e["Problem"]["Reference Data"] = refData
# Configuring CMA-ES parameters
e["Solver"]["Type"] = "Optimizer/CMAES"
e["Solver"]["Population Size"] = 8
e["Solver"]["Termination Criteria"]["Max Generations"] = 100
# Configuring the problem's random distributions
e["Distributions"][0]["Name"] = "Uniform 0"
e["Distributions"][0]["Type"] = "Univariate/Uniform"
e["Distributions"][0]["Minimum"] = lowerBound
e["Distributions"][0]["Maximum"] = upperBound
e["Distributions"][1]["Name"] = "Uniform 1"
e["Distributions"][1]["Type"] = "Univariate/Uniform"
e["Distributions"][1]["Minimum"] = 0.0
e["Distributions"][1]["Maximum"] = 1.0
# Configuring the problem's variables
e["Variables"][0]["Name"] = "gammaC"
e["Variables"][0]["Prior Distribution"] = "Uniform 0"
e["Variables"][0]["Initial Value"] = 16000
e["Variables"][0]["Initial Standard Deviation"] = 7200
e["Variables"][1]["Name"] = "[Sigma]"
e["Variables"][1]["Prior Distribution"] = "Uniform 1"
e["Variables"][1]["Initial Value"] = 0.5
e["Variables"][1]["Initial Standard Deviation"] = 0.66
# General Settings
e["Console Output"]["Verbosity"] = "Detailed"
e["File Output"]["Path"] = resFolder
e["Store Sample Information"] = True
# Loading previous results, if they exist.
found = e.loadState(resFolder + '/latest')
# Setting Model after loading previous results to prevent bad function pointer
e["Problem"]["Computational Model"] = lambda sample: relaxModel(sample, refPoints, expName, expTend, ini_mesh_fname, korali.getMPIComm() )
# Configuring Linked (Distributed) Conduit
k = korali.Engine()
k["Conduit"]["Type"] = "Distributed"
k["Conduit"]["Ranks Per Worker"] = 2
k["Profiling"]["Detail"] = "Full"
k["Profiling"]["Path"] = profFile
k["Profiling"]["Frequency"] = 60
k.run(e)
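# Launch note (an assumption added for clarity, not produced by the original authors): with the
# "Distributed" conduit and two ranks per worker, this script is meant to be started through
# MPI/SLURM rather than plain `python3`, e.g. something along the lines of
#   srun -n <total_ranks> python3 ./run-cmaes.py --exp henon --lower 8000 --upper 32000
# where <total_ranks> must match Korali's worker layout (ranks per worker times the number of
# concurrent workers, plus the engine rank); check the Korali documentation for the exact rule.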
| 37.415094 | 155 | 0.697428 | 516 | 3,966 | 5.317829 | 0.370155 | 0.036079 | 0.030977 | 0.015306 | 0.063411 | 0.024052 | 0 | 0 | 0 | 0 | 0 | 0.019384 | 0.115482 | 3,966 | 105 | 156 | 37.771429 | 0.762828 | 0.108169 | 0 | 0.027027 | 0 | 0 | 0.443279 | 0.019002 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.067568 | 0 | 0.067568 | 0.121622 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d751cb0b1eb2563289f2c8d89c5152133a3ce8ba | 3,030 | py | Python | Python/pymd/md/core/neighbour_list.py | ryanlopezzzz/ABPTutorial | 923fa89f1959cd71b28ecf4628ecfbfce6a6206c | [
"MIT"
] | 8 | 2020-05-05T00:41:50.000Z | 2021-11-04T20:54:43.000Z | Python/pymd/md/core/neighbour_list.py | ryanlopezzzz/ABPTutorial | 923fa89f1959cd71b28ecf4628ecfbfce6a6206c | [
"MIT"
] | null | null | null | Python/pymd/md/core/neighbour_list.py | ryanlopezzzz/ABPTutorial | 923fa89f1959cd71b28ecf4628ecfbfce6a6206c | [
"MIT"
] | 5 | 2020-05-04T16:37:13.000Z | 2021-08-18T07:53:58.000Z | # Copyright 2020 Rastko Sknepnek, University of Dundee, r.skepnek@dundee.ac.uk
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions
# of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
# TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Class handling neighbour list
from .cell_list import CellList
from copy import copy
class NeighbourList:
"""
This class handles building and maintenance of the Verlet neighbour list.
Note: In this implementation performance is sacrificed for simplicity
"""
def __init__(self, sys, rcut, pad):
"""
Initialise the neighbour list object.
        Parameters
        ----------
sys : Particles
Simulation system
rcut : float
Cutoff distance for the neighbours
pad : float
Padding distance for the neighbour list
"""
self.sys = sys
self.rcut = rcut
self.pad = pad
self.cell_list = CellList(self.sys.box, self.rcut + self.pad)
def build(self):
"""
Build the neighbour list aided by the cell list.
"""
# Store current positions of all particles
self.old_pos = []
for p in self.sys.particles:
self.old_pos.append(copy(p.r))
# Set up the cell list
self.cell_list.wipe()
for p in self.sys.particles:
self.cell_list.add_particle(p)
# Build the list
self.neighbours = []
for p in self.sys.particles:
neighbours = []
for n in self.cell_list.get_neighbours(p):
pn = self.sys.particles[n]
if pn.id > p.id:
dr = pn.r - p.r
dr.apply_periodic(self.sys.box)
if dr.length() < self.rcut + self.pad:
neighbours.append(n)
self.neighbours.append(neighbours)
self.sys.has_nl = True
def needs_rebuild(self):
"""
Check if the neighbour list needs to be rebuilt.
Note
----
A rebuild is done if one of the particles has moved more than 0.5*pad
"""
for p in self.sys.particles:
dr = p.r - self.old_pos[p.id]
dr.apply_periodic(self.sys.box)
if dr.length() >= 0.5*self.pad:
return True
return False | 35.647059 | 114 | 0.673597 | 438 | 3,030 | 4.618721 | 0.399543 | 0.038062 | 0.039545 | 0.019773 | 0.082056 | 0.082056 | 0.060306 | 0.034602 | 0.034602 | 0 | 0 | 0.003515 | 0.248845 | 3,030 | 85 | 115 | 35.647059 | 0.885325 | 0.576568 | 0 | 0.176471 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.058824 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
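# Minimal usage sketch (illustrative; `system` is assumed to be the Particles-style object the
# rest of pymd provides, with the `box` and `particles` attributes used above).
def _example_neighbour_list_usage(system, rcut=2.5, pad=0.5):
    """Build a Verlet list once, then rebuild it only when particles have drifted too far."""
    nl = NeighbourList(system, rcut, pad)
    nl.build()
    # ... advance the simulation for a few steps elsewhere ...
    if nl.needs_rebuild():
        nl.build()
    return nl.neighbours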
d75544ac42bc8cc283cc3647259ef318ee8af049 | 1,760 | py | Python | osf/models/contributor.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | null | null | null | osf/models/contributor.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | null | null | null | osf/models/contributor.py | alexschiller/osf.io | 4122d4be152c6189142c2ebb19cfdee09c77035d | [
"Apache-2.0"
] | null | null | null | from django.db import models
from website.util.permissions import (
READ,
WRITE,
ADMIN,
)
class AbstractBaseContributor(models.Model):
read = models.BooleanField(default=False)
write = models.BooleanField(default=False)
admin = models.BooleanField(default=False)
visible = models.BooleanField(default=False)
user = models.ForeignKey('OSFUser')
def __repr__(self):
return ('<{self.__class__.__name__}(user={self.user}, '
'read={self.read}, write={self.write}, admin={self.admin}, '
'visible={self.visible}'
')>').format(self=self)
class Meta:
abstract = True
class Contributor(AbstractBaseContributor):
node = models.ForeignKey('AbstractNode')
class Meta:
unique_together = ('user', 'node')
# Make contributors orderable
# NOTE: Adds an _order column
order_with_respect_to = 'node'
class InstitutionalContributor(AbstractBaseContributor):
institution = models.ForeignKey('Institution')
class Meta:
unique_together = ('user', 'institution')
class RecentlyAddedContributor(models.Model):
user = models.ForeignKey('OSFUser') # the user who added the contributor
contributor = models.ForeignKey('OSFUser', related_name='recently_added_by') # the added contributor
date_added = models.DateTimeField(auto_now=True)
class Meta:
unique_together = ('user', 'contributor')
def get_contributor_permissions(contributor, as_list=True):
perm = []
if contributor.read:
perm.append(READ)
if contributor.write:
perm.append(WRITE)
if contributor.admin:
perm.append(ADMIN)
if as_list:
return perm
else:
return perm[-1]
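# Small illustrative helper (added for documentation purposes; `contributor` is assumed to be an
# instance of one of the contributor models above).
def example_permission_summary(contributor):
    all_perms = get_contributor_permissions(contributor)                 # e.g. [READ, WRITE]
    highest = get_contributor_permissions(contributor, as_list=False)    # e.g. WRITE
    return {'all': all_perms, 'highest': highest}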
| 29.333333 | 105 | 0.666477 | 184 | 1,760 | 6.222826 | 0.36413 | 0.069869 | 0.087336 | 0.104803 | 0.070742 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000731 | 0.222727 | 1,760 | 59 | 106 | 29.830508 | 0.836257 | 0.063636 | 0 | 0.130435 | 0 | 0 | 0.139988 | 0.04017 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.043478 | 0.021739 | 0.543478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7575d0b5e964c0cb0d93905c5da4e2ac214da5a | 1,636 | py | Python | backend/app/app/crud/crud_protocol.py | 0x174/conflux | 3c0bd83114e74871e56e797e2abb036067da323c | [
"MIT"
] | null | null | null | backend/app/app/crud/crud_protocol.py | 0x174/conflux | 3c0bd83114e74871e56e797e2abb036067da323c | [
"MIT"
] | null | null | null | backend/app/app/crud/crud_protocol.py | 0x174/conflux | 3c0bd83114e74871e56e797e2abb036067da323c | [
"MIT"
] | null | null | null | '''
--------------------------------------------------------------------------------
Description:
Roadmap:
Written by W.R. Jackson <wrjackso@bu.edu>, DAMP Lab 2020
--------------------------------------------------------------------------------
'''
from typing import List
from fastapi.encoders import jsonable_encoder
from sqlalchemy.orm import Session
from app.crud.base import CRUDBase
from app.models.protocols import Protocol
from app.schemas.protocols import ProtocolCreate, ProtocolUpdate
class CRUDProtocol(CRUDBase[Protocol, ProtocolCreate, ProtocolUpdate]):
def create_with_owner(
self,
db: Session,
*,
obj_in: ProtocolCreate,
creator_id: int
) -> Protocol:
'''
Args:
db:
obj_in:
creator_id:
Returns:
'''
obj_in_data = jsonable_encoder(obj_in)
db_obj = self.model(**obj_in_data, creator_id=creator_id)
db.add(db_obj)
db.commit()
db.refresh(db_obj)
return db_obj
def get_multi_by_owner(
self,
db: Session,
*,
creator_id: int,
skip: int = 0,
limit: int = 100,
) -> List[Protocol]:
'''
Args:
db:
creator_id:
skip:
limit:
Returns:
'''
return (
db.query(self.model)
.filter(Protocol.creator_id == creator_id)
.offset(skip)
.limit(limit)
.all()
)
protocol = CRUDProtocol(Protocol)
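# Usage sketch (comments only, since it needs a live SQLAlchemy Session and a validated
# ProtocolCreate payload; `db`, `protocol_in` and `current_user` are hypothetical names from a
# FastAPI route or service layer):
#
#   new_protocol = protocol.create_with_owner(db=db, obj_in=protocol_in, creator_id=current_user.id)
#   my_protocols = protocol.get_multi_by_owner(db=db, creator_id=current_user.id, skip=0, limit=20)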
| 22.108108 | 80 | 0.476161 | 150 | 1,636 | 5.02 | 0.433333 | 0.095618 | 0.029216 | 0.047809 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007477 | 0.345966 | 1,636 | 73 | 81 | 22.410959 | 0.696262 | 0.216993 | 0 | 0.166667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.166667 | 0 | 0.305556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d758898c2a3e96abdde96c69f5f8b5a8cc83ec78 | 5,621 | py | Python | code/FewShotLearningDataSet.py | danielt17/Triplet-loss-few-shot-learning | 473e800d3c2b8e33e11d90089468ee5ee18ba7d4 | [
"Apache-2.0"
] | 2 | 2022-01-05T14:02:45.000Z | 2022-02-19T17:18:43.000Z | code/FewShotLearningDataSet.py | danielt17/Triplet-loss-few-shot-learning | 473e800d3c2b8e33e11d90089468ee5ee18ba7d4 | [
"Apache-2.0"
] | null | null | null | code/FewShotLearningDataSet.py | danielt17/Triplet-loss-few-shot-learning | 473e800d3c2b8e33e11d90089468ee5ee18ba7d4 | [
"Apache-2.0"
] | 1 | 2021-07-04T12:23:55.000Z | 2021-07-04T12:23:55.000Z | # -*- coding: utf-8 -*-
"""
Created on Thu Jun 10 19:04:39 2021
@author: danie
"""
# %% Imports
import torch
import torchvision
from torchvision import transforms
import numpy as np
# %% Functions
def LoadDataFMnist(labels_out = [7,8,9]):
'''
Description:
Splits data into train set and support set
Inputs:
labels_out: labels in support set
Returns:
        Train_X: Train set inputs
        Train_Y: Train set outputs
        Test_X: Test set inputs
        Test_Y: Test set outputs
        SupportSet_X: Support set inputs
        SupportSet_Y: Support set outputs
'''
data = torchvision.datasets.FashionMNIST('../FashionMnist',download=True,transform=transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.4914,),(0.2023,))]))
X_full = data.data.numpy()
labels = data.targets.numpy()
indForTrainLs = []
for label in labels_out:
indForTrainLs.append((labels!=label))
indForTrain = np.ones((len(labels),),dtype = bool)
for ls in indForTrainLs:
indForTrain = indForTrain*ls
TrainTestSets_X = X_full[indForTrain]; TrainTestSets_Y = labels[indForTrain]
SupportSet_X = X_full[~indForTrain]; SupportSet_Y = labels[~indForTrain]
split_size = np.int64(len(TrainTestSets_X)*0.8)
Train_X = TrainTestSets_X[:split_size]; Train_Y = TrainTestSets_Y[:split_size]
Test_X = TrainTestSets_X[split_size:]; Test_Y = TrainTestSets_Y[split_size:]
return Train_X,Train_Y,Test_X,Test_Y,np.reshape(SupportSet_X,(SupportSet_X.shape[0],1,28,28)),SupportSet_Y
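# Shape note (derived from the code above, for orientation only): Train_X/Test_X are returned as
# (N, 28, 28) arrays split 80/20 over the seven retained classes, while the support set is already
# reshaped to (M, 1, 28, 28) so it can be passed straight to a convolutional embedding network.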
def CreateTriplets(X,Y,TripletSetSize=60000):
'''
Description:
Creates triplet PyTorch tensors
Inputs:
X: Input set
Y: Output set
TripletSetSize: output triplet sets size
Returns:
X_triplets: Inputs of triplet set
Y_triplets: Labels of triplet set
Index_triplets: Index of triplet images
'''
Y_triplets = []; Index_triplets = [];
anchor_array = np.zeros((TripletSetSize,1,28,28))
positive_array = np.zeros((TripletSetSize,1,28,28))
negative_array = np.zeros((TripletSetSize,1,28,28))
labels = np.unique(Y)
for ind in range(TripletSetSize):
anchor_label = np.random.choice(labels)
negative_label = np.random.choice(labels[labels!=anchor_label])
positives = np.where(Y==anchor_label)[0]
negatives = np.where(Y==negative_label)[0]
anchor_ind = np.random.choice(positives)
positive_ind = np.random.choice(positives[positives!=anchor_ind])
negative_ind = np.random.choice(negatives)
anchor_array[ind] = X[anchor_ind:anchor_ind+1]
positive_array[ind] = X[positive_ind:positive_ind+1]
negative_array[ind] = X[negative_ind:negative_ind+1]
Y_triplets.append((anchor_label,anchor_label,negative_label))
Index_triplets.append((anchor_ind,positive_ind,negative_ind))
X_triplets = [torch.from_numpy(np.float32(anchor_array)),torch.from_numpy(np.float32(positive_array)),torch.from_numpy(np.float32(negative_array))]
return X_triplets, Y_triplets, Index_triplets
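# Minimal consumption sketch (illustrative; `net` is an embedding network assumed to map
# (B, 1, 28, 28) images to embedding vectors and is not defined in this file):
def example_triplet_loss(net, X_triplets, batch_size=128):
    anchor, positive, negative = (t[:batch_size] for t in X_triplets)
    criterion = torch.nn.TripletMarginLoss(margin=1.0)
    return criterion(net(anchor), net(positive), net(negative))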
def SupportSetAndQuery(SupportSet_X,SupportSet_Y,labels_out,k_way=2,n_shot=3):
'''
Description:
Creates support set and query with respect to given parameters
Inputs:
SupportSet_X: Support set inputs
SupportSet_Y: Support set outputs
labels_out: labels in support set
k_way: number of classes in actual support set
n_shot: number of shots in each class in support set
Returns:
Query: Query image
QueryLabel: Query ground truth label
classes: classes in actual support et
SupportSet: actual support set of size k*n
'''
labels_out = np.asarray(labels_out)
QueryLabel = np.random.choice(labels_out)
QueryInd = np.random.choice(np.where(SupportSet_Y==QueryLabel)[0])
Query = SupportSet_X[QueryInd:QueryInd+1]
Query = np.repeat(Query,n_shot,axis = 0)
classes = []
otherLabels = np.where(labels_out!=QueryLabel)[0]
while len(classes) < (k_way - 1):
labelNew = np.random.choice(labels_out[otherLabels])
if labelNew not in classes:
classes.append(labelNew)
classes.append(QueryLabel)
SupportSet = np.zeros((len(classes),n_shot,1,28,28))
for ind,label in enumerate(classes):
cur_label_examples = np.where(SupportSet_Y==label)[0]
inds_cur_label = np.random.choice(cur_label_examples, size=n_shot, replace=False)
SupportSet[ind] = SupportSet_X[inds_cur_label]
SupportSet = list(SupportSet)
for i in range(len(SupportSet)):
SupportSet[i] = torch.from_numpy(np.float32(SupportSet[i]))
return torch.from_numpy(np.float32(Query)),QueryLabel,classes,SupportSet
# %% Main
if __name__ == '__main__':
labels_out = [7,8,9]
TripletSetSize = 60000
TripletTestSize = np.int64(60000*0.2)
k_way=2
n_shot=50
Train_X,Train_Y,Test_X,Test_Y,SupportSet_X,SupportSet_Y = LoadDataFMnist(labels_out = labels_out)
Train_X_triplets, Train_Y_triplets, Train_Index_triplets = CreateTriplets(Train_X,Train_Y,TripletSetSize=TripletSetSize)
Test_X_triplets, Test_Y_triplets, Test_Index_triplets = CreateTriplets(Train_X,Train_Y,TripletSetSize=TripletTestSize)
Query,QueryLabel,classes,SupportSet = SupportSetAndQuery(SupportSet_X,SupportSet_Y,labels_out,k_way=k_way,n_shot=n_shot)
| 39.307692 | 174 | 0.677282 | 728 | 5,621 | 5.01511 | 0.208791 | 0.034511 | 0.034511 | 0.021912 | 0.248699 | 0.152013 | 0.121884 | 0.096412 | 0.055327 | 0 | 0 | 0.023771 | 0.221669 | 5,621 | 143 | 175 | 39.307692 | 0.810743 | 0.204946 | 0 | 0 | 0 | 0 | 0.005545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041096 | false | 0 | 0.054795 | 0 | 0.136986 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d75bd8a36a75ddc60681e4615206fef2931c4088 | 25,265 | py | Python | mql5_zmq_backtrader/mt5store.py | AwesomeTrading/mql5_zmq_backtrader | 126ff52ce88d5960998abfb9b83afdfb15d54766 | [
"MIT"
] | 9 | 2020-03-15T16:01:15.000Z | 2021-11-11T12:49:42.000Z | mql5_zmq_backtrader/mt5store.py | AwesomeTrading/mql5_zmq_backtrader | 126ff52ce88d5960998abfb9b83afdfb15d54766 | [
"MIT"
] | null | null | null | mql5_zmq_backtrader/mt5store.py | AwesomeTrading/mql5_zmq_backtrader | 126ff52ce88d5960998abfb9b83afdfb15d54766 | [
"MIT"
] | 9 | 2020-05-10T15:56:51.000Z | 2022-01-25T23:54:12.000Z | from __future__ import (absolute_import, division, print_function,
unicode_literals)
import zmq
import collections
from datetime import datetime
import threading
from mql5_zmq_backtrader.adapter import PositionAdapter, OrderAdapter, BalanceAdapter
import backtrader as bt
from backtrader.metabase import MetaParams
from backtrader.utils.py3 import queue, with_metaclass
import sys
class MTraderError(Exception):
def __init__(self, *args, **kwargs):
default = 'Meta Trader 5 ERROR'
if not (args or kwargs):
args = (default)
super(MTraderError, self).__init__(*args, **kwargs)
class ServerConfigError(MTraderError):
def __init__(self, *args, **kwargs):
super(self.__class__, self).__init__(*args, **kwargs)
class ServerDataError(MTraderError):
def __init__(self, *args, **kwargs):
super(self.__class__, self).__init__(*args, **kwargs)
class TimeFrameError(MTraderError):
def __init__(self, *args, **kwargs):
super(self.__class__, self).__init__(*args, **kwargs)
class StreamError(MTraderError):
def __init__(self, *args, **kwargs):
super(self.__class__, self).__init__(*args, **kwargs)
class MTraderAPI:
"""
This class implements Python side for MQL5 JSON API
See https://github.com/khramkov/MQL5-JSON-API for docs
"""
# TODO: unify error handling
def __init__(self, host=None):
self.HOST = host or 'localhost'
self.SYS_PORT = 15555 # REP/REQ port
self.DATA_PORT = 15556 # PUSH/PULL port
self.LIVE_PORT = 15557 # PUSH/PULL port
self.EVENTS_PORT = 15558 # PUSH/PULL port
# ZeroMQ timeout in miliseconds
self.SYS_TIMEOUT = 1000
self.DATA_TIMEOUT = 10000
self.REQUEST_RETRIES = 3 # Lazy Pirate implementation
self.sequence = 0 # Lazy Pirate request sequence
# initialise ZMQ context
self.context = zmq.Context()
# connect to server sockets
try:
self.sys_socket = self.context.socket(zmq.REQ)
# set port timeout
self.sys_socket.RCVTIMEO = self.SYS_TIMEOUT
self.sys_socket.connect(
'tcp://{}:{}'.format(self.HOST, self.SYS_PORT))
# Lazy Pirate implementation
self.poll = zmq.Poller()
self.poll.register(self.sys_socket, zmq.POLLIN)
self.data_socket = self.context.socket(zmq.PULL)
# set port timeout
self.data_socket.RCVTIMEO = self.DATA_TIMEOUT
self.data_socket.connect(
'tcp://{}:{}'.format(self.HOST, self.DATA_PORT))
except zmq.ZMQError:
raise zmq.ZMQBindError("E: Binding ports ERROR")
def _send_request(self, data: dict) -> None:
"""Send request to server via ZeroMQ System socket
Lazy Pirate implementation.
"""
        # Caller's name (debug aid)
print("I: Caller 2 ", sys._getframe(2).f_code.co_name)
try:
retries_left = self.REQUEST_RETRIES
while retries_left:
self.sequence += 1
request = str(self.sequence).encode()
print("I: Sending (%s)" % self.sequence)
print("data ", data)
self.sys_socket.send_json(data)
expect_reply = True
while expect_reply:
socks = dict(self.poll.poll(self.SYS_TIMEOUT))
if socks.get(self.sys_socket) == zmq.POLLIN:
msg = self.sys_socket.recv_string()
if not msg:
break
# terminal received the request
if str(msg) == 'OK':
print("I: Server replied %s" % msg)
retries_left = 0
expect_reply = False
else:
print("E: Malformed reply from server: %s" % msg)
else:
print("W: No response from server, retrying…")
# Socket is confused. Close and remove it.
self.sys_socket.setsockopt(zmq.LINGER, 0)
self.sys_socket.close()
self.poll.unregister(self.sys_socket)
retries_left -= 1
if retries_left == 0:
print("E: Server seems to be offline, abandoning")
break
print("I: Reconnecting and resending (%s)" %
self.sequence)
# Create new connection
self.sys_socket = self.context.socket(zmq.REQ)
self.sys_socket.RCVTIMEO = self.SYS_TIMEOUT
self.sys_socket.connect(
'tcp://{}:{}'.format(self.HOST, self.SYS_PORT))
self.poll.register(self.sys_socket, zmq.POLLIN)
self.sys_socket.send_json(data)
except zmq.ZMQError:
raise zmq.NotDone("E: Sending request ERROR")
    def _pull_reply(self):
        """Get reply from server via Data socket with timeout."""
        try:
            msg = self.data_socket.recv_json()
        except zmq.Again:
            # Receive timed out before any data arrived.
            return None
        except zmq.ZMQError as e:
            print("W: Unexpected ZMQ error while receiving a reply: {}".format(e))
            return None
        return msg
def live_socket(self, context=None):
"""Connect to socket in a ZMQ context"""
try:
context = context or zmq.Context.instance()
socket = context.socket(zmq.PULL)
socket.connect('tcp://{}:{}'.format(self.HOST, self.LIVE_PORT))
except zmq.ZMQError:
raise zmq.ZMQBindError("E: Live port connection ERROR")
return socket
def streaming_socket(self, context=None):
"""Connect to socket in a ZMQ context"""
try:
context = context or zmq.Context.instance()
socket = context.socket(zmq.PULL)
socket.connect('tcp://{}:{}'.format(self.HOST, self.EVENTS_PORT))
except zmq.ZMQError:
raise zmq.ZMQBindError("E: Data port connection ERROR")
return socket
def construct_and_send(self, **kwargs) -> dict:
"""Construct a request dictionary from default and send it to server"""
# default dictionary
request = {
"action": None,
"actionType": None,
"symbol": None,
"chartTF": None,
"fromDate": None,
"toDate": None,
"id": None,
"magic": 1234,
"volume": None,
"price": None,
"stoploss": None,
"takeprofit": None,
"expiration": None,
"deviation": None,
"comment": None
}
# update dict values if exist
for key, value in kwargs.items():
if key in request:
request[key] = value
else:
raise KeyError('E: Unknown key in **kwargs ERROR')
# send dict to server
self._send_request(request)
# return server reply
return self._pull_reply()
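# Illustrative helper (not called anywhere in this package): shows the request shape that
# construct_and_send() expects. It assumes a MetaTrader 5 terminal running the MQL5-JSON-API
# expert advisor is reachable on `host`; the symbol and timeframe below are examples only.
def _example_history_request(host="localhost"):
    api = MTraderAPI(host)
    return api.construct_and_send(action="HISTORY", actionType="DATA",
                                  symbol="EURUSD", chartTF="M5")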
class MetaSingleton(MetaParams):
"""Metaclass to make a metaclassed class a singleton"""
def __init__(cls, name, bases, dct):
super(MetaSingleton, cls).__init__(name, bases, dct)
cls._singleton = None
def __call__(cls, *args, **kwargs):
if cls._singleton is None:
cls._singleton = (
super(MetaSingleton, cls).__call__(*args, **kwargs))
return cls._singleton
class MTraderStore(with_metaclass(MetaSingleton, object)):
"""
Singleton class wrapping to control the connections to MetaTrader.
Balance update occurs at the beginning and after each
transaction registered by '_t_streaming_events'.
"""
# TODO: implement stop_limit
# TODO: Check position ticket
BrokerCls = None # broker class will autoregister
DataCls = None # data class will auto register
params = ()
# The Unix epoch (or Unix time or POSIX time or Unix timestamp)
_DTEPOCH = datetime(1970, 1, 1)
# MTrader supported granularities
_GRANULARITIES = {
# (bt.TimeFrame.Ticks, 1): 'Ticks',
(bt.TimeFrame.Minutes, 1): 'M1',
(bt.TimeFrame.Minutes, 2): 'M2',
(bt.TimeFrame.Minutes, 3): 'M3',
(bt.TimeFrame.Minutes, 4): 'M4',
(bt.TimeFrame.Minutes, 5): 'M5',
(bt.TimeFrame.Minutes, 6): 'M6',
(bt.TimeFrame.Minutes, 10): 'M10',
(bt.TimeFrame.Minutes, 12): 'M12',
(bt.TimeFrame.Minutes, 15): 'M15',
(bt.TimeFrame.Minutes, 20): 'M20',
(bt.TimeFrame.Minutes, 30): 'M30',
(bt.TimeFrame.Minutes, 60): 'H1',
(bt.TimeFrame.Minutes, 120): 'H2',
(bt.TimeFrame.Minutes, 180): 'H3',
(bt.TimeFrame.Minutes, 240): 'H4',
(bt.TimeFrame.Minutes, 360): 'H6',
(bt.TimeFrame.Minutes, 480): 'H8',
(bt.TimeFrame.Minutes, 720): 'H12',
(bt.TimeFrame.Days, 1): 'D1',
(bt.TimeFrame.Weeks, 1): 'W1',
(bt.TimeFrame.Months, 1): 'MN1',
}
# Order type matching with MetaTrader 5
_ORDEREXECS = {
# Market Buy order
(bt.Order.Market, 'buy'): 'ORDER_TYPE_BUY',
# Market Sell order
(bt.Order.Market, 'sell'): 'ORDER_TYPE_SELL',
# Buy Limit pending order
(bt.Order.Limit, 'buy'): 'ORDER_TYPE_BUY_LIMIT',
# Sell Limit pending order
(bt.Order.Limit, 'sell'): 'ORDER_TYPE_SELL_LIMIT',
# Buy Stop pending order
(bt.Order.Stop, 'buy'): 'ORDER_TYPE_BUY_STOP',
# Sell Stop pending order
(bt.Order.Stop, 'sell'): 'ORDER_TYPE_SELL_STOP',
# Upon reaching the order price, a pending Buy Limit
(bt.Order.StopLimit, 'buy'): 'ORDER_TYPE_BUY_STOP_LIMIT',
# order is placed at the StopLimit price
# Upon reaching the order price, a pending Sell Limit
(bt.Order.StopLimit, 'sell'): 'ORDER_TYPE_SELL_STOP_LIMIT',
# order is placed at the StopLimit price
}
@classmethod
def getdata(cls, *args, **kwargs):
"""Returns `DataCls` with args, kwargs"""
return cls.DataCls(*args, **kwargs)
@classmethod
def getbroker(cls, *args, **kwargs):
"""Returns broker with *args, **kwargs from registered `BrokerCls`"""
return cls.BrokerCls(*args, **kwargs)
def __init__(self, host='localhost'):
super(MTraderStore, self).__init__()
self.notifs = collections.deque() # store notifications for cerebro
self._env = None # reference to cerebro for general notifications
self.broker = None # broker instance
self.datas = list() # datas that have registered over start
self._orders = collections.OrderedDict() # map order.ref to oid
self._ordersrev = collections.OrderedDict() # map oid to order.ref
self._orders_type = dict() # keeps order types
self.oapi = MTraderAPI(host)
self._cash = 0.0
self._value = 0.0
self.q_livedata = queue.Queue()
self._cancel_flag = False
self.debug = True
def start(self, data=None, broker=None):
# Datas require some processing to kickstart data reception
if data is None and broker is None:
self.cash = None
return
if data is not None:
self._env = data._env
# For datas simulate a queue with None to kickstart co
self.datas.append(data)
if self.broker is not None:
self.broker.data_started(data)
elif broker is not None:
self.broker = broker
self.broker_threads()
self.streaming_events()
def stop(self):
# signal end of thread
if self.broker is not None:
self.q_ordercreate.put(None)
self.q_orderclose.put(None)
def put_notification(self, msg, *args, **kwargs):
self.notifs.append((msg, args, kwargs))
def get_notifications(self):
"""Return the pending "store" notifications"""
self.notifs.append(None) # put a mark / threads could still append
return [x for x in iter(self.notifs.popleft, None)]
def get_positions(self):
positions = self.oapi.construct_and_send(action="POSITIONS")
# Error handling
# if positions["error"]:
# raise ServerDataError(positions)
pos_list = positions.get('positions', [])
if self.debug:
print('Open positions: {}.'.format(pos_list))
return [PositionAdapter(o) for o in pos_list]
def get_granularity(self, timeframe, compression):
granularity = self._GRANULARITIES.get((timeframe, compression), None)
if granularity is None:
raise ValueError("W: Metatrader 5 doesn't support frame %s with compression %s" %
(bt.TimeFrame.getname(timeframe), compression))
return granularity
def get_cash(self):
return self._cash
def get_value(self):
return self._value
    def get_balance(self):
        try:
            bal = self.oapi.construct_and_send(action="BALANCE")
        except Exception as e:
            self.put_notification(e)
            return
        # TODO: error handling
        # if bal['error']:
        #     self.put_notification(bal)
        try:
            self._cash = float(bal["balance"])
            self._value = float(bal["equity"])
        except (KeyError, TypeError) as e:
            self.put_notification(e)
def streaming_events(self):
t = threading.Thread(target=self._t_livedata, daemon=True)
t.start()
t = threading.Thread(target=self._t_streaming_events, daemon=True)
t.start()
def _t_livedata(self):
# create socket connection for the Thread
socket = self.oapi.live_socket()
while True:
try:
last_candle = socket.recv_json()
except zmq.ZMQError:
raise zmq.NotDone("Live data ERROR")
self.q_livedata.put(last_candle)
def _t_streaming_events(self):
# create socket connection for the Thread
socket = self.oapi.streaming_socket()
while True:
try:
transaction = socket.recv_json()
except zmq.ZMQError:
raise zmq.NotDone("E: Streaming data ERROR")
self._transaction(transaction)
def broker_threads(self):
self.q_ordercreate = queue.Queue()
t = threading.Thread(target=self._t_order_create, daemon=True)
t.start()
self.q_orderclose = queue.Queue()
t = threading.Thread(target=self._t_order_cancel, daemon=True)
t.start()
def order_create(self, order, stopside=None, takeside=None, **kwargs):
"""Creates an order"""
okwargs = dict()
okwargs['action'] = 'TRADE'
side = 'buy' if order.isbuy() else 'sell'
order_type = self._ORDEREXECS.get((order.exectype, side), None)
if order_type is None:
raise ValueError("W: Wrong order type: %s or side: %s" %
(order.exectype, side))
okwargs['actionType'] = order_type
okwargs['symbol'] = order.data._dataname
okwargs['volume'] = abs(order.created.size)
if order.exectype != bt.Order.Market:
okwargs['price'] = format(order.created.price)
if order.valid is None:
okwargs['expiration'] = 0 # good to cancel
else:
okwargs['expiration'] = order.valid # good to date
if order.exectype == bt.Order.StopLimit:
okwargs['price'] = order.created.pricelimit
# TODO: implement StopTrail
# if order.exectype == bt.Order.StopTrail:
# okwargs['distance'] = order.trailamount
okwargs['comment'] = dict()
if stopside is not None and stopside.price is not None:
okwargs['stoploss'] = stopside.price
okwargs['comment']['stopside'] = stopside.ref
if takeside is not None and takeside.price is not None:
okwargs['takeprofit'] = takeside.price
okwargs['comment']['takeside'] = takeside.ref
# set store backtrader order ref as MT5 order magic number
        try:
            okwargs['magic'] = order.info["magic"]  # the magic number must be immutable
        except KeyError:
            pass  # no magic supplied with the order; fall back to the default in construct_and_send
okwargs.update(**kwargs) # anything from the user
self.q_ordercreate.put((order.ref, okwargs,))
# notify orders of being submitted
self.broker._submit(order.ref)
if stopside is not None and stopside.price is not None:
self.broker._submit(stopside.ref)
if takeside is not None and takeside.price is not None:
self.broker._submit(takeside.ref)
return order
def _t_order_create(self):
while True:
msg = self.q_ordercreate.get()
if msg is None:
break
oref, okwargs = msg
try:
o = self.oapi.construct_and_send(**okwargs)
except Exception as e:
self.put_notification(e)
self.broker._reject(oref)
return
if self.debug:
print(o)
if o['error']:
self.put_notification(o['description'])
self.broker._reject(oref)
return
else:
oid = o['order']
self._orders[oref] = oid
self.broker._submit(oref)
# keeps orders types
self._orders_type[oref] = okwargs['actionType']
# maps ids to backtrader order
self._ordersrev[oid] = oref
def order_cancel(self, order):
self.q_orderclose.put(order.ref)
return order
def _t_order_cancel(self):
while True:
oref = self.q_orderclose.get()
if oref is None:
break
oid = self._orders.get(oref, None)
if oid is None:
continue # the order is no longer there
# get symbol name
order = self.broker.orders[oref]
symbol = order.data._dataname
# get order type
order_type = self._orders_type.get(oref, None)
try:
if order_type in ['ORDER_TYPE_BUY', 'ORDER_TYPE_SELL']:
self.close_position(oid, symbol)
else:
self.cancel_order(oid, symbol)
except Exception as e:
self.put_notification(
"Order not cancelled: {}, {}".format(oid, e))
continue
self._cancel_flag = True
self.broker._cancel(oref)
def candles(self, dataname, dtbegin, dtend, timeframe, compression, include_first=False):
tf = self.get_granularity(timeframe, compression)
begin = end = None
if dtbegin:
begin = int((dtbegin - self._DTEPOCH).total_seconds())
if dtend:
end = int((dtbegin - self._DTEPOCH).total_seconds())
if self.debug:
print('Fetching: {}, Timeframe: {}, Fromdate: {}'.format(
dataname, tf, dtbegin))
data = self.oapi.construct_and_send(action="HISTORY", actionType="DATA", symbol=dataname,
chartTF=tf, fromDate=begin, toDate=end)
candles = data['data']
# Remove last unclosed candle
if not include_first:
            try:
                del candles[-1]
            except (IndexError, TypeError):
                # No candles were returned, so there is nothing to trim.
                pass
q = queue.Queue()
for c in candles:
q.put(c)
q.put({})
return q
'''ram
def config_server(self, symbol: str, timeframe: str) -> None:
"""Set server terminal symbol and time frame"""
conf = self.oapi.construct_and_send(action="CONFIG", symbol=symbol, chartTF=timeframe)
# TODO Error
# Error handling
if conf["error"]:
print(conf)
if conf["description"] == "Wrong symbol dosn't exist":
raise ServerConfigError("Symbol dosn't exist")
self.put_notification(conf["description"])
'''
def check_account(self) -> None:
"""Get MetaTrader 5 account settings"""
        # Caller's name (debug aid)
print("I: Caller 3 ", sys._getframe(2).f_code.co_name)
conf = self.oapi.construct_and_send(action="ACCOUNT")
# Error handling
if conf["error"]:
raise ServerDataError(conf)
for key, value in conf.items():
print(key, value, sep=' - ')
def close_position(self, oid, symbol):
if self.debug:
print('Closing position: {}, on symbol: {}'.format(oid, symbol))
conf = self.oapi.construct_and_send(
action="TRADE", actionType='POSITION_CLOSE_ID', symbol=symbol, id=oid)
print(conf)
# Error handling
if conf["error"]:
raise ServerDataError(conf)
def cancel_order(self, oid, symbol):
if self.debug:
print('Cancelling order: {}, on symbol: {}'.format(oid, symbol))
conf = self.oapi.construct_and_send(
action="TRADE", actionType='ORDER_CANCEL', symbol=symbol, id=oid)
print(conf)
# Error handling
if conf["error"]:
raise ServerDataError(conf)
def _transaction(self, trans):
# Invoked from Streaming Events. May actually receive an event for an
# oid which has not yet been returned after creating an order. Hence
# store if not yet seen, else forward to processer
oid = oref = None
try:
request, reply = trans.values()
except KeyError:
raise KeyError(trans)
# Update balance after transaction
# self.get_balance()
if self.debug:
print(request, reply, sep='\n')
if request['action'] == 'TRADE_ACTION_DEAL':
# get order id (matches transaction id)
oid = request['order']
elif request['action'] == 'TRADE_ACTION_PENDING':
oid = request['order']
elif request['action'] == 'TRADE_ACTION_SLTP':
pass
elif request['action'] == 'TRADE_ACTION_MODIFY':
pass
elif request['action'] == 'TRADE_ACTION_REMOVE':
pass
elif request['action'] == 'TRADE_ACTION_CLOSE_BY':
pass
else:
return
# try:
# oref = self._ordersrev.pop(oid)
# except KeyError:
# raise KeyError(oid)
if oid in self._orders.values():
# when an order id exists process transaction
self._process_transaction(oid, request, reply)
else:
# external order created this transaction
if self._cancel_flag and reply['result'] == 'TRADE_RETCODE_DONE':
self._cancel_flag = False
size = float(reply['volume'])
price = float(reply['price'])
if request['type'].endswith('_SELL'):
size = -size
for data in self.datas:
if data._name == request['symbol']:
self.broker._fill_external(data, size, price)
break
def _process_transaction(self, oid, request, reply):
try:
# get a reference to a backtrader order based on the order id / trade id
oref = self._ordersrev[oid]
except KeyError:
return
if request['action'] == 'TRADE_ACTION_PENDING':
pass
if reply['result'] == 'TRADE_RETCODE_DONE':
size = float(reply['volume'])
price = float(reply['price'])
if request['type'].endswith('_SELL'):
size = -size
self.broker._fill(oref, size, price, reason=request['type'])
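# Typical wiring sketch (comments only; getdata()/getbroker() work only once the feed and broker
# modules of this package have auto-registered DataCls/BrokerCls, and the host below is made up):
#
#   store = MTraderStore(host='192.168.1.10')
#   cerebro.setbroker(store.getbroker())
#   cerebro.adddata(store.getdata(dataname='EURUSD',
#                                 timeframe=bt.TimeFrame.Minutes, compression=5))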
| 35.286313 | 121 | 0.548348 | 2,726 | 25,265 | 4.946442 | 0.180851 | 0.018763 | 0.024028 | 0.011866 | 0.287007 | 0.246514 | 0.206244 | 0.165678 | 0.13542 | 0.110056 | 0 | 0.008109 | 0.350802 | 25,265 | 715 | 122 | 35.335664 | 0.813803 | 0.1415 | 0 | 0.280255 | 0 | 0 | 0.086246 | 0.004591 | 0 | 0 | 0 | 0.004196 | 0 | 1 | 0.082803 | false | 0.014862 | 0.021231 | 0.004246 | 0.178344 | 0.042463 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d75c035ba51e8b697d8e9ae35d116dec8b0a69ab | 3,892 | py | Python | pororo/tasks/sentiment_analysis.py | jayten42/pororo | 0b02e6a633b9a32ec4241b8ed96745e6592db317 | [
"Apache-2.0"
] | 1,137 | 2021-02-02T02:09:06.000Z | 2022-03-29T03:10:40.000Z | pororo/tasks/sentiment_analysis.py | jayten42/pororo | 0b02e6a633b9a32ec4241b8ed96745e6592db317 | [
"Apache-2.0"
] | 57 | 2021-02-02T03:29:54.000Z | 2022-03-31T16:20:00.000Z | pororo/tasks/sentiment_analysis.py | jayten42/pororo | 0b02e6a633b9a32ec4241b8ed96745e6592db317 | [
"Apache-2.0"
] | 216 | 2021-02-02T02:49:02.000Z | 2022-03-28T01:19:58.000Z | """Sentiment Analysis related modeling class"""
from typing import Optional
from pororo.tasks.utils.base import PororoFactoryBase, PororoSimpleBase
class PororoSentimentFactory(PororoFactoryBase):
"""
Classification based sentiment analysis using Review Corpus
Korean (`brainbert.base.ko.shopping`)
- dataset: Shopping review corpus
- metric: Accuracy (95.00)
- ref: https://github.com/bab2min/corpus/tree/master/sentiment
Korean (`brainbert.base.ko.nsmc`)
- dataset: Naver sentiment movie corpus
- metric: Accuracy (90.84)
- ref: https://github.com/e9t/nsmc
Japanese (`jaberta.base.ja.sentiment`)
- data: Internal data
- metric: Accuracy (96.29)
Examples:
>>> sa = Pororo(task="sentiment", model="brainbert.base.ko.nsmc", lang="ko")
>>> sa("배송이 버트 학습시키는 것 만큼 느리네요")
'Negative'
>>> sa("배송이 경량화되었는지 빠르네요")
'Positive'
>>> sa = Pororo(task="sentiment", lang="ja")
>>> sa("日が暑くもイライラか。") # 날이 더워서 너무 짜증나요.
'Negative'
>>> sa('日が良く散歩に行きたいです。') # 날이 좋아서 산책을 가고 싶어요.
'Positive'
>>> sa = Pororo(task="sentiment", model="brainbert.base.ko.shopping", lang="ko")
>>> sa("꽤 맘에 들었어요. 겉에서 봤을땐 허름?했는데 맛도 있고, 괜찮아요")
'Positive'
>>> sa("예약하고 가세요 대기줄이 깁니다 훠궈는 하이디라오가 비싼만큼 만족도가 제일 높아요")
'Negative'
>>> sa("이걸 산 내가 레전드", show_probs=True)
{'negative': 0.7525266408920288, 'positive': 0.2474733293056488}
"""
def __init__(self, task: str, lang: str, model: Optional[str]):
super().__init__(task, lang, model)
@staticmethod
def get_available_langs():
return ["ko", "ja"]
@staticmethod
def get_available_models():
return {
"ko": [
"brainbert.base.ko.shopping",
"brainbert.base.ko.nsmc",
],
"ja": ["jaberta.base.ja.sentiment"],
}
def load(self, device: str):
"""
Load user-selected task-specific model
Args:
device (str): device information
Returns:
object: User-selected task-specific model
"""
if "brainbert" in self.config.n_model:
from pororo.models.brainbert import BrainRobertaModel
model = (BrainRobertaModel.load_model(
f"bert/{self.config.n_model}",
self.config.lang,
).eval().to(device))
return PororoBertSentiment(model, self.config)
if "jaberta" in self.config.n_model:
from pororo.models.brainbert import JabertaModel
model = (JabertaModel.load_model(
f"bert/{self.config.n_model}",
self.config.lang,
).eval().to(device))
return PororoBertSentiment(model, self.config)
class PororoBertSentiment(PororoSimpleBase):
def __init__(self, model, config):
super().__init__(config)
self._model = model
self._label_fn = {
"0": "negative",
"1": "positive",
"negative": "negative",
"positive": "positive",
}
def predict(self, sent: str, **kwargs) -> str:
"""
Conduct sentiment analysis
Args:
sent: (str) sentence to be sentiment analyzed
show_probs: (bool) whether to show probability score
Returns:
str: predicted sentence label - `negative` or `positive`
"""
show_probs = kwargs.get("show_probs", False)
res = self._model.predict_output(sent, show_probs=show_probs)
if show_probs:
probs = {self._label_fn[r]: res[r] for r in res}
return probs
else:
if self.config.lang == "ko":
return self._label_fn[res].title()
return res.title()
| 29.709924 | 88 | 0.567318 | 417 | 3,892 | 5.194245 | 0.381295 | 0.041551 | 0.041551 | 0.029548 | 0.215605 | 0.171745 | 0.171745 | 0.171745 | 0.133887 | 0.133887 | 0 | 0.018574 | 0.308325 | 3,892 | 130 | 89 | 29.938462 | 0.786033 | 0.405961 | 0 | 0.192308 | 0 | 0 | 0.102726 | 0.060857 | 0 | 0 | 0 | 0 | 0 | 1 | 0.115385 | false | 0 | 0.076923 | 0.038462 | 0.365385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d75ca43fbf6d64624319d49c64223c62c1d60b6b | 6,102 | py | Python | smashbenchmarking/parsers/vcfwriter.py | amplab/smash | 0ff627a5d9d74561ca0c3fd7e860ab2d71784216 | [
"BSD-2-Clause"
] | 26 | 2015-01-30T02:18:25.000Z | 2021-02-04T17:38:16.000Z | smashbenchmarking/parsers/vcfwriter.py | amplab/smash | 0ff627a5d9d74561ca0c3fd7e860ab2d71784216 | [
"BSD-2-Clause"
] | null | null | null | smashbenchmarking/parsers/vcfwriter.py | amplab/smash | 0ff627a5d9d74561ca0c3fd7e860ab2d71784216 | [
"BSD-2-Clause"
] | 8 | 2015-01-05T08:25:35.000Z | 2018-08-06T08:02:47.000Z | #Copyright (c) 2013, Regents of the University of California
#All rights reserved.
#
#Redistribution and use in source and binary forms, with or without
#modification, are permitted provided that the following conditions are met:
#
#1. Redistributions of source code must retain the above copyright notice,
#this list of conditions and the following disclaimer.
#
#2. Redistributions in binary form must reproduce the above copyright notice,
#this list of conditions and the following disclaimer in the documentation
#and/or other materials provided with the distribution.
#
#THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
#AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
#IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
#DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
#FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
#DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
#SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
#CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
#OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
#OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Write a VCF file.
Intended application: benchmarking.
"""
from __future__ import print_function
import genome
import util
_anon_header = """##fileformat=VCFv4.0
##source=VCFWriter
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT """
def _must_prepend(ref, alts):
"""Return whether the preceding reference base must be prepended.
Helper for satisfying the VCF spec.
"""
alleles = alts + [ref]
if any(not allele for allele in alleles):
return True
snp = all(len(allele) == 1 for allele in alleles)
return not snp and any(allele[0] != ref[0] for allele in alts)
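# Worked examples for the rule above (added for documentation; safe to call, has no side effects).
def _must_prepend_examples():
    assert _must_prepend('A', ['T']) is False      # plain SNP: no anchor base needed
    assert _must_prepend('A', ['AGG']) is False    # insertion already anchored on the ref base
    assert _must_prepend('AC', ['']) is True       # deletion leaves an empty ALT allele
    assert _must_prepend('AC', ['TG']) is True     # multi-base alleles with different first bases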
class VCFWriter:
"""VCF writer for a particular person and reference genome.
Coordinates are "space-counted, zero-start" unless otherwise specified.
<http://alternateallele.blogspot.com/2012/03/genome-coordinate-conventions.html>
(Note that the VCF format itself is "base-counted, one-start".)
"""
def __init__(self, reference_fasta, person, output,header=_anon_header):
"""Given a reference, a person, and a file-like object for output.
Ideally the reference would be encoded in the FASTA.
"""
self._ref_genome = genome.Genome(reference_fasta)
self._output = output
print(header + person, file=self._output)
self._person = person
def write_record(self, CHROM, POS, ID, REF, ALT, gtype,INFO='.'):
"""Write a fully specified VCF record.
WARNING: 'REF' isn't checked against the reference genome.
Do nothing if 'REF' contains characters outside ACGT (notably N).
Return whether anything was written.
"""
write = util.is_proper_strand(REF)
if write:
QUAL = 20 # Default 1/100 error probability.
FILTER = 'PASS'
FORMAT = 'GT'
print(CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO, FORMAT, gtype,
sep='\t', file=self._output)
return write
def write_deletion(self, CHROM, start, end, ID, gtype):
"""Write a deletion by looking up the deleted reference bases."""
return self.write_deletion_with_insertion(CHROM, start, end,
ID, [''], gtype)
def write_insertion(self, CHROM, start, inserted_sequence, ID, gtype):
"""Write an insertion by taking the preceding base as ref allele."""
return self.write_deletion_with_insertion(CHROM, start, start,
ID, inserted_sequence, gtype)
def write_deletion_with_insertion(self, CHROM, start, end, ID, alts,
gtype):
"""Replace arbitrary sequence with arbitrary sequence.
'alts' is a list of alleles.
To conform to the VCF spec, prepend the preceding reference base to the
REF and ALT alleles if any of them are empty, or if this variant isn't
a SNP and not all alleles share a common first base.
WARNING: If prepending, assume the base preceding the deletion wasn't
in the reference allele of the previous variant, and that the deletion
isn't at the start of a chromosome.
"""
assert(0 <= start <= end)
REF = self._ref_genome.ref(CHROM, start, end)
def write(pos, ref, alts):
ALT = ','.join(alts) if alts else '.'
return self.write_record(CHROM, pos, ID, ref, ALT, gtype)
if _must_prepend(REF, alts):
assert(start)
anchor = self._ref_genome.ref(CHROM, start - 1, start)
return write(start - 1, anchor + REF, map(anchor.__add__, alts))
else:
return write(start, REF, alts)
def write_alleles(self, CHROM, start, end, ID, alleles, phased=True):
"""Like 'write_deletion_with_insertion' but with a pair of alleles."""
assert(1 <= len(alleles) <= 2)
REF = self._ref_genome.ref(CHROM, start, end)
distinct_alleles = [REF]
for allele in alleles:
if allele not in distinct_alleles:
distinct_alleles.append(allele)
allele_indices = map(distinct_alleles.index, alleles)
sep = '|' if phased else '/'
gtype = sep.join(map(str, allele_indices))
alts = distinct_alleles[1:]
return self.write_deletion_with_insertion(CHROM, start, end, ID, alts,
gtype)
def write_inversion(self, CHROM, start, end, ID, gtype):
assert(0 <= start < end)
REF = self._ref_genome.ref(CHROM, start, end)
ALT = REF[::-1]
return self.write_record(CHROM, start, ID, REF, ALT, gtype)
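# Minimal end-to-end sketch (illustrative: the FASTA path, sample name, chromosome and
# coordinates are made up, and the reference must actually contain the bases being queried).
def _example_write_small_vcf(reference_fasta, out_path):
    with open(out_path, 'w') as out:
        writer = VCFWriter(reference_fasta, 'SAMPLE1', out)
        writer.write_record('chr1', 101, '.', 'A', 'T', '0/1')   # SNP with all fields supplied
        writer.write_deletion('chr1', 200, 203, '.', '1/1')      # deleted bases looked up from the reference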
| 41.510204 | 84 | 0.655523 | 807 | 6,102 | 4.869888 | 0.322181 | 0.033079 | 0.029771 | 0.022901 | 0.204071 | 0.16056 | 0.122646 | 0.122646 | 0.102799 | 0.084478 | 0 | 0.007078 | 0.259095 | 6,102 | 146 | 85 | 41.794521 | 0.862199 | 0.445919 | 0 | 0.046154 | 0 | 0 | 0.050601 | 0.018975 | 0 | 0 | 0 | 0 | 0.061538 | 1 | 0.138462 | false | 0.015385 | 0.046154 | 0 | 0.353846 | 0.046154 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d75fb19f64daeefd8fccba812168b4aa40d55de5 | 973 | py | Python | diagnnose/models/wrappers/awd_lstm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 35 | 2019-06-12T13:50:39.000Z | 2020-11-10T22:29:19.000Z | diagnnose/models/wrappers/awd_lstm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 50 | 2019-04-07T20:22:54.000Z | 2020-11-14T12:58:27.000Z | diagnnose/models/wrappers/awd_lstm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 5 | 2019-06-06T13:37:29.000Z | 2020-09-24T12:04:17.000Z | from typing import Any, Dict
from .forward_lstm import ForwardLSTM
class AWDLSTM(ForwardLSTM):
def __init__(self, *args: Any, **kwargs: Any) -> None:
kwargs.setdefault("rnn_name", "rnns")
super().__init__(*args, **kwargs)
@staticmethod
def param_names(
layer: int, rnn_name: str, no_suffix: bool = False, **kwargs
) -> Dict[str, str]:
# The AWD-LSTM has no separate weight names for a single layer LSTM
if no_suffix:
return {
"weight_hh": "",
"weight_ih": "",
"bias_hh": "",
"bias_ih": "",
}
else:
return {
"weight_hh": f"{rnn_name}.{layer}.module.weight_hh_l0_raw",
"weight_ih": f"{rnn_name}.{layer}.module.weight_ih_l0",
"bias_hh": f"{rnn_name}.{layer}.module.bias_hh_l0",
"bias_ih": f"{rnn_name}.{layer}.module.bias_ih_l0",
}
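# For reference (this follows directly from the mapping above and is not extra configuration):
# AWDLSTM.param_names(1, "rnns") yields
#   {"weight_hh": "rnns.1.module.weight_hh_l0_raw",
#    "weight_ih": "rnns.1.module.weight_ih_l0",
#    "bias_hh": "rnns.1.module.bias_hh_l0",
#    "bias_ih": "rnns.1.module.bias_ih_l0"}
# while param_names(0, "rnns", no_suffix=True) returns empty strings, because a single-layer
# AWD-LSTM checkpoint has no separate per-layer weight names.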
| 32.433333 | 75 | 0.530319 | 115 | 973 | 4.182609 | 0.426087 | 0.087318 | 0.066528 | 0.108108 | 0.216216 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0.006135 | 0.329908 | 973 | 29 | 76 | 33.551724 | 0.731595 | 0.066804 | 0 | 0.083333 | 0 | 0 | 0.251656 | 0.16777 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.291667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76030673ee3f36c18685dc66e7b5e5a022eadb5 | 561 | py | Python | imdb.munge.py | pbloem/podcasts | 68cf4317b3324012c0710816c439d322694b25a4 | [
"MIT"
] | null | null | null | imdb.munge.py | pbloem/podcasts | 68cf4317b3324012c0710816c439d322694b25a4 | [
"MIT"
] | null | null | null | imdb.munge.py | pbloem/podcasts | 68cf4317b3324012c0710816c439d322694b25a4 | [
"MIT"
] | null | null | null |
import glob, os, json, sys
from os import sep as S
from tqdm import tqdm
DIR = '/Users/Peter/Dropbox/datasets/text/aclImdb'
for split in ['train', 'test']:
    result = {}
    result['pos'], result['neg'] = [], []
    for sentiment in ['pos', 'neg']:
        print('loading', sentiment)
        for file in tqdm(glob.glob(f'{DIR}{S}{split}{S}{sentiment}{S}*.txt')):
            with open(file, 'r') as f:
                result[sentiment].append(f.read())
    with open(f'{DIR}{S}imdb.{split}.json', 'w') as out:
        json.dump(result, out)
print('done.') | 23.375 | 76 | 0.565062 | 82 | 561 | 3.865854 | 0.5 | 0.025237 | 0.031546 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 561 | 24 | 77 | 23.375 | 0.738928 | 0 | 0 | 0 | 0 | 0 | 0.240642 | 0.178253 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d762c91c37296a3f1949a713b31a632c81463bab | 651 | py | Python | api/filters.py | kravcov9109/yamdb_final | 0408756542215fdf41cb622fdc37fd29a87c5634 | [
"MIT"
] | null | null | null | api/filters.py | kravcov9109/yamdb_final | 0408756542215fdf41cb622fdc37fd29a87c5634 | [
"MIT"
] | null | null | null | api/filters.py | kravcov9109/yamdb_final | 0408756542215fdf41cb622fdc37fd29a87c5634 | [
"MIT"
] | null | null | null | from django_filters import rest_framework as filters
from .models import Title
class CharFilterInFilter(filters.BaseInFilter, filters.CharFilter):
pass
class TitleFilter(filters.FilterSet):
category = CharFilterInFilter(
field_name='category__slug',
lookup_expr='in',
)
genre = CharFilterInFilter(
field_name='genre__slug',
lookup_expr='in',
)
name = filters.CharFilter(
field_name='name',
lookup_expr='contains',
)
class Meta:
model = Title
fields = (
'genre',
'category',
'year',
'name',
)
| 19.727273 | 67 | 0.588326 | 59 | 651 | 6.288136 | 0.491525 | 0.072776 | 0.145553 | 0.086253 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.316436 | 651 | 32 | 68 | 20.34375 | 0.833708 | 0 | 0 | 0.08 | 0 | 0 | 0.095238 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.04 | 0.08 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d762ed9b9f41229c7c08eddcd2ace6a90bdd781d | 8,466 | py | Python | src/utils.py | acwooding/covid_nlp | d097c35ca5a7cedecae05bb5677bcfb8ff573b97 | [
"MIT"
] | null | null | null | src/utils.py | acwooding/covid_nlp | d097c35ca5a7cedecae05bb5677bcfb8ff573b97 | [
"MIT"
] | 1 | 2020-03-21T16:04:31.000Z | 2020-03-21T16:04:31.000Z | src/utils.py | acwooding/covid_nlp | d097c35ca5a7cedecae05bb5677bcfb8ff573b97 | [
"MIT"
] | 1 | 2020-03-21T15:16:13.000Z | 2020-03-21T15:16:13.000Z | import time
import pathlib
import numpy as np
import json
from .log import logger
# Clustering / embedding dependencies used by RankedPoints below
import hdbscan
import umap
import umap.plot
import pandas as pd
from scipy.spatial.distance import cdist
# Timing and Performance
def timing_info(method):
def wrapper(*args, **kw):
start_time = time.time()
result = method(*args, **kw)
end_time = time.time()
logger.info(f"timing_info: {method.__name__}"
f"@{round((end_time-start_time)*1000,1)} ms")
return result
return wrapper
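# Example of the decorator in use (illustrative; `expensive_transform` and `my_df` are made-up names):
#
#   @timing_info
#   def expensive_transform(df):
#       return df.dropna()
#
#   expensive_transform(my_df)   # logs e.g. "timing_info: expensive_transform@12.3 ms"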
def record_time_interval(section, start_time, line_break=False):
"""Record a time interval since the last timestamp"""
end_time = time.time()
delta = end_time - start_time
if delta < 1:
delta *= 1000
units = "ms"
else:
units = "s"
if line_break:
logger.debug("PROCESS_TIME:{:>36} {} {}\n".format(section, round(delta, 1), units))
else:
logger.debug("PROCESS_TIME:{:>36} {} {}".format(section, round(delta, 1), units))
return end_time
def normalize_numpy_dict(d):
    """Return a copy of `d` with numpy scalar values converted to native Python scalars."""
    ret = d.copy()
    for k, v in ret.items():
        if isinstance(v, np.generic):
            ret[k] = v.item()  # np.asscalar() is removed in recent numpy; .item() is the replacement
    return ret
def save_json(filename, obj, indent=2, sort_keys=True):
"""Dump an object to disk in json format
filename: pathname
Filename to dump to
obj: object
Object to dump
indent: integer
number of characters to indent
sort_keys: boolean
Whether to sort keys before writing. Should be True if you ever use revision control
on the resulting json file.
"""
with open(filename, 'w') as fw:
json.dump(obj, fw, indent=indent, sort_keys=sort_keys)
def load_json(filename):
"""Read a json file from disk"""
with open(filename) as fw:
obj = json.load(fw)
return obj
def head_file(filename, n=5):
"""Return the first `n` lines of a file
"""
with open(filename, 'r') as fd:
lines = []
for i, line in enumerate(fd):
if i > n:
break
lines.append(line)
return "".join(lines)
def list_dir(path, fully_qualified=False, glob_pattern='*'):
"""do an ls on a path
fully_qualified: boolean (default: False)
If True, return a list of fully qualified pathlib objects.
        If False, return just the bare filenames
    glob_pattern: glob (default: '*')
        File pattern to match
Returns
-------
A list of names, or fully qualified pathlib objects"""
if fully_qualified:
return list(pathlib.Path(path).glob(glob_pattern))
return [file.name for file in pathlib.Path(path).glob(glob_pattern)]
class RankedPoints:
def __init__(self, points, clusterer, metric='euclidean', selection_method='centroid'):
""" Rank points in a cluster based on their distance to the cluster centroid/medoid.
From https://github.com/gclen/covid19-kaggle/.
Parameters
----------
points : array of shape (n_samples, n_features), and must be the same data passed into
HDBSCAN
clusterer : Instance of HDBSCAN that has been fit to data
metric: string or callable, optional (default='euclidean')
The metric to use when calculating distance between points in a cluster and
the cluster centroid/medoid. If metric is a string or callable, it must be one of
the options allowed by scipy.spatial.distance.cdist for its metric parameter.
selection_method: string, optional (default='centroid')
Method to use to find the weighted cluster center. Allowed options are 'centroid'
and 'medoid'.
"""
self.clusterer = clusterer
self.metric = metric
allowed_methods = ['centroid', 'medoid']
if selection_method not in allowed_methods:
raise ValueError(f'Selection method must be one of {allowed_methods}')
if selection_method == 'centroid' and metric != 'euclidean':
raise ValueError(f'Metric must be euclidian when using selection_method centroid. '
f'Current metric is {metric}')
self.selection_method = selection_method
self._embedding_cols = [str(i) for i in range(points.shape[1])]
self.embedding_df = pd.DataFrame(points, columns=self._embedding_cols)
self.embedding_df['cluster'] = clusterer.labels_
def calculate_all_distances_to_center(self):
"""For each cluster calculate the distance from each point to the centroid/medoid"""
all_distances = pd.DataFrame()
for label in np.unique(self.embedding_df['cluster']):
distance_df = self.calculate_distances_for_cluster(label)
all_distances = pd.concat([all_distances, distance_df])
self.embedding_df = self.embedding_df.merge(all_distances, left_index=True, right_index=True)
def calculate_distances_for_cluster(self, cluster_id):
"""For a given cluster_id calculate the distance from each point to the centroid/medoid.
Parameters
----------
cluster_id : int
The id of the cluster to compute the distances for. If the cluster id is -1 which
corresponds to the noise point cluster, then this will return a distance of NaN.
Returns
-------
df : A pandas DataFrame containing the distances from each point to the cluster centroid/medoid.
The index of the dataframe corresponds to the index in the original data.
"""
cluster_of_interest = self.embedding_df[self.embedding_df['cluster'] == cluster_id].copy()
if cluster_of_interest.empty:
raise ValueError(f'Cluster id {cluster_id} not found')
# Don't calculate distances for the noise cluster
if cluster_id == -1:
return pd.DataFrame(np.nan, columns=['dist_to_rep_point'], index=cluster_of_interest.index)
if self.selection_method == 'centroid':
rep_point = self.clusterer.weighted_cluster_centroid(cluster_id)
if self.selection_method == 'medoid':
rep_point = self.clusterer.weighted_cluster_medoid(cluster_id)
dists = cdist(rep_point.reshape((1,len(self._embedding_cols))), cluster_of_interest[self._embedding_cols].values, metric=self.metric)
return pd.DataFrame(dists[0], columns=['dist_to_rep_point'], index=cluster_of_interest.index)
def rank_cluster_points_by_distance(self, cluster_id):
"""For a given cluster return a pandas dataframe of points ranked
by distance to the cluster centroid/medoid
"""
cluster_of_interest = self.embedding_df[self.embedding_df['cluster'] == cluster_id].copy()
if cluster_of_interest.empty:
raise ValueError(f'Cluster id {cluster_id} not found')
if 'dist_to_rep_point' not in self.embedding_df.columns:
distance_df = self.calculate_distances_for_cluster(cluster_id)
cluster_of_interest = cluster_of_interest.merge(distance_df, left_index=True, right_index=True)
cluster_of_interest.sort_values('dist_to_rep_point', inplace=True)
return cluster_of_interest
def get_all_cluster_rankings(self):
"""Calculate the rank of each point within a cluster"""
if 'dist_to_rep_point' not in self.embedding_df.columns:
self.calculate_all_distances_to_center()
self.embedding_df['rank_in_cluster'] = self.embedding_df.groupby('cluster')['dist_to_rep_point'].rank(method='min')
def get_closest_samples_for_cluster(self, cluster_id, n_samples=5):
"""Get the N closest points to the cluster centroid/medoid"""
return self.rank_cluster_points_by_distance(cluster_id).head(n_samples)
def get_furthest_samples_for_cluster(self, cluster_id, n_samples=5):
"""Get the N points furthest away from the cluster centroid/medoid"""
return self.rank_cluster_points_by_distance(cluster_id).tail(n_samples)
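# Illustrative usage sketch: rank the members of each HDBSCAN cluster by distance to the
# weighted cluster centroid. The sample data, parameter values, and helper name below are
# hypothetical and assume the `hdbscan` package is installed.
def _example_rank_cluster_points():
    import hdbscan  # assumed dependency; provides labels_ and weighted_cluster_centroid
    rng = np.random.RandomState(0)
    # two well-separated blobs so that HDBSCAN finds (at least) clusters 0 and 1
    points = np.vstack([rng.normal(0.0, 0.05, size=(100, 2)),
                        rng.normal(1.0, 0.05, size=(100, 2))])
    clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(points)
    ranked = RankedPoints(points, clusterer, metric='euclidean', selection_method='centroid')
    ranked.get_all_cluster_rankings()
    # the five points closest to the centroid of cluster 0
    return ranked.get_closest_samples_for_cluster(0, n_samples=5)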
def get_support_index(row):
"""
Helper function to obtain the words that a document (row) is supported on in the vocabulary.
Parameters
----------
row:
a row from the document matrix
Returns
-------
array of column indices that the row is supported on
"""
inds = row.indices
data = row.data
order = np.argsort(-data)
return inds[order]
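# Illustrative usage sketch: get_support_index expects one row of a sparse document-term
# matrix (e.g. the output of a CountVectorizer). The tiny matrix below is hypothetical.
def _example_support_index():
    from scipy.sparse import csr_matrix  # assumed available alongside scipy.spatial
    docs = csr_matrix([[0, 3, 0, 1],
                       [2, 0, 0, 5]])
    # vocabulary indices document 0 is supported on, largest count first -> array([1, 3])
    return get_support_index(docs.getrow(0))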
| 36.808696 | 141 | 0.663832 | 1,141 | 8,466 | 4.748466 | 0.237511 | 0.031561 | 0.035991 | 0.026578 | 0.293835 | 0.246216 | 0.170912 | 0.144703 | 0.144703 | 0.144703 | 0 | 0.004215 | 0.243444 | 8,466 | 229 | 142 | 36.969432 | 0.841686 | 0.312544 | 0 | 0.125 | 0 | 0 | 0.10247 | 0.007003 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.098214 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76631e7046201b8eb28d689d384fd47bab976ee | 6,514 | py | Python | data/snli/emnlp18/generate.py | uclmr/adversarial-nli | 7222b0fd2989fff237520c401d00249262f750dc | [
"MIT"
] | 27 | 2018-08-27T07:30:10.000Z | 2019-05-16T19:29:33.000Z | data/snli/emnlp18/generate.py | uclmr/adversarial-nli | 7222b0fd2989fff237520c401d00249262f750dc | [
"MIT"
] | null | null | null | data/snli/emnlp18/generate.py | uclmr/adversarial-nli | 7222b0fd2989fff237520c401d00249262f750dc | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import sys
import argparse
import requests
import pickle
import atexit
import operator
import copy
import numpy as np
import gzip
import json
import logging
logger = logging.getLogger(os.path.basename(sys.argv[0]))
def contradiction_loss_dam(sentence1, sentence2):
p1 = call_dam(sentence1, sentence2)['contradiction']
p2 = call_dam(sentence2, sentence1)['contradiction']
p1, p2 = float(p1), float(p2)
return max(p1 - p2, 0), max(p2 - p1, 0)
def contradiction_loss_esim(sentence1, sentence2):
p1 = call_esim(sentence1, sentence2)['contradiction']
p2 = call_esim(sentence2, sentence1)['contradiction']
p1, p2 = float(p1), float(p2)
return max(p1 - p2, 0), max(p2 - p1, 0)
def contradiction_loss_cbilstm(sentence1, sentence2):
p1 = call_cbilstm(sentence1, sentence2)['contradiction']
p2 = call_cbilstm(sentence2, sentence1)['contradiction']
p1, p2 = float(p1), float(p2)
return max(p1 - p2, 0), max(p2 - p1, 0)
def persist(path):
def decorator(fun):
cache = {}
if os.path.isfile(path):
with open(path, 'rb') as f:
cache = pickle.load(f)
def write():
with open(path, 'wb') as f:
pickle.dump(cache, f)
atexit.register(lambda: write())
def new_f(*args):
if tuple(args) not in cache:
cache[tuple(args)] = fun(*args)
return cache[args]
return new_f
return decorator
@persist('dam_cache.p')
def call_dam(sentence1, sentence2, url='http://127.0.0.1:8889/nnli'):
data = {'sentence1': sentence1, 'sentence2': sentence2}
ans = requests.post(url, data=data)
ans_json = ans.json()
return ans_json
@persist('esim_cache.p')
def call_esim(sentence1, sentence2, url='http://127.0.0.1:9001/nnli'):
data = {'sentence1': sentence1, 'sentence2': sentence2}
ans = requests.post(url, data=data)
ans_json = ans.json()
return ans_json
@persist('cbilstm_cache.p')
def call_cbilstm(sentence1, sentence2, url='http://127.0.0.1:9002/nnli'):
data = {'sentence1': sentence1, 'sentence2': sentence2}
ans = requests.post(url, data=data)
ans_json = ans.json()
return ans_json
def invert(_obj):
i_obj = copy.deepcopy(_obj)
# Switch sentence1 with sentence2
for i, j in [(1, 2), (2, 1)]:
for suf in ['', '_binary_parse', '_parse']:
i_obj['sentence{}{}'.format(i, suf)] = _obj['sentence{}{}'.format(j, suf)]
# Heuristically change the gold_label
gold_label = _obj['gold_label']
assert gold_label in {'entailment', 'contradiction', 'neutral', '-'}
if gold_label == 'entailment':
i_obj['gold_label'] = 'neutral'
if gold_label == 'neutral':
i_obj['gold_label'] = 'neutral'
if gold_label == 'contradiction':
i_obj['gold_label'] = 'contradiction'
return i_obj
def main(argv):
def fmt(prog):
return argparse.HelpFormatter(prog, max_help_position=100, width=200)
argparser = argparse.ArgumentParser('SNLI Adversarial Candidate Generator', formatter_class=fmt)
argparser.add_argument('--path', '-p', action='store', type=str, default='snli_1.0_train.jsonl.gz')
argparser.add_argument('--seed', '-s', action='store', type=int, default=0)
argparser.add_argument('--fraction', '-f', action='store', type=float, default=None)
argparser.add_argument('--nb-instances', '-n', action='store', type=int, default=None)
argparser.add_argument('--no-inverse', '-N', action='store_true', default=False)
args = argparser.parse_args(argv)
path = args.path
seed = args.seed
fraction = args.fraction
nb_instances = args.nb_instances
no_inverse = args.no_inverse
obj_lst = []
with gzip.open(path, 'rb') as f:
for line in f:
dl = line.decode('utf-8')
obj_lst += [json.loads(dl)]
if fraction is not None:
rs = np.random.RandomState(seed)
nb_obj = len(obj_lst)
# Round to the closest integer
nb_samples = int(round(nb_obj * fraction))
sample_idxs = rs.choice(nb_obj, nb_samples, replace=False)
sample_obj_lst = [obj_lst[i] for i in sample_idxs]
else:
sample_obj_lst = obj_lst
obj_c_loss_dam_pairs = []
obj_c_loss_esim_pairs = []
obj_c_loss_cbilstm_pairs = []
obj_c_loss_pairs = []
for obj in sample_obj_lst:
s1, s2 = obj['sentence1'], obj['sentence2']
dam_c1, dam_c2 = contradiction_loss_dam(s1, s2)
esim_c1, esim_c2 = contradiction_loss_esim(s1, s2)
cbilstm_c1, cbilstm_c2 = contradiction_loss_cbilstm(s1, s2)
c_loss_value_dam = dam_c1 + dam_c2
c_loss_value_esim = esim_c1 + esim_c2
c_loss_value_cbilstm = cbilstm_c1 + cbilstm_c2
obj_c_loss_dam_pairs += [(obj, c_loss_value_dam)]
obj_c_loss_esim_pairs += [(obj, c_loss_value_esim)]
obj_c_loss_cbilstm_pairs += [(obj, c_loss_value_cbilstm)]
obj_c_loss_pairs += [(obj, c_loss_value_dam + c_loss_value_esim + c_loss_value_cbilstm)]
sorted_objs_c_loss_pairs = sorted(obj_c_loss_pairs,
key=operator.itemgetter(1),
reverse=True)
if nb_instances is None:
nb_instances = len(sorted_objs_c_loss_pairs)
for obj, c_loss in sorted_objs_c_loss_pairs[:nb_instances]:
s1, s2 = obj['sentence1'], obj['sentence2']
dam_c1, dam_c2 = contradiction_loss_dam(s1, s2)
esim_c1, esim_c2 = contradiction_loss_esim(s1, s2)
cbilstm_c1, cbilstm_c2 = contradiction_loss_cbilstm(s1, s2)
c_obj = copy.deepcopy(obj)
i_obj = invert(obj)
c_obj['type'] = 'normal'
i_obj['type'] = 'inverse'
c_obj['c_loss_dam'] = dam_c1
i_obj['c_loss_dam'] = dam_c2
c_obj['c_loss_esim'] = esim_c1
i_obj['c_loss_esim'] = esim_c2
c_obj['c_loss_cbilstm'] = cbilstm_c1
i_obj['c_loss_cbilstm'] = cbilstm_c2
c_obj['dam'] = call_dam(s1, s2)
i_obj['dam'] = call_dam(s2, s1)
c_obj['esim'] = call_esim(s1, s2)
i_obj['esim'] = call_esim(s2, s1)
c_obj['cbilstm'] = call_cbilstm(s1, s2)
i_obj['cbilstm'] = call_cbilstm(s2, s1)
print(json.dumps(c_obj))
if no_inverse is False:
print(json.dumps(i_obj))
if __name__ == '__main__':
logging.basicConfig(level=logging.INFO)
main(sys.argv[1:])
| 29.609091 | 103 | 0.63233 | 910 | 6,514 | 4.262637 | 0.191209 | 0.036092 | 0.041248 | 0.02346 | 0.435937 | 0.311163 | 0.295437 | 0.295437 | 0.219902 | 0.219902 | 0 | 0.035908 | 0.234725 | 6,514 | 219 | 104 | 29.744292 | 0.742227 | 0.021492 | 0 | 0.184211 | 0 | 0 | 0.116188 | 0.003611 | 0 | 0 | 0 | 0 | 0.006579 | 1 | 0.085526 | false | 0 | 0.078947 | 0.006579 | 0.236842 | 0.013158 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76965a5df95ca9b604f309fdd6e68d62355603d | 5,272 | py | Python | clients/python/beneath/utils/aiodelaybuffer.py | admariner/beneath | a6aa2c220e4a646be792379528ae673f4bef440b | [
"MIT"
] | 65 | 2021-04-27T13:13:09.000Z | 2022-01-24T00:26:06.000Z | clients/python/beneath/utils/aiodelaybuffer.py | admariner/beneath | a6aa2c220e4a646be792379528ae673f4bef440b | [
"MIT"
] | 22 | 2021-10-06T10:30:40.000Z | 2021-12-10T11:36:55.000Z | clients/python/beneath/utils/aiodelaybuffer.py | admariner/beneath | a6aa2c220e4a646be792379528ae673f4bef440b | [
"MIT"
] | 4 | 2021-04-24T15:29:51.000Z | 2022-03-30T16:20:12.000Z | import asyncio
import logging
from typing import Generic, TypeVar
BufferValue = TypeVar("BufferValue")
class AIODelayBuffer(Generic[BufferValue]):
"""
AIODelayBuffer provides an asyncio-based means of buffering and flushing data based on
the size of buffered values or time passed since the first value was written to the buffer.
We use it to buffer writes for `max_delay_ms` before sending them in a single batched request
over the network (with forced buffer flushes at `max_buffer_size`).
It only lets one buffer be open at any moment. If a write is attempted when the buffer is full
or flushing, the write will not return until the flush of the previous buffer has completed.
This class is NOT thread-safe.
"""
def __init__(
self, max_delay_ms: int, max_record_size: int, max_buffer_size: int, max_buffer_count: int
):
self._max_delay = max_delay_ms / 1000
self._max_record_size = max_record_size
self._max_buffer_size = max_buffer_size
self._max_buffer_count = max_buffer_count
self._delay_task: asyncio.Task = None
self._delayed_flush_task: asyncio.Task = None
self._running = False
self._flushing = False
self._buffer_size = 0
self._buffer_count = 0
self._reset()
# PROPERTIES
@property
def running(self):
return self._running
# OVERRIDES
# pylint: disable=no-self-use
def _reset(self):
raise Exception("AIODelayBuffer subclasses must implement _reset")
# pylint: disable=no-self-use
def _merge(self, value: BufferValue):
raise Exception("AIODelayBuffer subclasses must implement _merge")
# pylint: disable=no-self-use
async def _flush(self):
raise Exception("AIODelayBuffer subclasses must implement _flush")
# LIFECYCLE
async def __aenter__(self):
await self.start()
return self
async def __aexit__(self, exc_type, exc, tb):
if not exc_type:
await self.stop()
async def start(self):
if self._running:
raise Exception("Already called start")
self._running = True
async def stop(self):
self._running = False
await self.force_flush()
async def write(self, value: BufferValue, size: int) -> asyncio.Task:
"""
Adds value to the buffer. When an awaited call to write returns, the value has been added
to the buffer, but not been flushed yet. If you wish to wait until the write has been
flushed, await the task returned by write. For example:
task = await buffer.write(value=..., size=..)
await task
"""
# check open
if not self._running:
raise Exception("Cannot call write because the buffer is closed")
# check value is within acceptable record size
if size > self._max_record_size:
raise ValueError(
f"Value exceeds maximum record size (size={size} "
f"max_record_size={self._max_record_size} value={value})"
)
# trigger/wait for flush if a) a flush is in progress, or b) value would cause size overflow
loops = 0
while (
self._flushing
or (self._buffer_size + size > self._max_buffer_size)
or (self._buffer_count == self._max_buffer_count)
):
assert self._delayed_flush_task is not None
await self.force_flush()
loops += 1
if loops > 5:
logging.warning(
"Unfortunate scheduling blocked write to buffer %i times"
" (try to limit concurrent writes)",
loops,
)
# now we know we're not flushing and the value fits;
# and execution will not be "interrupted" until next "await"
# add to buffer
self._merge(value)
self._buffer_size += size
self._buffer_count += 1
# if a delayed flush isn't already scheduled for this batch, schedule it now
if not self._delayed_flush_task:
self._delay_task = asyncio.create_task(asyncio.sleep(self._max_delay))
self._delayed_flush_task = asyncio.create_task(self._delayed_flush())
self._delayed_flush_task.add_done_callback(self._delayed_flush_done)
return self._delayed_flush_task
async def force_flush(self):
if self._delay_task:
self._delay_task.cancel()
if self._delayed_flush_task:
await self._delayed_flush_task
async def _delayed_flush(self):
# wait for delay
if self._delay_task:
try:
await self._delay_task
except asyncio.CancelledError:
pass
# flush
self._flushing = True
await self._flush()
self._reset()
self._flushing = False
self._buffer_size = 0
self._buffer_count = 0
self._delay_task = None
self._delayed_flush_task = None
def _delayed_flush_done(self, task: asyncio.Task):
if task.exception():
logging.error("Error in buffer flush background loop", exc_info=task.exception())
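# Illustrative subclass sketch: AIODelayBuffer is abstract, so subclasses supply the
# _reset/_merge/_flush hooks. The minimal list-backed buffer below (and every name it
# introduces) is hypothetical and only shows how the hooks and the write/flush lifecycle
# fit together.
class _ListDelayBuffer(AIODelayBuffer[int]):
    def __init__(self, flush_fn, **kwargs):
        self._flush_fn = flush_fn  # coroutine that receives each flushed batch
        self._values = []
        super().__init__(**kwargs)

    def _reset(self):
        self._values = []

    def _merge(self, value: int):
        self._values.append(value)

    async def _flush(self):
        await self._flush_fn(list(self._values))

# Possible usage (inside a coroutine; handle_batch is a hypothetical coroutine):
#     async with _ListDelayBuffer(handle_batch, max_delay_ms=500, max_record_size=64,
#                                 max_buffer_size=4096, max_buffer_count=100) as buffer:
#         task = await buffer.write(42, size=8)
#         await task  # resolves once the batch containing 42 has been flushed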
| 33.367089 | 100 | 0.632587 | 672 | 5,272 | 4.730655 | 0.282738 | 0.052847 | 0.055363 | 0.056622 | 0.202579 | 0.134319 | 0.067317 | 0.032715 | 0.032715 | 0.032715 | 0 | 0.003244 | 0.298369 | 5,272 | 157 | 101 | 33.579618 | 0.856177 | 0.198217 | 0 | 0.170213 | 0 | 0 | 0.11623 | 0.010209 | 0 | 0 | 0 | 0 | 0.010638 | 1 | 0.053191 | false | 0.010638 | 0.031915 | 0.010638 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76d2ca5409e2ba212a29fe010f57cd9834698b2 | 4,057 | py | Python | haruka/modules/language.py | jarvisbotsavage/groupmanaging | a420cc556347ab947d28d0d736b8aa72680949ff | [
"MIT"
] | null | null | null | haruka/modules/language.py | jarvisbotsavage/groupmanaging | a420cc556347ab947d28d0d736b8aa72680949ff | [
"MIT"
] | null | null | null | haruka/modules/language.py | jarvisbotsavage/groupmanaging | a420cc556347ab947d28d0d736b8aa72680949ff | [
"MIT"
] | null | null | null | from haruka.modules.sql.translation import switch_to_locale, prev_locale
from haruka.modules.translations.strings import tld
from telegram.ext import CommandHandler
from telegram import ParseMode, InlineKeyboardMarkup, InlineKeyboardButton
from haruka import dispatcher
from haruka.modules.translations.list_locale import list_locales
from haruka.modules.helper_funcs.chat_status import user_admin
from telegram.ext import CallbackQueryHandler
import re
from haruka.modules.connection import connected
@user_admin
def locale(bot, update, args):
chat = update.effective_chat
if len(args) > 0:
locale = args[0].lower()
if locale in list_locales:
if locale in ('en', 'ru', 'ua', 'es', 'tr', 'id'):
switch_to_locale(chat.id, locale)
update.message.reply_text(tld(chat.id, 'Switched to {} successfully!').format(list_locales[locale]))
else:
update.message.reply_text("{} is not supported yet!".format(list_locales[locale]))
else:
update.message.reply_text("Is that even a valid language code? Use an internationally accepted ISO code!")
else:
LANGUAGE = prev_locale(chat.id)
if LANGUAGE:
locale = LANGUAGE.locale_name
native_lang = list_locales[locale]
update.message.reply_text("Current locale for this chat is: *{}*".format(native_lang), parse_mode = ParseMode.MARKDOWN)
else:
update.message.reply_text("Current locale for this chat is: *English*", parse_mode=ParseMode.MARKDOWN)
@user_admin
def locale_button(bot, update):
chat = update.effective_chat
user = update.effective_user # type: Optional[User]
query = update.callback_query
lang_match = re.findall(r"en|ru|ua|es|tr|id", query.data)
if lang_match:
if lang_match[0]:
switch_to_locale(chat.id, lang_match[0])
query.answer(text="Language changed!")
else:
query.answer(text="Error!", show_alert=True)
try:
LANGUAGE = prev_locale(chat.id)
locale = LANGUAGE.locale_name
curr_lang = list_locales[locale]
except:
curr_lang = "English"
text = "*Select language* \n"
text += "User language : `{}`".format(curr_lang)
conn = connected(bot, update, chat, user.id, need_admin=False)
if not conn == False:
try:
chatlng = prev_locale(conn).locale_name
chatlng = list_locales[chatlng]
text += "\nConnected chat language : `{}`".format(chatlng)
except:
chatlng = "English"
text += "*\n\nSelect new user language:*"
query.message.reply_text(text, parse_mode=ParseMode.MARKDOWN,
reply_markup=InlineKeyboardMarkup([[
InlineKeyboardButton("English 🇺🇸", callback_data="set_lang_en")]] + [[
InlineKeyboardButton("Russian 🇷🇺", callback_data="set_lang_ru"),
InlineKeyboardButton("Ukrainian 🇺🇦", callback_data="set_lang_ua")]] + [[
InlineKeyboardButton("Spanish 🇪🇸", callback_data="set_lang_es"),
InlineKeyboardButton("Turkish 🇹🇷", callback_data="set_lang_tr")]] + [[
InlineKeyboardButton("Indonesian 🇮🇩", callback_data="set_lang_id")]] + [[
InlineKeyboardButton("⬅️ Back", callback_data="bot_start")]]))
print(lang_match)
query.message.delete()
bot.answer_callback_query(query.id)
LOCALE_HANDLER = CommandHandler(["set_locale", "locale", "lang", "setlang"], locale, pass_args=True)
locale_handler = CallbackQueryHandler(locale_button, pattern="chng_lang")
set_locale_handler = CallbackQueryHandler(locale_button, pattern=r"set_lang_")
dispatcher.add_handler(LOCALE_HANDLER)
dispatcher.add_handler(locale_handler)
dispatcher.add_handler(set_locale_handler) | 44.097826 | 131 | 0.635198 | 456 | 4,057 | 5.47807 | 0.287281 | 0.030825 | 0.038431 | 0.045637 | 0.215372 | 0.164932 | 0.113691 | 0.113691 | 0.079263 | 0.079263 | 0 | 0.001327 | 0.257087 | 4,057 | 92 | 132 | 44.097826 | 0.822827 | 0.00493 | 0 | 0.217949 | 0 | 0 | 0.140981 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025641 | false | 0.012821 | 0.128205 | 0 | 0.153846 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76e9a04250ea8bab611c743c7c50bf120c2e4d2 | 6,621 | py | Python | DynamicalSystems/dubins_absolute.py | robotsorcerer/LevelSetPy | 54064ee7fd0144e0d658dd4f6121cbc1fda664b9 | [
"MIT"
] | 4 | 2022-03-14T07:04:08.000Z | 2022-03-14T18:08:56.000Z | DynamicalSystems/dubins_absolute.py | robotsorcerer/LevelSetPy | 54064ee7fd0144e0d658dd4f6121cbc1fda664b9 | [
"MIT"
] | null | null | null | DynamicalSystems/dubins_absolute.py | robotsorcerer/LevelSetPy | 54064ee7fd0144e0d658dd4f6121cbc1fda664b9 | [
"MIT"
] | null | null | null | __all__ = ["DubinsVehicleAbs"]
__author__ = "Lekan Molux"
__date__ = "Dec. 21, 2021"
__comment__ = "Two Dubins Vehicle in Absolute Coordinates"
import time
import cupy as cp
import numpy as np
from LevelSetPy.Utilities import eps
class DubinsVehicleAbs():
def __init__(self, grid, u_bound=+5, w_bound=+5, \
init_state=[0,0,0], rw_cov=0.0, \
axis_align=2, center=None, label=None,
neigh_rad=.4, init_random=False):
"""
Dubins Vehicle Dynamics in absolute coordinates.
Please consult Merz, 1972 for a detailed reference.
Dynamics:
==========
\dot{x}_1 = v cos x_3
\dot{x}_2 = v sin x_3
\dot{x}_3 = w
Parameters:
===========
grid: an np.meshgrid state space on which we are
resolving this vehicular dynamics. This grid does not have
a value function (yet!) until it's part of a flock
u_bound: absolute value of the linear speed of the vehicle.
w_bound: absolute value of the angular speed of the vehicle.
init_state: initial position and orientation of a bird on the grid
rw_cov: random covariance scalar for initiating the stochasticity
on the grid.
center: location of this bird's value function on the grid
axis_align: periodic dimension on the grid to be created
neigh_rad: neighboring radius that defines the circle where nearest neighbors are counted.
init_random: if True, perturb the initial state with a random walk (see `initialize`).
"""
assert label is not None, "label of an agent cannot be empty"
self.grid = grid
# self.v = lambda u: u*u_bound
# self.w = lambda w: w*w_bound
self.v = u_bound
self.w = w_bound
self.neigh_rad = neigh_rad
# this is a vector defined in the direction of its nearest neighbor
self.u = None
self.deltaT = eps # use the machine epsilon as a rough, small initial value for deltaT
self.rand_walk_cov = rw_cov
self.center = center
self.axis_align = axis_align
if not np.any(init_state):
init_state = np.zeros((grid.shape))
# position this bird in the state space
self.initialize(init_state, init_random)
def initialize(self, init_state, init_random):
"""
simulate each agent's position in a flock as a random walk
Parameters
==========
.init_state: current state of a bird in the state space
(does not have to be an initial state/could be a current
state during simulation).
"""
if init_random:
# time between iterations
W = np.asarray(([self.deltaT**2/2])).T*np.identity(init_state.shape[-1])
WWT = W@W.T*self.rand_walk_cov**2
WWCov = np.tile(WWT, [len(init_state), 1, 1])
rand_walker = init_state*WWCov
self.state = init_state + rand_walker
else:
self.state = init_state
return self.state
def dynamics(self, cur_state):
"""
Computes the Dubins vehicular dynamics in relative
coordinates (deterministic dynamics).
\dot{x}_1 = v cos x_3
\dot{x}_2 = v sin x_3
\dot{x}_3 = w * I[sizeof(x_3)]
"""
if not np.any(cur_state):
cur_state = self.grid.xs
xdot = [
self.v * np.cos(cur_state[2]),
self.v * np.sin(cur_state[2]),
self.w * np.ones_like(cur_state[2])
]
return np.asarray(xdot)
def update_values(self, cur_state, t_span=None):
"""
Birds use an optimization scheme to maintain
separation distances from one another.
'even though vision is the main mechanism of interaction,
optimization determines the anisotropy of neighbors, and
not the eye's structure. There is also the possibility that
each individual keeps the front neighbor at larger distances
to avoid collisions. This collision avoidance mechanism is
vision-based but not related to the eye's structure.'
Parameters
==========
cur_state: position and orientation.
i.e. [x1, x2, θ] at this current position
t_span: time_span as a list [t0, tf] where
.t0: initial integration time
.tf: final integration time
"""
assert np.any(cur_state), "current state cannot be empty."
M, h = 4, 0.2 # RK steps per interval vs time step
X = np.asarray(cur_state) if isinstance(cur_state, list) else cur_state
for j in range(M):
if np.any(t_span): # integrate for this much time steps
hh = (t_span[1]-t_span[0])/10/M
for h in np.arange(t_span[0], t_span[1], hh):
k1 = self.dynamics(X)
k2 = self.dynamics(X + h/2 * k1)
k3 = self.dynamics(X + h/2 * k2)
k4 = self.dynamics(X + h * k3)
X = X+(h/6)*(k1 + 2*k2 + 2*k3 + k4)
else:
k1 = self.dynamics(X)
k2 = self.dynamics(X + h/2 * k1)
k3 = self.dynamics(X + h/2 * k2)
k4 = self.dynamics(X + h * k3)
X = X+(h/6)*(k1 +2*k2 +2*k3 +k4)
return X
def dissipation(self, t, data, derivMin, derivMax, \
schemeData, dim):
"""
Parameters
==========
dim: dimension along which to compute the dissipation of
the Hamiltonian on the grid (see 5.11-5.12 of O&F).
t, data, derivMin, derivMax, schemeData: other parameters
here are merely decorators to conform to the boilerplate
we use in the levelsetpy toolbox.
"""
assert dim>=0 and dim <3, "Dubins vehicle dimension has to be between 0 and 2 inclusive."
# bounds on |dH/dp_i| for the absolute-coordinate Hamiltonian H = p1*v*cos(x3) + p2*v*sin(x3) + p3*w
if dim==0:
return cp.abs(self.v * cp.cos(self.grid.xs[2]))
elif dim==1:
return cp.abs(self.v * cp.sin(self.grid.xs[2]))
elif dim==2:
return abs(self.w) * cp.ones_like(self.grid.xs[2])
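# Illustrative sketch: the class above integrates the dynamics over a whole grid; the
# standalone helper below applies a single RK4 step to one [x1, x2, theta] state vector.
# The helper name, arguments, and step size are hypothetical.
def _dubins_rk4_step(state, v, w, h):
    def f(s):
        return np.array([v * np.cos(s[2]), v * np.sin(s[2]), w])
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)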
| 38.947059 | 111 | 0.530887 | 874 | 6,621 | 3.909611 | 0.297483 | 0.031607 | 0.030436 | 0.024583 | 0.139889 | 0.089552 | 0.089552 | 0.089552 | 0.089552 | 0.089552 | 0 | 0.025116 | 0.380607 | 6,621 | 169 | 112 | 39.177515 | 0.808096 | 0.380154 | 0 | 0.162162 | 0 | 0 | 0.063122 | 0 | 0 | 0 | 0 | 0 | 0.040541 | 1 | 0.067568 | false | 0 | 0.054054 | 0 | 0.216216 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d76f991198beffbe88c65ad111dbaf6dddb9d83b | 244 | py | Python | while.py | yadymary/Platzi_Course_Basic_Python | a70e1839985ac1d591424fdb9f04c774dd7b7ef1 | [
"MIT"
] | 1 | 2021-11-17T17:38:44.000Z | 2021-11-17T17:38:44.000Z | while.py | yadymary/Platzi_Course_Basic_Python | a70e1839985ac1d591424fdb9f04c774dd7b7ef1 | [
"MIT"
] | null | null | null | while.py | yadymary/Platzi_Course_Basic_Python | a70e1839985ac1d591424fdb9f04c774dd7b7ef1 | [
"MIT"
] | null | null | null | def run ():
i= 1
LIMITE=1000
while i < LIMITE:
print(i)
i += 1
if i == 333:
print('333: we reached a number whose digits are all the same')
break
if __name__ == '__main__':
run() | 17.428571 | 62 | 0.442623 | 30 | 244 | 3.333333 | 0.666667 | 0.04 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090226 | 0.454918 | 244 | 14 | 63 | 17.428571 | 0.661654 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0 | 0 | 0.090909 | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7733bb714028b37b7865ec3fbb7370aa93ac455 | 2,103 | py | Python | migrations/versions/06482b687bae_.py | dskoda1/pizoms | 222e4ea958d78ece81dffbdbfac5670a997e9a9d | [
"MIT"
] | null | null | null | migrations/versions/06482b687bae_.py | dskoda1/pizoms | 222e4ea958d78ece81dffbdbfac5670a997e9a9d | [
"MIT"
] | null | null | null | migrations/versions/06482b687bae_.py | dskoda1/pizoms | 222e4ea958d78ece81dffbdbfac5670a997e9a9d | [
"MIT"
] | null | null | null | """
Create user, category, size, item tables
Revision ID: 06482b687bae
Revises: None
Create Date: 2017-09-10 22:35:30.792065
"""
# revision identifiers, used by Alembic.
revision = '06482b687bae'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('size',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('size', sa.String(length=100), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('size')
)
op.create_table('user',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('email', sa.String(length=255), nullable=False),
sa.Column('password', sa.String(length=255), nullable=False),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('email')
)
op.create_table('category',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=100), nullable=False),
sa.Column('description', sa.String(length=400), nullable=True),
sa.Column('created_by', sa.Integer(), nullable=False),
sa.ForeignKeyConstraint(['created_by'], ['user.id'], ),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint('name')
)
op.create_table('item',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('price', sa.Float(), nullable=False),
sa.Column('category_id', sa.Integer(), nullable=False),
sa.Column('size_id', sa.Integer(), nullable=False),
sa.Column('created_by', sa.Integer(), nullable=False),
sa.Column('description', sa.String(length=100), nullable=True),
sa.ForeignKeyConstraint(['category_id'], ['category.id'], ),
sa.ForeignKeyConstraint(['created_by'], ['user.id'], ),
sa.ForeignKeyConstraint(['size_id'], ['size.id'], ),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('item')
op.drop_table('category')
op.drop_table('user')
op.drop_table('size')
### end Alembic commands ###
| 32.859375 | 67 | 0.662863 | 260 | 2,103 | 5.296154 | 0.234615 | 0.087146 | 0.141612 | 0.152505 | 0.574437 | 0.5374 | 0.5374 | 0.324619 | 0.175744 | 0 | 0 | 0.030337 | 0.15359 | 2,103 | 63 | 68 | 33.380952 | 0.743258 | 0.147408 | 0 | 0.25 | 0 | 0 | 0.134736 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0.022727 | 0.045455 | 0 | 0.090909 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d77dca4c3397a62f930c9bb88dfbd375d62662bd | 16,284 | py | Python | vesper/mpg_ranch/nfc_species_classifier_2_0/dataset_utils.py | HaroldMills/NFC | 356b2234dc3c7d180282a597fa1e039ae79e03c6 | [
"MIT"
] | 29 | 2017-07-10T14:49:15.000Z | 2022-02-02T23:14:38.000Z | vesper/mpg_ranch/nfc_species_classifier_2_0/dataset_utils.py | Tubbz-alt/Vesper | 76e5931ca0c7fbe070c53b1362ec246ec9007beb | [
"MIT"
] | 167 | 2015-03-17T14:45:22.000Z | 2022-03-30T21:00:05.000Z | vesper/mpg_ranch/nfc_species_classifier_2_0/dataset_utils.py | Tubbz-alt/Vesper | 76e5931ca0c7fbe070c53b1362ec246ec9007beb | [
"MIT"
] | 4 | 2015-02-06T03:30:27.000Z | 2020-12-27T08:38:52.000Z | """
Constants and functions pertaining to tseep species classifier datasets.
"""
from collections import defaultdict
import math
from tensorflow.data import TFRecordDataset
from tensorflow.io import FixedLenFeature
import tensorflow as tf
import vesper.util.signal_utils as signal_utils
import vesper.util.time_frequency_analysis_utils as tfa_utils
_WAVEFORM_EXAMPLE_FEATURES = {
'waveform': FixedLenFeature((), tf.string),
'call_start_index': FixedLenFeature((), tf.int64),
'label': FixedLenFeature((), tf.int64),
'clip_id': FixedLenFeature((), tf.int64),
}
CLASS_NAMES = '''
ATSP
CCSP_BRSP
CHSP
Double Up
GRSP
LISP
MGWA
SAVS
VESP
WCSP
WIWA
Zeep
'''.strip().split('\n')
CLASS_COUNT = len(CLASS_NAMES)
def create_waveform_dataset_from_tensors(waveforms):
# One might like to just say:
#
# dataset = tf.data.Dataset.from_tensor_slices(waveforms)
#
# here instead of using a generator, but that only works if
# the waveforms all have the same length. Using a generator
# works even if the waveform lengths differ.
def generator():
for waveform in waveforms:
yield _normalize_waveform(waveform)
return tf.data.Dataset.from_generator(generator, tf.float32)
def create_waveform_dataset_from_tfrecord_files(dir_path):
"""
Creates a dataset of waveforms and associated metadata.
Each dataset example has the form:
(waveform, call_start_index, label, clip_id)
All of the waveforms of the dataset have the same length. Each
waveform contains one Vesper clip, which contains a nocturnal
flight call that starts at waveform index `call_start_index`.
The `label` of a dataset example is an integer indicating the
class of the call.
The `clip_id` of a dataset example is the ID of the clip included
in the waveform in the Vesper archive to which the clip belongs.
"""
# Use `tf.data.experimental.sample_from_datasets` here.
per_label_datasets = _get_per_label_datasets(dir_path)
dataset = tf.data.experimental.sample_from_datasets(per_label_datasets)
# Parse example protos.
dataset = dataset.map(
_parse_example,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
return dataset
def _get_per_label_datasets(dir_path):
file_path_lists = _get_per_label_file_path_lists(dir_path)
return [
TFRecordDataset(file_paths).repeat()
for file_paths in file_path_lists]
def _get_per_label_file_path_lists(dir_path):
file_paths = dir_path.glob('*.tfrecords')
file_paths = sorted(file_paths, key=lambda p: p.name)
path_lists_dict = defaultdict(list)
for file_path in file_paths:
label = _get_label(file_path)
path_lists_dict[label].append(str(file_path))
path_lists = [
path_lists_dict[label] for label in sorted(path_lists_dict.keys())]
return path_lists
def _get_label(file_path):
file_name = file_path.name
start_index = file_name.find('_') + 1
end_index = file_name.rfind('_')
return file_name[start_index:end_index]
def _parse_example(proto):
example = tf.io.parse_single_example(proto, _WAVEFORM_EXAMPLE_FEATURES)
# Get waveform tensor.
bytes_ = example['waveform']
waveform = tf.io.decode_raw(bytes_, out_type=tf.int16, little_endian=True)
waveform = _normalize_waveform(waveform)
call_start_index = example['call_start_index']
label = example['label']
one_hot_label = tf.one_hot(label, CLASS_COUNT)
clip_id = example['clip_id']
return (waveform, call_start_index, label, one_hot_label, clip_id)
def _normalize_waveform(waveform):
"""
Normalizes a waveform so it has 32-bit floating point samples in [-1, 1].
"""
return tf.cast(waveform, tf.float32) / 32768
def create_spectrogram_dataset(dir_path, settings):
"""
Creates a dataset of spectrograms.
Each dataset example has the form:
(spectrogram_slice, call_start_index, label, one_hot_label, clip_id)
"""
dataset = create_waveform_dataset_from_tfrecord_files(dir_path)
processor = _ExampleProcessor(settings)
dataset = dataset.map(
processor.preprocess_waveform,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
processor.compute_spectrogram,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
processor.slice_spectrogram_along_frequency_axis_with_shift,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(
processor.normalize_spectrogram_background,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
return dataset
def create_training_dataset(dir_path, settings):
"""
Creates a dataset suitable for training a neural network.
Each dataset example has the form:
(spectrogram_slice, one_hot_label)
All of the spectrogram slices of the dataset have the same shape,
of the form (spectrum count, bin count, 1). The exact shape depends
on the values of several `settings` attributes. The spectrogram slices
are suitable for input into a Keras convolutional neural network.
The `one_hot_label` of a dataset example is a vector whose length
is the number of example classes, and all of whose elements are
zero except for one, which has value one.
"""
dataset = create_spectrogram_dataset(dir_path, settings)
dataset = dataset.map(
_diddle_example,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
return dataset
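# Illustrative usage sketch: one way the training dataset might be fed to a Keras model.
# The tfrecord_dir and settings arguments are hypothetical and supplied by the caller;
# because the underlying per-label TFRecord datasets repeat indefinitely, model.fit
# needs an explicit steps_per_epoch, e.g.
#     model.fit(_example_training_input_fn(dir_path, settings), steps_per_epoch=100)
def _example_training_input_fn(tfrecord_dir, settings, batch_size=128):
    dataset = create_training_dataset(tfrecord_dir, settings)
    return dataset.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)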
def _diddle_example(gram, call_start_index, label, one_hot_label, clip_id):
# Reshape gram for input into Keras CNN.
gram = tf.expand_dims(gram, 2)
# Return only gram and one-hot label, discarding other data.
return gram, one_hot_label
class _ExampleProcessor:
"""
Dataset example processor.
A dataset example processor prepares dataset examples for input to
a neural network during training or inference. It performs waveform
slicing, waveform modifications for dataset augmentation, and
spectrogram computation.
"""
def __init__(self, settings):
self._settings = settings
s = settings
sample_rate = s.waveform_sample_rate
# Get waveform slice call start index range in samples.
self._waveform_slice_min_call_start_index = \
_s2f(s.waveform_slice_min_call_start_time, sample_rate)
self._waveform_slice_max_call_start_index = \
_s2f(s.waveform_slice_max_call_start_time, sample_rate)
# Get waveform slice length in samples.
self._waveform_slice_length = \
_s2f(s.waveform_slice_duration, sample_rate)
# Get low-level spectrogram settings.
(self._window_size, self._hop_size, self._dft_size,
self._freq_start_index, self._freq_end_index) = \
_get_low_level_spectrogram_settings(s)
self._window_fn = tf.signal.hann_window
def preprocess_waveform(self, waveform, call_start_index, *args):
"""
Preprocesses one input waveform.
Slices and applies data augmentations to the specified waveform
according to this preprocessor's settings.
"""
s = self._settings
waveform, call_start_index = \
self._slice_waveform(waveform, call_start_index)
if s.waveform_amplitude_scaling_data_augmentation_enabled:
waveform = self._scale_waveform_amplitude(waveform)
return (waveform, call_start_index) + tuple(args)
def _slice_waveform(self, waveform, call_start_index):
min_index = self._waveform_slice_min_call_start_index
max_index = self._waveform_slice_max_call_start_index
if min_index == max_index:
slice_call_start_index = min_index
else:
slice_call_start_index = \
tf.random.uniform((), min_index, max_index, dtype=tf.int64)
slice_start_index = call_start_index - slice_call_start_index
slice_end_index = slice_start_index + self._waveform_slice_length
waveform_slice = waveform[slice_start_index:slice_end_index]
return waveform_slice, slice_call_start_index
def _scale_waveform_amplitude(self, waveform):
max_abs = tf.math.reduce_max(tf.math.abs(waveform))
if max_abs == 0:
# waveform samples are all zero
return waveform
else:
# waveform samples are not all zero
# Find scale factor that would make maximum absolute waveform
# value one.
max_factor = _f32(1) / max_abs
# Find scale factor that would reduce RMS waveform value to
# 1 / 256. Yield 1 if RMS value is already less than 1 / 256.
sum_squared = tf.math.reduce_sum(waveform * waveform)
size = tf.cast(tf.size(waveform), tf.float32)
rms = tf.math.sqrt(sum_squared / size)
min_factor = tf.math.minimum(_f32(1), _f32(1 / 256) / rms)
# Choose random factor between `min_factor` and `max_factor`,
# with distribution uniform on log scale.
max_log = tf.math.log(max_factor)
min_log = tf.math.log(min_factor)
log_factor = tf.random.uniform(
(), min_log, max_log, dtype=tf.float32)
factor = tf.math.exp(log_factor)
# Scale waveform by chosen factor.
return factor * waveform
def compute_spectrogram(self, waveform, *args):
"""Computes the spectrogram of a waveform."""
s = self._settings
# Compute STFT. To use `tf.signal.stft`, we must add a leading
# unit dimension to the waveform tensor. After the call to
# `tf.signal.stft` we effectively remove the corresponding
# dimension of the resulting `stfts` tensor.
waveforms = tf.expand_dims(waveform, 0)
stfts = tf.signal.stft(
waveforms, self._window_size, self._hop_size, self._dft_size,
self._window_fn)
stft = stfts[0]
# Get spectrogram, i.e. squared magnitude of STFT.
gram = tf.math.real(stft * tf.math.conj(stft))
# gram = tf.abs(stft) ** 2
# Normalize spectrogram values so a full-scale, bin-centered
# sinusoid has a value of one with a rectangular window.
# TODO: Consider using a different normalization scheme that
# yields more consistent values (proportional to the spectral
# density, with units of watts per hertz) for noise across
# different sample rates, window sizes, and DFT sizes. This
# is what we'd like to use for spectrogram display, and it
# seems that we might as well use it here, too. It isn't
# necessary to build a working system, but the consistency
# might be helpful, for example for dataset visualization.
normalizing_scale_factor = 1 / (self._window_size / 2) ** 2
gram *= normalizing_scale_factor
# Take spectrogram log and apply affine transform to put
# full scale sinusoids at about 100 dB.
gram = tf.math.log(gram + s.spectrogram_log_epsilon)
decibel_scale_factor = 10 / math.log(10)
gram = 100 + decibel_scale_factor * gram
return (gram,) + tuple(args)
def slice_spectrogram_along_frequency_axis(self, gram, *args):
gram = gram[..., self._freq_start_index:self._freq_end_index]
return (gram,) + tuple(args)
def normalize_spectrogram_background(self, gram, *args):
s = self._settings
rank = s.spectrogram_background_normalization_percentile_rank
if rank is not None:
ranks = tf.constant([rank])
percentiles = _get_spectrogram_percentiles(gram, ranks)
percentiles = tf.reshape(percentiles, (1, tf.size(percentiles)))
gram = gram - percentiles
return (gram,) + tuple(args)
def slice_spectrogram_along_frequency_axis_with_shift(self, gram, *args):
# Get frequency shift in bins.
max_shift = self._settings.max_spectrogram_frequency_shift
shift = tf.random.uniform(
(), -max_shift, max_shift + 1, dtype=tf.int64)
gram = gram[
..., self._freq_start_index + shift:self._freq_end_index + shift]
return (gram,) + tuple(args)
def _s2f(seconds, sample_rate):
frames = signal_utils.seconds_to_frames(seconds, sample_rate)
return tf.cast(frames, tf.int64)
def _get_low_level_spectrogram_settings(settings):
s = settings
fs = s.waveform_sample_rate
s2f = signal_utils.seconds_to_frames
# spectrogram
window_size = s2f(s.spectrogram_window_size, fs)
fraction = s.spectrogram_hop_size / 100
hop_size = s2f(s.spectrogram_window_size * fraction, fs)
dft_size = tfa_utils.get_dft_size(window_size)
# frequency slicing
f2i = tfa_utils.get_dft_bin_num
freq_start_index = f2i(s.spectrogram_start_freq, fs, dft_size)
freq_end_index = f2i(s.spectrogram_end_freq, fs, dft_size) + 1
return (window_size, hop_size, dft_size, freq_start_index, freq_end_index)
def _f32(x):
return tf.cast(x, tf.float32)
_MAX_GRAM_VALUE = 120
def _get_spectrogram_percentiles(gram, percentile_ranks):
# Round gram values to nearest integer.
gram = tf.cast(tf.round(gram), tf.int32)
# Clip values.
gram = tf.clip_by_value(gram, 0, _MAX_GRAM_VALUE)
# Transpose gram so first dimension is frequency.
gram = tf.transpose(gram)
# print('rounded, clipped, and transposed spectrogram:')
# print(gram)
def accumulate_counts(x):
length = _MAX_GRAM_VALUE + 1
counts = tf.math.bincount(x, minlength=length, maxlength=length)
return tf.cumsum(counts)
cumulative_counts = tf.map_fn(accumulate_counts, gram)
# print()
# print('cumulative sums of rounded bin value counts:')
# print(cumulative_counts)
shape = tf.shape(gram)
bin_count = shape[0]
spectrum_count = shape[1]
percentile_ranks = tf.cast(percentile_ranks, tf.float32)
thresholds = percentile_ranks / 100. * tf.cast(spectrum_count, tf.float32)
thresholds = tf.cast(tf.round(thresholds), tf.int32)
thresholds = tf.reshape(thresholds, (1, len(thresholds)))
thresholds = tf.tile(thresholds, (bin_count, 1))
percentiles = tf.searchsorted(cumulative_counts, thresholds)
# print()
# print('percentiles:')
# print(percentiles)
return tf.cast(percentiles, tf.float32)
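# Illustrative check: in eager mode the percentile helper can be exercised directly.
# The constant spectrogram below is hypothetical; every 50th-percentile background value
# should come back equal to that constant.
def _example_spectrogram_percentiles():
    gram = tf.fill((10, 4), 37.0)  # 10 spectra x 4 frequency bins
    return _get_spectrogram_percentiles(gram, tf.constant([50]))  # -> (4, 1) tensor of 37s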
def _get_spectrogram_slice_length(settings):
s = settings
slice_duration = s.waveform_slice_duration
window_size = s.spectrogram_window_size
hop_size = window_size * s.spectrogram_hop_size / 100
return 1 + int(round((slice_duration - window_size) / hop_size))
def _slice_spectrogram(gram, slice_length):
# Get tensor of consecutive spectrogram slices.
slices = tf.signal.frame(gram, slice_length, frame_step=1, axis=0)
# Add trailing dimension for input into Keras CNN.
slices = tf.expand_dims(slices, 3)
return slices
def get_spectrogram_slice_shape(settings):
spectrum_count = _get_spectrogram_slice_length(settings)
_, _, _, freq_start_index, freq_end_index = \
_get_low_level_spectrogram_settings(settings)
bin_count = freq_end_index - freq_start_index
return (spectrum_count, bin_count, 1)
| 31.866928 | 78 | 0.658561 | 2,037 | 16,284 | 4.993127 | 0.198331 | 0.034412 | 0.031659 | 0.017304 | 0.264281 | 0.187985 | 0.134107 | 0.105004 | 0.068037 | 0.057418 | 0 | 0.01022 | 0.266949 | 16,284 | 510 | 79 | 31.929412 | 0.841836 | 0.259641 | 0 | 0.11588 | 0 | 0 | 0.013481 | 0 | 0 | 0 | 0 | 0.001961 | 0 | 1 | 0.11588 | false | 0 | 0.030043 | 0.004292 | 0.261803 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d77eefc723d0800ce089899d057030dfd89c7e35 | 1,772 | py | Python | docs/conf.py | jklymak/mpl-sphinx-theme | bc5d0c8f4ca68f117653ab88943b315ff317752c | [
"BSD-3-Clause"
] | null | null | null | docs/conf.py | jklymak/mpl-sphinx-theme | bc5d0c8f4ca68f117653ab88943b315ff317752c | [
"BSD-3-Clause"
] | null | null | null | docs/conf.py | jklymak/mpl-sphinx-theme | bc5d0c8f4ca68f117653ab88943b315ff317752c | [
"BSD-3-Clause"
] | null | null | null | import datetime
# Configuration file for the Sphinx documentation builder for
# matplotlib projects.
# Release mode enables optimizations and other related options.
is_release_build = tags.has('release') # noqa
# -- Project information -----------------------------------------------------
project = "Matplotlib Sphinx Theme"
copyright = (
f"2012 - {datetime.datetime.now().year} The Matplotlib development team"
)
author = "Matplotlib Developers"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# -- Options for HTML output -------------------------------------------------
html_theme = "mpl_sphinx_theme"
html_favicon = "_static/favicon.ico"
html_theme_options = {
"logo_link": "https://matplotlib.org/stable/",
# collapse_navigation in pydata-sphinx-theme is slow, so skipped for local
# and CI builds https://github.com/pydata/pydata-sphinx-theme/pull/386
"collapse_navigation": not is_release_build,
"show_prev_next": False,
"native_site": False
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ["static"]
| 36.163265 | 78 | 0.685102 | 222 | 1,772 | 5.351351 | 0.581081 | 0.037037 | 0.023569 | 0.025253 | 0.082492 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004617 | 0.14447 | 1,772 | 48 | 79 | 36.916667 | 0.779024 | 0.661964 | 0 | 0 | 0 | 0 | 0.453287 | 0.051903 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058824 | 0 | 0.058824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d77fa98b9f3dba26f26b1d07cc54e5f1856ca3e6 | 1,286 | py | Python | detdata/cli.py | theaiscope/detdata | c30fc5eb798260af20f111d3662f38c493f654fe | [
"MIT"
] | 1 | 2020-06-28T10:19:10.000Z | 2020-06-28T10:19:10.000Z | detdata/cli.py | i008/detdata | c30fc5eb798260af20f111d3662f38c493f654fe | [
"MIT"
] | null | null | null | detdata/cli.py | i008/detdata | c30fc5eb798260af20f111d3662f38c493f654fe | [
"MIT"
] | 1 | 2019-10-23T15:27:25.000Z | 2019-10-23T15:27:25.000Z | import os
import begin
from detdata.mxio import csv_to_mxrecords, json_labels_to_csv
@begin.subcommand
def parse_coco_like(coco_labels_dir: 'dir to coco-like ds' = None, out_path: 'target path' = None):
if coco_labels_dir is None or out_path is None:
raise ValueError("Please provide the necessary inputs; run 'python cli.py parse_coco_like --help' for help")
csv_out = os.path.join(out_path, 'dataset_{}.csv')
json_labels_to_csv(coco_labels_dir, output_csv_file=csv_out)
csv_train = csv_out.format('train')
csv_valid = csv_out.format('valid')
csv_to_mxrecords(csv_train, coco_labels_dir, out_path)
csv_to_mxrecords(csv_valid, coco_labels_dir, out_path)
@begin.subcommand
def csv_to_mxindex(csv_index_file: 'CSV index file with annotations',
base_dir: 'Directory where the images mentioned in csv index are',
output_path: 'Where to save mxindex and mxrecord files'):
"""
:param csv_index_file: Csv file with annotations and filenames
:param base_dir: base_dir joined with fname in csv should give a valid path to the image
:param output_path: where to store mxrecord files
:return:
"""
csv_to_mxrecords(csv_index_file, base_dir, output_path)
@begin.start
def main():
pass
| 32.974359 | 110 | 0.729393 | 201 | 1,286 | 4.373134 | 0.363184 | 0.028441 | 0.073948 | 0.05802 | 0.045506 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193624 | 1,286 | 38 | 111 | 33.842105 | 0.847637 | 0.16563 | 0 | 0.095238 | 0 | 0 | 0.249042 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.047619 | 0.142857 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7808762748f6f8810cca799f2b5d0c038a9f651 | 1,049 | py | Python | Class/Multi_layer_Perceptron.py | dangdu259e/MachineLearning_HUS_2021 | 413d3a52cdc262d10a8043404e07c2f5319fabea | [
"Apache-2.0"
] | null | null | null | Class/Multi_layer_Perceptron.py | dangdu259e/MachineLearning_HUS_2021 | 413d3a52cdc262d10a8043404e07c2f5319fabea | [
"Apache-2.0"
] | null | null | null | Class/Multi_layer_Perceptron.py | dangdu259e/MachineLearning_HUS_2021 | 413d3a52cdc262d10a8043404e07c2f5319fabea | [
"Apache-2.0"
] | null | null | null | from __future__ import division, print_function, unicode_literals
import math
import numpy as np
import matplotlib.pyplot as plt
# Initialize the data
N = 100 # number of points per class
d0 = 2 # dimensionality
C = 3 # number of classes
X = np.zeros((d0, N*C)) # data matrix (each column = single example)
y = np.zeros(N*C, dtype='uint8') # class labels
for j in range(C):
ix = range(N*j,N*(j+1))
r = np.linspace(0.0,1,N) # radius
t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
X[:,ix] = np.c_[r*np.sin(t), r*np.cos(t)].T
y[ix] = j
# lets visualize the data:
# plt.scatter(X[:N, 0], X[:N, 1], c=y[:N], s=40, cmap=plt.cm.Spectral)
plt.plot(X[0, :N], X[1, :N], 'bs', markersize = 7);
plt.plot(X[0, N:2*N], X[1, N:2*N], 'ro', markersize = 7);
plt.plot(X[0, 2*N:], X[1, 2*N:], 'g^', markersize = 7);
# plt.axis('off')
plt.xlim([-1.5, 1.5])
plt.ylim([-1.5, 1.5])
cur_axes = plt.gca()
cur_axes.axes.get_xaxis().set_ticks([])
cur_axes.axes.get_yaxis().set_ticks([])
plt.savefig('EX.png', bbox_inches='tight', dpi = 600)
plt.show() | 31.787879 | 70 | 0.626311 | 215 | 1,049 | 2.986047 | 0.455814 | 0.012461 | 0.037383 | 0.042056 | 0.079439 | 0.062305 | 0 | 0 | 0 | 0 | 0 | 0.050279 | 0.146806 | 1,049 | 33 | 71 | 31.787879 | 0.667039 | 0.240229 | 0 | 0 | 0 | 0 | 0.02799 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.16 | 0 | 0.16 | 0.04 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7839182ef6cd112a988b7b93dff4e9e28ed1079 | 69,475 | py | Python | code/loader.py | ContextLab/brainfit-paper | 68ad38df2b22d97fd7fcf5686771e47094a08d43 | [
"MIT"
] | 1 | 2022-03-21T16:24:43.000Z | 2022-03-21T16:24:43.000Z | code/loader.py | ContextLab/brainfit-paper | 68ad38df2b22d97fd7fcf5686771e47094a08d43 | [
"MIT"
] | null | null | null | code/loader.py | ContextLab/brainfit-paper | 68ad38df2b22d97fd7fcf5686771e47094a08d43 | [
"MIT"
] | null | null | null | # noinspection PyPackageRequirements
import datawrangler as dw
import os
import sys
import numpy as np
import pandas as pd
import ast
import json
import datetime
import quail
import nltk
import warnings
import pickle
import datetime as dt
# noinspection PyPackageRequirements
from spellchecker import SpellChecker
from glob import glob as lsdir
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
from scipy.spatial.distance import cdist, pdist
from scipy.stats import wasserstein_distance, pearsonr, zscore
from sklearn.linear_model import LinearRegression
from flair.models import TextClassifier
from flair.data import Sentence
import brainiak.eventseg.event as event
BASE_DIR = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
DATA_DIR = os.path.join(BASE_DIR, 'data')
def load_raw():
datadir = os.path.join(DATA_DIR, 'raw_formatted')
files = lsdir(os.path.join(datadir, '*.csv'))
skip = ['data_descriptors.csv', 'event_descriptors.csv', 'id_filename_key.csv']
# noinspection PyShadowingNames
files = [f for f in files if os.path.split(f)[-1] not in skip]
# noinspection PyShadowingNames
raw_data = [pd.read_csv(f) for f in files]
loaded = []
subjects = []
for i, x in enumerate(raw_data):
# noinspection PyBroadException
try:
y = x.pivot_table(index=['datetime'], columns='variable', aggfunc=lambda a: a)
y.columns = [c[1] for c in y.columns]
loaded.append(y)
subjects.append(f'P{i}')
except:
print(f'error loading data: {files[i]}')
pass
return loaded, subjects
def parse_data(d):
datadir = os.path.join(DATA_DIR, 'raw_formatted')
non_exp_descriptors = pd.read_csv(os.path.join(datadir, 'data_descriptors.csv'))
exp_descriptors = pd.read_csv(os.path.join(datadir, 'event_descriptors.csv'))
def variable_type(v):
def helper(descriptors):
if 'variable name' in descriptors.columns:
field = 'variable name'
else:
field = 'exp_event'
# noinspection PyShadowingNames
inds = np.where([x.strip() == v.strip() for x in descriptors[field].values])[0]
if len(inds) > 0:
description = descriptors.iloc[inds[0]]['description']
else:
raise Exception('description not found')
if any([keyword in description for keyword in ['fitbit', 'sleep', 'activ', 'sedent', 'elevation',
'floors', 'step', 'battery', 'cardio']]):
return 'fitbit'
elif any([keyword in v.lower() for keyword in ['fb_', 'cal', 'bodyfat', 'water', 'peak', 'weight', 'hr',
'oor', 'sync', 'device', 'bmi']]):
return 'fitbit'
elif any([keyword in description for keyword in ['clear', 'instruction', 'difficult', 'language', 'gender',
'coffee', 'color', 'today', 'plan', 'motiv', 'year',
'current', 'degree', 'freq', 'feedback', 'race',
'stress', 'impair']]):
return 'survey'
elif any([keyword in v.lower() for keyword in ['freq', 'setting']]):
return 'survey'
elif any([((keyword in description) or (keyword in v.lower())) for keyword in ['pres', 'rec', 'task',
'word', 'position', 'delay',
'movie', 'experiment']]):
return 'experiment'
else:
return 'meta'
if v.lower() == 'utc':
return 'meta'
elif v.lower() == 'tracker_features':
return 'fitbit'
elif v.lower() in ['recent_meds_injuries', 'job_activity', 'tracker_sync_today', 'typical_stress']:
return 'survey'
elif v.lower() in ['movie_sent_recall', 'movie_sent_recall_delay']:
return 'experiment'
# noinspection PyBroadException
try:
return helper(non_exp_descriptors)
except:
# noinspection PyBroadException
try:
return helper(exp_descriptors)
except:
if any([keyword in v.lower() for keyword in ['pres', 'rec', 'task', 'word', 'position', 'delay',
'resp']]):
return 'experiment'
else:
return 'untagged'
parsed = {}
for c in d.columns:
x = variable_type(c)
if x in parsed.keys():
parsed[x] = parsed[x].merge(pd.DataFrame(d[c]), how='outer', right_index=True, left_index=True)
else:
parsed[x] = pd.DataFrame(d[c])
return parsed
def simplify_dict_list(x, subjs):
combined = {'participants': subjs}
for i, d in enumerate(x):
for k in d.keys():
if k in combined.keys():
combined[k].append(d[k])
else:
combined[k] = [d[k]]
return combined
def get_stats(parsed, stat_dict):
stacked = {}
subjs = parsed.pop('participants', None)
for k in parsed.keys():
stacked[k] = dw.stack(parsed[k], keys=subjs)
stats = pd.DataFrame(columns=list(stat_dict.keys()), index=subjs)
for s in stat_dict.keys():
stats[s] = stat_dict[s](stacked)
return stats
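# Illustrative sketch: get_stats expects stat_dict to map a statistic's name to a function
# of the stacked data dictionary. The single (hypothetical) entry below just counts the
# number of stacked fitbit rows per participant.
def _example_stat_dict():
    return {'n fitbit rows': lambda stacked: stacked['fitbit'].groupby(level=0).size()}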
def compute_recent_and_change(x, index, name, ref_days, base_days, today=None):
# noinspection PyShadowingNames
def average(d, f):
warnings.simplefilter('ignore')
return pd.Series(index=index,
data=[np.nanmean([eval(j) for j in i[f] if type(j) is str])
if (type(i) is pd.DataFrame and i.shape[0] > 0 and f in i.columns)
else np.nan for i in d])
results = {'recent': average(extract_days_prior(x, ref_days, today=today), name)}
# noinspection PyShadowingNames, PyTypeChecker
baseline = average(extract_days_prior(x, base_days, today=[t - dt.timedelta(days=ref_days) for t in today]), name)
results['recent / baseline'] = results['recent'] / baseline
return results
def dict_diff(a, b):
keys = list(set(a.keys()).union(set(b.keys())))
diffs = {}
for k in keys:
diffs[k] = a[k] - b[k]
return diffs
def fitness_stats(parsed, reference_days=7, baseline_days=180):
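"""
Build one row of Fitbit-derived statistics per participant: static body measures (BMI, body
fat, weight) plus, for sleep, activity, and heart-rate features, the recent average and the
recent / baseline change computed by compute_recent_and_change.
"""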
stats = {}
index = np.arange(len(parsed['fitbit']))
# static body stats
# bmi
stats['BMI'] = pd.Series(index=index,
data=[eval(x) if type(x) is str and not np.isclose(eval(x), 0.0) else np.nan for x in
get_raw_feature(parsed['fitbit'], 'bmi')])
# bodyfat
stats['body fat'] = pd.Series(index=index,
data=[eval(x) if type(x) is str and not np.isclose(eval(x), 0.0) else np.nan for x in
get_raw_feature(parsed['fitbit'], 'bodyfat')])
# weight
stats['weight'] = pd.Series(index=index,
data=[eval(x) if type(x) is str and not np.isclose(eval(x), 0.0) else np.nan for x in
get_raw_feature(parsed['fitbit'], 'weight')])
# dynamic body stats (for each, compute most recent + change in reference vs. baseline)
# resting heart rate
stats['resting heart rate'] = compute_recent_and_change(parsed['fitbit'], index, 'resting_HR', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# sleep hours
stats['sleep duration'] = compute_recent_and_change(parsed['fitbit'], index, 'sleep_duration', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# sleep efficiency
stats['sleep efficiency'] = compute_recent_and_change(parsed['fitbit'], index, 'sleep_duration', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# activity summary (recent + change in reference vs. baseline)
# steps
stats['steps'] = compute_recent_and_change(parsed['fitbit'], index, 'steps', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# distance
stats['distance'] = compute_recent_and_change(parsed['fitbit'], index, 'distance', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# elevation
stats['elevation'] = compute_recent_and_change(parsed['fitbit'], index, 'elevation', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# floors
stats['floors climbed'] = compute_recent_and_change(parsed['fitbit'], index, 'floors', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# activity details (recent + change in reference vs. baseline)
# light activity minutes
stats['light activity'] = compute_recent_and_change(parsed['fitbit'], index, 'light_act_mins', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# fairly active minutes
stats['fair activity'] = compute_recent_and_change(parsed['fitbit'], index, 'fair_act_mins', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# very active minutes
stats['high intensity activity'] = compute_recent_and_change(parsed['fitbit'], index, 'very_act_mins',
reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# cal - cal_bmr
cal = compute_recent_and_change(parsed['fitbit'], index, 'cal', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
cal_bmr = compute_recent_and_change(parsed['fitbit'], index, 'cal_bmr', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
stats['excess calories'] = dict_diff(cal, cal_bmr)
# heart-specific activity details (recent + change in reference vs. baseline)
# out of range minutes
stats['out-of-range HR'] = compute_recent_and_change(parsed['fitbit'], index, 'oor_mins', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# fat burn minutes
stats['fat burn HR'] = compute_recent_and_change(parsed['fitbit'], index, 'fb_mins', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# cardio minutes
stats['cardio HR'] = compute_recent_and_change(parsed['fitbit'], index, 'cardio_mins', reference_days,
baseline_days, today=get_test_day(parsed['experiment']))
# peak minutes
stats['peak HR'] = compute_recent_and_change(parsed['fitbit'], index, 'peak_mins', reference_days, baseline_days,
today=get_test_day(parsed['experiment']))
# today's heart rate variability (average) -- cannot compute change
test_day = extract_days_prior(parsed['fitbit'], 1, today=get_test_day(parsed['experiment']))
hrv = pd.Series(index=index)
for i, x in enumerate(test_day):
if 'todayHRval' in x.columns:
hrv.loc[index[i]] = np.nanstd([eval(h) if type(h) is str else np.nan for h in x['todayHRval']])
stats['HR variability'] = hrv
# not including the following-- almost no one logged them:
# - food and water intake (recent + change in reference vs. baseline)
# - water logged
# - food calories logged
x = alt_dict2df(stats)
return pd.DataFrame(index=parsed['participants'], data=x.values, columns=x.columns)
def lemmatize(word, lemmatizer=None):
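"""
Lemmatize a word (or list of words) with WordNet, using the word's part-of-speech tag from
nltk.pos_tag to pick the appropriate WordNet POS constant.
"""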
if lemmatizer is None:
lemmatizer = WordNetLemmatizer()
if type(word) == list:
return [lemmatize(w, lemmatizer=lemmatizer) for w in word]
tag = nltk.pos_tag([word])[0][1]
# nltk.pos_tag returns Penn Treebank tags (e.g. 'JJ', 'VBD', 'RB'), so match on the first letter
if tag.startswith('J'):
pos = wordnet.ADJ
elif tag.startswith('V'):
pos = wordnet.VERB
elif tag.startswith('R'):
pos = wordnet.ADV
else:
pos = wordnet.NOUN
return lemmatizer.lemmatize(word, pos)
# noinspection PyShadowingNames
def get_list_items(data, lists=None, pres_prefix='', rec_prefix='', aggregate_presentations=False, debug=False):
if lists is None:
lists = [1, 2, 3, 4]
spell = SpellChecker(language='en')
wordpool = pd.read_csv(os.path.join(DATA_DIR, 'task', 'wordpool.csv'))
known_mistakes = pd.read_csv(os.path.join(DATA_DIR, 'task', 'spellcheck.csv'))
def get_features(word):
if ', ' in word:
return [get_features(w) for w in word.split(', ')]
# remove extraneous characters
extras = [',', '.', '!', '?', ' ']
word = ''.join([c for c in word if c not in extras])
# basic spelling correction
if type(word) is str:
word = spell.correction(word.capitalize())
else:
raise ValueError(f'cannot process words of type {type(word)}')
# known mistakes
mistake = known_mistakes.query(f'misspelled == "{word.upper()}"')
if len(mistake) > 0:
word = mistake['corrected'].values[0]
w = wordpool.query(f'WORD == "{word.upper()}"')
if len(w) == 0:
# try lemmatizing the word
lemmatized_word = lemmatize(word.lower())
lw = wordpool.query(f'WORD == "{lemmatized_word.upper()}"')
if len(lw) > 0:
w = lw
word = lemmatized_word
if len(w) == 0:
if debug:
print(f'unrecognized word: {word.upper()}')
return {'item': word.upper(),
'word_length': len(word),
'starting_letter': word[0].upper()}
else:
return {'item': word.upper(),
'word_length': len(word),
'starting_letter': word[0].upper(),
'category': w['CATEGORY'].values[0].upper(),
'size': w['SIZE'].values[0].upper()}
pres_words = []
rec_words = []
for subj_data in data:
list_presentations = []
list_recalls = []
# noinspection PyBroadException
try:
for i, x in enumerate(lists):
presented_items = [get_features(w) for w in subj_data[f'{pres_prefix}{x}'] if type(w) is not float]
if aggregate_presentations:
list_presentations.extend([dw.core.update_dict(i, {'list': x}) for i in presented_items])
if i == 0:
try:
list_recalls.extend([get_features(w) for w in subj_data[f'{rec_prefix}'] if type(w) is not float])
except KeyError:
list_recalls.extend([])
else:
list_presentations.append(presented_items)
try:
next_recalls = []
for w in subj_data[f'{rec_prefix}{x}']:
if type(w) is str:
next_features = get_features(w)
if type(next_features) is dict:
next_recalls.append(next_features)
elif type(next_features) is list:
next_recalls.extend(next_features)
list_recalls.append(next_recalls)
except KeyError:
list_recalls.append([])
if aggregate_presentations:
pres_words.append([list_presentations])
rec_words.append([list_recalls])
else:
pres_words.append(list_presentations)
rec_words.append(list_recalls)
except Exception as e:
raise Exception('error extracting list items (thrown to help with debugging)') from e
return quail.Egg(pres=pres_words, rec=rec_words)
def sliding_windows(text, width=10, end='.'):
punctuation = ['.', ',', '-', '?', '!']
if len(text) == 0:
return None
elif type(text) is list:
return [sliding_windows(t, width=width, end=end) for t in text]
windows = []
if (end is None) or (len(end) == 0):
parts = text.split()
end = ''
else:
parts = text.split(end)
# strip punctuation from the split parts before building the windows
for p in punctuation:
parts = [part.replace(p, '') for part in parts]
for i in range(np.max([len(parts) - width, 1])):
windows.append((end + ' ').join([p.strip() for p in parts[i:np.min([i + width, len(parts)])]]) + end)
windows = [w.strip().lower() for w in windows if len(w.strip()) > 1]
return [w for w in windows if len(w) > 0]
# noinspection PyTypeChecker
def get_events(transcript, model, width=10, end='.', max_k=50):
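"""
Segment a transcript into events: embed overlapping sliding windows of text, fit EventSegment
models for each candidate number of events k, and keep the k whose segmentation best separates
within-event from across-event window correlations (largest Wasserstein distance). Returns one
averaged embedding per detected event, or the mean embedding (or an empty array) if
segmentation fails.
"""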
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/
# sherlock_helpers/functions.py
def create_diag_mask(arr, diag_start=0, diag_limit=None):
diag_mask = np.zeros_like(arr, dtype=bool)
if diag_limit is None:
diag_limit = find_diag_limit(arr)
# noinspection PyShadowingNames
for k in range(diag_start, diag_limit):
ix = kth_diag_indices(diag_mask, k)
diag_mask[ix] = True
return diag_mask
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/
# sherlock_helpers/functions.py
# noinspection PyShadowingNames
def find_diag_limit(arr):
for k in range(arr.shape[0]):
d = np.diag(arr, k=k)
if ~(d > 0).any():
return k
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/
# sherlock_helpers/functions.py
# noinspection PyShadowingNames
def kth_diag_indices(arr, k):
row_ix, col_ix = np.diag_indices_from(arr)
if k == 0:
return row_ix, col_ix
else:
return row_ix[:-k], col_ix[k:]
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/notebooks/main/
# eventseg_analysis.ipynb
# noinspection PyShadowingNames
def reduce_model(m, ev):
"""
Reduce a model based on event labels
"""
w = (np.round(ev.segments_[0]) == 1).astype(bool)
return np.array([m[wi, :].mean(axis=0) for wi in w.T])
if type(transcript) is list:
return [get_events(t, model, width=width, end=end) for t in transcript]
embeddings = dw.wrangle(sliding_windows(transcript, width=width, end=end), text_kwargs={'model': model}).values
ks = list(range(2, np.min([embeddings.shape[0], max_k])))
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/notebooks/main/
# eventseg_analysis.ipynb
mcorr = np.corrcoef(embeddings)
scores = []
for k in ks:
ev = event.EventSegment(k)
ev.fit(embeddings)
i1, i2 = np.where(np.round(ev.segments_[0]) == 1)
w = np.zeros_like(ev.segments_[0])
w[i1, i2] = 1
mask = np.dot(w, w.T).astype(bool)
# Create mask such that the maximum temporal distance
# for within and across correlations is the same
local_mask = create_diag_mask(mask)
within_vals = np.reshape(mcorr[mask * local_mask], [-1, 1])
across_vals = np.reshape(mcorr[~mask * local_mask], [-1, 1])
try:
scores.append(wasserstein_distance(within_vals.ravel(), across_vals.ravel()))
except ValueError:
scores.append(-np.inf)
try:
if np.all(np.isinf(scores)):
raise ValueError('cannot segment events')
opt_k = ks[np.argmax(scores)]
ev = event.EventSegment(opt_k)
ev.fit(embeddings)
return reduce_model(embeddings, ev)
except ValueError:
# noinspection PyBroadException
try:
return np.atleast_2d(embeddings.mean(axis=0))
except:
return np.array([])
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/sherlock_helpers/
# functions.py
def r2z(r):
with np.errstate(invalid='ignore', divide='ignore'):
return 0.5 * (np.log(1 + r) - np.log(1 - r))
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/sherlock_helpers/
# functions.py
def z2r(z):
with np.errstate(invalid='ignore', divide='ignore'):
return (np.exp(2 * z) - 1) / (np.exp(2 * z) + 1)
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/sherlock_helpers/sherlock_helpers/
# functions.py
def corr_mean(rs, axis=0):
return z2r(np.nanmean([r2z(r) for r in rs], axis=axis))
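# r2z / z2r are the Fisher z-transform (arctanh) and its inverse (tanh); corr_mean averages
# correlations in z-space and converts the result back, which avoids the bias of averaging
# correlation coefficients directly.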
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/notebooks/main/
# precision_distinctiveness_fig.ipynb
def precision(video, recall):
if type(recall) is list:
return pd.Series(index=np.arange(len(recall)), data=[precision(video, r) for r in recall])
if np.prod(recall.shape) == 0:
return np.nan
return corr_mean(np.max(1 - cdist(video, recall, 'correlation'), 0))
# source: https://github.com/ContextLab/sherlock-topic-model-paper/blob/master/code/notebooks/main/
# precision_distinctiveness_fig.ipynb
# noinspection PyArgumentList
def distinctiveness(video, recall):
if type(recall) is list:
return pd.Series(index=np.arange(len(recall)), data=[distinctiveness(video, r) for r in recall])
if np.prod(recall.shape) == 0:
return np.nan
corrmat = 1 - cdist(video, recall, 'correlation')
z_corrs = zscore(corrmat, axis=0)
return z_corrs.max(axis=0).mean()
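# precision: Fisher-z mean of each recall event's best correlation with any video event;
# distinctiveness: mean of the per-recall-event maximum z-scored correlations, i.e. how
# selectively a recall event matches one part of the video rather than all of it.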
def get_pres_inds(items, presented_items, inds, exclude_nans=True):
pres_inds = []
for i in inds:
next_matches = [j for j, w in enumerate(items) if w.upper() == presented_items[i].upper()]
if (not exclude_nans) and len(next_matches) == 0:
pres_inds.append(np.nan)
elif len(next_matches) == 1:
pres_inds.extend(next_matches)
else:
pres_inds.extend(next_matches)
return np.array(pres_inds)
def get_temporal_clustering_vocab(correct, items, presented_items):
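"""
Temporal clustering score for the vocab task: the proportion of adjacent study-list pairs in
which both items were answered correctly, normalized by the number of correct responses (pairs
whose list positions cannot be resolved are excluded via the correction term).
"""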
adjacent = []
for i in range(len(presented_items) - 1):
adjacent.append([get_pres_inds(items, presented_items, [i, i+1], exclude_nans=False)])
n_both_correct = 0
correction = 0
for a in adjacent:
try:
if correct[int(a[0][0])] and correct[int(a[0][1])]:
n_both_correct += 1
except (TypeError, ValueError):
correction += 1
if np.sum(correct) > (correction + 1):
return n_both_correct / (np.sum(correct) - 1 - correction)
else:
return np.nan
def get_mean_error_dist(responses, presented_items):
correct_positions = get_pres_inds([r[3] for r in responses], presented_items, range(len(presented_items)))
observed_positions = get_pres_inds([r[0] for r in responses], presented_items, range(len(presented_items)))
diffs = np.abs(correct_positions - observed_positions)
return np.mean(diffs[diffs > 0])
# noinspection PyShadowingNames
def spatial_estimation_error(data, n, metric='mean'):
if type(n) is list:
if len(n) == 0:
return np.nan
errors = spatial_estimation_error(data, n[0], metric=metric)
for i in n[1:]:
errors = errors + spatial_estimation_error(data, i, metric=metric)
return errors / len(n)
errors = pd.Series(index=np.arange(len(data)))
for s in range(len(data)):
pres = [eval(x) for x in data[s][f'spatial_pres_{n}'] if type(x) is str]
resp = [eval(x) for x in data[s][f'spatial_resp_{n}'] if type(x) is str]
next_errors = []
for i in range(len(pres)):
target_positions = {x[2]: [x[0], x[1]] for x in pres[i]}
response_positions = {x[2]: [x[0], x[1]] for x in resp[i][-1]}
trial_errors = []
for k in target_positions.keys():
trial_errors.append(float(cdist(np.atleast_2d(target_positions[k]),
np.atleast_2d(response_positions[k]))))
next_errors.append(np.mean(trial_errors))
if metric == 'mean':
errors[s] = np.mean(next_errors)
elif metric == 'var':
errors[s] = np.var(next_errors)
elif metric == 'std':
errors[s] = np.std(next_errors)
else:
raise ValueError(f'unknown metric: {metric}')
return errors
def dict2df(d):
# source: https://stackoverflow.com/questions/24988131/
# nested-dictionary-to-multiindex-dataframe-where-dictionary-keys-are-column-label
def flatten_dict(dictionary, t=tuple(), keys=None):
if keys is None:
keys = {}
for key, val in dictionary.items():
t = t + (key,)
if isinstance(val, dict):
flatten_dict(val, t, keys)
else:
keys.update({t: val})
t = t[:-1]
return keys
return pd.DataFrame(flatten_dict(d))
def alt_dict2df(stats):
# convert stats to a dataframe
keys = list(stats.keys())
assert type(stats[keys[0]]) is pd.Series, 'first key must be a series; cannot flatten stats dictionary'
df = pd.DataFrame({keys[0]: stats[keys[0]].values}, index=stats[keys[0]].index)
merged_columns = [('', keys[0])]
for k in keys[1:]:
if type(stats[k]) is pd.Series:
merged_columns.append(('', k))
next_df = pd.DataFrame({k: stats[k].values}, index=stats[k].index)
elif type(stats[k]) is pd.DataFrame:
columns = [(k, c) for c in stats[k].columns]
merged_columns.extend(columns)
next_df = pd.DataFrame(stats[k].values, columns=columns)
elif type(stats[k]) is dict:
s = dict2df(stats[k])
columns = [(k, c[0]) for c in s.columns]
merged_columns.extend(columns)
next_df = pd.DataFrame(s.values, columns=columns)
else:
raise ValueError(f'unsupported datatype ({k}): {type(stats[k])}')
df = df.merge(next_df, how='left', left_index=True, right_index=True)
return pd.DataFrame(df.values, columns=pd.MultiIndex.from_tuples(merged_columns))
# noinspection PyTypeChecker
def get_video_and_recall_trajectories(data, width=10, end='.', doc_model=None, doc_name=None, window_model=None, window_name=None):
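"""
Embed the story transcript and each participant's immediate and delayed recall transcripts
using sliding text windows, and segment each into events with get_events. Embeddings and event
trajectories are cached as pickles under DATA_DIR/preprocessed/embeddings (keyed by the
document- and window-model names), so repeated calls reload the saved results.
"""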
if doc_model is None:
doc_model = {'model': 'TransformerDocumentEmbeddings', 'args': ['bert-base-uncased'], 'kwargs': {}}
if doc_name is None:
doc_name = doc_model['args'][0]
if window_model is None:
window_model = {'model': 'SentenceTransformerDocumentEmbeddings', 'args': ['stsb-bert-large'], 'kwargs': {}}
if window_name is None:
window_name = window_model['args'][0]
preprocessing_dir = os.path.join(DATA_DIR, 'preprocessed')
transcript = dw.io.load(os.path.join(DATA_DIR, 'task', 'storytext.txt'), dtype='text').lower()
immediate_recall = []
delayed_recall = []
for s in range(len(data)):
if 'movie_sent_recall' in data[s].keys():
immediate_recall.append([r.strip().lower() for r in data[s]['movie_sent_recall']
if type(r) is str and len(r) > 1])
else:
immediate_recall.append([])
if 'movie_sent_recall_delay' in data[s].keys():
delayed_recall.append([r.strip().lower() for r in data[s]['movie_sent_recall_delay']
if type(r) is str])
else:
delayed_recall.append([])
immediate_transcripts = [' '.join(x) for x in immediate_recall]
delayed_transcripts = [' '.join(x) for x in delayed_recall]
embedding_dir = os.path.join(preprocessing_dir, 'embeddings')
if not os.path.exists(embedding_dir):
os.makedirs(embedding_dir)
embeddings_fname = os.path.join(embedding_dir, f'embeddings_{doc_name}.pkl')
if not os.path.exists(embeddings_fname):
transcript_embedding = dw.wrangle(sliding_windows(transcript, width=width, end=end), text_kwargs={'model': doc_model})
immediate_embeddings = dw.wrangle(sliding_windows(immediate_transcripts, width=width, end=end),
text_kwargs={'model': doc_model})
delayed_embeddings = dw.wrangle(sliding_windows(delayed_transcripts, width=width, end=end),
text_kwargs={'model': doc_model})
with open(embeddings_fname, 'wb') as f:
pickle.dump([transcript_embedding, immediate_embeddings, delayed_embeddings], f)
with open(embeddings_fname, 'rb') as f:
transcript_embedding, immediate_embeddings, delayed_embeddings = pickle.load(f)
trajectories_fname = os.path.join(embedding_dir, f'trajectories_{window_name}.pkl')
if not os.path.exists(trajectories_fname):
transcript_events = get_events(transcript, window_model, width=width, end=end)
immediate_events = get_events(immediate_transcripts, window_model, width=width, end=end)
delayed_events = get_events(delayed_transcripts, window_model, width=width, end=end)
with open(trajectories_fname, 'wb') as f:
pickle.dump([transcript_events, immediate_events, delayed_events], f)
with open(trajectories_fname, 'rb') as f:
transcript_events, immediate_events, delayed_events = pickle.load(f)
return transcript_embedding, immediate_embeddings, delayed_embeddings,\
transcript_events, immediate_events, delayed_events, immediate_transcripts, delayed_transcripts
# noinspection PyTypeChecker
def behavioral_stats(parsed):
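"""
Compute behavioral statistics per participant for the four tasks (free recall, naturalistic
recall, vocab learning, spatial learning), split into immediate and delayed measures where a
delayed version of the task exists, and return them as a participants x (task, measure)
multi-indexed dataframe.
"""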
warnings.simplefilter('ignore')
stats = {task: {'immediate': {}, 'delayed': {}} for task in ['free recall', 'naturalistic recall',
'vocab learning', 'spatial learning']}
# free recall (immediate + delayed)
immediate_fr = get_list_items(parsed['experiment'], lists=[1, 2, 3, 4], pres_prefix='pres_word_',
rec_prefix='rec_word_')
delayed_fr = get_list_items(parsed['experiment'], pres_prefix='pres_word_', rec_prefix='rec_word_delay',
aggregate_presentations=True)
immediate_spc = immediate_fr.analyze('spc').data.groupby('Subject').mean()
delayed_spc = delayed_fr.analyze('spc').data.groupby('Subject').mean()
immediate_fingerprints = immediate_fr.analyze('fingerprint').data.groupby('Subject').mean()
delayed_fingerprints = delayed_fr.analyze('fingerprint').data.groupby('Subject').mean()
# proportion of words recalled
stats['free recall']['immediate']['recall proportion'] = immediate_spc.mean(axis=1)
stats['free recall']['delayed']['recall proportion'] = delayed_spc.mean(axis=1)
# average primacy effect
stats['free recall']['immediate']['primacy'] = \
immediate_spc.iloc[:, :3].mean(axis=1) / immediate_spc.iloc[:, 5:10].mean(axis=1)
stats['free recall']['delayed']['primacy'] = \
delayed_spc.iloc[:, :16].mean(axis=1) / delayed_spc.iloc[:, 16:48].mean(axis=1)
# average recency effect
stats['free recall']['immediate']['recency'] = \
immediate_spc.iloc[:, -3:].mean(axis=1) / immediate_spc.iloc[:, 5:10].mean(axis=1)
stats['free recall']['delayed']['recency'] = \
delayed_spc.iloc[:, -16:].mean(axis=1) / delayed_spc.iloc[:, 16:48].mean(axis=1)
# average temporal clustering score
stats['free recall']['immediate']['clustering: temporal'] = immediate_fingerprints['temporal']
stats['free recall']['delayed']['clustering: temporal'] = delayed_fingerprints['temporal']
stats['free recall']['delayed']['clustering: list'] = delayed_fingerprints['list']
# average category clustering score
stats['free recall']['immediate']['clustering: category'] = immediate_fingerprints['category']
stats['free recall']['delayed']['clustering: category'] = delayed_fingerprints['category']
# average size clustering score
stats['free recall']['immediate']['clustering: size'] = immediate_fingerprints['size']
stats['free recall']['delayed']['clustering: size'] = delayed_fingerprints['size']
# average starting letter clustering score
stats['free recall']['immediate']['clustering: starting letter'] = immediate_fingerprints['starting_letter']
stats['free recall']['delayed']['clustering: starting letter'] = delayed_fingerprints['starting_letter']
# average word length clustering score
stats['free recall']['immediate']['clustering: word length'] = immediate_fingerprints['word_length']
stats['free recall']['delayed']['clustering: word length'] = delayed_fingerprints['word_length']
# movie task (immediate + delayed)
# proportion of multiple choice questions answered correctly (immediate only)
correct_responses = ['library', 'custodian', 'reading', 'professor', '1940s', '2', 'waxed the tops of bookshelves',
'never', '3am', 'less than high school', 'quiet']
stats['naturalistic recall']['immediate']['proportion correct'] =\
pd.Series(index=np.arange(len(parsed['experiment'])))
for s in range(len(parsed['experiment'])):
next_responses = [s.strip().lower() for s in
[eval(q) for q in parsed['experiment'][s]['movie_qs'] if type(q) is str][0]]
if next_responses[0] != 'no':
pass
n_correct = np.sum([correct == resp for correct, resp in zip(correct_responses, next_responses[1:])])
stats['naturalistic recall']['immediate']['proportion correct'][s] = n_correct / len(correct_responses)
# average semantic similarity between full transcript and full response
# the joined recall transcripts are also returned so they can be reused below (semantic match, sentence counts)
transcript_embedding, immediate_embeddings, delayed_embeddings,\
transcript_events, immediate_events, delayed_events, immediate_transcripts, delayed_transcripts = \
get_video_and_recall_trajectories(parsed['experiment'])
immediate_match = 1 - cdist(transcript_embedding, immediate_embeddings, metric='correlation')[0]
delayed_match = 1 - cdist(transcript_embedding, delayed_embeddings, metric='correlation')[0]
stats['naturalistic recall']['immediate']['semantic match'] = pd.Series(index=np.arange(len(parsed['experiment'])))
stats['naturalistic recall']['delayed']['semantic match'] = pd.Series(index=np.arange(len(parsed['experiment'])))
i = 0
j = 0
for s in np.arange(len(immediate_transcripts)):
if len(immediate_transcripts[s].strip()) > 0:
stats['naturalistic recall']['immediate']['semantic match'][s] = immediate_match[i]
i += 1
if len(delayed_transcripts[s].strip()) > 0:
stats['naturalistic recall']['delayed']['semantic match'][s] = delayed_match[j]
j += 1
# average precision
stats['naturalistic recall']['immediate']['precision'] = precision(transcript_events, immediate_events)
stats['naturalistic recall']['delayed']['precision'] = precision(transcript_events, delayed_events)
# average distinctiveness
stats['naturalistic recall']['immediate']['distinctiveness'] = distinctiveness(transcript_events, immediate_events)
stats['naturalistic recall']['delayed']['distinctiveness'] = distinctiveness(transcript_events, delayed_events)
# number of detected events in response
stats['naturalistic recall']['immediate']['n events'] = pd.Series(index=np.arange(len(parsed['experiment'])),
data=[e.shape[0] for e in immediate_events])
stats['naturalistic recall']['delayed']['n events'] = pd.Series(index=np.arange(len(parsed['experiment'])),
data=[e.shape[0] for e in delayed_events])
# number of sentences in response
stats['naturalistic recall']['immediate']['n sentences'] = \
pd.Series(index=np.arange(len(parsed['experiment'])), data=[len(t.split('.')) for t in immediate_transcripts])
stats['naturalistic recall']['delayed']['n sentences'] = \
pd.Series(index=np.arange(len(parsed['experiment'])), data=[len(t.split('.')) for t in delayed_transcripts])
# average number of sentences per event in response
stats['naturalistic recall']['immediate']['event length'] = \
stats['naturalistic recall']['immediate']['n sentences'] / stats['naturalistic recall']['immediate']['n events']
stats['naturalistic recall']['delayed']['event length'] = \
stats['naturalistic recall']['delayed']['n sentences'] / stats['naturalistic recall']['delayed']['n events']
# vocab learning (immediate + delayed)
stats['vocab learning']['immediate'] = {'p(correct): all': [], 'p(correct): early': [], 'p(correct): late': [],
'temporal clustering': [], 'error distance': [], 'similarity: correct': [],
'similarity: incorrect': []}
stats['vocab learning']['delayed'] = {'p(correct): all': [], 'p(correct): early': [], 'p(correct): late': [],
'reaction time': [], 'speed/accuracy': [], 'temporal clustering': [],
'error distance': [], 'similarity: correct': [],
'similarity: incorrect': []}
for s in range(len(parsed['experiment'])):
pres_eng = [eval(p)[2] for p in parsed['experiment'][s]['vocab_pres'] if type(p) is str]
pres_gle = [eval(p)[0] for p in parsed['experiment'][s]['vocab_pres'] if type(p) is str]
resp = [eval(p) for p in parsed['experiment'][s]['vocab_resp'] if type(p) is str]
resp_delayed = [eval(p) for p in parsed['experiment'][s]['vocab_resp_delay'] if type(p) is str]
correct = [r[0] == r[3] for r in resp]
correct_delayed = [r[0] == r[3] for r in resp_delayed]
# average proportion correct
stats['vocab learning']['immediate']['p(correct): all'].append(np.mean(correct))
stats['vocab learning']['delayed']['p(correct): all'].append(np.mean(correct_delayed))
# average early proportion correct
early_inds = get_pres_inds([r[3] for r in resp], pres_gle, [0, 1, 2])
early_inds_delayed = get_pres_inds([r[3] for r in resp_delayed], pres_gle, [0, 1, 2])
stats['vocab learning']['immediate']['p(correct): early'].append(np.mean(np.array(correct)[early_inds]))
stats['vocab learning']['delayed']['p(correct): early'].append(np.mean(np.array(correct_delayed)[
early_inds_delayed]))
# average late proportion correct
late_inds = get_pres_inds([r[3] for r in resp], pres_gle, [7, 8, 9])
late_inds_delayed = get_pres_inds([r[3] for r in resp_delayed], pres_gle, [7, 8, 9])
stats['vocab learning']['immediate']['p(correct): late'].append(np.mean(np.array(correct)[late_inds]))
stats['vocab learning']['delayed']['p(correct): late'].append(np.mean(np.array(correct_delayed)[
late_inds_delayed]))
# average reaction time (delayed only) -- not logged for immediate TODO: look further into this...
stats['vocab learning']['delayed']['reaction time'].append(np.mean([r[5] for r in resp_delayed]))
# p(correct) / average reaction time (delayed only)
stats['vocab learning']['delayed']['speed/accuracy'].append(
stats['vocab learning']['delayed']['p(correct): all'][-1] /\
stats['vocab learning']['delayed']['reaction time'][-1])
# temporal clustering: p(correct next | correct current)
stats['vocab learning']['immediate']['temporal clustering'].append(
get_temporal_clustering_vocab(correct, [r[3] for r in resp], pres_gle))
stats['vocab learning']['delayed']['temporal clustering'].append(
get_temporal_clustering_vocab(correct_delayed, [r[3] for r in resp_delayed], pres_gle))
# average error distance (how far away on the study list are errors?)
stats['vocab learning']['immediate']['error distance'].append(
get_mean_error_dist(resp, pres_gle))
stats['vocab learning']['delayed']['error distance'].append(
get_mean_error_dist(resp_delayed, pres_gle))
glove = {'model': 'WordEmbeddings', 'args': ['glove'], 'kwargs': {}}
eng_embeddings = dw.wrangle([w.lower() for w in pres_eng], text_kwargs={'model': glove})
# average pairwise semantic similarity of correct words (vs. all)
correct_inds = get_pres_inds([r[3] for i, r in enumerate(resp) if correct[i]], pres_gle, np.arange(10))
correct_inds_delayed = get_pres_inds([r[3] for i, r in enumerate(resp_delayed) if correct_delayed[i]], pres_gle,
np.arange(10))
incorrect_inds = get_pres_inds([r[3] for i, r in enumerate(resp) if not correct[i]], pres_gle, np.arange(10))
incorrect_inds_delayed = get_pres_inds([r[3] for i, r in enumerate(resp_delayed) if not correct_delayed[i]],
pres_gle, np.arange(10))
average_similarity = corr_mean(1 - pdist(eng_embeddings, metric='correlation'))
stats['vocab learning']['immediate']['similarity: correct'].append(
corr_mean(1 - pdist(eng_embeddings.loc[correct_inds], metric='correlation')) / average_similarity)
stats['vocab learning']['delayed']['similarity: correct'].append(
corr_mean(1 - pdist(eng_embeddings.loc[correct_inds_delayed], metric='correlation')) / average_similarity)
# semantic pairwise similarity of errors (vs. all)
stats['vocab learning']['immediate']['similarity: incorrect'].append(
corr_mean(1 - pdist(eng_embeddings.loc[incorrect_inds], metric='correlation')) / average_similarity)
stats['vocab learning']['delayed']['similarity: incorrect'].append(
corr_mean(1 - pdist(eng_embeddings.loc[incorrect_inds_delayed], metric='correlation')) / average_similarity)
for i in ['immediate', 'delayed']:
for j in stats['vocab learning'][i].keys():
stats['vocab learning'][i][j] = pd.Series(index=np.arange(len(parsed['experiment'])),
data=stats['vocab learning'][i][j])
# spatial task (immediate only-- no delayed task)
stats['spatial learning']['immediate'] = {}
# average estimation error (2 or 3 shapes)
stats['spatial learning']['immediate']['estimation error (2/3)'] = spatial_estimation_error(parsed['experiment'],
[2, 3])
# average estimation error (4 or 5 shapes)
stats['spatial learning']['immediate']['estimation error (4/5)'] = spatial_estimation_error(parsed['experiment'],
[4, 5])
# average estimation error (6 or 7 shapes)
stats['spatial learning']['immediate']['estimation error (6/7)'] = spatial_estimation_error(parsed['experiment'],
[6, 7])
# slope of estimation error vs. number of shapes
errors = pd.concat([spatial_estimation_error(parsed['experiment'], [i]) for i in range(2, 8)], axis=1).values
slopes = pd.Series(index=np.arange(len(parsed['experiment'])))
for s in range(len(parsed['experiment'])):
reg = LinearRegression().fit(np.atleast_2d(np.arange(2, 8)).T, np.atleast_2d(errors[s, :]).T)
slopes[s] = float(reg.coef_)
stats['spatial learning']['immediate']['error change by n shapes'] = slopes
# error (6 or 7 shapes) - error (2 or 3 shapes)
stats['spatial learning']['immediate']['estimation error (6/7) - (2/3)'] = spatial_estimation_error(
parsed['experiment'], [6, 7]) - spatial_estimation_error(parsed['experiment'], [2, 3])
# average error variability across different numbers of shapes
stats['spatial learning']['immediate']['error std dev (2/3)'] = spatial_estimation_error(parsed['experiment'],
[2, 3], metric='std')
# error variability (std dev) across trials with 4 or 5 shapes
stats['spatial learning']['immediate']['error std dev (4/5)'] = spatial_estimation_error(parsed['experiment'],
[4, 5], metric='std')
# error variability (std dev) across trials with 6 or 7 shapes
stats['spatial learning']['immediate']['error std dev (6/7)'] = spatial_estimation_error(parsed['experiment'],
[6, 7], metric='std')
# turn the dictionary into a dataframe (with MultiIndex columns)
x = dict2df(stats)
return pd.DataFrame(index=parsed['participants'], data=x.values, columns=x.columns)
def str2dt(s):
dt_format = '%Y-%m-%d %H:%M:%S.%f'
return dt.datetime.strptime(s, dt_format)
# noinspection PyShadowingNames
def get_raw_feature(x, name, return_idx=False, truncate=True):
if type(x) is list:
return [get_raw_feature(i, name, return_idx=return_idx) for i in x]
if name not in x.columns:
if return_idx:
return str2dt(x.index[-1]), 'np.nan'
else:
return 'np.nan'
if return_idx:
f = [(str2dt(idx), val) for idx, val in x[name].items() if type(val) is str]
else:
f = [i for i in x[name] if type(i) is str]
if len(f) > 0:
if truncate:
return f[0]
else:
return f
else:
return np.nan
def get_indicator_feature(x, name):
participant_vals = [[x.lower() for x in eval(r)] for r in get_raw_feature(x, name)]
vals = []
for x in participant_vals:
for y in x:
vals.append(y)
vals = np.unique(vals)
df = pd.DataFrame(columns=vals)
for v in vals:
df[v] = np.array([v in p for p in participant_vals], dtype=int)
return df
def extract_days_prior(x, n_days, today=None):
if type(x) is list:
if type(today) is not list:
today = [today] * len(x)
return [extract_days_prior(i, n_days, today=t) for i, t in zip(x, today)]
all_dates = [str2dt(i) for i in x.index.values]
if today is None:
today = all_dates[-1]
interval_start = today - dt.timedelta(days=n_days)
return x.loc[[d >= interval_start for d in all_dates]]
def get_test_day(experiment):
if type(experiment) is list:
return [get_test_day(x) for x in experiment]
return np.sort([str2dt(d) for d in experiment.dropna(how='all').index.values])[0]
def get_tracked_exercise(x, minimum_exercise_mins):
light_activity = get_raw_feature(x, 'light_act_mins', truncate=False)
light_activity = [eval(x) if type(x) is str else np.nan for x in light_activity]
medium_activity = get_raw_feature(x, 'fair_act_mins', truncate=False)
medium_activity = [eval(x) if type(x) is str else np.nan for x in medium_activity]
high_activity = get_raw_feature(x, 'very_act_mins', truncate=False)
high_activity = [eval(x) if type(x) is str else np.nan for x in high_activity]
return [(light >= minimum_exercise_mins) or
(medium >= minimum_exercise_mins) or
(high >= minimum_exercise_mins) for light, medium, high in
zip(light_activity, medium_activity, high_activity)]
def get_days_exercised(x, min_mins):
if type(x) is list:
y = [get_days_exercised(i, min_mins) for i in x]
return [i[0] for i in y], [i[1] for i in y]
all_dates = [str2dt(i) for i in x.index.values]
duration = all_dates[-1] - all_dates[0]
activity_columns = [c for c in x.columns if c in ['light_act_mins', 'fair_act_mins', 'very_act_mins']]
exercise = pd.DataFrame(x[activity_columns], dtype=float).sum(axis=1)
last_date = None
day_counter = 0
for idx, val in exercise.items():
if type(val) is str:
v = eval(val)
else:
v = val
if v > min_mins:
if (last_date is None) or ((str2dt(idx) - last_date) >= dt.timedelta(days=1)):
day_counter += 1
last_date = str2dt(idx)
return day_counter, duration
def get_sentiment(x, sentiment_classifier):
if type(x) is list:
return [get_sentiment(i, sentiment_classifier) for i in x]
s = Sentence(x)
sentiment_classifier.predict(s)
if len(s.labels) == 0:
return np.nan
else:
if s.labels[0].value == 'POSITIVE':
return s.labels[0]._score
elif s.labels[0].value == 'NEGATIVE':
return -s.labels[0]._score
else:
pass
def survey_stats(parsed, baseline_days=30):
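"""
Convert survey responses into numeric per-participant statistics: demographics, self-reported
health and behavior items mapped onto ordinal scales, agreement between self-reported and
Fitbit-tracked exercise, sentiment of free-text responses, and instruction-clarity and
task-difficulty ratings.
"""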
index = np.arange(len(parsed['survey']))
stats = {}
# age
birthyear = get_raw_feature(parsed['survey'], 'birthyear', return_idx=True)
stats['age'] = pd.Series(index=index,
data=[int(np.round(b[0].year - eval(b[1]))) if not np.isnan(eval(b[1])) else np.nan
for b in birthyear])
# gender
stats['gender'] = pd.get_dummies([g.lower() for g in get_raw_feature(parsed['survey'], 'gender')])
# race
stats['race'] = get_indicator_feature(parsed['survey'], 'race')
# degree
stats['degree'] = pd.get_dummies([x.lower() for x in get_raw_feature(parsed['survey'], 'degree')])
# number of fluent languages
stats['number fluent languages'] = pd.Series(index=index,
data=[len(eval(x)) for x in get_raw_feature(parsed['survey'],
'fluent_langs')])
# number of familiar languages
stats['number familiar languages'] = pd.Series(index=index,
data=[len(eval(x)) for x in get_raw_feature(parsed['survey'],
'familiar_langs')])
# color vision
stats['color vision'] = pd.Series(index=index,
data=[1 if c == 'Yes' else 0 for c in get_raw_feature(parsed['survey'],
'color_vision')])
# uncorrected visual impairments
stats['vision impaired'] = pd.Series(index=index,
data=[1 if c == 'Yes' else 0 for c in get_raw_feature(parsed['survey'],
'uncorr_impair')])
# number of medications or injuries
health_dict = {
'anxiety or depression': ['zoloft', 'welbutrin', 'trazodone', 'xanax', 'zoloft', 'sertraline'],
'high blood pressure': ['blood pressure', 'diuretic'],
'bipolar': ['lamictal'],
'hypothyroid': ['levo'],
'unspecified medications': ['some medication'],
'recent head injury': ['skull', 'concussion']}
meds = [x.strip().lower() for x in get_raw_feature(parsed['survey'], 'recent_meds_injuries')]
stats['health and wellness'] = pd.DataFrame(index=index, columns=list(health_dict.keys()))
for i, m in enumerate(meds):
for k in health_dict.keys():
if any([x in m for x in health_dict[k]]):
stats['health and wellness'].loc[i, k] = 1
else:
stats['health and wellness'].loc[i, k] = 0
# self-reported behaviors and mental state
# current stress level
stress_dict = {'very relaxed': -2,
'a little relaxed': -1,
'neutral': 0,
'a little stressed': 1,
'very stressed': 2}
stats['current stress'] = pd.Series(index=index,
data=[stress_dict[k.lower()] for k in get_raw_feature(parsed['survey'],
'current_stress')])
# typical stress level
stats['typical stress'] = pd.Series(index=index,
data=[stress_dict[k.lower()] for k in get_raw_feature(parsed['survey'],
'typical_stress')])
# current / typical stress level
stats['current / typical stress'] = stats['current stress'] / stats['typical stress']
# current alertness
alert_dict = {'very sluggish': -2,
'a little sluggish': -1,
'neutral': 0,
'a little alert': 1,
'very alert': 2}
stats['alertness'] = pd.Series(index=index,
data=[alert_dict[k.lower()] for k in get_raw_feature(parsed['survey'],
'current_alert')])
# reported water cups
stats['water intake'] = pd.Series(index=index, data=[int(c[0]) for c in get_raw_feature(parsed['fitbit'],
'water_cups')])
# tracked water cups: exclude (values reported by fitbit are unrealistic-- e.g., hundreds of cups of water/day;
# this could be due to a processing or interpretation issue, or a bug in the fitbit API)
# reported water cups / tracked water cups: exclude (see above)
# reported coffee cups
stats['coffee intake'] = pd.Series(index=index, data=[int(c[0]) for c in get_raw_feature(parsed['survey'],
'coffee_cups')])
# living setting
stats['location'] = pd.get_dummies([x.lower() for x in get_raw_feature(parsed['survey'], 'live_setting')])
# typical job activity level
job_activity_dict = {'sedentary (e.g., desk job)': 0,
'slightly active': 1,
'active': 2,
'highly active (e.g., heavy lifting)': 3}
stats['occupation activity level'] = pd.Series(index=index,
data=[job_activity_dict[k.lower()] for k in
get_raw_feature(parsed['survey'], 'job_activity')])
# self-reported exercised today?
stats['reported exercise today'] = pd.Series(index=index,
data=[1 if r.lower() == 'yes' else 0 for r in
get_raw_feature(parsed['survey'], 'exercise_today')])
# agreement between measured and reported exercise today (null if haven't synced tracker)
minimum_exercise_min = 1
tracker_synced = [1 if r.lower() == 'yes' else 0 for r in get_raw_feature(parsed['survey'], 'tracker_sync_today')]
tracked_exercise = get_tracked_exercise(extract_days_prior(parsed['fitbit'], 1,
today=get_test_day(parsed['experiment'])),
minimum_exercise_min)
stats['accurate exercise report'] = pd.Series(index=index,
data=[x if tracker_synced[i] else np.nan for i, x in
enumerate([r == a for r, a in
zip(stats['reported exercise today'],
tracked_exercise)])])
# self-reported plan to exercise today
stats['plan to exercise'] = pd.Series(index=index,
data=[1 if p.lower() == 'yes' else 0 for p in
get_raw_feature(parsed['survey'], 'exercise_plan')])
# self-reported typical exercise frequency
frequency_dict = {
'0 days per week': 0,
'1 day per week': 1,
'2 days per week': 2,
'3 days per week': 3,
'4 days per week': 4,
'5 days per week': 5,
'6 days per week': 6,
'7 days per week': 7,
'8-14 times per week': 11,
'more than 14 times per week': 15,
'2 días por semana': 2,
'4 días a la semana': 4,
'7 días a la semana': 7}
stats['reported exercise frequency'] = pd.Series(index=index,
data=[frequency_dict[k.lower()] for k in
get_raw_feature(parsed['survey'], 'exercise_freq')])
# agreement between measured and reported exercise frequency in the baseline interval
# compute as ((duration / 7) * observed) / max(reported, 7)
# note 1: only total active minutes are logged for days prior to the day before testing, so we can only know
# whether *some* exercise occurred on a given day, not whether multiple exercise sessions were performed
# note 2: if the duration is less than 1 week (min_duration), set agreement to np.nan
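# e.g. (illustrative numbers only): 8 tracked exercise days over a 28-day window and a reported
# frequency of 3 days/week gives (8 * 28 / 7) / max(3, 7) = 32 / 7 ~= 4.57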
min_duration = 7
observed_exercise, duration = get_days_exercised(extract_days_prior(parsed['fitbit'], baseline_days,
today=get_test_day(parsed['experiment'])),
minimum_exercise_min)
stats['reported exercise accuracy'] = pd.Series(index=index,
data=[(ob * d.days / 7) / np.max([r, 7]) if d.days >= min_duration else np.nan
for ob, d, r in zip(observed_exercise,
duration,
stats['reported exercise frequency'])])
# valence (sentiment) of reported motivation for exercising
classifier = TextClassifier.load('en-sentiment')
motivations = get_raw_feature(parsed['survey'], 'exercise_motiv')
unique_motivations = np.unique(motivations)
motivation_sentiments = [get_sentiment(x.lower(), classifier) for x in unique_motivations]
exercise_motivation_dict = {m: s for m, s in zip(unique_motivations, motivation_sentiments)}
stats['exercise motivation sentiment'] = pd.Series(index=index,
data=[exercise_motivation_dict[m] for m in motivations])
# valence (sentiment) of reported motivation for wearing tracker
motivations = get_raw_feature(parsed['survey'], 'tracker_motiv')
stats['tracker motivation sentiment'] = pd.Series(index=index,
data=[get_sentiment(m, classifier) for m in motivations])
# task understanding
clarity_dict = {
'very unclear': -2,
'unclear': -1,
'clear': 1,
'very clear': 2
}
# clarity of fitbit instructions
stats['clarity: fitbit setup'] = pd.Series(index=index,
data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'], 'fitbit_clear')])
# clarity of free recall instructions
stats['clarity: free recall (immediate)'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'wordlist_clear')])
# clarity of delayed free recall instructions
stats['clarity: free recall (delayed)'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'delayed_wordlist_clear')])
# clarity of vocab instructions
stats['clarity: vocab learning (immediate)'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'vocab_clear')])
# clarity of delayed vocab instructions
stats['clarity: vocab learning (delayed)'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'delayed_vocab_clear')])
# clarity of spatial task instructions
stats['clarity: spatial learning (immediate)'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'spatial_clear')])
# clarity of movie task instructions
stats['clarity: naturalistic recall (immediate)'] = pd.Series(index=index,
data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'movie_clear')])
# clarity of delayed movie task instructions
stats['clarity: naturalistic recall (delayed)'] = pd.Series(index=index,
data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'],
'delayed_movie_clear')])
# clarity of survey instructions
stats['clarity: survey'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'], 'survey_clear')])
# overall clarity of instructions
stats['clarity: overall'] = pd.Series(index=index, data=[clarity_dict[c.lower()] for c in
get_raw_feature(parsed['survey'], 'overall_clear')])
# self-reported task performance
difficulty_dict = {
'very difficult': -2,
'difficult': -1,
'medium': 0,
'easy': 1,
'very easy': 2
}
# difficulty of free recall task
stats['difficulty: free recall (immediate)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'wordlist_difficult')])
# difficulty of delayed free recall task
stats['difficulty: free recall (delayed)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'delayed_wordlist_difficult')])
# difficulty of vocab task
stats['difficulty: vocab learning (immediate)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'vocab_difficult')])
# difficulty of delayed vocab task
stats['difficulty: vocab learning (delayed)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'delayed_vocab_difficult')])
# difficulty of spatial task
stats['difficulty: spatial learning (immediate)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'spatial_difficult')])
# difficulty of movie task
stats['difficulty: naturalistic recall (immediate)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'movie_difficult')])
# difficulty of delayed movie task
stats['difficulty: naturalistic recall (delayed)'] = pd.Series(index=index,
data=[difficulty_dict[d.lower()] for d in
get_raw_feature(parsed['survey'],
'delayed_movie_difficult')])
# feedback
feedback = get_raw_feature(parsed['survey'], 'feedback')
# number of words of feedback on task
stats['feedback: number of words'] = pd.Series(index=index,
data=[len(x.strip().split(' ')) - 1 for x in feedback])
# average sentiment of feedback on task
stats['feedback: sentiment'] = pd.Series(index=index,
data=get_sentiment(feedback, classifier))
x = alt_dict2df(stats)
return pd.DataFrame(index=parsed['participants'], data=x.values, columns=x.columns)
def get_formatted_data():
data, participants = load_raw()
return simplify_dict_list([parse_data(d) for d in data], participants)
def load(recent=7, baseline=30):
preprocessed_dir = os.path.join(DATA_DIR, 'preprocessed')
if not os.path.exists(preprocessed_dir):
os.makedirs(preprocessed_dir)
behavioral_fname = os.path.join(preprocessed_dir, 'behavior.pkl')
survey_fname = os.path.join(preprocessed_dir, f'survey_{baseline}.pkl')
fitbit_fname = os.path.join(preprocessed_dir, f'fitbit_{recent}_{baseline}.pkl')
if not (os.path.exists(behavioral_fname) and
os.path.exists(survey_fname) and
os.path.exists(fitbit_fname)):
parsed_data = get_formatted_data()
else:
parsed_data = None
if not os.path.exists(behavioral_fname):
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
behavioral = behavioral_stats(parsed_data)
with open(behavioral_fname, 'wb') as f:
pickle.dump(behavioral, f)
with open(behavioral_fname, 'rb') as f:
behavioral = pickle.load(f)
if not os.path.exists(survey_fname):
survey = survey_stats(parsed_data, baseline_days=baseline)
with open(survey_fname, 'wb') as f:
pickle.dump(survey, f)
with open(survey_fname, 'rb') as f:
survey = pickle.load(f)
if not os.path.exists(fitbit_fname):
fitbit = fitness_stats(parsed_data, baseline_days=baseline, reference_days=recent)
with open(fitbit_fname, 'wb') as f:
pickle.dump(fitbit, f)
with open(fitbit_fname, 'rb') as f:
fitbit = pickle.load(f)
return behavioral, fitbit, survey
| 46.658831 | 131 | 0.566794 | 8,063 | 69,475 | 4.744636 | 0.099467 | 0.011711 | 0.01835 | 0.019762 | 0.483375 | 0.429893 | 0.368308 | 0.303639 | 0.257371 | 0.228853 | 0 | 0.008748 | 0.310558 | 69,475 | 1,488 | 132 | 46.690188 | 0.789933 | 0.098697 | 0 | 0.168639 | 0 | 0 | 0.14802 | 0.006888 | 0 | 0 | 0 | 0.000672 | 0.000986 | 1 | 0.043393 | false | 0.002959 | 0.022682 | 0.000986 | 0.147929 | 0.014793 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7847f179de7557b5446958536008adc3c981f95 | 4,564 | py | Python | python/pipeline/util.py | loveululu/Serving | 3a64af45b87f5a8a75ecd20059423d320849295d | [
"Apache-2.0"
] | 2 | 2021-11-16T02:36:03.000Z | 2022-03-23T11:45:46.000Z | python/pipeline/util.py | loveululu/Serving | 3a64af45b87f5a8a75ecd20059423d320849295d | [
"Apache-2.0"
] | 1 | 2021-02-24T08:34:45.000Z | 2021-02-24T08:34:45.000Z | python/pipeline/util.py | loveululu/Serving | 3a64af45b87f5a8a75ecd20059423d320849295d | [
"Apache-2.0"
] | 1 | 2020-06-16T01:50:49.000Z | 2020-06-16T01:50:49.000Z | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import logging
import threading
import multiprocessing
import multiprocessing.managers
from contextlib import closing
import socket
if sys.version_info.major == 2:
import Queue
from Queue import PriorityQueue
elif sys.version_info.major == 3:
import queue as Queue
from queue import PriorityQueue
else:
raise Exception("Error Python version")
_LOGGER = logging.getLogger(__name__)
class AvailablePortGenerator(object):
def __init__(self, start_port=12000):
self._curr_port = start_port
@staticmethod
def port_is_available(port):
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
sock.settimeout(2)
result = sock.connect_ex(('0.0.0.0', port))
if result != 0:
return True
else:
return False
def next(self):
while not AvailablePortGenerator.port_is_available(self._curr_port):
self._curr_port += 1
self._curr_port += 1
return self._curr_port - 1
_AvailablePortGenerator = AvailablePortGenerator()
def GetAvailablePortGenerator():
return _AvailablePortGenerator
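# Illustrative usage (hypothetical):
# port = GetAvailablePortGenerator().next()
# scans upward from start_port and returns the first port with no local service listening on it.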
class NameGenerator(object):
# use unsafe-id-generator
def __init__(self, prefix):
self._idx = -1
self._prefix = prefix
self._id_generator = UnsafeIdGenerator(1000000000000000000)
def next(self):
next_id = self._id_generator.next()
return "{}{}".format(self._prefix, next_id)
class UnsafeIdGenerator(object):
def __init__(self, max_id, base_counter=0, step=1):
self._base_counter = base_counter
self._counter = self._base_counter
self._step = step
self._max_id = max_id # for reset
def next(self):
if self._counter >= self._max_id:
self._counter = self._base_counter
_LOGGER.info("Reset Id: {}".format(self._counter))
next_id = self._counter
self._counter += self._step
return next_id
class ThreadIdGenerator(UnsafeIdGenerator):
def __init__(self, max_id, base_counter=0, step=1, lock=None):
# if you want to use your lock, you may need to use Reentrant-Lock
self._lock = lock
if self._lock is None:
self._lock = threading.Lock()
super(ThreadIdGenerator, self).__init__(max_id, base_counter, step)
def next(self):
next_id = None
with self._lock:
if self._counter >= self._max_id:
self._counter = self._base_counter
_LOGGER.info("Reset Id: {}".format(self._counter))
next_id = self._counter
self._counter += self._step
return next_id
class ProcessIdGenerator(UnsafeIdGenerator):
def __init__(self, max_id, base_counter=0, step=1, lock=None):
# if you want to use your lock, you may need to use Reentrant-Lock
self._lock = lock
if self._lock is None:
self._lock = multiprocessing.Lock()
self._base_counter = base_counter
self._counter = multiprocessing.Manager().Value('i', 0)
self._step = step
self._max_id = max_id
def next(self):
next_id = None
with self._lock:
if self._counter.value >= self._max_id:
self._counter.value = self._base_counter
_LOGGER.info("Reset Id: {}".format(self._counter.value))
next_id = self._counter.value
self._counter.value += self._step
return next_id
def PipelineProcSyncManager():
"""
add PriorityQueue into SyncManager, see more:
https://stackoverflow.com/questions/25324560/strange-queue-priorityqueue-behaviour-with-multiprocessing-in-python-2-7-6?answertab=active#tab-top
"""
class PipelineManager(multiprocessing.managers.SyncManager):
pass
PipelineManager.register("PriorityQueue", PriorityQueue)
m = PipelineManager()
m.start()
return m
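# Illustrative usage (hypothetical): the returned manager exposes PriorityQueue alongside the
# usual SyncManager proxies, e.g.
# m = PipelineProcSyncManager()
# q = m.PriorityQueue()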
| 31.694444 | 148 | 0.668054 | 568 | 4,564 | 5.128521 | 0.294014 | 0.064195 | 0.046344 | 0.02197 | 0.346722 | 0.290079 | 0.290079 | 0.264676 | 0.246825 | 0.246825 | 0 | 0.017945 | 0.242989 | 4,564 | 143 | 149 | 31.916084 | 0.825181 | 0.20596 | 0 | 0.402062 | 0 | 0 | 0.022575 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.134021 | false | 0.010309 | 0.113402 | 0.010309 | 0.402062 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78545c2ca94fc142611dfb949001139fb9dfaed | 5,634 | py | Python | examples/svae_eval.py | florianmai/SentEval | ed2cc8d6eb41c8b7c2bd71c3dd9afb340f923e66 | [
"BSD-3-Clause"
] | 3 | 2019-03-05T11:22:55.000Z | 2020-07-03T04:33:59.000Z | examples/svae_eval.py | florianmai/SentEval | ed2cc8d6eb41c8b7c2bd71c3dd9afb340f923e66 | [
"BSD-3-Clause"
] | null | null | null | examples/svae_eval.py | florianmai/SentEval | ed2cc8d6eb41c8b7c2bd71c3dd9afb340f923e66 | [
"BSD-3-Clause"
] | 1 | 2019-01-27T11:19:43.000Z | 2019-01-27T11:19:43.000Z | # Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
from __future__ import absolute_import, division, unicode_literals
import os
import sys
import json
import logging
import argparse
import ipdb as pdb
import torch
from utils import get_tasks, write_results
# Set PATHs
if "cs.nyu.edu" in os.uname()[1]:
PATH_PREFIX = '/misc/vlgscratch4/BowmanGroup/awang/'
PROJ_PREFIX = '/home/awang/'
else:
PATH_PREFIX = '/beegfs/aw3272/'
PROJ_PREFIX = ''
# import senteval
PATH_SENTEVAL = '../'
sys.path.insert(0, PATH_SENTEVAL)
import senteval
PATH_TO_DATA = '../data/senteval_data/'
PATH_TO_GLOVE = PATH_PREFIX + 'raw_data/GloVe/glove.840B.300d.txt'
MODEL_PATH = PROJ_PREFIX + 'projects/Sentence-VAE'
sys.path.insert(0, MODEL_PATH)
from model import SentenceVAE, SentenceAE
def prepare(params, samples):
#params.infersent.build_vocab([' '.join(s) for s in samples], tokenize=False)
pass
def batcher(params, sentences):
means = params.model.encode(sentences)
return means
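# SentEval calls batcher on each minibatch of sentences; the encoder's output for the batch
# (the latent means) is returned directly as the sentence embeddings.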
def main(arguments):
parser = argparse.ArgumentParser(description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter)
# Logistics
parser.add_argument("--cuda", help="CUDA id to use", type=int, default=0)
parser.add_argument("--seed", help="Random seed", type=int, default=19)
parser.add_argument("--use_pytorch", help="1 to use PyTorch", type=int, default=1)
parser.add_argument("--out_dir", help="Dir to write preds to", type=str, default='')
parser.add_argument("--log_file", help="File to log to", type=str)
parser.add_argument("--load_data", help="0 to read data from scratch", type=int, default=1)
# Task options
parser.add_argument("--tasks", help="Tasks to evaluate on, as a comma separated list", type=str)
parser.add_argument("--max_seq_len", help="Max sequence length", type=int, default=40)
# Model options
parser.add_argument("--ckpt_path", help="Path to ckpt to load", type=str,
default=PATH_PREFIX + 'ckpts/svae/glue_svae/best.mdl')
parser.add_argument("--vocab_path", help="Path to vocab to use", type=str,
default=PATH_PREFIX + 'processed_data/svae/glue_v2/vocab.json')
parser.add_argument("--model", help="Word emb dim", type=str, default='vae')
parser.add_argument("--embedding_size", help="Word emb dim", type=int, default=300)
parser.add_argument("--word_dropout", help="Word emb dim", type=float, default=0.5)
parser.add_argument("--hidden_size", help="RNN size", type=int, default=512)
parser.add_argument("--latent_size", help="Latent vector dim", type=int, default=16)
parser.add_argument("--num_layers", help="Number of encoder layers", type=int, default=1)
parser.add_argument("--bidirectional", help="1 for bidirectional", type=bool, default=False)
parser.add_argument("--rnn_type", help="Type of rnn", type=str, choices=['rnn', 'gru'],
default='gru')
parser.add_argument("--batch_size", help="Batch size to use", type=int, default=64)
# Classifier options
parser.add_argument("--cls_batch_size", help="Batch size to use", type=int, default=64)
args = parser.parse_args(arguments)
logging.basicConfig(format='%(asctime)s : %(message)s', level=logging.DEBUG)
if args.log_file:
fileHandler = logging.FileHandler(args.log_file)
logging.getLogger().addHandler(fileHandler)
logging.info(args)
# define senteval params
params_senteval = {'task_path': PATH_TO_DATA, 'usepytorch': args.use_pytorch, 'kfold': 10,
'max_seq_len': args.max_seq_len, 'batch_size': args.batch_size, 'load_data': args.load_data,
'seed': args.seed}
params_senteval['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': args.cls_batch_size,
'tenacity': 5, 'epoch_size': 4, 'cudaEfficient': True}
    # Load the sentence VAE/AE model
vocab = json.load(open(args.vocab_path, 'r'))
args.denoise = False
args.prob_swap, args.prob_drop = 0.0, 0.0
if args.model == 'vae':
model = SentenceVAE(args, vocab['w2i'],
#sos_idx=w2i['<sos>'], eos_idx=w2i['<eos>'], pad_idx=w2i['<pad>'],
#max_sequence_length=args.max_seq_len,
embedding_size=args.embedding_size,
rnn_type=args.rnn_type, hidden_size=args.hidden_size,
word_dropout=args.word_dropout, latent_size=args.latent_size,
num_layers=args.num_layers, bidirectional=args.bidirectional)
elif args.model == 'ae':
model = SentenceAE(args, vocab['w2i'],
embedding_size=args.embedding_size,
rnn_type=args.rnn_type, hidden_size=args.hidden_size,
word_dropout=args.word_dropout, latent_size=args.latent_size,
num_layers=args.num_layers, bidirectional=args.bidirectional)
model.load_state_dict(torch.load(args.ckpt_path))
model = model.cuda()
model.eval()
params_senteval['model'] = model
# Do SentEval stuff
se = senteval.engine.SE(params_senteval, batcher, prepare)
tasks = get_tasks(args.tasks)
results = se.eval(tasks)
if args.out_dir:
write_results(results, args.out_dir)
if not args.log_file:
print(results)
else:
logging.info(results)
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))
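# Illustrative invocation (paths and task names are placeholders, not part of the original script):
#   python svae_eval.py --tasks SST2,MRPC --model vae \
#       --ckpt_path /path/to/best.mdl --vocab_path /path/to/vocab.json --log_file svae_eval.log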
| 42.681818 | 104 | 0.665247 | 756 | 5,634 | 4.765873 | 0.300265 | 0.049958 | 0.094366 | 0.009992 | 0.185124 | 0.138218 | 0.138218 | 0.120455 | 0.120455 | 0.120455 | 0 | 0.01312 | 0.20181 | 5,634 | 131 | 105 | 43.007634 | 0.788081 | 0.088924 | 0 | 0.105263 | 0 | 0 | 0.193195 | 0.035198 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031579 | false | 0.010526 | 0.115789 | 0 | 0.157895 | 0.010526 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7857f2ca253076e028288fef562d41bf9db77e5 | 418 | py | Python | examples/echo.py | yukinotenshi/picoweb | 67f8759b830ba267ffe686011f5f5bd2b0a56fc5 | [
"BSD-2-Clause"
] | null | null | null | examples/echo.py | yukinotenshi/picoweb | 67f8759b830ba267ffe686011f5f5bd2b0a56fc5 | [
"BSD-2-Clause"
] | null | null | null | examples/echo.py | yukinotenshi/picoweb | 67f8759b830ba267ffe686011f5f5bd2b0a56fc5 | [
"BSD-2-Clause"
] | null | null | null | from lib.pico_ipc_adapter import PicoIPCAdapter, PacketTypes
from time import sleep
def main():
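    # Echo example: register a callback (id 100) that prints incoming messages, then
    # loop forever, polling for messages and sending a 1000-character test payload.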
ipc = PicoIPCAdapter("input.txt", "output.txt")
ipc.register_callback(100, lambda x: print("pico says({}) {}".format(len(x), x.decode('utf-8'))))
x = ''.join(['x' for _ in range(1000)])
while 1:
ipc.check_message()
ipc.send_message(bytearray(x.encode("utf-8")), 99, 0)
sleep(0.05) | 34.833333 | 101 | 0.638756 | 61 | 418 | 4.278689 | 0.688525 | 0.030651 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.046921 | 0.184211 | 418 | 12 | 102 | 34.833333 | 0.718475 | 0 | 0 | 0 | 0 | 0 | 0.109785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.3 | 0.1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d786c2c37649111c916c2a5ce5b692835ee143ae | 1,560 | py | Python | api/jobs/autograde_recalculate.py | Racheltrq/Anubis | 20eabe5651cee4ca5dc2f2b9bb531724aad1cf37 | [
"MIT"
] | 87 | 2021-11-08T10:58:26.000Z | 2022-03-31T19:02:47.000Z | api/jobs/autograde_recalculate.py | Racheltrq/Anubis | 20eabe5651cee4ca5dc2f2b9bb531724aad1cf37 | [
"MIT"
] | 114 | 2021-06-27T08:37:43.000Z | 2021-10-24T00:51:01.000Z | api/jobs/autograde_recalculate.py | Racheltrq/Anubis | 20eabe5651cee4ca5dc2f2b9bb531724aad1cf37 | [
"MIT"
] | 15 | 2021-11-07T17:02:21.000Z | 2022-03-28T02:04:16.000Z | from anubis.lms.assignments import get_recent_assignments
from anubis.lms.autograde import bulk_autograde
from anubis.utils.data import with_context
from anubis.utils.visuals.assignments import get_assignment_sundial
from anubis.utils.logging import logger
def autograde_recalculate():
"""
Calculate stats for recent submissions
:return:
"""
recent_assignments = get_recent_assignments(autograde_enabled=True)
print('Recent assignments:')
print('\n'.join(' ' * 4 + assignment.name for assignment in recent_assignments))
for assignment in recent_assignments:
print('Running bulk autograde on {:<20} :: {:<20}'.format(
assignment.name,
assignment.course.course_code,
))
bulk_autograde(assignment.id)
for assignment in recent_assignments:
print('Running sundial recalc on {:<20} :: {:<20}'.format(
assignment.name,
assignment.course.course_code,
))
get_assignment_sundial(assignment.id)
@with_context
def reap():
autograde_recalculate()
if __name__ == "__main__":
print("""
___
/ \\\\
/\\\\ | . . \\\\
////\\\\| ||
//// \\\\\\ ___//\\
/// \\\\ \\
/// |\\\\ |
// | \\\\ \\ \\
/ | \\\\ \\ \\
| \\\\ / /
| \\/ /
| \\\\/|
| \\\\|
| \\\\
| |
|_________\\
""")
reap()
| 25.57377 | 84 | 0.509615 | 126 | 1,560 | 5.960317 | 0.349206 | 0.158455 | 0.05992 | 0.083888 | 0.298269 | 0.255659 | 0.255659 | 0.138482 | 0.138482 | 0.138482 | 0 | 0.008755 | 0.341026 | 1,560 | 60 | 85 | 26 | 0.72179 | 0.030769 | 0 | 0.181818 | 0 | 0 | 0.35992 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.113636 | 0 | 0.159091 | 0.113636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d788747d7669088288ecf6659e9a300adfc57c6c | 1,891 | py | Python | 686.repeated-string-match.py | windard/leeeeee | 0107a5f95746592ca4fe78d2b5875cf65b1910e7 | [
"MIT"
] | null | null | null | 686.repeated-string-match.py | windard/leeeeee | 0107a5f95746592ca4fe78d2b5875cf65b1910e7 | [
"MIT"
] | null | null | null | 686.repeated-string-match.py | windard/leeeeee | 0107a5f95746592ca4fe78d2b5875cf65b1910e7 | [
"MIT"
] | null | null | null | # coding=utf-8
#
# @lc app=leetcode id=686 lang=python
#
# [686] Repeated String Match
#
# https://leetcode.com/problems/repeated-string-match/description/
#
# algorithms
# Easy (31.32%)
# Likes: 525
# Dislikes: 518
# Total Accepted: 71.8K
# Total Submissions: 227.2K
# Testcase Example: '"abcd"\n"cdabcdab"'
#
# Given two strings A and B, find the minimum number of times A has to be
# repeated such that B is a substring of it. If no such solution, return -1.
#
# For example, with A = "abcd" and B = "cdabcdab".
#
# Return 3, because by repeating A three times (“abcdabcdabcd”), B is a
# substring of it; and B is not a substring of A repeated two times
# ("abcdabcd").
#
# Note:
# The length of A and B will be between 1 and 10000.
#
#
class Solution(object):
def _repeatedStringMatch(self, A, B):
"""
:type A: str
:type B: str
:rtype: int
"""
index = 1
repeat = A
while True:
if B in repeat:
return index
if len(repeat) > len(B)*2 and index > 5:
return -1
index += 1
repeat += A
    def repeatedStringMatch(self, A, B):
        """
        :type A: str
        :type B: str
        :rtype: int
        """
        import math
        # Repeat A just enough times to cover B; one extra repetition handles
        # matches that straddle a repetition boundary.
        mi = int(math.ceil(len(B) / float(len(A))))
        if B in mi * A:
            return mi
        if B in (mi + 1) * A:
            return mi + 1
        return -1
# if __name__ == "__main__":
# s = Solution()
# print s.repeatedStringMatch("aa", "a")
# print s.repeatedStringMatch("abcd", "abcdb")
# print s.repeatedStringMatch("cdabcdab", "abcd")
# print s.repeatedStringMatch("aaaaaaaaaaaaaaaaaaaaaab", "ba")
| 25.554054 | 76 | 0.545214 | 253 | 1,891 | 4.039526 | 0.434783 | 0.034247 | 0.019569 | 0.02544 | 0.135029 | 0.135029 | 0.101761 | 0.101761 | 0.101761 | 0.101761 | 0 | 0.035172 | 0.338445 | 1,891 | 73 | 77 | 25.90411 | 0.781775 | 0.548387 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.04 | 0 | 0.48 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7894f465a58acdc6ba0e96873ff81e20f82571a | 5,967 | py | Python | ws2122-lspm/Lib/site-packages/pm4py/algo/conformance/footprints/util/evaluation.py | Malekhy/ws2122-lspm | e4dc8b801d12f862b8ef536a0f125f346f085a00 | [
"MIT"
] | 1 | 2022-01-19T04:02:46.000Z | 2022-01-19T04:02:46.000Z | ws2122-lspm/Lib/site-packages/pm4py/algo/conformance/footprints/util/evaluation.py | Malekhy/ws2122-lspm | e4dc8b801d12f862b8ef536a0f125f346f085a00 | [
"MIT"
] | 1 | 2021-11-19T07:21:48.000Z | 2021-11-19T07:21:48.000Z | ws2122-lspm/Lib/site-packages/pm4py/algo/conformance/footprints/util/evaluation.py | Malekhy/ws2122-lspm | e4dc8b801d12f862b8ef536a0f125f346f085a00 | [
"MIT"
] | 1 | 2022-01-14T17:15:38.000Z | 2022-01-14T17:15:38.000Z | '''
This file is part of PM4Py (More Info: https://pm4py.fit.fraunhofer.de).
PM4Py is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
PM4Py is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with PM4Py. If not, see <https://www.gnu.org/licenses/>.
'''
from collections import Counter
from typing import List, Dict, Any
from enum import Enum
class Outputs(Enum):
DFG = "dfg"
SEQUENCE = "sequence"
PARALLEL = "parallel"
START_ACTIVITIES = "start_activities"
END_ACTIVITIES = "end_activities"
ACTIVITIES = "activities"
SKIPPABLE = "skippable"
ACTIVITIES_ALWAYS_HAPPENING = "activities_always_happening"
MIN_TRACE_LENGTH = "min_trace_length"
TRACE = "trace"
DFG = "dfg"
FOOTPRINTS_KEY = "footprints"
START_ACTIVITIES = "start_activities"
END_ACTIVITIES = "end_activities"
SEQUENCE = "sequence"
PARALLEL = "parallel"
IS_FOOTPRINTS_FIT = "is_footprints_fit"
def fp_fitness(fp_log, fp_model, conf_results, parameters=None):
"""
Calculates the footprints fitness provided the footprints of the log,
and the result of footprints conformance (applied to the entire log)
Parameters
---------------
fp_log
Footprints of the log
fp_model
Footprints of the model
conf_results
Footprints conformance (applied to the entire log)
parameters
Parameters of the algorithm
Returns
---------------
fitness
Fitness value (between 0.0 and 1.0)
"""
if parameters is None:
parameters = {}
fit_traces = None
if isinstance(conf_results, list):
fit_traces = len([x for x in conf_results if x[IS_FOOTPRINTS_FIT]])/len(conf_results) * 100.0
fp_log = flatten_fp(fp_log)
conf_results = flatten_conf(conf_results)
dfg = fp_log[DFG]
num_sequence_log = len(fp_log[SEQUENCE])
num_parallel_log = len(fp_log[PARALLEL])
num_start_activities_log = len(fp_log[START_ACTIVITIES])
num_end_activities_log = len(fp_log[END_ACTIVITIES])
num_start_activities_dev = len(conf_results[START_ACTIVITIES])
num_end_activities_dev = len(conf_results[END_ACTIVITIES])
footprints = conf_results[FOOTPRINTS_KEY]
if dfg:
sum_dfg = float(sum(x for x in dfg.values()))
sum_dev = float(sum(dfg[x] for x in footprints))
fitness = ((1.0 - sum_dev / sum_dfg) * (num_sequence_log + num_parallel_log) + (
num_start_activities_log + num_end_activities_log - num_start_activities_dev - num_end_activities_dev)) / (
num_sequence_log + num_parallel_log + num_start_activities_log + num_end_activities_log)
else:
# return fitness 1.0 if DFG is empty
fitness = 1.0
if fit_traces is not None:
return {"perc_fit_traces": fit_traces, "log_fitness": fitness}
return fitness
def fp_precision(fp_log, fp_model, parameters=None):
"""
Calculates the footprints based precision provided the two footprints
of the log and the model.
Parameters
--------------
fp_log
Footprints of the log
fp_model
Footprints of the model
parameters
Parameters of the algorithm
Returns
-------------
precision
Precision value (between 0 and 1)
"""
if parameters is None:
parameters = {}
fp_log = flatten_fp(fp_log)
fp_model = flatten_fp(fp_model)
log_configurations = fp_log[Outputs.SEQUENCE.value].union(fp_log[Outputs.PARALLEL.value])
model_configurations = fp_model[Outputs.SEQUENCE.value].union(fp_model[Outputs.PARALLEL.value])
if model_configurations:
return float(len(log_configurations.intersection(model_configurations))) / float(len(model_configurations))
# return precision 1.0 if model configurations are empty
return 1.0
def flatten_fp(fp: List[Dict[str, Any]]) -> Dict[str, Any]:
"""
Flattens the trace-based footprints to the footprints of the overall log
Parameters
---------------
fp
Trace-based footprints
Returns
--------------
log_fp
Overall log footprints
"""
if isinstance(fp, list):
res = {DFG: Counter(), SEQUENCE: set(), PARALLEL: set(), START_ACTIVITIES: set(), END_ACTIVITIES: set()}
for el in fp:
for x, y in el[DFG].items():
res[DFG][x] += y
res[SEQUENCE] = res[SEQUENCE].union(el[SEQUENCE])
res[PARALLEL] = res[PARALLEL].union(el[PARALLEL])
res[START_ACTIVITIES] = res[START_ACTIVITIES].union(el[START_ACTIVITIES])
res[END_ACTIVITIES] = res[END_ACTIVITIES].union(el[END_ACTIVITIES])
return res
return fp
def flatten_conf(conf: List[Dict[str, Any]]) -> Dict[str, Any]:
"""
Flattens the trace-based conformance checking results (obtained using footprints) to the conformance checking
results on the overall log
Parameters
----------------
conf
Trace-based conformance checking results
Returns
----------------
log_conf
Overall log conformance checking results
"""
if isinstance(conf, list):
res = {FOOTPRINTS_KEY: set(), START_ACTIVITIES: set(), END_ACTIVITIES: set()}
for el in conf:
res[FOOTPRINTS_KEY] = res[FOOTPRINTS_KEY].union(el[FOOTPRINTS_KEY])
res[START_ACTIVITIES] = res[START_ACTIVITIES].union(el[START_ACTIVITIES])
res[END_ACTIVITIES] = res[END_ACTIVITIES].union(el[END_ACTIVITIES])
return res
return conf
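# Illustrative usage of this module (the inputs are placeholders produced elsewhere in PM4Py):
#   fp_log = [...]     # per-trace footprints of the event log
#   fp_model = {...}   # footprints of the process model
#   conf = [...]       # per-trace footprints conformance results
#   fitness = fp_fitness(fp_log, fp_model, conf)
#   precision = fp_precision(fp_log, fp_model)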
| 31.739362 | 127 | 0.667505 | 773 | 5,967 | 4.958603 | 0.194049 | 0.074354 | 0.027394 | 0.018784 | 0.394991 | 0.288025 | 0.232194 | 0.232194 | 0.17845 | 0.17845 | 0 | 0.005682 | 0.233115 | 5,967 | 187 | 128 | 31.909091 | 0.831949 | 0.329982 | 0 | 0.278481 | 0 | 0 | 0.058839 | 0.007287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.050633 | false | 0 | 0.037975 | 0 | 0.329114 | 0.050633 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78a150133064507695437891e566ed9ad856522 | 24,526 | py | Python | openomics/database/interaction.py | muluayele999/OpenOmics | 29e3bbc586489c3929ac54d9886627f907aa38b1 | [
"MIT"
] | null | null | null | openomics/database/interaction.py | muluayele999/OpenOmics | 29e3bbc586489c3929ac54d9886627f907aa38b1 | [
"MIT"
] | 1 | 2021-12-13T20:51:32.000Z | 2021-12-13T20:51:32.000Z | openomics/database/interaction.py | muluayele999/OpenOmics | 29e3bbc586489c3929ac54d9886627f907aa38b1 | [
"MIT"
] | 1 | 2021-02-18T10:39:00.000Z | 2021-02-18T10:39:00.000Z | import networkx as nx
from openomics.database.annotation import *
class Interactions(Dataset):
def __init__(self, path, file_resources, source_col_name, target_col_name, source_index, target_index,
edge_attr=None, directed=True, rename_dict=None, npartitions=0):
"""
This is an abstract class used to instantiate a database given a folder containing various file resources. When creating a Database class, the load_data function is called where the file resources are load as a DataFrame and performs necessary processings. This class provides an interface for RNA classes to annotate various genomic annotations, functional annotations, sequences, and disease associations.
Args:
path (str):
The folder path containing the data files.
file_resources (dict): default None,
Used to list required files for load_network of the dataset. A dictionary where keys are required filenames and value are file paths. If None, then the class constructor should automatically build the required file resources dict.
source_col_name (str):
Column name of DataFrame to be used as the source node names.
target_col_name (str):
Column name of DataFrame to be used as the target node names.
edge_attr (list):
A list of column names to be included as attributes for each edge (source-target pairs).
directed (bool): default True,
Whether to create a directed or an undirected network.
col_rename (dict): default None,
A dictionary to rename columns in the data table. If None, then automatically load defaults.
npartitions:
"""
if not os.path.isdir(path) or not os.path.exists(path):
raise NotADirectoryError(path)
else:
for _, filepath in file_resources.items():
if not os.path.exists(filepath):
raise FileNotFoundError(filepath)
self.import_folder = path
self.file_resources = file_resources
self.source_index = source_index
self.target_index = target_index
self.network = self.load_network(file_resources=file_resources, source_col_name=source_col_name,
target_col_name=target_col_name,
edge_attr=edge_attr, directed=directed)
self.network.name = self.name()
if self.network is None:
raise Exception(
"Make sure load_network() returns a Networkx Graph and is called with super().__init__() in the constructor.")
if rename_dict is not None:
self.network = nx.relabel_nodes(self.network, rename_dict)
print("{}".format(nx.info(self.network)))
def __repr__(self):
return f"{self.__class__.__name__}(num_nodes={self.network.number_of_nodes()}, num_edges={self.network.number_of_edges()})"
@abstractmethod
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed) -> nx.Graph:
raise NotImplementedError
def get_interactions(self, nodelist, data=False, inclusive=False):
"""
Args:
nodelist (list):
A list of nodes to fetch edges from
data (bool): default False
Whether to include edge attributes
inclusive (bool): default False
Whether to only retrieve edges from nodes inclusive in nodelist.
Returns:
edges (OutEdgeView): a NetworkX edgelist
"""
if hasattr(self, "network"):
if inclusive:
return self.network.subgraph(nodelist).edges(data=data)
else:
                return self.network.edges(nbunch=nodelist, data=data)  # networkx expects the node bunch as 'nbunch'
else:
raise Exception(
"{} does not have network interaction data yet. Must run load_network() and assign self.network field first.".format(
self.name()))
class GeneMania(Interactions):
def __init__(self, path, file_resources=None, source_col_name="Gene_A", target_col_name="Gene_B",
source_index="gene_name", target_index="gene_name",
edge_attr=None, directed=True, rename_dict=None):
if edge_attr is None:
edge_attr = ["Weight"]
if file_resources is None:
file_resources = {}
file_resources["COMBINED.DEFAULT_NETWORKS.BP_COMBINING.txt"] = os.path.join(path,
"COMBINED.DEFAULT_NETWORKS.BP_COMBINING.txt")
file_resources["identifier_mappings.txt"] = os.path.join(path,
"identifier_mappings.txt")
super().__init__(path, file_resources, source_col_name, target_col_name, source_index, target_index,
edge_attr, directed, rename_dict)
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed):
interactions = pd.read_table(file_resources["COMBINED.DEFAULT_NETWORKS.BP_COMBINING.txt"], low_memory=True)
identifier = pd.read_table(file_resources["identifier_mappings.txt"])
# Rename ENSG ID's to gene names
identifier = identifier[identifier["Source"] == "Gene Name"]
identifier_map = pd.Series(identifier["Name"].values, index=identifier["Preferred_Name"]).to_dict()
interactions.replace(identifier_map, inplace=True)
genemania_RNA_RNA_network = nx.from_pandas_edgelist(interactions, source=source_col_name,
target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph())
return genemania_RNA_RNA_network
class BioGRID(Interactions):
def __init__(self, path, file_resources=None, source_col_name="Official Symbol Interactor A",
target_col_name="Official Symbol Interactor B",
source_index="gene_name", target_index="gene_name",
edge_attr=None,
directed=False, rename_dict=None):
if edge_attr is None:
edge_attr = ['Score', 'Throughput', 'Qualifications', 'Modification', 'Phenotypes']
if file_resources is None:
file_resources = {}
file_resources["BIOGRID-ALL-X.X.XXX.tab2.txt"] = os.path.join(path, "BIOGRID-ALL-3.4.162.tab2.txt")
super().__init__(path, file_resources, source_col_name, target_col_name, source_index, target_index,
edge_attr, directed, rename_dict)
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed, species=9606):
biogrid_df = pd.read_table(file_resources["BIOGRID-ALL-X.X.XXX.tab2.txt"],
na_values=["-"],
usecols=['Official Symbol Interactor A',
'Official Symbol Interactor B', 'Organism Interactor A', 'Score',
'Throughput', 'Qualifications', 'Modification', 'Phenotypes'],
low_memory=True)
biogrid_df = biogrid_df[biogrid_df["Organism Interactor A"] == species]
# biogrid_df = biogrid_df[biogrid_df["Throughput"] == "High Throughput"]
biogrid_grn = nx.from_pandas_edgelist(biogrid_df, source=source_col_name, target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return biogrid_grn
class LncBase(Interactions, Dataset):
def __init__(self, path, file_resources=None, source_col_name="mirna", target_col_name="geneId",
source_index="transcript_name", target_index="gene_id",
edge_attr=None, directed=True,
rename_dict=None, organism="Homo sapiens", tissue=None):
"""
Args:
path (str):
file_resources (dict): default None.
"""
self.organism = organism
self.tissue = tissue
if edge_attr is None:
edge_attr = ["tissue", "positive_negative"]
if file_resources is None:
file_resources = {}
file_resources["LncBasev2_download.csv"] = os.path.join(path, "LncBasev2_download.csv")
super(LncBase, self).__init__(path=path, file_resources=file_resources,
source_col_name=source_col_name,
target_col_name=target_col_name, source_index=source_index,
target_index=target_index,
edge_attr=edge_attr, directed=directed, rename_dict=rename_dict)
def get_rename_dict(self, from_index="geneId", to_index="geneName"):
lncbase_df = pd.read_table(self.file_resources["LncBasev2_download.csv"], low_memory=True)
gene_id_to_gene_name_dict = pd.Series(lncbase_df["geneName"].values,
index=lncbase_df["geneId"]).to_dict()
return gene_id_to_gene_name_dict
def load_network(self, file_resources, source_col_name="mirna", target_col_name="gene_id",
edge_attr=None, directed=True, ):
if edge_attr is None:
edge_attr = ["tissue", "positive_negative"]
df = pd.read_table(file_resources["LncBasev2_download.csv"], low_memory=True)
print(self.name(), df.columns.tolist())
df.replace({"species": {"Homo Sapiens": "Homo sapiens", "Mus Musculus": "Mus musculus"}}, inplace=True)
if self.organism is not None:
df = df[df["species"].str.lower() == self.organism.lower()]
if self.tissue is not None:
df = df[df["tissue"].str.lower() == self.tissue.lower()]
lncBase_lncRNA_miRNA_network = nx.from_pandas_edgelist(df, source=source_col_name, target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return lncBase_lncRNA_miRNA_network
class lncRInter(Interactions):
def __init__(self, path, file_resources=None, source_col_name="lncrna",
target_col_name='Interacting partner',
source_index="gene_name", target_index="gene_name",
edge_attr=None,
directed=True, rename_dict=None, organism="Homo sapiens"):
self.organism = organism
if edge_attr is None:
edge_attr = ["Interaction Class", "Interaction Mode", "Tissue", "Phenotype"]
if file_resources is None:
file_resources = {}
file_resources["human_interactions.txt"] = os.path.join(path, "human_interactions.txt")
super().__init__(path, file_resources, source_col_name, target_col_name, source_index, target_index,
edge_attr, directed, rename_dict, )
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed):
lncRInter_df = pd.read_table(file_resources["human_interactions.txt"])
lncRInter_df = lncRInter_df[lncRInter_df["Organism"] == self.organism]
# Data cleaning
lncRInter_df.loc[lncRInter_df["Interacting partner"].str.contains("MIR"), "Interacting partner"] = \
lncRInter_df.loc[
lncRInter_df["Interacting partner"].str.contains("MIR"), "Interacting partner"].str.lower()
lncRInter_df["Interacting partner"] = lncRInter_df["Interacting partner"].str.replace("mirlet", "hsa-let-")
lncRInter_df["Interacting partner"] = lncRInter_df["Interacting partner"].str.replace("mir", "hsa-mir-")
lncRInter_df["Interacting partner"][
lncRInter_df["Interacting partner"].str.contains(r"[mir|let]\-[\d]+[a-z]+[\d]+")] = \
lncRInter_df["Interacting partner"][
lncRInter_df["Interacting partner"].str.contains(r"[mir|let]\-[\d]+[a-z]+[\d]+")].apply(
lambda x: x[:-1] + "-" + x[-1])
lncRInter_network = nx.from_pandas_edgelist(lncRInter_df, source=source_col_name,
target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return lncRInter_network
class LncRNA2Target(Interactions):
def __init__(self, path, file_resources=None, source_col_name="lncrna_symbol",
target_col_name="gene_symbol",
source_index="gene_name", target_index="gene_name",
edge_attr=None, directed=True, rename_dict=None, version="high_throughput", species=9606):
"""
Args:
version (str): one of ["high_throughput", "low_throughput"].
The high_throughput version of lncRNA2Target database is v2.0 and low_throughput is v1.0, according to the database's website.
species (str, int): one of [9606, "Homo sapiens"].
The species column in high_throughput is formatted in int (e.g. 9606) and in low_throughput is in str (e.g. "Homo sapiens")
"""
if edge_attr is None:
edge_attr = ["P_Value", "direction"]
self.version = version
self.species = species
if file_resources is None:
file_resources = {}
file_resources["lncRNA_target_from_high_throughput_experiments.txt"] = os.path.join(path,
"lncRNA_target_from_high_throughput_experiments.txt")
super().__init__(path, file_resources, source_col_name, target_col_name, source_index, target_index,
edge_attr, directed, rename_dict, species)
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed):
if self.version == "high_throughput":
return self.load_network_high_throughput(self, file_resources, source_col_name, target_col_name, edge_attr,
directed)
elif self.version == "low_throughput":
return self.load_network_low_throughput(self, file_resources, source_col_name, target_col_name, edge_attr,
directed)
else:
raise Exception("LncRNA2Target version argument must be one of 'high_throughput' or 'low_throughput'")
def load_network_high_throughput(self, file_resources, source_col_name, target_col_name, edge_attr, directed):
table = pd.read_table(file_resources["lncRNA_target_from_high_throughput_experiments.txt"], low_memory=True)
table = table[table["species_id"] == self.species]
table["lncrna_symbol"] = table["lncrna_symbol"].str.upper().replace("LINC", "")
table["gene_symbol"] = table["gene_symbol"].str.upper()
lncrna2target_high_throughput_network = nx.from_pandas_edgelist(table,
source=source_col_name,
target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return lncrna2target_high_throughput_network
def load_network_low_throughput(self, file_resources, source_col_name="GENCODE_gene_name",
target_col_name="Target_official_symbol",
edge_attr=None, directed=True):
table = pd.read_excel(file_resources["lncRNA_target_from_low_throughput_experiments.xlsx"])
table = table[table["Species"] == self.species]
table["Target_official_symbol"] = table["Target_official_symbol"].str.replace("(?i)(mir)", "hsa-mir-")
table["Target_official_symbol"] = table["Target_official_symbol"].str.replace("--", "-")
table["Target_official_symbol"].apply(lambda x: x.lower() if "mir" in x.lower() else x.upper())
table["GENCODE_gene_name"] = table["GENCODE_gene_name"].str.upper()
lncrna2target_low_throughput_network = nx.from_pandas_edgelist(table,
source=source_col_name,
target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return lncrna2target_low_throughput_network
class lncRNome(Interactions):
pass
class NPInter(Interactions):
pass
class MiRTarBase(Interactions):
COLUMNS_RENAME_DICT = {"Species (Target Gene)": "species",
"Support Type": "Support_Type",
"Target Gene": "gene_name"}
def __init__(self, path, file_resources=None, source_col_name="miRNA", target_col_name="Target Gene",
source_index="transcript_name", target_index="gene_name",
edge_attr=None, directed=True, rename_dict=None, species="Homo sapiens",
strip_mirna_name=False):
if edge_attr is None:
edge_attr = ["Support Type"]
self.strip_mirna_name = strip_mirna_name
self.species = species
if file_resources is None:
file_resources = {}
file_resources["miRTarBase_MTI.xlsx"] = os.path.join(path, "miRTarBase_MTI.xlsx")
super(MiRTarBase, self).__init__(path=path, file_resources=file_resources,
source_col_name=source_col_name,
target_col_name=target_col_name, source_index=source_index,
target_index=target_index,
edge_attr=edge_attr, directed=directed, rename_dict=rename_dict, )
def load_network(self, file_resources, source_col_name, target_col_name, edge_attr, directed=True):
df = pd.read_excel(self.file_resources["miRTarBase_MTI.xlsx"])
if self.species:
df = df[df["Species (Target Gene)"].str.lower() == self.species.lower()]
if self.strip_mirna_name:
df['miRNA'] = df['miRNA'].str.lower()
df['miRNA'] = df['miRNA'].str.replace("-3p.*|-5p.*", "")
mir_target_network = nx.from_pandas_edgelist(df, source=source_col_name, target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return mir_target_network
class TargetScan(Interactions, Dataset):
def __init__(self, path, file_resources=None, source_col_name="MiRBase ID", target_col_name="Gene Symbol",
source_index="transcript_name", target_index="transcript_name",
edge_attr=None, directed=True, rename_dict=None, species=9606,
strip_mirna_name=False):
if edge_attr is None:
edge_attr = ["tissue", "positive_negative"]
self.strip_mirna_name = strip_mirna_name
self.species = species
if file_resources is None:
file_resources = {}
file_resources["miR_Family_Info.txt"] = os.path.join(path, "miR_Family_Info.txt")
file_resources["Predicted_Targets_Info.default_predictions.txt"] = os.path.join(path,
"Predicted_Targets_Info.default_predictions.txt")
super(TargetScan, self).__init__(path=path, file_resources=file_resources,
source_col_name=source_col_name,
target_col_name=target_col_name, source_index=source_index,
target_index=target_index,
edge_attr=edge_attr, directed=directed, rename_dict=rename_dict)
def load_network(self, file_resources, source_col_name="MiRBase ID", target_col_name="Gene Symbol",
edge_attr=None, directed=True):
if edge_attr is None:
edge_attr = ["tissue", "positive_negative"]
self.df = self.process_miR_family_info_table(file_resources, self.species)
interactions_df = self.process_interactions_table(file_resources, self.df, self.species)
print(self.name(), interactions_df.columns.tolist())
mir_target_network = nx.from_pandas_edgelist(interactions_df,
source=source_col_name, target=target_col_name,
edge_attr=edge_attr,
create_using=nx.DiGraph() if directed else nx.Graph())
return mir_target_network
def process_miR_family_info_table(self, file_resources, species=None):
miR_Family_Info_df = pd.read_table(file_resources["miR_Family_Info.txt"], delimiter='\t')
if species:
miR_Family_Info_df = miR_Family_Info_df[miR_Family_Info_df['Species ID'] == species]
# Standardize MiRBase ID to miRNA names obtained from RNA-seq hg19
if self.strip_mirna_name:
miR_Family_Info_df['MiRBase ID'] = miR_Family_Info_df['MiRBase ID'].str.lower()
miR_Family_Info_df['MiRBase ID'] = miR_Family_Info_df['MiRBase ID'].str.replace("-3p.*|-5p.*", "")
miR_Family_Info_df.drop_duplicates(inplace=True)
miR_Family_Info_df = miR_Family_Info_df.filter(items=['miR family', 'MiRBase ID', 'Seed+m8', 'Mature sequence',
'Family Conservation?', 'MiRBase Accession'],
axis="columns")
miR_Family_Info_df['MiRBase ID'] = miR_Family_Info_df['MiRBase ID'].astype(str)
return miR_Family_Info_df
def process_interactions_table(self, file_resources, family_to_miR_df, species):
"""
This functions joins the interactions data table between miR Family and targets, and
Args:
file_resources:
family_to_miR_df:
species:
Returns:
"""
# Load data frame from file
family_interactions_df = pd.read_table(file_resources["Predicted_Targets_Info.default_predictions.txt"],
delimiter='\t', low_memory=True)
# Select only homo sapiens miRNA-target pairs
if species:
family_interactions_df = family_interactions_df[family_interactions_df["Species ID"] == species]
family_interactions_df = family_interactions_df.filter(items=["miR Family", "Gene Symbol"], axis="columns")
family_to_miR_df = family_to_miR_df.filter(items=['miR family', 'MiRBase ID'], axis="columns")
family_to_miR_df.rename(columns={'miR family': 'miR Family'}, inplace=True)
# map miRBase ID names to miR Family
# family_interactions_df = pd.merge(family_interactions_df, family_to_miR_df, how='outer', on="miR Family")
family_to_miR_df.set_index("miR Family", inplace=True)
family_interactions_df.set_index("miR Family", inplace=True)
mir_interactions_df = family_interactions_df.join(family_to_miR_df, how='outer', on="miR Family").reset_index()
# Standardize MiRBase ID to miRNA names obtained from RNA-seq hg19
if self.strip_mirna_name:
mir_interactions_df['MiRBase ID'] = mir_interactions_df['MiRBase ID'].str.lower()
mir_interactions_df['MiRBase ID'] = mir_interactions_df['MiRBase ID'].str.replace("-3p.*|-5p.*", "")
return mir_interactions_df
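# Illustrative usage (file paths are placeholders and must point to locally downloaded resources):
#   db = MiRTarBase(path="data/mirtarbase/",
#                   file_resources={"miRTarBase_MTI.xlsx": "data/mirtarbase/miRTarBase_MTI.xlsx"},
#                   species="Homo sapiens", strip_mirna_name=True)
#   edges = db.get_interactions(nodelist=["hsa-mir-21"], data=True)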
| 55.114607 | 415 | 0.603115 | 2,765 | 24,526 | 5.040506 | 0.112839 | 0.041185 | 0.038244 | 0.035445 | 0.605008 | 0.542369 | 0.504987 | 0.463873 | 0.433594 | 0.404176 | 0 | 0.003343 | 0.304697 | 24,526 | 444 | 416 | 55.238739 | 0.813933 | 0.110291 | 0 | 0.343234 | 0 | 0.006601 | 0.153972 | 0.049923 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075908 | false | 0.006601 | 0.009901 | 0.0033 | 0.174917 | 0.009901 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78a3176f37606ee856bf63b491b3c041f711274 | 15,940 | py | Python | python-scripts/opencorporates.py | openc/knowledge-graph | 2cb55cfa1da9788c4b712c39d66a363065153fa6 | [
"Apache-2.0"
] | null | null | null | python-scripts/opencorporates.py | openc/knowledge-graph | 2cb55cfa1da9788c4b712c39d66a363065153fa6 | [
"Apache-2.0"
] | null | null | null | python-scripts/opencorporates.py | openc/knowledge-graph | 2cb55cfa1da9788c4b712c39d66a363065153fa6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
import config
import opencorporates_lookup
import logging
import requests
import json
import os
import sys
import getopt
import time
import datetime
from datetime import datetime
from datetime import timedelta
# ****************
# Global variables
# ****************
opencorporates_reconcile_score = config.opencorporates["reconcile_score"]
opencorporates_reconcile_api_url = config.opencorporates["reconcile_api_url"]
opencorporates_companies_api_url = config.opencorporates["companies_api_url"]
# **********
# Statistics
# **********
stat_no_awards = 0
stat_no_suppliers = 0
stat_no_candidate_companies = 0
stat_no_matching_companies = 0
stat_highest_result_score = 0
def write_stats(output_folder):
global stat_no_awards
global stat_no_suppliers
global stat_no_candidate_companies
global stat_no_matching_companies
global stat_highest_result_score
if not os.path.exists(output_folder):
os.makedirs(output_folder)
sfile = open(os.path.join(output_folder, 'STATISTICS.TXT'), 'w+')
sfile.write("stat_no_awards = " + str(stat_no_awards) +'\n')
sfile.write("stat_no_suppliers = " + str(stat_no_suppliers) +'\n')
sfile.write("stat_no_candidate_companies = " + str(stat_no_candidate_companies) +'\n')
sfile.write("stat_no_matching_companies = " + str(stat_no_matching_companies) +'\n')
sfile.write("stat_highest_result_score = " + str(stat_highest_result_score) + '\n')
sfile.close()
def reset_stats():
global stat_no_awards
global stat_no_suppliers
global stat_no_candidate_companies
global stat_no_matching_companies
global stat_highest_result_score
stat_no_awards = 0
stat_no_suppliers = 0
stat_no_candidate_companies = 0
stat_no_matching_companies = 0
stat_highest_result_score = 0
# ****************
# Lookup functions
# ****************
def country_name_2_code_jurisdiction(country_name):
try:
return opencorporates_lookup.country_name_codes[country_name.lower()]
except KeyError:
return ""
# *****************
# Reconcile company
# *****************
def reconcile_company(company_name):
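    # Query the OpenCorporates reconciliation API for the given company name and
    # return the raw HTTP response containing scored candidate matches.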
url = opencorporates_reconcile_api_url
params = {
"query": company_name
}
headers = {
"Content-Type": "application/json"
}
response = requests.get(url, params=params, headers=headers)
return response
# ***********
# Get company
# ***********
def get_company(company_id, api_token):
url = opencorporates_companies_api_url + company_id
params = {
"api_token": api_token
}
headers = {
"Content-Type": "application/json"
}
response = requests.get(url, params=params, headers=headers)
if response.status_code != 200:
logging.info("get_company(): ERROR: " + json.dumps(response.json()))
return None
else:
return response
# ********************
# Is candidate company
# ********************
def is_candidate_company(buyer_data, supplier_data, result_data):
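    # A reconcile result is a candidate when its score clears the configured threshold and
    # its jurisdiction matches the supplier's (falling back to the buyer's jurisdiction).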
supplier_name = get_supplier_name(supplier_data)
result_id = result_data['id']
result_score = result_data['score']
# If score lower than configured score then return false
if float(result_score) < float(opencorporates_reconcile_score):
return False
global stat_highest_result_score
if float(result_score) > float(stat_highest_result_score):
stat_highest_result_score = result_score
# If buyer jurisdiction is empty then return false
buyer_jurisdiction = get_buyer_country_code(buyer_data)
if not buyer_jurisdiction:
return False
# If supplier jurisdiction is empty then use buyer jurisdiction for matching
supplier_jurisdiction = get_supplier_country_code(supplier_data)
if not supplier_jurisdiction:
supplier_jurisdiction = buyer_jurisdiction
# If supplier jurisdiction matches result jurisdiction then return true
result_jurisdiction = get_result_jurisdiction(result_data)
if supplier_jurisdiction == result_jurisdiction:
logging.info("is_candidate_company(): supplier_name = " + supplier_name)
logging.info("is_candidate_company(): result_id = " + result_id)
logging.info("is_candidate_company(): result_score = " + str(result_score))
global stat_no_candidate_companies
stat_no_candidate_companies += 1
return True
else:
return False
# *******************
# Is matching company
# *******************
def is_matching_company(supplier_data, company_data):
supplier_postal_code = get_supplier_postal_code(supplier_data)
supplier_street_address = get_supplier_street_address(supplier_data)
company_registered_address_in_full = get_company_registered_address_in_full(company_data)
if (supplier_postal_code == "") and (supplier_street_address == ""):
return False
elif not company_registered_address_in_full:
return False
elif (supplier_postal_code in company_registered_address_in_full) and (supplier_street_address in company_registered_address_in_full):
logging.info("is_matching_company(): supplier_postal_code = " + supplier_postal_code)
logging.info("is_matching_company(): supplier_street_address = " + supplier_street_address)
logging.info("is_matching_company(): company_registered_address_in_full = " + company_registered_address_in_full)
global stat_no_matching_companies
stat_no_matching_companies += 1
return True
else:
return False
# *************
# Write company
# *************
def write_company(ocid, response_company, output_folder):
if not os.path.exists(output_folder):
os.makedirs(output_folder)
if response_company:
data = json.loads(json.dumps(response_company.json()))
company_number = data['results']['company']['company_number']
company_jurisdiction = data['results']['company']['jurisdiction_code']
jfile = open(os.path.join(output_folder, str(ocid) + '-supplier-' + str(company_jurisdiction) + '-' + str(company_number) + '.json'), 'w+')
jfile.write(json.dumps(data, indent=4).replace(': null', ': ""'))
jfile.close()
# *****************************************************************************
# Loop through suppliers, reconcile and and write file for each candidate match
# *****************************************************************************
def process_suppliers(api_token, release_data, award_index, filename, output_folder):
logging.info("process_suppliers(): tag_value = " + str(get_tag(release_data)))
buyer_data = get_buyer(release_data)
buyer_name = get_buyer_name(buyer_data)
buyer_country_code = get_buyer_country_code(buyer_data)
logging.info("process_suppliers(): buyer_name = " + buyer_name)
logging.info("process_suppliers(): buyer_country_code = " + buyer_country_code)
# Try to reconcile each supplier
suppliers_data = get_suppliers(release_data, award_index)
if suppliers_data:
supplier_index = 0
for supplier_data in suppliers_data:
global stat_no_suppliers
stat_no_suppliers += 1
supplier_name = get_supplier_name(supplier_data)
release_ocid = release_data['ocid']
# Get reconcile results
response_reconcile_results = reconcile_company(supplier_name)
reconcile_results_data = json.loads(json.dumps(response_reconcile_results.json()))
for reconcile_result in reconcile_results_data['result']:
result_score = reconcile_result['score']
if is_candidate_company(buyer_data, supplier_data, reconcile_result):
logging.info("process_suppliers(): result_score = " + str(result_score))
company_id = reconcile_result['id']
response_company = get_company(company_id, api_token)
company_data = json.loads(json.dumps(response_company.json()))
if is_matching_company(supplier_data, company_data):
write_company(release_ocid, response_company, output_folder)
# Add specific TBFY property for OpenCorporates Id
company_jurisdiction = company_data['results']['company']['jurisdiction_code']
company_number = company_data['results']['company']['company_number']
release_data['json']['releases'][0]['awards'][award_index]['suppliers'][supplier_index]['tbfyOpenCorporatesJurisdiction'] = company_jurisdiction
release_data['json']['releases'][0]['awards'][award_index]['suppliers'][supplier_index]['tbfyOpenCorporatesCompanyNumber'] = company_number
release_data['json']['releases'][0]['awards'][award_index]['suppliers'][supplier_index]['tbfyOpenCorporatesId'] = "/" + company_jurisdiction + "/" + company_number
# Add specific TBFY properties for OpenOpps
award_id = release_data['json']['releases'][0]['awards'][award_index]['id']
release_data['json']['releases'][0]['awards'][award_index]['suppliers'][supplier_index]['tbfyOcid'] = release_ocid
release_data['json']['releases'][0]['awards'][award_index]['suppliers'][supplier_index]['tbfyAwardId'] = award_id
supplier_index += 1
release_data['json']['releases'][0]['awards'][0]['tbfyOcid'] = release_ocid
# Write award release to output folder
jfile = open(os.path.join(output_folder, release_data['ocid'] + '-release.json'), 'w+')
jfile.write(json.dumps(release_data, indent=4).replace(': null', ': ""'))
jfile.close()
# ****************************************************
# Collection of helper functions for JSON release data
# ****************************************************
def get_tag(release_data):
return release_data['json']['releases'][0]['tag']
def get_buyer(release_data):
return release_data['json']['releases'][0]['buyer']
def get_awards(release_data):
try:
return release_data['json']['releases'][0]['awards']
except KeyError:
return None
def get_suppliers(release_data, award_index):
try:
return release_data['json']['releases'][0]['awards'][award_index]['suppliers']
except KeyError:
return None
def is_award(release_data):
tag_value = get_tag(release_data)
if ("award" in tag_value) or ("awardUpdate" in tag_value):
global stat_no_awards
stat_no_awards += 1
return True
else:
return False
# *************************************************************
# Collection of helper functions for JSON reconcile result data
# *************************************************************
def get_buyer_name(buyer_data):
try:
name = buyer_data['name']
return name
except KeyError:
return ""
def get_buyer_country_code(buyer_data):
try:
country_name = buyer_data['address']['countryName']
return country_name_2_code_jurisdiction(country_name)
except KeyError:
return ""
def get_supplier_name(supplier_data):
try:
supplier_name = supplier_data['name']
supplier_legal_name = supplier_data['identifier']['legalName']
if supplier_legal_name != "":
return supplier_legal_name
else:
return supplier_name
except KeyError:
return ""
def get_supplier_country_code(supplier_data):
try:
country_name = supplier_data['address']['countryName']
return country_name_2_code_jurisdiction(country_name)
except KeyError:
return ""
def get_supplier_postal_code(supplier_data):
try:
postal_code = supplier_data['address']['postalCode']
return postal_code
except KeyError:
return ""
def get_supplier_street_address(supplier_data):
try:
street_address = supplier_data['address']['streetAddress']
return street_address
except KeyError:
return ""
def get_result_jurisdiction(result_data):
try:
result_id = str(result_data['id']).replace("/companies/", "")
result_jurisdiction = result_id[0:result_id.find("/")]
return result_jurisdiction
except KeyError:
return ""
def get_company_registered_address_in_full(company_data):
try:
registered_address_in_full = company_data['results']['company']['registered_address_in_full']
return registered_address_in_full
except KeyError:
return ""
# *************
# Main function
# *************
def main(argv):
logging.basicConfig(level=config.logging["level"])
api_token = ""
start_date = ""
end_date = ""
input_folder = ""
output_folder = ""
try:
opts, args = getopt.getopt(argv, "ha:s:e:i:o:")
except getopt.GetoptError:
print("opencorporates.py -a <api_token> -s <start_date> -e <end_date> -i <input_folder> -o <output_folder>")
sys.exit(2)
for opt, arg in opts:
if opt == "-h":
print("opencorporates.py -a <api_token> -s <start_date> -e <end_date> -i <input_folder> -o <output_folder>")
sys.exit()
elif opt in ("-a"):
api_token = arg
elif opt in ("-s"):
start_date = arg
elif opt in ("-e"):
end_date = arg
elif opt in ("-i"):
input_folder = arg
elif opt in ("-o"):
output_folder = arg
logging.info("main(): api_token = " + api_token)
logging.info("main(): start_date = " + start_date)
logging.info("main(): end_date = " + end_date)
logging.info("main(): input_folder = " + input_folder)
logging.info("main(): output_folder = " + output_folder)
copy_command = ""
if sys.platform.lower().startswith("win"):
copy_command = "copy"
elif sys.platform.lower().startswith("linux"):
copy_command = "cp"
else:
copy_command = "copy"
logging.info("main(): platform = " + sys.platform.lower())
logging.info("main(): copy_command = " + copy_command)
start = datetime.strptime(start_date, "%Y-%m-%d")
stop = datetime.strptime(end_date, "%Y-%m-%d")
while start <= stop:
release_date = datetime.strftime(start, "%Y-%m-%d")
dirname = release_date
# for dirname in os.listdir(input_folder):
dirPath = os.path.join(input_folder, dirname)
outputDirPath = os.path.join(output_folder, dirname)
if os.path.isdir(dirPath):
if not os.path.exists(outputDirPath):
os.makedirs(outputDirPath)
reset_stats()
for filename in os.listdir(dirPath):
filePath = os.path.join(dirPath, filename)
outputFilePath = os.path.join(outputDirPath, filename)
f = open(filePath)
lines = f.read()
try:
release_data = json.loads(lines)
f.close()
if is_award(release_data):
logging.info("main(): filename = " + f.name)
awards_data = get_awards(release_data)
if awards_data:
award_index = 0
for award_data in awards_data:
process_suppliers(api_token, release_data, award_index, filename, outputDirPath)
award_index += 1
else:
os.system(copy_command + ' ' + filePath + ' ' + outputFilePath)
                except Exception:
                    # Skip files that fail to parse or process, but leave a trace in the log
                    logging.warning("main(): skipping file: " + filePath)
write_stats(outputDirPath)
start = start + timedelta(days=1) # increase day one by one
# *****************
# Run main function
# *****************
if __name__ == "__main__": main(sys.argv[1:])
| 35.501114 | 187 | 0.634191 | 1,781 | 15,940 | 5.35598 | 0.116788 | 0.020128 | 0.015096 | 0.026523 | 0.472377 | 0.365133 | 0.255582 | 0.202222 | 0.179369 | 0.148443 | 0 | 0.003396 | 0.22409 | 15,940 | 448 | 188 | 35.580357 | 0.767869 | 0.099059 | 0 | 0.32381 | 0 | 0.006349 | 0.131447 | 0.024948 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073016 | false | 0.003175 | 0.038095 | 0.006349 | 0.231746 | 0.006349 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78ac408c02fcab65812f50cca47acea9cb58f40 | 1,039 | py | Python | examples/sht31d_simple_mode.py | orthorhombic/micropython_SHT3X | 2618ae44c5ec68f3cf0a034ee8a376567dc5e9b1 | [
"MIT"
] | null | null | null | examples/sht31d_simple_mode.py | orthorhombic/micropython_SHT3X | 2618ae44c5ec68f3cf0a034ee8a376567dc5e9b1 | [
"MIT"
] | null | null | null | examples/sht31d_simple_mode.py | orthorhombic/micropython_SHT3X | 2618ae44c5ec68f3cf0a034ee8a376567dc5e9b1 | [
"MIT"
] | null | null | null | import machine
import adafruit_sht31d
# Create library object using our Bus I2C port
i2c = machine.I2C(0, scl=machine.Pin(22), sda=machine.Pin(21)) # esp32
sensor = adafruit_sht31d.SHT31D(i2c, address=69)
print("\033[1mSensor\033[0m = SHT31-D")
print("\033[1mSerial Number\033[0m = ", sensor.serial_number, "\n")
for i in range(3):
if i == 0:
sensor.repeatability = adafruit_sht31d.REP_LOW
print("\033[1m\033[36mLow Repeatability:\033[0m\n")
if i == 1:
sensor.repeatability = adafruit_sht31d.REP_MED
print("\n\033[1m\033[36mMedium Repeatability:\033[0m\n")
if i == 2:
sensor.repeatability = adafruit_sht31d.REP_HIGH
sensor.clock_stretching = False
print("\n\033[1m\033[36mHigh Repeatability:\033[0m")
# print("\033[1m\033[95mClock Stretching:\033[0m \033[92mEnabled\033[0m\n")
for itr in range(3):
print("\033[1mTemperature:\033[0m %0.3f ºC" % sensor.temperature)
print("\033[1mHumidity:\033[0m %0.2f %%" % sensor.relative_humidity, "\n")
| 39.961538 | 83 | 0.66795 | 153 | 1,039 | 4.464052 | 0.411765 | 0.065886 | 0.046852 | 0.144949 | 0.263543 | 0.064422 | 0 | 0 | 0 | 0 | 0 | 0.151231 | 0.179018 | 1,039 | 25 | 84 | 41.56 | 0.649472 | 0.119346 | 0 | 0 | 0 | 0 | 0.288694 | 0.175631 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.35 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78c603b3d41893f6fbdc429d01d6b7aed0e0fb3 | 1,256 | py | Python | frontend_helpers/videoStreamHelper.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | 13 | 2020-06-04T00:25:24.000Z | 2022-03-31T13:12:17.000Z | frontend_helpers/videoStreamHelper.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | 1 | 2021-01-03T04:17:45.000Z | 2021-02-07T14:19:04.000Z | frontend_helpers/videoStreamHelper.py | PatchyVideo/PatchyVideo | cafbdfa34591d7292090d5e67bb633b974447b64 | [
"MIT"
] | null | null | null |
from .init import routes, init_funcs
from scraper.video import dispatch
from utils.jsontools import *
from utils.logger import log
from utils.interceptors import asyncJsonRequest
from aiohttp import ClientSession
import os
if os.getenv("FLASK_ENV", "development") == "production" :
VIDEOSTREAM_ADDRESS = 'http://videostream:5006'
else :
VIDEOSTREAM_ADDRESS = 'http://localhost:5006'
async def dispatch_presite_extraction(info) :
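    # Currently a pass-through: wrap the extracted stream info in a success response.
    # The commented-out block below sketches possible per-site (e.g. BiliBili) post-processing.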
return makeResponseSuccess(info)
# if info['extractor'] == 'BiliBili' :
# ret_info = []
# for quality in info['streams'] :
# ret_info.append({
# 'format': quality['container'],
# 'quality_desc': quality['quality'],
# 'size': quality['size'],
# 'src': quality['src']
# })
# else :
# return makeResponseFailed('UNSUPPORTED_WEBSITE')
@routes.post("/get_video_stream")
@asyncJsonRequest
async def get_video_stream_info(request):
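    # Forward the requested URL to the internal videostream service and relay either
    # its error ('vs_err') or the extracted stream info back to the client.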
rqjson = (await request.json())
url = rqjson['url']
async with ClientSession() as session:
async with session.post(VIDEOSTREAM_ADDRESS, json={'url': url}) as resp:
resp_json = await resp.json()
if 'vs_err' in resp_json :
return makeResponseFailed({"errinfo": resp_json['vs_err']})
else :
return await dispatch_presite_extraction(resp_json)
| 27.911111 | 74 | 0.718153 | 152 | 1,256 | 5.776316 | 0.440789 | 0.045558 | 0.050114 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007491 | 0.149682 | 1,256 | 44 | 75 | 28.545455 | 0.814607 | 0.234076 | 0 | 0.08 | 0 | 0 | 0.122234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.28 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78def6239c610d281ff431383daea6d7ced4316 | 411 | py | Python | problems/0XX/01X/012/solution.py | jamsidedown/euler_py | b9a6b117dda97b8636cc1f5a1380ae762250090e | [
"MIT"
] | null | null | null | problems/0XX/01X/012/solution.py | jamsidedown/euler_py | b9a6b117dda97b8636cc1f5a1380ae762250090e | [
"MIT"
] | null | null | null | problems/0XX/01X/012/solution.py | jamsidedown/euler_py | b9a6b117dda97b8636cc1f5a1380ae762250090e | [
"MIT"
] | null | null | null | from typing import List
def run() -> int:
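    # Walk the triangular numbers T(i) = 1 + 2 + ... + i until one has more than 500 divisors.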
i, n = 1, 1
while len(factors(n)) < 500:
i += 1
n = sum(range(i + 1))
return n
def factors(n: int) -> List[int]:
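    # Collect divisors in pairs (i, n // i) by trial division up to sqrt(n).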
f, i = [], 1
while i * i <= n:
if n % i == 0:
f += list({i, n // i})
i += 1
return f
if __name__ == '__main__':
print(f'First triangular number with over 500 divisors: {run()}')
| 17.869565 | 69 | 0.469586 | 64 | 411 | 2.890625 | 0.453125 | 0.043243 | 0.086486 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049808 | 0.364964 | 411 | 22 | 70 | 18.681818 | 0.659004 | 0 | 0 | 0.125 | 0 | 0 | 0.153285 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.3125 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78f060288c1e54a6f400af8221bafea132e60f8 | 5,144 | py | Python | viz/server.py | zstoebs/tgn | 3dea8cd7bed6f78f644efc62d251aa0ed8b8011a | [
"Apache-2.0"
] | null | null | null | viz/server.py | zstoebs/tgn | 3dea8cd7bed6f78f644efc62d251aa0ed8b8011a | [
"Apache-2.0"
] | null | null | null | viz/server.py | zstoebs/tgn | 3dea8cd7bed6f78f644efc62d251aa0ed8b8011a | [
"Apache-2.0"
] | null | null | null | import flask
import numpy as np
import json
import networkx as nx
from networkx.readwrite import json_graph
from flask import Flask
from flask import request
from flask_cors import CORS
from sklearn.manifold import TSNE
# create Flask app
app = Flask(__name__)
CORS(app)
tsne1 = TSNE(n_components=1)
tsne2 = TSNE(n_components=2)
timestamps = None
edge_graph = None
node_graph = None
def extract_timestamps_from_graph(G):
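    # Collect every edge's timestamp from the graph and return them sorted in ascending order.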
ts = []
for n1,n2,data in G.edges(data=True):
ts += [int(data['timestamp'])]
ts.sort()
print('Number of timestamps: ', len(ts))
print('Earliest timestamp: ', ts[0])
print('Latest timestamp: ', ts[-1])
return ts
def read_json_graph(fname):
with open(fname,'r') as f:
js_graph = json.load(f)
g = json_graph.node_link_graph(js_graph)
# remove negative edges --> not informative
edges = list(g.edges)
for s,t in edges:
if 'neg_prob' in g[s][t].keys():
g.remove_edge(s, t)
return g
def get_graph_at_timestamp(G,timestamp):
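    # Induced subgraph over all nodes that appear on an edge no later than the given timestamp.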
node_bunch = set()
for n1,n2,data in G.edges(data=True):
if int(data['timestamp']) <= timestamp:
node_bunch.add(n1)
node_bunch.add(n2)
return nx.subgraph(G,node_bunch)
def parse_graph(subgraph):
nodes = list(subgraph.nodes)
node_info = [{'id': node} for node in nodes]
edges = list(subgraph.edges)
edge_info = []
for s, t in edges:
info = {}
check = 'pos_prob' in subgraph[s][t].keys()
prob = subgraph[s][t]['pos_prob'] if check else subgraph[s][t]['neg_prob']
gt = 1 if check else 0
info['source'] = s
info['target'] = t
# info['ground_truth'] = gt
info['prob'] = prob[0]
info['timestamp'] = int(subgraph[s][t]['timestamp'])
info['source_embed'] = subgraph[s][t]['source_embed']
info['dest_embed'] = subgraph[s][t]['dest_embed']
edge_info += [info]
return node_info, edge_info
def compose_full_graph():
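    # Copy the node-classification embeddings from node_graph onto the matching edges
    # of edge_graph; edges without a counterpart get None embeddings and are counted.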
count = 0
for s, t in list(edge_graph.edges):
try:
edge_graph[s][t]['source_embed'] = node_graph[s][t]['source_embed']
edge_graph[s][t]['dest_embed'] = node_graph[s][t]['dest_embed']
except BaseException:
count += 1
edge_graph[s][t]['source_embed'] = None
edge_graph[s][t]['dest_embed'] = None
print('# edges without context: ', count)
def get_embedding_info(nodes):
subgraph = nx.subgraph(edge_graph, nodes)
source_nodes = []
source_embeds = []
dest_nodes = []
dest_embeds = []
for s, t in list(subgraph.edges):
source_embed = subgraph[s][t]['source_embed']
dest_embed = subgraph[s][t]['dest_embed']
if source_embed and dest_embed:
source_nodes += [s]
source_embeds += [np.array(source_embed)]
dest_nodes += [t]
dest_embeds += [np.array(dest_embed)]
return source_nodes, source_embeds, dest_nodes, dest_embeds
@app.route('/get_timestamps',methods=['GET'])
def get_timestamps():
return flask.jsonify(timestamps)
@app.route('/get_subgraph_by_node_id',methods=['GET','POST'])
def get_subgraph_by_node_id():
ids = request.get_json()
node_bunch = set()
for i in ids:
id = int(i)
node_bunch.add(id)
for n in edge_graph.neighbors(id):
node_bunch.add(n)
subgraph = nx.subgraph(edge_graph, node_bunch)
node_info, edge_info = parse_graph(subgraph)
return flask.jsonify({'nodes': node_info, 'edges': edge_info})
@app.route('/get_timeframe',methods=['GET','POST'])
def get_timeframe():
end = request.get_json()
tf = timestamps[:end+1]
subgraph = get_graph_at_timestamp(edge_graph,tf[-1]) # HARDCODED
node_info, edge_info = parse_graph(subgraph)
return flask.jsonify({'nodes':node_info,'edges':edge_info})
@app.route('/get_graph',methods=['GET'])
def get_graph():
node_info, edge_info = parse_graph(edge_graph)
return flask.jsonify({'nodes':node_info,'edges':edge_info})
@app.route('/perform_tsne1',methods=['GET','POST'])
def perform_tsne1():
nodes = request.get_json()
source_nodes, source_embeds, dest_nodes, dest_embeds = get_embedding_info(nodes)
source_embeds = np.vstack(source_embeds)
dest_embeds = np.vstack(dest_embeds)
source_embedded = tsne1.fit_transform(source_embeds)
dest_embedded = tsne1.fit_transform(dest_embeds)
return flask.jsonify({'source_nodes': source_nodes, 'dest_nodes': dest_nodes, 'x': source_embedded.tolist(), 'y': dest_embedded.tolist()})
@app.route('/perform_tsne2',methods=['GET','POST'])
def perform_tsne2():
nodes = request.get_json()
source_nodes, source_embeds, dest_nodes, dest_embeds = get_embedding_info(nodes)
source_embeds = np.vstack(source_embeds)
dest_embeds = np.vstack(dest_embeds)
source_embedded = tsne2.fit_transform(source_embeds)
dest_embedded = tsne2.fit_transform(dest_embeds)
source_x = source_embedded[:, 0].tolist()
source_y = source_embedded[:, 1].tolist()
dest_x = dest_embedded[:, 0].tolist()
dest_y = dest_embedded[:, 1].tolist()
return flask.jsonify({'source_nodes': source_nodes, 'dest_nodes': dest_nodes, 'source_x': source_x, 'source_y': source_y, 'dest_x': dest_x, 'dest_y': dest_y})
if __name__=='__main__':
edge_graph = read_json_graph('static/edge/edge_prediction.json')
node_graph = read_json_graph('static/node/node_classification.json')
compose_full_graph()
timestamps = extract_timestamps_from_graph(edge_graph)
app.run(host='localhost')
| 27.805405 | 159 | 0.719673 | 791 | 5,144 | 4.415929 | 0.164349 | 0.011451 | 0.022903 | 0.018609 | 0.403951 | 0.314343 | 0.262239 | 0.227884 | 0.203836 | 0.188377 | 0 | 0.006915 | 0.128499 | 5,144 | 184 | 160 | 27.956522 | 0.772251 | 0.018274 | 0 | 0.136691 | 0 | 0 | 0.122522 | 0.018239 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086331 | false | 0 | 0.064748 | 0.007194 | 0.230216 | 0.028777 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d78f9c5a8bea9a607d7bfbb566ef8d40f689795d | 1,953 | py | Python | gae_credentials/get_credentials.py | pirika-inc/gae-impersonate-credentials | fb84c86b82d59006fce10ccb99c6f8b562d496c3 | [
"Apache-2.0"
] | null | null | null | gae_credentials/get_credentials.py | pirika-inc/gae-impersonate-credentials | fb84c86b82d59006fce10ccb99c6f8b562d496c3 | [
"Apache-2.0"
] | null | null | null | gae_credentials/get_credentials.py | pirika-inc/gae-impersonate-credentials | fb84c86b82d59006fce10ccb99c6f8b562d496c3 | [
"Apache-2.0"
] | null | null | null | import os
from typing import List
from typing import Optional
from google.auth import default as app_default_credentials
from google.auth import impersonated_credentials
from google.auth.credentials import Credentials
def _is_run_on_gcp():
return 'GOOGLE_CLOUD_PROJECT' in os.environ or 'GCP_PROJECT' in os.environ
def get_credentials(service_account: str,
scopes: List[str],
lifetime: int = 3600,
is_run_on_gcp: Optional[bool] = None
) -> Optional[Credentials]:
"""Get an GCP Credentials instance
App runs in GCP, return None for use Application Default Credentials(ADC).
When runs in local, impersonate requested service account by ADC.
(In local environment, ADC is developer's own google account that signed in with `gcloud auth application-default login` in normal usage.)
If use this function on Cloud Functions Python 3.9 Runtime, this function cannot detect actual run environment
because not supplied to GOOGLE_CLOUD_PROJECT or GCP_PROJECT environment variable.
You need to supply these value in deploy Cloud Functions or set is_run_on_gcp parameter in this function.
:param service_account: Service account to impersonate access.
:param scopes: Requested api scopes
:param lifetime: Credential life time
:param is_run_on_gcp: set to True in run in GCP, if not supplied or passed None, use auto detection from environment variable.
:return: return None on runs in GCP, otherwise Credentials object.
"""
if is_run_on_gcp is None and _is_run_on_gcp() or is_run_on_gcp:
return None
source_credentials, default_project_id = app_default_credentials()
target_credentials = impersonated_credentials.Credentials(
source_credentials=source_credentials,
target_principal=service_account,
target_scopes=scopes,
lifetime=lifetime)
return target_credentials
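# A minimal usage sketch (the service-account address and scope below are placeholders,
# not values taken from this project):
#     creds = get_credentials("deploy@example-project.iam.gserviceaccount.com",
#                             ["https://www.googleapis.com/auth/cloud-platform"])
#     # creds is None when running on GCP, otherwise impersonated Credentials.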
| 45.418605 | 142 | 0.743472 | 269 | 1,953 | 5.219331 | 0.364312 | 0.024929 | 0.0349 | 0.049858 | 0.022792 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003889 | 0.209933 | 1,953 | 42 | 143 | 46.5 | 0.906027 | 0.482847 | 0 | 0 | 0 | 0 | 0.032461 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0.045455 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d792897eb6319450da15cd4475973128ed8335fd | 1,667 | py | Python | python/qitest/test/conftest.py | aldebaran/qibuild | efea6fa3744664348717fe5e8df708a3cf392072 | [
"BSD-3-Clause"
] | 51 | 2015-01-05T14:35:13.000Z | 2021-07-27T06:46:59.000Z | python/qitest/test/conftest.py | aldebaran/qibuild | efea6fa3744664348717fe5e8df708a3cf392072 | [
"BSD-3-Clause"
] | 104 | 2015-04-09T10:48:42.000Z | 2020-09-16T16:33:29.000Z | python/qitest/test/conftest.py | aldebaran/qibuild | efea6fa3744664348717fe5e8df708a3cf392072 | [
"BSD-3-Clause"
] | 46 | 2015-01-05T14:35:16.000Z | 2022-02-13T20:39:36.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2012-2021 SoftBank Robotics. All rights reserved.
# Use of this source code is governed by a BSD-style license (see the COPYING file).
""" QiBuild """
from __future__ import absolute_import
from __future__ import unicode_literals
from __future__ import print_function
import os
import pytest
import qisys
from qisys.test.conftest import TestAction
import qibuild.find
from qibuild.test.conftest import TestWorkTree, cd_to_tmpdir
@pytest.fixture
def compiled_tests(build_worktree):
""" Compiled Tests """
testme_proj = build_worktree.add_test_project("testme")
testme_proj.configure()
testme_proj.build()
tests = list()
paths = [testme_proj.sdk_directory]
for name in ["ok", "fail", "segfault", "timeout"]:
test = {
"name": name,
"cmd": [qibuild.find.find_bin(paths, name)],
}
if name == "timeout":
test["timeout"] = 1
tests.append(test)
return tests
@pytest.fixture
def qitest_action(cd_to_tmpdir):
""" QiTest Action """
return QiTestAction()
class QiTestAction(TestAction):
""" QiTestAction """
def __init__(self):
""" QiTestAction Init """
super(QiTestAction, self).__init__("qitest.actions")
self.worktree = TestWorkTree()
def add_test_project(self, src):
""" Add Test Project """
this_dir = os.path.dirname(__file__)
src_path = os.path.join(this_dir, "projects", src)
dest_path = os.path.join(self.worktree.root, src)
qisys.sh.copy_git_src(src_path, dest_path)
return self.worktree.add_project(src)
| 28.254237 | 84 | 0.666467 | 208 | 1,667 | 5.081731 | 0.471154 | 0.037843 | 0.045412 | 0.02649 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007645 | 0.215357 | 1,667 | 58 | 85 | 28.741379 | 0.800459 | 0.167966 | 0 | 0.052632 | 0 | 0 | 0.051967 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.105263 | false | 0 | 0.236842 | 0 | 0.447368 | 0.026316 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d7947836b423ccf37cd2e0e4360d3ece6596a7b0 | 2,272 | py | Python | src/utility.py | thakreyn/drive-sink | 0b2674f23e4ece7273c32112478ec0a24befd287 | [
"MIT"
] | 18 | 2021-08-28T06:39:43.000Z | 2022-02-28T16:15:30.000Z | src/utility.py | thakreyn/drive-sink | 0b2674f23e4ece7273c32112478ec0a24befd287 | [
"MIT"
] | 5 | 2021-09-01T07:59:54.000Z | 2021-09-08T20:20:55.000Z | src/utility.py | thakreyn/drive-sink | 0b2674f23e4ece7273c32112478ec0a24befd287 | [
"MIT"
] | 1 | 2021-09-02T04:07:37.000Z | 2021-09-02T04:07:37.000Z | """
utility.py:
Contains the following utilities:
    1. Logging helpers (log, read_log, print_error)
    2. Config helpers (read_config_file, edit_config_file, check_drive_init)
"""
import os
from datetime import datetime
import configparser
from termcolor import colored
from . import init as user_init
def log(message , file = "usage.log"):
""" Log Message -> message and file options : [usage.log, commit.log] """
curr_dir = user_init.read_config_file()
path = os.path.join(curr_dir, '.sink', 'log', file)
# path = curr_dir + "/.sink/log/" + file
with open(path , "a") as file:
time = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
log_message = f"\n[{time}] : {message}"
file.write(log_message)
def read_log(length, file = "commit.log"):
"""
Returns a list of 'length' log messages from the file
"""
curr_dir = user_init.read_config_file()
# path = curr_dir + "/.sink/log/" + file
path = os.path.join(curr_dir, '.sink', 'log', file)
with open(path, 'r') as file:
data = file.read().split('\n')
data = data[::-1]
if len(data) < length:
return data
else:
return data[:length]
def read_config_file(section = "general", attr = "root"):
""" Returns the mentioned attr from a given section
(Default: returns the init directory)
"""
config = configparser.ConfigParser()
config.read(os.path.join('.', '.sink', 'config', 'config.ini'))
return config[section][attr]
def edit_config_file(section, attr, new_attr):
""" Edits the mentioned section and attr in the config.ini """
edit = configparser.ConfigParser()
edit.read(os.path.join(read_config_file(), '.sink', 'config', 'config.ini'))
edit_section = edit[section]
edit_section[attr] = new_attr
with open( os.path.join(read_config_file(), '.sink', 'config', 'config.ini'), "w") as configfile:
edit.write(configfile)
def print_error(message):
"""
    Prints a red coloured error message to the terminal.
"""
print(colored(f"[Error] : {message}", 'red'))
def check_drive_init():
""" Checks if the drive data is initialised
True -> Initialised
False -> Not
"""
return (read_config_file("general", "drive_status")) | 24.695652 | 101 | 0.609155 | 299 | 2,272 | 4.51505 | 0.311037 | 0.051852 | 0.062222 | 0.041481 | 0.204444 | 0.204444 | 0.204444 | 0.191111 | 0.117037 | 0.117037 | 0 | 0.001745 | 0.243398 | 2,272 | 92 | 102 | 24.695652 | 0.783595 | 0.247799 | 0 | 0.108108 | 0 | 0 | 0.123106 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.162162 | false | 0 | 0.135135 | 0 | 0.405405 | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d794f7d2045069ea851810f45b3ed9e86a555e95 | 9,397 | py | Python | diagnnose/models/transformer_lm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 35 | 2019-06-12T13:50:39.000Z | 2020-11-10T22:29:19.000Z | diagnnose/models/transformer_lm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 50 | 2019-04-07T20:22:54.000Z | 2020-11-14T12:58:27.000Z | diagnnose/models/transformer_lm.py | i-machine-think/diagnnose | 4533347d1f2cc2959903ae667f99dccd4dda73ee | [
"MIT"
] | 5 | 2019-06-06T13:37:29.000Z | 2020-09-24T12:04:17.000Z | from typing import Callable, List, Optional, Union
import torch
import torch.nn as nn
from torch import Tensor
from torch.nn.functional import log_softmax
from torchtext.data import Batch
from diagnnose.attribute import ShapleyTensor
from diagnnose.models import LanguageModel
from diagnnose.typedefs.activations import (
ActivationDict,
ActivationName,
ActivationNames,
SelectionFunc,
)
class TransformerLM(LanguageModel):
"""Huggingface LM wrapper.
Parameters
----------
transformer_type: str
Transformer type that can be passed to
``auto_model.from_pretrained`` as one of the valid models made
available by Huggingface, e.g. ``"roberta-base"``.
mode : str, optional
Language model mode, one of ``"causal_lm"``, ``"masked_lm"``,
``"question_answering"``, ``"sequence_classification"``, or
``"token_classification"``. If not provided the model will be
imported using ``AutoModel``, which often yields an LM with no
task-specific head on top.
embeddings_attr : str, optional
Attribute name of the word embeddings of the model. Can be
nested. For example, if the word embeddings are stored as an
``"wte"`` attribute that is part of the ``"encoder"`` attribute
of the full model, you would pass ``"encoder.wte"``. For the
following models this parameter does not need to be passed:
``"(distil)-(Ro)BERT(a)"``, ``"(distil)-gpt2"``, ``"XLM"``
cache_dir: str, optional
Path towards the cache directory where the HF model weights will
be stored.
compute_pseudo_ll: bool, optional
Toggle to compute the Pseudo Log-Likelihood that was introduced
by Salazar et al. (2020). This can be used to compute the
sentence probabilies of bi-directional masked LMs, masking out
one token at the time.
device : str, optional
Torch device on which forward passes will be run.
Defaults to cpu.
"""
def __init__(
self,
embeddings_attr: Optional[str] = None,
compute_pseudo_ll: bool = False,
device: str = "cpu",
**kwargs,
):
super().__init__(device)
self.pretrained_model = self.load_model(**kwargs)
self.embeddings_attr = embeddings_attr
self.compute_pseudo_ll = compute_pseudo_ll
def load_model(self, *args, **kwargs):
raise NotImplementedError
@property
def embeddings(self) -> Callable[[Tensor], Tensor]:
raise NotImplementedError
@property
def decoder(self) -> nn.Module:
raise NotImplementedError
def base_model(self, compute_out: bool):
return self.pretrained_model
def forward(
self,
input_ids: Optional[Union[Tensor, List[int]]] = None,
inputs_embeds: Optional[Union[Tensor, ShapleyTensor]] = None,
input_lengths: Optional[List[int]] = None,
attention_mask: Optional[Union[Tensor, List[int]]] = None,
compute_out: bool = True,
calc_causal_lm_probs: bool = False,
only_return_top_embs: bool = True,
mask_idx: Optional[int] = None,
selection_func: Optional[SelectionFunc] = None,
batch: Optional[Batch] = None,
) -> Union[ActivationDict, Tensor]:
if input_ids is not None and inputs_embeds is not None:
raise ValueError(
"You cannot specify both input_ids and inputs_embeds at the same time"
)
if inputs_embeds is None and input_ids is None:
raise ValueError("inputs_embeds or input_ids must be provided")
if inputs_embeds is not None:
inputs_embeds = inputs_embeds.to(self.device)
if len(inputs_embeds.shape) == 2:
inputs_embeds = inputs_embeds.unsqueeze(0) # Add batch dimension
if input_lengths is None:
if input_ids is not None:
batch_size, max_sen_len = input_ids.shape
else:
batch_size, max_sen_len = inputs_embeds.shape[:2]
input_lengths = torch.tensor(batch_size * [max_sen_len], device=self.device)
if isinstance(attention_mask, list):
attention_mask = torch.tensor(attention_mask, device=self.device)
if attention_mask is None:
attention_mask = self.create_attention_mask(input_lengths)
model = self.base_model(compute_out)
attention_mask = attention_mask.to(self.device)
activation_name = (-1, "out") if compute_out else (-1, "hx")
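        # (-1, "out") refers to the decoder logits of the final layer; (-1, "hx") to its hidden states.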
if self.compute_pseudo_ll:
assert isinstance(mask_idx, int), "mask_idx must be provided for Pseudo LL"
logits = self._forward_pseudo_ll(
model,
inputs_embeds,
attention_mask,
mask_idx,
activation_name,
selection_func=selection_func,
batch=batch,
)
else:
logits = self._forward(
model,
compute_out,
attention_mask,
input_ids=input_ids,
inputs_embeds=inputs_embeds,
)
if calc_causal_lm_probs:
output_ids = input_ids[:, 1:].unsqueeze(-1)
probs = log_softmax(logits[:, :-1], dim=-1)
logits = torch.gather(probs, -1, output_ids)
if only_return_top_embs:
return logits
return {activation_name: logits}
@staticmethod
def _forward(
model,
compute_out: bool,
attention_mask: Tensor,
input_ids: Optional[Tensor] = None,
inputs_embeds: Optional[Tensor] = None,
) -> Tensor:
output = model(
input_ids=input_ids,
inputs_embeds=inputs_embeds,
attention_mask=attention_mask,
)
if hasattr(output, "logits"):
logits: Tensor = output.logits
elif hasattr(output, "last_hidden_state"):
logits = output.last_hidden_state
elif isinstance(output, tuple):
# Fairseq output logic
if compute_out:
logits = output[0]
else:
logits = output[1]["inner_states"][-1].transpose(0, 1)
logits = model.decoder.layer_norm(logits)
else:
raise AttributeError
return logits
def _forward_pseudo_ll(
self,
model,
inputs_embeds: Tensor,
attention_mask: Tensor,
mask_idx: int,
activation_name: ActivationName,
selection_func: Optional[SelectionFunc] = None,
batch: Optional[Batch] = None,
) -> Tensor:
mask_embedding = self.embeddings(torch.tensor(mask_idx, device=self.device))
sen_len = inputs_embeds.shape[1]
pseudo_ll_logits = torch.zeros(
*inputs_embeds.shape[:2], self.nhid(activation_name), device=self.device
)
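        # Mask one position at a time and keep only that position's logits, following the
        # Pseudo Log-Likelihood procedure described in the class docstring.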
for w_idx in range(sen_len):
if selection_func is not None:
sen_ids = []
for batch_idx, sen_idx in enumerate(batch.sen_idx):
if selection_func(w_idx, batch.dataset.examples[sen_idx]):
sen_ids.append(batch_idx)
if len(sen_ids) == 0:
continue
else:
sen_ids = slice(0, None)
masked_inputs_embeds = inputs_embeds[sen_ids].clone()
masked_inputs_embeds[:, w_idx] = mask_embedding
masked_attention_mask = attention_mask[sen_ids].clone()
            # _forward expects (model, compute_out, attention_mask, ...); derive compute_out
            # from the requested activation and pass the masked embeddings by keyword so
            # they do not land in the wrong positional slot.
            compute_out = activation_name[1] == "out"
            logits = self._forward(
                model, compute_out, masked_attention_mask,
                inputs_embeds=masked_inputs_embeds,
            )
pseudo_ll_logits[sen_ids, w_idx] = logits[:, w_idx]
return pseudo_ll_logits
def create_attention_mask(self, input_lengths: List[int]) -> Tensor:
"""Creates an attention mask as described in:
https://huggingface.co/transformers/glossary.html#attention-mask
Parameters
----------
input_lengths : List[int]
List containing sentence lengths of each batch item.
Returns
-------
attention_mask : Tensor
Attention mask prescribing which items may be taken into
account by the attention mechanism.
Size: batch_size x max_sen_length
"""
max_sen_len = max(input_lengths)
attention_mask = torch.zeros(
len(input_lengths), max_sen_len, device=self.device
)
for idx, length in enumerate(input_lengths):
attention_mask[idx, :length] = 1.0
return attention_mask
def create_inputs_embeds(self, input_ids: Union[Tensor, List[int]]) -> Tensor:
if isinstance(input_ids, list):
input_ids = torch.tensor(input_ids, device=self.device)
inputs_embeds = self.embeddings(input_ids)
return inputs_embeds
@property
def num_layers(self) -> int:
return self.pretrained_model.config.n_layer
@property
def top_layer(self) -> int:
return -1
def nhid(self, activation_name: ActivationName) -> int:
if activation_name[1] == "out":
return self.pretrained_model.config.vocab_size
return self.pretrained_model.config.hidden_size
@staticmethod
def activation_names() -> ActivationNames:
return [(-1, "out")]
| 34.675277 | 88 | 0.615303 | 1,095 | 9,397 | 5.068493 | 0.238356 | 0.060541 | 0.017297 | 0.021622 | 0.116396 | 0.062703 | 0.036036 | 0.036036 | 0.021622 | 0 | 0 | 0.004401 | 0.298712 | 9,397 | 270 | 89 | 34.803704 | 0.837785 | 0.198893 | 0 | 0.20442 | 0 | 0 | 0.027283 | 0 | 0 | 0 | 0 | 0 | 0.005525 | 1 | 0.077348 | false | 0 | 0.049724 | 0.022099 | 0.198895 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
d798e861fd8df1e91662f4de9241509ab8c4767e | 8,634 | py | Python | train_cnn_small_tfrecords.py | PatrickKalkman/leave-disease | 93ec96ef97356a858cc1c89f62a5ec2eb3fd3248 | [
"MIT"
] | 1 | 2021-09-22T20:13:38.000Z | 2021-09-22T20:13:38.000Z | train_cnn_small_tfrecords.py | PatrickKalkman/leave-disease | 93ec96ef97356a858cc1c89f62a5ec2eb3fd3248 | [
"MIT"
] | null | null | null | train_cnn_small_tfrecords.py | PatrickKalkman/leave-disease | 93ec96ef97356a858cc1c89f62a5ec2eb3fd3248 | [
"MIT"
] | 1 | 2021-03-01T11:52:06.000Z | 2021-03-01T11:52:06.000Z | import pickle
import math, re, os
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
from tensorflow.keras.losses import CategoricalCrossentropy
import matplotlib.pyplot as plt
import numpy as np
from functools import partial
import pandas as pd
import random
import shutil
import pathlib
BATCH_SIZE = 64
IMAGE_SIZE = [150, 150]
CLASSES = ['0', '1', '2', '3', '4']
EPOCHS = 30
AUTOTUNE = tf.data.experimental.AUTOTUNE
ALL_FILENAMES = []
for dirname, _, filenames in os.walk('./train_tfrecords'):
for filename in filenames:
ALL_FILENAMES.append(os.path.join(dirname, filename))
print(os.path.join(dirname, filename))
TEST_FILENAMES = []
for dirname, _, filenames in os.walk('../test_tfrecords'):
for filename in filenames:
TEST_FILENAMES.append(os.path.join(dirname, filename))
print(os.path.join(dirname, filename))
TRAINING_FILENAMES, VALID_FILENAMES = train_test_split(
ALL_FILENAMES,
test_size=0.1, random_state=5
)
def decode_image(image):
image = tf.image.decode_jpeg(image, channels=3)
return image
def read_tf_record(example, labeled):
tfrecord_format = {
"image": tf.io.FixedLenFeature([], tf.string),
"target": tf.io.FixedLenFeature([], tf.int64)
} if labeled else {
"image": tf.io.FixedLenFeature([], tf.string),
"image_name": tf.io.FixedLenFeature([], tf.string)
}
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example['image'])
if labeled:
label = tf.cast(example['target'], tf.int64)
label = tf.one_hot(label, 5)
return image, label
idnum = example['image_name']
return image, idnum
def load_dataset(filenames, labeled=True, ordered=False, augment=False):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(partial(read_tf_record, labeled=labeled))
if augment:
dataset = dataset.map(data_augment)
else:
dataset = dataset.map(data_only_resize)
return dataset
def data_augment(image, label):
# Thanks to the dataset.prefetch(AUTO) statement in the following function this happens essentially for free on TPU.
# Data pipeline code is executed on the "CPU" part of the TPU while the TPU itself is computing gradients.
image = tf.image.resize(image, IMAGE_SIZE)
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
image = tf.image.random_contrast(image, 0.1, 0.6)
# image = tf.image.random_rotation(image, 360.0, fill_mode='reflect')
# image = tf.image.random_zoom(image)
# image = tf.image.random_shear(image)
return image, label
def data_only_resize(image, label):
# Thanks to the dataset.prefetch(AUTO) statement in the following function this happens essentially for free on TPU.
# Data pipeline code is executed on the "CPU" part of the TPU while the TPU itself is computing gradients.
image = tf.image.resize(image, IMAGE_SIZE)
return image, label
def get_training_dataset():
    # Load without augmenting, then apply the augmentation once in parallel
    # (augment=True here would run data_augment twice on every image).
    dataset = load_dataset(TRAINING_FILENAMES, labeled=True, augment=False)
    dataset = dataset.map(data_augment, num_parallel_calls=AUTOTUNE)
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTOTUNE)
return dataset
def get_validation_dataset(ordered=False):
dataset = load_dataset(VALID_FILENAMES, labeled=True, ordered=ordered)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.cache()
dataset = dataset.prefetch(AUTOTUNE)
return dataset
def get_test_dataset(ordered=False):
dataset = load_dataset(TEST_FILENAMES, labeled=False, ordered=ordered)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTOTUNE)
return dataset
def count_data_items(filenames):
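    # Each TFRecord shard encodes its record count in its filename (the digits between
    # a '-' and the following '.'), so the totals can be summed without reading the data.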
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
NUM_VALIDATION_IMAGES = count_data_items(VALID_FILENAMES)
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print('Dataset: {} training images, {} validation images, {} (unlabeled) test images'.format(
NUM_TRAINING_IMAGES, NUM_VALIDATION_IMAGES, NUM_TEST_IMAGES))
def create_cnn_model():
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(*IMAGE_SIZE, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(5, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(lr=10e-5), metrics=['accuracy'])
print(model.summary())
return model
def create_callbacks():
early_stopping = EarlyStopping(patience=6, monitor='val_loss', verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', min_lr=0.001,
patience=6, mode='min',
verbose=1)
model_checkpoint = ModelCheckpoint(monitor='val_loss',
filepath='./best-model.h5',
save_best_only=True,
verbose=1)
callbacks = [
early_stopping,
reduce_lr,
model_checkpoint
]
return callbacks
def train_model_tf_records_naive_split():
train_dataset = get_training_dataset()
valid_dataset = get_validation_dataset()
STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE
VALID_STEPS = NUM_VALIDATION_IMAGES // BATCH_SIZE
print(f'steps = {STEPS_PER_EPOCH}')
print(f'valid steps= {VALID_STEPS}')
model = create_cnn_model()
history = model.fit(train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALID_STEPS,
validation_data=valid_dataset,
epochs=EPOCHS,
callbacks=create_callbacks())
return history
history = train_model_tf_records_naive_split()
def plot_result(history):
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs = range(len(acc))
plt.figure(figsize=(15, 5))
plt.plot(epochs, acc, 'b*-', label='Training accuracy')
plt.plot(epochs, val_acc, 'r*-', label='Validation accuracy')
plt.grid()
plt.title('Training and validation accuracy')
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
plt.legend()
plt.figure()
plt.show()
plt.figure(figsize=(15, 5))
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.plot(epochs, loss, 'b*-', label='Training Loss')
plt.plot(epochs, val_loss, 'r*-', label='Validation Loss')
plt.grid()
plt.title('Training and validation loss')
plt.ylabel("Loss")
plt.xlabel("Epochs")
plt.legend()
plt.figure()
plt.show()
plot_result(history)
def to_float32(image, label):
return tf.cast(image, tf.float32), label
test_ds = get_test_dataset(ordered=True)
test_ds = test_ds.map(to_float32)
print('Computing predictions...')
model = keras.models.load_model('./best-model.h5')
test_images_ds = test_ds.map(lambda image, idnum: image)
probabilities = model.predict(test_images_ds)
predictions = np.argmax(probabilities, axis=-1)
test_ids_ds = test_ds.map(lambda image, idnum: idnum).unbatch()
test_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U') # all in one batch
np.savetxt('submission.csv', np.rec.fromarrays([test_ids, predictions]), fmt=['%s', '%d'], delimiter=',', header='image_id,label', comments='')
#head submission.csv
| 34.536 | 143 | 0.687862 | 1,122 | 8,634 | 5.124777 | 0.246881 | 0.036522 | 0.02713 | 0.018783 | 0.330435 | 0.277391 | 0.243304 | 0.20887 | 0.199826 | 0.166957 | 0 | 0.015398 | 0.195159 | 8,634 | 249 | 144 | 34.674699 | 0.812059 | 0.089761 | 0 | 0.216931 | 0 | 0 | 0.075335 | 0.003059 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.079365 | 0.005291 | 0.227513 | 0.037037 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad00b6c848824b80f9f58e475347a4f9d9791917 | 798 | py | Python | pywhatsauto.py | mido2006/whatsapp_x_o_bot | cebbe5f4826e079d15b3a66cd5cfe140933b8353 | [
"MIT"
] | null | null | null | pywhatsauto.py | mido2006/whatsapp_x_o_bot | cebbe5f4826e079d15b3a66cd5cfe140933b8353 | [
"MIT"
] | null | null | null | pywhatsauto.py | mido2006/whatsapp_x_o_bot | cebbe5f4826e079d15b3a66cd5cfe140933b8353 | [
"MIT"
] | null | null | null |
import keyboard          # hotkey detection ("q" to quit)
import pyautogui as pt   # screen automation (locateOnScreen, click, typewrite, ...)
import pyperclip         # clipboard access for reading messages
def check_green_mark():
    # this function checks for new messages (the green unread mark) and opens that chat
    s = None
    while s is None:
        if keyboard.is_pressed("q"):
            quit()
        s = pt.locateOnScreen('m.png', grayscale=True, confidence=.9)
        if s is not None:
            x, y = pt.center(s)
            pt.click(x - 50, y)
def out():
#this function get out of the conversation to get new messages
pt.click(216,216)
def new_print(Text):
#this function write messages
pt.typewrite(Text)
pt.press("enter")
def get_msg():
#this function read messages
check_green_mark()
pt.moveTo(712,906)
pt.tripleClick()
pt.click(button='right')
pt.moveTo(754,928)
pt.click(754,928)
return pyperclip.paste()
| 21 | 68 | 0.558897 | 106 | 798 | 4.141509 | 0.537736 | 0.109339 | 0.063781 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.050467 | 0.329574 | 798 | 37 | 69 | 21.567568 | 0.770093 | 0.191729 | 0 | 0 | 0 | 0 | 0.026578 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0 | 0.227273 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad00c31625d3b212f822b9bdd2b628c09ea8f60b | 4,359 | py | Python | tests/python/unittest/test_base.py | leeesangwon/incubator-mxnet | 0514233103baff5e1581cf2057f561f7a36616c2 | [
"Apache-2.0"
] | 211 | 2016-06-06T08:32:36.000Z | 2021-07-03T16:50:16.000Z | tests/python/unittest/test_base.py | leeesangwon/incubator-mxnet | 0514233103baff5e1581cf2057f561f7a36616c2 | [
"Apache-2.0"
] | 42 | 2017-01-05T02:45:13.000Z | 2020-08-11T23:45:27.000Z | tests/python/unittest/test_base.py | leeesangwon/incubator-mxnet | 0514233103baff5e1581cf2057f561f7a36616c2 | [
"Apache-2.0"
] | 58 | 2016-10-27T07:37:08.000Z | 2021-07-03T16:50:17.000Z | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import mxnet as mx
from numpy.testing import assert_equal
from mxnet.base import data_dir
from mxnet.test_utils import environment
from mxnet.util import getenv
from common import with_environment
import os
import logging
import os.path as op
import platform
import pytest
@pytest.mark.garbage_expected
def test_environment():
name1 = 'MXNET_TEST_ENV_VAR_1'
name2 = 'MXNET_TEST_ENV_VAR_2'
# Test that a variable can be set in the python and backend environment
with environment(name1, '42'):
assert_equal(os.environ.get(name1), '42')
assert_equal(getenv(name1), '42')
# Test dict form of invocation
env_var_dict = {name1: '1', name2: '2'}
with environment(env_var_dict):
for key, value in env_var_dict.items():
assert_equal(os.environ.get(key), value)
assert_equal(getenv(key), value)
# Further testing in 'test_with_environment()'
@with_environment({'MXNET_TEST_ENV_VAR_1': '10', 'MXNET_TEST_ENV_VAR_2': None})
def test_with_environment():
name1 = 'MXNET_TEST_ENV_VAR_1'
name2 = 'MXNET_TEST_ENV_VAR_2'
def check_background_values():
assert_equal(os.environ.get(name1), '10')
assert_equal(getenv(name1), '10')
assert_equal(os.environ.get(name2), None)
assert_equal(getenv(name2), None)
check_background_values()
# This completes the testing of with_environment(), but since we have
# an environment with a couple of known settings, lets use it to test if
# 'with environment()' properly restores to these settings in all cases.
class OnPurposeError(Exception):
"""A class for exceptions thrown by this test"""
pass
# Enter an environment with one variable set and check it appears
# to both python and the backend. Then, outside the 'with' block,
# make sure the background environment is seen, regardless of whether
# the 'with' block raised an exception.
def test_one_var(name, value, raise_exception=False):
try:
with environment(name, value):
assert_equal(os.environ.get(name), value)
assert_equal(getenv(name), value)
if raise_exception:
raise OnPurposeError
except OnPurposeError:
pass
finally:
check_background_values()
# Test various combinations of set and unset env vars.
    # Test that the background setting is restored in the presence of exceptions.
for raise_exception in [False, True]:
# name1 is initially set in the environment
test_one_var(name1, '42', raise_exception)
test_one_var(name1, None, raise_exception)
# name2 is initially not set in the environment
test_one_var(name2, '42', raise_exception)
test_one_var(name2, None, raise_exception)
def test_data_dir():
prev_data_dir = data_dir()
system = platform.system()
# Test that data_dir() returns the proper default value when MXNET_HOME is not set
with environment('MXNET_HOME', None):
if system == 'Windows':
assert_equal(data_dir(), op.join(os.environ.get('APPDATA'), 'mxnet'))
else:
assert_equal(data_dir(), op.join(op.expanduser('~'), '.mxnet'))
# Test that data_dir() responds to an explicit setting of MXNET_HOME
with environment('MXNET_HOME', '/tmp/mxnet_data'):
assert_equal(data_dir(), '/tmp/mxnet_data')
# Test that this test has not disturbed the MXNET_HOME value existing before the test
assert_equal(data_dir(), prev_data_dir)
| 39.627273 | 89 | 0.70039 | 614 | 4,359 | 4.811075 | 0.314332 | 0.055856 | 0.024374 | 0.030467 | 0.154705 | 0.108328 | 0.055518 | 0.035884 | 0.035884 | 0.035884 | 0 | 0.013787 | 0.21794 | 4,359 | 109 | 90 | 39.990826 | 0.852743 | 0.420739 | 0 | 0.129032 | 0 | 0 | 0.086047 | 0 | 0 | 0 | 0 | 0 | 0.241935 | 1 | 0.080645 | false | 0.032258 | 0.177419 | 0 | 0.274194 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad0237cc293417b1e64f93d77e7aa634d14876fa | 384 | py | Python | scripts/db.py | supersam654/gatech-maintenance-requests | 3506242d41075712b4543d76bf9a71161d1a383d | [
"MIT"
] | null | null | null | scripts/db.py | supersam654/gatech-maintenance-requests | 3506242d41075712b4543d76bf9a71161d1a383d | [
"MIT"
] | null | null | null | scripts/db.py | supersam654/gatech-maintenance-requests | 3506242d41075712b4543d76bf9a71161d1a383d | [
"MIT"
] | null | null | null | from pymongo import MongoClient, ASCENDING
_CONNECTION_STRING = 'localhost'
_client = MongoClient(_CONNECTION_STRING)
_db = _client['work_orders']
# Index makes processing metadata go from ~2 minutes to ~2 seconds :)
_db.requests.create_index([('order_data.code', ASCENDING)])
# The only thing that should be used publicly.
requests = _db['requests']
code_meta = _db['code_meta']
| 27.428571 | 69 | 0.768229 | 51 | 384 | 5.490196 | 0.686275 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005952 | 0.125 | 384 | 13 | 70 | 29.538462 | 0.827381 | 0.291667 | 0 | 0 | 0 | 0 | 0.193309 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad04fc38dbca76fe6181ab42570f87dae65ac9a2 | 2,587 | py | Python | cmdprogress/core.py | luciancooper/cmdprogress | a6c4ca4c008dd89c64b3ee344bd1b99e2b26a3b2 | [
"MIT"
] | 4 | 2018-12-07T20:09:43.000Z | 2019-12-27T15:36:03.000Z | cmdprogress/core.py | luciancooper/cmdprogress | a6c4ca4c008dd89c64b3ee344bd1b99e2b26a3b2 | [
"MIT"
] | 1 | 2020-07-04T23:23:07.000Z | 2020-07-04T23:23:07.000Z | cmdprogress/core.py | luciancooper/cmdprogress | a6c4ca4c008dd89c64b3ee344bd1b99e2b26a3b2 | [
"MIT"
] | null | null | null | import sys
import os
if sys.platform.startswith('win'):
import colorama
colorama.init()
if os.name == 'nt':
#import msvcrt
import ctypes
class _CursorInfo(ctypes.Structure):
_fields_ = [("size", ctypes.c_int),("visible", ctypes.c_byte)]
class ProgCLIError(Exception):
pass
class ProgCLI():
out = sys.stderr
themes = {
'smooth': (' ', '▏', '▎', '▍', '▌', '▋', '▊', '▉', '█'),
'pixel':('⡀', '⡄', '⡆', '⡇', '⣇', '⣧', '⣷', '⣿'),
'shady':(' ', '░', '▒', '▓', '█'),
'squares':('▢','▣'), # 9634,9635
'circles':('◯','◉'), # 9711,9673
'charge':('∙','█'), # 8729,9608
'basic':(' ','#'),
}
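    # Default to the ASCII 'basic' theme on Windows, where the console font often
    # lacks the Unicode block characters used by the other themes.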
fill = themes['basic'] if os.name=='nt' else themes['smooth']
def __init__(self,theme=None,**kwargs):
if not self.out.isatty():
raise ProgCLIError("ProgCLI must be used within a command line interface")
if theme != None:
self.fill = self.themes[theme.lower()]
for k,v in kwargs.items():
setattr(self,k,v)
# -------- core --------- #
def update(self):
return self
def start(self):
return self
def finish(self):
return self
# ----- Hide / Show Cursor ----- #
def hide_cursor(self):
if os.name == 'nt':
ci = _CursorInfo()
handle = ctypes.windll.kernel32.GetStdHandle(-11)
ctypes.windll.kernel32.GetConsoleCursorInfo(handle, ctypes.byref(ci))
ci.visible = False
ctypes.windll.kernel32.SetConsoleCursorInfo(handle, ctypes.byref(ci))
elif os.name == 'posix':
#sys.out.write("\033[?25l")
#sys.out.flush()
print('\x1b[?25l', end='', file=self.out)
def show_cursor(self):
if os.name == 'nt':
ci = _CursorInfo()
handle = ctypes.windll.kernel32.GetStdHandle(-11)
ctypes.windll.kernel32.GetConsoleCursorInfo(handle, ctypes.byref(ci))
ci.visible = True
ctypes.windll.kernel32.SetConsoleCursorInfo(handle, ctypes.byref(ci))
elif os.name == 'posix':
#sys.out.write("\033[?25h")
#sys.out.flush()
print('\x1b[?25h', end='', file=self.out)
def clear_line(self):
print('\r\x1b[K', end='', file=self.out)
def line_up(self):
print('\x1b[1A', end='', file=self.out)
# ----- Enter / Exit ----- #
def __enter__(self):
self.hide_cursor()
return self
def __exit__(self, type, value, tb):
self.show_cursor()
| 27.231579 | 86 | 0.517201 | 303 | 2,587 | 4.419142 | 0.429043 | 0.026886 | 0.089619 | 0.029873 | 0.40702 | 0.340553 | 0.340553 | 0.340553 | 0.340553 | 0.340553 | 0 | 0.031978 | 0.286819 | 2,587 | 94 | 87 | 27.521277 | 0.679675 | 0.080015 | 0 | 0.269841 | 0 | 0 | 0.084144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.15873 | false | 0.015873 | 0.063492 | 0.047619 | 0.396825 | 0.063492 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad08e59cc0278ec10f9412963101ce59931d2072 | 3,945 | py | Python | cogs/info.py | daniel442li/the-squirrel-from-VandyHacks | 7e33f5dc0a84820be7e86d8fece0f02c3e9a1fb2 | [
"MIT"
] | null | null | null | cogs/info.py | daniel442li/the-squirrel-from-VandyHacks | 7e33f5dc0a84820be7e86d8fece0f02c3e9a1fb2 | [
"MIT"
] | null | null | null | cogs/info.py | daniel442li/the-squirrel-from-VandyHacks | 7e33f5dc0a84820be7e86d8fece0f02c3e9a1fb2 | [
"MIT"
] | null | null | null | import time
from datetime import timedelta
from database import update_pat_counter
import discord
import psutil
from discord.ext import commands
process = psutil.Process()
init_cpu_time = process.cpu_percent()
class Info(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(name="stats")
async def view_stats(self, ctx):
"""
Returns bot statistics and technical data.
"""
total_ram = (psutil.virtual_memory().total >> 30) + 1
embed = discord.Embed(
title="the squirrel from VandyHacks Bot Stats",
description=f"Running on a dedicated server with {total_ram}GB RAM.",
color=16761095,
)
embed.add_field(name="Latency", value=f"{self.bot.latency*1000:.03f}ms", inline=False)
embed.add_field(name='"technical" info', value="random values or something idk I'm not DevOps", inline=False)
embed.add_field(name="System CPU Usage", value=f"{psutil.cpu_percent():.02f}%")
embed.add_field(name="System RAM Usage", value=f"{psutil.virtual_memory().used/1048576:.02f} MB")
embed.add_field(name="System Uptime", value=f"{timedelta(seconds=int(time.time() - psutil.boot_time()))}")
embed.add_field(name="Bot CPU Usage", value=f"{process.cpu_percent():.02f}%")
embed.add_field(name="Bot RAM Usage", value=f"{process.memory_info().rss / 1048576:.02f} MB")
embed.add_field(name="Bot Uptime", value=f"{timedelta(seconds=int(time.time() - process.create_time()))}")
embed.add_field(name=":bulb::link:", value="now some thought provoking links", inline=False)
embed.add_field(name="Cool Website", value="[vandyhacks.org](https://vandyhacks.org)")
embed.add_field(name="Another Cool Website", value="[apply.vandyhacks.org]" "(https://apply.vandyhacks.org)")
embed.add_field(
name="Source of Cool Websites", value="[github.com/VandyHacks]" "(https://github.com/VandyHacks)"
)
embed.set_footer(
text="did you pat the squirrel yet? vh pat!",
icon_url="https://cdn.discordapp.com/emojis/757097790181605416.png?v=1",
)
await ctx.send(embed=embed)
@commands.command(name="ping")
async def ping(self, ctx):
"""
Checks bot latency.
"""
await ctx.send(f"Pong! {self.bot.latency * 1000:.03f}ms")
@commands.command(name="github", aliases=["gh"])
async def github(self, ctx):
# await ctx.send("closed source for now bb") # potentially abstract stuff away and make this open sourceable?
await ctx.send("Catch! https://github.com/VandyHacks/the-squirrel-from-VandyHacks")
@commands.command(name="pat")
async def vh_pat(self, ctx):
await ctx.send(f"the squirrel from VandyHacks has been pet {await update_pat_counter()} times!")
await ctx.send("<a:squirrelpat_gif:760595962048675894>")
@commands.command(name="where")
async def vh_where(self, ctx):
await ctx.send(
"right here :) "
"\ntwitch: <https://www.twitch.tv/vandyhacks> "
"\ndevpost: <https://vandyhacksviii.devpost.com/> "
"\nworkshops: <https://learn.vandyhacks.org/> "
)
@commands.command(name="why")
async def vh_why(self, ctx):
await ctx.send("<:yeehaw:753681271212867685>")
@commands.command(name="how")
async def vh_how(self, ctx, *, text=None):
if text == "is vh":
# for quest
await ctx.author.send("thank you for asking <3 ||vh{aww_thx_4_asking_heart_emoji}||")
await ctx.send("https://vhl.ink/hackerguide/")
@commands.command(name="what")
async def vh_what(self, ctx):
await ctx.send("bro idk")
@commands.command(name="who")
async def vh_who(self, ctx):
await ctx.send("need to think so much stuff sigh")
def setup(bot):
bot.add_cog(Info(bot))
| 38.676471 | 118 | 0.639037 | 523 | 3,945 | 4.728489 | 0.357553 | 0.038819 | 0.063081 | 0.082491 | 0.230085 | 0.156086 | 0.079256 | 0.031541 | 0 | 0 | 0 | 0.032808 | 0.211914 | 3,945 | 101 | 119 | 39.059406 | 0.762625 | 0.029658 | 0 | 0 | 0 | 0 | 0.389771 | 0.117631 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.083333 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad0959a590b2fda4ca237e519583850be76f63e5 | 6,566 | py | Python | source/lambda_utils.py | akhil-sm020/hostname-as-target-for-elastic-load-balancer | 77dfca3ecb4bac2373160f09642c2ff731c927af | [
"MIT-0"
] | 8 | 2020-12-02T15:47:39.000Z | 2022-02-09T12:06:30.000Z | source/lambda_utils.py | akhil-sm020/hostname-as-target-for-elastic-load-balancer | 77dfca3ecb4bac2373160f09642c2ff731c927af | [
"MIT-0"
] | 3 | 2020-12-23T21:47:52.000Z | 2021-07-09T13:09:33.000Z | source/lambda_utils.py | akhil-sm020/hostname-as-target-for-elastic-load-balancer | 77dfca3ecb4bac2373160f09642c2ff731c927af | [
"MIT-0"
] | 10 | 2020-12-15T20:30:11.000Z | 2022-01-31T14:05:04.000Z | import json
import logging
import random
import re
import sys
import boto3
from botocore.exceptions import ClientError
import dns.resolver
logger = logging.getLogger()
logger.setLevel(logging.INFO)
try:
to_unicode = unicode
except NameError:
to_unicode = str
try:
s3_client = boto3.client('s3')
except ClientError as e:
logger.error("ERROR: failed to connect to S3 client.")
logger.error(e.response['Error']['Message'])
sys.exit(1)
try:
cw_client = boto3.client('cloudwatch')
except ClientError as e:
logger.error("ERROR: failed to connect to cloudwatch client.")
logger.error(e.response['Error']['Message'])
sys.exit(1)
try:
elbv2_client = boto3.client('elbv2')
except ClientError as e:
logger.error("ERROR: failed to connect to elbv2 client.")
logger.error(e.response['Error']['Message'])
sys.exit(1)
def put_metric_data(ip_dict, target_fqdn):
"""
Put metric -- IPCount to CloudWatch
"""
try:
cw_client.put_metric_data(
Namespace='AWS/NetworkELB',
MetricData=[
{
'MetricName': "HostnameAsTargetIPCount",
'Dimensions': [
{
'Name': 'HostnameIPCount',
'Value': target_fqdn
},
],
'Value': float(len(ip_dict)),
'Unit': 'Count'
},
]
)
except ClientError:
logger.error("ERROR: Failed to register IP count metric.")
raise
def check_s3_bucket(s3_bucket):
"""
Check if s3_bucket exists or not. If it exists returns True, else False.
"""
existing_buckets = []
response = s3_client.list_buckets()
for i in range(len(response['Buckets'])):
existing_buckets.append(response['Buckets'][i]['Name'])
if s3_bucket in existing_buckets:
return True
else:
return False
def create_s3_bucket(s3_bucket, aws_region):
"""
Check if bucket exist or not. If it exists, use that, if not create one.
"""
try:
if not check_s3_bucket(s3_bucket):
logger.info(f"INFO: Creating S3 Bucket: {s3_bucket} in ÅWS Region: {aws_region}")
s3_client.create_bucket(
Bucket=s3_bucket,
CreateBucketConfiguration={
'LocationConstraint': aws_region
},
)
else:
logger.info(f"INFO: S3 Bucket {s3_bucket} already exists.")
except ClientError as e:
logger.error("ERROR: Failed to create S3 Bucket.")
logger.error(e.response['Error']['Message'])
raise
def upload_ip_list(s3_bucket, ip_dict, object_key):
"""
Upload a IP address list to S3
"""
str_ = json.dumps(ip_dict, indent=4, sort_keys=True,
separators=(',', ': '), ensure_ascii=False)
try:
s3_client.put_object(Bucket=s3_bucket, Key=object_key,
Body=to_unicode(str_))
except Exception:
logger.error("ERROR: Failed to upload IP list to specified S3 bucket.")
raise
def download_ip_list(s3_bucket, object_key):
"""
Download a IP address list of Load Balancer IP to S3
"""
ip_dict = dict()
try:
response = s3_client.get_object(Bucket=s3_bucket, Key=object_key)
except Exception:
logger.error("WARNING: Failed to download IP list from S3. It is normal"
"to see this message if it is the first time that the Lambda"
"function runs.")
return ip_dict
try:
ip_dict = json.loads(response['Body'].read())
except Exception:
logger.error("ERROR: Corrupt S3 file.")
raise
return ip_dict
def render_list(ip_list):
"""
Render a list of targets for registration/deregistration
"""
target_list = []
for ip in ip_list:
target = {
'Id': ip
}
target_list.append(target)
return target_list
def register_target(tg_arn, new_target_list):
"""
Register resolved IPs to the NLB target group
"""
logger.info("INFO: Register new_target_list:{}".format(new_target_list))
id_list = render_list(new_target_list)
try:
elbv2_client.register_targets(
TargetGroupArn=tg_arn,
Targets=id_list
)
except ClientError:
logger.error("ERROR: IP Targets registration failed.")
raise
def deregister_target(tg_arn, dereg_target_list):
"""
Deregister missing IPs from the target group
"""
id_list = render_list(dereg_target_list)
try:
logger.info("INFO: Deregistering {}".format(dereg_target_list))
elbv2_client.deregister_targets(
TargetGroupArn=tg_arn,
Targets=id_list
)
except ClientError:
logger.error("ERROR: IP Targets deregistration failed.")
raise
def describe_target_health(tg_arn):
"""
Get a IP address list of registered targets in the NLB's target group
"""
registered_ip_list = []
try:
response = elbv2_client.describe_target_health(TargetGroupArn=tg_arn)
for target in response['TargetHealthDescriptions']:
registered_ip = target['Target']['Id']
registered_ip_list.append(registered_ip)
except ClientError:
logger.error("ERROR: Can't retrieve Target Group information.")
raise
return registered_ip_list
def dns_lookup(dns_server, domainname, record_type):
"""
Get dns lookup results
:param domain:
:return: list of dns lookup results
"""
lookup_result_list = []
# Select DNS server to use
myResolver = dns.resolver.Resolver()
myResolver.domain = ''
# Apply default DNS Server override
if dns_server:
name_server_ip_list = re.split(r'[,; ]+', dns_server)
myResolver.nameservers = [random.choice(name_server_ip_list)]
else:
logger.info("INFO: Using default DNS "
"resolvers: {}".format(dns.resolver.Resolver().nameservers))
        # dnspython expects a list of nameservers, so wrap the randomly chosen one.
        myResolver.nameservers = [random.choice(dns.resolver.Resolver().nameservers)]
logger.info("INFO: Selected DNS Server: {}".format(myResolver.nameservers))
# Resolve FQDN
try:
lookupAnswer = myResolver.query(domainname, record_type)
for answer in lookupAnswer:
lookup_result_list.append(str(answer))
except ClientError:
raise
return lookup_result_list
| 28.424242 | 93 | 0.616357 | 772 | 6,566 | 5.068653 | 0.235751 | 0.038845 | 0.040889 | 0.033734 | 0.217991 | 0.154613 | 0.146435 | 0.130079 | 0.130079 | 0.118835 | 0 | 0.00937 | 0.2848 | 6,566 | 230 | 94 | 28.547826 | 0.823893 | 0.095949 | 0 | 0.302469 | 0 | 0 | 0.17432 | 0.008144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.061728 | false | 0 | 0.049383 | 0 | 0.154321 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad09cb180f5bbc7e3d50cf0a97bceb0bb9c552dd | 412 | py | Python | ejercicios/scraping/top_50_beers.py | leugimkm/Soluciones | d71601c8d9b5e86e926f48d9e49462af8a956b6d | [
"MIT"
] | 1 | 2022-02-02T04:44:56.000Z | 2022-02-02T04:44:56.000Z | ejercicios/scraping/top_50_beers.py | leugimkm/Soluciones | d71601c8d9b5e86e926f48d9e49462af8a956b6d | [
"MIT"
] | null | null | null | ejercicios/scraping/top_50_beers.py | leugimkm/Soluciones | d71601c8d9b5e86e926f48d9e49462af8a956b6d | [
"MIT"
] | null | null | null | """AyudaEnPython: https://www.facebook.com/groups/ayudapython
"""
import requests
from bs4 import BeautifulSoup
URL = 'https://www.ratebeer.com/beer/top-50/'
page = requests.get(URL)
soup = BeautifulSoup(page.content, "html.parser")
elements = soup.find_all("tr")
for tr in elements[5:55]:
td = tr.find_all("td")
if len(td) > 0:
for i in td:
print(i.text, end=" ")
print()
| 22.888889 | 61 | 0.640777 | 60 | 412 | 4.366667 | 0.65 | 0.061069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021084 | 0.194175 | 412 | 17 | 62 | 24.235294 | 0.768072 | 0.140777 | 0 | 0 | 0 | 0 | 0.152738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad0cbba0c09ab8b4d5c0e0f6df3edae625449e72 | 924 | py | Python | greent/annotators/disease_annotator.py | TranslatorIIPrototypes/robo-commons | a915d80b70f7e68a70f6a5f7ff6e732d2e02db06 | [
"MIT"
] | 1 | 2020-02-05T20:00:52.000Z | 2020-02-05T20:00:52.000Z | greent/annotators/disease_annotator.py | TranslatorIIPrototypes/robo-commons | a915d80b70f7e68a70f6a5f7ff6e732d2e02db06 | [
"MIT"
] | 12 | 2020-05-07T16:40:15.000Z | 2020-06-16T13:23:13.000Z | greent/annotators/disease_annotator.py | TranslatorIIPrototypes/robo-commons | a915d80b70f7e68a70f6a5f7ff6e732d2e02db06 | [
"MIT"
] | 6 | 2018-02-23T20:25:50.000Z | 2019-11-21T14:55:52.000Z | from greent.annotators.annotator import Annotator
from greent.util import Text
import logging
logger = logging.getLogger(name = __name__)
class DiseaseAnnotator(Annotator):
def __init__(self, rosetta):
super().__init__(rosetta)
self.prefix_source_mapping = {
'MONDO': self.get_mondo_properties
}
async def get_mondo_properties(self, mondo_curie):
"""
        Gets the ancestors from Onto and maps them to the ones we are interested in.
"""
conf = self.get_prefix_config('MONDO')
ancestors_url = conf['url'] + mondo_curie
response = await self.async_get_json(ancestors_url)
if 'superterms' not in response:
return {}
ancestors = response['superterms']
properties = {Text.snakify(conf['keys'][x]) : True for x in ancestors if x in conf['keys']}
return properties
| 31.862069 | 100 | 0.635281 | 107 | 924 | 5.242991 | 0.514019 | 0.035651 | 0.064171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.275974 | 924 | 28 | 101 | 33 | 0.838565 | 0 | 0 | 0 | 0 | 0 | 0.051637 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.157895 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad10dd2d8f8b9027a8d51ec430f31090d5344508 | 927 | py | Python | utils/tensorboard.py | LIAMF-USP/deep_active_learning | 97fa1af0c8ccb05d1b9568fa0d7698b931396cdd | [
"MIT"
] | 1 | 2018-07-10T09:07:59.000Z | 2018-07-10T09:07:59.000Z | utils/tensorboard.py | LIAMF-USP/deep_active_learning | 97fa1af0c8ccb05d1b9568fa0d7698b931396cdd | [
"MIT"
] | null | null | null | utils/tensorboard.py | LIAMF-USP/deep_active_learning | 97fa1af0c8ccb05d1b9568fa0d7698b931396cdd | [
"MIT"
] | null | null | null | import tensorflow as tf
from datetime import datetime
def create_unique_name(base_name):
"""
This function is used to create a unique name for saving tensorboard
information.
    It appends the current date and time to the base_name received as a
    parameter when a model is created.
Args:
base_name: A string containing the base name where to save tensorboard
information
Returns:
unique_name: A unique name that represents the directory name where
tensorboard information will be saved.
"""
date_str = datetime.now().strftime("%d-%m-%Y-%H:%M:%S")
return base_name + '-' + date_str
def add_array_to_summary_writer(writer, array, tagname):
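    # Write each value of the array as a scalar summary under `tagname`,
    # using its index in the array as the step.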
for index, value in enumerate(array):
summary = tf.Summary()
summary.value.add(tag=tagname, simple_value=value)
writer.add_summary(summary, index)
writer.flush()
| 28.96875 | 78 | 0.670982 | 125 | 927 | 4.856 | 0.536 | 0.065898 | 0.036244 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.255663 | 927 | 31 | 79 | 29.903226 | 0.87971 | 0.455232 | 0 | 0 | 0 | 0 | 0.039735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad13d368e69adb798a326e7196ab2fb505d1faff | 37,279 | py | Python | plotly/graph_objs/graph_objs.py | awesome-archive/plotly.py | 0af4ef6abd0fe9907268d266304de630f94cda60 | [
"MIT"
] | 48 | 2017-08-04T03:30:22.000Z | 2022-03-09T03:24:11.000Z | LDDMM_Python/lddmm_python/lib/plotly/graph_objs/graph_objs.py | hushunbo/lddmm-ot | 5af26fe32ae440c598ed403ce2876e98d6e1c692 | [
"MIT"
] | null | null | null | LDDMM_Python/lddmm_python/lib/plotly/graph_objs/graph_objs.py | hushunbo/lddmm-ot | 5af26fe32ae440c598ed403ce2876e98d6e1c692 | [
"MIT"
] | 15 | 2017-09-30T18:55:48.000Z | 2021-04-27T18:27:55.000Z | """
graph_objs
==========
A module that understands plotly language and can manage the json
structures. This module defines two base classes: PlotlyList and PlotlyDict.
The former inherits from `list` and the latter inherits from `dict`.
A third structure, PlotlyTrace, is also considered a base class for all
subclassing 'trace' objects like Scatter, Box, Bar, etc. It is also not meant
to be instantiated by users.
Goals of this module:
---------------------
* A dict/list with the same entries as a PlotlyDict/PlotlyList should look
exactly the same once a call is made to plot.
* Only mutate object structure when users ASK for it. (some magic now...)
* It should always be possible to get a dict/list JSON representation from a
graph_objs object and it should always be possible to make a graph_objs object
from a dict/list JSON representation.
"""
from __future__ import absolute_import
import copy
import re
import warnings
from collections import OrderedDict
import six
from plotly import exceptions, graph_reference
from plotly.graph_objs import graph_objs_tools
class PlotlyBase(object):
"""
Base object for PlotlyList and PlotlyDict.
"""
_name = None
_parent = None
_parent_key = None
def _get_path(self):
"""
Get a tuple of the str keys and int indices for this object's path.
:return: (tuple)
"""
path = []
parents = self._get_parents()
parents.reverse()
children = [self] + parents[:-1]
for parent, child in zip(parents, children):
path.append(child._parent_key)
path.reverse()
return tuple(path)
def _get_parents(self):
"""
Get a list of all the parent objects above this one.
:return: (list[PlotlyBase])
"""
parents = []
parent = self._parent
while parent is not None:
parents.append(parent)
parent = parent._parent
parents.reverse()
return parents
def _get_parent_object_names(self):
"""
Get a list of the names of the parent objects above this one.
:return: (list[str])
"""
parents = self._get_parents()
return [parent._name for parent in parents]
def _get_class_name(self):
"""For convenience. See `graph_reference.object_name_to_class_name`."""
return graph_reference.object_name_to_class_name(self._name)
def help(self, return_help=False):
"""
Print a help string for this object.
        :param (bool) return_help: Return help string instead of printing?
:return: (None|str) Optionally can return help string.
"""
object_name = self._name
path = self._get_path()
parent_object_names = self._get_parent_object_names()
help_string = graph_objs_tools.get_help(object_name, path,
parent_object_names)
if return_help:
return help_string
print(help_string)
def to_graph_objs(self, **kwargs):
"""Everything is cast into graph_objs. Here for backwards compat."""
pass
def validate(self):
"""Everything is *always* validated now. keep for backwards compat."""
pass
class PlotlyList(list, PlotlyBase):
"""
Base class for list-like Plotly objects.
"""
_name = None
def __init__(self, *args, **kwargs):
_raise = kwargs.get('_raise', True)
if self._name is None:
self.__dict__['_name'] = kwargs.pop('_name', None)
self.__dict__['_parent'] = kwargs.get('_parent')
self.__dict__['_parent_key'] = kwargs.get('_parent_key')
if self._name is None:
raise exceptions.PlotlyError(
"PlotlyList is a base class. It's shouldn't be instantiated."
)
if args and isinstance(args[0], dict):
note = (
"Just like a `list`, `{name}` must be instantiated "
"with a *single* collection.\n"
"In other words these are OK:\n"
">>> {name}()\n"
">>> {name}([])\n"
">>> {name}([dict()])\n"
">>> {name}([dict(), dict()])\n"
"However, these don't make sense:\n"
">>> {name}(dict())\n"
">>> {name}(dict(), dict())"
.format(name=self._get_class_name())
)
raise exceptions.PlotlyListEntryError(self, [0], notes=[note])
super(PlotlyList, self).__init__()
for index, value in enumerate(list(*args)):
value = self._value_to_graph_object(index, value, _raise=_raise)
if isinstance(value, PlotlyBase):
self.append(value)
def __setitem__(self, index, value, _raise=True):
"""Override to enforce validation."""
if not isinstance(index, int):
if _raise:
index_type = type(index)
raise TypeError('Index must be int, not {}'.format(index_type))
return
if index >= len(self):
raise IndexError(index)
value = self._value_to_graph_object(index, value, _raise=_raise)
if isinstance(value, (PlotlyDict, PlotlyList)):
super(PlotlyList, self).__setitem__(index, value)
def __setattr__(self, key, value):
raise exceptions.PlotlyError('Setting attributes on a PlotlyList is '
'not allowed')
def __iadd__(self, other):
"""Defines the `+=` operator, which we map to extend."""
self.extend(other)
return self
def __copy__(self):
# TODO: https://github.com/plotly/python-api/issues/291
return GraphObjectFactory.create(self._name, _parent=self._parent,
_parent_key=self._parent_key, *self)
def __deepcopy__(self, memodict={}):
# TODO: https://github.com/plotly/python-api/issues/291
return self.__copy__()
def _value_to_graph_object(self, index, value, _raise=True):
"""
Attempt to change the given value into a graph object.
If _raise is False, this won't raise. If the entry can't be converted,
`None` is returned, meaning the caller should ignore the value or
discard it as a failed conversion.
:param (dict) value: A dict to be converted into a graph object.
:param (bool) _raise: If False, ignore bad values instead of raising.
:return: (PlotlyBase|None) The graph object or possibly `None`.
"""
if not isinstance(value, dict):
if _raise:
path = self._get_path() + (index, )
raise exceptions.PlotlyListEntryError(self, path)
else:
return
items = graph_reference.ARRAYS[self._name]['items']
for i, item in enumerate(items, 1):
try:
return GraphObjectFactory.create(item, _raise=_raise,
_parent=self,
_parent_key=index, **value)
except exceptions.PlotlyGraphObjectError:
if i == len(items) and _raise:
raise
def append(self, value):
"""Override to enforce validation."""
index = len(self) # used for error messages
value = self._value_to_graph_object(index, value)
super(PlotlyList, self).append(value)
def extend(self, iterable):
"""Override to enforce validation."""
for value in iterable:
index = len(self)
value = self._value_to_graph_object(index, value)
super(PlotlyList, self).append(value)
def insert(self, index, value):
"""Override to enforce validation."""
value = self._value_to_graph_object(index, value)
super(PlotlyList, self).insert(index, value)
def update(self, changes, make_copies=False):
"""
Update current list with changed_list, which must be iterable.
:param (dict|list[dict]) changes:
:param (bool) make_copies:
Because mutable objects contain references to their values, updating
multiple items in a list will cause the items to all reference the same
original set of objects. To change this behavior add
`make_copies=True` which makes deep copies of the update items and
therefore break references.
"""
if isinstance(changes, dict):
changes = [changes]
for index in range(len(self)):
try:
update = changes[index % len(changes)]
except ZeroDivisionError:
pass
else:
if make_copies:
self[index].update(copy.deepcopy(update))
else:
self[index].update(update)
def strip_style(self):
"""Strip style by calling `stip_style` on children items."""
for plotly_dict in self:
plotly_dict.strip_style()
def get_data(self, flatten=False):
"""
Returns the JSON for the plot with non-data elements stripped.
:param (bool) flatten: {'a': {'b': ''}} --> {'a.b': ''}
:returns: (dict|list) Depending on (flat|unflat)
"""
l = list()
for plotly_dict in self:
l += [plotly_dict.get_data(flatten=flatten)]
del_indicies = [index for index, item in enumerate(self)
if len(item) == 0]
del_ct = 0
for index in del_indicies:
del self[index - del_ct]
del_ct += 1
if flatten:
d = {}
for i, e in enumerate(l):
for k, v in e.items():
key = "{0}.{1}".format(i, k)
d[key] = v
return d
else:
return l
def get_ordered(self, **kwargs):
"""All children are already validated. Just use get_ordered on them."""
return [child.get_ordered() for child in self]
def to_string(self, level=0, indent=4, eol='\n',
pretty=True, max_chars=80):
"""Get formatted string by calling `to_string` on children items."""
if not len(self):
return "{name}()".format(name=self._get_class_name())
string = "{name}([{eol}{indent}".format(
name=self._get_class_name(),
eol=eol,
indent=' ' * indent * (level + 1))
for index, entry in enumerate(self):
string += entry.to_string(level=level+1,
indent=indent,
eol=eol,
pretty=pretty,
max_chars=max_chars)
if index < len(self) - 1:
string += ",{eol}{indent}".format(
eol=eol,
indent=' ' * indent * (level + 1))
string += (
"{eol}{indent}])").format(eol=eol, indent=' ' * indent * level)
return string
def force_clean(self, **kwargs):
"""Remove empty/None values by calling `force_clean()` on children."""
for entry in self:
entry.force_clean()
del_indicies = [index for index, item in enumerate(self)
if len(item) == 0]
del_ct = 0
for index in del_indicies:
del self[index - del_ct]
del_ct += 1
class PlotlyDict(dict, PlotlyBase):
"""
Base class for dict-like Plotly objects.
"""
_name = None
_parent_key = None
_valid_attributes = None
_deprecated_attributes = None
_subplot_attributes = None
def __init__(self, *args, **kwargs):
_raise = kwargs.pop('_raise', True)
if self._name is None:
self.__dict__['_name'] = kwargs.pop('_name', None)
self.__dict__['_parent'] = kwargs.pop('_parent', None)
self.__dict__['_parent_key'] = kwargs.pop('_parent_key', None)
if self._name is None:
raise exceptions.PlotlyError(
"PlotlyDict is a base class. It's shouldn't be instantiated."
)
super(PlotlyDict, self).__init__()
if self._name in graph_reference.TRACE_NAMES:
self['type'] = self._name
# force key-value pairs to go through validation
d = {key: val for key, val in dict(*args, **kwargs).items()}
for key, val in d.items():
self.__setitem__(key, val, _raise=_raise)
def __dir__(self):
"""Dynamically return the existing and possible attributes."""
return sorted(list(self._get_valid_attributes()))
def __getitem__(self, key):
"""Calls __missing__ when key is not found. May mutate object."""
if key not in self:
self.__missing__(key)
return super(PlotlyDict, self).__getitem__(key)
def __setattr__(self, key, value):
"""Maps __setattr__ onto __setitem__"""
self.__setitem__(key, value)
def __setitem__(self, key, value, _raise=True):
"""Validates/Converts values which should be Graph Objects."""
if not isinstance(key, six.string_types):
if _raise:
raise TypeError('Key must be string, not {}'.format(type(key)))
return
if key.endswith('src'):
if key in self._get_valid_attributes():
value = graph_objs_tools.assign_id_to_src(key, value)
return super(PlotlyDict, self).__setitem__(key, value)
subplot_key = self._get_subplot_key(key)
if subplot_key is not None:
value = self._value_to_graph_object(subplot_key, value,
_raise=_raise)
if isinstance(value, (PlotlyDict, PlotlyList)):
return super(PlotlyDict, self).__setitem__(key, value)
if key not in self._get_valid_attributes():
if key in self._get_deprecated_attributes():
warnings.warn(
"Oops! '{attribute}' has been deprecated in "
"'{object_name}'\nThis may still work, but you should "
"update your code when possible.\n\n"
"Run `.help('{attribute}')` for more information."
.format(attribute=key, object_name=self._name)
)
# this means deprecated attrs get set *as-is*!
return super(PlotlyDict, self).__setitem__(key, value)
else:
if _raise:
path = self._get_path() + (key, )
raise exceptions.PlotlyDictKeyError(self, path)
return
if self._get_attribute_role(key) == 'object':
value = self._value_to_graph_object(key, value, _raise=_raise)
if not isinstance(value, (PlotlyDict, PlotlyList)):
return
super(PlotlyDict, self).__setitem__(key, value)
def __getattr__(self, key):
"""Python only calls this when key is missing!"""
try:
return self.__getitem__(key)
except KeyError:
raise AttributeError(key)
def __copy__(self):
# TODO: https://github.com/plotly/python-api/issues/291
        return GraphObjectFactory.create(self._name, _parent=self._parent,
_parent_key=self._parent_key, **self)
def __deepcopy__(self, memodict={}):
# TODO: https://github.com/plotly/python-api/issues/291
return self.__copy__()
def __missing__(self, key):
"""Mimics defaultdict. This is called from __getitem__ when key DNE."""
if key in self._get_valid_attributes():
if self._get_attribute_role(key) == 'object':
value = GraphObjectFactory.create(key, _parent=self,
_parent_key=key)
return super(PlotlyDict, self).__setitem__(key, value)
subplot_key = self._get_subplot_key(key)
if subplot_key is not None:
value = GraphObjectFactory.create(subplot_key, _parent=self,
_parent_key=key)
super(PlotlyDict, self).__setitem__(key, value)
def _get_attribute_role(self, key, value=None):
"""See `graph_reference.get_role`."""
object_name = self._name
parent_object_names = self._get_parent_object_names()
return graph_reference.get_role(
object_name, key, value=value,
parent_object_names=parent_object_names
)
def _get_valid_attributes(self):
"""See `graph_reference.get_valid_attributes`."""
if self._valid_attributes is None:
parent_object_names = self._get_parent_object_names()
valid_attributes = graph_reference.get_valid_attributes(
self._name, parent_object_names
)
self.__dict__['_valid_attributes'] = valid_attributes
return self._valid_attributes
def _get_deprecated_attributes(self):
"""See `graph_reference.get_deprecated_attributes`."""
if self._deprecated_attributes is None:
parent_object_names = self._get_parent_object_names()
deprecated_attributes = graph_reference.get_deprecated_attributes(
self._name, parent_object_names
)
self.__dict__['_deprecated_attributes'] = deprecated_attributes
return self._deprecated_attributes
def _get_subplot_attributes(self):
"""See `graph_reference.get_subplot_attributes`."""
if self._subplot_attributes is None:
parent_object_names = self._get_parent_object_names()
subplot_attributes = graph_reference.get_subplot_attributes(
self._name, parent_object_names
)
self.__dict__['_subplot_attributes'] = subplot_attributes
return self._subplot_attributes
def _get_subplot_key(self, key):
"""Some keys can have appended integers, this handles that."""
match = re.search(r'(?P<digits>\d+$)', key)
if match:
root_key = key[:match.start()]
if (root_key in self._get_subplot_attributes() and
not match.group('digits').startswith('0')):
return root_key
def _value_to_graph_object(self, key, value, _raise=True):
"""
Attempt to convert value to graph object.
:param (str|unicode) key: Should be an object_name from GRAPH_REFERENCE
:param (dict) value: This will fail if it's not a dict.
:param (bool) _raise: Flag to prevent inappropriate erring.
:return: (PlotlyList|PlotlyDict|None) `None` if `_raise` and failure.
"""
if key in graph_reference.ARRAYS:
val_types = (list, )
else:
val_types = (dict, )
if not isinstance(value, val_types):
if _raise:
path = self._get_path() + (key, )
raise exceptions.PlotlyDictValueError(self, path)
else:
return
# this can be `None` when `_raise == False`
return GraphObjectFactory.create(key, value, _raise=_raise,
_parent=self, _parent_key=key)
def help(self, attribute=None, return_help=False):
"""
Print help string for this object or an attribute of this object.
:param (str) attribute: A valid attribute string for this object.
:param (bool) return_help: Return help_string instead of printing it?
:return: (None|str)
"""
if not attribute:
return super(PlotlyDict, self).help(return_help=return_help)
object_name = self._name
path = self._get_path()
parent_object_names = self._get_parent_object_names()
help_string = graph_objs_tools.get_help(object_name, path,
parent_object_names, attribute)
if return_help:
return help_string
print(help_string)
def update(self, dict1=None, **dict2):
"""
Update current dict with dict1 and then dict2.
This recursively updates the structure of the original dictionary-like
object with the new entries in the second and third objects. This
allows users to update with large, nested structures.
Note, because the dict2 packs up all the keyword arguments, you can
        specify the changes as a list of keyword arguments.
Examples:
# update with dict
obj = Layout(title='my title', xaxis=XAxis(range=[0,1], domain=[0,1]))
update_dict = dict(title='new title', xaxis=dict(domain=[0,.8]))
obj.update(update_dict)
obj
{'title': 'new title', 'xaxis': {'range': [0,1], 'domain': [0,.8]}}
# update with list of keyword arguments
obj = Layout(title='my title', xaxis=XAxis(range=[0,1], domain=[0,1]))
obj.update(title='new title', xaxis=dict(domain=[0,.8]))
obj
{'title': 'new title', 'xaxis': {'range': [0,1], 'domain': [0,.8]}}
This 'fully' supports duck-typing in that the call signature is
identical, however this differs slightly from the normal update
method provided by Python's dictionaries.
"""
if dict1 is not None:
for key, val in list(dict1.items()):
if key in self:
if isinstance(self[key], (PlotlyDict, PlotlyList)):
self[key].update(val)
else:
self[key] = val
else:
self[key] = val
if len(dict2):
for key, val in list(dict2.items()):
if key in self:
if isinstance(self[key], (PlotlyDict, PlotlyList)):
self[key].update(val)
else:
self[key] = val
else:
self[key] = val
def strip_style(self):
"""
Recursively strip style from the current representation.
All PlotlyDicts and PlotlyLists are guaranteed to survive the
        stripping process, though they may be left empty. This is allowable.
Keys that will be stripped in this process are tagged with
`'type': 'style'` in graph_objs_meta.json. Note that a key tagged as
style, but with an array as a value may still be considered data.
"""
keys = list(self.keys())
for key in keys:
if isinstance(self[key], (PlotlyDict, PlotlyList)):
self[key].strip_style()
else:
if self._get_attribute_role(key, value=self[key]) == 'style':
del self[key]
# this is for backwards compat when we updated graph reference.
elif self._name == 'layout' and key == 'autosize':
del self[key]
def get_data(self, flatten=False):
"""Returns the JSON for the plot with non-data elements stripped."""
d = dict()
for key, val in list(self.items()):
if isinstance(val, (PlotlyDict, PlotlyList)):
sub_data = val.get_data(flatten=flatten)
if flatten:
for sub_key, sub_val in sub_data.items():
key_string = "{0}.{1}".format(key, sub_key)
d[key_string] = sub_val
else:
d[key] = sub_data
else:
if self._get_attribute_role(key, value=val) == 'data':
d[key] = val
# we use the name to help make data frames
if self._name in graph_reference.TRACE_NAMES and key == 'name':
d[key] = val
keys = list(d.keys())
for key in keys:
if isinstance(d[key], (dict, list)):
if len(d[key]) == 0:
del d[key]
return d
def get_ordered(self, **kwargs):
"""Return a predictable, OrderedDict version of self."""
keys = sorted(self.keys(), key=graph_objs_tools.sort_keys)
ordered = OrderedDict()
for key in keys:
if isinstance(self[key], PlotlyBase):
ordered[key] = self[key].get_ordered()
else:
ordered[key] = self[key]
return ordered
def to_string(self, level=0, indent=4, eol='\n',
pretty=True, max_chars=80):
"""
Returns a formatted string showing graph_obj constructors.
:param (int) level: The number of indentations to start with.
:param (int) indent: The indentation amount.
:param (str) eol: The end of line character(s).
:param (bool) pretty: Curtail long list output with a '..' ?
:param (int) max_chars: The max characters per line.
Example:
print(obj.to_string())
"""
if not len(self):
return "{name}()".format(name=self._get_class_name())
string = "{name}(".format(name=self._get_class_name())
if self._name in graph_reference.TRACE_NAMES:
keys = [key for key in self.keys() if key != 'type']
else:
keys = self.keys()
keys = sorted(keys, key=graph_objs_tools.sort_keys)
num_keys = len(keys)
for index, key in enumerate(keys, 1):
string += "{eol}{indent}{key}=".format(
eol=eol,
indent=' ' * indent * (level+1),
key=key)
if isinstance(self[key], PlotlyBase):
string += self[key].to_string(level=level+1,
indent=indent,
eol=eol,
pretty=pretty,
max_chars=max_chars)
else:
if pretty: # curtail representation if too many chars
max_len = (max_chars -
indent*(level + 1) -
len(key + "=") -
len(eol))
if index < num_keys:
max_len -= len(',') # remember the comma!
if isinstance(self[key], list):
s = "[]"
for iii, entry in enumerate(self[key], 1):
if iii < len(self[key]):
s_sub = graph_objs_tools.curtail_val_repr(
entry,
max_chars=max_len - len(s),
add_delim=True
)
else:
s_sub = graph_objs_tools.curtail_val_repr(
entry,
max_chars=max_len - len(s),
add_delim=False
)
s = s[:-1] + s_sub + s[-1]
if len(s) == max_len:
break
string += s
else:
string += graph_objs_tools.curtail_val_repr(
self[key], max_len)
else: # they want it all!
string += repr(self[key])
if index < num_keys:
string += ","
string += "{eol}{indent})".format(eol=eol, indent=' ' * indent * level)
return string
def force_clean(self, **kwargs):
"""Recursively remove empty/None values."""
keys = list(self.keys())
for key in keys:
try:
self[key].force_clean()
except AttributeError:
pass
if isinstance(self[key], (dict, list)):
if len(self[key]) == 0:
del self[key] # clears empty collections!
elif self[key] is None:
del self[key]
class GraphObjectFactory(object):
"""GraphObject creation in this module should run through this factory."""
@staticmethod
def create(object_name, *args, **kwargs):
"""
Create a graph object from the OBJECTS dict by name, args, and kwargs.
:param (str) object_name: A valid object name from OBJECTS.
:param args: Arguments to pass to class constructor.
:param kwargs: Keyword arguments to pass to class constructor.
:return: (PlotlyList|PlotlyDict) The instantiated graph object.
"""
is_array = object_name in graph_reference.ARRAYS
is_object = object_name in graph_reference.OBJECTS
if not (is_array or is_object):
raise exceptions.PlotlyError(
"'{}' is not a valid object name.".format(object_name)
)
# We patch Figure and Data, so they actually require the subclass.
class_name = graph_reference.OBJECT_NAME_TO_CLASS_NAME.get(object_name)
if class_name in ['Figure', 'Data']:
return globals()[class_name](*args, **kwargs)
else:
kwargs['_name'] = object_name
if is_array:
return PlotlyList(*args, **kwargs)
else:
return PlotlyDict(*args, **kwargs)
def _add_classes_to_globals(globals):
"""
Create and add all the Graph Objects to this module for export.
:param (dict) globals: The globals() dict from this module.
"""
for class_name, class_dict in graph_reference.CLASSES.items():
object_name = class_dict['object_name']
base_type = class_dict['base_type']
# This is for backwards compat (e.g., Trace) and future changes.
if object_name is None:
globals[class_name] = base_type
continue
doc = graph_objs_tools.get_help(object_name)
if object_name in graph_reference.ARRAYS:
class_bases = (PlotlyList, )
else:
class_bases = (PlotlyDict, )
class_dict = {'__doc__': doc, '__name__': class_name,
'_name': object_name}
cls = type(str(class_name), class_bases, class_dict)
globals[class_name] = cls
def _patch_figure_class(figure_class):
def __init__(self, *args, **kwargs):
super(figure_class, self).__init__(*args, **kwargs)
if 'data' not in self:
self.data = GraphObjectFactory.create('data', _parent=self,
_parent_key='data')
figure_class.__init__ = __init__
def get_data(self, flatten=False):
"""
Returns the JSON for the plot with non-data elements stripped.
Flattening may increase the utility of the result.
:param (bool) flatten: {'a': {'b': ''}} --> {'a.b': ''}
:returns: (dict|list) Depending on (flat|unflat)
"""
return self.data.get_data(flatten=flatten)
figure_class.get_data = get_data
def to_dataframe(self):
"""
Create a pandas dataframe with trace names and keys as column names.
:return: (DataFrame)
"""
data = self.get_data(flatten=True)
from pandas import DataFrame, Series
return DataFrame(dict([(k, Series(v)) for k, v in data.items()]))
figure_class.to_dataframe = to_dataframe
def print_grid(self):
"""
Print a visual layout of the figure's axes arrangement.
This is only valid for figures that are created
with plotly.tools.make_subplots.
"""
try:
grid_str = self.__dict__['_grid_str']
except AttributeError:
raise Exception("Use plotly.tools.make_subplots "
"to create a subplot grid.")
print(grid_str)
figure_class.print_grid = print_grid
def append_trace(self, trace, row, col):
"""
        Add a data trace to your figure, bound to the axes at the row, col index.
The row, col index is generated from figures created with
plotly.tools.make_subplots and can be viewed with Figure.print_grid.
:param (dict) trace: The data trace to be bound.
:param (int) row: Subplot row index (see Figure.print_grid).
:param (int) col: Subplot column index (see Figure.print_grid).
Example:
# stack two subplots vertically
fig = tools.make_subplots(rows=2)
This is the format of your plot grid:
[ (1,1) x1,y1 ]
[ (2,1) x2,y2 ]
fig.append_trace(Scatter(x=[1,2,3], y=[2,1,2]), 1, 1)
fig.append_trace(Scatter(x=[1,2,3], y=[2,1,2]), 2, 1)
"""
try:
grid_ref = self._grid_ref
except AttributeError:
raise Exception("In order to use Figure.append_trace, "
"you must first use plotly.tools.make_subplots "
"to create a subplot grid.")
if row <= 0:
raise Exception("Row value is out of range. "
"Note: the starting cell is (1, 1)")
if col <= 0:
raise Exception("Col value is out of range. "
"Note: the starting cell is (1, 1)")
try:
ref = grid_ref[row-1][col-1]
except IndexError:
raise Exception("The (row, col) pair sent is out of range. "
"Use Figure.print_grid to view the subplot grid. ")
if 'scene' in ref[0]:
trace['scene'] = ref[0]
if ref[0] not in self['layout']:
raise Exception("Something went wrong. "
"The scene object for ({r},{c}) subplot cell "
"got deleted.".format(r=row, c=col))
else:
xaxis_key = "xaxis{ref}".format(ref=ref[0][1:])
yaxis_key = "yaxis{ref}".format(ref=ref[1][1:])
if (xaxis_key not in self['layout']
or yaxis_key not in self['layout']):
raise Exception("Something went wrong. "
"An axis object for ({r},{c}) subplot cell "
"got deleted.".format(r=row, c=col))
trace['xaxis'] = ref[0]
trace['yaxis'] = ref[1]
self['data'] += [trace]
figure_class.append_trace = append_trace
def _patch_data_class(data_class):
def _value_to_graph_object(self, index, value, _raise=True):
if not isinstance(value, dict):
if _raise:
notes = ['Entry should subclass dict.']
path = self._get_path() + (index, )
raise exceptions.PlotlyListEntryError(self, path, notes=notes)
else:
return
item = value.get('type', 'scatter')
if item not in graph_reference.ARRAYS['data']['items']:
if _raise:
path = self._get_path() + (0, )
raise exceptions.PlotlyDataTypeError(self, path)
return GraphObjectFactory.create(item, _raise=_raise, _parent=self,
_parent_key=index, **value)
data_class._value_to_graph_object = _value_to_graph_object
def get_data(self, flatten=False):
"""
Returns the JSON for the plot with non-data elements stripped.
:param (bool) flatten: {'a': {'b': ''}} --> {'a.b': ''}
:returns: (dict|list) Depending on (flat|unflat)
"""
if flatten:
data = [v.get_data(flatten=flatten) for v in self]
d = {}
taken_names = []
for i, trace in enumerate(data):
# we want to give the traces helpful names
# however, we need to be sure they're unique too...
trace_name = trace.pop('name', 'trace_{0}'.format(i))
if trace_name in taken_names:
j = 1
new_trace_name = "{0}_{1}".format(trace_name, j)
while new_trace_name in taken_names:
new_trace_name = "{0}_{1}".format(trace_name, j)
j += 1
trace_name = new_trace_name
taken_names.append(trace_name)
# finish up the dot-concatenation
for k, v in trace.items():
key = "{0}.{1}".format(trace_name, k)
d[key] = v
return d
else:
return super(data_class, self).get_data(flatten=flatten)
data_class.get_data = get_data
_add_classes_to_globals(globals())
_patch_figure_class(globals()['Figure'])
_patch_data_class(globals()['Data'])
# We don't want to expose this module to users, just the classes.
# See http://blog.labix.org/2008/06/27/watch-out-for-listdictkeys-in-python-3
__all__ = list(graph_reference.CLASSES.keys())
| 37.093532 | 79 | 0.553985 | 4,312 | 37,279 | 4.580241 | 0.126623 | 0.013468 | 0.017215 | 0.011848 | 0.403949 | 0.346886 | 0.30719 | 0.280759 | 0.236911 | 0.221873 | 0 | 0.005655 | 0.345396 | 37,279 | 1,004 | 80 | 37.130478 | 0.803672 | 0.235038 | 0 | 0.4 | 0 | 0 | 0.067363 | 0.005077 | 0 | 0 | 0 | 0.001992 | 0 | 1 | 0.091803 | false | 0.006557 | 0.014754 | 0.006557 | 0.208197 | 0.009836 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad181c8bbf30d95cfe3be78bcc5a6c74387f87b5 | 8,332 | py | Python | publication/plot_synthetic_random.py | sean-mackenzie/gdpyt-analysis | b03931ee431862573aaf449a6b5db2aea00bf998 | [
"MIT"
] | null | null | null | publication/plot_synthetic_random.py | sean-mackenzie/gdpyt-analysis | b03931ee431862573aaf449a6b5db2aea00bf998 | [
"MIT"
] | null | null | null | publication/plot_synthetic_random.py | sean-mackenzie/gdpyt-analysis | b03931ee431862573aaf449a6b5db2aea00bf998 | [
"MIT"
] | null | null | null | # test bin, analyze, and plot functions
from os.path import join
import numpy as np
import pandas as pd
from utils import io, bin, plotting, modify
import matplotlib.pyplot as plt
# ------------------------------------------------
# formatting
plt.style.use(['science', 'ieee', 'std-colors'])
scale_fig_dim = [1, 1]
scale_fig_dim_outside_x_legend = [1.5, 1]
legend_loc = 'best'
# ------------------------------------------------
# read files
datasets = ['synthetic random density uniform z nl1']
save_ids = ['random density uniform z']
subsets = ['cm-combined']
test_id = 0
dataset = datasets[test_id]
save_id = save_ids[test_id]
subset = subsets[test_id]
# read .xlsx result files to dictionary
base_path = '/Users/mackenzie/Desktop/gdpyt-characterization/publication data/iteration 5/{}'.format(dataset)
path_name = join(base_path, subset, 'figs')
save_path_name = join(base_path, subset, 'results')
# ------------------------------------------------
# dx = 5: [93.0, 189.0, 284.0, 380.0, 475.0, 571.0, 666.0, 762.0, 858.0, 930] # for binning
# keys (dx=5): [5, 10, 15, 20, 25, 30, 35, 40, 50] # center-to-center overlap spacing
# dx = 7.5: [79.0, 163.5, 254.0, 348.5, 447.0, 555.5, 665.0, 777.5, 900.0]
# keys (dx=7.5): [7.5, 12.5, 17.5, 22.5, 27.5, 32.5, 37.5, 42.5, 47.5]
# split dataframe by parameters/values
dx_keys = [1, 2.5, 5, 7.5, 10]
dxx_keys = [1, 2.5, 5, 7.5, 10]
cm_keys = [0.5, 0.9]
round_x_to_decimal = 0
# filters for binning
h = 80
z_range = [-40.001, 40.001]
min_cm = 0.5
save_id = save_id + '_cm={}'.format(min_cm)
# read cm=0.5 excel spreadsheet
filepath_dx = join(base_path, 'cm-combined/read/random uniform z cm={}_mean_measurement_results.xlsx'.format(cm_keys[0]))
dfx = io.read_excel(path_name=filepath_dx, filetype='.xlsx')
# read cm=0.9 excel spreadsheet
filepath_dxx = join(base_path, 'cm-combined/read/random uniform z cm={}_mean_measurement_results.xlsx'.format(cm_keys[1]))
dfxx = io.read_excel(path_name=filepath_dxx, filetype='.xlsx')
# get dataframe of only gdpyt
dfx_gdpyt = dfx[dfx['filename'] == '1'].copy()
dfx_gdpyt['dx'] = dx_keys
dfxx_gdpyt = dfxx[dfxx['filename'] == '1'].copy()
dfxx_gdpyt['dx'] = dxx_keys
# get dataframe of only spc
dfx_spc = dfx[dfx['filename'] == '11'].copy()
dfx_spc['dx'] = dx_keys
dfxx_spc = dfxx[dfxx['filename'] == '11'].copy()
dfxx_spc['dx'] = dxx_keys
# merge dataframes
#df_gdpyt = pd.concat([dfx_gdpyt, dfxx_gdpyt])
#df_spc = pd.concat([dfx_spc, dfxx_spc])
# sort values
def reorg_df(dfs):
dfs_new = []
for df in dfs:
df = df.astype(float)
df = df.sort_values(by='dx')
df = df.set_index(keys='dx')
dfs_new.append(df)
return dfs_new
dfx_gdpyt, dfxx_gdpyt, dfx_spc, dfxx_spc = reorg_df([dfx_gdpyt, dfxx_gdpyt, dfx_spc, dfxx_spc])
# merge into dictionary
dfbicts = {1.0: dfx_gdpyt, 2.0: dfxx_gdpyt,
11.0: dfx_spc, 12.0: dfxx_spc}
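# Dictionary keys encode the configuration (matching the labels defined below):
# 1.0/2.0 are GDPyT at cm=0.5/0.9 and 11.0/12.0 are GDPT (SPC) at cm=0.5/0.9.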
# -----------------------------
# mean z-uncertainty - compare GDPyT and SPC
# formatting figures
save_plots = True
show_plots = True
# compare static and spc
labels_compare = ['GDPyT', 'GDPT']
colors_compare = None
# compare all
ylim_compare_all = [-0.0005, 0.305]
ylim_percent_true_measured_compare_all = [0, 105]
ylim_percent_measured_compare_all = [0, 105]
ylim_cm_compare_all = [min_cm, 1.01]
# compare filtered
ylim_compare = [-0.0005, 0.305]
ylim_percent_true_measured_compare = [0, 105]
ylim_percent_measured_compare = [0, 105]
ylim_cm_compare = [min_cm, 1.01]
# local
filter_keys = 0
labels_local = [lbl for lbl in dx_keys if lbl > filter_keys]
labels_local.sort()
colors_local = None
linestyles = ['-', '--']
ylim_gdpyt = [-0.0005, 0.1]
ylim_spc = [-0.005, 0.5]
ylim_percent_true_measured_gdpyt = [0, 105]
ylim_percent_true_measured_spc = [0, 105]
ylim_num = 5000
# global
labels_global = [r'GDPyT$(c_{m}=0.5)$', r'GDPyT$(c_{m}=0.9)$', r'GDPT$(c_{m}=0.5)$', r'GDPT$(c_{m}=0.9)$']
colors_global = None
xlabel_for_keys = r'$\delta x (pix)$'
ylabel_for_sigma = r'$\sigma_{z}\left(z\right) / h$'
ylim_global = [-0.0005, 0.3105]
ylim_percent_true_measured_global = [0, 101]
ylim_percent_measured_global = [0, 101]
ylim_cm_global = [min_cm, 1.01]
# colors
"""
SciencePlots:
Blue: #0C5DA5
Green: #00B945
Red: #FF9500
Orange: #FF2C00
Shades of Blue:
Azure: ‘none’ or #069AF3
Blue: #0000FF or #0343DF
Light Blue: #ADD8E6 or #7BC8F6
Shades of Green:
Chartreuse: #7FFF00 or #C1F80A
Dark Green: #006400 or #054907
Green: #008000 or #15B01A
Light Green: #90EE90 or #76FF7B
Lime: #00FF00 or #AAFF32
Yellow Green: #9ACD32 or #BBF90F
"""
colors = ['#0C5DA5', '#FF9500', '#00B945', '#FF2C00']
# plot global uncertainty - gdpyt vs. spc
if save_plots:
# plot local - gdpyt
parameter = 'rmse_z'
fig, ax = plotting.plot_dfbicts_local(dfbicts, parameter, h=h, scale=scale_fig_dim, xlabel=xlabel_for_keys,
ylabel=ylabel_for_sigma, colors=colors)
ax.set_ylim(ylim_global)
ax.legend(labels_global, loc=legend_loc)
plt.tight_layout()
plt.savefig(join(path_name, save_id+'_static_v_spc_global_dx_rmse_z.png'))
if show_plots:
plt.show()
"""
parameter = ['rmse_z', 'true_percent_meas']
fig, ax, ax2 = plotting.plot_dfbicts_local(dfbicts, parameter, h=1, scale=scale_fig_dim_outside_x_legend,
xlabel=xlabel_for_keys, ylabel=ylabel_for_sigma)
ax.set_ylim(ylim_global)
ax2.set_ylabel(r'$\phi\left(z\right)$')
ax2.set_ylim(ylim_percent_true_measured_global)
ax.legend(labels_global, loc=legend_loc)
plt.tight_layout()
plt.savefig(join(path_name, save_id+'_static_v_spc_global_dx_rmse_z_and_true_percent_meas.png'))
if show_plots:
plt.show()
"""
parameter = ['rmse_z', 'percent_meas']
fig, ax, ax2 = plotting.plot_dfbicts_local(dfbicts, parameter, h=h, scale=scale_fig_dim,
xlabel=xlabel_for_keys, ylabel=ylabel_for_sigma, colors=colors)
ylim_global_percent_meas = [-0.0005, 0.3305]
ax.set_ylim(ylim_global_percent_meas)
ax2.set_ylabel(r'$\phi_{ID}\left(z\right)$')
ax2.set_ylim(ylim_percent_measured_global)
# ax.legend(labels_global, loc=legend_loc)
plt.tight_layout()
plt.savefig(join(path_name, save_id + '_static_v_spc_global_dx_rmse_z_and_percent_meas.png'))
if show_plots:
plt.show()
scale_fig_dim_outside_x_legend = [1.6, 1]
parameter = ['rmse_z', 'percent_meas']
fig, ax, ax2 = plotting.plot_dfbicts_local(dfbicts, parameter, h=h, scale=scale_fig_dim_outside_x_legend,
xlabel=xlabel_for_keys, ylabel=ylabel_for_sigma, colors=colors)
ax.set_ylim(ylim_global_percent_meas)
ax2.set_ylabel(r'$\phi_{ID}\left(z\right)$')
ax2.set_ylim(ylim_percent_measured_global)
#ax.legend(labels_global, loc=legend_loc)
ax.legend(labels_global, loc='upper left', bbox_to_anchor=(1.3, 1), fancybox=True, shadow=False, ncol=1)
plt.tight_layout()
plt.savefig(join(path_name, save_id+'_static_v_spc_global_dx_rmse_z_and_percent_meas_legend.png'))
if show_plots:
plt.show()
"""
parameter = ['rmse_z', 'num_meas', 'num_bind', 'true_num_particles']
fig, ax, ax2 = plotting.plot_dfbicts_local(dfbicts, parameter, h=1, scale=scale_fig_dim,
xlabel=xlabel_for_keys, ylabel=ylabel_for_sigma)
ax.set_ylim([x * 2 for x in ylim_global])
ax2.set_ylabel(r'$\#$')
ax2.set_ylim([0, ylim_num])
ax.legend(labels_global, loc=legend_loc)
plt.tight_layout()
plt.savefig(join(path_name, save_id+'_static_v_spc_global_dx_rmse_z_and_num_particles.png'))
if show_plots:
plt.show()
parameter = ['rmse_z', 'cm']
fig, ax, ax2 = plotting.plot_dfbicts_local(dfbicts, parameter, h=1, scale=scale_fig_dim,
xlabel=xlabel_for_keys, ylabel=ylabel_for_sigma)
ax.set_ylim(ylim_global)
ax2.set_ylabel(r'$c_{m}$')
ax2.set_ylim([min_cm, 1.01])
ax.legend(labels_global, loc=legend_loc)
plt.tight_layout()
plt.savefig(join(path_name, save_id+'_static_v_spc_global_dx_rmse_z_and_cm.png'))
if show_plots:
plt.show()
"""
# ---------------------------------------------------------------
j=1 | 34.147541 | 122 | 0.659265 | 1,298 | 8,332 | 3.9453 | 0.204931 | 0.011716 | 0.019332 | 0.027338 | 0.517087 | 0.4876 | 0.433704 | 0.42355 | 0.394845 | 0.355009 | 0 | 0.060906 | 0.176308 | 8,332 | 244 | 123 | 34.147541 | 0.685269 | 0.148224 | 0 | 0.165217 | 0 | 0 | 0.160624 | 0.079603 | 0 | 0 | 0 | 0 | 0 | 1 | 0.008696 | false | 0 | 0.043478 | 0 | 0.06087 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad18a85359b4e2525a1a32be74be23e835e8344e | 258 | py | Python | src/0458_poor_pigs.py | soamsy/leetcode | 091f3b33e44613fac130ff1018c8b63493798f09 | [
"MIT"
] | null | null | null | src/0458_poor_pigs.py | soamsy/leetcode | 091f3b33e44613fac130ff1018c8b63493798f09 | [
"MIT"
] | null | null | null | src/0458_poor_pigs.py | soamsy/leetcode | 091f3b33e44613fac130ff1018c8b63493798f09 | [
"MIT"
] | null | null | null | import math
def poorPigs(buckets: int, minutesToDie: int, minutesToTest: int) -> int:
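    # Each pig ends the test in one of (rounds + 1) states: it dies after one
    # of the `rounds` feeding rounds, or it survives. So `pigs` pigs can
    # distinguish (rounds + 1) ** pigs buckets; the loop below finds the
    # smallest such `pigs`.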
rounds = minutesToTest // minutesToDie
if rounds < 1:
return 0
pigs = 0
while (rounds + 1) ** pigs < buckets:
pigs += 1
return pigs | 25.8 | 73 | 0.600775 | 30 | 258 | 5.166667 | 0.5 | 0.090323 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027778 | 0.302326 | 258 | 10 | 74 | 25.8 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad1a899f32241a3503533d0340234dd91eef7679 | 2,245 | py | Python | bamboos/utils/model/xgboost_model.py | AdityaSidharta/bamboos | 6eea98f68eea671aaf62c4cd9af4b8cae11a2832 | [
"MIT"
] | null | null | null | bamboos/utils/model/xgboost_model.py | AdityaSidharta/bamboos | 6eea98f68eea671aaf62c4cd9af4b8cae11a2832 | [
"MIT"
] | 1 | 2021-06-01T23:33:17.000Z | 2021-06-01T23:33:17.000Z | bamboos/utils/model/xgboost_model.py | AdityaSidharta/bamboos | 6eea98f68eea671aaf62c4cd9af4b8cae11a2832 | [
"MIT"
] | null | null | null | import numpy as np
import xgboost as xgb
from bamboos.utils.model.base_model import Model
class XGBoostModel(Model):
def __init__(
self, name: str, pred_type: str, threshold: float = 0.5, **kwargs
) -> None:
super().__init__(name, None, pred_type, threshold)
self.kwargs = kwargs
if self.pred_type == "multiclass":
assert "num_class" in self.kwargs.keys()
self.num_class = self.kwargs["num_class"]
def fit(self, X_train, y_train):
dtrain = xgb.DMatrix(X_train, label=y_train)
if self.pred_type == "binary":
params = {"objective": "binary:logistic", "silent": 1}
elif self.pred_type == "multiclass":
params = {"objective": "multi:softprob", "silent": 1}
else:
assert self.pred_type == "regression"
params = {"objective": "reg:linear", "silent": 1}
for key, value in self.kwargs.items():
params[key] = value
if "num_boost_round" in self.kwargs.keys():
self.model = xgb.train(
params, dtrain, self.kwargs.get("num_boost_round"), verbose_eval=False
)
else:
self.model = xgb.train(params, dtrain, verbose_eval=False)
def predict(self, X_test):
dtest = xgb.DMatrix(X_test)
if self.pred_type == "binary":
prob = self.model.predict(dtest)
pred = np.where(prob >= self.threshold, 1, 0)
elif self.pred_type == "multiclass":
if np.all(np.isnan(self.model.predict(dtest))):
# Return array of NaN if model predicts all NaN
pred = self.model.predict(dtest)[:, 0]
else:
pred = np.argmax(self.model.predict(dtest), axis=1)
else:
assert self.pred_type == "regression"
pred = self.model.predict(dtest)
return pred
def predict_proba(self, X_test):
dtest = xgb.DMatrix(X_test)
if self.pred_type in ["binary", "multiclass"]:
result = self.model.predict(dtest)
else:
raise ValueError(
"pred_type should be on of the following: ['binary', 'multiclass']"
)
return result
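# Hypothetical usage sketch (the data variables below are placeholders, not
# part of the original module):
# model = XGBoostModel(name='demo', pred_type='binary', threshold=0.5,
#                      num_boost_round=100)
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)
# y_prob = model.predict_proba(X_test)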
| 35.078125 | 86 | 0.570156 | 272 | 2,245 | 4.566176 | 0.316176 | 0.070853 | 0.077295 | 0.101449 | 0.327697 | 0.169082 | 0.122383 | 0.069243 | 0.069243 | 0.069243 | 0 | 0.005803 | 0.309131 | 2,245 | 63 | 87 | 35.634921 | 0.794971 | 0.020045 | 0 | 0.25 | 0 | 0 | 0.125114 | 0 | 0 | 0 | 0 | 0 | 0.057692 | 1 | 0.076923 | false | 0 | 0.057692 | 0 | 0.192308 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad1d89d23760636c3941c02fc6ef2b6db886f591 | 7,405 | py | Python | vectorize.py | robinsloan/pixray | 59cd26c5eac08d2b74d95840fbf0582ccad47cce | [
"MIT"
] | 343 | 2021-09-09T03:41:35.000Z | 2022-03-29T18:02:37.000Z | vectorize.py | ohwe/pixray | 93a4e441d03f1ebc53897ea67973dd8705cc18e6 | [
"MIT"
] | 42 | 2021-09-12T09:45:10.000Z | 2022-02-22T20:57:19.000Z | vectorize.py | ohwe/pixray | 93a4e441d03f1ebc53897ea67973dd8705cc18e6 | [
"MIT"
] | 51 | 2021-09-12T15:04:37.000Z | 2022-02-22T20:01:34.000Z | import argparse
import sys
import json
import numpy as np
from sklearn import metrics
from sklearn import svm
import os
from tqdm import tqdm
from util import real_glob
import torch
from CLIP import clip
from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
from PIL import Image
perceptors = {}
def init(args):
global perceptors, resolutions
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
jit = True if float(torch.__version__[:3]) < 1.8 else False
if args.models is not None:
models = args.models.split(",")
args.models = [model.strip() for model in models]
else:
args.models = clip.available_models()
for clip_model in args.models:
model, preprocess = clip.load(clip_model, jit=jit)
perceptor = model.eval().requires_grad_(False).to(device)
perceptors[clip_model] = perceptor
def fetch_images(preprocess, image_files):
images = []
for filename in image_files:
image = preprocess(Image.open(filename).convert("RGB"))
images.append(image)
return images
def do_image_features(model, images, image_mean, image_std):
image_input = torch.tensor(np.stack(images)).cuda()
image_input -= image_mean[:, None, None]
image_input /= image_std[:, None, None]
with torch.no_grad():
image_features = model.encode_image(image_input).float()
return image_features
def spew_vectors(args, inputs, outfile):
global perceptors, resolutions
input_files = real_glob(inputs)
save_table = {}
for clip_model in args.models:
perceptor = perceptors[clip_model]
input_resolution = perceptor.visual.input_resolution
print(f"Running {clip_model} at {input_resolution}")
preprocess = Compose([
Resize(input_resolution, interpolation=Image.BICUBIC),
CenterCrop(input_resolution),
ToTensor()
])
image_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073]).cuda()
image_std = torch.tensor([0.26862954, 0.26130258, 0.27577711]).cuda()
images = fetch_images(preprocess, input_files);
features = do_image_features(perceptor, images, image_mean, image_std)
print(f"saving {features.shape} to {clip_model}")
save_table[clip_model] = features.tolist()
with open(outfile, 'w') as fp:
json.dump(save_table, fp)
def run_avg_diff(args):
f1, f2 = args.avg_diff.split(",")
with open(f1) as f_in:
table1 = json.load(f_in)
with open(f2) as f_in:
table2 = json.load(f_in)
save_table = {}
for k in table1:
encoded1 = np.array(table1[k])
encoded2 = np.array(table2[k])
print("Taking the difference between {} and {} vectors".format(encoded1.shape, encoded2.shape))
m1 = np.mean(encoded1,axis=0)
m2 = np.mean(encoded2,axis=0)
atvec = m2 - m1
z_dim, = atvec.shape
atvecs = atvec.reshape(1,z_dim)
print("Computed diff shape: {}".format(atvecs.shape))
save_table[k] = atvecs.tolist()
with open(args.outfile, 'w') as fp:
json.dump(save_table, fp)
def run_svm_diff(args):
f1, f2 = args.svm_diff.split(",")
with open(f1) as f_in:
table1 = json.load(f_in)
with open(f2) as f_in:
table2 = json.load(f_in)
save_table = {}
for k in table1:
encoded1 = np.array(table1[k])
encoded2 = np.array(table2[k])
print("Taking the svm difference between {} and {} vectors".format(encoded1.shape, encoded2.shape))
h = .02 # step size in the mesh
C = 1.0 # SVM regularization parameter
X_arr = []
y_arr = []
for l in range(len(encoded1)):
X_arr.append(encoded1[l])
y_arr.append(False)
for l in range(len(encoded2)):
X_arr.append(encoded2[l])
y_arr.append(True)
X = np.array(X_arr)
y = np.array(y_arr)
# svc = svm.LinearSVC(C=C, class_weight="balanced").fit(X, y)
svc = svm.LinearSVC(C=C,max_iter=20000).fit(X, y)
# get the separating hyperplane
w = svc.coef_[0]
#FIXME: this is a scaling hack.
m1 = np.mean(encoded1,axis=0)
m2 = np.mean(encoded2,axis=0)
mean_vector = m1 - m2
mean_length = np.linalg.norm(mean_vector)
svn_length = np.linalg.norm(w)
atvec = (mean_length / svn_length) * w
z_dim, = atvec.shape
atvecs = atvec.reshape(1,z_dim)
print("Computed svm diff shape: {}".format(atvecs.shape))
save_table[k] = atvecs.tolist()
with open(args.outfile, 'w') as fp:
json.dump(save_table, fp)
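# Illustrative invocations (argument values are assumptions derived from the
# argparse definitions in main() below, not from the original repository docs):
#   python vectorize.py --models "ViT-B/32" --inputs "imgs/*.png" --outfile vecs.json
#   python vectorize.py --avg-diff setA.json,setB.json --outfile atvec.json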
def main():
parser = argparse.ArgumentParser(description="Do vectory things")
parser.add_argument("--models", type=str, help="CLIP model", default=None, dest='models')
parser.add_argument("--inputs", type=str, help="Images to process", default=None, dest='inputs')
parser.add_argument("--avg-diff", dest='avg_diff', type=str, default=None,
help="Two vector files to average and then diff")
parser.add_argument("--svm-diff", dest='svm_diff', type=str, default=None,
help="Two vector files to average and then svm diff")
parser.add_argument("--z-dim", dest='z_dim', type=int, default=100,
help="z dimension of vectors")
parser.add_argument("--encoded-vectors", type=str, default=None,
help="Comma separated list of json arrays")
parser.add_argument("--encoded-true", type=str, default=None,
help="Comma separated list of json arrays (true)")
parser.add_argument("--encoded-false", type=str, default=None,
help="Comma separated list of json arrays (false)")
parser.add_argument('--thresh', dest='thresh', default=False, action='store_true',
help="Compute thresholds for attribute vectors classifiers")
parser.add_argument('--svm', dest='svm', default=False, action='store_true',
help="Use SVM for computing attribute vectors")
parser.add_argument("--limit", dest='limit', type=int, default=None,
help="Limit number of inputs when computing atvecs")
parser.add_argument("--attribute-vectors", dest='attribute_vectors', default=None,
help="use json file as source of attribute vectors")
parser.add_argument("--attribute-thresholds", dest='attribute_thresholds', default=None,
help="use these non-zero values for binary classifier thresholds")
parser.add_argument("--attribute-set", dest='attribute_set', default="all",
help="score ROC/accuracy against true/false/all")
parser.add_argument('--attribute-indices', dest='attribute_indices', default=None, type=str,
help="indices to select specific attribute vectors")
parser.add_argument('--outfile', dest='outfile', default=None,
help="Output json file for vectors.")
args = parser.parse_args()
init(args)
if args.avg_diff:
run_avg_diff(args)
sys.exit(0)
if args.svm_diff:
run_svm_diff(args)
sys.exit(0)
spew_vectors(args, args.inputs, args.outfile)
if __name__ == '__main__':
main() | 38.567708 | 107 | 0.631465 | 976 | 7,405 | 4.652664 | 0.23668 | 0.031711 | 0.059899 | 0.019819 | 0.32658 | 0.266902 | 0.242678 | 0.242678 | 0.242678 | 0.216692 | 0 | 0.02149 | 0.245915 | 7,405 | 192 | 108 | 38.567708 | 0.791726 | 0.022957 | 0 | 0.236025 | 0 | 0 | 0.16805 | 0.003043 | 0 | 0 | 0 | 0.005208 | 0 | 1 | 0.043478 | false | 0 | 0.080745 | 0 | 0.136646 | 0.037267 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad1fff694d9af2824f23d56c8b78a6c4b8ba5ee3 | 2,321 | py | Python | src/colouring/colouring_bach.py | PeterJackNaylor/CellularHeatmaps | 52829685683b6f3315b62246a77cc2206326e2b3 | [
"Apache-2.0"
] | null | null | null | src/colouring/colouring_bach.py | PeterJackNaylor/CellularHeatmaps | 52829685683b6f3315b62246a77cc2206326e2b3 | [
"Apache-2.0"
] | 2 | 2022-01-13T03:57:02.000Z | 2022-03-12T01:01:45.000Z | src/colouring/colouring_bach.py | PeterJackNaylor/CellularHeatmaps | 52829685683b6f3315b62246a77cc2206326e2b3 | [
"Apache-2.0"
] | 1 | 2020-10-12T07:56:51.000Z | 2020-10-12T07:56:51.000Z |
import numpy as np
from os.path import join
from colouring import check_or_create, post_process_out
from tqdm import trange
from skimage import io
io.use_plugin('tifffile')
from joblib import Parallel, delayed
def main():
from optparse import OptionParser
parser = OptionParser()
parser.add_option("--input", dest="input", type="string",
help="record name")
parser.add_option("--slide", dest="slide", type="string",
help="slide_name")
parser.add_option("-s", "--no_samples",
action="store_false", dest="samples", default=True,
help="If to save samples")
parser.add_option("--n_jobs", dest="n_jobs", type="int", default=8,
help="Number of jobs")
(options, _) = parser.parse_args()
file = options.input
tiles_prob = "./tiles_prob"
tiles_contours = "./tiles_contours"
tiles_bin = "./tiles_bin"
folders = [tiles_bin, tiles_contours, tiles_prob]
out_names = [join(f, f.split('_')[-1] + "_{:03d}.tif") for f in folders]
for f in folders:
check_or_create(f)
files = np.load(file)
raw = files["raw"]
segmented_tiles = files["tiles"]
n = segmented_tiles.shape[0]
s = segmented_tiles.shape[1]
bins = np.zeros((n, s, s), dtype="uint8")
def process_i(i):
prob = segmented_tiles[i].copy()
rgb = raw[i]
list_img = post_process_out(prob, rgb)
if options.samples:
for image, name in zip(list_img, out_names):
io.imsave(name.format(i+1), image, resolution=[1.0, 1.0])
return list_img[0]
labeled_bins = Parallel(n_jobs=options.n_jobs)(delayed(process_i)(i) for i in trange(n))
bins = np.stack(labeled_bins)
# for i in trange(n):
# para = positions[i]
# prob = segmented_tiles[i]
# rgb = raw[i]
# list_img = post_process_out(prob, rgb)
# bins[i] = list_img[0]
# inp = list(para)
# del inp[-2]
# if options.samples:
# for image, name in zip(list_img, out_names):
# io.imsave(name.format(*inp), image, resolution=[1.0, 1.0])
np.savez("segmented_tiles_and_bins.npz", tiles=segmented_tiles,
raw=raw, bins=bins)
if __name__ == '__main__':
main()
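# Illustrative invocation (file and option values are placeholders):
#   python colouring_bach.py --input segmented.npz --slide slide_01 --n_jobs 4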
| 32.236111 | 92 | 0.59888 | 318 | 2,321 | 4.166667 | 0.330189 | 0.073962 | 0.045283 | 0.028679 | 0.230943 | 0.181132 | 0.152453 | 0.152453 | 0.152453 | 0.152453 | 0 | 0.011105 | 0.262818 | 2,321 | 71 | 93 | 32.690141 | 0.763296 | 0.148212 | 0 | 0 | 0 | 0 | 0.125318 | 0.014264 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.145833 | 0 | 0.208333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad2166b40ae54cb4dbde1a761c5dcf982866f927 | 36,458 | py | Python | BlockchainFormation/Node_Handler.py | DLPS-Framework/BlockchainFormation | 6861d7a77e2009ad2d57b6e9195a11ce0fc5d048 | [
"Apache-2.0"
] | 2 | 2022-01-07T17:35:20.000Z | 2022-01-11T16:03:33.000Z | BlockchainFormation/Node_Handler.py | DLPS-Framework/BlockchainFormation | 6861d7a77e2009ad2d57b6e9195a11ce0fc5d048 | [
"Apache-2.0"
] | null | null | null | BlockchainFormation/Node_Handler.py | DLPS-Framework/BlockchainFormation | 6861d7a77e2009ad2d57b6e9195a11ce0fc5d048 | [
"Apache-2.0"
] | 3 | 2021-02-23T05:30:21.000Z | 2021-05-17T14:40:43.000Z | # Copyright 2021 ChainLab
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import getpass
import os
import paramiko
import sys
import boto3
import pytz
from dateutil import parser
from scp import SCPClient
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from BlockchainFormation.cost_calculator import AWSCostCalculator
from BlockchainFormation.blockchain_specifics.client.Client_Network import *
from BlockchainFormation.blockchain_specifics.couchdb.Couchdb_Network import *
from BlockchainFormation.blockchain_specifics.fabric.Fabric_Network import *
from BlockchainFormation.blockchain_specifics.empty.Empty_Network import *
from BlockchainFormation.blockchain_specifics.geth.Geth_Network import *
from BlockchainFormation.blockchain_specifics.indy.Indy_Network import *
from BlockchainFormation.blockchain_specifics.indy_client.Indy_client_Network import *
from BlockchainFormation.blockchain_specifics.leveldb.Leveldb_Network import *
from BlockchainFormation.blockchain_specifics.parity.Parity_Network import *
from BlockchainFormation.blockchain_specifics.quorum.Quorum_Network import *
from BlockchainFormation.blockchain_specifics.sawtooth.Sawtooth_Network import *
from BlockchainFormation.utils import utils
utc = pytz.utc
class Node_Handler:
"""
    Class for handling startup and shutdown of AWS or user-provided VM instances
"""
def __init__(self, config):
self.logger = logging.getLogger(__name__)
if not self.logger.handlers:
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(threadName)s - %(name)s - %(levelname)s - %(message)s')
# fh.setFormatter(formatter)
ch.setFormatter(formatter)
self.logger.addHandler(ch)
new_regions = {}
for region in config["aws_region"]:
if config["aws_region"][region] > 0:
new_regions[region] = config["aws_region"][region]
config["aws_region"] = new_regions
self.logger.info(config)
self.config = config
self.user_data = self.create_user_data()
try:
if self.config['instance_provision'] == 'aws':
self.logger.info("Automatic startup in AWS selected")
elif self.config['instance_provision'] == 'own':
self.logger.info("Automatic startup on user-proxided instances selected")
else:
self.logger.info("Invalid option")
raise Exception("No valid option for cloud specified")
except Exception as e:
self.logger.info("AWS config by default")
self.config['instance_provision'] = 'aws'
if self.config['instance_provision'] == 'aws':
# no proxy if no proxy user
if self.config['proxy'] is not None and "HTTP_PROXY" not in os.environ:
if self.config['proxy']['proxy_user'] is not None:
password = getpass.getpass(prompt=f"Enter proxy password for {self.config['proxy']['proxy_user']}:")
os.environ["HTTPS_PROXY"] = f"http://{self.config['proxy']['proxy_user']}:{password}@{self.config['proxy']['http_proxy']}"
os.environ["HTTP_PROXY"] = f"http://{self.config['proxy']['proxy_user']}:{password}@{self.config['proxy']['https_proxy']}"
else:
os.environ["HTTPS_PROXY"] = f"http://{self.config['proxy']['https_proxy']}"
os.environ["HTTP_PROXY"] = f"http://{self.config['proxy']['http_proxy']}"
os.environ["NO_PROXY"] = self.config['proxy']['no_proxy']
else:
self.logger.info("No proxy set since proxy user is None or proxy already set")
# This is needed that boto3 knows where to find the aws config and credentials
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = self.config['aws_credentials']
os.environ["AWS_CONFIG_FILE"] = self.config['aws_config']
self.session = boto3.Session(profile_name=self.config['profile'])
self.ec2_instances = None
self.aws_calculator = AWSCostCalculator(self.session)
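    # Hypothetical minimal config sketch (keys taken from the reads above; the
    # real schema used by the surrounding project may require more fields):
    # config = {
    #     "instance_provision": "aws",
    #     "aws_region": {"eu-central-1": 1},
    #     "blockchain_type": "geth",
    #     "proxy": None,
    #     "public_ip": True,
    #     "aws_credentials": "~/.aws/credentials",
    #     "aws_config": "~/.aws/config",
    #     "profile": "default",
    #     ...
    # }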
def create_user_data(self):
"""creates the user data script depending on experiment type. The user data is built out of base script and
specific script depending on experiment type"""
dir_name = os.path.dirname(os.path.realpath(__file__))
user_data_base = ""
try:
if self.config['instance_provision'] == "aws":
with open(f"{dir_name}/UserDataScripts/bootstrap_base_aws.sh", 'r') as content_file:
user_data_base = content_file.read()
elif self.config['instance_provision'] == "own":
with open(f"{dir_name}/UserDataScripts/bootstrap_base_own.sh", 'r') as content_file:
user_data_base = content_file.read()
except Exception as e:
with open(f"{dir_name}/UserDataScripts/bootstrap_base_aws.sh", 'r') as content_file:
user_data_base = content_file.read()
self.config['instance_provision'] = 'aws'
# If VM is hosted in public the VMs do not need the internal proxy settings
if (self.config['instance_provision'] == 'aws') and (not self.config['public_ip']):
# Is this the best solution to set proxy dynamically?
proxy_user_data = f" HTTP_PROXY={self.config['aws_proxy_settings']['aws_http_proxy']}\n" \
f" HTTPS_PROXY={self.config['aws_proxy_settings']['aws_https_proxy']}\n" \
f" NO_PROXY={self.config['aws_proxy_settings']['aws_no_proxy']}\n" \
f" export http_proxy=$HTTP_PROXY\n" \
f" export https_proxy=$HTTPS_PROXY\n" \
f" export no_proxy=$NO_PROXY\n" \
f" bash -c \"sudo echo http_proxy=$HTTP_PROXY >> /etc/environment\"\n" \
f" bash -c \"sudo echo https_proxy=$HTTPS_PROXY >> /etc/environment\"\n" \
f" bash -c \"sudo echo no_proxy=$NO_PROXY >> /etc/environment\"\n" \
f" sudo touch /etc/profile.d/environment_mods.sh\n" \
f" bash -c \"sudo echo http_proxy=$HTTP_PROXY >> /etc/profile.d/environment_mods.sh\"\n" \
f" bash -c \"sudo echo https_proxy=$HTTPS_PROXY >> /etc/profile.d/environment_mods.sh\"\n" \
f" bash -c \"sudo echo no_proxy=$NO_PROXY >> /etc/profile.d/environment_mods.sh\"\n"
user_data_base = user_data_base.replace(" # PROXY_PLACEHOLDER, DO NOT DELETE!", proxy_user_data)
# If blockchain type is base, no specific startup script is needed
if self.config['blockchain_type'] == 'base':
user_data_specific = "\n # ======= Create success indicator at end of this script ==========\n sudo touch /var/log/user_data_success.log"
eof = "\nEOF"
user_data_combined = user_data_base + user_data_specific + eof
# if the blockchain type is fabric, we can modify the version of the docker images
elif self.config['blockchain_type'] == 'fabric':
os.system(f"cp {dir_name}/blockchain_specifics/fabric/bootstrap_fabric.sh {dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh")
os.system(f"sed -i -e 's/substitute_fabric_version/{self.config['fabric_settings']['fabric_version']}/g' {dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh")
os.system(f"sed -i -e 's/substitute_fabric_ca_version/{self.config['fabric_settings']['fabric_ca_version']}/g' {dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh")
os.system(f"sed -i -e 's/substitute_fabric_thirdparty_version/{self.config['fabric_settings']['thirdparty_version']}/g' {dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh")
with open(f"{dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh", 'r') as content_file:
user_data_specific = content_file.read()
user_data_combined = user_data_base + user_data_specific
os.system(f"rm {dir_name}/blockchain_specifics/fabric/bootstrap_fabric_temp.sh")
elif self.config['blockchain_type'] == 'eos':
# if we have non-standard settings, we need to compile binaries from scratch
if Eos_Network.check_config(self.config, self.logger):
replace_command = "sudo apt-get install -y make " \
"&& mkdir -p /data/eosio && cd /data/eosio " \
"&& git clone --recursive https://github.com/EOSIO/eos && cd eos " \
"&& git pull --recurse-submodules && git submodule update --init --recursive " \
"&& cd /data/eosio/eos && yes | ./scripts/eosio_build.sh " \
"&& cd /data/eosio/eos/build && sudo make install && sudo mv bin/* /usr/local/bin " \
f"&& sed -i -e 's/block_interval_ms = 500/block_interval_ms = {self.config['eos_settings']['block_interval_ms']}/g' /data/eosio/eos/libraries/chain/include/eosio/chain/config.hpp"
else:
replace_command = "wget https://github.com/EOSIO/eos/releases/download/v2.0.3/eosio_2.0.3-1-ubuntu-18.04_amd64.deb && sudo apt install -y ./eosio_2.0.3-1-ubuntu-18.04_amd64.deb"
os.system(f"cp {dir_name}/blockchain_specifics/eos/bootstrap_eos.sh {dir_name}/blockchain_specifics/eos/bootstrap_eos_temp.sh")
os.system(f"sed -i -e \"s#substitute_replace_command#{replace_command}#g\" {dir_name}/blockchain_specifics/eos/bootstrap_eos_temp.sh")
os.system(f"sed -i -e 's/substitute_replace_commandsubstitute_replace_command/\&\&/g' {dir_name}/blockchain_specifics/eos/bootstrap_eos_temp.sh")
with open(f"{dir_name}/blockchain_specifics/eos/bootstrap_eos_temp.sh", 'r') as content_file:
user_data_specific = content_file.read()
user_data_combined = user_data_base + user_data_specific
os.system(f"rm {dir_name}/blockchain_specifics/eos/bootstrap_eos_temp.sh")
else:
with open(f"{dir_name}/blockchain_specifics/{self.config['blockchain_type']}/bootstrap_{self.config['blockchain_type']}.sh", 'r') as content_file:
user_data_specific = content_file.read()
user_data_combined = user_data_base + user_data_specific
return user_data_combined
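    # --- Illustrative sketch; not part of the original class ---------------------
    # create_user_data() splices the proxy settings into the base bootstrap script
    # by replacing a literal marker line.  Conceptually (the values are made up):
    #
    #   base = "#!/bin/bash\n # PROXY_PLACEHOLDER, DO NOT DELETE!\n..."
    #   proxy_block = " HTTP_PROXY=http://proxy.example:3128\n..."
    #   user_data = base.replace(" # PROXY_PLACEHOLDER, DO NOT DELETE!", proxy_block)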
def run_general_startup(self):
"""
        General startup script needed for all blockchain frameworks. After the general part is finished, the blockchain-specific startup scripts are kicked off.
:return:
"""
try:
if self.config['instance_provision'] == "aws":
self.logger.info("Launching the required instances in aws")
elif self.config['instance_provision'] == "own":
self.logger.info(f"Using existing instances on ips {self.config['ips']}")
self.logger.info(f"Note that the user currently needs to run Ubuntu 18.04, the user name for ssh'ing must be 'ubuntu'"
f", and the instances require a directory /data/ with permissions set for ubuntu and at least 8 GB of storage")
except Exception as e:
self.logger.info("AWS by default")
self.logger.info("Checking consistency of the region")
if type(self.config["aws_region"]) is dict:
count = 0
for key in self.config["aws_region"]:
count = count + self.config["aws_region"][key]
self.logger.info(f"Different regions; in total there are {count} instances")
if count != self.config["vm_count"]:
self.logger.info("Inconsistent")
raise Exception("Error: Inconsistent number of nodes in the regions")
else:
self.logger.info("All right")
else:
region = self.config["aws_region"]
self.config["aws_region"] = {}
self.config["aws_region"][region] = self.config["vm_count"]
if type(self.config["subnet_id"]) is dict:
pass
else:
subnet_id = self.config["subnet_id"]
self.config["subnet_id"] = {}
self.config["subnet_id"][region] = subnet_id
if type(self.config["security_group_id"]) is dict:
pass
else:
security_group_id = self.config["security_group_id"]
self.config["security_group_id"] = {}
self.config["security_group_id"][region] = security_group_id
self.logger.info(f"New region: {self.config['aws_region']}")
if self.config['blockchain_type'] == "fabric":
Fabric_Network.check_config(self.config, self.logger)
elif self.config['blockchain_type'] == 'corda':
Corda_Network.check_config(self.config, self.logger)
elif self.config['blockchain_type'] == "eos":
# check_config is currently executed below
# eos_check_config(self.config, self.logger)
pass
elif self.config['blockchain_type'] == "sawtooth":
Sawtooth_Network.check_config(self.config, self.logger)
elif self.config['blockchain_type'] == "vendia":
Vendia_Network.check_config(self.config, self.logger)
self.get_image_ids()
if self.config['instance_provision'] == "aws" and self.config["vm_count"] > 0:
self.start_instances()
self.logger.info(f"Initiated the start of {self.config['vm_count']} {self.config['instance_type']} machines.")
ips = [0] * self.config['vm_count']
public_ips = [0] * self.config['vm_count']
vpc_ids = [0] * self.config['vm_count']
self.logger.info("Waiting until all VMs are up...")
self.logger.info(f"{self.ec2_instances}")
for index1, region in enumerate(self.config["aws_region"]):
for index2, i in enumerate(self.ec2_instances[region]):
pos = index1 + index2 * len(self.config["aws_region"].keys())
self.logger.info(pos)
i.wait_until_running()
i.load()
ips[pos] = i.private_ip_address
vpc_ids[pos] = i.vpc_id
if self.config['public_ip']:
public_ips[pos] = i.public_ip_address
self.logger.info(f"IPs: {ips}")
# add no proxy for all VM IPs
if self.config['proxy'] is not None:
# Careful that you do NOT delete old NO_PROXY settings, hence the os.environ["NO_PROXY"] + new
os.environ["NO_PROXY"] = os.environ["NO_PROXY"] + f",{','.join(str(ip) for ip in ips)}"
# add instance IPs and IDs to config
self.config['ips'] = ips
self.config['vpc_ids'] = vpc_ids
if len(self.config['aws_region'].keys()) == 1:
self.config['priv_ips'] = ips
else:
self.config['priv_ips'] = public_ips
if self.config['public_ip']:
self.config['ips'] = public_ips
self.config['pub_ips'] = public_ips
else:
self.config['pub_ips'] = ips
self.config["instance_ids"] = {}
for region in self.config["aws_region"]:
self.config['instance_ids'][region] = [instance.id for instance in self.ec2_instances[region]]
self.logger.info(f"You can now access machines via: ssh -i \"path to {self.config['key_name']} key\" ubuntu@{self.config['ips']} (if user is ubuntu) ")
self.logger.info(f"e.g. ssh -i {self.config['priv_key_path']} ubuntu@{self.config['ips'][0]}")
# Give launched instances tag with time/type of experiment/number of node
ts = time.time()
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H-%M-%S')
for region in self.config["aws_region"]:
ec2 = self.session.resource('ec2', region_name=region)
for index, i in enumerate(self.ec2_instances[region]):
exp_tag = f"exp_{st}_{self.config['blockchain_type']}_Node{index}"
ec2.create_tags(Resources=[
i.id,
],
Tags=[
{
'Key': 'exp_tag',
'Value': exp_tag
},
])
self.launch_times = {}
for region in self.config["aws_region"]:
self.launch_times[region] = []
for i in self.ec2_instances[region]:
# self.logger.info("Launch Time: " + str(i.launch_time))
# get launch time
self.launch_times[region].append(i.launch_time.replace(tzinfo=None))
# create experiment directory structure
self.config['launch_times'] = self.launch_times
elif (self.config['instance_provision'] == "own" and self.config['vm_count'] > 0):
self.config['vm_count'] = len(self.config['pub_ips'])
if self.config['vm_count'] == len(self.config['pub_ips']) and self.config['vm_count'] == len(self.config['priv_ips']) and self.config['vm_count'] == len(self.config['ips']):
# writing the user data to a file
with open(f"{self.config['exp_dir']}/bootstrapping.sh", "w") as file:
file.write(self.user_data)
file.close()
self.create_ssh_scp_clients()
for index in range(0, self.config['vm_count']):
# deleting previous indicators of success
stdin, stdout, stderr = self.ssh_clients[index].exec_command("sudo rm -rf /var/log/user_data.log /var/log/user_data_success.log")
wait_and_log(stdout, stderr)
self.scp_clients[index].put(self.config['exp_dir'] + "/bootstrapping.sh", "/home/ubuntu")
stdin, stdout, stderr = self.ssh_clients[index].exec_command("sudo chmod 775 /home/ubuntu/bootstrapping.sh")
wait_and_log(stdout, stderr)
channel = self.ssh_clients[index].get_transport().open_session()
channel.exec_command("sudo /home/ubuntu/bootstrapping.sh")
else:
raise Exception("Inconsistent lengths of the ip fields compared to vm_count")
elif self.config["vm_count"] == 0:
self.config["ips"] = []
self.config["pub_ips"] = []
self.config["priv_ips"] = []
else:
self.logger.info("Neither AWS nor own IPs nor 0 nodes deployed")
raise Exception("Invalid configuration")
ts = time.time()
st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H-%M-%S')
self.config['exp_dir'] = f"{self.config['exp_dir']}/experiments/exp_{st}_{self.config['blockchain_type']}"
try:
os.makedirs(f"{self.config['exp_dir']}/user_data_logs")
os.makedirs(f"/{self.config['exp_dir']}/setup")
self.logger.info(f"Created {str(self.config['exp_dir'])} directory")
except OSError:
self.logger.error("Creation of the directories failed")
with open(f"{self.config['exp_dir']}/config.json", 'w') as outfile:
json.dump(self.config, outfile, default=datetimeconverter, indent=4)
        # use != here: "is not" on a str literal is an identity check and unreliable
        if self.config['instance_provision'] != "none":
# wait couple minutes until VMs are up
# first connect ssh clients, then scp client
if self.config['vm_count'] > 0:
self.logger.info("Waiting 60 seconds before creating ssh connection to VMs")
time.sleep(60)
self.create_ssh_scp_clients()
self.logger.info("Waiting for all VMs to finish the userData setup...")
if self.config['blockchain_type'] == "eos":
max_time = 120
normal_time = 60
elif self.config['blockchain_type'] == "tezos":
max_time = 60
normal_time = 30
else:
max_time = 30
normal_time = 10
# Wait until user Data is finished
if False in wait_till_done(self.config, self.ssh_clients, self.config['ips'], max_time * 60, 60,
"/var/log/user_data_success.log", False, normal_time * 60, self.logger):
self.logger.error('Boot up NOT successful')
if yes_or_no("Do you want to shut down the VMs?"):
self.logger.info(f"Running the shutdown script now")
self.run_general_shutdown()
else:
self.logger.info(f"VMs are not being shutdown")
else:
self.logger.info(f"Boot up of all {self.config['blockchain_type']}-VMs was successful")
self.refresh_ssh_scp_clients()
self._run_specific_startup()
if 'load_balancer_settings' in self.config and 'add_loadbalancer' in self.config['load_balancer_settings']:
# Load Balancer
if self.config['load_balancer_settings']['add_loadbalancer']:
self.logger.info("Load Balancer option was chosen, starting the creation routine now")
lb_handler = LBHandler(self.config, self.session, region)
lb_handler.creation_routine()
self.logger.info(
f"Setup of all VMs was successful, to terminate them run run.py terminate --config {self.config['exp_dir']}/config.json")
self.close_ssh_scp_clients()
# if yes_or_no("Do you want to shut down the whole network?"):
# self.run_general_shutdown()
def _run_specific_startup(self):
"""starts startup for given config (geth, parity, etc....)"""
# running the blockchain specific startup script
self.startup_network()
with open(f"{self.config['exp_dir']}/config.json", 'w') as outfile:
json.dump(self.config, outfile, default=datetimeconverter, indent=4)
# close ssh and scp channels
self.close_ssh_scp_clients()
def run_general_shutdown(self):
"""
        Stops and terminates all VMs and calculates the resulting AWS costs.
:return:
"""
# create ssh and scp channels
self.create_ssh_scp_clients()
self._run_specific_shutdown()
for index, ip in enumerate(self.config['ips']):
# get userData from all instances
try:
self.scp_clients[index].get("/var/log/user_data.log",
f"{self.config['exp_dir']}/user_data_logs/user_data_log_node_{index}.log")
except:
self.logger.info(f"User Data of {ip} cannot be pulled")
if self.config['instance_provision'] == "aws":
self.logger.info("Shutting down the instances in AWS")
if self.config['proxy'] is not None:
os.environ["NO_PROXY"] = f"{self.config['proxy']['no_proxy']},{','.join(str(ip) for ip in self.config['ips'])}"
self.ec2_instances = {}
for region in self.config["aws_region"]:
ec2 = self.session.resource('ec2', region_name=region)
try:
self.ec2_instances[region] = ec2.instances.filter(InstanceIds=self.config['instance_ids'][region])
self.logger.info(f"There are {sum(1 for _ in self.ec2_instances[region])} instances in region {region}")
except Exception as e:
self.logger.exception(e)
self.ec2_instances[region] = []
if any(instance.state['Name'] == "stopped" for instance in self.ec2_instances[region]):
self.logger.info(f"At least on of the instances was already stopped, hence no logs can be pulled from the machines, terminating them in the next step")
if self.config['instance_provision'] == "aws":
for region in self.config["aws_region"]:
for instance in self.ec2_instances[region]:
instance.stop()
if 'load_balancer_settings' in self.config and 'add_loadbalancer' in self.config['load_balancer_settings']:
# Load Balancer
if self.config['load_balancer_settings']['add_loadbalancer']:
self.logger.info("Starting Load Balancer termination now")
lb_handler = LBHandler(self.config, self.session, region)
lb_handler.shutdown_lb()
# calculate aws costs
self.aws_calculator.calculate_uptime_costs(self.config)
for region in self.config["aws_region"]:
for instance in self.ec2_instances[region]:
instance.terminate()
# close ssh and scp channels
self.close_ssh_scp_clients()
self.logger.info("All instances terminated - script is finished")
def _run_specific_shutdown(self):
"""Runs the specific shutdown scripts depending on blockchain_type"""
# running the blockchain specific startup script
self.shutdown_network()
def get_config_path(self):
return f"{self.config['exp_dir']}/config.json"
def get_config(self):
return self.config
def set_target_network_conf(self, dir_name, name):
"""
Needed by ChainLab project to set network_config after parallelism is finished
:param dir_name: Name of target_network_conf
:return:
"""
self.config[f'{name}_settings']['target_network_conf'] = dir_name
print(f"Dir_name in set_target_network_conf: " + dir_name)
with open(f"{self.config['exp_dir']}/config.json", 'w') as outfile:
json.dump(self.config, outfile, default=datetimeconverter, indent=4)
def create_ssh_scp_clients(self):
"""
Creates ssh/scp connection to VMs
        :return: None; populates self.ssh_clients and self.scp_clients
"""
ssh_clients = []
scp_clients = []
ssh_key_priv = paramiko.RSAKey.from_private_key_file(self.config['priv_key_path'])
if self.logger is not None:
# logger.debug(f"Trying to connect the ssh clients")
pass
self.logger.info(self.config['ips'])
for index, ip in enumerate(self.config['ips']):
if self.config['public_ip']:
                # use the public ip if it exists, otherwise the connection won't work
ip = self.config['pub_ips'][index]
ssh_clients.append(paramiko.SSHClient())
ssh_clients[index].set_missing_host_key_policy(paramiko.AutoAddPolicy())
while True:
try:
ssh_clients[index].connect(hostname=ip, username=self.config['user'], pkey=ssh_key_priv, timeout=86400, banner_timeout=100, auth_timeout=30)
except Exception as e:
if self.logger is not None:
self.logger.error(f"{e} on IP {ip}")
else:
print(f"{e} on IP {ip}")
try:
ssh_clients[index].close()
ssh_clients[index] = paramiko.SSHClient()
ssh_clients[index].set_missing_host_key_policy(paramiko.AutoAddPolicy())
except Exception as e:
if self.logger is not None:
self.logger.error(f"{e} on IP {ip}")
else:
print(f"{e} on IP {ip}")
else:
break
            # SCPClient takes a paramiko transport as an argument
scp_clients.append(SCPClient(ssh_clients[index].get_transport(), socket_timeout=86400, progress=Node_Handler.progress, sanitize=lambda x: x))
if self.logger is not None:
# logger.debug(f"All scp/ssh clients got created and connected")
pass
self.ssh_clients = ssh_clients
self.scp_clients = scp_clients
def refresh_ssh_scp_clients(self):
# Recreating the ssh and scp clients
self.close_ssh_scp_clients()
self.create_ssh_scp_clients()
    def close_ssh_scp_clients(self):
        try:
            # map() is lazy in Python 3, so iterate explicitly to actually close the clients
            for client in self.ssh_clients:
                client.close()
            for client in self.scp_clients:
                client.close()
        except Exception:
            self.logger.info("ssh/scp clients already closed")
def shutdown_network(self):
blockchain_type = self.config['blockchain_type']
try:
func = getattr(globals()[f"{blockchain_type.capitalize()}_Network"], "shutdown")
func(self)
except Exception as e:
self.logger.exception(e)
raise Exception("")
def restart_network(self):
blockchain_type = self.config['blockchain_type']
try:
func = getattr(globals()[f"{blockchain_type.capitalize()}_Network"], "restart")
func(self)
except Exception as e:
self.logger.exception(e)
raise Exception("")
def startup_network(self):
blockchain_type = self.config['blockchain_type']
if blockchain_type in ["ethermint", "qldb", "tezos"]:
self.logger.warning("")
self.logger.warning("")
self.logger.warning(f" !!! The automatic setup for {blockchain_type.upper()} is not yet working - still under active development !!!")
self.logger.warning("")
self.logger.warning("")
try:
func = getattr(globals()[f"{blockchain_type.capitalize()}_Network"], "startup")
func(self)
except Exception as e:
self.logger.exception(e)
raise Exception("Network startup failed")
def restart_network(self, number_of_endorsers=None):
blockchain_type = self.config['blockchain_type']
try:
if blockchain_type == "fabric":
func = getattr(globals()[f"{blockchain_type.capitalize()}_Network"], "restart")
func(self, number_of_endorsers)
else:
func = getattr(globals()[f"{blockchain_type.capitalize()}_Network"], "restart")
func(self)
except Exception as e:
self.logger.exception(e)
raise Exception("")
@staticmethod
def progress(filename, size, sent):
sys.stdout.write("%s\'s progress: %.2f%% \r" % (filename, float(sent) / float(size) * 100))
def get_image_ids(self):
def search_newest_image(list_of_images):
"""
Search for the newest ubuntu image from a given list
:param list_of_images: list with all found images
:return:
"""
latest = None
for image in list_of_images:
if not latest:
latest = image
continue
if parser.parse(image['CreationDate']) > parser.parse(latest['CreationDate']):
latest = image
return latest
if (self.config['instance_provision'] == 'aws' and self.config['vm_count'] > 0):
self.config['image']['image_ids'] = {}
for region in self.config["aws_region"]:
# If no specific image ID is given search for the newest ubuntu 18 image
if self.config['image']['image_id'] is None:
ec2 = self.session.client('ec2', region_name=region)
# Find the latest official Ubuntu image from Canonical(owner = 099720109477)
amis = ec2.describe_images(
Filters=[
{
'Name': 'name',
'Values': [f"{self.config['image']['os']}/images/hvm-ssd/{self.config['image']['os']}-*-{self.config['image']['version']}*-amd64-server-????????"]
},
{
'Name': 'architecture',
'Values': ['x86_64']
},
{
'Name': 'state',
'Values': ['available']
},
{
'Name': 'root-device-type',
'Values': ['ebs']
}
],
Owners=[
'099720109477',
]
)
image = search_newest_image(amis['Images'])
self.config['image']['image_ids'][region] = image["ImageId"]
self.logger.info(f"Image IDs: {self.config['image']['image_ids']}")
def start_instances(self):
self.ec2_instances = {}
for region in self.config["aws_region"]:
ec2 = self.session.resource('ec2', region_name=region)
image = ec2.Image(self.config['image']['image_ids'][region])
self.logger.info(f"Selected Image for region {region}: " + image.description)
session = boto3.Session(profile_name=self.config['profile'])
ec2 = session.resource('ec2', region_name=region)
self.ec2_instances[region] = ec2.create_instances(
ImageId=self.config['image']['image_ids'][region],
MinCount=self.config['aws_region'][region],
MaxCount=self.config['aws_region'][region],
InstanceType=self.config['instance_type'],
KeyName=self.config['key_name'],
BlockDeviceMappings=self.config['storage_settings'],
UserData=self.user_data,
TagSpecifications=[
{
'ResourceType': "instance",
'Tags': [
{
'Key': 'Creator',
'Value': self.config['tag_name']
},
{
'Key': 'Name',
'Value': self.config['tag_name']
},
]
},
],
InstanceMarketOptions={
'MarketType': 'spot',
'SpotOptions': {
# 'MaxPrice': 'string',
'SpotInstanceType': 'one-time', # | 'persistent'
'BlockDurationMinutes': 240,
'InstanceInterruptionBehavior': 'terminate'
}
} if 'aws_spot_instances' in self.config and self.config['aws_spot_instances'] else {},
NetworkInterfaces=[
{
'DeviceIndex': 0,
'SubnetId': self.config['subnet_id'][region],
'Groups': self.config['security_group_id'][region],
'AssociatePublicIpAddress': self.config['public_ip']
}]
) | 44.406821 | 213 | 0.575621 | 4,213 | 36,458 | 4.80845 | 0.147638 | 0.095765 | 0.029026 | 0.018758 | 0.510366 | 0.432915 | 0.353687 | 0.302498 | 0.268437 | 0.232007 | 0 | 0.006996 | 0.309918 | 36,458 | 821 | 214 | 44.406821 | 0.798203 | 0.091859 | 0 | 0.292818 | 0 | 0.036832 | 0.275958 | 0.106939 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036832 | false | 0.016575 | 0.038674 | 0.003683 | 0.084715 | 0.005525 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad2260f559e84904f48d4447adf0839baef3441b | 2,039 | py | Python | tests/test_deploy_role_to_group.py | AdamMinton/looker_deployer | 1a57809257183234b900ad49ff30ded5c26c4f2a | [
"Apache-2.0"
] | 4 | 2020-01-09T04:24:19.000Z | 2020-05-18T23:42:35.000Z | tests/test_deploy_role_to_group.py | AdamMinton/looker_deployer | 1a57809257183234b900ad49ff30ded5c26c4f2a | [
"Apache-2.0"
] | 24 | 2022-01-06T00:49:57.000Z | 2022-03-30T00:08:36.000Z | tests/test_deploy_role_to_group.py | AdamMinton/looker_deployer | 1a57809257183234b900ad49ff30ded5c26c4f2a | [
"Apache-2.0"
] | 2 | 2021-11-24T05:48:17.000Z | 2022-01-29T22:23:19.000Z | from looker_deployer.commands import deploy_role_to_group
from looker_sdk import methods, models
class mockSettings:
base_url = "taco"
class mockAuth:
settings = mockSettings()
sdk = methods.LookerSDK(mockAuth(), "bar", "baz", "bosh", "bizz")
source_sdk = methods.LookerSDK(mockAuth(), "bar", "baz", "bosh", "bizz")
target_sdk = methods.LookerSDK(mockAuth(), "bar", "baz", "bosh", "bizz")
def test_get_filtered_roles(mocker):
role_list = [
models.Role(name="Taco"),
models.Role(name="Burrito")
]
mocker.patch.object(sdk, "all_roles")
sdk.all_roles.return_value = role_list
roles = deploy_role_to_group.get_filtered_roles(sdk)
assert roles == role_list
def test_get_filtered_roles_filter(mocker):
role_list = [
models.Role(name="Taco"),
models.Role(name="Burrito")
]
mocker.patch.object(sdk, "all_roles")
sdk.all_roles.return_value = role_list
roles = deploy_role_to_group.get_filtered_roles(sdk, "Burrito")
assert roles == [models.Role(name="Burrito")]
def test_write_role_to_group_new(mocker):
group_1 = models.Group(name="Taco", id=1)
group_2 = models.Group(name="Taco Supreme", id=2)
role = [models.Role(name="Explorer", id=1)]
role_group = [group_1]
groups_list = [group_1, group_2]
mocker.patch.object(source_sdk, "all_roles")
mocker.patch.object(source_sdk, "all_groups")
mocker.patch.object(source_sdk, "role_groups")
mocker.patch.object(target_sdk, "all_roles")
mocker.patch.object(target_sdk, "all_groups")
mocker.patch.object(target_sdk, "set_role_groups")
source_sdk.all_roles.return_value = role
target_sdk.all_roles.return_value = role
source_sdk.all_groups.return_value = groups_list
target_sdk.all_groups.return_value = groups_list
source_sdk.role_groups.return_value = role_group
deploy_role_to_group.write_role_to_group(source_sdk, target_sdk)
target_sdk.set_role_groups.assert_called_once_with(
role_id=role[0].id, body=[group_1.id])
| 30.432836 | 72 | 0.715547 | 291 | 2,039 | 4.687285 | 0.19244 | 0.052786 | 0.099707 | 0.049853 | 0.602639 | 0.536657 | 0.361437 | 0.31305 | 0.222874 | 0.222874 | 0 | 0.005824 | 0.157921 | 2,039 | 66 | 73 | 30.893939 | 0.788585 | 0 | 0 | 0.212766 | 0 | 0 | 0.092202 | 0 | 0 | 0 | 0 | 0 | 0.06383 | 1 | 0.06383 | false | 0 | 0.042553 | 0 | 0.191489 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad2289e51be50354003b28b7ac0c0b7fcaa24fed | 3,495 | py | Python | apps/logic/applications/application_vue_demo/fun.py | FuJianTech/OIP | 0f3f52a835966f7408ca1d0d9d3e46ae5e63f91a | [
"MIT"
] | null | null | null | apps/logic/applications/application_vue_demo/fun.py | FuJianTech/OIP | 0f3f52a835966f7408ca1d0d9d3e46ae5e63f91a | [
"MIT"
] | null | null | null | apps/logic/applications/application_vue_demo/fun.py | FuJianTech/OIP | 0f3f52a835966f7408ca1d0d9d3e46ae5e63f91a | [
"MIT"
] | null | null | null | # !/bin/env python
# -*- coding=utf-8 -*-
# from apps.logic.applications.application_vue_demo.database.model import *
from flask import request, render_template, jsonify
import shutil
from apps import appdir
from apps.DB.sql_orm import *
from werkzeug.utils import secure_filename
class TableAnalysis(object):
def __init__(self):
self.OpOr = OperateOrm()
def readdata(self):
res_list = self.OpOr.select_all_data(Todolist)
return res_list
def submit(self):
if request.method == 'POST':
data = request.data.decode()
if data != '':
d = eval(data)
print(23, d)
date = d['date']
mission = d['mission']
level = d['level']
add_one_str = Todolist(date=date, mission=mission, level=level)
self.OpOr.add_one_data(add_one_str)
def delectdata(self):
data = request.data.decode()
if request.method == 'POST':
if data != '':
data = eval(data)
print(36, type(data))
delete_data_dict = {"table": Todolist, "filters": Todolist.id == data.get('id')}
print(38, delete_data_dict)
self.OpOr.delete_data(delete_data_dict)
print('ok')
def update(self):
data = request.data.decode()
if request.method == 'POST':
if data != '':
update_data_dict = {"table": Todolist, "filters": Todolist.id == eval(data).get("id"),
"update_data": eval(data)}
self.OpOr.update_data(update_data_dict)
def get_upload_path(self):
from apps import appdir
return os.path.join(appdir, "upload")
def upload_pic(self):
print(57)
data_dict = eval(request.data.decode())
print(59, data_dict)
id = data_dict.get('id')
pic_name = data_dict.get('picname')
pic_path = f'\static{os.sep}uploads{os.sep}images{os.sep}{pic_name}'
# pic_path = "192.168.15.160:8020" + pic_path
update_data_dict = {"table": Todolist, "filters": Todolist.id == id,
"update_data": {'pic_path': pic_path}}
self.OpOr.update_data(update_data_dict)
        return 'success'
def upload_timestemp(self):
data = request.data.decode()
import uuid
abs_path = ''
if request.method == 'POST':
path = self.get_upload_path()
f = request.files
name = f["files"]
print(76,f["files"],type(f["files"]))
timestamp = str(uuid.uuid1().hex)
for key, value in f.items():
print(78,key,value)
abs_path = os.path.join(path, timestamp)
if not os.path.exists(abs_path):
os.makedirs(abs_path)
for key, value in f.items():
_, name = os.path.split(value.filename)
fil=open(os.path.join(abs_path, name), "wb")
fil.write(value.stream.read())
fil.close()
shutil.copy(os.path.join(abs_path, name),os.path.join(appdir, f'static{os.sep}uploads{os.sep}images{os.sep}{name}'))
# return timestamp,name
return name
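# --- Illustrative sketch; not part of the original module ----------------------
# TableAnalysis reads flask.request directly, so its methods are meant to be
# called from inside Flask view functions.  A minimal, hypothetical wiring (the
# `app` object and the URL path below are assumptions, not taken from this
# project) could look like:
#
#   app = Flask(__name__)
#
#   @app.route("/todolist/submit", methods=["POST"])
#   def submit_todo():
#       TableAnalysis().submit()
#       return jsonify({"msg": "success"})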
if __name__ == '__main__':
# OpOr = OperateOrm()
# DA = OpOr.select_all_data(Todolist)
# print(DA)
pass
da=TableAnalysis().readdata()
print(da)
| 32.663551 | 128 | 0.548212 | 422 | 3,495 | 4.369668 | 0.2891 | 0.047722 | 0.046095 | 0.041215 | 0.291757 | 0.238612 | 0.195228 | 0.139913 | 0.092191 | 0.092191 | 0 | 0.013108 | 0.323319 | 3,495 | 106 | 129 | 32.971698 | 0.766596 | 0.069528 | 0 | 0.21519 | 0 | 0.012658 | 0.076781 | 0.031761 | 0 | 0 | 0 | 0.009434 | 0 | 1 | 0.101266 | false | 0.012658 | 0.088608 | 0 | 0.253165 | 0.113924 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad25512b858965572b7f3697103e3341b4fa42ae | 1,559 | py | Python | examples/simple-server-thrift.py | bohblue2/callosum | adb8f6aa2d44cd3c4448f6899027a2964eca380a | [
"MIT"
] | 19 | 2018-08-17T15:58:43.000Z | 2022-03-31T07:12:43.000Z | examples/simple-server-thrift.py | bohblue2/callosum | adb8f6aa2d44cd3c4448f6899027a2964eca380a | [
"MIT"
] | 9 | 2018-11-15T15:44:11.000Z | 2019-12-06T15:32:57.000Z | examples/simple-server-thrift.py | bohblue2/callosum | adb8f6aa2d44cd3c4448f6899027a2964eca380a | [
"MIT"
] | 2 | 2018-05-16T06:02:39.000Z | 2020-07-24T06:30:58.000Z | import asyncio
import pathlib
import signal
from callosum.rpc import Peer
from callosum.serialize import noop_serializer, noop_deserializer
from callosum.lower.zeromq import ZeroMQAddress, ZeroMQRPCTransport
from callosum.upper.thrift import ThriftServerAdaptor
import thriftpy2 as thriftpy
simple_thrift = thriftpy.load(
str(pathlib.Path(__file__).parent / 'simple.thrift'),
module_name='simple_thrift')
class SimpleDispatcher:
async def echo(self, msg: str) -> str:
return msg
async def add(self, a: int, b: int) -> int:
return a + b
async def oops(self) -> bool:
raise ZeroDivisionError('oops')
async def long_delay(self) -> bool:
await asyncio.sleep(5.0)
return True
async def serve() -> None:
peer = Peer(
bind=ZeroMQAddress('tcp://127.0.0.1:5030'),
serializer=noop_serializer,
deserializer=noop_deserializer,
transport=ZeroMQRPCTransport)
adaptor = ThriftServerAdaptor(
peer,
simple_thrift.SimpleService,
SimpleDispatcher())
peer.handle_function('simple', adaptor.handle_function)
loop = asyncio.get_running_loop()
forever = loop.create_future()
loop.add_signal_handler(signal.SIGINT, forever.cancel)
loop.add_signal_handler(signal.SIGTERM, forever.cancel)
async with peer:
try:
print('server started')
await forever
except asyncio.CancelledError:
pass
print('server terminated')
if __name__ == '__main__':
asyncio.run(serve())
| 26.423729 | 67 | 0.683772 | 178 | 1,559 | 5.820225 | 0.488764 | 0.03861 | 0.025097 | 0.03861 | 0.050193 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010735 | 0.22322 | 1,559 | 58 | 68 | 26.87931 | 0.844756 | 0 | 0 | 0 | 0 | 0 | 0.060937 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.022222 | 0.177778 | 0 | 0.266667 | 0.044444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad2586574de3495771ca8ddb0ecf1196e6515ed7 | 556 | py | Python | NPS Exercise Files/Chapter 8/8-10.py | coderXeno/eric-matthes-py-book-solutions | 791a5f82f9d3bebcd8be919d54a7b592664c24d5 | [
"MIT"
] | null | null | null | NPS Exercise Files/Chapter 8/8-10.py | coderXeno/eric-matthes-py-book-solutions | 791a5f82f9d3bebcd8be919d54a7b592664c24d5 | [
"MIT"
] | null | null | null | NPS Exercise Files/Chapter 8/8-10.py | coderXeno/eric-matthes-py-book-solutions | 791a5f82f9d3bebcd8be919d54a7b592664c24d5 | [
"MIT"
] | null | null | null | def show_magicians(magicians):
for magician in magicians:
print(magician)
def make_great(magicians):
great_magicians = []
while magicians:
magician = magicians.pop()
great_magician = magician + ' the Great'
great_magicians.append(great_magician)
for great_magician in great_magicians:
magicians.append(great_magician)
magicians = ['mitochondricity', 'xtremeboi', 'lolbro','responsivedude']
show_magicians(magicians)
print("\n")
make_great(magicians)
show_magicians(magicians) | 26.47619 | 72 | 0.690647 | 57 | 556 | 6.526316 | 0.315789 | 0.188172 | 0.177419 | 0.150538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214029 | 556 | 21 | 73 | 26.47619 | 0.851259 | 0 | 0 | 0.125 | 0 | 0 | 0.104283 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad2723b5c2d7d9d5c3fc57d2579bf748302cd483 | 12,837 | py | Python | core/load_modules.py | kalijali400/Nettacker | b586f3b397382f31241e2d8287062c22678da054 | [
"Apache-2.0"
] | 1 | 2021-12-14T01:37:21.000Z | 2021-12-14T01:37:21.000Z | core/load_modules.py | kalijali400/Nettacker | b586f3b397382f31241e2d8287062c22678da054 | [
"Apache-2.0"
] | 11 | 2022-01-12T18:24:11.000Z | 2022-03-28T18:37:12.000Z | core/load_modules.py | digilant-demo/Nettacker | 9ce824a0cc63bf39c12d25550e43dbbfba84ad7e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import copy
import os
import socket
import yaml
import time
import json
from glob import glob
from io import StringIO
def getaddrinfo(*args):
"""
    same getaddrinfo() used in socket, except it resolves addresses through the socks proxy
Args:
args: *args
Returns:
getaddrinfo
"""
return [(socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))]
def set_socks_proxy(socks_proxy):
if socks_proxy:
import socks
socks_version = socks.SOCKS5 if socks_proxy.startswith('socks5://') else socks.SOCKS4
socks_proxy = socks_proxy.split('://')[1] if '://' in socks_proxy else socks_proxy
if '@' in socks_proxy:
socks_username = socks_proxy.split(':')[0]
socks_password = socks_proxy.split(':')[1].split('@')[0]
socks.set_default_proxy(
socks_version,
str(socks_proxy.rsplit('@')[1].rsplit(':')[0]), # hostname
int(socks_proxy.rsplit(':')[-1]), # port
username=socks_username,
password=socks_password
)
else:
socks.set_default_proxy(
socks_version,
str(socks_proxy.rsplit(':')[0]), # hostname
int(socks_proxy.rsplit(':')[1]) # port
)
return socks.socksocket, getaddrinfo
else:
return socket.socket, socket.getaddrinfo
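# --- Illustrative sketch; not part of the original module ----------------------
# How set_socks_proxy() is consumed (perform_scan() below does the same thing):
# the returned pair is monkey-patched onto the socket module so that subsequent
# connections go through the proxy.  The proxy URL is a made-up placeholder.
def _example_enable_socks_proxy():
    socket.socket, socket.getaddrinfo = set_socks_proxy("socks5://user:secret@127.0.0.1:9050")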
class NettackerModules:
def __init__(self):
from config import nettacker_paths
self.module_name = None
self.module_content = None
self.scan_unique_id = None
self.target = None
self.process_number = None
self.module_thread_number = None
self.total_module_thread_number = None
self.module_inputs = {}
self.skip_service_discovery = None
self.discovered_services = None
self.ignored_core_modules = [
'subdomain_scan',
'icmp_scan',
'port_scan'
]
self.service_discovery_signatures = list(set(yaml.load(
StringIO(
open(nettacker_paths()['modules_path'] + '/scan/port.yaml').read().format(
**{'target': 'dummy'}
)
),
Loader=yaml.FullLoader
)['payloads'][0]['steps'][0]['response']['conditions'].keys()))
self.libraries = [
module_protocol.split('.py')[0] for module_protocol in
os.listdir(nettacker_paths()['module_protocols_path']) if
module_protocol.endswith('.py') and module_protocol != '__init__.py'
]
def load(self):
from config import nettacker_paths
from core.utility import find_and_replace_configuration_keys
from database.db import find_events
self.module_content = find_and_replace_configuration_keys(
yaml.load(
StringIO(
open(
nettacker_paths()['modules_path'] +
'/' +
self.module_name.split('_')[-1].split('.yaml')[0] +
'/' +
'_'.join(self.module_name.split('_')[:-1]) +
'.yaml',
'r'
).read().format(
**self.module_inputs
)
),
Loader=yaml.FullLoader
),
self.module_inputs
)
if not self.skip_service_discovery and self.module_name not in self.ignored_core_modules:
services = {}
for service in find_events(self.target, 'port_scan', self.scan_unique_id):
service_event = json.loads(service.json_event)
port = service_event['ports']
protocols = service_event['response']['conditions_results'].keys()
for protocol in protocols:
if protocol in self.libraries and protocol:
if protocol in services:
services[protocol].append(port)
else:
services[protocol] = [port]
self.discovered_services = copy.deepcopy(services)
index_payload = 0
for payload in copy.deepcopy(self.module_content['payloads']):
if payload['library'] not in self.discovered_services and \
payload['library'] in self.service_discovery_signatures:
del self.module_content['payloads'][index_payload]
index_payload -= 1
else:
index_step = 0
for step in copy.deepcopy(
self.module_content['payloads'][index_payload]['steps']
):
find_and_replace_configuration_keys(
step,
{
"ports": self.discovered_services[payload['library']]
}
)
self.module_content['payloads'][index_payload]['steps'][index_step] = step
index_step += 1
index_payload += 1
def generate_loops(self):
from core.utility import expand_module_steps
self.module_content['payloads'] = expand_module_steps(self.module_content['payloads'])
def start(self):
from terminable_thread import Thread
from core.utility import wait_for_threads_to_finish
active_threads = []
from core.alert import warn
from core.alert import verbose_event_info
from core.alert import messages
# counting total number of requests
total_number_of_requests = 0
for payload in self.module_content['payloads']:
if payload['library'] not in self.libraries:
warn(messages("library_not_supported").format(payload['library']))
return None
for step in payload['steps']:
for _ in step:
total_number_of_requests += 1
request_number_counter = 0
for payload in self.module_content['payloads']:
protocol = getattr(
__import__(
'core.module_protocols.{library}'.format(library=payload['library']),
fromlist=['Engine']
),
'Engine'
)
for step in payload['steps']:
for sub_step in step:
thread = Thread(
target=protocol.run,
args=(
sub_step,
self.module_name,
self.target,
self.scan_unique_id,
self.module_inputs,
self.process_number,
self.module_thread_number,
self.total_module_thread_number,
request_number_counter,
total_number_of_requests
)
)
thread.name = f"{self.target} -> {self.module_name} -> {sub_step}"
request_number_counter += 1
verbose_event_info(
messages("sending_module_request").format(
self.process_number,
self.module_name,
self.target,
self.module_thread_number,
self.total_module_thread_number,
request_number_counter,
total_number_of_requests
)
)
thread.start()
time.sleep(self.module_inputs['time_sleep_between_requests'])
active_threads.append(thread)
wait_for_threads_to_finish(
active_threads,
maximum=self.module_inputs['thread_per_host'],
terminable=True
)
wait_for_threads_to_finish(
active_threads,
maximum=None,
terminable=True
)
def load_all_graphs():
"""
load all available graphs
Returns:
an array of graph names
"""
from config import nettacker_paths
graph_names = []
for graph_library in glob(os.path.join(nettacker_paths()['home_path'] + '/lib/graph/*/engine.py')):
graph_names.append(graph_library.split('/')[-2] + '_graph')
return list(set(graph_names))
def load_all_languages():
"""
load all available languages
Returns:
an array of languages
"""
languages_list = []
from config import nettacker_paths
for language in glob(os.path.join(nettacker_paths()['home_path'] + '/lib/messages/*.yaml')):
languages_list.append(language.split('/')[-1].split('.')[0])
return list(set(languages_list))
def load_all_modules(limit=-1, full_details=False):
"""
load all available modules
    limit: return a limited number of modules
    full_details: include each module's info dict
    Returns:
        a dict keyed by module name (values are info dicts when full_details is True, else None)
"""
# Search for Modules
from config import nettacker_paths
from core.utility import sort_dictonary
if full_details:
import yaml
module_names = {}
for module_name in glob(os.path.join(nettacker_paths()['modules_path'] + '/*/*.yaml')):
libname = module_name.split('/')[-1].split('.')[0]
category = module_name.split('/')[-2]
module_names[libname + '_' + category] = yaml.load(
StringIO(
open(
nettacker_paths()['modules_path'] +
'/' +
category +
'/' +
libname +
'.yaml',
'r'
).read().split('payload:')[0]
),
Loader=yaml.FullLoader
)['info'] if full_details else None
if len(module_names) == limit:
module_names['...'] = {}
break
module_names = sort_dictonary(module_names)
module_names['all'] = {}
return module_names
def load_all_profiles(limit=-1):
"""
load all available profiles
Returns:
an array of all profile names
"""
from core.utility import sort_dictonary
all_modules_with_details = load_all_modules(limit=limit, full_details=True)
profiles = {}
if '...' in all_modules_with_details:
del all_modules_with_details['...']
del all_modules_with_details['all']
for key in all_modules_with_details:
for tag in all_modules_with_details[key]['profiles']:
if tag not in profiles:
profiles[tag] = []
profiles[tag].append(key)
else:
profiles[tag].append(key)
if len(profiles) == limit:
profiles = sort_dictonary(profiles)
profiles['...'] = []
profiles['all'] = []
return profiles
profiles = sort_dictonary(profiles)
profiles['all'] = []
return profiles
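# --- Illustrative sketch; not part of the original module ----------------------
# Shape of the data returned above: load_all_modules() maps "<name>_<category>"
# keys to their info dicts, and load_all_profiles() groups those keys by profile
# tag (the module/tag names below are examples only), e.g.
#   {"scan": ["port_scan", "subdomain_scan", ...], "all": []}
def _example_list_profiles():
    # Requires a Nettacker checkout so that the config/module paths resolve.
    for tag, modules in load_all_profiles(limit=10).items():
        print(tag, len(modules))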
def perform_scan(options, target, module_name, scan_unique_id, process_number, thread_number, total_number_threads):
from core.alert import (verbose_event_info,
messages)
socket.socket, socket.getaddrinfo = set_socks_proxy(options.socks_proxy)
options.target = target
validate_module = NettackerModules()
validate_module.skip_service_discovery = options.skip_service_discovery
validate_module.module_name = module_name
validate_module.process_number = process_number
validate_module.module_thread_number = thread_number
validate_module.total_module_thread_number = total_number_threads
validate_module.module_inputs = vars(options)
if options.modules_extra_args:
for module_extra_args in validate_module.module_inputs['modules_extra_args']:
validate_module.module_inputs[module_extra_args] = \
validate_module.module_inputs['modules_extra_args'][module_extra_args]
validate_module.scan_unique_id = scan_unique_id
validate_module.target = target
validate_module.load()
validate_module.generate_loops()
validate_module.start()
verbose_event_info(
messages("finished_parallel_module_scan").format(
process_number,
module_name,
target,
thread_number,
total_number_threads
)
)
return os.EX_OK
| 36.887931 | 116 | 0.546779 | 1,273 | 12,837 | 5.233307 | 0.15868 | 0.039027 | 0.025518 | 0.030021 | 0.347793 | 0.249325 | 0.210748 | 0.148454 | 0.107926 | 0.07145 | 0 | 0.004995 | 0.36052 | 12,837 | 347 | 117 | 36.994236 | 0.806554 | 0.045883 | 0 | 0.263158 | 0 | 0 | 0.062928 | 0.014287 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038596 | false | 0.007018 | 0.094737 | 0 | 0.17193 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad27ea843ef3b739fb2f1acc81cdeef53a6a8cf5 | 12,319 | py | Python | core/utils/quaternion_lf.py | AlbertoRemus/GDR_Net | 114cff27c6fc6048724a6f2bdce2306ab51d798e | [
"Apache-2.0"
] | 132 | 2021-02-25T10:45:29.000Z | 2022-03-30T06:54:26.000Z | core/utils/quaternion_lf.py | AlbertoRemus/GDR_Net | 114cff27c6fc6048724a6f2bdce2306ab51d798e | [
"Apache-2.0"
] | 69 | 2021-03-23T12:26:17.000Z | 2022-03-29T09:08:11.000Z | core/utils/quaternion_lf.py | AlbertoRemus/GDR_Net | 114cff27c6fc6048724a6f2bdce2306ab51d798e | [
"Apache-2.0"
] | 23 | 2021-03-26T06:21:32.000Z | 2022-03-23T23:53:51.000Z | # modified from: https://github.com/NVlabs/latentfusion/blob/master/latentfusion/three/quaternion.py
import math
import torch
from torch.nn import functional as F
@torch.jit.script
def acos_safe(t, eps: float = 1e-7):
return torch.acos(torch.clamp(t, min=-1.0 + eps, max=1.0 - eps))
@torch.jit.script
def ensure_batch_dim(tensor, num_dims: int):
unsqueezed = False
if len(tensor.shape) == num_dims:
tensor = tensor.unsqueeze(0)
unsqueezed = True
return tensor, unsqueezed
def identity(n: int, device: str = "cpu"):
return torch.tensor((1.0, 0.0, 0.0, 0.0), device=device).view(1, 4).expand(n, 4)
def normalize(quaternion: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
r"""Normalizes a quaternion.
The quaternion should be in (w, x, y, z) format.
Args:
quaternion (torch.Tensor): a tensor containing a quaternion to be
normalized. The tensor can be of shape :math:`(*, 4)`.
eps (Optional[bool]): small value to avoid division by zero.
Default: 1e-12.
Return:
torch.Tensor: the normalized quaternion of shape :math:`(*, 4)`.
"""
if not isinstance(quaternion, torch.Tensor):
raise TypeError("Input type is not a torch.Tensor. Got {}".format(type(quaternion)))
# if not quaternion.shape[-1] == 4:
# raise ValueError(
# "Input must be a tensor of shape (*, 4). Got {}".format(
# quaternion.shape))
return F.normalize(quaternion, p=2.0, dim=-1, eps=eps)
def quat_to_mat(quaternion: torch.Tensor) -> torch.Tensor:
"""
Converts a quaternion to a rotation matrix.
The quaternion should be in (w, x, y, z) format.
Adapted from:
https://github.com/kornia/kornia/blob/d729d7c4357ca73e4915a42285a0771bca4436ce/kornia/geometry/conversions.py#L235
Args:
quaternion (torch.Tensor): a tensor containing a quaternion to be
converted. The tensor can be of shape :math:`(*, 4)`.
Return:
torch.Tensor: the rotation matrix of shape :math:`(*, 3, 3)`.
Example:
>>> quaternion = torch.tensor([0., 0., 1., 0.])
>>> quat_to_mat(quaternion)
        tensor([[-1.,  0.,  0.],
                [ 0.,  1.,  0.],
                [ 0.,  0., -1.]])
"""
quaternion, unsqueezed = ensure_batch_dim(quaternion, 1)
if not quaternion.shape[-1] == 4:
raise ValueError("Input must be a tensor of shape (*, 4). Got {}".format(quaternion.shape))
# normalize the input quaternion
quaternion_norm = normalize(quaternion)
# unpack the normalized quaternion components
w, x, y, z = torch.chunk(quaternion_norm, chunks=4, dim=-1)
# compute the actual conversion
tx: torch.Tensor = 2.0 * x
ty: torch.Tensor = 2.0 * y
tz: torch.Tensor = 2.0 * z
twx: torch.Tensor = tx * w
twy: torch.Tensor = ty * w
twz: torch.Tensor = tz * w
txx: torch.Tensor = tx * x
txy: torch.Tensor = ty * x
txz: torch.Tensor = tz * x
tyy: torch.Tensor = ty * y
tyz: torch.Tensor = tz * y
tzz: torch.Tensor = tz * z
one: torch.Tensor = torch.tensor(1.0)
matrix: torch.Tensor = torch.stack(
[
one - (tyy + tzz),
txy - twz,
txz + twy,
txy + twz,
one - (txx + tzz),
tyz - twx,
txz - twy,
tyz + twx,
one - (txx + tyy),
],
dim=-1,
).view(-1, 3, 3)
if unsqueezed:
matrix = matrix.squeeze(0)
return matrix
def mat_to_quat(rotation_matrix: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
"""
Convert 3x3 rotation matrix to 4d quaternion vector.
The quaternion vector has components in (w, x, y, z) format.
Adapted From:
https://github.com/kornia/kornia/blob/d729d7c4357ca73e4915a42285a0771bca4436ce/kornia/geometry/conversions.py#L235
Args:
rotation_matrix (torch.Tensor): the rotation matrix to convert.
eps (float): small value to avoid zero division. Default: 1e-8.
Return:
torch.Tensor: the rotation in quaternion.
Shape:
- Input: :math:`(*, 3, 3)`
- Output: :math:`(*, 4)`
"""
rotation_matrix, unsqueezed = ensure_batch_dim(rotation_matrix, 2)
if not isinstance(rotation_matrix, torch.Tensor):
raise TypeError("Input type is not a torch.Tensor. Got {}".format(type(rotation_matrix)))
if not rotation_matrix.shape[-2:] == (3, 3):
raise ValueError("Input size must be a (*, 3, 3) tensor. Got {}".format(rotation_matrix.shape))
def safe_zero_division(numerator: torch.Tensor, denominator: torch.Tensor) -> torch.Tensor:
eps = torch.finfo(numerator.dtype).tiny
return numerator / torch.clamp(denominator, min=eps)
if not rotation_matrix.is_contiguous():
rotation_matrix_vec: torch.Tensor = rotation_matrix.reshape(*rotation_matrix.shape[:-2], 9)
else:
rotation_matrix_vec: torch.Tensor = rotation_matrix.view(*rotation_matrix.shape[:-2], 9)
m00, m01, m02, m10, m11, m12, m20, m21, m22 = torch.chunk(rotation_matrix_vec, chunks=9, dim=-1)
trace: torch.Tensor = m00 + m11 + m22
def trace_positive_cond():
sq = torch.sqrt(trace + 1.0) * 2.0 # sq = 4 * qw.
qw = 0.25 * sq
qx = safe_zero_division(m21 - m12, sq)
qy = safe_zero_division(m02 - m20, sq)
qz = safe_zero_division(m10 - m01, sq)
return torch.cat([qw, qx, qy, qz], dim=-1)
def cond_1():
sq = torch.sqrt(1.0 + m00 - m11 - m22 + eps) * 2.0 # sq = 4 * qx.
qw = safe_zero_division(m21 - m12, sq)
qx = 0.25 * sq
qy = safe_zero_division(m01 + m10, sq)
qz = safe_zero_division(m02 + m20, sq)
return torch.cat([qw, qx, qy, qz], dim=-1)
def cond_2():
sq = torch.sqrt(1.0 + m11 - m00 - m22 + eps) * 2.0 # sq = 4 * qy.
qw = safe_zero_division(m02 - m20, sq)
qx = safe_zero_division(m01 + m10, sq)
qy = 0.25 * sq
qz = safe_zero_division(m12 + m21, sq)
return torch.cat([qw, qx, qy, qz], dim=-1)
def cond_3():
sq = torch.sqrt(1.0 + m22 - m00 - m11 + eps) * 2.0 # sq = 4 * qz.
qw = safe_zero_division(m10 - m01, sq)
qx = safe_zero_division(m02 + m20, sq)
qy = safe_zero_division(m12 + m21, sq)
qz = 0.25 * sq
return torch.cat([qw, qx, qy, qz], dim=-1)
where_2 = torch.where(m11 > m22, cond_2(), cond_3())
where_1 = torch.where((m00 > m11) & (m00 > m22), cond_1(), where_2)
quaternion: torch.Tensor = torch.where(trace > 0.0, trace_positive_cond(), where_1)
if unsqueezed:
quaternion = quaternion.squeeze(0)
return quaternion
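# --- Illustrative sketch; not part of the original module ----------------------
# Round-trip sanity check between the two conversions above: a normalized
# quaternion converted to a matrix and back should match up to sign and a small
# numerical error.
def _example_quat_mat_round_trip():
    q = normalize(torch.randn(8, 4))
    q_back = mat_to_quat(quat_to_mat(q))
    # q and -q encode the same rotation, so align the signs before comparing.
    sign = torch.sign((q * q_back).sum(dim=-1, keepdim=True))
    assert torch.allclose(q, sign * q_back, atol=1e-4)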
@torch.jit.script
def random(k: int = 1, device: str = "cpu"):
"""Return uniform random unit quaternion.
rand: array like or None
Three independent random variables that are uniformly distributed
between 0 and 1.
"""
rand = torch.rand(k, 3, device=device)
r1 = torch.sqrt(1.0 - rand[:, 0])
r2 = torch.sqrt(rand[:, 0])
pi2 = math.pi * 2.0
t1 = pi2 * rand[:, 1]
t2 = pi2 * rand[:, 2]
return torch.stack([torch.cos(t2) * r2, torch.sin(t1) * r1, torch.cos(t1) * r1, torch.sin(t2) * r2], dim=1)
def qmul(q1, q2):
"""Quaternion multiplication.
Use the Hamilton product to perform quaternion multiplication.
References:
http://en.wikipedia.org/wiki/Quaternions#Hamilton_product
https://github.com/matthew-brett/transforms3d/blob/master/transforms3d/quaternions.py
"""
assert q1.shape[-1] == 4
assert q2.shape[-1] == 4
ham_prod = torch.bmm(q2.view(-1, 4, 1), q1.view(-1, 1, 4))
w = ham_prod[:, 0, 0] - ham_prod[:, 1, 1] - ham_prod[:, 2, 2] - ham_prod[:, 3, 3]
x = ham_prod[:, 0, 1] + ham_prod[:, 1, 0] - ham_prod[:, 2, 3] + ham_prod[:, 3, 2]
y = ham_prod[:, 0, 2] + ham_prod[:, 1, 3] + ham_prod[:, 2, 0] - ham_prod[:, 3, 1]
z = ham_prod[:, 0, 3] - ham_prod[:, 1, 2] + ham_prod[:, 2, 1] + ham_prod[:, 3, 0]
return torch.stack((w, x, y, z), dim=1).view(q1.shape)
def rotate_vector(quat, vector):
"""
References:
https://github.com/matthew-brett/transforms3d/blob/master/transforms3d/quaternions.py#L419
"""
assert quat.shape[-1] == 4
assert vector.shape[-1] == 3
assert quat.shape[:-1] == vector.shape[:-1]
original_shape = list(vector.shape)
quat = quat.view(-1, 4)
vector = vector.view(-1, 3)
pure_quat = quat[:, 1:]
uv = torch.cross(pure_quat, vector, dim=1)
uuv = torch.cross(pure_quat, uv, dim=1)
return (vector + 2 * (quat[:, :1] * uv + uuv)).view(original_shape)
def from_spherical(theta, phi, r=1.0):
x = torch.cos(theta) * torch.sin(phi)
y = torch.sin(theta) * torch.sin(phi)
z = r * torch.cos(phi)
w = torch.zeros_like(x)
return torch.stack((w, x, y, z), dim=-1)
def from_axis_angle(axis, angle):
"""Compute a quaternion from the axis angle representation.
Reference:
https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
Args:
axis: axis to rotate about
angle: angle to rotate by
Returns:
Tensor of shape (*, 4) representing a quaternion.
"""
if torch.is_tensor(axis) and isinstance(angle, float):
angle = torch.tensor(angle, dtype=axis.dtype, device=axis.device)
angle = angle.expand(axis.shape[0])
axis = axis / torch.norm(axis, dim=-1, keepdim=True)
c = torch.cos(angle / 2.0)
s = torch.sin(angle / 2.0)
w = c
x = s * axis[..., 0]
y = s * axis[..., 1]
z = s * axis[..., 2]
return torch.stack((w, x, y, z), dim=-1)
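# --- Illustrative sketch; not part of the original module ----------------------
# Rotating a vector with a quaternion built from an axis-angle pair: a 90 degree
# rotation about +z should map the x axis onto the y axis.
def _example_rotate_about_z():
    quat = from_axis_angle(torch.tensor([[0.0, 0.0, 1.0]]), math.pi / 2.0)
    vec = torch.tensor([[1.0, 0.0, 0.0]])
    assert torch.allclose(rotate_vector(quat, vec), torch.tensor([[0.0, 1.0, 0.0]]), atol=1e-6)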
def qexp(q, eps=1e-8, is_normalized=False):
"""allow unnormalized Computes the quaternion exponent.
Reference:
https://en.wikipedia.org/wiki/Quaternion#Exponential,_logarithm,_and_power_functions
Args:
q (tensor): (*, 3) or (*, 4) the quaternion to compute the exponent of
Returns:
(tensor): Tensor of shape (*, 4) representing exp(q)
"""
if is_normalized:
q = normalize(q, eps=eps)
if q.shape[1] == 4:
# Let q = (s; v).
s, v = torch.split(q, (1, 3), dim=-1)
else:
s = torch.zeros_like(q[:, :1])
v = q
theta = torch.norm(v, dim=-1, keepdim=True)
exp_s = torch.exp(s)
w = torch.cos(theta)
xyz = 1.0 / theta.clamp(min=eps) * torch.sin(theta) * v
return exp_s * torch.cat((w, xyz), dim=-1)
def qlog(q, eps=1e-8):
"""Computes the quaternion logarithm.
Reference:
https://en.wikipedia.org/wiki/Quaternion#Exponential,_logarithm,_and_power_functions
https://users.aalto.fi/~ssarkka/pub/quat.pdf
Args:
q (tensor): the quaternion to compute the logarithm of
Returns:
(tensor): Tensor of shape (*, 4) representing ln(q)
"""
mag = torch.norm(q, dim=-1, keepdim=True)
# Let q = (s; v).
s, v = torch.split(q, (1, 3), dim=-1)
w = torch.log(mag)
xyz = v / torch.norm(v, dim=-1, keepdim=True).clamp(min=eps) * acos_safe(s / mag.clamp(min=eps))
return torch.cat((w, xyz), dim=-1)
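# --- Illustrative sketch; not part of the original module ----------------------
# qexp() and qlog() are inverse operations (away from the singular cases), so a
# unit quaternion should survive a log/exp round trip.
def _example_qexp_qlog_round_trip():
    q = normalize(torch.randn(4, 4))
    assert torch.allclose(q, qexp(qlog(q)), atol=1e-4)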
def qdelta(n, std, device=None):
omega = torch.cat((torch.zeros(n, 1, device=device), torch.randn(n, 3, device=device)), dim=-1)
delta_q = qexp(std / 2.0 * omega)
return delta_q
def perturb(q, std):
"""Perturbs the unit quaternion `q`.
References:
https://math.stackexchange.com/questions/2992016/how-to-linearize-quaternions
http://asrl.utias.utoronto.ca/~tdb/bib/barfoot_aa10_appendix.pdf
https://math.stackexchange.com/questions/473736/small-angular-displacements-with-a-quaternion-representation
Args:
q (tensor): the quaternion to perturb (the mean of the perturbation)
        std (float): the standard deviation of the perturbation
Returns:
(tensor): Tensor of shape (*, 4), the perturbed quaternion
"""
q, unsqueezed = ensure_batch_dim(q, num_dims=1)
n = q.shape[0]
delta_q = qdelta(n, std, device=q.device)
q_out = qmul(delta_q, q)
if unsqueezed:
q_out = q_out.squeeze(0)
return q_out
def angular_distance(q1, q2, eps: float = 1e-7):
q1 = normalize(q1)
q2 = normalize(q2)
dot = q1 @ q2.t()
dist = 2 * acos_safe(dot.abs(), eps=eps)
return dist
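# --- Illustrative sketch; not part of the original module ----------------------
# perturb() draws a small random rotation around q; angular_distance() can be
# used to confirm the perturbation stays near the requested scale.
def _example_perturb_and_measure():
    q = normalize(torch.randn(1, 4))
    q_noisy = perturb(q, std=0.01)
    print(float(angular_distance(q, q_noisy)))  # a small angle in radians, close to 0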
| 33.204852 | 122 | 0.602809 | 1,794 | 12,319 | 4.06243 | 0.169454 | 0.061883 | 0.02854 | 0.003842 | 0.336306 | 0.276482 | 0.218578 | 0.194978 | 0.176592 | 0.173299 | 0 | 0.051174 | 0.249696 | 12,319 | 370 | 123 | 33.294595 | 0.737315 | 0.314392 | 0 | 0.084656 | 0 | 0 | 0.022023 | 0 | 0 | 0 | 0 | 0 | 0.026455 | 1 | 0.111111 | false | 0 | 0.015873 | 0.010582 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad357cfc5c340b19395c6ffb496bc4447531b4bc | 3,340 | py | Python | sctools/cli.py | timoast/sctools | dca9404c2a1c406cab02afac18a0c63be6a5d66b | [
"MIT"
] | null | null | null | sctools/cli.py | timoast/sctools | dca9404c2a1c406cab02afac18a0c63be6a5d66b | [
"MIT"
] | null | null | null | sctools/cli.py | timoast/sctools | dca9404c2a1c406cab02afac18a0c63be6a5d66b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#! /usr/bin/env python
from __future__ import absolute_import
from sctools import sctools, genotype, replace
import pandas as pd
import functools
import time
def log_info(func):
"""Decorator that prints function arguments and runtime
"""
@functools.wraps(func)
def wrapper(args):
print("Function {} called with the following arguments:\n".format(func.__name__))
for arg in vars(args):
print(str(arg) + '\t' + str(getattr(args, arg)))
t1 = time.time()
func(args)
t2 = time.time()
elapsed = [round(x, 2) for x in divmod(t2-t1, 60)]
print("\nFunction completed in {} m {} s\n".format(elapsed[0], elapsed[1]))
return wrapper
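# --- Illustrative sketch; not part of the original module ----------------------
# log_info expects a single argparse-style namespace, so a decorated function is
# invoked as func(args).  A tiny, hypothetical example:
@log_info
def _example_command(options):
    """Prints the BAM path passed on the (hypothetical) command line."""
    print(options.bam)
# _example_command(argparse.Namespace(bam="cells.bam"))  # would log the args and runtime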
@log_info
def run_filterbarcodes(options):
"""Wraps the sctools.filterbarcodes function for use on the command line
"""
sctools.filterbarcodes(cells=options.cells, bam=options.bam, trim_suffix=options.trim_suffix,
output=options.output, sam=options.sam, nproc=options.nproc)
@log_info
def run_genotyping(options):
"""Wraps the genotype.run_genotyping function for use on the command line
"""
data = pd.read_table(options.infile)
gt = genotype.run_genotyping(data=data,
min_umi_total=options.min_umi_total,
min_umi_each=options.min_umi_each,
subsample=options.downsample,
margin=options.margin,
max_difference=options.max_difference,
eps_background=options.eps_background,
eps_background_core=options.eps_background_core,
eps_cells=options.eps_cells,
eps_margin=options.eps_margin,
min_drops_background=options.min_samples_background,
min_drops_cells=options.min_samples_cells)
if options.plot:
import matplotlib.pyplot as plt
pt = gt.plot()
plot_name = options.sample_name + ".png"
plt.savefig(plot_name, dpi=500)
plt.close()
if options.summarize:
summary = gt.summarize()
summary.to_csv(options.sample_name + "_summary.tsv", sep="\t", index=False)
gt.labels[['cell_barcode', 'reference_count',
'alternate_count', 'label']].to_csv(options.sample_name + "_genotypes.tsv",
sep='\t', index=False)
@log_info
def run_countsnps(options):
"""Wraps the sctools.countsnps function for use on the command line
"""
data = sctools.countsnps(bam=options.bam, snp=options.snp, cells=options.cells, nproc=options.nproc)
sctools.save_data(data, options.output)
@log_info
def run_countedited(options):
"""Wraps the sctools.countedited function for use on the command line
"""
data = sctools.countedited(bam=options.bam, edit=options.edit, cells=options.cells, nproc=options.nproc)
sctools.save_edit_data(data, options.output)
@log_info
def run_replace(options):
"""Wraps the sctools.replace function for use on the command line
"""
replace.run_replace(genome=options.genome, snp=options.snp, outfile=options.output)
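# Minimal illustrative sketch of the @log_info decorator (hypothetical toy command,
# not part of the real sctools CLI; argparse.Namespace stands in for parsed options):
if __name__ == "__main__":
    import argparse

    @log_info
    def run_demo(options):
        """Toy wrapper used only to show what @log_info prints"""
        print("processing", options.infile)

    run_demo(argparse.Namespace(infile="example.tsv"))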
| 39.294118 | 108 | 0.620958 | 399 | 3,340 | 5.025063 | 0.330827 | 0.020948 | 0.024938 | 0.032419 | 0.205486 | 0.166584 | 0.166584 | 0.136658 | 0.040898 | 0 | 0 | 0.005383 | 0.276946 | 3,340 | 84 | 109 | 39.761905 | 0.824845 | 0.138323 | 0 | 0.084746 | 0 | 0 | 0.059382 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.118644 | false | 0 | 0.101695 | 0 | 0.237288 | 0.050847 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad38305f25ad6e7ef4f623b5737f9dc24393410a | 3,504 | py | Python | py_edl_editor/tc_tools.py | ThomasWeckenmann/py_edl_editor | efc100285afb2cc3b58998fae5054d5cd9abbb6e | [
"MIT"
] | 1 | 2021-09-07T17:32:55.000Z | 2021-09-07T17:32:55.000Z | py_edl_editor/tc_tools.py | ThomasWeckenmann/py_edl_editor | efc100285afb2cc3b58998fae5054d5cd9abbb6e | [
"MIT"
] | null | null | null | py_edl_editor/tc_tools.py | ThomasWeckenmann/py_edl_editor | efc100285afb2cc3b58998fae5054d5cd9abbb6e | [
"MIT"
] | null | null | null | """Timecode tools."""
# Import third-party modules
from timecode import Timecode # type: ignore
def remove_edl_gaps(edl):
"""Return EDL without gaps between EDL Events.
Args:
edl (Edl): Edit Decision List.
Return:
Edl: Edit Decision List without gaps.
"""
for index in range(len(edl.events) - 1):
event_rec_end = edl.events[index].rec_end_tc
next_event_rec_start = edl.events[index + 1].rec_start_tc
next_event_rec_end = edl.events[index + 1].rec_end_tc
diff = next_event_rec_start.frames - event_rec_end.frames
if diff > 0:
tc_diff = next_event_rec_start - event_rec_end
edl.events[index + 1].rec_start_tc = next_event_rec_start - tc_diff
edl.events[index + 1].rec_end_tc = next_event_rec_end - tc_diff
# Special case: EDL with incorrect order:
if diff < 0:
tc_diff = event_rec_end - next_event_rec_start
edl.events[index + 1].rec_start_tc = next_event_rec_start + tc_diff
edl.events[index + 1].rec_end_tc = next_event_rec_end + tc_diff
return edl
def set_edl_start_tc(edl, start_tc):
"""Return EDL with updated start timecode.
Args:
edl (Edl): Edit Decision List.
start_tc (string): String representing the start of the new EDL.
Return:
Edl: Edit Decision List with updated start timecode.
"""
new_start_tc = tc_from_string(edl.fps, start_tc)
if new_start_tc:
first_event_rec_start = edl.events[0].rec_start_tc
diff = edl.events[0].rec_start_tc - new_start_tc
for event in edl.events:
if first_event_rec_start > new_start_tc:
event.rec_start_tc = event.rec_start_tc - diff
event.rec_end_tc = event.rec_end_tc - diff
else:
event.rec_start_tc = event.rec_start_tc + diff
event.rec_end_tc = event.rec_end_tc + diff
return edl
def tc_from_string(framerate, start_tc):
"""Convert and return string to Timecode instance.
String can be either a frame number or a string in smpte timecode format
like: "hh:mm:ss:ff".
Args:
framerate (string): Framerate to calculate the Timecode instance.
start_tc (string): String to be converted to a Timecode instance.
Return:
Timecode: Timecode instance calculated from input string.
"""
new_start_tc = None
try:
frames = int(start_tc)
new_start_tc = Timecode(framerate, "00:00:00:{0}".format(frames))
except ValueError:
try:
new_start_tc = Timecode(framerate, start_tc)
except (IndexError, ValueError):
print("Wrong Timcode format: {0}".format(start_tc))
return new_start_tc
def add_handles_to_edl(edl, handles):
"""Return EDL with added handles.
Args:
edl (Edl): Edit Decision List.
handles (int): Number of handles to be added to each event.
Return:
Edl: Edit Decision List with added handles.
"""
first_event_rec_start = edl.events[0].rec_start_tc
if (first_event_rec_start.frame_number - handles) < 0:
edl = set_edl_start_tc(edl, str((first_event_rec_start + handles)))
for event in edl.events:
event.src_start_tc = event.src_start_tc - handles
event.src_end_tc = event.src_end_tc + handles
event.rec_start_tc = event.rec_start_tc - handles
event.rec_end_tc = event.rec_end_tc + handles
return edl
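# Minimal usage sketch (hypothetical values; requires the `timecode` package):
if __name__ == "__main__":
    print(tc_from_string("24", "01:00:00:12"))  # smpte-style string at 24 fps
    print(tc_from_string("24", "12"))           # a plain frame count is also accepted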
| 33.692308 | 79 | 0.655251 | 513 | 3,504 | 4.187135 | 0.177388 | 0.110801 | 0.102886 | 0.055866 | 0.492086 | 0.38175 | 0.268156 | 0.265829 | 0.213687 | 0.213687 | 0 | 0.00813 | 0.262842 | 3,504 | 103 | 80 | 34.019417 | 0.823461 | 0.286815 | 0 | 0.18 | 0 | 0 | 0.015651 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.02 | 0 | 0.18 | 0.02 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad3b5a1d941d29f38c7da6f8b663ed6c6efae579 | 17,935 | py | Python | modules/compiled/tests/testing_tools.py | jkiesele/HGCalML-1 | 23e98b207589b7659ee2bedf1dbd496e8500247c | [
"BSD-3-Clause"
] | null | null | null | modules/compiled/tests/testing_tools.py | jkiesele/HGCalML-1 | 23e98b207589b7659ee2bedf1dbd496e8500247c | [
"BSD-3-Clause"
] | null | null | null | modules/compiled/tests/testing_tools.py | jkiesele/HGCalML-1 | 23e98b207589b7659ee2bedf1dbd496e8500247c | [
"BSD-3-Clause"
] | null | null | null |
import tensorflow as tf
import time
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from select_knn_op import SelectKnn
import os
def makeIndices(nvert,nneigh):
all = []
for i in range(nvert):
a = np.array([],dtype='int32')
while len(a) < nneigh-1:
a = np.random.choice(nvert, nneigh-1, replace=False)
a = a[a != i]
a = np.concatenate([np.array([i],dtype='int32'),a],axis=-1)
a = np.expand_dims(a, axis=0)
all.append(a)
return np.concatenate(all,axis=0)
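# For example, makeIndices(5, 3) returns a (5, 3) int array whose first column is
# [0, 1, 2, 3, 4] and whose remaining entries in each row are distinct random
# indices different from that row's own index.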
class Benchmarker(object):
def __init__(self, tf_implementation, custom_implementation, name, use_distances_direct,tfoncpu,customoncpu,mean_and_max):
self.tfimp=tf_implementation
self.customimpl=custom_implementation
self.name = name
self.debugout=False
self.use_distances_direct=use_distances_direct
self.tfoncpu=tfoncpu
self.customoncpu=customoncpu
self.mean_and_max=mean_and_max
def benchmark(self, nvert = 30000, nfeat = 64, nneigh = 128, ncoords = 4, dogradient=False,do_tf=True):
coords = tf.constant( np.random.rand(nvert,ncoords) ,dtype='float32')
feats = tf.constant( np.random.rand(nvert,nfeat) ,dtype='float32')
row_splits = tf.constant( [0, nvert] ,dtype='int32')
indices, distances = SelectKnn(K=nneigh, coords=coords, row_splits=row_splits)
if self.use_distances_direct:
coords = distances
tf_failed = False
if not dogradient:
#each gets one dry run to compile
meanmax = self.customimpl(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
t0 = time.time()
for i in range(0,50):
meanmax = self.customimpl(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
op_time= (time.time() - t0)/50.
print('op_time',op_time)
tf_time=0
if do_tf:
try:
meanmax = self.tfimp(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
t0 = time.time()
for i in range(0,50):
meanmax = self.tfimp(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
tf_time= (time.time() - t0)/50.
except:
tf_failed=True
print('tf_time',tf_time)
return op_time, tf_time
else:
with tf.GradientTape(persistent=True,watch_accessed_variables=True) as t_newop:
t_newop.watch(coords)
t_newop.watch(feats)
meanmax = self.customimpl(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
#once to get it compiled in case needed
feat_grad = t_newop.gradient(meanmax, feats)
coord_grad = t_newop.gradient(meanmax, coords)
t0 = time.time()
for i in range(5) :
feat_grad = t_newop.gradient(meanmax, feats)
coord_grad = t_newop.gradient(meanmax, coords)
op_time= (time.time() - t0)/5.
tf_time=0
if do_tf:
try:
with tf.GradientTape(persistent=True) as t_tfop:
t_tfop.watch(coords)
t_tfop.watch(feats)
meanmax = self.tfimp(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
feat_grad = t_tfop.gradient(meanmax, feats)
coord_grad = t_tfop.gradient(meanmax, coords)
t0 = time.time()
for i in range(5) :
feat_grad = t_tfop.gradient(meanmax, feats)
coord_grad = t_tfop.gradient(meanmax, coords)
tf_time= (time.time() - t0)/5.
except:
tf_failed=True
return op_time, tf_time
def difference(self, nvert = 300, nfeat = 64, nneigh = 32, ncoords = 4, onlyForward=False, assert_error=True):
coords = tf.constant( np.random.rand(nvert,ncoords) ,dtype='float32')
feats = np.random.rand(nvert,nfeat)
#to make the max unambiguous
frange = np.arange(nvert)
np.random.shuffle(frange)
toadd = np.expand_dims(frange, axis=1)
feats = tf.constant(feats+toadd, dtype='float32')
row_splits = tf.constant( [0, nvert] ,dtype='int32')
#print('building indices')
with tf.device("/cpu:0"):
indices, distances = SelectKnn(K=nneigh, coords=coords, row_splits=row_splits)
#indices = indices[:,1:]
#distances = distances[:,1:]
#print('process custom op')
if self.use_distances_direct:
coords = distances
op_time = 0
tfdevstring = "/gpu:0"
if self.customoncpu:
tfdevstring = "/cpu:0"
tfdev = tf.device(tfdevstring)
t0 = time.time()
with tfdev:
t0 = time.time()
with tf.GradientTape(persistent=True,watch_accessed_variables=True) as t_newop:
t_newop.watch(coords)
t_newop.watch(feats)
meanmax = self.customimpl( coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
t1 = time.time()
op_time= t1 - t0
#print('op time',op_time)
with tfdev:
coord_grad = t_newop.gradient(meanmax, coords)
feat_grad = t_newop.gradient(meanmax, feats)
if self.debugout:
print('coords',coords,'\n')
print('feats',feats,'\n')
print('custom output',meanmax,'\n')
print('indices',indices)
### tf op implementation
print('TFTFTF')
tf_feat_grad = None
tf_coord_grad = None
#print('process TF op')
tfdevstring = "/gpu:0"
if self.tfoncpu:
tfdevstring = "/cpu:0"
tfdev = tf.device(tfdevstring)
t0 = time.time()
with tfdev:
with tf.GradientTape(persistent=True) as t_tfop:
t_tfop.watch(coords)
t_tfop.watch(feats)
tf_meanmax = self.tfimp(coords, features=feats, indices=indices, mean_and_max=self.mean_and_max)
tf_time= time.time() - t0
if self.debugout:
print('TF output',tf_meanmax,'\n')
with tfdev:
tf_feat_grad = t_tfop.gradient(tf_meanmax, feats)
tf_coord_grad = t_tfop.gradient(tf_meanmax, coords)
with tf.device("/cpu:0"):
difference = meanmax - tf_meanmax
max_rel_difference = tf.reduce_max(tf.abs(difference/(tf.abs(tf_meanmax)+1e-3))).numpy()
max_difference = tf.reduce_max(tf.abs(difference)).numpy()
#print('max rel difference',max_rel_difference)
#print('max difference',max_difference)
#print('op time',op_time)
#print('tf time',tf_time)
if assert_error:
assert max_difference < 1e-2
if onlyForward:
return
## gradients
#print('tf_feat_grad',tf_feat_grad)
#print('tf_coord_grad',tf_coord_grad)
#print('feat_grad',feat_grad)
#print('coord_grad',coord_grad)
feat_grad_diff = feat_grad - tf_feat_grad
coord_grad_diff = coord_grad - tf_coord_grad
#print('feat_grad_diff',feat_grad_diff)
#print('coord_grad_diff',coord_grad_diff)
#print('relative feat_grad_diff',feat_grad_diff/tf_feat_grad)
#print('relative coord_grad_diff',coord_grad_diff/tf_coord_grad)
maxfeatgraddiff = tf.reduce_max(tf.abs(feat_grad_diff))
maxcoordgraddiff = tf.reduce_max(tf.abs(coord_grad_diff))
rel_feat_grad_diff = (feat_grad_diff)/(tf.abs(tf_feat_grad)+1e-2)
rel_coord_grad_diff = coord_grad_diff/(tf.abs(tf_coord_grad)+1e-2)
maxrelfeatgraddiff = tf.reduce_max(tf.abs(rel_feat_grad_diff))
maxrelcoordgraddiff = tf.reduce_max(tf.abs(rel_coord_grad_diff))
#print('\nmax relative feature grad diff', maxrelfeatgraddiff)
#print('max relative coordinate grad diff', maxrelcoordgraddiff)
def check_indices():
idx_ok=True
for i in tf.range(indices.shape[0]):
y,idx,c = tf.unique_with_counts(indices[i])
if (c.numpy() > 1).any() or (indices[i].numpy() >= indices.shape[0]).any() :
idx_ok=False
print("indices not unique", indices[i])
if idx_ok:
print('indices ok')
if self.debugout:
print('custom feature grad ',feat_grad)
print('TF feature grad',tf_feat_grad)
print('difference',feat_grad_diff)
print('custom coord grad',coord_grad)
print('TF coord grad',tf_coord_grad)
print('Difference',coord_grad_diff)
if maxrelfeatgraddiff > 1e-2:
print('Feature gradient off:')
print('max rel diff',maxrelfeatgraddiff)
print('max diff',maxfeatgraddiff)
print('min,max feat', tf.reduce_min(feats), tf.reduce_max(feats))
print('min,max coords', tf.reduce_min(coords), tf.reduce_max(coords))
check_indices()
if maxrelcoordgraddiff > 1e-2:
print('Coordinate gradient off:')
print('max rel diff',maxrelcoordgraddiff)
print('max diff',maxcoordgraddiff)
print('min,max feat', tf.reduce_min(feats), tf.reduce_max(feats))
print('min,max coords', tf.reduce_min(coords), tf.reduce_max(coords))
check_indices()
if maxfeatgraddiff > 1e-2:
print('Feature gradient off:')
print('max rel diff',maxrelfeatgraddiff)
print('max diff',maxfeatgraddiff)
print('min,max feat', tf.reduce_min(feats), tf.reduce_max(feats))
print('min,max coords', tf.reduce_min(coords), tf.reduce_max(coords))
check_indices()
if maxcoordgraddiff > 1e-2:
print('Coordinate gradient off:')
print('max rel diff',maxrelcoordgraddiff)
print('max diff',maxcoordgraddiff)
print('min,max feat', tf.reduce_min(feats), tf.reduce_max(feats))
print('min,max coords', tf.reduce_min(coords), tf.reduce_max(coords))
check_indices()
if assert_error:
assert maxrelfeatgraddiff < 5e-2
assert maxrelcoordgraddiff < 5e-2
reldifference = tf.reshape(difference/(tf.abs(tf_meanmax)+1e-4),[-1])
difference = tf.reshape(difference,[-1])
rel_feat_grad_diff = tf.reshape(rel_feat_grad_diff,[-1])
rel_coord_grad_diff = tf.reshape(rel_coord_grad_diff,[-1])
feat_grad_diff = tf.reshape(feat_grad_diff,[-1])
coord_grad_diff = tf.reshape(coord_grad_diff,[-1])
return difference,reldifference,rel_feat_grad_diff,rel_coord_grad_diff,feat_grad_diff,coord_grad_diff
def run_extended_difference(self,
nvert,
nneigh,
nfeat,
addstring=""):
diff = []
reldiff = []
relcoordgraddiff = []
relfeatgraddiff = []
coordgraddiff = []
featgraddiff = []
for nv in nvert:
for nn in nneigh:
for nf in nfeat:
print('nv:',nv, 'nf:',nf, 'nn:' ,nn)
for blub in range(5): #run a few times
d,dr,fr,cr,f,c = self.difference(nv,nf,nn, ncoords = 4, onlyForward=False, assert_error=False)
#print('>>> max feat diff',tf.reduce_max(tf.abs(f)))
diff.append(d)
reldiff.append(dr)
coordgraddiff.append(c)
featgraddiff.append(f)
relcoordgraddiff.append(cr)
relfeatgraddiff.append(fr)
def conc_and_reshape(intensor):
x = tf.concat(intensor,axis=0)
x = tf.reshape(x, [-1])
return x.numpy()
diff = conc_and_reshape(diff)
reldiff = conc_and_reshape(reldiff)
coordgraddiff = conc_and_reshape(coordgraddiff)
featgraddiff = conc_and_reshape(featgraddiff)
#print('total >>> max feat diff',tf.reduce_max(tf.abs(featgraddiff)))
relcoordgraddiff = conc_and_reshape(relcoordgraddiff)
relfeatgraddiff = conc_and_reshape(relfeatgraddiff)
nbins=101
print('plotting...')
plt.close()
plt.hist(diff, bins=nbins)
plt.xlabel("Output Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"output_diff.pdf")
plt.close()
plt.hist(reldiff, bins=nbins)
plt.xlabel("Relative Output Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"rel_output_diff.pdf")
plt.close()
plt.hist(coordgraddiff, bins=nbins)
plt.xlabel("Coordinate Gradient Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"coord_grad_diff.pdf")
plt.close()
plt.hist(featgraddiff, bins=nbins)
plt.xlabel("Feature Gradient Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"feat_grad_diff.pdf")
plt.close()
plt.hist(relcoordgraddiff, bins=nbins)
plt.xlabel("Relative Coordinate Gradient Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"rel_coord_grad_diff.pdf")
plt.close()
plt.hist(relfeatgraddiff, bins=nbins)
plt.xlabel("Relative Feature Gradient Difference")
plt.yscale('log')
plt.savefig(self.name+addstring+"rel_feat_grad_diff.pdf")
plt.close()
def run_extended_benchmark(self,
nvert,
nneigh,
nfeat,
d_nvert = 10000,
d_nneigh = 100,
d_nfeat = 100,
gradient=False,
tf_thresholds = {'nvert': 55000,
'nneigh': 210,
'nfeat': 200}):
tf_times = []
op_times = []
tfx = []
for nv in nvert:
print('nvert self.benchmark, nvert:',nv, "do tf",tf_thresholds['nvert']>nv)
opt,tft = self.benchmark(nv,d_nfeat,d_nneigh,4, dogradient=gradient,do_tf=tf_thresholds['nvert']>nv)
if tft:
tf_times.append(tft)
tfx.append(nv)
op_times.append(opt)
plt.plot(nvert,op_times,color='green',label="custom",marker='o')
plt.plot(tfx,tf_times,color='orange',label="TF",marker='o')
plt.xlabel("# vertices")
plt.ylabel("time")
#plt.yscale('log')
plt.legend()
if gradient:
plt.savefig(self.name+"benchmark_grad_nvert.pdf")
else:
plt.savefig(self.name+"benchmark_nvert.pdf")
plt.close()
tf_times=[]
op_times=[]
tfx=[]
for nn in nneigh:
print('nneigh self.benchmark, nn:',nn)
opt,tft = self.benchmark(d_nvert,d_nfeat,nn,4,
dogradient=gradient,do_tf=tf_thresholds['nneigh']>nn)
if tft:
tf_times.append(tft)
tfx.append(nn)
op_times.append(opt)
plt.plot(nneigh,op_times,color='green',label="custom",marker='o')
plt.plot(tfx,tf_times,color='orange',label="TF",marker='o')
plt.xlabel("# neighbours")
plt.ylabel("time")
plt.legend()
if gradient:
plt.savefig(self.name+"benchmark_grad_nneigh.pdf")
else:
plt.savefig(self.name+"benchmark_nneigh.pdf")
plt.close()
tf_times=[]
op_times=[]
tfx=[]
for nf in nfeat:
print('nfeat self.benchmark, nfeat:',nf)
opt,tft = self.benchmark(d_nvert,nf,d_nneigh,4,
dogradient=gradient,do_tf=tf_thresholds['nfeat']>nf)
if tft:
tf_times.append(tft)
tfx.append(nf)
op_times.append(opt)
plt.plot(nfeat,op_times,color='green',label="custom",marker='o')
plt.plot(tfx,tf_times,color='orange',label="TF",marker='o')
plt.xlabel("# features")
plt.ylabel("time")
plt.legend()
if gradient:
plt.savefig(self.name+"benchmark_grad_nfeat.pdf")
else:
plt.savefig(self.name+"benchmark_nfeat.pdf")
plt.close()
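# Hypothetical driver sketch (the implementation names below are placeholders and not
# part of this file). Given a TensorFlow reference `acc_tf` and a custom op `acc_op`,
# both with signature f(coords, features=..., indices=..., mean_and_max=...):
#
#   bm = Benchmarker(acc_tf, acc_op, "accknn_", use_distances_direct=False,
#                    tfoncpu=False, customoncpu=False, mean_and_max=True)
#   bm.run_extended_difference(nvert=[200, 1000], nneigh=[16, 64], nfeat=[32, 64])
#   bm.run_extended_benchmark(nvert=[10000, 30000], nneigh=[64, 128], nfeat=[64, 128])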
| 39.504405 | 126 | 0.537998 | 2,012 | 17,935 | 4.611332 | 0.114314 | 0.032766 | 0.020479 | 0.023281 | 0.59959 | 0.530933 | 0.480599 | 0.418194 | 0.391571 | 0.368075 | 0 | 0.012612 | 0.354558 | 17,935 | 454 | 127 | 39.504405 | 0.788874 | 0.056426 | 0 | 0.475504 | 0 | 0 | 0.072997 | 0.006986 | 0 | 0 | 0 | 0 | 0.020173 | 1 | 0.023055 | false | 0 | 0.020173 | 0 | 0.063401 | 0.118156 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad3c18e2d2780aa105dfdeaa760e7372f97a91ed | 2,028 | py | Python | run_microgrid.py | asokraju/DC-microgrids | 57135741e303f139534f28f6fa526306974c6a6a | [
"MIT"
] | null | null | null | run_microgrid.py | asokraju/DC-microgrids | 57135741e303f139534f28f6fa526306974c6a6a | [
"MIT"
] | null | null | null | run_microgrid.py | asokraju/DC-microgrids | 57135741e303f139534f28f6fa526306974c6a6a | [
"MIT"
] | null | null | null | import numpy as np
from cvxopt import matrix
from cvxopt import solvers
import numpy as np
import gym
from gym import spaces
import matplotlib.pyplot as plt
# local modules
from cbf.utilities import cbf_microgrid_v0, cbf_microgrid_v1, cbf_microgrid_dist_v0, plot_signals
from models.buck_microgrid import Buck_microgrid_v0, Buck_microgrid_v1
with_L = False
inc_mat = np.array([[-1, 0, 0, -1],[1, -1, 0, 0],[0, 1, -1, 0],[0, 0, 1, 1 ]])
lap_mat = np.matmul(inc_mat, inc_mat.T)
if with_L:
env = Buck_microgrid_v1(dt = 1e-6)
else:
env = Buck_microgrid_v0(dt = 1e-6)
theta = np.array([1.,2.,3.,4.])
state = env.reset()
W= np.diag([1.,1.,1.,1.])
print(env.state)
obs = []
N_steps = 10**5
for i in range(int(6e4)):
u=[]
theta = theta + env.T*np.dot(np.matmul(lap_mat,W), state[0:4])
u_dist = np.dot(np.matmul(W, lap_mat),theta)
for node in range(4):
if i==int(1e4):
#env.G = (1.2)*env.G
print('step: ', i)
#print('input: {}'.format(cbf_2(env)))
if with_L:
u_c = cbf_microgrid_v1(env, node =node, u_dist = u_dist, dV = 1.5, eta_1= .5, eta_2=.5)
u_net = (u_c-u_dist[node])/env.Vs[node]
u.append(u_net)
#u.append(cbf_microgrid_v1(env, node =node, u_dist = u_dist, dV = 3, eta_1= .9, eta_2=.9))
else:
u_c = cbf_microgrid_dist_v0(env, node =node, u_dist = u_dist, dV = 20, eta_1= .5, eta_2=.5)
u_net = (u_c-u_dist[node])/env.Vs[node]
u.append(u_net)
#u.append(cbf_microgrid_v0(env, node =node, u_dist = u_dist, dV = 3, eta_1= .9, eta_2=.9))
#u=cbf_3(env, dV = 1, eta_1= env.R*env.T/env.L, eta_2=env.R*env.T/env.L)
#print(u)
state, r, _, _ = env.step(u)
#obs.append(s)
path = './Power-Converters/DC-microgrids/results/'
#trajectory = np.concatenate(obs).reshape((int(2e4) ,env.observation_space.shape[0]))
#plot_signals(trajectory, env.Ides, env.Vdes, dt = 1e-5, dv = 1.5)
env.plot(savefig_filename = path + 'microgrid_l.png') | 34.965517 | 104 | 0.617357 | 371 | 2,028 | 3.177898 | 0.264151 | 0.04665 | 0.03732 | 0.040712 | 0.252757 | 0.252757 | 0.2324 | 0.222222 | 0.222222 | 0.199321 | 0 | 0.053292 | 0.213511 | 2,028 | 58 | 105 | 34.965517 | 0.685893 | 0.24211 | 0 | 0.25 | 0 | 0 | 0.040602 | 0.02685 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.225 | 0 | 0.225 | 0.05 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad3d6b07df1fb9aefe564e31b1238cc0fd7b8dd2 | 1,617 | py | Python | murt/apps/scene_randomer.py | tamsri/murt | 180dbb0d09ab50dfdaa1be843475f20a86651ea2 | [
"MIT"
] | 4 | 2021-06-19T08:38:47.000Z | 2022-01-10T21:10:38.000Z | murt/apps/scene_randomer.py | tamsri/murt | 180dbb0d09ab50dfdaa1be843475f20a86651ea2 | [
"MIT"
] | 1 | 2021-05-26T12:08:01.000Z | 2021-05-26T12:08:01.000Z | murt/apps/scene_randomer.py | tamsri/murt | 180dbb0d09ab50dfdaa1be843475f20a86651ea2 | [
"MIT"
] | 2 | 2022-01-24T11:04:46.000Z | 2022-02-23T22:26:42.000Z | from murt.window import Window
from murt import Tracer
from murt.utils.generator import SceneGenerator
from pyglet.window import key
from pyglet import text
import numpy as np
class RandomRunner(Window):
def __init__(self, seed=9999):
super().__init__()
self.tracer = Tracer()
self.generator = np.random.default_rng(seed)
self.regenerate_scene()
def on_key_release(self, pressed_key, modifier):
if pressed_key == key.G:
self.regenerate_scene()
super().on_key_release(pressed_key, modifier)
def regenerate_scene(self):
scene_gen = SceneGenerator(self.generator.integers(0, 1000000))
scene_gen.generate()
self.scene = scene_gen.sceneComponents
vertices, indice = scene_gen.get_triangles()
self.tracer.load_scene(vertices, indice)
self.lines_set = []
for i in range(1):
tx = (0.0, -3.0, 0.0)
rx = (0.0, -3.0, 0.0)
while self.tracer.is_outdoor(tx) is not True:
tx = (self.generator.uniform(-150, 150),
self.generator.uniform(1.5, 2),
self.generator.uniform(-150, 150))
while self.tracer.is_outdoor(rx) is not True:
rx = (self.generator.uniform(-150, 150),
self.generator.uniform(2, 7),
self.generator.uniform(-150, 150))
results = self.tracer.trace(tx, rx)
lines = self.tracer.result_to_lines(results, tx, rx)
self.lines_set += lines
def render(self):
super().render()
| 33.6875 | 71 | 0.596166 | 203 | 1,617 | 4.596059 | 0.339901 | 0.111468 | 0.128617 | 0.098607 | 0.21865 | 0.111468 | 0.098607 | 0.098607 | 0 | 0 | 0 | 0.047244 | 0.293135 | 1,617 | 47 | 72 | 34.404255 | 0.769029 | 0 | 0 | 0.102564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.102564 | false | 0 | 0.153846 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad3e8793524690fd1cbebcc9ac16221630555697 | 21,110 | py | Python | dellemc_ansible/powermax/library/dellemc_powermax_host.py | coreywan/ansible-powermax | 7c048458f354a578d2bceae9fd652aec9d19b0ab | [
"Apache-2.0"
] | null | null | null | dellemc_ansible/powermax/library/dellemc_powermax_host.py | coreywan/ansible-powermax | 7c048458f354a578d2bceae9fd652aec9d19b0ab | [
"Apache-2.0"
] | null | null | null | dellemc_ansible/powermax/library/dellemc_powermax_host.py | coreywan/ansible-powermax | 7c048458f354a578d2bceae9fd652aec9d19b0ab | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Copyright: (c) 2019, DellEMC
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils import dellemc_ansible_utils as utils
import logging
import copy
import re
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: dellemc_powermax_host
version_added: '2.6'
short_description: Manage host (initiator group) on PowerMax/VMAX Storage
System
description:
- Managing host on PowerMax Storage System includes create host with a set of
initiators and host flags, add/remove initiators to/from host, modify host
flag values, rename host and delete host
author:
- Vasudevu Lakhinana (vasudevu.lakhinana@dell.com)
- Manisha Agrawal (manisha.agrawal@dell.com)
options:
host_name:
description:
- The name of the host. No Special Character support except for _.
Case sensitive for REST Calls.
- Creation of empty host is allowed
required: true
initiators:
description:
- List of Initiator WWN or IQN to be added to host or removed from the
host.
state:
description:
- Define whether the host should exist or not.
- present - indicates that the host should exist in the system
- absent - indicates that the host should not exist in the system
required: true
choices: [absent, present]
initiator_state:
description:
- Define whether the initiators should be present or absent in host.
- present-in-host - indicates that the initiators should exist on host
- absent-in-host - indicates that the initiators should not exist on host
- Required when creating a host with initiators or adding/removing
initiators to/from existing host
choices: [present-in-host, absent-in-host]
host_flags:
description:
- input as a yaml dictionary
- List of all host_flags-
- 1. volume_set_addressing
- 2. disable_q_reset_on_ua
- 3. environ_set
- 4. avoid_reset_broadcast
- 5. openvms
- 6. scsi_3
- 7. spc2_protocol_version
- 8. scsi_support1
- 9. consistent_lun
required: false
choices: [true, false, unset(default state)]
new_name:
description:
- The new name of the host for the renaming function. No Special Character
support except for _. Case sensitive for REST Calls
'''
EXAMPLES = r'''
- name: Create host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
initiators:
- 10000090fa7b4e85
host_flags:
spc2_protocol_version: true
consistent_lun: true
volume_set_addressing: 'unset'
disable_q_reset_on_ua: false
openvms: 'unset'
state: 'present'
initiator_state: 'present-in-host'
- name: Get host details
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
state: 'present'
- name: Adding initiator to host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
initiators:
- 10000090fa3d303e
initiator_state: 'present-in-host'
state: 'present'
- name: Removing initiator from host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
initiators:
- 10000090fa3d303e
initiator_state: 'absent-in-host'
state: 'present'
- name: Modify flags of host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
host_flags:
spc2_protocol_version: unset
consistent_lun: unset
volume_set_addressing: true
disable_q_reset_on_ua: false
openvms: false
avoid_reset_broadcast: true
state: 'present'
- name: Rename host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{host_name}}"
new_name: "{{new_host_name}}"
state: 'present'
- name: Delete host
dellemc_powermax_host:
unispherehost: "{{unispherehost}}"
universion: "{{universion}}"
verifycert: "{{verifycert}}"
user: "{{user}}"
password: "{{password}}"
serial_no: "{{serial_no}}"
host_name: "{{new_host_name}}"
state: 'absent'
'''
RETURN = r'''
'''
LOG = utils.get_logger('dellemc_powermax_host', log_devel=logging.INFO)
HAS_PYU4V = utils.has_pyu4v_sdk()
PYU4V_VERSION_CHECK = utils.pyu4v_version_check()
class PowerMaxHost(object):
'''Class with host(initiator group) operations'''
def __init__(self):
''' Define all parameters required by this module'''
self.module_params = utils.get_powermax_management_host_parameters()
self.module_params.update(self.get_powermax_host_parameters())
# initialize the ansible module
self.module = AnsibleModule(
argument_spec=self.module_params,
supports_check_mode=True
)
# result is a dictionary that contains changed status and host details
self.result = {"changed": False, "host_details": {}}
if HAS_PYU4V is False:
self.module.fail_json(msg="Ansible modules for PowerMax require "
"the PyU4V python library to be "
"installed. Please install the library "
"before using these modules.")
if PYU4V_VERSION_CHECK is not None:
self.module.fail_json(msg=PYU4V_VERSION_CHECK)
LOG.error(PYU4V_VERSION_CHECK)
self.u4v_conn = utils.get_U4V_connection(self.module.params)
self.provisioning = self.u4v_conn.provisioning
self.host_flags_list = {'volume_set_addressing', 'environ_set',
'disable_q_reset_on_ua', 'openvms',
'avoid_reset_broadcast', 'scsi_3',
'spc2_protocol_version', 'scsi_support1'}
LOG.info('Got PyU4V instance for provisioning on PowerMax ')
def get_powermax_host_parameters(self):
return dict(
host_name=dict(required=True, type='str'),
initiators=dict(required=False, type='list'),
state=dict(required=True, type='str'),
initiator_state=dict(required=False, type='str'),
host_flags=dict(required=False, type='dict'),
new_name=dict(type='str', required=False)
)
def get_host(self, host_name):
'''
Get details of a given host
'''
try:
LOG.info('Getting host {0} details'.format(host_name))
hostFromGet = self.provisioning.get_host(host_name)
if hostFromGet:
return hostFromGet
except Exception as e:
LOG.error('Got error {0} while getting details of host {1}'
.format(str(e), host_name))
return None
def _set_to_enable(self, host_flag_name, host_flag_dict):
host_flag_dict[host_flag_name.lower()] = {
'enabled': True,
'override': True
}
def _set_to_disable(self, host_flag_name, host_flag_dict):
host_flag_dict[host_flag_name.lower()] = {
'enabled': False,
'override': True
}
def _set_to_default(self, host_flag_name, host_flag_dict):
host_flag_dict[host_flag_name.lower()] = {
'enabled': False,
'override': False
}
def _disable_consistent_lun(self, host_flag_dict):
host_flag_dict['consistent_lun'] = False
def _enable_consistent_lun(self, host_flag_dict):
host_flag_dict['consistent_lun'] = True
def _create_host_flags_dict(self, received_host_flags,
new_host_flags_dict):
'''
Creating the expected payload for host_flags
'''
for host_flag_name in self.host_flags_list:
if (host_flag_name not in received_host_flags or
received_host_flags[host_flag_name] in ['unset', 'Unset']):
self._set_to_default(host_flag_name, new_host_flags_dict)
elif (received_host_flags[host_flag_name] is False or
received_host_flags[host_flag_name] in ['false', 'False']):
self._set_to_disable(host_flag_name, new_host_flags_dict)
else:
self._set_to_enable(host_flag_name, new_host_flags_dict)
if ('consistent_lun' not in received_host_flags
or received_host_flags['consistent_lun'] is False
or received_host_flags['consistent_lun'] in ['unset', 'Unset',
'false', 'False']):
self._disable_consistent_lun(new_host_flags_dict)
else:
self._enable_consistent_lun(new_host_flags_dict)
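# For example, host_flags={'scsi_3': True, 'openvms': 'unset'} is expanded to
# {'scsi_3': {'enabled': True, 'override': True},
#  'openvms': {'enabled': False, 'override': False},
#  ... (all other flags defaulted the same way), 'consistent_lun': False},
# which is the payload shape this module passes on to PyU4V's create_host/modify_host.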
def create_host(self, host_name):
'''
Create host with given initiators and host_flags
'''
initiator_state = self.module.params['initiator_state']
initiators = self.module.params['initiators']
received_host_flags = self.module.params['host_flags']
if (initiator_state == 'absent-in-host' or initiator_state is None):
initiators = None
if received_host_flags:
new_host_flags_dict = {}
self._create_host_flags_dict(received_host_flags,
new_host_flags_dict)
else:
new_host_flags_dict = None
try:
msg = ("Creating host {0} with parameters:initiators={1},"
"host_flags={2}")
LOG.info(msg.format(host_name, initiators, new_host_flags_dict))
self.provisioning.create_host(host_name, initiator_list=initiators,
host_flags=new_host_flags_dict)
return True
except Exception as e:
errorMsg = 'Create host {0} failed with error {1}'.format(
host_name, str(e))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
return None
def _get_add_initiators(self, existing, requested):
all_inits = existing + requested
add_inits = list(set(all_inits) - set(existing))
return add_inits
def _get_remove_initiators(self, existing, requested):
rem_inits = list(set(existing).intersection(set(requested)))
return rem_inits
def add_host_initiators(self, host_name, initiators):
host = self.get_host(host_name)
existing_inits = []
if host and 'initiator' in host:
existing_inits = host['initiator']
if initiators and existing_inits == initiators:
LOG.info('Initiators are already present in host {0}'
.format(host_name))
return False
add_list = self._get_add_initiators(existing_inits, initiators)
LOG.info('add_list {0}'.format(add_list))
if len(add_list) > 0:
try:
LOG.info('Adding initiators {0} to host {1}'.format(
add_list, host_name))
self.provisioning.modify_host(host_name,
add_init_list=add_list)
return True
except Exception as e:
errorMsg = (("Adding initiators {0} to host {1} failed with"
"error {2}").format(
add_list, host_name, str(e)))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
else:
LOG.info('No initiators to add to host {0}'.format(
host_name))
return False
def remove_host_initiators(self, host_name, initiators):
host = self.get_host(host_name)
existing_inits = []
if host and 'initiator' in host:
existing_inits = host['initiator']
if existing_inits is None or not len(existing_inits):
LOG.info('No initiators are present in host {0}'
.format(host_name))
return False
remove_list = self._get_remove_initiators(existing_inits, initiators)
if len(remove_list) > 0:
try:
LOG.info('Removing initiators {0} from host {1}'.format(
remove_list, host_name))
self.provisioning.modify_host(host_name,
remove_init_list=remove_list)
return True
except Exception as e:
errorMsg = (("Removing initiators {0} from host {1} failed"
"with error {2}").format(remove_list, host_name,
str(e)))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
else:
LOG.info('No initiators to remove from host {0}'.format(
host_name))
return False
def rename_host(self, host_name, new_name):
try:
self.provisioning.modify_host(host_name, new_name=new_name)
return True
except Exception as e:
errorMsg = 'Renaming of host {0} failed with error {1}'.format(
host_name, str(e))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
return None
def _create_default_host_flags_dict(self, current_flags):
for flag in self.host_flags_list:
self._set_to_default(flag, current_flags)
self._disable_consistent_lun(current_flags)
def _recreate_host_flag_dict(self, host, current_flags):
'''
Recreate current flags dictionary using output from get_host() function
'''
self._create_default_host_flags_dict(current_flags)
for flag in host['enabled_flags'].split(','):
if len(flag) > 0:
'''
Remove any extra text from information received from get_host()
to match the desired input to VMAX python SDK
'''
self._set_to_enable(
re.sub(
r'\(.*?\)',
'',
flag),
current_flags)
for flag in host['disabled_flags'].split(','):
if len(flag) > 0:
self._set_to_disable(
re.sub(
r'\(.*?\)',
'',
flag),
current_flags)
if host['consistent_lun'] is False:
self._disable_consistent_lun(current_flags)
else:
self._enable_consistent_lun(current_flags)
def modify_host_flags(self, host_name, received_host_flags):
current_flags = {}
self._recreate_host_flag_dict(self.get_host(host_name), current_flags)
new_flags_dict = copy.deepcopy(current_flags)
for flag in received_host_flags:
if flag != 'consistent_lun':
if (received_host_flags[flag] is True or
received_host_flags[flag] in ['True', 'true']):
self._set_to_enable(flag, new_flags_dict)
elif (received_host_flags[flag] is False or
received_host_flags[flag] in ['false', 'False']):
self._set_to_disable(flag, new_flags_dict)
else:
self._set_to_default(flag, new_flags_dict)
elif (received_host_flags['consistent_lun'] is False or
received_host_flags['consistent_lun'] in
['False', 'false', 'unset', 'Unset']):
self._disable_consistent_lun(new_flags_dict)
else:
self._enable_consistent_lun(new_flags_dict)
if new_flags_dict == current_flags:
LOG.info('No change detected')
self.module.exit_json(changed=False)
else:
try:
LOG.info('Modifying host flags for host {0} with {1}'
.format(host_name, new_flags_dict))
self.provisioning.modify_host(host_name,
host_flag_dict=new_flags_dict)
return True
except Exception as e:
errorMsg = 'Modify host {0} failed with error {1}'.format(
host_name, str(e))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
return None
def delete_host(self, host_name):
'''
Delete host from system
A host cannot be deleted if it is associated with a masking view.
'''
try:
self.provisioning.delete_host(host_name)
return True
except Exception as e:
errorMsg = ('Delete host {0} failed with error {1}'.format(
host_name, str(e)))
LOG.error(errorMsg)
self.module.fail_json(msg=errorMsg)
def _create_result_dict(self, changed):
self.result['changed'] = changed
if self.module.params['state'] == 'absent':
self.result['host_details'] = {}
else:
self.result['host_details'] = self.get_host(
self.module.params['host_name'])
def perform_module_operation(self):
'''
Perform different actions on host based on user parameter
chosen in playbook
'''
state = self.module.params['state']
initiator_state = self.module.params['initiator_state']
host_name = self.module.params['host_name']
initiators = self.module.params['initiators']
new_name = self.module.params['new_name']
host_flags = self.module.params['host_flags']
host = self.get_host(host_name)
changed = False
if state == 'present' and not host and host_name:
LOG.info('Creating host {0}'.format(host_name))
changed = self.create_host(host_name)
if (state == 'present' and host and initiator_state ==
'present-in-host' and initiators and len(initiators) > 0):
LOG.info('Adding initiators to host {0}'.format(host_name))
changed = (self.add_host_initiators(host_name, initiators) or
changed)
if (state == 'present' and host and initiator_state == 'absent-in-host'
and initiators and len(initiators) > 0):
LOG.info('Remove initiators from host {0}'.format(host_name))
changed = (self.remove_host_initiators(host_name, initiators)
or changed)
if state == 'present' and host and host_flags:
LOG.info('Modifying host flags of host {0} to {1}'.format(
host_name, host_flags))
changed = self.modify_host_flags(host_name, host_flags) or changed
if state == 'present' and host and new_name:
if host['hostId'] != new_name:
LOG.info('Renaming host {0} to {1}'.format(host_name,
new_name))
changed = self.rename_host(host_name, new_name)
if changed is True:
self.module.params['host_name'] = new_name
if state == 'absent' and host:
LOG.info('Delete host {0} '.format(host_name))
changed = self.delete_host(host_name) or changed
self._create_result_dict(changed)
# Update the module's final state
LOG.info('changed {0}'.format(changed))
self.module.exit_json(**self.result)
def main():
''' Create PowerMax host object and perform action on it
based on user input from playbook'''
obj = PowerMaxHost()
obj.perform_module_operation()
if __name__ == '__main__':
main()
| 36.522491 | 79 | 0.586026 | 2,388 | 21,110 | 4.923786 | 0.114322 | 0.046947 | 0.027471 | 0.014969 | 0.514883 | 0.41308 | 0.358649 | 0.29818 | 0.256421 | 0.22878 | 0 | 0.008401 | 0.317717 | 21,110 | 577 | 80 | 36.585789 | 0.807957 | 0.033823 | 0 | 0.379531 | 0 | 0 | 0.337158 | 0.02988 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046908 | false | 0.014925 | 0.010661 | 0.002132 | 0.098081 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad406b2bef3047609a069cbcc4b74978a578f8f0 | 12,859 | py | Python | gen_dets.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null | gen_dets.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null | gen_dets.py | salmank255/ROADSlowFast | e939d8f79fe3eb6f3dd32e967a34530d00f45c8e | [
"Apache-2.0"
] | null | null | null |
"""
Testing
"""
import os
import time, json
import datetime
import numpy as np
import torch
import pdb
import pickle
import copy
import torch.utils.data as data_utils
from modules.evaluation import evaluate_frames
from modules.box_utils import decode, nms
from data import custum_collate
from modules import utils
import modules.evaluation as evaluate
from modules.utils import make_joint_probs_from_marginals
logger = utils.get_logger(__name__)
def gen_dets(args, net, val_dataset):
net.eval()
val_data_loader = data_utils.DataLoader(val_dataset, int(args.TEST_BATCH_SIZE), num_workers=args.NUM_WORKERS,
shuffle=False, pin_memory=True, collate_fn=custum_collate)
for epoch in args.EVAL_EPOCHS:
args.det_itr = epoch
logger.info('Testing at ' + str(epoch))
args.det_save_dir = os.path.join(args.SAVE_ROOT, "detections-{it:02d}-{sq:02d}-{n:d}/".format(it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS)))
logger.info('detection saving dir is :: '+args.det_save_dir)
is_all_done = True
if os.path.isdir(args.det_save_dir):
for vid, videoname in enumerate(val_dataset.video_list):
save_dir = '{:s}/{}'.format(args.det_save_dir, videoname)
if os.path.isdir(save_dir):
numf = val_dataset.numf_list[vid]
dets_list = [d for d in os.listdir(save_dir) if d.endswith('.pkl')]
if numf != len(dets_list):
is_all_done = False
print('Not done', save_dir, numf, len(dets_list))
break
else:
is_all_done = False
break
else:
is_all_done = False
os.makedirs(args.det_save_dir)
if is_all_done:
print('All done! skipping detection')
continue
args.MODEL_PATH = args.SAVE_ROOT + 'model_{:06d}.pth'.format(epoch)
net.load_state_dict(torch.load(args.MODEL_PATH))
logger.info('Finished loading model %d !' % epoch )
torch.cuda.synchronize()
tt0 = time.perf_counter()
net.eval() # switch net to evaluation mode
mAP, _, ap_strs = perform_detection(args, net, val_data_loader, val_dataset, epoch)
label_types = [args.label_types[0]] + ['ego_action']
for nlt in range(len(label_types)):
for ap_str in ap_strs[nlt]:
logger.info(ap_str)
ptr_str = '\n{:s} MEANAP:::=> {:0.5f}'.format(label_types[nlt], mAP[nlt])
logger.info(ptr_str)
torch.cuda.synchronize()
logger.info('Complete set time {:0.2f}'.format(time.perf_counter() - tt0))
def perform_detection(args, net, val_data_loader, val_dataset, iteration):
"""Test a network on a video database."""
num_images = len(val_dataset)
print_time = True
val_step = 50
count = 0
torch.cuda.synchronize()
ts = time.perf_counter()
activation = torch.nn.Sigmoid().cuda()
ego_pds = []
ego_gts = []
det_boxes = []
gt_boxes_all = []
for nlt in range(1):
numc = args.num_classes_list[nlt]
det_boxes.append([[] for _ in range(numc)])
gt_boxes_all.append([])
nlt = 0
processed_videos = []
with torch.no_grad():
for val_itr, (images, gt_boxes, gt_targets, ego_labels, batch_counts, img_indexs, wh) in enumerate(val_data_loader):
torch.cuda.synchronize()
t1 = time.perf_counter()
batch_size = images.size(0)
images = images.cuda(0, non_blocking=True)
decoded_boxes, confidence, ego_preds = net(images)
ego_preds = activation(ego_preds).cpu().numpy()
ego_labels = ego_labels.numpy()
confidence = activation(confidence)
seq_len = ego_preds.shape[1]
if print_time and val_itr%val_step == 0:
torch.cuda.synchronize()
tf = time.perf_counter()
logger.info('Forward Time {:0.3f}'.format(tf-t1))
for b in range(batch_size):
index = img_indexs[b]
annot_info = val_dataset.ids[index]
video_id, frame_num, step_size = annot_info
videoname = val_dataset.video_list[video_id]
save_dir = '{:s}/{}'.format(args.det_save_dir, videoname)
store_last = False
if videoname not in processed_videos:
processed_videos.append(videoname)
store_last = True
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
count += 1
for s in range(seq_len):
if ego_labels[b,s]>-1:
ego_pds.append(ego_preds[b,s,:])
ego_gts.append(ego_labels[b,s])
gt_boxes_batch = gt_boxes[b, s, :batch_counts[b, s],:].numpy()
gt_labels_batch = gt_targets[b, s, :batch_counts[b, s]].numpy()
decoded_boxes_batch = decoded_boxes[b,s]
frame_gt = utils.get_individual_labels(gt_boxes_batch, gt_labels_batch[:,:1])
gt_boxes_all[0].append(frame_gt)
confidence_batch = confidence[b,s]
scores = confidence_batch[:, 0].squeeze().clone()
cls_dets, save_data = utils.filter_detections_for_dumping(args, scores, decoded_boxes_batch, confidence_batch)
det_boxes[0][0].append(cls_dets)
save_name = '{:s}/{:05d}.pkl'.format(save_dir, frame_num+1)
frame_num += step_size
save_data = {'ego':ego_preds[b,s,:], 'main':save_data}
if s<seq_len-args.skip_ending or store_last:
with open(save_name,'wb') as ff:
pickle.dump(save_data, ff)
if print_time and val_itr%val_step == 0:
torch.cuda.synchronize()
te = time.perf_counter()
logger.info('im_detect: {:d}/{:d} time taken {:0.3f}'.format(count, num_images, te-ts))
torch.cuda.synchronize()
ts = time.perf_counter()
if print_time and val_itr%val_step == 0:
torch.cuda.synchronize()
te = time.perf_counter()
logger.info('NMS stuff Time {:0.3f}'.format(te - tf))
mAP, ap_all, ap_strs = evaluate.evaluate(gt_boxes_all, det_boxes, args.all_classes, iou_thresh=args.IOU_THRESH)
mAP_ego, ap_all_ego, ap_strs_ego = evaluate.evaluate_ego(np.asarray(ego_gts), np.asarray(ego_pds), args.ego_classes)
return mAP + [mAP_ego], ap_all + [ap_all_ego], ap_strs + [ap_strs_ego]
def gather_framelevel_detection(args, val_dataset):
detections = {}
for l, ltype in enumerate(args.label_types):
detections[ltype] = {}
if args.DATASET == 'road':
detections['av_actions'] = {}
else:
detections['frame_actions'] = {}
numv = len(val_dataset.video_list)
for vid, videoname in enumerate(val_dataset.video_list):
vid_dir = os.path.join(args.det_save_dir, videoname)
frames_list = os.listdir(vid_dir)
for frame_name in frames_list:
if not frame_name.endswith('.pkl'):
continue
save_name = os.path.join(vid_dir, frame_name)
with open(save_name,'rb') as ff:
dets = pickle.load(ff)
frame_name = frame_name.rstrip('.pkl')
# detections[videoname+frame_name] = {}
if args.DATASET == 'road':
detections['av_actions'][videoname+frame_name] = dets['ego']
else:
detections['frame_actions'][videoname+frame_name] = dets['ego']
frame_dets = dets['main']
if args.JOINT_4M_MARGINALS:
frame_dets = make_joint_probs_from_marginals(frame_dets, val_dataset.childs, args.num_classes_list)
start_id = 4
for l, ltype in enumerate(args.label_types):
numc = args.num_classes_list[l]
ldets = get_ltype_dets(frame_dets, start_id, numc, ltype, args)
detections[ltype][videoname+frame_name] = ldets
start_id += numc
logger.info('[{}/{}] Done for {}'.format(vid, numv, videoname))
# break
logger.info('Dumping detection in ' + args.det_file_name)
with open(args.det_file_name, 'wb') as f:
pickle.dump(detections, f)
logger.info('Done dumping')
def get_ltype_dets(frame_dets, start_id, numc, ltype, args):
dets = []
for cid in range(numc):
if frame_dets.shape[0]>0:
boxes = frame_dets[:, :4].copy()
scores = frame_dets[:, start_id+cid].copy()
pickn = boxes.shape[0]
if args.CLASSWISE_NMS:
cls_dets = utils.filter_detections(args, torch.from_numpy(scores), torch.from_numpy(boxes))
elif pickn<= args.TOPK+15:
cls_dets = np.hstack((boxes[:pickn,:], scores[:pickn, np.newaxis]))
if not args.JOINT_4M_MARGINALS:
cls_dets = cls_dets[scores>args.CONF_THRESH,:]
else:
sorted_ind = np.argsort(-scores)
sorted_ind = sorted_ind[:args.TOPK+15]
cls_dets = np.hstack((boxes[sorted_ind,:], scores[sorted_ind, np.newaxis]))
scores = scores[sorted_ind]
if not args.JOINT_4M_MARGINALS:
cls_dets = cls_dets[scores>args.CONF_THRESH,:]
else:
cls_dets = np.asarray([])
dets.append(cls_dets)
return dets
def eval_framewise_dets(args, val_dataset):
for epoch in args.EVAL_EPOCHS:
log_file = open("{pt:s}/frame-level-resutls-{it:06d}-{sq:02d}-{n:d}.log".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS)), "a", 10)
args.det_save_dir = os.path.join(args.SAVE_ROOT, "detections-{it:02d}-{sq:02d}-{n:d}/".format(it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS)))
args.det_file_name = "{pt:s}/frame-level-dets-{it:02d}-{sq:02d}-{n:d}.pkl".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS))
result_file = "{pt:s}/frame-ap-results-{it:02d}-{sq:02d}-{n:d}.json".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN,n=int(100*args.GEN_NMS))
if args.JOINT_4M_MARGINALS:
log_file = open("{pt:s}/frame-level-resutls-{it:06d}-{sq:02d}-{n:d}-j4m.log".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS)), "a", 10)
args.det_file_name = "{pt:s}/frame-level-dets-{it:02d}-{sq:02d}-{n:d}-j4m.pkl".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN, n=int(100*args.GEN_NMS))
result_file = "{pt:s}/frame-ap-results-{it:02d}-{sq:02d}-{n:d}-j4m.json".format(pt=args.SAVE_ROOT, it=epoch, sq=args.TEST_SEQ_LEN,n=int(100*args.GEN_NMS))
doeval = False
if not os.path.isfile(args.det_file_name):
logger.info('Gathering detection for ' + args.det_file_name)
gather_framelevel_detection(args, val_dataset)
logger.info('Done Gathering detections')
doeval = True
else:
logger.info('Detection will be loaded: ' + args.det_file_name)
if args.DATASET == 'road':
label_types = args.label_types + ['av_actions']
else:
label_types = args.label_types + ['frame_actions']
if doeval or not os.path.isfile(result_file):
results = {}
for subset in args.SUBSETS:
if len(subset)<2:
continue
sresults = evaluate_frames(val_dataset.anno_file, args.det_file_name, subset, iou_thresh=0.5, dataset=args.DATASET)
for _, label_type in enumerate(label_types):
name = subset + ' & ' + label_type
rstr = '\n\nResults for ' + name + '\n'
logger.info(rstr)
log_file.write(rstr+'\n')
results[name] = {'mAP': sresults[label_type]['mAP'], 'APs': sresults[label_type]['ap_all']}
for ap_str in sresults[label_type]['ap_strs']:
logger.info(ap_str)
log_file.write(ap_str+'\n')
with open(result_file, 'w') as f:
json.dump(results, f)
| 43.442568 | 181 | 0.572984 | 1,682 | 12,859 | 4.137931 | 0.159334 | 0.017098 | 0.015517 | 0.016092 | 0.348276 | 0.295546 | 0.262644 | 0.235057 | 0.216667 | 0.179885 | 0 | 0.013131 | 0.3071 | 12,859 | 296 | 182 | 43.442568 | 0.768013 | 0.009876 | 0 | 0.226891 | 0 | 0.02521 | 0.076838 | 0.031144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021008 | false | 0 | 0.063025 | 0 | 0.092437 | 0.02521 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad40ac05019ddf66c3581ba2b2cc4d8b50f58f4c | 1,546 | py | Python | preprocessing.py | dahouda2pro/deep-learned-embedding | a4428cf99eae86691286ec18a0656e632fbc4600 | [
"CC0-1.0"
] | null | null | null | preprocessing.py | dahouda2pro/deep-learned-embedding | a4428cf99eae86691286ec18a0656e632fbc4600 | [
"CC0-1.0"
] | null | null | null | preprocessing.py | dahouda2pro/deep-learned-embedding | a4428cf99eae86691286ec18a0656e632fbc4600 | [
"CC0-1.0"
] | 1 | 2021-12-21T05:27:19.000Z | 2021-12-21T05:27:19.000Z | from nltk.util import pr
import pandas as pd
import corpus_creation
import nltk
import re
from sklearn.model_selection import train_test_split
stopwords = nltk.corpus.stopwords.words('english')
# Create a function to Tokenize all the Categorical Data
def tokenize_cat_var(catvar):
tokens = re.split(r'\W+', catvar)
#tokens = re.split('[,.]', catvar)
return tokens
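# Illustrative example (hypothetical row):
# tokenize_cat_var("state-gov bachelors never-married")
# returns ['state', 'gov', 'bachelors', 'never', 'married']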
corpus_creation.corpus['cat_var_tokenized'] = corpus_creation.corpus['cat_var'].apply(
lambda x: tokenize_cat_var(x.lower()))
print(corpus_creation.corpus.head(5))
# Split the Data subset into train and test set
X_train, X_test, y_train, y_test = train_test_split(
corpus_creation.corpus['cat_var_tokenized'], corpus_creation.corpus['income'], test_size=0.2)
# Let's save the training and test sets to ensure we are using the same data for each model
X_train.to_csv("./Data/X_train2.csv", index=False, header=True)
X_test.to_csv("./Data/X_test2.csv", index=False, header=True)
y_train.to_csv("./Data/y_train2.csv", index=False, header=True)
y_test.to_csv("./Data/y_test2.csv", index=False, header=True)
# Let's see our Tokenized Categorical Variable
print("****Let's see our Tokenized Categorical Variable****")
X_train = pd.read_csv("Data/X_train2.csv")
X_test = pd.read_csv("Data/X_test2.csv")
y_train = pd.read_csv("Data/y_train2.csv")
y_test = pd.read_csv("Data/y_test2.csv")
print(X_train.head(5))
print(y_train.head(5))
print("Shape of Xtrain:", X_train.shape)
print("Shape of Xtest:", X_test.shape)
| 33.608696 | 98 | 0.728978 | 256 | 1,546 | 4.195313 | 0.3125 | 0.052142 | 0.09311 | 0.070764 | 0.417132 | 0.281192 | 0.173184 | 0.102421 | 0.102421 | 0 | 0 | 0.009745 | 0.137128 | 1,546 | 45 | 99 | 34.355556 | 0.795352 | 0.173351 | 0 | 0 | 0 | 0 | 0.228199 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0 | 0.214286 | 0 | 0.285714 | 0.214286 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad40eaf6a444efc4ae81c61e82a03d9520874228 | 916 | py | Python | utils/dir_purger.py | hdm-dt-fb/rvt_model_services | af911b66add106b4135bfbe2b0f3af2547a2c127 | [
"MIT"
] | 28 | 2017-06-08T11:58:59.000Z | 2020-08-12T15:02:01.000Z | utils/dir_purger.py | hdm-dt-fb/rvt_model_services | af911b66add106b4135bfbe2b0f3af2547a2c127 | [
"MIT"
] | 13 | 2017-07-31T07:36:15.000Z | 2020-04-17T15:19:47.000Z | utils/dir_purger.py | hdm-dt-fb/rvt_model_services | af911b66add106b4135bfbe2b0f3af2547a2c127 | [
"MIT"
] | 13 | 2017-06-22T17:47:36.000Z | 2020-01-03T20:58:50.000Z | """
Little helper to purge files in a directory,
that are older than certain threshold.
"""
from pathlib import Path
import time
import colorful as col
def purge_old(directory: Path, extension: str, threshold_age_days=60):
"""
deletes all files with specified extension older than the threshold_age_days
:param directory: path to search
:param extension: file extension to filter by
:param threshold_age_days: maximum file age in days, based on the last-modified time
:return:
"""
found = 0
now = time.time()
for node in directory.iterdir():
if node.suffix == f".{extension}":
file_modified = node.stat().st_mtime
if (now - file_modified) // (24 * 3600) >= threshold_age_days:
node.unlink()
found += 1
if found > 0:
print(col.bold_green(f" cleaned-up {found} {extension} files older than: {threshold_age_days} in {directory}"))
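# Minimal usage sketch (hypothetical directory and extension):
if __name__ == "__main__":
    journal_dir = Path("./journals")
    if journal_dir.is_dir():
        purge_old(journal_dir, "slog", threshold_age_days=30)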
| 31.586207 | 119 | 0.655022 | 123 | 916 | 4.756098 | 0.520325 | 0.102564 | 0.136752 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015988 | 0.248908 | 916 | 28 | 120 | 32.714286 | 0.834302 | 0.329694 | 0 | 0 | 0 | 0 | 0.168696 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.285714 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad425bd94132d309dca8cba00ee2d06e0d818915 | 826 | py | Python | src/23 - Corner detection/03-FAST_Algorithm_for_Corner_Detection.py | hritik5102/Awesome-Computer-Vision-Guide | 005cd96f6d6c7dacdf1b9b5f5bf56cae3d6cea18 | [
"MIT"
] | 26 | 2020-07-02T20:41:46.000Z | 2022-01-04T09:51:07.000Z | src/23 - Corner detection/03-FAST_Algorithm_for_Corner_Detection.py | hritik5102/Awesome-Computer-Vision-Guide | 005cd96f6d6c7dacdf1b9b5f5bf56cae3d6cea18 | [
"MIT"
] | 2 | 2020-11-07T10:46:25.000Z | 2022-03-12T00:54:27.000Z | src/23 - Corner detection/03-FAST_Algorithm_for_Corner_Detection.py | hritik5102/Awesome-Computer-Vision-Guide | 005cd96f6d6c7dacdf1b9b5f5bf56cae3d6cea18 | [
"MIT"
] | 16 | 2020-10-12T08:38:25.000Z | 2021-12-17T08:16:46.000Z | import numpy as np
import cv2
from matplotlib import pyplot as plt
'''
Implementation of FAST (Features from Accelerated Segment Test) Algorithm for Corner Detection
'''
# Read RGB image
img = cv2.imread('../Images and Videos/Building.jpg')
cv2.imshow('original image',img)
# Initiate FAST object with default values
fast = cv2.FastFeatureDetector_create(50)  # 50: intensity threshold for the FAST corner test
# find and draw the keypoints
kp = fast.detect(img,None)
img2 = cv2.drawKeypoints(img, kp, None,flags=0)
# NMS -> Non max suppression
cv2.imshow('FAST With NMS',img2)
# Print all default params
print("Threshold: ", fast.getThreshold())
print("nonmaxSuppression: ", fast.getNonmaxSuppression())
print("neighborhood: ", fast.getType())
print("Total Keypoints with nonmaxSuppression: ", len(kp))
cv2.waitKey(0)
cv2.destroyAllWindows()
| 26.645161 | 94 | 0.751816 | 111 | 826 | 5.585586 | 0.630631 | 0.025806 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.02213 | 0.124697 | 826 | 30 | 95 | 27.533333 | 0.835408 | 0.190073 | 0 | 0 | 0 | 0 | 0.257603 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad44763afbf8aff9139336aaf9dcfb7f17640dbf | 15,653 | py | Python | mindspore/python/mindspore/dataset/engine/obs/util.py | httpsgithu/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | 1 | 2022-02-23T09:13:43.000Z | 2022-02-23T09:13:43.000Z | mindspore/python/mindspore/dataset/engine/obs/util.py | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | mindspore/python/mindspore/dataset/engine/obs/util.py | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | # Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
This dataset module provides internal utility function for OBSMindDataset API.
"""
import fcntl
import os
import shutil
import sys
import sqlite3
import time
from functools import wraps
from obs import ObsClient
from mindspore import log as logger
from .config_loader import config
from ..datasets import Shuffle
from ..samplers import RandomSampler, SequentialSampler, SubsetSampler, SubsetRandomSampler
obsClient = ObsClient(
access_key_id=config.AK,
secret_access_key=config.SK,
server=config.SERVER
)
def get_used_disk_per():
""" Get the disk usage of working directory."""
if not os.path.exists(config.WORKING_PATH):
try:
os.makedirs(config.WORKING_PATH)
except FileExistsError:
pass
total, used, _ = shutil.disk_usage(config.WORKING_PATH)
return used / total
def try_load_from_obs(remote_path, dataset_file, local_path):
"""
    Download all dataset files from OBS, skipping any that already exist locally.
Args:
remote_path (str): OBS path of dataset files.
dataset_file (str): Name of dataset file.
local_path (str): Local path of dataset files.
"""
if not os.path.exists(os.path.join(local_path, dataset_file)):
_download_file(remote_path, dataset_file, local_path, lock_file=dataset_file)
meta_file = dataset_file + '.db'
if not os.path.exists(os.path.join(local_path, meta_file)):
_download_file(remote_path, meta_file, local_path, lock_file=meta_file)
def detect_all_meta_files(meta_files, local_path):
"""
    Check that all meta files exist locally.
Args:
meta_files (List[str]): Names of meta files.
local_path (str): Local path of dataset files.
"""
all_meta_files = True
for f in meta_files:
dataset_file = os.path.basename(f)
meta_file = dataset_file + '.db'
if _detect_file_exist(local_path, meta_file, lock_file=meta_file) is False:
all_meta_files = False
break
return all_meta_files
def make_sampler(shuffle, is_full_dataset, start, end):
"""
Generate a proper sampler based on inputs.
Args:
        shuffle (Union[bool, Shuffle level]): Shuffle level.
is_full_dataset (bool): Whether to include full dataset file.
start (int): Start index of sample for non-full dataset file.
end (int): End index of sample for non-full dataset file.
"""
sampler = None
if shuffle in (Shuffle.GLOBAL, Shuffle.INFILE):
if is_full_dataset:
sampler = RandomSampler()
else:
sampler = SubsetRandomSampler(list(range(start, end)))
else:
if is_full_dataset:
sampler = SequentialSampler()
else:
sampler = SubsetSampler(list(range(start, end)))
return sampler
def make_shard_samples(dataset_file_size_list, size_per_shard, shard_id):
"""
Make sharding files when shard_equal_rows equal to True.
Args:
dataset_file_size_list (List[tuple]): List of dataset file name and size.
size_per_shard (int): Size of each sharding.
shard_id (int): ID of sharding.
"""
pre_cnt = 0
shard_files = []
finish = False
while finish is False:
for f, dataset_size in dataset_file_size_list:
start_idx = shard_id * size_per_shard
end_idx = (shard_id + 1) * size_per_shard
push = False
is_full_dataset = False
if pre_cnt <= start_idx < pre_cnt + dataset_size:
start = start_idx - pre_cnt
push = True
if pre_cnt < end_idx <= pre_cnt + dataset_size:
end = end_idx - pre_cnt
else:
end = dataset_size
if start_idx <= pre_cnt < end_idx:
start = 0
push = True
if pre_cnt + dataset_size >= end_idx:
end = end_idx - pre_cnt
else:
end = dataset_size
if push:
if start == 0 and end == dataset_size:
is_full_dataset = True
shard_files.append((f, start, end, is_full_dataset))
pre_cnt += dataset_size
if pre_cnt >= (shard_id + 1) * size_per_shard:
finish = True
return shard_files
def make_dataset_tuple(dataset_files, local_path):
"""
Calculates the total size of the dataset and the size of each dataset file.
Args:
dataset_files (List[str]): Full paths of dataset files.
local_path (str): Local directory path of dataset files.
"""
dataset_file_size_list = []
dataset_size = 0
for dataset_file in dataset_files:
meta_file = os.path.basename(dataset_file) + '.db'
path = os.path.join(local_path, meta_file)
try:
conn = sqlite3.connect(path)
c = conn.cursor()
cursor = c.execute("SELECT COUNT(*) FROM INDEXES")
for row in cursor:
dataset_size += row[0]
dataset_file_size_list.append((dataset_file, row[0]))
conn.close()
except Exception as e:
raise RuntimeError(
"Failed to get dataset size from metadata, err: " + str(e))
return dataset_size, dataset_file_size_list
def fetch_meta_files(meta_files, local_path):
"""
    Download all meta files from OBS, skipping any that already exist locally.
Args:
meta_files (List[str]): Full paths of meta files.
local_path (str): Local directory path of dataset files.
"""
for df in meta_files:
dataset_file = os.path.basename(df)
meta_file = dataset_file + '.db'
remote_path = os.path.dirname(df)
_download_file(remote_path, meta_file, local_path, lock_file=meta_file)
def make_shard_files(dataset_files, num_shards, shard_id):
"""
Make sharding files when shard_equal_rows equal to False.
Args:
dataset_files (List[str]): Names of dataset files.
num_shards (int): Number of all sharding.
        shard_id (int): ID of the shard.
"""
idx = 0
shard_files = []
for dataset_file in dataset_files:
if idx % num_shards == shard_id:
shard_files.append((dataset_file, -1, -1, True))
idx += 1
return shard_files
def get_bucket_and_key(obs_path):
r"""
Split OBS path to bucket name and object key.
Args:
obs_path (str): OBS path that starts with s3://.
Returns:
bucketName and objectKey.
"""
start = obs_path.find('//')
end = obs_path.find('/', start + 2)
if end == -1:
return obs_path[start + 2:], ""
return obs_path[start + 2:end], obs_path[end + 1:]
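# Illustrative examples (an added note, not part of the original module):
#   get_bucket_and_key("s3://my-bucket/data/train.mindrecord")
#       -> ("my-bucket", "data/train.mindrecord")
#   get_bucket_and_key("s3://my-bucket")
#       -> ("my-bucket", "")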
def exclusive_lock(func):
""" Decorator that execute func under exclusive lock. """
@wraps(func)
def wrapped_func(*args, **kwargs):
try:
lock_file = os.path.join('/tmp/', '{}.lock'.format(kwargs['lock_file']))
except KeyError:
raise RuntimeError("Lock file can not found in function {}.".format(func_name))
with open(lock_file, 'w') as fd:
retry_cnt = 0
success = False
while True:
if success:
break
try:
if retry_cnt > config.MAX_RETRY:
raise RuntimeError("Function {} retries times {} has exceeded threshold {}.".format(
func_name, retry_cnt, config.MAX_RETRY))
fcntl.flock(fd, fcntl.LOCK_EX)
success = True
result = func(*args, **kwargs)
except RuntimeError as e:
raise e
except Exception as e: # pylint: disable=W0703
retry_cnt += 1
import traceback
logger.error(traceback.format_exc())
time.sleep(config.RETRY_DELTA_TIME)
finally:
fcntl.flock(fd, fcntl.LOCK_UN)
return result
return wrapped_func
def retry_execute(func):
""" Decorator that retry on unexpected errors. """
func_name = func.__name__
@wraps(func)
def wrapper(*args, **kwargs):
retry_cnt = 0
success = False
while True:
if success:
break
try:
if retry_cnt >= config.MAX_RETRY:
err_msg = "Function {} has retried for {} times, please check error above.".format(
func_name, retry_cnt)
logger.error(err_msg)
raise RuntimeError(err_msg)
result = func(*args, **kwargs)
success = True
except RuntimeError as e:
raise e
except Exception: # pylint: disable=W0703
retry_cnt += 1
import traceback
logger.error(traceback.format_exc())
time.sleep(config.RETRY_DELTA_TIME)
return result
return wrapper
@retry_execute
def _check_file_exists_in_obs(obs_path):
"""
Detect that file exists in OBS.
Args:
obs_path (str): OBS path of dataset file.
"""
bucket_name, object_key = get_bucket_and_key(obs_path)
try:
resp = obsClient.getObjectMetadata(bucket_name, object_key)
except ConnectionRefusedError:
err_msg = "Failed to connect to OBS, please check OBS sever {}:{}.".format(obsClient.server, obsClient.port)
logger.error(err_msg)
raise RuntimeError(err_msg)
if resp.status < 300:
logger.debug("[{} FUNCTION] OBS requestId: {}.".format(
sys._getframe(), resp.requestId)) # pylint: disable=W0212
elif resp.status == 403:
err_msg = "OBS access is Forbidden, please check AK or SK."
logger.error(err_msg)
raise RuntimeError(err_msg)
else:
err_msg = "File {} not found in OBS, please check again.".format(obs_path)
logger.error(err_msg)
raise RuntimeError(err_msg)
@retry_execute
def _file_download_from_obs(obs_path, local_path):
"""
Download file from OBS.
Args:
obs_path (str): OBS path of dataset file.
local_path (str): Local path of dataset file.
"""
bucket_name, object_key = get_bucket_and_key(obs_path)
downloadFile = local_path
taskNum = config.TASK_NUM
partSize = config.PART_SIZE
enableCheckpoint = True
resp = obsClient.downloadFile(
bucket_name, object_key, downloadFile, partSize, taskNum, enableCheckpoint)
if resp.status < 300:
logger.debug("[{} FUNCTION] OBS requestId: {}.".format(
sys._getframe(), resp.requestId)) # pylint: disable=W0212
else:
raise Exception("OBS SDK errorCode:{}, errMsg: {}.".format(
resp.errorCode, resp.errorMessage))
@exclusive_lock
def _download_file(remote_path, object_name, des_path, lock_file='tmp'):
"""
Download file from OBS exclusively.
Args:
remote_path (str): OBS directory path which dataset file is stored.
object_name (str): Name of dataset file.
des_path (str): Local directory path which dataset file is stored.
lock_file (str): File name to lock.
"""
local_path = os.path.join(des_path, object_name)
if os.path.exists(local_path):
return
if not os.path.exists(des_path):
os.makedirs(des_path)
obs_path = os.path.join(remote_path, object_name)
_check_file_exists_in_obs(obs_path)
_file_download_from_obs(obs_path, local_path)
@exclusive_lock
def init_cache_and_queue(cache, q, path, shard_file, idx, is_full_dataset, lock_file='tmp'):
"""
Initialize cache and queue according to the status of local dataset files.
Args:
cache (Dict[str, tuple]): Dict that indicate the status of local dataset files.
q (Queue): Queue that pass dataset file to be download to thread.
path (str): Local path of dataset file.
shard_file (str): Full path of dataset file.
idx (int): Index of dataset file.
is_full_dataset (bool): Whether to include full dataset file.
lock_file (str): File name to lock.
"""
dataset_file = os.path.basename(shard_file)
if os.path.exists(path): # found in local
logger.info("[{} FUNCTION] Push dataset file {} to cache.".format(
sys._getframe(), dataset_file)) # pylint: disable=W0212
cache[dataset_file] = (idx, not is_full_dataset)
else:
logger.info("[{} FUNCTION] Push dataset file {} to downloading queue.".format(
sys._getframe(), dataset_file)) # pylint: disable=W0212
cache[dataset_file] = (-1, not is_full_dataset)
q.put((idx, shard_file))
@exclusive_lock
def _detect_file_exist(local_path, meta_file, lock_file='tmp'):
"""
Detect that local meta file exists or not.
Args:
local_path (str): Local directory path of meta file.
meta_file (str): Name of meta file.
lock_file (str): File name to lock.
"""
if os.path.exists(os.path.join(local_path, meta_file)):
return True
return False
@retry_execute
def file_upload_to_obs(obs_path, sync_dir, ready_file_name):
"""
Upload sync file to OBS.
Args:
obs_path (str): OBS path of dataset file.
        sync_dir (str): OBS directory path used for synchronization.
ready_file_name (str): Name of synchronization file.
"""
bucket_name, object_key = get_bucket_and_key(obs_path)
if not object_key:
resp = obsClient.headBucket(bucket_name)
else:
if not object_key.endswith("/"):
object_key += "/"
resp = obsClient.getObjectMetadata(bucket_name, object_key)
if resp.status < 300:
logger.debug("[{} FUNCTION] OBS requestId: {}.".format(
sys._getframe(), resp.requestId)) # pylint: disable=W0212
else:
raise RuntimeError("Directory/Bucket used for synchronization {} is not found in OBS, " \
"please create it on OBS first.".format(obs_path))
remote_dir = os.path.join(object_key, sync_dir)
resp = obsClient.putContent(bucket_name, remote_dir, content=None)
if resp.status < 300:
logger.debug("[{} FUNCTION] OBS requestId: {}.".format(
sys._getframe(), resp.requestId)) # pylint: disable=W0212
else:
raise Exception("OBS SDK errorCode:{}, errMsg: {}.".format(
resp.errorCode, resp.errorMessage))
resp = obsClient.putContent(bucket_name, os.path.join(
remote_dir, ready_file_name), content='OK')
if resp.status < 300:
logger.debug("[{} FUNCTION] OBS requestId: {}.".format(
sys._getframe(), resp.requestId)) # pylint: disable=W0212
else:
raise Exception("OBS SDK errorCode:{}, errMsg: {}.".format(
resp.errorCode, resp.errorMessage))
| 33.093023 | 116 | 0.618923 | 1,990 | 15,653 | 4.665829 | 0.160804 | 0.054497 | 0.015401 | 0.010985 | 0.457835 | 0.39203 | 0.343888 | 0.298008 | 0.232526 | 0.22391 | 0 | 0.007766 | 0.284291 | 15,653 | 472 | 117 | 33.163136 | 0.82103 | 0.257331 | 0 | 0.408759 | 0 | 0 | 0.079339 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.069343 | false | 0.00365 | 0.051095 | 0 | 0.175182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad45806b39366bce1b9d1f32af5ee69cebe62628 | 653 | py | Python | api/__init__.py | mczo/Remote-Control-Car-Server | bd6ac3480e7305e9b2091144c144e4fccf8b0832 | [
"MIT"
] | null | null | null | api/__init__.py | mczo/Remote-Control-Car-Server | bd6ac3480e7305e9b2091144c144e4fccf8b0832 | [
"MIT"
] | null | null | null | api/__init__.py | mczo/Remote-Control-Car-Server | bd6ac3480e7305e9b2091144c144e4fccf8b0832 | [
"MIT"
] | null | null | null | import socket
import config
from . import run
from . import turn
def listenRouter():
sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sk.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ip_port = ('0.0.0.0', config.SERVER_PORT)
sk.bind(ip_port)
try:
while True:
req, addr = sk.recvfrom(1024)
if req == b"Ping":
sk.sendto("Pong".encode(), addr)
continue
run.load(req)
turn.load(req)
except KeyboardInterrupt:
print(KeyboardInterrupt)
except ValueError:
print(ValueError)
finally:
        sk.close()
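

# Illustrative usage note (an addition, not part of the original package):
# listenRouter() blocks on the UDP socket, so it is typically started from the
# application's entry point, e.g.:
#     from api import listenRouter
#     listenRouter()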
| 23.321429 | 60 | 0.591118 | 78 | 653 | 4.858974 | 0.576923 | 0.094987 | 0.015831 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01978 | 0.303216 | 653 | 27 | 61 | 24.185185 | 0.813187 | 0 | 0 | 0 | 0 | 0 | 0.022971 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.217391 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad4c1f5383d3f58c65cfdc9ed2956cdcc845c195 | 4,575 | py | Python | src/services/wishlist_service.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | src/services/wishlist_service.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | src/services/wishlist_service.py | jsmuniz7/wishlist | 9ba4d3d8877e1838588aaf80bdd7a1ec7dd0400d | [
"MIT"
] | null | null | null | import infrastructure.products_api_http_client as http_client
from database.repositories.wishlist_repository import *
from database.repositories.wishlist_item_repository import create as create_item, exists_wishlist_product, get_paginated_wishlist_items, get_wishlist_item, delete_item
from database.repositories.customer_repository import get_by_id as get_customer_by_id
from database.models import Wishlist, WishlistItem
from api.response import Response
from api.response import ResponseMessages
from tools.http_status_code import HttpStatusCode
from tools.uuid_validator import is_valid_uuid
def create_wishlist(data):
customer_id = data.get('customer_id')
customer = get_customer_by_id(customer_id)
validationResult = __validate_wishlist_request(customer)
if not validationResult[0]:
return validationResult[1]
wishlist = Wishlist(customer)
return Response(
HttpStatusCode.CREATED.value,
ResponseMessages.SUCCESS.value,
create(wishlist),
None)
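
# Illustrative note (an added comment, not part of the original module): a
# request handler would typically pass the parsed JSON body straight through,
# e.g.
#     response = create_wishlist({'customer_id': some_customer_id})
# and serialize the returned Response object for the API layer; the name
# 'some_customer_id' here is a placeholder assumption.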
def get_wishlist_by_customer_id(customer_id):
return Response(
HttpStatusCode.OK.value,
ResponseMessages.SUCCESS.value,
get_wishlist_by_customer(customer_id),
None)
def add_wishlist_item(wishlist_id, data):
wishlist = get_by_id(wishlist_id)
product_id = data.get('product_id')
validationResult = __validate_wishlist_item_request(wishlist, product_id)
if not validationResult[0]:
return validationResult[1]
wishlist_item = WishlistItem(product_id, wishlist)
return Response(
HttpStatusCode.CREATED.value,
ResponseMessages.SUCCESS.value,
create_item(wishlist_item),
None)
def __validate_wishlist_request(customer):
error_message = None
if customer is None:
error_message = 'Invalid Customer Id'
elif exists_wishlist_for_customer(customer.id):
error_message = 'Customer already has a wishlist'
return __build_validation_response(error_message)
def __validate_wishlist_item_request(wishlist, product_id):
error_message = None
if wishlist is None:
error_message = 'Wishlist not found'
elif not is_valid_uuid(product_id):
error_message = 'Invalid Product Id'
elif exists_wishlist_product(wishlist.id, product_id):
error_message = 'Product already exists in the wishlist'
elif not http_client.exists_product(product_id):
error_message = 'Product not found'
return __build_validation_response(error_message)
def __build_validation_response(error_message):
if error_message is not None:
return (False, Response(
HttpStatusCode.BAD_REQUEST.value,
ResponseMessages.ERROR.value,
None,
error_message))
else:
return (True, None)
def get_wishlist_items(wishlist_id, page, per_page):
wishlist = get_by_id(wishlist_id)
if wishlist is None:
return Response(
HttpStatusCode.BAD_REQUEST.value,
ResponseMessages.ERROR.value,
None,
'Wishlist not found')
products = []
items_page = get_paginated_wishlist_items(wishlist_id, page, per_page)
for item in items_page.items:
products.append(http_client.get_product(item.product_id))
items_page.items = products
return Response(
HttpStatusCode.OK.value,
ResponseMessages.SUCCESS.value,
items_page,
None)
def delete_wishlist(wishlist_id):
wishlist = get_by_id(wishlist_id)
if wishlist is None:
return Response(
HttpStatusCode.BAD_REQUEST.value,
ResponseMessages.ERROR.value,
None,
'Wishlist not found')
delete(wishlist)
return Response(
HttpStatusCode.OK.value,
ResponseMessages.SUCCESS.value,
None,
None)
def delete_wishlist_item(wishlist_id, product_id):
wishlist = get_by_id(wishlist_id)
if wishlist is None:
return Response(
HttpStatusCode.BAD_REQUEST.value,
ResponseMessages.ERROR.value,
None,
'Wishlist not found')
wishlist_item = get_wishlist_item(wishlist_id, product_id)
if wishlist_item is None:
return Response(
HttpStatusCode.BAD_REQUEST.value,
ResponseMessages.ERROR.value,
None,
'Product not found in wishlist')
delete_item(wishlist_item)
return Response(
HttpStatusCode.OK.value,
ResponseMessages.SUCCESS.value,
None,
None)
| 27.39521 | 167 | 0.702077 | 523 | 4,575 | 5.843212 | 0.135755 | 0.051047 | 0.091623 | 0.064791 | 0.484948 | 0.45517 | 0.427356 | 0.347513 | 0.271597 | 0.201243 | 0 | 0.001141 | 0.233443 | 4,575 | 166 | 168 | 27.560241 | 0.870259 | 0 | 0 | 0.537815 | 0 | 0 | 0.053552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.07563 | false | 0 | 0.07563 | 0.008403 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad4dafe85cb9b43648e3d3676fe730059aff2bdf | 11,376 | py | Python | y/google-cloud-sdk/lib/googlecloudsdk/core/metrics.py | ychen820/microblog | d379afa2db3582d5c3be652165f0e9e2e0c154c6 | [
"BSD-3-Clause"
] | null | null | null | y/google-cloud-sdk/lib/googlecloudsdk/core/metrics.py | ychen820/microblog | d379afa2db3582d5c3be652165f0e9e2e0c154c6 | [
"BSD-3-Clause"
] | null | null | null | y/google-cloud-sdk/lib/googlecloudsdk/core/metrics.py | ychen820/microblog | d379afa2db3582d5c3be652165f0e9e2e0c154c6 | [
"BSD-3-Clause"
] | 2 | 2020-07-25T05:03:06.000Z | 2020-11-04T04:55:57.000Z | # Copyright 2013 Google Inc. All Rights Reserved.
"""Used to collect anonymous SDK metrics."""
import atexit
import hashlib
import os
import pickle
import socket
import subprocess
import tempfile
import time
import urllib
import uuid
from googlecloudsdk.core import config
from googlecloudsdk.core import log
from googlecloudsdk.core import properties
from googlecloudsdk.core.util import execution_utils
from googlecloudsdk.core.util import files
from googlecloudsdk.core.util import platforms
_GA_ENDPOINT = 'https://ssl.google-analytics.com/collect'
_GA_TID = 'UA-36037335-2'
_GA_INSTALLS_CATEGORY = 'Installs'
_GA_COMMANDS_CATEGORY = 'Commands'
_GA_HELP_CATEGORY = 'Help'
_GA_ERROR_CATEGORY = 'Error'
_GA_EXECUTIONS_CATEGORY = 'Executions'
_CSI_ENDPOINT = 'https://csi.gstatic.com/csi'
_CSI_ID = 'cloud_sdk'
_CSI_LOAD_EVENT = 'load'
_CSI_RUN_EVENT = 'run'
_CSI_TOTAL_EVENT = 'total'
class _GAEvent(object):
def __init__(self, category, action, label, value):
self.category = category
self.action = action
self.label = label
self.value = value
def _GetTimeMillis(time_secs):
return int(round(time_secs * 1000))
class _TimedEvent(object):
def __init__(self, name):
self.name = name
self.time_millis = _GetTimeMillis(time.time())
class _CommandTimer(object):
"""A class for timing the execution of a command."""
def __init__(self, start_time):
self.__start = _GetTimeMillis(start_time)
self.__events = []
self.__action = 'unknown'
def SetAction(self, action):
self.__action = action.replace('.', ',').replace('-', '_')
def Event(self, name):
self.__events.append(_TimedEvent(name))
def GetCSIParams(self):
params = [('action', self.__action)]
response_times = [
'{0}.{1}'.format(event.name, event.time_millis - self.__start)
for event in self.__events]
params.append(('rt', ','.join(response_times)))
return params
class _MetricsCollector(object):
"""A singleton class to handle metrics reporting."""
_disabled_cache = None
_instance = None
@staticmethod
def GetCollectorIfExists():
return _MetricsCollector._instance
@staticmethod
def GetCollector():
"""Returns the singleton _MetricsCollector instance or None if disabled."""
if _MetricsCollector._IsDisabled():
return None
if not _MetricsCollector._instance:
_MetricsCollector._instance = _MetricsCollector()
return _MetricsCollector._instance
@staticmethod
def _IsDisabled():
"""Returns whether metrics collection should be disabled."""
if _MetricsCollector._disabled_cache is None:
# Don't collect metrics for completions.
if '_ARGCOMPLETE' in os.environ:
_MetricsCollector._disabled_cache = True
else:
# Don't collect metrics if the user has opted out.
disabled = properties.VALUES.core.disable_usage_reporting.GetBool()
if disabled is None:
# There is no preference set, fall back to the installation default.
disabled = config.INSTALLATION_CONFIG.disable_usage_reporting
_MetricsCollector._disabled_cache = disabled
return _MetricsCollector._disabled_cache
def __init__(self):
"""Initialize a new MetricsCollector.
This should only be invoked through the static GetCollector() function.
"""
current_platform = platforms.Platform.Current()
self._user_agent = 'CloudSDK/{version} {fragment}'.format(
version=config.CLOUD_SDK_VERSION,
fragment=current_platform.UserAgentFragment())
self._async_popen_args = current_platform.AsycPopenArgs()
self._project_ids = {}
hostname = socket.getfqdn()
install_type = 'Google' if hostname.endswith('.google.com') else 'External'
self._ga_params = [('v', '1'),
('tid', _GA_TID),
('cid', _MetricsCollector._GetCID()),
('t', 'event'),
('cd1', config.INSTALLATION_CONFIG.release_channel),
('cd2', install_type)]
self._csi_params = [('s', _CSI_ID),
('v', '2'),
('rls', config.CLOUD_SDK_VERSION)]
self.StartTimer(time.time())
self._metrics = []
log.debug('Metrics collector initialized...')
@staticmethod
def _GetCID():
"""Gets the client id from the config file, or generates a new one.
Returns:
str, The hex string of the client id.
"""
uuid_path = config.Paths().analytics_cid_path
cid = None
if os.path.exists(uuid_path):
with open(uuid_path) as f:
cid = f.read()
if cid:
return cid
files.MakeDir(os.path.dirname(uuid_path))
with open(uuid_path, 'w') as f:
cid = uuid.uuid4().hex
f.write(cid) # A random UUID
return cid
def _GetProjectIDHash(self):
"""Gets the hash of the current project id.
Returns:
str, The hex digest of the current project id or None if the
project is not set.
"""
project_id = properties.VALUES.core.project.Get(validate=False)
if not project_id:
return None
hashed_id = self._project_ids.get(project_id)
if not hashed_id:
checksum = hashlib.sha1()
checksum.update(project_id)
hashed_id = checksum.hexdigest()
self._project_ids[project_id] = hashed_id
return hashed_id
def StartTimer(self, start_time):
self._timer = _CommandTimer(start_time)
def RecordTimedEvent(self, name):
self._timer.Event(name)
def SetTimerAction(self, action):
self._timer.SetAction(action)
def CollectCSIMetric(self):
"""Adds metric with latencies for the given command to the metrics queue."""
params = self._timer.GetCSIParams()
params.extend(self._csi_params)
data = urllib.urlencode(params)
self._metrics.append(
('{0}?{1}'.format(_CSI_ENDPOINT, data), 'GET', None, self._user_agent))
def CollectGAMetric(self, event):
"""Adds the given GA event to the metrics queue.
Args:
event: _Event, The event to process.
"""
params = [
('ec', event.category),
('ea', event.action),
('el', event.label),
('ev', event.value),
]
project_id_hash = self._GetProjectIDHash()
if project_id_hash:
params.append(('cd11', project_id_hash))
params.extend(self._ga_params)
data = urllib.urlencode(params)
self._metrics.append((_GA_ENDPOINT, 'POST', data, self._user_agent))
def ReportMetrics(self):
"""Reports the collected metrics using a separate async process."""
if not self._metrics:
return
temp_metrics_file = tempfile.NamedTemporaryFile(delete=False)
with temp_metrics_file:
pickle.dump(self._metrics, temp_metrics_file)
self._metrics = []
reporting_script_path = os.path.join(
config.GoogleCloudSDKPackageRoot(), 'core', 'metrics_reporter.py')
execution_args = execution_utils.ArgsForPythonTool(
reporting_script_path, temp_metrics_file.name)
exec_env = os.environ.copy()
python_path_var = 'PYTHONPATH'
python_path = exec_env.get(python_path_var)
if python_path:
python_path += os.pathsep + config.LibraryRoot()
else:
python_path = config.LibraryRoot()
exec_env[python_path_var] = python_path
subprocess.Popen(execution_args, env=exec_env, **self._async_popen_args)
log.debug('Metrics reporting process started...')
def _CollectGAMetricAndSetTimerAction(category, action, label, value=0):
"""Common code for processing a GA event."""
collector = _MetricsCollector.GetCollector()
if collector:
collector.CollectGAMetric(
_GAEvent(category=category, action=action, label=label, value=value))
    # Don't include version. We already send it as the rls CSI parameter.
if category is _GA_COMMANDS_CATEGORY or category is _GA_EXECUTIONS_CATEGORY:
collector.SetTimerAction('{0}.{1}'.format(category, action))
elif category is _GA_ERROR_CATEGORY or category is _GA_HELP_CATEGORY:
collector.SetTimerAction('{0}.{1}.{2}'.format(category, action, label))
# Ignoring installs for now since there could be multiple per cmd execution.
def CaptureAndLogException(func):
"""Function decorator to capture and log any exceptions."""
def Wrapper(*args, **kwds):
try:
return func(*args, **kwds)
# pylint:disable=bare-except
except:
log.debug('Exception captured in %s', func.func_name, exc_info=True)
return Wrapper
@CaptureAndLogException
@atexit.register
def Shutdown():
"""Reports the metrics that were collected."""
collector = _MetricsCollector.GetCollectorIfExists()
if collector:
collector.RecordTimedEvent(_CSI_TOTAL_EVENT)
collector.CollectCSIMetric()
collector.ReportMetrics()
@CaptureAndLogException
def Installs(component_id, version_string):
"""Logs that an SDK component was installed.
Args:
component_id: str, The component id that was installed.
version_string: str, The version of the component.
"""
_CollectGAMetricAndSetTimerAction(
_GA_INSTALLS_CATEGORY, component_id, version_string)
@CaptureAndLogException
def Commands(command_path, version_string):
"""Logs that a gcloud command was run.
Args:
command_path: str, The '.' separated name of the calliope command.
version_string: str, The version of the command.
"""
if not version_string:
version_string = 'unknown'
_CollectGAMetricAndSetTimerAction(
_GA_COMMANDS_CATEGORY, command_path, version_string)
@CaptureAndLogException
def Help(command_path, mode):
"""Logs that help for a gcloud command was run.
Args:
command_path: str, The '.' separated name of the calliope command.
mode: str, The way help was invoked (-h, --help, help).
"""
_CollectGAMetricAndSetTimerAction(_GA_HELP_CATEGORY, command_path, mode)
@CaptureAndLogException
def Error(command_path, exc):
"""Logs that a top level Exception was caught for a gcloud command.
Args:
command_path: str, The '.' separated name of the calliope command.
exc: Exception, The exception that was caught.
"""
try:
cls = exc.__class__
name = '{0}.{1}'.format(cls.__module__, cls.__name__)
# pylint:disable=bare-except, Never want to fail on metrics reporting.
except:
name = 'unknown'
_CollectGAMetricAndSetTimerAction(_GA_ERROR_CATEGORY, command_path, name)
@CaptureAndLogException
def Executions(command_name, version_string):
"""Logs that a top level SDK script was run.
Args:
command_name: str, The script name.
version_string: str, The version of the command.
"""
if not version_string:
version_string = 'unknown'
_CollectGAMetricAndSetTimerAction(
_GA_EXECUTIONS_CATEGORY, command_name, version_string)
@CaptureAndLogException
def Started(start_time):
"""Record the time when the command was started."""
collector = _MetricsCollector.GetCollector()
if collector:
collector.StartTimer(start_time)
@CaptureAndLogException
def Loaded():
"""Record the time when command loading was completed."""
collector = _MetricsCollector.GetCollector()
if collector:
collector.RecordTimedEvent(_CSI_LOAD_EVENT)
@CaptureAndLogException
def Ran():
"""Record the time when command running was completed."""
collector = _MetricsCollector.GetCollector()
if collector:
collector.RecordTimedEvent(_CSI_RUN_EVENT)
| 29.625 | 80 | 0.704114 | 1,358 | 11,376 | 5.657585 | 0.240795 | 0.021997 | 0.017181 | 0.020305 | 0.179617 | 0.11545 | 0.094364 | 0.090329 | 0.077834 | 0.077834 | 0 | 0.004038 | 0.19462 | 11,376 | 383 | 81 | 29.70235 | 0.834534 | 0.211937 | 0 | 0.189076 | 0 | 0 | 0.051777 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130252 | false | 0 | 0.067227 | 0.008403 | 0.277311 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad504d0ed5506e0cbc4c99cd3a4ebeb45c721a40 | 6,000 | py | Python | resources/lib/services/nfsession/session/endpoints.py | rijsab/plugin.video.netflix | b30a6ba63ddeafcbcd53a642c74ffe5557eafb60 | [
"MIT"
] | null | null | null | resources/lib/services/nfsession/session/endpoints.py | rijsab/plugin.video.netflix | b30a6ba63ddeafcbcd53a642c74ffe5557eafb60 | [
"MIT"
] | null | null | null | resources/lib/services/nfsession/session/endpoints.py | rijsab/plugin.video.netflix | b30a6ba63ddeafcbcd53a642c74ffe5557eafb60 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Copyright (C) 2017 Sebastian Golasch (plugin.video.netflix)
Copyright (C) 2020 Stefano Gottardo - @CastagnaIT (original implementation module)
Netflix API endpoints
SPDX-License-Identifier: MIT
See LICENSES/MIT.md for more information.
"""
# Secure Netflix url
BASE_URL = 'https://www.netflix.com'
# List of all static endpoints for HTML/JSON POST/GET requests
# is_api_call:
# specify which address to use for the endpoint
# True -> The https address used is composed with 'apiUrl' value from reactContext data
# False -> The https address used is composed with the BASE_URL
# use_default_params:
# Add to the request the default parameters (see _prepare_request_properties)
# add_auth_url:
# Specifies if and where to put the 'authURL' value
# None -> Will not be added
# 'to_data' -> It will be added with the data to send
# 'to_params' -> It will be added to the request parameters
# content_type:
# If required add the Content-Type attribute to request header
# accept:
# If required add the Accept attribute to request header (if not specified use '*/*')
ENDPOINTS = {
'login':
{'address': '/login',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
         # By default Netflix login uses the 'application/x-www-form-urlencoded' Content-Type;
         # we use 'application/json' instead for simplicity of data conversion.
         # If login starts raising MbrStatusAnonymousError in the future, it may mean JSON is no longer accepted.
'content_type': 'application/json',
'accept': 'text/html,application/xhtml+xml,application/xml'},
'logout':
{'address': '/SignOut',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
'accept': 'text/html,application/xhtml+xml,application/xml'},
'shakti':
{'address': '/pathEvaluator',
'is_api_call': True,
'use_default_params': True,
'add_auth_url': 'to_data',
'content_type': 'application/x-www-form-urlencoded'},
'browse':
{'address': '/browse',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
'accept': 'text/html,application/xhtml+xml,application/xml'},
'profiles_gate':
# This endpoint is used after ending editing profiles page, i think to force close an active profile session
{'address': '/ProfilesGate',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': 'to_data',
'accept': '*/*'},
'profiles':
{'address': '/profiles/manage',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
'accept': '*/*'},
'switch_profile':
{'address': '/SwitchProfile',
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
'accept': '*/*'},
'activate_profile':
{'address': '/profiles/switch',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': None},
'profile_lock':
{'address': '/profileLock',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'profile_hub':
{'address': '/profilehub',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'content_restrictions':
{'address': '/contentRestrictions',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': None,
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'restrictions':
# Page of content restrictions (former parental control)
{'address': '/settings/restrictions/{}', # At the end of the address will be appended the profile guid
'is_api_call': False,
'use_default_params': False,
'add_auth_url': None,
'accept': '*/*'},
'pin_reset':
{'address': '/pin/reset',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': None},
'pin_service':
{'address': '/pin/service',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'metadata':
{'address': '/metadata',
'is_api_call': True,
'use_default_params': True,
'add_auth_url': 'to_params'},
'set_video_rating': # Old rating system
{'address': '/setVideoRating',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'set_thumb_rating':
{'address': '/setThumbRating',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'update_my_list':
{'address': '/playlistop',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'},
'viewing_activity':
{'address': '/viewingactivity',
'is_api_call': True,
'use_default_params': False,
'add_auth_url': 'to_data',
'content_type': 'application/json',
'accept': 'application/json, text/javascript, */*'}
}
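
# Illustrative sketch (an added note, not part of the original module): a
# request helper would typically look up one of the specs above and branch on
# its flags, e.g.
#     spec = ENDPOINTS['login']
#     base_url = api_url if spec['is_api_call'] else BASE_URL
#     full_url = base_url + spec['address']
# where 'api_url' is assumed to come from the 'apiUrl' reactContext value
# mentioned in the comments at the top of this module.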
| 36.809816 | 112 | 0.599667 | 672 | 6,000 | 5.111607 | 0.254464 | 0.029112 | 0.052402 | 0.10393 | 0.506841 | 0.48559 | 0.48559 | 0.466376 | 0.445124 | 0.425619 | 0 | 0.002024 | 0.258833 | 6,000 | 162 | 113 | 37.037037 | 0.770407 | 0.253667 | 0 | 0.642276 | 0 | 0 | 0.524175 | 0.044962 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad516edf0bd0737694c91e5a6f9b063d4b344c05 | 1,917 | py | Python | tests/src/Periodic_report/check_clusterwise_downloadcsv.py | JalajaTR/cQube | 6bf58ab25f0c36709630987ab730bbd5d9192c03 | [
"MIT"
] | null | null | null | tests/src/Periodic_report/check_clusterwise_downloadcsv.py | JalajaTR/cQube | 6bf58ab25f0c36709630987ab730bbd5d9192c03 | [
"MIT"
] | 2 | 2022-02-01T00:55:12.000Z | 2022-03-29T22:29:09.000Z | tests/src/Periodic_report/check_clusterwise_downloadcsv.py | JalajaTR/cQube | 6bf58ab25f0c36709630987ab730bbd5d9192c03 | [
"MIT"
] | null | null | null |
import csv
import os
import re
import time
from selenium.webdriver.support.select import Select
from Data.parameters import Data
from get_dir import pwd
from reuse_func import GetData
class Districts_Block_downloadcsv():
def __init__(self, driver):
self.driver = driver
def remove_csv(self):
os.remove(self.filename)
def check_districts_block(self):
cal = GetData()
cal.click_on_state(self.driver)
cal.page_loading(self.driver)
select_district = Select(self.driver.find_element_by_id('choose_dist'))
select_block = Select(self.driver.find_element_by_id('choose_block'))
count = 0
for x in range(len(select_district.options)-1, len(select_district.options)):
select_district.select_by_index(x)
cal.page_loading(self.driver)
for y in range(len(select_block.options)-1, len(select_block.options)):
select_block.select_by_index(y)
cal.page_loading(self.driver)
markers = self.driver.find_elements_by_class_name(Data.dots)
if len(markers) - 1 == 0:
print("District" + select_district.first_selected_option.text +"Block"+ select_block.first_selected_option.text +"No Data")
count = count + 1
else:
time.sleep(2)
self.driver.find_element_by_id('download').click()
time.sleep(2)
p = pwd()
self.filename = p.get_download_dir() + "/Cluster_per_block_report.csv"
if not os.path.isfile(self.filename):
print("District" + select_district.first_selected_option.text +"Block"+ select_block.first_selected_option.text+"csv is not downloaded")
count = count + 1
self.remove_csv()
return count
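

# Illustrative usage sketch (an added note; assumes a configured Selenium
# webdriver instance created elsewhere in the test suite):
#     report = Districts_Block_downloadcsv(driver)
#     assert report.check_districts_block() == 0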
| 37.588235 | 160 | 0.617632 | 236 | 1,917 | 4.758475 | 0.330508 | 0.089047 | 0.049866 | 0.081923 | 0.310775 | 0.246661 | 0.224399 | 0.224399 | 0.158504 | 0.158504 | 0 | 0.006623 | 0.29108 | 1,917 | 50 | 161 | 38.34 | 0.81972 | 0 | 0 | 0.170732 | 0 | 0 | 0.05953 | 0.015144 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.195122 | 0 | 0.317073 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5180658ad278b876a480f0422edf4e853e5f1f | 2,441 | py | Python | tests/components/homekit_controller/specific_devices/test_ryse_smart_bridge.py | Kapernicus/core | 16d2f29b896d44c5cad2924be481a333233474a2 | [
"Apache-2.0"
] | null | null | null | tests/components/homekit_controller/specific_devices/test_ryse_smart_bridge.py | Kapernicus/core | 16d2f29b896d44c5cad2924be481a333233474a2 | [
"Apache-2.0"
] | 40 | 2021-05-12T06:42:57.000Z | 2022-03-31T06:11:40.000Z | tests/components/homekit_controller/specific_devices/test_ryse_smart_bridge.py | Kapernicus/core | 16d2f29b896d44c5cad2924be481a333233474a2 | [
"Apache-2.0"
] | null | null | null | """Test against characteristics captured from a ryse smart bridge platforms."""
from homeassistant.helpers import device_registry as dr, entity_registry as er
from tests.components.homekit_controller.common import (
Helper,
setup_accessories_from_file,
setup_test_accessories,
)
async def test_ryse_smart_bridge_setup(hass):
"""Test that a Ryse smart bridge can be correctly setup in HA."""
accessories = await setup_accessories_from_file(hass, "ryse_smart_bridge.json")
config_entry, pairing = await setup_test_accessories(hass, accessories)
entity_registry = er.async_get(hass)
# Check that the cover.master_bath_south is correctly found and set up
cover_id = "cover.master_bath_south"
cover = entity_registry.async_get(cover_id)
assert cover.unique_id == "homekit-00:00:00:00:00:00-2-48"
cover_helper = Helper(
hass,
cover_id,
pairing,
accessories[0],
config_entry,
)
cover_state = await cover_helper.poll_and_get_state()
assert cover_state.attributes["friendly_name"] == "Master Bath South"
assert cover_state.state == "closed"
device_registry = dr.async_get(hass)
device = device_registry.async_get(cover.device_id)
assert device.manufacturer == "RYSE Inc."
assert device.name == "Master Bath South"
assert device.model == "RYSE Shade"
assert device.sw_version == "3.0.8"
bridge = device_registry.async_get(device.via_device_id)
assert bridge.manufacturer == "RYSE Inc."
assert bridge.name == "RYSE SmartBridge"
assert bridge.model == "RYSE SmartBridge"
assert bridge.sw_version == "1.3.0"
# Check that the cover.ryse_smartshade is correctly found and set up
cover_id = "cover.ryse_smartshade"
cover = entity_registry.async_get(cover_id)
assert cover.unique_id == "homekit-00:00:00:00:00:00-3-48"
cover_helper = Helper(
hass,
cover_id,
pairing,
accessories[0],
config_entry,
)
cover_state = await cover_helper.poll_and_get_state()
assert cover_state.attributes["friendly_name"] == "RYSE SmartShade"
assert cover_state.state == "open"
device_registry = dr.async_get(hass)
device = device_registry.async_get(cover.device_id)
assert device.manufacturer == "RYSE Inc."
assert device.name == "RYSE SmartShade"
assert device.model == "RYSE Shade"
assert device.sw_version == ""
| 32.986486 | 83 | 0.707087 | 327 | 2,441 | 5.042813 | 0.244648 | 0.024257 | 0.029109 | 0.029109 | 0.519102 | 0.497271 | 0.497271 | 0.497271 | 0.497271 | 0.396604 | 0 | 0.019378 | 0.196641 | 2,441 | 73 | 84 | 33.438356 | 0.82152 | 0.08603 | 0 | 0.45283 | 0 | 0 | 0.145901 | 0.05836 | 0 | 0 | 0 | 0 | 0.339623 | 1 | 0 | false | 0 | 0.037736 | 0 | 0.037736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad52f76d95f1ba28fb333e586f9d23c99644cb2c | 4,058 | py | Python | app/improving_agent/src/normalization/edge_normalization.py | brettasmi/EvidARA | 319bbe80ddb4d7d6aa4f1db005ad5461e015a8bc | [
"MIT"
] | null | null | null | app/improving_agent/src/normalization/edge_normalization.py | brettasmi/EvidARA | 319bbe80ddb4d7d6aa4f1db005ad5461e015a8bc | [
"MIT"
] | 1 | 2020-04-27T07:07:47.000Z | 2020-04-27T07:07:47.000Z | app/improving_agent/src/normalization/edge_normalization.py | brettasmi/EvidARA | 319bbe80ddb4d7d6aa4f1db005ad5461e015a8bc | [
"MIT"
] | 1 | 2020-03-23T10:39:59.000Z | 2020-03-23T10:39:59.000Z | from collections import defaultdict
from werkzeug.exceptions import BadRequest
from improving_agent.exceptions import (
MissingComponentError,
UnsupportedTypeError
)
from improving_agent.models import QEdge
from improving_agent.src.biolink.biolink import EDGE, get_supported_biolink_descendants
from improving_agent.src.biolink.spoke_biolink_constants import BIOLINK_SPOKE_EDGE_MAPPINGS, PREDICATES, SPOKE_ANY_TYPE
def _deserialize_qedge(qedge_id, qedge):
try:
subject = qedge['subject']
object_ = qedge['object']
constraints = qedge.get('constraints')
predicates = qedge.get('predicates')
qedge = QEdge(
predicates=predicates,
subject=subject,
object=object_,
constraints=constraints
)
setattr(qedge, 'qedge_id', qedge_id)
except (KeyError, TypeError):
raise BadRequest(f'Could not deserialize query edge {qedge_id}')
return qedge
def _get_objects_maps(subj_qnode):
if not subj_qnode.category:
return list(PREDICATES.values())
objects_maps = []
for category in subj_qnode.category:
objects_map = PREDICATES.get(category)
if objects_map:
objects_maps.append(objects_map)
if not objects_maps:
raise UnsupportedTypeError(f'Could not find any supported predicates for subject category: {subj_qnode.category}')
return objects_maps
def _get_potential_predicate_maps(subj_qnode, obj_qnode):
objects_maps = _get_objects_maps(subj_qnode)
potential_predicates_map = defaultdict(list)
if not obj_qnode.category:
for objects_map in objects_maps:
for predicate, spoke_edges in objects_map.items():
potential_predicates_map[predicate].extend(spoke_edges)
return potential_predicates_map
for category in obj_qnode.category:
for objects_map in objects_maps:
predicates_map = objects_map.get(category)
if not predicates_map:
continue
for predicate, spoke_edges in predicates_map.items():
potential_predicates_map[predicate].extend(spoke_edges)
if not potential_predicates_map:
raise UnsupportedTypeError(
'Could not find any supported predicates for subject category: '
f'{subj_qnode.category} and object category: {obj_qnode.category}'
)
return potential_predicates_map
def _get_subject_object_qnodes(query_graph, qedge):
subject_node = query_graph.nodes.get(qedge.subject)
object_node = query_graph.nodes.get(qedge.object)
if not subject_node or not object_node:
raise MissingComponentError(f'Subject or object missing for query edge {qedge.qedge_id}')
return subject_node, object_node
def _assign_spoke_edge_types(qedge, subj_qnode, obj_qnode, query_graph):
spoke_edge_types = []
if qedge.predicates:
compatible_predicates = get_supported_biolink_descendants(qedge.predicates, EDGE)
for predicate in compatible_predicates:
spoke_edge_mappings = BIOLINK_SPOKE_EDGE_MAPPINGS.get(predicate)
if not spoke_edge_mappings:
raise UnsupportedTypeError(f'imProving Agent does not currently accept predicates of type {predicate}')
spoke_edge_types.extend(spoke_edge_mappings)
if not spoke_edge_types:
raise UnsupportedTypeError(
f'imProving Agent does not currently accept predicates of type {qedge.predicates}'
)
else:
spoke_edge_types.append(SPOKE_ANY_TYPE)
setattr(qedge, 'spoke_edge_types', set(spoke_edge_types))
return qedge
def validate_normalize_qedges(query_graph):
qedges = {}
for qedge_id, qedge in query_graph.edges.items():
qedge = _deserialize_qedge(qedge_id, qedge)
subj_qnode, obj_qnode = _get_subject_object_qnodes(query_graph, qedge)
qedge = _assign_spoke_edge_types(qedge, subj_qnode, obj_qnode, query_graph)
qedges[qedge_id] = qedge
return qedges
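

# Illustrative usage note (an addition, not part of the original module):
# validate_normalize_qedges is the entry point used during query processing; it
# takes a TRAPI-style query graph and returns QEdge objects annotated with the
# SPOKE edge types to match, e.g.
#     qedges = validate_normalize_qedges(query_message.query_graph)
# where 'query_message' is assumed to be the deserialized request message.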
| 36.232143 | 122 | 0.712666 | 485 | 4,058 | 5.657732 | 0.16701 | 0.042638 | 0.040816 | 0.024781 | 0.331268 | 0.25 | 0.230321 | 0.203353 | 0.203353 | 0.094752 | 0 | 0 | 0.22277 | 4,058 | 111 | 123 | 36.558559 | 0.870006 | 0 | 0 | 0.113636 | 0 | 0 | 0.127403 | 0.01035 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068182 | false | 0 | 0.068182 | 0 | 0.227273 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad57115395eb7b59d691284adf0cdc57ace0692f | 1,676 | py | Python | code/shared/theta_parameters.py | scailfin/madminer-workflow-ml | afa8b55c72322f9d9f9edc1cedd533de5f018213 | [
"MIT"
] | 1 | 2020-11-25T18:03:45.000Z | 2020-11-25T18:03:45.000Z | code/shared/theta_parameters.py | madminer-tool/madminer-workflow-ml | b51c9842c0a224c9a501e730cd68563dd57abe8e | [
"MIT"
] | 3 | 2020-07-17T11:40:32.000Z | 2021-05-24T17:27:56.000Z | code/shared/theta_parameters.py | madminer-tool/madminer-workflow-ml | b51c9842c0a224c9a501e730cd68563dd57abe8e | [
"MIT"
] | 2 | 2020-09-04T02:31:55.000Z | 2020-11-09T14:04:01.000Z | #!/usr/bin/python
from madminer.sampling import benchmark
from madminer.sampling import benchmarks
from madminer.sampling import morphing_point
from madminer.sampling import random_morphing_points
############################
##### Global variables #####
############################
sampling_methods = {
"benchmark": benchmark,
"benchmarks": benchmarks,
"morphing_points": morphing_point,
"random_morphing_points": random_morphing_points,
}
#############################
##### Parsing functions #####
#############################
def parse_theta_params(theta_spec: dict):
"""
Parses the theta parameters that the method will take later on
:param theta_spec: theta specification on the inputs file
:return: list
"""
params = []
for num, param in enumerate(theta_spec["prior"]):
params.append(
(
param[f"parameter_{num}"]["prior_shape"],
param[f"parameter_{num}"]["prior_param_0"],
param[f"parameter_{num}"]["prior_param_1"],
)
)
return params
def get_theta_values(theta_spec: dict):
"""
Parses the theta argument specification and generates a theta value
:param theta_spec: theta specification on the inputs file
:return: tuple
"""
sampling_method = theta_spec["sampling_method"]
if sampling_method == "random_morphing_points":
parameters = parse_theta_params(theta_spec)
arguments = [theta_spec["sampling_number"], parameters]
else:
arguments = [theta_spec["sampling_arg"]]
method = sampling_methods.get(sampling_method, benchmark)
return method(*arguments)
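

# Minimal usage sketch (an illustrative addition; the specification below is an
# assumed example of the inputs-file format, not data shipped with this module):
#     theta_spec = {
#         "sampling_method": "random_morphing_points",
#         "sampling_number": 10,
#         "prior": [
#             {"parameter_0": {"prior_shape": "gaussian",
#                              "prior_param_0": 0.0,
#                              "prior_param_1": 1.0}},
#         ],
#     }
#     theta = get_theta_values(theta_spec)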
| 26.1875 | 71 | 0.627088 | 176 | 1,676 | 5.732955 | 0.357955 | 0.080278 | 0.079286 | 0.103072 | 0.277502 | 0.214073 | 0.105055 | 0.105055 | 0.105055 | 0.105055 | 0 | 0.001507 | 0.208234 | 1,676 | 63 | 72 | 26.603175 | 0.758855 | 0.195704 | 0 | 0 | 0 | 0 | 0.17753 | 0.037736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad57c8b22474019497c9a5f09d204feefbe720b7 | 3,338 | py | Python | main.py | varunnaik18/Python_UdacityMovieTrailerWebsite | acdf02c9296c60e64c969d7dc9d49942926d741f | [
"MIT"
] | null | null | null | main.py | varunnaik18/Python_UdacityMovieTrailerWebsite | acdf02c9296c60e64c969d7dc9d49942926d741f | [
"MIT"
] | null | null | null | main.py | varunnaik18/Python_UdacityMovieTrailerWebsite | acdf02c9296c60e64c969d7dc9d49942926d741f | [
"MIT"
] | null | null | null | import movie
import fresh_tomatoes
# Create Actor instances
ellen_degeneres = movie.Actors('Ellen DeGeneres',
'https://goo.gl/RPA7Xu')
alexander_gould = movie.Actors('Alexander Gould',
'https://goo.gl/mnzv2a')
albert_brooks = movie.Actors('Albert Brooks', 'https://goo.gl/SHCnnc')
tom_hanks = movie.Actors('Tom Hanks', 'https://goo.gl/EKP1Dy')
tim_allen = movie.Actors('Tim Allen', 'https://goo.gl/tDi8ry')
zoe_saldana = movie.Actors('Zoe Saldana', 'https://goo.gl/Dh6b47')
sam_worthington = movie.Actors('Sam Worthington',
'https://goo.gl/6JNDT3')
brad_bird = movie.Actors('Brad Bird', 'https://goo.gl/uqKDaJ')
patton_oswalt = movie.Actors('Patton Oswalt', 'https://goo.gl/KGmzmR')
robin_williams = movie.Actors('Robin Williams', 'https://goo.gl/3dCXWD')
evan_mcgregor = movie.Actors('Ewan McGregor', 'https://goo.gl/4MQTRQ')
robert_downey = movie.Actors('Robert Downey Jr.',
'https://goo.gl/7C7ykn')
chris_evans = movie.Actors('Chris Evans', 'https://goo.gl/wmkqUc')
chris_hemsworth = movie.Actors('Chris Hemsworth',
'https://goo.gl/QNp1B7')
scarlet_johansson = movie.Actors('Scarlet Johansson',
'https://goo.gl/tSSsgZ')
# Create Movie instances
toy_story = movie.Movie(
'Toy Story',
'Story of a boy and his toys',
'https://upload.wikimedia.org/wikipedia/en/1/13/Toy_Story.jpg',
'https://www.youtube.com/watch?v=KYz2wyBy3kc',
8.3,
['Drama', 'Action', 'Adventure'],
[tom_hanks, tim_allen],
)
avatar = movie.Movie(
'Avatar',
'A marine on alien planet',
'https://upload.wikimedia.org/wikipedia/en/a/a1/Avatar-video-game-cover.jpg'
,
'https://www.youtube.com/watch?v=5PSNL1qE6VY',
7.8,
['Drama', 'Action', 'Adventure'],
[zoe_saldana, sam_worthington],
)
finding_nemo = movie.Movie(
'Finding Nemo',
'Story of dad and a son',
'https://upload.wikimedia.org/wikipedia/en/7/71/Finding_Nemo_Coverart.png'
,
'https://www.youtube.com/watch?v=wZdpNglLbt8',
8.1,
['Drama', 'Action', 'Adventure'],
[ellen_degeneres, alexander_gould, albert_brooks],
)
robots = movie.Movie(
'Robots',
"A robot's story",
'https://upload.wikimedia.org/wikipedia/en/f/f2/Robots2005Poster.jpg'
,
'https://www.youtube.com/watch?v=p9X16KPOgFI',
6.3,
['Drama', 'Action', 'Adventure'],
[evan_mcgregor, robin_williams],
)
avengers = movie.Movie(
'The Avengers',
'Super Hero Adventures',
'https://upload.wikimedia.org/wikipedia/en/f/f9/TheAvengers2012Poster.jpg'
,
'https://www.youtube.com/watch?v=eOrNdBpGMv8',
8.1,
['Action', 'Superhero', 'Fictional'],
[robert_downey, chris_hemsworth, chris_evans, scarlet_johansson],
)
ratatouille = movie.Movie(
'Ratatouille',
'Story about a rat aspiring to become chef',
'https://upload.wikimedia.org/wikipedia/en/5/50/RatatouillePoster.jpg'
,
'https://www.youtube.com/watch?v=c3sBBRxDAqk',
8.0,
['Drama', 'Action', 'Adventure'],
[brad_bird, patton_oswalt],
)
movies = [
toy_story,
avatar,
finding_nemo,
ratatouille,
robots,
avengers,
]
# This function call will open a page of movies
fresh_tomatoes.open_movies_page(movies)
| 29.803571 | 80 | 0.643499 | 416 | 3,338 | 5.064904 | 0.336538 | 0.07831 | 0.071191 | 0.065496 | 0.173232 | 0.173232 | 0.097295 | 0 | 0 | 0 | 0 | 0.021537 | 0.193229 | 3,338 | 111 | 81 | 30.072072 | 0.760861 | 0.027262 | 0 | 0.076923 | 0 | 0.010989 | 0.466379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021978 | 0 | 0.021978 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad59e8295c2ebcc38e29fb254a85a703ebff34de | 12,789 | py | Python | stickersfrontend/__init__.py | secretisdead/stickersfrontend | 8c9e0ba437c8f5dfa0f369766fd8a8b30624fc2e | [
"MIT"
] | null | null | null | stickersfrontend/__init__.py | secretisdead/stickersfrontend | 8c9e0ba437c8f5dfa0f369766fd8a8b30624fc2e | [
"MIT"
] | null | null | null | stickersfrontend/__init__.py | secretisdead/stickersfrontend | 8c9e0ba437c8f5dfa0f369766fd8a8b30624fc2e | [
"MIT"
] | 1 | 2021-09-05T06:18:05.000Z | 2021-09-05T06:18:05.000Z | import time
import json
from datetime import datetime, timezone
import re
import uuid
import os
from PIL import Image
from flask import render_template
from stickers import Stickers
class StickersFrontend(Stickers):
def __init__(
self,
config,
accounts,
access_log,
engine,
install=False,
connection=None,
):
super().__init__(
engine,
config['db_prefix'],
install=install,
connection=connection,
)
self.config = config
self.accounts = accounts
self.access_log = access_log
self.config['maximum_name_length'] = min(
self.name_length,
self.config['maximum_name_length'],
)
self.config['maximum_display_length'] = min(
self.display_length,
self.config['maximum_display_length'],
)
self.config['maximum_category_length'] = self.category_length
self.callbacks = {}
def add_callback(self, name, f):
if name not in self.callbacks:
self.callbacks[name] = []
self.callbacks[name].append(f)
# cooldowns
def gachapon_cooldown(self, collected_stickers):
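# Two modes: by default, a rolling window anchored on the oldest sticker
# received within the last cooldown period; with
# 'receive_sticker_cooldown_periods_by_utc' set, fixed periods aligned to UTC
# midnight. Returns (timestamp at which the cooldown ends, or 0 if stickers can
# still be received, and the most recently received sticker).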
current_time = time.time()
if not self.config['receive_sticker_cooldown_periods_by_utc']:
period_start_time = (
current_time - self.config['receive_sticker_cooldown_period']
)
oldest_this_period = current_time
for collected_sticker in collected_stickers.values():
if collected_sticker.receive_time > period_start_time:
oldest_this_period = min(
oldest_this_period,
collected_sticker.receive_time,
)
period_end_time = (
oldest_this_period + self.config['receive_sticker_cooldown_period']
)
else:
# start at current utc day's midnight
current_datetime = datetime.now(timezone.utc)
period_end_time = datetime(
current_datetime.year,
current_datetime.month,
current_datetime.day,
0,
0,
0,
0,
timezone.utc,
).timestamp()
while period_end_time < current_time:
period_end_time += self.config['receive_sticker_cooldown_period']
period_start_time = (
period_end_time
- self.config['receive_sticker_cooldown_period']
)
received_this_period = 0
last_received_sticker = None
for collected_sticker in collected_stickers.values():
if (
not last_received_sticker
or last_received_sticker.receive_time < collected_sticker.receive_time
):
last_received_sticker = collected_sticker
if collected_sticker.receive_time > period_start_time:
received_this_period += 1
if received_this_period < self.config['receive_sticker_cooldown_amount']:
return 0, last_received_sticker
return int(period_end_time), last_received_sticker
def place_sticker_cooldown(self, remote_origin, user_id):
return self.access_log.cooldown(
'place_sticker',
self.config['place_sticker_cooldown_amount'],
self.config['place_sticker_cooldown_period'],
remote_origin=remote_origin,
subject_id=user_id,
)
# require object or raise
def require_sticker(self, id):
sticker = self.get_sticker(id)
if not sticker:
raise ValueError('Sticker not found')
return sticker
def require_collected_sticker(self, id):
collected_sticker = self.get_collected_sticker(id)
if not collected_sticker:
raise ValueError('Collected sticker not found')
return collected_sticker
def require_sticker_placement(self, id):
sticker_placement = self.get_sticker_placement(id)
if not sticker_placement:
raise ValueError('Sticker placement not found')
return sticker_placement
# extend stickers methods
def get_sticker(self, sticker_id):
sticker = super().get_sticker(sticker_id)
if sticker:
self.populate_sticker_properties(sticker)
return sticker
def search_stickers(self, **kwargs):
stickers = super().search_stickers(**kwargs)
for sticker in stickers.values():
self.populate_sticker_properties(sticker)
return stickers
def get_collected_sticker(self, collected_sticker_id):
collected_sticker = super().get_collected_sticker(collected_sticker_id)
if collected_sticker:
self.populate_sticker_properties(collected_sticker.sticker)
users = self.accounts.search_users(
filter={'ids': collected_sticker.user_id},
)
if collected_sticker.user_id in users:
collected_sticker.user = users.get(collected_sticker.user_id_bytes)
return collected_sticker
def search_collected_stickers(self, **kwargs):
collected_stickers = super().search_collected_stickers(**kwargs)
user_ids = []
for collected_sticker in collected_stickers.values():
self.populate_sticker_properties(collected_sticker.sticker)
if collected_sticker.user_id:
user_ids.append(collected_sticker.user_id)
users = self.accounts.search_users(filter={'ids': user_ids})
for collected_sticker in collected_stickers.values():
if collected_sticker.user_id in users:
collected_sticker.user = users.get(collected_sticker.user_id_bytes)
return collected_stickers
def get_sticker_placement(self, sticker_placement_id):
sticker_placement = super().get_sticker_placement(sticker_placement_id)
if sticker_placement:
self.populate_sticker_properties(sticker_placement.sticker)
users = self.accounts.search_users(
filter={'ids': sticker_placement.user_id},
)
if sticker_placement.user_id in users:
sticker_placement.user = users.get(sticker_placement.user_id_bytes)
return sticker_placement
def search_sticker_placements(self, **kwargs):
sticker_placements = super().search_sticker_placements(**kwargs)
user_ids = []
for sticker_placement in sticker_placements.values():
self.populate_sticker_properties(sticker_placement.sticker)
if sticker_placement.user_id:
user_ids.append(sticker_placement.user_id)
users = self.accounts.search_users(filter={'ids': user_ids})
for sticker_placement in sticker_placements.values():
if sticker_placement.user_id in users:
sticker_placement.user = users.get(sticker_placement.user_id_bytes)
return sticker_placements
def create_sticker(self, **kwargs):
sticker = super().create_sticker(**kwargs)
subject_id = ''
if self.accounts.current_user:
subject_id = self.accounts.current_user.id_bytes
self.access_log.create_log(
scope='create_sticker',
subject_id=subject_id,
object_id=sticker.id,
)
return sticker
def update_sticker(self, id, **kwargs):
super().update_sticker(id, **kwargs)
subject_id = ''
if self.accounts.current_user:
subject_id = self.accounts.current_user.id_bytes
self.access_log.create_log(
scope='update_sticker',
subject_id=subject_id,
object_id=id,
)
def delete_sticker(self, sticker, user_id):
self.remove_sticker_image(sticker)
super().delete_sticker(sticker.id_bytes)
self.access_log.create_log(
scope='delete_sticker',
subject_id=user_id,
object_id=sticker.id_bytes,
)
def add_sticker_image(self, sticker, image):
sticker_image_path = os.path.join(self.config['sticker_images_path'], sticker.id)
edge = self.config['sticker_edge']
image_copy = image.copy()
image_copy.thumbnail((edge, edge), Image.BICUBIC)
# static
thumbnail_path = sticker_image_path + '.webp'
image_copy.save(thumbnail_path, 'WebP', lossless=True)
# fallback
thumbnail_path = sticker_image_path + '.png'
image_copy.save(thumbnail_path, 'PNG', optimize=True)
image_copy.close()
def remove_sticker_image(self, sticker):
sticker_image_path = os.path.join(self.config['sticker_images_path'], sticker.id)
extensions = ['webp', 'png']
for extension in extensions:
if os.path.exists(sticker_image_path + '.' + extension):
os.remove(sticker_image_path + '.' + extension)
def process_sticker_image(self, sticker_image):
errors = []
try:
file_contents = sticker_image.stream.read()
except ValueError as e:
raise ValueError('Problem uploading sticker_image')
file_path = os.path.join(
self.config['temp_path'],
'temp_sticker_image_' + str(uuid.uuid4()),
)
f = open(file_path, 'w+b')
f.write(file_contents)
f.close()
try:
image = Image.open(file_path)
# catch general exceptions here in case of problem reading image file
except:
#TODO file in use?
#os.remove(file_path)
raise ValueError('Problem opening sticker image')
else:
return image
def populate_sticker_image(self, sticker):
if not sticker:
return
sticker.image = ''
extensions = ['webp', 'png']
for extension in extensions:
if not os.path.exists(
os.path.join(
self.config['sticker_images_path'],
sticker.id + '.' + extension,
)
):
return
sticker.image = self.config['sticker_image_file_uri'].format(sticker.id)
# serve files over same protocol as pages
sticker.image = sticker.image.replace('https:', '').replace('http:', '')
def populate_sticker_properties(self, sticker):
if not sticker:
return
self.populate_sticker_image(sticker)
sticker.group_names = self.accounts.group_names_from_bits(
sticker.group_bits
)
def grant_sticker(self, sticker_id, user_id, receive_time=None, granting_user_id=''):
collected_sticker = super().grant_sticker(
sticker_id,
user_id,
receive_time=receive_time,
)
self.populate_sticker_properties(collected_sticker.sticker)
self.access_log.create_log(
scope='grant_sticker',
subject_id=granting_user_id,
object_id=collected_sticker.id,
)
return collected_sticker
def revoke_sticker(self, collected_sticker_id, user_id, revoking_user_id):
super().revoke_sticker(collected_sticker_id)
self.access_log.create_log(
scope='revoke_sticker',
subject_id=revoking_user_id,
object_id=collected_sticker_id,
)
def stickers_by_category(self, stickers):
ordered = {}
for category in self.config['categories']:
ordered[category] = []
for sticker in stickers.values():
if sticker.category in ordered:
ordered[sticker.category].append(
sticker
)
for stickers in ordered.values():
stickers.sort(key=lambda sticker: sticker.category_order)
return ordered
def collected_stickers_by_category(self, collected_stickers):
ordered = {}
for category in self.config['categories']:
ordered[category] = []
for collected_sticker in collected_stickers.values():
if collected_sticker.sticker.category in ordered:
ordered[collected_sticker.sticker.category].append(
collected_sticker
)
for stickers in ordered.values():
stickers.sort(
key=lambda collected_sticker: collected_sticker.sticker.category_order
)
return ordered
def place_sticker(self, **kwargs):
placement = super().place_sticker(**kwargs)
self.access_log.create_log(
scope='place_sticker',
subject_id=placement.user_id,
object_id=placement.id,
)
return placement
def unplace_sticker(self, placement_id, user_id=''):
super().unplace_sticker(placement_id)
self.access_log.create_log(
scope='unplace_sticker',
subject_id=user_id,
object_id=placement_id,
)
def unplace_by_user(self, user_id, subject_id=''):
super().unplace_by_user(user_id)
self.access_log.create_log(
scope='unplace_stickers_by_user',
subject_id=subject_id,
object_id=user_id,
)
def generate_sticker_placements_file(self, subject_id):
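# Writes '<subject_id>.json' under sticker_placements_path containing
# 'placements' (a list of placement dicts) and 'stickers' (a map from sticker
# id to its rendered 'sticker.html' markup with newlines and tabs stripped).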
sticker_placements = self.search_sticker_placements(
filter={
'subject_ids': subject_id,
},
sort='placement_time',
order='asc',
)
sticker_ids = []
placements_list = []
for sticker_placement in sticker_placements.values():
sticker_ids.append(sticker_placement.sticker_id)
placements_list.append({
'id': sticker_placement.id,
'sticker_id': sticker_placement.sticker_id,
'placement_time': sticker_placement.placement_time,
'user_id': sticker_placement.user_id,
'position_x': sticker_placement.position_x,
'position_y': sticker_placement.position_y,
#'rotation': sticker_placement.rotation,
#'scale': sticker_placement.scale,
})
stickers = self.search_stickers(filter={'ids': sticker_ids})
rendered_stickers = {}
for sticker in stickers.values():
rendered_stickers[sticker.id] = render_template(
'sticker.html',
sticker=sticker,
).replace(
'\r', '',
).replace(
'\n', '',
).replace(
'\t', '',
)
f = open(
os.path.join(
self.config['sticker_placements_path'],
subject_id + '.json',
),
'w',
)
f.write(
json.dumps(
{
'placements': placements_list,
'stickers': rendered_stickers,
}
)
)
f.close()
def get_potential_stickers(self, group_bits):
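# Accumulate the bits of every group the user is not in and pass them as the
# 'without_group_bits' search filter.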
without_group_bits = 0
for group_bit in self.accounts.group_names_to_bits.values():
if not self.accounts.contains_all_bits(group_bits, group_bit):
without_group_bits += int.from_bytes(group_bit, 'big')
filter = {}
if 0 < without_group_bits:
filter['without_group_bits'] = without_group_bits
potential_stickers = self.search_stickers(filter=filter)
return potential_stickers
| 29.467742 | 86 | 0.744468 | 1,654 | 12,789 | 5.439541 | 0.123337 | 0.081805 | 0.014449 | 0.019562 | 0.432033 | 0.346004 | 0.295654 | 0.215516 | 0.171502 | 0.155607 | 0 | 0.000923 | 0.152475 | 12,789 | 433 | 87 | 29.535797 | 0.829136 | 0.025491 | 0 | 0.279683 | 0 | 0 | 0.080254 | 0.03117 | 0 | 0 | 0 | 0.002309 | 0 | 1 | 0.079156 | false | 0 | 0.023747 | 0.002639 | 0.163588 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5b1ed8c9574f8aca8a488a25cfbd49bebc8ccd | 755 | py | Python | PySpark/BigData/dataframe-abstract.py | James-McNeill/Learning | 3c4fe1a64240cdf5614db66082bd68a2f16d2afb | [
"MIT"
] | null | null | null | PySpark/BigData/dataframe-abstract.py | James-McNeill/Learning | 3c4fe1a64240cdf5614db66082bd68a2f16d2afb | [
"MIT"
] | null | null | null | PySpark/BigData/dataframe-abstract.py | James-McNeill/Learning | 3c4fe1a64240cdf5614db66082bd68a2f16d2afb | [
"MIT"
] | null | null | null | # Working with DataFrame and PySpark SQL code
# We have to use a SparkSession as the entry point when working with PySpark SQL
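# A minimal, illustrative setup for the snippets below; the app name, sample
# rows, and CSV path are assumptions, not part of the original exercise:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('dataframe-abstract').getOrCreate()
sc = spark.sparkContext
sample_list = [('Mona', 20), ('Jennifer', 34)]
file_path = 'people.csv'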
# 1. Create DataFrame using RDD
# Create an RDD from the list
rdd = sc.parallelize(sample_list)
# Create a PySpark DataFrame
names_df = spark.createDataFrame(rdd, schema=['Name', 'Age'])
# Check the type of names_df - confirms that a Spark DataFrame has been created
print("The type of names_df is", type(names_df))
# 2. Create DataFrame using CSV file
# Create a DataFrame from file_path
# inferSchema: allows the method to infer the data type for each column within the DataFrame
people_df = spark.read.csv(file_path, header=True, inferSchema=True)
# Check the type of people_df
print("The type of people_df is", type(people_df))
| 35.952381 | 92 | 0.770861 | 125 | 755 | 4.568 | 0.472 | 0.049037 | 0.063047 | 0.049037 | 0.115587 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003145 | 0.157616 | 755 | 20 | 93 | 37.75 | 0.894654 | 0.61457 | 0 | 0 | 0 | 0 | 0.192857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.4 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5b8a3fd8d6b218194875c5a13cfc25facd4537 | 4,605 | py | Python | BayOptPy/freesurfer_preprocess/original_dataset/UKBIO/tpot_model_analysis.py | Mind-the-Pineapple/tpot-age | 2969bfa6dc5c652d5b4f00f59e9b0b23869f6bef | [
"MIT"
] | 3 | 2020-04-09T16:53:54.000Z | 2020-04-21T16:49:52.000Z | BayOptPy/freesurfer_preprocess/original_dataset/UKBIO/tpot_model_analysis.py | Mind-the-Pineapple/tpot-age | 2969bfa6dc5c652d5b4f00f59e9b0b23869f6bef | [
"MIT"
] | null | null | null | BayOptPy/freesurfer_preprocess/original_dataset/UKBIO/tpot_model_analysis.py | Mind-the-Pineapple/tpot-age | 2969bfa6dc5c652d5b4f00f59e9b0b23869f6bef | [
"MIT"
] | null | null | null | import numpy as np
import pandas as pd
import os
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline, make_union
from sklearn.metrics import mean_absolute_error
from tpot.builtins import StackingEstimator
from BayOptPy.helperfunctions import (get_paths, get_data,
drop_missing_features,
set_publication_style)
"""
BANC dataset
This script tests the best model recommended by TPOT for 100 generations, random
seed 20, initial population 1000, mutation rate and cross-validation rate xxx
"""
set_publication_style()
# General Settings
#-------------------------------------------------------------------------------
debug = False
resamplefactor = 1
random_seed = 20
save_path = '/code/BayOptPy/'
# Load the clean data for both the UKBIO and the BANC analysis
# This version of the UKBIOBANK dataset contains the same columns as the BANC
# dataset
project_ukbio_wd, project_data_ukbio, _ = get_paths(debug, 'UKBIO_freesurf')
_, _, df_ukbio = \
get_data(project_data_ukbio, 'UKBIO_freesurf', debug,
project_ukbio_wd, resamplefactor, raw=False, analysis=None)
df_ukbio = df_ukbio.set_index('id')
# Drop the last column that corresponds the name of the dataset
df_ukbio = df_ukbio.drop('dataset', axis=1)
project_banc_wd, project_banc_data, _ = get_paths(debug,'BANC_freesurf')
demographics_banc, __, df_banc = get_data(project_banc_data,
'BANC_freesurf',
debug, project_banc_wd,
resamplefactor, raw=False,
analysis=None)
# Drop the last column that corresponds the name of the dataset
df_banc = df_banc.drop('dataset', axis=1)
# Get age for the BIOBANK dataset
age_UKBIO = pd.read_csv(os.path.join(project_data_ukbio, 'original_dataset',
'UKBIO','UKB_FS_age_sex.csv'))
targetAttribute_ukbio = np.array(age_UKBIO['age'])
#-------------------------------------------------------------------------------
# Train the model with BANC
#-------------------------------------------------------------------------------
data_banc = df_banc.values
# Find age for the BANC dataset
targetAttribute_banc = np.array(demographics_banc['age'])
Xtrain, Xtest, Ytrain, Ytest = train_test_split(data_banc, targetAttribute_banc,
test_size=.25,
random_state=random_seed)
print('Divided BANC dataset into test and training')
print('Check train test split sizes')
print('X_train: ' + str(Xtrain.shape))
print('X_test: ' + str(Xtest.shape))
print('Y_train: ' + str(Ytrain.shape))
print('Y_test: ' + str(Ytest.shape))
# Best pipeline recommended by TPOT
exported_pipeline = make_pipeline(
StackingEstimator(estimator=Ridge(alpha=10.0, random_state=42)),
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=True, max_features=0.7500000000000001, min_samples_leaf=4, min_samples_split=4, n_estimators=100, random_state=42)),
ExtraTreesRegressor(bootstrap=False, max_features=0.9000000000000001, min_samples_leaf=3, min_samples_split=2, n_estimators=100, random_state=42)
)
exported_pipeline.fit(Xtrain, Ytrain)
y_predicted_banc = exported_pipeline.predict(Xtest)
mae_banc = mean_absolute_error(Ytest, y_predicted_banc)
print('Print BANC MAE')
print(mae_banc)
# plot predicted vs true for the BIOBANK
fig = plt.figure()
plt.scatter(Ytest, y_predicted_banc)
plt.ylabel('Predicted Age')
plt.xlabel('True Age')
plt.savefig(os.path.join(save_path, 'banc_predicted_true_age.png'))
plt.close()
#-------------------------------------------------------------------------------
# Test the trained model on the BIOBANK
#-------------------------------------------------------------------------------
data_ukbio = df_ukbio.values
# Get the predictions on the BIOBANK dataset
y_predicted_biobank = exported_pipeline.predict(data_ukbio)
mae_biobank = mean_absolute_error(targetAttribute_ukbio, y_predicted_biobank)
print('Print UKBIO MAE')
print(mae_biobank)
# plot predicted vs true for the BIOBANK
fig = plt.figure()
plt.scatter(targetAttribute_ukbio, y_predicted_biobank)
plt.ylabel('Predicted Age')
plt.xlabel('True Age')
plt.savefig(os.path.join(save_path, 'biobank_predicted_true_age.png'))
plt.close()
| 41.116071 | 178 | 0.665147 | 574 | 4,605 | 5.099303 | 0.301394 | 0.014349 | 0.014349 | 0.016399 | 0.204305 | 0.179023 | 0.117526 | 0.117526 | 0.117526 | 0.117526 | 0 | 0.018044 | 0.169598 | 4,605 | 111 | 179 | 41.486486 | 0.747385 | 0.208686 | 0 | 0.105263 | 0 | 0 | 0.106179 | 0.016536 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.171053 | 0 | 0.171053 | 0.131579 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5c1ae4ca7d8fb0b7a3d7fe79386b189c66500d | 2,320 | py | Python | encryption/__main__.py | TheDigitalPhoenixX/Encryption-Techniques | e41714ba50c9ec77c40d6acd12eebabc95ea0012 | [
"MIT"
] | null | null | null | encryption/__main__.py | TheDigitalPhoenixX/Encryption-Techniques | e41714ba50c9ec77c40d6acd12eebabc95ea0012 | [
"MIT"
] | null | null | null | encryption/__main__.py | TheDigitalPhoenixX/Encryption-Techniques | e41714ba50c9ec77c40d6acd12eebabc95ea0012 | [
"MIT"
] | null | null | null | import argparse
import traceback
from encryption.encryption import main
from encryption.techniques.classical import caesarCipherEncrypt, caesarCipherEncryptParam
from encryption.techniques.classical import vigenereCipherEncrypt, vigenereCipherEncryptParam
from encryption.techniques.classical import vernamCipherEncrypt, vernamCipherEncryptParam
from encryption.techniques.classical import hillCipherEncrypt, hillCipherEncryptParam
from encryption.techniques.classical import playfairCipherEncrypt, playfairCipherEncryptParam
from encryption.techniques.des import DESEncrypt, DESDecrypt, DESEncryptParam
from encryption.techniques.aes import AESEncrypt, AESDecrypt, AESEncryptParam
encryptionTechniques = [
(caesarCipherEncrypt, caesarCipherEncryptParam),
(vigenereCipherEncrypt, vigenereCipherEncryptParam),
(vernamCipherEncrypt, vernamCipherEncryptParam),
(hillCipherEncrypt, hillCipherEncryptParam),
(playfairCipherEncrypt, playfairCipherEncryptParam),
(DESEncrypt, DESEncryptParam),
(DESDecrypt, DESEncryptParam),
(AESEncrypt, AESEncryptParam),
(AESDecrypt, AESEncryptParam),
]
parser = argparse.ArgumentParser(
description='A simple Python script that offers multiple simple implementations of encryption techniques.',
epilog='Source: https://github.com/TheDigitalPhoenixX/Encryption-Techniques'
)
parser.add_argument("-i", "--input", type=str, required=True,
help="Path to input file")
parser.add_argument("-o", "--output", type=str, default='output.txt',
help="Path to output file. (default: %(default)s)")
for encrypt, paramsDesc in encryptionTechniques:
    parser.add_argument(f"--{encrypt.__name__}", action='store_const', const=encrypt, dest='encrypt',
                        help=f"selects '{encrypt.__name__}' as the encryption technique to use. (parameters: {paramsDesc})")
parser.add_argument('algoParam', nargs=argparse.REMAINDER)
args = parser.parse_args()
if not args.encrypt:
    parser.error('no encryption technique selected.')
try:
    main(inputPath=args.input,
         outputPath=args.output,
         encrypt=args.encrypt,
         algorithmParam=args.algoParam)
except Exception:
    print('Exception occurred (common problem: make sure the algorithm parameters are correct):')
    traceback.print_exc()
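# Example invocation (the file names and the Caesar shift below are illustrative):
#   python -m encryption -i plain.txt -o cipher.txt --caesarCipherEncrypt 3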
| 44.615385 | 124 | 0.77069 | 221 | 2,320 | 8.022624 | 0.475113 | 0.101523 | 0.094755 | 0.093063 | 0.109983 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137069 | 2,320 | 51 | 125 | 45.490196 | 0.885614 | 0 | 0 | 0 | 0 | 0 | 0.212069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.227273 | 0 | 0.227273 | 0.045455 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5caf9726ae56347cbc14eae14c542b901dfdfd | 329 | py | Python | falcon/fprob/mh.test.py | lilanka/Falcon | 57a866b6a5e467684a3f45a36ec2b51c5bd097c0 | [
"MIT"
] | 1 | 2021-07-01T12:19:17.000Z | 2021-07-01T12:19:17.000Z | falcon/fprob/mh.test.py | lilanka/Falcon | 57a866b6a5e467684a3f45a36ec2b51c5bd097c0 | [
"MIT"
] | null | null | null | falcon/fprob/mh.test.py | lilanka/Falcon | 57a866b6a5e467684a3f45a36ec2b51c5bd097c0 | [
"MIT"
] | null | null | null | import numpy as np
from fstat.beta import Beta
from fstat.bernoulli import Bernoulli
def post(x, y, a=1, b=1):
    if 0 <= x <= 1:
        prior = Beta(a, b).pdf(x)
        like = Bernoulli(x).pmf(y)
        prob = like * prior
    else:
        prob = -np.inf
    return prob
print(post(0.8, 0.3, 2, 4))
Y = Bernoulli(np.random.rand(20)).pmf(0.7)
print(Y)
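# A minimal Metropolis-Hastings sketch using post() as the unnormalised target;
# the proposal scale, chain length, observation y=0.3 and Beta(2, 4) prior
# below are illustrative choices, not part of the original test.
def metropolis_hastings(y, n_samples=1000, step=0.1, x0=0.5, a=2, b=4):
    samples = []
    x = x0
    p_x = post(x, y, a, b)
    for _ in range(n_samples):
        proposal = x + step * np.random.randn()  # symmetric Gaussian proposal
        p_prop = post(proposal, y, a, b)
        # accept with probability min(1, p_prop / p_x); proposals outside [0, 1]
        # get -inf from post() and are always rejected
        if p_prop > 0 and (p_x <= 0 or np.random.rand() < min(1.0, p_prop / p_x)):
            x, p_x = proposal, p_prop
        samples.append(x)
    return samples

chain = metropolis_hastings(0.3)
print(np.mean(chain))  # rough posterior mean under the sketched model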
| 18.277778 | 42 | 0.632219 | 65 | 329 | 3.2 | 0.523077 | 0.086538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.05303 | 0.197568 | 329 | 17 | 43 | 19.352941 | 0.734848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.214286 | 0 | 0.357143 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad5d192a5aab9840b57e85eb36ddca6e9abfce1b | 9,125 | py | Python | infra/services/service_manager/config_watcher.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | 1 | 2018-01-02T05:47:07.000Z | 2018-01-02T05:47:07.000Z | infra/services/service_manager/config_watcher.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | infra/services/service_manager/config_watcher.py | mithro/chromium-infra | d27ac0b230bedae4bc968515b02927cf9e17c2b7 | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import glob
import json
import logging
import os
import os.path
import sys
import time
from infra.services.service_manager import service
from infra.services.service_manager import service_thread
LOGGER = logging.getLogger(__name__)
def parse_config(content, filename='<unknown>'):
"""Check that the content of a config file is valid.
Args:
content(str): content of config file, as a json string.
Keyword Args:
filename(str): filename from which the content has been read. Used to
display useful messages to the user.
Returns:
config(dict): result of parsing 'content'.
"""
config = json.loads(content)
# Make sure we have all the required keys to help the unwary user.
error_occurred = False
if 'name' not in config:
LOGGER.error("Service name must be specified in the 'name' "
"field. File: %s", filename)
error_occurred = True
if 'root_directory' not in config:
LOGGER.error("Working directory must be specified in the "
"'root_directory' field. File: %s", filename)
error_occurred = True
if 'tool' not in config and 'cmd' not in config:
LOGGER.error("Command to run must be specified in the 'cmd' "
"field. File: %s", filename)
error_occurred = True
if 'cmd' in config and ('tool' in config or 'args' in config):
LOGGER.error("Command to run must be specified with the 'tool' and "
"'args' fields OR with the 'cmd' field, not both. "
"File: %s", filename)
error_occurred = True
if 'cmd' in config and not isinstance(config['cmd'], list):
LOGGER.error("'cmd' must be a list with executable name and arguments. "
"File: %s", filename)
error_occurred = True
# Make sure all arguments are passed as strings.
if 'cmd' in config:
config['cmd'] = [str(x) for x in config['cmd']]
# 'args', 'environment', 'resources', 'stop_time', 'working_directory' are
# optional
# We gathered enough errors, bail out.
if error_occurred:
return None
### Past this point, 'config' contains all required fields. ###
### i.e. name, root_directory, (tool(args)*)|cmd ###
# TODO(pgervais): this is deprecated. Remove when all configs have been
# ported to the new system.
if 'tool' in config:
LOGGER.warning("The 'tool' field is deprecated. Use 'cmd' instead. "
"File: %s", filename)
# Convert to the new format.
# This is compatibility code, os-specific parts were originally living in
# service.py.
executable = [os.path.join(config['root_directory'], 'run.py'),
config['tool']]
if sys.platform == 'win32': # pragma: no cover
# Prepend the path to the Python interpreter on windows.
executable.insert(
0, os.path.join(config['root_directory'],
os.path.join('ENV', 'Scripts', 'python.exe')))
# Using unicode for the key so as to be consistent with json.load
config[u'cmd'] = executable + config.get('args', [])
del config['tool']
if 'args' in config:
del config['args']
assert 'cmd' in config
assert 'args' not in config
assert 'tool' not in config
return config
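# For reference, a minimal config that passes the checks above (the service
# name, directory, and command are illustrative):
#
#   {
#     "name": "example-service",
#     "root_directory": "/opt/example",
#     "cmd": ["/opt/example/run.sh", "--verbose"]
#   }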
def load_config(filename):
try:
with open(filename) as fh:
config = parse_config(fh.read(), filename=filename)
except Exception:
LOGGER.exception('Error opening or parsing %s', filename)
return None
return config
class _Metadata(object):
def __init__(self, mtime, config=None, thread=None):
self.mtime = mtime
self.config = config
self.thread = thread
class ConfigWatcher(object):
"""Polls a directory for .json files describing services to be run.
Tries to keep the running services in sync with the config files - services
are started immediately when valid configs are added, restarted when their
configs change (adding or removing args for example), and stopped when the
configs are deleted.
"""
def __init__(self, config_directory, config_poll_interval,
service_poll_interval, state_directory, root_directory,
cloudtail, _sleep_fn=time.sleep):
"""
Args:
config_directory(str): Directory containing .json config files to monitor.
config_poll_interval(int): How often (in seconds) to poll config_directory
for changes.
service_poll_interval(int): How often (in seconds) to restart failed
services.
state_directory(str): A file will be created in this directory (with the
same name as the service) when it is running containing its PID and
starttime.
cloudtail (CloudtailFactory): An object that knows how to start cloudtail.
"""
self._config_glob = os.path.join(config_directory, '*.json')
self._config_poll_interval = config_poll_interval
self._service_poll_interval = service_poll_interval
self._state_directory = state_directory
self._cloudtail = cloudtail
self._metadata = {} # Filename -> _Metadata
self._services = {} # Service name -> Filename
self._stop = False
self._sleep_fn = _sleep_fn
self._own_service = service.OwnService(state_directory, root_directory)
def run(self):
"""Runs continuously in this thread until stop() is called."""
if not self._own_service.start(): # pragma: no cover
# Another instance is already running. Exit immediately to prevent the
# ts_mon.close() in BaseApplication from being called.
os._exit(0)
while not self._stop:
self._iteration()
if not self._stop: # pragma: no cover
self._sleep_fn(self._config_poll_interval)
def _iteration(self):
"""Runs one iteration of the loop. Useful for testing."""
own_state = self._own_service.get_running_process_state()
if self._own_service.has_version_changed(own_state):
logging.info("The service_manager's version has changed, exiting")
self.stop()
return
files = set(glob.glob(self._config_glob))
for filename in files:
mtime = os.path.getmtime(filename)
if filename in self._metadata:
metadata = self._metadata[filename]
if mtime == metadata.mtime:
continue
self._config_changed(filename, metadata, mtime)
else:
self._config_added(filename, mtime)
for filename, metadata in self._metadata.iteritems():
if filename not in files and metadata.config is not None:
self._config_removed(metadata)
def stop(self):
"""Signals that run() should stop on its next iteration."""
self._stop = True
for metadata in self._metadata.values():
if metadata.thread is not None:
metadata.thread.stop()
def _config_added(self, filename, mtime):
config = load_config(filename)
if config is None:
# Add a bad metadata entry so we don't call _config_added again every
# time we read it.
self._metadata[filename] = _Metadata(mtime)
return
if config['name'] in self._services:
LOGGER.error('Duplicate service name "%s" (defined in %s and %s)' % (
config['name'], self._services[config['name']], filename))
return
LOGGER.info('Adding new service config for %s', config['name'])
thread = service_thread.ServiceThread(
self._service_poll_interval,
self._state_directory,
config,
self._cloudtail)
thread.start()
thread.start_service()
self._metadata[filename] = _Metadata(mtime, config, thread)
self._services[config['name']] = filename
def _config_changed(self, filename, metadata, new_mtime):
if metadata.config is not None:
del self._services[metadata.config['name']]
metadata.config = load_config(filename)
metadata.mtime = new_mtime
if (metadata.config is not None and
metadata.config['name'] in self._services):
LOGGER.error('Duplicate service name "%s" (defined in %s and %s)' % (
metadata.config['name'],
self._services[metadata.config['name']],
filename))
metadata.config = None
if metadata.config is None:
if metadata.thread is not None:
metadata.thread.stop_service()
return
LOGGER.info('Updating service config for %s', metadata.config['name'])
if metadata.thread is None:
metadata.thread = service_thread.ServiceThread(
self._service_poll_interval,
self._state_directory,
metadata.config,
self._cloudtail)
metadata.thread.start()
metadata.thread.start_service()
else:
metadata.thread.restart_with_new_config(metadata.config)
self._services[metadata.config['name']] = filename
def _config_removed(self, metadata):
LOGGER.info('Removing service config for %s', metadata.config['name'])
del self._services[metadata.config['name']]
metadata.config = None
metadata.mtime = None
metadata.thread.stop_service()
| 33.181818 | 80 | 0.671452 | 1,198 | 9,125 | 4.979967 | 0.232053 | 0.039893 | 0.024137 | 0.015085 | 0.261482 | 0.204827 | 0.171136 | 0.144653 | 0.093195 | 0.07878 | 0 | 0.00114 | 0.230904 | 9,125 | 274 | 81 | 33.30292 | 0.84896 | 0.265534 | 0 | 0.216867 | 0 | 0 | 0.139287 | 0 | 0 | 0 | 0 | 0.00365 | 0.018072 | 1 | 0.060241 | false | 0 | 0.054217 | 0 | 0.174699 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad61225eaddb1fc5325257e8d2379de2855d0509 | 961 | py | Python | experiments/simple.py | ice-stuff/ice-shell | a64830dde4d240247823da15d62da52fb94bddd3 | [
"MIT"
] | 1 | 2021-02-11T11:32:44.000Z | 2021-02-11T11:32:44.000Z | experiments/simple.py | ice-stuff/ice-shell | a64830dde4d240247823da15d62da52fb94bddd3 | [
"MIT"
] | null | null | null | experiments/simple.py | ice-stuff/ice-shell | a64830dde4d240247823da15d62da52fb94bddd3 | [
"MIT"
] | null | null | null | import ice # iCE package
from fabric import api as fab # Fabric API
@ice.Runner
def run(instances):
    """A sample iCE runner. It gets the hostnames of all instances and
    prints them out.

    :param instances: List of entities.Instance objects.
    """
    # Get hostnames of all instances, through fab.execute
    # First argument: Python function
    # Second argument: List of hosts
    # It returns a dictionary with the task result as value.
    hostnames = fab.execute(get_hostname, instances)

    # Prints
    for key in hostnames:
        print(hostnames[key])


@ice.Task
def get_hostname(instances):
    """A simple iCE task. It returns the FQDN hostname of the remote
    instance.

    :param instances: List of entities.Instance objects.
    :rtype: str
    :return: The FQDN hostname.
    """
    # Get the FQDN hostname from each node
    hostname = fab.run('hostname -f')
    return hostname
| 27.457143 | 70 | 0.654527 | 126 | 961 | 4.97619 | 0.460317 | 0.028708 | 0.07177 | 0.073365 | 0.137161 | 0.137161 | 0.137161 | 0 | 0 | 0 | 0 | 0 | 0.273673 | 961 | 34 | 71 | 28.264706 | 0.898281 | 0.574402 | 0 | 0 | 0 | 0 | 0.030812 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.181818 | 0 | 0.454545 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ad63d3ded7abdb9de1259e55f13f2e57632c03f2 | 568 | py | Python | bot/cogs/util.py | Aarooy/progcount | f968db012e5cfd07f6c4df7e49abe74631ed2b10 | [
"Unlicense"
] | null | null | null | bot/cogs/util.py | Aarooy/progcount | f968db012e5cfd07f6c4df7e49abe74631ed2b10 | [
"Unlicense"
] | null | null | null | bot/cogs/util.py | Aarooy/progcount | f968db012e5cfd07f6c4df7e49abe74631ed2b10 | [
"Unlicense"
] | null | null | null | from discord.ext.commands import Cog, Bot, command, Context
from bot.knotbot import Knotbot
from bot.util.other import get_mentions
class Util(Cog):
    def __init__(self, bot: Knotbot) -> None:
        self.bot = bot

    @command(name='avatar')
    async def avatar(self, ctx: Context, arg1):
        mentions = get_mentions(ctx, arg1, count=1)
        if mentions is None:
            await ctx.send("You have to mention 1 person, khey")
            return
        await ctx.send(mentions[0].avatar_url)


def setup(bot: Bot) -> None:
    bot.add_cog(Util(bot))
| 25.818182 | 64 | 0.649648 | 82 | 568 | 4.402439 | 0.5 | 0.055402 | 0.066482 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.011601 | 0.241197 | 568 | 21 | 65 | 27.047619 | 0.825986 | 0 | 0 | 0 | 0 | 0 | 0.070423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.133333 | false | 0 | 0.2 | 0 | 0.466667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |