hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2a84979de1533c78abe4370a0bdbe57e64930ce4 | 2,300 | py | Python | virtual/lib/python3.9/site-packages/pyuploadcare/secure_url.py | alex-mu/Neighborhood-watch | 13a4926a59a924f84c5560966ca686168efa054e | [
"MIT"
] | 85 | 2015-01-14T21:37:58.000Z | 2022-03-16T07:15:41.000Z | virtual/lib/python3.9/site-packages/pyuploadcare/secure_url.py | alex-mu/Neighborhood-watch | 13a4926a59a924f84c5560966ca686168efa054e | [
"MIT"
] | 78 | 2015-01-15T23:44:15.000Z | 2022-03-21T12:05:26.000Z | virtual/lib/python3.9/site-packages/pyuploadcare/secure_url.py | alex-mu/Neighborhood-watch | 13a4926a59a924f84c5560966ca686168efa054e | [
"MIT"
] | 34 | 2015-01-13T16:06:29.000Z | 2021-08-09T12:38:06.000Z | import hashlib
import hmac
import time
from abc import ABC, abstractmethod
from typing import Optional
class BaseSecureUrlBuilder(ABC):
@abstractmethod
def build(self, uuid: str) -> str:
raise NotImplementedError
class AkamaiSecureUrlBuilder(BaseSecureUrlBuilder):
"""Akamai secure url builder.
See https://uploadcare.com/docs/security/secure_delivery/
for more details.
"""
template = "https://{cdn}/{uuid}/?token={token}"
field_delimeter = "~"
def __init__(
self,
cdn_url: str,
secret_key: str,
window: int = 300,
hash_algo=hashlib.sha1,
):
self.secret_key = secret_key
self.cdn_url = cdn_url
self.window = window
self.hash_algo = hash_algo
def build(self, uuid: str) -> str:
uuid = uuid.lstrip("/").rstrip("/")
expire = self._build_expire_time()
acl = self._format_acl(uuid)
signature = self._build_signature(expire, acl)
secure_url = self._build_url(uuid, expire, acl, signature)
return secure_url
def _build_url(
self,
uuid: str,
expire: int,
acl: str,
signature: str,
) -> str:
req_parameters = [
f"exp={expire}",
f"acl={acl}",
f"hmac={signature}",
]
token = self.field_delimeter.join(req_parameters)
return self.template.format(
cdn=self.cdn_url,
uuid=uuid,
token=token,
)
def _build_token(self, expire: int, acl: Optional[str], signature: str):
token_parts = [
f"exp={expire}",
f"acl={acl}",
f"hmac={signature}",
]
return self.field_delimeter.join(token_parts)
def _format_acl(self, uuid: str) -> str:
return f"/{uuid}/"
def _build_expire_time(self) -> int:
return int(time.time()) + self.window
def _build_signature(self, expire: int, acl: str) -> str:
hash_source = [
f"exp={expire}",
f"acl={acl}",
]
signature = hmac.new(
self.secret_key.encode(),
self.field_delimeter.join(hash_source).encode(),
self.hash_algo,
).hexdigest()
return signature
| 23.958333 | 76 | 0.566087 | 256 | 2,300 | 4.894531 | 0.25 | 0.038308 | 0.035116 | 0.03352 | 0.098164 | 0.098164 | 0.049481 | 0.049481 | 0.049481 | 0 | 0 | 0.002543 | 0.316087 | 2,300 | 95 | 77 | 24.210526 | 0.794024 | 0.044783 | 0 | 0.171429 | 0 | 0 | 0.064738 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.114286 | false | 0 | 0.071429 | 0.028571 | 0.328571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
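The `AkamaiSecureUrlBuilder` class above signs CDN URLs by joining three `~`-separated token fields (`exp`, `acl`, `hmac`). A minimal usage sketch follows; it assumes the class is importable from `pyuploadcare.secure_url` as the file path suggests, and the CDN host, secret key and UUID are placeholders:

from pyuploadcare.secure_url import AkamaiSecureUrlBuilder  # assumed import path

# Placeholder credentials -- substitute a real CDN host and Akamai secret key.
builder = AkamaiSecureUrlBuilder(
    cdn_url="cdn.example.com",
    secret_key="akamai-secret-key",
    window=300,  # token lifetime in seconds
)

# Yields https://cdn.example.com/<uuid>/?token=exp=...~acl=/<uuid>/~hmac=...
print(builder.build("11111111-2222-3333-4444-555555555555"))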
2a84dba0aa5b5ad17a31e103f49afabef1f46f07 | 2,228 | py | Python | src/lexer.py | lorenzofelletti/mathexpparser | e365a7d6d025c3419da2f256b42eb93ebdd1299e | [
"MIT"
] | null | null | null | src/lexer.py | lorenzofelletti/mathexpparser | e365a7d6d025c3419da2f256b42eb93ebdd1299e | [
"MIT"
] | null | null | null | src/lexer.py | lorenzofelletti/mathexpparser | e365a7d6d025c3419da2f256b42eb93ebdd1299e | [
"MIT"
] | null | null | null | from .tokens import *
import numpy as np
class Lexer:
def __init__(self):
self.__digits__ = '0123456789'
self.__ops__ = '+-*/^'
self.__unary_minus_equivalent__ = [ElementToken(-1), MulOp()]
def __is_digit__(self, ch):
return self.__digits__.find(ch) >= 0
def __is_op__(self, ch):
return self.__ops__.find(ch) >= 0
def scan(self, exp):
# eliminate all whitespace
exp = exp.replace(' ', '')
tokens = np.array([])
def append_num():
nonlocal num
nonlocal tokens
if len(num) == 0:
return
num_tkn = ElementToken(int(num))
num = ''
tokens = np.append(tokens, num_tkn)
def append_op(op):
nonlocal tokens
if op == '+':
op_tkn = PlusOp()
elif op == '-':
op_tkn = MinOp()
elif op == '*':
op_tkn = MulOp()
elif op == '/':
op_tkn = DivOp()
elif op == '^':
op_tkn = PowOp()
else:
return
tokens = np.append(tokens, op_tkn)
i = 0
num = ''
while i < len(exp):
if self.__is_digit__(exp[i]):
num += exp[i]
elif self.__is_op__(exp[i]):
# check if it is unary +/-
if i == 0 or self.__is_op__(exp[i-1]) or exp[i-1] == '(':
if exp[i] == '+':
i += 1
continue
elif exp[i] == '-':
tokens = np.append(
tokens, self.__unary_minus_equivalent__)
i += 1
continue
append_num()
append_op(op=exp[i])
elif exp[i] == '(':
tokens = np.append(tokens, LeftParenthesis())
elif exp[i] == ')':
append_num()
tokens = np.append(tokens, RightParenthesis())
else:
raise Exception('Unidentified character.')
i += 1
append_num()
return tokens
| 27.85 | 73 | 0.417415 | 219 | 2,228 | 3.922374 | 0.269406 | 0.046566 | 0.08149 | 0.116414 | 0.146682 | 0.065192 | 0.065192 | 0 | 0 | 0 | 0 | 0.017588 | 0.464093 | 2,228 | 79 | 74 | 28.202532 | 0.701843 | 0.021993 | 0 | 0.25 | 0 | 0 | 0.022518 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.09375 | false | 0 | 0.03125 | 0.03125 | 0.21875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
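To illustrate how the scanner above rewrites a unary minus as `ElementToken(-1)` followed by `MulOp()`, here is a hedged usage sketch. It assumes the sibling `tokens` module (not shown in this row) defines the `ElementToken`, operator and parenthesis classes, and that the package can be imported as `src.lexer`:

from src.lexer import Lexer  # assumed package layout

lexer = Lexer()
# Whitespace is stripped first; the leading '-' becomes ElementToken(-1), MulOp(),
# so the stream is: [ElementToken(-1), MulOp(), ElementToken(3), MulOp(),
#                    LeftParenthesis(), ElementToken(2), PlusOp(), ElementToken(5),
#                    RightParenthesis()]
tokens = lexer.scan('-3 * (2 + 5)')
print(tokens)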
2a8a010026b7d91ea844e13cb78a58eb17bdb454 | 13,020 | py | Python | src/utils.py | jlnbtz/DLAS_speech_tokenizer | 5331fa169a9bf30c1b8fb14fdcaa8f9cbb185f1e | [
"Apache-2.0"
] | 1 | 2019-01-13T18:44:10.000Z | 2019-01-13T18:44:10.000Z | src/utils.py | julianbetz/DLAS_speech_tokenizer | 5331fa169a9bf30c1b8fb14fdcaa8f9cbb185f1e | [
"Apache-2.0"
] | 2 | 2019-01-13T19:12:32.000Z | 2019-01-13T19:14:15.000Z | src/utils.py | julianbetz/DLAS_speech_tokenizer | 5331fa169a9bf30c1b8fb14fdcaa8f9cbb185f1e | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: Benjamin Milde
"""
license = '''
Copyright 2017,2018 Benjamin Milde (Language Technology, Universität Hamburg, Germany)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.'''
import wave
import numpy as np
import scipy
import os
import scipy.io.wavfile
import tensorflow as tf
import os.path
import gzip
import bz2
import wavefile
from collections import defaultdict
def smart_open(filename, mode = 'rb', *args, **kwargs):
'''
Opens a file "smartly":
* If the filename has a ".gz" or ".bz2" extension, compression is handled
automatically;
* If the file is to be read and does not exist, corresponding files with
a ".gz" or ".bz2" extension will be attempted.
'''
readers = {'.gz': gzip.GzipFile, '.bz2': bz2.BZ2File}
if 'r' in mode and not os.path.exists(filename):
for ext in readers:
if os.path.exists(filename + ext):
filename += ext
break
extension = os.path.splitext(filename)[1]
return readers.get(extension, open)(filename, mode, *args, **kwargs)
#compresses the dynamic range, see https://en.wikipedia.org/wiki/%CE%9C-law_algorithm
def encode_mulaw(signal,mu=255):
return np.sign(signal)*(np.log1p(mu*np.abs(signal)) / np.log1p(mu))
#uncompress the dynamic range, see https://en.wikipedia.org/wiki/%CE%9C-law_algorithm
def decode_mulaw(signal,mu=255):
return np.sign(signal)*(1.0/mu)*(np.power(1.0+mu,np.abs(signal))-1.0)
# discretize signal between -1.0 and 1.0 into mu+1 bands.
def discretize(signal, mu=255.0):
output = np.array(signal)
output += 1.0
output = output*(0.5*mu)
signal = np.fmax(0.0,output)
#signal = np.fmin(255.0,signal)
return signal.astype(np.int32)
def undiscretize(signal, mu=255.0):
output = np.array(signal)
output = output.astype(np.float32)
output /= 0.5*mu
output -= 1.0
signal = np.fmax(-1.0,output)
signal = np.fmin(1.0,signal)
return signal
def readWordPosFile(filename,pos1=0,pos2=1):
unalign_list = []
with open(filename) as f:
for line in f.readlines():
split = line[:-1].split(" ")
unalign_list.append((float(split[pos1]), float(split[pos2])))
return unalign_list
def ensure_dir(f):
d = os.path.dirname(f)
if not os.path.exists(d):
os.makedirs(d)
def loadIdFile(idfile,use_no_files=-1):
ids = []
with open(idfile) as f:
ids = f.read().split('\n')[:use_no_files]
ids = [myid for myid in ids if myid != '']
if len(ids[0].split()) > 1:
utt_ids = []
wav_files = []
for myid in ids:
print(myid)
split = myid.split()
utt_ids.append(split[0])
wav_files.append(split[1])
else:
utt_ids = []
wav_files = ids
#check if ids exist
#ids = [myid for myid in ids if os.path.ispath(myid)]
return utt_ids, wav_files
def loadPhnFile(phn_file):
positions = []
names = []
with open(phn_file) as phn:
for line in phn:
if line[-1] == '\n':
line = line[:-1]
split = line.split()
pos = (split[0],split[1])
name = split[-1]
positions.append(pos)
names.append(name)
return positions,names
def loadUtt2Spk(utt_filename):
utts = {}
with open(utt_filename) as utt_file:
for line in utt_file:
if line[-1] == '\n':
line = line[:-1]
split = line.split()
utt = split[0]
spk = split[1]
utts[utt] = spk
return utts
def loadSpk2Utt(utt_filename, ignore_dash_spk_id=True):
spks = defaultdict(list)
with open(utt_filename) as utt_file:
for line in utt_file:
if line[-1] == '\n':
line = line[:-1]
split = line.split()
spk = split[0]
if ignore_dash_spk_id and '-' in spk:
spk = spk.split('-')[0]
utt = split[1:]
spks[spk] += utt
return spks
def getSignalOld(utterance):
spf = wave.open(utterance, 'r')
sound_info = spf.readframes(-1)
signal = np.fromstring(sound_info, 'Int16')
return signal, spf.getframerate()
# This is needed since the old loader had problems with NIST headers from TIMIT.
# See also https://stackoverflow.com/questions/10187043/read-nist-wav-file-in-timit-database-into-python-numpy-array
def getSignal(utterance):
samplerate, signal = wavefile.load(utterance)
print(signal)
signal = signal[0]
#print(utterance, 'dtype:', signal.dtype, 'min:', min(signal), 'max:', max(signal), 'samplerate:', samplerate)
return signal, samplerate
def writeSignal(signal, myfile, rate=16000, do_decode_mulaw=False):
if do_decode_mulaw:
signal = decode_mulaw(signal)
return scipy.io.wavfile.write(myfile, rate, signal)
def rolling_window(a, window_len, hop):
print("a.shape[:-1]", a.shape[:-1])
print("a.shape[-1]", a.shape[-1])
shape = a.shape[:-1] + (a.shape[-1] - window_len + 1, window_len)
strides = a.strides + (a.strides[-1],)
print('shape:',shape)
print('strides:',strides)
return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)[::hop]
# This code is from https://gist.github.com/seberg/3866040, public domain?
# This function is not licensed under Apache 2.0
def rolling_window_better(array, window=(0,), asteps=None, wsteps=None, axes=None, toend=True):
"""Create a view of `array` which for every point gives the n-dimensional
neighbourhood of size window. New dimensions are added at the end of
`array` or after the corresponding original dimension.
Parameters
----------
array : array_like
Array to which the rolling window is applied.
window : int or tuple
Either a single integer to create a window of only the last axis or a
tuple to create it for the last len(window) axes. 0 can be used as a
to ignore a dimension in the window.
asteps : tuple
Aligned at the last axis, new steps for the original array, ie. for
creation of non-overlapping windows. (Equivalent to slicing result)
wsteps : int or tuple (same size as window)
steps for the added window dimensions. These can be 0 to repeat values
along the axis.
axes: int or tuple
If given, must have the same size as window. In this case window is
interpreted as the size in the dimension given by axes. IE. a window
of (2, 1) is equivalent to window=2 and axis=-2.
toend : bool
If False, the new dimensions are right after the corresponding original
dimension, instead of at the end of the array. Adding the new axes at the
end makes it easier to get the neighborhood, however toend=False will give
a more intuitive result if you view the whole array.
Returns
-------
A view on `array` which is smaller to fit the windows and has windows added
dimensions (0s not counting), ie. every point of `array` is an array of size
window.
Examples
--------
>>> a = np.arange(9).reshape(3,3)
>>> rolling_window(a, (2,2))
array([[[[0, 1],
[3, 4]],
[[1, 2],
[4, 5]]],
[[[3, 4],
[6, 7]],
[[4, 5],
[7, 8]]]])
Or to create non-overlapping windows, but only along the first dimension:
>>> rolling_window(a, (2,0), asteps=(2,1))
array([[[0, 3],
[1, 4],
[2, 5]]])
Note that the 0 is discarded, so that the output dimension is 3:
>>> rolling_window(a, (2,0), asteps=(2,1)).shape
(1, 3, 2)
This is useful for example to calculate the maximum in all (overlapping)
2x2 submatrices:
>>> rolling_window(a, (2,2)).max((2,3))
array([[4, 5],
[7, 8]])
Or delay embedding (3D embedding with delay 2):
>>> x = np.arange(10)
>>> rolling_window(x, 3, wsteps=2)
array([[0, 2, 4],
[1, 3, 5],
[2, 4, 6],
[3, 5, 7],
[4, 6, 8],
[5, 7, 9]])
"""
array = np.asarray(array)
orig_shape = np.asarray(array.shape)
window = np.atleast_1d(window).astype(int) # maybe crude to cast to int...
if axes is not None:
axes = np.atleast_1d(axes)
w = np.zeros(array.ndim, dtype=int)
for axis, size in zip(axes, window):
w[axis] = size
window = w
# Check if window is legal:
if window.ndim > 1:
raise ValueError("`window` must be one-dimensional.")
if np.any(window < 0):
raise ValueError("All elements of `window` must be non-negative.")
if len(array.shape) < len(window):
raise ValueError("`window` length must be less than or equal to the `array` dimension.")
_asteps = np.ones_like(orig_shape)
if asteps is not None:
asteps = np.atleast_1d(asteps)
if asteps.ndim != 1:
raise ValueError("`asteps` must be either a scalar or one dimensional.")
if len(asteps) > array.ndim:
raise ValueError("`asteps` cannot be longer than the `array` dimension.")
# does not enforce alignment, so that steps can be same as window too.
_asteps[-len(asteps):] = asteps
if np.any(asteps < 1):
raise ValueError("All elements of `asteps` must be at least 1.")
asteps = _asteps
_wsteps = np.ones_like(window)
if wsteps is not None:
wsteps = np.atleast_1d(wsteps)
if wsteps.shape != window.shape:
raise ValueError("`wsteps` must have the same shape as `window`.")
if np.any(wsteps < 0):
raise ValueError("All elements of `wsteps` must be non-negative.")
_wsteps[:] = wsteps
_wsteps[window == 0] = 1 # make sure that steps are 1 for non-existing dims.
wsteps = _wsteps
# Check that the window would not be larger then the original:
if np.any(orig_shape[-len(window):] < window * wsteps):
raise ValueError("`window` * `wsteps` larger than `array` in at least one dimension.")
new_shape = orig_shape # just renaming...
# For calculating the new shape 0s must act like 1s:
_window = window.copy()
_window[_window==0] = 1
new_shape[-len(window):] += wsteps - _window * wsteps
new_shape = (new_shape + asteps - 1) // asteps
# make sure the new_shape is at least 1 in any "old" dimension (ie. steps
# is (too) large, but we do not care.
new_shape[new_shape < 1] = 1
shape = new_shape
strides = np.asarray(array.strides)
strides *= asteps
new_strides = array.strides[-len(window):] * wsteps
# The full new shape and strides:
if toend:
new_shape = np.concatenate((shape, window))
new_strides = np.concatenate((strides, new_strides))
else:
_ = np.zeros_like(shape)
_[-len(window):] = window
_window = _.copy()
_[-len(window):] = new_strides
_new_strides = _
new_shape = np.zeros(len(shape)*2, dtype=int)
new_strides = np.zeros(len(shape)*2, dtype=int)
new_shape[::2] = shape
new_strides[::2] = strides
new_shape[1::2] = _window
new_strides[1::2] = _new_strides
new_strides = new_strides[new_shape != 0]
new_shape = new_shape[new_shape != 0]
return np.lib.stride_tricks.as_strided(array, shape=new_shape, strides=new_strides)
def writeArkTextFeatFile(feat, feat_name, out_filename, append = False):
    # Kaldi-style text archive: "<name> [", one feature vector per line, closed with "]".
    with open(out_filename, 'a' if append else 'w') as out_file:
        out_file.write(feat_name + ' [\n')
        for feat_vec in feat:
            feat_vec_str = ' '.join([str(elem) for elem in feat_vec])
            out_file.write(feat_vec_str + '\n')
        out_file.write(']\n')
def writeZeroSpeechFeatFile(feat, out_filename, window_length, hop_size):
ensure_dir(out_filename)
with open(out_filename, 'w') as out_file:
for i,feat_vec in enumerate(feat):
pos = i * hop_size + (window_length / 2.0)
feat_vec_str = ' '.join([str(elem) for elem in feat_vec])
out_file.write(str(pos) + ' ' + feat_vec_str + '\n')
def tensor_normalize_0_to_1(in_tensor):
x_min = tf.reduce_min(in_tensor)
x_max = tf.reduce_max(in_tensor)
tensor_0_to_1 = ((in_tensor - x_min) / (x_max - x_min))
return tensor_0_to_1
| 34.812834 | 116 | 0.613287 | 1,872 | 13,020 | 4.174679 | 0.223291 | 0.01945 | 0.005374 | 0.007678 | 0.155214 | 0.121177 | 0.104798 | 0.082278 | 0.060525 | 0.051056 | 0 | 0.026917 | 0.266667 | 13,020 | 373 | 117 | 34.906166 | 0.791579 | 0.30023 | 0 | 0.095023 | 0 | 0 | 0.129511 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085973 | false | 0 | 0.049774 | 0.00905 | 0.208145 | 0.027149 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
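The mu-law comments near the top of `utils.py` describe a compress-then-quantise pipeline: `encode_mulaw` boosts small amplitudes, `discretize` maps [-1, 1] into mu+1 integer bands, and the inverse pair undoes both steps. A small round-trip sketch, assuming the module is importable as `src.utils` (the import path is a guess based on the file path):

import numpy as np
from src.utils import encode_mulaw, decode_mulaw, discretize, undiscretize  # assumed path

# A quiet sine wave: mu-law companding keeps the 8-bit quantisation error small
# relative to the signal even at low amplitudes.
signal = 0.05 * np.sin(np.linspace(0.0, 2.0 * np.pi, 160)).astype(np.float32)

quantised = discretize(encode_mulaw(signal))      # int32 values in [0, 255]
restored = decode_mulaw(undiscretize(quantised))  # back to floats in [-1, 1]

print("max round-trip error:", np.max(np.abs(signal - restored)))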
2a8a1e65cb3d858e2140c67a52a29f7ebdda8222 | 4,579 | py | Python | test/test_namespace.py | delocalizer/rdflib | 6534d8c1cb0e8fd96864c10280c0c80a42f7a5e9 | [
"BSD-3-Clause"
] | null | null | null | test/test_namespace.py | delocalizer/rdflib | 6534d8c1cb0e8fd96864c10280c0c80a42f7a5e9 | [
"BSD-3-Clause"
] | null | null | null | test/test_namespace.py | delocalizer/rdflib | 6534d8c1cb0e8fd96864c10280c0c80a42f7a5e9 | [
"BSD-3-Clause"
] | null | null | null | import unittest
from rdflib.graph import Graph
from rdflib.namespace import Namespace, FOAF, RDF, RDFS, SH
from rdflib.term import URIRef
class NamespacePrefixTest(unittest.TestCase):
def test_compute_qname(self):
"""Test sequential assignment of unknown prefixes"""
g = Graph()
self.assertEqual(
g.compute_qname(URIRef("http://foo/bar/baz")),
("ns1", URIRef("http://foo/bar/"), "baz"),
)
self.assertEqual(
g.compute_qname(URIRef("http://foo/bar#baz")),
("ns2", URIRef("http://foo/bar#"), "baz"),
)
# should skip to ns4 when ns3 is already assigned
g.bind("ns3", URIRef("http://example.org/"))
self.assertEqual(
g.compute_qname(URIRef("http://blip/blop")),
("ns4", URIRef("http://blip/"), "blop"),
)
# should return empty qnames correctly
self.assertEqual(
g.compute_qname(URIRef("http://foo/bar/")),
("ns1", URIRef("http://foo/bar/"), ""),
)
def test_reset(self):
data = (
"@prefix a: <http://example.org/a> .\n"
"a: <http://example.org/b> <http://example.org/c> ."
)
graph = Graph().parse(data=data, format="turtle")
for p, n in tuple(graph.namespaces()):
graph.store._Memory__namespace.pop(p)
graph.store._Memory__prefix.pop(n)
graph.namespace_manager.reset()
self.assertFalse(tuple(graph.namespaces()))
u = URIRef("http://example.org/a")
prefix, namespace, name = graph.namespace_manager.compute_qname(
u, generate=True
)
self.assertNotEqual(namespace, u)
def test_reset_preserve_prefixes(self):
data = (
"@prefix a: <http://example.org/a> .\n"
"a: <http://example.org/b> <http://example.org/c> ."
)
graph = Graph().parse(data=data, format="turtle")
graph.namespace_manager.reset()
self.assertTrue(tuple(graph.namespaces()))
u = URIRef("http://example.org/a")
prefix, namespace, name = graph.namespace_manager.compute_qname(
u, generate=True
)
self.assertEqual(namespace, u)
def test_n3(self):
g = Graph()
g.add(
(
URIRef("http://example.com/foo"),
URIRef("http://example.com/bar"),
URIRef("http://example.com/baz"),
)
)
n3 = g.serialize(format="n3", encoding='latin-1')
# Gunnar disagrees that this is right:
# self.assertTrue("<http://example.com/foo> ns1:bar <http://example.com/baz> ." in n3)
# as this is much prettier, and ns1 is already defined:
self.assertTrue(b"ns1:foo ns1:bar ns1:baz ." in n3)
def test_n32(self):
# this test not generating prefixes for subjects/objects
g = Graph()
g.add(
(
URIRef("http://example1.com/foo"),
URIRef("http://example2.com/bar"),
URIRef("http://example3.com/baz"),
)
)
n3 = g.serialize(format="n3", encoding="latin-1")
self.assertTrue(
b"<http://example1.com/foo> ns1:bar <http://example3.com/baz> ."
in n3
)
def test_closed_namespace(self):
"""Tests terms both in an out of the ClosedNamespace FOAF"""
def add_not_in_namespace(s):
return FOAF[s]
# a non-existent FOAF property
self.assertRaises(KeyError, add_not_in_namespace, "blah")
# a property name within the FOAF namespace
self.assertEqual(
add_not_in_namespace("givenName"),
URIRef("http://xmlns.com/foaf/0.1/givenName"),
)
def test_contains_method(self):
"""Tests for Namespace.__contains__() methods."""
ref = URIRef('http://www.w3.org/ns/shacl#example')
self.assertTrue(type(SH) == Namespace, "SH no longer a Namespace, update test.")
self.assertTrue(ref in SH, "sh:example not in SH")
ref = URIRef('http://www.w3.org/2000/01/rdf-schema#label')
self.assertTrue(ref in RDFS, "ClosedNamespace(RDFS) does not include rdfs:label")
ref = URIRef('http://www.w3.org/2000/01/rdf-schema#example')
self.assertFalse(ref in RDFS, "ClosedNamespace(RDFS) includes out-of-ns member rdfs:example")
ref = URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type')
self.assertTrue(ref in RDF, "_RDFNamespace does not include rdf:type") | 36.34127 | 101 | 0.571959 | 554 | 4,579 | 4.651625 | 0.265343 | 0.085371 | 0.048894 | 0.037253 | 0.412107 | 0.320528 | 0.288708 | 0.273962 | 0.273962 | 0.256888 | 0 | 0.016944 | 0.278227 | 4,579 | 126 | 102 | 36.34127 | 0.762784 | 0.116401 | 0 | 0.270833 | 0 | 0.041667 | 0.261928 | 0.010437 | 0 | 0 | 0 | 0 | 0.177083 | 1 | 0.083333 | false | 0 | 0.041667 | 0.010417 | 0.145833 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
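Outside the unittest harness, the prefix behaviour exercised above reduces to a couple of rdflib calls; a minimal sketch with placeholder URIs (rdflib must be installed):

from rdflib import Graph, URIRef

g = Graph()
g.bind("ex", URIRef("http://example.org/"))
# Unknown namespaces receive sequential ns1, ns2, ... prefixes, as the tests assert.
print(g.compute_qname(URIRef("http://foo/bar/baz")))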
2a8aa5f1048857b753fa806c47e4c7b4c2253b3e | 6,389 | py | Python | github/get/user/src/formula/formula.py | rogerio-ignacio-developer/formulas-github | 12cf7401f31e4a6212289b839c02de1d612c8271 | [
"Apache-2.0"
] | 32 | 2021-01-27T17:43:23.000Z | 2022-03-23T18:00:41.000Z | github/get/user/src/formula/formula.py | rogerio-ignacio-developer/formulas-github | 12cf7401f31e4a6212289b839c02de1d612c8271 | [
"Apache-2.0"
] | 12 | 2021-01-26T18:14:59.000Z | 2021-10-04T12:24:41.000Z | github/get/user/src/formula/formula.py | rogerio-ignacio-developer/formulas-github | 12cf7401f31e4a6212289b839c02de1d612c8271 | [
"Apache-2.0"
] | 11 | 2021-01-28T13:54:24.000Z | 2022-03-16T12:16:27.000Z | #!/usr/bin/python3
import json
import requests
from requests.auth import HTTPBasicAuth
import os
api_url_base = "https://api.github.com/"
headers = {
"Content-Type": "application/json",
"Accept": "application/vnd.github.v3+json"
}
def Run(user, key, username, repo_details, keep_file):
# Print User details
try:
user_details = get_user_details(username) # It's a binary string
except Exception as error:
print(error)
exit(0)
# Open file for writing
file_name = username + ".txt"
user_file = open(file_name, "w+")
if user_details is not None:
# convert it to utf-8 encoded json string
user_in_json = user_details.decode("utf-8")
# Load the JSON to a Python list & dump it back out as formatted JSON
user_detail_dict = json.loads(user_in_json)
if user_detail_dict["email"] is None or user_detail_dict["name"] is None:
events = requests.get(
url = f"https://api.github.com/users/{username}/events?per_page=100",
auth = HTTPBasicAuth(user, key),
).json()
if user_detail_dict["name"] is None:
user_detail_dict["name"] = get_name(events, username)
if user_detail_dict["email"] is None:
user_detail_dict["email"] = get_email(events, username, user_detail_dict["name"])
user_file.write("\n" + "="*10 + " User details of username: " + username + " " + "="*10 + "\n" )
user_file.write("🔅 User Name: {}".format(user_detail_dict["name"]) + "\n")
user_file.write("📔 Bio: {}".format(user_detail_dict["bio"]) + "\n")
user_file.write("📇 Company: {}".format(user_detail_dict["company"]) + "\n")
user_file.write("📧 Email: {}".format(user_detail_dict["email"]) + "\n")
user_file.write("🛰 Location: {}".format(user_detail_dict["location"])+ "\n")
user_file.write("👀 Following: {}".format(user_detail_dict["following"]) + "\n")
user_file.write("👥 Followers: {}".format(user_detail_dict["followers"]) + "\n")
user_file.write("🔢 Public Repo count: {}".format(user_detail_dict["public_repos"]) + "\n")
user_file.write("🆙 Account created at: {}".format(user_detail_dict["created_at"]) + "\n")
else:
print("❌ No User Found")
if repo_details == "yes":
# Print Repo list details
repo_list = get_repos(username) # It's a binary string
if repo_list is not None:
repo_in_json = repo_list.decode("utf-8") # convert it to utf-8 encoded json string
# Load the JSON to a Python list & dump it back out as formatted JSON
repo_list = json.loads(repo_in_json)
user_file.write("\n" + "="*10 + " Repo details of username: " + username + " " + "="*10 + "\n")
for repo_dict in repo_list:
user_file.write("*"*10 + " Repo Name: {}".format(repo_dict["name"]) + " " + "*"*10 + "\n")
user_file.write("📄 Description: {}".format(repo_dict["description"]) + "\n")
user_file.write("🌐 Repo url: {}".format(repo_dict["clone_url"]) + "\n")
user_file.write("🔀 Is it forked one : {}".format(repo_dict["fork"]) + "\n")
user_file.write("🆕 Created at: {}".format(repo_dict["created_at"]) + "\n")
user_file.write("🔄 Updated at: {}".format(repo_dict["updated_at"]) + "\n")
user_file.write("🗣 Language: {}".format(repo_dict["language"]) + "\n")
user_file.write("🧮 Fork Count: {}".format(repo_dict["forks_count"]) + "\n")
user_file.write("\n")
else:
print('❌ No Repo List Found')
user_file.close()
f = open(file_name, "r")
file_contents = f.read()
print (file_contents)
f.close()
if keep_file == "no":
os.system(f"rm -rf {file_name}")
def get_user_details(username):
user_url = f"{api_url_base}users/{username}"
response = requests.get(user_url, headers=headers)
if response.status_code == 200:
return response.content
else:
print(f"[!] HTTP {response.status_code} calling [{user_url}]")
return None
def get_repos(username):
repo_url = f"{api_url_base}users/{username}/repos"
response = requests.get(repo_url, headers=headers)
if response.status_code == 200:
return (response.content)
else:
print(f"[!] HTTP {response.status_code} calling [{repo_url}]")
return None
def get_name(events, username):
name = "-"
found_name = False
for event in events:
if not found_name and event["type"] == "PushEvent" and event["actor"] is not None and event["payload"] is not None:
actor = event["actor"]
if actor["login"] == username:
payload = event["payload"]
if len(payload["commits"]) == 1:
for commit in payload["commits"]:
if not found_name and commit["author"] is not None:
author = commit["author"]
if not found_name and author["email"] is not None and "github" not in author["email"]:
name = author["name"]
found_name = True
return name
def get_email(events, username, name):
email = "-"
found_email = False
for event in events:
if not found_email and event["type"] == "PushEvent" and event["payload"] is not None:
payload = event["payload"]
for commit in payload["commits"]:
if not found_email and commit["author"] is not None:
author = commit["author"]
if not found_email and author["name"] in username and "github" not in author["email"]:
email = author["email"]
found_email = True
if not found_email and author["name"] in name and "github" not in author["email"]:
email = author["email"]
found_email = True
if not found_email and name.split()[0].lower() in author["name"] and "github" not in author["email"]:
email = author["email"] + " *" # The * marks an email that is related but not necessarily from this user account.
return email
| 43.168919 | 140 | 0.573173 | 823 | 6,389 | 4.304982 | 0.198056 | 0.049675 | 0.073384 | 0.067175 | 0.404742 | 0.342647 | 0.26503 | 0.234547 | 0.170195 | 0.170195 | 0 | 0.006566 | 0.284865 | 6,389 | 147 | 141 | 43.462585 | 0.764938 | 0.066364 | 0 | 0.172414 | 0 | 0 | 0.190124 | 0.023514 | 0.017241 | 0 | 0 | 0 | 0 | 1 | 0.043103 | false | 0 | 0.034483 | 0 | 0.12931 | 0.051724 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
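A hedged invocation sketch for the `Run` formula above; the login, token and target username are placeholders, and the import path merely mirrors the repository layout (`src/formula/formula.py`):

from src.formula.formula import Run  # assumed import path

# Prints the profile summary for "octocat", appends the repository list
# (repo_details == "yes") and removes the generated octocat.txt (keep_file == "no").
Run(
    user="my-github-login",      # placeholder basic-auth user
    key="ghp_personal_token",    # placeholder personal access token
    username="octocat",
    repo_details="yes",
    keep_file="no",
)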
2a8d1b7b9620571f9c38f98ac65ea2995046a3d0 | 11,348 | py | Python | boundle.py | dist-uniparthenope/MuseoNavaleAPI | c5fac0b5c3e3a11550c7daee612693d5afb31d43 | [
"Apache-2.0"
] | null | null | null | boundle.py | dist-uniparthenope/MuseoNavaleAPI | c5fac0b5c3e3a11550c7daee612693d5afb31d43 | [
"Apache-2.0"
] | null | null | null | boundle.py | dist-uniparthenope/MuseoNavaleAPI | c5fac0b5c3e3a11550c7daee612693d5afb31d43 | [
"Apache-2.0"
] | null | null | null | import requests
import os
import json
import zipfile
data = {}
data['items'] = []
rooms = []
tours = []
cont1 = 0
cont2 = 0
cont3 = 0
cont4 = 0
cont5 = 0
cont6 = 0
cont7 = 0
cont8 = 0
cont9 = 0
contt1 = 0
contt2 = 0
contt3 = 0
contt4 = 0
path_home = "file/MuseoNavale"
path_images = "file/MuseoNavale/images"
path_audio = "file/MuseoNavale/audio"
try:
os.mkdir(path_home)
except Exception as e:
print(e)
pass
try:
os.mkdir(path_images)
except Exception as e:
print(e)
pass
try:
os.mkdir(path_audio)
except Exception as e:
print(e)
querystring = {"_format": "hal_json"}
headers = {
'Content-Type': "application/hal+json"
}
response = requests.request("GET", "https://museonavale.uniparthenope.it/en/api/museum_items", headers=headers,
params=querystring)
_json = response.json()
for i in range(0, len(_json)):
if _json[i]['field_exposed'] == "True":
img = str(_json[i]['field_image'])
img_string = ""
audio_string = ""
audio = str(_json[i]['field_audio'])
print(audio)
s = (_json[i]['field_other_image']).split(",")
if (audio != ""):
with open(path_audio + "/" + audio[29:], "wb") as handler:
audio_string = "audio/" + audio[29:]
response = requests.get("https://museonavale.uniparthenope.it/" + audio, stream=True)
if not response.ok:
print(response)
for block in response.iter_content(1024):
if not block:
break
handler.write(block)
if (img != ""):
with open(path_images + "/" + img[29:], "wb") as handler:
img_string = "images/" + img[29:]
response = requests.get("https://museonavale.uniparthenope.it/" + img, stream=True)
if not response.ok:
print(response)
for block in response.iter_content(1024):
if not block:
break
handler.write(block)
img_temp_string = ""
for j in range(1, len(s)):
img_temp = s[j]
print(img_temp[30:])
url = "https://museonavale.uniparthenope.it/" + img_temp[2:]
print(url)
if (img_temp != ""):
img_data = requests.get(url).content
with open(path_images + "/" + img_temp[30:], "wb") as handler:
response = requests.get(url, stream=True)
if not response.ok:
print(response)
for block in response.iter_content(1024):
if not block:
break
handler.write(block)
if j == (len(s) - 1):
img_temp_string = img_temp_string + "images/" + img_temp[30:]
else:
img_temp_string = img_temp_string + "images/" + img_temp[30:] + ","
print(img_temp_string)
data['items'].append({
'nid': _json[i]['nid'],
'title': str(_json[i]['title']),
'body': str(_json[i]['body'].encode('utf-8')),
'field_other_image': img_temp_string,
'field_placing': _json[i]['field_placing'],
'field_model_value': _json[i]['field_model_value'],
'field_inventory': _json[i]['field_inventory'],
'field_model_actual_value': _json[i]['field_model_actual_value'],
'field_inventory_old': _json[i]['field_inventory_old'],
'field_year': _json[i]['field_year'],
'field_status': _json[i]['field_status'],
'field_estimation': _json[i]['field_estimation'],
'field_exposed': _json[i]['field_exposed'],
'field_inventory_1': _json[i]['field_inventory_1'],
'field_vertical_exposition': _json[i]['field_vertical_exposition'],
'field_image': img_string,
'field_audio': audio_string
})
print("Room", _json[i]['field_room'])
if _json[i]['field_room'] != "":
if (_json[i]['field_room'] == "Sala1"):
j = 0
cont1 = cont1 + 1
elif _json[i]['field_room'] == "Sala2":
j = 1
cont2 = cont2 + 1
elif _json[i]['field_room'] == "Sala3":
j = 2
cont3 = cont3 + 1
elif _json[i]['field_room'] == "Sala4":
j = 3
cont4 = cont4 + 3
elif _json[i]['field_room'] == "Sala5":
j = 4
cont5 = cont5 + 1
elif _json[i]['field_room'] == "Sala6":
j = 5
cont6 = cont6 + 1
elif _json[i]['field_room'] == "Sala7":
j = 6
cont7 = cont7 + 1
elif _json[i]['field_room'] == "Sala8":
j = 7
cont8 = cont8 + 1
elif _json[i]['field_room'] == "Sala9":
j = 8
cont9 = cont9 + 1
if cont1 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont2 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont3 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont4 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont5 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont6 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont7 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont8 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
if cont9 == 1:
rooms.append({
"hall": _json[i]['field_room'],
"items": []
})
rooms[j]["items"].append({
'nid': _json[i]['nid'],
'title': str(_json[i]['title']),
'body': str(_json[i]['body'].encode('utf-8')),
'field_other_image': img_temp_string,
'field_placing': _json[i]['field_placing'],
'field_model_value': _json[i]['field_model_value'],
'field_inventory': _json[i]['field_inventory'],
'field_model_actual_value': _json[i]['field_model_actual_value'],
'field_inventory_old': _json[i]['field_inventory_old'],
'field_year': _json[i]['field_year'],
'field_status': _json[i]['field_status'],
'field_estimation': _json[i]['field_estimation'],
'field_exposed': _json[i]['field_exposed'],
'field_inventory_1': _json[i]['field_inventory_1'],
'field_vertical_exposition': _json[i]['field_vertical_exposition'],
'field_image': img_string,
'field_audio': audio_string,
'field_number_tour': _json[i]['field_tour_complete_number']
})
if _json[i]['field_tours'] != "":
if _json[i]['field_tours'] == "Complete":
j = 0
contt1 = contt1 + 1
elif _json[i]['field_tours'] == "Baby":
j = 1
contt2 = contt2 + 1
elif _json[i]['field_tours'] == "Nautic":
j = 2
contt3 = contt3 + 1
if contt1 == 1:
tours.append({
"tour": _json[i]['field_tours'],
"items": []
})
if contt2 == 1:
tours.append({
"tour": _json[i]['field_tours'],
"items": []
})
if contt3 == 1:
tours.append({
"tour": _json[i]['field_tours'],
"items": []
})
tours[j]["items"].append({
'nid': _json[i]['nid'],
'title': str(_json[i]['title']),
'body': str(_json[i]['body'].encode('utf-8')),
'field_other_image': img_temp_string,
'field_placing': _json[i]['field_placing'],
'field_model_value': _json[i]['field_model_value'],
'field_inventory': _json[i]['field_inventory'],
'field_model_actual_value': _json[i]['field_model_actual_value'],
'field_inventory_old': _json[i]['field_inventory_old'],
'field_year': _json[i]['field_year'],
'field_status': _json[i]['field_status'],
'field_estimation': _json[i]['field_estimation'],
'field_exposed': _json[i]['field_exposed'],
'field_inventory_1': _json[i]['field_inventory_1'],
'field_vertical_exposition': _json[i]['field_vertical_exposition'],
'field_image': img_string,
'field_audio': audio_string,
'field_number_tour': _json[i]['field_tour_complete_number']
})
orari = []
orari.append({
"giorno": "Domenica",
"orari": []
})
orari[0]["orari"].append({
"apertura": "9",
"chiusura": "17"
})
orari.append({
"giorno": "Lunedi",
"orari": []
})
orari[1]["orari"].append({
"apertura": "9",
"chiusura": "17"
})
orari.append({
"giorno": "Martedi",
"orari": []
})
orari[2]["orari"].append({
"apertura": "N/A",
"chiusura": "N/A"
})
orari.append({
"giorno": "Mercoledi",
"orari": []
})
orari[3]["orari"].append({
"apertura": "9",
"chiusura": "17"
})
orari.append({
"giorno": "Giovedi",
"orari": []
})
orari[4]["orari"].append({
"apertura": "N/A",
"chiusura": "N/A"
})
orari.append({
"giorno": "Venerdi",
"orari": []
})
orari[5]["orari"].append({
"apertura": "9",
"chiusura": "17"
})
orari.append({
"giorno": "Sabato",
"orari": []
})
orari[6]["orari"].append({
"apertura": "9",
"chiusura": "17"
})
data['orari'] = orari
data['rooms'] = rooms
data['tours'] = tours
f = open("version.txt", "r")
contents = f.read().splitlines()
version = contents[0]
new_version = int(version) + 1
f.close()
f = open("version.txt", "w")
f.write(str(new_version))
f.close()
data['version'] = new_version
with open(path_home + "/file.json", 'w') as outputfile:
json.dump(data, outputfile)
zf = zipfile.ZipFile("file/boundle.zip", "w")
for dirname, subdirs, files in os.walk('file/MuseoNavale/'):
zf.write(dirname)
for filename in files:
zf.write(os.path.join(dirname, filename))
zf.close()
| 29.475325 | 111 | 0.471096 | 1,171 | 11,348 | 4.319385 | 0.142613 | 0.07414 | 0.130486 | 0.055358 | 0.669632 | 0.637011 | 0.592131 | 0.592131 | 0.57157 | 0.511269 | 0 | 0.024107 | 0.371255 | 11,348 | 384 | 112 | 29.552083 | 0.684793 | 0 | 0 | 0.541033 | 0 | 0 | 0.230173 | 0.034455 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.006079 | 0.012158 | 0 | 0.012158 | 0.033435 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
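The script above writes `file/MuseoNavale/file.json` (items, rooms, tours, opening hours and a version counter) and zips the folder into `file/boundle.zip`. A small sketch of how a consumer might read the result back, using the paths hard-coded above:

import json

with open("file/MuseoNavale/file.json") as fh:
    bundle = json.load(fh)

print("bundle version:", bundle["version"])
print("exposed items:", len(bundle["items"]))
print("rooms:", [room["hall"] for room in bundle["rooms"]])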
2a8e80943a3df792682b45db405965fefb3a2506 | 2,702 | py | Python | project/server/main/parsers/oup.py | dataesr/bso-parser-html | 9a5b2d45aa1ff0c61be57fac4e04201becf58a42 | [
"MIT"
] | null | null | null | project/server/main/parsers/oup.py | dataesr/bso-parser-html | 9a5b2d45aa1ff0c61be57fac4e04201becf58a42 | [
"MIT"
] | null | null | null | project/server/main/parsers/oup.py | dataesr/bso-parser-html | 9a5b2d45aa1ff0c61be57fac4e04201becf58a42 | [
"MIT"
] | null | null | null | import re, bs4
from project.server.main.parsers.strings import get_clean_text, get_orcid
# doi 10.1093
def parse_oup(soup, doi):
res = {"doi": doi}
res.update(parse_authors(soup))
res.update(parse_abstract(soup))
return res
def parse_authors(soup):
res = {}
authors = []
affiliations = []
for elt in soup.find_all(class_="info-card-author"):
author = {}
current_affiliations = []
name_elt = elt.find(class_="info-card-name")
if name_elt:
author['full_name'] = get_clean_text(name_elt)
a_elem = elt.find("a", href=re.compile('search-results'))
if a_elem:
sp = re.sub(".*Authors=","",a_elem['href']).split('+')
if len(sp) == 2:
author['first_name'] = sp[0]
author['last_name'] = sp[1]
if sp:
author['full_name'] = " ".join(sp)
a_elem = elt.find("a", href=re.compile('mailto'))
if a_elem:
author['corresponding'] = True
author['email'] = a_elem['href'].replace('mailto:', '')
a_elem = elt.find("a", href=re.compile('orcid.org/'))
if a_elem:
author['orcid'] = get_orcid(a_elem['href'])
for aff_elt in elt.find_all(class_="aff"):
aff = {'name': get_clean_text(aff_elt)}
current_affiliations.append(aff)
if aff not in affiliations:
affiliations.append(aff)
if current_affiliations:
author['affiliations'] = current_affiliations
if author:
authors.append(author)
for ix, a in enumerate(authors):
a['author_position'] = ix+1
if affiliations:
res['affiliations'] = affiliations
if authors:
res['authors'] = authors
return res
def parse_abstract(soup):
res = {}
abstracts = []
for resume_elem in soup.find_all(class_="abstract"):
abstract = {}
abstract['abstract'] = get_clean_text(resume_elem)
if abstract:
abstracts.append(abstract)
if abstracts:
res['abstract'] = abstracts
keywords = []
for k in soup.find_all('a', href = re.compile('f_SemanticFilterTopics|keyword')):
keywords.append({'keyword': get_clean_text(k)})
if keywords:
res['keywords'] = keywords
for e in soup.find_all(class_="history-entry"):
date_type = e.find(class_="wi-state")
date_value = e.find(class_="wi-date")
if date_type and date_value:
# normalise e.g. "Received:" into the key "received_date"
date_type = (get_clean_text(date_type).replace(':', '').strip().lower() + '_date').replace(' ', '_')
res[date_type] = get_clean_text(date_value)
return res
| 29.692308 | 107 | 0.57772 | 334 | 2,702 | 4.467066 | 0.254491 | 0.030161 | 0.0563 | 0.034853 | 0.120643 | 0.08445 | 0.052279 | 0.052279 | 0 | 0 | 0 | 0.00565 | 0.279423 | 2,702 | 90 | 108 | 30.022222 | 0.760657 | 0.004071 | 0 | 0.114286 | 0 | 0 | 0.117603 | 0.011165 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042857 | false | 0 | 0.028571 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
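A short driving sketch for the OUP parser above; the HTML fragment is invented purely to exercise the CSS classes the parser looks for, and the DOI and import path are placeholders:

import bs4
from project.server.main.parsers.oup import parse_oup  # assumed import path

html = """
<div class="info-card-author">
  <div class="info-card-name">Jane Doe</div>
  <div class="aff">Example University</div>
</div>
<div class="abstract">We study an invented example.</div>
"""

soup = bs4.BeautifulSoup(html, "html.parser")
record = parse_oup(soup, "10.1093/example.doi")  # placeholder DOI
# record holds the DOI plus any parsed authors, affiliations and abstracts.
print(record)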
2a90426da21af285d2887e61a79c632312596b53 | 1,756 | py | Python | charles-university/2018-npfl104/hw/my-classifiers/naive-bayes.py | Hyperparticle/lct-master | 8acb0ca8fe14bb86305f235e3fec0a595acae2de | [
"MIT"
] | 3 | 2018-11-08T14:23:45.000Z | 2021-02-08T17:54:59.000Z | charles-university/2018-npfl104/hw/my-classifiers/naive-bayes.py | Hyperparticle/lct-master | 8acb0ca8fe14bb86305f235e3fec0a595acae2de | [
"MIT"
] | null | null | null | charles-university/2018-npfl104/hw/my-classifiers/naive-bayes.py | Hyperparticle/lct-master | 8acb0ca8fe14bb86305f235e3fec0a595acae2de | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import pandas as pd
import numpy as np
import math
from helper import train_test_split
def class_dict(data):
classes = {}
for row in data:
if (row[-1] not in classes):
classes[row[-1]] = []
classes[row[-1]].append(row)
return classes
def mean_std(data):
mstd = [(np.mean(col), np.std(col)) for col in list(zip(*data))[:-1]]
return [(mean, std) if std != 0 else (0.0,1.0) for mean,std in mstd]
def mean_std_classes(data):
classes = class_dict(data)
mstd = {}
for c in classes:
mstd[c] = mean_std(classes[c])
return mstd
def prob(x, mean, std):
if std == 0.0: return 1e-6
return (1/(math.sqrt(2*math.pi)*std))*math.exp(-(math.pow(x-mean,2)/(2*math.pow(std,2))))
def prior(train):
p = {}
for c in set(train[:,-1]):
p[c] = len([x for x in train[:,-1] if x == c]) / len(train[:,-1])
return p
def prob_classes(mstd, priors, row):
p = {}
for c in mstd:
p[c] = priors[c] * np.multiply.reduce([
prob(x, mean, std)
for (mean, std), x in zip(mstd[c], row)])
return p
def predict(mstd, priors, row):
probs = prob_classes(mstd, priors, row)
best = None, -1
for c in probs:
if best[0] is None or probs[c] > best[1]:
best = c, probs[c]
return best[0]
def accuracy(train, test):
dist = mean_std_classes(train)
priors = prior(train)
predicted = [predict(dist, priors, row) for row in test]
actual = [row[-1] for row in test]
return sum(1 for p,a in zip(predicted, actual) if p == a) / len(test) * 100.0
train, test = train_test_split()
print(accuracy(train['artificial'], test['artificial']))
print(accuracy(train['income'], test['income']))
| 27.873016 | 93 | 0.589977 | 290 | 1,756 | 3.524138 | 0.241379 | 0.061644 | 0.023483 | 0.023483 | 0.072407 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024115 | 0.244305 | 1,756 | 62 | 94 | 28.322581 | 0.746044 | 0.011959 | 0 | 0.078431 | 0 | 0 | 0.018454 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.156863 | false | 0 | 0.078431 | 0 | 0.392157 | 0.039216 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
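For reference, the `prob` helper above is the Gaussian density 1/(sqrt(2*pi)*std) * exp(-(x-mean)^2/(2*std^2)). A quick spot check, restating the formula inline because the hyphenated file name makes a direct import awkward:

import math

def gaussian(x, mean, std):
    # Same expression as prob() above.
    return (1.0 / (math.sqrt(2.0 * math.pi) * std)) * math.exp(-((x - mean) ** 2) / (2.0 * std ** 2))

print(gaussian(0.0, 0.0, 1.0))  # 0.3989..., the peak of the standard normal
print(gaussian(2.0, 0.0, 1.0))  # 0.0539..., two standard deviations out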
2a9113b2cc7a007ea2fcd72db9bcf1d0c997a009 | 1,538 | py | Python | pattern2-two-pointers/12. Minimum Window Sort (medium).py | dopiwoo/Grokking-the-Coding-Interview | 78b2bacf9d761b460ac78882bac42df7465feec9 | [
"MIT"
] | null | null | null | pattern2-two-pointers/12. Minimum Window Sort (medium).py | dopiwoo/Grokking-the-Coding-Interview | 78b2bacf9d761b460ac78882bac42df7465feec9 | [
"MIT"
] | null | null | null | pattern2-two-pointers/12. Minimum Window Sort (medium).py | dopiwoo/Grokking-the-Coding-Interview | 78b2bacf9d761b460ac78882bac42df7465feec9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 4 11:06:48 2020
@author: dopiwoo
Given an array, find the length of the smallest subarray in it which when sorted will sort the whole array.
Example 1:
Input: [1, 2, 5, 3, 7, 10, 9, 12]
Output: 5
Explanation: We need to sort only the subarray [5, 3, 7, 10, 9] to make the whole array sorted.
Example 2:
Input: [1, 3, 2, 0, -1, 7, 10]
Output: 5
Explanation: We need to sort only the subarray [1, 3, 2, 0, -1] to make the whole array sorted.
"""
from typing import List
def shortest_window_sort(arr: List[int]) -> int:
"""
Time Complexity: O(N) where 'N' is the number of elements in the array
Space Complexity: O(1)
Parameters
----------
arr
Returns
-------
"""
arr_len = len(arr)
low = 0
high = arr_len - 1
while low < arr_len - 1 and arr[low] <= arr[low + 1]:
low += 1
if low == arr_len - 1:
return 0
while high > 0 and arr[high] >= arr[high - 1]:
high -= 1
subarray = arr[low:high + 1]
subarray_min = min(subarray)
subarray_max = max(subarray)
while low > 0 and arr[low - 1] > subarray_min:
low -= 1
while high < arr_len - 1 and arr[high + 1] < subarray_max:
high += 1
return high - low + 1
if __name__ == '__main__':
print(shortest_window_sort([1, 2, 5, 3, 7, 10, 9, 12]))
print(shortest_window_sort([1, 3, 2, 0, -1, 7, 10]))
print(shortest_window_sort([1, 2, 3]))
print(shortest_window_sort([3, 2, 1]))
| 25.213115 | 107 | 0.596879 | 257 | 1,538 | 3.466926 | 0.338521 | 0.016835 | 0.10101 | 0.103255 | 0.318743 | 0.251403 | 0.141414 | 0.123457 | 0.10101 | 0.10101 | 0 | 0.078483 | 0.262679 | 1,538 | 60 | 108 | 25.633333 | 0.707231 | 0.424577 | 0 | 0 | 0 | 0 | 0.009569 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.041667 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
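A worked trace of the first example, assuming `shortest_window_sort` from the file above is in scope, makes the two scans and the window extension explicit:

# Trace for [1, 2, 5, 3, 7, 10, 9, 12]:
#   forward scan:  low stops at index 2, because 5 > 3 breaks the sorted prefix
#   backward scan: high stops at index 6, because 9 < 10 breaks the sorted suffix
#   candidate window [5, 3, 7, 10, 9] has min 3 and max 10
#   nothing before index 2 exceeds 3 and nothing after index 6 is below 10,
#   so the window stays [2, 6] and the answer is 6 - 2 + 1 = 5
print(shortest_window_sort([1, 2, 5, 3, 7, 10, 9, 12]))  # 5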
2a9649414a045156779c2e5dc584786fe319bbbb | 4,129 | py | Python | chemyst/periodic_table.py | mordy-python/chemyst | 6ded98e79bb98fcc514956f3314e816dfe4269bd | [
"MIT"
] | 6 | 2021-04-30T21:42:59.000Z | 2021-07-17T22:15:55.000Z | chemyst/periodic_table.py | mordy-python/chemyst | 6ded98e79bb98fcc514956f3314e816dfe4269bd | [
"MIT"
] | 4 | 2021-05-06T17:19:37.000Z | 2021-05-11T13:38:26.000Z | chemyst/periodic_table.py | mordy-python/chemyst | 6ded98e79bb98fcc514956f3314e816dfe4269bd | [
"MIT"
] | 2 | 2021-05-06T23:36:13.000Z | 2021-05-07T16:00:33.000Z | # periodic table related functions
class InvalidAtomicNumber(Exception):
"""
Error class for invalid atomic numbers.
"""
pass
def _check_atomic_number(z:int) -> None:
"""
Checks if the atomic number provided is valid or not.
"""
if z <= 0:
raise InvalidAtomicNumber("Atomic number (Z) of an element can't be zero or negative.")
elif z > 118:
raise InvalidAtomicNumber("Atomic number (Z) of an element can't be greater than 118.")
def electronic_config(z:int) -> str:
"""
Returns the electronic configuration of an element corresponding to the Modern Periodic Table in string format.
"""
# checking if the atomic number passed is valid
_check_atomic_number(z)
# `temp` holds the electrons still to be placed; it is reduced by each
# subshell's occupancy as the configuration is built up
temp = z
# all subshells in order
series = ['1S', '2S', '2P', '3S', '3P', '4S', '3D', '4P', '5S', '4D', '5P', '6S', '4F', '5D', '6P', '7S', '5F', '6D', '7P']
result = ""
for shell in series:
# breaking the loop if temp is 0, i.e., if the electronic configuration is complete
if temp <= 0:
break
# the S subshell holds at most 2 electrons: place 2 if at least 2 remain,
# otherwise place whatever electrons are left
if shell.endswith("S"):
if temp < 2:
result += f"{shell}{temp} "
temp -= temp
else:
result += f"{shell}2 "
temp -= 2
# the P subshell holds at most 6 electrons: place 6 if at least 6 remain,
# otherwise place whatever electrons are left
elif shell.endswith("P"):
if temp < 6:
result += f"{shell}{temp} "
temp -= temp
else:
result += f"{shell}6 "
temp -= 6
# the D subshell holds at most 10 electrons: place 10 if at least 10 remain,
# otherwise place whatever electrons are left
elif shell.endswith("D"):
if temp < 10:
result += f"{shell}{temp} "
temp -= temp
else:
result += f"{shell}10 "
temp -= 10
# the F subshell holds at most 14 electrons: place 14 if at least 14 remain,
# otherwise place whatever electrons are left
elif shell.endswith("F"):
if temp < 14:
result += f"{shell}{temp} "
temp -= temp
else:
result += f"{shell}14 "
temp -= 14
else:
print("Invalid subshell!")
return result.rstrip()
def period_number(z:int) -> int :
"""
Returns the period number of an element corresponding to the Modern Periodic Table.
"""
# checking if the atomic number passed is valid
_check_atomic_number(z)
# the period number equals the largest shell coefficient (principal quantum
# number) appearing in the configuration, so scan the config for the maximum
# and return it
config = electronic_config(z).split(" ")
ultimate_shell = 1
for i in config:
if int(i[0]) > ultimate_shell:
ultimate_shell = int(i[0])
return ultimate_shell
# def group_number(z:int) -> int:
# """
# Returns the group number of an element corresponding to the Modern Periodic Table.
# """
# _check_atomic_number(z)
# config = electronic_config(z).split(" ")
# last_subshell_no = config[-1][0]
# last_subshell = config[-1][1]
# valence_electrons = 0
# for i in config:
# if last_subshell_no == i[0]:
# valence_electrons += int(i[2:])
# if last_subshell == "S":
# return valence_electrons
# elif last_subshell == "P":
# return valence_electrons + 10
# elif last_subshell == "D":
# return "D block"
# elif last_subshell == "F":
# return 3
# else:
# print("Invalid subshell") | 31.045113 | 127 | 0.56745 | 542 | 4,129 | 4.260148 | 0.245387 | 0.046774 | 0.017324 | 0.031182 | 0.455175 | 0.378519 | 0.304894 | 0.304894 | 0.304894 | 0.284106 | 0 | 0.03004 | 0.330831 | 4,129 | 133 | 128 | 31.045113 | 0.805646 | 0.493098 | 0 | 0.277778 | 0 | 0 | 0.137412 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0.018519 | 0 | 0 | 0.111111 | 0.018519 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
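To make the Aufbau filling above concrete, here is a worked example for iron (Z = 26) using the two public helpers; the `chemyst.periodic_table` import path is an assumption based on the file path:

from chemyst.periodic_table import electronic_config, period_number  # assumed path

# Electrons fill the fixed subshell series in order, so Z = 26 exhausts 1S..4S
# and leaves six electrons for 3D.
print(electronic_config(26))  # 1S2 2S2 2P6 3S2 3P6 4S2 3D6
print(period_number(26))      # 4 -- the largest shell coefficient in the config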
aa4a8eb7d79e947275631e742939238ec685b08f | 83,217 | py | Python | tw2/jit/samples/samples_data.py | toscawidgets/tw2.jit | c5e8059975115385f225029ba5c7380673524122 | [
"MIT"
] | 1 | 2020-01-12T05:11:24.000Z | 2020-01-12T05:11:24.000Z | tw2/jit/samples/samples_data.py | toscawidgets/tw2.jit | c5e8059975115385f225029ba5c7380673524122 | [
"MIT"
] | null | null | null | tw2/jit/samples/samples_data.py | toscawidgets/tw2.jit | c5e8059975115385f225029ba5c7380673524122 | [
"MIT"
] | null | null | null | # This module just contains some of the more lengthy constants used in
# samples.py that would otherwise clutter that file.
from random import randint, random
BarChartJSONSampleData = {
'label': ['label A', 'label B', 'label C', 'label D'],
'values': [
{
'label': 'date A',
'values': [20, 40, 15, 5]
},
{
'label': 'date B',
'values': [30, 10, 45, 10]
},
{
'label': 'date E',
'values': [38, 20, 35, 17]
},
{
'label': 'date F',
'values': [58, 10, 35, 32]
},
{
'label': 'date D',
'values': [55, 60, 34, 38]
},
{
'label': 'date C',
'values': [26, 40, 25, 40],
}
]
}
AreaChartJSONSampleData = {
'label' : ['Top income of the lowest quintile (20%) in the US',
'Top income of the second quintile',
'Top income of the third quintile',
'Top income of the fourth quintile',
'Bottom of top %5'],
'values' : [entry for entry in reversed([
{
'label': '09',
'values': [20453,38550,61801,100000,180001]
}, {
'label': '08',
'values': [20633,38852,62487,99860,179317]
}, {
'label': '07',
'values': [20991,40448,64138,103448,183103]
}, {
'label': '06',
'values': [21314,40185,63830,103226,185119]
}, {
'label': '05',
'values': [21071,39554,63352,100757,182386]
}, {
'label': '04',
'values': [20992,39375,62716,99930,178453]
}, {
'label': '03',
'values': [20974,39652,63505,101307,179740]
}, {
'label': '02',
'values': [21361,39795,63384,100170,178844]
}, {
'label': '01',
'values': [21771,40361,64212,101163,182335]
}, {
'label': '00',
'values': [22320,41103,64985,101844,180879]
}, {
'label': '99',
'values': [22059,41090,64859,101995,182795]
}, {
'label': '98',
'values': [21179,39960,63522,98561,173728]
}, {
'label': '97',
'values': [20520,38909,61294,95273,168626]
}, {
'label': '96',
'values': [20103,37789,59904,92587,162727]
}, {
'label': '95',
'values': [20124,37613,58698,91012,157919]
}, {
'label': '94',
'values': [19215,36065,57390,89936,157172]
}, {
'label': '93',
'values': [18954,36074,56704,88142,152953]
}, {
'label': '92',
'values': [18873,36158,56769,86886,148318]
}, {
'label': '91',
'values': [19338,36860,56933,87173,148055]
}, {
'label': '90',
'values': [19886,37644,57591,87826,150735]
}, {
'label': '89',
'values': [20203,38415,59042,89707,153241]
}, {
'label': '88',
'values': [19830,37459,58376,88146,149207]
}, {
'label': '87',
'values': [19507,37027,57798,87353,146172]
}, {
'label': '86',
'values': [19133,36598,56799,85859,143974]
}, {
'label': '85',
'values': [18898,35557,55082,82843,136881]
}, {
'label': '84',
'values': [18680,34961,53863,81365,134691]
}, {
'label': '83',
'values': [18317,34058,52273,78998,129971]
}, {
'label': '82',
'values': [17927,34095,52095,77683,128232]
}, {
'label': '81',
'values': [18158,33944,52500,77619,124914]
}, {
'label': '80',
'values': [18533,34757,53285,78019,125556]
}, {
'label': '79',
'values': [19274,35795,55073,79851,129029]
}, {
'label': '78',
'values': [19063,36044,54537,79317,126890]
}, {
'label': '77',
'values': [18487,34821,53076,77380,122518]
}, {
'label': '76',
'values': [18526,34516,52580,75648,119967]
}, {
'label': '75',
'values': [18124,34016,51400,73802,116463]
}, {
'label': '74',
'values': [19065,35364,52255,75839,120037]
}, {
'label': '73',
'values': [18973,36484,53982,77723,124921]
}, {
'label': '72',
'values': [18570,35764,52858,75655,121759]
}, {
'label': '71',
'values': [17946,34211,50343,71784,113995]
}, {
'label': '70',
'values': [18180,34827,50656,72273,114243]
}, {
'label': '69',
'values': [18491,35483,51316,71897,112759]
}, {
'label': '68',
'values': [17954,34039,48790,68554,107251]
}, {
'label': '67',
'values': [16845,32848,46621,66481,106684]
}
])]
}
def icicleColor(level, total, val):
    magic = 0.49 # lol
    total = total + 1
    coeff = magic/total
    perturb = coeff*val/10.0
    # deeper levels (and larger values) push `base` toward 1, i.e. toward red
    base = (level+magic)/total + perturb
    assert(base >= 0 and base <= 1)
    R = int(256*base)
    G = int(128*base)
    B = int(256*(1 - base))
    # zero-pad each hex component so the result is always a valid "#RRGGBB" string
    return "#" + "".join(
        ["%2s" % hex(component)[2:] for component in [R, G, B]]
    ).replace(' ', '0')
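# Illustrative evaluation, assuming the formula above: icicleColor(0, 2, 5)
# gives base = 0.49/3 + (0.49/3) * 5/10 = 0.245, which maps to roughly "#3e1fc1";
# deeper levels and larger values shift the result toward red, shallower ones
# toward blue.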
def generateTree(total_levels=2, _level=0, _index=0, pid='', code=''):
val = randint(1,10)
id = '%i_%i_%s' % (_level, _index, pid)
this_node = {
'id' : "%s_inode_%s" % (code, id),
'name' : "%i" % val,
'data' : {
'$area' : val,
'$dim' : val,
'$color' : icicleColor(_level, total_levels, val)
}
}
if _level < total_levels:
this_node['children'] = [
generateTree(total_levels, _level+1, i, id, code)
for i in range(randint(2,4))
]
return this_node
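# Shape of a generated node (illustrative summary of the construction above):
#
#     {'id': '<code>_inode_<level>_<index>_<parent id>', 'name': '<val>',
#      'data': {'$area': val, '$dim': val, '$color': '#RRGGBB'},
#      'children': [...]}    # 'children' present only while _level < total_levels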
IcicleJSONSampleData = generateTree(5, code='icicle')
SpaceTreeJSONSampleData = generateTree(3, code='spacetree')
PieChartJSONSampleData = BarChartJSONSampleData
TreeMapJSONSampleData = {
"children": [
{
"children": [
{
"children": [],
"data": {
"playcount": "276",
"$color": "#8E7032",
"image": "http://userserve-ak.last.fm/serve/300x300/11403219.jpg",
"$area": 276
},
"id": "album-Thirteenth Step",
"name": "Thirteenth Step"
},
{
"children": [],
"data": {
"playcount": "271",
"$color": "#906E32",
"image": "http://userserve-ak.last.fm/serve/300x300/11393921.jpg",
"$area": 271
},
"id": "album-Mer De Noms",
"name": "Mer De Noms"
}
],
"data": {
"playcount": 547,
"$area": 547
},
"id": "artist_A Perfect Circle",
"name": "A Perfect Circle"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "209",
"$color": "#AA5532",
"image": "http://userserve-ak.last.fm/serve/300x300/32349839.jpg",
"$area": 209
},
"id": "album-Above",
"name": "Above"
}
],
"data": {
"playcount": 209,
"$area": 209
},
"id": "artist_Mad Season",
"name": "Mad Season"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "260",
"$color": "#956932",
"image": "http://userserve-ak.last.fm/serve/300x300/38753425.jpg",
"$area": 260
},
"id": "album-Tiny Music... Songs From the Vatican Gift Shop",
"name": "Tiny Music... Songs From the Vatican Gift Shop"
},
{
"children": [],
"data": {
"playcount": "254",
"$color": "#976732",
"image": "http://images.amazon.com/images/P/B000002IU3.01.LZZZZZZZ.jpg",
"$area": 254
},
"id": "album-Core",
"name": "Core"
}
],
"data": {
"playcount": 514,
"$area": 514
},
"id": "artist_Stone Temple Pilots",
"name": "Stone Temple Pilots"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "181",
"$color": "#B54932",
"image": "http://userserve-ak.last.fm/serve/300x300/8673371.jpg",
"$area": 181
},
"id": "album-The Science of Things",
"name": "The Science of Things"
}
],
"data": {
"playcount": 181,
"$area": 181
},
"id": "artist_Bush",
"name": "Bush"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "229",
"$color": "#A15D32",
"image": "http://userserve-ak.last.fm/serve/300x300/32579429.jpg",
"$area": 229
},
"id": "album-Echoes, Silence, Patience & Grace",
"name": "Echoes, Silence, Patience & Grace"
},
{
"children": [],
"data": {
"playcount": "185",
"$color": "#B34B32",
"image": "http://images.amazon.com/images/P/B0009HLDFU.01.MZZZZZZZ.jpg",
"$area": 185
},
"id": "album-In Your Honor (disc 2)",
"name": "In Your Honor (disc 2)"
}
],
"data": {
"playcount": 414,
"$area": 414
},
"id": "artist_Foo Fighters",
"name": "Foo Fighters"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "398",
"$color": "#5DA132",
"image": "http://images.amazon.com/images/P/B00005LNP5.01._SCMZZZZZZZ_.jpg",
"$area": 398
},
"id": "album-Elija Y Gane",
"name": "Elija Y Gane"
},
{
"children": [],
"data": {
"playcount": "203",
"$color": "#AC5232",
"image": "http://images.amazon.com/images/P/B0000B193V.01._SCMZZZZZZZ_.jpg",
"$area": 203
},
"id": "album-Para los Arboles",
"name": "Para los Arboles"
}
],
"data": {
"playcount": 601,
"$area": 601
},
"id": "artist_Luis Alberto Spinetta",
"name": "Luis Alberto Spinetta"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "224",
"$color": "#A35B32",
"image": "http://userserve-ak.last.fm/serve/300x300/26497553.jpg",
"$area": 224
},
"id": "album-Music Bank",
"name": "Music Bank"
},
{
"children": [],
"data": {
"playcount": "217",
"$color": "#A65832",
"image": "http://images.amazon.com/images/P/B0000296JW.01.MZZZZZZZ.jpg",
"$area": 217
},
"id": "album-Music Bank (disc 1)",
"name": "Music Bank (disc 1)"
},
{
"children": [],
"data": {
"playcount": "215",
"$color": "#A75732",
"image": "http://images.amazon.com/images/P/B0000296JW.01.MZZZZZZZ.jpg",
"$area": 215
},
"id": "album-Music Bank (disc 2)",
"name": "Music Bank (disc 2)"
},
{
"children": [],
"data": {
"playcount": "181",
"$color": "#B54932",
"image": "http://images.amazon.com/images/P/B0000296JW.01.MZZZZZZZ.jpg",
"$area": 181
},
"id": "album-Music Bank (disc 3)",
"name": "Music Bank (disc 3)"
}
],
"data": {
"playcount": 837,
"$area": 837
},
"id": "artist_Alice in Chains",
"name": "Alice in Chains"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "627",
"$color": "#00FF32",
"image": "http://userserve-ak.last.fm/serve/300x300/8480501.jpg",
"$area": 627
},
"id": "album-10,000 Days",
"name": "10,000 Days"
}
],
"data": {
"playcount": 627,
"$area": 627
},
"id": "artist_Tool",
"name": "Tool"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "261",
"$color": "#946A32",
"image": "http://cdn.last.fm/flatness/catalogue/noimage/2/default_album_medium.png",
"$area": 261
},
"id": "album-2006-09-07: O-Bar, Stockholm, Sweden",
"name": "2006-09-07: O-Bar, Stockholm, Sweden"
},
{
"children": [],
"data": {
"playcount": "211",
"$color": "#A95532",
"image": "http://userserve-ak.last.fm/serve/300x300/25402479.jpg",
"$area": 211
},
"id": "album-Lost and Found",
"name": "Lost and Found"
}
],
"data": {
"playcount": 472,
"$area": 472
},
"id": "artist_Chris Cornell",
"name": "Chris Cornell"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "197",
"$color": "#AE5032",
"image": "http://userserve-ak.last.fm/serve/300x300/8634627.jpg",
"$area": 197
},
"id": "album-The Sickness",
"name": "The Sickness"
}
],
"data": {
"playcount": 197,
"$area": 197
},
"id": "artist_Disturbed",
"name": "Disturbed"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "493",
"$color": "#36C832",
"image": "http://userserve-ak.last.fm/serve/300x300/8591345.jpg",
"$area": 493
},
"id": "album-Mama's Gun",
"name": "Mama's Gun"
}
],
"data": {
"playcount": 493,
"$area": 493
},
"id": "artist_Erykah Badu",
"name": "Erykah Badu"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "249",
"$color": "#996532",
"image": "http://userserve-ak.last.fm/serve/300x300/32070871.jpg",
"$area": 249
},
"id": "album-Audioslave",
"name": "Audioslave"
}
],
"data": {
"playcount": 249,
"$area": 249
},
"id": "artist_Audioslave",
"name": "Audioslave"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "359",
"$color": "#6C9232",
"image": "http://userserve-ak.last.fm/serve/300x300/15858421.jpg",
"$area": 359
},
"id": "album-Comfort y M\u00fasica Para Volar",
"name": "Comfort y M\u00fasica Para Volar"
}
],
"data": {
"playcount": 359,
"$area": 359
},
"id": "artist_Soda Stereo",
"name": "Soda Stereo"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "302",
"$color": "#847A32",
"image": "http://userserve-ak.last.fm/serve/300x300/8776205.jpg",
"$area": 302
},
"id": "album-Clearing the Channel",
"name": "Clearing the Channel"
}
],
"data": {
"playcount": 302,
"$area": 302
},
"id": "artist_Sinch",
"name": "Sinch"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "177",
"$color": "#B74732",
"image": "http://userserve-ak.last.fm/serve/300x300/32457599.jpg",
"$area": 177
},
"id": "album-Crash",
"name": "Crash"
}
],
"data": {
"playcount": 177,
"$area": 177
},
"id": "artist_Dave Matthews Band",
"name": "Dave Matthews Band"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "207",
"$color": "#AA5432",
"image": "http://userserve-ak.last.fm/serve/300x300/30352203.jpg",
"$area": 207
},
"id": "album-Vs.",
"name": "Vs."
}
],
"data": {
"playcount": 207,
"$area": 207
},
"id": "artist_Pearl Jam",
"name": "Pearl Jam"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "486",
"$color": "#39C532",
"image": "http://userserve-ak.last.fm/serve/300x300/26053425.jpg",
"$area": 486
},
"id": "album-It All Makes Sense Now",
"name": "It All Makes Sense Now"
},
{
"children": [],
"data": {
"playcount": "251",
"$color": "#986632",
"image": "http://userserve-ak.last.fm/serve/300x300/9658733.jpg",
"$area": 251
},
"id": "album-Air",
"name": "Air"
}
],
"data": {
"playcount": 737,
"$area": 737
},
"id": "artist_Kr\u00f8m",
"name": "Kr\u00f8m"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "345",
"$color": "#728C32",
"image": "http://userserve-ak.last.fm/serve/300x300/8605651.jpg",
"$area": 345
},
"id": "album-Temple Of The Dog",
"name": "Temple Of The Dog"
}
],
"data": {
"playcount": 345,
"$area": 345
},
"id": "artist_Temple of the Dog",
"name": "Temple of the Dog"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "318",
"$color": "#7D8132",
"image": "http://userserve-ak.last.fm/serve/300x300/29274729.jpg",
"$area": 318
},
"id": "album-And All That Could Have Been (Still)",
"name": "And All That Could Have Been (Still)"
}
],
"data": {
"playcount": 318,
"$area": 318
},
"id": "artist_Nine Inch Nails",
"name": "Nine Inch Nails"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "256",
"$color": "#966832",
"image": "http://userserve-ak.last.fm/serve/300x300/32595059.jpg",
"$area": 256
},
"id": "album-Mamagubida",
"name": "Mamagubida"
},
{
"children": [],
"data": {
"playcount": "220",
"$color": "#A55932",
"image": "http://cdn.last.fm/flatness/catalogue/noimage/2/default_album_medium.png",
"$area": 220
},
"id": "album-Reggae \u00e0 Coup de Cirque",
"name": "Reggae \u00e0 Coup de Cirque"
},
{
"children": [],
"data": {
"playcount": "181",
"$color": "#B54932",
"image": "http://userserve-ak.last.fm/serve/300x300/16799743.jpg",
"$area": 181
},
"id": "album-Grain de sable",
"name": "Grain de sable"
}
],
"data": {
"playcount": 657,
"$area": 657
},
"id": "artist_Tryo",
"name": "Tryo"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "258",
"$color": "#966832",
"image": "http://cdn.last.fm/flatness/catalogue/noimage/2/default_album_medium.png",
"$area": 258
},
"id": "album-Best Of",
"name": "Best Of"
},
{
"children": [],
"data": {
"playcount": "176",
"$color": "#B74732",
"image": "http://userserve-ak.last.fm/serve/300x300/5264426.jpg",
"$area": 176
},
"id": "album-Robbin' The Hood",
"name": "Robbin' The Hood"
}
],
"data": {
"playcount": 434,
"$area": 434
},
"id": "artist_Sublime",
"name": "Sublime"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "418",
"$color": "#55AA32",
"image": "http://userserve-ak.last.fm/serve/300x300/8590493.jpg",
"$area": 418
},
"id": "album-One Hot Minute",
"name": "One Hot Minute"
}
],
"data": {
"playcount": 418,
"$area": 418
},
"id": "artist_Red Hot Chili Peppers",
"name": "Red Hot Chili Peppers"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "275",
"$color": "#8F6F32",
"image": "http://userserve-ak.last.fm/serve/300x300/17597653.jpg",
"$area": 275
},
"id": "album-Chinese Democracy",
"name": "Chinese Democracy"
},
{
"children": [],
"data": {
"playcount": "203",
"$color": "#AC5232",
"image": "http://userserve-ak.last.fm/serve/300x300/15231979.jpg",
"$area": 203
},
"id": "album-Use Your Illusion II",
"name": "Use Your Illusion II"
}
],
"data": {
"playcount": 478,
"$area": 478
},
"id": "artist_Guns N' Roses",
"name": "Guns N' Roses"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "208",
"$color": "#AA5432",
"image": "http://images.amazon.com/images/P/B0007LCNNE.01.MZZZZZZZ.jpg",
"$area": 208
},
"id": "album-Tales of the Forgotten Melodies",
"name": "Tales of the Forgotten Melodies"
}
],
"data": {
"playcount": 208,
"$area": 208
},
"id": "artist_Wax Tailor",
"name": "Wax Tailor"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "208",
"$color": "#AA5432",
"image": "http://userserve-ak.last.fm/serve/300x300/7862623.png",
"$area": 208
},
"id": "album-In Rainbows",
"name": "In Rainbows"
}
],
"data": {
"playcount": 208,
"$area": 208
},
"id": "artist_Radiohead",
"name": "Radiohead"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "317",
"$color": "#7E8032",
"image": "http://userserve-ak.last.fm/serve/300x300/8600371.jpg",
"$area": 317
},
"id": "album-Down On The Upside",
"name": "Down On The Upside"
},
{
"children": [],
"data": {
"playcount": "290",
"$color": "#897532",
"image": "http://userserve-ak.last.fm/serve/300x300/8590515.jpg",
"$area": 290
},
"id": "album-Superunknown",
"name": "Superunknown"
}
],
"data": {
"playcount": 607,
"$area": 607
},
"id": "artist_Soundgarden",
"name": "Soundgarden"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "247",
"$color": "#9A6432",
"image": "http://userserve-ak.last.fm/serve/300x300/15113951.jpg",
"$area": 247
},
"id": "album-Nico",
"name": "Nico"
},
{
"children": [],
"data": {
"playcount": "218",
"$color": "#A65832",
"image": "http://userserve-ak.last.fm/serve/300x300/45729417.jpg",
"$area": 218
},
"id": "album-Soup",
"name": "Soup"
},
{
"children": [],
"data": {
"playcount": "197",
"$color": "#AE5032",
"image": "http://images.amazon.com/images/P/B00005V5PW.01.MZZZZZZZ.jpg",
"$area": 197
},
"id": "album-Classic Masters",
"name": "Classic Masters"
},
{
"children": [],
"data": {
"playcount": "194",
"$color": "#B04E32",
"image": "http://userserve-ak.last.fm/serve/300x300/15157989.jpg",
"$area": 194
},
"id": "album-Blind Melon",
"name": "Blind Melon"
}
],
"data": {
"playcount": 856,
"$area": 856
},
"id": "artist_Blind Melon",
"name": "Blind Melon"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "537",
"$color": "#24DA32",
"image": "http://userserve-ak.last.fm/serve/300x300/17594883.jpg",
"$area": 537
},
"id": "album-Make Yourself",
"name": "Make Yourself"
},
{
"children": [],
"data": {
"playcount": "258",
"$color": "#966832",
"image": "http://userserve-ak.last.fm/serve/300x300/31550385.jpg",
"$area": 258
},
"id": "album-Light Grenades",
"name": "Light Grenades"
},
{
"children": [],
"data": {
"playcount": "181",
"$color": "#B54932",
"image": "http://userserve-ak.last.fm/serve/300x300/32309285.jpg",
"$area": 181
},
"id": "album-Morning View",
"name": "Morning View"
}
],
"data": {
"playcount": 976,
"$area": 976
},
"id": "artist_Incubus",
"name": "Incubus"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "198",
"$color": "#AE5032",
"image": "http://userserve-ak.last.fm/serve/300x300/8599099.jpg",
"$area": 198
},
"id": "album-On And On",
"name": "On And On"
},
{
"children": [],
"data": {
"playcount": "186",
"$color": "#B34B32",
"image": "http://userserve-ak.last.fm/serve/300x300/30082075.jpg",
"$area": 186
},
"id": "album-Brushfire Fairytales",
"name": "Brushfire Fairytales"
}
],
"data": {
"playcount": 384,
"$area": 384
},
"id": "artist_Jack Johnson",
"name": "Jack Johnson"
},
{
"children": [
{
"children": [],
"data": {
"playcount": "349",
"$color": "#718D32",
"image": "http://userserve-ak.last.fm/serve/300x300/21881921.jpg",
"$area": 349
},
"id": "album-Mother Love Bone",
"name": "Mother Love Bone"
}
],
"data": {
"playcount": 349,
"$area": 349
},
"id": "artist_Mother Love Bone",
"name": "Mother Love Bone"
}
],
"data": {},
"id": "root",
"name": "Top Albums"
}
ForceDirectedGraphJSONSampleData = [
{
"adjacencies": [
"graphnode21",
{
"nodeTo": "graphnode1",
"nodeFrom": "graphnode0",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode13",
"nodeFrom": "graphnode0",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode14",
"nodeFrom": "graphnode0",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode15",
"nodeFrom": "graphnode0",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode16",
"nodeFrom": "graphnode0",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode17",
"nodeFrom": "graphnode0",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#83548B",
"$type": "circle",
"$dim": 10
},
"id": "graphnode0",
"name": "graphnode0"
}, {
"adjacencies": [
{
"nodeTo": "graphnode2",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode4",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode5",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode6",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode7",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode8",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode10",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode11",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode12",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode13",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode14",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode15",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode16",
"nodeFrom": "graphnode1",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode17",
"nodeFrom": "graphnode1",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#EBB056",
"$type": "circle",
"$dim": 11
},
"id": "graphnode1",
"name": "graphnode1"
}, {
"adjacencies": [
{
"nodeTo": "graphnode5",
"nodeFrom": "graphnode2",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode9",
"nodeFrom": "graphnode2",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode18",
"nodeFrom": "graphnode2",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#416D9C",
"$type": "circle",
"$dim": 7
},
"id": "graphnode2",
"name": "graphnode2"
}, {
"adjacencies": [
{
"nodeTo": "graphnode5",
"nodeFrom": "graphnode3",
"data": {
"$color": "#909291"
}
}, {
"nodeTo": "graphnode9",
"nodeFrom": "graphnode3",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode10",
"nodeFrom": "graphnode3",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode12",
"nodeFrom": "graphnode3",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#416D9C",
"$type": "square",
"$dim": 10
},
"id": "graphnode3",
"name": "graphnode3"
}, {
"adjacencies": [],
"data": {
"$color": "#83548B",
"$type": "square",
"$dim": 11
},
"id": "graphnode4",
"name": "graphnode4"
}, {
"adjacencies": [
{
"nodeTo": "graphnode9",
"nodeFrom": "graphnode5",
"data": {
"$color": "#909291"
}
}
],
"data": {
"$color": "#C74243",
"$type": "triangle",
"$dim": 8
},
"id": "graphnode5",
"name": "graphnode5"
}, {
"adjacencies": [
{
"nodeTo": "graphnode10",
"nodeFrom": "graphnode6",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode11",
"nodeFrom": "graphnode6",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#83548B",
"$type": "circle",
"$dim": 11
},
"id": "graphnode6",
"name": "graphnode6"
}, {
"adjacencies": [],
"data": {
"$color": "#EBB056",
"$type": "triangle",
"$dim": 12
},
"id": "graphnode7",
"name": "graphnode7"
}, {
"adjacencies": [],
"data": {
"$color": "#C74243",
"$type": "star",
"$dim": 10
},
"id": "graphnode8",
"name": "graphnode8"
}, {
"adjacencies": [],
"data": {
"$color": "#83548B",
"$type": "circle",
"$dim": 12
},
"id": "graphnode9",
"name": "graphnode9"
}, {
"adjacencies": [
{
"nodeTo": "graphnode11",
"nodeFrom": "graphnode10",
"data": {
"$color": "#909291"
}
}
],
"data": {
"$color": "#70A35E",
"$type": "triangle",
"$dim": 11
},
"id": "graphnode10",
"name": "graphnode10"
}, {
"adjacencies": [],
"data": {
"$color": "#70A35E",
"$type": "circle",
"$dim": 11
},
"id": "graphnode11",
"name": "graphnode11"
}, {
"adjacencies": [],
"data": {
"$color": "#83548B",
"$type": "triangle",
"$dim": 10
},
"id": "graphnode12",
"name": "graphnode12"
}, {
"adjacencies": [
{
"nodeTo": "graphnode14",
"nodeFrom": "graphnode13",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#EBB056",
"$type": "star",
"$dim": 7
},
"id": "graphnode13",
"name": "graphnode13"
}, {
"adjacencies": [],
"data": {
"$color": "#EBB056",
"$type": "triangle",
"$dim": 12
},
"id": "graphnode14",
"name": "graphnode14"
}, {
"adjacencies": [
{
"nodeTo": "graphnode16",
"nodeFrom": "graphnode15",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode17",
"nodeFrom": "graphnode15",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#83548B",
"$type": "triangle",
"$dim": 11
},
"id": "graphnode15",
"name": "graphnode15"
}, {
"adjacencies": [
{
"nodeTo": "graphnode17",
"nodeFrom": "graphnode16",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#C74243",
"$type": "star",
"$dim": 7
},
"id": "graphnode16",
"name": "graphnode16"
}, {
"adjacencies": [],
"data": {
"$color": "#416D9C",
"$type": "circle",
"$dim": 7
},
"id": "graphnode17",
"name": "graphnode17"
}, {
"adjacencies": [
{
"nodeTo": "graphnode19",
"nodeFrom": "graphnode18",
"data": {
"$color": "#557EAA"
}
}, {
"nodeTo": "graphnode20",
"nodeFrom": "graphnode18",
"data": {
"$color": "#557EAA"
}
}
],
"data": {
"$color": "#EBB056",
"$type": "triangle",
"$dim": 9
},
"id": "graphnode18",
"name": "graphnode18"
}, {
"adjacencies": [],
"data": {
"$color": "#70A35E",
"$type": "circle",
"$dim": 8
},
"id": "graphnode19",
"name": "graphnode19"
}, {
"adjacencies": [],
"data": {
"$color": "#C74243",
"$type": "star",
"$dim": 8
},
"id": "graphnode20",
"name": "graphnode20"
}
]
RadialGraphJSONSampleData = {
"id": "190_0",
"name": "Pearl Jamx0r",
"children": [{
"id": "306208_1",
"name": "Pearl Jam & Cypress Hill",
"data": {
"relation": "<h4>Pearl Jam & Cypress Hill</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: collaboration)</div></li><li>Cypress Hill <div>(relation: collaboration)</div></li></ul>"
},
"children": [{
"id": "84_2",
"name": "Cypress Hill",
"data": {
"relation": "<h4>Cypress Hill</h4><b>Connections:</b><ul><li>Pearl Jam & Cypress Hill <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}]
}, {
"id": "107877_3",
"name": "Neil Young & Pearl Jam",
"data": {
"relation": "<h4>Neil Young & Pearl Jam</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: collaboration)</div></li><li>Neil Young <div>(relation: collaboration)</div></li></ul>"
},
"children": [{
"id": "964_4",
"name": "Neil Young",
"data": {
"relation": "<h4>Neil Young</h4><b>Connections:</b><ul><li>Neil Young & Pearl Jam <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236797_5",
"name": "Jeff Ament",
"data": {
"relation": "<h4>Jeff Ament</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Temple of the Dog <div>(relation: member of band)</div></li><li>Mother Love Bone <div>(relation: member of band)</div></li><li>Green River <div>(relation: member of band)</div></li><li>M.A.C.C. <div>(relation: collaboration)</div></li><li>Three Fish <div>(relation: member of band)</div></li><li>Gossman Project <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "1756_6",
"name": "Temple of the Dog",
"data": {
"relation": "<h4>Temple of the Dog</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "14581_7",
"name": "Mother Love Bone",
"data": {
"relation": "<h4>Mother Love Bone</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "50188_8",
"name": "Green River",
"data": {
"relation": "<h4>Green River</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "65452_9",
"name": "M.A.C.C.",
"data": {
"relation": "<h4>M.A.C.C.</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}, {
"id": "115632_10",
"name": "Three Fish",
"data": {
"relation": "<h4>Three Fish</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "346850_11",
"name": "Gossman Project",
"data": {
"relation": "<h4>Gossman Project</h4><b>Connections:</b><ul><li>Jeff Ament <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "41529_12",
"name": "Stone Gossard",
"data": {
"relation": "<h4>Stone Gossard</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Temple of the Dog <div>(relation: member of band)</div></li><li>Mother Love Bone <div>(relation: member of band)</div></li><li>Brad <div>(relation: member of band)</div></li><li>Green River <div>(relation: member of band)</div></li><li>Gossman Project <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "1756_13",
"name": "Temple of the Dog",
"data": {
"relation": "<h4>Temple of the Dog</h4><b>Connections:</b><ul><li>Stone Gossard <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "14581_14",
"name": "Mother Love Bone",
"data": {
"relation": "<h4>Mother Love Bone</h4><b>Connections:</b><ul><li>Stone Gossard <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "24119_15",
"name": "Brad",
"data": {
"relation": "<h4>Brad</h4><b>Connections:</b><ul><li>Stone Gossard <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "50188_16",
"name": "Green River",
"data": {
"relation": "<h4>Green River</h4><b>Connections:</b><ul><li>Stone Gossard <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "346850_17",
"name": "Gossman Project",
"data": {
"relation": "<h4>Gossman Project</h4><b>Connections:</b><ul><li>Stone Gossard <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "131161_18",
"name": "Eddie Vedder",
"data": {
"relation": "<h4>Eddie Vedder</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Temple of the Dog <div>(relation: member of band)</div></li><li>Eddie Vedder & Zeke <div>(relation: collaboration)</div></li><li>Bad Radio <div>(relation: member of band)</div></li><li>Beck & Eddie Vedder <div>(relation: collaboration)</div></li></ul>"
},
"children": [{
"id": "1756_19",
"name": "Temple of the Dog",
"data": {
"relation": "<h4>Temple of the Dog</h4><b>Connections:</b><ul><li>Eddie Vedder <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "72007_20",
"name": "Eddie Vedder & Zeke",
"data": {
"relation": "<h4>Eddie Vedder & Zeke</h4><b>Connections:</b><ul><li>Eddie Vedder <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}, {
"id": "236657_21",
"name": "Bad Radio",
"data": {
"relation": "<h4>Bad Radio</h4><b>Connections:</b><ul><li>Eddie Vedder <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "432176_22",
"name": "Beck & Eddie Vedder",
"data": {
"relation": "<h4>Beck & Eddie Vedder</h4><b>Connections:</b><ul><li>Eddie Vedder <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236583_23",
"name": "Mike McCready",
"data": {
"relation": "<h4>Mike McCready</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Mad Season <div>(relation: member of band)</div></li><li>Temple of the Dog <div>(relation: member of band)</div></li><li>$10,000 Gold Chain <div>(relation: collaboration)</div></li><li>M.A.C.C. <div>(relation: collaboration)</div></li><li>The Rockfords <div>(relation: member of band)</div></li><li>Gossman Project <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "1744_24",
"name": "Mad Season",
"data": {
"relation": "<h4>Mad Season</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "1756_25",
"name": "Temple of the Dog",
"data": {
"relation": "<h4>Temple of the Dog</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "43661_26",
"name": "$10,000 Gold Chain",
"data": {
"relation": "<h4>$10,000 Gold Chain</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}, {
"id": "65452_27",
"name": "M.A.C.C.",
"data": {
"relation": "<h4>M.A.C.C.</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}, {
"id": "153766_28",
"name": "The Rockfords",
"data": {
"relation": "<h4>The Rockfords</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "346850_29",
"name": "Gossman Project",
"data": {
"relation": "<h4>Gossman Project</h4><b>Connections:</b><ul><li>Mike McCready <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236585_30",
"name": "Matt Cameron",
"data": {
"relation": "<h4>Matt Cameron</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Soundgarden <div>(relation: member of band)</div></li><li>Temple of the Dog <div>(relation: member of band)</div></li><li>Eleven <div>(relation: supporting musician)</div></li><li>Queens of the Stone Age <div>(relation: member of band)</div></li><li>Wellwater Conspiracy <div>(relation: member of band)</div></li><li>M.A.C.C. <div>(relation: collaboration)</div></li><li>Tone Dogs <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "1111_31",
"name": "Soundgarden",
"data": {
"relation": "<h4>Soundgarden</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "1756_32",
"name": "Temple of the Dog",
"data": {
"relation": "<h4>Temple of the Dog</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "9570_33",
"name": "Eleven",
"data": {
"relation": "<h4>Eleven</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: supporting musician)</div></li></ul>"
},
"children": []
}, {
"id": "11783_34",
"name": "Queens of the Stone Age",
"data": {
"relation": "<h4>Queens of the Stone Age</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "61972_35",
"name": "Wellwater Conspiracy",
"data": {
"relation": "<h4>Wellwater Conspiracy</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "65452_36",
"name": "M.A.C.C.",
"data": {
"relation": "<h4>M.A.C.C.</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: collaboration)</div></li></ul>"
},
"children": []
}, {
"id": "353097_37",
"name": "Tone Dogs",
"data": {
"relation": "<h4>Tone Dogs</h4><b>Connections:</b><ul><li>Matt Cameron <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236594_38",
"name": "Dave Krusen",
"data": {
"relation": "<h4>Dave Krusen</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Candlebox <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "2092_39",
"name": "Candlebox",
"data": {
"relation": "<h4>Candlebox</h4><b>Connections:</b><ul><li>Dave Krusen <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236022_40",
"name": "Matt Chamberlain",
"data": {
"relation": "<h4>Matt Chamberlain</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Critters Buggin <div>(relation: member of band)</div></li><li>Edie Brickell and New Bohemians <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "54761_41",
"name": "Critters Buggin",
"data": {
"relation": "<h4>Critters Buggin</h4><b>Connections:</b><ul><li>Matt Chamberlain <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "92043_42",
"name": "Edie Brickell and New Bohemians",
"data": {
"relation": "<h4>Edie Brickell and New Bohemians</h4><b>Connections:</b><ul><li>Matt Chamberlain <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236611_43",
"name": "Dave Abbruzzese",
"data": {
"relation": "<h4>Dave Abbruzzese</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Green Romance Orchestra <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "276933_44",
"name": "Green Romance Orchestra",
"data": {
"relation": "<h4>Green Romance Orchestra</h4><b>Connections:</b><ul><li>Dave Abbruzzese <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}, {
"id": "236612_45",
"name": "Jack Irons",
"data": {
"relation": "<h4>Jack Irons</h4><b>Connections:</b><ul><li>Pearl Jam <div>(relation: member of band)</div></li><li>Redd Kross <div>(relation: member of band)</div></li><li>Eleven <div>(relation: member of band)</div></li><li>Red Hot Chili Peppers <div>(relation: member of band)</div></li><li>Anthym <div>(relation: member of band)</div></li><li>What Is This? <div>(relation: member of band)</div></li></ul>"
},
"children": [{
"id": "4619_46",
"name": "Redd Kross",
"data": {
"relation": "<h4>Redd Kross</h4><b>Connections:</b><ul><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "9570_47",
"name": "Eleven",
"data": {
"relation": "<h4>Eleven</h4><b>Connections:</b><ul><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "12389_48",
"name": "Red Hot Chili Peppers",
"data": {
"relation": "<h4>Red Hot Chili Peppers</h4><b>Connections:</b><ul><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "114288_49",
"name": "Anthym",
"data": {
"relation": "<h4>Anthym</h4><b>Connections:</b><ul><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
},
"children": []
}, {
"id": "240013_50",
"name": "What Is This?",
"data": {
"relation": "<h4>What Is This?</h4><b>Connections:</b><ul><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
},
"children": []
}]
}],
"data": {
"relation": "<h4>Pearl Jam</h4><b>Connections:</b><ul><li>Pearl Jam & Cypress Hill <div>(relation: collaboration)</div></li><li>Neil Young & Pearl Jam <div>(relation: collaboration)</div></li><li>Jeff Ament <div>(relation: member of band)</div></li><li>Stone Gossard <div>(relation: member of band)</div></li><li>Eddie Vedder <div>(relation: member of band)</div></li><li>Mike McCready <div>(relation: member of band)</div></li><li>Matt Cameron <div>(relation: member of band)</div></li><li>Dave Krusen <div>(relation: member of band)</div></li><li>Matt Chamberlain <div>(relation: member of band)</div></li><li>Dave Abbruzzese <div>(relation: member of band)</div></li><li>Jack Irons <div>(relation: member of band)</div></li></ul>"
}
}
SunburstJSONSampleData = {
"children": [
{
"children": [
{
"children": [],
"data": {
"description": "",
"$angularWidth": 7490,
"days": 111,
"$color": "#FCD9A1",
"size": 7490
},
"id": "Source/Coordinates/Complex.js",
"name": "Complex.js"
},
{
"children": [],
"data": {
"description": "Fixed polar interpolation problem when theta = pi",
"$angularWidth": 6390,
"days": 2,
"$color": "#B0AAF6",
"size": 6390
},
"id": "Source/Coordinates/Polar.js",
"name": "Polar.js"
}
],
"data": {
"description": "Fixed polar interpolation problem when theta = pi",
"$color": "#B0AAF6",
"days": 2,
"$angularWidth": 1000,
"size": 13880
},
"id": "Source/Coordinates",
"name": "Coordinates"
},
{
"children": [
{
"children": [],
"data": {
"description": "Scaling done right :)",
"$angularWidth": 14952,
"days": 3,
"$color": "#B2ABF4",
"size": 14952
},
"id": "Source/Core/Canvas.js",
"name": "Canvas.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 14759,
"days": 3,
"$color": "#B2ABF4",
"size": 14759
},
"id": "Source/Core/Core.js",
"name": "Core.js"
},
{
"children": [],
"data": {
"description": "",
"$angularWidth": 5838,
"days": 111,
"$color": "#FCD9A1",
"size": 5838
},
"id": "Source/Core/Fx.js",
"name": "Fx.js"
}
],
"data": {
"description": "Animated TreeMaps",
"$color": "#B2ABF4",
"days": 3,
"$angularWidth": 1000,
"size": 35549
},
"id": "Source/Core",
"name": "Core"
},
{
"children": [
{
"children": [],
"data": {
"description": "Merge remote branch 'woot/bugfixes_docnet' into sunburst_fixes",
"$angularWidth": 18672,
"days": 1,
"$color": "#AEA9F8",
"size": 18672
},
"id": "Source/Extras/Extras.js",
"name": "Extras.js"
}
],
"data": {
"description": "Merge remote branch 'woot/bugfixes_docnet' into sunburst_fixes",
"$color": "#AEA9F8",
"days": 1,
"$angularWidth": 1000,
"size": 18672
},
"id": "Source/Extras",
"name": "Extras"
},
{
"children": [
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 1652,
"days": 3,
"$color": "#B2ABF4",
"size": 1652
},
"id": "Source/Graph/Graph.Geom.js",
"name": "Graph.Geom.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 27921,
"days": 3,
"$color": "#B2ABF4",
"size": 27921
},
"id": "Source/Graph/Graph.js",
"name": "Graph.js"
},
{
"children": [],
"data": {
"description": "Added new Canvas class with zoom/pan options",
"$angularWidth": 9512,
"days": 5,
"$color": "#B6AEEF",
"size": 9512
},
"id": "Source/Graph/Graph.Label.js",
"name": "Graph.Label.js"
},
{
"children": [],
"data": {
"description": "Change the way edges where stored and used in Graph.js. This is how Graph.js internally handles nodes. The user API should remain the same.",
"$angularWidth": 22838,
"days": 26,
"$color": "#E0C7C0",
"size": 22838
},
"id": "Source/Graph/Graph.Op.js",
"name": "Graph.Op.js"
},
{
"children": [],
"data": {
"description": "Bug Fix Extras + Tweaking examples",
"$angularWidth": 18950,
"days": 19,
"$color": "#D2BFD0",
"size": 18950
},
"id": "Source/Graph/Graph.Plot.js",
"name": "Graph.Plot.js"
},
{
"children": [],
"data": {
"description": "(Re)-Implemented nodeTypes",
"$angularWidth": 6947,
"days": 32,
"$color": "#ECCFB3",
"size": 6947
},
"id": "Source/Graph/Helpers.js",
"name": "Helpers.js"
}
],
"data": {
"description": "Animated TreeMaps",
"$color": "#B2ABF4",
"days": 3,
"$angularWidth": 1000,
"size": 87820
},
"id": "Source/Graph",
"name": "Graph"
},
{
"children": [
{
"children": [],
"data": {
"description": "$jit namespace",
"$angularWidth": 4064,
"days": 111,
"$color": "#FCD9A1",
"size": 4064
},
"id": "Source/Layouts/Layouts.ForceDirected.js",
"name": "Layouts.ForceDirected.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 2198,
"days": 3,
"$color": "#B2ABF4",
"size": 2198
},
"id": "Source/Layouts/Layouts.js",
"name": "Layouts.js"
},
{
"children": [],
"data": {
"description": "$jit namespace",
"$angularWidth": 4372,
"days": 111,
"$color": "#FCD9A1",
"size": 4372
},
"id": "Source/Layouts/Layouts.Radial.js",
"name": "Layouts.Radial.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 15570,
"days": 3,
"$color": "#B2ABF4",
"size": 15570
},
"id": "Source/Layouts/Layouts.TM.js",
"name": "Layouts.TM.js"
},
{
"children": [],
"data": {
"description": "$jit namespace",
"$angularWidth": 6696,
"days": 111,
"$color": "#FCD9A1",
"size": 6696
},
"id": "Source/Layouts/Layouts.Tree.js",
"name": "Layouts.Tree.js"
}
],
"data": {
"description": "Animated TreeMaps",
"$color": "#B2ABF4",
"days": 3,
"$angularWidth": 1000,
"size": 32900
},
"id": "Source/Layouts",
"name": "Layouts"
},
{
"children": [
{
"children": [],
"data": {
"description": "Fixed passing of general Label object",
"$angularWidth": 8079,
"days": 26,
"$color": "#E0C7C0",
"size": 8079
},
"id": "Source/Loader/Loader.js",
"name": "Loader.js"
}
],
"data": {
"description": "Fixed passing of general Label object",
"$color": "#E0C7C0",
"days": 26,
"$angularWidth": 1000,
"size": 8079
},
"id": "Source/Loader",
"name": "Loader"
},
{
"children": [
{
"children": [],
"data": {
"description": "Small tweaks on Tips and Selected nodes in Charts",
"$angularWidth": 348,
"days": 33,
"$color": "#EED0B0",
"size": 348
},
"id": "Source/Options/Options.AreaChart.js",
"name": "Options.AreaChart.js"
},
{
"children": [],
"data": {
"description": "Added gradients to AreaChart",
"$angularWidth": 386,
"days": 37,
"$color": "#F6D5A7",
"size": 386
},
"id": "Source/Options/Options.BarChart.js",
"name": "Options.BarChart.js"
},
{
"children": [],
"data": {
"description": "Add label types",
"$angularWidth": 392,
"days": 26,
"$color": "#E0C7C0",
"size": 392
},
"id": "Source/Options/Options.Canvas.js",
"name": "Options.Canvas.js"
},
{
"children": [],
"data": {
"description": "Organizing sources and build",
"$angularWidth": 3856,
"days": 112,
"$color": "#FCD9A1",
"size": 3856
},
"id": "Source/Options/Options.Controller.js",
"name": "Options.Controller.js"
},
{
"children": [],
"data": {
"description": "Added raw Canvas options ",
"$angularWidth": 1475,
"days": 31,
"$color": "#EACDB5",
"size": 1475
},
"id": "Source/Options/Options.Edge.js",
"name": "Options.Edge.js"
},
{
"children": [],
"data": {
"description": "Extras.Events bug fixes",
"$angularWidth": 312,
"days": 20,
"$color": "#D4C0CE",
"size": 312
},
"id": "Source/Options/Options.Events.js",
"name": "Options.Events.js"
},
{
"children": [],
"data": {
"description": "$jit namespace",
"$angularWidth": 749,
"days": 111,
"$color": "#FCD9A1",
"size": 749
},
"id": "Source/Options/Options.Fx.js",
"name": "Options.Fx.js"
},
{
"children": [],
"data": {
"description": "Revisiting Extras.js",
"$angularWidth": 530,
"days": 25,
"$color": "#DEC6C2",
"size": 530
},
"id": "Source/Options/Options.js",
"name": "Options.js"
},
{
"children": [],
"data": {
"description": "Add label types",
"$angularWidth": 203,
"days": 26,
"$color": "#E0C7C0",
"size": 203
},
"id": "Source/Options/Options.Label.js",
"name": "Options.Label.js"
},
{
"children": [],
"data": {
"description": "* Ignore panning",
"$angularWidth": 137,
"days": 1,
"$color": "#AEA9F8",
"size": 137
},
"id": "Source/Options/Options.Navigation.js",
"name": "Options.Navigation.js"
},
{
"children": [],
"data": {
"description": "Added raw Canvas options",
"$angularWidth": 2083,
"days": 31,
"$color": "#EACDB5",
"size": 2083
},
"id": "Source/Options/Options.Node.js",
"name": "Options.Node.js"
},
{
"children": [],
"data": {
"description": "Bug Fix Extras + Tweaking examples",
"$angularWidth": 583,
"days": 19,
"$color": "#D2BFD0",
"size": 583
},
"id": "Source/Options/Options.NodeStyles.js",
"name": "Options.NodeStyles.js"
},
{
"children": [],
"data": {
"description": "Add an option to resize labels according to its pie slice",
"$angularWidth": 380,
"days": 1,
"$color": "#AEA9F8",
"size": 380
},
"id": "Source/Options/Options.PieChart.js",
"name": "Options.PieChart.js"
},
{
"children": [],
"data": {
"description": "Revisiting Extras.js RedesigningMouseEventManager",
"$angularWidth": 1120,
"days": 25,
"$color": "#DEC6C2",
"size": 1120
},
"id": "Source/Options/Options.Tips.js",
"name": "Options.Tips.js"
},
{
"children": [],
"data": {
"description": "Organizing sources and build",
"$angularWidth": 1021,
"days": 112,
"$color": "#FCD9A1",
"size": 1021
},
"id": "Source/Options/Options.Tree.js",
"name": "Options.Tree.js"
}
],
"data": {
"description": "Add an option to resize labels according to its pie slice",
"$color": "#AEA9F8",
"days": 1,
"$angularWidth": 1000,
"size": 13575
},
"id": "Source/Options",
"name": "Options"
},
{
"children": [
{
"children": [],
"data": {
"description": "Fixing AreaCharts for IE",
"$angularWidth": 13636,
"days": 19,
"$color": "#D2BFD0",
"size": 13636
},
"id": "Source/Visualizations/AreaChart.js",
"name": "AreaChart.js"
},
{
"children": [],
"data": {
"description": "Append utils, id and Class objects to $jit. Add legends to Bar/Pie/AreaChart examples.",
"$angularWidth": 12608,
"days": 15,
"$color": "#CABAD9",
"size": 12608
},
"id": "Source/Visualizations/BarChart.js",
"name": "BarChart.js"
},
{
"children": [],
"data": {
"description": "Added new Canvas class with zoom/pan options",
"$angularWidth": 16954,
"days": 5,
"$color": "#B6AEEF",
"size": 16954
},
"id": "Source/Visualizations/ForceDirected.js",
"name": "ForceDirected.js"
},
{
"children": [],
"data": {
"description": "Added new Canvas class with zoom/pan options",
"$angularWidth": 23448,
"days": 5,
"$color": "#B6AEEF",
"size": 23448
},
"id": "Source/Visualizations/Hypertree.js",
"name": "Hypertree.js"
},
{
"children": [],
"data": {
"description": "Adding $jit as Namespace + Build Refactor + Config (part I)",
"$angularWidth": 0,
"days": 112,
"$color": "#FCD9A1",
"size": 0
},
"id": "Source/Visualizations/Icicle.js",
"name": "Icicle.js"
},
{
"children": [],
"data": {
"description": "Add an option to resize labels according to its pie slice",
"$angularWidth": 10762,
"days": 1,
"$color": "#AEA9F8",
"size": 10762
},
"id": "Source/Visualizations/PieChart.js",
"name": "PieChart.js"
},
{
"children": [],
"data": {
"description": "Added new Canvas class with zoom/pan options",
"$angularWidth": 18010,
"days": 5,
"$color": "#B6AEEF",
"size": 18010
},
"id": "Source/Visualizations/RGraph.js",
"name": "RGraph.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 52895,
"days": 3,
"$color": "#B2ABF4",
"size": 52895
},
"id": "Source/Visualizations/Spacetree.js",
"name": "Spacetree.js"
},
{
"children": [],
"data": {
"description": "Adding new JSON data to the Sunburst and already finding some bugs :S",
"$angularWidth": 21436,
"days": 2,
"$color": "#B0AAF6",
"size": 21436
},
"id": "Source/Visualizations/Sunburst.js",
"name": "Sunburst.js"
},
{
"children": [],
"data": {
"description": "Animated TreeMaps",
"$angularWidth": 16472,
"days": 3,
"$color": "#B2ABF4",
"size": 16472
},
"id": "Source/Visualizations/Treemap.js",
"name": "Treemap.js"
}
],
"data": {
"description": "Merge remote branch 'woot/bugfixes_docnet' into sunburst_fixes",
"$color": "#AEA9F8",
"days": 1,
"$angularWidth": 1000,
"size": 186221
},
"id": "Source/Visualizations",
"name": "Visualizations"
}
],
"data": {
"$type": "none"
},
"id": "Source",
"name": "Source"
}
HyperTreeJSONSampleData = {
"id": "347_0",
"name": "Nine Inch Nails",
"children": [{
"id": "126510_1",
"name": "Jerome Dillon",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "52163_2",
"name": "Howlin' Maggie",
"data": {
"band": "Jerome Dillon",
"relation": "member of band"
},
"children": []
}, {
"id": "324134_3",
"name": "nearLY",
"data": {
"band": "Jerome Dillon",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "173871_4",
"name": "Charlie Clouser",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": []
}, {
"id": "235952_5",
"name": "James Woolley",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": []
}, {
"id": "235951_6",
"name": "Jeff Ward",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "2382_7",
"name": "Ministry",
"data": {
"band": "Jeff Ward",
"relation": "member of band"
},
"children": []
}, {
"id": "2415_8",
"name": "Revolting Cocks",
"data": {
"band": "Jeff Ward",
"relation": "member of band"
},
"children": []
}, {
"id": "3963_9",
"name": "Pigface",
"children": []
}, {
"id": "7848_10",
"name": "Lard",
"data": {
"band": "Jeff Ward",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "235950_11",
"name": "Richard Patrick",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "1007_12",
"name": "Filter",
"data": {
"band": "Richard Patrick",
"relation": "member of band"
},
"children": []
}, {
"id": "327924_13",
"name": "Army of Anyone",
"data": {
"band": "Richard Patrick",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "2396_14",
"name": "Trent Reznor",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "3963_15",
"name": "Pigface",
"data": {
"band": "Trent Reznor",
"relation": "member of band"
},
"children": []
}, {
"id": "32247_16",
"name": "1000 Homo DJs",
"data": {
"band": "Trent Reznor",
"relation": "member of band"
},
"children": []
}, {
"id": "83761_17",
"name": "Option 30",
"data": {
"band": "Trent Reznor",
"relation": "member of band"
},
"children": []
}, {
"id": "133257_18",
"name": "Exotic Birds",
"data": {
"band": "Trent Reznor",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "36352_19",
"name": "Chris Vrenna",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "1013_20",
"name": "Stabbing Westward",
"data": {
"band": "Chris Vrenna",
"relation": "member of band"
},
"children": []
}, {
"id": "3963_21",
"name": "Pigface",
"data": {
"band": "Chris Vrenna",
"relation": "member of band"
},
"children": []
}, {
"id": "5752_22",
"name": "Jack Off Jill",
"data": {
"band": "Chris Vrenna",
"relation": "member of band"
},
"children": []
}, {
"id": "33602_23",
"name": "Die Warzau",
"data": {
"band": "Chris Vrenna",
"relation": "member of band"
},
"children": []
}, {
"id": "40485_24",
"name": "tweaker",
"data": {
"band": "Chris Vrenna",
"relation": "is person"
},
"children": []
}, {
"id": "133257_25",
"name": "Exotic Birds",
"data": {
"band": "Chris Vrenna",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "236021_26",
"name": "Aaron North",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": []
}, {
"id": "236024_27",
"name": "Jeordie White",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "909_28",
"name": "A Perfect Circle",
"data": {
"band": "Jeordie White",
"relation": "member of band"
},
"children": []
}, {
"id": "237377_29",
"name": "Twiggy Ramirez",
"data": {
"band": "Jeordie White",
"relation": "is person"
},
"children": []
}]
}, {
"id": "235953_30",
"name": "Robin Finck",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "1440_31",
"name": "Guns N' Roses",
"data": {
"band": "Robin Finck",
"relation": "member of band"
},
"children": []
}]
}, {
"id": "235955_32",
"name": "Danny Lohner",
"data": {
"band": "Nine Inch Nails",
"relation": "member of band"
},
"children": [{
"id": "909_33",
"name": "A Perfect Circle",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "1695_34",
"name": "Killing Joke",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "1938_35",
"name": "Methods of Mayhem",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "5138_36",
"name": "Skrew",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "53549_37",
"name": "Angkor Wat",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "113510_38",
"name": "Puscifer",
"data": {
"band": "Danny Lohner",
"relation": "member of band"
},
"children": []
}, {
"id": "113512_39",
"name": "Renhold\u00ebr",
"data": {
"band": "Danny Lohner",
"relation": "is person"
},
"children": []
}]
}],
"data": []
}
| 29.593528 | 749 | 0.392372 | 6,744 | 83,217 | 4.818654 | 0.170522 | 0.048681 | 0.055636 | 0.069545 | 0.540481 | 0.459304 | 0.414438 | 0.374404 | 0.313475 | 0.292519 | 0 | 0.092797 | 0.423099 | 83,217 | 2,811 | 750 | 29.604056 | 0.584111 | 0.001478 | 0 | 0.443888 | 0 | 0.018942 | 0.41666 | 0.07391 | 0 | 0 | 0 | 0 | 0.000357 | 1 | 0.000715 | false | 0.000715 | 0.000357 | 0 | 0.001787 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa4aaacb199274cbe77eabc3d3c1639eae5abc7e | 467 | py | Python | Computer science/Programming languages/Python/Control flow/Control flow statements/Else statement/spellchecker.py | chanchanchong/PYTHON-TRACK-IN-HYPERSKILL | 462fe08ff4a2b183fd45a0235ab1ec7a788bd54c | [
"MIT"
] | null | null | null | Computer science/Programming languages/Python/Control flow/Control flow statements/Else statement/spellchecker.py | chanchanchong/PYTHON-TRACK-IN-HYPERSKILL | 462fe08ff4a2b183fd45a0235ab1ec7a788bd54c | [
"MIT"
] | null | null | null | Computer science/Programming languages/Python/Control flow/Control flow statements/Else statement/spellchecker.py | chanchanchong/PYTHON-TRACK-IN-HYPERSKILL | 462fe08ff4a2b183fd45a0235ab1ec7a788bd54c | [
"MIT"
] | null | null | null | # Write a simple spellchecker that tells you if the word is spelled
# correctly. Use the dictionary in the code below; it contains the
# list of all correctly written words.
# The input format:
# A single line with the "word"
# The output format:
# If the word is spelled correctly, print "Correct"; otherwise, print "Incorrect".
dictionary = ["aa", "abab", "aac", "ba", "bac", "baba", "cac", "caac"]
word = input()
print("Correct" if word in dictionary else "Incorrect") | 29.1875 | 70 | 0.708779 | 70 | 467 | 4.728571 | 0.628571 | 0.063444 | 0.054381 | 0.066465 | 0.163142 | 0.163142 | 0 | 0 | 0 | 0 | 0 | 0 | 0.17773 | 467 | 16 | 71 | 29.1875 | 0.861979 | 0.650964 | 0 | 0 | 0 | 0 | 0.264516 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa4ecfe5ddd5ac018f7b1d728ed03a1ba71dacca | 1,264 | py | Python | mnb.py | rmayherr/python | 830aec82e3ab155b66d01032eac71bbe6f961fce | [
"MIT"
] | null | null | null | mnb.py | rmayherr/python | 830aec82e3ab155b66d01032eac71bbe6f961fce | [
"MIT"
] | null | null | null | mnb.py | rmayherr/python | 830aec82e3ab155b66d01032eac71bbe6f961fce | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import requests, sys, traceback
from bs4 import BeautifulSoup
wurl = "https://mnb.hu/arfolyamok"
def download(url):
    # Fetch the page and return its HTML text; exit with an error on any failure.
    try:
        r = requests.get(url, allow_redirects=True, timeout=10)
        wtype = r.headers.get('content-type').split(';')
        if wtype[0] == "text/html" and r.status_code == 200:
            return r.text
        # anything other than an HTML 200 response is treated as an error
        raise ValueError("unexpected content type or status code")
    except Exception:
        print("Error occurred in download()!")
        traceback.print_exc()
        sys.exit(1)
def parse_page(content):
    # Extract the date header plus the first three currency/rate pairs from the table.
    s = BeautifulSoup(content, 'html.parser')
    counter = 1
    result = []
    wdate = s.find("th", {'class': 'head'})
    result.append(wdate.get_text().encode('utf-8'))
    # the 'valute'/'value' cells alternate: currency code, then its HUF rate
    for i in s.find_all("td", {'class': ['valute', 'value']}):
        if counter == 7:
            break
        result.append(i.get_text())
        counter += 1
    return result
def print_data(data):
#header = 'MNB legfrissebb hivatalos deviza' + '\xe1' + 'rfolyamai'
header = 'MNB legfrissebb hivatalos deviza árfolyamai'
print(f'\t{header} {data[0].decode()}')
print(f'\t{data[1]:<4} {data[2]:<6} HUF')
print(f'\t{data[3]:<4} {data[4]:<6} HUF')
print(f'\t{data[5]:<4} {data[6]:<6} HUF')
if __name__ == "__main__":
print_data(parse_page(download(wurl)))
| 30.829268 | 71 | 0.592563 | 175 | 1,264 | 4.177143 | 0.525714 | 0.032832 | 0.038304 | 0.045144 | 0.136799 | 0.04104 | 0 | 0 | 0 | 0 | 0 | 0.027551 | 0.224684 | 1,264 | 40 | 72 | 31.6 | 0.718367 | 0.068829 | 0 | 0 | 0 | 0 | 0.249787 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.060606 | 0 | 0.212121 | 0.242424 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa4ef685e4e299fdbf2fa284a9d484d02a17db84 | 5,023 | py | Python | tests/test_utils.py | owid/owid-catalog-py | e94d3db308831fec72a3af21a0248d4224bcd920 | [
"MIT"
] | 9 | 2021-10-18T09:56:28.000Z | 2022-03-26T06:28:21.000Z | tests/test_utils.py | owid/owid-catalog-py | e94d3db308831fec72a3af21a0248d4224bcd920 | [
"MIT"
] | 17 | 2021-09-22T09:00:05.000Z | 2022-03-31T07:31:54.000Z | tests/test_utils.py | owid/owid-catalog-py | e94d3db308831fec72a3af21a0248d4224bcd920 | [
"MIT"
] | 1 | 2022-02-22T15:30:52.000Z | 2022-02-22T15:30:52.000Z | import pandas as pd
import pytest
from owid.catalog import Table
from owid.catalog.utils import underscore, underscore_table
def test_underscore():
assert (
underscore(
"`17.11.1 - Developing countries’ and least developed countries’ share of global merchandise exports (%) - TX_EXP_GBMRCH`"
)
== "_17_11_1__developing_countries_and_least_developed_countries_share_of_global_merchandise_exports__pct__tx_exp_gbmrch"
)
assert underscore("Urban population") == "urban_population"
assert (
underscore("Urban population (% of total population)")
== "urban_population__pct_of_total_population"
)
assert (
underscore("Women's share of population ages 15+ living with HIV (%)")
== "womens_share_of_population_ages_15plus_living_with_hiv__pct"
)
assert (
underscore(
"Water productivity, total (constant 2010 US$ GDP per cubic meter of total freshwater withdrawal)"
)
== "water_productivity__total__constant_2010_usd_gdp_per_cubic_meter_of_total_freshwater_withdrawal"
)
assert (
underscore("Agricultural machinery, tractors per 100 sq. km of arable land")
== "agricultural_machinery__tractors_per_100_sq__km_of_arable_land"
)
assert (
underscore("GDP per capita, PPP (current international $)")
== "gdp_per_capita__ppp__current_international_dollar"
)
assert (
underscore("Automated teller machines (ATMs) (per 100,000 adults)")
== "automated_teller_machines__atms__per_100_000_adults"
)
assert (
underscore(
"Political regimes - OWID based on Boix et al. (2013), V-Dem (v12), and Lührmann et al. (2018)"
)
== "political_regimes__owid_based_on_boix_et_al__2013__v_dem__v12__and_luhrmann_et_al__2018"
)
assert (
underscore("Adjusted savings: particulate emission damage (current US$)")
== "adjusted_savings__particulate_emission_damage__current_usd"
)
assert (
underscore(
"Benefit incidence of unemployment benefits and ALMP to poorest quintile (% of total U/ALMP benefits)"
)
== "benefit_incidence_of_unemployment_benefits_and_almp_to_poorest_quintile__pct_of_total_u_almp_benefits"
)
assert (
underscore(
"Business extent of disclosure index (0=less disclosure to 10=more disclosure)"
)
== "business_extent_of_disclosure_index__0_less_disclosure_to_10_more_disclosure"
)
assert (
underscore("Firms that spend on R&D (% of firms)")
== "firms_that_spend_on_r_and_d__pct_of_firms"
)
assert (
underscore(
"Wages in the manufacturing sector vs. several food prices in the US – U.S. Bureau of Labor Statistics (2013)"
)
== "wages_in_the_manufacturing_sector_vs__several_food_prices_in_the_us__u_s__bureau_of_labor_statistics__2013"
)
assert (
underscore('Tax "composition" –\tArroyo Abad and P. Lindert (2016)')
== "tax_composition__arroyo_abad__and_p__lindert__2016"
)
assert (
underscore("20th century deaths in US - CDC")
== "_20th_century_deaths_in_us__cdc"
)
assert (
underscore("Poverty rate (<50% of median) (LIS Key Figures, 2018)")
== "poverty_rate__lt_50pct_of_median__lis_key_figures__2018"
)
assert underscore("10") == "_10"
assert (
underscore(
"Indicator 1.5.1: Death rate due to exposure to forces of nature (per 100,000 population) *Estimates reported here are based on a 10-year distributed lag for natural disaster mortality. - Past - Scaled"
)
== "indicator_1_5_1__death_rate_due_to_exposure_to_forces_of_nature__per_100_000_population__estimates_reported_here_are_based_on_a_10_year_distributed_lag_for_natural_disaster_mortality__past__scaled"
)
assert underscore("a|b") == "a_b"
assert underscore("$/£ exchange rate") == "dollar_ps_exchange_rate"
def test_underscore_table():
df = pd.DataFrame({"A": [1, 2, 3]})
df.index.names = ["I"]
t = Table(df)
t["A"].metadata.description = "column A"
tt = underscore_table(t)
assert tt.columns == ["a"]
assert tt.index.names == ["i"]
assert tt["a"].metadata.description == "column A"
def test_underscore_table_collision():
df = pd.DataFrame({"A__x": [1, 2, 3], "B": [1, 2, 3], "A(x)": [1, 2, 3]})
t = Table(df)
t["A__x"].metadata.description = "desc1"
t["B"].metadata.description = "desc2"
t["A(x)"].metadata.description = "desc3"
# raise error by default
with pytest.raises(NameError):
underscore_table(t)
# add suffix
tt = underscore_table(t, collision="rename")
assert list(tt.columns) == ["a__x_1", "b", "a__x_2"]
# make sure we retain metadata
assert tt["a__x_1"].metadata.description == "desc1"
assert tt["b"].metadata.description == "desc2"
assert tt["a__x_2"].metadata.description == "desc3"
| 38.937984 | 214 | 0.679873 | 643 | 5,023 | 4.892691 | 0.318818 | 0.106802 | 0.011443 | 0.009536 | 0.58328 | 0.486332 | 0.403051 | 0.384615 | 0.330579 | 0.330579 | 0 | 0.037321 | 0.221183 | 5,023 | 128 | 215 | 39.242188 | 0.766104 | 0.012343 | 0 | 0.230089 | 0 | 0.035398 | 0.553359 | 0.26165 | 0 | 0 | 0 | 0 | 0.247788 | 1 | 0.026549 | false | 0 | 0.035398 | 0 | 0.061947 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa503a31f99be3c5ca537a1da63d0e1f818d8bc8 | 1,434 | py | Python | decode.py | J0113/MSGoverHASH | d1025e30783eb448ce70b3d4d1de35a1cf10bb4a | [
"Apache-2.0"
] | 1 | 2019-03-27T15:36:44.000Z | 2019-03-27T15:36:44.000Z | decode.py | J0113/MSGoverHASH | d1025e30783eb448ce70b3d4d1de35a1cf10bb4a | [
"Apache-2.0"
] | null | null | null | decode.py | J0113/MSGoverHASH | d1025e30783eb448ce70b3d4d1de35a1cf10bb4a | [
"Apache-2.0"
] | null | null | null | import apsw
from os import path
from time import time
from CONFIGURATION import *
print("\nEnter encrypted text:\n\n")
hashedinput = input()
print("\nEnter the encrytion key:\n\n")
securecode = input()
if path.isfile(securecode + ".db"):
starttime = time()
encryptedmsg = hashedinput.split()
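  # copy the on-disk lookup database into an in-memory SQLite connection for faster hash lookups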
conn=apsw.Connection(":memory:")
diskdb = apsw.Connection(securecode + ".db")
with conn.backup("main", diskdb, "main") as backup:
backup.step()
diskdb.close()
c = conn.cursor()
msg = ""
for x in encryptedmsg:
c.execute("SELECT value FROM tb WHERE hash=?", (x,))
group = c.fetchone()
if group:
msg = msg + str(group[0])
pass
endtime = time()
timeused = endtime-starttime
print("\n\n\n\n\n\n--------------------------------------------")
if timeused>60:
if timeused>3600:
print("Time: " + str(round((timeused/3600),2)) + " hours")
else:
print("Time: " + str(round((timeused/60),2)) + " min")
else:
print("Time: " + str(round((timeused), 4)) + " sec")
print("Message:\n\n\n")
print(msg)
print("\n\n\n--------------------------------------------\n\n")
c.close()
conn.close()
pass
else:
print("No DB for this code has been made yet, create one using the rainbow.py generator.")
#t = (hash,)
#c.execute('SELECT * FROM tb WHERE hash=?', t)
#print (c.fetchone())
| 29.265306 | 94 | 0.548117 | 181 | 1,434 | 4.342541 | 0.425414 | 0.033079 | 0.030534 | 0.025445 | 0.132316 | 0.099237 | 0 | 0 | 0 | 0 | 0 | 0.014559 | 0.233612 | 1,434 | 48 | 95 | 29.875 | 0.700637 | 0.052999 | 0 | 0.119048 | 0 | 0 | 0.257565 | 0.081181 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.047619 | 0.095238 | 0 | 0.095238 | 0.238095 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa51a06b7abab099953999d594a494ba49587413 | 5,159 | py | Python | train.py | irunecapri/imageclassifier- | 03f287d0d722cd27f61071eccb7b25601b865f62 | [
"MIT"
] | null | null | null | train.py | irunecapri/imageclassifier- | 03f287d0d722cd27f61071eccb7b25601b865f62 | [
"MIT"
] | null | null | null | train.py | irunecapri/imageclassifier- | 03f287d0d722cd27f61071eccb7b25601b865f62 | [
"MIT"
] | null | null | null | import torch
import time
import numpy as np
from torchvision import datasets, transforms, models
from torch import nn
from torch import optim
from collections import OrderedDict
import torch.nn.functional as F
import torchvision.models as models
from torch.autograd import Variable
import argparse
import json
from utility import load_data
def get_input_args():
parser = argparse.ArgumentParser()
parser.add_argument('data_dir', action='store', help='directory containing images')
parser.add_argument('--save_dir', action='store', help='save trained checkpoint to this directory' )
parser.add_argument('--arch', action='store', help='what kind of pretrained architecture to use', default='vgg19')
parser.add_argument('--gpu', action='store_true', help='use gpu to train model')
parser.add_argument('--epochs', action='store', help='# of epochs to train', type=int, default=4)
parser.add_argument('--lr', action='store', help='which learning rate to start with', type=float, default=0.001)
parser.add_argument('--hidden_units', action='store', help='# of hidden units to add to model', type=int, default=500)
parser.add_argument('--output_size', action='store', help='# of classes to output', type=int, default=102)
return parser.parse_args()
def main():
in_arg = get_input_args()
start_time = time.time()
trainloader, testloader, vloader, train_data = load_data(in_arg.data_dir)
model = get_model(in_arg.arch)
model = load_model(model, in_arg.arch, in_arg.hidden_units, in_arg.lr, in_arg.gpu)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=in_arg.lr)
train(model, in_arg.epochs, in_arg.lr, criterion, optimizer, trainloader, vloader,in_arg.gpu, start_time)
print(f"Time to train and validate model: {(time.time() - start_time):.3f} seconds")
    save_checkpoint(in_arg.save_dir, model, optimizer, in_arg.epochs, in_arg.arch, train_data, in_arg.lr)
def get_model(arch):
if arch == 'vgg19':
model = models.vgg19(pretrained=True)
elif arch =='alexnet':
model = models.alexnet(pretrained = True)
elif arch == 'densenet121':
model = models.densenet121(pretrained = True)
return model
def load_model(model, arch, hidden_units, lr, gpu):
if arch == 'vgg19':
input_size = 25088
elif arch == 'alexnet':
input_size = 9216
elif arch == 'densenet121':
input_size = 1024
output_size = 102
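    # freeze the pretrained feature extractor so only the new classifier head gets trained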
for param in model.parameters():
param.requires_grad = False
classifier= nn.Sequential(nn.Linear(input_size,hidden_units), nn.ReLU(), nn.Linear(hidden_units, 102), nn.LogSoftmax(dim=1))
model.classifier = classifier
criterion = nn.NLLLoss()
return model
def train(model, epochs, lr, criterion, optimizer, trainloader, vloader, gpu, start_time):
device = torch.device('cuda' if torch.cuda.is_available() and gpu else 'cpu')
model.train()
epochs= epochs
steps=0
running_loss=0
print_every=20
print(device)
for epoch in range(epochs):
for inputs, labels in trainloader:
steps+=1
inputs, labels =inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model.forward(inputs)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss+=loss.item()
if steps % print_every==0:
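                # periodically evaluate on the validation set and report running losses/accuracy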
test_loss=0
accuracy=0
model.eval()
with torch.no_grad():
for inputs, labels in vloader:
inputs, labels=inputs.to(device), labels.to(device)
logps =model.forward(inputs)
batch_loss=criterion(logps, labels)
test_loss+=batch_loss.item()
ps=torch.exp(logps)
top_p, top_class=ps.topk(1, dim=1)
equals = top_class==labels.view(*top_class.shape)
accuracy+= torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.."
f"Train loss: {running_loss/print_every:.3f}.. "
f"Validation loss:{test_loss/len(vloader):.3f}.."
f"Validation accuracy: {accuracy/len(vloader):.3f}")
def save_checkpoint(save_dir, model, optimizer, epochs, arch, image_datasets, lr):
model.cpu()
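    # assumes image_datasets is the training dataset (e.g. a torchvision ImageFolder) exposing class_to_idx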
    model.class_to_idx = image_datasets.class_to_idx
checkpoint = {'output_size' : 102,
'optimizer': optimizer,
'arch': arch,
'state_dict': model.state_dict(),
'optimizer_state': optimizer.state_dict(),
'class_to_idx': model.class_to_idx}
torch.save(checkpoint, 'model_checkpoint.pth')
if __name__ == "__main__":
main()
| 28.346154 | 128 | 0.609227 | 627 | 5,159 | 4.85008 | 0.253589 | 0.024663 | 0.044722 | 0.016771 | 0.061822 | 0.026307 | 0.026307 | 0.026307 | 0 | 0 | 0 | 0.017977 | 0.277573 | 5,159 | 181 | 129 | 28.502762 | 0.797961 | 0 | 0 | 0.133333 | 0 | 0 | 0.144306 | 0.018331 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057143 | false | 0 | 0.12381 | 0 | 0.209524 | 0.057143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa5412f5bf3efea48610473b95341620c9ba4d98 | 5,716 | py | Python | Python GUI with TKinter/Lesson 8 - Listboxes.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | Python GUI with TKinter/Lesson 8 - Listboxes.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | Python GUI with TKinter/Lesson 8 - Listboxes.py | ryanzhao2/grade11cs | 173b9f50db49368ea2042f6803d6674dd9f185cd | [
"Apache-2.0"
] | null | null | null | """
from tkinter import *
from tkinter.font import Font
import random
def get_names(filename):
global all_names
all_names = []
fileIn = open(filename, encoding='utf-8', errors='replace')
for line in fileIn:
all_names.append(line.strip())
return all_names
def generate_names():
random_list = []
for i in range(10):
random_list.append(random.choice(get_names("random_names.txt")))
name_var.set(random_list)
# MAIN
global all_names
get_names("random_names.txt")
root = Tk()
root.config(bg="#293d3d")
mainframe = Frame(root, bg="#293d3d")
sunday_font = Font(family="Sunday", size=20)
title = Label(mainframe, text="Random Names", bg="#293d3d", fg="#ffffff", font=sunday_font)
# create the Listbox widget
initial_list=[]
name_var = StringVar()
for i in range(10):
initial_list.append(random.choice(get_names("random_names.txt")))
name_var.set(initial_list)
name_listbox = Listbox(mainframe, listvariable=name_var, selectmode=SINGLE, font=sunday_font)
random_button = Button(mainframe, text="Randomize", highlightbackground="#669999", font=sunday_font,
command=generate_names)
# Grid the widgets
mainframe.grid(padx=100, pady=100)
title.grid(row=0, column=1, pady=10)
name_listbox.grid(row=1, column=1, pady=20)
random_button.grid(row=2, column=1, sticky=EW, ipady=10)
root.mainloop()
"""
"""
from tkinter import *
from tkinter.font import Font
def load_images():
global image_list
rey_photo = PhotoImage(file="rey.png")
bb8_photo = PhotoImage(file="bb8.png")
c3po_photo = PhotoImage(file="c3po.png")
finn_photo = PhotoImage(file="finn.png")
poe_photo = PhotoImage(file="poe.png")
image_list = [rey_photo, bb8_photo, c3po_photo, finn_photo, poe_photo]
def change_image():
global image_list
if images_listbox.curselection == 0:
print(image_list[0])
if images_listbox.curselection == 1:
print(image_list[1])
if images_listbox.curselection == 2:
print(image_list[2])
if images_listbox.curselection == 3:
print(image_list[3])
if images_listbox.curselection == 4:
print(image_list[4])
# MAIN
# Holding frames
#########
root = Tk()
mainframe = Frame(root)
starwars_fontsmall = Font(family="Star Jedi", size=15)
starwars_font = Font(family="Star Jedi", size=30)
global image_list
load_images()
image_names = ['Rey', 'BB-8', 'C-3Po', 'Finn', 'Poe']
# Widgets
#########
title = Label(mainframe, text="star wars", font=starwars_font)
images_var = StringVar(value=image_names)
images_listbox = Listbox(mainframe, listvariable=images_var, selectmode=SINGLE, font=starwars_fontsmall)
current_image_label = Label(mainframe)
update_button = Button(mainframe, text="SEE", command=change_image)
# GRID THE WIDGETS
###########
mainframe.grid(padx=50, pady=50)
title.grid(row=1, column=1, sticky=W, padx=20, pady=5)
images_listbox.grid(row=2, column=1, padx=10)
current_image_label.grid(row=2, column=2, sticky=W, padx=10, pady=10)
update_button.grid(row=3, column=1, ipady=20, ipadx=40, padx=10, pady=10, sticky=E)
root.mainloop()
"""
from tkinter import *
from tkinter.font import Font
def generate_spotify_list(filename):
global spotify_music_list
spotify_music_list = []
fileIn = open(filename, encoding='utf-8', errors='replace')
fileIn.readline()
fileIn.readline()
for line in fileIn:
line = line.strip().split(",")
song = []
song.append(int(line[0]))
song.append(line[1].strip().replace('"', ''))
song.append(line[2].strip().replace('"', ''))
song.append(int(line[3]))
spotify_music_list.append(song)
return spotify_music_list
def format_music():
global spotify_music_list
format_list = []
for i in range(len(spotify_music_list)):
mini_list = []
a = spotify_music_list[i][1]
b = spotify_music_list[i][2]
mini_list.append(f'{a:<30}')
mini_list.append('by')
mini_list.append(f'{b}')
format_list.append(mini_list)
return format_list
def see_song_details():
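    # show chart position, title/artist and stream count for the highlighted song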
global spotify_music_list
selection = music_listbox.curselection()[0]
first = spotify_music_list[selection][0]
second = spotify_music_list[selection][1]
third = spotify_music_list[selection][2]
fourth = spotify_music_list[selection][3]
format_data = (f'chart # {first}\n{second} by {third}\n#streams {fourth}')
info_var.set(format_data)
# MAIN
global spotify_music_list
spotify_music_list = generate_spotify_list("spotifyJan172020.csv")
# Holding frames
#########
root = Tk()
mainframe = Frame(root)
monofurFont = Font(family="monofur", size=20)
monofurFontMedium = Font(family="monofur", size=30)
monofurFontLarge = Font(family="monofur", size=40)
# Widgets
#########
title = Label(mainframe, text="music", font=monofurFontLarge)
music_list = format_music()
musicVar = StringVar()
musicVar.set(music_list)
music_listbox = Listbox(mainframe, selectmode=SINGLE, listvariable=musicVar, width=80, font=monofurFont)
info_var = StringVar()
info_var.set("")
info_label = Label(mainframe, textvariable=info_var, justify=LEFT, fg="#dd0054", font=monofurFontMedium)
seemore_button = Button(mainframe, text="see more", font=monofurFontLarge, command=see_song_details)
logo_canvas = Canvas(mainframe, width=200, height=200)
# GRID THE WIDGETS
###########
mainframe.grid(padx=50, pady=50)
title.grid(row=1, column=1, sticky=W, padx=20, pady=5)
music_listbox.grid(row=2, column=1, columnspan=2, padx=10)
info_label.grid(row=3, column=1, sticky=W, padx=10, pady=10)
seemore_button.grid(row=3, column=2, ipady=20, ipadx=40, padx=10, pady=10, sticky=E)
root.mainloop()
| 24.852174 | 104 | 0.697691 | 800 | 5,716 | 4.81 | 0.20375 | 0.039761 | 0.06237 | 0.035083 | 0.300936 | 0.219595 | 0.190229 | 0.151247 | 0.117983 | 0.117983 | 0 | 0.033554 | 0.155353 | 5,716 | 229 | 105 | 24.960699 | 0.763463 | 0.247551 | 0 | 0.098361 | 0 | 0 | 0.056701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04918 | false | 0 | 0.032787 | 0 | 0.114754 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa54940724eedfeb23248326e81d43b77e4307c6 | 43,135 | py | Python | oasislmf/model_preparation/reinsurance_layer.py | bbetov-corelogic/OasisLMF | fcb9a595ec6eb30c2ed3b9b67152c2f27fc0082b | [
"BSD-3-Clause"
] | null | null | null | oasislmf/model_preparation/reinsurance_layer.py | bbetov-corelogic/OasisLMF | fcb9a595ec6eb30c2ed3b9b67152c2f27fc0082b | [
"BSD-3-Clause"
] | null | null | null | oasislmf/model_preparation/reinsurance_layer.py | bbetov-corelogic/OasisLMF | fcb9a595ec6eb30c2ed3b9b67152c2f27fc0082b | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from builtins import open as io_open
from builtins import str
from future import standard_library
standard_library.install_aliases()
__all__ = [
'generate_xref_descriptions',
'generate_files_for_reinsurance',
'ReinsuranceLayer',
'write_ri_input_files'
]
import json
import logging
import math
import os
import shutil
import subprocess32 as subprocess
from collections import namedtuple
from itertools import product
import anytree
import numbers
import pandas as pd
from ..utils.exceptions import OasisException
from ..utils.log import oasis_log
from . import oed
from six import string_types
# Metadata about an inuring layer
InuringLayer = namedtuple(
"InuringLayer",
"inuring_priority reins_numbers is_valid validation_messages")
def _get_location_tiv(location, coverage_type_id):
switcher = {
oed.BUILDING_COVERAGE_TYPE_ID: location.get('BuildingTIV', 0),
oed.OTHER_BUILDING_COVERAGE_TYPE_ID: location.get('OtherTIV', 0),
oed.CONTENTS_COVERAGE_TYPE_ID: location.get('ContentsTIV', 0),
oed.TIME_COVERAGE_TYPE_ID: location.get('BITIV', 0)
}
return switcher.get(coverage_type_id, 0)
def generate_xref_descriptions(accounts_fp, locations_fp):
accounts = pd.read_csv(accounts_fp)
locations = pd.read_csv(locations_fp)
coverage_id = 0
item_id = 0
group_id = 0
policy_agg_id = 0
profile_id = 0
site_agg_id = 0
accounts_and_locations = pd.merge(accounts, locations, left_on='AccNumber', right_on='AccNumber')
for acc_and_loc, coverage_type, peril in product((acc for _, acc in accounts_and_locations.iterrows()), oed.COVERAGE_TYPES, oed.PERILS):
tiv = _get_location_tiv(acc_and_loc, coverage_type)
if tiv > 0:
policy_agg_id += 1
profile_id += 1
group_id += 1
site_agg_id += 1
profile_id += 1
coverage_id += 1
item_id += 1
yield oed.XrefDescription(
xref_id = item_id,
account_number = acc_and_loc.get('AccNumber'),
location_number = acc_and_loc.get('LocNumber'),
location_group = acc_and_loc.get('LocGroup'),
cedant_name = acc_and_loc.get('CedantName'),
producer_name = acc_and_loc.get('ProducerName'),
lob = acc_and_loc.get('LOB'),
country_code = acc_and_loc.get('CountryCode'),
reins_tag = acc_and_loc.get('ReinsTag'),
coverage_type_id = coverage_type,
peril_id = peril,
policy_number = acc_and_loc.get('PolNumber'),
portfolio_number = acc_and_loc.get('PortNumber'),
tiv = tiv
)
@oasis_log
def generate_files_for_reinsurance(
items,
coverages,
fm_xrefs,
xref_descriptions,
ri_info_df,
ri_scope_df,
direct_oasis_files_dir,
gulsummaryxref=pd.DataFrame(),
fmsummaryxref=pd.DataFrame()):
"""
Generate files for reinsurance.
"""
inuring_metadata = {}
previous_inuring_priority = None
previous_risk_level = None
reinsurance_index = 1
for inuring_priority in range(1, ri_info_df['InuringPriority'].max() + 1):
# Filter the reinsNumbers by inuring_priority
reins_numbers = ri_info_df[ri_info_df['InuringPriority'] == inuring_priority].ReinsNumber.tolist()
risk_level_set = set(ri_scope_df[ri_scope_df['ReinsNumber'].isin(reins_numbers)].RiskLevel)
for risk_level in oed.REINS_RISK_LEVELS:
if risk_level not in risk_level_set:
continue
written_to_dir = _generate_files_for_reinsurance_risk_level(
inuring_priority,
items,
coverages,
fm_xrefs,
xref_descriptions,
gulsummaryxref,
fmsummaryxref,
ri_info_df,
ri_scope_df,
previous_inuring_priority,
previous_risk_level,
risk_level,
reinsurance_index,
direct_oasis_files_dir)
inuring_metadata[reinsurance_index] = {
'inuring_priority': inuring_priority,
'risk_level': risk_level,
'directory': written_to_dir,
}
previous_inuring_priority = inuring_priority
previous_risk_level = risk_level
reinsurance_index = reinsurance_index + 1
return inuring_metadata
def _generate_files_for_reinsurance_risk_level(
inuring_priority,
items,
coverages,
fm_xrefs,
xref_descriptions,
gulsummaryxref,
fmsummaryxref,
ri_info_df,
ri_scope_df,
previous_inuring_priority,
previous_risk_level,
risk_level,
reinsurance_index,
direct_oasis_files_dir):
"""
Generate files for a reinsurance risk level.
"""
reins_numbers_1 = ri_info_df[
ri_info_df['InuringPriority'] == inuring_priority].ReinsNumber
if reins_numbers_1.empty:
return None
reins_numbers_2 = ri_scope_df[
ri_scope_df.isin({"ReinsNumber": reins_numbers_1.tolist()}).ReinsNumber
& (ri_scope_df.RiskLevel == risk_level)].ReinsNumber
if reins_numbers_2.empty:
return None
ri_info_inuring_priority_df = ri_info_df[ri_info_df.isin(
{"ReinsNumber": reins_numbers_2.tolist()}).ReinsNumber]
output_name = "ri_{}_{}".format(inuring_priority, risk_level)
reinsurance_layer = ReinsuranceLayer(
name=output_name,
ri_info=ri_info_inuring_priority_df,
ri_scope=ri_scope_df,
items=items,
coverages=coverages,
fm_xrefs=fm_xrefs,
xref_descriptions=xref_descriptions,
gulsummaryxref=gulsummaryxref,
fmsummaryxref=fmsummaryxref,
risk_level=risk_level
)
reinsurance_layer.generate_oasis_structures()
output_dir = os.path.join(direct_oasis_files_dir, "RI_{}".format(reinsurance_index))
reinsurance_layer.write_oasis_files(output_dir)
return output_dir
@oasis_log
def write_ri_input_files(
exposure_fp,
accounts_fp,
items_fp,
coverages_fp,
gulsummaryxref_fp,
fm_xref_fp,
fmsummaryxref_fp,
ri_info_fp,
ri_scope_fp,
target_dir
):
xref_descriptions = pd.DataFrame(generate_xref_descriptions(accounts_fp, exposure_fp))
return generate_files_for_reinsurance(
pd.read_csv(items_fp),
pd.read_csv(coverages_fp),
pd.read_csv(fm_xref_fp),
xref_descriptions,
pd.read_csv(ri_info_fp),
pd.read_csv(ri_scope_fp),
target_dir,
gulsummaryxref=pd.read_csv(gulsummaryxref_fp),
fmsummaryxref=pd.read_csv(fmsummaryxref_fp)
)
class ReinsuranceLayer(object):
"""
Generates ktools inputs and runs financial module for a reinsurance structure.
"""
def __init__(self,
name, ri_info, ri_scope, items, coverages, fm_xrefs,
xref_descriptions, risk_level, fmsummaryxref=pd.DataFrame(), gulsummaryxref=pd.DataFrame(), logger=None):
self.logger = logger or logging.getLogger()
self.name = name
self.coverages = coverages
self.items = items
self.fm_xrefs = fm_xrefs
self.xref_descriptions = xref_descriptions
self.fmsummaryxref = fmsummaryxref
self.gulsummaryxref = gulsummaryxref
self.item_ids = list()
self.item_tivs = list()
self.fmprogrammes = pd.DataFrame()
self.fmprofiles = pd.DataFrame()
self.fm_policytcs = pd.DataFrame()
self.risk_level = risk_level
self.ri_info = ri_info
self.ri_scope = ri_scope
self.add_profiles_args = namedtuple(
"AddProfilesArgs",
"program_node, ri_info_row, scope_rows, overlay_loop, layer_id, "
"node_layer_profile_map, fmprofiles_list, nolossprofile_id, passthroughprofile_id")
def _add_node(self, description, parent, level_id, agg_id,
portfolio_number=oed.NOT_SET_ID, account_number=oed.NOT_SET_ID,
policy_number=oed.NOT_SET_ID, location_number=oed.NOT_SET_ID,
location_group=oed.NOT_SET_ID):
node = anytree.Node(
description,
parent=parent,
level_id=level_id,
agg_id=agg_id,
portfolio_number=portfolio_number,
account_number=account_number,
policy_number=policy_number,
location_group=location_group,
location_number=location_number)
return node
def _add_program_node(self, level_id):
return self._add_node(
"Treaty",
parent=None,
level_id=level_id,
agg_id=1)
def _add_item_node(self, xref_id, parent):
return self._add_node(
"Item_id:{}".format(xref_id),
parent=parent,
level_id=1,
agg_id=xref_id)
def _add_location_node(
self, level_id, agg_id, xref_description, parent):
return self._add_node(
"Portfolio_number:{} Account_number:{} Policy_number:{} Location_number:{}".format(
xref_description.portfolio_number,
xref_description.account_number,
xref_description.policy_number,
xref_description.location_number),
parent=parent,
level_id=level_id,
agg_id=agg_id,
portfolio_number=xref_description.portfolio_number,
account_number=xref_description.account_number,
policy_number=xref_description.policy_number,
location_group=xref_description.location_group,
location_number=xref_description.location_number)
def _add_location_group_node(
self, level_id, agg_id, xref_description, parent):
return self._add_node(
"Location_group:{}".format(xref_description.location_group),
parent=parent,
level_id=level_id,
agg_id=agg_id,
location_group=xref_description.location_group)
def _add_policy_node(
self, level_id, agg_id, xref_description, parent):
return self._add_node(
"Portfolio number:{} Account_number:{} Policy_number:{}".format(
xref_description.portfolio_number, xref_description.account_number, xref_description.policy_number),
parent=parent,
level_id=level_id,
agg_id=agg_id,
portfolio_number=xref_description.portfolio_number,
account_number=xref_description.account_number,
policy_number=xref_description.policy_number)
def _add_account_node(
self, agg_id, level_id, xref_description, parent):
return self._add_node(
"Portfolio number:{} Account_number:{}".format(
xref_description.portfolio_number, xref_description.account_number),
parent=parent,
level_id=level_id,
agg_id=agg_id,
portfolio_number=xref_description.portfolio_number,
account_number=xref_description.account_number)
def _add_portfolio_node(
self, agg_id, level_id, xref_description, parent):
return self._add_node(
"Portfolio number:{}".format(xref_description.portfolio_number),
parent=parent,
level_id=level_id,
agg_id=agg_id,
portfolio_number=xref_description.portfolio_number)
def _is_valid_id(self, id_to_check):
is_valid = self._is_defined(id_to_check) and \
((isinstance(id_to_check, string_types) and id_to_check != "")
or
(isinstance(id_to_check, numbers.Number) and id_to_check > 0))
return is_valid
def _match_portfolio(self, node, scope_row, exact=False):
if self._is_valid_id(scope_row.PortNumber):
return node.portfolio_number == scope_row.PortNumber
else:
return True
def _match_account(self, node, scope_row, exact=False):
match = False
if exact:
match = self._match_portfolio(node, scope_row) and node.account_number == scope_row.AccNumber
else:
if (self._is_valid_id(scope_row.PortNumber) and self._is_valid_id(scope_row.AccNumber)):
match = self._match_portfolio(node, scope_row) and node.account_number == scope_row.AccNumber
else:
match = self._match_portfolio(node, scope_row)
return match
def _match_policy(self, node, scope_row, exact=False):
match = False
if exact:
match = self._match_account(node, scope_row) and node.policy_number == scope_row.PolNumber
else:
if (self._is_valid_id(scope_row.PolNumber) and self._is_valid_id(scope_row.AccNumber) and self._is_valid_id(scope_row.PortNumber)):
match = self._match_account(node, scope_row) and node.policy_number == scope_row.PolNumber
else:
match = self._match_account(node, scope_row)
return match
def _match_location(self, node, scope_row, exact=False):
match = False
if self._is_valid_id(scope_row.PolNumber):
if exact:
match = self._match_policy(node, scope_row) and node.location_number == scope_row.LocNumber
else:
if self._is_valid_id(scope_row.LocNumber) and self._is_valid_id(scope_row.AccNumber) and self._is_valid_id(scope_row.PortNumber):
match = self._match_policy(node, scope_row) and node.location_number == scope_row.LocNumber
else:
match = self._match_policy(node, scope_row)
else:
if exact:
match = self._match_account(node, scope_row) and node.location_number == scope_row.LocNumber
else:
if self._is_valid_id(scope_row.LocNumber) and self._is_valid_id(scope_row.AccNumber) and self._is_valid_id(scope_row.PortNumber):
match = self._match_account(node, scope_row) and node.location_number == scope_row.LocNumber
else:
match = self._match_account(node, scope_row)
return match
def _match_location_group(self, node, scope_row, exact=False):
match = False
if self._is_valid_id(scope_row.LocGroup):
match = node.location_group == scope_row.LocGroup
return match
def _is_valid_filter(self, value):
return (value is not None and value != "" and value == value)
def _match_row(self, node, scope_row):
match = True
if match and self._is_valid_filter(scope_row.PortNumber):
match = node.portfolio_number == scope_row.PortNumber
if match and self._is_valid_filter(scope_row.AccNumber):
match = node.account_number == scope_row.AccNumber
if match and self._is_valid_filter(scope_row.PolNumber):
match = node.policy_number == scope_row.PolNumber
if match and self._is_valid_filter(scope_row.LocGroup):
match = node.location_group == scope_row.LocGroup
if match and self._is_valid_filter(scope_row.LocNumber):
match = node.location_number == scope_row.LocNumber
# if match and self._is_valid_filter(scope_row.CedantName):
# if match and self._is_valid_filter(scope_row.ProducerName):
# if match and self._is_valid_filter(scope_row.LOB):
# if match and self._is_valid_filter(scope_row.CountryCode):
# if match and self._is_valid_filter(scope_row.ReinsTag):
return match
def _scope_filter(self, nodes_list, scope_row, exact=False):
"""
Return subset of `nodes_list` based on values of a row in `ri_scope.csv`
"""
filtered_nodes_list = list(filter(
lambda n: self._match_row(n, scope_row),
nodes_list))
return filtered_nodes_list
def _risk_level_filter(self, nodes_list, scope_row, exact=False):
"""
Return subset of `nodes_list` based on values of a row in `ri_scope.csv`
"""
if (scope_row.RiskLevel == oed.REINS_RISK_LEVEL_PORTFOLIO):
return list(filter(
lambda n: self._match_portfolio(n, scope_row, exact),
nodes_list))
elif (scope_row.RiskLevel == oed.REINS_RISK_LEVEL_ACCOUNT):
return list(filter(
lambda n: self._match_account(n, scope_row, exact),
nodes_list))
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_POLICY:
nodes_list = list(filter(
lambda n: self._match_policy(n, scope_row, exact),
nodes_list))
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_LOCATION:
nodes_list = list(filter(
lambda n: self._match_location(n, scope_row, exact),
nodes_list))
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_LOCATION_GROUP:
nodes_list = list(filter(
lambda n: self._match_location_group(n, scope_row, exact),
nodes_list))
else:
raise OasisException("Unknown risk level: {}".format(scope_row.RiskLevel))
return nodes_list
def _is_defined(self, num_to_check):
# If the value = NaN it will return False
return num_to_check == num_to_check
def _check_scope_row(self, scope_row):
        # For some treaty types the scope filter must match exactly
okay = True
if (scope_row.RiskLevel == oed.REINS_RISK_LEVEL_ACCOUNT):
okay = \
self._is_valid_id(scope_row.AccNumber) and \
not self._is_valid_id(scope_row.PolNumber) and \
not self._is_valid_id(scope_row.LocNumber)
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_POLICY:
okay = \
self._is_valid_id(scope_row.AccNumber) and \
self._is_valid_id(scope_row.PolNumber) and \
not self._is_valid_id(scope_row.LocNumber)
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_LOCATION:
okay = \
self._is_valid_id(scope_row.AccNumber) and \
self._is_valid_id(scope_row.LocNumber)
elif scope_row.RiskLevel == oed.REINS_RISK_LEVEL_LOCATION_GROUP:
okay = \
self._is_valid_id(scope_row.LocGroup)
return okay
LOCATION_RISK_LEVEL = 2
def _get_tree(self):
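        # Build the anytree hierarchy from the sorted xref descriptions: programme root, risk-level nodes, locations and item leaves.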
current_location_number = 0
current_policy_number = 0
current_account_number = 0
current_portfolio_number = 0
current_location_group = 0
current_location_node = None
current_node = None
if self.risk_level == oed.REINS_RISK_LEVEL_LOCATION:
risk_level_id = self.LOCATION_RISK_LEVEL
else:
risk_level_id = self.LOCATION_RISK_LEVEL + 1
program_node_level_id = risk_level_id + 1
program_node = self._add_program_node(program_node_level_id)
if self.risk_level == oed.REINS_RISK_LEVEL_LOCATION_GROUP:
xref_descriptions = self.xref_descriptions.sort_values(
by=["location_group", "portfolio_number", "account_number", "policy_number", "location_number"])
else:
xref_descriptions = self.xref_descriptions.sort_values(
by=["portfolio_number", "account_number", "policy_number", "location_number"])
agg_id = 0
loc_agg_id = 0
for row in xref_descriptions.itertuples():
if self.risk_level == oed.REINS_RISK_LEVEL_PORTFOLIO:
if current_portfolio_number != row.portfolio_number:
agg_id = agg_id + 1
current_node = self._add_portfolio_node(
agg_id, risk_level_id, row, program_node)
elif self.risk_level == oed.REINS_RISK_LEVEL_ACCOUNT:
if \
current_portfolio_number != row.portfolio_number or \
current_account_number != row.account_number:
agg_id = agg_id + 1
current_node = self._add_account_node(
agg_id, risk_level_id, row, program_node)
elif self.risk_level == oed.REINS_RISK_LEVEL_POLICY:
if \
current_portfolio_number != row.portfolio_number or \
current_account_number != row.account_number or \
current_policy_number != row.policy_number:
agg_id = agg_id + 1
current_node = self._add_policy_node(
risk_level_id, agg_id, row, program_node)
elif self.risk_level == oed.REINS_RISK_LEVEL_LOCATION_GROUP:
if current_location_group != row.location_group:
agg_id = agg_id + 1
current_node = self._add_location_group_node(
risk_level_id, agg_id, row, program_node)
if \
current_portfolio_number != row.portfolio_number or \
current_account_number != row.account_number or \
current_policy_number != row.policy_number or \
current_location_number != row.location_number:
loc_agg_id = loc_agg_id + 1
level_id = 2
if self.risk_level == oed.REINS_RISK_LEVEL_LOCATION:
current_location_node = self._add_location_node(
level_id, loc_agg_id, row, program_node)
else:
current_location_node = self._add_location_node(
level_id, loc_agg_id, row, current_node)
current_portfolio_number = row.portfolio_number
current_account_number = row.account_number
current_policy_number = row.policy_number
current_location_number = row.location_number
current_location_group = row.location_group
self._add_item_node(row.xref_id, current_location_node)
return program_node
def _get_risk_level_id(self):
if self.risk_level == oed.REINS_RISK_LEVEL_LOCATION:
risk_level_id = 2
else:
risk_level_id = 3
return risk_level_id
def _get_filter_level_id(self):
risk_level_id = 2
return risk_level_id
def _get_next_profile_id(self, add_profiles_args):
profile_id = max(
x.profile_id for x in add_profiles_args.fmprofiles_list)
return profile_id + 1
def _add_fac_profiles(self, add_profiles_args):
self.logger.debug("Adding FAC profiles:")
profile_id = self._get_next_profile_id(add_profiles_args)
add_profiles_args.fmprofiles_list.append(oed.get_reinsurance_profile(
profile_id,
attachment=add_profiles_args.ri_info_row.RiskAttachment,
limit=add_profiles_args.ri_info_row.RiskLimit,
ceded=add_profiles_args.ri_info_row.CededPercent,
placement=add_profiles_args.ri_info_row.PlacedPercent
))
nodes_risk_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_risk_level_id())
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
nodes_filter_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_filter_level_id())
for node in nodes_filter_level_all:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
for _, ri_scope_row in add_profiles_args.scope_rows.iterrows():
            # Note that FAC profile scopes must match the filter exactly.
if not self._check_scope_row(ri_scope_row):
raise OasisException("Invalid scope row: {}".format(ri_scope_row))
nodes = self._risk_level_filter(nodes_risk_level_all, ri_scope_row, exact=True)
for node in nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
def _add_per_risk_profiles(self, add_profiles_args):
self.logger.debug("Adding PR profiles:")
profile_id = self._get_next_profile_id(add_profiles_args)
nodes_risk_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_risk_level_id())
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
nodes_filter_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_filter_level_id())
add_profiles_args.fmprofiles_list.append(oed.get_reinsurance_profile(
profile_id,
attachment=add_profiles_args.ri_info_row.RiskAttachment,
limit=add_profiles_args.ri_info_row.RiskLimit,
ceded=add_profiles_args.ri_info_row.CededPercent,
))
for _, ri_scope_row in add_profiles_args.scope_rows.iterrows():
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
selected_nodes = self._scope_filter(nodes_filter_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
selected_nodes = self._risk_level_filter(nodes_risk_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
# add OccLimit / Placed Percent
profile_id = profile_id + 1
add_profiles_args.fmprofiles_list.append(
oed.get_occlim_profile(
profile_id,
limit=add_profiles_args.ri_info_row.OccLimit,
placement=add_profiles_args.ri_info_row.PlacedPercent,
))
add_profiles_args.node_layer_profile_map[
(add_profiles_args.program_node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
def _add_surplus_share_profiles(self, add_profiles_args):
self.logger.debug("Adding SS profiles:")
profile_id = self._get_next_profile_id(add_profiles_args)
nodes_risk_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_risk_level_id())
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
nodes_filter_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_filter_level_id())
for node in nodes_filter_level_all:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
for _, ri_scope_row in add_profiles_args.scope_rows.iterrows():
            # Note that surplus share profile scopes must match the filter exactly.
if not self._check_scope_row(ri_scope_row):
raise OasisException("Invalid scope row: {}".format(ri_scope_row))
add_profiles_args.fmprofiles_list.append(oed.get_reinsurance_profile(
profile_id,
attachment=add_profiles_args.ri_info_row.RiskAttachment,
limit=add_profiles_args.ri_info_row.RiskLimit,
ceded=ri_scope_row.CededPercent,
))
selected_nodes = self._risk_level_filter(nodes_risk_level_all, ri_scope_row, exact=True)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
profile_id = profile_id + 1
# add OccLimit / Placed Percent
add_profiles_args.fmprofiles_list.append(
oed.get_occlim_profile(
profile_id,
limit=add_profiles_args.ri_info_row.OccLimit,
placement=add_profiles_args.ri_info_row.PlacedPercent,
))
add_profiles_args.node_layer_profile_map[
(add_profiles_args.program_node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
def _add_quota_share_profiles(self, add_profiles_args):
self.logger.debug("Adding QS profiles:")
profile_id = self._get_next_profile_id(add_profiles_args)
nodes_risk_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_risk_level_id())
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
nodes_filter_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_filter_level_id())
add_profiles_args.fmprofiles_list.append(
oed.get_reinsurance_profile(
profile_id,
limit=add_profiles_args.ri_info_row.RiskLimit,
ceded=add_profiles_args.ri_info_row.CededPercent,
))
for _, ri_scope_row in add_profiles_args.scope_rows.iterrows():
# Filter
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
selected_nodes = self._scope_filter(nodes_filter_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
selected_nodes = self._risk_level_filter(nodes_risk_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
# add OccLimit / Placed Percent
profile_id = profile_id + 1
add_profiles_args.fmprofiles_list.append(
oed.get_occlim_profile(
profile_id,
limit=add_profiles_args.ri_info_row.OccLimit,
placement=add_profiles_args.ri_info_row.PlacedPercent,
))
add_profiles_args.node_layer_profile_map[
(add_profiles_args.program_node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
def _add_cat_xl_profiles(self, add_profiles_args):
self.logger.debug("Adding CAT XL profiles")
profile_id = self._get_next_profile_id(add_profiles_args)
nodes_risk_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_risk_level_id())
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
nodes_filter_level_all = anytree.search.findall(
add_profiles_args.program_node, filter_=lambda node: node.level_id == self._get_filter_level_id())
for _, ri_scope_row in add_profiles_args.scope_rows.iterrows():
# Filter
if self.risk_level != oed.REINS_RISK_LEVEL_LOCATION:
selected_nodes = self._scope_filter(nodes_filter_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
selected_nodes = self._risk_level_filter(nodes_risk_level_all, ri_scope_row, exact=False)
for node in selected_nodes:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
# Add OccLimit / Placed Percent
add_profiles_args.fmprofiles_list.append(
oed.get_reinsurance_profile(
profile_id,
attachment=add_profiles_args.ri_info_row.OccAttachment,
ceded=add_profiles_args.ri_info_row.CededPercent,
limit=add_profiles_args.ri_info_row.OccLimit,
placement=add_profiles_args.ri_info_row.PlacedPercent,
))
add_profiles_args.node_layer_profile_map[
(add_profiles_args.program_node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = profile_id
def _log_reinsurance_structure(self, add_profiles_args):
if self.logger:
self.logger.debug('policytc_map: "{}"'.format(self.name))
policytc_map = dict()
for k in add_profiles_args.node_layer_profile_map.keys():
profile_id = add_profiles_args.node_layer_profile_map[k]
policytc_map["(Name=%s, layer_id=%s, overlay_loop=%s)" % k] = profile_id
self.logger.debug(json.dumps(policytc_map, indent=4))
self.logger.debug('fm_policytcs: "{}"'.format(self.name))
self.logger.debug(self.fm_policytcs)
self.logger.debug('fm_profile: "{}"'.format(self.name))
self.logger.debug(self.fmprofiles)
self.logger.debug('fm_programme: "{}"'.format(self.name))
self.logger.debug(self.fmprogrammes)
def _log_tree(self, program_node):
if self.logger:
self.logger.debug('program_node tree: "{}"'.format(self.name))
self.logger.debug(anytree.RenderTree(program_node))
def generate_oasis_structures(self):
'''
Create the Oasis structures - FM Programmes, FM Profiles and FM Policy TCs -
that represent the reinsurance structure.
        The algorithm to create the structure has three steps:
Step 1 - Build a tree representation of the insurance program, depending on the reinsurance risk level.
        Step 2 - Overlay the reinsurance structure. Each reinsurance contract is a separate layer.
Step 3 - Iterate over the tree and write out the Oasis structure.
'''
fmprogrammes_list = list()
fmprofiles_list = list()
fm_policytcs_list = list()
profile_id = 1
nolossprofile_id = profile_id
fmprofiles_list.append(
oed.get_no_loss_profile(nolossprofile_id))
profile_id = profile_id + 1
passthroughprofile_id = profile_id
fmprofiles_list.append(
oed.get_pass_through_profile(passthroughprofile_id))
node_layer_profile_map = {}
self.logger.debug(fmprofiles_list)
#
        # Step 1 - Build a tree representation of the insurance program, depending on the reinsurance risk level.
#
program_node = self._get_tree()
self._log_tree(program_node)
#
        # Step 2 - Overlay the reinsurance structure. Each reinsurance contract is a separate layer.
#
layer_id = 1 # Current layer ID
overlay_loop = 0 # Overlays multiple rules in same layer
prev_reins_number = 1
for _, ri_info_row in self.ri_info.iterrows():
overlay_loop += 1
scope_rows = self.ri_scope[
(self.ri_scope.ReinsNumber == ri_info_row.ReinsNumber)
& (self.ri_scope.RiskLevel == self.risk_level)]
# If FAC, don't increment the layer number
# Else, only increment inline with the reins_number
if ri_info_row.ReinsType in ['FAC']:
pass
elif prev_reins_number < ri_info_row.ReinsNumber:
layer_id += 1
prev_reins_number = ri_info_row.ReinsNumber
if self.logger:
pd.set_option('display.width', 1000)
self.logger.debug('ri_scope: "{}"'.format(self.name))
self.logger.debug(scope_rows)
if scope_rows.shape[0] == 0:
continue
add_profiles_args = self.add_profiles_args(
program_node, ri_info_row, scope_rows, overlay_loop, layer_id,
node_layer_profile_map, fmprofiles_list,
nolossprofile_id, passthroughprofile_id)
# Add pass through nodes at all levels so that the risks not explicitly covered are unaffected
for node in anytree.iterators.LevelOrderIter(add_profiles_args.program_node):
if self.risk_level == oed.REINS_RISK_LEVEL_LOCATION:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.nolossprofile_id
else:
if node.level_id == self._get_risk_level_id():
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.nolossprofile_id
elif node.level_id == self._get_filter_level_id():
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.nolossprofile_id
else:
add_profiles_args.node_layer_profile_map[(
node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
add_profiles_args.node_layer_profile_map[(
add_profiles_args.program_node.name, add_profiles_args.layer_id, add_profiles_args.overlay_loop)] = add_profiles_args.passthroughprofile_id
if ri_info_row.ReinsType == oed.REINS_TYPE_FAC:
self._add_fac_profiles(add_profiles_args)
elif ri_info_row.ReinsType == oed.REINS_TYPE_PER_RISK:
self._add_per_risk_profiles(add_profiles_args)
elif ri_info_row.ReinsType == oed.REINS_TYPE_QUOTA_SHARE:
self._add_quota_share_profiles(add_profiles_args)
elif ri_info_row.ReinsType == oed.REINS_TYPE_SURPLUS_SHARE:
self._add_surplus_share_profiles(add_profiles_args)
elif ri_info_row.ReinsType == oed.REINS_TYPE_CAT_XL:
self._add_cat_xl_profiles(add_profiles_args)
else:
raise Exception("ReinsType not supported yet: {}".format(
ri_info_row.ReinsType))
#
# Step 3 - Iterate over the tree and write out the Oasis structure.
#
for node in anytree.iterators.LevelOrderIter(program_node):
if node.parent is not None:
fmprogrammes_list.append(
oed.FmProgramme(
from_agg_id=node.agg_id,
level_id=node.level_id,
to_agg_id=node.parent.agg_id
)
)
for layer in range(1, layer_id + 1):
for node in anytree.iterators.LevelOrderIter(program_node):
if node.level_id > 1:
profiles_ids = []
# Collect over-lapping unique combinations of (layer_id, level_id, agg_id)
# and combine into a single layer
for overlay_rule in range(1, overlay_loop + 1):
try:
profiles_ids.append(
node_layer_profile_map[(node.name, layer, overlay_rule)])
                        except KeyError:
                            profiles_ids.append(1)
fm_policytcs_list.append(oed.FmPolicyTc(
layer_id=layer,
level_id=node.level_id - 1,
agg_id=node.agg_id,
profile_id=max(profiles_ids)
))
self.fmprogrammes = pd.DataFrame(fmprogrammes_list)
self.fmprofiles = pd.DataFrame(fmprofiles_list)
self.fm_policytcs = pd.DataFrame(fm_policytcs_list)
self.fm_xrefs['layer_id'] = pd.Series(layer_id, range(len(self.fm_xrefs.index)))
self._log_reinsurance_structure(add_profiles_args)
def write_oasis_files(self, directory=None):
'''
Write out the generated data to Oasis input file format.
'''
if directory is None:
directory = "direct"
if os.path.exists(directory):
shutil.rmtree(directory)
os.makedirs(directory)
self.coverages.to_csv(
os.path.join(directory, "coverages.csv"), index=False)
self.items.to_csv(
os.path.join(directory, "items.csv"), index=False)
self.fmprogrammes.to_csv(
os.path.join(directory, "fm_programme.csv"), index=False)
self.fmprofiles.to_csv(
os.path.join(directory, "fm_profile.csv"), index=False)
self.fm_policytcs.to_csv(
os.path.join(directory, "fm_policytc.csv"), index=False)
self.fm_xrefs.to_csv(
os.path.join(directory, "fm_xref.csv"), index=False)
self.fmsummaryxref.to_csv(
os.path.join(directory, "fmsummaryxref.csv"), index=False)
self.gulsummaryxref.to_csv(
os.path.join(directory, "gulsummaryxref.csv"), index=False)
| 43.135 | 155 | 0.642889 | 5,245 | 43,135 | 4.873785 | 0.069209 | 0.062395 | 0.085084 | 0.020616 | 0.699214 | 0.66119 | 0.611079 | 0.593006 | 0.56441 | 0.530884 | 0 | 0.002773 | 0.281141 | 43,135 | 999 | 156 | 43.178178 | 0.821627 | 0.049658 | 0 | 0.449438 | 0 | 0 | 0.037443 | 0.002452 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049938 | false | 0.017478 | 0.027466 | 0.011236 | 0.121099 | 0.001248 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa591bd26efedb65ca418827ed8a45d41d4b76ed | 5,167 | py | Python | plugin/immuneResponseRNA/tac/convert2chp.py | konradotto/TS | bf088bd8432b1e3f4b8c8c083650a30d9ef2ae2e | [
"Apache-2.0"
] | 125 | 2015-01-22T05:43:23.000Z | 2022-03-22T17:15:59.000Z | plugin/immuneResponseRNA/tac/convert2chp.py | konradotto/TS | bf088bd8432b1e3f4b8c8c083650a30d9ef2ae2e | [
"Apache-2.0"
] | 59 | 2015-02-10T09:13:06.000Z | 2021-11-11T02:32:38.000Z | plugin/immuneResponseRNA/tac/convert2chp.py | konradotto/TS | bf088bd8432b1e3f4b8c8c083650a30d9ef2ae2e | [
"Apache-2.0"
] | 98 | 2015-01-17T01:25:10.000Z | 2022-03-18T17:29:42.000Z | # pylint: disable=line-too-long
""" pileup.py """
import sys
import os
import uuid
from run import Run
class Tac(Run):
""" Pileup """
def add_options(self):
""" Define options """
self.add_option("-i", "--input-file", "string", "Input file")
self.add_option("-o", "--output-dir", "string", "Directory for storing output from the run")
self.add_option("-m", "--method", "string", "Normalization methond")
def override_options(self):
""" Override json parameter values with command line arguments """
self.parameters['input_file'] = self.options.input_file
self.parameters['output_dir'] = self.options.output_dir
self.parameters['method'] = self.options.method
def validate_options(self):
""" Parameter validation """
if (self.options.input_file == None):
self.fatal_error("Please specify an input-file.")
if (self.options.output_dir == None):
self.fatal_error("Please specify an output-dir.")
if (self.options.method == None):
self.options.method = 'RPM'
def process(self):
""" Process """
tac_script_path = os.path.dirname(os.path.realpath(__file__))
chp_bin = os.path.join(tac_script_path, 'apt2-dset-util')
try:
os.mkdir(self.parameters['output_dir'])
except:
pass
headers = []
data = []
try:
fin = open(self.parameters['input_file'], 'r')
except IOError:
self.fatal_error("Cannot open input file:\t" + self.parameters['input_file'])
for line in fin:
line = line.rstrip()
if line.startswith("#"):
continue
if line.startswith("Target\t") or line.startswith("\"Target\"\t"):
headers = line.split("\t")
i = 0
for header in headers:
if header.startswith('"') and header.endswith('"'):
headers[i] = header[1:-1]
i += 1
continue
cols = line.split("\t")
if cols[0].startswith('"') and cols[0].endswith('"'):
cols[0] = cols[0][1:-1]
data.append(cols)
fin.close()
index = 0
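        # write one quantification text file per sample column, convert it to .gene.chp with apt2-dset-util, then remove the intermediates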
for header in headers:
if index > 0:
try:
filename = self.parameters["output_dir"] + "/" + header
print(filename)
fout = open(filename, 'w')
except IOError:
self.fatal_error("Cannot open output file:\t" + filename)
fout.write("#%%BEGIN-FILE=/\n")
fout.write("#%gdh:0:data_source=affymetrix-quantification-analysis\n")
fout.write("#%gdh:0:uuid=" + str(uuid.uuid1()) + "\n")
fout.write("#%gdh:0:locale=\n")
fout.write("#%gdh:0:datetime=en-US\n")
fout.write("#%gdh:0:affymetrix-algorithm-name=" + self.parameters['method'] + "\n")
fout.write("#%gdh:0:affymetrix-algorithm-version=1.0\n")
fout.write("#%gdh:0:affymetrix-array-type=Immune-response\n")
fout.write("#%gdh:0:program-name=ImmuneResponse_plugin\n")
fout.write("#%gdh:0:program-version=v1.0\n")
fout.write("#%gdh:0:program-company=ThermoFisherScientific\n")
fout.write("#%gdh:0:affymetrix-algorithm-param-exec-guid=\n")
fout.write('#%gdh:0:affymetrix-algorithm-param-quantification-name=' + self.parameters['method'] + "\n")
fout.write('#%gdh:0:affymetrix-algorithm-param-quantification-version="1.0"\n')
fout.write('#%gdh:0:affymetrix-algorithm-param-quantification-scale=log2\n')
fout.write('#%gdh:0:affymetrix-algorithm-param-quantification-type=scaled-RPM\n')
fout.write("#%%BEGIN-GROUP=/Quantification\n")
fout.write("#%%BEGIN-DATASET=/Quantification/Quantification\n")
fout.write("#\n")
fout.write("#%%field-000=ProbeSetName_&size,int32\n")
fout.write("#%%field-001=ProbeSetName,string8,17\n")
fout.write("#%%field-002=Quantification,float32,-1\n")
fout.write("#\n")
fout.write("#%%dims=0:\n")
fout.write("#\n")
fout.write("#%%row-cnt=" + str(len(data)) + "\n")
fout.write("#\n")
fout.write("ProbeSetName_&size ProbeSetName Quantification\n")
for cols in data:
fout.write(str(len(cols[0])) + "\t" + cols[0] + "\t" + cols[index] + "\n")
fout.close()
cmd = chp_bin + " "
cmd += "-i " + filename + " "
cmd += "-o " + filename + ".gene.chp "
cmd += "-log-file " + filename + ".log"
self.run_command(cmd, "apt2-dset-util")
os.remove(filename)
os.remove(filename + ".log")
index += 1
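# Illustrative note: each data row written to the intermediate text file above has
# the form "<name length>\t<name>\t<value>", so a hypothetical probe set "GAPDH"
# with a quantification value of "10.3" is emitted as the line "5\tGAPDH\t10.3"
# before apt2-dset-util converts the file into a binary .gene.chp.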
if __name__ == '__main__':
TAC = Tac("1.0", sys.argv[1:])
| 45.324561 | 120 | 0.52274 | 577 | 5,167 | 4.606586 | 0.263432 | 0.098194 | 0.10158 | 0.073363 | 0.294959 | 0.27389 | 0.203913 | 0.138826 | 0.12453 | 0.042889 | 0 | 0.017217 | 0.314302 | 5,167 | 113 | 121 | 45.725664 | 0.732995 | 0.030192 | 0 | 0.13 | 0 | 0 | 0.277242 | 0.164656 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0.01 | 0.04 | 0 | 0.09 | 0.01 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa595184bc5cb73057c10ccbcbe211d4f6c40926 | 11,122 | py | Python | tools/telemetry/telemetry/benchmark_runner.py | sunjc53yy/chromium | 049b380040949089c2a6e447b0cd0ac3c4ece38e | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | tools/telemetry/telemetry/benchmark_runner.py | sunjc53yy/chromium | 049b380040949089c2a6e447b0cd0ac3c4ece38e | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | tools/telemetry/telemetry/benchmark_runner.py | sunjc53yy/chromium | 049b380040949089c2a6e447b0cd0ac3c4ece38e | [
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause"
] | null | null | null | # Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Parses the command line, discovers the appropriate benchmarks, and runs them.
Handles benchmark configuration, but all the logic for
actually running the benchmark is in Benchmark and PageRunner."""
import hashlib
import inspect
import json
import os
import sys
from telemetry import benchmark
from telemetry import decorators
from telemetry.core import browser_finder
from telemetry.core import browser_options
from telemetry.core import command_line
from telemetry.core import discover
from telemetry.core import environment
from telemetry.core import util
from telemetry.util import find_dependencies
class Deps(find_dependencies.FindDependenciesCommand):
"""Prints all dependencies"""
def Run(self, args):
main_module = sys.modules['__main__']
args.positional_args.append(os.path.realpath(main_module.__file__))
return super(Deps, self).Run(args)
class Help(command_line.OptparseCommand):
"""Display help information about a command"""
usage = '[command]'
def Run(self, args):
if len(args.positional_args) == 1:
commands = _MatchingCommands(args.positional_args[0])
if len(commands) == 1:
command = commands[0]
parser = command.CreateParser()
command.AddCommandLineArgs(parser)
parser.print_help()
return 0
print >> sys.stderr, ('usage: %s [command] [<options>]' % _ScriptName())
print >> sys.stderr, 'Available commands are:'
for command in _Commands():
print >> sys.stderr, ' %-10s %s' % (
command.Name(), command.Description())
print >> sys.stderr, ('"%s help <command>" to see usage information '
'for a specific command.' % _ScriptName())
return 0
class List(command_line.OptparseCommand):
"""Lists the available benchmarks"""
usage = '[benchmark_name] [<options>]'
@classmethod
def CreateParser(cls):
options = browser_options.BrowserFinderOptions()
parser = options.CreateParser('%%prog %s %s' % (cls.Name(), cls.usage))
return parser
@classmethod
def AddCommandLineArgs(cls, parser):
parser.add_option('-j', '--json-output-file', type='string')
parser.add_option('-n', '--num-shards', type='int', default=1)
@classmethod
def ProcessCommandLineArgs(cls, parser, args):
if not args.positional_args:
args.benchmarks = _Benchmarks()
elif len(args.positional_args) == 1:
args.benchmarks = _MatchBenchmarkName(args.positional_args[0],
exact_matches=False)
else:
parser.error('Must provide at most one benchmark name.')
def Run(self, args):
if args.json_output_file:
possible_browser = browser_finder.FindBrowser(args)
if args.browser_type in (
'exact', 'release', 'release_x64', 'debug', 'debug_x64', 'canary'):
args.browser_type = 'reference'
possible_reference_browser = browser_finder.FindBrowser(args)
else:
possible_reference_browser = None
with open(args.json_output_file, 'w') as f:
f.write(_GetJsonBenchmarkList(possible_browser,
possible_reference_browser,
args.benchmarks, args.num_shards))
else:
_PrintBenchmarkList(args.benchmarks)
return 0
class Run(command_line.OptparseCommand):
"""Run one or more benchmarks (default)"""
usage = 'benchmark_name [page_set] [<options>]'
@classmethod
def CreateParser(cls):
options = browser_options.BrowserFinderOptions()
parser = options.CreateParser('%%prog %s %s' % (cls.Name(), cls.usage))
return parser
@classmethod
def AddCommandLineArgs(cls, parser):
benchmark.AddCommandLineArgs(parser)
# Allow benchmarks to add their own command line options.
matching_benchmarks = []
for arg in sys.argv[1:]:
matching_benchmarks += _MatchBenchmarkName(arg)
if matching_benchmarks:
# TODO(dtu): After move to argparse, add command-line args for all
# benchmarks to subparser. Using subparsers will avoid duplicate
# arguments.
matching_benchmark = matching_benchmarks.pop()
matching_benchmark.AddCommandLineArgs(parser)
# The benchmark's options override the defaults!
matching_benchmark.SetArgumentDefaults(parser)
@classmethod
def ProcessCommandLineArgs(cls, parser, args):
if not args.positional_args:
_PrintBenchmarkList(_Benchmarks())
sys.exit(-1)
input_benchmark_name = args.positional_args[0]
matching_benchmarks = _MatchBenchmarkName(input_benchmark_name)
if not matching_benchmarks:
print >> sys.stderr, 'No benchmark named "%s".' % input_benchmark_name
print >> sys.stderr
_PrintBenchmarkList(_Benchmarks())
sys.exit(-1)
if len(matching_benchmarks) > 1:
print >> sys.stderr, ('Multiple benchmarks named "%s".' %
input_benchmark_name)
print >> sys.stderr, 'Did you mean one of these?'
print >> sys.stderr
_PrintBenchmarkList(matching_benchmarks)
sys.exit(-1)
benchmark_class = matching_benchmarks.pop()
if len(args.positional_args) > 1:
parser.error('Too many arguments.')
assert issubclass(benchmark_class, benchmark.Benchmark), (
'Trying to run a non-Benchmark?!')
benchmark.ProcessCommandLineArgs(parser, args)
benchmark_class.ProcessCommandLineArgs(parser, args)
cls._benchmark = benchmark_class
def Run(self, args):
return min(255, self._benchmark().Run(args))
def _ScriptName():
return os.path.basename(sys.argv[0])
def _Commands():
"""Generates a list of all classes in this file that subclass Command."""
for _, cls in inspect.getmembers(sys.modules[__name__]):
if not inspect.isclass(cls):
continue
if not issubclass(cls, command_line.Command):
continue
yield cls
def _MatchingCommands(string):
return [command for command in _Commands()
if command.Name().startswith(string)]
@decorators.Cache
def _Benchmarks():
benchmarks = []
for base_dir in config.base_paths:
benchmarks += discover.DiscoverClasses(base_dir, base_dir,
benchmark.Benchmark,
index_by_class_name=True).values()
return benchmarks
def _MatchBenchmarkName(input_benchmark_name, exact_matches=True):
def _Matches(input_string, search_string):
if search_string.startswith(input_string):
return True
for part in search_string.split('.'):
if part.startswith(input_string):
return True
return False
# Exact matching.
if exact_matches:
# Don't add aliases to search dict, only allow exact matching for them.
if input_benchmark_name in config.benchmark_aliases:
exact_match = config.benchmark_aliases[input_benchmark_name]
else:
exact_match = input_benchmark_name
for benchmark_class in _Benchmarks():
if exact_match == benchmark_class.Name():
return [benchmark_class]
return []
# Fuzzy matching.
return [benchmark_class for benchmark_class in _Benchmarks()
if _Matches(input_benchmark_name, benchmark_class.Name())]
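# Illustrative example: with exact_matches=False, an input such as "typical_25"
# would match a hypothetical benchmark named "page_cycler.typical_25", because
# _Matches also tests whether any '.'-separated part of the registered name
# starts with the input string.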
def _GetJsonBenchmarkList(possible_browser, possible_reference_browser,
benchmark_classes, num_shards):
"""Returns a list of all enabled benchmarks in a JSON format expected by
buildbots.
JSON format (see build/android/pylib/perf/benchmark_runner.py):
{ "version": <int>,
"steps": {
<string>: {
"device_affinity": <int>,
"cmd": <string>,
"perf_dashboard_id": <string>,
},
...
}
}
"""
output = {
'version': 1,
'steps': {
}
}
for benchmark_class in benchmark_classes:
if not issubclass(benchmark_class, benchmark.Benchmark):
continue
if not decorators.IsEnabled(benchmark_class, possible_browser):
continue
base_name = benchmark_class.Name()
base_cmd = [sys.executable, os.path.realpath(sys.argv[0]),
'-v', '--output-format=buildbot', base_name]
perf_dashboard_id = base_name
# TODO(tonyg): Currently we set the device affinity to a stable hash of the
# benchmark name. This somewhat evenly distributes benchmarks among the
# requested number of shards. However, it is far from optimal in terms of
# cycle time. We should add a benchmark size decorator (e.g. small, medium,
# large) and let that inform sharding.
device_affinity = int(hashlib.sha1(base_name).hexdigest(), 16) % num_shards
output['steps'][base_name] = {
'cmd': ' '.join(base_cmd + [
'--browser=%s' % possible_browser.browser_type]),
'device_affinity': device_affinity,
'perf_dashboard_id': perf_dashboard_id,
}
if (possible_reference_browser and
decorators.IsEnabled(benchmark_class, possible_reference_browser)):
output['steps'][base_name + '.reference'] = {
'cmd': ' '.join(base_cmd + [
'--browser=reference', '--output-trace-tag=_ref']),
'device_affinity': device_affinity,
'perf_dashboard_id': perf_dashboard_id,
}
return json.dumps(output, indent=2, sort_keys=True)
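# Illustrative example (hypothetical benchmark name and browser type): for a
# single enabled benchmark "my_benchmark" with num_shards=1, the returned JSON
# would look roughly like
#   {"version": 1,
#    "steps": {"my_benchmark": {
#        "cmd": "<python> <script> -v --output-format=buildbot my_benchmark --browser=release",
#        "device_affinity": 0,
#        "perf_dashboard_id": "my_benchmark"}}}
# plus a "my_benchmark.reference" entry when a reference browser is available.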
def _PrintBenchmarkList(benchmarks):
if not benchmarks:
print >> sys.stderr, 'No benchmarks found!'
return
# Align the benchmark names to the longest one.
format_string = ' %%-%ds %%s' % max(len(b.Name()) for b in benchmarks)
filtered_benchmarks = [benchmark_class for benchmark_class in benchmarks
if issubclass(benchmark_class, benchmark.Benchmark)]
if filtered_benchmarks:
print >> sys.stderr, 'Available benchmarks are:'
for benchmark_class in sorted(filtered_benchmarks, key=lambda b: b.Name()):
print >> sys.stderr, format_string % (
benchmark_class.Name(), benchmark_class.Description())
print >> sys.stderr
config = environment.Environment([util.GetBaseDir()])
def main():
# Get the command name from the command line.
if len(sys.argv) > 1 and sys.argv[1] == '--help':
sys.argv[1] = 'help'
command_name = 'run'
for arg in sys.argv[1:]:
if not arg.startswith('-'):
command_name = arg
break
# Validate and interpret the command name.
commands = _MatchingCommands(command_name)
if len(commands) > 1:
print >> sys.stderr, ('"%s" is not a %s command. Did you mean one of these?'
% (command_name, _ScriptName()))
for command in commands:
print >> sys.stderr, ' %-10s %s' % (
command.Name(), command.Description())
return 1
if commands:
command = commands[0]
else:
command = Run
# Parse and run the command.
parser = command.CreateParser()
command.AddCommandLineArgs(parser)
options, args = parser.parse_args()
if commands:
args = args[1:]
options.positional_args = args
command.ProcessCommandLineArgs(parser, options)
return command().Run(options)
| 33.002967 | 80 | 0.677126 | 1,306 | 11,122 | 5.599541 | 0.232006 | 0.040202 | 0.028716 | 0.018871 | 0.261315 | 0.176808 | 0.124709 | 0.124709 | 0.10201 | 0.10201 | 0 | 0.005407 | 0.218396 | 11,122 | 336 | 81 | 33.10119 | 0.835845 | 0.152131 | 0 | 0.283262 | 0 | 0 | 0.08804 | 0.005028 | 0 | 0 | 0 | 0.005952 | 0.004292 | 1 | 0.081545 | false | 0 | 0.060086 | 0.012876 | 0.257511 | 0.06867 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa596369d63b550cebc895ba9015b84eac48f99e | 5,837 | py | Python | knowledge/knowledge/views/commands.py | roryzhengzhang/HAICOR_v2 | a7656ff1e920e319590eac9090e12159d5d81f37 | [
"MIT"
] | null | null | null | knowledge/knowledge/views/commands.py | roryzhengzhang/HAICOR_v2 | a7656ff1e920e319590eac9090e12159d5d81f37 | [
"MIT"
] | null | null | null | knowledge/knowledge/views/commands.py | roryzhengzhang/HAICOR_v2 | a7656ff1e920e319590eac9090e12159d5d81f37 | [
"MIT"
] | null | null | null | # Copyright (c) 2020 HAICOR Project Team
#
# This software is released under the MIT License.
# https://opensource.org/licenses/MIT
from __future__ import annotations
import csv
import gzip
import json
import os
import re
import click
import igraph
from knowledge.app import CONFIG_DIRECTORY, DATA_DIRECTORY, app, database
from knowledge.models.assertions import Assertion, ExternalURL, Relation
from knowledge.models.concepts import Concept, Language, PartOfSpeech
@app.cli.command("init")
@click.argument("conceptnet", type=str)
def initialize(conceptnet: str):
"""Initialize database and necessary Python objects (as pickle)."""
LIMIT = 10000
REGEX = re.compile(r"^/c/(\w+)/([^/]+)(/\w)?(/.+)?/?$")
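    # Illustrative example: a ConceptNet URI such as "/c/en/semantic_network/n"
    # is split by REGEX into the groups ("en", "semantic_network", "/n", None),
    # i.e. language code, concept text, optional part-of-speech segment, and
    # optional suffix.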
CONCEPTNET = os.path.abspath(conceptnet)
# foreign key lookup tables
LANG = {}
SPEECH = {}
CONCEPT = {}
RELATION = {}
SPEECH[None] = None
# reset current database
database.drop_all()
database.create_all()
# process configuration files
with open(os.path.join(CONFIG_DIRECTORY, "language.csv"), "r") as file:
cache = []
for idx, (code, name) in enumerate(csv.reader(file)):
LANG[code] = idx + 1
cache.append({"id": idx + 1, "code": code, "name": name})
database.session.execute(Language.__table__.insert(), cache)
with open(os.path.join(CONFIG_DIRECTORY, "part-of-speech.csv"), "r") as file:
cache = []
for idx, (code, name) in enumerate(csv.reader(file)):
SPEECH[code] = idx + 1
cache.append({"id": idx + 1, "code": code, "name": name})
database.session.execute(PartOfSpeech.__table__.insert(), cache)
with open(os.path.join(CONFIG_DIRECTORY, "relation.csv"), "r") as file:
cache = []
for idx, (relation, directed) in enumerate(csv.reader(file)):
RELATION[relation] = idx + 1
cache.append({"id": idx + 1, "relation": relation,
"directed": directed == "directed"})
database.session.execute(Relation.__table__.insert(), cache)
# process conceptnet file
COUNTER = {"concept": 0, "assertion": 0, "external_url": 0}
def get_concept(uri: str) -> int:
if uri not in CONCEPT.keys():
COUNTER["concept"] += 1
CONCEPT[uri] = COUNTER["concept"]
lang, text, speech, suffix = re.match(REGEX, uri).groups()
speech = speech[1:] if speech else None
suffix = suffix[1:] if suffix else None
database.session.execute(Concept.__table__.insert(),
{"id": CONCEPT[uri],
"lang": LANG[lang],
"text": text,
"speech": SPEECH[speech],
"suffix": suffix})
return CONCEPT[uri]
with gzip.open(CONCEPTNET, "rt") as conceptnet:
cache = []
reader = csv.reader(conceptnet, delimiter='\t')
for idx, (_, relation, source, target, data) in enumerate(reader):
print(f"Processed {idx + 1:,} lines ("
f"concept: {COUNTER['concept']:,}, "
f"assertion: {COUNTER['assertion']:,})", end='\r')
relation = relation[3:]
data = json.loads(data)
if relation == "ExternalURL":
continue # process in second pass
COUNTER["assertion"] += 1
cache.append({"id": COUNTER["assertion"],
"relation_id": RELATION[relation],
"source_id": get_concept(source),
"target_id": get_concept(target),
"weight": data["weight"]})
if len(cache) == LIMIT:
database.session.execute(Assertion.__table__.insert(), cache)
cache.clear()
database.session.execute(Assertion.__table__.insert(), cache)
with gzip.open(CONCEPTNET, "rt") as conceptnet:
cache = []
reader = csv.reader(conceptnet, delimiter='\t')
for idx, (_, relation, source, target, data) in enumerate(reader):
print(f"Processed {idx + 1:,} lines ("
f"concept: {COUNTER['concept']:,}, "
f"assertion: {COUNTER['assertion']:,}, "
f"external url: {COUNTER['external_url']:,})", end='\r')
relation = relation[3:]
data = json.loads(data)
if relation != "ExternalURL" or source not in CONCEPT.keys():
continue # already processed in first pass
COUNTER["external_url"] += 1
cache.append({"id": COUNTER["external_url"],
"relation_id": RELATION[relation],
"source_id": get_concept(source),
"target_id": target,
"weight": data["weight"]})
if len(cache) == LIMIT:
database.session.execute(ExternalURL.__table__.insert(), cache)
cache.clear()
database.session.execute(ExternalURL.__table__.insert(), cache)
print()
database.session.commit()
# generate minified knowledge graph
print("Generating minified knowledge graph ...", end='\r')
assertions = database.session\
.query(Assertion.source_id, Assertion.target_id)\
.union(
database.session
.query(Assertion.target_id, Assertion.source_id)
.filter(Assertion.relation.has(directed=False))
)\
.distinct()
graph = igraph.Graph(edges=assertions.all(), directed=True)
graph.write_pickle(os.path.join(DATA_DIRECTORY, "directed-graph.pkl"))
print(f"Generated minified knowledge graph with {len(graph.es)} edges")
| 33.739884 | 81 | 0.562789 | 608 | 5,837 | 5.296053 | 0.259868 | 0.051242 | 0.054658 | 0.021739 | 0.428882 | 0.408385 | 0.408385 | 0.362733 | 0.332919 | 0.332919 | 0 | 0.006627 | 0.302039 | 5,837 | 172 | 82 | 33.936047 | 0.783751 | 0.064417 | 0 | 0.33913 | 0 | 0 | 0.132599 | 0.02865 | 0 | 0 | 0 | 0 | 0.113043 | 1 | 0.017391 | false | 0 | 0.095652 | 0 | 0.121739 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa5d2b911fc0ad82e70370c7f7b42c51ab93f67a | 6,023 | py | Python | bfillings/sumaclust_v1.py | gregcaporaso/burrito-fillings | a7b3b4db0d20b4baa064d447033782969f491622 | [
"BSD-3-Clause"
] | null | null | null | bfillings/sumaclust_v1.py | gregcaporaso/burrito-fillings | a7b3b4db0d20b4baa064d447033782969f491622 | [
"BSD-3-Clause"
] | null | null | null | bfillings/sumaclust_v1.py | gregcaporaso/burrito-fillings | a7b3b4db0d20b4baa064d447033782969f491622 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
#-----------------------------------------------------------------------------
# Copyright (c) 2013--, biocore development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
"""
Application controller for SumaClust version 1.0
================================================
"""
# ----------------------------------------------------------------------------
# Copyright (c) 2014--, biocore development team
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
# ----------------------------------------------------------------------------
from os.path import split, isdir, dirname, isfile, exists, realpath
from burrito.util import CommandLineApplication, ResultPath
from burrito.parameters import ValuedParameter, FlagParameter
class Sumaclust(CommandLineApplication):
""" SumaClust generic application controller for de novo OTU picking
"""
_command = 'sumaclust'
_command_delimiter = ' '
_parameters = {
# Reference sequence length is the shortest
'-l': FlagParameter('-', Name='l', Value=True),
# Filepath of the OTU-map
'-O': ValuedParameter('-', Name='O', Delimiter=' ',
Value=None, IsPath=True),
# Flag '-f' must be passed to deactivate FASTA output
'-f': FlagParameter('-', Name='f', Value=True),
# Number of threads
'-p': ValuedParameter('-', Name='p', Delimiter=' ',
Value=1, IsPath=False),
# Assign sequence to the best matching cluster seed, rather
# than the first matching cluster (having >= similarity threshold)
'-e': FlagParameter('-', Name='e', Value=False),
# Similarity threshold
'-t': ValuedParameter('-', Name='t', Delimiter=' ',
Value=0.97, IsPath=False),
# Maximum ratio between abundance of two sequences so that the
# less abundant one can be considered as a variant of the more
# abundant one.
'-R': ValuedParameter('-', Name='R', Delimiter=' ',
Value=1, IsPath=False)
}
_synonyms = {}
_input_handler = '_input_as_string'
_supress_stdout = False
_supress_stderr = False
def _get_result_paths(self, data):
""" Set the result paths
"""
result = {}
# OTU map (mandatory output)
result['OtuMap'] = ResultPath(Path=self.Parameters['-O'].Value,
IsWritten=True)
# SumaClust will not produce any output file if the
# input file was empty, so we create an empty
# output file
if not isfile(result['OtuMap'].Path):
otumap_f = open(result['OtuMap'].Path, 'w')
otumap_f.close()
return result
def getHelp(self):
""" Method that points to documentation
"""
help_str = ("SumaClust is hosted at:\n"
"http://metabarcoding.org/sumatra/\n\n"
"The following paper should be cited if this resource "
"is used:\n\n"
"SUMATRA and SUMACLUST: fast and exact comparison and "
"clustering "
"of full-length barcode sequences\n"
"Mercier, C., Boyer, F., Kopylova, E., Taberlet, P., "
"Bonin, A. and Coissac E.,"
"2014 (in preparation)\n"
)
return help_str
def sumaclust_denovo_cluster(seq_path=None,
result_path=None,
shortest_len=True,
similarity=0.97,
threads=1,
exact=False,
HALT_EXEC=False
):
""" Function : launch SumaClust de novo OTU picker
Parameters: seq_path, filepath to reads;
result_path, filepath to output OTU map;
shortest_len, boolean;
similarity, the similarity threshold (between (0,1]);
threads, number of threads to use;
exact, boolean to perform exact matching
Return : clusters, list of lists
"""
# Sequence path is mandatory
if (seq_path is None
or not exists(seq_path)):
raise ValueError("Error: FASTA query sequence filepath is "
"mandatory input.")
# Output directory is mandatory
if (result_path is None
or not isdir(dirname(realpath(result_path)))):
raise ValueError("Error: output directory is mandatory input.")
# Instantiate the object
sumaclust = Sumaclust(HALT_EXEC=HALT_EXEC)
# Set the OTU-map filepath
sumaclust.Parameters['-O'].on(result_path)
# Set the similarity threshold
if similarity is not None:
sumaclust.Parameters['-t'].on(similarity)
# Set the option to perform exact clustering (default: False)
if exact:
sumaclust.Parameters['-e'].on()
# Turn off option for reference sequence length to be the shortest
if not shortest_len:
sumaclust.Parameters['-l'].off()
# Set the number of threads
if threads > 0:
sumaclust.Parameters['-p'].on(threads)
else:
raise ValueError("Number of threads must be positive.")
# Launch SumaClust,
# set the data string to include the read filepath
# (to be passed as final arguments in the sumaclust command)
app_result = sumaclust(seq_path)
# Put clusters into a list of lists
f_otumap = app_result['OtuMap']
clusters = [line.strip().split('\t')[1:] for line in f_otumap]
# Return clusters
return clusters
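# Usage sketch (hypothetical file names):
#   clusters = sumaclust_denovo_cluster(seq_path="seqs.fna",
#                                       result_path="sumaclust_otus.txt",
#                                       similarity=0.97,
#                                       threads=4)
# clusters the reads in seqs.fna at 97% identity, writes the OTU map to
# sumaclust_otus.txt, and returns one list of sequence labels per OTU.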
| 34.614943 | 78 | 0.546073 | 632 | 6,023 | 5.136076 | 0.348101 | 0.011091 | 0.018484 | 0.020333 | 0.105977 | 0.080715 | 0.080715 | 0.080715 | 0.080715 | 0.080715 | 0 | 0.006459 | 0.305994 | 6,023 | 173 | 79 | 34.815029 | 0.770096 | 0.38602 | 0 | 0 | 0 | 0 | 0.156004 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.04 | 0 | 0.226667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa5d6d3199de3edd77bf0f1f57accbdde11b04b2 | 2,308 | py | Python | X-Net/error_map.py | kanshichao/X-Microscopy | 527016f46b39861be9a0fab6066904755990b961 | [
"MIT"
] | 2 | 2022-03-12T12:31:28.000Z | 2022-03-27T03:44:15.000Z | X-Net/error_map.py | kanshichao/X-Microscopy | 527016f46b39861be9a0fab6066904755990b961 | [
"MIT"
] | null | null | null | X-Net/error_map.py | kanshichao/X-Microscopy | 527016f46b39861be9a0fab6066904755990b961 | [
"MIT"
] | null | null | null | #encoding=utf-8
from logger import setup_logger
from utils import *
from glob import glob
from skimage import measure as m
import os
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib as mlp
sample_files = sorted(glob('/media/ksc/code/Figure 4E/AK3 is better than perfect/'))
print(sample_files)
for sample_file in sample_files:
dir = sample_file + '/'
filelist = get_filelist(dir, [])
for fl in filelist:
if 'error_map' in fl:
if os.path.exists(fl):
os.remove(fl)
count = 0
for sample_file in sample_files:
dir = sample_file + '/'
filelist = get_filelist(dir, [])
for fl in filelist:
print(fl)
if 'perfect.tif' in fl:
print(fl)
image1 = scipy.misc.imread(fl)
compare_files = sorted(glob(fl[:-11]+'*'))
count += 1
for cfl in compare_files:
# if not ('wf.tif' in cfl or 'log_scores.txt' in cfl):
if 'ak3' in cfl:
image2 = scipy.misc.imread(cfl)
image1 = image1.astype(np.float32)
image2 = image2.astype(np.float32)
emap = np.abs(image1-image2)
# emap[:,:,1] = emap[:,:,0]
# emap[:,:,2] = emap[:,:,0]
emap = emap[:,:,0]
# color = ['blue','cyan','green','Lime','purple','black','orange','cyan']
# cmap = mlp.colors.ListedColormap(color)
plt.imshow(emap,interpolation='lanczos')
plt.xticks(fontproperties='Arial', weight='bold')
plt.yticks(fontproperties='Arial', weight='bold')
cbar = plt.colorbar(shrink=0.7)
font = {'family':'Times New Roman',
'weight': 'bold'}
# cbar.ax.tick_params(labelsize=13)
# cbar.set_ticklabels([0,50,100])
image_path =cfl.split('.tif')
plt.savefig(image_path[0]+'_error_map.png')
plt.show()
# result_image = Image.fromarray(emap, 'RGB')
# image_path =cfl.split('.tif')
# result_image.save(image_path[0]+'_error_map.tif') | 38.466667 | 93 | 0.515598 | 267 | 2,308 | 4.355805 | 0.438202 | 0.037833 | 0.025795 | 0.025795 | 0.196045 | 0.130696 | 0.130696 | 0.130696 | 0.130696 | 0.130696 | 0 | 0.024933 | 0.357019 | 2,308 | 60 | 94 | 38.466667 | 0.75876 | 0.182409 | 0 | 0.227273 | 0 | 0 | 0.081513 | 0.011721 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.181818 | 0 | 0.181818 | 0.068182 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa5e94eb5db83bd8dc7fd3774f1ae257d8403415 | 729 | py | Python | server/server.py | angel7i/camera-server | 368b0bcd2ac995adfccf2f8219ea61fd4cae8049 | [
"MIT"
] | null | null | null | server/server.py | angel7i/camera-server | 368b0bcd2ac995adfccf2f8219ea61fd4cae8049 | [
"MIT"
] | 4 | 2020-11-13T18:59:00.000Z | 2022-02-10T03:23:57.000Z | server/server.py | angel7i/camera-server | 368b0bcd2ac995adfccf2f8219ea61fd4cae8049 | [
"MIT"
] | null | null | null | from flask import Flask, request
from server.data import convert_to_image
from server.model import classify_image
app = Flask(__name__)
app.config.update(
    ENV='development')
@app.route('/', methods=['GET'])
def about():
return "Welcome to camera-server!"
@app.route('/image', methods=['POST'])
def process_image():
resp = {"msg": "No se adjunto imagen"}
req_data = request.get_json()
if req_data:
if 'data' in req_data:
img_data = convert_to_image(req_data['data'])
label = classify_image(img_data)
img_data.close()
resp["msg"] = label
#resp["size"] = img_data["size"]
#resp["data"] = img_data["data"]
return resp
| 22.090909 | 57 | 0.613169 | 95 | 729 | 4.484211 | 0.431579 | 0.08216 | 0.077465 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.245542 | 729 | 32 | 58 | 22.78125 | 0.774545 | 0.085048 | 0 | 0 | 0 | 0 | 0.12782 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.15 | 0.05 | 0.35 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa6295670b8f6e0779772cc03e05bd4e0bd63f75 | 3,813 | py | Python | config/config.py | Dhruvacube/FatesList | ec90edfe65fc37e66351f0d25c29a709d717a03d | [
"MIT"
] | null | null | null | config/config.py | Dhruvacube/FatesList | ec90edfe65fc37e66351f0d25c29a709d717a03d | [
"MIT"
] | null | null | null | config/config.py | Dhruvacube/FatesList | ec90edfe65fc37e66351f0d25c29a709d717a03d | [
"MIT"
] | null | null | null | """Config for Fates List"""
import json
from typing import List, Dict, Union
import os
with open("config/data/discord.json") as f:
_discord_data = json.load(f)
_server_data = _discord_data["servers"]
_role_data = _discord_data["roles"]
_channel_data = _discord_data["channels"]
_oauth_data = _discord_data["oauth"]
discord_redirect_uri: str = _oauth_data["redirect_uri"] # Redirect URI
discord_client_id: int = int(_oauth_data["client_id"])
owner: int = int(_discord_data["owner"]) # Owner of fates list
server_bot_invite: str = _discord_data["server_bot_invite"] # Ensure that it uses 67649 for perms
support_url: str = _discord_data["support_server"] # Support server URL
bot_logs: int = int(_channel_data["bot_logs"]) # Bot logs
server_logs: int = int(_channel_data["server_logs"]) # Server logs
appeals_channel: int = int(_channel_data["appeals_channel"]) # Appeal/resubmission channel
site_errors_channel: int = int(_channel_data["site_errors_channel"]) # Site error logging
test_server: int = int(_server_data["testing"]) # Test server
main_server: int = int(_server_data["main"]) # Main server
staff_server: int = int(_server_data["staff"]) # Staff server
staff_ping_add_role: int = int(_role_data["staff_ping_add_role"]) # Staff ping role on bot add
bot_dev_role: int = int(_role_data["bot_dev_role"]) # Bot developer role
bots_role: int = int(_role_data["bots_role"]) # Bots role on main server
certified_bots_role: int = int(_role_data["certified_bots_role"]) # Certified bots role
certified_dev_role: int = int(_role_data["certified_dev_role"]) # Certified developers role
bronze_user_role: int = int(_role_data["bronze_user_role"]) # Bronze user role in main server
test_botsrole: int = int(_role_data["test_server_bots_role"]) # Test server bots role
test_staffrole: int = int(_role_data["test_server_staff_role"]) # Test server staff role
staff_ag: int = int(_role_data["staff_server_access_granted_role"]) # self-explanatory
with open("config/data/extra_data.json") as f:
_config_data = json.load(f)
INT64_MAX: int = int(_config_data["int64_max"])
API_VERSION: int = _config_data["api_version_curr"] # Current API version
reserved_vanity: List[str] = _config_data["reserved_vanity"] # Banned in vanity
md_extensions: List[str] = _config_data["md_extensions"] # Markdown extension settings
auth_namespaces: Dict[str, str] = _config_data["auth_namespaces"] # Deprecated. To remove
special_badges: List[Dict[str, str]] = _config_data["special_badges"] # Badge info.
features: Dict[str, Dict[str, str]] = _config_data["features"] # Supported features
langs: Dict[str, str] = _config_data["langs"] # Supported langs
pg_user: str = _config_data["pg_user"] # Unused (I think) but there for compatibility
site: str = _config_data["site"] # Site URL
sentry_dsn: str = _config_data["sentry_dsn"]
with open("config/data/ban_data.json") as fp:
bans_data = json.load(fp)
with open("config/data/staff_roles.json") as fp:
staff_roles = json.load(fp)
with open("config/data/policy.json") as fp:
_policy_data = json.load(fp)
rules: Dict[str, List[str]] = _policy_data["rules"]
privacy_policy: Dict[str, Union[List[str], Dict[str, str]]] = _policy_data["privacy_policy"]
with open("config/data/secrets.json") as fp:
_secret_data = json.load(fp)
TOKEN_SERVER: str = _secret_data["token_server"]
TOKEN_MANAGER: str = _secret_data["token_manager"]
# Value below should not be changed
site_url = "https://" + site
manager_key = "" # Backward compatibility
TOKEN_DBG = "" # Backward compatibility
# Notes
#
# Think about timed badges
TOKEN_MAIN = os.environ["MAIN_TOKEN"]
discord_client_secret = os.environ["CLIENT_SECRET"]
| 48.884615 | 101 | 0.72856 | 553 | 3,813 | 4.672694 | 0.240506 | 0.044118 | 0.03483 | 0.048762 | 0.206656 | 0.080495 | 0.021672 | 0 | 0 | 0 | 0 | 0.002782 | 0.151587 | 3,813 | 77 | 102 | 49.519481 | 0.795981 | 0.190139 | 0 | 0 | 0 | 0 | 0.216864 | 0.074147 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa62c886bddd41ddec58d400ce6196e9adfa8531 | 21,731 | py | Python | functions.py | harshendrashah/Spell-Corrector | a5b4de189cd8384ec5f4781242bb391bb162f62b | [
"MIT"
] | 6 | 2018-07-07T13:16:58.000Z | 2021-08-09T14:32:17.000Z | functions.py | harshendrashah/Spell-Corrector | a5b4de189cd8384ec5f4781242bb391bb162f62b | [
"MIT"
] | null | null | null | functions.py | harshendrashah/Spell-Corrector | a5b4de189cd8384ec5f4781242bb391bb162f62b | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import tensorflow as tf
import os
from os import listdir
from os.path import isfile, join
from collections import namedtuple
from tensorflow.python.layers.core import Dense
from tensorflow.python.ops.rnn_cell_impl import _zero_state_tensors
import time
import re
from sklearn.model_selection import train_test_split
import json
import difflib
from parameters import *
def load_book(path):
"""Load a book from its file"""
input_file = os.path.join(path)
with open(input_file) as f:
book = f.read()
return book
def clean_text(text):
#Remove unwanted characters and extra spaces from the text
text = re.sub(r'\n', ' ', text)
text = re.sub(r'[{}@_*>()\\#%+=\[\]]','', text)
text = re.sub('a0','', text)
text = re.sub('\'92t','\'t', text)
text = re.sub('\'92s','\'s', text)
text = re.sub('\'92m','\'m', text)
text = re.sub('\'92ll','\'ll', text)
text = re.sub('\'91','', text)
text = re.sub('\'92','', text)
text = re.sub('\'93','', text)
text = re.sub('\'94','', text)
text = re.sub('\.','. ', text)
text = re.sub('\!','', text)
text = re.sub('\?','', text)
text = re.sub(' +',' ', text)
text = re.sub(',','', text)
text = re.sub('-','', text)
text = re.sub('; ','', text)
text = re.sub(':','', text)
text = re.sub('"','', text)
text = re.sub("'97",'\'', text)
return text
def noise_maker(sentence, threshold):
'''Relocate, remove, or add characters to create spelling mistakes'''
noisy_sentence = []
i = 0
while i < len(sentence):
random = np.random.uniform(0,1,1)
# Most characters will be correct since the threshold value is high
if random < threshold:
noisy_sentence.append(sentence[i])
else:
new_random = np.random.uniform(0,1,1)
# ~33% chance characters will swap locations
if new_random > 0.67:
if i == (len(sentence) - 1):
# If last character in sentence, it will not be typed
continue
else:
# if any other character, swap order with following character
noisy_sentence.append(sentence[i+1])
noisy_sentence.append(sentence[i])
i += 1
# ~33% chance an extra lower case letter will be added to the sentence
elif new_random < 0.33:
random_letter = np.random.choice(letters, 1)[0]
noisy_sentence.append(vocab_to_int[random_letter])
noisy_sentence.append(sentence[i])
# ~33% chance a character will not be typed
else:
pass
i += 1
return noisy_sentence
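# Illustrative note: noise_maker operates on integer-encoded sentences. With a
# high threshold (e.g. 0.9) most character ids pass through unchanged, while
# roughly (1 - threshold) of the positions are corrupted by swapping two adjacent
# ids, inserting the id of a random lower-case letter, or dropping the id, each
# with about one-third probability.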
def model_inputs():
'''Create palceholders for inputs to the model'''
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [None, None], name='inputs')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [None, None], name='targets')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
inputs_length = tf.placeholder(tf.int32, (None,), name='inputs_length')
targets_length = tf.placeholder(tf.int32, (None,), name='targets_length')
max_target_length = tf.reduce_max(targets_length, name='max_target_len')
return inputs, targets, keep_prob, inputs_length, targets_length ,max_target_length
def process_encoding_input(targets, vocab_to_int, batch_size):
'''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''
with tf.name_scope("process_encoding"):
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)
return dec_input
def encoding_layer(rnn_size, sequence_length, num_layers, rnn_inputs, keep_prob, direction):
'''Create the encoding layer'''
if direction == 1:
with tf.name_scope("RNN_Encoder_Cell_1D"):
for layer in range(num_layers):
with tf.variable_scope('encoder_{}'.format(layer)):
lstm = tf.contrib.rnn.LSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm,
input_keep_prob = keep_prob)
enc_output, enc_state = tf.nn.dynamic_rnn(drop,
rnn_inputs,
sequence_length,
dtype=tf.float32)
return enc_output, enc_state
if direction == 2:
with tf.name_scope("RNN_Encoder_Cell_2D"):
for layer in range(num_layers):
with tf.variable_scope('encoder_{}'.format(layer)):
cell_fw = tf.contrib.rnn.LSTMCell(rnn_size)
cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw,
input_keep_prob = keep_prob)
cell_bw = tf.contrib.rnn.LSTMCell(rnn_size)
cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw,
input_keep_prob = keep_prob)
enc_output, enc_state = tf.nn.bidirectional_dynamic_rnn(cell_fw,
cell_bw,
rnn_inputs,
sequence_length,
dtype=tf.float32)
# Join outputs since we are using a bidirectional RNN
enc_output = tf.concat(enc_output,2)
# Use only the forward state because the model can't use both states at once
return enc_output, enc_state[0]
def training_decoding_layer(dec_embed_input, targets_length, dec_cell, initial_state, output_layer,
vocab_size):
'''Create the training logits'''
with tf.name_scope("Training_Decoder"):
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=targets_length,
time_major=False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
training_helper,
initial_state,
output_layer)
training_logits, one,two = tf.contrib.seq2seq.dynamic_decode(training_decoder,
output_time_major=False,
impute_finished=True,
maximum_iterations=tf.reduce_max(targets_length))
return training_logits
def inference_decoding_layer(embeddings, start_token, end_token, dec_cell, initial_state, output_layer,max_target_length,
batch_size,targets_length):
'''Create the inference logits'''
with tf.name_scope("Inference_Decoder"):
start_tokens = tf.tile(tf.constant([start_token], dtype=tf.int32), [batch_size], name='start_tokens')
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embeddings,
start_tokens,
end_token)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
inference_helper,
initial_state,
output_layer)
inference_logits, one_in,two_in = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
output_time_major=False,
impute_finished=True,
maximum_iterations=tf.reduce_max(targets_length))
return inference_logits
def decoding_layer(dec_embed_input, embeddings, enc_output, enc_state, vocab_size, inputs_length, targets_length, max_target_length,
rnn_size, vocab_to_int, keep_prob, batch_size, num_layers,direction):
'''Create the decoding cell and attention for the training and inference decoding layers'''
with tf.name_scope("RNN_Decoder_Cell"):
for layer in range(num_layers):
with tf.variable_scope('decoder_{}'.format(layer)):
lstm = tf.contrib.rnn.LSTMCell(rnn_size)
dec_cell = tf.contrib.rnn.DropoutWrapper(lstm,
input_keep_prob = keep_prob)
output_layer = Dense(vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
attn_mech = tf.contrib.seq2seq.BahdanauAttention(rnn_size,
enc_output,
inputs_length,
normalize=False,
name='BahdanauAttention')
with tf.name_scope("Attention_Wrapper"):
dec_cell = tf.contrib.seq2seq.AttentionWrapper(dec_cell,
attn_mech,
rnn_size)
initial_state = dec_cell.zero_state(dtype=tf.float32, batch_size=batch_size).clone(cell_state=enc_state)
with tf.variable_scope("decode"):
training_logits = training_decoding_layer(dec_embed_input,
targets_length,
dec_cell,
initial_state,
output_layer,
vocab_size)
with tf.variable_scope("decode", reuse=True):
inference_logits = inference_decoding_layer(embeddings,
vocab_to_int['<GO>'],
vocab_to_int['<EOS>'],
dec_cell,
initial_state,
output_layer,
max_target_length,
batch_size,
targets_length)
return training_logits, inference_logits
def seq2seq_model(inputs, targets, keep_prob, inputs_length, targets_length,max_target_length,
vocab_size, rnn_size, num_layers, vocab_to_int, batch_size, embedding_size,direction):
'''Use the previous functions to create the training and inference logits'''
enc_embeddings = tf.Variable(tf.random_uniform(shape=[vocab_size, embedding_size], minval = -1, maxval = 1, seed = 0.5))
enc_embed_input = tf.nn.embedding_lookup(enc_embeddings, inputs)
enc_output, enc_state = encoding_layer(rnn_size, inputs_length, num_layers,
enc_embed_input, keep_prob,direction)
dec_embeddings = tf.Variable(tf.random_uniform(shape=[vocab_size, embedding_size],minval=-1,maxval= 1,seed = 0.5))
dec_input = process_encoding_input(targets, vocab_to_int, batch_size)
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
training_logits, inference_logits = decoding_layer(dec_embed_input,
dec_embeddings,
enc_output,
enc_state,
vocab_size,
inputs_length,
targets_length,
max_target_length,
rnn_size,
vocab_to_int,
keep_prob,
batch_size,
num_layers,
direction)
return training_logits, inference_logits
def pad_sentence_batch(sentence_batch):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [vocab_to_int['<PAD>']] * (max_sentence - len(sentence)) for sentence in sentence_batch]
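# Illustrative example: if vocab_to_int['<PAD>'] happened to be 0, then
# pad_sentence_batch([[5, 6], [7, 8, 9]]) would return [[5, 6, 0], [7, 8, 9]],
# giving every sentence in the batch the length of its longest member.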
def get_batches(sentences, batch_size, threshold):
"""Batch sentences, noisy sentences, and the lengths of their sentences together.
With each epoch, sentences will receive new mistakes"""
for batch_i in range(0, len(sentences)//batch_size):
start_i = batch_i * batch_size
sentences_batch = sentences[start_i:start_i + batch_size]
sentences_batch_noisy = []
for sentence in sentences_batch:
sentences_batch_noisy.append(noise_maker(sentence, threshold))
sentences_batch_eos = []
for sentence in sentences_batch:
sentence.append(vocab_to_int['<EOS>'])
sentences_batch_eos.append(sentence)
pad_sentences_batch = np.array(pad_sentence_batch(sentences_batch_eos))
pad_sentences_noisy_batch = np.array(pad_sentence_batch(sentences_batch_noisy))
# Need the lengths for the _lengths parameters
pad_sentences_lengths = []
for sentence in pad_sentences_batch:
pad_sentences_lengths.append(len(sentence))
pad_sentences_noisy_lengths = []
for sentence in pad_sentences_noisy_batch:
pad_sentences_noisy_lengths.append(len(sentence))
yield pad_sentences_noisy_batch, pad_sentences_batch, pad_sentences_noisy_lengths, pad_sentences_lengths
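# Note: each yielded value is a 4-tuple of (padded noisy batch, padded clean
# batch, noisy sentence lengths, clean sentence lengths); the noisy batch is
# regenerated by noise_maker on every pass, so the misspellings differ from
# epoch to epoch.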
def build_graph(keep_prob, rnn_size, num_layers, batch_size, learning_rate, embedding_size,direction):
tf.reset_default_graph()
# Load the model inputs
inputs, targets, keep_prob, inputs_length, targets_length, max_target_length = model_inputs()
# Create the training and inference logits
training_logits, inference_logits = seq2seq_model(tf.reverse(inputs, [-1]),
targets,
keep_prob,
inputs_length,
targets_length,
max_target_length,
len(vocab_to_int)+1,
rnn_size,
num_layers,
vocab_to_int,
batch_size,
embedding_size,
direction)
# Create tensors for the training logits and inference logits
training_logits = tf.identity(training_logits.rnn_output, 'logits')
with tf.name_scope('predictions'):
predictions = tf.identity(inference_logits.sample_id, name='predictions')
tf.summary.histogram('predictions', predictions)
# Create the weights for sequence_loss
masks = tf.sequence_mask(targets_length, dtype=tf.float32, name='masks')
with tf.name_scope("cost"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(training_logits,
targets,
masks)
tf.summary.scalar('cost', cost)
with tf.name_scope("optimze"):
optimizer = tf.train.AdamOptimizer(learning_rate)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
# Merge all of the summaries
merged = tf.summary.merge_all()
# Export the nodes
export_nodes = ['inputs', 'targets', 'keep_prob', 'cost', 'inputs_length', 'targets_length',
'predictions', 'merged', 'train_op','optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
saver = tf.train.Saver()
return graph, saver
# Train the model with the desired tuning parameters
'''for keep_probability in [0.75]:
for num_layers in [3]:
for threshold in [0.75]:
log_string = 'kp={},nl={},th={}'.format(keep_probability,
num_layers,
threshold)
model, saver = build_graph(keep_probability, rnn_size, num_layers, batch_size,
learning_rate,
embedding_size,
direction)
#train(model, epochs, log_string, saver)'''
def text_to_ints(text):
'''Prepare the text for the model'''
text = clean_text(text)
return [vocab_to_int[word] for word in text]
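# Illustrative example: text_to_ints("The cat.") first normalizes the string with
# clean_text and then maps every remaining character (including spaces) to its
# integer id, returning one id per character.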
path = './books/'
book_files = [f for f in listdir(path) if isfile(join(path, f))]
book_files = book_files[1:]
books = []  # array holding the text of each book
for book in book_files:
books.append(load_book(path+book))
# Clean the text of the books
clean_books = []
for book in books:
    clean_books.append(clean_text(book.lower()))
# Create a dictionary to convert the vocabulary (characters) to integers
vocab_to_int = {}
'''count = 0
for book in clean_books:
for character in book:
if character not in vocab_to_int:
vocab_to_int[character] = count
count += 1'''
with open("./clean_data/vocab_to_int.json", 'r') as f:
vocab_to_int = json.load(f)
count = len(vocab_to_int)
# Add special tokens to vocab_to_int
'''codes = ['<PAD>','<EOS>','<GO>']
for code in codes:
vocab_to_int[code] = count
count += 1'''
# Create another dictionary to convert integers to their respective characters
int_to_vocab = {}
for character, value in vocab_to_int.items():
int_to_vocab[value] = character
# Split the text from the books into sentences.
sentences = []
'''for book in clean_books:
for sentence in book.split('. '):
sentences.append(sentence.lower())'''
text_file = open("./clean_data/sentences.txt",'r')
sentences = text_file.read().split(". ")
words_list = {}
for i in range(0,len(sentences)):
temp_list = sentences[i].split(" ")
for j in range(0,len(temp_list)):
if temp_list[j] in words_list:
val = words_list[temp_list[j]]
val = val+1
words_list[temp_list[j]] = val
else:
words_list[temp_list[j]] = 1
# Convert sentences to integers
int_sentences = []
for sentence in sentences:
int_sentence = []
for character in sentence:
if character != "\n":
int_sentence.append(vocab_to_int[character])
int_sentences.append(int_sentence)
# Find the length of each sentence
lengths = []
for sentence in int_sentences:
lengths.append(len(sentence))
lengths = pd.DataFrame(lengths, columns=["counts"])
lengths.describe()
max_length = 92
min_length = 10
good_sentences = []
for sentence in int_sentences:
if len(sentence) <= max_length and len(sentence) >= min_length:
good_sentences.append(sentence)
print("We will use {} to train and test our model.".format(len(good_sentences)))
# Split the data into training and testing sentences
training, testing = train_test_split(good_sentences, test_size = 0.15, random_state = 2)
print("Number of training sentences:", len(training))
print("Number of testing sentences:", len(testing))
# Sort the sentences by length to reduce padding, which will allow the model to train faster
training_sorted = []
testing_sorted = []
for i in range(min_length, max_length+1):
for sentence in training:
if len(sentence) == i:
training_sorted.append(sentence)
for sentence in testing:
if len(sentence) == i:
testing_sorted.append(sentence)
#used to modify sentences and create noise
letters = ['a','b','c','d','e','f','g','h','i','j','k','l','m',
'n','o','p','q','r','s','t','u','v','w','x','y','z',]
| 43.202783 | 132 | 0.533754 | 2,317 | 21,731 | 4.765645 | 0.16789 | 0.015215 | 0.021735 | 0.024724 | 0.372668 | 0.278301 | 0.237366 | 0.189187 | 0.181579 | 0.166636 | 0 | 0.009538 | 0.377617 | 21,731 | 502 | 133 | 43.288845 | 0.806876 | 0.09567 | 0 | 0.228228 | 0 | 0.018018 | 0.038257 | 0.003013 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042042 | false | 0.003003 | 0.045045 | 0 | 0.129129 | 0.009009 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa6855a59e066b6957b59f78dfaeae9521bc7616 | 3,856 | py | Python | S4/S4 Library/core/sims4/core_services.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | 1 | 2021-05-20T19:33:37.000Z | 2021-05-20T19:33:37.000Z | S4/S4 Library/core/sims4/core_services.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | S4/S4 Library/core/sims4/core_services.py | NeonOcean/Environment | ca658cf66e8fd6866c22a4a0136d415705b36d26 | [
"CC-BY-4.0"
] | null | null | null | import paths
import sims4.reload
SUPPORT_RELOADING_SCRIPTS = False and (not paths.IS_ARCHIVE and paths.SCRIPT_ROOT is not None)
SUPPORT_GSI = False
with sims4.reload.protected(globals()):
service_manager = None
if paths.SUPPORT_RELOADING_RESOURCES:
_file_change_manager = None
if SUPPORT_RELOADING_SCRIPTS:
_directory_watcher_manager = None
if SUPPORT_GSI:
_command_buffer_service = None
_http_service = None
defer_tuning_references = True
def file_change_manager():
if paths.SUPPORT_RELOADING_RESOURCES:
return _file_change_manager
raise RuntimeError('The FileChangeService is not available')
def directory_watcher_manager():
if SUPPORT_RELOADING_SCRIPTS:
return _directory_watcher_manager
raise RuntimeError('The DirectoryWatcherService is not available')
def command_buffer_service():
if SUPPORT_GSI:
return _command_buffer_service
raise RuntimeError('The CommandBufferService is not available')
def http_service():
return _http_service
def start_services(init_critical_services, services):
global service_manager, defer_tuning_references, _file_change_manager, _directory_watcher_manager, _command_buffer_service, _http_service
service_manager = sims4.service_manager.ServiceManager()
defer_tuning_references = False
if paths.SUPPORT_RELOADING_RESOURCES:
if _file_change_manager is not None:
raise RuntimeError('The FileChangeService has already been created.')
from sims4.file_change_service import FileChangeService
_file_change_manager = FileChangeService()
services.insert(0, _file_change_manager)
if SUPPORT_RELOADING_SCRIPTS:
if _directory_watcher_manager is not None:
raise RuntimeError('The DirectoryWatcherService has already been created.')
from sims4.reload_service import ReloadService
from sims4.directory_watcher_service import DirectoryWatcherService
_directory_watcher_manager = DirectoryWatcherService()
_directory_watcher_manager.set_paths([paths.SCRIPT_ROOT], 'script_root')
services.insert(0, _directory_watcher_manager)
services.append(ReloadService)
if SUPPORT_GSI:
if _command_buffer_service is not None:
raise RuntimeError('The CommandBufferService has already been created.')
if _http_service is not None:
raise RuntimeError('The HttpService has already been created.')
from sims4.gsi.command_buffer import CommandBufferService
from sims4.gsi.http_service import HttpService
_command_buffer_service = CommandBufferService()
_http_service = HttpService()
services.insert(0, _command_buffer_service)
services.insert(1, _http_service)
for service in init_critical_services:
service_manager.register_service(service, is_init_critical=True)
for service in services:
service_manager.register_service(service)
service_manager.start_services(defer_start_to_tick=True)
def start_service_tick():
if service_manager is None:
raise RuntimeError('Service manager is is not initialized')
return service_manager.start_single_service()
def stop_services():
global service_manager, _file_change_manager, _directory_watcher_manager, _command_buffer_service, _http_service
service_manager.stop_services()
service_manager = None
if paths.SUPPORT_RELOADING_RESOURCES:
_file_change_manager = None
if SUPPORT_RELOADING_SCRIPTS:
_directory_watcher_manager = None
if SUPPORT_GSI:
_command_buffer_service = None
_http_service = None
def on_tick():
if SUPPORT_RELOADING_SCRIPTS:
_directory_watcher_manager.on_tick()
if SUPPORT_GSI:
_command_buffer_service.on_tick()
_http_service.on_tick()
| 41.462366 | 141 | 0.759595 | 447 | 3,856 | 6.138702 | 0.1566 | 0.066327 | 0.092201 | 0.045554 | 0.381924 | 0.335277 | 0.258746 | 0.188776 | 0.188776 | 0.188776 | 0 | 0.003835 | 0.188537 | 3,856 | 92 | 142 | 41.913043 | 0.873122 | 0 | 0 | 0.285714 | 0 | 0 | 0.09388 | 0.011929 | 0 | 0 | 0 | 0 | 0 | 1 | 0.095238 | false | 0 | 0.083333 | 0.011905 | 0.238095 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa69c4468a0e9fe0134a22b8a3f512466b978c5d | 13,012 | py | Python | Learning_algorithms/PSRL_numerical_rewards.py | ernovoseller/DuelingPosteriorSampling | 0b34db67bd20d664f73611608638e1e0a32faf30 | [
"MIT"
] | 4 | 2020-05-30T21:42:03.000Z | 2021-07-06T05:41:11.000Z | Learning_algorithms/PSRL_numerical_rewards.py | ernovoseller/DuelingPosteriorSampling | 0b34db67bd20d664f73611608638e1e0a32faf30 | [
"MIT"
] | null | null | null | Learning_algorithms/PSRL_numerical_rewards.py | ernovoseller/DuelingPosteriorSampling | 0b34db67bd20d664f73611608638e1e0a32faf30 | [
"MIT"
] | 1 | 2020-05-30T21:44:01.000Z | 2020-05-30T21:44:01.000Z | # -*- coding: utf-8 -*-
"""
Implementation of the posterior sampling RL algorithm (PSRL), as described in
"(More) Efficient Reinforcement Learning via Posterior Sampling," by I. Osband,
B. Van Roy, and D. Russo (2013).
Unlike preference-based learning algorithms, PSRL receives numerical reward
feedback after every step of interaction between the agent and the environment.
"""
import numpy as np
from collections import defaultdict
import sys
if "../" not in sys.path:
sys.path.append("../")
from ValueIteration import value_iteration
def PSRL(time_horizon, NG_prior_params, env, num_iter, diri_prior = 1,
run_str = '', seed = None):
"""
This function implements the PSRL algorithm for performing posterior
sampling with numerical rewards at every step.
Inputs:
1) time_horizon: episode horizon; this is the number of state/action
pairs in each learning episode.
2) NG_prior_params: the hyperparameters for the normal-gamma model
used for learning the posterior over rewards. This is a length-4
list of the form [mu0, k0, alpha0, beta0]. The normal-gamma
prior is defined as NG(mu, lambda | mu0, k0, alpha0, beta0) =
Normal(mu | mu0, (k0 * lambda)^(-1)) * Gamma(lambda | alpha0,
rate = beta0). For details on the normal-gamma distribution, see
"Conjugate Bayesian analysis of the Gaussian distribution" by Kevin
P. Murphy, https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf.
3) env: the RL environment.
4) num_iter: the number of iterations of the learning algorithm to run.
Note that one trajectory rollout occurs per iteration of learning.
5) diri_prior: parameter for setting the prior of the transition
dynamics model. For each state/action pair, the Dirichlet prior is
set to diri_prior * np.ones(num_states), where num_states is the
number of states in the MDP.
6) run_str: if desired, a string with information about the current
call to PSRL (e.g. hyperparameter values or repetition number),
which can be useful for print statements to track progress.
7) seed: seed for random number generation.
Returns: a vector of rewards received as the algorithm runs. This is either
a) the total rewards from each trajectory rollout, or b) the
rewards at every step taken in the environment (the environment
determines whether a) or b) is used).
"""
    if seed is not None:
np.random.seed(seed)
# Numbers of states and actions in the environment:
num_states = env.nS
num_actions = env.nA
# Dirichlet model posterior over state/action transition probabilities.
# Initially, this is set to the Dirichlet prior, and it's updated after
# each observed state transition. Note that dirichlet_posterior[state][action]
# is a length-num_states array, specifying the probability distribution for
# transitioning to each possible subsequent state from the given state/action.
# Setting diri_prior = 1 gives a uniform prior over transition probabilities.
dirichlet_posterior = defaultdict(lambda: defaultdict(lambda: \
diri_prior * np.ones(num_states)))
# Initialize posterior parameters used for sampling from the reward model
# (initially, these are equal to the prior parameters):
NG_params = np.tile(NG_prior_params, (num_states, num_actions, 1))
# Store how many times each state/action pair gets visited:
visit_counts = np.zeros((num_states, num_actions))
# Store rewards observed in each state/action:
reward_samples = defaultdict(lambda: [])
num_policies = 1 # Number of policies to sample per learning iteration
# To store results (for evaluation purposes only):
if env.store_episode_reward: # Store total reward for each trajectory
rewards = np.empty(num_iter * num_policies)
else: # Store reward at each step within each trajectory
rewards = np.empty(num_iter * time_horizon * num_policies)
reward_count = 0 # Counts how many values in the "rewards" variable
# defined above have been populated
"""
Here is where the learning algorithm begins.
"""
for iteration in range(num_iter):
# Print status:
print('PSRL, parameters %s: iteration = %i' % (run_str, iteration + 1))
# Sample policies:
policies, reward_models = advance(num_policies, dirichlet_posterior,
num_states, num_actions, NG_params, time_horizon)
for policy in policies: # Roll out an action sequence
state = env.reset()
for t in range(time_horizon):
action = np.random.choice(num_actions, p = policy[t, state, :])
                next_state, reward, done = env.step(action)
# Update state transition posterior:
dirichlet_posterior[state][action][next_state] += 1
                # Update state/action visit counts:
visit_counts[state][action] += 1
# Store observed rewards:
reward_samples[state, action].append(reward)
# Tracking rewards for evaluation purposes (in case of
# tracking rewards at every single step):
if not env.store_episode_reward:
rewards[reward_count] = env.get_step_reward(state,
action, next_state)
reward_count += 1
# Terminate trajectory if environment turns on "done" flag.
if done:
break
state = next_state
# Tracking rewards for evaluation purposes (in case of tracking
# rewards just over entire episodes):
if env.store_episode_reward:
rewards[reward_count] = env.get_trajectory_return()
reward_count += 1
# Call feedback function to update the normal-gamma reward posterior:
NG_params = feedback_NG(NG_prior_params, visit_counts, reward_samples,
num_states, num_actions)
# Return performance results:
return rewards
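# Illustrative usage sketch: the call below assumes a hypothetical tabular
# environment object `env` exposing nS, nA, reset(), step(), the
# store_episode_reward flag and the reward-tracking helpers used above; the
# prior values are placeholders, not recommended settings.
#
#   NG_prior = [0.0, 1.0, 1.0, 1.0]   # [mu0, k0, alpha0, beta0]
#   rewards = PSRL(time_horizon=20, NG_prior_params=NG_prior, env=env,
#                  num_iter=100, diri_prior=1, run_str='demo', seed=0)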
def advance(num_policies, dirichlet_posterior, num_states, num_actions,
NG_params, time_horizon):
"""
Draw a specified number of samples from the model posteriors over the
environment (i.e., the transition dynamics and rewards). For each sampled
environment, run value iteration to obtain the optimal policy given the
sampled dynamics and rewards.
This function assumes that the reward model posterior is an independent
normal-gamma distribution for each state/action pair.
Inputs: (note: d = num_states * num_actions, the number of state/action pairs)
1) num_policies: the number of samples to draw from the posterior; a
positive integer.
2) dirichlet_posterior: the model posterior over transition dynamics
parameters: dirichlet_posterior[state][action] is a length-num_states
array of the Dirichlet parameters for the given state and action.
These give the probability distribution of transitioning to each
possible subsequent state from the given state and action.
3) num_states: number of states in the MDP.
4) num_actions: number of actions in the MDP.
5) NG_params: these parameters specify the normal-gamma reward posterior.
It's a matrix of size num_states x num_actions x 4. NG_params[s, a, :]
gives the 4 parameters of the normal-gamma model for state/action
pair (s, a). This is a length-4 list of the form [mu_n, k_n, alpha_n,
beta_n]. The normal-gamma posterior is defined as:
NG(mu, lambda | mu_n, k_n, alpha_n, beta_n) =
Normal(mu | mu_n, (k_n * lambda)^(-1)) * Gamma(lambda |
alpha_n, rate = beta_n).
For details on the normal-gamma distribution, see
"Conjugate Bayesian analysis of the Gaussian distribution" by Kevin
P. Murphy, https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf.
6) time_horizon: episode horizon; this is the number of state/action
pairs in each learning episode.
Output:
1) policies: this is a length-num_policies list, in which each element
is a policy. A policy is represented by a NumPy array of size
time_horizon x num_states x num_actions, in which policy[t][s][a]
is the probability that the policy takes action a in state s at
            time-step t.
        2) reward_models: a length-num_policies list of the sampled reward
            matrices (each of size num_states x num_actions), one per sampled
            environment.
"""
policies = []
reward_models = []
for i in range(num_policies):
# Sample state transition dynamics from Dirichlet posterior:
dynamics_sample = []
for state in range(num_states):
dynamics_sample_ = []
for action in range(num_actions):
dynamics_sample_.append(np.random.dirichlet(dirichlet_posterior[state][action]))
dynamics_sample.append(dynamics_sample_)
# Sample rewards from Normal-Gamma posterior:
R = np.empty((num_states, num_actions))
for s in range(num_states):
for a in range(num_actions):
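                # Draw a reward mean from the normal-gamma posterior: first sample the
                # precision lambda ~ Gamma(alpha_n, rate=beta_n) (np.random.gamma takes a
                # scale parameter, hence 1 / beta_n), then sample the mean from
                # Normal(mu_n, (k_n * lambda)^(-1)) via the corresponding standard deviation.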
gamma_sample = np.random.gamma(NG_params[s, a, 2], 1 / NG_params[s, a, 3])
R[s, a] = np.random.normal(NG_params[s, a, 0],
(NG_params[s, a, 1] * gamma_sample)**(-0.5))
# Value iteration to determine policy:
policies.append(value_iteration(dynamics_sample, R, num_states,
num_actions, epsilon = 0,
H = time_horizon)[0])
reward_models.append(R)
return policies, reward_models
def feedback_NG(NG_prior_params, visit_counts, reward_samples, num_states,
num_actions):
"""
This function updates the Normal-Gamma reward posterior based upon the
    observed data.
    Inputs:
1) NG_prior_params: the hyperparameters for the normal-gamma model
used for learning the posterior over rewards. This is a length-4
list of the form [mu0, k0, alpha0, beta0]. The normal-gamma
prior is defined as NG(mu, lambda | mu0, k0, alpha0, beta0) =
Normal(mu | mu0, (k0 * lambda)^(-1)) * Gamma(lambda | alpha0,
rate = beta0). For details on the normal-gamma distribution, see
"Conjugate Bayesian analysis of the Gaussian distribution" by Kevin
P. Murphy, https://www.cs.ubc.ca/~murphyk/Papers/bayesGauss.pdf.
2) visit_counts: num_states x num_actions matrix recording how many times
each state/action pair has been visited.
3) reward_samples: dictionary for which reward_samples[s][a] is a list of
all the rewards observed on visits (so far) to state/action pair (s, a).
4) num_states: number of states in the MDP.
5) num_actions: number of actions in the MDP.
Output: normal-gamma posterior. This is a matrix of size num_states x
num_actions x 4. NG_params[s, a, :] gives the 4 parameters of the
normal-gamma model for state/action pair (s, a). This is a length-4
list of the form [mu_n, k_n, alpha_n, beta_n]. The normal-gamma
posterior is defined as:
NG(mu, lambda | mu_n, k_n, alpha_n, beta_n) =
Normal(mu | mu_n, (k_n * lambda)^(-1)) * Gamma(lambda |
alpha_n, rate = beta_n).
"""
# To store the normal-gamma posterior:
NG_params = np.empty((num_states, num_actions, 4))
mu0 = NG_prior_params[0] # Unpack prior parameters
k0 = NG_prior_params[1]
alpha0 = NG_prior_params[2]
beta0 = NG_prior_params[3]
# Calculate posterior for each state/action pair:
for s in range(num_states):
for a in range(num_actions):
n = visit_counts[s, a]
if n == 0:
NG_params[s, a] = NG_prior_params
continue
samples = np.array(reward_samples[s, a])
avg = np.mean(samples)
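            # Standard conjugate normal-gamma update (see the Murphy reference above):
            #   mu_n    = (k0*mu0 + n*avg) / (k0 + n)
            #   k_n     = k0 + n
            #   alpha_n = alpha0 + n/2
            #   beta_n  = beta0 + 0.5*sum((x_i - avg)^2) + k0*n*(avg - mu0)^2 / (2*(k0 + n))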
NG_params[s, a, 0] = (k0 * mu0 + n * avg) / (k0 + n)
NG_params[s, a, 1] = k0 + n
NG_params[s, a, 2] = alpha0 + n/2
NG_params[s, a, 3] = beta0 + 0.5 * np.sum((samples - avg)**2) + \
k0 * n * (avg - mu0)**2 / (2 * (k0 + n))
return NG_params
| 44.108475 | 96 | 0.616047 | 1,699 | 13,012 | 4.600942 | 0.181283 | 0.028783 | 0.026865 | 0.014072 | 0.402712 | 0.358577 | 0.33619 | 0.319304 | 0.30293 | 0.291416 | 0 | 0.012064 | 0.31202 | 13,012 | 294 | 97 | 44.258503 | 0.861148 | 0.583769 | 0 | 0.087912 | 0 | 0 | 0.008347 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032967 | false | 0 | 0.043956 | 0 | 0.10989 | 0.010989 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa6a7f3a7b59efe99fa6b546ccad0ee854657aef | 5,501 | py | Python | portal/portal/settings.py | gongweibao/PaddlePaddle.org | c2d33f2d20bf0248a0f81f344a10391ef6153c1a | [
"Apache-2.0"
] | null | null | null | portal/portal/settings.py | gongweibao/PaddlePaddle.org | c2d33f2d20bf0248a0f81f344a10391ef6153c1a | [
"Apache-2.0"
] | null | null | null | portal/portal/settings.py | gongweibao/PaddlePaddle.org | c2d33f2d20bf0248a0f81f344a10391ef6153c1a | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Django settings for portal project.
Generated by 'django-admin startproject' using Django 1.8.11.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
from django.utils.translation import ugettext_lazy as _
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get('SECRET_KEY', 'secret')
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
class PPO_MODES:
'''
PPO has 3 modes:
1) Default: Default website mode.
2) DOC_EDIT_MODE: Document editor mode. This will allow document editors to generate and view
their documentation. This mode is activated if there is no 'ENV' environment variable set and
    HAS_MOUNT is NOT set or set to '1' (i.e. we have mounted a volume to /var/content in Docker).
3) DOC_VIEW_MODE: Document viewer mode. This will allow users to view the latest PaddlePaddle
documentation. This mode is activated if there is no 'ENV' environment variable set AND
    'HAS_MOUNT' is set to '0' (meaning there is no mount set for the content directory).
'''
Default, DOC_EDIT_MODE, DOC_VIEW_MODE = range(3)
CURRENT_PPO_MODE = PPO_MODES.Default
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ENV = os.environ.get('ENV', None)
HAS_MOUNT = os.environ.get('HAS_MOUNT', '1')
WORKSPACE_ZIP_FILE_NAME = 'workspace.tar.gz'
WORKSPACE_DOWNLOAD_URL = 'https://s3-ap-southeast-1.amazonaws.com/paddlepaddle.org/%s' % WORKSPACE_ZIP_FILE_NAME
if not ENV:
if HAS_MOUNT == '0':
CURRENT_PPO_MODE = PPO_MODES.DOC_VIEW_MODE
else:
CURRENT_PPO_MODE = PPO_MODES.DOC_EDIT_MODE
DEBUG = True
elif ENV == 'development':
DEBUG = True
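# Summary of the mode resolution above (HAS_MOUNT defaults to '1'):
#   ENV unset, HAS_MOUNT != '0' -> DOC_EDIT_MODE, DEBUG = True
#   ENV unset, HAS_MOUNT == '0' -> DOC_VIEW_MODE, DEBUG = False
#   ENV == 'development'        -> Default mode,  DEBUG = True
#   ENV set to anything else    -> Default mode,  DEBUG = False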
DEFAULT_DOCS_VERSION = 'develop' if CURRENT_PPO_MODE != PPO_MODES.DOC_EDIT_MODE else 'doc_test'
if DEBUG:
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
else:
ALLOWED_HOSTS = ['.paddlepaddle.org', '.ap-southeast-1.elb.amazonaws.com', '.ap-southeast-1.compute.amazonaws.com']
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'portal',
'visualDL',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'portal.middleware.subdomain.SubdomainMiddleware',
)
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
PREFERRED_VERSION_NAME = 'preferred_version'
ROOT_URLCONF = 'portal.urls'
TEMPLATE_DIR = os.path.join(BASE_DIR, 'portal/templates')
CONTENT_DIR = os.environ.get('CONTENT_DIR', None)
WORKSPACE_DIR = '%s/.ppo_workspace' % CONTENT_DIR
GENERATED_DOCS_DIR = '%s/generated_docs' % WORKSPACE_DIR
EXTERNAL_TEMPLATE_DIR = '%s/content' % WORKSPACE_DIR
RESOLVED_SITEMAP_DIR = '%s/resolved_sitemap' % WORKSPACE_DIR
OTHER_PAGE_PATH = '%s/docs/%s/other/%s'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [TEMPLATE_DIR, EXTERNAL_TEMPLATE_DIR],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
'django.template.context_processors.i18n',
'portal.context_processors.base_context',
],
},
},
]
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'TIMEOUT': 0 if DEBUG else 300
}
}
WSGI_APPLICATION = 'portal.wsgi.application'
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
LANGUAGES = (
('en', _('English')),
('zh', _('Chinese')),
)
LOCALE_PATHS = (
os.path.join(BASE_DIR, 'locale'),
)
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
APPEND_SLASH = True
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'portal/static/'),
os.path.join(BASE_DIR, 'visualDL/static/'),
)
STATIC_ROOT = 'static/'
STATIC_URL = '/static/'
TEMPORARY_DIR = '/tmp/'
| 30.225275 | 119 | 0.71078 | 731 | 5,501 | 5.19015 | 0.365253 | 0.015814 | 0.013179 | 0.01845 | 0.142594 | 0.114918 | 0.096205 | 0.096205 | 0.042699 | 0.042699 | 0 | 0.011221 | 0.173787 | 5,501 | 181 | 120 | 30.392265 | 0.823542 | 0.351027 | 0 | 0.040816 | 0 | 0 | 0.388111 | 0.244356 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.020408 | 0 | 0.030612 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa6f07ea835cbb9246dfe73b50a2ae2de6cc9df0 | 6,657 | py | Python | qlst.py | MelleVessies/qLST | 01a9b1f049be09b6887a76e2694d77386d1d6cd0 | [
"MIT"
] | 1 | 2021-12-04T20:46:23.000Z | 2021-12-04T20:46:23.000Z | qlst.py | MelleVessies/qLST | 01a9b1f049be09b6887a76e2694d77386d1d6cd0 | [
"MIT"
] | null | null | null | qlst.py | MelleVessies/qLST | 01a9b1f049be09b6887a76e2694d77386d1d6cd0 | [
"MIT"
] | null | null | null | from argparse import ArgumentParser
import torch
import torch.nn as nn
import pytorch_lightning as pl
class Attention1D(nn.Module):
"""Attention mechanism.
Parameters
----------
dim : int
The input and out dimension of per token features.
n_heads : int
Number of attention heads.
qkv_bias : bool
If True then we include bias to the query, key and value projections.
attn_p : float
Dropout probability applied to the query, key and value tensors.
proj_p : float
Dropout probability applied to the output tensor.
Attributes
----------
scale : float
        Normalizing constant for the dot product.
qkv : nn.Linear
Linear projection for the query, key and value.
proj : nn.Linear
Linear mapping that takes in the concatenated output of all attention
heads and maps it into a new space.
attn_drop, proj_drop : nn.Dropout
Dropout layers.
"""
def __init__(self, dim, n_heads=16, qkv_bias=True, attn_p=0., proj_p=0.):
super().__init__()
self.n_heads = n_heads
self.dim = dim
self.head_dim = dim
self.scale = self.head_dim ** -0.5
self.qkv = nn.Linear(dim, dim * n_heads * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_p)
self.proj = nn.Linear(dim * n_heads, dim)
self.proj_drop = nn.Dropout(proj_p)
def forward(self, x):
"""Run forward pass.
Parameters
----------
x : torch.Tensor
            Shape `(n_samples, dim)`.
Returns
-------
torch.Tensor
            Shape `(n_samples, dim)`.
"""
n_samples, dim = x.shape
if dim != self.dim:
raise ValueError
qkv = self.qkv(x) # (n_samples, 3 * dim)
        qkv = qkv.reshape(n_samples, 3, self.n_heads, self.head_dim) # (n_samples, 3, n_heads, head_dim)
qkv = qkv.permute(1, 0, 2, 3) # (3, n_samples, n_heads, head_dim)
q, k, v = qkv[0], qkv[1], qkv[2]
k_t = k.transpose(-2, -1) # (n_samples, head_dim, n_heads)
dp = (q @ k_t) * self.scale # (n_samples, n_heads, n_heads)
attn = dp.softmax(dim=-1) # (n_samples, n_heads, n_heads)
attn = self.attn_drop(attn)
weighted_avg = attn @ v # (n_samples, n_heads, head_dim)
weighted_avg = weighted_avg.transpose(1, 2) # (n_samples, head_dim, n_heads)
weighted_avg = weighted_avg.flatten(1) # (n_samples, dim)
x = self.proj(weighted_avg) # (n_samples, dim)
x = self.proj_drop(x) # (n_samples, dim)
return x
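# Illustrative example with hypothetical sizes: Attention1D maps a batch of flat
# feature vectors of shape (n_samples, dim) to the same shape.
#
#   attn = Attention1D(dim=64, n_heads=4)
#   out = attn(torch.randn(8, 64))   # out has shape (8, 64)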
class qLST(pl.LightningModule):
def __init__(
self,
classification_model: pl.LightningModule,
vae: pl.LightningModule,
query_idx : int,
lr : float = 1e-4,
**kwargs
):
super(qLST, self).__init__()
self.query_idx = query_idx
self.lr = lr
self.latent_dim = vae.model.latent_dim
self.delta_weight = 0.25
self.classification_model = classification_model
self.vae = vae
self.classification_model.requires_grad_(False)
self.vae.requires_grad_(False)
self.encoder = self.vae.model.encoder
self.encoder.requires_grad_(False)
self.decoder = self.vae.model.decoder
self.decoder.requires_grad_(True)
self.num_classes = classification_model.num_classes
self.exerator = nn.Sequential(*[
Attention1D(self.latent_dim + self.num_classes + 1, 5, attn_p=0.1),
nn.Linear(self.latent_dim + self.num_classes + 1, self.latent_dim)
])
def forward(self, x, q):
mu, log_var = self.encoder(x)
z = mu
z_query = torch.cat((z, q), dim=1)
z_delta = self.exerator(z_query)
z_e_recon = self.decoder(z + z_delta)
z_e_class = self.classification_model(z_e_recon)
return z, z_delta, z_e_recon, z_e_class
def _run_step(self, x, q):
mu, log_var = self.encoder(x)
z = mu
z_query = torch.cat((z, q), dim=1)
z_delta = self.exerator(z_query)
z_e_recon = self.decoder(z + z_delta)
z_e_class = self.classification_model(z_e_recon)
return z, z_delta, z_e_recon, z_e_class
def step(self, batch, batch_idx):
x = batch['waveform']
self.classification_model.eval()
self.vae.eval()
self.encoder.eval()
# Run classification
q_orig = self.classification_model(x).sigmoid()
# Create random queries
q = torch.rand(q_orig[:, self.query_idx].shape).to(x.device)
# Calculate query diff for loss and concatenate query and classifier output
q_diff = (q_orig[:, self.query_idx] - q).abs()
q_orig = torch.cat((q_orig, q.unsqueeze(-1)), dim=1)
z, z_delta, z_e_recon, z_e_class = self._run_step(x, q_orig)
classification_loss = torch.functional.F.binary_cross_entropy_with_logits(z_e_class[:, self.query_idx], q, reduction='none')
delta_loss = torch.functional.F.mse_loss(x, z_e_recon, reduction='none').flatten(start_dim=1).sum(dim=1)
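        # Weight the reconstruction penalty by how close the random query is to the
        # original prediction (small q_diff -> larger weight), scaled by delta_weight,
        # so large changes to the signal are discouraged when little change was requested.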
weighted_delta_loss = delta_loss * (1 - q_diff + 0.01) * self.delta_weight
loss = (classification_loss + weighted_delta_loss).mean()
logs = {
"classification_loss": classification_loss.mean(),
"delta_loss": delta_loss.mean(),
"weighted_delta_loss": weighted_delta_loss.mean(),
"delta_size (mean)": abs(z_delta).sum(dim=-1).mean(),
"loss": loss,
}
return loss, logs
def training_step(self, batch, batch_idx):
loss, logs = self.step(batch, batch_idx)
self.log_dict(
{f"train_{k}": v for k, v in logs.items()}, on_step=True, on_epoch=False, prog_bar=True
)
return loss
def validation_step(self, batch, batch_idx):
loss, logs = self.step(batch, batch_idx)
logs = {f"val_{k}": v for k, v in logs.items()}
self.log_dict(logs)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.exerator.parameters(), lr=self.lr)
@staticmethod
def add_model_specific_args(parent_parser):
parser = ArgumentParser(parents=[parent_parser], add_help=False)
parser.add_argument("--lr", type=float, default=1e-6)
parser.add_argument("--batch_size", type=int, default=512)
parser.add_argument("--num_workers", type=int, default=4)
parser.add_argument("--data_dir", type=str, default=".")
return parser | 32.315534 | 132 | 0.609283 | 929 | 6,657 | 4.128095 | 0.221744 | 0.025033 | 0.014602 | 0.01043 | 0.265971 | 0.233638 | 0.191656 | 0.145763 | 0.136375 | 0.112125 | 0 | 0.011385 | 0.274298 | 6,657 | 206 | 133 | 32.315534 | 0.782447 | 0.197386 | 0 | 0.151261 | 0 | 0 | 0.027379 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084034 | false | 0 | 0.033613 | 0.008403 | 0.201681 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa7107a0781c87201fc539737982a5a5a90f460e | 1,084 | py | Python | programs_made/quicknotes/quicknotes.py | Rorima/exercicios-python | ca78e2d2402c2aa90efd95ccaa620c0a8b42444f | [
"MIT"
] | null | null | null | programs_made/quicknotes/quicknotes.py | Rorima/exercicios-python | ca78e2d2402c2aa90efd95ccaa620c0a8b42444f | [
"MIT"
] | null | null | null | programs_made/quicknotes/quicknotes.py | Rorima/exercicios-python | ca78e2d2402c2aa90efd95ccaa620c0a8b42444f | [
"MIT"
] | null | null | null | from tkinter.filedialog import *
import tkinter as tk
def saveFile():
    new_file = asksaveasfile(mode = 'w', filetypes = [('text files', '*.txt')])
if new_file is None:
return
text = str(entry.get(1.0, END))
new_file.write(text)
new_file.close()
def openFile():
    file = askopenfile(mode = 'r', filetypes = [('text files', '*.txt')])
if file is not None:
content = file.read()
try:
entry.insert(INSERT, content)
except UnboundLocalError:
pass
canvas = tk.Tk()
canvas.iconbitmap('quicknotes.ico')
canvas.geometry("600x400")
canvas.title("Quicknote")
canvas.config(bg = "white") # Background color
top = Frame(canvas)
top.pack(padx = 10, pady = 5, anchor = 'nw')
b1 = Button(canvas, text="Read", bg = "white", command = openFile)
b1.pack(in_ = top, side=LEFT)
b2 = Button(canvas, text="Save as", bg = "white", command = saveFile)
b2.pack(in_ = top, side=LEFT)
entry = Text(canvas,wrap = WORD, bg = "#eaeaea", font = ("poppins", 15))
entry.pack(padx = 10, pady = 5, expand = TRUE, fill = BOTH)
canvas.mainloop()
| 26.439024 | 77 | 0.634686 | 148 | 1,084 | 4.608108 | 0.547297 | 0.041056 | 0.049853 | 0.058651 | 0.158358 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023068 | 0.200185 | 1,084 | 40 | 78 | 27.1 | 0.763552 | 0.01476 | 0 | 0 | 0 | 0 | 0.096623 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.064516 | false | 0.032258 | 0.064516 | 0 | 0.16129 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa719a89fb11a8fb4e07f1cd828520067b645c0f | 681 | py | Python | alembic/versions/215a632cdf23_.py | reo7sp/vk-channelify | 06e513d8aef456bc91b927102d542fb444cf8502 | [
"MIT"
] | 21 | 2017-05-01T11:25:59.000Z | 2022-03-01T20:10:15.000Z | alembic/versions/215a632cdf23_.py | reo7sp/vk-channelify | 06e513d8aef456bc91b927102d542fb444cf8502 | [
"MIT"
] | 6 | 2017-05-06T01:55:30.000Z | 2018-06-27T20:00:26.000Z | alembic/versions/215a632cdf23_.py | reo7sp/vk-channelify | 06e513d8aef456bc91b927102d542fb444cf8502 | [
"MIT"
] | 3 | 2017-05-30T12:13:41.000Z | 2018-03-17T18:18:46.000Z | """empty message
Revision ID: 215a632cdf23
Revises: f5f69376d382
Create Date: 2017-07-09 17:04:22.228617
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '215a632cdf23'
down_revision = 'f5f69376d382'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('disabled_channels', sa.Column('channel_id', sa.String(), nullable=True))
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('disabled_channels', 'channel_id')
# ### end Alembic commands ###
| 23.482759 | 91 | 0.703377 | 84 | 681 | 5.595238 | 0.595238 | 0.057447 | 0.089362 | 0.097872 | 0.187234 | 0.187234 | 0.187234 | 0.187234 | 0 | 0 | 0 | 0.095238 | 0.167401 | 681 | 28 | 92 | 24.321429 | 0.733686 | 0.433186 | 0 | 0 | 0 | 0 | 0.223496 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa74d83c9f32c175963369b0b5f3e284bb6f0dbe | 2,651 | py | Python | efloras/patterns/count_patterns.py | rafelafrance/traiter_floras | 4599b58173fad55cb839934b35bed9fc6c483aa7 | [
"MIT"
] | null | null | null | efloras/patterns/count_patterns.py | rafelafrance/traiter_floras | 4599b58173fad55cb839934b35bed9fc6c483aa7 | [
"MIT"
] | 2 | 2020-11-04T21:13:46.000Z | 2020-11-05T17:57:36.000Z | efloras/patterns/count_patterns.py | rafelafrance/traiter_floras | 4599b58173fad55cb839934b35bed9fc6c483aa7 | [
"MIT"
] | null | null | null | """Common count snippets."""
from spacy import registry
from traiter import util as t_util
from traiter.actions import REJECT_MATCH
from traiter.const import CROSS
from traiter.const import SLASH
from traiter.patterns.matcher_patterns import MatcherPatterns
from ..pylib import const
NOT_COUNT_WORDS = CROSS + SLASH + """ average side times days weeks by """.split()
NOT_COUNT_ENTS = """ imperial_length metric_mass imperial_mass """.split()
DECODER = const.COMMON_PATTERNS | {
"adp": {"POS": {"IN": ["ADP"]}},
"count_suffix": {"ENT_TYPE": "count_suffix"},
"count_word": {"ENT_TYPE": "count_word"},
"not_count_ent": {"ENT_TYPE": {"IN": NOT_COUNT_ENTS}},
"not_count_word": {"LOWER": {"IN": NOT_COUNT_WORDS}},
"per_count": {"ENT_TYPE": "per_count"},
"subpart": {"ENT_TYPE": "subpart"},
}
# ####################################################################################
COUNT = MatcherPatterns(
"count",
on_match="efloras.count.v1",
decoder=DECODER,
patterns=[
"99-99 -* per_count?",
"99-99 per_count count_suffix?",
"per_count adp? 99-99 count_suffix?",
"( 99-99 count_suffix? ) per_count",
"99-99 - subpart",
],
)
@registry.misc(COUNT.on_match)
def count(ent):
"""Enrich the match with data."""
ent._.new_label = "count"
range_ = [t for t in ent if t.ent_type_ == "range"][0]
ent._.data = range_._.data
for key in ["min", "low", "high", "max"]:
if key in ent._.data:
ent._.data[key] = t_util.to_positive_int(ent._.data[key])
if ent._.data.get("range"):
del ent._.data["range"]
if pc := [e for e in ent.ents if e.label_ == "per_count"]:
pc = pc[0]
pc_text = pc.text.lower()
pc._.new_label = "count_group"
ent._.data["count_group"] = const.REPLACE.get(pc_text, pc_text)
# ####################################################################################
COUNT_WORD = MatcherPatterns(
"count_word",
on_match="efloras.count_word.v1",
decoder=DECODER,
patterns=[
"count_word",
],
)
@registry.misc(COUNT_WORD.on_match)
def count_word(ent):
ent._.new_label = "count"
word = [e for e in ent.ents if e.label_ == "count_word"][0]
word._.data = {"low": t_util.to_positive_int(const.REPLACE[word.text.lower()])}
# ####################################################################################
NOT_A_COUNT = MatcherPatterns(
"not_a_count",
on_match=REJECT_MATCH,
decoder=DECODER,
patterns=[
"99-99 not_count_ent",
"99-99 not_count_word 99-99? not_count_ent?",
"9 / 9",
],
)
| 29.455556 | 86 | 0.566201 | 336 | 2,651 | 4.190476 | 0.235119 | 0.076705 | 0.023438 | 0.025568 | 0.112216 | 0.03125 | 0.03125 | 0.03125 | 0.03125 | 0 | 0 | 0.018216 | 0.19238 | 2,651 | 89 | 87 | 29.786517 | 0.639421 | 0.018861 | 0 | 0.191176 | 0 | 0 | 0.253431 | 0.009005 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029412 | false | 0 | 0.102941 | 0 | 0.132353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa7732c06b366c9372958f4049925790f0b0da1d | 326 | py | Python | www/app/Personal_development/account/urls.py | yohei4/Django-Scraping | 1ac72b414025e703c21076d044b5b9b421f95049 | [
"MIT"
] | 1 | 2021-09-05T02:45:59.000Z | 2021-09-05T02:45:59.000Z | www/app/Personal_development/account/urls.py | yohei4/Django-Scraping | 1ac72b414025e703c21076d044b5b9b421f95049 | [
"MIT"
] | null | null | null | www/app/Personal_development/account/urls.py | yohei4/Django-Scraping | 1ac72b414025e703c21076d044b5b9b421f95049 | [
"MIT"
] | null | null | null | from django.urls import path
from . import views
app_name = 'account'
urlpatterns = [
path('home/', views.index, name='index'),
path('', views.Login.as_view(), name='login'),
path('newAccount', views.CreateAccount.as_view(), name='account'),
# path('newAccount/create', views.CreateAccount, name='create')
] | 27.166667 | 70 | 0.674847 | 40 | 326 | 5.425 | 0.45 | 0.101382 | 0.092166 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.141104 | 326 | 12 | 71 | 27.166667 | 0.775 | 0.187117 | 0 | 0 | 0 | 0 | 0.147727 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa78a639f370581c9fb0671f890276ad63369be2 | 11,955 | py | Python | package_management/package_management/doctype/package/package.py | anvilerp/package_management | ce7f4a13b84637d3f0e534a15e535a6ec45c092b | [
"Apache-2.0"
] | null | null | null | package_management/package_management/doctype/package/package.py | anvilerp/package_management | ce7f4a13b84637d3f0e534a15e535a6ec45c092b | [
"Apache-2.0"
] | null | null | null | package_management/package_management/doctype/package/package.py | anvilerp/package_management | ce7f4a13b84637d3f0e534a15e535a6ec45c092b | [
"Apache-2.0"
] | 2 | 2020-10-22T21:17:05.000Z | 2022-03-17T23:01:04.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2020, Lintec Tecnología and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
from . import fetch
import frappe
from frappe import _
from frappe.utils import now_datetime
import collections
import json
from frappe.model.document import Document
STATE_LEVELS = {
"origin": 1,
"transfer": 1,
"returned": 1,
"planned": 2,
"loaded": 2.1,
"transit": 2.2,
"completed": 3, # For the TransportationTrip
"delivered": 3,
"returned_carrier": 3,
"other": 3
}
@frappe.whitelist()
def quick_package_creation(customer, packages):
'''Method that takes care of validating and
	creating packages from the quick-creation dialog, given
	only the guide, customer and type for each row.'''
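	# Illustrative payload (hypothetical values): `packages` arrives as a JSON string
	# such as '[{"guide": "AB123", "type": "Box"}, {"guide": "", "type": ""}]';
	# rows with a missing or empty guide/type are filtered out below.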
packages = json.loads(packages)
# Filter empty or incomplete rows
packages = list(filter(
lambda x: "guide" in x
and "type" in x
and x["guide"] != ""
and x["type"] != "", packages))
if not any(packages):
frappe.msgprint(_("No packages values provided, try again"))
return
# Check if no packages is duplicate.
duplicates = frappe.db.get_all(
doctype='Package',
filters={
"guide": ['in'] + [p["guide"] for p in packages]
},
fields=["name", "guide"]
)
if any(duplicates):
duplicates = [p.name for p in duplicates]
frappe.throw(_("Duplicate packages {0}".format(", ".join(duplicates))))
else:
received_date = now_datetime()
origin = frappe.db.get_single_value(
'Package Management Settings',
'default_origin')
print("-----my-----", origin)
counter = 0
for p in packages:
counter += 1
doc = frappe.new_doc('Package')
doc.guide = p["guide"]
doc.type = p["type"]
doc.customer = customer
doc.received_date = received_date
doc.origin = origin
doc.save()
frappe.msgprint(_("Created {0} packages".format(counter)))
return 1
def fetch_package_info(packages=[]):
print("RUNNING FETCH PACKAGE")
if not any(packages):
packages = frappe.db.get_all(
doctype='Package',
filters={
'fetchable': True,
'to_fetch': True,
},
fields=['name', 'guide', 'customer'])
print("Fetching packages", packages)
# Get the different companies since the fetching is different
# Dictionary with customer name, customer_id
customers = {p.customer for p in packages}
customers = {
c: frappe.get_doc('Package Management Customer', c).customer_id
for c in customers
}
for customer, customer_id in customers.items():
tofetch = list(filter(lambda p: p.customer == customer, packages))
method = getattr(fetch, f'{customer_id}_fetch', False)
if callable(method):
# If there's actually a method let's get the whole
# package object and pass it to the fetch function.
tofetch = list(map(
lambda p: frappe.get_doc('Package', p.name), tofetch
))
method(tofetch)
return True
else:
print(f"No method fetch found for customer {customer}")
class Package(Document):
def fetch_package(self):
print("Calling fetch_package")
result = fetch_package_info([self])
if result:
return True
def can_be_fetched(self):
'''Determines if a package can be fetched or not, meaning a method
to fetch it exists.'''
customer = frappe.get_doc('Package Management Customer', self.customer)
customer_id = customer.customer_id
method = getattr(fetch, f'{customer_id}_fetch', False)
if method:
return True
else:
return False
def validate_check_dupliate(self):
same_guide = frappe.db.get_all(
doctype='Package',
filters={'guide': self.guide, 'name': ['!=', self.name]},
fields=['name', 'guide']
)
if same_guide:
# Check the current amended_from comes from the duplicate
duplicate = same_guide[0]
if self.amended_from == duplicate.name:
return None
else:
frappe.throw(_("The guide number {0} already exists on the system in the package {1}".format(self.guide, duplicate.name)))
def validate_delivery_date(self):
'''If no delivery date and package in END_STATES e.g level 3
		throw an exception; if the package is not in END_STATES,
		remove the delivery date.'''
if STATE_LEVELS[self.state] == 3 and not self.delivery_date:
frappe.throw(_(f"Set delivery date to set package as {0}".format(self.state)))
def validate_event_for_state(self):
events = [e.type for e in self.events]
if self.state not in events:
frappe.throw(_("To set the state as {0} and event of type {0} must be created first".format(self.state)))
def validate_no_duplicate_event_type_per_transporation_trip(self):
'''Check for no duplicate events type per transportation Trip'''
trans_trips = {e.transportation_trip for e in self.events
if e.transportation_trip}
for t in trans_trips:
events = [e.type for e in self.events
if e.transportation_trip == t]
if len(events) != len(set(events)):
frappe.throw(_("Duplicate event type for Trip {0} in Package {1}".format(t, self.name)))
def validate_no_duplicate_end_event_type_per_transporation_trip(self):
'''Check for no duplicate end events type per transportation Trip'''
events = self.events
trans_trips = {e.transportation_trip for e in self.events
if e.transportation_trip}
for t in trans_trips:
# Only capture end events, level 3
end_events = list(filter(lambda e: e.is_end_event and e.transportation_trip == t, events))
if len(end_events) > 1:
frappe.throw(_("Duplicate end event type for Trip {0} in Package {1}".format(t, self.name)))
def validate_sort_events(self):
'''Sort the child table events'''
sequence = range(1, len(self.events)+1)
sorted_events = self.events
sorted_events.sort(key=lambda x: x.date)
for e, i in zip(sorted_events, sequence):
e.idx = i
def before_save_delivery_or_return_event(self):
'''Deal with end event, in case is done manually'''
# TODO: Repurpose for a form button action, And a list action
if self.state in ['delivered', 'returned',
'returned_carrier', 'other']:
# Get all the event types
events = [e.type for e in self.events]
# If there's not an event for this state
# Let's create it otherwise do nothing
if self.state not in events:
# Set the proper date
if self.delivery_date:
# If delivery date is set
date = self.delivery_date
else:
date = now_datetime()
if self.state == 'delivered':
# Set the proper destination
destination = self.destination
else:
# If it was returned or to carrier just set the origin
destination = self.origin
self.append('events', {
'doctype': 'Package Event',
'type': self.state,
'origin': self.origin,
'date': date,
'destination': destination
})
def validate_create_origin_event(self):
# Check if there's an origin event
origin = [doc for doc in self.events if doc.type == 'origin']
if not origin:
# Create origin event when creating the package
self.append('events', {
'doctype': 'Package Event',
'type': 'origin',
'origin': self.origin,
'date': self.received_date,
'destination': self.origin
})
def validate_dates(self):
# Check if there's a received date
# If not set the now datetime
if self.delivery_date:
if self.received_date > self.delivery_date:
frappe.throw(_("Delivery date must be later than received date"))
def validate_update_state(self):
"""This method takes care of the state field logic,
		takes advantage of table elements being sorted already
and also sets delivery date if state is in END_STATES"""
db_state = frappe.db.get_value('Package', self.name, 'state')
# If state has been changed manually don't trigger
if self.state != db_state:
return
else:
last_item = max(self.events, key=lambda x: x.idx, default=0)
# Set delivery date as event date if no delivery date set
if STATE_LEVELS[last_item.type] == 3 and not self.delivery_date:
self.delivery_date = last_item.date
# Remove deliver date if new state is not in END_STATES
elif STATE_LEVELS[last_item.type] < 3 and self.delivery_date:
self.delivery_date = ''
# Set the state as the last item type
self.state = last_item.type
def validate_completed(self):
'''Method that takes care of the completed field logic
		which is automatic but can be overridden manually.'''
bs_self = self.get_doc_before_save()
if self.completed != bs_self.completed:
return
else:
last_item = max(self.events, key=lambda x: x.idx, default=0)
self.completed = True if STATE_LEVELS[last_item.type] == 3 else False
def autoname(self):
"""If field is new sets the name, if fields that set
the name have changed, renames"""
# Get the name the record should have
name = self.get_name()
		# If it doesn't exist, i.e. it is a new record
# Name it and end it
if not frappe.db.exists('Package', self.name):
self.name = name
return
else:
if self.name != self.get_name():
frappe.rename_doc("Package",
self.name, name, ignore_permissions=True)
def get_name(self):
'''Method that returns the record name'''
customer = self.customer
if len(customer) > 3:
customer = customer[0:3].upper()
return f"{customer}-{self.guide}"
def before_save(self):
pass
def after_insert(self):
'''Check if the package can be fetch, and set the proper status'''
if self.can_be_fetched():
self.fetchable = True
self.tofetch = True
else:
self.fetchable = False
self.tofetch = False
def validate(self):
self.validate_dates()
self.validate_check_dupliate()
self.validate_create_origin_event()
self.validate_no_duplicate_event_type_per_transporation_trip()
self.validate_no_duplicate_end_event_type_per_transporation_trip()
self.validate_sort_events()
self.validate_update_state()
self.validate_completed()
self.validate_event_for_state()
self.validate_delivery_date()
def on_update(self):
self.autoname()
| 36.898148 | 138 | 0.576495 | 1,444 | 11,955 | 4.643352 | 0.185596 | 0.034004 | 0.021477 | 0.012528 | 0.251305 | 0.193438 | 0.162714 | 0.132438 | 0.120507 | 0.081879 | 0 | 0.005759 | 0.331828 | 11,955 | 323 | 139 | 37.012384 | 0.833625 | 0.182267 | 0 | 0.209607 | 0 | 0 | 0.112151 | 0.002391 | 0 | 0 | 0 | 0.003096 | 0 | 1 | 0.091703 | false | 0.004367 | 0.034935 | 0 | 0.179039 | 0.030568 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa7c22db2aa85dc870c78c434e0299bc4cb05af4 | 412 | py | Python | Algorithms/Easy/836. Rectangle Overlap/answer.py | KenWoo/Algorithm | 4012a2f0a099a502df1e5df2e39faa75fe6463e8 | [
"Apache-2.0"
] | null | null | null | Algorithms/Easy/836. Rectangle Overlap/answer.py | KenWoo/Algorithm | 4012a2f0a099a502df1e5df2e39faa75fe6463e8 | [
"Apache-2.0"
] | null | null | null | Algorithms/Easy/836. Rectangle Overlap/answer.py | KenWoo/Algorithm | 4012a2f0a099a502df1e5df2e39faa75fe6463e8 | [
"Apache-2.0"
] | null | null | null | from typing import List
class Solution:
def isRectangleOverlap(self, rec1: List[int], rec2: List[int]) -> bool:
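        # Rectangles are given as [x1, y1, x2, y2]. Two rectangles do not overlap iff
        # one lies entirely to the left, right, above, or below the other; touching
        # edges does not count as overlap.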
return not (rec1[2] <= rec2[0] or
rec1[3] <= rec2[1] or
rec1[0] >= rec2[2] or
rec1[1] >= rec2[3])
if __name__ == "__main__":
s = Solution()
result = s.isRectangleOverlap([0, 0, 2, 2], [1, 1, 3, 3])
print(result)
| 25.75 | 75 | 0.509709 | 55 | 412 | 3.672727 | 0.472727 | 0.089109 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094545 | 0.332524 | 412 | 15 | 76 | 27.466667 | 0.64 | 0 | 0 | 0 | 0 | 0 | 0.019417 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.090909 | 0.090909 | 0.363636 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa7c9ab9791a98f305e55df1341fb91be253a9d2 | 2,957 | py | Python | vvc/detector/yolo_v3.py | vvc-unal/vvc | 9dd413bd99f60b41d4d33931b301aefaed42609d | [
"MIT"
] | null | null | null | vvc/detector/yolo_v3.py | vvc-unal/vvc | 9dd413bd99f60b41d4d33931b301aefaed42609d | [
"MIT"
] | null | null | null | vvc/detector/yolo_v3.py | vvc-unal/vvc | 9dd413bd99f60b41d4d33931b301aefaed42609d | [
"MIT"
] | null | null | null | '''
'''
import os
from keras import backend as K
import numpy as np
from PIL import Image
from yolo import YOLO
from yolo3.utils import letterbox_image
from vvc.config import model_folder
class YOLOV3(object):
'''
    Wrapper around a Keras YOLOv3 model for object detection.
'''
def __init__(self, model_name, body_name):
'''
Constructor
'''
self.model_name = model_name
config = {
"model_path": os.path.join(model_folder, model_name, 'weights.h5'),
"anchors_path": os.path.join(model_folder, model_name, 'anchors.txt'),
"classes_path": os.path.join(model_folder, model_name, 'classes.txt'),
'body_name': body_name
}
self.yolo = YOLO(**config)
def predict(self, frame):
image = Image.fromarray(frame)
if self.yolo.model_image_size != (None, None):
assert self.yolo.model_image_size[0]%32 == 0, 'Multiples of 32 required'
assert self.yolo.model_image_size[1]%32 == 0, 'Multiples of 32 required'
boxed_image = letterbox_image(image, tuple(reversed(self.yolo.model_image_size)))
else:
new_image_size = (image.width - (image.width % 32),
image.height - (image.height % 32))
boxed_image = letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
image_data = np.expand_dims(image_data, 0) # Add batch dimension.
out_boxes, out_scores, out_classes = self.yolo.sess.run(
[self.yolo.boxes, self.yolo.scores, self.yolo.classes],
feed_dict={
self.yolo.yolo_model.input: image_data,
self.yolo.input_image_shape: [image.size[1], image.size[0]],
K.learning_phase(): 0
})
final_bboxes = []
for i, c in reversed(list(enumerate(out_classes))):
predicted_class = self.yolo.class_names[c]
box = out_boxes[i]
score = out_scores[i]
top, left, bottom, right = box
top = max(0, np.floor(top + 0.5).astype('int32'))
left = max(0, np.floor(left + 0.5).astype('int32'))
bottom = min(image.size[1], np.floor(max(bottom, 0) + 0.5).astype('int32'))
right = min(image.size[0], np.floor(right + 0.5).astype('int32'))
assert top >= 0
assert left >= 0
assert bottom >= 0, "Box: {}, bottom{}".format(box, bottom)
assert right >= 0
bbox = {}
bbox['class'] = predicted_class
bbox['box'] = [left, top, right, bottom]
bbox['prob'] = score
final_bboxes.append(bbox)
return final_bboxes
def get_class_mapping(self):
return {k: v for k, v in enumerate(self.yolo.class_names)}
| 33.988506 | 93 | 0.553602 | 364 | 2,957 | 4.321429 | 0.293956 | 0.066116 | 0.033058 | 0.045772 | 0.195804 | 0.13096 | 0.064844 | 0.064844 | 0 | 0 | 0 | 0.026553 | 0.324992 | 2,957 | 87 | 94 | 33.988506 | 0.761523 | 0.014542 | 0 | 0 | 0 | 0 | 0.062391 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 1 | 0.050847 | false | 0 | 0.118644 | 0.016949 | 0.220339 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa7de1d9c0aeda7c014ca49bdae8ed05ada232bf | 3,192 | py | Python | examples/Testing_shapes.py | pauleveritt/arcade | a3f31bd74c17e18cdc95ed5f0b350459f31af29e | [
"MIT"
] | 1 | 2021-03-11T09:08:56.000Z | 2021-03-11T09:08:56.000Z | examples/Testing_shapes.py | pauleveritt/arcade | a3f31bd74c17e18cdc95ed5f0b350459f31af29e | [
"MIT"
] | null | null | null | examples/Testing_shapes.py | pauleveritt/arcade | a3f31bd74c17e18cdc95ed5f0b350459f31af29e | [
"MIT"
] | null | null | null | import arcade
def on_draw(delta_time):
""" Use this function to draw everything to the screen. """
# Start the render. This must happen before any drawing
# commands. We do NOT need an stop render command.
arcade.start_render()
# Draw shapes
on_draw.rectangle.draw()
on_draw.oval.draw()
on_draw.ellipse.draw()
on_draw.circle.draw()
on_draw.square.draw()
arcade.draw_all(shapes)
# update shape positions
on_draw.rectangle.update()
on_draw.oval.update()
on_draw.ellipse.update()
on_draw.circle.update()
on_draw.square.update()
arcade.update_all(shapes)
arcade.open_window("Drawing Example", 800, 600)
arcade.set_background_color(arcade.color.WHITE)
on_draw.rectangle = arcade.Rectangle(400, 100, 35, 50, arcade.color.PURPLE)
on_draw.rectangle.change_x = 3
on_draw.rectangle.change_y = 2
on_draw.oval = arcade.Oval(250, 250, 50, 25, arcade.color.ORANGE)
on_draw.oval.change_x = 1
on_draw.oval.change_y = -1
on_draw.ellipse = arcade.Ellipse(500, 0, 25, 50, arcade.color.COCONUT)
on_draw.ellipse.change_y = 2
on_draw.ellipse.change_angle = 15
on_draw.circle = arcade.Circle(350, 250, 15, arcade.color.BLUE)
on_draw.circle.change_x = 1
on_draw.square = arcade.Square(350, 150, 20, arcade.color.GREEN, 12, 12)
on_draw.square.change_angle = 20
on_draw.m_circle = arcade.Circle(700, 550, 18, arcade.color.CORNFLOWER_BLUE)
on_draw.m_circle.change_x = -2
on_draw.m_rectangle = arcade.Rectangle(400, 300, 27, 18,
arcade.color.KOMBU_GREEN)
on_draw.m_rectangle.change_x = 3
on_draw.m_rectangle.change_y = -3
on_draw.m_square = arcade.Square(50, 50, 27,
arcade.color.LANGUID_LAVENDER, 6, 45)
on_draw.m_square.change_y = 5
shapes = [on_draw.m_square, on_draw.m_rectangle, on_draw.m_circle]
on_draw.point = arcade.Point(90, 90, 25, arcade.color.FOREST_GREEN)
on_draw.point.change_y = .5
shapes.append(on_draw.point)
on_draw.text = arcade.Text("Hello!!", 250, 300, 100, arcade.color.CHESTNUT)
shapes.append(on_draw.text)
on_draw.triangle = arcade.Triangle(40, 99, 100, 50, 55, 150,
arcade.color.MAROON)
on_draw.triangle.change_x = 2
on_draw.triangle.change_y = 4
shapes.append(on_draw.triangle)
points = ([19, 24], [33, 107], [15, 66], [100, 75], [100, 90])
on_draw.polygon = arcade.Polygon(points, arcade.color.CYAN)
on_draw.polygon.change_x = 6
on_draw.polygon.change_y = 2
shapes.append(on_draw.polygon)
on_draw.parabola = arcade.Parabola(300, 450, 390, 50, arcade.color.INDIGO, 14)
on_draw.parabola.change_y = -2
on_draw.parabola.change_angle = 8
shapes.append(on_draw.parabola)
on_draw.line = arcade.Line(0, 0, 800, 800, arcade.color.AMAZON, 3)
on_draw.line.change_y = -2
shapes.append(on_draw.line)
on_draw.Arc = arcade.Arc(250, 250, 75, 100,
arcade.color.BRICK_RED, 0, 180, 0, 0)
on_draw.Arc.change_x = 0.5
on_draw.Arc.change_y = 0.5
on_draw.Arc.change_start_angle = .2
on_draw.Arc.change_end_angle = -.1
on_draw.Arc.change_tilt_angle = 3
shapes.append(on_draw.Arc)
arcade.schedule(on_draw, 1 / 80)
arcade.run()
# unnecessary if drawing with on_draw
# arcade.finish_render()
| 30.4 | 78 | 0.712406 | 531 | 3,192 | 4.067797 | 0.242938 | 0.175 | 0.032407 | 0.058333 | 0.11713 | 0.061111 | 0.024074 | 0 | 0 | 0 | 0 | 0.076063 | 0.159774 | 3,192 | 104 | 79 | 30.692308 | 0.729306 | 0.078008 | 0 | 0 | 0 | 0 | 0.007506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013889 | false | 0 | 0.013889 | 0 | 0.027778 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa804bfcb7ea56b5354bef4ef21186d57f75503f | 8,955 | py | Python | social/social/management/fake_content.py | onosendi/social | a2491c6fc37f935a9d44a1d9a3a9084310a84f28 | [
"Unlicense"
] | 10 | 2020-11-14T01:09:34.000Z | 2022-03-22T23:04:37.000Z | social/social/management/fake_content.py | thukaramvh/social | ca60df03def47559fc8863efe921714b8567d561 | [
"Unlicense"
] | null | null | null | social/social/management/fake_content.py | thukaramvh/social | ca60df03def47559fc8863efe921714b8567d561 | [
"Unlicense"
] | 9 | 2020-11-13T05:06:36.000Z | 2022-02-08T10:13:06.000Z | import datetime
import os
import pathlib
import shutil
from random import randint, randrange
from typing import Any, List
from faker import Faker
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import IntegrityError, transaction
from django.utils import timezone
from posts.models import Post
from users.models import Profile
User = get_user_model()
faker = Faker()
def random_items(count: int, items: List[Any]):
items_len = len(items)
count = items_len if count > items_len else count
result = []
while True:
if len(result) >= count:
break
random_item = items[randrange(items_len)]
if random_item not in result:
result.append(random_item)
return result
class ManageImages:
"""
Get random images from a bulk image directory (in) and copy them to another
directory (out).
In directory file paths:
male: BASE_DIR/media/fake/in/male
female: BASE_DIR/media/fake/in/female
banner: BASE_DIR/media/fake/in/banner
Out directory file paths:
male: BASE_DIR/media/fake/out/male
female: BASE_DIR/media/fake/out/female
banner: BASE_DIR/media/fake/out/banner
"""
def __init__(self, count):
self._count = count
def _concat_dir(self, dir_name: str, dir_type: str):
"""
param dir_name: male|female|banner
param dir_type: in|out
"""
concat_dir = os.path.join(
settings.BASE_DIR,
f"media/fake/{dir_type}/{dir_name}",
)
isdir = os.path.isdir(concat_dir)
if dir_type == "in" and isdir is False:
raise Exception(f'Directory "{concat_dir}" does not exist')
if dir_type == "out" and isdir is False:
pathlib.Path(concat_dir).mkdir(parents=True, exist_ok=True)
return concat_dir
def _copy_images_out(self, dir_name):
in_dir = self._concat_dir(dir_name, "in")
out_dir = self._concat_dir(dir_name, "out")
images = os.listdir(in_dir)
if not images:
raise Exception(f"No files found in: {in_dir}")
images_out = random_items(self._count, images)
for image in images_out:
in_image = os.path.join(in_dir, image)
out_image = os.path.join(out_dir, image)
shutil.copy(in_image, out_image)
def all_images(self):
self.banner_images()
self.female_images()
self.male_images()
def banner_images(self):
self._copy_images_out("banner")
def female_images(self):
self._copy_images_out("female")
def male_images(self):
self._copy_images_out("male")
def create_users(count: int = 100) -> None:
def get_sex():
sex = ["M", "F"]
return sex[randrange(2)]
for _ in range(count):
sex = get_sex()
if sex == "M":
first = faker.first_name_male()
else:
first = faker.first_name_female()
last = faker.last_name()
password = None
username = first.lower()
while True:
try:
with transaction.atomic():
email = f"{username}@testing.com"
user = User.objects.create_user(
name=f"{first} {last}",
username=username,
email=email,
password=password,
fake_account=True,
)
profile_data = {
"bio": faker.company(),
"location": f"{faker.city()}, {faker.state()}",
"sex": sex,
}
Profile.objects.filter(user_id=user.id).update(**profile_data)
except IntegrityError:
random_number = randint(0, 9)
username = f"{username}{random_number}"
else:
break
def create_posts():
users = User.objects.all()
for user in users:
post_number = randint(0, 15)
for _ in range(post_number):
Post.objects.create(
author=user,
body=faker.paragraph(),
)
def create_replies():
users = User.objects.all()
post_ids = Post.objects.filter(is_reply=False).values_list("id", flat=True)
post_ids_length = len(post_ids)
for user in users:
reply_number = randint(0, round(len(users) * 0.15))
for _ in range(reply_number):
id = post_ids[randint(0, post_ids_length - 1)]
parent = Post.objects.get(id=id)
Post.objects.create(
author=user,
body=faker.paragraph(),
is_reply=True,
parent=parent,
)
def create_reposts():
users = User.objects.all()
post_ids = Post.objects.filter(is_reply=False).values_list("id", flat=True)
post_ids_length = len(post_ids)
for user in users:
repost_number = randint(0, 3)
for _ in range(repost_number):
id = post_ids[randint(0, post_ids_length - 1)]
parent = Post.objects.get(id=id)
body = "" if randint(0, 1) else faker.paragraph()
Post.objects.create(
author=user,
body=body,
parent=parent,
)
def create_likes():
users = User.objects.all()
post_ids = Post.objects.values_list("id", flat=True)
post_ids_length = len(post_ids)
for user in users:
like_number = round(post_ids_length * 0.20)
for _ in range(like_number):
id = post_ids[randint(1, post_ids_length - 1)]
post = Post.objects.get(id=id)
post.liked.add(user)
def create_followers():
users = User.objects.all()
user_ids = User.objects.values_list("id", flat=True)
user_ids_length = len(user_ids)
for user in users:
follow_number = randint(0, round(user_ids_length * 0.20))
for _ in range(follow_number):
id = user_ids[randint(1, user_ids_length - 1)]
followed_user = User.objects.get(id=id)
user.follow(followed_user)
def randomize_timestamps():
posts = Post.objects.all()
for post in posts:
start_time = datetime.datetime(2019, 1, 1, 0, 0, 0)
end_time = datetime.datetime.now()
seconds_diff = round((end_time - start_time).total_seconds())
random_seconds = randrange(seconds_diff)
new_date = start_time + datetime.timedelta(seconds=random_seconds)
new_date = new_date.replace(tzinfo=timezone.get_default_timezone())
post.created_at = new_date
post.save()
def set_images():
base_dir = settings.BASE_DIR
male_img_dir = "fake/out/male"
female_img_dir = "fake/out/female"
users = User.objects.all()
for user in users:
if user.profile.sex:
profile_image_list = Profile.objects.values_list("image", flat=True)
used_image_list = []
for image in profile_image_list:
image_split = image.split("/")
filename = image_split.pop()
if filename:
used_image_list.append(filename)
if user.profile.sex == "M":
sex_img_dir = male_img_dir
else:
sex_img_dir = female_img_dir
image_dir = os.path.join(base_dir, "media", sex_img_dir)
dir_image_list = os.listdir(image_dir)
available_images = list(set(dir_image_list) - set(used_image_list))
random_image = available_images[randrange(len(available_images))]
if random_image:
user.profile.image = os.path.join(sex_img_dir, random_image)
user.profile.save()
def set_banners():
base_dir = settings.BASE_DIR
banner_dir = "fake/out/banner"
users = User.objects.all()
for user in users:
banner_image_list = Profile.objects.values_list("banner", flat=True)
used_image_list = []
for image in banner_image_list:
image_split = image.split("/")
filename = image_split.pop()
if filename:
used_image_list.append(filename)
image_dir = os.path.join(base_dir, "media", banner_dir)
dir_image_list = os.listdir(image_dir)
available_images = list(set(dir_image_list) - set(used_image_list))
random_image = available_images[randrange(len(available_images))]
if random_image:
user.profile.banner = os.path.join(banner_dir, random_image)
user.profile.save()
def set_user_data():
m = ManageImages(100)
m.all_images()
create_users()
create_posts()
create_replies()
create_reposts()
create_likes()
create_followers()
randomize_timestamps()
set_images()
set_banners()
| 31.64311 | 82 | 0.591066 | 1,122 | 8,955 | 4.482175 | 0.159537 | 0.022271 | 0.019089 | 0.026447 | 0.385365 | 0.360708 | 0.284947 | 0.276198 | 0.177968 | 0.177968 | 0 | 0.007091 | 0.307091 | 8,955 | 282 | 83 | 31.755319 | 0.803384 | 0.050251 | 0 | 0.279279 | 0 | 0 | 0.036571 | 0.00938 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085586 | false | 0.009009 | 0.058559 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa80d031504251b54bb60e43819965bd44935246 | 495 | py | Python | 1_diena/1_diena_matematikas_uzdevumi.py | IngaBertule/patstavigie_darbi_IB | 25e041bf7c61d1405564c07aad0d789e24189b76 | [
"MIT"
] | null | null | null | 1_diena/1_diena_matematikas_uzdevumi.py | IngaBertule/patstavigie_darbi_IB | 25e041bf7c61d1405564c07aad0d789e24189b76 | [
"MIT"
] | null | null | null | 1_diena/1_diena_matematikas_uzdevumi.py | IngaBertule/patstavigie_darbi_IB | 25e041bf7c61d1405564c07aad0d789e24189b76 | [
"MIT"
] | null | null | null | # 1. uzdevums
# Given two sides of a rectangle, 4 and 7, calculate the area of the rectangle.
mala_A = 4
mala_B = 7
laukums = (mala_A * mala_B)
print("Taisnstūra laukums ir: ",laukums)
# Exercise 2
# Given a temperature of 21 degrees Celsius, what will it be in Fahrenheit?
celsius = 27
fahrenheit = (celsius * 9/5) + 32
print("Fārenheiti:", fahrenheit)
# Exercise 3
# Given a circle with diameter 7, calculate the circumference of the circle.
import math
garums = round(7 * math.pi, 2)
print("Riņķa līnijas garums = ", garums) | 22.5 | 65 | 0.723232 | 72 | 495 | 4.916667 | 0.597222 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.043902 | 0.171717 | 495 | 22 | 66 | 22.5 | 0.819512 | 0.448485 | 0 | 0 | 0 | 0 | 0.213483 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.1 | 0 | 0.1 | 0.3 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa810ac4aeadbd1290dd8dcb3e2ee98a3ac7bed6 | 8,638 | py | Python | src/analytics/model.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | 2 | 2018-02-27T10:45:36.000Z | 2018-03-23T11:40:47.000Z | src/analytics/model.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | null | null | null | src/analytics/model.py | Ematrix163/Dublin_bikes | ab0e39548e5cee36c7f7a21a722520f213f54e4e | [
"MIT"
] | null | null | null | import pandas as pd
import datetime
from sklearn.ensemble import RandomForestRegressor
from sklearn.externals import joblib
import json
import getpass
from db import getconfig
from sqlalchemy import create_engine
class model():
def __init__(self,from_data=False, from_pikl=False):
if from_data == True:
self.trainModel()
elif from_pikl==True:
try:
self.features = self.loadFeatures()
except:
print('missing model feature docs. Training new model..')
self.trainModel()
try:
self.clf = joblib.load('analytics/model.pikl')
except:
print('Missing .pikl file. Building model from data instead.')
self.trainModel()
def trainModel(self):
print('Collecting data: ')
df_all = self.getandpreprocess()
#we don't want these features in our X dataframe
cols = [col for col in df_all.columns if col not in ['dt','time', 'index', 'id', 'icon','description', 'main', 'status','available_bikes','bike_stands','available_bike_stands','target', 'day', 'hour', 'number']]
print('Training model..')
from sklearn.ensemble import RandomForestRegressor
self.clf=RandomForestRegressor(max_depth=100).fit(df_all[cols], df_all['target'])
print('Saving model to pikl....')
#save model to a pikl file
self.piklData('analytics/model.pikl')
#save model features in json format
print('Writing model feature names to file')
f=open('analytics/modelfeatures','w')
f.write(json.dumps({"features":cols}))
f.close()
def loadFeatures(self):
'''Load saved model features from disk'''
features = json.load(open('analytics/modelfeatures'))
print(features)
return features['features']
def getandpreprocess(self):
'''Download data, clean and merge it into one table that can be used to train the model'''
#set up connection and download db resources
params = getconfig.getConfig()
connstring = 'mysql+pymysql://'+params['user']+':'+params['passw']+'@'+params['host']+'/dublinbikes'
engine = create_engine(connstring)
df_bikes=pd.read_sql_table(table_name='dynamic_bikes', con=engine)
df_bikes = df_bikes.drop(['index'], 1)
df_weather1=pd.read_sql_table(table_name='weather', con=engine)
#resample this first weather table so that we have a value for every hour.
print('Resampling weather data..')
df_weather1['dt']=pd.to_datetime(df_weather1['dt'], unit='s')
df_weather1.set_index('dt', inplace=True)
df_weather1=df_weather1.resample('H').ffill()
#load second weather table
df_weather2=pd.read_sql_table(table_name='dublin_weather', con=engine)
df_weather2['dt']=pd.to_datetime(df_weather2['dt'], unit='s')
def auto_truncate(val):
return val[:20]
#load old weather table and clip all of the strings that are longer than Varchar(20)
df_old_weather = pd.read_csv('analytics/dublin_weather.csv', converters={'weather.description': auto_truncate})
print('Cleaning weather tables')
#rename columns in old weather table
df_old_weather['dt']=pd.to_datetime(df_old_weather['dt'], unit='s')
df_old_weather['temp']=df_old_weather['main.temp']
df_old_weather['temp_min']=df_old_weather['main.temp_min']
df_old_weather['humidity']=df_old_weather['main.humidity']
df_old_weather['temp_max']=df_old_weather['main.temp_max']
df_old_weather['pressure']=df_old_weather['main.pressure']
df_old_weather['wind_speed']=df_old_weather['wind.speed']
df_old_weather['wind_deg']=df_old_weather['wind.deg']
df_old_weather['description']=df_old_weather['weather.description']
df_old_weather['icon']=df_old_weather['weather.icon']
df_old_weather['main']=df_old_weather['weather.main']
df_old_weather = df_old_weather[['dt', 'temp', 'humidity', 'temp_min', 'temp_max', 'pressure', 'wind_speed', 'wind_deg', 'description', 'icon', 'main']]
print('Concatenating weather tables')
#Splice weather tables together
df_weather = df_weather1.append([df_weather2, df_old_weather])
print('Cleaning bike data')
#extract times information from bikes
df_bikes['time']=df_bikes['time']//1000
df_bikes['dt']=pd.to_datetime(df_bikes['time'], unit='s')
df_bikes['hour']=df_bikes['dt'].dt.hour
df_bikes['day']=df_bikes['dt'].dt.dayofweek
df_bikes['month']=df_bikes['dt'].dt.month
df_bikes['monthday']=df_bikes['dt'].dt.day
df_bikes=df_bikes.drop(['dt','time'], 1)
#extract time information from weather
df_weather['hour']=df_weather['dt'].dt.hour
df_weather['day']=df_weather['dt'].dt.dayofweek
df_weather['month']=df_weather['dt'].dt.month
df_weather['monthday']=df_weather['dt'].dt.day
#merge tables on the time information
print('Merging tables')
df_all = pd.merge(df_bikes, df_weather, on=['month', 'monthday', 'hour', 'day'], how='inner')
#create a target feature
df_all['target']=df_all['bike_stands']-df_all['available_bike_stands']
#create dummy features for all categorical features
features_to_concat = [df_all]
print('Creating dummy features')
for feature in ['description','main', 'hour', 'day', 'number']:
features_to_concat.append(pd.get_dummies(df_all[feature], prefix=feature))
df_all = pd.concat(features_to_concat, axis=1)
#return the new df
return df_all
def piklData(self, fileLocation):
'''Save the model to a pikl file that can be reloaded'''
#save data to a pikl
from sklearn.externals import joblib
joblib.dump(self.clf, fileLocation)
def predict(self, object):
'''Make a prediction, given a dictionary of data points.'''
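# Illustrative call -- the keys and values below are hypothetical; the accepted feature
# names depend on the columns the model was trained with (see loadFeatures()):
# m = model(from_pikl=True)
# m.predict({"number": 42, "hour": 9, "day": 2, "temp": 281.5, "description": "light rain"})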
#preprocess the object so we can predict from it
# then make a prediction
row={}
print(len(self.features))
for feature in self.features:
#add empty columns to the row dictionary
row[feature]=0
for feature in object:
if feature in self.features and feature not in ['description', 'main']:
row[feature]=object[feature]
elif feature in ['description','main','hour','day','number']:
try:
row[feature+'_'+str(object[feature])]+=1
#if we encounter a new value for categorical features, record it in the error log file. This is a pretty rubbish fix, but will work for now. We can check this error log to see if new weather descriptions have been encountered. The next time we build the model, it will automatically include these new descriptions as dummies anyway, so all is not lost.
except (KeyError, IndexError):
f=open('modelerrorlog.log','a')
f.write('encountered new value for '+str(feature)+' : '+str(object[feature]))
f.close()
#convert dictionary to dataframe
row = pd.DataFrame([row], columns=row.keys())
return self.clf.predict(row)[0]
def predictMass(self, d):
new_dict = {}
#create an empty dictionary with every value set to 0
for feature in self.features:
new_dict[feature]=[0 for i in range(len(d['day']))]
for feature in d:
if feature in self.features and feature not in ['description', 'main', 'hour', 'day', 'number']:
new_dict[feature]=d[feature]
else:
for index, f in enumerate(d[feature]):
try:
new_dict[feature + '_' + str(f)][index]=1
except (KeyError, IndexError):
filename=open('modelerrorlog.log','a')
filename.write('encountered new value for '+str(feature)+' : '+str(f))
filename.close()
#convert to dictionary
df = pd.DataFrame(new_dict, columns=new_dict.keys())
return [value for value in self.clf.predict(df)]
if __name__ == '__main__':
m = model(from_data=True)
| 38.052863 | 368 | 0.616925 | 1,101 | 8,638 | 4.681199 | 0.233424 | 0.054327 | 0.060536 | 0.018626 | 0.216919 | 0.106131 | 0.090221 | 0.075863 | 0.0617 | 0.050446 | 0 | 0.004851 | 0.26013 | 8,638 | 226 | 369 | 38.221239 | 0.801596 | 0.162422 | 0 | 0.185185 | 0 | 0 | 0.187152 | 0.016129 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059259 | false | 0.014815 | 0.074074 | 0.007407 | 0.177778 | 0.103704 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa84e7c3bcf13c30448910c33847cef11d42b6f7 | 3,377 | py | Python | arca/task.py | encukou/arca | edc3e81d27a5c194da10d54402923c27085e0e96 | [
"MIT"
] | 6 | 2017-09-25T00:43:01.000Z | 2018-09-05T07:59:08.000Z | arca/task.py | encukou/arca | edc3e81d27a5c194da10d54402923c27085e0e96 | [
"MIT"
] | 41 | 2017-10-05T21:10:11.000Z | 2019-09-10T16:48:22.000Z | arca/task.py | encukou/arca | edc3e81d27a5c194da10d54402923c27085e0e96 | [
"MIT"
] | 2 | 2019-12-09T15:12:17.000Z | 2019-12-09T20:00:53.000Z | import hashlib
import json
from typing import Optional, Any, Dict, Iterable
from cached_property import cached_property
from entrypoints import EntryPoint, BadEntryPoint
from .exceptions import TaskMisconfigured
class Task:
""" A class for defining tasks the run in the repositories. The task is defined by an entry point,
timeout (5 seconds by default), arguments and keyword arguments.
The class uses :class:`entrypoints.EntryPoint` to load the callables.
As opposed to :class:`EntryPoint <entrypoints.EntryPoint>`, only objects are allowed, not modules.
Let's presume we have this function in a package ``library.module``:
.. code-block:: python
def ret_argument(value="Value"):
return value
This Task would return the default value:
>>> Task("library.module:ret_argument")
These two Tasks would return an overridden value:
>>> Task("library.module:ret_argument", args=["Overridden value"])
>>> Task("library.module:ret_argument", kwargs={"value": "Overridden value"})
"""
def __init__(self, entry_point: str, *,
timeout: int=5,
args: Optional[Iterable[Any]]=None,
kwargs: Optional[Dict[str, Any]]=None) -> None:
try:
self._entry_point = EntryPoint.from_string(entry_point, "task")
except BadEntryPoint:
raise TaskMisconfigured("Incorrectly defined entry point.")
if self._entry_point.object_name is None:
raise TaskMisconfigured("Task entry point must be an object, not a module.")
try:
self._timeout = int(timeout)
if self._timeout < 1:
raise ValueError
except ValueError:
raise TaskMisconfigured("Provided timeout could not be converted to int.")
try:
self._args = list(args or [])
self._kwargs = dict(kwargs or {})
except (TypeError, ValueError):
raise TaskMisconfigured("Provided arguments cannot be converted to list or dict.")
if not all([isinstance(x, str) for x in self._kwargs.keys()]):
raise TaskMisconfigured("Keywords must be strings")
try:
assert isinstance(self.json, str)
except (AssertionError, ValueError):
raise TaskMisconfigured("Provided arguments are not JSON-serializable") from None
@property
def entry_point(self):
return self._entry_point
@property
def args(self):
return self._args
@property
def kwargs(self):
return self._kwargs
@property
def timeout(self):
return self._timeout
def __repr__(self):
return f"Task({self.entry_point})"
@cached_property
def json(self):
return json.dumps(self.serialized)
@cached_property
def serialized(self):
import arca
return {
"version": arca.__version__,
"entry_point": {
"module_name": self._entry_point.module_name,
"object_name": self._entry_point.object_name
},
"args": self._args,
"kwargs": self._kwargs
}
@cached_property
def hash(self):
""" Returns a SHA1 hash of the Task for usage in cache keys.
"""
return hashlib.sha256(bytes(self.json, "utf-8")).hexdigest()
| 30.7 | 102 | 0.629553 | 389 | 3,377 | 5.33162 | 0.323907 | 0.062681 | 0.047252 | 0.031823 | 0.127772 | 0.057377 | 0.041466 | 0 | 0 | 0 | 0 | 0.003281 | 0.278057 | 3,377 | 109 | 103 | 30.981651 | 0.847416 | 0.243115 | 0 | 0.166667 | 0 | 0 | 0.134623 | 0.009674 | 0 | 0 | 0 | 0 | 0.030303 | 1 | 0.136364 | false | 0 | 0.106061 | 0.090909 | 0.378788 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa873b99c7a1bf55d0cc52761a9d5d4adf2ff779 | 4,154 | py | Python | test/test_cvt.py | jtpils/optimesh | 24a8276235b1f4e86f2fb92cf814bf81e7fdbc48 | [
"MIT"
] | 1 | 2019-11-20T16:50:34.000Z | 2019-11-20T16:50:34.000Z | test/test_cvt.py | jtpils/optimesh | 24a8276235b1f4e86f2fb92cf814bf81e7fdbc48 | [
"MIT"
] | null | null | null | test/test_cvt.py | jtpils/optimesh | 24a8276235b1f4e86f2fb92cf814bf81e7fdbc48 | [
"MIT"
] | null | null | null | import numpy
import pytest
from scipy.spatial import Delaunay
import helpers
import optimesh
from meshes import pacman, simple1
@pytest.mark.parametrize(
"mesh, ref1, ref2, refi",
[
(simple1, 4.9863354526224510, 2.1181412069258942, 1.0),
(pacman, 1.9378501813564521e03, 7.5989359705818785e01, 5.0),
],
)
def test_cvt_lloyd(mesh, ref1, ref2, refi):
X, cells = mesh()
X, cells = optimesh.cvt.quasi_newton_uniform_lloyd(
X, cells, 1.0e-2, 100, verbose=False
)
# Assert that we're dealing with the mesh we expect.
helpers.assert_norms(X, [ref1, ref2, refi], 1.0e-12)
return
@pytest.mark.parametrize(
"mesh, ref1, ref2, refi",
[
(simple1, 4.9959407761650168e00, 2.1203672449514870e00, 1.0),
(pacman, 1.9367454827286492e03, 7.5966311532153185e01, 5.0),
],
)
def test_cvt_lloyd2(mesh, ref1, ref2, refi):
X, cells = mesh()
X, cells = optimesh.cvt.quasi_newton_uniform_lloyd(X, cells, 1.0e-2, 100, omega=2.0)
# Assert that we're dealing with the mesh we expect.
helpers.assert_norms(X, [ref1, ref2, refi], 1.0e-12)
return
@pytest.mark.parametrize(
"mesh, ref1, ref2, refi",
[
(simple1, 4.9957677170205690e00, 2.1203267741647247e00, 1.0),
(pacman, 1.9368767962050219e03, 7.5956311011221615e01, 5.0),
],
)
def test_cvt_qnb(mesh, ref1, ref2, refi):
X, cells = mesh()
X, cells = optimesh.cvt.quasi_newton_uniform_blocks(X, cells, 1.0e-2, 100)
# Assert that we're dealing with the mesh we expect.
helpers.assert_norms(X, [ref1, ref2, refi], 1.0e-12)
return
@pytest.mark.parametrize(
"mesh, ref1, ref2, refi",
[
(simple1, 4.9971490009329251e00, 2.1206501666066013e00, 1.0),
(pacman, 1.9384829418092067e03, 7.5992721059144543e01, 5.0),
],
)
def test_cvt_qnf(mesh, ref1, ref2, refi):
X, cells = mesh()
X, cells = optimesh.cvt.quasi_newton_uniform_full(X, cells, 1.0e-2, 100, omega=0.9)
# Assert that we're dealing with the mesh we expect.
helpers.assert_norms(X, [ref1, ref2, refi], 1.0e-12)
return
def create_random_circle(n, radius, seed=None):
k = numpy.arange(n)
boundary_pts = radius * numpy.column_stack(
[numpy.cos(2 * numpy.pi * k / n), numpy.sin(2 * numpy.pi * k / n)]
)
# Compute the number of interior nodes such that all triangles can be somewhat
# equilateral.
edge_length = 2 * numpy.pi * radius / n
domain_area = numpy.pi - n * (
radius ** 2 / 2 * (edge_length - numpy.sin(edge_length))
)
cell_area = numpy.sqrt(3) / 4 * edge_length ** 2
target_num_cells = domain_area / cell_area
# Euler:
# 2 * num_points - num_boundary_edges - 2 = num_cells
# <=>
# num_interior_points ~= 0.5 * (num_cells + num_boundary_edges) + 1 - num_boundary_points
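# Rough sanity check of the formula above: with n = 50 boundary points and a target of
# ~1000 cells, m = int(0.5 * (1000 + 50) + 1 - 50) = 476 interior points.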
m = int(0.5 * (target_num_cells + n) + 1 - n)
# Generate random points in circle;
# <http://mathworld.wolfram.com/DiskPointPicking.html>.
# Choose the seed such that the fully smoothened mesh has no random boundary points.
if seed is not None:
numpy.random.seed(seed)
r = numpy.random.rand(m)
alpha = 2 * numpy.pi * numpy.random.rand(m)
interior_pts = numpy.column_stack(
[numpy.sqrt(r) * numpy.cos(alpha), numpy.sqrt(r) * numpy.sin(alpha)]
)
pts = numpy.concatenate([boundary_pts, interior_pts])
tri = Delaunay(pts)
# pts = numpy.column_stack([pts[:, 0], pts[:, 1], numpy.zeros(pts.shape[0])])
return pts, tri.simplices
# This test iterates over a few meshes that produce weird situations that previously made
# the methods choke. Mostly bugs in GhostedMesh.
@pytest.mark.parametrize("seed", [0, 4, 20])
def test_for_breakdown(seed):
numpy.random.seed(seed)
n = numpy.random.randint(10, 20)
pts, cells = create_random_circle(n=n, radius=1.0)
optimesh.cvt.quasi_newton_uniform_lloyd(
pts, cells, omega=1.0, tol=1.0e-10, max_num_steps=10
)
return
if __name__ == "__main__":
test_cvt_lloyd(pacman, 1939.1198108068188, 75.94965207932323, 5.0)
# test_cvt_lloyd(simple1, 4.985355657854027, 2.1179164560036154, 1.0)
| 30.321168 | 93 | 0.660087 | 598 | 4,154 | 4.456522 | 0.280936 | 0.036023 | 0.054034 | 0.04803 | 0.347467 | 0.321951 | 0.304315 | 0.295685 | 0.295685 | 0.278799 | 0 | 0.152181 | 0.21064 | 4,154 | 136 | 94 | 30.544118 | 0.660567 | 0.213288 | 0 | 0.303371 | 0 | 0 | 0.030769 | 0 | 0 | 0 | 0 | 0 | 0.044944 | 1 | 0.067416 | false | 0 | 0.067416 | 0 | 0.202247 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa8bc3b64fd6eb8042bdcee513e87de12b7b92c8 | 1,534 | py | Python | setup.py | nabakirov/drf_mixin_tools | ff5b0131ef07f1612ef191262a5f8bfebd044a66 | [
"MIT"
] | null | null | null | setup.py | nabakirov/drf_mixin_tools | ff5b0131ef07f1612ef191262a5f8bfebd044a66 | [
"MIT"
] | null | null | null | setup.py | nabakirov/drf_mixin_tools | ff5b0131ef07f1612ef191262a5f8bfebd044a66 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
from setuptools import setup
name = 'drf_mixin_tools'
package = 'drf_mixin_tools'
description = 'Collection of helpfull tools for drf'
url = 'https://github.com/nabakirov/drf_mixin_tools'
author = 'Nursultan Abakirov'
author_email = 'nabakirov@gmail.com'
license = 'MIT'
version = '0.0.3'
if sys.argv[-1] == 'publish':
if os.system("pip freeze | grep wheel"):
print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
sys.exit()
os.system("python setup.py sdist upload")
os.system("python setup.py bdist_wheel upload")
print("You probably want to also tag the version now:")
print(" git tag -a {0} -m 'version {0}'".format(version))
print(" git push --tags")
sys.exit()
setup(
name=name,
version=version,
url=url,
license=license,
description=description,
author=author,
author_email=author_email,
packages=['drf_mixin_tools'],
package_data={'drf_mixin_tools': []},
install_requires=[
'Django>=2.0.4',
'djangorestframework>=3.8.2',
],
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Environment :: Web Environment',
'Framework :: Django',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Natural Language :: English',
'Programming Language :: Python :: 3',
'Topic :: Internet :: WWW/HTTP',
]
)
| 27.392857 | 73 | 0.627771 | 186 | 1,534 | 5.091398 | 0.548387 | 0.042239 | 0.068638 | 0.042239 | 0.044351 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012626 | 0.225554 | 1,534 | 55 | 74 | 27.890909 | 0.784512 | 0.027379 | 0 | 0.042553 | 0 | 0 | 0.500671 | 0.01745 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.06383 | 0 | 0.06383 | 0.085106 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa8d36c1a691df2ae19b17f9a8d869a212152e9c | 5,948 | py | Python | contentcuration/contentcuration/view/settings_views.py | benjaoming/content-curation | e1cf371c1a0df2fad20e6d5ffd3eafc016b6f642 | [
"MIT"
] | null | null | null | contentcuration/contentcuration/view/settings_views.py | benjaoming/content-curation | e1cf371c1a0df2fad20e6d5ffd3eafc016b6f642 | [
"MIT"
] | null | null | null | contentcuration/contentcuration/view/settings_views.py | benjaoming/content-curation | e1cf371c1a0df2fad20e6d5ffd3eafc016b6f642 | [
"MIT"
] | null | null | null | import json
import math
from django.shortcuts import render, redirect
from django.conf import settings as ccsettings
from django.contrib.auth.decorators import login_required
from django.contrib.auth import views
from django.utils.translation import ugettext as _
from django.views.generic.edit import FormView
from contentcuration.forms import ProfileSettingsForm, AccountSettingsForm, PreferencesSettingsForm
from rest_framework.authtoken.models import Token
from django.core.urlresolvers import reverse_lazy
from contentcuration.api import check_supported_browsers
@login_required
def settings(request):
if not check_supported_browsers(request.META['HTTP_USER_AGENT']):
return redirect(reverse_lazy('unsupported_browser'))
if not request.user.is_authenticated():
return redirect('accounts/login')
return redirect('settings/profile')
class ProfileView(FormView):
"""
Base class for user settings views.
"""
success_url = reverse_lazy('profile_settings')
form_class = ProfileSettingsForm
template_name = 'settings/profile.html'
def get(self, request, *args, **kwargs):
if not self.request.user.is_authenticated():
return redirect('/accounts/login')
return super(ProfileView, self).get(request, *args, **kwargs)
def get_context_data(self, **kwargs):
context = super(ProfileView, self).get_context_data(**kwargs)
context.update({"page": "profile", 'channel_name': False, "success": False})
return context
def get_initial(self):
initial = self.initial.copy()
initial.update({'first_name': self.request.user.first_name, 'last_name': self.request.user.last_name})
return initial
def form_valid(self, form):
form.save(self.request.user)
context = self.get_context_data(form=form)
context.update({'success': True})
return self.render_to_response(context)
def form_invalid(self, form):
return self.render_to_response(self.get_context_data(form=form))
def user(self):
return self.request.user
class PreferencesView(FormView):
"""
Base class for user settings views.
"""
success_url = reverse_lazy('preferences_settings')
form_class = PreferencesSettingsForm
template_name = 'settings/preferences.html'
def get(self, request, *args, **kwargs):
if not self.request.user.is_authenticated():
return redirect('/accounts/login')
return super(PreferencesView, self).get(request, *args, **kwargs)
def get_context_data(self, **kwargs):
context = super(PreferencesView, self).get_context_data(**kwargs)
context.update({"page": "preferences", "success": False})
return context
def get_initial(self):
initial = self.initial.copy()
initial.update(json.loads(self.request.user.preferences))
initial.update({
'm_value': initial.get('m_value') or 1,
'n_value': initial.get('n_value') or 1,
})
return initial
def form_valid(self, form):
form.save(self.request.user)
context = self.get_context_data(form=form)
context.update({'success': True})
return self.render_to_response(context)
def form_invalid(self, form):
return self.render_to_response(self.get_context_data(form=form))
def user(self):
return self.request.user
@login_required
def account_settings(request):
if not request.user.is_authenticated():
return redirect('/accounts/login')
return views.password_change(
request,
template_name='settings/account.html',
post_change_redirect=reverse_lazy('account_settings_success'),
password_change_form=AccountSettingsForm,
extra_context={"current_user": request.user, "page": "account"}
)
@login_required
def account_settings_success(request):
return views.password_change(
request,
template_name='settings/account_success.html',
post_change_redirect=reverse_lazy('account_settings_success'),
password_change_form=AccountSettingsForm,
extra_context={"current_user": request.user, "page": "account"}
)
@login_required
def tokens_settings(request):
if not request.user.is_authenticated():
return redirect('/accounts/login')
user_token, isNew = Token.objects.get_or_create(user=request.user)
return render(request, 'settings/tokens.html', {"current_user": request.user,
"page": "tokens",
"tokens": [str(user_token)]})
@login_required
def storage_settings(request):
storage_used = request.user.get_space_used()
storage_percent = (min(storage_used / float(request.user.disk_space), 1) * 100)
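# Figures below are converted from bytes to MB (1048576 = 1024 ** 2 bytes) and reported
# as percentages of the user's disk_space quota.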
breakdown = [{
"name": k.capitalize(),
"size":"%.2f" % (float(v)/1048576),
"percent": "%.2f" % (min(float(v) / float(request.user.disk_space), 1) * 100)
} for k,v in request.user.get_space_used_by_kind().items()]
return render(request, 'settings/storage.html', {"current_user": request.user,
"page": "storage",
"percent_used": "%.2f" % storage_percent,
"used": "%.2f" % (float(storage_used) / 1048576),
"total": "%.2f" % (float(request.user.disk_space) / 1048576),
"available": "%.2f" % (request.user.get_available_space() / 1048576),
"breakdown": breakdown,
"request_email": ccsettings.SPACE_REQUEST_EMAIL,
})
| 39.131579 | 121 | 0.631473 | 645 | 5,948 | 5.629457 | 0.2 | 0.069678 | 0.03718 | 0.029744 | 0.564858 | 0.534839 | 0.518315 | 0.502341 | 0.479758 | 0.448912 | 0 | 0.009957 | 0.257061 | 5,948 | 151 | 122 | 39.390728 | 0.811722 | 0.011937 | 0 | 0.470588 | 0 | 0 | 0.110503 | 0.028224 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0.033613 | 0.10084 | 0.042017 | 0.504202 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa8da1e8c7cf63277a8c57066f67320d517494bf | 14,608 | py | Python | merlin/datasets/entertainment/movielens/dataset.py | bschifferer/models-1 | b6042dbd1b98150cc50fd7d2cb6c07033f42fd35 | [
"Apache-2.0"
] | null | null | null | merlin/datasets/entertainment/movielens/dataset.py | bschifferer/models-1 | b6042dbd1b98150cc50fd7d2cb6c07033f42fd35 | [
"Apache-2.0"
] | null | null | null | merlin/datasets/entertainment/movielens/dataset.py | bschifferer/models-1 | b6042dbd1b98150cc50fd7d2cb6c07033f42fd35 | [
"Apache-2.0"
] | null | null | null | import logging
import os
from pathlib import Path
from typing import Optional, Union
import numpy as np
import pandas as pd
import merlin.io
# Get dataframe library - cuDF or pandas
from merlin.core.dispatch import get_lib
from merlin.core.utils import download_file
from merlin.datasets import BASE_PATH
from merlin.models.utils.example_utils import workflow_fit_transform
from merlin.models.utils.nvt_utils import require_nvt
df_lib = get_lib()
logging.basicConfig()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
try:
import nvtabular as nvt
Workflow = nvt.Workflow
except ImportError:
Workflow = None
def get_movielens(
path: Union[str, Path] = None,
variant="ml-25m",
overwrite: bool = False,
transformed_name: str = "transformed",
nvt_workflow: Optional[Workflow] = None,
**kwargs,
):
"""Gets the movielens dataset for use with merlin-models
This function will return a tuple of train/test merlin.io.Dataset objects for the
movielens dataset. This will download the movielens dataset locally if needed,
and run a ETL pipeline with NVTabular to make this dataset ready for use with
merlin-models.
Parameters
----------
path : str
The path to download the files locally to. If not set will default to
the 'merlin-models-data` directory in your home folder
variant : "ml-25m" or "ml-100k"
Which variant of the movielens dataset to use. Must be either "ml-25m" or "ml-100k"
Returns
-------
tuple
A tuple consisting of a merlin.io.Dataset for the training dataset and validation dataset
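Examples
--------
A minimal sketch, assuming NVTabular is installed and the dataset can be downloaded
and transformed locally:
>>> train, valid = get_movielens(variant="ml-25m")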
"""
require_nvt()
if path is None:
p = Path(BASE_PATH) / "movielens"
else:
p = Path(path)
raw_path = p / variant
if not raw_path.exists():
download_movielens(p, variant)
nvt_path = raw_path / transformed_name
train_path, valid_path = nvt_path / "train", nvt_path / "valid"
nvt_path_exists = train_path.exists() and valid_path.exists()
if not nvt_path_exists or overwrite:
transform_movielens(
raw_path, nvt_path, nvt_workflow=nvt_workflow, variant=variant, **kwargs
)
train = merlin.io.Dataset(str(train_path), engine="parquet")
valid = merlin.io.Dataset(str(valid_path), engine="parquet")
return train, valid
def download_movielens(path: Union[str, Path], variant: str = "ml-25m"):
"""Downloads the movielens dataset to the specified path
Parameters
----------
path : str
The path to download the files locally to. If not set will default to
the 'merlin-models-data` directory in your home folder
variant : "ml-25m" or "ml-100k"
Which variant of the movielens dataset to use. Must be either "ml-25m", "ml-1m" or "ml-100k"
"""
download_file(
f"http://files.grouplens.org/datasets/movielens/{variant}.zip",
os.path.join(path, f"{variant}.zip"),
)
def transform_movielens(
raw_data_path: Union[str, Path],
output_path: Union[str, Path],
nvt_workflow: Optional[Workflow] = None,
variant: str = "ml-25m",
**kwargs,
):
"""
Transforms the movielens dataset to be ready for use with merlin-models
Parameters
----------
raw_data_path: Union[str, Path]
The path to the raw data
output_path: Union[str, Path]
The path to save the transformed data
nvt_workflow: Optional[Workflow]
The NVTabular workflow to use for the transformation.
If not set, will use the default.
variant: str
The variant of the movielens dataset to use.
Must be either "ml-25m", "ml-1m" or "ml-100k"
"""
if nvt_workflow:
_nvt_workflow = nvt_workflow
else:
if variant == "ml-25m":
_nvt_workflow = default_ml25m_transformation(**locals())
elif variant == "ml-1m":
_nvt_workflow = default_ml1m_transformation(**locals())
elif variant == "ml-100k":
_nvt_workflow = default_ml100k_transformation(**locals())
else:
raise ValueError(
"Unknown dataset name. Only Movielens 25M, 1M and 100k datasets are supported."
)
workflow_fit_transform(
_nvt_workflow,
os.path.join(raw_data_path, "train.parquet"),
os.path.join(raw_data_path, "valid.parquet"),
str(output_path),
)
def default_ml25m_transformation(raw_data_path: str, **kwargs):
from nvtabular import ops
movies = df_lib.read_csv(os.path.join(raw_data_path, "movies.csv"))
movies["genres"] = movies["genres"].str.split("|")
movies.to_parquet(os.path.join(raw_data_path, "movies_converted.parquet"))
ratings = df_lib.read_csv(os.path.join(raw_data_path, "ratings.csv"))
# shuffle the dataset
ratings = ratings.sample(len(ratings), replace=False)
# split the train_df as training and validation data sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(os.path.join(raw_data_path, "train.parquet"))
valid.to_parquet(os.path.join(raw_data_path, "valid.parquet"))
logger.info("starting ETL..")
# NVTabular pipeline
movies = df_lib.read_parquet(os.path.join(raw_data_path, "movies_converted.parquet"))
joined = ["userId", "movieId"] >> ops.JoinExternal(movies, on=["movieId"])
cat_features = joined >> ops.Categorify(dtype="int32")
label = nvt.ColumnSelector(["rating"])
# Columns to apply to
cats = nvt.ColumnSelector(["movieId"])
# Target Encode movieId column
te_features = cats >> ops.TargetEncoding(label, kfold=5, p_smooth=20)
te_features_norm = te_features >> ops.Normalize() >> ops.TagAsItemFeatures()
# count encode `userId`
count_logop_feat = (
["userId"]
>> ops.JoinGroupby(cont_cols=["movieId"], stats=["count"])
>> ops.LogOp()
>> ops.TagAsUserFeatures()
)
feats_item = cat_features["movieId"] >> ops.AddMetadata(tags=["item_id", "item"])
feats_user = cat_features["userId"] >> ops.AddMetadata(tags=["user_id", "user"])
feats_genres = cat_features["genres"] >> ops.ValueCount() >> ops.TagAsItemFeatures()
feats_target = (
nvt.ColumnSelector(["rating"])
>> ops.LambdaOp(lambda col: (col > 3).astype("int32"))
>> ops.AddMetadata(tags=["binary_classification", "target"])
>> nvt.ops.Rename(name="rating_binary")
)
target_orig = (
["rating"]
>> ops.LambdaOp(lambda col: col.astype("float32"))
>> ops.AddMetadata(tags=["regression", "target"])
)
return nvt.Workflow(
feats_item
+ feats_user
+ feats_genres
+ te_features_norm
+ count_logop_feat
+ target_orig
+ feats_target
+ joined["title"]
)
def default_ml1m_transformation(raw_data_path: str, **kwargs):
from nvtabular import ops
users = pd.read_csv(
os.path.join(raw_data_path, "users.dat"),
sep="::",
names=["userId", "gender", "age", "occupation", "zipcode"],
)
ratings = pd.read_csv(
os.path.join(raw_data_path, "ratings.dat"),
sep="::",
names=["userId", "movieId", "rating", "timestamp"],
)
movies = pd.read_csv(
os.path.join(raw_data_path, "movies.dat"),
names=["movieId", "title", "genres"],
sep="::",
encoding="latin1",
)
movies["genres"] = movies["genres"].str.split("|")
movies.to_parquet(os.path.join(raw_data_path, "movies_converted.parquet"))
users.to_parquet(os.path.join(raw_data_path, "users_converted.parquet"))
ratings = ratings.sample(len(ratings), replace=False)
# split the train_df as training and validation data sets.
num_valid = int(len(ratings) * 0.2)
train = ratings[:-num_valid]
valid = ratings[-num_valid:]
train.to_parquet(os.path.join(raw_data_path, "train.parquet"))
valid.to_parquet(os.path.join(raw_data_path, "valid.parquet"))
logger.info("starting ETL..")
movies = df_lib.read_parquet(os.path.join(raw_data_path, "movies_converted.parquet"))
users = df_lib.read_parquet(os.path.join(raw_data_path, "users_converted.parquet"))
joined = (
["userId", "movieId"]
>> ops.JoinExternal(movies, on=["movieId"])
>> ops.JoinExternal(users, on=["userId"])
)
cat = lambda: nvt.ops.Categorify(dtype="int32") # noqa
cat_features = joined >> cat()
label = nvt.ColumnSelector(["rating"])
# Columns to apply to
cats = nvt.ColumnSelector(["movieId", "userId"])
# Target Encode movieId column
te_features = cats + joined[["age", "gender", "occupation", "zipcode"]] >> ops.TargetEncoding(
label, kfold=5, p_smooth=20
)
te_features_norm = te_features >> ops.Normalize()
# count encode `userId`
# count_logop_feat = (
# ["userId"] >> ops.JoinGroupby(cont_cols=["movieId"], stats=["count"]) >> ops.LogOp()
# )
feats_item = cat_features["movieId"] >> ops.AddMetadata(tags=["item_id", "item"])
feats_userId = cat_features["userId"] >> ops.AddMetadata(tags=["user_id", "user"])
feats_genres = cat_features["genres"] >> ops.ValueCount() >> ops.TagAsItemFeatures()
feats_te_user = te_features_norm[
[
"TE_userId_rating",
"TE_age_rating",
"TE_gender_rating",
"TE_occupation_rating",
"TE_zipcode_rating",
]
] >> ops.AddMetadata(tags=["user"])
feats_te_item = te_features_norm[["TE_movieId_rating"]] >> ops.AddMetadata(tags=["item"])
# feats_user = joined[["age", "gender", "occupation", "zipcode"]] >> ops.AddMetadata(
# tags=["item"]
# )
feats_target = (
nvt.ColumnSelector(["rating"])
>> ops.LambdaOp(lambda col: (col > 3).astype("int32"))
>> ops.AddMetadata(tags=["binary_classification", "target"])
>> nvt.ops.Rename(name="rating_binary")
)
target_orig = (
["rating"]
>> ops.LambdaOp(lambda col: col.astype("float32"))
>> ops.AddMetadata(tags=["regression", "target"])
)
return nvt.Workflow(
cat_features
+ te_features_norm
+ feats_te_user
+ feats_te_item
+ feats_item
+ feats_userId
+ feats_genres
+ feats_target
+ target_orig
)
def default_ml100k_transformation(raw_data_path: str, **kwargs):
from nvtabular import ops
logger.info("starting ETL..")
# ratings = pd.read_csv(
# os.path.join(raw_data_path, "u.data"),
# names=["userId", "movieId", "rating", "timestamp"],
# sep="\t",
# )
user_features = pd.read_csv(
os.path.join(raw_data_path, "u.user"),
names=["userId", "age", "gender", "occupation", "zip_code"],
sep="|",
)
user_features.to_parquet(os.path.join(raw_data_path, "user_features.parquet"))
cols = [
"movieId",
"title",
"release_date",
"video_release_date",
"imdb_URL",
"unknown",
"Action",
"Adventure",
"Animation",
"Childrens", # noqa
"Comedy",
"Crime",
"Documentary",
"Drama",
"Fantasy",
"Film_Noir",
"Horror",
"Musical",
"Mystery",
"Romance",
"Sci-Fi",
"Thriller",
"War",
"Western",
]
genres_ = [
"unknown",
"Action",
"Adventure",
"Animation",
"Childrens",
"Comedy",
"Crime",
"Documentary",
"Drama",
"Fantasy",
"Film_Noir",
"Horror",
"Musical",
"Mystery",
"Romance",
"Sci-Fi",
"Thriller",
"War",
"Western",
]
movies = pd.read_csv(
os.path.join(raw_data_path, "u.item"), names=cols, sep="|", encoding="latin1"
)
for col in genres_:
movies[col] = movies[col].replace(1, col)
movies[col] = movies[col].replace(0, np.nan)
s = movies[genres_]
s.notnull()
movies["genres"] = s.notnull().dot(s.columns + ",").str[:-1]
movies_converted = movies[
["movieId", "title", "release_date", "video_release_date", "genres", "imdb_URL"]
]
movies_converted.to_parquet(os.path.join(raw_data_path, "movies_converted.parquet"))
train = pd.read_csv(
os.path.join(raw_data_path, "ua.base"),
names=["userId", "movieId", "rating", "timestamp"],
sep="\t",
)
valid = pd.read_csv(
os.path.join(raw_data_path, "ua.test"),
names=["userId", "movieId", "rating", "timestamp"],
sep="\t",
)
train = train.merge(user_features, on="userId", how="left")
train = train.merge(movies_converted, on="movieId", how="left")
valid = valid.merge(user_features, on="userId", how="left")
valid = valid.merge(movies_converted, on="movieId", how="left")
train.to_parquet(os.path.join(raw_data_path, "train.parquet"))
valid.to_parquet(os.path.join(raw_data_path, "valid.parquet"))
cat = lambda: nvt.ops.Categorify(dtype="int32") # noqa
cont_names = ["age"]
boundaries = {"age": [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]}
age_bucket = cont_names >> ops.Bucketize(boundaries) >> cat() >> ops.AddMetadata(tags=["user"])
label = nvt.ColumnSelector(["rating"])
# Target Encode movieId column
te_features = ["movieId"] >> ops.TargetEncoding(label, kfold=5, p_smooth=20)
te_features_norm = te_features >> ops.Normalize()
# count encode `userId`
count_logop_feat = (
["userId"] >> ops.JoinGroupby(cont_cols=["movieId"], stats=["count"]) >> ops.LogOp()
)
feats_item = ["movieId"] >> cat() >> ops.TagAsItemID()
feats_user = ["userId"] >> cat() >> ops.TagAsUserID()
feats_genres = ["genres"] >> cat() >> ops.ValueCount() >> ops.TagAsItemFeatures()
user_features = ["gender", "zip_code"] >> cat() >> ops.TagAsUserFeatures()
feats_target = (
nvt.ColumnSelector(["rating"])
>> ops.LambdaOp(lambda col: (col > 3).astype("int32"))
>> ops.AddMetadata(tags=["binary_classification", "target"])
>> nvt.ops.Rename(name="rating_binary")
)
target_orig = ["rating"] >> ops.AddMetadata(tags=["regression", "target"])
return nvt.Workflow(
feats_item
+ feats_user
+ feats_genres
+ te_features_norm
+ count_logop_feat
+ user_features
+ target_orig
+ feats_target
+ age_bucket
+ ["title"]
)
| 32.390244 | 100 | 0.618086 | 1,758 | 14,608 | 4.953925 | 0.154721 | 0.025721 | 0.039155 | 0.03881 | 0.624871 | 0.571478 | 0.556321 | 0.496383 | 0.480767 | 0.469744 | 0 | 0.010154 | 0.238157 | 14,608 | 450 | 101 | 32.462222 | 0.772396 | 0.15649 | 0 | 0.432099 | 0 | 0 | 0.158529 | 0.02062 | 0 | 0 | 0 | 0 | 0 | 1 | 0.018519 | false | 0 | 0.052469 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa8f5306dc27eb7ce2788a4b63ffef5cdba13f38 | 530 | py | Python | nautilus/validators/__init__.py | LeptoSpira/nautilus-chambers | 5aafd9eb599ed35d3e90c3ef7b84a25d28e60922 | [
"MIT"
] | 1 | 2020-05-12T03:01:58.000Z | 2020-05-12T03:01:58.000Z | nautilus/validators/__init__.py | LeptoFlare/nautilus-chambers | 5aafd9eb599ed35d3e90c3ef7b84a25d28e60922 | [
"MIT"
] | 13 | 2020-05-05T01:06:01.000Z | 2020-07-19T07:17:31.000Z | nautilus/validators/__init__.py | LeptoFlare/nautilus-chambers | 5aafd9eb599ed35d3e90c3ef7b84a25d28e60922 | [
"MIT"
] | 1 | 2019-08-16T02:35:17.000Z | 2019-08-16T02:35:17.000Z | from pydantic import ValidationError
from . import errors
from . import models
def validate_profileinput(profile_raw):
"""Validate ProfileInput user input data."""
profile, errs = None, []
try:
profile = models.ProfileInput(**profile_raw).dict()
except ValidationError as e:
errs = e.errors()
return profile, errs
def validate_discord(snowflake):
"""Validate a discord user id to make sure it follows some simple requirements."""
return snowflake.isdigit() and len(snowflake) > 10
| 26.5 | 86 | 0.703774 | 64 | 530 | 5.765625 | 0.59375 | 0.054201 | 0.119241 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004739 | 0.203774 | 530 | 19 | 87 | 27.894737 | 0.869668 | 0.216981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.25 | 0 | 0.583333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa8ffd583190bc873aa9f1770bd9a7ed4cf4137e | 3,250 | py | Python | dmscripts/email_engine/typing.py | Crown-Commercial-Service/digitalmarketplace-scripts | 1c75b674a294e51600fc32b3d6ed4372a2e7d727 | [
"MIT"
] | 1 | 2020-06-23T01:55:31.000Z | 2020-06-23T01:55:31.000Z | dmscripts/email_engine/typing.py | alphagov/digitalmarketplace-scripts | f92138016b7375836dfd14aa3ffcc4553bce63f9 | [
"MIT"
] | 267 | 2015-10-12T12:43:52.000Z | 2021-08-19T10:38:55.000Z | dmscripts/email_engine/typing.py | Crown-Commercial-Service/digitalmarketplace-scripts | 1c75b674a294e51600fc32b3d6ed4372a2e7d727 | [
"MIT"
] | 7 | 2015-11-11T16:47:41.000Z | 2021-04-10T18:03:04.000Z | """Types and classes for typing Notify calls
We want to be able to use static typing on email_engine so that coding mistakes
can be caught before run time.
Also, we create some classes to help with saving a notification to a file and
reading it back into memory in a human readable fashion.
"""
from ast import literal_eval
from typing import Callable, Dict, Generator, Union
from dmutils.email.helpers import hash_string
class EmailNotification(dict):
"""A typed, hashable, serder-able, frozen dict subclass
This class packages the arguments to to
NotificationsAPIClient.send_email_notification()
It supports the following behaviours we need to support email_engine
functionality:
- compare two notifications to remove duplicates
- allow using notifications as keys to a dictionary
- write and read a human-readable string representation
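A minimal round-trip sketch (the address, template id and personalisation values are hypothetical):
>>> n = EmailNotification(email_address="user@example.com", template_id="t-123", personalisation={"name": "Sam"})
>>> EmailNotification.from_str(str(n)) == n
True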
"""
def __init__(
self,
*,
email_address: str,
template_id: str,
personalisation: Dict[str, str] = None
):
super().__init__(
email_address=email_address,
template_id=template_id,
personalisation=personalisation,
)
def __setitem__(self, key: str, value: str) -> None:
raise RuntimeError("EmailNotification instances are frozen")
def __hash__(self) -> int: # type: ignore[override] # noqa: F821
# dicts are usually unhashable, but we want to use EmailNotifications
# as the key to another dict, so we cheat and find the hash of the
# string representation. The order of keys is going to be important for
# this, so we make it explicit
return (
dict(
email_address=self["email_address"],
template_id=self["template_id"],
personalisation=self["personalisation"],
)
.__repr__()
.__hash__()
)
@classmethod
def from_str(cls, s: str) -> "EmailNotification":
"""Parse a dict literal representation of a notification"""
return cls(**literal_eval(s))
@property
def sha256_hash(self) -> str:
# Calculate the SHA256 hash of the string representation. This is reproducible and allows us to generate a
# unique reference for an email that can be stored in our logs and checked to see an email's status
# The order of keys is important for the hash, so make it explicit
return (
hash_string(
str(
dict(
email_address=self["email_address"],
template_id=self["template_id"],
personalisation=self["personalisation"],
)
)
)
)
class NotificationResponse(dict):
@classmethod
def from_str(cls, s: str) -> "NotificationResponse":
"""parse a dict literal representation of a NotificationResponse"""
return cls(**literal_eval(s))
NotificationsGenerator = Generator[EmailNotification, None, None]
NotificationsGeneratorFunction = Callable[..., NotificationsGenerator]
Notifications = Union[NotificationsGenerator, NotificationsGeneratorFunction]
| 34.574468 | 114 | 0.645231 | 369 | 3,250 | 5.547425 | 0.411924 | 0.041036 | 0.023449 | 0.032242 | 0.212995 | 0.14851 | 0.14851 | 0.087934 | 0.087934 | 0.087934 | 0 | 0.003881 | 0.286462 | 3,250 | 93 | 115 | 34.946237 | 0.878827 | 0.412308 | 0 | 0.27451 | 0 | 0 | 0.082837 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117647 | false | 0 | 0.058824 | 0.039216 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa91747b89bd00509193e57d90f6d255c24d1e80 | 5,283 | py | Python | auto_drive/rule_drive/sign_lane/rsign.py | YingshuLu/self-driving-formula-racing | 0c45030c9f761a1e38abf7fc3957244389bb1165 | [
"MIT"
] | null | null | null | auto_drive/rule_drive/sign_lane/rsign.py | YingshuLu/self-driving-formula-racing | 0c45030c9f761a1e38abf7fc3957244389bb1165 | [
"MIT"
] | null | null | null | auto_drive/rule_drive/sign_lane/rsign.py | YingshuLu/self-driving-formula-racing | 0c45030c9f761a1e38abf7fc3957244389bb1165 | [
"MIT"
] | null | null | null | import cv2
import sys
import math
import numpy as np
ROI_THRESHOLD=[10, 100, 200]
def flatten(img):
r, g, b = cv2.split(img)
r_filter = (r == np.maximum(np.maximum(r, g), b)) & (r >= 120) & (g < 150) & (b < 150)
g_filter = (g == np.maximum(np.maximum(r, g), b)) & (g >= 120) & (r < 150) & (b < 150)
b_filter = (b == np.maximum(np.maximum(r, g), b)) & (b >= 120) & (r < 150) & (g < 150)
y_filter = ((r >= 128) & (g >= 128) & (b < 100))
r[y_filter], g[y_filter] = 255, 255
b[np.invert(y_filter)] = 0
b[b_filter], b[np.invert(b_filter)] = 255, 0
r[r_filter], r[np.invert(r_filter)] = 255, 0
g[g_filter], g[np.invert(g_filter)] = 255, 0
flattened = cv2.merge((r, g, b))
return flattened
def _mask(img):
ga = cv2.GaussianBlur(img, (5,5), 0)
rgb = flatten(img)
b, g, r = cv2.split(rgb)
mask = cv2.threshold(r, 200, 255, cv2.THRESH_BINARY)[1]
blur = cv2.blur(mask, (5,5))
mask = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)[1]
# cv2.imshow("mask", mask)
return mask
def r_mask(img):
color_low = np.array([10, 10, 120])
color_high =np.array([70, 60, 200])
ga = cv2.GaussianBlur(img, (5,5), 0)
mask = cv2.inRange(ga, color_low, color_high)
blur = cv2.blur(mask, (5,5))
mask = cv2.threshold(blur, 127, 255, cv2.THRESH_BINARY)[1]
return mask
def draw_box(img, locs):
# print("draw box locs:", locs)
max_x = locs[0][0]
max_y = locs[0][1]
min_x = locs[1][0]
min_y = locs[1][1]
if max_x < 0 or min_x < 0 or max_y < 0 or min_y < 0:
return
img = cv2.rectangle(img, (max_x, max_y), (min_x, min_y), (0, 255, 0), 1)
cv2.imshow("box", img)
# cv2.waitKey(1)
def get_rectangle_locs(contour):
h, w, l = contour.shape
locs = contour.reshape((h, l))
x_locs = locs[0:h, 0]
y_locs = locs[0:h, 1]
max_x = np.max(x_locs)
max_y = np.max(y_locs)
min_x = np.min(x_locs)
min_y = np.min(y_locs)
return np.array([[max_x, max_y], [min_x, min_y]])
def locs_distance(loc1, loc2):
d = loc1 - loc2
d = d * d
d = math.sqrt(np.sum(d))
return d
def locs_filter(mask, locs):
h, w = mask.shape[:2]
max_x = locs[0]
max_y = locs[1]
min_x = locs[2]
min_y = locs[3]
xd = locs[0] - locs[2]
yd = locs[1] - locs[3]
# print("height/3:", h/3, "weight/3:", h/3)
# print("xd:", xd, "yd:", yd)
if xd > h*2/3 or xd > w/3 or xd < 6 or yd < 6:
return [-1, -1, -1, -1]
ratio = 0.2
xd = max_x - min_x
yd = max_y - min_y
max_x = min(max_x + int(ratio*xd), h)
if min_x - int(ratio*xd) > 0:
min_x = min_x - int(ratio*xd)
else:
min_x = 0
max_y = min(max_y + int(ratio*yd), w)
if min_y - int(ratio*yd) > 0:
min_y = min_y - int(ratio*yd)
else:
min_y = 0
return locs
def detect(img, sen = 0):
mask = _mask(img)
binary, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
sum = 0
if len(contours) < 1:
return False, mask
for i in range(len(contours)):
sum += cv2.contourArea(contours[i])
nums = np.sum(mask != 0)
#print(">>> ROI area:", sum)
return sum >= ROI_THRESHOLD[sen], mask
def location(mask):
h, w = mask.shape[:2]
mask_locs = np.array([[0,0], [0,0]])
mask_locs1 = np.array([[h,w],[h,w]])
diagonal = locs_distance(mask_locs,mask_locs1)
binary, contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
num = len(contours)
#print("len contours:", len(contours))
if num == 0:
return [-1, -1, -1, -1]
elif num == 1:
locs = get_rectangle_locs(contours[0])
return locs_filter(mask, [locs[0,0], locs[0,1], locs[1,0], locs[1,1]])
area = []
for i in range(len(contours)):
area.append(cv2.contourArea(contours[i]))
area_copy = area[:]
max_id = np.argmax(area_copy)
locs0 = get_rectangle_locs(contours[max_id])
dist = []
for i in range(len(area)):
locs = get_rectangle_locs(contours[i])
dist.append(locs_distance(locs0, locs))
dist_copy = dist[:]
del dist_copy[max_id]
d = min(dist_copy)
if d > diagonal/8:
return locs_filter(mask, [locs[0,0], locs[0,1], locs[1,0], locs[1,1]])
locs1 = get_rectangle_locs(contours[dist.index(d)])
locs = np.concatenate((locs0, locs1), axis=0)
x_locs = locs[:, 0]
y_locs = locs[:, 1]
max_x = np.max(x_locs)
max_y = np.max(y_locs)
min_x = np.min(x_locs)
min_y = np.min(y_locs)
#print("upper point:", [max_x, max_y])
#print("down point:", [min_x, min_y])
return locs_filter(mask,[max_x, max_y, min_x, min_y])
def debug_draw_box(img):
detected, mask = detect(img)
print("contains sign ROI, need recognize?", detected)
if not detected:
return
locs = location(mask)
draw_box(img, locs)
if __name__ == '__main__':
filename = sys.argv[1]
img = cv2.imread(filename)
cv2.imshow("original", img)
detected, mask = detect(img)
print("contains sign ROI, need recognize?", detected)
if not detected:
exit()
locs = location(mask)
draw_box(img, locs)
cv2.waitKey(60000)
| 25.399038 | 95 | 0.573727 | 885 | 5,283 | 3.270057 | 0.153672 | 0.022115 | 0.005183 | 0.011057 | 0.372495 | 0.307187 | 0.291983 | 0.233587 | 0.228058 | 0.214927 | 0 | 0.061609 | 0.256483 | 5,283 | 207 | 96 | 25.521739 | 0.675153 | 0.054514 | 0 | 0.273973 | 0 | 0 | 0.017466 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068493 | false | 0 | 0.027397 | 0 | 0.19863 | 0.013699 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa9253597f3f9261092bf2641db581d03cf9fbda | 350 | py | Python | tectool/detailed_alignment.py | zavolanlab/TECtool | c03f9310159f729f6007697ef16a456f7280905f | [
"MIT"
] | 5 | 2019-10-28T14:37:12.000Z | 2021-07-08T14:13:40.000Z | tectool/detailed_alignment.py | zavolanlab/TECtool | c03f9310159f729f6007697ef16a456f7280905f | [
"MIT"
] | 4 | 2019-10-29T21:58:42.000Z | 2021-06-08T15:56:44.000Z | tectool/detailed_alignment.py | zavolanlab/TECtool | c03f9310159f729f6007697ef16a456f7280905f | [
"MIT"
] | 2 | 2021-02-18T09:26:38.000Z | 2021-12-12T15:00:51.000Z | class DetailedAlignment:
""" This class represents a detailed alignment """
def __init__(self, aln):
self.aln = aln # this will be dropped in the future, so that memory does not blow up
self.number_of_S = 0
self.split_event_list = list()
self.regions_set = set()
self.spans_regions_boarder = False
| 29.166667 | 93 | 0.654286 | 48 | 350 | 4.541667 | 0.75 | 0.06422 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003906 | 0.268571 | 350 | 11 | 94 | 31.818182 | 0.847656 | 0.322857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa957eeb7285dbdfc36d349393584e6858d7364e | 3,897 | py | Python | blog/views.py | alinecrsouza/django-blog-app | 5ec837743cd23143e25e57f0431ed4dfddaf7f2f | [
"MIT"
] | null | null | null | blog/views.py | alinecrsouza/django-blog-app | 5ec837743cd23143e25e57f0431ed4dfddaf7f2f | [
"MIT"
] | 4 | 2016-10-25T16:53:54.000Z | 2021-06-10T18:27:57.000Z | blog/views.py | alinecrsouza/django-blog-app | 5ec837743cd23143e25e57f0431ed4dfddaf7f2f | [
"MIT"
] | 1 | 2016-10-23T10:51:57.000Z | 2016-10-23T10:51:57.000Z | from django.http import HttpResponse, HttpResponseRedirect
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
from django.shortcuts import get_object_or_404, render, render_to_response
from django.urls import reverse
from .forms import CommentForm
from blog.models import Category, Post, Comment, Author
# the home/index page of the blog
def home(request):
posts_list = Post.objects.filter(status='Published').order_by('-created_at')
paginator = Paginator(posts_list, 3) # Show 3 posts per page
page = request.GET.get('page')
try:
posts = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
posts = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
posts = paginator.page(paginator.num_pages)
context = {
'posts': posts,
}
return render(request, 'blog/home.html', context)
# about page
def about(request):
return render(request, 'blog/about.html')
# contact page
def contact(request):
return render(request, 'blog/contact.html')
# show published posts by category ordered by date of creation descending
def show_posts_by_category(request, category_id):
category = Category.objects.get(pk = category_id)
posts_list = Post.objects.filter(category = category, status = 'Published').order_by('-created_at')
paginator = Paginator(posts_list, 3) # Show 3 posts per page
page = request.GET.get('page')
try:
posts = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
posts = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
posts = paginator.page(paginator.num_pages)
context ={
'posts': posts,
'category': category,
}
return render(request, 'blog/home.html', context)
# show published posts by author ordered by date of creation descending
def show_posts_by_author(request, author_id):
author = Author.objects.get(pk = author_id)
posts_list = Post.objects.filter(author = author, status = 'Published').order_by('-created_at')
paginator = Paginator(posts_list, 3) # Show 3 posts per page
page = request.GET.get('page')
try:
posts = paginator.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
posts = paginator.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), deliver last page of results.
posts = paginator.page(paginator.num_pages)
context ={
'posts': posts,
'author': author,
}
return render(request, 'blog/home.html', context)
# show full post, comments, and comment form
def show_post(request, post_id):
# Excludes posts with draft status
query = Post.objects.filter(status = 'Published')
post = get_object_or_404(query, pk=post_id)
#post = Post.objects.get(pk = post_id)
comments = Comment.objects.filter(post = post)
# if this is a POST request we need to process the form data
if request.method == 'POST':
# create a form instance and populate it with data from the request:
form = CommentForm(request.POST)
# check whether it's valid:
if form.is_valid():
comment = form.save(commit=False)
# assign the post to the comment.post foreign key
comment.post = post
comment.save()
return HttpResponseRedirect(reverse('blog.post', args=(post.id,)))
# if a GET (or any other method) we'll create a blank form
else:
form = CommentForm()
context ={
'comments': comments,
'post': post,
'form': form,
}
return render(request, 'blog/post.html', context)
| 34.184211 | 103 | 0.66641 | 514 | 3,897 | 4.978599 | 0.219844 | 0.049238 | 0.063306 | 0.053927 | 0.520516 | 0.474404 | 0.452521 | 0.437671 | 0.404846 | 0.404846 | 0 | 0.009048 | 0.234283 | 3,897 | 113 | 104 | 34.486726 | 0.848525 | 0.250192 | 0 | 0.453333 | 0 | 0 | 0.07833 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.08 | 0.026667 | 0.253333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa95d789ba546f8d7470a6eb841bdc4121e83880 | 2,522 | py | Python | Scripts/autoCapture2.py | darrahts/TeachableRobots | 89d80aa4fda4e6b15ed2ab554ffdd81078867cef | [
"MIT"
] | 3 | 2018-02-09T15:50:58.000Z | 2021-09-21T00:11:23.000Z | Scripts/autoCapture2.py | darrahts/TeachableRobots | 89d80aa4fda4e6b15ed2ab554ffdd81078867cef | [
"MIT"
] | null | null | null | Scripts/autoCapture2.py | darrahts/TeachableRobots | 89d80aa4fda4e6b15ed2ab554ffdd81078867cef | [
"MIT"
] | null | null | null | #!/usr/bin/python3
# -*- coding: utf-8 -*-
##### IMPORTS #####
import termios
import datetime
import tty
import sys
import os
import picamera
import time
##### VARIABLES #####
today = datetime.date.today()
folderPath = "/home/pi/timelapse"
timeNow = ""
dateNow = ""
fileName = ""
intervalTime = time.time()
checkTime = time.time()
########## CONTROLS CLASS ##########
class Controls():
##### GET KEY #####
def getKey():
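# Read a single keypress from stdin: temporarily switch the terminal out of canonical
# mode with echo disabled, read up to 3 bytes (enough for arrow-key escape sequences),
# then restore the original terminal settings.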
fd = sys.stdin.fileno()
old = termios.tcgetattr(fd)
new = termios.tcgetattr(fd)
new[3] = new[3] & ~termios.ICANON & ~termios.ECHO
new[6][termios.VMIN] = 1
new[6][termios.VTIME] = 0
termios.tcsetattr(fd, termios.TCSANOW, new)
k = None
try:
k = os.read(fd, 3)
finally:
termios.tcsetattr(fd, termios.TCSAFLUSH, old)
key = str(k)
key = key.replace("b", "")
key = key.replace("'", "")
return key
def start(self):
os.system("stty -echo")
#camera.start_preview(fullscreen=False, window=(10, 24, 640, 480))
return
def run(self):
user_input = ""
try:
while(1):
user_input = Controls.getKey()
if user_input == "q":
break
if user_input == "c":
dateNow = str(datetime.date.today())
timeNow = str(datetime.datetime.now().strftime("%H:%M:%S"))
fileName = dateNow + "_" + timeNow
camera.capture(folderPath + "/" + fileName + ".jpg")
finally:
camera.stop_preview()
os.system("stty echo")
def assertDirectory():
if not os.path.exists(folderPath):
os.makedirs(folderPath)
def capture(folderPath):
dateNow = str(datetime.date.today())
timeNow = str(datetime.datetime.now().strftime("%H:%M:%S"))
fileName = dateNow + "_" + timeNow
camera.capture(folderPath + "/" + fileName + ".jpg")
#print(fileName)
try:
camera = picamera.PiCamera()
assertDirectory()
while(True):
time.sleep(300) #300 / 60sec = 5min
checkTime = time.time()
if(checkTime - intervalTime > 1800): #1800 / 60sec = 30min
capture(folderPath)
intervalTime = checkTime
finally:
os.system("stty echo")
| 27.714286 | 80 | 0.503569 | 249 | 2,522 | 5.068273 | 0.405622 | 0.028526 | 0.040412 | 0.038035 | 0.194929 | 0.194929 | 0.194929 | 0.194929 | 0.194929 | 0.194929 | 0 | 0.025076 | 0.351705 | 2,522 | 90 | 81 | 28.022222 | 0.746789 | 0.090008 | 0 | 0.26087 | 0 | 0 | 0.036279 | 0 | 0 | 0 | 0 | 0 | 0.028986 | 1 | 0.072464 | false | 0 | 0.101449 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa9cd243f03be3fa84f948aabccdf4239d0c8257 | 2,691 | py | Python | Python Programming - Beginner/Introduction to Functions-270.py | nairachyut/dataquest-projects | 0807564bb35f39df21a84c8d97ab8eb3a428fb19 | [
"Unlicense"
] | 2 | 2020-05-23T20:02:07.000Z | 2020-07-20T13:01:20.000Z | Python Programming - Beginner/Introduction to Functions-270.py | nairachyut/dataquest-projects | 0807564bb35f39df21a84c8d97ab8eb3a428fb19 | [
"Unlicense"
] | null | null | null | Python Programming - Beginner/Introduction to Functions-270.py | nairachyut/dataquest-projects | 0807564bb35f39df21a84c8d97ab8eb3a428fb19 | [
"Unlicense"
] | null | null | null | ## 1. Overview ##
f = open("movie_metadata.csv", "r")
movies = f.read()
split_movies = movies.split("\n")
movie_data = []
for each in split_movies:
movie_data.append(each.split(","))
print(movie_data[0:5])
## 3. Writing Our Own Functions ##
def first_elts(input_lst):
elts = []
for each in input_lst:
elts.append(each[0])
return elts
movie_names = first_elts(movie_data)
print(movie_names[0:5])
## 4. Functions with Multiple Return Paths ##
wonder_woman = ['Wonder Woman','Patty Jenkins','Color',141,'Gal Gadot','English','USA',2017]
def is_usa(input_lst):
if input_lst[6] == "USA":
return True
else:
return False
wonder_woman_usa = is_usa(wonder_woman)
## 5. Functions with Multiple Arguments ##
wonder_woman = ['Wonder Woman','Patty Jenkins','Color',141,'Gal Gadot','English','USA',2017]
def is_usa(input_lst):
if input_lst[6] == "USA":
return True
else:
return False
def index_equals_str(input_lst,index,input_str):
if input_lst[index] == input_str:
return True
else:
return False
wonder_woman_in_color = index_equals_str(wonder_woman,2,"Color")
print(wonder_woman_in_color)
## 6. Optional Arguments ##
def index_equals_str(input_lst,index,input_str):
if input_lst[index] == input_str:
return True
else:
return False
def counter(input_lst,header_row = False):
num_elt = 0
if header_row == True:
input_lst = input_lst[1:len(input_lst)]
for each in input_lst:
num_elt = num_elt + 1
return num_elt
def feature_counter(input_lst,index, input_str, header_row = False):
num_elt = 0
if header_row == True:
input_lst = input_lst[1:len(input_lst)]
for each in input_lst:
if each[index] == input_str:
num_elt = num_elt + 1
return num_elt
num_of_us_movies = feature_counter(movie_data,6,"USA",True)
print(num_of_us_movies)
## 7. Calling a Function inside another Function ##
def feature_counter(input_lst,index, input_str, header_row = False):
num_elt = 0
if header_row == True:
input_lst = input_lst[1:len(input_lst)]
for each in input_lst:
if each[index] == input_str:
num_elt = num_elt + 1
return num_elt
def summary_statistics(input_lst):
num_japan_films = feature_counter(input_lst,6,"Japan",True)
num_color_films = feature_counter(input_lst,2,"Color",True)
num_films_in_english = feature_counter(input_lst,5,"English",True)
summary_dict = {"japan_films" : num_japan_films, "color_films" : num_color_films, "films_in_english" : num_films_in_english}
return summary_dict
summary = summary_statistics(movie_data) | 27.742268 | 128 | 0.684504 | 408 | 2,691 | 4.213235 | 0.193627 | 0.134962 | 0.0605 | 0.062827 | 0.547411 | 0.506108 | 0.504363 | 0.491565 | 0.491565 | 0.475276 | 0 | 0.019052 | 0.200297 | 2,691 | 97 | 129 | 27.742268 | 0.77974 | 0.070606 | 0 | 0.633803 | 0 | 0 | 0.076333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.126761 | false | 0 | 0 | 0 | 0.309859 | 0.056338 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa9dafe14566ccbd2d1949dd06062b83d48f82b6 | 6,470 | py | Python | image_retrieval.py | adisam007/Detecting-similar-images-in-a-dataset | 2458e46364630897371f4d042337869b46bfd223 | [
"Apache-2.0"
] | null | null | null | image_retrieval.py | adisam007/Detecting-similar-images-in-a-dataset | 2458e46364630897371f4d042337869b46bfd223 | [
"Apache-2.0"
] | null | null | null | image_retrieval.py | adisam007/Detecting-similar-images-in-a-dataset | 2458e46364630897371f4d042337869b46bfd223 | [
"Apache-2.0"
] | null | null | null | """
image_retrieval.py (author: Anson Wong / git: ankonzoid)
We perform image retrieval using transfer learning on a pre-trained
VGG image classifier. We plot the k=5 most similar images to our
query images, as well as the t-SNE visualizations.
"""
import os
import numpy as np
import tensorflow as tf
from sklearn.neighbors import NearestNeighbors
from src.CV_IO_utils import read_imgs_dir
from src.CV_transform_utils import apply_transformer
from src.CV_transform_utils import resize_img, normalize_img
from src.CV_plot_utils import plot_query_retrieval, plot_tsne, plot_reconstructions
from src.autoencoder import AutoEncoder
# Run mode: (autoencoder -> simpleAE, convAE) or (transfer learning -> vgg19)
modelName = "convAE" # try: "simpleAE", "convAE", "vgg19"
trainModel = True
parallel = True # use multicore processing
# Make paths
dataTrainDir = os.path.join(os.getcwd(), "data", "train")
dataTestDir = os.path.join(os.getcwd(), "data", "test")
outDir = os.path.join(os.getcwd(), "output", modelName)
if not os.path.exists(outDir):
os.makedirs(outDir)
# Read images
extensions = [".jpg", ".jpeg"]
print("Reading train images from '{}'...".format(dataTrainDir))
imgs_train = read_imgs_dir(dataTrainDir, extensions, parallel=parallel)
print("Reading test images from '{}'...".format(dataTestDir))
imgs_test = read_imgs_dir(dataTestDir, extensions, parallel=parallel)
shape_img = imgs_train[0].shape
print("Image shape = {}".format(shape_img))
# Build models
if modelName in ["simpleAE", "convAE"]:
# Set up autoencoder
info = {
"shape_img":
shape_img,
"autoencoderFile":
os.path.join(outDir, "{}_autoencoder.h5".format(modelName)),
"encoderFile":
os.path.join(outDir, "{}_encoder.h5".format(modelName)),
"decoderFile":
os.path.join(outDir, "{}_decoder.h5".format(modelName)),
}
model = AutoEncoder(modelName, info)
model.set_arch()
if modelName == "simpleAE":
shape_img_resize = shape_img
input_shape_model = (model.encoder.input.shape[1], )
output_shape_model = (model.encoder.output.shape[1], )
n_epochs = 300
elif modelName == "convAE":
shape_img_resize = shape_img
input_shape_model = tuple(
[int(x) for x in model.encoder.input.shape[1:]])
output_shape_model = tuple(
[int(x) for x in model.encoder.output.shape[1:]])
n_epochs = 500
else:
raise Exception("Invalid modelName!")
elif modelName in ["vgg19"]:
# Load pre-trained VGG19 model + higher level layers
print("Loading VGG19 pre-trained model...")
model = tf.keras.applications.VGG19(
weights='imagenet', include_top=False, input_shape=shape_img)
model.summary()
shape_img_resize = tuple([int(x) for x in model.input.shape[1:]])
input_shape_model = tuple([int(x) for x in model.input.shape[1:]])
output_shape_model = tuple([int(x) for x in model.output.shape[1:]])
n_epochs = None
else:
raise Exception("Invalid modelName!")
# Print some model info
print("input_shape_model = {}".format(input_shape_model))
print("output_shape_model = {}".format(output_shape_model))
# Apply transformations to all images
class ImageTransformer(object):
def __init__(self, shape_resize):
self.shape_resize = shape_resize
def __call__(self, img):
img_transformed = resize_img(img, self.shape_resize)
img_transformed = normalize_img(img_transformed)
return img_transformed
transformer = ImageTransformer(shape_img_resize)
print("Applying image transformer to training images...")
imgs_train_transformed = apply_transformer(
imgs_train, transformer, parallel=parallel)
print("Applying image transformer to test images...")
imgs_test_transformed = apply_transformer(
imgs_test, transformer, parallel=parallel)
# Convert images to numpy array
X_train = np.array(imgs_train_transformed).reshape((-1, ) + input_shape_model)
X_test = np.array(imgs_test_transformed).reshape((-1, ) + input_shape_model)
print(" -> X_train.shape = {}".format(X_train.shape))
print(" -> X_test.shape = {}".format(X_test.shape))
# Train (if necessary)
if modelName in ["simpleAE", "convAE"]:
if trainModel:
model.compile(loss="binary_crossentropy", optimizer="adam")
model.fit(X_train, n_epochs=n_epochs, batch_size=256)
model.save_models()
else:
model.load_models(loss="binary_crossentropy", optimizer="adam")
# Create embeddings using model
print("Inferencing embeddings using pre-trained model...")
E_train = model.predict(X_train)
E_train_flatten = E_train.reshape((-1, np.prod(output_shape_model)))
E_test = model.predict(X_test)
E_test_flatten = E_test.reshape((-1, np.prod(output_shape_model)))
print(" -> E_train.shape = {}".format(E_train.shape))
print(" -> E_test.shape = {}".format(E_test.shape))
print(" -> E_train_flatten.shape = {}".format(E_train_flatten.shape))
print(" -> E_test_flatten.shape = {}".format(E_test_flatten.shape))
# Make reconstruction visualizations
if modelName in ["simpleAE", "convAE"]:
print("Visualizing database image reconstructions...")
imgs_train_reconstruct = model.decoder.predict(E_train)
if modelName == "simpleAE":
imgs_train_reconstruct = imgs_train_reconstruct.reshape(
(-1, ) + shape_img_resize)
plot_reconstructions(
imgs_train,
imgs_train_reconstruct,
os.path.join(outDir, "{}_reconstruct.png".format(modelName)),
range_imgs=[0, 255],
range_imgs_reconstruct=[0, 1])
# Fit kNN model on training images
print("Fitting k-nearest-neighbour model on training images...")
knn = NearestNeighbors(n_neighbors=5, metric="cosine")
knn.fit(E_train_flatten)
# Perform image retrieval on test images
print("Performing image retrieval on test images...")
for i, emb_flatten in enumerate(E_test_flatten):
_, indices = knn.kneighbors(
[emb_flatten]) # find k nearest train neighbours
img_query = imgs_test[i] # query image
imgs_retrieval = [imgs_train[idx]
for idx in indices.flatten()] # retrieval images
outFile = os.path.join(outDir, "{}_retrieval_{}.png".format(modelName, i))
plot_query_retrieval(img_query, imgs_retrieval, outFile)
# Plot t-SNE visualization
print("Visualizing t-SNE on training images...")
outFile = os.path.join(outDir, "{}_tsne.png".format(modelName))
plot_tsne(E_train_flatten, imgs_train, outFile) | 37.616279 | 83 | 0.710665 | 857 | 6,470 | 5.151692 | 0.235706 | 0.03171 | 0.020385 | 0.021744 | 0.229898 | 0.146093 | 0.094451 | 0.069536 | 0.047339 | 0.047339 | 0 | 0.008321 | 0.164142 | 6,470 | 172 | 84 | 37.616279 | 0.808062 | 0.126275 | 0 | 0.096 | 0 | 0 | 0.172211 | 0.003736 | 0 | 0 | 0 | 0 | 0 | 1 | 0.016 | false | 0 | 0.072 | 0 | 0.104 | 0.152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa9dba2788c8f9eb3f29fb47deb58b561e8a09c1 | 5,434 | py | Python | src/snkit/extract.py | BenDickens/snkit | 17831cd88afa5bcb299947e50c9443d87908a085 | [
"MIT"
] | 1 | 2020-04-10T08:02:51.000Z | 2020-04-10T08:02:51.000Z | src/snkit/extract.py | BenDickens/snkit | 17831cd88afa5bcb299947e50c9443d87908a085 | [
"MIT"
] | 2 | 2020-04-10T13:12:04.000Z | 2020-04-10T15:54:15.000Z | src/snkit/extract.py | BenDickens/snkit | 17831cd88afa5bcb299947e50c9443d87908a085 | [
"MIT"
] | 1 | 2020-04-09T14:24:39.000Z | 2020-04-09T14:24:39.000Z | import geopandas
import pandas
import ogr
import os
import numpy
import gdal
from tqdm import tqdm
from pygeos import from_wkb
def query_b(geoType,keyCol,**valConstraint):
"""
This function builds an SQL query from the values passed to the retrieve() function.
Arguments:
*geoType* : Type of geometry (osm layer) to search for.
*keyCol* : A list of keys/columns that should be selected from the layer.
***valConstraint* : A dictionary of constraints for the values. e.g. WHERE 'value'>20 or 'value'='constraint'
Returns:
*string: : a SQL query string.
"""
query = "SELECT " + "osm_id"
for a in keyCol: query+= ","+ a
query += " FROM " + geoType + " WHERE "
# If there are values in the dictionary, add constraint clauses
if valConstraint:
for a in [*valConstraint]:
# For each value of the key, add the constraint
for b in valConstraint[a]: query += a + b
query+= " AND "
# Always ensures the first key/col provided is not Null.
query+= ""+str(keyCol[0]) +" IS NOT NULL"
return query
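# For illustration (hedged example, values are hypothetical): calling
#   query_b('lines', ['highway'], highway=["='primary'"])
# builds the string
#   "SELECT osm_id,highway FROM lines WHERE highway='primary' AND highway IS NOT NULL"
# which is the form of query consumed by retrieve() below.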
def retrieve(osm_path,geoType,keyCol,**valConstraint):
"""
Function to extract specified geometry and keys/values from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
*geoType* : Type of Geometry to retrieve. e.g. lines, multipolygons, etc.
*keyCol* : These keys will be returned as columns in the dataframe.
***valConstraint: A dictionary specifying the value constraints.
A key can take multiple values (as a list) to apply more than one constraint to that key.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all columns, geometries, and constraints specified.
"""
driver=ogr.GetDriverByName('OSM')
data = driver.Open(osm_path)
query = query_b(geoType,keyCol,**valConstraint)
sql_lyr = data.ExecuteSQL(query)
features =[]
# cl = columns
cl = ['osm_id']
for a in keyCol: cl.append(a)
if data is not None:
print("query is finished, let's start the loop")
for feature in tqdm(sql_lyr):
try:
if feature.GetField(keyCol[0]) is not None:
geom = from_wkb(feature.geometry().ExportToWkb())
if geom is None:
continue
# field will become a row in the dataframe.
field = []
for i in cl: field.append(feature.GetField(i))
field.append(geom)
features.append(field)
except:
print("WARNING: skipped OSM feature")
else:
print("ERROR: Nonetype error when requesting SQL. Check required.")
cl.append('geometry')
if len(features) > 0:
return pandas.DataFrame(features,columns=cl)
else:
print("WARNING: No features or No Memory. returning empty GeoDataFrame")
return pandas.DataFrame(columns=['osm_id','geometry'])
def roads(osm_path):
"""
Function to extract road linestrings from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all unique road linestrings.
"""
return retrieve(osm_path,'lines',['highway'])
def railway(osm_path):
"""
Function to extract railway linestrings from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all unique railway linestrings.
"""
return retrieve(osm_path,'lines',['railway','service'],**{"service":[" IS NOT NULL"]})
def ferries(osm_path):
"""
Function to extract ferry route linestrings from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all unique ferry route linestrings.
"""
return retrieve(osm_path,'lines',['route'],**{"route":["='ferry'",]})
def electricity(osm_path):
"""
Function to extract power line linestrings from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all unique power line linestrings.
"""
return retrieve(osm_path,'lines',['power','voltage'],**{'voltage':[" IS NULL"],})
def mainRoads(osm_path):
"""
Function to extract main road linestrings from OpenStreetMap
Arguments:
*osm_path* : file path to the .osm.pbf file of the region
for which we want to do the analysis.
Returns:
*GeoDataFrame* : a geopandas GeoDataFrame with all unique main road linestrings.
"""
return retrieve(osm_path,'lines',['highway','oneway','lanes','maxspeed'],**{'highway':["='primary' or ","='trunk' or ","='motorway' or ","='motorway_link' or ","='trunk_link' or ",
"='primary_link' or ", "='secondary' or ","='tertiary' or ","='tertiary_link'"]}) | 40.857143 | 184 | 0.618513 | 673 | 5,434 | 4.947994 | 0.258544 | 0.037838 | 0.027027 | 0.052252 | 0.424024 | 0.397598 | 0.387387 | 0.372973 | 0.356456 | 0.356456 | 0 | 0.001276 | 0.279168 | 5,434 | 133 | 185 | 40.857143 | 0.848864 | 0.480861 | 0 | 0.033898 | 0 | 0 | 0.215125 | 0 | 0.016949 | 0 | 0 | 0 | 0 | 1 | 0.118644 | false | 0 | 0.135593 | 0 | 0.389831 | 0.067797 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aa9e7f859bde1bafdef4ee3640b6f1a936bd62d7 | 375 | py | Python | examples/ex20190503_second_fetch.py | brianr747/platform | 761f83311494996bfb21218400a1ee8b6864d190 | [
"Apache-2.0"
] | 3 | 2019-05-11T12:28:18.000Z | 2022-02-09T07:03:51.000Z | examples/ex20190503_second_fetch.py | brianr747/platform | 761f83311494996bfb21218400a1ee8b6864d190 | [
"Apache-2.0"
] | null | null | null | examples/ex20190503_second_fetch.py | brianr747/platform | 761f83311494996bfb21218400a1ee8b6864d190 | [
"Apache-2.0"
] | 2 | 2019-05-12T21:35:45.000Z | 2021-05-22T19:41:46.000Z | """
Plot the 10-year US Treasury/Euro area AAA govvie spread in 4 -- count'em, 4 -- lines of code.
(Couldn't find a daily bund series...)
"""
from econ_platform.start import fetch, quick_plot
ust10 = fetch('F@DGS10')
euro_AAA_10 = fetch('D@Eurostat/irt_euryld_d/D.EA.PYC_RT.Y10.CGB_EA_AAA')
quick_plot(ust10-euro_AAA_10, title='U.S. 10Y Spread Over AAA-Rated Euro Govvie') | 37.5 | 95 | 0.738667 | 71 | 375 | 3.732394 | 0.704225 | 0.067925 | 0.10566 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054878 | 0.125333 | 375 | 10 | 96 | 37.5 | 0.753049 | 0.357333 | 0 | 0 | 0 | 0 | 0.423077 | 0.213675 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa02140998ee049878e1070e2f7aaf442787473 | 397 | py | Python | MindLink-Eumpy/test/JointTimeFrequencyAnalysis/__init__.py | Breeze1in1drizzle/MindLink-Exploring | 24e7d60112754c9fe5faf7b7f9ae255fa1bc4c59 | [
"MIT"
] | 7 | 2020-11-19T14:34:50.000Z | 2022-02-26T14:16:50.000Z | MindLink-Eumpy/test/JointTimeFrequencyAnalysis/__init__.py | Breeze1in1drizzle/MindLink-Exploring | 24e7d60112754c9fe5faf7b7f9ae255fa1bc4c59 | [
"MIT"
] | 1 | 2021-08-20T07:30:32.000Z | 2021-09-01T07:20:14.000Z | MindLink-Eumpy/test/JointTimeFrequencyAnalysis/__init__.py | Breeze1in1drizzle/MindLink-Exploring | 24e7d60112754c9fe5faf7b7f9ae255fa1bc4c59 | [
"MIT"
] | 2 | 2021-07-20T08:59:14.000Z | 2021-08-10T08:03:56.000Z | import matplotlib.pyplot as plt
import numpy as np
import numpy.fft as fft
plt.rcParams['font.sans-serif'] = ['SimHei'] # so that Chinese labels display correctly
plt.rcParams['axes.unicode_minus'] = False # so that minus signs display correctly
Fs = 1000 # sampling frequency
T = 1 / Fs # sampling period
L = 1000 # signal length
t = [i * T for i in range(L)]
t = np.array(t)
S = 0.2 + 0.7*np.cos(2*np.pi*50*t+20/180*np.pi) + 0.2*np.cos(2*np.pi*100*t+70/180*np.pi)
print("S:\n", S)
| 26.466667 | 88 | 0.63728 | 83 | 397 | 3.036145 | 0.542169 | 0.063492 | 0.047619 | 0.063492 | 0.079365 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09697 | 0.168766 | 397 | 14 | 89 | 28.357143 | 0.666667 | 0.085642 | 0 | 0 | 0 | 0 | 0.120448 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa2260dacc73da713d66450b0d641656f1c2c54 | 1,479 | py | Python | fsps_models/set_hstacs_mist_test.py | joungh93/Phot_JFG | 6d5d4cfb340b528e999292abd5d4dec66c7ab39d | [
"MIT"
] | null | null | null | fsps_models/set_hstacs_mist_test.py | joungh93/Phot_JFG | 6d5d4cfb340b528e999292abd5d4dec66c7ab39d | [
"MIT"
] | null | null | null | fsps_models/set_hstacs_mist_test.py | joungh93/Phot_JFG | 6d5d4cfb340b528e999292abd5d4dec66c7ab39d | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Jun 18 11:28:59 2020
@author: jlee
"""
import time
start_time = time.time()
import numpy as np
import glob, os, copy
import fsps
# sp.libraries = (b'mist', b'miles')
import init_hstacs_mist_test as ini
# ----- Obtaining magnitudes ----- #
for i in np.arange(len(ini.name_sp)):
for j in np.arange(len(ini.name_Z)):
sp_mag = np.zeros((ini.n_age, ini.n_z, ini.n_band+1))
exec("sp = ini.sp_"+ini.name_sp[i]+"_"+ini.name_Z[j])
for k in np.arange(ini.n_age):
for l in np.arange(ini.n_z):
sp_mags = sp.get_mags(tage=ini.age[k], redshift=ini.z[l], bands=ini.acs_bands)
sp_Ms = sp.stellar_mass
sp_mag[k,l,:] = np.append(sp_mags, sp_Ms)
# sp_mag[k,l,:] = sp.get_mags(tage=ini.age[k], redshift=ini.z[l], bands=ini.acs_bands)
exec("sp_mag_"+ini.name_sp[i]+"_"+ini.name_Z[j]+" = copy.deepcopy(sp_mag)")
# ----- Saving arrays ----- #
os.system('rm -rfv '+ini.sav_name)
# np.savez_compressed(ini.sav_name,
# ssp0_m42=sp_mag_ssp0_m42, ssp0_m62=sp_mag_ssp0_m62,
# ssp1_m42=sp_mag_ssp1_m42, ssp1_m62=sp_mag_ssp1_m62,
# tau0_m42=sp_mag_tau0_m42, tau0_m62=sp_mag_tau0_m62,
# tau1_m42=sp_mag_tau1_m42, tau1_m62=sp_mag_tau1_m62)
np.savez_compressed(ini.sav_name,
ssp0_m62=sp_mag_ssp0_m62,
tau0_m62=sp_mag_tau0_m62)
# Printing the running time
print('--- %s seconds ---' %(time.time()-start_time)) | 32.152174 | 90 | 0.651116 | 267 | 1,479 | 3.310861 | 0.325843 | 0.084842 | 0.054299 | 0.029412 | 0.382353 | 0.350679 | 0.223982 | 0.153846 | 0.11086 | 0.11086 | 0 | 0.061881 | 0.180527 | 1,479 | 46 | 91 | 32.152174 | 0.667492 | 0.413117 | 0 | 0 | 0 | 0 | 0.083726 | 0.024764 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.238095 | 0 | 0.238095 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa47e8de9abaa0afaa95e0c7781bcfbe661d851 | 5,933 | py | Python | In Class Projects/In Class Examples Spring 2019/Section 4/stats.py | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | 16 | 2019-01-10T18:54:13.000Z | 2022-01-28T20:07:20.000Z | In Class Projects/In Class Examples Spring 2019/Section 4/stats.py | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | null | null | null | In Class Projects/In Class Examples Spring 2019/Section 4/stats.py | hunterluepke/Learn-Python-for-Stats-and-Econ | d580a8e27ba937fc8401ac6d0714b6488ac8bbb6 | [
"MIT"
] | 15 | 2019-01-24T17:11:20.000Z | 2021-12-11T01:53:57.000Z | #stats.py
class Stats():
def __init__(self):
pass
def total(self, list_obj):
total = 0
n = len(list_obj)
for i in range(n):
total += list_obj[i]
return total
def mean(self, list_obj):
n = len(list_obj)
mean = self.total(list_obj) / n
return mean
def median(self, list_obj):
n = len(list_obj)
# lists of even length divided by 2 have remainder of 0
if n % 2 != 0:
#list is odd
middle_num = int((n - 1) / 2)
median = list_obj[middle_num]
else:
middle_num2 = int(n/2)
middle_num1 = middle_num2 - 1
# pass slice with two middle values to mean()
median = self.mean(list_obj[middle_num1:middle_num2 + 1])
return median
def mode(self, list_obj):
max_count = 0
counter_dict = {}
for value in list_obj:
counter_dict[value] = 0
for value in list_obj:
counter_dict[value] += 1
count_list = list(counter_dict.values())
max_count = max(count_list)
mode = [key for key in counter_dict if counter_dict[key] == max_count]
return mode
def variance(self, list_obj, sample = False):
""" Step 1 """
list_mean = self.mean(list_obj)
n = len(list_obj)
""" Step 2 """
sum_sq_diff = 0
for val in list_obj:
sum_sq_diff += (val - list_mean) ** 2
if sample == False:
list_variance = sum_sq_diff / n
if sample == True:
list_variance = sum_sq_diff / (n - 1)
return list_variance
def SD(self, list_obj, sample = False):
list_variance = self.variance(list_obj, sample)
list_SD = list_variance ** (1/2)
return list_SD
def covariance(self, list1, list2, sample = False):
"""
1. Check lengths of lists are the same
2. Calculate the means
3. Use a for loop to sum product of the differences for each observation
from both lists
4. Divide by the number of observations
"""
len_list1 = len(list1)
len_list2 = len(list2)
if len_list1 == len_list2:
mean_list1 = self.mean(list1)
mean_list2 = self.mean(list2)
sum_of_diff_prods = 0
for i in range(len_list1):
diff_list1 = list1[i] - mean_list1
diff_list2 = list2[i] - mean_list2
sum_of_diff_prods += diff_list1 * diff_list2
if sample == False:
cov = sum_of_diff_prods / len_list1
if sample:
cov = sum_of_diff_prods / (len_list1 - 1)
return cov
print("List lengths not equal")
print("List1 observations:", len_list1)
print("List2 observations:", len_list2)
return None
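# Worked example (hedged, not part of the original lesson): for list1 = [1, 2, 3]
# and list2 = [2, 4, 6], the means are 2 and 4, the summed difference products are
# (-1)(-2) + (0)(0) + (1)(2) = 4, so the population covariance is 4 / 3 and the
# sample covariance is 4 / 2 = 2.0.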
def correlation(self, list1, list2):
cov = self.covariance(list1, list2)
SD1 = self.SD(list1)
SD2 = self.SD(list2)
corr = cov / (SD1 * SD2)
return corr
def skewness(self, list_obj, sample = False):
mean_ = self.mean(list_obj)
skew = 0
n = len(list_obj)
for x in list_obj:
skew += (x - mean_) ** 3
skew = skew / n if not sample else n * skew / ((n - 1) * (n - 2))
SD_ = self.SD(list_obj, sample)
skew = skew / (SD_ ** 3)
return skew
def kurtosis(self, list_obj, sample = False):
mean_ = self.mean(list_obj)
kurt = 0
n = len(list_obj)
for x in list_obj:
kurt += (x - mean_) ** 4
SD_ = self.SD(list_obj, sample)
kurt = kurt / n if not sample else n * (n + 1) * kurt / ((n - 1) * \
(n - 2)) - (3 * (n - 1) ** 2) / ((n - 2) * (n - 3))
kurt = kurt / SD_ ** 4
return kurt
list1 = [1,4,7,33,5,4,22,55,4,55,4,32]
list2 = [4,8,22,1,9,43,3,2,1,99,3,10]
stats = Stats()
total1 = stats.total(list1)
total2 = stats.total(list2)
mean1 = stats.mean(list1)
mean2 = stats.mean(list2)
mode1 = stats.mode(list1)
mode2 = stats.mode(list2)
median1 = stats.median(list1)
median2 = stats.median(list2)
variance1 = stats.variance(list1)
variance2 = stats.variance(list2)
standard_deviation1 = stats.SD(list1)
standard_deviation2 = stats.SD(list2)
covariance_pop = stats.covariance(list1, list2)
covariance_sample = stats.covariance(list1, list2, True)
correlation = stats.correlation(list1, list2)
skewness_pop1 = stats.skewness(list1)
skewness_pop2 = stats.skewness(list2)
skewness_sample1 = stats.skewness(list1, True)
skewness_sample2 = stats.skewness(list2, True)
kurtosis_pop1 = stats.kurtosis(list1)
kurtosis_pop2 = stats.kurtosis(list2)
kurtosis_sample1 = stats.kurtosis(list1, True)
kurtosis_sample2 = stats.kurtosis(list2, True)
print("Total1:", total1)
print("Total2:", total2)
print("Mean1:", mean1)
print("Mean2", mean2)
print("Mode1:", mode1)
print("Mode2:", mode2)
print("Median1:", median1)
print("Median2:", median2)
print("Variance1:", variance1)
print("Variance2:", variance2)
print("Standard Deviation1:", standard_deviation1)
print("Standard Deviation2:", standard_deviation2)
print("Covariance (Population):", covariance_pop)
print("Covariance (Sample):", covariance_sample)
print("Correlation (Population):", correlation)
print("SkewnessPop1 (Population):", skewness_pop1)
print("SkewnessPop2 (Population):", skewness_pop2)
print("SkewnessSample1 (Sample):", skewness_sample1)
print("SkewnessSample2 (Sample):", skewness_sample2)
print("Kurtosis1 (Population):", kurtosis_pop1)
print("Kurtosis2 (Population):", kurtosis_pop2)
print("Kurtosis1 (Sample):", kurtosis_sample1)
print("Kurtosis2 (Sample):", kurtosis_sample2)
| 33.145251 | 80 | 0.587561 | 761 | 5,933 | 4.411301 | 0.178712 | 0.060471 | 0.026214 | 0.01966 | 0.175454 | 0.146262 | 0.086982 | 0.058981 | 0.039321 | 0.039321 | 0 | 0.049243 | 0.298331 | 5,933 | 178 | 81 | 33.331461 | 0.757146 | 0.053936 | 0 | 0.110345 | 0 | 0 | 0.077354 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.075862 | false | 0 | 0 | 0 | 0.158621 | 0.17931 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa4e08c7bee64dc9cd7b7c67b38abdb08a7d494 | 1,966 | py | Python | scripts/archive-team-update.py | isabela-pf/jupyter-a11y-mgmt | 07e209499d61002d84f837883b7ec9dd8ed367e6 | [
"BSD-3-Clause"
] | null | null | null | scripts/archive-team-update.py | isabela-pf/jupyter-a11y-mgmt | 07e209499d61002d84f837883b7ec9dd8ed367e6 | [
"BSD-3-Clause"
] | null | null | null | scripts/archive-team-update.py | isabela-pf/jupyter-a11y-mgmt | 07e209499d61002d84f837883b7ec9dd8ed367e6 | [
"BSD-3-Clause"
] | null | null | null | import os
from base64 import b64decode, b64encode
from datetime import date
from ghapi.actions import github_token
from ghapi.all import GhApi
from IPython.display import Markdown
OWNER = "Quansight-Labs"
REPO = "jupyter-a11y-mgmt"
# ------------------------------------------------------------------
# On GitHub Actions "ACCESS_TOKEN" should be a personal access token with r/w permissions to *other* repos
token = (
github_token() if "ACCESS_TOKEN" not in os.environ else os.environ["ACCESS_TOKEN"]
)
# Initialize the GH API client
api = GhApi(token=token)
# Grab the report template
template = api.repos.get_content(OWNER, REPO, "team_updates/template.md")
template = b64decode(template.content).decode("utf-8")
# Get the team update issue and the comments
issues = api.issues.list_for_repo(OWNER, REPO, labels="type: team-update", state="open")
if issues:
for issue in issues:
issue_comments = api.issues.list_comments(
OWNER, REPO, issue_number=issue.number
)
issue_url = issue.url
if issue_comments:
summary = (
"\n".join(
[
f"- **@{comment.user.login}** \n\n {comment.body} \n---\n"
for comment in issue_comments
]
)
+ "\n\n"
+ f"See the original issue at: <{issue.url}>"
+ "\n\n"
)
else:
summary = "Nothing to report"
# Replace template
template = template.replace("{{ INSERT PERSONAL UPDATES }}", summary)
report_date = date.today().strftime("%d-%m-%Y")
template = template.replace("{{ date }}", report_date)
# Encode the markdown document
encoded_template = b64encode(bytes(template, "utf-8")).decode("utf-8")
resp = api.repos.create_or_update_file_contents(
owner=OWNER,
repo=REPO,
message="🤖 weekly team update",
content=encoded_template,
path=f"team_updates/{report_date}.md",
branch="master",
)
| 28.085714 | 106 | 0.626144 | 248 | 1,966 | 4.866935 | 0.419355 | 0.036454 | 0.01657 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009836 | 0.224313 | 1,966 | 69 | 107 | 28.492754 | 0.780984 | 0.164802 | 0 | 0.042553 | 0 | 0.021277 | 0.207466 | 0.047736 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.12766 | 0 | 0.12766 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa556758df57cd0a65472ee412a895560ec53a1 | 874 | py | Python | src/ly_python_tools/config.py | LeapYear/poetry-autoupgrade | d490a7168c0980f14e7a41cfc2573a9dec1d8b4a | [
"MIT"
] | null | null | null | src/ly_python_tools/config.py | LeapYear/poetry-autoupgrade | d490a7168c0980f14e7a41cfc2573a9dec1d8b4a | [
"MIT"
] | 2 | 2022-03-26T19:00:56.000Z | 2022-03-28T16:40:21.000Z | src/ly_python_tools/config.py | LeapYear/poetry-autoupgrade | d490a7168c0980f14e7a41cfc2573a9dec1d8b4a | [
"MIT"
] | null | null | null | """Helper functions for dealing with the pyproject file."""
from __future__ import annotations
from pathlib import Path
from typing import Sequence
def get_pyproject(config_name: Path | str = "pyproject.toml") -> Path:
"""Get the location of pyproject.toml in the first parent diretory."""
cwd = Path.cwd().absolute()
paths = [cwd] + list(cwd.parents)
for path in paths:
pyproject = path / config_name
if pyproject.exists() and pyproject.is_file():
return pyproject
raise NoProjectFile(config_name, search_paths=paths)
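# Usage sketch (hypothetical paths, for illustration only): calling
# get_pyproject() from /home/user/project/src returns
# Path('/home/user/project/pyproject.toml') if that file exists in the current
# directory or any parent, and raises NoProjectFile otherwise;
# get_pyproject('setup.cfg') searches for a different file name in the same way.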
class NoProjectFile(Exception):
"""No project file could be found."""
def __init__(self, proj_filename: Path | str, search_paths: Sequence[Path]):
super().__init__()
self.proj_filename = str(proj_filename)
self.search_paths = [path.as_posix() for path in search_paths]
| 33.615385 | 80 | 0.693364 | 113 | 874 | 5.141593 | 0.486726 | 0.075732 | 0.030981 | 0.068847 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.203661 | 874 | 25 | 81 | 34.96 | 0.83477 | 0.171625 | 0 | 0 | 0 | 0 | 0.019774 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.1875 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa5bb28af259067813bf32979f1c9e8e219f123 | 2,666 | py | Python | pkg_radish_ext/radish_ext/sdk/cfg.py | bbielicki/radish-bdd-extensions | 7f1317461af23a70f2a551b66299b54e296af32f | [
"BSD-3-Clause"
] | 4 | 2019-09-19T21:25:26.000Z | 2019-11-10T06:09:06.000Z | pkg_radish_ext/radish_ext/sdk/cfg.py | bbielicki/radish-bdd-extensions | 7f1317461af23a70f2a551b66299b54e296af32f | [
"BSD-3-Clause"
] | null | null | null | pkg_radish_ext/radish_ext/sdk/cfg.py | bbielicki/radish-bdd-extensions | 7f1317461af23a70f2a551b66299b54e296af32f | [
"BSD-3-Clause"
] | 2 | 2019-09-17T11:26:59.000Z | 2020-01-23T20:20:43.000Z | # © 2019 Nokia
# Licensed under the BSD 3 Clause license
# SPDX-License-Identifier: BSD-3-Clause
import os
import jinja2
import yaml
from radish_ext import get_radish_ext_etc_dir
from radish_ext.sdk.l import Logging
from radish_ext.sdk.config import Config
class CfgComponentException(Exception):
pass
class CfgConfig(Config):
def __init__(self):
super(CfgConfig, self).__init__()
self.cfg_dir = get_radish_ext_etc_dir()
self.yaml = None
self.j2_config_template = None
self.default_cfg_dirs = ['.']
self.custom_cfg_dirs = []
def set_properties(self, yaml_, j2_config_template, custom_cfg_dirs=None):
self.yaml = yaml_
self.j2_config_template = j2_config_template
if custom_cfg_dirs is not None:
self.custom_cfg_dirs = self.default_cfg_dirs + custom_cfg_dirs
else:
self.custom_cfg_dirs = self.default_cfg_dirs
return self
class CfgComponent(object):
CONFIG_FILE_PATH = "__config_file_path__"
def __init__(self, cfg_config):
super(CfgComponent, self).__init__()
self.log = Logging.get_object_logger(self)
self.config = cfg_config
if self.config.yaml:
cfg_dir = self.find_config_directory(self.config.yaml, self.config.custom_cfg_dirs + [self.config.cfg_dir])
self.log.debug("Using config directory: %s" % cfg_dir)
with open(os.path.join(cfg_dir, self.config.yaml)) as f:
cfg = yaml.load(f, Loader=yaml.FullLoader)
if self.config.j2_config_template is None:
self.cfg = cfg
else:
jinja2_env = jinja2.Environment(loader=jinja2.FileSystemLoader(cfg_dir))
template = jinja2_env.get_template(self.config.j2_config_template)
self.cfg = yaml.load(template.render(**cfg), Loader=yaml.FullLoader)
self.cfg[CfgComponent.CONFIG_FILE_PATH] = os.path.join(cfg_dir, self.config.yaml)
else:
self.cfg = yaml.load("", Loader=yaml.FullLoader)
print(dir(self.cfg))
@staticmethod
def find_config_directory(file_name, cfg_dirs):
for i in cfg_dirs:
if os.path.isfile(os.path.join(i, file_name)):
cfg_dir = i
break
else:
raise CfgComponentException('Config file %s not found in %s' % (file_name, cfg_dirs))
return cfg_dir
def cfg_from_file(cfg_yaml_path):
return CfgComponent(CfgConfig().set_properties(os.path.basename(cfg_yaml_path),
None,
[os.path.dirname(cfg_yaml_path)])).cfg
| 33.746835 | 119 | 0.631658 | 345 | 2,666 | 4.573913 | 0.243478 | 0.057668 | 0.057668 | 0.034221 | 0.13815 | 0.082383 | 0.082383 | 0.082383 | 0 | 0 | 0 | 0.00884 | 0.278695 | 2,666 | 78 | 120 | 34.179487 | 0.811232 | 0.033758 | 0 | 0.068966 | 0 | 0 | 0.029938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086207 | false | 0.017241 | 0.103448 | 0.017241 | 0.310345 | 0.017241 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa5c7fc15dca94e6a294c0e59da29a25ed2dd0e | 2,660 | py | Python | src/notification/alert.py | mmde-lab/PubZ | 900f86efefb0d4b8bcd2513b7bbfbdf0454d14b2 | [
"MIT"
] | 2 | 2018-08-11T15:51:22.000Z | 2018-11-28T01:08:10.000Z | src/notification/alert.py | mmde-lab/PubZ | 900f86efefb0d4b8bcd2513b7bbfbdf0454d14b2 | [
"MIT"
] | 33 | 2018-12-10T04:30:39.000Z | 2022-01-28T09:57:30.000Z | src/notification/alert.py | getty708/bman | c4c9b60828ae10bfcf7a57c99ef89daf301e44a6 | [
"MIT"
] | 3 | 2019-02-07T00:33:38.000Z | 2021-07-03T14:46:37.000Z | # Write functions for sending email here.
from core.models import Bibtex
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.core.mail import send_mail
from django.template.loader import get_template
from notification.const import address
@login_required
def send_email_test():
# Subject
subject = "Please update the registration information."
# Body
message = "The following papers have missing items.\n\n\n"
not_published_list = Bibtex.objects.filter(is_published=False)
mail_template = get_template("notification/mail_templates/mail_basic.txt")
for bib in not_published_list:
book = Bibtex.objects.get(id=bib.id).book
context = {
"bib": bib,
"book": book,
}
message = message + mail_template.render(context) + "\n"
# Sender
# from_email = "test@test.com"
from_email = settings.EMAIL_HOST_USER
# Recipient
recipient_list = address
return send_mail(subject, message, from_email, recipient_list)
def send_email_to_appointed_address(address, bibtex):
# Subject
subject = "Please update the registration information."
# Body
message = "The following papers have missing items.\n\n\n"
mail_template = get_template("notification/mail_templates/mail_basic.txt")
context = {
"bib": bibtex,
"book": bibtex.book,
}
message = message + mail_template.render(context) + "\n"
# Sender
# from_email = "test@test.com"
from_email = settings.EMAIL_HOST_USER
# Recipient
recipient_list = [address]
return send_mail(subject, message, from_email, recipient_list)
def send_email_to_all():
# Subject
subject = "Please update the registration information."
# Body
from_email = settings.EMAIL_HOST_USER
not_published_list = Bibtex.objects.filter(is_published=False)
bad_status = []
for bib in not_published_list:
message = "The following papers have missing items.\n\n\n"
mail_template = get_template("notification/mail_templates/mail_basic.txt")
book = bib.book
if len(bib.authors.all()) == 0:
continue
address = bib.authors.all()[0].mail
if address is None:
continue
context = {
"bib": bib,
"book": book,
}
message = message + mail_template.render(context) + "\n"
status = send_mail(subject, message, from_email, [address])
if status is False:
bad_status.append((address, book))
if len(bad_status) == 0:
status = "Success"
else:
status = bad_status
return status, not_published_list
| 25.825243 | 82 | 0.653759 | 328 | 2,660 | 5.103659 | 0.243902 | 0.043011 | 0.04779 | 0.037634 | 0.666667 | 0.666667 | 0.601553 | 0.601553 | 0.572282 | 0.51135 | 0 | 0.001502 | 0.248872 | 2,660 | 102 | 83 | 26.078431 | 0.836336 | 0.049248 | 0 | 0.5 | 0 | 0 | 0.19841 | 0.078728 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0.083333 | 0 | 0.183333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaa5f3876248c04e7e75f2961a168974fb30863b | 6,713 | py | Python | fishem_mockupio.py | ddeel/fishem | bcfe478b241dfd30830ea0434026000e48bd569b | [
"BSD-3-Clause"
] | 1 | 2022-03-03T13:16:10.000Z | 2022-03-03T13:16:10.000Z | fishem_mockupio.py | ddeel/fishem | bcfe478b241dfd30830ea0434026000e48bd569b | [
"BSD-3-Clause"
] | 3 | 2021-07-25T18:33:43.000Z | 2022-03-20T19:41:20.000Z | fishem_mockupio.py | ddeel/fishem | bcfe478b241dfd30830ea0434026000e48bd569b | [
"BSD-3-Clause"
] | null | null | null | # Copyright (c) 2021 by Don Deel. All rights reserved.
"""
Handle mockup I/O for fishem.
"""
# Standard library module imports
import json # JSON handling
import os # File I/O handling
# Third party module imports
import xmltodict # XML handling for mockups
# Local module imports
from fish_data import fish # Fish data
# Constants
FISH_KEY_BASE = '/redfish/v1'
# Function: input()
def input(imockup_dir):
"""Load the mockup from 'imockup_dir' into the current fish.
"""
# Ensure the input mockup directory exists
if not os.path.exists(imockup_dir):
print('Input mockup not found', imockup_dir)
# Failure exit; cannot continue
print('Input mockup not loaded, fishem ending')
exit(1)
# imockup_dir_norm is imockup_dir with normalized slashes
imockup_dir_norm = imockup_dir.replace('\\', '/')
for dirpath, dirnames, filenames in os.walk(imockup_dir):
for file_name in filenames:
# Only deal with files of interest
if file_name not in ['index.json', 'index.xml']:
continue
# Set up file_path, rel_path, and fish_key
file_path = os.path.join(dirpath, file_name)
if dirpath == imockup_dir: # Service root case
rel_path = ''
fish_key = FISH_KEY_BASE
else: # All other cases
# Normalize slashes and remove topdir from rel_path
rel_path = dirpath.replace('\\', '/')
rel_path = rel_path.replace(imockup_dir_norm + '/', '')
fish_key = FISH_KEY_BASE + '/' + rel_path
# Get data from individual mockup files
if file_name == 'index.xml' and rel_path == '$metadata':
# Get the $metadata document (index.xml) file data
# Convert XML to JSON before storing it in fish
try:
json_data = xmltodict.parse(
open(file_path, 'r').read())
except Exception as error:
print('Failed to read input mockup XML data file', \
file_name, 'for', fish_key, 'with this error:')
print(error)
# Failure exit; cannot continue
print('Input mockup not loaded, fishem ending')
exit(1)
else:
# Get the JSON data for a fish object (index.json)
try:
json_data = json.load(open(file_path))
except Exception as error:
print('Failed to read input mockup JSON data file', \
file_name, 'for', fish_key, 'with this error:')
print(error)
# Failure exit; cannot continue
print('Input mockup not loaded, fishem ending')
exit(1)
# Store the JSON data for a fish object in fish
fish[fish_key] = json_data
# Success return
print('Loaded the mockup in "', imockup_dir, '"', sep='')
return
# End of input()
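# For illustration (hypothetical mockup layout, not part of fishem itself), the
# directory-to-fish mapping built above works out to:
#     <mockup>/index.json               -> fish['/redfish/v1']
#     <mockup>/Systems/index.json       -> fish['/redfish/v1/Systems']
#     <mockup>/Systems/Sys1/index.json  -> fish['/redfish/v1/Systems/Sys1']
#     <mockup>/$metadata/index.xml      -> fish['/redfish/v1/$metadata'] (XML converted to JSON)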
# Function: output()
def output(omockup_dir):
"""Save the current fish as a mockup in 'omockup_dir'.
"""
# Delete the old output mockup directory hierarchy if it exists;
# must build a new output mockup directory hierarchy every time
if os.path.exists(omockup_dir):
# Walk the existing mockup directory hierarchy (bottom-up)
for dirpath, dirnames, filenames in os.walk(omockup_dir, \
topdown = False):
try:
# Delete any files in a directory before
# deleting the directory itself
for file_name in filenames:
file_path = os.path.join(dirpath, file_name)
os.remove(file_path)
# Delete the directory
os.rmdir(dirpath)
except Exception as error:
print('Failed to remove old output mockup directory "',
omockup_dir, '":', sep='')
print(error)
# Failure exit; cannot continue
print('Output mockup not saved, fishem ending')
exit(1)
# Create a new directory hierarchy for the output mockup
for fish_key in fish:
# The Redfish version object is not included in mockups
if fish_key == '/redfish':
continue
dir_path = fish_key.replace(FISH_KEY_BASE, omockup_dir)
dir_path = os.path.normpath(dir_path)
if not os.path.isdir(dir_path):
try:
os.makedirs(dir_path)
except Exception as error:
print('Failed to create output mockup directory "',
dir_path, '":', sep='')
print(error)
# Failure exit; cannot continue
print('Output mockup not saved, fishem ending')
exit(1)
# Save the fish objects in the output mockup directories
for fish_key in fish:
dir_path = fish_key.replace(FISH_KEY_BASE, omockup_dir)
dir_path = os.path.normpath(dir_path)
# The Redfish Version object is not included in mockups
if fish_key == '/redfish':
continue
# Save the Redfish $metadata document as 'index.xml'
if fish_key == '/redfish/v1/$metadata':
file_path = os.path.join(dir_path, 'index.xml')
xml_data = xmltodict.unparse(fish[fish_key],
pretty=True)
try:
open(file_path, 'w').write(xml_data)
except Exception as error:
print('Failed to save output mockup XML data in "',
file_path, '":', sep='')
print(error)
# Failure exit; cannot continue
print('Output mockup not saved, fishem ending')
exit(1)
continue
# Save the fish object as 'index.json'
file_path = os.path.join(dir_path, 'index.json')
json_data = json.dumps(fish[fish_key], indent=4)
try:
open(file_path, 'w').write(json_data)
except Exception as error:
print('Failed to save output mockup JSON data in "',
file_path, '":', sep='')
print(error)
# Failure exit; cannot continue
print('Output mockup not saved, fishem ending')
exit(1)
# Success return
print('Saved the current fish as a mockup in "',
omockup_dir, '"', sep='')
return
# End of output()
| 37.088398 | 73 | 0.549829 | 784 | 6,713 | 4.586735 | 0.200255 | 0.038932 | 0.033092 | 0.048665 | 0.477753 | 0.437987 | 0.426307 | 0.383204 | 0.336207 | 0.316741 | 0 | 0.003296 | 0.367198 | 6,713 | 180 | 74 | 37.294444 | 0.84322 | 0.25756 | 0 | 0.533333 | 0 | 0 | 0.156066 | 0.004267 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019048 | false | 0 | 0.038095 | 0 | 0.07619 | 0.209524 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaaadd9a5da7549428cb6426c0c349734b427077 | 3,783 | py | Python | linear_search.py | kevinyamauchi/point-slicing | 4163f2903b3f4fdaad2046615147814b537b2b79 | [
"BSD-3-Clause"
] | null | null | null | linear_search.py | kevinyamauchi/point-slicing | 4163f2903b3f4fdaad2046615147814b537b2b79 | [
"BSD-3-Clause"
] | 1 | 2022-01-31T16:06:53.000Z | 2022-01-31T16:06:53.000Z | linear_search.py | kevinyamauchi/point-slicing | 4163f2903b3f4fdaad2046615147814b537b2b79 | [
"BSD-3-Clause"
] | 1 | 2022-02-02T10:59:00.000Z | 2022-02-02T10:59:00.000Z | from dataclasses import dataclass
import numpy as np
import zarr
from dask import array as da
from scipy.spatial.transform import Rotation as R
from create_data import CreateData
from utils.cli import read_args
from utils.logger import logger
from utils.timer import timer
@dataclass
class LinearSearch:
data: zarr.Array
alpha: int
beta: int
gamma: int
x: float
y: float
z: float
tolerance: float
def __post_init__(self) -> None:
self.original_points = da.from_zarr(self.data)
self.plane_point = np.array([self.x, self.y, self.z], dtype=self.data.dtype)
self.plane_normal = self._create_plane_normal()
self.projected_points, distance_to_plane = self._project_points_onto_plane()
self.indices = self._find_points_within_tolerance(distance_to_plane)
@timer
def _create_plane_normal(self) -> np.ndarray:
"""
creates the normal to the plane based on the Euler angles (in degrees)
* gamma rotation about z-axis
* beta rotation about y-axis
* alpha rotation about z-axis
"""
r = (
R.from_euler("zyz", [self.gamma, self.beta, self.alpha], degrees=True)
.as_rotvec()
.astype(self.data.dtype)
)
return r / np.linalg.norm(r)
@timer
def _project_points_onto_plane(self) -> tuple[da.Array, da.Array]:
"""
Project points on to a plane. Plane is defined by a point and a normal
vector. This function is designed to work with points and planes in 3D.
Returns
-------
projected_point : np.ndarray
The point that has been projected to the plane.
This is always an Nx3 array.
signed_distance_to_plane : np.ndarray
The signed projection distance between the points and the plane.
Positive values indicate the point is on the positive normal side
of the plane.
Negative values indicate the point is on the negative normal side
of the plane.
"""
# get the vector from point on the plane to the point to be projected
point_vector = self.original_points - self.plane_point
# find the distance to the plane along the normal direction
signed_distance_to_plane = point_vector @ self.plane_normal
# project the point
projected_points = self.original_points - (
signed_distance_to_plane.reshape(-1, 1) @ self.plane_normal.reshape(1, -1)
)
return projected_points, signed_distance_to_plane
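# Small worked example (hedged, not part of the original code): with
# plane_point = [0, 0, 0] and plane_normal = [0, 0, 1], the point [1, 2, 3]
# has a signed distance of +3 and projects to [1, 2, 0], while [4, 5, -2]
# has a signed distance of -2 and projects to [4, 5, 0].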
@timer
def _find_points_within_tolerance(self, distance: da.Array) -> np.ndarray:
"""
Find the points within a tolerance of the plane.
"""
return np.where(np.abs(distance) < self.tolerance)[0]
@staticmethod
@timer
def retrieve_values(data: da.Array, name: str) -> np.ndarray:
"""
helper function to compute the value of a given dask array
"""
logger.info(f"Compute {name}")
return data.compute()
if __name__ == "__main__":
args = read_args()
data = CreateData(
ndim=args.ndim, points_per_dim=args.points, chunk_size=args.chunksize
).box
search = LinearSearch(
data=data,
alpha=args.alpha,
beta=args.beta,
gamma=args.gamma,
x=args.x,
y=args.y,
z=args.z,
tolerance=args.tolerance,
)
idx = search.retrieve_values(search.indices, "indices of points")
percent = 100 * len(idx) / len(data)
logger.info(f"found {len(idx)} points within the tolerance, {percent}%")
found_points = search.retrieve_values(
search.projected_points[idx], "values of points"
)
print(found_points)
| 32.333333 | 86 | 0.641819 | 503 | 3,783 | 4.667992 | 0.274354 | 0.027257 | 0.03833 | 0.035775 | 0.100085 | 0.024702 | 0.024702 | 0 | 0 | 0 | 0 | 0.003644 | 0.27465 | 3,783 | 116 | 87 | 32.612069 | 0.852041 | 0.256146 | 0 | 0.056338 | 0 | 0 | 0.043863 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.070423 | false | 0 | 0.126761 | 0 | 0.380282 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaab918f4baca74d3278721004f3f93acc796d92 | 35,559 | py | Python | alf/layers.py | jesbu1/alf | def59fe39bdbca70a6c80e9b8f2c7c785cb59ea7 | [
"Apache-2.0"
] | null | null | null | alf/layers.py | jesbu1/alf | def59fe39bdbca70a6c80e9b8f2c7c785cb59ea7 | [
"Apache-2.0"
] | null | null | null | alf/layers.py | jesbu1/alf | def59fe39bdbca70a6c80e9b8f2c7c785cb59ea7 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) 2019 Horizon Robotics. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Some basic layers."""
import gin
import copy
import numpy as np
import torch
import torch.nn as nn
from alf.initializers import variance_scaling_init
from alf.nest.utils import get_outer_rank
from alf.tensor_specs import TensorSpec
from alf.utils import common
from alf.utils.math_ops import identity
def normalize_along_batch_dims(x, mean, variance, variance_epsilon):
"""Normalizes a tensor by ``mean`` and ``variance``, which are expected to have
the same tensor spec with the inner dims of ``x``.
Args:
x (Tensor): a tensor of (``[D1, D2, ..] + shape``), where ``D1``, ``D2``, ..
are arbitrary leading batch dims (can be empty).
mean (Tensor): a tensor of ``shape``
variance (Tensor): a tensor of ``shape``
variance_epsilon (float): A small float number to avoid dividing by 0.
Returns:
Normalized tensor.
"""
spec = TensorSpec.from_tensor(mean)
assert spec == TensorSpec.from_tensor(variance), \
"The specs of mean and variance must be equal!"
bs = BatchSquash(get_outer_rank(x, spec))
x = bs.flatten(x)
variance_epsilon = torch.as_tensor(variance_epsilon).to(variance.dtype)
inv = torch.rsqrt(variance + variance_epsilon)
x = (x - mean.to(x.dtype)) * inv.to(x.dtype)
x = bs.unflatten(x)
return x
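# Example (hedged sketch): for x of shape [B, T, d] and mean/variance of shape
# [d], get_outer_rank(x, spec) is 2, so BatchSquash(2) reshapes x to [B * T, d],
# the per-feature normalization (x - mean) / sqrt(variance + eps) is applied,
# and the result is reshaped back to [B, T, d].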
class BatchSquash(object):
"""Facilitates flattening and unflattening batch dims of a tensor. Copied
from `tf_agents`.
Exposes a pair of matched flatten and unflatten methods. After flattening
only 1 batch dimension will be left. This facilitates evaluating networks
that expect inputs to have only 1 batch dimension.
"""
def __init__(self, batch_dims):
"""Create two tied ops to flatten and unflatten the front dimensions.
Args:
batch_dims (int): Number of batch dimensions the flatten/unflatten
ops should handle.
Raises:
ValueError: if batch dims is negative.
"""
if batch_dims < 0:
raise ValueError('Batch dims must be non-negative.')
self._batch_dims = batch_dims
self._original_tensor_shape = None
def flatten(self, tensor):
"""Flattens and caches the tensor's batch_dims."""
if self._batch_dims == 1:
return tensor
self._original_tensor_shape = tensor.shape
return torch.reshape(tensor,
(-1, ) + tuple(tensor.shape[self._batch_dims:]))
def unflatten(self, tensor):
"""Unflattens the tensor's batch_dims using the cached shape."""
if self._batch_dims == 1:
return tensor
if self._original_tensor_shape is None:
raise ValueError('Please call flatten before unflatten.')
return torch.reshape(
tensor, (tuple(self._original_tensor_shape[:self._batch_dims]) +
tuple(tensor.shape[1:])))
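# Usage sketch (hedged):
#   bs = BatchSquash(2)
#   flat = bs.flatten(torch.zeros(4, 5, 3))   # shape becomes [20, 3]
#   back = bs.unflatten(flat)                 # shape restored to [4, 5, 3]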
@gin.configurable
class OneHot(nn.Module):
def __init__(self, num_classes):
super().__init__()
self._num_classes = num_classes
def forward(self, input):
return nn.functional.one_hot(
input, num_classes=self._num_classes).to(torch.float32)
@gin.configurable
class FixedDecodingLayer(nn.Module):
def __init__(self,
input_size,
output_size,
basis_type="rbf",
sigma=1.,
tau=0.5):
"""A layer that uses a set of fixed basis for decoding the inputs.
Args:
input_size (int): the size of input to be decoded, representing the
number of representation coefficients
output_size (int): the size of the decoded output
basis_type (str): the type of basis to be used for decoding
- "poly": polynomial basis using Vandermonde matrix
- "cheb": polynomial basis using Chebyshev polynomials
- "rbf": radial basis functions
- "haar": Haar wavelet basis
sigma (float): the bandwidth parameter used for RBF basis.
If None, a default value of 1. will be used.
tau (float): a factor for weighting the basis exponentially
according to the order (``n``) of the basis, i.e., ``tau**n``
"""
# get the argument list with vals
self._kwargs = copy.deepcopy(locals())
self._kwargs.pop('self')
self._kwargs.pop('__class__')
super(FixedDecodingLayer, self).__init__()
assert input_size > 0, "input_size should be at least one"
assert basis_type in {"poly", "cheb", "rbf", "haar"
}, ("the specified method "
"{} is not supported".format(basis_type))
self._B = nn.Linear(input_size, output_size, bias=False)
def _polyvander_matrix(n, D, tau=tau):
# non-square matrix [n, D + 1]
x = torch.linspace(-1, 1, n)
B = torch.as_tensor(np.polynomial.polynomial.polyvander(x, D))
# weight for encoding the preference to low-frequency basis
exp_factor = torch.arange(D + 1).float()
basis_weight = tau**exp_factor
return B * basis_weight
def _chebvander_matrix(n, D, tau=tau):
# non-square matrix [n, D + 1]
x = np.linspace(-1, 1, n)
B = torch.as_tensor(np.polynomial.chebyshev.chebvander(x, D))
# weight for encoding the preference to low-frequency basis
exp_factor = torch.arange(D + 1).float()
basis_weight = tau**exp_factor
return B * basis_weight
def _rbf_matrix(n, sigma=1.0):
# square matrix [n, n]
x = torch.linspace(-1, 1, n)
B = torch.empty(n, n)
for d in range(n):
B[:, d] = torch.exp(-(x - x[d])**2 / sigma)
return B
def _haar_matrix(n, tau=tau):
# square matrix [n, n]
def _is_power_of_two(x):
return (x & (x - 1)) == 0
# allow only size n to be the power of 2
assert _is_power_of_two(n), "n is required to be the power of 2"
def _get_haar_matrix(n):
if n > 2:
h = _get_haar_matrix(n // 2)
else:
return torch.Tensor([[1, 1], [1, -1]])
def _kron(A, B):
return torch.einsum("ab,cd->acbd", A, B).view(
A.size(0) * B.size(0),
A.size(1) * B.size(1))
# calculate upper haar part
h_n = _kron(h, torch.Tensor([[1], [1]]))
# calculate lower haar part
h_i = torch.sqrt(torch.Tensor([n / 2])) * _kron(
torch.eye(len(h)), torch.Tensor([[1], [-1]]))
# combine both parts
h = torch.cat((h_n, h_i), dim=1)
return h
B = _get_haar_matrix(n) / torch.sqrt(torch.Tensor([n]))
# weight for encoding the preference to low-frequency basis
exp_factor = torch.ceil(torch.log2(torch.arange(n).float() + 1))
basis_weight = tau**exp_factor
return B * basis_weight
if basis_type == "poly":
B = _polyvander_matrix(output_size, input_size - 1)
elif basis_type == "cheb":
B = _chebvander_matrix(output_size, input_size - 1)
elif basis_type == "rbf":
assert input_size == output_size
B = _rbf_matrix(input_size, sigma=sigma)
elif basis_type == "haar":
assert input_size == output_size
B = _haar_matrix(input_size)
# assign the constructed transformation matrix and set it to be non-trainable
self._B.weight.requires_grad = False
self._B.weight.copy_(B)
def forward(self, inputs):
return self._B(inputs)
@property
def weight(self):
return self._B.weight
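# Added usage sketch (not part of the original module), assuming the
# ``FixedDecodingLayer`` defined above: with the "rbf" (and "haar") basis the
# input and output sizes must match, and the decoding matrix stays frozen.
def _fixed_decoding_layer_example():
    layer = FixedDecodingLayer(
        input_size=8, output_size=8, basis_type="rbf", sigma=1.)
    coeffs = torch.randn(4, 8)  # a batch of 4 coefficient vectors
    decoded = layer(coeffs)  # decoded signals, shape [4, 8]
    assert decoded.shape == (4, 8)
    assert not layer.weight.requires_grad  # the basis matrix is not trainable
    return decoded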
@gin.configurable
class FC(nn.Module):
def __init__(self,
input_size,
output_size,
activation=identity,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""A fully connected layer that's also responsible for activation and
customized weights initialization. An auto gain calculation might depend
on the activation following the linear layer. Suggest using this wrapper
module instead of ``nn.Linear`` if you really care about weight std after
init.
Args:
input_size (int): input size
output_size (int): output size
activation (torch.nn.functional):
use_bias (bool): whether use bias
kernel_initializer (Callable): initializer for the FC layer kernel.
If none is provided a ``variance_scaling_initializer`` with gain as
``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to
the std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
# get the argument list with vals
self._kwargs = copy.deepcopy(locals())
self._kwargs.pop('self')
self._kwargs.pop('__class__')
super(FC, self).__init__()
self._activation = activation
self._linear = nn.Linear(input_size, output_size, bias=use_bias)
self._kernel_initializer = kernel_initializer
self._kernel_init_gain = kernel_init_gain
self._bias_init_value = bias_init_value
self._use_bias = use_bias
self.reset_parameters()
def reset_parameters(self):
if self._kernel_initializer is None:
variance_scaling_init(
self._linear.weight.data,
gain=self._kernel_init_gain,
nonlinearity=self._activation)
else:
self._kernel_initializer(self._linear.weight.data)
if self._use_bias:
nn.init.constant_(self._linear.bias.data, self._bias_init_value)
def forward(self, inputs):
return self._activation(self._linear(inputs))
@property
def weight(self):
return self._linear.weight
@property
def bias(self):
return self._linear.bias
def make_parallel(self, n):
"""Create a ``ParallelFC`` using ``n`` replicas of ``self``.
The initialized layer parameters will be different.
"""
return ParallelFC(n=n, **self._kwargs)
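# Added usage sketch (not part of the original module), assuming the ``FC``
# layer above and the module-level ``variance_scaling_init``/``identity``
# helpers it relies on; the sizes are arbitrary example values.
def _fc_example():
    fc = FC(input_size=16, output_size=32, activation=torch.relu_)
    y = fc(torch.randn(8, 16))  # activation applied inside, shape [8, 32]
    parallel = fc.make_parallel(n=3)  # 3 independently initialized replicas
    y3 = parallel(torch.randn(8, 16))  # shared input -> shape [8, 3, 32]
    return y, y3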
@gin.configurable
class ParallelFC(nn.Module):
def __init__(self,
input_size,
output_size,
n,
activation=identity,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""Parallel FC layer.
It is equivalent to ``n`` separate FC layers with the same
``input_size`` and ``output_size``.
Args:
input_size (int): input size
output_size (int): output size
n (int): n independent ``FC`` layers
activation (torch.nn.functional):
use_bias (bool): whether use bias
kernel_initializer (Callable): initializer for the FC layer kernel.
If none is provided a ``variance_scaling_initializer`` with gain
as ``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to
the std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
super().__init__()
self._activation = activation
self._weight = nn.Parameter(torch.Tensor(n, output_size, input_size))
if use_bias:
self._bias = nn.Parameter(torch.Tensor(n, output_size))
else:
self._bias = None
for i in range(n):
if kernel_initializer is None:
variance_scaling_init(
self._weight.data[i],
gain=kernel_init_gain,
nonlinearity=self._activation)
else:
kernel_initializer(self._weight.data[i])
if use_bias:
nn.init.constant_(self._bias.data, bias_init_value)
def forward(self, inputs):
"""Forward
Args:
inputs (torch.Tensor): with shape ``[B, n, input_size]`` or ``[B, input_size]``
Returns:
torch.Tensor with shape ``[B, n, output_size]``
"""
n, k, l = self._weight.shape
if inputs.ndim == 2:
assert inputs.shape[1] == l, (
"inputs has wrong shape %s. Expecting (B, %d)" % (inputs.shape,
l))
inputs = inputs.unsqueeze(0).expand(n, *inputs.shape)
elif inputs.ndim == 3:
assert (inputs.shape[1] == n and inputs.shape[2] == l), (
"inputs has wrong shape %s. Expecting (B, %d, %d)" %
(inputs.shape, n, l))
inputs = inputs.transpose(0, 1) # [n, B, l]
else:
raise ValueError("Wrong inputs.ndim=%d" % inputs.ndim)
if self.bias is not None:
y = torch.baddbmm(
self._bias.unsqueeze(1), inputs,
self.weight.transpose(1, 2)) # [n, B, k]
else:
y = torch.bmm(inputs, self._weight.transpose(1, 2)) # [n, B, k]
y = y.transpose(0, 1) # [B, n, k]
return self._activation(y)
@property
def weight(self):
"""Get the weight Tensor.
Returns:
Tensor: with shape (n, output_size, input_size). ``weight[i]`` is
the weight for the i-th FC layer. ``weight[i]`` can be used for
``FC`` layer with the same ``input_size`` and ``output_size``
"""
return self._weight
@property
def bias(self):
"""Get the bias Tensor.
Returns:
Tensor: with shape (n, output_size). ``bias[i]`` is the bias for the
i-th FC layer. ``bias[i]`` can be used for ``FC`` layer with
the same ``input_size`` and ``output_size``
"""
return self._bias
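# Added illustration (not part of the original module) of the two input
# layouts accepted by ``ParallelFC.forward``, as described in its docstring;
# the sizes are arbitrary example values.
def _parallel_fc_example():
    pfc = ParallelFC(input_size=5, output_size=7, n=3)
    shared = pfc(torch.randn(4, 5))  # one input broadcast to all 3 replicas
    separate = pfc(torch.randn(4, 3, 5))  # one input slice per replica
    assert shared.shape == separate.shape == (4, 3, 7)
    return shared, separate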
@gin.configurable
class Conv2D(nn.Module):
def __init__(self,
in_channels,
out_channels,
kernel_size,
activation=torch.relu_,
strides=1,
padding=0,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""A 2D Conv layer that's also responsible for activation and customized
weights initialization. An auto gain calculation might depend on the
activation following the conv layer. Suggest using this wrapper module
instead of ``nn.Conv2d`` if you really care about weight std after init.
Args:
in_channels (int): channels of the input image
out_channels (int): channels of the output image
kernel_size (int or tuple):
activation (torch.nn.functional):
strides (int or tuple):
padding (int or tuple):
use_bias (bool):
kernel_initializer (Callable): initializer for the conv layer kernel.
If None is provided a variance_scaling_initializer with gain as
``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to the
std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
super(Conv2D, self).__init__()
self._activation = activation
self._conv2d = nn.Conv2d(
in_channels,
out_channels,
kernel_size,
stride=strides,
padding=padding,
bias=use_bias)
if kernel_initializer is None:
variance_scaling_init(
self._conv2d.weight.data,
gain=kernel_init_gain,
nonlinearity=self._activation)
else:
kernel_initializer(self._conv2d.weight.data)
if use_bias:
nn.init.constant_(self._conv2d.bias.data, bias_init_value)
def forward(self, img):
return self._activation(self._conv2d(img))
@property
def weight(self):
return self._conv2d.weight
@property
def bias(self):
return self._conv2d.bias
@gin.configurable
class ParallelConv2D(nn.Module):
def __init__(self,
in_channels,
out_channels,
kernel_size,
n,
activation=torch.relu_,
strides=1,
padding=0,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""A parallel 2D Conv layer that can be used to perform n independent
2D convolutions in parallel.
It is equivalent to ``n`` separate ``Conv2D`` layers with the same
``in_channels`` and ``out_channels``.
Args:
in_channels (int): channels of the input image
out_channels (int): channels of the output image
kernel_size (int or tuple):
n (int): n independent ``Conv2D`` layers
activation (torch.nn.functional):
strides (int or tuple):
padding (int or tuple):
use_bias (bool):
kernel_initializer (Callable): initializer for the conv layer kernel.
If None is provided a ``variance_scaling_initializer`` with gain
as ``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to the
std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
super(ParallelConv2D, self).__init__()
self._activation = activation
self._n = n
self._in_channels = in_channels
self._out_channels = out_channels
self._kernel_size = common.tuplify2d(kernel_size)
self._conv2d = nn.Conv2d(
in_channels * n,
out_channels * n,
kernel_size,
groups=n,
stride=strides,
padding=padding,
bias=use_bias)
for i in range(n):
if kernel_initializer is None:
variance_scaling_init(
self._conv2d.weight.data[i * out_channels:(i + 1) *
out_channels],
gain=kernel_init_gain,
nonlinearity=self._activation)
else:
kernel_initializer(
self._conv2d.weight.data[i * out_channels:(i + 1) *
out_channels])
# [n*C', C, kernel_size, kernel_size]->[n, C', C, kernel_size, kernel_size]
self._weight = self._conv2d.weight.view(
self._n, self._out_channels, self._in_channels,
self._kernel_size[0], self._kernel_size[1])
if use_bias:
nn.init.constant_(self._conv2d.bias.data, bias_init_value)
# [n*C']->[n, C']
self._bias = self._conv2d.bias.view(self._n, self._out_channels)
else:
self._bias = None
def forward(self, img):
"""Forward
Args:
img (torch.Tensor): with shape ``[B, C, H, W]``
or ``[B, n, C, H, W]``
where the meaning of the symbols are:
- ``B``: batch size
- ``n``: number of replicas
- ``C``: number of channels
- ``H``: image height
- ``W``: image width.
When the shape of img is ``[B, C, H, W]``, all the n 2D Conv
operations will take img as the same shared input.
When the shape of img is ``[B, n, C, H, W]``, each 2D Conv operator
will have its own input data by slicing img.
Returns:
torch.Tensor with shape ``[B, n, C', H', W']``
where the meaning of the symbols are:
- ``B``: batch
- ``n``: number of replicas
- ``C'``: number of output channels
- ``H'``: output height
- ``W'``: output width
"""
if img.ndim == 4:
# the shared input case
assert img.shape[1] == self._in_channels, (
"Input img has wrong shape %s. Expecting (B, %d, H, W)" %
(img.shape, self._in_channels))
img = img.unsqueeze(1).expand(img.shape[0], self._n,
*img.shape[1:])
elif img.ndim == 5:
# the non-shared case
assert (
img.shape[1] == self._n
and img.shape[2] == self._in_channels), (
"Input img has wrong shape %s. Expecting (B, %d, %d, H, W)"
% (img.shape, self._n, self._in_channels))
else:
raise ValueError("Wrong img.ndim=%d" % img.ndim)
# merge replica and channels
img = img.reshape(img.shape[0], img.shape[1] * img.shape[2],
*img.shape[3:])
res = self._activation(self._conv2d(img))
# reshape back: [B, n*C', H', W'] -> [B, n, C', H', W']
res = res.reshape(res.shape[0], self._n, self._out_channels,
*res.shape[2:])
return res
@property
def weight(self):
return self._weight
@property
def bias(self):
return self._bias
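# Added illustration (not part of the original module) of the shared vs.
# per-replica input layouts handled by ``ParallelConv2D.forward``; the sizes
# are arbitrary example values.
def _parallel_conv2d_example():
    pconv = ParallelConv2D(
        in_channels=3, out_channels=8, kernel_size=3, n=4, padding=1)
    shared = pconv(torch.randn(2, 3, 16, 16))  # [B, C, H, W], shared by replicas
    separate = pconv(torch.randn(2, 4, 3, 16, 16))  # [B, n, C, H, W], one slice each
    assert shared.shape == separate.shape == (2, 4, 8, 16, 16)
    return shared, separate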
@gin.configurable
class ConvTranspose2D(nn.Module):
def __init__(self,
in_channels,
out_channels,
kernel_size,
activation=torch.relu_,
strides=1,
padding=0,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""A 2D ConvTranspose layer that's also responsible for activation and
customized weights initialization. An auto gain calculation might depend
on the activation following the conv layer. Suggest using this wrapper
module instead of ``nn.ConvTranspose2d`` if you really care about weight std
after init.
Args:
in_channels (int): channels of the input image
out_channels (int): channels of the output image
kernel_size (int or tuple):
activation (torch.nn.functional):
strides (int or tuple):
padding (int or tuple):
use_bias (bool):
kernel_initializer (Callable): initializer for the conv_trans layer.
If None is provided a variance_scaling_initializer with gain as
``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to the
std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
super(ConvTranspose2D, self).__init__()
self._activation = activation
self._conv_trans2d = nn.ConvTranspose2d(
in_channels,
out_channels,
kernel_size,
stride=strides,
padding=padding,
bias=use_bias)
if kernel_initializer is None:
variance_scaling_init(
self._conv_trans2d.weight.data,
gain=kernel_init_gain,
nonlinearity=self._activation,
transposed=True)
else:
kernel_initializer(self._conv_trans2d.weight.data)
if use_bias:
nn.init.constant_(self._conv_trans2d.bias.data, bias_init_value)
def forward(self, img):
return self._activation(self._conv_trans2d(img))
@property
def weight(self):
return self._conv_trans2d.weight
@property
def bias(self):
return self._conv_trans2d.bias
@gin.configurable
class ParallelConvTranspose2D(nn.Module):
def __init__(self,
in_channels,
out_channels,
kernel_size,
n,
activation=torch.relu_,
strides=1,
padding=0,
use_bias=True,
kernel_initializer=None,
kernel_init_gain=1.0,
bias_init_value=0.0):
"""A parallel ConvTranspose2D layer that can be used to perform n
independent 2D transposed convolutions in parallel.
Args:
in_channels (int): channels of the input image
out_channels (int): channels of the output image
kernel_size (int or tuple):
n (int): n independent ``ConvTranspose2D`` layers
activation (torch.nn.functional):
strides (int or tuple):
padding (int or tuple):
use_bias (bool):
kernel_initializer (Callable): initializer for the conv_trans layer.
If None is provided a ``variance_scaling_initializer`` with gain
as ``kernel_init_gain`` will be used.
kernel_init_gain (float): a scaling factor (gain) applied to the
std of kernel init distribution. It will be ignored if
``kernel_initializer`` is not None.
bias_init_value (float): a constant
"""
super(ParallelConvTranspose2D, self).__init__()
self._activation = activation
self._n = n
self._in_channels = in_channels
self._out_channels = out_channels
self._kernel_size = common.tuplify2d(kernel_size)
self._conv_trans2d = nn.ConvTranspose2d(
in_channels * n,
out_channels * n,
kernel_size,
groups=n,
stride=strides,
padding=padding,
bias=use_bias)
for i in range(n):
if kernel_initializer is None:
variance_scaling_init(
self._conv_trans2d.weight.data[i * in_channels:(i + 1) *
in_channels],
gain=kernel_init_gain,
nonlinearity=self._activation)
else:
kernel_initializer(
self._conv_trans2d.weight.data[i * in_channels:(i + 1) *
in_channels])
# [n*C, C', kernel_size, kernel_size]->[n, C, C', kernel_size, kernel_size]
self._weight = self._conv_trans2d.weight.view(
self._n, self._in_channels, self._out_channels,
self._kernel_size[0], self._kernel_size[1])
if use_bias:
nn.init.constant_(self._conv_trans2d.bias.data, bias_init_value)
# [n*C]->[n, C]
self._bias = self._conv_trans2d.bias.view(self._n,
self._out_channels)
else:
self._bias = None
def forward(self, img):
"""Forward
Args:
img (torch.Tensor): with shape ``[B, C, H, W]``
or ``[B, n, C, H, W]``
where the meaning of the symbols are:
- ``B``: batch size
- ``n``: number of replicas
- ``C``: number of channels
- ``H``: image height
- ``W``: image width.
When the shape of img is ``[B, C, H, W]``, all the n transposed 2D
Conv operations will take img as the same shared input.
When the shape of img is ``[B, n, C, H, W]``, each transposed 2D
Conv operator will have its own input data by slicing img.
Returns:
torch.Tensor with shape ``[B, n, C', H', W']``
where the meaning of the symbols are:
- ``B``: batch
- ``n``: number of replicas
- ``C'``: number of output channels
- ``H'``: output height
- ``W'``: output width
"""
if img.ndim == 4:
# the shared input case
assert img.shape[1] == self._in_channels, (
"Input img has wrong shape %s. Expecting (B, %d, H, W)" %
(img.shape, self._in_channels))
img = img.unsqueeze(1).expand(img.shape[0], self._n,
*img.shape[1:])
elif img.ndim == 5:
# the non-shared case
assert (
img.shape[1] == self._n
and img.shape[2] == self._in_channels), (
"Input img has wrong shape %s. Expecting (B, %d, %d, H, W)"
% (img.shape, self._n, self._in_channels))
else:
raise ValueError("Wrong img.ndim=%d" % img.ndim)
# merge replica and channels
img = img.reshape(img.shape[0], img.shape[1] * img.shape[2],
*img.shape[3:])
res = self._activation(self._conv_trans2d(img))
# reshape back: [B, n*C', H', W'] -> [B, n, C', H', W']
res = res.reshape(res.shape[0], self._n, self._out_channels,
res.shape[2], res.shape[3])
return res
@property
def weight(self):
return self._weight
@property
def bias(self):
return self._bias
class Reshape(nn.Module):
def __init__(self, shape):
"""A layer for reshape the tensor.
The result of this layer is a tensor reshaped to ``(B, *shape)`` where
``B`` is ``x.shape[0]``
Args:
shape (tuple): desired shape not including the batch dimension.
"""
super().__init__()
self._shape = shape
def forward(self, x):
return x.reshape(x.shape[0], *self._shape)
def _tuplify2d(x):
if isinstance(x, tuple):
assert len(x) == 2
return x
return (x, x)
def _conv_transpose_2d(in_channels,
out_channels,
kernel_size,
stride=1,
padding=0):
# need output_padding so that output_size is stride * input_size
# See https://pytorch.org/docs/stable/nn.html#torch.nn.ConvTranspose2d
output_padding = stride + 2 * padding - kernel_size
return nn.ConvTranspose2d(
in_channels,
out_channels,
kernel_size,
stride=stride,
padding=padding,
output_padding=output_padding)
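# Added sanity check (not part of the original module): with
# ``output_padding = stride + 2 * padding - kernel_size`` the transposed conv
# output size is exactly ``stride * input_size``, because
# out = (in - 1) * stride - 2 * padding + kernel_size + output_padding = in * stride.
def _conv_transpose_2d_shape_check():
    layer = _conv_transpose_2d(3, 6, kernel_size=4, stride=2, padding=1)
    out = layer(torch.randn(1, 3, 8, 8))
    assert out.shape == (1, 6, 16, 16)  # stride * 8 on each spatial dim
    return out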
@gin.configurable(whitelist=['v1_5', 'with_batch_normalization'])
class BottleneckBlock(nn.Module):
"""Bottleneck block for ResNet.
We allow two slightly different architectures:
* v1: Placing the stride at the first 1x1 convolution as described in the
original ResNet paper `Deep residual learning for image recognition
<https://arxiv.org/abs/1512.03385>`_.
* v1.5: Placing the stride for downsampling at 3x3 convolution. This variant
is also known as ResNet V1.5 and improves accuracy according to
`<https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch>`_.
"""
def __init__(self,
in_channels,
kernel_size,
filters,
stride,
transpose=False,
v1_5=True,
with_batch_normalization=True):
"""
Args:
kernel_size (int): the kernel size of the middle layer on the main path
filters (tuple of int): the number of filters for the 3 layers on the main path
stride (int): stride for this block.
transpose (bool): a bool indicating whether to use ``Conv2D`` or ``ConvTranspose2D``.
If two BottleneckBlock layers ``L`` and ``LT`` are constructed
with the same arguments except ``transpose``, it is guaranteed that
``LT(L(x)).shape == x.shape`` if ``x.shape[-2:]`` can be divided
by ``stride``.
v1_5 (bool): whether to use the ResNet V1.5 structure
with_batch_normalization (bool): whether to include batch normalization.
Note that standard ResNet uses batch normalization.
Return:
Output tensor for the block
"""
super().__init__()
filters1, filters2, filters3 = filters
conv_fn = _conv_transpose_2d if transpose else nn.Conv2d
padding = (kernel_size - 1) // 2
if v1_5:
a = conv_fn(in_channels, filters1, 1)
b = conv_fn(filters1, filters2, kernel_size, stride, padding)
else:
a = conv_fn(in_channels, filters1, 1, stride)
b = conv_fn(filters1, filters2, kernel_size, 1, padding)
nn.init.kaiming_normal_(a.weight.data)
nn.init.zeros_(a.bias.data)
nn.init.kaiming_normal_(b.weight.data)
nn.init.zeros_(b.bias.data)
c = conv_fn(filters2, filters3, 1)
nn.init.kaiming_normal_(c.weight.data)
nn.init.zeros_(c.bias.data)
s = conv_fn(in_channels, filters3, 1, stride)
nn.init.kaiming_normal_(s.weight.data)
nn.init.zeros_(s.bias.data)
relu = nn.ReLU(inplace=True)
if with_batch_normalization:
core_layers = nn.Sequential(a, nn.BatchNorm2d(filters1), relu, b,
nn.BatchNorm2d(filters2), relu, c,
nn.BatchNorm2d(filters3))
shortcut_layers = nn.Sequential(s, nn.BatchNorm2d(filters3))
else:
core_layers = nn.Sequential(a, relu, b, relu, c)
shortcut_layers = s
self._core_layers = core_layers
self._shortcut_layers = shortcut_layers
def forward(self, inputs):
core = self._core_layers(inputs)
shortcut = self._shortcut_layers(inputs)
return torch.relu_(core + shortcut)
def calc_output_shape(self, input_shape):
x = torch.zeros(1, *input_shape)
y = self.forward(x)
return y.shape[1:]
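# Added usage sketch (not part of the original module): a downsampling
# ``BottleneckBlock`` paired with its transposed counterpart restores the
# spatial size when it is divisible by ``stride``, as stated in the docstring;
# the channel/filter sizes are arbitrary example values.
def _bottleneck_block_example():
    down = BottleneckBlock(
        in_channels=16, kernel_size=3, filters=(8, 8, 32), stride=2)
    up = BottleneckBlock(
        in_channels=32, kernel_size=3, filters=(8, 8, 16), stride=2,
        transpose=True)
    x = torch.randn(1, 16, 32, 32)
    y = down(x)  # [1, 32, 16, 16]
    assert up(y).shape == x.shape  # back to [1, 16, 32, 32]
    assert down.calc_output_shape((16, 32, 32)) == y.shape[1:]
    return y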
| 37.312697 | 91 | 0.555556 | 4,251 | 35,559 | 4.466714 | 0.11362 | 0.019486 | 0.01917 | 0.012166 | 0.604487 | 0.577681 | 0.549821 | 0.508058 | 0.494523 | 0.466768 | 0 | 0.012185 | 0.349138 | 35,559 | 952 | 92 | 37.351891 | 0.808244 | 0.35659 | 0 | 0.556622 | 0 | 0.003839 | 0.032739 | 0.001147 | 0 | 0 | 0 | 0 | 0.024952 | 1 | 0.09405 | false | 0 | 0.019194 | 0.036468 | 0.213052 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaacf74339175e22492daa155ed4a5b2bbaa69e4 | 8,681 | py | Python | neurolang/tests/test_interval_algebra.py | hndgzkn/NeuroLang | a3178d47f80bc0941440d9bb09e06c2f217b9566 | [
"BSD-3-Clause"
] | 1 | 2021-01-07T02:00:22.000Z | 2021-01-07T02:00:22.000Z | neurolang/tests/test_interval_algebra.py | hndgzkn/NeuroLang | a3178d47f80bc0941440d9bb09e06c2f217b9566 | [
"BSD-3-Clause"
] | 207 | 2020-11-04T12:51:10.000Z | 2022-03-30T13:42:26.000Z | neurolang/tests/test_interval_algebra.py | hndgzkn/NeuroLang | a3178d47f80bc0941440d9bb09e06c2f217b9566 | [
"BSD-3-Clause"
] | 6 | 2020-11-04T13:59:35.000Z | 2021-03-19T05:28:10.000Z | from numpy import random
from ..interval_algebra import (
converse, meets, before, starts, during,
finishes, equals, overlaps, negate,
get_intervals_relations
)
from ..regions import Region
from copy import deepcopy
def app(x, z, first, second, elements):
elements.remove(x)
elements.remove(z)
for y in elements:
if first(x, y) and second(y, z):
return True
return False
def composition(relations, domain, convert=False):
if convert:
relations = [[converse(f), converse(g)] for [f, g] in relations]
return [
lambda x, z, f=pair_of_fs[0], g=pair_of_fs[1]:
app(x, z, f, g, deepcopy(domain)) for pair_of_fs in relations
]
def apply_composition(
relations,
parameters,
negate_whole_expression=False,
negations=None,
conversion=False
):
if conversion:
parameters = list(reversed(parameters))
res = True
for i in range(len(relations)):
result = relations[i](parameters[i][0], parameters[i][1])
if negations:
if negations[i] != (not result):
res = False
break
else:
if not result:
res = False
break
return negate_whole_expression != res
def test_ia_relations_functions():
intervals = [
tuple([1, 2]),
tuple([5, 7]),
tuple([1, 5]),
tuple([4, 6]),
tuple([2, 4]),
tuple([6, 7]),
tuple([2, 4])
]
assert before(intervals[0], intervals[1])
assert meets(intervals[0], intervals[4])
assert starts(intervals[0], intervals[2])
assert during(intervals[4], intervals[2])
assert overlaps(intervals[3], intervals[1])
assert finishes(intervals[5], intervals[1])
assert equals(intervals[4], intervals[6])
assert not equals(intervals[1], intervals[0])
assert not during(intervals[1], intervals[2])
assert not overlaps(intervals[0], intervals[2])
assert not starts(intervals[3], intervals[4])
def test_compositions():
elems = [tuple([1, 2]), tuple([4, 6]), tuple([8, 10])]
rel = composition([[before, before]], elems)
assert apply_composition(rel, [[tuple([1, 2]), tuple([8, 10])]])
rel = composition([[before, before]], elems)
assert not apply_composition(rel, [[tuple([4, 6]), tuple([8, 10])]])
elems.append(tuple([1, 5]))
rel = composition([[starts, before]], elems)
assert apply_composition(rel, [[tuple([1, 2]), tuple([8, 10])]])
elems.append(tuple([1, 2]))
rel = composition([[equals, starts]], elems)
assert apply_composition(rel, [[tuple([1, 2]), tuple([1, 5])]])
elems.append(tuple([2, 5]))
rel = composition([[meets, overlaps]], elems)
assert apply_composition(rel, [[tuple([1, 2]), tuple([4, 6])]])
# multiple compositions
elems.append(tuple([1, 2]))
elems.append(tuple([1, 2]))
rel = composition([[equals, equals], [equals, equals]], elems)
assert apply_composition(
rel, [[tuple([1, 2]), tuple([1, 2])], [tuple([1, 2]),
tuple([1, 2])]]
)
elems.append(tuple([5, 8]))
elems.append(tuple([0, 1]))
rel = composition([[before, overlaps], [overlaps, overlaps]], elems)
assert apply_composition(
rel, [[tuple([0, 1]), tuple([4, 6])], [tuple([2, 5]),
tuple([5, 8])]]
)
def test_calculus_axioms():
elems = [tuple(random.randint(1, 100, size=2)) for _ in range(10)]
# Huntington's axiom
r, s = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 2)
i, j = random.choice(range(len(elems)), 2, replace=False)
assert not (not r(elems[i], elems[j]) or not s(elems[i], elems[j])) or (
not ((not r(elems[i], elems[j])) or s(elems[i], elems[j]))
) == r(elems[i], elems[j])
# identity
i, j = random.choice(range(len(elems)), 2)
elems.append(elems[j])
rel = composition([[meets, equals]], elems)
assert apply_composition(rel, [[elems[i], elems[j]]
]) == meets(elems[i], elems[j])
i, j = random.choice(range(len(elems)), 2)
elems.append(elems[i])
rel = composition([[equals, meets]], elems)
assert apply_composition(rel, [[elems[i], elems[j]]
]) == meets(elems[i], elems[j])
# involution
for op in [before, overlaps, during, meets, starts, finishes, equals]:
i, j = random.choice(range(len(elems)), 2, replace=False)
converse(converse(op))(elems[i], elems[j]) == op(elems[i], elems[j])
# associativity
r, s, t = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 3)
c1 = composition([[r, s]], elems)
c2 = composition([[s, t]], elems)
c = composition([[r, s], [s, t]], elems)
i, j, k, l = random.choice(range(len(elems)), 4, replace=False)
t1, t2, t3, t4 = elems[i], elems[j], elems[k], elems[l]
assert (
apply_composition(c1, [[t1, t2]]) and
apply_composition(c2, [[t3, t4]])
) == apply_composition(c, [[t1, t2], [t3, t4]])
# distributivity
r, s, t = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 3)
c1 = composition([[r, t]], elems)
c2 = composition([[s, t]], elems)
c = composition([[random.choice([s, r]), t]], elems)
i, j = random.choice(range(len(elems)), 2, replace=False)
app = apply_composition(c, [[elems[i], elems[j]]])
assert (apply_composition(c1, [[
elems[i], elems[j]
]]) == app) or (apply_composition(c2, [[elems[i], elems[j]]]) == app)
# inv-distrib
r, s = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 2)
i, j, k, l = random.choice(range(len(elems)), 4, replace=False)
[i, j, k,
l] = [(q, p) for (p, q) in [elems[i], elems[j], elems[k], elems[l]]]
assert any([r(i, j), s(k,
l)]) == any([converse(r)(j, i),
converse(s)(l, k)])
# inv-involutive-distr
s, t = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 2)
c = composition([[s, t]], elems)
inv_c = composition([[converse(t), converse(s)]], elems)
i, j = random.choice(range(len(elems)), 2, replace=False)
assert apply_composition(c, [[
elems[i], elems[j]
]], conversion=True) == apply_composition(inv_c, [[elems[j], elems[i]]])
# Tarski/ de Morgan
r, s = random.choice([
before, overlaps, during, meets, starts, finishes, equals
], 2)
i, j = random.choice(range(len(elems)), 2, replace=False)
c = composition([[converse(r), negate(r)]], elems)
c2 = composition([[r, s]], elems)
assert (
apply_composition(
c, [[elems[i], elems[j]]],
negate_whole_expression=False,
negations=[False, True]
) and apply_composition(
c2, [[elems[i], elems[j]]], negate_whole_expression=True
) and (not s)
) == (not s)
def test_get_interval_relations_of_regions():
r1 = Region((1, 1, 1), (2, 2, 2))
r2 = Region((5, 5, 5), (8, 8, 8))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['b', 'b', 'b'])
r1 = Region((1, 1, 1), (10, 10, 10))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['di', 'di', 'di'])
r1 = Region((1, 1, 1), (6, 6, 6))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['o', 'o', 'o'])
r2 = Region((1, 1, 1), (2, 2, 2))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['si', 'si', 'si'])
r2 = Region((1, 1, 1), (6, 6, 6))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['e', 'e', 'e'])
r1 = Region((5, 5, 5), (8, 8, 8))
r2 = Region((8, 7, 12), (10, 8, 14))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['m', 'fi', 'b'])
assert get_intervals_relations(
r2.bounding_box.limits, r1.bounding_box.limits
) == tuple(['mi', 'f', 'bi'])
r1 = Region((5, 5, 5), (8, 8, 8))
r2 = Region((3, 3, 7), (6, 6, 9))
assert get_intervals_relations(
r1.bounding_box.limits, r2.bounding_box.limits
) == tuple(['oi', 'oi', 'o'])
assert get_intervals_relations(
r2.bounding_box.limits, r1.bounding_box.limits
) == tuple(['o', 'o', 'oi'])
| 33.517375 | 76 | 0.567446 | 1,157 | 8,681 | 4.182368 | 0.121003 | 0.029758 | 0.043191 | 0.047117 | 0.592478 | 0.535855 | 0.519529 | 0.460012 | 0.407109 | 0.369498 | 0 | 0.036706 | 0.253081 | 8,681 | 258 | 77 | 33.647287 | 0.709593 | 0.016127 | 0 | 0.339713 | 0 | 0 | 0.004572 | 0 | 0 | 0 | 0 | 0 | 0.167464 | 1 | 0.033493 | false | 0 | 0.019139 | 0 | 0.07177 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaadfda690a41074a15536c73a9293be7ac14796 | 13,011 | py | Python | scripts/cromwell/get_output_paths.py | leipzig/gatk-sv | 96566cbbaf0f8f9c8452517b38eea1e5dd6ed33a | [
"BSD-3-Clause"
] | 76 | 2020-06-18T21:31:43.000Z | 2022-03-02T18:42:58.000Z | scripts/cromwell/get_output_paths.py | iamh2o/gatk-sv | bf3704bd1d705339577530e267cd4d1b2f77a17f | [
"BSD-3-Clause"
] | 195 | 2020-06-22T15:12:28.000Z | 2022-03-28T18:06:46.000Z | scripts/cromwell/get_output_paths.py | iamh2o/gatk-sv | bf3704bd1d705339577530e267cd4d1b2f77a17f | [
"BSD-3-Clause"
] | 39 | 2020-07-03T06:47:18.000Z | 2022-03-03T03:47:25.000Z | #!/bin/python
import argparse
import json
import logging
import re
import os.path
from urllib.parse import urlparse
from google.cloud import storage
"""
Summary: Find GCS paths for specified workflow file outputs for multiple workflows at once without downloading metadata.
Caveats: Assumes cromwell file structure. Recommended for use with cromwell final_workflow_outputs_dir
to reduce number of files to search. Requires file suffixes for each output file that are
unique within the workflow directory.
For usage & parameters: Run python get_output_paths.py --help
Output: TSV file with columns for each output variable and a row for each
batch (or entity, if providing --entities-file), containing GCS output paths
Author: Emma Pierce-Hoffman (epierceh@broadinstitute.org)
"""
def check_file_nonempty(f):
# Validate existence of file and that it is > 0 bytes
if not os.path.isfile(f):
raise RuntimeError("Required input file %s does not exist." % f)
elif os.path.getsize(f) == 0:
raise RuntimeError("Required input file %s is empty." % f)
def read_entities_file(entities_file):
# Get list of entities from -e entities file
entities = []
if entities_file is not None:
# proceed with reading file - must not be None at this point
check_file_nonempty(entities_file)
with open(entities_file, 'r') as f:
for line in f:
entities.append(line.strip())
return entities
def load_filenames(filenames):
# Read -f filenames / output names JSON
files_dict = json.load(open(filenames, 'r'))
output_names = sorted(files_dict.keys())
if len(output_names) == 0:
raise ValueError("No output files to search for found in required -f/--filenames JSON %s." % filenames)
return files_dict, output_names
def split_bucket_subdir(directory):
# Parse -b URI input into top-level bucket name (no gs://) and subdirectory path
uri = urlparse(directory)
return uri.netloc, uri.path.lstrip("/")
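# Added usage sketch (not part of the original script); the bucket and path
# names are hypothetical examples.
def _split_bucket_subdir_example():
    bucket, prefix = split_bucket_subdir("gs://my-bucket/outputs/GATKSVPipeline")
    assert bucket == "my-bucket"
    assert prefix == "outputs/GATKSVPipeline"
    return bucket, prefix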
def get_batch_dirs(workflows, workflow_id, directory):
# Return list of (batch_name, batch_subdirectory) and top-level bucket parsed from -b URI input
batches_dirs = [] # to hold tuples of (batch, dir) in order given in input
bucket, subdir = split_bucket_subdir(directory)
# If using -i input, just add workflow ID to subdirectory path and return
if workflow_id is not None:
return [("placeholder_batch", os.path.join(subdir, workflow_id))], bucket
# If using -w input, read workflows file to get batch names and workflow IDs
with open(workflows, 'r') as inp:
for line in inp:
if line.strip() == "":
continue
(batch, workflow) = line.strip().split('\t')
batch_dir = os.path.join(subdir, workflow)
batches_dirs.append((batch, batch_dir))
return batches_dirs, bucket
def find_batch_output_files(batch, bucket, prefix, files_dict, output_names, num_outputs):
# Search batch directory for files with specified prefixes
# Get all objects in directory
storage_client = storage.Client()
blobs = storage_client.list_blobs(bucket, prefix=prefix,
delimiter=None) # only one workflow per batch - assumes caching if multiple
# Go through each object in directory once, checking if it matches any filenames not yet found
batch_outputs = {file: [] for file in output_names}
names_left = list(output_names)
num_found = 0
for blob in blobs:
blob_name = blob.name.strip()
# in case multiple files, continue matching on suffixes even if already found file match(es)
for name in output_names:
if blob_name.endswith(files_dict[name]):
blob_path = os.path.join("gs://", bucket, blob_name) # reconstruct URI
if len(batch_outputs[name]) == 0:
num_found += 1
names_left.remove(name)
batch_outputs[name].append(blob_path)
break
# Warn if some outputs not found
if num_found < num_outputs:
for name in names_left:
logging.warning(f"{batch} output file {name} not found in gs://{bucket}/{prefix}. Outputting empty string")
return batch_outputs
def sort_files_by_shard(file_list):
# Attempt to sort file list by shard number based on last occurrence of "shard-" in URI
if len(file_list) < 2:
return file_list
regex = r'^(shard-)([0-9]+)(/.*)' # extract shard number for sorting - group 2
shard_numbers = []
check_different_shard = None
for file in file_list:
index = file.rfind("shard-") # find index of last occurrence of shard- substring in file path
if index == -1:
return file_list # abandon sorting if no shard- substring
shard = int(re.match(regex, file[index:]).group(2))
# make sure first two shard numbers actually differ
if check_different_shard is None:
check_different_shard = shard
elif check_different_shard != -1:
if shard == check_different_shard:
return file_list # if first two shard numbers match, then abandon sorting by shard
check_different_shard = -1
shard_numbers.append(shard)
return [x for _, x in sorted(zip(shard_numbers, file_list), key=lambda pair: pair[0])]
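# Added usage sketch (not part of the original script), showing the numeric
# (rather than lexicographic) shard ordering; the URIs are hypothetical examples.
def _sort_files_by_shard_example():
    files = [
        "gs://bkt/wf/shard-10/out.vcf.gz",
        "gs://bkt/wf/shard-2/out.vcf.gz",
        "gs://bkt/wf/shard-0/out.vcf.gz",
    ]
    assert sort_files_by_shard(files) == [
        "gs://bkt/wf/shard-0/out.vcf.gz",
        "gs://bkt/wf/shard-2/out.vcf.gz",
        "gs://bkt/wf/shard-10/out.vcf.gz",
    ]
    return files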
def format_batch_line(batch, output_names, batch_outputs):
# Format line with batch and outputs (if not using entities option)
batch_line = batch + "\t"
batch_line += "\t".join(",".join(sort_files_by_shard(batch_outputs[name])) for name in output_names)
batch_line += "\n"
return batch_line
def update_entity_outputs(output_names, batch_outputs, entities, entity_outputs):
# Edit entity_outputs dict in place: add new batch outputs to each corresponding entity
for output_index, name in enumerate(output_names):
filepaths = batch_outputs[name]
filenames = [path.split("/")[-1] for path in filepaths]
for entity in entities: # not efficient but should be <500 entities and filenames to search
for i, filename in enumerate(filenames):
# cannot handle Array[File] output for one entity
if entity in filename and entity_outputs[entity][output_index] == "":
entity_outputs[entity][output_index] = filepaths[i]
filenames.remove(filename)
filepaths.remove(filepaths[i])
break
def write_entity_outputs(entity_outputs, keep_all_entities, entities, output_stream):
# Check, format, and write entity outputs
# do write inside function to be able to print line-by-line
for entity in entities:
# check for blank entities
if all(element == "" for element in entity_outputs[entity]):
if keep_all_entities:
logging.info(f"No output files found for entity '{entity}' in provided directories. "
f"Outputting blank entry. Remove -k argument to exclude empty entities.")
else:
logging.info(f"No output files found for entity '{entity}' in provided directories. "
f"Omitting from output. Use -k argument to include empty entities.")
continue
output_stream.write(entity + "\t" + "\t".join(entity_outputs[entity]) + "\n")
def retrieve_and_write_output_files(batches_dirs, bucket, files_dict, output_names, output_file,
entities, entity_type, keep_all_entities):
num_outputs = len(output_names)
num_entities = len(entities)
entity_outputs = {entity: [""] * num_outputs for entity in entities} # empty if entities is empty
logging.info("Writing %s" % output_file)
with open(output_file, 'w') as out:
out.write(entity_type + "\t" + "\t".join(output_names) + "\n")
for batch, batch_dir in batches_dirs:
logging.info("Searching for outputs for %s" % batch)
batch_outputs = find_batch_output_files(batch, bucket, batch_dir, files_dict, output_names, num_outputs)
if num_entities > 0:
update_entity_outputs(output_names, batch_outputs, entities, entity_outputs)
else:
batch_line = format_batch_line(batch, output_names, batch_outputs)
out.write(batch_line)
if num_entities > 0:
write_entity_outputs(entity_outputs, keep_all_entities, entities, out)
logging.info("Done!")
# Main function
def main():
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("-w", "--workflows-file",
help="TSV file (no header) with batch (or sample) names and workflow IDs (one workflow "
"per batch). Either -i or -w required.")
group.add_argument("-i", "--workflow-id",
help="Workflow ID provided directly on the command line; alternative to -w if only "
"one workflow. Either -i or -w required.")
parser.add_argument("-f", "--filenames", required=True,
help="JSON file with workflow output file names (for column names in output TSV) and a "
"unique filename suffix expected for each workflow output. "
"Format is { \"output_file_name\": \"unique_file_suffix\" }.")
parser.add_argument("-o", "--output-file", required=True, help="Output file path to create")
parser.add_argument("-b", "--bucket", required=True,
help="Google bucket path to search for files - should include all subdirectories "
"preceding the workflow ID, including the workflow name.")
parser.add_argument("-l", "--log-level", required=False, default="INFO",
help="Specify level of logging information, ie. info, warning, error (not case-sensitive). "
"Default: INFO")
parser.add_argument("-e", "--entities-file", required=False,
help="Newline-separated text file of entity (ie. sample, batch) names (no header). "
"Entity here refers to units, like samples within a batch or batches within a cohort, "
"for which the workflow(s) produced outputs; the script expects one output per entity "
"for all outputs, with the filename containing the entity ID provided in the entities "
"file. Output will have one line per entity in the order provided. "
"If multiple batches, outputs will be concatenated and order may be affected.")
parser.add_argument("-t", "--entity-type", required=False, default="batch",
help="Entity type (ie. sample, batch) of each line of output. If using -e, then define "
"what each entity name in the file is (ie. a sample, a batch). Otherwise, define "
"what each workflow corresponds to. This type will be the first column name. "
"Default: batch")
parser.add_argument("-k", "--keep-all-entities", required=False, default=False, action='store_true',
help="With --entities-file, output a line for every entity, even if none of the "
"output files are found.")
args = parser.parse_args()
# Set logging level from -l input
log_level = args.log_level
numeric_level = getattr(logging, log_level.upper(), None)
if not isinstance(numeric_level, int):
raise ValueError('Invalid log level: %s' % log_level)
logging.basicConfig(level=numeric_level, format='%(levelname)s: %(message)s')
# Set required arguments. Validate existence of & read filenames JSON
filenames, output_file, bucket = args.filenames, args.output_file, args.bucket # required
check_file_nonempty(filenames)
files_dict, output_names = load_filenames(filenames)
# Determine workflow IDs from -w or -i arguments. Get subdirectories
workflows, workflow_id = args.workflows_file, args.workflow_id
if workflows is not None:
check_file_nonempty(workflows)
batches_dirs, bucket = get_batch_dirs(workflows, workflow_id, bucket)
# Set entity arguments and read entities file
entity_type, entities_file, keep_all_entities = args.entity_type, args.entities_file, args.keep_all_entities
entities = read_entities_file(entities_file)
# Core functionality
retrieve_and_write_output_files(batches_dirs, bucket, files_dict, output_names, output_file,
entities, entity_type, keep_all_entities)
if __name__ == "__main__":
main()
| 49.284091 | 120 | 0.651064 | 1,690 | 13,011 | 4.863905 | 0.205325 | 0.026764 | 0.018491 | 0.014599 | 0.141363 | 0.112165 | 0.081265 | 0.081265 | 0.070803 | 0.057664 | 0 | 0.002189 | 0.262547 | 13,011 | 263 | 121 | 49.471483 | 0.854508 | 0.151257 | 0 | 0.081967 | 0 | 0.005464 | 0.221836 | 0.004344 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065574 | false | 0 | 0.038251 | 0 | 0.163934 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaae0db50e34aa167cdff091c27ea99aa52b2a5d | 861 | py | Python | score.py | dezounet/google_hash_code_2020 | 65a289951aab6dc05d6edd087f85a373cb4c2e11 | [
"MIT"
] | 1 | 2020-02-20T17:25:41.000Z | 2020-02-20T17:25:41.000Z | score.py | dezounet/google_hash_code_2020 | 65a289951aab6dc05d6edd087f85a373cb4c2e11 | [
"MIT"
] | 1 | 2020-02-20T17:41:45.000Z | 2020-02-20T17:41:45.000Z | score.py | dezounet/google_hash_code_2020 | 65a289951aab6dc05d6edd087f85a373cb4c2e11 | [
"MIT"
] | null | null | null | import os
from config import INPUT_DIRECTORY
from config import OUTPUT_DIRECTORY
def get_best_score():
best_scores = {}
for output_filename in os.listdir(OUTPUT_DIRECTORY):
if output_filename.startswith('.'):
continue
current_score = 0  # placeholder score; actual scoring is not implemented here
input_filename = get_input_from_output(output_filename)
best_scores[input_filename] = current_score
return best_scores
def get_input_from_output(output_filename):
# Compute input filename from output filename
input_filename = os.path.splitext(output_filename)[0]
return input_filename
if __name__ == '__main__':
best_scores = get_best_score()
total_score = 0
for filename, score in best_scores.items():
print('%s score: %s' % (filename, score))
total_score += score
print('===> total score: %s' % total_score)
| 22.076923 | 63 | 0.692218 | 108 | 861 | 5.148148 | 0.287037 | 0.151079 | 0.057554 | 0.064748 | 0.115108 | 0.115108 | 0 | 0 | 0 | 0 | 0 | 0.004484 | 0.222997 | 861 | 38 | 64 | 22.657895 | 0.826607 | 0.049942 | 0 | 0 | 0 | 0 | 0.050245 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.136364 | 0 | 0.318182 | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab145a8df8d8c44d5a4493bde6455120f468886 | 6,624 | py | Python | src/rastervision/labels/object_detection_labels.py | nholeman/raster-vision | f3e1e26c555feed6fa018183c3fa04d7858d91bd | [
"Apache-2.0"
] | null | null | null | src/rastervision/labels/object_detection_labels.py | nholeman/raster-vision | f3e1e26c555feed6fa018183c3fa04d7858d91bd | [
"Apache-2.0"
] | null | null | null | src/rastervision/labels/object_detection_labels.py | nholeman/raster-vision | f3e1e26c555feed6fa018183c3fa04d7858d91bd | [
"Apache-2.0"
] | null | null | null | import numpy as np
from object_detection.utils.np_box_list import BoxList
from object_detection.utils.np_box_list_ops import (
prune_non_overlapping_boxes, clip_to_window, concatenate,
non_max_suppression)
from rastervision.core.box import Box
from rastervision.core.labels import Labels
class ObjectDetectionLabels(Labels):
"""A set of boxes and associated class_ids and scores.
Implemented using the Tensorflow Object Detection API's BoxList class.
"""
def __init__(self, npboxes, class_ids, scores=None):
"""Construct a set of object detection labels.
Args:
npboxes: float numpy array of size nx4 with cols
ymin, xmin, ymax, xmax. Should be in pixel coordinates within
the global frame of reference.
class_ids: int numpy array of size n with class ids starting at 1
scores: float numpy array of size n
"""
self.boxlist = BoxList(npboxes)
# This field name actually needs to be 'classes' to be able to use
# certain utility functions in the TF Object Detection API.
self.boxlist.add_field('classes', class_ids)
# We need to ensure that there is always a scores field so that the
# concatenate method will work with empty labels objects.
if scores is None:
scores = np.ones(class_ids.shape)
self.boxlist.add_field('scores', scores)
def assert_equal(self, expected_labels):
np.testing.assert_array_equal(self.get_npboxes(),
expected_labels.get_npboxes())
np.testing.assert_array_equal(self.get_class_ids(),
expected_labels.get_class_ids())
np.testing.assert_array_equal(self.get_scores(),
expected_labels.get_scores())
@staticmethod
def make_empty():
npboxes = np.empty((0, 4))
class_ids = np.empty((0, ))
scores = np.empty((0, ))
return ObjectDetectionLabels(npboxes, class_ids, scores)
@staticmethod
def from_boxlist(boxlist):
"""Make ObjectDetectionLabels from BoxList object."""
scores = (boxlist.get_field('scores')
if boxlist.has_field('scores') else None)
return ObjectDetectionLabels(
boxlist.get(), boxlist.get_field('classes'), scores=scores)
def get_boxes(self):
"""Return list of Boxes."""
return [Box.from_npbox(npbox) for npbox in self.boxlist.get()]
def get_npboxes(self):
return self.boxlist.get()
def get_scores(self):
if self.boxlist.has_field('scores'):
return self.boxlist.get_field('scores')
return None
def get_class_ids(self):
return self.boxlist.get_field('classes')
def __len__(self):
return self.boxlist.get().shape[0]
def __str__(self):
return str(self.boxlist.get())
def to_boxlist(self):
return self.boxlist
@staticmethod
def local_to_global(npboxes, window):
"""Convert from local to global coordinates.
The local coordinates are row/col within the window frame of reference.
The global coordinates are row/col within the extent of a RasterSource.
"""
xmin = window.xmin
ymin = window.ymin
return npboxes + np.array([[ymin, xmin, ymin, xmin]])
@staticmethod
def global_to_local(npboxes, window):
"""Convert from global to local coordinates.
The global coordinates are row/col within the extent of a RasterSource.
The local coordinates are row/col within the window frame of reference.
"""
xmin = window.xmin
ymin = window.ymin
return npboxes - np.array([[ymin, xmin, ymin, xmin]])
@staticmethod
def local_to_normalized(npboxes, window):
"""Convert from local to normalized coordinates.
The local coordinates are row/col within the window frame of reference.
Normalized coordinates range from 0 to 1 on each (height/width) axis.
"""
height = window.get_height()
width = window.get_width()
return npboxes / np.array([[height, width, height, width]])
@staticmethod
def normalized_to_local(npboxes, window):
"""Convert from normalized to local coordinates.
Normalized coordinates range from 0 to 1 on each (height/width) axis.
The local coordinates are row/col within the window frame of reference.
"""
height = window.get_height()
width = window.get_width()
return npboxes * np.array([[height, width, height, width]])
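# Added illustration (not part of the original module) of the coordinate
# conversions above. The nested ``_Window`` class is a hypothetical stand-in
# that only mimics the attributes these static methods read from a real
# ``Box`` window (ymin/xmin plus get_height/get_width).
def _coordinate_conversion_example():
    class _Window(object):
        ymin, xmin, ymax, xmax = 100, 200, 300, 400

        def get_height(self):
            return self.ymax - self.ymin

        def get_width(self):
            return self.xmax - self.xmin

    window = _Window()
    # one box in window-local pixel coordinates: ymin, xmin, ymax, xmax
    local = np.array([[10., 20., 50., 60.]])
    global_ = ObjectDetectionLabels.local_to_global(local, window)
    assert np.array_equal(global_, np.array([[110., 220., 150., 260.]]))
    # the round trip back to local coordinates recovers the original box
    assert np.array_equal(
        ObjectDetectionLabels.global_to_local(global_, window), local)
    # normalized coordinates divide rows by the height and cols by the width
    normalized = ObjectDetectionLabels.local_to_normalized(local, window)
    assert np.allclose(normalized, np.array([[0.05, 0.1, 0.25, 0.3]]))
    return normalized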
@staticmethod
def get_overlapping(labels, window, ioa_thresh=0.000001, clip=False):
"""Return subset of labels that overlap with window.
Args:
labels: ObjectDetectionLabels
window: Box
ioa_thresh: the minimum IOA for a box to be considered as
overlapping
clip: if True, clip label boxes to the window
"""
window_npbox = window.npbox_format()
window_boxlist = BoxList(np.expand_dims(window_npbox, axis=0))
boxlist = prune_non_overlapping_boxes(
labels.boxlist, window_boxlist, minoverlap=ioa_thresh)
if clip:
boxlist = clip_to_window(boxlist, window_npbox)
return ObjectDetectionLabels.from_boxlist(boxlist)
@staticmethod
def concatenate(labels1, labels2):
"""Return concatenation of labels.
Args:
labels1: ObjectDetectionLabels
labels2: ObjectDetectionLabels
"""
new_boxlist = concatenate([labels1.to_boxlist(), labels2.to_boxlist()])
return ObjectDetectionLabels.from_boxlist(new_boxlist)
@staticmethod
def prune_duplicates(labels, score_thresh, merge_thresh):
"""Remove duplicate boxes.
Runs non-maximum suppression to remove duplicate boxes that result from
sliding window prediction algorithm.
Args:
labels: ObjectDetectionLabels
score_thresh: the minimum allowed score of boxes
merge_thresh: the minimum IOA allowed when merging two boxes
together
Returns:
ObjectDetectionLabels
"""
max_output_size = 1000000
pruned_boxlist = non_max_suppression(
labels.boxlist,
max_output_size=max_output_size,
iou_threshold=merge_thresh,
score_threshold=score_thresh)
return ObjectDetectionLabels.from_boxlist(pruned_boxlist)
| 36.196721 | 79 | 0.645984 | 787 | 6,624 | 5.280813 | 0.21601 | 0.021174 | 0.020212 | 0.028874 | 0.315688 | 0.270452 | 0.240616 | 0.201636 | 0.201636 | 0.201636 | 0 | 0.006686 | 0.277476 | 6,624 | 182 | 80 | 36.395604 | 0.86168 | 0.333333 | 0 | 0.186813 | 0 | 0 | 0.012687 | 0 | 0 | 0 | 0 | 0 | 0.043956 | 1 | 0.197802 | false | 0 | 0.054945 | 0.054945 | 0.450549 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab3f24504c2c8982167dbbb21b49a63b5a0326d | 11,582 | py | Python | scripts/scikit-cv-class.py | varisd/MLFix | 383d3c71e57eaa0d0829624f6d0d890f9c720567 | [
"BSD-3-Clause"
] | 1 | 2021-11-18T02:12:42.000Z | 2021-11-18T02:12:42.000Z | scripts/scikit-cv-class.py | varisd/MLFix | 383d3c71e57eaa0d0829624f6d0d890f9c720567 | [
"BSD-3-Clause"
] | 1 | 2019-08-05T14:51:44.000Z | 2019-08-05T14:51:44.000Z | scripts/scikit-cv-class.py | varisd/MLFix | 383d3c71e57eaa0d0829624f6d0d890f9c720567 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python
from __future__ import division
import os, sys, argparse
import datetime
import gzip
import model
import neural
import scorer
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.cross_validation import StratifiedKFold
from sklearn.metrics import accuracy_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from scipy.stats import randint
# TMP selector-related imports
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
# "Constants"
seed=123
dense_models=["gaussian_bayes"]
#features_ignore_regex = [ "agr", "new", "src", "sibling", "lemma", "form", "tag", "old_node_id", "wrong_form_1", "wrong_form_2", "wrong_form_3" ]
features_ignore_regex = [ "agr", "old_node_lemma", "new", "form", "tag", "old_node_id", "wrong_form_1", "wrong_form_2", "wrong_form_3" ]
avg_method = "weighted"
def chunks (l, n):
"""Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i+n]
def downsample(X, Y, n):
X1 = []
Y1 = []
print(len(X))
data = zip(X,Y)
i = 0
for inst in data:
if inst[1]["wrong_form_3"] == 1 or i == 0:
X1.append(inst[0])
Y1.append(inst[1])
i = (i + 1) % n
print(len(X1))
return X1,Y1
def ignored_field (feature_name):
ignored = False
for regex in features_ignore_regex:
if regex in feature_name:
ignored = True
return ignored
def targets2numpy (input_dict, targets):
target_arr = []
for line in input_dict:
arr = []
for t in targets:
arr.append(line[t])
target_arr.append(arr)
return np.array(target_arr)
def line2dict (feat_names, feat_vals, ignore_blank):
""" Create dictionary from the input line."""
result = dict()
if len(feat_names) != len(feat_vals):
raise ValueError("Feature vector length does not match: expected=%s got=%s" % (len(feat_names),len(feat_vals)))
for i in range(len(feat_names)):
if ignore_blank and feat_vals[i] == "":
continue
result[feat_names[i]] = feat_vals[i]
return result
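# Added usage sketch (not part of the original script); the feature names and
# values are hypothetical examples.
def _line2dict_example():
    names = ["feat_pos", "feat_num", "feat_gen"]
    values = ["NOUN", "Plur", ""]
    assert line2dict(names, values, True) == {"feat_pos": "NOUN", "feat_num": "Plur"}
    assert line2dict(names, values, False) == {
        "feat_pos": "NOUN", "feat_num": "Plur", "feat_gen": ""}
    return names, values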
def split_targets_feats (input_dict, targets):
target_dict = dict()
feat_dict = dict()
for key,item in input_dict.items():
if key in targets:
target_dict[key] = item
elif ignored_field(key) != True:
feat_dict[key] = item
return target_dict, feat_dict
def line2base (targets, values):
result = dict()
if len(targets) != len(values):
raise ValueError("Number of targets between baseline and predicted does not match: expected=%s got=%s" % (len(targets),len(values)))
for i in range(len(targets)):
result[targets[i]] = values[i]
return result
def evaluate (model, true, base, pred, targets):
g = 0
tp = 0
tn = 0
fp = 0
fn = 0
wp = 0
for i in range(len(pred)):
p1 = pred[i]
b1 = base[i]
t1 = true[i]
base_str = ";".join([b1[x] for x,_ in enumerate(targets)])
pred_str = ";".join([p1[x] for x,_ in enumerate(targets)])
true_str = ";".join([t1[x] for x,_ in enumerate(targets)])
if pred_str == true_str:
g = g + 1
if pred_str == base_str:
tn = tn + 1
#print "TRUENEG %s" % (pred_str)
else:
tp = tp + 1
#print "TRUEPOS %s -> %s" % (base_str, pred_str)
else:
if pred_str == base_str:
fn = fn + 1
#print "FALSENEG %s -> %s" % (base_str, true_str)
elif true_str == base_str:
fp = fp + 1
#print "FALSEPOS %s -> %s" % (base_str, pred_str)
else:
wp = wp + 1
#print "WRONGPOS %s -> %s !-> %s" % (base_str, pred_str, true_str)
acc = accuracy_score(global_encoder.transform(true), global_encoder.transform(pred))
prec = 0
recall = 0
if tp != 0:
prec = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 0
if prec != 0 or recall != 0:
f1 = 2 * (prec * recall) / (prec + recall)
sys.stdout.write("Instances Accuracy Precision Recall F1-Measure TruePos TrueNeg FalsePos FalseNeg WrongPos Classifier Selector\n")
sys.stdout.write("%9d %8.2f %9.2f %6.2f %10.2f %7d %7d %8d %8d %8d %s %s\n" % (len(pred), acc, prec, recall, f1, tp, tn, fp, fn, wp, model_type, f_select))
## Main Program ##
# Parse command line arguments
parser = argparse.ArgumentParser(description="Train and crossvalidate Scikit-Learn classifier.")
parser.add_argument('--input_file', metavar='input_data', type=str)
parser.add_argument('--base_file', metavar='baseline_results', type=str)
parser.add_argument('--target', metavar='predicted_category', type=str)
parser.add_argument('--model_type', metavar='model_type', type=str)
parser.add_argument('--model_params', metavar='model_parameters', type=str)
parser.add_argument('--feat_selector', metavar='feature_selector', type=str)
parser.add_argument('--feat_selector_params', metavar='feature_selector_parameters', type=str)
parser.add_argument('--save_model', metavar='model_save_destination', nargs='?', type=str)
parser.add_argument('--load_model', metavar='model_location', nargs='?', type=str)
args = parser.parse_args()
fh = gzip.open(args.input_file, 'rt', encoding='UTF-8')
line = fh.readline().rstrip("\n")
feature_names = line.split("\t")
targets = args.target.split('|')
model_type = args.model_type
f_select = args.feat_selector
if f_select == "":
f_select = None
registered_feat_names = dict()
multiclass = False
if len(targets) > 1:
multiclass = True
sparse = True
if model_type in dense_models:
sparse = False
# Prepare the data
data_X = []
data_Y = []
weights = []
while True:
line = fh.readline().rstrip("\n")
if not line:
break
feat_values = line.split("\t")
line_dict = line2dict(feature_names, feat_values, False)
tdict, fdict = split_targets_feats(line_dict, targets)
for key,item in fdict.items():
registered_feat_names[key] = 1
data_X.append(fdict)
data_Y.append(tdict)
fh.close()
sys.stderr.write("# of initial features: %d\n" % (len(registered_feat_names)))
fh = gzip.open(args.base_file, 'rt', encoding='UTF-8')
line = fh.readline().rstrip("\n")
baseline = []
while True:
line = fh.readline()
if not line:
break
line = line.rstrip("\n")
values = line.split("\t")
line_dict = line2base(targets, values)
baseline.append(line_dict)
fh.close()
# Load model, predict targets and exit
if args.load_model != None:
sys.stderr.write("Loading model from: %s\n" % (args.load_model))
m = model.loadModel(args.load_model)
res = m.predict(data_X)
for r in res:
print(r)
sys.exit()
data_X = np.array(data_X)
# Model cross validation
if model_type in ["FeedForward", "Highway"]:
baseline = targets2numpy(baseline, targets)
data_Y = targets2numpy(data_Y, targets)
m = eval("neural.FeedForwardNetwork({}, layer_type='{}')".format(args.model_params, model_type))
else:
baseline = np.array(baseline)
data_Y = np.array(data_Y)
m = model.Model(model_type, args.model_params, f_select, args.feat_selector_params, sparse=sparse)
pred = data_Y
predicted = np.reshape(baseline, [-1])
tr_pred = np.reshape(baseline, [-1])
sys.stderr.write("Starting crossvalidation\n")
#cv = cross_validation.StratifiedKFold(data_X, n_folds=10, shuffle=True, random_state=seed)
#scores = cross_validation.cross_val_score(m, data_X, data_Y, cv=10)
#print "10-fold cross validation: %s" % scores.mean()
global_encoder = LabelEncoder()
global_encoder.fit(np.concatenate((data_Y,baseline)))
print("10-fold cross validation (baseline): {}".format(accuracy_score(global_encoder.transform(data_Y), global_encoder.transform(baseline))))
#scores = cross_validation.cross_val_score(model.Model("baseline", "strategy='most_frequent',random_state=%d" % seed), data_X, data_Y, cv=cv)
#print "10-fold cross validation (most_frequent): %s" % scores.mean()
#scores = cross_validation.cross_val_score(model.Model("baseline", "strategy='uniform',random_state=%d" % seed), data_X, data_Y, cv=cv)
#print "10-fold cross validation (uniform): %s" % scores.mean()
#scores = cross_validation.cross_val_score(model.Model("baseline", "strategy='stratified',random_state=%d" % seed), data_X, data_Y, cv=cv)
#print "10-fold cross validation (stratified): %s" % scores.mean()
#btr = global_encoder.transform(baseline)
#sc = scorer.MyScorer(btr)
#grid = GridSearchCV(estimator=m.model, param_grid={"n_neighbors" : randint.rvs(1,15,size=5)}, scoring=sc.recall, cv=10)
#grid.fit(data_X,data_Y)
#print grid
#print grid.estimator.model
#print sc.precision(grid, data_X, data_Y)
# Out-of-fold prediction: with 10-fold CV every sample is predicted by a model not trained on it
sys.stderr.write("Starting 10-fold crossvalidation (out-of-fold prediction)\n")
count = 1
n_chunks = (len(data_X) // 10) // 2000 + 1
#n_chunks = len(data_X) // 10 + 1
print(n_chunks)
#kf = KFold(len(data_X), n_folds=10)
kf = KFold(10)
#skf = StratifiedKFold(y=global_encoder.transform(data_Y), n_folds=10)
#for train_index, test_index in kf:
for k, (train_index, test_index) in enumerate(kf.split(data_X, data_Y)):
sys.stderr.write("KFold iteration: %d\n" % (count))
X_train, X_test = data_X[train_index], data_X[test_index]
Y_train, Y_test, base = data_Y[train_index], data_Y[test_index], baseline[test_index]
if model_type in ["FeedForward", "Highway"]:
m = eval("neural.FeedForwardNetwork({}, layer_type='{}')".format(args.model_params, model_type))
else:
m = model.Model(model_type, args.model_params, f_select, args.feat_selector_params, sparse=sparse)
m.fit(X_train, Y_train)
sys.stderr.write(str(datetime.datetime.now().time()) + ": started predicting (predict))\n")
predicted[test_index] = m.predict(X_test)
sys.stderr.write(str(datetime.datetime.now().time()) + ": stopped predicting (predict))\n")
# for inst in test_index.tolist():
# sys.stderr.write(str(datetime.datetime.now().time()) + ": started predicting (predict))\n")
# print len(data_X[inst])
# predicted[inst] = m.predict([data_X[inst]])
# sys.stderr.write(str(datetime.datetime.now().time()) + ": stopped predicting (predict))\n")
evaluate(m, Y_test, base, predicted[test_index], targets)
count = count + 1
print("Training set Evaluation:")
if model_type in ["FeedForward", "Highway"]:
m = eval("neural.FeedForwardNetwork({}, layer_type='{}')".format(args.model_params, model_type))
else:
m = model.Model(model_type, args.model_params, f_select, args.feat_selector_params, sparse=sparse)
m.fit(data_X, data_Y)
sys.stderr.write(str(datetime.datetime.now().time()) + ": started predicting (predict))\n")
tr_pred = m.predict(data_X)
sys.stderr.write(str(datetime.datetime.now().time()) + ": stopped predicting (predict))\n")
evaluate(m, data_Y, baseline, tr_pred, targets)
print("Final Evaluation:")
evaluate(m, data_Y, baseline, predicted, targets)
# Train model and save it
if args.save_model is not None:
sys.stderr.write("Saving model to: " + args.save_model + "\n")
model.saveModel(m, args.save_model)
| 35.857585 | 159 | 0.671905 | 1,666 | 11,582 | 4.492197 | 0.185474 | 0.014698 | 0.022448 | 0.012026 | 0.378808 | 0.280999 | 0.225013 | 0.200695 | 0.193212 | 0.184928 | 0 | 0.013046 | 0.185978 | 11,582 | 322 | 160 | 35.968944 | 0.780759 | 0.196598 | 0 | 0.155844 | 0 | 0.004329 | 0.144 | 0.017081 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034632 | false | 0 | 0.099567 | 0 | 0.160173 | 0.030303 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab66c4bd10af21c46b6d0682db3dfdf54a823b9 | 2,377 | py | Python | demo/multimodal/online/multimodal_preprocess/client.py | meta-soul/MetaSpore | e6fbc12c6a3139df76c87215b16f9dba65962ec7 | [
"Apache-2.0"
] | 32 | 2022-03-30T10:24:00.000Z | 2022-03-31T16:19:15.000Z | demo/multimodal/online/multimodal_preprocess/client.py | meta-soul/MetaSpore | e6fbc12c6a3139df76c87215b16f9dba65962ec7 | [
"Apache-2.0"
] | null | null | null | demo/multimodal/online/multimodal_preprocess/client.py | meta-soul/MetaSpore | e6fbc12c6a3139df76c87215b16f9dba65962ec7 | [
"Apache-2.0"
] | 3 | 2022-03-30T10:28:57.000Z | 2022-03-30T11:37:39.000Z | #
# Copyright 2022 DMetaSoul
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import sys
import json
import logging
import grpc
from hf_preprocessor import hf_preprocessor_pb2
from hf_preprocessor import hf_preprocessor_pb2_grpc
import pyarrow as pa
def run_tokenize(model_key, text, port=60051):
with grpc.insecure_channel(f'localhost:{port}') as channel:
stub = hf_preprocessor_pb2_grpc.HfPreprocessorStub(channel)
# req encode
payload = {'texts': json.dumps([text]).encode('utf8')}
req = hf_preprocessor_pb2.HfTokenizerRequest(model_name=model_key, payload=payload)
response = stub.HfTokenizer(req)
# res decode via json
#payload = {k:json.loads(v.decode('utf8')) for k,v in response.payload.items()}
# res decode via pyarrow
payload = {}
for name in response.payload:
with pa.BufferReader(response.payload[name]) as reader:
payload[name] = pa.ipc.read_tensor(reader).to_numpy().tolist()
print("Client received: payload={}, extras={}".format(payload, response.extras))
def run_push(model_key, model_url, port=60051):
with grpc.insecure_channel(f'localhost:{port}') as channel:
stub = hf_preprocessor_pb2_grpc.HfPreprocessorStub(channel)
req = hf_preprocessor_pb2.HfTokenizerPushRequest(model_name=model_key, model_url=model_url)
response = stub.HfTokenizerPush(req)
print("Client received: status={}, message={}".format(response.status, response.msg))
if __name__ == '__main__':
logging.basicConfig()
action = sys.argv[1]
if action == 'push':
key, url = sys.argv[2], sys.argv[3]
run_push(key, url)
elif action == 'tokenize':
key, text = sys.argv[2], sys.argv[3]
run_tokenize(key, text)
else:
print('invalid action!')
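# Illustrative invocations of this client (the model key, URL and text below are placeholders, not real resources):
#   python client.py push my-tokenizer-key s3://some-bucket/path/to/tokenizer
#   python client.py tokenize my-tokenizer-key "some example text"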
| 36.569231 | 99 | 0.701725 | 319 | 2,377 | 5.081505 | 0.423197 | 0.069093 | 0.062924 | 0.038865 | 0.207279 | 0.207279 | 0.207279 | 0.133251 | 0.133251 | 0.133251 | 0 | 0.01618 | 0.193942 | 2,377 | 64 | 100 | 37.140625 | 0.829854 | 0.286496 | 0 | 0.111111 | 0 | 0 | 0.090692 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.222222 | 0 | 0.277778 | 0.111111 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab6c3235dabdeffefa4b759b1f927ac7c28e0e3 | 1,173 | py | Python | 16. 3Sum Closest.py | Muthu2093/Algorithms-practice | 999434103a9098a4361104fd39cba5913860fa9d | [
"MIT"
] | null | null | null | 16. 3Sum Closest.py | Muthu2093/Algorithms-practice | 999434103a9098a4361104fd39cba5913860fa9d | [
"MIT"
] | null | null | null | 16. 3Sum Closest.py | Muthu2093/Algorithms-practice | 999434103a9098a4361104fd39cba5913860fa9d | [
"MIT"
] | null | null | null | ## Given an array nums of n integers and an integer target, find three integers in nums such that the sum is closest to target. Return the sum of the three integers. You may assume that each input would have exactly one solution.
## Example:
## Given array nums = [-1, 2, 1, -4], and target = 1.
## The sum that is closest to the target is 2. (-1 + 2 + 1 = 2).
class Solution(object):
def threeSumClosest(self, nums, target):
"""
:type nums: List[int]
:type target: int
:rtype: int
"""
if len(nums)<=3:
return sum(nums)
nums.sort()
sums = float('inf')  # sentinel: any real triple sum is closer to the target than this
for m in range(1, len(nums)-1):
l = 0
r = len(nums)-1
while (l<m and m<r):
temp = nums[l] + nums[m] + nums[r]
if (temp == target ):
return target
if (abs(target-temp) < abs(target-sums)):
sums = temp
if (temp > target):
r = r-1
if (temp < target):
l = l+1
return sums
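# Quick self-check based on the example quoted in the header comments above; the expected
# result (2) comes from that example and nothing else is assumed.
if __name__ == "__main__": print(Solution().threeSumClosest([-1, 2, 1, -4], 1))  # -> 2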
| 33.514286 | 229 | 0.460358 | 150 | 1,173 | 3.6 | 0.393333 | 0.033333 | 0.066667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030303 | 0.43734 | 1,173 | 34 | 230 | 34.5 | 0.787879 | 0.341858 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.05 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab80a69330eaa7bf7d6d33b584b8718223c3f94 | 1,659 | py | Python | build_defs/append_optional_xml_elements.py | hlopko/intellij | adebffd92637ce28c0e56b9a01d435777454c60d | [
"Apache-2.0"
] | null | null | null | build_defs/append_optional_xml_elements.py | hlopko/intellij | adebffd92637ce28c0e56b9a01d435777454c60d | [
"Apache-2.0"
] | null | null | null | build_defs/append_optional_xml_elements.py | hlopko/intellij | adebffd92637ce28c0e56b9a01d435777454c60d | [
"Apache-2.0"
] | null | null | null | """Appends XML elements specifying optional dependencies to a plugin XML file.
"""
import argparse
import sys
from xml.dom.minidom import parse # pylint: disable=g-importing-member
try:
from itertools import izip # pylint: disable=g-importing-member,g-import-not-at-top
except ImportError:
# Python 3.x already has a built-in `zip` that takes `izip`'s place.
izip = zip
parser = argparse.ArgumentParser()
parser.add_argument(
"--plugin_xml", help="The main plugin xml file", required=True)
parser.add_argument("--output", help="The output file.")
parser.add_argument(
"optional_xml_files",
nargs="+",
help="Sequence of module, module xml... pairs")
def pairwise(t):
it = iter(t)
return izip(it, it)
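# For example, pairwise(["modA", "a.xml", "modB", "b.xml"]) yields ("modA", "a.xml") and
# ("modB", "b.xml"); the module and file names here are purely illustrative.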
def main():
args = parser.parse_args()
dom = parse(args.plugin_xml)
plugin_xml = dom.documentElement
for module, optional_xml in pairwise(args.optional_xml_files):
depends_element = dom.createElement("depends")
depends_element.setAttribute("optional", "true")
depends_element.setAttribute("config-file", optional_xml)
depends_element.appendChild(dom.createTextNode(module))
plugin_xml.appendChild(depends_element)
plugin_xml.appendChild(dom.createTextNode("\n"))
if args.output:
with open(args.output, "wb") as f:
f.write(dom.toxml(encoding="utf-8"))
else:
if hasattr(sys.stdout, "buffer"):
sys.stdout.buffer.write(dom.toxml(encoding="utf-8"))
else:
# Python 2.x has no sys.stdout.buffer, but `print` still accepts byte
# strings.
print(dom.toxml(encoding="utf-8")) # pylint: disable=superfluous-parens
if __name__ == "__main__":
main()
| 28.118644 | 86 | 0.708861 | 227 | 1,659 | 5.057269 | 0.444934 | 0.054878 | 0.044425 | 0.049652 | 0.118467 | 0.050523 | 0.050523 | 0 | 0 | 0 | 0 | 0.003587 | 0.159735 | 1,659 | 58 | 87 | 28.603448 | 0.819943 | 0.207957 | 0 | 0.102564 | 0 | 0 | 0.139017 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051282 | false | 0 | 0.128205 | 0 | 0.205128 | 0.025641 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aab9a779cd4155a5850a5c48f86de29e4b8fb4a7 | 5,079 | py | Python | aqueduct/worker.py | artemcpp/aqueduct | 2fc177b9e533dbe900f5878b9cc7a9c0e9eed179 | [
"MIT"
] | null | null | null | aqueduct/worker.py | artemcpp/aqueduct | 2fc177b9e533dbe900f5878b9cc7a9c0e9eed179 | [
"MIT"
] | null | null | null | aqueduct/worker.py | artemcpp/aqueduct | 2fc177b9e533dbe900f5878b9cc7a9c0e9eed179 | [
"MIT"
] | null | null | null | import multiprocessing as mp
import queue
import time
from typing import Callable, Iterable, Iterator, List, Optional
from .handler import BaseTaskHandler
from .logger import log
from .metrics.timer import timeit
from .task import BaseTask, StopTask
def batches(
elements: Iterable,
batch_size: int,
timeout: float,
) -> Iterator[List]:
batch = []
timeout_end = time.monotonic() + timeout
for elem in elements:
if elem:
batch.append(elem)
if time.monotonic() >= timeout_end or len(batch) == batch_size:
if batch:
yield batch
batch = []
timeout_end = time.monotonic() + timeout
if batch:
yield batch
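# Rough sketch of the batching behaviour above (illustrative values, assuming the timeout never elapses):
# list(batches([1, 2, 3, 4, 5], batch_size=3, timeout=60.0)) == [[1, 2, 3], [4, 5]]
# The first batch is emitted as soon as it is full, the trailing one when the input is exhausted.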
def batches_with_lock(batches_gen: Iterator[List], lock: mp.Lock) -> Iterator[List]:
while True:
lock.acquire()
try:
batch = next(batches_gen)
except StopIteration:
return
finally:
lock.release()
yield batch
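# Note: batches_with_lock serialises batch filling across workers; the lock is held while one
# batch is assembled, so only one worker at a time drains the shared input queue.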
class Worker:
"""Обертка над классом BaseTaskHandler.
Достает из входной очереди задачу, пропускает ее через обработчика task_handler и кладет ее
в выходную очередь.
"""
def __init__(
self,
queue_in: mp.Queue,
queue_out: mp.Queue,
task_handler: BaseTaskHandler,
handle_condition: Callable[[BaseTask], bool],
batch_size: int,
batch_timeout: float,
batch_lock: Optional[mp.RLock],
step_number: int,
):
self.queue_in = queue_in
self.queue_out = queue_out
self.task_handler = task_handler
self.handle_condition = handle_condition
self.name = task_handler.__class__.__name__
self.step_name = self.task_handler.get_step_name(step_number)
self._batch_size = batch_size
self._batch_timeout = batch_timeout
self._batch_lock = batch_lock
self._stop_task: BaseTask = None # noqa
def _start(self):
"""Runs something huge (e.g. model) in child process."""
self.task_handler.on_start()
def _tasks(self) -> Iterator[Optional[BaseTask]]:
"""Provides suitable for processing tasks."""
while True:
try:
task: BaseTask = self.queue_in.get(block=False)
except queue.Empty:
# returns control
yield
time.sleep(0.001)
continue
if isinstance(task, StopTask):
self._stop_task = task
break
log.debug(f'[{self.name}] Have message')
task.metrics.stop_transfer_timer(self.step_name)
# don't pass an expired task to the next steps
if task.is_expired():
log.debug(f'[{self.name}] Task expired. Skip: {task}')
continue
# don't process unsuitable tasks
if not self.handle_condition(task):
self._post_handle(task)
continue
yield task
def _tasks_batches(self) -> Iterator[Optional[List[BaseTask]]]:
if self._batch_size == 1:
# pseudo batching
for task in self._tasks():
if task:
yield [task]
else:
tasks_batches: Iterator[List[BaseTask]] = batches(
self._tasks(),
batch_size=self._batch_size,
timeout=self._batch_timeout,
)
if self._batch_lock:
# to take a queue_in lock for the duration of batch filling time
tasks_batches = batches_with_lock(tasks_batches, self._batch_lock)
while True:
try:
with timeit() as timer:
tasks_batch = next(tasks_batches)
tasks_batch[0].metrics.batch_times.add(self.step_name, timer.seconds)
tasks_batch[0].metrics.batch_sizes.add(self.step_name, len(tasks_batch))
except StopIteration:
return
yield tasks_batch
def _post_handle(self, task: BaseTask):
task.metrics.start_transfer_timer(self.step_name)
self.queue_out.put(task)
def loop(self, pid: int, start_barrier: mp.Barrier):
"""Main worker loop.
The code below is executed in a new process.
"""
log.info(f'[Worker] initialising handler {self.name}')
self._start()
log.info(f'[Worker] handler {self.name} ok, waiting for others to start')
start_barrier.wait()
log.info(f'[Worker] handler {self.name} ok, starting loop')
for tasks_batch in self._tasks_batches():
with timeit() as timer:
self.task_handler.handle(*tasks_batch)
for task in tasks_batch:
task.metrics.handle_times.add(self.step_name, timer.seconds)
self._post_handle(task)
if self._stop_task:
self.queue_out.put(self._stop_task)
| 32.767742 | 95 | 0.577082 | 579 | 5,079 | 4.848014 | 0.276339 | 0.032063 | 0.02565 | 0.016031 | 0.116138 | 0.069825 | 0.044888 | 0.022088 | 0 | 0 | 0 | 0.002083 | 0.338452 | 5,079 | 154 | 96 | 32.980519 | 0.833333 | 0.094901 | 0 | 0.237288 | 0 | 0 | 0.046906 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067797 | false | 0 | 0.067797 | 0 | 0.161017 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aabb5bba19a9a40398664c8eb455fe08a7fdfc7d | 1,024 | py | Python | neuronit/about-us/models.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | neuronit/about-us/models.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | neuronit/about-us/models.py | neuronit/pfa | 6483f23de3ac43ae1121760ab44a2cae1f2cc901 | [
"MIT"
] | null | null | null | from django.db import models
from django.contrib import admin
import os
class DescriptionP(models.Model):
id = models.AutoField(primary_key=True)
content = models.TextField()
def __str__(self):
return self.content
class DescriptionPAdmin(admin.ModelAdmin):
list_display = ['id', 'content']
search_fields = ['id']
def get_image_path(instance, filename):
return os.path.join('images_profile', str(instance.id), filename)
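# Caveat (added for clarity): for a brand-new TeamMember the primary key is assigned only on the
# database INSERT, so instance.id may still be None when this upload path is built; an image
# uploaded while creating the object would then land under images_profile/None/.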
class TeamMember(models.Model):
id = models.AutoField(primary_key=True)
name = models.CharField(max_length = 200)
image = models.ImageField(upload_to=get_image_path, blank=True, null=True)
title = models.CharField(max_length = 200)
description_text = models.TextField(blank=True, null=True)
class TeamMemberAdmin(admin.ModelAdmin):
list_display = ['id', 'name', 'image', 'title', 'description_text']
search_fields = ['name', 'title']
admin.site.register(DescriptionP, DescriptionPAdmin)
admin.site.register(TeamMember, TeamMemberAdmin)
| 30.117647 | 78 | 0.733398 | 126 | 1,024 | 5.801587 | 0.420635 | 0.02736 | 0.035568 | 0.051984 | 0.26539 | 0.114911 | 0.114911 | 0.114911 | 0 | 0 | 0 | 0.006849 | 0.144531 | 1,024 | 33 | 79 | 31.030303 | 0.827626 | 0 | 0 | 0.083333 | 0 | 0 | 0.064453 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.125 | 0.083333 | 0.916667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aabb9812f884cc434b853a0cf0a074099ebe95b2 | 73,483 | py | Python | ai4materials/models/sis.py | hpleva/ai4materials | 5b5548f4fbfd4751cd1f9d57cedaa1e1d7ca04b2 | [
"Apache-2.0"
] | 23 | 2019-12-23T14:47:53.000Z | 2022-03-25T10:50:18.000Z | ai4materials/models/sis.py | hpleva/ai4materials | 5b5548f4fbfd4751cd1f9d57cedaa1e1d7ca04b2 | [
"Apache-2.0"
] | 8 | 2019-12-16T21:08:24.000Z | 2022-02-09T23:56:46.000Z | ai4materials/models/sis.py | hpleva/ai4materials | 5b5548f4fbfd4751cd1f9d57cedaa1e1d7ca04b2 | [
"Apache-2.0"
] | 10 | 2018-11-21T14:05:33.000Z | 2022-02-10T11:28:46.000Z | # coding=utf-8
# Copyright 2016-2018 Emre Ahmetick
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__author__ = "Emre Ahmetick"
__copyright__ = "Copyright 2018, Emre Ahmetick"
__maintainer__ = "Emre Ahmetick"
__email__ = "ahmetick@fhi-berlin.mpg.de"
__date__ = "23/09/18"
import numpy as np
from ai4materials.utils.utils_config import SSH
import os
import sched
import time
import sys
import logging
from shutil import rmtree
import pandas as pd
from subprocess import Popen
import operator as opop
from copy import deepcopy
from functools import reduce
import base64  # used by SIS.check_ (base64.b64decode); was missing from the original imports
F_unit = [
['IP(A)', 'IP(B)', 'EA(A)', 'EA(B)'],
['E_HOMO(A)', 'E_HOMO(B)', 'E_LUMO(A)', 'E_LUMO(B)'],
['r_s(A)', 'r_s(B)', 'r_p(A)', 'r_p(B)', 'r_d(A)', 'r_d(B)', 'r_sigma(AB)', 'r_pi(AB)'],
['Z(A)', 'Z(B)', 'Z_val(A)', 'Z_val(B)', 'period(A)', 'period(B)'],
['d(AB)', 'd(A)', 'd(B)'],
['E_b(AB)', 'E_b(A)', 'E_b(B)'],
['HL_gap(AB)', 'HL_gap(A)', 'HL_gap(B)'],
]
reals = [
'IP(A)',
'IP(B)',
'EA(A)',
'EA(B)',
'E_HOMO(A)',
'E_HOMO(B)',
'E_LUMO(A)',
'E_LUMO(B)',
'r_s(A)',
'r_s(B)',
'r_p(A)',
'r_p(B)',
'r_d(A)',
'r_d(B)',
'd(AB)',
'd(A)',
'd(B)',
'E_b(AB)',
'E_b(A)',
'E_b(B)',
'HL_gap(AB)',
'HL_gap(A)',
'HL_gap(B)',
'r_sigma(AB)',
'r_pi(AB)']
ints = ['Z(A)', 'Z(B)', 'Z_val(A)', 'Z_val(B)', 'period(A)', 'period(B)']
standard_format = [
'IP(A)',
'IP(B)',
'EA(A)',
'EA(B)',
'E_HOMO(A)',
'E_HOMO(B)',
'E_LUMO(A)',
'E_LUMO(B)',
'r_s(A)',
'r_s(B)',
'r_p(A)',
'r_p(B)',
'r_d(A)',
'r_d(B)',
'd(AB)',
'd(A)',
'd(B)',
'Z(A)',
'Z(B)',
'Z_val(A)',
'Z_val(B)',
'E_b(AB)',
'E_b(A)',
'E_b(B)',
'HL_gap(AB)',
'HL_gap(A)',
'HL_gap(B)',
'r_sigma(AB)',
'r_pi(AB)',
'period(A)',
'period(B)']
converted_format = [
'ipA',
'ipB',
'eaA',
'eaB',
'homoA',
'homoB',
'lumoA',
'lumoB',
'rsA',
'rsB',
'rpA',
'rpB',
'rdA',
'rdB',
'disAB',
'disA',
'disB',
'zA',
'zB',
'valA',
'valB',
'ebAB',
'ebA',
'ebB',
'hlgapAB',
'hlgapA',
'hlgapB',
'rsigmaAB',
'rpiAB',
'periodA',
'periodB']
standard_2_converted = dict(zip(standard_format, converted_format))
converted_2_standard = dict(zip(converted_format, standard_format))
""" Set logger for outputs as errors, warnings, infos. """
#
# try:
# hdlr = logging.FileHandler(configs["output_file"], mode='a')
# except:
# hdlr = logging.FileHandler(configs["output_file"], mode='w')
#
# level = logging.getLevelName(configs["log_level_general"])
#
# logger = logging.getLogger(__name__)
# logger.setLevel(level)
# logging.basicConfig(level=level)
# FORMAT = "%(levelname)s: %(message)s"
# formatter = logging.Formatter(fmt=FORMAT)
# handler = logging.StreamHandler()
# handler.setFormatter(formatter)
# hdlr.setFormatter(formatter)
# logger.addHandler(handler)
# logger.addHandler(hdlr)
# logger.setLevel(level)
# logger.propagate = False
#
# __metainfopath__ = configs["meta_info_file"]

# Minimal module-level logger so that references such as self.logger = logger work even while the
# configs-based setup above stays commented out; callers can still reconfigure it via set_logger().
logger = logging.getLogger(__name__)
# START PARAMETERS REFERENCE
# In the following lists of tuples the order of the items might be important. Thus no dict is used.
# If the value is a tuple, then only one of its items is allowed as the value when passing the control dict to the SIS class.
Tuple_list = [
# FCDI
('mpiname', str), # code will be run by: mpiname codename. set mpiname='' for serial run.
('desc_dim', int), # starting iteration (can be n if iteration up to n-1 already calculated before)
('ptype', ('quanti', 'quali')), # property type: 'quanti'(quantitative),'quali'(qualitative)
('ntask', int), # number of tasks (properties)
('nsample', list), # number of samples for each task (and group for classification, e.g. (4,3,5),(7,9) )
('width', float), # for classification, the boundary tolerance
# FC
('nsf', int), # number of scalar features (i.e.: the atomic parameters)
('task_arr', int), # number of tasks arranged in columns
('rung', int), # rung of feature spaces (rounds of combination)
('opset', list), # oprators(currently: (+)(-)(*)(/)(exp)(log)(^-1)(^2)(^3)(sqrt)(|-|) )
('ndimtype', int), # number of dimension types (for dimension analysis)
('dimclass', list), # specify features in each class denoted by ( )
('allele', bool), # Should all elements appear in each of the selected features?
('nele', int), # number of element (<=6): useful only when symm=.true. and/or allele=.true.
('maxfval_lb', float), # features having the max. abs. data value <maxfval_lb will not be selected
('maxfval_ub', float), # features having the max. abs. data value >maxfval_ub will not be selected
('subs_sis', int), # total number of features selected by sure independent screen
# DI
('method', ('L1L0', 'L0')), # 'L1L0' or 'L0'
('size_fs', int), # number of total features in each taskxxx.dat (same for all)
('nfL0', int), # number of features for L0(ntotf->nfL0 if nfL0>ntotf)
('metric', ('LS_RMSE', 'CV_RMSE', 'CV_MAE')), # metric for the evaluation: LS_RMSE,CV_RMSE,CV_MAE
('n_eval', int), # number of top models (based on fitting) to be evaluated by the metric
('CV_fold', int), # k-fold CV (>=2)
('CV_repeat', int), # repeated k-fold CV
('n_out', int), # number of top models to be output, off when =0
]
# Generate lists and dicts for easier coding later.
Param_key_list = [i for i, j in Tuple_list]
Param_dic = dict(Tuple_list)
# Important: control reference. Specifies what the structure of the input control dict to the SIS class should look like.
# If a key is a tuple, then the value has to be a tuple, too. A tuple stands for the option that one and only one of the keys
# has to be set.
control_ref = {
'local_paths': {'local_path': str, 'SIS_input_folder_name': str},
('local_run', 'remote_run'): (
{'SIS_code_path': str, 'mpi_command': str},
{'SIS_code_path': str, 'username': str, 'hostname': str, 'port': int, 'remote_path': str,
'eos': bool, 'mpi_command': str, 'nodes': int, ('key_file', 'password'): (str, str)}
),
'parameters': {'rung': int, 'subs_sis': int, 'desc_dim': int, 'opset': list, 'ptype': ('quanti', 'quali')},
'advanced_parameters': Param_dic
}
# All keys which do not need to be set in input control dict tree. If they are not set, default values are used.
not_mandotary = ['advanced_parameters', 'eos', 'nodes', 'port', 'FC', 'DI', 'FCDI'] + Param_key_list
# Available OPs for the SIS fortran code, at the moment.
available_OPs = ['+', '-', '*', '/', 'exp', 'exp-', '^-1', '^2', '^3', 'sqrt', 'log', '|-|', 'SCD', '^6']
un_OP = ['exp', '^2', 'exp-', '^-1', '^2', '^3', 'sqrt', 'log', 'SCD', '^6']
bin_OP = ['-', '/']
bin_OP_bino = ['+', '|-|', '*']
# END PARAMETERS REFERENCE
class SIS(object):
""" Python interface with the fortran SIS+(Sure Independent Screening)+L0/L1L0 code.
The SIS+(Sure Independent Screening)+L0/L1L0 is a greedy algorithm. It enhances the OMP, by considering
not only the closest feature vector to the residual in each step, but collects the closest 'n_SIS' features vectors.
The final model is then built after a given number of iterations by determining the (approximately) best linear combination
of the collected features using the L0 (L1-L0) algorithm.
To execute the code, folder paths are needed in addition to the SIS code parameters, as well as account
information for a remote machine if the code is to be executed remotely.
Parameters
----------
P : array, [n_sample]; list; [n_sample]
P refers to the target (label). If ptype = 'quali' list of ints is required
D : array, [n_sample, n_features]
D refers to the feature matrix. The SIS code calculates algebraic combinations
of the features and then applies the SIS+L0/L1L0 algorithm.
feature_list : list of strings
List of feature names. Needs to be in the same order as the feature vectors (columns) in D.
Features must consist of strings which are in F_unit (See above).
feature_unit_classes : None, or list of integers and/or the string 'no_unit'
integers correspond to the unit class of the features from feature_list. 'no_unit' is reserved for
dimensionless unit.
output_log_file : string
file path for the logger output.
rm_existing_files : bool
If SIS_input_path on local or remote machine (remote_input_path) exists, it is removed.
Otherwise it is renamed to SIS_input_path_$number.
control : dict of dicts (of dicts)
Dict tree: {
'local_paths': { 'local_path':str, 'SIS_input_folder_name':str},
('local_run','remote_run') : (
{'SIS_code_path':str, 'mpi_command':str},
{'SIS_code_path':str, 'username':str, 'hostname':str, 'remote_path':str, 'eos':bool, 'mpi_command':str, 'nodes':int, ('key_file', 'password'):(str,str)}
),
'parameters' : {'n_comb':int, 'n_sis':int, 'max_dim':int, 'OP_list':list},
'advanced_parameters' : {'FC':FC_dic,'DI':DI_dic, 'FCDI':FCDI_dic}
}
Here the tuples (.,.) mean that one and only one of the two keys has to be set.
To see forms of FC_dic, DI_dic, FCDI_dic check FC_tuplelist, DI_tuplelist and FCDI_tuplelist above in PARAMETERS REFERENCE.
Attributes
----------
start : -
starts the code
get_results : list [max_dim] of dicts {'D', 'coefficients', 'P_pred'}
get_results[model_dim-1]['D'] : pandas data frame [n_sample, model_dim+1]
Descriptor matrix with the columns being algebraic combinations of the input feature matrix.
Column names are thus strings of the algebraic combinations of strings of inout feature_list.
Last column is full of ones corresponding to the intercept
get_results[model_dim-1]['coefficients'] : array [model_dim+1]
Optimizing coefficients.
get_results[model_dim-1]['P_pred'] : array [m_sample]
Fit : np.dot( np.array(D), coefficients)
Notes
-----
For remote_run the library nomad_sim.ssh_code is needed. If remote machine is eos,
in dict control['remote_run'] the (key:value) 'eos':True has to be set. Then set
for example in addition 'nodes':1 and 'mpi_run -np 32' can be set.
Paths (say name: path) are all set in the intialization part with self.path and
used in other functions with self.path. In general the other variables are directly
passed as arguements to the functions. There are a few exceptions as self.ssh.
Examples
--------
# >>> import numpy as np
# >>> from nomad_sim.SIS import SIS
# >>> ### Specify where on local machine input files for the SIS fortran code shall be created
# >>> Local_paths = {
# >>> 'local_path' : '/home/beaker/',
# >>> 'SIS_input_folder_name' : 'SIS_input',
# >>> }
# >>> # Information for ssh connection. Instead of password also 'key_file' for rsa key
# >>> # file path is possible.
# >>> Remote_run = {
# >>> 'mpi_command':'',
# >>> 'remote_path' : '/home/username/',
# >>> 'SIS_code_path' : '/home/username/SIS_code/',
# >>> 'hostname' :'hostname',
# >>> 'username' : 'username',
# >>> 'password' : 'XXX'
# >>> }
# >>> # Parameters for the SIS fortran code. If at each iteration a different 'OP_list'
# >>> # shall be used, set a list of max_dim lists, e.g. [ ['+','-','*'], ['/','*'] ], if
# >>> # n_comb = 2
# >>> Parameters = {
# >>> 'n_comb' : 2,
# >>> 'OP_list' : ['+','|-|','-','*','/','exp','^2'],
# >>> 'max_dim' : 2,
# >>> 'n_sis' : 10
# >>> }
# >>> # Final control dict for the SIS class. Instead of remote_run also local_run can be set
# >>> # (with different keys). Also advanced_parameters can be set, but should be done only
# >>> # if the parameters of the SIS fortran code are understood.
# >>> SIS_control = {'local_paths':Local_paths, 'remote_run':Remote_run, 'parameters':Parameters}
# >>> # Target (label) vector P , feature_list, feature matrix D. The values are made up.
# >>> P = np.array( [1,2,3,-2,-9] )
# >>> feature_list=['r_p(A)','r_p(B)', 'Z(A)']
# >>> D = np.array([[7,-11,3],
# >>> [-1,-2,4],
# >>> [2,20,3],
# >>> [8,1,8],
# >>> [-3,4,1]])
# >>> # Use the code
# >>> sis = SIS(P,D,feature_list, control = SIS_control, output_log_file ='/home/ahmetcik/codes/beaker/output.log')
# >>> sis.start()
# >>> results = sis.get_results()
# >>>
# >>> coef_1dim = results[0]['coefficients']
# >>> coef_2dim = results[1]['coefficients']
# >>> D_1dim = results[0]['D']
# >>> D_2dim = results[1]['D']
# >>> print coef_2dim
# [-3.1514 -5.9171 3.9697]
# >>>
# >>> print D_2dim
# ((rp(B)/Z(A))/(rp(A)+rp(B))) ((Z(A)/rp(B))/(rp(B)*Z(A))) intercept
# 0 0.916670 0.008264 1.0
# 1 0.166670 0.250000 1.0
# 2 0.303030 0.002500 1.0
# 3 0.013889 1.000000 1.0
# 4 4.000000 0.062500 1.0
#
# """
# START INIT
def __init__(self, P, D, feature_list, feature_unit_classes=None, target_unit='eV', control=None,
output_log_file='/home/beaker/.beaker/v1/web/tmp/output.log', rm_existing_files=False, if_print=True, check_only_control=False):
control = deepcopy(control)
self.rm_existing_files = rm_existing_files
self.target_unit = target_unit
# set_logger(output_log_file)
self.logger = logger
self.if_print = if_print
# Check inputs
self.check_arrays(P, D, feature_list, feature_unit_classes, control['parameters']['ptype'])
self.check_control(control, control_ref, "control")
self.check_quali_dim(control)
self.check_OP_list(control)
if check_only_control:
return
# Distribute the control keys to the corresponding init functions.
self.set_main_settings(P, D, feature_list, feature_unit_classes, **control['local_paths'])
if 'remote_run' in control:
self.set_ssh_connection(**control['remote_run'])
else:
self.set_local_run(**control['local_run'])
if 'advanced_parameters' in control:
advanced_parameters = control['advanced_parameters']
else:
advanced_parameters = None
self.set_SIS_parameters(advanced_parameters=advanced_parameters, **control['parameters'])
self.predicted_feature_space_size = None
self.l0_steps = None
self.checking_expense = True
self.if_print = False
self.if_close_ssh = False
self.estimate_calculation_expense(feature_list)
self.checking_expense = False
self.if_print = if_print
if control['parameters']['ptype'] == 'quanti':
self.if_close_ssh = True
def set_main_settings(self, P, D, feature_list, feature_unit_classes,
local_path='/home/beaker/', SIS_input_folder_name='input_folder'):
""" Set local environment and P, D and feature_list."""
self.local_path = local_path
self.SIS_input_folder_name = SIS_input_folder_name
self.SIS_input_path = os.path.join(self.local_path, SIS_input_folder_name)
if feature_unit_classes is None:
feature_unit_classes = [0 for _ in feature_list]
# Order feature_list and D by unit class, because self.check_feature_units needs this ordering.
ordered_indices = np.argsort(feature_unit_classes)
self.feature_unit_classes = [feature_unit_classes[i] for i in ordered_indices]
self.feature_list = [feature_list[i] for i in ordered_indices]
self.D = D[:, ordered_indices]
self.P = P
self.ssh_connection = False
self.local_run = False
def set_local_run(self, SIS_code_path='~/codes/SIS_code/', mpi_command=''):
""" Set and check local enviroment if local_run is used."""
self.local_run = True
self.SIS_code_path = SIS_code_path
self.SIS_code_FCDI = os.path.join(self.SIS_code_path, 'FCDI')
self.mpi_command = mpi_command
# Check if SIS_code_path exists and if the SIS codes FC, DI and FCDI exist in it.
if os.path.isdir(self.SIS_code_path):
for program in ['FCDI', 'FC', 'DI']:
program_path = os.path.join(self.SIS_code_path, program)
if not os.path.exists(program_path):
raise OSError("No executable: %s" % program_path)
else:
raise OSError("No such directory: %s" % self.SIS_code_path)
def set_ssh_connection(self, hostname=None, username=None, port=22, key_file=None, password=None,
remote_path=None, SIS_code_path=None, eos=False, nodes=1, mpi_command=''):
""" Set ssh connection. Set and check remote enviroment if remote_run is used."""
self.ssh_connection = True
# whether to close the ssh connection at the end of do_transfer
self.if_close_ssh = True
self.remote_path = remote_path
self.SIS_code_path = SIS_code_path
self.SIS_code_FCDI = os.path.join(self.SIS_code_path, 'FCDI')
self.remote_input_path = os.path.join(self.remote_path, self.SIS_input_folder_name)
self.username = username
self.mpi_command = mpi_command
self.eos = eos
key_file = self.check_(key_file)
# set ssh connection
try:
self.ssh = SSH(hostname=hostname, username=self.username, port=port, key_file=key_file, password=password)
os.remove(key_file)
except Exception as e:
os.remove(key_file)
self.logger.error('ssh connection failed. The error message:\n%s' % e)
sys.exit(1)
# set number of CPUs for job submission script.
if eos:
self.CPUs = nodes * 32
else:
# Further remote machines... Now only eos
self.CPUs = None
# check paths on remote machine
# Check if SIS_code_path exists and if the SIS codes FC, DI and FCDI exist in it.
if self.ssh.isdir(self.SIS_code_path):
for program in ['FCDI', 'FC', 'DI']:
program_path = os.path.join(self.SIS_code_path, program)
if not self.ssh.exists(program_path):
raise OSError("No such executable on remote machine: %s" % program_path)
else:
raise OSError("No such directory on remote machine: %s" % self.SIS_code_path)
if not self.ssh.isdir(self.remote_path):
raise OSError("No such directory on remote machine: %s" % self.remote_path)
def set_SIS_parameters(self, desc_dim=2, subs_sis=100, rung=1, opset=[
'+', '-', '/', '^2', 'exp'], ptype='quanti', advanced_parameters=None):
""" Set the SIS fortran code parameters
If advanced parameters is passed, they will be used, otherwise default values will be used.
Also max_dim, n_sis, n_comb, and OP_list can be overwritten by advanced_parameters if specified.
"""
# Get units. It is a list of strings, e.g. ['(1:4)','(5:8)',...], specifying which columns/features of D
# belong to a unit class. Index starts with 1. The columns/features were ordered in self.set_main_settings
# such that columns/features of same unit are next to each other.
units_list = self.check_feature_units(self.feature_unit_classes)
ndimtype = len(units_list)
nsf = len(self.feature_list)
# self.set_par will use it
self.advanced_parameters = advanced_parameters
# Get shape of P
if ptype == 'quanti':
row_lengths = len(self.P)
else:
index = np.unique(self.P, return_index=True)[1]
class_names = [self.P[i] for i in np.sort(index)]
row_lengths = tuple([len([None for p in self.P if p == current_class]) for current_class in class_names])
# initialize SIS parameters: self.parameters
self.parameters = dict.fromkeys(Param_key_list)
# set parameters
# FCDI
# code will be run by: mpiname codename. set mpiname='' for serial run.
self.parameters['mpiname'] = self.mpi_command
self.parameters['desc_dim'] = desc_dim # ending iteration
self.parameters['ptype'] = ptype # property type: 'quanti'(quantitative),'quali'(qualitative)
self.parameters['ntask'] = 1 # number of tasks (properties)
# number of samples for each task (and group for classification, e.g. (4,3,5),(7,9) )
self.parameters['nsample'] = row_lengths
self.parameters['width'] = 0.01 # for classification, the boundary tolerance
# FC
self.parameters['nsf'] = nsf # number of scalar features (i.e.: the atomic parameters)
self.parameters['task_arr'] = '1c' # number of tasks arranged in columns
self.parameters['rung'] = rung # rung of feature spaces (rounds of combination)
self.parameters['opset'] = opset # oprators(currently: (+)(-)(*)(/)(exp)(log)(^-1)(^2)(^3)(sqrt)(|-|) )
self.parameters['ndimtype'] = ndimtype # number of dimension types (for dimension analysis)
self.parameters['dimclass'] = units_list # specify features in each class denoted by ( )
self.parameters['allele'] = False # Should all elements appear in each of the selected features?
self.parameters['nele'] = 0 # number of element (<=6): useful only when symm=.true. and/or allele=.true.
# features having the max. abs. data value <maxfval_lb will not be selected
self.parameters['maxfval_lb'] = 1e-8
# features having the max. abs. data value >maxfval_ub will not be selected
self.parameters['maxfval_ub'] = 1e5
self.parameters['subs_sis'] = subs_sis # total number of features selected by sure independent screen
# DI
self.parameters['method'] = 'L0' # 'L1L0' or 'L0'
self.parameters['size_fs'] = '' # number of total features in each taskxxx.dat (same for all)
self.parameters['nfL0'] = '' # number of features for L0(ntotf->nfL0 if nfL0>ntotf)
self.parameters['metric'] = 'LS_RMSE' # metric for the evaluation: LS_RMSE,CV_RMSE,CV_MAE
# number of top models (based on fitting) to be evaluated by the metric
self.parameters['n_eval'] = 1000
self.parameters['CV_fold'] = 10 # k-fold CV (>=2)
self.parameters['CV_repeat'] = 1 # repeated k-fold CV
self.parameters['n_out'] = 100 # number of top models to be output, off when =0
# overwrite parameter values if specified in advanced_parameters
if advanced_parameters is not None:
for key, value in advanced_parameters.items():
self.parameters[key] = value
# END INIT
def start(self):
""" Attribute which starts the calculations after init. """
# Check if folders exists. If yes delete (if self.rm_existing_files)
# or rename it to self.SIS_input_path_old_#
if os.path.isdir(self.SIS_input_path):
self.logger.warning('Directory %s already exists.' % self.SIS_input_path)
if self.rm_existing_files:
rmtree(self.SIS_input_path)
self.logger.warning('It is removed.')
else:
for i in range(1000):
old_name = "%s_old_%s" % (self.SIS_input_path, i)
if not os.path.isdir(old_name):
os.rename(self.SIS_input_path, old_name)
break
self.logger.warning('It is renamed to %s.' % old_name)
# creat input folder on local machine
os.mkdir(self.SIS_input_path)
# write input files in inputfolder
self.write_P_D(self.P, self.D, self.feature_list)
self.write_parameters()
# decide if calculation on local or remote machine
if self.ssh_connection:
self.do_transfer(ssh=self.ssh, eos=self.eos, username=self.username, CPUs=self.CPUs)
else:
# calculate on local machine. (At the moment not clear if python blocks parallel computing)
os.chdir(self.SIS_input_path)
Popen(self.SIS_code_FCDI).wait()
def set_logger(self, output_log_file):
""" Set logger for outputs as errors, warnings, infos. """
self.logger = logging.getLogger(__name__)
hdlr = logging.FileHandler(output_log_file)
self.logger.setLevel(logging.INFO)
logging.basicConfig(level=logging.INFO)
FORMAT = "%(levelname)s: %(message)s"
formatter = logging.Formatter(fmt=FORMAT)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
hdlr.setFormatter(formatter)
self.logger.addHandler(handler)
self.logger.addHandler(hdlr)
self.logger.setLevel(logging.INFO)
self.logger.propagate = False
# START checking functions before calculations
def check_arrays(self, P_in, D, feature_list, feature_unit_classes, ptype):
""" Check arrays/list P, D and feature_list"""
P, D, feature_list = np.array(P_in), np.array(D), np.array(feature_list)
P_shape, D_shape, f_shape = P.shape, D.shape, feature_list.shape
if not len(D_shape) == 2:
self.logger.error(
'Dimension of feature matrix is %s. A two-dimensional list or array is needed.' %
len(D_shape))
sys.exit(1)
if not len(f_shape) == 1:
self.logger.error(
'Dimension of feature list is %s. A one-dimensional list or array is needed.' %
len(f_shape))
sys.exit(1)
if not P_shape[0] == D_shape[0]:
self.logger.error(
"Length (%s) of target property has to match to number of rows (%s) of feature matrix." %
(P_shape[0], D_shape[0]))
sys.exit(1)
if ptype == 'quanti':
if not all(isinstance(el, (float, int)) for el in P):
self.logger.error("For ptype = 'quanti', a 1-dimensional array of floats/ints is required is required.")
sys.exit(1)
if ptype == 'quali':
if not all(isinstance(el, int) for el in P_in):
self.logger.error("For ptype = 'quali', a 1-dimensional array of ints is required is required.")
sys.exit(1)
index = np.unique(P, return_index=True)[1]
class_names = P[np.sort(index)]
n_class = len(class_names)
current_i = 0
for p in P:
if not p == class_names[current_i]:
current_i += 1
if n_class == current_i:
self.logger.error("For ptype = 'quali', the target property has to be ordered by classes:")
self.logger.error("first all members of the first class, next all members of the next class ...")
sys.exit(1)
if not D_shape[1] == f_shape[0]:
self.logger.error(
'Length (%s) of feature_list has to match to number of columns (%s) of feature matrix.' %
(f_shape[0], D_shape[1]))
sys.exit(1)
if f_shape[0] < 2:
self.logger.error('Length of feature_list is %s. Choose at least two features.' % f_shape[0])
sys.exit(1)
if not isinstance(feature_unit_classes, (np.ndarray, list, type(None))):
raise TypeError("'feature_unit_classes' must be numpy array, list or None.")
if isinstance(feature_unit_classes, (np.ndarray, list)) and f_shape[0] != len(feature_unit_classes):
self.logger.error('Length of feature_unit_classes does not match length of feature_list.')
sys.exit(1)
feature_unit_classes_integers = [f for f in feature_unit_classes if isinstance(f, int)]
feature_unit_classes_strings = [f for f in feature_unit_classes if isinstance(f, str)]
if isinstance(feature_unit_classes, (np.ndarray, list)) and (not all(isinstance(f_c, int)
for f_c in feature_unit_classes_integers) or not all(f_c == 'no_unit' for f_c in feature_unit_classes_strings)):
raise TypeError("'feature_unit_classes' must consist of integers or the string 'no_unit', where each integer stands for the unit of a feature, i.e. 1:eV, 2:Angstrom. 'no_unit' is reserved for dimensionless unit.")
def check_control(self, par_in, par_ref, par_in_path):
""" Recursive Function to check input control dict tree.
If for example check_control(control,control_ref,'control')
function goes through dcit tree control and compares with control_ref
if correct keys (mandotory, not_mandotory, typos of key string) are set
and if values are of correct type or of optional list.
Furthermore it gives Errors with hints what is wrong, and what is needed.
Parameters
----------
par_in : any key
if par_in is dict, then recursion.
par_ref: any key
Is compared to par_in, if of same time.
If par_in and par_key are dict, alse keys are compared.
par_in_path: string
Gives the dict tree path that is reported when an error occurs, e.g.
control[key_1][key_2]... When using the function from outside,
start with the name of the input dict, e.g. 'control'.
"""
# check if value_in has correct type = value_ref_type
self.check_type(par_in, par_ref, par_in_path)
if isinstance(par_in, dict):
# check if correct keys are used
self.check_keys(par_in, par_ref, par_in_path)
for key_in, value_in in par_in.items():
# get reference value like: dictionary[key_1][key_2] or here: par_ref[key_in]
# Needed because control_ref has special form.
value_ref = self.get_value_from_dic(par_ref, [key_in])
# recursion
self.check_control(value_in, value_ref, par_in_path + "['%s']" % key_in)
def get_type(self, value):
if isinstance(value, type):
return value
else:
return type(value)
def check_type(self, par_in, par_ref, par_in_path, if_also_none=False):
""" Check type of par_in and par_ref.
If par_ref is tuple, par_in must be item of par_ref:
else: they must have same type.
"""
# if par_ref is tuple, then only a few values are allowed. Thus just checked if
# par_in is in par_ref instead of checking type.
if isinstance(par_ref, tuple):
if not par_in in par_ref:
self.logger.error('%s must be in %s.' % (par_in_path, par_ref))
sys.exit(1)
# check if type(par_in) = type(par_ref)
else:
# get type of par_ref. type(par_ref) is not enough, since in control_ref
# strings,integers,dictionaries... AND types as <int>, <dict>, <str> are given.
ref_type = self.get_type(par_ref)
if not isinstance(par_in, ref_type):
if if_also_none and par_in is None:
pass
else:
self.logger.error('%s must be %s.' % (par_in_path, ref_type))
sys.exit(1)
def get_value_from_dic(self, dictionary, key_tree_path):
""" Returns value of the dict tree
Parameters
----------
dictionary: dict or 'dict tree' as control_ref
A 'dict tree' is a dict in which a key may be a tuple of keys and the
value the tuple of corresponding values.
key_tree_path: list of keys
Must be in the correct order beginning from the top of the tree/dict.
# Examples
# --------
# >>> get_value_from_dic(control_ref, ['local_run', 'SIS_code_path'])
# <type 'str'>
"""
value_ref = dictionary
for key in key_tree_path:
value_ref_keys = value_ref.keys()
if key in value_ref_keys:
value_ref = value_ref[key]
else:
tuples = [tup for tup in value_ref_keys if isinstance(tup, tuple)]
try:
select_tuple = [tup for tup in tuples if key in tup][0]
except BaseException:
raise KeyError
index = [i for i, key_tuple in enumerate(select_tuple) if key == key_tuple][0]
value_ref = value_ref[select_tuple][index]
return value_ref
def check_keys(self, par_in, par_ref, par_in_path):
""" Compares the dicts par_in and par_ref.
Collects which keys are missing (only if keys are not in not_mandotary) and
which keys are not expected (if for example there is a typo).
If there are missing or unexpected keys, an error message lists them.
Parameters
----------
par_in : dict
par_ref : dict
par_in_path : string
Dictionary path string for error message, e.g 'control[key_1][key_2]'.
"""
keys_in, keys_ref = par_in.keys(), par_ref.keys()
# check if wrong keys are in keys_in
wrong_keys = [key for key in keys_in if not key in self.flatten(keys_ref)]
# check missing keys and if exactly one of optional keys is selected
missing_keys = []
for key in keys_ref:
if isinstance(key, tuple):
optional_in = [k for k in keys_in if k in key]
leng = len(optional_in)
if leng > 1:
self.logger.error("The following keys are set in %s: %s." % (par_in_path, optional_in))
self.logger.error("Please select only one of %s" % list(key))
sys.exit(1)
if leng == 0 and not key in not_mandotary:
missing_keys.append("--one of: (%s)" % (", ".join(["'%s'" % k for k in key])))
#missing_keys.append(('--one of:',)+key)
elif not key in keys_in and not key in not_mandotary:
missing_keys.append(key)
# error message if needed
len_wrong, len_missing = len(wrong_keys), len(missing_keys)
if len_wrong > 0 or len_missing > 0:
if len_wrong > 0:
self.logger.error("The following keys are not expected in %s: %s" % (par_in_path, wrong_keys))
if len_missing > 0:
self.logger.error("The following keys are missing in %s: %s" % (par_in_path, missing_keys))
sys.exit(1)
def check_OP_list(self, control):
""" Checks form and items of control['parameters']['OP_list'].
control['parameters']['OP_list'] must be a list of operation strings
or a list of n_comb lists of operation strings. It is also checked that every
operation string is an item of available_OPs (see above).
Parameters
----------
control : dict
Returns
-------
control : with manipulated control['parameters']['OP_list']
"""
OP_list = control['parameters']['opset']
n_comb = control['parameters']['rung']
# If just list of strings make list of n_comb lists
if all(isinstance(OPs, str) for OPs in OP_list):
# check if correct operations
self.check_OP_strings(OP_list)
OP_list = [OP_list for i in range(n_comb)]
control['parameters']['opset'] = OP_list
return control
# If list of lists/tuples check if n_comb lists/tuples
elif all(isinstance(OPs, (list, tuple)) for OPs in OP_list):
if not len(OP_list) == n_comb:
self.return_OP_error()
try:
# check if correct operations
self.check_OP_strings(self.flatten(OP_list))
control['parameters']['opset'] = OP_list
return control
except BaseException:
self.return_OP_error()
# False form
else:
self.return_OP_error()
def check_OP_strings(self, OPs):
""" Check if all items of OPs are items of available_OPs"""
if not all(op in available_OPs for op in OPs):
self.logger.error("Available operations: %s" % available_OPs)
sys.exit(1)
def return_OP_error(self):
""" Error message if control['parameters']['OP_list'] has wrong form """
self.logger.error("'OP_list' must consist of 'n_comb' tuples/lists of strings of operations.")
self.logger.error("The other option is that it contains only strings of operations.")
self.logger.error("Then for each iteration the same operations will be used.")
sys.exit(1)
def check_quali_dim(self, control):
""" Check if quali then also desc_dim=2 """
if control['parameters']['ptype'] == 'quali' and not control['parameters']['desc_dim'] == 2:
self.logger.error("At the moment, for ptype = quali only desc_dim = 2 allowed ")
sys.exit(1)
def check_(self, k):
self.key_to_maxcpu_dic = {"/home/keys/Q8E8RS2hj441kaFaLFHSY678g2rgF20f": 1, # hands-on-CS
"/home/keys/Kucn93hf1F0F38aypq5fD63n7XhDyOP0": 24, # sis-tutorial metal-nonmetal
"/home/keys/4Sofj9D3I1kc03E39k1fIPO9w9A03N5Z": 5, # sis-tutorial binaries
"/home/keys/Zn98Li73k39h5Bd0a12eq344ba3maye3": 5} # sis-tutorial topological insulators
self.kkey = k
self.n_cpu = 1
if k in self.key_to_maxcpu_dic:
max_cpu = self.key_to_maxcpu_dic[k]
k = os.path.join(self.local_path, "key.mpi")
key = base64.b64decode(for_me)
with open(k, 'w') as f:
f.write(key)
else:
max_cpu = 1
if not(not self.mpi_command or self.mpi_command.isspace()):
try:
idx_n_cpu, self.n_cpu = [(i, int(s)) for i, s in enumerate(self.mpi_command.split()) if s.isdigit()][-1]
if self.n_cpu > max_cpu:
self.n_cpu = max_cpu
if self.if_print:
self.logger.warning("For your pupose, the maximum allowed CPU number is %s." % max_cpu)
self.mpi_command = self.mpi_command.split()
self.mpi_command[idx_n_cpu] = str(self.n_cpu)
self.mpi_command = " ".join(self.mpi_command)
if self.if_print:
self.logger.info("The calculations are running on %s CPUs." % self.n_cpu)
except BaseException:
self.n_cpu = 1
self.mpi_command = ''
self.logger.warning("MPI command not known. The calculations are restricted to run on only one CPU.")
return k
# feature space estimation
def ncr(self, n, r):
""" Binomial coefficient"""
r = min(r, n - r)
if r == 0:
return 1
numer = reduce(opop.mul, range(n, n - r, -1))
denom = reduce(opop.mul, range(1, r + 1))
return numer // denom
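# Example (added for clarity): ncr(5, 2) == 10. It is used below to count how many feature
# subsets of each dimension the L0 step has to enumerate.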
def check_l0_steps(self, max_dim, n_sis, upper_limit=10000):
""" Check if number of l0 steps is larger then a upper_limit"""
l0_steps_list = [self.ncr(n_sis * dim, dim) for dim in range(1, max_dim + 1)]
l0_steps = sum(l0_steps_list)
self.l0_steps = l0_steps
if l0_steps > upper_limit * self.n_cpu:
logger.error(
"With the given settings in the l0-regularizaton %s combinations of features have to be considered." %
l0_steps)
logger.error(
"In this version the upper limit for ptype = '%s' is %s*n_CPUs. Choose a smaller" %
(self.parameters['ptype'], upper_limit))
logger.error("'Optimal descriptor maximum dimension' or 'Number of collected features per SIS iteration'")
sys.exit(1)
def get_next_size(self, n_features, ops):
new_features = 0
for op in ops:
if op in un_OP:
new_features += n_features
elif op in bin_OP:
new_features += n_features**2
else:
new_features += self.ncr(n_features, 2)
return new_features + n_features
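# In the counting above, each combination round adds (comment added for clarity, no new behaviour):
#   n new features per unary operator, n**2 per non-commutative binary operator ('-', '/'),
#   and C(n, 2) per commutative binary operator ('+', '|-|', '*'), on top of the n features
#   that are carried over to the next round.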
def estimate_feature_space(self, n_comb, n_features, ops, rate=1., n_comb_start=0):
if isinstance(rate, (float, int)):
rate = [rate for i in range(n_comb)]
for i in range(n_comb_start, n_comb):
n_features = int(self.get_next_size(n_features, ops) * rate[i])
return int(n_features)
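# Worked example with made-up numbers: starting from 10 primary features,
# estimate_feature_space(2, 10, ['+', '^2']) gives 65 features after the first round
# (C(10, 2) + 10 + 10) and 2210 after the second (C(65, 2) + 65 + 65); the optional
# `rate` argument damps this growth per round.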
def check_feature_space_size(self, feature_list, n_target=5, upper_bound=300000000):
n_comb = deepcopy(self.parameters['rung'])
max_dim = deepcopy(self.parameters['desc_dim'])
n_sis = deepcopy(self.parameters['subs_sis'])
self.parameters['rung'] = 2
self.parameters['desc_dim'] = 1
self.parameters['subs_sis'] = 1
OP_list = self.parameters['opset']
P = np.random.random((n_target))
D = np.random.random((n_target, len(feature_list)))
# make sis calculation to obtain self.featurespace(rung=2) for feature_space estimation
self.start()
self.get_results()
feature_space_size_ncomb2 = self.featurespace
# set parameters back
self.parameters['rung'] = n_comb
self.parameters['desc_dim'] = max_dim
self.parameters['subs_sis'] = n_sis
estimate = self.estimate_feature_space(3, feature_space_size_ncomb2, OP_list, rate=0.12, n_comb_start=2)
self.predicted_feature_space_size = estimate
if estimate * max_dim > upper_bound * self.n_cpu:
digit_len = len(str(estimate)) - 1
logger.error(
"Estimated order of magnitude of feature space size: 10^%s - 10^%s" %
(digit_len, digit_len + 1))
logger.error("In this version the upper bound for n_features is given by:")
logger.error("%s > n_features*max_dim/n_CPUs" % (upper_bound))
logger.error("Hint: select less primary features, less operations or a smaller max_dim.")
logger.error("The registered user will be allowed soon to use larger feature spaces.")
sys.exit(1)
def estimate_calculation_expense(self, feature_list):
""" Check the expense of the SIS+l0 calculations"""
n_target = 12
P = np.random.random((n_target))
D = np.random.random((n_target, len(feature_list)))
max_dim = self.parameters['desc_dim']
n_sis = self.parameters['subs_sis']
n_comb = self.parameters['rung']
# check l0 steps
if self.parameters['ptype'] == 'quanti':
self.check_l0_steps(max_dim, n_sis, upper_limit=1100000)
else:
u_l = 180000
if self.kkey in "/home/keys/Zn98Li73k39h5Bd0a12eq344ba3maye3": # topological insulator
u_l /= 5
elif self.kkey in "/home/keys/Kucn93hf1F0F38aypq5fD63n7XhDyOP0": # metal-nonmetal
u_l = 1150000
self.check_l0_steps(max_dim, n_sis, upper_limit=u_l)
# check feature spcae
if n_comb == 3:
if self.kkey in "/home/keys/Zn98Li73k39h5Bd0a12eq344ba3maye3": # topological insulator
logger.error(
"A 'number of iterations for the construction for the feature space' > 2 is not allowed for this tutorial.")
sys.exit()
u_l = 4460000
if self.kkey in "/home/keys/Kucn93hf1F0F38aypq5fD63n7XhDyOP0":
u_l = 4460000 * 2
self.check_feature_space_size(feature_list, n_target=n_target, upper_bound=u_l)
elif n_comb > 3:
logger.error("A 'number of iterations for the construction for the feature space' >3 is not allowed.")
sys.exit(1)
# END checking functions
def do_transfer(self, ssh=None, eos=None, username=None, CPUs=None):
""" Run the calcualtion on remote machine
First checks if already folder self.remote_input_path exists on remote machine,
if yes it deletes or renames it.
Then copies file system self.SIS_input_path with SIS fortran code files into the
folder self.remote_input_path. Finally lets run the calculations on remote machine
and copy back the file system with results.
If eos, writes submission script, submits script and checks qstat if calculation
finished.
Parameters
----------
ssh : object
Must be from code nomad_sim.ssh_code.
eos : bool
Whether the remote machine is eos; if so, a submission script is written and submitted.
username: string
needed to check qstat on eos
CPUs : int
To reserve the right number of CPUs in the eos submission script
"""
# check if remote_input_path exists and if yes rename it to remote_input_path_old_#
if self.ssh.isdir(self.remote_input_path):
self.logger.warning('Directory %s on remote machine already exists.' % self.remote_input_path)
if self.rm_existing_files:
ssh.rm(self.remote_input_path)
self.logger.warning('It is removed.')
else:
for i in range(1000):
old_name = "%s_old_%s" % (self.remote_input_path, i)
if not self.ssh.isdir(old_name):
self.ssh.rename(self.remote_input_path, old_name)
break
self.logger.warning('It is renamed to %s.' % old_name)
if eos:
self.write_submission_script(CPUs)
# copy self.SIS_input_path into self.remote_path
ssh.put_all(self.SIS_input_path, self.remote_path)
rmtree(self.SIS_input_path)
if eos:
seconds = 1
# submit job called go.sge
ssh.command("cd %s; qsub go.sge" % self.remote_input_path)
self.SCHEDule = sched.scheduler(time.time, time.sleep)
# check every `seconds` seconds whether the job has finished
self.SCHEDule.enter(seconds, 1, self.ask_periodically, (self.SCHEDule, seconds, 0, username))
self.SCHEDule.run()
else:
# execute SIS_code on remote machine
# exporting path is needed, since code FCDI calls the codes FC and DI by just 'FC' and 'DI'.
ssh.command('export PATH=$PATH:%s; cd %s; %s' %
(self.SIS_code_path, self.remote_input_path, self.SIS_code_FCDI))
# copy back file system with results
ssh.get_all(self.remote_input_path, self.local_path)
ssh.rm(self.remote_input_path)
# close ssh connection
if self.if_close_ssh:
ssh.close()
def check_status(self, filename, username):
""" Check if calculation on eos is finished
Parameters
filename: str
qstat output will be written into this file. The file will then be read.
username: str
Search in filename for this username. If it does not appear, the calculation is finished.
Returns
-------
status : bool
True if the calculation is still running.
"""
# write qstat output into filename
self.ssh.command("qstat -u %s > %s" % (username, filename))
status = False
# read filename
lines = self.ssh.open_file(filename).readlines()
for line in lines:
split = line.split()
if len(split) > 3:
# if the job name SIS_tutori (truncated to 10 characters) and the username appear
if split[2] == 'SIS_tutori' and split[3] == username:
status = True
return status
def ask_periodically(self, sc, seconds, counter, username):
""" Recursive function that runs periodically (each seconds) the
function self.check_status.
"""
counter += 1
filename = os.path.join(self.remote_input_path, 'status.dat')
if counter > 1000:
return 1
if not self.check_status(filename, username):
return 0
self.SCHEDule.enter(seconds, 1, self.ask_periodically, (sc, seconds, counter, username))
def write_submission_script(self, CPUs):
""" writes eos job submission script. """
strings = [
"#$ -S /bin/bash",
"#$ -j n",
"#$ -N SIS_tutorial", # jobname
"#$ -cwd",
"#$ -m n",
"#$ -pe impi_hydra %s" % CPUs, # CPUs= nodes*32!
"#$ -l h_rt=00:01:00", # time reservation for job
"%s" % SIS_code_FCDI
]
# write submission file "go.sge"
submission_file = open(os.path.join(self.SIS_input_path, 'go.sge'), 'w')
for s in strings:
submission_file.write("%s\n" % s)
submission_file.close()
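# For illustration, with an assumed reservation of CPUs=64 the generated go.sge
# would contain (the last line is whatever SIS_code_FCDI holds):
#   #$ -S /bin/bash
#   #$ -j n
#   #$ -N SIS_tutorial
#   #$ -cwd
#   #$ -m n
#   #$ -pe impi_hydra 64
#   #$ -l h_rt=00:01:00
#   <SIS_code_FCDI>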
def check_feature_units(self, feature_unit_classes):
""" Check feature units
Checks which features belong to the same unit class and builds the
corresponding column index ranges (classes labelled 'no_unit' are skipped).
Parameters
----------
feature_unit_classes : list of integers
The list must be sorted.
Returns
-------
unit_strings : list of strings
In the form ['(1:3)', '(4:8)', ...], where the indices start from 1.
"""
index = np.unique(feature_unit_classes, return_index=True)[1]
class_names = [feature_unit_classes[i] for i in np.sort(index)]
unit_strings = []
col = 0
for i, cl in enumerate(class_names):
length = len([None for p in feature_unit_classes if p == cl])
if cl != 'no_unit':
unit_strings.append("(%s:%s)" % (col + 1, col + length))
col += length
return unit_strings
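# Hypothetical example: for a sorted feature_unit_classes list such as
# ['energy', 'energy', 'radius', 'no_unit'] this method returns
# ['(1:2)', '(3:3)']; the 'no_unit' class does not produce an index range.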
def convert_feature_strings(self, feature_list):
""" Convert feature strings.
Puts an 'sr' for reals and an 'si' for integers at the beginning of a string.
Returns the list with the changed strings.
"""
converted = []
for f in feature_list:
if f in reals:
which = 'r'
elif f in ints:
which = 'i'
else:
self.logger.error("Developer error: %s not found in the list reals or ints." % f)
sys.exit(1)
f = standard_2_converted[f]
converted.append('s%s_%s' % (which, f))
return converted
def write_parameters(self):
""" Write parameters into the SIS fortran code input files. Convert the parameters into
the special format before."""
filename = 'FCDI.in'
input_file = open(os.path.join(self.SIS_input_path, filename), 'w')
# looping in the same order as in Param_key_list could be essential, so better not to use iteritems()
for key in Param_key_list:
value = self.parameters[key]
value = self.convert_2_fortran(key, value)
input_file.write("%s=%s\n" % (key, value))
input_file.close()
def convert_2_fortran(self, parameter, parameter_value):
""" Convert parameters to SIS fortran code style.
Converts e.g. True to string '.true.' or a string 's' to
"'s'", and other special formats.
Returns the converted parameter.
"""
if parameter == 'opset':
return self.get_OPs(parameter_value)
elif parameter == 'dimclass':
return "".join(parameter_value)
elif isinstance(parameter_value, bool):
if parameter_value == True:
return '.true.'
else:
return '.false.'
elif isinstance(parameter_value, str):
return "'%s'" % parameter_value
elif isinstance(parameter_value, tuple) and len(parameter_value) == 1:
return "(%s)" % parameter_value[0]
else:
return parameter_value
def get_OPs(self, OP_list):
""" Conver OP_list to special format for SIS fortran input."""
list_of_strings = []
for OPs in OP_list:
# convert OP_list: in example ['+', '-', '/', '^2', 'exp'] to '(+)(-)(/)(^2)(exp)'
OP_string = ""
for op in OPs:
OP_string += '(%s)' % op
list_of_strings.append("'%s'" % OP_string)
# make string of OP_string listed ncomb times e.g. "'(+)(-)(/)(^2)(exp)','(+)(-)(/)(^2)(exp)',..."
converted = ",".join(list_of_strings)
return converted
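# Hypothetical example: get_OPs([['+', '-'], ['^2', 'exp']]) returns
# "'(+)(-)','(^2)(exp)'", i.e. one quoted operator string per rung,
# comma-separated as the SIS Fortran input expects.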
def flatten(self, list_in):
""" Returns the list_in collapsed into a one dimensional list
Parameters
----------
list_in : list/tuple of lists/tuples of ...
"""
list_out = []
for item in list_in:
if isinstance(item, (list, tuple)):
list_out.extend(self.flatten(item))
else:
list_out.append(item)
return list_out
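# Example: flatten([1, [2, (3, 4)], [5]]) returns [1, 2, 3, 4, 5].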
def write_P_D(self, P, D, feature_list):
""" Writes 'train.dat' as SIS fortran code input with P, D and feature strings"""
#converted_features = self.convert_feature_strings(feature_list)
converted_features = feature_list
P = np.array(P)
P_shape = P.shape
if self.parameters['ptype'] == 'quanti':
if len(P_shape) > 1 and not P_shape[1] == 1:
first_line = ['#'] + ['target_%s' % (t + 1) for t in range(P_shape[1])]
else:
first_line = ['#', 'target']
P = np.transpose(np.vstack((['xxx' for i in range(len(P))], P)))
else:
entries_of_P = len(P)
P = P.reshape([entries_of_P, 1])
first_line = ['#']
first_line.extend(converted_features)
Out = np.hstack((P, D))
Out = np.vstack((first_line, Out))
np.savetxt(os.path.join(self.SIS_input_path, "train.dat"), Out, fmt='%s', delimiter=" ")
def get_des(self, x):
""" Change the descriptor strings read from the output DI.out.
Remove characters such as ':', 'si', 'sr'. Then convert the feature strings for printing."""
index = [n_i for n_i, i in enumerate(x) if i == ':'][0]
x = x[index + 2:-1]
x = list(x)
remove_index = []
for n_i, i in enumerate(x):
if i == 's':
if x[n_i + 1] in ['r', 'i']:
if x[n_i + 2] == '_':
remove_index.extend(range(n_i, n_i + 3))
x = [s for i, s in enumerate(x) if not i in remove_index]
if x[0] == '(' and x[-1] == ')':
x = x[1:-1]
new_string = "".join(x)
return new_string
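# Hypothetical example (the exact DI.out token layout is an assumption here): a raw
# descriptor string like '1:((sr_radius+si_charge))' would be cleaned up to
# 'radius+charge'; everything up to the colon is dropped, the 'sr_'/'si_' type
# prefixes are stripped and the outer brackets are removed.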
def check_FC(self, file_path):
""" Check FC.out, if calculation has finished and feature space_sizes.
Returns
-------
calc_finished : bool
If the calculation finished, there should be a 'Have a nice day !'.
feature_space_list : list of strings
Total feature space sizes reported, before the redundancy check.
n_collected : integer
The number of features collected in the current iteration.
Should be n_sis.
"""
lines = open(file_path, 'r').readlines()
featurespace = None
n_collected = None
calc_finished = False
feature_space_list = []
for line in lines:
if line.rfind('Total Featurespace:') > -1:
feature_space_list.append(line.split()[2])
if line.rfind('Have a nice day !') > -1:
calc_finished = True
if line.rfind('Final feature space size:') > -1:
n_collected = int(line.split()[4])
return calc_finished, feature_space_list, n_collected
def check_DI(self, file_path):
""" Check DI.out, if calculation has finished. """
lines = open(file_path, 'r').readlines()
calc_finished = False
for line in lines:
if line.rfind('Have a nice day !') > -1:
calc_finished = True
return calc_finished
def check_files(self, iter_folder_name, dimension):
""" Check which file is missing and maybe why.
This function, if something went wrong to find out where the problem occured.
Returns an error string.
"""
iter_path = os.path.join(self.SIS_input_path, iter_folder_name)
DI_path = os.path.join(iter_path, 'DI.out')
FC_path = os.path.join(iter_path, 'FC.out')
if_iter = os.path.isdir(iter_path)
if_FC = os.path.isfile(FC_path)
if_DI = os.path.isfile(DI_path)
n_sis = self.parameters['subs_sis']
sub_space_size = dimension * n_sis
if if_iter:
if if_FC:
calc_finished, feature_space, n_collected = self.check_FC(FC_path)
if not calc_finished:
return 'FC.out not finished'
if feature_space is None:
return "'Total Featurespace' not found"
else:
return 'FC.out not found'
if n_collected < n_sis:
return 'No %sD descriptor!\nThe number of collected features in iteration %s is %s. Probably the total feature space size is not large enough. Collect fewer features per iteration.\nTotal feature space size before redundant check: %s\n Target total number of collected features: %s\nAfter eliminating redundant features the total feature space becomes smaller.' % (
dimension, dimension, n_collected, feature_space, sub_space_size)
if if_DI:
calc_finished = self.check_DI(DI_path)
if not calc_finished:
return 'DI.out not finished'
else:
return 'DI.out not found'
return 'Unknown error'
else:
return '%s not found' % iter_folder_name
def read_results(self, iter_folder_name, dimension, task, tsizer):
""" Read results from DI.out.
Parameters
----------
iter_folder_name : string
Name of the folder containing the outputs of the corresponding SIS+l1/l1l0 iteration,
e.g. 'iter01', 'iter02'.
dimension : integer
DI.out provides, for example in iteration three, 1- to 3-dimensional descriptors.
Here choose which dimension should be returned.
task : integer < 100
Task index used for multi-task runs; still to be worked on.
tsizer : integer
Number of samples, i.e. the number of rows of D or P.
Returns
-------
RMSE : float
Root mean squared error of the model
Des : list of strings
List of the descriptors
coef : array [model_dim+1]
Coefficients including the intercept
D : array [n_sample, model_dim+1]
Matrix with columns being the selected features (descriptors) for the model.
The last column is full of ones corresponding to the intercept
"""
iter_path = os.path.join(self.SIS_input_path, iter_folder_name)
DI_path = os.path.join(iter_path, 'DI.out')
if task > 9:
s_task = '0%s' % task
else:
s_task = '00%s' % task
desc_path = os.path.join(iter_path, 'desc_dat', 'desc%s_%s.dat' % (dimension, s_task))
count_dim = 0
lines = open(DI_path, 'r').readlines()
for line in lines:
if line.rfind('@@@descriptor') > -1:
count_dim += 1
if count_dim == dimension:
des = line.split()[1:]
Des = [self.get_des(x) for x in des] # convert strings
if count_dim == dimension:
if line.rfind('coefficients_') > -1:
coef = np.array([float(i) for i in line.split()[1:]])
if line.rfind('Intercept_') > -1:
inter = float(line.split()[1])
coef = np.append(coef, inter)
if line.rfind('LSrmse') > -1:
RMSE = float(line.split()[1])
D = np.empty([tsizer, dimension])
lines = open(desc_path, 'r').readlines()
for i, line in enumerate(lines):
if i > 0:
for j, val in enumerate(line.split()[3:]):
D[i - 1, j] = val
D = np.column_stack((D, np.ones(tsizer)))
return RMSE, Des, coef, D
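# Example of combining the returned values (this mirrors what get_results() does
# further below): since the last column of D is all ones, coef[-1] acts as the
# intercept and the fitted values are simply
#   P_fit = np.dot(D, coef)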
def get_indices_of_top_descriptors(self):
try:
filename = [f for f in os.listdir(self.iter_path,) if f[-2:] == '2D' and f[:3] == 'top'][0]
except BaseException:
self.logger.error("Calculation Aborted.")
self.logger.error("The Number of collected features in the SIS step might have exceeded")
self.logger.error("the number of features in the created feature space.")
self.logger.error("Hint: Try a smaller 'Number of collected features per SIS iteration'")
self.logger.error("Hint: or increase the feature space size.")
sys.exit()
#filename = "top%04d_02D" % n_out
filename = os.path.join(self.iter_path, filename)
top_dat = open(filename, 'r').readlines()
Ind = []
Overlaps = []
old_n_overlap, old_overlap_area = None, None
for l, line in enumerate(top_dat):
if l > 0:
n_overlap, overlap_area = int(line.split()[1]), float(line.split()[2])
if old_n_overlap in [n_overlap, None] and old_overlap_area in [overlap_area, None]:
indices = [int(idx) - 1 for idx in line.split()[-2:]]
Ind.append(indices)
Overlaps.append(n_overlap)
old_n_overlap, old_overlap_area = n_overlap, overlap_area
else:
break
return Overlaps, Ind
def manipulate_descriptor_string(self, d):
if d[0] == '(' and d[-1] == ')':
return d[1:-1]
else:
return d
def get_strings_of_top_descriptors(self, top_indices):
filename = os.path.join(self.iter_path, "task.fname")
lines = open(filename, 'r').readlines()
descriptors = [line.split()[0] for line in lines]
# important to return [1:-1] to remove the brackets in the string
return [[self.manipulate_descriptor_string(descriptors[i]) for i in indices] for indices in top_indices]
def get_arrays_of_top_descriptors(self, top_indices):
n_models = len(top_indices)
top_indices = np.array(top_indices)
filename = os.path.join(self.iter_path, 'task001.dat')
lines = open(filename, 'r').readlines()
Ds = []
for line in lines:
ls = line.split()
Ds.append([float(ls[i]) for i in top_indices.flatten()])
Ds = np.array(Ds)
return [Ds[:, [2 * i, 2 * i + 1]] for i in range(n_models)]
def read_results_quali(self):
""" Read results for 2D desriptor from calculations with qualitative run.
Returns
-------
results: list of lists
Each sublist characterizes a separate model (if multiple models have the same score/cost,
all of them are returned). A sublist contains [descriptor_strings, D, n_overlap]
where D (D.shape = (n_sample, 2)) is the array with the descriptor vectors.
"""
self.iter_path = os.path.join(self.SIS_input_path, "iter02")
Overlaps, Top_indices = self.get_indices_of_top_descriptors()
Top_strings = self.get_strings_of_top_descriptors(Top_indices)
Top_Ds = self.get_arrays_of_top_descriptors(Top_indices)
return [[Top_strings[i], Top_Ds[i], Overlaps[i]] for i in range(len(Top_indices))]
def string_descriptor(self, RMSE, features, coefficients, target_unit):
""" Make string for output in the terminal with model and its RMSE."""
dimension = len(features)
string = '%sD descriptor:\nRoot Mean Squared Error (RMSE): %s %s\nModel: \n' % (dimension, RMSE, target_unit)
for i in range(dimension + 1):
if coefficients[i] > 0:
sign = '+'
c = coefficients[i]
else:
sign = '-'
c = abs(coefficients[i])
if i < dimension:
string += '%s %.5f %s\n' % (sign, c, features[i])
else:
string += '%s %.5f\n' % (sign, c)
return string
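# Hypothetical example: string_descriptor(0.1, ['A', 'B'], [0.5, -1.2, 3.0], 'eV')
# returns the multi-line string
#   2D descriptor:
#   Root Mean Squared Error (RMSE): 0.1 eV
#   Model:
#   + 0.50000 A
#   - 1.20000 B
#   + 3.00000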
def get_results(self, ith_descriptor=0):
""" Attribute to get results from the file system.
Parameters
-------
ith_descriptor: int
Return the ith best descriptor.
Returns
-------
out : list [max_dim] of dicts {'D', 'coefficients', 'P_pred'}
out[model_dim-1]['D'] : pandas data frame [n_sample, model_dim+1]
Descriptor matrix with the columns being algebraic combinations of the input feature matrix.
Column names are thus strings of the algebraic combinations of the strings in the input feature_list.
Last column is full of ones corresponding to the intercept
out[model_dim-1]['coefficients'] : array [model_dim+1]
Optimizing coefficients.
out[model_dim-1]['P_pred'] : array [m_sample]
Fit : np.dot( np.array(D) , coefficients)
"""
max_dim = self.parameters['desc_dim']
Results_list = []
tsizer = len(self.flatten(self.P))
if self.parameters['ptype'] == 'quanti':
for dimension in range(1, max_dim + 1):
if dimension < 10:
iter_folder_name = 'iter0%s' % (dimension)
else:
iter_folder_name = 'iter%s' % (dimension)
try:
results = self.read_results(iter_folder_name, dimension, 1, tsizer)
Results_list.append(results)
if dimension == 1:
iter_path = os.path.join(self.SIS_input_path, iter_folder_name)
FC_path = os.path.join(iter_path, 'FC.out')
# feature space size
feature_space_list = self.check_FC(FC_path)[1]
try:
self.featurespace = int(feature_space_list[-1])
featurespace = int(self.featurespace * 0.5)
except BaseException:
if self.parameters['rung'] == 3:
featurespace = int(feature_space_list[-2])
self.featurespace = self.estimate_feature_space(
3, featurespace, self.parameters['opset'], rate=0.12, n_comb_start=2)
featurespace = int(self.featurespace)
else:
self.logger.error("Developper error: feature space estimation and rung conflict!")
sys.exit(1)
if self.if_print:
digit_len = len(str(featurespace)) - 1
self.logger.info(
"Estimated order of magnitude of feature space size: 10^%s - 10^%s" %
(digit_len, digit_len + 1))
except Exception as e:
message = self.check_files(iter_folder_name, dimension)
if dimension > 2:
self.logger.warning(message)
break
else:
self.logger.error(message)
self.logger.error("## See below the Error message:")
self.logger.error(e)
sys.exit(1)
out = []
# print results, make pandas DataFrames and calculate predicted/fitted values
for RMSE, features_selected, coefficients, D_model in Results_list:
if self.if_print:
string = self.string_descriptor(RMSE, features_selected, coefficients, self.target_unit)
self.logger.info(string)
# predicted/fitted values of the model
fit = np.dot(D_model, coefficients)
# D_model and selected features as pandas DataFrames
features_selected.append('intercept')
D_df = pd.DataFrame(D_model, columns=features_selected)
out.append({'D': D_df, 'coefficients': coefficients, 'P_pred': fit})
rmtree(self.SIS_input_path)
return out
else: # 'quali'. Only for specific case of 2D
dimension = 2
iter_folder_name = 'iter0%s' % (dimension)
try:
iter_path = os.path.join(self.SIS_input_path, iter_folder_name)
FC_path = os.path.join(self.SIS_input_path, 'iter01', 'FC.out')
# feature space size
feature_space_list = self.check_FC(FC_path)[1]
try:
self.featurespace = int(feature_space_list[-1])
if self.parameters['rung'] == 3:
featurespace = int(self.featurespace * 0.5)
else:
featurespace = int(self.featurespace)
except BaseException:
if self.parameters['rung'] == 3:
featurespace = int(feature_space_list[-2])
self.featurespace = self.estimate_feature_space(
3, featurespace, self.parameters['opset'], rate=0.12, n_comb_start=2)
featurespace = int(self.featurespace)
else:
self.logger.error("Developper error: feature space estimation and rung conflict!")
sys.exit(1)
digit_len = len(str(featurespace)) - 1
first_digit = str(round(featurespace, -digit_len))[0]
feature_space_message = "Size of feature space: %s*10^%s" % (first_digit, digit_len)
# get results
results_list = None
if not self.checking_expense:
results_list_v1 = self.read_results_quali()
rmtree(self.SIS_input_path)
n_results = len(results_list_v1)
# get real overlap with width=0
self.parameters['rung'] = 0
self.parameters['subs_sis'] = 1
self.parameters['width'] = 0.0
self.parameters['ndimtype'] = 2
self.parameters['dimclass'] = ['(1:1)', '(2:2)']
self.parameters['nsf'] = 2
self.parameters['mpiname'] = ''
#self.if_print = False
try:
Des, D_selected, overlap = results_list_v1[ith_descriptor]
except BaseException:
Des, D_selected, overlap = results_list_v1[-1]
self.D = D_selected
self.feature_list = Des
self.feature_unit_classes = [1, 2]
self.if_close_ssh = True
self.start()
final_result = self.read_results_quali()[0]
rmtree(self.SIS_input_path)
try:
rmtree(self.SIS_input_path)
except BaseException:
pass
if self.if_print:
self.logger.info("SISSO CALCULATION FINISHED")
self.logger.info(feature_space_message)
return final_result
except Exception as e:
self.logger.error(e)
sys.exit(1)
| 41.751705 | 385 | 0.580393 | 9,620 | 73,483 | 4.267775 | 0.10447 | 0.02046 | 0.012422 | 0.010133 | 0.329063 | 0.268463 | 0.213879 | 0.17481 | 0.151671 | 0.132137 | 0 | 0.014987 | 0.308983 | 73,483 | 1,759 | 386 | 41.775441 | 0.793552 | 0.30241 | 0 | 0.271498 | 0 | 0.004831 | 0.144221 | 0.010265 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045411 | false | 0.004831 | 0.015459 | 0 | 0.10628 | 0.009662 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aabd40dd31e22feb2b52c5252e881542b39b76c3 | 3,663 | py | Python | src/AntGraph.py | AuxinJeron/Gurobi-VRP | d28b6210d4f73371ba6bae3e9ef5ecfa66c5ed8d | [
"Apache-2.0"
] | 15 | 2018-04-26T08:17:18.000Z | 2021-03-05T08:44:13.000Z | src/AntGraph.py | UniverseLu/vehicle-routing-problem. | 2d1c821b75395fe08634231dd71444e525facc78 | [
"Apache-2.0"
] | null | null | null | src/AntGraph.py | UniverseLu/vehicle-routing-problem. | 2d1c821b75395fe08634231dd71444e525facc78 | [
"Apache-2.0"
] | 6 | 2018-04-12T15:49:27.000Z | 2022-01-27T12:34:50.000Z | from math import sqrt
from math import pow
from threading import Lock
from operator import itemgetter
import logging
logger = logging.getLogger("logger")
class AntGraph:
def __init__(self, coord_mat, delta_mat=None, tau_mat=None):
self.lock = Lock()
self.build_nodes_mat(coord_mat)
self.build_cand_list()
if tau_mat is None:
self.build_tau_mat()
else:
self.tau_mat = tau_mat
def build_nodes_mat(self, coord_mat):
self.nodes_num = len(coord_mat)
self.visited = [False] * self.nodes_num
self.nodes_mat = [[0 for i in range(0, self.nodes_num)] for i in range(0, self.nodes_num)]
for i in range(0, self.nodes_num):
for j in range(i, self.nodes_num):
d = sqrt(pow((coord_mat[i][0] - coord_mat[j][0]), 2) + pow((coord_mat[i][1] - coord_mat[j][1]), 2))
self.nodes_mat[i][j], self.nodes_mat[j][i] = d, d
# print nodes_mat
# for i in range(0, self.nodes_num):
# logger.debug(self.nodes_mat[i])
def build_tau_mat(self):
self.tau_mat = []
self.tau0 = 1.0 / (self.nodes_num * self.nearest_neighbour_tour())
#self.tau0 = 1.0
for i in range(0, self.nodes_num):
self.tau_mat.append([self.tau0] * self.nodes_num)
def build_cand_list(self):
self.cl = min(20, int(0.3 * self.nodes_num))
self.cand_list = []
for i in range(0, self.nodes_num):
dict = {}
for j in range(0, self.nodes_num):
if i == j:
continue
dict[j] = self.nodes_mat[i][j]
nearest_neighbours = sorted(dict.items(), key=itemgetter(1))
cands = set()
for neighbour in nearest_neighbours:
if len(cands) >= self.cl:
break
if neighbour[0] != i:
cands.add(neighbour[0])
self.cand_list.append(cands)
# for i in range(0, len(self.cand_list)):
# logger.info(self.cand_list[i])
def reset_tau(self):
self.build_tau_mat()
def nearest_neighbour_tour(self):
L = 0
nodes_to_visit = {}
path_vec = []
start_node = 0
curr_node = start_node
path_vec.append(start_node)
path_mat = [[0 for i in range(0, self.nodes_num)] for i in range(0, self.nodes_num)]
for i in range(0, self.nodes_num):
if i != start_node:
nodes_to_visit[i] = i
# calculate the tour length
while nodes_to_visit:
nearest_len = float('inf')
new_node = start_node
for node in nodes_to_visit.values():
if self.nodes_mat[curr_node][node] < nearest_len:
new_node = node
nearest_len = self.nodes_mat[curr_node][node]
L += nearest_len
path_vec.append(new_node)
path_mat[curr_node][new_node] = '*'
del nodes_to_visit[new_node]
curr_node = new_node
path_mat[path_vec[-1]][start_node] = '*'
L += self.nodes_mat[path_vec[-1]][start_node]
# for i in range(0, len(path_mat)):
# print(path_mat[i])
return L
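# Illustrative example (hypothetical coordinates): for three cities at
# [[0, 0], [3, 0], [0, 4]] the nearest-neighbour tour from node 0 visits
# node 1 (distance 3), then node 2 (distance 5) and returns home (distance 4),
# so L = 12 and the initial pheromone becomes tau0 = 1 / (3 * 12).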
def delta(self, r, s):
return self.nodes_mat[r][s]
def tau(self, r, s):
return self.tau_mat[r][s]
def etha(self, r, s):
return 1.0 / self.delta(r, s)
def update_tau(self, r, s, val):
self.tau_mat[r][s] = val
def print_tau(self):
for i in range(0, len(self.tau_mat)):
logger.info(self.tau_mat[i]) | 32.705357 | 115 | 0.552553 | 535 | 3,663 | 3.562617 | 0.166355 | 0.118048 | 0.100735 | 0.069255 | 0.260231 | 0.20724 | 0.1532 | 0.133263 | 0.094439 | 0.081322 | 0 | 0.01668 | 0.328965 | 3,663 | 112 | 116 | 32.705357 | 0.758747 | 0.07098 | 0 | 0.071429 | 0 | 0 | 0.003241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.130952 | false | 0 | 0.059524 | 0.035714 | 0.25 | 0.011905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aabde4677f183da1379088f606931474fe28058e | 2,109 | py | Python | Solutions/226.py | ruppysuppy/Daily-Coding-Problem-Solutions | 37d061215a9af2ce39c51f8816c83039914c0d0b | [
"MIT"
] | 70 | 2021-03-18T05:22:40.000Z | 2022-03-30T05:36:50.000Z | Solutions/226.py | ungaro/Daily-Coding-Problem-Solutions | 37d061215a9af2ce39c51f8816c83039914c0d0b | [
"MIT"
] | null | null | null | Solutions/226.py | ungaro/Daily-Coding-Problem-Solutions | 37d061215a9af2ce39c51f8816c83039914c0d0b | [
"MIT"
] | 30 | 2021-03-18T05:22:43.000Z | 2022-03-17T10:25:18.000Z | """
Problem:
You come across a dictionary of sorted words in a language you've never seen before.
Write a program that returns the correct order of letters in this language.
For example, given ['xww', 'wxyz', 'wxyw', 'ywx', 'ywz'], you should return
['x', 'z', 'w', 'y'].
"""
from typing import Dict, List, Optional, Set
def update_letter_order(sorted_words: List[str], letters: Dict[str, Set[str]]) -> None:
order = []
new_words = {}
prev_char = None
for word in sorted_words:
if word:
char = word[0]
if char != prev_char:
order.append(char)
if char not in new_words:
new_words[char] = list()
new_words[char].append(word[1:])
prev_char = char
for index, char in enumerate(order):
letters[char] = letters[char] | set(order[index + 1 :])
for char in new_words:
update_letter_order(new_words[char], letters)
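# Illustrative example (hypothetical three-word dictionary): with
# sorted_words = ['xw', 'zw', 'w'] and letters pre-initialised with empty sets
# for 'x', 'z' and 'w' (as get_letter_order does), the first characters appear
# in the order x, z, w, so this call records letters['x'] gaining {'z', 'w'} and
# letters['z'] gaining {'w'} before recursing on the suffixes grouped by first character.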
def find_path(
letters: Dict[str, Set[str]], start: str, path: List[str], length: int
) -> Optional[List[str]]:
if len(path) == length:
return path
if not letters[start]:
return None
for next_start in letters[start]:
new_path = find_path(letters, next_start, path + [next_start], length)
if new_path:
return new_path
def get_letter_order(sorted_words: List[str]):
letters = {}
for word in sorted_words:
for letter in word:
if letter not in letters:
letters[letter] = set()
update_letter_order(sorted_words, letters)
max_children = max([len(x) for x in letters.values()])
potential_heads = [x for x in letters if len(letters[x]) == max_children]
path = None
for head in potential_heads:
path = find_path(letters, head, path=[head], length=len(letters))
if path:
break
return path
if __name__ == "__main__":
print(get_letter_order(["xww", "wxyz", "wxyw", "ywx", "ywz"]))
"""
SPECS:
TIME COMPLEXITY: O(words x letters + words ^ 2 + letters ^ 2)
SPACE COMPLEXITY: O(words x letters)
"""
| 26.696203 | 87 | 0.610716 | 293 | 2,109 | 4.242321 | 0.290102 | 0.053097 | 0.04103 | 0.053097 | 0.232502 | 0.057924 | 0.057924 | 0 | 0 | 0 | 0 | 0.00324 | 0.268374 | 2,109 | 78 | 88 | 27.038462 | 0.802333 | 0.127549 | 0 | 0.086957 | 0 | 0 | 0.014552 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.065217 | false | 0 | 0.021739 | 0 | 0.173913 | 0.021739 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aabe599b0fbc948c1c5b83c3e325ea01307b3fbe | 598 | py | Python | pdf_rendering_service/documents/urls.py | KPilnacek/pdf-rendering-service-1 | 6a4351fa57c3f84aff7fc6fd25763043acb93395 | [
"MIT"
] | null | null | null | pdf_rendering_service/documents/urls.py | KPilnacek/pdf-rendering-service-1 | 6a4351fa57c3f84aff7fc6fd25763043acb93395 | [
"MIT"
] | null | null | null | pdf_rendering_service/documents/urls.py | KPilnacek/pdf-rendering-service-1 | 6a4351fa57c3f84aff7fc6fd25763043acb93395 | [
"MIT"
] | null | null | null | from django.urls import path
from pdf_rendering_service.documents.views import (
DocumentPageView,
DocumentUploadView,
DocumentView,
)
app_name = "documents"
urlpatterns = [
path("documents", DocumentUploadView.as_view(), name="documents"),
path("documents/<int:pk>", DocumentView.as_view(), name="document"),
path(
"documents/<str:filename>",
DocumentUploadView.as_view(),
name="documents_with_filename",
),
path(
"documents/<int:pk>/pages/<int:number>",
DocumentPageView.as_view(),
name="document_page",
),
]
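# Illustrative mapping of the patterns above (paths are examples, relative to
# wherever this app's urls are included):
#   /documents                -> DocumentUploadView  ("documents")
#   /documents/3              -> DocumentView        ("document")
#   /documents/report.pdf     -> DocumentUploadView  ("documents_with_filename")
#   /documents/3/pages/2      -> DocumentPageView    ("document_page")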
| 23.92 | 72 | 0.653846 | 59 | 598 | 6.457627 | 0.457627 | 0.136483 | 0.104987 | 0.146982 | 0.194226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.204013 | 598 | 24 | 73 | 24.916667 | 0.80042 | 0 | 0 | 0.190476 | 0 | 0 | 0.250836 | 0.140468 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aac1eb8aa47fa795be546197ad98f9ed858b8080 | 323 | py | Python | src/tensor/datatype/float_/x64.py | jedhsu/tensor | 3b2fe21029fa7c50b034190e77d79d1a94ea5e8f | [
"Apache-2.0"
] | null | null | null | src/tensor/datatype/float_/x64.py | jedhsu/tensor | 3b2fe21029fa7c50b034190e77d79d1a94ea5e8f | [
"Apache-2.0"
] | null | null | null | src/tensor/datatype/float_/x64.py | jedhsu/tensor | 3b2fe21029fa7c50b034190e77d79d1a94ea5e8f | [
"Apache-2.0"
] | null | null | null | """
*f64*
"""
import jax.numpy as jnp
from .._datatype import Datatype
from ._float import Float
__all__ = ["f64"]
class f64(
jnp.float64,
Float,
Datatype,
):
def __init__(
self,
value: int,
):
super(f64, self).__init__(
self,
value,
)
| 11.535714 | 34 | 0.504644 | 33 | 323 | 4.515152 | 0.545455 | 0.107383 | 0.174497 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049751 | 0.377709 | 323 | 27 | 35 | 11.962963 | 0.691542 | 0.01548 | 0 | 0.235294 | 0 | 0 | 0.009868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.294118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aac64c25934af9c02b05d7379fc33a879f9471cc | 3,948 | py | Python | 06_face_verification_limit/dataloader.py | yeodongbin/2020AIChallengeCode | 776c686b65a67bc0d71eed1118eed6cf45ea17c6 | [
"MIT"
] | null | null | null | 06_face_verification_limit/dataloader.py | yeodongbin/2020AIChallengeCode | 776c686b65a67bc0d71eed1118eed6cf45ea17c6 | [
"MIT"
] | null | null | null | 06_face_verification_limit/dataloader.py | yeodongbin/2020AIChallengeCode | 776c686b65a67bc0d71eed1118eed6cf45ea17c6 | [
"MIT"
] | null | null | null |
import os
import numpy as np
import pandas as pd
from PIL import Image
import torch
from torch.utils import data
import torchvision.transforms as transforms
class CustomDataset(data.Dataset):
def __init__(self, root, phase='train', transform=None):
self.root = root
self.phase = phase
self.labels = {}
self.transform = transform
if self.phase != 'train':
self.label_path = os.path.join(root, self.phase, self.phase + '_label.csv')
# used to prepare the labels and image paths
self.direc_df = pd.read_csv(self.label_path)
self.direc_df.columns = ["image1", "image2", "label"]
self.dir = os.path.join(root, self.phase)
else:
self.train_meta_dir = os.path.join(root, self.phase, self.phase + '_meta.csv')
train_meta = pd.read_csv(self.train_meta_dir)
train_data = []
# make_true_pair
id_list = list(set(train_meta['face_id']))
for id in id_list:
pair = []
candidate = train_meta[train_meta['face_id'] == int(id)]
pair.append(candidate[candidate['acc_option']=='none'].sample(1)['file_name'].item())
pair.append(candidate[candidate['acc_option']=='acc'].sample(1)['file_name'].item())
pair.append(0)
train_data.append(pair)
# make_false_pair
id_list = list(set(train_meta['face_id']))
for id in id_list:
pair = []
candidate = train_meta[train_meta['face_id'] == int(id)]
candidate_others = train_meta[train_meta['face_id'] != int(id)]
pair.append(candidate[candidate['acc_option']=='none'].sample(1)['file_name'].item())
pair.append(candidate_others[candidate_others['acc_option']=='acc'].sample(1)['file_name'].item())
pair.append(1)
train_data.append(pair)
self.direc_df = pd.DataFrame(train_data)
self.direc_df.columns = ["image1", "image2", "label"]
self.dir = os.path.join(root, self.phase)
self.direc_df.to_csv(os.path.join(root, self.phase, self.phase + '_label.csv'), mode='w', index=False)
self.label_path = os.path.join(root, self.phase, self.phase + '_label.csv')
def __getitem__(self, index):
# getting the image path
image1_path = os.path.join(self.dir, self.direc_df.iat[index, 0])
image2_path = os.path.join(self.dir, self.direc_df.iat[index, 1])
# Loading the image
img0 = Image.open(image1_path)
img1 = Image.open(image2_path)
img0 = img0.convert("L")
img1 = img1.convert("L")
# Apply image transformations
if self.transform is not None:
img0 = self.transform(img0)
img1 = self.transform(img1)
if self.phase != 'test':
return (self.direc_df.iat[index, 0], img0, self.direc_df.iat[index, 1], img1,
torch.from_numpy(np.array([int(self.direc_df.iat[index, 2])], dtype=np.float32)))
elif self.phase == 'test':
dummy = ""
return (self.direc_df.iat[index, 0], img0, self.direc_df.iat[index, 1], img1, dummy)
def __len__(self):
return len(self.direc_df)
def get_label_file(self):
return self.label_path
def data_loader(root, phase='train', batch_size=64,):
if phase == 'train':
shuffle = True
else:
shuffle = False
dataset = CustomDataset(root, phase,transform=transforms.Compose([transforms.Resize((100,100)),
transforms.ToTensor()
]))
dataloader = data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=shuffle)
return dataloader, dataset.get_label_file()
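# Hypothetical usage sketch (the data root path is an assumption):
#   train_loader, label_csv = data_loader('/path/to/data', phase='train', batch_size=64)
#   for name0, img0, name1, img1, label in train_loader:
#       # img0/img1 are 1x100x100 grayscale tensors; label is 0 for a matching
#       # face pair and 1 for a mismatched pair
#       pass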
| 44.863636 | 114 | 0.578521 | 495 | 3,948 | 4.432323 | 0.214141 | 0.057429 | 0.065178 | 0.044667 | 0.44485 | 0.43619 | 0.431176 | 0.431176 | 0.4134 | 0.4134 | 0 | 0.016094 | 0.291793 | 3,948 | 87 | 115 | 45.37931 | 0.768598 | 0.035968 | 0 | 0.27027 | 0 | 0 | 0.060295 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.094595 | 0.027027 | 0.243243 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aac789f0f30d66245ac00042de1997240cb240e6 | 3,994 | py | Python | src/svm/spam_detector.py | dimart10/machine-learning | 0f33bef65a9335c0f7fed680f1112419bae8fabc | [
"MIT"
] | null | null | null | src/svm/spam_detector.py | dimart10/machine-learning | 0f33bef65a9335c0f7fed680f1112419bae8fabc | [
"MIT"
] | null | null | null | src/svm/spam_detector.py | dimart10/machine-learning | 0f33bef65a9335c0f7fed680f1112419bae8fabc | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from sklearn.svm import SVC
from svm import *
from process_email import *
from get_vocab_dict import *
import codecs
def main():
# DATA PREPROCESSING
vocab_dick = getVocabDict()
dick_size = len(vocab_dick)
validationPercent = 0.3
# SPAM
directorySpam = 'spam'
mSpam = 500
X_spam = np.zeros(((int)(mSpam * (1-validationPercent)), dick_size))
Y_spam = np.ones(((int)(mSpam * (1-validationPercent))))[:, np.newaxis]
X_spam_val = np.zeros(((int)(mSpam * validationPercent), dick_size))
Y_spam_val = np.ones((int)(mSpam * validationPercent))[:, np.newaxis]
for i in range(mSpam):
email_contents = codecs.open('../data/emails/{0}/{1:04d}.txt'.format(directorySpam, i+1), 'r', encoding = 'utf 8', errors = 'ignore' ).read()
email_contents = email2TokenList(email_contents)
val = i >= mSpam * (1-validationPercent)
currentX = X_spam if not val else X_spam_val
for word_idx in range(len(email_contents)):
dick_index = vocab_dick.get(email_contents[word_idx])
if (dick_index != None):
currentX[i if not val else (int)(i - mSpam * (1-validationPercent)), dick_index-1] = 1
# EASY HAM
directoryEasy = 'easy_ham'
mEasy = 500
X_easy = np.zeros(((int)(mEasy * (1-validationPercent)), dick_size))
Y_easy = np.zeros(((int)(mEasy * (1-validationPercent))))[:, np.newaxis]
X_easy_val = np.zeros(((int)(mEasy * validationPercent), dick_size))
Y_easy_val = np.zeros((int)(mEasy * validationPercent))[:, np.newaxis]
for i in range(mEasy):
email_contents = codecs.open('../data/emails/{0}/{1:04d}.txt'.format(directoryEasy, i+1), 'r', encoding = 'utf 8', errors = 'ignore' ).read()
email_contents = email2TokenList(email_contents)
val = i >= mEasy * (1-validationPercent)
currentX = X_easy if not val else X_easy_val
for word_idx in range(len(email_contents)):
dick_index = vocab_dick.get(email_contents[word_idx])
if (dick_index != None):
currentX[i if not val else (int)(i - mEasy * (1-validationPercent)), dick_index-1] = 1
# HARD HAM
directoryhard = 'hard_ham'
mhard = 250
X_hard = np.zeros(((int)(mhard * (1-validationPercent)), dick_size))
Y_hard = np.zeros(((int)(mhard * (1-validationPercent))))[:, np.newaxis]
X_hard_val = np.zeros(((int)(mhard * validationPercent), dick_size))
Y_hard_val = np.zeros((int)(mhard * validationPercent))[:, np.newaxis]
for i in range(mhard):
email_contents = codecs.open('../data/emails/{0}/{1:04d}.txt'.format(directoryhard, i+1), 'r', encoding = 'utf 8', errors = 'ignore' ).read()
email_contents = email2TokenList(email_contents)
val = i >= mhard * (1-validationPercent)
currentX = X_hard if not val else X_hard_val
for word_idx in range(len(email_contents)):
dick_index = vocab_dick.get(email_contents[word_idx])
if (dick_index != None):
currentX[i if not val else (int)(i - mhard * (1-validationPercent)), dick_index-1] = 1
# Mix spam with non spam
X = np.vstack((X_spam, X_easy))
X = np.vstack((X, X_hard))
Y = np.vstack((Y_spam, Y_easy))
Y = np.vstack((Y, Y_hard))
X_val = np.vstack((X_spam_val, X_easy_val))
X_val = np.vstack((X_val, X_hard_val))
Y_val = np.vstack((Y_spam_val, Y_easy_val))
Y_val = np.vstack((Y_val, Y_hard_val))
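# At this point (with the settings above) X is an 875 x len(vocab_dick) binary
# bag-of-words matrix for training (350 spam + 350 easy ham + 175 hard ham),
# X_val holds the remaining 375 validation e-mails, and Y/Y_val are 1 for spam
# and 0 for ham.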
# Finally, train the SVM and test the results
# trained_svm = train(X, Y, 1, 0.1)
# success_percentage = test(trained_svm, X_val, Y_val)
possible_values = (0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30)
bestSvmResults = findBestSVM(X, Y, X_val, Y_val, possible_values, possible_values)
#success_percentage = bestSvmResults[-1]
#print("Success percentage: ", success_percentage * 100, "%")
if __name__ == "__main__":
main()
| 37.679245 | 149 | 0.640961 | 567 | 3,994 | 4.306878 | 0.178131 | 0.079853 | 0.04095 | 0.063882 | 0.603604 | 0.517609 | 0.45086 | 0.29484 | 0.29484 | 0.29484 | 0 | 0.023285 | 0.215073 | 3,994 | 105 | 150 | 38.038095 | 0.755662 | 0.07361 | 0 | 0.173913 | 0 | 0 | 0.041746 | 0.024397 | 0 | 0 | 0 | 0 | 0 | 1 | 0.014493 | false | 0 | 0.115942 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aac9a755797a1c26d30dc5585dbc1f8ad84a59fd | 3,552 | py | Python | scripts/python/summariseSNVs_rCRS.py | MagnusHaughey/liverMitoDNAPipeline | 0d63a41ea626bca032473450e3d10d451744f175 | [
"MIT"
] | null | null | null | scripts/python/summariseSNVs_rCRS.py | MagnusHaughey/liverMitoDNAPipeline | 0d63a41ea626bca032473450e3d10d451744f175 | [
"MIT"
] | null | null | null | scripts/python/summariseSNVs_rCRS.py | MagnusHaughey/liverMitoDNAPipeline | 0d63a41ea626bca032473450e3d10d451744f175 | [
"MIT"
] | null | null | null |
import numpy as np
import sys
import argparse
# Parse command line arguments
parser = argparse.ArgumentParser()
#parser.add_argument('-I', help='Input file with raw coverage data')
parser.add_argument('--input_one', help='', type=str)
parser.add_argument('--input_two', help='', type=str)
parser.add_argument('--input_three', help='', type=str)
parser.add_argument('--output', help='', type=str)
args = parser.parse_args()
# Read in data files
position , p_val , raw_freq = np.loadtxt(args.input_one , unpack=True , usecols=(2,5,6))
ref_base , var_base = [] , []
for line in open(args.input_one , 'r').readlines():
fields = line.replace(' ',' ').replace(' ' , ' ' ).split(" ")
ref_base.append(fields[3])
var_base.append(fields[4])
n_tst_fw , cov_tst_fw , n_tst_bw , cov_tst_bw , n_ctrl_fw = np.loadtxt(args.input_two , unpack=True , skiprows=1 , usecols=(2,3,4,5,6))
cov_ctrl_fw , n_ctrl_bw , cov_ctrl_bw = np.loadtxt(args.input_three , unpack=True , skiprows=1 , usecols=(1,2,3))
# Compute "shifted" variant frequencies
shifted_var_freq = (( n_tst_fw + n_tst_bw )/( cov_tst_fw + cov_tst_bw ))
# Filtering
filtered_out = []
f = open(args.output + '.METRICS.dat' , 'w')
if not isinstance(n_tst_bw, np.float64):
for i in range(len(n_tst_bw)):
# Filter on raw number of calls for each variant
if ((n_tst_bw[i] + n_tst_fw[i]) < 10):
filtered_out.append(i)
if (( n_ctrl_fw[i] + n_ctrl_bw[i] )/( cov_ctrl_fw[i] + cov_ctrl_bw[i] ) <= 0.01):
f.write("Removed somatic mutation {}{}{} due to small number of raw calls\n".format(int(position[i]) , ref_base[i] , var_base[i]))
else:
f.write("Removed germline mutation {}{}{} due to small number of raw calls\n".format(int(position[i]) , ref_base[i] , var_base[i]))
elif isinstance(n_tst_bw, np.float64):
# Filter on raw number of calls for each variant
if ((n_tst_bw + n_tst_fw) < 10):
filtered_out.append(0)
if (( n_ctrl_fw + n_ctrl_bw )/( cov_ctrl_fw + cov_ctrl_bw ) <= 0.01):
f.write("Removed somatic mutation {}{}{} due to small number of raw calls\n".format(int(position) , ref_base , var_base))
else:
f.write("Removed germline mutation {}{}{} due to small number of raw calls\n".format(int(position) , ref_base , var_base))
f.close()
# Open output file
if not isinstance(n_tst_bw, np.float64):
#g = open(args.output , 'w')
#
#for i in range(len(shifted_var_freq)):
# if (i in filtered_out):
# continue
# else:
# g.write("{}{}{} {:1.10f} {}\n".format(int(position[i]) , ref_base[i] , var_base[i] , shifted_var_freq[i] , p_val[i]))
#
#g.close()
# Write somatic calls to file
g = open(args.output , 'w')
for i in range(len(shifted_var_freq)):
# If variant detected in control at frequency greater than 1%, define as germline
if (( n_ctrl_fw[i] + n_ctrl_bw[i] )/( cov_ctrl_fw[i] + cov_ctrl_bw[i] ) <= 0.01):
g.write("{}{}{} {:1.10f} {}\n".format(int(position[i]) , ref_base[i] , var_base[i] , shifted_var_freq[i] , p_val[i]))
g.close()
else:
#g = open(args.output , 'w')
#
#if (len(filtered_out) == 0):
# g.write("{}{}{} {:1.10f} {}\n".format(int(position) , ref_base , var_base , shifted_var_freq , p_val))
#
#
#g.close()
# Write somatic calls to file
g = open(args.output , 'w')
# If variant detected in control at frequency greater than 1%, define as germline
if (len(filtered_out) == 0) and (( n_ctrl_fw + n_ctrl_bw )/( cov_ctrl_fw + cov_ctrl_bw ) <= 0.01):
g.write("{}{}{} {:1.10f} {}\n".format(int(position) , ref_base , var_base , shifted_var_freq , p_val))
g.close()
| 27.534884 | 135 | 0.657095 | 605 | 3,552 | 3.641322 | 0.190083 | 0.021788 | 0.021788 | 0.065365 | 0.672265 | 0.628688 | 0.596459 | 0.53291 | 0.505674 | 0.505674 | 0 | 0.018157 | 0.162725 | 3,552 | 128 | 136 | 27.75 | 0.722596 | 0.250845 | 0 | 0.255319 | 0 | 0 | 0.138326 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.06383 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aacace79fddb0cea3b81c9b0d7df9b1e3860e55e | 36,173 | py | Python | tests/runners.py | IvanMalison/invoke | 322718d7f38ce04fc2bde947ba67ab4002f669b6 | [
"BSD-2-Clause"
] | null | null | null | tests/runners.py | IvanMalison/invoke | 322718d7f38ce04fc2bde947ba67ab4002f669b6 | [
"BSD-2-Clause"
] | null | null | null | tests/runners.py | IvanMalison/invoke | 322718d7f38ce04fc2bde947ba67ab4002f669b6 | [
"BSD-2-Clause"
] | null | null | null | import os
import sys
import types
from io import BytesIO
from signal import SIGINT, SIGTERM
from invoke.vendor.six import StringIO, b
from spec import (
Spec, trap, eq_, skip, ok_, raises, assert_contains, assert_not_contains
)
from mock import patch, Mock, call
from invoke.vendor import six
from invoke import Runner, Local, Context, Config, Failure, ThreadException
from invoke.platform import WINDOWS
from _util import mock_subprocess, mock_pty, skip_if_windows
# Dummy command that will blow up if it ever truly hits a real shell.
_ = "nope"
class _Dummy(Runner):
"""
Dummy runner subclass that does minimum work required to execute run().
It also serves as a convenient basic API checker; failure to update it to
match the current Runner API will cause TypeErrors and similar.
"""
# Neuter the input loop sleep, so tests aren't slow (at the expense of CPU,
# which isn't a problem for testing).
input_sleep = 0
def start(self, command, shell, env):
pass
def read_proc_stdout(self, num_bytes):
return ""
def read_proc_stderr(self, num_bytes):
return ""
def _write_proc_stdin(self, data):
pass
@property
def process_is_finished(self):
return True
def returncode(self):
return 0
def send_interrupt(self, exception):
pass
# Runner that fakes ^C during subprocess exec
class _KeyboardInterruptingRunner(_Dummy):
def wait(self):
raise KeyboardInterrupt
class OhNoz(Exception):
pass
def _run(*args, **kwargs):
klass = kwargs.pop('klass', _Dummy)
settings = kwargs.pop('settings', {})
context = Context(config=Config(overrides=settings))
return klass(context).run(*args, **kwargs)
def _runner(out='', err='', **kwargs):
klass = kwargs.pop('klass', _Dummy)
runner = klass(Context(config=Config(overrides=kwargs)))
if 'exits' in kwargs:
runner.returncode = Mock(return_value=kwargs.pop('exits'))
out_file = BytesIO(b(out))
err_file = BytesIO(b(err))
runner.read_proc_stdout = out_file.read
runner.read_proc_stderr = err_file.read
return runner
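# Example: _runner(out='hi', exits=1) builds a dummy runner whose fake subprocess
# emits 'hi' on stdout and exits 1, so _runner(out='hi', exits=1).run(_, warn=True)
# returns a Result with .stdout == 'hi' and .exited == 1 without ever touching a
# real shell.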
class Runner_(Spec):
# NOTE: these copies of _run and _runner form the base case of "test Runner
# subclasses via self._run/_runner helpers" functionality. See how e.g.
# Local_ uses the same approach but bakes in the dummy class used.
def _run(self, *args, **kwargs):
return _run(*args, **kwargs)
def _runner(self, *args, **kwargs):
return _runner(*args, **kwargs)
def _mock_stdin_writer(self):
"""
Return new _Dummy subclass whose write_proc_stdin() method is a mock.
"""
class MockedStdin(_Dummy):
pass
MockedStdin.write_proc_stdin = Mock()
return MockedStdin
class init:
"__init__"
def takes_a_context_instance(self):
c = Context()
eq_(Runner(c).context, c)
@raises(TypeError)
def context_instance_is_required(self):
Runner()
class warn:
def honors_config(self):
runner = self._runner(run={'warn': True}, exits=1)
# Doesn't raise Failure -> all good
runner.run(_)
def kwarg_beats_config(self):
runner = self._runner(run={'warn': False}, exits=1)
# Doesn't raise Failure -> all good
runner.run(_, warn=True)
class hide:
@trap
def honors_config(self):
runner = self._runner(out='stuff', run={'hide': True})
r = runner.run(_)
eq_(r.stdout, 'stuff')
eq_(sys.stdout.getvalue(), '')
@trap
def kwarg_beats_config(self):
runner = self._runner(out='stuff')
r = runner.run(_, hide=True)
eq_(r.stdout, 'stuff')
eq_(sys.stdout.getvalue(), '')
class pty:
def pty_defaults_to_off(self):
eq_(self._run(_).pty, False)
def honors_config(self):
runner = self._runner(run={'pty': True})
eq_(runner.run(_).pty, True)
def kwarg_beats_config(self):
runner = self._runner(run={'pty': False})
eq_(runner.run(_, pty=True).pty, True)
class shell:
def defaults_to_bash_when_pty_True(self):
eq_(self._run(_, pty=True).shell, '/bin/bash')
def defaults_to_bash_when_pty_False(self):
eq_(self._run(_, pty=False).shell, '/bin/bash')
def may_be_overridden(self):
eq_(self._run(_, shell='/bin/zsh').shell, '/bin/zsh')
def may_be_configured(self):
runner = self._runner(run={'shell': '/bin/tcsh'})
eq_(runner.run(_).shell, '/bin/tcsh')
def kwarg_beats_config(self):
runner = self._runner(run={'shell': '/bin/tcsh'})
eq_(runner.run(_, shell='/bin/zsh').shell, '/bin/zsh')
class env:
def defaults_to_os_environ(self):
eq_(self._run(_).env, os.environ)
def updates_when_dict_given(self):
expected = dict(os.environ, FOO='BAR')
eq_(self._run(_, env={'FOO': 'BAR'}).env, expected)
def replaces_when_replace_env_True(self):
eq_(
self._run(_, env={'JUST': 'ME'}, replace_env=True).env,
{'JUST': 'ME'}
)
def config_can_be_used(self):
eq_(
self._run(_, settings={'run': {'env': {'FOO': 'BAR'}}}).env,
dict(os.environ, FOO='BAR'),
)
def kwarg_wins_over_config(self):
settings = {'run': {'env': {'FOO': 'BAR'}}}
kwarg = {'FOO': 'NOTBAR'}
eq_(
self._run(_, settings=settings, env=kwarg).env['FOO'],
'NOTBAR'
)
class return_value:
def return_code_in_result(self):
"""
Result has .return_code (and .exited) containing exit code int
"""
runner = self._runner(exits=17)
r = runner.run(_, warn=True)
eq_(r.return_code, 17)
eq_(r.exited, 17)
def ok_attr_indicates_success(self):
runner = self._runner()
eq_(runner.run(_).ok, True) # default dummy retval is 0
def ok_attr_indicates_failure(self):
runner = self._runner(exits=1)
eq_(runner.run(_, warn=True).ok, False)
def failed_attr_indicates_success(self):
runner = self._runner()
eq_(runner.run(_).failed, False) # default dummy retval is 0
def failed_attr_indicates_failure(self):
runner = self._runner(exits=1)
eq_(runner.run(_, warn=True).failed, True)
@trap
def stdout_attribute_contains_stdout(self):
runner = self._runner(out='foo')
eq_(runner.run(_).stdout, "foo")
eq_(sys.stdout.getvalue(), "foo")
@trap
def stderr_attribute_contains_stderr(self):
runner = self._runner(err='foo')
eq_(runner.run(_).stderr, "foo")
eq_(sys.stderr.getvalue(), "foo")
def whether_pty_was_used(self):
eq_(self._run(_).pty, False)
eq_(self._run(_, pty=True).pty, True)
def command_executed(self):
eq_(self._run(_).command, _)
def shell_used(self):
eq_(self._run(_).shell, '/bin/bash')
class command_echoing:
@trap
def off_by_default(self):
self._run("my command")
eq_(sys.stdout.getvalue(), "")
@trap
def enabled_via_kwarg(self):
self._run("my command", echo=True)
assert_contains(sys.stdout.getvalue(), "my command")
@trap
def enabled_via_config(self):
self._run("yup", settings={'run': {'echo': True}})
assert_contains(sys.stdout.getvalue(), "yup")
@trap
def kwarg_beats_config(self):
self._run("yup", echo=True, settings={'run': {'echo': False}})
assert_contains(sys.stdout.getvalue(), "yup")
@trap
def uses_ansi_bold(self):
self._run("my command", echo=True)
# TODO: vendor & use a color module
eq_(sys.stdout.getvalue(), "\x1b[1;37mmy command\x1b[0m\n")
class encoding:
# NOTE: these tests just check what Runner.encoding ends up as; it's
# difficult/impossible to mock string objects themselves to see what
# .decode() is being given :(
#
# TODO: consider using truly "nonstandard"-encoded byte sequences as
# fixtures, encoded with something that isn't compatible with UTF-8
# (UTF-7 kinda is, so...) so we can assert that the decoded string is
# equal to its Unicode equivalent.
#
# Use UTF-7 as a valid encoding unlikely to be a real default derived
# from test-runner's locale.getpreferredencoding()
def defaults_to_encoding_method_result(self):
# Setup
runner = self._runner()
encoding = 'UTF-7'
runner.default_encoding = Mock(return_value=encoding)
# Execution & assertion
runner.run(_)
runner.default_encoding.assert_called_with()
eq_(runner.encoding, 'UTF-7')
def honors_config(self):
c = Context(Config(overrides={'run': {'encoding': 'UTF-7'}}))
runner = _Dummy(c)
runner.default_encoding = Mock(return_value='UTF-not-7')
runner.run(_)
eq_(runner.encoding, 'UTF-7')
def honors_kwarg(self):
skip()
def uses_locale_module_for_default_encoding(self):
# Actually testing this highly OS/env specific stuff is very
# error-prone; so we degrade to just testing expected function
# calls for now :(
with patch('invoke.runners.locale') as fake_locale:
fake_locale.getdefaultlocale.return_value = ('meh', 'UHF-8')
fake_locale.getpreferredencoding.return_value = 'FALLBACK'
expected = 'UHF-8' if six.PY2 else 'FALLBACK'
eq_(self._runner().default_encoding(), expected)
def falls_back_to_defaultlocale_when_preferredencoding_is_None(self):
if not six.PY3:
skip()
with patch('invoke.runners.locale') as fake_locale:
fake_locale.getdefaultlocale.return_value = (None, None)
fake_locale.getpreferredencoding.return_value = 'FALLBACK'
eq_(self._runner().default_encoding(), 'FALLBACK')
class output_hiding:
@trap
def _expect_hidden(self, hide, expect_out="", expect_err=""):
self._runner(out='foo', err='bar').run(_, hide=hide)
eq_(sys.stdout.getvalue(), expect_out)
eq_(sys.stderr.getvalue(), expect_err)
def both_hides_everything(self):
self._expect_hidden('both')
def True_hides_everything(self):
self._expect_hidden(True)
def out_only_hides_stdout(self):
self._expect_hidden('out', expect_out="", expect_err="bar")
def err_only_hides_stderr(self):
self._expect_hidden('err', expect_out="foo", expect_err="")
def accepts_stdout_alias_for_out(self):
self._expect_hidden('stdout', expect_out="", expect_err="bar")
def accepts_stderr_alias_for_err(self):
self._expect_hidden('stderr', expect_out="foo", expect_err="")
def None_hides_nothing(self):
self._expect_hidden(None, expect_out="foo", expect_err="bar")
def False_hides_nothing(self):
self._expect_hidden(False, expect_out="foo", expect_err="bar")
@raises(ValueError)
def unknown_vals_raises_ValueError(self):
self._run(_, hide="wat?")
def unknown_vals_mention_value_given_in_error(self):
value = "penguinmints"
try:
self._run(_, hide=value)
except ValueError as e:
msg = "Error from run(hide=xxx) did not tell user what the bad value was!" # noqa
msg += "\nException msg: {0}".format(e)
ok_(value in str(e), msg)
else:
assert False, "run() did not raise ValueError for bad hide= value" # noqa
def does_not_affect_capturing(self):
eq_(self._runner(out='foo').run(_, hide=True).stdout, 'foo')
@trap
def overrides_echoing(self):
self._runner().run('invisible', hide=True, echo=True)
assert_not_contains(sys.stdout.getvalue(), 'invisible')
class output_stream_overrides:
@trap
def out_defaults_to_sys_stdout(self):
"out_stream defaults to sys.stdout"
self._runner(out="sup").run(_)
eq_(sys.stdout.getvalue(), "sup")
@trap
def err_defaults_to_sys_stderr(self):
"err_stream defaults to sys.stderr"
self._runner(err="sup").run(_)
eq_(sys.stderr.getvalue(), "sup")
@trap
def out_can_be_overridden(self):
"out_stream can be overridden"
out = StringIO()
self._runner(out="sup").run(_, out_stream=out)
eq_(out.getvalue(), "sup")
eq_(sys.stdout.getvalue(), "")
@trap
def err_can_be_overridden(self):
"err_stream can be overridden"
err = StringIO()
self._runner(err="sup").run(_, err_stream=err)
eq_(err.getvalue(), "sup")
eq_(sys.stderr.getvalue(), "")
@trap
def pty_defaults_to_sys(self):
self._runner(out="sup").run(_, pty=True)
eq_(sys.stdout.getvalue(), "sup")
@trap
def pty_out_can_be_overridden(self):
out = StringIO()
self._runner(out="yo").run(_, pty=True, out_stream=out)
eq_(out.getvalue(), "yo")
eq_(sys.stdout.getvalue(), "")
class output_stream_handling:
# Mostly corner cases, generic behavior's covered above
def writes_and_flushes_to_stdout(self):
out = Mock(spec=StringIO)
self._runner(out="meh").run(_, out_stream=out)
out.write.assert_called_once_with("meh")
out.flush.assert_called_once_with()
def writes_and_flushes_to_stderr(self):
err = Mock(spec=StringIO)
self._runner(err="whatever").run(_, err_stream=err)
err.write.assert_called_once_with("whatever")
err.flush.assert_called_once_with()
class input_stream_handling:
# NOTE: actual autoresponder tests are elsewhere. These just test that
# stdin works normally & can be overridden.
@patch('invoke.runners.sys.stdin', StringIO("Text!"))
def defaults_to_sys_stdin(self):
# Execute w/ runner class that has a mocked stdin_writer
klass = self._mock_stdin_writer()
self._runner(klass=klass).run(_, out_stream=StringIO())
# Check that mocked writer was called w/ the data from our patched
# sys.stdin (one char at a time)
calls = list(map(lambda x: call(x), "Text!"))
klass.write_proc_stdin.assert_has_calls(calls, any_order=False)
def can_be_overridden(self):
klass = self._mock_stdin_writer()
in_stream = StringIO("Hey, listen!")
self._runner(klass=klass).run(
_,
in_stream=in_stream,
out_stream=StringIO(),
)
# stdin mirroring occurs char-by-char
calls = list(map(lambda x: call(x), "Hey, listen!"))
klass.write_proc_stdin.assert_has_calls(calls, any_order=False)
@patch('invoke.util.debug')
def exceptions_get_logged(self, mock_debug):
# Make write_proc_stdin asplode
klass = self._mock_stdin_writer()
klass.write_proc_stdin.side_effect = OhNoz("oh god why")
# Execute with some stdin to trigger that asplode (but skip the
# actual bubbled-up raising of it so we can check things out)
try:
stdin = StringIO("non-empty")
self._runner(klass=klass).run(_, in_stream=stdin)
except ThreadException:
pass
# Assert debug() was called w/ expected format
# TODO: make the debug call a method on ExceptionHandlingThread,
# then make thread class configurable somewhere in Runner, and pass
# in a customized ExceptionHandlingThread that has a Mock for that
# method?
mock_debug.assert_called_with("Encountered exception OhNoz('oh god why',) in thread for 'handle_stdin'") # noqa
class failure_handling:
@raises(Failure)
def fast_failures(self):
self._runner(exits=1).run(_)
def non_one_return_codes_still_act_as_failure(self):
r = self._runner(exits=17).run(_, warn=True)
eq_(r.failed, True)
def Failure_repr_includes_stderr(self):
try:
self._runner(exits=1, err="ohnoz").run(_, hide=True)
assert false # noqa. Ensure failure to Failure fails
except Failure as f:
r = repr(f)
err = "Sentinel 'ohnoz' not found in {0!r}".format(r)
assert 'ohnoz' in r, err
def Failure_repr_should_present_stdout_when_pty_was_used(self):
try:
# NOTE: using mocked stdout because that's what ptys do as
# well. when pty=True, nothing's even trying to read stderr.
self._runner(exits=1, out="ohnoz").run(_, hide=True, pty=True)
assert false # noqa. Ensure failure to Failure fails
except Failure as f:
r = repr(f)
err = "Sentinel 'ohnoz' not found in {0!r}".format(r)
assert 'ohnoz' in r, err
class threading:
def errors_within_io_thread_body_bubble_up(self):
class Oops(_Dummy):
def handle_stdout(self, **kwargs):
raise OhNoz()
def handle_stderr(self, **kwargs):
raise OhNoz()
runner = Oops(Context())
try:
runner.run("nah")
except ThreadException as e:
# Expect two separate OhNoz objects on 'e'
eq_(len(e.exceptions), 2)
for tup in e.exceptions:
ok_(isinstance(tup.value, OhNoz))
ok_(isinstance(tup.traceback, types.TracebackType))
eq_(tup.type, OhNoz)
# TODO: test the arguments part of the tuple too. It's pretty
# implementation-specific, though, so possibly not worthwhile.
else:
assert False, "Did not raise ThreadException as expected!"
class responding:
def nothing_is_written_to_stdin_by_default(self):
# NOTE: technically if some goofus ran the tests by hand and mashed
# keys while doing so...this would fail. LOL?
# NOTE: this test seems not too useful but is a) a sanity test and
# b) guards against e.g. breaking the autoresponder such that it
# responds to "" or "\n" or etc.
klass = self._mock_stdin_writer()
self._runner(klass=klass).run(_)
ok_(not klass.write_proc_stdin.called)
def _expect_response(self, **kwargs):
"""
Execute a run() w/ ``responses`` set & _runner() ``kwargs`` given.
:returns: The mocked ``write_proc_stdin`` method of the runner.
"""
klass = self._mock_stdin_writer()
kwargs['klass'] = klass
runner = self._runner(**kwargs)
runner.run(_, responses=kwargs['responses'], hide=True)
return klass.write_proc_stdin
def string_keys_in_responses_kwarg_yield_values_as_stdin_writes(self):
self._expect_response(
out="the house was empty",
responses={'empty': 'handed'},
).assert_called_once_with("handed")
def regex_keys_also_work(self):
self._expect_response(
out="technically, it's still debt",
responses={r'tech.*debt': 'pay it down'},
).assert_called_once_with('pay it down')
def multiple_hits_yields_multiple_responses(self):
holla = call('how high?')
self._expect_response(
out="jump, wait, jump, wait",
responses={'jump': 'how high?'},
).assert_has_calls([holla, holla])
def chunk_sizes_smaller_than_patterns_still_work_ok(self):
klass = self._mock_stdin_writer()
klass.read_chunk_size = 1 # < len('jump')
responses = {'jump': 'how high?'}
runner = self._runner(klass=klass, out="jump, wait, jump, wait")
runner.run(_, responses=responses, hide=True)
holla = call('how high?')
# Responses happened, period.
klass.write_proc_stdin.assert_has_calls([holla, holla])
# And there weren't duplicates!
eq_(len(klass.write_proc_stdin.call_args_list), 2)
def patterns_span_multiple_lines(self):
output = """
You only call me
when you have a problem
You never call me
Just to say hi
"""
self._expect_response(
out=output,
responses={r'call.*problem': 'So sorry'},
).assert_called_once_with('So sorry')
def both_out_and_err_are_scanned(self):
bye = call("goodbye")
# Would only be one 'bye' if only scanning stdout
self._expect_response(
out="hello my name is inigo",
err="hello how are you",
responses={"hello": "goodbye"},
).assert_has_calls([bye, bye])
def multiple_patterns_works_as_expected(self):
calls = [call('betty'), call('carnival')]
# Technically, I'd expect 'betty' to get called before 'carnival',
# but under Python 3 it's reliably backwards from Python 2.
# In real world situations where each prompt sits & waits for its
# response, this probably wouldn't be an issue, so using
# any_order=True for now. Thanks again Python 3.
self._expect_response(
out="beep boop I am a robot",
responses={'boop': 'betty', 'robot': 'carnival'},
).assert_has_calls(calls, any_order=True)
def multiple_patterns_across_both_streams(self):
responses = {
'boop': 'betty',
'robot': 'carnival',
'Destroy': 'your ego',
'humans': 'are awful',
}
calls = map(lambda x: call(x), responses.values())
# CANNOT assume order due to simultaneous streams.
# If we didn't say any_order=True we could get race condition fails
self._expect_response(
out="beep boop, I am a robot",
err="Destroy all humans!",
responses=responses,
).assert_has_calls(calls, any_order=True)
class io_sleeping:
# NOTE: there's an explicit CPU-measuring test in the integration suite
# which ensures the *point* of the sleeping - avoiding CPU hogging - is
# actually functioning. These tests below just unit-test the mechanisms
# around the sleep functionality (ensuring they are visible and can be
# altered as needed).
def input_sleep_attribute_defaults_to_hundredth_of_second(self):
eq_(Runner(Context()).input_sleep, 0.01)
@mock_subprocess()
def subclasses_can_override_input_sleep(self):
class MyRunner(_Dummy):
input_sleep = 0.007
with patch('invoke.runners.time') as mock_time:
MyRunner(Context()).run(
_,
in_stream=StringIO("foo"),
out_stream=StringIO(), # null output to not pollute tests
)
eq_(mock_time.sleep.call_args_list, [call(0.007)] * 3)
class stdin_mirroring:
def _test_mirroring(
self,
expect_mirroring,
**kwargs
):
# Setup
fake_in = "I'm typing!"
output = Mock()
input_ = StringIO(fake_in)
input_is_pty = kwargs.pop('in_pty', None)
class MyRunner(_Dummy):
def should_echo_stdin(self, input_, output):
# Fake result of isatty() test here and only here; if we do
# this farther up, it will affect stuff trying to run
# termios & such, which is harder to mock successfully.
if input_is_pty is not None:
input_.isatty = lambda: input_is_pty
return super(MyRunner, self).should_echo_stdin(
input_, output)
# Execute basic command with given parameters
self._run(
_,
klass=MyRunner,
in_stream=input_,
out_stream=output,
**kwargs
)
# Examine mocked output stream to see if it was mirrored to
if expect_mirroring:
eq_(
output.write.call_args_list,
list(map(lambda x: call(x), fake_in))
)
eq_(len(output.flush.call_args_list), len(fake_in))
# Or not mirrored to
else:
eq_(output.write.call_args_list, [])
def when_pty_is_True_no_mirroring_occurs(self):
self._test_mirroring(
pty=True,
expect_mirroring=False,
)
def when_pty_is_False_we_write_in_stream_back_to_out_stream(self):
self._test_mirroring(
pty=False,
in_pty=True,
expect_mirroring=True,
)
def mirroring_is_skipped_when_our_input_is_not_a_tty(self):
self._test_mirroring(
in_pty=False,
expect_mirroring=False,
)
def mirroring_can_be_forced_on(self):
self._test_mirroring(
# Subprocess pty normally disables echoing
pty=True,
# But then we forcibly enable it
echo_stdin=True,
# And expect it to happen
expect_mirroring=True,
)
def mirroring_can_be_forced_off(self):
# Make subprocess pty False, stdin tty True, echo_stdin False,
# prove no mirroring
self._test_mirroring(
# Subprocess lack of pty normally enables echoing
pty=False,
# Provided the controlling terminal _is_ a tty
in_pty=True,
# But then we forcibly disable it
echo_stdin=False,
# And expect it to not happen
expect_mirroring=False,
)
def mirroring_honors_configuration(self):
self._test_mirroring(
pty=False,
in_pty=True,
settings={'run': {'echo_stdin': False}},
expect_mirroring=False,
)
class character_buffered_stdin:
@skip_if_windows
@patch('invoke.platform.tty')
@patch('invoke.platform.termios') # stub
def setcbreak_called_on_tty_stdins(self, mock_termios, mock_tty):
self._run(_)
mock_tty.setcbreak.assert_called_with(sys.stdin)
@skip_if_windows
@patch('invoke.platform.tty')
def setcbreak_not_called_on_non_tty_stdins(self, mock_tty):
self._run(_, in_stream=StringIO())
eq_(mock_tty.setcbreak.call_args_list, [])
@skip_if_windows
@patch('invoke.platform.tty') # stub
@patch('invoke.platform.termios')
def tty_stdins_have_settings_restored_by_default(
self, mock_termios, mock_tty
):
sentinel = [1, 7, 3, 27]
mock_termios.tcgetattr.return_value = sentinel
self._run(_)
mock_termios.tcsetattr.assert_called_once_with(
sys.stdin, mock_termios.TCSADRAIN, sentinel
)
@skip_if_windows
@patch('invoke.platform.tty') # stub
@patch('invoke.platform.termios')
def tty_stdins_have_settings_restored_on_KeyboardInterrupt(
self, mock_termios, mock_tty
):
# This test is re: GH issue #303
# tcgetattr returning some arbitrary value
sentinel = [1, 7, 3, 27]
mock_termios.tcgetattr.return_value = sentinel
# Don't actually bubble up the KeyboardInterrupt...
try:
self._run(_, klass=_KeyboardInterruptingRunner)
except KeyboardInterrupt:
pass
# Did we restore settings?!
mock_termios.tcsetattr.assert_called_once_with(
sys.stdin, mock_termios.TCSADRAIN, sentinel
)
class keyboard_interrupts_act_transparently:
def _run_with_mocked_interrupt(self, klass):
runner = klass(Context(config=Config()))
runner.send_interrupt = Mock()
try:
runner.run(_)
except:
pass
return runner
def send_interrupt_called_on_KeyboardInterrupt(self):
runner = self._run_with_mocked_interrupt(
_KeyboardInterruptingRunner
)
assert runner.send_interrupt.called
def send_interrupt_not_called_for_other_exceptions(self):
class _GenericExceptingRunner(_Dummy):
def wait(self):
raise Exception
runner = self._run_with_mocked_interrupt(_GenericExceptingRunner)
assert not runner.send_interrupt.called
def KeyboardInterrupt_is_still_raised(self):
raised = None
try:
self._run(_, klass=_KeyboardInterruptingRunner)
except KeyboardInterrupt as e:
raised = e
assert raised is not None
class _FastLocal(Local):
# Neuter this for same reason as in _Dummy above
input_sleep = 0
class _KeyboardInterruptingFastLocal(_FastLocal):
def wait(self):
raise KeyboardInterrupt
class Local_(Spec):
def _run(self, *args, **kwargs):
return _run(*args, **dict(kwargs, klass=_FastLocal))
def _runner(self, *args, **kwargs):
return _runner(*args, **dict(kwargs, klass=_FastLocal))
class pty_and_pty_fallback:
@mock_pty()
def when_pty_True_we_use_pty_fork_and_os_exec(self):
"when pty=True, we use pty.fork and os.exec*"
self._run(_, pty=True)
# @mock_pty's asserts check os/pty calls for us.
@mock_pty()
def pty_is_set_to_controlling_terminal_size(self):
self._run(_, pty=True)
# @mock_pty's asserts check fcntl calls for us
def warning_only_fires_once(self):
# I.e. if implementation checks pty-ness >1 time, only one warning
# is emitted. This is kinda implementation-specific, but...
skip()
@mock_pty(isatty=False)
def can_be_overridden_by_kwarg(self):
self._run(_, pty=True, fallback=False)
# @mock_pty's asserts will be mad if pty-related os/pty calls
# didn't fire, so we're done.
@mock_pty(isatty=False)
def can_be_overridden_by_config(self):
self._runner(run={'fallback': False}).run(_, pty=True)
# @mock_pty's asserts will be mad if pty-related os/pty calls
# didn't fire, so we're done.
@trap
@mock_subprocess(isatty=False)
def fallback_affects_result_pty_value(self, *mocks):
eq_(self._run(_, pty=True).pty, False)
@mock_pty(isatty=False)
def overridden_fallback_affects_result_pty_value(self):
eq_(self._run(_, pty=True, fallback=False).pty, True)
@patch('invoke.runners.sys')
def replaced_stdin_objects_dont_explode(self, mock_sys):
# Replace sys.stdin with an object lacking .isatty(), which
# normally causes an AttributeError unless we are being careful.
mock_sys.stdin = object()
# Test. If bug is present, this will error.
runner = Local(Context())
eq_(runner.should_use_pty(pty=True, fallback=True), False)
@mock_pty(trailing_error=OSError("Input/output error"))
def spurious_OSErrors_handled_gracefully(self):
# Doesn't-blow-up test.
self._run(_, pty=True)
@mock_pty(trailing_error=OSError("wat"))
def non_spurious_OSErrors_bubble_up(self):
try:
self._run(_, pty=True)
except ThreadException as e:
e = e.exceptions[0]
eq_(e.type, OSError)
eq_(str(e.value), "wat")
class send_interrupt:
def _run(self, pty):
runner = _KeyboardInterruptingFastLocal(Context(config=Config()))
try:
runner.run(_, pty=pty)
except KeyboardInterrupt:
pass
return runner
@mock_pty(skip_asserts=True)
def uses_os_kill_when_pty_True(self):
with patch('invoke.runners.os.kill') as kill:
runner = self._run(pty=True)
kill.assert_called_once_with(runner.pid, SIGINT)
@mock_subprocess()
def uses_subprocess_send_signal_when_pty_False(self):
runner = self._run(pty=False)
# Don't see a great way to test this w/o replicating the logic.
expected = SIGTERM if WINDOWS else SIGINT
runner.process.send_signal.assert_called_once_with(expected)
class shell:
@mock_pty(insert_os=True)
def defaults_to_bash_when_pty_True(self, mock_os):
self._run(_, pty=True)
eq_(mock_os.execve.call_args_list[0][0][0], '/bin/bash')
@mock_subprocess(insert_Popen=True)
def defaults_to_bash_when_pty_False(self, mock_Popen):
self._run(_, pty=False)
eq_(mock_Popen.call_args_list[0][1]['executable'], '/bin/bash')
@mock_pty(insert_os=True)
def may_be_overridden_when_pty_True(self, mock_os):
self._run(_, pty=True, shell='/bin/zsh')
eq_(mock_os.execve.call_args_list[0][0][0], '/bin/zsh')
@mock_subprocess(insert_Popen=True)
def may_be_overridden_when_pty_False(self, mock_Popen):
self._run(_, pty=False, shell='/bin/zsh')
eq_(mock_Popen.call_args_list[0][1]['executable'], '/bin/zsh')
class env:
# NOTE: update-vs-replace semantics are tested 'purely' up above in
# regular Runner tests.
@mock_subprocess(insert_Popen=True)
def uses_Popen_kwarg_for_pty_False(self, mock_Popen):
self._run(_, pty=False, env={'FOO': 'BAR'})
expected = dict(os.environ, FOO='BAR')
eq_(
mock_Popen.call_args_list[0][1]['env'],
expected
)
@mock_pty(insert_os=True)
def uses_execve_for_pty_True(self, mock_os):
type(mock_os).environ = {'OTHERVAR': 'OTHERVAL'}
self._run(_, pty=True, env={'FOO': 'BAR'})
expected = {'OTHERVAR': 'OTHERVAL', 'FOO': 'BAR'}
eq_(
mock_os.execve.call_args_list[0][0][2],
expected
)
| 37.368802 | 123 | 0.580129 | 4,293 | 36,173 | 4.627067 | 0.161426 | 0.028695 | 0.010068 | 0.014096 | 0.349577 | 0.270942 | 0.190697 | 0.152135 | 0.121375 | 0.102094 | 0 | 0.00361 | 0.318497 | 36,173 | 967 | 124 | 37.407446 | 0.802166 | 0.158074 | 0 | 0.327116 | 0 | 0 | 0.070233 | 0.005165 | 0 | 0 | 0 | 0.001034 | 0.05165 | 1 | 0.192253 | false | 0.012912 | 0.017217 | 0.011478 | 0.285509 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aacb51f4d47708d6596701aa7fecbbaaf4255ef3 | 25,115 | py | Python | xmind/document.py | americanpezza/reqmapper | c4e015cc654c627ee9a135c43e5517fd65ba410d | [
"IBM-pibs"
] | null | null | null | xmind/document.py | americanpezza/reqmapper | c4e015cc654c627ee9a135c43e5517fd65ba410d | [
"IBM-pibs"
] | null | null | null | xmind/document.py | americanpezza/reqmapper | c4e015cc654c627ee9a135c43e5517fd65ba410d | [
"IBM-pibs"
] | null | null | null | # -*- coding: utf-8 -*-
# (c) 2008-2010, Marcin Kasperski
"""
Create and parse XMind maps.
"""
from lxml import etree
import zipfile
from .id_gen import IdGen, qualify_id, unique_id
from .xmlutil import XmlHelper, ns_name, \
CONTENT_NSMAP, STYLES_NSMAP, find_xpath
import logging
log = logging.getLogger(__name__)
DUMP_PARSED_DATA = False
ATTACHMENTS_DIR = "attachments/"
META_FILE_BODY = u'<?xml version="1.0" encoding="UTF-8" standalone="no"?>' + \
'<meta xmlns="urn:xmind:xmap:xmlns:meta:2.0" version="2.0"/>'
MANIFEST_FILE_BODY = u'''<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<manifest xmlns="urn:xmind:xmap:xmlns:manifest:1.0">
<file-entry full-path="content.xml" media-type="text/xml"/>
<file-entry full-path="META-INF/" media-type=""/>
<file-entry full-path="META-INF/manifest.xml" media-type="text/xml"/>
<file-entry full-path="styles.xml" media-type=""/>
<file-entry full-path="Thumbnails/" media-type=""/>
<file-entry full-path="Thumbnails/thumbnail.jpg" media-type="image/jpeg"/>
</manifest>'''
# See org.xmind.ui.resources/markers/markerSheet.xml
ALL_MARKS = [
'priority-1', 'priority-2', 'priority-3',
'priority-4', 'priority-5', 'priority-6',
'flag-red', 'flag-orange', 'flag-green',
'flag-purple', 'flag-blue', 'flag-black',
'smiley-smile', 'smiley-laugh', 'smiley-angry',
'smiley-cry', 'smiley-surprise', 'smiley-boring',
'other-calendar', 'other-email', 'other-phone', 'other-fax',
'other-people', 'other-clock', 'other-coffee-cup', 'other-question',
'other-exclam', 'other-lightbulb',
'task-start', 'task-quarter', 'task-half',
'task-3quar', 'task-done', 'task-pause',
]
SHAPE_RECTANGLE = "org.xmind.topicShape.rectangle"
SHAPE_ROUND_RECTANGLE = "org.xmind.topicShape.roundedRect"
SHAPE_ELLIPSIS = "org.xmind.topicShape.ellipse"
_id_gen = IdGen(26)
class DocumentPart(object):
"""
Base class for all mindmap related objects (sheets, topics, legends etc).
Provides .doc attribute
"""
def __init__(self, doc):
self.doc = doc
class Legend(DocumentPart):
"""
Map legend handling.
Legend can be used to describe the meaning of markers (graphical
symbols) present on the map, and is displayed as a rectangular box
containing markers and their descriptions. By default it is empty,
markers which are to be described should be added using ``add_marker``
method.
Legend object is usually created/accessed via Sheet.get_legend.
>>> legend = sheet.get_legend()
>>> legend.add_marker(
... "task-done", u"Task done")
>>> legend.add_marker(
... "task-start", u"Task being worked on")
"""
@classmethod
def create(cls, doc, sheet_tag):
"""
Creates legend on the mind-map. Usually not
used directly (see Sheet.get_legend instead).
Arguments
---------
doc : XMindDocument
MindMap being modified
sheet_tag : etree
XML node of <sheet>
"""
legend_tag = doc.create_child(
sheet_tag, u"legend", visibility = "visible")
return Legend(doc, legend_tag)
def __init__(self, doc, legend_tag):
DocumentPart.__init__(self, doc)
self.legend_tag = legend_tag
def set_position(self, x_pos, y_pos):
"""
Enforce legend position on the sheet.
>>> sheet.get_legend().set_position(500, 500)
Arguments
---------
x_pos : int
Horizontal position (in pixels, 0 means left border)
y_pos : int
Vertical position (in pixels, 0 means top border)
"""
pos = self.doc.find_or_create_child(self.legend_tag, "position")
pos.set(ns_name("svg", "x"), x_pos)
pos.set(ns_name("svg", "y"), y_pos)
def add_marker(self, marker_id, description):
"""
Adds marker to the legend with given description.
>>> sheet.get_legend().add_marker(
... "task-done", u"Task done")
Arguments
---------
marker_id : string
Either the name of one of the predefined XMind markers
(one of the constants in ALL_MARKS), or hashed string
which identifies custom marker from embedded markers
(see XMindDocument.embed_markers)
description : string
Short marker description to be put on the legend.
"""
markers_block = self.doc.find_or_create_child(
self.legend_tag, "marker-descriptions")
self.doc.create_child(markers_block, u"marker-description",
attrib={"marker-id": marker_id,
"description": description})
class Sheet(DocumentPart):
"""
Represents single sheet (diagram) on the mind-map
(note that a single XMind document can contain many sheets).
"""
@classmethod
def create(cls, doc, sheet_name, root_topic_name):
"""
Create new sheet. Usually not used directly,
use ``XMindDocument.create_sheet`` instead.
"""
sheet_tag = doc.create_child(doc.doc_tag, "sheet",
id = _id_gen.next())
sheet = Sheet(doc, sheet_tag)
sheet.set_title(sheet_name)
topic_tag = doc.create_child(sheet_tag, u"topic",
id = _id_gen.next())
doc.create_child(topic_tag, u"title").text = root_topic_name
return sheet
def __init__(self, doc, sheet_tag):
DocumentPart.__init__(self, doc)
self.sheet_tag = sheet_tag
def set_title(self, title):
"""
Change sheet title (label displayed on sheet tab).
"""
self.doc.find_or_create_child(self.sheet_tag, "title").text = title
def get_title(self):
"""
Get the sheet title
"""
return self.doc.find_only_child(self.sheet_tag, "title").text
def get_root_topic(self):
"""
Get the root topic of the sheet (this topic always exists)
"""
return Topic(self.doc, self.doc.find_only_child(
self.sheet_tag, "topic"))
def get_legend(self):
"""
Get the legend object for the sheet, create it if it does
not exist.
"""
legend_tag = self.doc.find_only_child(
self.sheet_tag, u"legend", required = False)
if legend_tag is not None:
return Legend(self.doc, legend_tag)
else:
return Legend.create(self.doc, self.sheet_tag)
class Topic(DocumentPart):
"""
Representation of single topic (item) on the map.
"""
def __init__(self, doc, topic_tag):
DocumentPart.__init__(self, doc)
self.topic_tag = topic_tag
def get_embedded_id(self):
"""
Read and return so called "embedded topic id", if present,
otherwise returns None.
"embedded ids" are purely mekk.xmind convention used to
identify topics in scenarios where some map is created with
mekk.xmind, then edited inside XMind, then parsed again with
mekk.xmind. As XMind identifies every topic on the map with an
identifier (and preserves this identifier while the topic is
edited), mekk.xmind just uses this field, adding some specific
prefix to detect new topics.
So, using get_embedded_id makes sense only on maps which
were initially created with mekk.xmind. If such an id is specified
while topic is created, then it can be recognized after map is edited.
The method returns None for topics created directly
inside XMind.
"""
return qualify_id(self.topic_tag.get("id"))
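# Illustrative sketch of the embedded-id round trip described above; the
# titles and the "task-a" identifier are made-up example values, not part
# of the library API:
#
#   topic = sheet.get_root_topic().add_subtopic(u"Task A", subtopic_emb_id="task-a")
#   # ... map saved, edited inside XMind, reopened with XMindDocument.open ...
#   for child in sheet.get_root_topic().get_subtopics():
#       if child.get_embedded_id() == "task-a":
#           pass  # same logical topic, recognized after the edit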
def get_correlation_id(self):
"""
Returns unique identifier for given topic. The identifier
is unique within the whole map and is never empty, can be
used - for example - as a key in structures containing topics.
"""
return unique_id(self.topic_tag.get("id"))
def _subtopics_tag(self, detached = False ):
"""
Internal helper. Returns XML tag for subtopics block
"""
children_tag = self.doc.find_or_create_child(self.topic_tag, "children")
mode = detached and "detached" or "attached"
#topics_tag = children_tag.xpath("topics[@type='%s']" % mode)
#topics_tag[0]
topics_tag = find_xpath(
children_tag,
"%s[@%s='%s']" % (self.doc.xpath_name("topics"),
"type", #self.doc.xpath_name("type"),
mode),
single = True, required = False)
if topics_tag is None:
topics_tag = self.doc.create_child(
children_tag, u"topics", type = mode)
return topics_tag
def add_subtopic(self, subtopic_title,
subtopic_emb_id = None, detached = False, folded = True):
"""
Create new topic as a child of this topic.
Arguments
---------
subtopic_title : unicode
Title (label) of newly added topic
subtopic_emb_id : string (optional)
Embedded identifier (see comment for `get_embedded_id`)
detached : bool (default False)
Make subtopic detached (not connected to the parent).
Usually used only while adding child to the root topic,
but seems to work elsewhere too.
"""
topics_tag = self._subtopics_tag(detached)
subtopic_tag = self.doc.create_child(topics_tag, u"topic",
id = _id_gen.next(subtopic_emb_id))
if folded:
subtopic_tag.set("branch", "folded")
self.doc.create_child(subtopic_tag, u"title").text = subtopic_title
return Topic(self.doc, subtopic_tag)
def get_subtopics(self, detached = False):
"""
Yields all subtopics of this topic. By default
connected children are returned, if `detached` param
is set, disconnected (detached) children are returned.
"""
topics_tag = self._subtopics_tag(detached)
for element in self.doc.find_children(topics_tag, "topic"):
yield Topic(self.doc, element)
def set_title(self, title):
"""
Change topic title
"""
self.doc.find_or_create_child(self.topic_tag, "title").text = title
def get_title(self):
"""
Returns topic title
"""
return self.doc.find_or_create_child(self.topic_tag, "title").text
def add_marker(self, marker):
"""
Add graphical marker to the topic.
Note: single topic can have many markers (but it is not very
pleasant visually).
Arguments
---------
marker : string
Either the name of one of the predefined XMind markers
(one of the constants in `ALL_MARKS`), or hashed string
which identifies custom marker from embedded markers
(see XMindDocument.embed_markers)
"""
marker_refs_tag = self.doc.find_or_create_child(
self.topic_tag, "marker-refs")
self.doc.create_child(
marker_refs_tag, "marker-ref", attrib={"marker-id": marker})
def get_markers(self):
"""
Yields all markers currently attached to the topic.
Returned values have semantics described in ``add_marker``
(are either predefined constants like ``smiley-laugh``,
or hashed identifiers for attached marker sheet items).
"""
marker_refs_tag = self.doc.find_only_child(
self.topic_tag, "marker-refs", required = False)
if marker_refs_tag is not None:
for element in self.doc.find_children(
marker_refs_tag, "marker-ref"):
yield element.get("marker-id")
def set_link(self, url):
"""
Adds/replaces http(s) link to the topic. XMind will show
that the link is present and will make it possible to open
linked page using external or internal web browser.
Warning: setting link removes attachment, if present, topic
can't contain both.
Arguments
----------
url : string
Page address (for example "http://slashdot.org")
"""
self.topic_tag.set("{http://www.w3.org/1999/xlink}href", url)
def get_link(self):
"""
Returns link (url) attached to topic, if present, or None, if not.
"""
return self.topic_tag.get("{http://www.w3.org/1999/xlink}href")
def set_attachment(self, data, extension):
"""
Attaches some data to the topic. The given data is saved inside the
generated mind map and linked to this topic.
Warning: setting attachment removes any previous attachment
and also any set link.
Arguments
---------
data : string
actual data (usually content of some file)
extension : string
file extension (used to signal the data format, for example
``.txt``, ``.html``, ``.zip``, ``.json``)
"""
att_name = _id_gen.next() + extension
self.doc._create_attachment(att_name, data)
self.topic_tag.set("{http://www.w3.org/1999/xlink}href",
"xap:attachments/" + att_name)
def set_note(self, note_text):
"""
Adds/replaces topic note (long text attached to the topic).
Line breaks are preserved (to mark paragraphs), apart from that
no formatting is handled.
"""
notes_tag = self.doc.find_or_create_child(self.topic_tag, "notes")
self.doc.find_or_create_child(notes_tag, "plain").text = note_text
html_tag = self.doc.find_or_create_child(notes_tag, "html")
for line in note_text.split("\n"):
self.doc.create_child(html_tag, "xhtml:p").text = line
# TODO: Implement set_note_html(self, html_text). Difficulty: HTML tags
# must be namespace prefixed.
def get_note(self):
"""
Returns note (topic description) text, or empty string
if it is not present
"""
notes_tag = self.doc.find_or_create_child(self.topic_tag, "notes")
return self.doc.find_or_create_child(notes_tag, "plain").text
def set_label(self, label_text):
"""
Sets/replaces topic label (short tag-like annotation)
"""
labels_tag = self.doc.find_or_create_child(self.topic_tag, "labels")
self.doc.find_or_create_child(labels_tag, "label").text = label_text
def get_label(self):
"""
Gets topic label (or empty text if missing)
"""
labels_tag = self.doc.find_or_create_child(self.topic_tag, "labels")
return self.doc.find_or_create_child(labels_tag, "label").text
def set_style(self, style):
"""
Attaches specific visual style to the topic.
Arguments
---------
style : TopicStyle
Object defining visual characteristics of the topic
(usually created via XMindDocument.create_topic_style)
"""
self.topic_tag.set("style-id", style.get_id())
class TopicStyle(object):
"""
Topic visual presentation style. To be used as Topic.set_style
parameter.
Single TopicStyle can be used for many topics.
"""
@classmethod
def create(cls, doc,
fill, shape = SHAPE_ROUND_RECTANGLE,
line_color = "#CACACA", line_width = "1pt", styleid=None):
"""
Create style object, saving it inside the map. Such
object can be later attached to topics using set_style.
Note: while this method can be used directly, the recommended
way is to call XMindDocument.create_topic_style(...)
Arguments
---------
doc : XMindDocument
Map object (the created style is always saved inside this map)
fill : string
Background color (using SVG notation, for example
``#37D02B``)
shape : string (optional)
Shape (SHAPE_RECTANGLE, SHAPE_ROUND_RECTANGLE or SHAPE_ELLIPSIS)
line_color : string (optional)
Border color (SVG, for example ``#AABBCC``)
line_width : string (optional)
Border width (SVG-like, for example ``1pt``)
"""
styles = doc.find_or_create_child(doc.styles_tag, "styles")
if styleid is None:
styleid = _id_gen.next()
style_tag = doc.create_child(styles, "style",
id = styleid, type="topic")
doc.create_child(style_tag, "topic-properties",
attrib = {
"line-color" : line_color,
"line-width" : line_width,
"shape-class" : shape,
ns_name("svg", "fill") : fill,
})
s = TopicStyle(style_tag)
doc.add_style(s)
return s
def __init__(self, style_tag):
self.style_tag = style_tag
def get_id(self):
"""
Returns internal object identifier (unique within map)
"""
return self.style_tag.get("id")
class XMindDocument(XmlHelper):
"""
Whole XMind document representation
"""
_styles={}
@classmethod
def create(cls, first_sheet_name, root_topic_name):
"""
Create new, almost empty document, with just one
sheet and it's root topic. Document can be manipulated
using library API (usually via sheets), then saved using ``save``.
"""
doc_tag = etree.Element(
"xmap-content", nsmap = CONTENT_NSMAP, version = "2.0")
styles_tag = etree.Element(
"xmap-styles", nsmap = STYLES_NSMAP, version = "2.0")
obj = XMindDocument(True, doc_tag, styles_tag)
obj.create_sheet(first_sheet_name, root_topic_name)
return obj
@classmethod
def open(cls, filename):
"""
Open and parse existing mind-map.
"""
archive = zipfile.ZipFile(filename, "r")
doc_tag = None
styles_tag = None
attachments = {}
for name in archive.namelist():
if name == "content.xml":
#doc_tag = etree.parse(archive.open(name), "r") # python 2.6
log.debug("parsing content.xml")
doc_tag = etree.XML(archive.read(name))
elif name == "styles.xml":
log.debug("parsing styles.xml")
styles_tag = etree.XML(archive.read(name))
elif name in ['meta.xml', 'META-INF/manifest.xml',
'Thumbnails/thumbnail.jpg' ]:
pass
elif name.startswith(ATTACHMENTS_DIR):
short = name[len(ATTACHMENTS_DIR):]
log.debug("Found attachment %s" % short)
attachments[short] = archive.read(name)
elif name.startswith("markers/"):
pass
else:
log.warn("Unknown xmind file member: %s" % name)
if doc_tag is None:
raise Exception("Invalid xmind file: %s (missing content block)" % filename)
if styles_tag is None:
# XMind 3.1.1 happens to miss this tag
#raise Exception("Invalid xmind file: %s (missing style block)" % filename)
styles_tag = etree.Element(
"xmap-styles", nsmap = STYLES_NSMAP, version = "2.0")
if DUMP_PARSED_DATA:
logging.debug("Parsed document:\n%s",
etree.tostring(doc_tag, pretty_print = True))
logging.debug("Parsed styles:\n%s",
etree.tostring(styles_tag, pretty_print = True))
return XMindDocument(False, doc_tag, styles_tag, attachments)
def __init__(self, is_creating, doc_tag, styles_tag, attachments = None):
"""
Constructor. Don't use directly, use
XMindDocument.create or XMindDocument.open
"""
XmlHelper.__init__(self, is_creating, "xm")
self.doc_tag = doc_tag
self.styles_tag = styles_tag
self.attachments = (attachments or {})
self.embed_xmp = None
def create_sheet(self, sheet_name, root_topic_name):
"""
Add new sheet (and return it)
"""
sheet = Sheet.create(self,
sheet_name, root_topic_name)
return sheet
def create_topic_style(self, *args, **kwargs):
"""
Create visual topic style (which can be attached
to one or more topics with topic.set_style(style).
The parameters are identical as in TopicStyle.create
(except doc).
"""
return TopicStyle.create(self, *args, **kwargs)
def get_first_sheet(self):
"""
Return first sheet of the map.
"""
sheet_tags = self.find_children(
self.doc_tag, "sheet", require_non_empty = True)
return Sheet(self, sheet_tags[0])
def get_all_sheets(self):
"""
Yields all sheets of the map.
"""
sheet_tags = self.find_children(
self.doc_tag, "sheet", require_non_empty = True)
for sheet_tag in sheet_tags:
yield Sheet(self, sheet_tag)
def embed_markers(self, xmp_file_name):
"""
Attaches to the map set of custom markers (graphical icons
used to mark topics). Markers will be saved inside the map,
so will be visible on other installations.
Only one marker set can be embedded, successive calls
to this function overwrite previous values.
Arguments
---------
xmp_file_name : string (file name)
Name of ``.xmp`` file to be embedded.
The best way to create such a file is to export
markers using appropriate XMind option.
Note: the file is not accessed immediately; its content
is copied during ``save``.
"""
self.embed_xmp = xmp_file_name
def save(self, output_file_name):
"""
Save mindmap to given file.
"""
zipf = zipfile.ZipFile(output_file_name, "w")
self._add_to_zip(zipf, "content.xml",
self._serialize_xml(self.doc_tag))
self._add_to_zip(zipf, "styles.xml",
self._serialize_xml(self.styles_tag))
self._add_to_zip(zipf, "meta.xml", META_FILE_BODY)
manifest_content = MANIFEST_FILE_BODY
for name, data in self.attachments.items():
path = ATTACHMENTS_DIR + name
self._add_to_zip(zipf, path, data)
manifest_content = manifest_content.replace(
"</manifest>",
('<file-entry full-path="%s" media-type=""/>' % path)
+ "\n</manifest>")
if self.embed_xmp:
xmpf = zipfile.ZipFile(self.embed_xmp, "r")
manifest_content = manifest_content.replace(
"</manifest>",
'<file-entry full-path="markers/" media-type=""/>'
+ "\n</manifest>")
for name in xmpf.namelist():
path = "markers/" + name
self._add_to_zip(
zipf, path,
xmpf.read(name))
manifest_content = manifest_content.replace(
"</manifest>",
('<file-entry full-path="%s" media-type=""/>' % path)
+ "\n</manifest>")
self._add_to_zip(zipf, "META-INF/manifest.xml", manifest_content)
def pretty_print(self):
"""
Debug helper, prints internal map structure to the screen
"""
print (self._serialize_xml(self.doc_tag))
print (self._serialize_xml(self.styles_tag))
def attachment_names(self):
"""
Return names of all attachments present inside the map
(independent to which topic they are attached).
"""
return self.attachments.keys()
def attachment_body(self, name):
"""
Returns body of attachment of given name.
"""
return self.attachments[name]
def _create_attachment(self, internal_name, data):
"""
Private attachment-creation helper.
Use Topic.set_attachment instead!
"""
self.attachments[internal_name] = data
def _add_to_zip(self, zipf, name, content):
"""
Add member of name name and content content to zipfile zipf.
"""
zipf.writestr(name, content)
def _serialize_xml(self, tag):
"""
Serialize given tag to text using proper settings.
"""
return etree.tostring(
tag,
encoding = "utf-8", method="xml",
xml_declaration=True, pretty_print=True,
with_tail=True)
def add_style(self, style):
self._styles[style.get_id()] = style
def get_styles(self):
return self._styles
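# A minimal, self-contained usage sketch of the API defined above. The file
# name and all titles below are arbitrary example values, and the block is
# kept under a __main__ guard so it only runs when executed directly.
if __name__ == "__main__":
    doc = XMindDocument.create(u"Sheet 1", u"Root topic")
    sheet = doc.get_first_sheet()
    root = sheet.get_root_topic()
    child = root.add_subtopic(u"First subtopic")
    child.add_marker("task-start")
    child.set_note(u"Some longer description.\nSecond paragraph.")
    sheet.get_legend().add_marker("task-start", u"Work in progress")
    style = doc.create_topic_style(fill="#37D02B")
    child.set_style(style)
    doc.save("example.xmind")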
| 34.498626 | 88 | 0.591957 | 3,048 | 25,115 | 4.710302 | 0.173556 | 0.023891 | 0.016856 | 0.017761 | 0.255207 | 0.221913 | 0.175663 | 0.136937 | 0.115205 | 0.105036 | 0 | 0.004171 | 0.303126 | 25,115 | 727 | 89 | 34.54608 | 0.816135 | 0.330679 | 0 | 0.146104 | 0 | 0.00974 | 0.150102 | 0.031672 | 0 | 0 | 0 | 0.001376 | 0 | 1 | 0.159091 | false | 0.006494 | 0.016234 | 0.003247 | 0.279221 | 0.019481 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aacecaf7e16afbf0b894e67b354ec9442d6c5b37 | 5,574 | py | Python | db/python/load_virta_otp_viisviis.py | CSCfi/antero | e762c09e395cb01e334f2a04753ba983107ac7d7 | [
"MIT"
] | 6 | 2017-08-03T08:49:17.000Z | 2021-11-14T17:09:27.000Z | db/python/load_virta_otp_viisviis.py | CSC-IT-Center-for-Science/antero | 2835d7fd07cd7399a348033a6230d1b16634fb3b | [
"MIT"
] | 3 | 2017-05-03T08:45:42.000Z | 2020-10-27T06:30:40.000Z | db/python/load_virta_otp_viisviis.py | CSC-IT-Center-for-Science/antero | 2835d7fd07cd7399a348033a6230d1b16634fb3b | [
"MIT"
] | 4 | 2017-10-19T11:31:43.000Z | 2022-01-05T14:53:57.000Z | # -*- coding: utf-8 -*-
"""
Created on Tue Aug 24 15:59:32 2021
@author: vhamalai
"""
#!/usr/bin/python
# vim: set fileencoding=UTF-8 :
"""
load p3
Python 3 version of load.py
todo doc
"""
import sys,os,getopt
import requests
import json
import base64
from time import localtime, strftime
import dboperator
def makerow():
return {
'edellinenSyysolo': None, 'hetu': None, 'ika': None, 'kevat': None, 'loAloituspvm': None, 'olok': None,
'olos': None, 'ooAloituspvm': None, 'opSummaKunOtePankista': None, 'opiskelijaavain': None, 'opiskeluoikeusavain': None,
'opiskeluoikeusid': None, 'oppilaitos': None, 'oppilaitostunnus': None, 'pankkiKumuEnnen55': None, 'pankkiSaldo55': None,
'regDatum': None, 'sukupuoli': None, 'summa': None, 'suorittanut27': None, 'suorittanut55ilmanPankkia': None,
'suorittanut55pankinAvulla': None, 'syys': None, 'tkoodi': None, 'uusiOpisk': None, 'uusiOpiskKevat': None,
'uuttaPankkiin': None, 'vuosi': None
}
# get value from json
def jv(jsondata, key):
if key in jsondata:
return jsondata[key]
return None
def show(message):
print((strftime("%Y-%m-%d %H:%M:%S", localtime())+" "+message))
def load(secure,hostname,url,schema,table,postdata,condition,verbose):
show("begin "+hostname+" "+url+" "+schema+" "+table+" "+(postdata or "No postdata")+" "+(condition or ""))
address = "http://"+hostname+url
show("load from "+address)
reqheaders = {'Content-Type': 'application/json'}
reqheaders['Caller-Id'] = '1.2.246.562.10.2013112012294919827487.vipunen'
# api credentials from env vars
if os.getenv("API_USERNAME"):
show("using authentication")
apiuser = os.getenv("API_USERNAME")
apipass = os.getenv("API_PASSWORD")
reqheaders['Authorization'] = 'Basic %s' % base64.b64encode(apiuser+":"+apipass)
# automatic POST with (post)data
#request = urllib.request.Request(address, data=postdata, headers=reqheaders)
#time=300
try:
response = requests.get(address, headers=reqheaders).json()
except Exception as e:
show('HTTP GET failed.')
show('Reason: %s'%(str(e)))
sys.exit(2)
else:
# everything is fine
show("api call OK")
# remove data conditionally, otherwise empty
# merge operation could be considered here...
if condition:
show("remove from %s.%s with condition '%s'"%(schema,table,condition))
dboperator.execute("DELETE FROM %s.%s WHERE %s"%(schema,table,condition))
else:
show("empty %s.%s"%(schema,table))
dboperator.empty(schema,table)
show("insert data")
cnt=0
for i in response:
cnt+=1
# make "columnlist" (type has no meaning as we're not creating table)
row = makerow()
# setup dboperator so other calls work
dboperator.columns(row)
row["edellinenSyysolo"] = jv(i,"edellinenSyysolo")
row["hetu"] = jv(i,"hetu")
row["ika"] = jv(i,"ika")
row["kevat"] = jv(i,"kevat")
row["loAloituspvm"] = jv(i,"loAloituspvm")
row["olok"] = jv(i,"olok")
row["olos"] = jv(i,"olos")
row["ooAloituspvm"] = jv(i,"ooAloituspvm")
row["opSummaKunOtePankista"] = jv(i,"opSummaKunOtePankista")
row["opiskelijaavain"] = jv(i,"opiskelijaavain")
row["opiskeluoikeusavain"] = jv(i,"opiskeluoikeusavain")
row["opiskeluoikeusid"] = jv(i,"opiskeluoikeusid")
row["oppilaitos"] = jv(i,"oppilaitos")
row["oppilaitostunnus"] = jv(i,"oppilaitostunnus")
row["pankkiKumuEnnen55"] = jv(i,"pankkiKumuEnnen55")
row["pankkiSaldo55"] = jv(i,"pankkiSaldo55")
row["regDatum"] = jv(i,"regDatum")
row["sukupuoli"] = jv(i,"sukupuoli")
row["summa"] = jv(i,"summa")
row["suorittanut27"] = jv(i,"suorittanut27")
row["suorittanut55ilmanPankkia"] = jv(i,"suorittanut55ilmanPankkia")
row["suorittanut55pankinAvulla"] = jv(i,"suorittanut55pankinAvulla")
row["syys"] = jv(i,"syys")
row["tkoodi"] = jv(i,"tkoodi")
row["uusiOpisk"] = jv(i,"uusiOpisk")
row["uusiOpiskKevat"] = jv(i,"uusiOpiskKevat")
row["uuttaPankkiin"] = jv(i,"uuttaPankkiin")
row["vuosi"] = jv(i,"vuosi")
dboperator.insert(hostname+url,schema,table,row)
# show some sign of being alive
if cnt%100 == 0:
sys.stdout.write('.')
sys.stdout.flush()
if cnt%1000 == 0:
show("-- %d" % (cnt))
if verbose: show("%d -- %s"%(cnt,row))
show("wrote %d"%(cnt))
show("ready")
def usage():
print("""
usage: load.py [-s|--secure] -H|--hostname <hostname> -u|--url <url> -e|--schema <schema> -t|--table <table> [-p|--postdata] [-c|--condition <condition>] [-v|--verbose]
""")
def main(argv):
# variables supplied via command-line arguments
secure=False
hostname,url,schema,table="","","",""
postdata=None
condition=None
verbose = False
try:
opts,args=getopt.getopt(argv,"sH:u:e:t:p:c:v",["secure","hostname=","url=","schema=","table=","postdata=","condition=","verbose"])
except getopt.GetoptError as err:
print(err)
usage()
sys.exit(2)
for opt,arg in opts:
if opt in ("-s", "--secure"): secure=True
elif opt in ("-H", "--hostname"): hostname=arg
elif opt in ("-u", "--url"): url=arg
elif opt in ("-e", "--schema"): schema=arg
elif opt in ("-t", "--table"): table=arg
elif opt in ("-p", "--postdata"): postdata=arg
elif opt in ("-c", "--condition"): condition=arg
elif opt in ("-v", "--verbose"): verbose=True
if not hostname or not url or not schema or not table:
usage()
sys.exit(2)
load(secure,hostname,url,schema,table,postdata,condition,verbose)
dboperator.close()
if __name__ == "__main__":
main(sys.argv[1:])
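# Illustrative invocation of this loader; the host name, API path, schema and
# table below are placeholder values, and API_USERNAME / API_PASSWORD can be
# exported beforehand to enable basic authentication:
#
#   python load_virta_otp_viisviis.py -H virta.example.fi -u /api/viisviis \
#       -e dw -t virta_viisviis -v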
| 32.596491 | 168 | 0.641909 | 698 | 5,574 | 5.110315 | 0.313754 | 0.023549 | 0.017662 | 0.037006 | 0.062798 | 0.045977 | 0.045977 | 0.045977 | 0.031399 | 0 | 0 | 0.022059 | 0.170434 | 5,574 | 170 | 169 | 32.788235 | 0.749351 | 0.101722 | 0 | 0.07438 | 0 | 0.008264 | 0.338334 | 0.052301 | 0 | 0 | 0 | 0.005882 | 0 | 1 | 0.049587 | false | 0.016529 | 0.049587 | 0.008264 | 0.123967 | 0.024793 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad042067e71ad1859756997e4fc67b00c955314 | 4,823 | py | Python | otcextensions/common/format.py | zsoltn/python-otcextensions | 4c0fa22f095ebd5f9636ae72acbae5048096822c | [
"Apache-2.0"
] | 10 | 2018-03-03T17:59:59.000Z | 2020-01-08T10:03:00.000Z | otcextensions/common/format.py | zsoltn/python-otcextensions | 4c0fa22f095ebd5f9636ae72acbae5048096822c | [
"Apache-2.0"
] | 208 | 2020-02-10T08:27:46.000Z | 2022-03-29T15:24:21.000Z | otcextensions/common/format.py | zsoltn/python-otcextensions | 4c0fa22f095ebd5f9636ae72acbae5048096822c | [
"Apache-2.0"
] | 15 | 2020-04-01T20:45:54.000Z | 2022-03-23T12:45:43.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
import calendar
from openstack import format
class YNBool(format.Formatter):
@classmethod
def deserialize(cls, value):
"""Convert a boolean string to a boolean"""
if isinstance(value, bool):
return value
expr = str(value).lower()
if "y" == expr:
return True
elif "n" == expr:
return False
else:
raise ValueError("Unable to deserialize boolean string: %s"
% value)
@classmethod
def serialize(cls, value):
"""Convert a boolean to a boolean string"""
if value in ["Y", "N", "y", "n"]:
return str(value).upper()
if isinstance(value, bool):
if value:
return "Y"
else:
return "N"
else:
raise ValueError("Unable to serialize boolean string: %s"
% value)
class Bool_10(format.Formatter):
@classmethod
def deserialize(cls, value):
"""Convert a boolean string to a boolean"""
if isinstance(value, bool):
return value
expr = str(value).lower()
if "1" == expr:
return True
elif "0" == expr:
return False
else:
raise ValueError("Unable to deserialize boolean string: %s"
% value)
@classmethod
def serialize(cls, value):
"""Convert a boolean to a boolean string"""
if value in ["1", "0"]:
return str(value).upper()
if isinstance(value, bool):
if value:
return "1"
else:
return "0"
else:
raise ValueError("Unable to serialize boolean string: %s"
% value)
class BoolStr_1(format.BoolStr):
"""Deserialize bool, which can be either bool or string
"""
@classmethod
def deserialize(cls, value):
"""Convert a boolean string to a boolean"""
if isinstance(value, bool):
return value
expr = str(value).lower()
if "true" == expr:
return True
elif "false" == expr:
return False
else:
raise ValueError("Unable to deserialize boolean string: %s"
% value)
class ListRef(format.Formatter):
"""A formatter used to serialize/deserialize list reference
[{"id": "any-id"}] <-> ["any-id"], for example.
"""
@classmethod
def deserialize(cls, value):
"""Convert a list primitive to list reference"""
if isinstance(value, list):
return [item["id"] for item in value]
else:
raise ValueError("Unable to deserialize list reference: %s"
% value)
@classmethod
def serialize(cls, value):
"""Convert list reference to list primitive"""
if isinstance(value, list):
return [{"id": item} for item in value]
else:
raise ValueError("Unable to serialize list reference: %s"
% value)
class TimeTMsStr(format.Formatter):
@classmethod
def deserialize(cls, value):
"""Convert a time_t with msec precision to ISO8601"""
_time = time.gmtime(value / 1000)
# Embed MS placeholder into the format string directly
_format = '%Y-%m-%dT%H:%M:%S.{ms}+00:00'
return time.strftime(_format, _time).format(
ms=int(value % 1000))
@classmethod
def serialize(cls, value):
"""Convert ISO8601 to time_t with ms"""
if isinstance(value, str):
_time_t = None
try:
_time_t = time.strptime(value, '%Y-%m-%dT%H:%M:%S+00:00')
except ValueError:
_time_t = time.strptime(value, '%Y-%m-%dT%H:%M:%S')
if _time_t:
return calendar.timegm(_time_t) * 1000
else:
raise ValueError("Unable to parse time reference: %s"
% value)
elif isinstance(value, int):
return value  # already a time_t with ms precision, pass it through
else:
raise ValueError("Unable to serialize list reference: %s"
% value)
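# A small usage sketch of the formatters above; the sample values are
# arbitrary and only meant to show the expected round-trip behaviour.
if __name__ == "__main__":
    print(YNBool.deserialize("y"))           # True
    print(YNBool.serialize(True))            # 'Y'
    print(Bool_10.serialize(False))          # '0'
    print(ListRef.serialize(["id-1", "id-2"]))  # [{'id': 'id-1'}, {'id': 'id-2'}]
    print(TimeTMsStr.serialize('2021-01-01T00:00:00+00:00'))  # 1609459200000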
| 31.940397 | 75 | 0.546962 | 545 | 4,823 | 4.807339 | 0.242202 | 0.030534 | 0.051527 | 0.085878 | 0.561069 | 0.528244 | 0.50687 | 0.491221 | 0.474427 | 0.433969 | 0 | 0.013149 | 0.353514 | 4,823 | 150 | 76 | 32.153333 | 0.827133 | 0.227244 | 0 | 0.641509 | 0 | 0 | 0.120789 | 0.013969 | 0 | 0 | 0 | 0 | 0 | 1 | 0.084906 | false | 0 | 0.028302 | 0 | 0.339623 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad10c50201722e5ab90e8b6c0ecd781370f77d4 | 1,248 | py | Python | flask_cms/slugifier.py | gaybro8777/Flask-CMS-demo | 9b8dfc3baac23e8f27d1b125e4075c7f939067b3 | [
"BSD-2-Clause"
] | 3 | 2018-04-04T19:48:38.000Z | 2021-02-19T08:40:54.000Z | flask_cms/slugifier.py | gaybro8777/Flask-CMS-demo | 9b8dfc3baac23e8f27d1b125e4075c7f939067b3 | [
"BSD-2-Clause"
] | null | null | null | flask_cms/slugifier.py | gaybro8777/Flask-CMS-demo | 9b8dfc3baac23e8f27d1b125e4075c7f939067b3 | [
"BSD-2-Clause"
] | 3 | 2020-07-13T13:14:10.000Z | 2021-02-19T08:47:31.000Z | import re # Regular expressions
import string
import sys
from unidecode import unidecode
class Slugifier(object):
def __init__(self):
self.to_lower = True
self.safe_chars = string.ascii_letters + string.digits # "a...zA...Z0...9"
self.separator_char = '-'
def slugify(self, text):
if sys.version_info[0] == 2: # Python 2.x
if not isinstance(text, unicode):
text = text.decode('utf8', 'ignore')
else: # Python 3.x
if not isinstance(text, str):
text = text.decode('utf8', 'ignore')
text = unidecode(text)
# Lower case if specified
if self.to_lower:
text = text.lower()
# Replace one or more unsafe chars with one separator char
# Compile regular expression once
if not hasattr(self, 'compiled_expression'):
expression = '[^' + self.safe_chars + ']+'
self.compiled_expression = re.compile(expression)
# Substitute unsafe chars using compiled expression
text = self.compiled_expression.sub(self.separator_char, text)
# Strip leading and trailing separator chars
text = text.strip(self.separator_char)
return text
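# Minimal usage sketch (the sample string is an arbitrary example); relies only
# on the unidecode package imported above.
if __name__ == "__main__":
    slugifier = Slugifier()
    print(slugifier.slugify(u"Crème brûlée & Co."))  # creme-brulee-co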
| 32.842105 | 83 | 0.610577 | 147 | 1,248 | 5.07483 | 0.455782 | 0.069705 | 0.068365 | 0.042895 | 0.117962 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00906 | 0.292468 | 1,248 | 37 | 84 | 33.72973 | 0.835787 | 0.21234 | 0 | 0.08 | 0 | 0 | 0.045267 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.16 | 0 | 0.32 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad1618a594bc10e8806beb0a94f89c675acc645 | 2,418 | py | Python | src/tests/testdummy/signals.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 1,248 | 2015-04-24T13:32:06.000Z | 2022-03-29T07:01:36.000Z | src/tests/testdummy/signals.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 2,113 | 2015-02-18T18:58:16.000Z | 2022-03-31T11:12:32.000Z | src/tests/testdummy/signals.py | fabm3n/pretix | 520fb620888d5c434665a6a4a33cb2ab22dd42c7 | [
"Apache-2.0"
] | 453 | 2015-05-13T09:29:06.000Z | 2022-03-24T13:39:16.000Z | #
# This file is part of pretix (Community Edition).
#
# Copyright (C) 2014-2020 Raphael Michel and contributors
# Copyright (C) 2020-2021 rami.io GmbH and contributors
#
# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation in version 3 of the License.
#
# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are
# applicable granting you additional permissions and placing additional restrictions on your usage of this software.
# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive
# this file, see <https://pretix.eu/about/en/license>.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with this program. If not, see
# <https://www.gnu.org/licenses/>.
#
from django.dispatch import receiver
from pretix.base.channels import SalesChannel
from pretix.base.signals import (
register_payment_providers, register_sales_channels,
register_ticket_outputs,
)
@receiver(register_ticket_outputs, dispatch_uid="output_dummy")
def register_ticket_outputs(sender, **kwargs):
from .ticketoutput import DummyTicketOutput
return DummyTicketOutput
@receiver(register_payment_providers, dispatch_uid="payment_dummy")
def register_payment_provider(sender, **kwargs):
from .payment import (
DummyFullRefundablePaymentProvider,
DummyPartialRefundablePaymentProvider, DummyPaymentProvider,
)
return [DummyPaymentProvider, DummyFullRefundablePaymentProvider, DummyPartialRefundablePaymentProvider]
class FoobazSalesChannel(SalesChannel):
identifier = "baz"
verbose_name = "Foobar"
icon = "home"
testmode_supported = False
class FoobarSalesChannel(SalesChannel):
identifier = "bar"
verbose_name = "Foobar"
icon = "home"
testmode_supported = True
unlimited_items_per_order = True
@receiver(register_sales_channels, dispatch_uid="sc_dummy")
def register_sc(sender, **kwargs):
return [FoobarSalesChannel, FoobazSalesChannel]
| 37.78125 | 118 | 0.780811 | 303 | 2,418 | 6.132013 | 0.49505 | 0.010764 | 0.025834 | 0.040904 | 0.11733 | 0.11733 | 0.100108 | 0 | 0 | 0 | 0 | 0.008811 | 0.155087 | 2,418 | 63 | 119 | 38.380952 | 0.900636 | 0.46981 | 0 | 0.129032 | 0 | 0 | 0.046825 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096774 | false | 0 | 0.16129 | 0.032258 | 0.709677 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad199fefea17786f66ddf58b46784fd341b0e29 | 2,127 | py | Python | rfid_to_csv_list.py | Annamalaisaravanan/RFID-based-Class-Room-Attendance-System | ade38797f86d42a0131d7a0fb39034d126d9070b | [
"MIT"
] | null | null | null | rfid_to_csv_list.py | Annamalaisaravanan/RFID-based-Class-Room-Attendance-System | ade38797f86d42a0131d7a0fb39034d126d9070b | [
"MIT"
] | null | null | null | rfid_to_csv_list.py | Annamalaisaravanan/RFID-based-Class-Room-Attendance-System | ade38797f86d42a0131d7a0fb39034d126d9070b | [
"MIT"
] | null | null | null | import serial
import time
import pandas as pd
from datetime import date,datetime
name=list()
roll=list()
time=list()
time_left=dict()
try:
ser=serial.Serial('COM3',9600)
except:
print("Processing")
column_names=['Name','Roll_No','Time','Time_left']
df=pd.DataFrame(columns=column_names)
i=0
while True:
b = ser.readline()
string_n = b.decode()
string = string_n.rstrip()
flt =string
if flt=="ANNAMALAI":
today=date.today()
now=datetime.now()
name.append("Annamalai")
roll.append(1816106)
time.append(now.strftime('%H:%M:%S'))
time_left["annamalai"]=None
print("\nPerson 1 Entered Class")
i+=1
elif flt=="AJAI":
today=date.today()
now=datetime.now()
name.append("Ajay")
roll.append(1816117)
time.append(now.strftime('%H:%M:%S'))
time_left["ajay"]=None
print("\nPerson 2 Entered Class")
i+=1
elif flt=="SANJAY":
today=date.today()
now=datetime.now()
name.append("Sanjay")
roll.append(1816139)
time.append(now.strftime('%H:%M:%S'))
time_left["sanjay"]=None
print("\nPerson 3 Entered Class")
i+=1
elif flt=="anna":
now=datetime.now()
time_left["annamalai"]=now.strftime('%H:%M:%S')
print("\nPerson 1 left the class")
i+=1
elif flt=="a1":
now=datetime.now()
time_left["ajay"]=now.strftime('%H:%M:%S')
print("\nPerson 2 left the class")
i+=1
elif flt=="sanjay":
now=datetime.now()
time_left["sanjay"]=now.strftime('%H:%M:%S')
print("\nPerson 3 left the class")
i+=1
else:
pass
if i>5:
print("break")
break
else:
pass
df['Name']=name
df['Roll_No']=roll
df['Time']=time
df['Time_left']=time_left.values()
df.to_csv(r'path\to\the\directory\attendence'+now.strftime('%d_%m_%Y')+'.csv')
| 25.321429 | 79 | 0.523742 | 268 | 2,127 | 4.085821 | 0.287313 | 0.073059 | 0.076712 | 0.071233 | 0.442922 | 0.369863 | 0.30137 | 0.191781 | 0.087671 | 0 | 0 | 0.028121 | 0.314528 | 2,127 | 83 | 80 | 25.626506 | 0.722908 | 0 | 0 | 0.297297 | 0 | 0 | 0.192759 | 0.015656 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.027027 | 0.054054 | 0 | 0.054054 | 0.108108 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad2aef40ee84ac2c450c61cc46327034040a0cb | 7,634 | py | Python | script/old/compare_vcf_lofreq-0.0.1.py | genepii/seqmet | 89fdab79131c861d4a5aae364ecdbeb3a9e0ae23 | [
"MIT"
] | null | null | null | script/old/compare_vcf_lofreq-0.0.1.py | genepii/seqmet | 89fdab79131c861d4a5aae364ecdbeb3a9e0ae23 | [
"MIT"
] | null | null | null | script/old/compare_vcf_lofreq-0.0.1.py | genepii/seqmet | 89fdab79131c861d4a5aae364ecdbeb3a9e0ae23 | [
"MIT"
] | null | null | null | from __future__ import print_function
import os
import sys
import getopt
##Count the number of minor variants in a target vcf reported as major variant in a reference vcf, adapted to lofreq vcf
#v0.0.1
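# Illustrative invocation; every file name below is a placeholder value:
#   python compare_vcf_lofreq-0.0.1.py -r reference.fasta -c consensus.vcf \
#       -v sample.vcf -b coverage.bedgraph -R regions.bed -d 20 -f 0.01 -o out_prefix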
def main(argv):
global ref
global var
global con
global oup
global oud
global mode
global bed
global region
global min_depth
global min_freq
ref = ''
var = ''
con = ''
oup = ''
oud = './'
mode = ['raw']
bed = ''
region = ''
min_depth = 20
min_freq = 0.01
try:
opts, args = getopt.getopt(argv, 'hr:c:v:o:x:m:b:R:d:f:', ['help', 'ref', 'con', 'var', 'output', 'outdir', 'mode', 'bed', 'region', 'min_depth', 'min_freq'])
for opt, arg in opts:
if opt in ('-h', '--help'):
usage()
sys.exit()
elif opt in ('-r', '--ref'):
ref = arg
elif opt in ('-c', '--con'):
con = arg
elif opt in ('-v', '--var'):
var = arg
elif opt in ('-o', '--output'):
oup = arg
elif opt in ('-x', '--outdir'):
oud = arg
elif opt in ('-m', '--mode'):
mode = []
for i in range(len(arg.split(','))):
mode.append(arg.split(',')[i])
elif opt in ('-b', '--bed'):
bed = arg
elif opt in ('-R', '--region'):
region = arg
elif opt in ('-d', '--min_depth'):
min_depth = int(arg)
elif opt in ('-f', '--min_freq'):
min_freq = float(arg)
if ref == '' or con == '' or var == '':
usage()
sys.exit()
if oup == '':
oup = var.split("/")[-1].split(".")[0] + '_' + region.split("/")[-1].split(".")[0]
except getopt.GetoptError:
usage()
sys.exit(2)
def usage():
print('usage: ' + sys.argv[0] + ' -h --help -r --ref [fasta] --con [vcf] --var [vcf] -o --output [tsv] -m --mode [raw,cov,common,expected] -b --bed [bed] -R --region [bed] -d --min_depth [int] -f --min_freq [float]')
if __name__ == '__main__':
main(sys.argv[1:])
def count_commented(file):
lines = open(file, 'r').read().rstrip('\n').split('\n')
count = 0
for line in lines:
if line[0] == "#":
count += 1
return count
flatten = lambda t: [item for sublist in t for item in sublist]
seq = [[x.replace('\r\n','\n').split('\n')[0], ''.join(x.replace('\r\n','\n').split('\n')[1:]).replace(' ','')] for x in open(ref, 'r').read().rstrip('\n').split('>')[1:]]
cons = open(con, 'r').read().rstrip('\n').split('\n')[count_commented(con):]
vas = open(var, 'r').read().rstrip('\n').split('\n')[count_commented(var):]
bga = [x.split('\t') for x in open(bed, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n')]
depth = []
for i in range(len(bga)):
depth.append([int(bga[i][3]) for x in range(int(bga[i][1]),int(bga[i][2]))])
depth = flatten(depth)
if region != '':
treg = [x.split('\t') for x in open(region, 'r').read().replace('\r\n','\n').rstrip('\n').split('\n')]
reg = []
for i in range(len(treg)):
reg.append([int(x) for x in range(int(treg[i][1]),int(treg[i][2]))])
reg = flatten(reg)
else:
reg = []
vas_chrom, vas_pos, vas_ref, vas_alt, vas_af, cons_chrom, cons_pos, cons_ref, cons_alt, cons_af = ([] for i in range(10))
temp = []
exp = []
common = 0
expected = 0
for i in range(len(vas)):
vas_chrom.append(vas[i].split('\t')[0])
vas_pos.append(int(vas[i].split('\t')[1])-1)
vas_ref.append(vas[i].split('\t')[3].rstrip('='))
vas_alt.append(vas[i].split('\t')[4].rstrip('='))
vas_af.append(float(vas[i].split('\t')[7].split(';')[1].split('=')[1]))
for i in range(len(cons)):
cons_chrom.append(cons[i].split('\t')[0])
cons_pos.append(int(cons[i].split('\t')[1])-1)
cons_ref.append(cons[i].split('\t')[3].rstrip('='))
cons_alt.append(cons[i].split('\t')[4].rstrip('='))
cons_af.append(float(cons[i].split('\t')[7].split(';')[1].split('=')[1]))
for i in range(len(cons_chrom)):
if cons_alt[i][0] == '-':
cons_temp = cons_ref[i]
cons_ref[i] = cons_ref[i] + cons_alt[i][1:]
cons_alt[i] = cons_temp
if cons_alt[i][0] == '+':
cons_alt[i] = cons_ref[i] + cons_alt[i][1:]
for i in range(len(vas_chrom)):
if vas_alt[i][0] == '-':
vas_temp = vas_ref[i]
vas_ref[i] = vas_ref[i] + vas_alt[i][1:]
vas_alt[i] = vas_temp
if vas_alt[i][0] == '+':
vas_alt[i] = vas_ref[i] + vas_alt[i][1:]
for i in range(len(cons_chrom)):
if (cons_pos[i] in reg or reg == []) and depth[cons_pos[i]] >= min_depth and float(cons_af[i]) >= min_freq:
if float(cons_af[i]) >= 0.5:
if cons_pos[i] in vas_pos:
vas_index = vas_pos.index(cons_pos[i])
if cons_alt[i] == vas_alt[vas_index] and float(vas_af[vas_index]) >= 0.5:
pass
#print([cons_pos[i], cons_alt[i], vas_alt[vas_index], "old"])
else:
expected += 1
exp.append([cons_pos[i], cons_alt[i], vas_alt[vas_index]])
else:
expected += 1
exp.append([cons_pos[i], cons_alt[i], "ref1"])
for i in range(len(vas_chrom)):
if (vas_pos[i] in reg or reg == []) and depth[vas_pos[i]] >= min_depth and float(vas_af[i]) >= min_freq:
if float(vas_af[i]) < 0.5:
if vas_pos[i] in cons_pos:
cons_index = cons_pos.index(vas_pos[i])
if vas_alt[i] == cons_alt[cons_index] and float(cons_af[cons_index]) >= 0.5:
common += 1
temp.append([vas_pos[i], vas_alt[i], cons_alt[cons_index]])
elif vas_alt[i] == seq[[x[0] for x in seq].index(vas_chrom[i])][1][vas_pos[i]:vas_pos[i]+len(vas_alt[i])] and vas_ref[i] != seq[[x[0] for x in seq].index(vas_chrom[i])][1][vas_pos[i]:vas_pos[i]+len(vas_ref[i])]:
common += 1
temp.append([vas_pos[i], vas_alt[i], "ref2"])
else:
if vas_pos[i] in cons_pos:
cons_index = cons_pos.index(vas_pos[i])
if vas_alt[i] != cons_alt[cons_index] and float(cons_af[cons_index]) >= 0.5:
expected += 1
exp.append([vas_pos[i], vas_alt[i], cons_alt[cons_index]])
else:
expected += 1
exp.append([vas_pos[i], vas_alt[i], "ref3"])
exp = sorted(exp, key=lambda i: i[0])
print(exp)
print (temp)
if "cov" in mode:
w = open(oud + oup + "_cov.tsv", 'a+')
if expected > 0:
w.write(con.split("/")[-1].split(".")[0] + "\t" + str(round(float(common)/float(expected), 2)) + "\n")
else:
w.write(con.split("/")[-1].split(".")[0] + "\t0.0\n")
w.close()
if "common" in mode:
w = open(oud + oup + "_common.tsv", 'a+')
w.write(con.split("/")[-1].split(".")[0] + "\t" + str(common) + "\n")
w.close()
if "expected" in mode:
w = open(oud + oup + "_expected.tsv", 'a+')
w.write(con.split("/")[-1].split(".")[0] + "\t" + str(expected) + "\n")
w.close()
if "raw" in mode:
w = open(oud + oup + "_raw.tsv", 'a+')
w.write(con.split("/")[-1].split(".")[0] + "\t" + str(common) + "//" + str(expected) + "\n")
w.close()
print (str(common) + "//" + str(expected))
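# Example invocation (file names are hypothetical; -r/-c/-v and a coverage bed
# are required by the argument handling above):
#   python compare_vcf_lofreq-0.0.1.py -r reference.fasta -c consensus.vcf \
#       -v minor_variants.vcf -b coverage.bed -R regions.bed -m raw -d 20 -f 0.01
# In "raw" mode this appends "<consensus-name>\t<common>//<expected>" to
# "<outdir><output>_raw.tsv" and prints the same counts to stdout.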
| 38.361809 | 224 | 0.493581 | 1,147 | 7,634 | 3.149956 | 0.119442 | 0.026571 | 0.027124 | 0.030446 | 0.485192 | 0.405757 | 0.348187 | 0.303349 | 0.254913 | 0.233047 | 0 | 0.017006 | 0.291328 | 7,634 | 198 | 225 | 38.555556 | 0.650832 | 0.024103 | 0 | 0.176136 | 0 | 0.005682 | 0.082092 | 0.006347 | 0 | 0 | 0 | 0 | 0 | 1 | 0.017045 | false | 0.005682 | 0.022727 | 0 | 0.045455 | 0.028409 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad2e1bf8e629e7e89ab2a4e3d87728d48a2022e | 3,372 | py | Python | vulture_whitelist/qt.py | RJ722/vulture-whitelist-generators | 4f208e5bb62dd3b73406eae2d15b0ffad01f7bc4 | [
"MIT"
] | null | null | null | vulture_whitelist/qt.py | RJ722/vulture-whitelist-generators | 4f208e5bb62dd3b73406eae2d15b0ffad01f7bc4 | [
"MIT"
] | 5 | 2018-07-15T11:15:24.000Z | 2018-08-13T06:09:14.000Z | vulture_whitelist/qt.py | RJ722/vulture-whitelist-generators | 4f208e5bb62dd3b73406eae2d15b0ffad01f7bc4 | [
"MIT"
] | null | null | null | import itertools
import os
import subprocess
import tempfile
from vulture_whitelist.utils import Creator, log
from lxml import etree
FEATURES = ['PyQt_Accessibility', 'PyQt_SessionManager', 'PyQt_SSL',
'PyQt_qreal_double', 'Py_v3', 'PyQt_PrintDialog', 'PyQt_Printer',
'PyQt_PrintPreviewWidget', 'PyQt_PrintPreviewDialog',
'PyQt_RawFont', 'PyQt_OpenGL', 'PyQt_Desktop_OpenGL',
'PyQt_NotBootstrapped', 'PyQt_Process', 'PyQt_MacOSXOnly']
PLATFORMS = ['WS_X11', 'WS_WIN', 'WS_MACX']
TIMELINE = ['Qt_5_0_0', 'Qt_5_0_1', 'Qt_5_0_2', 'Qt_5_1_0', 'Qt_5_1_1',
'Qt_5_2_0', 'Qt_5_2_1', 'Qt_5_3_0', 'Qt_5_3_1', 'Qt_5_3_2',
'Qt_5_4_0', 'Qt_5_4_1', 'Qt_5_4_2', 'Qt_5_5_0', 'Qt_5_5_1',
'Qt_5_6_0', 'Qt_5_6_1', 'Qt_5_6_2', 'Qt_5_6_3', 'Qt_5_6_4',
'Qt_5_6_5', 'Qt_5_6_6', 'Qt_5_6_7', 'Qt_5_6_8', 'Qt_5_6_9',
'Qt_5_7_0', 'Qt_5_7_1', 'Qt_5_8_0', 'Qt_5_8_1', 'Qt_5_9_0',
'Qt_5_9_1', 'Qt_5_9_2', 'Qt_5_9_3', 'Qt_5_9_99', 'Qt_5_10_0',
'Qt_5_10_1']
class QtWhitelistCreator(Creator):
"""
Takes in sip files and emits a whitelist.
"""
def _write_mod_whitelist(self, f, module, name_set):
f.write('# {}\n'.format(module))
for name in sorted(name_set):
f.write('{}.{}\n'.format(module, name))
f.write('\n')
def _prepare_sip_command(self, module, outdir, sip_executable):
for exclusive_tags in itertools.product(TIMELINE, PLATFORMS):
filename = '{}-{}.xml'.format(module, '-'.join(exclusive_tags))
outfile = os.path.join(outdir, filename)
cmdline = [sip_executable, '-m', outfile]
for tag in list(exclusive_tags) + FEATURES:
cmdline += ['-t', tag]
cmdline.append(
os.path.join(module, '{}mod.sip'.format(module)))
log(' {} -> {}'.format(', '.join(exclusive_tags), outfile))
yield cmdline
def create_xml(self, module, outdir, sip_executable):
log("Running sip for {}...".format(module))
for sipcmd in self._prepare_sip_command(
module, outdir, sip_executable):
subprocess.call(sipcmd)
def get_modules(self):
for filename in sorted(os.listdir()):
filepath = os.path.abspath(filename)
if os.path.isdir(filepath):
yield filename
def parse_xmls(self, xmldir):
for basename in sorted(os.listdir(xmldir)):
xmlfile = os.path.join(xmldir, basename)
with open(xmlfile, 'r') as f:
tree = etree.parse(f)
yield from tree.xpath('/Module/Class/Function[@virtual="1"]/@name')
def name_set(self, xmldir):
log("Parsing and merging XML files for {}\n".format(xmldir))
name_set = set()
for name in self.parse_xmls(xmldir):
name_set.add(name)
return name_set
def create_mod_whitelist(self, module, outfile):
with tempfile.TemporaryDirectory() as tmpdir:
self.create_xml(module, tmpdir, self.sip)
self._write_mod_whitelist(outfile, module, self.name_set(tmpdir))
def create(self):
with open(self.whitelist_name, 'w') as outfile:
for module in self.get_modules():
self.create_mod_whitelist(module, outfile)
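# Illustrative sketch (not part of the original module, never called): the XPath
# used by parse_xmls() selects the names of member functions that sip marked as
# virtual. Shown here on a tiny in-memory document:
def _xpath_demo():
    xml = (b'<Module><Class>'
           b'<Function virtual="1" name="paintEvent"/>'
           b'<Function name="show"/>'
           b'</Class></Module>')
    tree = etree.fromstring(xml)
    # only the function flagged virtual="1" is selected -> ['paintEvent']
    return tree.xpath('/Module/Class/Function[@virtual="1"]/@name')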
| 39.670588 | 79 | 0.606168 | 479 | 3,372 | 3.91858 | 0.254697 | 0.057539 | 0.023442 | 0.039957 | 0.058604 | 0.027704 | 0.027704 | 0 | 0 | 0 | 0 | 0.045798 | 0.255338 | 3,372 | 84 | 80 | 40.142857 | 0.701712 | 0.012159 | 0 | 0 | 0 | 0 | 0.20905 | 0.026546 | 0 | 0 | 0 | 0 | 0 | 1 | 0.119403 | false | 0 | 0.089552 | 0 | 0.238806 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad39afa57f47fcffb828ef8575dc78166d14d13 | 2,626 | py | Python | experiments/lh5/processing.py | rauscher1995/pygama | 7357e3fb0be7c6712010e4925d863b0f0f843c27 | [
"Apache-2.0"
] | null | null | null | experiments/lh5/processing.py | rauscher1995/pygama | 7357e3fb0be7c6712010e4925d863b0f0f843c27 | [
"Apache-2.0"
] | null | null | null | experiments/lh5/processing.py | rauscher1995/pygama | 7357e3fb0be7c6712010e4925d863b0f0f843c27 | [
"Apache-2.0"
] | 1 | 2021-12-18T14:43:33.000Z | 2021-12-18T14:43:33.000Z | #!/usr/bin/env python3
import os
import time
import h5py
import numpy as np
import pandas as pd
from pprint import pprint
import matplotlib.pyplot as plt
plt.style.use("../../pygama/clint.mpl")
from pygama import DataSet, read_lh5, get_lh5_header
import pygama.analysis.histograms as pgh
def main():
"""
this is the high-level part of the code, something that a user might
write (even on the interpreter) for processing with a specific config file.
"""
# process_data()
plot_data()
# plot_waveforms()
def process_data():
from pygama import DataSet
ds = DataSet(0, md="config.json")
ds.daq_to_raw(overwrite=True, test=False)
# ds.raw_to_dsp(....)
def plot_data():
"""
read the lh5 output.
"""
f_lh5 = "/Users/wisecg/Data/L200/tier1/t1_run0.lh5"
df = get_lh5_header(f_lh5)
# df = read_lh5(f_lh5)
# print(df)
exit()
# hf = h5py.File("/Users/wisecg/Data/L200/tier1/t1_run0.lh5")
# # 1. energy histogram
# wf_max = hf['/daqdata/wf_max'][...] # slice reads into memory
# wf_bl = hf['/daqdata/baseline'][...]
# wf_max = wf_max - wf_bl
# xlo, xhi, xpb = 0, 5000, 10
# hist, bins = pgh.get_hist(wf_max, range=(xlo, xhi), dx=xpb)
# plt.semilogy(bins, hist, ls='steps', c='b')
# plt.xlabel("Energy (uncal)", ha='right', x=1)
# plt.ylabel("Counts", ha='right', y=1)
# # plt.show()
# # exit()
# plt.cla()
# 2. energy vs time
# ts = hf['/daqdata/timestamp']
# plt.plot(ts, wf_max, '.b')
# plt.show()
# 3. waveforms
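# NOTE: the block below is dead code as written: exit() above stops execution
# first, and the 'hf' handle it uses is only created by the commented-out
# h5py.File(...) call, which would need to be re-enabled for this to run.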
nevt = hf['/daqdata/waveform/values/cumulative_length'].size
# create a waveform block compatible w/ pygama
# and yeah, i know, for loops are inefficient. i'll optimize when it matters
wfs = []
wfidx = hf["/daqdata/waveform/values/cumulative_length"] # where each wf starts
wfdata = hf["/daqdata/waveform/values/flattened_data"] # adc values
wfsel = np.arange(2000)
for iwf in wfsel:
ilo = wfidx[iwf]
ihi = wfidx[iwf+1] if iwf+1 < nevt else nevt
wfs.append(wfdata[ilo : ihi])
wfs = np.vstack(wfs)
print(wfs.shape) # wfs on each row. will work w/ pygama.
# plot waveforms, flip polarity for fun
for i in range(wfs.shape[0]):
wf = wfs[i,:]
plt.plot(np.arange(len(wf)), wf)
plt.xlabel("clock ticks", ha='right', x=1)
plt.ylabel("adc", ha='right', y=1)
plt.tight_layout()
plt.show()
# plt.savefig(f"testdata_evt{ievt}.png")
hf.close()
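# Illustrative sketch (not part of the original script, never called): the
# waveform block above is rebuilt from a jagged-array layout in which
# 'cumulative_length' gives the offset of each waveform inside the flat
# 'flattened_data' array. The same slicing on tiny synthetic arrays:
def _jagged_demo():
    cumulative_length = np.array([0, 3, 5])            # per-waveform offsets, as used above
    flattened_data = np.array([1, 2, 3, 9, 8, 7, 6, 5])
    nevt = cumulative_length.size
    wfs = []
    for iwf in range(nevt):
        ilo = cumulative_length[iwf]
        ihi = cumulative_length[iwf + 1] if iwf + 1 < nevt else len(flattened_data)
        wfs.append(flattened_data[ilo:ihi])
    return wfs                                          # [[1 2 3], [9 8], [7 6 5]]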
if __name__=="__main__":
main() | 26 | 83 | 0.602437 | 384 | 2,626 | 4.010417 | 0.481771 | 0.019481 | 0.033117 | 0.044805 | 0.132468 | 0.116883 | 0.042857 | 0.042857 | 0 | 0 | 0 | 0.023882 | 0.250571 | 2,626 | 101 | 84 | 26 | 0.758638 | 0.413557 | 0 | 0 | 0 | 0 | 0.155571 | 0.126359 | 0 | 0 | 0 | 0 | 0 | 1 | 0.073171 | false | 0 | 0.243902 | 0 | 0.317073 | 0.04878 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad4c9ee6d31f346e2f8cbf92e010330f68debf8 | 1,711 | py | Python | checkov/terraform/checks/resource/aws/CloudfrontTLS12.py | bosmak/checkov | 5598921bd9bbcdd1fd94319c58e976bd730c3a3c | [
"Apache-2.0"
] | null | null | null | checkov/terraform/checks/resource/aws/CloudfrontTLS12.py | bosmak/checkov | 5598921bd9bbcdd1fd94319c58e976bd730c3a3c | [
"Apache-2.0"
] | null | null | null | checkov/terraform/checks/resource/aws/CloudfrontTLS12.py | bosmak/checkov | 5598921bd9bbcdd1fd94319c58e976bd730c3a3c | [
"Apache-2.0"
] | null | null | null | from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck


class CloudFrontTLS12(BaseResourceValueCheck):
    def __init__(self):
        name = "Verify CloudFront Distribution Viewer Certificate is using TLS v1.2"
        id = "CKV_AWS_174"
        supported_resources = ["aws_cloudfront_distribution"]
        categories = [CheckCategories.ENCRYPTION]
        super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)

    def scan_resource_conf(self, conf):
        if "viewer_certificate" in conf.keys():
            # if cloudfront_default_certificate is true, the distribution could use less than TLS 1.2
            viewer_certificate = conf["viewer_certificate"][0]
            if 'cloudfront_default_certificate' in viewer_certificate:
                # not using the default certificate
                if viewer_certificate["cloudfront_default_certificate"] is not True:
                    # only these protocol versions guarantee a TLS 1.2 minimum
                    if "minimum_protocol_version" in viewer_certificate:
                        protocol = viewer_certificate["minimum_protocol_version"][0]
                        if protocol in ['TLSv1.2_2018', 'TLSv1.2_2019', 'TLSv1.2_2021']:
                            return CheckResult.PASSED
        # no certificate specified, so the default is used, which can be less than TLS 1.2
        return CheckResult.FAILED

    def get_inspected_key(self):
        return "viewer_certificate/[0]/minimum_protocol_version"

    def get_expected_values(self):
        return ['TLSv1.2_2018', 'TLSv1.2_2019', 'TLSv1.2_2021']
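# Illustrative sketch (not part of the original check, never called): the exact
# value wrapping depends on checkov's HCL parsing, but a hypothetical resource
# block pinned to TLS 1.2 exercises the PASSED branch above, while one without a
# minimum_protocol_version falls through to FAILED.
def _demo_scan():
    pinned = {"viewer_certificate": [{
        "cloudfront_default_certificate": [False],
        "minimum_protocol_version": ["TLSv1.2_2021"],
    }]}
    default_cert = {"viewer_certificate": [{
        "cloudfront_default_certificate": [True],
    }]}
    demo_check = CloudFrontTLS12()
    return demo_check.scan_resource_conf(pinned), demo_check.scan_resource_conf(default_cert)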
check = CloudFrontTLS12() | 46.243243 | 106 | 0.687317 | 194 | 1,711 | 5.804124 | 0.42268 | 0.135879 | 0.0746 | 0.053286 | 0.053286 | 0.053286 | 0.053286 | 0.053286 | 0.053286 | 0 | 0 | 0.039908 | 0.238457 | 1,711 | 37 | 107 | 46.243243 | 0.824252 | 0.122151 | 0 | 0 | 0 | 0 | 0.245661 | 0.121495 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0.041667 | 0.083333 | 0.083333 | 0.458333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aad4f9ee56b69b3a021bb4fa81409ece269af3dd | 21,435 | py | Python | trans_mri/data.py | ben0it8/trans-mri | ec273bb6c96c7f104659cc9f437d6e1e82f18e01 | [
"MIT"
] | 1 | 2020-02-29T11:01:24.000Z | 2020-02-29T11:01:24.000Z | trans_mri/data.py | ben0it8/trans-mri | ec273bb6c96c7f104659cc9f437d6e1e82f18e01 | [
"MIT"
] | null | null | null | trans_mri/data.py | ben0it8/trans-mri | ec273bb6c96c7f104659cc9f437d6e1e82f18e01 | [
"MIT"
] | null | null | null | import os
import logging
import numpy as np
import multiprocessing
from pathlib import Path
from tabulate import tabulate
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit, train_test_split
from tqdm import tqdm_notebook as tqdm
import pickle
import torch
from torch.utils.data import Dataset, DataLoader
from trans_mri.utils import *
from torchvision.transforms import Compose
logger = logging.getLogger(__name__)
default_transforms = Compose([ToTensor(),
IntensityRescale(masked=False, on_gpu=True)])
def balanced_subsample(y, size=None):
subsample = []
if size is None:
n_smp = y.value_counts().min()
else:
n_smp = int(size / len(y.value_counts().index))
for label in y.value_counts().index:
samples = y[y == label].index.values
index_range = range(samples.shape[0])
indexes = np.random.choice(index_range, size=n_smp, replace=False)
subsample += samples[indexes].tolist()
return subsample
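# Illustrative sketch (not part of the original module, never called):
# balanced_subsample() returns an equal number of row indices per class, using
# the minority-class count when no size is given.
def _balanced_subsample_demo():
    labels = pd.Series(["AD", "AD", "AD", "CN"])
    idx = balanced_subsample(labels)           # one randomly chosen index per class
    return labels[labels.index.isin(idx)]      # balanced two-row subset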
class MRIDataset(Dataset):
"""
PyTorch dataset that consists of MRI images and labels.
Args:
filenames (iterable of strings): The filenames to the MRI images.
labels (iterable): The labels for the images.
mask (array): If not None (default), images are masked by multiplying with this array.
transform: Any transformations to apply to the images.
"""
def __init__(self, filenames, labels, id2label, z_factor=None, mask=None, transform=None):
self.filenames = filenames
self.labels = torch.LongTensor(labels)
self.label_counts = dict(zip(*np.unique(labels, return_counts=True)))
self.class_weights = np.array(list(self.label_counts.values()))/len(labels)
self.mask = mask
self.transform = transform
self.id2label = id2label
self.z_factor = z_factor
# Required by torchsample.
self.num_inputs = 1
self.num_targets = 1
# Default values. Should be set via fit_normalization.
self.mean = 0
self.std = 1
self.shape = self.get_image_shape()
def __len__(self):
return len(self.filenames)
def __repr__(self):
return (f"MRIDataset - no. samples: {len(self)}; shape: {self.shape}; no. classes: {len(self.labels.unique())}")
def __getitem__(self, idx):
"""Return the image as a FloatTensor and its corresponding label."""
label = self.labels[idx]
struct_arr = load_nifti(
self.filenames[idx], mask=self.mask, z_factor=self.z_factor, dtype=np.float32)
# TODO: Try normalizing each image to mean 0 and std 1 here.
#struct_arr = (struct_arr - struct_arr.mean()) / (struct_arr.std() + 1e-10)
# prevent 0 division by adding small factor
if self.transform is not None:
struct_arr = self.transform(struct_arr)
else:
struct_arr = (struct_arr - self.mean) / (self.std + 1e-10)
struct_arr = torch.FloatTensor(struct_arr[None]) # add (empty) channel dimension
return struct_arr, label
def get_image_shape(self):
"""The shape of the MRI images."""
img = load_nifti(self.filenames[0], mask=None, z_factor=self.z_factor)
return img.shape
def fit_normalization(self, num_sample=None, show_progress=False):
"""
Calculate the voxel-wise mean and std across the dataset for normalization.
Args:
num_sample (int or None): If None (default), calculate the values across the complete dataset,
otherwise sample a number of images.
show_progress (bool): Show a progress bar during the calculation.
"""
if num_sample is None:
num_sample = len(self)
image_shape = self.get_image_shape()
all_struct_arr = np.zeros(
(num_sample, image_shape[0], image_shape[1], image_shape[2]))
sampled_filenames = np.random.choice(
self.filenames, num_sample, replace=False)
if show_progress:
sampled_filenames = tqdm(sampled_filenames)
for i, filename in enumerate(sampled_filenames):
all_struct_arr[i] = load_nifti(filename, mask=self.mask, z_factor=self.z_factor)
self.mean = all_struct_arr.mean(0)
self.std = all_struct_arr.std(0)
def get_raw_image(self, idx):
"""Return the raw image at index idx (i.e. not normalized, no color channel, no transform."""
return load_nifti(self.filenames[idx], mask=self.mask, z_factor=self.z_factor)
def get_image_filepath(df_row, source_dir=''):
"""Return the filepath of the image that is described in the row of the data table."""
# Current format for the image filepath is:
# <PTID>/<Visit (spaces removed)>/<PTID>_<Scan.Date (/ replaced by -)>_<Visit (spaces removed)>_<Image.ID>_<DX>_Warped.nii.gz
filedir = os.path.join(df_row['PTID'], df_row['Visit'].replace(' ', ''))
filename = '{}_{}_{}_{}_{}_Warped.nii.gz'.format(df_row['PTID'], df_row['Scan.Date'].replace(
'/', '-'), df_row['Visit'].replace(' ', ''), df_row['Image.ID'], df_row['DX'])
return os.path.join(source_dir, filedir, filename)
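# Illustrative example of the path convention implemented above, with a made-up
# table row and source_dir (the real values come from the study table):
#   row = {'PTID': '002_S_0295', 'Visit': 'ADNI Baseline', 'Scan.Date': '2006/05/01',
#          'Image.ID': 12345, 'DX': 'CN'}
#   get_image_filepath(row, '/data')
#   -> '/data/002_S_0295/ADNIBaseline/002_S_0295_2006-05-01_ADNIBaseline_12345_CN_Warped.nii.gz'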
class DataBunch():
DEFAULT_FILE = 'file_path'
DEFAULT_LABEL = 'DX'
DEFAULT_PTID = 'PTID'
CACHE_NAME = 'databunch.pkl'
def __init__(self, source_dir:str, path:str, table:str, image_dir:str=None, mask:str=None,
transforms:Compose=Compose([ToTensor(), IntensityRescale(masked=False, on_gpu=True)]),
labels_to_keep:list=None, get_file_path:callable=None, balance:bool=False, num_samples:int=None,
num_training_samples:int=None, z_factor:float=0.5, test_size:float=0.1, grouped:bool=False,
no_cache:bool=True, file_col='file_path', label_col='DX', ptid_col='PTID', random_state:int=42, **kwargs):
"""DataBunch class to built training and test MRIDatasets and DataLoaders from a single input csv file containing .nii file paths.
Upon initialization, test set is randomly picked based on arguments grouped,balanced and test_size.
Important methods:
- normalize: normalize dataset based on training data.
- build_dataloaders: re-batchify data and store iterators at `train_dl` and `test_dl`.
- print_stats: prints set and patient level statistics
- show_sample: show random processed training sample
# Arguments:
source_dir: Path to source_dir folder, where table and image_dir can be found.
path: Path where intermediary data will be stored (eg. cache).
image_dir: Image directory *relative* to source_dir, where the .nii files are.
table: CSV file path *relative* to source_dir containing samples. The tables *must*
contain file_col, label_col and ptid_col columns.
mask: Path to binary brain mask in .nii format. This will be resized with z_factor.
transforms: A PyTorch Compose container object, with the transformations to apply to samples. Defaults to
using ToTensor() and IntensityRescaling() into [0,1].
labels_to_keep: List of labels to keep in the datasets. Defaults to None (all labels).
get_file_path: A function mapping the rows of table to the respective file paths of the samples.
balance: Boolean switch for enforcing balanced classes.
grouped: Boolean switch to enforce grouped train/test splitting, i.e. ensuring that no train samples
are present in the test set.
test_size: Fraction of samples to pick for test set.
num_samples: Total no. of samples to consider from the table., defaults to None (all).
num_training_samples: No. of training samples to pick, defaults to None (all).
z_factor: Zoom factor to apply to each image.
no_cache: Prevents caching (caching is useful when later we normalize the DataBunch and load it back).
file_col: Column name in table identifying the path to the given sample's .nii file.
label_col: Column name in table identifying the path to the given sample's label.
ptid_col: Column name in table identifying the path to the given sample's patient ID.
random_state: Random state to enforce reproducibility for train/test splitting.
"""
self.set_column_ids(file_col, label_col, ptid_col)
if not os.path.isdir(source_dir):
raise RuntimeError(f"{source_dir} not existing!")
self.source_dir = Path(source_dir)
self.path = Path(path)
if not no_cache:
if os.path.exists(self.path/self.CACHE_NAME):
ans = str(input(f"Do you want to load cache from {self.path/self.CACHE_NAME}? [y/n]")).strip()
if ans == 'y':
try:
self.load()
self.loaded_cache=True
self.print_stats()
print(f"DataBunch initialized at {self.path}")
return
except EOFError:
logger.warning("Pickled DataBunch is corrupted at {}".format(self.path))
print(f"Cannot load {self.CACHE_NAME} because it is corrupted. Building Databunch..\n")
elif ans == 'n': pass
else:
raise RuntimeError(f"Invalid answer {ans}.")
self.loaded_cache=False
os.makedirs(path, exist_ok=True)
self.table = table
self.image_dir = self.source_dir/image_dir if image_dir is not None else None
self.z_factor = z_factor
self.mask = load_nifti(
str(mask), z_factor=z_factor) if mask is not None else None
self.random_state = random_state
df = pd.read_csv(self.source_dir/self.table, index_col=None)
print(f"Found {len(df)} images in {self.table}")
print(
f"Found {len(df[self.LABEL].unique())} labels: {df[self.LABEL].unique().tolist()}")
if balance:
subsample_idx = balanced_subsample(df[self.LABEL])
df = df[df.index.isin(subsample_idx)]
get_file_path = get_image_filepath if get_file_path is None else get_file_path
if self.FILE not in df.columns:
if get_file_path is not None and self.image_dir is not None and callable(get_file_path):
df[self.FILE] = df.apply(
lambda r: get_file_path(r, self.image_dir), axis=1)
else:
raise RuntimeError(f"If {self.FILE} column is not in {self.table},"
f"please pass a valid `get_file_path` function and an `image_dir`.")
len_before = len(df)
self.labels_to_keep = df[self.LABEL].unique().tolist() if labels_to_keep is None else labels_to_keep
df = df[df[self.LABEL].isin(self.labels_to_keep)]
print(
f"Dropped {len_before-len(df)} samples that were not in {self.labels_to_keep}")
self.df = df[[self.FILE, self.LABEL, self.PTID]].dropna()
print(
f"Final dataframe contains {len(self.df)} samples from {len(df[self.PTID].unique())} patients")
self.classes = self.df[self.LABEL].unique().tolist()[::-1]
self.label2id = {k: v for k, v in zip(
self.classes, np.arange(len(self.classes)))}
self.id2label = dict(zip(self.label2id.values(), self.label2id.keys()))
self.test_size = test_size
self.transforms = transforms
if test_size is not None:
self.build_datasets(test_size=test_size, transforms=transforms, num_samples=num_samples,
num_training_samples=num_training_samples, grouped=grouped)
self.print_stats()
print(f"DataBunch initialized at {self.path}")
def set_column_ids(self, file_col, label_col, ptid_col):
self.FILE = self.DEFAULT_FILE if file_col is None else file_col
self.LABEL = self.DEFAULT_LABEL if label_col is None else label_col
self.PTID = self.DEFAULT_PTID if ptid_col is None else ptid_col
logger.info(f"Using file column {self.FILE}; label column {self.LABEL} and patient_id column {self.PTID}")
def build_datasets(self, test_size:float= .1, transforms:list=None, num_samples=None, num_training_samples=None, random_state:int=None, grouped=False):
print("Building datasets")
print(
f"Patient-wise train/test splitting with test_size = {test_size}")
random_state = self.random_state if random_state is None else random_state
if num_samples is not None:
self.df = self.df.sample(n=num_samples)
logger.info(f"Sampling {num_training_samples} samples")
if grouped:
gss = GroupShuffleSplit(
n_splits=1, test_size=test_size, random_state=random_state)
trn, tst = next(
iter(gss.split(self.df, groups=self.df[self.PTID].tolist())))
df_trn, df_tst = self.df.iloc[trn, :], self.df.iloc[tst, :]
else:
df_trn, df_tst = train_test_split(self.df, test_size=test_size, stratify=self.df[self.LABEL], shuffle=True)
self.df_trn, self.df_tst = df_trn, df_tst
if num_training_samples is not None:
self.df_trn = self.df_trn.sample(n=num_training_samples)
logger.info(f"Sampling {num_training_samples} training samples")
self.train_ds = MRIDataset(self.df_trn[self.FILE].tolist(),
[self.label2id[l]
for l in df_trn[self.LABEL]],
id2label=self.id2label,
z_factor=self.z_factor,
transform=transforms,
mask=self.mask)
self.test_ds = MRIDataset(df_tst[self.FILE].tolist(),
[self.label2id[l]
for l in df_tst[self.LABEL]],
id2label=self.id2label,
z_factor=self.z_factor,
transform=transforms,
mask=self.mask)
self.shape = self.train_ds.shape
self.train_dl, self.test_dl = None, None
def normalize(self, use_samples: int = None):
"""Normalizes the dataset with mean and std calculated on the training set"""
if not hasattr(self, "train_ds"):
raise RuntimeError(f"Attribute `train_ds` not found.")
print("Normalizing datasets")
if use_samples is None:
use_samples = len(self.train_ds)
else:
use_samples = len(self.train_ds) if use_samples > len(
self.train_ds) else use_samples
print(
f"Calculating mean and std for normalization based on {use_samples} train samples:")
self.train_ds.fit_normalization(
num_sample=use_samples, show_progress=True)
self.test_ds.mean, self.test_ds.std = self.train_ds.mean, self.train_ds.std
self.mean, self.std = self.train_ds.mean, self.train_ds.std
self.test_ds.mean, self.test_ds.std = self.mean, self.std
self.train_ds.transform = None
self.test_ds.transform = None
def build_dataloaders(self, bs:int=8, normalize:bool=False, use_samples:int=None, num_workers:int=None):
"""Build DataLoaders with bs, optionally normalizing the datasets too, or performing downsampling."""
print("Building dataloaders")
if normalize:
if self.loaded_cache:
print("Already normalized -- using attributes `mean` and `std`.")
else: self.normalize(use_samples=use_samples)
else: logger.warning("Dataset not normalized, performance might be significantly hurt!")
print(
f"No. training/test samples: {len(self.train_ds)}/{len(self.test_ds)}")
if num_workers is None: num_workers = multiprocessing.cpu_count()
pin_memory = torch.cuda.is_available()
self.train_dl = DataLoader(self.train_ds, batch_size=bs, shuffle=True,
num_workers=num_workers, pin_memory=pin_memory)
self.test_dl = DataLoader(self.test_ds, batch_size=bs, shuffle=True,
num_workers=num_workers, pin_memory=pin_memory)
def print_stats(self):
"""Print statistics about the patients and images."""
headers = []
headers.append('IMAGES')
headers += [cls for cls in self.classes]
headers.append('PATIENTS')
headers += [cls for cls in self.classes]
def get_stats(df):
image_count, patient_count = [
len(df)], [len(df[self.PTID].unique())]
image_count += [len(df[df[self.LABEL] == cls])
for cls in self.classes]
patient_count += [len(df[df[self.LABEL] == cls]
[self.PTID].unique()) for cls in self.classes]
return image_count+patient_count
stats = [['Train'] + get_stats(self.df_trn),
['Test'] + get_stats(self.df_tst),
['Total'] + get_stats(self.df)]
print(tabulate(stats, headers=headers))
print()
print(f"Data shape: {self.train_ds.shape}")
if self.z_factor is not None:
print(f"NOTE: data have been downsized by a factor of {self.z_factor}")
def show_sample(self, **kwargs):
"""Shows a random training sample after zooming, masking and tranformations."""
if self.train_ds is None:
raise RuntimeError(
f"`train_ds` not found, please call `build` method first.")
img, lbl = self.train_ds[np.random.randint(0, len(self.train_ds))]
print(f"label={self.id2label[lbl.item()]}")
f = show_brain(img[0].numpy())
plt.show()
def save(self):
"""Cache the entire DataBunch object to `path`."""
pickle.dump(self.__dict__,
open(self.path/self.CACHE_NAME, 'wb'),
protocol=pickle.HIGHEST_PROTOCOL)
print(f"Saved DataBunch to {self.path/self.CACHE_NAME}")
def load(self):
"""Load cached DataBunch object from `path`."""
tmp_dict = pickle.load(open(self.path/self.CACHE_NAME, 'rb'))
self.__dict__.update(tmp_dict)
print(f"Cached DataBunch has been successfully loaded.")
def get_idss(path, bs=8, test_size=0.15, z_factor=None, num_training_samples=None, labels_to_keep=["AD", "CVD"],
transforms=default_transforms, random_state=None, balance=False, **kwargs):
db = DataBunch(source_dir="/analysis/ritter/data/iDSS",
table="tables/mri_complete_4_class_minimal.csv",
path=path,
mask=f"/analysis/ritter/data/PPMI/Mask/mask_T1.nii", # same mask as T1 ppmi scans
labels_to_keep=labels_to_keep,
transforms=transforms, random_state=random_state, balance=balance,
test_size=test_size, z_factor=z_factor, num_training_samples=num_training_samples, **kwargs)
db.build_dataloaders(bs=bs)
return db
def get_adni(path, bs=8, test_size=0.15, z_factor=0.56, num_training_samples=None,labels_to_keep=["Dementia", "CN"],
transforms=default_transforms, grouped=True, balance=False, **kwargs):
db = DataBunch(source_dir="/analysis/ritter/data/ADNI",
image_dir="ADNI_2Yr_15T_quick_preprocessed",
table="ADNI_tables/customized/DxByImgClean_CompleteAnnual2YearVisitList_1_5T.csv",
path=path,
mask="/analysis/ritter/data/ADNI/binary_brain_mask.nii.gz",
labels_to_keep=labels_to_keep,
transforms=transforms, random_state=1337, grouped=grouped, balance=balance,
test_size=test_size, num_training_samples=num_training_samples, z_factor=z_factor,**kwargs)
db.build_dataloaders(bs=bs)
return db
def get_ppmi(path, bs=8, test_size=0.15, mri_type='T2', z_factor=None, num_training_samples=None, labels_to_keep=["PD", "HC"],
transforms=default_transforms, random_state=None, balance=False, **kwargs):
mri_type = mri_type.upper()
assert mri_type in ['T1', 'T2'], "Argument mri_type has to be one of T1 or T2"
if mri_type=='T2': z_factor=0.87
db = DataBunch(source_dir="/analysis/ritter/data/PPMI",
table=f'tables/PPMI_{mri_type}.csv',
path=path,
mask=f"/analysis/ritter/data/PPMI/Mask/mask_{mri_type}.nii",
labels_to_keep=labels_to_keep, random_state=random_state, balance=balance,
test_size=test_size, num_training_samples=num_training_samples, z_factor=z_factor,
transforms=transforms, **kwargs)
db.build_dataloaders(bs=bs)
return db
| 47.213656 | 155 | 0.620527 | 2,818 | 21,435 | 4.535486 | 0.166075 | 0.018621 | 0.02535 | 0.010015 | 0.248259 | 0.209921 | 0.177607 | 0.162272 | 0.142242 | 0.118614 | 0 | 0.005697 | 0.279403 | 21,435 | 453 | 156 | 47.317881 | 0.821766 | 0.190436 | 0 | 0.147541 | 0 | 0.016393 | 0.134053 | 0.044468 | 0 | 0 | 0 | 0 | 0.003279 | 1 | 0.072131 | false | 0.006557 | 0.045902 | 0.006557 | 0.177049 | 0.078689 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aada617c7604a45409e99ba74ec6160732d2f739 | 879 | py | Python | src/scripts/plotting.py | xiedidan/sparse-coding | 36fb106217382dedbbea9234b10e02b0505d9b50 | [
"MIT"
] | 18 | 2019-06-06T03:56:57.000Z | 2022-02-06T11:59:34.000Z | src/scripts/plotting.py | xiedidan/sparse-coding | 36fb106217382dedbbea9234b10e02b0505d9b50 | [
"MIT"
] | 1 | 2020-02-20T06:51:33.000Z | 2020-08-16T05:14:23.000Z | src/scripts/plotting.py | xiedidan/sparse-coding | 36fb106217382dedbbea9234b10e02b0505d9b50 | [
"MIT"
] | 7 | 2019-06-07T03:46:16.000Z | 2022-02-09T06:34:22.000Z | import numpy as np
import matplotlib.pyplot as plt


def plot_rf(rf, out_dim, M):
    rf = rf.reshape(out_dim, -1)
    # normalize each receptive field so its largest absolute value is 1
    rf = rf.T / np.abs(rf).max(axis=1)
    rf = rf.T
    rf = rf.reshape(out_dim, M, M)
    # plotting: tile the receptive fields on an n x n grid
    n = int(np.ceil(np.sqrt(rf.shape[0])))
    fig, axes = plt.subplots(nrows=n, ncols=n, sharex=True, sharey=True)
    fig.set_size_inches(10, 10)
    for i in range(rf.shape[0]):
        ax = axes[i // n][i % n]
        ax.imshow(rf[i], cmap='gray', vmin=-1, vmax=1)
        ax.set_xticks([])
        ax.set_yticks([])
        ax.set_aspect('equal')
    # fill the unused grid cells with blank (all -1) tiles
    for j in range(rf.shape[0], n * n):
        ax = axes[j // n][j % n]
        ax.imshow(np.ones_like(rf[0]) * -1, cmap='gray', vmin=-1, vmax=1)
        ax.set_xticks([])
        ax.set_yticks([])
        ax.set_aspect('equal')
    fig.subplots_adjust(wspace=0.0, hspace=0.0)
    return fig
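# Illustrative usage (not part of the original module); the input data and the
# output file name are hypothetical:
#   rf = np.random.randn(36, 8 * 8)      # 36 random 8x8 "receptive fields"
#   fig = plot_rf(rf, 36, 8)
#   fig.savefig("rf_grid.png")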
| 30.310345 | 73 | 0.562002 | 153 | 879 | 3.137255 | 0.392157 | 0.0625 | 0.05 | 0.058333 | 0.366667 | 0.233333 | 0.233333 | 0.233333 | 0.233333 | 0.233333 | 0 | 0.029008 | 0.254835 | 879 | 28 | 74 | 31.392857 | 0.703817 | 0.020478 | 0 | 0.25 | 0 | 0 | 0.020979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041667 | false | 0 | 0.083333 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aaddb70ec56acdaa2a3c5a37735b0ab6621b5143 | 19,883 | py | Python | dev_code/demo_color_seg.py | Computational-Plant-Science/plant_image_analysis | 321eaae9531cd5f8eaebf3ee6c68b99eb53e420c | [
"BSD-3-Clause"
] | null | null | null | dev_code/demo_color_seg.py | Computational-Plant-Science/plant_image_analysis | 321eaae9531cd5f8eaebf3ee6c68b99eb53e420c | [
"BSD-3-Clause"
] | null | null | null | dev_code/demo_color_seg.py | Computational-Plant-Science/plant_image_analysis | 321eaae9531cd5f8eaebf3ee6c68b99eb53e420c | [
"BSD-3-Clause"
] | null | null | null | '''
Name: color_segmentation.py
Version: 1.0
Summary: K-means color clustering based segmentation. This is achieved
by converting the source image to a desired color space and
running K-means clustering on only the desired channels,
with the pixels being grouped into a desired number
of clusters.
Author: suxing liu
Author-email: suxingliu@gmail.com
Created: 2018-05-29
USAGE:
python3 demo_color_seg.py -p ~/plant-image-analysis/test/ -ft JPG
'''
# import the necessary packages
import os
import glob
import argparse
from sklearn.cluster import KMeans
from skimage.feature import peak_local_max
from skimage.morphology import watershed, medial_axis
from skimage import img_as_float, img_as_ubyte, img_as_bool, img_as_int
from skimage import measure
from skimage.segmentation import clear_border
from scipy.spatial import distance as dist
from scipy import optimize
from scipy import ndimage
import math
import numpy as np
import argparse
import cv2
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import warnings
warnings.filterwarnings("ignore")
import concurrent.futures
import multiprocessing
from multiprocessing import Pool
from contextlib import closing
MBFACTOR = float(1<<20)
# generate foloder to store the output results
def mkdir(path):
# import module
import os
# remove space at the beginning
path=path.strip()
# remove slash at the end
path=path.rstrip("\\")
# path exist? # True # False
isExists=os.path.exists(path)
# process
if not isExists:
# construct the path and folder
#print path + ' folder constructed!'
# make dir
os.makedirs(path)
return True
else:
# if exists, return
#print path+' path exists!'
return False
def color_cluster_seg(image, args_colorspace, args_channels, args_num_clusters, min_size):
# Change image color space, if necessary.
colorSpace = args_colorspace.lower()
if colorSpace == 'hsv':
image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
elif colorSpace == 'ycrcb' or colorSpace == 'ycc':
image = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
elif colorSpace == 'lab':
image = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
else:
colorSpace = 'bgr' # set for file naming purposes
# Keep only the selected channels for K-means clustering.
if args_channels != 'all':
channels = cv2.split(image)
channelIndices = []
for char in args_channels:
channelIndices.append(int(char))
image = image[:,:,channelIndices]
if len(image.shape) == 2:
image.reshape(image.shape[0], image.shape[1], 1)
(width, height, n_channel) = image.shape
#print("image shape: \n")
#print(width, height, n_channel)
# Flatten the 2D image array into an MxN feature vector, where M is the number of pixels and N is the dimension (number of channels).
reshaped = image.reshape(image.shape[0] * image.shape[1], image.shape[2])
# Perform K-means clustering.
if args_num_clusters < 2:
print('Warning: num-clusters < 2 invalid. Using num-clusters = 2')
#define number of cluster
numClusters = max(2, args_num_clusters)
# clustering method
kmeans = KMeans(n_clusters = numClusters, n_init = 40, max_iter = 500).fit(reshaped)
# get lables
pred_label = kmeans.labels_
# Reshape result back into a 2D array, where each element represents the corresponding pixel's cluster index (0 to K - 1).
clustering = np.reshape(np.array(pred_label, dtype=np.uint8), (image.shape[0], image.shape[1]))
# Sort the cluster labels in order of the frequency with which they occur.
sortedLabels = sorted([n for n in range(numClusters)],key = lambda x: -np.sum(clustering == x))
# Initialize K-means grayscale image; set pixel colors based on clustering.
kmeansImage = np.zeros(image.shape[:2], dtype=np.uint8)
for i, label in enumerate(sortedLabels):
kmeansImage[clustering == label] = int(255 / (numClusters - 1)) * i
ret, thresh = cv2.threshold(kmeansImage,0,255,cv2.THRESH_BINARY | cv2.THRESH_OTSU)
thresh_cleaned = clear_border(thresh)
if np.count_nonzero(thresh) > 0:
thresh_cleaned_bw = clear_border(thresh)
else:
thresh_cleaned_bw = thresh
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(thresh_cleaned, connectivity = 8)
# stats[0], centroids[0] are for the background label. ignore
# cv2.CC_STAT_LEFT, cv2.CC_STAT_TOP, cv2.CC_STAT_WIDTH, cv2.CC_STAT_HEIGHT
sizes = stats[1:, cv2.CC_STAT_AREA]
Coord_left = stats[1:, cv2.CC_STAT_LEFT]
Coord_top = stats[1:, cv2.CC_STAT_TOP]
Coord_width = stats[1:, cv2.CC_STAT_WIDTH]
Coord_height = stats[1:, cv2.CC_STAT_HEIGHT]
Coord_centroids = centroids
#print("Coord_centroids {}\n".format(centroids[1][1]))
#print("[width, height] {} {}\n".format(width, height))
nb_components = nb_components - 1
#min_size = 70
max_size = width*height*0.1
img_thresh = np.zeros([width, height], dtype=np.uint8)
#for every component in the image, keep it only if it's above min_size
for i in range(0, nb_components):
'''
#print("{} nb_components found".format(i))
if (sizes[i] >= min_size) and (Coord_left[i] > 1) and (Coord_top[i] > 1) and (Coord_width[i] - Coord_left[i] > 0) and (Coord_height[i] - Coord_top[i] > 0) and (centroids[i][0] - width*0.5 < 10) and ((centroids[i][1] - height*0.5 < 10)) and ((sizes[i] <= max_size)):
img_thresh[output == i + 1] = 255
print("Foreground center found ")
elif ((Coord_width[i] - Coord_left[i])*0.5 - width < 15) and (centroids[i][0] - width*0.5 < 15) and (centroids[i][1] - height*0.5 < 15) and ((sizes[i] <= max_size)):
imax = max(enumerate(sizes), key=(lambda x: x[1]))[0] + 1
img_thresh[output == imax] = 255
print("Foreground max found ")
'''
if (sizes[i] >= min_size):
img_thresh[output == i + 1] = 255
#from skimage import img_as_ubyte
#img_thresh = img_as_ubyte(img_thresh)
#print("img_thresh.dtype")
#print(img_thresh.dtype)
#return img_thresh
return img_thresh
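# Illustrative sketch (not part of the original script, never called):
# color_cluster_seg() clusters the chosen colour channels with K-means and keeps
# connected components larger than min_size. A tiny synthetic example with a
# bright "plant" disc on a dark background:
def _color_seg_demo():
    img = np.zeros((120, 120, 3), dtype=np.uint8)
    img[:] = (40, 40, 40)                                # dark background
    cv2.circle(img, (60, 60), 30, (30, 200, 30), -1)     # bright green disc
    # cluster on the a/b channels of Lab space into 2 clusters
    mask = color_cluster_seg(img, 'lab', '12', 2, min_size=100)
    return mask                                          # binary image, disc region = 255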
'''
def medial_axis_image(thresh):
#convert an image from OpenCV to skimage
thresh_sk = img_as_float(thresh)
image_bw = img_as_bool((thresh_sk))
image_medial_axis = medial_axis(image_bw)
return image_medial_axis
'''
class clockwise_angle_and_distance():
'''
A class to tell if point is clockwise from origin or not.
This helps if one wants to use sorted() on a list of points.
Parameters
----------
point : ndarray or list, like [x, y]. The point "to where" we g0
self.origin : ndarray or list, like [x, y]. The center around which we go
refvec : ndarray or list, like [x, y]. The direction of reference
use:
instantiate with an origin, then call the instance during sort
reference:
https://stackoverflow.com/questions/41855695/sorting-list-of-two-dimensional-coordinates-by-clockwise-angle-using-python
Returns
-------
angle
distance
'''
def __init__(self, origin):
self.origin = origin
def __call__(self, point, refvec = [0, 1]):
if self.origin is None:
raise NameError("clockwise sorting needs an origin. Please set origin.")
# Vector between point and the origin: v = p - o
vector = [point[0]-self.origin[0], point[1]-self.origin[1]]
# Length of vector: ||v||
lenvector = np.linalg.norm(vector)
# If length is zero there is no angle
if lenvector == 0:
return -math.pi, 0
# Normalize vector: v/||v||
normalized = [vector[0]/lenvector, vector[1]/lenvector]
dotprod = normalized[0]*refvec[0] + normalized[1]*refvec[1] # x1*x2 + y1*y2
diffprod = refvec[1]*normalized[0] - refvec[0]*normalized[1] # x1*y2 - y1*x2
angle = math.atan2(diffprod, dotprod)
# Negative angles represent counter-clockwise angles so we need to
# subtract them from 2*pi (360 degrees)
if angle < 0:
return 2*math.pi+angle, lenvector
# I return first the angle because that's the primary sorting criterium
# but if two vectors have the same angle then the shorter distance
# should come first.
return angle, lenvector
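# Illustrative sketch (not part of the original script, never called): sorting
# the corners of a rectangle around their centroid with the class above.
def _clockwise_sort_demo():
    pts = [[0, 0], [4, 0], [4, 2], [0, 2]]
    center = np.array(pts).mean(axis=0)                  # (2.0, 1.0)
    return sorted(pts, key=clockwise_angle_and_distance(center))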
# Detect stickers in the image
def sticker_detect(img_ori, save_path):
'''
image_file_name = Path(image_file).name
abs_path = os.path.abspath(image_file)
filename, file_extension = os.path.splitext(abs_path)
base_name = os.path.splitext(os.path.basename(filename))[0]
print("Processing image : {0}\n".format(str(image_file)))
# save folder construction
mkpath = os.path.dirname(abs_path) +'/cropped'
mkdir(mkpath)
save_path = mkpath + '/'
print ("results_folder: " + save_path)
'''
# load the image, clone it for output, and then convert it to grayscale
#img_ori = cv2.imread(image_file)
img_rgb = img_ori.copy()
# Convert it to grayscale
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
# Store width and height of template in w and h
w, h = template.shape[::-1]
# Perform match operations.
res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
#(minVal, maxVal, minLoc, maxLoc) = cv2.minMaxLoc(res)
# Specify a threshold
threshold = 0.8
# Store the coordinates of matched area in a numpy array
loc = np.where( res >= threshold)
if len(loc):
(y,x) = np.unravel_index(res.argmax(), res.shape)
(min_val, max_val, min_loc, max_loc) = cv2.minMaxLoc(res)
#print(y,x)
print(min_val, max_val, min_loc, max_loc)
(startX, startY) = max_loc
endX = startX + template.shape[1]
endY = startY + template.shape[0]
# Draw a rectangle around the matched region.
for pt in zip(*loc[::-1]):
sticker_overlay = cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,255,0), 1)
sticker_crop_img = img_rgb[startY:endY, startX:endX]
return sticker_crop_img, sticker_overlay
def comp_external_contour(orig, thresh, save_path):
#find contours and get the external one
contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
img_height, img_width, img_channels = orig.shape
index = 1
print("contour length {}".format(len(contours)))
list_of_pts = []
if len(contours) > 1:
'''
for ctr in contours:
list_of_pts += [pt[0] for pt in ctr]
center_pt = np.array(list_of_pts).mean(axis = 0) # get origin
clock_ang_dist = clockwise_angle_and_distance(center_pt) # set origin
list_of_pts = sorted(list_of_pts, key=clock_ang_dist) # use to sort
contours_joined = np.array(list_of_pts).reshape((-1,1,2)).astype(np.int32)
'''
kernel = np.ones((4,4), np.uint8)
dilation = cv2.dilate(thresh.copy(), kernel, iterations = 1)
closing = cv2.morphologyEx(dilation, cv2.MORPH_CLOSE, kernel)
trait_img = closing
#trait_img = cv2.drawContours(thresh, contours_joined, -1, (0,255,255), -1)
#x, y, w, h = cv2.boundingRect(contours_joined)
#trait_img = cv2.rectangle(thresh, (x, y), (x+w, y+h), (255, 255, 0), 3)
contours, hier = cv2.findContours(trait_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("contour length {}".format(len(contours)))
for c in contours:
#get the bounding rect
x, y, w, h = cv2.boundingRect(c)
#if w>img_width*0.05 and h>img_height*0.05:
if w>0 and h>0:
offset_w = int(w*0.05)
offset_h = int(h*0.05)
# draw a green rectangle to visualize the bounding rect
roi = orig[y-offset_h : y+h+offset_h, x-offset_w : x+w+offset_w]
print("ROI {} detected ...".format(index))
result_file = (save_path + str(format(index, "02")) + '.' + ext)
#print(result_file)
cv2.imwrite(result_file, roi)
trait_img = cv2.rectangle(orig, (x, y), (x+w, y+h), (255, 255, 0), 3)
#trait_img = cv2.putText(orig, "#{}".format(index), (int(x), int(y)), cv2.FONT_HERSHEY_SIMPLEX, 3.0, (255, 0, 255), 10)
index+= 1
return trait_img
def segmentation(image_file):
abs_path = os.path.abspath(image_file)
filename, file_extension = os.path.splitext(image_file)
file_size = os.path.getsize(image_file)/MBFACTOR
print("Segmenting image : {0} \n".format(str(filename)))
# load original image
image = cv2.imread(image_file)
img_height, img_width, img_channels = image.shape
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# make the folder to store the results
#current_path = abs_path + '/'
base_name = os.path.splitext(os.path.basename(filename))[0]
# save folder construction
mkpath = os.path.dirname(abs_path) +'/' + base_name
mkdir(mkpath)
save_path = mkpath + '/'
mkpath_sticker = os.path.dirname(abs_path) +'/' + base_name + '/sticker'
mkdir(mkpath_sticker)
save_path_sticker = mkpath_sticker + '/'
print("results_folder: {0}\n".format(str(save_path)))
if (file_size > 5.0):
print("It will take some time due to large file size {0} MB".format(str(int(file_size))))
else:
print("Segmenting plant image into blocks... ")
#make backup image
orig = image.copy()
'''
#color clustering based plant object segmentation
thresh = color_cluster_seg(orig, args_colorspace, args_channels, args_num_clusters, min_size = 100)
#result_mask = save_path + 'mask.' + ext
#cv2.imwrite(result_mask, thresh)
#find external contour and segment image into small ROI based on each plant
trait_img = comp_external_contour(image.copy(), thresh, save_path)
result_file = abs_path + '_label.' + ext
cv2.imwrite(result_file, trait_img)
'''
(sticker_crop_img, sticker_overlay) = sticker_detect(image.copy(), save_path)
# save segmentation result
result_file = (save_path_sticker + base_name + '_sticker_overlay.' + args['filetype'])
print(result_file)
cv2.imwrite(result_file, sticker_overlay)
# save segmentation result
result_file = (save_path_sticker + base_name + '_sticker_match.' + args['filetype'])
#print(result_file)
cv2.imwrite(result_file, sticker_crop_img)
thresh_sticker = color_cluster_seg(sticker_crop_img.copy(), args_colorspace, args_channels, 8, min_size = 10)
trait_img_sticker = comp_external_contour(sticker_crop_img.copy(), thresh_sticker, save_path_sticker)
result_file_sticker = save_path_sticker + '_label.' + ext
cv2.imwrite(result_file_sticker, trait_img_sticker)
#number of rows
nRows = 4
# Number of columns
mCols = 8
# Dimensions of the image
sizeX = img_width
sizeY = img_height
#print(img.shape)
for i in range(0, nRows):
for j in range(0, mCols):
roi = orig[int(i*sizeY/nRows):int(i*sizeY/nRows) + int(sizeY/nRows),int(j*sizeX/mCols):int(j*sizeX/mCols) + int(sizeX/mCols)]
result_file = (save_path + str(i+1) + str(j+1) + '.' + ext)
cv2.imwrite(result_file, roi)
#return thresh
#trait_img
if __name__ == '__main__':
ap = argparse.ArgumentParser()
#ap.add_argument('-i', '--image', required = True, help = 'Path to image file')
ap.add_argument("-p", "--path", required = True, help="path to image file")
ap.add_argument("-ft", "--filetype", required=True, help="Image filetype")
ap.add_argument('-s', '--color-space', type =str, default ='lab', help='Color space to use: BGR (default), HSV, Lab, YCrCb (YCC)')
ap.add_argument('-c', '--channels', type = str, default='1', help='Channel indices to use for clustering, where 0 is the first channel,'
+ ' 1 is the second channel, etc. E.g., if BGR color space is used, "02" '
+ 'selects channels B and R. (default "all")')
ap.add_argument('-n', '--num-clusters', type = int, default = 2, help = 'Number of clusters for K-means clustering (default 3, min 2).')
args = vars(ap.parse_args())
# setting path to model file
file_path = args["path"]
ext = args['filetype']
args_colorspace = args['color_space']
args_channels = args['channels']
args_num_clusters = args['num_clusters']
# acquire image file list
filetype = '*.' + ext
image_file_path = file_path + filetype
imgList = sorted(glob.glob(image_file_path))
global template
# local path needed!
template_path = "/home/suxing/plant-image-analysis/marker_template/sticker_template.jpg"
# Read the template
template = cv2.imread(template_path, 0)
print("template was found")
print((imgList))
#current_img = imgList[0]
#(thresh, trait_img) = segmentation(current_img)
# get cpu number for parallel processing
#agents = psutil.cpu_count()
agents = multiprocessing.cpu_count()
print("Using {0} cores to perform parallel processing... \n".format(int(agents)))
# Create a pool of processes. By default, one is created for each CPU in the machine.
# extract the bouding box for each image in file list
with closing(Pool(processes = agents)) as pool:
result = pool.map(segmentation, imgList)
pool.terminate()
'''
#loop execute
for image in imgList:
(thresh) = segmentation(image)
'''
#color clustering based plant object segmentation
#thresh = color_cluster_seg(orig, args_colorspace, args_channels, args_num_clusters)
# save segmentation result
#result_file = (save_path + filename + '_seg' + file_extension)
#print(filename)
#cv2.imwrite(result_file, thresh)
#find external contour
#trait_img = comp_external_contour(image.copy(),thresh, file_path)
#save segmentation result
#result_file = (save_path + filename + '_excontour' + file_extension)
#cv2.imwrite(result_file, trait_img)
#accquire medial axis of segmentation mask
#image_medial_axis = medial_axis_image(thresh)
# save medial axis result
#result_file = (save_path + filename + '_medial_axis' + file_extension)
#cv2.imwrite(result_file, img_as_ubyte(image_medial_axis))
| 30.263318 | 273 | 0.617663 | 2,609 | 19,883 | 4.549253 | 0.210042 | 0.017693 | 0.013481 | 0.015166 | 0.251327 | 0.190665 | 0.143399 | 0.105569 | 0.079956 | 0.067234 | 0 | 0.02259 | 0.274204 | 19,883 | 656 | 274 | 30.309451 | 0.799875 | 0.290952 | 0 | 0.056338 | 0 | 0.004695 | 0.084093 | 0.005964 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032864 | false | 0 | 0.112676 | 0 | 0.187793 | 0.061033 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aadf3c9bf3bcd278727bd41463238b564bd2bd23 | 1,524 | py | Python | distribution.py | clee088/Character-Distribution | 2a0748191c43f5aeffdbf5ec1188839f31ca22a4 | [
"MIT"
] | null | null | null | distribution.py | clee088/Character-Distribution | 2a0748191c43f5aeffdbf5ec1188839f31ca22a4 | [
"MIT"
] | null | null | null | distribution.py | clee088/Character-Distribution | 2a0748191c43f5aeffdbf5ec1188839f31ca22a4 | [
"MIT"
] | null | null | null | """
distribution.py
Author: Christopher Lee
Credit: https://developers.google.com/edu/python/sorting
Assignment:
Write and submit a Python program (distribution.py) that computes and displays
the distribution of characters in a given sample of text.
Output of your program should look like this:
Please enter a string of text (the bigger the better): The rain in Spain stays mainly in the plain.
The distribution of characters in "The rain in Spain stays mainly in the plain." is:
iiiiii
nnnnnn
aaaaa
sss
ttt
ee
hh
ll
pp
yy
m
r
Notice about this example:
* The text: 'The rain ... plain' is provided by the user as input to your program.
* Uppercase characters are converted to lowercase
* Spaces and punctuation marks are ignored completely.
* Characters that are more common appear first in the list.
* Where the same number of characters occur, the lines are ordered alphabetically.
For example, in the printout above, the letters e, h, l, p and y both occur twice
in the text and they are listed in the output in alphabetical order.
* Letters that do not occur in the text are not listed in the output at all.
"""
import string
text = str(input("Please enter a string of text (the bigger the better): "))
print('The distribution of characters in ''"' + text + '" is:')
text = text.lower()
alpha = list(string.ascii_lowercase)
newtext = []
for l in alpha:
if text.count(l) != 0:
newtext.append(l * text.count(l))
p = (sorted(newtext, key=len, reverse = True))
for i in p:
print(i) | 28.222222 | 99 | 0.734252 | 250 | 1,524 | 4.472 | 0.496 | 0.04025 | 0.045617 | 0.072451 | 0.215564 | 0.137746 | 0.137746 | 0.137746 | 0.137746 | 0.075134 | 0 | 0.00081 | 0.189633 | 1,524 | 54 | 100 | 28.222222 | 0.904453 | 0.73622 | 0 | 0 | 0 | 0 | 0.240506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.083333 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
aae02ec2a46c05ef679e3d0812eec85d4c71d767 | 707 | py | Python | past/past201912/past201912k-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | 1 | 2019-08-21T00:49:34.000Z | 2019-08-21T00:49:34.000Z | past/past201912/past201912k-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | past/past201912/past201912k-2.py | c-yan/atcoder | 940e49d576e6a2d734288fadaf368e486480a948 | [
"MIT"
] | null | null | null | from sys import stdin, setrecursionlimit
def euler_tour(n, i):
    left[n] = i
    i += 1
    for c in children[n]:
        i = euler_tour(c, i)
    right[n] = i
    return i


readline = stdin.readline
setrecursionlimit(10 ** 6)

N = int(readline())
root = -1
children = [[] for _ in range(N)]
for i in range(N):
    p = int(readline())
    if p == -1:
        root = i
    else:
        children[p - 1].append(i)

left = [0] * N
right = [0] * N
euler_tour(root, 0)

Q = int(readline())
result = []
for _ in range(Q):
    a, b = map(lambda x: int(x) - 1, readline().split())
    # b is an ancestor of a iff a's entry time falls strictly inside b's [left, right) interval
    if left[b] < left[a] < right[b]:
        result.append('Yes')
    else:
        result.append('No')
print(*result, sep='\n')
| 18.128205 | 56 | 0.545969 | 111 | 707 | 3.432432 | 0.369369 | 0.020997 | 0.052493 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021569 | 0.278642 | 707 | 38 | 57 | 18.605263 | 0.72549 | 0 | 0 | 0.064516 | 0 | 0 | 0.009901 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.032258 | 0 | 0.096774 | 0.032258 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |