hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | 
qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ff166319d7571cbeff470120486344f6e07be45c | 2,809 | py | Python | database/user.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | database/user.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | database/user.py | As-12/Fit-App-backend- | d95b07fdb1aed882d01d3a70b4b0f308374bf304 | [
"MIT"
] | null | null | null | from main import db
from dataclasses import dataclass
import database
@dataclass
class User(db.Model):
__tablename__ = 'user'
id: str = db.Column(db.String, primary_key=True,
autoincrement=False)
target_weight: float = db.Column(db.Float, nullable=False)
height: float = db.Column(db.Float, nullable=False)
city: str = db.Column(db.String)
state: str = db.Column(db.String)
progress = db.relationship("Progress")
def insert(self):
"""
insert()
inserts a new model into a database
the model must have a unique name
the model must have a unique id or null id
EXAMPLE
user = User(id=user_id,
target_weight=120.3,
dob=Date())
user.insert()
"""
try:
self.validate()
db.session.add(self)
db.session.commit()
except Exception as e:
db.session.rollback()
raise e
finally:
db.session.close()
def delete(self):
"""
delete()
        deletes an existing model from the database
the model must exist in the database
EXAMPLE
user = User(id=user_id,
target_weight=120.3,
dob=Date())
user.delete()
"""
db.session.delete(self)
db.session.commit()
db.session.close()
def update(self, update_dict=None):
"""
update()
        updates an existing model in the database
the model must exist in the database
EXAMPLE
user = User(id=user_id,
target_weight=120.3,
dob=Date())
user.target_weight = 200.5
user.update()
"""
try:
if update_dict is not None:
for key in ["target_weight", "height", "city", "state"]:
if key in update_dict:
setattr(self, key, update_dict[key])
self.validate()
db.session.commit()
except Exception as e:
db.session.rollback()
raise e
finally:
db.session.close()
def validate(self):
"""
validate()
        Validates the model's values.
        Raises a ValueError for any invalid value.
This function is automatically
called upon insert or update
"""
if self.target_weight < 0:
raise ValueError("target_weight must be greater "
"or equal to zero")
if self.height < 0:
raise ValueError("height must be greater "
"or equal to zero")
| 29.260417 | 72 | 0.503382 | 307 | 2,809 | 4.540717 | 0.29316 | 0.064562 | 0.035868 | 0.027977 | 0.471306 | 0.430416 | 0.406026 | 0.321377 | 0.321377 | 0.296987 | 0 | 0.010909 | 0.412602 | 2,809 | 95 | 73 | 29.568421 | 0.833939 | 0.304735 | 0 | 0.425532 | 0 | 0 | 0.076034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0 | 0.06383 | 0 | 0.319149 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ff179049d6a93f539ffdb9a3e5f19fac5a840892 | 2,755 | py | Python | AutoFormer/model/module/Linear_super.py | Inch-Z/Cream | 5adb978db133842dd44f54614a9303dc5d11aa7d | [
"MIT"
] | 307 | 2020-10-29T13:17:02.000Z | 2022-03-30T09:55:49.000Z | AutoFormer/model/module/Linear_super.py | Inch-Z/Cream | 5adb978db133842dd44f54614a9303dc5d11aa7d | [
"MIT"
] | 42 | 2020-10-30T07:09:48.000Z | 2022-03-29T13:54:56.000Z | AutoFormer/model/module/Linear_super.py | Inch-Z/Cream | 5adb978db133842dd44f54614a9303dc5d11aa7d | [
"MIT"
] | 64 | 2020-10-30T10:08:48.000Z | 2022-03-30T06:51:01.000Z | import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
class LinearSuper(nn.Linear):
def __init__(self, super_in_dim, super_out_dim, bias=True, uniform_=None, non_linear='linear', scale=False):
super().__init__(super_in_dim, super_out_dim, bias=bias)
# super_in_dim and super_out_dim indicate the largest network!
self.super_in_dim = super_in_dim
self.super_out_dim = super_out_dim
# input_dim and output_dim indicate the current sampled size
self.sample_in_dim = None
self.sample_out_dim = None
self.samples = {}
self.scale = scale
self._reset_parameters(bias, uniform_, non_linear)
self.profiling = False
def profile(self, mode=True):
self.profiling = mode
def sample_parameters(self, resample=False):
if self.profiling or resample:
return self._sample_parameters()
return self.samples
def _reset_parameters(self, bias, uniform_, non_linear):
nn.init.xavier_uniform_(self.weight) if uniform_ is None else uniform_(
self.weight, non_linear=non_linear)
if bias:
nn.init.constant_(self.bias, 0.)
def set_sample_config(self, sample_in_dim, sample_out_dim):
self.sample_in_dim = sample_in_dim
self.sample_out_dim = sample_out_dim
self._sample_parameters()
def _sample_parameters(self):
self.samples['weight'] = sample_weight(self.weight, self.sample_in_dim, self.sample_out_dim)
self.samples['bias'] = self.bias
self.sample_scale = self.super_out_dim/self.sample_out_dim
if self.bias is not None:
self.samples['bias'] = sample_bias(self.bias, self.sample_out_dim)
return self.samples
def forward(self, x):
self.sample_parameters()
return F.linear(x, self.samples['weight'], self.samples['bias']) * (self.sample_scale if self.scale else 1)
def calc_sampled_param_num(self):
assert 'weight' in self.samples.keys()
weight_numel = self.samples['weight'].numel()
if self.samples['bias'] is not None:
bias_numel = self.samples['bias'].numel()
else:
bias_numel = 0
return weight_numel + bias_numel
def get_complexity(self, sequence_length):
total_flops = 0
total_flops += sequence_length * np.prod(self.samples['weight'].size())
return total_flops
def sample_weight(weight, sample_in_dim, sample_out_dim):
sample_weight = weight[:, :sample_in_dim]
sample_weight = sample_weight[:sample_out_dim, :]
return sample_weight
def sample_bias(bias, sample_out_dim):
sample_bias = bias[:sample_out_dim]
return sample_bias
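# --- Minimal usage sketch (illustrative only; the dimensions below are made up,
# --- not taken from any AutoFormer search-space config) ---
if __name__ == '__main__':
    layer = LinearSuper(super_in_dim=64, super_out_dim=32)
    # Slice a smaller sub-layer out of the super-network weights.
    layer.set_sample_config(sample_in_dim=48, sample_out_dim=16)
    out = layer(torch.randn(2, 48))
    print(out.shape)                         # torch.Size([2, 16])
    print(layer.calc_sampled_param_num())    # 16 * 48 + 16 = 784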
| 33.597561 | 115 | 0.676951 | 381 | 2,755 | 4.577428 | 0.186352 | 0.058486 | 0.075688 | 0.045872 | 0.239679 | 0.164564 | 0.099771 | 0 | 0 | 0 | 0 | 0.001885 | 0.229764 | 2,755 | 81 | 116 | 34.012346 | 0.819981 | 0.043194 | 0 | 0.033898 | 0 | 0 | 0.021269 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 1 | 0.186441 | false | 0 | 0.067797 | 0 | 0.40678 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ff17afb225298a2ce7876034f454a6c0c4d8cebd | 1,251 | py | Python | aquascope/util/config.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | null | null | null | aquascope/util/config.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | 3 | 2019-04-03T13:22:47.000Z | 2019-12-02T15:49:31.000Z | aquascope/util/config.py | MicroscopeIT/aquascope_backend | 6b8c13ca3d6bd0a96f750fae809b6cf5a0062f24 | [
"MIT"
] | 2 | 2019-05-15T13:30:42.000Z | 2020-06-12T02:42:49.000Z | from collections import abc
import copy
import yaml
def data_merge(a, b):
if isinstance(a, abc.Mapping):
if not isinstance(b, abc.Mapping):
raise TypeError('cannot merge {} into a dictionary'.format(b))
a = copy.deepcopy(a)
for k in b:
try:
a[k] = data_merge(a[k], b[k])
except KeyError:
a[k] = b[k]
else:
a = b
return a
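# Worked example (illustrative, not from the original module): nested mappings are
# merged key-by-key, scalar values in `b` override those in `a`, and `a` itself is
# left untouched because it is deep-copied first.
#
#   data_merge({'db': {'host': 'x', 'port': 1}}, {'db': {'port': 2}})
#   -> {'db': {'host': 'x', 'port': 2}}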
class ConfigDict:
class NoKey(KeyError):
"""Raised when someone accesses an attribute that wasn't set.
"""
pass
def __init__(self, data):
self.data = data
def keys(self):
return self.data.keys()
def __getitem__(self, key):
return self.data[key]
def __getattr__(self, k):
try:
v = self.data[k]
except KeyError:
raise self.NoKey(k)
if isinstance(v, abc.Mapping):
v = ConfigDict(v)
return v
class Config(ConfigDict):
def __init__(self, path):
with open(path) as fi:
loaded = yaml.load(fi)
merged = copy.deepcopy(self.DEFAULT)
merged = data_merge(merged, loaded)
super(Config, self).__init__(merged)
DEFAULT = dict()
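# --- Illustrative usage sketch (not part of the original module; the path and
# --- key names are made up) ---
#   cfg = Config('aquascope.yaml')   # YAML contents merged over Config.DEFAULT
#   cfg.storage.container            # nested mappings come back as ConfigDict
#   cfg.missing_key                  # raises Config.NoKey (a KeyError subclass)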
| 20.508197 | 74 | 0.544365 | 157 | 1,251 | 4.191083 | 0.401274 | 0.06079 | 0.030395 | 0.012158 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.35012 | 1,251 | 60 | 75 | 20.85 | 0.809348 | 0.046363 | 0 | 0.097561 | 0 | 0 | 0.028014 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.146341 | false | 0.02439 | 0.073171 | 0.04878 | 0.414634 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ff18f8d189b281647e54313083f71e52b26e849f | 5,532 | py | Python | oxasl/gui/calib_tab.py | physimals/oxasl | e583103f3313aed2890b60190b6ca7b265a46e3c | [
"Apache-2.0"
] | 1 | 2021-01-27T05:48:20.000Z | 2021-01-27T05:48:20.000Z | oxasl/gui/calib_tab.py | ibme-qubic/oxasl | 8a0c055752d6e10cd932336ae6916f0c4fc0a2e9 | [
"Apache-2.0"
] | 13 | 2019-01-14T13:22:00.000Z | 2020-09-12T20:34:20.000Z | oxasl/gui/calib_tab.py | physimals/oxasl | e583103f3313aed2890b60190b6ca7b265a46e3c | [
"Apache-2.0"
] | 3 | 2019-03-19T15:46:48.000Z | 2020-03-13T16:55:48.000Z | """
oxasl.gui.calibration_tab.py
Copyright (c) 2019 University of Oxford
"""
from oxasl.gui.widgets import TabPage
class AslCalibration(TabPage):
"""
Tab page containing options for calibration
"""
def __init__(self, parent, idx, n):
TabPage.__init__(self, parent, "Calibration", idx, n)
self.calib_cb = self.checkbox("Enable Calibration", bold=True, handler=self.calib_changed)
self.calib_image_picker = self.file_picker("Calibration Image")
self.seq_tr_num = self.number("Sequence TR (s)", minval=0, maxval=10, initial=6)
self.calib_gain_num = self.number("Calibration Gain", minval=0, maxval=5, initial=1)
self.calib_mode_ch = self.choice("Calibration mode", choices=["Reference Region", "Voxelwise"])
self.section("Reference tissue")
self.ref_tissue_type_ch = self.choice("Type", choices=["CSF", "WM", "GM", "None"],
handler=self.ref_tissue_type_changed)
self.ref_tissue_mask_picker = self.file_picker("Mask", optional=True)
self.ref_t1_num = self.number("Reference T1 (s)", minval=0, maxval=5, initial=4.3)
self.seq_te_num = self.number("Sequence TE (ms)", minval=0, maxval=30, initial=0)
self.ref_t2_num = self.number("Reference T2 (ms)", minval=0, maxval=1000, initial=750, step=10)
self.blood_t2_num = self.number("Blood T2 (ms)", minval=0, maxval=1000, initial=150, step=10)
self.coil_image_picker = self.file_picker("Coil Sensitivity Image", optional=True)
self.sizer.AddGrowableCol(2, 1)
self.SetSizer(self.sizer)
self.next_prev()
def options(self):
options = {}
if self.calib():
options.update({
"calib" : self.image("Calibration data", self.calib_image_picker.GetPath()),
"calib_gain" : self.calib_gain_num.GetValue(),
"tr" : self.seq_tr_num.GetValue(),
})
if self.refregion():
options.update({
"calib_method" : "refregion",
"tissref" : self.ref_tissue_type_ch.GetString(self.ref_tissue_type()),
"te" : self.seq_te_num.GetValue(),
"t1r" : self.ref_t1_num.GetValue(),
"t2r" : self.ref_t2_num.GetValue(),
"t2b" : self.blood_t2_num.GetValue(),
})
if self.ref_tissue_mask_picker.checkbox.IsChecked():
options["refmask"] = self.ref_tissue_mask_picker.GetPath()
else:
options["calib_method"] = "voxelwise"
if self.coil_image_picker.checkbox.IsChecked():
options["cref"] = self.image("Calibration reference data", self.coil_image_picker.GetPath())
return options
def calib(self):
"""
:return: True if calibration is enabled
"""
return self.calib_cb.IsChecked()
def refregion(self):
"""
:return: True if reference region calibration is selected
"""
return self.calib_mode_ch.GetSelection() == 0
def ref_tissue_type(self):
"""
:return reference tissue type index
"""
return self.ref_tissue_type_ch.GetSelection()
def ref_tissue_type_changed(self, _):
"""
Update reference tissue parameters to currently selected reference tissue type
"""
if self.ref_tissue_type() == 0: # CSF
self.ref_t1_num.SetValue(4.3)
self.ref_t2_num.SetValue(750)
elif self.ref_tissue_type() == 1: # WM
self.ref_t1_num.SetValue(1.0)
self.ref_t2_num.SetValue(50)
elif self.ref_tissue_type() == 2: # GM
self.ref_t1_num.SetValue(1.3)
self.ref_t2_num.SetValue(100)
self.update()
def calib_changed(self, _):
"""
Update option visibility when calibration is enabled/disabled
"""
self.distcorr.calib_changed(self.calib())
self.update()
def update(self):
enable = self.calib()
self.seq_tr_num.Enable(enable)
self.calib_image_picker.Enable(enable)
self.calib_gain_num.Enable(enable)
self.coil_image_picker.checkbox.Enable(enable)
if self.analysis.white_paper():
self.calib_mode_ch.SetSelection(1)
self.calib_mode_ch.Enable(enable and not self.analysis.white_paper())
self.ref_tissue_type_ch.Enable(enable and self.refregion())
if self.ref_tissue_type() == 3:
# Ref tissue = None - enforce mask
self.ref_tissue_mask_picker.checkbox.Enable(False)
self.ref_tissue_mask_picker.checkbox.SetValue(enable and self.refregion())
self.ref_tissue_mask_picker.Enable(enable and self.refregion())
else:
self.ref_tissue_mask_picker.checkbox.Enable(enable and self.refregion())
self.ref_tissue_mask_picker.Enable(enable and self.refregion() and self.ref_tissue_mask_picker.checkbox.IsChecked())
self.coil_image_picker.checkbox.Enable(enable and self.refregion())
self.coil_image_picker.Enable(enable and self.refregion() and self.coil_image_picker.checkbox.IsChecked())
self.seq_te_num.Enable(enable and self.refregion())
self.blood_t2_num.Enable(enable and self.refregion())
self.ref_t1_num.Enable(enable and self.refregion())
self.ref_t2_num.Enable(enable and self.refregion())
TabPage.update(self)
| 41.283582 | 124 | 0.625271 | 688 | 5,532 | 4.797965 | 0.19186 | 0.065738 | 0.074826 | 0.073311 | 0.401999 | 0.25053 | 0.195092 | 0.101787 | 0.044229 | 0.044229 | 0 | 0.019961 | 0.257411 | 5,532 | 133 | 125 | 41.593985 | 0.783593 | 0.078091 | 0 | 0.089888 | 0 | 0 | 0.074007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089888 | false | 0 | 0.011236 | 0 | 0.157303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ff1965ee1d197cb5fef833ce6f16c3119dfdad66 | 295 | py | Python | annodomini/__init__.py | TheDubliner/RedArmy-Cogs | f0ae7ab554e176254a91e322e0cf349b69971e98 | [
"MIT"
] | null | null | null | annodomini/__init__.py | TheDubliner/RedArmy-Cogs | f0ae7ab554e176254a91e322e0cf349b69971e98 | [
"MIT"
] | 5 | 2020-05-16T12:21:26.000Z | 2020-06-01T11:26:50.000Z | annodomini/__init__.py | TheDubliner/RedArmy-Cogs | f0ae7ab554e176254a91e322e0cf349b69971e98 | [
"MIT"
] | null | null | null | from .annodomini import AnnoDomini
__red_end_user_data_statement__ = (
"This cog stores data attached to a user ID for the purpose of running "
" the game and saving statistics.\n"
"This cog supports data removal requests."
)
def setup(bot):
bot.add_cog(AnnoDomini(bot))
| 22.692308 | 76 | 0.718644 | 43 | 295 | 4.72093 | 0.744186 | 0.068966 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210169 | 295 | 12 | 77 | 24.583333 | 0.871245 | 0 | 0 | 0 | 0 | 0 | 0.488136 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
ff19d763570001bb5b3069ee03ffa76d0400a9d7 | 1,303 | py | Python | 3. Linear Regresion/3-6)Mini Batch and Data Load.py | choijiwoong/-ROKA-torch-tutorial-files | c298fdf911cd64757895c3ab9f71ae7c3467c545 | [
"Unlicense"
] | null | null | null | 3. Linear Regresion/3-6)Mini Batch and Data Load.py | choijiwoong/-ROKA-torch-tutorial-files | c298fdf911cd64757895c3ab9f71ae7c3467c545 | [
"Unlicense"
] | null | null | null | 3. Linear Regresion/3-6)Mini Batch and Data Load.py | choijiwoong/-ROKA-torch-tutorial-files | c298fdf911cd64757895c3ab9f71ae7c3467c545 | [
"Unlicense"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
x_train=torch.FloatTensor([[73,80,75],
[93,88,93],
[89,91,90],
[96,98,100],
[73,66,70]])
y_train=torch.FloatTensor([[152],[185],[180],[196],[142]])
dataset=TensorDataset(x_train,y_train)
# DataLoader's arguments are the dataset and the mini-batch size
dataloader=DataLoader(dataset, batch_size=2,shuffle=True)  # shuffle so the model does not adapt to the sample order
model=nn.Linear(3,1)
optimizer=torch.optim.SGD(model.parameters(), lr=1e-5)
nb_epochs=20
for epoch in range(nb_epochs+1):
for batch_idx, samples in enumerate(dataloader):
#print(batch_idx)
#print(samples)
x_train, y_train=samples
prediction=model(x_train)
cost=F.mse_loss(prediction,y_train)
optimizer.zero_grad()
cost.backward()
optimizer.step()
print('Epoch {:4d}/{} Batch {}/{} Cost: {:.6f}'.format(
epoch, nb_epochs, batch_idx+1,len(dataloader),
cost.item()
))
new_var=torch.FloatTensor([[73,80,75]])
pred_y=model(new_var)
print("After training, prediction about 73, 80, 75", pred_y)
| 30.302326 | 89 | 0.623945 | 180 | 1,303 | 4.4 | 0.488889 | 0.030303 | 0.022727 | 0.045455 | 0.136364 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069417 | 0.237145 | 1,303 | 42 | 90 | 31.02381 | 0.727364 | 0.092863 | 0 | 0 | 0 | 0 | 0.068761 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.16129 | 0 | 0.16129 | 0.064516 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
205f3f36d35279bed8e26e086ddaae4845a10bf2 | 4,211 | py | Python | docs/examples/04/do_mcmc.py | ast0815/likelihood-machine | 4b0ebd193253775c31539c4a0046b79cbec8fa2b | [
"MIT"
] | null | null | null | docs/examples/04/do_mcmc.py | ast0815/likelihood-machine | 4b0ebd193253775c31539c4a0046b79cbec8fa2b | [
"MIT"
] | 1 | 2017-03-15T15:36:48.000Z | 2017-03-15T15:36:48.000Z | docs/examples/04/do_mcmc.py | ast0815/likelihood-machine | 4b0ebd193253775c31539c4a0046b79cbec8fa2b | [
"MIT"
] | null | null | null | import emcee
import numpy as np
from matplotlib import pyplot as plt
from remu import binning, likelihood, likelihood_utils, plotting
with open("../01/reco-binning.yml") as f:
reco_binning = binning.yaml.full_load(f)
with open("../01/optimised-truth-binning.yml") as f:
truth_binning = binning.yaml.full_load(f)
reco_binning.fill_from_csv_file("../00/real_data.txt")
data = reco_binning.get_entries_as_ndarray()
data_model = likelihood.PoissonData(data)
response_matrix = "../03/response_matrix.npz"
matrix_predictor = likelihood.ResponseMatrixPredictor(response_matrix)
calc = likelihood.LikelihoodCalculator(data_model, matrix_predictor)
truth_binning.fill_from_csv_file("../00/modelA_truth.txt")
modelA = truth_binning.get_values_as_ndarray()
modelA /= np.sum(modelA)
modelA_shape = likelihood.TemplatePredictor([modelA])
calcA = calc.compose(modelA_shape)
samplerA = likelihood_utils.emcee_sampler(calcA)
guessA = likelihood_utils.emcee_initial_guess(calcA)
state = samplerA.run_mcmc(guessA, 100)
chain = samplerA.get_chain(flat=True)
with open("chain_shape.txt", "w") as f:
print(chain.shape, file=f)
fig, ax = plt.subplots()
ax.hist(chain[:, 0])
ax.set_xlabel("model A weight")
fig.savefig("burn_short.png")
with open("burn_short_tau.txt", "w") as f:
try:
tau = samplerA.get_autocorr_time()
print(tau, file=f)
except emcee.autocorr.AutocorrError as e:
print(e, file=f)
samplerA.reset()
state = samplerA.run_mcmc(guessA, 200 * 50)
chain = samplerA.get_chain(flat=True)
with open("burn_long_tau.txt", "w") as f:
try:
tau = samplerA.get_autocorr_time()
print(tau, file=f)
except emcee.autocorr.AutocorrError as e:
print(e, file=f)
fig, ax = plt.subplots()
ax.hist(chain[:, 0])
ax.set_xlabel("model A weight")
fig.savefig("burn_long.png")
samplerA.reset()
state = samplerA.run_mcmc(state, 100 * 50)
chain = samplerA.get_chain(flat=True)
with open("tauA.txt", "w") as f:
try:
tau = samplerA.get_autocorr_time()
print(tau, file=f)
except emcee.autocorr.AutocorrError as e:
print(e, file=f)
fig, ax = plt.subplots()
ax.hist(chain[:, 0])
ax.set_xlabel("model A weight")
fig.savefig("weightA.png")
truth, _ = modelA_shape(chain)
truth.shape = (np.prod(truth.shape[:-1]), truth.shape[-1])
pltr = plotting.get_plotter(truth_binning)
pltr.plot_array(truth, stack_function=np.median, label="Post. median", hatch=None)
pltr.plot_array(truth, stack_function=0.68, label="Post. 68%", scatter=0)
pltr.legend()
pltr.savefig("truthA.png")
reco, _ = calcA.predictor(chain)
reco.shape = (np.prod(reco.shape[:-1]), reco.shape[-1])
pltr = plotting.get_plotter(reco_binning)
pltr.plot_array(reco, stack_function=np.median, label="Post. median", hatch=None)
pltr.plot_array(reco, stack_function=0.68, label="Post. 68%")
pltr.plot_array(data, label="Data", hatch=None, linewidth=2)
pltr.legend()
pltr.savefig("recoA.png")
truth_binning.reset()
truth_binning.fill_from_csv_file("../00/modelB_truth.txt")
modelB = truth_binning.get_values_as_ndarray()
modelB /= np.sum(modelB)
combined = likelihood.TemplatePredictor([modelA, modelB])
calcC = calc.compose(combined)
samplerC = likelihood_utils.emcee_sampler(calcC)
guessC = likelihood_utils.emcee_initial_guess(calcC)
state = samplerC.run_mcmc(guessC, 200 * 50)
chain = samplerC.get_chain(flat=True)
with open("combined_chain_shape.txt", "w") as f:
print(chain.shape, file=f)
with open("burn_combined_tau.txt", "w") as f:
try:
tau = samplerC.get_autocorr_time()
print(tau, file=f)
except emcee.autocorr.AutocorrError as e:
print(e, file=f)
samplerC.reset()
state = samplerC.run_mcmc(state, 100 * 50)
chain = samplerC.get_chain(flat=True)
with open("combined_tau.txt", "w") as f:
try:
tau = samplerC.get_autocorr_time()
print(tau, file=f)
except emcee.autocorr.AutocorrError as e:
print(e, file=f)
fig, ax = plt.subplots()
ax.hist2d(chain[:, 0], chain[:, 1])
ax.set_xlabel("model A weight")
ax.set_ylabel("model B weight")
fig.savefig("combined.png")
fig, ax = plt.subplots()
ax.hist(np.sum(chain, axis=-1))
ax.set_xlabel("model A weight + model B weight")
fig.savefig("total.png")
| 30.294964 | 82 | 0.722156 | 639 | 4,211 | 4.593114 | 0.190923 | 0.020443 | 0.01431 | 0.016695 | 0.596934 | 0.549233 | 0.430324 | 0.375809 | 0.363203 | 0.336627 | 0 | 0.015821 | 0.129423 | 4,211 | 138 | 83 | 30.514493 | 0.784779 | 0 | 0 | 0.428571 | 0 | 0 | 0.117312 | 0.040133 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.035714 | 0 | 0.035714 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20628f6d5bc6347f947ff9c3729ba2edb1f796e2 | 4,464 | py | Python | data/tracking/sampler/_sampling_algos/sequence_sampling/triplet/_algo.py | zhangzhengde0225/SwinTrack | 526be17f8ef266cb924c6939bd8dda23e9b73249 | [
"MIT"
] | 143 | 2021-12-03T02:33:36.000Z | 2022-03-29T00:01:48.000Z | data/tracking/sampler/_sampling_algos/sequence_sampling/triplet/_algo.py | zhangzhengde0225/SwinTrack | 526be17f8ef266cb924c6939bd8dda23e9b73249 | [
"MIT"
] | 33 | 2021-12-03T10:32:05.000Z | 2022-03-31T02:13:55.000Z | data/tracking/sampler/_sampling_algos/sequence_sampling/triplet/_algo.py | zhangzhengde0225/SwinTrack | 526be17f8ef266cb924c6939bd8dda23e9b73249 | [
"MIT"
] | 24 | 2021-12-04T06:46:42.000Z | 2022-03-30T07:57:47.000Z | import numpy as np
from data.tracking.sampler.SiamFC.type import SiamesePairSamplingMethod
from data.tracking.sampler._sampling_algos.stateless.random import sampling_multiple_indices_with_range_and_mask
from data.tracking.sampler._sampling_algos.sequence_sampling.common._algo import sample_one_positive
def do_triplet_sampling_positive_only(length: int, frame_range: int, aux_frame_range: int, mask: np.ndarray=None,
sampling_method: SiamesePairSamplingMethod=SiamesePairSamplingMethod.causal,
aux_sampling_method: SiamesePairSamplingMethod=SiamesePairSamplingMethod.causal,
rng_engine: np.random.Generator=np.random.default_rng()):
assert frame_range > 0
assert aux_frame_range > 0
sort = False
frame_range = frame_range + 1
if sampling_method == SiamesePairSamplingMethod.causal:
sort = True
indices = sampling_multiple_indices_with_range_and_mask(length, mask, 2, frame_range, allow_duplication=False, allow_insufficiency=True, sort=sort, rng_engine=rng_engine)
if len(indices) < 2:
return indices
elif len(indices) == 2:
x_index = indices[1]
if aux_sampling_method == SiamesePairSamplingMethod.interval:
begin = x_index - aux_frame_range
end = x_index + aux_frame_range
if begin < 0:
begin = 0
            if end > length:  # use `length` rather than len(mask) so a None mask does not crash
                end = length
masked_candidates = np.arange(begin, end)
if mask is not None:
mask_copied = mask[begin: end].copy()
mask_copied[x_index - begin] = False
masked_candidates = masked_candidates[mask_copied]
else:
masked_candidates = np.delete(masked_candidates, x_index - begin)
if len(masked_candidates) == 0:
return indices
elif aux_sampling_method == SiamesePairSamplingMethod.causal:
z_index = indices[0]
if z_index < x_index:
begin = max(x_index - aux_frame_range, z_index)
end = x_index
else: # z_index > x_index
begin = x_index
                end = min(z_index, x_index + aux_frame_range, length)  # length, not len(mask): mask may be None
if begin == end:
return indices
masked_candidates = np.arange(begin, end)
if mask is not None:
masked_candidates = masked_candidates[mask[begin: end]]
else:
raise NotImplementedError(aux_sampling_method)
aux_index = rng_engine.choice(masked_candidates)
return indices[0], indices[1], aux_index
else:
raise RuntimeError
from ..SiamFC._algo import _gaussian
def _negative_sampling(length, anchor_index, negative_sample_mask, frame_range, rng_engine):
begin = anchor_index - frame_range
end = anchor_index + frame_range
x_axis_begin_value = -begin * 8 / (2 * frame_range + 1) - 4
x_axis_end_value = (length - 1 - end) * 8 / (2 * frame_range + 1) + 4
x_axis_values = np.linspace(x_axis_begin_value, x_axis_end_value, length)
x_axis_values = x_axis_values[negative_sample_mask]
if len(x_axis_values) == 0:
return None
probability = _gaussian(x_axis_values, 0., 5.)
probability_sum = probability.sum()
if probability_sum == 0:
probability = None
else:
probability = probability / probability_sum
candidates = np.arange(0, length)[negative_sample_mask]
negative_sample_index = rng_engine.choice(candidates, p=probability)
return negative_sample_index
def do_triplet_sampling_negative_only(length: int, frame_range: int, aux_frame_range: int, mask: np.ndarray=None, rng_engine: np.random.Generator=np.random.default_rng()):
assert frame_range > 0
assert aux_frame_range > 0
z_index = sample_one_positive(length, mask, rng_engine)
if mask is None or length == 1:
return (z_index,)
false_mask = ~mask
false_mask[z_index] = False
negative_x_index = _negative_sampling(length, z_index, false_mask, frame_range, rng_engine)
if negative_x_index is None:
return (z_index, )
negative_aug_index = _negative_sampling(length, negative_x_index, false_mask, aux_frame_range, rng_engine)
if negative_aug_index is None:
return z_index, negative_x_index
return z_index, negative_x_index, negative_aug_index
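# Illustrative call (not part of the original module; the mask and seed are made up):
#   mask = np.ones(20, dtype=bool)
#   do_triplet_sampling_positive_only(20, frame_range=5, aux_frame_range=2, mask=mask,
#                                     rng_engine=np.random.default_rng(0))
#   -> (z_index, x_index, aux_index) when enough valid frames exist; when too few
#      valid frames are available, only the indices that could be sampled are returned.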
| 44.19802 | 174 | 0.670475 | 561 | 4,464 | 4.996435 | 0.165775 | 0.078487 | 0.041741 | 0.019979 | 0.39101 | 0.254727 | 0.196932 | 0.146985 | 0.133428 | 0.133428 | 0 | 0.009036 | 0.256272 | 4,464 | 100 | 175 | 44.64 | 0.835241 | 0.003808 | 0 | 0.204545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045455 | 1 | 0.034091 | false | 0 | 0.056818 | 0 | 0.204545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
206298366159a32b63385b54342b7c1ecae4f4f8 | 6,887 | py | Python | check.py | imayank/project4 | 3ccab23560dec09180199726fbf252ac934b7bc2 | [
"MIT"
] | null | null | null | check.py | imayank/project4 | 3ccab23560dec09180199726fbf252ac934b7bc2 | [
"MIT"
] | null | null | null | check.py | imayank/project4 | 3ccab23560dec09180199726fbf252ac934b7bc2 | [
"MIT"
] | null | null | null | import pandas as pd
import numpy as np
import csv
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
from keras import regularizers
from keras.models import Sequential
from keras.layers import Dense, Flatten, Lambda, Cropping2D, Conv2D, MaxPooling2D, Activation, BatchNormalization,Dropout
base_path = "../data/recording/"
base_path_img = "../data/recording/IMG/"
#base_path = "/opt/data/recording/"
#base_path_img = "/opt/data/recording/IMG/"
data = pd.read_csv(base_path + "driving_log.csv")
def expanding_data(data):
X_center = data.loc[:,'center']
y_center = data.loc[:,'target']
X_left = data.loc[:,'left']
y_left = y_center + 0.3
X_right = data.loc[:,'right']
y_right = y_center - 0.3
center_data = pd.concat([X_center,y_center],axis=1,ignore_index=True)
left_data = pd.concat([X_left,y_left],axis=1,ignore_index=True)
right_data = pd.concat([X_right,y_right],axis=1,ignore_index=True)
merged_data = pd.concat([center_data,left_data,right_data],axis=0,ignore_index=True)
merged_data.columns=['path','target']
return merged_data
def undersampling(merged_data):
out = pd.cut(list(merged_data['target']),30,labels=False)
bins, counts = np.unique(out, return_counts=True)
avg_counts = np.mean(counts)
target_counts = int(np.percentile(counts,75))
indices = np.where(counts>avg_counts)
target_bins = bins[indices]
target_indices = []
total_indices = list(range(len(out)))
remaining_indices = total_indices
for value in target_bins:
bin_ind = list(np.where(out == value)[0])
remaining_indices = list(set(remaining_indices) - set(bin_ind))
random_indices = list(np.random.choice(bin_ind,target_counts, replace=False))
target_indices.extend(random_indices)
undersampled_indices = np.concatenate([target_indices,remaining_indices])
undersampled_data = merged_data.loc[undersampled_indices]
return undersampled_data
def reset_and_add(undersampled_data):
undersampled_data = undersampled_data.reset_index()
undersampled_data["ID"] = list(range(len(undersampled_data)))
return undersampled_data
def dataGenerator(data, batch_size,base_path_img):
ids = data['ID'].values
#print(ids)
num = len(ids)
#indices = np.arange(len(ids))
np.random.seed(42)
while True:
#indices = shuffle(indices)
np.random.shuffle(ids)
for offset in range(0,num,batch_size):
batch = ids[offset:offset+batch_size]
images = []
target = []
for batch_id in batch:
img_path = data.loc[batch_id,'path']
img_name = img_path.split('\\')[-1]
new_path = base_path_img + img_name
images.append(((mpimg.imread(new_path))/255)-0.5)
target.append(data.loc[batch_id,'target'])
images = np.array(images)
target = np.array(target)
yield images, target
def model_VGG():
model = Sequential()
#model.add(Lambda(normalize,input_shape=(160,320,3)))
model.add(Cropping2D(((20,20),(0,0)),input_shape=(160,320,3)))
model.add(Conv2D(64,3,padding='same',activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(128,3,padding='same',activation='relu'))
model.add(MaxPooling2D())
model.add(Conv2D(256,3,padding='same',activation='relu'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dropout(0.7))
model.add(Dense(100))
model.add(Dropout(0.7))
model.add(Dense(50))
model.add(Dropout(0.7))
model.add(Dense(10))
model.add(Dropout(0.7))
model.add(Dense(1))
return model
def model_nvidia_orig():
model = Sequential()
model.add(Cropping2D(((20,20),(0,0)),input_shape=(160,320,3)))
model.add(Conv2D(24,5,strides=(2,2),padding='valid'))
model.add(Conv2D(36,5,strides=(2,2),padding='valid'))
model.add(Conv2D(48,5,strides=(2,2),padding='valid'))
model.add(Conv2D(64,3,strides=(1,1),padding='valid'))
model.add(Conv2D(64,3,strides=(1,1),padding='valid'))
model.add(Flatten())
model.add(Dense(100))
model.add(Dense(50))
model.add(Dense(10))
model.add(Dense(1))
return model
def model_nvidia_updated():
model = Sequential()
model.add(Cropping2D(((20,20),(0,0)),input_shape=(160,320,3)))
model.add(Conv2D(24,5,strides=(2,2),padding='valid',kernel_regularizer=regularizers.l2(0.0001)))
#model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(36,5,strides=(2,2),padding='valid',kernel_regularizer=regularizers.l2(0.0001)))
#model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(48,5,strides=(2,2),padding='valid',kernel_regularizer=regularizers.l2(0.0001)))
#model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(64,3,strides=(1,1),padding='valid',kernel_regularizer=regularizers.l2(0.0001)))
#model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Conv2D(64,3,strides=(1,1),padding='valid',kernel_regularizer=regularizers.l2(0.0001)))
#model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(Flatten())
model.add(Dense(100,kernel_regularizer=regularizers.l2(0.0001)))
model.add(Activation('elu'))
model.add(Dense(50,kernel_regularizer=regularizers.l2(0.0001)))
model.add(Activation('elu'))
model.add(Dense(10,kernel_regularizer=regularizers.l2(0.0001)))
model.add(Activation('elu'))
model.add(Dense(1))
return model
train_data, validation_data = train_test_split(data,test_size=0.2,random_state=42)
#merged_data = expanding_data(data)
#undersampled_data = undersampling(merged_data)
#undersampled_data = reset_and_add(undersampled_data)
train_data = expanding_data(train_data)
undersampled_data = undersampling(train_data)
undersampled_data = reset_and_add(undersampled_data)
validation_data = expanding_data(validation_data)
validation_data = reset_and_add(validation_data)
"""
undersampled_data = expanding_data(data)
undersampled_data = reset_and_add(undersampled_data)"""
#print(train_data.columns)
train_generator = dataGenerator(undersampled_data, 128,base_path_img)
valid_generator = dataGenerator(validation_data,128, base_path_img)
#model = model_nvidia_orig()
model = model_nvidia_updated()
#model = model_VGG()
model.compile(loss='mse',optimizer='adam')
model.fit_generator(generator=train_generator,
steps_per_epoch = (len(train_data)//128)+1,
validation_data=valid_generator,
validation_steps = (len(validation_data)//128)+1,
epochs = 5)
model.save('model_new.h5')
| 36.057592 | 121 | 0.684333 | 940 | 6,887 | 4.832979 | 0.17766 | 0.091569 | 0.040062 | 0.054589 | 0.455426 | 0.4081 | 0.385428 | 0.371561 | 0.316971 | 0.299802 | 0 | 0.043827 | 0.168433 | 6,887 | 190 | 122 | 36.247368 | 0.749433 | 0.080006 | 0 | 0.306569 | 0 | 0 | 0.036179 | 0.003538 | 0 | 0 | 0 | 0 | 0 | 1 | 0.051095 | false | 0 | 0.065693 | 0 | 0.160584 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20646ade19d84e61c29fbfddf68aea6634664078 | 4,069 | py | Python | editor/translate_new.py | NTUEELightDance/2019-LightDance | 2e2689f868364e16972465abc22801aaeaf3d8ba | [
"MIT"
] | 2 | 2019-07-16T10:40:52.000Z | 2022-03-14T00:26:42.000Z | editor/translate_new.py | NTUEELightDance/2019-LightDance | 2e2689f868364e16972465abc22801aaeaf3d8ba | [
"MIT"
] | null | null | null | editor/translate_new.py | NTUEELightDance/2019-LightDance | 2e2689f868364e16972465abc22801aaeaf3d8ba | [
"MIT"
] | 2 | 2019-12-01T07:40:04.000Z | 2020-02-15T09:58:50.000Z | BPM_1 = 120.000
BPM_2 = 150.000
BPM_3 = 128.000
BPM_4 = 180.000
SEC_BEAT_1 = 60. / BPM_1
SEC_BEAT_2 = 60. / BPM_2
SEC_BEAT_3 = 60. / BPM_3
SEC_BEAT_4 = 60. / BPM_4
N_DANCER = 8
N_PART = 16
'''
2019_eenight_bpm (v9)
00:00.00 - 01:22.00 BPM = 120 (41*4 beats)
01:22.00 - 01:58.80 BPM = 150 (23*4 beats)
01:58.80 - 02:40.05 BPM = 128 (22*4 beats)
02:40.05 - END BPM = 180 (33*4 beats)
'''
'''
2019_eenight_bpm (v8)
00:00.00 - 00:13.89 BPM = 64 (bar 14.816) 15
00:13.89 - 01:19.76 BPM = 120 (bar 131.74) 147
01:19.76 - 01:24.96 BPM = 94 (bar 8.146) 155
01:24.96 - 01:55.33 BPM = 150 (bar 75.925) 231
01:55.33 - 02:36.61 BPM = 128 (bar 88.064) 319
02:36.61 - end BPM = 180 (bar )
'''
'''
A 0
B 1
C 2
D 3
E 4
F 5
G 6
H 7
I 8
J 9
L 10
M 11
N 12
O 13
P 14
Q 15
'''
'''
2019_eenight_bpm
00:00.00 - 01:22.00 BPM = 120 (41*4 beats) 164
01:22.00 - 01:58.80 BPM = 150 (23*4 beats) 256
01:58.80 - 02:40.05 BPM = 128 (22*4 beats) 344
02:40.05 - END BPM = 180 (33*4 beats) 476
'''
def bbf2sec(bbf):
tokens = bbf.split('-')
bar = int(tokens[0]) - 1
beat = int(tokens[1]) - 1
frac = 0
sec = 0
if len(tokens) >= 3:
a, b = tokens[2].split('/')
frac = float(a) / float(b)
if bar < 41 :
sec = ( bar * 4 + beat + frac ) * SEC_BEAT_1
elif bar < 64 :
sec = 82.00 + ((bar-41)*4+beat+frac) * SEC_BEAT_2
elif bar < 86:
sec = 118.80 + ((bar-64)*4+beat+frac) * SEC_BEAT_3
else :
sec = 160.05 + ((bar-86)*4+beat+frac) * SEC_BEAT_4
return sec
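# Worked examples (illustrative, hand-computed from the tempo map above):
#   bbf2sec('1-1')     -> 0.0    first beat of the song
#   bbf2sec('2-3-1/2') -> 3.25   (1*4 + 2 + 0.5) * 60/120
#   bbf2sec('42-1')    -> 82.0   first beat of the BPM-150 section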
chr2Num = {
'A' : 0,
'B' : 1,
'C' : 2,
'D' : 3,
'E' : 4,
'F' : 5,
'G' : 6,
'H': 7,
'I' : 8,
'J' : 9,
'K' : 10,
'L' : 11,
'M' : 12,
'N' : 13,
'O' : 14,
'P' : 15
}
def parse_single_part(s):
res = []
dancers = []
for x in range(len(s)):
if ord(s[x]) <= ord('9'):
dancers.append(ord(s[x])-ord('0'))
else:
for y in dancers:
res.append( (y,chr2Num[s[x]]) )
# print (res)
return res
def parse_parts(s):
res = []
parts = s.split('+')
for p in parts:
res += parse_single_part(p)
return list(set(res))
def translate(fname):
lst = [x.strip() for x in open(fname, encoding='utf-8')]
res = []
for i in range(N_DANCER):
v = []
for j in range(N_PART):
v.append([])
res.append(v)
for line in lst:
if line.strip() == '' or line[0] == '#':
continue
tokens = line.split()
start = bbf2sec(tokens[0])
end = bbf2sec(tokens[1])
parts = parse_parts(tokens[2])
#print(start, end, parts)
ltype = 1 # 1=ON, 2=Fade in, 3=Fade out
if len(tokens) >= 4:
if tokens[3] == 'FI':
ltype = 2
elif tokens[3] == 'FO':
ltype = 3
for i, j in parts:
res[i][j].append((start, end, ltype))
return res
def translate_pos(fname):
lst = [x.strip() for x in open(fname, encoding='utf-8')]
res = []
for i in range(N_DANCER):
res.append([])
tm = 0
sm = False
for line in lst:
if line.strip() == '' or line[0] == '#':
continue
# print (line)
tokens = line.split()
if len(tokens) <= 2:
tm = bbf2sec(tokens[0])
sm = (len(tokens) >= 2)
else:
num = int(tokens[0])
bx = int(tokens[1])
by = int(tokens[2])
if not sm:
res[num].append((tm, res[num][-1][1], res[num][-1][2]))
res[num].append((tm, bx, by))
return res
if __name__ == '__main__':
import json
import time
while True:
res = translate('tron.in')
s = json.dumps(res)
f = open('light.js', 'w')
f.write("var Data = \"")
f.write(s)
f.write("\";")
#print('done')
f.close()
res = translate_pos('tron.pos')
s = json.dumps(res)
f = open('pos.js', 'w')
f.write("var Pos = \"")
f.write(s)
f.write("\";")
f.close()
time.sleep(0.4)
| 20.974227 | 71 | 0.482182 | 678 | 4,069 | 2.818584 | 0.246313 | 0.029304 | 0.012559 | 0.025118 | 0.29304 | 0.233386 | 0.214547 | 0.214547 | 0.194662 | 0.194662 | 0 | 0.166049 | 0.336938 | 4,069 | 193 | 72 | 21.082902 | 0.542254 | 0.021873 | 0 | 0.23622 | 0 | 0 | 0.031318 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03937 | false | 0 | 0.015748 | 0 | 0.094488 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2066d9d51ccdadc9c2fac356ffcd5ca1583c63bd | 5,538 | py | Python | src/qt/qtwebkit/Tools/Scripts/webkitpy/style/checkers/python.py | viewdy/phantomjs | eddb0db1d253fd0c546060a4555554c8ee08c13c | [
"BSD-3-Clause"
] | 1 | 2015-05-27T13:52:20.000Z | 2015-05-27T13:52:20.000Z | src/qt/qtwebkit/Tools/Scripts/webkitpy/style/checkers/python.py | mrampersad/phantomjs | dca6f77a36699eb4e1c46f7600cca618f01b0ac3 | [
"BSD-3-Clause"
] | null | null | null | src/qt/qtwebkit/Tools/Scripts/webkitpy/style/checkers/python.py | mrampersad/phantomjs | dca6f77a36699eb4e1c46f7600cca618f01b0ac3 | [
"BSD-3-Clause"
] | 1 | 2022-02-18T10:41:38.000Z | 2022-02-18T10:41:38.000Z | # Copyright (C) 2010 Chris Jerdonek (cjerdonek@webkit.org)
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Supports checking WebKit style in Python files."""
import re
from StringIO import StringIO
from webkitpy.common.system.filesystem import FileSystem
from webkitpy.common.webkit_finder import WebKitFinder
from webkitpy.thirdparty.autoinstalled import pep8
from webkitpy.thirdparty.autoinstalled.pylint import lint
from webkitpy.thirdparty.autoinstalled.pylint.reporters.text import ParseableTextReporter
class PythonChecker(object):
"""Processes text lines for checking style."""
def __init__(self, file_path, handle_style_error):
self._file_path = file_path
self._handle_style_error = handle_style_error
def check(self, lines):
self._check_pep8(lines)
self._check_pylint(lines)
def _check_pep8(self, lines):
# Initialize pep8.options, which is necessary for
# Checker.check_all() to execute.
pep8.process_options(arglist=[self._file_path])
pep8_checker = pep8.Checker(self._file_path)
def _pep8_handle_error(line_number, offset, text, check):
# FIXME: Incorporate the character offset into the error output.
# This will require updating the error handler __call__
# signature to include an optional "offset" parameter.
pep8_code = text[:4]
pep8_message = text[5:]
category = "pep8/" + pep8_code
self._handle_style_error(line_number, category, 5, pep8_message)
pep8_checker.report_error = _pep8_handle_error
pep8_errors = pep8_checker.check_all()
def _check_pylint(self, lines):
pylinter = Pylinter()
# FIXME: for now, we only report pylint errors, but we should be catching and
# filtering warnings using the rules in style/checker.py instead.
output = pylinter.run(['-E', self._file_path])
lint_regex = re.compile('([^:]+):([^:]+): \[([^]]+)\] (.*)')
for error in output.getvalue().splitlines():
match_obj = lint_regex.match(error)
assert(match_obj)
line_number = int(match_obj.group(2))
category_and_method = match_obj.group(3).split(', ')
category = 'pylint/' + (category_and_method[0])
if len(category_and_method) > 1:
message = '[%s] %s' % (category_and_method[1], match_obj.group(4))
else:
message = match_obj.group(4)
self._handle_style_error(line_number, category, 5, message)
class Pylinter(object):
# We filter out these messages because they are bugs in pylint that produce false positives.
# FIXME: Does it make sense to combine these rules with the rules in style/checker.py somehow?
FALSE_POSITIVES = [
# possibly http://www.logilab.org/ticket/98613 ?
"Instance of 'Popen' has no 'poll' member",
"Instance of 'Popen' has no 'returncode' member",
"Instance of 'Popen' has no 'stdin' member",
"Instance of 'Popen' has no 'stdout' member",
"Instance of 'Popen' has no 'stderr' member",
"Instance of 'Popen' has no 'wait' member",
"Instance of 'Popen' has no 'pid' member",
]
def __init__(self):
self._pylintrc = WebKitFinder(FileSystem()).path_from_webkit_base('Tools', 'Scripts', 'webkitpy', 'pylintrc')
def run(self, argv):
output = _FilteredStringIO(self.FALSE_POSITIVES)
lint.Run(['--rcfile', self._pylintrc] + argv, reporter=ParseableTextReporter(output=output), exit=False)
return output
class _FilteredStringIO(StringIO):
def __init__(self, bad_messages):
StringIO.__init__(self)
self.dropped_last_msg = False
self.bad_messages = bad_messages
def write(self, msg=''):
if not self._filter(msg):
StringIO.write(self, msg)
def _filter(self, msg):
if any(bad_message in msg for bad_message in self.bad_messages):
self.dropped_last_msg = True
return True
if self.dropped_last_msg:
# We drop the newline after a dropped message as well.
self.dropped_last_msg = False
if msg == '\n':
return True
return False
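# --- Illustrative usage sketch (not part of the original checker) ---
# The style framework normally constructs the checker per file; a manual run would
# look roughly like this, where `handle_error` is any callable taking
# (line_number, category, confidence, message) as used above.
#
#   def handle_error(line_number, category, confidence, message):
#       print("%d: [%s] %s" % (line_number, category, message))
#
#   checker = PythonChecker("webkitpy/example.py", handle_error)
#   checker.check(lines=[])   # `lines` is unused here; pep8/pylint re-read the file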
| 42.6 | 117 | 0.68039 | 709 | 5,538 | 5.152327 | 0.369535 | 0.019162 | 0.028744 | 0.034492 | 0.168081 | 0.114427 | 0.058582 | 0.058582 | 0.03723 | 0.03723 | 0 | 0.00917 | 0.232033 | 5,538 | 129 | 118 | 42.930233 | 0.849753 | 0.373781 | 0 | 0.055556 | 0 | 0 | 0.112084 | 0 | 0 | 0 | 0 | 0.007752 | 0.013889 | 1 | 0.138889 | false | 0 | 0.097222 | 0 | 0.347222 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20672c1e3cc7378701a319c6aa66a5c9cd3fe2a4 | 581 | py | Python | avgamah/modules/NSFW/pussy.py | thenishantsapkota/Avgamah | c7f1f9a69f8a3f4c4ea53b25dbf62e272750f76c | [
"MIT"
] | 6 | 2021-11-03T06:37:33.000Z | 2022-01-26T15:09:37.000Z | avgamah/modules/NSFW/pussy.py | thenishantsapkota/Avgamah | c7f1f9a69f8a3f4c4ea53b25dbf62e272750f76c | [
"MIT"
] | 7 | 2021-11-03T14:58:38.000Z | 2022-03-29T23:16:21.000Z | avgamah/modules/NSFW/pussy.py | thenishantsapkota/Avgamah | c7f1f9a69f8a3f4c4ea53b25dbf62e272750f76c | [
"MIT"
] | 1 | 2021-08-31T08:04:51.000Z | 2021-08-31T08:04:51.000Z | import hikari
import tanjun
from avgamah.core.client import Client
pussy_component = tanjun.Component()
@pussy_component.with_slash_command
@tanjun.with_own_permission_check(
hikari.Permissions.SEND_MESSAGES
| hikari.Permissions.VIEW_CHANNEL
| hikari.Permissions.EMBED_LINKS
)
@tanjun.with_nsfw_check
@tanjun.as_slash_command("pussy", "Cute pussy cats.")
async def pussy(ctx: tanjun.abc.Context):
await ctx.shards.reddit_cache.reddit_sender(ctx, "pussy")
@tanjun.as_loader
def load_components(client: Client):
client.add_component(pussy_component.copy())
| 24.208333 | 61 | 0.79346 | 78 | 581 | 5.653846 | 0.525641 | 0.095238 | 0.104308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106713 | 581 | 23 | 62 | 25.26087 | 0.849711 | 0 | 0 | 0 | 0 | 0 | 0.04475 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0 | 0.176471 | 0 | 0.235294 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20685d91ffa78a319f6c16393a78a8570e8e34ff | 564 | py | Python | test.py | DougDimmadome7/Virtual-Jenga | 0344b21126b680826ffd13c10e328e04db9b7ade | [
"MIT"
] | null | null | null | test.py | DougDimmadome7/Virtual-Jenga | 0344b21126b680826ffd13c10e328e04db9b7ade | [
"MIT"
] | null | null | null | test.py | DougDimmadome7/Virtual-Jenga | 0344b21126b680826ffd13c10e328e04db9b7ade | [
"MIT"
] | null | null | null | from jenga import Tower, Layer
from bots import StatBot
def layer_suite():
subjects = {Layer(): (3, (1, 1.0)),
Layer(False, False): (1, (1, 0.0))}
for subject in subjects:
if subject.get_mass() != subjects[subject][0]:
print("Failed: Expected {}".format(subjects[subject][0]))
if subject.get_COM() != subjects[subject][1]:
print("Failed: Expected {}".format(subjects[subject][1]))
t1 = Tower(15)
stats = StatBot()
t1.move_piece(4,1)
t1.display()
print(stats.all_valid(t1))
| 25.636364 | 70 | 0.58156 | 74 | 564 | 4.364865 | 0.459459 | 0.185759 | 0.018576 | 0.154799 | 0.247678 | 0.247678 | 0 | 0 | 0 | 0 | 0 | 0.04717 | 0.248227 | 564 | 21 | 71 | 26.857143 | 0.714623 | 0 | 0 | 0 | 0 | 0 | 0.07024 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.133333 | 0 | 0.2 | 0.2 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
206a29865133fb8ad4a844440e59032795701c2f | 20,863 | py | Python | test/socializer_test.py | aquanauts/tellus | d1185357b8a2f1106bbd951558dc040c709ff826 | [
"MIT"
] | null | null | null | test/socializer_test.py | aquanauts/tellus | d1185357b8a2f1106bbd951558dc040c709ff826 | [
"MIT"
] | null | null | null | test/socializer_test.py | aquanauts/tellus | d1185357b8a2f1106bbd951558dc040c709ff826 | [
"MIT"
] | null | null | null | # pylint: skip-file
import copy
import random
from tellus.configuration import TELLUS_INTERNAL
from tellus.tell import Tell, SRC_TELLUS_USER
from tellus.tellus_sources.socializer import Socializer, CoffeeBot
from tellus.tellus_utils import datetime_from_string
from tellus.users import UserManager
from test.tells_test import create_test_teller
def create_standard_source():
teller = create_test_teller()
users = ["quislet", "saturngirl", "cosmicboy", "lightninglad", "karatekid"]
no_coffee_users = [
"bouncingboy",
"chameleonboy",
] # These legionnaires are going to sit out our coffees for now
user_manager = UserManager(teller, users + no_coffee_users)
for username in users:
user = user_manager.get_or_create_valid_user(username)
user.tell.update_datum_from_source(
Socializer.SOURCE_ID, CoffeeBot.DATUM_CURRENT_COFFEE_PAIR, "FAKE"
)
for username in no_coffee_users:
user = user_manager.get_or_create_valid_user(username)
user.tell.remove_tag(CoffeeBot.TAG_COFFEE_BOT)
user.tell.update_datum_from_source(
Socializer.SOURCE_ID, CoffeeBot.DATUM_CURRENT_COFFEE_PAIR, "FAKE"
)
socializer = Socializer(user_manager)
return socializer, user_manager, users
def current_pair(username, teller):
return teller.get(username).get_datum(
Socializer.SOURCE_ID, CoffeeBot.DATUM_CURRENT_COFFEE_PAIR
)
def history(username, teller):
return teller.get(username).get_datum(
Socializer.SOURCE_ID, CoffeeBot.DATUM_COFFEE_HISTORY
)
async def test_load_source():
socializer, user_manager, users = create_standard_source()
teller = user_manager.teller
bouncingboy = user_manager.get("bouncingboy")
coffee_bot = socializer.coffee_bot()
assert coffee_bot.last_run is None
first_user = user_manager.get(users[0])
assert first_user.tell.get_data(socializer.source_id) == {
CoffeeBot.DATUM_CURRENT_COFFEE_PAIR: "FAKE"
}
assert bouncingboy.tell.get_data(socializer.source_id) == {
CoffeeBot.DATUM_CURRENT_COFFEE_PAIR: "FAKE"
}
await socializer.load_source()
assert first_user.tell.get_data(socializer.source_id) != {
CoffeeBot.DATUM_CURRENT_COFFEE_PAIR: "FAKE"
}, "load_source() should have updated the user's coffee pair."
assert (
bouncingboy.tell.get_datum(
socializer.source_id, CoffeeBot.DATUM_CURRENT_COFFEE_PAIR
)
is None
), "load_source() should have removed Bouncing Boy's coffee pair."
assert coffee_bot.last_run is not None
previous_run = coffee_bot.last_run_time
original_tell = teller.get(CoffeeBot.TELL_COFFEE_BOT)
first_user.tell.update_datum_from_source(
socializer.source_id, CoffeeBot.DATUM_CURRENT_COFFEE_PAIR, "FAKE2"
)
await socializer.load_source()
assert (
coffee_bot.last_run_time == previous_run
), "Did not force a run so should not have updated."
assert original_tell == teller.get(
CoffeeBot.TELL_COFFEE_BOT
), "Currently, we need to delete the schedule to cause it to run again..."
assert first_user.tell.get_data(socializer.source_id) == {
CoffeeBot.DATUM_COFFEE_HISTORY: {},
CoffeeBot.DATUM_CURRENT_COFFEE_PAIR: "FAKE2",
}, "load_source() will not update coffee pairs in this scenario."
teller.delete_tell(CoffeeBot.TELL_COFFEE_BOT)
await socializer.load_source()
new_tell = teller.get(CoffeeBot.TELL_COFFEE_BOT)
coffee_bot = socializer.coffee_bot()
assert (
original_tell != new_tell
), "Deleting the original schedule should result in a new schedule being generated on load."
assert (
original_tell.audit_info.created
!= teller.get(CoffeeBot.TELL_COFFEE_BOT).audit_info.created
)
assert coffee_bot.last_run is not None
assert (
coffee_bot.last_run_time > previous_run
), "New run should be more recent than previous."
previous_run = coffee_bot.last_run_time
quislet_history = copy.deepcopy(socializer.coffee_bot().history_for("quislet"))
teller.get("bouncingboy").add_tag(CoffeeBot.TAG_COFFEE_BOT)
await socializer.load_source()
assert new_tell == teller.get(
CoffeeBot.TELL_COFFEE_BOT
), "Adding a new user to the schedule should not have forced the schedule to be recreated..."
assert quislet_history == socializer.coffee_bot().history_for(
"quislet"
), "...and should not have changed the history."
# Note we are explicitly doing this as if it were updated by a user - we previously had an issue with this
new_tell.update_datum_from_source(
SRC_TELLUS_USER, Tell.TAGS, CoffeeBot.TAG_FORCE_COFFEE
)
schedule = socializer.coffee_bot().current_schedule()
await socializer.load_source()
assert not new_tell.has_tag(
CoffeeBot.TAG_FORCE_COFFEE
), "Forcing a run should remove the force tag, even if the tag was put in by a different source (e.g., a user)"
assert (
not schedule == socializer.coffee_bot().current_schedule()
), "Should have a new schedule"
assert (
coffee_bot.last_run_time > previous_run
), "New run should be more recent than previous."
async def test_should_generate_coffee():
socializer, user_manager, users = create_standard_source()
bot = socializer.coffee_bot()
assert bot.should_generate_coffee(), "By default, we should generate coffee."
bot.pause()
assert (
not bot.should_generate_coffee()
), "We should not generate coffee when Paused."
bot.force_run()
assert (
not bot.should_generate_coffee()
), "We should not generate coffee when Paused, even if forced."
bot.pause(False)
assert (
bot.should_generate_coffee()
), "Unpausing should allow coffee to be generated."
async def test_calendar_scheduling():
socializer, user_manager, users = create_standard_source()
teller = user_manager.teller
bot = socializer.coffee_bot()
assert bot.should_generate_coffee(), "Just getting ourselves set up..."
await socializer.load_source()
assert not bot.should_generate_coffee(), "...and checking sanity..."
last_run = datetime_from_string("2020-06-01")
bot._tell.update_datum_from_source(
Socializer.SOURCE_ID, CoffeeBot.DATUM_LAST_COFFEE_BOT_RUN, last_run.isoformat()
)
assert not bot.should_generate_coffee(
datetime_from_string("2020-06-05")
), "Saturday"
assert bot.should_generate_coffee(datetime_from_string("2020-06-07")), "Sunday"
last_run = datetime_from_string("2020-06-05")
bot._tell.update_datum_from_source(
Socializer.SOURCE_ID, CoffeeBot.DATUM_LAST_COFFEE_BOT_RUN, last_run.isoformat()
)
assert not bot.should_generate_coffee(
datetime_from_string("2020-06-05")
), "Saturday"
assert not bot.should_generate_coffee(
datetime_from_string("2020-06-07")
), "Sunday, but too recent prior run"
assert bot.should_generate_coffee(
datetime_from_string("2020-06-14")
), "Sunday, and far enough out it will run again"
async def test_should_generate_new_schedule():
socializer, user_manager, users = create_standard_source()
assert (
socializer.should_generate_new_coffee_schedule()
), "Should always generate a new schedule before we've ever run."
await socializer.load_source()
coffee_tell = user_manager.teller.get(CoffeeBot.TELL_COFFEE_BOT)
assert (
not socializer.should_generate_new_coffee_schedule()
), "Should not generate a new schedule if we've got one."
coffee_tell.add_tag(CoffeeBot.TAG_FORCE_COFFEE)
assert (
socializer.should_generate_new_coffee_schedule()
), "Should generate a new schedule if the bot says to force one."
await socializer.load_source()
assert not coffee_tell.has_tag(
CoffeeBot.TAG_FORCE_COFFEE
), "Forcing a run should remove the force tag"
assert (
not socializer.should_generate_new_coffee_schedule()
), "And we should be back to not generating a schedule."
def test_coffee_for():
schedule = [("quislet", "saturngirl"), ("lightninglad", "cosmicboy")]
assert CoffeeBot.coffee_from_schedule("quislet", schedule) == "saturngirl"
assert CoffeeBot.coffee_from_schedule("saturngirl", schedule) == "quislet"
assert CoffeeBot.coffee_from_schedule("lightninglad", schedule) == "cosmicboy"
assert CoffeeBot.coffee_from_schedule("cosmicboy", schedule) == "lightninglad"
assert (
CoffeeBot.coffee_from_schedule("mattereaterlad", schedule) is None
), "Matter Eater Lad should be lonely."
async def test_coffee_scheduler_2():
socializer, user_manager, users = create_standard_source()
teller = user_manager.teller
# A little hackery to get into the state we want...
bot_tell = teller.create_tell(
CoffeeBot.TELL_COFFEE_BOT, TELLUS_INTERNAL, "test_new_coffee_scheduler"
)
bot = socializer.coffee_bot()
history = {
"quislet": {"saturngirl": 1, "cosmicboy": 1, "lightninglad": 1},
"lightninglad": {"saturngirl": 1, "cosmicboy": 1},
}
bot_tell.update_datum_from_source(
socializer.source_id, CoffeeBot.DATUM_COFFEE_HISTORY, history
)
assert bot.history_for("quislet") == history["quislet"], "Just to check..."
assert (
bot.history_for("lightninglad") == history["lightninglad"]
), "Just to check..."
assert bot.should_generate_coffee()
await socializer.load_source()
assert (
bot.coffee_with("quislet") == "karatekid"
), "With the current history, Karate Kid should always be next for Quislet"
assert history == bot.history()
async def test_ordered_history():
socializer, user_manager, users = create_standard_source()
teller = user_manager.teller
# "quislet", "saturngirl", "cosmicboy", "lightninglad", "karatekid"
# A little hackery to get into the state we want...
bot_tell = teller.create_tell(
CoffeeBot.TELL_COFFEE_BOT, TELLUS_INTERNAL, "test_new_coffee_scheduler"
)
bot = socializer.coffee_bot()
pair_counts = {
"quislet": {
"saturngirl": 1,
"cosmicboy": 1,
"lightninglad": 1,
"karatekid": 1,
"BYEWEEK": 1,
},
"lightninglad": {
"saturngirl": 1,
"cosmicboy": 1,
"karatekid": 1,
"quislet": 1,
"BYEWEEK": 1,
},
"saturngirl": {
"cosmicboy": 1,
"lightninglad": 1,
"karatekid": 1,
"quislet": 1,
"BYEWEEK": 1,
},
"cosmicboy": {
"saturngirl": 1,
"lightninglad": 1,
"karatekid": 1,
"quislet": 1,
"BYEWEEK": 1,
},
"karatekid": {
"saturngirl": 1,
"cosmicboy": 1,
"lightninglad": 1,
"quislet": 1,
"BYEWEEK": 1,
},
}
bot_tell.update_datum_from_source(
socializer.source_id, CoffeeBot.DATUM_COFFEE_HISTORY, pair_counts
)
assert bot.history_for("quislet") == pair_counts["quislet"], "Spot check..."
assert (
bot.history_for("lightninglad") == pair_counts["lightninglad"]
), "Spot check..."
for i in range(10):
bot.force_run()
assert bot.should_generate_coffee()
await socializer.load_source()
new_history = bot.history()
for user, pair_counts in new_history.items():
assert current_pair(user, teller) == list(pair_counts.keys())[-1]
assert history(user, teller) == bot.history_for(user)
for pair, count in pair_counts.items():
if pair != CoffeeBot.BYE_WEEK_COFFEE:
assert new_history[user].get(pair) == new_history[pair].get(user), (
f"Users and their pairs should always have the same count, "
f"but {user} and {pair} do not match: {new_history}"
)
def test_find_best_pair():
history = {
"quislet": {"saturngirl": 1, "cosmicboy": 1, "lightninglad": 1},
"lightninglad": {"saturngirl": 1, "cosmicboy": 1, "karatekid": 1},
"saturngirl": {"quislet": 1, "cosmicboy": 3, "karatekid": 2, "lightninglad": 2},
} # this is, of course, impossible
assert (
CoffeeBot._find_best_pair(
history["quislet"], ["saturngirl", "cosmicboy", "lightninglad", "karatekid"]
)
== "karatekid"
), "Quislet should always be paired with Karate Kid given the history"
assert (
CoffeeBot._find_best_pair(
history["lightninglad"], ["saturngirl", "cosmicboy", "quislet", "karatekid"]
)
== "quislet"
), "Lightning Lad should always be paired with Quislet given the history"
assert (
CoffeeBot._find_best_pair(
history["saturngirl"], ["quislet", "cosmicboy", "lightninglad", "karatekid"]
)
== "quislet"
), "Saturn girl should always be paired with Quislet given the history"
assert (
CoffeeBot._find_best_pair(
history["saturngirl"],
["cosmicboy", "lightninglad", "karatekid", "wildfire"],
)
== "wildfire"
), "Wildfire should always be the best pair given the history"
assert CoffeeBot._find_best_pair(
history["quislet"], ["cosmicboy", "lightninglad", "karatekid", "wildfire"]
) in [
"wildfire",
"karatekid",
], "Wildfire or Karate Kid should always be the best pair given the history"
def test_sorted_coffee_users():
users = ["quislet", "saturngirl", "cosmicboy", "lightninglad", "karatekid"]
random.shuffle(users) # just to check myself
history = {
"quislet": {"saturngirl": 1, "cosmicboy": 1, "lightninglad": 1},
"lightninglad": {"saturngirl": 1, "cosmicboy": 9, "karatekid": 1},
"saturngirl": {"cosmicboy": 1, "karatekid": 1},
"karatekid": {"quislet": 4},
}
assert CoffeeBot.sorted_coffee_users(users, history) == [
"lightninglad",
"karatekid",
"quislet",
"saturngirl",
"cosmicboy",
]
async def s_test_iterations(fs):
# Using this occasionally to eyeball how the algorithm does over time
# It should generate a roughly balanced set of coffees
# It...mostly works?
socializer, user_manager, users = create_standard_source()
enabled_users = socializer.determine_coffee_bot_users()
bot = socializer.coffee_bot()
history_history = ""
run_assertions = (
True # To turn on the inconsistent assertions when I want to test them
)
bot.set_algo(CoffeeBot.TAG_ALGO_1)
cycles = 100
for cycle in range(1, cycles):
for week in range(0, len(enabled_users)):
await socializer.load_source()
bot.force_run()
history_history += f"{cycle}: {bot.history()}\n"
if run_assertions:
for user in enabled_users:
history = bot.history_for(user)
# These assertions are true a lot of the time but not always - there is some probability involved,
# depending on which algo I'm using.
# So this will fail inconsistently, hence not a safe unit test.
assert len(history) == len(
enabled_users
), f"{cycle}: {user} should have had coffee with each user once after a cycle: {history}"
if cycle < 50:
# This is only true till we add in the newcomers
assert all(
count == cycle for count in history.values()
), f"{user} should have had {cycle} coffees with each user: {history}"
if cycle == cycles / 2: # Halfway through, we add a couple of users
history_history += f"CYCLE {cycle}: Adding users\n"
user_manager.get("bouncingboy").tell.add_tag(CoffeeBot.TAG_COFFEE_BOT)
user_manager.get("chameleonboy").tell.add_tag(CoffeeBot.TAG_COFFEE_BOT)
enabled_users = socializer.determine_coffee_bot_users()
print(history_history)
bot.print_history()
#####
# Prior Algo
#
# I'm being a little overly careful here because figuring out an algo that worked was...not as easy as it sounds.
# So while I am trying to simplify Coffee bot, I am putting the original algo here for safekeeping.
#
# This is deprecated, and can be deleted once we have been running without this algo as a safety net for a while.
#
#####
# @pytest.mark.skip(
# reason="We are currently using Algo 2, and this test has a low probability intermittent failure."
# )
def old_test_coffee_scheduler_1(fs):
socializer, user_manager, users = create_standard_source()
enabled_users = socializer.determine_coffee_bot_users()
coffee_bot = socializer.coffee_bot()
coffee_bot.set_algo(CoffeeBot.TAG_ALGO_1)
socializer.make_coffee_schedule()
current_schedule = coffee_bot.current_schedule()
assert current_schedule is not None
people_scheduled = coffee_bot.current_scheduled_users()
assert people_scheduled == sorted(enabled_users)
assert coffee_bot.history() == {}
socializer.lock_in_coffee_schedule()
coffee_history = coffee_bot.history()
assert len(coffee_history) == len(enabled_users)
for user in enabled_users:
user_history = coffee_bot.history_for(user)
assert len(user_history) == 1, "Should have only had one coffee"
assert (
next(iter(user_history.values())) == 1
), "Should have had one coffee with that person"
for week in range(1, len(enabled_users)):
coffee_bot.update_schedule(enabled_users)
socializer.lock_in_coffee_schedule()
assert current_schedule != coffee_bot.current_schedule()
for user in enabled_users:
user_history = coffee_bot.history_for(user)
assert len(user_history) == len(
enabled_users
), "Every user should have had coffee with the other scheduled users"
for value in user_history.values():
assert (
value == 1
), "Everyone should have just had one coffee with each other user..."
# Let's roll along!
for cycle in range(2, 10):
for week in range(0, len(enabled_users)):
coffee_bot.update_schedule(enabled_users)
socializer.lock_in_coffee_schedule()
# This assertion will very occasionally result in an intermittent failure - not currently worth debugging.
assert current_schedule != coffee_bot.current_schedule()
for user in enabled_users:
user_history = coffee_bot.history_for(user)
assert len(user_history) == len(
enabled_users
), "Every user should have had coffee with the other scheduled users"
for value in user_history.values():
assert (
value == cycle
), f"Everyone should have had {cycle} coffees with each other user..."
bouncingboy = user_manager.get("bouncingboy")
bouncingboy.tell.add_tag(CoffeeBot.TAG_COFFEE_BOT)
enabled_users = socializer.determine_coffee_bot_users()
assert bouncingboy.username in enabled_users
coffee_bot.update_schedule(enabled_users)
socializer.lock_in_coffee_schedule()
bb_history = coffee_bot.history_for(bouncingboy.username)
assert len(bb_history) == 1, "Bouncing Boy should have had one coffee"
assert (
next(iter(bb_history.values())) == 1
), "Should have had one coffee with that person"
# -- ALGO 1 ---
def _schedule_coffees_1(people, sets=None):
"""
Schedules coffee pairings for a group of people. Created from various "round robin tournament" algorithms.
:param people: a group of people to schedule for coffees (if you want it to be random, do externally)
:param sets: the number of sets of coffee to calculate (defaults to people - 1)
:return: a list of coffee pairing tuples
"""
# logging.info("Scheduling Coffees with Algo 1.")
if len(people) % 2:
people = list(people)
people.append(CoffeeBot.BYE_WEEK_COFFEE)
count = len(people)
sets = sets or (count - 1)
half = int(count / 2)
schedule = []
for turn in range(sets):
left = people[:half]
right = people[count - half :][::-1]
pairings = zip(left, right)
if turn % 2 == 1:
pairings = [(y, x) for (x, y) in pairings]
else:
pairings = list(pairings)  # materialize the zip into a list of tuples
people.insert(1, people.pop())
schedule.append(pairings)
return schedule
| 36.991135 | 118 | 0.658247 | 2,554 | 20,863 | 5.161316 | 0.143696 | 0.040282 | 0.022758 | 0.028675 | 0.567896 | 0.501517 | 0.422242 | 0.391367 | 0.344409 | 0.310044 | 0 | 0.009307 | 0.242966 | 20,863 | 563 | 119 | 37.056838 | 0.825313 | 0.089872 | 0 | 0.422122 | 0 | 0.004515 | 0.212062 | 0.002643 | 0 | 0 | 0 | 0 | 0.1693 | 1 | 0.018059 | false | 0 | 0.018059 | 0.004515 | 0.045147 | 0.004515 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2071613e7a408eabf03c6b2b9802aa4339e771ee | 955 | py | Python | WEEKS/CD_Sata-Structures/general/practice/leetCode_30DaysOfCode/day_17/number_of_islands.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/leetCode_30DaysOfCode/day_17/number_of_islands.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/leetCode_30DaysOfCode/day_17/number_of_islands.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | """
Given a 2d grid map of '1's (land) and '0's (water), count the number of islands. An island is surrounded by water
and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid
are all surrounded by water.
"""
def numIslands(grid):
if grid is None or len(grid) == 0:
return 0
nuOfIslands = 0
for i in range(len(grid)):
for j in range(len(grid[i])):
if grid[i][j] == "1":
nuOfIslands += 1
dfs(grid, i, j)
return nuOfIslands
def dfs(grid, i, j):
if (
(j >= len(grid[0]))
or (j < 0)
or (i < 0)
or (i >= len(grid))
or (grid[i][j] != "1")
):
return 0
grid[i][j] = "0"  # mark the cell as visited
dfs(grid, i, j + 1)
dfs(grid, i + 1, j)
dfs(grid, i - 1, j)
dfs(grid, i, j - 1)
grid = [["1", "1", "0"], ["1", "1", "0"], ["0", "0", "1"], ["0", "0", "0"]]
print(numIslands(grid))
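# Hypothetical sanity check (not in the original file): the sample grid above
# holds two islands - the 2x2 block of "1"s in the top-left corner and the
# lone "1" at row 2, column 2 - so the call above prints 2.
assert numIslands([["1", "1", "0"], ["1", "1", "0"], ["0", "0", "1"], ["0", "0", "0"]]) == 2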
| 25.131579 | 114 | 0.503665 | 156 | 955 | 3.083333 | 0.314103 | 0.10395 | 0.087318 | 0.058212 | 0.08316 | 0.058212 | 0.058212 | 0.058212 | 0 | 0 | 0 | 0.046296 | 0.321466 | 955 | 37 | 115 | 25.810811 | 0.695988 | 0.268063 | 0 | 0.076923 | 0 | 0 | 0.02026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0 | 0 | 0.192308 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2071b7738ea9668595f43aa1c16e6826cb81da1e | 2,607 | py | Python | tests/integration/helpers.py | canonical/alertmanager-operator | 48faea21c701a2edefe853de5b04ac1faf6cd736 | [
"Apache-2.0"
] | 1 | 2021-03-28T14:37:13.000Z | 2021-03-28T14:37:13.000Z | tests/integration/helpers.py | canonical/alertmanager-operator | 48faea21c701a2edefe853de5b04ac1faf6cd736 | [
"Apache-2.0"
] | 14 | 2020-11-12T11:22:28.000Z | 2021-09-23T23:51:05.000Z | tests/integration/helpers.py | canonical/alertmanager-operator | 48faea21c701a2edefe853de5b04ac1faf6cd736 | [
"Apache-2.0"
] | 7 | 2020-11-11T23:10:41.000Z | 2021-11-12T14:11:14.000Z | # Copyright 2021 Canonical Ltd.
# See LICENSE file for licensing details.
"""Helper functions for writing tests."""
import logging
from typing import Dict
from pytest_operator.plugin import OpsTest
log = logging.getLogger(__name__)
async def get_unit_address(ops_test: OpsTest, app_name: str, unit_num: int) -> str:
"""Get private address of a unit."""
status = await ops_test.model.get_status() # noqa: F821
return status["applications"][app_name]["units"][f"{app_name}/{unit_num}"]["address"]
def interleave(l1: list, l2: list) -> list:
"""Interleave two lists.
>>> interleave([1,2,3], ['a', 'b', 'c'])
[1, 'a', 2, 'b', 3, 'c']
Reference: https://stackoverflow.com/a/11125298/3516684
"""
return [x for t in zip(l1, l2) for x in t]
async def cli_upgrade_from_path_and_wait(
ops_test: OpsTest,
path: str,
alias: str,
resources: Dict[str, str] = None,
wait_for_status: str = None,
):
if resources is None:
resources = {}
resource_pairs = [f"{k}={v}" for k, v in resources.items()]
resource_arg_prefixes = ["--resource"] * len(resource_pairs)
resource_args = interleave(resource_arg_prefixes, resource_pairs)
cmd = [
"juju",
"refresh",
"--path",
path,
alias,
*resource_args,
]
retcode, stdout, stderr = await ops_test._run(*cmd)
assert retcode == 0, f"Upgrade failed: {(stderr or stdout).strip()}"
log.info(stdout)
await ops_test.model.wait_for_idle(apps=[alias], status=wait_for_status, timeout=120)
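# Example invocation (hypothetical charm path, alias and resource values):
#
# await cli_upgrade_from_path_and_wait(
#     ops_test,
#     path="./alertmanager-k8s_ubuntu-20.04-amd64.charm",
#     alias="am",
#     resources={"alertmanager-image": "prom/alertmanager:v0.23.0"},
#     wait_for_status="active",
# )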
class IPAddressWorkaround:
"""Context manager for deploying a charm that needs to have its IP address.
Due to a juju bug, occasionally some charms finish a startup sequence without
having an ip address returned by `bind_address`.
https://bugs.launchpad.net/juju/+bug/1929364
Issuing dummy update_status just to trigger an event, and then restore it.
"""
def __init__(self, ops_test: OpsTest):
self.ops_test = ops_test
async def __aenter__(self):
"""On entry, the update status interval is set to the minimum 10s."""
config = await self.ops_test.model.get_config()
self.revert_to = config["update-status-hook-interval"]
await self.ops_test.model.set_config({"update-status-hook-interval": "10s"})
return self
async def __aexit__(self, exc_type, exc_value, exc_traceback):
"""On exit, the update status interval is reverted to its original value."""
await self.ops_test.model.set_config({"update-status-hook-interval": self.revert_to})
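# A minimal usage sketch (assumed, not part of the original module): wrap the
# deployment step of an integration test so update-status fires every 10s
# until the charm has picked up its bind address. The charm and application
# names below are placeholders.
#
# async def test_deploy(ops_test: OpsTest):
#     async with IPAddressWorkaround(ops_test):
#         await ops_test.model.deploy("alertmanager-k8s", application_name="am")
#         await ops_test.model.wait_for_idle(apps=["am"], status="active")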
| 31.792683 | 93 | 0.667817 | 364 | 2,607 | 4.598901 | 0.456044 | 0.045998 | 0.035842 | 0.028674 | 0.124851 | 0.064516 | 0.064516 | 0.064516 | 0.064516 | 0.064516 | 0 | 0.022749 | 0.207518 | 2,607 | 81 | 94 | 32.185185 | 0.787512 | 0.224012 | 0 | 0 | 0 | 0 | 0.115901 | 0.057111 | 0 | 0 | 0 | 0 | 0.023256 | 1 | 0.046512 | false | 0 | 0.069767 | 0 | 0.209302 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207275b58d80650023d7512a278cfc21b82f461a | 1,841 | py | Python | c3py/regions.py | h0s/c3py | 6fb669dd07e9a8433631b64b08213f5f38606ca1 | [
"MIT"
] | 1 | 2015-11-20T05:43:15.000Z | 2015-11-20T05:43:15.000Z | c3py/regions.py | h0s/c3py | 6fb669dd07e9a8433631b64b08213f5f38606ca1 | [
"MIT"
] | null | null | null | c3py/regions.py | h0s/c3py | 6fb669dd07e9a8433631b64b08213f5f38606ca1 | [
"MIT"
] | null | null | null | from .chart_component import ChartComponentList
class Regions(ChartComponentList):
"""
Highlight selected regions on the chart.
Parameters
----------
axes : c3py.axes.Axes
The chart's Axes object.
"""
def __init__(self, axes):
super(Regions, self).__init__()
self.axes = axes
self.styles = []
def add(self, name, axis, start, end, color):
"""
Add a region to be highlighted on the chart.
Parameters
----------
name : str
The name of the region. This will be the name of the CSS class that defines the region, therefore no two
regions on the same chart should have the same name.
axis : str
The axis on which to highlight the region.
**Accepts:** ['x' | 'y' | 'y2']
start : must match x axis type
The start position of the region.
end : must match x axis type
The end position of the region.
color : str
The color of the region. This can be a CSS named color, a hexadecimal value, or an RGB tuple.
Returns
-------
None
"""
if axis not in ['x', 'y', 'y2']:
raise Exception('axis must be either "x", "y" or "y2".')
else:
if axis == 'x' and self.axes.x_axis.config['type'] != self.__string_wrap__('indexed'):
start = self.__string_wrap__(start)
end = self.__string_wrap__(end)
self.config.append({
'class': self.__string_wrap__(name),
'axis': self.__string_wrap__(axis),
'start': start,
'end': end,
})
self.styles.append({
'name': name,
'fill': color,
}) | 24.878378 | 116 | 0.516567 | 215 | 1,841 | 4.260465 | 0.367442 | 0.027293 | 0.076419 | 0.043668 | 0.098253 | 0.098253 | 0.098253 | 0.098253 | 0.098253 | 0.098253 | 0 | 0.003487 | 0.376969 | 1,841 | 74 | 117 | 24.878378 | 0.795118 | 0.397067 | 0 | 0.086957 | 0 | 0 | 0.084783 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086957 | false | 0 | 0.043478 | 0 | 0.173913 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
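# A brief usage sketch (hypothetical; `axes` stands in for a configured
# c3py.axes.Axes object and the names/colors are illustrative only):
#
# regions = Regions(axes)
# regions.add(name="q1", axis="x", start=0, end=10, color="#cccccc")
# regions.add(name="warning", axis="y", start=80, end=100, color="red")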
2072be0c4aa28bf08e6acf94dbe34fa893e5ddc8 | 4,420 | py | Python | projects/2project/NeuralNetwork/SolutionNNRegressionKeras.py | fridtjrg/FYS-STK4155 | 071a039c9c9994c0d125b9432c05ddb08991bca9 | [
"MIT"
] | null | null | null | projects/2project/NeuralNetwork/SolutionNNRegressionKeras.py | fridtjrg/FYS-STK4155 | 071a039c9c9994c0d125b9432c05ddb08991bca9 | [
"MIT"
] | 1 | 2021-10-03T15:16:07.000Z | 2021-10-03T15:16:07.000Z | projects/2project/NeuralNetwork/SolutionNNRegressionKeras.py | fridtjrg/FYS-STK4155 | 071a039c9c9994c0d125b9432c05ddb08991bca9 | [
"MIT"
] | null | null | null | import tensorflow as tf
import numpy as np
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.models import Sequential
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
#====================== DATA
import sys
sys.path.append("../Data")
from DataRegression import X, X_test, X_train, x, x_mesh, y_mesh, z_test, z_train, plotSave, plotFunction, x_y_test, x_y_train, x_y, z, MSE
n_hidden_neurons = 50
batch_size = 5
epochs = 200
Eta = np.logspace(-6, -4, 5)
Lambda = np.logspace(-6, -4, 5)
train_mse = np.zeros((len(Eta), len(Lambda)))
test_mse = np.zeros((len(Eta), len(Lambda)))
compt = 0
best_learning_rate_NN = 0
best_lambda_NN = 0
best_mse_NN = 1e10
for i, eta in enumerate(Eta):
for j, _lambda in enumerate(Lambda):
#===============================#
# Training #
#===============================#
#======= Keras NN
sgd = tf.keras.optimizers.SGD(lr=eta, momentum=_lambda, nesterov=True)
model = Sequential()
model.add(Dense(n_hidden_neurons, activation='sigmoid', input_dim=X_train.shape[1]))
model.add(Dense(units=n_hidden_neurons, activation='sigmoid'))
model.add(Dense(units=1))
model.compile(optimizer=sgd, loss='mean_squared_error')
model.fit(X_train, z_train, batch_size=batch_size, epochs=epochs)
#===============================#
# Testing #
#===============================#
#======= Keras NN predictions
ytilde_test = model.predict(X_test)
ytilde_train = model.predict(X_train)
mse_test = MSE(z_test, ytilde_test)
if MSE(z_train, ytilde_train) < 1e10:
train_mse[i][j] = MSE(z_train, ytilde_train)
test_mse[i][j] = MSE(z_test, ytilde_test)
else:
train_mse[i][j] = np.inf
test_mse[i][j] = np.inf
if mse_test < best_mse_NN:
best_learning_rate_NN = i
best_lambda_NN = j
best_mse_NN = mse_test
compt += 1
print("step : " + str(compt) + "/" + str(len(Eta)*len(Lambda)))
#===============================#
# Training for the best #
# learning rate and lambda #
#===============================#
#======= Optimizer (with optimal parameters)
sgd = tf.keras.optimizers.SGD(lr=Eta[best_learning_rate_NN], momentum=Lambda[best_lambda_NN], nesterov=True)
#======= Keras NN (with optimal parameters)
model = Sequential()
model.add(Dense(n_hidden_neurons, activation='sigmoid', input_dim=X_train.shape[1]))
model.add(Dense(units=n_hidden_neurons, activation='sigmoid'))
model.add(Dense(units=1))
model.compile(optimizer=sgd, loss='mean_squared_error')
model.fit(X_train, z_train, batch_size=batch_size, epochs=epochs)
z_pred_NN = model.predict(X)
ytilde_test = model.predict(X_test)
ytilde_train = model.predict(X_train)
#title_NN = 'prediction_NN' + '_lamda_' + str(best_lambda_rate_NN) + '_eta_' + str(best_learning_rate_NN)
title_NN = 'NN_prediction_keras'
plotSave(x_mesh, y_mesh, z,'../Figures/NN/', 'Noisy_dataset' )
plotSave(x_mesh, y_mesh, z_pred_NN.reshape(len(x), len(x)),'../Figures/NN/',title_NN)
fig, ax = plt.subplots(figsize = (6, 5))
sns.heatmap(train_mse,vmin=0,vmax=0.3, annot=True, ax=ax)
#ax.set_title("Training mse")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.savefig('../Figures/NN/train_heatmap_keras.pdf')
fig, ax = plt.subplots(figsize = (6, 5))
sns.heatmap(test_mse,vmin=0,vmax=0.3, annot=True, ax=ax)
#ax.set_title("Test mse")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.savefig('../Figures/NN/test_heatmap_keras.pdf')
print('==========================================================')
print('Our final model is built with the following hyperparameters:')
print('Learning_rate = ', Eta[best_learning_rate_NN])
print('Lambda = ', Lambda[best_lambda_NN])
print('Epochs = ', epochs)
print('Batch size = ', batch_size)
print('----------------------------------------------------------')
print('The Eta and Lambda values we tested for are as follows:')
print('Eta = ', Eta)
print('Lambda =', Lambda)
print('----------------------------------------------------------')
print('Mean square error of prediction:')
print('Train MSE = ', MSE(z_train, ytilde_train))
print('Test MSE = ', MSE(z_test, ytilde_test))
print('==========================================================')
plt.show()
| 30.482759 | 139 | 0.597285 | 599 | 4,420 | 4.183639 | 0.222037 | 0.01676 | 0.038308 | 0.035914 | 0.46648 | 0.394254 | 0.37909 | 0.336792 | 0.336792 | 0.308859 | 0 | 0.009767 | 0.166063 | 4,420 | 144 | 140 | 30.694444 | 0.670103 | 0.136878 | 0 | 0.309524 | 0 | 0 | 0.185997 | 0.080581 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0.190476 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20737648d812b6fad6bec05c7ca144f99ca2d842 | 2,551 | py | Python | src/tf_imgaug/sequential.py | Marselliy/tf-aug | 478a632e1822722a74397a169b0b63bd0c5692e7 | [
"MIT"
] | 1 | 2019-01-11T15:36:24.000Z | 2019-01-11T15:36:24.000Z | src/tf_imgaug/sequential.py | Marselliy/tf-aug | 478a632e1822722a74397a169b0b63bd0c5692e7 | [
"MIT"
] | null | null | null | src/tf_imgaug/sequential.py | Marselliy/tf-aug | 478a632e1822722a74397a169b0b63bd0c5692e7 | [
"MIT"
] | null | null | null | import random
import tensorflow as tf
class Sequential:
def __init__(self, augments, seed=random.randint(0, 2 ** 32), n_augments=1, keypoints_format='xy', bboxes_format='xyxy'):
self.augments = augments
for aug in augments:
aug._set_formats(keypoints_format, bboxes_format)
self.random = random.Random(seed)
self.n_augments = n_augments
def __call__(self, images, keypoints=None, bboxes=None, segmaps=None):
with tf.name_scope('Sequential'):
with tf.name_scope('prepare'):
keypoints_none = False
if keypoints is None:
keypoints_none = True
keypoints = tf.zeros(tf.concat([tf.shape(images)[:1], [0, 2]], axis=0))
bboxes_none = False
if bboxes is None:
bboxes_none = True
bboxes = tf.zeros(tf.concat([tf.shape(images)[:1], [0, 4]], axis=0))
segmaps_none = False
if segmaps is None:
segmaps_none = True
segmaps = tf.zeros(tf.concat([tf.shape(images)[:3], [0]], axis=0))
res = (images, keypoints, bboxes, segmaps)
res = tuple([tf.tile(e, tf.concat([[self.n_augments], tf.ones_like(tf.shape(e)[1:], dtype=tf.int32)], axis=0)) for e in res])
if images.dtype != tf.float32:
res = (tf.image.convert_image_dtype(res[0], tf.float32),) + res[1:]
if segmaps.dtype != tf.float32:
res = res[:-1] + (tf.image.convert_image_dtype(res[3], tf.float32),)
for aug in self.augments:
aug._set_seed(self.random.randint(0, 2 ** 32))
res = aug(*res)
segmaps = res[3]
segmaps = tf.concat([tf.ones_like(segmaps[..., :1]) * 0e-2, segmaps], axis=-1)
segmaps = tf.one_hot(tf.argmax(segmaps, axis=-1), tf.shape(segmaps)[-1])[..., 1:]
res = res[:-1] + (segmaps,)
if images.dtype != tf.float32:
res = (tf.image.convert_image_dtype(res[0], images.dtype),) + res[1:]
if segmaps.dtype != tf.float32:
res = res[:-1] + (tf.image.convert_image_dtype(res[3], segmaps.dtype),)
result = [res[0]]
if not keypoints_none:
result.append(res[1])
if not bboxes_none:
result.append(res[2])
if not segmaps_none:
result.append(res[3])
return tuple(result)
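# A minimal usage sketch (assumed - the augmenter instances passed in are
# placeholders, not verified class names from this package):
#
# seq = Sequential([some_flip_aug, some_crop_aug], seed=42, n_augments=4,
#                  keypoints_format='xy', bboxes_format='xyxy')
# aug_images, aug_keypoints, aug_bboxes = seq(images, keypoints=kps, bboxes=boxes)
#
# Inputs that are omitted (keypoints, bboxes, segmaps) are filled with empty
# tensors internally and excluded from the returned tuple.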
| 39.859375 | 141 | 0.531164 | 319 | 2,551 | 4.115987 | 0.203762 | 0.041127 | 0.045697 | 0.05179 | 0.268088 | 0.242193 | 0.242193 | 0.220868 | 0.220868 | 0.175171 | 0 | 0.033372 | 0.330459 | 2,551 | 63 | 142 | 40.492063 | 0.735363 | 0 | 0 | 0.081633 | 0 | 0 | 0.00902 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.040816 | false | 0 | 0.040816 | 0 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20752a68676d3d8a6893e798673529b3ef5ebcf1 | 6,585 | py | Python | ifitwala_ed/school_settings/doctype/school_calendar/school_calendar.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 13 | 2020-09-02T10:27:57.000Z | 2022-03-11T15:28:46.000Z | ifitwala_ed/school_settings/doctype/school_calendar/school_calendar.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 43 | 2020-09-02T07:00:42.000Z | 2021-07-05T13:22:58.000Z | ifitwala_ed/school_settings/doctype/school_calendar/school_calendar.py | mohsinalimat/ifitwala_ed | 8927695ed9dee36e56571c442ebbe6e6431c7d46 | [
"MIT"
] | 6 | 2020-10-19T01:02:18.000Z | 2022-03-11T15:28:47.000Z | # -*- coding: utf-8 -*-
# Copyright (c) 2020, ifitwala and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
import json
from frappe import _
from frappe.utils import get_link_to_form, today, getdate, formatdate, date_diff, cint
from frappe.model.document import Document
class SchoolCalendar(Document):
def onload(self):
weekend_color = frappe.get_single("Education Settings").weekend_color
self.set_onload("weekend_color", weekend_color)
break_color = frappe.get_single("Education Settings").break_color
self.set_onload("break_color", break_color)
def validate(self):
if not self.terms:
self.extend("terms", self.get_terms())
ay = frappe.get_doc("Academic Year", self.academic_year)
self.validate_dates()
if ay.school != self.school:
frappe.throw(_("The Academic Year {0} does not belong to the School {1}").format(get_link_to_form("Academic Year", self.academic_year), get_link_to_form("School", self.school)))
self.total_holiday_days = len(self.holidays)
self.total_number_day = date_diff(ay.year_end_date, ay.year_start_date)
self.total_instruction_days = date_diff(ay.year_end_date, ay.year_start_date) - self.total_holiday_days
@frappe.whitelist()
def get_terms(self):
self.terms = []
terms = frappe.get_list("Academic Term", filters = {"academic_year":self.academic_year}, fields=["name as term", "term_start_date as start", "term_end_date as end"])
if not terms:
frappe.throw(_("You need to add at least one term for this academic year {0}.").format(get_link_to_form("Academic Year", self.academic_year)))
for term in terms:
self.append("terms", {"term": term.term, "start": term.start, "end": term.end, "length": date_diff(getdate(term.end), getdate(term.start))})
def validate_dates(self):
ay = frappe.get_doc("Academic Year", self.academic_year)
for day in self.get("holidays"):
if not (getdate(ay.year_start_date) <= getdate(day.holiday_date) <= getdate(ay.year_end_date)):
frappe.throw(_("The holiday on {0} should be within your academic year {1} dates.").format(formatdate(day.holiday_date), get_link_to_form("Academic Year", self.academic_year)))
@frappe.whitelist()
def get_long_break_dates(self):
ay = frappe.get_doc("Academic Year", self.academic_year)
self.validate_break_dates()
date_list = self.get_long_break_dates_list(self.start_of_break, self.end_of_break)
last_idx = max([cint(d.idx) for d in self.get("holidays")] or [0,])
for i, d in enumerate(date_list):
ch = self.append("holidays", {})
ch.description = self.break_description if self.break_description else "Break"
ch.color = self.break_color if self.break_color else ""
ch.holiday_date = d
ch.idx = last_idx + i + 1
@frappe.whitelist()
def get_weekly_off_dates(self):
ay = frappe.get_doc("Academic Year", self.academic_year)
self.validate_values()
date_list = self.get_weekly_off_dates_list(ay.year_start_date, ay.year_end_date)
last_idx = max([cint(d.idx) for d in self.get("holidays")] or [0,])
for i, d in enumerate(date_list):
ch = self.append("holidays", {})
ch.description = _(self.weekly_off)
ch.holiday_date = d
ch.color = self.weekend_color if self.weekend_color else ""
ch.weekly_off = 1
ch.idx = last_idx + i + 1
# logic for the button "clear_table"
@frappe.whitelist()
def clear_table(self):
self.set("holidays", [])
def validate_break_dates(self):
ay = frappe.get_doc("Academic Year", self.academic_year)
if not self.start_of_break and not self.end_of_break:
frappe.throw(_("Please select first the start and end of your break."))
if getdate(self.start_of_break) > getdate(self.end_of_break):
frappe.throw(_("The start of the break cannot be after its end. Adjust the dates."))
if not (getdate(ay.year_start_date) <= getdate(self.start_of_break) <= getdate(ay.year_end_date)) or not (getdate(ay.year_start_date) <= getdate(self.end_of_break) <= getdate(ay.year_end_date)):
frappe.throw(_("The holiday called {0} should be within your academic year {1} dates.").format(self.break_description, get_link_to_form("Academic Year", self.academic_year)))
def get_long_break_dates_list(self, start_date, end_date):
start_date, end_date = getdate(start_date), getdate(end_date)
from datetime import timedelta
import calendar
date_list = []
existing_date_list = []
reference_date = start_date
existing_date_list = [getdate(holiday.holiday_date) for holiday in self.get("holidays")]
while reference_date <= end_date:
if reference_date not in existing_date_list:
date_list.append(reference_date)
reference_date += timedelta(days = 1)
return date_list
def validate_values(self):
if not self.weekly_off:
frappe.throw(_("Please select first the weekly off days."))
def get_weekly_off_dates_list(self, start_date, end_date):
start_date, end_date = getdate(start_date), getdate(end_date)
from dateutil import relativedelta
from datetime import timedelta
import calendar
date_list = []
existing_date_list = []
weekday = getattr(calendar, (self.weekly_off).upper())
reference_date = start_date + relativedelta.relativedelta(weekday = weekday)
existing_date_list = [getdate(holiday.holiday_date) for holiday in self.get("holidays")]
while reference_date <= end_date:
if reference_date not in existing_date_list:
date_list.append(reference_date)
reference_date += timedelta(days = 7)
return date_list
@frappe.whitelist()
def get_events(start, end, filters=None):
if filters:
filters = json.loads(filters)
else:
filters = []
if start:
filters.append(['Holiday', 'holiday_date', '>', getdate(start)])
if end:
filters.append(['Holiday', 'holiday_date', '<', getdate(end)])
return frappe.get_list('School Calendar',
fields=['name', 'academic_year', 'school', '`tabHoliday`.holiday_date', '`tabHoliday`.description', '`tabHoliday`.color'],
filters = filters,
update={"allDay": 1})
| 44.795918 | 202 | 0.672134 | 899 | 6,585 | 4.671858 | 0.157953 | 0.071429 | 0.049524 | 0.057143 | 0.525952 | 0.49619 | 0.42 | 0.401905 | 0.37619 | 0.337143 | 0 | 0.003863 | 0.213667 | 6,585 | 146 | 203 | 45.10274 | 0.807261 | 0.022779 | 0 | 0.344828 | 0 | 0 | 0.143079 | 0.007621 | 0 | 0 | 0 | 0 | 0 | 1 | 0.103448 | false | 0 | 0.094828 | 0 | 0.232759 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20779422059fa5a2638b842b8fa63f7290a8df78 | 2,824 | py | Python | inbm/cloudadapter-agent/cloudadapter/cloudadapter.py | ahameedx/intel-inb-manageability | aca445fa4cef0b608e6e88e74476547e10c06073 | [
"Apache-2.0"
] | 5 | 2021-12-13T21:19:31.000Z | 2022-01-18T18:29:43.000Z | inbm/cloudadapter-agent/cloudadapter/cloudadapter.py | ahameedx/intel-inb-manageability | aca445fa4cef0b608e6e88e74476547e10c06073 | [
"Apache-2.0"
] | 45 | 2021-12-30T17:21:09.000Z | 2022-03-29T22:47:32.000Z | inbm/cloudadapter-agent/cloudadapter/cloudadapter.py | ahameedx/intel-inb-manageability | aca445fa4cef0b608e6e88e74476547e10c06073 | [
"Apache-2.0"
] | 4 | 2022-01-26T17:42:54.000Z | 2022-03-30T04:48:04.000Z | #!/usr/bin/python
"""
Agent that monitors and reports the state of critical components of the framework
"""
import platform
from typing import Optional, List
from cloudadapter.client import Client
from cloudadapter.constants import LOGGERCONFIG
from cloudadapter.exceptions import BadConfigError
from cloudadapter.utilities import Waiter
import os
import signal
import logging
import sys
from logging.config import fileConfig
from inbm_lib.windows_service import WindowsService
class CloudAdapter(WindowsService):
_svc_name_ = 'inbm-cloud-adapter'
_svc_display_name_ = 'Cloud Adapter Agent'
_svc_description_ = 'Intel Manageability agent handling cloud connections'
def __init__(self, args: Optional[List] = None) -> None:
if args is None:
args = []
super().__init__(args)
self.waiter: Waiter = Waiter()
def svc_stop(self) -> None:
self.waiter.finish()
def svc_main(self) -> None:
self.start()
def start(self) -> None:
"""Start the Cloudadapter service.
Call this directly for Linux and indirectly through svc_main for Windows."""
# Configure logging
path = os.environ.get('LOGGERCONFIG', LOGGERCONFIG)
print(f"Looking for logging configuration file at {path}")
fileConfig(path, disable_existing_loggers=False)
logger = logging.getLogger(__name__)
if sys.version_info[0] < 3 or sys.version_info[0] == 3 and sys.version_info[1] < 8:
logger.error(
"Python version must be 3.8 or higher. Python interpreter version: " + sys.version)
sys.exit(1)
logger.info('Cloud Adapter agent is running')
# Exit if configuration is malformed
try:
client = Client()
client.start()
except BadConfigError as e:
logger.error(str(e))
return
# Refresh Waiter
self.waiter.finish()
self.waiter = Waiter()
if platform.system() != 'Windows':
# Unblock on termination signals
def unblock(signal, _):
self.waiter.finish()
signal.signal(signal.SIGTERM, unblock)
signal.signal(signal.SIGINT, unblock)
self.waiter.wait()
client.stop()
def main():
"""The main function"""
if platform.system() == 'Windows':
import servicemanager
import win32serviceutil
if len(sys.argv) == 1:
servicemanager.Initialize()
servicemanager.PrepareToHostSingle(CloudAdapter)
servicemanager.StartServiceCtrlDispatcher()
else:
win32serviceutil.HandleCommandLine(CloudAdapter)
else:
cloudadapter = CloudAdapter()
cloudadapter.start()
if __name__ == "__main__":
main()
| 27.417476 | 99 | 0.642351 | 304 | 2,824 | 5.832237 | 0.411184 | 0.033841 | 0.027073 | 0.01692 | 0.018049 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00679 | 0.26983 | 2,824 | 102 | 100 | 27.686275 | 0.853055 | 0.114023 | 0 | 0.076923 | 0 | 0 | 0.108097 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092308 | false | 0 | 0.215385 | 0 | 0.384615 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207a7db873682c7d633f14d1d74adaba1fed2784 | 5,947 | py | Python | prometheus_network_exporter/config/functions/junos.py | networkmess/prometheus-network-exporter | 4e7febb7e13447fd3612e591fbb2f634f97a5101 | [
"MIT"
] | 11 | 2018-12-13T05:39:24.000Z | 2022-01-07T16:59:59.000Z | prometheus_network_exporter/config/functions/junos.py | networkmess/prometheus-network-exporter | 4e7febb7e13447fd3612e591fbb2f634f97a5101 | [
"MIT"
] | 11 | 2018-11-29T20:43:44.000Z | 2020-11-14T22:33:50.000Z | prometheus_network_exporter/config/functions/junos.py | networkmess/prometheus-network-exporter | 4e7febb7e13447fd3612e591fbb2f634f97a5101 | [
"MIT"
] | 2 | 2021-09-11T22:28:56.000Z | 2021-09-11T22:43:19.000Z | from typing import Union
import logging
from ...utitlities import create_list_from_dict
from ..configuration import LabelConfiguration, MetricConfiguration
def default(value) -> float:
if isinstance(value, list):
return default(value[0])
return 0 if value is None else float(value)
def is_ok(boolean: Union[bool, str]) -> float:
if isinstance(boolean, bool):
if boolean:
return 1.0
return 0.0
elif isinstance(boolean, str):
if boolean.lower().strip() in ["up", "ok", "established"]:
return 1.0
return 0.0
elif boolean is None:
return 0.0
else:
raise Exception("Unknown Type: {}".format(boolean))
def boolify(string: str) -> bool:
return "true" in string.lower()
def none_to_zero(string) -> float:
return default(string)
def none_to_minus_inf(string) -> float:
return -float("inf") if string is None else string
def none_to_plus_inf(string) -> float:
return float("inf") if string is None else string
def floatify(string: Union[str, float]) -> float:
if isinstance(string, str):
if "- Inf" in string:
return -float("inf")
elif "Inf" in string:
return float("inf")
return float(string) if string is not None else none_to_zero(string)
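# Quick illustrations of the simple converters above (for documentation only):
#
#   is_ok("Established")       -> 1.0   (case-insensitive match on up/ok/established)
#   is_ok(False)               -> 0.0
#   default(None)              -> 0     (None, or [None], collapses to zero)
#   floatify("- Inf")          -> -inf
#   none_to_minus_inf(None)    -> -inf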
# The more complex functions
def fan_power_temp_status(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [
LabelConfiguration(config={"label": "sensorname", "key": "sensorname"})
]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "sensorname")
for data_part in data_list:
prometheus.metric.add_metric(
labels=[label.get_label(data_part) for label in prometheus.labels],
value=is_ok(data_part.get("status")),
)
return prometheus.metric
def temp_celsius(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [
LabelConfiguration(config={"label": "sensorname", "key": "sensorname"})
]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "sensorname")
for data_part in data_list:
prometheus.metric.add_metric(
labels=[label.get_label(data_part) for label in prometheus.labels],
value=data_part.get("temperature") or float("-inf"),
)
return prometheus.metric
def reboot(prometheus: MetricConfiguration, data: dict):
data = list(data.values())[0]
label_config = LabelConfiguration(
config={"label": "reboot_reason", "key": "last_reboot_reason"}
)
reason_string = label_config.get_label(data)
prometheus.labels = [label_config]
prometheus.metric = prometheus.build_metric()
reason = 1
for a in ["failure", "error", "failed"]:
if a in reason_string.lower():
reason = 0
prometheus.metric.add_metric(
labels=[label.get_label(data) for label in prometheus.labels], value=reason
)
return prometheus.metric
def cpu_usage(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [LabelConfiguration(config={"label": "cpu", "key": "cpu"})]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "cpu")
for perf in data_list:
prometheus.metric.add_metric(
labels=[label.get_label(perf) for label in prometheus.labels],
value=(100 - int(perf["cpu_idle"] or 0)),
)
return prometheus.metric
def cpu_idle(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [LabelConfiguration(config={"label": "cpu", "key": "cpu"})]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "cpu")
for perf in data_list:
prometheus.metric.add_metric(
labels=[label.get_label(perf) for label in prometheus.labels],
value=int(perf["cpu_idle"]),
)
return prometheus.metric
def ram_usage(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [
LabelConfiguration(config={"label": "routing_engine", "key": "routing_engine"})
]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "routing_engine")
for perf in data_list:
memory_complete = perf["memory_dram_size"].lower().replace("mb", "").strip()
memory_complete = int(memory_complete)
memory_usage = int(perf["memory_buffer_utilization"])
memory_bytes_usage = (memory_complete * memory_usage / 100) * 1049000
prometheus.metric.add_metric(
labels=[label.get_label(perf) for label in prometheus.labels],
value=memory_bytes_usage,
)
return prometheus.metric
def uptime(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [
LabelConfiguration(config={"label": "routing_engine", "key": "routing_engine"})
]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "routing_engine")
for perf in data_list:
prometheus.metric.add_metric(
labels=[label.get_label(perf) for label in prometheus.labels],
value=perf["uptime"],
)
return prometheus.metric
def ram(prometheus: MetricConfiguration, data: dict):
prometheus.labels = [
LabelConfiguration(config={"label": "routing_engine", "key": "routing_engine"})
]
prometheus.metric = prometheus.build_metric()
data_list = create_list_from_dict(data, "routing_engine")
for perf in data_list:
memory_complete = perf["memory_dram_size"].lower().replace("mb", "").strip()
memory_complete = int(memory_complete)
memory_bytes = memory_complete * 1049000
prometheus.metric.add_metric(
labels=[label.get_label(perf) for label in prometheus.labels],
value=memory_bytes,
)
return prometheus.metric
| 34.375723 | 87 | 0.669077 | 708 | 5,947 | 5.437853 | 0.141243 | 0.09974 | 0.029091 | 0.037403 | 0.674545 | 0.635844 | 0.614805 | 0.604416 | 0.604416 | 0.591948 | 0 | 0.007719 | 0.215739 | 5,947 | 172 | 88 | 34.575581 | 0.817753 | 0.003531 | 0 | 0.47482 | 0 | 0 | 0.080351 | 0.00422 | 0 | 0 | 0 | 0 | 0 | 1 | 0.107914 | false | 0 | 0.028777 | 0.028777 | 0.294964 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207ab28606db0bcb88724f2edd658e5f7978954f | 1,492 | py | Python | lib-python/io/proxies/fragment.py | geoffxy/tandem | 81e76f675634f1b42c8c3070c73443f3f68f8624 | [
"Apache-2.0"
] | 732 | 2018-03-11T03:35:17.000Z | 2022-01-06T12:22:03.000Z | lib-python/io/proxies/fragment.py | geoffxy/tandem | 81e76f675634f1b42c8c3070c73443f3f68f8624 | [
"Apache-2.0"
] | 21 | 2018-03-11T02:28:22.000Z | 2020-08-30T15:36:40.000Z | plugin/tandem_lib/agent/tandem/shared/io/proxies/fragment.py | typeintandem/vim | e076a9954d73ccb60cd6828e53adf8da76462fc6 | [
"Apache-2.0"
] | 24 | 2018-03-14T05:37:17.000Z | 2022-01-18T14:44:42.000Z | from tandem.shared.io.proxies.base import ProxyBase
from tandem.shared.utils.fragment import FragmentUtils
class FragmentProxy(ProxyBase):
def __init__(self, max_message_length=512):
self._max_message_length = max_message_length
def pre_generate_io_data(self, params):
args, kwargs = params
messages, addresses = args
if type(messages) is not list:
messages = [messages]
new_messages = []
for message in messages:
should_fragment = FragmentUtils.should_fragment(
message,
self._max_message_length,
)
if should_fragment:
new_messages.extend(FragmentUtils.fragment(
message,
self._max_message_length,
))
else:
new_messages.append(message)
new_args = (new_messages, addresses)
return (new_args, kwargs)
def on_retrieve_io_data(self, params):
args, kwargs = params
if args is None or args == (None, None):
return params
raw_data, address = args
if FragmentUtils.is_fragment(raw_data):
defragmented_data = FragmentUtils.defragment(raw_data, address)
if defragmented_data:
new_args = (defragmented_data, address)
return (new_args, kwargs)
else:
return (None, None)
else:
return params
| 30.44898 | 75 | 0.586461 | 153 | 1,492 | 5.45098 | 0.320261 | 0.059952 | 0.095923 | 0.095923 | 0.160671 | 0.160671 | 0.076739 | 0 | 0 | 0 | 0 | 0.003086 | 0.348525 | 1,492 | 48 | 76 | 31.083333 | 0.854938 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.051282 | 0 | 0.282051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207b5782368cc522ab6d38370f4b51ed010dc707 | 8,888 | py | Python | integresql_client_python/__init__.py | msztolcman/integresql-client-python | 8636434f20ab771ac66885f3bdfe819a7e9ebbfe | [
"MIT"
] | 2 | 2021-05-20T18:38:41.000Z | 2021-06-26T23:10:27.000Z | integresql_client_python/__init__.py | msztolcman/integresql-client-python | 8636434f20ab771ac66885f3bdfe819a7e9ebbfe | [
"MIT"
] | null | null | null | integresql_client_python/__init__.py | msztolcman/integresql-client-python | 8636434f20ab771ac66885f3bdfe819a7e9ebbfe | [
"MIT"
] | 2 | 2021-06-02T13:39:56.000Z | 2021-06-14T02:11:05.000Z | __all__ = ['IntegreSQL', 'DBInfo', 'Database', 'Template']
import hashlib
import http.client
import os
import pathlib
import sys
from typing import Optional, NoReturn, Union, List
import requests
from . import errors
__version__ = '0.9.2'
ENV_INTEGRESQL_CLIENT_BASE_URL = 'INTEGRESQL_CLIENT_BASE_URL'
ENV_INTEGRESQL_CLIENT_API_VERSION = 'INTEGRESQL_CLIENT_API_VERSION'
DEFAULT_CLIENT_BASE_URL = "http://integresql:5000/api" # noqa
DEFAULT_CLIENT_API_VERSION = "v1"
class DBInfo:
__slots__ = ('db_id', 'tpl_hash', 'host', 'port', 'user', 'password', 'name')
def __init__(self, info: dict) -> None:
self.db_id = info.get('id')
self.tpl_hash = info['database']['templateHash']
info = info['database']['config']
self.host = info['host']
self.port = info['port']
self.user = info['username']
self.password = info['password']
self.name = info['database']
def __str__(self) -> str:
return f"postgresql://{self.user}:{self.password}@{self.host}:{self.port}/{self.name}"
__repr__ = __str__
class TemplateHash:
BUFFER_SIZE = 4 * 1024
def __init__(self, template: Union[str, List[str], pathlib.PurePath, List[pathlib.PurePath], None]) -> None:
if not isinstance(template, (list, tuple)):
template = [template]
self.templates = template
mhash = hashlib.md5()
for template in self.templates:
if not isinstance(template, pathlib.PurePath):
template = pathlib.Path(template)
if not template.exists():
raise RuntimeError(f"Path {template} doesn't exists")
if not template.is_dir():
raise RuntimeError(f"Path {template} must be a directory")
hashed = self.calculate(template)
mhash.update(hashed.encode())
self.hash = mhash.hexdigest()
def __str__(self) -> str:
return self.hash
@classmethod
def calculate(cls, path: pathlib.Path) -> str:
template_hash = hashlib.md5() # noqa: S303 # nosec
items = list(path.rglob('*'))
items.sort()
for item in items:
if item.is_dir():
continue
item_hash = hashlib.md5() # noqa: S303 # nosec
with item.open('rb') as fh:
while True:
data = fh.read(cls.BUFFER_SIZE)
item_hash.update(data)
if len(data) < cls.BUFFER_SIZE:
break
template_hash.update(item_hash.hexdigest().encode())
return template_hash.hexdigest()
class Database:
def __init__(self, integresql: 'IntegreSQL') -> None:
self.integresql = integresql
self.dbinfo = None
def open(self) -> DBInfo:
rsp = self.integresql.request('GET', f'/templates/{self.integresql.tpl_hash}/tests')
if rsp.status_code == http.client.OK:
return DBInfo(rsp.json())
if rsp.status_code == http.client.NOT_FOUND:
raise errors.TemplateNotFound()
elif rsp.status_code == http.client.GONE:
raise errors.DatabaseDiscarded()
elif rsp.status_code == http.client.SERVICE_UNAVAILABLE:
raise errors.ManagerNotReady()
else:
raise errors.IntegreSQLError(f"Received unexpected HTTP status {rsp.status_code}")
def mark_unmodified(self, db_id: Union[int, DBInfo]) -> NoReturn:
if isinstance(db_id, DBInfo):
db_id = db_id.db_id
if db_id is None:
raise errors.IntegreSQLError("Invalid database id")
rsp = self.integresql.request('DELETE', f'/templates/{self.integresql.tpl_hash}/tests/{db_id}')
if rsp.status_code == http.client.NO_CONTENT:
return
if rsp.status_code == http.client.NOT_FOUND:
raise errors.TemplateNotFound()
elif rsp.status_code == http.client.SERVICE_UNAVAILABLE:
raise errors.ManagerNotReady()
else:
raise errors.IntegreSQLError(f"Received unexpected HTTP status {rsp.status_code}")
def __enter__(self) -> DBInfo:
self.dbinfo = self.open()
return self.dbinfo
def __exit__(self, exc_type, exc_val, exc_tb): # noqa
pass
class Template:
def __init__(self, integresql: 'IntegreSQL') -> None:
self.integresql = integresql
self.dbinfo = None
def initialize(self) -> 'Template':
rsp = self.integresql.request('POST', '/templates', payload={'hash': str(self.integresql.tpl_hash)})
if rsp.status_code == http.client.OK:
self.dbinfo = DBInfo(rsp.json())
return self
elif rsp.status_code == http.client.LOCKED:
return self
if rsp.status_code == http.client.SERVICE_UNAVAILABLE:
raise errors.ManagerNotReady()
else:
raise errors.IntegreSQLError(f"Received unexpected HTTP status {rsp.status_code}")
def finalize(self) -> NoReturn:
rsp = self.integresql.request('PUT', f'/templates/{self.integresql.tpl_hash}')
if rsp.status_code == http.client.NO_CONTENT:
return
if rsp.status_code == http.client.NOT_FOUND:
raise errors.TemplateNotFound()
elif rsp.status_code == http.client.SERVICE_UNAVAILABLE:
raise errors.ManagerNotReady()
else:
raise errors.IntegreSQLError(f"Received unexpected HTTP status {rsp.status_code}")
def discard(self) -> NoReturn:
return self.integresql.discard_template(self.integresql.tpl_hash)
def get_database(self) -> Database:
return Database(self.integresql)
def __enter__(self) -> DBInfo:
return self.dbinfo
def __exit__(self, exc_type, exc_val, exc_tb): # noqa
self.finalize()
class IntegreSQL:
def __init__(self,
tpl_directory: Union[TemplateHash, str, List[str], pathlib.PurePath, List[pathlib.PurePath], None] = None, *,
base_url: Optional[str] = None, api_version: Optional[str] = None,
) -> None:
if not base_url:
base_url = os.environ.get(ENV_INTEGRESQL_CLIENT_BASE_URL, DEFAULT_CLIENT_BASE_URL)
if not api_version:
api_version = os.environ.get(ENV_INTEGRESQL_CLIENT_API_VERSION, DEFAULT_CLIENT_API_VERSION)
self.base_url = base_url
self.api_version = api_version
self.debug = False
self._connection = None
self._tpl_hash = None
if tpl_directory:
self.tpl_hash = tpl_directory
@property
def tpl_hash(self) -> Optional[TemplateHash]:
return self._tpl_hash
@tpl_hash.setter
def tpl_hash(self, value: Union[TemplateHash, pathlib.PurePath, str, List[str], List[pathlib.PurePath]]) -> NoReturn:
if not isinstance(value, TemplateHash):
value = TemplateHash(value)
self._tpl_hash = value
def get_template(self) -> Template:
return Template(self)
def discard_template(self, tpl_hash: Union[TemplateHash, str]) -> NoReturn:
rsp = self.request('DELETE', f'/templates/{tpl_hash}')
if rsp.status_code == http.client.NO_CONTENT:
return
if rsp.status_code == http.client.NOT_FOUND:
raise errors.TemplateNotFound()
elif rsp.status_code == http.client.SERVICE_UNAVAILABLE:
raise errors.ManagerNotReady()
else:
raise errors.IntegreSQLError(f"Received unexpected HTTP status {rsp.status_code}")
def reset_all_tracking(self) -> NoReturn:
rsp = self.request('DELETE', "/admin/templates")
if rsp.status_code == http.client.NO_CONTENT:
return
raise errors.IntegreSQLError(f"failed to reset all tracking: {rsp.content}")
@property
def connection(self) -> requests.Session:
if not self._connection:
self._connection = requests.Session()
return self._connection
def request(self, method: str, path: str, *,
qs: Optional[dict] = None, payload: Optional[dict] = None,
) -> requests.Response:
path = path.lstrip('/')
url = f"{self.base_url}/{self.api_version}/{path}"
if self.debug:
print(f"Request {method.upper()} to {url} with qs {qs} and data {payload}", file=sys.stderr)
rsp = self.connection.request(method, url, qs, payload)
if self.debug:
print(f"Response from {method.upper()} {url}: [{rsp.status_code}] {rsp.content}", file=sys.stderr)
return rsp
def close(self) -> None:
self._tpl_hash = None
if self._connection:
self._connection.close()
self._connection = None
def __enter__(self) -> Template:
return self.get_template()
def __exit__(self, exc_type, exc_val, exc_tb) -> None: # noqa
self.close()
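# Illustrative usage sketch (not part of the original module; it assumes a
# reachable IntegreSQL server and a migrations directory used to derive the
# template hash):
#
#   with IntegreSQL('/path/to/migrations') as template:
#       with template.initialize() as tpl_db:
#           ...  # run migrations / seed data against the DBInfo in tpl_db
#       # leaving the inner block calls finalize() on the template
#       with template.get_database() as test_db:
#           ...  # run a test against the fresh test database in test_db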
| 33.923664 | 121 | 0.6259 | 1,045 | 8,888 | 5.111005 | 0.164593 | 0.038757 | 0.055982 | 0.05411 | 0.422018 | 0.360981 | 0.320352 | 0.301442 | 0.301442 | 0.288523 | 0 | 0.003348 | 0.260689 | 8,888 | 261 | 122 | 34.05364 | 0.809466 | 0.006413 | 0 | 0.308081 | 0 | 0.005051 | 0.123781 | 0.036726 | 0 | 0 | 0 | 0 | 0 | 1 | 0.141414 | false | 0.020202 | 0.040404 | 0.040404 | 0.318182 | 0.010101 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207bb34267342365082bfa8ac841a3deb5a45c41 | 1,653 | py | Python | src/euler_python_package/euler_python/medium/p265.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p265.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | src/euler_python_package/euler_python/medium/p265.py | wilsonify/euler | 5214b776175e6d76a7c6d8915d0e062d189d9b79 | [
"MIT"
] | null | null | null | # In this problem we look at 2^n-digit binary strings and the n-digit substrings of these.
# We are given that n = 5, so we are looking at windows of 5 bits in 32-bit strings.
#
# There are of course 32 possible cyclic windows in a 32-bit string.
# We want each of these windows to be a unique 5-bit string. There are exactly 2^5 = 32
# possible 5-bit strings, hence the 32 windows must cover the 5-bit space exactly once.
#
# The result requires the substring of all zeros to be in the most significant bits.
# We argue that the top n bits must be all zeros, because this is one of the cyclic windows
# and the value 00...00 must occur once. Furthermore the next and previous bit must be 1 -
# because if they're not, then at least one of the adjacent windows are also zero, which
# violates the uniqueness requirement.
#
# With n = 5, this means every candidate string must start with 000001 and end with 1.
# In other words, they are of the form 000001xxxxxxxxxxxxxxxxxxxxxxxxx1.
# The middle 25 bits still need to be determined, and we simply search by brute force.
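# As a smaller worked example of the same idea: with n = 3 the 8-bit string
# 00011101 qualifies, because its 8 cyclic 3-bit windows (000, 001, 011, 111,
# 110, 101, 010, 100) cover all eight 3-bit values exactly once.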
def problem265():
N = 5 # Must be at least 1
TWO_POW_N = 2 ** N
MASK = TWO_POW_N - 1 # Equal to n 1's in binary, i.e. 0b11111
def check_arrangement(digits):
seen = set()
digits |= digits << TWO_POW_N # Make second copy
for i in range(TWO_POW_N):
seen.add((digits >> i) & MASK)
return len(seen) == TWO_POW_N
start = 2 ** (TWO_POW_N - N - 1) + 1
end = 2 ** (TWO_POW_N - N)
ans = sum(i for i in range(start, end, 2) if check_arrangement(i))
return ans
if __name__ == "__main__":
print(problem265())
| 44.675676 | 91 | 0.69026 | 294 | 1,653 | 3.79932 | 0.421769 | 0.037601 | 0.043868 | 0.019696 | 0.016115 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048973 | 0.23412 | 1,653 | 36 | 92 | 45.916667 | 0.833333 | 0.672716 | 0 | 0 | 0 | 0 | 0.015355 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
207f20ff7bc96059db1255846744ebb0c03d9e3f | 25,099 | py | Python | lib/geomet/wkb.py | davasqueza/eriskco_conector_CloudSQL | 99304b5eed06e9bba3646535a82d7fc98b0838b7 | [
"Apache-2.0"
] | null | null | null | lib/geomet/wkb.py | davasqueza/eriskco_conector_CloudSQL | 99304b5eed06e9bba3646535a82d7fc98b0838b7 | [
"Apache-2.0"
] | null | null | null | lib/geomet/wkb.py | davasqueza/eriskco_conector_CloudSQL | 99304b5eed06e9bba3646535a82d7fc98b0838b7 | [
"Apache-2.0"
] | null | null | null | # Copyright 2013 Lars Butler & individual contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import binascii
import six
import struct
from geomet.util import block_splitter
from geomet.util import take
from geomet.util import as_bin_str
from itertools import chain
#: '\x00': The first byte of any WKB string. Indicates big endian byte
#: ordering for the data.
BIG_ENDIAN = b'\x00'
#: '\x01': The first byte of any WKB string. Indicates little endian byte
#: ordering for the data.
LITTLE_ENDIAN = b'\x01'
#: Mapping of GeoJSON geometry types to the "2D" 4-byte binary string
#: representation for WKB. "2D" indicates that the geometry is 2-dimensional,
#: X and Y components.
#: NOTE: Byte ordering is big endian.
WKB_2D = {
'Point': b'\x00\x00\x00\x01',
'LineString': b'\x00\x00\x00\x02',
'Polygon': b'\x00\x00\x00\x03',
'MultiPoint': b'\x00\x00\x00\x04',
'MultiLineString': b'\x00\x00\x00\x05',
'MultiPolygon': b'\x00\x00\x00\x06',
'GeometryCollection': b'\x00\x00\x00\x07',
}
#: Mapping of GeoJSON geometry types to the "Z" 4-byte binary string
#: representation for WKB. "Z" indicates that the geometry is 3-dimensional,
#: with X, Y, and Z components.
#: NOTE: Byte ordering is big endian.
WKB_Z = {
'Point': b'\x00\x00\x03\xe9',
'LineString': b'\x00\x00\x03\xea',
'Polygon': b'\x00\x00\x03\xeb',
'MultiPoint': b'\x00\x00\x03\xec',
'MultiLineString': b'\x00\x00\x03\xed',
'MultiPolygon': b'\x00\x00\x03\xee',
'GeometryCollection': b'\x00\x00\x03\xef',
}
#: Mapping of GeoJSON geometry types to the "M" 4-byte binary string
#: representation for WKB. "M" indicates that the geometry is 2-dimensional,
#: with X, Y, and M ("Measure") components.
#: NOTE: Byte ordering is big endian.
WKB_M = {
'Point': b'\x00\x00\x07\xd1',
'LineString': b'\x00\x00\x07\xd2',
'Polygon': b'\x00\x00\x07\xd3',
'MultiPoint': b'\x00\x00\x07\xd4',
'MultiLineString': b'\x00\x00\x07\xd5',
'MultiPolygon': b'\x00\x00\x07\xd6',
'GeometryCollection': b'\x00\x00\x07\xd7',
}
#: Mapping of GeoJSON geometry types to the "ZM" 4-byte binary string
#: representation for WKB. "ZM" indicates that the geometry is 4-dimensional,
#: with X, Y, Z, and M ("Measure") components.
#: NOTE: Byte ordering is big endian.
WKB_ZM = {
'Point': b'\x00\x00\x0b\xb9',
'LineString': b'\x00\x00\x0b\xba',
'Polygon': b'\x00\x00\x0b\xbb',
'MultiPoint': b'\x00\x00\x0b\xbc',
'MultiLineString': b'\x00\x00\x0b\xbd',
'MultiPolygon': b'\x00\x00\x0b\xbe',
'GeometryCollection': b'\x00\x00\x0b\xbf',
}
#: Mapping of dimension types to maps of GeoJSON geometry type -> 4-byte binary
#: string representation for WKB.
_WKB = {
'2D': WKB_2D,
'Z': WKB_Z,
'M': WKB_M,
'ZM': WKB_ZM,
}
#: Mapping from binary geometry type (as a 4-byte binary string) to GeoJSON
#: geometry type.
#: NOTE: Byte ordering is big endian.
_BINARY_TO_GEOM_TYPE = dict(
chain(*((reversed(x) for x in wkb_map.items())
for wkb_map in _WKB.values()))
)
_INT_TO_DIM_LABEL = {2: '2D', 3: 'Z', 4: 'ZM'}
def dump(obj, dest_file):
"""
Dump GeoJSON-like `dict` to WKB and write it to the `dest_file`.
:param dict obj:
A GeoJSON-like dictionary. It must contain at least the keys 'type' and
'coordinates'.
:param dest_file:
Open and writable file-like object.
"""
dest_file.write(dumps(obj))
def load(source_file):
"""
Load a GeoJSON `dict` object from a ``source_file`` containing WKB (as a
byte string).
:param source_file:
Open and readable file-like object.
:returns:
A GeoJSON `dict` representing the geometry read from the file.
"""
return loads(source_file.read())
def dumps(obj, big_endian=True):
"""
Dump a GeoJSON-like `dict` to a WKB string.
.. note::
The dimensions of the generated WKB will be inferred from the first
vertex in the GeoJSON `coordinates`. It will be assumed that all
vertices are uniform. There are 4 types:
- 2D (X, Y): 2-dimensional geometry
- Z (X, Y, Z): 3-dimensional geometry
- M (X, Y, M): 2-dimensional geometry with a "Measure"
- ZM (X, Y, Z, M): 3-dimensional geometry with a "Measure"
If the first vertex contains 2 values, we assume a 2D geometry.
If the first vertex contains 3 values, this is slightly ambiguous and
so the most common case is chosen: Z.
If the first vertex contains 4 values, we assume a ZM geometry.
The WKT/WKB standards provide a way of differentiating normal (2D), Z,
M, and ZM geometries (http://en.wikipedia.org/wiki/Well-known_text),
but the GeoJSON spec does not. Therefore, for the sake of interface
simplicity, we assume that geometry that looks 3D contains XYZ
components, instead of XYM.
:param dict obj:
GeoJson-like `dict` object.
:param bool big_endian:
Defaults to `True`. If `True`, data values in the generated WKB will
be represented using big endian byte order. Else, little endian.
:param str dims:
Indicates to WKB representation desired from converting the given
GeoJSON `dict` ``obj``. The accepted values are:
* '2D': 2-dimensional geometry (X, Y)
* 'Z': 3-dimensional geometry (X, Y, Z)
* 'M': 3-dimensional geometry (X, Y, M)
* 'ZM': 4-dimensional geometry (X, Y, Z, M)
:returns:
A WKB binary string representing of the ``obj``.
"""
geom_type = obj['type']
exporter = _dumps_registry.get(geom_type)
if exporter is None:
_unsupported_geom_type(geom_type)
return exporter(obj, big_endian)
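# Illustrative round-trip example (not part of the original module); the Point
# coordinates below are arbitrary sample values:
#
#     >>> pt = {'type': 'Point', 'coordinates': [1.0, 2.0]}
#     >>> loads(dumps(pt)) == pt
#     True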
def loads(string):
"""
Construct a GeoJson `dict` from WKB (`string`).
"""
string = iter(string)
# endianness = string[0:1]
endianness = as_bin_str(take(1, string))
if endianness == BIG_ENDIAN:
big_endian = True
elif endianness == LITTLE_ENDIAN:
big_endian = False
else:
raise ValueError("Invalid endian byte: '0x%s'. Expected 0x00 or 0x01"
% binascii.hexlify(endianness.encode()).decode())
# type_bytes = string[1:5]
type_bytes = as_bin_str(take(4, string))
if not big_endian:
# To identify the type, order the type bytes in big endian:
type_bytes = type_bytes[::-1]
geom_type = _BINARY_TO_GEOM_TYPE.get(type_bytes)
# data_bytes = string[5:] # FIXME: This won't work for GeometryCollections
data_bytes = string
importer = _loads_registry.get(geom_type)
if importer is None:
_unsupported_geom_type(geom_type)
data_bytes = iter(data_bytes)
return importer(big_endian, type_bytes, data_bytes)
def _unsupported_geom_type(geom_type):
raise ValueError("Unsupported geometry type '%s'" % geom_type)
def _header_bytefmt_byteorder(geom_type, num_dims, big_endian):
"""
Utility function to get the WKB header (endian byte + type header), byte
format string, and byte order string.
"""
dim = _INT_TO_DIM_LABEL.get(num_dims)
if dim is None:
pass # TODO: raise
type_byte_str = _WKB[dim][geom_type]
if big_endian:
header = BIG_ENDIAN
byte_fmt = b'>'
byte_order = '>'
else:
header = LITTLE_ENDIAN
byte_fmt = b'<'
byte_order = '<'
# reverse the byte ordering for little endian
type_byte_str = type_byte_str[::-1]
header += type_byte_str
byte_fmt += b'd' * num_dims
return header, byte_fmt, byte_order
def _dump_point(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a point WKB string.
:param dict obj:
GeoJson-like `dict` object.
:param bool big_endian:
If `True`, data values in the generated WKB will be represented using
big endian byte order. Else, little endian.
:returns:
A WKB binary string representing of the Point ``obj``.
"""
coords = obj['coordinates']
num_dims = len(coords)
wkb_string, byte_fmt, _ = _header_bytefmt_byteorder(
'Point', num_dims, big_endian
)
wkb_string += struct.pack(byte_fmt, *coords)
return wkb_string
def _dump_linestring(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a linestring WKB string.
Input parameters and output are similar to :func:`_dump_point`.
"""
coords = obj['coordinates']
vertex = coords[0]
# Infer the number of dimensions from the first vertex
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'LineString', num_dims, big_endian
)
# append number of vertices in linestring
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for vertex in coords:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string
def _dump_polygon(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a polygon WKB string.
Input parameters and output are similar to :func:`_dump_point`.
"""
coords = obj['coordinates']
vertex = coords[0][0]
# Infer the number of dimensions from the first vertex
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'Polygon', num_dims, big_endian
)
# number of rings:
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for ring in coords:
# number of verts in this ring:
wkb_string += struct.pack('%sl' % byte_order, len(ring))
for vertex in ring:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string
def _dump_multipoint(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a multipoint WKB string.
Input parameters and output are similar to :func:`_dump_point`.
"""
coords = obj['coordinates']
vertex = coords[0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiPoint', num_dims, big_endian
)
point_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['Point']
if big_endian:
point_type = BIG_ENDIAN + point_type
else:
point_type = LITTLE_ENDIAN + point_type[::-1]
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for vertex in coords:
# POINT type strings
wkb_string += point_type
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string
def _dump_multilinestring(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a multilinestring WKB string.
Input parameters and output are similar to :func:`_dump_point`.
"""
coords = obj['coordinates']
vertex = coords[0][0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiLineString', num_dims, big_endian
)
ls_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['LineString']
if big_endian:
ls_type = BIG_ENDIAN + ls_type
else:
ls_type = LITTLE_ENDIAN + ls_type[::-1]
# append the number of linestrings
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for linestring in coords:
wkb_string += ls_type
# append the number of vertices in each linestring
wkb_string += struct.pack('%sl' % byte_order, len(linestring))
for vertex in linestring:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string
def _dump_multipolygon(obj, big_endian):
"""
Dump a GeoJSON-like `dict` to a multipolygon WKB string.
Input parameters and output are similar to :func:`_dump_point`.
"""
coords = obj['coordinates']
vertex = coords[0][0][0]
num_dims = len(vertex)
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'MultiPolygon', num_dims, big_endian
)
poly_type = _WKB[_INT_TO_DIM_LABEL.get(num_dims)]['Polygon']
if big_endian:
poly_type = BIG_ENDIAN + poly_type
else:
poly_type = LITTLE_ENDIAN + poly_type[::-1]
# append the number of polygons
wkb_string += struct.pack('%sl' % byte_order, len(coords))
for polygon in coords:
# append polygon header
wkb_string += poly_type
# append the number of rings in this polygon
wkb_string += struct.pack('%sl' % byte_order, len(polygon))
for ring in polygon:
# append the number of vertices in this ring
wkb_string += struct.pack('%sl' % byte_order, len(ring))
for vertex in ring:
wkb_string += struct.pack(byte_fmt, *vertex)
return wkb_string
def _dump_geometrycollection(obj, big_endian):
# TODO: handle empty collections
geoms = obj['geometries']
# determine the dimensionality (2d, 3d, 4d) of the collection
# by sampling the first geometry
first_geom = geoms[0]
rest = geoms[1:]
first_wkb = dumps(first_geom, big_endian=big_endian)
first_type = first_wkb[1:5]
if not big_endian:
first_type = first_type[::-1]
if first_type in WKB_2D.values():
num_dims = 2
elif first_type in WKB_Z.values():
num_dims = 3
elif first_type in WKB_ZM.values():
num_dims = 4
wkb_string, byte_fmt, byte_order = _header_bytefmt_byteorder(
'GeometryCollection', num_dims, big_endian
)
# append the number of geometries
wkb_string += struct.pack('%sl' % byte_order, len(geoms))
wkb_string += first_wkb
for geom in rest:
wkb_string += dumps(geom, big_endian=big_endian)
return wkb_string
def _load_point(big_endian, type_bytes, data_bytes):
"""
Convert byte data for a Point to a GeoJSON `dict`.
:param bool big_endian:
If `True`, interpret the ``data_bytes`` in big endian order, else
little endian.
:param str type_bytes:
4-byte integer (as a binary string) indicating the geometry type
(Point) and the dimensions (2D, Z, M or ZM). For consistency, these
bytes are expected to always be in big endian order, regardless of the
value of ``big_endian``.
:param str data_bytes:
Coordinate data in a binary string.
:returns:
GeoJSON `dict` representing the Point geometry.
"""
endian_token = '>' if big_endian else '<'
if type_bytes == WKB_2D['Point']:
coords = struct.unpack('%sdd' % endian_token,
as_bin_str(take(16, data_bytes)))
elif type_bytes == WKB_Z['Point']:
coords = struct.unpack('%sddd' % endian_token,
as_bin_str(take(24, data_bytes)))
elif type_bytes == WKB_M['Point']:
# NOTE: The use of XYM-type geometries is quite rare. In the interest
# of removing ambiguity, we will treat all XYM geometries as XYZM when
# generating the GeoJSON. A default Z value of `0.0` will be given in
# this case.
coords = list(struct.unpack('%sddd' % endian_token,
as_bin_str(take(24, data_bytes))))
coords.insert(2, 0.0)
elif type_bytes == WKB_ZM['Point']:
coords = struct.unpack('%sdddd' % endian_token,
as_bin_str(take(32, data_bytes)))
return dict(type='Point', coordinates=list(coords))
def _load_linestring(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
coords = []
[num_verts] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
while True:
vert_wkb = as_bin_str(take(8 * num_dims, data_bytes))
fmt = '%s' + 'd' * num_dims
vert = list(struct.unpack(fmt % endian_token, vert_wkb))
if is_m:
vert.insert(2, 0.0)
coords.append(vert)
if len(coords) == num_verts:
break
return dict(type='LineString', coordinates=list(coords))
def _load_polygon(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
data_bytes = iter(data_bytes)
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
coords = []
[num_rings] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
while True:
ring = []
[num_verts] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
verts_wkb = as_bin_str(take(8 * num_verts * num_dims, data_bytes))
verts = block_splitter(verts_wkb, 8)
if six.PY2:
verts = (b''.join(x) for x in verts)
elif six.PY3:
verts = (b''.join(bytes([y]) for y in x) for x in verts)
for vert_wkb in block_splitter(verts, num_dims):
values = [struct.unpack('%sd' % endian_token, x)[0]
for x in vert_wkb]
if is_m:
values.insert(2, 0.0)
ring.append(values)
coords.append(ring)
if len(coords) == num_rings:
break
return dict(type='Polygon', coordinates=coords)
def _load_multipoint(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
data_bytes = iter(data_bytes)
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
if is_m:
dim = 'M'
else:
dim = _INT_TO_DIM_LABEL[num_dims]
coords = []
[num_points] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
while True:
point_endian = as_bin_str(take(1, data_bytes))
point_type = as_bin_str(take(4, data_bytes))
values = struct.unpack('%s%s' % (endian_token, 'd' * num_dims),
as_bin_str(take(8 * num_dims, data_bytes)))
values = list(values)
if is_m:
values.insert(2, 0.0)
if big_endian:
assert point_endian == BIG_ENDIAN
assert point_type == _WKB[dim]['Point']
else:
assert point_endian == LITTLE_ENDIAN
assert point_type[::-1] == _WKB[dim]['Point']
coords.append(list(values))
if len(coords) == num_points:
break
return dict(type='MultiPoint', coordinates=coords)
def _load_multilinestring(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
data_bytes = iter(data_bytes)
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
if is_m:
dim = 'M'
else:
dim = _INT_TO_DIM_LABEL[num_dims]
[num_ls] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
coords = []
while True:
ls_endian = as_bin_str(take(1, data_bytes))
ls_type = as_bin_str(take(4, data_bytes))
if big_endian:
assert ls_endian == BIG_ENDIAN
assert ls_type == _WKB[dim]['LineString']
else:
assert ls_endian == LITTLE_ENDIAN
assert ls_type[::-1] == _WKB[dim]['LineString']
[num_verts] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
num_values = num_dims * num_verts
values = struct.unpack(endian_token + 'd' * num_values,
as_bin_str(take(8 * num_values, data_bytes)))
values = list(block_splitter(values, num_dims))
if is_m:
for v in values:
v.insert(2, 0.0)
coords.append(values)
if len(coords) == num_ls:
break
return dict(type='MultiLineString', coordinates=coords)
def _load_multipolygon(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
if is_m:
dim = 'M'
else:
dim = _INT_TO_DIM_LABEL[num_dims]
[num_polys] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
coords = []
while True:
polygon = []
poly_endian = as_bin_str(take(1, data_bytes))
poly_type = as_bin_str(take(4, data_bytes))
if big_endian:
assert poly_endian == BIG_ENDIAN
assert poly_type == _WKB[dim]['Polygon']
else:
assert poly_endian == LITTLE_ENDIAN
assert poly_type[::-1] == _WKB[dim]['Polygon']
[num_rings] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
for _ in range(num_rings):
ring = []
[num_verts] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
for _ in range(num_verts):
vert_wkb = as_bin_str(take(8 * num_dims, data_bytes))
fmt = '%s' + 'd' * num_dims
vert = list(struct.unpack(fmt % endian_token, vert_wkb))
if is_m:
vert.insert(2, 0.0)
ring.append(vert)
polygon.append(ring)
coords.append(polygon)
if len(coords) == num_polys:
break
return dict(type='MultiPolygon', coordinates=coords)
def _check_dimensionality(geom, num_dims):
def first_geom(gc):
for g in gc['geometries']:
if not g['type'] == 'GeometryCollection':
return g
first_vert = {
'Point': lambda x: x['coordinates'],
'LineString': lambda x: x['coordinates'][0],
'Polygon': lambda x: x['coordinates'][0][0],
'MultiLineString': lambda x: x['coordinates'][0][0],
'MultiPolygon': lambda x: x['coordinates'][0][0][0],
'GeometryCollection': first_geom,
}
if not len(first_vert[geom['type']](geom)) == num_dims:
error = 'Cannot mix dimensionality in a geometry'
raise Exception(error)
def _load_geometrycollection(big_endian, type_bytes, data_bytes):
endian_token = '>' if big_endian else '<'
is_m = False
if type_bytes in WKB_2D.values():
num_dims = 2
elif type_bytes in WKB_Z.values():
num_dims = 3
elif type_bytes in WKB_M.values():
num_dims = 3
is_m = True
elif type_bytes in WKB_ZM.values():
num_dims = 4
geometries = []
[num_geoms] = struct.unpack('%sl' % endian_token,
as_bin_str(take(4, data_bytes)))
while True:
geometry = loads(data_bytes)
if is_m:
_check_dimensionality(geometry, 4)
else:
_check_dimensionality(geometry, num_dims)
# TODO(LB): Add type assertions for the geometry; collections should
# not mix 2d, 3d, 4d, etc.
geometries.append(geometry)
if len(geometries) == num_geoms:
break
return dict(type='GeometryCollection', geometries=geometries)
_dumps_registry = {
'Point': _dump_point,
'LineString': _dump_linestring,
'Polygon': _dump_polygon,
'MultiPoint': _dump_multipoint,
'MultiLineString': _dump_multilinestring,
'MultiPolygon': _dump_multipolygon,
'GeometryCollection': _dump_geometrycollection,
}
_loads_registry = {
'Point': _load_point,
'LineString': _load_linestring,
'Polygon': _load_polygon,
'MultiPoint': _load_multipoint,
'MultiLineString': _load_multilinestring,
'MultiPolygon': _load_multipolygon,
'GeometryCollection': _load_geometrycollection,
}
| 31.217662 | 79 | 0.620742 | 3,433 | 25,099 | 4.330032 | 0.109525 | 0.042987 | 0.025362 | 0.021796 | 0.480928 | 0.434376 | 0.390649 | 0.350488 | 0.31443 | 0.29667 | 0 | 0.020602 | 0.270927 | 25,099 | 803 | 80 | 31.256538 | 0.791737 | 0.267222 | 0 | 0.419624 | 0 | 0 | 0.091885 | 0 | 0.004175 | 0 | 0.000447 | 0.004981 | 0.025052 | 1 | 0.045929 | false | 0.002088 | 0.020877 | 0 | 0.106472 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20803faf55678d4849c7071017ef34216a20319c | 2,104 | py | Python | tests/test_can_delete.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 50 | 2021-08-18T08:08:04.000Z | 2022-03-20T07:23:26.000Z | tests/test_can_delete.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 1 | 2021-02-21T03:18:09.000Z | 2021-03-08T01:07:52.000Z | tests/test_can_delete.py | Jesse-Yung/jsonclasses | d40c52aec42bcb978a80ceb98b93ab38134dc790 | [
"MIT"
] | 8 | 2021-07-01T02:39:15.000Z | 2021-12-10T02:20:18.000Z | from __future__ import annotations
from unittest import TestCase
from jsonclasses.excs import UnauthorizedActionException
from tests.classes.gs_product import GSProduct, GSProductUser, GSTProduct
from tests.classes.gm_product import GMProduct, GMProductUser
class TestCanDelete(TestCase):
def test_guards_raises_if_no_operator_is_assigned(self):
product = GSProduct(name='P')
paid_user = GSProductUser(id='P', name='A', paid_user=True)
product.user = paid_user
with self.assertRaises(UnauthorizedActionException):
product.delete()
def test_guard_is_called_for_existing_objects_on_delete(self):
product = GSProduct(name='P')
paid_user = GSProductUser(id='P', name='A', paid_user=True)
product.user = paid_user
product.opby(paid_user)
product.delete()
free_user = GSProductUser(id='F', name='A', paid_user=False)
product.user = free_user
product.opby(free_user)
with self.assertRaises(UnauthorizedActionException):
product.delete()
def test_multiple_guards_are_checked_for_existing_objects_on_del(self):
product = GMProduct(name='P')
setattr(product, '_is_new', False)
paid_user = GMProductUser(id='P', name='A', paid_user=True)
product.user = paid_user
product.opby(paid_user)
product.delete()
free_user = GMProductUser(id='F', name='A', paid_user=False)
product.user = free_user
product.opby(free_user)
with self.assertRaises(UnauthorizedActionException):
product.delete()
def test_types_guard_is_called_for_existing_object_on_delete(self):
product = GSTProduct(name='P')
paid_user = GSProductUser(id='P', name='n', paid_user=True)
product.user = paid_user
product.opby(paid_user)
product.delete()
free_user = GSProductUser(id='F', name='A', paid_user=False)
product.user = free_user
product.opby(free_user)
with self.assertRaises(UnauthorizedActionException):
product.delete()
| 39.698113 | 75 | 0.687738 | 251 | 2,104 | 5.494024 | 0.239044 | 0.104424 | 0.039159 | 0.056563 | 0.656273 | 0.621465 | 0.621465 | 0.621465 | 0.597534 | 0.548949 | 0 | 0 | 0.214354 | 2,104 | 52 | 76 | 40.461538 | 0.834241 | 0 | 0 | 0.652174 | 0 | 0 | 0.011882 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.086957 | false | 0 | 0.108696 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208118ac1558f15291a466df173c06353392d5a4 | 2,112 | py | Python | scripts/matplot_hardware_comparison.py | shenweihai1/rolis-eurosys2022 | 59b3fd58144496a9b13415e30b41617b34924323 | [
"MIT"
] | null | null | null | scripts/matplot_hardware_comparison.py | shenweihai1/rolis-eurosys2022 | 59b3fd58144496a9b13415e30b41617b34924323 | [
"MIT"
] | null | null | null | scripts/matplot_hardware_comparison.py | shenweihai1/rolis-eurosys2022 | 59b3fd58144496a9b13415e30b41617b34924323 | [
"MIT"
] | null | null | null | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import FuncFormatter
def millions(x, pos):
return '%1.1fM' % (x * 1e-6)
formatter = FuncFormatter(millions)
txt = """
4 381682 159258 1.61E+06
8 773030 366397 3.00E+06
12 1103917 519834 4.30E+06
16 1358805 740740 5.67E+06
20 1850419 884978 6.98E+06
24 2227947 1066185 8.20E+06
28 2587756 1223889 9.45E+06
"""
keys, values, values2, values3, values0, values20, values30 = [], [], [], [], [], [], []
idx = 0
for l in txt.split("\n"):
items = l.replace("\n", "").split("\t")
if len(items) != 4:
continue
values.append(float(items[1]))
values2.append(float(items[2]))
values3.append(float(items[3]))
keys = [4, 8, 12, 16, 20, 24, 28]
plt.rcParams["font.size"] = 30
matplotlib.rcParams['lines.markersize'] = 14
plt.rcParams["font.family"] = "serif"
matplotlib.rcParams["font.family"] = "serif"
fig, ax = plt.subplots(figsize=(14, 9))
ax.yaxis.set_major_formatter(formatter)
ax.plot(keys, values, marker="o", label='Meerkat - YCSB-T', linewidth=3)
ax.plot(keys, values2, marker="s", label='Meerkat - YCSB++', linewidth=3)
ax.plot(keys, values3, marker="^", label='Rolis - YCSB++', linewidth=3)
ax.set_ylim([0, 11 * 10 **6])
# ax.set(xlabel='# of threads',
# ylabel='Throughput (txns/sec)',
# title=None)
ax.set_xlabel("# of threads", fontname="serif")
ax.set_ylabel("Throughput (txns/sec)", fontname="serif")
# https://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot/43439132#43439132
ax.legend(bbox_to_anchor=(0, 0.80, 0.55, 0.2), mode="expand", ncol=1, loc="upper left", borderaxespad=0, frameon=True, fancybox=False, framealpha=1)
ax.set_xticks([4, 8, 12, 16, 20, 24, 28])
ax.set_xticklabels(["4", "8", "12", "16", "20", "24", "28"])
ax.yaxis.grid()
# ax.set_title("Rolis vs Meerkat", y=-0.28, fontsize=32)
for tick in ax.get_xticklabels():
tick.set_fontname("serif")
for tick in ax.get_yticklabels():
tick.set_fontname("serif")
fig.tight_layout()
fig.savefig("software_comparison_hardware.eps", format='eps', dpi=1000)
plt.show() | 34.622951 | 148 | 0.676136 | 332 | 2,112 | 4.25 | 0.490964 | 0.024805 | 0.034018 | 0.012757 | 0.10489 | 0.028349 | 0.028349 | 0.019844 | 0 | 0 | 0 | 0.13842 | 0.131155 | 2,112 | 61 | 149 | 34.622951 | 0.630518 | 0.116477 | 0 | 0.040816 | 0 | 0 | 0.227004 | 0.017214 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020408 | false | 0 | 0.081633 | 0.020408 | 0.122449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208258846b51fff8cd031222e2f3da0eb301099e | 11,741 | py | Python | bintray-cleanup/bintray_cleanup/main.py | openzipkin/zipkin-release | 4508fca409783e62169382aff06fd7c32ad20a63 | [
"Apache-2.0"
] | 2 | 2017-08-07T10:00:52.000Z | 2019-06-25T01:59:22.000Z | bintray-cleanup/bintray_cleanup/main.py | openzipkin/zipkin-release | 4508fca409783e62169382aff06fd7c32ad20a63 | [
"Apache-2.0"
] | 3 | 2017-04-11T05:20:11.000Z | 2019-07-24T23:22:16.000Z | bintray-cleanup/bintray_cleanup/main.py | openzipkin/zipkin-release | 4508fca409783e62169382aff06fd7c32ad20a63 | [
"Apache-2.0"
] | 1 | 2017-09-19T08:38:07.000Z | 2017-09-19T08:38:07.000Z | #!/usr/bin/env python3
import json
import textwrap
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Callable, Dict, List, Optional
import click
import pygments
import pygments.formatters
import pygments.lexers
import requests_cache
Version = Dict
VersionsByPackage = Dict[str, List[Version]]
ISO8601_WITH_MICROSECOND_FORMAT = "%Y-%m-%dT%H:%M:%S.%f%z"
def display_version_details(version: Version) -> str:
return pygments.highlight(
json.dumps(version, sort_keys=True, indent=4, default=str),
pygments.lexers.JsonLexer(),
pygments.formatters.TerminalFormatter(),
).strip()
class ContextObj:
def __init__(self, api_base_url: str, api_username: str, api_key: str) -> None:
self.api_base_url: str = api_base_url
self.session: requests_cache.CachedSession = requests_cache.CachedSession(
cache_name="requests_cache",
backend="sqlite",
expire_after=timedelta(hours=1),
)
self.session.auth = (api_username, api_key)
self.session.headers.update(
{"User-Agent": "gh:openzipkin/zipkin-release#bintray-cleanup"}
)
def request_json(
self,
verb: str,
url: str,
object_hook: Optional[Callable[[Version], Version]] = None,
) -> Dict:
if verb == "DELETE":
request_color = "red"
else:
request_color = "cyan"
click.secho(f"{verb} {url}", fg=request_color)
response = self.session.request(verb, url)
response.raise_for_status()
json_str = response.content
click.echo(display_version_details(json.loads(json_str)))
click.echo()
if (
"X-RateLimit-Limit" in response.headers
and "X-RateLimit-Reamining" in response.headers
):
ratelimit_limit = response.headers["X-RateLimit-Limit"]
ratelimit_remaining = response.headers["X-RateLimit-Remaining"]
click.secho(
f"Remaining API rate-limit: {ratelimit_remaining} / {ratelimit_limit}",
fg="cyan",
)
return json.loads(json_str, object_hook=object_hook)
@click.group()
@click.option("--api-base-url", default="https://api.bintray.com/")
@click.option("--api-username", envvar="BINTRAY_USERNAME", required=True)
@click.option("--api-key", envvar="BINTRAY_API_KEY", required=True)
@click.pass_context
def cli(ctx: click.Context, api_base_url: str, api_username: str, api_key: str):
if not api_base_url.endswith("/"):
api_base_url += "/"
ctx.obj = ContextObj(api_base_url, api_username, api_key)
def enrich_version_data(data: Version) -> Version:
data["created"] = datetime.strptime(
data["created"], ISO8601_WITH_MICROSECOND_FORMAT
)
data["updated"] = datetime.strptime(
data["updated"], ISO8601_WITH_MICROSECOND_FORMAT
)
return data
@cli.command()
@click.pass_obj
def clear_cache(obj: ContextObj) -> None:
obj.session.cache.clear()
click.echo("Cleared HTTP response cache")
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.argument("package")
@click.pass_context
def list_versions(
ctx: click.Context, subject: str, repo: str, package: str
) -> List[Version]:
obj: ContextObj = ctx.obj
package_data = obj.request_json(
"GET", f"{obj.api_base_url}packages/{subject}/{repo}/{package}"
)
version_names = package_data["versions"]
versions = []
for version_name in version_names:
versions.append(
obj.request_json(
"GET",
f"{obj.api_base_url}packages/{subject}/{repo}/{package}"
f"/versions/{version_name}",
object_hook=enrich_version_data,
)
)
return versions
def group_versions_by_package(versions: List[Version]) -> VersionsByPackage:
by_package: Dict[str, List[Dict]] = defaultdict(list)
for version in versions:
by_package[version["package"]].append(version)
return dict(by_package)
def display_version_names_pregrouped(by_package: VersionsByPackage) -> str:
return textwrap.indent(
"\n".join(
f"{package}: {' '.join(v['name'] for v in versions)}"
for package, versions in by_package.items()
),
prefix=" ",
)
def display_version_names(versions: List[Version]) -> str:
by_package = group_versions_by_package(versions)
return display_version_names_pregrouped(by_package)
@dataclass
class DateCutoffResult:
cutoff: datetime
old: List[Version]
new: List[Version]
def apply_date_cutoff(
versions: List[Version], older_than_days: int
) -> DateCutoffResult:
cutoff = datetime.now(timezone.utc) - timedelta(days=older_than_days)
old_versions = sorted(
[version for version in versions if version["created"] < cutoff],
key=lambda v: v["created"],
)
new_versions = sorted(
[version for version in versions if version["created"] >= cutoff],
key=lambda v: v["created"],
)
older_than_days_display = click.style(str(older_than_days), fg="yellow")
cutoff_display = click.style(str(cutoff), fg="yellow")
click.echo(f"Cutoff date {older_than_days_display} days ago: {cutoff_display}")
click.echo(
f"Found {click.style(str(len(old_versions)), fg='red')} versions created "
f"BEFORE {cutoff_display}:\n{display_version_names(old_versions)}"
)
click.echo(
f"Found {click.style(str(len(new_versions)), fg='green')} versions created "
f"AFTER {cutoff_display}:\n{display_version_names(new_versions)}"
)
return DateCutoffResult(cutoff, old_versions, new_versions)
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.argument("package")
@click.argument("older_than_days", type=int)
@click.pass_context
def list_old_versions(
ctx: click.Context, subject: str, repo: str, package: str, older_than_days: int
) -> DateCutoffResult:
versions = ctx.invoke(list_versions, subject=subject, repo=repo, package=package)
return apply_date_cutoff(versions, older_than_days)
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.pass_context
def list_packages(ctx: click.Context, subject: str, repo: str) -> List[str]:
obj: ContextObj = ctx.obj
response = obj.request_json(
"GET", f"{obj.api_base_url}repos/{subject}/{repo}/packages"
)
return [item["name"] for item in response]
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.argument("older_than_days", type=int)
@click.pass_context
def list_old_versions_in_repo(
ctx: click.Context, subject: str, repo: str, older_than_days: int
) -> DateCutoffResult:
versions: List[Dict] = []
for package in ctx.invoke(list_packages, subject=subject, repo=repo):
versions += ctx.invoke(
list_versions, subject=subject, repo=repo, package=package
)
return apply_date_cutoff(versions, older_than_days)
def _delete_old_versions(
ctx: click.Context,
dryrun: bool,
cutoff_result: DateCutoffResult,
limit: Optional[int],
yes: bool,
):
obj: ContextObj = ctx.obj
if dryrun:
dryrun_display = click.style("(DRYRUN) ", fg="cyan")
else:
dryrun_display = ""
versions_to_keep = group_versions_by_package(cutoff_result.new)
versions_to_delete = group_versions_by_package(cutoff_result.old)
for package, my_versions_to_delete in versions_to_delete.items():
if package not in versions_to_keep:
preserve = my_versions_to_delete.pop()
versions_to_keep[package] = [preserve]
click.echo(
f"No versions for {preserve['package']} are newer than "
f"{cutoff_result.cutoff}. Preserving the latest version: "
f"{preserve['name']}"
)
if not versions_to_delete:
click.secho("No versions to delete, exiting.", fg="green")
return
else:
count = sum(len(vs) for vs in versions_to_delete.values())
click.secho(
f"{dryrun_display}Selected {count} versions to "
f"delete:\n{display_version_names_pregrouped(versions_to_delete)}",
fg="red",
)
deleted_versions = []
for package, versions in versions_to_delete.items():
click.echo(f"Processing {package}")
for version in versions:
display_version_name = click.style(
f"{version['owner']}/{version['repo']}/"
f"{version['package']}@{version['name']}",
fg="red",
)
click.echo(
f"{dryrun_display}Candidate for deletion: {display_version_name}"
)
click.echo(display_version_details(version))
if yes:
click.secho(
"Invoked with --yes, skipping confirmation prompt.", fg="cyan"
)
if yes or click.confirm(
f"{dryrun_display}Confirm deletion of {display_version_name}"
):
if dryrun:
click.secho(
f"This is a dry-run, not deleting {display_version_name}",
fg="cyan",
)
else:
click.secho(str(datetime.now()), fg="cyan")
obj.request_json(
"DELETE",
f"{obj.api_base_url}packages/{version['owner']}"
f"/{version['repo']}/{version['package']}"
f"/versions/{version['name']}",
)
deleted_versions.append(version)
click.echo(f"Done processing {display_version_name}\n")
click.echo(f"Done processing {package}\n")
click.echo(
f"{dryrun_display}Deleted {click.style(str(len(deleted_versions)), fg='red')} "
f"versions:\n{display_version_names(deleted_versions)}"
)
if not dryrun:
ctx.invoke(clear_cache)
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.argument("package")
@click.argument("older_than_days", type=int)
@click.option("--dryrun/--no-dryrun", default=True)
@click.option("--limit", default=None, type=int)
@click.option("--yes", default=False, is_flag=True)
@click.pass_context
def delete_old_versions(
ctx: click.Context,
subject: str,
repo: str,
package: str,
older_than_days: int,
dryrun: bool,
limit: Optional[int],
yes: bool,
) -> None:
cutoff_result: DateCutoffResult = ctx.invoke(
list_old_versions,
subject=subject,
repo=repo,
package=package,
older_than_days=older_than_days,
)
click.echo()
_delete_old_versions(ctx, dryrun, cutoff_result, limit, yes)
@cli.command()
@click.argument("subject")
@click.argument("repo")
@click.argument("older_than_days", type=int)
@click.option("--dryrun/--no-dryrun", default=True)
@click.option("--limit", default=None, type=int)
@click.option("--yes", default=False, is_flag=True)
@click.pass_context
def delete_old_versions_in_repo(
ctx: click.Context,
subject: str,
repo: str,
older_than_days: int,
dryrun: bool,
limit: Optional[int],
yes: bool,
) -> None:
cutoff_result: DateCutoffResult = ctx.invoke(
list_old_versions_in_repo,
subject=subject,
repo=repo,
older_than_days=older_than_days,
)
click.echo()
_delete_old_versions(ctx, dryrun, cutoff_result, limit, yes)
if __name__ == "__main__":
cli()
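# Illustrative invocation (not part of the original module; the subject/repo
# names are placeholders and the dash-style command name assumes Click >= 7,
# which converts underscores in command names to dashes):
#
#   BINTRAY_USERNAME=... BINTRAY_API_KEY=... \
#     python main.py delete-old-versions-in-repo example-org example-repo 90 --dryrun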
| 31.226064 | 87 | 0.638106 | 1,401 | 11,741 | 5.142041 | 0.155603 | 0.034287 | 0.034287 | 0.018462 | 0.415741 | 0.348973 | 0.313437 | 0.293587 | 0.284981 | 0.273459 | 0 | 0.001671 | 0.235329 | 11,741 | 375 | 88 | 31.309333 | 0.800735 | 0.001789 | 0 | 0.342857 | 0 | 0 | 0.187815 | 0.092755 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053968 | false | 0.025397 | 0.034921 | 0.006349 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2082b1a4f9ffc658b8fe8bd47773012afce06769 | 688 | py | Python | backup_nanny/ami_cleanup.py | ForwardLine/backup-nanny | 67c687f43d732c60ab2e569e50bc40cc5e696b25 | [
"Apache-2.0"
] | 1 | 2019-11-13T04:15:41.000Z | 2019-11-13T04:15:41.000Z | backup_nanny/ami_cleanup.py | ForwardLine/backup-nanny | 67c687f43d732c60ab2e569e50bc40cc5e696b25 | [
"Apache-2.0"
] | null | null | null | backup_nanny/ami_cleanup.py | ForwardLine/backup-nanny | 67c687f43d732c60ab2e569e50bc40cc5e696b25 | [
"Apache-2.0"
] | 1 | 2019-10-25T21:24:20.000Z | 2019-10-25T21:24:20.000Z | #!/usr/bin/env python
from backup_nanny.util.env_loader import ENVLoader
from backup_nanny.util.log import Log
from backup_nanny.util.backup_helper import BackupHelper
def handler(event, context):
main(event)
def main(event):
log = Log()
try:
backup_helper = BackupHelper(log=log)
backup_amis = backup_helper.get_backup_amis_for_cleanup()
for backup_ami in backup_amis:
backup_helper.cleanup_old_ami(backup_ami)
backup_helper.cleanup_old_snapshots(backup_ami.snapshots)
except Exception as e:
log.info(e)
log.send_alert()
return True
if __name__ == '__main__':
ENVLoader.run()
main('local')
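# Illustrative deployment note (not from the original source): handler(event,
# context) matches the AWS Lambda entry-point convention (e.g. a handler
# setting of "ami_cleanup.handler"), while running the file directly loads the
# environment via ENVLoader and calls main('local').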
| 25.481481 | 69 | 0.700581 | 93 | 688 | 4.849462 | 0.44086 | 0.133038 | 0.099778 | 0.126386 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.213663 | 688 | 26 | 70 | 26.461538 | 0.833641 | 0.02907 | 0 | 0 | 0 | 0 | 0.01949 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.15 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208372cd97ff6cd39df211ced3240a7c03da900d | 4,179 | py | Python | msn/tests/test_events.py | mleger45/turnex | 2b805c3681fe6ce3ddad403270c09ac9900fbe7d | [
"MIT"
] | null | null | null | msn/tests/test_events.py | mleger45/turnex | 2b805c3681fe6ce3ddad403270c09ac9900fbe7d | [
"MIT"
] | 1 | 2021-04-12T05:14:28.000Z | 2021-04-12T05:14:28.000Z | msn/tests/test_events.py | mleger45/turnex | 2b805c3681fe6ce3ddad403270c09ac9900fbe7d | [
"MIT"
] | null | null | null | # -*- coding:utf-8 -*-
import json
from django.test import TestCase
from unittest.mock import patch, MagicMock
from msn.events import EventTurnex
class EventTurnexTest(TestCase):
def setUp(self):
self.events = EventTurnex()
@patch('msn.events.EventTurnex.valid')
def test_process_with_valid_message(self, valid):
valid.return_value = True
message = json.dumps({
'event': 'sample'
})
self.events.dispatcher = {
'sample': MagicMock(return_value={'result': 'ok'})
}
self.events.process(message)
self.assertTrue(self.events.dispatcher['sample'].called)
@patch('msn.events.EventTurnex.error')
@patch('msn.events.EventTurnex.valid')
def test_process_with_invalid_message(self, valid, error):
valid.return_value = False
message = json.dumps({
'event': 'sample'
})
self.events.dispatcher = {
'sample': MagicMock(return_value={'result': 'ok'})
}
self.events.process(message)
error.assert_called()
def test_form_register(self):
event = {
'event': self.events.FORM_REGISTER,
'userAgent': 'sample'
}
data = json.dumps(event)
result = self.events.form_register(data)
expected = json.dumps({
"event": self.events.SERVER_ACK_REGISTER,
"body": 'registered successfully',
})
self.assertEquals(result, expected)
def test_board_register(self):
event = {
'event': self.events.BOARD_REGISTER,
'userAgent': 'sample'
}
data = json.dumps(event)
result = self.events.board_register(data)
expected = json.dumps({
"event": self.events.SERVER_ACK_REGISTER,
"body": 'registered successfully',
})
self.assertEquals(result, expected)
def test_server_ack_register(self):
result = self.events.server_ack_register()
self.assertEquals(type(result), str)
@patch('msn.events.EventTurnex.server_ticket_broadcast')
def test_next_ticket(self, broadcast):
self.events.next_ticket(None)
self.assertTrue(broadcast.called)
def test_server_ticket_broadcast(self):
data = {
'event': 'sample'
}
broadcast = self.events.server_ticket_broadcast(data)
result = json.loads(broadcast)
expected = result['event'] == self.events.SERVER_TICKET_BROADCAST
self.assertTrue(expected)
self.assertIsInstance(broadcast, str)
def test_ring_the_bell(self):
data = {
'event': 'ring'
}
bell = self.events.ring_the_bell(data)
result = json.loads(bell)
expected = 'exec:ring'
self.assertIsInstance(bell, str)
self.assertEquals(result['event'], expected)
def test_weather_notify(self):
weather = self.events.weather_notify(None)
result = json.loads(weather)
self.assertIsInstance(weather, str)
self.assertEquals(result['event'], self.events.WEATHER_ACK_NOTIFY)
def test_valid_message_is_valid(self):
message = {
'event': 'server-ack-register'
}
raw_message = json.dumps(message)
valid = self.events.valid(raw_message)
self.assertTrue(valid)
def test_valid_message_is_not_valid(self):
message = {
'events': 'server-ack-register'
}
raw_message = json.dumps(message)
valid = self.events.valid(raw_message)
self.assertFalse(valid)
def test_valid_message_empty(self):
message = None
raw_message = json.dumps(message)
valid = self.events.valid(raw_message)
self.assertFalse(valid)
def test_error(self):
error = self.events.error()
result = json.loads(error)
self.assertEquals(result['event'], 'error')
self.assertEquals(result['body'], 'Stop Hacking.')
| 29.85 | 74 | 0.589375 | 426 | 4,179 | 5.615023 | 0.173709 | 0.096154 | 0.035117 | 0.041806 | 0.477007 | 0.414716 | 0.38796 | 0.38796 | 0.38796 | 0.347826 | 0 | 0.000341 | 0.299115 | 4,179 | 139 | 75 | 30.064748 | 0.81632 | 0.004786 | 0 | 0.398148 | 0 | 0 | 0.098629 | 0.031273 | 0 | 0 | 0 | 0 | 0.157407 | 1 | 0.12963 | false | 0 | 0.037037 | 0 | 0.175926 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208521b0cff34706ddce62eebc847a4ed84c31d5 | 1,044 | py | Python | tests/test_no_identity.py | iexg/aiohttp-security | 225b0c989e397bc741159e0ccc6b54eb7add3f94 | [
"Apache-2.0"
] | null | null | null | tests/test_no_identity.py | iexg/aiohttp-security | 225b0c989e397bc741159e0ccc6b54eb7add3f94 | [
"Apache-2.0"
] | null | null | null | tests/test_no_identity.py | iexg/aiohttp-security | 225b0c989e397bc741159e0ccc6b54eb7add3f94 | [
"Apache-2.0"
] | null | null | null | from aiohttp import web
from aiohttp_security import remember, forget
async def test_remember(loop, test_client):
async def do_remember(request):
response = web.Response()
await remember(request, response, 'Andrew')
app = web.Application(loop=loop)
app.router.add_route('POST', '/', do_remember)
client = await test_client(app)
resp = await client.post('/')
assert 500 == resp.status
assert (('Security subsystem is not initialized, '
'call aiohttp_security.setup(...) first') ==
resp.reason)
async def test_forget(loop, test_client):
async def do_forget(request):
response = web.Response()
await forget(request, response)
app = web.Application(loop=loop)
app.router.add_route('POST', '/', do_forget)
client = await test_client(app)
resp = await client.post('/')
assert 500 == resp.status
assert (('Security subsystem is not initialized, '
'call aiohttp_security.setup(...) first') ==
resp.reason)
| 29.828571 | 57 | 0.64751 | 125 | 1,044 | 5.288 | 0.28 | 0.048412 | 0.036309 | 0.057489 | 0.73525 | 0.641452 | 0.568835 | 0.568835 | 0.568835 | 0.568835 | 0 | 0.007481 | 0.231801 | 1,044 | 34 | 58 | 30.705882 | 0.816708 | 0 | 0 | 0.615385 | 0 | 0 | 0.164751 | 0.051724 | 0 | 0 | 0 | 0 | 0.153846 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208788b7bd11e361a5d176ed481e96bc882babed | 5,385 | py | Python | create_TFRecords.py | LemonLov/Create_TFRecords | 4274b3c2bfd2c041d0a8189f63b394b79edcf025 | [
"BSD-2-Clause"
] | 2 | 2020-09-12T03:10:30.000Z | 2020-09-13T06:18:00.000Z | create_TFRecords.py | LemonLov/Create_TFRecords | 4274b3c2bfd2c041d0a8189f63b394b79edcf025 | [
"BSD-2-Clause"
] | null | null | null | create_TFRecords.py | LemonLov/Create_TFRecords | 4274b3c2bfd2c041d0a8189f63b394b79edcf025 | [
"BSD-2-Clause"
] | null | null | null | # *-* coding:utf-8 *-*
import tensorflow as tf
import numpy as np
import os
import cv2
import random
# Generate an int64 feature
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
# Generate a bytes (string) feature
def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
# Generate a float-list feature
def float_list_feature(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
def get_example_nums(tf_records_filenames):
'''
Count the number of images (i.e. examples) in a tf_records file
parameters:
tf_records_filenames: path of the tf_records file
return:
nums
'''
nums= 0
for record in tf.python_io.tf_record_iterator(tf_records_filenames):
nums += 1
return nums
def load_labels_file(filename, shuffle, labels_num=1):
'''
Load the txt file that lists the image paths; each line describes one image,
with fields separated by spaces:
image_path label1 label2, e.g.: test_image/1.jpg 0 2
parameters:
filename: name of the txt file
labels_num: number of labels
shuffle: whether to shuffle the order
return:
images: type->list
labels: type->list
'''
images = []
labels = []
with open(filename) as f:
lines_list = f.readlines()
if shuffle:
random.shuffle(lines_list)
for lines in lines_list:
line = lines.rstrip().split(' ')
label = []
for i in range(labels_num):
label.append(int(line[i+1]))
images.append(line[0])
labels.append(label)
return images, labels
def read_image(filename, resize_height, resize_width, normalization=False):
'''
Read image data; by default returns uint8 values in [0, 255]
parameters:
filename: image path
resize_height: target image height
resize_width: target image width
normalization: whether to normalize the image to [0.0, 1.0]
return:
rgb_image: the image data
'''
bgr_image = cv2.imread(filename)
if len(bgr_image.shape) == 2: # if grayscale, convert to three channels
print("Warning: gray image", filename)
bgr_image = cv2.cvtColor(bgr_image, cv2.COLOR_GRAY2BGR)
rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB) # convert BGR to RGB
if resize_height > 0 and resize_width > 0:
rgb_image = cv2.resize(rgb_image, (resize_width, resize_height))
rgb_image = np.asanyarray(rgb_image)
if normalization:
rgb_image = rgb_image / 255.0
return rgb_image
def create_records(filename, output_record_dir, resize_height, resize_width, shuffle=True, log=20):
'''
Save the raw image data, label, height, width, channel count, etc. into a record file
Note: the image data is read as uint8 and stored as a tf BytesList (byte string);
convert types as needed when parsing.
parameters:
filename: input txt file holding the image info (image_dir+filename forms the image path)
output_record_dir: path of the record file to write
resize_height: height to resize the images to
resize_width: width to resize the images to
PS: when resize_height or resize_width is 0, no resize is performed
shuffle: whether to shuffle the order
log: interval (in images) between progress log messages
return:
None
'''
# Load the file, taking only one label (image classification usually handles a single label)
images_list, labels_list = load_labels_file(filename, shuffle, 1)
# Create a writer for the TFRecord file
writer = tf.python_io.TFRecordWriter(output_record_dir)
for i, [image_name, labels] in enumerate(zip(images_list, labels_list)):
# Build the (relative) image path
image_path = images_list[i]
if not os.path.exists(image_path):
print('Error: no image path ', image_path)
continue
# Read one image
image = read_image(image_path, resize_height, resize_width)
# Convert the image array to a byte string
image_raw = image.tostring()
# Show processing progress
if i % log == 0 or i == len(images_list) - 1:
print('------------processing: {}-th------------'.format(i))
print('current image_path = {}'.format(image_path), 'shape: {}'.format(image.shape), 'labels: {}'.format(labels))
# Only one label is stored here; for multiple labels, add more 'label': _int64_feature(label) entries as needed
label = labels[0]
example = tf.train.Example(features=tf.train.Features(feature={
'image': _bytes_feature(image_raw),
'label': _int64_feature(label),
'height': _int64_feature(image.shape[0]),
'width': _int64_feature(image.shape[1]),
'channels': _int64_feature(image.shape[2]),
}))
# Write one Example to the TFRecord file
writer.write(example.SerializeToString())
writer.close()
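# Illustrative parsing sketch (not part of the original script): reading one
# serialized example back with the TF1-style API, using the same feature names
# written above; convert dtypes as needed after decoding. `serialized_example`
# is an assumed variable holding one record from the file.
#
#   features = tf.parse_single_example(serialized_example, features={
#       'image': tf.FixedLenFeature([], tf.string),
#       'label': tf.FixedLenFeature([], tf.int64),
#       'height': tf.FixedLenFeature([], tf.int64),
#       'width': tf.FixedLenFeature([], tf.int64),
#       'channels': tf.FixedLenFeature([], tf.int64),
#   })
#   image = tf.decode_raw(features['image'], tf.uint8)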
if __name__ == '__main__':
# Parameter settings
resize_height = 224 # stored image height
resize_width = 224 # stored image width
# Generate the train.record file
train_labels = './dataset/train.txt' # image list path
train_record_output = './dataset/record/train{}.tfrecords'.format(resize_height)
create_records(train_labels, train_record_output, resize_height, resize_width)
train_nums = get_example_nums(train_record_output)
print("save train example nums = {}".format(train_nums))
# Generate the val.record file
val_labels = './dataset/val.txt' # image list path
val_record_output = './dataset/record/val{}.tfrecords'.format(resize_height)
create_records(val_labels, val_record_output, resize_height, resize_width)
val_nums = get_example_nums(val_record_output)
print("save val example nums = {}".format(val_nums))
# Generate the test.record file
test_labels = './dataset/test.txt' # image list path
test_record_output = './dataset/record/test{}.tfrecords'.format(resize_height)
create_records(test_labels, test_record_output, resize_height, resize_width)
test_nums = get_example_nums(test_record_output)
print("save test example nums = {}".format(test_nums)) | 34.299363 | 125 | 0.659239 | 646 | 5,385 | 5.24613 | 0.289474 | 0.049572 | 0.031868 | 0.04072 | 0.130717 | 0.113603 | 0.018885 | 0 | 0 | 0 | 0 | 0.016611 | 0.228598 | 5,385 | 157 | 126 | 34.299363 | 0.79923 | 0.21727 | 0 | 0 | 0 | 0 | 0.09843 | 0.030401 | 0 | 0 | 0 | 0 | 0 | 1 | 0.083333 | false | 0 | 0.059524 | 0.035714 | 0.214286 | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2087a291b6146f0ef00a3da950e1b29854405f84 | 9,694 | py | Python | curses_interface.py | danomagnum/rpncalc | 0c1242f3716b9bd2bd27cb80b9471ae843c1ee74 | [
"MIT"
] | null | null | null | curses_interface.py | danomagnum/rpncalc | 0c1242f3716b9bd2bd27cb80b9471ae843c1ee74 | [
"MIT"
] | null | null | null | curses_interface.py | danomagnum/rpncalc | 0c1242f3716b9bd2bd27cb80b9471ae843c1ee74 | [
"MIT"
] | null | null | null | import rpncalc
#import readline
import curses
import sys
import os
import math
import pkgutil
import settings
STACK = 0
GRAPH_XY = 1
GRAPH_X = 2
mode = STACK
screen = curses.initscr()
screen.keypad(1)
YMAX, XMAX = screen.getmaxyx()
curses.noecho()
stackbox = curses.newwin(YMAX-4,XMAX -1,0,0)
inputbox = curses.newwin(3,XMAX -1,YMAX-5,0)
msgbox = curses.newwin(3,XMAX -1,YMAX-3,0)
numbox = curses.newwin(YMAX-4, 4, 0, 0)
inputbox.keypad(1)
loaded_plugins = {}
#load the plugins
def load_all_modules_from_dir(dirname):
for importer, package_name, _ in pkgutil.iter_modules([dirname]):
full_package_name = '%s.%s' % (dirname, package_name)
if full_package_name not in sys.modules:
module = importer.find_module(package_name).load_module(full_package_name)
yield module
for module in load_all_modules_from_dir('plugins'):
loaded_plugins.update(module.register())
function_list = rpncalc.ops.copy()
function_list.update(loaded_plugins)
interp = rpncalc.Interpreter(function_list)
class ShutErDownBoys(Exception):
pass
class BadPythonCommand(Exception):
pass
input_string_pre = ''
input_string_post = ''
history = []
history_position = 0
# open with 'a+' so an existing history file is not truncated ('w+' would wipe it)
historyfile = open('history', 'a+')
historyfile.seek(0)
history = [line.rstrip('\n') for line in historyfile.readlines()]
historyfile.close()
inputbox.box()
stackbox.box()
msgbox.box()
inputbox.overlay(screen)
stackbox.overlay(screen)
msgbox.overlay(screen)
screen.refresh()
def editor_validator(keystroke):
#raise Exception('ERRORRRRR: ' + str(keystroke))
message = str(keystroke)
tbox.do_command(keystroke)
def import_file(filename):
f = open(filename)
commands = f.read()
f.close()
#print "====================="
#print filename
#print "====================="
#print commands
#print "====================="
#return
for command in commands.split('\n'):
if len(command) > 0:
if command[0] == "#":
continue
parse(command)
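# parse(): handle ':'-prefixed interface commands (import/export/quit/step/run/!/graph/stack);
# any other input is forwarded to the RPN interpreter.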
def parse(input_string):
global mode
if input_string[0] == ':': # interface commands start with a colon
input_string = input_string[1:]
text = input_string.split()
if text[0] == 'import':
import_file(os.path.join(settings.functions_directory, text[1] + '.rpn'))
elif text[0] == 'export':
f = open(os.path.join(settings.functions_directory, text[1] + '.rpn'), 'w+')
commands = interp.stack[-1].stack
for cmd in commands:
f.write(cmd)
f.write('\n')
f.close()
elif text[0] == 'quit':
raise ShutErDownBoys()
elif text[0] == 'step':
interp.step()
elif text[0] == 'run':
interp.resume()
elif text[0] == '!':
try:
command = ''
for character in input_string[1:]:
if character == '?':
command += str(interp.pop()[0].val)
else:
command += character
res = eval(command)
interp.parse(str(res))
except Exception as e:
				raise BadPythonCommand('Bad Python Command (' + command + ') ' + str(e))
elif text[0] == 'graph':
if len(text) > 1:
if text[1] == 'x':
mode = GRAPH_X
interp.message("X Graph Mode")
elif text[1] == 'xy':
mode = GRAPH_XY
interp.message("XY Graph Mode")
elif mode == GRAPH_XY:
mode = GRAPH_X
interp.message("X Graph Mode")
else:
mode = GRAPH_XY
interp.message("XY Graph Mode")
elif text[0] == 'stack':
interp.message("Stack Mode")
mode = STACK
else:
interp.parse(input_string, True)
if settings.auto_import_functions:
#curses.endwin()
for dirpath, dirnames, filenames in os.walk(settings.auto_functions_directory):
for filename in filenames:
if len(filename) > 5:
if (filename[-4:] == '.rpn') and (filename[0] != '.'):
import_file(os.path.join(settings.auto_functions_directory, filename))
#sys.exit()
def setupnumbox():
numbox.clear()
numbox.box()
for y in range(1, YMAX - 5):
numbox.addstr(numbox.getmaxyx()[0] - y - 1, 1, str(y - 1))
setupnumbox()
loop = True
screen.clear()
inputbox.overlay(screen)
stackbox.overlay(screen)
msgbox.overlay(screen)
numbox.overlay(screen)
screen.refresh()
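# Main event loop: read one keystroke, edit the input line (history, arrows, home/end),
# run parse() on Enter, then redraw either the stack view or one of the two graph modes.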
while loop:
try:
screen.clear()
inputbox.clear()
inputbox.box()
stackbox.erase()
stackbox.box()
msgbox.clear()
msgbox.box()
inputbox.addstr(1, 2, input_string_pre)
inputbox.addstr(1, 2 + len(input_string_pre), input_string_post)
event = inputbox.getch(1, 2 + len(input_string_pre))
if event == 13:
event = curses.KEY_ENTER
if event == 10:
event = curses.KEY_ENTER
elif event == 8:
event = curses.KEY_BACKSPACE
elif event == 127:
event = curses.KEY_DC
if event <= 255:
if event > 0:
input_string_pre += chr(event)
else:
if event == curses.KEY_BACKSPACE:
if len(input_string_pre) > 0:
input_string_pre = input_string_pre[:-1]
elif event == curses.KEY_DC:
if len(input_string_post) > 0:
input_string_post = input_string_post[1:]
elif event == curses.KEY_LEFT:
if len(input_string_pre) > 0:
input_string_post = input_string_pre[-1] + input_string_post
input_string_pre = input_string_pre[:-1]
elif event == curses.KEY_RIGHT:
if len(input_string_post) > 0:
input_string_pre = input_string_pre + input_string_post[0]
input_string_post = input_string_post[1:]
elif event == curses.KEY_UP:
if history_position < len(history):
history_position += 1
input_string_post = ''
input_string_pre = history[-history_position]
elif event == curses.KEY_DOWN:
if history_position > 1:
history_position -= 1
input_string_post = ''
input_string_pre = history[-history_position]
if history_position == 1:
input_string_post = ''
input_string_pre = ''
elif event == curses.KEY_ENTER:
input_string = input_string_pre + input_string_post
if input_string != '':
history.append(input_string)
history_position = 0
input_string_post = ''
input_string_pre = ''
parse(input_string)
elif event == 262: #home key
input_string_post = input_string_pre + input_string_post
input_string_pre = ''
elif event == 360: #end key
input_string_pre = input_string_pre + input_string_post
input_string_post = ''
elif event == curses.KEY_RESIZE:
if curses.is_term_resized(YMAX, XMAX):
YMAX, XMAX = screen.getmaxyx()
interp.message("Screen Resized to " + str(YMAX) + ", " + str(XMAX))
				screen.clear()
curses.resizeterm(YMAX, XMAX)
screen.resize(YMAX, XMAX)
stackbox.resize(YMAX-4,XMAX -1)
inputbox.resize(3,XMAX -1)
msgbox.resize(3,XMAX -1)
numbox.resize(YMAX-4, 4)
stackbox.mvwin(0,0)
inputbox.mvwin(YMAX-5,0)
msgbox.mvwin(YMAX-3,0)
numbox.mvwin( 0, 0)
setupnumbox()
else:
interp.message(str(event))
if mode == STACK:
stack = interp.stack[:]
if interp.function_stack is not None:
stack += ['['] + interp.function_stack
max_stack = min(len(stack), YMAX-5)
if max_stack >= 0:
for row in range(1,max_stack + 1):
stackbox.addstr(YMAX- 5 - row, 5, str(stack[-row]))
elif mode == GRAPH_XY:
stackbox.clear()
stack = interp.stack[:]
xs = [x.val for x in stack[::2] if type(x) is not rpncalc.Function]
ys = [y.val for y in stack[1::2] if type(y) is not rpncalc.Function]
maxlength = min(len(xs), len(ys))
if maxlength <= 1:
mode = STACK
continue
x0 = min(xs)
xmax = max(xs)
dx = xmax - x0
y0 = min(ys)
ymax = max(ys)
dy = ymax - y0
frame_ymax, frame_xmax = stackbox.getmaxyx()
frame_ymax -= 3
frame_xmax -= 3
frame_x0 = 3
frame_dx = frame_xmax - frame_x0
frame_y0 = 1
frame_dy = frame_ymax - frame_y0
lastx = xs[0]
lasty = ys[0]
for index in range(maxlength):
xpos = int(frame_x0 + frame_dx * (xs[index] - x0)/dx + 1)
ypos = int(frame_y0 + frame_dy * (ymax - ys[index])/dy + 1)
deltax = xs[index] - lastx
deltay = ys[index] - lasty
if deltax == 0:
symbol = '|'
else:
slope = float(deltay) / float(deltax)
if slope > 0:
symbol = '/'
elif slope == 0:
symbol = '-'
else:
symbol = '\\'
lastx = xs[index]
lasty = ys[index]
stackbox.addstr(ypos, xpos, symbol)
elif mode == GRAPH_X:
stackbox.clear()
stack = interp.stack[:]
xs = [x.val for x in interp.stack if type(x) is not rpncalc.Function]
x0 = min(xs)
xmax = max(xs)
dx = xmax - x0
frame_ymax, frame_xmax = stackbox.getmaxyx()
frame_ymax -= 3
frame_xmax -= 3
frame_x0 = 4
frame_dx = frame_xmax - frame_x0
frame_y0 = 1
frame_dy = frame_ymax - frame_y0
maxlength = min(len(xs), frame_xmax - frame_x0)
if maxlength <= 1:
mode = STACK
continue
for index in range(maxlength):
xpos = index + frame_x0
ypos = int(frame_y0 + frame_dy * (xmax - xs[index])/dx + 1)
stackbox.addstr(ypos, xpos, 'X')
if interp.messages:
message_string = '| '.join(interp.messages)
msgbox.addstr(1, 5, message_string[:(XMAX - 8)])
screen.clear()
inputbox.overlay(screen)
stackbox.overlay(screen)
msgbox.overlay(screen)
numbox.overlay(screen)
screen.refresh()
except ShutErDownBoys:
loop = False
except KeyboardInterrupt:
input_string = input_string_pre + input_string_post
if input_string:
input_string_post = ''
input_string_pre = ''
else:
loop = False
except BadPythonCommand as e:
		interp.message(str(e))
except:
curses.endwin()
for x in rpncalc.log:
print(x)
raise
curses.endwin()
for v in interp.stack:
print(v)
for x in rpncalc.log:
print(x)
if settings.history > 0:
historyfile = open('history', 'w')
history_to_log = history
if len(history) > settings.history:
history_to_log = history[-settings.history:]
for historyitem in history_to_log:
historyfile.write(('%s\n' % historyitem.strip()))
historyfile.close()
sys.exit(0)
| 24.356784 | 80 | 0.657726 | 1,365 | 9,694 | 4.506227 | 0.155311 | 0.1073 | 0.056901 | 0.039018 | 0.364819 | 0.354739 | 0.28727 | 0.263372 | 0.223541 | 0.159974 | 0 | 0.02048 | 0.204147 | 9,694 | 397 | 81 | 24.418136 | 0.776798 | 0.028574 | 0 | 0.372308 | 0 | 0 | 0.021589 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0.006154 | 0.043077 | 0 | 0.064615 | 0.009231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208912d6f6d0a9d6a2c0d77936e735ebc5845e01 | 452 | py | Python | ex081.py | BrianBeyer/pythonExercicios | 062e2c6a9e6e6f513185f1fb1d4269d8ca1d9e89 | [
"MIT"
] | null | null | null | ex081.py | BrianBeyer/pythonExercicios | 062e2c6a9e6e6f513185f1fb1d4269d8ca1d9e89 | [
"MIT"
] | null | null | null | ex081.py | BrianBeyer/pythonExercicios | 062e2c6a9e6e6f513185f1fb1d4269d8ca1d9e89 | [
"MIT"
] | null | null | null | valores = []
c = 0
resp = 'S'
cinco = 0
while resp in 'Ss':
    valores.append(int(input('Digite um valor:')))
resp = str(input('Quer continuar? [S/N]:')).upper().strip()[0]
c+=1
print(f'Você digitou {c} valores')  # or len(valores)
valores.sort(reverse=True)
print(f'Os valores em ordem decrescente são {valores}')
if 5 in valores:
print(f'O valor 5 foi encontrado na lista! ')
else:
print(f'O valor 5 não foi encontrado na lista! ') | 30.133333 | 66 | 0.652655 | 77 | 452 | 3.831169 | 0.597403 | 0.081356 | 0.047458 | 0.081356 | 0.088136 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018919 | 0.181416 | 452 | 15 | 67 | 30.133333 | 0.778378 | 0.033186 | 0 | 0 | 0 | 0 | 0.422018 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.266667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2089e701f232686bde232f252421fe8e4203a707 | 7,468 | py | Python | clairmeta/settings.py | Kariboupseudo/ClairMeta | e0a26073935f07abda84d9abf0f194716854292f | [
"BSD-3-Clause"
] | null | null | null | clairmeta/settings.py | Kariboupseudo/ClairMeta | e0a26073935f07abda84d9abf0f194716854292f | [
"BSD-3-Clause"
] | null | null | null | clairmeta/settings.py | Kariboupseudo/ClairMeta | e0a26073935f07abda84d9abf0f194716854292f | [
"BSD-3-Clause"
] | null | null | null | # Clairmeta - (C) YMAGIS S.A.
# See LICENSE for more information
LOG_SETTINGS = {
'level': 'INFO',
'enable_console': True,
'enable_file': True,
'file_name': '~/Library/Logs/clairmeta.log',
'file_size': 1e6,
'file_count': 10,
}
DCP_SETTINGS = {
# ISDCF Naming Convention enforced
'naming_convention': '9.3',
# Recognized XML namespaces
'xmlns': {
'xml': 'http://www.w3.org/XML/1998/namespace',
'xmldsig': 'http://www.w3.org/2000/09/xmldsig#',
'cpl_metadata_href': 'http://isdcf.com/schemas/draft/2011/cpl-metadata',
'interop_pkl': 'http://www.digicine.com/PROTO-ASDCP-PKL-20040311#',
'interop_cpl': 'http://www.digicine.com/PROTO-ASDCP-CPL-20040511#',
'interop_am': 'http://www.digicine.com/PROTO-ASDCP-AM-20040311#',
'interop_vl': 'http://www.digicine.com/PROTO-ASDCP-VL-20040311#',
'interop_stereo': 'http://www.digicine.com/schemas/437-Y/2007/Main-Stereo-Picture-CPL',
'interop_subtitle': 'interop_subtitle',
'smpte_pkl_2006': 'http://www.smpte-ra.org/schemas/429-8/2006/PKL',
'smpte_pkl_2007': 'http://www.smpte-ra.org/schemas/429-8/2007/PKL',
'smpte_cpl': 'http://www.smpte-ra.org/schemas/429-7/2006/CPL',
'smpte_cpl_metadata': 'http://www.smpte-ra.org/schemas/429-16/2014/CPL-Metadata',
'smpte_am_2006': 'http://www.smpte-ra.org/schemas/429-9/2006/AM',
'smpte_am_2007': 'http://www.smpte-ra.org/schemas/429-9/2007/AM',
'smpte_stereo_2007': 'http://www.smpte-ra.org/schemas/429-10/2007/Main-Stereo-Picture-CPL',
'smpte_stereo_2008': 'http://www.smpte-ra.org/schemas/429-10/2008/Main-Stereo-Picture-CPL',
'smpte_subtitles_2007': 'http://www.smpte-ra.org/schemas/428-7/2007/DCST',
'smpte_subtitles_2010': 'http://www.smpte-ra.org/schemas/428-7/2010/DCST',
'smpte_subtitles_2014': 'http://www.smpte-ra.org/schemas/428-7/2014/DCST',
'smpte_tt': 'http://www.smpte-ra.org/schemas/429-12/2008/TT',
'smpte_etm': 'http://www.smpte-ra.org/schemas/430-3/2006/ETM',
'smpte_kdm': 'http://www.smpte-ra.org/schemas/430-1/2006/KDM',
'atmos': 'http://www.dolby.com/schemas/2012/AD',
},
# Recognized XML identifiers
'xmluri': {
'interop_sig': 'http://www.w3.org/2000/09/xmldsig#rsa-sha1',
'smpte_sig': 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256',
'enveloped_sig': 'http://www.w3.org/2000/09/xmldsig#enveloped-signature',
'c14n': 'http://www.w3.org/TR/2001/REC-xml-c14n-20010315',
'sha1': 'http://www.w3.org/2000/09/xmldsig#sha1',
'dolby_edr': 'http://www.dolby.com/schemas/2014/EDR-Metadata',
},
'picture': {
# Standard resolutions
'resolutions': {
'2K': ['1998x1080', '2048x858', '2048x1080'],
'4K': ['3996x2160', '4096x1716', '4096x2160'],
'HD': ['1920x1080'],
'UHD': ['3840x2160'],
},
# Standard editrate
'editrates': {
'2K': {'2D': [24, 25, 30, 48, 50, 60], '3D': [24, 25, 30, 48, 50, 60]},
'4K': {'2D': [24, 25, 30], '3D': []},
},
# Archival editrate
'editrates_archival': [16, 200.0/11, 20, 240.0/11],
        # HFR-capable equipment (projection servers)
'editrates_min_series2': {
'2D': 96,
'3D': 48,
},
# Standard aspect ratio
'aspect_ratio': {
'F': {'ratio': 1.85, 'resolutions': ['1998x1080', '3996x2160']},
'S': {'ratio': 2.39, 'resolutions': ['2048x858', '4096x1716']},
'C': {'ratio': 1.90, 'resolutions': ['2048x1080', '4096x2160']},
},
# For metadata tagging, decoupled from bitrate thresholds
'min_hfr_editrate': 48,
# As stated in http://www.dcimovies.com/Recommended_Practice/
# These are in Mb/s
        # Note: asdcplib uses a 400Mb/s threshold for HFR, why?
'max_dci_bitrate': 250,
'max_hfr_bitrate': 500,
'max_dvi_bitrate': 400,
'min_editrate_hfr_bitrate': {
'2K': {'2D': 60, '3D': 48},
'4K': {'2D': 48, '3D': 0}
},
        # We allow a small offset above the DCI specification:
        # asdcplib uses a method of computation that can only give an
        # approximation (worst case scenario) of the actual max bitrate.
        # asdcplib basically finds the biggest frame in the whole track and
        # multiplies it by the editrate.
        # Note: the DCI specification seems to limit individual j2c frame size,
        # so the method used by asdcplib should be valid in this regard; it seems
        # that the observed bitrates between 250 and 250.05 are due to the
        # encryption overhead in the KLV packaging.
'bitrate_tolerance': 0.05,
# This is a percentage below max_bitrate
'average_bitrate_margin': 2.0,
# As stated in SMPTE 429-2
'dwt_levels_2k': 5,
'dwt_levels_4k': 6,
},
'sound': {
'sampling_rate': [48000, 96000],
'max_channel_count': 16,
'quantization': 24,
# This maps SMPTE 429-2 AudioDescriptor.ChannelFormat to a label and
# a min / max number of allowed channels.
# See. Section A.1.2 'Channel Configuration Tables'
'configuration_channels': {
1: ('5.1 with optional HI/VI', 6, 8),
2: ('6.1 (5.1 + center surround) with optional HI/VI', 7, 10),
3: ('7.1 (SDDS) with optional HI/VI', 8, 10),
4: ('Wild Track Format', 1, 16),
5: ('7.1 DS with optional HI/VI', 8, 10),
},
'format_channels': {
'10': 1,
'20': 2,
'51': 6,
'61': 7,
'71': 8,
'11.1': 12,
},
},
'atmos': {
'max_channel_count': 64,
'max_object_count': 118
},
'subtitle': {
# In bytes
'font_max_size': 655360,
},
}
DCP_CHECK_SETTINGS = {
# List of check modules for DCP check, these modules will be imported
# dynamically during the check process.
'module_prefix': 'dcp_check_',
'modules': {
'vol': 'VolIndex checks',
'am': 'AssetMap checks',
'pkl': 'PackingList checks',
'cpl': 'CompositionPlayList checks',
'sign': 'Digital signature checks',
'isdcf_dcnc': 'Naming Convention checks',
'picture': 'Picture essence checks',
'sound': 'Sound essence checks',
'subtitle': 'Subtitle essence checks',
'atmos': 'Atmos essence checks',
}
}
IMP_SETTINGS = {
'xmlns': {
'xmldsig': 'http://www.w3.org/2000/09/xmldsig#',
'imp_am': 'http://www.smpte-ra.org/schemas/429-9/2007/AM',
'imp_pkl': 'http://www.smpte-ra.org/schemas/429-8/2007/PKL',
'imp_opl': 'http://www.smpte-ra.org/schemas/2067-100/',
'imp_cpl': 'http://www.smpte-ra.org/schemas/2067-3/',
}
}
DSM_SETTINGS = {
'allowed_extensions': {
'.dpx': 'DPX image data',
'.tiff': 'TIFF image data',
'.tif': 'TIFF image data',
'.exr': 'OpenEXR image data',
'.cin': 'Cineon image data',
},
'directory_white_list': ['.thumbnails'],
'file_white_list': ['.DS_Store'],
}
DCDM_SETTINGS = {
'allowed_extensions': {
'.tiff': 'TIFF image data',
'.tif': 'TIFF image data',
},
'directory_white_list': ['.thumbnails'],
'file_white_list': ['.DS_Store'],
}
| 39.935829 | 99 | 0.573246 | 948 | 7,468 | 4.405063 | 0.330169 | 0.056992 | 0.051724 | 0.060345 | 0.283525 | 0.250718 | 0.209052 | 0.154933 | 0.060345 | 0.060345 | 0 | 0.110592 | 0.254151 | 7,468 | 186 | 100 | 40.150538 | 0.639138 | 0.174344 | 0 | 0.09396 | 0 | 0.026846 | 0.55983 | 0.019074 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208b5a19512f2c1942088dd7f08d4f5e98808037 | 869 | py | Python | examples/python_in_cpp/python_src/py_display.py | aff3ct/py_aff3ct | 8afb7e6b1db1b621db0ae4153b29a2e848e09fcf | [
"MIT"
] | 15 | 2021-01-24T11:59:04.000Z | 2022-03-23T07:23:44.000Z | examples/python_in_cpp/python_src/py_display.py | aff3ct/py_aff3ct | 8afb7e6b1db1b621db0ae4153b29a2e848e09fcf | [
"MIT"
] | 8 | 2021-05-24T18:22:45.000Z | 2022-03-11T09:48:05.000Z | examples/python_in_cpp/python_src/py_display.py | aff3ct/py_aff3ct | 8afb7e6b1db1b621db0ae4153b29a2e848e09fcf | [
"MIT"
] | 4 | 2021-01-26T19:18:21.000Z | 2021-12-07T17:02:34.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
sys.path.insert(0, '../../../build/lib')
import numpy as np
import matplotlib.pyplot as plt
from py_aff3ct.module.py_module import Py_Module
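# Display: a Py_Module exposing a single 'plot' task; it treats the incoming vector as
# interleaved real/imaginary samples and refreshes a scatter plot every 50 calls.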
class Display(Py_Module):
def plot(self, x):
if self.i_plt % 50 == 0:
self.line.set_data(x[0,::2], x[0,1::2])
self.fig.canvas.draw()
self.fig.canvas.flush_events()
plt.pause(0.000000000001)
self.i_plt = self.i_plt + 1
return 0
def __init__(self, N):
Py_Module.__init__(self)
t_plot = self.create_task("plot")
self.create_socket_in(t_plot, "x", N, np.float32)
self.create_codelet (t_plot, lambda m,l,f: m.plot(l[0]))
self.fig = plt.figure()
self.ax = self.fig.add_subplot(1, 1, 1)
self.line, = self.ax.plot([], '.b')
self.i_plt = 0
plt.xlabel("Real part")
plt.ylabel("Imaginary part")
plt.ylim(-2,2)
plt.xlim(-2,2) | 24.828571 | 59 | 0.654776 | 156 | 869 | 3.474359 | 0.455128 | 0.059041 | 0.059041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052055 | 0.159954 | 869 | 35 | 60 | 24.828571 | 0.690411 | 0.049482 | 0 | 0 | 0 | 0 | 0.058182 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.074074 | false | 0 | 0.148148 | 0 | 0.296296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
208b826ac073ffc78e9fc4a4daa39a825a8767d0 | 3,906 | py | Python | tests/_dao/TestRTKEnvironment.py | rakhimov/rtk | adc35e218ccfdcf3a6e3082f6a1a1d308ed4ff63 | [
"BSD-3-Clause"
] | null | null | null | tests/_dao/TestRTKEnvironment.py | rakhimov/rtk | adc35e218ccfdcf3a6e3082f6a1a1d308ed4ff63 | [
"BSD-3-Clause"
] | null | null | null | tests/_dao/TestRTKEnvironment.py | rakhimov/rtk | adc35e218ccfdcf3a6e3082f6a1a1d308ed4ff63 | [
"BSD-3-Clause"
] | 2 | 2020-04-03T04:14:42.000Z | 2021-02-22T05:30:35.000Z | #!/usr/bin/env python -O
# -*- coding: utf-8 -*-
#
# tests.unit._dao.TestRTKEnvironment.py is part of The RTK Project
#
# All rights reserved.
"""
This is the test class for testing the RTKEnvironment module algorithms and
models.
"""
import sys
from os.path import dirname
sys.path.insert(
0,
dirname(dirname(dirname(dirname(__file__)))) + "/rtk", )
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
import unittest
from nose.plugins.attrib import attr
from dao.RTKEnvironment import RTKEnvironment
__author__ = 'Andrew Rowland'
__email__ = 'andrew.rowland@reliaqual.com'
__organization__ = 'ReliaQual Associates, LLC'
__copyright__ = 'Copyright 2017 Andrew "weibullguy" Rowland'
class TestRTKEnvironment(unittest.TestCase):
"""
Class for testing the RTKEnvironment class.
"""
_attributes = {
'environment_id': 1,
'low_dwell_time': 0.0,
'minimum': 0.0,
'ramp_rate': 0.0,
'high_dwell_time': 0.0,
'name': 'Test Environmental Condition',
'maximum': 0.0,
'units': u'Units',
'variance': 0.0,
'phase_id': 1,
'mean': 0.0
}
def setUp(self):
"""
Sets up the test fixture for the RTKEnvironment class.
"""
engine = create_engine('sqlite:////tmp/TestDB.rtk', echo=False)
session = scoped_session(sessionmaker())
session.remove()
session.configure(bind=engine, autoflush=False, expire_on_commit=False)
self.DUT = session.query(RTKEnvironment).first()
self.DUT.name = 'Test Environmental Condition'
session.commit()
@attr(all=True, unit=True)
def test00_rtkenvironment_create(self):
"""
(TestRTKEnvironment) DUT should create an RTKEnvironment model.
"""
self.assertTrue(isinstance(self.DUT, RTKEnvironment))
# Verify class attributes are properly initialized.
self.assertEqual(self.DUT.__tablename__, 'rtk_environment')
self.assertEqual(self.DUT.phase_id, 1)
self.assertEqual(self.DUT.environment_id, 1)
self.assertEqual(self.DUT.name, 'Test Environmental Condition')
self.assertEqual(self.DUT.units, 'Units')
self.assertEqual(self.DUT.minimum, 0.0)
self.assertEqual(self.DUT.maximum, 0.0)
self.assertEqual(self.DUT.mean, 0.0)
self.assertEqual(self.DUT.variance, 0.0)
self.assertEqual(self.DUT.ramp_rate, 0.0)
self.assertEqual(self.DUT.low_dwell_time, 0.0)
self.assertEqual(self.DUT.high_dwell_time, 0.0)
@attr(all=True, unit=True)
def test01_get_attributes(self):
"""
(TestRTKEnvironment) get_attributes should return a tuple of attribute values.
"""
self.assertEqual(self.DUT.get_attributes(), self._attributes)
@attr(all=True, unit=True)
def test02a_set_attributes(self):
"""
(TestRTKEnvironment) set_attributes should return a zero error code on success
"""
_error_code, _msg = self.DUT.set_attributes(self._attributes)
self.assertEqual(_error_code, 0)
self.assertEqual(_msg, "RTK SUCCESS: Updating RTKEnvironment {0:d} " \
"attributes.".format(self.DUT.environment_id))
@attr(all=True, unit=True)
def test02b_set_attributes_missing_key(self):
"""
        (TestRTKEnvironment) set_attributes should return a 40 error code when passed a dict with a missing key
"""
self._attributes.pop('variance')
_error_code, _msg = self.DUT.set_attributes(self._attributes)
self.assertEqual(_error_code, 40)
self.assertEqual(_msg,
"RTK ERROR: Missing attribute 'variance' in " \
"attribute dictionary passed to " \
"RTKEnvironment.set_attributes().")
| 31.248 | 111 | 0.647465 | 455 | 3,906 | 5.38022 | 0.32967 | 0.05433 | 0.100899 | 0.11683 | 0.28799 | 0.238971 | 0.096405 | 0.05719 | 0.05719 | 0.05719 | 0 | 0.017538 | 0.240911 | 3,906 | 124 | 112 | 31.5 | 0.808094 | 0.178187 | 0 | 0.086957 | 0 | 0 | 0.166341 | 0.027723 | 0 | 0 | 0 | 0 | 0.26087 | 1 | 0.072464 | false | 0.014493 | 0.101449 | 0 | 0.202899 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20947dad3cda2fd32b55ac90d29dde10b443304e | 2,704 | py | Python | models.py | RidleyLeisy/data-science-1 | bdb0ce1d5b01e2ee0b6b455c9382638cce0027e2 | [
"MIT"
] | null | null | null | models.py | RidleyLeisy/data-science-1 | bdb0ce1d5b01e2ee0b6b455c9382638cce0027e2 | [
"MIT"
] | 3 | 2021-02-08T20:34:21.000Z | 2021-06-02T00:21:00.000Z | models.py | RidleyLeisy/data-science-1 | bdb0ce1d5b01e2ee0b6b455c9382638cce0027e2 | [
"MIT"
] | 1 | 2019-08-28T21:51:14.000Z | 2019-08-28T21:51:14.000Z | import pandas as pd
import numpy as np
from sklearn.pipeline import Pipeline
import category_encoders as ce
from scipy.spatial.distance import cdist
import joblib  # sklearn.externals.joblib is deprecated/removed in recent scikit-learn
from db_helper import DbHelper
cols = ['column_a', 'player', 'all_nba', 'all_star', 'draft_yr','pk','team', 'college', 'yrs', 'games',
'minutes_played', 'pts', 'trb', 'ast', 'fg_percentage', 'tp_percentage','ft_percentage',
'minutes_per_game','points_per_game','trb_per_game','assits_per_game','win_share','ws_per_game','bpm',
'vorp','executive','tenure','exec_id','exec_draft_exp','attend_college','first_year', 'second_year',
'third_year', 'fourth_year','fifth_year']
target = 'player'
features = ['all_nba', 'all_star', 'draft_yr', 'pk', 'team',
'college', 'games', 'minutes_played', 'pts', 'trb', 'ast',
'fg_percentage', 'tp_percentage', 'ft_percentage', 'minutes_per_game',
'points_per_game', 'trb_per_game', 'assits_per_game', 'win_share',
'ws_per_game', 'bpm', 'vorp', 'exec_id',
'exec_draft_exp', 'attend_college', 'first_year', 'second_year',
'third_year', 'fourth_year', 'fifth_year', 'retire_yr']
class Model():
def __init__(self, name):
db = DbHelper()
self.all_players = db.query_all_players()
self.player = db.query_player(name)
return
def wrangle_df(self):
df = pd.DataFrame(self.all_players,columns=cols)
player_df = pd.DataFrame(self.player).T
player_df.columns = cols
df['retire_yr'] = df['draft_yr'] + df['yrs']
player_df['retire_yr'] = player_df['draft_yr'] + player_df['yrs']
return df, player_df
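    # build_similars: ordinal-encode the categorical columns, rank every player by
    # Euclidean distance to the queried player, and return the three nearest players
    # plus a longevity prediction from the pre-trained model saved on disk.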
def build_similars(self):
df, player_df = self.wrangle_df()
encode_pipeline = Pipeline(steps=[('ord',ce.OrdinalEncoder(cols=['team','college','attend_college','first_year',
'second_year','third_year','fourth_year','fifth_year']))])
#encoding
X = encode_pipeline['ord'].fit_transform(df[features])
x_player = encode_pipeline['ord'].transform(player_df[features])
ary = cdist(x_player.values.reshape(1,-1), X.values, metric='euclidean')
euclid = pd.DataFrame(ary).T.sort_values(by=0)
top_three = euclid.iloc[1:4]
top_three = df.iloc[top_three.index]
#longevity
filename = 'model_longevity.sav'
loaded_model = joblib.load(filename)
longevity = loaded_model.predict(x_player)
return top_three, longevity
if __name__ == '__main__':
model = Model('Kobe Bryant')
print(model.build_similars())
s, l = model.build_similars()
print(l[0]) | 37.041096 | 120 | 0.640902 | 351 | 2,704 | 4.615385 | 0.34188 | 0.04321 | 0.033333 | 0.040741 | 0.34321 | 0.34321 | 0.34321 | 0.34321 | 0.34321 | 0.302469 | 0 | 0.002791 | 0.204882 | 2,704 | 73 | 121 | 37.041096 | 0.750698 | 0.006657 | 0 | 0 | 0 | 0 | 0.286778 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.134615 | 0 | 0.269231 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2096856b33bf00dedb67422295e6927b9ab0e166 | 854 | py | Python | aggregate/color_videos.py | isaiahnields/attention.ai | 96fe8d738e4fc36f05e6c72e2f1fcdd7a4048261 | [
"MIT"
] | 8 | 2019-02-12T07:07:42.000Z | 2022-03-02T08:13:01.000Z | aggregate/color_videos.py | isaiahnields/attention.ai | 96fe8d738e4fc36f05e6c72e2f1fcdd7a4048261 | [
"MIT"
] | 7 | 2020-01-28T22:06:03.000Z | 2022-02-09T23:29:48.000Z | aggregate/color_videos.py | isaiahnields/attention.ai | 96fe8d738e4fc36f05e6c72e2f1fcdd7a4048261 | [
"MIT"
] | 8 | 2019-02-12T07:07:46.000Z | 2021-09-21T13:37:19.000Z | import cv2
from os import listdir
from os.path import isfile, join
import numpy as np
from math import sin, pi
paths = [f for f in listdir('combined_videos') if isfile(join('combined_videos', f))]
preds = np.load('agg_preds.npy')
preds = np.sqrt(preds)
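# For each clip: walk 72 windows of 15 frames; whenever the (sqrt-scaled) prediction for a
# window exceeds 0.5, divide the blue/green channels (OpenCV BGR order) by a sine envelope,
# leaving a red tint over the attended frames in the output video.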
for i in range(len(paths)):
ps = preds[i, :]
cap = cv2.VideoCapture('combined_videos/' + paths[i])
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('final_videos/' + paths[i], fourcc, 30,
(int(cap.get(3)), int(cap.get(4))))
for f in range(72):
for j in range(15):
ret, frame = cap.read()
if ret:
if ps[f] > 0.5:
frame[:, :, :2] = frame[:, :, :2] / (sin((j * pi) / 14) + 1)
out.write(frame)
else:
break
print(paths[i])
| 26.6875 | 85 | 0.533958 | 121 | 854 | 3.719008 | 0.471074 | 0.093333 | 0.026667 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032368 | 0.312646 | 854 | 31 | 86 | 27.548387 | 0.734242 | 0 | 0 | 0 | 0 | 0 | 0.088993 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.208333 | 0 | 0.208333 | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
2099d8a4a1df272a200a2b4774039e76e7ba0d00 | 5,909 | py | Python | src/beansapplicationmgr.py | primroses/docklet | 6c42a472a8b427496c03fad18b873cb4be219db3 | [
"BSD-3-Clause"
] | null | null | null | src/beansapplicationmgr.py | primroses/docklet | 6c42a472a8b427496c03fad18b873cb4be219db3 | [
"BSD-3-Clause"
] | null | null | null | src/beansapplicationmgr.py | primroses/docklet | 6c42a472a8b427496c03fad18b873cb4be219db3 | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python3
'''
This module consists of three parts:
1.send_beans_email: a function to send email to remind users of their beans.
2.ApplicationMgr: a class that will deal with users' requests about beans application.
3.ApprovalRobot: a automatic robot to examine and approve users' applications.
'''
import threading,datetime,random,time
from model import db,User,ApplyMsg
from userManager import administration_required
import env
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from email.header import Header
email_from_address = env.getenv('EMAIL_FROM_ADDRESS')
# send email to remind users of their beans
def send_beans_email(to_address, username, beans):
global email_from_address
if (email_from_address in ['\'\'', '\"\"', '']):
return
#text = 'Dear '+ username + ':\n' + ' Your beans in docklet are less than' + beans + '.'
text = '<html><h4>Dear '+ username + ':</h4>'
text += '''<p> Your beans in <a href='%s'>docklet</a> are %d now. </p>
    <p> If your beans are less than or equal to 0, all your workspaces will be stopped.</p>
<p> Please apply for more beans to keep your workspaces running by following link:</p>
<p> <a href='%s/beans/application/'>%s/beans/application/</p>
<br>
<p> Note: DO NOT reply to this email!</p>
<br><br>
<p> <a href='http://docklet.unias.org'>Docklet Team</a>, SEI, PKU</p>
''' % (env.getenv("PORTAL_URL"), beans, env.getenv("PORTAL_URL"), env.getenv("PORTAL_URL"))
text += '<p>'+ str(datetime.datetime.now()) + '</p>'
text += '</html>'
subject = 'Docklet beans alert'
msg = MIMEMultipart()
textmsg = MIMEText(text,'html','utf-8')
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = email_from_address
msg['To'] = to_address
msg.attach(textmsg)
s = smtplib.SMTP()
s.connect()
s.sendmail(email_from_address, to_address, msg.as_string())
s.close()
# a class that will deal with users' requests about beans application.
class ApplicationMgr:
def __init__(self):
# create database
try:
ApplyMsg.query.all()
except:
db.create_all()
# user apply for beans
def apply(self,username,number,reason):
user = User.query.filter_by(username=username).first()
if user is not None and user.beans >= 1000:
return [False, "Your beans must be less than 1000."]
if int(number) < 100 or int(number) > 5000:
return [False, "Number field must be between 100 and 5000!"]
applymsgs = ApplyMsg.query.filter_by(username=username).all()
lasti = len(applymsgs) - 1 # the last index, the last application is also the latest application.
if lasti >= 0 and applymsgs[lasti].status == "Processing":
return [False, "You already have a processing application, please be patient."]
# store the application into the database
applymsg = ApplyMsg(username,number,reason)
db.session.add(applymsg)
db.session.commit()
return [True,""]
# get all applications of a user
def query(self,username):
applymsgs = ApplyMsg.query.filter_by(username=username).all()
ans = []
for msg in applymsgs:
ans.append(msg.ch2dict())
return ans
# get all unread applications
@administration_required
def queryUnRead(self,*,cur_user):
applymsgs = ApplyMsg.query.filter_by(status="Processing").all()
ans = []
for msg in applymsgs:
ans.append(msg.ch2dict())
return {"success":"true","applymsgs":ans}
# agree an application
@administration_required
def agree(self,msgid,*,cur_user):
applymsg = ApplyMsg.query.get(msgid)
if applymsg is None:
return {"success":"false","message":"Application doesn\'t exist."}
applymsg.status = "Agreed"
user = User.query.filter_by(username=applymsg.username).first()
if user is not None:
# update users' beans
user.beans += applymsg.number
db.session.commit()
return {"success":"true"}
# reject an application
@administration_required
def reject(self,msgid,*,cur_user):
applymsg = ApplyMsg.query.get(msgid)
if applymsg is None:
return {"success":"false","message":"Application doesn\'t exist."}
applymsg.status = "Rejected"
db.session.commit()
return {"success":"true"}
# a automatic robot to examine and approve users' applications.
class ApprovalRobot(threading.Thread):
def __init__(self,maxtime=3600):
threading.Thread.__init__(self)
        self._stop_flag = False  # private flag; must not shadow the stop() method below
        self.interval = 20
        self.maxtime = maxtime # The max time that users may wait for from 'processing' to 'agreed'
    def stop(self):
        self._stop_flag = True
    def run(self):
        while not self._stop_flag:
# query all processing applications
applymsgs = ApplyMsg.query.filter_by(status="Processing").all()
for msg in applymsgs:
secs = (datetime.datetime.now() - msg.time).seconds
ranint = random.randint(self.interval,self.maxtime)
if secs >= ranint:
msg.status = "Agreed"
user = User.query.filter_by(username=msg.username).first()
if user is not None:
# update users'beans
user.beans += msg.number
db.session.commit()
time.sleep(self.interval)
| 39.657718 | 137 | 0.616517 | 737 | 5,909 | 4.87517 | 0.272727 | 0.055664 | 0.066797 | 0.066797 | 0.367103 | 0.34595 | 0.320067 | 0.298358 | 0.183691 | 0.155302 | 0 | 0.009371 | 0.259604 | 5,909 | 148 | 138 | 39.925676 | 0.811886 | 0.160433 | 0 | 0.256881 | 0 | 0.055046 | 0.237034 | 0.058549 | 0 | 0 | 0 | 0 | 0 | 1 | 0.091743 | false | 0 | 0.073395 | 0 | 0.284404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209a7c9fd12e2cce719dba7f8f99eed34a7d71a3 | 863 | py | Python | Chapter03/Cisco/cisco_nxapi_4.py | stavsta/Mastering-Python-Networking-Second-Edition | 9999d2e415a1eb9c653ac3507500da7ddac2b556 | [
"MIT"
] | 107 | 2017-03-31T09:39:47.000Z | 2022-01-10T17:43:12.000Z | Chapter03/Cisco/cisco_nxapi_4.py | muzhang90/Mastering-Python-Networking-Third-Edition | f8086fc9a28e441cf8c31099d16839c2e868c7fc | [
"MIT"
] | 3 | 2020-03-29T14:14:43.000Z | 2020-10-29T18:21:09.000Z | Chapter03/Cisco/cisco_nxapi_4.py | muzhang90/Mastering-Python-Networking-Third-Edition | f8086fc9a28e441cf8c31099d16839c2e868c7fc | [
"MIT"
] | 98 | 2017-02-25T17:55:43.000Z | 2022-02-20T19:06:06.000Z | #!/usr/bin/env python3
import requests
import json
url='http://172.16.1.90/ins'
switchuser='cisco'
switchpassword='cisco'
myheaders={'content-type':'application/json-rpc'}
payload=[
{
"jsonrpc": "2.0",
"method": "cli",
"params": {
"cmd": "interface ethernet 2/12",
"version": 1.2
},
"id": 1
},
{
"jsonrpc": "2.0",
"method": "cli",
"params": {
"cmd": "description foo-bar",
"version": 1.2
},
"id": 2
},
{
"jsonrpc": "2.0",
"method": "cli",
"params": {
"cmd": "end",
"version": 1.2
},
"id": 3
},
{
"jsonrpc": "2.0",
"method": "cli",
"params": {
"cmd": "copy run start",
"version": 1.2
},
"id": 4
}
]
response = requests.post(url,data=json.dumps(payload), headers=myheaders,auth=(switchuser,switchpassword)).json()
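# 'response' should hold one JSON-RPC result object per command in the payload above;
# e.g. print(response) to check that each CLI step was accepted by the switch.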
| 15.981481 | 113 | 0.505214 | 99 | 863 | 4.40404 | 0.505051 | 0.073395 | 0.082569 | 0.137615 | 0.247706 | 0.247706 | 0.247706 | 0 | 0 | 0 | 0 | 0.050713 | 0.26883 | 863 | 53 | 114 | 16.283019 | 0.640254 | 0.024334 | 0 | 0.355556 | 0 | 0 | 0.323389 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.044444 | 0.044444 | 0 | 0.044444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209ba14e1d9e24a86fee89293a465d6242084374 | 1,510 | py | Python | website/__init__.py | oforiwaasam/bookshub | 5c83422971f4abdc5fe18d9b088ed3ca5a230636 | [
"MIT"
] | null | null | null | website/__init__.py | oforiwaasam/bookshub | 5c83422971f4abdc5fe18d9b088ed3ca5a230636 | [
"MIT"
] | null | null | null | website/__init__.py | oforiwaasam/bookshub | 5c83422971f4abdc5fe18d9b088ed3ca5a230636 | [
"MIT"
] | null | null | null | from os import path, environ
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager
# Define a new database below
db = SQLAlchemy()
DB_NAME = "site.db"
login_manager = LoginManager()
def create_app():
# database configuration
app = Flask(__name__)
app.config['SECRET_KEY'] = 'akdhej klklejio jh'
# app.config['SQLALCHEMY_DATABASE_URI'] = f'sqlite:///{DB_NAME}' # path to database and its name
    # using an environment variable DATABASE_URL, which is created by adding PostgreSQL to the Heroku project, to tell SQLAlchemy where the database is located
# if the DATABASE_URL is set, then we'll use that URL, otherwise, we'll use the sqlite one.
app.config['SQLALCHEMY_DATABASE_URI'] = environ.get('DATABASE_URL?sslmode=require') or f'sqlite:///{DB_NAME}'
#to disable a feature that signals the application every time a
# change is about to be made in the database
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# Initialize plugins with our application
db.init_app(app)
login_manager.init_app(app)
from . import routes
from . import auth
# Register Blueprints
app.register_blueprint(routes.main)
app.register_blueprint(auth.auth)
from .models import User, Note
create_database(app)
return app
# you can also create the database here
def create_database(app):
if not path.exists('website/' + DB_NAME):
db.create_all(app=app)
print('Created Database!') | 34.318182 | 154 | 0.72649 | 215 | 1,510 | 4.962791 | 0.474419 | 0.022493 | 0.053421 | 0.050609 | 0.056232 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190728 | 1,510 | 44 | 155 | 34.318182 | 0.873159 | 0.388742 | 0 | 0 | 0 | 0 | 0.175439 | 0.088816 | 0 | 0 | 0 | 0 | 0 | 1 | 0.08 | false | 0 | 0.28 | 0 | 0.4 | 0.12 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209bd889393f7e101ff66a22355ff5e0d930a797 | 1,230 | py | Python | python/ql/test/experimental/query-tests/Security/CWE-079/smtplib_bad_subparts.py | RasmusWL/ql | 298f4ab899dcb12414d4fd5112956b82effd140f | [
"MIT"
] | null | null | null | python/ql/test/experimental/query-tests/Security/CWE-079/smtplib_bad_subparts.py | RasmusWL/ql | 298f4ab899dcb12414d4fd5112956b82effd140f | [
"MIT"
] | 4 | 2022-02-17T06:25:43.000Z | 2022-02-23T15:55:30.000Z | python/ql/test/experimental/query-tests/Security/CWE-079/smtplib_bad_subparts.py | jketema/codeql | 09578015886a0c59c2d21c9d09d565742803a5a4 | [
"MIT"
] | null | null | null | # This test checks that the developer doesn't pass a MIMEText instance to a MIMEMultipart initializer via the subparts parameter.
from flask import Flask, request
import json
import smtplib
import ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
app = Flask(__name__)
@app.route("/")
def email_person():
sender_email = "sender@gmail.com"
receiver_email = "receiver@example.com"
name = request.args['search']
# Create the plain-text and HTML version of your message
text = "hello there"
html = f"hello {name}"
# Turn these into plain/html MIMEText objects
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")
message = MIMEMultipart(_subparts=(part1, part2))
message["Subject"] = "multipart test"
message["From"] = sender_email
message["To"] = receiver_email
# Create secure connection with server and send email
context = ssl.create_default_context()
server = smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context)
server.login(sender_email, "SERVER_PASSWORD")
server.sendmail(
sender_email, receiver_email, message.as_string()
)
# if __name__ == "__main__":
# app.run(debug=True)
| 28.604651 | 129 | 0.710569 | 159 | 1,230 | 5.333333 | 0.490566 | 0.051887 | 0.03066 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007 | 0.186992 | 1,230 | 42 | 130 | 29.285714 | 0.841 | 0.26748 | 0 | 0 | 0 | 0 | 0.146532 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.038462 | false | 0.038462 | 0.230769 | 0 | 0.269231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209d3f8143bb0c9a52bff1f3e1c1e5bf2dd136e1 | 1,072 | py | Python | tests/testOptArg.py | miniufo/xinvert | 5fec8586730ec16646304d3eedae1cd501f0673b | [
"MIT"
] | 4 | 2021-05-29T14:56:24.000Z | 2022-03-30T11:54:32.000Z | tests/testOptArg.py | miniufo/xinvert | 5fec8586730ec16646304d3eedae1cd501f0673b | [
"MIT"
] | null | null | null | tests/testOptArg.py | miniufo/xinvert | 5fec8586730ec16646304d3eedae1cd501f0673b | [
"MIT"
] | 2 | 2021-11-22T10:27:21.000Z | 2022-03-30T11:54:33.000Z | # -*- coding: utf-8 -*-
"""
Created on 2020.12.19
@author: MiniUFO
Copyright 2018. All rights reserved. Use is subject to license terms.
"""
#%% load data
import xarray as xr
import numpy as np
nx, ny = 100, 100
gridx = xr.DataArray(np.arange(nx), dims=['X'], coords={'X': np.arange(nx)})
gridy = xr.DataArray(np.arange(ny), dims=['Y'], coords={'Y': np.arange(ny)})
gy, gx = xr.broadcast(gridy, gridx)
epsilon = np.sin(np.pi/(2.*gx+2.))**2. + np.sin(np.pi/(2.*gy+2.))**2.
optArg = 2./(1.+np.sqrt(epsilon*(2.-epsilon)))
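# optArg matches the optimal SOR over-relaxation factor omega = 2 / (1 + sqrt(eps * (2 - eps))),
# i.e. 2 / (1 + sqrt(1 - rho**2)) with rho = 1 - eps, evaluated point-wise on the grid.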
#%% plot epsilon and the optimal relaxation factor
import proplot as pplt
import xarray as xr
import numpy as np
fig, axes = pplt.subplots(nrows=1, ncols=2, figsize=(11, 5), sharex=3, sharey=3)
fontsize = 16
axes.format(abc=True, abcloc='l', abcstyle='(a)',
grid=False)
ax = axes[0]
p = ax.contourf(epsilon, cmap='jet')
ax.set_title('epsilon', fontsize=fontsize)
ax.colorbar(p, loc='b', label='', ticks=0.2)
ax = axes[1]
p = ax.contourf(optArg, cmap='jet')
ax.set_title('optArg', fontsize=fontsize)
ax.colorbar(p, loc='b', label='', ticks=0.2)
| 22.808511 | 80 | 0.649254 | 183 | 1,072 | 3.79235 | 0.497268 | 0.04611 | 0.040346 | 0.04611 | 0.291066 | 0.213256 | 0.213256 | 0.213256 | 0.123919 | 0.123919 | 0 | 0.046638 | 0.139925 | 1,072 | 46 | 81 | 23.304348 | 0.706074 | 0.163246 | 0 | 0.26087 | 0 | 0 | 0.032731 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.217391 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209dd913bd69c44b411f4ac37d2ead791d37fb9b | 1,451 | py | Python | scripts/plot_solutions.py | sovrasov/linear_sde_solver | 50a7248d9472889523e59c26b1c6448b8ce220da | [
"MIT"
] | null | null | null | scripts/plot_solutions.py | sovrasov/linear_sde_solver | 50a7248d9472889523e59c26b1c6448b8ce220da | [
"MIT"
] | null | null | null | scripts/plot_solutions.py | sovrasov/linear_sde_solver | 50a7248d9472889523e59c26b1c6448b8ce220da | [
"MIT"
] | null | null | null | import argparse
import os
import sys
import json
import pylab as pl
import numpy as np
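# Each solution file is expected to be JSON with keys: t_0, n_steps, step,
# solutions (a list of sampled trajectories) and hole_probs (per-step hole probability).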
def main(args):
for solution_file in args.solution_files:
with open(solution_file, 'r') as f:
print(solution_file)
data = json.load(f)
t0 = data['t_0']
n_steps = data['n_steps']
step = data['step']
x_data = np.array(data['solutions'])
n_impls = x_data.shape[0]
timestamps = np.arange(t0, t0 + n_steps*step, step)
assert x_data.shape[1] == timestamps.shape[0]
pl.subplot()
if args.average:
pl.plot(timestamps, np.average(x_data, axis=0), label='Averaged ' + solution_file)
else:
for i in range(n_impls):
pl.plot(timestamps, x_data[i], label='impl #{}'.format(i))
pl.xlabel('$t$')
pl.ylabel('$X(t)$')
pl.legend(loc='best')
pl.show()
pl.clf()
pl.xlabel('$t$')
pl.ylabel('$p$')
pl.plot(timestamps, data['hole_probs'], label='Probability of hole')
pl.legend(loc='best')
pl.show()
pl.clf()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='')
parser.add_argument('solution_files', type=str, nargs='+')
parser.add_argument('--average', action='store_true')
main(parser.parse_args())
| 29.02 | 98 | 0.535493 | 181 | 1,451 | 4.127072 | 0.425414 | 0.033467 | 0.064257 | 0.029451 | 0.115127 | 0.069612 | 0.069612 | 0.069612 | 0 | 0 | 0 | 0.008147 | 0.323225 | 1,451 | 49 | 99 | 29.612245 | 0.752546 | 0 | 0 | 0.2 | 0 | 0 | 0.093039 | 0 | 0 | 0 | 0 | 0 | 0.025 | 1 | 0.025 | false | 0 | 0.15 | 0 | 0.175 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
209ef0d114b6e2246d0313cca7bc6428bd8f1b6f | 2,275 | py | Python | lottery/db/MySqlUtil.py | DEAN-Lee/py_tools | 96968a5c5be3fa5e293671588ad7ec75cb0910f8 | [
"MIT"
] | null | null | null | lottery/db/MySqlUtil.py | DEAN-Lee/py_tools | 96968a5c5be3fa5e293671588ad7ec75cb0910f8 | [
"MIT"
] | 1 | 2021-01-08T08:40:54.000Z | 2021-01-08T08:40:54.000Z | lottery/db/MySqlUtil.py | DEAN-Lee/py_tools | 96968a5c5be3fa5e293671588ad7ec75cb0910f8 | [
"MIT"
] | null | null | null | import pymysql
import time
from lottery.conf import common_data
class MySqlUtil:
def __init__(self):
try:
config = common_data.readDBConf()
self._conn = pymysql.connect(host=config[0],
user=config[1],
password=config[2],
charset=config[3],
database=config[4],
port=config[5],
cursorclass=pymysql.cursors.DictCursor)
self.__cursor = None
print("连接数据库")
# set charset charset = ('latin1','latin1_general_ci')
except Exception as err:
print('mysql连接错误:' + err.msg)
def close_db(self):
self.__cursor.close()
self._conn.close()
def insert(self, **kwargs):
"""新增一条记录
table: 表名
data: dict 插入的数据
"""
fields = ','.join('`' + k + '`' for k in kwargs["data"].keys())
values = ','.join(("%s",) * len(kwargs["data"]))
sql = 'INSERT INTO `%s` (%s) VALUES (%s)' % (kwargs["table"], fields, values)
cursor = self.__getCursor()
cursor.execute(sql, tuple(kwargs["data"].values()))
insert_id = cursor.lastrowid
self._conn.commit()
return insert_id
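    # Example with a hypothetical table/columns:
    #   MySqlUtil().insert(table='lottery_result', data={'issue': '2021001', 'red': '01,05,12'})
    # builds "INSERT INTO `lottery_result` (`issue`,`red`) VALUES (%s,%s)" and returns lastrowid.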
    # query multiple rows from the table
def select_more(self, table, range_str, field='*'):
sql = 'SELECT ' + field + ' FROM ' + table + ' WHERE ' + range_str
try:
with self.__getCursor() as cursor:
cursor.execute(sql)
self._conn.commit()
return cursor.fetchall()
except pymysql.Error as e:
return False
def exist(self, **kwargs):
"""判断是否存在"""
return self.count(**kwargs) > 0
def close(self):
"""关闭游标和数据库连接"""
if self.__cursor is not None:
self.__cursor.close()
self._conn.close()
def __getCursor(self):
"""获取游标"""
if self.__cursor is None:
self.__cursor = self._conn.cursor()
return self.__cursor
def current_time(self):
        # millisecond timestamp
t = time.time()
return int(round(t * 1000))
| 30.743243 | 85 | 0.491429 | 227 | 2,275 | 4.744493 | 0.440529 | 0.064995 | 0.027855 | 0.035283 | 0.057567 | 0.057567 | 0.057567 | 0 | 0 | 0 | 0 | 0.009272 | 0.383736 | 2,275 | 73 | 86 | 31.164384 | 0.758916 | 0.056703 | 0 | 0.153846 | 0 | 0 | 0.043935 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.153846 | false | 0.019231 | 0.057692 | 0 | 0.346154 | 0.038462 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20a392002d1e2c2127d30a148baaa0e429a4ea95 | 2,493 | py | Python | interestCalculatorNew.py | Heliodex/PythonCalculators | ab360d79b9e0a503fbb34adfdfa2e2e557097aad | [
"Unlicense"
] | null | null | null | interestCalculatorNew.py | Heliodex/PythonCalculators | ab360d79b9e0a503fbb34adfdfa2e2e557097aad | [
"Unlicense"
] | null | null | null | interestCalculatorNew.py | Heliodex/PythonCalculators | ab360d79b9e0a503fbb34adfdfa2e2e557097aad | [
"Unlicense"
] | null | null | null | # Heliodex 2021/08/24
# Last edited 2022/02/16 -- count number of steps and add reverse mode
# edit of vatRemover
# uses Short Method
print("Calculates amount after adding a percentage a number of times")
while True:
c = input("1 for normal, 2 for reverse, 3 for catchup ")
if c == "1":
val = float(input("Current value? "))
int = float(input("Interest %? "))
times = float(input("Number of times? "))
print(" ")
print("Values:")
for i in range(times+1):
f = (((100 + int)/100) ** i) * val
print(str(i) + ": " + str(f)) # funny c
# i, not times
print("Total interest:")
print(f - val)
print(" ")
elif c == "2":
val1 = float(input("Current value? "))
val2 = float(input("Ending value? "))
int = float(input("Interest %? "))
print(" ")
decrease = False
if val1 > val2:
            if rate >= 0:
                print("Will never reach end value")
                continue
            decrease = True
        elif rate <= 0:
print("Will never reach end value")
continue
print("Values:")
i = 0
while True:
            f = (((100 + rate)/100) ** i) * val1
print(str(i) + ": " + str(f))
# i, not times
if f < val2 and decrease or f > val2 and not decrease:
break
i += 1
print("Total times taken:")
print(i)
print(" ")
elif c == "3":
val1 = float(input("First value? "))
int1 = float(input("First interest %? "))
val2 = float(input("Second value? "))
int2 = float(input("Second interest %? "))
print(" ")
if int1 == int2:
print("Interest is equal")
continue
elif val1 > val2:
int1, int2 = int2, int1 # swap 2 vars
val1, val2 = val2, val1
print("Values:")
        i = 0  # step counter for the open-ended while loop
while True:
f1 = (((100 + int1)/100) ** i) * val1
f2 = (((100 + int2)/100) ** i) * val2
print(str(i) + ": " + str(f1) + ", " + str(f2))
# i, not times
if f1 > f2:
break
i += 1
print("Time taken to catchup:")
print(i)
print(" ")
else:
break
| 28.011236 | 71 | 0.441637 | 279 | 2,493 | 3.946237 | 0.322581 | 0.090827 | 0.024523 | 0.032698 | 0.161671 | 0.070845 | 0.070845 | 0.070845 | 0.070845 | 0 | 0 | 0.060797 | 0.425993 | 2,493 | 88 | 72 | 28.329545 | 0.708595 | 0.087445 | 0 | 0.454545 | 0 | 0 | 0.190717 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20a3b0acfd3632b1e52670412907b6db19696003 | 2,623 | py | Python | src/pylexibank/commands/init_profile.py | martino-vic/pylexibank | eefbfbb1754e85264a9fe98fefbcf5df254ad19a | [
"Apache-2.0"
] | 6 | 2019-11-04T09:15:34.000Z | 2022-02-19T23:02:51.000Z | src/pylexibank/commands/init_profile.py | martino-vic/pylexibank | eefbfbb1754e85264a9fe98fefbcf5df254ad19a | [
"Apache-2.0"
] | 228 | 2018-04-13T09:39:20.000Z | 2022-03-08T23:30:46.000Z | src/pylexibank/commands/init_profile.py | martino-vic/pylexibank | eefbfbb1754e85264a9fe98fefbcf5df254ad19a | [
"Apache-2.0"
] | 5 | 2019-07-10T04:53:15.000Z | 2022-03-07T01:43:23.000Z | """
Create an initial orthography profile, seeded from the forms created by a first run
of lexibank.makecldf.
"""
from lingpy import Wordlist
from lingpy.sequence import profile
from cldfbench.cli_util import get_dataset, add_catalog_spec
from csvw.dsv import UnicodeWriter
from clldutils.clilib import ParserError
from pylexibank.cli_util import add_dataset_spec
def register(parser):
add_dataset_spec(parser)
add_catalog_spec(parser, 'clts')
parser.add_argument(
'--context',
action='store_true',
help='Create orthography profile with context',
default=False)
parser.add_argument(
'-f', '--force',
action='store_true',
help='Overwrite existing profile',
default=False)
parser.add_argument(
'--semi-diacritics',
default='hsʃ̢ɕʂʐʑʒw',
help='Indicate characters which can occur both as "diacritics" (second part in a sound) '
'or alone.')
parser.add_argument(
'--merge-vowels',
action='store_true',
help='Indicate whether consecutive vowels should be merged.',
default=False)
parser.add_argument(
'--dont-merge-geminates',
action='store_true',
default=False)
def run(args):
bipa = args.clts.api.bipa
func = profile.simple_profile
cols = ['Grapheme', 'IPA', 'Frequence', 'Codepoints']
kw = {
'ref': 'form',
'clts': bipa,
'semi_diacritics': args.semi_diacritics,
'merge_vowels': args.merge_vowels,
'merge_geminates': not args.dont_merge_geminates,
}
if args.context:
func = profile.context_profile
cols = ['Grapheme', 'IPA', 'Examples', 'Languages', 'Frequence', 'Codepoints']
kw['col'] = 'language_id'
ds = get_dataset(args)
profile_path = ds.etc_dir / 'orthography.tsv'
if profile_path.exists() and not args.force:
raise ParserError('Orthography profile exists already. To overwrite, pass "-f" flag')
header, D = [], {}
for i, row in enumerate(ds.cldf_reader()['FormTable'], start=1):
if i == 1:
header = [f for f in row.keys() if f != 'ID']
D = {0: ['lid'] + [h.lower() for h in header]}
row['Segments'] = ' '.join(row['Segments'])
D[i] = [row['ID']] + [row[h] for h in header]
with UnicodeWriter(profile_path, delimiter='\t') as writer:
writer.writerow(cols)
for row in func(Wordlist(D, row='parameter_id', col='language_id'), **kw):
writer.writerow(row)
args.log.info('Orthography profile written to {0}'.format(profile_path))
| 33.628205 | 97 | 0.629813 | 324 | 2,623 | 4.981481 | 0.429012 | 0.039033 | 0.052664 | 0.035316 | 0.053903 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002006 | 0.239802 | 2,623 | 77 | 98 | 34.064935 | 0.806921 | 0.040031 | 0 | 0.203125 | 0 | 0 | 0.258566 | 0.008765 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03125 | false | 0.015625 | 0.09375 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20a60c00e978fa11291e28ff7b092caf77628614 | 9,537 | py | Python | Source/ThirdParty/angle/testing/legion/lib/rpc/jsonrpclib.py | elix22/Urho3D | 99902ae2a867be0d6dbe4c575f9c8c318805ec64 | [
"MIT"
] | 20 | 2019-04-18T07:37:34.000Z | 2022-02-02T21:43:47.000Z | testing/legion/lib/rpc/jsonrpclib.py | lyapple2008/webrtc_simplify | c4f9bdc72d8e2648c4f4b1934d22ae94a793b553 | [
"BSD-3-Clause"
] | 11 | 2019-10-21T13:39:41.000Z | 2021-11-05T08:11:54.000Z | testing/legion/lib/rpc/jsonrpclib.py | lyapple2008/webrtc_simplify | c4f9bdc72d8e2648c4f4b1934d22ae94a793b553 | [
"BSD-3-Clause"
] | 1 | 2021-12-03T18:11:36.000Z | 2021-12-03T18:11:36.000Z | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Module to implement the JSON-RPC protocol.
This module uses xmlrpclib as the base and only overrides those
portions that implement the XML-RPC protocol. These portions are rewritten
to use the JSON-RPC protocol instead.
When large portions of code need to be rewritten the original code and
comments are preserved. The intention here is to keep the amount of code
change to a minimum.
This module only depends on default Python modules. No third party code is
required to use this module.
"""
# pylint: disable=no-value-for-parameter
import json
import urllib
import xmlrpclib as _base
__version__ = '1.0.0'
gzip_encode = _base.gzip_encode
gzip = _base.gzip
class Error(Exception):
def __str__(self):
return repr(self)
class ProtocolError(Error):
"""Indicates a JSON protocol error."""
def __init__(self, url, errcode, errmsg, headers):
Error.__init__(self)
self.url = url
self.errcode = errcode
self.errmsg = errmsg
self.headers = headers
def __repr__(self):
return (
'<ProtocolError for %s: %s %s>' %
(self.url, self.errcode, self.errmsg))
class ResponseError(Error):
"""Indicates a broken response package."""
pass
class Fault(Error):
"""Indicates a JSON-RPC fault package."""
def __init__(self, code, message):
Error.__init__(self)
if not isinstance(code, int):
raise ProtocolError('Fault code must be an integer.')
self.code = code
self.message = message
def __repr__(self):
return (
'<Fault %s: %s>' %
(self.code, repr(self.message))
)
def CreateRequest(methodname, params, ident=''):
"""Create a valid JSON-RPC request.
Args:
methodname: The name of the remote method to invoke.
params: The parameters to pass to the remote method. This should be a
list or tuple and able to be encoded by the default JSON parser.
Returns:
A valid JSON-RPC request object.
"""
request = {
'jsonrpc': '2.0',
'method': methodname,
'params': params,
'id': ident
}
return request
def CreateRequestString(methodname, params, ident=''):
"""Create a valid JSON-RPC request string.
Args:
methodname: The name of the remote method to invoke.
params: The parameters to pass to the remote method.
These parameters need to be encode-able by the default JSON parser.
ident: The request identifier.
Returns:
A valid JSON-RPC request string.
"""
return json.dumps(CreateRequest(methodname, params, ident))
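# Illustrative example: CreateRequestString('echo', ['hello'], 1) returns the string
# '{"jsonrpc": "2.0", "method": "echo", "params": ["hello"], "id": 1}' (key order may vary).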
def CreateResponse(data, ident):
"""Create a JSON-RPC response.
Args:
data: The data to return.
ident: The response identifier.
Returns:
A valid JSON-RPC response object.
"""
if isinstance(data, Fault):
response = {
'jsonrpc': '2.0',
'error': {
'code': data.code,
'message': data.message},
'id': ident
}
else:
response = {
'jsonrpc': '2.0',
'response': data,
'id': ident
}
return response
def CreateResponseString(data, ident):
"""Create a JSON-RPC response string.
Args:
data: The data to return.
ident: The response identifier.
Returns:
    A valid JSON-RPC response string.
"""
return json.dumps(CreateResponse(data, ident))
def ParseHTTPResponse(response):
"""Parse an HTTP response object and return the JSON object.
Args:
response: An HTTP response object.
Returns:
The returned JSON-RPC object.
Raises:
ProtocolError: if the object format is not correct.
Fault: If a Fault error is returned from the server.
"""
# Check for new http response object, else it is a file object
if hasattr(response, 'getheader'):
if response.getheader('Content-Encoding', '') == 'gzip':
stream = _base.GzipDecodedResponse(response)
else:
stream = response
else:
stream = response
data = ''
while 1:
chunk = stream.read(1024)
if not chunk:
break
data += chunk
response = json.loads(data)
ValidateBasicJSONRPCData(response)
if 'response' in response:
ValidateResponse(response)
return response['response']
elif 'error' in response:
ValidateError(response)
code = response['error']['code']
message = response['error']['message']
raise Fault(code, message)
else:
raise ProtocolError('No valid JSON returned')
def ValidateRequest(data):
"""Validate a JSON-RPC request object.
Args:
data: The JSON-RPC object (dict).
Raises:
ProtocolError: if the object format is not correct.
"""
ValidateBasicJSONRPCData(data)
if 'method' not in data or 'params' not in data:
raise ProtocolError('JSON is not a valid request')
def ValidateResponse(data):
"""Validate a JSON-RPC response object.
Args:
data: The JSON-RPC object (dict).
Raises:
ProtocolError: if the object format is not correct.
"""
ValidateBasicJSONRPCData(data)
if 'response' not in data:
raise ProtocolError('JSON is not a valid response')
def ValidateError(data):
"""Validate a JSON-RPC error object.
Args:
data: The JSON-RPC object (dict).
Raises:
ProtocolError: if the object format is not correct.
"""
ValidateBasicJSONRPCData(data)
if ('error' not in data or
'code' not in data['error'] or
'message' not in data['error']):
raise ProtocolError('JSON is not a valid error response')
def ValidateBasicJSONRPCData(data):
"""Validate a basic JSON-RPC object.
Args:
data: The JSON-RPC object (dict).
Raises:
ProtocolError: if the object format is not correct.
"""
error = None
if not isinstance(data, dict):
error = 'JSON data is not a dictionary'
elif 'jsonrpc' not in data or data['jsonrpc'] != '2.0':
error = 'JSON is not a valid JSON RPC 2.0 message'
elif 'id' not in data:
error = 'JSON data missing required id entry'
if error:
raise ProtocolError(error)
class Transport(_base.Transport):
"""RPC transport class.
This class extends the functionality of xmlrpclib.Transport and only
overrides the operations needed to change the protocol from XML-RPC to
JSON-RPC.
"""
user_agent = 'jsonrpclib.py/' + __version__
def send_content(self, connection, request_body):
"""Send the request."""
connection.putheader('Content-Type','application/json')
#optionally encode the request
if (self.encode_threshold is not None and
self.encode_threshold < len(request_body) and
gzip):
connection.putheader('Content-Encoding', 'gzip')
request_body = gzip_encode(request_body)
connection.putheader('Content-Length', str(len(request_body)))
connection.endheaders(request_body)
def single_request(self, host, handler, request_body, verbose=0):
"""Issue a single JSON-RPC request."""
h = self.make_connection(host)
if verbose:
h.set_debuglevel(1)
try:
self.send_request(h, handler, request_body)
self.send_host(h, host)
self.send_user_agent(h)
self.send_content(h, request_body)
response = h.getresponse(buffering=True)
if response.status == 200:
self.verbose = verbose #pylint: disable=attribute-defined-outside-init
return self.parse_response(response)
except Fault:
raise
except Exception:
# All unexpected errors leave connection in
# a strange state, so we clear it.
self.close()
raise
# discard any response data and raise exception
if response.getheader('content-length', 0):
response.read()
raise ProtocolError(
host + handler,
response.status, response.reason,
response.msg,
)
def parse_response(self, response):
"""Parse the HTTP resoponse from the server."""
return ParseHTTPResponse(response)
class SafeTransport(_base.SafeTransport):
"""Transport class for HTTPS servers.
This class extends the functionality of xmlrpclib.SafeTransport and only
overrides the operations needed to change the protocol from XML-RPC to
JSON-RPC.
"""
def parse_response(self, response):
return ParseHTTPResponse(response)
class ServerProxy(_base.ServerProxy):
"""Proxy class to the RPC server.
This class extends the functionality of xmlrpclib.ServerProxy and only
overrides the operations needed to change the protocol from XML-RPC to
JSON-RPC.
"""
def __init__(self, uri, transport=None, encoding=None, verbose=0,
allow_none=0, use_datetime=0):
urltype, _ = urllib.splittype(uri)
if urltype not in ('http', 'https'):
raise IOError('unsupported JSON-RPC protocol')
_base.ServerProxy.__init__(self, uri, transport, encoding, verbose,
allow_none, use_datetime)
transport_type, uri = urllib.splittype(uri)
if transport is None:
if transport_type == 'https':
transport = SafeTransport(use_datetime=use_datetime)
else:
transport = Transport(use_datetime=use_datetime)
self.__transport = transport
def __request(self, methodname, params):
"""Call a method on the remote server."""
request = CreateRequestString(methodname, params)
response = self.__transport.request(
self.__host,
self.__handler,
request,
verbose=self.__verbose
)
return response
Server = ServerProxy
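# Illustrative usage sketch (added; not part of the original module). The host,
# port and remote method name 'Echo' below are placeholder assumptions.
def _ExampleClientCall():
  """Call a hypothetical remote 'Echo' method over JSON-RPC."""
  proxy = ServerProxy('http://localhost:8080')
  # Attribute access is inherited from xmlrpclib.ServerProxy; the resulting
  # call is routed through the JSON-RPC __request override defined above.
  return proxy.Echo('hello world')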
| 25.5 | 79 | 0.676838 | 1,218 | 9,537 | 5.215928 | 0.201149 | 0.028648 | 0.011333 | 0.014324 | 0.264914 | 0.23784 | 0.225878 | 0.190619 | 0.190619 | 0.168267 | 0 | 0.004209 | 0.227745 | 9,537 | 373 | 80 | 25.568365 | 0.858384 | 0.360386 | 0 | 0.179775 | 0 | 0 | 0.108506 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.11236 | false | 0.005618 | 0.016854 | 0.022472 | 0.241573 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20a8a073cd2c94b8099ac2c571da041af73d129b | 21,838 | py | Python | src/third_party/ffmpeg/chromium/scripts/build_ffmpeg.py | neeker/chromium_extract | 0f9a0206a1876e98cf69e03869983e573138284c | [
"BSD-3-Clause"
] | 27 | 2016-04-27T01:02:03.000Z | 2021-12-13T08:53:19.000Z | src/third_party/ffmpeg/chromium/scripts/build_ffmpeg.py | neeker/chromium_extract | 0f9a0206a1876e98cf69e03869983e573138284c | [
"BSD-3-Clause"
] | 2 | 2017-03-09T09:00:50.000Z | 2017-09-21T15:48:20.000Z | src/third_party/ffmpeg/chromium/scripts/build_ffmpeg.py | neeker/chromium_extract | 0f9a0206a1876e98cf69e03869983e573138284c | [
"BSD-3-Clause"
] | 17 | 2016-04-27T02:06:39.000Z | 2019-12-18T08:07:00.000Z | #!/usr/bin/env python
#
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from __future__ import print_function
import collections
import multiprocessing
import optparse
import os
import platform
import re
import shutil
import subprocess
import sys
SCRIPTS_DIR = os.path.abspath(os.path.dirname(__file__))
FFMPEG_DIR = os.path.abspath(os.path.join(SCRIPTS_DIR, '..', '..'))
CHROMIUM_ROOT_DIR = os.path.abspath(os.path.join(FFMPEG_DIR, '..', '..'))
NDK_ROOT_DIR = os.path.abspath(os.path.join(CHROMIUM_ROOT_DIR, 'third_party',
'android_tools', 'ndk'))
BRANDINGS = [
'Chrome',
'ChromeOS',
'Chromium',
'ChromiumOS',
]
USAGE = """Usage: %prog TARGET_OS TARGET_ARCH [options] -- [configure_args]
Valid combinations are android [ia32|x64|mipsel|mips64el|arm-neon|arm64]
linux [ia32|x64|mipsel|arm|arm-neon|arm64]
linux-noasm [x64]
mac [x64]
win [ia32|x64]
Platform specific build notes:
android:
Script can be run on a normal x64 Ubuntu box with an Android-ready Chromium
checkout: https://code.google.com/p/chromium/wiki/AndroidBuildInstructions
linux ia32/x64:
Script can run on a normal Ubuntu box.
linux mipsel:
Script must be run inside of ChromeOS SimpleChrome setup:
cros chrome-sdk --board=mipsel-o32-generic --use-external-config
linux arm/arm-neon:
Script must be run inside of ChromeOS SimpleChrome setup:
cros chrome-sdk --board=arm-generic
linux arm64:
Script can run on a normal Ubuntu with AArch64 cross-toolchain in $PATH.
mac:
Script must be run on OSX. Additionally, ensure the Chromium (not Apple)
version of clang is in the path; usually found under
src/third_party/llvm-build/Release+Asserts/bin
win:
Script must be run on Windows with VS2013 or higher under Cygwin (or MinGW,
but as of 1.0.11, it has serious performance issues with make which makes
building take hours).
Additionally, ensure you have the correct toolchain environment for building.
The x86 toolchain environment is required for ia32 builds and the x64 one
for x64 builds. This can be verified by running "cl.exe" and checking if
the version string ends with "for x64" or "for x86."
Building on Windows also requires some additional Cygwin packages plus a
wrapper script for converting Cygwin paths to DOS paths.
- Add these packages at install time: diffutils, yasm, make, python.
- Copy chromium/scripts/cygwin-wrapper to /usr/local/bin
Resulting binaries will be placed in:
build.TARGET_ARCH.TARGET_OS/Chrome/out/
build.TARGET_ARCH.TARGET_OS/ChromeOS/out/
build.TARGET_ARCH.TARGET_OS/Chromium/out/
build.TARGET_ARCH.TARGET_OS/ChromiumOS/out/
"""
def PrintAndCheckCall(argv, *args, **kwargs):
print('Running %r' % argv)
subprocess.check_call(argv, *args, **kwargs)
def DetermineHostOsAndArch():
if platform.system() == 'Linux':
host_os = 'linux'
elif platform.system() == 'Darwin':
host_os = 'mac'
elif platform.system() == 'Windows' or 'CYGWIN_NT' in platform.system():
host_os = 'win'
else:
return None
if re.match(r'i.86', platform.machine()):
host_arch = 'ia32'
elif platform.machine() == 'x86_64' or platform.machine() == 'AMD64':
host_arch = 'x64'
elif platform.machine() == 'aarch64':
host_arch = 'arm64'
elif platform.machine().startswith('arm'):
host_arch = 'arm'
else:
return None
return (host_os, host_arch)
def GetDsoName(target_os, dso_name, dso_version):
if target_os in ('linux', 'linux-noasm', 'android'):
return 'lib%s.so.%s' % (dso_name, dso_version)
elif target_os == 'mac':
return 'lib%s.%s.dylib' % (dso_name, dso_version)
elif target_os == 'win':
return '%s-%s.dll' % (dso_name, dso_version)
else:
raise ValueError('Unexpected target_os %s' % target_os)
def RewriteFile(path, search, replace):
with open(path) as f:
contents = f.read()
with open(path, 'w') as f:
f.write(re.sub(search, replace, contents))
# Extracts the Android toolchain version and api level from the Android
# config.gni. Returns (api level, api 64 level, toolchain version).
def GetAndroidApiLevelAndToolchainVersion():
android_config_gni = os.path.join(CHROMIUM_ROOT_DIR, 'build', 'config',
'android', 'config.gni')
with open(android_config_gni, 'r') as f:
gni_contents = f.read()
api64_match = re.search('_android64_api_level\s*=\s*(\d{2})', gni_contents)
api_match = re.search('_android_api_level\s*=\s*(\d{2})', gni_contents)
toolchain_match = re.search('_android_toolchain_version\s*=\s*"([.\d]+)"',
gni_contents)
if not api_match or not toolchain_match or not api64_match:
raise Exception('Failed to find the android api level or toolchain '
'version in ' + android_config_gni)
return (api_match.group(1), api64_match.group(1), toolchain_match.group(1))
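# For reference, the regular expressions above expect config.gni entries of
# the form shown below (values are illustrative, not from any particular
# checkout):
#   _android_api_level = 16
#   _android64_api_level = 21
#   _android_toolchain_version = "4.9"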
# Sets up cross-compilation (regardless of host arch) for compiling Android.
# Returns the necessary configure flags as a list.
def SetupAndroidToolchain(target_arch):
api_level, api64_level, toolchain_version = (
GetAndroidApiLevelAndToolchainVersion())
# Toolchain prefix misery, for when just one pattern is not enough :/
toolchain_level = api_level
sysroot_arch = target_arch
toolchain_dir_prefix = target_arch
toolchain_bin_prefix = target_arch
if target_arch in ('arm', 'arm-neon'):
toolchain_bin_prefix = toolchain_dir_prefix = 'arm-linux-androideabi'
sysroot_arch = 'arm'
elif target_arch == 'arm64':
toolchain_level = api64_level
toolchain_bin_prefix = toolchain_dir_prefix = 'aarch64-linux-android'
elif target_arch == 'ia32':
toolchain_dir_prefix = sysroot_arch = 'x86'
toolchain_bin_prefix = 'i686-linux-android'
elif target_arch == 'x64':
toolchain_level = api64_level
toolchain_dir_prefix = sysroot_arch = 'x86_64'
toolchain_bin_prefix = 'x86_64-linux-android'
elif target_arch == 'mipsel':
sysroot_arch = 'mips'
toolchain_bin_prefix = toolchain_dir_prefix = 'mipsel-linux-android'
elif target_arch == 'mips64el':
toolchain_level = api64_level
sysroot_arch = 'mips64'
toolchain_bin_prefix = toolchain_dir_prefix = 'mips64el-linux-android'
sysroot = (NDK_ROOT_DIR + '/platforms/android-' + toolchain_level +
'/arch-' + sysroot_arch)
cross_prefix = (NDK_ROOT_DIR + '/toolchains/' + toolchain_dir_prefix + '-' +
toolchain_version + '/prebuilt/linux-x86_64/bin/' +
toolchain_bin_prefix + '-')
return [
'--enable-cross-compile',
'--sysroot=' + sysroot,
'--cross-prefix=' + cross_prefix,
'--target-os=linux',
]
def BuildFFmpeg(target_os, target_arch, host_os, host_arch, parallel_jobs,
config_only, config, configure_flags):
config_dir = 'build.%s.%s/%s' % (target_arch, target_os, config)
shutil.rmtree(config_dir, ignore_errors=True)
os.makedirs(os.path.join(config_dir, 'out'))
PrintAndCheckCall(
[os.path.join(FFMPEG_DIR, 'configure')] + configure_flags, cwd=config_dir)
if target_os in (host_os, host_os + '-noasm', 'android') and not config_only:
libraries = [
os.path.join('libavcodec', GetDsoName(target_os, 'avcodec', 57)),
os.path.join('libavformat', GetDsoName(target_os, 'avformat', 57)),
os.path.join('libavutil', GetDsoName(target_os, 'avutil', 55)),
]
PrintAndCheckCall(
['make', '-j%d' % parallel_jobs] + libraries, cwd=config_dir)
for lib in libraries:
shutil.copy(os.path.join(config_dir, lib),
os.path.join(config_dir, 'out'))
elif config_only:
print('Skipping build step as requested.')
else:
print('Skipping compile as host configuration differs from target.\n'
'Please compare the generated config.h with the previous version.\n'
'You may also patch the script to properly cross-compile.\n'
'Host OS : %s\n'
'Target OS : %s\n'
'Host arch : %s\n'
'Target arch : %s\n' % (host_os, target_os, host_arch, target_arch))
if target_arch in ('arm', 'arm-neon'):
RewriteFile(
os.path.join(config_dir, 'config.h'),
r'(#define HAVE_VFP_ARGS [01])',
r'/* \1 -- Disabled to allow softfp/hardfp selection at gyp time */')
def main(argv):
parser = optparse.OptionParser(usage=USAGE)
parser.add_option('--branding', action='append', dest='brandings',
choices=BRANDINGS,
help='Branding to build; determines e.g. supported codecs')
parser.add_option('--config-only', action='store_true',
help='Skip the build step. Useful when a given platform '
'is not necessary for generate_gyp.py')
options, args = parser.parse_args(argv)
if len(args) < 2:
parser.print_help()
return 1
target_os = args[0]
target_arch = args[1]
configure_args = args[2:]
if target_os not in ('android', 'linux', 'linux-noasm', 'mac', 'win'):
parser.print_help()
return 1
host_tuple = DetermineHostOsAndArch()
if not host_tuple:
print('Unrecognized host OS and architecture.', file=sys.stderr)
return 1
host_os, host_arch = host_tuple
parallel_jobs = multiprocessing.cpu_count()
  if target_os == 'android' and (host_os != 'linux' or host_arch != 'x64'):
print('Android cross compilation can only be done from a linux x64 host.')
return 1
print('System information:\n'
'Host OS : %s\n'
'Target OS : %s\n'
'Host arch : %s\n'
'Target arch : %s\n'
'Parallel jobs : %d\n' % (
host_os, target_os, host_arch, target_arch, parallel_jobs))
configure_flags = collections.defaultdict(list)
# Common configuration. Note: --disable-everything does not in fact disable
# everything, just non-library components such as decoders and demuxers.
configure_flags['Common'].extend([
'--disable-everything',
'--disable-all',
'--disable-doc',
'--disable-htmlpages',
'--disable-manpages',
'--disable-podpages',
'--disable-txtpages',
'--disable-static',
'--enable-avcodec',
'--enable-avformat',
'--enable-avutil',
'--enable-fft',
'--enable-rdft',
'--enable-static',
# Disable features.
'--disable-bzlib',
'--disable-error-resilience',
'--disable-iconv',
'--disable-lzo',
'--disable-network',
'--disable-schannel',
'--disable-sdl',
'--disable-symver',
'--disable-xlib',
'--disable-zlib',
'--disable-securetransport',
# Disable hardware decoding options which will sometimes turn on
# via autodetect.
'--disable-d3d11va',
'--disable-dxva2',
'--disable-vaapi',
'--disable-vda',
'--disable-vdpau',
'--disable-videotoolbox',
# Common codecs.
'--enable-decoder=vorbis',
'--enable-decoder=pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le',
'--enable-decoder=pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw',
'--enable-demuxer=ogg,matroska,wav',
'--enable-parser=opus,vorbis',
])
if target_os == 'android':
configure_flags['Common'].extend([
# --optflags doesn't append multiple entries, so set all at once.
'--optflags="-Os"',
'--enable-small',
])
configure_flags['Common'].extend(SetupAndroidToolchain(target_arch))
else:
configure_flags['Common'].extend([
# --optflags doesn't append multiple entries, so set all at once.
'--optflags="-O2"',
'--enable-decoder=theora,vp8',
'--enable-parser=vp3,vp8',
])
if target_os in ('linux', 'linux-noasm', 'android'):
if target_arch == 'x64':
if target_os == 'android':
configure_flags['Common'].extend([
'--arch=x86_64',
])
if target_os != 'android':
# TODO(krasin): move this to Common, when https://crbug.com/537368
# is fixed and CFI is unblocked from launching on ChromeOS.
configure_flags['EnableLTO'].extend(['--enable-lto'])
pass
elif target_arch == 'ia32':
configure_flags['Common'].extend([
'--arch=i686',
'--extra-cflags="-m32"',
'--extra-ldflags="-m32"',
])
# Android ia32 can't handle textrels and ffmpeg can't compile without
# them. http://crbug.com/559379
if target_os != 'android':
configure_flags['Common'].extend([
'--enable-yasm',
])
else:
configure_flags['Common'].extend([
'--disable-yasm',
])
elif target_arch == 'arm' or target_arch == 'arm-neon':
# TODO(ihf): ARM compile flags are tricky. The final options
# overriding everything live in chroot /build/*/etc/make.conf
# (some of them coming from src/overlays/overlay-<BOARD>/make.conf).
# We try to follow these here closely. In particular we need to
# set ffmpeg internal #defines to conform to make.conf.
# TODO(ihf): For now it is not clear if thumb or arm settings would be
# faster. I ran experiments in other contexts and performance seemed
# to be close and compiler version dependent. In practice thumb builds are
# much smaller than optimized arm builds, hence we go with the global
# CrOS settings.
configure_flags['Common'].extend([
'--arch=arm',
'--enable-armv6',
'--enable-armv6t2',
'--enable-vfp',
'--enable-thumb',
'--extra-cflags=-march=armv7-a',
])
if target_os == 'android':
configure_flags['Common'].extend([
# Runtime neon detection requires /proc/cpuinfo access, so ensure
# av_get_cpu_flags() is run outside of the sandbox when enabled.
'--enable-neon',
'--extra-cflags=-mtune=generic-armv7-a',
# NOTE: softfp/hardfp selected at gyp time.
'--extra-cflags=-mfloat-abi=softfp',
])
if target_arch == 'arm-neon':
configure_flags['Common'].extend([
'--extra-cflags=-mfpu=neon',
])
else:
configure_flags['Common'].extend([
'--extra-cflags=-mfpu=vfpv3-d16',
])
else:
configure_flags['Common'].extend([
# Location is for CrOS chroot. If you want to use this, enter chroot
# and copy ffmpeg to a location that is reachable.
'--enable-cross-compile',
'--target-os=linux',
'--cross-prefix=armv7a-cros-linux-gnueabi-',
'--extra-cflags=-mtune=cortex-a8',
# NOTE: softfp/hardfp selected at gyp time.
'--extra-cflags=-mfloat-abi=hard',
])
if target_arch == 'arm-neon':
configure_flags['Common'].extend([
'--enable-neon',
'--extra-cflags=-mfpu=neon',
])
else:
configure_flags['Common'].extend([
'--disable-neon',
'--extra-cflags=-mfpu=vfpv3-d16',
])
elif target_arch == 'arm64':
if target_os != 'android':
configure_flags['Common'].extend([
'--enable-cross-compile',
'--cross-prefix=/usr/bin/aarch64-linux-gnu-',
'--target-os=linux',
])
configure_flags['Common'].extend([
'--arch=aarch64',
'--enable-armv8',
'--extra-cflags=-march=armv8-a',
])
elif target_arch == 'mipsel':
if target_os != 'android':
configure_flags['Common'].extend([
'--enable-cross-compile',
'--cross-prefix=mipsel-cros-linux-gnu-',
'--target-os=linux',
'--extra-cflags=-EL',
'--extra-ldflags=-EL',
'--extra-ldflags=-mips32',
])
else:
configure_flags['Common'].extend([
'--extra-cflags=-mhard-float',
])
configure_flags['Common'].extend([
'--arch=mips',
'--extra-cflags=-mips32',
'--disable-mipsfpu',
'--disable-mipsdsp',
'--disable-mipsdspr2',
])
elif target_arch == 'mips64el' and target_os == "android":
configure_flags['Common'].extend([
'--arch=mips',
'--cpu=i6400',
'--extra-cflags=-mhard-float',
'--extra-cflags=-mips64r6',
'--disable-msa',
])
else:
print('Error: Unknown target arch %r for target OS %r!' % (
target_arch, target_os), file=sys.stderr)
return 1
if target_os == 'linux-noasm':
configure_flags['Common'].extend([
'--disable-asm',
'--disable-inline-asm',
])
if 'win' not in target_os:
configure_flags['Common'].append('--enable-pic')
# Should be run on Mac.
if target_os == 'mac':
if host_os != 'mac':
print('Script should be run on a Mac host. If this is not possible\n'
'try a merge of config files with new linux ia32 config.h\n'
'by hand.\n', file=sys.stderr)
return 1
configure_flags['Common'].extend([
'--enable-yasm',
'--cc=clang',
'--cxx=clang++',
])
if target_arch == 'x64':
configure_flags['Common'].extend([
'--arch=x86_64',
'--extra-cflags=-m64',
'--extra-ldflags=-m64',
])
else:
print('Error: Unknown target arch %r for target OS %r!' % (
target_arch, target_os), file=sys.stderr)
# Should be run on Windows.
if target_os == 'win':
if host_os != 'win':
print('Script should be run on a Windows host.\n', file=sys.stderr)
return 1
configure_flags['Common'].extend([
'--toolchain=msvc',
'--enable-yasm',
'--extra-cflags=-I' + os.path.join(FFMPEG_DIR, 'chromium/include/win'),
])
if 'CYGWIN_NT' in platform.system():
configure_flags['Common'].extend([
'--cc=cygwin-wrapper cl',
'--ld=cygwin-wrapper link',
'--nm=cygwin-wrapper dumpbin -symbols',
'--ar=cygwin-wrapper lib',
])
# Google Chrome & ChromeOS specific configuration.
configure_flags['Chrome'].extend([
'--enable-decoder=aac,h264,mp3',
'--enable-demuxer=aac,mp3,mov',
'--enable-parser=aac,h264,mpegaudio',
])
# ChromiumOS specific configuration.
# Warning: do *NOT* add avi, aac, h264, mp3, mp4, amr*
# Flac support.
configure_flags['ChromiumOS'].extend([
'--enable-demuxer=flac',
'--enable-decoder=flac',
'--enable-parser=flac',
])
# Google ChromeOS specific configuration.
# We want to make sure to play everything Android generates and plays.
# http://developer.android.com/guide/appendix/media-formats.html
configure_flags['ChromeOS'].extend([
# Enable playing avi files.
'--enable-decoder=mpeg4',
'--enable-parser=h263,mpeg4video',
'--enable-demuxer=avi',
# Enable playing Android 3gp files.
'--enable-demuxer=amr',
'--enable-decoder=amrnb,amrwb',
# Flac support.
'--enable-demuxer=flac',
'--enable-decoder=flac',
'--enable-parser=flac',
# Wav files for playing phone messages.
'--enable-decoder=gsm_ms',
'--enable-demuxer=gsm',
'--enable-parser=gsm',
])
configure_flags['ChromeAndroid'].extend([
'--enable-demuxer=aac,mp3,mov',
'--enable-parser=aac,mpegaudio',
'--enable-decoder=aac,mp3',
# TODO(dalecurtis, watk): Figure out if we need h264 parser for now?
])
def do_build_ffmpeg(branding, configure_flags):
if options.brandings and branding not in options.brandings:
print('%s skipped' % branding)
return
print('%s configure/build:' % branding)
BuildFFmpeg(target_os, target_arch, host_os, host_arch, parallel_jobs,
options.config_only, branding, configure_flags)
# Only build Chromium, Chrome for ia32, x86 non-android platforms.
if target_os != 'android':
do_build_ffmpeg('Chromium',
configure_flags['Common'] +
configure_flags['Chromium'] +
configure_flags['EnableLTO'] +
configure_args)
do_build_ffmpeg('Chrome',
configure_flags['Common'] +
configure_flags['Chrome'] +
configure_flags['EnableLTO'] +
configure_args)
else:
do_build_ffmpeg('Chromium',
configure_flags['Common'] +
configure_args)
do_build_ffmpeg('Chrome',
configure_flags['Common'] +
configure_flags['ChromeAndroid'] +
configure_args)
if target_os in ['linux', 'linux-noasm']:
do_build_ffmpeg('ChromiumOS',
configure_flags['Common'] +
configure_flags['Chromium'] +
configure_flags['ChromiumOS'] +
configure_args)
# ChromeOS enables MPEG4 which requires error resilience :(
chrome_os_flags = (configure_flags['Common'] +
configure_flags['Chrome'] +
configure_flags['ChromeOS'] +
configure_args)
chrome_os_flags.remove('--disable-error-resilience')
do_build_ffmpeg('ChromeOS', chrome_os_flags)
print('Done. If desired you may copy config.h/config.asm into the '
'source/config tree using copy_config.sh.')
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
| 34.773885 | 81 | 0.61512 | 2,630 | 21,838 | 4.969962 | 0.228137 | 0.056767 | 0.050493 | 0.051718 | 0.285824 | 0.231275 | 0.193941 | 0.170148 | 0.120572 | 0.08538 | 0 | 0.017109 | 0.250572 | 21,838 | 627 | 82 | 34.829346 | 0.781559 | 0.123912 | 0 | 0.343496 | 0 | 0 | 0.391309 | 0.102165 | 0 | 0 | 0 | 0.001595 | 0.002033 | 1 | 0.018293 | false | 0.002033 | 0.020325 | 0 | 0.073171 | 0.03252 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20ab340c51fea34db17694c288889d5ae11f982b | 340 | py | Python | res_cookie.py | HLRJ/py-crawler | 326128f8aa8e83cb7a142a31efedc7d944dac4da | [
"MIT"
] | 1 | 2022-03-29T16:01:41.000Z | 2022-03-29T16:01:41.000Z | res_cookie.py | HLRJ/py-crawler | 326128f8aa8e83cb7a142a31efedc7d944dac4da | [
"MIT"
] | null | null | null | res_cookie.py | HLRJ/py-crawler | 326128f8aa8e83cb7a142a31efedc7d944dac4da | [
"MIT"
] | 1 | 2022-03-29T16:02:10.000Z | 2022-03-29T16:02:10.000Z | import requests
# A template for handling cookies
# Session
session = requests.session()
data = {
"账号" : "########",
"密码" : "########"
}
url = ""
res = session.post(url, data=data)
print(res.text)
res = session.get(url)
# requests with a manually supplied Cookie header
header = {
"user-agent" : "dddd",
"Cookie" : "url"
}
resp = requests.get(url,headers=header)
print(resp.text) | 14.166667 | 39 | 0.576471 | 40 | 340 | 4.9 | 0.55 | 0.102041 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.197059 | 340 | 24 | 40 | 14.166667 | 0.717949 | 0.076471 | 0 | 0 | 0 | 0 | 0.13871 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.0625 | 0 | 0.0625 | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20af13f324e132c3a20d60cbd72a4f1adb5b9083 | 13,891 | py | Python | Auto Scroller - Python/venv/lib/python3.8/site-packages/listener/daemon.py | Nischal200/Music-Lyrics-Auto-Scroller | 92663e13451022a1500bfe56dff479dd0b3f1cac | [
"MIT"
] | null | null | null | Auto Scroller - Python/venv/lib/python3.8/site-packages/listener/daemon.py | Nischal200/Music-Lyrics-Auto-Scroller | 92663e13451022a1500bfe56dff479dd0b3f1cac | [
"MIT"
] | null | null | null | Auto Scroller - Python/venv/lib/python3.8/site-packages/listener/daemon.py | Nischal200/Music-Lyrics-Auto-Scroller | 92663e13451022a1500bfe56dff479dd0b3f1cac | [
"MIT"
] | null | null | null | #! /usr/bin/env python3
"""process which runs inside the docker daemon
the purpose of the doctor damon process is to allow the set up of
an environment which will support the deep speech recognition engine
to run on any recent nvidia Ubuntu host.
the basic operation of the demon is to create a named pipe
in the users run directory to which any audio source can then be
piped into the demon. the simplest way to achieve that
is to pipe the import from alsa through ffmpeg into the named pipe.
clients may onto the events unix socket in the same directory
to receive the partial and final event json records.
"""
from deepspeech import Model, version
from listener import eventserver
import logging, os, sys, select, json, socket, queue, collections, time
import numpy as np
import webrtcvad
from . import defaults
import threading
log = logging.getLogger(__name__ if __name__ != '__main__' else 'listener')
# How much leading silence causes audio to be discarded?
FRAME_SIZE = (defaults.SAMPLE_RATE // 1000) * 20 # rate of 16000, so 16samples/ms
SILENCE_FRAMES = 10 # in 20ms frames
def metadata_to_json(metadata, partial=False):
"""Convert DeepSpeech Metadata struct to a json-compatible format"""
struct = {
'partial': partial,
'final': not partial,
'transcripts': [],
}
for transcript in metadata.transcripts:
struct['transcripts'].append(transcript_to_json(transcript))
return struct
def transcript_to_json(transcript, partial=False):
"""Convert DeepSpeech Transcript struct to a json-compatible format"""
struct = {
'partial': partial,
'final': not partial,
'tokens': [],
'starts': [],
'words': [],
'word_starts': [],
'confidence': transcript.confidence,
}
text = []
word = []
starts = 0.0
in_word = False
for token in transcript.tokens:
struct['tokens'].append(token.text)
text.append(token.text)
struct['starts'].append(token.start_time)
if token.text == ' ':
if word:
struct['words'].append(''.join(word))
in_word = False
del word[:]
else:
if not in_word:
struct['word_starts'].append(token.start_time)
in_word = True
word.append(token.text)
if word:
struct['words'].append(''.join(word))
struct['text'] = ''.join(text)
return struct
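# Illustrative shape of the structure produced above (all values made up):
#   {'partial': False, 'final': True,
#    'tokens': ['h', 'i', ' '], 'starts': [0.10, 0.12, 0.18],
#    'words': ['hi'], 'word_starts': [0.10],
#    'confidence': -12.3, 'text': 'hi '}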
class RingBuffer(object):
"""Crude numpy-backed ringbuffer"""
def __init__(self, duration=30, rate=defaults.SAMPLE_RATE):
self.duration = duration
self.rate = rate
self.size = duration * rate
self.buffer = np.zeros((self.size,), dtype=np.int16)
self.write_head = 0
self.start = 0
def read_in(self, fh, blocksize=1024):
"""Read in content from the buffer"""
target = self.buffer[self.write_head : self.write_head + blocksize]
if hasattr(fh, 'readinto'):
# On the blocking fifo this consistently reads
# the whole blocksize chunk of data...
written = fh.readinto(target)
if written != blocksize * 2:
log.debug(
"Didn't read the whole buffer (likely disconnect): %s/%s",
written,
blocksize // 2,
)
target = target[: (written // 2)]
else:
# This is junk, unix and localhost buffering in ffmpeg
# means we take 6+ reads to get a buffer and we wind up
# losing a *lot* of audio due to delays
tview = target.view(np.uint8)
written = 0
reads = 0
while written < blocksize:
written += fh.recv_into(tview[written:], blocksize - written)
reads += 1
if reads > 1:
log.debug("Took %s reads to get %s bytes", reads, written)
self.write_head = (self.write_head + written) % self.size
return target
def itercurrent(self):
"""Iterate over all samples in the current record
After we truncate from the beginning we have to
reset the stream with the content written already
"""
if self.write_head < self.start:
yield self.buffer[self.start :]
yield self.buffer[: self.write_head]
else:
yield self.buffer[self.start : self.write_head]
def __len__(self):
if self.write_head < self.start:
return self.size - self.start + self.write_head
else:
return self.write_head - self.start
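# Small usage sketch (added, not part of the original module); `fh` is any
# object with a readinto() method, e.g. an opened fifo like the one main()
# creates below.
def _example_drain_ring(fh):
    """Drain a readable stream into a RingBuffer and return its samples."""
    ring = RingBuffer(duration=5)
    while True:
        chunk = ring.read_in(fh, blocksize=1024)
        if not len(chunk):
            break
    return np.concatenate(list(ring.itercurrent()))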
def produce_voice_runs(
input,
read_frames=2,
rate=defaults.SAMPLE_RATE,
silence=SILENCE_FRAMES,
voice_detect_aggression=3,
):
"""Produce runs of audio with voice detected
input -- FIFO (named pipe) or Socket from which to read
read_frames -- number of frames to read in on each iteration, this is a
blocking read, so it needs to be pretty small to keep
latency down
rate -- sample rate, 16KHz required for DeepSpeech
silence -- number of audio frames that constitute a "pause" at which
we should produce a new utterance
Notes:
* we want to be relatively demanding about the detection of audio
as we are working with noisy/messy environments
* the start-of-voice event often is preceded by a bit of
lower-than-threshold "silence" which is critical for catching
the first word
* we are using a static ringbuffer so that the main audio buffer shouldn't
wind up being copied
yields audio frames in sequence from the input
"""
vad = webrtcvad.Vad(voice_detect_aggression)
ring = RingBuffer(rate=rate)
current_utterance = []
silence_count = 0
read_size = read_frames * FRAME_SIZE
# set of frames that were not considered speech
# but that we might need to recognise the first
# word of an utterance, here (in 20ms frames)
silence_frames = collections.deque([], 10)
while True:
buffer = ring.read_in(input, read_size)
if not len(buffer):
log.debug("Input disconnected")
yield None
silence_count = 0
raise IOError('Input disconnect')
for start in range(0, len(buffer) - 1, FRAME_SIZE):
frame = buffer[start : start + FRAME_SIZE]
if vad.is_speech(frame, rate):
if silence_count:
# Update the ring-buffer to tell us where
# the audio started... note: currently there
# is no checking for longer-than-ring-buffer
# duration speeches...
ring.start = ring.write_head
log.debug('<')
for last in silence_frames:
ring.start -= len(last)
yield last
ring.start = ring.start % ring.size
yield frame
silence_count = 0
silence_frames.clear()
else:
silence_count += 1
silence_frames.append(frame)
if silence_count == silence:
log.debug('[]')
yield None
elif silence_count < silence:
yield frame
log.debug('? %s', silence_count)
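# Illustrative consumer of the generator above (added, not part of the original
# module); it mirrors how iter_metadata() below treats the None sentinel.
def _example_collect_utterances(source):
    """Yield one numpy array of voiced samples per detected utterance."""
    utterance = []
    for frame in produce_voice_runs(source):
        if frame is None:
            # None marks a pause long enough to close the current utterance
            if utterance:
                yield np.concatenate(utterance)
                utterance = []
        else:
            utterance.append(frame)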
def run_recognition(
model,
input,
out_queue,
read_size=320,
rate=defaults.SAMPLE_RATE,
max_decode_rate=4,
):
"""Read fragments from input, write results to output
model -- DeepSpeech model to run
input -- input binary audio stream 16KHz mono 16-bit unsigned machine order audio
    out_queue -- queue onto which result metadata dictionaries are put
rate -- audio rate (16,000 to be compatible with DeepSpeech)
max_decode_rate -- maximum number of times/s to do partial recognition
As incoming data comes in, accumulate in a (ring)
buffer. As partial recognitions are run, look for
stability in the prefix of the utterance, so if we
see the same text for the top prediction for N
runs then move the start to the start of the last
word in the stable set, report all the words up to that
point and then continue processing as though the
last word was the start of the utterance
"""
# create our ring-buffer structure with 60s of audio
for metadata in iter_metadata(model, input=input, rate=rate):
out_queue.put(metadata)
def iter_metadata(model, input, rate=defaults.SAMPLE_RATE, max_decode_rate=4):
"""Iterate over input producing transcriptions with model"""
stream = model.createStream()
length = last_decode = 0
for buffer in produce_voice_runs(input, rate=rate,):
if buffer is None:
if length:
metadata = metadata_to_json(
stream.finishStreamWithMetadata(15), partial=False
)
for tran in metadata['transcripts']:
log.info(">>> %0.02f %s", tran['confidence'], tran['words'])
yield metadata
stream = model.createStream()
length = last_decode = 0
else:
stream.feedAudioContent(buffer)
written = len(buffer)
length += written
if (length - last_decode) > rate // max_decode_rate:
metadata = metadata_to_json(
stream.intermediateDecodeWithMetadata(), partial=True
)
if metadata['transcripts'][0]['text']:
yield metadata
words = metadata['transcripts'][0]['words']
log.info("... %s", ' '.join(words))
def open_fifo(filename, mode='rb'):
"""Open fifo for communication"""
if not os.path.exists(filename):
os.mkfifo(filename)
return open(filename, mode)
def create_input_socket(port):
"""Connect to the given socket as a read-only client"""
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(True)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 640 * 100)
sock.bind(('127.0.0.1', port))
sock.listen(1)
return sock
def get_options():
import argparse
parser = argparse.ArgumentParser(
description='Provides an audio sink to which to write buffers to feed into DeepSpeech',
)
parser.add_argument(
'-i', '--input', default='/src/run/audio',
)
parser.add_argument(
'-o', '--output', default='/src/run/events',
)
parser.add_argument(
'-m',
'--model',
default='/src/model/deepspeech-%s-models.pbmm'
% os.environ.get('DEEPSPEECH_VERSION', '0.7.3'),
help='DeepSpeech published model',
)
parser.add_argument(
'-s',
'--scorer',
default='/src/model/deepspeech-%s-models.scorer'
% os.environ.get('DEEPSPEECH_VERSION', '0.7.3'),
help='DeepSpeech published scorer, use "" to not apply the Language Model within the daemon (letting the interpreter handle the scoring)',
)
parser.add_argument(
'--beam-width',
default=None,
type=int,
help='If specified, override the model default beam width',
)
parser.add_argument(
'--port',
default=None,
type=int,
help='If specified, use a TCP/IP socket, unfortunately we cannot use unix domain sockets due to broken ffmpeg buffering',
)
parser.add_argument(
'-v',
'--verbose',
default=False,
action='store_true',
help='Enable verbose logging (for developmen/debugging)',
)
return parser
def process_input_file(conn, options, out_queue, background=True):
# TODO: allow socket connections from *clients* to choose
# the model rather than setting it in the daemon...
# to be clear, *output* clients, not audio sinks
log.info("Starting recognition on %s", conn)
model = Model(options.model,)
if options.beam_width:
model.setBeamWidth(options.beam_width)
desired_sample_rate = model.sampleRate()
if desired_sample_rate != defaults.SAMPLE_RATE:
log.error("Model expects rate of %s", desired_sample_rate)
if options.scorer:
model.enableExternalScorer(options.scorer)
else:
log.info("Disabling the scorer")
model.disableExternalScorer()
if background:
t = threading.Thread(target=run_recognition, args=(model, conn, out_queue))
t.setDaemon(background)
t.start()
else:
run_recognition(model, conn, out_queue)
def main():
options = get_options().parse_args()
defaults.setup_logging(options)
log.info("Send Raw, Mono, 16KHz, s16le, audio to %s", options.input)
out_queue = eventserver.create_sending_threads(options.output)
if options.port:
sock = create_input_socket(options.port)
while True:
log.info("Waiting on %s", sock)
conn, addr = sock.accept()
process_input_file(conn, options, out_queue, background=True)
else:
# log.info("Opening fifo (will pause until a source connects)")
while True:
try:
sock = open_fifo(options.input)
log.info("FIFO connected, processing")
process_input_file(sock, options, out_queue, background=False)
except (webrtcvad._webrtcvad.Error, IOError) as err:
log.info("Disconnect, re-opening fifo")
time.sleep(2.0)
if __name__ == '__main__':
logging.basicConfig(
level=logging.DEBUG, format='%(levelname) 7s %(name)s:%(lineno)s %(message)s',
)
main()
| 35.346056 | 146 | 0.608452 | 1,707 | 13,891 | 4.85413 | 0.281781 | 0.013034 | 0.017258 | 0.013275 | 0.126599 | 0.101497 | 0.076515 | 0.058895 | 0.050205 | 0.028723 | 0 | 0.010694 | 0.299906 | 13,891 | 392 | 147 | 35.436224 | 0.841337 | 0.257649 | 0 | 0.228782 | 0 | 0.00738 | 0.12505 | 0.009443 | 0 | 0 | 0 | 0.002551 | 0 | 1 | 0.051661 | false | 0 | 0.03321 | 0 | 0.118081 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b1e4ce9fc466a2b7e51f104d05fa8d81c11041 | 23,645 | py | Python | nifpga/session.py | auchter/nifpga-python | d24ac338ec9b9d1bb94f1c8b8d06643670289e9e | [
"MIT"
] | null | null | null | nifpga/session.py | auchter/nifpga-python | d24ac338ec9b9d1bb94f1c8b8d06643670289e9e | [
"MIT"
] | null | null | null | nifpga/session.py | auchter/nifpga-python | d24ac338ec9b9d1bb94f1c8b8d06643670289e9e | [
"MIT"
] | 1 | 2020-09-19T15:44:08.000Z | 2020-09-19T15:44:08.000Z | """
Session, a convenient wrapper around the low-level _NiFpga class.
Copyright (c) 2017 National Instruments
"""
from .nifpga import (_SessionType, _IrqContextType, _NiFpga, DataType,
OPEN_ATTRIBUTE_NO_RUN, RUN_ATTRIBUTE_WAIT_UNTIL_DONE,
CLOSE_ATTRIBUTE_NO_RESET_IF_LAST_SESSION)
from .bitfile import Bitfile
from .status import IrqTimeoutWarning, InvalidSessionError
from collections import namedtuple
import ctypes
from builtins import bytes
from future.utils import iteritems
class Session(object):
"""
Session, a convenient wrapper around the low-level _NiFpga class.
The Session class uses regular python types, provides convenient default
arguments to C API functions, and makes controls, indicators, and FIFOs
available by name. If any NiFpga function return status is non-zero, the
appropriate exception derived from either WarningStatus or ErrorStatus is
raised.
Example usage of FPGA configuration functions::
with Session(bitfile="myBitfilePath.lvbitx", resource="RIO0") as session:
try: session.run()
            except FpgaAlreadyRunningWarning: pass
session.download()
session.abort()
session.reset()
Note:
It is always recommended that you use a Session with a context manager
(with). Opening a Session without a context manager could cause you to
leak the session if :meth:`Session.close` is not called.
Controls and indicators are accessed directly via a _Register object
obtained from the session::
my_control = session.registers["MyControl"]
my_control.write(data=4)
data = my_control.read()
FIFOs are accessed directly via a _FIFO object obtained from the session::
myHostToFpgaFifo = session.fifos["MyHostToFpgaFifo"]
myHostToFpgaFifo.stop()
actual_depth = myHostToFpgaFifo.configure(requested_depth=4096)
myHostToFpgaFifo.start()
empty_elements_remaining = myHostToFpgaFifo.write(data=[1, 2, 3, 4],
timeout_ms=2)
        myFpgaToHostFifo = session.fifos["MyFpgaToHostFifo"]
read_values = myFpgaToHostFifo.read(number_of_elements=4,
timeout_ms=0)
print(read_values.data)
"""
def __init__(self,
bitfile,
resource,
no_run=False,
reset_if_last_session_on_exit=False,
**kwargs):
"""Creates a session to the specified resource with the specified
bitfile.
Args:
bitfile (str)(Bitfile): A bitfile.Bitfile() instance or a string
filepath to a bitfile.
resource (str): e.g. "RIO0", "PXI1Slot2", or "rio://hostname/RIO0"
no_run (bool): If true, don't run the bitfile, just open the
session.
reset_if_last_session_on_exit (bool): Passed into Close on
exit. Unused if not using this session as a context guard.
**kwargs: Additional arguments that edit the session.
"""
if not isinstance(bitfile, Bitfile):
""" The bitfile we were passed is a path to an lvbitx."""
bitfile = Bitfile(bitfile)
self._nifpga = _NiFpga()
self._session = _SessionType()
open_attribute = 0
for key, value in kwargs.items():
if key == '_open_attribute':
open_attribute = value
if no_run:
open_attribute = open_attribute | OPEN_ATTRIBUTE_NO_RUN
bitfile_path = bytes(bitfile.filepath, 'ascii')
bitfile_signature = bytes(bitfile.signature, 'ascii')
resource = bytes(resource, 'ascii')
self._nifpga.Open(bitfile_path,
bitfile_signature,
resource,
open_attribute,
self._session)
self._reset_if_last_session_on_exit = reset_if_last_session_on_exit
self._registers = {}
self._internal_registers_dict = {}
base_address_on_device = bitfile.base_address_on_device()
for name, bitfile_register in iteritems(bitfile.registers):
assert name not in self._registers, \
"One or more registers have the same name '%s', this is not supported" % name
if bitfile_register.is_array():
array_register = _ArrayRegister(self._session, self._nifpga,
bitfile_register,
base_address_on_device)
if bitfile_register.is_internal():
self._internal_registers_dict[name] = array_register
else:
self._registers[name] = array_register
else:
register = _Register(self._session, self._nifpga,
bitfile_register, base_address_on_device)
if bitfile_register.is_internal():
self._internal_registers_dict[name] = register
else:
self._registers[name] = register
self._fifos = {}
for name, bitfile_fifo in iteritems(bitfile.fifos):
assert name not in self._fifos, \
"One or more FIFOs have the same name '%s', this is not supported" % name
self._fifos[name] = _FIFO(self._session, self._nifpga, bitfile_fifo)
def __enter__(self):
return self
def __exit__(self, exception_type, exception_val, trace):
try:
self.close(reset_if_last_session=self._reset_if_last_session_on_exit)
except InvalidSessionError:
pass
def close(self, reset_if_last_session=False):
""" Closes the FPGA Session.
Args:
reset_if_last_session (bool): If True, resets the FPGA on the
                last close. If False, does not reset the FPGA on the last
                session close.
"""
close_attr = CLOSE_ATTRIBUTE_NO_RESET_IF_LAST_SESSION if reset_if_last_session is False else 0
self._nifpga.Close(self._session, close_attr)
def run(self, wait_until_done=False):
""" Runs the FPGA VI on the target.
Args:
wait_until_done (bool): If true, this functions blocks until the
FPGA VI stops running
"""
run_attr = RUN_ATTRIBUTE_WAIT_UNTIL_DONE if wait_until_done else 0
self._nifpga.Run(self._session, run_attr)
def abort(self):
""" Aborts the FPGA VI. """
self._nifpga.Abort(self._session)
def download(self):
""" Re-downloads the FPGA bitstream to the target. """
self._nifpga.Download(self._session)
def reset(self):
""" Resets the FPGA VI. """
self._nifpga.Reset(self._session)
def _irq_ordinals_to_bitmask(self, ordinals):
bitmask = 0
for ordinal in ordinals:
assert 0 <= ordinal and ordinal <= 31, "Valid IRQs are 0-31: %d is invalid" % ordinal
bitmask |= (1 << ordinal)
return bitmask
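    # For example, ordinals [0, 6, 31] map to the bitmask
    # (1 << 0) | (1 << 6) | (1 << 31) == 0x80000041.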
def wait_on_irqs(self, irqs, timeout_ms):
""" Stops the calling thread until the FPGA asserts any IRQ in the irqs
parameter or until the function call times out.
Args:
irqs: A list of irq ordinals 0-31, e.g. [0, 6, 31].
timeout_ms: The timeout to wait in milliseconds.
Returns:
session_wait_on_irqs (namedtuple)::
session_wait_on_irqs.irqs_asserted (list): is a list of the
asserted IRQs.
session_wait_on_irqs.timed_out (bool): Outputs whether or not
the time out expired before all irqs were asserted.
"""
if type(irqs) != list:
irqs = [irqs]
irqs_bitmask = self._irq_ordinals_to_bitmask(irqs)
context = _IrqContextType()
self._nifpga.ReserveIrqContext(self._session, context)
irqs_asserted_bitmask = ctypes.c_uint32(0)
timed_out = DataType.Bool._return_ctype()()
try:
self._nifpga.WaitOnIrqs(self._session,
context,
irqs_bitmask,
timeout_ms,
irqs_asserted_bitmask,
timed_out)
except IrqTimeoutWarning:
# We pass timed_out to the C API, so we can ignore this warning
# and just always return timed_out.
pass
finally:
self._nifpga.UnreserveIrqContext(self._session, context)
irqs_asserted = [i for i in range(32) if irqs_asserted_bitmask.value & (1 << i)]
WaitOnIrqsReturnValues = namedtuple('WaitOnIrqsReturnValues',
["irqs_asserted", "timed_out"])
return WaitOnIrqsReturnValues(irqs_asserted=irqs_asserted,
timed_out=bool(timed_out.value))
def acknowledge_irqs(self, irqs):
""" Acknowledges an IRQ or set of IRQs.
Args:
irqs (list): A list of irq ordinals 0-31, e.g. [0, 6, 31].
"""
self._nifpga.AcknowledgeIrqs(self._session,
self._irq_ordinals_to_bitmask(irqs))
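    # Illustrative IRQ handling loop (added; IRQ 0 and the timeout value are
    # assumptions):
    #   result = session.wait_on_irqs([0], timeout_ms=1000)
    #   if not result.timed_out:
    #       session.acknowledge_irqs(result.irqs_asserted)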
def _get_unique_register_or_fifo(self, name):
assert not (name in self._registers and name in self._fifos), \
"Ambiguous: '%s' is both a register and a FIFO" % name
assert name in self._registers or name in self._fifos, \
"Unknown register or FIFO '%s'" % name
try:
return self._registers[name]
except KeyError:
return self._fifos[name]
@property
def registers(self):
""" This property returns a dictionary containing all registers that
are associated with the bitfile opened with the session. A register can
be accessed by its unique name.
"""
return self._registers
@property
def _internal_registers(self):
""" This property contains interal regis"""
return self._internal_registers_dict
@property
def fifos(self):
""" This property returns a dictionary containing all FIFOs that are
associated with the bitfile opened with the session. A FIFO can be
accessed by its unique name.
"""
return self._fifos
class _Register(object):
""" _Register is a private class that is a wrapper of logic that is
associated with controls and indicators.
    All Registers will exist in a session's session.registers property. This
means that all possible registers for a given session are created during
session initialization; a user should never need to create a new instance
of this class.
"""
def __init__(self, session, nifpga, bitfile_register, base_address_on_device):
self._datatype = bitfile_register.datatype
self._name = bitfile_register.name
self._session = session
self._write_func = nifpga["WriteArray%s" % self._datatype] if bitfile_register.is_array() \
else nifpga["Write%s" % self._datatype]
self._read_func = nifpga["ReadArray%s" % self._datatype] if bitfile_register.is_array() \
else nifpga["Read%s" % self._datatype]
self._ctype_type = self._datatype._return_ctype()
self._resource = bitfile_register.offset + base_address_on_device
if bitfile_register.access_may_timeout():
self._resource = self._resource | 0x80000000
def __len__(self):
""" A single register will always have one and only one element.
Returns:
(int): Always a constant 1.
"""
return 1
def write(self, data):
""" Writes the specified data to the control or indicator
Args:
data (DataType.value): The data to be written into the register
"""
self._write_func(self._session, self._resource, data)
def read(self):
""" Reads a single element from the control or indicator
Returns:
data (DataType.value): The data inside the register.
"""
data = self._ctype_type()
self._read_func(self._session, self._resource, data)
if self._datatype is DataType.Bool:
return bool(data.value)
return data.value
@property
def name(self):
""" Property of a register that returns the name of the control or
indicator. """
return self._name
@property
def datatype(self):
""" Property of a register that returns the datatype of the control or
indicator. """
return self._datatype
class _ArrayRegister(_Register):
"""
    _ArrayRegister is a private class that inherits from _Register with
additional interfaces unique to the logic of array controls and indicators.
"""
def __init__(self,
session,
nifpga,
bitfile_register,
base_address_on_device):
super(_ArrayRegister, self).__init__(session,
nifpga,
bitfile_register,
base_address_on_device)
self._num_elements = len(bitfile_register)
self._ctype_type = self._ctype_type * self._num_elements
self._write_func = nifpga["WriteArray%s" % self._datatype]
self._read_func = nifpga["ReadArray%s" % self._datatype]
def __len__(self):
""" Returns the length of the array.
Returns:
(int): The number of elements in the array.
"""
return self._num_elements
def write(self, data):
""" Writes the specified array of data to the control or indicator
Args:
data (list): The data "array" to be written into the registers
wrapped into a python list.
"""
# if data is not iterable make it iterable
try:
_ = iter(data)
except TypeError:
data = [data]
assert len(data) == len(self), \
"Bad data length %d for register '%s', expected %s" \
% (len(data), self._name, len(self))
buf = self._ctype_type(*data)
self._write_func(self._session, self._resource, buf, len(self))
def read(self):
""" Reads the entire array from the control or indicator.
Returns:
(list): The data in the register in a python list.
"""
buf = self._ctype_type()
self._read_func(self._session, self._resource, buf, len(self))
return [bool(elem) if self._datatype is DataType.Bool else elem for elem in buf]
class _FIFO(object):
""" _FIFO is a private class that is a wrapper for the logic that
    is associated with a FIFO.
    All FIFOs will exist in a session's session.fifos property. This means that
all possible FIFOs for a given session are created during session
initialization; a user should never need to create a new instance of this
class.
"""
def __init__(self, session, nifpga, bitfile_fifo):
self._datatype = bitfile_fifo.datatype
self._number = bitfile_fifo.number
self._session = session
self._write_func = nifpga["WriteFifo%s" % self._datatype]
self._read_func = nifpga["ReadFifo%s" % self._datatype]
self._acquire_read_func = nifpga["AcquireFifoReadElements%s" % self._datatype]
self._acquire_write_func = nifpga["AcquireFifoWriteElements%s" % self._datatype]
self._release_elements_func = nifpga["ReleaseFifoElements"]
self._nifpga = nifpga
self._ctype_type = self._datatype._return_ctype()
self._name = bitfile_fifo.name
def configure(self, requested_depth):
""" Specifies the depth of the host memory part of the DMA FIFO.
Args:
requested_depth (int): The depth of the host memory part of the DMA
FIFO in number of elements.
Returns:
actual_depth (int): The actual number of elements in the host
memory part of the DMA FIFO, which may be more than the
requested number.
"""
actual_depth = ctypes.c_size_t()
self._nifpga.ConfigureFifo2(self._session, self._number,
requested_depth, actual_depth)
return actual_depth.value
def start(self):
""" Starts the FIFO. """
self._nifpga.StartFifo(self._session, self._number)
def stop(self):
""" Stops the FIFO. """
self._nifpga.StopFifo(self._session, self._number)
def write(self, data, timeout_ms=0):
""" Writes the specified data to the FIFO.
NOTE:
If the FIFO has not been started before calling
:meth:`_FIFO.write()`, then it will automatically start and
continue to work as expected.
Args:
data (list): Data to be written to the FIFO.
timeout_ms (int): The timeout to wait in milliseconds.
Returns:
elements_remaining (int): The number of elements remaining in the
host memory part of the DMA FIFO.
"""
# if data is not iterable make it iterable
try:
_ = iter(data)
except TypeError:
data = [data]
buf_type = self._ctype_type * len(data)
buf = buf_type(*data)
empty_elements_remaining = ctypes.c_size_t()
self._write_func(self._session,
self._number,
buf,
len(data),
timeout_ms,
empty_elements_remaining)
return empty_elements_remaining.value
def read(self, number_of_elements, timeout_ms=0):
""" Read the specified number of elements from the FIFO.
NOTE:
If the FIFO has not been started before calling
:meth:`_FIFO.read()`, then it will automatically start and continue
to work as expected.
Args:
number_of_elements (int): The number of elements to read from the
FIFO.
timeout_ms (int): The timeout to wait in milliseconds.
Returns:
ReadValues (namedtuple)::
ReadValues.data (list): containing the data from
the FIFO.
ReadValues.elements_remaining (int): The amount of elements
remaining in the FIFO.
"""
buf_type = self._ctype_type * number_of_elements
buf = buf_type()
elements_remaining = ctypes.c_size_t()
self._read_func(self._session,
self._number,
buf,
number_of_elements,
timeout_ms,
elements_remaining)
data = [bool(elem) if self._datatype is DataType.Bool else elem for elem in buf]
ReadValues = namedtuple("ReadValues", ["data", "elements_remaining"])
return ReadValues(data=data,
elements_remaining=elements_remaining.value)
def _acquire_write(self, number_of_elements, timeout_ms=0):
""" Write the specified number of elements from the FIFO.
Args:
number_of_elements (int): The number of elements to read from the
FIFO.
timeout_ms (int): The timeout to wait in milliseconds.
Returns:
AcquireWriteValues(namedtuple)::
AcquireWriteValues.data (ctypes.pointer): Contains the data
from the FIFO.
AcquireWriteValues.elements_acquired (int): The number of
elements that were actually acquired.
AcquireWriteValues.elements_remaining (int): The amount of
elements remaining in the FIFO.
"""
block_out = ctypes.POINTER(self._ctype_type)()
elements_acquired = ctypes.c_size_t()
elements_remaining = ctypes.c_size_t()
self._acquire_write_func(self._session,
self._number,
block_out,
number_of_elements,
timeout_ms,
elements_acquired,
elements_remaining)
AcquireWriteValues = namedtuple("AcquireWriteValues",
["data", "elements_acquired",
"elements_remaining"])
return AcquireWriteValues(data=block_out,
elements_acquired=elements_acquired.value,
elements_remaining=elements_remaining.value)
def _acquire_read(self, number_of_elements, timeout_ms=0):
""" Read the specified number of elements from the FIFO.
Args:
number_of_elements (int): The number of elements to read from the
FIFO.
timeout_ms (int): The timeout to wait in milliseconds.
Returns:
            AcquireReadValues(namedtuple): has the following members::
                AcquireReadValues.data (ctypes.pointer): Contains the data
                    from the FIFO.
                AcquireReadValues.elements_acquired (int): The number of
                    elements that were actually acquired.
                AcquireReadValues.elements_remaining (int): The amount of
                    elements remaining in the FIFO.
"""
buf = self._ctype_type()
buf_ptr = ctypes.pointer(buf)
elements_acquired = ctypes.c_size_t()
elements_remaining = ctypes.c_size_t()
self._acquire_read_func(self._session,
self._number,
buf_ptr,
number_of_elements,
timeout_ms,
elements_acquired,
elements_remaining)
AcquireReadValues = namedtuple("AcquireReadValues",
["data", "elements_acquired",
"elements_remaining"])
return AcquireReadValues(data=buf_ptr,
elements_acquired=elements_acquired.value,
elements_remaining=elements_remaining.value)
def _release_elements(self, number_of_elements):
""" Releases the FIFOs elements. """
self._release_elements_func(self._session, self._number, number_of_elements)
def get_peer_to_peer_endpoint(self):
""" Gets an endpoint reference to a peer-to-peer FIFO. """
endpoint = ctypes.c_uint32(0)
self._nifpga.GetPeerToPeerFifoEndpoint(self._session, self._number, endpoint)
return endpoint.value
@property
def name(self):
""" Property of a Fifo that contains its name. """
return self._name
@property
def datatype(self):
""" Property of a Fifo that contains its datatype. """
return self._datatype
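# Rough usage sketch (not part of the original module; the bitfile path,
# resource name, and FIFO name below are hypothetical):
#
#     from nifpga import Session
#     with Session("MyBitfile.lvbitx", "RIO0") as session:
#         fifo = session.fifos["MyFifo"]
#         fifo.start()
#         read_values = fifo.read(100, timeout_ms=500)
#         # read_values.data is a list, read_values.elements_remaining an int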
| 39.806397 | 102 | 0.587735 | 2,620 | 23,645 | 5.080916 | 0.144656 | 0.026442 | 0.030048 | 0.014874 | 0.464393 | 0.40302 | 0.361403 | 0.31543 | 0.268179 | 0.244967 | 0 | 0.00456 | 0.341552 | 23,645 | 593 | 103 | 39.873524 | 0.850472 | 0.356143 | 0 | 0.309028 | 0 | 0 | 0.047844 | 0.005308 | 0 | 0 | 0.000727 | 0 | 0.038194 | 1 | 0.128472 | false | 0.006944 | 0.024306 | 0.003472 | 0.246528 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b28d884d7b7314d9f1c0fd125fd2bca1b74c56 | 2,439 | py | Python | in-app-payments-with-server-validation/backend-appengine/jwt/__init__.py | Acidburn0zzz/chrome-app-samples | 53c3184d3ff210918a5d9c7420dd2a92c0870cf5 | [
"Apache-2.0"
] | 16 | 2019-08-08T02:04:54.000Z | 2019-10-15T17:52:36.000Z | in-app-payments-with-server-validation/backend-appengine/jwt/__init__.py | Acidburn0zzz/chrome-app-samples | 53c3184d3ff210918a5d9c7420dd2a92c0870cf5 | [
"Apache-2.0"
] | null | null | null | in-app-payments-with-server-validation/backend-appengine/jwt/__init__.py | Acidburn0zzz/chrome-app-samples | 53c3184d3ff210918a5d9c7420dd2a92c0870cf5 | [
"Apache-2.0"
] | 8 | 2015-07-04T07:24:08.000Z | 2020-04-27T02:23:49.000Z | """ JSON Web Token implementation
Minimum implementation based on this spec:
http://self-issued.info/docs/draft-jones-json-web-token-01.html
"""
import base64
import hashlib
import hmac
try:
import json
except ImportError:
import simplejson as json
__all__ = ['encode', 'decode', 'DecodeError']
class DecodeError(Exception): pass
signing_methods = {
'HS256': lambda msg, key: hmac.new(key, msg, hashlib.sha256).digest(),
'HS384': lambda msg, key: hmac.new(key, msg, hashlib.sha384).digest(),
'HS512': lambda msg, key: hmac.new(key, msg, hashlib.sha512).digest(),
}
def base64url_decode(input):
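# restore the '=' padding that the URL-safe base64 alphabet strips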
input += '=' * (4 - (len(input) % 4))
return base64.urlsafe_b64decode(input)
def base64url_encode(input):
return base64.urlsafe_b64encode(input).replace('=', '')
def header(jwt):
header_segment = jwt.split('.', 1)[0]
try:
return json.loads(base64url_decode(header_segment))
except (ValueError, TypeError):
raise DecodeError("Invalid header encoding")
def encode(payload, key, algorithm='HS256'):
segments = []
header = {"typ": "JWT", "alg": algorithm}
segments.append(base64url_encode(json.dumps(header)))
segments.append(base64url_encode(json.dumps(payload)))
signing_input = '.'.join(segments)
try:
ascii_key = unicode(key).encode('utf8')
signature = signing_methods[algorithm](signing_input, ascii_key)
except KeyError:
raise NotImplementedError("Algorithm not supported")
segments.append(base64url_encode(signature))
return '.'.join(segments)
def decode(jwt, key='', verify=True):
try:
signing_input, crypto_segment = jwt.rsplit('.', 1)
header_segment, payload_segment = signing_input.split('.', 1)
except ValueError:
raise DecodeError("Not enough segments")
try:
header = json.loads(base64url_decode(header_segment))
payload = json.loads(base64url_decode(payload_segment))
signature = base64url_decode(crypto_segment)
except (ValueError, TypeError):
raise DecodeError("Invalid segment encoding")
if verify:
try:
ascii_key = unicode(key).encode('utf8')
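# note: a plain == comparison is not constant-time; hmac.compare_digest
# (available from Python 2.7.7) would be a safer choice here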
if not signature == signing_methods[header['alg']](signing_input, ascii_key):
raise DecodeError("Signature verification failed")
except KeyError:
raise DecodeError("Algorithm not supported")
return payload
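if __name__ == '__main__':
    # Minimal round-trip sketch (the key and payload below are made up, not
    # part of the original sample backend):
    token = encode({'iss': 'example.com', 'price': '10.50'}, 'secret-key')
    assert decode(token, 'secret-key')['iss'] == 'example.com'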
| 33.410959 | 89 | 0.678147 | 280 | 2,439 | 5.782143 | 0.332143 | 0.046325 | 0.022236 | 0.029648 | 0.25386 | 0.25386 | 0.165534 | 0.059296 | 0 | 0 | 0 | 0.030025 | 0.194342 | 2,439 | 72 | 90 | 33.875 | 0.793893 | 0.056171 | 0 | 0.206897 | 0 | 0 | 0.091979 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.086207 | false | 0.017241 | 0.103448 | 0.017241 | 0.293103 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b37fe078b398c2a89ecdde10b0aa1f69e2a0fc | 4,931 | py | Python | app/models/attribute.py | hack4impact/women-veterans-rock | 7de5f5645819dbe67ba71a1f0b29f84a45e35789 | [
"MIT"
] | 16 | 2015-10-26T20:30:35.000Z | 2017-02-01T01:45:35.000Z | app/models/attribute.py | hack4impact/women-veterans-rock | 7de5f5645819dbe67ba71a1f0b29f84a45e35789 | [
"MIT"
] | 34 | 2015-10-21T02:58:42.000Z | 2017-02-24T06:57:07.000Z | app/models/attribute.py | hack4impact/women-veterans-rock | 7de5f5645819dbe67ba71a1f0b29f84a45e35789 | [
"MIT"
] | 1 | 2015-10-23T21:32:28.000Z | 2015-10-23T21:32:28.000Z | from .. import db
user_tag_associations_table = db.Table(
'user_tag_associations', db.Model.metadata,
db.Column('tag_id', db.Integer, db.ForeignKey('tags.id')),
db.Column('user_id', db.Integer, db.ForeignKey('users.id'))
)
resource_tag_associations_table = db.Table(
'resource_tag_associations', db.Model.metadata,
db.Column('tag_id', db.Integer, db.ForeignKey('tags.id')),
db.Column('resource_id', db.Integer, db.ForeignKey('resources.id'))
)
class Tag(db.Model):
__tablename__ = 'tags'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(30), unique=True)
users = db.relationship('User', secondary=user_tag_associations_table,
backref='tags', lazy='dynamic')
resources = db.relationship('Resource',
secondary=resource_tag_associations_table,
backref='tags', lazy='dynamic')
type = db.Column(db.String(50))
is_primary = db.Column(db.Boolean, default=False)
__mapper_args__ = {
'polymorphic_on': type
}
def __init__(self, name, is_primary=False):
"""
If possible, the helper methods get_by_name and create_tag
should be used instead of explicitly using this constructor.
"""
self.name = name
self.is_primary = is_primary
@staticmethod
def get_by_name(name):
"""Helper for searching by Tag name."""
result = Tag.query.filter_by(name=name).first()
return result
def __repr__(self):
return '<%s \'%s\'>' % (self.type, self.name)
class ResourceCategoryTag(Tag):
__tablename__ = 'resource_category_tags'
id = db.Column(db.Integer, db.ForeignKey('tags.id'), primary_key=True)
__mapper_args__ = {
'polymorphic_identity': 'ResourceCategoryTag',
}
@staticmethod
def create_resource_category_tag(name):
"""
Helper to create a ResourceCategoryTag entry. Returns the newly
created ResourceCategoryTag or the existing entry if name is already
in the table.
"""
result = Tag.get_by_name(name)
# Tags must have unique names, so if a Tag that is not a
# ResourceCategoryTag already has the name `name`, then an error is
# raised.
if result is not None and result.type != 'ResourceCategoryTag':
raise ValueError("A tag with this name already exists.")
if result is None:
result = ResourceCategoryTag(name)
db.session.add(result)
db.session.commit()
return result
@staticmethod
def generate_fake(count=10):
"""Generate count fake Tags for testing."""
from faker import Faker
fake = Faker()
for i in range(count):
created = False
while not created:
try:
ResourceCategoryTag.\
create_resource_category_tag(fake.word())
created = True
except ValueError:
created = False
class AffiliationTag(Tag):
__tablename__ = 'affiliation_tags'
id = db.Column(db.Integer, db.ForeignKey('tags.id'), primary_key=True)
__mapper_args__ = {
'polymorphic_identity': 'AffiliationTag',
}
@staticmethod
def create_affiliation_tag(name, is_primary=False):
"""
Helper to create an AffiliationTag entry. Returns the newly
created AffiliationTag or the existing entry if name is already
in the table.
"""
result = Tag.get_by_name(name)
# Tags must have unique names, so if a Tag that is not an
# AffiliationTag already has the name `name`, then an error is raised.
if result is not None and result.type != 'AffiliationTag':
raise ValueError("A tag with this name already exists.")
if result is None:
result = AffiliationTag(name, is_primary)
db.session.add(result)
db.session.commit()
return result
@staticmethod
def generate_fake(count=10):
"""Generate count fake AffiliationTags for testing."""
from faker import Faker
fake = Faker()
for i in range(count):
created = False
while not created:
try:
AffiliationTag.create_affiliation_tag(fake.word())
created = True
except ValueError:
created = False
@staticmethod
def generate_default():
"""Generate default AffiliationTags."""
default_affiliation_tags = [
'Veteran', 'Active Duty', 'National Guard', 'Reservist', 'Spouse',
'Dependent', 'Family Member', 'Supporter', 'Other'
]
for tag in default_affiliation_tags:
AffiliationTag.create_affiliation_tag(tag, is_primary=True)
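# Rough usage sketch (assumes a Flask application context with the database
# initialised; none of that setup is shown in this module):
#
#     AffiliationTag.generate_default()
#     veteran = Tag.get_by_name('Veteran')
#     housing = ResourceCategoryTag.create_resource_category_tag('Housing')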
| 33.773973 | 78 | 0.608801 | 557 | 4,931 | 5.210054 | 0.235189 | 0.027567 | 0.022743 | 0.043418 | 0.5255 | 0.472433 | 0.464507 | 0.435562 | 0.435562 | 0.401103 | 0 | 0.002302 | 0.295275 | 4,931 | 145 | 79 | 34.006897 | 0.832806 | 0.164875 | 0 | 0.484848 | 0 | 0 | 0.120733 | 0.017068 | 0 | 0 | 0 | 0 | 0 | 1 | 0.080808 | false | 0 | 0.030303 | 0.010101 | 0.323232 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b557e25c85bd79516024b5db25efc8be4b20b6 | 2,631 | py | Python | python/wlu_lr/CNN.py | GG-yuki/bugs | aabd576e9e57012a3390007af890b7c6ab6cdda8 | [
"MIT"
] | null | null | null | python/wlu_lr/CNN.py | GG-yuki/bugs | aabd576e9e57012a3390007af890b7c6ab6cdda8 | [
"MIT"
] | null | null | null | python/wlu_lr/CNN.py | GG-yuki/bugs | aabd576e9e57012a3390007af890b7c6ab6cdda8 | [
"MIT"
] | null | null | null | import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy
x = torch.unsqueeze(torch.linspace(-1, 1, 300), dim=1) # x data (tensor), shape=(100, 1)
print(x)
y = x.pow(2) + 0.2 * torch.rand(x.size()) # noisy y data (tensor), shape=(100, 1)
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
# print(x[1])
# print(y)
x = [[13.523903173216974], [13.664147188790954], [20.96319999239993], [17.3609379172661], [66.2023020952398],
[48.2592968519504], [192.18925284800778], [148.3711903549705]]
y = [[13.669839864931738], [15.307883112568586], [18.558109998163587], [19.275146613524534], [50.90607216800526],
[57.82359232089161], [137.65130123147364], [177.1510542028334]]
z = [[10], [15], [20], [25], [50], [75], [100], [138]]
x = torch.tensor(x)
y = torch.tensor(y)
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
# print(x)
# plt.scatter(x.data.numpy(), y.data.numpy())
# plt.show()
# print(x[1])
# # plot
# plt.scatter(x.data.numpy(), y.data.numpy())
# plt.show()
#
#
class Net(torch.nn.Module):  # inherits torch's Module
def __init__(self, n_feature, n_hidden, n_output):
super(Net, self).__init__()  # inherit the parent __init__ behaviour
# define the form of each layer
self.hidden = torch.nn.Linear(n_feature, n_hidden)  # hidden layer, linear output
self.predict = torch.nn.Linear(n_hidden, n_output)  # output layer, linear output
def forward(self, x1):  # this is also Module's forward pass
# propagate the input forward; the network produces the output value
x1 = F.relu(self.hidden(x1))  # activation function applied to the hidden layer's linear output
x1 = self.predict(x1)  # output value
return x1
net = Net(n_feature=1, n_hidden=10, n_output=1)
# plt.ion()   # plotting
# plt.show()
# # the optimizer is the training tool
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)  # pass in all of net's parameters and the learning rate
loss_func = torch.nn.MSELoss()  # error between predicted and true values (mean squared error)
for t in range(100):
prediction = net(x)  # feed the training data x to net and get the predictions
print(prediction)
loss = loss_func(prediction, y)  # compute the error between prediction and target
optimizer.zero_grad()  # clear the residual gradients from the previous step
loss.backward()  # backpropagate the error and compute the parameter updates
optimizer.step()  # apply the updates to net's parameters
if t % 5 == 0:
# plot and show learning process
plt.cla()
plt.scatter(x.data.numpy(), y.data.numpy())
plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=5)
plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})
plt.pause(0.1)
# y_as = net(x)
# plt.cla()
# plt.scatter(x.data.numpy(), y_as.data.numpy())
# plt.plot(x.data.numpy(), y_as.data.numpy(), 'r-', lw=5)
# plt.text(0.5, 0, 'Loss=%.4f' % loss.data.numpy(), fontdict={'size': 20, 'color': 'red'})
# plt.ion() # plotting
# plt.show()
| 32.085366 | 113 | 0.625618 | 394 | 2,631 | 4.106599 | 0.365482 | 0.100124 | 0.049444 | 0.04759 | 0.295426 | 0.255253 | 0.255253 | 0.227441 | 0.207046 | 0.18665 | 0 | 0.153668 | 0.1813 | 2,631 | 81 | 114 | 32.481481 | 0.597493 | 0.303307 | 0 | 0.116279 | 0 | 0 | 0.012871 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046512 | false | 0 | 0.093023 | 0 | 0.186047 | 0.046512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b5d321129450b24c961028f4c4a899c4e5b34f | 7,440 | py | Python | mechanic/modifier.py | dl-stuff/dl9 | 1cbe98afc53a1de9d413797fb130946acc4b6ba4 | [
"MIT"
] | null | null | null | mechanic/modifier.py | dl-stuff/dl9 | 1cbe98afc53a1de9d413797fb130946acc4b6ba4 | [
"MIT"
] | null | null | null | mechanic/modifier.py | dl-stuff/dl9 | 1cbe98afc53a1de9d413797fb130946acc4b6ba4 | [
"MIT"
] | null | null | null | """
kinds of modifiers
1. Ability Passive
- stat mods
- act damage up/down
- punisher
2. Action condition (and Aura)
- there's a lot of stuff here, but entries in the same field are all additive
- str aura is the same as _RateAttack
- certain fields (crit, crit dmg, punisher) share the same bracket for whatever reason
3. hitattr
- independent from everything else
"""
from __future__ import annotations
from collections import defaultdict
from functools import reduce
from itertools import chain
import itertools
from typing import Callable, Dict, Hashable, List, Sequence, Tuple, Optional, TYPE_CHECKING, Set
import operator
if TYPE_CHECKING:
from action import Action
from mechanic.hit import HitAttribute
from entity import Entity
class Modifier:
__slots__ = [
"_value",
"bracket",
"status",
"_active_fn",
]
def __init__(self, value: float, bracket: Tuple[Hashable, ...], active_fn: Optional[Callable] = None, status: bool = True) -> None:
self._value = value
self.bracket = bracket
self.status = status
self._active_fn = active_fn
def get(self, modifier_dict: Optional[ModifierDict] = None) -> float:
if not self.status:
return 0.0
if self._active_fn is None:
return self._value
try:
return self._active_fn(modifier_dict.entity, modifier_dict.hitattr) * self._value
except TypeError:
return self._active_fn() * self._value
def __float__(self) -> float:
return self.get()
def __repr__(self) -> str:
return f"{self.bracket}: {self._value} ({self._active_fn})"
class ModifierDict:
__slots__ = ["_mods", "_tags", "entity", "hitattr"]
def __init__(self, entity: Optional[Entity] = None, hitattr: Optional[HitAttribute] = None) -> None:
self._mods: Dict[Tuple[Hashable, ...], List[Modifier]] = defaultdict(list)
self._tags: Dict[Tuple[Hashable, ...], Set[Tuple[Hashable, ...]]] = defaultdict(set)
self.entity = entity
self.hitattr = hitattr
def add(self, mod: Modifier) -> None:
self._mods[mod.bracket].append(mod)
for i in range(1, len(mod.bracket) + 1):
self._tags[mod.bracket[0:i]].add(mod.bracket)
def get(self, bracket: Tuple[Hashable, ...], specific: bool = False) -> Sequence[Modifier]:
if specific:
return self._mods.get(bracket, [])
return chain(*(self._mods.get(tag, []) for tag in self._tags.get(bracket)))
def mod(self, bracket: Tuple[Hashable, ...], op: Callable = operator.add, initial: float = 0, specific: bool = False) -> float:
try:
return initial + reduce(op, [mod.get(self) for mod in self.get(bracket, specific=specific)])
except TypeError:
return initial
# class MDMultiLevel(ModifierDict):
# def __init__(self) -> None:
# self._subdict = {}
# self._mods = []
# def add(self, mod: Modifier, seq: int = 0):
# if seq >= len(mod.bracket):
# self._mods.append(mod)
# else:
# tag = mod.bracket[seq]
# try:
# sub_md = self._subdict[tag]
# except KeyError:
# sub_md = MDMultiLevel()
# self._subdict[tag] = sub_md
# sub_md.add(mod, seq=seq + 1)
# def get(self, bracket: Tuple[Hashable, ...], seq: int = 0):
# if seq >= len(bracket):
# all_mods = []
# all_mods.extend(self._mods)
# for sub_md in self._subdict.values():
# all_mods.extend(sub_md.get(bracket, seq=seq + 1))
# return all_mods
# else:
# tag = bracket[seq]
# try:
# return self._subdict[tag].get(bracket, seq=seq + 1)
# except KeyError:
# return []
# class MDTagged(ModifierDict):
# def __init__(self) -> None:
# self._mods = defaultdict(list)
# self._tags = defaultdict(set)
# def add(self, mod: Modifier):
# self._mods[mod.bracket].append(mod)
# for i in range(1, len(mod.bracket) + 1):
# self._tags[mod.bracket[0:i]].add(mod.bracket)
# def get(self, bracket: Tuple[Hashable, ...]):
# all_mods = []
# try:
# for tag in self._tags.get(bracket):
# all_mods.extend(self._mods[tag])
# except TypeError:
# pass
# return all_mods
if __name__ == "__main__":
from core.constants import Stat
import random
from pprint import pprint
stat_lst = list(Stat)
randomized_mods = []
spr_mods = []
modcount = 100
counts = {
1: 0,
2: 0,
3: 0,
4: 0,
}
for i in range(modcount):
bracket_len = random.choice((1, 2, 3, 4))
counts[bracket_len] += 1
stat = random.choice(stat_lst)
if bracket_len == 1:
bracket = (stat,)
elif bracket_len == 2:
bracket = (stat, random.choice(("Passive", "Buff")))
elif bracket_len == 3:
bracket = (stat, random.choice(("Passive", "Buff")), "EX")
elif bracket_len == 4:
bracket = (stat, random.choice(("Passive", "Buff")), "test", "speshul")
mod = Modifier(random.random(), bracket)
if stat == Stat.Spr:
spr_mods.append(mod)
randomized_mods.append(mod)
# accuracy
spr = reduce(operator.add, map(float, spr_mods))
print(f"Real {spr}")
md = ModifierDict()
for mod in randomized_mods:
md.add(mod)
res = md.mod((Stat.Spr,))
print(f"Check {ModifierDict.__name__}: {res}")
print(flush=True)
# for MD in (MDMultiLevel, MDTagged):
# md = MD()
# for mod in randomized_mods:
# md.add(mod)
# res = md.mod((Stat.Spr,))
# print(f"Check {MD.__name__}: {res}")
# # pprint(md.get((Stat.Spr,)))
# if res != spr:
# for mod in md.get((Stat.Spr,)):
# if mod not in spr_mods:
# print(f"ERROR - wrong mod value {res} != {spr}")
# print("Reason", mod)
# print(flush=True)
# from time import monotonic
# trials = 100000
# print(f"{trials} trials with {modcount} mods")
# print(counts)
# for MD in (MDMultiLevel, MDTagged):
# print(f"Testing {MD.__name__}")
# start_t = monotonic()
# for i in range(1000):
# md = MD()
# for mod in randomized_mods:
# md.add(mod)
# print(f"Adding: {monotonic() - start_t}s")
# start_t = monotonic()
# for i in range(trials):
# md.mod((random.choice(stat_lst),))
# print(f"Getting 1: {monotonic() - start_t}s")
# start_t = monotonic()
# for i in range(trials):
# md.mod((random.choice(stat_lst), random.choice(("Passive", "Buff"))))
# print(f"Getting 2: {monotonic() - start_t}s")
# start_t = monotonic()
# for i in range(trials):
# md.mod((random.choice(stat_lst), random.choice(("Passive", "Buff")), "EX"))
# print(f"Getting 3: {monotonic() - start_t}s")
# start_t = monotonic()
# for i in range(trials):
# md.mod((random.choice(stat_lst), random.choice(("Passive", "Buff")), "test", "speshul"))
# print(f"Getting 4: {monotonic() - start_t}s")
# print(flush=True)
| 32.920354 | 135 | 0.562366 | 899 | 7,440 | 4.48832 | 0.182425 | 0.035688 | 0.011896 | 0.021809 | 0.319207 | 0.263445 | 0.202478 | 0.183147 | 0.183147 | 0.183147 | 0 | 0.009781 | 0.299194 | 7,440 | 225 | 136 | 33.066667 | 0.764097 | 0.476478 | 0 | 0.042553 | 0 | 0 | 0.052895 | 0.006316 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0.031915 | 0.138298 | 0.021277 | 0.37234 | 0.042553 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b7cdbf4698fba9cb58486cf5f4749c946cfd11 | 4,154 | py | Python | ino/argparsing.py | qguv/ino | f23ee5cb14edc30ec087d3eab7b301736da42362 | [
"MIT"
] | 558 | 2015-01-02T08:12:53.000Z | 2022-03-08T17:13:26.000Z | ino/argparsing.py | jboone/ino | 4798827272f6b3916f1fb887e42538a976789d90 | [
"MIT"
] | 84 | 2015-01-01T11:17:27.000Z | 2021-02-11T02:40:23.000Z | ino/argparsing.py | jboone/ino | 4798827272f6b3916f1fb887e42538a976789d90 | [
"MIT"
] | 176 | 2015-01-14T08:59:39.000Z | 2021-06-24T07:41:31.000Z | # -*- coding: utf-8; -*-
# Stolen from: http://bugs.python.org/issue12806
import argparse
import re
import textwrap
class FlexiFormatter(argparse.RawTextHelpFormatter):
"""FlexiFormatter which respects new line formatting and wraps the rest
Example:
>>> parser = argparse.ArgumentParser(formatter_class=FlexiFormatter)
>>> parser.add_argument('--example', help='''\
... This argument's help text will have this first long line\
... wrapped to fit the target window size so that your text\
... remains flexible.
...
... 1. This option list
... 2. is still persisted
... 3. and the option strings get wrapped like this with an\
... indent for readability.
...
... You must use backslashes at the end of lines to indicate that\
... you want the text to wrap instead of preserving the newline.
...
... As with docstrings, the leading space to the text block is\
... ignored.
... ''')
>>> parser.parse_args(['-h'])
usage: argparse_formatter.py [-h] [--example EXAMPLE]
optional arguments:
-h, --help show this help message and exit
--example EXAMPLE This argument's help text will have this first
long line wrapped to fit the target window size
so that your text remains flexible.
1. This option list
2. is still persisted
3. and the option strings get wrapped like
this with an indent for readability.
You must use backslashes at the end of lines to
indicate that you want the text to wrap instead
of preserving the newline.
As with docstrings, the leading space to the
text block is ignored.
Only the name of this class is considered a public API. All the methods
provided by the class are considered an implementation detail.
"""
def _split_lines(self, text, width):
lines = list()
main_indent = len(re.match(r'( *)',text).group(1))
# Wrap each line individually to allow for partial formatting
for line in text.splitlines():
# Get this line's indent and figure out what indent to use
# if the line wraps. Account for lists of small variety.
indent = len(re.match(r'( *)',line).group(1))
list_match = re.match(r'( *)(([-*+>]+|\w+\)|\w+\.) +)', line)
if(list_match):
sub_indent = indent + len(list_match.group(2))
else:
sub_indent = indent
# Textwrap will do all the hard work for us
line = self._whitespace_matcher.sub(' ', line).strip()
new_lines = textwrap.wrap(
text=line,
width=width,
initial_indent=' '*(indent-main_indent),
subsequent_indent=' '*(sub_indent-main_indent),
)
# Blank lines get eaten by textwrap, put it back with [' ']
lines.extend(new_lines or [' '])
return lines
if __name__ == '__main__':
parser = argparse.ArgumentParser(formatter_class=FlexiFormatter)
parser.add_argument('--example', help='''\
This argument's help text will have this first long line wrapped to\
fit the target window size so that your text remains flexible.
1. This option list
2. is still persisted
3. and the option strings get wrapped like this with an indent\
for readability.
You must use backslashes at the end of lines to indicate that you\
want the text to wrap instead of preserving the newline.
As with docstrings, the leading space to the text block is ignored.
''')
parser.parse_args(['-h'])
| 41.54 | 78 | 0.553683 | 485 | 4,154 | 4.676289 | 0.315464 | 0.018519 | 0.017196 | 0.022487 | 0.572751 | 0.55776 | 0.55776 | 0.55776 | 0.55776 | 0.55776 | 0 | 0.006818 | 0.364468 | 4,154 | 99 | 79 | 41.959596 | 0.852273 | 0.547424 | 0 | 0 | 0 | 0 | 0.356109 | 0.013897 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027027 | false | 0 | 0.081081 | 0 | 0.162162 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b85e774f4f333362b32f59f4d25bf560a3cebc | 6,997 | py | Python | NN_buildingblock/ConvNN.py | xupingxie/deep-learning-models | cc76aedf9631317452f9cd7df38998e2de727816 | [
"MIT"
] | null | null | null | NN_buildingblock/ConvNN.py | xupingxie/deep-learning-models | cc76aedf9631317452f9cd7df38998e2de727816 | [
"MIT"
] | null | null | null | NN_buildingblock/ConvNN.py | xupingxie/deep-learning-models | cc76aedf9631317452f9cd7df38998e2de727816 | [
"MIT"
] | null | null | null | #!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
This script contains basic functions for Conv Neural Nets.
foward conv and pooling
backward conv and pooling
@author: xuping
"""
import numpy as np
import h5py
import matplotlib.pyplot as plt
def Conv_forward(A_prev, W, b, para):
'''
This is the forward propagation for a convolution layer
Input: output from previous layer A_prev (m, H_prev, W_prev, C_prev)
W --- weights, (f,f, C_prev, C)
b --- bias,
para --- contains "stride" and "pad"
return the conv output Z(m, H, W, C), cache for backpropagation
'''
(m, H_prev, W_prev, C_prev) = A_prev.shape
(f, f, C_prev, C) = W.shape
stride = para["stride"]
pad = para["pad"]
# name the output spatial dims H_out/W_out so the weight tensor W is not shadowed
H_out = int((H_prev - f + 2 * pad) / stride + 1)
W_out = int((W_prev - f + 2 * pad) / stride + 1)
Z = np.zeros((m, H_out, W_out, C))
# padding the input
A_prev_pad = np.pad(A_prev, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values=(0,0))
# loop all dimension
for i in range(m):
a_prev_pad = A_prev_pad[i,:,:,:] # extract the i-th training example
for h in range(H_out):
for w in range(W_out):
for c in range(C):
hstart = stride * h
hend = hstart + f
wstart = stride * w
wend = wstart + f
# extract the slice for Conv
a_slice = a_prev_pad[hstart:hend, wstart:wend, :]
# Conv step
Z[i,h,w,c] = np.sum(a_slice * W[:,:,:,c]) + b[:,:,:,c]
#end for loop
assert(Z.shape == (m, H_out, W_out, C))
# save in cache for backprop
cache = (A_prev, W, b, para)
return Z, cache
def Pool_forward(A_prev, para, mode="max"):
'''
forward progation of pooling layer
Input: A_prev(m, H_prev, W_prev, C_prev)
para -- parameters
mode -- max pooling or average
output: pooling output layer A(m, H, W, C)
'''
(m, H_prev, W_prev, C_prev) = A_prev.shape
f = para["f"]
stride = para["stride"]
H = int((H_prev - f) / stride + 1)
W = int((W_prev - f) / stride + 1)
C = C_prev
# initialize output A
A = np.zeros((m, H, W, C))
# loop each dimension
for i in range(m):
for h in range(H):
for w in range(W):
for c in range(C):
hstart = stride * h
hend = hstart + f
wstart = stride * w
wend = wstart + f
# extract the slice from A_prev
a_slice = A_prev[i, hstart:hend, wstart:wend, c]
if mode == "max":
A[i,h,w,c] = np.max(a_slice)
elif mode == "average":
A[i,h,w,c] = np.mean(a_slice)
# end for loop
assert(A.shape == (m, H, W, C))
cache = (A_prev, para)
return A, cache
def Conv_backward(dZ, cache):
'''
the backward propgation of Conv Layer
Input: dZ -- gradient of the cost wrt the OUTPUT of Conv Layer Z (m, H, W, C)
cache -- stored data from forward prop
Output: dA -- gradient of the cost wrt INPUT of Conv layer A_prev (m, H_prev, W_prev, C_prev)
dW -- gradient wrt weights of the Conv layer W(f, f, C_prev, C)
db -- gradient wrt biases b(1,1,1,C)
'''
# get all the dimensions from previous data
(A_prev, W, b, para) = cache
(m, H_prev, W_prev, C_prev) = A_prev.shape
(f, f, C_prev, C) = W.shape
stride = para["stride"]
pad = para["pad"]
(m, H_out, W_out, C) = dZ.shape  # output dims; avoid shadowing the weight tensor W
#intialize all the gradients
dA_prev = np.zeros((m, H_prev, W_prev, C_prev))
dW = np.zeros((f, f, C_prev, C))
db = np.zeros(b.shape)
#padding the data
A_prev_pad = np.pad(A_prev, ((0,0), (pad,pad), (pad,pad),(0,0)),'constant', constant_values=(0,0))
dA_prev_pad = np.pad(dA_prev, ((0,0), (pad,pad), (pad,pad),(0,0)),'constant', constant_values=(0,0))
#loop all the dimensions
for i in range(m):
a_prev = A_prev_pad[i,:,:,:]
da_prev = dA_prev_pad[i,:,:,:]
for h in range(H_out):
for w in range(W_out):
for c in range(C):
# define the corner of the slice
hstart = stride * h
hend = hstart + f
wstart = stride * w
wend = wstart + f
#extract slice
a_slice = a_prev[hstart:hend, wstart:wend, :]
# compute the derivate
da_prev[hstart:hend, wstart:wend,:] += W[:,:,:,c]*dZ[i,h,w,c]
dW[:,:,:,c] += a_slice*dZ[i,h,w,c]
db[:,:,:,c] += dZ[i,h,w,c]
#remove pad from the local derivative slice
dA_prev[i,:,:,:] = da_prev[pad:-pad, pad:-pad, :]
#end for loop
assert(dA_prev.shape == (m, H_prev, W_prev, C_prev))
return dA_prev, dW, db
def Pooling_backward(dA, cache, mode="max"):
"""
Find gradients through backward prop of the pooling layer
Input: dA -- gradients wrt OUTPUT of the pooling layer
cache -- stored output data from forward prop
mode -- max pooling or average
Output: dA_prev -- the gradient wrt the INPUT of the pooling layer
"""
(A_prev, para) = cache
stride = para["stride"]
f = para["f"]
m, H_prev, W_prev, C_prev = A_prev.shape
m, H, W, C = dA.shape
#Initialize dA_prev with zeros
dA_prev = np.zeros((m, H_prev, W_prev, C_prev))
#loop all the dimensions
for i in range(m):
# extract the training exmaple from A_prev
a_prev = A_prev[i,:,:,:]
for h in range(H):
for w in range(W):
for c in range(C):
# define the corner of the slice
hstart = stride * h
hend = hstart + f
wstart = stride * w
wend = wstart + f
# compute the backprop
if mode == "max":
# extract the slice
a_slice = a_prev[hstart:hend, wstart:wend, c]
# create mask for the slice matrix
mask = (a_slice == np.max(a_slice))
# compute derivative
dA_prev[i, hstart:hend, wstart:wend, c] += mask*dA[i,h,w,c]
elif mode == "average":
# get the value
da = dA[i,h,w,c]
# compute the derivative
dA_prev[i, hstart:hend, wstart:wend, c] += da / (f * f) * np.ones((f, f))
# end loop
assert(dA_prev.shape == A_prev.shape)
return dA_prev
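if __name__ == "__main__":
    # Minimal smoke test (not part of the original script): run a forward
    # conv + max-pool pass on random data; all shapes below are arbitrary.
    np.random.seed(1)
    A_prev = np.random.randn(2, 5, 5, 3)   # 2 examples, 5x5 spatial, 3 channels
    W = np.random.randn(3, 3, 3, 8)        # 8 filters of size 3x3
    b = np.random.randn(1, 1, 1, 8)
    Z, conv_cache = Conv_forward(A_prev, W, b, {"stride": 1, "pad": 1})
    A, pool_cache = Pool_forward(Z, {"stride": 2, "f": 2}, mode="max")
    print("Z shape:", Z.shape)   # expected (2, 5, 5, 8)
    print("A shape:", A.shape)   # expected (2, 2, 2, 8)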
| 33.319048 | 103 | 0.489924 | 991 | 6,997 | 3.349142 | 0.133199 | 0.045194 | 0.01627 | 0.012052 | 0.483579 | 0.423019 | 0.35824 | 0.333534 | 0.332329 | 0.264236 | 0 | 0.007 | 0.387452 | 6,997 | 210 | 104 | 33.319048 | 0.767382 | 0.277976 | 0 | 0.490385 | 0 | 0 | 0.016897 | 0 | 0 | 0 | 0 | 0 | 0.038462 | 1 | 0.038462 | false | 0 | 0.028846 | 0 | 0.105769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20b99c84a1d02922ac3d349c7b10e48dcaede0db | 4,974 | py | Python | nnf/dsharp.py | vishalbelsare/python-nnf | c2e81fd7851a3d11fff904bf5b4c5e521fde59ab | [
"0BSD"
] | 14 | 2020-07-14T01:51:26.000Z | 2021-12-17T22:45:47.000Z | nnf/dsharp.py | vishalbelsare/python-nnf | c2e81fd7851a3d11fff904bf5b4c5e521fde59ab | [
"0BSD"
] | 26 | 2020-07-14T23:37:52.000Z | 2021-11-04T18:06:38.000Z | nnf/dsharp.py | vishalbelsare/python-nnf | c2e81fd7851a3d11fff904bf5b4c5e521fde59ab | [
"0BSD"
] | 7 | 2020-07-26T10:53:21.000Z | 2021-09-19T00:35:30.000Z | """Interoperability with `DSHARP <https://github.com/QuMuLab/dsharp>`_.
``load`` and ``loads`` can be used to parse files created by DSHARP's
``-Fnnf`` option.
``compile`` invokes DSHARP directly to compile a sentence. This requires
having DSHARP installed.
The parser was derived by studying DSHARP's output and source code. This
format might be some sort of established standard, in which case this
parser might reject or misinterpret some valid files in the format.
DSHARP may not work properly for some (usually trivially) unsatisfiable
sentences, incorrectly reporting there's a solution. This bug dates back to
sharpSAT, on which DSHARP was based:
https://github.com/marcthurley/sharpSAT/issues/5
It was independently discovered by hypothesis during testing of this module.
"""
import io
import os
import subprocess
import tempfile
import typing as t
from nnf import NNF, And, Or, Var, false, true, dimacs
from nnf.util import Name
__all__ = ('load', 'loads', 'compile')
def load(
fp: t.TextIO, var_labels: t.Optional[t.Dict[int, Name]] = None
) -> NNF:
"""Load a sentence from an open file.
An optional ``var_labels`` dictionary can map integers to other names.
"""
def decode_name(num: int) -> Name:
if var_labels is not None:
return var_labels[num]
return num
fmt, nodecount, edges, varcount = fp.readline().split()
node_specs = dict(enumerate(line.split() for line in fp))
assert fmt == 'nnf'
nodes = {} # type: t.Dict[int, NNF]
for num, spec in node_specs.items():
if spec[0] == 'L':
if spec[1].startswith('-'):
nodes[num] = Var(decode_name(int(spec[1][1:])), False)
else:
nodes[num] = Var(decode_name(int(spec[1])))
elif spec[0] == 'A':
nodes[num] = And(nodes[int(n)] for n in spec[2:])
elif spec[0] == 'O':
nodes[num] = Or(nodes[int(n)] for n in spec[3:])
else:
raise ValueError("Can't parse line {}: {}".format(num, spec))
if int(nodecount) == 0:
raise ValueError("The sentence doesn't have any nodes.")
return nodes[int(nodecount) - 1]
def loads(s: str, var_labels: t.Optional[t.Dict[int, Name]] = None) -> NNF:
"""Load a sentence from a string."""
return load(io.StringIO(s), var_labels)
def compile(
sentence: And[Or[Var]],
executable: str = 'dsharp',
smooth: bool = False,
timeout: t.Optional[int] = None,
extra_args: t.Sequence[str] = ()
) -> NNF:
"""Run DSHARP to compile a CNF sentence to (s)d-DNNF.
This requires having DSHARP installed.
The returned sentence will be marked as deterministic.
:param sentence: The CNF sentence to compile.
:param executable: The path of the ``dsharp`` executable. If the
executable is in your PATH there's no need to set this.
:param smooth: Whether to produce a smooth sentence.
:param timeout: Tell DSHARP to give up after a number of seconds.
:param extra_args: Extra arguments to pass to DSHARP.
"""
args = [executable]
if smooth:
args.append('-smoothNNF')
if timeout is not None:
args.extend(['-t', str(timeout)])
args.extend(extra_args)
if not sentence.is_CNF():
raise ValueError("Sentence must be in CNF")
# Handle cases D# doesn't like
if not sentence.children:
return true
if false in sentence.children:
return false
var_labels = dict(enumerate(sentence.vars(), start=1))
var_labels_inverse = {v: k for k, v in var_labels.items()}
infd, infname = tempfile.mkstemp(text=True)
try:
with open(infd, 'w') as f:
dimacs.dump(sentence, f, mode='cnf', var_labels=var_labels_inverse)
outfd, outfname = tempfile.mkstemp()
try:
os.close(outfd)
proc = subprocess.Popen(
args + ['-Fnnf', outfname, infname],
stdout=subprocess.PIPE,
universal_newlines=True
)
log, _ = proc.communicate()
with open(outfname) as f:
out = f.read()
finally:
os.remove(outfname)
finally:
os.remove(infname)
if proc.returncode != 0:
raise RuntimeError(
"DSHARP failed with code {}. Log:\n\n{}".format(
proc.returncode, log
)
)
if out == 'nnf 0 0 0\n' or 'problem line expected' in log:
raise RuntimeError("Something went wrong. Log:\n\n{}".format(log))
if 'TIMEOUT' in log:
raise RuntimeError("DSHARP timed out after {} seconds".format(timeout))
if 'Theory is unsat' in log:
return false
if not out:
raise RuntimeError("Couldn't read file output. Log:\n\n{}".format(log))
result = loads(out, var_labels=var_labels)
result.mark_deterministic()
NNF.decomposable.set(result, True)
return result
| 32.090323 | 79 | 0.621431 | 679 | 4,974 | 4.505155 | 0.346097 | 0.038248 | 0.007846 | 0.010788 | 0.099379 | 0.090226 | 0.066688 | 0.054266 | 0.035306 | 0.035306 | 0 | 0.004646 | 0.264375 | 4,974 | 154 | 80 | 32.298701 | 0.831375 | 0.297346 | 0 | 0.106383 | 0 | 0 | 0.095182 | 0 | 0 | 0 | 0 | 0 | 0.010638 | 1 | 0.042553 | false | 0 | 0.074468 | 0 | 0.202128 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20ba293476d59761a6ef6fe44c77fac5699f68be | 4,513 | py | Python | stats/offense.py | wisarut-sirimart/Python-Baseball | f794455d4a217d7684bd86fdff19d1706bf7aab2 | [
"MIT"
] | null | null | null | stats/offense.py | wisarut-sirimart/Python-Baseball | f794455d4a217d7684bd86fdff19d1706bf7aab2 | [
"MIT"
] | null | null | null | stats/offense.py | wisarut-sirimart/Python-Baseball | f794455d4a217d7684bd86fdff19d1706bf7aab2 | [
"MIT"
] | null | null | null | import pandas as pd
import matplotlib.pyplot as plt
from data import games
# Select All Plays
# In the file called offense.py in the stats folder you will find similar imports as the last module.
# Import the games DataFrame from data.
# Now that we have access to the games DataFrame.
# Select all rows that have a type of play. Use the shortcut method. Hint: square brackets, simple boolean comparison.
# Assign this new DataFrame to a variable called plays.
# To make it easier to access certain columns, label them with the columns property: 'type', 'inning', 'team', 'player', 'count', 'pitches', 'event', 'game_id', and 'year'.
plays = games[games['type'] == 'play']
plays.columns = ['type', 'inning', 'team', 'player', 'count', 'pitches', 'event', 'game_id', 'year']
# Select Only Hits
# The plays DataFrame now contains all plays from every All-star game.
# The question we want to answer in this plot is: "What is the distribution of hits across innings?"
# For this we need just the hits, singles, doubles, triples, and home runs.
# Use loc[], str.contains() and the regex '^(?:S(?!B)|D|T|HR)' to select the rows where the event column's value starts with S (not SB), D, T, and HR in the plays DataFrame.
# Only return the inning and event columns. Assign the resulting DataFrame to hits.
hits = plays.loc[plays['event'].str.contains('^(?:S(?!B)|D|T|HR)'), ['inning', 'event']]
# Convert Column Type
# Convert the inning column of the hits DataFrame from strings to numbers using the pd.to_numeric() function. Hint: select the column with loc[]
hits.loc[:, 'inning'] = pd.to_numeric(hits.loc[:, 'inning'])
# Replace Dictionary
# The event column of the hits DataFrame now contains event information of various configurations. It contains where the ball was hit and other information that isn't needed. We will replace this with the type of hit for grouping later on.
# Create a dictionary called replacements that contains the following key value pairs
# r'^S(.*)': 'single'
# r'^D(.*)': 'double'
# r'^T(.*)': 'triple'
# r'^HR(.*)': 'hr'
replacements = {
r'^S(.*)': 'single',
r'^D(.*)': 'double',
r'^T(.*)': 'triple',
r'^HR(.*)': 'hr'
}
# Replace Function
# Call the replace() function on the hits['event'] column and pass in the replacements dictionary as the first parameter and regex=True as a keyword argument.
# Assign the result which is a DataFrame to hit_type.
hit_type = hits['event'].replace(replacements, regex=True)
# Add A New Column
# We have previously created new columns using groupby and concatenated DataFrames together. This time we will add a new column with assign().
# Below the replace() function, call assign() on the hits DataFrame, and pass in the keyword argument with the new column name and the new column hit_type=hit_type.
# Reassign the new resulting DataFrame to hits.
hits = hits.assign(hit_type=hit_type)
# Group By Inning and Hit Type
# In one line of code, group the hits DataFrame by inning and hit_type, call size() to count the number of hits per inning, and then reset the index of the resulting DataFrame.
# When reseting the index name the newly created column count.
# Since the final function call reset_index() returns a DataFrame make sure you reassign the resulting DataFrame to the variable hits.
hits = hits.groupby(['inning', 'hit_type']).size().reset_index(name='count')
# Convert Hit Type to Categorical
# Since there are only four types of hits let's save some memory by making hits['hit_type'] a categorical column with pd.Categorical().
# Pass a second parameter as a list 'single', 'double', 'triple', and 'hr'. This specifies the order.
hits['hit_type'] = pd.Categorical(hits['hit_type'], ['single', 'double', 'triple', 'hr'])
# Sort Values
# Sort the values in the hits DataFrame by inning and hit_type using the sort_values() function. Remember to reassign this operation to hits.
hits = hits.sort_values(['inning', 'hit_type'])
# Reshape With Pivot
# We need to reshape the hits DataFrame for plotting.
# Call the pivot() function on the hits DataFrame.
# Pass the pivot() function three keyword arguments index='inning', columns='hit_type', and values='count'
# Reassign the result of pivot() to hits.
hits = hits.pivot(index='inning', columns='hit_type', values='count')
# Stacked Bar Plot
# The most appropriate plot for our data is a stacked bar chart. To create this type of plot call plot.bar() with stacked set to True on the hits DataFrame.
# As always, show the plot.
hits.plot.bar(stacked=True)
plt.show()
| 65.405797 | 239 | 0.731886 | 727 | 4,513 | 4.511692 | 0.287483 | 0.03628 | 0.039024 | 0.012805 | 0.120732 | 0.064634 | 0.064634 | 0.064634 | 0.043902 | 0.017683 | 0 | 0 | 0.15821 | 4,513 | 68 | 240 | 66.367647 | 0.863385 | 0.779526 | 0 | 0 | 0 | 0 | 0.253165 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20bada261f2bc5bee2c060bfdfd13f8adc62d780 | 2,156 | py | Python | tests/gmprocess/io/read_test.py | baagaard-usgs/groundmotion-processing | 6be2b4460d598bba0935135efa85af2655578565 | [
"Unlicense"
] | 54 | 2019-01-12T02:05:38.000Z | 2022-03-29T19:43:56.000Z | tests/gmprocess/io/read_test.py | baagaard-usgs/groundmotion-processing | 6be2b4460d598bba0935135efa85af2655578565 | [
"Unlicense"
] | 700 | 2018-12-18T19:44:31.000Z | 2022-03-30T20:54:28.000Z | tests/gmprocess/io/read_test.py | baagaard-usgs/groundmotion-processing | 6be2b4460d598bba0935135efa85af2655578565 | [
"Unlicense"
] | 41 | 2018-11-29T23:17:56.000Z | 2022-03-31T04:04:23.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# stdlib imports
import os
from gmprocess.io.read import read_data, _get_format, _validate_format
from gmprocess.utils.test_utils import read_data_dir
from gmprocess.utils.config import get_config
def test_read():
config = get_config()
cosmos_files, _ = read_data_dir("cosmos", "ci14155260", "Cosmos12TimeSeriesTest.v1")
cwb_files, _ = read_data_dir("cwb", "us1000chhc", "1-EAS.dat")
dmg_files, _ = read_data_dir("dmg", "nc71734741", "CE89146.V2")
geonet_files, _ = read_data_dir(
"geonet", "us1000778i", "20161113_110259_WTMC_20.V1A"
)
knet_files, _ = read_data_dir("knet", "us2000cnnl", "AOM0011801241951.EW")
smc_files, _ = read_data_dir("smc", "nc216859", "0111a.smc")
file_dict = {}
file_dict["cosmos"] = cosmos_files[0]
file_dict["cwb"] = cwb_files[0]
file_dict["dmg"] = dmg_files[0]
file_dict["geonet"] = geonet_files[0]
file_dict["knet"] = knet_files[0]
file_dict["smc"] = smc_files[0]
for file_format in file_dict:
file_path = file_dict[file_format]
assert _get_format(file_path, config) == file_format
assert _validate_format(file_path, config, file_format) == file_format
assert _validate_format(file_dict["knet"], config, "smc") == "knet"
assert _validate_format(file_dict["dmg"], config, "cosmos") == "dmg"
assert _validate_format(file_dict["cosmos"], config, "invalid") == "cosmos"
for file_format in file_dict:
try:
stream = read_data(file_dict[file_format], config, file_format)[0]
except Exception as e:
pass
assert stream[0].stats.standard["source_format"] == file_format
stream = read_data(file_dict[file_format])[0]
assert stream[0].stats.standard["source_format"] == file_format
# test exception
try:
file_path = smc_files[0].replace("0111a.smc", "not_a_file.smc")
read_data(file_path)[0]
success = True
except BaseException:
success = False
assert success == False
if __name__ == "__main__":
os.environ["CALLED_FROM_PYTEST"] = "True"
test_read()
| 35.344262 | 88 | 0.670223 | 286 | 2,156 | 4.688811 | 0.29021 | 0.089485 | 0.05742 | 0.071588 | 0.278896 | 0.234154 | 0.119314 | 0.071588 | 0.071588 | 0 | 0 | 0.056549 | 0.196197 | 2,156 | 60 | 89 | 35.933333 | 0.717253 | 0.033395 | 0 | 0.130435 | 0 | 0 | 0.157692 | 0.025 | 0 | 0 | 0 | 0 | 0.173913 | 1 | 0.021739 | false | 0.021739 | 0.086957 | 0 | 0.108696 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20be3885a4f5f899c0e126b28f81e284e4b6e23c | 633 | py | Python | tests/syft/lib/torch/device_test.py | JMLourier/PySyft | 065a862ca061f9a526af81db5b3ee0d39d4f6407 | [
"MIT"
] | 2 | 2020-03-06T15:51:52.000Z | 2020-03-08T13:14:24.000Z | tests/syft/lib/torch/device_test.py | JMLourier/PySyft | 065a862ca061f9a526af81db5b3ee0d39d4f6407 | [
"MIT"
] | 5 | 2020-12-03T21:06:20.000Z | 2020-12-31T03:46:57.000Z | tests/syft/lib/torch/device_test.py | JMLourier/PySyft | 065a862ca061f9a526af81db5b3ee0d39d4f6407 | [
"MIT"
] | 1 | 2020-12-05T07:22:27.000Z | 2020-12-05T07:22:27.000Z | # third party
import torch as th
# syft absolute
import syft as sy
from syft.core.common.uid import UID
from syft.lib.python import String
def test_device() -> None:
device = th.device("cuda")
assert device.type == "cuda"
assert device.index is None
def test_device_init() -> None:
bob = sy.VirtualMachine(name="Bob")
client = bob.get_client()
torch = client.torch
type_str = String("cuda:0")
str_pointer = type_str.send(client)
device_pointer = torch.device(str_pointer)
assert type(device_pointer).__name__ == "devicePointer"
assert isinstance(device_pointer.id_at_location, UID)
| 23.444444 | 59 | 0.709321 | 90 | 633 | 4.8 | 0.444444 | 0.090278 | 0.060185 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001938 | 0.184834 | 633 | 26 | 60 | 24.346154 | 0.835271 | 0.039494 | 0 | 0 | 0 | 0 | 0.049587 | 0 | 0 | 0 | 0 | 0 | 0.235294 | 1 | 0.117647 | false | 0 | 0.235294 | 0 | 0.352941 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20bf152d1098acb69e542429f3dd7a20635a56bc | 8,495 | py | Python | clu/periodic_actions_test.py | andsteing/CommonLoopUtils | b39e992f60d041bc77809d859586027700a2c3a9 | [
"Apache-2.0"
] | 80 | 2020-10-11T17:37:52.000Z | 2022-03-30T17:17:05.000Z | clu/periodic_actions_test.py | andsteing/CommonLoopUtils | b39e992f60d041bc77809d859586027700a2c3a9 | [
"Apache-2.0"
] | 22 | 2020-12-18T15:12:04.000Z | 2021-09-24T08:10:23.000Z | clu/periodic_actions_test.py | andsteing/CommonLoopUtils | b39e992f60d041bc77809d859586027700a2c3a9 | [
"Apache-2.0"
] | 10 | 2020-10-13T16:35:30.000Z | 2022-02-08T21:00:00.000Z | # Copyright 2021 The CLU Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for perodic actions."""
import tempfile
import time
from unittest import mock
from absl.testing import parameterized
from clu import periodic_actions
import tensorflow as tf
class ReportProgressTest(tf.test.TestCase, parameterized.TestCase):
def test_every_steps(self):
hook = periodic_actions.ReportProgress(
every_steps=4, every_secs=None, num_train_steps=10)
t = time.time()
with self.assertLogs(level="INFO") as logs:
self.assertFalse(hook(1, t))
t += 0.11
self.assertFalse(hook(2, t))
t += 0.13
self.assertFalse(hook(3, t))
t += 0.12
self.assertTrue(hook(4, t))
# We did 1 step every 0.12s => 8.333 steps/s.
self.assertEqual(logs.output, [
"INFO:absl:Setting work unit notes: 8.3 steps/s, 40.0% (4/10), ETA: 0m"
])
def test_every_secs(self):
hook = periodic_actions.ReportProgress(
every_steps=None, every_secs=0.3, num_train_steps=10)
t = time.time()
with self.assertLogs(level="INFO") as logs:
self.assertFalse(hook(1, t))
t += 0.11
self.assertFalse(hook(2, t))
t += 0.13
self.assertFalse(hook(3, t))
t += 0.12
self.assertTrue(hook(4, t))
# We did 1 step every 0.12s => 8.333 steps/s.
self.assertEqual(logs.output, [
"INFO:absl:Setting work unit notes: 8.3 steps/s, 40.0% (4/10), ETA: 0m"
])
def test_without_num_train_steps(self):
report = periodic_actions.ReportProgress(every_steps=2)
t = time.time()
with self.assertLogs(level="INFO") as logs:
self.assertFalse(report(1, t))
self.assertTrue(report(2, t + 0.12))
# We did 1 step in 0.12s => 8.333 steps/s.
self.assertEqual(logs.output, [
"INFO:absl:Setting work unit notes: 8.3 steps/s"
])
def test_unknown_cardinality(self):
report = periodic_actions.ReportProgress(
every_steps=2,
num_train_steps=tf.data.UNKNOWN_CARDINALITY)
t = time.time()
with self.assertLogs(level="INFO") as logs:
self.assertFalse(report(1, t))
self.assertTrue(report(2, t + 0.12))
# We did 1 step in 0.12s => 8.333 steps/s.
self.assertEqual(logs.output, [
"INFO:absl:Setting work unit notes: 8.3 steps/s"
])
def test_called_every_step(self):
hook = periodic_actions.ReportProgress(every_steps=3, num_train_steps=10)
t = time.time()
with self.assertRaisesRegex(
ValueError, "PeriodicAction must be called after every step"):
hook(1, t)
hook(11, t) # Raises exception.
@parameterized.named_parameters(
("_nowait", False),
("_wait", True),
)
@mock.patch("time.time")
def test_named(self, wait_jax_async_dispatch, mock_time):
mock_time.return_value = 0
hook = periodic_actions.ReportProgress(
every_steps=1, every_secs=None, num_train_steps=10)
def _wait():
# Here we depend on hook._executor=ThreadPoolExecutor(max_workers=1)
hook._executor.submit(lambda: None).result()
self.assertFalse(hook(1)) # Never triggers on first execution.
with hook.timed("test1", wait_jax_async_dispatch):
_wait()
mock_time.return_value = 1
_wait()
with hook.timed("test2", wait_jax_async_dispatch):
_wait()
mock_time.return_value = 2
_wait()
with hook.timed("test1", wait_jax_async_dispatch):
_wait()
mock_time.return_value = 3
_wait()
mock_time.return_value = 4
with self.assertLogs(level="INFO") as logs:
self.assertTrue(hook(2))
self.assertEqual(logs.output, [
"INFO:absl:Setting work unit notes: 0.2 steps/s, 20.0% (2/10), ETA: 0m"
" (0m : 50.0% test1, 25.0% test2)"
])
class DummyProfilerSession:
"""Dummy Profiler that records the steps at which sessions started/ended."""
def __init__(self):
self.step = None
self.start_session_call_steps = []
self.end_session_call_steps = []
def start_session(self):
self.start_session_call_steps.append(self.step)
def end_session_and_get_url(self, tag):
del tag
self.end_session_call_steps.append(self.step)
class ProfileTest(tf.test.TestCase):
@mock.patch.object(periodic_actions, "profiler", autospec=True)
@mock.patch("time.time")
def test_every_steps(self, mock_time, mock_profiler):
start_steps = []
stop_steps = []
step = 0
def add_start_step(logdir):
del logdir # unused
start_steps.append(step)
def add_stop_step():
stop_steps.append(step)
mock_profiler.start.side_effect = add_start_step
mock_profiler.stop.side_effect = add_stop_step
hook = periodic_actions.Profile(
logdir=tempfile.mkdtemp(),
num_profile_steps=2,
profile_duration_ms=2_000,
first_profile=3,
every_steps=7)
for step in range(1, 18):
mock_time.return_value = step - 0.5 if step == 9 else step
hook(step)
self.assertAllEqual([3, 7, 14], start_steps)
# Note: profiling 7..10 instead of 7..9 because 7..9 took only 1.5 seconds.
self.assertAllEqual([5, 10, 16], stop_steps)
class ProfileAllHostsTest(tf.test.TestCase):
@mock.patch.object(periodic_actions, "profiler", autospec=True)
def test_every_steps(self, mock_profiler):
start_steps = []
step = 0
def profile_collect(logdir, callback, hosts, duration_ms):
del logdir, callback, hosts, duration_ms # unused
start_steps.append(step)
mock_profiler.collect.side_effect = profile_collect
hook = periodic_actions.ProfileAllHosts(
logdir=tempfile.mkdtemp(),
profile_duration_ms=2_000,
first_profile=3,
every_steps=7)
for step in range(1, 18):
hook(step)
self.assertAllEqual([3, 7, 14], start_steps)
class PeriodicCallbackTest(tf.test.TestCase):
def test_every_steps(self):
callback = mock.Mock()
hook = periodic_actions.PeriodicCallback(
every_steps=2, callback_fn=callback)
for step in range(1, 10):
hook(step, 3, remainder=step % 3)
expected_calls = [
mock.call(remainder=2, step=2, t=3),
mock.call(remainder=1, step=4, t=3),
mock.call(remainder=0, step=6, t=3),
mock.call(remainder=2, step=8, t=3)
]
self.assertListEqual(expected_calls, callback.call_args_list)
@mock.patch("time.time")
def test_every_secs(self, mock_time):
callback = mock.Mock()
hook = periodic_actions.PeriodicCallback(every_secs=2, callback_fn=callback)
for step in range(1, 10):
mock_time.return_value = float(step)
hook(step, remainder=step % 5)
# Note: time will be initialized at 1 so hook runs at steps 4 & 7.
expected_calls = [
mock.call(remainder=4, step=4, t=4.0),
mock.call(remainder=2, step=7, t=7.0)
]
self.assertListEqual(expected_calls, callback.call_args_list)
def test_async_execution(self):
out = []
def cb(step, t):
del t
out.append(step)
hook = periodic_actions.PeriodicCallback(
every_steps=1, callback_fn=cb, execute_async=True)
hook(0)
hook(1)
hook(2)
hook(3)
# Block till all the hooks have finished.
hook.get_last_callback_result().result()
# Check order of execution is preserved.
self.assertListEqual(out, [1, 2, 3])
def test_error_async_is_forwarded(self):
def cb(step, t):
del step
del t
raise Exception
hook = periodic_actions.PeriodicCallback(
every_steps=1, callback_fn=cb, execute_async=True)
hook(0)
hook(1)
with self.assertRaises(Exception):
hook(2)
def test_function_without_step_and_time(self):
# This must be used with pass_step_and_time=False.
def cb():
return 5
hook = periodic_actions.PeriodicCallback(
every_steps=1, callback_fn=cb, pass_step_and_time=False)
hook(0)
hook(1)
self.assertEqual(hook.get_last_callback_result(), 5)
if __name__ == "__main__":
tf.test.main()
| 30.339286 | 80 | 0.671218 | 1,222 | 8,495 | 4.491817 | 0.211129 | 0.043724 | 0.038076 | 0.02423 | 0.566406 | 0.472035 | 0.435598 | 0.386774 | 0.322463 | 0.300055 | 0 | 0.036238 | 0.213891 | 8,495 | 279 | 81 | 30.448029 | 0.785714 | 0.143967 | 0 | 0.512077 | 0 | 0.014493 | 0.065653 | 0 | 0 | 0 | 0 | 0 | 0.15942 | 1 | 0.111111 | false | 0.004831 | 0.028986 | 0.004831 | 0.169082 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c0713b32c4e888c8d8eba2e53d1e750e99ff54 | 570 | py | Python | _scripts/tools/utils/frontmatter_getter.py | gogntao/gogntao.github.io | ee200345d39521652b8c1cf9d27bcc2a6e02f3ef | [
"MIT"
] | null | null | null | _scripts/tools/utils/frontmatter_getter.py | gogntao/gogntao.github.io | ee200345d39521652b8c1cf9d27bcc2a6e02f3ef | [
"MIT"
] | null | null | null | _scripts/tools/utils/frontmatter_getter.py | gogntao/gogntao.github.io | ee200345d39521652b8c1cf9d27bcc2a6e02f3ef | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
Read a post and return a tuple consisting of
its Front Matter and the number of lines it contains.
© 2018-2019 Cotes Chung
MIT License
'''
def get_yaml(path):
end = False
yaml = ""
num = 0
with open(path, 'r') as f:
for line in f.readlines():
if line.strip() == '---':
if end:
break
else:
end = True
continue
else:
num += 1
yaml += line
return yaml, num
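# Example call (hypothetical post path, not part of the original tool):
#
#     front_matter, line_count = get_yaml('_posts/2019-01-01-sample-post.md')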
| 18.387097 | 52 | 0.452632 | 68 | 570 | 3.794118 | 0.764706 | 0.054264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.440351 | 570 | 30 | 53 | 19 | 0.77116 | 0.289474 | 0 | 0.125 | 0 | 0 | 0.010101 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c17791d1b4f3f9764ce5a02c3f17aa3c7d9e44 | 1,014 | py | Python | main/cart.py | Dogechi/Me2U | 0852600983dc1058ee347f4065ee801e16c1249e | [
"MIT"
] | null | null | null | main/cart.py | Dogechi/Me2U | 0852600983dc1058ee347f4065ee801e16c1249e | [
"MIT"
] | 9 | 2020-06-06T01:16:25.000Z | 2021-06-04T23:20:37.000Z | main/cart.py | Me2U-Afrika/Me2U | aee054afedff1e6c87f87494eaddf044e217aa95 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.db.models import Max
from datetime import datetime, timedelta
from me2ushop.models import Order, OrderItem
def remove_old_cart_items():
print('Removing old carts')
print('session age:', settings.SESSION_AGE_DAYS)
# calculate date of session age days ago
remove_before = datetime.now() + timedelta(days=-settings.SESSION_AGE_DAYS)
cart_ids = []
old_items = Order.objects.values('id').annotate(last_change=Max('start_date')).filter(
last_change__lt=remove_before).order_by()
print('old items:', old_items)
# create a list of cart ids that haven't been modified recently
for item in old_items:
cart_ids.append(item['id'])
to_remove = Order.objects.filter(id__in=cart_ids)
print('to remove:', to_remove)
for order in to_remove:
order_items = order.items.all()
print('order_items:', order_items)
order_items.delete()
to_remove.delete()
print(str(len(cart_ids)) + "carts were removed")
| 36.214286 | 90 | 0.708087 | 145 | 1,014 | 4.731034 | 0.406897 | 0.05102 | 0.061224 | 0.087464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001208 | 0.183432 | 1,014 | 27 | 91 | 37.555556 | 0.827295 | 0.088757 | 0 | 0 | 0 | 0 | 0.102063 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.181818 | 0 | 0.227273 | 0.272727 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c351d08261a89f6671fb61ae44dfb0f32ca3f0 | 3,658 | py | Python | tasks/python3/processTemporalLayer.py | greck2908/worldview | d17c463080218ce0a3d922be3dc8da5152860391 | [
"NASA-1.3"
] | 1 | 2021-03-01T22:10:14.000Z | 2021-03-01T22:10:14.000Z | tasks/python3/processTemporalLayer.py | greck2908/worldview | d17c463080218ce0a3d922be3dc8da5152860391 | [
"NASA-1.3"
] | 4 | 2021-12-03T00:01:57.000Z | 2022-03-22T21:01:34.000Z | tasks/python3/processTemporalLayer.py | greck2908/worldview | d17c463080218ce0a3d922be3dc8da5152860391 | [
"NASA-1.3"
] | null | null | null | from datetime import datetime
import isodate
import re
import traceback
def to_list(val):
return [val] if not hasattr(val, 'reverse') else val
# Add duration to end date using
# ISO 8601 duration keys
def determine_end_date(key, date):
return date + isodate.parse_duration(key)
# This method takes a layer and a temporal
# value and translates it to start and end dates
def process_temporal(wv_layer, value):
try:
ranges = to_list(value)
if "T" in ranges[0]:
wv_layer["period"] = "subdaily"
else:
if ranges[0].endswith("Y"):
wv_layer["period"] = "yearly"
elif ranges[0].endswith("M"):
wv_layer["period"] = "monthly"
else:
wv_layer["period"] = "daily"
start_date = datetime.max
end_date = datetime.min
date_range_start, date_range_end, range_interval = [], [], []
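        # walk every start/end/interval triple, tracking the overall earliest start and latest end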
for range in ranges:
times = range.split('/')
if wv_layer["period"] == "daily" \
or wv_layer["period"] == "monthly" \
or wv_layer["period"] == "yearly":
start_date = min(start_date,
datetime.strptime(times[0], "%Y-%m-%d"))
end_date = max(end_date,
datetime.strptime(times[1], "%Y-%m-%d"))
if start_date:
startDateParse = datetime.strptime(times[0], "%Y-%m-%d")
date_range_start.append(startDateParse.strftime("%Y-%m-%d") + "T" + startDateParse.strftime("%H:%M:%S") + "Z")
if end_date:
endDateParse = datetime.strptime(times[1], "%Y-%m-%d")
date_range_end.append(endDateParse.strftime("%Y-%m-%d") + "T" + endDateParse.strftime("%H:%M:%S") + "Z")
if times[2] != "P1D":
end_date = determine_end_date(times[2], end_date)
range_interval.append(re.search(r'\d+', times[2]).group())
else:
startTime = times[0].replace('T', ' ').replace('Z', '')
endTime = times[1].replace('T', ' ').replace('Z', '')
start_date = min(start_date,
datetime.strptime(startTime, "%Y-%m-%d %H:%M:%S"))
end_date = max(end_date,
datetime.strptime(endTime, "%Y-%m-%d %H:%M:%S"))
if start_date:
startTimeParse = datetime.strptime(startTime, "%Y-%m-%d %H:%M:%S")
date_range_start.append(startTimeParse.strftime("%Y-%m-%d") + "T" + startTimeParse.strftime("%H:%M:%S") + "Z")
if end_date:
endTimeParse = datetime.strptime(endTime, "%Y-%m-%d %H:%M:%S")
date_range_end.append(endTimeParse.strftime("%Y-%m-%d") + "T" + endTimeParse.strftime("%H:%M:%S") + "Z")
range_interval.append(re.search(r'\d+', times[2]).group())
wv_layer["startDate"] = start_date.strftime("%Y-%m-%d") + "T" + start_date.strftime("%H:%M:%S") + "Z"
if end_date != datetime.min:
wv_layer["endDate"] = end_date.strftime("%Y-%m-%d") + "T" + end_date.strftime("%H:%M:%S") + "Z"
if date_range_start and date_range_end:
wv_layer["dateRanges"] = [{"startDate": s, "endDate": e, "dateInterval": i} for s, e, i in zip(date_range_start, date_range_end, range_interval)]
except ValueError:
raise Exception("Invalid time: {0}".format(range))
except Exception as e:
print(traceback.format_exc())
raise Exception("Error processing temporal layer: {0}".format(e))
return wv_layer
| 49.432432 | 161 | 0.541006 | 455 | 3,658 | 4.204396 | 0.224176 | 0.054888 | 0.021955 | 0.034501 | 0.355985 | 0.315212 | 0.291166 | 0.187663 | 0.104548 | 0.041819 | 0 | 0.007764 | 0.29579 | 3,658 | 73 | 162 | 50.109589 | 0.73486 | 0.038272 | 0 | 0.19697 | 0 | 0 | 0.121549 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045455 | false | 0 | 0.060606 | 0.030303 | 0.151515 | 0.015152 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c46aef8b21547d823f1dda228583f79f1a470c | 1,162 | py | Python | baal/active/__init__.py | llv22/baal_tf2.4_mac | 6eed225f8b57e61d8d16b1868ea655384c566700 | [
"Apache-2.0"
] | 575 | 2019-09-30T20:44:28.000Z | 2022-03-27T17:39:22.000Z | baal/active/__init__.py | llv22/baal_tf2.4_mac | 6eed225f8b57e61d8d16b1868ea655384c566700 | [
"Apache-2.0"
] | 84 | 2019-10-01T15:58:55.000Z | 2022-03-28T13:27:32.000Z | baal/active/__init__.py | llv22/baal_tf2.4_mac | 6eed225f8b57e61d8d16b1868ea655384c566700 | [
"Apache-2.0"
] | 51 | 2019-10-08T23:05:39.000Z | 2022-02-14T22:13:27.000Z | from typing import Union, Callable
from . import heuristics
from .active_loop import ActiveLearningLoop
from .dataset import ActiveLearningDataset
from .file_dataset import FileDataset
def get_heuristic(
name: str, shuffle_prop: float = 0.0, reduction: Union[str, Callable] = "none", **kwargs
) -> heuristics.AbstractHeuristic:
"""
    Create a heuristic object from the name.
Args:
name (str): Name of the heuristic.
shuffle_prop (float): Shuffling proportion when getting ranks.
reduction (Union[str, Callable]): Reduction used after computing the score.
kwargs (dict): Complementary arguments.
Returns:
AbstractHeuristic object.
"""
heuristic: heuristics.AbstractHeuristic = {
"random": heuristics.Random,
"certainty": heuristics.Certainty,
"entropy": heuristics.Entropy,
"margin": heuristics.Margin,
"bald": heuristics.BALD,
"variance": heuristics.Variance,
"precomputed": heuristics.Precomputed,
"batch_bald": heuristics.BatchBALD,
}[name](shuffle_prop=shuffle_prop, reduction=reduction, **kwargs)
return heuristic
| 33.2 | 92 | 0.69191 | 117 | 1,162 | 6.803419 | 0.470085 | 0.055276 | 0.040201 | 0.062814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002186 | 0.212565 | 1,162 | 34 | 93 | 34.176471 | 0.86776 | 0.273666 | 0 | 0 | 0 | 0 | 0.08125 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052632 | false | 0 | 0.263158 | 0 | 0.368421 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c55612c6dde9a99bf02d10377bbecda4d7f7ce | 1,916 | py | Python | migrations/versions/e454b1597ab0_.py | Maybells/PostClassical | cbb45add86463deb942825d3c792bc8b6dcdd29b | [
"MIT"
] | null | null | null | migrations/versions/e454b1597ab0_.py | Maybells/PostClassical | cbb45add86463deb942825d3c792bc8b6dcdd29b | [
"MIT"
] | null | null | null | migrations/versions/e454b1597ab0_.py | Maybells/PostClassical | cbb45add86463deb942825d3c792bc8b6dcdd29b | [
"MIT"
] | null | null | null | """empty message
Revision ID: e454b1597ab0
Revises:
Create Date: 2021-01-19 11:50:17.899068
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'e454b1597ab0'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_unique_constraint(None, 'author', ['name'])
op.create_foreign_key(None, 'book_author_link', 'author', ['foreign_b'], ['id'])
op.create_foreign_key(None, 'book_author_link', 'book', ['foreign_a'], ['id'])
op.create_foreign_key(None, 'book_subject_link', 'subject', ['foreign_b'], ['id'])
op.create_foreign_key(None, 'book_subject_link', 'book', ['foreign_a'], ['id'])
op.create_foreign_key(None, 'printing', 'website', ['website'], ['id'])
op.create_foreign_key(None, 'printing', 'book', ['book_id'], ['id'])
op.drop_column('subject', 'count')
op.drop_index('ix_website_url_id', table_name='website_url')
op.create_foreign_key(None, 'website_url', 'website', ['site_id'], ['id'])
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(None, 'website_url', type_='foreignkey')
op.create_index('ix_website_url_id', 'website_url', ['id'], unique=False)
op.add_column('subject', sa.Column('count', sa.BIGINT(), nullable=True))
op.drop_constraint(None, 'printing', type_='foreignkey')
op.drop_constraint(None, 'printing', type_='foreignkey')
op.drop_constraint(None, 'book_subject_link', type_='foreignkey')
op.drop_constraint(None, 'book_subject_link', type_='foreignkey')
op.drop_constraint(None, 'book_author_link', type_='foreignkey')
op.drop_constraint(None, 'book_author_link', type_='foreignkey')
op.drop_constraint(None, 'author', type_='unique')
# ### end Alembic commands ###
| 40.765957 | 86 | 0.693111 | 252 | 1,916 | 4.984127 | 0.269841 | 0.047771 | 0.101911 | 0.127389 | 0.558917 | 0.511147 | 0.511147 | 0.479299 | 0.375796 | 0.300955 | 0 | 0.021661 | 0.132568 | 1,916 | 46 | 87 | 41.652174 | 0.734055 | 0.147704 | 0 | 0.214286 | 0 | 0 | 0.302005 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0.071429 | 0 | 0.142857 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20c609aa615f44f48228ba80019cfecbd7a032e3 | 3,249 | py | Python | config/application.py | dlsaavedra/Web-API | b3ad6d7d7dc7434c630ac4dcff3a805bba5e47a9 | [
"MIT"
] | null | null | null | config/application.py | dlsaavedra/Web-API | b3ad6d7d7dc7434c630ac4dcff3a805bba5e47a9 | [
"MIT"
] | null | null | null | config/application.py | dlsaavedra/Web-API | b3ad6d7d7dc7434c630ac4dcff3a805bba5e47a9 | [
"MIT"
] | null | null | null | from os import getenv
from pathlib import Path
from dotenv import load_dotenv
base_path = Path('.') # Fully qualified path to the project root
env_path = base_path / '.env' # Fully qualified path to the environment file
app_path = base_path.joinpath('app') # The fully qualified path to the app folder
public_path = base_path.joinpath('public') # The fully qualified path to the public folder
storage_path = base_path.joinpath('storage') # The fully qualified path to the storage folder
load_dotenv(dotenv_path=env_path)
config = {
# --------------------------------------------------------------------------
# Application Name
# --------------------------------------------------------------------------
#
# This value is the name of your application. This value is used when the
# framework needs to place the application's name in a notification or
# any other location as required by the application or its packages.
'name': getenv('APP_NAME', None),
# --------------------------------------------------------------------------
# Application Environment
# --------------------------------------------------------------------------
#
# This value determines the "environment" your application is currently
# running in. This may determine how you prefer to configure various
# services the application utilizes. Set this in your ".env" file.
'env': getenv('APP_ENV', 'production'),
# --------------------------------------------------------------------------
# Application URL
# --------------------------------------------------------------------------
#
# This URL is used by the console to properly generate URLs when using
# the Artisan command line tool. You should set this to the root of
# your application so that it is used when running Artisan tasks.
'url': getenv('APP_URL', 'http://localhost'),
# --------------------------------------------------------------------------
# Application Debug Mode
# --------------------------------------------------------------------------
#
# When your application is in debug mode, detailed error messages with
# stack traces will be shown on every error that occurs within your
# application. If disabled, a simple generic error page is shown.
'debug': getenv('APP_DEBUG', False),
# --------------------------------------------------------------------------
# Encryption Key
# --------------------------------------------------------------------------
#
# This key is used by the encrypter service and should be set
# to a random, 32 character string, otherwise these encrypted strings
# will not be safe. Please do this before deploying an application!
'key': getenv('APP_KEY', None),
# --------------------------------------------------------------------------
# Cross-Origin Resource Sharing (CORS)
# --------------------------------------------------------------------------
#
# The origin, or list of origins to allow requests from. The origin(s) may be
# regular expressions, case-sensitive strings, or else an asterisk
'origins': getenv('CORS_DOMAINS', '*')
}
| 48.492537 | 100 | 0.492767 | 324 | 3,249 | 4.882716 | 0.429012 | 0.018963 | 0.05689 | 0.063211 | 0.078382 | 0.049305 | 0 | 0 | 0 | 0 | 0 | 0.000751 | 0.180055 | 3,249 | 66 | 101 | 49.227273 | 0.593093 | 0.73715 | 0 | 0 | 0 | 0 | 0.152416 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20cca36f0660a5cec8d40a240e5c97178d31a054 | 5,511 | py | Python | predict.py | ciampluca/counting_perineuronal_nets | 29463a4810b74943ee5234673e9e0816716b7fee | [
"Apache-2.0"
] | 6 | 2021-12-16T13:47:56.000Z | 2022-02-05T09:49:37.000Z | predict.py | ciampluca/counting_perineuronal_nets | 29463a4810b74943ee5234673e9e0816716b7fee | [
"Apache-2.0"
] | 1 | 2021-06-28T17:09:48.000Z | 2021-06-28T18:58:04.000Z | predict.py | ciampluca/counting_perineuronal_nets | 29463a4810b74943ee5234673e9e0816716b7fee | [
"Apache-2.0"
] | null | null | null | import argparse
from pathlib import Path
import hydra
from omegaconf import OmegaConf
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm
from datasets.patched_datasets import PatchedMultiImageDataset, RandomAccessMultiImageDataset
@torch.no_grad()
def score_patches(loader, model, device, cfg):
compute_loss_and_scores = hydra.utils.get_method(f'methods.rank.methods.{cfg.optim.method}')
model.eval()
all_scores = []
for sample in tqdm(loader, desc='PRED', leave=False, dynamic_ncols=True):
dummy_targets = torch.zeros(sample.shape[0], dtype=torch.int64, device=device)
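        # the scoring method expects (input, target) pairs, so pair the batch with dummy labels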
sample = (sample, dummy_targets)
batch_metrics, scores = compute_loss_and_scores(sample, model, device, cfg)
all_scores.append(scores.flatten().cpu())
scores = torch.cat(all_scores).numpy()
return scores
def main(args):
run_path = Path(args.run)
cfg_path = run_path / '.hydra' / 'config.yaml'
cfg = OmegaConf.load(cfg_path)
cfg['cache_folder'] = './model_zoo'
dataset_params = dict(
patch_size=cfg.data.validation.get('patch_size', None),
transforms=hydra.utils.instantiate(cfg.data.validation.transforms),
)
dataset = PatchedMultiImageDataset.from_paths(args.data, **dataset_params)
print(f'[ DATA] {dataset}')
loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=False, num_workers=8)
model_param_string = ', '.join(f'{k}={v}' for k, v in cfg.model.module.items() if not k.startswith('_'))
model = hydra.utils.instantiate(cfg.model.module, skip_weights_loading=True)
print(f"[ MODEL] {cfg.method} - {cfg.model.name}({model_param_string})")
device = torch.device(args.device)
model = model.to(device)
print(f'[DEVICE] {device}')
metric_name = 'count/game-3'
ckpt_path = run_path / 'best.pth'
if not ckpt_path.exists():
ckpt_path = run_path / 'best_models' / f"best_model_metric_{metric_name.replace('/', '-')}.pth"
print(f"[ CKPT] {ckpt_path}")
checkpoint = torch.load(ckpt_path, map_location=device)
model.load_state_dict(checkpoint['model'])
threshold = checkpoint['metrics'][metric_name]['threshold'] if args.threshold is None else args.threshold
print(f'[PARAMS] thr = {threshold:.2f}')
predict_points = hydra.utils.get_method(f'methods.{cfg.method}.train_fn.predict_points')
localizations = predict_points(loader, model, device, threshold, cfg)
localizations = localizations.sort_values(['imgName', 'Y', 'X'])
print(f'[OUTPUT] {args.output}')
localizations.to_csv(args.output, index=False)
if args.rescore:
run_path = Path(args.rescore)
cfg_path = run_path / '.hydra' / 'config.yaml'
cfg = OmegaConf.load(cfg_path)
cfg['cache_folder'] = './model_zoo'
dataset_params = dict(
patch_size=cfg.data.validation.get('patch_size', None),
transforms=hydra.utils.instantiate(cfg.data.validation.transforms),
)
paths_and_locs = ((image, data[['Y', 'X']].values.astype(int)) for image, data in localizations.groupby('imgName'))
paths, locs = zip(*paths_and_locs)
paths = [Path(args.data[0]).parent / p for p in paths] # TODO ugly hack
dataset = RandomAccessMultiImageDataset.from_paths_and_locs(paths, locs, **dataset_params)
print(f'[ DATA] {dataset}')
loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=False, num_workers=8)
model_params = cfg.model.get('wrapper', cfg.model.base)
model = hydra.utils.instantiate(model_params)
model_param_string = ', '.join(f'{k}={v}' for k, v in model_params.items() if not k.startswith('_'))
print(f"[ MODEL] {cfg.model.name}({model_param_string})")
device = torch.device(args.device)
model = model.to(device)
print(f'[DEVICE] {device}')
metric_name = 'rank/spearman'
ckpt_path = run_path / 'best.pth'
if not ckpt_path.exists():
ckpt_path = run_path / 'best_models' / f"best_model_metric_{metric_name.replace('/', '-')}.pth"
if not ckpt_path.exists():
ckpt_path = run_path / 'last.pth'
print(f"[ CKPT] {ckpt_path}")
checkpoint = torch.load(ckpt_path, map_location=device)
model.load_state_dict(checkpoint['model'])
scores = score_patches(loader, model, device, cfg)
localizations['rescore'] = scores
print(f'[OUTPUT] {args.output}')
localizations.to_csv(args.output, index=False)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Perform Counting and Localization', formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('run', help='Pretrained run directory')
parser.add_argument('data', nargs='+', help='Input Images (Image or HDF5 formats)')
parser.add_argument('-d', '--device', default='cpu', help="Device to use; e.g., 'cpu', 'cuda:0'")
parser.add_argument('-b', '--batch-size', type=int, default=1, help="Batch size (number of patches processed in parallel by the model)")
parser.add_argument('-r', '--rescore', type=str, default=None, help="Pretrain run directory of rescoring model")
parser.add_argument('-t', '--threshold', type=float, default=None, help="Threshold (good values may vary depending on the model)")
parser.add_argument('-o', '--output', default='localizations.csv', help="Output file")
args = parser.parse_args()
main(args)
| 43.054688 | 141 | 0.678098 | 717 | 5,511 | 5.033473 | 0.276151 | 0.0266 | 0.021336 | 0.020781 | 0.446107 | 0.420615 | 0.387919 | 0.387919 | 0.387919 | 0.387919 | 0 | 0.00243 | 0.178552 | 5,511 | 127 | 142 | 43.393701 | 0.794787 | 0.00254 | 0 | 0.397959 | 0 | 0 | 0.205641 | 0.044586 | 0 | 0 | 0 | 0.007874 | 0 | 1 | 0.020408 | false | 0 | 0.081633 | 0 | 0.112245 | 0.112245 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20cda404e20b8445bfe6243758a4bf304606f130 | 375 | py | Python | test/test.py | innovate-invent/configutator | 372b45c44a10171b8518e61f2a7974969304c33a | [
"MIT"
] | null | null | null | test/test.py | innovate-invent/configutator | 372b45c44a10171b8518e61f2a7974969304c33a | [
"MIT"
] | 1 | 2017-09-22T05:52:54.000Z | 2017-09-22T05:52:54.000Z | test/test.py | innovate-invent/configutator | 372b45c44a10171b8518e61f2a7974969304c33a | [
"MIT"
] | null | null | null | from configutator import ConfigMap, ArgMap, EnvMap, loadConfig
import sys
def test(param1: int, param2: str):
"""
This is a test
:param param1: An integer
:param param2: A string
:return: Print the params
"""
print(param1, param2)
if __name__ == '__main__':
for argMap in loadConfig(sys.argv, (test,), "Test"):
test(**argMap[test])
| 22.058824 | 62 | 0.642667 | 48 | 375 | 4.854167 | 0.645833 | 0.06867 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020906 | 0.234667 | 375 | 16 | 63 | 23.4375 | 0.790941 | 0.24 | 0 | 0 | 0 | 0 | 0.046875 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.285714 | 0 | 0.428571 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20cdcbc31867bb9709b928a4acae70fbdca1641b | 3,211 | py | Python | util/dates.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 13 | 2020-12-14T19:56:04.000Z | 2021-11-06T13:24:48.000Z | util/dates.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 5 | 2020-12-14T19:56:30.000Z | 2021-09-22T22:09:39.000Z | util/dates.py | cumanachao/utopia-crm | 6d648971c427ca9f380b15ed0ceaf5767b88e8b9 | [
"BSD-3-Clause"
] | 3 | 2021-03-24T03:55:08.000Z | 2022-01-13T15:22:34.000Z | import calendar
from datetime import date, datetime, timedelta
from time import localtime
from django.utils.translation import ugettext_lazy as _
def add_business_days(from_date, add_days):
"""
    Adds the given number of business days (skipping weekends) to a date. Should be moved to util.
"""
business_days_to_add = add_days
current_date = from_date
while business_days_to_add > 0:
current_date += timedelta(days=1)
weekday = current_date.weekday()
if weekday >= 5: # sunday = 6
continue
business_days_to_add -= 1
return current_date
def next_weekday_with_isoweekday(d, isoweekday):
"""
    Returns the next date that falls on the given weekday.
    Uses isoweekday numbering (Mon: 1, Sun: 7)
"""
days_ahead = isoweekday - d.isoweekday()
    if days_ahead <= 0: # Target day already happened this week
days_ahead += 7
    return d + timedelta(days=days_ahead)
def next_business_day(today=None, today_hour=None):
"""
Returns the next business day INCLUDING SATURDAYS, keeping in mind that for our company the day starts at 5 am.
If necessary, today_hour can be removed.
"""
today = today or date.today()
today_hour = today_hour or localtime().tm_hour
if today_hour > 5: # days start at 5 am
# Then if it's more than 5 am, we check for today's isoweekday
iso = today.isoweekday()
# For us, Saturday is also a business day, if necessary, we can change this to say iso in (5, 6) so it takes
# both Friday and Saturday
if iso == 6:
# If it's Saturday, then next business day is going to be Monday
dif = 2
else:
dif = 1
return today + timedelta(days=dif)
return today
def first_saturday_on_month(today_date=None):
"""
    Returns the first Saturday of the current month.
"""
today_date = today_date or date.today()
first_day_of_month = date(today_date.year, today_date.month, 1)
month_range = calendar.monthrange(today_date.year, today_date.month)
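    # monthrange()[0] is the weekday of the 1st; offset from it to the first Saturday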
delta = (calendar.SATURDAY - month_range[0]) % 7
return first_day_of_month + timedelta(delta)
def next_month():
return date(
date.today().year + 1 if date.today().month == 12 else
date.today().year, int(date.today().strftime("%m")) % 12 + 1, 1)
def get_default_start_date():
return date.today() + timedelta(days=1)
def get_default_next_billing():
return date.today() + timedelta(days=1)
def format_date(d):
if d == date.today():
return _('Today')
elif d == date.today() - timedelta(1):
return _('Yesterday')
else:
if d.isoweekday() == 1:
return _('Mon') + d.strftime('%d-%m')
else:
return d.strftime('%d')
def add_month(d, n=1):
"""
    Add n months to d, keeping the day of month where possible. eom is the last day of
    the target month, found by adding n+1 months then subtracting 1 day; results are clamped to eom.
"""
q, r = divmod(d.month + n, 12)
eom = date(d.year + q, r + 1, 1) - timedelta(days=1)
if d.month != (d + timedelta(days=1)).month or d.day >= eom.day:
return eom
return eom.replace(day=d.day)
def diff_month(newer, older):
return (newer.year - older.year) * 12 + newer.month - older.month
| 30.875 | 116 | 0.638742 | 476 | 3,211 | 4.168067 | 0.27521 | 0.054435 | 0.035282 | 0.025706 | 0.059476 | 0.059476 | 0.032258 | 0 | 0 | 0 | 0 | 0.017999 | 0.255995 | 3,211 | 103 | 117 | 31.174757 | 0.812474 | 0.236064 | 0 | 0.080645 | 0 | 0 | 0.011003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.16129 | false | 0 | 0.064516 | 0.064516 | 0.467742 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d29550aa2f57d0c74fa67f5970acb95e350f79 | 2,771 | py | Python | LocadoraDeFilmes/funcoes_dos_filmes.py | JoaoVitorBernardino/Sistema-de-locadora-de-filmes | cf0cd0d8c9ad49fe48ab14626f241c4e0bb39846 | [
"MIT"
] | null | null | null | LocadoraDeFilmes/funcoes_dos_filmes.py | JoaoVitorBernardino/Sistema-de-locadora-de-filmes | cf0cd0d8c9ad49fe48ab14626f241c4e0bb39846 | [
"MIT"
] | null | null | null | LocadoraDeFilmes/funcoes_dos_filmes.py | JoaoVitorBernardino/Sistema-de-locadora-de-filmes | cf0cd0d8c9ad49fe48ab14626f241c4e0bb39846 | [
"MIT"
] | null | null | null | import os
from DataBase.dados_filmes import *
from DataBase.dados_aluguel import *
def cadastrar_filme():
os.system('cls' if os.name == 'nt' else 'clear')
nome = input('Digite o nome do filme: ')
ano = input('Digite o ano de lançamento do filme: ')
codigo = input('Digite o código do filme: ')
filmeAlugado = ""
filme = novo_filme(nome, ano, codigo, filmeAlugado)
add_filme(filme)
print(f'O filme {filme} foi cadastrado.')
def mostrar_catalogo():
os.system('cls' if os.name == 'nt' else 'clear')
filmes = get_filme()
for f in filmes:
print('-'*80)
print(f'{f["nome"]} - {f["ano"]} - {f["codigo"]} - {f["filmeAlugado"]}')
print('-'*80)
def ver_posicao_Filme():
i = 0
for pos in get_filme():
print('-'*80)
print(f'Posição {i}º - {pos["nome"]} - {pos["ano"]} - {pos["codigo"]} - {pos["filmeAlugado"]}')
i += 1
print('-' * 80)
def alugar():
alugar = novo_aluguel(str(input("Digite seu nome: ")))
resposta = "Sim"
filme= get_filme()
while resposta == 'sim' or resposta == 'Sim':
os.system('cls' if os.name == 'nt' else 'clear')
ver_posicao_Filme()
endereco = int(input("Digite a posição do filme desejado: "))
if filme[endereco]["filmeAlugado"] != "":
print("Este filme já foi alugado")
else:
filme[endereco]["filmeAlugado"] = alugar["alugador"]
add_aluguel(alugar, filme[endereco])
set_filme(endereco, filme[endereco])
resposta = input("Deseja alugar outro filme ? (Sim ou Não): ")
add_alugar(alugar)
def devolver():
devolver = novo_aluguel(str(input("Digite seu nome: ")))
resposta = "Sim"
filme= get_filme()
while resposta == 'sim' or resposta == 'Sim':
os.system('cls' if os.name == 'nt' else 'clear')
ver_posicao_Filme()
endereco = int(input("Digite a posição do filme desejado: "))
if filme[endereco]["filmeAlugado"] == "":
print("Este filme ainda não foi alugado")
else:
filme[endereco]["filmeAlugado"] = ""
add_aluguel(devolver, filme[endereco])
set_filme(endereco, filme[endereco])
resposta = input("Você possui outro filme para devolver ? (Sim ou Não): ")
add_alugar(devolver)
def tabela_de_preco():
preco_por_quant = {"01 Filme": 15,
"02 Filmes": 26,
"03 Filmes": 33,
"04 Filmes": 42,
"05 Filmes": 50}
print("Tabela de preços da quantidade de filmes alugados: ")
print(preco_por_quant)
def apagar_Filme():
ver_posicao_Filme()
endereco = int(input("Digite a posição do filme desejado: "))
apagar_Filme(endereco) | 32.6 | 103 | 0.582461 | 342 | 2,771 | 4.619883 | 0.266082 | 0.11519 | 0.027848 | 0.032911 | 0.489873 | 0.468354 | 0.418987 | 0.418987 | 0.418987 | 0.311392 | 0 | 0.0148 | 0.268495 | 2,771 | 85 | 104 | 32.6 | 0.764677 | 0 | 0 | 0.342857 | 0 | 0.028571 | 0.27886 | 0.007576 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.042857 | 0 | 0.142857 | 0.157143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d2eced67b2dc37a106dd6188cfaac2ce3f1efd | 599 | py | Python | find-optional-modules/delete-pyc.py | berkut-174/salt-windows-msi | 3a0b9c891db95dfcfc5daa518305e1a5cc20d1b0 | [
"Apache-2.0"
] | null | null | null | find-optional-modules/delete-pyc.py | berkut-174/salt-windows-msi | 3a0b9c891db95dfcfc5daa518305e1a5cc20d1b0 | [
"Apache-2.0"
] | null | null | null | find-optional-modules/delete-pyc.py | berkut-174/salt-windows-msi | 3a0b9c891db95dfcfc5daa518305e1a5cc20d1b0 | [
"Apache-2.0"
] | null | null | null | ''' delete pyc except salt-minion.pyc '''
from __future__ import print_function
import os
SRCDIR = r'c:\salt'
def action(start_path):
skipped = 0
deleted = 0
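    # walk the tree and delete every .pyc, keeping only salt-minion.pyc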
for dirpath, dirnames, filenames in os.walk(start_path):
for f in filenames:
fp = os.path.join(dirpath, f)
if fp.endswith('.pyc'):
if f == 'salt-minion.pyc':
skipped += 1
else:
os.remove(fp)
deleted += 1
print('{} skipped'.format(skipped))
print('{} deleted'.format(deleted))
action(SRCDIR)
| 23.038462 | 60 | 0.534224 | 71 | 599 | 4.408451 | 0.507042 | 0.063898 | 0.083067 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010178 | 0.343907 | 599 | 25 | 61 | 23.96 | 0.78626 | 0.055092 | 0 | 0 | 0 | 0 | 0.082585 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055556 | false | 0 | 0.111111 | 0 | 0.166667 | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d36a5f003e151ad485d22dfd0ea098ae87f73e | 387 | py | Python | DeepLearningAI/notification.py | philson-philip/harp | 8e38573cad1c3e16c062044a8f011658293d1531 | [
"MIT"
] | 1 | 2019-02-08T20:14:14.000Z | 2019-02-08T20:14:14.000Z | DeepLearningAI/notification.py | philson-philip/harp | 8e38573cad1c3e16c062044a8f011658293d1531 | [
"MIT"
] | 6 | 2021-03-18T22:10:34.000Z | 2022-03-11T23:40:16.000Z | DeepLearningAI/notification.py | philson-philip/harp | 8e38573cad1c3e16c062044a8f011658293d1531 | [
"MIT"
] | 3 | 2019-02-08T20:14:23.000Z | 2019-03-10T06:10:07.000Z | from win10toast import ToastNotifier
import time
def Notify(MessageTitle,MessageBody):
toaster = ToastNotifier()
toaster.show_toast(MessageTitle,
MessageBody,
icon_path="logo.ico",
duration=3,threaded=True)
from playsound import playsound
playsound('Notification2.mp3')
while toaster.notification_active(): time.sleep(0.1) | 35.181818 | 53 | 0.689922 | 40 | 387 | 6.6 | 0.725 | 0.174242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023333 | 0.224806 | 387 | 11 | 53 | 35.181818 | 0.856667 | 0 | 0 | 0 | 0 | 0 | 0.064433 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.272727 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d57f78d8a0255a09b457d1362b139f81bf6db0 | 2,883 | py | Python | datasets/cifar10.py | killianlevacher/defenseInvGAN-src | 8fa398536773c5bc00c906562d2d9359572b8157 | [
"MIT"
] | 14 | 2019-12-12T11:28:18.000Z | 2022-03-09T11:56:04.000Z | datasets/cifar10.py | killianlevacher/defenseInvGAN-src | 8fa398536773c5bc00c906562d2d9359572b8157 | [
"MIT"
] | 7 | 2019-12-16T22:20:01.000Z | 2022-02-10T00:45:21.000Z | datasets/cifar10.py | killianlevacher/defenseInvGAN-src | 8fa398536773c5bc00c906562d2d9359572b8157 | [
"MIT"
] | 2 | 2020-04-01T09:02:00.000Z | 2021-08-01T14:27:11.000Z | import os
import numpy as np
import cPickle as pickle
from datasets.dataset import Dataset
class Cifar10(Dataset):
"""Implements the Dataset class to handle CIFAR-10.
Attributes:
y_dim: The dimension of label vectors (number of classes).
split_data: A dictionary of
{
'train': Images of np.ndarray, Int array of labels, and int
array of ids.
'val': Images of np.ndarray, Int array of labels, and int
array of ids.
'test': Images of np.ndarray, Int array of labels, and int
array of ids.
}
"""
def __init__(self, root='./data'):
super(Cifar10, self).__init__('cifar10', root)
self.y_dim = 10
self.split_data = {}
def load(self, split='train', lazy=True, randomize=True):
"""Implements the load function.
Args:
split: Dataset split, can be [train|val|test], default: train.
Returns:
Images of np.ndarray, Int array of labels, and int array of ids.
Raises:
ValueError: If split is not one of [train|val|test].
"""
if split in self.split_data.keys():
return self.split_data[split]
images = None
labels = None
data_dir = self.data_dir
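        # CIFAR-10 training images are shipped as five pickled batches (data_batch_1..5)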
for i in range(5):
f = open(os.path.join(data_dir, 'cifar-10-batches-py', 'data_batch_' + str(i + 1)), 'rb')
datadict = pickle.load(f)
f.close()
x = datadict['data']
y = datadict['labels']
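            # raw rows are flat (3072,) vectors; reshape to CHW and transpose to HWC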
x = x.reshape([-1, 3, 32, 32])
x = x.transpose([0, 2, 3, 1])
if images is None:
images = x
labels = y
else:
images = np.concatenate((images, x), axis=0)
labels = np.concatenate((labels, y), axis=0)
f = open(os.path.join(data_dir, 'cifar-10-batches-py', 'test_batch'), 'rb')
datadict = pickle.load(f)
f.close()
test_images = datadict['data']
test_labels = datadict['labels']
test_images = test_images.reshape([-1, 3, 32, 32])
test_images = test_images.transpose([0, 2, 3, 1])
if split == 'train':
images = images[:50000]
labels = labels[:50000]
elif split == 'val':
images = images[40000:50000]
labels = labels[40000:50000]
elif split == 'test':
images = test_images
labels = test_labels
if randomize:
rng_state = np.random.get_state()
np.random.shuffle(images)
np.random.set_state(rng_state)
np.random.shuffle(labels)
self.split_data[split] = [images, labels]
self.images = images
self.labels = labels
return images, labels
| 30.347368 | 101 | 0.534513 | 355 | 2,883 | 4.24507 | 0.287324 | 0.042468 | 0.053086 | 0.045123 | 0.285335 | 0.236231 | 0.216324 | 0.180491 | 0.180491 | 0.180491 | 0 | 0.036559 | 0.354839 | 2,883 | 94 | 102 | 30.670213 | 0.773656 | 0.248699 | 0 | 0.074074 | 0 | 0 | 0.055149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.037037 | false | 0 | 0.074074 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d64baaf85f6901221960787dd47f6f3ae74d9b | 759 | py | Python | tests/test_collide.py | csayres/kaiju | 0b4ca4fab5322351b97b8316b2d755d91bc05c16 | [
"BSD-3-Clause"
] | null | null | null | tests/test_collide.py | csayres/kaiju | 0b4ca4fab5322351b97b8316b2d755d91bc05c16 | [
"BSD-3-Clause"
] | null | null | null | tests/test_collide.py | csayres/kaiju | 0b4ca4fab5322351b97b8316b2d755d91bc05c16 | [
"BSD-3-Clause"
] | null | null | null | import pytest
import kaiju
from kaiju.robotGrid import RobotGridNominal
from kaiju import utils
def test_collide(plot=False):
    # build a nominal robot grid
rg = RobotGridNominal()
collidedRobotIDs = []
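    # extend two neighbouring robots towards each other so they collide; fold the rest away (beta=180)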
for rid, r in rg.robotDict.items():
if r.holeID == "R-13C1":
r.setAlphaBeta(90,0)
collidedRobotIDs.append(rid)
elif r.holeID == "R-13C2":
collidedRobotIDs.append(rid)
r.setAlphaBeta(270, 0)
else:
r.setAlphaBeta(90,180)
for rid in collidedRobotIDs:
assert rg.isCollided(rid)
assert rg.getNCollisions() == 2
if plot:
utils.plotOne(0, rg, figname="test_collide.png", isSequence=False, xlim=[-30, 30], ylim=[-30, 30])
if __name__ == "__main__":
test_collide(True) | 24.483871 | 104 | 0.648221 | 97 | 759 | 4.958763 | 0.515464 | 0.068607 | 0.033264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.048027 | 0.231884 | 759 | 31 | 105 | 24.483871 | 0.777015 | 0.023715 | 0 | 0.086957 | 0 | 0 | 0.048649 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.043478 | false | 0 | 0.173913 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d675e25832cd9793c5f67b4cb83f944dbf455c | 2,839 | py | Python | Lib/site-packages/psycopg/__init__.py | RosaSineSpinis/twitter_bitcon_tag_analyser | 3311022b6fd629ce85f0c4fa0516e310bed05d74 | [
"bzip2-1.0.6"
] | null | null | null | Lib/site-packages/psycopg/__init__.py | RosaSineSpinis/twitter_bitcon_tag_analyser | 3311022b6fd629ce85f0c4fa0516e310bed05d74 | [
"bzip2-1.0.6"
] | null | null | null | Lib/site-packages/psycopg/__init__.py | RosaSineSpinis/twitter_bitcon_tag_analyser | 3311022b6fd629ce85f0c4fa0516e310bed05d74 | [
"bzip2-1.0.6"
] | null | null | null | """
psycopg -- PostgreSQL database adapter for Python
"""
# Copyright (C) 2020-2021 The Psycopg Team
import logging
from . import pq # noqa: F401 import early to stabilize side effects
from . import types
from . import postgres
from .copy import Copy, AsyncCopy
from ._enums import IsolationLevel
from .cursor import Cursor
from .errors import Warning, Error, InterfaceError, DatabaseError
from .errors import DataError, OperationalError, IntegrityError
from .errors import InternalError, ProgrammingError, NotSupportedError
from ._column import Column
from .conninfo import ConnectionInfo
from .connection import BaseConnection, Connection, Notify
from .transaction import Rollback, Transaction, AsyncTransaction
from .cursor_async import AsyncCursor
from .server_cursor import AsyncServerCursor, ServerCursor
from .connection_async import AsyncConnection
from . import dbapi20
from .dbapi20 import BINARY, DATETIME, NUMBER, ROWID, STRING
from .dbapi20 import Binary, Date, DateFromTicks, Time, TimeFromTicks
from .dbapi20 import Timestamp, TimestampFromTicks
from .version import __version__ as __version__ # noqa: F401
# Set the logger to a quiet default, can be enabled if needed
logger = logging.getLogger("psycopg")
if logger.level == logging.NOTSET:
logger.setLevel(logging.WARNING)
# DBAPI compliancy
connect = Connection.connect
apilevel = "2.0"
threadsafety = 2
paramstyle = "pyformat"
# register default adapters for PostgreSQL
adapters = postgres.adapters # exposed by the package
postgres.register_default_adapters(adapters)
# After the default ones, because these can deal with the bytea oid better
dbapi20.register_dbapi20_adapters(adapters)
# Must come after all the types have been registered
types.array.register_all_arrays(adapters)
# Note: defining the exported methods helps both Sphynx in documenting that
# this is the canonical place to obtain them and should be used by MyPy too,
# so that function signatures are consistent with the documentation.
__all__ = [
"AsyncConnection",
"AsyncCopy",
"AsyncCursor",
"AsyncServerCursor",
"AsyncTransaction",
"BaseConnection",
"Column",
"Connection",
"ConnectionInfo",
"Copy",
"Cursor",
"IsolationLevel",
"Notify",
"Rollback",
"ServerCursor",
"Transaction",
# DBAPI exports
"connect",
"apilevel",
"threadsafety",
"paramstyle",
"Warning",
"Error",
"InterfaceError",
"DatabaseError",
"DataError",
"OperationalError",
"IntegrityError",
"InternalError",
"ProgrammingError",
"NotSupportedError",
# DBAPI type constructors and singletons
"Binary",
"Date",
"DateFromTicks",
"Time",
"TimeFromTicks",
"Timestamp",
"TimestampFromTicks",
"BINARY",
"DATETIME",
"NUMBER",
"ROWID",
"STRING",
]
| 27.563107 | 76 | 0.738288 | 304 | 2,839 | 6.819079 | 0.480263 | 0.019296 | 0.023155 | 0.037627 | 0.0685 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012409 | 0.176823 | 2,839 | 102 | 77 | 27.833333 | 0.874626 | 0.241634 | 0 | 0 | 0 | 0 | 0.211069 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.285714 | 0 | 0.285714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d72cb6f933da26640aaaa4fbce23b2cbb317bd | 434 | py | Python | Datasets/Terrain/us_ned_topo_diversity.py | liuxb555/earthengine-py-examples | cff5d154b15a17d6a241e3c003b7fc9a2c5903f3 | [
"MIT"
] | 75 | 2020-06-09T14:40:11.000Z | 2022-03-07T08:38:10.000Z | Datasets/Terrain/us_ned_topo_diversity.py | gentaprekazi/earthengine-py-examples | 76ae8e071a71b343f5e464077afa5b0ed2f9314c | [
"MIT"
] | 1 | 2022-03-15T02:23:45.000Z | 2022-03-15T02:23:45.000Z | Datasets/Terrain/us_ned_topo_diversity.py | gentaprekazi/earthengine-py-examples | 76ae8e071a71b343f5e464077afa5b0ed2f9314c | [
"MIT"
] | 35 | 2020-06-12T23:23:48.000Z | 2021-11-15T17:34:50.000Z | import ee
import geemap
# Create a map centered at (lat, lon).
Map = geemap.Map(center=[40, -100], zoom=4)
dataset = ee.Image('CSP/ERGo/1_0/US/topoDiversity')
usTopographicDiversity = dataset.select('constant')
usTopographicDiversityVis = {
'min': 0.0,
'max': 1.0,
}
Map.setCenter(-111.313, 39.724, 6)
Map.addLayer(
usTopographicDiversity, usTopographicDiversityVis,
'US Topographic Diversity')
# Display the map.
Map
| 21.7 | 54 | 0.71659 | 57 | 434 | 5.438596 | 0.701754 | 0.012903 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064343 | 0.140553 | 434 | 19 | 55 | 22.842105 | 0.766756 | 0.12212 | 0 | 0 | 0 | 0 | 0.177249 | 0.07672 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20d89580fd6577e22f09078f9105ed4eb217404f | 6,139 | py | Python | cnn_to_mlp.py | minimario/CNN-Cert | 0dd60a8e8277cfecef3ab4d1ed055e62f92fd71c | [
"Apache-2.0"
] | 54 | 2020-09-09T12:43:43.000Z | 2022-03-17T17:31:19.000Z | cnn_to_mlp.py | jinzh154/CNN-Cert | 0dd60a8e8277cfecef3ab4d1ed055e62f92fd71c | [
"Apache-2.0"
] | 9 | 2019-04-26T15:33:21.000Z | 2022-02-17T13:20:47.000Z | cnn_to_mlp.py | jinzh154/CNN-Cert | 0dd60a8e8277cfecef3ab4d1ed055e62f92fd71c | [
"Apache-2.0"
] | 16 | 2019-02-17T03:02:36.000Z | 2021-05-17T13:59:07.000Z | """
cnn_to_mlp.py
Converts CNNs to MLP networks
Copyright (C) 2018, Akhilan Boopathy <akhilan@mit.edu>
Lily Weng <twweng@mit.edu>
Pin-Yu Chen <Pin-Yu.Chen@ibm.com>
Sijia Liu <Sijia.Liu@ibm.com>
Luca Daniel <dluca@mit.edu>
"""
from tensorflow.keras.models import load_model
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense, Activation, Flatten, Conv2D, Lambda
from tensorflow.contrib.keras.api.keras.callbacks import LambdaCallback
from tensorflow.contrib.keras.api.keras.optimizers import SGD, Adam
from tensorflow.contrib.keras.api.keras import backend as K
import numpy as np
from setup_mnist import MNIST
from setup_cifar import CIFAR
import tensorflow as tf
import time as timing
import datetime
ts = timing.time()
timestr = datetime.datetime.fromtimestamp(ts).strftime('%Y%m%d_%H%M%S')
#Prints to log file
def printlog(s):
print(s, file=open("log_cnn2mlp_"+timestr+".txt", "a"))
#Function to get weights from saved model
def get_weights(file_name, inp_shape=(28,28,1)):
model = load_model(file_name, custom_objects={'fn':fn, 'tf':tf})
temp_weights = [layer.get_weights() for layer in model.layers]
new_params = []
eq_weights = []
cur_size = inp_shape
for p in temp_weights:
if len(p) > 0:
W, b = p
eq_weights.append([])
if len(W.shape) == 2: #FC
eq_weights.append([W, b])
else: # Conv
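                # unroll the convolution into an equivalent fully-connected weight matrix over the flattened input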
new_size = (cur_size[0]-W.shape[0]+1, cur_size[1]-W.shape[1]+1, W.shape[-1])
flat_inp = np.prod(cur_size)
flat_out = np.prod(new_size)
new_params.append(flat_out)
W_flat = np.zeros((flat_inp, flat_out))
b_flat = np.zeros((flat_out))
m,n,p = cur_size
d,e,f = new_size
for x in range(d):
for y in range(e):
for z in range(f):
b_flat[e*f*x+f*y+z] = b[z]
for k in range(p):
for idx0 in range(W.shape[0]):
for idx1 in range(W.shape[1]):
i = idx0 + x
j = idx1 + y
W_flat[n*p*i+p*j+k, e*f*x+f*y+z]=W[idx0, idx1, k, z]
eq_weights.append([W_flat, b_flat])
cur_size = new_size
print('Weights found')
return eq_weights, new_params
def fn(correct, predicted):
return tf.nn.softmax_cross_entropy_with_logits(labels=correct,
logits=predicted)
#Main function to convert CNN to MLP model
def convert(file_name, new_name, cifar = False):
if not cifar:
eq_weights, new_params = get_weights(file_name)
data = MNIST()
else:
eq_weights, new_params = get_weights(file_name, inp_shape = (32,32,3))
data = CIFAR()
model = Sequential()
model.add(Flatten(input_shape=data.train_data.shape[1:]))
for param in new_params:
model.add(Dense(param))
model.add(Lambda(lambda x: tf.nn.relu(x)))
model.add(Dense(10))
for i in range(len(eq_weights)):
try:
print(eq_weights[i][0].shape)
except:
pass
model.layers[i].set_weights(eq_weights[i])
sgd = SGD(lr=0.01, decay=1e-5, momentum=0.9, nesterov=True)
model.compile(loss=fn,
optimizer=sgd,
metrics=['accuracy'])
model.save(new_name)
acc = model.evaluate(data.validation_data, data.validation_labels)[1]
printlog("Converting CNN to MLP")
nlayer = file_name.split('_')[-3][0]
filters = file_name.split('_')[-2]
kernel_size = file_name.split('_')[-1]
printlog("model name = {0}, numlayer = {1}, filters = {2}, kernel size = {3}".format(file_name,nlayer,filters,kernel_size))
printlog("Model accuracy: {:.3f}".format(acc))
printlog("-----------------------------------")
return acc
if __name__ == '__main__':
table = 3
printlog("-----------------------------------")
if table == 3 or table == 4:
#Table 3+4
convert('models/mnist_cnn_4layer_5_3', 'models/mnist_cnn_as_mlp_4layer_5_3')
convert('models/mnist_cnn_4layer_20_3', 'models/mnist_cnn_as_mlp_4layer_20_3')
convert('models/mnist_cnn_5layer_5_3', 'models/mnist_cnn_as_mlp_5layer_5_3')
convert('models/cifar_cnn_7layer_5_3', 'models/cifar_cnn_as_mlp_7layer_5_3', cifar=True)
convert('models/cifar_cnn_5layer_10_3', 'models/cifar_cnn_as_mlp_5layer_10_3', cifar=True)
if table == 10 or table == 11:
#Table 10+11
convert('models/mnist_cnn_2layer_5_3', 'models/mnist_cnn_as_mlp_2layer_5_3')
convert('models/mnist_cnn_3layer_5_3', 'models/mnist_cnn_as_mlp_3layer_5_3')
convert('models/mnist_cnn_6layer_5_3', 'models/mnist_cnn_as_mlp_6layer_5_3')
convert('models/mnist_cnn_7layer_5_3', 'models/mnist_cnn_as_mlp_7layer_5_3')
convert('models/mnist_cnn_8layer_5_3', 'models/mnist_cnn_as_mlp_8layer_5_3')
convert('models/cifar_cnn_5layer_5_3', 'models/cifar_cnn_as_mlp_5layer_5_3', cifar=True)
convert('models/cifar_cnn_6layer_5_3', 'models/cifar_cnn_as_mlp_6layer_5_3', cifar=True)
convert('models/cifar_cnn_8layer_5_3', 'models/cifar_cnn_as_mlp_8layer_5_3', cifar=True)
convert('models/mnist_cnn_4layer_10_3', 'models/mnist_cnn_as_mlp_4layer_10_3')
convert('models/mnist_cnn_8layer_10_3', 'models/mnist_cnn_as_mlp_8layer_10_3')
convert('models/cifar_cnn_7layer_10_3', 'models/cifar_cnn_as_mlp_7layer_10_3', cifar=True)
convert('models/mnist_cnn_8layer_20_3', 'models/mnist_cnn_as_mlp_8layer_20_3')
convert('models/cifar_cnn_5layer_20_3', 'models/cifar_cnn_as_mlp_5layer_20_3', cifar=True)
convert('models/cifar_cnn_7layer_20_3', 'models/cifar_cnn_as_mlp_7layer_20_3', cifar=True)
| 42.93007 | 127 | 0.620948 | 904 | 6,139 | 3.90708 | 0.211283 | 0.068516 | 0.087203 | 0.065402 | 0.384768 | 0.353907 | 0.209513 | 0.047565 | 0 | 0 | 0 | 0.042891 | 0.251833 | 6,139 | 142 | 128 | 43.232394 | 0.726105 | 0.069555 | 0 | 0.036036 | 0 | 0.009009 | 0.249298 | 0.218574 | 0 | 0 | 0 | 0 | 0 | 1 | 0.036036 | false | 0.009009 | 0.108108 | 0.009009 | 0.171171 | 0.081081 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20da2372e14378343601425914ba9a4d487c8b91 | 5,328 | py | Python | libsymple.py | aoeftiger/PySymple | 00b887a59a107426d940aeb1e42a30a521b5729d | [
"MIT"
] | 1 | 2019-12-18T15:30:19.000Z | 2019-12-18T15:30:19.000Z | libsymple.py | aoeftiger/PySymple | 00b887a59a107426d940aeb1e42a30a521b5729d | [
"MIT"
] | null | null | null | libsymple.py | aoeftiger/PySymple | 00b887a59a107426d940aeb1e42a30a521b5729d | [
"MIT"
] | null | null | null | '''
Copyright 2014 by Adrian Oeftiger, oeftiger@cern.ch
This module provides various numerical integration methods
for Hamiltonian vector fields on (currently two-dimensional) phase space.
The integrators are separated according to symplecticity.
The method is_symple() is provided to check for symplecticity
of a given integration method -- it may be used generically
for any integration method with the described signature.
'''
from __future__ import division
import numpy as np
import libTPSA
def is_symple(integrator):
    '''returns whether the given integrator is symplectic to within a certain
numerical tolerance (fixed by numpy.allclose).
The decision is taken on whether the Jacobian determinant remains 1
(after a time step of 1 while modelling a harmonic oscillator).
The integrator input should be a function taking the argument
signature (x_initial, p_initial, timestep, H_p, H_x), where
the first three arguments are numbers and H_p(p) and H_x(x) are
functions taking one argument.'''
x_initial = libTPSA.TPS([2, 1, 0])
p_initial = libTPSA.TPS([0, 0, 1])
timestep = 1
x_final, p_final = integrator(x_initial, p_initial,
timestep, lambda pp:pp, lambda xx:xx)
jacobian = np.linalg.det( [x_final.diff, p_final.diff] )
return np.allclose(jacobian, 1.0)
class symple(object):
'''Contains *symplectic* integrator algorithms.
The integrator input should be a function taking the argument
signature (x_initial, p_initial, timestep, H_p, H_x).
It is assumed that the Hamiltonian is separable into a kinetic
part T(p) (giving rise to H_p(p) = dH/dp which only depends on the
conjugate momentum p) and into a potential part V(x) (giving rise
to H_x(x) = dH/dx which only depends on the spatial coordinate x).'''
@staticmethod
def Euler_Cromer(x_initial, p_initial, timestep, H_p, H_x):
'''Symplectic one-dimensional Euler Cromer O(T^2) Algorithm.
        This Euler_Cromer is explicit! keyword: drift-kick mechanism'''
x_final = x_initial + timestep * H_p(p_initial)
p_final = p_initial - timestep * H_x(x_final)
return x_final, p_final
@staticmethod
def Verlet(x_initial, p_initial, timestep, H_p, H_x):
'''Symplectic one-dimensional (Velocity) Verlet O(T^3) Algorithm.
keyword: leapfrog mechanism'''
p_intermediate = p_initial - 0.5 * timestep * H_x(x_initial)
x_final = x_initial + timestep * H_p(p_intermediate)
p_final = p_intermediate - 0.5 * timestep * H_x(x_final)
return x_final, p_final
@staticmethod
def Ruth(x_initial, p_initial, timestep, H_p, H_x):
'''Symplectic one-dimensional Ruth and Forest O(T^5) Algorithm.
Harvard: 1992IAUS..152..407Y'''
twoot = np.power(2, 1. / 3)
fc = 1. / (2 - twoot)
# ci: drift, di: kick
c1 = fc / 2.; c4 = c1
c2 = (1 - twoot) * fc / 2.; c3 = c2
d1 = fc; d3 = d1
d2 = -twoot * fc
# d4 = 0
x_intermediate = x_initial + timestep * c4 * H_p(p_initial)
p_intermediate = p_initial - timestep * d3 * H_x(x_intermediate)
x_intermediate += timestep * c3 * H_p(p_intermediate)
p_intermediate -= timestep * d2 * H_x(x_intermediate)
x_intermediate += timestep * c2 * H_p(p_intermediate)
p_final = p_intermediate - timestep * d1 * H_x(x_intermediate)
x_final = x_intermediate + timestep * c1 * H_p(p_final)
return x_final, p_final
class non_symple(object):
'''Contains *non-symplectic* integrator algorithms.
The integrator input should be a function taking the argument
signature (x_initial, p_initial, timestep, H_p, H_x).
H_x(x) = dH/dx is a function of x only while
H_p(p) = dH/dp is a function of p only.'''
@staticmethod
def Euler(x_initial, p_initial, timestep, H_p, H_x):
'''Non-symplectic one-dimensional Euler O(T^2) Algorithm.'''
x_final = x_initial + timestep * H_p(p_initial)
p_final = p_initial - timestep * H_x(x_initial)
return x_final, p_final
@staticmethod
def RK2(x_initial, p_initial, timestep, H_p, H_x):
'''Non-symplectic one-dimensional Runge Kutta 2 O(T^3) Algorithm.'''
x_a = timestep * H_p(p_initial)
p_a = - timestep * H_x(x_initial)
x_b = timestep * H_p(p_initial + 0.5 * p_a)
p_b = - timestep * H_x(x_initial + 0.5 * x_a)
x_final = x_initial + x_b
p_final = p_initial + p_b
return x_final, p_final
@staticmethod
def RK4(x_initial, p_initial, timestep, H_p, H_x):
'''Non-symplectic one-dimensional Runge Kutta 4 O(T^5) Algorithm.'''
x_a = timestep * H_p(p_initial)
p_a = - timestep * H_x(x_initial)
x_b = timestep * H_p(p_initial + 0.5 * p_a)
p_b = - timestep * H_x(x_initial + 0.5 * x_a)
x_c = timestep * H_p(p_initial + 0.5 * p_b)
p_c = - timestep * H_x(x_initial + 0.5 * x_b)
x_d = timestep * H_p(p_initial + p_c)
p_d = - timestep * H_x(x_initial + x_c)
x_final = x_initial + x_a / 6. + x_b / 3. + x_c / 3. + x_d / 6.
p_final = p_initial + p_a / 6. + p_b / 3. + p_c / 3. + p_d / 6.
return x_final, p_final
| 45.152542 | 77 | 0.647147 | 835 | 5,328 | 3.906587 | 0.214371 | 0.077253 | 0.055181 | 0.062538 | 0.502452 | 0.434703 | 0.409258 | 0.366953 | 0.326487 | 0.326487 | 0 | 0.021701 | 0.256194 | 5,328 | 117 | 78 | 45.538462 | 0.801413 | 0.389077 | 0 | 0.328358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.104478 | false | 0 | 0.044776 | 0 | 0.283582 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20df31b6f10759e65d52beef5fda7d1f46c80d54 | 1,592 | py | Python | python/climate_ae/data_generator/utils.py | kueddelmaier/latent-linear-adjustment-autoencoders | f180732695a6c2abd8a9ad9d8cfeed2f82f047bb | [
"MIT"
] | 3 | 2020-10-29T19:08:27.000Z | 2021-08-14T09:19:48.000Z | python/climate_ae/data_generator/utils.py | kueddelmaier/latent-linear-adjustment-autoencoders | f180732695a6c2abd8a9ad9d8cfeed2f82f047bb | [
"MIT"
] | 6 | 2020-11-13T19:01:07.000Z | 2022-01-04T09:34:05.000Z | python/climate_ae/data_generator/utils.py | kueddelmaier/latent-linear-adjustment-autoencoders | f180732695a6c2abd8a9ad9d8cfeed2f82f047bb | [
"MIT"
] | 1 | 2021-03-01T15:28:56.000Z | 2021-03-01T15:28:56.000Z | import numpy as np
import tensorflow as tf
def parse_dataset(example_proto, img_size_h, img_size_w, img_size_d, dim_anno1,
dim_anno2, dim_anno3, dtype_img=tf.float64):
features = {
'inputs': tf.io.FixedLenFeature(shape=[], dtype=tf.string),
'annotations': tf.io.FixedLenFeature(shape=[dim_anno1], dtype=tf.float32),
'psl_mean_ens': tf.io.FixedLenFeature(shape=[dim_anno2], dtype=tf.float32),
'temp_mean_ens': tf.io.FixedLenFeature(shape=[dim_anno3], dtype=tf.float32),
'year': tf.io.FixedLenFeature(shape=[], dtype=tf.float32),
'month': tf.io.FixedLenFeature(shape=[], dtype=tf.float32),
'day': tf.io.FixedLenFeature(shape=[], dtype=tf.float32)
}
parsed_features = tf.io.parse_single_example(example_proto, features=features)
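    # decode the raw image bytes and restore the (height, width, depth) layout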
image = tf.io.decode_raw(parsed_features["inputs"], dtype_img)
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [img_size_h, img_size_w, img_size_d])
annotations = parsed_features["annotations"]
psl = parsed_features["psl_mean_ens"]
temp = parsed_features["temp_mean_ens"]
year = parsed_features["year"]
month = parsed_features["month"]
day = parsed_features["day"]
return image, annotations, psl, temp, year, month, day
def climate_dataset(directory, filenames, height, width, depth, dim_anno1,
dim_anno2, dim_anno3, dtype):
dataset = tf.data.TFRecordDataset(filenames)
dataset = dataset.map(lambda x: parse_dataset(x, height, width, depth,
dim_anno1, dim_anno2, dim_anno3, dtype_img=dtype))
return dataset | 41.894737 | 84 | 0.704774 | 217 | 1,592 | 4.935484 | 0.258065 | 0.033613 | 0.124183 | 0.156863 | 0.385621 | 0.360411 | 0.331466 | 0.161531 | 0.128852 | 0.084034 | 0 | 0.021005 | 0.162688 | 1,592 | 38 | 85 | 41.894737 | 0.782446 | 0 | 0 | 0 | 0 | 0 | 0.067797 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.066667 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e282771bc57317c73491a2286791812cf8bb2b | 2,715 | py | Python | microsim/column_names.py | dabreegster/RAMP-UA | 04b7473aed441080ee10b6f68eb8b9135dac6879 | [
"MIT"
] | 10 | 2020-07-01T15:04:28.000Z | 2021-11-01T17:04:27.000Z | microsim/column_names.py | dabreegster/RAMP-UA | 04b7473aed441080ee10b6f68eb8b9135dac6879 | [
"MIT"
] | 229 | 2020-05-12T12:21:57.000Z | 2022-03-22T09:40:12.000Z | microsim/column_names.py | dabreegster/RAMP-UA | 04b7473aed441080ee10b6f68eb8b9135dac6879 | [
"MIT"
] | 10 | 2020-04-29T16:17:28.000Z | 2021-12-23T13:11:30.000Z | class ColumnNames:
"""Used to record standard dataframe column names used throughout"""
LOCATION_DANGER = "Danger" # Danger associated with a location
LOCATION_NAME = "Location_Name" # Name of a location
LOCATION_ID = "ID" # Unique ID for each location
# Define the different types of activities/locations that the model can represent
class Activities:
RETAIL = "Retail"
PRIMARY = "PrimarySchool"
SECONDARY = "SecondarySchool"
HOME = "Home"
WORK = "Work"
ALL = [RETAIL, PRIMARY, SECONDARY, HOME, WORK]
ACTIVITY_VENUES = "_Venues" # Venues an individual may visit. Appended to activity type, e.g. 'Retail_Venues'
ACTIVITY_FLOWS = "_Flows" # Flows to a venue for an individual. Appended to activity type, e.g. 'Retail_Flows'
ACTIVITY_RISK = "_Risk" # Risk associated with a particular activity for each individual. E.g. 'Retail_Risk'
ACTIVITY_DURATION = "_Duration" # Column to record proportion of the day that invividuals do the activity
ACTIVITY_DURATION_INITIAL = "_Duration_Initial" # Amount of time on the activity at the start (might change)
# Standard columns for time spent travelling in different modes
TRAVEL_CAR = "Car"
TRAVEL_BUS = "Bus"
TRAVEL_TRAIN = "Train"
TRAVEL_WALK = "Walk"
INDIVIDUAL_AGE = "DC1117EW_C_AGE" # Age column in the table of individuals
INDIVIDUAL_SEX = "DC1117EW_C_SEX" # Sex column in the table of individuals
INDIVIDUAL_ETH = "DC2101EW_C_ETHPUK11" # Ethnicity column in the table of individuals
# Columns for information about the disease. These are needed for estimating the disease status
# Disease status is one of the following:
class DiseaseStatuses:
SUSCEPTIBLE = 0
EXPOSED = 1
PRESYMPTOMATIC = 2
SYMPTOMATIC = 3
ASYMPTOMATIC = 4
RECOVERED = 5
DEAD = 6
ALL = [SUSCEPTIBLE, EXPOSED, PRESYMPTOMATIC, SYMPTOMATIC, ASYMPTOMATIC, RECOVERED, DEAD]
assert len(ALL) == 7
DISEASE_STATUS = "disease_status" # Which one it is
DISEASE_STATUS_CHANGED = "status_changed" # Whether it has changed between the current iteration and the last
DISEASE_PRESYMP = "presymp_days"
DISEASE_SYMP_DAYS = "symp_days"
DISEASE_EXPOSED_DAYS = "exposed_days"
#DAYS_WITH_STATUS = "Days_With_Status" # The number of days that have elapsed with this status
CURRENT_RISK = "current_risk" # This is the risk that people get when visiting locations.
# No longer update disease counts per MSOA etc. Not needed
#MSOA_CASES = "MSOA_Cases" # The number of cases per MSOA
#HID_CASES = "HID_Cases" # The number of cases in the individual's house | 46.810345 | 115 | 0.704236 | 353 | 2,715 | 5.260623 | 0.419263 | 0.035003 | 0.012924 | 0.025848 | 0.112547 | 0.08993 | 0.074313 | 0 | 0 | 0 | 0 | 0.010552 | 0.232044 | 2,715 | 58 | 116 | 46.810345 | 0.880096 | 0.492449 | 0 | 0 | 0 | 0 | 0.179392 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 1 | 0 | false | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e4504512f6f3017cd6e385cfabae07f81ee87a | 382 | py | Python | datasets/script/ds_cut_last.py | PoCInnovation/SmartShark | 2cf5eb32306fb5bd88972f44331322ae58d4bb2c | [
"MIT"
] | 26 | 2020-11-26T13:05:31.000Z | 2022-03-22T11:04:41.000Z | datasets/script/ds_cut_last.py | PoCFrance/SmartShark | 2cf5eb32306fb5bd88972f44331322ae58d4bb2c | [
"MIT"
] | 4 | 2020-09-26T16:30:47.000Z | 2022-03-06T18:02:52.000Z | datasets/script/ds_cut_last.py | PoCFrance/SmartShark | 2cf5eb32306fb5bd88972f44331322ae58d4bb2c | [
"MIT"
] | 9 | 2021-01-19T16:44:23.000Z | 2022-02-15T21:06:29.000Z | fichier = open("/run/media/Thytu/TOSHIBA EXT/PoC/Smartshark/DS/ds_benign_cleaned_div_3.csv", "r")
pos = 12 # number of leading comma-separated fields to keep (the rest are cut)
def flat_line(line, target):
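# Print only the portion of `line` before the target-th comma,
# i.e. keep the first `target` comma-separated fields.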
index = 0
pos = 0
for l in line:
pos += 1
if (l == ','):
index += 1
if index == target:
break
print(line[:pos - 1])
for line in fichier:
flat_line(line, pos)
| 21.222222 | 97 | 0.552356 | 57 | 382 | 3.596491 | 0.578947 | 0.102439 | 0.117073 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030418 | 0.311518 | 382 | 17 | 98 | 22.470588 | 0.749049 | 0.036649 | 0 | 0 | 0 | 0 | 0.207084 | 0.19891 | 0 | 0 | 0 | 0 | 0 | 1 | 0.071429 | false | 0 | 0 | 0 | 0.071429 | 0.071429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e53640a34afd90bd7994b855aba3b499daef58 | 835 | py | Python | Scripts/python/setup_k8s_thirdparty.py | SnowPhoenix0105/ToolSite | c5084010665434711867b1b5cd4915fe79ab2c7b | [
"MIT"
] | null | null | null | Scripts/python/setup_k8s_thirdparty.py | SnowPhoenix0105/ToolSite | c5084010665434711867b1b5cd4915fe79ab2c7b | [
"MIT"
] | 7 | 2021-08-28T09:27:39.000Z | 2021-09-26T15:35:13.000Z | Scripts/python/setup_k8s_thirdparty.py | SnowPhoenix0105/ToolSite | c5084010665434711867b1b5cd4915fe79ab2c7b | [
"MIT"
] | null | null | null |
from python.utils.cmd_exec import cmd_exec
import json
import logging
from python.utils.path import Path, pcat
from logging import DEBUG
from python.utils.log import config_logging
_logger = logging.getLogger(__name__)
def setup_k8s_thirdparty():
list_path = Path.SCRIPTS_CONFIG_K8S_THIRD_PARTY_LIST
_logger.debug(f"using thrid-party list at path: {list_path}")
with open(list_path, 'r', encoding='utf8') as f:
third_party_list = json.load(f)
for third_party_package in third_party_list:
name = third_party_package["name"]
cmd = third_party_package["cmd"]
desc = third_party_package["desc"]
_logger.info(f"deploying {name}: {desc}")
cmd_exec(cmd, False)
if __name__ == '__main__':
config_logging(__file__, console_level=logging.INFO)
setup_k8s_thirdparty()
| 29.821429 | 65 | 0.728144 | 119 | 835 | 4.697479 | 0.394958 | 0.125224 | 0.121646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005814 | 0.176048 | 835 | 27 | 66 | 30.925926 | 0.806686 | 0 | 0 | 0 | 0 | 0 | 0.109244 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.285714 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e5596dc72fd005174ca47cd9087713926d0058 | 1,506 | py | Python | utils/callbacks.py | YossiAsher/utils | c389d061378fca0b5495691c999f93adfa882faf | [
"MIT"
] | null | null | null | utils/callbacks.py | YossiAsher/utils | c389d061378fca0b5495691c999f93adfa882faf | [
"MIT"
] | null | null | null | utils/callbacks.py | YossiAsher/utils | c389d061378fca0b5495691c999f93adfa882faf | [
"MIT"
] | null | null | null | import os.path
import glob
import wandb
import numpy as np
from tensorflow.keras.callbacks import Callback
class ValLog(Callback):
def __init__(self, dataset=None, table="predictions", project="svg-attention6", run=""):
super().__init__()
self.dataset = dataset
self.table_name = table
self.run = wandb.init(project=project, job_type="inference", name=run)
def on_epoch_end(self, epoch, logs=None):
columns = ["epoch", "dataset", "file", "svg", "target", "prediction"]
predictions_table = wandb.Table(columns=columns)
for i in range(len(self.dataset)):
epoc_path_index = os.path.join(self.dataset.epoc_path.name, str(i))
data_path = os.path.join(epoc_path_index, 'data.npz')
loaded = np.load(data_path)
X = loaded['X']
y = loaded['y']
predictions = self.model.predict(X)
for index, x in enumerate(X):
target = self.dataset.classes[y[index]]
prediction = self.dataset.classes[np.argmax(predictions[index])]
png_file = glob.glob(f'{epoc_path_index}/{index}/**/*.png', recursive=True)[0]
file = png_file[png_file.index(png_file.split('/')[-2]):]
row = [epoch, self.dataset.name, file, wandb.Image(png_file), target, prediction]
predictions_table.add_data(*row)
self.run.log({self.table_name: predictions_table})
self.dataset.clean_epoc_path()
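# Illustrative usage sketch (not part of the original module); the dataset object,
# model and project/run names below are placeholders.
#
# val_log = ValLog(dataset=val_dataset, table="predictions", project="my-project", run="baseline")
# model.fit(train_data, epochs=5, callbacks=[val_log])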
| 40.702703 | 97 | 0.616866 | 191 | 1,506 | 4.691099 | 0.371728 | 0.098214 | 0.043527 | 0.071429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002639 | 0.24502 | 1,506 | 36 | 98 | 41.833333 | 0.7854 | 0 | 0 | 0 | 0 | 0 | 0.075697 | 0.022576 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.166667 | 0 | 0.266667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e603c9bd357a4fc46fca97e132ad4376a93633 | 12,680 | py | Python | cirrocumulus/anndata_util.py | PfizerRD/cirrocumulus | c7ce0c8c3c246282046e6d373d60442af55d3f09 | [
"BSD-3-Clause"
] | null | null | null | cirrocumulus/anndata_util.py | PfizerRD/cirrocumulus | c7ce0c8c3c246282046e6d373d60442af55d3f09 | [
"BSD-3-Clause"
] | null | null | null | cirrocumulus/anndata_util.py | PfizerRD/cirrocumulus | c7ce0c8c3c246282046e6d373d60442af55d3f09 | [
"BSD-3-Clause"
] | 1 | 2022-02-06T23:08:26.000Z | 2022-02-06T23:08:26.000Z | import anndata
import numpy as np
import pandas as pd
DATA_TYPE_MODULE = 'module'
DATA_TYPE_UNS_KEY = 'data_type'
ADATA_MODULE_UNS_KEY = 'anndata_module'
def get_base(adata):
base = None
if 'log1p' in adata.uns and adata.uns['log1p']['base'] is not None:
base = adata.uns['log1p']['base']
return base
def adata_to_df(adata):
df = pd.DataFrame(adata.X, index=adata.obs.index, columns=adata.var.index)
for key in adata.layers.keys():
df2 = pd.DataFrame(adata.layers[key], index=adata.obs.index.astype(str) + '-{}'.format(key),
columns=adata.var.index)
df = pd.concat((df, df2), axis=0)
df = df.T.join(adata.var)
df.index.name = 'id'
return df.reset_index()
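# Illustrative usage sketch (not part of the original module): a tiny in-memory
# AnnData showing the wide layout produced by adata_to_df (one row per var, one
# column per obs id plus "<obs id>-<layer>" columns and the var annotations).
def _example_adata_to_df():
    adata = anndata.AnnData(
        X=np.arange(6, dtype=np.float32).reshape(2, 3),
        obs=pd.DataFrame(index=["cell1", "cell2"]),
        var=pd.DataFrame({"symbol": ["A", "B", "C"]}, index=["g1", "g2", "g3"]),
        layers={"counts": np.ones((2, 3), dtype=np.float32)},
    )
    return adata_to_df(adata)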
def get_scanpy_marker_keys(dataset):
marker_keys = []
try:
dataset.uns # dataset can be AnnData or zarr group
except AttributeError:
return marker_keys
from collections.abc import Mapping
for key in dataset.uns.keys():
rank_genes_groups = dataset.uns[key]
if isinstance(rank_genes_groups, Mapping) and 'names' in rank_genes_groups and (
'pvals' in rank_genes_groups or 'pvals_adj' in rank_genes_groups or 'scores' in rank_genes_groups) and len(
rank_genes_groups['names'][0]) > 0 and not isinstance(rank_genes_groups['names'][0][0], bytes):
marker_keys.append(key)
return marker_keys
def get_pegasus_marker_keys(dataset):
marker_keys = []
try:
dataset.varm # dataset can be AnnData or zarr group
except AttributeError:
return marker_keys
for key in dataset.varm.keys():
d = dataset.varm[key]
if isinstance(d, np.recarray):
try:
''.join(d.dtype.names).index('log2Mean')
marker_keys.append(key)
except ValueError:
pass
return marker_keys
def obs_stats(adata, columns):
df = adata.obs[columns]
# variables on columns, stats on rows, transpose so that stats are on columns
return df.agg(['min', 'max', 'sum', 'mean']).T
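# Illustrative usage sketch (not part of the original module): obs_stats expects
# numeric obs columns and returns one row per column with min/max/sum/mean.
def _example_obs_stats():
    adata = anndata.AnnData(
        X=np.zeros((3, 1), dtype=np.float32),
        obs=pd.DataFrame({"n_counts": [100.0, 200.0, 300.0],
                          "percent_mito": [0.1, 0.2, 0.3]},
                         index=["c1", "c2", "c3"]),
    )
    return obs_stats(adata, ["n_counts", "percent_mito"])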
def X_stats(adata):
X = adata.X
return pd.DataFrame(
data={'min': X.min(axis=0).toarray().flatten(),
'max': X.max(axis=0).toarray().flatten(),
'sum': X.sum(axis=0).flatten(),
'numExpressed': X.getnnz(axis=0),
'mean': X.mean(axis=0)}, index=adata.var.index)
def dataset_schema(dataset, n_features=10):
""" Gets dataset schema.
Returns
schema dict. Example:
{"version":"1.0.0",
"categoryOrder":{
"louvain":["0","1","2","3","4","5","6","7"],
"leiden":["0","1","2","3","4","5","6","7"]},
"var":["TNFRSF4","CPSF3L","ATAD3C"],
"obs":["percent_mito","n_counts"],
"obsCat":["louvain","leiden"],
"shape":[2638,1838],
"embeddings":[{"name":"X_pca","dimensions":3},{"name":"X_pca","dimensions":2},{"name":"X_umap","dimensions":2}]
}
"""
obs_cat = []
obs = []
marker_results = []
de_results_format = 'records'
prior_marker_results = dataset.uns.get('markers', [])
if isinstance(prior_marker_results, str):
import json
prior_marker_results = json.loads(prior_marker_results)
marker_results += prior_marker_results
de_results = [] # array of dicts containing params logfoldchanges, pvals_adj, scores, names
category_to_order = {}
embeddings = []
field_to_value_to_color = dict() # field -> value -> color
for scanpy_marker_key in get_scanpy_marker_keys(dataset):
rank_genes_groups = dataset.uns[scanpy_marker_key]
has_fc = 'logfoldchanges' in rank_genes_groups
min_fold_change = 1
params = rank_genes_groups['params']
if not isinstance(params, dict):
from anndata._io.zarr import read_attribute
params = {k: read_attribute(params[k]) for k in params.keys()}
# pts and pts_rest in later scanpy versions
rank_genes_groups_keys = list(rank_genes_groups.keys())
for k in ['params', 'names']:
if k in rank_genes_groups_keys:
rank_genes_groups_keys.remove(k)
if 'pvals' in rank_genes_groups_keys and 'pvals_adj' in rank_genes_groups_keys:
rank_genes_groups_keys.remove('pvals')
category = '{} ({})'.format(params['groupby'], scanpy_marker_key)
de_result_name = category
de_result_df = None
group_names = rank_genes_groups['names'].dtype.names
de_result = dict(id='cirro-{}'.format(scanpy_marker_key),
type='de',
readonly=True,
groups=group_names,
fields=rank_genes_groups_keys,
name=de_result_name)
for group_name in group_names:
group_df = pd.DataFrame(index=rank_genes_groups['names'][group_name][...])
group_df = group_df[group_df.index != 'nan']
for rank_genes_groups_key in rank_genes_groups_keys:
values = rank_genes_groups[rank_genes_groups_key][group_name][...]
column_name = '{}:{}'.format(group_name, rank_genes_groups_key)
group_df[column_name] = values
if de_result_df is None:
de_result_df = group_df
else:
de_result_df = de_result_df.join(group_df, how='outer')
if n_features > 0:
markers_df = group_df
if has_fc:
markers_df = group_df[group_df['{}:logfoldchanges'.format(group_name)] > min_fold_change]
if len(markers_df) > 0:
marker_results.append(
dict(category=de_result_name, name=str(group_name),
features=markers_df.index[:n_features]))
if de_results_format == 'records':
de_result_data = de_result_df.reset_index().to_dict(orient='records')
else:
de_result_data = dict(index=de_result_df.index[...])
for c in de_result_df:
de_result_data[c] = de_result_df[c]
de_result['params'] = params
de_result['data'] = de_result_data
de_results.append(de_result)
for pg_marker_key in get_pegasus_marker_keys(dataset):
de_res = dataset.varm[pg_marker_key]
key_name = pg_marker_key
if pg_marker_key.startswith('rank_genes_'):
key_name = pg_marker_key[len('rank_genes_'):]
names = de_res.dtype.names
field_names = set() # e.g. 1:auroc
group_names = set()
for name in names:
index = name.rindex(':')
field_name = name[index + 1:]
group_name = name[:index]
field_names.add(field_name)
group_names.add(group_name)
group_names = list(group_names)
field_names = list(field_names)
de_result_df = pd.DataFrame(data=de_res, index=dataset.var.index)
de_result_df.index.name = 'index'
if de_results_format == 'records':
de_result_data = de_result_df.reset_index().to_dict(orient='records')
else:
de_result_data = dict(index=de_result_df.index)
for c in de_res:
de_result_data[c] = de_result_df[c]
de_result = dict(id='cirro-{}'.format(pg_marker_key), type='de', name=key_name,
color='log2FC' if 'log2FC' in field_names else field_names[0],
size='mwu_qval' if 'mwu_qval' in field_names else field_names[0],
groups=group_names,
fields=field_names)
de_result['data'] = de_result_data
de_results.append(de_result)
if n_features > 0:
field_use = None
for field in ['mwu_qval', 'auroc', 't_qval']:
if field in field_names:
field_use = field
break
if field_use is not None:
field_ascending = field_use != 'auroc'
for group_name in group_names:
fc_column = '{}:log2FC'.format(group_name)
name = '{}:{}'.format(group_name, field_use)
idx_up = de_result_df[fc_column] > 0
df_up = de_result_df.loc[idx_up].sort_values(by=[name, fc_column],
ascending=[field_ascending, False])
features = df_up[:n_features].index.values
marker_results.append(dict(category=key_name, name=str(group_name), features=features))
categories_node = dataset.obs['__categories'] if '__categories' in dataset.obs else None
for key in dataset.obs.keys():
if categories_node is not None and (key == '__categories' or key == 'index'):
continue
val = dataset.obs[key]
if categories_node is not None and key in categories_node:
categories = categories_node[key][...]
ordered = categories_node[key].attrs.get('ordered', False)
val = pd.Categorical.from_codes(val[...], categories, ordered=ordered)
if pd.api.types.is_categorical_dtype(val) or pd.api.types.is_bool_dtype(
val) or pd.api.types.is_object_dtype(val):
obs_cat.append(key)
else:
obs.append(key)
if pd.api.types.is_categorical_dtype(val):
categories = val.cat.categories
if len(categories) < 100: # preserve order
category_to_order[key] = dataset.obs[key].cat.categories
color_field = key + '_colors'
if color_field in dataset.uns:
colors = dataset.uns[color_field][...]
if len(categories) == len(colors):
color_map = dict()
for j in range(len(categories)):
color_map[str(categories[j])] = colors[j]
field_to_value_to_color[key] = color_map
# spatial_node = adata.uns['spatial'] if 'spatial' in adata.uns else None
#
# if spatial_node is not None:
# spatial_node_keys = list(spatial_node.keys()) # list of datasets
# if len(spatial_node_keys) == 1:
# spatial_node = spatial_node[spatial_node_keys[0]]
images_node = dataset.uns.get('images',
[]) # list of {type:image or meta_image, name:image name, image:path to image, spot_diameter:Number}
image_names = list(map(lambda x: x['name'], images_node))
layers = []
try:
dataset.layers # dataset can be AnnData or zarr group
layers = list(dataset.layers.keys())
except AttributeError:
pass
for key in dataset.obsm.keys():
dim = dataset.obsm[key].shape[1]
if 1 < dim <= 3:
embedding = dict(name=key, dimensions=dim)
if dim == 2:
try:
image_index = image_names.index(key)
embedding['spatial'] = images_node[image_index]
except ValueError:
pass
embeddings.append(embedding)
meta_images = dataset.uns.get('meta_images', [])
for meta_image in meta_images:
embeddings.append(meta_image)
schema_dict = {'version': '1.0.0'}
schema_dict['results'] = de_results
schema_dict['colors'] = field_to_value_to_color
schema_dict['markers'] = marker_results
schema_dict['embeddings'] = embeddings
schema_dict['categoryOrder'] = category_to_order
schema_dict['layers'] = layers
var_df = dataset.var
if not isinstance(var_df, pd.DataFrame):
from anndata._io.zarr import read_attribute
var_df = read_attribute(dataset.var)
var_df.index.name = 'id'
schema_dict['var'] = var_df.reset_index().to_dict(orient='records')
modules_df = None
if ADATA_MODULE_UNS_KEY in dataset.uns and isinstance(dataset.uns[ADATA_MODULE_UNS_KEY], anndata.AnnData):
modules_df = dataset.uns[ADATA_MODULE_UNS_KEY].var
# if not isinstance(module_var, pd.DataFrame):
# from anndata._io.zarr import read_attribute
# module_var = read_attribute(module_var)
if modules_df is not None:
modules_df.index.name = 'id'
schema_dict['modules'] = modules_df.reset_index().to_dict(orient='records')
schema_dict['obs'] = obs
schema_dict['obsCat'] = obs_cat
shape = dataset.shape if isinstance(dataset, anndata.AnnData) else dataset.X.attrs.shape
schema_dict['shape'] = shape
return schema_dict
| 41.168831 | 135 | 0.596688 | 1,621 | 12,680 | 4.400987 | 0.143122 | 0.038127 | 0.054668 | 0.021447 | 0.286655 | 0.207597 | 0.162321 | 0.121391 | 0.101205 | 0.088029 | 0 | 0.00833 | 0.289905 | 12,680 | 307 | 136 | 41.302932 | 0.783985 | 0.105205 | 0 | 0.174797 | 0 | 0 | 0.05272 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028455 | false | 0.012195 | 0.028455 | 0 | 0.093496 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20e7676617875b614527d964e3fa868094f4c605 | 4,450 | py | Python | python-code/dlib-learning/face_reco_from_camera.py | juxiangwu/image-processing | c644ef3386973b2b983c6b6b08f15dc8d52cd39f | [
"Apache-2.0"
] | 13 | 2018-09-07T02:29:07.000Z | 2021-06-18T08:40:09.000Z | python-code/dlib-learning/face_reco_from_camera.py | juxiangwu/image-processing | c644ef3386973b2b983c6b6b08f15dc8d52cd39f | [
"Apache-2.0"
] | null | null | null | python-code/dlib-learning/face_reco_from_camera.py | juxiangwu/image-processing | c644ef3386973b2b983c6b6b08f15dc8d52cd39f | [
"Apache-2.0"
] | 4 | 2019-06-20T00:09:39.000Z | 2021-07-15T10:14:36.000Z | # created at 2018-05-11
# updated at 2018-08-23
# support multi-faces now
# By coneypo
# Blog: http://www.cnblogs.com/AdaminXie
# GitHub: https://github.com/coneypo/Dlib_face_recognition_from_camera
import dlib # dlib, the face recognition library
import numpy as np # numpy, for numerical data processing
import cv2 # OpenCV, for image processing
import pandas as pd # pandas, for data handling
# face recognition model, the object maps human faces into 128D vectors
facerec = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")
# Compute the Euclidean distance between two vectors
def return_euclidean_distance(feature_1, feature_2):
feature_1 = np.array(feature_1)
feature_2 = np.array(feature_2)
dist = np.sqrt(np.sum(np.square(feature_1 - feature_2)))
print("e_distance: ", dist)
if dist > 0.4:
return "diff"
else:
return "same"
# Process the csv that stores all known face features
path_features_known_csv = "F:/code/python/P_dlib_face_reco/data/features_all.csv"
# path_features_known_csv= "/media/con/data/code/python/P_dlib_face_reco/data/csvs/features_all.csv"
csv_rd = pd.read_csv(path_features_known_csv, header=None)
# Number of face feature rows stored in the csv
# print(csv_rd.shape[0])
# Array used to hold the features of all enrolled faces
features_known_arr = []
for i in range(csv_rd.shape[0]):
features_someone_arr = []
for j in range(0, len(csv_rd.iloc[i, :])):
features_someone_arr.append(csv_rd.iloc[i, :][j])
# print(features_someone_arr)
features_known_arr.append(features_someone_arr)
print("Faces in Database:", len(features_known_arr))
# Dlib detector and shape predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('../resources/models/dlib/shape_predictor_68_face_landmarks.dat')
# Create the cv2 camera capture object
cap = cv2.VideoCapture(0)
# cap.set(propId, value)
# Set video parameters: propId is the property to set, value is the value to use
cap.set(3, 480)
# Return the 128D features for every face found in one image
def get_128d_features(img_gray):
dets = detector(img_gray, 1)
if len(dets) != 0:
face_des = []
for i in range(len(dets)):
shape = predictor(img_gray, dets[i])
face_des.append(facerec.compute_face_descriptor(img_gray, shape))
else:
face_des = []
return face_des
# cap.isOpened() returns true/false to check whether initialization succeeded
while cap.isOpened():
# cap.read()
# Returns two values:
# a boolean true/false indicating whether the frame was read successfully / whether the video has ended
# the image object, a three-dimensional matrix
flag, im_rd = cap.read()
# Wait 1 ms per frame; a delay of 0 would read a static frame
kk = cv2.waitKey(1)
# Convert to grayscale
img_gray = cv2.cvtColor(im_rd, cv2.COLOR_RGB2GRAY)
# Number of detected faces (dets)
dets = detector(img_gray, 0)
# Font used for the text drawn below
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(im_rd, "q: quit", (20, 400), font, 0.8, (84, 255, 159), 1, cv2.LINE_AA)
# Two lists storing face names and positions
# list 1 (dets): store the name of faces Jack unknown unknown Mary
# list 2 (pos_namelist): store the positions of faces 12,1 1,21 1,13 31,1
# Store the names of all detected faces
pos_namelist = []
name_namelist = []
if len(dets) != 0:
# Faces were detected
# Extract the features of every face in the current frame and store them in features_cap_arr
features_cap_arr = []
for i in range(len(dets)):
shape = predictor(im_rd, dets[i])
features_cap_arr.append(facerec.compute_face_descriptor(im_rd, shape))
# Iterate over all faces captured in the frame
for k in range(len(dets)):
# Keep the person's name below the bounding box
# Work out the coordinates for the name label
# Default everyone to "unknown" until a match is found
name_namelist.append("unknown")
# Coordinates of the name label for each captured face
pos_namelist.append(tuple([dets[k].left(), int(dets[k].bottom() + (dets[k].bottom() - dets[k].top()) / 4)]))
# For each detected face, iterate over all stored face features
for i in range(len(features_known_arr)):
# Compare this face against every stored face
compare = return_euclidean_distance(features_cap_arr[k], features_known_arr[i])
if compare == "same": # 找到了相似脸
name_namelist[k] = "person_" + str(i)
# Bounding boxes
for kk, d in enumerate(dets):
# Draw the bounding box
cv2.rectangle(im_rd, tuple([d.left(), d.top()]), tuple([d.right(), d.bottom()]), (0, 255, 255), 2)
# Write the face names
for i in range(len(dets)):
cv2.putText(im_rd, name_namelist[i], pos_namelist[i], font, 0.8, (0, 255, 255), 1, cv2.LINE_AA)
print("Name list:", name_namelist, "\n")
cv2.putText(im_rd, "faces: " + str(len(dets)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
# Press q to quit
if kk == ord('q'):
break
# Show the window
cv2.imshow("camera", im_rd)
# Release the camera
cap.release()
# Destroy the created windows
cv2.destroyAllWindows() | 28.164557 | 120 | 0.634382 | 611 | 4,450 | 4.418985 | 0.368249 | 0.013333 | 0.02963 | 0.02037 | 0.123704 | 0.05037 | 0.043704 | 0.023704 | 0 | 0 | 0 | 0.038746 | 0.240225 | 4,450 | 158 | 121 | 28.164557 | 0.759834 | 0.26382 | 0 | 0.130435 | 0 | 0 | 0.076016 | 0.048402 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028986 | false | 0 | 0.057971 | 0 | 0.130435 | 0.043478 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20eb50420f20c4e8a8059ec57505d6d0d5ad5fae | 1,602 | py | Python | GreyNsights/utils.py | kamathhrishi/GreyNSights | 9a79b8ed04ccb4a9dd538c425ed6da00ebd1b00f | [
"MIT"
] | 19 | 2021-02-24T12:28:04.000Z | 2021-10-06T11:55:46.000Z | GreyNsights/utils.py | kamathhrishi/GreyNSights | 9a79b8ed04ccb4a9dd538c425ed6da00ebd1b00f | [
"MIT"
] | 2 | 2021-08-11T01:25:14.000Z | 2021-08-11T01:26:32.000Z | GreyNsights/utils.py | kamathhrishi/GreyNSights | 9a79b8ed04ccb4a9dd538c425ed6da00ebd1b00f | [
"MIT"
] | null | null | null | # python dependencies
import codecs
import pickle
import struct
def pickle_string_to_obj(obj):
return pickle.loads(codecs.decode(obj, "base64"))
def get_encoded_obj(obj):
return codecs.encode(pickle.dumps(obj), "base64").decode()
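# Illustrative round trip (not part of the original module). Note the asymmetry:
# get_encoded_obj returns a str, while pickle_string_to_obj feeds its argument to
# codecs.decode(..., "base64"), which expects a bytes-like object, so the string
# is re-encoded to bytes here.
def _example_pickle_round_trip():
    original = {"query": "mean", "column": "age"}
    encoded = get_encoded_obj(original)  # base64 text (str)
    restored = pickle_string_to_obj(encoded.encode("ascii"))
    assert restored == original
    return restored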
def log_message(msg_type: str, message: str):
"""The default style of log messages displayed on DataOwner's screen
args[str]: The type of message
message[str]: The message to be displayed
"""
print("Logger", "<" + msg_type + ">", ":", message)
def send_msg(sock, msg):
# Prefix each message with a 4-byte length (network byte order)
msg = struct.pack(">I", len(msg)) + msg
sock.sendall(msg)
def recv_msg(sock):
# Read message length and unpack it into an integer
raw_msglen = recvall(sock, 4)
if not raw_msglen:
return None
msglen = struct.unpack(">I", raw_msglen)[0]
# Read the message data
return recvall(sock, msglen)
def recvall(sock, n):
# Helper function to recv n bytes or return None if EOF is hit
data = bytearray()
print("data:", data)
print("n:", n)
while len(data) < n:
print("DIFFERENCE: ", n - len(data))
print("DATA TYPE: ", type(n - len(data)))
packet = sock.recv(n - len(data))
if not packet:
return None
data.extend(packet)
return data
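# Illustrative sketch (not part of the original module): the length-prefixed
# framing can be exercised locally with a socket pair, no network needed.
def _example_framing_round_trip():
    import socket
    left, right = socket.socketpair()
    try:
        send_msg(left, b"hello world")
        assert recv_msg(right) == b"hello world"
    finally:
        left.close()
        right.close()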
BYTE_SIZE = 12000
def encode_msg(msg):
byte_array = []
data = msg
for index in range(0, len(data), BYTE_SIZE):
byte_array.append(data[index : index + BYTE_SIZE])
# print("CHECK",check(byte_array,msg))
return byte_array
| 22.56338 | 72 | 0.637328 | 230 | 1,602 | 4.347826 | 0.391304 | 0.035 | 0.024 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010682 | 0.240325 | 1,602 | 70 | 73 | 22.885714 | 0.811011 | 0.244694 | 0 | 0.054054 | 0 | 0 | 0.046414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.189189 | false | 0 | 0.081081 | 0.054054 | 0.459459 | 0.135135 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20ed1b39d69f2ba4a229353cc72ed388d06f0047 | 4,568 | py | Python | scripts/location_map_report.py | xperylabhub/iLEAPP | fd1b301bf2094387f51ccdbd10ed233ce9abd687 | [
"MIT"
] | null | null | null | scripts/location_map_report.py | xperylabhub/iLEAPP | fd1b301bf2094387f51ccdbd10ed233ce9abd687 | [
"MIT"
] | null | null | null | scripts/location_map_report.py | xperylabhub/iLEAPP | fd1b301bf2094387f51ccdbd10ed233ce9abd687 | [
"MIT"
] | null | null | null | # coding: utf-8
#Import the necessary Python modules
import pandas as pd
import folium
from folium.plugins import TimestampedGeoJson
from shapely.geometry import Point
import os
from datetime import datetime
from branca.element import Template, MacroElement
import html
from scripts.location_map_constants import iLEAPP_KMLs, defaultShadowUrl, defaultIconUrl, colors, legend_tag, legend_title_tag, legend_div, template_part1, template_part2
import sqlite3
from scripts.artifact_report import ArtifactHtmlReport
#Helpers
def htmlencode(string):
return string.encode(encoding='ascii',errors='xmlcharrefreplace').decode('utf-8')
def geodfToFeatures(df, f, props):
coords = []
times = []
for i,row in df[df.Description.str.contains(f)].iterrows():
coords.append(
[row.Point.x,row.Point.y]
)
times.append(datetime.strptime(row.Name,'%Y-%m-%d %H:%M:%S').isoformat())
return {
'type': 'Feature',
'geometry': {
'type': props[f]['fType'],
'coordinates': coords,
},
'properties': {
'times': times,
'style': {'color': props[f]['color']},
'icon': props[f]['icon'],
'iconstyle': {
'iconUrl': props[f]['iconUrl'],
'shadowUrl': props[f]['shadowUrl'],
'iconSize': [25, 41],
'iconAnchor': [12, 41],
'popupAnchor': [1, -34],
'shadowSize': [41, 41],
'radius': 5,
},
},
}
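# Illustrative usage sketch (not part of the original script): the minimal frame
# and per-activity property dict that geodfToFeatures expects. The 'ZRT-example'
# label, colour and coordinates below are placeholders for illustration only.
def _example_geojson_feature():
    example_df = pd.DataFrame({
        'Name': ['2020-01-01 10:00:00', '2020-01-01 10:05:00'],
        'Description': ['ZRT-example', 'ZRT-example'],
        'Point': [Point(-0.1276, 51.5074, 0.0), Point(-0.1280, 51.5080, 0.0)],
    })
    props = {'ZRT-example': {'fType': 'LineString',
                             'color': 'red',
                             'icon': 'marker',
                             'iconUrl': defaultIconUrl.format('red'),
                             'shadowUrl': defaultShadowUrl}}
    return geodfToFeatures(example_df, 'ZRT-example', props)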
def generate_location_map(reportfolderbase,legend_title):
KML_path = os.path.join(reportfolderbase,iLEAPP_KMLs)
if not os.path.isdir(KML_path) or not os.listdir(KML_path):
return
location_path = os.path.join(reportfolderbase, 'LOCATIONS')
os.makedirs(location_path,exist_ok=True)
db = sqlite3.connect(os.path.join(KML_path,"_latlong.db"))
df = pd.read_sql_query("SELECT key as Name, Activity as Description, latitude, longitude FROM data ;", db)
df["Point"] = df.apply(lambda row: Point(float(row['longitude']),float(row['latitude']),.0), axis=1)
#sorting is needed for correct display
df.sort_values(by=['Name'],inplace=True)
#Parse geo data and add to Folium Map
data_names = df[~df.Description.str.contains('Photos')].Description.unique()
featuresProp = {}
for c,d in zip(colors, data_names):
descFilter = d
if 'ZRT' in d:
fType = 'LineString'
icon = 'marker'
iconUrl = defaultIconUrl.format(c)
shadowUrl = defaultShadowUrl
else:
fType = 'MultiPoint'
icon = 'circle'
iconUrl = ''
shadowUrl = ''
color = c
featuresProp[d] = {
'fType': fType,
'color': c,
'icon': icon,
'iconUrl': iconUrl,
'shadowUrl': defaultShadowUrl,
}
location_map = folium.Map([df.iloc[0].Point.y,df.iloc[0].Point.x], prefer_canvas=True, zoom_start = 6)
bounds = (df[~df.Description.str.contains('Photos')]['longitude'].min(),
df[~df.Description.str.contains('Photos')]['latitude'].min(),
df[~df.Description.str.contains('Photos')]['longitude'].max(),
df[~df.Description.str.contains('Photos')]['latitude'].max(),
)
location_map.fit_bounds([
(bounds[1],bounds[0]),
(bounds[3],bounds[2]),
]
)
tsGeo = TimestampedGeoJson({
'type': 'FeatureCollection',
'features': [
geodfToFeatures(df, f, featuresProp) for f in data_names
]
}, period="PT1M", duration="PT1H", loop=False, transition_time = 50, time_slider_drag_update=True, add_last_point=True, max_speed=200).add_to(location_map)
#legend
legend = '\n'.join([ legend_tag.format(featuresProp[f]['color'], htmlencode(f)) for f in data_names])
template = '\n'.join([template_part1, legend_title_tag.format(htmlencode(legend_title)), legend_div.format(legend), template_part2])
macro = MacroElement()
macro._template = Template(template)
location_map.get_root().add_child(macro)
location_map.save(os.path.join(location_path,"Locations_Map.html"))
report = ArtifactHtmlReport('Locations Map')
report.start_artifact_report(location_path, 'Locations Map', 'Map plotting all locations')
report.write_raw_html(open(os.path.join(location_path,"Locations_Map.html")).read())
report.end_artifact_report()
| 35.968504 | 170 | 0.613835 | 526 | 4,568 | 5.205323 | 0.385932 | 0.028123 | 0.032871 | 0.039445 | 0.14317 | 0.100804 | 0.089116 | 0.027757 | 0 | 0 | 0 | 0.011601 | 0.245184 | 4,568 | 126 | 171 | 36.253968 | 0.782483 | 0.029335 | 0 | 0 | 0 | 0 | 0.133725 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.029703 | false | 0 | 0.108911 | 0.009901 | 0.168317 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20eeefc46871bc4a82ccda95a4d09005d4444a71 | 1,096 | py | Python | src/BL/test/test_RandomNumber.py | yukiYamada/ThaGame | 4f206303d60b5760452a7eab8700626657f3e39e | [
"MIT"
] | null | null | null | src/BL/test/test_RandomNumber.py | yukiYamada/ThaGame | 4f206303d60b5760452a7eab8700626657f3e39e | [
"MIT"
] | null | null | null | src/BL/test/test_RandomNumber.py | yukiYamada/ThaGame | 4f206303d60b5760452a7eab8700626657f3e39e | [
"MIT"
] | null | null | null | # third party modules
import pytest
# user modules
from BL_main.RandomNumber import Number, Numbers, InvalidArgumentExceptionOfNumber
def test_NumberClass_InvalidException_underNumber():
'''
Test arguments below the valid range.
'''
with pytest.raises(InvalidArgumentExceptionOfNumber):
Number.create(1)
Number.create(2)
def test_NumberClass_InvalidException_overNumber():
'''
Test arguments above the valid range.
'''
with pytest.raises(InvalidArgumentExceptionOfNumber):
Number.create(100)
Number.create(99)
def test_NumberClass_CreateTargetNumber():
'''
Test NumberClass initialize.
'''
actual = Number.create(3)
assert actual.value == 3
actual2 = Number.create(3)
assert actual == actual2
actual3 = Number.create(4)
assert actual != actual3
def test_NumbersClass():
'''
Test NumbersClass initialize
'''
numbers = Numbers.create()
count = 0
while(not numbers.EOF):
count += 1
numbers.pop()
# Can get 98 values
actual = 98
assert actual == count
| 24.355556 | 82 | 0.667883 | 110 | 1,096 | 6.563636 | 0.445455 | 0.116343 | 0.074792 | 0.094183 | 0.252078 | 0.182825 | 0.182825 | 0 | 0 | 0 | 0 | 0.02515 | 0.238139 | 1,096 | 45 | 83 | 24.355556 | 0.839521 | 0.156022 | 0 | 0.08 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16 | 1 | 0.16 | false | 0 | 0.08 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20f09232bbd39f5341bb954d6c3dd267beb0e85a | 10,058 | py | Python | venv/lib/python3.6/site-packages/ansible/module_utils/facts/hardware/aix.py | usegalaxy-no/usegalaxy | 75dad095769fe918eb39677f2c887e681a747f3a | [
"MIT"
] | 17 | 2017-06-07T23:15:01.000Z | 2021-08-30T14:32:36.000Z | ansible/ansible/module_utils/facts/hardware/aix.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 12 | 2020-02-21T07:24:52.000Z | 2020-04-14T09:54:32.000Z | ansible/ansible/module_utils/facts/hardware/aix.py | SergeyCherepanov/ansible | 875711cd2fd6b783c812241c2ed7a954bf6f670f | [
"MIT"
] | 3 | 2018-05-26T21:31:22.000Z | 2019-09-28T17:00:45.000Z | # This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts.utils import get_mount_size
class AIXHardware(Hardware):
"""
AIX-specific subclass of Hardware. Defines memory and CPU facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
"""
platform = 'AIX'
def populate(self, collected_facts=None):
hardware_facts = {}
cpu_facts = self.get_cpu_facts()
memory_facts = self.get_memory_facts()
dmi_facts = self.get_dmi_facts()
vgs_facts = self.get_vgs_facts()
mount_facts = self.get_mount_facts()
devices_facts = self.get_device_facts()
hardware_facts.update(cpu_facts)
hardware_facts.update(memory_facts)
hardware_facts.update(dmi_facts)
hardware_facts.update(vgs_facts)
hardware_facts.update(mount_facts)
hardware_facts.update(devices_facts)
return hardware_facts
def get_cpu_facts(self):
cpu_facts = {}
cpu_facts['processor'] = []
rc, out, err = self.module.run_command("/usr/sbin/lsdev -Cc processor")
if out:
i = 0
for line in out.splitlines():
if 'Available' in line:
if i == 0:
data = line.split(' ')
cpudev = data[0]
i += 1
cpu_facts['processor_count'] = int(i)
rc, out, err = self.module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a type")
data = out.split(' ')
cpu_facts['processor'] = data[1]
rc, out, err = self.module.run_command("/usr/sbin/lsattr -El " + cpudev + " -a smt_threads")
if out:
data = out.split(' ')
cpu_facts['processor_cores'] = int(data[1])
return cpu_facts
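# Example shape of the dict returned above (illustrative only; actual values
# depend on the host, here a hypothetical POWER8 LPAR reporting four SMT threads):
#   {'processor_count': 4, 'processor': 'PowerPC_POWER8', 'processor_cores': 4}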
def get_memory_facts(self):
memory_facts = {}
pagesize = 4096
rc, out, err = self.module.run_command("/usr/bin/vmstat -v")
for line in out.splitlines():
data = line.split()
if 'memory pages' in line:
pagecount = int(data[0])
if 'free pages' in line:
freecount = int(data[0])
memory_facts['memtotal_mb'] = pagesize * pagecount // 1024 // 1024
memory_facts['memfree_mb'] = pagesize * freecount // 1024 // 1024
# Get swap usage. `lsps -s` output looks like:
# Total Paging Space   Percent Used
#       512MB               1%
#
rc, out, err = self.module.run_command("/usr/sbin/lsps -s")
if out:
lines = out.splitlines()
data = lines[1].split()
swaptotal_mb = int(data[0].rstrip('MB'))
percused = int(data[1].rstrip('%'))
memory_facts['swaptotal_mb'] = swaptotal_mb
memory_facts['swapfree_mb'] = int(swaptotal_mb * (100 - percused) / 100)
return memory_facts
def get_dmi_facts(self):
dmi_facts = {}
rc, out, err = self.module.run_command("/usr/sbin/lsattr -El sys0 -a fwversion")
data = out.split()
dmi_facts['firmware_version'] = data[1].strip('IBM,')
lsconf_path = self.module.get_bin_path("lsconf")
if lsconf_path:
rc, out, err = self.module.run_command(lsconf_path)
if rc == 0 and out:
for line in out.splitlines():
data = line.split(':')
if 'Machine Serial Number' in line:
dmi_facts['product_serial'] = data[1].strip()
if 'LPAR Info' in line:
dmi_facts['lpar_info'] = data[1].strip()
if 'System Model' in line:
dmi_facts['product_name'] = data[1].strip()
return dmi_facts
def get_vgs_facts(self):
"""
Get vg and pv Facts
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 546 0 00..00..00..00..00
hdisk1 active 546 113 00..00..00..21..92
realsyncvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk74 active 1999 6 00..00..00..00..06
testvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk105 active 999 838 200..39..199..200..200
hdisk106 active 999 599 200..00..00..199..200
"""
vgs_facts = {}
lsvg_path = self.module.get_bin_path("lsvg")
xargs_path = self.module.get_bin_path("xargs")
cmd = "%s -o | %s %s -p" % (lsvg_path, xargs_path, lsvg_path)
if lsvg_path and xargs_path:
rc, out, err = self.module.run_command(cmd, use_unsafe_shell=True)
if rc == 0 and out:
vgs_facts['vgs'] = {}
for m in re.finditer(r'(\S+):\n.*FREE DISTRIBUTION(\n(\S+)\s+(\w+)\s+(\d+)\s+(\d+).*)+', out):
vgs_facts['vgs'][m.group(1)] = []
pp_size = 0
cmd = "%s %s" % (lsvg_path, m.group(1))
rc, out, err = self.module.run_command(cmd)
if rc == 0 and out:
pp_size = re.search(r'PP SIZE:\s+(\d+\s+\S+)', out).group(1)
for n in re.finditer(r'(\S+)\s+(\w+)\s+(\d+)\s+(\d+).*', m.group(0)):
pv_info = {'pv_name': n.group(1),
'pv_state': n.group(2),
'total_pps': n.group(3),
'free_pps': n.group(4),
'pp_size': pp_size
}
vgs_facts['vgs'][m.group(1)].append(pv_info)
return vgs_facts
def get_mount_facts(self):
mount_facts = {}
mount_facts['mounts'] = []
mounts = []
# AIX does not have mtab, so the mount command is the only source of this info
# (short of using API calls to get the same data)
mount_path = self.module.get_bin_path('mount')
rc, mount_out, err = self.module.run_command(mount_path)
if mount_out:
for line in mount_out.split('\n'):
fields = line.split()
if len(fields) != 0 and fields[0] != 'node' and fields[0][0] != '-' and re.match('^/.*|^[a-zA-Z].*|^[0-9].*', fields[0]):
if re.match('^/', fields[0]):
# normal mount
mount = fields[1]
mount_info = {'mount': mount,
'device': fields[0],
'fstype': fields[2],
'options': fields[6],
'time': '%s %s %s' % (fields[3], fields[4], fields[5])}
mount_info.update(get_mount_size(mount))
else:
# nfs or cifs based mount
# in case of nfs if no mount options are provided on command line
# add into fields empty string...
if len(fields) < 8:
fields.append("")
mount_info = {'mount': fields[2],
'device': '%s:%s' % (fields[0], fields[1]),
'fstype': fields[3],
'options': fields[7],
'time': '%s %s %s' % (fields[4], fields[5], fields[6])}
mounts.append(mount_info)
mount_facts['mounts'] = mounts
return mount_facts
def get_device_facts(self):
device_facts = {}
device_facts['devices'] = {}
lsdev_cmd = self.module.get_bin_path('lsdev', True)
lsattr_cmd = self.module.get_bin_path('lsattr', True)
rc, out_lsdev, err = self.module.run_command(lsdev_cmd)
for line in out_lsdev.splitlines():
field = line.split()
device_attrs = {}
device_name = field[0]
device_state = field[1]
device_type = field[2:]
lsattr_cmd_args = [lsattr_cmd, '-E', '-l', device_name]
rc, out_lsattr, err = self.module.run_command(lsattr_cmd_args)
for attr in out_lsattr.splitlines():
attr_fields = attr.split()
attr_name = attr_fields[0]
attr_parameter = attr_fields[1]
device_attrs[attr_name] = attr_parameter
device_facts['devices'][device_name] = {
'state': device_state,
'type': ' '.join(device_type),
'attributes': device_attrs
}
return device_facts
class AIXHardwareCollector(HardwareCollector):
_platform = 'AIX'
_fact_class = AIXHardware
| 39.754941 | 137 | 0.509644 | 1,175 | 10,058 | 4.189787 | 0.236596 | 0.036563 | 0.031688 | 0.039001 | 0.235222 | 0.179159 | 0.114564 | 0.111721 | 0.085111 | 0.05586 | 0 | 0.031325 | 0.377908 | 10,058 | 252 | 138 | 39.912698 | 0.755474 | 0.185922 | 0 | 0.067073 | 0 | 0.006098 | 0.097004 | 0.012984 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042683 | false | 0 | 0.02439 | 0 | 0.140244 | 0.006098 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20f0b9c08e120b15c6e37ccd433da0ddfa26dd09 | 1,148 | py | Python | servicedirectory/src/sd-api/classes/urls.py | ealogar/servicedirectory | fb4f4bfa8b499b93c03af589ef2f34c08a830b17 | [
"Apache-2.0"
] | null | null | null | servicedirectory/src/sd-api/classes/urls.py | ealogar/servicedirectory | fb4f4bfa8b499b93c03af589ef2f34c08a830b17 | [
"Apache-2.0"
] | null | null | null | servicedirectory/src/sd-api/classes/urls.py | ealogar/servicedirectory | fb4f4bfa8b499b93c03af589ef2f34c08a830b17 | [
"Apache-2.0"
] | null | null | null | '''
(c) Copyright 2013 Telefonica, I+D. Printed in Spain (Europe). All Rights
Reserved.
The copyright to the software program(s) is property of Telefonica I+D.
The program(s) may be used and or copied only with the express written
consent of Telefonica I+D or in accordance with the terms and conditions
stipulated in the agreement/contract under which the program(s) have
been supplied.
'''
from django.conf.urls.defaults import patterns, url
from classes.views import ServiceClassCollectionView, ServiceInstanceView, ServiceClassItemView,\
ServiceInstanceItemView
from django.conf import settings
urlpatterns = patterns('',
url(r'^/(?P<class_name>{class_})/instances/?$'.format(class_=settings.CLASS_NAME_REGEX),
ServiceInstanceView.as_view()),
url(r'^/(?P<class_name>{class_})/instances/(?P<id>[\w]+)/?$'.format(class_=settings.CLASS_NAME_REGEX),
ServiceInstanceItemView.as_view(), name='instance_detail'),
url(r'^/?$',
ServiceClassCollectionView.as_view()),
url(r'^/(?P<class_name>{class_})/?$'.format(class_=settings.CLASS_NAME_REGEX),
ServiceClassItemView.as_view(), name='class_detail')
)
| 42.518519 | 106 | 0.743031 | 150 | 1,148 | 5.546667 | 0.48 | 0.064904 | 0.043269 | 0.036058 | 0.223558 | 0.223558 | 0.104567 | 0.060096 | 0 | 0 | 0 | 0.00398 | 0.124564 | 1,148 | 26 | 107 | 44.153846 | 0.823881 | 0.334495 | 0 | 0 | 0 | 0 | 0.201058 | 0.160053 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.214286 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20f40bb6774c781a86ff7385108395d2e004318d | 9,669 | py | Python | lib/kb_RDP_Classifier/kb_RDP_ClassifierImpl.py | kbaseapps/kb_RDP_Classifier | 7ac139db66b0291c847084e0633cb311befd05e1 | [
"MIT"
] | null | null | null | lib/kb_RDP_Classifier/kb_RDP_ClassifierImpl.py | kbaseapps/kb_RDP_Classifier | 7ac139db66b0291c847084e0633cb311befd05e1 | [
"MIT"
] | null | null | null | lib/kb_RDP_Classifier/kb_RDP_ClassifierImpl.py | kbaseapps/kb_RDP_Classifier | 7ac139db66b0291c847084e0633cb311befd05e1 | [
"MIT"
] | 1 | 2021-09-24T18:18:40.000Z | 2021-09-24T18:18:40.000Z | # -*- coding: utf-8 -*-
#BEGIN_HEADER
import logging
import os
import uuid
import shutil
from installed_clients.WorkspaceClient import Workspace
from installed_clients.DataFileUtilClient import DataFileUtil
from installed_clients.KBaseReportClient import KBaseReport
from installed_clients.GenericsAPIClient import GenericsAPI
from .impl.params import Params
from .impl import report
from .impl.globals import Var
from .impl.kbase_obj import AmpliconMatrix, AttributeMapping
from .impl import app_file
from .util.debug import dprint
from .util.misc import get_numbered_duplicate
from .util.cli import run_check
#END_HEADER
class kb_RDP_Classifier:
'''
Module Name:
kb_RDP_Classifier
Module Description:
A KBase module: kb_RDP_Classifier
'''
######## WARNING FOR GEVENT USERS ####### noqa
# Since asynchronous IO can lead to methods - even the same method -
# interrupting each other, you must be *very* careful when using global
# state. A method could easily clobber the state set by another while
# the latter method is running.
######################################### noqa
VERSION = "0.0.1"
GIT_URL = "https://github.com/kbaseapps/kb_RDP_Classifier"
GIT_COMMIT_HASH = "d8be422362a9166f4c3441c826be1d16ecdcabe2"
#BEGIN_CLASS_HEADER
#END_CLASS_HEADER
# config contains contents of config file in a hash or None if it couldn't
# be found
def __init__(self, config):
#BEGIN_CONSTRUCTOR
logging.basicConfig(format='%(created)s %(levelname)s: %(message)s',
level=logging.INFO)
self.callback_url = os.environ['SDK_CALLBACK_URL']
self.workspace_url = config['workspace-url']
self.shared_folder = config['scratch']
#END_CONSTRUCTOR
pass
def run_classify(self, ctx, params):
"""
This example function accepts any number of parameters and returns results in a KBaseReport
:param params: instance of mapping from String to unspecified object
:returns: instance of type "ReportResults" -> structure: parameter
"report_name" of String, parameter "report_ref" of String
"""
# ctx is the context object
# return variables are: output
#BEGIN run_classify
logging.info(params)
params = Params(params)
Var.params = params
'''
tmp/ `shared_folder`
└── kb_rdp_clsf_<uuid>/ `run_dir`
├── return/ `return_dir`
| ├── cmd.txt
| ├── study_seqs.fna
| └── RDP_Classifier_output/ `out_dir`
| ├── out_allRank.tsv
| └── out_fixedRank.tsv
└── report/ `report_dir`
├── pie_hist.html
├── suburst.html
└── report.html
'''
##
## set up globals ds `Var` for this API-method run
## which involves making this API-method run's directory structure
Var.update({
'run_dir': os.path.join(self.shared_folder, 'kb_rdp_clsf_' + str(uuid.uuid4())),
'dfu': DataFileUtil(self.callback_url),
'ws': Workspace(self.workspace_url),
'gapi': GenericsAPI(self.callback_url),
'kbr': KBaseReport(self.callback_url),
'warnings': [],
})
os.mkdir(Var.run_dir)
Var.update({
'return_dir': os.path.join(Var.run_dir, 'return'),
'report_dir': os.path.join(Var.run_dir, 'report'),
})
os.mkdir(Var.return_dir)
os.mkdir(Var.report_dir)
Var.update({
'out_dir': os.path.join(Var.return_dir, 'RDP_Classifier_output')
})
os.mkdir(Var.out_dir)
# cat and gunzip SILVA refdata
# which has been split into ~99MB chunks to get onto Github
#if params.is_custom():
# app_file.prep_refdata()
#
##
### load objects
####
#####
amp_mat = AmpliconMatrix(params['amp_mat_upa'])
row_attr_map_upa = amp_mat.obj.get('row_attributemapping_ref')
create_row_attr_map = row_attr_map_upa is None
row_attr_map = AttributeMapping(row_attr_map_upa, amp_mat=amp_mat)
#
##
### cmd
####
#####
fasta_flpth = os.path.join(Var.return_dir, 'study_seqs.fna')
Var.out_allRank_flpth = os.path.join(Var.out_dir, 'out_allRank.tsv')
Var.out_shortSeq_flpth = os.path.join(Var.out_dir, 'out_unclassifiedShortSeqs.txt') # seqs too short to classify
shutil.copyfile(amp_mat.get_fasta(), fasta_flpth)
cmd = (
'java -Xmx4g -jar %s classify %s ' % (Var.classifier_jar_flpth, fasta_flpth)
+ ' '.join(params.cli_args) + ' '
+ '--format allRank '
+ '--outputFile %s --shortseq_outfile %s' % (Var.out_allRank_flpth, Var.out_shortSeq_flpth)
)
run_check(cmd)
#
##
### extract classifications
####
#####
id2taxStr = app_file.get_fix_filtered_id2tax()
# get ids of classified and unclassified seqs
shortSeq_id_l = app_file.parse_shortSeq() # sequences too short to get clsf
classified_id_l = list(id2taxStr.keys())
# make sure classifieds and shorts complement
if Var.debug:
ret = sorted(classified_id_l + shortSeq_id_l)
mat = sorted(amp_mat.obj['data']['row_ids'])
assert ret == mat, \
'diff1: %s, diff2: %s' % (set(ret)-set(mat), set(mat)-set(ret))
if len(classified_id_l) == 0:
raise Exception('No sequences were long enough to be classified')
# add in id->'' for unclassified seqs
# so id2taxStr is complete
# so no KeyErrors later
for shortSeq_id in shortSeq_id_l:
id2taxStr[shortSeq_id] = ''
# add to globals for testing
Var.shortSeq_id_l = shortSeq_id_l
#
##
### add to row AttributeMapping
####
#####
prose_args = params.get_prose_args()
attribute = (
'RDP Classifier Taxonomy (conf=%s, gene=%s)'
% (prose_args['conf'], prose_args['gene'])
)
attribute_names = row_attr_map.get_attribute_names()
if attribute in attribute_names:
attribute = get_numbered_duplicate(attribute_names, attribute)
source = 'RDP Classifier'
ind, attribute = row_attr_map.add_attribute_slot(attribute, source)
row_attr_map.update_attribute(ind, id2taxStr)
#
##
### save obj
####
#####
amp_mat_output_name = Var.params['output_name']
attr_map_output_name = (
amp_mat_output_name + '.Amplicon_attributes' if create_row_attr_map else
None
)
row_attr_map_upa_new = row_attr_map.save(name=attr_map_output_name)
amp_mat.obj['row_attributemapping_ref'] = row_attr_map_upa_new
amp_mat_upa_new = amp_mat.save(amp_mat_output_name)
objects_created = [
dict( # row AttrMap
ref=row_attr_map_upa_new,
description='%sAdded attribute `%s`' % (
'Created. ' if create_row_attr_map else '',
attribute,
)
),
dict( # AmpMat
ref=amp_mat_upa_new,
description='Updated amplicon AttributeMapping reference to `%s`' % row_attr_map_upa_new
),
]
# testing
if Var.debug:
Var.update(dict(
amp_mat=amp_mat,
row_attr_map=row_attr_map,
))
#
##
### html report
####
#####
hrw = report.HTMLReportWriter(
cmd_l=[cmd]
)
html_flpth = hrw.write()
html_links = [{
'path': Var.report_dir,
'name': os.path.basename(html_flpth),
}]
#
##
###
####
#####
file_links = [{
'path': Var.run_dir,
'name': 'RDP_Classifier_results.zip',
'description': 'Input, output'
}]
params_report = {
'warnings': Var.warnings,
'objects_created': objects_created,
'html_links': html_links,
'direct_html_link_index': 0,
'file_links': file_links,
'workspace_id': params['workspace_id'],
'html_window_height': Var.report_height,
}
# testing
Var.params_report = params_report
report_obj = Var.kbr.create_extended_report(params_report)
output = {
'report_name': report_obj['name'],
'report_ref': report_obj['ref'],
}
#END run_classify
# At some point might do deeper type checking...
if not isinstance(output, dict):
raise ValueError('Method run_classify return value ' +
'output is not type dict as required.')
# return the results
return [output]
def status(self, ctx):
#BEGIN_STATUS
returnVal = {'state': "OK",
'message': "",
'version': self.VERSION,
'git_url': self.GIT_URL,
'git_commit_hash': self.GIT_COMMIT_HASH}
#END_STATUS
return [returnVal]
| 28.862687 | 121 | 0.562416 | 1,078 | 9,669 | 4.820037 | 0.295918 | 0.025597 | 0.032717 | 0.017513 | 0.081601 | 0.068514 | 0.029253 | 0.010393 | 0 | 0 | 0 | 0.006351 | 0.332299 | 9,669 | 334 | 122 | 28.949102 | 0.793371 | 0.170338 | 0 | 0.090909 | 0 | 0 | 0.144652 | 0.02597 | 0 | 0 | 0 | 0 | 0.006494 | 1 | 0.019481 | false | 0.006494 | 0.103896 | 0 | 0.162338 | 0.006494 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
20f473683e772c87b537f146508f569bdfe393ff | 4,189 | py | Python | image2text.py | minhpvwh/pytesseract-vie | 4159941a0f538845c535d090907cf230946cb4fe | [
"Leptonica",
"BSD-2-Clause"
] | null | null | null | image2text.py | minhpvwh/pytesseract-vie | 4159941a0f538845c535d090907cf230946cb4fe | [
"Leptonica",
"BSD-2-Clause"
] | null | null | null | image2text.py | minhpvwh/pytesseract-vie | 4159941a0f538845c535d090907cf230946cb4fe | [
"Leptonica",
"BSD-2-Clause"
] | null | null | null | import os
import cv2
import glob
import tqdm
import argparse
from skimage.filters import threshold_local
import pytesseract
import numpy as np
import random
def check_exist(path):
try:
if not os.path.exists(path):
os.mkdir(path)
except Exception:
raise ("please check your folder again")
pass
def median_filter(data, filter_size):
temp = []
indexer = filter_size // 2
data_final = []
data_final = np.zeros((len(data),len(data[0])))
for i in range(len(data)):
for j in range(len(data[0])):
for z in range(filter_size):
if i + z - indexer < 0 or i + z - indexer > len(data) - 1:
for c in range(filter_size):
temp.append(0)
else:
if j + z - indexer < 0 or j + indexer > len(data[0]) - 1:
temp.append(0)
else:
for k in range(filter_size):
temp.append(data[i + z - indexer][j + k - indexer])
temp.sort()
data_final[i][j] = temp[len(temp) // 2]
temp = []
return data_final
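# Illustrative usage (not part of the original script): a 3x3 median filter over a
# small array; the single bright pixel is replaced by its neighbourhood median.
def _example_median_filter():
    noisy = np.array([[10, 10, 10, 10],
                      [10, 255, 10, 10],
                      [10, 10, 10, 10],
                      [10, 10, 10, 10]], dtype=np.float32)
    return median_filter(noisy, 3)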
def extract_text_from_image(image, binary_mode = False, lang='vie'):
# Convert the warped image to grayscale, then threshold it
# to give it that 'black and white' paper effect
if binary_mode:
_input = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel3 = np.ones((3, 3), np.uint8)
kernel5 = np.ones((5, 5), np.uint8)
kernel7 = np.ones((7, 7), np.uint8)
# cv2.imshow('Input', _input)
# cv2.imshow('Erosion', img_erosion)
# cv2.imshow('Dilation', img_dilation)
#
# cv2.waitKey(0)
#############################################################################
_input = cv2.threshold(_input, 0, 255, cv2.THRESH_TRUNC + cv2.THRESH_OTSU)[1]
T = threshold_local(_input, 11, offset=10, method="gaussian")
_input = (_input > T).astype("uint8") * 255
#############################################################################
# _input = cv2.threshold(_input, 0, 255, cv2.THRESH_TOZERO + cv2.THRESH_OTSU)[1]
# # _input = cv2.erode(_input, kernel3, iterations=1)
# # _input = cv2.dilate(_input, kernel3, iterations=1)
# T = threshold_local(_input, 11, offset=10, method="gaussian")
# _input = (_input > T).astype("uint8") * 255
# _input = median_filter(_input, 5)
#
# _input = cv2.erode(_input, kernel5, iterations=1)
# _input = cv2.dilate(_input, kernel5, iterations=1)
# _input = median_filter(_input, 3)
cv2.imwrite("/home/minhpv/Desktop/pre_processing_text/%s.jpg" %(str(random.randint(1,100000000))), _input)
else:
_input = image
config = '-l {lang}'.format(lang=lang)
# cv2.imshow("g", _input)
# cv2.waitKey(0)
text = pytesseract.image_to_string(_input, config=config)
lines = text.splitlines()
text = '\n'.join(l.strip() for l in lines if l.strip())
return _input, text
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--input', type=str, default='./images')
parser.add_argument('--use_binary', type=bool, default=True)
parser.add_argument('--output', type=str, default='./output')
parser.add_argument('--binary_output', type=str, default='./binary')
FLAGS = parser.parse_args()
allow_type = ['jpg', 'png', 'JPG', 'PNG', 'JPEG', 'jpeg']
all_images = os.listdir(FLAGS.input)
for image in tqdm.tqdm(all_images):
try:
endswith = image.split('.')[-1]
if endswith in allow_type:
name = image.split('.')[0]
path_to_image = os.path.join(FLAGS.input, image)
imread = cv2.imread(path_to_image)
output_image, text = extract_text_from_image(image=imread, binary_mode=FLAGS.use_binary)
#
if FLAGS.use_binary:
check_exist(FLAGS.binary_output)
binary_output = '{}/{}.jpg'.format(FLAGS.binary_output, name)
cv2.imwrite(binary_output, output_image)
check_exist(FLAGS.output)
output_file = '{}/{}.txt'.format(FLAGS.output, name)
with open(output_file, 'w') as f:
print(text)
f.write(text)
else:
print("----> not allow type file: {} - type {}".format(image, endswith))
except Exception as e:
with open('logs.txt', 'w') as f:
f.write(str(e))
continue
| 32.726563 | 108 | 0.610169 | 564 | 4,189 | 4.35461 | 0.297872 | 0.029316 | 0.027687 | 0.020765 | 0.168567 | 0.133958 | 0.087541 | 0.087541 | 0.061075 | 0.061075 | 0 | 0.028986 | 0.209358 | 4,189 | 127 | 109 | 32.984252 | 0.71256 | 0.170446 | 0 | 0.116279 | 0 | 0 | 0.081007 | 0.01426 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034884 | false | 0.011628 | 0.104651 | 0 | 0.162791 | 0.023256 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
4545c91d0cdf7bdd633b1682893229895b5c4a88 | 2,808 | py | Python | mrkt/framework/platform/AWS.py | Tefx/Meerkat | ad9d4d3973a990406b976998dce9727b40139650 | [
"MIT"
] | null | null | null | mrkt/framework/platform/AWS.py | Tefx/Meerkat | ad9d4d3973a990406b976998dce9727b40139650 | [
"MIT"
] | null | null | null | mrkt/framework/platform/AWS.py | Tefx/Meerkat | ad9d4d3973a990406b976998dce9727b40139650 | [
"MIT"
] | null | null | null | from ...common.utils import patch;
patch()
import boto3
import urllib.request
from .PaaS import PaaS
from ..service import docker
from ...common.consts import *
COREOS_AMI_URL = "https://stable.release.core-os.net/amd64-usr/current/coreos_production_ami_hvm_{region}.txt"
COREOS_USERNAME = "core"
VM_TAG = [{"ResourceType": "instance",
"Tags": [{"Key": PLATFORM_EC2_VM_TAG,
"Value": "True"}]}]
def fetch_coreos_ami(region):
url = COREOS_AMI_URL.format(region=region)
return urllib.request.urlopen(url).read().decode().strip()
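# Illustrative usage sketch (not part of the original module): resolve the current
# stable CoreOS AMI up front, as the EC2 class below does when no explicit `ami`
# is supplied. Performs a real HTTP request; the region is an example value.
# ami_id = fetch_coreos_ami("us-west-2")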
class EC2(PaaS):
def __init__(self, requests, sgroup, key_name, key_file,
ami=None,
username=COREOS_USERNAME,
placement_group=None,
region=PLATFORM_EC2_REGION,
clean_action=PaaS.CleanAction.Stop,
**options):
super(EC2, self).__init__(requests, clean_action, **options)
self.sgroup = sgroup
self.ami = ami or fetch_coreos_ami(region)
self.key_name = key_name
self.key_file = key_file
self.username = username
self.placement = {"GroupName": placement_group} if placement_group else {}
self.ec2_client = boto3.resource("ec2", region_name=region)
def VMs_on_platform(self):
filters = [
{"Name": "instance-state-name",
'Values': ["running", "stopped"]},
{"Name": "image-id",
'Values': [self.ami]},
{"Name": "instance-type",
"Values": list(self.requests.keys())},
{"Name": "tag:{}".format(PLATFORM_EC2_VM_TAG),
"Values": ["True"]}
]
return self.ec2_client.instances.filter(Filters=filters)
def launch_VMs(self, vm_type, vm_num):
return self.ec2_client.create_instances(ImageId=self.ami,
InstanceType=vm_type,
MinCount=vm_num,
MaxCount=vm_num,
KeyName=self.key_name,
Placement=self.placement,
SecurityGroupIds=[self.sgroup],
TagSpecifications=VM_TAG)
def VM_is_ready(self, vm):
vm.load()
return vm.state["Name"] == "running"
def service_on_VM(self, vm):
return docker.ViaSSH(vm.public_dns_name,
username=self.username,
key_filename=self.key_file)
def clean_VM(self, vm):
if self.clean_action != self.CleanAction.Null:
getattr(vm, self.clean_action)()
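For orientation, a minimal way to drive the EC2 platform above might look like the sketch below. The security group, key pair, and key file are hypothetical values, and the shape of the requests argument (instance type mapped to a desired VM count) is inferred from how VMs_on_platform() consumes it; none of this appears in the original file:

# Illustrative only: placeholder identifiers, not taken from the Meerkat repository.
platform = EC2(
    requests={"t2.micro": 2},          # assumed shape: instance type -> desired VM count
    sgroup="sg-0123456789abcdef0",     # hypothetical security group ID
    key_name="my-keypair",             # hypothetical EC2 key pair name
    key_file="~/.ssh/my-keypair.pem",  # hypothetical private key path
)

# List instances previously launched (or stopped) by this platform.
for vm in platform.VMs_on_platform():
    print(vm.id, vm.state["Name"])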
| 37.44 | 110 | 0.535613 | 290 | 2,808 | 4.951724 | 0.351724 | 0.02507 | 0.027159 | 0.022284 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007151 | 0.352564 | 2,808 | 74 | 111 | 37.945946 | 0.782728 | 0 | 0 | 0 | 0 | 0.016129 | 0.09188 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.112903 | false | 0 | 0.096774 | 0.032258 | 0.306452 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
45485b7021d68a5f5cea1ac732317f7615814dbe | 3,099 | py | Python | csrweb/api/resources.py | edbeard/csrweb | aecf8b6199aa6ce04a89c549ea2b970369f750e1 | [
"MIT"
] | null | null | null | csrweb/api/resources.py | edbeard/csrweb | aecf8b6199aa6ce04a89c549ea2b970369f750e1 | [
"MIT"
] | null | null | null | csrweb/api/resources.py | edbeard/csrweb | aecf8b6199aa6ce04a89c549ea2b970369f750e1 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
csrweb.api.resources
~~~~~~~~~~~~~~~~~~~~
API resources.
:copyright: Copyright 2019 by Ed Beard.
:license: MIT, see LICENSE file for more details.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import logging
import os
import uuid
import six
from flask import current_app, make_response
from flask_restplus import Resource, abort, fields
import werkzeug
from .. import db
from ..models import CsrJob
from ..tasks import run_csr
from . import api
log = logging.getLogger(__name__)
jobs = api.namespace('Jobs', path='/job', description='Submit jobs and retrieve results')
csrjob_schema = api.model('CsrJob', {
    'job_id': fields.String(required=True, description='Unique job ID'),
    'created_at': fields.DateTime(required=True, description='Job creation timestamp'),
    'status': fields.String(required=True, description='Current job status'),
})

labels = dict()
labels['value'] = fields.String

result = dict()
result['smiles'] = fields.String
result['name'] = fields.String
result['labels'] = fields.Nested(labels)

csrjob_schema['result'] = fields.Nested(result)

submit_parser = api.parser()
submit_parser.add_argument('file', type=werkzeug.datastructures.FileStorage, required=True, help='The input file.', location='files')

result_parser = api.parser()
result_parser.add_argument('output', help='Response format', location='query', choices=['json', 'xml'])


@jobs.route('/')
# @api.doc(responses={400: 'Disallowed file type'})
class CsrJobSubmitResource(Resource):
    """Submit a new ChemSchematicResolver job and get the job ID."""

    @api.doc(description='Submit a new ChemSchematicResolver job.', parser=submit_parser)
    @api.marshal_with(csrjob_schema, code=201)
    def post(self):
        """Submit a new job."""
        args = submit_parser.parse_args()
        file = args['file']
        job_id = six.text_type(uuid.uuid4())
        if '.' not in file.filename:
            abort(400, b'No file extension!')
        extension = file.filename.rsplit('.', 1)[1]
        if extension not in current_app.config['ALLOWED_EXTENSIONS']:
            abort(400, b'Disallowed file extension!')
        filename = '%s.%s' % (job_id, extension)
        file.save(os.path.join(current_app.config['UPLOAD_FOLDER'], filename))
        csr_job = CsrJob(file=filename, job_id=job_id)
        db.session.add(csr_job)
        db.session.commit()
        run_csr.apply_async([csr_job.id], task_id=job_id)
        return csr_job, 201


@jobs.route('/<string:job_id>')
@api.doc(params={'job_id': 'The job ID'})  # responses={404: 'Job not found'},
class CsrJobResource(Resource):
    """View the status and results of a specific ChemSchematicResolver job."""

    @api.doc(description='View the status and results of a specific ChemSchematicResolver job.', parser=result_parser)
    @api.marshal_with(csrjob_schema)
    def get(self, job_id):
        """Get the results of a job."""
        return CsrJob.query.filter_by(job_id=job_id).first_or_404()
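As a client-side sketch, the two resources above could be exercised with the requests library roughly as follows. The host, port, and mount point are assumptions; only the /job path and the 'file' upload field come from the code above:

import requests  # hypothetical client, not part of the service code

BASE = "http://localhost:5000/job"  # assumed host, port and API mount point

# Submit an image as a new job (multipart upload, field name 'file').
with open("scheme.png", "rb") as fh:
    job = requests.post(BASE + "/", files={"file": fh}).json()

# Fetch the status (and, once available, the result) for the returned job ID.
status = requests.get("{}/{}".format(BASE, job["job_id"])).json()
print(status["status"], status.get("result"))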
| 32.621053 | 133 | 0.700871 | 413 | 3,099 | 5.089588 | 0.363196 | 0.03568 | 0.030447 | 0.022835 | 0.151284 | 0.085633 | 0.055186 | 0.055186 | 0.055186 | 0.055186 | 0 | 0.011137 | 0.159729 | 3,099 | 94 | 134 | 32.968085 | 0.796083 | 0.137786 | 0 | 0 | 0 | 0 | 0.163134 | 0.015897 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.25 | 0 | 0.35 | 0.016667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
454c3029fdf43b8fecffd75acd0e4868c4a676d6 | 273 | py | Python | src/submarine/submarine.py | mokshasoul/aoc-2021-python | 6e6f24659c45f32eab5302075c3c2c0a0a876a60 | [
"MIT"
] | null | null | null | src/submarine/submarine.py | mokshasoul/aoc-2021-python | 6e6f24659c45f32eab5302075c3c2c0a0a876a60 | [
"MIT"
] | null | null | null | src/submarine/submarine.py | mokshasoul/aoc-2021-python | 6e6f24659c45f32eab5302075c3c2c0a0a876a60 | [
"MIT"
] | null | null | null | import re
import numpy as np
class Submarine:
    def __init__(self) -> None:
        self.depth = 0
        self.aim_depth = 0
        self.gamma_rate = 0
        self.epsilon_rate = 0
        self.oxygen_generator_rate = 0
        self.carbon_dioxide_scrubber = 0
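The class is currently a bare state container, so a trivial usage sketch follows. The interpretation of the attributes (depth/aim for the dive commands, gamma and epsilon rates for power consumption) is an assumption based on the Advent of Code 2021 puzzles the names point to, not something defined in this file:

# Illustrative only: attribute meanings are assumed; the stub defines no methods yet.
sub = Submarine()
sub.depth += 5                            # e.g. after a "down 5" command
print(sub.depth)
print(sub.gamma_rate * sub.epsilon_rate)  # power consumption = gamma * epsilon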
| 21 | 40 | 0.608059 | 37 | 273 | 4.189189 | 0.594595 | 0.16129 | 0.174194 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032609 | 0.326007 | 273 | 13 | 41 | 21 | 0.809783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.2 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |