hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | 
qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | 
qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4f239f097e9b19538c2da8b9e6eb10b642787d4e | 3,219 | py | Python | custom_components/tuneblade/tuneblade.py | spycle/tune_blade | bce9847531f410634765df7391565e2094549eb6 | [
"Apache-2.0"
] | null | null | null | custom_components/tuneblade/tuneblade.py | spycle/tune_blade | bce9847531f410634765df7391565e2094549eb6 | [
"Apache-2.0"
] | null | null | null | custom_components/tuneblade/tuneblade.py | spycle/tune_blade | bce9847531f410634765df7391565e2094549eb6 | [
"Apache-2.0"
] | null | null | null | """TuneBlade API Client."""
import logging
import asyncio
import socket
from typing import Optional
import aiohttp
import async_timeout
TIMEOUT = 10
_LOGGER: logging.Logger = logging.getLogger(__package__)
HEADERS = {"Content-type": "application/json; charset=UTF-8"}
class TuneBladeApiClient:
    def __init__(
        self,
        host: str,
        port: str,
        device_id: str,
        username: str,
        password: str,
        airplay_password: str,
        session: aiohttp.ClientSession,
        auth,
    ) -> None:
        """TuneBlade API client bound to a single device (or the master channel)."""
        self._host = host
        self._port = port
        self._username = username
        self._password = password
        self._airplay_password = airplay_password
        self._session = session
        if device_id == "Master":
            self._url = f"http://{host}:{port}/master"
        else:
            self._url = f"http://{host}:{port}/devices/{device_id}"

    async def async_get_data(self) -> dict:
        """Get data from the API."""
        return await self.api_wrapper("get", self._url)

    async def async_conn(self, value: str) -> None:
        """Set the connection status of the device."""
        await self.api_wrapper("put", self._url, data={"Password": self._airplay_password, "Status": value}, headers=HEADERS)

    async def async_set_volume(self, volume: float) -> None:
        """Set the device volume (0.0-1.0, sent to the API as 0-100)."""
        await self.api_wrapper("put", self._url, data={"Password": self._airplay_password, "Volume": str(int(volume * 100))}, headers=HEADERS)

    async def async_set_volume_master(self, volume: float) -> None:
        """Set the master volume (0.0-1.0, sent to the API as 0-100) and connect."""
        await self.api_wrapper("put", self._url, data={"Status": "Connect", "Volume": str(int(volume * 100))}, headers=HEADERS)

    async def api_wrapper(
        self, method: str, url: str, data: Optional[dict] = None, headers: Optional[dict] = None
    ) -> Optional[dict]:
        """Send a request to the API and handle common error cases."""
        # Avoid mutable default arguments.
        data = data or {}
        headers = headers or {}
        try:
            async with async_timeout.timeout(TIMEOUT):
                if method == "get":
                    response = await self._session.get(url, headers=headers)
                    return await response.json()
                elif method == "put":
                    await self._session.put(url, headers=headers, json=data)
                elif method == "patch":
                    await self._session.patch(url, headers=headers, json=data)
                elif method == "post":
                    await self._session.post(url, headers=headers, json=data)
        except asyncio.TimeoutError as exception:
            _LOGGER.error(
                "Timeout error fetching information from %s - %s",
                url,
                exception,
            )
        except (KeyError, TypeError) as exception:
            _LOGGER.error(
                "Error parsing information from %s - %s",
                url,
                exception,
            )
        except (aiohttp.ClientError, socket.gaierror) as exception:
            _LOGGER.error(
                "Error fetching information from %s - %s",
                url,
                exception,
            )
        except Exception as exception:  # pylint: disable=broad-except
            _LOGGER.error("Something really wrong happened! - %s", exception)
| 36.579545 | 141 | 0.581858 | 355 | 3,219 | 5.123944 | 0.259155 | 0.039582 | 0.027488 | 0.030786 | 0.390874 | 0.319956 | 0.319956 | 0.234744 | 0.234744 | 0.134689 | 0 | 0.003979 | 0.297297 | 3,219 | 87 | 142 | 37 | 0.800177 | 0.021746 | 0 | 0.138462 | 0 | 0 | 0.10565 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.015385 | false | 0.076923 | 0.092308 | 0 | 0.153846 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4f23e71ddb8ee1ace14dbb8e3bd3439af093af7d | 1,607 | py | Python | train.py | RuiShu/fast-style-transfer | abea698668aa070375afa488e490b93ba5bb9563 | [
"MIT"
] | 16 | 2017-09-30T21:13:27.000Z | 2019-08-31T19:16:44.000Z | train.py | RuiShu/fast-style-transfer | abea698668aa070375afa488e490b93ba5bb9563 | [
"MIT"
] | null | null | null | train.py | RuiShu/fast-style-transfer | abea698668aa070375afa488e490b93ba5bb9563 | [
"MIT"
] | 1 | 2021-08-05T08:59:58.000Z | 2021-08-05T08:59:58.000Z | from config import args
from utils import delete_existing, get_img, get_img_files
import tensorbayes as tb
import tensorflow as tf
import numpy as np
import os


def push_to_buffer(buf, data_files):
    """Fill the buffer in place with randomly chosen training images."""
    files = np.random.choice(data_files, len(buf), replace=False)
    for i, f in enumerate(files):
        buf[i] = get_img(f, (256, 256, 3))


def train(M):
    delete_existing(args.log_dir)
    train_writer = tf.summary.FileWriter(args.log_dir)
    train_files = get_img_files(args.train_dir)
    validation_files = get_img_files(args.validation_dir)
    iterep = args.iter_visualize

    with M.graph.as_default():
        M.sess.run(tf.global_variables_initializer())
        saver = tf.train.Saver()

    batch = np.zeros((args.batch_size, 256, 256, 3), dtype='float32')

    for i in range(len(train_files) * args.n_epochs):
        push_to_buffer(batch, train_files)
        summary, _ = M.sess.run(M.ops_main, {M.x: batch})
        train_writer.add_summary(summary, i + 1)
        train_writer.flush()

        message = 'i={:d}'.format(i + 1)
        end_viz, _ = tb.utils.progbar(i, iterep, message)

        if (i + 1) % args.iter_visualize == 0:
            for f, op in zip(validation_files, M.ops_images):
                img = np.expand_dims(get_img(f), 0)
                summary = M.sess.run(op, {M.x_test: img})
                train_writer.add_summary(summary, i + 1)

        if (i + 1) % args.iter_save == 0:
            path = saver.save(M.sess, os.path.join(args.model_dir, 'model'),
                              global_step=i + 1)
            print("Saving model to {:s}".format(path))
| 35.711111 | 76 | 0.627878 | 243 | 1,607 | 3.950617 | 0.378601 | 0.0375 | 0.034375 | 0.03125 | 0.129167 | 0.0625 | 0.0625 | 0 | 0 | 0 | 0 | 0.020661 | 0.247044 | 1,607 | 44 | 77 | 36.522727 | 0.772727 | 0 | 0 | 0.055556 | 0 | 0 | 0.023647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4f2add3dfacdc457e8a021fc99bae54faa461b6f | 13,569 | py | Python | scripts/sg-toolbox/SG-Glyph-CopyLayer.py | tphinney/science-gothic | b5e8a73778fdb62c38a0ee81cbe923ae9e15fc9a | [
"Apache-2.0"
] | 104 | 2019-08-08T20:18:18.000Z | 2022-03-23T21:08:24.000Z | scripts/sg-toolbox/SG-Glyph-CopyLayer.py | tphinney/science-gothic | b5e8a73778fdb62c38a0ee81cbe923ae9e15fc9a | [
"Apache-2.0"
] | 269 | 2019-08-06T22:12:53.000Z | 2022-03-23T18:05:07.000Z | scripts/FontLab/sg-toolbox/SG-Glyph-CopyLayer.py | tphinney/bank-gothic | 5857f71c207ff9fd54899423304267339196ef21 | [
"Apache-2.0"
] | 5 | 2019-08-09T21:51:45.000Z | 2020-04-26T18:09:01.000Z | #FLM: Glyph: Copy Layer (TypeRig)
# ----------------------------------------
# (C) Vassil Kateliev, 2019 (http://www.kateliev.com)
# (C) Karandash Type Foundry (http://www.karandash.eu)
#-----------------------------------------
# www.typerig.com
# No warranties. By using this you agree
# that you use it at your own risk!
# - Dependencies -----------------
import os
from collections import OrderedDict
import fontlab as fl6
from PythonQt import QtCore
from typerig.gui import QtGui
from typerig.gui.widgets import getProcessGlyphs
from typerig.proxy import *
# - Init --------------------------------
app_version = '1.97'
app_name = '[SG] Copy Layers'
# -- Copy Presets (by request)
copy_presets = {
    'contrast': [('Blk', 'Blk Ctr'),
                 ('Blk Cnd', 'Blk Cnd Ctr'),
                 ('Blk Exp', 'Blk Exp Ctr'),
                 ('Cnd', 'Cnd Ctr'),
                 ('Medium', 'Ctr'),
                 ('Exp', 'Exp Ctr'),
                 ('Lt', 'Lt Ctr'),
                 ('Lt Cnd', 'Lt Cnd Ctr'),
                 ('Lt Exp', 'Lt Exp Ctr')],
    'ctr_light': [('Lt', 'Lt Ctr'),
                  ('Lt Cnd', 'Lt Cnd Ctr'),
                  ('Lt Exp', 'Lt Exp Ctr')],
    'ctr_light_s': [('Lt', 'Lt Ctr'),
                    ('Lt Cnd', 'Lt Cnd Ctr'),
                    ('Lt Exp', 'Lt Exp Ctr'),
                    ('Lt S', 'Lt Ctr S'),
                    ('Lt Cnd S', 'Lt Cnd Ctr S'),
                    ('Lt Exp S', 'Lt Exp Ctr S')],
    'width': [('Blk', 'Blk Cnd'),
              ('Medium', 'Cnd'),
              ('Lt', 'Lt Cnd'),
              ('Blk', 'Blk Exp'),
              ('Medium', 'Exp'),
              ('Lt', 'Lt Exp')],
    'weight': [('Medium', 'Lt'),
               ('Medium', 'Blk')],
    'slant': [('Lt', 'Lt S'),
              ('Medium', 'Medium S'),
              ('Blk', 'Blk S'),
              ('Lt Cnd', 'Lt Cnd S'),
              ('Cnd', 'Cnd S'),
              ('Blk Cnd', 'Blk Cnd S'),
              ('Lt Exp', 'Lt Exp S'),
              ('Exp', 'Exp S'),
              ('Blk Exp', 'Blk Exp S'),
              ('Lt', 'Lt Ctr S'),
              ('Ctr', 'Ctr S'),
              ('Blk Ctr', 'Blk Ctr S'),
              ('Lt Cnd', 'Lt Cnd Ctr S'),
              ('Cnd Ctr', 'Cnd Ctr S'),
              ('Blk Cnd Ctr', 'Blk Cnd Ctr S'),
              ('Lt Exp', 'Lt Exp Ctr S'),
              ('Exp Ctr', 'Exp Ctr S'),
              ('Blk Exp Ctr', 'Blk Exp Ctr S')]
}
# -- GUI related
table_dict = {1:OrderedDict([('Master Name', None), ('SRC', False), ('DST', False)])}
spinbox_range = (-99, 99)
# - Widgets --------------------------------
class WTableView(QtGui.QTableWidget):
    def __init__(self, data):
        super(WTableView, self).__init__()

        # - Init
        self.setColumnCount(max(map(len, data.values())))
        self.setRowCount(len(data.keys()))

        # - Set
        self.setTable(data)
        self.itemChanged.connect(self.markChange)

        # - Styling
        self.horizontalHeader().setStretchLastSection(True)
        self.setAlternatingRowColors(True)
        self.setShowGrid(False)
        #self.resizeColumnsToContents()
        self.resizeRowsToContents()

    def setTable(self, data, data_check=[], reset=False):
        name_row, name_column = [], []
        self.blockSignals(True)

        self.setColumnCount(max(map(len, data.values())))
        self.setRowCount(len(data.keys()))

        # - Populate
        for n, layer in enumerate(sorted(data.keys())):
            name_row.append(layer)

            for m, key in enumerate(data[layer].keys()):
                # -- Build name column
                name_column.append(key)

                # -- Add first data column
                newitem = QtGui.QTableWidgetItem(str(data[layer][key])) if m == 0 else QtGui.QTableWidgetItem()

                # -- Selectively colorize missing data
                if m == 0 and len(data_check) and data[layer][key] not in data_check: newitem.setBackground(QtGui.QColor('red'))

                # -- Build Checkbox columns
                if m > 0: newitem.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
                if m > 0: newitem.setCheckState(QtCore.Qt.Unchecked if not data[layer][key] else QtCore.Qt.Checked)

                self.setItem(n, m, newitem)

        self.setHorizontalHeaderLabels(name_column)
        self.setVerticalHeaderLabels(name_row)
        self.blockSignals(False)

    def getTable(self):
        returnDict = {}
        for row in range(self.rowCount):
            #returnDict[self.item(row, 0).text()] = (self.item(row, 1).checkState() == QtCore.Qt.Checked, self.item(row, 2).checkState() == QtCore.Qt.Checked)
            if self.item(row, 1).checkState() == QtCore.Qt.Checked:
                returnDict.setdefault('SRC', []).append(self.item(row, 0).text())
            if self.item(row, 2).checkState() == QtCore.Qt.Checked:
                returnDict.setdefault('DST', []).append(self.item(row, 0).text())
        return returnDict

    def markChange(self, item):
        item.setBackground(QtGui.QColor('powderblue'))


# - Dialogs --------------------------------
class dlg_CopyLayer(QtGui.QDialog):
    def __init__(self):
        super(dlg_CopyLayer, self).__init__()

        # - Init
        self.active_font = pFont()
        self.pMode = 0

        # - Basic Widgets
        self.tab_masters = WTableView(table_dict)
        self.table_populate()

        self.edt_checkStr = QtGui.QLineEdit()
        self.edt_checkStr.setPlaceholderText('DST string')
        self.edt_checkStr.setToolTip('Enter search criteria for selectively selecting destination masters.')

        self.btn_refresh = QtGui.QPushButton('Clear')
        self.btn_checkOn = QtGui.QPushButton('Select')
        self.btn_execute = QtGui.QPushButton('Execute Selection')
        self.btn_preset_contrast = QtGui.QPushButton('Copy to Contrast Masters')
        self.btn_preset_width = QtGui.QPushButton('Copy to Width Masters')
        self.btn_preset_weight = QtGui.QPushButton('Copy to Weight Masters')
        self.btn_preset_ctrlt = QtGui.QPushButton('Copy to Light Contrast Masters')
        self.btn_preset_ctrlts = QtGui.QPushButton('Copy to Light Contrast Masters (incl. Slant)')
        self.btn_preset_slant = QtGui.QPushButton('Copy to Slant Masters')

        self.btn_refresh.clicked.connect(self.table_populate)
        self.btn_checkOn.clicked.connect(lambda: self.table_populate(True))
        self.btn_execute.clicked.connect(self.execute_table)
        self.btn_preset_contrast.clicked.connect(lambda: self.execute_preset(copy_presets['contrast']))
        self.btn_preset_width.clicked.connect(lambda: self.execute_preset(copy_presets['width']))
        self.btn_preset_weight.clicked.connect(lambda: self.execute_preset(copy_presets['weight']))
        self.btn_preset_ctrlt.clicked.connect(lambda: self.execute_preset(copy_presets['ctr_light']))
        self.btn_preset_ctrlts.clicked.connect(lambda: self.execute_preset(copy_presets['ctr_light_s']))
        self.btn_preset_slant.clicked.connect(lambda: self.execute_preset(copy_presets['slant']))

        self.rad_glyph = QtGui.QRadioButton('Glyph')
        self.rad_window = QtGui.QRadioButton('Window')
        self.rad_selection = QtGui.QRadioButton('Selection')
        self.rad_font = QtGui.QRadioButton('Font')

        self.chk_outline = QtGui.QCheckBox('Outline')
        self.chk_guides = QtGui.QCheckBox('Guides')
        self.chk_anchors = QtGui.QCheckBox('Anchors')
        self.chk_lsb = QtGui.QCheckBox('LSB')
        self.chk_adv = QtGui.QCheckBox('Advance')
        self.chk_rsb = QtGui.QCheckBox('RSB')
        self.chk_lnk = QtGui.QCheckBox('Metric Links')
        self.chk_crlayer = QtGui.QCheckBox('Add layers')

        # -- Set States
        self.chk_outline.setCheckState(QtCore.Qt.Checked)
        self.chk_adv.setCheckState(QtCore.Qt.Checked)
        self.chk_lsb.setCheckState(QtCore.Qt.Checked)
        self.chk_anchors.setCheckState(QtCore.Qt.Checked)
        self.chk_lnk.setCheckState(QtCore.Qt.Checked)
        self.chk_crlayer.setCheckState(QtCore.Qt.Checked)
        self.chk_guides.setEnabled(False)

        self.rad_glyph.setChecked(True)
        self.rad_glyph.setEnabled(True)
        self.rad_window.setEnabled(True)
        self.rad_selection.setEnabled(True)
        self.rad_font.setEnabled(False)

        self.rad_glyph.toggled.connect(self.refreshMode)
        self.rad_window.toggled.connect(self.refreshMode)
        self.rad_selection.toggled.connect(self.refreshMode)
        self.rad_font.toggled.connect(self.refreshMode)

        # - Build layouts
        layoutV = QtGui.QGridLayout()
        layoutV.addWidget(QtGui.QLabel('Process Mode:'), 0, 0, 1, 8, QtCore.Qt.AlignBottom)
        layoutV.addWidget(self.rad_glyph, 1, 0, 1, 2)
        layoutV.addWidget(self.rad_window, 1, 2, 1, 2)
        layoutV.addWidget(self.rad_selection, 1, 4, 1, 2)
        layoutV.addWidget(self.rad_font, 1, 6, 1, 2)
        layoutV.addWidget(QtGui.QLabel('Copy Options:'), 2, 0, 1, 8, QtCore.Qt.AlignBottom)
        layoutV.addWidget(self.chk_outline, 3, 0, 1, 2)
        layoutV.addWidget(self.chk_guides, 3, 2, 1, 2)
        layoutV.addWidget(self.chk_anchors, 3, 4, 1, 2)
        layoutV.addWidget(self.chk_crlayer, 3, 6, 1, 2)
        layoutV.addWidget(self.chk_lsb, 4, 0, 1, 2)
        layoutV.addWidget(self.chk_adv, 4, 2, 1, 2)
        layoutV.addWidget(self.chk_rsb, 4, 4, 1, 2)
        layoutV.addWidget(self.chk_lnk, 4, 6, 1, 2)
        layoutV.addWidget(QtGui.QLabel('Master Layers: Single source to multiple destinations'), 5, 0, 1, 8, QtCore.Qt.AlignBottom)
        layoutV.addWidget(QtGui.QLabel('Search:'), 6, 0, 1, 1)
        layoutV.addWidget(self.edt_checkStr, 6, 1, 1, 3)
        layoutV.addWidget(self.btn_checkOn, 6, 4, 1, 2)
        layoutV.addWidget(self.btn_refresh, 6, 6, 1, 2)
        layoutV.addWidget(self.tab_masters, 7, 0, 15, 8)
        layoutV.addWidget(self.btn_execute, 22, 0, 1, 8)
        layoutV.addWidget(QtGui.QLabel('Master Layers: Copy Presets'), 23, 0, 1, 8, QtCore.Qt.AlignBottom)
        layoutV.addWidget(self.btn_preset_weight, 24, 0, 1, 8)
        layoutV.addWidget(self.btn_preset_width, 25, 0, 1, 8)
        layoutV.addWidget(self.btn_preset_contrast, 26, 0, 1, 8)
        layoutV.addWidget(self.btn_preset_ctrlt, 27, 0, 1, 8)
        layoutV.addWidget(self.btn_preset_ctrlts, 28, 0, 1, 8)
        layoutV.addWidget(self.btn_preset_slant, 29, 0, 1, 8)

        # - Set Widget
        self.setLayout(layoutV)
        self.setWindowTitle('%s %s' % (app_name, app_version))
        self.setGeometry(300, 300, 300, 600)
        self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint)  # Always on top!!
        self.show()

    def refreshMode(self):
        if self.rad_glyph.isChecked(): self.pMode = 0
        if self.rad_window.isChecked(): self.pMode = 1
        if self.rad_selection.isChecked(): self.pMode = 2
        if self.rad_font.isChecked(): self.pMode = 3

    def copyLayer(self, glyph, srcLayerName, dstLayerName, options, cleanDST=False, addLayer=False):
        # -- Check if srcLayerExists
        if glyph.layer(srcLayerName) is None:
            print('WARN:\tGlyph: %s\tMissing source layer: %s\tSkipped!' % (glyph.name, srcLayerName))
            return

        # -- Check if dstLayerExists
        if glyph.layer(dstLayerName) is None:
            print('WARN:\tGlyph: %s\tMissing destination layer: %s\tAdd new: %s.' % (glyph.name, dstLayerName, addLayer))

            if addLayer:
                newLayer = fl6.flLayer()
                newLayer.name = str(dstLayerName)
                glyph.addLayer(newLayer)
            else:
                return

        # -- Outline
        if options['out']:
            # --- Get shapes
            srcShapes = glyph.shapes(srcLayerName)

            # --- Cleanup destination layers
            if cleanDST:
                glyph.layer(dstLayerName).removeAllShapes()

            # --- Copy/Paste shapes
            for shape in srcShapes:
                newShape = glyph.layer(dstLayerName).addShape(shape.cloneTopLevel())

            glyph.update()

        # -- Metrics
        if options['lsb']: glyph.setLSB(glyph.getLSB(srcLayerName), dstLayerName)
        if options['adv']: glyph.setAdvance(glyph.getAdvance(srcLayerName), dstLayerName)
        if options['rsb']: glyph.setRSB(glyph.getRSB(srcLayerName), dstLayerName)
        if options['lnk']:
            glyph.setLSBeq(glyph.getSBeq(srcLayerName)[0], dstLayerName)
            glyph.setRSBeq(glyph.getSBeq(srcLayerName)[1], dstLayerName)

        # -- Anchors
        if options['anc']:
            if cleanDST:
                glyph.clearAnchors(dstLayerName)

            for src_anchor in glyph.anchors(srcLayerName):
                #glyph.layer(dstLayerName).addAnchor(src_anchor)
                glyph.addAnchor((src_anchor.point.x(), src_anchor.point.y()), src_anchor.name, dstLayerName)

    def table_populate(self, filterDST=False):
        if not filterDST:
            self.tab_masters.setTable({n: OrderedDict([('Master Name', master), ('SRC', False), ('DST', False)]) for n, master in enumerate(self.active_font.pMasters.names)})
            self.tab_masters.resizeColumnsToContents()
        else:
            #print(';'.join(sorted(self.active_font.pMasters.names)))
            self.tab_masters.setTable({n: OrderedDict([('Master Name', master), ('SRC', False), ('DST', self.edt_checkStr.text in master)]) for n, master in enumerate(self.active_font.pMasters.names)})
            self.tab_masters.resizeColumnsToContents()

    def getCopyOptions(self):
        options = {'out': self.chk_outline.isChecked(),
                   'gui': self.chk_guides.isChecked(),
                   'anc': self.chk_anchors.isChecked(),
                   'lsb': self.chk_lsb.isChecked(),
                   'adv': self.chk_adv.isChecked(),
                   'rsb': self.chk_rsb.isChecked(),
                   'lnk': self.chk_lnk.isChecked(),
                   'ref': self.chk_crlayer.isChecked()
                   }
        return options

    def execute_table(self):
        # - Init
        copy_options = self.getCopyOptions()
        process_glyphs = getProcessGlyphs(self.pMode)

        # - Process
        process_dict = self.tab_masters.getTable()
        process_src = process_dict['SRC'][0]
        process_dst = process_dict['DST']

        for wGlyph in process_glyphs:
            for dst_layer in process_dst:
                self.copyLayer(wGlyph, process_src, dst_layer, copy_options, True, self.chk_crlayer.isChecked())

            wGlyph.update()
            wGlyph.updateObject(wGlyph.fl, 'Glyph: /%s;\tCopy Layer | %s -> %s.' % (wGlyph.name, process_src, '; '.join(process_dst)))

    def execute_preset(self, preset_list):
        # - Init
        copy_options = self.getCopyOptions()
        process_glyphs = getProcessGlyphs(self.pMode)
        print_preset = [' -> '.join(item) for item in preset_list]

        # - Process
        for wGlyph in process_glyphs:
            for process_src, process_dst in preset_list:
                self.copyLayer(wGlyph, process_src, process_dst, copy_options, True, self.chk_crlayer.isChecked())

            wGlyph.update()
            wGlyph.updateObject(wGlyph.fl, 'Glyph: /%s;\tCopy Layer Preset | %s.' % (wGlyph.name, ' | '.join(print_preset)))


# - RUN ------------------------------
dialog = dlg_CopyLayer() | 37.587258 | 191 | 0.67271 | 1,797 | 13,569 | 4.963272 | 0.176405 | 0.0259 | 0.051575 | 0.028254 | 0.330979 | 0.30037 | 0.226707 | 0.186344 | 0.122996 | 0.108869 | 0 | 0.01454 | 0.158597 | 13,569 | 361 | 192 | 37.587258 | 0.766664 | 0.091458 | 0 | 0.096525 | 0 | 0 | 0.118528 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.027027 | null | null | 0.015444 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4f355e9a0ec2580f7a7a369727ffb40554a5d73b | 1,108 | py | Python | morse_code.py | shwetabhsharan/leetcode | 6630592b1f962bb4c4bb3c83162a8ff12b2074b3 | [
"MIT"
] | null | null | null | morse_code.py | shwetabhsharan/leetcode | 6630592b1f962bb4c4bb3c83162a8ff12b2074b3 | [
"MIT"
] | null | null | null | morse_code.py | shwetabhsharan/leetcode | 6630592b1f962bb4c4bb3c83162a8ff12b2074b3 | [
"MIT"
] | null | null | null | """
Morse Code Implementation to tell unique pattern
Example:
Input: words = ["gin", "zen", "gig", "msg"]
Output: 2
Explanation:
The transformation of each word is:
"gin" -> "--...-."
"zen" -> "--...-."
"gig" -> "--...--."
"msg" -> "--...--."
There are 2 different transformations, "--...-." and "--...--.".
Notes
The length of words will be at most 100.
Each words[i] will have length in range [1, 12].
words[i] will only consist of lowercase letters.
"""
def uniqueMorseRepresentations(words):
    import string
    # International Morse codes for 'a'-'z'; this table was referenced but
    # never defined in the original snippet.
    morse_code_list = [
        ".-", "-...", "-.-.", "-..", ".", "..-.", "--.", "....", "..", ".---",
        "-.-", ".-..", "--", "-.", "---", ".--.", "--.-", ".-.", "...", "-",
        "..-", "...-", ".--", "-..-", "-.--", "--.."
    ]
    unique_code = []
    if len(words) > 100:
        return -1
    letter_map = dict()
    letters = [letter for letter in string.ascii_lowercase]
    for count, letter in enumerate(morse_code_list):
        letter_map[letters[count]] = morse_code_list[count]
    for word in words:
        morse_code = ""
        for char in word:
            morse_code = morse_code + letter_map[char]
        if morse_code not in unique_code:
            unique_code.append(morse_code)
    return len(unique_code)


words = ["gin", "zen", "gig", "msg"]
print(uniqueMorseRepresentations(words)) | 22.16 | 64 | 0.624549 | 142 | 1,108 | 4.753521 | 0.450704 | 0.106667 | 0.04 | 0.053333 | 0.05037 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013921 | 0.222022 | 1,108 | 50 | 65 | 22.16 | 0.769142 | 0 | 0 | 0 | 0 | 0 | 0.01849 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.055556 | null | null | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1
4f35e5d4ae5344e336395cce9d435d980c9b8a4f | 1,771 | py | Python | src/genie/libs/parser/iosxr/tests/ShowOspfVrfAllInclusiveShamLinks/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxr/tests/ShowOspfVrfAllInclusiveShamLinks/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxr/tests/ShowOspfVrfAllInclusiveShamLinks/cli/equal/golden_output_1_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z |
expected_output = {
"vrf": {
"VRF1": {
"address_family": {
"ipv4": {
"instance": {
"1": {
"areas": {
"0.0.0.1": {
"sham_links": {
"10.21.33.33 10.151.22.22": {
"cost": 111,
"dcbitless_lsa_count": 1,
"donotage_lsa": "not allowed",
"dead_interval": 13,
"demand_circuit": True,
"hello_interval": 3,
"hello_timer": "00:00:00:772",
"if_index": 2,
"local_id": "10.21.33.33",
"name": "SL0",
"link_state": "up",
"remote_id": "10.151.22.22",
"retransmit_interval": 5,
"state": "point-to-point,",
"transit_area_id": "0.0.0.1",
"transmit_delay": 7,
"wait_interval": 13,
}
}
}
}
}
}
}
}
}
}
}
| 42.166667 | 74 | 0.197628 | 89 | 1,771 | 3.719101 | 0.651685 | 0.024169 | 0.018127 | 0.024169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131115 | 0.711462 | 1,771 | 41 | 75 | 43.195122 | 0.516634 | 0 | 0 | 0 | 0 | 0 | 0.195025 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4f41108bae3999d613b16bce6c5067e9321c891a | 262 | py | Python | backoffice/utils/constant.py | MedPy-C/backend | 262834adb1f4f5714c4bd490595fdfa1f49c9675 | [
"MIT"
] | null | null | null | backoffice/utils/constant.py | MedPy-C/backend | 262834adb1f4f5714c4bd490595fdfa1f49c9675 | [
"MIT"
] | 1 | 2021-05-20T16:08:35.000Z | 2021-05-20T16:08:35.000Z | backoffice/utils/constant.py | MedPy-C/backend | 262834adb1f4f5714c4bd490595fdfa1f49c9675 | [
"MIT"
] | null | null | null | from enum import Enum
class RoleLevel(Enum):
OWNER = 0
ADMIN = 1
USER = 3
class Status(Enum):
ACTIVE = 1
INACTIVE = 0
class AccessLevel(Enum):
ADMIN = 0
USER = 1
class URL():
ACTIVATION = '/backoffice/invitation/activate/'
| 12.47619 | 51 | 0.614504 | 33 | 262 | 4.878788 | 0.606061 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037433 | 0.28626 | 262 | 20 | 52 | 13.1 | 0.823529 | 0 | 0 | 0 | 0 | 0 | 0.122137 | 0.122137 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4f41b185490551ba17dfbc19cca9e3a24ae60285 | 1,309 | py | Python | pages/migrations/0018_auto_20171102_1809.py | Vicarium/amy_site | eeb779aae74dc3af96f2837d876bafb8e13522d2 | [
"MIT"
] | null | null | null | pages/migrations/0018_auto_20171102_1809.py | Vicarium/amy_site | eeb779aae74dc3af96f2837d876bafb8e13522d2 | [
"MIT"
] | null | null | null | pages/migrations/0018_auto_20171102_1809.py | Vicarium/amy_site | eeb779aae74dc3af96f2837d876bafb8e13522d2 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.6 on 2017-11-02 18:09
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import wagtail.wagtailcore.fields
class Migration(migrations.Migration):
dependencies = [
('wagtailimages', '0019_delete_filter'),
('wagtailcore', '0040_page_draft_title'),
('pages', '0017_standardindexpage_template_string'),
]
operations = [
migrations.CreateModel(
name='TestimonialPage',
fields=[
('page_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='wagtailcore.Page')),
('intro', wagtail.wagtailcore.fields.RichTextField(blank=True)),
('feed_image', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='wagtailimages.Image')),
],
options={
'abstract': False,
},
bases=('wagtailcore.page',),
),
migrations.AlterField(
model_name='testimonial',
name='text',
field=wagtail.wagtailcore.fields.RichTextField(),
),
]
| 35.378378 | 191 | 0.624905 | 131 | 1,309 | 6.068702 | 0.580153 | 0.040252 | 0.05283 | 0.083019 | 0.085535 | 0.085535 | 0.085535 | 0 | 0 | 0 | 0 | 0.029352 | 0.245225 | 1,309 | 36 | 192 | 36.361111 | 0.775304 | 0.051948 | 0 | 0.068966 | 1 | 0 | 0.176898 | 0.047658 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.137931 | 0 | 0.241379 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: nomad/images/migrations/0003_auto_20181218_2248.py (jss8882/nomad, MIT)
# Generated by Django 2.0.9 on 2018-12-18 13:48

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('images', '0002_auto_20181218_2029'),
    ]

    operations = [
        migrations.RenameField(
            model_name='like',
            old_name='message',
            new_name='creator',
        ),
    ]
# File: dcodex_lectionary/migrations/0031_auto_20201119_2140.py (rbturnbull/dcodex_lectionary, Apache-2.0)
# Generated by Django 3.0.11 on 2020-11-19 10:40

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('dcodex_lectionary', '0030_auto_20201119_2131'),
    ]

    operations = [
        migrations.RenameField(
            model_name='movableday',
            old_name='period',
            new_name='season',
        ),
    ]
# File: pronotepy/ent/test_ent.py (Bapt5/pronotepy, MIT)
import unittest
from inspect import getmembers, isfunction
from functools import partial

import pronotepy
from pronotepy import ent


class TestENT(unittest.TestCase):
    functions: list

    @classmethod
    def setUpClass(cls) -> None:
        cls.functions = getmembers(
            ent, lambda x: isfunction(x) or isinstance(x, partial)
        )

    def test_functions(self) -> None:
        for func in self.functions:
            self.assertRaises(pronotepy.ENTLoginError, func[1], "username", "password")


if __name__ == "__main__":
    unittest.main()
# File: examples/lolcode_rockstar.py (hoojaoh/rockstar, MIT)
from rockstar import RockStar

lolcode_code = """HAI
CAN HAS STDIO?
VISIBLE "HAI WORLD!"
KTHXBYE"""

rock_it_bro = RockStar(days=400, file_name='helloworld.lol', code=lolcode_code)
rock_it_bro.make_me_a_rockstar()
# File: fca/algorithms/exploration/examples/bi_unars.py (ksiomelo/cubix, Apache-2.0)
#!/usr/bin/env python
# encoding: utf-8

import copy
import itertools

import fca
from fca.algorithms.exploration.exploration import (AttributeExploration,
                                                    ExplorationDB)
from fca import Implication
# from fca.algorithms.closure_operators import simple_closure as closure


class BiUnar(object):

    def __init__(self, f, g):
        self.f = f
        self.g = g

    def __str__(self):
        return '%s%s' % (self.f, self.g)

    def automorphic_copy(self):
        return BiUnar(self.g, self.f)


class Term(object):

    def __init__(self, name, function):
        self._name = name
        self._function = function

    def __call__(self, bu):
        return self._function(bu)

    def __str__(self):
        return self._name


class Equality(object):

    def __init__(self, left_term, right_term):
        self.left = left_term
        self.right = right_term

    def __call__(self, bu):
        return self.left(bu) == self.right(bu)

    def __str__(self):
        return "%s = %s" % (self.left, self.right)

    def automorphic_copy(self):
        return Equality(get_symmetric_term(self.left),
                        get_symmetric_term(self.right))

    #
    # def __eq__(self, eq):
    #     return ((self.left == eq.left and self.right == eq.right) or
    #             (self.left == eq.right and self.right == eq.left))
    #
    # def __ne__(self, eq):
    #     return not self == eq
    #
    # def __hash__(self):
    #     if self.left < self.right:
    #         return hash((self.left, self.right))
    #     else:
    #         return hash((self.right, self.left))


class CommandLineExpert(object):

    def is_valid(self, imp):
        print "{0}".format(imp)
        return input('Is the following implication valid? Enter "True" or "False": {0}'.format(imp))

    def explore(self, exploration):
        while exploration.get_open_implications():
            imp = exploration.get_open_implications()[0]
            if self.is_valid(imp):
                exploration.confirm_implications([imp,
                                                  get_symmetric_implication(imp)])
            else:
                exploration.reject_implication(imp)

    def provide_counterexample(self, imp):
        print 'Provide a counterexample by typing in two tuples.'
        bu = BiUnar(input('f: '), input('g: '))
        if input('Add as a partial example? Enter "True" or "False": '):
            intent = generate_partial_counterexample(imp, bu)
        else:
            intent = [bu_intent(bu)] * 2  # since our context is partial
        return ((bu, bu.automorphic_copy()),
                (intent, [get_symmetric_attribute_set(s) for s in intent]))


def compose(f, g):
    return tuple([f[x - 1] for x in g])


def generate_examples(n):
    maps = itertools.product(range(1, n + 1), repeat=n)
    return (BiUnar(mm[0], mm[1]) for mm in itertools.product(maps, repeat=2))


def bu_intent(bu):
    return set([a for a in attributes if a(bu)])


def generate_partial_counterexample(imp, bu):
    relevant_attributes = imp.premise | imp.conclusion
    xintent = set([a for a in relevant_attributes if a(bu)])
    qintent = (set(attributes) - relevant_attributes) | xintent
    return xintent, qintent


def find_counter_example(n, imp):
    for e in generate_examples(n):
        if not imp.is_respected(bu_intent(e)):
            return e


def generate_context(n, attributes):
    objects = [e for e in generate_examples(n)]
    table = [[a(o) for a in attributes] for o in objects]
    cxt = fca.partial_context.PartialContext(table,
                                             copy.deepcopy(table),
                                             objects,
                                             attributes)
    # TODO: reduce cxt
    return cxt


def generate_background_implications(attributes):
    return [Implication(set([x, y]), set([z]))
            for x in attributes
            for y in attributes
            for z in attributes
            if (x.left == z.left and
                x.right == y.left and
                y.right == z.right) or
               (x.left == z.left and
                x.right == y.right and
                y.left == z.right) or
               (x.left == y.left and
                x.right == z.left and
                y.right == z.right)
            ]


def is_orbit_maximal(eqs):
    symmetric_eqs = get_symmetric_attribute_set(eqs)
    for a in attributes:
        if a in eqs:
            if a not in symmetric_eqs:
                return True
        elif a in symmetric_eqs:
            return False
    return True


def get_symmetric_attribute_set(eqs):
    symmetric_eqs = set([])
    for e in eqs:
        left = get_symmetric_term(e.left)
        right = get_symmetric_term(e.right)
        if terms.index(left) > terms.index(right):
            left, right = right, left
        for a in attributes:
            if a.left == left and a.right == right:
                symmetric_eqs.add(a)
                break
    return symmetric_eqs


def get_symmetric_term(term):
    s = str(term)
    s = s.replace('f', 'F')
    s = s.replace('g', 'f')
    s = s.replace('F', 'g')
    for t in terms:
        if str(t) == s:
            return t


def get_symmetric_equality(e):
    s = str(e.automorphic_copy())
    for a in attributes:
        if str(a) == s:
            return a


def get_symmetric_implication(imp):
    return Implication(get_symmetric_attribute_set(imp.premise),
                       get_symmetric_attribute_set(imp.conclusion))


terms = [
    Term('id', lambda bu: tuple(range(1, len(bu.f) + 1))),
    Term('f', lambda bu: bu.f),
    Term('g', lambda bu: bu.g),
    Term('ff', lambda bu: compose(bu.f, bu.f)),
    Term('fg', lambda bu: compose(bu.f, bu.g)),
    Term('gf', lambda bu: compose(bu.g, bu.f)),
    Term('gg', lambda bu: compose(bu.g, bu.g))
]

attributes = [Equality(terms[i], terms[j]) for i in range(len(terms))
              for j in range(i + 1, len(terms))]

db = ExplorationDB(generate_context(3, attributes),
                   generate_background_implications(attributes),
                   is_orbit_maximal)
expert = CommandLineExpert()
exploration = AttributeExploration(db, expert)
# expert.explore(exploration)
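Maps are stored as 1-indexed tuples here, so `compose(f, g)` builds the map x -> f(g(x)). A standalone check of that convention (re-defining `compose` so the snippet runs on its own; the example permutations are invented):

```python
def compose(f, g):
    # (f o g)(x) = f[g(x)], with the tuples using 1-based indexing
    return tuple([f[x - 1] for x in g])

f = (2, 3, 1)   # f: 1->2, 2->3, 3->1
g = (3, 1, 2)   # g: 1->3, 2->1, 3->2, the inverse of f
print(compose(f, g))  # (1, 2, 3), i.e. the identity map
```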
# File: Lexical_Semantics/Repositories/Word2Vec_and_Fasttext/word2vec-and-fasttext.py (MWTA/Text-Mining, MIT)
import numpy as np
import os
import re
import urllib.request
import zipfile

import lxml.etree
from gensim.models import FastText
from random import shuffle

# download the data
urllib.request.urlretrieve(
    "https://wit3.fbk.eu/get.php?path=XML_releases/xml/ted_en-20160408.zip&filename=ted_en-20160408.zip",
    filename="ted_en-20160408.zip")

# extract the subtitles
with zipfile.ZipFile('ted_en-20160408.zip', 'r') as z:
    doc = lxml.etree.parse(z.open('ted_en-20160408.xml', 'r'))
input_text = '\n'.join(doc.xpath('//content/text()'))

# remove parentheses
input_text_noparens = re.sub(r'\([^)]*\)', '', input_text)

# store as a list of sentences
sentences_strings_ted = []
for line in input_text_noparens.split('\n'):
    m = re.match(r'^(?:(?P<precolon>[^:]{,20}):)?(?P<postcolon>.*)$', line)
    sentences_strings_ted.extend(sent for sent in m.groupdict()[
        'postcolon'].split('.') if sent)

# store as a list of lists of words
sentences_ted = []
for sent_str in sentences_strings_ted:
    tokens = re.sub(r"[^a-z0-9]+", " ", sent_str.lower()).split()
    sentences_ted.append(tokens)

model_ted = FastText(sentences_ted, size=100, window=5,
                     min_count=5, workers=4, sg=1)
model_ted.wv.most_similar("Gastroenteritis")
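The `precolon`/`postcolon` regex above drops short speaker labels (at most 20 non-colon characters before a colon) and keeps the rest of the line, which is then split on periods. A standalone illustration of that behaviour (the sample line is invented):

```python
import re

line = "Narrator: This is a test. It works"
m = re.match(r'^(?:(?P<precolon>[^:]{,20}):)?(?P<postcolon>.*)$', line)
# the speaker label "Narrator" is captured by precolon and discarded
sents = [sent for sent in m.groupdict()['postcolon'].split('.') if sent]
print(sents)  # [' This is a test', ' It works']
```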
# File: Source Code/Python API/multivent_client.py (D-TACQ/acq400_lv, MIT)
#!/usr/local/bin/python
# UUT is running continuous pre/post snapshots.
# Subscribe to the snapshots and save all the data.

import threading
import epics
import argparse
import time
import datetime
import os

NCHAN = 16
# WF record, raw binary (shorts)
WFNAME = ":1:AI:WF:{:02d}"
# alt WF record, VOLTS. Kind of harder to store this in a portable way..
# WFNAME = ":1:AI:WF:{:02d}:V.VALA"
# 1:AI:WF:08:V.VALA


class Uut:
    root = "DATA"

    def make_file_name(self, upcount):
        timecode = datetime.datetime.now().strftime("%Y/%m/%d/%H/%M/")
        return self.root + "/" + timecode + "{:06d}".format(upcount)

    def store_format(self, path):
        # create a kst / dirfile compatible format file
        fp = open(path + "/format", "w")
        fp.write("# format file {}\n".format(path))
        # TODO enter start sample from event sample count
        fp.write("START_SAMPLE CONST UINT32 0\n")
        fp.writelines(["CH{:02d} RAW s 1\n".format(ch) for ch in range(1, NCHAN + 1)])
        fp.close()

    def on_update(self, **kws):
        self.upcount = self.upcount + 1
        fn = self.make_file_name(self.upcount)
        print(fn)
        if not os.path.isdir(fn):
            os.makedirs(fn)
        for ch in range(1, NCHAN + 1):
            yy = self.channels[ch - 1].get()
            yy.astype('int16').tofile(fn + "/CH{:02d}".format(ch))
        self.store_format(fn)
        print("{} {}".format(kws['pvname'], kws['value']))
        print(self.channels[1])

    def monitor(self):
        self.channels = [epics.PV(self.name + WFNAME.format(ch)) for ch in range(1, NCHAN + 1)]
        updates = epics.PV(self.name + ":1:AI:WF:01:UPDATES",
                           auto_monitor=True, callback=self.on_update)

    def __init__(self, _name):
        self.name = _name
        self.upcount = 0
        threading.Thread(target=self.monitor).start()


def multivent(parser):
    uuts = [Uut(_name) for _name in parser.uuts]
    for u in uuts:
        u.root = parser.root
    while True:
        time.sleep(0.5)


def run_main():
    parser = argparse.ArgumentParser(description='acq400 multivent')
    parser.add_argument('--root', type=str, default="DATA", help="output root path")
    parser.add_argument('uuts', nargs='+', help="uut names")
    multivent(parser.parse_args())


# execution starts here
if __name__ == '__main__':
    run_main()
# File: herramientas/zapador/zapador/zapadorapp.py (ZR-TECDI/Framework_ZR, MIT)
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.gridlayout import GridLayout
from kivy.uix.screenmanager import ScreenManager, Screen, FadeTransition
import zapador.constantes as cons
from zapador.metodos import pre_run
from zapador.clases import *
from kivy.factory import Factory

ubicacion = cons.DIR_SCRIPT + '/zapador/kv/'

with open('{}main.kv'.format(ubicacion), encoding='UTF-8') as f:
    Builder.load_string(f.read())
with open('{}clases.kv'.format(ubicacion), encoding='UTF-8') as f:
    Builder.load_string(f.read())
with open('{}contenido.kv'.format(ubicacion), encoding='UTF-8') as f:
    Builder.load_string(f.read())


class Pantalla_Nueva(Screen):
    pass


class Pantalla_Importar(Screen):
    popup = Factory.CargarMision()
    importar = None

    def on_pre_enter(self):
        self.importar = self.children[0].children[0]
        self.popup.papi = self.importar
        self.popup.open()


class Pantalla_Opciones(Screen):
    pass


sm = ScreenManager(transition=FadeTransition())
sm.add_widget(Pantalla_Nueva(name='pantalla_nueva'))
sm.add_widget(Pantalla_Importar(name='pantalla_importar'))
sm.add_widget(Pantalla_Opciones(name='pantalla_opciones'))


def cambiar_pantalla(pantalla):
    sm.current = pantalla


class ZapadorApp(App):
    """Entry point for the Zapador app."""

    def on_start(self):
        pre_run()

    def on_stop(self):
        Descargando.stop.set()

    def build(self):
        self.title = 'Zapador v' + cons.VERSION
        self.icon = 'zapador/assets/img/zapador.ico'
        return sm
# File: api/routers/fetch.py (ThisIsBrainDamage/OSC-API, MIT)
# Standard library imports
import os

# Third party imports
from fastapi import Depends, APIRouter, HTTPException
from dotenv import load_dotenv

# Local imports
from ..auth.classes import User
from ..auth.authenticate import get_current_active_user
from ..data import fetch_all, fetch_item

load_dotenv()

fetch = APIRouter()


@fetch.get("/find")
async def get_by_name(name: str, current_user: User = Depends(get_current_active_user)):
    """
    Looks for an item in the database.
    """
    data = await fetch_item(name)
    if data is None:
        # raise (not return) so FastAPI turns this into an actual 404 response
        raise HTTPException(status_code=404, detail="Item not found")
    return data


@fetch.get("/get_all")
async def get_all(current_user: User = Depends(get_current_active_user)):
    """
    Gets all items from the database.
    """
    data = await fetch_all()
    return data
# File: graphium/graph_management/model/access.py (graphium-project/graphium-qgis-plugin, Apache-2.0)
# -*- coding: utf-8 -*-
"""
/***************************************************************************
 QGIS plugin 'Graphium'
/***************************************************************************
*
* Copyright 2020 Simon Gröchenig @ Salzburg Research
* eMail graphium@salzburgresearch.at
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
***************************************************************************/
"""
from enum import Enum


class Access(Enum):
    PEDESTRIAN = 1
    BIKE = 2
    PRIVATE_CAR = 3
    PUBLIC_BUS = 4
    RAILWAY = 5
    TRAM = 6
    SUBWAY = 7
    FERRY_BOAT = 8
    HIGH_OCCUPATION_CAR = 9
    TRUCK = 10
    TAXI = 11
    EMERGENCY_VEHICLE = 12
    MOTOR_COACH = 13
    TROLLY_BUS = 14
    MOTORCYCLE = 15
    RACK_RAILWAY = 16
    CABLE_RAILWAY = 17
    CAR_FERRY = 18
    CAMPER = 19
    COMBUSTIBLES = 20
    HAZARDOUS_TO_WATER = 21
    GARBAGE_COLLECTION_VEHICLE = 22
    ELECTRIC_CAR = 23
    NONE = -1
4f746f8a5d2093a7b2129bda73802e19923f5043 | 851 | py | Python | ocean_utils/agreements/utils.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | null | null | null | ocean_utils/agreements/utils.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | 2 | 2019-12-16T11:26:21.000Z | 2021-03-18T13:06:31.000Z | ocean_utils/agreements/utils.py | oceanprotocol/common-utils-py | f577f4762841496584e114baaec0d476e73c700e | [
"Apache-2.0"
] | null | null | null | """Agreements module."""
# Copyright 2018 Ocean Protocol Foundation
# SPDX-License-Identifier: Apache-2.0
from ocean_utils.agreements.access_sla_template import ACCESS_SLA_TEMPLATE
from ocean_utils.agreements.compute_sla_template import COMPUTE_SLA_TEMPLATE
from ocean_utils.agreements.service_types import ServiceTypes
def get_sla_template(service_type=ServiceTypes.ASSET_ACCESS):
"""
Get the template for a ServiceType.
:param service_type: ServiceTypes
:return: template dict
"""
if service_type == ServiceTypes.ASSET_ACCESS:
return ACCESS_SLA_TEMPLATE['serviceAgreementTemplate'].copy()
elif service_type == ServiceTypes.CLOUD_COMPUTE:
return COMPUTE_SLA_TEMPLATE['serviceAgreementTemplate'].copy()
else:
raise ValueError(f'Invalid/unsupported service agreement type {service_type}')
| 35.458333 | 86 | 0.780259 | 100 | 851 | 6.38 | 0.46 | 0.12069 | 0.144201 | 0.112853 | 0.216301 | 0.109718 | 0 | 0 | 0 | 0 | 0 | 0.008219 | 0.142186 | 851 | 23 | 87 | 37 | 0.865753 | 0.225617 | 0 | 0 | 0 | 0 | 0.166932 | 0.076312 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1 | false | 0 | 0.3 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4f74e6b364f85cdca76783ea050b55c4b0e2bf27 | 1,750 | py | Python | mobilenetv1.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | mobilenetv1.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | mobilenetv1.py | Monster880/pytorch_py | 9c5ac5974f48edb5ea3d897a1100a63d488c61d9 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
class mobilenet(nn.Module):
def __init__(self):
super(mobilenet, self).__init__()
self.conv_1 = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
nn.BatchNorm2d(64),
nn.ReLU()
)
self.conv_dw2 = self.conv_dw(32, 32, 1)
self.conv_dw3 = self.conv_dw(32, 64, 2)
self.conv_dw4 = self.conv_dw(64, 64, 1)
self.conv_dw5 = self.conv_dw(64, 128, 2)
self.conv_dw6 = self.conv_dw(128, 128, 1)
self.conv_dw7 = self.conv_dw(128, 256, 2)
self.conv_dw8 = self.conv_dw(256, 256, 1)
self.conv_dw9 = self.conv_dw(256, 512, 2)
self.fc = nn.Linear(512, 10)
def conv_dw(in_channel, out_channel, stride):
nn.Sequential(
nn.Conv2d(in_channel, out_channel, kernel_size=3 ,stride=stride, padding=1, groups=in_channel, bias=False),
nn.BatchNorm2d(in_channel),
nn.ReLU(),
nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=1, padding=0, bias=False),
nn.BatchNorm2d(in_channel),
nn.ReLU(),
)
self.conv_dw = conv_dw
raise
def forward(self,x):
out = self.conv1(x)
out = self.conv_dw2(out)
out = self.conv_dw3(out)
out = self.conv_dw4(out)
out = self.conv_dw5(out)
out = self.conv_dw6(out)
out = self.conv_dw7(out)
out = self.conv_dw8(out)
out = self.conv_dw9(out)
out = F.avg_pool2d(out,2)
out = out.view(-1, 512)
out = self.fc(out)
return out
def mobilenetv1_small():
return mobilenet() | 30.701754 | 123 | 0.566286 | 254 | 1,750 | 3.704724 | 0.232283 | 0.221041 | 0.095643 | 0.104145 | 0.157279 | 0.157279 | 0.157279 | 0.157279 | 0 | 0 | 0 | 0.08126 | 0.310857 | 1,750 | 57 | 124 | 30.701754 | 0.699005 | 0 | 0 | 0.085106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085106 | false | 0 | 0.06383 | 0.021277 | 0.212766 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
# File: Test_cases/Generated_Python/loops.py (TY-Projects/Code-Converter, MIT)
def main():
    i = 1
    fib = 1
    target = 10
    temp = 0
    while i < target:
        temp = fib
        fib += temp
        i += 1
    print(fib)
    return 0


if __name__ == '__main__':
    main()
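Note that the generated loop doubles `fib` on every pass (`temp = fib` immediately followed by `fib += temp`), so despite the variable name it does not produce the Fibonacci sequence. For comparison, a conventional two-variable Fibonacci loop looks like this (a standalone sketch; `fib_upto` is a made-up name):

```python
def fib_upto(target):
    # classic two-variable Fibonacci loop: a holds fib(n), b holds fib(n+1)
    a, b = 1, 1
    for _ in range(1, target):
        a, b = b, a + b
    return a

print(fib_upto(10))  # 55
```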
# File: xgboost/dmlc-core/tracker/dmlc_tracker/sge.py (EnjoyLifeFund/macHighSierra-py36-pkgs)
"""Submit jobs to Sun Grid Engine."""
# pylint: disable=invalid-name
import os
import subprocess
from . import tracker


def submit(args):
    """Job submission script for SGE."""
    if args.jobname is None:
        args.jobname = ('dmlc%d.' % args.num_workers) + args.command[0].split('/')[-1]
    if args.sge_log_dir is None:
        args.sge_log_dir = args.jobname + '.log'

    if os.path.exists(args.sge_log_dir):
        if not os.path.isdir(args.sge_log_dir):
            raise RuntimeError('specified --sge-log-dir %s is not a dir' % args.sge_log_dir)
    else:
        os.mkdir(args.sge_log_dir)

    runscript = '%s/rundmlc.sh' % args.logdir
    fo = open(runscript, 'w')
    fo.write('source ~/.bashrc\n')
    fo.write('export DMLC_TASK_ID=${SGE_TASK_ID}\n')
    fo.write('export DMLC_JOB_CLUSTER=sge\n')
    fo.write('"$@"\n')
    fo.close()

    def sge_submit(nworker, nserver, pass_envs):
        """Internal submission function."""
        env_arg = ','.join(['%s="%s"' % (k, str(v)) for k, v in pass_envs.items()])
        cmd = 'qsub -cwd -t 1-%d -S /bin/bash' % (nworker + nserver)
        if args.queue != 'default':
            cmd += ' -q %s' % args.queue  # note the leading space before -q
        cmd += ' -N %s ' % args.jobname
        cmd += ' -e %s -o %s' % (args.logdir, args.logdir)
        cmd += ' -pe orte %d' % (args.vcores)
        cmd += ' -v %s,PATH=${PATH}:.' % env_arg
        cmd += ' %s %s' % (runscript, ' '.join(args.command))
        print(cmd)
        subprocess.check_call(cmd, shell=True)
        print('Waiting for the jobs to get up...')

    # call submit, with nslave, the commands to run each job and submit function
    tracker.submit(args.num_workers, args.num_servers,
                   fun_submit=sge_submit,
                   pscmd=' '.join(args.command))
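Inside `sge_submit`, the `env_arg` join turns the `pass_envs` dict into the comma-separated key="value" list that `qsub -v` expects. A standalone illustration (the variable names and values below are made up):

```python
pass_envs = {'DMLC_NUM_WORKER': 2, 'DMLC_NUM_SERVER': 1}  # hypothetical values
# each entry becomes KEY="value"; entries are joined with commas for qsub -v
env_arg = ','.join(['%s="%s"' % (k, str(v)) for k, v in pass_envs.items()])
print(env_arg)  # DMLC_NUM_WORKER="2",DMLC_NUM_SERVER="1"
```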
4f8d150b4ae88cb9a27ba7e46fc92e9ece8c3e04 | 11,435 | py | Python | 14_plot_target-list.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | 1 | 2021-07-30T09:59:41.000Z | 2021-07-30T09:59:41.000Z | 14_plot_target-list.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | null | null | null | 14_plot_target-list.py | kuntzer/SALSA-public | 79fd601d3999ac977bbc97be010b2c4ef81e4c35 | [
"BSD-3-Clause"
] | 1 | 2021-07-30T10:38:54.000Z | 2021-07-30T10:38:54.000Z | ''' 14-plot_target-list.py
===============================================
AIM: Given a catalogue of objects, plots when the targets are visible according to their magnitude for a given period of time.
INPUT: files: - <orbit_id>_misc/orbits.dat
- <orbit_id>_flux/flux_*.dat
variables: see section PARAMETERS (below)
OUTPUT: in <orbit_id>_figures/ : (see below for file name definition)
CMD: python 14-plot_target-list.py
ISSUES: <none known>
REQUIRES:- standard python libraries, specific libraries in resources/ (+ SciPy)
- BaseMap --> http://matplotlib.org/basemap/
- Structure of the root folder:
* <orbit_id>_flux/ --> flux files
* <orbit_id>_figures/ --> figures
* <orbit_id>_misc/ --> storage of data
* all_figures/ --> comparison figures
REMARKS: based on 11-<...>.py, but has a better way of saving the appearance and disappearance of the targets, using the class in resources/targets.py
'''
###########################################################################
### INCLUDES
import numpy as np
import pylab as plt
import matplotlib.cm as cm
import time
from resources.routines import *
from resources.TimeStepping import *
from resources.targets import *
import parameters as param
import resources.constants as const
import resources.figures as figures
import sys
from matplotlib import dates
from matplotlib.ticker import MaxNLocator, MultipleLocator, FormatStrFormatter
###########################################################################
### PARAMETERS
# orbit_id
orbit_id = 701
apogee=700
perigee=700
# First minute analysis
minute_ini = 0
# Last minute to look for
minute_end = 1440
# Include SAA ?
SAA = False
# Show plots
show = True
# Save the picture ?
save = True
# Fancy plots ?
fancy = True
# Take into account the stray light?
straylight = True
# Minimum observable time for plots
threshold_obs_time = 50
# Time to acquire a target
t_acquisition = 6
# Catalogue name (in resources/)
catalogue = 'cheops_target_list_v0.1.dat'
# Maximum magnitude that can be seen by CHEOPS, only for cosmetics purposes
CHEOPS_mag_max = 12
# File name for the list of orbit file
orbits_file = 'orbits.dat'
# Factor in the SL post treatment correction ?
SL_post_treat = True
# Factor in mirror efficiency for the equivalent star magnitude ?
mirror_correction = True
#####################################################################################################################
# CONSTANTS AND PHYSICAL PARAMETERS
period = altitude2period(apogee,perigee)
###########################################################################
### INITIALISATION
file_flux = 'flux_'
# changes the threshold by adding the acquisition time:
threshold_obs_time += t_acquisition
# Formatted folders definitions
folder_flux, folder_figures, folder_misc = init_folders(orbit_id)
## Prepare grid
n_alpha = param.resx
n_delta = param.resy
ra_i = 0
ra_f = 2.*np.pi
dec_i = -np.pi/2.
dec_f = np.pi/2.
ra_step = (ra_f-ra_i)/n_alpha
dec_step = (dec_f-dec_i)/n_delta
iterable = (ra_i + ra_step/2 + i*ra_step for i in range(n_alpha))
ras = np.fromiter(iterable, float)
iterable = (dec_i + dec_step/2 + i*dec_step for i in range(n_delta))
decs = np.fromiter(iterable, float)
ra_grid, dec_grid = np.meshgrid(ras, decs)
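The generator-plus-`fromiter` pattern above samples each interval at cell centres, offset half a step from the edges. A tiny standalone check of that convention (the grid size here is an assumed toy value, not `param.resx`):

```python
import math

# Cell-centred sampling of [0, 2*pi) with n cells, mirroring the pattern above.
n = 4
step = 2.0*math.pi/n
centers = [step/2 + i*step for i in range(n)]
# first centre sits half a step in; last centre sits half a step before 2*pi
```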
if SAA:
    SAA_data = np.loadtxt('resources/SAA_table_%d.dat' % orbit_id, delimiter=',')
    SAA_data = SAA_data[SAA_data[:,0] >= minute_ini]
    SAA_data = SAA_data[SAA_data[:,0] <= minute_end]
computed_orbits = np.loadtxt(folder_misc+orbits_file)[:,0]
############################################################################
### Load catalogue and assign them to the nearest grid point
name_cat, ra_cat, dec_cat, mag_cat = load_catalogue(catalogue)
index_ra_cat = np.zeros(np.shape(ra_cat))
index_dec_cat= np.zeros(np.shape(ra_cat))
targets = []
for name, ra, dec, mag in zip(name_cat, ra_cat, dec_cat, mag_cat):
    id_ra = find_nearest(ras, ra/const.RAD)
    id_dec = find_nearest(decs, dec/const.RAD)
    targets.append(target_list(name, ra/const.RAD, id_ra, dec/const.RAD, id_dec, mag, int(period+3)))
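The loop above relies on `find_nearest` from `resources/routines`; a minimal sketch of what such a helper typically does — an assumption for illustration, not the SALSA implementation:

```python
# Sketch of the assumed behaviour: index of the grid entry closest to `value`.
def find_nearest_sketch(array, value):
    return min(range(len(array)), key=lambda i: abs(array[i] - value))
```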
# Apply the flux correction (SL post-treatment removal and the mirror efficiency)
corr_fact = 1.0
if mirror_correction: corr_fact /= param.mirror_efficiency
if SL_post_treat: corr_fact *= (1.0 - param.SL_post_treat_reduction)
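The two flags above compose a single multiplicative flux correction. A numeric illustration with assumed stand-in values (the real `param.mirror_efficiency` and `param.SL_post_treat_reduction` are not given in this file):

```python
example_mirror_efficiency = 0.8   # assumed stand-in for param.mirror_efficiency
example_SL_reduction = 0.3        # assumed stand-in for param.SL_post_treat_reduction

# divide by the mirror efficiency, then remove the stray-light post-treatment share
example_corr_fact = 1.0 / example_mirror_efficiency * (1.0 - example_SL_reduction)
# with these numbers: 0.7 / 0.8 = 0.875
```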
############################################################################
### Start the analysis
start = time.time()
# Prepare the arrays
visibility = np.zeros(np.shape(ra_grid))
#observations = np.zeros(len(name_cat)*)
workspace = np.zeros(np.shape(ra_grid))
#data = np.zeros(np.shape(ra_grid))
# Load the reference times
orbits = np.loadtxt(folder_misc+orbits_file, dtype='i4')
minutes_orbit_iditude = np.loadtxt('resources/minute_table_%d.dat' % orbit_id, delimiter=',', dtype='int32')
# Set variables for printing the advance
numberofminutes = minute_end+1 - minute_ini
lo = fast_minute2orbit(minutes_orbit_iditude,minute_end, orbit_id)
fo = fast_minute2orbit(minutes_orbit_iditude,minute_ini, orbit_id)
lp = -1
junk, junk, at_ini, junk = fast_orbit2times(minutes_orbit_iditude, fo, orbit_id)
first_computed = computed_orbits[computed_orbits<=fo][-1]
first_minute = minute_ini
last_minute = minute_end
if not fo == first_computed:
    junk, junk, minute_ini, junk = fast_orbit2times(minutes_orbit_iditude, first_computed, orbit_id)
    # print '1st referenced orbit: %d\twanted orbit: %d' % (first_computed, fo)
try:
    for minute in range(minute_ini, minute_end+1+int(period)):
        minute = int(minute)
        if SAA and fast_SAA(SAA_data, minute): SAA_at_minute = True
        else: SAA_at_minute = False
        orbit_current = fast_minute2orbit(minutes_orbit_iditude, minute, orbit_id)
        if orbit_current > lp:
            lp = orbit_current
            message = "Analysing orbit %d on %d...\t" % (lp, lo)
            sys.stdout.write('\r'*len(message))
            sys.stdout.write(message)
            sys.stdout.flush()
            junk, len_orbit, atc_ini, junk = fast_orbit2times(minutes_orbit_iditude, orbit_current, orbit_id)
        try:
            ra, dec, S_sl = load_flux_file(minute, file_flux, folder=folder_flux)
            load = True
            minute_to_load = minute - atc_ini  # +shift
        except IOError:
            # if there is nothing then well, do nothing ie we copy the past values
            # in which orbit are we ?
            # get the previous orbit computed and copy the stray light data of this orbit:
            # orbit_previous = orbits[orbits[:,0] < orbit_current][-1,0]
            # minute_replacement = minute - atc_ini + shift #+ at_ini
            minute_to_load = minute - atc_ini
            for obj in targets:
                if SAA_at_minute:
                    obj.current_visibility = 0
                else:
                    obj.current_visibility = obj.visible_save[minute_to_load]
            load = False
        # populate the visibility matrix
        # for ii in range(0, targets[0].CountObjects()):
        if load:
            for obj in targets:
                ra_ = obj.ra
                dec_ = obj.dec
                a = np.where(np.abs(ra_ - ra) < ra_step/2)[0]
                b = np.where(np.abs(dec_ - dec) < dec_step/2)[0]
                INT = np.intersect1d(a, b)
                if np.shape(INT)[0] == 0 or (straylight and S_sl[INT]*corr_fact > obj.maximum_flux()):
                    obj.visible_save[minute_to_load] = 0
                    obj.current_visibility = 0
                    continue
                else:
                    obj.visible_save[minute_to_load] = 1
                    if SAA_at_minute: obj.current_visibility = 0
                    else: obj.current_visibility = 1
        if minute == minute_ini:
            for obj in targets:
                obj.workspace = obj.current_visibility
            continue
        for obj in targets: obj.Next(minute, threshold_obs_time)
except KeyboardInterrupt:
    print hilite('\nWARNING! USER STOPPED LOADING AT MINUTE %d' % minute, False, False)
    for ii in range(0, targets[0].CountObjects()): targets[ii].Next(minute, threshold_obs_time)
### #TODO if first minute look for past orbits anyways
print
worthy_targets = []
for ii in range(0, targets[0].CountObjects()):
    if np.shape(targets[ii].visible)[0] > 0:
        worthy_targets.append(targets[ii])
############################################################################
end = time.time()
elapsed_time = round((end-start)/60.,2)
sys.stdout.write( '\r'*len(message) )
sys.stdout.flush()
print "Time needed: %2.2f min" % elapsed_time
### Plot a few things
if fancy: figures.set_fancy()
### Plot time line
figures.set_fancy()
minute_ini = first_minute
minute_end = last_minute
maxy = len(worthy_targets)
print 'Number of stars visible in the selected period: %d' % maxy
size = 2 + maxy/3
figsize = (17.,size) # fig size in inches (width,height)
fig = plt.figure(figsize=figsize)
ax = plt.subplot(111)
ii = 0
ax.yaxis.set_major_locator(MultipleLocator(1))
plt.grid(True)
for ii in range(0, len(worthy_targets)):
    y = float(ii)
    visi = worthy_targets[ii].Visibility()
    invi = worthy_targets[ii].Invisibility()
    for vis, ini in zip(visi, invi):
        plt.hlines(y, vis, ini, lw=3, color=cm.Dark2(y/(maxy+5)))
    if ii > maxy: break
    else: ii += 1
labels = ['%s (%2.1f)' % (wt.name, wt.mag) for wt in worthy_targets[0:maxy]]
ax.set_yticklabels(labels)
ax.set_ylim(-0.5,maxy-0.5)
# convert epoch to matplotlib float format
labels = np.linspace(minute_ini, minute_end+1, 12) * 60. + const.timestamp_2018_01_01
plt.xlim([minute_ini, minute_end+1])
ax.xaxis.set_major_locator(MultipleLocator((minute_end-minute_ini+1)/11))
# to human readable date
pre = map(time.gmtime, labels)
labels = map(figures.format_second, pre)
ax.set_xticklabels(labels)
fig.autofmt_xdate()
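The axis labelling above shifts minutes to an absolute epoch before formatting. A standalone check of that conversion, assuming `const.timestamp_2018_01_01` is the Unix time of 2018-01-01 00:00:00 UTC (an assumption, since `resources/constants` is not shown here):

```python
import time

# assumed epoch of 2018-01-01 00:00:00 UTC (stand-in for const.timestamp_2018_01_01)
timestamp_2018_01_01 = 1514764800.0
# 90 minutes into the mission epoch, rendered as a human-readable UTC label
label = time.strftime('%Y-%m-%d %H:%M', time.gmtime(90 * 60.0 + timestamp_2018_01_01))
```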
if save:
    threshold_obs_time -= t_acquisition
    if SAA: note = '_SAA'
    else: note = ''
    fname = '%svisibility_stars_obs_%d_o_%d_to_%d%s' % (folder_figures, threshold_obs_time, fo, lo, note)
    figures.savefig(fname, fig, fancy)
### A spatial plot of the targets
fig = plt.figure()
ax = plt.subplot(111, projection='mollweide')
plt.scatter((ra_cat-180)/const.RAD,dec_cat/const.RAD, c=mag_cat, marker='*', s=50, edgecolor='none', vmin=param.magnitude_min,vmax=param.magnitude_max+0.2)
v = np.linspace(param.magnitude_min,param.magnitude_max, (param.magnitude_max-param.magnitude_min+1), endpoint=True)
t = map(figures.format_mag, v)
cbar = plt.colorbar(ticks=v, orientation='horizontal',shrink=.8)
cbar.set_ticklabels(t)
l,b,w,h = plt.gca().get_position().bounds
ll,bb,ww,hh = cbar.ax.get_position().bounds
cbar.ax.set_position([ll, bb+0.1, ww, hh])
ax.grid(True)
ax.set_xticklabels([r'$30^{\circ}$',r'$60^{\circ}$',r'$90^{\circ}$',r'$120^{\circ}$',r'$150^{\circ}$',r'$180^{\circ}$',r'$210^{\circ}$',r'$240^{\circ}$',r'$270^{\circ}$',r'$300^{\circ}$',r'$330^{\circ}$']) #,r'$360^{\circ}$'
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\delta$')
if save:
    fname = '%stargets_distribution' % folder_figures
    figures.savefig(fname, fig, fancy)
### A histogram of the magnitudes
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111)
bins=np.linspace(np.amin(mag_cat),np.amax(mag_cat), 50)
n, bins, patches = plt.hist(mag_cat,bins=bins)
plt.setp(patches, 'edgecolor', 'black', 'linewidth', 2, 'facecolor','blue','alpha',1)
ax.xaxis.set_major_locator(MultipleLocator(2))
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(5))
ax.yaxis.set_minor_locator(MultipleLocator(1))
ax.xaxis.grid(True,'minor')
ax.yaxis.grid(True,'minor')
ax.xaxis.grid(True,'major',linewidth=2)
ax.yaxis.grid(True,'major',linewidth=2)
plt.xlim([np.amin(mag_cat)*0.95, 1.05*np.amax(mag_cat)])
plt.xlabel(r'$m_V$')
plt.ylabel(r'$\mathrm{distribution}$')
x1,x2,y1,y2 = plt.axis()
plt.axvline(CHEOPS_mag_max, lw=2, color='r')
if save:
    fname = '%stargets_hist_mag' % folder_figures
    figures.savefig(fname, fig, fancy)
if show: plt.show()

# file: tests/test_lime.py | repo: dianna-ai/dianna | license: Apache-2.0
from unittest import TestCase
import numpy as np

import dianna
import dianna.visualization
from dianna.methods import LIME
from tests.test_onnx_runner import generate_data
from tests.utils import ModelRunner
from tests.utils import run_model


class LimeOnImages(TestCase):
    def test_lime_function(self):
        np.random.seed(42)
        input_data = np.random.random((1, 224, 224, 3))
        labels = ('batch', 'y', 'x', 'channels')
        explainer = LIME(random_state=42, axes_labels=labels)
        heatmap = explainer.explain_image(run_model, input_data, num_samples=100)
        heatmap_expected = np.load('tests/test_data/heatmap_lime_function.npy')
        assert heatmap.shape == input_data[0].shape[:2]
        assert np.allclose(heatmap, heatmap_expected, atol=.01)

    def test_lime_filename(self):
        np.random.seed(42)
        model_filename = 'tests/test_data/mnist_model.onnx'
        black_and_white = generate_data(batch_size=1)
        # Make data 3-channel instead of 1-channel
        input_data = np.zeros([1, 3] + list(black_and_white.shape[2:])) + black_and_white
        input_data = input_data.astype(np.float32)
        labels = ('batch', 'channels', 'y', 'x')

        def preprocess(data):
            # select single channel out of 3, but keep the channel axis
            return data[:, [0], ...]

        heatmap = dianna.explain_image(model_filename, input_data, method="LIME",
                                       preprocess_function=preprocess, random_state=42,
                                       axes_labels=labels)
        heatmap_expected = np.load('tests/test_data/heatmap_lime_filename.npy')
        assert heatmap.shape == input_data[0, 0].shape
        assert np.allclose(heatmap, heatmap_expected, atol=.01)


def test_lime_text():
    model_path = 'tests/test_data/movie_review_model.onnx'
    word_vector_file = 'tests/test_data/word_vectors.txt'
    runner = ModelRunner(model_path, word_vector_file, max_filter_size=5)
    review = 'such a bad movie'
    explanation = dianna.explain_text(runner, review, labels=[0], method='LIME', random_state=42)[0]
    words = [element[0] for element in explanation]
    word_indices = [element[1] for element in explanation]
    scores = [element[2] for element in explanation]
    expected_words = ['bad', 'such', 'movie', 'a']
    expected_word_indices = [7, 0, 11, 5]
    expected_scores = [.492, -.046, .036, -.008]
    assert words == expected_words
    assert word_indices == expected_word_indices
    assert np.allclose(scores, expected_scores, atol=.01)
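The text explanation asserted above is a list of `(word, word_index, score)` triples; the expected order corresponds to ranking by absolute score. A small illustration using the expected values from the test (a standalone sketch, not calling dianna):

```python
# Triples taken from the expectations asserted in the test above.
explanation = [('bad', 7, .492), ('such', 0, -.046), ('movie', 11, .036), ('a', 5, -.008)]
# Rank words by influence: largest |score| first.
ranked = sorted(explanation, key=lambda t: abs(t[2]), reverse=True)
most_influential = ranked[0][0]
```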

# file: backend/app.py | repo: HalmonLui/square-hackathon | license: MIT
# import app
from flask import Flask, render_template, make_response, send_file
from flask_cors import CORS

# import custom helpers
from maplib import generate_embed
import loyaltylib as ll

app = Flask(__name__)
CORS(app)

# import declared routes
import frontenddata


@app.route('/ll')
def llfn():
    ll.create_loyalty_account()
    ll.retrieve_loyalty_account()
    return ''  # a Flask view must return a response, not None


@app.route('/map')
def map():
    location = '850 FOLSOM ST, San Francisco, CA 94107'
    addresslist = {'a': generate_embed(location)}
    return render_template('map.html', addresslist=addresslist)


@app.route('/cal')
def cal():
    appoint = {
        'stylist': 'Bob Nguyen',
        'salon': 'Salon Bobby',
        'event': 'Men\'s Haircut',
        'location': '850 FOLSOM ST, San Francisco, CA 94107',
        'starttime': '2020-06-23 08:00:00',
        'endtime': '2020-06-23 08:45:00',
    }
    return render_template('cal.html', appoint=appoint)


# def loop_matcher(delay):
#     while(True):
#         print('Matcher Automatically Run')
#         handle_matcher()
#         # do expired status update here
#         time.sleep(delay)

# Run Server
if __name__ == "__main__":
    # matcher_delay = 3600  # 1 hour in seconds
    # p = Process(target=loop_matcher, args=(matcher_delay,))
    # p.start()
    app.run(host='0.0.0.0', debug=True, use_reloader=False)
    # p.join()

# file: generators/UGATIT/dataset/day2rain/txt_gen.py | repo: JW9MsjwjnpdRLFw/RMT | license: MIT
import os
dirs = ['trainA', 'trainB', 'testA', 'testB']  # renamed: 'dir' shadows the builtin
for d in dirs:
    img_names = os.listdir(d)
    with open('list_' + d + '.txt', "w") as f:  # context manager closes the file
        for img in img_names:
            f.write('./' + img + '\n')
    # print(img_names)

# file: functions/sentim-preprocess/main.py | repo: gipfelen/Sentiment-Analysis | license: MIT
import json
import re

##################################################
########## Boilerplate wrapping code #############
##################################################

# IBM wrapper
def main(args):
    res = sentim_preprocess(args)
    return res

# AWS wrapper
def lambda_handler(event, context):
    # read in the args from the POST object
    json_input = json.loads(event["body"])
    res = sentim_preprocess(json_input)
    return {"statusCode": 200, "body": json.dumps(res)}

##################################################
##################################################

# { 'text': string, 'id': number, 'location': 'string' }
def preprocess(tweet):
    # split text into sentences
    if "text" not in tweet:
        return None
    text = tweet["text"]
    sentences = text.replace("!", ".").replace("?", ".").split(".")  # quick and dirty
    # filter empty strings (sentences)
    sentences = [sentence for sentence in sentences if sentence]
    processed_sentences = []
    for sentence in sentences:
        processed_sentence = sentence.split(" ")  # split per word
        # replace weird chars
        processed_sentence = [
            re.sub("[^0-9a-zA-Z]+", "", word) for word in processed_sentence
        ]
        # filter empty strings (words)
        processed_sentence = [word for word in processed_sentence if word]
        # to lowercase
        processed_sentence = [word.lower() for word in processed_sentence]
        processed_sentences.append(processed_sentence)
    # Return sentences in place of text
    res = {
        **tweet,
        "sentences": processed_sentences,
    }
    res.pop("text", None)
    return res
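A standalone re-sketch of the tokenisation convention used in `preprocess`: `!` and `?` folded into `.`, split on `.`, then per-word alphanumeric cleanup and lowercasing (this duplicates the logic for illustration rather than calling the function):

```python
import re

text = "Great movie! Really?? 10/10."
# fold sentence terminators into '.', split, drop empty sentences
sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s]
# per word: strip non-alphanumerics, lowercase, drop empties
tokens = [
    [w for w in (re.sub("[^0-9a-zA-Z]+", "", word).lower() for word in s.split(" ")) if w]
    for s in sentences
]
```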
# Tokenizes and normalizes (TODO) tweets
def sentim_preprocess(j):
    # read in the args
    tweets = j["tweets"]["tweets"]  # TODO
    # do the preprocessing
    tokenized_tweets = [preprocess(tweet) for tweet in tweets]
    # filter out invalids
    tokenized_tweets = [t for t in tokenized_tweets if t]
    # return the result
    res = {"tokenized_tweets": tokenized_tweets}
    return res

# Docker wrapper
if __name__ == "__main__":
    # read the json
    json_input = json.loads(open("jsonInput.json").read())
    result = sentim_preprocess(json_input)
    # write to std out
    print(json.dumps(result))

# file: tests/test_settings.py | repo: abahnihi/kn-defaults | license: MIT
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = 'fake-key'
INSTALLED_APPS = [
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django.contrib.admin',
    'raven.contrib.django.raven_compat',
    'kn_defaults.logging',
    "tests",
]

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages'
            ]
        }
    },
]

ROOT_URLCONF = 'tests.urls'

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'kn_defaults.logging.middlewares.KnLogging',
]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'test.sqlite3'),
    }
}

KN_LOGGING_URL_PATTERNS = [
    'success_func_view',
    'error_func_view',
]

from kn_defaults.logging.defaults import BASE_LOGGING

BASE_LOGGING.update({})
LOGGING = BASE_LOGGING

RAVEN_CONFIG = {'dsn': ''}

# file: mysite/myapp/migrations/0004_auto_20191219_1946.py | repo: Niyy/monsterpedia | license: MIT
# Generated by Django 2.2.8 on 2019-12-19 19:46
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0003_auto_20191218_0740'),
    ]

    operations = [
        migrations.AlterField(
            model_name='monster',
            name='monster_picture',
            field=models.ImageField(upload_to='uploads/%Y/%m/%d/%h/%m'),
        ),
    ]

# file: diptera_track_ui.py | repo: jmmelis/DipteraTrack | license: MIT
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'diptera_track.ui'
#
# Created by: PyQt5 UI code generator 5.9.2
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(1140, 683)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setMinimumSize(QtCore.QSize(1124, 674))
self.centralwidget.setObjectName("centralwidget")
self.verticalLayout_5 = QtWidgets.QVBoxLayout(self.centralwidget)
self.verticalLayout_5.setObjectName("verticalLayout_5")
self.tabs = QtWidgets.QTabWidget(self.centralwidget)
self.tabs.setObjectName("tabs")
self.ses_par_tab = QtWidgets.QWidget()
self.ses_par_tab.setObjectName("ses_par_tab")
self.verticalLayout_4 = QtWidgets.QVBoxLayout(self.ses_par_tab)
self.verticalLayout_4.setObjectName("verticalLayout_4")
self.widget = QtWidgets.QWidget(self.ses_par_tab)
self.widget.setMinimumSize(QtCore.QSize(0, 551))
self.widget.setObjectName("widget")
self.folder_select_tree = QtWidgets.QTreeView(self.widget)
self.folder_select_tree.setGeometry(QtCore.QRect(9, 30, 571, 321))
self.folder_select_tree.setMinimumSize(QtCore.QSize(451, 0))
self.folder_select_tree.setObjectName("folder_select_tree")
self.label = QtWidgets.QLabel(self.widget)
self.label.setGeometry(QtCore.QRect(9, 9, 128, 16))
self.label.setObjectName("label")
self.label_3 = QtWidgets.QLabel(self.widget)
self.label_3.setGeometry(QtCore.QRect(600, 0, 124, 16))
self.label_3.setObjectName("label_3")
self.line = QtWidgets.QFrame(self.widget)
self.line.setGeometry(QtCore.QRect(590, 20, 511, 20))
self.line.setFrameShape(QtWidgets.QFrame.HLine)
self.line.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line.setObjectName("line")
self.label_2 = QtWidgets.QLabel(self.widget)
self.label_2.setGeometry(QtCore.QRect(600, 30, 91, 16))
self.label_2.setObjectName("label_2")
self.ses_folder_label = QtWidgets.QLabel(self.widget)
self.ses_folder_label.setGeometry(QtCore.QRect(620, 50, 471, 20))
self.ses_folder_label.setObjectName("ses_folder_label")
self.label_5 = QtWidgets.QLabel(self.widget)
self.label_5.setGeometry(QtCore.QRect(600, 90, 141, 16))
self.label_5.setObjectName("label_5")
self.bckg_folder_label = QtWidgets.QLabel(self.widget)
self.bckg_folder_label.setGeometry(QtCore.QRect(620, 110, 281, 20))
self.bckg_folder_label.setObjectName("bckg_folder_label")
self.line_2 = QtWidgets.QFrame(self.widget)
self.line_2.setGeometry(QtCore.QRect(580, 30, 20, 561))
self.line_2.setFrameShape(QtWidgets.QFrame.VLine)
self.line_2.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_2.setObjectName("line_2")
self.label_7 = QtWidgets.QLabel(self.widget)
self.label_7.setGeometry(QtCore.QRect(600, 130, 291, 16))
self.label_7.setObjectName("label_7")
self.cal_folder_label = QtWidgets.QLabel(self.widget)
self.cal_folder_label.setGeometry(QtCore.QRect(620, 150, 381, 20))
self.cal_folder_label.setObjectName("cal_folder_label")
self.label_9 = QtWidgets.QLabel(self.widget)
self.label_9.setGeometry(QtCore.QRect(600, 200, 291, 16))
self.label_9.setObjectName("label_9")
self.mov_folder1_label = QtWidgets.QLabel(self.widget)
self.mov_folder1_label.setGeometry(QtCore.QRect(620, 220, 371, 20))
self.mov_folder1_label.setObjectName("mov_folder1_label")
self.mov_folder2_label = QtWidgets.QLabel(self.widget)
self.mov_folder2_label.setGeometry(QtCore.QRect(620, 240, 371, 20))
self.mov_folder2_label.setObjectName("mov_folder2_label")
self.mov_folder3_label = QtWidgets.QLabel(self.widget)
self.mov_folder3_label.setGeometry(QtCore.QRect(620, 260, 371, 20))
self.mov_folder3_label.setObjectName("mov_folder3_label")
self.mov_folder4_label = QtWidgets.QLabel(self.widget)
self.mov_folder4_label.setGeometry(QtCore.QRect(620, 280, 371, 20))
self.mov_folder4_label.setObjectName("mov_folder4_label")
self.mov_folder5_label = QtWidgets.QLabel(self.widget)
self.mov_folder5_label.setGeometry(QtCore.QRect(620, 300, 371, 20))
self.mov_folder5_label.setObjectName("mov_folder5_label")
self.mov_folder6_label = QtWidgets.QLabel(self.widget)
self.mov_folder6_label.setGeometry(QtCore.QRect(620, 320, 371, 20))
self.mov_folder6_label.setObjectName("mov_folder6_label")
self.mov_folder7_label = QtWidgets.QLabel(self.widget)
self.mov_folder7_label.setGeometry(QtCore.QRect(620, 340, 371, 20))
self.mov_folder7_label.setObjectName("mov_folder7_label")
self.mov_folder8_label = QtWidgets.QLabel(self.widget)
self.mov_folder8_label.setGeometry(QtCore.QRect(620, 360, 371, 20))
self.mov_folder8_label.setObjectName("mov_folder8_label")
self.label_18 = QtWidgets.QLabel(self.widget)
self.label_18.setGeometry(QtCore.QRect(600, 390, 301, 20))
self.label_18.setObjectName("label_18")
self.cam_folder1_label = QtWidgets.QLabel(self.widget)
self.cam_folder1_label.setGeometry(QtCore.QRect(620, 410, 371, 20))
self.cam_folder1_label.setObjectName("cam_folder1_label")
self.cam_folder2_label = QtWidgets.QLabel(self.widget)
self.cam_folder2_label.setGeometry(QtCore.QRect(620, 430, 371, 20))
self.cam_folder2_label.setObjectName("cam_folder2_label")
self.cam_folder3_label = QtWidgets.QLabel(self.widget)
self.cam_folder3_label.setGeometry(QtCore.QRect(620, 450, 371, 20))
self.cam_folder3_label.setObjectName("cam_folder3_label")
self.cam_folder4_label = QtWidgets.QLabel(self.widget)
self.cam_folder4_label.setGeometry(QtCore.QRect(620, 470, 371, 20))
self.cam_folder4_label.setObjectName("cam_folder4_label")
self.ses_folder_rbtn = QtWidgets.QRadioButton(self.widget)
self.ses_folder_rbtn.setGeometry(QtCore.QRect(600, 50, 21, 21))
self.ses_folder_rbtn.setObjectName("ses_folder_rbtn")
self.bckg_folder_rbtn = QtWidgets.QRadioButton(self.widget)
self.bckg_folder_rbtn.setGeometry(QtCore.QRect(600, 110, 21, 21))
self.bckg_folder_rbtn.setObjectName("bckg_folder_rbtn")
self.cal_folder_rbtn = QtWidgets.QRadioButton(self.widget)
self.cal_folder_rbtn.setGeometry(QtCore.QRect(600, 150, 21, 21))
self.cal_folder_rbtn.setObjectName("cal_folder_rbtn")
self.mov_folder1_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder1_rbtn.setGeometry(QtCore.QRect(600, 220, 21, 21))
self.mov_folder1_rbtn.setObjectName("mov_folder1_rbtn")
self.mov_folder2_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder2_rbtn.setGeometry(QtCore.QRect(600, 240, 21, 21))
self.mov_folder2_rbtn.setObjectName("mov_folder2_rbtn")
self.mov_folder3_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder3_rbtn.setGeometry(QtCore.QRect(600, 260, 21, 21))
self.mov_folder3_rbtn.setObjectName("mov_folder3_rbtn")
self.mov_folder4_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder4_rbtn.setGeometry(QtCore.QRect(600, 280, 21, 21))
self.mov_folder4_rbtn.setObjectName("mov_folder4_rbtn")
self.mov_folder5_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder5_rbtn.setGeometry(QtCore.QRect(600, 300, 21, 21))
self.mov_folder5_rbtn.setObjectName("mov_folder5_rbtn")
self.mov_folder6_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder6_rbtn.setGeometry(QtCore.QRect(600, 320, 21, 21))
self.mov_folder6_rbtn.setObjectName("mov_folder6_rbtn")
self.mov_folder7_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder7_rbtn.setGeometry(QtCore.QRect(600, 340, 21, 21))
self.mov_folder7_rbtn.setObjectName("mov_folder7_rbtn")
self.mov_folder8_rbtn = QtWidgets.QRadioButton(self.widget)
self.mov_folder8_rbtn.setGeometry(QtCore.QRect(600, 360, 21, 21))
self.mov_folder8_rbtn.setObjectName("mov_folder8_rbtn")
self.cam_folder1_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder1_rbtn.setGeometry(QtCore.QRect(600, 410, 21, 21))
self.cam_folder1_rbtn.setObjectName("cam_folder1_rbtn")
self.cam_folder2_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder2_rbtn.setGeometry(QtCore.QRect(600, 430, 21, 21))
self.cam_folder2_rbtn.setObjectName("cam_folder2_rbtn")
self.cam_folder3_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder3_rbtn.setGeometry(QtCore.QRect(600, 450, 21, 21))
self.cam_folder3_rbtn.setObjectName("cam_folder3_rbtn")
self.cam_folder4_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder4_rbtn.setGeometry(QtCore.QRect(600, 470, 21, 21))
self.cam_folder4_rbtn.setObjectName("cam_folder4_rbtn")
self.cam_folder5_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder5_rbtn.setGeometry(QtCore.QRect(600, 490, 21, 21))
self.cam_folder5_rbtn.setObjectName("cam_folder5_rbtn")
self.cam_folder6_rbtn = QtWidgets.QRadioButton(self.widget)
self.cam_folder6_rbtn.setGeometry(QtCore.QRect(600, 510, 21, 21))
self.cam_folder6_rbtn.setObjectName("cam_folder6_rbtn")
self.cam_folder5_label = QtWidgets.QLabel(self.widget)
self.cam_folder5_label.setGeometry(QtCore.QRect(620, 490, 371, 20))
self.cam_folder5_label.setObjectName("cam_folder5_label")
self.cam_folder6_label = QtWidgets.QLabel(self.widget)
self.cam_folder6_label.setGeometry(QtCore.QRect(620, 510, 371, 20))
self.cam_folder6_label.setObjectName("cam_folder6_label")
self.label_25 = QtWidgets.QLabel(self.widget)
self.label_25.setGeometry(QtCore.QRect(600, 540, 201, 16))
self.label_25.setObjectName("label_25")
self.frame_name_rbtn = QtWidgets.QRadioButton(self.widget)
self.frame_name_rbtn.setGeometry(QtCore.QRect(600, 560, 21, 21))
self.frame_name_rbtn.setObjectName("frame_name_rbtn")
self.frame_name_label = QtWidgets.QLabel(self.widget)
self.frame_name_label.setGeometry(QtCore.QRect(620, 560, 391, 20))
self.frame_name_label.setObjectName("frame_name_label")
self.label_27 = QtWidgets.QLabel(self.widget)
self.label_27.setGeometry(QtCore.QRect(930, 80, 161, 20))
self.label_27.setObjectName("label_27")
self.bck_img_fmt_box = QtWidgets.QComboBox(self.widget)
self.bck_img_fmt_box.setGeometry(QtCore.QRect(1020, 100, 79, 23))
self.bck_img_fmt_box.setObjectName("bck_img_fmt_box")
self.label_28 = QtWidgets.QLabel(self.widget)
self.label_28.setGeometry(QtCore.QRect(930, 130, 161, 20))
self.label_28.setObjectName("label_28")
self.cal_img_fmt_box = QtWidgets.QComboBox(self.widget)
self.cal_img_fmt_box.setGeometry(QtCore.QRect(1020, 150, 79, 23))
self.cal_img_fmt_box.setObjectName("cal_img_fmt_box")
self.label_29 = QtWidgets.QLabel(self.widget)
self.label_29.setGeometry(QtCore.QRect(970, 540, 131, 20))
self.label_29.setObjectName("label_29")
self.frame_img_fmt_box = QtWidgets.QComboBox(self.widget)
self.frame_img_fmt_box.setGeometry(QtCore.QRect(1020, 560, 79, 23))
self.frame_img_fmt_box.setObjectName("frame_img_fmt_box")
self.line_4 = QtWidgets.QFrame(self.widget)
self.line_4.setGeometry(QtCore.QRect(10, 460, 571, 16))
self.line_4.setFrameShape(QtWidgets.QFrame.HLine)
self.line_4.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_4.setObjectName("line_4")
self.line_5 = QtWidgets.QFrame(self.widget)
self.line_5.setGeometry(QtCore.QRect(10, 580, 1091, 20))
self.line_5.setFrameShape(QtWidgets.QFrame.HLine)
self.line_5.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_5.setObjectName("line_5")
self.label_30 = QtWidgets.QLabel(self.widget)
self.label_30.setGeometry(QtCore.QRect(250, 470, 151, 16))
self.label_30.setObjectName("label_30")
self.start_frame_spin = QtWidgets.QSpinBox(self.widget)
self.start_frame_spin.setGeometry(QtCore.QRect(120, 490, 91, 24))
self.start_frame_spin.setObjectName("start_frame_spin")
self.label_31 = QtWidgets.QLabel(self.widget)
self.label_31.setGeometry(QtCore.QRect(10, 490, 101, 16))
self.label_31.setObjectName("label_31")
self.label_32 = QtWidgets.QLabel(self.widget)
self.label_32.setGeometry(QtCore.QRect(10, 520, 101, 16))
self.label_32.setObjectName("label_32")
self.trig_frame_spin = QtWidgets.QSpinBox(self.widget)
self.trig_frame_spin.setGeometry(QtCore.QRect(120, 520, 91, 24))
self.trig_frame_spin.setObjectName("trig_frame_spin")
self.label_33 = QtWidgets.QLabel(self.widget)
self.label_33.setGeometry(QtCore.QRect(10, 550, 101, 16))
self.label_33.setObjectName("label_33")
self.end_frame_spin = QtWidgets.QSpinBox(self.widget)
self.end_frame_spin.setGeometry(QtCore.QRect(120, 550, 91, 24))
self.end_frame_spin.setObjectName("end_frame_spin")
self.label_34 = QtWidgets.QLabel(self.widget)
self.label_34.setGeometry(QtCore.QRect(250, 490, 91, 16))
self.label_34.setObjectName("label_34")
self.trig_mode_box = QtWidgets.QComboBox(self.widget)
self.trig_mode_box.setGeometry(QtCore.QRect(350, 490, 111, 23))
self.trig_mode_box.setObjectName("trig_mode_box")
self.line_3 = QtWidgets.QFrame(self.widget)
self.line_3.setGeometry(QtCore.QRect(10, 350, 571, 16))
self.line_3.setFrameShape(QtWidgets.QFrame.HLine)
self.line_3.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_3.setObjectName("line_3")
self.label_4 = QtWidgets.QLabel(self.widget)
self.label_4.setGeometry(QtCore.QRect(250, 360, 101, 16))
self.label_4.setObjectName("label_4")
self.mdl_loc_rbtn = QtWidgets.QRadioButton(self.widget)
self.mdl_loc_rbtn.setGeometry(QtCore.QRect(10, 400, 21, 21))
self.mdl_loc_rbtn.setObjectName("mdl_loc_rbtn")
self.mdl_loc_label = QtWidgets.QLabel(self.widget)
self.mdl_loc_label.setGeometry(QtCore.QRect(40, 400, 541, 21))
self.mdl_loc_label.setObjectName("mdl_loc_label")
self.label_10 = QtWidgets.QLabel(self.widget)
self.label_10.setGeometry(QtCore.QRect(10, 380, 171, 16))
self.label_10.setObjectName("label_10")
self.label_11 = QtWidgets.QLabel(self.widget)
self.label_11.setGeometry(QtCore.QRect(10, 420, 141, 16))
self.label_11.setObjectName("label_11")
self.mdl_name_rbtn = QtWidgets.QRadioButton(self.widget)
self.mdl_name_rbtn.setGeometry(QtCore.QRect(10, 440, 21, 21))
self.mdl_name_rbtn.setObjectName("mdl_name_rbtn")
self.mdl_name_label = QtWidgets.QLabel(self.widget)
self.mdl_name_label.setGeometry(QtCore.QRect(40, 440, 541, 21))
self.mdl_name_label.setObjectName("mdl_name_label")
self.label_6 = QtWidgets.QLabel(self.widget)
self.label_6.setGeometry(QtCore.QRect(600, 170, 101, 16))
self.label_6.setObjectName("label_6")
self.cal_file_label = QtWidgets.QLabel(self.widget)
self.cal_file_label.setGeometry(QtCore.QRect(710, 170, 291, 20))
self.cal_file_label.setObjectName("cal_file_label")
self.label_8 = QtWidgets.QLabel(self.widget)
self.label_8.setGeometry(QtCore.QRect(600, 70, 101, 16))
self.label_8.setObjectName("label_8")
self.ses_name_label = QtWidgets.QLabel(self.widget)
self.ses_name_label.setGeometry(QtCore.QRect(700, 70, 381, 20))
self.ses_name_label.setObjectName("ses_name_label")
self.reset_selection_push_btn = QtWidgets.QPushButton(self.widget)
self.reset_selection_push_btn.setGeometry(QtCore.QRect(470, 0, 101, 23))
self.reset_selection_push_btn.setObjectName("reset_selection_push_btn")
self.start_session_push_btn = QtWidgets.QPushButton(self.widget)
self.start_session_push_btn.setGeometry(QtCore.QRect(1010, 600, 85, 23))
self.start_session_push_btn.setObjectName("start_session_push_btn")
self.save_settings_push_btn = QtWidgets.QPushButton(self.widget)
self.save_settings_push_btn.setGeometry(QtCore.QRect(870, 600, 131, 23))
self.save_settings_push_btn.setObjectName("save_settings_push_btn")
self.load_settings_file_label = QtWidgets.QLabel(self.widget)
self.load_settings_file_label.setGeometry(QtCore.QRect(40, 600, 671, 21))
self.load_settings_file_label.setObjectName("load_settings_file_label")
self.load_settings_push_btn = QtWidgets.QPushButton(self.widget)
self.load_settings_push_btn.setGeometry(QtCore.QRect(720, 600, 141, 23))
self.load_settings_push_btn.setObjectName("load_settings_push_btn")
self.load_settings_rbtn = QtWidgets.QRadioButton(self.widget)
self.load_settings_rbtn.setGeometry(QtCore.QRect(10, 600, 21, 21))
self.load_settings_rbtn.setObjectName("load_settings_rbtn")
self.verticalLayout_4.addWidget(self.widget)
self.tabs.addTab(self.ses_par_tab, "")
self.focal_grid_tab = QtWidgets.QWidget()
self.focal_grid_tab.setObjectName("focal_grid_tab")
self.gridLayout_2 = QtWidgets.QGridLayout(self.focal_grid_tab)
self.gridLayout_2.setObjectName("gridLayout_2")
self.widget_2 = QtWidgets.QWidget(self.focal_grid_tab)
self.widget_2.setObjectName("widget_2")
self.gridLayout_7 = QtWidgets.QGridLayout(self.widget_2)
self.gridLayout_7.setObjectName("gridLayout_7")
self.line_9 = QtWidgets.QFrame(self.widget_2)
self.line_9.setFrameShape(QtWidgets.QFrame.HLine)
self.line_9.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_9.setObjectName("line_9")
self.gridLayout_7.addWidget(self.line_9, 0, 0, 2, 8)
self.label_16 = QtWidgets.QLabel(self.widget_2)
self.label_16.setObjectName("label_16")
self.gridLayout_7.addWidget(self.label_16, 1, 2, 2, 3)
self.line_6 = QtWidgets.QFrame(self.widget_2)
self.line_6.setFrameShape(QtWidgets.QFrame.HLine)
self.line_6.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_6.setObjectName("line_6")
self.gridLayout_7.addWidget(self.line_6, 2, 0, 1, 2)
self.label_12 = QtWidgets.QLabel(self.widget_2)
self.label_12.setObjectName("label_12")
self.gridLayout_7.addWidget(self.label_12, 3, 2, 1, 1)
self.nx_spin = QtWidgets.QSpinBox(self.widget_2)
self.nx_spin.setObjectName("nx_spin")
self.gridLayout_7.addWidget(self.nx_spin, 3, 3, 1, 1)
spacerItem = QtWidgets.QSpacerItem(928, 213, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_7.addItem(spacerItem, 3, 4, 7, 4)
self.label_13 = QtWidgets.QLabel(self.widget_2)
self.label_13.setObjectName("label_13")
self.gridLayout_7.addWidget(self.label_13, 4, 2, 1, 1)
self.ny_spin = QtWidgets.QSpinBox(self.widget_2)
self.ny_spin.setObjectName("ny_spin")
self.gridLayout_7.addWidget(self.ny_spin, 4, 3, 1, 1)
self.label_14 = QtWidgets.QLabel(self.widget_2)
self.label_14.setObjectName("label_14")
self.gridLayout_7.addWidget(self.label_14, 5, 2, 1, 1)
self.nz_spin = QtWidgets.QSpinBox(self.widget_2)
self.nz_spin.setObjectName("nz_spin")
self.gridLayout_7.addWidget(self.nz_spin, 5, 3, 1, 1)
self.label_15 = QtWidgets.QLabel(self.widget_2)
self.label_15.setObjectName("label_15")
self.gridLayout_7.addWidget(self.label_15, 6, 2, 1, 1)
self.ds_spin = QtWidgets.QDoubleSpinBox(self.widget_2)
self.ds_spin.setObjectName("ds_spin")
self.gridLayout_7.addWidget(self.ds_spin, 6, 3, 1, 1)
self.label_17 = QtWidgets.QLabel(self.widget_2)
self.label_17.setObjectName("label_17")
self.gridLayout_7.addWidget(self.label_17, 7, 2, 1, 1)
self.x0_spin = QtWidgets.QDoubleSpinBox(self.widget_2)
self.x0_spin.setObjectName("x0_spin")
self.gridLayout_7.addWidget(self.x0_spin, 7, 3, 1, 1)
self.label_19 = QtWidgets.QLabel(self.widget_2)
self.label_19.setObjectName("label_19")
self.gridLayout_7.addWidget(self.label_19, 8, 2, 1, 1)
self.y0_spin = QtWidgets.QDoubleSpinBox(self.widget_2)
self.y0_spin.setObjectName("y0_spin")
self.gridLayout_7.addWidget(self.y0_spin, 8, 3, 1, 1)
self.label_20 = QtWidgets.QLabel(self.widget_2)
self.label_20.setObjectName("label_20")
self.gridLayout_7.addWidget(self.label_20, 9, 2, 1, 1)
self.z0_spin = QtWidgets.QDoubleSpinBox(self.widget_2)
self.z0_spin.setObjectName("z0_spin")
self.gridLayout_7.addWidget(self.z0_spin, 9, 3, 1, 1)
self.line_7 = QtWidgets.QFrame(self.widget_2)
self.line_7.setFrameShape(QtWidgets.QFrame.HLine)
self.line_7.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_7.setObjectName("line_7")
self.gridLayout_7.addWidget(self.line_7, 10, 0, 1, 7)
spacerItem1 = QtWidgets.QSpacerItem(696, 48, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_7.addItem(spacerItem1, 10, 7, 2, 1)
self.calc_vox_btn = QtWidgets.QPushButton(self.widget_2)
self.calc_vox_btn.setObjectName("calc_vox_btn")
self.gridLayout_7.addWidget(self.calc_vox_btn, 11, 0, 1, 4)
self.vox_progress_bar = QtWidgets.QProgressBar(self.widget_2)
self.vox_progress_bar.setMinimumSize(QtCore.QSize(211, 0))
self.vox_progress_bar.setProperty("value", 24)
self.vox_progress_bar.setObjectName("vox_progress_bar")
self.gridLayout_7.addWidget(self.vox_progress_bar, 11, 5, 1, 2)
self.line_8 = QtWidgets.QFrame(self.widget_2)
self.line_8.setFrameShape(QtWidgets.QFrame.HLine)
self.line_8.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_8.setObjectName("line_8")
self.gridLayout_7.addWidget(self.line_8, 12, 0, 2, 8)
self.label_49 = QtWidgets.QLabel(self.widget_2)
self.label_49.setObjectName("label_49")
self.gridLayout_7.addWidget(self.label_49, 13, 1, 2, 6)
spacerItem2 = QtWidgets.QSpacerItem(804, 48, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_7.addItem(spacerItem2, 14, 6, 2, 2)
self.label_50 = QtWidgets.QLabel(self.widget_2)
self.label_50.setObjectName("label_50")
self.gridLayout_7.addWidget(self.label_50, 15, 1, 1, 3)
self.pixel_size_spin = QtWidgets.QDoubleSpinBox(self.widget_2)
self.pixel_size_spin.setObjectName("pixel_size_spin")
self.gridLayout_7.addWidget(self.pixel_size_spin, 15, 4, 1, 2)
spacerItem3 = QtWidgets.QSpacerItem(1079, 267, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_7.addItem(spacerItem3, 16, 0, 1, 8)
self.gridLayout_2.addWidget(self.widget_2, 0, 0, 1, 1)
self.tabs.addTab(self.focal_grid_tab, "")
self.model_scale_tab = QtWidgets.QWidget()
self.model_scale_tab.setObjectName("model_scale_tab")
self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.model_scale_tab)
self.verticalLayout_2.setObjectName("verticalLayout_2")
self.widget_3 = QtWidgets.QWidget(self.model_scale_tab)
self.widget_3.setObjectName("widget_3")
self.gridLayout_3 = QtWidgets.QGridLayout(self.widget_3)
self.gridLayout_3.setObjectName("gridLayout_3")
self.rawFrameView = ScaleModelWidget(self.widget_3)
self.rawFrameView.setMinimumSize(QtCore.QSize(1091, 511))
self.rawFrameView.setObjectName("rawFrameView")
self.gridLayout_3.addWidget(self.rawFrameView, 0, 0, 1, 1)
self.widget_4 = QtWidgets.QWidget(self.widget_3)
self.widget_4.setMinimumSize(QtCore.QSize(1091, 0))
self.widget_4.setMaximumSize(QtCore.QSize(16777215, 101))
self.widget_4.setObjectName("widget_4")
self.gridLayout = QtWidgets.QGridLayout(self.widget_4)
self.gridLayout.setObjectName("gridLayout")
self.label_22 = QtWidgets.QLabel(self.widget_4)
self.label_22.setObjectName("label_22")
self.gridLayout.addWidget(self.label_22, 0, 0, 1, 1)
self.scaleTable = QtWidgets.QTableWidget(self.widget_4)
self.scaleTable.setMinimumSize(QtCore.QSize(411, 81))
self.scaleTable.setObjectName("scaleTable")
self.scaleTable.setColumnCount(0)
self.scaleTable.setRowCount(0)
self.gridLayout.addWidget(self.scaleTable, 0, 1, 4, 1)
spacerItem4 = QtWidgets.QSpacerItem(248, 78, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout.addItem(spacerItem4, 0, 2, 4, 1)
self.raw_mov_spin = QtWidgets.QSpinBox(self.widget_4)
self.raw_mov_spin.setObjectName("raw_mov_spin")
self.gridLayout.addWidget(self.raw_mov_spin, 1, 0, 1, 1)
self.load_scale_btn = QtWidgets.QPushButton(self.widget_4)
self.load_scale_btn.setObjectName("load_scale_btn")
self.gridLayout.addWidget(self.load_scale_btn, 1, 3, 2, 1)
self.save_scale_btn = QtWidgets.QPushButton(self.widget_4)
self.save_scale_btn.setObjectName("save_scale_btn")
self.gridLayout.addWidget(self.save_scale_btn, 1, 4, 2, 1)
self.raw_frame_spin = QtWidgets.QSpinBox(self.widget_4)
self.raw_frame_spin.setObjectName("raw_frame_spin")
self.gridLayout.addWidget(self.raw_frame_spin, 3, 0, 1, 1)
self.set_model_btn = QtWidgets.QPushButton(self.widget_4)
self.set_model_btn.setObjectName("set_model_btn")
self.gridLayout.addWidget(self.set_model_btn, 1, 5, 2, 1)
self.label_21 = QtWidgets.QLabel(self.widget_4)
self.label_21.setObjectName("label_21")
self.gridLayout.addWidget(self.label_21, 2, 0, 1, 1)
self.gridLayout_3.addWidget(self.widget_4, 1, 0, 1, 1)
self.verticalLayout_2.addWidget(self.widget_3)
self.tabs.addTab(self.model_scale_tab, "")
self.model_view_tab = QtWidgets.QWidget()
self.model_view_tab.setObjectName("model_view_tab")
self.horizontalLayout = QtWidgets.QHBoxLayout(self.model_view_tab)
self.horizontalLayout.setObjectName("horizontalLayout")
self.model_param_disp = QtWidgets.QWidget(self.model_view_tab)
self.model_param_disp.setObjectName("model_param_disp")
self.gridLayout_4 = QtWidgets.QGridLayout(self.model_param_disp)
self.gridLayout_4.setObjectName("gridLayout_4")
self.label_23 = QtWidgets.QLabel(self.model_param_disp)
self.label_23.setMinimumSize(QtCore.QSize(114, 621))
self.label_23.setObjectName("label_23")
self.gridLayout_4.addWidget(self.label_23, 0, 0, 1, 1)
self.horizontalLayout.addWidget(self.model_param_disp)
self.model_view_window = ModelViewWidget(self.model_view_tab)
self.model_view_window.setMinimumSize(QtCore.QSize(971, 631))
self.model_view_window.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.model_view_window.setFrameShadow(QtWidgets.QFrame.Raised)
self.model_view_window.setObjectName("model_view_window")
self.horizontalLayout.addWidget(self.model_view_window)
self.tabs.addTab(self.model_view_tab, "")
self.segment_tab = QtWidgets.QWidget()
self.segment_tab.setObjectName("segment_tab")
self.verticalLayout = QtWidgets.QVBoxLayout(self.segment_tab)
self.verticalLayout.setObjectName("verticalLayout")
self.seg_view = ImageSegmentWidget(self.segment_tab)
self.seg_view.setMinimumSize(QtCore.QSize(1101, 481))
self.seg_view.setObjectName("seg_view")
self.verticalLayout.addWidget(self.seg_view)
self.seg_widget = QtWidgets.QWidget(self.segment_tab)
self.seg_widget.setMinimumSize(QtCore.QSize(1122, 90))
self.seg_widget.setMaximumSize(QtCore.QSize(16777215, 141))
self.seg_widget.setObjectName("seg_widget")
self.gridLayout_5 = QtWidgets.QGridLayout(self.seg_widget)
self.gridLayout_5.setObjectName("gridLayout_5")
self.label_40 = QtWidgets.QLabel(self.seg_widget)
self.label_40.setObjectName("label_40")
self.gridLayout_5.addWidget(self.label_40, 0, 0, 1, 1)
self.label_24 = QtWidgets.QLabel(self.seg_widget)
self.label_24.setObjectName("label_24")
self.gridLayout_5.addWidget(self.label_24, 0, 1, 1, 1)
self.label_26 = QtWidgets.QLabel(self.seg_widget)
self.label_26.setObjectName("label_26")
self.gridLayout_5.addWidget(self.label_26, 0, 2, 1, 1)
self.label_35 = QtWidgets.QLabel(self.seg_widget)
self.label_35.setObjectName("label_35")
self.gridLayout_5.addWidget(self.label_35, 0, 3, 1, 1)
self.label_36 = QtWidgets.QLabel(self.seg_widget)
self.label_36.setObjectName("label_36")
self.gridLayout_5.addWidget(self.label_36, 0, 4, 1, 1)
self.label_37 = QtWidgets.QLabel(self.seg_widget)
self.label_37.setObjectName("label_37")
self.gridLayout_5.addWidget(self.label_37, 0, 5, 1, 1)
self.label_38 = QtWidgets.QLabel(self.seg_widget)
self.label_38.setObjectName("label_38")
self.gridLayout_5.addWidget(self.label_38, 0, 6, 1, 1)
self.label_39 = QtWidgets.QLabel(self.seg_widget)
self.label_39.setObjectName("label_39")
self.gridLayout_5.addWidget(self.label_39, 0, 7, 1, 1)
self.line_10 = QtWidgets.QFrame(self.seg_widget)
self.line_10.setFrameShape(QtWidgets.QFrame.VLine)
self.line_10.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_10.setObjectName("line_10")
self.gridLayout_5.addWidget(self.line_10, 0, 8, 4, 1)
self.label_43 = QtWidgets.QLabel(self.seg_widget)
self.label_43.setObjectName("label_43")
self.gridLayout_5.addWidget(self.label_43, 0, 9, 1, 2)
self.line_11 = QtWidgets.QFrame(self.seg_widget)
self.line_11.setFrameShape(QtWidgets.QFrame.VLine)
self.line_11.setFrameShadow(QtWidgets.QFrame.Sunken)
self.line_11.setObjectName("line_11")
self.gridLayout_5.addWidget(self.line_11, 0, 12, 4, 1)
spacerItem5 = QtWidgets.QSpacerItem(176, 110, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_5.addItem(spacerItem5, 0, 13, 4, 1)
spacerItem6 = QtWidgets.QSpacerItem(88, 81, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
self.gridLayout_5.addItem(spacerItem6, 0, 14, 3, 1)
self.seg_mov_spin = QtWidgets.QSpinBox(self.seg_widget)
self.seg_mov_spin.setObjectName("seg_mov_spin")
self.gridLayout_5.addWidget(self.seg_mov_spin, 1, 0, 1, 1)
self.seg_frame_spin = QtWidgets.QSpinBox(self.seg_widget)
self.seg_frame_spin.setObjectName("seg_frame_spin")
self.gridLayout_5.addWidget(self.seg_frame_spin, 1, 1, 1, 1)
self.body_thresh_spin = QtWidgets.QSpinBox(self.seg_widget)
self.body_thresh_spin.setObjectName("body_thresh_spin")
self.gridLayout_5.addWidget(self.body_thresh_spin, 1, 2, 1, 1)
self.wing_thresh_spin = QtWidgets.QSpinBox(self.seg_widget)
self.wing_thresh_spin.setObjectName("wing_thresh_spin")
self.gridLayout_5.addWidget(self.wing_thresh_spin, 1, 3, 1, 1)
self.sigma_spin = QtWidgets.QDoubleSpinBox(self.seg_widget)
self.sigma_spin.setObjectName("sigma_spin")
self.gridLayout_5.addWidget(self.sigma_spin, 1, 4, 1, 1)
self.K_spin = QtWidgets.QSpinBox(self.seg_widget)
self.K_spin.setObjectName("K_spin")
self.gridLayout_5.addWidget(self.K_spin, 1, 5, 1, 1)
self.min_body_spin = QtWidgets.QSpinBox(self.seg_widget)
self.min_body_spin.setObjectName("min_body_spin")
self.gridLayout_5.addWidget(self.min_body_spin, 1, 6, 1, 1)
self.min_wing_spin = QtWidgets.QSpinBox(self.seg_widget)
self.min_wing_spin.setObjectName("min_wing_spin")
self.gridLayout_5.addWidget(self.min_wing_spin, 1, 7, 1, 1)
self.label_44 = QtWidgets.QLabel(self.seg_widget)
self.label_44.setObjectName("label_44")
self.gridLayout_5.addWidget(self.label_44, 1, 9, 1, 1)
self.mask_cam_nr_spin = QtWidgets.QSpinBox(self.seg_widget)
self.mask_cam_nr_spin.setObjectName("mask_cam_nr_spin")
self.gridLayout_5.addWidget(self.mask_cam_nr_spin, 1, 10, 1, 2)
self.label_45 = QtWidgets.QLabel(self.seg_widget)
self.label_45.setObjectName("label_45")
self.gridLayout_5.addWidget(self.label_45, 2, 9, 1, 1)
self.mask_seg_nr_spin = QtWidgets.QSpinBox(self.seg_widget)
self.mask_seg_nr_spin.setObjectName("mask_seg_nr_spin")
self.gridLayout_5.addWidget(self.mask_seg_nr_spin, 2, 10, 1, 2)
self.seg_update_btn = QtWidgets.QPushButton(self.seg_widget)
self.seg_update_btn.setObjectName("seg_update_btn")
self.gridLayout_5.addWidget(self.seg_update_btn, 3, 0, 1, 2)
self.add_mask_btn = QtWidgets.QPushButton(self.seg_widget)
self.add_mask_btn.setObjectName("add_mask_btn")
self.gridLayout_5.addWidget(self.add_mask_btn, 3, 9, 1, 1)
self.reset_mask_btn = QtWidgets.QPushButton(self.seg_widget)
self.reset_mask_btn.setObjectName("reset_mask_btn")
self.gridLayout_5.addWidget(self.reset_mask_btn, 3, 11, 1, 1)
self.continue_btn = QtWidgets.QPushButton(self.seg_widget)
self.continue_btn.setObjectName("continue_btn")
self.gridLayout_5.addWidget(self.continue_btn, 3, 14, 1, 1)
self.verticalLayout.addWidget(self.seg_widget)
self.tabs.addTab(self.segment_tab, "")
self.pcl_view_tab = QtWidgets.QWidget()
self.pcl_view_tab.setObjectName("pcl_view_tab")
self.verticalLayout_6 = QtWidgets.QVBoxLayout(self.pcl_view_tab)
self.verticalLayout_6.setObjectName("verticalLayout_6")
self.pcl_view = BBoxWidget(self.pcl_view_tab)
self.pcl_view.setMinimumSize(QtCore.QSize(1121, 521))
self.pcl_view.setFrameShape(QtWidgets.QFrame.StyledPanel)
self.pcl_view.setFrameShadow(QtWidgets.QFrame.Raised)
self.pcl_view.setObjectName("pcl_view")
self.verticalLayout_6.addWidget(self.pcl_view)
self.widget_5 = QtWidgets.QWidget(self.pcl_view_tab)
self.widget_5.setMinimumSize(QtCore.QSize(1101, 111))
self.widget_5.setObjectName("widget_5")
self.gridLayout_8 = QtWidgets.QGridLayout(self.widget_5)
self.gridLayout_8.setObjectName("gridLayout_8")
self.label_41 = QtWidgets.QLabel(self.widget_5)
self.label_41.setObjectName("label_41")
self.gridLayout_8.addWidget(self.label_41, 0, 0, 1, 1)
self.flight_select_btn_group = QtWidgets.QGroupBox(self.widget_5)
self.flight_select_btn_group.setObjectName("flight_select_btn_group")
self.gridLayout_6 = QtWidgets.QGridLayout(self.flight_select_btn_group)
self.gridLayout_6.setObjectName("gridLayout_6")
self.tethered_radio_btn = QtWidgets.QRadioButton(self.flight_select_btn_group)
self.tethered_radio_btn.setObjectName("tethered_radio_btn")
self.gridLayout_6.addWidget(self.tethered_radio_btn, 0, 0, 1, 1)
self.free_radio_btn = QtWidgets.QRadioButton(self.flight_select_btn_group)
self.free_radio_btn.setObjectName("free_radio_btn")
self.gridLayout_6.addWidget(self.free_radio_btn, 1, 0, 1, 1)
self.gridLayout_8.addWidget(self.flight_select_btn_group, 0, 1, 4, 1)
self.label_47 = QtWidgets.QLabel(self.widget_5)
self.label_47.setObjectName("label_47")
self.gridLayout_8.addWidget(self.label_47, 0, 2, 1, 1)
self.label_51 = QtWidgets.QLabel(self.widget_5)
self.label_51.setObjectName("label_51")
self.gridLayout_8.addWidget(self.label_51, 0, 3, 1, 1)
spacerItem7 = QtWidgets.QSpacerItem(456, 90, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_8.addItem(spacerItem7, 0, 4, 4, 1)
self.view_select_group = QtWidgets.QGroupBox(self.widget_5)
self.view_select_group.setObjectName("view_select_group")
self.verticalLayout_3 = QtWidgets.QVBoxLayout(self.view_select_group)
self.verticalLayout_3.setObjectName("verticalLayout_3")
self.pcl_view_btn = QtWidgets.QRadioButton(self.view_select_group)
self.pcl_view_btn.setObjectName("pcl_view_btn")
self.verticalLayout_3.addWidget(self.pcl_view_btn)
self.bbox_view_btn = QtWidgets.QRadioButton(self.view_select_group)
self.bbox_view_btn.setObjectName("bbox_view_btn")
self.verticalLayout_3.addWidget(self.bbox_view_btn)
self.model_view_btn = QtWidgets.QRadioButton(self.view_select_group)
self.model_view_btn.setObjectName("model_view_btn")
self.verticalLayout_3.addWidget(self.model_view_btn)
self.gridLayout_8.addWidget(self.view_select_group, 0, 5, 4, 1)
self.pcl_mov_spin = QtWidgets.QSpinBox(self.widget_5)
self.pcl_mov_spin.setObjectName("pcl_mov_spin")
self.gridLayout_8.addWidget(self.pcl_mov_spin, 1, 0, 1, 1)
self.stroke_bound_spin = QtWidgets.QSpinBox(self.widget_5)
self.stroke_bound_spin.setObjectName("stroke_bound_spin")
self.gridLayout_8.addWidget(self.stroke_bound_spin, 1, 2, 1, 1)
self.wing_pitch_bound_spin = QtWidgets.QSpinBox(self.widget_5)
self.wing_pitch_bound_spin.setObjectName("wing_pitch_bound_spin")
self.gridLayout_8.addWidget(self.wing_pitch_bound_spin, 1, 3, 1, 1)
self.label_42 = QtWidgets.QLabel(self.widget_5)
self.label_42.setObjectName("label_42")
self.gridLayout_8.addWidget(self.label_42, 2, 0, 1, 1)
self.label_48 = QtWidgets.QLabel(self.widget_5)
self.label_48.setObjectName("label_48")
self.gridLayout_8.addWidget(self.label_48, 2, 2, 1, 1)
self.label_46 = QtWidgets.QLabel(self.widget_5)
self.label_46.setObjectName("label_46")
self.gridLayout_8.addWidget(self.label_46, 2, 3, 1, 1)
self.pcl_frame_spin = QtWidgets.QSpinBox(self.widget_5)
self.pcl_frame_spin.setObjectName("pcl_frame_spin")
self.gridLayout_8.addWidget(self.pcl_frame_spin, 3, 0, 1, 1)
self.dev_bound_spin = QtWidgets.QSpinBox(self.widget_5)
self.dev_bound_spin.setObjectName("dev_bound_spin")
self.gridLayout_8.addWidget(self.dev_bound_spin, 3, 2, 1, 1)
self.sphere_radius_spin = QtWidgets.QDoubleSpinBox(self.widget_5)
self.sphere_radius_spin.setObjectName("sphere_radius_spin")
self.gridLayout_8.addWidget(self.sphere_radius_spin, 3, 3, 1, 1)
self.verticalLayout_6.addWidget(self.widget_5)
self.tabs.addTab(self.pcl_view_tab, "")
self.opt_tab = QtWidgets.QWidget()
self.opt_tab.setObjectName("opt_tab")
self.verticalLayout_7 = QtWidgets.QVBoxLayout(self.opt_tab)
self.verticalLayout_7.setObjectName("verticalLayout_7")
self.opt_widget = QtWidgets.QWidget(self.opt_tab)
self.opt_widget.setObjectName("opt_widget")
self.verticalLayout_8 = QtWidgets.QVBoxLayout(self.opt_widget)
self.verticalLayout_8.setObjectName("verticalLayout_8")
self.contour_view = ContourViewWidget(self.opt_widget)
self.contour_view.setObjectName("contour_view")
self.verticalLayout_8.addWidget(self.contour_view)
self.opt_settings_widget = QtWidgets.QWidget(self.opt_widget)
self.opt_settings_widget.setMinimumSize(QtCore.QSize(0, 120))
self.opt_settings_widget.setObjectName("opt_settings_widget")
self.gridLayout_9 = QtWidgets.QGridLayout(self.opt_settings_widget)
self.gridLayout_9.setObjectName("gridLayout_9")
self.label_52 = QtWidgets.QLabel(self.opt_settings_widget)
self.label_52.setObjectName("label_52")
self.gridLayout_9.addWidget(self.label_52, 0, 0, 1, 1)
self.label_54 = QtWidgets.QLabel(self.opt_settings_widget)
self.label_54.setObjectName("label_54")
self.gridLayout_9.addWidget(self.label_54, 0, 1, 1, 1)
spacerItem8 = QtWidgets.QSpacerItem(849, 78, QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Minimum)
self.gridLayout_9.addItem(spacerItem8, 0, 2, 3, 1)
self.init_view_check = QtWidgets.QCheckBox(self.opt_settings_widget)
self.init_view_check.setObjectName("init_view_check")
self.gridLayout_9.addWidget(self.init_view_check, 0, 3, 1, 1)
self.opt_mov_spin = QtWidgets.QSpinBox(self.opt_settings_widget)
self.opt_mov_spin.setObjectName("opt_mov_spin")
self.gridLayout_9.addWidget(self.opt_mov_spin, 1, 0, 1, 1)
self.alpha_spin = QtWidgets.QDoubleSpinBox(self.opt_settings_widget)
self.alpha_spin.setObjectName("alpha_spin")
self.gridLayout_9.addWidget(self.alpha_spin, 1, 1, 1, 1)
self.dest_view_check = QtWidgets.QCheckBox(self.opt_settings_widget)
self.dest_view_check.setObjectName("dest_view_check")
self.gridLayout_9.addWidget(self.dest_view_check, 1, 3, 2, 1)
self.label_53 = QtWidgets.QLabel(self.opt_settings_widget)
self.label_53.setObjectName("label_53")
self.gridLayout_9.addWidget(self.label_53, 2, 0, 1, 1)
self.opt_frame_spin = QtWidgets.QSpinBox(self.opt_settings_widget)
self.opt_frame_spin.setObjectName("opt_frame_spin")
self.gridLayout_9.addWidget(self.opt_frame_spin, 3, 0, 1, 1)
self.src_view_check = QtWidgets.QCheckBox(self.opt_settings_widget)
self.src_view_check.setObjectName("src_view_check")
self.gridLayout_9.addWidget(self.src_view_check, 3, 3, 1, 1)
self.verticalLayout_8.addWidget(self.opt_settings_widget)
self.verticalLayout_7.addWidget(self.opt_widget)
self.tabs.addTab(self.opt_tab, "")
self.verticalLayout_5.addWidget(self.tabs)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
self.tabs.setCurrentIndex(6)
QtCore.QMetaObject.connectSlotsByName(MainWindow)

def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "DipteraTrack"))
self.label.setText(_translate("MainWindow", "Select session folder:"))
self.label_3.setText(_translate("MainWindow", "Session parameters"))
self.label_2.setText(_translate("MainWindow", "Session folder:"))
self.ses_folder_label.setText(_translate("MainWindow", "..."))
self.label_5.setText(_translate("MainWindow", "Background folder:"))
self.bckg_folder_label.setText(_translate("MainWindow", "..."))
self.label_7.setText(_translate("MainWindow", "Calibration folder:"))
self.cal_folder_label.setText(_translate("MainWindow", "..."))
self.label_9.setText(_translate("MainWindow", "Movie folders:"))
self.mov_folder1_label.setText(_translate("MainWindow", "..."))
self.mov_folder2_label.setText(_translate("MainWindow", "..."))
self.mov_folder3_label.setText(_translate("MainWindow", "..."))
self.mov_folder4_label.setText(_translate("MainWindow", "..."))
self.mov_folder5_label.setText(_translate("MainWindow", "..."))
self.mov_folder6_label.setText(_translate("MainWindow", "..."))
self.mov_folder7_label.setText(_translate("MainWindow", "..."))
self.mov_folder8_label.setText(_translate("MainWindow", "..."))
self.label_18.setText(_translate("MainWindow", "Camera folders:"))
self.cam_folder1_label.setText(_translate("MainWindow", "..."))
self.cam_folder2_label.setText(_translate("MainWindow", "..."))
self.cam_folder3_label.setText(_translate("MainWindow", "..."))
self.cam_folder4_label.setText(_translate("MainWindow", "..."))
self.ses_folder_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.bckg_folder_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cal_folder_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder1_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder2_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder3_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder4_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder5_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder6_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder7_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mov_folder8_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder1_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder2_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder3_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder4_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder5_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder6_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.cam_folder5_label.setText(_translate("MainWindow", "..."))
self.cam_folder6_label.setText(_translate("MainWindow", "..."))
self.label_25.setText(_translate("MainWindow", "Frame name:"))
self.frame_name_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.frame_name_label.setText(_translate("MainWindow", "..."))
self.label_27.setText(_translate("MainWindow", "Background image format:"))
self.label_28.setText(_translate("MainWindow", "Calibration image format:"))
self.label_29.setText(_translate("MainWindow", "Frame image format:"))
self.label_30.setText(_translate("MainWindow", "Trigger settings"))
self.label_31.setText(_translate("MainWindow", "start frame nr:"))
self.label_32.setText(_translate("MainWindow", "trigger frame nr:"))
self.label_33.setText(_translate("MainWindow", "end frame nr:"))
self.label_34.setText(_translate("MainWindow", "Trigger mode:"))
self.label_4.setText(_translate("MainWindow", "Model settings"))
self.mdl_loc_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mdl_loc_label.setText(_translate("MainWindow", "..."))
self.label_10.setText(_translate("MainWindow", "Model location:"))
self.label_11.setText(_translate("MainWindow", "Model name:"))
self.mdl_name_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.mdl_name_label.setText(_translate("MainWindow", "..."))
self.label_6.setText(_translate("MainWindow", "Calibration file:"))
self.cal_file_label.setText(_translate("MainWindow", "..."))
self.label_8.setText(_translate("MainWindow", "Session name:"))
self.ses_name_label.setText(_translate("MainWindow", "..."))
self.reset_selection_push_btn.setText(_translate("MainWindow", "reset selection"))
self.start_session_push_btn.setText(_translate("MainWindow", "start session"))
self.save_settings_push_btn.setText(_translate("MainWindow", "save parameter file"))
self.load_settings_file_label.setText(_translate("MainWindow", "..."))
self.load_settings_push_btn.setText(_translate("MainWindow", "load parameter file"))
self.load_settings_rbtn.setText(_translate("MainWindow", "RadioButton"))
self.tabs.setTabText(self.tabs.indexOf(self.ses_par_tab), _translate("MainWindow", "Movie selection"))
self.label_16.setText(_translate("MainWindow", "Voxel grid parameters:"))
self.label_12.setText(_translate("MainWindow", "Nx:"))
self.label_13.setText(_translate("MainWindow", "Ny:"))
self.label_14.setText(_translate("MainWindow", "Nz:"))
self.label_15.setText(_translate("MainWindow", "ds:"))
self.label_17.setText(_translate("MainWindow", "x0:"))
self.label_19.setText(_translate("MainWindow", "y0:"))
self.label_20.setText(_translate("MainWindow", "z0:"))
self.calc_vox_btn.setText(_translate("MainWindow", "calculate voxel grid"))
self.label_49.setText(_translate("MainWindow", "Camera parameters:"))
self.label_50.setText(_translate("MainWindow", "pixel size (mm):"))
self.tabs.setTabText(self.tabs.indexOf(self.focal_grid_tab), _translate("MainWindow", "Voxel grid"))
self.label_22.setText(_translate("MainWindow", "Movie nr:"))
self.load_scale_btn.setText(_translate("MainWindow", "load model scale"))
self.save_scale_btn.setText(_translate("MainWindow", "save model scale"))
self.set_model_btn.setText(_translate("MainWindow", "set model scale"))
self.label_21.setText(_translate("MainWindow", "Frame:"))
self.tabs.setTabText(self.tabs.indexOf(self.model_scale_tab), _translate("MainWindow", "Scale model"))
self.label_23.setText(_translate("MainWindow", "Model parameters:"))
self.tabs.setTabText(self.tabs.indexOf(self.model_view_tab), _translate("MainWindow", "Model view"))
self.label_40.setText(_translate("MainWindow", "movie nr:"))
self.label_24.setText(_translate("MainWindow", "frame:"))
self.label_26.setText(_translate("MainWindow", "body threshold"))
self.label_35.setText(_translate("MainWindow", "wing threshold"))
self.label_36.setText(_translate("MainWindow", "sigma"))
self.label_37.setText(_translate("MainWindow", "K"))
self.label_38.setText(_translate("MainWindow", "min body area"))
self.label_39.setText(_translate("MainWindow", "min wing area"))
self.label_43.setText(_translate("MainWindow", "Set image mask:"))
self.label_44.setText(_translate("MainWindow", "cam nr:"))
self.label_45.setText(_translate("MainWindow", "segment nr:"))
self.seg_update_btn.setText(_translate("MainWindow", "update"))
self.add_mask_btn.setText(_translate("MainWindow", "add to mask"))
self.reset_mask_btn.setText(_translate("MainWindow", "reset"))
self.continue_btn.setText(_translate("MainWindow", "continue"))
self.tabs.setTabText(self.tabs.indexOf(self.segment_tab), _translate("MainWindow", "Segmentation"))
self.label_41.setText(_translate("MainWindow", "movie nr:"))
self.tethered_radio_btn.setText(_translate("MainWindow", "tethered flight"))
self.free_radio_btn.setText(_translate("MainWindow", "free flight"))
self.label_47.setText(_translate("MainWindow", "stroke angle bound:"))
self.label_51.setText(_translate("MainWindow", "wing pitch angle bound:"))
self.pcl_view_btn.setText(_translate("MainWindow", "pcl view"))
self.bbox_view_btn.setText(_translate("MainWindow", "bbox view"))
self.model_view_btn.setText(_translate("MainWindow", "model view"))
self.label_42.setText(_translate("MainWindow", "frame nr:"))
self.label_48.setText(_translate("MainWindow", "deviation angle bound:"))
self.label_46.setText(_translate("MainWindow", "sphere radius:"))
self.tabs.setTabText(self.tabs.indexOf(self.pcl_view_tab), _translate("MainWindow", "Pointcloud view"))
self.label_52.setText(_translate("MainWindow", "movie nr:"))
self.label_54.setText(_translate("MainWindow", "alpha:"))
self.init_view_check.setText(_translate("MainWindow", "initial state"))
self.dest_view_check.setText(_translate("MainWindow", "destination contour"))
self.label_53.setText(_translate("MainWindow", "frame nr:"))
self.src_view_check.setText(_translate("MainWindow", "source contour"))
self.tabs.setTabText(self.tabs.indexOf(self.opt_tab), _translate("MainWindow", "Contour optimization"))
from BoundingBoxWidget import BBoxWidget
from ContourViewWidget import ContourViewWidget
from ImageSegmentWidget import ImageSegmentWidget
from ModelViewWidget import ModelViewWidget
from ScaleModelWidget import ScaleModelWidget
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
# Source: tests/test_day5.py (repo: n1ckdm/advent-of-code-2020, license: MIT)
from aoc_2020.day5 import part1, get_pos, part2
data = """BFFFBBFRRR
FFFBBBFRRR
BBFFBBFRLL
"""
def test_get_pos():
assert get_pos("FBFBBFFRLR") == (44, 5)
def test_part1():
assert part1(data) == 820
def test_part2():
assert part2(data) is None
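For reference, `get_pos` (imported from `aoc_2020.day5`, not shown here) decodes a boarding pass by binary space partitioning. A hypothetical standalone reimplementation that satisfies the same expectations:

```python
def decode_pos(boarding_pass):
    # 'B' and 'R' are 1-bits, 'F' and 'L' are 0-bits; the first 7
    # characters encode the row, the last 3 encode the column.
    bits = boarding_pass.translate(str.maketrans("FBLR", "0101"))
    return int(bits[:7], 2), int(bits[7:], 2)
```

This reproduces the `(44, 5)` expectation in `test_get_pos` above.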
# Source: inferencia/util/reader/reader_factory.py (repo: yuya-mochimaru-np/inferencia, license: Apache-2.0)
import os.path as osp
from .reader.video_reader import VideoReader
class ReaderFactory():
video_exts = [".mp4", ".avi", ".mov", ".MOV", ".mkv"]
    @staticmethod
    def create(target_input, target_fps):
if osp.isfile(target_input):
ext = osp.splitext(target_input)[1]
if ext in ReaderFactory.video_exts:
return VideoReader(target_input, target_fps)
else:
msg = "{} is not supported. {} are supported.".format(
ext, ReaderFactory.video_exts)
raise TypeError(msg)
# elif osp.isdir(target_input):
# return ImageReader(target_input)
# # USB camera
# elif isinstance(target_input, int):
# return VideoReader(target_input)
# # network camera
# elif isinstance(target_input, str):
# return NetworkCameraReader(target_input)
else:
raise ValueError()
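The factory's dispatch hinges on the file extension; the core check can be exercised on its own (`is_video_file` is a hypothetical helper, and unlike the factory it skips the `osp.isfile` existence test):

```python
import os.path as osp

VIDEO_EXTS = [".mp4", ".avi", ".mov", ".MOV", ".mkv"]

def is_video_file(path):
    # osp.splitext returns the extension with its leading dot.
    return osp.splitext(path)[1] in VIDEO_EXTS
```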
# Source: views/routes.py (repo: macwille/python-chat-app, license: CC0-1.0)
from app import app
from views import subject_routes, room_routes, user_routes, message_routes
from models import user_service
from flask import Flask, flash, render_template, request, session, abort
from db import db
# Rest of the routes are imported from /views
@app.route("/")
def index():
return render_template("index.html")
@app.route("/create/id=<int:id>")
def create(id):
return render_template("create.html", id=id)
@app.route("/search")
def search():
return render_template("search.html")
# Utilties
def check_token():
token = session["csrf_token"]
form_token = request.form["csrf_token"]
if token != form_token:
print("Failed token")
abort(403)
else:
print("Token checked")
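A plain `!=` works for this double-submit check, but token comparisons are commonly hardened against timing attacks with `hmac.compare_digest`; a sketch (not wired into the view above):

```python
import hmac

def tokens_match(session_token, form_token):
    # Runs in time independent of where the strings first differ,
    # so an attacker cannot learn a token prefix from response times.
    return hmac.compare_digest(session_token, form_token)
```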
def word_too_long(string):
    return any(len(word) > 30 for word in string.split())
@app.errorhandler(403)
def resource_not_found(e):
flash("Forbidden 403", "error")
return render_template("index.html")
@app.errorhandler(500)
def server_error(e):
flash("Server encountered an internal error", "error")
return render_template("index.html")
@app.errorhandler(Exception)
def error_dump(error):
print(error)
flash("Unexpected Error", "error")
return render_template("index.html")
#!/usr/bin/python
# Source: test_autocorrelation.py (repo: jacob975/TATIRP, license: MIT)
'''
Program:
This is a test program for autocorrelation
Usage:
test_autocorrelation.py
Editor:
Jacob975
20181127
#################################
update log
'''
import numpy as np
import time
import matplotlib.pyplot as plt
from uncertainties import unumpy, ufloat
# Convert flux to magnitude
def flux2mag(flux, err_flux):
uflux = unumpy.uarray(flux, err_flux)
    umag = -2.5 * unumpy.log10(uflux)  # base-10 log for magnitudes
mag = unumpy.nominal_values(umag)
err_mag = unumpy.std_devs(umag)
return mag, err_mag
# Convert magnitude to flux
def mag2flux(mag, err_mag):
umag = unumpy.uarray(mag, err_mag)
uflux = 10 ** (-0.4*umag)
flux = unumpy.nominal_values(uflux)
err_flux = unumpy.std_devs(uflux)
return flux, err_flux
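Setting the error propagation aside, the two conversions are exact inverses: mag = -2.5 log10(flux) and flux = 10**(-0.4 mag). A stdlib-only round-trip check of that identity:

```python
import math

def flux_to_mag(flux):
    return -2.5 * math.log10(flux)

def mag_to_flux(mag):
    return 10 ** (-0.4 * mag)

# -2.5 * -0.4 == 1, so the round trip recovers the input.
for f in (1.0, 10.0, 250.0):
    assert abs(mag_to_flux(flux_to_mag(f)) - f) < 1e-9 * f
```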
def autocorr(x):
# Compute the autocorrelation of the signal, based on the properties of the
# power spectral density of the signal.
xp = x-np.mean(x)
f = np.fft.fft(xp)
p = np.array([np.real(v)**2+np.imag(v)**2 for v in f])
pi = np.fft.ifft(p)
    return np.real(pi)[:x.size // 2] / np.sum(xp**2)
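`autocorr` is an application of the Wiener-Khinchin theorem: the inverse FFT of the power spectrum equals the circular autocorrelation. A self-contained cross-check against a direct computation (note the `//` floor division, which the Python 2 `/` in the original relied on):

```python
import numpy as np

def autocorr_fft(x):
    # Autocorrelation via the power spectral density.
    xp = x - np.mean(x)
    p = np.abs(np.fft.fft(xp)) ** 2
    return np.real(np.fft.ifft(p))[: x.size // 2] / np.sum(xp ** 2)

def autocorr_direct(x):
    # Brute-force circular autocorrelation for comparison.
    xp = x - np.mean(x)
    lags = range(x.size // 2)
    acf = np.array([np.sum(xp * np.roll(xp, -k)) for k in lags])
    return acf / np.sum(xp ** 2)

sig = np.sin(2 * np.pi * np.arange(64) / 16)
assert np.allclose(autocorr_fft(sig), autocorr_direct(sig))
```

At lag 0 both versions return exactly 1, since the signal is compared with itself.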
#--------------------------------------------
# Main code
if __name__ == "__main__":
# Measure time
start_time = time.time()
#----------------------------------------
# Period
T = 50
term = 500
# standard deviation
stdev = 2
# Model a Target Star
# Sine wave (radian)
t = np.arange(term)
intrinsic = np.sin(2 * np.pi * t / T) + 10
intrinsic_mag, _ = flux2mag(intrinsic, stdev)
uncertainties = np.random.normal(0, stdev, term)
flux = intrinsic + uncertainties
mag, err_mag = flux2mag(flux, stdev)
target = np.transpose([ t, mag, err_mag])
correlation = autocorr(mag)
#----------------------------------------
# Plot the answers
# target
target_plt = plt.figure("target", figsize=(9, 6), dpi=100)
plt.subplot(3, 1, 1)
plt.title('intrinsic_target')
plt.xlabel('time')
plt.ylabel('mag')
frame1 = plt.gca()
frame1.axes.get_xaxis().set_visible(False)
plt.plot(t, intrinsic_mag, label = 'intrinsic_target')
plt.legend()
plt.subplot(3, 1, 2)
plt.title('observed_target')
plt.xlabel("time")
plt.ylabel('mag')
frame1 = plt.gca()
frame1.axes.get_xaxis().set_visible(False)
plt.errorbar(target[:,0], target[:,1], yerr = target[:,2], fmt = 'ro', alpha = 0.5, label = 'observed_target')
plt.legend()
plt.subplot(3, 1, 3)
plt.title('autocorrelation')
plt.ylabel('R')
plt.xlabel('time')
    plt.bar(t[1:t.size // 2], correlation[1:], label = 'correlation')
plt.legend()
plt.show()
#---------------------------------------
# Measure time
elapsed_time = time.time() - start_time
    print("Exiting Main Program, spending", elapsed_time, "seconds.")
# Source: leetcode/ag_107.py (repo: baobei813214232/common-alglib, license: MIT)
from ojlib.TreeLib import *
class Solution(object):
def levelOrderBottom(self, root):
"""
:type root: TreeNode
:rtype: List[List[int]]
"""
if not root:
return []
def slu(root, level, ans):
if not root: return []
if level >= len(ans):
ans.insert(0,[])
slu(root.left, level+1, ans)
slu(root.right, level+1, ans)
ans[len(ans) - level - 1].append(root.val)
ans = []
slu(root, 0 , ans)
return ans
def run():
s = Solution()
root = make_tree()
ans = s.levelOrderBottom(root)
for i in ans:
        print(i)
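`make_tree` comes from `ojlib`, so the demo above is not runnable on its own; here is a self-contained version of the same bottom-up level-order traversal with a hand-built tree (`Node` is a stand-in for the library's tree type):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order_bottom(root):
    ans = []
    def walk(node, level):
        if not node:
            return
        if level >= len(ans):
            ans.insert(0, [])  # grow a new level at the front
        walk(node.left, level + 1)
        walk(node.right, level + 1)
        ans[len(ans) - level - 1].append(node.val)
    walk(root, 0)
    return ans

# The classic [3, 9, 20, null, null, 15, 7] example tree.
tree = Node(3, Node(9), Node(20, Node(15), Node(7)))
assert level_order_bottom(tree) == [[15, 7], [9, 20], [3]]
```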
# Source: main.py (repo: adelhult/please-plot, license: MIT)
from traceback import print_exc
from uuid import uuid4
from flask import Flask, request, send_from_directory
from function_plot import *
app = Flask(__name__)
max_simultaneous_requests = 8
simultaneous_requests = 0
@app.route('/generate/')
def generate():
global simultaneous_requests
simultaneous_requests += 1
try:
if simultaneous_requests >= max_simultaneous_requests:
raise Exception("Too many simultaneous requests")
fn = request.args.get("fn")
x_min = float(request.args.get("x_min", -8))
x_max = float(request.args.get("x_max", 8))
y_min = float(request.args.get("y_min", -5))
y_max = float(request.args.get("y_max", 5))
filename = str(uuid4()) + ".mp4"
return plot(fn, filename, x_min, x_max, y_min, y_max)
    except Exception:
print_exc()
return ""
finally:
simultaneous_requests -= 1
@app.route('/videos/<path:filename>')
def download_file(filename):
    # NOTE: `path` was undefined in the original (a NameError at request
    # time); "videos" is an assumed output directory for plot() -- adjust
    # it to the real location.
    return send_from_directory("videos", filename, mimetype="video/mp4", as_attachment=True)
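`simultaneous_requests += 1` on a module-level int is not atomic under a threaded WSGI server, so the limiter above can over- or under-count. A hedged alternative sketch (names are illustrative, not part of this app) using a semaphore:

```python
import threading

MAX_CONCURRENT = 8
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def run_guarded(render):
    # Atomically claim a slot; refuse immediately instead of blocking
    # when all slots are busy.
    if not slots.acquire(blocking=False):
        return None
    try:
        return render()
    finally:
        slots.release()
```

`acquire(blocking=False)` performs the check-and-increment in one step, and the `finally` clause guarantees the slot is returned even when rendering raises.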
# Source: structmanager/optimization/genesis/utils.py (repo: saullocastro/structmanager, license: BSD-3-Clause)
# NOTE: parameter order corrected -- the required `lr` originally followed
# the defaulted `size`, which is a SyntaxError in Python.
def format_float(x, lr, size=8):
"""Format a float number
Parameters
----------
x : float
The float number.
size : int, optional
Desired size of the output string.
lr : str ('<' or '>')
Indicates if it should be left or right aligned.
Returns
-------
out : str
The formatted string.
"""
y = str(x)
if '.' in y:
has_floating_point = True
else:
has_floating_point = False
if lr == '<':
y = y.ljust(size)
elif lr == '>':
y = y.rjust(size)
else:
raise ValueError("`lr` must be '<' or '>'")
    if '.' not in y and has_floating_point:
raise ValueError('Float %f does not fit in size = %d' % (x, size))
return y
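A quick behavioral check of the formatter (written against the corrected `(x, lr, size=8)` order, since a required parameter cannot follow a defaulted one in Python):

```python
def format_float(x, lr, size=8):
    y = str(x)
    has_floating_point = '.' in y
    if lr == '<':
        y = y.ljust(size)
    elif lr == '>':
        y = y.rjust(size)
    else:
        raise ValueError("`lr` must be '<' or '>'")
    # Padding never removes characters, so this guard mirrors the
    # original but is effectively a no-op.
    if '.' not in y and has_floating_point:
        raise ValueError('%s does not fit in size = %d' % (x, size))
    return y
```

Values shorter than `size` are padded to exactly `size` characters; longer values pass through unchanged.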
# Source: src/chapter3/exercise1.py (repo: group13bse1/BSE-2021, license: MIT)
hours = input('Enter Hours \n')
rate = input('Enter Rate\n')
hours = float(hours)
rate = float(rate)
if hours <= 40:
pay = rate*hours
else:
extra_time = hours - 40
pay = (rate*hours) + ((rate*extra_time)/2)
print('Pay: ', pay)
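The same gross-pay rule as a reusable, testable function; note that `rate*hours + rate*extra/2` above is algebraically `40*rate + 1.5*rate*extra`, i.e. time-and-a-half for overtime:

```python
def compute_pay(hours, rate):
    if hours <= 40:
        return rate * hours
    extra = hours - 40
    return 40 * rate + 1.5 * rate * extra
```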
# Source: src/pytheas/tasks/schema.py (repo: dcronkite/pytheas, license: MIT)
def get_schema():
return {
'type': 'object',
'properties': {
'connections': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'name': {'type': 'string'},
'path': {'type': 'string'},
'driver': {'type': 'string'},
'server': {'type': 'string'},
'database': {'type': 'string'},
'name_col': {'type': 'string'},
'text_col': {'type': 'string'},
}
}
},
'documents': {'type': 'array', 'items': {'$ref': '#/definitions/document'}},
'irr_documents': {'type': 'array', 'items': {'$ref': '#/definitions/document'}},
'labels': {'type': 'array', 'items': {'type': 'string'}},
'highlights': {'type': 'array', 'items': {'type': 'string'}},
'project': {'type': 'string'},
'subproject': {'type': 'string'},
'start_date': {'type': 'string'},
'end_date': {'type': 'string'},
'annotation': {
'type': 'object',
'properties': {
'irr_percent': {'type': 'number'},
'irr_count': {'type': 'integer'},
'annotators': {
'type': 'array',
'items': {
'type': 'object',
'properties': {
'name': {'type': 'string'},
'number': {'type': 'integer'},
'percent': {'type': 'number', 'maximum': 1.0, 'minimum': 0.0},
'documents': {'type': 'array', 'items': {'$ref': '#/definitions/document'}},
},
},
},
},
}
},
'definitions': {
'document': {
'type': 'object',
'properties': {
'name': {'type': 'string'},
'metadata': {'type': 'object'},
'text': {'type': 'string'},
'offsets': {'type': 'array', 'items': {'$ref': '#/definitions/offset'}},
'highlights': {'type': 'array', 'items': {'type': 'string'}},
'expiration_date': {'type': 'string'},
}
},
'offset': {
'type': 'object',
'properties': {
'start': {'type': 'integer', 'minimum': 0},
'end': {'type': 'integer', 'minimum': 0}
}
},
}
    }
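A sanity check worth running over a schema like this is that every `$ref` resolves to an entry under `definitions`. A small hypothetical walker, shown against a trimmed-down stand-in rather than the full schema above:

```python
def ref_targets(node, refs=None):
    # Collect every '$ref' string from a nested JSON-like structure.
    refs = [] if refs is None else refs
    if isinstance(node, dict):
        for key, value in node.items():
            if key == '$ref':
                refs.append(value)
            else:
                ref_targets(value, refs)
    elif isinstance(node, list):
        for item in node:
            ref_targets(item, refs)
    return refs

schema = {  # trimmed stand-in for the schema returned above
    'properties': {'documents': {'type': 'array',
                                 'items': {'$ref': '#/definitions/document'}}},
    'definitions': {'document': {'type': 'object'}},
}
for ref in ref_targets(schema['properties']):
    assert ref.rsplit('/', 1)[-1] in schema['definitions'], ref
```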
# Source: hearthbreaker/cards/spells/neutral.py (repo: souserge/hearthbreaker, license: MIT)
from hearthbreaker.cards.base import SpellCard
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/1.11/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: url(r'^$', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: url(r'^$', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.conf.urls import url, include
2. Add a URL to urlpatterns: url(r'^blog/', include('blog.urls'))
"""
from django.conf.urls import url
from django.contrib import admin
from account import views as account_views
from quiz import views as quiz_views
from entry import views as entry_views
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^login/', account_views.login, name="login"),
url(r'^feed/', account_views.feed_seolgi, name="feed"),
url(r'^quizzes/', quiz_views.get_quizzes, name="get_quizzes"),
url(r'^index/', entry_views.get_index, name="get_index"),
]
quiz_views.update_quizzes() # When the project starts, execute "update_quizzes".
| 37.90625 | 80 | 0.718054 | 189 | 1,213 | 4.513228 | 0.349206 | 0.037515 | 0.07034 | 0.028136 | 0.200469 | 0.200469 | 0.087925 | 0.087925 | 0 | 0 | 0 | 0.008763 | 0.153339 | 1,213 | 31 | 81 | 39.129032 | 0.821811 | 0.566364 | 0 | 0 | 0 | 0 | 0.125241 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.384615 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
96dfc543c2de8dddb6e747e4e8c3935460f47a78 | 1,016 | py | Python | apps/core/views.py | InfinityLoopA-Z/BigBoxChallenge | eb1b70412af6859032d78d23edfb4c588c17b8cd | [
"MIT"
] | null | null | null | apps/core/views.py | InfinityLoopA-Z/BigBoxChallenge | eb1b70412af6859032d78d23edfb4c588c17b8cd | [
"MIT"
] | null | null | null | apps/core/views.py | InfinityLoopA-Z/BigBoxChallenge | eb1b70412af6859032d78d23edfb4c588c17b8cd | [
"MIT"
] | null | null | null | from rest_framework import viewsets
from django_filters.rest_framework import DjangoFilterBackend
from . import models, serializers, filtersets, pagination
class ActivityViewSet(viewsets.ModelViewSet):
"""A viewset of Activity model"""
queryset = models.Activity.objects.all()
serializer_class = serializers.ActivitySerializer
permission_classes = []
filter_backends = [
DjangoFilterBackend
]
filterset_class = filtersets.ActivityFilterSet
pagination_class = pagination.CustomPagination
lookup_field = 'slug'
class BoxViewSet(viewsets.ModelViewSet):
"""A ViewSet of Box model"""
queryset = models.Box.objects.all()
serializer_class = serializers.BoxSerializer
permission_classes = []
lookup_field = 'slug'
class CategoryViewSet(viewsets.ModelViewSet):
"""A ViewSet of Category model"""
queryset = models.Category.objects.all()
serializer_class = serializers.CategorySerializer
permission_classes = []
lookup_field = 'slug'
| 28.222222 | 61 | 0.744094 | 98 | 1,016 | 7.561224 | 0.418367 | 0.080972 | 0.08502 | 0.11336 | 0.353576 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173228 | 1,016 | 35 | 62 | 29.028571 | 0.882143 | 0.076772 | 0 | 0.26087 | 0 | 0 | 0.013015 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.130435 | 0 | 0.913043 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
96e0ca518913b396a050050562a88480c52f946d | 3,721 | py | Python | hearthbreaker/cards/spells/neutral.py | souserge/hearthbreaker | 481dcaa3ae13c7dc16c0e6b7f59f11c36fdb29a7 | [
"MIT"
] | 429 | 2015-01-01T16:07:20.000Z | 2022-03-16T22:30:50.000Z | hearthbreaker/cards/spells/neutral.py | souserge/hearthbreaker | 481dcaa3ae13c7dc16c0e6b7f59f11c36fdb29a7 | [
"MIT"
] | 47 | 2015-01-01T17:07:57.000Z | 2018-05-07T10:49:37.000Z | hearthbreaker/cards/spells/neutral.py | souserge/hearthbreaker | 481dcaa3ae13c7dc16c0e6b7f59f11c36fdb29a7 | [
"MIT"
] | 135 | 2015-01-12T21:52:17.000Z | 2022-02-25T21:18:08.000Z | from hearthbreaker.cards.base import SpellCard
from hearthbreaker.constants import CHARACTER_CLASS, CARD_RARITY
from hearthbreaker.tags.base import BuffUntil, Buff
from hearthbreaker.tags.event import TurnStarted
from hearthbreaker.tags.status import Stealth, Taunt, Frozen
import hearthbreaker.targeting
class TheCoin(SpellCard):
def __init__(self):
super().__init__("The Coin", 0, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False)
def use(self, player, game):
super().use(player, game)
if player.mana < 10:
player.mana += 1
class ArmorPlating(SpellCard):
def __init__(self):
super().__init__("Armor Plating", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.increase_health(1)
class EmergencyCoolant(SpellCard):
def __init__(self):
super().__init__("Emergency Coolant", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.add_buff(Buff(Frozen()))
class FinickyCloakfield(SpellCard):
def __init__(self):
super().__init__("Finicky Cloakfield", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_friendly_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.add_buff(BuffUntil(Stealth(), TurnStarted()))
class ReversingSwitch(SpellCard):
def __init__(self):
super().__init__("Reversing Switch", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_minion_spell_target)
def use(self, player, game):
super().use(player, game)
temp_attack = self.target.calculate_attack()
temp_health = self.target.health
if temp_attack == 0:
self.target.die(None)
else:
self.target.set_attack_to(temp_health)
self.target.set_health_to(temp_attack)
class RustyHorn(SpellCard):
def __init__(self):
super().__init__("Rusty Horn", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.add_buff(Buff(Taunt()))
class TimeRewinder(SpellCard):
def __init__(self):
super().__init__("Time Rewinder", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_friendly_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.bounce()
class WhirlingBlades(SpellCard):
def __init__(self):
super().__init__("Whirling Blades", 1, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False,
target_func=hearthbreaker.targeting.find_minion_spell_target)
def use(self, player, game):
super().use(player, game)
self.target.change_attack(1)
spare_part_list = [ArmorPlating(), EmergencyCoolant(), FinickyCloakfield(), TimeRewinder(), ReversingSwitch(),
RustyHorn(), WhirlingBlades()]
class GallywixsCoin(SpellCard):
def __init__(self):
super().__init__("Gallywix's Coin", 0, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, False)
def use(self, player, game):
super().use(player, game)
if player.mana < 10:
player.mana += 1
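Both coin spells above apply the same capped temporary-mana rule. The sketch below mirrors that logic with a stub player object; `StubPlayer` is illustrative only and not part of hearthbreaker.

```python
# Minimal sketch of the mana-capping rule shared by TheCoin and
# GallywixsCoin, using a stub Player object.
class StubPlayer:
    def __init__(self, mana):
        self.mana = mana

def gain_temporary_mana(player):
    # Mirror of the `use` bodies above: mana never exceeds 10.
    if player.mana < 10:
        player.mana += 1

p = StubPlayer(9)
gain_temporary_mana(p)   # 9 -> 10
gain_temporary_mana(p)   # capped, stays at 10
print(p.mana)
```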
| 34.775701 | 110 | 0.67025 | 430 | 3,721 | 5.476744 | 0.197674 | 0.076433 | 0.061147 | 0.076433 | 0.631847 | 0.631847 | 0.521019 | 0.521019 | 0.521019 | 0.521019 | 0 | 0.006196 | 0.219296 | 3,721 | 106 | 111 | 35.103774 | 0.804475 | 0 | 0 | 0.493506 | 0 | 0 | 0.033593 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.233766 | false | 0 | 0.077922 | 0 | 0.428571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96e3e6827970c8c66b107feb95851d4718ea2a5e | 6,159 | py | Python | wr_ks_reader.py | Kevin-Prichard/werobots-kickstarter-python | aa089f66dcd7defbfd7d06a1fb5821bb59ead10f | [
"BSD-2-Clause"
] | 1 | 2020-10-27T11:06:19.000Z | 2020-10-27T11:06:19.000Z | wr_ks_reader.py | Kevin-Prichard/werobots-kickstarter-python | aa089f66dcd7defbfd7d06a1fb5821bb59ead10f | [
"BSD-2-Clause"
] | null | null | null | wr_ks_reader.py | Kevin-Prichard/werobots-kickstarter-python | aa089f66dcd7defbfd7d06a1fb5821bb59ead10f | [
"BSD-2-Clause"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import json, sys, copy, os, re
import pprint
import locale
import csv
locale.setlocale(locale.LC_ALL, 'en_US')
from collections import defaultdict
from operator import itemgetter
pp = pprint.PrettyPrinter(indent=4)
# For MacPorts ... need to eliminate TODO
sys.path.append('/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages')
"""
Iterate a dictionary, generate a string buffer of key\tvalue pairs, assuming number
Allows second dictionary (d2), treated as denominator
"""
def dict_value_sort(d,d2=None):
hdr="\n\tUSD"
if d2!=None:
hdr+="\t#Projects\tUSD/Proj\n"
buf=""
for key in sorted(d, key=d.get, reverse=True):
buf += "%s\t%s" % (key, locale.format("%12d", d[key], grouping=True))
if d2 != None:
buf += "\t%s\t%s" % (locale.format("%6d", d2[key], grouping=True), locale.format("%6d", d[key]/d2[key], grouping=True))
buf += "\n"
return hdr+buf
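The core of `dict_value_sort` is iterating a dict's keys ordered by their values, descending. A Python 3 sketch of just that idiom, with illustrative data:

```python
# Keys of a dict ordered by their values, largest first -- the same
# sorted(d, key=d.get, reverse=True) idiom used by dict_value_sort.
def keys_by_value_desc(d):
    return sorted(d, key=d.get, reverse=True)

pledged = {"US": 1200, "DE": 300, "GB": 700}
print(keys_by_value_desc(pledged))  # ['US', 'GB', 'DE']
```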
def read_usd_fx_table(usd_fx_csv_pathname):
fxusd = dict();
with open(usd_fx_csv_pathname, 'rb') as csvfile:
fxreader = csv.reader(csvfile, delimiter=',')
for row in fxreader:
if len(row)>0 and row[0]!='Currency':
fxusd[row[1]] = {
'Name': row[0],
'cur_buys_usd': float(row[2]),
'usd_buys_cur': float(row[3])
}
return fxusd
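The FX table built above maps a currency code to its name and both conversion rates. The following sketch reproduces the same shape from an in-memory CSV; the column layout (Name, Code, cur_buys_usd, usd_buys_cur) is assumed from the code above.

```python
import csv
import io

# One header row plus one data row, matching the layout read_usd_fx_table expects.
sample = "Currency,Code,In USD,Per USD\nEuro,EUR,1.08,0.93\n"

fx = {}
for row in csv.reader(io.StringIO(sample)):
    if row and row[0] != "Currency":  # skip the header, as above
        fx[row[1]] = {"Name": row[0],
                      "cur_buys_usd": float(row[2]),
                      "usd_buys_cur": float(row[3])}
print(fx["EUR"]["cur_buys_usd"])
```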
def prep_predicates(filters):
preds = []
for filter in filters:
(path,value) = re.split('\s*=\s*',filter)
path_els = path.split('/')
values = value.split(',')
preds.append({"path_els":path_els,"values":values})
return preds
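A filter string like `category/name=Art,Games` is split into a path (for walking nested dicts) and a list of accepted values. A self-contained copy of the parsing step, shown with one example filter:

```python
import re

# Standalone copy of the prep_predicates parsing logic above.
def prep_predicates(filters):
    preds = []
    for f in filters:
        path, value = re.split(r'\s*=\s*', f)
        preds.append({"path_els": path.split('/'),
                      "values": value.split(',')})
    return preds

preds = prep_predicates(["category/name=Art,Games"])
print(preds)
```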
def project_predicate_test(proj,predicates):
v = proj
match = False
for pred in predicates:
for path_el in pred['path_els']:
v = v[path_el]
sv=str(v)
if (sv==pred['values']) or (type(pred['values'])==list and sv in pred['values']):
match = True
break
return match
def main(wr_kickstarter_json_path,usd_fx_pathname,filter_predicates=[]):
predicates = prep_predicates(filter_predicates) if filter_predicates else []
fxusd = read_usd_fx_table(usd_fx_pathname)
report = gen_ks_report(wr_kickstarter_json_path,fxusd,predicates)
    print(report)
def gen_ks_report(wr_kickstarter_json_path,fxusd,predicates=[]):
json_data = open(wr_kickstarter_json_path).read()
j = json.loads(json_data)
schema_tree = defaultdict(dict)
tots = defaultdict(dict)
corpus = []
template = {
"pled_ctry" : dict(),
"goal_ctry" : dict(),
"cnt_ctry" : dict(),
"pled_cat" : dict(),
"goal_cat" : dict(),
"cnt_cat" : dict(),
"pled_state" : 0,
"goal_state" : 0,
"cnt_state" : 0
}
"""
92562 failed
74635 successful
17296 canceled
6496 live
395 suspended
"""
cnt_all = 0
for block_of_projects in j:
proj_count = len(block_of_projects["projects"])
for i in range(proj_count):
proj = block_of_projects["projects"][i]
if predicates and not project_predicate_test(proj,predicates):
continue
# Grab project values
pled = proj["pledged"] * fxusd[proj["currency"]]["cur_buys_usd"]
#goal = proj["goal"] * fxusd[proj["currency"]]["cur_buys_usd"]
ctry = proj["country"]
cat = "%s (%s)" % (proj["category"]["name"],proj["category"]["id"])
state = proj["state"]
# Ingest descriptive text for TF-IDF
corpus.append( "%s %s" % (proj["blurb"].lower(), proj["name"].lower()) )
# Ensure accumulation skeleton exists
if state not in tots:
tots[state] = copy.deepcopy(template)
# Accumulate totals, increment counters
tots[state]["pled_ctry"][ctry] = tots[state]["pled_ctry"][ctry] + pled if ctry in tots[state]["pled_ctry"] else pled
#tots[state]["goal_ctry"][ctry] = tots[state]["goal_ctry"][ctry] + goal if ctry in tots[state]["goal_ctry"] else goal
tots[state]["cnt_ctry"][ctry] = tots[state]["cnt_ctry"][ctry] + 1 if ctry in tots[state]["cnt_ctry"] else 1
tots[state]["pled_cat"][cat] = tots[state]["pled_cat"][cat] + pled if cat in tots[state]["pled_cat"] else pled
#tots[state]["goal_cat"][cat] = tots[state]["goal_cat"][cat] + goal if cat in tots[state]["goal_cat"] else goal
tots[state]["cnt_cat"][cat] = tots[state]["cnt_cat"][cat] + 1 if cat in tots[state]["cnt_cat"] else 1
tots[state]["pled_state"] += pled
#tots[state]["goal_state"] += goal
tots[state]["cnt_state"] += 1
cnt_all += 1
# Generate the report
buf = ""
for state in tots:
buf += "Per country, %s: %s\n" % (state,dict_value_sort(tots[state]["pled_ctry"],tots[state]["cnt_ctry"]))
buf += "Per category, %s: %s\n" % (state,dict_value_sort(tots[state]["pled_cat"],tots[state]["cnt_cat"]))
buf += "Pledged overall for %s: %s\n" % (state,locale.format("%6d", tots[state]["pled_state"], grouping=True))
#buf += "Goal overall for %s: %s\n" % (state,locale.format("%6d", tots[state]["goal_state"], grouping=True))
buf += "Count overall for %s: %s\n" % (state,locale.format("%6d", tots[state]["cnt_state"], grouping=True))
buf += "Per project for %s: %s\n" % (state,locale.format("%6d", tots[state]["pled_state"]/tots[state]["cnt_state"], grouping=True))
buf += "'%s\n" % ("=" * 40)
buf += "Number of projects, overall: %d\n" % cnt_all
return buf
if __name__ == '__main__':
min_args = 3
if (len(sys.argv)<min_args) or (not os.path.exists(sys.argv[1]) or not os.path.exists(sys.argv[2])):
        print("Usage: wr_ks_reader.py <webrobots_ks_data.json> <usd_fx_csv>")
        print("e.g. ./wr_ks_reader.py sample-data/five_projects_from-2014-12-02.json sample-data/usd_all_2015-03-25.csv")
exit()
main(sys.argv[1],sys.argv[2],sys.argv[3:] if len(sys.argv)>min_args else None)
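The accumulation pattern in `gen_ks_report` (`tots[state]["pled_ctry"][ctry] = ... + pled if ctry in ... else pled`) can be written more compactly with `collections.defaultdict`. This is an illustrative alternative, not the script's actual code:

```python
from collections import defaultdict

# Sum pledges per country without the explicit "key exists?" check.
pledged_by_country = defaultdict(float)
for country, pledged in [("US", 100.0), ("GB", 50.0), ("US", 25.0)]:
    pledged_by_country[country] += pledged

print(dict(pledged_by_country))
```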
| 39.993506 | 139 | 0.599448 | 860 | 6,159 | 4.131395 | 0.254651 | 0.078525 | 0.040248 | 0.01351 | 0.300591 | 0.160991 | 0.135097 | 0.094849 | 0.094849 | 0.068393 | 0 | 0.017469 | 0.237863 | 6,159 | 153 | 140 | 40.254902 | 0.739455 | 0.106836 | 0 | 0.036036 | 0 | 0.018018 | 0.178395 | 0.0407 | 0 | 0 | 0 | 0.006536 | 0 | 0 | null | null | 0 | 0.054054 | null | null | 0.045045 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96e84b02847af097942827575530d900a0b5412e | 5,279 | py | Python | pysensors.py | zakrzem1/pysensors | bfbef5f1442d845e5fa5febf1de5a35a81e436ee | [
"Apache-2.0"
] | null | null | null | pysensors.py | zakrzem1/pysensors | bfbef5f1442d845e5fa5febf1de5a35a81e436ee | [
"Apache-2.0"
] | null | null | null | pysensors.py | zakrzem1/pysensors | bfbef5f1442d845e5fa5febf1de5a35a81e436ee | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python
# Google Spreadsheet DHT Sensor Data-logging Example
# Depends on the 'gspread' package being installed. If you have pip installed
# execute:
# sudo pip install gspread
# Copyright (c) 2014 Adafruit Industries
# Author: Tony DiCola
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from __future__ import print_function
import sys
import getopt
import moskito
import json
import pytz
import dht_sensor as dht
import onewire_temp_sensor as ow
import airquality_sensor_serial as aqss
import sensor_serial_float as ssf
import time
import datetime
import influks
from log import warning, info
from config import conf
try:
import google_spreadsheet as gs
except ImportError:
warning('google_spreadsheet module cannot be loaded')
#from oauth2client.client import SignedJwtAssertionCredentials
# targetTz = pytz.timezone('Europe/Warsaw')
targetTz = pytz.timezone('UTC')
output_fmt = '%Y-%m-%dT%H:%M:%SZ'
client = moskito.start_client()
i=0
#mqtt_topic_temp=conf['mqtt']['topic_temp']
sensors_cfg_arr = conf['sensors']
roomName = conf['roomName']
def main_loop():
sensor_read_freq_secs = conf.get('sensor_read_freq_secs', 30)
while True:
global i
i+=1
now = datetime.datetime.now(targetTz)
for a in sensors_cfg_arr:
reading = ()
publishableDoc = None
readingType = a.get('type')
if(readingType == 'ow'):
info('reading ow sensor')
reading = ow.read(a.get('addr'))
publishableDoc = readingObj(now, reading)
elif(readingType == 'dht'):
info('reading dht sensor')
reading = dht.read(a.get('addr'))
publishableDoc = readingObj(now, reading)
elif(readingType == 'air_quality_serial'):
if(not aqss.inited()):
serialDevice = a.get('serialDevice')
aqss.init(serialDevice)
info('reading air quality sensor [serial]')
reading = aqss.read(output_fmt, targetTz)
if(reading):
publishableDoc = airquality_readingObj(reading)
else:
publishableDoc = None
elif(readingType == 'serial_float'):
if(not ssf.inited()):
serialDevice = a.get('serialDevice')
ssf.init(serialDevice)
info('reading sensor [serial] float')
reading = ssf.read()
publishableDoc = {'current':reading}
info(publishableDoc)
else:
info('unsupported reading type ', readingType)
continue
if(not publishableDoc):
warning('skipping malformed reading')
continue
topic = a.get('topic','')
if(topic):
info('publishing ', publishableDoc, ' to mqtt ', topic)
client.publish(topic, json.dumps(publishableDoc))
if(a.get('influx')):
influks.write('readings',publishableDoc)
if(i%10==0 and conf['gdocs']):
info("GDOCS object:", conf['gdocs'])
gs.append_reading(reading)
time.sleep(sensor_read_freq_secs)
def readingObj(now, reading):
if(not reading or len(reading)<1):
warning('Invalid reading data. Expected at least temp')
return None
obj = {'temp':reading[0],'tstamp':now.strftime(output_fmt), 'roomName':roomName}
if len(reading)>1:
obj['hum'] = reading[1]
return obj
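The payload `readingObj` builds has a fixed shape: temperature, an ISO-style timestamp, the room name, and optionally humidity. A standalone re-implementation of that shaping step; the timestamp format and `roomName` key are taken from the surrounding code, while the room name value here is an illustrative placeholder.

```python
import datetime

def reading_obj(now, reading, room_name="lab"):
    # Mirror of readingObj above: temp is required, humidity optional.
    if not reading:
        return None
    obj = {"temp": reading[0],
           "tstamp": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
           "roomName": room_name}
    if len(reading) > 1:
        obj["hum"] = reading[1]
    return obj

doc = reading_obj(datetime.datetime(2020, 1, 1, 12, 0), (21.5, 40.0))
print(doc)
```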
def airquality_readingObj(reading):
if(len(reading)!=2):
warning('invalid data format read from air quality sensor file')
return
jason = {'level':reading[1],'tstamp':reading[0], 'roomName':roomName}
info('airquality_readingObj\n', json.dumps(jason))
return jason
def main(argv=None):
#info('Logging sensor measurements to {0} every {1} seconds.'.format(conf['gdocs']['doc_name'], conf['sensor_read_freq']))
if argv is None:
argv = sys.argv
try:
opts, args = getopt.getopt(argv[1:], "h", ["help"])
    except getopt.error as msg:
print(msg)
sys.exit(2)
main_loop()
if __name__ == "__main__":
main()
| 37.176056 | 126 | 0.639326 | 641 | 5,279 | 5.180967 | 0.385335 | 0.026498 | 0.016862 | 0.01626 | 0.057212 | 0.036736 | 0.036736 | 0.036736 | 0.036736 | 0.036736 | 0 | 0.005905 | 0.262171 | 5,279 | 141 | 127 | 37.439716 | 0.846727 | 0.288691 | 0 | 0.116505 | 0 | 0 | 0.150711 | 0.011799 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.165049 | null | null | 0.019417 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96e9083445d26a4f829c2bf35d90cd2813d1671b | 4,287 | py | Python | compare/utils.py | l0rb/buzzword | 1f12875cfb883be07890b2da5482d3b53d878a09 | [
"MIT"
] | null | null | null | compare/utils.py | l0rb/buzzword | 1f12875cfb883be07890b2da5482d3b53d878a09 | [
"MIT"
] | null | null | null | compare/utils.py | l0rb/buzzword | 1f12875cfb883be07890b2da5482d3b53d878a09 | [
"MIT"
] | null | null | null | """
Random utilities this app needs
"""
import os
import re
from buzz import Corpus as BuzzCorpus
from buzz import Collection
from django.conf import settings
from explore.models import Corpus
from .models import OCRUpdate, PDF
# from django.core.exceptions import ObjectDoesNotExist
# when doing OCR, re.findall will be run on it using this regex, which sort of
# approximates a word. note that this will mean that "the end" will be marked
# as blank, but that is a decent tradeoff for marking blank a lot of junk pages.
MEANINGFUL = r"[A-Za-z]{3,}"
THRESHOLD = 3
def markdown_to_buzz_input(markdown, slug):
"""
todo
User can use markdown when correcting OCR
We need to parse out headers and bulletpoints into <meta> features,
handle italics and that sort of thing...perhaps we can convert the text
to html and then postprocess that...
"""
fixed = []
lines = markdown.splitlines()
for line in lines:
# handle headings
# note that this doesn't put text inside respective headings as sections.
# doing so would not work across pages anyway.
if line.startswith("#"):
pref, head = line.split(" ", 1)
depth = len(pref.strip())
head = head.strip()
line = f"<meta heading=\"true\" depth=\"{depth}\">{head}</meta>"
# handle bulletpoints
elif line.startswith("* "):
line = line.lstrip("* ")
            line = f"<meta point=\"true\">{line}</meta>"
for styler in ["***", "**", "*", "`"]:
pass
fixed.append(line)
return "\n".join(fixed)
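The heading branch above turns a markdown heading into a `<meta>` element whose `depth` is the number of `#` characters. A minimal sketch of just that rule:

```python
# "## Title" -> <meta heading="true" depth="2">Title</meta>,
# following the conversion in markdown_to_buzz_input above.
def heading_to_meta(line):
    pref, head = line.split(" ", 1)
    depth = len(pref.strip())
    return f'<meta heading="true" depth="{depth}">{head.strip()}</meta>'

print(heading_to_meta("## Results"))
```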
def store_buzz_raw(raw, slug, pdf_path):
"""
Put the raw text into the right place for eventual parsing
"""
# todo: corpora dir?
base = os.path.join("static", "corpora", slug, "txt")
os.makedirs(base, exist_ok=True)
filename = os.path.basename(pdf_path).replace(".pdf", ".txt")
with open(os.path.join(base, filename), "w") as fo:
fo.write(raw)
return base
def dump_latest():
"""
Get the latest OCR corrections and build a parseable corpus.
Maybe even parse it?
"""
slugs = OCRUpdate.objects.values_list("slug")
slugs = set(slugs)
for slug in slugs:
corp = Corpus.objects.get(slug=slug)
lang = corp.language.name
# get the associated pdfs
pdfs = PDF.objects.filter(slug=slug)
for pdf in pdfs:
updates = OCRUpdate.objects.filter(pdf=pdf, slug=slug)
plaintext = updates.latest("timestamp").text
corpus_path = store_buzz_raw(plaintext, slug, pdf.path)
print(f"Parsing ({lang}): {corpus_path}")
corp = BuzzCorpus(corpus_path)
parsed = corp.parse(language=lang, multiprocess=1)
corp.parsed = True
corp.path = parsed.path
corp.save()
return parsed
def _is_meaningful(plaintext, language):
"""
Determine if an OCR page contains something worthwhile
"""
# skip this check for non latin alphabet ... right now the parser doesn't
# accept most non-latin languages, so it's mostly academic for now...
    if language in {"zh", "ja", "fa", "iw", "ar"}:
return True
    words = re.findall(MEANINGFUL, plaintext)
return len(words) >= THRESHOLD
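The meaningfulness check boils down to counting words of three or more Latin letters and comparing against the threshold. A standalone version with two illustrative inputs:

```python
import re

# Same pattern and threshold as the module constants above.
MEANINGFUL = r"[A-Za-z]{3,}"
THRESHOLD = 3

def looks_meaningful(text):
    return len(re.findall(MEANINGFUL, text)) >= THRESHOLD

print(looks_meaningful("scanned page with real words"))   # True
print(looks_meaningful("a. 7 %% --"))                     # False
```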
def _handle_page_numbers(text):
"""
Attempt to make page-level metadata containing page number
"""
# if no handling, just return text
if settings.COMPARE_HANDLE_PAGE_NUMBERS is False:
return text
# get first and maybe last line as list
lines = [i.strip() for i in text.splitlines() if i.strip()]
if not lines:
return text
if len(lines) == 1:
lines = [lines[0]]
else:
lines = [lines[0], lines[-1]]
    page_number = None
    # lines is just the first and last, stripped
    for line in lines:
        if line.isnumeric():
            page_number = line
            break
    if page_number is None:
        return text
    form = f"<meta page={page_number} />\n"
    # we also want to REMOVE the page-number line from the actual text;
    # match on the stripped value, since an index into `lines` (which holds
    # only the first and last non-empty lines) does not line up with an
    # index into the full splitlines() list
    cut = [x for x in text.splitlines() if x.strip() != page_number]
    return "\n".join([form] + cut)
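The heuristic above treats a purely numeric first or last line as the page number. A self-contained sketch of that detection step:

```python
# Find a page number: only the first and last non-empty lines are
# considered, and only if they are purely numeric.
def find_page_number(text):
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    candidates = lines[:1] + (lines[-1:] if len(lines) > 1 else [])
    for line in candidates:
        if line.isnumeric():
            return line
    return None

print(find_page_number("some body text\nmore text\n42\n"))  # 42
```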
| 31.291971 | 81 | 0.623513 | 588 | 4,287 | 4.489796 | 0.404762 | 0.022727 | 0.011364 | 0.012879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002544 | 0.266387 | 4,287 | 136 | 82 | 31.522059 | 0.836884 | 0.304175 | 0 | 0.025641 | 0 | 0 | 0.070157 | 0 | 0 | 0 | 0 | 0.014706 | 0 | 1 | 0.064103 | false | 0.012821 | 0.089744 | 0 | 0.25641 | 0.012821 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96ead76583647a3211d18d56e012be5c362ce02d | 1,399 | py | Python | extra_tests/snippets/stdlib_subprocess.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 11,058 | 2018-05-29T07:40:06.000Z | 2022-03-31T11:38:42.000Z | extra_tests/snippets/stdlib_subprocess.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 2,105 | 2018-06-01T10:07:16.000Z | 2022-03-31T14:56:42.000Z | extra_tests/snippets/stdlib_subprocess.py | dbrgn/RustPython | 6d371cea8a62d84dbbeec5a53cfd040f45899211 | [
"CC-BY-4.0",
"MIT"
] | 914 | 2018-07-27T09:36:14.000Z | 2022-03-31T19:56:34.000Z | import subprocess
import time
import sys
import signal
from testutils import assert_raises
is_unix = not sys.platform.startswith("win")
if is_unix:
def echo(text):
return ["echo", text]
def sleep(secs):
return ["sleep", str(secs)]
else:
def echo(text):
return ["cmd", "/C", f"echo {text}"]
def sleep(secs):
# TODO: make work in a non-unixy environment (something with timeout.exe?)
return ["sleep", str(secs)]
p = subprocess.Popen(echo("test"))
time.sleep(0.1)
assert p.returncode is None
assert p.poll() == 0
assert p.returncode == 0
p = subprocess.Popen(sleep(2))
assert p.poll() is None
with assert_raises(subprocess.TimeoutExpired):
assert p.wait(1)
p.wait()
assert p.returncode == 0
p = subprocess.Popen(echo("test"), stdout=subprocess.PIPE)
p.wait()
assert p.stdout.read().strip() == b"test"
p = subprocess.Popen(sleep(2))
p.terminate()
p.wait()
if is_unix:
assert p.returncode == -signal.SIGTERM
else:
assert p.returncode == 1
p = subprocess.Popen(sleep(2))
p.kill()
p.wait()
if is_unix:
assert p.returncode == -signal.SIGKILL
else:
assert p.returncode == 1
p = subprocess.Popen(echo("test"), stdout=subprocess.PIPE)
(stdout, stderr) = p.communicate()
assert stdout.strip() == b"test"
p = subprocess.Popen(sleep(5), stdout=subprocess.PIPE)
with assert_raises(subprocess.TimeoutExpired):
p.communicate(timeout=1)
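The `echo`/`sleep` helpers above paper over platform differences. A portable variant of the stdout test is to spawn the current Python interpreter instead of a shell command, so the same check runs unchanged on Unix and Windows:

```python
import subprocess
import sys

# Run `python -c "print('test')"` and capture its stdout.
p = subprocess.Popen([sys.executable, "-c", "print('test')"],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.strip())
```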
| 19.985714 | 82 | 0.686919 | 207 | 1,399 | 4.608696 | 0.299517 | 0.080713 | 0.1174 | 0.08805 | 0.516771 | 0.359539 | 0.350105 | 0.230608 | 0.075472 | 0 | 0 | 0.011064 | 0.160114 | 1,399 | 69 | 83 | 20.275362 | 0.800851 | 0.051465 | 0 | 0.54 | 0 | 0 | 0.04 | 0 | 0 | 0 | 0 | 0.014493 | 0.3 | 1 | 0.08 | false | 0 | 0.1 | 0.08 | 0.26 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96ece3d883b91ffe8949433754af0502b5ee3b91 | 341 | py | Python | pythonProject1/chap2/demo8.py | zhudi7/pythonAK | d52995a4c35a3c9aeb1542922e9d786f4dcc8d1c | [
"Apache-2.0"
] | null | null | null | pythonProject1/chap2/demo8.py | zhudi7/pythonAK | d52995a4c35a3c9aeb1542922e9d786f4dcc8d1c | [
"Apache-2.0"
] | null | null | null | pythonProject1/chap2/demo8.py | zhudi7/pythonAK | d52995a4c35a3c9aeb1542922e9d786f4dcc8d1c | [
"Apache-2.0"
] | null | null | null | # 公众号:MarkerJava
# 开发时间:2020/10/5 17:25
scores = {'kobe': 100, 'lebron': 99, 'AD': 88}
# 获取所有key
keys = scores.keys()
print(keys)
print(type(keys))
print(list(keys))  # convert the keys view into a list
# Get all values
value = scores.values()
print(value)
print(type(value))  # the values view can likewise be converted with list()
# Get all key-value pairs
items = scores.items()
print(items)
print(type(items))
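One property worth noting about `keys()`, `values()`, and `items()`: they return live views, not snapshots, so later changes to the dict show up in objects obtained earlier:

```python
# Dict views are dynamic: a key added after keys() was called
# still appears in the previously obtained view.
scores = {'kobe': 100, 'lebron': 99}
keys = scores.keys()
scores['AD'] = 88
print('AD' in keys)  # True
```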
| 17.05 | 46 | 0.695015 | 47 | 341 | 5.042553 | 0.574468 | 0.113924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.060606 | 0.129032 | 341 | 19 | 47 | 17.947368 | 0.737374 | 0.27566 | 0 | 0 | 0 | 0 | 0.050209 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.636364 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
96f175f0b51602c5e97137cd52bc7e95509e5606 | 423 | py | Python | tests/test_matrix.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 8 | 2021-08-08T06:06:39.000Z | 2022-02-04T18:30:38.000Z | tests/test_matrix.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 1 | 2022-01-04T02:01:36.000Z | 2022-01-04T02:01:36.000Z | tests/test_matrix.py | avere001/dsplot | 89948c2f1b16e00bb3a240f73d0cb100b3eac847 | [
"MIT"
] | 2 | 2021-08-18T12:28:40.000Z | 2022-01-03T23:56:41.000Z | import os
import pytest
from dsplot.errors import InputException
from dsplot.matrix import Matrix
def test_matrix():
matrix = Matrix([[1, 2, 3], [4, 5, 6], [1, 2, 6]])
matrix.plot('tests/test_data/matrix.png')
assert 'matrix.png' in os.listdir('tests/test_data')
with pytest.raises(InputException) as e:
Matrix(nodes=[[]])
assert str(e.value) == 'Input list must have at least 1 element.'
| 23.5 | 69 | 0.669031 | 64 | 423 | 4.375 | 0.578125 | 0.071429 | 0.092857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029155 | 0.189125 | 423 | 17 | 70 | 24.882353 | 0.787172 | 0 | 0 | 0 | 0 | 0 | 0.21513 | 0.061466 | 0 | 0 | 0 | 0 | 0.181818 | 1 | 0.090909 | false | 0 | 0.363636 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
96f38ecfbeecd1933cbe011b730c71f4c32f4990 | 301 | py | Python | rtl/tests/test_log.py | kelceydamage/raspi-tasks | 18aa323e3e2428c998b7472c226d05a00c8ae8c2 | [
"Apache-2.0"
] | 1 | 2019-08-10T00:27:45.000Z | 2019-08-10T00:27:45.000Z | rtl/tests/test_log.py | kelceydamage/raspi-tasks | 18aa323e3e2428c998b7472c226d05a00c8ae8c2 | [
"Apache-2.0"
] | null | null | null | rtl/tests/test_log.py | kelceydamage/raspi-tasks | 18aa323e3e2428c998b7472c226d05a00c8ae8c2 | [
"Apache-2.0"
] | null | null | null | from rtl.tasks.log import log
from dummy_data import KWARGS, CONTENTS3
def test_log():
KWARGS = {
'operations': [
{
'a': 'b',
'column': 'c'
}
]
}
r = log(KWARGS, CONTENTS3)
assert r['c'][2] == 2.5649493574615367
| 20.066667 | 42 | 0.465116 | 31 | 301 | 4.451613 | 0.645161 | 0.217391 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.401993 | 301 | 14 | 43 | 21.5 | 0.655556 | 0 | 0 | 0 | 0 | 0 | 0.066445 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96f41d0268d46bff1736443eb02dd162aabd3a36 | 2,784 | py | Python | sources/lectures.py | JhoLee/ecampus-manager | 9c56678ba06b3b92f539b746d7103798592ad1ac | [
"MIT"
] | 1 | 2021-09-05T06:34:01.000Z | 2021-09-05T06:34:01.000Z | sources/lectures.py | JhoLee/BBang-Shuttle | 9c56678ba06b3b92f539b746d7103798592ad1ac | [
"MIT"
] | null | null | null | sources/lectures.py | JhoLee/BBang-Shuttle | 9c56678ba06b3b92f539b746d7103798592ad1ac | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'ui/lectures.ui'
#
# Created by: PyQt5 UI code generator 5.13.0
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.Qt import QMessageBox, QSize, QIcon
from PyQt5.QtWidgets import QDialog, QListWidgetItem
from main import show_messagebox
from PyQt5.QtGui import QStandardItemModel, QStandardItem
class Ui_Lectures(QDialog):
def __init__(self, manager):
super().__init__()
self.manager = manager
self.lectures = self.manager.lectures
self.items = QStandardItemModel()
self.setFixedSize(QSize(272, 200))
self.setWindowIcon(QIcon('../resources/breadzip.ico'))
self.is_clicked_selection = False
def setupUi(self):
self.setObjectName("Lectures")
self.resize(272, 200)
self.lst_lectures = QtWidgets.QListWidget(self)
self.lst_lectures.setGeometry(QtCore.QRect(10, 30, 251, 121))
font = QtGui.QFont()
font.setFamily("Malgun Gothic")
font.setPointSize(10)
self.lst_lectures.setFont(font)
self.lst_lectures.setObjectName("lst_lectures")
self.btn_select_subject = QtWidgets.QPushButton(self)
self.btn_select_subject.setGeometry(QtCore.QRect(180, 160, 81, 31))
font = QtGui.QFont()
font.setFamily("Malgun Gothic")
font.setPointSize(10)
self.btn_select_subject.setFont(font)
self.btn_select_subject.setObjectName("btn_start")
self.label_3 = QtWidgets.QLabel(self)
self.label_3.setGeometry(QtCore.QRect(10, 10, 251, 16))
font = QtGui.QFont()
font.setFamily("Malgun Gothic")
font.setPointSize(10)
self.label_3.setFont(font)
        # label text: "Please select a course to enroll in."
        self.label_3.setText("<html><head/><body><p align=\"justify\">수강할 과목을 선택하십시오.</p></body></html>")
self.label_3.setAlignment(QtCore.Qt.AlignCenter)
self.label_3.setWordWrap(True)
self.label_3.setObjectName("label_3")
_translate = QtCore.QCoreApplication.translate
self.setWindowTitle(_translate("Lectures", "Dialog"))
        self.btn_select_subject.setText(_translate("Lectures", "과목 선택"))  # "Select course"
        self.setWindowTitle(_translate("Login", "수강과목 선택 :: KNUT 빵셔틀"))  # "Course selection :: KNUT Bread Shuttle"
# self.items = QStandardItemModel()
# for lecture in self.lectures:
# self.items.appendRow(QStandardItem(lecture.text))
# self.lst_lectures.setModel(self.items)
for lec in self.lectures:
self.lst_lectures.addItem(lec.text)
QtCore.QMetaObject.connectSlotsByName(self)
self.btn_select_subject.clicked.connect(self.select)
def select(self):
self.is_clicked_selection = True
self.close()
| 37.12 | 105 | 0.674569 | 331 | 2,784 | 5.534743 | 0.383686 | 0.026201 | 0.03821 | 0.065502 | 0.126092 | 0.099891 | 0.099891 | 0.099891 | 0.099891 | 0.099891 | 0 | 0.029572 | 0.210489 | 2,784 | 74 | 106 | 37.621622 | 0.803913 | 0.122845 | 0 | 0.173077 | 1 | 0 | 0.08803 | 0.029206 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.096154 | 0 | 0.173077 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96f856b12bdb1da37c38499de024996b5d7da12b | 411 | py | Python | python-advanced/webscrap/wlog.py | Rokon-Uz-Zaman/thinkdiff_python_django | 5010c5f1dd8a028fb9e5235319bb6bb434831e6c | [
"MIT"
] | 92 | 2018-04-03T20:53:07.000Z | 2022-03-04T05:53:10.000Z | python-language/python-advanced/webscrap/wlog.py | mostafijur-rahman299/thinkdiff | b0e0c01fe38c406f4dfa8cc80b2f0c5654017079 | [
"MIT"
] | 11 | 2018-10-01T15:35:33.000Z | 2021-09-01T04:59:56.000Z | python-language/python-advanced/webscrap/wlog.py | mostafijur-rahman299/thinkdiff | b0e0c01fe38c406f4dfa8cc80b2f0c5654017079 | [
"MIT"
] | 98 | 2018-03-13T08:03:54.000Z | 2022-03-22T08:11:44.000Z | # author: Mahmud Ahsan
# code: https://github.com/mahmudahsan/thinkdiff
# blog: http://thinkdiff.net
# http://pythonbangla.com
# MIT License
# --------------------------
# Reporting Logs in text file
# --------------------------
import logging
def set_custom_log_info(filename):
logging.basicConfig(filename=filename, level=logging.INFO)
def report(e:Exception):
logging.exception(str(e))
| 21.631579 | 62 | 0.63017 | 46 | 411 | 5.565217 | 0.73913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136253 | 411 | 18 | 63 | 22.833333 | 0.721127 | 0.527981 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
96ff89e422270b31c5889bf32192c54fcd3d3495 | 817 | py | Python | 2019/08-kosen/rev-favorites/solve.py | wani-hackase/wani-writeup | dd4ad0607d2f2193ad94c1ce65359294aa591681 | [
"MIT"
] | 25 | 2019-03-06T11:55:56.000Z | 2021-05-21T22:07:14.000Z | 2019/08-kosen/rev-favorites/solve.py | wani-hackase/wani-writeup | dd4ad0607d2f2193ad94c1ce65359294aa591681 | [
"MIT"
] | 1 | 2020-06-25T07:27:15.000Z | 2020-06-25T07:27:15.000Z | 2019/08-kosen/rev-favorites/solve.py | wani-hackase/wani-writeup | dd4ad0607d2f2193ad94c1ce65359294aa591681 | [
"MIT"
] | 1 | 2019-02-14T00:42:28.000Z | 2019-02-14T00:42:28.000Z | def main():
seed = 0x1234
e = [0x62d5, 0x7b27, 0xc5d4, 0x11c4, 0x5d67, 0xa356, 0x5f84,
0xbd67, 0xad04, 0x9a64, 0xefa6, 0x94d6, 0x2434, 0x0178]
flag = ""
for index in range(14):
for i in range(0x7f-0x20):
c = chr(0x20+i)
res = encode(c, index, seed)
            if res == e[index]:
                print(c)
                flag += c
                seed = encode(c, index, seed)
                break  # stop scanning once the matching character is found
print("Kosen{%s}" % flag)
def encode(p1, p2, p3):
p1 = ord(p1) & 0xff
p2 = p2 & 0xffffffff
p3 = p3 & 0xffffffff
result = (((p1 >> 4) | (p1 & 0xf) << 4) + 1) ^ ((p2 >> 4) |
(~p2 << 4)) & 0xff | (p3 >> 4) << 8 ^ ((p3 >> 0xc) | (p3 << 4)) << 8
return result & 0xffff
if __name__ == "__main__":
main()
| 29.178571 | 120 | 0.440636 | 99 | 817 | 3.555556 | 0.515152 | 0.039773 | 0.068182 | 0.090909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.20284 | 0.396573 | 817 | 27 | 121 | 30.259259 | 0.511156 | 0 | 0 | 0 | 0 | 0 | 0.020808 | 0 | 0 | 0 | 0.173807 | 0 | 0 | 1 | 0.086957 | false | 0 | 0 | 0 | 0.130435 | 0.086957 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c094ff4339e6253f78275942a72ff3cc8c9f8a5 | 1,724 | py | Python | venta/admin.py | darkdrei/Inventario | dc2dcc830be5a49ba602c242d8c7d5d9c24c7b5c | [
"MIT"
] | null | null | null | venta/admin.py | darkdrei/Inventario | dc2dcc830be5a49ba602c242d8c7d5d9c24c7b5c | [
"MIT"
] | null | null | null | venta/admin.py | darkdrei/Inventario | dc2dcc830be5a49ba602c242d8c7d5d9c24c7b5c | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.contrib import admin
import models
import forms
from inventario import models as inventario
# Register your models here.
class DetalleInline(admin.StackedInline):
    model = models.Detalle
    form = forms.DetalleForm
    extra = 1
# end class


class FacturaAdmin(admin.ModelAdmin):
    list_display = ['comprador','fecha','subtotal','iva','impoconsumo','total','creador','paga']
    search_fields = ['comprador','fecha','subtotal','iva','impoconsumo','total','creador','paga']
    form = forms.FacturaForm
    inlines = [DetalleInline,]
    icon = '<i class="material-icons">receipt</i>'

    def save_model(self, request, obj, form, change):
        obj.save()
        total = 0
        for s in models.Detalle.objects.filter(factura__id=obj.id):
            articulo = inventario.Activo.objects.filter(id=s.articulo.id).first()
            if s.cantidad > s.articulo.existencias:
                s.cantidad = s.articulo.existencias
            #end if
            if articulo:
                articulo.existencias = articulo.existencias - s.cantidad
                articulo.save()
                total = s.cantidad * articulo.precio_venta
            #end if
        # end for
        obj.total = total
        obj.save()
    # end if
#end class


class DetalleAdmin(admin.ModelAdmin):
    list_display = ['factura','articulo','cantidad','valor_unitario','total']
    search_fields = ['factura','articulo','cantidad','valor_unitario','total']
    form = forms.DetalleForm
    icon = '<i class="material-icons">assignment</i>'
#end class


admin.site.register(models.Factura, FacturaAdmin)
admin.site.register(models.Detalle, DetalleAdmin)
| 32.528302 | 97 | 0.654872 | 196 | 1,724 | 5.683673 | 0.387755 | 0.032316 | 0.035907 | 0.046679 | 0.260323 | 0.166966 | 0.093357 | 0.093357 | 0 | 0 | 0 | 0.002217 | 0.215197 | 1,724 | 52 | 98 | 33.153846 | 0.821138 | 0.059745 | 0 | 0.114286 | 0 | 0 | 0.164494 | 0.044072 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028571 | false | 0 | 0.142857 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
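`save_model` above clamps each detail line's quantity to the available stock before decrementing inventory. A dependency-free sketch of that flow with plain dicts standing in for the `Detalle`/`Activo` ORM objects (`apply_sale` and its arguments are mine). Note that this sketch accumulates the total with `+=`, whereas the admin code assigns `total =` on every line, so only the last line's value survives there:

```python
def apply_sale(stock, price, details):
    """Clamp each requested quantity to available stock, decrement the
    stock, and accumulate the invoice total (plain-dict stand-in for
    FacturaAdmin.save_model, without the ORM)."""
    total = 0
    for articulo, cantidad in details:
        qty = min(cantidad, stock.get(articulo, 0))
        stock[articulo] = stock.get(articulo, 0) - qty
        total += qty * price[articulo]
    return total
```

With 5 units in stock at price 10, selling 3 and then 4 yields a total of 50: the second line is clamped to the 2 units remaining.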
8c139b694a21d0428d056ef4776c840e99b1f37e | 10,727 | py | Python | util/legacypgsql.py | twonds/palaver | fcaa1884bc206e0aba7c88d9614e38b492c59285 | [
"MIT"
] | 4 | 2015-01-20T17:25:12.000Z | 2020-02-12T08:24:05.000Z | util/legacypgsql.py | twonds/palaver | fcaa1884bc206e0aba7c88d9614e38b492c59285 | [
"MIT"
] | 1 | 2016-01-27T16:13:18.000Z | 2016-01-27T19:11:21.000Z | util/legacypgsql.py | twonds/palaver | fcaa1884bc206e0aba7c88d9614e38b492c59285 | [
"MIT"
] | null | null | null | # Copyright (c) 2007 Christopher Zorn, OGG, LLC
# See LICENSE.txt for details
# Converts the legacy muc spool to the new dirDBM one
import sys
from twisted.words.xish import domish, xpath
from twisted.words.protocols.jabber import jid
from twisted.enterprise import adbapi
from palaver import palaver, pgsql_storage
from pyPgSQL import PgSQL
class RoomParser:
    """
    A simple stream parser for configuration files.
    """

    def __init__(self):
        # Setup the parser
        self.stream = domish.elementStream()
        self.stream.DocumentStartEvent = self.onDocumentStart
        self.stream.ElementEvent = self.onElement
        self.stream.DocumentEndEvent = self.onDocumentEnd
        self.hash = {}
        self.files = {}
        self.room = {}

    def parse(self, file, room):
        self.room = room
        f = open(file)
        buf = f.read()
        f.close()
        self.stream.parse(buf)
        return self.room

    def serialize(self, obj):
        if isinstance(obj, domish.Element):
            obj = obj.toXml()
        return obj

    def onDocumentStart(self, rootelem):
        pass

    def onElement(self, element):
        if element.name == 'room':
            for c in element.elements():
                if c.name == 'name':
                    self.room['roomname'] = str(c)
                elif c.name == 'notice':
                    for n in c.elements():
                        self.room[n.name] = str(n)
                else:
                    if str(c) == '0':
                        self.room[c.name] = False
                    elif str(c) == '1':
                        self.room[c.name] = True
                    else:
                        self.room[c.name] = str(c)
        elif element.name == 'list':
            if element.hasAttribute('xdbns'):
                if element['xdbns'] == 'muc:list:owner':
                    for i in element.elements():
                        self.room['owner'].append(i['jid'])
                elif element['xdbns'] == 'muc:list:admin':
                    for i in element.elements():
                        self.room['admin'].append(i['jid'])
                elif element['xdbns'] == 'muc:list:member':
                    for i in element.elements():
                        self.room['member'].append(i['jid'])
                elif element['xdbns'] == 'muc:list:outcast':
                    for i in element.elements():
                        self.room['outcast'].append(i['jid'])

    def onDocumentEnd(self):
        pass

    def _reset(self):
        # Setup the parser
        self.stream = domish.elementStream()
        self.stream.DocumentStartEvent = self.onDocumentStart
        self.stream.ElementEvent = self.onElement
        self.stream.DocumentEndEvent = self.onDocumentEnd


class RoomsParser(RoomParser):
    def parse(self, file):
        f = open(file)
        buf = f.read()
        f.close()
        self.stream.parse(buf)
        return self.hash, self.files

    def onElement(self, element):
        if element.name == 'registered':
            for i in element.elements():
                name = i.getAttribute('name')
                j = i.getAttribute('jid')
                njid = jid.JID(name)
                room = unicode(njid.user)
                file = jid.JID(j).user
                self.files[room] = file
                self.hash[room] = {}
                self.hash[room]['name'] = room
                self.hash[room]['roomname'] = room
                self.hash[room]['subject'] = ''
                self.hash[room]['subject_change'] = True
                self.hash[room]['persistent'] = True
                self.hash[room]['moderated'] = False
                self.hash[room]['private'] = True
                self.hash[room]['history'] = 10
                self.hash[room]['game'] = False
                self.hash[room]['hidden'] = False
                self.hash[room]['locked'] = False
                self.hash[room]['subjectlocked'] = False
                self.hash[room]['description'] = room
                self.hash[room]['leave'] = ''
                self.hash[room]['join'] = ''
                self.hash[room]['rename'] = ''
                self.hash[room]['maxusers'] = 30
                self.hash[room]['privmsg'] = True
                self.hash[room]['change_nick'] = True
                self.hash[room]['owner'] = []
                self.hash[room]['member'] = []
                self.hash[room]['admin'] = []
                self.hash[room]['outcast'] = []
                self.hash[room]['roster'] = []


def fetch_user(cursor, user):
    cursor.execute("""SELECT * FROM muc_users WHERE username = %s""", (user,))
    return cursor.fetchone()


def create_user(cursor, user):
    dbuser = fetch_user(cursor, user)
    # TODO - add other values
    if not dbuser:
        cursor.execute("""INSERT INTO muc_users (username)
                          VALUES (%s)
                       """, (user,))
        dbuser = fetch_user(cursor, user)
    return dbuser


def do_room(conn, room, hostname):
    cursor = conn.cursor()
    cursor.execute("""INSERT INTO muc_rooms (name,
                                             roomname,
                                             subject,
                                             subject_change,
                                             persistent,
                                             moderated,
                                             private,
                                             history,
                                             game,
                                             \"hidden\",
                                             \"locked\",
                                             subjectlocked,
                                             description,
                                             \"leave\",
                                             \"join\",
                                             rename,
                                             maxusers,
                                             privmsg,
                                             change_nick,
                                             hostname
                                             )
                      VALUES (%s, %s, %s, %s, %s, %s, %s,
                              %s, %s, %s, %s, %s, %s, %s,
                              %s, %s, %s, %s, %s, %s)""",
                   (room['name'],
                    room['roomname'],
                    room['subject'],
                    room['subject_change'],
                    room['persistent'],
                    room['moderated'],
                    room['private'],
                    room['history'],
                    room['game'],
                    room['hidden'],
                    room['locked'],
                    room['subjectlocked'],
                    room['description'],
                    room['leave'],
                    room['join'],
                    room['rename'],
                    room['maxusers'],
                    room['privmsg'],
                    room['change_nick'],
                    hostname
                    ))
    cursor.execute("""SELECT * FROM muc_rooms WHERE name = %s AND hostname = %s""", (room['name'], hostname))
    dbroom = cursor.fetchone()
    cursor.close()
    # do admins , members, owners, etc
    for u in room['admin']:
        cursor = conn.cursor()
        # create a user if not in he user table
        dbuser = create_user(cursor, u)
        cursor.execute("""INSERT INTO muc_rooms_admins (user_id, room_id)
                          VALUES (%s, %s)
                       """, (dbuser[0], dbroom[0]))
        cursor.close()
    for u in room['member']:
        cursor = conn.cursor()
        # create a user if not in he user table
        dbuser = create_user(cursor, u)
        cursor.execute("""INSERT INTO muc_rooms_members (user_id, room_id)
                          VALUES (%s, %s)
                       """, (dbuser[0], dbroom[0]))
        cursor.close()
    for u in room['owner']:
        cursor = conn.cursor()
        # create a user if not in he user table
        dbuser = create_user(cursor, u)
        cursor.execute("""INSERT INTO muc_rooms_owners (user_id, room_id)
                          VALUES (%s, %s)
                       """, (dbuser[0], dbroom[0]))
        cursor.close()
    for u in room['outcast']:
        cursor = conn.cursor()
        # create a user if not in he user table
        dbuser = create_user(cursor, u)
        cursor.execute("""INSERT INTO muc_rooms_outcasts (user_id, room_id)
                          VALUES (%s, %s)
                       """, (dbuser[0], dbroom[0]))
        cursor.close()


def main(sdir, conf):
    print 'Convert : %s ' % sdir
    # parse conf file
    cf = None
    p = palaver.ConfigParser()
    cf = p.parse(conf)
    config = {}
    backend = getattr(cf.backend, 'type', None)
    if backend:
        config['backend'] = str(backend)
    if config['backend'] == 'pgsql':
        user = getattr(cf.backend, 'dbuser', None)
        database = str(getattr(cf.backend, 'dbname', ''))
        if getattr(cf.backend, 'dbpass', None):
            password = str(getattr(cf.backend, 'dbpass', ''))
        else:
            password = ''
        if getattr(cf.backend, 'dbhostname', None):
            hostname = str(getattr(cf.backend, 'dbhostname', ''))
        else:
            hostname = ''
    for elem in cf.elements():
        if elem.name == 'name':
            host = str(elem)
    _dbpool = PgSQL.connect(
        database=database,
        user=user,
        password=password,
        dsn=hostname,
        client_encoding='utf-8'
    )
    rsp = RoomsParser()
    rp = RoomParser()
    rooms, files = rsp.parse(sdir + '/rooms.xml')
    for f in files:
        r = files[f]
        print sdir + '/' + str(r) + '.xml'
        room = rp.parse(sdir + '/' + str(r) + '.xml', rooms[f])
        do_room(_dbpool, room, host)
        rp._reset()
    _dbpool.commit()


if __name__ == '__main__':
    if len(sys.argv) == 2:
        main(sys.argv[1])
    elif len(sys.argv) == 3:
        main(sys.argv[1], sys.argv[2])
    else:
        print "Usage : %s <old spool dir> <palaver config file>\n" % sys.argv[0]
| 34.38141 | 108 | 0.43535 | 1,008 | 10,727 | 4.578373 | 0.195437 | 0.046804 | 0.065005 | 0.014735 | 0.335211 | 0.313759 | 0.294475 | 0.253738 | 0.232286 | 0.232286 | 0 | 0.004188 | 0.44346 | 10,727 | 311 | 109 | 34.491961 | 0.768844 | 0.035891 | 0 | 0.232365 | 0 | 0.008299 | 0.255114 | 0 | 0 | 0 | 0 | 0.003215 | 0 | 0 | null | null | 0.024896 | 0.024896 | null | null | 0.012448 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
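The `create_user`/`fetch_user` pair above is an idempotent get-or-create against `muc_users`: fetch the row, insert it only if absent, then fetch again. A runnable sketch of the same pattern using the stdlib `sqlite3` module instead of pyPgSQL (the table DDL and the helper name are mine):

```python
import sqlite3


def get_or_create_user(conn, username):
    # fetch the row; insert it first if absent (mirrors create_user above)
    cur = conn.cursor()
    cur.execute("SELECT id, username FROM muc_users WHERE username = ?", (username,))
    row = cur.fetchone()
    if row is None:
        cur.execute("INSERT INTO muc_users (username) VALUES (?)", (username,))
        cur.execute("SELECT id, username FROM muc_users WHERE username = ?", (username,))
        row = cur.fetchone()
    return row


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE muc_users (id INTEGER PRIMARY KEY, username TEXT UNIQUE)")
first = get_or_create_user(conn, "alice")
second = get_or_create_user(conn, "alice")
```

Calling it twice for the same name returns the same row and never violates the UNIQUE constraint, which is what lets `do_room` safely call `create_user` once per affiliation list entry.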
8c16f46acf3c76645280b368240ee645781f645e | 248 | py | Python | Algorithms/2. Implementation/18 - Climbing the Leaderboard.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | Algorithms/2. Implementation/18 - Climbing the Leaderboard.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | Algorithms/2. Implementation/18 - Climbing the Leaderboard.py | rosiejh/HackerRank | bfb07b8add04d3f3b67a61754db483f88a79e5a5 | [
"Apache-2.0"
] | null | null | null | def climbingLeaderboard(scores, alice):
    scores = list(reversed(sorted(set(scores))))
    r, rank = len(scores), []
    for a in alice:
        while (r > 0) and (a >= scores[r - 1]):
            r -= 1
        rank.append(r + 1)
    return rank | 31 | 48 | 0.540323 | 34 | 248 | 3.941176 | 0.588235 | 0.044776 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.023256 | 0.306452 | 248 | 8 | 49 | 31 | 0.755814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
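The solution deduplicates the leaderboard into a dense ranking ladder and walks a single pointer leftward as Alice's (ascending) scores rise, giving an O(n + m) pass after sorting. Repeated verbatim here as a standalone sketch so it can be checked against HackerRank's sample case:

```python
def climbingLeaderboard(scores, alice):
    # dedupe + sort descending gives the dense ranking ladder
    scores = list(reversed(sorted(set(scores))))
    r, rank = len(scores), []
    for a in alice:
        # alice's scores are ascending, so r only ever moves left
        while (r > 0) and (a >= scores[r - 1]):
            r -= 1
        rank.append(r + 1)
    return rank
```

For the leaderboard 100, 100, 50, 40, 40, 20, 10 and Alice's games 5, 25, 50, 120, her ranks are 6, 4, 2, 1.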
8c16f8302199537857c9b354ec984440477f9e4f | 1,417 | py | Python | endsem/component_1/initial_parameters_estimator.py | maher460/cmu10601 | 108811a648aa5128a1e44e269a1e6e8802e3f100 | [
"MIT"
] | 1 | 2019-04-17T14:05:35.000Z | 2019-04-17T14:05:35.000Z | endsem/component_1/initial_parameters_estimator.py | maher460/cmu10601 | 108811a648aa5128a1e44e269a1e6e8802e3f100 | [
"MIT"
] | null | null | null | endsem/component_1/initial_parameters_estimator.py | maher460/cmu10601 | 108811a648aa5128a1e44e269a1e6e8802e3f100 | [
"MIT"
] | null | null | null | import kmeans
import json
import numpy as np
NUM_GAUSSIANS = 32
DO_KMEANS = False
DEBUG = True
mixture_weights = [1.0/NUM_GAUSSIANS] * NUM_GAUSSIANS
if DEBUG:
    print ("mixture_weights: ", mixture_weights)

print("Loading parsed data...")
traindata_processed_file = open("parsed_data/data1.universalenrollparsed", "r")
data = json.loads(traindata_processed_file.read())
traindata_processed_file.close()
print("Done loading parsed data!")

means = []
if DO_KMEANS:
    means = kmeans.do_kmeans(data, 32)
else:
    print("Loading centroids...")
    traindata_processed_file = open("parsed_data/data1.kmeanspartialcentroids",
                                    "r")
    means = json.loads(traindata_processed_file.read())
    traindata_processed_file.close()
    print("Done loading centroids!")

data_np = np.array(data)
variances_np = np.var(data_np, axis=0)
if DEBUG:
    print ("variances_np: ", variances_np)
variances = [variances_np.tolist()] * NUM_GAUSSIANS

initial_params = {
    'mixture_weights': mixture_weights,
    'means': means,
    'variances': variances
}

print("writing inital parameters to file...")
traindata_processed_file = open("parsed_data/data1.initialparameters", "w")
traindata_processed_file.write(json.dumps(initial_params))
traindata_processed_file.close()
print("Done writing inital parameters to file")
| 27.784314 | 80 | 0.694425 | 168 | 1,417 | 5.613095 | 0.309524 | 0.171792 | 0.209968 | 0.082715 | 0.395546 | 0.33404 | 0.295864 | 0.165429 | 0.165429 | 0.165429 | 0 | 0.008757 | 0.194072 | 1,417 | 50 | 81 | 28.34 | 0.816988 | 0 | 0 | 0.128205 | 0 | 0 | 0.240649 | 0.080452 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.076923 | 0 | 0.076923 | 0.205128 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
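The script seeds an EM run with uniform mixture weights and a single data-wide diagonal variance vector shared by all 32 Gaussians, with the means coming from k-means centroids. A minimal pure-Python sketch of that initialization, with the function name mine and the means omitted:

```python
def initial_gmm_parameters(data, k):
    """Uniform mixture weights plus one shared per-dimension variance
    vector copied k times, as the script above computes with numpy.
    Means would come from the k-means centroids."""
    n = len(data)
    dims = len(data[0])
    variance = []
    for d in range(dims):
        col = [row[d] for row in data]
        mu = sum(col) / n
        # population variance, matching np.var's default ddof=0
        variance.append(sum((x - mu) ** 2 for x in col) / n)
    return {
        "mixture_weights": [1.0 / k] * k,
        "variances": [variance] * k,
    }
```

Starting every component with the same weight and the global variance is a common, deliberately uninformative initialization: EM then differentiates the components as it reassigns responsibility for the data.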
8c1b1007e7ce83e67bed0e618b9207e4eb8ea15f | 9,369 | py | Python | githubly.py | kumaranvpl/githubly | 0261af92f375ad06105e2969c5e62a0db6d4b095 | [
"MIT"
] | null | null | null | githubly.py | kumaranvpl/githubly | 0261af92f375ad06105e2969c5e62a0db6d4b095 | [
"MIT"
] | 4 | 2016-10-24T18:40:54.000Z | 2016-10-25T02:34:44.000Z | githubly.py | kumaranvpl/githubly | 0261af92f375ad06105e2969c5e62a0db6d4b095 | [
"MIT"
] | null | null | null | import csv
import getpass
import json
import requests
import sys
from requests.auth import HTTPBasicAuth
class GithublyException(Exception):
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return repr(self.value)


class Githubly:
    def __init__(self, username, password):
        self.username = username
        self.password = password
        self.GITHUB_API = "https://api.github.com/"
        self.auth_token = self._get_auth_token()
        self.headers = {'Authorization': 'token %s' % self.auth_token}
        self.user = self._need_another_user()
        self._print_repos(self.user)
        self.repo = raw_input("Please enter a repo name: ")

    def _get_response_from_api(self, url, need_links=None):
        # print url
        response = requests.get(url, auth=HTTPBasicAuth(self.username, self.password), headers=self.headers)
        if need_links:
            if "next" in response.links and "last" in response.links:
                return json.loads((response.text).encode('utf-8')), response.links["next"], response.links["last"]
            return json.loads((response.text).encode('utf-8')), None, None
        return json.loads((response.text).encode('utf-8'))

    def _post_to_api(self, url, data):
        # print url, data
        response = requests.post(url, data=json.dumps(data), auth=HTTPBasicAuth(self.username, self.password), headers=self.headers)
        return json.loads((response.text).encode('utf-8'))

    def _get_auth_token(self):
        with open('tokens.csv', 'rb') as f:
            reader = csv.reader(f)
            dict_from_csv = dict(reader)
        if self.username in dict_from_csv:
            return dict_from_csv[self.username]
        url = self.GITHUB_API + "authorizations"
        data = {"scopes": ["repo"], "note": "Auth token for Githubly"}
        response = requests.post(url, data=json.dumps(data), auth=HTTPBasicAuth(self.username, self.password))
        resp_dict = json.loads((response.text).encode('utf-8'))
        with open('tokens.csv', 'a') as f:
            writer = csv.writer(f)
            writer.writerow([self.username, resp_dict["token"]])
        return resp_dict["token"]

    def _need_another_user(self):
        user = raw_input("Please enter username of another user to list issues, else enter no: ")
        if user in ["no", "n", "N", "NO", "No"]:
            user = self.username
        return user

    def _get_repos(self, user):
        url = self.GITHUB_API + "users/" + user + "/repos?visibility=all&type=all"
        repos_list = self._get_response_from_api(url)
        return repos_list

    def _print_issues(self, user, repo, url=None):
        if not url:
            url = self.GITHUB_API + "repos/" + user + "/" + repo + "/issues"
        issues_list, next_url, last_url = self._get_response_from_api(url, need_links=True)
        if not issues_list:
            print "No issues found. Please open one first"
            return False
        for issue in issues_list:
            print str(issue["number"]) + "-" + issue["title"]
        if next_url and last_url:
            print "Next - %s" % next_url["url"]
            print "Last - %s" % last_url["url"]
            new_choice = raw_input("Please enter next/last to navigate to next/last page. Enter exit to quit: ")
            if new_choice not in ["Exit", "Quit", "quit", "exit", "q"]:
                if new_choice in ["Next", "next", "NEXT"]:
                    new_url = next_url["url"]
                elif new_choice in ["Last", "last", "LAST"]:
                    new_url = last_url["url"]
                else:
                    print "Bad choice :("
                    return True
                self._print_issues(user=user, repo=repo, url=new_url)
        return True

    def _print_repos(self, user):
        need_repos = raw_input("Do you want to see all repos?(y/n) ")
        if need_repos in ["yes", "Yes", "y", "Y", "YES"]:
            repos_list = self._get_repos(user)
            for repo in repos_list:
                print repo["name"]

    def list_issues(self):
        issues_present = self._print_issues(self.user, self.repo)

    def issue_in_detail(self):
        issues_present = self._print_issues(self.user, self.repo)
        if not issues_present: return
        issue_num = raw_input("Please enter issue's number to check its details: ")
        url = self.GITHUB_API + "repos" + "/" + self.user + "/" + self.repo + "/issues/" + issue_num
        try:
            response = self._post_to_api(url=url, data={})
            # print response
            print "Issue details:"
            print "Issue id - %s" % response["id"]
            print "Issue number - %s" % response["number"]
            print "Issue title - %s" % response["title"]
            print "Issue body - %s" % response["body"]
            print "Issue state - %s" % response["state"]
            print "Issue url - %s" % response["url"]
            print "Issue repository_url - %s" % response["repository_url"]
            print "Issue html_url - %s" % response["html_url"]
            print "Issue comments - %s" % response["comments"]
            print "Issue created_at - %s" % response["created_at"]
            print "Issue closed_at - %s" % response["closed_at"]
        except Exception as e:
            print "Error occured - %s" % str(e)
            raise GithublyException(e)

    def open_issue(self):
        title = raw_input("Please enter title for new issue: ")
        body = raw_input("Please enter body for new issue: ")
        data = {"title": title, "body": body}
        url = self.GITHUB_API + "repos" + "/" + self.user + "/" + self.repo + "/issues"
        try:
            response = self._post_to_api(url=url, data=data)
            print "Issue created successfully"
            print "Issue id - %s" % response["id"]
            print "Issue number - %s" % response["number"]
            print "Issue created_at - %s" % response["created_at"]
        except Exception as e:
            print "Error occured - %s" % str(e)
            raise GithublyException(e)

    def close_issue(self):
        issues_present = self._print_issues(self.user, self.repo)
        if not issues_present: return
        issue_num = raw_input("Please enter issue's number to close: ")
        data = {"state": "closed"}
        url = self.GITHUB_API + "repos" + "/" + self.user + "/" + self.repo + "/issues/" + issue_num
        try:
            response = self._post_to_api(url=url, data=data)
            print response
            print "Issue closed successfully"
            print "Issue id - %s" % response["id"]
            print "Issue number - %s" % response["number"]
            print "Issue state - %s" % response["state"]
            print "Issue created_at - %s" % response["created_at"]
            print "Issue closed_at - %s" % response["closed_at"]
        except Exception as e:
            print "Error occured - %s" % str(e)
            raise GithublyException(e)

    def add_comment(self):
        issues_present = self._print_issues(self.user, self.repo)
        if not issues_present: return
        issue_num = raw_input("Please enter issue's number to add comment: ")
        comment = raw_input("Please enter your comment: ")
        data = {"body": comment}
        url = self.GITHUB_API + "repos" + "/" + self.user + "/" + self.repo + "/issues/" + issue_num + "/comments"
        try:
            response = self._post_to_api(url=url, data=data)
            print response
            print "Comment added successfully"
            print "Comment id - %s" % response["id"]
            print "Comment message - %s" % response["body"]
            print "Comment html_url - %s" % response["html_url"]
            print "Comment created_at - %s" % response["created_at"]
        except Exception as e:
            print "Error occured - %s" % str(e)
            raise GithublyException(e)


if __name__ == "__main__":
    print "Githublyyyyyyyyyyyyyyyyyyyyy"
    print "Please enter your github username, password below. This is needed to avoid github's rate limitation. "
    print "Don't worry I am not saving your credentials ;)"
    username = raw_input("Username: ")
    password = getpass.getpass(prompt='Password: ', stream=None)
    try:
        githubly = Githubly(username=username, password=password)
    except Exception as e:
        print "Something broke :("
        print "Exception for geeks - %s" % str(e)
        raise GithublyException(e)
    while True:
        print "Menu"
        print "1. List issues"
        print "2. Issue in detail"
        print "3. Open new issue"
        print "4. Close issue"
        print "5. Add comment to an issue"
        print "Exit or Ctrl + C to quit"
        user_input = raw_input("Please enter your choice: ")
        if user_input == "1":
            githubly.list_issues()
        elif user_input == "2":
            githubly.issue_in_detail()
        elif user_input == "3":
            githubly.open_issue()
        elif user_input == "4":
            githubly.close_issue()
        elif user_input == "5":
            githubly.add_comment()
        elif user_input in ["Exit", "Quit", "quit", "exit", "q"]:
            print "Bye Bye Bye!!!"
            sys.exit()
        else:
            print "Wrong choice... Try again please"
| 41.455752 | 132 | 0.582026 | 1,160 | 9,369 | 4.539655 | 0.149138 | 0.039309 | 0.022788 | 0.036081 | 0.428788 | 0.398975 | 0.376946 | 0.360425 | 0.335359 | 0.305545 | 0 | 0.002266 | 0.293308 | 9,369 | 225 | 133 | 41.64 | 0.793083 | 0.004163 | 0 | 0.26943 | 0 | 0 | 0.222818 | 0.006219 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.046632 | 0.031088 | null | null | 0.316062 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
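The `__main__` loop dispatches menu choices through an if/elif chain; a common alternative is a handler table that maps each choice string to a callable, sketched here (`dispatch` and its return codes are my own, not part of Githubly):

```python
def dispatch(choice, handlers, quit_words=("exit", "quit", "q")):
    """Table-driven equivalent of the if/elif menu: look the choice up
    in a dict of callables instead of chaining branches."""
    if choice.lower() in quit_words:
        return "quit"
    handler = handlers.get(choice)
    if handler is None:
        return "unknown"
    handler()
    return "ok"
```

Adding a sixth menu entry then means adding one dict key rather than another elif branch, and the quit words are normalized with `lower()` instead of being enumerated in every casing.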
8c1dba83b34a1c4ed784910f891d161f756f6131 | 905 | py | Python | examples/accessing_variables.py | Rory-Sullivan/yrlocationforecast | 26b66834cac4569704daf0009a9d2bba39dbfb75 | [
"MIT"
] | 13 | 2020-07-28T17:47:42.000Z | 2022-03-30T13:35:12.000Z | examples/accessing_variables.py | Rory-Sullivan/yrlocationforecast | 26b66834cac4569704daf0009a9d2bba39dbfb75 | [
"MIT"
] | 5 | 2020-10-14T11:10:13.000Z | 2022-01-01T17:35:19.000Z | examples/accessing_variables.py | Rory-Sullivan/yrlocationforecast | 26b66834cac4569704daf0009a9d2bba39dbfb75 | [
"MIT"
] | 6 | 2020-10-16T12:30:07.000Z | 2022-02-18T07:13:21.000Z | """An example of accessing individual forecast variables."""
from metno_locationforecast import Place, Forecast
USER_AGENT = "metno_locationforecast/1.0 https://github.com/Rory-Sullivan/yrlocationforecast"
new_york = Place("New York", 40.7, -74.0, 10)
new_york_forecast = Forecast(new_york, USER_AGENT, "complete")
new_york_forecast.update()
# Access a particular interval.
first_interval = new_york_forecast.data.intervals[0]
print(first_interval)
# Access the interval's duration attribute.
print(f"Duration: {first_interval.duration}")
print() # Blank line
# Access a particular variable from the interval.
rain = first_interval.variables["precipitation_amount"]
print(rain)
# Access the variables value and unit attributes.
print(f"Rain value: {rain.value}")
print(f"Rain units: {rain.units}")
# Get a full list of variables available in the interval.
print(first_interval.variables.keys())
| 30.166667 | 93 | 0.781215 | 127 | 905 | 5.425197 | 0.472441 | 0.060958 | 0.065312 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013631 | 0.108287 | 905 | 29 | 94 | 31.206897 | 0.840149 | 0.320442 | 0 | 0 | 0 | 0 | 0.326159 | 0.084437 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
8c2a807f43d6eef105689c63e12efd6dbb33bdcb | 2,099 | py | Python | backend_django/login_api/serializers.py | oereo/cau-lion-server | 58ae08bba739387796a814f03c193a1eeea1b8a6 | [
"MIT"
] | 2 | 2020-04-17T07:22:55.000Z | 2020-04-20T16:45:38.000Z | backend_django/login_api/serializers.py | oereo/cau-lion-server | 58ae08bba739387796a814f03c193a1eeea1b8a6 | [
"MIT"
] | 17 | 2020-04-25T12:01:16.000Z | 2022-03-12T00:32:42.000Z | backend_django/login_api/serializers.py | minseungseon/cau-lion-server | 705d892df4746f658f903bc30e1622da35e81e69 | [
"MIT"
] | 3 | 2020-04-16T06:20:53.000Z | 2020-04-19T01:47:20.000Z | #2020-04-20 minseung seon created.
# All serializers are kept simple by using ModelSerializer
from rest_framework import serializers
from django.contrib.auth.models import User
from django.contrib.auth import authenticate
from .models import Profile
# Sign Up
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'username', 'email')


class CreateUserSerializer(serializers.ModelSerializer):
    # def create(self, validated_data):
    #     username = validated_data['username']
    #     email = validated_data['email']
    #     password = validated_data['password']
    #     user_obj = User(
    #         username=username,
    #         email=email
    #     )
    #     user_obj.set_password(password)
    #     user_obj.save()
    #     return validated_data

    # class Meta:
    #     model = User
    #     fields = [
    #         'username',
    #         'password',
    #         'email',
    #         'is_superuser',
    #     ]

    class Meta:
        model = User
        fields = ("id", "username", "password", "email")
        extra_kwargs = {"password": {"write_only": True}}

    def create(self, validated_data):
        user = User.objects.create_user(
            validated_data["username"], None, validated_data["password"]
        )
        return user


# Check valid access on server (whether the session is still active)
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ("id", "username")


# Login
# Written as a plain Serializer because there is no model attached
class LoginUserSerializer(serializers.Serializer):
    username = serializers.CharField()
    password = serializers.CharField()

    def validate(self, data):
        user = authenticate(**data)
        if user and user.is_active:
            return user
        raise serializers.ValidationError("The username or password is incorrect.")


class ProfileSerializer(serializers.Serializer):
    class Meta:
        model = Profile
        # exclude = ("user_pk", "likelion_number", "email")
        fields = '__all__'
#read_only = True | 26.56962 | 72 | 0.613626 | 210 | 2,099 | 6.014286 | 0.438095 | 0.082344 | 0.055424 | 0.057007 | 0.212193 | 0.152019 | 0.152019 | 0.125099 | 0.125099 | 0.125099 | 0 | 0.005323 | 0.283945 | 2,099 | 79 | 73 | 26.56962 | 0.834997 | 0.322058 | 0 | 0.294118 | 0 | 0 | 0.079456 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058824 | false | 0.117647 | 0.117647 | 0 | 0.558824 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8c321b723875c29f1ae6ed62908243de66570a8e | 3,430 | py | Python | core/views.py | lcs-amorim/OPE-EasyParty | b3439bf21523d7f3fb19b12283c24f364bf54388 | [
"Apache-2.0"
] | null | null | null | core/views.py | lcs-amorim/OPE-EasyParty | b3439bf21523d7f3fb19b12283c24f364bf54388 | [
"Apache-2.0"
] | 4 | 2020-06-05T18:01:15.000Z | 2021-09-07T23:51:04.000Z | core/views.py | lcs-amorim/OPE-EasyParty | b3439bf21523d7f3fb19b12283c24f364bf54388 | [
"Apache-2.0"
] | 1 | 2018-10-02T23:45:15.000Z | 2018-10-02T23:45:15.000Z | from django.shortcuts import render, redirect , HttpResponseRedirect, get_object_or_404
from django.contrib.auth.decorators import login_required, user_passes_test
from django.contrib.auth.forms import UserCreationForm, PasswordChangeForm
from django.views.generic import View, TemplateView, CreateView, UpdateView
from django.conf import settings
from django.contrib.auth.mixins import LoginRequiredMixin
from django.shortcuts import render
from core.forms import ClienteForm, EditaContaClienteForm
from core.models import Produto
from core.models import Categoria
def index(request):
    contexto = {
        "produtos": Produto.objects.all()
    }
    return render(request, "index.html", contexto)


def produto(request):  # , slug):
    # contexto = {
    #     'produto': get_object_or_404(Produto, slug=slug)  # checks that the URL exists; returns a 404 error otherwise
    # }
    template_name = 'produto.html'
    return render(request, template_name)


def lista_produto(request):
    pass


def categoria(request, slug):
    categoria = Categoria.objects.get(slug=slug)
    contexto = {
        'categoria': categoria,
        'produtos': Produto.objects.filter(categoria=categoria),
    }
    return render(request, 'categoria.html', contexto)


def contato(request):
    pass


def festa(request):
    return render(request, "festa.html")


# Login authentication
def login_cliente(request):
    return render(request, "login.html")


def contato(request):
    return render(request, "contato.html")


# User authentication
@login_required(login_url="entrar")
def page_user(request):
    return render(request, 'index.html')


# -----------------------------------------------//---------------------------------#
# registration page
def registrar(request):
    # If data was submitted via POST
    if request.POST:
        form = ClienteForm(request.POST)
        if form.is_valid():  # if the form is valid
            form.save()  # create a new user from the submitted data
    else:
        form = ClienteForm()
    contexto = {
        "form": form
    }
    return render(request, "registrar.html", contexto)


# -----------------------------------------------//---------------------------------#
# function to edit the account
@login_required
def editarConta(request):
    template_name = 'editarConta.html'
    contexto = {}
    if request.method == 'POST':
        form = EditaContaClienteForm(request.POST, instance=request.user)
        if form.is_valid():
            form.save()
            form = EditaContaClienteForm(instance=request.user)
            contexto['success'] = True
    else:
        form = EditaContaClienteForm(instance=request.user)
    contexto['form'] = form
    return render(request, template_name, contexto)


# -----------------------------------------------//---------------------------------#
# function to change the password
@login_required
def editarSenha(request):
    template_name = 'editarSenha.html'
    context = {}
    if request.method == 'POST':
        form = PasswordChangeForm(data=request.POST, user=request.user)
        if form.is_valid():
            form.save()
            context['success'] = True
    else:
        form = PasswordChangeForm(user=request.user)
    context['form'] = form
    return render(request, template_name, context)
# -----------------------------------------------//---------------------------------# | 27.886179 | 121 | 0.625656 | 354 | 3,430 | 5.991525 | 0.313559 | 0.056577 | 0.08958 | 0.049033 | 0.224422 | 0.132485 | 0.06695 | 0.030174 | 0 | 0 | 0 | 0.003228 | 0.187172 | 3,430 | 123 | 122 | 27.886179 | 0.757532 | 0.199708 | 0 | 0.285714 | 0 | 0 | 0.069358 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.155844 | false | 0.077922 | 0.12987 | 0.051948 | 0.415584 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8c32b849026f787dad3dce205fac98d12cb8e86b | 478 | py | Python | adminlte_log/tests.py | beastbikes/django-only-admin | c89b782b92edbb1f75151e71163c0708afacd4f9 | [
"MIT"
] | 32 | 2016-11-24T08:33:10.000Z | 2017-12-18T00:25:00.000Z | adminlte_log/tests.py | beastbikes/django-only-admin | c89b782b92edbb1f75151e71163c0708afacd4f9 | [
"MIT"
] | 15 | 2016-11-30T08:28:56.000Z | 2017-09-20T15:54:18.000Z | adminlte_log/tests.py | beastbikes/django-only-admin | c89b782b92edbb1f75151e71163c0708afacd4f9 | [
"MIT"
] | 9 | 2016-11-25T02:14:24.000Z | 2017-12-06T13:22:51.000Z | from django.contrib.auth.models import User
from django.test import TestCase
from adminlte_log.models import AdminlteLogType, AdminlteLog
class AdminlteLogTest(TestCase):
    def setUp(self):
        AdminlteLogType.objects.create(name='test', code='test')
        self.user = User.objects.create_user(username='bohan')

    def test_log(self):
        log = AdminlteLog.info('test', user=self.user, sort_desc='This is a log', foo='bar')
        self.assertEqual(log.id, 1)
| 29.875 | 92 | 0.713389 | 64 | 478 | 5.265625 | 0.546875 | 0.059347 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002513 | 0.167364 | 478 | 15 | 93 | 31.866667 | 0.844221 | 0 | 0 | 0 | 0 | 0 | 0.069038 | 0 | 0 | 0 | 0 | 0 | 0.1 | 1 | 0.2 | false | 0 | 0.3 | 0 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
8c35b9a559288af50f129e1366cee6c00d138f00 | 621 | py | Python | citeyoursoftware/main.py | rodluger/citeyoursoftware | 8bccb33a268653a7267163ccdc67df72e191b388 | ["MIT"] | 3 | 2021-11-12T17:17:00.000Z | 2021-11-12T20:30:41.000Z | citeyoursoftware/main.py | rodluger/citeyoursoftware | 8bccb33a268653a7267163ccdc67df72e191b388 | ["MIT"] | null | null | null | citeyoursoftware/main.py | rodluger/citeyoursoftware | 8bccb33a268653a7267163ccdc67df72e191b388 | ["MIT"] | null | null | null |
from .packages import get_packages
from .pypi import get_pypi_bib
def get_bibliography(
    env_file="environment.yml", env_path=None, exclude=["python"]
):
    # Get all user-listed packages w/ channels & exact versions
    packages = get_packages(env_file=env_file, env_path=env_path)
    # Try to find BibTeX entries for all packages
    for name in packages:
        packages[name]["bib"] = []
        version = packages[name]["version"]
        channel = packages[name]["channel"]
        if channel == "pypi":
            packages[name]["bib"] += get_pypi_bib(name, version)
        # TODO!!!
    return packages
| 27 | 65 | 0.652174 | 79 | 621 | 4.974684 | 0.455696 | 0.122137 | 0.050891 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.231884 | 621 | 23 | 66 | 27 | 0.823899 | 0.175523 | 0 | 0 | 0 | 0 | 0.088409 | 0 | 0 | 0 | 0 | 0.043478 | 0 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.307692 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c444c182cfcbcbf6fc1fbb3f34c210eb99835c2 | 476 | py | Python | python/large-class/1_extract-class.py | mario21ic/refactoring-guru | a28730ebbcf54363cb98e921820198b50fc3204f | ["MIT"] | 1 | 2020-03-31T00:57:39.000Z | 2020-03-31T00:57:39.000Z | python/large-class/1_extract-class.py | mario21ic/refactoring-guru | a28730ebbcf54363cb98e921820198b50fc3204f | ["MIT"] | null | null | null | python/large-class/1_extract-class.py | mario21ic/refactoring-guru | a28730ebbcf54363cb98e921820198b50fc3204f | ["MIT"] | null | null | null |
# When one class does the work of two, awkwardness results.
class Person:
    def __init__(self, name, office_area_code, office_number):
        self.name = name
        self.office_area_code = office_area_code
        self.office_number = office_number

    def telephone_number(self):
        return "%d-%d" % (self.office_area_code, self.office_number)

if __name__=="__main__":
    p = Person("Mario", 51, 966296636)
    print(p.name)
    print(p.telephone_number())
| 29.75 | 68 | 0.680672 | 65 | 476 | 4.584615 | 0.446154 | 0.134228 | 0.187919 | 0.134228 | 0.201342 | 0.201342 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.214286 | 476 | 16 | 69 | 29.75 | 0.76738 | 0.119748 | 0 | 0 | 0 | 0 | 0.043062 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0 | 0.090909 | 0.363636 | 0.181818 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c49da13dca709daa5a8e6c290b24925848f9676 | 1,790 | py | Python | database_replication/python/mock_ripper.py | ryland-e-atkins/complexa | fa5ce6227f45227cdd6b27b5d63edd000c760e84 | ["MIT"] | null | null | null | database_replication/python/mock_ripper.py | ryland-e-atkins/complexa | fa5ce6227f45227cdd6b27b5d63edd000c760e84 | ["MIT"] | null | null | null | database_replication/python/mock_ripper.py | ryland-e-atkins/complexa | fa5ce6227f45227cdd6b27b5d63edd000c760e84 | ["MIT"] | null | null | null |
# This module is used to combine and remove duplicates from raw mockaroo data
from subprocess import call
from util import *

# def generateCleanFiles():
#     """
#     DEPRECATED
#     """
#     filePrefix = 'mockaroo/mock_data_raw/'
#     fileNames = [
#         filePrefix + 'brandName/brandName1.csv',
#         filePrefix + 'brandName/brandName2.csv',
#         filePrefix + 'brandName/brandName3.csv',
#         filePrefix + 'brandName/brandName4.csv',
#         filePrefix + 'brandName/brandName5.csv']
#     names = cleanList(readMultipleFilesIntoList(fileNames))
#     words = cleanList(splitWords(names))
#     nameFileName = 'mockaroo/mock_data_clean/brandName/brandNames.csv'
#     wordFileName = 'mockaroo/mock_data_clean/brandName/brandWords.csv'
#     print("Name write: {0}".format(writeListToFile(names,nameFileName)))
#     print("Word write: {0}".format(writeListToFile(words,wordFileName)))

def generateFile(modifier, numFiles):
    #mkdir(modifier)
    filePrefix = 'mockaroo_data/raw/'+modifier+'/'
    fileNames = [filePrefix + modifier + str(x) + '.csv' for x in range(1,numFiles+1)]
    outFileName = 'mockaroo_data/cln/'+modifier+'/'+modifier+'.csv'
    items = cleanList(readMultipleFilesIntoList(fileNames))
    print("Write to {1} successful: {0}".format(writeListToFile(items,outFileName), outFileName))
    return

def mkdir(dirName):
    """
    Creates directory if not exists
    """
    raw = 'mockaroo_data/raw/' + dirName
    cln = 'mockaroo_data/cln/' + dirName
    call(['mkdir', raw])
    call(['mkdir', cln])
    return

def main():
    modifier = 'buzzword'
    numFiles = 10
    #mkdir(modifier)
    generateFile(modifier, numFiles)

if __name__ == "__main__":
    main()
| 30.862069 | 98 | 0.644693 | 176 | 1,790 | 6.454545 | 0.403409 | 0.052817 | 0.077465 | 0.036972 | 0.052817 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009339 | 0.222346 | 1,790 | 58 | 99 | 30.862069 | 0.806753 | 0.497207 | 0 | 0.095238 | 1 | 0 | 0.168109 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.095238 | 0 | 0.333333 | 0.047619 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c54ad4c8119847e922beb1d17182b424e160a27 | 819 | py | Python | Pre-Interview Challenges/camelcase.py | Wryhder/solve-with-code | 0fd1ef4f1c46ad89d68a667b3aaa6b98c69da266 | ["MIT"] | null | null | null | Pre-Interview Challenges/camelcase.py | Wryhder/solve-with-code | 0fd1ef4f1c46ad89d68a667b3aaa6b98c69da266 | ["MIT"] | null | null | null | Pre-Interview Challenges/camelcase.py | Wryhder/solve-with-code | 0fd1ef4f1c46ad89d68a667b3aaa6b98c69da266 | ["MIT"] | null | null | null |
# Andela
"""
Problem Statement:
Write a function called camelCase that takes a string containing a Python-like variable name,
e.g. is_prime and turns it into the corresponding Java-like camel-case variable name, i.e. isPrime.
"""
def camelCase(python_var_name):
    """
    This function takes a string containing a Python-like variable name
    e.g. is_prime and turns it into the corresponding Java-like camel-case variable name,
    i.e. isPrime.
    """
    some_list = python_var_name.split("_")
    java_version = ""
    for word in some_list:
        if word == some_list[0] or word.isdigit() == True:
            java_version += word
            continue
        title_case = word.title()
        java_version += title_case
    return java_version

# test
print(camelCase("is_prime1her"))
| 27.3 | 99 | 0.666667 | 114 | 819 | 4.640351 | 0.45614 | 0.090737 | 0.045369 | 0.083176 | 0.461248 | 0.461248 | 0.461248 | 0.461248 | 0.461248 | 0.461248 | 0 | 0.003252 | 0.249084 | 819 | 29 | 100 | 28.241379 | 0.856911 | 0.013431 | 0 | 0 | 0 | 0 | 0.033943 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.090909 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c5ec30a972f5c57beabebc879476a07e04ac93c | 3,213 | py | Python | lib/textwin.py | tomjackbear/python-0.9.1 | 00adeddadaede51e92447523266c9d5616201c38 | ["FSFAP"] | 4 | 2020-07-21T09:47:52.000Z | 2022-01-05T21:43:36.000Z | lib/textwin.py | tomjackbear/python-0.9.1 | 00adeddadaede51e92447523266c9d5616201c38 | ["FSFAP"] | 1 | 2020-09-23T20:46:33.000Z | 2020-09-23T20:59:57.000Z | lib/textwin.py | tomjackbear/python-0.9.1 | 00adeddadaede51e92447523266c9d5616201c38 | ["FSFAP"] | 4 | 2020-07-13T00:45:24.000Z | 2021-09-04T14:50:46.000Z |
# Module 'textwin'
# Text windows, a subclass of gwin
import stdwin
import stdwinsupport
import gwin
S = stdwinsupport # Shorthand
def fixsize(w):
    docwidth, docheight = w.text.getrect()[1]
    winheight = w.getwinsize()[1]
    if winheight > docheight: docheight = winheight
    w.setdocsize(0, docheight)
    fixeditmenu(w)

def cut(w, m, id):
    s = w.text.getfocustext()
    if s:
        stdwin.setcutbuffer(0, s)
        w.text.replace('')
        fixsize(w)

def copy(w, m, id):
    s = w.text.getfocustext()
    if s:
        stdwin.setcutbuffer(0, s)
        fixeditmenu(w)

def paste(w, m, id):
    w.text.replace(stdwin.getcutbuffer(0))
    fixsize(w)

def addeditmenu(w):
    m = w.editmenu = w.menucreate('Edit')
    m.action = []
    m.additem('Cut', 'X')
    m.action.append(cut)
    m.additem('Copy', 'C')
    m.action.append(copy)
    m.additem('Paste', 'V')
    m.action.append(paste)

def fixeditmenu(w):
    m = w.editmenu
    f = w.text.getfocus()
    can_copy = (f[0] < f[1])
    m.enable(1, can_copy)
    if not w.readonly:
        m.enable(0, can_copy)
        m.enable(2, (stdwin.getcutbuffer(0) <> ''))

def draw(w, area):                      # Draw method
    w.text.draw(area)

def size(w, newsize):                   # Size method
    w.text.move((0, 0), newsize)
    fixsize(w)

def close(w):                           # Close method
    del w.text                          # Break circular ref
    gwin.close(w)

def char(w, c):                         # Char method
    w.text.replace(c)
    fixsize(w)

def backspace(w):                       # Backspace method
    void = w.text.event(S.we_command, w, S.wc_backspace)
    fixsize(w)

def arrow(w, detail):                   # Arrow method
    w.text.arrow(detail)
    fixeditmenu(w)

def mdown(w, detail):                   # Mouse down method
    void = w.text.event(S.we_mouse_down, w, detail)
    fixeditmenu(w)

def mmove(w, detail):                   # Mouse move method
    void = w.text.event(S.we_mouse_move, w, detail)

def mup(w, detail):                     # Mouse up method
    void = w.text.event(S.we_mouse_up, w, detail)
    fixeditmenu(w)

def activate(w):                        # Activate method
    fixeditmenu(w)

def open(title, str):                   # Display a string in a window
    w = gwin.open(title)
    w.readonly = 0
    w.text = w.textcreate((0, 0), w.getwinsize())
    w.text.replace(str)
    w.text.setfocus(0, 0)
    addeditmenu(w)
    fixsize(w)
    w.draw = draw
    w.size = size
    w.close = close
    w.mdown = mdown
    w.mmove = mmove
    w.mup = mup
    w.char = char
    w.backspace = backspace
    w.arrow = arrow
    w.activate = activate
    return w

def open_readonly(title, str):          # Same with char input disabled
    w = open(title, str)
    w.readonly = 1
    w.char = w.backspace = gwin.nop
    # Disable Cut and Paste menu item; leave Copy alone
    w.editmenu.enable(0, 0)
    w.editmenu.enable(2, 0)
    return w
| 26.775 | 70 | 0.522876 | 418 | 3,213 | 3.990431 | 0.239234 | 0.053957 | 0.053957 | 0.035971 | 0.144484 | 0.118106 | 0.118106 | 0.104317 | 0.053957 | 0.053957 | 0 | 0.011639 | 0.358232 | 3,213 | 119 | 71 | 27 | 0.797284 | 0.103953 | 0 | 0.212766 | 0 | 0 | 0.006641 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.031915 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c6005532028b79259ef2ccbde6cb3de137f6931 | 13,037 | py | Python | nexthop_summary.py | dalekirkman1/SecureCRT-tools | fd2e60cb0ec561da5c900a6396993c114d925c5a | ["Apache-2.0"] | 2 | 2021-03-18T05:14:24.000Z | 2022-03-30T08:54:49.000Z | nexthop_summary.py | dalekirkman1/SecureCRT-tools | fd2e60cb0ec561da5c900a6396993c114d925c5a | ["Apache-2.0"] | null | null | null | nexthop_summary.py | dalekirkman1/SecureCRT-tools | fd2e60cb0ec561da5c900a6396993c114d925c5a | ["Apache-2.0"] | 1 | 2021-02-18T23:46:22.000Z | 2021-02-18T23:46:22.000Z |
# $language = "python"
# $interface = "1.0"
# ################################################ SCRIPT INFO ###################################################
# Author: Jamie Caesar
# Email: jcaesar@presidio.com
#
# This script will grab the route table information from a Cisco IOS or NXOS device and export details about each
# next-hop address (how many routes and from which protocol) into a CSV file. It will also list all connected networks
# and give a detailed breakdown of every route that goes to each next-hop.
#
#
# ################################################ SCRIPT SETTING ###################################################
#
# Global settings that affect all scripts (output directory, date format, etc) is stored in the "global_settings.json"
# file in the "settings" directory.
#
# If any local settings are used for this script, they will be stored in the same settings folder, with the same name
# as the script that uses them, except ending with ".json".
#
# All settings can be manually modified in JSON format (the same syntax as Python lists and dictionaries). Be aware of
# required commas between items, or else options are likely to get run together and break the script.
#
# **IMPORTANT** All paths saved in .json files must contain either forward slashes (/home/jcaesar) or
# DOUBLE back-slashes (C:\\Users\\Jamie). Single backslashes will be considered part of a control character and will
# cause an error on loading.
#
# ################################################ IMPORTS ###################################################
import os
import sys
import logging
# If the "crt" object exists, this is being run from SecureCRT. Get script directory so we can add it to the
# PYTHONPATH, which is needed to import our custom modules.
if 'crt' in globals():
    script_dir, script_name = os.path.split(crt.ScriptFullName)
    if script_dir not in sys.path:
        sys.path.insert(0, script_dir)
else:
    script_dir, script_name = os.path.split(os.path.realpath(__file__))
    os.chdir(script_dir)
# Now we can import our custom modules
import securecrt_tools.sessions as sessions
import securecrt_tools.settings as settings
import securecrt_tools.utilities as utils
import securecrt_tools.ipaddress as ipaddress
# ################################################ LOAD SETTINGS ###################################################
session_set_filename = os.path.join(script_dir, "settings", settings.global_settings_filename)
session_settings = settings.SettingsImporter(session_set_filename, settings.global_defs)
# Set logger variable -- this won't be used unless debug setting is True
logger = logging.getLogger("securecrt")
# ################################################ SCRIPT ###################################################
def update_empty_interfaces(route_table):
    """
    Takes the routes table as a list of dictionaries (with dict key names used in parse_routes function) and does
    recursive lookups to find the outgoing interface for those entries in the route-table where the outgoing interface
    isn't listed.

    :param route_table: <list> A list of dictionaries - specifically with the keys 'network', 'protocol', 'nexthop'
                        and 'interface'
    :return: The updated route_table object with outbound interfaces filled in.
    """
    def recursive_lookup(nexthop):
        for network in connected:
            if nexthop in network:
                return connected[network]
        for network in statics:
            if nexthop in network:
                return recursive_lookup(statics[network])
        return None

    logger.debug("STARTING update_empty_interfaces")
    connected = {}
    unknowns = {}
    statics = {}
    for route in route_table:
        if route['protocol'] == 'connected':
            connected[route['network']] = route['interface']
        if route['protocol'] == 'static':
            if route['nexthop']:
                statics[route['network']] = route['nexthop']
        if route['nexthop'] and not route['interface']:
            unknowns[route['nexthop']] = None

    for nexthop in unknowns:
        unknowns[nexthop] = recursive_lookup(nexthop)

    for route in route_table:
        if not route['interface']:
            if route['nexthop'] in unknowns:
                route['interface'] = unknowns[route['nexthop']]
    logger.debug("ENDING update_empty_interfaces")
def parse_routes(fsm_routes):
    """
    This function will take the TextFSM parsed route-table from the `textfsm_parse_to_dict` function. Each dictionary
    in the TextFSM output represents a route entry. Each of these dictionaries will be updated to convert IP addresses
    into ip_address or ip_network objects (from the ipaddress.py module). Some key names will also be updated.

    :param fsm_routes: <list of dicts> TextFSM output from the `textfsm_parse_to_dict` function.
    :return: <list of dicts> An updated list of dictionaries that replaces IP address strings with objects from the
             ipaddress.py module from Google.
    """
    logger.debug("STARTING parse_routes function.")
    complete_table = []
    for route in fsm_routes:
        new_entry = {}
        logger.debug("Processing route entry: {0}".format(str(route)))
        new_entry['network'] = ipaddress.ip_network(u"{0}/{1}".format(route['NETWORK'], route['MASK']))
        new_entry['protocol'] = utils.normalize_protocol(route['PROTOCOL'])
        if route['NEXTHOP_IP'] == '':
            new_entry['nexthop'] = None
        else:
            new_entry['nexthop'] = ipaddress.ip_address(unicode(route['NEXTHOP_IP']))
        if route["NEXTHOP_IF"] == '':
            new_entry['interface'] = None
        else:
            new_entry['interface'] = route['NEXTHOP_IF']
        # Nexthop VRF will only occur in NX-OS route tables (%vrf-name after the nexthop)
        if 'NEXTHOP_VRF' in route:
            if route['NEXTHOP_VRF'] == '':
                new_entry['vrf'] = None
            else:
                new_entry['vrf'] = route['NEXTHOP_VRF']
        logger.debug("Adding updated route entry '{0}' based on the information: {1}".format(str(new_entry),
                                                                                            str(route)))
        complete_table.append(new_entry)
    update_empty_interfaces(complete_table)
    logger.debug("ENDING parse_route function")
    return complete_table
def nexthop_summary(textfsm_dict):
    """
    A function that builds a CSV output (list of lists) that displays the summary information after analyzing the
    input route table.

    :param textfsm_dict:
    :return:
    """
    # Identify connected or other local networks -- most found in NXOS to exclude from next-hops. These are excluded
    # from the nexthop summary (except connected has its own section in the output).
    logger.debug("STARTING nexthop_summary function")
    local_protos = ['connected', 'local', 'hsrp', 'vrrp', 'glbp']

    # Create a list of all dynamic protocols from the provided route table. Add total and statics to the front.
    proto_list = []
    for entry in textfsm_dict:
        if entry['protocol'] not in proto_list and entry['protocol'] not in local_protos:
            logger.debug("Found protocol '{0}' in the table".format(entry['protocol']))
            proto_list.append(entry['protocol'])
    proto_list.sort(key=utils.human_sort_key)
    proto_list.insert(0, 'total')
    proto_list.insert(0, 'interface')

    # Create dictionaries to store summary information as we process the route table.
    summary_table = {}
    connected_table = {}
    detailed_table = {}

    # Process the route table to populate the above 3 dictionaries.
    for entry in textfsm_dict:
        logger.debug("Processing route: {0}".format(str(entry)))
        # If the route is connected, local or an FHRP entry
        if entry['protocol'] in local_protos:
            if entry['protocol'] == 'connected':
                if entry['interface'] not in connected_table:
                    connected_table[entry['interface']] = []
                connected_table[entry['interface']].append(str(entry['network']))
        else:
            if entry['nexthop']:
                if 'vrf' in entry and entry['vrf']:
                    nexthop = "{0}%{1}".format(entry['nexthop'], entry['vrf'])
                else:
                    nexthop = str(entry['nexthop'])
            elif entry['interface'].lower() == "null0":
                nexthop = 'discard'
            if nexthop not in summary_table:
                # Create an entry for this next-hop, containing zero count for all protocols.
                summary_table[nexthop] = {}
                summary_table[nexthop].update(zip(proto_list, [0] * len(proto_list)))
                summary_table[nexthop]['interface'] = entry['interface']
            # Increment total and protocol specific count
            summary_table[nexthop][entry['protocol']] += 1
            summary_table[nexthop]['total'] += 1
            if nexthop not in detailed_table:
                detailed_table[nexthop] = []
            detailed_table[nexthop].append((str(entry['network']), entry['protocol']))

    # Convert summary_table into a format that can be printed to the CSV file.
    output = []
    header = ["Nexthop", "Interface", "Total"]
    header.extend(proto_list[2:])
    output.append(header)
    summary_keys = sorted(summary_table.keys(), key=utils.human_sort_key)
    for key in summary_keys:
        line = [key]
        for column in proto_list:
            line.append(summary_table[key][column])
        output.append(line)
    output.append([])

    # Convert the connected_table into a format that can be printed to the CSV file (and append to output)
    output.append([])
    output.append(["Connected:"])
    output.append(["Interface", "Network(s)"])
    connected_keys = sorted(connected_table.keys(), key=utils.human_sort_key)
    for key in connected_keys:
        line = [key]
        for network in connected_table[key]:
            line.append(network)
        output.append(line)
    output.append([])

    # Convert the detailed_table into a format that can be printed to the CSV file (and append to output)
    output.append([])
    output.append(["Route Details"])
    output.append(["Nexthop", "Network", "Protocol"])
    detailed_keys = sorted(detailed_table.keys(), key=utils.human_sort_key)
    for key in detailed_keys:
        for network in detailed_table[key]:
            line = [key]
            line.extend(list(network))
            output.append(line)
    output.append([])

    # Return the output, ready to be sent directly to a CSV file
    logger.debug("ENDING nexthop_summary function")
    return output
def script_main(session):
    supported_os = ["IOS", "NXOS"]
    if session.os not in supported_os:
        logger.debug("Unsupported OS: {0}. Exiting program.".format(session.os))
        session.message_box("{0} is not a supported OS for this script.".format(session.os), "Unsupported OS",
                            options=sessions.ICON_STOP)
        return
    else:
        send_cmd = "show ip route"

        selected_vrf = session.prompt_window("Enter the VRF name. (Leave blank for default VRF)")
        if selected_vrf != "":
            send_cmd = send_cmd + " vrf {0}".format(selected_vrf)
            session.hostname = session.hostname + "-VRF-{0}".format(selected_vrf)
            logger.debug("Received VRF: {0}".format(selected_vrf))

        raw_routes = session.get_command_output(send_cmd)

        if session.os == "IOS":
            template_file = "textfsm-templates/cisco_ios_show_ip_route.template"
        else:
            template_file = "textfsm-templates/cisco_nxos_show_ip_route.template"
        fsm_results = utils.textfsm_parse_to_dict(raw_routes, template_file)

        route_list = parse_routes(fsm_results)

        output_filename = session.create_output_filename("nexthop-summary", ext=".csv")
        output = nexthop_summary(route_list)
        utils.list_of_lists_to_csv(output, output_filename)

        # Clean up before closing session
        session.end()
# ################################################ SCRIPT LAUNCH ###################################################
# If this script is run from SecureCRT directly, create our session object using the "crt" object provided by SecureCRT
if __name__ == "__builtin__":
    # Create a session object for this execution of the script and pass it to our main() function
    crt_session = sessions.CRTSession(crt, session_settings)
    script_main(crt_session)

# Else, if this script is run directly then create a session object without the SecureCRT API (crt object) This would
# be done for debugging purposes (running the script outside of SecureCRT and feeding it the output it failed on)
elif __name__ == "__main__":
    direct_session = sessions.DirectSession(os.path.realpath(__file__), session_settings)
    script_main(direct_session)
| 43.312292 | 119 | 0.635192 | 1,616 | 13,037 | 4.992574 | 0.223391 | 0.019336 | 0.010412 | 0.008428 | 0.12593 | 0.083664 | 0.06532 | 0.040283 | 0.040283 | 0.040283 | 0 | 0.002481 | 0.227046 | 13,037 | 301 | 120 | 43.312292 | 0.798055 | 0.23487 | 0 | 0.143678 | 0 | 0 | 0.16022 | 0.0184 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.045977 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8c63a890e4c7c21d82ab4e450776446d2c1bea12 | 978 | py | Python | article/tests.py | AngleMAXIN/nomooc | 0c06b8af405b8254c242b34e85b7cba99c8f5737 | ["MIT"] | 1 | 2019-12-04T03:20:16.000Z | 2019-12-04T03:20:16.000Z | article/tests.py | AngleMAXIN/NoMooc | 0c06b8af405b8254c242b34e85b7cba99c8f5737 | ["MIT"] | null | null | null | article/tests.py | AngleMAXIN/NoMooc | 0c06b8af405b8254c242b34e85b7cba99c8f5737 | ["MIT"] | null | null | null |
# Create your tests here.
from article.db_manager.article_manager import create_article_db
from utils.api.tests import APIClient, APITestCase
from utils.constants import ArticleTypeChoice
from utils.shortcuts import rand_str
def mock_create_article(title=None, content=None, art_type=None, owner_id=None):
    title = title or rand_str(type='str')
    content = content or rand_str(type='str')
    art_type = art_type or ArticleTypeChoice[0][1]
    owner_id = owner_id or 1
    return create_article_db(title, content, art_type, owner_id)

class ArticleViewTest(APITestCase):
    def setUp(self):
        self.client = APIClient()

    def test_create_article_view(self):
        self.create_user('maxin', 'password', login=True)
        article = mock_create_article()
        result = self.client.get('/api/article/', data={'article_id': 1}).json()
        self.assertEqual(result['result'], 'successful')
        self.assertEqual(result['data']['title'], article.title)
| 34.928571 | 80 | 0.723926 | 133 | 978 | 5.12782 | 0.368421 | 0.095308 | 0.043988 | 0.038123 | 0.046921 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004884 | 0.162577 | 978 | 27 | 81 | 36.222222 | 0.827839 | 0.023517 | 0 | 0 | 0 | 0 | 0.070378 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 1 | 0.157895 | false | 0.052632 | 0.210526 | 0 | 0.473684 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8c63e675baf1b7e7f373786c7af3c27f18480a82 | 756 | py | Python | util/geometry.py | c-ali/boomgan | 0fd1d13149d8d5719d12aa36f09f46461ca29dbb | ["MIT"] | 3 | 2022-03-14T12:41:16.000Z | 2022-03-19T01:11:43.000Z | util/geometry.py | c-ali/boomgan | 0fd1d13149d8d5719d12aa36f09f46461ca29dbb | ["MIT"] | null | null | null | util/geometry.py | c-ali/boomgan | 0fd1d13149d8d5719d12aa36f09f46461ca29dbb | ["MIT"] | 1 | 2022-03-14T12:41:18.000Z | 2022-03-14T12:41:18.000Z |
import numpy as np
def orthogonalize(normal, non_ortho):
    # scalar projection onto the (unit) normal; subtracting that component
    # leaves a vector orthogonal to normal
    h = np.dot(normal, non_ortho)
    return non_ortho - normal * h

def make_orthonormal_vector(normal, dims=512):
    # random unit vector
    rand_dir = np.random.randn(dims)
    # make orthonormal
    result = orthogonalize(normal, rand_dir)
    return result / np.linalg.norm(result)

def random_circle(radius, ndim):
    '''Given a radius, parametrizes a random circle'''
    n1 = np.random.randn(ndim)
    n1 /= np.linalg.norm(n1)
    n2 = make_orthonormal_vector(n1, ndim)

    def circle(theta):
        return np.repeat(n1[None, :], theta.shape[0], axis=0) * np.cos(theta)[:, None] * radius + np.repeat(n2[None, :], theta.shape[0], axis=0) * np.sin(theta)[:, None] * radius
    return circle
| 29.076923 | 178 | 0.665344 | 109 | 756 | 4.522936 | 0.366972 | 0.048682 | 0.056795 | 0.060852 | 0.089249 | 0.089249 | 0.089249 | 0 | 0 | 0 | 0 | 0.023102 | 0.198413 | 756 | 25 | 179 | 30.24 | 0.790429 | 0.107143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.266667 | false | 0 | 0.066667 | 0.066667 | 0.6 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4fb2d5f6091ab479c98c350959da092722caf376 | 287 | py | Python | Exercicios/matriz.py | beatrizflorenccio/Projects-Python | fc584167a2816dc89f22baef0fa0f780af796c98 | ["MIT"] | 1 | 2021-10-10T08:18:45.000Z | 2021-10-10T08:18:45.000Z | Exercicios/matriz.py | beatrizflorenccio/Projects-Python | fc584167a2816dc89f22baef0fa0f780af796c98 | ["MIT"] | null | null | null | Exercicios/matriz.py | beatrizflorenccio/Projects-Python | fc584167a2816dc89f22baef0fa0f780af796c98 | ["MIT"] | null | null | null |
#MaBe
matriz = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

for l in range(0, 3):
    for c in range(0, 3):
        matriz[l][c] = int(input(f'Enter the value for position {(c, l)}: '))

for obj in range(0, 3):
    for i in range(0, 3):
        print(f'[{matriz[obj][i]}]', end=' ')
    print()
| 22.076923 | 74 | 0.477352 | 55 | 287 | 2.490909 | 0.381818 | 0.116788 | 0.153285 | 0.175182 | 0.240876 | 0.065693 | 0.065693 | 0.065693 | 0 | 0 | 0 | 0.082126 | 0.278746 | 287 | 12 | 75 | 23.916667 | 0.57971 | 0.013937 | 0 | 0 | 0 | 0 | 0.195035 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4fb2debb7f9b472451fb47af6368ed89d51ef079 | 968 | py | Python | iminuit/__init__.py | danielbrener/iminuit | e6d1cbc3d4d51e3556dbf95b569b62d6d9c38ad1 | ["MIT"] | 1 | 2018-10-02T14:52:37.000Z | 2018-10-02T14:52:37.000Z | iminuit/__init__.py | danielbrener/iminuit | e6d1cbc3d4d51e3556dbf95b569b62d6d9c38ad1 | ["MIT"] | null | null | null | iminuit/__init__.py | danielbrener/iminuit | e6d1cbc3d4d51e3556dbf95b569b62d6d9c38ad1 | ["MIT"] | null | null | null |
"""MINUIT from Python - Fitting like a boss

Basic usage example::

    from iminuit import Minuit

    def f(x, y, z):
        return (x - 2) ** 2 + (y - 3) ** 2 + (z - 4) ** 2

    m = Minuit(f)
    m.migrad()
    print(m.values)  # {'x': 2,'y': 3,'z': 4}
    print(m.errors)  # {'x': 1,'y': 1,'z': 1}

Further information:

* Code: https://github.com/iminuit/iminuit
* Docs: https://iminuit.readthedocs.io
"""
__all__ = [
    'Minuit',
    'minimize',
    'describe',
    'Struct',
    '__version__',
    'test',
]

from ._libiminuit import Minuit
from ._minimize import minimize
from .util import describe, Struct
from .info import __version__

def test(args=None):
    """Execute the iminuit tests.

    Requires pytest.

    From the command line:

        python -c 'import iminuit; iminuit.test()'
    """
    # http://pytest.org/latest/usage.html#calling-pytest-from-python-code
    import pytest
    args = ['-v', '--pyargs', 'iminuit']
    pytest.main(args)
| 20.595745 | 73 | 0.599174 | 129 | 968 | 4.387597 | 0.488372 | 0.035336 | 0.010601 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016173 | 0.233471 | 968 | 46 | 74 | 21.043478 | 0.746631 | 0.606405 | 0 | 0 | 0 | 0 | 0.17094 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.3125 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4fba29d360396207764ed0558aa4154a354cbfed | 764 | py | Python | facegram/profiles/serializers/v1/serializers.py | mabdullahadeel/facegram | f0eaa42008e876ae892b50f9f621a25b17cc70d5 | ["MIT"] | 1 | 2021-09-26T13:37:22.000Z | 2021-09-26T13:37:22.000Z | facegram/profiles/serializers/v1/serializers.py | mabdullahadeel/facegram | f0eaa42008e876ae892b50f9f621a25b17cc70d5 | ["MIT"] | 1 | 2021-08-08T22:04:39.000Z | 2021-08-08T22:04:39.000Z | facegram/profiles/serializers/v1/serializers.py | mabdullahadeel/facegram | f0eaa42008e876ae892b50f9f621a25b17cc70d5 | ["MIT"] | null | null | null |
from django.db.models import fields
from rest_framework import serializers
from facegram.profiles.models import Profile
from facegram.users.api.serializers import UserSerializer
class RetrieveUserProfileSerializerV1(serializers.ModelSerializer):
    user = UserSerializer(read_only=True)

    class Meta:
        model = Profile
        exclude = ("followers", "following", "up_votes", "down_votes")
        read_only_fields = ('id', 'follower_count', 'following_count')

class UpdateProfileSerializerV1(serializers.ModelSerializer):
    class Meta:
        model = Profile
        fields = ('profile_pic', 'bio', 'location', 'interests', 'skills')
        read_only_fields = ('id', 'follower_count', 'following_count', "up_votes", "down_votes")
        depth = 1
| 38.2 | 96 | 0.719895 | 81 | 764 | 6.604938 | 0.506173 | 0.04486 | 0.052336 | 0.078505 | 0.160748 | 0.160748 | 0.160748 | 0.160748 | 0 | 0 | 0 | 0.004739 | 0.171466 | 764 | 20 | 97 | 38.2 | 0.840442 | 0 | 0 | 0.25 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.5625 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4fbe12f19888e0666719f9bebc6d8719b923b7bc | 38,570 | py | Python | anchore/anchore_policy.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | ["Apache-2.0"] | 401 | 2016-06-16T15:29:48.000Z | 2022-03-24T10:05:16.000Z | anchore/anchore_policy.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | ["Apache-2.0"] | 63 | 2016-06-16T21:10:27.000Z | 2020-07-01T06:57:27.000Z | anchore/anchore_policy.py | berez23/anchore | 594cce23f1d87d666397653054c22c2613247734 | ["Apache-2.0"] | 64 | 2016-06-16T13:05:57.000Z | 2021-07-16T10:03:45.000Z |
import os
import json
import re
import sys
import logging
import hashlib
import uuid
import jsonschema
import tempfile
import controller
import anchore_utils
import anchore_auth
from anchore.util import contexts
_logger = logging.getLogger(__name__)
default_policy_version = '1_0'
default_whitelist_version = '1_0'
default_bundle_version = '1_0'
supported_whitelist_versions = [default_whitelist_version]
supported_bundle_versions = [default_bundle_version]
supported_policy_versions = [default_bundle_version]
# interface operations
def check():
    if not load_policymeta():
        return (False, "policys are not initialized: please run 'anchore policys sync' and try again")
    return (True, "success")
def sync_policymeta(bundlefile=None, outfile=None):
    ret = {'success': False, 'text': "", 'status_code': 1}

    policyurl = contexts['anchore_config']['policy_url']
    policy_timeout = contexts['anchore_config']['policy_conn_timeout']
    policy_maxretries = contexts['anchore_config']['policy_max_retries']

    policymeta = {}

    if bundlefile:
        if not os.path.exists(bundlefile):
            ret['text'] = "no such file ("+str(bundlefile)+")"
            return(False, ret)
        try:
            with open(bundlefile, 'r') as FH:
                policymeta = json.loads(FH.read())
        except Exception as err:
            ret['text'] = "synced policy bundle cannot be read/is not valid JSON: exception - " + str(err)
            return(False, ret)
    else:
        record = anchore_auth.anchore_auth_get(contexts['anchore_auth'], policyurl, timeout=policy_timeout, retries=policy_maxretries)
        if record['success']:
            try:
                bundleraw = json.loads(record['text'])
                policymeta = bundleraw['bundle']
            except Exception as err:
                ret['text'] = 'failed to parse bundle response from service - exception: ' + str(err)
                return(False, ret)
        else:
            _logger.debug("failed to download policybundle: message from server - " + str(record))
            themsg = "unspecified failure while attempting to download bundle from anchore.io"
            try:
                if record['status_code'] == 404:
                    themsg = "no policy bundle found on anchore.io - please create and save a policy using the policy editor in anchore.io and try again"
                elif record['status_code'] == 401:
                    themsg = "cannot download a policy bundle from anchore.io - current user does not have access rights to download custom policies"
            except Exception as err:
                themsg = "exception while inspecting response from server - exception: " + str(err)

            ret['text'] = "failed to download policybundle: " + str(themsg)
            return(False, ret)

    if not verify_policy_bundle(bundle=policymeta):
        _logger.debug("downloaded policy bundle failed to verify: " + str(policymeta))
        ret['text'] = "input policy bundle does not conform to policy bundle schema"
        return(False, ret)

    if outfile:
        if outfile != '-':
            try:
                with open(outfile, 'w') as OFH:
                    OFH.write(json.dumps(policymeta))
            except Exception as err:
                ret['text'] = "could not write downloaded policy bundle to specified file ("+str(outfile)+") - exception: " + str(err)
                return(False, ret)
    else:
        if not contexts['anchore_db'].save_policymeta(policymeta):
            ret['text'] = "cannot get list of policies from service\nMessage from server: " + record['text']
            return (False, ret)

    if policymeta:
        ret['text'] = json.dumps(policymeta, indent=4)

    return(True, ret)
def load_policymeta(policymetafile=None):
    ret = {}
    if policymetafile:
        with open(policymetafile, 'r') as FH:
            ret = json.loads(FH.read())
    else:
        ret = contexts['anchore_db'].load_policymeta()
        if not ret:
            # use the system default
            default_policy_bundle_file = os.path.join(contexts['anchore_config'].config_dir, 'anchore_default_bundle.json')
            try:
                if os.path.exists(default_policy_bundle_file):
                    with open(default_policy_bundle_file, 'r') as FH:
                        ret = json.loads(FH.read())
                else:
                    raise Exception("no such file: " + str(default_policy_bundle_file))
            except Exception as err:
                _logger.warn("could not load default bundle (" + str(default_policy_bundle_file) + ") - exception: " + str(err))
                raise err

    return(ret)

def save_policymeta(policymeta):
    return(contexts['anchore_db'].save_policymeta(policymeta))
# bundle
# Convert
def convert_to_policy_bundle(name="default", version=default_bundle_version, policy_file=None, policy_version=default_policy_version, whitelist_files=[], whitelist_version=default_whitelist_version):
    policies = {}
    p = read_policy(name=str(uuid.uuid4()), file=policy_file)
    policies.update(p)

    whitelists = {}
    for wf in whitelist_files:
        w = read_whitelist(name=str(uuid.uuid4()), file=wf)
        whitelists.update(w)

    # mappings must be initialized before the default mapping is appended
    mappings = []
    m = create_mapping(map_name="default", policy_name=policies.keys()[0], whitelists=whitelists.keys(), repotagstring='*/*:*')
    mappings.append(m)

    bundle = create_policy_bundle(name='default', policies=policies, policy_version=policy_version, whitelists=whitelists, whitelist_version=whitelist_version, mappings=mappings)
    if not verify_policy_bundle(bundle=bundle):
        return({})

    return(bundle)
# C
def create_policy_bundle(name=None, version=default_bundle_version, policies={}, policy_version=default_policy_version, whitelists={}, whitelist_version=default_whitelist_version, mappings=[]):
    ret = {
        'id': str(uuid.uuid4()),
        'name':name,
        'version':version,
        'policies':[],
        'whitelists':[],
        'mappings':[]
    }

    for f in policies:
        el = {
            'version':policy_version,
            'id':f,
            'name':f,
            'rules':[]
        }

        el['rules'] = unformat_policy_data(policies[f])
        ret['policies'].append(el)

    for f in whitelists:
        el = {
            'version':whitelist_version,
            'id':f,
            'name':f,
            'items':[]
        }

        el['items'] = unformat_whitelist_data(whitelists[f])
        ret['whitelists'].append(el)

    for m in mappings:
        ret['mappings'].append(m)

    _logger.debug("created bundle: ("+str(name)+") : " + json.dumps(ret.keys(), indent=4))

    return(ret)
# R
def read_policy_bundle(bundle_file=None):
    ret = {}
    with open(bundle_file, 'r') as FH:
        ret = json.loads(FH.read())
        cleanstr = json.dumps(ret).encode('utf8')
        ret = json.loads(cleanstr)

    if not verify_policy_bundle(bundle=ret):
        raise Exception("input bundle does not conform to bundle schema")

    return(ret)
# V
def verify_policy_bundle(bundle={}):
    bundle_schema = {}

    try:
        bundle_schema_file = os.path.join(contexts['anchore_config']['pkg_dir'], 'schemas', 'anchore-bundle.schema')
    except:
        from pkg_resources import Requirement, resource_filename
        bundle_schema_file = os.path.join(resource_filename("anchore", ""), 'schemas', 'anchore-bundle.schema')

    try:
        if os.path.exists(bundle_schema_file):
            with open(bundle_schema_file, "r") as FH:
                bundle_schema = json.loads(FH.read())
    except Exception as err:
        _logger.error("could not load bundle schema: " + str(bundle_schema_file))
        return(False)

    if not bundle_schema:
        _logger.error("could not load bundle schema: " + str(bundle_schema_file))
        return(False)
    else:
        try:
            jsonschema.validate(bundle, schema=bundle_schema)
        except Exception as err:
            _logger.error("could not validate bundle against schema: " + str(err))
            return(False)

    return(True)
# U
def update_policy_bundle(bundle={}, name=None, policies={}, whitelists={}, mappings={}):
    if not verify_policy_bundle(bundle=bundle):
        raise Exception("input bundle is incomplete - cannot update bad bundle: " + json.dumps(bundle, indent=4))

    ret = {}
    ret.update(bundle)

    new_bundle = create_policy_bundle(name=name, policies=policies, whitelists=whitelists, mappings=mappings)
    for key in ['name', 'policies', 'whitelists', 'mappings']:
        if new_bundle[key]:
            ret[key] = new_bundle.pop(key, ret[key])

    return(ret)
# SAVE
def write_policy_bundle(bundle_file=None, bundle={}):
    if not verify_policy_bundle(bundle=bundle):
        raise Exception("cannot verify input policy bundle, skipping write: " + str(bundle_file))

    with open(bundle_file, 'w') as OFH:
        OFH.write(json.dumps(bundle))

    return(True)
# mapping
# C
def create_mapping(map_name=None, policy_name=None, whitelists=[], repotagstring=None):
    ret = {}

    ret['name'] = map_name
    ret['policy_id'] = policy_name
    ret['whitelist_ids'] = whitelists

    image_info = anchore_utils.get_all_image_info(repotagstring)
    registry = image_info.pop('registry', "N/A")
    repo = image_info.pop('repo', "N/A")
    tag = image_info.pop('tag', "N/A")
    imageId = image_info.pop('imageId', "N/A")
    digest = image_info.pop('digest', "N/A")

    ret['registry'] = registry
    ret['repository'] = repo
    ret['image'] = {
        'type':'tag',
        'value':tag
    }
    ret['id'] = str(uuid.uuid4())

    return(ret)
# policy/wl
# V
def verify_whitelist(whitelistdata=[], version=default_whitelist_version):
    ret = True

    if not isinstance(whitelistdata, list):
        ret = False

    if version in supported_whitelist_versions:
        # do 1_0 format/checks
        pass

    return(ret)
# R
def read_whitelist(name=None, file=None, version=default_whitelist_version):
    if not name:
        raise Exception("bad input: " + str(name) + " : " + str(file))

    if file:
        if not os.path.exists(file):
            raise Exception("input file does not exist: " + str(file))

        wdata = anchore_utils.read_plainfile_tolist(file)
        if not verify_whitelist(whitelistdata=wdata, version=version):
            raise Exception("cannot verify whitelist data read from file as valid")
    else:
        wdata = []

    ret = {}
    ret[name] = wdata

    return(ret)
def structure_whitelist(whitelistdata):
    ret = []

    for item in whitelistdata:
        try:
            (k, v) = re.match("([^\s]*)\s*([^\s]*)", item).group(1, 2)
            if not re.match("^\s*#.*", k):
                ret.append([k, v])
        except Exception as err:
            pass

    return(ret)
def unformat_whitelist_data(wldata):
    ret = []

    whitelists = structure_whitelist(wldata)
    for wlitem in whitelists:
        gate, triggerId = wlitem
        el = {
            'gate':gate,
            'trigger_id':triggerId,
            'id':str(uuid.uuid4())
        }
        ret.append(el)

    return(ret)
def format_whitelist_data(wldata):
    ret = []
    version = wldata['version']
    if wldata['version'] == default_whitelist_version:
        for item in wldata['items']:
            ret.append(' '.join([item['gate'], item['trigger_id']]))
    else:
        raise Exception("detected whitelist version format in bundle not supported: " + str(version))

    return(ret)
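For reference, a minimal standalone sketch (Python 3) of the raw "GATE TRIGGER_ID" whitelist line format that structure_whitelist() and format_whitelist_data() convert between; the gate and CVE identifiers below are illustrative only, not taken from a real bundle:

```python
import re

# One raw whitelist line: gate name, whitespace, trigger id (hypothetical values).
line = "ANCHORESEC CVE-2016-1234"

# Same regex shape as structure_whitelist(): first token, optional gap, second token.
gate, trigger_id = re.match(r"([^\s]*)\s*([^\s]*)", line).group(1, 2)
print(gate, trigger_id)

# format_whitelist_data() reverses this by rejoining the pair with a space.
rejoined = " ".join([gate, trigger_id])
print(rejoined == line)
```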
def extract_whitelist_data(bundle, wlid):
    for wl in bundle['whitelists']:
        if wlid == wl['id']:
            return(format_whitelist_data(wl))
# R
def read_policy(name=None, file=None, version=default_bundle_version):
    if not name or not file:
        raise Exception("input error")

    if not os.path.exists(file):
        raise Exception("input file does not exist: " + str(file))

    pdata = anchore_utils.read_plainfile_tolist(file)
    if not verify_policy(policydata=pdata, version=version):
        raise Exception("cannot verify policy data read from file as valid")

    ret = {}
    ret[name] = pdata

    return(ret)
def structure_policy(policydata):
    policies = {}

    for l in policydata:
        l = l.strip()
        patt = re.compile('^\s*#')
        if (l and not patt.match(l)):
            polinput = l.split(':')
            module = polinput[0]
            check = polinput[1]
            action = polinput[2]
            modparams = ""
            if (len(polinput) > 3):
                modparams = ':'.join(polinput[3:])

            if module not in policies:
                policies[module] = {}

            if check not in policies[module]:
                policies[module][check] = {}

            if 'aptups' not in policies[module][check]:
                policies[module][check]['aptups'] = []

            aptup = [action, modparams]
            if aptup not in policies[module][check]['aptups']:
                policies[module][check]['aptups'].append(aptup)

            policies[module][check]['action'] = action
            policies[module][check]['params'] = modparams

    return(policies)
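A self-contained sketch (Python 3) of the raw policy line format structure_policy() parses: "GATE:TRIGGER:ACTION[:params]", where everything after the third colon is rejoined so parameter values may themselves contain colons. The gate, trigger, and parameter names below are made up for illustration:

```python
# Hypothetical raw policy line in the "gate:trigger:action:params" shape.
line = "DOCKERFILECHECK:NOHEALTHCHECK:WARN:allowedports=80,443"

parts = line.split(':')
module, check, action = parts[0], parts[1], parts[2]
# Rejoin any remaining pieces, matching structure_policy()'s handling of extra ':'.
modparams = ':'.join(parts[3:]) if len(parts) > 3 else ""
print(module, check, action, modparams)
```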
# return a given policyId from a bundle in raw poldata format
def extract_policy_data(bundle, polid):
    for pol in bundle['policies']:
        if polid == pol['id']:
            return(format_policy_data(pol))
# convert from policy bundle policy format to raw poldata format
def format_policy_data(poldata):
    ret = []
    version = poldata['version']
    if poldata['version'] == default_policy_version:
        for item in poldata['rules']:
            polline = ':'.join([item['gate'], item['trigger'], item['action'], ""])

            if 'params' in item:
                for param in item['params']:
                    polline = polline + param['name'] + '=' + param['value'] + " "
            ret.append(polline)
    else:
        raise Exception("detected policy version format in bundle not supported: " + str(version))

    return(ret)
# convert from raw poldata format to bundle format
def unformat_policy_data(poldata):
    ret = []
    policies = structure_policy(poldata)

    for gate in policies.keys():
        try:
            for trigger in policies[gate].keys():
                action = policies[gate][trigger]['action']
                params = policies[gate][trigger]['params']

                el = {
                    'gate':gate,
                    'trigger':trigger,
                    'action':action,
                    'params':[]
                }

                for p in params.split():
                    (k, v) = p.split("=")
                    el['params'].append({'name':k, 'value':v})

                ret.append(el)
        except Exception as err:
            print str(err)
            pass

    return(ret)
# V
def verify_policy(policydata=[], version=default_policy_version):
    ret = True

    if not isinstance(policydata, list):
        ret = False

    if version in supported_policy_versions:
        # do 1_0 format/checks
        pass

    return(ret)
def run_bundle(anchore_config=None, bundle={}, image=None, matchtags=[], stateless=False, show_whitelisted=True, show_triggerIds=True):
    retecode = 0

    if not anchore_config or not bundle or not image:
        raise Exception("input error")

    if not verify_policy_bundle(bundle=bundle):
        raise Exception("input bundle does not conform to bundle schema")

    imageId = anchore_utils.discover_imageId(image)
    digests = []

    if not matchtags:
        matchtags = [image]

    evalmap = {}
    evalresults = {}
    for matchtag in matchtags:
        _logger.info("evaluating tag: " + str(matchtag))

        mapping_results = get_mapping_actions(image=matchtag, imageId=imageId, in_digests=digests, bundle=bundle)
        for pol, wl, polname, wlnames, mapmatch, match_json, evalhash in mapping_results:
            evalmap[matchtag] = evalhash
            _logger.debug("attempting eval: " + evalhash + " : " + matchtag)
            if evalhash not in evalresults:
                fnames = {}
                try:
                    if stateless:
                        policies = structure_policy(pol)
                        whitelists = structure_whitelist(wl)
                        rc = execute_gates(imageId, policies)
                        result, fullresult = evaluate_gates_results(imageId, policies, {}, whitelists)
                        eval_result = structure_eval_results(imageId, fullresult, show_whitelisted=show_whitelisted, show_triggerIds=show_triggerIds, imageName=matchtag)
                        gate_result = {}
                        gate_result[imageId] = eval_result
                    else:
                        con = controller.Controller(anchore_config=anchore_config, imagelist=[imageId], allimages=contexts['anchore_allimages'], force=True)
                        for (fname, data) in [('tmppol', pol), ('tmpwl', wl)]:
                            fh, thefile = tempfile.mkstemp(dir=anchore_config['tmpdir'])
                            fnames[fname] = thefile
                            try:
                                with open(thefile, 'w') as OFH:
                                    for l in data:
                                        OFH.write(l + "\n")
                            except Exception as err:
                                raise err
                            finally:
                                os.close(fh)

                        gate_result = con.run_gates(policy=fnames['tmppol'], global_whitelist=fnames['tmpwl'], show_triggerIds=show_triggerIds, show_whitelisted=show_whitelisted)

                    evalel = {
                        'results': list(),
                        'policy_name':"N/A",
                        'whitelist_names':"N/A",
                        'policy_data':list(),
                        'whitelist_data':list(),
                        'mapmatch':"N/A",
                        'matched_mapping_rule': {}
                    }

                    evalel['results'] = gate_result
                    evalel['policy_name'] = polname
                    evalel['whitelist_names'] = wlnames
                    evalel['policy_data'] = pol
                    evalel['whitelist_data'] = wl
                    evalel['mapmatch'] = mapmatch
                    evalel['matched_mapping_rule'] = match_json

                    _logger.debug("caching eval result: " + evalhash + " : " + matchtag)
                    evalresults[evalhash] = evalel

                    ecode = result_get_highest_action(gate_result)
                    if ecode == 1:
                        retecode = 1
                    elif retecode == 0 and ecode > retecode:
                        retecode = ecode
                except Exception as err:
                    _logger.error("policy evaluation error: " + str(err))
                finally:
                    for f in fnames.keys():
                        if os.path.exists(fnames[f]):
                            os.remove(fnames[f])
            else:
                _logger.debug("skipping eval, result already cached: " + evalhash + " : " + matchtag)

    ret = {}
    for matchtag in matchtags:
        ret[matchtag] = {}
        ret[matchtag]['bundle_name'] = bundle['name']
        try:
            evalresult = evalresults[evalmap[matchtag]]
            ret[matchtag]['evaluations'] = [evalresult]
        except Exception as err:
            raise err

    return(ret, retecode)
def result_get_highest_action(results):
    highest_action = 0
    for k in results.keys():
        action = results[k]['result']['final_action']
        if action == 'STOP':
            highest_action = 1
        elif highest_action == 0 and action == 'WARN':
            highest_action = 2

    return(highest_action)
def get_mapping_actions(image=None, imageId=None, in_digests=[], bundle={}):
    """
    Given an image, image_id, digests, and a bundle, determine which policies and whitelists to evaluate.

    :param image: Image obj
    :param imageId: image id string
    :param in_digests: candidate digests
    :param bundle: bundle dict to evaluate
    :return: tuple of (policy_data, whitelist_data, policy_name, whitelist_names, matchstring, mapping_rule_json obj, evalhash)
    """
    if not image or not bundle:
        raise Exception("input error")

    if not verify_policy_bundle(bundle=bundle):
        raise Exception("input bundle does not conform to bundle schema")

    ret = []

    image_infos = []

    image_info = anchore_utils.get_all_image_info(image)
    if image_info and image_info not in image_infos:
        image_infos.append(image_info)

    for m in bundle['mappings']:
        polname = m['policy_id']
        wlnames = m['whitelist_ids']

        for image_info in image_infos:
            #_logger.info("IMAGE INFO: " + str(image_info))
            ii = {}
            ii.update(image_info)

            registry = ii.pop('registry', "N/A")
            repo = ii.pop('repo', "N/A")

            # initialize tag so the match checks below are safe even when no
            # fulltag is present in the image info
            tag = None
            tags = []
            fulltag = ii.pop('fulltag', "N/A")
            if fulltag != 'N/A':
                tinfo = anchore_utils.parse_dockerimage_string(fulltag)
                if 'tag' in tinfo and tinfo['tag']:
                    tag = tinfo['tag']

            for t in [image, fulltag]:
                tinfo = anchore_utils.parse_dockerimage_string(t)
                if 'tag' in tinfo and tinfo['tag'] and tinfo['tag'] not in tags:
                    tags.append(tinfo['tag'])

            digest = ii.pop('digest', "N/A")
            digests = [digest]
            for d in image_info['digests']:
                dinfo = anchore_utils.parse_dockerimage_string(d)
                if 'digest' in dinfo and dinfo['digest']:
                    digests.append(dinfo['digest'])

            p_ids = []
            p_names = []
            for p in bundle['policies']:
                p_ids.append(p['id'])
                p_names.append(p['name'])

            wl_ids = []
            wl_names = []
            for wl in bundle['whitelists']:
                wl_ids.append(wl['id'])
                wl_names.append(wl['name'])

            if polname not in p_ids:
                _logger.info("policy not in bundle: " + str(polname))
                continue

            skip = False
            for wlname in wlnames:
                if wlname not in wl_ids:
                    _logger.info("whitelist not in bundle: " + str(wlname))
                    skip = True
            if skip:
                continue

            mname = m['name']
            mregistry = m['registry']
            mrepo = m['repository']
            if m['image']['type'] == 'tag':
                mtag = m['image']['value']
                mdigest = None
                mimageId = None
            elif m['image']['type'] == 'digest':
                mdigest = m['image']['value']
                mtag = None
                mimageId = None
            elif m['image']['type'] == 'id':
                mimageId = m['image']['value']
                mtag = None
                mdigest = None
            else:
                mtag = mdigest = mimageId = None

            mregistry_rematch = mregistry
            mrepo_rematch = mrepo
            mtag_rematch = mtag
            try:
                matchtoks = []
                for tok in mregistry.split("*"):
                    matchtoks.append(re.escape(tok))
                mregistry_rematch = "^" + '(.*)'.join(matchtoks) + "$"

                matchtoks = []
                for tok in mrepo.split("*"):
                    matchtoks.append(re.escape(tok))
                mrepo_rematch = "^" + '(.*)'.join(matchtoks) + "$"

                matchtoks = []
                for tok in mtag.split("*"):
                    matchtoks.append(re.escape(tok))
                mtag_rematch = "^" + '(.*)'.join(matchtoks) + "$"
            except Exception as err:
                _logger.error("could not set up regular expression matches for mapping check - exception: " + str(err))

            _logger.debug("matchset: " + str([mregistry_rematch, mrepo_rematch, mtag_rematch]) + " : " + str([mregistry, mrepo, mtag]) + " : " + str([registry, repo, tag, tags]))

            if registry == mregistry or mregistry == '*' or re.match(mregistry_rematch, registry):
                _logger.debug("checking mapping for image ("+str(image_info)+") match.")

                if repo == mrepo or mrepo == '*' or re.match(mrepo_rematch, repo):
                    doit = False
                    matchstring = mname + ": N/A"
                    if tag:
                        if False and (mtag == tag or mtag == '*' or mtag in tags or re.match(mtag_rematch, tag)):
                            matchstring = mname + ":" + ','.join([mregistry, mrepo, mtag])
                            doit = True
                        else:
                            for t in tags:
                                if re.match(mtag_rematch, t):
                                    matchstring = mname + ":" + ','.join([mregistry, mrepo, mtag])
                                    doit = True
                                    break

                    if not doit and (digest and (mdigest == digest or mdigest in in_digests or mdigest in digests)):
                        matchstring = mname + ":" + ','.join([mregistry, mrepo, mdigest])
                        doit = True

                    if not doit and (imageId and (mimageId == imageId)):
                        matchstring = mname + ":" + ','.join([mregistry, mrepo, mimageId])
                        doit = True

                    matchstring = matchstring.encode('utf8')
                    if doit:
                        _logger.debug("match found for image ("+str(image_info)+") matchstring ("+str(matchstring)+")")

                        wldata = []
                        wldataset = set()
                        for wlname in wlnames:
                            wldataset = set(list(wldataset) + extract_whitelist_data(bundle, wlname))
                        wldata = list(wldataset)

                        poldata = extract_policy_data(bundle, polname)

                        wlnames.sort()
                        evalstr = ','.join([polname] + wlnames)
                        evalhash = hashlib.md5(evalstr).hexdigest()
                        ret.append( (poldata, wldata, polname, wlnames, matchstring, m, evalhash) )
                        return(ret)
                    else:
                        _logger.debug("no match found for image ("+str(image_info)+") match.")
                else:
                    _logger.debug("no match found for image ("+str(image_info)+") match.")
            else:
                _logger.debug("no match found for image ("+str(image_info)+") match.")

    return(ret)
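The mapping check above turns '*' wildcards in a rule's registry/repository/tag fields into anchored regular expressions before matching. A standalone Python 3 sketch of that conversion (the pattern and image strings below are hypothetical examples, not real mapping rules):

```python
import re

def glob_to_regex(pattern):
    # Escape the literal pieces, turn each '*' into '(.*)', and anchor both
    # ends -- the same shape as the matchtoks loops in get_mapping_actions().
    return "^" + "(.*)".join(re.escape(tok) for tok in pattern.split("*")) + "$"

# '*/*:*' matches any registry/repo:tag string.
print(bool(re.match(glob_to_regex("*/*:*"), "docker.io/library/alpine:latest")))  # True

# A prefix glob only matches strings that start with the literal part.
print(bool(re.match(glob_to_regex("myreg*"), "otherreg.example.com")))  # False
```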
def execute_gates(imageId, policies, refresh=True):
    import random

    success = True
    anchore_config = contexts['anchore_config']

    imagename = imageId
    gatesdir = '/'.join([anchore_config["scripts_dir"], "gates"])
    workingdir = '/'.join([anchore_config['anchore_data_dir'], 'querytmp'])
    outputdir = workingdir

    _logger.info(imageId + ": evaluating policies...")

    for d in [outputdir, workingdir]:
        if not os.path.exists(d):
            os.makedirs(d)

    imgfile = '/'.join([workingdir, "queryimages." + str(random.randint(0, 99999999))])
    anchore_utils.write_plainfile_fromstr(imgfile, imageId)

    try:
        gmanifest, failedgates = anchore_utils.generate_gates_manifest()
        if failedgates:
            _logger.error("some gates failed to run - check the gate(s) modules for errors: " + str(','.join(failedgates)))
            success = False
        else:
            success = True
            for gatecheck in policies.keys():
                # get all commands that match the gatecheck
                gcommands = []
                for gkey in gmanifest.keys():
                    if gmanifest[gkey]['gatename'] == gatecheck:
                        gcommands.append(gkey)

                # assemble the params from the input policy for this gatecheck
                params = []
                for trigger in policies[gatecheck].keys():
                    if 'params' in policies[gatecheck][trigger] and policies[gatecheck][trigger]['params']:
                        params.append(policies[gatecheck][trigger]['params'])

                if not params:
                    params = ['all']

                if gcommands:
                    for command in gcommands:
                        cmd = [command] + [imgfile, anchore_config['image_data_store'], outputdir] + params
                        _logger.debug("running gate command: " + str(' '.join(cmd)))

                        (rc, sout, cmdstring) = anchore_utils.run_command(cmd)
                        if rc:
                            _logger.error("FAILED")
                            _logger.error("\tCMD: " + str(cmdstring))
                            _logger.error("\tEXITCODE: " + str(rc))
                            _logger.error("\tOUTPUT: " + str(sout))
                            success = False
                        else:
                            _logger.debug("")
                            _logger.debug("\tCMD: " + str(cmdstring))
                            _logger.debug("\tEXITCODE: " + str(rc))
                            _logger.debug("\tOUTPUT: " + str(sout))
                            _logger.debug("")
                else:
                    _logger.warn("WARNING: gatecheck ("+str(gatecheck)+") line in policy, but no gates were found that match this gatecheck")
    except Exception as err:
        _logger.error("gate evaluation failed - exception: " + str(err))
    finally:
        if imgfile and os.path.exists(imgfile):
            try:
                os.remove(imgfile)
            except:
                _logger.error("could not remove tempfile: " + str(imgfile))

    if success:
        report = generate_gates_report(imageId)
        contexts['anchore_db'].save_gates_report(imageId, report)
        _logger.info(imageId + ": evaluated.")

    return(success)
def generate_gates_report(imageId):
    # this routine reads the results of image gates and generates a formatted report
    report = {}

    outputs = contexts['anchore_db'].list_gate_outputs(imageId)
    for d in outputs:
        report[d] = contexts['anchore_db'].load_gate_output(imageId, d)

    return(report)
def evaluate_gates_results(imageId, policies, image_whitelist, global_whitelist):
    ret = list()
    fullret = list()
    final_gate_action = 'GO'

    for m in policies.keys():
        gdata = contexts['anchore_db'].load_gate_output(imageId, m)
        for l in gdata:
            (k, v) = re.match('(\S*)\s*(.*)', l).group(1, 2)
            imageId = imageId
            check = m
            trigger = k
            output = v
            triggerId = hashlib.md5(''.join([check, trigger, output])).hexdigest()

            # if the output is structured (i.e. decoded as an
            # anchore compatible json string) then extract the
            # elements for display
            try:
                json_output = json.loads(output)
                if 'id' in json_output:
                    triggerId = str(json_output['id'])
                if 'desc' in json_output:
                    output = str(json_output['desc'])
            except:
                pass

            if k in policies[m]:
                trigger = k
                action = policies[check][trigger]['action']

                r = {'imageId':imageId, 'check':check, 'triggerId':triggerId, 'trigger':trigger, 'output':output, 'action':action}
                # this is where whitelist check should go
                whitelisted = False
                whitelist_type = "none"

                if global_whitelist and ([m, triggerId] in global_whitelist):
                    whitelisted = True
                    whitelist_type = "global"
                elif image_whitelist and 'ignore' in image_whitelist and (r in image_whitelist['ignore']):
                    whitelisted = True
                    whitelist_type = "image"
                else:
                    # look for prefix wildcards
                    try:
                        for [gmod, gtriggerId] in global_whitelist:
                            if gmod == m:
                                # special case for backward compat
                                try:
                                    if gmod == 'ANCHORESEC' and not re.match(".*\*.*", gtriggerId) and re.match("^CVE.*|^RHSA.*", gtriggerId):
                                        gtriggerId = gtriggerId + "*"
                                except Exception as err:
                                    _logger.warn("problem with backward compat modification of whitelist trigger - exception: " + str(err))

                                matchtoks = []
                                for tok in gtriggerId.split("*"):
                                    matchtoks.append(re.escape(tok))
                                rematch = "^" + '(.*)'.join(matchtoks) + "$"
                                _logger.debug("checking regexp wl<->triggerId for match: " + str(rematch) + " : " + str(triggerId))
                                if re.match(rematch, triggerId):
                                    _logger.debug("found wildcard whitelist match")
                                    whitelisted = True
                                    whitelist_type = "global"
                                    break
                    except Exception as err:
                        _logger.warn("problem with prefix wildcard match routine - exception: " + str(err))

                fullr = {}
                fullr.update(r)
                fullr['whitelisted'] = whitelisted
                fullr['whitelist_type'] = whitelist_type
                fullret.append(fullr)

                if not whitelisted:
                    if policies[m][k]['action'] == 'STOP':
                        final_gate_action = 'STOP'
                    elif final_gate_action != 'STOP' and policies[m][k]['action'] == 'WARN':
                        final_gate_action = 'WARN'
                    ret.append(r)
                else:
                    # whitelisted, skip evaluation
                    pass

    ret.append({'imageId':imageId, 'check':'FINAL', 'trigger':'FINAL', 'output':"", 'action':final_gate_action})
    fullret.append({'imageId':imageId, 'check':'FINAL', 'trigger':'FINAL', 'output':"", 'action':final_gate_action, 'whitelisted':False, 'whitelist_type':"none", 'triggerId':"N/A"})

    return(ret, fullret)
def structure_eval_results(imageId, evalresults, show_triggerIds=False, show_whitelisted=False, imageName=None):
    if not imageName:
        imageName = imageId

    record = {}
    record['result'] = {}
    record['result']['header'] = ['Image_Id', 'Repo_Tag']
    if show_triggerIds:
        record['result']['header'].append('Trigger_Id')
    record['result']['header'] += ['Gate', 'Trigger', 'Check_Output', 'Gate_Action']
    if show_whitelisted:
        record['result']['header'].append('Whitelisted')

    record['result']['rows'] = list()
    for m in evalresults:
        id = imageId
        name = imageName
        gate = m['check']
        trigger = m['trigger']
        output = m['output']
        triggerId = m['triggerId']
        action = m['action']
        row = [id[0:12], name]
        if show_triggerIds:
            row.append(triggerId)
        row += [gate, trigger, output, action]
        if show_whitelisted:
            row.append(m['whitelist_type'])

        if not m['whitelisted'] or show_whitelisted:
            record['result']['rows'].append(row)

        if gate == 'FINAL':
            record['result']['final_action'] = action

    return(record)
# small test
if __name__ == '__main__':
    from anchore.configuration import AnchoreConfiguration
    config = AnchoreConfiguration(cliargs={})
    anchore_utils.anchore_common_context_setup(config)

    policies = {}
    whitelists = {}
    mappings = []

    pol0 = read_policy(name=str(uuid.uuid4()), file='/root/.anchore/conf/anchore_gate.policy')
    pol1 = read_policy(name=str(uuid.uuid4()), file='/root/.anchore/conf/anchore_gate.policy')
    policies.update(pol0)
    policies.update(pol1)

    gl0 = read_whitelist(name=str(uuid.uuid4()))
    wl0 = read_whitelist(name=str(uuid.uuid4()), file='/root/wl0')
    whitelists.update(gl0)
    whitelists.update(wl0)

    map0 = create_mapping(map_name="default", policy_name=policies.keys()[0], whitelists=whitelists.keys(), repotagstring='*/*:*')
    mappings.append(map0)

    bundle = create_policy_bundle(name='default', policies=policies, policy_version=default_policy_version, whitelists=whitelists, whitelist_version=default_whitelist_version, mappings=mappings)
    print "CREATED BUNDLE: " + json.dumps(bundle, indent=4)

    rc = write_policy_bundle(bundle_file="/tmp/bun.json", bundle=bundle)
    newbun = read_policy_bundle(bundle_file="/tmp/bun.json")
    if newbun != bundle:
        print "BUNDLE RESULT DIFFERENT AFTER SAVE/LOAD"

    thebun = convert_to_policy_bundle(name='default', policy_file='/root/.anchore/conf/anchore_gate.policy', policy_version=default_policy_version, whitelist_files=['/root/wl0'], whitelist_version=default_whitelist_version)
    rc = write_policy_bundle(bundle_file="/tmp/bun1.json", bundle=thebun)

    pol0 = read_policy(name="meh", file='/root/.anchore/conf/anchore_gate.policy')
    policies = structure_policy(pol0['meh'])

    #rc = execute_gates("4a415e3663882fbc554ee830889c68a33b3585503892cc718a4698e91ef2a526", policies)

    result, image_ecode = run_bundle(anchore_config=config, image='alpine', matchtags=[], bundle=thebun)
    with open("/tmp/a", 'w') as OFH:
        OFH.write(json.dumps(result, indent=4))

    try:
        result, image_ecode = run_bundle_stateless(anchore_config=config, image='alpine', matchtags=[], bundle=thebun)
        with open("/tmp/b", 'w') as OFH:
            OFH.write(json.dumps(result, indent=4))
    except Exception as err:
        import traceback
        traceback.print_exc()
        print str(err)
# kitsune/gallery/models.py
# (repo: jgmize/kitsune, license: BSD-3-Clause)
from datetime import datetime
from django.conf import settings
from django.contrib.auth.models import User
from django.db import models
from kitsune.sumo.models import ModelBase, LocaleField
from kitsune.sumo.urlresolvers import reverse
from kitsune.sumo.utils import auto_delete_files
class Media(ModelBase):
    """Generic model for media"""
    title = models.CharField(max_length=255, db_index=True)
    created = models.DateTimeField(default=datetime.now, db_index=True)
    updated = models.DateTimeField(default=datetime.now, db_index=True)
    updated_by = models.ForeignKey(User, null=True)
    description = models.TextField(max_length=10000)
    locale = LocaleField(default=settings.GALLERY_DEFAULT_LANGUAGE,
                         db_index=True)
    is_draft = models.NullBooleanField(default=None, null=True, editable=False)

    class Meta(object):
        abstract = True
        ordering = ['-created']
        unique_together = (('locale', 'title'), ('is_draft', 'creator'))

    def __unicode__(self):
        return '[%s] %s' % (self.locale, self.title)


@auto_delete_files
class Image(Media):
    creator = models.ForeignKey(User, related_name='gallery_images')
    file = models.ImageField(upload_to=settings.GALLERY_IMAGE_PATH,
                             max_length=settings.MAX_FILEPATH_LENGTH)
    thumbnail = models.ImageField(
        upload_to=settings.GALLERY_IMAGE_THUMBNAIL_PATH, null=True,
        max_length=settings.MAX_FILEPATH_LENGTH)

    def get_absolute_url(self):
        return reverse('gallery.media', args=['image', self.id])

    def thumbnail_url_if_set(self):
        """Returns self.thumbnail, if set, else self.file"""
        return self.thumbnail.url if self.thumbnail else self.file.url


@auto_delete_files
class Video(Media):
    creator = models.ForeignKey(User, related_name='gallery_videos')
    webm = models.FileField(upload_to=settings.GALLERY_VIDEO_PATH, null=True,
                            max_length=settings.MAX_FILEPATH_LENGTH)
    ogv = models.FileField(upload_to=settings.GALLERY_VIDEO_PATH, null=True,
                           max_length=settings.MAX_FILEPATH_LENGTH)
    flv = models.FileField(upload_to=settings.GALLERY_VIDEO_PATH, null=True,
                           max_length=settings.MAX_FILEPATH_LENGTH)
    poster = models.ImageField(upload_to=settings.GALLERY_VIDEO_THUMBNAIL_PATH,
                               max_length=settings.MAX_FILEPATH_LENGTH,
                               null=True)
    thumbnail = models.ImageField(
        upload_to=settings.GALLERY_VIDEO_THUMBNAIL_PATH, null=True,
        max_length=settings.MAX_FILEPATH_LENGTH)

    def get_absolute_url(self):
        return reverse('gallery.media', args=['video', self.id])

    def thumbnail_url_if_set(self):
        """Returns self.thumbnail.url, if set, else default thumbnail URL"""
        progress_url = settings.GALLERY_VIDEO_THUMBNAIL_PROGRESS_URL
        return self.thumbnail.url if self.thumbnail else progress_url
# === tanslate.py (Blues-star/bilibili-BV-conv, MIT) ===
table = 'fZodR9XQDSUm21yCkr6zBqiveYah8bt4xsWpHnJE7jL5VG3guMTKNPAwcF'
tr = {}
for i in range(58):
    tr[table[i]] = i
s = [11, 10, 3, 8, 4, 6]
xor = 177451812
add = 8728348608


def dec(x):
    r = 0
    for i in range(6):
        r += tr[x[s[i]]] * 58**i
    return (r - add) ^ xor


def enc(x):
    x = (x ^ xor) + add
    # 12-char template: fixed chars 'BV1', '4', '1', '7'; the six slots at
    # the indices listed in s are filled with base-58 digits below.
    r = list('BV1  4 1 7  ')
    for i in range(6):
        r[s[i]] = table[x // 58**i % 58]
    return ''.join(r)


print(dec('BV17x411w7KC'))
print(dec('BV1Q541167Qg'))
print(dec('BV1mK4y1C7Bz'))
print(enc(170001))
print(enc(455017605))
print(enc(882584971))
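The encoder and decoder above should be exact inverses of each other over the six base-58 digit slots. The following self-contained sketch repeats the snippet's constants and adds a `roundtrip` helper (our name, not part of the original file) to check that property:

```python
# Self-contained round-trip check for the BV <-> AV codec above.
# Constants are copied from the snippet; `roundtrip` is an added helper.
table = 'fZodR9XQDSUm21yCkr6zBqiveYah8bt4xsWpHnJE7jL5VG3guMTKNPAwcF'
tr = {c: i for i, c in enumerate(table)}
s = [11, 10, 3, 8, 4, 6]
xor = 177451812
add = 8728348608

def dec(x):
    # Read the six positioned base-58 digits, then undo the add/xor mask.
    return (sum(tr[x[s[i]]] * 58**i for i in range(6)) - add) ^ xor

def enc(x):
    # Apply the mask, then write base-58 digits into the template slots.
    x = (x ^ xor) + add
    r = list('BV1  4 1 7  ')
    for i in range(6):
        r[s[i]] = table[x // 58**i % 58]
    return ''.join(r)

def roundtrip(bv):
    return enc(dec(bv)) == bv

print(roundtrip('BV17x411w7KC'))  # expected: True
```

Because `enc` only writes the slots that `dec` reads, any valid BV string round-trips as long as the fixed template characters match.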
# === kronos_executor/kronos_executor/execution_contexts/trivial.py (ecmwf/kronos, Apache-2.0) ===
import pathlib

from kronos_executor.execution_context import ExecutionContext

run_script = pathlib.Path(__file__).parent / "trivial_run.sh"


class TrivialExecutionContext(ExecutionContext):

    scheduler_directive_start = ""
    scheduler_directive_params = {}
    scheduler_use_params = []

    scheduler_cancel_head = "#!/bin/bash\nkill "
    scheduler_cancel_entry = "{sequence_id} "

    launcher_command = "mpirun"
    launcher_params = {"num_procs": "-np "}
    launcher_use_params = ["num_procs"]

    def env_setup(self, job_config):
        return "module load openmpi"

    def submit_command(self, job_config, job_script_path, deps=[]):
        return [str(run_script),
                job_config['job_output_file'],
                job_config['job_error_file'],
                job_script_path]


Context = TrivialExecutionContext
# === split_dataset/split_dataset.py (si-you/tools, Apache-2.0) ===
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import csv

from absl import app
from absl import flags
import pandas as pd

FLAGS = flags.FLAGS

flags.DEFINE_string('dataset', None, 'A path to the dataset.')
flags.DEFINE_float('test_fraction', 0.2, 'A split fraction between [0.0, 1.0]')


def main(unused_args):
    df = pd.read_csv(FLAGS.dataset, sep=',', dtype=object,
                     quoting=csv.QUOTE_NONE)

    # Step 1. Shuffle the dataset.
    df = df.sample(frac=1.0).reset_index(drop=True)
    split_point = int(len(df) * FLAGS.test_fraction)

    # Step 2. Split the dataset.
    df_test = df.iloc[:split_point, :]
    df_train = df.iloc[split_point:, :]

    # Step 3. Materialize the datasets.
    df_train.to_csv('train.csv', sep=',', quoting=csv.QUOTE_NONE,
                    index=False)
    df_test.to_csv('test.csv', sep=',', quoting=csv.QUOTE_NONE,
                   index=False)


if __name__ == '__main__':
    app.run(main)
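The shuffle-then-slice logic in `main` above can be checked without pandas or a CSV on disk. This sketch mirrors the same three steps on plain Python lists; `split_rows` is our illustrative helper, not part of the script:

```python
# Dependency-light check of the shuffle/split logic above.
import random

def split_rows(rows, test_fraction, seed=None):
    rows = list(rows)
    random.Random(seed).shuffle(rows)       # Step 1: shuffle
    split_point = int(len(rows) * test_fraction)
    test = rows[:split_point]               # Step 2: first slice is test,
    train = rows[split_point:]              # the remainder is train
    return train, test

train, test = split_rows(range(10), 0.2, seed=0)
print(len(train), len(test))  # 8 2
```

Note that, as in the original, `int()` truncates, so a 0.2 fraction of 10 rows yields exactly 2 test rows and no row is lost or duplicated.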
# === exercises/bssid-based/receive.py (ramonfontes/tutorials, Apache-2.0) ===
#!/usr/bin/env python
# NOTE: Python 2 source (print statements, str-based hexlify).

import sys
import os

from binascii import hexlify
from scapy.all import sniff
from scapy.all import TCP, Raw

allowed_bssids = ['020000000200']
reg_mac = []


def handle_pkt(pkt):
    if TCP in pkt and pkt[TCP].dport == 1234 and Raw in pkt:
        pkt = pkt.lastlayer()
        pktHex = hexlify(str(pkt))
        bssid = pktHex[0:12]
        mac = pktHex[12:24]
        if bssid in allowed_bssids and mac in reg_mac:
            if os.path.exists("%s.txt" % mac):
                print "Allowing packets from MAC %s" % (mac)
                os.system("./run-code-allow.sh")
                os.system("rm %s.txt" % mac)
        if bssid not in allowed_bssids:
            if os.path.exists("drop.txt"):
                reg_mac.append(mac)
                print "Dropping packets from BSSID %s" % (bssid)
                os.system("./run-code-drop.sh")
                os.system("rm drop.txt")
                os.system("touch %s.txt" % mac)
        sys.stdout.flush()


def main():
    # ifaces = filter(lambda i: 'eth' in i, os.listdir('/sys/class/net/'))
    iface = 'eth0'
    os.system('rm *.txt')
    os.system('touch drop.txt')
    print "sniffing on %s" % iface
    sys.stdout.flush()
    sniff(iface=iface,
          prn=lambda x: handle_pkt(x))


if __name__ == '__main__':
    main()
# === dcapi/dcapi.py (lethargilistic/dcapi-wrap, MIT) ===
import requests
from urllib.parse import urlparse
from os.path import join

# TODO: Break into separate standard settings module
ROOT_URL = 'http://progdisc.club/~lethargilistic/proxy'
HEADERS = {'User-Agent': 'dcapi-wrap (https://github.com/lethargilistic/dcapi-wrap)'}


def set_url(url):
    global ROOT_URL  # without this, the assignment below would only bind a local
    parsed = urlparse(url)
    # urlparse() succeeds on almost any string, so check the parts instead.
    if parsed.scheme and parsed.netloc:
        ROOT_URL = url
    else:
        raise ValueError('The URL was not a URL')


def character(search):
    search_url = join(ROOT_URL, 'character', str(search))
    response = requests.get(search_url, headers=HEADERS)
    if response.status_code != 200:
        raise ConnectionError('API endpoint returned status '
                              + str(response.status_code))
    return response.json()


if __name__ == '__main__':
    print(character(0))
    print(character('Ai Haibara'))
# === tests/dataframesource/sources/test_dataikusource.py (telia-oss/birgitta, MIT) ===
import sys
from unittest.mock import MagicMock, patch  # noqa F401

import mock
import pytest  # noqa F401

# TODO: Simplify the mocking of private (unavailable) dataiku lib.
# Current mocking is ugly and complex.
if 'dataiku.Dataset' in sys.modules:
    del sys.modules['dataiku.Dataset']
if 'dataiku' in sys.modules:
    del sys.modules['dataiku']
if 'dataikuapi' in sys.modules:
    del sys.modules['dataikuapi']
if 'dataikuapi.dss.project.DSSProject' in sys.modules:
    del sys.modules['dataikuapi.dss.project.DSSProject']
dataiku_mock = mock.MagicMock()
sys.modules['dataiku'] = dataiku_mock
spark_mock = mock.MagicMock()
sys.modules['dataiku.spark'] = spark_mock
ds_mock = mock.MagicMock()
sys.modules['dataiku.Dataset'] = ds_mock
dataikuapi_mock = mock.MagicMock()
sys.modules['dataikuapi'] = dataikuapi_mock
project_mock = mock.MagicMock()
sys.modules['dataikuapi.dss.project.DSSProject'] = project_mock
project_obj_mock = mock.MagicMock()
project_mock.return_value = project_obj_mock
dapi_dataset_mock = mock.MagicMock()
project_obj_mock.get_dataset.return_value = dapi_dataset_mock
import dataikuapi.dss.project.DSSProject  # noqa l202
dataikuapi.dss.project.DSSProject.return_value = project_obj_mock

from birgitta.dataframesource.sources.dataikusource import DataikuSource  # noqa F401
from birgitta.dataframe import dataframe, dfdiff  # noqa l202
from birgitta.dataiku import schema as dkuschema  # noqa E402
from birgitta.fields import Catalog  # noqa E402
from birgitta.schema.schema import Schema  # noqa E402


def is_current_platform():
    return True


@mock.patch("birgitta.dataiku.platform.is_current_platform",
            is_current_platform)
def test_write_without_set_schema():
    dataiku_source = DataikuSource()
    dataset_name = "fixtures"
    s3_dir = "s3://birgittatestbucket/sourcetests"
    fixtures_mock = MagicMock()
    catalog = Catalog()
    catalog.add_field('fooint', description='Foo int', example=39)
    schema = Schema([['fooint', 'bigint']], catalog)
    dataframe.write(fixtures_mock,
                    dataset_name,
                    prefix=s3_dir,
                    schema=schema,
                    skip_cast=True,
                    set_schema=False,
                    dataframe_source=dataiku_source)
    dapi_dataset_mock.set_schema.assert_not_called()


@mock.patch("birgitta.dataiku.platform.is_current_platform",
            is_current_platform)
def test_write():
    dataiku_source = DataikuSource()
    dataset_name = "fixtures"
    s3_dir = "s3://birgittatestbucket/sourcetests"
    fixtures_mock = MagicMock()
    catalog = Catalog()
    catalog.add_field('fooint', description='Foo int', example=39)
    schema = Schema([['fooint', 'bigint']], catalog)
    dataframe.write(fixtures_mock,
                    dataset_name,
                    prefix=s3_dir,
                    schema=schema,
                    skip_cast=True,
                    dataframe_source=dataiku_source)
    dataiku_schema = dkuschema.to_dataiku(schema)
    dapi_dataset_mock.set_schema.assert_called_once_with(dataiku_schema)
# === src/poker_now_log_converter/player.py (charlestudor/PokerNowLogConverter, MIT) ===
# -*- coding: utf-8 -*-
""" Provides the Player class as part of the PokerNowLogConverter data model"""
from dataclasses import dataclass


@dataclass
class Player:
    """ The Player class represents a player in a particular hand of poker.

    Players start without an alias, which can be set using a method once the
    game object is constructed. Once an alias is set, it is used in place of
    the player name when outputting to a log.

    Args:
        player_name (str): The player name from the log. The first part in the "Player @ ID" PokerNow format.
        player_id (str): The ID given by the PokerNow server. The second part in the "Player @ ID" format.
        player_name_with_id (str): This is the entire PokerNow player string in the format "Player @ ID".
        alias_name (str): An alias for this player.
    """
    player_name: str
    player_id: str
    player_name_with_id: str
    alias_name: str = None

    def __hash__(self):
        """ Hash function. If two players have the same name with id, they are identical"""
        return hash(self.player_name_with_id)
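A short usage sketch of the dataclass above: two `Player` objects built from the same "Name @ ID" string hash identically even when their aliases differ, because `__hash__` only looks at `player_name_with_id`. Splitting the raw string on `@` is our assumption about the log format, inferred from the docstring; it is not a helper shipped with the class.

```python
# Standalone copy of the Player dataclass for illustration.
from dataclasses import dataclass

@dataclass
class Player:
    player_name: str
    player_id: str
    player_name_with_id: str
    alias_name: str = None

    def __hash__(self):
        # Identity is the full "Name @ ID" string, not the alias.
        return hash(self.player_name_with_id)

raw = "Alice @ a1b2c3"
name, player_id = [part.strip() for part in raw.split("@")]
p1 = Player(name, player_id, raw)
p2 = Player(name, player_id, raw, alias_name="Al")
print(hash(p1) == hash(p2))  # True
```

Note that a manually defined `__hash__` survives the `@dataclass` decorator: the decorator only replaces `__hash__` when the class does not define one explicitly.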
# === bg.py (Blue-IT-Marketing/sa-loans, MIT) ===
import os
import webapp2
import jinja2
from google.appengine.ext import ndb
from google.appengine.api import users
from google.appengine.api import mail
import datetime, random, string
from google.appengine.api import memcache
import logging

# Jinja loader
template_env = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.getcwd()))

from loans import *


class bgDataHandler(webapp2.RequestHandler):
    def get(self):
        findRequest = LoanApplicantDetails.query()
        thisLoanApplicantDetailsList = findRequest.fetch()
        template = template_env.get_template('templates/dynamic/admin/bg/bg.html')
        context = {'thisLoanApplicantDetailsList': thisLoanApplicantDetailsList}
        self.response.write(template.render(context))


class BankingDetailsHandler(webapp2.RequestHandler):
    def get(self):
        findRequest = LoanBankingDetails.query()
        thisLoanBankingList = findRequest.fetch()
        template = template_env.get_template('templates/dynamic/admin/bg/bgl.html')
        context = {'thisLoanBankingList': thisLoanBankingList}
        self.response.write(template.render(context))


app = webapp2.WSGIApplication([
    ('/bg/data/mob1234567890@', bgDataHandler),
    ('/bg/data/bank/mob1234567890@', BankingDetailsHandler)
], debug=True)
# === pystella/fit/fit_lc.py (baklanovp/pystella, MIT) ===
import numpy as np


class FitLc:
    def __init__(self, name):
        self._name = name
        self._par = {'is_info': False, 'is_debug': False, 'is_quiet': True}

    def print_parameters(self):
        print(f'Parameters of {self.Name}')
        for k, v in self._par.items():
            print(f'{k:20s}: {v}')

    def fit_lc(self, lc_o, lc_m):
        pass

    def fit_curves(self, curves_m, curves_o):
        pass

    def fit_tss(self, tss_m, tss_o):
        pass

    @property
    def Name(self):
        return self._name

    @property
    def is_info(self):
        return self.get('is_info')

    @is_info.setter
    def is_info(self, v):
        self._par['is_info'] = v

    @property
    def is_quiet(self):
        return self.get('is_quiet')

    @is_quiet.setter
    def is_quiet(self, v):
        self._par['is_quiet'] = v

    @property
    def is_debug(self):
        return self.get('is_debug')

    @is_debug.setter
    def is_debug(self, v):
        self._par['is_debug'] = v

    def get(self, k, default=None):
        if k in self._par:
            return self._par[k]
        return default

    def set_param(self, k, v):
        self._par[k] = v

    @staticmethod
    def theta2arr(theta, nb):
        if len(theta) == 1 + nb:
            return FitLc.thetaDt2arr(theta, nb)
        elif len(theta) == 2 + nb:
            return FitLc.thetaDtDm2arr(theta, nb)
        raise ValueError("Len(theta) is not in [{},{}].".format(nb + 1, nb + 2), theta)

    @staticmethod
    def thetaDt2arr(theta, nb):
        if len(theta) != 1 + nb:
            raise ValueError("Len(theta) is not {}.".format(1 + nb), theta)
        return theta[0], theta[1:]

    @staticmethod
    def thetaDtDm2arr(theta, nb):
        if len(theta) != 2 + nb:
            raise ValueError("Len(theta) is not {}.".format(2 + nb), theta)
        return theta[0], theta[1], theta[2:]

    @staticmethod
    def thetaDtNames(bnames):
        res = ['dt']
        for j, bn in enumerate(bnames):
            res.append('sig_{}'.format(bn))
        return tuple(res)

    @staticmethod
    def thetaDtDmNames(bnames):
        res = ['dt', 'dm0']
        for j, bn in enumerate(bnames):
            res.append('sig_{}'.format(bn))
        return tuple(res)

    @staticmethod
    def print_thetaDt(theta, bnames):
        dt, sigs = FitLc.thetaDt2arr(theta, len(bnames))
        txt = 'dt= {:4.1f} '.format(dt)
        txt += ' '.join(['sig_{}= {:5.2f}'.format(bn, sigs[j]) for j, bn in enumerate(bnames)])
        print(txt)

    @staticmethod
    def print_thetaDtDm(theta, bnames):
        dt, dm0, sigs = FitLc.thetaDtDm2arr(theta, len(bnames))
        txt = 'dt= {:4.1f} m_0= {:4.1f} '.format(dt, dm0)
        txt += ' '.join(['sig_{}= {:5.2f}'.format(bn, sigs[j]) for j, bn in enumerate(bnames)])
        print(txt)

    @staticmethod
    def log_priorDt(theta, nb, tlim, siglim):
        dt, sigs = FitLc.thetaDt2arr(theta, nb)
        if (dt < tlim[0]) or (dt > tlim[1]):
            return -np.inf
        for x in sigs:
            if (x < -0.) or (x > siglim):
                return -np.inf
        return 0  # flat prior

    @staticmethod
    def log_priorDtDm(theta, tlim, maglim, siglim):
        dt, dm, *sigs = theta
        if (dt < tlim[0]) or (dt > tlim[1]):
            return -np.inf
        if (dm < maglim[0]) or (dm > maglim[1]):
            return -np.inf
        for x in sigs:
            if (x < -0.) or (x > siglim):
                return -np.inf
        return 0  # flat prior


class FitLcResult:
    def __init__(self):
        self._mshift = 0.
        self._msigma = 0.
        self._tshift = 0.
        self._tsigma = 0.
        self._measure = None
        self._comm = None

    @property
    def tshift(self):
        return self._tshift

    @tshift.setter
    def tshift(self, shift):
        self._tshift = shift

    @property
    def tsigma(self):
        return self._tsigma

    @tsigma.setter
    def tsigma(self, v):
        self._tsigma = v

    @property
    def mshift(self):
        return self._mshift

    @mshift.setter
    def mshift(self, v):
        self._mshift = v

    @property
    def msigma(self):
        return self._msigma

    @msigma.setter
    def msigma(self, v):
        self._msigma = v

    @property
    def measure(self):
        return self._measure

    @measure.setter
    def measure(self, v):
        self._measure = v

    @property
    def comm(self):
        return self._comm

    @comm.setter
    def comm(self, v):
        self._comm = v
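The `FitLc` static helpers above all encode one packing convention: a flat parameter vector `theta = (dt, [dm0,] sig_1 .. sig_nb)`, where `nb` is the number of photometric bands. This standalone sketch mirrors `thetaDt2arr` / `thetaDtDm2arr` without importing pystella, to make the convention concrete (the function names here are ours):

```python
# Standalone mirrors of FitLc's theta unpacking convention.
def theta_dt_unpack(theta, nb):
    # theta = (dt, sig_1 .. sig_nb)
    if len(theta) != 1 + nb:
        raise ValueError("Len(theta) is not {}.".format(1 + nb), theta)
    return theta[0], theta[1:]

def theta_dt_dm_unpack(theta, nb):
    # theta = (dt, dm0, sig_1 .. sig_nb)
    if len(theta) != 2 + nb:
        raise ValueError("Len(theta) is not {}.".format(2 + nb), theta)
    return theta[0], theta[1], theta[2:]

dt, sigs = theta_dt_unpack([3.5, 0.1, 0.2], nb=2)
print(dt, list(sigs))       # 3.5 [0.1, 0.2]
dt, dm0, sigs = theta_dt_dm_unpack([3.5, -0.4, 0.1, 0.2], nb=2)
print(dt, dm0, list(sigs))  # 3.5 -0.4 [0.1, 0.2]
```

`FitLc.theta2arr` simply dispatches between these two layouts by length, which is why `log_priorDt` and `log_priorDtDm` can share the same flat-prior structure over `dt`, `dm` and the per-band sigmas.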
# === src/users/models/microsoftgraphteam_messaging_settings.py (peombwa/Sample-Graph-Python-Client, MIT) ===
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model


class MicrosoftgraphteamMessagingSettings(Model):
    """teamMessagingSettings.

    :param allow_user_edit_messages:
    :type allow_user_edit_messages: bool
    :param allow_user_delete_messages:
    :type allow_user_delete_messages: bool
    :param allow_owner_delete_messages:
    :type allow_owner_delete_messages: bool
    :param allow_team_mentions:
    :type allow_team_mentions: bool
    :param allow_channel_mentions:
    :type allow_channel_mentions: bool
    """

    _attribute_map = {
        'allow_user_edit_messages': {'key': 'allowUserEditMessages', 'type': 'bool'},
        'allow_user_delete_messages': {'key': 'allowUserDeleteMessages', 'type': 'bool'},
        'allow_owner_delete_messages': {'key': 'allowOwnerDeleteMessages', 'type': 'bool'},
        'allow_team_mentions': {'key': 'allowTeamMentions', 'type': 'bool'},
        'allow_channel_mentions': {'key': 'allowChannelMentions', 'type': 'bool'},
    }

    def __init__(self, allow_user_edit_messages=None, allow_user_delete_messages=None,
                 allow_owner_delete_messages=None, allow_team_mentions=None,
                 allow_channel_mentions=None):
        super(MicrosoftgraphteamMessagingSettings, self).__init__()
        self.allow_user_edit_messages = allow_user_edit_messages
        self.allow_user_delete_messages = allow_user_delete_messages
        self.allow_owner_delete_messages = allow_owner_delete_messages
        self.allow_team_mentions = allow_team_mentions
        self.allow_channel_mentions = allow_channel_mentions
# === setup.py (shraman-rc/SecFS, MIT) ===
#!/usr/bin/env python3
from distutils.core import setup

setup(
    name='SecFS',
    version='0.1.0',
    description='6.858 final project --- an encrypted and authenticated file system',
    long_description=open('README.md', 'r').read(),
    author='Jon Gjengset',
    author_email='jon@thesquareplanet.com',
    maintainer='MIT PDOS',
    maintainer_email='pdos@csail.mit.edu',
    url='https://github.com/mit-pdos/6.858-secfs',
    packages=['secfs', 'secfs.store'],
    install_requires=['llfuse', 'Pyro4', 'serpent', 'cryptography'],
    scripts=['bin/secfs-server', 'bin/secfs-fuse'],
    license='MIT',
    classifiers=[
        "Development Status :: 2 - Pre-Alpha",
        "Intended Audience :: Science/Research",
        "License :: OSI Approved :: MIT License",
        "Topic :: Education",
        "Topic :: Security",
        "Topic :: System :: Filesystems",
    ]
)
# === eventkit_cloud/jobs/migrations/0011_add_file_data_providers.py (venicegeo/eventkit-cloud, BSD-3-Clause) ===
# Generated by Django 3.1.2 on 2021-01-27 18:43

from django.db import migrations


class Migration(migrations.Migration):

    def add_file_data_providers(apps, schema_editor):
        DataProviderType = apps.get_model("jobs", "DataProviderType")
        ExportFormat = apps.get_model("jobs", "ExportFormat")

        # Create the DataProvider objects if they don't exist.
        DataProviderType.objects.get_or_create(type_name="vector-file")
        DataProviderType.objects.get_or_create(type_name="raster-file")

        # Currently available provider types.
        vector_data_provider_types = ["vector-file"]
        raster_data_provider_types = ["raster-file"]

        # Currently available export formats.
        nitf = ExportFormat.objects.get(slug="nitf")
        gtiff = ExportFormat.objects.get(slug="gtiff")
        kml = ExportFormat.objects.get(slug="kml")
        shp = ExportFormat.objects.get(slug="shp")
        gpkg = ExportFormat.objects.get(slug="gpkg")

        # Set the known supported export formats per provider type.
        for provider_type in DataProviderType.objects.all():
            if provider_type.type_name in vector_data_provider_types:
                provider_type.supported_formats.add(gpkg, shp, kml)
            if provider_type.type_name in raster_data_provider_types:
                provider_type.supported_formats.add(gpkg, gtiff, nitf)

    dependencies = [
        ("jobs", "0010_dataprovider_data_type"),
    ]

    operations = [migrations.RunPython(add_file_data_providers)]
# === quodlibet/qltk/dbus_.py (ch1huizong/Scode) ===
# Copyright 2006 Federico Pelloni <federico.pelloni@gmail.com>
#           2013 Christoph Reiter
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
# NOTE: Python 2 source (basestring, unicode, dict.itervalues).

import dbus
import dbus.service
from dbus import DBusException

from quodlibet.util import dbusutils
from quodlibet.parse import Query
from quodlibet.qltk.songlist import SongList
from quodlibet.util.path import fsdecode


class DBusHandler(dbus.service.Object):
    def __init__(self, player, library):
        try:
            self.library = library
            bus = dbus.SessionBus()
            name = dbus.service.BusName('net.sacredchao.QuodLibet', bus=bus)
            path = '/net/sacredchao/QuodLibet'
            super(DBusHandler, self).__init__(name, path)
        except DBusException:
            pass
        else:
            player.connect('song-started', self.__song_started)
            player.connect('song-ended', self.__song_ended)
            player.connect('paused', lambda player: self.Paused())
            player.connect('unpaused', lambda player: self.Unpaused())
            self._player = player

    def __dict(self, song):
        dict = {}
        for key, value in (song or {}).items():
            if not isinstance(value, basestring):
                value = unicode(value)
            elif isinstance(value, str):
                value = fsdecode(value)
            dict[key] = dbusutils.dbus_unicode_validate(value)
        if song:
            dict["~uri"] = song("~uri")
        return dict

    def __song_started(self, player, song):
        if song is not None:
            song = self.__dict(song)
            self.SongStarted(song)

    def __song_ended(self, player, song, skipped):
        if song is not None:
            song = self.__dict(song)
            self.SongEnded(song, skipped)

    @dbus.service.signal('net.sacredchao.QuodLibet')
    def SongStarted(self, song):
        pass

    @dbus.service.signal('net.sacredchao.QuodLibet')
    def SongEnded(self, song, skipped):
        pass

    @dbus.service.signal('net.sacredchao.QuodLibet')
    def Paused(self):
        pass

    @dbus.service.signal('net.sacredchao.QuodLibet')
    def Unpaused(self):
        pass

    @dbus.service.method('net.sacredchao.QuodLibet')
    def GetPosition(self):
        return self._player.get_position()

    @dbus.service.method('net.sacredchao.QuodLibet')
    def IsPlaying(self):
        return not self._player.paused

    @dbus.service.method('net.sacredchao.QuodLibet')
    def CurrentSong(self):
        return self.__dict(self._player.song)

    @dbus.service.method('net.sacredchao.QuodLibet')
    def Next(self):
        self._player.next()

    @dbus.service.method('net.sacredchao.QuodLibet')
    def Previous(self):
        self._player.previous()

    @dbus.service.method('net.sacredchao.QuodLibet')
    def Pause(self):
        self._player.paused = True

    @dbus.service.method('net.sacredchao.QuodLibet')
    def Play(self):
        if self._player.song is None:
            self._player.reset()
        else:
            self._player.paused = False

    @dbus.service.method('net.sacredchao.QuodLibet')
    def PlayPause(self):
        if self._player.song is None:
            self._player.reset()
        else:
            self._player.paused ^= True
        return self._player.paused

    @dbus.service.method('net.sacredchao.QuodLibet', in_signature='s')
    def Query(self, query):
        if query is not None:
            try:
                results = Query(query, star=SongList.star).search
            except Query.error:
                pass
            else:
                return [self.__dict(s) for s in self.library.itervalues()
                        if results(s)]
        return None
# File: score/models.py (repo: loric-/bcvscore, license: MIT)
from django.db import models
from django.contrib.auth.models import User
from solo.models import SingletonModel


class Division(models.Model):
    nom = models.CharField(max_length=30)

    def __str__(self):
        return self.nom


class Equipe(models.Model):
    nom = models.CharField(max_length=100)
    division = models.ForeignKey(
        Division,
        verbose_name='Division'
    )

    def __str__(self):
        return '{} ({})'.format(self.nom, self.division)


class Rencontre(models.Model):
    numero = models.IntegerField()
    date = models.DateField()
    heure = models.TimeField()
    equipeDom = models.ForeignKey(
        Equipe,
        related_name='rencontreDom',
        verbose_name='Equipe Domicile'
    )
    equipeExt = models.ForeignKey(
        Equipe,
        related_name='rencontreExt',
        verbose_name='Equipe Exterieur'
    )
    scoreDom = models.IntegerField(
        verbose_name='Score Domicile',
        null=True,
        blank=True
    )
    scoreExt = models.IntegerField(
        verbose_name='Score Exterieur',
        null=True,
        blank=True
    )
    forfaitDom = models.BooleanField(
        verbose_name='Forfait Domicile',
        default=False
    )
    forfaitExt = models.BooleanField(
        verbose_name='Forfait Exterieur',
        default=False
    )

    def __str__(self):
        return str(self.numero)


class Profil(models.Model):
    user = models.OneToOneField(User, verbose_name='Utilisateur')
    equipes = models.ManyToManyField(Equipe)

    def __str__(self):
        return self.user.username


class SiteConfiguration(SingletonModel):
    login = models.CharField(max_length=7, verbose_name='Identifiant')
    password = models.CharField(max_length=8, verbose_name='Mot de passe')
    username = models.CharField(max_length=20, verbose_name='Nom utilisateur')

    class Meta:
        verbose_name = "Site Configuration"
# File: GlueCustomConnectors/glueJobValidation/glue_job_validation_update.py
# (repo: xy1m/aws-glue-samples, license: MIT-0)

# Copyright 2016-2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import sys
from awsglue.utils import getResolvedOptions
from awsglue.transforms import *
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark import SparkConf
from awsglue.dynamicframe import DynamicFrame
from awsglue.gluetypes import Field, IntegerType, TimestampType, StructType

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

######################################## test connection options ########################################
## Please pick and customize the right connection options for testing.
## If you are using a large test data set, consider using column partitioning to parallelize
## the data reading for performance.

# DataSourceTest - please configure according to your connector type and options
options_dataSourceTest_jdbc = {
    "query": "select NumberOfEmployees, CreatedDate from Account",
    "className": "partner.jdbc.some.Driver",
    # test parameters
    "url": "jdbc:some:url:SecurityToken=abc;",
    "user": "user",
    "password": "password",
}

# ColumnPartitioningTest
# for JDBC connector only
options_columnPartitioningTest = {
    "query": "select NumberOfEmployees, CreatedDate from Account where ",
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};",
    "secretId": "test-partner-driver",
    "className": "partner.jdbc.some.Driver",
    # test parameters
    "partitionColumn": "RecordId__c",
    "lowerBound": "0",
    "upperBound": "13",
    "numPartitions": "2",
}

# DataTypeMappingTest
# for JDBC connector only
options_dataTypeMappingTest = {
    "query": "select NumberOfEmployees, CreatedDate from Account where ",
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};",
    "secretId": "test-partner-driver",
    "className": "partner.jdbc.some.Driver",
    # test parameter
    "dataTypeMapping": {"INTEGER": "STRING"}
}

# DbtableQueryTest
# for JDBC connector only
options_dbtableQueryTest = {
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};",
    "secretId": "test-partner-driver",
    "className": "partner.jdbc.some.Driver",
    # test parameter
    "query": "select NumberOfEmployees, CreatedDate from Account"
    # "dbTable": "Account"
}

# JDBCUrlTest - extra JDBC connection option UseBulkAPI appended
# for JDBC connector only
options_JDBCUrlTest = {
    "query": "select NumberOfEmployees, CreatedDate from Account",
    "secretId": "test-partner-driver",
    "className": "partner.jdbc.some.Driver",
    # test parameter
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};UseBulkAPI=true",
}

# SecretsManagerTest - please configure according to your connector type and options
options_secretsManagerTest = {
    "query": "select NumberOfEmployees, CreatedDate from Account",
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};",
    "className": "partner.jdbc.some.Driver",
    # test parameter
    "secretId": "test-partner-driver"
}

# FilterPredicateTest
# for JDBC connector only
options_filterPredicateTest = {
    "query": "select NumberOfEmployees, CreatedDate from Account where",
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};",
    "secretId": "test-partner-driver",
    "className": "partner.jdbc.some.Driver",
    # test parameter
    "filterPredicate": "BillingState='CA'"
}

##################################### read data from data source ######################################
datasource0 = glueContext.create_dynamic_frame_from_options(
    connection_type="marketplace.jdbc",
    connection_options=options_secretsManagerTest)  # pick the right test connection options

######################################## validate data reading ########################################
## validate data schema and count
# more data types: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-types.html
expected_schema = StructType([Field("NumberOfEmployees", IntegerType()),
                              Field("CreatedDate", TimestampType())])
expected_count = 2

assert datasource0.schema() == expected_schema
print("expected schema: " + str(expected_schema.jsonValue()))
print("result schema: " + str(datasource0.schema().jsonValue()))
print("result schema in tree structure: ")
datasource0.printSchema()

## validate that the data count is equal to the expected count
assert datasource0.count() == expected_count
print("expected record count: " + str(expected_count))
print("result record count: " + str(datasource0.count()))

######################################## write data to s3 ########################################
datasource0.write(
    connection_type="s3",
    connection_options={"path": "s3://your/output/path/"},
    format="json"
)

######################################## DataSinkTest ########################################
## Create a DynamicFrame on the fly
jsonStrings = ['{"Name":"Andrew"}']
rdd = sc.parallelize(jsonStrings)
sql_df = spark.read.json(rdd)
df = DynamicFrame.fromDF(sql_df, glueContext, "new_dynamic_frame")

## DataSinkTest options
options_dataSinkTest = {
    "secretId": "test-partner-driver",
    "dbtable": "Account",
    "className": "partner.jdbc.some.Driver",
    "url": "jdbc:some:url:user=${user};Password=${Password};SecurityToken=${SecurityToken};"
}

## Write to data target
glueContext.write_dynamic_frame.from_options(frame=df,
                                             connection_type="marketplace.jdbc",
                                             connection_options=options_dataSinkTest)

## write validation
# You may check the data on the database side.
# You may also refer to the 'read data from data source' and 'validate data reading' parts
# to compose your own validation logic.
job.commit()
job.commit() | 36.556213 | 141 | 0.674166 | 640 | 6,178 | 6.445313 | 0.296875 | 0.03103 | 0.038788 | 0.046545 | 0.423515 | 0.364606 | 0.307152 | 0.257939 | 0.257939 | 0.257939 | 0 | 0.004534 | 0.14325 | 6,178 | 169 | 142 | 36.556213 | 0.774797 | 0.239883 | 0 | 0.28125 | 0 | 0.010417 | 0.445019 | 0.191259 | 0 | 0 | 0 | 0 | 0.020833 | 1 | 0 | false | 0.083333 | 0.09375 | 0 | 0.09375 | 0.0625 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
8b0c8ae225961cb83cd1db54cd4ce50a488afe5b | 1,341 | py | Python | redbot/message/headers/date.py | jakub-g/redbot | 48f17bd01da929eecdfd233fdc0730e2f4189800 | [
"MIT"
] | 1 | 2018-08-04T23:10:59.000Z | 2018-08-04T23:10:59.000Z | redbot/message/headers/date.py | jakub-g/redbot | 48f17bd01da929eecdfd233fdc0730e2f4189800 | [
"MIT"
] | null | null | null | redbot/message/headers/date.py | jakub-g/redbot | 48f17bd01da929eecdfd233fdc0730e2f4189800 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
from redbot.message import headers
from redbot.syntax import rfc7231
from redbot.type import AddNoteMethodType


class date(headers.HttpHeader):
    canonical_name = "Date"
    description = """\
The `Date` header represents the time when the message was generated, regardless of caching that
happened since.

It is used by caches as input to expiration calculations, and to detect clock drift."""
    reference = "%s#header.date" % rfc7231.SPEC_URL
    syntax = False  # rfc7231.Date
    list_header = False
    deprecated = False
    valid_in_requests = True
    valid_in_responses = True

    def parse(self, field_value: str, add_note: AddNoteMethodType) -> int:
        try:
            date_value = headers.parse_date(field_value, add_note)
        except ValueError:
            raise
        return date_value


class BasicDateTest(headers.HeaderTest):
    name = 'Date'
    inputs = [b'Mon, 04 Jul 2011 09:08:06 GMT']
    expected_out = 1309770486
    expected_err = []  # type: ignore


class BadDateTest(headers.HeaderTest):
    name = 'Date'
    inputs = [b'0']
    expected_out = None  # type: ignore
    expected_err = [headers.BAD_DATE_SYNTAX]


class BlankDateTest(headers.HeaderTest):
    name = 'Date'
    inputs = [b'']
    expected_out = None  # type: ignore
    expected_err = [headers.BAD_DATE_SYNTAX]
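
# Standalone cross-check (stdlib only, not redbot's own parser) that the BasicDateTest
# vector above maps onto the expected epoch value; this is an illustrative sketch that
# does not depend on redbot being importable.
def _example_http_date_epoch():
    from email.utils import parsedate_tz, mktime_tz
    # Parse an RFC 7231 HTTP-date string and convert it to a UNIX timestamp.
    return mktime_tz(parsedate_tz('Mon, 04 Jul 2011 09:08:06 GMT'))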
# File: padre/authorizers.py (repo: krislindgren/padre, license: MIT)

import abc
import itertools

from oslo_utils import reflection
import six

from padre import exceptions as excp
from padre import utils


@six.add_metaclass(abc.ABCMeta)
class auth_base(object):
    """Base of all authorizers."""

    def __and__(self, other):
        return all_must_pass(self, other)

    def __or__(self, other):
        return any_must_pass(self, other)

    @abc.abstractmethod
    def __call__(self, bot, message, args=None):
        pass

    def pformat(self, bot):
        return 'auth_base()'


class no_auth(auth_base):
    """Lets any message through."""

    def pformat(self, bot):
        return 'no_auth()'

    def __call__(self, bot, message, args=None):
        pass


class args_key_is_empty_or_allowed(auth_base):
    """Denies if args key is non-empty and not allowed."""

    def __init__(self, args_key, allowed_extractor_func):
        self.args_key = args_key
        self.allowed_extractor_func = allowed_extractor_func

    def __call__(self, bot, message, args=None):
        if args is None:
            raise excp.NotAuthorized(
                "Message lacks a (non-empty)"
                " 'args' keyword argument, unable to auth against"
                " unknown arguments", message)
        else:
            v = args.get(self.args_key)
            if v:
                allowed_extractor_func = self.allowed_extractor_func
                allowed = allowed_extractor_func(message)
                if v not in allowed:
                    raise excp.NotAuthorized(
                        "Action can not be triggered"
                        " please check that the argument '%s' value is"
                        " allowed or that argument '%s' is"
                        " empty" % (self.args_key, self.args_key))

    def pformat(self, bot):
        base = 'args_key_is_empty_or_allowed'
        func_name = reflection.get_callable_name(self.allowed_extractor_func)
        return '%s(%r, %s)' % (base, self.args_key, func_name)


class user_in_ldap_groups(auth_base):
    """Denies if sending user is not in **config** driven ldap groups."""

    def __init__(self, config_key, *more_config_keys):
        self.config_keys = (config_key,) + more_config_keys

    def pformat(self, bot):
        groups = self._fetch_ok_groups(bot)
        return 'user_in_ldap_groups(%s)' % (utils.quote_join(groups))

    def _fetch_ok_groups(self, bot):
        groups = []
        for k in self.config_keys:
            try:
                val = utils.dict_or_munch_extract(bot.config, k)
            except KeyError:
                pass
            else:
                if isinstance(val, six.string_types):
                    groups.append(val)
                elif isinstance(val, (tuple, list, set)):
                    groups.extend(val)
                else:
                    raise TypeError("Unexpected ldap group"
                                    " configuration value type"
                                    " '%s' corresponding to lookup"
                                    " key: %s" % (type(val), k))
        return groups

    def __call__(self, bot, message, args=None):
        ldap_client = bot.clients.get("ldap_client")
        if not ldap_client:
            raise excp.NotFound("Ldap client not found; required to perform"
                                " authorization checks")
        try:
            user_name = message.body.user_name
        except AttributeError:
            user_name = None
        if not user_name:
            raise excp.NotAuthorized(
                "Message lacks a (non-empty)"
                " user name, unable to auth against"
                " unknown users", message)
        else:
            if not ldap_client.is_allowed(user_name,
                                          self._fetch_ok_groups(bot)):
                raise excp.NotAuthorized(
                    "Action can not be triggered"
                    " please check that the sender is in the correct"
                    " ldap group(s)", message)


class message_from_channels(auth_base):
    """Denies messages not from certain channel name(s)."""

    def __init__(self, channels):
        self.channels = tuple(channels)

    def pformat(self, bot):
        return 'message_from_channels(%s)' % (utils.quote_join(self.channels))

    def __call__(self, bot, message, args=None):
        try:
            channel_name = message.body.channel_name
        except AttributeError:
            channel_name = None
        if not channel_name:
            raise excp.NotAuthorized(
                "Message lacks a (non-empty)"
                " channel name, unable to trigger against"
                " unknown channels", message)
        if channel_name not in self.channels:
            raise excp.NotAuthorized(
                "Action can not be triggered in provided"
                " channel '%s', please make sure that the sender"
                " is in the correct channel(s)" % channel_name, message)


class any_must_pass(auth_base):
    """Combines one or more authorizers (any must pass)."""

    def __init__(self, authorizer, *more_authorizers):
        self.authorizers = tuple(
            itertools.chain([authorizer], more_authorizers))

    def pformat(self, bot):
        others = ", ".join(a.pformat(bot) for a in self.authorizers)
        return 'any_must_pass(%s)' % (others)

    def __call__(self, bot, message, args=None):
        fails = []
        any_passed = False
        for authorizer in self.authorizers:
            try:
                authorizer(bot, message, args=args)
            except excp.NotAuthorized as e:
                fails.append(e)
            else:
                any_passed = True
                break
        if not any_passed and fails:
            # TODO: maybe make a multiple-not-authorized exception???
            what = " or ".join('(%s)' % e for e in fails)
            raise excp.NotAuthorized(what, message)


class all_must_pass(auth_base):
    """Combines one or more authorizers (all must pass)."""

    def __init__(self, authorizer, *more_authorizers):
        self.authorizers = tuple(
            itertools.chain([authorizer], more_authorizers))

    def pformat(self, bot):
        others = ", ".join(a.pformat(bot) for a in self.authorizers)
        return 'all_must_pass(%s)' % (others)

    def __call__(self, bot, message, args=None):
        for authorizer in self.authorizers:
            authorizer(bot, message, args=args)
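
# Minimal dependency-free sketch of the combinator pattern above (illustrative only;
# `Deny` and the helper names are hypothetical stand-ins for excp.NotAuthorized and
# real authorizers): checks raise on failure, and an any_must_pass-style composite
# succeeds as soon as one check passes.
def _example_any_pass():
    class Deny(Exception):
        pass

    def passing(message):
        pass  # authorized: no exception raised

    def failing(message):
        raise Deny(message)

    def any_pass(*checks):
        def run(message):
            errors = []
            for check in checks:
                try:
                    check(message)
                    return  # first success wins
                except Deny as e:
                    errors.append(e)
            raise Deny(errors)  # nothing passed
        return run

    try:
        any_pass(failing, passing)('hello')  # second check passes, so no raise
        return True
    except Deny:
        return False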
# File: skspec/IO/gwu_interfaces.py (repo: hugadams/scikit-spectra, license: BSD-3-Clause)

''' Utilities for converting various file formats to a skspec TimeSpectra.
To convert a list of raw files, use from_spec_files().
To convert an old-style timefile/spectral data file, use from_timefile_datafile().
To convert spectral datafiles from Ocean Optics USB2000 and USB650 spectrometers, pass a
file list to from_spec_files().

Returns a skspec TimeSpectra with custom attributes "metadata", "filedict", "baseline".
'''
__author__ = "Adam Hughes, Zhaowen Liu"
__copyright__ = "Copyright 2012, GWU Physics"
__license__ = "Free BSD"
__maintainer__ = "Adam Hughes"
__email__ = "hugadams@gwmail.gwu.edu"
__status__ = "Development"
import os
# 3RD Party Imports
from pandas import DataFrame, Series, datetime, read_csv, concat
import numpy as np
# skspec imports
from skspec.core.timespectra import TimeSpectra
from skspec.core.specindex import SpecIndex
from skspec.core.file_utils import get_files_in_dir, get_shortname
import logging
logger = logging.getLogger(__name__)
### Verified Ocean Optics month naming conventions (SpectraSuite actually capitalizes
### these, but they are matched here via month.lower()) ###
spec_suite_months = {'jan': 1,
                     'feb': 2,
                     'mar': 3,
                     'apr': 4,
                     'may': 5,
                     'jun': 6,
                     'jul': 7,
                     'aug': 8,
                     'sep': 9,
                     'oct': 10,
                     'nov': 11,
                     'dec': 12}
spec_dtype = np.dtype([ ('wavelength', float), ('intensity', float) ])
def get_shortname(filepath, cut_extension=False):
    ''' Simply get the filename from a full path.  cut_extension will remove the file extension.'''
    shortname = os.path.basename(filepath)
    if cut_extension:
        shortname = os.path.splitext(shortname)[0]
    return shortname
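
# Quick illustration (hypothetical POSIX path) of the two behaviors get_shortname()
# provides, built directly on the os.path calls it uses:
def _example_get_shortname():
    import os
    path = '/home/user/data/spec_0001.txt'  # hypothetical file path
    full = os.path.basename(path)                        # filename with extension
    stem = os.path.splitext(os.path.basename(path))[0]   # filename without extension
    return full, stem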
### Following two functions are utilized by both data interfacing functions. ###
def extract_darkfile(filelist, return_null=True):
    ''' Attempts to pick out a dark file from a list using string matching.  Handling of the darkfile
    is done by other functions.  Opted to raise errors and warnings here, instead of downstream.

    If return_null, returns None when no dark file is found.  Otherwise, raises an error.'''

    darkfiles = []
    for infile in filelist:
        if "dark" in infile.lower() or "drk" in infile.lower():
            darkfiles.append(infile)

    ### If no darkfiles found, return null or raise ###
    if len(darkfiles) == 0:
        if return_null:
            return None
        else:
            raise Warning("WARNING: Darkfile not found in filelist: "
                          "startfile %s - endfile %s" % (filelist[0], filelist[-1]))

    ### If multiple darkfiles found, raise a warning ###
    elif len(darkfiles) > 1:
        raise Warning("Multiple darkfiles found in filelist: "
                      "startfile %s - endfile %s" % (filelist[0], filelist[-1]))
    else:
        return darkfiles[0]
def get_headermetadata_dataframe(dataframe, time_file_dict, name=''):
    ''' After creation of the dataframe and the datetime-to-file dict, various metadata attributes
    come together, like filecount, filestart, fileend etc.

    **run title becomes the name of the dataframe and can be adopted by plots.'''
    filecount = len(dataframe.columns)
    timestart, timeend = dataframe.columns[0], dataframe.columns[-1]
    filestart, fileend = time_file_dict[timestart], time_file_dict[timeend]
    specstart, specend = dataframe.index[0], dataframe.index[-1]

    return {'filecount': filecount,
            'timestart': timestart,
            'timeend': timeend,
            'filestart': filestart,
            'fileend': fileend,
            'specstart': specstart,
            'specend': specend}
##########################################################
### Below are the 2 main functions to extract the data ###
##########################################################
def from_spec_files(file_list, name='', skiphead=17, skipfoot=1,
                    check_for_overlapping_time=True, extract_dark=True):
    ''' Takes in raw files directly from Ocean Optics USB2000 and USB650 spectrometers and returns a
    skspec TimeSpectra.  If spectral data is stored without a header, this can be called with skiphead=0.

    Parameters
    ----------
    name: Set name of returned TimeSpectra.

    check_for_overlapping_time: will raise an error if any files have identical times.  Otherwise, time
       is overwritten.  Really only useful for testing or other cornercase instances.

    extract_dark: Attempt to find a filename with a case-insensitive string match to "dark".  If a dark
       spectrum is not found, will print a warning.  If multiple darks are found, will raise an error.

    skiphead/skipfoot: Mostly a reminder that this filetype has a 17-line header and a 1-line footer.

    Notes
    -----
    Built to work with 2-column data only!!!

    The dataframe is constructed from a list of dictionaries.
    Each dataframe gets an appended headerdata attribute (dataframe.headerdata) which is a dictionary,
    keyed by columns, that stores (infile, header, footer) data so no info is lost between files.

    Constructed to work for non-equally spaced datafiles, or non-identical data (aka wavelengths can
    have nans).
    '''

    dict_of_series = {}  # Dict of series eventually merged to dataframe
    time_file_dict = {}  # Dict of time:filename (darkfile intentionally excluded)
    _overlap_count = 0   # Tracks if overlapping occurs

    ### If looking for a darkfile, this will find it ###
    darkfile = None  # Initialized here so the metadata loop below also works when extract_dark=False
    baseline = None
    if extract_dark:
        darkfile = extract_darkfile(file_list, return_null=True)

        if darkfile:
            with open(darkfile) as f:
                header = [f.next().strip() for x in xrange(skiphead)]

            wavedata = np.genfromtxt(darkfile, dtype=spec_dtype,
                                     skip_header=skiphead, skip_footer=skipfoot)
            darktime = _get_datetime_specsuite(header)
            baseline = Series(wavedata['intensity'], index=wavedata['wavelength'], name=darkfile)
            file_list.remove(darkfile)

    file_list = [f for f in file_list
                 if os.path.basename(f) != '.gitignore']

    for infile in file_list:

        ### Read in only the header lines, not all the lines of the file;
        ### strips and splits in one go
        with open(infile) as f:
            header = [f.next().strip() for x in xrange(skiphead)]

        # Store wavelength, intensity data in a 2-column structured array for easy item lookup,
        # e.g. wavedata['wavelength']
        wavedata = np.genfromtxt(infile, dtype=spec_dtype,
                                 skip_header=skiphead, skip_footer=skipfoot)

        # Extract time data from header
        datetime = _get_datetime_specsuite(header)
        if datetime in time_file_dict:
            _overlap_count += 1

        # Make sure timepoints aren't overlapping with any others
        if check_for_overlapping_time and _overlap_count:
            raise IOError('Duplicate time %s found in between files %s, %s.'
                          ' To overwrite, set check_for_overlapping_time = False.'
                          % (datetime, infile, time_file_dict[datetime]))

        time_file_dict[datetime] = infile
        dict_of_series[datetime] = Series(wavedata['intensity'], index=wavedata['wavelength'])

    # Make timespec; add filenames, baseline and metadata attributes (note, DatetimeIndex auto-sorts!)
    df = DataFrame(dict_of_series)  # DataFrame because TimeSpectra doesn't handle a dict of series
    timespec = TimeSpectra(df, name=name)
    timespec.specunit = 'nm'
    timespec.filedict = time_file_dict
    timespec.baseline = baseline  # Keep this as the dark Series; recall it is separate from the reference

    # Take metadata from the first file in file_list that isn't the darkfile
    meta_partial = {}
    for infile in file_list:
        if infile != darkfile:
            with open(infile) as f:
                header = [f.next().strip() for x in xrange(skiphead)]
            meta_partial = _get_metadata_fromheader(header)
            break

    meta_general = get_headermetadata_dataframe(timespec, time_file_dict)
    meta_general.update(meta_partial)
    timespec.metadata = meta_general

    if _overlap_count:
        logger.warn('Time duplication found in %s of %s files. Duplicates were '
                    'removed!' % (_overlap_count, len(file_list)))

    return timespec
def _get_datetime_specsuite(specsuiteheader):
    ''' Special, Ocean Optics-specific function to get date information from their customized header.'''
    dateline = specsuiteheader[2].split()
    year, month, day = int(dateline[6]), dateline[2], int(dateline[3])
    month = spec_suite_months[month.lower()]

    hrs, mins, secs = dateline[4].split(':')
    hrs = int(hrs); mins = int(mins); secs = int(secs)
    return datetime(year, month, day, hrs, mins, secs)
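
# Worked, self-contained example of the header parse above.  The sample line is
# hypothetical but follows the SpectraSuite token layout the parser expects (month at
# index 2, day at index 3, H:M:S at index 4, year at index 6 of the split line):
def _example_specsuite_dateline():
    from datetime import datetime as _datetime  # local import to avoid shadowing pandas' datetime
    months = {'jan': 1, 'feb': 2, 'mar': 3, 'apr': 4, 'may': 5, 'jun': 6,
              'jul': 7, 'aug': 8, 'sep': 9, 'oct': 10, 'nov': 11, 'dec': 12}
    dateline = "Date: Fri Jan 27 10:02:13 EST 2012".split()
    year, month, day = int(dateline[6]), months[dateline[2].lower()], int(dateline[3])
    hrs, mins, secs = [int(p) for p in dateline[4].split(':')]
    return _datetime(year, month, day, hrs, mins, secs)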
def _get_metadata_fromheader(specsuiteheader):
    ''' Populates metadata attributes from the spectrasuite datafile header.'''
    sh = specsuiteheader  # for ease in calling
    return {'dark_spec_pres': sh[4].split()[3], 'spectrometer': sh[7].split()[1],
            'int_unit': sh[8].split()[2], 'int_time': int(sh[8].split()[3]),
            'spec_avg': int(sh[9].split()[2]), 'boxcar': int(sh[10].split()[2]),
            'ref_spec_pres': sh[5].split()[3], 'electric_dark_correct': sh[11].split()[4],
            'strobe_lamp': sh[12].split()[2], 'detector_nonlin_correct': sh[13].split()[4],
            'stray_light_correct': sh[14].split()[4], 'pix_in_spec': int(sh[15].split()[6])}
######### Get dataframe from timefile / datafile #####
# Authors Zhaowen Liu/Adam Hughes, 10/15/12
def from_timefile_datafile(datafile, timefile, extract_dark=True, name=''):
    ''' Converts old-style spectral data from the GWU physics lab into
    a dataframe with a timestamp column index and wavelength row indices.

    Creates the DataFrame from a dictionary of Series, keyed by datetime.
    **name becomes the name of the dataframe.'''

    tlines = open(timefile, 'r').readlines()
    tlines = [line.strip().split() for line in tlines]
    tlines.pop(0)

    time_file_dict = dict((_get_datetime_timefile(tline), tline[0]) for tline in tlines)

    ### Read in data matrix, separating the first column (wavelengths) from the rest of the data
    wavedata = np.genfromtxt(datafile, dtype='float', skip_header=1)
    data, wavelengths = wavedata[:, 1::], wavedata[:, 0]  # Separate wavelength column

    ### Sort datetimes here before assigning/removing dark spec etc.
    sorted_tfd = sorted(time_file_dict.items())
    sorted_times, sorted_files = zip(*sorted_tfd)

    ### Seek darkfile.  If found, take it out of the dataframe. ###
    darkfile = None  # Initialized here so the metadata loop below also works when extract_dark=False
    baseline = None
    if extract_dark:
        darkfile = extract_darkfile(sorted_files, return_null=True)

        if darkfile:
            #### Find baseline by reverse lookup (lookup by value) and get index position
            darkindex = sorted_files.index(darkfile)
            darktime = sorted_times[darkindex]
            baseline = Series(data[:, darkindex], index=wavelengths, name=darkfile)

            del time_file_dict[darktime]          # Intentionally remove
            sorted_times = list(sorted_times)     # Need to do this in two steps
            sorted_times.remove(darktime)
            data = np.delete(data, darkindex, 1)  # Delete dark column from numpy data

    dataframe = TimeSpectra(data, columns=sorted_times, index=wavelengths)

    ### Add field attributes to dataframe
    dataframe.baseline = baseline
    dataframe.filedict = time_file_dict
    if name:
        dataframe.name = name

    ### Get header metadata from the first line in timefile that isn't the darkfile.
    ### Only checks one line; does not check for consistency.
    meta_partial = {}
    for line in tlines:
        if line[0] == darkfile:
            pass
        else:
            # Pass the whole split line; _get_headermetadata_timefile() reads tokens 5+,
            # so passing only line[0] (the filename) would be a bug.
            meta_partial = _get_headermetadata_timefile(line)
            break

    ### Extract remaining metadata (file/time info) and return ###
    meta_general = get_headermetadata_dataframe(dataframe, time_file_dict)
    meta_general.update(meta_partial)
    dataframe.metadata = meta_general
    dataframe.specunit = 'nm'  # This is autodetected in plots

    ### Sort dataframe by ascending time (could also sort spectral data) ###
    dataframe.sort(axis=1, inplace=True)  # axis=1 -> columns
    return dataframe
def from_gwu_chem_UVVIS(filelist, sortnames=False, shortname=True, cut_extension=False, name=''):
''' Format for comma delimited two column data from GWU chemistry's UVVis. These have no useful metadata
or dark data and so it is important that users either pass in a correctly sorted filelist. Once the
dataframe is created, on can do df=df.reindex(columns=[correct order]).
It uses read_csv() to and creates a list of dataframes. Afterwards, concat() merges these.
Kwds:
sortnames- Will attempt to autosort the filelist. Otherwise, order of files passed in is
directly used as columns.
shortname- If false, full file path is used as the column name. If true, only the filename is used.
cut_extension- If using the shortname, this will determine if the file extension is saved or cut from the data.'''
if shortname:
fget=lambda x:get_shortname(x, cut_extension=cut_extension)
else:
fget=lambda x: x
### Either full names or short names of filelist
working_names=[fget(afile) for afile in filelist]
dflist=[read_csv(afile, sep=',', header=None, index_col=0, skiprows=2, na_values=' ', #Used to be ' \r', or is this from IR?
names=[fget(afile)]) for afile in filelist]
    ### THIS IS BUSTED, PUTTING NANS EVERYWHERE EXCEPT ONE FILE, but dflist itself was fine.
dataframe=concat(dflist, axis=1)
### concat tries to sort these, so this will preserve the sort order
if sortnames:
dataframe=dataframe.reindex(columns=sorted(working_names))
dataframe=TimeSpectra(dataframe) #this is fine
dataframe.metadata=None
dataframe.filedict=None
dataframe.baseline=None
    dataframe.specunit='nm' # This is autodetected in plots
if name:
dataframe.name=name
return dataframe
def _get_datetime_timefile(splitline):
''' Takes in SPLIT line of timefile (aka a list), returns datetime object '''
hrs, mins, secs=splitline[4].split(':')
hrs=int(hrs) ; mins=int(mins) ; secs=int(secs)
day, month, year=int(splitline[3]), spec_suite_months[splitline[2].lower()], int(splitline[1])
return datetime(year, month, day, hrs, mins, secs)
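A standalone sketch of the timestamp parsing done by _get_datetime_timefile(), assuming spec_suite_months maps lowercase month abbreviations to month numbers (the month map and the sample line below are made up for illustration):

```python
from datetime import datetime

# Hypothetical month map standing in for the module-level spec_suite_months
spec_suite_months_demo = {'jan': 1, 'feb': 2, 'mar': 3}

# Made-up split timefile line: [filename, year, month, day, 'HH:MM:SS', ...]
splitline = ['spec_0001.txt', '2017', 'mar', '14', '09:26:53']
hrs, mins, secs = (int(x) for x in splitline[4].split(':'))
stamp = datetime(int(splitline[1]), spec_suite_months_demo[splitline[2].lower()],
                 int(splitline[3]), hrs, mins, secs)
```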
def _get_headermetadata_timefile(splitline):
''' Return the reduced, available components of headermetadata from timefile (missing data).'''
### Take items that are present ###
filldic=dict(zip(['int_time', 'int_unit', 'spec_avg', 'boxcar'], splitline[5:]))
### Append null for items that are missing ###
missing=('spectrometer','dark_spec_pres', 'ref_spec_pres', 'electric_dark_correct',
'strobe_lamp', 'detector_nonlin_correct', 'stray_light_correct', 'pix_in_spec')
missingdic=dict((item, None) for item in missing)
### MANUALLY ADDING SPECTROMETER FOR THIS DATATYPE! COULD BE PROBLEMATIC IF I CHANGE SPECTROMETERS
### OR USE THIS PROGRAM TO OPEN OTHER DATAFILES (UNLIKELY)
missingdic['spectrometer']='USB2E7196'
return missingdic
def from_oceanoptics(directory, sample_by=None, sort=True, **from_spec_files_kwds):
''' Wrapper to from_spec_files to both generate a file list from a directory and read files into time spectra.
Parameters:
-----------
directory: directory with raw spectral files.
    sample_by: Sample every X files. (Useful for reducing large datasets prior to reading in)
**from_spec_files_kwds: All kwds passed to from_spec_files().
Notes:
------
    Slicing always starts from file 0 and counts from there. So if you enter sample_by=3,
    expect to get files 0, 3, 6, etc.; with sample_by=10, you get files 0, 10, 20.
'''
files=get_files_in_dir(directory, sort=sort)
if sample_by:
        files=files[0::sample_by] # Slicing already takes care of most errors
return from_spec_files(files, **from_spec_files_kwds)
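The sample_by behaviour described in the Notes can be checked on a plain list (the file names below are placeholders):

```python
# Every sample_by-th element starting from index 0, mirroring files[0::sample_by]
files = ['spec_%02d.txt' % i for i in range(12)]  # placeholder file names
sample_by = 3
sampled = files[0::sample_by]
```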
if __name__=='__main__':
# Assumes datapath is ../data/gwuspecdata...
### Test of raw fiber spectral data
ts=from_oceanoptics('../data/gwuspecdata/fiber1', sample_by=10, sort=True, name='oceanopticstest')
files=get_files_in_dir('../data/gwuspecdata/fiber1')
ts=from_spec_files(files, name='Test Spectra')
#print ts, type(ts)
### Test of timefile/datafile
#dfile='../data/gwuspecdata/ex_tf_df/11_data.txt'
#tfile='../data/gwuspecdata/ex_tf_df/11_tfile.txt'
#ts2=from_timefile_datafile(dfile, tfile, extract_dark=True, name='Test Spec 2')
#print ts2, type(ts2)
### Test of chem UVVIS data
files=get_files_in_dir('../data/gwuspecdata/IRData')
ts3=from_gwu_chem_UVVIS(files, name='IR Spec')
print ts3, type(ts3), ts3[ts3.columns[0]]
### ADD CHEM IR DATA (Did it in a notebook at one point actually)
# --- setup.py | repo: Feeeenng/flask-3auth | license: BSD-3-Clause | 13 stars ---
# -*- coding: UTF-8 -*-
'''
@author: 'FenG_Vnc'
@date: 2017-08-08 17:06
@file: setup.py
'''
from __future__ import unicode_literals
from setuptools import setup,find_packages
setup(
name='Flask-thridy',
version='0.0.3',
    description='Simple third-party login support for your web app',
license='BSD',
author='Feeeenng',
author_email='z332007851@163.com',
url='https://github.com/Feeeenng/flask-thridy',
platforms = 'any',
packages = find_packages(),
zip_safe=False,
install_requires = [
'Flask>=0.8',
'requests>=2.18.3',
],
classifiers = [
'Environment :: Web Environment',
'Framework :: Flask',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2.7',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules',
]
)

# --- config/users/views.py | repo: sitepoint-editors/Django-photo-app | licenses: MIT, Unlicense ---
from django.views.generic import CreateView
from django.contrib.auth import authenticate, login
from django.contrib.auth.views import LoginView
from django.contrib.auth.forms import UserCreationForm
from django.urls import reverse_lazy
class SignUpView(CreateView):
template_name = 'users/signup.html'
form_class = UserCreationForm
success_url = reverse_lazy('photo:list')
def form_valid(self, form):
to_return = super().form_valid(form)
user = authenticate(
username=form.cleaned_data["username"],
password=form.cleaned_data["password1"],
)
login(self.request, user)
return to_return
class CustomLoginView(LoginView):
    template_name = 'users/login.html'

# --- NeoML/Python/neoml/Dnn/RepeatSequence.py | repo: ndrewl/neoml | licenses: ECL-2.0, Apache-2.0 ---
""" Copyright (c) 2017-2020 ABBYY Production LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--------------------------------------------------------------------------------------------------------------*/
"""
import neoml.PythonWrapper as PythonWrapper
from .Dnn import Layer
from neoml.Utils import check_input_layers
class RepeatSequence(Layer):
"""The layer that repeats the input sequences several times.
Layer inputs
---------
#1: a sequence of objects.
The dimensions:
- BatchLength is the sequence length
- BatchWidth * ListSize is the number of sequences in the set
- Height * Width * Depth * Channels is the size of each object
Layer outputs
---------
#1: the same sequence repeated repeat_count times.
The dimensions:
- BatchLength is repeat_count times larger than the input's BatchLength
- all other dimensions are the same as for the input
Parameters
----------
input_layer : (object, int)
The input layer and the number of its output. If no number
is specified, the first output will be connected.
repeat_count : int, > 0
The number of repetitions.
name : str, default=None
The layer name.
"""
def __init__(self, input_layer, repeat_count, name=None):
if type(input_layer) is PythonWrapper.RepeatSequence:
super().__init__(input_layer)
return
layers, outputs = check_input_layers(input_layer, 1)
internal = PythonWrapper.RepeatSequence(str(name), layers[0], int(outputs[0]), int(repeat_count))
super().__init__(internal)
@property
def repeat_count(self):
"""Gets the number of repetitions.
"""
return self._internal.get_repeat_count()
@repeat_count.setter
def repeat_count(self, repeat_count):
"""Sets the number of repetitions.
"""
self._internal.set_repeat_count(int(repeat_count))
# --- Parsers/GFF.py | repo: mahajrod/MAVR | license: MIT | 10 stars ---
#!/usr/bin/env python
from RouToolPa.Parsers.Abstract import Record, Metadata, Collection
class MetadataGFF(Metadata):
# TODO: rewrite for general case
pass
class RecordGFF(Record):
def __init__(self, chrom, source, type, start, end, score, strand, quality, attributes):
self.chrom = chrom # str
self.source = source # str
self.type = type # str
self.start = start # int
self.end = end # int
self.score = score # float or '.'
self.strand = strand # str : '+' or '-' or '.'
self.quality = quality # float or '.'
self.attributes = attributes # generally dict, but maybe a str if unparsed
def __str__(self):
if isinstance(self.attributes, dict):
attribute_string = ";".join(["%s=%s" % (key, self.attributes[key]) for key in self.attributes])
else:
attribute_string = self.attributes
return "%s\t%s\t%s\t%i\t%i\t%s\t%s\t%s\t%s" % (self.chrom, self.source, self.type, self.start, self.end,
str(self.score), self.strand, str(self.quality),
attribute_string)
class CollectionGFF(Collection):
def read(self, input_file):
self.metadata = MetadataGFF()
self.records = []
with open(input_file, "r") as in_fd:
for line in in_fd:
if line[0:2] == "##":
self.metadata.metadata.append(line[2:].strip())
continue
elif line[0] == "#":
self.metadata.metadata.append(line[1:].strip())
continue
tmp = line.strip().split("\t")
#print(tmp)
tmp[3] = int(tmp[3])
tmp[4] = int(tmp[4])
if tmp[5] != '.':
tmp[5] = float(tmp[5])
if tmp[7] != '.':
tmp[7] = float(tmp[7])
tmp[8] = tmp[8].split(";")
tmp[8] = dict(map(lambda s: s.split("="), tmp[8]))
self.records.append(RecordGFF(*tmp))
@staticmethod
def gff_simple_generator(input_file):
with open(input_file, "r") as in_fd:
for line in in_fd:
if line[0] == "#":
continue
tmp = line.strip().split("\t")
#print(tmp)
tmp[3] = int(tmp[3])
tmp[4] = int(tmp[4])
if tmp[5] != '.':
tmp[5] = float(tmp[5])
if tmp[7] != '.':
tmp[7] = float(tmp[7])
tmp[8] = tmp[8].split(";")
tmp[8] = dict(map(lambda s: s.split("="), tmp[8]))
yield tmp
@staticmethod
def gff_simple_record_generator(input_file):
with open(input_file, "r") as in_fd:
for line in in_fd:
if line[0] == "#":
continue
tmp = line.strip().split("\t")
#print(tmp)
tmp[3] = int(tmp[3])
tmp[4] = int(tmp[4])
if tmp[5] != '.':
tmp[5] = float(tmp[5])
if tmp[7] != '.':
tmp[7] = float(tmp[7])
tmp[8] = tmp[8].split(";")
tmp[8] = dict(map(lambda s: s.split("="), tmp[8]))
yield RecordGFF(*tmp)
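The attribute-column parsing shared by all three readers above boils down to two splits; here is the same expression on a made-up GFF attribute string:

```python
# Split "key=value" pairs on ';', then on '=', exactly as the readers above do
attributes = 'ID=gene0001;Name=abcB;biotype=protein_coding'  # made-up example
parsed = dict(map(lambda s: s.split('='), attributes.split(';')))
```

Note that a plain split('=') breaks if a value itself contains '='; s.split('=', 1) would be a safer variant.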
# --- ddu_dirty_mnist/dirty_mnist.py | repo: BlackHC/ddu_dirty_mnist | license: Apache-2.0 | 3 stars ---
# AUTOGENERATED! DO NOT EDIT! File to edit: 01_dataloader.ipynb (unless otherwise specified).
__all__ = ['MNIST_NORMALIZATION', 'AmbiguousMNIST', 'FastMNIST', 'DirtyMNIST']
# Cell
import os
from typing import IO, Any, Callable, Dict, List, Optional, Tuple, Union
from urllib.error import URLError
import torch
from torchvision.datasets.mnist import MNIST, VisionDataset
from torchvision.datasets.utils import download_url, extract_archive, verify_str_arg
from torchvision.transforms import Compose, Normalize, ToTensor
# Cell
MNIST_NORMALIZATION = Normalize((0.1307,), (0.3081,))
# Cell
# based on torchvision.datasets.mnist.py (https://github.com/pytorch/vision/blob/37eb37a836fbc2c26197dfaf76d2a3f4f39f15df/torchvision/datasets/mnist.py)
class AmbiguousMNIST(VisionDataset):
"""
Ambiguous-MNIST Dataset
Please cite:
@article{mukhoti2021deterministic,
title={Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty},
author={Mukhoti, Jishnu and Kirsch, Andreas and van Amersfoort, Joost and Torr, Philip HS and Gal, Yarin},
journal={arXiv preprint arXiv:2102.11582},
year={2021}
}
Args:
root (string): Root directory of dataset where ``MNIST/processed/training.pt``
and ``MNIST/processed/test.pt`` exist.
train (bool, optional): If True, creates dataset from ``training.pt``,
otherwise from ``test.pt``.
download (bool, optional): If true, downloads the dataset from the internet and
puts it in root directory. If dataset is already downloaded, it is not
downloaded again.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
normalize (bool, optional): Normalize the samples.
device: Device to use (pass `num_workers=0, pin_memory=False` to the DataLoader for max throughput)
"""
mirrors = ["http://github.com/BlackHC/ddu_dirty_mnist/releases/download/data-v1.0.0/"]
resources = dict(
data=("amnist_samples.pt", "4f7865093b1d28e34019847fab917722"),
targets=("amnist_labels.pt", "3bfc055a9f91a76d8d493e8b898c3c95"),
)
def __init__(
self,
root: str,
*,
train: bool = True,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
download: bool = False,
normalize: bool = True,
noise_stddev=0.05,
device=None,
):
super().__init__(root, transform=transform, target_transform=target_transform)
self.train = train # training set or test set
if download:
self.download()
self.data = torch.load(self.resource_path("data"), map_location=device)
if normalize:
self.data = self.data.sub_(0.1307).div_(0.3081)
self.targets = torch.load(self.resource_path("targets"), map_location=device)
# Each sample has `num_multi_labels` many labels.
num_multi_labels = self.targets.shape[1]
# Flatten the multi-label dataset into a single-label dataset with samples repeated x `num_multi_labels` many times
self.data = self.data.expand(-1, num_multi_labels, 28, 28).reshape(-1, 1, 28, 28)
self.targets = self.targets.reshape(-1)
data_range = slice(None, 60000) if self.train else slice(60000, None)
self.data = self.data[data_range]
if noise_stddev > 0.0:
self.data += torch.randn_like(self.data) * noise_stddev
self.targets = self.targets[data_range]
def __getitem__(self, index: int) -> Tuple[torch.Tensor, int]:
"""
Args:
index (int): Index
Returns:
tuple: (image, target) where target is index of the target class.
"""
img, target = self.data[index], self.targets[index]
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
return img, target
def __len__(self) -> int:
return len(self.data)
@property
def data_folder(self) -> str:
return os.path.join(self.root, self.__class__.__name__)
def resource_path(self, name):
return os.path.join(self.data_folder, self.resources[name][0])
def _check_exists(self) -> bool:
return all(os.path.exists(self.resource_path(name)) for name in self.resources)
def download(self) -> None:
"""Download the data if it doesn't exist in data_folder already."""
if self._check_exists():
return
os.makedirs(self.data_folder, exist_ok=True)
# download files
for filename, md5 in self.resources.values():
for mirror in self.mirrors:
url = "{}{}".format(mirror, filename)
try:
print("Downloading {}".format(url))
download_url(url, root=self.data_folder, filename=filename, md5=md5)
except URLError as error:
print("Failed to download (trying next):\n{}".format(error))
continue
except:
raise
finally:
print()
break
else:
raise RuntimeError("Error downloading {}".format(filename))
print("Done!")
# Cell
class FastMNIST(MNIST):
"""
FastMNIST, based on https://tinyurl.com/pytorch-fast-mnist. It's like MNIST (<http://yann.lecun.com/exdb/mnist/>) but faster.
Args:
root (string): Root directory of dataset where ``MNIST/processed/training.pt``
and ``MNIST/processed/test.pt`` exist.
train (bool, optional): If True, creates dataset from ``training.pt``,
otherwise from ``test.pt``.
download (bool, optional): If true, downloads the dataset from the internet and
puts it in root directory. If dataset is already downloaded, it is not
downloaded again.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
normalize (bool, optional): Normalize the samples.
device: Device to use (pass `num_workers=0, pin_memory=False` to the DataLoader for
max throughput).
"""
def __init__(self, *args, normalize, noise_stddev=0.05, device, **kwargs):
super().__init__(*args, **kwargs)
# Scale data to [0,1]
self.data = self.data.unsqueeze(1).float().div(255)
# Put both data and targets on GPU in advance
self.data, self.targets = self.data.to(device), self.targets.to(device)
if normalize:
self.data = self.data.sub_(0.1307).div_(0.3081)
if noise_stddev > 0.0:
self.data += torch.randn_like(self.data) * noise_stddev
def __getitem__(self, index: int) -> Tuple[torch.Tensor, int]:
"""
Args:
index (int): Index
Returns:
tuple: (image, target) where target is index of the target class.
"""
img, target = self.data[index], self.targets[index]
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
return img, target
# Cell
def DirtyMNIST(
root: str,
*,
train: bool = True,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
download: bool = False,
normalize=True,
noise_stddev=0.05,
device=None,
):
"""
DirtyMNIST
Please cite:
@article{mukhoti2021deterministic,
title={Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty},
author={Mukhoti, Jishnu and Kirsch, Andreas and van Amersfoort, Joost and Torr, Philip HS and Gal, Yarin},
journal={arXiv preprint arXiv:2102.11582},
year={2021}
}
Args:
root (string): Root directory of dataset where ``MNIST/processed/training.pt``
and ``MNIST/processed/test.pt`` exist.
train (bool, optional): If True, creates dataset from ``training.pt``,
otherwise from ``test.pt``.
download (bool, optional): If true, downloads the dataset from the internet and
puts it in root directory. If dataset is already downloaded, it is not
downloaded again.
transform (callable, optional): A function/transform that takes in an PIL image
and returns a transformed version. E.g, ``transforms.RandomCrop``
target_transform (callable, optional): A function/transform that takes in the
target and transforms it.
normalize (bool, optional): Normalize the samples.
device: Device to use (pass `num_workers=0, pin_memory=False` to the DataLoader for
max throughput).
"""
mnist_dataset = FastMNIST(
root=root,
train=train,
transform=transform,
target_transform=target_transform,
download=download,
normalize=normalize,
noise_stddev=noise_stddev,
device=device,
)
amnist_dataset = AmbiguousMNIST(
root=root,
train=train,
transform=transform,
target_transform=target_transform,
download=download,
normalize=normalize,
noise_stddev=noise_stddev,
device=device,
)
    return torch.utils.data.ConcatDataset([mnist_dataset, amnist_dataset])

# --- python/qipkg/actions/build.py | repo: vbarbaresi/qibuild | license: BSD-3-Clause ---
# Copyright (c) 2012-2018 SoftBank Robotics. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the COPYING file.
""" Build all the projects of the given pml file
"""
import qibuild.parsers
import qipkg.parsers
def configure_parser(parser):
"""Configure parser for this action"""
qipkg.parsers.pml_parser(parser)
qibuild.parsers.cmake_build_parser(parser, with_build_parser=False)
def do(args):
"""Main entry point"""
pml_builder = qipkg.parsers.get_pml_builder(args)
pml_builder.build()
# --- src/cuda_ai/mean.py | repo: TRex22/dave-A-2048-AI | license: MIT ---
# -*- coding: utf-8 -*-
"""
Created on Sun May 21 15:35:38 2017
@author: Liron
"""
import numpy as np
np.set_printoptions(threshold=np.nan)
data = np.genfromtxt("cuda_times.csv", delimiter=",", usecols=(0,1), max_rows=97, skip_header=96+96+96)
print data[0]
print data[data.shape[0]-1]
mean = np.zeros((16,2))
for i in np.arange(mean.shape[0]):
x = data[6*i + 1:6*(i+1) + 1,1]
mean[i,1] = np.mean(x)
mean[i,0] = data[6*i + 1,0]
print mean
#np.savetxt("cuda_1000threads.txt", mean, delimiter=',') | 21.583333 | 103 | 0.631274 | 97 | 518 | 3.319588 | 0.536082 | 0.024845 | 0.02795 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102975 | 0.156371 | 518 | 24 | 104 | 21.583333 | 0.633867 | 0.146718 | 0 | 0 | 0 | 0 | 0.03937 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090909 | null | null | 0.363636 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
8b31ef4de81790112ce464c5e4351de81b31c2e8 | 2,889 | py | Python | src/MemeEngine/meme_generator.py | svd-007/Meme-Generator | 09b3073bce11eb655d888a7ee789ba79a6f41862 | [
"BSD-3-Clause"
] | 1 | 2021-07-03T07:56:22.000Z | 2021-07-03T07:56:22.000Z | src/MemeEngine/meme_generator.py | svd-007/Meme-Generator | 09b3073bce11eb655d888a7ee789ba79a6f41862 | [
"BSD-3-Clause"
] | null | null | null | src/MemeEngine/meme_generator.py | svd-007/Meme-Generator | 09b3073bce11eb655d888a7ee789ba79a6f41862 | [
"BSD-3-Clause"
] | null | null | null | """Represent a class to generate a meme."""
from PIL import Image, ImageDraw, ImageFont
from os import makedirs
from random import randint
from textwrap import fill
class MemeGenerator:
"""
A class to generate a meme.
The following responsibilities are defined under this class:
- Loading of an image from a disk.
- Transforming the image by resizing to a maximum width of 500px while
maintaining the input aspect ratio.
- Adding a caption to the image with a body and author to a random location
on the image.
"""
def __init__(self, output_dir: str):
"""
Create a new `MemeGenerator`.
Parameters
----------
`output_dir`: str
output directory to save the generated meme.
"""
self.output_dir = output_dir
makedirs(self.output_dir, exist_ok=True)
    def validate_width(self, width: int) -> None:
"""
        Check that the desired width of the image does not exceed 500px.
        Raise `ValueError` if width is greater than 500px.
Parameters
----------
`width`: int
width of the image.
"""
if width > 500:
raise ValueError('Width of the image cannot exceed 500px.')
def make_meme(self, img_path: str, text: str,
author: str, width: int = 500) -> str:
"""
Return path of the saved image of the meme after adding caption to it.
Parameters
----------
`img_path`: str
path of the original image.
`text`: str
quote of the meme.
`author`: str
author of the meme.
`width`: int
desired width of the image, default = 500px.
"""
# Opening the image.
img = Image.open(img_path)
# Try-except block to handle instances where width > 500px.
try:
self.validate_width(width)
except ValueError as val_err:
print(val_err)
else:
# Resizing the image proportionate to the given width.
ratio = width / float(img.size[0])
height = int(ratio * float(img.size[1]))
img = img.resize((width, height), Image.NEAREST)
# Adding the caption to the image.
caption = fill(f'{text} - {author}', width=40)
font = ImageFont.truetype('./_data/font/Candara.ttf', size=20)
draw = ImageDraw.Draw(img)
draw.text(xy=(randint(10, 40),
randint(20, 70)),
text=caption,
font=font,
fill='white')
# Saving the image to the output directory.
output_path = f'{self.output_dir}/{randint(0, 1000)}.png'
try:
img.save(output_path)
except Exception as e:
print(e)
return output_path
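The caption wrapping inside make_meme relies on textwrap.fill; a standalone check with a made-up quote shows the line-length guarantee it provides:

```python
from textwrap import fill

# Wrap a long caption to at most 40 characters per line, as make_meme does
caption = fill('Some long made-up meme caption that certainly will not fit on one line - anonymous',
               width=40)
lines = caption.splitlines()
```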
# --- UVa 11332 - Summing Digits/sample/main.py | repo: tadvi/uva | license: MIT | 1 star ---
import sys
sys.stdin = open('input.txt')
while True:
N = int(input())
if not N:
break
strN = str(N)
while len(strN) > 1:
N = sum(int(i) for i in strN)
strN = str(N)
print strN
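The loop above computes the digital root by repeated digit summing. A closed form, 1 + (N - 1) % 9 for N > 0, gives the same answer and can be cross-checked against the iterative version:

```python
def digital_root_iterative(n):
    # Repeatedly sum decimal digits until a single digit remains,
    # mirroring the while-loop in the solution above
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def digital_root_closed_form(n):
    # Classic congruence trick: the digital root of n > 0 is 1 + (n - 1) % 9
    return 0 if n == 0 else 1 + (n - 1) % 9
```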